• Floppy@beehaw.org
    2 years ago

    Thing is, this isn’t AI causing the problem. It’s humans using it in incredibly dumb, irresponsible ways. Once again, it’ll be us who do ourselves in. We really need to mature as a species before we can handle this stuff.

    • MagicShel@programming.dev
      2 years ago

      I mean I won’t disagree with you but I think a more fundamental issue is that we are so easy to lie to. I’m not sure it matters whether the liar is an AI, a politician, a corporation, or a journalist. Five years ago it was a bunch of people in office buildings posting lies on social media. Now it will be AI.

      In a way, AI could make lie detection easier by parsing posting history for contradictions and fabrications in a way humans never could on their own. But whether it would actually be used for that purpose is another question. I think AI will be very useful for processing and summarizing vast quantities of information in ways other than statistical analysis.

      • riskable@kbin.social
        2 years ago

        AITruthBot will just be downvoted into oblivion on half of social media. They’ll call it a “liberal propaganda bot.”

        • MagicShel@programming.dev
          2 years ago

          There is a [slight] difference between people pushing propaganda and those taken in by it. Their actions are similar, but if the latter can be convinced to actually do their own research instead of being hand-fed someone else’s “research,” there is hope of reaching some of them.

          The real trick is ensuring they aren’t being assisted by a right wing truth bot, which the enemies of truth are doubtless working tirelessly on.

          • Drusas@kbin.social
            2 years ago

            It may be pessimistic, but I don’t think we’re going to get very far in trying to convince people who don’t believe in fact checking to do their own actual research.

      • Leeks@kbin.social
        2 years ago

        AI is only as good as the data it is trained on. While there are absolute truths, like most scientific constants, there are also relative truths, like “the earth is round” (technically it’s an irregularly shaped ellipsoid, not “round”). But the most dangerous “truths” are things like the Mandela effect, which would likely enter the AI’s training data through human error.

        So while an AI bot would be powerful, depending on how tricky it is to create training data, it could end up being very wrong.

        • MagicShel@programming.dev
          2 years ago

          I didn’t mean to imply the AI would distinguish truth from lies; I meant it could analyze a large body of text to extract the messaging for the user to fact-check. Good propaganda has a way of leading the audience along a particular thought path so that the desired conclusion is reached organically. By identifying “conclusions” reached through leading/misleading statements, AI could help people see what is going on and think more critically about the subject. It can’t replace the critical thinking step, but it can provide perspective.

    • shackled@lemm.ee
      2 years ago

      Completely agree. For every tool we have created to accomplish great things, we have without fail also used it for dumb things at best and completely evil things at worst.

    • CarbonIceDragon@pawb.social
      2 years ago

      What exactly does it mean to “mature as a species,” though? Human psychology (the way human minds fundamentally work) doesn’t change on human timescales, not currently anyway. It’s not like we can just wait a few years or decades and the various tricks people have found to convince others of falsehoods will stop working. Barring evolution (which takes so long as to be essentially irrelevant here) or some genetic or other modification of humans (technology we don’t have ready yet, and which opens even bigger cans of worms than the kind of AI we currently have), nothing about us as a species changes except culture and material circumstance.