• Naatan@beehaw.org · 2 years ago

    I have a hard time seeing what we’re currently calling “AI” evolve to address issues like this. It’s not real intelligence; it’s just text prediction. It seems fundamentally flawed for use cases where you need 100% certainty that the answers are appropriate.

    This isn’t the AI people think it is. And the only danger it poses is irresponsible use.

    • ConsciousCode@beehaw.org · 2 years ago

      Part of why the NLP community is so excited about it is that text prediction, as an optimization problem, eventually necessitates some form of intelligence in order to reduce the loss. On top of that, the architectures we’re using scale nearly linearly in quality versus size and show no real signs of diminishing returns, meaning you can make them arbitrarily smart just by making them bigger.
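      To make the “text prediction as an optimization problem” point concrete, here is a toy sketch. The four-word vocabulary, the two made-up “models”, and their probabilities are all invented for illustration; the only real claim is that training minimizes exactly this kind of cross-entropy loss, so a model that predicts text better is, by definition, one with lower loss.

```python
import math

# Toy illustration of "text prediction as an optimization problem":
# a model assigns a probability to each possible next token, and training
# minimizes the cross-entropy loss against the token that actually follows.
# All numbers below are made up for illustration.

vocab = ["the", "cat", "sat", "mat"]

def cross_entropy(predicted: dict, actual_token: str) -> float:
    """Loss for a single prediction: -log p(actual next token)."""
    return -math.log(predicted[actual_token])

# Context: "the cat ..." and the true next token is "sat".
# A uniform model is maximally unsure; a better model concentrates
# probability on the true continuation and therefore has lower loss.
weak_model   = {"the": 0.25, "cat": 0.25, "sat": 0.25, "mat": 0.25}
strong_model = {"the": 0.05, "cat": 0.05, "sat": 0.80, "mat": 0.10}

loss_weak = cross_entropy(weak_model, "sat")      # -log(0.25), about 1.386
loss_strong = cross_entropy(strong_model, "sat")  # -log(0.80), about 0.223
```

      Lowering loss and predicting text well are the same objective, which is why “just text prediction” is doing more work than it sounds like.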

      I would encourage you to consider what you mean by “real” intelligence and “just text prediction”, because AI throws a lot of our assumptions out the window. Talk to GPT-4 in a chatbot cognitive architecture for a few hours and you get a sense of just how intelligent it can be (with the right prompting), but the architecture itself is literally incapable of “thinking” (with some wiggle room for inter-layer states) - that is, of internal, stateful, causal processes that drive external behavior. A chatbot CA can roughly approximate thinking via chain-of-thought prompting, but without that it essentially has to guess what its thoughts “would” be if it had them, which is very strange and hard to understand intuitively.

      In case it isn’t clear, what I mean by “cognitive architecture” is the machinery surrounding the language model that lets it interact with the world. A language model in isolation is a causal autoregressive inference engine that will happily autocomplete anything. They are not chatbots, only components in chatbots - that’s just the modality we’re most familiar with because ChatGPT broke ground, but it’s not their only or even most useful form. The LLM is comparable to a human Broca’s area, which will generate an endless stream of language if you let it. It’s the neural circuitry around it which gives rise to coherent thoughts and subjective experience.
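      To illustrate the LLM-versus-cognitive-architecture separation, here is a minimal sketch. `fake_lm` and `ChatbotCA` are invented names standing in for a real model and wrapper; the point is only that the conversation state and formatting live in the surrounding machinery, while the model itself just autocompletes whatever text it is handed.

```python
# A raw language model is just "given text, return more text"; the chatbot
# behavior comes entirely from the scaffolding around it. `fake_lm` is a
# stand-in invented for this sketch, not a real model.

def fake_lm(prompt: str) -> str:
    """Stand-in for an autoregressive LM: it happily autocompletes anything."""
    return "...some plausible continuation of: " + prompt[-20:]

class ChatbotCA:
    """Minimal 'cognitive architecture': state + formatting wrapped around an LM."""

    def __init__(self, lm):
        self.lm = lm
        self.history = []  # the architecture holds the state, not the model

    def say(self, user_msg: str) -> str:
        # The wrapper decides what the model sees and what counts as a "turn";
        # the model itself has no memory of previous calls.
        self.history.append(f"User: {user_msg}")
        prompt = "\n".join(self.history) + "\nAssistant:"
        reply = self.lm(prompt)
        self.history.append(f"Assistant: {reply}")
        return reply
```

      Swap `fake_lm` for a real model and nothing about the wrapper changes, which is why the same LLM can appear as a chatbot, an autocomplete engine, or an agent depending on the architecture around it.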

      To be able to discuss these concepts accurately, we need to change the language we use. Words like “intelligence”, “consciousness”, “sentience”, and “sapience” have always been incredibly vague, bordering on undefined. They can’t be adequately applied to AI until they’ve been operationalized, such that you could objectively falsify whether or not they apply to a given system.

      • cellador@feddit.de · 2 years ago

        Very nicely put. If I observe any real person replying in text, what I’m seeing is essentially just them thinking about what word to put next and entering it on the keyboard. It is an extremely complex task. I’m not saying that state-of-the-art language models are mulling the same thoughts in their “minds” like we are, but they are solving the same problem. And our current paradigm for training these models shows no sign of slowing progress, so I understand the sentiment that calling these models just “text prediction machines” is too simplistic.

    • ᴇᴍᴘᴇʀᴏʀ 帝@feddit.uk · 2 years ago

      “This isn’t the AI people think it is.”

      It’s definitely not as good as people think it is. The best description I’ve heard is that AI outputs “hallucinations”: its output only needs to look plausible, it doesn’t have to be right.

      Which is why using it to detect cheating is a concern - you’d hope it would only be used as a first pass, to be reviewed by a human later, but some people are going to think that AI is infallible and leave it there.

    • jmp242@sopuli.xyz · 2 years ago

      Yeah, this AI is good for writing unimportant stuff like “talking to” famous dead people, or D&D descriptions on the fly. It can be useful for basic coding if you know how to fix its mistakes. Oh, and keeping telemarketers busy à la Jolly Roger. And I guess spam blog posts.

      It’s still best used as a toy; when I tried to use it to augment my work, it was usually worse than a good search engine at answering questions.

      The free image generators are pretty impressive, again for things like making flavor art for D&D on the fly, or just if you’re not an artist. Some of the fine-tuned ones can make decent standalone art or fake photos, but so far I don’t think you can create a character you like and get it to make a graphic novel with it.

      So - watch out people who make RPG modules I guess.

      • Naatan@beehaw.org · 2 years ago

        Yeah, gaming and the arts are where I can see this AI shine. Aside from mundane artistry, I don’t think anyone needs to be worried about their job. This AI isn’t going to steal your job because, once again, it has no real intelligence. It requires an intelligent person to steer it and process its results. It’ll only cost you your job if you don’t evolve to use this new tool.

  • ArtZuron@beehaw.org · 2 years ago

    I’ve also seen so many horror stories of profs and teachers who can’t adapt burning any and all goodwill with their students by failing people on the whims of these stupid checkers.