Google apologizes for ‘missing the mark’ after Gemini generated racially diverse Nazis

Google says it’s aware of historically inaccurate results for its Gemini AI image generator, following criticism that it depicted historically white groups as people of color.

  • stoy@lemmy.zip · 10 months ago

    Would it be accurate to say that while current AI does have the knowledge, it lacks the reasoning skills needed to apply the knowledge correctly?

    • kromem@lemmy.world · 10 months ago

      No, it can solve word problems that it’s never seen before with fairly intricate reasoning. LLMs can even play chess at a grandmaster level without ever duplicating games from the training set.

      Most of Lemmy has no genuine idea about the domain and hasn’t actually been following the research over the past year, which invalidates the “common knowledge” on the topic you often see regurgitated.

      For example, LLMs build world models from the training data, and can combine skills from the data in ways that haven’t been combined in the training data.

      They do have shortcomings; being unable to identify what they don’t know is a key one.

      But to be fair, apparently most people on Lemmy can’t do that either.

    • FooBarrington@lemmy.world · 10 months ago

      I don’t think that’s generally true, because current AI can solve some reasoning tasks very well. But it’s definitely an area where they are lacking.

      • rambaroo@lemmy.world · 10 months ago

        It isn’t reasoning about anything. A human did the reasoning at some point, and the LLM’s dataset includes that original information. The LLM is simply matching your prompt to that training data. It’s not doing anything else. It’s not thinking about the question you asked it. It’s a glorified keyword search.

        It’s obvious you have no idea how LLMs work at a fundamental level, yet you keep talking about them like you’re an expert.

        • FooBarrington@lemmy.world · 10 months ago

          So if I find a single example of an AI doing a reasoning task that’s not in its training material, would you agree that you’re wrong and that AI does reason?

          • rambaroo@lemmy.world · 10 months ago

            You won’t find one. LLMs are literally incapable of the kind of reasoning you’re talking about. All of their solutions are based on training data, no matter how “original” your problem might seem.

      • stoy@lemmy.zip · 10 months ago

        That’s fair. I have seen AI reason at a low level, but it seems to me that it is lacking higher levels of reasoning and context.

        • FooBarrington@lemmy.world · 10 months ago

          It definitely is lacking for now, but the question is: are these differences of degree, or fundamental differences? I haven’t seen research suggesting the latter so far.