• FiniteBanjo@feddit.online
    link
    fedilink
    English
    arrow-up
    49
    ·
    2 days ago

    I mean that’s literally a line from IBM’s 1979 training manual:

    A computer can never be held accountable

    Therefore a computer must never make a management decision

    • hikaru755@lemmy.world
      link
      fedilink
      English
      arrow-up
      17
      arrow-down
      1
      ·
      2 days ago

      Yeah I’m pretty sure the OP is a reference to that, not an entirely original thought

    • MoffKalast@lemmy.world
      link
      fedilink
      English
      arrow-up
      12
      ·
      2 days ago

      Some MBA reading this was probably like, a computer can never be held accountable? The perfect manager!

    • grandma@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      4
      ·
      2 days ago

      Funny because management is also never held accountable as long as their decisions makes the line go up next quarter.

    • Oisteink@lemmy.world
      link
      fedilink
      English
      arrow-up
      1
      arrow-down
      2
      ·
      2 days ago

      And I saw that photo both here and on reddit yesterday.

      Due in OP’s post is a poser and not a very good one

      • nymnympseudonym@piefed.social
        link
        fedilink
        English
        arrow-up
        5
        ·
        1 day ago

        Dude … OP is one of the people that is significantly responsible for the Perl programming language and its powerful modules. Also crazy well-read

        • Oisteink@lemmy.world
          link
          fedilink
          English
          arrow-up
          2
          arrow-down
          3
          ·
          1 day ago

          Yeah - he could be the king of USA and I’d still say that post is a lame effort to sound smart.

          • TheJesusaurus@piefed.ca
            link
            fedilink
            English
            arrow-up
            3
            ·
            1 day ago

            You think someone repackaged annold quote to sound smart? Or do you think it’s more likely to point out the insanity that’s happening daily

  • pulsewidth@lemmy.world
    link
    fedilink
    English
    arrow-up
    21
    ·
    2 days ago

    I agree with the underlying premise that AI should not be given the reigns to anything of importance.

    I disagree that they can’t find out.

    The Amazon servers in the UAE and Bahrain found out just recently.

  • Programmer Belch@lemmy.dbzer0.com
    link
    fedilink
    English
    arrow-up
    10
    ·
    2 days ago

    Why isn’t the blame thrown onto the AI company and their lack of guardrails to the program? Shouldn’t they face backlash and lawsuits regardless of what the terms of service specify?

    • expr@piefed.social
      link
      fedilink
      English
      arrow-up
      6
      ·
      1 day ago

      It’s not possible to add guardrails due to how the technology works.

      The fact of the matter is that it should not be used for what it’s being used for at all.

      • wizardbeard@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        3
        ·
        1 day ago

        Whenever system prompts get leaked, it’s always depressingly hilarious how much of it is “Hello Mr. AI. You will not do any bad things, and will only do good things.”

        The “guardrails” are just the same damn way end-users prompt them, but inserted behind the scenes before every “user prompt”.

      • Programmer Belch@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        1
        ·
        1 day ago

        Guardrails are considering the AI another user with low privilege. The amount of breaches happening are because the company has low security and adds AI (high security risk) without separating it from critical data.

        • expr@piefed.social
          link
          fedilink
          English
          arrow-up
          2
          ·
          11 hours ago

          I mean yeah, I agree that’s unbelievably stupid. But when people talk about guardrails generally, they are talking about controlling the output of the LLM, which is what I was saying is not possible to do.

          • Programmer Belch@lemmy.dbzer0.com
            link
            fedilink
            English
            arrow-up
            1
            ·
            6 hours ago

            That’s also true but considering that option is unavailable, there are multiple ways to protect against AI hallucinations.

            This was the future AI ethics people were warning:

            Picture a robot you tell to make an apple pie.

            To get to the apples a human is blocking the path.

            The robot just kills the human by running at full speed through them.

            Considering the robot is that dumb to try and go through the human, you can make the robot smaller or lighter so that bumping someone is not harmful.

            None of these options is considered when talking about AI, line go up and other buzzwords I guess.