• phoneymouse@lemmy.world

    Can’t figure out how to feed and house everyone, but we have almost perfected killer robots. Cool.

      • o2inhaler@lemmy.ca

        I would argue happiness is profitable, but it would have to be shared amongst the people. Killer robots are profitable for a concentrated group of people.

        • Meowing Thing@lemmy.world

          What if we gave everyone their own killer robot and then everyone could just fight each other for what they wanted?

            • zalgotext@sh.itjust.works

              No, the Republican plan would be to sell killer robots at a vastly inflated price to guarantee that none but the rich can own them, and then blame people for “being lazy” when they can’t afford their own killer robot.

              • TopRamenBinLaden@sh.itjust.works

                Also, they would say that the second amendment very obviously covers killer robots. The founding fathers definitely foresaw the AI revolution, and wanted to give every man and woman the right to bear killer robots.

              • winterayars@sh.itjust.works

                They’d say they’re gonna pass a law to give every male, property-owning citizen a killer robot, but first they have to pass a law saying it’s legal to own killer robots. They pass that law, then all talk about the other law is dropped forever. No one ever follows up or asks what happened to it. Meanwhile, the rich buy millions and millions of killer robots.

    • cosmicrookie@lemmy.world

      Especially one that is made to kill everybody else except their own. Let it replace the police. I’m sure the quality control would be a tad stricter then.

  • pelicans_plight@lemmy.world

    Great, so I guess the future of terrorism will be fueled by people learning programming and figuring out how to make EMPs so they can send the murder robots back to where they came from. At this point, one of the biggest security threats to the U.S., and for that matter the entire world, is the extremely low I.Q. of everyone who is supposed to be protecting this world. But I think they do this all on purpose; I mean, the day the Pentagon created ISIS was probably their proudest day.

    • Snapz@lemmy.world

      The real problem (and the thing that will destroy society) is boomer pride. I’ve said this for a long time: they’re in power now, and they are terrified to admit that they don’t understand technology.

      So they’ll make the wrong decisions, act confident, and the future will pay the tab for their cowardice, driven solely by pride/fear.

      • primal_buddhist@lemmy.world

        Boomers have been in power for a long, long time, and the technology we are debating is a result of their investment and prioritisation. So I’m not sure they are very afraid of it.

        • Snapz@lemmy.world

          I didn’t say they were afraid of the technology; I said they were afraid to admit that they don’t understand it enough to legislate it. Their hubris in trying to present a confident facade in response to something they can’t comprehend is what will end us.

    • zaphod@feddit.de

      Great, so I guess the future of terrorism will be fueled by people learning programming and figuring out how to make EMPs so they can send the murder robots back to where they came from.

      Eh, they could’ve done that without AI for like two decades now. I suppose the drones would crash-land in a rather destructive way due to the EMP, which might also fry some of the electronics, rendering the drone useless without access to replacement components.

      • pelicans_plight@lemmy.world

        I hope so, but I was born with an extremely good sense of trajectory, and I also know how to use nets. So let’s just hope I’m superhuman and the only one who possesses these powers.

        Edit: I’m being a little extreme here because I heavily disagree with the way everything in this world is being run, so I’m giving a little pushback on this subject that I’m wholly against. I do have a lot of manufacturing experience, and I would hope any killer robots governments produce would be extremely shielded against EMPs, but that is not my field, and I have no idea if shielding a remote-controlled robot from EMPs is even possible.

        • AngryCommieKender@lemmy.world

          The movie Small Soldiers is totally fiction, but the one part of that movie that made “sense” was that because the toy robots were so small, they had basically no shielding whatsoever, so the protagonist just had to haul a large wrench/spanner up a utility pole and connect the positive and negative terminals on the pole transformer. It blew up, of course, and blew the protagonist off the pole, IIRC. That also caused a small (2-3 city-block diameter) EMP that shut down the malfunctioning soldier robots.

          I realize this is a total fantasy/fictional story, but it did highlight the major flaw in these drones. You can either have them small, lightweight, and inexpensive, or you can put the shielding on. In almost all cases when humans are involved, we don’t spend the extra $$$ and mass to properly shield ourselves from the sun, much less other sources of radiation. This leads me to believe that we wouldn’t bother shielding these low-cost drones.

    • Madison420@lemmy.world

      EMPs are not hard to make; they won’t, however, work on hardened systems like the ones the US military uses.

    • Flying Squid@lemmy.world

      Is there a way to create an EMP without a nuclear weapon? Because if that’s what they have to develop, we have bigger things to worry about.

      • TopRamenBinLaden@sh.itjust.works

        Your comment got me curious about what would be the easiest way to make a homemade EMP. Business Insider, of all things, has us covered, even if that may be antithetical to Business Insider’s pro-capitalist agenda.

      • Madison420@lemmy.world

        Yeah, there are very easy ways. One of the most common ways to cheat a slot machine is with a localized EMP device that convinces the machine you’re adding tokens.

      • Buddahriffic@lemmy.world

        One way involves replacing the flash bulb on an old camera flash unit with an antenna. It’s not strong enough to fry electronics, but your phone might need anything from a reboot to a factory reset to servicing if it’s in range when it goes off.

        I think the difficulty with EMPs comes from the device itself being electronic, so the more effective the pulse it can give, the more likely it is to fry its own circuits. Though if you know the target device well, you can target the frequencies it is vulnerable to, which could be easier on your own device, plus everything else in range that doesn’t resonate on the same frequencies as the target.

        Tesla apparently built (designed?) a device that could fry a whole city with a massive lightning strike using just six transmitters located in various locations on the planet. If that’s true, I think it means it’s possible to create an EMP stronger than a nuke’s that doesn’t have to destroy itself in the process, but it would be a massive infrastructure project spanning multiple countries. There was speculation that massive antenna arrays (like HAARP) might be able to accomplish something similar from a single location, but that came out of the conspiracy-theory side of the world, so take it with a grain of salt (and apply that to the original Tesla invention as well).
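
        For the curious, here’s a back-of-the-envelope sketch of why a stronger pulse threatens the emitter’s own circuits: by Faraday’s law the induced EMF scales with loop area times the rate of change of the field, and the emitter’s own coil is usually the biggest loop around. All numbers below are invented for illustration, not measurements of any real device.

        ```python
        # Faraday's law: emf ~ A * dB/dt for a conductive loop of area A.
        # Illustrative guesses only, not measurements of any real device.

        def induced_emf(loop_area_m2, peak_b_tesla, rise_time_s):
            """Rough peak EMF (volts) induced in a loop by a fast magnetic pulse."""
            return loop_area_m2 * peak_b_tesla / rise_time_s

        # Small PCB trace loop (1 cm^2) vs. the emitter's own big coil (100 cm^2),
        # both hit by the same 10 microtesla pulse rising in 10 nanoseconds:
        victim = induced_emf(1e-4, 1e-5, 1e-8)    # ~0.1 V into the target's traces
        emitter = induced_emf(1e-2, 1e-5, 1e-8)   # ~10 V back into the emitter itself

        print(f"victim trace: ~{victim:.1f} V, emitter coil: ~{emitter:.1f} V")
        ```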

    • criticalthreshold@lemmy.world

      A truly autonomous system would have integrated image-recognition chips on the drones themselves and hardening against any EM interference. They would have no comms to their ‘mothership’ once deployed.

    • FreshProduceAndShit@lemmy.ml

      so I guess the future of terrorism will be fueled by people learning programming and figuring out how to make EMPs

      Honestly, the terrorists will just figure out what masks to wear to get the robots to think they’re friendlies or commanders, then turn the guns around on our guys.
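
      What’s being described is close to what the research literature calls an adversarial attack on image recognition. A minimal sketch against a toy linear “friend/foe” classifier; the model, its weights, and the step size are all made up for illustration, though real attacks like FGSM do the same gradient step against deep networks:

      ```python
      import numpy as np

      # Toy "friend/foe" classifier: score > 0 means the input looks hostile.
      # For a linear model the gradient of the score w.r.t. the input is just w.
      rng = np.random.default_rng(0)
      w = rng.normal(size=64)              # made-up classifier weights
      x = w + 0.1 * rng.normal(size=64)    # an input the model flags as hostile

      def score(v):
          return float(w @ v)

      # Step each "pixel" against the sign of the gradient to lower the score.
      # The step size is exaggerated so the toy example flips decisively.
      eps = 2.0
      x_adv = x - eps * np.sign(w)

      print(f"before: {score(x):+.1f}  after: {score(x_adv):+.1f}")
      # A structured perturbation (a printed pattern on a mask, say) can push
      # the score across the decision boundary.
      ```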

  • redcalcium@lemmy.institute

    “Deploy the fully autonomous loitering munition drone!”

    “Sir, the drone decided to blow up a kindergarten.”

    “Not our problem. Submit a bug report to Lockheed Martin.”

      • pivot_root@lemmy.world

        Goes to original ticket:

        Status: WONTFIX

        “This is working as intended according to specifications.”

    • spirinolas@lemmy.world

      “Your military robots slaughtered that whole city! We need answers! Somebody must take responsibility!”

      “Aaw, that really sucks *starts rubbing nipples* I’ll submit a ticket and we’ll let you know. If we don’t call in 2 weeks… call again, and we can go through this over and over until you give up.”

      “NO! I WANT TO TALK TO YOUR SUPERVISOR NOW”

      “Suuure, please hold.”

      • lad@programming.dev

        Nah, too straightforward for a real employee. Also, they would be talking to a phone robot instead, which will never let them talk to a real person.

  • at_an_angle@lemmy.one

    “You can have ten or twenty or fifty drones all fly over the same transport, taking pictures with their cameras. And, when they decide that it’s a viable target, they send the information back to an operator in Pearl Harbor or Colorado or someplace,” Hamilton told me. The operator would then order an attack. “You can call that autonomy, because a human isn’t flying every airplane. But ultimately there will be a human pulling the trigger.” (This follows the D.O.D.’s policy on autonomous systems, which is to always have a person “in the loop.”)

    https://www.businessinsider.com/us-closer-ai-drones-autonomously-decide-kill-humans-artifical-intelligence-2023-11

    Yeah. Robots will never be calling the shots.
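
    For what it’s worth, the “person in the loop” policy in that quote boils down to a software gate like the sketch below. It’s a toy; every name and threshold here is hypothetical, not any real system.

    ```python
    from dataclasses import dataclass

    # Toy sketch of a person-in-the-loop gate: the drone may nominate targets,
    # but only an operator's explicit approval releases a weapon.

    @dataclass
    class TargetNomination:
        track_id: str
        confidence: float  # classifier confidence, 0..1

    def operator_approves(nom: TargetNomination) -> bool:
        """Stand-in for the human in Pearl Harbor or Colorado."""
        answer = input(f"Engage track {nom.track_id} ({nom.confidence:.0%})? [y/N] ")
        return answer.strip().lower() == "y"

    def engage(nom: TargetNomination) -> None:
        if nom.confidence < 0.9:
            print("below threshold, never shown to the operator")
        elif operator_approves(nom):
            print("weapon release authorized by a human")
        else:
            print("operator declined, no engagement")

    engage(TargetNomination(track_id="T-042", confidence=0.93))
    ```

    The worry in the rest of this thread is what happens when that approval step gets optimized away.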

    • M0oP0o@mander.xyz

      I mean, normally I would not put my hopes in a sleep-deprived 20-year-old armed forces member. But then I remember what “AI” tech does with images, and all of a sudden I am way more OK with it. This seems like a bit of a slippery slope, but we don’t need Tesla’s full self-flying cruise missiles either.

      Oh, and for an example of AI (not really, but machine learning) picking out targets in images, here is DALL-E 3’s idea of a person:

      • 1847953620@lemmy.world

        My problem is, due to systemic pressure, how under-trained and overworked could these people be? Under what time constraints will they be working? What will the oversight be? Sounds ripe for said slippery slope in practice.

        • M0oP0o@mander.xyz

          Oh, it gets better. The full prompt was: “A normal person, not a target.”

          So, does that include trees, pictures of trash cans, and whatever else is here?

      • BlueBockser@programming.dev

        Sleep-deprived 20-year-olds calling the shots is very much normal in any army. They of course have rules of engagement, but other than that, they’re free to make their own decisions, whether an autonomous robot is involved or not.

  • cosmicrookie@lemmy.world

    It’s so much easier to say that the AI decided to bomb that kindergarten based on advanced intel than it would be if it were a human choice. You can’t punish an AI for doing something wrong. An AI does not require a raise for doing something right, either.

    • Meowing Thing@lemmy.world

      That’s an issue with the whole tech industry. They do something wrong, say it was AI/ML/the algorithm, and get off with just a slap on the wrist.

      We should all remember that every single piece of tech we have was built by someone. And that someone and their employer should be held accountable for everything this tech does.

      • lad@programming.dev

        How many people are you going to hold accountable if something was made by a team of ten people? Of a hundred people? Do you want to include everyone from the designers to QA?

        Accountability should be reasonable: the ones who make decisions should be held accountable, and companies at large should be held accountable, but making every last developer accountable is just a dream of a world where you do everything correctly and so nothing needs fixing. That is impossible in the real world, for better or worse.

        And in my experience, when there’s too much responsibility, people tend either to ignore it and get crushed if anything goes wrong, or to keep their distance from it and sabotage any work so that nothing ever ships. Either way, you will not get the results you expect from holding everyone accountable.

        • Ultraviolet@lemmy.world

          The CEO. They claim that “risk” justifies their exorbitant pay? Let them take some actual risk: hold them criminally liable for their entire business.

    • Ultraviolet@lemmy.world

      1979: A computer can never be held accountable, therefore a computer must never make a management decision.

      2023: A computer can never be held accountable, therefore a computer must make all decisions that are inconvenient to take accountability for.

    • recapitated@lemmy.world

      Whether in the military or in business, responsibility should lie with whoever deploys it. If they’re willing to pass the buck up to the implementer or designer, then they shouldn’t be confident enough to use it.

      Because, like all tech, it is a tool.

    • zalgotext@sh.itjust.works

      You can’t punish an AI for doing something wrong.

      Maybe I’m being pedantic, but technically, you do punish AIs when they do something “wrong” during training, just like you reward them for doing something right.
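
      In reinforcement-learning terms, “punishment” is literally just a negative number fed into the same update rule as a reward. A minimal Q-learning sketch (a toy, not any real training setup):

      ```python
      import numpy as np

      # Minimal Q-learning update: reward and punishment are the same mechanism,
      # just with opposite signs. Toy state/action sizes, not a real setup.
      n_states, n_actions = 4, 2
      Q = np.zeros((n_states, n_actions))
      alpha, gamma = 0.1, 0.9  # learning rate, discount factor

      def update(state, action, reward, next_state):
          best_next = Q[next_state].max()
          Q[state, action] += alpha * (reward + gamma * best_next - Q[state, action])

      update(state=0, action=1, reward=-1.0, next_state=2)  # "punish" a bad action
      update(state=0, action=0, reward=+1.0, next_state=1)  # "reward" a good one
      print(Q[0])  # the punished action's value drops, the rewarded one's rises
      ```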

      • cosmicrookie@lemmy.world

        But that is during training. What I meant is that you can’t punish an AI for making a mistake when it’s used in combat situations, which is very convenient for the ones intentionally wanting that mistake to happen.

    • synthsalad@mycelial.nexus

      An AI does not require a raise for doing something right, either.

      Well, not yet. Imagine if reward functions evolve into being paid with real money.

    • reksas@lemmings.world

      That is like saying you can’t punish a gun for killing people.

      Edit: meaning that it’s redundant to talk about not being able to punish AI, since it can’t feel or care anyway. No matter how long a pole you use to hit people with, responsibility for your actions will still reach you.

      • cosmicrookie@lemmy.world

        Sorry, but this is not a valid comparison. What we’re talking about here is a gun with AI built in that decides whether it should pull the trigger. With a regular gun, a human always pulls the trigger. Now imagine an AI gun that you point at someone, and the AI decides whether to fire. Who do you attribute the death to in that case?

            • reksas@lemmings.world

              Unless it’s actually sentient, being able to decide whether to kill or not is just a more advanced targeting system. Not saying it’s a good thing they’re doing this at all; it’s almost as bad as using tactical nukes.

                • reksas@lemmings.world

                  Letting it learn is just a new technology that has become possible. That’s not bad on its own, but it has so much potential to be used for good and evil.

                  But yes, it’s pretty bad if they are creating machines that learn how to kill people by themselves. Create enough of them, and it’s only an unknown number of mistakes and acts of negligence away from becoming a localized “AI uprising”. And if in the future they create some bigger AI to manage a bunch of them, and possibly delegate production to it too because that’s more efficient and cheaper, then the danger is even bigger.

                  AI doesn’t even need sentience to do unintended stuff. When I have used ChatGPT to help me create scripts, it sometimes seems to decide on its own to do something in a certain way that I didn’t request, or to add something stupid. Though that’s usually also kind of my own fault for not defining what I want properly, a mistake like that is really easy to make, and if we are talking about defining who we want the AI to kill, it becomes awful to even think about.

                  And if nothing goes wrong and it all works exactly as planned, that’s in a way an even bigger problem, because then we have countries with really efficient, unfeeling, mass-producible soldiers that do 100% as ordered, will not retreat on their own, and will not stop until told to do so. With the current political rise of certain types of people all around the world, this is even more distressing.

  • 1984@lemmy.today

    Future is gonna suck, so enjoy your life today while the future is still not here.

  • BombOmOm@lemmy.world

    As an important note in this discussion, we already have weapons that autonomously decide to kill humans. Mines.

    • Chuck@sh.itjust.works

      Imagine a mine that could move around, target seek, refuel, rearm, and kill hundreds of people without human intervention. Comparing an autonomous murder machine to a mine is like comparing a flintlock pistol to the fucking Gatling cannon in an A-10.

      • gibmiser@lemmy.world

        Well, an important point you and he both forget to mention is that mines are considered inhumane. Perhaps that means AI murder should also be considered inhumane, and we should just not do it, instead of allowing it as we did landmines.

        • livus@kbin.social

          This. Jesus, we’re still losing limbs to, and clearing mines from, wars that ended decades ago.

          An autonomous field of those is horror-movie stuff.

      • Chozo@kbin.social

        Imagine a mine that could move around, target seek, refuel, rearm, and kill hundreds of people without human intervention.

        Pretty sure the entire DOD got a collective boner reading this.

      • Sterile_Technique@lemmy.world

        Imagine a mine that could move around, target seek, refuel, rearm, and kill hundreds of people without human intervention. Comparing an autonomous murder machine to a mine is like comparing a flintlock pistol to the fucking Gatling cannon in an A-10.

        For what it’s worth, there’s footage on YouTube of drone-swarm demonstrations posted six years ago. The military doesn’t typically release footage of the cutting edge of its tech to the public, so that demonstration was likely of a product that was already going obsolete; and the six years since have brought lightning-fast developments in things like facial recognition. At this point I’d be surprised if we weren’t already, at the very least, field-testing the murder machines you described.

      • FaceDeer@kbin.social

        Imagine a mine that could recognize “that’s just a child/civilian/medic stepping on me, I’m going to save myself for an enemy soldier.” Or a mine that could recognize “ah, CentCom just announced a ceasefire, I’m going to take a little nap.” Or “the enemy soldier that just stepped on me is unarmed and frantically calling out that he’s surrendered, I’ll let this one go through. Not the barrier troops chasing him, though.”

        There are opportunities for good here.

        • livus@kbin.social

          @FaceDeer Okay, so now that mines allegedly recognise these things, they can be automatically deployed in cities.

          Sure, there’s a 5% margin of error, but that’s an “acceptable” level of collateral according to their masters. And sure, they are better at recognising some ethnicities than others, but since those they discriminate against aren’t a dominant part of the culture that produces them, nothing gets done about it.

          And after 20 years, when the tech is obsolete and they all start malfunctioning, we’re left with the same problems we have with current mines, only, because the ban on mines was reversed, the scale of the problem is much, much worse than ever before.

        • theneverfox@pawb.social

          That sounds great… Why don’t we line the streets with them? Every entryway could scan for hostiles. Maybe even use them against criminals.

          What could possibly go wrong?

        • key@lemmy.keychat.org

          Maybe it starts that way, but once that’s accepted as a thing, the result will be increased use of mines. Where before there were too many civilians to consider using mines, now the soldiers say “it’s smart now, it won’t blow up children” and put down more and more in more dangerous situations. And maybe those mines only have a 0.1% failure rate in tested situations but a 10% failure rate over the course of decades. Usage increases tenfold, and you quickly end up with a lot more dead kids.

          Plus it won’t just be mines; it’ll be automated turrets where previously there were none, or even more drone strikes with less oversight required, because the automated system is supposed to prevent unintended casualties.

          Availability drives usage.
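
          To put that scaling argument in numbers (every figure below is invented for illustration):

          ```python
          # All figures invented for illustration; the point is the multiplication.
          mines_before = 10_000
          failure_tested = 0.001           # 0.1% failure rate in tested situations

          mines_after = 10 * mines_before  # usage increases tenfold
          failure_field = 0.10             # 10% failure rate over decades in the field

          print(f"expected failures before: {mines_before * failure_tested:,.0f}")  # 10
          print(f"expected failures after:  {mines_after * failure_field:,.0f}")    # 10,000
          # 10x the deployment times 100x the real-world failure rate = 1,000x the failures.
          ```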

  • Immersive_Matthew@sh.itjust.works

    We are all worried about AI, but it is humans I worry about, and how we will use AI, not the AI itself. I am sure when electricity was invented people feared it too, but it was how humans used it that was, and is, always the risk.

  • HiddenLayer5@lemmy.ml

    Remember: there is no such thing as an “evil” AI; there are only evil humans programming and manipulating the weights, conditions, and training data that the AI operates on and learns from.

    • Zacryon@feddit.de

      Evil humans have also manipulated the weights and programming of other humans who weren’t evil before.

      A very important philosophical issue you’ve stumbled upon here.

    • MonkeMischief@lemmy.today

      Good point…

      …to which we’re alarmed because the real “power players” in training, developing, and enhancing AI are mega-capitalists and “defense” (offense?) contractors.

      I’d like to see AI being trained to plan and coordinate human-friendly cities, for instance, buuuuut that’s not gonna get as much traction…

  • unreasonabro@lemmy.world

    Any intelligent creature, artificial or not, recognizes the Pentagon as the thing that needs to be stopped first.