• RotaryKeyboard@lemmy.sdf.org · +116 / −2 · 11 months ago

    Using AI to flag footage for review by a person seems like a good time-saving practice. I would bet that without some kind of automation like this, a lot of footage would just go unreviewed. This is far better than waiting for someone to lodge a complaint first, since you could conceivably identify problem behaviors and fix them before someone gets hurt.

    The use of AI-based solutions to examine body-cam footage, however, is getting pushback from police unions pressuring the departments not to make the findings public to save potentially problematic officers.

    According to this, the unions are against this because they want to shield bad-behaving officers. That tells me the AI review is working!
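
    To make the time-saving idea above concrete, here's a minimal sketch of flag-then-review in Python. Everything in it is invented for illustration (the segment data, the keyword markers); a real system would use a trained model rather than a keyword list, but the shape is the same: the output is a short review queue for a person, not a verdict.

    ```python
    # Hypothetical triage: scan transcript segments of bodycam footage
    # and queue only the suspicious ones for human review.
    from dataclasses import dataclass

    @dataclass
    class Segment:
        video_id: str
        start_sec: float
        end_sec: float
        transcript: str

    # Illustrative markers only, not any vendor's actual criteria.
    RISK_MARKERS = ("stop resisting", "taser", "on the ground")

    def flag_for_review(segments):
        """Return only the segments a human reviewer should watch."""
        return [s for s in segments
                if any(m in s.transcript.lower() for m in RISK_MARKERS)]

    day_of_footage = [
        Segment("cam-114", 0.0, 30.0, "License and registration, please."),
        Segment("cam-114", 30.0, 55.0, "Stop resisting! Get on the ground!"),
    ]
    for seg in flag_for_review(day_of_footage):
        print(f"review {seg.video_id} at {seg.start_sec}-{seg.end_sec}s")
    ```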

    • jaybone@lemmy.world · +26 · 11 months ago

      I bet if they made all footage publicly available, watchdog style groups would be reviewing the shit out of that footage. But yeah AI might help too maybe.

      • Scubus@sh.itjust.works · +12 · 11 months ago

        While I agree wholeheartedly, that is unrealistic due to laws. You can’t reveal certain suspects’ identities because, for certain crimes like pedophilia, people will attempt to execute the suspect before they know whether or not they actually did it.

        • LarmyOfLone@lemm.ee · +6 · 11 months ago

          I mean, police footage would be privacy-invading as hell for victims and even just bystanders.

        • gaylord_fartmaster@lemmy.world · +1 · 11 months ago

          A charge being filed against someone is already public record in the majority of areas in the United States, as well as any court records resulting from those charges.

          • Scubus@sh.itjust.works · +4 · 11 months ago

            I forgot to add suspects who are minors; not positive, but pretty sure they can’t be shown either.

            • gaylord_fartmaster@lemmy.world · +2 · 11 months ago

              Then they could just withhold the video from the public, since they’re already withholding the charge. The real issue would be protecting victims, not suspects.

    • Null User Object@programming.dev · +12 · 11 months ago (edited)

      Exactly, and this also contradicts the “few bad apples” defense. If there were only a few bad apples, then the police unions should be bending over backwards to eradicate them sooner rather than later to protect the many good apples, not to mention improve the long-suffering reputation of police.

      Instead, they’re doing the exact opposite, making it clear to anyone paying attention that it’s mostly, if not entirely, bad apples.

      • Rai@lemmy.dbzer0.com · +14 / −2 · 11 months ago

        You’ve got it backwards.

        The phrase is “A few bad apples spoil the bunch”. It means everyone around the bad apples is also bad, because they’re around the rot and do nothing about it. It’s not a defense; it’s literally explaining what your comment says.

        • Ithi@lemmy.ca · +9 · 11 months ago

          I think that poster is right in this context. It gets abbreviated and used as a defense that there are just “a few bad apples”, and then they just drop/ignore the rest of the phrase.

    • Ottomateeverything@lemmy.world · +11 / −1 · 11 months ago

      The whole police thing and public accountability kinda makes sense, but I don’t think this means we should be pushing on AI just because the “bad guys” don’t like it.

      AI is full of holes and unknowns, and relying on it to do stuff like this sets a dangerous precedent IMO. You absolutely need someone reviewing it, yes. But reviewers are also not going to catch everything, and starting down this path means the AI will be leaned on more and more until it replaces thorough review by people.

      I think something low stakes and infeasible without the tools might make sense - like AIs reading through game chat or Twitter posts to identify issues where it’s impossible to have someone read everything, and if some slip by, oh well, it’s a post on the internet.
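
      As a hedged sketch of that low-stakes use (the scoring heuristic and thresholds below are invented stand-ins for a real trained classifier):

      ```python
      # Hypothetical moderation triage: a score routes the flood of
      # messages, and only the uncertain middle band costs human time.
      def toxicity_score(message: str) -> float:
          # Placeholder heuristic standing in for a real classifier.
          insults = ("idiot", "trash", "uninstall")
          hits = sum(word in message.lower() for word in insults)
          return min(1.0, hits / 2)

      def route(message: str) -> str:
          score = toxicity_score(message)
          if score >= 0.9:
              return "auto-remove"   # clearly bad; a wrong call is cheap here
          if score >= 0.4:
              return "human review"  # uncertain; a person decides
          return "allow"

      for msg in ("gg well played", "uninstall, you absolute trash"):
          print(route(msg), "<-", msg)
      ```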

      But with police behavior? Those are people with the authority to ruin people’s lives or kill them. I do NOT trust AI to catch every problematic behavior and this stuff ABSOLUTELY should be done by people. I’d be okay with it as an aid, in theory, but once it’s doing any “aiding” it’s also approving some behavior. It can’t really be telling anyone where TO look without implying where NOT to look, and that gives it some authority, even as an “aid”. If it’s not making decisions, it’s not saving anyone any time.

      Idk, I’m all for the public accountability and stuff like that here, but having AI make decisions around the behavior of people with so much fucking power is horrifying to me.

      • harmsy@lemmy.world · +5 · 11 months ago

        An AI art website I use illustrates your point perfectly with its attempt at automatic content filtering. Tons of innocent images get flagged, meanwhile problem content often gets through and has to be whacked manually. Relying on AI to catch everything, without false positives, is a recipe for disaster.
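
        That trade-off can be put in numbers. A tiny sketch with invented counts (not from any real filter) of why "catch everything, with no false positives" doesn't happen:

        ```python
        # tp = bad content correctly flagged, fp = innocent images flagged,
        # tn = innocent images passed, fn = problem content that got through.
        def rates(tp, fp, tn, fn):
            return {
                "false_positive_rate": fp / (fp + tn),  # innocent flagged
                "false_negative_rate": fn / (fn + tp),  # bad content missed
                "precision": tp / (tp + fp),            # trustworthiness of a flag
            }

        # Invented numbers mirroring the comment: tons of innocent flags,
        # and problem content still slipping through.
        print(rates(tp=80, fp=400, tn=9520, fn=20))
        # Tightening the threshold trades one error type for the other;
        # no setting drives both to zero at once.
        ```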

          • deranger@sh.itjust.works · +2 · 11 months ago

            I really don’t think it’s better than nothing. You put a biased AI in charge of reviewing footage and now they have a reason to say they’re doing the right thing instead of doing nothing, despite what they’re doing being worse.

  • originalucifer@moist.catsweat.com · +99 / −3 · 11 months ago

    don’t they always tell us

    ‘you have nothing to worry about if you have nothing to hide’

    uh huh… how’s that feel now, government employee?

    • BearOfaTime@lemm.ee · +44 / −2 · 11 months ago

      Yep.

      Y’all like surveillance so much, let’s put all government employees under a camera all the time. Of all the places I find cameras offensive, that one not so much.

      • mndrl@lemmy.world · +3 / −30 · 11 months ago

        I sure hope you get your daily dose of enjoying people’s misery by watching the substitute teacher crying in the teachers’ lounge.

        • lolcatnip@reddthat.com · +4 · 11 months ago

          Cameras in a teacher’s lounge would be ridiculous but, in principle, cameras in classrooms make a lot of sense. Teachers are public officials who exercise power over others, and as such they need to be accountable for their actions. Cameras only seem mean because teachers are treated so badly in other ways.

          • mndrl@lemmy.world · +1 / −2 · 11 months ago (edited)

            Sure thing, buddy. They exert such power that they can barely make teens stay put for ten minutes without fucking around with their phones. So much power.

              • mndrl@lemmy.world · +1 / −1 · 11 months ago

                I checked just in case. Exactly as I said, most government workers have no power or means to exert it. You must be thinking of something else.

                Although I can recognize when someone has silly power fantasies. It is wild, man.

  • Tristaniopsis@aussie.zone · +51 / −1 · 11 months ago

    If the police unions don’t like it, then it’s certainly going to be a positive step towards public safety.

  • Dizzy Devil Ducky@lemm.ee · +34 / −2 · 11 months ago (edited)

    I have a sneaking suspicion that if police in places like America start using AI to review bodycam footage, they’ll just “pay” someone to train their AI so that it always says the officer was in the right when killing innocent civilians, and the footage never gets flagged. That, or they’ll do something equally shady and suspicious.

    • UnderpantsWeevil@lemmy.world · +24 / −3 · 11 months ago

      These algorithms already have a comical bias towards the folks contracting their use.

      Case in point, the UK Home Office recently contracted with an AI firm to rapidly parse through large backlogs of digital information.

      The Guardian has uncovered evidence that some of the tools being used have the potential to produce discriminatory results, such as:

      - An algorithm used by the Department for Work and Pensions (DWP) which an MP believes mistakenly led to dozens of people having their benefits removed.

      - A facial recognition tool used by the Metropolitan police that has been found to make more mistakes recognising black faces than white ones under certain settings.

      - An algorithm used by the Home Office to flag up sham marriages, which has been disproportionately selecting people of certain nationalities.

      Monopoly was a lie. You’re never going to get that Bank Error In Your Favor. It doesn’t happen. The House (or, the Home Office, in this case) always wins when these digital tools are employed, because the money for the tool is predicated on these agencies clipping benefits and extorting additional fines from the public at-large.
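
      For what it’s worth, the audit that surfaces this kind of skew is simple to state; here’s a hedged sketch with invented records (the hard part in reality is assembling ground-truth labels per group):

      ```python
      from collections import defaultdict

      def error_rates_by_group(records):
          """records: (group, predicted, actual) triples from an audit set."""
          errors, totals = defaultdict(int), defaultdict(int)
          for group, predicted, actual in records:
              totals[group] += 1
              errors[group] += predicted != actual
          return {g: errors[g] / totals[g] for g in totals}

      # Aggregate accuracy here is 75%, which sounds fine and hides the gap.
      audit = [("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 0),
               ("B", 1, 0), ("B", 0, 0), ("B", 1, 1), ("B", 0, 1)]
      print(error_rates_by_group(audit))  # {'A': 0.0, 'B': 0.5}
      ```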

      • butterflyattack@lemmy.world · +3 · 11 months ago

        Bank errors in your favour do happen, or at least they did; one happened to me maybe twenty-five years ago. I was broke and went to the bank to pay in my last £30-something of cash to cover an outgoing bill. I stopped at the cash machine outside my bank to check my balance was sufficient, and found that the cashier had put an extra 4 zeros on the figure I’d deposited. I was rich! I was also in my early 20s and not thinking too clearly, I guess, because my immediate response was to rush home to get my passport, intending to go abroad, open an account to transfer the funds into, and never come back. I checked my balance again at another machine closer to home and the bank had already caught and corrected their mistake. It took them maybe thirty minutes.

        After a bit it occurred to me that I was lucky really, because I didn’t know what the fuck I was doing, and the funds would have been traced very easily and I’d have been in deep shit.

        But yeah, anecdotal, but shit like that did happen. I assume it’s more rare these days as fewer humans are involved in the system, and fewer people use cash.

    • Kalkaline @leminal.space · +48 / −1 · 11 months ago (edited)

      AI can’t be the last word on what gets marked as misconduct; however, using it as a screening tool for potentially problematic moments in a law enforcement officer’s encounters would be useful. Screening those hours upon hours of video is an enormous task, and probably prohibitively expensive for humans to work through on their own.

      • CameronDev@programming.dev · +10 / −4 · 11 months ago

        Need to be certain the false negative rate is near zero though, or important events could be missed, and with AI it’s nearly impossible to say that with certainty.
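
        To sketch what chasing a near-zero false negative rate costs (all scores invented; the rule is "flag if score >= threshold"):

        ```python
        # Pick the highest flagging threshold whose miss (false negative)
        # rate on known incidents stays within budget, and report the
        # false-alarm rate that choice costs on benign footage.
        def pick_threshold(incident_scores, benign_scores, max_miss_rate=0.0):
            for t in sorted(set(incident_scores + benign_scores), reverse=True):
                miss = sum(s < t for s in incident_scores) / len(incident_scores)
                if miss <= max_miss_rate:
                    alarm = sum(s >= t for s in benign_scores) / len(benign_scores)
                    return t, miss, alarm
            return None

        incidents = [0.95, 0.90, 0.85, 0.40]  # one real incident scores low
        benign = [0.10, 0.20, 0.30, 0.45, 0.50, 0.60]
        print(pick_threshold(incidents, benign))
        # (0.4, 0.0, 0.5): catching every incident here means flagging
        # half the benign footage for human review.
        ```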

          • RaoulDook@lemmy.world · +4 / −1 · 11 months ago

            Yep, we have countless stories from all over the place of people trying to get help with crimes who got no help from the police. Over and over I’ve heard people describe how they were robbed and the police put no effort towards catching the perpetrators or returning the property. And that’s far from the worst of it.

            • pearsaltchocolatebar@discuss.online · +3 / −1 · 11 months ago

              I mean, that’s because there’s usually very little evidence to go on after a robbery, unless you have security cameras.

              Most PDs don’t even have the resources to process evidence from things like murders and rapes, so robbery isn’t super high on their priority list.

        • originalucifer@moist.catsweat.com · +10 / −1 · 11 months ago

          so we should just drop the good in pursuit of perfect?

          ai is just an additional tool to be applied based on its efficacy. the better the tooling gets, the more we can trust its results. but no one…

          no one

          is expecting ai to be perfect, and to be trusted 100%.

    • The Snark Urge@lemmy.world · +18 · 11 months ago

      Maybe if it’s just being used to flag potential areas of interest for review by a human? I’m open to the idea as long as there’s definite accountability and care.

      Which, returning to the real world, we know is a fat chance.

    • Imgonnatrythis@sh.itjust.works · +10 · 11 months ago

      It’s just flagging for human review. The dataset is too large and it can be made more objective than human review. As soon as I hear anything upsets police unions, I know it’s gotta be good. Support this.

    • RobotToaster@mander.xyz · +7 · 11 months ago

      One thing AI is generally pretty good at is identifying what is in a video. So at the very least you don’t have to waste money paying someone to watch 100s of hours of videos of donuts.
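
      As a rough sketch of that pipeline (the classifier below is a pre-labeled stand-in; a real one would run a trained vision model over sampled frames):

      ```python
      def classify_frame(frame) -> set:
          # Stand-in for an image model; frames here are pre-labeled dicts.
          return set(frame["labels"])

      def labels_in_video(frames, every_nth=30):
          """Union of labels across sampled frames (~1 per second at 30 fps)."""
          seen = set()
          for i, frame in enumerate(frames):
              if i % every_nth == 0:
                  seen |= classify_frame(frame)
          return seen

      video = [{"labels": ["desk", "donut"]}] * 90 + [{"labels": ["traffic stop"]}] * 30
      print(labels_in_video(video))
      # A reviewer can jump straight to videos whose labels include
      # "traffic stop" and skip the donut footage entirely.
      ```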

    • Cheradenine@sh.itjust.works · +2 / −1 · 11 months ago

      Is this the kind of thing anyone could be happy about?

      Cops reviewing themselves, we know how that works out.

      1. Cops being reviewed by shitty AI.
      2. ???
      3. ???
      4. wtf
      • BearOfaTime@lemm.ee · +18 / −1 · 11 months ago

        Then again, when the police union doesn’t like something, it makes me wonder what it’s exposing about them…

  • werefreeatlast@lemmy.world · +6 · 11 months ago

    If this works, why not have an AI automatically investigate judges and government officials too? The AI could indicate, for example, whether a judge needs to recuse him- or herself… that came up several times this year. And for politicians, the AI would tell us if they are lying, or if they are allowing or taking part in corruption. For this purpose, they should wear a microphone and camera the entire time they are government officials. Don’t like it? Too bad, that’s the law. Right?

    • Adalast@lemmy.world · +10 · 11 months ago

      I had a shower fantasy the other day where I wrote an AI that hacked all of the data brokers and web trackers, then doxxed every politician’s and political candidate’s web history in its totality the instant they were added to the ballot, worldwide. It was decentralized, running many instances, and used a similar structure for distributing the information.

    • UnderpantsWeevil@lemmy.world · +2 / −1 · 11 months ago

      why not have an AI automatically investigate Judges and government officials

      Because the power is supposed to originate with said Judges/Officials. The AI tool is a means of justifying their decisions, not a means of exerting their authority. If you give the AI power over the judges/officials, why would they want to participate in that system? If they were proper social climbers, they would - instead - aim to be CEOs of AI companies.

  • Dukeofdummies@kbin.social · +5 / −1 · 11 months ago

    I am so confused by this, why does there need to be AI involved in this at all?

    If somebody has a complaint, pull the footage, then the plaintiff goes over the footage and makes their case against the police officer. Why would an AI be necessary to find complaints that are not being complained about?

    I feel like it’s a technology solution for what should be a “more transparency and a better system” solution. Make complaints easier and reduce the fear factor of making complaints.

    • bhmnscmm@lemmy.world · +20 · 11 months ago

      The people most likely to be abused by police are the least likely to be able or willing to file a formal complaint.

      • Dukeofdummies@kbin.social · +3 / −8 · 11 months ago

        So fix that. Don’t make an AI to dole out justice against police like some messed up lottery. This is such a hollow solution in my mind. AI struggles to identify a motorcycle, people expect it to identify abuse?

        • quirzle@kbin.social · +11 / −1 · 11 months ago (edited)

          So fix that.

          Were it so simple, it would have been fixed decades ago. The difference is that having AI review the footage is actually feasible.

          • Dukeofdummies@kbin.social · +3 / −2 · 11 months ago

            Until you realize that the people who make the final decision on whether something the AI saw is indeed too far or extreme are the exact same people making the decision now. All we’ve succeeded in doing is creating a million-dollar system that makes it look like they’re trying to change.

            • quirzle@kbin.social · +7 / −1 · 11 months ago

              So what’s your proposed solution? Your directive to “fix that” was a bit light on details.

              This is a step in the right direction. The automated reviews will supplement, not replace, the review triggered by the manual reports you supported in your initial comment. I’d argue the pushback from police unions is a sign that it actually might lead to some change, given the reasoning they give in the article.

    • RobotToaster@mander.xyz · +4 · 11 months ago (edited)

      I’m just theorising how AI could be used, but consider the situation where someone makes a complaint, but doesn’t remember the exact time of the incident (say they remember it was within a six hour time frame for this example), or what the officer looked like.

      Say there are (for example) 20 officers on duty it could potentially be; over a six-hour time frame, that’s 120 hours, or 5 days, of footage. An AI can use facial recognition to find the complainant within minutes.
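
      A rough sketch of that search, assuming the footage has already been indexed into face embeddings (one vector per detected face, the kind of output libraries like face_recognition produce); all vectors and IDs below are invented:

      ```python
      import math

      def cosine_similarity(a, b):
          dot = sum(x * y for x, y in zip(a, b))
          return dot / (math.sqrt(sum(x * x for x in a)) *
                        math.sqrt(sum(x * x for x in b)))

      def find_matches(complainant_vec, indexed_footage, threshold=0.9):
          """indexed_footage: (video_id, timestamp_sec, face_vector) tuples."""
          return [(vid, ts) for vid, ts, vec in indexed_footage
                  if cosine_similarity(complainant_vec, vec) >= threshold]

      footage = [("officer3_cam", 4210.0, [0.9, 0.1, 0.4]),
                 ("officer7_cam", 133.5, [0.2, 0.8, 0.1])]
      print(find_matches([0.88, 0.12, 0.38], footage))
      # [('officer3_cam', 4210.0)]: minutes of scanning instead of
      # days of watching.
      ```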

  • jet@hackertalks.com · +3 · 11 months ago

    The union will demand to train the model that scans the footage… in the future.

  • Alien Nathan Edward@lemm.ee · +2 · 11 months ago

    I’m significantly more confident in this AI if police unions are against it. Police unions serve one function: to protect criminals from the law as long as said criminals have paid their union dues.

  • HarkMahlberg@kbin.social · +7 / −6 · 11 months ago

    Let’s not confuse ourselves here. The opposite of one evil is not necessarily a good. Police reviewing their own footage, investigating themselves: bad. Unreliable AI beholden to corporate interests and shareholders: also bad.

    • pearsaltchocolatebar@discuss.online · +7 / −2 · 11 months ago

      It’s fine to not understand what “AI” is and how it works, but you should avoid making statements that highlight that lack of understanding.

      • tabular@lemmy.world · +8 · 11 months ago (edited)

        If you feel someone’s knowledge is lacking, then explaining it may convince them, or others reading your post.

        • lolcatnip@reddthat.com · +1 / −2 · 11 months ago (edited)

          Speaking of a broad category of useful technologies as inherently bad is a dead giveaway that someone doesn’t know what they’re talking about.

      • HarkMahlberg@kbin.social · +4 · 11 months ago (edited)

        It’s fine to not understand what “AI” is and how it works

        That’s highly presumptive, isn’t it? I didn’t make any statement about what AI is, or about the mechanics behind it. I only made a statement regarding the owners and operators of AI. We’re talking about the politics of using AI to aid in police accountability, and for those purposes, AI need not be more than a black box. We could call it a sentient jar of kidney beans for all it matters.

        So for the sake of argument - the one I made, not the one I didn’t make - what did I misunderstand?

        Unreliable

        On June 22, 2023, Judge P. Kevin Castel of the Southern District of New York released a lengthy order sanctioning two attorneys for submitting a brief drafted by ChatGPT. Judge Castel reprimanded the attorneys, explaining that while “there is nothing inherently improper about using a reliable artificial intelligence tool for assistance,” the attorneys “abandoned their responsibilities” by submitting a brief littered with fake judicial opinions, quotes, and citations.

        Judge Castel’s opinion offers a detailed analysis of one such opinion, Varghese v. China Southern Airlines Co., Ltd., 925 F.3d 1339 (11th Cir. 2019), which the sanctioned lawyers produced to the Court. The Varghese decision is presented as being issued by three Eleventh Circuit judges. While according to Judge Castel’s opinion the decision “shows stylistic and reasoning flaws that do not generally appear in decisions issued by the United States Court of Appeals,” and contains a legal analysis that is otherwise “gibberish,” it does in fact reference some real cases. Additionally, when confronted with the question of whether the case is real, the AI platform itself doubles down, explaining that the case “does indeed exist and can be found on legal research databases such as Westlaw and LexisNexis.”

        https://www.natlawreview.com/article/artificially-unintelligent-attorneys-sanctioned-misuse-chatgpt

        Regardless of how ChatGPT made this error, be it “hallucination” or otherwise, I would submit this as exhibit A that AI, at least currently, is not reliable enough to do legal analysis.

        Beholden to corporate interests

        Most of the large, large language models are owned and run by huge corporations: OpenAI’s ChatGPT, Google’s Bard, Microsoft’s Copilot, etc. It is already almost impossible to hold these organizations accountable for their misdeeds, so how can we trust their creations to police the police?

        The naive “at-best” scenario is that AI trained to identify unjustified police shootings sometimes fails to identify them properly. Some go unreported. Or perhaps it reports a “justified” police shooting (I am not here to debate that definition but let’s say they occur) as unjustified, which gums up other investigation efforts.

        The more conspiratorial “at-worst” scenario is that a company with a pro-cop/thin-blue-line sympathizing culture could easily sweep damning reports made by their AI under the rug, which facilitates aggressive police behavior under the guise of “monitoring” it.

        As reported by ProPublica, Patterson PD has a contract with a Chicago-based software company called Truleo to examine audio from bodycam videos to identify problematic behavior by officers. The company charges around $50,000 per year for flagging several types of behaviors, such as when officers use force, interrupt civilians, use profanities, or turn off their cameras while on active duty. The company claims that its data shows such behaviors often lead to violent escalation.

        How does Truleo determine what is “risky” behavior, what is an “interruption” to a civilian? What is a profanity? Does Truleo consider “crap” to be a profanity? More importantly, what if you disagree with Truleo’s definitions? What recourse do you have against a company that has zero duty to protect you? If you file a lawsuit alleging officer misconduct, can Truleo’s AI’s conclusions be admissible as evidence, and can it be used against you?
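
        Those definitions are policy choices, not technical facts. A hypothetical sketch (emphatically not Truleo’s actual rules) of how they might live in a vendor’s config, where a one-line edit changes what counts as flaggable:

        ```python
        import string

        # Everything below is invented for illustration.
        FLAG_POLICY = {
            "profanity": {"damn", "hell"},   # is "crap" in or out? a vendor decides
            "interruption_window_sec": 2.0,  # officer speaks within 2s of a civilian
            "flag_camera_off_on_duty": True,
        }

        def is_profane(utterance: str, policy=FLAG_POLICY) -> bool:
            words = {w.strip(string.punctuation) for w in utterance.lower().split()}
            return bool(words & policy["profanity"])

        print(is_profane("Well, crap."))  # False under this policy; True under another
        ```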

        (1/2)

        • HarkMahlberg@kbin.social · +3 · 11 months ago (edited)

          And shareholders

          He couldn’t have imagined the drama of this week, with four directors on OpenAI’s nonprofit board unexpectedly firing him as CEO and removing the company’s president as chairman of the board. But the bylaws Altman and his cofounders initially established and a restructuring in 2019 that opened the door to billions of dollars in investment from Microsoft gave a handful of people with no financial stake in the company the power to upend the project on a whim.

          https://www.wired.com/story/openai-bizarre-structure-4-people-the-power-to-fire-sam-altman/

          Oh! Turns out I was wrong… “a handful of people with no financial stake in the company” doesn’t sound like shareholders, and yet they could change the direction of the company at will. And just so we’re clear, whether it’s four faceless ghouls or Sam Altman, 1 or 4, the fact that the company is beholden to so few people, who themselves are not democratically elected, nor necessarily law experts, nor necessarily have any history being police officers… their AI is what decides whether or not to hold a police officer accountable for his misdeeds? Hard. Pass.

          Oh, and lest we forget Microsoft is invested in OpenAI, and OpenAI has a quasi-profit-driven structure. Those 4 board directors aren’t even my biggest concern with that arrangement.

          (2/2)