• self@awful.systems · 11 days ago

    if you post a thread about intolerable dickheads, the most intolerable dickheads on Lemmy will post some shit like “intolerable dickhead checking in, how fucking dare you”

    it’s like catnip for the Reddit-brained, and by catnip I mean meth

    • thespcicifcocean@lemmy.world · 11 days ago

      I haven’t met that many dickheads here though, maybe one or two, but not as many as on Reddit.

      Unless I’m the asshole.

        • blakestacey@awful.systems · 10 days ago

          Sometimes they file reports against regulars, accusing them of “ableism” for being anti-slop-machine. That’s also entertaining.

          • funbreaker@piefed.social · 9 days ago

            Some people will go as far as to call you racist and genocidal if you express dislike for slop machines.

          • 𝕸𝖔𝖘𝖘@infosec.pub · 10 days ago

            There’s a term for being anti-ai slop? Hahahaha that is hilarious!

            Edit: That term means discrimination against those with disabilities. I agree that that’s a dick move. Are they claiming that by hating AI slop, we are against those with disabilities? I feel like I missed something. They can’t mean that; that’s idiotic. Like… Idiocracy level.

            • David Gerard@awful.systems (OP, mod) · 8 days ago

              AI SHILL: if you hate AI, you hate the disabled!
              DISABLED PERSON: hi, help me in some practical way
              AI SHILL: fuck off

  • Dagamant@lemmy.world · 11 days ago

    I love technology and seeing what I can and can’t get to work. I have a self-hosted image generator and LLM (Stable Diffusion and Ollama). It was fun for a little while, generating images of whatever popped into mind and using the LLM for code completion, grammar checks, and rewording things. I even started working on something like that AI streamer, Neuro. It’s all garbage though. The whole stack has been relegated to sending welcome messages on a Discord server. It’s a neat toy, but anything past that is just adding a whole layer of inaccuracy to whatever you’re using it for, and way too many people don’t realize that.

  • Psythik@lemmy.world · 11 days ago

    As an aging, burnt out, unpleasant asshat who dabbles in LLMs, I feel called out.

    I mean it beats adding site:reddit.com to every web search to get a proper answer over some SEO-optimized bullshit. So long as you keep your bullshit detector well-maintained, check the sources—and actually use an AI that cites its sources—I see nothing wrong with them. The tech is still in its infancy; it’ll improve with time.

  • Elvith Ma'for@feddit.org · 11 days ago

    science shows as true what you thought was only 99% true

    Still a way higher accuracy than LLM output, so…

  • BlameTheAntifa@lemmy.world · 11 days ago

    The only thing that drives me to AI is the extreme uselessness of modern search engines. This is not an endorsement of hallucination engines as much as it is a condemnation of late stage enshittification of search engines and the internet in general. I miss the days when I could google something and actually find what I was looking for.

      • UnderpantsWeevil@lemmy.world · 10 days ago

        … the search engines became crap because of ai

        I mean, search has always been built on some kind of LLM. That’s how you convert a query into a list of page-results.

        We’ve just started trying to wrap the outputs into a natural language response pattern in order to make them feel more definitive and more engaging. The “AI” part of search is mostly window-dressing.

        Plus, ai just lies.

        It has inaccurate heuristics and often tries to back-fill what it can’t find with an approximation in order to maintain user engagement.

        Idk if I’d even call it lying, so much as bullshitting.

        • self@awful.systems · 9 days ago

          I mean, search has always been built on some kind of LLM. That’s how you convert a query into a list of page-results.

          no it fucking hasn’t. the stemming and page ranking algorithms used in traditional search have absolutely nothing to do with LLMs.

          shit, neither stemming nor PageRank as originally defined even has a machine learning component. here’s postgres’ full-text search suite, which literally converts a textual query into a list of results (sans page ranking, which is out of scope for a database) in a manner suitable for a production search engine, utterly without any machine learning or other stochastic crap.
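For illustration, a minimal sketch of that kind of deterministic pipeline (tokenize, stem, build an inverted index, rank by term frequency), with no machine learning anywhere. The suffix-stripping stemmer and the scoring rule here are toy stand-ins, not what Postgres or PageRank actually implement:

```python
from collections import defaultdict

def stem(word):
    # Toy rule-based suffix stripper, a stand-in for a real Snowball/Porter
    # stemmer; purely deterministic, no learned parameters.
    for suffix in ("ing", "ers", "er", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def tokenize(text):
    return [stem(w) for w in text.lower().split() if w.isalpha()]

def build_index(docs):
    # Inverted index: stemmed term -> {doc_id: term frequency}
    index = defaultdict(dict)
    for doc_id, text in docs.items():
        for term in tokenize(text):
            index[term][doc_id] = index[term].get(doc_id, 0) + 1
    return index

def search(index, query):
    # Rank by summed term frequency; fully deterministic.
    scores = defaultdict(int)
    for term in tokenize(query):
        for doc_id, tf in index.get(term, {}).items():
            scores[doc_id] += tf
    return sorted(scores, key=scores.get, reverse=True)

docs = {
    1: "search engines rank pages",
    2: "stemming maps searching and searches to search",
    3: "large language models are unrelated",
}
index = build_index(docs)
print(search(index, "searching"))  # → [2, 1]
```

Stemming is what lets the query "searching" match documents containing "search" and "searches"; nothing stochastic is involved at any step.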

  • FrogmanL@lemmy.world · 11 days ago

    Everyone is allowed their take. I’ll be the first to admit that AI is overhyped and environmentally dangerous, but to call all people that use it asshats seems, at least, reductive.

    AI is tool like anything else. If I need a hammer, I get a hammer. It does some things well, some poorly. It doesn’t (and shouldn’t) replace the person. The problem begins when I think the hammer is the solution to all problems. Or, to really stretch the metaphor, if I think the hammer is all powerful and we should either fear or worship it.

  • hendrik@palaver.p3x.de · 11 days ago

    Interesting. Why would more manipulative people and ones with more focus on self-interest use AI more than other people? Because they’re more likely to take shortcuts while doing stuff? Or is there any other direct benefit for them?

    • howrar@lemmy.ca · 10 days ago

      Excerpt from the paper (page 6)

      AI tools may appeal to these individuals because they offer a form of cognitive leverage, enhancing their ability to produce sophisticated output, craft persuasive communications, or solve complex problems more efficiently than others. Indeed, prior work has observed high Machiavellian individuals strategically manage information disclosure in social relationships, where Machiavellian individuals do not reduce the amount of information they share to others, but instead they control its honesty and accuracy. It is possible that high Machiavellian individuals feel less inhibited about using AI because they are more comfortable with manipulation (e.g., having a tool to communicate for them, adjusting their words) and instrumental approaches to achieving their goals.

      Similarly, the association between narcissism and AI use in the student sample may reflect these individuals’ motivation to maintain self-images of competence and distinctiveness. Narcissistic individuals may be drawn to cutting-edge technologies that signal their innovativeness or technical sophistication, viewing AI adoption as a way to demonstrate their superiority in academic, professional, and social contexts. AI tools may serve as a means of cognitive enhancement that supports their grandiose self-concept, allowing them to produce work or engage in activities that reinforce their perceived exceptionalism.

    • V0ldek@awful.systems · 11 days ago

      My completely PIDOOMA take is that if you’re self-interested and manipulative you’re already treating most if not all people as lesser, less savvy, less smart than you. So just the fact that you can half-ass shit with a bot and declare yourself an expert in everything that doesn’t need such things like “collaboration with other people”, ew, is like a shot of cocaine into your eyeball.

      LLMs’ tone is also very bootlicking, so if you’re already narcissistic and you get a tool that tells you yes, you are just the smartest boi, well… To quote a classic, it must be like being repeatedly kicked in the head by a horse.

      • David Gerard@awful.systems (OP, mod) · 10 days ago

        increasing number of social media responses which come across as though the posters think they’re giving clarifying orders to a chatbot

    • owenfromcanada@lemmy.ca · 11 days ago

      I imagine there are a few reasons. An LLM is a narcissist’s dream–it will remain focused on you and tell you what you want to hear (and is always willing to be corrected).

      In addition, LLMs are easy to manipulate, and sort of mimic a person enough to give you a sense of power or authority. So if you’re the type of person who gets something from that, there’s likely a draw for that kind of person.

      Those are just guesses, though. I don’t use LLMs myself, so I don’t really know.

      • hendrik@palaver.p3x.de · 11 days ago

        Thanks, that sounds reasonable. Especially the focus/attention.

        Maybe it’s the same as with other games or computer games… Some people also really get something out of fantasy achievements and when they win and feel like the main character…

    • sleepundertheleaves@infosec.pub · edited · 11 days ago

      I’m just spitballing here, but I suspect it’s for the same reason people with “dark triad” traits (narcissism, Machiavellianism, and psychopathy) are more successful in business and politics than the average person.

      Dark triad types give quick, confident, and persuasive answers, and aggressively challenge anyone who disagrees with them. But they don’t actually care if the answers are true as long as they can win the debate or argument they’re having. This lets them be totally confident and persuasive in any situation - whether they know the answer or not - and so demonstrate more “leadership skills” than people who are less willing to bullshit.

      Same with policies - a dark triad type is going to confidently and aggressively support policies that make him look good or benefit him personally in other ways. He doesn’t actually care whether they are good policies or bad policies, whether they’ll be good for the organization or the people or not - the dark triad type will lie, cheat, or steal to make sure his policies look successful, get himself promoted upwards, and blame his successor for the long term failure of the policy.

      I’m kind of not surprised people who care more about persuasiveness than honesty, and more about results than processes, would find AI tools appealing.

  • 𝕸𝖔𝖘𝖘@infosec.pub · 10 days ago

    It’s almost like telling asshats that they’re right all the time is, somehow, not good for their social-emotional development. Who would’a thunk it!

  • howrar@lemmy.ca · 10 days ago

    This got me curious about where I fall in the distribution. Checked my browser history between the start of October and today and I got 1.4%. I would’ve thought my usage would fall in the high end, but apparently, their threshold for high is over 4%. Interestingly, there’s at least one person who was above 15%.


    The paper gives an interesting definition of Machiavellianism

    Machiavellians are characterized by strategic thinking and a desire for control and influence over their environment

    This sounds like it should describe any human being that has (or is trying to get) their shit together, no? The Oxford Languages definition (and how they use it in the rest of the paper) is more in line with how I would use the word:

    cunning, scheming, and unscrupulous behavior or character.


    Also, you can’t be surprised that people come in to defend themselves when you call them unpleasant asshats. Especially so when the paper doesn’t say that. They compared high usage vs low usage individuals, not users vs non-users.
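For anyone curious how a browser-history tally like the one above might work, here is a rough sketch. The domain list is hypothetical and the flat URL list is an assumption (real browsers keep history in a SQLite database, and the paper's actual methodology may differ):

```python
from urllib.parse import urlparse

# Hypothetical set of AI chatbot domains to count; a real tally
# would need whatever domain list the study actually used.
AI_DOMAINS = {"chatgpt.com", "chat.openai.com", "gemini.google.com", "claude.ai"}

def ai_usage_share(history_urls):
    # Fraction of history entries whose hostname is an AI domain.
    if not history_urls:
        return 0.0
    hits = sum(1 for url in history_urls if urlparse(url).hostname in AI_DOMAINS)
    return hits / len(history_urls)

history = ["https://chatgpt.com/c/abc", "https://example.com/a"] + \
          ["https://news.site/x"] * 8
print(f"{ai_usage_share(history):.1%}")  # → 10.0%
```

Counting raw visits this way obviously conflates one long chat session with many page loads elsewhere, which is one reason such percentages are hard to compare between people.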

  • Wronnay@lemmy.ml · 10 days ago

    Until this article I wasn’t sure how big the AI bubble is…

    But seeing that more than 99% of the public don’t really use AI and then comparing that to tech stocks makes me sure the bubble is just like in the 2000s

    Of course some stuff will stick just like in the 2000s but with these high valuations and 99% of people not using it, it will take a long time till break-even