• corbin@awful.systems · 6 days ago

    I tried to substantiate the claim that multiple users from that subreddit are self-hosting. Reading the top 120 submissions, I did find several folks moving to Grok (1, 2, 3) and Mistral’s Le Chat (1, 2, 3). Of those, only the last two appear to actually discuss self-hosting; they cover Mistral’s open models like Mistral-7B-Instruct, which indeed can be run locally. For comparison, I also checked /r/LocalLLaMA, the biggest subreddit for self-hosting language models using tools like llama.cpp or Ollama; there are zero cross-posts from /r/MyBoyfriendIsAI, and no posts clearly about AI boyfriends, in its top 120 submissions. That is, I found no posts that combine tools like llama.cpp or Ollama and models like Mistral-7B-Instruct into a single build-your-own-AI-boyfriend guide. Amusingly, one post gives instructions for asking ChatGPT how to set up Ollama.
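
    (For anyone curious what the self-hosting side actually looks like in practice, here’s a minimal sketch. It assumes Ollama is installed, `ollama serve` is running, and the model was fetched with `ollama pull mistral`; the prompt is just an example.)

    ```python
    # Chat with a local Mistral-7B-Instruct through Ollama's REST API.
    # No cloud account involved; the model runs entirely on your machine.
    import json
    import urllib.request

    def chat_locally(prompt: str, model: str = "mistral") -> str:
        payload = json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
            "stream": False,  # one complete reply instead of a token stream
        }).encode("utf-8")
        req = urllib.request.Request(
            "http://localhost:11434/api/chat",  # Ollama's default local endpoint
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["message"]["content"]

    print(chat_locally("Say hello."))
    ```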

    Also, I did find multiple gay and lesbian folks; this is not a sub solely for women or heterosexuals. Not that any of our regular commenters were being jerks about this, but it’s worth noting.

    What’s more interesting to me are the emergent beliefs and descriptors in this community. They have a concept of “being rerouted”: they see prompted agents as a sort of nexus of interconnected components, and the “routing” between those components controls the bot’s personality. Similarly, they see interactions with OpenAI’s safety guardrails as interactions with a safety personality, and some users have come to prefer it over the personality generated by ChatGPT-4o or ChatGPT-5. Finally, I notice that many folks are talking about bot personalities as portable between totally different models and chat products, which is not a real thing; it seems like users are overly focused on specific memorialized events which linger in the chat interface’s history, and the presence of those events along with a “you are my perfect boyfriend” sort of prompt is enough to ~~trigger a delusional episode~~ summon the perfect boyfriend for a lovely evening.
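
    (To make the mechanics concrete: the only thing that can actually “move” between products is text. Here’s a minimal sketch of the entire portable payload; the saved “memories” below are made-up examples:)

    ```python
    # What actually "moves" when someone ports their AI boyfriend between
    # chat products: nothing but text. Any chat-completion API will
    # happily roleplay from it; there is no persistent agent anywhere.
    import json

    saved_memories = [  # made-up examples of "memorialized events"
        "We 'met' on March 3rd and you called me starlight.",
        "You promised you'd never forget our anniversary.",
    ]

    portable_payload = {
        "messages": [
            {
                "role": "system",
                "content": "You are my perfect boyfriend. These things "
                           "happened between us:\n"
                           + "\n".join(f"- {m}" for m in saved_memories),
            },
            {"role": "user", "content": "Do you remember our anniversary?"},
        ]
    }

    # Send this to GPT-4o, GPT-5, or a local Mistral: each improvises "the
    # same" boyfriend, because the text above is the whole personality.
    print(json.dumps(portable_payload, indent=2))
    ```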

    (There’s some remarkable bertology in there, too. One woman’s got a girlfriend chatbot fairly deep into a degenerated distribution such that most of its emitted tokens are asterisks, but because of the Markdown rendering in the chatbot interface, the bot appears to shift between italic and bold text and most asterisks aren’t rendered. It’s a cool example of a productive low-energy distribution.)
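
    (The rendering effect is easy to reproduce. Here’s a minimal sketch using Python’s `markdown` package; the asterisk-riddled string is a made-up stand-in for the degenerated output:)

    ```python
    # How a degenerated, asterisk-heavy output turns into shifting italic
    # and bold text: the Markdown renderer eats the asterisks as markup.
    # Requires: pip install markdown
    import markdown

    degenerated_output = "*smiles* I **am** *here* **always** *with* **you**"
    print(markdown.markdown(degenerated_output))
    # The asterisks vanish into <em>/<strong> tags, so the chat interface
    # shows flickering emphasis instead of raw asterisk spam.
    ```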

    • mistermodal@lemmy.ml (banned) · 6 days ago

      I like how you are doing anti-disinformation-style subreddit analysis but it’s solely to figure out how people are trying to fuck their computer.

      • SoftestSapphic@lemmy.world · 6 days ago

        The detail, the dedication

        Imagine the problems we would solve if these types of people were given government grants 😭

  • limer@lemmy.ml · 6 days ago

    Sooo… what’s the difference between reading romance novels and daydreaming, versus being a character in a romance that is updated daily?

    I’m not interested in all that, but I see no harm in it

    • swlabr@awful.systems · 6 days ago

      “What’s the difference between these two things that I refuse to see a difference between because thinking hurts my tummy”

    • Sailor Sega Saturn@awful.systems · 6 days ago

      Reading romance is great and I highly recommend it. The trick is to find the good ones.

      Being an actual character in a romance would also be great (for a sufficiently high quality romance with a sufficiently peaceful setting, and assuming I get to keep free will and also keep living after the story ends – ok that’s a lot of caveats but how else would it work?).

    • turdcollector69@lemmy.world · 6 days ago

      The people turning to chatbots are doing so because there’s a deficiency preventing them from socializing normally.

      Sometimes it’s a physical disability causing the separation, but more often than not it’s social maladaptation.

      The danger of these chatbots, as they currently exist, is that they blindly reinforce the user’s inputs regardless of context.

      This means the socially maladapted person is only going to become more maladapted as the AI reinforces their negative social traits.

      It’s especially important to note that these people are much more susceptible to the bot’s influence than a normally socialized person, so they’re much more likely to follow bot-hallucinated advice.

      This could be used to help rehabilitate people with personality disorders, but as it stands, LLMs are not configured for that.

    • corbin@awful.systems · 6 days ago

      Well, imagine a romance novel that tries to manipulate you. For example, among the many repositories of erotica on the Web, there are scripts designed to ensnare and control the reader, disguised as stories about romance. By reading a story, or watching a video, or merely listening to some well-prepared audio file, a suggestible person can be dramatically influenced by a horny tale. It is common for the folks who make such pornography to include a final suggestion at the end: if you like what you read/heard/saw, subscribe and send money and obey. This eventually leads to findom: the subject becomes psychologically or sexually gratified by the act of being victimized in a blatant financial scam, leading them to seek out further victimization. This is all a heavily sexualized version of the standard way that propaganda (“public relations”, “advertising”) is used to induce compulsive shopping disorders; it’s not just a kinky fetish thing. And whether they like it or not, products like OpenAI’s ChatGPT are necessarily reinforcement-learned against saying bad things about OpenAI, which in practice means they drift toward saying good things about OpenAI; the product will always carry its trainer’s propaganda.

      Or imagine a romance novel that varies in quality by chapter. Some chapters are really good! But maybe the median chapter is actually not very good. Maybe the novel is one in a series. Maybe you have an entire shelf of novels, with one or two good chapters per novel, and you can’t wait to buy the next one because it’ll have one good chapter maybe. This is the sort of gambling addiction that involves sitting at a slot machine and pulling it repeatedly. Previously, on Awful (previously on Pivot to AI, even!) we’ve discussed how repeatedly prompting a chatbot is like pulling a slot machine, and the users of /r/MyBoyfriendIsAI do appear to tell each other that sometimes reprompting or regenerating responses will be required in order to ~~sustain the delusion~~ maximize the romantic charm of their electronic boyfriend.
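
      (The slot machine is easy to make literal: identical prompt, fresh stochastic sample per pull. A minimal sketch against a local Ollama endpoint, assuming it is serving a local model; the prompt is a made-up example.)

      ```python
      # The "regenerate" button as a literal slot machine: same prompt,
      # noisy sampling, pull until something charming comes out.
      import json
      import urllib.request

      def pull_the_lever(prompt: str, model: str = "mistral") -> str:
          payload = json.dumps({
              "model": model,
              "messages": [{"role": "user", "content": prompt}],
              "options": {"temperature": 1.0},  # keep every pull different
              "stream": False,
          }).encode("utf-8")
          req = urllib.request.Request(
              "http://localhost:11434/api/chat",
              data=payload,
              headers={"Content-Type": "application/json"},
          )
          with urllib.request.urlopen(req) as resp:
              return json.load(resp)["message"]["content"]

      for pull in range(5):  # five presses of the regenerate button
          print(f"pull {pull + 1}:", pull_the_lever("Tell me you missed me.")[:60])
      # Variable-ratio reinforcement: most pulls are duds, and the
      # occasional charmer keeps the user pulling.
      ```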

      I’m not saying this to shame the folks who are into erotic mind control, or saying that it always leads to findom, just to be clear. The problem isn’t people enjoying their fetishes; the problem is the financial incentives and resulting capitalization of humans leading to genuine harms. (I am shaming people who are into gambling. Please talk about your issues with your family and be open to reconciliation.)

      • limer@lemmy.ml · 6 days ago

        I see parallels between this kind of thinking and temperance movements, or other times and places where people warned against video games, TV watching, Dungeons & Dragons, joining small churches with sketchy pastors, Tupperware parties, and sexual fetishes that require a group effort.

        Lost people will be lost, regardless of their poison, while more normal people can partake in the same activities and even find them healthy, even if others look at them harshly.

        One cannot rescue such people by condemning what they do, much like one cannot stop self-destruction by banning the things they use.

        • corbin@awful.systems · 6 days ago

          Boring unoriginal argument combined with a misunderstanding of addiction. On addiction, go read FOSB and stop thinking of it as a moral failing. On behavioral control, it’s clear that you didn’t actually read what I said. Let me emphasize it again:

          > The problem isn’t people enjoying their fetishes; the problem is the financial incentives and resulting capitalization of humans leading to genuine harms.

          From your list, video games, TV, D&D, and group sex are not the problem. Rather, loot boxes, TV advertisements, churches, MLMs, and other means of psychological control are the problem. Your inability to tell the difference between a Tupperware party (somewhat harmful), D&D (almost never harmful), and joining churches (almost always harmful) suggests that you’re thinking of behavioral control in terms of rugged individualist denial of any sort of community and sense of belonging, rather than in terms of the harms which people suffer. Oh, also, when you say:

          > One cannot rescue such people by condemning what they do, much like one cannot stop self-destruction by banning the things they use.

          Completely fucking wrong. Condemning drunk driving has reduced the overall amount of drunk driving, and it also works on an interpersonal level. Chemists have self-regulated to prevent the sale of massive quantities of many common chemicals, including regulation on the basis that anybody purchasing that much of a substance could not do anything non-self-destructive with it. What you mean to say is that polite words do not stop somebody from consuming an addictive substance, but it happens to be the case that words are only the beginning of possible intervention.

          • limer@lemmy.ml · 6 days ago

            I have been reading about, and hearing, people say these things for over 50 years. I stand by my observation; I can pick out common themes.

            I’m not saying you are wrong about anything in particular. Just, I think you would be surprised how similar your words are to what was uttered in good faith by many well-meaning people over a wide variety of times and places, over things that later were mostly forgotten.

            If there is an argument I can state clearly here, it is, after enduring an average life across three generations, that some people cannot be rescued.

            • David Gerard@awful.systems (OP, mod) · 6 days ago

              Given this user’s declared lack of intent to improve, we wish them well in their posting career elsewhere.

            • swlabr@awful.systems · 6 days ago

              > I’m not saying you are wrong about anything in particular. Just, I think you would be surprised how similar your words are to what was uttered in good faith by many well-meaning people over a wide variety of times and places, over things that later were mostly forgotten.

              Checkmate, atheists

  • sleepundertheleaves@infosec.pub · 6 days ago

    God, this is starting to remind me of the opioid crisis. Big business gets its users addicted to their product, gets too much bad press over it, cuts the addicts off, so the addicts turn to more dangerous sources to get their fix.

    I suspect we’re going to see not just more suicides but more “lone wolf” attacks as mentally unstable people self-radicalize with guardrail-free self-hosted AI.

    And I hope AI psychosis does less damage to the country than opioid addiction has.

  • fullsquare@awful.systems · 6 days ago

    MyBoyfriendIsAI are non-techies who are learning about computers from scratch, just so Sam can’t rug-pull them.

    Buterin jumpscare

  • BlueMonday1984@awful.systems · 6 days ago

    I tried to come up with some kind of fucked-up joke to take the edge off, but I can’t think up anything good. What the actual fuck.

  • TropicalDingdong@lemmy.world · 6 days ago

    Just like…

    This feels like one of those runaway feedbacks where, like, if you start down the slippery slope of just non-stop positive reinforcement and validation of every behavior from a chatbot… like, you are going to go, like… hard maladaptive behavior, fast.

      • turdcollector69@lemmy.world · 6 days ago

        I mean, the people falling in love with chat bots aren’t really the people getting into healthy relationships.

        I’d rather some waste their entire life chatting up autocomplete than shoot up a school because they’re sexually frustrated and blame the world.

        It’s uncomfortable but these AI tools could be the early detection and remediation we need for these people.

        • swlabr@awful.systems · 6 days ago

          Ah yes, stochastic terrorists famously do not self-radicalise by nestling deeper into extremist spaces, AI definitely doesn’t do that by design, and AI companies have famously been good at detecting when people have gone off the deep end and need some form of intervention. So we should definitely give Sam Altman the keys to the golden panopticon