• 0 Posts
  • 61 Comments
Joined 2 years ago
Cake day: June 28th, 2023

  • I’m not sure how far this observation generalizes, but I’ve also wondered how much the people who overestimate the usefulness of AI image generators underestimate how easy it is to license decent artwork from real creatives, with just a few clicks and at low cost. For example, if I’m looking for an illustration for a PowerPoint presentation, I’ll usually find something suitable fairly quickly in Canva’s library. That’s why I don’t understand why so many people believe they absolutely need AI-generated slop for this. That said, Canva is now participating in the AI hype as well. I guess they have to keep their investors happy.




  • there’s no use case for LLMs or generative AI that stands up to even mild scrutiny, but the people funneling money into this crap don’t seem to have noticed yet

    This is why I dislike the narrative that we should resist “AI” with all our might because supposedly, if our employers got us to train the chatbots, they would become super smart and would replace us in no time. In my view, this is simply not true, as the past few years have shown. Spreading this narrative (no matter how well-intentioned) only empowers the AI grifters and reinforces employers’ belief that they could easily lay people off and replace them with slop generators because supposedly the tech can do it all.

    There are other very good reasons to fight the slop generators, but this is not one of them, in my view.


  • I’m old enough to remember the dotcom bubble. Even at my young age back then, I found it easy to spot many of its “bubbly” aspects. Yet, as a nerd, I was very impressed by the internet itself and showed a bit of youthful obsession with it (while many of my peers of the same age were, to be honest, still hesitant to embrace it).

    Now with LLMs/generative AI, I simply find myself unable to identify any potential that is even remotely similar to the internet. Of course, it is easy to argue that today, I am simply too old to embrace new tech or whatever. What strikes me, however, is that some of the worst LLM hypemongers I know are people my age (or older) who missed out on the early internet boom and somehow never seemed to be able to get over that fact.



  • In my experience, copy that “sells” must evoke the impression of being unique in some way, while also conforming to certain established standards. After all, if the copy reads like something you could read anywhere else, how could the product be any different from all the competing products? Why should you pay any attention to it at all?

    Balancing conformity with uniqueness and originality is a demanding act that many people unfamiliar with copywriting may not understand at all. I think LLMs can, to some extent, create the impression of conformity that clients expect from copywriters, but they tend to fail at the “uniqueness” part.



  • I disagree with the last part of this post, though (the idea that lawyers, doctors, firefighters etc. are inevitably going to be replaced with AI as well, whether we want it or not). I think this is precisely what AI grifters would want us to believe, because if they could somehow force everyone in every part of society to pay for their slop, this would keep stock prices up. So far, however, AI has mainly been shoved into our lives by a few oligopolistic tech companies (and some VC-funded startups), and I think the main purpose here is to create the illusion (!) of inevitability because that is what investors want.



  • reliably determining whether content (or an issue) is AI generated remains a challenge, as even human-written text can appear ‘AI-like.’

    True (even if this answer sounds like something a chatbot would generate). I have come across a few human slop generators/bots myself. However, making up entire titles of books or papers appears to be a specialty of AI. Humans would not normally go to that trouble, I believe; they would either steal text directly from their sources (without proper attribution) or “quote” existing works without having read them.




  • I’ve noticed a trend where people assume other fields have problems LLMs can handle, but the actually competent experts in that field know why LLMs fail at key pieces.

    I am fully aware of this. However, in my experience, it is sometimes the IT departments themselves that push these chatbots onto others most aggressively. I don’t know whether they found the tools useful for their own purposes (and therefore assume this must apply to everyone else) or whether they are simply pushing LLMs because that is what management expects of them.