Will Manidis is the CEO of AI-driven healthcare startup ScienceIO
But I mean, AI is the asshole, so maybe that’s why they went to the front page?
Lemmy is not safe either.
There isn’t as much incentive. No advertising. Upvote counters behave weirdly in the fediverse (from what I can see).
There are no virtual points to earn on Lemmy. So hopefully it will resist the enshittification for a while.
I still don’t see why people care about Reddit karma. It’s just a number?
Account age and karma make an account look more legit, and it’s thus more useful for spreading misinformation and/or guerrilla marketing.
Same reason why people play cookie clicker, watch the useless number go up.
Also, some subs are downright hostile to people with low karma.
Some subreddits require a minimum karma score for posting. And you’re less likely to get shadowbanned the more karma you have.
Ohhh right. I remember subs had that bullshit. I didn’t know about the shadowban thing, though.
I politely disagree.
Most people who have worked in customer service would believe every word because they have seen the absurdity of real people.
Shiri’s Scissor was supposed to be a cautionary tale…
In the age of A/B testing and automated engagement, I have to wonder who is really getting played? The people reading the synthetically generated bullshit or the people who think they’re “getting engagement” on a website full of bots and other automated forms of engagement cultivation.
How much of the content creator experience is itself gamed by the website to trick creators into thinking they’re more talented, popular, and well-received than a human audience would allow and should therefore keep churning out new shit for consumption?
It’s ultimately about ad money. They’ve never cared whether it’s humans or bots, either; advertisers keep paying out either way. This goes back to long before the LLM era. It’s bizarre.
It’s pretty much a case of POSIWID: the purpose of a system is what it does. The system is meant to be genuine human engagement. What the system does is artificial at every step. Turns out its purpose is to fabricate things for bots to engage with. And this is all propped up by people who for some reason pay to keep the system running.
(Already said this before, but let me reiterate:)
Typical AITA post:
Title: AITAH for calling out my [Friend/Husband/Wife/Mom/Dad/Son/Daughter/X-In-Law] after [He/She] did [Undeniably something outrageous that anyone with an IQ above 80 should know it’s unacceptable to do]?
Body of post:
[5-15 paragraphs of infodumping that no sane person would read]
I told my friend this and they said I’m an asshole. AITAH?
Comments:
Comment 1: NTA, you are absolutely right, you should [Divorce/Go No-Contact/Disown/Unfriend the person] IMMEDIATELY. Don’t walk away, RUNNN!!!
Comment 2: NTA, call the police! That’s totally unacceptable!
And sometimes you get someone calling out OP… Comment 3: Wait, didn’t OP also claim to be [Totally different age and gender and race] a few months ago? Here’s the post: [Link]
🙄 C’mon, who even thinks any of this is real…
Typical AITA post:
“I want to do what I want with my own life. AITA?”
Everybody Sucks Here
Man, sometimes when I finish grabbing something I needed from Reddit, I hit the frontpage (always logged out) just out of morbid curiosity.
Every single time, that r/AmIOverreacting sub is there with the most obvious “no, you’re not” situation ever. I never once saw that sub show up before the exodus. AI or not, I refuse to believe any frontpage posts from that sub are anything other than made-up bullshit.
If it’s well-written enough to be entertaining, it doesn’t even matter whether it’s real or not. Something like it almost certainly happened to someone at some point.
Needs to feature both a wedding and a pregnancy and you’ve nailed it
insert plot from an episode of Friends
AITAH?
I asked my friend to help move a couch into my apartment but he got it stuck in the stairwell. AITAH?
I feel like we’re collectively writing the custom instructions for this bot.
Way too many…
I was born before the Internet, so the Internet always gets lumped into the “entertainment” part of my brain. A lot of people who have grown up knowing only the Internet think the Internet is much more “real”. It’s a problem.
I’ve come up with a system to categorize reality in different ways:
Category 1: Thoughts inside my brain, formed by logic
Category 2: Things I can directly observe via vision, hearing, or other direct sensory input
Category 3: Other people’s words, stories, and anecdotes in face-to-face conversations IRL
Category 4: Accredited News Media, Television, Newspaper, Radio (Including Amateur Radio Conversations), Telegrams, etc…
Category 5: The General Internet
The higher the category number, the more distant that information is, and therefore the more suspicious of it I am.
I mean, if a user on Reddit (or any internet forum or social media, for that matter) told me X is a valid treatment for Y disease without any real evidence, I’m gonna laugh in their face (well, not their face, since it’s a forum, but you get the idea).
I would recommend switching categories one and two. Sometimes our thoughts are fucked.
So here’s the thing:
I sometimes thought I saw a ghost moving out of the dark corner of my eye.
I didn’t see a ghost.
But then later I walked through the same place again and saw the same vision. Since I already held the belief that ghosts don’t exist, I investigated. It turned out to be a lamp (that was off) casting a shadow from another light source. When I happened to walk through the area, the shadow moved, and combined with my head-turning motion, it made it appear like a ghost was there. But it was just a difference in lighting, a shadow. Not a ghost. I bet a lot of “ghosts” are just people interpreting lighting wrong and thinking it’s a ghost, not an actual ghost.
Having your thoughts/logic prioritized is important for finding the truth, instead of just believing the first thing you interpret, like a vision of a “ghost”.
You know what, that’s entirely fair.
Vision is processed in our brains
deleted by creator
I genuinely miss the 90s. I mean, yeah, early forms of the internet and computers existed, but not everyone had a camera, and not everyone got absolutely bukkaked with disinformation. Not that I think everything is bad about the tech in and of itself, but how we use it nowadays is just so exhausting.
ESH
Look at that, the detection heuristics all laid out nice and neatly. The only issue is that Reddit doesn’t want to detect bots because they are likely using them. Reddit at one point was using a form of bot protection but it wasn’t for posts; instead, it was for ad fraud.
Oh boy, identity mechanics to curb out the last vestiges of privacy.
let me scan your eyeballs. it’s the only way
Also doesn’t fix the problem at all, I can still just use AI to post to my main account
They’re pretty much declaring war on VPNs, too.
Yep. More than half the time I can’t access Reddit through Proton VPN.
I can. But I have an account.
Meh. Thanks but fuck Reddit anyway.
I wonder where people in the future will get their information from. What trustworthy sources of information are there? If the internet is overrun with bots, then you can’t really trust anything you read there, as it could all be propaganda. What else to do, though, to get your news?
That’s the killer app right there: the complete inability for the common person to distinguish between true and false. That’s what they’re going for.
you could try to cook up some kind of trust chain without totally abandoning privacy.
Have government-certified agencies mint a master key tied to your ID. You only get one, with a trust rating tied to it.
With that master key you can generate an infinite number of sub-IDs that don’t identify you but show your (fuzzed) trust rating.
Have a cross-network reporting system that can lower that rating for abuses like botting.
idk, I’m just spitballing
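To make the spitballing concrete: a minimal sketch of the "one master key, many unlinkable sub-IDs" part, assuming plain HMAC derivation. Everything here (the key issuance, the per-site derivation, the 16-character ID length) is a hypothetical illustration, not any real scheme; a serious design would use blind signatures or anonymous credentials so even the issuing agency can’t link sub-IDs.

```python
import hmac
import hashlib
import secrets

def make_master_key() -> bytes:
    """Hypothetical: issued once per person by the certifying agency."""
    return secrets.token_bytes(32)

def derive_sub_id(master_key: bytes, site: str) -> str:
    """Derive a stable, site-specific pseudonym from the master key.

    Without the master key, sub-IDs for different sites cannot be
    linked to each other (HMAC-SHA256 is a pseudorandom function).
    """
    return hmac.new(master_key, site.encode(), hashlib.sha256).hexdigest()[:16]

key = make_master_key()
lemmy_id = derive_sub_id(key, "lemmy.world")
reddit_id = derive_sub_id(key, "reddit.com")

assert lemmy_id != reddit_id                           # unlinkable across sites
assert lemmy_id == derive_sub_id(key, "lemmy.world")   # stable per site
```

The fuzzed trust rating would be attached server-side by the agency, signed per sub-ID, so a site sees "trust ≈ high" without learning which other sub-IDs share the same master key.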
What’s stopping me from using my key to post ai slop?
I dunno, part of me is ok with it. It’s clear to me how bad things are going to get. So having certain platforms or spaces with some level of public identity validation seems like it might be ok…
Well, it’s a great method to find people to target for political speech.
Especially when it’s about gathering real information. When everything you read is written by an anonymous author, you’d have no way to know whether it’s true or false, unless it’s a paper on theoretical maths, of course.
Yeah, the real solution would probably be to remove the incentive for someone to do this.
It would probably be far less likely for someone to do that on Lemmy, as there is no karma and you don’t get paid for upvotes or anything. (There are still incentives, like building credibility, celebrity accounts, maybe influencing public opinion, the self-pleasure of seeing upvotes on “your” posts/comments, etc., but they aren’t as potent as directly monetary incentives.)
Dead internet theory
There are at least 37 of us. Unless a bot posted this…
At least I know to blame Claude.
Claude classique
It’s stupidly easy to make up stuff on AITA and get upvotes/comments. I made one up just for fun and was surprised at how popular it got. Well, maybe it’s not so surprising now, but it was back when I did it.
If you know the audience and what gets them upset, you’ve got easy karma farming.
It’s like reality TV and soap operas in text form. You can somewhat easily spot the AI posts, though, which are plentiful now. They all tend to have the same “professional” writing style and a high tendency to add mid-sentence “quotes” and em dashes (—), which you need a numpad combo to actually type out manually; a casual write-up would just use the - symbol, if anything. LLMs also make a lot of logic errors that may pop up. An example from one of the currently highly upvoted posts:
He pulled out what looked like a box from a special jewelry store. My heart raced with excitement as I assumed it was a lovely bracelet or a special memento for our wedding day. But when he opened the box, I was absolutely stunned. Inside was a key to a house he supposedly bought for us. I was taken aback because I had no idea he was even looking for real estate. My first reaction was one of shock and confusion, as I thought it was a huge decision that we should have discussed together.
As I processed the moment, I realized the house wasn’t just any house—it was a fixer-upper on the outskirts of town. Now, I get that it can be a great investment, but this particular house needed a ton of work. I’m talking major renovations and repairs, and I honestly had no desire to live there.
Aside from the weird writing (Oh jolly! Expensive gifts! How exciting!), this lady somehow realized and identified this house, its location, and its state just by looking at some random key in that moment. Bonus frustration if you read through the comments, which eat all of this shit up, assuming they aren’t also bots.
Interesting observation about the em dash. I never thought about it that hard, but Reddit’s text editor (as well as Lemmy’s, at least on the default UI) automatically concatenates a double dash into an en dash, rather than an em dash.
I use em dashes (well, en dashes, as above) in my writing all the time, because I am a nerd.
For anyone who cares, an en dash is the same width as an N in typical typography, and looks like this: –
An em dash is, to no one’s surprise, the same width as an M. It looks like this: —
(For what it’s worth, Lemmy does not concatenate a triple dash into an em dash. It turns it into a horizontal rule instead.)
Huh, MS Word does that as well.
That’s probably because the posts are stored as plain text, and any markdown within them is just rendered at display time. This is presumably also how you can view any post or comment’s original source. So, here you go:
Double –
En – (alt 0150)
Em — (alt 0151)
And for good measure, a triple:
Actually, I notice if you include a triple that’s not on a line by itself it does render it as an em dash rather than en, like so: —
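The substitution behavior observed in this thread (double dash to en dash, inline triple to em dash, a triple on its own line to a horizontal rule) could be sketched like this. To be clear, this is a hypothetical illustration of the observed behavior, not Lemmy’s or Reddit’s actual renderer, and the `<hr>` output is just a stand-in for the horizontal rule.

```python
import re

def smart_dashes(line: str) -> str:
    """Sketch of the dash substitutions described above (assumed, not actual renderer code)."""
    if line.strip() == "---":
        return "<hr>"  # a triple dash alone on a line becomes a horizontal rule
    # inline triple dash -> em dash (—), done first so "--" doesn't eat it
    line = re.sub(r"(?<!-)---(?!-)", "\u2014", line)
    # double dash -> en dash (–)
    line = re.sub(r"(?<!-)--(?!-)", "\u2013", line)
    return line

print(smart_dashes("wait -- what---now?"))  # wait – what—now?
```

The lookarounds keep longer runs of dashes (like ASCII-art rules) from being partially converted, which matches how most markdown editors behave.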
You’re right, it means you don’t have to save two versions or somehow convert it back into a source format instead. The triple renders as a line below on mbin. I don’t remember what they’re called.
That’s a horizontal rule.
Refreshingly, the frontend just converts it to a plain old HTML <hr> tag and doesn’t try to reinvent the wheel, either.
I use the poor man’s emdash (two hyphens in a row) here and there as well. I guess I never noticed Reddit auto-formats them. I have been accused of being an AI on a few occasions. I guess this is a contributing factor to why that is.
Funny how Reddit technically formats it into the wrong glyph, though. Not like anyone but the most insufferable of pedants would notice and care, of course. I find it merely mildly amusing.
Now that you mention it, I might be the only non-AI using em dashes on the internet (I have a program that joins two hyphens into an em dash).
Apparently Lemmy, and Reddit (I can’t test either one), actually render it that way too. Not sure how many people know about that though.
Two weeks ago, someone on one of those story subs, I think it was r/AmIOverreacting, was milking karma by making updates. They made 5 posts about the whole thing and even started selling merch to profit in real life, until they took the last post down.
Is Reddit still feeding Google’s LLM, or was it just a one-time thing? Meaning, will the newest LLM-generated posts feed LLMs that generate more posts?
The truly valuable data is the stuff that was created prior to LLMs; anything after that is tainted by slop. Any verifiable human data would be worth more, which is why they are simultaneously trying to erode any and all privacy.
I’m not sure about that. It implies that only humans are able to produce high-quality output. But that seems wrong to me.
- First of all, not everything that humans produce has high quality; rather, the opposite.
- Second, with the development of AI, I think it will be entirely possible for AI to generate good-quality output in the future.
Microsoft’s Phi-4 is primarily trained on synthetic data (generated by other AIs). It’s not a future thing; it’s been happening for years.
These days the LLMs feed the LLMs, so you end up with models trained on models unless you exclude all public data from the last decade. You have to assume all user-generated public data is tainted when used for training.
Maybe they’re using the subreddit to try to train morality into the model?
Why not? r/AmITheAsshole is about entertainment, not truth. It would be an indictment of AI if it couldn’t replicate a short, funny story.
If your statement were true, then it should be disclosed in a visible manner, which it isn’t.