I don’t think Lemmy is well prepared to handle bots or more sophisticated spam; for now we’re just too small to target. I usually browse by New and see spam staying up for hours, even in the biggest communities.
Just chiming in here: there are some problems with federation at the moment. I’m an admin on LW, and generally we remove spam pretty quickly, but right now those removals don’t federate quickly. We are working on temporary fixes until the Lemmy devs fix it properly.
The moderation on Lemmy is pretty poor and there is no clear avenue (at least to me) to help or offer help. Reporting feels pointless.
So I agree. Lemmy cannot handle it at the moment. That does not inspire confidence that it will be handled when Lemmy gets larger and the spam and bots become more sophisticated.
I do appreciate, however, that all platforms have to go through this learning process.
Join a larger instance. We rarely see spam on SJW anymore because we now have a bot that removes it automatically.
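For anyone curious, a bot like that can be quite small. Here’s a minimal sketch in Python against Lemmy’s v3 HTTP API (paths as in the 0.18.x line, where `auth` still goes in the request body); the instance URL, credentials, and the `looks_like_spam` heuristic are all placeholders, not how the actual SJW bot works:

```python
import requests

INSTANCE = "https://example.instance"  # placeholder, not a real instance


def login(user: str, password: str) -> str:
    """Log in and return the JWT used to authenticate later calls."""
    resp = requests.post(f"{INSTANCE}/api/v3/user/login",
                         json={"username_or_email": user, "password": password})
    resp.raise_for_status()
    return resp.json()["jwt"]


def looks_like_spam(post_view: dict) -> bool:
    """Placeholder heuristic. A real bot would combine keyword lists,
    account-age checks, link reputation, or a trained classifier."""
    return "free crypto" in post_view["post"]["name"].lower()


def sweep(jwt: str) -> None:
    """Scan the newest posts and mod-remove anything that trips the filter."""
    resp = requests.get(f"{INSTANCE}/api/v3/post/list",
                        params={"sort": "New", "type_": "All",
                                "limit": 50, "auth": jwt})
    for post_view in resp.json()["posts"]:
        if looks_like_spam(post_view):
            # The removal then federates out to other instances
            # (eventually -- see the federation lag mentioned above).
            requests.post(f"{INSTANCE}/api/v3/post/remove",
                          json={"post_id": post_view["post"]["id"],
                                "removed": True,
                                "reason": "automated spam removal",
                                "auth": jwt})


if __name__ == "__main__":
    sweep(login("spam-bot", "correct horse battery staple"))
```

The account running this needs mod or admin rights, since `post/remove` is a moderator action rather than a regular delete.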
I see a lot of usernames from SJW so I think I might do that!
Just tried to create an account. Got an error that it couldn’t send me a verification email. The login button spins and then does nothing. Resetting the password does the same.
Not sure what went wrong but sh.itdoesnt.work
We have been experiencing problems with email verification over the past week. Apparently there is an issue with our email delivery provider. Sorry about that.
Is this the account?
https://sh.itjust.works/u/lostmykeys
If so then it should be fixed. If not, please DM me the email address you are using to register.
The spam is bad, but I can just ignore it. Last week, though, there was an attack with CSAM that showed up while I was casually browsing New; that made me not want to open Lemmy anymore.
I think that is what needs to be fixed before we can tackle spam.
Whatever is done to fight spam should be useful in fighting CSAM too. The latest “AI” boom could prove lucky for non-commercial social networks, since content recognition is exactly the kind of thing that can leverage machine learning. Obviously it’s a significant cost, so pitching in to cover running costs will have to become more common.
Admins are actively looking into solutions; nobody wants that stuff stored on their server, and there’s a bunch of legal stuff you must do when it happens.
One of the problems is the cost of the compute power needed to scan pictures for CSAM before upload, which makes it not viable for many instances. I think Lemmy.world is moving towards only allowing images hosted on whitelisted sites.
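To make the cost trade-off concrete: matching uploads against a blocklist of perceptual hashes of known-bad images (the approach behind tools like Microsoft’s PhotoDNA) is far cheaper than running an ML classifier over every picture. A rough sketch using the Pillow and imagehash Python libraries; the hash value and distance threshold here are made up:

```python
import imagehash  # pip install imagehash pillow
from PIL import Image

# Hypothetical blocklist of perceptual hashes of known-bad images.
# Real deployments pull these from vetted industry hash lists.
BLOCKED = {imagehash.hex_to_hash("8f373714acfcf4d0")}

MAX_DISTANCE = 4  # Hamming distance; small values still catch re-encodes and resizes


def allowed(path: str) -> bool:
    """Cheap pre-upload screen: reject anything close to a known-bad hash."""
    h = imagehash.phash(Image.open(path))
    return all(h - bad > MAX_DISTANCE for bad in BLOCKED)
```

Hashing like this takes milliseconds on a CPU, which is why it scales down to small instances in a way that GPU-backed classifiers don’t.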
Be diligent with reporting, and consider switching instances if your admins aren’t really active.
The reports go to the community mods, not your instance admins, though, don’t they?
Any reports you make are visible to the admins of your instance.
E.g. if you make a report, the community mods may choose to ignore it while your admins choose to remove it for everyone using their instance.
Everything you see on Lemmy is through the eyes of your instance; people on other instances may see different things. E.g. some instances censor certain slurs, but that doesn’t affect users outside that instance. (De)federation also dictates which comments you will see on a post.
But they do go to the community mods, even on a different instance? And if the community mods remove the content, does that removal federate?
I prefer to rely on the community mods to remove most ‘spam’, as it’s their role to decide what counts as spam in their community. (Obviously admins can/should remove illegal content, etc.)
Admins, for the most part, shouldn’t have to remove content on their copy of other instances’ communities.
It goes to the community mods too, yeah. But when it comes to spam/scams being posted, admins (at least on programming.dev) will remove it immediately rather than wait for community moderators. Spammers will usually spam multiple communities at once, and only admins can ban users entirely from the site/their instance.
A few days ago a person created multiple accounts and spammed scat content across multiple communities. Moderators can’t effectively stop those kinds of things.
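For what it’s worth, that site-wide ban comes down to a single admin API call. A sketch under the same v3-API assumptions as the bot example above; the `person_id` and reason are made up:

```python
import requests

INSTANCE = "https://example.instance"  # placeholder
jwt = "..."  # JWT from a login call like the one in the bot sketch

# Ban the account site-wide and remove all of its content on this instance.
requests.post(f"{INSTANCE}/api/v3/user/ban",
              json={"person_id": 12345, "ban": True, "remove_data": True,
                    "reason": "cross-community spam", "auth": jwt})
```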
I’m on Lemmy due to this!
I literally use this platform just to run from bots and cooperate greed.
Lol, well it’s not immune to either. As soon as anyone thinks Lemmy has ROI, it will be targeted by bots, corporate greed, and scrapers.
But all of our posts are publicly available on the Internet and, in my opinion, should be fair game for web crawlers, archivists, or whoever wants to use them. That’s the free and open Internet.
What’s shitty is when companies like reddit decide it’s “their” data.
…corporate?
Corporations cooperate greedily.
Testify! 👏🏻👊🏻
Long live Lemmy!