- cross-posted to:
- [email protected]
cross-posted from: https://lemmy.dbzer0.com/post/123857
This is my current attempt at preparing to counter the spam waves that will appear as the fediverse becomes more and more popular.
It involves creating whitelists based on a chain of trust between instances, with easy, low-overhead ways to add and remove instances.
Let me know what you think and if you’re interested, please do register your instance at https://overctrl.dbzer0.com.
So what happens to instances that don’t want to participate in a centralized allowlisting project? This is an allowlist system, so eventually we just get cut out of federation? I’m still wishing for a centralized denylist that would keep track of instances blocked by other instances, and block someone once, say, 3 other instances I trust do. That way we can still allow by default, rather than requiring that any admin who wants to set up a new instance must know another admin who will endorse them. Frankly, I don’t have a personal relationship with even a single other fediverse admin; I wouldn’t want to endorse them, because I just don’t know them, and I’m quite sure they also wouldn’t endorse me. But saying “I trust you to block bad instances most of the time” seems way easier than “I trust you to vet all of your users”.
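The “block once 3 trusted instances block” idea could be sketched something like this (the threshold, function name, and instance domains are all hypothetical):

```python
# Sketch of a threshold-based shared denylist: block a domain once at
# least `threshold` of the admins we trust have blocked it.
# All domain names below are made-up examples.

def consensus_blocklist(trusted_blocklists, threshold=3):
    """trusted_blocklists: one set of blocked domains per trusted admin."""
    counts = {}
    for blocklist in trusted_blocklists:
        for domain in blocklist:
            counts[domain] = counts.get(domain, 0) + 1
    return {d for d, n in counts.items() if n >= threshold}

blocks = consensus_blocklist([
    {"spam.example", "bots.example"},
    {"spam.example", "troll.example"},
    {"spam.example", "bots.example"},
])
print(blocks)  # only spam.example is blocked by all 3 trusted admins
```

The appeal of this model is that it keeps allow-by-default semantics: a new instance federates immediately, and only gets cut off after several admins you already trust independently decide to block it.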
The problem with blacklists is that it’s trivial to register endless new domains to spam from. The fediverse avoided this by being too small to matter, but as the Reddit exodus begins, this is about to change.
So require paid SSL certificates or something. I just can’t sign on to any system that requires me to establish personal friendships with other instance admins so I can beg them for endorsements. Begging Reddit to improve accessibility didn’t work. I have no interest in a system where my instance now needs to beg other admins for the right to federate. Even email doesn’t work this way.
Email does rely on IP reputation as a major component in deciding if something is spam. The system has matured to a point where it works fairly well and transparently … but the consequence has been you can’t reliably send from an IP block unless somebody is very actively handling abuse and working with the reputation services to keep their IP space in the internet’s good graces.
But: I wouldn’t want to allowlist based just on one reputation service. I’ve got some ideas on how to handle spam for my instances involving a few different datapoints. This could be useful as one, if it ends up with enough data.
So instead of having to “beg for endorsements” you’d rather have to pay to set up a FOSS server?
Yes. I already have to pay for a VPS, for a domain…nothing wrong with paying for an SSL cert. At least I can pick my vendor.
I’m not sure how you’d accomplish this without requiring an EV cert, which is expensive and time-consuming to get, right? I guess by manually maintaining a list of free CAs like Let’s Encrypt? Idk, I’d never pay for a cert I’d have to manually renew when my LE certs are all automatic.
I’ve only recently set up my own Lemmy instance, to test new stuff, fix a few things, whatever. Something like this would prevent me from federating with the content I want to see, and I’d have to go try to be buddies with a trusted instance admin to get endorsed?
I think this may be something we want to discuss more as a community, and see what better solutions might be out there
Totally unlimited federation doesn’t work, just look at what happened with email. You have to jump through a lot of hoops to set up a server because of spam protection
And I get that, but I feel like there might be better options than an old boys’ club that requires endorsement to get in. For instance, maybe levels of trust? A newly federated instance gets rate limited or something, so that it can’t suddenly start spamming 1000s of posts or whatever.
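That graduated-trust idea could be as simple as an allowance that grows with instance age. A minimal sketch, where the function name, base rate, growth rate, and cap are all illustrative numbers, not a proposal:

```python
# Sketch of graduated trust via rate limiting: a newly federated instance
# gets a small hourly budget of incoming posts, which grows as the
# instance ages without incident. All numbers are illustrative only.

def hourly_post_limit(instance_age_days, base=10, per_week=50, cap=1_000):
    """Allowance grows roughly linearly with age, up to a cap."""
    weeks = instance_age_days / 7
    return min(cap, int(base + per_week * weeks))

print(hourly_post_limit(0))    # brand-new instance: 10 posts/hour
print(hourly_post_limit(30))   # about a month old: a few hundred
print(hourly_post_limit(365))  # over a year old: hits the cap
```

A real implementation would presumably also reset or reduce the allowance when an instance is reported for spam, so trust is earned back gradually rather than granted once and kept forever.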
Oh yeah, there’s probably middle ground there somewhere, although I’m not entirely sure some sort of “web of trust” model is avoidable
I agree that we need far stronger admin and moderation tools to fight spam and bots. I disagree with the idea of a whitelist approach, and think taking even more from email (probably the largest federated system ever) could go a long way.
With email, there is no central authority granting “permission” for me to send stuff. There are technologies like SPF, DKIM, DMARC, and FcRDNS, which act as a minimum bar to reach before most servers trust you at all; then server-side spam filtering gets applied on top and happens at a user, domain, IP, and sometimes netblock level. When rejections occur, receiving servers provide rejection information that lets me figure out what is wrong and contact the admins of that particular server. (Establish a baseline of trust, punish if trust is violated.)
A gray-listing system for new users or domains could generate reports once there is a sufficient amount of activity to ease the information gathering an admin would have to do in order to trust a certain domain. Additionally, I think establishing a way for admins to share their blacklisting actions regarding spam or other malicious behavior (with verifiable proof) could achieve similar outcomes to whitelisting without forcing every instance operator to buy in to a centralized (or one of a few centralized) authority on this. This would basically be an RBL (which admins could choose to use) for Lemmy. This could be very customizable and allow for network effects (“I trust X admin, apply any server block they make to my instance too” sort of stuff).
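The subscription-style sharing described above (“I trust X admin, apply any server block they make to my instance too”) could look something like this minimal sketch, where the admin names and blocked domains are hypothetical:

```python
# Sketch of subscription-style block sharing: my effective blocklist is
# my own blocks plus the published blocklists of admins I follow.
# Admin names and domains below are made-up examples.

my_blocks = {"spam.example"}

# blocklists published by admins I have chosen to trust
subscriptions = {
    "admin_a": {"troll.example", "bots.example"},
    "admin_b": {"bots.example"},
}

effective_blocks = set(my_blocks)
for admin, blocks in subscriptions.items():
    effective_blocks |= blocks  # union in every trusted admin's blocks

print(sorted(effective_blocks))
# ['bots.example', 'spam.example', 'troll.example']
```

Unlike a centralized allowlist, each admin picks whose blocklists to follow, so trust stays local and revocable: unsubscribing from an admin immediately drops their blocks from your effective list.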
I think enhancements to Lemmy itself would also help. Lemmy could provide a framework for filtering, and report when an instance refuses a federated message with relevant information, allowing admins to make informed decisions (and see when there are potential problems on their instance). It would also help to have ways to attach proof of bad behavior to federated bans at an instance level, and some way to federate bans (again with proof) from servers that aren’t a user’s home instance.
Finally, as far as I can tell everything following a “Web of Trust” model (basically what you are proposing) has struggled to gain widespread adoption. I have never been to a key signing party. I once made a few proofs on keybase, but that platform never really went anywhere. This doesn’t mean your solution won’t work, it just concerns me a little.
I expanded a bit more on some of how email tooling could be used within Lemmy in [this comment](https://lemmy.nrd.li/comment/114218) as well. My ideas aren’t fully baked yet, but I hope they at least make some sense.
The problem with using email as a playbook is… Email has soft-failed as a distributed system. It’s been captured by a few mega-corporations who have created a web of trust between themselves, and everyone else is struggling to get through, silently dropped into the spam bin often enough with no recourse. Cory Doctorow has spoken extensively on this.
I think to avoid this happening to the fediverse as well we need to start building our own web of trust early on and not let the fate of email happen once more
Perhaps I am a unicorn, but I have self-hosted my email for years and don’t have deliverability problems. The only problems I have had:
- I think I had to sign up with some sort of Microsoft thing or submit a ticket to them or something because I had an issue with sending mail to O365. That was resolved quickly and I haven’t had a problem since.
- My server host (Linode, and Digital Ocean before them) is on the UCEPROTECT-L3 blacklist, because they (and whitelisted.org) are a bunch of scammers and block entire ASNs for almost any amount of spam, then extort individual mail server operators to get their IP specifically delisted.
To me one of the big things that differentiates Lemmy (and the fediverse in general) from email is that most of it is public, so the things in email that would involve sharing someone’s private information (email addresses, IPs, email contents, etc) are public (at least the post/comment and username+instance), and can all be verified. I think there is a lot of potential because of this. Maybe I’m crazy, but I just really don’t like the idea of a whitelist-based system because it means I as a small instance operator may have to sign up to dozens of services like the one you are building. I want my instance to be able to federate pretty much as widely as possible, and to me such a burden is too much to ask within a system/protocol/fediverse that is designed to facilitate sharing and decentralization.
Also, I think there is already room for a problem with “capture”. What motivation is there for .world, .ml, or Beehaw to bother signing up for your thing? Even assuming you get 100 like-minded admins to sign up for Overseer, that is probably a pretty small fediverse island without them; some or all “mega” instances will probably just end up getting a pass anyway, and at the end of the day no system is in place to help with the problem of bot/spamming users on trusted instances (whether in that WoT or just blindly trusted by the WoT).
Most of the spam I get is from gmail addresses, I don’t see it going any differently here.
Yes, I am cognizant of the fact that most likely won’t bother to sign up. I’m considering just importing all instances and allowing others to vouch for them whether or not they’ve been claimed. Claiming an instance will likewise just allow their admins to guarantee and endorse other instances.
The way I see it, the motivation to sign up is crowdsourced trust building in the fediverse, instead of relying on de facto webs of trust which will develop organically around the big players.
About email, check out Doctorow’s experience: https://doctorow.medium.com/dead-letters-73924aa19f9d
While I am just a mere fedizen, I applaud innovative ideas! Backing those ideas up with open source code is next level contribution to the community. Thank you!
I think it may make sense to not only track instances but also users, because any instance with open signup is going to be very volatile.
In fact, it would be interesting if you had explicit user vouching, or at least tracked “karma” by user, so you could take a given user and “propagate” karma: people I have upvoted, and people upvoted by people I have upvoted, have trust.
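The propagation idea could be sketched as a short walk over the upvote graph, with trust decaying at each hop. Usernames, the decay factor, and the depth are all arbitrary illustrations:

```python
# Sketch of propagating trust along upvotes: people I upvoted get
# attenuated trust, people *they* upvoted get further-attenuated trust.
# Usernames are hypothetical; decay and depth are arbitrary choices.

upvoted_by = {
    "me": {"alice", "bob"},
    "alice": {"carol"},
    "bob": {"carol", "dave"},
}

def propagate_trust(graph, root, decay=0.5, depth=2):
    trust = {}
    frontier = {root: 1.0}
    for _ in range(depth):
        next_frontier = {}
        for user, score in frontier.items():
            for target in graph.get(user, ()):
                inherited = score * decay
                # keep the best score seen for each user
                if inherited > trust.get(target, 0.0):
                    trust[target] = inherited
                    next_frontier[target] = inherited
        frontier = next_frontier
    return trust

print(propagate_trust(upvoted_by, "me"))
# alice and bob get 0.5; carol and dave get 0.25
```

Whether this scales to fediverse-sized graphs is exactly the concern raised in the reply below; a real system would likely need to sample or cache rather than walk the full graph per user.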
I don’t think it’s possible to do user tracking like that. The scale is way too large