- Decentralisation: in the case of social media, I mean, how should we moderate in a fairly trustless fashion while preventing bad actors from abusing moderator rights?
Moderation is a very hard problem to solve at scale, especially for the big tech companies that host forums. Obvious examples that come to mind are:
- reddit (subreddit/sub-community based moderators)
- twitter (report/block based? some crowd-sourced fact checks?)
- instagram (report based, I believe)
The question is, does the playing field change when we consider decentralisation? Or should we pick one approach and stick to it?
how should we moderate in a fairly trustless fashion preventing bad actors from abusing such moderator rights
This is a tough one. First of all I think it depends on what type of social media we’re talking about, because, like you mentioned, Reddit, Twitter, and Instagram all use different moderation methods since they’re structured differently.
does the playing field change when we consider decentralisation?
I think so, yeah, because with Web3 we won’t have traditional accounts like we did in Web2, where we could simply ban an email and be done with it. Now anyone can generate a bunch of wallets in seconds, give each some XRD, and pretend that each one is a different person.
I asked myself this question when considering a Web3 login for RadixTalk, and the best idea I’ve come up with so far is simply moderating the content, not the users: use Akismet and other AI tools to detect obvious spam, and rely on user reports for the rest.
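To make the "moderate content, not users" idea concrete, here’s a minimal sketch. The spam patterns, the report threshold, and the function names are all made-up examples for illustration; they are not Akismet’s actual rules (in a real deployment you’d query Akismet’s REST API and combine its verdict with something like this):

```python
import re

# Hypothetical patterns for obvious spam; a real deployment would
# lean on Akismet/AI classifiers instead of a hand-written list.
SPAM_PATTERNS = [
    re.compile(r"(?i)buy .* now"),
    re.compile(r"(?i)free\s+crypto"),
]

# Arbitrary example value: hide a post once this many users report it.
REPORT_THRESHOLD = 3

def looks_like_spam(text: str) -> bool:
    """Cheap local heuristic; stands in for an external spam check."""
    return any(p.search(text) for p in SPAM_PATTERNS)

def moderate(post_text: str, report_count: int) -> str:
    """Decide 'hide' or 'show' from the content and reports alone.

    Note that nothing here looks at *who* posted -- the wallet/account
    is irrelevant, which is the whole point of content-based moderation.
    """
    if looks_like_spam(post_text) or report_count >= REPORT_THRESHOLD:
        return "hide"
    return "show"
```

The design point is that every input to `moderate` is about the post itself, so spinning up a thousand fresh wallets buys an attacker nothing.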
To prevent multiple accounts, report floods, etc., check IP addresses, but those can be circumvented with VPNs (which is why it’s easier to moderate content than users).