How should web3/decentralised social media be moderated fairly?

Definitions:

  • Decentralisation: in the case of social media, I mean moderating in a fairly trustless fashion that prevents bad actors from abusing moderator rights

Moderation is a very hard problem to solve across large entities, such as the big tech companies that host forums. Obvious examples that come to mind are:

  • Reddit (subreddit/sub-community based moderators)
  • Twitter (report/block based? some crowd-sourced fact checks?)
  • Instagram (report based, I believe)

The question is: does the playing field change when we consider decentralisation? Or should we pick one of these approaches and stick with it?


how should we moderate in a fairly trustless fashion preventing bad actors from abusing such moderator rights

This is a tough one. First of all I think it depends on what type of social media we’re talking about, because like you mentioned, Reddit/Twitter/Instagram all have different moderation methods because they’re structured differently.

does the playing field change when we consider decentralisation?

I think so, yeah, because with Web3 we won’t have traditional accounts like we did in Web2, where we could simply ban an email and be done with it. Now anyone can generate a bunch of wallets in seconds, give each some XRD, and pretend that each one is a different person.

I asked myself this question when considering a Web3 login for RadixTalk, and the best idea I’ve come up with so far is simply to moderate the content, not the users: use Akismet and other AI tools to detect obvious spam, and rely on user reports for the rest.
To prevent multiple accounts, report flooding, etc., you can check IP addresses, but those can be circumvented with VPNs (hence why it’s easier to moderate content than users).
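A minimal sketch of that content-first approach, with a toy scoring function standing in for an Akismet-style classifier; all names and thresholds here are illustrative assumptions, not a real API:

```python
# Hypothetical sketch: hide posts by moderating content, not users.
SPAM_THRESHOLD = 0.8   # classifier score above which a post is auto-hidden
REPORT_THRESHOLD = 3   # distinct user reports needed to hide a post

def spam_score(text: str) -> float:
    """Toy stand-in for an ML spam classifier (e.g. Akismet)."""
    spammy_words = {"free", "airdrop", "click", "winner"}
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(w in spammy_words for w in words) / len(words)

def is_hidden(text: str, reporters: set) -> bool:
    # A post is hidden if the classifier flags it OR enough distinct users
    # report it; note the *author's* identity is never needed.
    return spam_score(text) >= SPAM_THRESHOLD or len(reporters) >= REPORT_THRESHOLD

print(is_hidden("FREE airdrop click winner", set()))    # True (classifier)
print(is_hidden("A thoughtful post", {"a", "b", "c"}))  # True (3 reports)
print(is_hidden("A thoughtful post", {"a"}))            # False
```

Counting distinct reporters (a set, not a raw tally) at least forces a Sybil attacker to spin up one wallet per report.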


The key in my opinion is having a very strong and customizable system for censoring or hiding content, while having the decentralized back-end be immutable and permissionless.

People need to be able to choose words, topics, images, and specific users that they don’t want to see content from. For topics, images, and users, you will need a robust artificial intelligence system, regular manual reviews, and a way to identify and track users (even if anonymous). The latter is just to ensure that when you hide a specific user’s content, it stays hidden across any alt accounts they create.
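The per-user preference part of that could look something like this sketch; every field and function name here is an assumption for illustration, not an existing API:

```python
from dataclasses import dataclass, field

# Illustrative client-side filtering: each user keeps their own lists of
# blocked words, topics, and authors, and the feed is filtered against them.

@dataclass
class Preferences:
    blocked_words: set = field(default_factory=set)
    blocked_topics: set = field(default_factory=set)
    blocked_authors: set = field(default_factory=set)  # should also include known alts

def visible(post: dict, prefs: Preferences) -> bool:
    if post["author"] in prefs.blocked_authors:
        return False
    if post["topic"] in prefs.blocked_topics:
        return False
    words = set(post["text"].lower().split())
    return not (words & prefs.blocked_words)

prefs = Preferences(blocked_words={"spoiler"}, blocked_authors={"wallet_abc"})
feed = [
    {"author": "wallet_abc", "topic": "misc", "text": "hi"},
    {"author": "wallet_xyz", "topic": "misc", "text": "big spoiler ahead"},
    {"author": "wallet_xyz", "topic": "misc", "text": "hello world"},
]
print([p["text"] for p in feed if visible(p, prefs)])  # ['hello world']
```

Since filtering happens at read time, the immutable back-end stays untouched; only each user’s view changes.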

To truly scale, you really do need complex automated systems AND manual moderators to review content after it has passed the automated filters, curate multiple views (e.g. “appropriate for children”), and manually hide content. To give moderators time to review, you need to delay content from appearing on the front end for a period, but that creates a poor experience for lively, quick discussion. So you’d want a reputation system: users who are trusted, have existed for a while, and behave consistently skip the delay, and are heavily penalised somehow if they break the rules after being trusted.
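The reputation-gated delay described above could be sketched as follows; the thresholds and field names are illustrative assumptions:

```python
# Sketch: new or misbehaving accounts wait for moderator review before their
# posts appear; long-standing, well-behaved accounts skip the delay.

REVIEW_DELAY = 15 * 60       # seconds a post waits for moderator review
TRUST_AGE = 30 * 24 * 3600   # account must be roughly 30 days old
TRUST_SCORE = 50             # and have a minimum behaviour score

def publish_at(post_time: float, account_age: float, score: int,
               rule_breaches: int) -> float:
    """Return the timestamp at which a post becomes publicly visible."""
    trusted = (account_age >= TRUST_AGE and score >= TRUST_SCORE
               and rule_breaches == 0)
    # Any rule breach revokes trusted status, so the delay comes back:
    # that is the "heavy penalty" for abusing earned trust.
    return post_time if trusted else post_time + REVIEW_DELAY

print(publish_at(0.0, account_age=60 * 24 * 3600, score=80, rule_breaches=0))  # 0.0
print(publish_at(0.0, account_age=3600, score=0, rule_breaches=0))             # 900.0
```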

As for the front end, I think we need it to be open source and should encourage forks for different views, and we also need users to be able to highly customise what they see, maybe with community “plugins” like “hide poop images” or whatever.

In order to abide by laws, e.g. those against pornography involving minors or extremely violent image or video content, perhaps we can’t have decentralised storage of those assets, only of text. If people want to include images or videos, they can post a link, which the front end only renders if it comes from a source trusted to abide by these rules and to delete that kind of illegal content.
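A tiny sketch of that trusted-source rule, with placeholder domains standing in for whichever media providers a front end decides to trust:

```python
from urllib.parse import urlparse

# The front end only renders media links whose host is on an allowlist of
# providers trusted to remove illegal content. These domains are placeholders.
TRUSTED_MEDIA_HOSTS = {"images.example.com", "video.example.org"}

def render_link(url: str) -> bool:
    """True if the front end should embed this media URL."""
    host = urlparse(url).hostname or ""
    return host in TRUSTED_MEDIA_HOSTS

print(render_link("https://images.example.com/cat.png"))  # True
print(render_link("https://sketchy.example.net/x.png"))   # False
```

The on-chain text only ever stores the URL, so the immutable layer never holds the asset itself.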

In summary, it IS very important to have absolute freedom of speech without permissions, with immutability, and we can have that by storing text permissionlessly and then heavily filtering the views. If people want to see the unfiltered content, they can figure out how to do it on their own, or fork a view, remove the filters, and provide an unfiltered website at their own legal risk.

EDIT: One more thing to mention. Web3 social media has two advantages over traditional social media:

  1. You can prove whether something is original content (or at least, who first posted it on the network) and prove when it was posted.

  2. You have to pay a small fee to post, which limits spam and low-effort content immensely.
