How to Regulate Social Media
A Third Way Between Allowing Anything and Turning Social Media Companies into Editors
If social media companies remove content they dislike, that is obviously problematic. However, allowing anything to be posted is equally concerning. Is there a third way? Let's examine some of the reasons why social media is so destructive today.
Fix the Toxic Algorithms
Social media algorithms, which determine what content users see, are typically designed to exploit flaws in human psychology to keep people hooked. They often promote divisive and enraging content because it maximizes engagement.
A potential solution is to establish ethics boards involved in designing these algorithms. When corporations attempt to manipulate users into addiction, these boards could intervene. The goal should be to enlighten the public, not simply to keep them addicted to extremist content.
There are multiple ways to structure an ethics board. One approach is to include employees, who tend to be less driven by profit than a board answerable to shareholders. Another option is a jury-style selection process, where randomly chosen members of the public, with no financial ties to the company, rotate on and off the board to maintain its independence.
We don’t need a single, rigid solution for ethics boards. The key is to experiment with different models and find what works best.
Outsource Moderation to the Public
Having been on both ends of content moderation, I’ve seen its flaws firsthand. On one hand, unchecked vile content spreads harm. On the other, users can be silenced without a meaningful appeal process. Often, takedowns are accompanied only by an automated response linking to vague community guidelines, leaving users guessing about what rule they violated.
A good moderation system must balance these concerns:
People should be able to voice their opinions and be treated fairly. If content is removed, users deserve a clear explanation.
Lies and disinformation that undermine society should not be allowed to spread freely.
Nuanced, high-quality content should be prioritized over inflammatory, divisive content lacking factual support.
To achieve this, platforms need better tools that empower users to moderate and promote responsible content.
Give the Community Tools to Moderate
Currently, most social media platforms only let users express agreement (e.g., likes and upvotes). That's not enough. I frequently debate people I disagree with, and I want to be able to acknowledge when they engage in serious, logical debate rather than resorting to ad hominems or misinformation.
Social media should offer:
A way to express agreement or disagreement with an opinion.
A way to evaluate how the opinion is expressed. Civil, nuanced discussions should be rewarded.
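To make these two axes concrete, here is a minimal sketch in Python of what a post's ratings could record. The field names and numbers are invented for illustration; no existing platform works exactly like this.

from dataclasses import dataclass

@dataclass
class PostRatings:
    # Two independent axes: what readers think of the opinion,
    # and how well the opinion is argued.
    agree: int = 0      # readers who share the opinion
    disagree: int = 0   # readers who reject the opinion
    civil: int = 0      # readers who found the argument serious and civil
    uncivil: int = 0    # readers who found it inflammatory or dishonest

# A post can lose the popularity contest and still score well on conduct:
contrarian = PostRatings(agree=12, disagree=87, civil=140, uncivil=5)
print(contrarian.agree - contrarian.disagree)   # -75: most readers disagree
print(contrarian.civil - contrarian.uncivil)    # 135: but it is argued well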
We also need a more granular rating system. Some users may receive many likes but lack deep respect among their peers. Others may be less known but highly regarded by those who do engage with them. The system should promote and reward thoughtful contributors, not just those who garner widespread but shallow approval.
This ranking should work like a domino effect: approval from highly respected users should carry more weight than approval from trolls and bad actors. The concept is inspired by Google Search's PageRank algorithm, which ranks websites by how often reputable sites link to them, rather than by keyword matching that is easy to spam.
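As a rough illustration of that domino effect, the sketch below runs a small PageRank-style iteration over a made-up endorsement graph: each user's reputation is fed by the reputation of whoever endorses them, split among everyone that endorser has endorsed. The names, damping factor, and iteration count are all arbitrary, and a real system would need extra defences against coordinated manipulation; treat this as a sketch of the principle, not a finished design.

# Who has endorsed whose conduct (illustrative data only).
endorsements = {
    "alice":  ["bob"],            # alice endorses bob
    "bob":    ["alice"],
    "carol":  ["alice"],
    "dave":   ["alice"],
    "troll1": ["troll2"],         # the trolls only endorse each other
    "troll2": ["troll1"],
}

users = list(endorsements)
reputation = {u: 1.0 for u in users}   # everyone starts equal
damping = 0.85                         # same role as PageRank's damping factor

for _ in range(50):                    # iterate until the scores settle
    new_reputation = {}
    for user in users:
        received = sum(
            reputation[endorser] / len(endorsements[endorser])
            for endorser in users
            if user in endorsements[endorser]
        )
        new_reputation[user] = (1 - damping) + damping * received
    reputation = new_reputation

for user, score in sorted(reputation.items(), key=lambda kv: -kv[1]):
    print(f"{user}: {score:.2f}")
# alice (endorsed by three users) and bob (endorsed by the well-regarded alice)
# end up well above the troll pair, whose mutual endorsements never lift them
# past their starting weight; unendorsed newcomers start near the bottom.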
How Community-Based Moderation Would Work
The first step in moderation would be to reduce visibility for users with low behavior scores and posts with overwhelmingly negative ratings. However, it’s important that controversial but well-articulated opinions remain visible, even if unpopular. In other words, controversial content should still have a platform—so long as it’s presented in a responsible manner.
Similarly, nuanced and thoughtful content should receive more promotion. Today, viral content is often incendiary rather than insightful. A better system would promote well-reasoned, constructive discussions.
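One way to express "demote, don't delete" is a feed-ranking heuristic in which conduct counts for more than agreement. The sketch below reuses the two rating axes described earlier; the weights are arbitrary and purely illustrative.

def visibility(agree, disagree, civil, uncivil, author_reputation):
    # Conduct dominates the score; raw agreement is only a secondary signal,
    # so a well-argued minority view is not buried, while an inflammatory post
    # sinks in the feed without being deleted.
    conduct = civil - uncivil      # how the post is argued
    stance = agree - disagree      # whether readers happen to agree
    return 3.0 * conduct + 0.5 * stance + 2.0 * author_reputation

# An unpopular but civil post still outranks a popular, inflammatory one:
print(visibility(agree=10, disagree=90, civil=120, uncivil=5,
                 author_reputation=1.9))   # about +309: promoted despite disagreement
print(visibility(agree=200, disagree=40, civil=10, uncivil=150,
                 author_reputation=1.0))   # about -338: demoted despite popularity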
What About Banning Content?
For deeply offensive or harmful content, simply reducing visibility may not be enough. There should be a mechanism for reporting and reviewing such content. Users should be able to see when a post is flagged for removal.
Votes for removal should require users to provide a brief reason. This ensures that those whose content is removed understand why and have a meaningful opportunity to appeal. Today’s moderation systems often fail in this regard, leaving users frustrated and without recourse.
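Here is a sketch of what such a flag-and-review record could look like: every removal vote has to name a rule and give a short reason, and the author sees the collected reasons instead of a generic link to the community guidelines. The class names, fields, and threshold below are hypothetical.

from dataclasses import dataclass, field

@dataclass
class RemovalVote:
    voter: str
    rule: str      # which guideline the voter believes was broken
    reason: str    # a short, human-readable justification (required)

@dataclass
class FlaggedPost:
    post_id: str
    votes: list = field(default_factory=list)
    removal_threshold: int = 5          # arbitrary threshold for the example

    def add_vote(self, voter, rule, reason):
        if not reason.strip():
            raise ValueError("A removal vote must include a reason.")
        self.votes.append(RemovalVote(voter, rule, reason))

    def is_removed(self):
        return len(self.votes) >= self.removal_threshold

    def explanation_for_author(self):
        # What the author sees, and what an appeal can respond to.
        lines = [f"- {v.rule}: {v.reason} (flagged by {v.voter})" for v in self.votes]
        return "Your post was flagged for removal because:\n" + "\n".join(lines)

flag = FlaggedPost("post-123")
flag.add_vote("bob", "harassment", "Targets a named individual with insults.")
print(flag.explanation_for_author())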
Moderation doesn’t need to be exclusively community-based. Platforms should be required to act swiftly against the most egregious content—such as beheadings or child pornography. However, for less severe violations, a community-driven approach is preferable.
How to Get Social Media to Implement This
Rather than prescribing a rigid framework, these ideas should be open to multiple implementations.
Possible Approaches:
Industry-Led Reform: Encouraging companies to pioneer such solutions and inspiring others to follow suit.
Government Regulation: Rather than mandating a specific user interface, regulations could set broad requirements, such as: "Media corporations must provide mechanisms for users to reward positive behavior and penalize bad behavior, influencing content visibility and promotion."
Reference implementations or mockups could serve as inspiration without being legally mandated.
A Public, Non-Profit Platform: Governments or international bodies could fund a non-profit global social media platform, independent of corporate influence. Western governments, for example, could provide seed funding and then withdraw from direct control.
This last option is my preferred approach, though it may be unpopular among free-market advocates. Free speech should not be controlled by corporations that answer only to shareholders. Public broadcasters like the BBC and NRK offer a viable model, as does The Guardian, which is run by a trust rather than a for-profit entity.
User Anonymity and ID
Some argue that requiring real names would curb online toxicity. While I personally post under my real name, mandatory identity verification would silence important voices—such as women with stalkers or corporate and government whistleblowers.
Both anonymity and verification should be supported. Reputation can still be attached to anonymous accounts.
A few possible approaches:
Reputation-Based Systems: Users build credibility over time, discouraging trolls from creating disposable accounts.
Government-Based Electronic ID: Verifying that an account belongs to a real individual without exposing their identity.
Public-Key Cryptography: Users could cryptographically prove they hold a valid ID, for example by signing a challenge, without revealing their identity.
Proof of Work: Inspired by cryptocurrency, users could be required to perform computationally expensive tasks to create accounts, making it costly for trolls and spam bots to mass-produce fake identities.
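To illustrate the proof-of-work idea, here is a minimal hashcash-style sketch: the platform issues a random challenge at signup, and the client must find a nonce such that hashing the challenge together with the nonce yields a given number of leading zero bits. Finding the nonce is deliberately expensive; checking it is cheap. The difficulty constant is arbitrary and would have to be tuned in practice.

import hashlib
import secrets

DIFFICULTY_BITS = 20   # arbitrary; higher means more CPU time per account

def leading_zero_bits(digest):
    # Count the leading zero bits of a hash digest.
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        bits += 8 - byte.bit_length()
        break
    return bits

def solve_challenge(challenge):
    # Client side: brute-force a nonce. Expensive by design.
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        if leading_zero_bits(digest) >= DIFFICULTY_BITS:
            return nonce
        nonce += 1

def verify(challenge, nonce):
    # Server side: checking a proposed solution is near-instant.
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return leading_zero_bits(digest) >= DIFFICULTY_BITS

challenge = secrets.token_hex(16)     # issued by the platform at signup
nonce = solve_challenge(challenge)    # costs the client noticeable work
assert verify(challenge, nonce)       # the platform verifies it cheaply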
Conclusion
Many of social media’s problems have technical solutions, but they need to be widely discussed and implemented. Discussions about reform tend to focus on simplistic solutions, like banning anonymity or increasing censorship, rather than exploring technical and structural alternatives.
This is largely a technological challenge, and computer scientists must take part in the conversation. We already have excellent solutions in other fields—cryptocurrency is just one example.
While many reading this may not be technical, I hope you find these ideas accessible enough to help promote discussion. My proposals may not be perfect, but I hope they encourage deeper, more nuanced conversations about how to fix social media’s flaws.