Reading about the many ways the tech discussion site Hacker News silences posts its moderators don’t like reminds one that contributing to someone else’s forum, mailing list, or any other centrally hosted discussion is one way to be censored. User moderation is no better than administrator moderation: it’s easy to learn which themes pass for acceptable on tech sites such as Hacker News or Slashdot. Celebrating proprietary software is fine; software freedom (the freedom to run, inspect, share, and modify published computer software) is shunned, even though software freedom bears on most of the stories these corporate media repeater sites carry and serves the public interest.
What sites don’t often let on is that their censorship tricks can be quite subtle: “shadow banning” can mean not telling users that nobody else can see their posts, or hiding a user’s posts unless a reader explicitly seeks them out. That’s what Twitter has reportedly done with posts that challenge the current narrative known as “Russiagate,” in which Russians spending thousands of dollars on ads before and after the 2016 US election somehow put Pres. Trump in the White House and kept Hillary Clinton from winning. There’s no evidence to back this up, but that doesn’t stop the mainstream corporate media from publishing story after story about (usually unnamed) Russians doing dastardly things that are indistinguishable from exercising free speech.
Sadly, Internet access is not seen as a human right, something civilized countries supply to all and never take away. So long as Internet access is privatized, we will always publish on sufferance: say something someone powerful doesn’t like and you could be barred from speaking anywhere others can see, read, or hear you online again.
What would an ethical discussion service look like?
Such a service would convey posts to clients using a protocol that is fully documented and free for anyone to implement, would carry data only over encrypted connections, and would mask which posts are being read as much as possible. The goal has to be designing the service around preserving users’ privacy and freedom to express themselves.
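To make the “fully documented protocol” idea concrete, here is a minimal sketch of what a hypothetical wire format might look like: one JSON object per line, with every field named and explained, so anyone can write a compatible client. The field names and message shape are illustrative assumptions, not an existing standard; a real deployment would carry these bytes only over an encrypted connection.

```python
import json

# Hypothetical wire format sketch: one JSON object per line ("JSON Lines").
# Every field is documented, so any independent client can implement it.
def encode_post(author, body, in_reply_to=None):
    """Serialize a post for transmission (over TLS in a real deployment)."""
    message = {
        "type": "post",          # message kind; other kinds could be added
        "author": author,        # display name or key fingerprint
        "body": body,            # the post text
        "in_reply_to": in_reply_to,  # parent post ID, or None for a top-level post
    }
    return (json.dumps(message) + "\n").encode("utf-8")

def decode_message(raw):
    """Parse one message; unknown fields are ignored for forward compatibility."""
    return json.loads(raw.decode("utf-8"))
```

Because the format is trivial to parse, no single client implementation can become a gatekeeper.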
I think ethical services would do more operations client-side to give the user powerful means of selecting what to read, see, or hear. The client would track what has been read and would sort and filter posts according to a particular user’s desires (the user defines what is unsolicited, objectionable, or otherwise not worth reading), but all posts would be available to all by default. The client would cache data to enable offline reading and posting by batch upload. Doing any of this server-side is just another way to enable censorship.
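The client-side reader described above can be sketched in a few lines. This is an illustrative design, not an implementation of any existing client: all posts are cached locally by default, filtering happens only at display time under user-defined rules, and posts written offline queue up for a later batch upload.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    post_id: str
    author: str
    body: str

@dataclass
class ClientReader:
    """All moderation decisions live on the user's machine, not the server."""
    read_ids: set = field(default_factory=set)    # what the user has already read
    filters: list = field(default_factory=list)   # user-defined predicates to skip posts
    cache: list = field(default_factory=list)     # every post, kept for offline reading
    outbox: list = field(default_factory=list)    # posts queued for batch upload

    def receive(self, posts):
        # Cache everything: all posts are available by default.
        # Filtering happens at display time, never at storage time.
        self.cache.extend(posts)

    def unread(self):
        # The user, not the server, decides what is worth reading.
        return [p for p in self.cache
                if p.post_id not in self.read_ids
                and not any(f(p) for f in self.filters)]

    def mark_read(self, post):
        self.read_ids.add(post.post_id)

    def queue_post(self, post):
        # Written while offline; uploaded later in one batch.
        self.outbox.append(post)
```

Note that a filter only hides a post from this one user’s view; the post itself stays in the cache and remains visible to everyone else.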
An ethical service offers multiple entry points, as netnews does, spreading discussion among many cooperating, publicly available news servers. If one server becomes unavailable, one can easily move to another server and continue the discussion. All current web-based services are single points of failure from the user’s perspective. It doesn’t matter that Twitter, Facebook, and other currently popular sites run many servers when those servers all obey the same rules of exclusion and all draw posts from the same database. We rightly consider all Twitter servers part of the same oppressive system: users aren’t kicked off or silenced on just one Twitter server; their account is (shadow) banned from all of them. The same is true of any centralized system, and that central setup is precisely what makes such control possible. Freedom of speech therefore requires massive decentralization.
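The failover behavior described above, where a reader simply moves to the next cooperating server, can be sketched as follows. The server names and the `fetch` callback are hypothetical stand-ins for a real network client; the point is only that no single operator in the pool can make the discussion unreachable.

```python
class ServerUnavailable(Exception):
    """Raised when a server is down or refuses to serve the discussion."""

def fetch_with_failover(servers, fetch):
    """Try each cooperating server in turn.

    Any server in the pool can serve the same discussion, so losing one
    (to outage or to censorship) does not silence anyone.
    """
    for server in servers:
        try:
            return fetch(server)
        except ServerUnavailable:
            continue  # the post data is replicated elsewhere; move on
    raise ServerUnavailable("no server in the pool responded")
```

A centralized service is the degenerate case of this loop: a pool of one, where the first refusal is final.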