One of the downsides to censorship — even the well-meaning kind — is that it throttles the “reality signal.” The world, as experienced directly, expresses itself in a bewildering fashion, containing both signal and noise and often incomplete information about what we are trying to observe. The viewer is frequently unsure of what he is witnessing and puts forward a series of hypotheses to explain and predict what transpires. And often he is wrong.
Doubtless, it would be more convenient for some if reality could be filtered before it was perceived. That way the recipients of the signal would never be confused. Everyone would get the same official message. No one would feel upset unless it was intended that they should be. In an age where populations "see" the world through social media, it is easy to understand why pre-processing the view appeals to governments and social engineers. It allows them to lead the multitudes where they should go.
But mediated views as a channel of information have drawbacks, the most important of which are information reduction, narrative artifacts, and pixelization. Information can be interpreted as quantifying the level of "surprise" of a particular outcome, a notion captured in the popular saying, "tell me something I don't know." Censorship is predictable; it tells us only what we are supposed to know, stripping away unvetted or "unreliable" reports, and it always contains less raw information than the unfiltered capture. There is less surprise, but also less information.
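That notion of information as surprise is the standard Shannon measure, and a minimal sketch can make the trade-off concrete. The feeds and probabilities below are invented for illustration; the point is only that a predictable, pre-filtered stream carries fewer bits per message than a raw one.

```python
import math

def surprisal(p: float) -> float:
    """Shannon information content (surprise) of an outcome with probability p, in bits."""
    return -math.log2(p)

def entropy(feed: dict) -> float:
    """Average surprise of a feed, in bits per message."""
    return sum(p * surprisal(p) for p in feed.values() if p > 0)

# Unfiltered feed: many possible reports, none of them certain.
raw_feed = {"official line": 0.4, "contradicting report": 0.3, "rumor": 0.2, "noise": 0.1}

# Censored feed: the official line almost always gets through.
censored_feed = {"official line": 0.97, "contradicting report": 0.03}

print(f"raw feed:      {entropy(raw_feed):.2f} bits per message")       # about 1.85
print(f"censored feed: {entropy(censored_feed):.2f} bits per message")  # about 0.19
```

The censored channel is tidier, but on these made-up numbers it delivers roughly a tenth of the information per message.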
As everyone who has used a badly programmed graphics program knows, artifacts occur when objects left over from the last rendering intrude on the current view. Narrative artifacts are the mediated equivalent: leftovers from the previous story that no longer fit the current one. To deal with them, George Orwell coined the phrase "memory hole." "A memory hole is any mechanism for the deliberate alteration or disappearance of inconvenient or embarrassing documents, photographs, transcripts or other records, such as from a website or other archive, particularly as part of an attempt to give the impression that something never happened." Often audiences of mediated views simply pretend not to see the artifacts, even though they have not yet been refreshed away, because by convention that is the way the game works. Lastly, there is pixelization, caused by mediation's inability to portray an event in arbitrarily fine detail. In social media, the coarse pixels often appear as slogan-categories like 'climate denier', 'Nazi', 'Karen', 'oligarch', etc.
We get a beautiful picture that is also wrong. Inflation, not prosperity, pops up after printing money. The underdog starts winning the war. Vaccines don’t work as advertised. Unexpectedly. Or perhaps not so unexpectedly if information apps are filtered to let only official narratives through.
The danger of inadequate information is so great that the challenge of noise and misinformation has usually been accepted as the price of obtaining raw data. There is a survival aspect to freedom of speech that brought it into existence: people would rather hear something and make up their own minds than not hear at all. Filtering still exists in uncensored systems (there may be a lot of it), but its position in the message flow shifts from a priori filtering by the provider to a posteriori filtering by the users themselves. A censorship-free but properly designed social media application should have good tools that allow users to block any content they wish and, conversely, to follow anything in the stream. This allows fine tuning of message selection, to which any amount of subsequent analysis can be applied. This makes sense. In a society of normal humans, only the blind actually need seeing-eye dogs. Most can see for themselves. Those who prefer a degree of pre-processed information can choose that as well, by following whatever publications suit them.
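What that a posteriori filtering might look like on the client is sketched below. The Message fields, block list, and follow list are hypothetical, not any existing platform's API; the point is that the full stream arrives and the selection happens at the user's end.

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    sender: str
    topic: str
    text: str

@dataclass
class UserFilter:
    """Client-side, a posteriori filtering: nothing is removed at the source;
    the user mutes or promotes after the fact."""
    blocked_senders: set = field(default_factory=set)
    followed_topics: set = field(default_factory=set)

    def view(self, stream):
        visible = [m for m in stream if m.sender not in self.blocked_senders]
        # Followed topics float to the top; everything else stays available below.
        return sorted(visible, key=lambda m: m.topic not in self.followed_topics)

stream = [
    Message("alice", "markets", "Inflation prints higher than forecast."),
    Message("spambot", "ads", "Buy now!!!"),
    Message("bob", "war", "The front line moved overnight."),
]
mine = UserFilter(blocked_senders={"spambot"}, followed_topics={"war"})
for m in mine.view(stream):
    print(f"{m.sender} | {m.topic} | {m.text}")
```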
Some will object that uncensored platforms will provide a venue for defamation and abuse, or for the release of classified or dangerous information, like the "secret of the atomic bomb." But imagine a text-only, length-limited app restricted to the domain of natural persons. In an American channel, the U.S. constitutional protections on privacy and free speech would apply to those natural persons, as would the penalties for defamation and criminal use, making the scope of its liberty and restraint coextensive with that of the political space.
In that hypothetical app, everyone has a definite identity. The message sender's ID could be keyed to his telephone number, and the limited-length message body encrypted with the public key of the addressee, who is named in the first few bits of the header. The message, including any links to further details, can be decrypted only with the addressee's private key. Imagine further that the downlink used the unused precision-positioning bandwidth of the GPS L2C and L5 channels (which most civilian apps don't need) and whatever backhaul is available for the uplink, and you would have the equivalent of a telephone party line on a planetary scale. For those too young to remember:
A party line (multiparty line, shared service line, party wire) is a local loop telephone circuit that is shared by multiple telephone service subscribers. Party lines provided no privacy in communication. They were frequently used as a source of entertainment and gossip, as well as a means of quickly alerting entire neighbourhoods of emergencies such as fires, becoming a cultural fixture of rural areas for many decades.
That is exactly what a free speech app should be. Because the party line format supports only limited-length text messages, it is inherently difficult to misuse for directly broadcasting the "secret of the atomic bomb" or for performing any other function not already found on the dark web. But its uncensored messageID/header:summary/optional-link format is perfectly suited to supporting a running log of humanity's conversation with itself. The party line app potentially represents a stream of discovery, insight, news, trivia, and garbage so enormous that it could spawn downstream markets for detailed information trading, aggregation, commentary, and analysis whose potential can only be guessed at. It is what Twitter should have become before it declined into a narrative control machine.
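One way the messageID/header:summary/optional-link format and the addressee-keyed encryption described above might fit together is sketched here. PyNaCl's sealed boxes, the 280-character limit, and the field names are assumptions chosen for illustration, not a specification.

```python
from dataclasses import dataclass
from nacl.public import PrivateKey, SealedBox  # pip install pynacl

MAX_BODY = 280  # assumed party-line style length limit

@dataclass
class PartyLineMessage:
    message_id: str     # unique message ID
    sender_id: str      # keyed to the sender's telephone number
    addressee_id: str   # named in the clear in the header ("the first few bits")
    sealed_body: bytes  # summary plus optional link, readable only by the addressee

def send(message_id, sender_id, addressee_id, addressee_pubkey, summary, link=None):
    body = summary if link is None else f"{summary}\n{link}"
    if len(body) > MAX_BODY:
        raise ValueError("body exceeds the party-line length limit")
    sealed = SealedBox(addressee_pubkey).encrypt(body.encode())
    return PartyLineMessage(message_id, sender_id, addressee_id, sealed)

def receive(msg, addressee_privkey):
    return SealedBox(addressee_privkey).decrypt(msg.sealed_body).decode()

# Demo: the addressee's keypair would normally live only on their own device.
recipient = PrivateKey.generate()
msg = send("msg-0001", "+1-202-555-0100", "+1-202-555-0199", recipient.public_key,
           "Fire on Elm St., neighbourhood alerted", "https://example.org/report/123")
print(receive(msg, recipient))
```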
Information is one of the most important things in the world, far too valuable to mine simply to target ads or advance the careers of politicians. It is the mother lode of all that makes a civilized man. We should be eliciting and fostering it, not shutting it down. Properly handled, there's money in it. Like any other precious substance, it is found mixed with dirt and must be cleansed and transformed to be useful; those who do that work can own it as recompense for their labors. Just as no town ever got rich by sealing off its mines, no civilization can advance by censoring, twisting, and faking the stream of information, which is the age's greatest resource. We shall starve on pat narratives, but we shall thrive on what we have yet to discover.
Books: Open Curtains: What if Privacy were Property not only a Right, by George Spix and Richard Fernandez. Technology represents both unlimited promise and menace. Which transpires depends on whether people can claim ownership over their knowledge or whether human informational capital continues to suffer the Tragedy of the Commons.
To solve this problem, the competing claims of privacy and transparency must be reconciled. The best way to achieve this is to treat knowledge as property.