The biggest threat posed by censorship is the objective effect of suppressing the facts, which is far more consequential than any resentment a dissident group may feel. Censorship is like plucking out an individual's eyes, multiplied by millions, because it blinds not merely a single person but a whole population.
What drives institutions are repertoires specified in character strings. These strings determine an organization's allowable actions. Through a process of precedent they produce other strings, which collectively describe where the organization can and cannot go. Policies produce policies. DEI takes control of this process, and that is what makes it so powerful.
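What that looks like mechanically can be shown with a toy sketch in Python. The policy strings and the single precedent rule below are invented for illustration: treat each policy as a string and precedent as a rewrite rule that derives new strings from old ones.

```python
# Toy sketch: an institution as a string-rewriting system.
# Policies are strings; a precedent is a rewrite rule that derives new
# policy strings from existing ones. All strings here are invented.

def apply_precedent(policies, rules):
    """Return the policy set expanded by one round of precedent."""
    derived = set(policies)
    for policy in policies:
        for pattern, replacement in rules:
            if pattern in policy:
                derived.add(policy.replace(pattern, replacement))
    return derived

policies = {"hiring is decided on merit", "promotion is decided on merit"}
rules = [("on merit", "on merit and on diversity goals")]  # one captured precedent

for generation in range(1, 4):
    policies = apply_precedent(policies, rules)
    print(f"after precedent round {generation}: {len(policies)} policy strings")
```

Nobody has to intend the final outcome; once the rewrite rule is captured, the string set grows on its own.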
In this context, censorship is a taboo that stops institutions from formulating certain strings. It determines what is inconceivable. Whatever is still conceivable eventually becomes inevitable. This produces surprising results that seem to defy common sense. For example, John Schindler remarked on Twitter that he could not conceive how government agencies would allow obviously unfit people to get top security clearances.
How did a left-wing radical Antifa type who didn't hide his extremism get and keep TS/SCI clearances for years in one of the US military's most sensitive units -- before killing himself in front of the Israeli embassy?
But common sense is not part of the formal processes of a machine. What "everybody knows" cannot prevent the grant of a TS/SCI clearance to an Antifa type because institutions, as Nick Bostrom pointed out, are a form of superintelligence -- he calls it "collective superintelligence" -- that achieves its power by aggregating the efforts of many individuals. To achieve this aggregation, they rely far more on legalisms than individuals do. Nothing is obvious to the machine. Common sense is what people know, and if they do not encode it into the machine or institution, the machine does not know it.
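The clearance example can be put in the same toy terms. Everything below is an invented illustration, not the actual vetting process: a screening routine acts only on the strings it has been given, and a taboo pattern means the disqualifying fact can never even be recorded.

```python
# Toy sketch (invented example): a screening process can only act on encoded
# rules. A fact that matches a forbidden pattern, or that was never encoded
# at all, simply does not exist for the machine.

FORBIDDEN_PATTERNS = ["political extremism"]  # taboo strings the process may not formulate

def screen(applicant_notes):
    """Return the concerns the process is able to record and weigh."""
    concerns = []
    for note in applicant_notes:
        if any(pattern in note for pattern in FORBIDDEN_PATTERNS):
            continue  # the string cannot be formulated, so it is never weighed
        concerns.append(note)
    return concerns

notes = ["missed two training deadlines", "open political extremism on social media"]
print(screen(notes))  # ['missed two training deadlines'] -- the disqualifying fact vanishes
```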
This similarity between machine intelligence and collective-intelligence institutions makes them vulnerable to similar weaknesses, like text-string censorship and unnatural fixation. For example, when an artificial intelligence or an institution finds it can expand its instrumental power by linking it to a final goal, it sets up an addictive cycle called instrumental convergence. The end increases its access to means, and the means reinforce the end. Ultimately, the process is captured by an obsession. Cautionary examples of instrumental convergence abound in the AI literature.
The Riemann hypothesis catastrophe thought experiment provides one example of instrumental convergence. Marvin Minsky, the co-founder of MIT's AI laboratory, suggested that an artificial intelligence designed to solve the Riemann hypothesis might decide to take over all of Earth's resources to build supercomputers to help achieve its goal. If the computer had instead been programmed to produce as many paperclips as possible, it would still decide to take all of Earth's resources to meet its final goal.
If this dynamic is coded into a machine AI, it may run wild. Runaway code may have the same effect in collective superintelligences such as institutions. If a government agency gets more funding for doing DEI things, it will become even more Woke to get still more money. Like the paperclip scenario, it keeps doing more DEI until no more funds are forthcoming. This may give us some understanding of the remarkable power of DEI inside organizations.
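The funding loop can be sketched the same way, with numbers and growth rates that are entirely invented: the instrumental activity attracts funding, the funding enables more of the activity, and the cycle only stops when the outside budget does.

```python
# Toy sketch of instrumental convergence as a feedback loop (all numbers invented):
# the activity attracts funding, funding enables more of the activity, and the
# loop runs until the external budget stops growing.

funding = 1.0        # resources available this cycle
BUDGET_CAP = 100.0   # point at which no more funds are forthcoming

for cycle in range(1, 11):
    activity = funding * 1.2                   # means: funding enables more of the behavior
    funding = min(activity * 1.5, BUDGET_CAP)  # ends: the behavior attracts still more funding
    print(f"cycle {cycle:2d}: activity={activity:7.1f}  funding={funding:7.1f}")
    if funding >= BUDGET_CAP:
        break  # the process only stops when the environment stops feeding it
```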
Once a censoring ideology is incorporated into an institution, it actually becomes impossible to express a forbidden string, giving rise to a set of things the institution cannot say, calculate upon, perceive, or even schedule for discussion. This sets up a divorce from reality. With whole classes of phenomena now forbidden to understanding, an institution can become completely defenseless against them. A censored system will, by definition, generate hallucinations. According to IBM:
AI hallucination is a phenomenon wherein a large language model (LLM)—often a generative AI chatbot or computer vision tool—perceives patterns or objects that are nonexistent. … One significant source of hallucination in machine learning algorithms is input bias. If an AI model is trained on a dataset comprising biased or unrepresentative data, it may hallucinate patterns or features that reflect these biases.
Ironically enough, the biggest victims of AI hallucination are institutions supposedly rooted in science. Science crucially relies on not omitting pertinent facts. But censored 'science' cruises along, confident in its objectivity, until one day the bottom falls out. There are two major causes:
- Hallucination from data
- Hallucination from modeling
Woke attacks both.
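The first failure mode is easy to see in miniature. The numbers below are invented: fit even the simplest "model," here just a mean, to a sample from which the inconvenient class of observations has been censored, and it reports a confident answer that is wrong about the world it claims to describe. Censoring the model itself, by forbidding a variable from ever entering it, fails in the same way.

```python
# Toy sketch of "hallucination from data" (all numbers invented): a statistic
# computed from a censored sample is confidently wrong about the population.

import statistics

population = [2, 3, 4, 5, 18, 21, 25]           # reality, including inconvenient outcomes
censored = [x for x in population if x < 10]    # the forbidden class of facts is filtered out

print("true mean:   ", round(statistics.mean(population), 1))  # ~11.1
print("'model' mean:", round(statistics.mean(censored), 1))    # 3.5, reported with full confidence
```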
Hallucinations are not typically detected prior to a collision with reality. One example is pundit Keith Olbermann's reaction to a 9-0 SCOTUS decision allowing Donald Trump to remain on the Colorado ballot. "The Supreme Court has betrayed democracy. Its members, including Jackson, Kagan and Sotomayor have proved themselves inept at reading comprehension. And collectively the 'court' has shown itself to be corrupt and illegitimate. It must be dissolved."
The indignant Olbermann is completely shocked. So is Vox. Ian Millhiser writes: "Many Court observers, including myself, were shocked by the February 28 order because it appeared to rest on the flimsiest of pretexts. ... The courts were never going to save America from Donald Trump. No one is coming to save US democracy, except for ourselves."
It's as if Olbermann and Millhiser stumbled onto parts of reality they never suspected existed until the moment they ran into them. Yet the inconceivable proved obvious, to the point where millions of people, including all nine justices of SCOTUS, could think what the two pundits were unable even to conceive.
The big danger of censorship is not that it angers those who are silenced but that it blinds those who are doing the silencing.