The next political crisis may be triggered by a further loss of trust in authority, coupled with another institutional demand for power. Even before the COVID-19 pandemic broke out, faith in institutions was at an all-time low and still falling. In 2022, trust in organized religion was more than four times higher than trust in Congress. Apart from the military, no major public institution, not even the police, was trusted by more than 50% of the public.
Yet this loss of institutional trust coincided with unprecedented demands by institutions for a vast increase in power. These included the power to regulate nearly anything in the name of stopping the climate from changing; the power to redefine society by proclaiming new ‘Woke’ values; and the power to control mass movement and mandate medication to control a pandemic. Finally, institutions demanded the power to ‘check facts’ in any speech and ‘de-platform’ anyone suspected of uttering misinformation, thus insulating themselves against criticism.
Since the institutions were so mistrusted, they could not ask for these additional powers on their own account, so they invoked the name of science. “Trust the science” became the new all-encompassing justification. Politicians proclaimed themselves obliged to assume dictatorial powers on expert advice to deal with crisis after crisis. They didn’t want more power, they protested, but they reluctantly had to act because the science forced them to. When, one by one, these imperative ‘scientific’ measures were withdrawn or modified as their shortcomings became apparent in public experience, the aura of infallibility was shattered.
In retrospect, the greatest damage caused by this negative experience was less to science itself than to the flagging authority of the institutions that abused it as a source of credibility. Because science reasons from evidence, not authority, it can make mistakes and learn from them without losing face. Institutions, on the other hand, rarely admit to being wrong, because doing so lays them open to enemies who will seize on their mistakes. If COVID weren’t so politicized, the surprises in its actual behavior versus the models would be illuminating, pointing to numerous blank spaces in our knowledge that we have yet to fill. The natural world, including the virus, the biosphere, and our immune systems, is a vast analog computer that outperforms the puny digital models created by human experts. We may learn more in time, but governments should have tempered their responses to reflect their level of uncertainty in dealing with complex, dynamic systems. But they didn’t even know the scale of the blank spaces, couldn’t admit to it, and seriously undermined the persuasiveness of “trust the science” as a justification for the foreseeable future. Which is a pity. One day, a real wolf may come. The usual fact-checkers will cry “Wolf!” and no one will believe them, because pitchmen have oversold probabilities and contingencies as certitudes.
Information is never perfect, and decisions are made only with the data available. When information is politicized, the odds the real wolf will be missed rise drastically because the data has been corrupted and rendered worthless. And the public notices.
The erosion of scientific authority through politicization has led to a search for a new source of justification for ever more expansive transformative agendas, because the institutional appetite for power has not diminished. “Some climate change experts have an idea to slow global warming that sounds like it could have come straight from a science-fiction movie. A group of scientists are currently exploring solar geoengineering technology to stop temperatures rising. Put simply, they’re looking to ‘dim the sun,’” reports WCCO. With that, environmentalism has completed its journey from a movement to leave nature alone to a project to actively terraform the planet.
Perhaps made wary by recent efforts to control COVID, some scientists are reluctant to take on the even more complex planetary climate system. “Planetary-scale engineering schemes designed to cool Earth’s surface and lessen the impact of global heating are potentially dangerous and should be blocked by governments, more than 60 policy experts and scientists said,” reports ScienceAlert. After all, what could justify such interventions? Great power requires great justifications.
But institutions have one more trump card to justify the unprecedented authority that undertakings of such magnitude require: artificial intelligence (AI).
AI has been able to perform impressive feats of statistical correlation between phenomena by fitting hypothesis after hypothesis onto masses of data until it gets a fit (a toy sketch of that pattern follows the quotation below). So instead of saying “Trust the experts” or “Trust the science,” the authoritarians will now say, “Trust the machine.” After all, Google’s AI program beat human champion Lee Sedol four games to one in 2016 at Go, a game more complex than chess, and went on to defeat Chinese champion Ke Jie in 2017. Those victories proved to be the CCP’s “Sputnik moment”. China invented Go; emperors used it to hone their general staff. To be bested by an American computer set off a Chinese project for AI supremacy. Asia Blog reported:
While barely noticed by most Americans, the five games drew more than 280 million Chinese viewers. Overnight, China plunged into an artificial intelligence fever. The buzz didn’t quite rival America’s reaction to Sputnik, but it lit a fire under the Chinese technology community that has been burning ever since. … China is ramping up AI investment, research, and entrepreneurship on a historic scale.
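To make the earlier point concrete, here is a minimal, purely illustrative Python sketch (nothing in it comes from the original post; the data and feature names are invented): generate noise with no real relationships, then try hypothesis after hypothesis until something appears to fit.

```python
# Toy illustration: with enough candidate hypotheses, something always "correlates,"
# even when the data is pure noise.
import random
import statistics

random.seed(0)
n = 50
target = [random.gauss(0, 1) for _ in range(n)]            # the phenomenon to "explain"
candidates = {f"feature_{i}": [random.gauss(0, 1) for _ in range(n)]
              for i in range(1000)}                         # 1,000 unrelated hypotheses

def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

best_name, best_r = max(
    ((name, correlation(xs, target)) for name, xs in candidates.items()),
    key=lambda pair: abs(pair[1]),
)
print(f"Best 'fit': {best_name} with r = {best_r:.2f}")
# The "winner" typically shows |r| around 0.4-0.5 on pure noise, which looks like
# a discovery unless the 999 failed attempts are taken into account.
```

The point is not that modern machine learning reduces to this, only that enormous numbers of candidate hypotheses applied to enormous amounts of data will always yield apparent correlations, whether or not anything real underlies them.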
To political entrepreneurs, there is at last in AI a source of institutional authority equal to the task of justifying and controlling terraforming: equal to anything, no matter how big. Google’s deep-learning machinery, for instance, soon found a practical application in protein folding. “An artificial intelligence (AI) network developed by Google AI offshoot DeepMind has made a gargantuan leap in solving one of biology’s grandest challenges — determining a protein’s 3D shape from its amino-acid sequence,” wrote Nature. “DeepMind’s program, called AlphaFold, outperformed around 100 other teams in a biennial protein-structure prediction challenge called CASP, short for Critical Assessment of Structure Prediction.”
“It’s a game changer,” says Andrei Lupas, an evolutionary biologist at the Max Planck Institute for Developmental Biology. “This will change medicine. It will change research. It will change bioengineering. It will change everything,” Lupas adds. It will change politics, he forgot to add. The best part about super AI is that you cannot, even in principle, explain it to the public, let alone to Joe Biden. Therefore you don’t need to explain it. True superintelligence is ironically opaque. By definition, it ‘knows’ things that even human experts, whether Go champions like Lee Sedol and Ke Jie or eminent biochemists like Lupas, cannot grasp. And if the prediction of Turing’s colleague Irving John Good comes true, future superintelligent advice will increasingly lie beyond our understanding:
“Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.” (emphasis mine)
It will also be the last time the public can demand an explanation. Given true super AI, governments would be in the position of morons trying to peer-review Einstein. Why bother? They would have to treat it like a black box, without any real power to critique it, much as elected officials do today when receiving “expert advice” on COVID-19, with no option but to take it on faith. But it is against the nature of politicians, certainly in Russia and China and probably in the progressive West, to be left out of the loop. They will intervene through the back door.
It is logically necessary, too, for something to come from outside any super-AI’s system of axioms. An algorithmic machine needs a meta-premise, an external input that determines its purpose. Alan Turing called it the oracle in his PhD thesis:
Turing investigated the possibility of resolving the Gödelian incompleteness condition using Cantor’s transfinite ordinals. The condition can be stated thus: in any system with a finite set of axioms, an exclusive choice applies between expressive power and provability; one can have power without complete proof, or complete proof without power, but not both.
The thesis is an exploration of formal mathematical systems after Gödel’s theorem. Gödel showed that for any formal system S powerful enough to represent arithmetic, there is a statement G which is true but which the system cannot prove. G can be added to the system as an additional axiom in place of a proof. However, this creates a new system S’ with its own unprovable true statement G’, and so on. Turing’s thesis looks at what happens if you simply iterate this process, generating an endless supply of new axioms to add to the original theory, and it goes one step further in using transfinite recursion to go “past infinity,” yielding a hierarchy of theories, one for each ordinal number.
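Written compactly (a sketch in modern notation, not Turing’s own; S stands for the base system and G for the true-but-unprovable sentence produced at each stage):

```latex
S_0 = S, \qquad
S_{\alpha+1} = S_\alpha \cup \{\, G_\alpha \,\}, \qquad
S_\lambda = \bigcup_{\alpha < \lambda} S_\alpha \quad (\lambda \ \text{a limit ordinal}),
```

where each G_α is the Gödel sentence that S_α can express but cannot prove. The clause for limit ordinals is what allows the construction to continue “past infinity,” through the transfinite.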
As a Stanford reference put it: “Turing introduced the idea of an ‘oracle’ capable of performing, as if by magic, an uncomputable operation. Turing’s oracle cannot be considered as some ‘black box’ component of a new class of machines…Indeed the whole point of the oracle-machine is to explore the realm of what cannot be done by purely mechanical processes. Turing emphasised: ‘We shall not go any further into the nature of this oracle apart from saying that it cannot be a machine.'”
When “trust the science” is eventually discarded in favor of “trust the AI,” something will still be standing behind the curtain: the oracle. We shall not go any further into the nature of this something apart from saying that it cannot be a machine; it will more likely be an institution with no credibility of its own. Far from being the voice of God, these artificial intelligences will be the voice of gods, with Russian, Chinese, Woke, etc. characteristics, as the case may be. The stability and characteristics of such a scenario, and what it may evolve into, will be discussed in the continuation of this post: Trust the Artificial Intelligence, Part Two.
Follow Richard Fernandez on Twitter and wretchard.com
Books: Gaming AI: Why AI Can’t Think but Can Transform Jobs by George Gilder. Pointing to the triumph of artificial intelligence over unaided humans in everything from games such as chess and Go to vital tasks such as protein folding and securities trading, many experts uphold the theory of a “singularity.” This is the trigger point when human history ends and artificial intelligence prevails in an exponential cascade of self-replicating machines rocketing toward godlike supremacy in the universe.