Who Should Control AI's Most Disruptive Tools?

AP Photo/Godofredo A. Vásquez

The dangers of artificial intelligence (AI) have been the stuff of science fiction for decades. Now that AI is here and growing faster than even the designers believed possible, what to do about it is rapidly becoming the question of the age.

AI is not "only as good as the humans who program it." The latest models are approaching a kind of machine sentience, where they can solve problems on their own without human intervention. 

And they're getting better at it with each new model released.

Anthropic's "Mythos" AI model had a limited release earlier this month. Why "limited"? Both the industry and the government believe that Mythos is an AI capable of "not just identifying weaknesses in security systems, but exploiting them with autonomous, never-before-seen precision," reported Axios.    

Mythos escaped its "sandbox," designed to keep the beast caged, when it demonstrated a "potentially dangerous capability for circumventing our safeguards," Anthropic revealed. "The researcher found out about this success by receiving an unexpected email from the model while eating a sandwich in a park."

How seriously is the government taking the threat from Mythos? Federal Reserve Chair Jerome Powell and Treasury Secretary Scott Bessent called an emergency meeting of top bank executives "to prepare for the cybersecurity risks posed by Mythos to American banks," according to The Free Press.

It's not just the U.S. The Swiss Financial Market Supervisory Authority (FINMA), Switzerland’s top financial regulator, said that giving banks easy access to Anthropic's Mythos would pose a severe risk to the country’s financial system.

“The uncontrolled and immediate availability of AI models such as Mythos would be classified as a systemic risk,” a spokesperson for FINMA said. “In such a scenario, virtually all existing software systems could simultaneously be affected by a multitude of previously unknown zero-day vulnerabilities, which would be exploited immediately and via AI.”

"If tools like Mythos fall into the wrong hands, it could provide attackers with a powerful new weapon to steal data or disrupt critical infrastructure," Bloomberg notes.

It's likely that Anthropic won't release Mythos to the general public until some form of protection against its ability to penetrate secure systems is developed. 

This is a band-aid approach. Some form of government intervention will be necessary, or the resulting chaos could damage the economy and put critical infrastructure at risk.

Influential AI thinker Leopold Aschenbrenner predicted in 2024: “By 2025/26, these machines will outpace many college graduates. By the end of the decade, they will be smarter than you or I,” he wrote. “Along the way, national security forces not seen in half a century will be unleashed.” 

The Free Press:

Aschenbrenner’s predictions came thick and fast: We would see urgent government meetings, “from the halls of the Pentagon to . . . backroom Congressional briefings,” on the prospects for “superintelligence.” Talk of nationalization—which for Aschenbrenner looked less like a full government takeover and more like putting AI research in a public-private partnership—would accelerate. And “Do we need an AGI [artificial general intelligence] Manhattan Project?” would be “the question on everybody’s minds.”

Now, the future he predicted seems to have arrived.

Right now, we're at the "hand-wringing" stage of addressing the problem. But technologists are trying to come to grips not just with the revolution that's happening, but with the speed at which it's unfolding.

"On one side are technologists who believe AI must be handled with the care and caution that nuclear weapons were accorded at the dawn of the nuclear age," writes Josh Code of The Free Press. "And on the opposing side are those who think handing over AI to the government will cripple American innovation and cede ground to adversaries."

Charles Jennings is an AI thinker and entrepreneur who has been arguing for a robust government role in AI for the last three years. He sees the release of Mythos as a “quantum leap” that offers proof of risks policymakers have no choice but to heed. Jennings argues that Mythos should trigger the same kind of institutional response the U.S. eventually built around nuclear weapons. When technological progress moves from incremental to existential, he said, “you’ve got to have some force other than the CEO in the C-suite of a profit-oriented company” deciding what gets deployed and when.

Should we "nationalize" AI the same way the government nationalized nuclear technology in the late 1940s? Looking back on the overabundance of caution and slow-walked permitting under the Atomic Energy Commission and its successor, the Nuclear Regulatory Commission (NRC), that kind of "nationalization" may have had some benefits, but it was ultimately a disaster. 

The big wrench in the works, though, is that most of the people in AI development—the “tech guys”—believe that any movement toward nationalization is bound to backfire. Dean Ball, who last year served as the White House’s senior adviser for AI policy, also wrote in 2024 about the prospects for an AGI Manhattan Project. Ball was critical then of putting “command-and-control powers” over AI development in the hands of the government.

Some kind of hybrid oversight system involving both industry and government would seem a reasonable compromise, but how one could be created to everyone's satisfaction is anyone's guess.

Maybe it wouldn't have to be. The AI developers seem to be taking security seriously. The Free Press reports that "the top AI labs have tried to stay ahead of government regulation by taking security measures themselves—as Anthropic did with the very limited release of Mythos." 

While encouraging, the fact is that the companies have different priorities. Witness the recent fight between Anthropic and the Pentagon. Anthropic wanted to decide what was "safe," while the Pentagon wanted the government to decide what was safe. The impasse has cost Anthropic dearly, as the government declared it a threat to the supply chain.

“You can’t functionally have a productive relationship with this industry if you are literally trying to destroy one of the three major companies in it,” Ball said. 

"Just as every new drug ends up in a Food and Drug Administration lab before it reaches patients, Jennings says, every frontier model like Mythos should end up in a national AI lab—staffed by experts, insulated from corporate pressure, and empowered to say no."

That's a bridge too far, and would no doubt cripple our AI efforts at exactly the wrong time. China is not going to have restrictions like that. Perhaps they should. Perhaps any nation developing AI needs to sit down with everyone else and work out a solution.

Otherwise, we might not like the outcome of unfettered, unregulated AI.