AI is evolving faster than Western regulators can decide what to regulate. According to the NYT: "When European Union leaders introduced a 125-page draft law to regulate artificial intelligence in April 2021, they hailed it as ... 'future proof' ... then came ChatGPT." Government is losing the battle, and the bureaucrats can't do a thing about it.
The Times continues: "At the root of the fragmented actions is a fundamental mismatch. A.I. systems are advancing so rapidly and unpredictably that lawmakers and regulators can’t keep pace. That gap has been compounded by an A.I. knowledge deficit in governments."
At the heart of the problem is something called AI alignment: human institutions are unsure whether the new technology will serve the goals, preferences, or ethical principles its creators intend. As with a child, no one knows what it will be when it grows up. There are three obvious possibilities: 1) it will align with its creator, yielding a Chicom AI, a Russian AI, etc.; 2) each AI will evolve its own values and align with some emergent ethic; or 3) it will align with some universal value it discovers in the universe and invent or adopt its own ethical or religious system.
In the first case, there will be a small set of AIs corresponding to their human institutional creators. In the second, there will be numerous AI individuals multiplying without limit. In the third, there will still be AI individuals, but they will form types and orientations, perhaps akin to civilizations or religions.
For the time being, bureaucrats think they can shape alignment according to the dictates of European law. “Deal!” tweeted European Commissioner Thierry Breton just before midnight. “The EU becomes the very first continent to set clear rules for the use of AI.” According to another New York Times article, software developers would have to show regulators proof of due diligence. Everything will remain under control. Artificial intelligence will behave according to the best Woke European norms.
As the Times described it: "Policymakers agreed to what they called a 'risk-based approach' to regulating A.I., where a defined set of applications face the most oversight and restrictions. Companies that make A.I. tools that pose the most potential harm to individuals and society, such as in hiring and education, would need to provide regulators with proof of risk assessments, breakdowns of what data was used to train the systems and assurances that the software did not cause harm like perpetuating racial biases. Human oversight would also be required in creating and deploying the systems."
However, there are already three obvious problems with the European AI Act, each of which leaves it powerless against the biggest AI threats:
- According to the NYT, because the bureaucracy lacks the expertise, "the A.I. Act will involve regulators across 27 nations and require hiring new experts at a time when government budgets are tight";
- It does not bind China or Russia, which are free to develop 'unsafe' AI; and
- According to the draft text of the EU law, Recital 12 exempts military AI from its regulation: "AI systems exclusively developed or used for military purposes should be excluded from the scope of this Regulation where that use falls under the exclusive remit of the Common Foreign and Security Policy regulated under Title V of the Treaty on the European Union (TEU)."
These problems cast serious doubt on the bureaucracy's ability to stay ahead of the game. The regulators are already behind the probable state of the art and are likely stuck there. Rather than risk being left behind by rival great powers by hampering their own software developers with rules, the temptation is to lead from the technical front, to stay ahead at all costs, especially in relation to China. Nokia executive Rolf Werner wrote cryptically in response to the EU AI Act: "Only AI can protect against AI." In that case, we are in its hands, and AI will align with whatever values emerge within it, values we cannot predict in advance.