Biden Will Unveil Executive Order Regulating AI on Monday

(AP Photo/Elise Amendola)

The Biden administration will publish an executive order on Monday that will mark the first major attempt to regulate artificial intelligence (AI).

Advances in AI are coming so fast that some experts believe we’re already behind the curve in hoping to control a technology that promises much but also could have the capacity to destroy us.


One thing is certain: the effort to regulate AI will substantially grow the size of the government.

The executive order will create several new agencies and task forces and, according to Politico, “pave the way for the use of more AI in nearly every facet of life touched by the federal government, from health care to education, trade to housing, and more.”

At the same time, the Oct. 23 draft order calls for extensive new checks on the technology, directing agencies to set standards to ensure data privacy and cybersecurity, prevent discrimination, enforce fairness and also closely monitor the competitive landscape of a fast-growing industry. The draft order was verified by multiple people who have seen or been consulted on draft copies of the document.

The White House did not reply to a request to confirm the draft.

Though the order does not have the force of law and previous White House AI efforts have been criticized for lacking enforcement teeth, the new guidelines will give federal agencies influence in the US market through their buying power and their enforcement tools. Biden’s order specifically directs the Federal Trade Commission, for instance, to focus on anti-competitive behavior and consumer harms in the AI industry — a mission that Chair Lina Khan has already publicly embraced.

Biden has pledged that he would make sure “America leads the way toward responsible AI innovation.” What exactly would “responsible AI innovation” look like?

Phil Siegel, the founder of the Center for Advanced Preparedness and Threat Response Simulation (CAPTRS), told Fox News Digital, "We should applaud the first step through the EO but quickly need a framework for the detailed steps beyond that truly safeguard our freedoms."

Doing so, Siegel argued, would require the administration to lean into what he called "four pillars" of regulation that would address concerns about AI safety. Pillar one, Siegel said, is to protect children and other vulnerable populations from "scams and other harms." The second is to pass new rules in the criminal justice code to ensure AI cannot be used as cover for criminals. The third, according to Siegel, is to ensure "fairness" by not allowing current biases to become embedded in AI data and models, while the fourth is to ensure there is a focus on "trust and safety" in AI systems that "includes agreement on how the systems are used and not used."

Not exactly Isaac Asimov's "Three Laws of Robotics," but it's a good start:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

This is an enormously complex undertaking with many moving parts. The risks are enormous, and the breadth of industries affected may be unprecedented. There are going to be mistakes. Some areas will be underregulated and others overregulated. In short, this is a regulatory process with stakes higher than perhaps any regulatory undertaking in the past.

We would do well to keep the regulatory process for AI out of the partisan food fights that usually roil Washington and instead offer constructive criticism on how best to protect our country and its citizens.
