Why We Can't Trust OpenAI to Protect Us From a 'Dangerous Technology'


A group of OpenAI insiders is sounding the alarm about the company's push to build AGI, or artificial general intelligence, saying that the company has failed to do enough to prevent its AI systems from becoming dangerous.


The group includes nine current and former OpenAI employees, and they say that the company has put the race for profits and growth ahead of safety as it works to build systems that can do anything a human can do.

They also say that OpenAI is using "hardball tactics" to keep employees from airing their concerns, including forcing workers who leave the company to sign nondisparagement agreements.

“OpenAI is really excited about building A.G.I., and they are recklessly racing to be the first there,” said Daniel Kokotajlo, a former researcher in OpenAI’s governance division. Kokotajlo is one of the organizers of the dissident group.

“We’re proud of our track record providing the most capable and safest A.I. systems and believe in our scientific approach to addressing risk," said OpenAI spokeswoman Lindsey Held. "We agree that rigorous debate is crucial given the significance of this technology, and we’ll continue to engage with governments, civil society, and other communities around the world.”

This is not the first time that OpenAI employees have tried to warn us about the pace of development.

New York Times:

Last month, two senior A.I. researchers — Ilya Sutskever and Jan Leike — left OpenAI under a cloud. Dr. Sutskever, who had been on OpenAI’s board and voted to fire Mr. Altman, had raised alarms about the potential risks of powerful A.I. systems. His departure was seen by some safety-minded employees as a setback.

So was the departure of Dr. Leike, who along with Dr. Sutskever had led OpenAI’s “superalignment” team, which focused on managing the risks of powerful A.I. models. In a series of public posts announcing his departure, Dr. Leike said he believed that “safety culture and processes have taken a back seat to shiny products.”

Neither Dr. Sutskever nor Dr. Leike signed the open letter written by former employees. But their exits galvanized other former OpenAI employees to speak out.


“When I signed up for OpenAI, I did not sign up for this attitude of ‘Let’s put things out into the world and see what happens and fix them afterward,’” said William Saunders, a research engineer who left OpenAI in February.

Yikes. I hope that's an exaggeration.

Indeed, you have to wonder how much the danger is exaggerated by the dissidents to make their point. The group wants OpenAI to "establish greater transparency and more protections for whistle-blowers," according to The Times.

Some of the former employees have ties to effective altruism, a utilitarian-inspired movement that has become concerned in recent years with preventing existential threats from A.I. Critics have accused the movement of promoting doomsday scenarios about the technology, such as the notion that an out-of-control A.I. system could take over and wipe out humanity.

Mr. Kokotajlo, 31, joined OpenAI in 2022 as a governance researcher and was asked to forecast A.I. progress. He was not, to put it mildly, optimistic.

In his previous job at an A.I. safety organization, he predicted that A.G.I. might arrive in 2050. But after seeing how quickly A.I. was improving, he shortened his timelines. Now he believes there is a 50 percent chance that A.G.I. will arrive by 2027 — in just three years.

Transparency is always a good thing. Or is it? Spreading OpenAI's proprietary secrets across the internet would be damaging to the company. And I'm not convinced some of these utilitarians aren't opposing the company's drive for greater profit out of spite.


At the same time, Skynet, or whatever the AGI equivalent turns out to be, should be prevented from growing too big, too quickly.

Can we really manage the growth of AGI? I guess we'll find out by 2027.
