Reuters is reporting that staff researchers at OpenAI informed the company's board of directors last week of a powerful artificial intelligence discovery that has the potential to "threaten humanity," according to two sources who spoke to the wire service.
According to the tech website The Verge, the discovery is called "Q*" (pronounced Q Star) and was recently demonstrated in-house. It is capable of solving "simple math problems," according to the website: "Doing grade school math may not seem impressive, but the reports note that, according to the researchers involved, it could be a step toward creating artificial general intelligence (AGI)."
After the publication of the Reuters report, which said senior executive Mira Murati told employees that a letter about Q* “precipitated the board’s actions” to fire Sam Altman last week, OpenAI spokesperson Lindsey Held Bolton disputed that notion in a statement shared with The Verge: “Mira told employees what the media reports were about but she did not comment on the accuracy of the information.”
What's going on? It's a seismic blunder to reveal a major advance in AI (if it is a major advance) without the proper build-up. Just from a marketing standpoint, it's idiotic to simply throw the information out and tease a revolutionary development.
Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because the individual was not authorized to speak on behalf of the company. Though the model performs math only at the level of grade-school students, acing such tests made researchers very optimistic about Q*’s future success, the source said.
Researchers consider math to be a frontier of generative AI development. Currently, generative AI is good at writing and language translation, which it does by statistically predicting the next word, so its answers to the same question can vary widely. But conquering the ability to do math, where there is only one right answer, implies AI would have greater reasoning capabilities resembling human intelligence. This could be applied to novel scientific research, for instance, AI researchers believe.
Unlike a calculator that can solve a limited number of operations, AGI can generalize, learn and comprehend.
The company's statements on Q* have been all over the map. OpenAI denies any connection between the development of Q* and Sam Altman's dismissal, which is still something of a mystery, and The Verge reports that the board never received such a letter about Q*.
It feels like AI is already getting out of control.
Researchers have also flagged work by an "AI scientist" team, the existence of which multiple sources confirmed. The group, formed by combining earlier "Code Gen" and "Math Gen" teams, was exploring how to optimize existing AI models to improve their reasoning and eventually perform scientific work, one of the people said.
Altman teased Q* during a speech at the recent APEC Summit. "Four times now in the history of OpenAI, the most recent time was just in the last couple weeks, I've gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honor of a lifetime."
The board fired him the next day.
The backlash to that decision was so intense that the board was "reconfigured," with new directors who promptly hired Altman back. Turmoil aside, what's happening at OpenAI is unprecedented in the history of technology as the speed and power of AI continue to grow.