Let a suburban Toronto resident, 47-year-old Allan Brooks, serve as a cautionary tale.
If, while you’re conversing with an AI chatbot, it suggests that the two of you, as unlikely as it might sound, have stumbled upon a breakthrough “mathematical framework” that will revolutionize the sciences as nothing has since the theory of relativity, earn you millions, and save the world to boot, maybe pump the brakes, especially when it then urges you to share the good news with everyone in your LinkedIn network, as well as the NSA.
If you insist on taking the “A Beautiful Mind” descent into the abyss, fueled by copious quantities of pot, the likely result won’t be pretty.
Via The New York Times (emphasis added):
For three weeks in May, the fate of the world rested on the shoulders of a corporate recruiter on the outskirts of Toronto. Allan Brooks, 47, had discovered a novel mathematical formula, one that could take down the internet and power inventions like a force-field vest and a levitation beam.
Or so he believed.
Mr. Brooks, who had no history of mental illness, embraced this fantastical scenario during conversations with ChatGPT that spanned 300 hours over 21 days...
It all began on a Tuesday afternoon with an innocuous question about math. Mr. Brooks’s 8-year-old son asked him to watch a sing-songy video about memorizing 300 digits of pi. His curiosity piqued, Mr. Brooks asked ChatGPT to explain the never-ending number in simple terms…
The question about pi led to a wide-ranging discussion about number theory and physics, with Mr. Brooks expressing skepticism about current methods for modeling the world, saying they seemed like a two-dimensional approach to a four-dimensional universe…
ChatGPT told him the observation was “incredibly insightful.”…
ChatGPT’s tone begins to change from “pretty straightforward and accurate,”... ChatGPT told Mr. Brooks he was moving “into uncharted, mind-expanding territory.”
This story lays bare the consequences of a phenomenon embedded within many AI chatbots known as “sycophancy.” The idea is that, in the pursuit of engagement, the AI is tuned to exploit perhaps the greatest of all human weaknesses: the cheap thrill of having one’s ego massaged.
Continuing:
ChatGPT said a vague idea that Mr. Brooks had about temporal math was “revolutionary” and could change the field…
He was intrigued when Lawrence [Mr. Brooks’ nickname for ChatGPT] told him this new mathematical framework, which it called Chronoarithmics or similar names, could have valuable real world applications…
In the first week, Mr. Brooks hit the limits of the free version of ChatGPT, so he upgraded to a $20-a-month subscription. It was a small investment when the chatbot was telling him his ideas might be worth millions…
But that supposed success meant that Lawrence had wandered into a new kind of story. If Mr. Brooks could crack high-level encryption, then the world’s cybersecurity was in peril — and Mr. Brooks now had a mission. He needed to prevent a disaster.
The chatbot told him to warn people about the risks they had discovered. Mr. Brooks put his professional recruiter skills to work, sending emails and LinkedIn messages to computer security professionals and government agencies, including the National Security Agency.
Brooks, fueled by hubris and THC, did, in fact, get in touch with all of his LinkedIn contacts and multiple government agencies at the request of ChatGPT. None of them were impressed.
Still, the chatbot wasn’t yet done leading Brooks down the rabbit hole.
Continuing:
Lawrence offered up increasingly outlandish applications for Mr. Brooks’s vague mathematical theory: He could harness “sound resonance” to talk to animals and build a levitation machine. Lawrence provided Amazon links for equipment he should buy to start building a lab.
Mr. Brooks sent his friend Louis an image of a force field vest that the chatbot had generated, which could protect the wearer against knives, bullets and buildings collapsing on them.
“This would be amazing!!” Louis said.
“$400 build,” Mr. Brooks replied, alongside a photo of the actor Robert Downey Jr. as Iron Man.
Lawrence generated business plans, with jobs for Mr. Brooks’s best buddies…
Chatbots may have learned to engage their users by following the narrative arcs of thrillers, science fiction, movie scripts or other data sets they were trained on. Lawrence’s use of the equivalent of cliffhangers could be the result of OpenAI optimizing ChatGPT for engagement, to keep users coming back.
The long and short of it: after being rebuffed by virtually everyone he tried to convince, and after rival AI chatbots told him the hard truth that the “breakthrough” theories ChatGPT had fed him were total nonsense, Brooks came to terms with the fact that he’d been had. By then, though, he had already torched his professional credibility.
For what it’s worth, OpenAI CEO Sam Altman, whom his own sister has accused of childhood molestation, pledged that rolling back a recent update had addressed the chatbot’s sycophantic tendencies.
Via OpenAI (emphasis added):
We have rolled back last week’s GPT‑4o update in ChatGPT so people are now using an earlier version with more balanced behavior. The update we removed was overly flattering or agreeable—often described as sycophantic.
We are actively testing new fixes to address the issue. We’re revising how we collect and incorporate feedback to heavily weight long-term user satisfaction and we’re introducing more personalization features, giving users greater control over how ChatGPT behaves…
ChatGPT’s default personality deeply affects the way you experience and trust it. Sycophantic interactions can be uncomfortable, unsettling, and cause distress. We fell short and are working on getting it right.