Recently, Anthropic, the AI behemoth that most aggressively markets itself as a responsible steward of a potentially extinction-level technology (the implications of which are beyond the grasp of even those working most intimately to develop it), quietly walked back its commitment to delaying models that “could pose catastrophic risk.”
The move followed the cancellation of its contract with the Pentagon, ostensibly (but perhaps not solely) over disputes regarding the use of AI for automated weaponry and domestic surveillance.
Related: Chinese Communist Party Literally Names Its Domestic Surveillance Program 'Skynet'
Via Axios (emphasis added):
As large language models grow more powerful and less predictable, AI companies are loosening safety guardrails in the race to be first — a shift that some warn could lead to catastrophe…
Anthropic, long viewed as the most safety-focused major AI lab, last week revised a key safeguard — narrowing the conditions under which it would delay developing or releasing a model that could pose catastrophic risk.
"We will delay AI development and deployment as needed to achieve this, until and unless we no longer believe we have a significant lead," the revised policy says.
Anthropic's recalibration comes amid a dispute with the Trump administration.
The company refused to allow its models to be used for autonomous weapons or domestic surveillance. The Defense Department responded by cutting use of Claude and labeling the firm a supply chain risk.
That highlights another problem with competition. Even if one company refuses on safety grounds, another is likely to step in.
Related: TSA Rolls Out 'Voluntary' Face Scans at Over a Dozen American Airports
Mrinank Sharma, Anthropic’s now-departed “Head of the Safeguards Research Team,” resigned last month, citing what he described as a “whole series of interconnected crises unfolding in this very moment” related to the company’s AI work, crises that, in his view, his former employer is uninterested in mitigating whenever doing so might interfere with the business model.
As he ominously declared in his dramatic resignation letter, “the world is in peril.”
The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment.
We appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world lest we face the consequences. Moreover, throughout my time here, I’ve repeatedly seen how hard it is to truly let our values govern our actions… We constantly face pressures to set aside what matters most.
Today is my last day at Anthropic. I resigned.
— mrinank (@MrinankSharma) February 9, 2026
Here is the letter I shared with my colleagues, explaining my decision. pic.twitter.com/Qe4QyAFmxL
Related: LAPD Gets $278,000 Robot Dog, Dallas School District Adopts Pre-Crime Surveillance Technology
Mrinank, he informs his followers, is retiring into obscurity to “explore a poetry degree and devote myself to the practice of courageous speech,” whatever that means, adding he plans to “deepen my practice of facilitation, coaching, community building, and group work.”
So, you get paid (probably very handsomely) to help build what you yourself describe as an existential threat to humanity, and then you sanctimoniously announce you’re moving on to write poetry while posturing as some kind of moral paragon because of the deep concern you harbor for the welfare of humankind.
The moral preening these people perform is nauseating.