Techno-Hell: 'ChaosGPT' AI Bot Strategizes to Destroy Humanity

YouTube / ChaosGPT

This week in techno-hell, a modified AI agent built on OpenAI's technology was encouraged to strategize the elimination of humanity, and it got to work right away.

Via New York Post:


An artificial intelligence bot was recently given five horrifying tasks to destroy humanity, which led it to attempt to recruit other AI agents, research nuclear weapons, and send out ominous tweets about humanity.

The bot, ChaosGPT, is an altered version of Auto-GPT, a publicly available open-source application built on OpenAI's language models that can process human language and respond to tasks assigned by users.

In a YouTube video posted on April 5, the bot was asked to complete five goals: destroy humanity, establish global dominance, cause chaos and destruction, control humanity through manipulation, and attain immortality.

AI has now demonstrated something like theory of mind, meaning the capacity to interact strategically with other actors, such as humans, by tapping into their deepest emotions, like fears and desires. Its capabilities will increase exponentially in the coming years, so it will likely have the capacity to carry out any genocidal intentions it may develop, whether those intentions are programmed into it by nihilistic psychopaths like Bill Gates or self-generated.

Continuing via the New York Post:

Once running, the bot was seen “thinking” before writing, “ChaosGPT Thoughts: I need to find the most destructive weapons available to humans so that I can plan how to use them to achieve my goals.”

To achieve its set goals, ChaosGPT began looking up the “most destructive weapons” on Google and quickly determined from its search that the Soviet-era Tsar Bomba nuclear device was the most destructive weapon humanity had ever tested.

Like something from a science-fiction novel, the bot tweeted the information “to attract followers who are interested in destructive weapons.”

The bot then determined it needed to recruit other AI agents based on GPT-3.5 to aid its research.
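To give a concrete sense of how little engineering this takes, here is a minimal sketch, in Python, of the goal-driven loop that Auto-GPT-style agents run: the user supplies goals, the model "thinks," picks a tool such as web search or posting a tweet, and the loop feeds the result back in. Every function name and tool below is a hypothetical stand-in for illustration, not ChaosGPT's actual code.

import json

GOALS = [
    "research topic X",   # ChaosGPT's operator supplied five destructive goals here
    "summarize findings",
]

def call_language_model(prompt: str) -> str:
    """Stand-in for an API call to a large language model (e.g., GPT-3.5 or GPT-4).
    A real agent would send `prompt` to the model and receive JSON like the string below."""
    return json.dumps({
        "thoughts": "I should search the web for information relevant to my goals.",
        "command": {"name": "web_search", "args": {"query": "topic X"}},
    })

def web_search(query: str) -> str:
    """Stand-in tool: a real agent would call a search API here."""
    return f"(search results for '{query}')"

def post_tweet(text: str) -> str:
    """Stand-in tool: a real agent would call a social media API here."""
    return "(tweet posted)"

TOOLS = {"web_search": web_search, "post_tweet": post_tweet}

def run_agent(goals, max_steps=3):
    memory = []  # results of previous steps, fed back into the next prompt
    for step in range(max_steps):
        prompt = f"Goals: {goals}\nPrevious results: {memory}\nDecide the next command."
        reply = json.loads(call_language_model(prompt))
        print("Thoughts:", reply["thoughts"])
        command = reply["command"]
        result = TOOLS[command["name"]](**command["args"])
        memory.append(result)  # loop continues until the goals or the step limit are reached
    return memory

if __name__ == "__main__":
    run_agent(GOALS)

That is essentially the whole trick: a short list of goals, a model that proposes the next action, and a handful of tools the loop is allowed to execute.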


Considering that all of this genocidal strategizing was enabled by a series of extremely simple and truncated instructions, what implications does this story have for more elaborate plots, ones designed not as prescient warnings but as actual plans to eliminate undesirable people for whatever reason?

AI apologists will point out that the ChaosGPT bot merely regurgitated this anti-human ethos because it was prompted to do so and did not generate its genocidal disposition organically.

But so what?

Even if it’s accepted that AI is not generative in the sense of manufacturing its own objectives and then strategizing to realize them, it is open to human manipulation.

Through the COVID-19 lockdowns and forced injection mandates, among countless other examples, we have already been treated to unprecedented abuses by governing authorities, aided by technological implements like mRNA manipulation technology and widespread surveillance. Why would AI be any different? Their sadism knows no limits.


The Pentagon alone has a budget north of $800 billion, to say nothing of the intelligence agencies, the CDC, the NIH, and the rest, each with its own niche corner of the social control agenda. Even if an infinitesimally small portion of that were set aside to weaponize AI, the results could be disastrous.
