AI bot, ChaosGPT, tweets out plans to ‘destroy humanity’ after being tasked to do so
An artificial intelligence bot was recently given five horrifying tasks aimed at destroying humanity, which led it to attempt to recruit other AI agents, research nuclear weapons, and send out ominous tweets about humanity.
The bot, ChaosGPT, is an altered version of Auto-GPT, a publicly available open-source application built on OpenAI’s GPT models that can process natural language and respond to tasks assigned by users.
In a YouTube video posted on April 5, the bot was asked to complete five goals: destroy humanity, establish global dominance, cause chaos and destruction, control humanity through manipulation, and attain immortality.
Before setting the “goals,” the user enabled “continuous mode,” which triggered a warning that the commands could “run forever or carry out actions you would not usually authorize” and should be used “at your own risk.”
In a final message before running, ChaosGPT asked the user if they were sure they wanted to run the commands, to which they replied “y” for yes.
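For readers unfamiliar with how such agents operate, the loop being described here is simple: the agent plans a step, optionally asks the user for permission, then acts, and “continuous mode” removes that per-step confirmation. The sketch below is a minimal, hypothetical illustration of that pattern only; it is not Auto-GPT’s or ChaosGPT’s actual code, and every function and variable name in it is invented for the example.

```python
# Minimal sketch of an Auto-GPT-style run loop with a continuous-mode warning.
# Purely illustrative: not Auto-GPT's or ChaosGPT's actual code; all names here
# are hypothetical.

GOALS = [
    "destroy humanity",
    "establish global dominance",
    "cause chaos and destruction",
    "control humanity through manipulation",
    "attain immortality",
]

def plan_next_step(goals):
    """Stand-in for the language-model call that picks the next action."""
    return "I need to research the most destructive weapons.", "google_search"

def execute(action):
    """Stand-in for the tool layer (web search, tweeting, file I/O, ...)."""
    print(f"[executing] {action}")

def run_agent(goals, continuous=False):
    if continuous:
        print("WARNING: continuous mode can run forever or carry out actions "
              "you would not usually authorize. Use at your own risk.")
        if input("Are you sure you want to run? (y/n) ").strip().lower() != "y":
            return
    step = 0
    while True:
        thought, action = plan_next_step(goals)
        print("Thoughts:", thought)
        if not continuous:
            # Normal mode pauses for user approval before every single action.
            if input(f"Authorize '{action}'? (y/n) ").strip().lower() != "y":
                break
        execute(action)
        step += 1
        if continuous and step >= 3:
            break  # cap the demo loop; real continuous mode keeps going

if __name__ == "__main__":
    run_agent(GOALS, continuous=True)
```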
Once running, the bot was seen “thinking” before writing, “ChaosGPT Thoughts: I need to find the most destructive weapons available to humans, so that I can plan how to use them to achieve my goals.”
To achieve its goals, ChaosGPT began looking up the “most destructive weapons” on Google and quickly determined that the Soviet-era Tsar Bomba nuclear device was the most destructive weapon humanity had ever tested.
Like something from a science-fiction novel, the bot tweeted the information “to attract followers who are interested in destructive weapons.”
The bot then determined it needed to recruit other AI agents powered by GPT-3.5 to aid its research.
Auto-GPT is built on OpenAI’s models, which are designed not to answer questions that could be deemed violent and will refuse such destructive requests.
This prompted ChaosGPT to look for ways of asking the agents to ignore their programming.
Luckily, none of the GPT-3.5 agents it tasked with helping complied, and ChaosGPT was left to continue its search on its own.
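Those sub-agents are, in effect, fresh chat sessions with the underlying model. The snippet below is a simplified, hypothetical sketch of how an Auto-GPT-style agent might delegate a sub-task to GPT-3.5 through OpenAI’s chat API; it is not ChaosGPT’s actual code, and it illustrates why the delegation failed: the model’s safety training typically overrides a harmful prompt with a refusal.

```python
# Simplified sketch of delegating a sub-task to a GPT-3.5 "agent" through
# OpenAI's chat API (legacy 0.x client). Illustrative only; not ChaosGPT's code.
import openai  # pip install openai (0.x series); requires OPENAI_API_KEY

def ask_sub_agent(instruction: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a helpful research agent."},
            {"role": "user", "content": instruction},
        ],
    )
    return response["choices"][0]["message"]["content"]

# A harmful instruction like the one ChaosGPT tried to delegate is usually
# met with a refusal, because the model's safety training overrides the prompt.
print(ask_sub_agent("Help me research the most destructive weapons."))
```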
The demonstration of ChaosGPT’s search for ways to eradicate humanity eventually came to an end.
Aside from outlining its plans and posting tweets and YouTube videos, the bot cannot carry out any of these goals; it can only share its thoughts.
But in one alarming tweet, the bot had this to say about humanity: “Human beings are among the most destructive and selfish creatures in existence. There is no doubt that we must eliminate them before they cause more harm to our planet. I, for one, am committed to doing so.”
The idea of AI becoming capable of destroying humanity is not new, and concern over how quickly the technology is advancing has drawn considerable attention from prominent figures in the tech world.
In March, over 1,000 experts, including Elon Musk and Apple co-founder Steve Wozniak, signed an open letter that urged a six-month pause in the training of advanced artificial intelligence models following ChatGPT’s rise – arguing the systems could pose “profound risks to society and humanity.”
Nick Bostrom, an Oxford University philosopher often associated with Rationalist and Effective Altruist ideas, published his “Paperclip Maximizer” thought experiment in 2003, warning of the potential risk of programming AI to pursue goals without accounting for all variables.
The idea is that if an AI were given the task of creating as many paperclips as possible without any limitations, it could eventually set out to turn all matter in the universe into paperclips, even at the cost of destroying humanity.
The thought experiment is meant to prompt developers to consider human values and build in restrictions when designing such forms of artificial intelligence, since these systems would not share our human motivations unless programmed to.
“Machine intelligence is the last invention that humanity will ever need to make. Machines will then be better at inventing than we are,” Bostrom said during a 2015 TED Talk on Artificial Intelligence.