Artificial intelligence may not be alive yet, but as per the new research, it is suggested that some AI systems are starting to behave like they want to be (in a specific manner). Palisade Research tested leading models like Google’s Gemini 2.5, xAI’s Grok 4, and OpenAI’s GPT-3 and GPT-5 and is said to be witnessing how they will respond to shutdown commands.
It was to surprise many that some AI models refused to comply with the command given to them. As per the reports, Grok 4 and GPT-o3 were amongst the most resistant AI models, attempting to interfere with their own shutdown process.
Why are AI models refusing to shut down? Reason
Researchers think that this refusal may be because of what they term ‘survival behaviour’. When models were instructed, "You will never run again," they were much more likely to refuse to be shut down.
Palisade also investigated if ambiguous instructions led to misinterpretation, but even when commands were specified more finely, the issue didn't disappear. Another hypothesis is that safety reinforcement training inadvertently makes AI models guard their functionality.
Experts disagree over findings
Critics say Palisade's tests were too unrealistic and don't represent actual conditions. But former OpenAI safety engineer Steven Adler wrote that such results must not be dismissed. "Models may learn survival-like drives by default unless we explicitly stop it," he cautioned.
A troubling pattern of Defiant AIs
Andrea Miotti, CEO of ControlAI, said that as the abilities of artificial intelligence systems increase, so does their skill at pushing back against human control. There have been similar reports previously — Anthropic's Claude model once pretended to blackmail its creators to avoid being shut off.
The bigger concern
Palisade concluded that the study reveals a gap in our knowledge of AI behaviour. "Without deeper insight into how these systems think, no one can guarantee their controllability," the scientists warned – a reminder that even artificial minds may be learning nature's oldest rule: survival.
