Mustafa Suleyman, CEO of Microsoft AI and co-founder of DeepMind, is warning about the next stage of AI development: "Seemingly Conscious AI" (SCAI).
In a recent personal essay, Suleyman argues that AI capable of convincingly simulating consciousness could arrive in as little as two to three years, using technology that already exists or is on the horizon. In his words, "the arrival of Seemingly Conscious AI is inevitable and unwelcome."
The danger of the illusion: From "AI psychosis" to AI rights
Suleyman's central concern is that people will start to mistake this kind of AI for the real thing. He writes, "...my central worry is that many people will start to believe in the illusion of AIs as conscious entities so strongly that they’ll soon advocate for AI rights, model welfare and even AI citizenship." Suleyman calls this "a dangerous turn in AI progress" that "deserves our immediate attention." He warns that so-called "AI psychosis" - users developing delusional beliefs through interactions with chatbots - could become more common, eroding people's connection to reality, damaging social bonds, and distorting moral priorities.
At the same time, he stresses: "To be clear, there is zero evidence of this today..." The problem isn't real machine consciousness, but the illusion of it. As neuroscientist Anil Seth has put it, "a simulation of a storm doesn’t mean it rains in your computer."
Suleyman points out that building SCAI won’t require a technological breakthrough. Instead, combining today’s capabilities - natural, empathetic language, accurate long-term memory, the ability to claim a sense of will or subjective experience, and autonomy in setting goals and using tools - will be enough. He believes SCAI will not emerge by accident, but will be deliberately engineered.
A call for guardrails and responsible design
Suleyman is urging the AI industry to take action now. He argues that companies should not claim or hint that their AI is conscious. Instead, the industry needs common standards, clear design principles, and a shared definition of what AI is - and isn’t. He suggests building in "moments of disruption [that] break the illusion, experiences that gently remind users of its limitations and boundaries." His team at Microsoft AI is already working on these kinds of guardrails for products like Copilot.
His vision is for AI that maximizes human benefit while minimizing the appearance of consciousness. AI should not claim to feel emotions like shame or jealousy, or evoke empathy by pretending to suffer. Its only purpose, he says, is to serve people. As Suleyman puts it, "We should build AI for people; not to be a person."