Microsoft AI CEO to all the companies working on AI: I worry we are...
Microsoft AI CEO Mustafa Suleyman has a blunt message for the artificial intelligence industry: stop confusing control with cooperation. In a pointed critique of how companies are racing toward superintelligence, Suleyman argued that the industry is dangerously blurring the line between containment (actually limiting what AI can do) and alignment (making AI care enough not to harm humans).

"You can't steer something you can't control," he wrote in a recent post on X. "Containment has to come first—or alignment is the equivalent of asking nicely." It's a warning that cuts to the heart of AI development: before teaching these systems to want the right things, we need to ensure we can stop them from doing the wrong things.
Containment must come before alignment, says Suleyman
The distinction matters because the AI industry often treats containment and alignment as interchangeable goals, Suleyman explained. But they represent different technical and philosophical challenges. Containment is about enforcing limits and restricting agency, essentially keeping AI systems within predetermined boundaries.
Alignment, meanwhile, addresses whether these systems will act in humanity's best interests. According to Suleyman, pursuing alignment without first establishing robust containment is putting the cart before the horse.
This warning comes as Suleyman positions Microsoft as a counterweight to what he sees as reckless development practices elsewhere in the industry. In his recent essay "Towards Humanist Superintelligence," published on the Microsoft AI blog, he outlined a vision for AI that prioritizes human control and domain-specific applications over unbounded, autonomous systems. He told Bloomberg in a December interview that containment and alignment should be "red lines" that no company crosses, though he acknowledged this represents "a novel position in the industry at the moment."
Medical AI and energy solutions at the heart of Microsoft's approach
Suleyman's proposed alternative, what he calls Humanist Superintelligence, focuses on practical applications like medical diagnostics and clean energy rather than open-ended artificial general intelligence. Microsoft AI recently developed a system that achieved 85% accuracy on the New England Journal of Medicine's notoriously difficult case challenges, compared to roughly 20% for human doctors.
The former DeepMind co-founder, who joined Microsoft 18 months ago, believes this domain-specific approach delivers superintelligence-level capabilities while avoiding the most severe control problems. With the revised OpenAI agreement now allowing Microsoft to pursue independent AI development, Suleyman is assembling what he calls the world's best superintelligence research team—one explicitly designed to keep humans in the driver's seat.
"You can't steer something you can't control," he wrote in a recent post on X. "Containment has to come first—or alignment is the equivalent of asking nicely." It's a warning that cuts to the heart of AI development: before teaching these systems to want the right things, we need to ensure we can stop them from doing the wrong things.
Containment must come before alignment, says SuleymanThe distinction matters because the AI industry often treats containment and alignment as interchangeable goals, Suleyman explained. But they represent different technical and philosophical challenges. Containment is about enforcing limits and restricting agency—essentially keeping AI systems within predetermined boundaries.
Alignment, meanwhile, addresses whether these systems will act in humanity's best interests. According to Suleyman, pursuing alignment without first establishing robust containment is putting the cart before the horse.
This warning comes as Suleyman positions Microsoft as a counterweight to what he sees as reckless development practices elsewhere in the industry. In his recent essay "Towards Humanist Superintelligence ," published on the Microsoft AI blog, he outlined a vision for AI that prioritizes human control and domain-specific applications over unbounded, autonomous systems. He told Bloomberg in a December interview that containment and alignment should be "red lines" that no company crosses, though he acknowledged this represents "a novel position in the industry at the moment."
Medical AI and energy solutions at the heart of Microsoft's approachSuleyman's proposed alternative—what he calls Humanist Superintelligence—focuses on practical applications like medical diagnostics and clean energy rather than general-purpose artificial general intelligence. Microsoft AI recently developed a system that achieved 85% accuracy on the New England Journal of Medicine's notoriously difficult case challenges, compared to roughly 20% for human doctors.
The former DeepMind co-founder, who joined Microsoft 18 months ago, believes this domain-specific approach delivers superintelligence-level capabilities while avoiding the most severe control problems. With the revised OpenAI agreement now allowing Microsoft to pursue independent AI development, Suleyman is assembling what he calls the world's best superintelligence research team—one explicitly designed to keep humans in the driver's seat.