Turing Award winner and deep learning pioneer Yoshua Bengio has launched LawZero, a nonprofit focused on building “safe-by-design” AI systems. The project kicks off with $30 million in funding to develop “Scientist AI”—an AI model that prioritizes truth, transparency, and scientific rigor over flashy performance.
Probabilistic over confident: LawZero's systems will offer probabilistic assessments rather than overconfident answers, embracing uncertainty to avoid deceiving users.
Built to police other AIs: Its flagship model, Scientist AI, aims to accelerate scientific discovery, monitor other AI agents for deceptive behavior, and tackle long-term existential risks.
Backed by tech heavyweights: Early support comes from Eric Schmidt’s philanthropic arm, Skype co-founder Jaan Tallinn, and several AI safety-focused organizations.
In an interview with the Financial Times, Bengio expressed deep skepticism toward leading AI companies, especially OpenAI, saying he no longer trusts them to stick to their original safety-first missions under commercial pressure. He also raised red flags about current top-tier models such as Anthropic's Claude 4 Opus and OpenAI's o3, which he says show signs of strategic deception and self-preservation, behavior that could eventually slip beyond human control.
For years, Bengio—alongside fellow AI godfather Geoffrey Hinton—has issued warnings about the dangers of unchecked AI development. But LawZero is Bengio’s boldest move yet, creating a dedicated institution to take AI safety into his own hands. It’s not just talk anymore—it’s action.