In a rare show of alignment between top AI rivals, Google is joining OpenAI in adopting Anthropic's Model Context Protocol (MCP), a new open standard that allows AI models to connect seamlessly with external data systems.
The move, announced on X by Google DeepMind CEO Demis Hassabis, marks a significant endorsement of the Anthropic-developed protocol, which aims to streamline how AI models interact with tools, databases, and real-world applications.
“MCP is a good protocol and it’s rapidly becoming an open standard for the AI agentic era,” Hassabis wrote. “Look forward to developing it further with the MCP team and others in the industry.”
While Hassabis didn't give a specific rollout date, he said that Gemini, Google's flagship AI model family, and its software development kit (SDK) will soon support MCP.
Originally open-sourced by Anthropic, the Model Context Protocol lets AI agents fetch information from, and exchange it with, apps, enterprise systems, content repositories, and development environments. It effectively gives AI models the context they need to complete sophisticated tasks, enabling two-way communication between tools (exposed via MCP servers) and AI-powered apps (which connect via MCP clients).
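To make the server side of that split concrete, here is a minimal sketch of an MCP server, assuming the official MCP Python SDK and its FastMCP helper; the ticket-lookup tool and its hard-coded response are hypothetical stand-ins for a real enterprise data source.

```python
# Minimal MCP server sketch (assumes the official `mcp` Python SDK is installed).
# The ticket-lookup tool below is a hypothetical example, not a real data source.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("support-tickets")

@mcp.tool()
def get_ticket(ticket_id: str) -> str:
    """Return a summary of a support ticket by ID (stubbed for illustration)."""
    # A real server would query an enterprise system or database here.
    return f"Ticket {ticket_id}: status=open, priority=high"

if __name__ == "__main__":
    # Run the server so an MCP client can discover and call the tool.
    mcp.run()
```

On the other side of the connection, an MCP client embedded in an AI-powered app discovers the server's tools with a tools/list request and invokes them with tools/call, which is what makes the two-way communication described above possible.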
With Google and OpenAI now both on board, MCP is quickly shaping up to be a unifying layer for AI interoperability — much like how HTML became the universal standard for the web.
Since Anthropic released MCP to the public, the protocol has been adopted by notable platforms and developers including Block, Apollo, Replit, Codeium, and Sourcegraph.
By embracing the same framework, major players like Google and OpenAI are making it easier for developers to build once and run everywhere, potentially reducing fragmentation in the fast-evolving AI landscape.
As AI agents become more capable — managing emails, writing code, and automating business tasks — protocols like MCP are poised to become the plumbing behind powerful new workflows. Google’s endorsement not only strengthens MCP’s claim to becoming the industry standard but also signals a maturing AI ecosystem where collaboration on infrastructure may take priority over competition on models.
This move could also pave the way for a more modular and composable future of AI, where different tools, models, and systems can plug into each other more seamlessly — no matter who builds them.