Anthropic is proposing a new standard for connecting AI assistants to the systems where data resides.
Anthropic calls it the Model Context Protocol, or MCP, and says the standard, which the company is now making available as open source, could help AI models produce better, more relevant answers to queries.
With MCP, models – any model, not just Anthropic's – can pull in data from sources such as business tools and software to complete tasks, as well as from content repositories and app development environments.
“As AI assistants become more widespread, the industry has invested heavily in model capabilities, making rapid advances in reasoning and quality,” Anthropic wrote in a blog post. “But even the most sophisticated models are limited by their isolation from data – they’re trapped behind information silos and legacy systems. Every new data source requires its own custom implementation, making truly connected systems difficult to scale.”
MCP supposedly solves this problem through a protocol that lets developers build two-way connections between data sources and AI-powered applications (e.g. chatbots). Developers can expose their data through “MCP servers” and build “MCP clients” – such as apps and workflows – that connect to those servers on command.
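To make the server side of that concrete, here is a minimal sketch written against the MCP Python SDK's FastMCP helper. The server name and the example tool are purely illustrative assumptions, not anything from Anthropic's announcement:

```python
# Minimal MCP server sketch using the Python SDK's FastMCP helper.
# "demo-data-source" and lookup_order are illustrative, not real Anthropic examples.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-data-source")

@mcp.tool()
def lookup_order(order_id: str) -> str:
    """Return a (fake) order record so a connected model can answer questions about it."""
    return f"Order {order_id}: shipped, 2 items"

if __name__ == "__main__":
    # Serve over stdio so an MCP client (such as a desktop chat app) can launch and query it.
    mcp.run()
```

An MCP client can then launch this process, discover the tools it exposes, and call them on a model's behalf.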
Here's a quick demo using the Claude desktop app, where we've configured MCP:
Watch as Claude connects directly to GitHub, creates a new repo, and opens a PR via a simple MCP integration.
Once MCP was set up in Claude Desktop, building this integration took less than an hour. pic.twitter.com/xseX89Z2PD
— Alex Albert (@alexalbert__) November 25, 2024
Anthropic says companies including Block and Apollo have already integrated MCP into their systems, while developer-tool companies including Replit, Codeium and Sourcegraph are adding MCP support to their platforms.
“Instead of maintaining separate connectors for each data source, developers can now build against a standard protocol,” Anthropic wrote. “As the ecosystem matures, AI systems will maintain context as they move between different tools and datasets, replacing today’s fragmented integrations with a more sustainable architecture.”
Developers can start building with MCP connectors today, and subscribers to Anthropic's Claude Enterprise plan can connect the company's Claude chatbot to their internal systems via MCP servers. Anthropic has released prebuilt MCP servers for enterprise systems such as Google Drive, Slack and GitHub, and says it will soon provide toolkits for deploying production MCP servers that can serve entire organizations.
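For the client side of the protocol, a sketch of how a developer might connect to an MCP server with the same Python SDK could look like the following. The server command, script name, and tool name are assumptions carried over from the earlier illustrative server:

```python
# Minimal MCP client sketch using the Python SDK.
# The launched script and the tool name are illustrative assumptions.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server_params = StdioServerParameters(command="python", args=["demo_server.py"])

async def main() -> None:
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()          # protocol handshake
            tools = await session.list_tools()  # discover what the server exposes
            print([tool.name for tool in tools.tools])
            result = await session.call_tool("lookup_order", arguments={"order_id": "42"})
            print(result)

if __name__ == "__main__":
    asyncio.run(main())
```

In practice, a chatbot or IDE assistant would play the client role, routing a model's tool calls to whichever MCP servers the user has configured.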
“We are committed to building MCP as a collaborative, open source project and ecosystem,” Anthropic wrote. “We invite (developers) to shape the future of context-aware AI together.”
MCP sounds like a good idea in theory. But it's far from clear that it will gain much traction, particularly with competitors like OpenAI, which would surely prefer that customers and ecosystem partners use its own data-connecting approaches and specifications.
In fact, OpenAI recently added a data-connecting feature to ChatGPT, its AI-powered chatbot platform, that lets ChatGPT read code in developer-focused coding apps – similar to the use cases MCP enables. OpenAI has said it plans to bring the feature, called Work with Apps, to other types of apps in the future, but it is pursuing implementations with select partners rather than open-sourcing the underlying technology.
It also remains to be seen whether MCP is as useful and powerful as Anthropic claims. For example, the company says MCP can enable an AI bot to “better retrieve relevant information to better understand the context surrounding a coding task,” but it doesn’t offer any benchmarks to support this claim.