On November 25, 2024, Anthropic announced the Model Context Protocol (MCP): an open standard to connect AI models to data sources and external tools. The news barely escaped the early-adopter bubble, but for anyone developing software with AI assistants it is one of 2024's biggest shifts.
The problem MCP solves
Until 2024, every AI coding assistant integrated data sources through proprietary plugins: GitHub Copilot one way, Cursor another, ChatGPT yet another. The result was fragmentation: vendor X builds a Linear connector, and every other vendor has to redo it from scratch.
MCP proposes a single standard: an MCP server exposes resources and tools, and an MCP client (Claude Desktop, Claude Code, Cursor, etc.) connects to the server and surfaces them to the model. Same interface, reusable across every client.
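Under the hood, MCP messages are JSON-RPC 2.0. A hedged sketch of what a tool invocation looks like on the wire (the `tools/call` method and the `content` result array come from the MCP spec; the tool name and payload here are illustrative):

```typescript
// What an MCP tool invocation looks like at the JSON-RPC 2.0 level.
// "get-recent-errors" and its arguments are illustrative, not a real server.
const request = {
  jsonrpc: "2.0" as const,
  id: 1,
  method: "tools/call",
  params: {
    name: "get-recent-errors",
    arguments: { service: "checkout", limit: 100 },
  },
};

// A conforming server replies with a content array the client hands to the model.
const response = {
  jsonrpc: "2.0" as const,
  id: 1,
  result: {
    content: [{ type: "text", text: '[{"level":"error","msg":"500 on /checkout"}]' }],
  },
};

console.log(request.method, response.result.content[0].type);
```

The point of the standard is exactly this: any client that speaks `tools/call` can use any server that answers it, with no per-vendor glue.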
What it means in practice
It means we can:
- Give the AI read-only access to our database with a typed schema, without dumping data into the prompt.
- Connect the AI to Linear, GitHub, Slack, Notion in a standard way.
- Expose internal tools (build, test, deploy) as agent-usable tools.
- Have a controlled local filesystem visible to the agent with granular permissions.
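The last point, controlled filesystem access, boils down in practice to an allow-list check inside the server before it ever serves a resource. A minimal sketch, assuming the server holds a list of user-granted roots (the helper `isPathAllowed` is our invention, not part of MCP):

```typescript
// Hypothetical guard an MCP filesystem server could run before reading
// a file: resolve the requested path and require it to live under one
// of the roots the user explicitly granted.
import * as path from "node:path";

function isPathAllowed(requested: string, allowedRoots: string[]): boolean {
  const resolved = path.resolve(requested);
  return allowedRoots.some((root) => {
    const base = path.resolve(root);
    // path.relative escapes the root iff it starts with ".."
    const rel = path.relative(base, resolved);
    return rel !== ".." && !rel.startsWith(".." + path.sep) && !path.isAbsolute(rel);
  });
}

console.log(isPathAllowed("/home/dev/project/src/app.ts", ["/home/dev/project"])); // true
console.log(isPathAllowed("/etc/passwd", ["/home/dev/project"])); // false
```

Resolving before comparing matters: it is what stops `../` traversal tricks like `/home/dev/project/../secrets` from slipping past a naive prefix check.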
The first MCP server we wrote
For one of our clients we wrote an MCP server that exposes production logs filtered by service. Now when a dev asks Claude "why is the checkout API throwing 500?", the assistant queries the MCP server, gets the last 100 errors and proposes a diagnosis. Without MCP the flow would have been copy-pasting logs into the prompt, with all the exposure risks and token limits that entails.
// MCP server sketch (TypeScript SDK); `prisma` client assumed in scope
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

const server = new McpServer({ name: "prod-logs", version: "1.0.0" });

server.tool(
  "get-recent-errors",
  {
    service: z.string(),
    limit: z.number().int().positive().default(50),
  },
  async ({ service, limit }) => {
    // Read-only query: latest error-level logs for the given service
    const errors = await prisma.log.findMany({
      where: { service, level: "error" },
      orderBy: { timestamp: "desc" },
      take: limit,
    });
    // MCP tools return a content array, not a bare object
    return { content: [{ type: "text", text: JSON.stringify(errors) }] };
  },
);
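Tool results still count against the model's context window, so before returning we compact the rows instead of dumping raw JSON. A sketch of a hypothetical formatter (the field names mirror the log model above but are our assumption, and `formatErrors` is not part of the MCP SDK):

```typescript
// Hypothetical compaction step: one short line per log row keeps
// 100 errors far cheaper in tokens than a raw JSON dump.
type LogRow = { timestamp: string; service: string; message: string };

function formatErrors(rows: LogRow[]): string {
  return rows
    .map((r) => `${r.timestamp} [${r.service}] ${r.message}`)
    .join("\n");
}

const sample: LogRow[] = [
  { timestamp: "2024-11-25T10:02:11Z", service: "checkout", message: "HTTP 500: payment gateway timeout" },
  { timestamp: "2024-11-25T10:02:14Z", service: "checkout", message: "HTTP 500: payment gateway timeout" },
];
console.log(formatErrors(sample));
```

The text that comes out of this is what goes into the `text` field of the tool's `content` array.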
What changes for developers
The biggest shift is cultural: the AI stops being a disconnected brain that consumes text and becomes an agent operating in the project's real context. It knows what is in the database, what is in tickets, what happened in production an hour ago.
For us, building software for others, it is also a commercial shift: our clients will be able to have internal tools where the AI actually talks to their data, in a controlled and auditable way.
What not to expect
MCP does not replace the agent. It is a protocol, not a brain. Answers are still only as good as the model generating them, and bugs are still bugs. What changes is the integration layer underneath: previously fragmented across vendors, now shared.
For the next 12 months, MCP is the most interesting thing to track in AI tooling. Expect a proliferation of MCP servers — official and community — and a standardisation of integration patterns.