AI & Automation

Why the World Needed MCP When We Already Had APIs

By Ginbok · 6 min read

A Question Worth Asking

APIs have been around for decades. REST, GraphQL, gRPC — we have mature, well-documented standards for connecting software systems. So when MCP (Model Context Protocol) appeared, a reasonable developer's first reaction was: "Do we really need another protocol?"

The answer is yes — and understanding why reveals something important about the shift happening in software right now.

What APIs Were Designed For

APIs were designed for developers writing code. When you consume a REST API, a human reads the documentation, understands the intent of each endpoint, writes authentication logic, maps response schemas, and handles errors. The human is the intelligent layer between the API and the application.

This works beautifully — when a human is in the loop.

But what happens when the consumer is not a human developer, but an AI?

The Problem: APIs Are Opaque to AI

An LLM looking at a raw API faces several hard problems:

1. Discovery

How does the AI know what endpoints exist? What they do? Which ones to call for a given goal? OpenAPI specs help, but they were written for human readers. An AI has to parse and interpret them with no guarantee of consistency across providers.

2. Intent vs. Structure

An API tells you how to call it — the method, the path, the parameters. It doesn't tell you when or why. A developer reads the docs and understands context. An AI needs that context explicitly encoded — and every API encodes it differently, or not at all.
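One way to picture the difference: an endpoint definition versus the same capability with its intent spelled out. Both snippets below are illustrative sketches, not a real API spec or a real MCP manifest, and the path and tool name are assumptions for the Azure DevOps example used later in this article:

```python
# The same capability, described two ways. The REST-style entry says HOW
# to call it; the MCP-style entry also says WHEN and WHY. Both invented.

rest_endpoint = {
    "method": "POST",
    "path": "/{organization}/{project}/_apis/wit/workitems/${type}",
    "contentType": "application/json-patch+json",
    # Nothing here tells an AI when this endpoint is the right choice.
}

mcp_tool = {
    "name": "create_work_item",
    "description": (
        "Creates a work item in Azure DevOps. Use this when the user "
        "reports a bug or requests a new task to track. Requires title "
        "and workItemType."
    ),
    # The intent a developer would read from the docs is encoded inline.
}
```

The structural half is the same either way; what MCP standardizes is where the intent lives.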

3. Integration Cost Multiplies

Connecting an AI to one API requires custom wrapper code: authentication, parameter mapping, error handling, retry logic. Connecting it to ten APIs means writing ten bespoke integrations. And every AI platform — Claude, GPT, Gemini — needs its own version of each integration.

The result: a combinatorial explosion of glue code that nobody wants to maintain.

What MCP Actually Is

MCP (Model Context Protocol), introduced by Anthropic, is a standardized protocol for AI-to-service communication. Think of it as the USB-C of AI integrations — one standard connector that works across devices.

Instead of an AI trying to understand and call a raw API, an MCP server sits in between:

AI ──▶ MCP server ──▶ Raw API (REST, GraphQL, gRPC)

The AI never has to know what the underlying REST endpoint looks like. That's the MCP server's job.

A Concrete Example

Suppose an AI needs to create a bug ticket in Azure DevOps.

Without MCP: The AI needs to know the correct REST endpoint, the right HTTP method, the authentication header format, the exact JSON body structure, which fields are required, and how to handle errors. A developer has to hardcode all of this — and redo it for every AI platform.

With MCP: The AI sees a tool called create_work_item with a description: "Creates a work item in Azure DevOps. Requires title and workItemType." The AI calls it with those two fields. The MCP server handles everything else.

The AI goes from needing to understand an entire API surface to needing to understand one tool's purpose.
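A minimal sketch of what the server side of that tool could look like, written without the official MCP SDK to stay self-contained. The tool name, schema, and the returned payload shape are illustrative assumptions; a real server would send the request to Azure DevOps rather than return it:

```python
# Simplified sketch of an MCP-style tool: the AI sees only the name,
# description, and input schema. The HTTP details stay in the handler.

TOOL_DEFINITION = {
    "name": "create_work_item",
    "description": ("Creates a work item in Azure DevOps. "
                    "Requires title and workItemType."),
    "inputSchema": {
        "type": "object",
        "properties": {
            "title": {"type": "string"},
            "workItemType": {"type": "string"},
        },
        "required": ["title", "workItemType"],
    },
}

def create_work_item(title: str, workItemType: str) -> dict:
    """Handler: translates the simple tool call into a raw REST request.

    A real server would build and send the authenticated request here
    (Azure DevOps expects a JSON Patch body) and map errors back into
    tool results. This sketch just returns the request it would send.
    """
    return {
        "method": "POST",
        "workItemType": workItemType,
        "body": [
            {"op": "add", "path": "/fields/System.Title", "value": title},
        ],
    }
```

Everything below `create_work_item`'s docstring is invisible to the AI; it only ever sees the two required fields in the schema.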

The Three Things MCP Standardizes

① Tool Discovery

MCP servers self-describe. When an AI connects to an MCP server, it receives a manifest: here are my tools, here is what each one does, here are the parameters. The AI can immediately begin using them with no additional context from a developer.
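That manifest can be modeled as plain data. The shape below follows MCP's JSON-RPC `tools/list` response in simplified form; the two tools listed are invented for illustration:

```python
# What an AI receives when it connects: a self-describing tool manifest.
# MCP transports this over JSON-RPC 2.0 as the response to "tools/list".

tools_list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "create_work_item",
                "description": "Creates a work item in Azure DevOps.",
                "inputSchema": {
                    "type": "object",
                    "properties": {
                        "title": {"type": "string"},
                        "workItemType": {"type": "string"},
                    },
                    "required": ["title", "workItemType"],
                },
            },
            {
                "name": "list_work_items",
                "description": "Lists work items matching a query.",
                "inputSchema": {
                    "type": "object",
                    "properties": {"query": {"type": "string"}},
                    "required": ["query"],
                },
            },
        ]
    },
}

def tool_names(response: dict) -> list[str]:
    """Extract the callable tool names from a tools/list response."""
    return [tool["name"] for tool in response["result"]["tools"]]
```

No documentation parsing, no provider-specific interpretation: the names, descriptions, and parameter schemas arrive in one consistent shape.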

② A Common Language

AI and services now speak through one protocol. Whether the underlying service is Azure DevOps, a CMS, a database, or a file system — the AI's interface is always the same. MCP servers handle the translation to whatever proprietary API lives underneath.

③ Write Once, Use Everywhere

An MCP server built for a service works with any AI that supports MCP — Claude, GPT, Gemini, or open-source models. Build the server once; every AI benefits. This is the opposite of the current state, where each AI platform requires its own custom integration.

How This Changes the Architecture of AI Systems

Before MCP, building an AI that could take actions in the world looked like this:

AI ──── custom code ────▶ Service A
AI ──── custom code ────▶ Service B  
AI ──── custom code ────▶ Service C

With MCP, it looks like this:

AI ──▶ MCP ──▶ Service A
AI ──▶ MCP ──▶ Service B
AI ──▶ MCP ──▶ Service C

The AI's job is to decide which tool to use and when. The MCP server's job is to execute it correctly. Separation of concerns — a principle as old as software engineering itself, now applied to AI integration.
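That separation can be sketched as a toy dispatch loop. The registry and handlers below are hypothetical stand-ins, and a real MCP server would receive `tools/call` requests as JSON-RPC over stdio or HTTP rather than direct function calls:

```python
# Toy separation of concerns: the "AI side" only picks a tool name and
# arguments; the "server side" owns routing and execution.

def handle_service_a(args: dict) -> str:
    # Stand-in for the code that would talk to Service A's real API.
    return f"service A handled {args}"

def handle_service_b(args: dict) -> str:
    # Stand-in for the code that would talk to Service B's real API.
    return f"service B handled {args}"

# One registry per MCP server: tool name -> handler.
REGISTRY = {
    "tool_a": handle_service_a,
    "tool_b": handle_service_b,
}

def call_tool(name: str, arguments: dict) -> str:
    """Server side of a tools/call request: route to a handler and run it."""
    handler = REGISTRY.get(name)
    if handler is None:
        raise ValueError(f"unknown tool: {name}")
    return handler(arguments)
```

The AI's decision ("call `tool_a` with these arguments") and the server's execution never mix, which is exactly the boundary the diagrams above draw.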

The Deeper Shift

APIs were built for a world where humans consume software. A developer reads docs, writes code, and builds a product that humans use.

MCP is built for a world where AI consumes software. The AI reads tool descriptions, decides what to call, and takes action — autonomously, in real time, in pursuit of a goal.

This is not a minor tweak. It's a fundamental change in who the consumer of software is. And when the consumer changes, the interface has to change with it.

APIs answered the question: "How do systems talk to each other?"

MCP answers the question: "How does an AI agent act in the world?"

Both questions matter. They just have different answers.

Summary

API is the road. MCP is the GPS that lets an AI drive on it.
#mcp #api #ai #agent #automation #llm