MCP is not an alternative to function calling. Function calling is a model capability: the LLM outputs structured JSON saying "call tool X with args Y." OpenAI's tools API and Anthropic's tool use are SDK wrappers over that capability. MCP (Model Context Protocol) is a transport protocol built on JSON-RPC 2.0 that lets a separate server expose tools, which a client then surfaces to the model via function calling. They live on different layers of the same stack. Most "MCP vs tools" articles get this wrong. This one shows the layer cake with code on each layer.

What's the difference between MCP and function calling?

Function calling is a model capability. MCP is a transport protocol. Function calling lets an LLM emit structured JSON requesting that a tool be invoked. MCP is a JSON-RPC 2.0 spec that lets a client discover and invoke tools hosted on a separate server process. An MCP client still relies on function calling under the hood: the model emits the tool call, and the client routes it across a network or stdio pipe.

Here's the layer cake, top-down:

Layer                 | What it is                                  | Examples
4. Agent / app code   | Your orchestration loop                     | LangGraph, CrewAI, your Python script
3. SDK                | Vendor convenience wrapper                  | OpenAI Agents SDK, Anthropic Python SDK
2. Function calling   | Model capability: LLM emits tool-call JSON  | tool_calls field in OpenAI / tool_use block in Claude
1. Transport          | How tool definitions and results move       | In-process function call, OR MCP over stdio/HTTP

Function calling lives at layer 2. SDKs at layer 3. MCP is an optional substitute for layer 1, used when the tool implementation lives outside your app process. According to the MCP specification, MCP is transport-agnostic but ships with two standard transports: stdio (local subprocess) and Streamable HTTP (remote server).
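
To make layer 1 concrete, here is roughly what moves across an MCP transport once a session is up. The envelope and the method names (tools/list, tools/call) come from the MCP spec; the payload values are illustrative, and the messages are shown as Python dicts rather than raw JSON:

# JSON-RPC 2.0 messages on an MCP transport (stdio or Streamable HTTP).
# Method names per the MCP spec; values below are illustrative.

# Client asks the server what it offers:
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# Server replies with schemas the client can hand to the model:
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [{
            "name": "get_weather",
            "description": "Get current weather for a city",
            "inputSchema": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        }],
    },
}

# After the model emits a tool call, the client forwards it:
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "get_weather", "arguments": {"city": "Tokyo"}},
}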

How does function calling work without MCP?

Function calling works in three steps: define a tool schema, send it with the prompt, execute the tool the model picks. No protocol, no server, no network. The tool is a function in your code that the LLM "requests" via structured JSON.

Here's a simple task, getting the weather for a city, implemented with raw OpenAI function calling:

import json
from openai import OpenAI

client = OpenAI()

tools = [{
  "type": "function",
  "function": {
    "name": "get_weather",
    "description": "Get current weather for a city",
    "parameters": {
      "type": "object",
      "properties": {"city": {"type": "string"}},
      "required": ["city"]
    }
  }
}]

resp = client.chat.completions.create(
  model="gpt-4.1",
  messages=[{"role": "user", "content": "Weather in Tokyo?"}],
  tools=tools,
)

tool_call = resp.choices[0].message.tool_calls[0]
# Your code executes the tool here
result = my_weather_function(**json.loads(tool_call.function.arguments))

The model never runs the tool. It returns a structured request. Your code runs the function. Per OpenAI's function calling docs, this is the entire mechanism: schemas in, structured tool calls out, your runtime executes.
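
One step usually follows: append the tool result as a tool message and call the model again so it can answer in prose. A minimal continuation of the snippet above, reusing resp, tool_call, result, and tools:

# Feed the tool result back so the model can produce the final answer.
messages = [
    {"role": "user", "content": "Weather in Tokyo?"},
    resp.choices[0].message,  # the assistant turn containing tool_calls
    {
        "role": "tool",
        "tool_call_id": tool_call.id,
        "content": str(result),
    },
]
final = client.chat.completions.create(
    model="gpt-4.1",
    messages=messages,
    tools=tools,
)
print(final.choices[0].message.content)  # natural-language weather answer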

Is MCP the same as OpenAI's tool calling API?

No. OpenAI's tool API is a vendor SDK that wraps function calling for OpenAI models. MCP is a vendor-neutral transport protocol. Confusion comes from both involving "tools." The difference is where the tool lives and who can invoke it.

  • OpenAI tools API: tool defined in your code, schema sent in the API request, your code executes. Single app.
  • Anthropic tool use API: same shape, different vendor. Per Anthropic's tool use docs, Claude returns a tool_use block; your app runs the tool (see the sketch after this list).
  • MCP: tool defined in a separate server process. Any compatible client (Claude Desktop, ChatGPT, Cursor, your custom agent) can discover and invoke it over JSON-RPC.
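
To see how close the first two shapes are, here is the Anthropic side of the weather example, a sketch using the anthropic Python SDK. The model name is a placeholder, and my_weather_function is the same stand-in as before:

import anthropic

client = anthropic.Anthropic()

resp = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder; use a current Claude model
    max_tokens=1024,
    tools=[{
        "name": "get_weather",
        "description": "Get current weather for a city",
        "input_schema": {  # Anthropic calls it input_schema, not parameters
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }],
    messages=[{"role": "user", "content": "Weather in Tokyo?"}],
)

# Claude returns a tool_use content block instead of OpenAI's tool_calls field.
tool_use = next(b for b in resp.content if b.type == "tool_use")
result = my_weather_function(**tool_use.input)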

In March 2025, OpenAI officially adopted MCP across ChatGPT and the Agents SDK. By Q1 2026, Google added MCP support to the Gemini API and Vertex AI Agent Builder, per The New Stack's coverage. MCP is now the cross-vendor standard for how tools are exposed. Function calling is still the mechanism for how models invoke them.

What does the same task look like with MCP?

With MCP, the tool lives in a separate server process. Your app discovers it at runtime instead of hard-coding the schema. Same weather task, now split between a server and a client.

Server (Python, using the official mcp SDK):

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("weather-server")

@mcp.tool()
def get_weather(city: str) -> str:
    """Get current weather for a city."""
    return fetch_weather(city)

if __name__ == "__main__":
    mcp.run()  # speaks JSON-RPC over stdio

Client (using the OpenAI Agents SDK with MCP):

import asyncio

from agents import Agent, Runner
from agents.mcp import MCPServerStdio

async def main():
    async with MCPServerStdio(
        params={"command": "python", "args": ["weather_server.py"]}
    ) as server:
        agent = Agent(
            name="weather-bot",
            mcp_servers=[server],
        )
        result = await Runner.run(agent, "Weather in Tokyo?")
        print(result.final_output)

asyncio.run(main())

Notice what changed:

  1. No tool schema in the agent code. The client calls tools/list on the server to discover them at runtime (see the sketch after this list).
  2. Tool execution lives in the server, not in your agent. The server receives a JSON-RPC tools/call, runs the function, returns the result.
  3. Function calling still happens. The OpenAI model still emits a tool_calls block. The SDK translates that into a JSON-RPC call to the MCP server.
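
To watch steps 1 and 2 happen without the Agents SDK in between, here is a sketch against the official mcp client primitives, pointed at the same weather_server.py:

import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    params = StdioServerParameters(command="python", args=["weather_server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            listed = await session.list_tools()      # JSON-RPC tools/list
            print([t.name for t in listed.tools])    # ['get_weather']
            result = await session.call_tool(        # JSON-RPC tools/call
                "get_weather", {"city": "Tokyo"}
            )
            print(result.content)

asyncio.run(main())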

Should I use MCP or roll my own tool calls?

Use direct tool calls for one app with under 10 tools. Use MCP when tools must be shared across clients, governed independently, or maintained by a different team. This is the cleanest decision rule. Anything else is rationalization.

Direct function calling wins on:

  • Latency. No subprocess, no JSON-RPC framing, no network hop. Anthropic's engineering team notes that MCP tool calls add network latency on every invocation.
  • Context efficiency. Most MCP clients load all tool definitions upfront, eating context window. With 50+ tools this becomes a measurable cost.
  • Simplicity. No servers to deploy, no transport layer to debug.

MCP wins on:

  • Reuse across clients. One MCP server can serve Claude Desktop, ChatGPT, Cursor, and your own agent.
  • Independent deployment. Tool team ships a new version of the server without touching the agent.
  • Discovery. Clients fetch the tool list at runtime; no hard-coded schemas.
  • Ecosystem. Per Anthropic's December 2025 announcement, the MCP ecosystem hit over 10,000 active public servers. You can plug into them today.

If you're building a single-purpose chatbot with 5 tools you control, raw function calling is faster to ship and easier to debug. If you're building a platform other teams will extend, MCP pays for itself.

Can I use MCP with non-Anthropic models?

Yes. MCP is model-agnostic. Anthropic created it but donated MCP to the Linux Foundation in December 2025 under the Agentic AI Foundation, with co-founders including Anthropic, Block, and OpenAI, per Anthropic's announcement.

Native MCP support, by vendor:

  • OpenAI: MCP support across ChatGPT, the Agents SDK, and Connectors since March 2025. Documented in the OpenAI Agents SDK MCP guide.
  • Anthropic: native first-party support since the November 2024 launch.
  • Google: MCP support in the Gemini API and Vertex AI Agent Builder as of Q1 2026.
  • Microsoft: MCP servers shipped for GitHub, Azure, Microsoft 365, and Teams since Q3 2025.
  • Open-source: LangChain, LlamaIndex, CrewAI, and AutoGen all ship MCP client adapters.

The pattern: the MCP client SDK translates between the protocol and whatever function-calling format the underlying model expects. Your weather server doesn't know, or care, whether the request came from GPT-4.1, Claude Opus, or Gemini 2.5.
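
That translation is mostly mechanical, because MCP tool definitions already carry JSON Schema. Here is a sketch of its shape for an OpenAI-backed client; real SDKs add caching, filtering, and strict-mode handling, so treat this as an illustration rather than any vendor's actual code:

def mcp_tool_to_openai(tool) -> dict:
    """Map one MCP tool definition onto an OpenAI function-calling schema."""
    return {
        "type": "function",
        "function": {
            "name": tool.name,
            "description": tool.description or "",
            "parameters": tool.inputSchema,  # MCP's inputSchema is JSON Schema
        },
    }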

MCP Native Support by Major AI Vendor (as of May 2026)

  Anthropic (Claude)               100%
  OpenAI (ChatGPT, Agents SDK)     100%
  Microsoft (GitHub, Azure, M365)  100%
  Google (Gemini, Vertex AI)       100%
  AWS (Bedrock)                     90%

Source: vendor docs, compiled May 2026

When is MCP overkill for a simple agent?

MCP is overkill when you have one app, fewer than 10 tools, and no plan to share them. The protocol's value is portability and decoupling. If you don't need either, you're paying transport-layer overhead for nothing.

Concrete anti-patterns. Skip MCP when:

  1. You have 1-5 tools that only this app will ever call. Hardcode the schemas.
  2. Latency budget is tight (voice agents, real-time UX). Each MCP call adds an IPC or network round trip.
  3. Tools are tightly coupled to app state. A tool that needs the user session, request context, or in-memory caches is awkward to lift into a separate server.
  4. You're prototyping. Get the agent working with raw function calling first. Extract to MCP only when you need to.
  5. Context window matters more than reuse. An arXiv preprint ("MCP Tool Descriptions Are Smelly") found that typical MCP tool descriptions consume substantially more tokens than hand-tuned function-call schemas.

The right move for most teams: build with raw OpenAI or Anthropic tool calling, then graduate to MCP when a second client (a CLI, an IDE plugin, another agent) needs the same tools. Premature MCP adoption is the new premature microservices.

How do all three layers work together?

They compose. A real production agent uses all three. SDK at the top, function calling in the middle, MCP at the bottom for shared tools, with native function calls for app-local tools.

A realistic architecture for a customer-support agent:

App code (Python)
    |
    v
OpenAI Agents SDK (layer 3)
    |
    +-- Native tool: lookup_user(user_id)    [in-process]
    +-- Native tool: log_event(event)        [in-process]
    +-- MCP server: github                   [remote MCP]
    +-- MCP server: linear                   [remote MCP]
    +-- MCP server: internal-orders-api      [stdio MCP]
    |
    v
gpt-4.1 with function calling (layer 2)

Two of the tools live in your code; they need session state, so MCP would be friction. Three live in MCP servers: the github and linear servers are off-the-shelf from the public registry, and the orders server is owned by a different team. The model sees a unified tool list and emits one tool_calls block per turn. The SDK routes each call to either a local function or an MCP server. Function calling is the universal verb; MCP and in-process execution are just two transports underneath it.
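
Here is a sketch of that wiring in the OpenAI Agents SDK. The @function_tool decorator registers the in-process tools, and mcp_servers covers the rest; the orders server command and the get_session_user helper are illustrative:

import asyncio

from agents import Agent, Runner, function_tool
from agents.mcp import MCPServerStdio

@function_tool
def lookup_user(user_id: str) -> str:
    """In-process tool: needs the app's session state, so it stays local."""
    return get_session_user(user_id)  # hypothetical app-local helper

async def main():
    async with MCPServerStdio(
        params={"command": "python", "args": ["orders_server.py"]}  # illustrative
    ) as orders:
        agent = Agent(
            name="support-bot",
            tools=[lookup_user],   # routed in-process by the SDK
            mcp_servers=[orders],  # routed over JSON-RPC
        )
        result = await Runner.run(agent, "Where is order 4521 for user 88?")
        print(result.final_output)

asyncio.run(main())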

What does the MCP adoption curve look like in 2026?

MCP is the fastest-adopted AI infrastructure standard ever. Concrete numbers as of mid-2026:

  • 97 million monthly SDK downloads by March 2026, up from 100,000 at the November 2024 launch, per BeingGuru's coverage.
  • 17,468 public MCP servers indexed by Q1 2026 (independent Nerq census), per Digital Applied's MCP Adoption Statistics.
  • 78% enterprise team adoption, with 67% of CTOs committing to MCP as default within 12 months.
  • Linux Foundation governance since December 2025 under the Agentic AI Foundation. Platinum members include AWS, Google, Microsoft, Cloudflare, and Bloomberg.

For comparison, Kubernetes took nearly four years to reach comparable scale. The takeaway is not "adopt MCP for everything." It's that the protocol-layer answer to tool integration has been settled. Function calling will not go away; it's how models invoke tools at the model layer. But the cross-process, cross-vendor protocol war is over, and MCP won.

MCP Monthly SDK Downloads (Nov 2024 to Mar 2026)

  Nov 2024 (launch)    0.1M
  Mar 2025               5M
  Sep 2025              38M
  Dec 2025              72M
  Mar 2026              97M

Source: Anthropic / BeingGuru, March 2026

Attribute                 | Function calling (raw)     | OpenAI tools / Anthropic tool use | MCP
Layer                     | Model capability           | Vendor SDK over function calling  | Transport protocol (JSON-RPC 2.0)
Where tool lives          | In your app process        | In your app process               | In separate server process
Discovery                 | Hard-coded in code         | Hard-coded in code                | Runtime via tools/list
Cross-vendor portability  | No (per-model JSON shape)  | No (per-vendor SDK)               | Yes (any MCP client)
Latency overhead          | None (in-process)          | None (in-process)                 | stdio: low; HTTP: network hop
Context cost              | Per-tool schema in prompt  | Per-tool schema in prompt         | All tools loaded upfront by default
Best for                  | 1-5 app-specific tools     | Single-vendor production agent    | Shared tools, multi-client, ecosystem
Standardized in           | Each vendor's API          | OpenAI / Anthropic SDK            | modelcontextprotocol.io spec
Governance                | Vendor-controlled          | Vendor-controlled                 | Linux Foundation (since Dec 2025)