Building Your First AI Agent with MCP: A Practical Guide
Most "AI" projects are still just API calls wrapped in if/else logic. True agentic AI gives the model real tools — file access, database queries, API calls — and lets it decide how to use them to accomplish a goal.
Model Context Protocol (MCP), developed by Anthropic, is the emerging open standard for connecting AI agents to those tools in a secure, structured way. In this guide you'll configure two MCP servers, write a simple agent, and automate a daily reporting task — using Claude, OpenAI, or a self-hosted Ollama model.
What is MCP and Why Does It Matter?
MCP defines a protocol between AI clients (Claude Desktop, custom agents) and MCP servers — small processes that expose capabilities like filesystem access, database queries, or API calls.
Think of it as USB-C for AI tools: any MCP-compatible model can connect to any MCP server without custom glue code. For data engineering teams this means your existing stack — Postgres, S3, REST APIs — becomes natively accessible to your agents. Swap Claude for OpenAI in a single config line. Add a new data source by adding a new server entry.
The Example Task: Automated Daily Report Agent
Scenario: Every morning, an agent queries your Postgres data warehouse, summarizes key metrics, detects anomalies, and posts a formatted report to Slack.
Without MCP you write custom Python adapters for each data source. They break when schemas change, and swapping the underlying LLM means rewriting the integration layer. With MCP, the agent handles tool selection — you just configure which servers are available.
Setting Up MCP Servers
Prerequisites: Python 3.11+, uvx (from the uv package manager), npx, a running Postgres instance, and a Slack bot token.
Save the following as mcp_config.json:
```json
{
  "mcpServers": {
    "database": {
      "command": "uvx",
      "args": ["mcp-server-postgres"],
      "env": {
        "POSTGRES_CONNECTION_STRING": "postgresql://user:pass@localhost:5432/metrics_db"
      }
    },
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/data/reports"]
    },
    "slack": {
      "command": "uvx",
      "args": ["mcp-server-slack"],
      "env": {
        "SLACK_BOT_TOKEN": "${SLACK_TOKEN}",
        "SLACK_TEAM_ID": "${SLACK_TEAM}"
      }
    }
  }
}
```
- `command` — the executable that starts the MCP server process
- `args` — arguments passed to the command; the server package name or path
- `env` — environment variables injected at runtime; keep secrets out of the config file by using `${VAR_NAME}` references
In this setup the agent's MCP client launches each server process itself from mcp_config.json. To smoke-test a server manually, run `uvx mcp-server-postgres` or `npx -y @modelcontextprotocol/server-filesystem /data/reports` and confirm it responds to the MCP initialize handshake before connecting your agent.
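Under the hood the handshake is plain JSON-RPC 2.0 over stdio: the client writes an `initialize` request to the server's stdin and the server replies with its capabilities. A sketch of the request wire format — the `protocolVersion` string shown is the 2024-11-05 spec revision, and the `clientInfo` values are placeholders:

```python
import json

# The first message an MCP client writes to the server's stdin,
# as newline-delimited JSON-RPC 2.0.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "daily-report-agent", "version": "0.1.0"},
    },
}

wire = json.dumps(initialize_request) + "\n"
```

A server that answers this with a matching `result` (its own capabilities and server info) is ready for tool calls.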
Connecting Your LLM
| LLM | Best For | MCP Support | Cost Model |
|---|---|---|---|
| Claude 3.5 Sonnet | Complex reasoning, long context, tool use | Native (Anthropic-built) | API pay-per-use |
| OpenAI GPT-4o | Broad ecosystem, function calling | Via LangChain or Agents SDK | API pay-per-use |
| Ollama (Llama 3.1 / Mistral) | On-prem, sensitive data, zero API cost | Via MCP client bridge | Self-hosted |
Claude connects to MCP servers natively through Claude Desktop or the anthropic Python SDK with an MCP client wrapper; Anthropic created the protocol, so support is first-party.
OpenAI models connect via LangChain's tool abstraction or the openai-agents SDK, which wraps MCP servers as function-calling tools.
Ollama works with any MCP-compatible LangChain setup. Point it at http://localhost:11434 and use the same tool wrappers. Ideal when data cannot leave your infrastructure.
The Agent System Prompt
The system prompt is where engineering discipline matters most. Vague prompts produce unreliable agents.
```
You are a daily metrics reporting agent for the engineering team.

You have access to three tools:
- database: query the metrics_db Postgres database
- filesystem: write reports to /data/reports/
- slack: post messages to Slack channels

Your task each morning:
1. Query the `daily_metrics` table for yesterday's data
2. Calculate: total_events, p99_latency_ms, error_rate_pct
3. Compare against the 7-day rolling average from `metrics_history`
4. If error_rate_pct > 2.0 OR p99_latency_ms > 500, flag as ANOMALY
5. Write a JSON report to /data/reports/YYYY-MM-DD.json
6. Post a summary to #data-alerts in Slack using this format:
   📊 Daily Metrics — {date}
   Events: {total_events:,} | P99: {p99}ms | Errors: {error_rate}%
   Status: {NORMAL | ⚠️ ANOMALY — {reason}}

Do not make up data. If a query fails, report the failure and stop.
```
Key principles: explicit output format, explicit anomaly thresholds, explicit failure behavior. The agent should never have to guess what "done" looks like.
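The same thresholds can also live in plain code, so the agent's output is checkable against a deterministic reference. A sketch of the prompt's anomaly rules and summary format — the function names are ours:

```python
def classify(error_rate_pct: float, p99_latency_ms: float) -> tuple[str, str]:
    """Apply the prompt's anomaly thresholds; returns (status, reason)."""
    reasons = []
    if error_rate_pct > 2.0:
        reasons.append(f"error rate {error_rate_pct}% > 2.0%")
    if p99_latency_ms > 500:
        reasons.append(f"p99 {p99_latency_ms}ms > 500ms")
    if reasons:
        return "ANOMALY", "; ".join(reasons)
    return "NORMAL", ""

def format_summary(date: str, total_events: int, p99: float, error_rate: float) -> str:
    """Render the Slack summary in the exact format the prompt specifies."""
    status, reason = classify(error_rate, p99)
    status_line = "NORMAL" if status == "NORMAL" else f"⚠️ ANOMALY — {reason}"
    return (
        f"📊 Daily Metrics — {date}\n"
        f"Events: {total_events:,} | P99: {p99}ms | Errors: {error_rate}%\n"
        f"Status: {status_line}"
    )
```

Running the agent's posted message through a checker like this is a cheap guard against format drift.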
Running and Scheduling the Agent
Run it manually first and inspect the tool call trace — most agent frameworks log each tool invocation and the model's reasoning. Fix the system prompt based on what you observe before automating.
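If your framework doesn't log tool calls for you, a thin wrapper makes the trace visible. A generic sketch, not tied to any particular agent SDK — `query_database` here is a stand-in for a real tool:

```python
import functools
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent.tools")

def traced(tool_fn):
    """Log each tool invocation: name, arguments, duration, and outcome."""
    @functools.wraps(tool_fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            result = tool_fn(*args, **kwargs)
            log.info("tool=%s args=%s kwargs=%s ok=True elapsed_ms=%.1f",
                     tool_fn.__name__, args, kwargs,
                     (time.perf_counter() - start) * 1000)
            return result
        except Exception as exc:
            log.info("tool=%s failed: %r", tool_fn.__name__, exc)
            raise
    return wrapper

@traced
def query_database(sql: str) -> list:
    # Stand-in for the real MCP database tool call.
    return [{"total_events": 12345}]
```

The resulting log lines give you the call-by-call trace to debug the prompt against.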
To schedule it:
Cron + Python:
```shell
# Run at 7am every weekday
0 7 * * 1-5 /usr/bin/python3 /opt/agents/daily_report_agent.py >> /var/log/daily_report.log 2>&1
```
Airflow DAG (if you already have an Airflow deployment):
```python
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule='0 7 * * 1-5', start_date=datetime(2025, 6, 1), catchup=False)
def daily_report_agent():
    @task
    def run_agent():
        import subprocess
        subprocess.run(['python3', '/opt/agents/daily_report_agent.py'], check=True)

    run_agent()


daily_report_agent()
```
For stateful multi-step agents — where one agent's output feeds another — LangGraph provides a clean graph-based orchestration layer that makes the flow inspectable and debuggable.
Next Steps
MCP makes your data stack AI-native without abandoning your existing infrastructure. The same Postgres instance, the same Slack workspace, the same report format — now driven by an agent instead of a cron script calling a Python function.
The next layer is production hardening: data quality validation before LLM consumption, observability on tool call rates and latency, and cost guardrails on API usage.
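A cost guardrail can start as nothing more than a token budget the agent checks before each API call. A minimal sketch; the class name and the per-token price are placeholders you would set for your model:

```python
class CostGuard:
    """Track estimated spend and refuse calls once a daily budget is hit."""

    def __init__(self, daily_budget_usd: float, usd_per_1k_tokens: float):
        self.daily_budget_usd = daily_budget_usd
        self.usd_per_1k_tokens = usd_per_1k_tokens
        self.spent_usd = 0.0

    def charge(self, tokens: int) -> None:
        """Record the cost of a call, or raise if it would exceed the budget."""
        cost = tokens / 1000 * self.usd_per_1k_tokens
        if self.spent_usd + cost > self.daily_budget_usd:
            raise RuntimeError(
                f"budget exceeded: ${self.spent_usd + cost:.2f} "
                f"> ${self.daily_budget_usd:.2f}"
            )
        self.spent_usd += cost
```

Call `charge()` with the token count before each LLM request; a runaway retry loop then fails loudly instead of burning the month's budget overnight.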
Building agents for production requires data quality guarantees, observability, and the right architecture. Book a strategy session with the Metadata Morph team and we'll design yours.