Stop Chatting With a Stranger — Make Your AI an Assistant That Knows Your Business

11 min read
Metadata Morph
AI & Data Engineering Team

Your AI assistant is only as useful as the context it can reach. Ask Claude to summarize your latest GitHub PR and it draws a blank — unless you've given it a way in. Ask ChatGPT to pull last week's Slack thread and it can't — unless it's connected.

Model Context Protocol (MCP) is the open standard that changes this. Originally developed by Anthropic, now adopted by OpenAI, Google DeepMind, and the broader AI ecosystem — it lets any compatible model connect to any compatible tool through a single, consistent interface. One protocol. Any model. Any tool.

What This Actually Looks Like

Before the setup guide, a concrete example of why this matters.

It's 8:47am. Jordan, a sales manager at a mid-size SaaS company, opens Claude Desktop. She has Slack, Notion, Linear, Zapier (connected to HubSpot), and Postgres all configured as MCP servers.

She types one message:

"Give me a quick brief before my 9am — what happened with the Acme Corp deal over the weekend, any product bugs reported that could affect the demo, and what's our current trial-to-paid conversion rate this month?"

Here's what happens in the next 12 seconds:

Claude queries Slack — searches #sales and #acme-corp for messages since Friday. Finds a thread where the AE flagged that the procurement contact changed and the contract review is delayed two weeks.

Claude queries Linear — pulls open bugs tagged severity:high from the last 72 hours. One match: a reported issue with CSV export in the reporting module — the exact feature Acme Corp asked about in the last call.

Claude queries Postgres — runs against the product analytics database. Trial-to-paid conversion this month: 18.4%, down from 22.1% in March.

Claude queries Notion — pulls the Acme Corp deal page for last logged notes and current stage.

Jordan gets back:

Acme Corp: Procurement contact changed (Slack, Friday 4:32pm — @mike flagged it). Contract review pushed ~2 weeks. Deal still active but timeline slipped.

Demo risk: High-severity bug logged Saturday on CSV export in reporting module (Linear #847). That's on their feature request list — worth flagging with engineering before the demo.

Conversion: 18.4% trial-to-paid this month, down from 22.1% in March. Three-week declining trend — may be worth pulling the drop-off stage data before your next team review.

Jordan didn't open Slack, Linear, Notion, or a SQL client. She didn't copy-paste anything. One question, four tools queried in parallel, one brief — work that would have taken 25 minutes assembled in seconds.

That's what connected context looks like in practice. Now here's how to build it.


Why MCP Changes the Equation

Before MCP, connecting an LLM to your business tools meant building a custom integration for each one. One adapter for Postgres, a separate one for GitHub, another for Slack. Each was bespoke, fragile, and tied to a specific model.

MCP replaces all of that with a single standard. An MCP server is a lightweight process that exposes a tool's capabilities — read files, run queries, post messages — through a well-defined interface. Your AI client connects to as many servers as you configure. Swap Claude for GPT-4o or a self-hosted Llama model and the servers don't change. Add a new data source and you add one new entry to the config file.

The result: integrations that used to take weeks now take an afternoon.
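
Under the hood, an MCP server speaks JSON-RPC 2.0 over a transport such as stdio. Real servers are built on the official SDKs and handle initialization, notifications, and schema validation; but the core request/response shape is simple. A stripped-down sketch of how a server answers tools/list and tools/call for a single toy tool (the echo tool and the handle_request dispatcher are illustrative, not part of any SDK):

```python
import json

# One toy tool, described the way MCP servers advertise capabilities.
TOOLS = [{
    "name": "echo",
    "description": "Echo back the provided text.",
    "inputSchema": {"type": "object", "properties": {"text": {"type": "string"}}},
}]

def handle_request(raw: str) -> str:
    """Dispatch one JSON-RPC 2.0 request and return the response as a JSON line."""
    req = json.loads(raw)
    if req["method"] == "tools/list":
        result = {"tools": TOOLS}
    elif req["method"] == "tools/call" and req["params"]["name"] == "echo":
        text = req["params"]["arguments"]["text"]
        result = {"content": [{"type": "text", "text": text}]}
    else:
        return json.dumps({"jsonrpc": "2.0", "id": req["id"],
                           "error": {"code": -32601, "message": "Method not found"}})
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

listing = handle_request('{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}')
```

The client's side of the bargain is equally plain: it calls tools/list once to learn what a server can do, then tools/call whenever the model decides a tool is needed.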


The Most In-Demand MCP Servers

Based on adoption across Claude Desktop, Cursor, and enterprise deployments, these are the tools teams are connecting first.

1. GitHub

Why it's at the top: Developers spend more time context-switching between their editor, GitHub, and their AI tool than they do actually writing code. GitHub MCP eliminates most of that.

With it connected, your AI can read repository content, create and review pull requests, manage issues, check CI status, and commit code changes — without you leaving the conversation.

{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "${GITHUB_TOKEN}"
      }
    }
  }
}

Token scopes needed: repo, read:org, read:user. Note that the classic repo scope already includes write access (creating PRs, committing) — there is no separate write:repo scope.

Try asking:

"Show me all open PRs in the backend repo that have failing checks." "Draft a PR description for my current diff." "What files did we change the last time we touched the authentication module?"


2. Slack

Why it matters: Slack is where decisions get made and then forgotten. Retrieval is painful. MCP makes Slack a searchable, queryable knowledge base for your AI.

{
  "mcpServers": {
    "slack": {
      "command": "uvx",
      "args": ["mcp-server-slack"],
      "env": {
        "SLACK_BOT_TOKEN": "${SLACK_BOT_TOKEN}",
        "SLACK_TEAM_ID": "${SLACK_TEAM_ID}"
      }
    }
  }
}

Setup: Create a Slack app with channels:read, channels:history, users:read, and chat:write scopes. Install it to your workspace and copy the bot token.

Try asking:

"Summarize everything discussed in #sales this week." "Find every message that mentioned the payment gateway outage last month." "Draft an announcement for #general about tomorrow's maintenance window."


3. Google Drive & Google Docs

Why it matters: Most organizational knowledge lives in Drive — specs, runbooks, meeting notes, financial models. Without MCP, your AI is cut off from all of it.

{
  "mcpServers": {
    "gdrive": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-gdrive"],
      "env": {
        "GOOGLE_CLIENT_ID": "${GOOGLE_CLIENT_ID}",
        "GOOGLE_CLIENT_SECRET": "${GOOGLE_CLIENT_SECRET}"
      }
    }
  }
}

Auth: OAuth 2.0 via a Google Cloud project. Enable the Drive API and Docs API, create credentials, and run the auth flow once to generate a refresh token.

Try asking:

"Pull the Q3 product spec and list any open questions or unresolved decisions." "Find all documents Sarah updated in the last two weeks." "Create a new doc summarizing the key points from this meeting transcript."


4. Notion

Why it matters: Notion is the primary knowledge base and project management tool for a large share of modern teams. MCP turns it into a live data source your agent can read, query, and write to.

{
  "mcpServers": {
    "notion": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-notion"],
      "env": {
        "NOTION_API_KEY": "${NOTION_API_KEY}"
      }
    }
  }
}

Setup: Create an internal integration in Notion's developer settings, copy the secret, and share the relevant pages or databases with your integration.

Try asking:

"Show me all open tickets in the Sales project." "Which tasks in the Q2 roadmap are past their due date?" "Create a new project page for the onboarding redesign initiative with the details we just discussed."


5. Zapier (9,000+ Apps via One Server)

Why it matters: Not every tool has a dedicated MCP server. Zapier changes that. One Zapier MCP connection gives your AI access to thousands of apps — Gmail, Salesforce, HubSpot, Airtable, QuickBooks — through Zapier's existing automation layer.

{
  "mcpServers": {
    "zapier": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "<your Zapier MCP server URL>"]
    }
  }
}

Setup: Zapier exposes MCP as a hosted server rather than a local package. Create an MCP server in your Zapier account's MCP settings, select which Zap actions to expose to your AI, and copy the generated server URL into the config. That URL embeds your credentials, so treat it like an API key.

Try asking:

"Update the HubSpot contact for the call I just finished with Acme Corp." "Add a row to the sales pipeline spreadsheet for the deal we discussed." "Trigger the new-lead notification workflow for this contact."


6. PostgreSQL / Databases

Why it matters: Data teams want their AI to answer questions about their data — not a snapshot of it, but the live warehouse. Postgres MCP is the most commonly deployed database connector, and the same pattern applies to MySQL, SQLite, and others.

{
  "mcpServers": {
    "database": {
      "command": "uvx",
      "args": ["mcp-server-postgres"],
      "env": {
        "POSTGRES_CONNECTION_STRING": "postgresql://user:pass@host:5432/dbname"
      }
    }
  }
}

Security note: Use a read-only database user unless your agent specifically needs write access, and load the connection string from an environment variable rather than inlining credentials. Scope that user's grants to the schemas the agent actually needs.
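
The read-only user above can be set up with standard Postgres grants. A sketch, assuming a database named appdb and a schema named analytics (both names are placeholders for your own):

```sql
-- Read-only role for the MCP server; role, database, and schema names are placeholders.
CREATE ROLE mcp_readonly LOGIN PASSWORD 'change-me';
GRANT CONNECT ON DATABASE appdb TO mcp_readonly;
GRANT USAGE ON SCHEMA analytics TO mcp_readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA analytics TO mcp_readonly;
-- Cover tables created after this point, too.
ALTER DEFAULT PRIVILEGES IN SCHEMA analytics
  GRANT SELECT ON TABLES TO mcp_readonly;
```

Point the connection string at mcp_readonly and the agent physically cannot write, whatever it is asked to do.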

Try asking:

"Which customers haven't placed an order in the last 90 days?" "Show me month-over-month revenue broken down by product category for Q1." "Why is this query slow? Here's the SQL — explain what's happening."


7. Filesystem

Why it matters: Local file access is foundational — it lets agents read configs, write reports, process logs, and work with any data that lives on disk rather than in a cloud service.

{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/directory"]
    }
  }
}

Security note: The path argument is the root the server will expose. Keep it scoped to exactly what the agent needs — not your home directory.

Try asking:

"Read the error log from last night and summarize the most frequent failures." "Parse this CSV and tell me which rows have missing values in the revenue column." "Write a daily summary report to /data/reports/ based on what we just discussed."


8. Linear (Issue Tracking)

Why it matters: Engineering teams using Linear spend real time switching between their AI and their issue tracker to log bugs, update sprint status, and check what's in progress. MCP eliminates the tab-switching.

{
  "mcpServers": {
    "linear": {
      "command": "npx",
      "args": ["-y", "linear-mcp-server"],
      "env": {
        "LINEAR_API_KEY": "${LINEAR_API_KEY}"
      }
    }
  }
}

Try asking:

"Show me all open tickets in the Sales project assigned to the backend team." "Create a bug report for the login timeout issue we just found — assign it to Maria." "Summarize this sprint's progress: how many tickets are done, in progress, and blocked?"


9. Supabase

Why it matters: For teams building on Supabase, this server exposes your Postgres database, edge functions, and auth layer to the agent — making it a strong fit for product teams who want AI access to their app's live data.

{
  "mcpServers": {
    "supabase": {
      "command": "npx",
      "args": ["-y", "@supabase/mcp-server-supabase@latest"],
      "env": {
        "SUPABASE_URL": "${SUPABASE_URL}",
        "SUPABASE_SERVICE_ROLE_KEY": "${SUPABASE_SERVICE_KEY}"
      }
    }
  }
}

Try asking:

"How many users signed up in the last 7 days, and how many completed onboarding?" "Show me the schema for the orders table and flag any missing indexes." "Which edge function is being called the most and what's its average response time?"


10. Playwright (Browser Automation)

Why it matters: Some tasks require interacting with a web UI — filling forms, extracting data from pages that don't have APIs, automating testing flows. Playwright MCP gives your agent a real browser.

{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["-y", "@playwright/mcp@latest"]
    }
  }
}

Try asking:

"Go to our competitor's pricing page and extract their current plan tiers and prices." "Run through the checkout flow on staging and tell me if anything breaks." "Fill out and submit this vendor registration form with the details below."


Connecting to Your AI Model

The JSON configs above work as-is for Claude Desktop: merge them into ~/Library/Application Support/Claude/claude_desktop_config.json on macOS, or %APPDATA%\Claude\claude_desktop_config.json on Windows.

For ChatGPT (via the Agents SDK or LangChain), MCP servers are wrapped as tool definitions:

import os

from langchain_mcp_adapters.client import MultiServerMCPClient

client = MultiServerMCPClient({
    "github": {
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-github"],
        "env": {"GITHUB_PERSONAL_ACCESS_TOKEN": os.environ["GITHUB_TOKEN"]},
        "transport": "stdio",
    }
})

tools = await client.get_tools()  # run inside an async function

For Gemini, Mistral, or any OpenAI-compatible model, the same LangChain adapter approach works — swap in the model client, keep the tool wrappers.

For Ollama (self-hosted, air-gapped), use the same MCP client bridge with your local endpoint:

from langchain_ollama import ChatOllama
from langgraph.prebuilt import create_react_agent

llm = ChatOllama(model="llama3.1", base_url="http://localhost:11434")
agent = create_react_agent(llm, tools)  # tools from the MCP client above

A Full Working Config

Here's a production-ready config connecting the five most common business tools:

{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "${GITHUB_TOKEN}" }
    },
    "slack": {
      "command": "uvx",
      "args": ["mcp-server-slack"],
      "env": {
        "SLACK_BOT_TOKEN": "${SLACK_BOT_TOKEN}",
        "SLACK_TEAM_ID": "${SLACK_TEAM_ID}"
      }
    },
    "database": {
      "command": "uvx",
      "args": ["mcp-server-postgres"],
      "env": { "POSTGRES_CONNECTION_STRING": "${DB_URL}" }
    },
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/data/workspace"]
    },
    "notion": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-notion"],
      "env": { "NOTION_API_KEY": "${NOTION_API_KEY}" }
    }
  }
}

Keep all secrets in environment variables — never hardcode credentials in the config file.
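
Some clients expand ${VAR} placeholders from the environment themselves; if yours doesn't, one low-tech option is to render the config file from a template at setup time and fail loudly when a secret is missing. A minimal sketch (the render_config helper is illustrative, not part of any MCP SDK):

```python
import json
import os

def render_config(node, env=os.environ):
    """Recursively replace "${VAR}" string values with values from env.

    Raises KeyError if a referenced variable is missing, so a bad deploy
    fails immediately instead of shipping literal placeholders.
    """
    if isinstance(node, dict):
        return {k: render_config(v, env) for k, v in node.items()}
    if isinstance(node, list):
        return [render_config(v, env) for v in node]
    if isinstance(node, str) and node.startswith("${") and node.endswith("}"):
        return env[node[2:-1]]
    return node

template = {
    "mcpServers": {
        "github": {
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-github"],
            "env": {"GITHUB_PERSONAL_ACCESS_TOKEN": "${GITHUB_TOKEN}"},
        }
    }
}

# In practice you would pass os.environ; a literal dict keeps the example self-contained.
rendered = render_config(template, {"GITHUB_TOKEN": "ghp_example"})
config_json = json.dumps(rendered, indent=2)
```

The template (with placeholders) can live in version control; only the rendered file, which never leaves the machine, contains real credentials.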


What to Do First

Don't try to configure everything at once. The teams that get the most value from MCP start narrow:

  1. Pick one high-friction tool — the one your team switches tabs for constantly
  2. Configure and test it in isolation — verify the server starts and your AI can call its tools successfully
  3. Build one useful workflow end-to-end before adding the next server
  4. Add servers incrementally as the team's usage patterns become clear
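
For step 2, you can exercise a server outside any AI client by speaking its protocol directly; the MCP Inspector (npx @modelcontextprotocol/inspector) is the usual tool for this. Under the hood the handshake starts with an initialize request over stdio. A sketch of building that first message (the protocol version string is an example; check the one your SDK targets):

```python
import json

def initialize_request(client_name: str, client_version: str) -> str:
    """Build the JSON-RPC initialize request an MCP client sends first."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "initialize",
        "params": {
            "protocolVersion": "2024-11-05",  # example version; check your SDK
            "capabilities": {},
            "clientInfo": {"name": client_name, "version": client_version},
        },
    })

req = initialize_request("smoke-test", "0.1")
# To try it against a real server, pipe the line to its stdin, e.g. (run manually):
#   echo "$REQ" | npx -y @modelcontextprotocol/server-filesystem /tmp/mcp-test
```

If the server replies with its name and capabilities, the wiring works; only then is it worth pointing your AI client at it.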

The goal is not to connect every tool. The goal is to eliminate the context-switching that interrupts real work.


Already know which tools you want connected but not sure how to wire them into a coherent agent workflow? Book a strategy session with the Metadata Morph team — we'll map your tools to an MCP architecture and get your first agent running in days.