Creating Per-Project MCP Servers

I must confess, first and foremost, that I am not a fan of MCP as a protocol: I find it overly complex (why shunt that much JSON around and waste tokens parsing it?), badly designed for application scenarios (it is completely redundant if you have good Swagger specs for your APIs, not to mention poorly secured), and so over-hyped that I instinctively shied away until the dust settled.

But a few months ago I found a (genius) minimalist stdio MCP server (in bash, of all things), turned it into a Python library (with both synchronous and asyncio flavors) and decided to build a couple of tools to extend GitHub Copilot inside VS Code.

And then one day I built another tool for another workspace. And another. And after three or so, a pattern emerged: even though VS Code provides Copilot with local filesystem and code structure information, I often needed specialist tools to help with things like:

  • Looking up Snyk vulnerabilities in package.json files
  • Validating internal wiki links in Markdown files
  • Converting old Textile markup to Markdown
  • Adding or updating YAML front-matter in blog posts
  • Bulk-renaming (or linting/formatting) files according to some pattern

And I only needed those tools in one workspace at a time, so having a zillion tools available all the time was pointless (and confused the LLM).

So I started including simple task-specific servers in my repositories.
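
To make this concrete, here is a rough sketch of the shape of such a server, using nothing but the standard library and the newline-delimited JSON-RPC framing that MCP uses over stdio. To be clear, this is not the umcp API (which wraps all of this boilerplate), and check_links is a hypothetical stand-in for a real tool:

#!/usr/bin/env python3
"""Minimal sketch of a stdio MCP server (standard library only)."""
import json
import sys

# A single illustrative tool; a real server would describe each task here.
TOOLS = [{
    "name": "check_links",
    "description": "Report internal wiki links in a Markdown file that do not resolve.",
    "inputSchema": {
        "type": "object",
        "properties": {"path": {"type": "string"}},
        "required": ["path"],
    },
}]

def check_links(path: str) -> str:
    # Hypothetical stand-in: real logic would parse links and test their targets.
    return f"checked {path}: no broken links"

def handle(req):
    method, params = req.get("method"), req.get("params", {})
    if method == "initialize":
        return {"protocolVersion": "2024-11-05",
                "capabilities": {"tools": {}},
                "serverInfo": {"name": "wiki-helper", "version": "0.1"}}
    if method == "tools/list":
        return {"tools": TOOLS}
    if method == "tools/call" and params.get("name") == "check_links":
        text = check_links(**params.get("arguments", {}))
        return {"content": [{"type": "text", "text": text}]}
    return None

for line in sys.stdin:
    if not line.strip():
        continue
    req = json.loads(line)
    if "id" not in req:  # notifications (e.g. notifications/initialized) get no reply
        continue
    result = handle(req)
    resp = {"jsonrpc": "2.0", "id": req["id"]}
    if result is None:
        resp["error"] = {"code": -32601, "message": "method not found"}
    else:
        resp["result"] = result
    print(json.dumps(resp), flush=True)

The initialize handshake, tools/list and tools/call are all a client really needs; everything else can safely return a "method not found" error.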

Configuring VS Code

As it happens, this very site is a good example. The git repository I use for the content has a wiki-helper server that makes it easy for me to check internal page links and perform chores like converting old Textile markup or adding data to YAML files when publishing a post.

VS Code stores its workspace preferences in a .vscode folder inside your repository, and the Copilot extension in particular gets its starting context from inside .github, so right now things look like this:

.github
└── copilot-instructions.md
.vscode
├── extensions.json
├── mcp.json
└── settings.json
tools
├── wiki_mcp.py
└── umcp.py

Since I do not need a virtual environment for this particular repository, I can just dump umcp.py alongside my server, and configure mcp.json to invoke it like this:

cat .vscode/mcp.json
{
    "servers": {
        "wiki-helper": {
            "type": "stdio",
            // Use python executable as command; pass script path as first arg for consistent CWD handling.
            "command": "python3",
            "args": [
                "tools/wiki_mcp.py"
            ],
            // Explicit override so WIKI_ROOT points at the workspace (the script already defaults to this).
            "env": {
                "WIKI_ROOT": "${workspaceFolder}/space"
            }
        }
    }
}

This is actually one of my simplest servers (it mostly uses regexps, Path and a few other standard library functions), but I also have other projects that require specific libraries, so I use uv to run the server in those.
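
In those projects, mcp.json just swaps the command for uv, along these lines (the server name and the pyyaml dependency are made-up examples):

{
    "servers": {
        "front-matter-helper": {
            "type": "stdio",
            "command": "uv",
            // --with resolves the dependency into an ephemeral environment at launch.
            "args": ["run", "--with", "pyyaml", "tools/front_matter_mcp.py"]
        }
    }
}

You can get the same effect by declaring dependencies inline with PEP 723 script metadata and a plain uv run.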

Also, besides including in copilot-instructions.md a short description of what the project is, which coding conventions and tooling I’m using, etc., I typically add a line like “use tools from the foobar server to do these tasks”.
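
As a hypothetical example, the relevant bit of such a file boils down to something like this:

This repository holds the Markdown sources for my wiki.
Posts live under space/ and carry YAML front-matter.
Use tools from the wiki-helper server to validate internal links
and to add or update front-matter when publishing a post.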

Configuring Zed

Configuring Zed is very similar, since it too allows you to have per-workspace settings:

 cat .zed/settings.json 
{
  "context_servers": {
    "wiki-helper": {
      "source": "custom",
      "command": "python3",
      "args": ["tools/wiki_mcp.py"],
      "env": {}
    }
  }
}

However, Zed does not seem to have an easy way to pass the workspace root as a variable, so I’ve had to hack that into the server itself. If anyone knows how to do that, please let me know.
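
For reference, a minimal sketch of that kind of fallback, assuming Zed launches the server with the workspace as its working directory:

import os
from pathlib import Path

# Prefer an explicit override (which VS Code passes via mcp.json); otherwise
# assume the process was started in the workspace root.
WIKI_ROOT = Path(os.environ.get("WIKI_ROOT", Path.cwd() / "space")).resolve()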

Additionally, Zed is rather picky about MCP protocol versions (as of this writing it actually errors out if they’re not 2025-03-26 or 2024-11-05, which is a bit too limiting).

Either way, this approach has proven highly effective for maintaining per-project tooling. Besides automating repetitive chores, it keeps LLM usage cheap: I can use gpt-4o or gpt-5-mini to drive these tools instead of more expensive models like gpt-5 or Claude, which incur premium charges in GitHub Copilot.

In particular, having a tool to check internal wiki links has been a godsend, since I can now just prompt Copilot to “reformat this post according to repository patterns” before publishing and be reasonably sure that I won’t have broken links (which is a pet peeve of mine):

Here's a recent example

The fact that I can also use Copilot to help me write the servers themselves with full context of what the project is about and how files are laid out is just icing on the cake, even if it smacks of self-improving AI.