I must confess, first and foremost, that I am not a fan of MCP as a protocol: I find it overly complex (why shunt that much JSON around and waste tokens parsing it?), badly designed for application scenarios (it is completely redundant if you have good Swagger specs for your APIs, not to mention poorly secured), and it has generated so much hype that I instinctively shied away until the dust settled.
But a few months ago I found a (genius) minimalist stdio MCP server (in `bash`, of all things), turned it into a Python library (with both synchronous and `asyncio` flavors) and decided to build a couple of tools to extend GitHub Copilot inside VS Code.
And then one day I built another tool for another workspace. And another. And after three or so, a pattern emerged: even though VS Code provides Copilot with local filesystem and code structure information, I often needed specialist tools to help with things like:
- Looking up `snyk` vulnerabilities in `package.json` files
- Validating internal wiki links in Markdown files
- Converting old Textile markup to Markdown
- Adding or updating YAML front-matter in blog posts
- Bulk-renaming (or linting/formatting) files according to some pattern
And I only needed those tools in one workspace at a time, so having a zillion tools available all the time was pointless (and confused the LLM).
So I started including simple task-specific MCP servers in my repositories.
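To make that concrete, the protocol dance for a stdio server is small enough to hand-roll. Below is a sketch of the bare mechanics with a hypothetical `shout` tool; note that this is from-scratch JSON-RPC, not `umcp`'s API, and a real server needs more error handling:

```python
#!/usr/bin/env python3
"""Bare-bones stdio MCP server: newline-delimited JSON-RPC 2.0 on stdin/stdout."""
import json
import sys

# A single hypothetical tool, advertised via tools/list.
TOOLS = [{
    "name": "shout",
    "description": "Upper-case a string.",
    "inputSchema": {"type": "object",
                    "properties": {"text": {"type": "string"}},
                    "required": ["text"]},
}]

def handle(method: str, params: dict):
    if method == "initialize":
        return {"protocolVersion": "2024-11-05",
                "capabilities": {"tools": {}},
                "serverInfo": {"name": "demo", "version": "0.0.1"}}
    if method == "tools/list":
        return {"tools": TOOLS}
    if method == "tools/call":  # only one tool, so no dispatch on params["name"]
        return {"content": [{"type": "text",
                             "text": params["arguments"]["text"].upper()}],
                "isError": False}
    return None

for line in sys.stdin:  # one JSON-RPC message per line
    if not line.strip():
        continue
    msg = json.loads(line)
    if "id" not in msg:  # notifications (e.g. notifications/initialized) get no reply
        continue
    result = handle(msg["method"], msg.get("params", {}))
    if result is None:
        reply = {"jsonrpc": "2.0", "id": msg["id"],
                 "error": {"code": -32601, "message": "method not found"}}
    else:
        reply = {"jsonrpc": "2.0", "id": msg["id"], "result": result}
    print(json.dumps(reply), flush=True)  # flush so the client sees replies immediately
```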
## Configuring VS Code
As it happens, this very site is a good example. The `git` repository I use for the content has a `wiki-helper` MCP server that makes it easy for me to check internal page links and perform chores like converting old Textile markup or adding data to YAML files when publishing a post.
Visual Studio Code stores its workspace preferences in a `.vscode` folder inside your repository, and the Copilot extension in particular gets its starting context from inside `.github`, so right now things look like this:
```
.github
└── copilot-instructions.md
.vscode
├── extensions.json
├── mcp.json
└── settings.json
tools
├── wiki_mcp.py
└── umcp.py
```
Since I do not need a virtual environment for this particular repository, I can just dump `umcp.py` alongside my server, and configure `mcp.json` to invoke it like this:
```
❯ cat .vscode/mcp.json
{
  "servers": {
    "wiki-helper": {
      "type": "stdio",
      // Use the python executable as the command and pass the script path
      // as the first arg for consistent CWD handling.
      "command": "python3",
      "args": [
        "tools/wiki_mcp.py"
      ],
      // Explicit override so WIKI_ROOT points at the workspace content
      // (the script already defaults correctly).
      "env": {
        "WIKI_ROOT": "${workspaceFolder}/space"
      }
    }
  }
}
```
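For flavor, the heart of the link check can be little more than a regex sweep over the content tree. This is a minimal sketch, not the actual `wiki_mcp.py` (the `[[Page Name]]` link syntax is an assumption, and the real thing handles more link forms):

```python
import os
import re
from pathlib import Path

# Same default the mcp.json comment alludes to: env override, else workspace-relative.
WIKI_ROOT = Path(os.environ.get("WIKI_ROOT", "space")).resolve()
WIKI_LINK = re.compile(r"\[\[([^\]|#]+)")  # hypothetical [[Page Name]] syntax

def broken_links() -> list[str]:
    """Return 'file: target' pairs for internal links that resolve to nothing."""
    broken = []
    for md in WIKI_ROOT.rglob("*.md"):
        for target in WIKI_LINK.findall(md.read_text(encoding="utf-8")):
            if not (WIKI_ROOT / f"{target.strip()}.md").exists():
                broken.append(f"{md.relative_to(WIKI_ROOT)}: {target.strip()}")
    return broken
```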
This is actually one of my simplest MCP servers (it mostly uses regexps, `Path` and a few more standard library functions), but I also have other projects that require specific libraries, so I use `uv` to run the MCP server in those.
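For those, PEP 723 inline metadata lets the server script carry its own dependencies, so the `mcp.json` entry just swaps `python3` for `uv`. A sketch, assuming a hypothetical `feed_mcp.py` that needs `httpx`:

```python
# /// script
# requires-python = ">=3.12"
# dependencies = ["httpx"]
# ///
# Launched from mcp.json as: "command": "uv", "args": ["run", "tools/feed_mcp.py"].
# uv reads the block above and resolves httpx into a throwaway environment first.
import httpx

# ... the stdio JSON-RPC loop goes here, same shape as the earlier sketch ...
```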
Also, besides including in `copilot-instructions.md` a short description of what the project is, what coding conventions and tooling I’m using, etc., I typically add “use tools from the `foobar` MCP server to do these tasks”.
## Configuring Zed
Configuring Zed is very similar, since it too allows you to have per-workspace settings:
```
❯ cat .zed/settings.json
{
  "context_servers": {
    "wiki-helper": {
      "source": "custom",
      "command": "python3",
      "args": ["tools/wiki_mcp.py"],
      "env": {}
    }
  }
}
```
However, Zed does not seem to have an easy way to pass the workspace root as a variable, so I’ve had to hack that into the MCP server itself. If anyone knows how to do that, please let me know.
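For the record, the hack amounts to guessing the root at startup when no variable is set. A sketch of one way to do it, assuming the client launches the server somewhere inside the workspace:

```python
import os
from pathlib import Path

def workspace_root() -> Path:
    """Prefer an explicit override; otherwise walk up from cwd to the git root."""
    if "WIKI_ROOT" in os.environ:
        return Path(os.environ["WIKI_ROOT"]).resolve()
    here = Path.cwd().resolve()
    for candidate in (here, *here.parents):
        if (candidate / ".git").exists():
            return candidate / "space"  # hypothetical content dir, as in mcp.json
    return here  # fall back to wherever we were started
```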
Additionally, Zed is rather picky about MCP protocol versions (as of this writing it actually errors out if they’re not `2025-03-26` or `2024-11-05`, which is a bit too limiting).
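Since the spec has the server answer `initialize` with a version of its choosing, the defensive move is to echo the client’s version when you support it and pin a known-good one otherwise. A sketch:

```python
SUPPORTED = {"2025-03-26", "2024-11-05"}  # the two revisions Zed currently accepts

def negotiate_version(params: dict) -> str:
    """Echo the client's protocol version if supported, else our newest."""
    requested = params.get("protocolVersion", "")
    return requested if requested in SUPPORTED else "2025-03-26"
```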
Either way, this approach has proven highly effective for maintaining per-project MCP tooling. It automates repetitive chores and keeps costs down, since I can use `gpt-4o` or `gpt-5-mini` to drive the task-specific tools instead of more expensive models like `gpt-5` or Claude, which incur premium charges in GitHub Copilot.
In particular, having a tool to check internal wiki links has been a godsend, since I can now just prompt Copilot to “reformat this post according to repository patterns” before publishing and be reasonably sure that I won’t have broken links (which is a pet peeve of mine):

The fact that I can also use Copilot to help me write the MCP servers themselves with full context of what the project is about and how files are laid out is just icing on the cake, even if it smacks of self-improving AI…