When AI Meets Infrastructure: How GitHub Copilot and MCP Servers Transform DevOps
Creating Terraform or other Infrastructure-as-Code (IaC) configurations for a new project can feel daunting, even for experienced teams. This post explores how you can quickly spin up new deployments using GitHub Copilot prompt files and a few free Model Context Protocol (MCP) servers. For fun, we’ll even go a step further by converting the same infrastructure between two different cloud providers to show how flexible this setup can be.
GitHub Copilot keeps getting more powerful with each update, and I’ve been impressed by how well it handles everything from writing quick scripts to initializing entire repositories. That said, I’ve never been thrilled with how Copilot (or any LLM, really) generates Terraform. Recently, I’ve been experimenting with MCP servers, which extend an AI agent’s capabilities by connecting it to specialized tools and APIs. I wanted to see whether they could make Copilot more competent with Terraform, and it turns out they can. Giving Copilot the right tools dramatically amplifies its results.
The Setup
To get started, define your MCP servers in a `./.vscode/mcp.json` file in your project. Here is what mine looks like:
```json
{
  "servers": {
    "sequential-thinking": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-sequential-thinking"
      ],
      "type": "stdio"
    },
    "server-filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "."
      ],
      "type": "stdio"
    },
    "terraform": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "hashicorp/terraform-mcp-server"
      ],
      "type": "stdio"
    },
    "mcp-feedback-enhanced": {
      "command": "uvx",
      "args": ["mcp-feedback-enhanced@latest"]
    },
    "aws-knowledge": {
      "url": "https://knowledge-mcp.global.api.aws",
      "type": "http"
    },
    "azure-knowledge": {
      "url": "https://learn.microsoft.com/api/mcp",
      "type": "http"
    }
  },
  "inputs": []
}
```
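Before trusting a hand-edited config, I find a quick sanity check handy. Here is a small Python sketch that enforces the two rules the file above relies on: stdio servers need a `command`, and http servers need a `url`. The `validate_mcp_config` helper is mine, not part of any tool.

```python
import json

def validate_mcp_config(config: dict) -> list[str]:
    """Return a list of problems found in an mcp.json-style dict."""
    errors = []
    for name, server in config.get("servers", {}).items():
        # VS Code infers stdio when a "command" is present, so default to it.
        kind = server.get("type", "stdio")
        if kind == "stdio" and "command" not in server:
            errors.append(f"{name}: stdio server is missing 'command'")
        elif kind == "http" and "url" not in server:
            errors.append(f"{name}: http server is missing 'url'")
    return errors

# A trimmed-down version of the config shown above.
sample = json.loads("""
{
  "servers": {
    "terraform": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "hashicorp/terraform-mcp-server"],
      "type": "stdio"
    },
    "aws-knowledge": {"url": "https://knowledge-mcp.global.api.aws", "type": "http"}
  },
  "inputs": []
}
""")

print(validate_mcp_config(sample))  # -> []
```

Strict `json.loads` also catches trailing commas, which `mcp.json` tolerates (it is parsed as JSONC by VS Code) but other tooling may not.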
These are the MCP servers used:
| Server | Purpose |
|---|---|
| sequential-thinking | Very popular MCP for helping an LLM organize its thoughts |
| server-filesystem | Reading/writing to the filesystem |
| terraform | Terraform best practices and provider documentation lookup |
| mcp-feedback-enhanced | (Optional) User feedback forms for more interactive data gathering from the user |
| aws-knowledge | Official AWS online knowledge datastore |
| azure-knowledge | Official Azure online knowledge datastore |
NOTE: mcp-feedback-enhanced is optional because I believe Copilot will handle interfacing with you on questions just fine. But I am not always going to be using Copilot for my solutions and wanted a less vendor-locked option. I am also simply interested in human-in-the-loop MCP servers, and this one was the best of the three I tested.
The Prompts
Prompt files live in the `./.github/prompts/` folder with a name like `*.prompt.md`. Once created, you can kick one off at any time in the Copilot agent chat window with a `/<prompt>` command, such as `/terraform-azure-bootstrap`. It will start by asking what you want, then ask you refining questions to figure out what needs to be created. You do not need to close the feedback window that comes up; it is automatically reused and refreshes its contents whenever the agent needs further information or approval from you.
Additional Prompts
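For reference, a prompt file is just markdown, optionally with front matter that VS Code understands. A minimal hypothetical sketch of what a `terraform-azure-bootstrap.prompt.md` might contain (the wording and file paths here are illustrative, not my actual prompt):

```markdown
---
mode: agent
description: Bootstrap Azure Terraform for a new project
---
Ask me what infrastructure I need, then ask refining questions until the
requirements are clear. Use the terraform and azure-knowledge MCP servers
to look up provider documentation and best practices. Generate the
Terraform configuration and write a requirements.md capturing what we
agreed on.
```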
Pleasant Surprises
- For the managed Kubernetes deployment, it generated functioning `Makefile`s with a plethora of commands useful to the deployment.
- The Terraform conversion from one provider to another included cost comparisons between the two deployments.
- The feedback tool can remain open and be reused for all back-and-forth between you and the agent across prompts.
- The generated `requirements.md` is quite comprehensive and genuinely aids comprehension of the deployment.
- Both the AWS and Azure MCP servers were easily used by the agent with very little extra prompting.
- For the virtual machines, I put in some rather complex logic for how I wanted the disks handled and was surprised to find that the appropriate `user-data.sh` bash script for AWS and `cloud-init.yml` file for Azure were created for me, not only with the disks done as I had requested (LVM and mounted to `/opt`) but much more. For instance, it also generated a pretty decent NGINX deployment for WordPress, test scripts for cloud storage access (which I purposefully included as a requirement to try to trip things up), and cloud-specific agent installs for disk and memory monitoring. Pretty slick!
- Both example deployments came with a corpus of additional documentation containing a good deal of extra info that I might personally include in a project were I delivering it to a team to manage.
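To give a feel for the disk requirement mentioned above (LVM mounted to `/opt`), a cloud-init fragment in that spirit might look like the following. This is a hypothetical sketch, not the generated file; the Azure device path and volume group names are assumptions, and device names vary by cloud and VM size.

```yaml
#cloud-config
# Sketch: partition the first attached data disk, build an LVM volume
# on it, and mount the logical volume at /opt.
device_aliases:
  data: /dev/disk/azure/scsi1/lun0
disk_setup:
  data:
    table_type: gpt
    layout: true
runcmd:
  - pvcreate /dev/disk/azure/scsi1/lun0-part1
  - vgcreate vg_data /dev/disk/azure/scsi1/lun0-part1
  - lvcreate -l 100%FREE -n lv_opt vg_data
  - mkfs.ext4 /dev/vg_data/lv_opt
  - mount /dev/vg_data/lv_opt /opt
  - echo '/dev/vg_data/lv_opt /opt ext4 defaults 0 2' >> /etc/fstab
```

The AWS `user-data.sh` equivalent would do the same `pvcreate`/`vgcreate`/`lvcreate` dance against an NVMe device path instead.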
Irksome Things
- An abundance of emojis (while visually pretty) just screams LLM-generated to the trained eye. Their use can probably be reduced with minor prompt adjustments.
- The nondeterministic nature of LLMs means the documentation results were wildly different between projects. I specifically requested that `requirements.md` be generated in the bootstrap process but forgot to say anything about it in the migration prompts. The first example I migrated from AWS to Azure left the file mostly intact. The second example, migrated from Azure to AWS, turned it into a 500+ line operational guide (which was cool and all, but still makes my point here).
- As mentioned before, this can chew through your premium tokens pretty quickly depending on your requirements.
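On the emoji point, a one-line addition to the prompt file usually helps. Something like this hypothetical instruction, dropped into any of the `*.prompt.md` files:

```markdown
Write all documentation in a neutral, professional tone. Do not use
emojis or decorative Unicode characters in any generated files.
```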


