
When AI Meets Infrastructure: How GitHub Copilot and MCP Servers Transform DevOps

Creating Terraform or other Infrastructure-as-Code (IaC) configurations for a new project can feel daunting, even for experienced teams. This post explores how you can quickly spin up new deployments using GitHub Copilot prompt files and a few free Model Context Protocol (MCP) servers. For fun, we’ll even go a step further by converting the same infrastructure between two different cloud providers to show how flexible this setup can be.

GitHub Copilot keeps getting more powerful with each update, and I’ve been impressed by how well it handles everything from writing quick scripts to initializing entire repositories. That said, I’ve never been thrilled with how Copilot (or any LLM, really) generates Terraform. Recently, I’ve been experimenting with MCP servers, which extend an AI agent’s capabilities by connecting it to specialized tools and APIs. I wanted to see whether they could make Copilot more competent with Terraform, and it turns out they can. Giving Copilot the right tools dramatically amplifies its results.

The Setup

I’m using GitHub Copilot in VS Code along with several MCP servers for this exercise. You can set up a project locally with MCP servers easily enough by creating a file named `./.vscode/mcp.json` in your project. Here is what mine looks like:


```json
{
  "servers": {
    "sequential-thinking": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-sequential-thinking"
      ],
      "type": "stdio"
    },
    "server-filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "."
      ],
      "type": "stdio"
    },
    "terraform": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "hashicorp/terraform-mcp-server"
      ],
      "type": "stdio"
    },
    "mcp-feedback-enhanced": {
      "command": "uvx",
      "args": ["mcp-feedback-enhanced@latest"]
    },
    "aws-knowledge": {
      "url": "https://knowledge-mcp.global.api.aws",
      "type": "http"
    },
    "azure-knowledge": {
      "url": "https://learn.microsoft.com/api/mcp",
      "type": "http"
    }
  },
  "inputs": []
}
```

These are the MCP servers used:

| Server | Purpose |
| --- | --- |
| sequential-thinking | Very popular MCP server that helps an LLM organize its thoughts |
| server-filesystem | Reading and writing to the filesystem |
| terraform | Terraform best practices and provider documentation lookup |
| mcp-feedback-enhanced | (Optional) User feedback forms for more interactive data gathering from the user |
| aws-knowledge | Official AWS online knowledge datastore |
| azure-knowledge | Official Azure online knowledge datastore |
I don’t leave my MCP servers running all the time. To start them, open the `mcp.json` file in the editor; above each server definition there is a little start button you can click to get it going.
NOTE: `mcp-feedback-enhanced` is optional because I believe Copilot will handle asking you questions just fine on its own. But I recognize that I personally won’t always be using Copilot for my solutions and wanted a less vendor-locked approach. I’m also simply interested in human-in-the-loop MCP servers, and this one was the best of the three I tested.

The Prompts

To create a reusable interface, you can use GitHub Copilot prompt files in your project by creating them in the `./.github/prompts/` folder with a name like `*.prompt.md`. Once created, you can kick them off at any time in the Copilot agent chat window with a `/<prompt>` command.
Here is one I created to walk a user through creating an AWS Terraform deployment from scratch, and one for Azure as well.
If you are ready to bootstrap either an AWS or Azure Terraform project via Copilot, go ahead and do so using the appropriate prompt. For example, `/terraform-azure-bootstrap` starts the process for an Azure-based Terraform project. It will begin by asking what you want, then ask refining questions to figure out what needs to be created. You do not need to close the feedback window that comes up; it is automatically reused and refreshes its contents whenever further information or approval is needed from you.
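To give a feel for the shape of such a prompt file, here is a rough sketch of a bootstrap prompt. This is my own illustration, not the actual file from the repo; the front-matter fields follow the Copilot prompt-file convention, and the numbered steps are assumptions about how you might wire the MCP servers together:

```markdown
---
mode: agent
description: Bootstrap an Azure Terraform project interactively
---
You are helping the user scaffold a new Terraform project targeting Azure.

1. Ask the user what they want to deploy, then ask refining questions
   (region, environments, networking needs) until requirements are clear.
2. Use the terraform MCP server to look up current azurerm provider
   documentation and best practices before writing any HCL.
3. Use the azure-knowledge MCP server to verify service capabilities.
4. Generate main.tf, variables.tf, outputs.tf, and a requirements.md
   summarizing the agreed-upon requirements.
5. Use mcp-feedback-enhanced to request user approval before finishing.
```

The value of keeping this in a `*.prompt.md` file is that the whole workflow becomes a single slash command anyone on the team can invoke.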

Additional Prompts

I also created a few more prompts that can convert a Terraform project from Azure to AWS and vice versa. These use the same MCP servers but with different prompt text. I’ll let you look at the examples I constructed for each in the [GitHub repo](https://github.com/zloeber/terraform-copilot-prompts) for this exercise. I created two fictitious projects off the top of my head, one for AWS and another for Azure, then used the conversion prompt on each to create the equivalent project for the other cloud provider.
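A conversion prompt might be structured along these lines; again, this is a hedged sketch of my own, not the exact file from the repo:

```markdown
---
mode: agent
description: Convert an existing Azure Terraform project to AWS
---
Read the existing Terraform files using the server-filesystem MCP server.
For each azurerm resource, look up the closest AWS equivalent using the
terraform and aws-knowledge MCP servers, then rewrite the configuration
against the aws provider, preserving variable names and module structure
where possible. Summarize any resources with no direct equivalent and
ask the user how to handle them via mcp-feedback-enhanced.
```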

Pleasant Surprises

When AI works the way I want, it can be extremely satisfying to wield, even more so when it yields more than what you asked for. In this case I found that:
  • For the managed Kubernetes deployment, it generated functioning `Makefile`s with a plethora of commands useful to the deployment.
  • The Terraform conversion from one provider to another included cost comparisons between the two deployments.
  • The feedback tool can remain open and be reused for all back-and-forth prompts with the agent.
  • The generated requirements.md is quite comprehensive and aids user comprehension of the deployment.
  • Both the AWS and Azure MCP servers were easily used by the agent with very little extra prompting.
  • For the virtual machines, I put in some rather complex logic for how I wanted the disks done and was surprised to find that the appropriate `user-data.sh` bash script for AWS and `cloud-init.yml` file for Azure were created for me, not only with the disks done as I had requested (LVM and mounted to `/opt`) but much more. For instance, it also generated a pretty decent nginx deployment for WordPress, test scripts for cloud storage access (which I purposely included as a requirement to try to trip things up), and cloud-specific agent installs for disk and memory monitoring. Pretty slick!
  • A corpus of additional documentation was included with both example deployments, containing a good deal of extra info that I might personally include in a project were I delivering it to a team to manage.
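For a sense of what that disk logic involves, here is a minimal sketch of the kind of cloud-init fragment that sets up an LVM volume mounted at `/opt`. The device path and volume/group names are my own assumptions, not the agent’s exact output:

```yaml
#cloud-config
# Assumes the data disk attaches as /dev/sdc (varies by VM size and LUN).
packages:
  - lvm2
runcmd:
  - pvcreate /dev/sdc
  - vgcreate data_vg /dev/sdc
  - lvcreate -l 100%FREE -n opt_lv data_vg
  - mkfs.ext4 /dev/data_vg/opt_lv
  - mkdir -p /opt
  - mount /dev/data_vg/opt_lv /opt
  # nofail keeps the VM bootable if the disk is ever detached
  - echo '/dev/data_vg/opt_lv /opt ext4 defaults,nofail 0 2' >> /etc/fstab
```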

Irksome Things

The results are not all positive. I have a few minor gripes as well.
  • An abundance of emojis (while visually pretty) just screams LLM-generated to the trained eye. Their use can probably be reduced with minor prompt adjustments.
  • The nondeterministic nature of LLMs means documentation results were wildly different between projects. I specifically requested that requirements.md be generated in the bootstrap process but forgot to mention it in the migration prompts. The first example I migrated from AWS to Azure left the file mostly intact; the second migration, from Azure to AWS, turned it into a 500+ line operational guide (which was cool and all, but still makes my point here).
  • This approach can chew through your premium tokens pretty quickly depending on your requirements.

Conclusion

So would I use any of this Terraform without reviewing it first? Of course not. Heck, it probably wouldn’t even run without some modifications. But I certainly would use it to get a project started. It produces quite clean, easy-to-read Terraform with the correct naming conventions, variables, and documentation to get things off to a very nice start. I won’t use it to scaffold every project I do, though, mainly because it burns through premium tokens that I’d rather spend on more complex work. I’m on a standard plan, and creating the four examples you can find in the project repository ate almost 10% of my premium tokens.
This combo of MCP servers is quite good at overcoming some of AI’s issues with building proper Terraform. I’m quite happy about that, since repeated bizarre LLM results on Terraform generation were starting to get upsetting. Next up: an MCP server that will allow you to use your own organizational modules. I’m hoping to have such a tool ready to test sometime next month.