

Azure DevOps Automated Variable Groups

Author: Zachary Loeber Posted In: Azure, Cloud, DevOps

In this article I’ll cover how to automate creating and updating ADO libraries (a.k.a. variable groups) using pipeline as code.

Introduction

In a prior post I covered the somewhat painful automation of Azure DevOps keyvault-linked variable groups. In this post I’ll do the same for regular variable groups (a.k.a. ADO ‘Libraries’). Azure DevOps Libraries are groups of variables that can be exceedingly useful in your pipelines. Unfortunately, they tend to be manually updated and tinkered with outside of version control. Deploying variable groups from a pipeline helps ensure all aspects of my deployments are under version control.

How To Do It

It would be much easier to use a Terraform provider for this kind of thing, but the only one out there for ADO is so beta that you’d have to compile it yourself to use it. So we are left with bash scripts and prayers again. No worries, that’s the stuff pipelines are made of, right?

On the surface, the script to accomplish this task is pretty easy: use one Azure CLI command to create the variable group and populate it with all of its variables.
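The original script isn’t reproduced here, so the following is a minimal sketch of the idea, assuming the azure-devops CLI extension is installed and authenticated. ADO_ORG, ADO_PROJECT, and GROUPNAME are placeholders, and VARS is a bash array of KEY=VALUE strings (populated from the env file shown in the next section):

```bash
#!/usr/bin/env bash
# Sketch: create the variable group if it doesn't exist, otherwise update it.
# VARS is assumed to be an array of KEY=VALUE strings, e.g. read from a .env file.
set -euo pipefail

ORG_URL="https://dev.azure.com/${ADO_ORG}"

# Look up the group id; empty output means the group doesn't exist yet
GROUP_ID=$(az pipelines variable-group list \
  --organization "${ORG_URL}" --project "${ADO_PROJECT}" \
  --query "[?name=='${GROUPNAME}'].id | [0]" -o tsv)

if [[ -z "${GROUP_ID}" ]]; then
  # One command creates the group and populates all of its variables
  az pipelines variable-group create \
    --organization "${ORG_URL}" --project "${ADO_PROJECT}" \
    --name "${GROUPNAME}" --variables "${VARS[@]}"
else
  # Update each existing variable in place (keys that are new to the group
  # would need 'az pipelines variable-group variable create' instead)
  for kv in "${VARS[@]}"; do
    az pipelines variable-group variable update \
      --organization "${ORG_URL}" --project "${ADO_PROJECT}" \
      --group-id "${GROUP_ID}" --name "${kv%%=*}" --value "${kv#*=}"
  done
fi
```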

Clear as mud, right? It will make more sense in the context of a pipeline, promise.

The Pipeline

The pipeline code I will use consumes a file containing a simple list of key-value pairs. This is commonly known as a .env file.
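Something like this (the path and values are purely illustrative):

```bash
# config/app.env -- illustrative example
APP_NAME=myapp
ENVIRONMENT=dev
REGION=eastus2
```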

I use this format because it is easy to create, update, and consume in other scripts (like Makefiles). It is also easy to review and keep in version control. To use the file, just read it into a bash array and wrap it in some other script logic. The final pipeline code is a template for a job that can be used in other pipeline stages with little effort.
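The original template isn’t reproduced here either, so below is a hedged sketch of what such a job template (job/ado-variable-group.yml) might look like; the parameter names are assumptions based on the values mentioned throughout this post:

```yaml
# Sketch of a reusable job template; parameter names are illustrative.
parameters:
  - name: ENV_FILE
    type: string
  - name: GROUPNAME
    type: string
  - name: ADO_ORG
    type: string
  - name: ADO_PROJECT
    type: string

jobs:
  - job: ado_variable_group
    pool:
      vmImage: ubuntu-latest
    steps:
      - bash: |
          set -euo pipefail
          az extension add --name azure-devops --only-show-errors

          # Source the env file so its values can drive later script logic too
          set -a; source '${{ parameters.ENV_FILE }}'; set +a

          # Read KEY=VALUE lines into a bash array, skipping comments and blanks
          mapfile -t VARS < <(grep -Ev '^[[:space:]]*(#|$)' '${{ parameters.ENV_FILE }}')

          # ...create-or-update logic from the script sketched earlier...
        env:
          AZURE_DEVOPS_EXT_PAT: $(ADOPAT)  # PAT from the cicd_secrets group
        displayName: Publish variable group ${{ parameters.GROUPNAME }}
```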

If you are paying attention, you can see that we also source the env file itself, which is not strictly necessary.

This may or may not be what you want depending on your requirements but it does allow for some pipeline trickery. For instance, it can be useful to include deployment specific information within the env file that can then be used later in the same pipeline. So you could, in theory, include the ADO_ORG, ADO_PROJECT, GROUPNAME, AZSUB, and more within this file and use it later in the script to reduce your pipeline code and parameters quite a bit.
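In that scenario the env file would carry the deployment metadata alongside the application settings, something like this (illustrative values only):

```bash
# config/app.env -- deployment metadata plus application settings
ADO_ORG=myADOorg
ADO_PROJECT=MyProject
GROUPNAME=myapp-dev
AZSUB=<subscription id>
APP_NAME=myapp
```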

Requirements

In order to run the commands in the pipeline code you will need an existing keyvault-linked variable group (I call mine cicd_secrets) with some secrets already in place. These are:

  • clientid
  • clientsecret
  • tenantid
  • ADOUSER
  • ADOPAT

The clientid/clientsecret/tenantid are used mainly just to log in to the subscription; I use my Terraform SPN just to be certain. The ADOUSER and ADOPAT are a bit of a bummer, as you need to pre-create the PAT manually with your own account. You can create this PAT with the following permissions:

  • Variable Groups – Read, create, & manage
  • Service Connections – Read, query, & manage (optional)

I’ll cover in another post the automation of service connections via pipeline as code. I’ll leave it up to you if you want to include this permission in your PAT but it is technically optional for this exercise.

NOTE ADO PATs are the preferred method for automating ADO via the CLI. These are called personal access tokens for a reason: they cannot be scoped at a project level, so you are required to create them with your own account! Be kind to future owners of this process and document that fact well, so that when your account gets deactivated a new PAT can be generated and the appropriate key vault secrets updated.
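Putting those secrets to work, the authentication portion of the pipeline script might look something like this (a hedged sketch; the variable names match the cicd_secrets entries above):

```bash
# Log in to the subscription with the SPN credentials
az login --service-principal \
  --username "${clientid}" \
  --password "${clientsecret}" \
  --tenant "${tenantid}"

# The azure-devops CLI extension reads the PAT from this environment variable
export AZURE_DEVOPS_EXT_PAT="${ADOPAT}"
```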

Usage

As the pipeline is reusable template code, you would need to place it into your own repo and reference it in your calling pipeline.
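A calling pipeline might look something like this hedged reconstruction (the concrete names match the assumptions spelled out in the note below; the env file path and group name are placeholders):

```yaml
# azure-pipelines.yml in the repo that owns the env file
trigger:
  branches:
    include:
      - master
  paths:
    include:
      - config/*

resources:
  repositories:
    - repository: platform
      type: git
      name: MyProject/pipelinecode

variables:
  - group: cicd_secrets

jobs:
  - template: job/ado-variable-group.yml@platform
    parameters:
      ENV_FILE: config/app.env
      GROUPNAME: myapp-dev
      ADO_ORG: myADOorg
      ADO_PROJECT: MyProject
```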

This example triggers on the master branch of a repo only when files within the ‘config’ folder are updated. That folder is where one would drop the source env file and this pipeline YAML.

NOTE This example assumes that your ADO org is myADOorg and that the project MyProject in ADO hosts the repository pipelinecode, which includes the template pipeline as code shown in the prior section. It also assumes the required variables mentioned earlier are in the keyvault-linked variable group called cicd_secrets.

Pipeline Code Reuse

I highly recommend you start doing this for all of your pipeline code, as it will make things far easier to support and expand upon down the line. My personal preference is to store the code within folders named for the kind of component the template is meant to be used in. For example, the following folders might be present in your pipeline as code library:

  • multistage
  • build
  • deploy
  • job

You get the picture. The nice thing about this layout is that you can see immediately what the code is used for within your pipelines when calling the template:

    - template: job/ado-variable-group.yml@platform

Conclusion

For a while I was on the fence about whether I should even be using these ADO libraries in my pipelines, as they tend to get updated outside of version control. Not anymore though. With this pipeline code I can now put the configuration itself into version control and have PR-approved updates along with the rest of the code that gets deployed. Pairing this kind of variable group maintenance with a well-thought-out naming convention, a pipeline as code shared library, and per-environment deployment git repositories can be a powerful combo worth looking into.

All of the code in this article, along with any other Azure DevOps related work I’ve done, is currently on GitHub.