This is a three-part series on introducing DevOps to your organization. You may start from the beginning, or read the second post before continuing.
In my previous post I described the two components of a DevOps solution: the practices and the tools. How these components are introduced into your organization largely depends on how the organization is managed and on whether the effort is part of a new project, an enhancement to an existing codebase, or ongoing maintenance activities.
For new projects, the answer is quite simple – after a solution is chosen, the practices and tools are introduced as part of that new effort. New teams are formed, new servers are provisioned, and the benefits are realized organically over the project's lifespan.
For existing projects and ongoing maintenance activities, the implementation can become more complex depending on the structure and culture of your organization. As previously mentioned, traditional reporting structures could mean that operations engineers remain on separate teams and some efficiency is lost. In addition, satisfying business justifications may mean months of waiting to roll out changes in processes. From a tools standpoint, the steps below outline techniques that can be used to introduce automation and DevOps practices into existing environments:
Inventory: In order to model your server configurations in code, it is necessary to understand the current implementations that exist in your environment. Many of the tools popular in the DevOps space allow you to define configuration in a hierarchy such that default and common base configurations can be defined and extended where necessary.
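As a sketch of what such a hierarchy can look like, Ansible lets defaults live at the top level and be overridden per group or per host. The file names follow Ansible's `group_vars`/`host_vars` convention; the variable names and values here are hypothetical:

```yaml
# group_vars/all.yml -- defaults shared by every server (hypothetical values)
ntp_server: ntp.example.com
log_level: warn

# group_vars/webservers.yml -- overrides for the 'webservers' group
log_level: info
nginx_worker_processes: 4

# host_vars/web01.example.com.yml -- a host-specific override
nginx_worker_processes: 8
```

During the inventory exercise, the goal is to factor what you discover on real servers into this kind of layering: common settings at the top, exceptions pushed down as close to the individual host as necessary.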
Develop and Test: After the various configurations are discovered as part of the Inventory exercise, they must be coded into one of the file types dictated by the tool you’ve chosen (Dockerfiles, Puppet manifests, Ansible playbooks, Chef recipes, etc.) and tested. In addition to executing the changes on test servers, many DevOps tools ship with features that let engineers validate the structure of configuration files prior to execution, and “dry-run” modes that show what the results of a real execution would be without making any changes.
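For instance, both Ansible and Puppet expose the validation and dry-run features described above as command-line flags (the file names `site.yml` and `site.pp` are placeholders):

```shell
# Validate playbook syntax without contacting any hosts
ansible-playbook --syntax-check site.yml

# Dry run: report what would change, and show diffs, without changing anything
ansible-playbook --check --diff site.yml

# Puppet equivalents: validate a manifest, then apply it in no-op mode
puppet parser validate site.pp
puppet apply --noop site.pp
```

Running these in a CI pipeline on every change catches malformed files and surprising diffs before anything reaches a real server.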
Rollout: At this point, the server configurations are coded, tested, and ready to be used. These tools either drive the platform-specific package managers directly (Ansible, Chef), choose the appropriate package manager via an abstraction layer (Puppet), or sandbox your configuration from the host entirely (Docker). As a result, configurations can be rolled out to existing infrastructure as part of normal deployment and update processes, so long as this is approached in a sensible way – for example, in parallel to existing deployments with a subsequent cutover, or during an extended maintenance window. Plans may differ slightly depending on which tools are at play, but none of the tools mentioned are specific to cloud infrastructure or require freshly provisioned servers to get started.
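The gradual-rollout idea above can be sketched in Ansible, which supports rolling updates natively. This is a hypothetical playbook – the host group, file paths, and service name are placeholders – but the keywords (`serial`, `max_fail_percentage`, handlers) are standard:

```yaml
# site.yml -- hypothetical rolling rollout of a coded configuration.
# 'serial: 1' updates one host at a time, so the rest of the fleet
# keeps serving traffic while each server is cut over.
- hosts: webservers
  serial: 1
  max_fail_percentage: 0   # abort the whole rollout on the first failure
  tasks:
    - name: Deploy the coded configuration
      template:
        src: app.conf.j2
        dest: /etc/app/app.conf
      notify: restart app
  handlers:
    - name: restart app
      service:
        name: app
        state: restarted
```

Combined with the dry-run checks from the previous step, this limits the blast radius of a bad change to a single server rather than the entire environment.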
While ‘DevOps’ has been a popular industry buzzword for several years, before introducing its component concepts into your organization it is important to step back and understand the problems being faced, the solutions available, and how to put these practices and tools in place. In almost all cases, however, there is great benefit to introducing at least a portion of these concepts in organizations where they don’t currently exist – if only for the efficiency gained by using automation to remove the human (and mistake-prone) element from manual configuration management.