In my previous post, I discussed what DevOps is and why it is important to set an appropriate, achievable, and justifiable DevOps goal tailored to your organization. With that goal in mind, a DevOps solution can be broken into two components: practices and tools.
Changing the way operations and development teams interact falls into the practices category. Treating operations activities the same way development tasks are treated is a good start; some Agile project-tracking tools call these types of activities 'chores'. Staffing operations engineers on project teams, where they participate as developers, is a step further: they attend scrum meetings, commit scripts and coded configuration to version control, and work with developers to support the project's continuous integration, test, and delivery activities. Depending on the project's size and delivery schedule, these engineers may be dedicated full-time for the duration of the project. In other cases, as with other project support activities, their direct involvement ebbs and flows with project demands. In still other cases, organizational constraints preserve traditional reporting structures, and interactions between operations and development remain inter-team rather than intra-team. Some efficiency may be lost when traditional reporting structures are preserved, but there is still much value to be realized by introducing popular DevOps-related tools into an organization.
As with the practices above, your goals will help determine which tools, if any, are appropriate for your organization, and the make-up and complexity of your environment will further drive tool selection. Below is a brief list of some of the tools we've used (alongside other tools and various home-grown solutions), with examples of where each might work well:
Ansible (http://www.ansible.com/home) is generally considered easy to get started with and can be used to automate configuration management activities. While it doesn't have all of the features other tools have, it is simple to set up, runs without requiring agents on managed nodes, and offers a UI that provides admin features for configuring authorization, scheduling jobs, executing jobs, and viewing job status. Ansible supports Windows, though its configuration files (called 'playbooks') are platform-specific. Ansible's execution model is straightforward: playbooks are interpreted top-to-bottom, and tasks are performed in order. Ansible can also provision virtual servers via an installable sub-project.
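To give a feel for the top-to-bottom execution model, here is a minimal sketch of a playbook. The host group name (`web`) and the choice of nginx are illustrative assumptions, not from any particular project:

```yaml
# Hypothetical playbook: install and start nginx on hosts in the "web" group.
# Note the apt module ties this playbook to Debian/Ubuntu nodes -- playbooks
# are platform-specific.
---
- hosts: web
  become: yes
  tasks:
    - name: Install nginx            # runs first
      apt:
        name: nginx
        state: present

    - name: Ensure nginx is running  # runs second, in written order
      service:
        name: nginx
        state: started
        enabled: yes
```

Running `ansible-playbook site.yml` against an inventory executes the tasks in exactly the order written, which is part of what makes Ansible approachable for beginners.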
Puppet Enterprise (https://puppetlabs.com) is a feature-rich and platform-agnostic management tool that can manage servers at scale efficiently and effectively. Nodes managed by Puppet require agents to be installed, and these agents are configured to 'pull' information from a central controller (the Puppet Master). Puppet Enterprise also comes with a management UI that can be used to monitor current environment status in addition to executing jobs and managing authorization-related concerns. It's worth noting that because Puppet nodes have agents installed on them, they can be configured to keep themselves in compliance with specific configurations (called 'manifests') automatically. Another advantage of Puppet (which, anecdotally, can also make it more difficult to troubleshoot when problems arise) is the Resource Abstraction Layer, which lets Puppet decide which package manager to use depending on the OS the agent is running on. In this way, manifests can be written once and run on any supported platform. Puppet determines the order in which tasks in a manifest run by attempting to find the most efficient ordering. While this order can be overridden and made explicit, the feature can be confusing to beginners. Support for provisioning is available out of the box.
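A minimal manifest sketch illustrates both points above: the `package` resource goes through the Resource Abstraction Layer (so the same manifest works with apt, yum, or another supported package manager), and `require` makes ordering explicit rather than leaving it to Puppet. The nginx example is an illustrative assumption:

```puppet
# Hypothetical manifest: resource names are illustrative.
# The generic 'package' type lets the Resource Abstraction Layer
# pick the right package manager for the node's OS.
package { 'nginx':
  ensure => installed,
}

service { 'nginx':
  ensure  => running,
  enable  => true,
  require => Package['nginx'],  # explicit ordering: install before starting
}
```

Without the `require`, Puppet would compute its own ordering, which is the behavior that tends to surprise newcomers.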
Docker (http://www.docker.com) is unique in that it is not only a configuration management tool but also provides an environment for 'Docker-ized' applications to run in (called containers). A 'Docker-ized' application contains not only the application code itself, but also a Dockerfile that defines the full set of dependencies, including the OS, as well as other items such as the desired directory structures and supporting software like an HTTP server. Depending on the platform, Docker containers execute in lightweight VMs (Windows and OS X) or, on Linux, in kernel-level sandboxed namespaces, isolated from the host via Linux Containers (https://linuxcontainers.org) technology. This allows organizations to deploy applications to consistently configured environments regardless of the host platform. There is wide support for Docker containers in cloud environments such as Azure, AWS, and DigitalOcean.
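As a sketch of what "the Dockerfile defines the full set of dependencies" means in practice, here is a hypothetical Dockerfile for a static site. The base image, directory names, and port are illustrative assumptions:

```dockerfile
# Hypothetical Dockerfile: the base image pins both the OS layer and the
# HTTP server, so every container starts from an identical environment.
FROM nginx:stable

# Copy the site content into the location nginx serves by default.
COPY site/ /usr/share/nginx/html/

# Document the port the container listens on.
EXPOSE 80
```

Building with `docker build -t mysite .` and running with `docker run -p 8080:80 mysite` produces the same environment whether the host is a developer's laptop or a cloud VM, which is the consistency benefit described above.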
In my next blog post on DevOps, I will explain how the introduction of these practices and tools into your organization will largely depend on your management structure and on whether the related effort is part of a new project, an enhancement to an existing codebase, or ongoing maintenance activities.