Becoming responsible corporate stewards
Gone are the days when every new application the organization needs to run requires months of research, planning, and significant capital investment. The cloud has shifted our consumption of IT resources from large cash outlays and multi-year amortization schedules to operationally managed, monthly spending. If you want that monthly spending to be predictable, reportable, and relevant, your organization must develop a framework that ensures all three.
Governance of your environment is the biggest step your organization can take toward that goal. The cloud provides “limitless access,” which can translate into limitless spending. We must use our knowledge of our organizations and of IT in general to create guardrails that keep us in the right lanes. Laying out strategies for application development, testing, and deployment; creating approved platforms; defining common structures; deciding which programming languages and technologies will be used; ensuring environmental security; and maintaining up-to-date documentation are some of the many steps you can take toward governing your IT environments.
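One way to make guardrails concrete is to enforce them in code before anything is deployed. The sketch below assumes a hypothetical governance rule requiring three tags on every resource; the tag names and the resource dictionary shape are illustrative, not any particular cloud provider’s API.

```python
# Hypothetical guardrail: reject resource definitions that are missing
# the tags your governance framework requires. Tag names are examples.
REQUIRED_TAGS = {"owner", "cost-center", "environment"}

def missing_tags(resource: dict) -> set:
    """Return the set of required tags absent from a resource definition."""
    present = set(resource.get("tags", {}))
    return REQUIRED_TAGS - present

def validate_resources(resources: list) -> list:
    """Collect (name, missing-tags) pairs for every non-compliant resource."""
    failures = []
    for res in resources:
        gaps = missing_tags(res)
        if gaps:
            failures.append((res.get("name", "<unnamed>"), sorted(gaps)))
    return failures
```

A check like this can run as a pipeline gate, so spending always traces back to an owner and a cost center before a resource ever exists.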
Modern technologies make it possible to design and deploy applications in ways that were not available even a few years ago. Logically separating the layers of your applications allows your business to take advantage of serverless technologies and managed platforms designed to make your environments easier to manage, scale, and operate.
Through this process, we often decompose monolithic applications into microservices. From there, we must decide how best to deploy those microservices: containers, serverless execution platforms, or other on-demand computing models. Serverless technologies can be used in many places, but the needs of our applications often extend beyond the capabilities of serverless platforms; we need something bigger than a function but smaller than a virtual machine (VM).
The middle ground between a VM and a serverless function is filled with solutions, many of which involve containerizing different functional layers of your application. Containerization lets you build a self-contained environment that holds and executes your code, running on top of an existing server (physical or virtual) and sharing its operating system kernel. A container can run continuously, or it can be deployed and removed programmatically, depending on the needs of your application.
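That programmatic deploy-and-remove lifecycle can be scripted. The sketch below builds standard `docker run` and `docker rm` command lines as argv lists, ready to hand to a process runner; the image and container names are made up for illustration.

```python
# Assemble Docker CLI commands for programmatic container lifecycle
# management. Flags used (-d, --name, -p, -f) are standard Docker options.
def docker_run_cmd(image, name, ports=None, detach=True):
    """Build an argv list that starts a container from the given image."""
    cmd = ["docker", "run"]
    if detach:
        cmd.append("-d")  # run in the background
    cmd += ["--name", name]
    for host_port, container_port in (ports or {}).items():
        cmd += ["-p", f"{host_port}:{container_port}"]
    cmd.append(image)
    return cmd

def docker_rm_cmd(name, force=True):
    """Build an argv list that removes a container by name."""
    cmd = ["docker", "rm"]
    if force:
        cmd.append("-f")  # stop and remove in one step
    cmd.append(name)
    return cmd
```

Passing these lists to something like `subprocess.run` lets a scheduler or pipeline create a container for a burst of work and tear it down when the work is done.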
For many of us, containers are an easy lift to deploy in our environments. Logically separating the components of an application and deploying them on top of a shared operating-system layer can seem like the perfect solution. But how we deploy them has a significant impact on how easily those deployments can be managed.
Nested Virtualization vs. Managed Container Platforms
What is the easiest way to deploy containers in your environment? Most people would immediately think of a VM (installed on a hypervisor) running Docker (a container runtime on top of the VM’s operating system), with Docker Swarm layered on for scalability, high availability, and fault tolerance. That may seem “easy to manage” because you are using technologies you are familiar with, but you could be needlessly complicating something that can be achieved in a better way.
Managed containerization platforms, such as AWS ECS or Azure Container Instances, can help your organization realize efficiencies that may not be available otherwise. Building a container registry to hold your images, orchestrating the deployment and retirement of containers, and scaling to meet demand must all be balanced delicately to achieve and maintain success. Managed platforms expose only the necessary components, such as your container registry and orchestration parameters, and remove the upstream interaction that would otherwise be needed to maintain your own nested virtualization platform.
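“Exposing only the necessary components” is visible in what a managed platform asks you for. The sketch below builds a minimal ECS-style task definition payload of the kind you might pass to the RegisterTaskDefinition API (for example via boto3’s `ecs` client); the image URI, family name, and resource sizes are illustrative assumptions — only the parameter shape follows the ECS API.

```python
# Build a minimal Fargate-style ECS task definition payload. You declare
# the image, CPU/memory, and ports; the platform handles the hosts.
def task_definition(family, image, cpu=256, memory=512, container_port=80):
    """Return parameters suitable for an ECS RegisterTaskDefinition call."""
    return {
        "family": family,
        "requiresCompatibilities": ["FARGATE"],
        "networkMode": "awsvpc",
        "cpu": str(cpu),        # ECS expects CPU units as a string
        "memory": str(memory),  # memory in MiB, also as a string
        "containerDefinitions": [{
            "name": family,
            "image": image,
            "portMappings": [{"containerPort": container_port,
                              "protocol": "tcp"}],
            "essential": True,
        }],
    }
```

Notice what is absent: no hypervisor, no VM sizing, no Swarm topology. Everything upstream of the container itself is the platform’s problem, not yours.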
If your organization uses deployment pipelines and operates on a CI/CD model, you can use existing SDKs to deploy these managed services programmatically, in languages you are most likely already using. Essentially, your application can “build itself”: it creates the infrastructure it needs to run and maintains much of its own operations, all while ingesting changes from your code pipelines and repositories to keep your containers in sync with your development timelines.
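A hedged sketch of that “build itself” loop: a pipeline step derives an image tag from the commit it just built, then assembles the parameters for an ECS UpdateService call (boto3: `ecs.update_service`). The registry, cluster, and service names are placeholders; only the parameter names follow the ECS API.

```python
# Hypothetical CI/CD step: tie the deployed image to the commit that
# produced it, then roll the service onto the new task definition.
def image_tag(registry, repo, commit_sha):
    """Tag the image with the short commit SHA so deployments trace to code."""
    return f"{registry}/{repo}:{commit_sha[:7]}"

def update_service_params(cluster, service, task_definition_arn):
    """Return parameters for an ECS UpdateService call."""
    return {
        "cluster": cluster,
        "service": service,
        "taskDefinition": task_definition_arn,
        "forceNewDeployment": True,  # roll containers even if the ARN repeats
    }
```

In practice, the pipeline would pass the returned dict straight to the SDK call after a successful image push, so every merge to the main branch propagates to running containers without a human touching the console.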