Infrastructure as code (IaC) is the practice of configuring and managing infrastructure, such as networks and servers, through machine-readable definition files, and it is one of the most visible outcomes of the rise of the DevOps mindset. IaC allows developers to stand up IT environments with a few lines of code and deploy them in a matter of minutes (in contrast to manually provisioned infrastructure, which can take hours, if not days, to deploy).
In this article, we’ll take a deeper look at the benefits and limits of IaC. After a brief discussion of the advantages of using this paradigm, we’ll look at the various elements to which it can be applied, before a detailed discussion of how, at a practical level, you can apply it to your own infrastructure.
As we’ll see, achieving adaptive infrastructure through IaC is possible, but it requires careful planning, rigorous documentation, and skillful programming. In this article, I’ll explain how that can be done, but first a word of warning: everything here applies to network-enabled, hosted, largely cloud-based software development. If you are working with more traditional systems, it’s unlikely to apply to your workflow.
The advantages of IaC
The IaC paradigm has seen a meteoric rise in popularity in recent years because it allows developers to deploy their infrastructure quickly, and this has had knock-on effects on a number of closely associated technologies. The popularity of IaC, for instance, is a key reason why Kubernetes is so popular right now, and implementing an IaC pipeline is often conflated with DevSecOps compliance processes.
Because of the way that IaC is therefore integrated with other technologies and processes, it can be difficult to isolate the advantages offered by IaC itself, rather than its associated practices. Efficient compliance, for example, is a consequence of the way that IaC interacts with managerial practices, and not a feature of IaC itself.
In reality, IaC offers a number of well-defined advantages, largely to DevOps teams:
- Standardization of infrastructure configuration, which can help to reduce human error.
- Control of infrastructure provisioning by a centralized source.
- Integration of infrastructure code into CI/CD pipelines.
- The ability (in principle) to quickly adapt infrastructure.
At its core, an IaC deployment aims to give DevOps teams the ability to create temporary, ephemeral environments in which new features can be tested without affecting the stability of main-branch releases. However, achieving this goal requires careful planning and development.
The IaC topology
In order to understand how IaC can be used to achieve truly adaptive network and infrastructure resources, it’s instructive to recognize the range of areas in which IaC affects the development lifecycle.
In order to build a DevOps practice in the right way, IaC can be usefully thought of as being applied across a particular topology, which includes not only the provisioning of infrastructure, but also the configuration of servers, containerization, and orchestration. Let’s take a look at each part of this topology separately.
Provisioning of infrastructure
For most DevOps teams, the primary reason for adopting an IaC approach is that it gives them the ability to reactively provision infrastructure resources in the form of machine-readable files. Using IaC, software developers can use code to provision and deploy servers and applications rather than rely on system administrators. Developers can also enable auto-scaling, meaning that a program can automatically adjust resources at runtime to handle increased traffic.
In practice, this will mean that a developer will write an infrastructure-as-code process to provision and deploy a new application for QA or experimental deployment. Operations will then take over for live deployment in production. With the infrastructure setup written as code, it can go through the same version control, automated testing, and other steps of a continuous integration and continuous delivery (CI/CD) pipeline that developers use for application code.
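As an illustration of what it means to put infrastructure code through automated testing, the sketch below shows a hypothetical pre-deployment policy check. The required keys and allowed regions are invented for this example; they are not part of any real tool:

```python
# Hypothetical policy: which fields every server spec must carry,
# and which regions the organization permits.
REQUIRED_KEYS = {"name", "instance_type", "region"}
ALLOWED_REGIONS = {"us-east-1", "eu-west-1"}

def validate_server_spec(spec):
    """Return a list of policy violations; an empty list means the spec passes CI."""
    errors = []
    missing = REQUIRED_KEYS - spec.keys()
    if missing:
        errors.append(f"missing keys: {sorted(missing)}")
    if spec.get("region") not in ALLOWED_REGIONS:
        errors.append(f"region {spec.get('region')!r} not allowed")
    return errors
```

A check like this would run as an early CI stage, failing the pipeline before any resources are created.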
This helps to overcome some of the challenges of hosted solutions by giving teams the ability to create isolated instances of various serverless resource types. This provisioning is usually achieved via integrated tools, like AWS CloudFormation, PowerShell, or Google Cloud Deployment Manager.
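Auto-scaling, mentioned above, ultimately comes down to a simple decision rule: how many instances are needed for the observed load? Here is a minimal, platform-agnostic sketch of that decision; the thresholds and parameter names are illustrative, not any real platform's API:

```python
import math

def desired_replicas(current_rps, target_rps_per_replica, min_replicas=2, max_replicas=10):
    """Return how many instances should run for the observed request rate."""
    needed = math.ceil(current_rps / target_rps_per_replica)
    # Clamp to the configured bounds so the fleet never scales to zero or runs away.
    return max(min_replicas, min(max_replicas, needed))
```

Real auto-scalers add smoothing and cooldown periods on top of a rule like this, but the core logic is the same.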
Configuration of servers
The next step up in the IaC topology is the process and practice of creating, provisioning, and configuring virtual machines. In most implementations of IaC, this is done via the use of server models. These allow DevOps teams to use images, which contain standard configurations as templates, and to automate the application of these to newly-created virtual machines.
These server models may contain a wide variety of configuration data, including those relating to the security hardening of virtual machines, directory hierarchies, disk mounting limitations, and even network configuration protocols. Again, creating server images and applying them to virtual machines is usually achieved via a set of free tools, including Ansible, PowerShell DSC, Chef, Puppet, and SaltStack.
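Tools like Ansible or Chef each have their own formats, but the underlying idea of a server model can be sketched generically: a shared base template with per-machine overrides applied on top. The configuration keys here are invented for illustration:

```python
import copy

# Hypothetical base image configuration shared by all machines.
BASE_TEMPLATE = {
    "hardening": {"ssh_root_login": False, "firewall": "enabled"},
    "mounts": ["/data"],
    "network": {"dns": ["10.0.0.2"]},
}

def render_server_config(overrides):
    """Apply per-machine overrides on top of the shared template."""
    config = copy.deepcopy(BASE_TEMPLATE)  # never mutate the template itself
    for section, values in overrides.items():
        if isinstance(values, dict):
            config.setdefault(section, {}).update(values)  # merge within a section
        else:
            config[section] = values  # replace non-dict sections wholesale
    return config
```

Keeping the template immutable and rendering a fresh config per machine is what makes the standardization reproducible.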
Containerization
A different approach to the creation of VMs is to deploy applications within containers. This kind of server-agnostic implementation can offer efficient application scaling, but it can come with a significant cost in terms of performance. In practice, most containerization today is achieved through Docker, which has become the standard tool for creating and working with containers. This is partially because of the flexibility offered by the Docker interface, in which the configuration of Docker images can be controlled alongside the deployment of the images themselves inside a Dockerfile.
There are, however, limits that are imposed by working with Docker in this way. Specifically, the configuration of a container can’t be modified. This means that using containerization is essentially a way of trading agility for compatibility across DevOps environments—while containers might be easier to work with, especially in large teams, they can end up replicating the same kind of monolithic approach that IaC has been developed to avoid.
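Because a container's configuration can't be modified in place, changes are made by regenerating the image definition and rebuilding. That regeneration step can itself be expressed as code; here is a toy generator (the package-install command assumes a Debian-based base image, and the parameters are illustrative):

```python
def render_dockerfile(base_image, packages, entrypoint):
    """Generate a Dockerfile so that changes go through a rebuild, not an in-place edit."""
    lines = [f"FROM {base_image}"]
    if packages:
        # Single RUN layer keeps the generated image small.
        lines.append("RUN apt-get update && apt-get install -y " + " ".join(packages))
    lines.append(f'CMD ["{entrypoint}"]')
    return "\n".join(lines)
```

Treating the Dockerfile as a build artifact, rather than something edited by hand, keeps the image's provenance in version control.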
In particular, DevOps teams should make sure that they understand the differences between Static Application Security Testing (SAST) and Dynamic Application Security Testing (DAST) and the limitations that using containers puts on these approaches. Dynamic testing is a resource-intensive process that, when deployed on containerized code, can quickly exhaust computing resources.
Deployment via Kubernetes
The final part of the IaC topology is the one that will be most visible and familiar to IT Ops teams: the container orchestrators that control the way in which containers are deployed and provisioned. The most widespread of these orchestrators is undoubtedly Kubernetes.
However, the ubiquity of Kubernetes has created something of a problem—that most IT Ops staff, and indeed many developers, think that IaC and Kubernetes are synonymous. I don’t mean this as a criticism of Kubernetes. The system is perhaps the purest expression of the IaC paradigm: eminently portable, but also capable of being adapted to run efficiently on a wide variety of hardware.
However, Kubernetes is far from representing the full range of what can be achieved by a careful mix of containers, VMs, and a creative use of container orchestrators. In other words—Kubernetes is the start of your IaC journey, not the end. While it’s a great place to begin to explore adaptive provisioning and continuous integration, many firms will need to develop bespoke containerization strategies in order to work with obscure systems: those that interact with legacy hardware, for instance, or need to be optimized for particular hardware.
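Whatever the orchestration strategy, the Kubernetes model of declaring desired state as data is a useful reference point. Below is a minimal Deployment object built as plain data rather than hand-written YAML; the name, image, and replica count are illustrative:

```python
def deployment_manifest(name, image, replicas):
    """Build a minimal Kubernetes Deployment object as plain data."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            # The selector must match the pod template's labels.
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }
```

Generating manifests programmatically like this is one way teams move beyond copy-pasted YAML toward the bespoke strategies discussed above.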
IaC best practices
For most DevOps teams, deploying each part of the IaC topology I’ve described above will require a careful balance between strategic leadership and focused development work. At the broadest level, teams will need to define their desired outcomes, particularly the level of infrastructural agility they want to achieve.
They will then need to sketch a roadmap: one that details the way in which VMs and containers will be created, configured, managed, and tested. Individuals or small teams can then work on the actual development of this “infrastructure of infrastructure.”
Working toward that goal requires that development and operations teams are working based on the same standards and to the same requirements. As such, there are a number of principles and rules that should be followed by all teams working within IaC environments.
These range from the general to the specific. The general rules and principles are not unique to IaC environments, but they are particularly important within them:
- Use consistent nomenclature, ideally arrived at via high-level meetings between development and operational teams.
- Don’t overload code with unnecessary comments or “notes to self” (though see the converse point about documentation below).
- Whenever possible, use small functions and a shared application library.
- Implement error handling across all functions, applications, and containers, no matter how small, and no matter how “temporary” they are envisaged to be.
- Take a security-first approach to programming, particularly when it comes to open source code or code that you plan to share with other developers.
These are, after all, the basic good practices of software development more generally. However, following these principles becomes more important than ever when code is empowered to radically change the way in which infrastructure is deployed and managed.
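As an example of the error-handling principle above applied to infrastructure code, here is a generic retry wrapper for a provisioning call. RuntimeError stands in for whatever transient error a real provider SDK would raise; the function and parameter names are invented for illustration:

```python
import time

def provision_with_retry(create_resource, attempts=3, delay=0.0):
    """Call a provisioning function, retrying transient failures and surfacing the last error."""
    last_error = None
    for _ in range(attempts):
        try:
            return create_resource()
        except RuntimeError as exc:  # stand-in for a transient provider error
            last_error = exc
            time.sleep(delay)
    raise RuntimeError(f"provisioning failed after {attempts} attempts") from last_error
```

Even "temporary" environments deserve this treatment: a half-provisioned environment that fails silently is far harder to debug than one that fails loudly.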
Developing IaC environments
In addition to these general rules, there is also a set of good practices that stems from the way in which IaC environments work. Since the central promise of these environments is that the provisioning and configuration of both VMs and containers can be automated, it is more important than ever that the code that runs this automation be consistent, reliable, and error-free.
This observation allows a number of key rules of developing IaC environments to be derived:
- Automate as much of the process as possible; this should both reduce errors and improve speed, even though it may mean that multiple IaC frameworks need to be used.
- Infrastructure code should be stored (as far as possible) with application code, and ideally within the same repos. In practice, this might be difficult, but it should be attempted wherever it is possible. Without this, one of the key advantages of IaC (adaptive provisioning, as mentioned previously) is lost.
- Integrate IaC into the broader CI/CD process so that code and applications developed within your IaC environment can be rigorously tested. By linking your IaC system to your CI/CD pipeline, you can integrate and deploy the code in different environments.
- Documentation should be clear, concise, and contained within IaC code itself.
- As far as possible, IaC should also be developed as a system of modular code elements.
As is evident from this list, developing IaC environments must be done carefully, and in a way that ensures the integrity of each element of the system. This is more than possible, of course, but it requires both careful management and a high degree of integration across DevOps teams.
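The modularity principle above can be sketched in miniature: an ephemeral environment composed from small, reusable building blocks. The resource shapes here are invented for illustration, not any real framework's types:

```python
def network(cidr):
    """A network resource, described as plain data."""
    return {"type": "network", "cidr": cidr}

def server(name, net):
    """A server attached to an existing network resource."""
    return {"type": "server", "name": name, "network": net["cidr"]}

def qa_environment():
    """Compose an ephemeral QA environment from the modules above."""
    net = network("10.1.0.0/24")
    return [net, server("qa-app", net), server("qa-db", net)]
```

Because each module is small and self-contained, a production environment can reuse the same building blocks with different parameters, which is exactly the adaptive provisioning that IaC promises.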
Challenges and opportunities
IaC is a powerful approach, but it is one that is often misunderstood. Many DevOps teams have (rightly) recognized the value that IaC can offer, but persist in using the term as a synonym for the deployment of basic containerization and Kubernetes.
As I hope I have indicated in this article, in reality IaC goes far beyond this. At its best, IaC refers to a wide-ranging topology of elements that redefines the way in which code is developed, packaged, and tested. Working within such an environment can be challenging, but by following the principles I’ve sketched above, it can also be highly rewarding.