Containerization took the development and operations world by storm. In days past, deployment was highly technology-specific and generally demanded significant, non-repeatable engineering effort for each project. Were you deploying to a VPS? Distributing VM images? Static executables? Scripts that needed some particular interpreter? Depending on your answers, you may have used any combination of Capistrano, Puppet, shell scripts, Ansible, .rpm packages, cloud-init scripts, proprietary cloud technologies, upstart/systemd/init... the list goes on. The line between system administration and development blurred when it came to deployment, and the discipline of DevOps was born. As DevOps matured, it developed best practices around deployment, such as the 12-factor app methodology, but many of the implementation details remained technology-specific.
Then Docker came along and commoditized application deployment with this simple promise: if your application can be packaged as a container, then it can be deployed anywhere. Containers were nothing new - after all, Google had been using them for years. Unix hackers had also used Solaris Zones and FreeBSD Jails for similar purposes. However, until Docker entered the scene, there was no good story for easily packaging an application into a portable container. Docker revolutionized the way we deploy applications.
Docker solved many significant deployment concerns, so the next question was whether Docker offered any advantage for development. There are many advantages to having a development environment that looks (at least if you squint and tilt your head a bit) like production. If you deployed Docker containers in production, then it was only logical to run your code in them during development as well. Additionally, Docker solved the problem of versioning dependencies. For example, if you had one application that required MySQL 5.3 and another that required MySQL 5.7, you did not have to bend over backward to run both versions locally, and you did not have to pay the overhead of running each version in its own VM. You could simply have a container for each version that could start and stop in seconds.
Docker Compose for Development
In late 2013, Docker Compose (then called fig) entered the scene. Docker Compose had a simple premise: instead of using one-off scripts to start and stop your application and its dependencies in development, you describe them as Docker containers in a YAML file and let Compose manage their lifecycle. It provided additional niceties suited to 12-factor apps, such as log aggregation, configuration via environment variables, and basic container networking. In short, Docker Compose was the perfect tool for 12-factor app developers who wanted to get started with containerized development.
At first glance, Docker Compose seemed to be the ideal solution for local development - and in many cases, it was. However, as the name suggests, it focused only on development workflows where everything runs inside Docker. In some cases, this works just fine: for instance, if you write an API in Node.js that relies on Postgres, then you can run your code in a nodejs container (maybe with a file watcher in front of it) and Postgres in a postgres container. However, not all development workflows are amenable to being Dockerized. Whether for performance, easy integration with host OS features, or any number of other reasons, it is sometimes preferable to run parts of your development environment as local processes and other parts as containers. You are then left cobbling together a solution that manages the non-Docker portions and integrates them with the Docker containers.
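To make the all-in-Docker workflow concrete, here is a minimal Compose file for the Node.js-plus-Postgres scenario. This is an illustrative sketch: the service names, image tags, file-watcher command, and credentials are assumptions, not a prescribed setup.

```yaml
# docker-compose.yml (illustrative sketch)
services:
  api:
    image: node:14                # illustrative tag
    working_dir: /app
    volumes:
      - .:/app                    # mount source so the file watcher sees edits
    command: npx nodemon server.js
    environment:
      DATABASE_URL: postgres://postgres:devpassword@db:5432/postgres
    ports:
      - "3000:3000"
    depends_on:
      - db
  db:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: devpassword
```

Running `docker-compose up` then starts both containers, aggregates their logs, and creates a network in which the api service reaches the database at the hostname db - exactly the lifecycle management and niceties described above.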
Consider also that Docker relies on Linux kernel-specific features to implement containers, so users of macOS, Windows, FreeBSD, and other operating systems still need a virtualization layer. All of the complexities of networking, file synchronization, and VM management that we wanted to get away from by using containers are still there. Granted, they generally work... until they do not, at which point we are left with Google, Stack Overflow, and GitHub to try to figure out workarounds.
Modern Development: Cloud and Microservices
Fast-forward to 2021, and most production applications also rely on cloud infrastructure that cannot be run as local Docker containers, so we are faced with a new set of questions, each of which comes with a trade-off:
- Do we stub the cloud services? This option is cheap and performant, but except for very simple services, the engineering required to maintain local stubs is high.
- Does every developer have their own instance of every cloud resource? This is often a costly proposition where the company has to pay a high overhead to reserve infrastructure that is minimally used. Serverless offerings usually have a better cost model than reserved offerings, but the cost must still be considered.
- Do developers share common development infrastructure? In this option, the infrastructure cost is reduced, but there is often additional engineering effort that must be expended so that multiple applications can share the same databases and other stateful services without conflicts. In other words, every application must support multitenancy.
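To illustrate why the first option carries a maintenance cost, here is a hedged sketch of a local stub for a small slice of an S3-style blob store. The interface is hypothetical and covers only the happy path of put/get; a real stub must also track pagination, permissions, error behavior, and every other quirk the application comes to rely on, which is where the engineering effort accumulates.

```python
class BlobStoreStub:
    """In-memory stand-in for a cloud blob store, for local development only.

    Hypothetical interface: covers only put/get. Real services also have
    pagination, permissions, consistency quirks, and error codes, and the
    stub must be updated whenever the application starts depending on them.
    """

    def __init__(self):
        # bucket name -> {object key -> object bytes}
        self._buckets = {}

    def put_object(self, bucket, key, data):
        # Store a copy of the payload under bucket/key.
        self._buckets.setdefault(bucket, {})[key] = bytes(data)

    def get_object(self, bucket, key):
        # Return the stored payload, or fail like a missing object would.
        try:
            return self._buckets[bucket][key]
        except KeyError:
            raise FileNotFoundError(f"no such object: {bucket}/{key}")
```

In a development configuration, the application would be handed this stub in place of a real client; the trade-off is that every new service behavior the application relies on must be reimplemented and kept current here.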
Each one of these options is viable in a different scenario, but the main takeaway for the purpose of this discussion is that adopting Docker/Docker Compose does not address the problem - or even indicate which option would be best! A modern development environment orchestrator must be cloud-aware and support different runtime architectures. At present, infrastructure-as-code tools come closest to solving this problem, but since they are focused on production deployments, they do not integrate smoothly with local development environments.
In addition to cloud services, microservices bring their own complexities that are not solved by "just using Docker". Any large organization that has adopted a microservices strategy quickly outgrows the point at which any developer can run all the organization's services on her laptop. Tools like Telepresence help solve the problem of connecting local containers to ones running inside a remote Kubernetes cluster, but we still lack higher-level tools that can handle concerns like service discovery, proxying, and authentication transparently across a hybrid local/remote environment. Also, the tools that do exist are mostly Kubernetes-centric, which leaves many developers out of luck.
Our industry has made incredible strides in the past decade, thanks in part to technologies like Docker, Docker Compose, and Kubernetes. However, we are still figuring out how to do development in the heterogeneous environments in which we live. The next generation of developer tooling must be able to handle building and running local processes, Docker containers, cloud services, and even other teams' microservices. We do not have all of the answers, but we are building exo to help developers like ourselves overcome the complexity of local development.