At Substantial, between the tides of client work, we interleave work on internal projects to explore product ideas and experiment with technology.
Recent experimentation by the San Francisco team for Substantial Dash included stepping outside our comfort zone for application deployment. We normally use Chef to provision servers and Capistrano to deploy application code to those servers. That workflow leaves some perennial questions:
- How do we build a host server and then install the web application?
- How do we release a new version?
- Is a version just application code, or does it include dependencies for the host server too?
- Are server provisioning and app deployment reproducible?
Docker containers continue to emerge as an answer to these questions. In fact, Docker supports many of the best practices defined in The Twelve-Factor App.
Put your app in a box
Containerizing an application's runtime environment is a well-known software design pattern for modular, secure systems. The pattern occurs over multiple layers:
- packaging the runtime environment (a "version")
- containing the file system
- firewalling the network
- metering utilization of network and processor
Well-known container technologies range from the vendor-specific to the vendor-agnostic.
Lightweight virtualization with Docker
Docker containers share the kernel of the host, while virtual machines each run their own kernel on the shared processor. VMs are comparatively slow and expensive because they operate at such a low level: they duplicate operating-system overhead for every running virtual machine, preventing a single process scheduler from managing everything efficiently.
Today, the Docker daemon runs easily on Linux to host containers. Container images are typically based on a full Linux distribution so that all of the standard tools and libraries are available. Tiny containers may be created using minimal, embeddable systems such as BusyBox.
Images vs Containers
Every Docker "container" is run from an "image." The image is a snapshot that can be moved around and reused; `docker images` lists the images available to run with the Docker daemon. The container is the live runtime; `docker ps` lists the containers that are currently running.
The most visible part of Docker is the Dockerfile. Create it alongside your application code to construct an image of the application's complete environment. Notice in this example Dockerfile how everything the app needs to operate is built up from stock Ubuntu Linux.
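The linked example is not reproduced here, but a minimal Dockerfile for a Ruby web app might look like the following sketch. The base image version, package names, paths, and port are illustrative assumptions, not the project's actual configuration:

```dockerfile
# Start from stock Ubuntu Linux (version is an assumption for illustration).
FROM ubuntu:12.04

# Install system dependencies the app needs.
RUN apt-get update && apt-get install -y ruby1.9.3 build-essential libxml2-dev

# Install application dependencies.
ADD Gemfile /app/Gemfile
ADD Gemfile.lock /app/Gemfile.lock
WORKDIR /app
RUN gem install bundler && bundle install

# Add the application code itself.
ADD . /app

# Expose the web server port and define the default command.
EXPOSE 8080
CMD ["bundle", "exec", "rackup", "-p", "8080"]
```

Building this with `docker build -t awesomeapp .` produces an image containing the complete environment, from OS packages up through application code.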
Packaging the code
Many web developers are accustomed to deploying to a long-running server using tools like Capistrano. You push new code, restart the web server, and ta-da, a new version is deployed. But what happens if you need to change a system library, such as downgrading a programming language version (e.g. to Ruby 2.0.0) or upgrading a dependency (e.g. LibXML or ImageMagick)? Somehow you would have to coordinate infrastructure changes with application changes; as applications grow, this can become very messy.
Containers solve this conundrum of server operations by defining all dependencies alongside the application code. Thanks to the build caching, changing application code does not necessarily require rebuilding everything in the container. Notice once again in this example Dockerfile that the application code is added late in the build sequence.
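The caching works because Docker stores each Dockerfile instruction as an image layer and only re-executes an instruction when it, or an earlier layer, changes. A sketch of an ordering that exploits this (filenames and packages are illustrative):

```dockerfile
FROM ubuntu:12.04

# Rarely-changing system dependencies come first, so their layers stay cached.
RUN apt-get update && apt-get install -y ruby1.9.3 libxml2-dev

# Dependency manifests change less often than code, so install gems next.
ADD Gemfile /app/Gemfile
ADD Gemfile.lock /app/Gemfile.lock
WORKDIR /app
RUN bundle install

# Application code goes last: editing it invalidates only this final layer,
# so a rebuild after a code change skips everything above.
ADD . /app
```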
Configuration that remains constant across environments (dev, test, and production) can be kept with the application code. We include the standard etc/ config files right in the repo, and those files are added to the container via the Dockerfile.
Configuration that is unique to each environment, or secret, is passed as environment variables when the `docker run` command starts the container. An example run command passing environment variables:

docker run -e BASE_URL="http://awesomeapp.com" -e SECRET_KEY=meowmeow image_tag_or_hash
Configuration of what command to run inside the container is set using the Dockerfile's CMD and/or ENTRYPOINT. If a container needs to run more than one process, then start a process manager such as supervisor to run and manage all the processes.
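For example, a container that runs both a web server and a background worker might be set up like this sketch, where the config filename and paths are hypothetical:

```dockerfile
# Install supervisor alongside the app's other system dependencies.
RUN apt-get update && apt-get install -y supervisor

# A supervisord config kept in the repo defines one [program:x] section
# per process, e.g. the web server and a background worker.
ADD etc/supervisord.conf /etc/supervisor/conf.d/app.conf

# Run supervisord in the foreground as the container's single top-level
# process; it starts and supervises everything else.
CMD ["supervisord", "-n"]
```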
Many container hosting options exist today. The official Docker docs list compatible OSs and vendors. Docker-specific workflows are supported by its creator dotCloud, StackDock, and other emerging companies.
Depending upon which host you select for your containers, you get varying amounts of automation around Docker image and runtime management.
Our project's deployment testbed is the Digital Ocean Docker application. In this case, all we get is a bare-bones Ubuntu Linux host running the Docker daemon. So how can we upload our images to the host?
- deal with images as files: `docker save` an image, upload it to the host, and `docker load` it there
- use the public registry at docker.io: the default target of `docker push` and `docker pull`
- use a private registry: host your own registry for private pushes and pulls
For the Substantial Dash project, we opted to deal with images as files. This is the simplest private approach for experimenting with Docker.
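The file-based workflow can be sketched as a short session; the hostnames and image name here are illustrative, not the project's actual values:

```
# On the build machine: snapshot the image to a tarball.
$ docker save awesomeapp > awesomeapp.tar

# Upload the tarball to the Docker host.
$ scp awesomeapp.tar root@docker-host.example.com:~/

# On the Docker host: load the image and run a container from it.
$ ssh root@docker-host.example.com
$ docker load < awesomeapp.tar
$ docker run -d -p 80:8080 awesomeapp
```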
Distributing server applications
Installation of most open-source server applications requires a mixture of technical knowledge and pain. Docker is poised to become the de facto standard for self-contained application packages that will run practically anywhere.
An example of packaging modern server software for simplified distribution is Discourse, an open-source discussion forum (used by BoingBoing BBS, Ember.js forums, and Mozilla Community).
Discourse's Docker project coordinates both a single-container, all-in-one, simple deployment and a multi-container, service-oriented, high-availability deployment.
The prospects for web application deployment continue to evolve. Docker currently lies between maturity of its core functionality (stable, secure containers) and emergence of high-level utility (like "drag-and-drop" install and scaling of web services). This experiment is just the beginning.