Production-Ready Environment with Docker

In previous posts, we learned what Docker is and how it works. Today, we will take a look at a production-ready environment with Docker. The workflow described here is adapted from this blog post.

Components

Engine

Docker Engine can run on Windows, Linux and OS X, but Docker itself is Linux-based, so on Windows and OS X it runs via Boot2Docker, a VirtualBox image.

Note the difference between a native environment on a Linux kernel and a virtualized environment with VirtualBox:

(Figures Docker_Native and Docker_OSX: Docker running directly on the Linux kernel vs. inside a VirtualBox VM on OS X.)

Performance: Boot2Docker is a lightweight Linux virtual machine made specifically to run the Docker daemon on Mac OS X. The VirtualBox VM runs completely from RAM, is a small ~24MB download, and boots in approximately 5 seconds.
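
On OS X, the typical flow with Boot2Docker looks roughly like the following sketch. Treat it as illustrative: the exact commands and output depend on the Boot2Docker release.

```sh
# Minimal sketch: bringing up the Boot2Docker VM on OS X.
boot2docker init                  # create the VirtualBox VM
boot2docker up                    # boot the VM that hosts the Docker daemon
eval "$(boot2docker shellinit)"   # export DOCKER_HOST and TLS variables for the client

# The local docker client now talks to the daemon inside the VM.
docker run hello-world
```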

Kitematic

Kitematic is an open source project built to simplify and streamline using Docker on a Mac or Windows (coming soon) PC. Kitematic automates the Docker installation and setup process and provides an intuitive graphical user interface (GUI) for running Docker containers. Kitematic integrates with Docker Machine to provision a VirtualBox VM and install the Docker Engine locally on your machine.

Once installed, the Kitematic GUI launches and from the home screen you will be presented with curated images that you can run instantly. You can search for any public images on Docker Hub from Kitematic just by typing in the search bar. You can use the GUI to create, run and manage your containers just by clicking on buttons. Kitematic allows you to switch back and forth between the Docker CLI and the GUI. Kitematic also automates advanced features such as managing ports and configuring volumes. You can use Kitematic to change environment variables, stream logs, and open a terminal into your Docker container with a single click, all from the GUI.

Machine

Machine lets you create Docker hosts on your computer, on cloud providers, and inside your own data center. It creates servers, installs Docker on them, then configures the Docker client to talk to them.

Once your Docker hosts have been created, Machine provides a number of commands for managing them (see the sketch after the list):

  • Starting, stopping, restarting
  • Upgrading Docker
  • Configuring the Docker client to talk to your host
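
For instance, creating a local host with the VirtualBox driver and pointing the client at it might look like this. The host name dev is just an example choice.

```sh
# Minimal sketch: provisioning and managing a Docker host with Machine.
docker-machine create --driver virtualbox dev   # create a VM and install Docker on it
eval "$(docker-machine env dev)"                # configure the docker client to talk to it

docker-machine ls            # list managed hosts
docker-machine stop dev      # stop the host
docker-machine start dev     # start it again
docker-machine upgrade dev   # upgrade Docker on the host
```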

Compose

Compose is a tool for defining and running multi-container applications with Docker. With Compose, you define a multi-container application in a single file, then spin your application up in a single command which does everything that needs to be done to get it running.

Using Compose is basically a three-step process.

  1. Define your app’s environment with a Dockerfile so it can be reproduced anywhere.
  2. Define the services that make up your app in docker-compose.yml so they can be run together in an isolated environment.
  3. Lastly, run docker-compose up and Compose will start and run your entire app, as in the sketch below.
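
As a rough illustration (the service names, images and ports here are hypothetical, not taken from the original post), a two-service web + database app could be defined and started like this:

```sh
# Minimal sketch: define a hypothetical web + db application and bring it up.
cat > docker-compose.yml <<'EOF'
web:
  build: .            # built from the Dockerfile in the current directory
  ports:
    - "8000:80"
  links:
    - db
db:
  image: mysql:5.6
  environment:
    MYSQL_ROOT_PASSWORD: example
EOF

docker-compose up -d    # build (if needed), create and start all containers
docker-compose logs     # show the combined output of the services
docker-compose stop     # stop everything defined in the file
```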

Swarm

Docker Swarm is native clustering for Docker. It allows you to create and access a pool of Docker hosts using the full suite of Docker tools. Because Docker Swarm serves the standard Docker API, any tool that already communicates with a Docker daemon can use Swarm to scale transparently to multiple hosts. Supported tools include, but are not limited to, the following:

  • Dokku
  • Docker Compose
  • Krane
  • Jenkins
  • Docker Client

Like other Docker projects, Docker Swarm follows the “swap, plug, and play” principle. As initial development settles, an API will be exposed to enable pluggable backends. This means you can swap out the scheduling backend Docker Swarm uses out of the box for one you prefer. Swarm’s swappable design provides a smooth out-of-the-box experience for most use cases, while allowing large-scale production deployments to swap in more powerful backends, like Mesos.
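
As a sketch of how a small cluster can be assembled with the classic Swarm described here (node names and the token-based discovery backend are illustrative choices, not from the original post), Swarm integrates directly with Docker Machine:

```sh
# Minimal sketch: a small Swarm cluster provisioned with Docker Machine.
TOKEN=$(docker run --rm swarm create)    # generate a cluster token on any Docker host

docker-machine create -d virtualbox \
  --swarm --swarm-master \
  --swarm-discovery "token://$TOKEN" swarm-master

docker-machine create -d virtualbox \
  --swarm \
  --swarm-discovery "token://$TOKEN" swarm-node-01

eval "$(docker-machine env --swarm swarm-master)"  # point the client at the whole cluster
docker info                                        # lists the nodes in the swarm
```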

Production-Ready Workflow

We want to use Docker to run all of our applications, including several websites built with Drupal, PHP and NodeJS, and a MySQL server. Take a look at a production-ready workflow involving Docker:

  1. All developers use Docker to create the application.
  2. The Gitlab instance has webhooks configured, so when a new push is detected, it orders Jenkins to run a task.
  3. Each Jenkins task follows the same layout: clone the latest code from Gitlab, run tests, log in to the private Docker registry, build a new image with the latest code, and push the image to the registry (a minimal registry setup is sketched after this list).
  4. Maestro-NG orchestration software will deploy the new version of the image.
  5. Load balancer will detect the change and reload the new configuration.
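
The workflow assumes a private Docker registry is already available; the original post does not describe how it is set up. As an assumption on my part, a minimal self-hosted registry can be run as a container (no TLS or authentication shown; a production registry needs both, and remote hosts may require the daemon's --insecure-registry flag otherwise):

```sh
# Minimal sketch: run a private registry locally as a container (illustrative only).
docker run -d -p 5000:5000 --restart=always --name registry registry:2

# Tag an application image against the registry and push it
docker tag myapp:latest localhost:5000/myapp:latest
docker push localhost:5000/myapp:latest
```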

Base Image

We create base images which include everything needed to run an application, except the application itself. Every time we change one of the base images, we run a Jenkins task that pulls the new image and then triggers subsequent tasks to rebuild all the images that depend on the modified base image.
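
For example (the image names, registry host and paths below are hypothetical, not from the original post), an application image would only add the application code on top of a shared base image:

```sh
# Minimal sketch: an application Dockerfile that builds on a shared base image.
cat > Dockerfile <<'EOF'
# "registry.example.com/base-php" is a hypothetical base image name.
FROM registry.example.com/base-php:latest

# The base image already contains the runtime, web server, extensions, etc.
# The application image only adds the code.
COPY application/ /var/www/application/
EOF

docker build -t registry.example.com/my-drupal-site:latest .
```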

After creating the base images, we needed to define a standard structure for all applications. Every application is organized using the following structure:

/application
/logs
/files
Dockerfile
config.yml.example
docker-compose.yml.example
Makefile
  • The /application directory is the root folder of the application.
  • The /logs and /files directories are there for development purposes, just so the application has somewhere to write logs and files. Both directories are ignored by Git and completely excluded in production.
  • Dockerfile is the file Jenkins uses to build the image; a developer will almost never have to interact with this file.
  • config.yml.example and docker-compose.yml.example are the files the developer uses to start the application. Neither of them is used in production, and when a developer clones a project, he/she needs to copy the example files and fill them in with his/her own values.
  • Makefile is the last piece of the puzzle; we use it so we can have a standard set of commands for all applications and, at the same time, hide all kinds of complexity from the developer (a sketch of the resulting developer workflow follows this list).
  • Note that all applications share the structure above, which makes them easier to maintain and build.
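
As a sketch of what a developer does after cloning a project (the make targets shown are hypothetical examples of what such a Makefile might expose; the original post does not list them):

```sh
# Minimal sketch: a developer setting up a freshly cloned application.
cp config.yml.example config.yml                    # fill in local values
cp docker-compose.yml.example docker-compose.yml    # fill in local values

# Hypothetical Makefile targets wrapping docker-compose and other details:
make up       # e.g. docker-compose up -d
make logs     # e.g. docker-compose logs
make test     # e.g. run the test suite inside the container
```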

Workflow

  1. A push is detected by Gitlab.
  2. Gitlab triggers a web hook and orders Jenkins to start a new Job.
  3. Jenkins clones the latest version of the project.
  4. Jenkins runs tests.
  5. If tests are passing, then Jenkins starts to build a new docker image.
  6. If the image finishes building, Jenkins pushes the image to our private registry.
  7. Jenkins connects to the Maestro-NG server using SSH and runs the command maestro pull ; maestro restart, as sketched below.
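
Put together, the body of such a Jenkins job is essentially a shell script along these lines (repository URL, image name, registry host and SSH target are hypothetical placeholders):

```sh
#!/bin/sh
# Minimal sketch of a Jenkins task body (all host and image names are placeholders).
set -e

git clone git@gitlab.example.com:group/myapp.git app   # clone the latest version
cd app
make test                                               # run the tests

# Build the image and push it to the private registry
docker login -u "$REGISTRY_USER" -p "$REGISTRY_PASS" registry.example.com
docker build -t registry.example.com/myapp:latest .
docker push registry.example.com/myapp:latest

# Deploy: Maestro-NG pulls the new image and restarts the containers
ssh deploy@maestro.example.com "maestro pull ; maestro restart"
```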

As you can see, with the above workflow we must ensure that all environments are the same for developers, testers and everyone else. That is why everybody needs to use Docker to create the application. Then all operations like commit, push, test, build and deploy are performed automatically by a CI system like Jenkins, and the application works seamlessly in any environment, from a local workstation to a cloud platform like AWS or Azure.