All posts in Docker

Monitoring your Docker hosts, containers and containerised services (Part 1)

Posted by / Apr 29, 2017 / Categories: Docker

Most of the projects I work on have strict rules regarding data privacy: using a SaaS outside the EU is not possible, and even if it were, I would not opt for an external service, paid or not, when there is a viable self-hosted open-source solution. So I've been looking for an open-source, self-hosted monitoring solution that can provide metrics storage, visualisation and alerting for physical servers, virtual machines, containers and the services running inside containers. After trying out several alternatives I settled on Prometheus, primarily due to its support for multi-dimensional metrics and a query language that's easy to grasp.

Monitoring your Docker hosts, containers and containerised services (Part 2)

Posted by / Apr 29, 2017 / Categories: Docker

In Part 1 I covered setting up and running dockprom, a containerised monitoring solution for Docker and its host. Now I will configure the alerts it can send you should anything go awry.


Define alerts

In the repository are three alerts configuration files:

You can modify the alert rules and reload them by making an HTTP POST request to Prometheus:
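For example, a sketch assuming dockprom's default mapping of Prometheus to host port 9090 (Prometheus 1.x enables the /-/reload endpoint by default; Prometheus 2.x additionally requires the --web.enable-lifecycle flag):

```shell
# Build and print the reload call; run it against your own host.
# Assumption: Prometheus is published on host port 9090 (dockprom's default).
PROM_HOST="localhost:9090"
echo "curl -X POST http://${PROM_HOST}/-/reload"
```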

Monitoring services alerts

Trigger an alert if any of the monitoring targets (node-exporter and cAdvisor) are down for more than 30 seconds:

ALERT monitor_service_down
  IF up == 0
  FOR 30s
  LABELS { severity = "critical" }
  ANNOTATIONS {
      summary = "Monitor service non-operational",
      description = "{{ $labels.instance }} service is down."
  }

Docker Host alerts

Trigger an alert if the Docker host CPU is under high load for more than 30 seconds:

ALERT high_cpu_load
  IF node_load1 > 1.5
  FOR 30s
  LABELS { severity = "warning" }
  ANNOTATIONS {
      summary = "Server under high load",
      description = "Docker host is under high load, the avg load 1m is at {{ $value}}. Reported by instance {{ $labels.instance }} of job {{ $labels.job }}."
  }

Modify the load threshold based on your CPU cores.
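For example, a rule of thumb (my own, not from the dockprom documentation) is to alert when the 1-minute load average exceeds 1.5 times the core count:

```shell
# Suggest a node_load1 threshold of 1.5x the number of CPU cores on this host.
cores=$(nproc)
threshold=$(awk -v c="$cores" 'BEGIN { printf "%.1f", c * 1.5 }')
echo "suggested node_load1 threshold: $threshold"
```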

Trigger an alert if the Docker host memory is almost full:

ALERT high_memory_load
  IF (sum(node_memory_MemTotal) - sum(node_memory_MemFree + node_memory_Buffers + node_memory_Cached)) / sum(node_memory_MemTotal) * 100 > 85
  FOR 30s
  LABELS { severity = "warning" }
  ANNOTATIONS {
      summary = "Server memory is almost full",
      description = "Docker host memory usage is {{ humanize $value}}%. Reported by instance {{ $labels.instance }} of job {{ $labels.job }}."
  }
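The expression is the classic used = total − (free + buffers + cached) calculation, written over node-exporter's metrics. As a sanity check, the same figure can be computed directly from /proc/meminfo on a Linux host:

```shell
# Compute memory usage % exactly as the alert expression does:
# (MemTotal - MemFree - Buffers - Cached) / MemTotal * 100
used_pct=$(awk '/^MemTotal:/ {t=$2} /^MemFree:/ {f=$2} /^Buffers:/ {b=$2} /^Cached:/ {c=$2}
                END { printf "%.1f", (t - f - b - c) / t * 100 }' /proc/meminfo)
echo "memory usage: ${used_pct}%"
```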

Trigger an alert if the Docker host storage is almost full:

ALERT hight_storage_load
  IF (node_filesystem_size{fstype="aufs"} - node_filesystem_free{fstype="aufs"}) / node_filesystem_size{fstype="aufs"} * 100 > 85
  FOR 30s
  LABELS { severity = "warning" }
  ANNOTATIONS {
      summary = "Server storage is almost full",
      description = "Docker host storage usage is {{ humanize $value}}%. Reported by instance {{ $labels.instance }} of job {{ $labels.job }}."
  }
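The fstype="aufs" selector matches the storage driver on my host; adjust it to whatever your filesystem reports (overlay2, ext4, etc.). Outside Prometheus, df performs essentially the same used-over-size calculation, e.g. for the root filesystem:

```shell
# The same used / size * 100 calculation as the alert, via df (POSIX -P output).
used_pct=$(df -P / | awk 'NR == 2 { printf "%.1f", $3 / $2 * 100 }')
echo "root filesystem usage: ${used_pct}%"
```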

Docker Containers alerts

Trigger an alert if an example Jenkins container is down for more than 30 seconds:

ALERT jenkins_down
  IF absent(container_memory_usage_bytes{name="jenkins"})
  FOR 30s
  LABELS { severity = "critical" }
  ANNOTATIONS {
      summary = "Jenkins down",
      description = "Jenkins container is down for more than 30 seconds."
  }

Trigger an alert if the container is using more than 10% of total CPU cores for more than 30 seconds:

ALERT jenkins_high_cpu
  IF sum(rate(container_cpu_usage_seconds_total{name="jenkins"}[1m])) / count(node_cpu{mode="system"}) * 100 > 10
  FOR 30s
  LABELS { severity = "warning" }
  ANNOTATIONS {
      summary = "Jenkins high CPU usage",
      description = "Jenkins CPU usage is {{ humanize $value}}%."
  }
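To unpack the expression: rate(...[1m]) is the per-second increase of the container's CPU-seconds counter over the window (i.e. how many cores it is using), and dividing by the host's core count scales that to a percentage of the whole machine. A worked example with made-up sample values:

```shell
# Hypothetical counter samples: the container had consumed 12.0 CPU-seconds at
# the start of the 60s window and 18.0 at the end, on a 4-core host.
awk 'BEGIN {
  increase = 18.0 - 12.0            # CPU-seconds consumed in the window
  per_sec  = increase / 60          # what rate() returns: cores in use (0.1)
  cores    = 4
  printf "cpu usage: %.1f%% of %d cores\n", per_sec / cores * 100, cores
}'
```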

Trigger an alert if the container is using more than 1.2 GB of RAM for more than 30 seconds:

ALERT jenkins_high_memory
  IF sum(container_memory_usage_bytes{name="jenkins"}) > 1200000000
  FOR 30s
  LABELS { severity = "warning" }
  ANNOTATIONS {
      summary = "Jenkins high memory usage",
      description = "Jenkins memory consumption is at {{ humanize $value}}."
  }
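For reference, {{ humanize $value }} renders the raw byte count with an SI prefix (base 1000). A sketch of the equivalent conversion:

```shell
# Convert 1200000000 bytes the way Prometheus' humanize does (SI, base 1000).
awk 'BEGIN {
  v = 1200000000
  n = split("B kB MB GB TB", u, " ")
  i = 1
  while (v >= 1000 && i < n) { v /= 1000; i++ }
  printf "%g%s\n", v, u[i]
}'
```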

Set up alerting

The AlertManager service is responsible for handling alerts sent by the Prometheus server. AlertManager can send notifications via email, Pushover, Slack, HipChat or any other system that exposes a webhook interface; a complete list of integrations can be found in the Alertmanager documentation.

You can view and silence notifications by accessing http://<host-ip>:9093.

The notification receivers can be configured in alertmanager/config.yml file.

To receive alerts via email, you need to delete the default Slack channel and add a route and receiver, substituting your own values:

route:
  group_by: [Alertname]
  # Send all notifications to me.
  receiver: email-me

receivers:
- name: email-me
  email_configs:
  - to: # your email address
    smarthost: # your email server and port
    auth_username: "emailaccountusername"
    auth_password: "emailaccountpassword"
    require_tls: true # or false

Extending the monitoring system

Dockprom Grafana dashboards can be easily extended to cover more than one Docker host. In order to monitor more hosts, all you need to do is to deploy a node-exporter and a cAdvisor container on each host and point the Prometheus server to scrape those.

You should run a Prometheus stack per data center/zone and use the federation feature to aggregate all metrics in a dedicated Prometheus instance that serves as an overview of your whole infrastructure. This way, if a zone goes down, or the Prometheus instance doing the aggregation goes down, the monitoring in the remaining zones can still be accessed.

You can also make Prometheus highly available by running two identical Prometheus servers in each zone. Having multiple servers pushing alerts to the same Alertmanager will not result in duplicate alerts, since Alertmanager does de-duplication.

Linking Docker Containers

Posted by / Mar 29, 2016 / Categories: Docker

In previous tutorials I have built standalone containers, i.e. everything in one “box”: web server, application and database. One can see how building several containers like this for multiple applications would involve a lot of unnecessary duplication, storage and processing. Docker is an ideal tool to split these services into separate blocks and allow them to interact with each other seamlessly.

Let’s take a very simple example which works straight out of the box – Joomla and MariaDB, using images straight from the public Docker repository Docker Hub. First start the database we want to connect to.

docker run --name joomladb -e MYSQL_ROOT_PASSWORD=my-secret-pw -e MYSQL_DATABASE=joomla -e MYSQL_USER=joomla -e MYSQL_PASSWORD=joomla -d mariadb:latest

This will pull the latest mariadb image from the repository (if you haven’t already got it) and initialise it with the environment variables – root password, and an empty database schema called joomla whose user and password are both joomla.

Now start Joomla and link it to the MariaDB container:

docker run --name some-joomla --link joomladb:mysql -p 8080:80 -d joomla:apache-php7

This pulls an image with Joomla 3.5, Apache and PHP 7, which I highly recommend even though it is quite large by Docker standards (around 500 MB). It is linked to the previously started container and is accessible from your host IP on port 8080. You will need to go through the normal Joomla setup process, except that the database details are entered as per our MariaDB container.

joomla setup

You must remember to commit any changes you make to these containers prior to removing them, otherwise you will have to re-install all over again. As you can see, these are quite lengthy and error-prone commands. Docker has an answer to this problem: docker-compose, a tool for defining and running multi-container Docker applications. The documentation on the website is excellent so I don't propose to cover it here; instead, here is the docker-compose.yml file for this Joomla application:

joomla:
  image: joomla:apache-php7
  links:
    - joomladb:mysql
  ports:
    - "8080:80"

joomladb:
  image: mariadb:latest
  environment:
    - MYSQL_ROOT_PASSWORD=my-secret-pw
    - MYSQL_DATABASE=joomla
    - MYSQL_USER=joomla
    - MYSQL_PASSWORD=joomla

As with all YAML files the formatting/indentation is very important.

The command to run the application is simply docker-compose up in the same directory as the docker-compose.yml file. In this manner you can link as many containers as you desire; e.g. in the above example we could have used separate nginx, php7-fpm and mariadb containers to achieve the same result.


Screenshot showing the linked containers using dockerui, an excellent tool to use locally, or behind a secured proxy on your server, as it has no in-built security. In a later article I will be reviewing dockerui and several other tools that are available to assist with Docker development.

Tip 1:
Don't leave docker-compose files like this on your server, since they contain all your passwords in plain text. In practice I use Ansible to push all my locally stored commands to my BHost containers as playbooks. Again, this will be the subject of a full tutorial to come later.

Data Volumes

From Docker:-

A data volume is a specially-designated directory within one or more containers that bypasses the Union File System. Data volumes provide several useful features for persistent or shared data:

  • Volumes are initialized when a container is created. If the container’s base image contains data at the specified mount point, that existing data is copied into the new volume upon volume initialization. (Note that this does not apply when mounting a host directory.)
  • Data volumes can be shared and reused among containers.
  • Changes to a data volume are made directly.
  • Changes to a data volume will not be included when you update an image.
  • Data volumes persist even if the container itself is deleted.

Data volumes are designed to persist data, independent of the container’s life cycle. Docker therefore never automatically deletes volumes when you remove a container, nor will it “garbage collect” volumes that are no longer referenced by a container.

In practice I have found them very useful for holding static read-only files, e.g. Apache or nginx configuration directives, but much more complex when dealing with read-write files. Here is why:-

Standard system UIDs for nginx and apache used by Alpine Linux, on which future official Docker images are to be based.


Standard system UIDs for nginx and apache used by Debian and Ubuntu. Most current official Docker images are based on Ubuntu.

As you can see, they are quite different. Docker uses the UIDs and GIDs to define who can read and/or write what. So if you put your base Joomla files on your host file system and set the read/write permissions on that external volume from within your container, e.g.

chown -R apache:apache /var/www/html

within an Alpine-based container, your host operating system (if different) does not honour those settings, as the UID/GID are different. There are ways to work around this, but they are very much operating-system specific, so you lose portability.
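The underlying issue is easy to see: ownership is stored on disk as numbers, and the user/group names are just a lookup against the local /etc/passwd and /etc/group (a quick sketch; stat -c is GNU coreutils):

```shell
# Ownership is recorded numerically; the names shown are a local lookup,
# so the same file appears to belong to different users on different distros.
touch volume-demo
info=$(stat -c 'uid=%u (%U) gid=%g (%G)' volume-demo)
echo "volume-demo stored as: $info"
rm volume-demo
```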

A simple example highlighting this problem is Eclipse Che, a developer workspace server and cloud IDE. It has a whole section of gotchas for different operating systems to work around being able to write workspaces to the host system from within Docker. Essentially you have to run it with a very specific UID (in Linux: 1000, i.e. the first user created in Debian/Ubuntu), otherwise it will not work – see the “Avoid These Common Setup Gotchas” section of its documentation.

Having said that, I find data volumes very useful during development, to experiment with different configurations rather than having to build an image each time I tweak a particular setting. For production, however, these files are built into the final image using the Docker build COPY or ADD instructions. My current development environment looks like this:
My development environment
More on this in another tutorial.

I hope you have found this tutorial useful and will now be able to build more complex solutions using Docker.

Making Docker Containers Web Accessible

Posted by / Mar 11, 2016 / Categories: Docker

In previous tutorials we have accessed Docker applications by binding the container's exposed port 80 to a host port. If there is already a web application running on the host using port 80, then the container's port needs to be bound to a spare host port, say 81 or upwards. Whilst this works in practice, it is not very elegant.

In this tutorial I will outline two methods to remove the port number from the container's web-facing URL.

Apache Reverse Proxying – Ideal for temporary access/demos

As an example I shall be using the Sahana container I set up in a previous tutorial. This was run using

docker run -i -t -d -p 81:80 --name sahana <myname>/sahana

So the site is accessible at http://<host-ip>:81. To change the URL to read http://<host-ip>/sahana instead, I will use the Apache Location directive. (The Directory directive works only for filesystem objects, e.g. /var/www/mypage or C:\www\mypage, while the Location directive works only for URLs – the part after your site domain name, e.g. /sahana.)

First the relevant Apache modules need to be enabled and Apache restarted

a2enmod proxy proxy_http proxy_html
service apache2 restart

Then add a simple text file, sahana.conf, that we can switch on and off as required with the a2enconf and a2disconf commands. This is much simpler than modifying your default Apache host configuration.

# IMPORTANT! Off prevents Apache acting as an open proxy
ProxyRequests Off
# Fix links in inline scripts, stylesheets and scripting events
ProxyHTMLExtended On
# The extension to our base URL
<Location "/sahana">
   ProxyPass http://localhost:81/
   ProxyPassReverse http://localhost:81/
   Order deny,allow
   Allow from all
</Location>

This file should be placed in the /etc/apache2/conf-available directory; then run a2enconf sahana and service apache2 reload.

The site should now be available at http://<host-ip>/sahana.

Using a subdomain – For more permanent configuration

Here we create a subdomain that is recognised through the global DNS system by adding an A (Address) record to our DNS server that points the subdomain to our host's IP address (note: no port numbers are used here). This method varies according to whether you use

  • your Domain Registrar as your Domain Name Server
  • Self-host using a GUI such as webmin
  • Self-host manually editing configuration files

and is beyond the scope of this tutorial.

Now we create a new virtual host for Apache for this new subdomain using the same proxying configuration as above but with a different virtual host name. Create a new text file in /etc/apache2/sites-available called sahana-web.conf

<VirtualHost *:80>
	# ServerName is the sub-domain that was created with the DNS A record
	ServerName <your-subdomain>
	ServerAdmin webmaster@localhost
	# Now set up proxying between ports 80 and 81 as in the previous steps
	ProxyRequests Off
	ProxyHTMLExtended On
	<Location "/">
		ProxyPass http://localhost:81/
		ProxyPassReverse http://localhost:81/
		Order deny,allow
		Allow from all
	</Location>
</VirtualHost>

Now run a2ensite sahana-web and service apache2 restart.

Using either of these methods you can now have multiple containers running completely different stacks, e.g. Ruby, Python, Node.js, all accessible through your host Apache web server.

Build your own Docker image

Posted by / Mar 2, 2016 / Categories: Docker

One of the more important features of Docker is image content management, or image layering. Docker's layered approach to images (or content) provides a very powerful abstraction for building up application containers. An image provides the foundation layer for a container. New tools, applications, content, patches, etc. form additional layers on the foundation. Containers are workable instances of these combined entities, which can then be bundled into their own images.

Docker allows you to build containers using a Dockerfile. The Dockerfile describes a base image for the build using the FROM instruction. FROM implicitly uses an image registry from which the base image is pulled. This can be Docker Hub or some other (perhaps internal) registry.

The additional layers of a Docker container are created with directives within the Dockerfile. The RUN directive is used to run commands in the running image. Extra packages can be installed using the RUN instruction and the Linux distribution's package installation tool. For Fedora and Red Hat Enterprise Linux this tool is yum. Scripts and other content can be added to the layer by using the ADD instruction, from local directories or a URL.

Once you’ve added the required additional layers to your base image to make your specific application, you can create an image and add it to a registry for re-use.

These three instructions are the basics for building containers using the Dockerfile. A simple example:

FROM fedora
RUN yum install -y gcc
ADD ./myprogramfiles.tar /tmp  
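The tarball referenced by the ADD instruction is created on the host before the build; a minimal sketch (file and directory names are illustrative):

```shell
# Build a tarball that ADD will unpack into /tmp inside the image.
mkdir -p myprogram
printf 'int main(void) { return 0; }\n' > myprogram/main.c
tar -cf myprogramfiles.tar myprogram
tar -tf myprogramfiles.tar   # verify the contents
rm -r myprogram              # tidy up; keep only the tarball
```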

Two Approaches to Image Building

There are two approaches to building Docker images.

Consider the following example: an administrator would like to deploy a new simple web site using Docker container technology.

The administrator decides that the image needs three components:

  • Debian base image
  • Apache Web server
  • Web site content

The administrator can build the image in one of the two following ways:

  1. Interactively, by launching a BASH shell under Debian to apt-get install Apache and its dependencies, and then save the image
  2. Create a Dockerfile that builds the image with the web site included

The first approach involves the administrator using the Docker CLI to instantiate the base image, install the Apache web server, and then create a reusable image for later use with the web site content. In this scenario, the base Debian + Apache image can be used as a base for any project that requires those tools.

The second approach involves building a Dockerfile that uses the base Debian image, installs the needed Apache packages, and then adds the necessary content. This ensures that the entire web site is complete in one build. The image created by this build will only serve a single web site and content changes would require a rebuild.

Interactively Building a Debian Container

There is an official image called debian (the latest Debian version) in the public Docker registry. For more information on this image and the options available, check the repository page.

To run a container with an interactive shell, run the following Docker command on the BHost VPS:

docker run -it debian bash

Then, from within the Docker shell environment, start building your customised image:

apt-get update
apt-get upgrade
apt-get install apache2 php5
apt-get clean

We now save the container as a base image in our local repository for future use, using docker commit [containername] [userspace]/debian-php.

Check that you now have the two Debian images with docker images. Now we can use our own image as the basis of a website. Here is a Dockerfile to do this:

## Use the image we just created
FROM [userspace]/debian-php
MAINTAINER [yourname]
# Add the tar file of the web site 
ADD website_content.tar /tmp/

# Docker extracts the tar automatically, so move the files to the web directory
RUN mv /tmp/mysite/* /var/www/html && chown -R www-data:www-data /var/www/html

COPY httpd-foreground /usr/local/bin/
CMD ["httpd-foreground"]

You can use this simple Dockerfile as a template for building other web sites. This and the other two files required to build this tutorial can be downloaded from this GitHub directory

The Docker build context passed to the daemon requires both the Dockerfile and the content for the site. The path for this build is ., but in practice you should create a separate build context (directory) for each container.

docker build --rm -t mysite .
docker run -d -p 80:80 mysite

You should now be able to visit your site at your host's IP address.


Using a Dockerfile to Build a Debian Container

The administrator may decide that building interactively is tedious and error-prone. Instead the administrator could create a Dockerfile that layers on the Apache Web server and the web site content in one build.

A good practice is to make a sub-directory with a related name and create a Dockerfile in that directory. E.g. a directory called mongo may contain a Dockerfile for a MongoDB image, or a directory called httpd may contain a Dockerfile for an Apache web server. Copy or create all other content that you wish to add to the image into the new directory. Keep in mind that the ADD directive context is relative to this new directory.

mkdir httpd
cp mysite.tar httpd/

Create the Dockerfile in the httpd directory. This Dockerfile will use the same debian base image as the interactive example:

FROM debian
MAINTAINER A D Ministrator email:

# Update the image with the latest packages (recommended)
RUN apt-get update && apt-get upgrade -y && apt-get clean

# Install Apache Web Server
RUN apt-get install -y apache2 && apt-get clean

# Add the tar file of the web site 
ADD mysite.tar /tmp/

# Docker extracts the tar automatically, so move the files to the web directory
RUN mv /tmp/mysite/* /var/www/html

COPY httpd-foreground /usr/local/bin/
CMD ["httpd-foreground"]

Build this Dockerfile from the new httpd directory and run it:

docker build --rm -t newsite httpd/
docker run -d -P newsite

The container build process builds a series of temporary image layers based on the directives in the Dockerfile. These temporary layers are cached so if you make modifications to the content tarball, it won’t completely rebuild and update the Debian image. Since each directive is a new layer, you could reduce the number of layers by combining the RUN apt-get directives into a single RUN directive:

RUN apt-get update && apt-get upgrade -y && apt-get clean

Planning your layers will determine how many layers need to be recreated on each build of the container.

Which Approach is Right?

The approach to building images depends on why you are building the image.

Prototyping and Troubleshooting

If you are prototyping and troubleshooting, then you probably want an interactive, *inside the container* approach. Using this approach you can keep notes on the history of commands that made sense, and on what external files may be missing or need changes. These can then be ADDed to the Dockerfile.

Complete Satisfactory Single Build

If you are satisfied with a specific image that has been built using the interactive approach and you believe it might be reused elsewhere, then it is recommended to use the single Dockerfile approach that builds it all in one build.

Now that you know how to build and experiment with your own images, take a look at some examples on Docker Hub and GitHub to see just what can be accomplished within a Docker container.


Each RUN command in effect opens a new bash shell in your image at root level, i.e. /, so if you want to issue consecutive commands in the same directory, either:

  • specify the directory explicitly in each RUN command,


  • chain commands using the operating system's method, e.g. && for Debian, with a \ character to indicate line continuation, or


  • specify a WORKDIR for the remaining commands
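A sketch of the three options (paths are illustrative):

```dockerfile
# 1. Specify the directory explicitly in each command
RUN cp /tmp/index.html /var/www/html/
RUN chown www-data:www-data /var/www/html/index.html

# 2. Chain the commands with && and \ so they run in one shell
RUN cd /var/www/html && \
    cp /tmp/index.html . && \
    chown www-data:www-data index.html

# 3. Set WORKDIR once for all remaining directives
WORKDIR /var/www/html
RUN cp /tmp/index.html . && chown www-data:www-data index.html
```

Option 2 and 3 also produce fewer layers, which ties in with the caching discussion above.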

Install Docker and run a Container

Posted by / Feb 26, 2016 / Categories: Docker

Install Docker and run a Container

Provision a KVM Virtual Private Server on BHost with a 64-bit operating system of your choice. For the purpose of my tutorials I shall be using Debian 8. Most of the commands I will be demonstrating are from Docker itself and independent of the operating system; however, you will have to adapt OS shell commands such as apt-get to suit your own distribution, e.g. yum in Fedora.

As with all new VPSs, log in via SSH and bring the system up to date:

apt-get update && apt-get -y upgrade

Now add the Docker repository to your package manager. The Docker website has OS-specific instructions on how to do this. If you want to install it on your local Windows or Mac machine to experiment with, their guide is excellent.

apt-get install apt-transport-https ca-certificates
apt-key adv --keyserver hkp:// --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
touch /etc/apt/sources.list.d/docker.list
echo "deb debian-jessie main" >> /etc/apt/sources.list.d/docker.list
apt-get update
apt-get install docker-engine
service docker start
docker run hello-world

You should now see Docker's welcome message.

Docker Hello World

This is Docker's self-explanatory introduction, which we will no longer need, so let's remove it:

# This script clears the terminal, and shows different ways of listing docker containers and images,
# then removes hello-word container and image
clear # clear terminal
docker ps # list active containers
docker ps -a # list all containers
docker ps -aq # list all containers by their identifiers
docker rm $(docker ps -aq) # nested docker command to remove/delete all non-active containers
docker images # list all images
docker rmi hello-world # remove image hello-world

Now we are going to download a completely self-contained image from Docker Hub. You don't have to create an account just yet, as all the repositories we are going to use are public.

docker pull turnkeylinux/sahana-eden-14.0

Sahana Eden is an Emergency Development Environment platform for rapid deployment of humanitarian response management. Its rich feature set can be rapidly customized to adapt to existing processes and integrate with existing systems to provide effective solutions for critical humanitarian needs management either prior to or during a crisis.

This image includes all the standard features in TurnKey Core, and on top of that:

  • Sahana Eden configurations:
    • Installed from upstream source code to /var/www/sahana-eden
    • Serve web2py applications with WSGI on Apache.
    • Force admin console to be served via SSL.
  • SSL support out of the box.
  • Postfix MTA (bound to localhost) to allow sending of email (e.g., password recovery).
  • Webmin modules for configuring Apache2, MySQL and Postfix.

The image is quite large, so it will take a few minutes to download and extract itself; then we are ready to go.

To run this image

docker run -i -t -d --name sahana turnkeylinux/sahana-eden-14.0

Now check that it is running. Note that, as with all turnkeylinux images, it will take a while to initialise itself, as it updates the base operating system and sets a random root password on first boot, then needs user input for full password control.

Sahana running

To monitor progress of this first boot

docker logs sahana

When it's completed:

docker inspect --format='{{.NetworkSettings.IPAddress}}' sahana    # displays container IP address
docker logs sahana | grep "Random initial root password"  # displays first boot password

Now SSH into your container using the credentials above.

First boot

Enter all your passwords.

Turnkey setup complete

Select Quit, then, whilst still in the container's shell, remove all cached downloads from the system update. This will reduce the size of your final image.

apt-get clean

Check the container is working as expected

curl http://<your container ip>

Your screen will show the HTML of the site's front page.

Curl results


Now, we don't want to have to do this each time the container runs, so we create a new image with the changes we have made:

docker commit sahana <myname>/sahana   # where <myname> is your own namespace
docker stop sahana  # stop the container
docker rm sahana   # remove the container
docker rmi turnkeylinux/sahana-eden-14.0     # remove the original image

So far we have a new Docker image that will run inside our VPS with the following ports opened internally

  • 12320/tcp
  • 12321/tcp
  • 22/tcp
  • 443/tcp
  • 80/tcp

In order to enable external access to this image, the next time we run it in a container we bind it to a host port with a docker run switch of the form:

-p <any_unused_host_port>:<container_port>

So if you are not running any other services on your VPS (e.g. Apache), just bind -p 80:80; otherwise attach it to a spare port, e.g. -p 81:80, making the full run command:

docker run -i -t -d -p 81:80 --name sahana <myname>/sahana

Then visit your new site. With Sahana, the first person to register automatically becomes an administrator, so do this immediately, then commit the container again to avoid any mistakes/insecurity in the future.

Sahana front screen

I hope you have enjoyed this quick walk through and have found it useful. In my next article I shall be building my own standalone image.

Introduction to Docker on BHost

Posted by / Feb 19, 2016 / Categories: Docker

By now, we’ve all heard “Docker, Docker, Docker” coming from every available channel. Ok, we get it, Docker’s great. But why would I want to use it on a Virtual Private Server?

Let me first outline why I chose to investigate it and ended up falling in love. I am often called upon to provide web-accessible demonstration sites for clients using one piece of software or another. It's simply not good enough to point them to the software developer's existing demonstration site. You have to be able to prove that you can work with it and satisfy the client's particular requirements, be it simply modifying the site appearance or enhancing the functionality with custom modules. These application requirements can vary wildly, e.g.

  • Sourcefabric Live Blog – Ubuntu 12.04, Python3 and Mongrel2 server
  • Sahana Eden – Debian, Python 2.6 and Apache2 server
  • Redmine – Ruby, Rails

These requirements could involve adding additional repositories to your server environment, modifying your VPS configuration and eventually “breaking” it.

Previously I have used VirtualBox to create these environments, with each machine having its own operating system, web and database servers, to avoid crashing my host machine. This results in duplication across machines and occupies a great deal of disk space. Even when developed locally, there is no guarantee you can replicate that environment on your VPS.

I wasted so much time and effort building these different environments that I decided to take a leap of faith and investigate Docker… What a time saver!

I can use multiple language versions without having to resort to all the hackarounds for the language (Python, Ruby, Java, Node). Want to run a program in Python 3, but only have Python 2 installed on your host? Run it in a Python 3 image. Want to compile a Java program with Java 1.6 instead of the 1.7 that's installed on your machine? Compile it in a Java 1.6 Docker image.

Deployment is easy. If it runs in your container, it will run on your server just the same. Just package up your code and deploy it on a server with the same image or push a new Docker image with your code in it and run that new image.

You can still use your favorite editor/IDE as you normally do. No need for running a VM in VirtualBox and SSHing in and developing from the shell just so you can build/run on a Linux box.

On BHost

A 64 bit 3.10 Linux kernel is the minimum requirement for Docker. Kernels older than 3.10 lack some of the features required to run Docker containers. These older versions are known to have bugs which cause data loss and frequently panic under certain conditions. From my own experience this means provisioning a KVM box on BHost as the OpenVZ containers use an older kernel. To their credit BHost does not charge more for this option plus it gives you the advantage of a full hardware virtualisation platform with loadable kernel modules, giving users the freedom to run a range of Linux distros with any kernel. Each virtual machine has private virtualised hardware including network card, disk and graphics adapter, and with no possibility of overselling, you get guaranteed resources at your disposal any time day or night!

Docker installation is a breeze: simply a matter of adding the Docker software repository to your system package manager and installing as you would any other package. Full instructions for installation on many different flavours of Linux, together with Mac and Windows (if you need it on your local machine), can be found on the Docker website, together with other comprehensive documentation to get you started.

I will be following up this article with a number of tips and tutorials to help you set up a fully containerised environment on BHost.