All posts by author

Monitoring your Docker hosts, containers and containerised services (Part 1)

Posted by / Apr 29, 2017 / Categories: Docker

Most of the projects I work on have strict rules regarding data privacy: using a SaaS hosted outside the EU is not possible, and even if it were, I would not opt for an external service, paid or not, when there is a viable self-hosted open source solution. So I’ve been looking for an open source, self-hosted monitoring solution that can provide metrics storage, visualisation and alerting for physical servers, virtual machines, containers and the services running inside those containers. After trying out several alternatives I settled on Prometheus, primarily due to its support for multi-dimensional metrics and a query language that’s easy to grasp. Continue reading →

Monitoring your Docker hosts, containers and containerised services (Part 2)

Posted by / Apr 29, 2017 / Categories: Docker

In Part 1 I covered setting up and running dockprom, a containerised monitoring solution for Docker and its host. Now I will configure the alerts it can send you should anything go awry.

 

Define alerts

There are three alert configuration files in the repository:

You can modify the alert rules and reload them by making an HTTP POST call to Prometheus:
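For example, assuming Prometheus is published on port 9090 as in the dockprom compose file (newer Prometheus releases also need to be started with the --web.enable-lifecycle flag for this endpoint to be active):

curl -X POST http://<host-ip>:9090/-/reload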

Monitoring services alerts

Trigger an alert if any of the monitoring targets (node-exporter and cAdvisor) are down for more than 30 seconds:

ALERT monitor_service_down
  IF up == 0
  FOR 30s
  LABELS { severity = "critical" }
  ANNOTATIONS {
      summary = "Monitor service non-operational",
      description = "{{ $labels.instance }} service is down.",
  }

Docker Host alerts

Trigger an alert if the Docker host CPU is under high load for more than 30 seconds:

ALERT high_cpu_load
  IF node_load1 > 1.5
  FOR 30s
  LABELS { severity = "warning" }
  ANNOTATIONS {
      summary = "Server under high load",
      description = "Docker host is under high load, the avg load 1m is at {{ $value}}. Reported by instance {{ $labels.instance }} of job {{ $labels.job }}.",
  }

Modify the load threshold to match your number of CPU cores; on a 4-core host, for example, a threshold such as node_load1 > 4 would be more appropriate than 1.5.

Trigger an alert if the Docker host memory is almost full:

ALERT high_memory_load
  IF (sum(node_memory_MemTotal) - sum(node_memory_MemFree + node_memory_Buffers + node_memory_Cached) ) / sum(node_memory_MemTotal) * 100 > 85
  FOR 30s
  LABELS { severity = "warning" }
  ANNOTATIONS {
      summary = "Server memory is almost full",
      description = "Docker host memory usage is {{ humanize $value}}%. Reported by instance {{ $labels.instance }} of job {{ $labels.job }}.",
  }

Trigger an alert if the Docker host storage is almost full:

ALERT high_storage_load
  IF (node_filesystem_size{fstype="aufs"} - node_filesystem_free{fstype="aufs"}) / node_filesystem_size{fstype="aufs"}  * 100 > 85
  FOR 30s
  LABELS { severity = "warning" }
  ANNOTATIONS {
      summary = "Server storage is almost full",
      description = "Docker host storage usage is {{ humanize $value}}%. Reported by instance {{ $labels.instance }} of job {{ $labels.job }}.",
  }

Docker Containers alerts

Trigger an alert if an example Jenkins container is down for more than 30 seconds:

ALERT jenkins_down
  IF absent(container_memory_usage_bytes{name="jenkins"})
  FOR 30s
  LABELS { severity = "critical" }
  ANNOTATIONS {
    summary= "Jenkins down",
    description= "Jenkins container is down for more than 30 seconds."
  }

Trigger an alert if the container is using more than 10% of the host’s total CPU capacity for more than 30 seconds:

 ALERT jenkins_high_cpu
  IF sum(rate(container_cpu_usage_seconds_total{name="jenkins"}[1m])) / count(node_cpu{mode="system"}) * 100 > 10
  FOR 30s
  LABELS { severity = "warning" }
  ANNOTATIONS {
    summary= "Jenkins high CPU usage",
    description= "Jenkins CPU usage is {{ humanize $value}}%."
  }

Trigger an alert if the container is using more than 1.2GB of RAM for more than 30 seconds:

ALERT jenkins_high_memory
  IF sum(container_memory_usage_bytes{name="jenkins"}) > 1200000000
  FOR 30s
  LABELS { severity = "warning" }
  ANNOTATIONS {
      summary = "Jenkins high memory usage",
      description = "Jenkins memory consumption is at {{ humanize $value}}.",
  }

Setup alerting

The AlertManager service is responsible for handling alerts sent by the Prometheus server. AlertManager can send notifications via email, Pushover, Slack, HipChat or any other system that exposes a webhook interface. A complete list of integrations can be found here.

You can view and silence notifications by accessing http://<host-ip>:9093.

The notification receivers can be configured in the alertmanager/config.yml file.

To receive alerts via email you need to delete the default Slack receiver and add a route and an email receiver, substituting your own values:

route:
  group_by: [alertname]
  # Send all notifications to me.
  receiver: email-me

receivers:
- name: email-me
  email_configs:
  - to: alerts@myemailaddress.com
    from: alerts@myemailaddress.com
    smarthost: smtp.myemailaddress.com:587 # or your email server port
    auth_username: "emailaccountusername"
    auth_password: "emailaccountpassword"
    require_tls: true # or false
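After changing alertmanager/config.yml, restart the service so the new receiver is picked up; assuming the container is named alertmanager as in the dockprom compose file:

docker restart alertmanager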

Extending the monitoring system

Dockprom’s Grafana dashboards can be easily extended to cover more than one Docker host. In order to monitor more hosts, all you need to do is deploy a node-exporter and a cAdvisor container on each host and point the Prometheus server at them as scrape targets.
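As a rough sketch (image names and flags below follow the upstream quick-start commands; dockprom’s own compose file shows the full set of mounts and options it uses):

docker run -d --name node-exporter -p 9100:9100 prom/node-exporter

docker run -d --name cadvisor -p 8080:8080 \
  -v /:/rootfs:ro \
  -v /var/run:/var/run:ro \
  -v /sys:/sys:ro \
  -v /var/lib/docker/:/var/lib/docker:ro \
  google/cadvisor

Then add the new host’s 9100 and 8080 endpoints as scrape targets in prometheus/prometheus.yml and reload Prometheus as described above.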

You should run a Prometheus stack per data center/zone and use the federation feature to aggregate all metrics into a dedicated Prometheus instance that serves as an overview of your whole infrastructure. This way, if a zone goes down, or the Prometheus instance doing the aggregation goes down, the monitoring systems in the remaining zones can still be accessed.

You can also make Prometheus highly available by running two identical Prometheus servers in each zone. Having multiple servers pushing alerts to the same Alertmanager will not result in duplicate alerts, since Alertmanager does de-duplication.

Linking Docker Containers

Posted by / Mar 29, 2016 / Categories: Docker

In previous tutorials I have built standalone containers, i.e. everything in one “box”: webserver, application and database. One can see how building several containers like this for multiple applications involves a lot of unnecessary duplication, storage and processing. Docker is an ideal tool to split these services into separate blocks and allow them to interact with each other seamlessly.

Let’s take a very simple example which works straight out of the box – Joomla and MariaDB, using images straight from the public Docker repository Docker Hub. First start the database we want to connect to.

docker run --name joomladb -e MYSQL_ROOT_PASSWORD=my-secret-pw -e MYSQL_DATABASE=joomla -e MYSQL_USER=joomla -e MYSQL_PASSWORD=joomla -d mariadb:latest

This will pull the latest mariadb image from the repository (if you haven’t already got it) and initialise it with the environment variables – root password, and an empty database schema called joomla whose user and password are both joomla.

Now start joomla and link it to the mariadb container

docker run --name some-joomla --link joomladb:mysql -p 8080:80 -d joomla:apache-php7

This pulls an image with Joomla 3.5, Apache and PHP7, which I highly recommend even though by Docker standards it is quite large (around 500MB). It is linked to the previously started container and is accessible from your host IP on port 8080. You will need to go through the normal Joomla setup process, except that the database details are entered as per our mariadb container

joomla setup

You must remember to commit any changes you make to these containers prior to removing them, otherwise you will have to re-install all over again. As you can see these are quite lengthy and error-prone commands. Docker has an answer to this problem: docker-compose, a tool for defining and running multi-container Docker applications. The documentation on the website is excellent so I don’t propose to cover it here; instead I will just show what the docker-compose.yml file for this Joomla application would be


joomla:
  image: joomla:apache-php7
  links:
    - joomladb:mysql
  ports:
    - 8080:80

joomladb:
  image: mariadb:latest
  environment:
    - MYSQL_ROOT_PASSWORD=my-secret-pw
    - MYSQL_DATABASE=joomla
    - MYSQL_USER=joomla
    - MYSQL_PASSWORD=joomla

Note:
As with all YAML files the formatting/indentation is very important.

The command to run the application is simply docker-compose up in the same directory where the docker-compose.yml file is stored. In this manner you can link as many containers as you desire, e.g. in the above example we could have used separate nginx, php7-fpm and mariadb containers to achieve the same results.
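That is, from the directory holding the docker-compose.yml above:

docker-compose up -d    # -d starts the whole stack in the background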

dockerui

Screenshot showing the linked containers in dockerui, an excellent tool to use locally, or behind a secured proxy on your server, as it has no built-in security. In a later article I will review dockerui and several other tools that are available to assist with Docker development.

 Tip 1:
Don’t leave docker-compose files like this on your server since they contain all your passwords in plain text. In practice I use Ansible to push all my locally stored commands to my BHost containers as playbooks. Again, this will be the subject of a full tutorial later.

Data Volumes

From Docker:-

A data volume is a specially-designated directory within one or more containers that bypasses the Union File System. Data volumes provide several useful features for persistent or shared data:

  • Volumes are initialized when a container is created. If the container’s base image contains data at the specified mount point, that existing data is copied into the new volume upon volume initialization. (Note that this does not apply when mounting a host directory.)
  • Data volumes can be shared and reused among containers.
  • Changes to a data volume are made directly.
  • Changes to a data volume will not be included when you update an image.
  • Data volumes persist even if the container itself is deleted.

Data volumes are designed to persist data, independent of the container’s life cycle. Docker therefore never automatically deletes volumes when you remove a container, nor will it “garbage collect” volumes that are no longer referenced by a container.

In practice I have found them very useful for holding static read-only files, e.g. apache or nginx configuration directives, but much more complex when dealing with files that are both read and written. Here is why:

uids2
Standard system UIDs for nginx and apache used by Alpine Linux, which future official Docker images are to be based on.

uids3

Standard system UIDs for nginx and apache used by Debian and Ubuntu. Most current official Docker images are based on Ubuntu.

As you can see they are quite different. Docker uses the UIDs and GIDs to define who can read and/or write what. So if you put your base Joomla files on your host file system and set the read/write permissions on that external volume from within your container, e.g.

chown -R apache:apache /var/www/html

within an Alpine-based container, your host operating system (if different) does not seem to honour those settings, as the UIDs/GIDs are different. There are apparently ways to work around this but they are very much operating-system specific, so you lose portability.
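A quick way to see the mismatch for yourself is to compare the numeric ID an account maps to inside a container with what that ID means on the host (www-data happens to be UID 33 on Debian; the exact numbers depend on the images involved):

docker run --rm debian:latest id www-data    # shows uid=33(www-data) inside a Debian image
getent passwd 33                             # what the host thinks UID 33 is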

A simple example highlighting this problem is Eclipse Che, a developer workspace server and cloud IDE. This has a whole section of gotchas for different OSes to work around being able to write workspaces to the host system from within Docker. Essentially you have to run it with a very specific UID (on Linux: 1000, i.e. the first user created in Debian/Ubuntu), otherwise it will not work – see the “Avoid These Common Setup Gotchas” section here.

Having said that, I find data volumes very useful during development for experimenting with different configurations rather than having to build an image each time I tweak a particular configuration setting. For production, however, these files are built into the final image using the Docker build COPY or ADD commands. My current development environment looks like this
My development environment
More on this in another tutorial.

I hope you have found this tutorial useful and will now be able to build more complex solutions using Docker.

Making Docker Containers Web Accessible

Posted by / Mar 11, 2016 / Categories: Docker

In previous tutorials we have accessed the Docker applications by binding the container’s exposed port 80 to a host port. If there is already a web application running on the host using port 80 then the container’s port needs to be bound to a spare host port, say 81 or upwards. Whilst this works in practice, it is not very elegant.

In this tutorial I will outline two methods to remove the port number from the container’s web-facing URL.

Apache Reverse Proxying – Ideal for temporary access/demos

As an example I shall be using the Sahana container I set up in a previous tutorial. This was run using

docker run -i -t -d -p 81:80 --name sahana <myname>/sahana

So the site is accessible at the URL http://www.example.com:81. To change the URL to read http://www.example.com/sahana I will use the Apache Location directive. The Directory directive works only for filesystem objects (e.g. /var/www/mypage, C:\www\mypage), while the Location directive works only for URLs (the part after your site domain name, e.g. www.example.com/mylocation).

First the relevant Apache modules need to be enabled and Apache restarted

a2enmod proxy proxy_http proxy_html
service apache2 restart

Then add a simple text file sahana.conf that we can switch on and off as we require with the a2enconf and a2disconf commands. This is much simpler than modifying your default Apache host configuration.

  # IMPORTANT! prevents Apache acting as an open proxy
  ProxyRequests off
  # Determines whether to fix links in inline scripts, stylesheets, and scripting events
  ProxyHTMLExtended On
  # The extension to our base URL
  <Location "/sahana">
     Order deny,allow
     Allow from all
     ProxyPass http://www.example.com:81/sahana
     ProxyPassReverse http://www.example.com:81/sahana
  </Location>
  

https://github.com/blackdoginet/BHost-docker-tutorials/blob/master/sahana.conf

This file should be placed in the /etc/apache2/conf-available directory, then run a2enconf sahana and service apache2 reload.
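In full, that amounts to:

cp sahana.conf /etc/apache2/conf-available/
a2enconf sahana
service apache2 reload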

The site should now be available at http://www.example.com/sahana.

Using a subdomain – For more permanent configuration

Here we create a subdomain that is recognised through the global DNS system by adding an A (Address) record to our DNS server that points the subdomain to our host’s IP address (note: no port numbers are used here). This method varies according to whether you use

  • your Domain Registrar as your Domain Name Server
  • Self-host using a GUI such as webmin
  • Self-host manually editing configuration files

and is beyond the scope of this tutorial.

Now we create a new virtual host for Apache for this new subdomain using the same proxying configuration as above but with a different virtual host name. Create a new text file in /etc/apache2/sites-available called sahana-web.conf

<VirtualHost *:80>
  # The sub-domain that was created with the DNS A record
  ServerName sahana.example.com
  ServerAdmin webmaster@localhost
  # Now set up proxying between ports 80 and 81 as in the previous steps
  ProxyRequests off
  ProxyHTMLExtended On
  <Location "/">
     Order deny,allow
     Allow from all
     ProxyPass http://sahana.example.com:81/
     ProxyPassReverse http://sahana.example.com:81/
  </Location>
</VirtualHost>

Now run a2ensite sahana-web and service apache2 restart

Using either of these methods you can now have multiple containers running completely different systems, e.g. Ruby, Python or Node.js, all accessible through your host Apache web server.

Build your own Docker image

Posted by / Mar 2, 2016 / Categories: Docker

One of the more important features of Docker is image content management, or image layering. Docker’s layered approach to images (or content) provides a very powerful abstraction for building up application containers. An image provides the foundation layer for a container. New tools, applications, content, patches, etc. form additional layers on the foundation. Containers are workable instances of these combined entities, which can then be bundled into their own images.

Docker allows you to build containers using a Dockerfile. The Dockerfile describes a base image for the build using the FROM instruction. FROM implicitly uses an image registry from which the base image is pulled. This can be docker.io or some other (perhaps internal) registry.

The additional layers of a Docker container are created with directives within the Dockerfile. The RUN directive is used to run commands in the image being built. Extra packages can be installed using the RUN instruction and the Linux distribution’s package installation tool. For Fedora and Red Hat Enterprise Linux this tool is yum. Scripts and other content can be added to a layer by using the ADD instruction, from local directories or a URL.

Once you’ve added the required additional layers to your base image to make your specific application, you can create an image and add it to a registry for re-use.

These three instructions are the basics for building containers using the Dockerfile. A simple example:

FROM fedora
RUN yum install -y gcc
ADD ./myprogramfiles.tar /tmp  

Two Approaches to Image Building

There are two approaches to building Docker images.

Consider the following example: an administrator would like to deploy a new simple web site using Docker container technology.

The administrator decides that the image needs three components:

  • Debian base image
  • Apache Web server
  • Web site content

The administrator can build the image in one of the two following ways:

  1. Interactively, by launching a BASH shell under Debian to apt-get install Apache and its dependencies, and then save the image
  2. Create a Dockerfile that builds the image with the web site included

The first approach involves the administrator using the Docker CLI to instantiate the base image, install the Apache web server, and then create a reusable image for later use with the web site content. In this scenario, the base Debian + Apache image can be used as a base for any project that requires those tools.

The second approach involves building a Dockerfile that uses the base Debian image, installs the needed Apache packages, and then adds the necessary content. This ensures that the entire web site is complete in one build. The image created by this build will only serve a single web site and content changes would require a rebuild.

Interactively Building a Debian Container

There is an official image called debian (the latest Debian version) in the public Docker registry. For more information on this image and the options available, check the repository page.

To run a container with an interactive shell, run the following Docker command on the BHost VPS:

docker run -it debian bash

Then, from within the Docker shell environment, start building your customised image

apt-get update
apt-get upgrade
apt-get install apache2 php5
apt-get clean
exit

We now save the container as a base image in our local repository for future use using docker commit [container id] [userspace]/debian-php

Check that you now have the two Debian images with docker images. Now we can use our own image as the basis of a website. Here is a Dockerfile to do this

## Use the image we just created
FROM [userspace]/debian-php
MAINTAINER [yourname]
# Add the tar file of the web site 
ADD website_content.tar /tmp/

# Docker automatically extracts the ADDed archive, so move the files to the web directory
RUN mv /tmp/mysite/* /var/www/html && chown -R www-data:www-data /var/www/html

COPY httpd-foreground /usr/local/bin/
EXPOSE 80
CMD ["httpd-foreground"]

You can use this simple Dockerfile as a template for building other web sites. This and the other two files required to build this tutorial can be downloaded from this GitHub directory https://github.com/blackdoginet/BHost-docker-tutorials/tree/master/createcontainer

The Docker build context passed to the daemon requires both the Dockerfile and the content for the site. The path for this build is ., but in practice you should create a separate build context (directory) for each container.

docker build --rm -t mysite .
docker run -d -p 80:80 mysite

You should now be able to visit your site at your host’s IP address

websiteview

Using a Dockerfile to Build a Debian Container

The administrator may decide that building interactively is tedious and error-prone. Instead the administrator could create a Dockerfile that layers on the Apache Web server and the web site content in one build.

A good practice is to make a sub-directory with a related name and create a Dockerfile in that directory. E.g. a directory called mongo may contain a Dockerfile for a MongoDB image, or a directory called httpd may contain a Dockerfile for an Apache web server. Copy or create all other content that you wish to add to the image into the new directory. Keep in mind that the ADD directive context is relative to this new directory.

mkdir httpd
cp mysite.tar httpd/

Create the Dockerfile in the httpd directory. This Dockerfile will use the same base image as the interactive command debian:

FROM debian
MAINTAINER A D Ministator email: admin@mycorp.com

# Update the image with the latest packages (recommended)
RUN apt-get update && apt-get upgrade -y && apt-get clean

# Install Apache Web Server
RUN apt-get install -y apache2 && apt-get clean

# Add the tar file of the web site 
ADD mysite.tar /tmp/

# Docker automatically extracts the ADDed archive, so move the files to the web directory
RUN mv /tmp/mysite/* /var/www/html

COPY httpd-foreground /usr/local/bin/
EXPOSE 80
CMD ["httpd-foreground"]

Build this Dockerfile from the new httpd directory and run it:

docker build --rm -t newsite httpd/
docker run -d -P newsite

The container build process builds a series of temporary image layers based on the directives in the Dockerfile. These temporary layers are cached, so if you make modifications to the content tarball, the build won’t completely rebuild and update the Debian layers. Since each directive is a new layer, you could reduce the number of layers by combining the RUN apt-get directives into a single RUN directive:

RUN apt-get update && apt-get upgrade -y && apt-get clean

Planning your layers will determine how many layers need to be recreated on each build of the container.

Which Approach is Right?

The approach to building images depends on why you are building the image.

Prototyping and Troubleshooting

If you are prototyping and troubleshooting then you probably want an interactive, *inside the container* approach. Using this approach you can take notes of the history of commands that made sense and of which external files may be missing or need changes. These can then be ADDed to the Dockerfile.

Complete Satisfactory Single Build

If you are satisfied with a specific image that has been built using the interactive approach and you believe it might be reused elsewhere, then it is recommended to use the single Dockerfile approach that builds it all in one build.

Now that you know how to build and experiment with your own images, take a look at some examples on Docker Hub and GitHub to see just what can be accomplished within a Docker container.

Note:

Each RUN command in effect opens a new shell in your image at root level, i.e. /, so if you want to issue consecutive commands in the same directory, either

  • specify the directory explicitly in each RUN command,

or

  • chain commands using the operating system’s method, e.g. && for Debian, with a \ character to continue onto new lines (see the example after this list)

or

  • specify a WORKDIR for the remaining commands
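For example, the shell half of a single chained RUN directive, following on from the earlier Dockerfile, might read:

cd /tmp/mysite && \
    mv * /var/www/html && \
    chown -R www-data:www-data /var/www/html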

Install Docker and run a Container

Posted by / Feb 26, 2016 / Categories: Docker


Provision a KVM Virtual Private Server on BHost with a 64-bit operating system of your choice. For the purpose of my tutorials I shall be using Debian 8. Most of the commands I will be demonstrating are from Docker itself and independent of the operating system; however, you will have to adapt OS shell commands such as apt-get to suit your own distribution, e.g. yum in Fedora.

As with all new VPSs, log in via SSH and bring the system up to date

apt-get update && apt-get -y upgrade

Now add the Docker repository to your package manager. The Docker website has OS-specific instructions on how to do this. If you want to install it on your local Windows or Mac machine to experiment with, their guide is excellent.

apt-get install apt-transport-https ca-certificates
apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
touch /etc/apt/sources.list.d/docker.list
echo "deb https://apt.dockerproject.org/repo debian-jessie main" >> /etc/apt/sources.list.d/docker.list
apt-get update
apt-get install docker-engine
service docker start
docker run hello-world

https://github.com/blackdoginet/BHost-docker-tutorials/blob/master/install_docker.sh

You should now see Docker’s welcome message

Docker Hello World

This is Docker’s self-explanatory introduction, which we will no longer need, so let’s remove it

#!/bin/bash
# This script clears the terminal, and shows different ways of listing docker containers and images,
# then removes the hello-world container and image
clear # clear terminal
docker ps # list active containers
docker ps -a # list all containers
docker ps -aq # list all containers by their identifiers
docker rm $(docker ps -aq) # nested docker command to remove/delete all non-active containers
docker images # list all images
docker rmi hello-world # remove image hello-world

https://github.com/blackdoginet/BHost-docker-tutorials/blob/master/remove_hello-world.sh

Now we are going to download a completely self-contained image from Docker Hub. You don’t need to create an account just yet as all the repositories we are going to use are public.

docker pull turnkeylinux/sahana-eden-14.0

Sahana Eden is an Emergency Development Environment platform for rapid deployment of humanitarian response management. Its rich feature set can be rapidly customized to adapt to existing processes and integrate with existing systems to provide effective solutions for critical humanitarian needs management either prior to or during a crisis.

This image includes all the standard features in TurnKey Core, and on top of that:

  • Sahana Eden configurations:
    • Installed from upstream source code to /var/www/sahana-eden
    • Serve web2py applications with WSGI on Apache.
    • Force admin console to be served via SSL.
  • SSL support out of the box.
  • Postfix MTA (bound to localhost) to allow sending of email (e.g., password recovery).
  • Webmin modules for configuring Apache2, MySQL and Postfix.

The image is quite large so it will take a few minutes to download and extract itself, then we are ready to go.

To run this image

docker run -i -t -d --name sahana turnkeylinux/sahana-eden-14.0

Now check that it is running. Note that, as with all turnkeylinux images, this will take a while to initialise itself, as it updates the base operating system and sets a random root password on first boot, and then needs user input for full password control.
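A plain docker ps will confirm the container has started:

docker ps    # the sahana container should be listed with an "Up" status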

Sahana running

To monitor progress of this first boot

docker logs sahana

When it has completed

docker inspect --format='{{.NetworkSettings.IPAddress}}' sahana    # displays container IP address
docker logs sahana | grep "Random initial root password"  # displays first boot password

Now ssh into your container using the credentials above
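Using the IP address from the inspect command above, that is simply:

ssh root@<your container ip>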

First boot

Enter all your passwords.

Turnkey setup complete

Select Quit from this, then, whilst still in the container’s shell, remove all cached downloads from the system update. This will reduce the size of your final image.

apt-get clean
exit

Check the container is working as expected

curl http://<your container ip>

Your screen will show the HTML of the site’s front page

Curl results

 

Now we don’t want to have to do this each time this container runs so we create a new image with the changes we have made

docker commit sahana <myname>/sahana   # where <myname> is your own namespace
docker stop sahana  # stop the container
docker rm sahana   # remove the container
docker rmi turnkeylinux/sahana-eden-14.0     # remove the original image

So far we have a new Docker image that will run inside our VPS with the following ports opened internally

  • 12320/tcp
  • 12321/tcp
  • 22/tcp
  • 443/tcp
  • 80/tcp

In order to enable external access to this image, the next time we run it in a container we bind it to a host port with a docker run switch in the form of

-p <any_unused_host_port>:<container_port>

So if you are not running any other services on your VPS (e.g. Apache) just bind -p 80:80; otherwise attach it to a spare port, e.g. 81:80, making the full run command

docker run -i -t -d -p 81:80 --name sahana <myname>/sahana

Then visit your new site. With Sahana the first person to register automatically becomes an administrator, so do this immediately, then commit the container again to avoid any mistakes/insecurity in the future.

Sahana front screen

I hope you have enjoyed this quick walk through and have found it useful. In my next article I shall be building my own standalone image.

Introduction to Docker on BHost

Posted by / Feb 19, 2016 / Categories: Docker

By now, we’ve all heard “Docker, Docker, Docker” coming from every available channel. Ok, we get it, Docker’s great. But why would I want to use it on a Virtual Private Server?

Let me first outline why I chose to investigate it and ended up falling in love. I am often called upon to provide web-accessible demonstration sites for clients using one piece of software or another. It’s simply not good enough to point them to the software developer’s existing demonstration site. You have to be able to prove that you can work with it and satisfy the client’s particular requirements, be it simply modifying the site appearance or enhancing the functionality with custom modules. These application requirements can vary wildly, e.g.

  • Sourcefabric Live Blog – Ubuntu 12.04, Python3 and Mongrel2 server
  • Sahana Eden – Debian, Python 2.6 and Apache2 server
  • Redmine – Ruby, Rails

These requirements could involve adding additional repositories to your server environment, modifying your VPS configuration and eventually “breaking” it.

Previously I used VirtualBox to create these environments, with each machine having its own operating system, web and database servers, to avoid crashing my host machine. This results in duplication across machines and occupies a great deal of disk space. Even when developed locally, there is no guarantee you can replicate that environment on your VPS.

I wasted so much time and effort building these different environments that I decided to take a leap of faith and investigate Docker …… What a time saver!!

I can use multiple language versions without having to resort to all the hackarounds for the language (Python, Ruby, Java, Node). Want to run a program in Python 3, but only have Python 2 installed on your host? Run it using a Python 3 image. Want to compile a Java program with Java 1.6 instead of the 1.7 that’s installed on your machine? Compile it in a Java 1.6 Docker image.

Deployment is easy. If it runs in your container, it will run on your server just the same. Just package up your code and deploy it on a server with the same image or push a new Docker image with your code in it and run that new image.

You can still use your favorite editor/IDE as you normally do. No need for running a VM in VirtualBox and SSHing in and developing from the shell just so you can build/run on a Linux box.

On BHost

A 64-bit 3.10 Linux kernel is the minimum requirement for Docker. Kernels older than 3.10 lack some of the features required to run Docker containers, and these older versions are known to have bugs which cause data loss and frequently panic under certain conditions. From my own experience this means provisioning a KVM box on BHost, as the OpenVZ containers use an older kernel. To their credit, BHost does not charge more for this option, plus it gives you the advantage of a full hardware virtualisation platform with loadable kernel modules, giving users the freedom to run a range of Linux distros with any kernel. Each virtual machine has private virtualised hardware including network card, disk and graphics adapter, and with no possibility of overselling, you get guaranteed resources at your disposal any time, day or night!

Docker installation is a breeze: simply a matter of adding the Docker software repository to your system package manager and installing as you would any other package. Full instructions for installation on many different flavours of Linux, together with Mac and Windows (if you need it on your local machine), can be found on the Docker website, together with other comprehensive documentation to get you started.

I will be following up this article with a number of tips and tutorials to help you set up a fully containerised environment on BHost.

Deploying SSL encryption for free

Posted by / Feb 10, 2016 / Categories: Security

Encryption is good. Without SSL encryption you never know who is intercepting communications between your webserver and the clients connecting to it. Let’s make it difficult for them and show our users that we care about their privacy.

SSL (Secure Sockets Layer) is a standard security technology for establishing an encrypted link between a server and a client—typically a web server (website) and a browser; or a mail server and a mail client (e.g., Outlook).

SSL allows sensitive information such as credit card numbers, social security numbers, and login credentials to be transmitted securely. Normally, data sent between browsers and web servers is sent in plain text—leaving you vulnerable to eavesdropping. If an attacker is able to intercept all data being sent between a browser and a web server they can see and use that information.

Setting up an SSL server is usually not that easy or cheap, but there are some alternatives.

STARTSSL

This is a great free service but a pain to set up.

Personal Identification

You must first set up an account with them and get a browser certificate. This shows them you are who you say you are. At this point it’s best to have your email client open ready to receive messages, because the credentials they send you are only valid for a short time. After you paste the verification code into their web form, your browser certificate is installed in your browser.

Browser Certificate confirmation

 

This procedure puts an SSL certificate on your browser so they know for certain who you are. You then apply for an SSL certificate for your domain name.

Proof of Domain Ownership

After setting up an account you have to prove that you own, or have some authority to apply for a certificate on behalf of, your website. StartSSL does this by sending an email to an administrative email account which MUST be authoritative, i.e. the email you used when registering the domain, or an administrative-level email account on the domain you are applying for; a Gmail or Hotmail account will NOT work.

Apply for Server SSL Certificate

This process involves generating a Certificate Signing Request on your domain. Since this varies according to the operating system software, e.g. CentOS, Ubuntu or Debian, I am not going to cover it here. To generate a CSR, a key pair must be created for the server. These two items are a digital certificate key pair and cannot be separated. If the public/private key file or password is lost or changed before the SSL certificate is installed, the SSL certificate will need to be re-issued. The private key, CSR and certificate must all match in order for the installation to be successful. There are many excellent tutorials already on the web; just Google your OS and “generate CSR” to find one that suits you. I recommend the RapidSSL tutorials even though I do not use their services.
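For reference, a generic OpenSSL invocation looks something like this (file names are placeholders; follow a tutorial for your own OS before relying on it):

openssl req -new -newkey rsa:2048 -nodes -keyout example.com.key -out example.com.csr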

Once generated you have to submit this CSR to StartSSL, who will begin the process of issuing your certificate for you to download. I strongly recommend you do NOT password protect these certificates, since that causes Apache to request the password each time it starts. If your server tries to restart itself whilst you are not logged in through a terminal, it will fail, causing your website to go offline.

At the end of this process you will have a zip file to download. Keep this file safe and secure for the full year your certificate is valid.

Install Certificates on Server

Unzip the downloaded file locally, then upload the enclosed files to your server’s default SSL locations (on Debian, /etc/ssl). The private key you created during CSR generation will already be there, in a directory like /etc/ssl/private/.

Now you need to configure Apache to use all the keys. The default SSL configuration file will look something like this

Apache default SSL configuration

Three lines need to be edited

SSLCertificateFile      /etc/ssl/certs/domain.crt            (from the apache.zip)
SSLCertificateKeyFile   /etc/ssl/private/private.key
SSLCertificateChainFile /etc/ssl/certs/1_root_bundle.crt     (from the apache.zip)

Save the configuration file then restart Apache. You can then check your server’s SSL configuration using the excellent free tool at https://www.ssllabs.com/ssltest/.

Summary

  • Service is free. Revocation is not.
  • Certificate is valid for 12 months.
  • As complicated to set up as paid services

 

Let’s Encrypt

Let’s Encrypt is a free, automated, and open certificate authority (CA), run for the public’s benefit. Let’s Encrypt is a service provided by the Internet Security Research Group (ISRG).

The key principles behind Let’s Encrypt are:

  • Free: Anyone who owns a domain name can use Let’s Encrypt to obtain a trusted certificate at zero cost.
  • Automatic: Software running on a web server can interact with Let’s Encrypt to painlessly obtain a certificate, securely configure it for use, and automatically take care of renewal.
  • Secure: Let’s Encrypt will serve as a platform for advancing TLS security best practices, both on the CA side and by helping site operators properly secure their servers.
  • Transparent: All certificates issued or revoked will be publicly recorded and available for anyone to inspect.
  • Open: The automatic issuance and renewal protocol will be published as an open standard that others can adopt.
  • Cooperative: Much like the underlying Internet protocols themselves, Let’s Encrypt is a joint effort to benefit the community, beyond the control of any one organization.
(From https://letsencrypt.org/about/)

 

Installing Let’s Encrypt

Note: Let’s Encrypt is in beta. Please don’t use it unless you’re comfortable with beta software that may contain bugs.

If your operating system includes a packaged copy of letsencrypt, install it from there and use the letsencrypt command. Otherwise, you can use the letsencrypt-auto wrapper script to get a copy quickly:

$ git clone https://github.com/letsencrypt/letsencrypt
$ cd letsencrypt
$ ./letsencrypt-auto --help

letsencrypt-auto accepts the same flags as letsencrypt; it installs all of its own dependencies and updates the client code automatically (but it’s comparatively slow and large in order to achieve that).

How To Use The Client

The Let’s Encrypt client supports a number of different “plugins” that can be used to obtain and/or install certificates. A few examples of the options are included below:

If you’re running Apache on a recent Debian-based OS, you can try the Apache plugin, which automates both obtaining and installing certs:

./letsencrypt-auto --apache

Note: If you are hosting several different websites on the same server, using virtual hosts, this will issue only one certificate but configure all websites to use the same certificate.

On other platforms automatic installation is not yet available, so you will have to use the certonly command. Here are some examples:

To obtain a cert using a “standalone” webserver (you may need to temporarily stop your existing webserver) for example.com and www.example.com:

./letsencrypt-auto certonly --standalone -d example.com -d www.example.com

Configure Apache

Edit three lines in the Apache SSL configuration file to point to the certificates provided by Let’s Encrypt

Configuration for Apache2

where the redacted blocks are your domain. Save the configuration file then restart Apache.
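The certificates themselves live under /etc/letsencrypt/live/<yourdomain>/, which is where the edited directives should point; with example.com as a placeholder you can list them with:

ls /etc/letsencrypt/live/example.com/
# cert.pem  chain.pem  fullchain.pem  privkey.pem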

Summary

  • Service is free. Revocation is free.
  • Certificate is valid for 90 days.
  • Simple to set up.

 

Hint:

One of the checks performed by the SSL site test linked above is whether your server supports Strict Transport Security (HSTS). To enable this on your site you need to enable mod_headers, then edit the Apache SSL configuration file to include the following code directly after you enable SSL, as per

SSLEngine on
# HSTS (mod_headers is required) (15768000 seconds = 6 months)
Header always set Strict-Transport-Security "max-age=15768000"
SSLCipherSuite ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:ECDHE-RSA-DES-CBC3-SHA:ECDHE-ECDSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA
SSLHonorCipherOrder on
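If mod_headers is not already enabled, on Debian that is simply:

a2enmod headers
service apache2 restart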

How to configure reverse DNS

Posted by / Oct 19, 2015 / Categories: DNS

Please see our longer article for a fuller explanation of reverse DNS and what it’s for.

You must already have set up your forward DNS records to enable this feature.

1) Log in to your BHost server control panel and select the VPS you want to manage.

2) Click on the Network tab

3) Click [Edit] for the IP address you want to add the reverse DNS to.

4) In the popup add the domain you want to allocate to this IP address. This popup will complain if your forward DNS records are not correctly set up.
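Once the change has propagated you can verify the PTR record from any machine with dig, substituting your own IP address:

dig -x 203.0.113.10 +short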

Running email services using iRedMail

Posted by / Oct 2, 2014 / Categories: Productivity

Nowadays, it seems, everybody and his dog can set up a beautiful website using one of the open source CMS packages that are available, but they still have email addresses like dave.mywebsite@gmail.com because they can’t set up and run an efficient email server. Take this as an example of a marketing no-no, one of many I receive on a daily basis

Apart from the shocking grammar, one glaring giveaway here is the Gmail address. So the sender either cannot afford a simple BHost package for less than £5 a month or they do not have the technical ability to host a mail server. Neither of which inspires me.

This is an essential part of offering your services to potential clients and is so easy to set up using iRedMail: a free, fully fledged, full-featured mail server solution. All the packages used are free and open source, provided by the distribution vendors you trust.

  • Postfix: SMTP service
  • Dovecot: POP3/POP3S, IMAP/IMAPS, Managesieve service
  • Apache: Web server
  • MySQL/PostgreSQL: Storing application data and/or mail accounts
  • OpenLDAP: Storing mail accounts
  • Policyd: Postfix policy server
  • Amavisd: An interface between Postfix and SpamAssassin, ClamAV. Used for spam and virus scanning.
  • Roundcube: Webmail
  • Awstats: Apache and Postfix log analyzer
  • Fail2ban: scans log files (e.g. /var/log/maillog) and bans IPs that show the malicious signs — too many password failures, seeking for exploits, etc.

Don’t be intimidated by this list; it is almost a one-click install.

Follow the excellent installation instructions for your operating system from here: http://www.iredmail.org/doc.html#installation_guide

The main gotcha is your FQDN (fully qualified domain name). DO NOT use the same name as you have given your host in the BHost control panel.

The installation is very straightforward. On completion the screen displays all the newly created accounts and passwords necessary for your mail server. Don’t panic: these are also posted to your first mailbox and can be saved from there.

Now reboot your VPS, then log in to your mail server through the webmail interface at https://www.mynewmailserver.com/mail as postmaster@mynewfqdn.com with the password you created. You will have two emails in your account. Save the one with all the passwords to a text file on a USB stick, or print it out and keep it safe somewhere.

Try sending an email from your newly created postmaster account and it will go straight into the recipient’s junk box.

First set up a reverse DNS record using the BHost control panel. Don’t worry if you are hosting a lot of different clients’ email servers: as long as there is one PTR record, mail servers recognise that many sites share the same IP address.

Now set up SPF. This is a TXT record in your DNS server settings. These settings need to be correct for each domain you are delivering mail from.

mynewmaildomain.com.           3600    IN      TXT     "v=spf1 ip4:202.96.134.133 -all"

Where the IP address is obviously your own.
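You can confirm the record is visible with dig, using the placeholder domain from above:

dig TXT mynewmaildomain.com +short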

This will still result in mail being put in the junk box. This is a typical mail header

You can see we have an SPF pass but the mail still gets dumped in Junk. We now need DKIM.

DKIM

As you can imagine from its name, DKIM is set up on a domain-by-domain basis. So if you are hosting multiple mail servers, these DKIM records must be set up in the DNS zone for each domain being served. One record will have already been created for your first/initial mail server. To see it, use

amavisd showkeys
or
amavisd-new showkeys

  • Copy the output of the above command into one line, like below. It will be the value of the DNS record.

v=DKIM1; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDYArsr2BKbdhv9efugByf7LhaKtxFUt0ec5+1dWmcDv0WH0qZLFK711sibNN5LutvnaiuH+w3Kr8Ylbw8gq2j0UBokFcMycUvOBd7nsYn/TUrOua3Nns+qKSJBy88IWSh2zHaGbjRYujyWSTjlPELJ0H+5EV711qseo/omquskkwIDAQAB

  • Add a ‘TXT’ type DNS record, and set its value to the line you copied above.
  • After you have added this in DNS, type the command below to verify it:

# amavisd-new testkeys
TESTING: dkim._domainkey.iredmail.org => pass

If it shows ‘pass’, it works. Now send a mail to another address and check the mail header to see that your new mail server passes both the SPF and DKIM checks

Howto

Create a DKIM key for your domain newdomain.com:

# Create a key for the new domain (the dkim_key line below expects it in /var/lib/dkim/)
amavisd-new genrsa newdomain.com.pem

# Edit the amavis settings file
nano /etc/amavis/conf.d/50-user

# Append your domain to this line
@local_domains_maps = ['mx.xxx.com', 'olddomain.com', 'newdomain.com'];

# And add it to the dkim_key section (it's a bit further down in the file)
dkim_key("newdomain.com", "dkim", "/var/lib/dkim/newdomain.com.pem");
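After editing, restart Amavis so the new key is loaded and re-run the test; on Debian the service is usually called amavis:

service amavis restart
amavisd-new testkeys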

Problem: Checking the headers I see that SPF and DKIM pass correctly. I have no problem with Gmail, Yahoo and others, but Hotmail seems very strict.

Solution: This is correct. Hotmail / outlook.com are insanely strict for really no sensible reason at all. You have checked the obvious things:

  • SPF
  • DKIM
  • reverse DNS
  • My IP is not listed in any blacklist; I used mxtoolbox.com

The only thing left to do is manually file a request with Microsoft to get your server listed in their safe senders. I really wish I was kidding, but even after triple-checking all our mail settings (same as the bulleted list above), testing successfully on every other mail provider under the sun, etcetera, we had to file a manual Hotmail inclusion request before email from our server would arrive for Hotmail / outlook.com users.

As you can see on Microsoft’s Postmaster Troubleshooting page:

IPs not previously used to send email typically don’t have any reputation built up in our systems. As a result, emails from new IPs are more likely to experience deliverability issues. Once the IP has built a reputation for not sending spam, Outlook will typically allow for a better email delivery experience.

The Improving E-mail Deliverability into Windows Live Hotmail (pdf) document describes this troubleshooting for the “Your e-mail is being delivered to the Junk e-mail Folder” scenario:

  • Too many recipients reported your previous e-mails as spam
  • Too much of your mail is sent to invalid or inactive e-mail addresses
  • Your SenderID record is incorrect or missing

None of which applies here to a new mailer anyway, and SenderID / SPF was already checked as valid.

So this begs the question, how exactly do you get positive email reputation when all your emails go into the spam folder on day zero?

Try setting up Microsoft’s Smart Network Data Services.

Deliverability to Outlook.com is based on your reputation. The Outlook.com Smart Network Data Services (SNDS) gives you the data you need to understand and improve your reputation at Outlook.com. But just looking at the data isn’t enough! Maintaining a good reputation is a lot of work. You should use this data to keep your mailing lists clean and to monitor the IPs you control for unusual behavior. Reputation is always the responsibility of the sender. SNDS gives senders access to detailed data about individual IPs, and it also includes our Junk Email Reporting Program, which lets you receive reports when users junk your messages. Now you can view IP data and manage feedback loop settings from one convenient website.

12