Terraform(ing) Blue-Green Swarms of Docker (VMware vCenter)

Preface

Terraform(ing) Blue-Green Swarms of Docker will enable you to update your Docker Swarm hosts to the current Docker CE version without an outage. Imagine the following situation: you have some Docker Swarm managers up and running and, of course, a bunch of Docker Swarm workers. If you are forced to update your operating system, or if you want to move from a previous Docker version to a newer one, you have to handle this change in place on your Docker Swarm workers. In practice this means you drain one Docker host, update it and bring it back into the Docker Swarm as active. With five Docker Swarm worker hosts you lose a fifth of your capacity during the update, and each remaining worker host has to absorb an extra twentieth of the total workload. And if something goes wrong, say the new Docker version has a bug that hits you, you might be out of service shortly after.

Therefore it is much better if you can create fresh Docker Swarm workers side by side with the existing ones and then, once everything is up and running, drain an old-version Docker Swarm worker. The load is pulled over to the new Docker Swarm workers, and if something goes wrong you can simply switch back by reactivating the old Docker Swarm worker host and draining the new Docker Swarm workers afterwards.
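The switchover itself is plain Docker CLI, run on a manager node; here is a minimal sketch, with node names as placeholders for your environment:

```shell
# Stop scheduling work onto an old (blue) worker; its tasks are
# rescheduled onto the remaining active nodes, including the green ones.
docker node update --availability drain blue-worker-01

# Check the node states and where the tasks ended up.
docker node ls

# Rollback if the green workers misbehave: reactivate blue, drain green.
docker node update --availability active blue-worker-01
docker node update --availability drain green-worker-01
```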

The downside of working this way is that you need a lot of resources while both the blue and the green Docker Swarm workers are running, and you have to install the Docker Swarm worker hosts. The first issue costs money, the second one time. Losing time is always worse, so we will let Terraform do the installation for us.

Prerequisites

You will need existing Docker Swarm managers to do this job. Ideally the Docker Swarm managers are not doubling as Docker Swarm workers; they should not run any workload containers. They do not need as many resources as the Docker Swarm workers. If you handle it this way, you can update the Docker Swarm managers during work hours without any hassle. Therefore, separate the Docker Swarm managers from your Docker Swarm workers.

It might be possible to create the Docker Swarm managers through Terraform as well, but that is not an easy task. Terraform has only limited provisioning capabilities, which is evident as it is a tool to build infrastructure. Don't use it to handle software installation tasks; if you need those, use Puppet, Chef, whatever you like, or write something yourself.

Example Terraform file

Terraform file explanation

In this Terraform file we use VMware templates to distinguish between the Ubuntu versions and the installed Docker versions; this is similar to the way Docker images are used (line 56). We use PowerDNS to register the Docker worker hosts automatically in our DevOps DNS (optional, lines 65-70). The most important part of this Terraform file are the provisioners (lines 123-134). These lines take care that a newly created Docker host joins the Docker Swarm as a worker and, of course, that it leaves the Docker Swarm again if you destroy the Docker Swarm workers with terraform destroy.
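A minimal sketch of how such provisioners can look (the resource details, variable names and the exact `when` syntax are placeholders and depend on your Terraform version; the join token comes from `docker swarm join-token worker` on a manager):

```hcl
resource "vsphere_virtual_machine" "worker" {
  # ... template clone, network and disk configuration ...

  # On create: join the existing swarm as a worker.
  provisioner "remote-exec" {
    inline = [
      "docker swarm join --token ${var.worker_join_token} ${var.manager_ip}:2377",
    ]
  }

  # On destroy: leave the swarm so no stale node entry remains.
  provisioner "remote-exec" {
    when   = "destroy"
    inline = ["docker swarm leave"]
  }
}
```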

Conclusion

You can take this file as a boilerplate: use it to bring up the blue Docker Swarm workers, then copy it, change the configuration (e.g. IP addresses) and bring up the green Docker Swarm workers. After you have transferred the workload from blue to green, you can destroy the blue Docker Swarm workers and prepare for the next update. Todos: You might need to put a small script on your Terraform-created Docker Swarm worker hosts to perform additional tasks after creation or before destroy. For example, the PowerDNS entry creation is a bad hack because it deletes all entries; it would be better to have a script that does this task after startup, from the Docker Swarm worker host's point of view.

Have fun -M

Mario Kleinsasser on GitHub | LinkedIn | Twitter
Mario Kleinsasser
Doing Linux since 2000 and containers since 2009. I like to hack new and interesting stuff: containers, Python, DevOps, automation and so on. Interested in science, and I like to read (if I find the time). Einstein said "Imagination is more important than knowledge. For knowledge is limited." - I say "The distance between faith and knowledge is infinite. (c) by me". Interesting contacts are always welcome - nice to meet you out there - if you like, don't hesitate to contact me! - M

Terraform(ing) Docker hosts with LinuxKit on-premise (VMware vCenter)

Preface

This blog post does not aim to be a fully-fledged step-by-step tutorial on how to create and bootstrap a Docker Swarm cluster with VMware vCenter on-premise. Instead, it should give you an idea of what is possible and why we do it this way. There are different ways to achieve different goals, and the approach we explain here is only one of them.

Today we are running around 30 Docker hosts which are installed as classic VMware virtual machines, all based on Ubuntu Linux. This means we have to update our Docker hosts manually every three months to stay current with the quarterly Docker CE releases. Pointless work. We provision our Docker hosts with Puppet, but even so it would take time to create new Docker hosts to let the Docker Swarm workers rotate. Yes, we could create a blue-green infrastructure to rotate them, but that consumes computing resources if we create the hosts beforehand or keep them running all the time. If we created them on demand every three months, it would take time to bring them up, update them, and so on. This is the point where we introduce new players: Docker LinuxKit as the Docker host OS and HashiCorp Terraform as IaC (Infrastructure as Code). Yes, there is also Docker InfraKit, and we are currently evaluating it, but there is more work to do there.

LinuxKit

LinuxKit is an operating system project started by Docker that provides a toolkit for building custom, minimal, immutable Linux distributions. The benefit of choosing LinuxKit is not only a custom-made Linux distribution that reflects your needs; you also get a platform that can, for example, push a resulting iso-image to a VMware datastore. Several cloud providers are already built into the LinuxKit toolkit.

LinuxKit example

Downloading and building LinuxKit is very well described on the LinuxKit GitHub page. After you have built the LinuxKit binary, you need a yaml file that describes the composition of the LinuxKit operating system you would like to create. The example linuxkit.yml included in the sources is a good starting point, but after some work you will recognize that the simple example may not be enough. Therefore, the next lines show the basic example we start from, extended with some additional packages we need.

LinuxKit example explanation

The first thing you need to know is that LinuxKit uses containerd as the container engine to power most of the needed operating system daemons. As you can see in the linuxkit.yml, the different sections use images, known as container images, to build up the system. Important: these images use the OCI (Open Container Initiative) image standard. I will come back to this point later, so keep this information in mind.

The first line of the linuxkit.yml takes a Linux kernel image and loads it. The file does not use any :latest tag as image definition, and you should avoid them too, because you want to know which versions are running in your operating system. Please do not copy the file as it is, it will be outdated almost immediately!

After the definition of the kernel comes the init section. This section contains the things that are needed immediately, for example the containerd image, which will be responsible for the upcoming service containers.

Next in the row are the services you will probably need for your environment. We need the open-vm-tools image because we run the resulting image on our VMware ESXi infrastructure; without the tools it would not be possible to retrieve the IP address information from the VMware vCenter, and this is a must, as we are going to build the virtual machines with Terraform later.

The ssh daemon should be self-explanatory, but you will need some root/.ssh/authorized_keys to access the running LinuxKit OS. Therefore, look at the files section, where you can describe your configuration needs.

Now we install the Docker engine, because we would like to build a Docker Swarm worker, and this Docker Swarm worker should later join an existing Docker Swarm manager. Important: you will notice that for the Docker engine we use the DockerHub image as usual, no special LinuxKit image! But how does this work? As I said before, containerd uses OCI standard images, which are not the same as Docker images. Let's have a look into it.
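Put together, a stripped-down linuxkit.yml along these lines covers the sections discussed above. This is a sketch only: every image tag here is a placeholder, and you must pin concrete, current versions yourself:

```yaml
kernel:
  image: linuxkit/kernel:4.9.x          # pin a concrete kernel version
  cmdline: "console=tty0"
init:
  - linuxkit/init:<version>
  - linuxkit/runc:<version>
  - linuxkit/containerd:<version>       # powers the service containers
services:
  - name: open-vm-tools                 # reports the IP back to vCenter
    image: linuxkit/open-vm-tools:<version>
  - name: sshd
    image: linuxkit/sshd:<version>
  - name: docker                        # plain DockerHub image, no special LinuxKit build
    image: docker:17.09-dind
files:
  - path: root/.ssh/authorized_keys
    contents: "ssh-rsa AAAA... you@example.com"
```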

LinuxKit persistent disk magic

When it comes to the point that you have to start up your Docker Swarm cluster, or even a single Docker Swarm host, with LinuxKit, you will sooner or later ask yourself how to persist your Docker Swarm data, or at least the information about which Docker containers are started.

The Docker data lives inside the /var/lib/docker folder. Therefore, if you persist this folder, you are able to persist the current state of the Docker host.

The colleagues at Docker have done their work the right way. Look at the lines where the format and mount images are loaded. These two images do the magic that enables LinuxKit to persist its data. You can look up the documentation on GitHub for details. For the impatient, here is the summary.

The format image takes the first block device it finds and, if there is no Linux partition on it, formats it. If it finds a Linux partition, nothing happens. Very comfortable! After the disk format is done (or not), the partition gets mounted via the mount image and the corresponding mountie configuration lines. Et voilà, there you go. Magical persistence with LinuxKit iso-turbo-boost mode. Genius!
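In the yml this corresponds roughly to the following onboot entries (the tags and the exact mount command are placeholders; check the LinuxKit documentation for the current form):

```yaml
onboot:
  # Formats the first block device, but only if it carries
  # no Linux partition yet; an existing partition is left alone.
  - name: format
    image: linuxkit/format:<version>
  # Mounts that partition so /var/lib/docker survives reboots.
  - name: mount
    image: linuxkit/mount:<version>
    command: ["/usr/bin/mountie", "/var/lib/docker"]
```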

This is one of the most important features when it comes to infrastructure as code, because regardless of whether you use InfraKit or Terraform, you will eventually need some kind of persistence to get a reliable infrastructure.

LinuxKit OCI and Docker images

The really cool thing about LinuxKit is that it is a toolset. During the linuxkit build command, which we will see later, the image components described in the linuxkit.yml are downloaded from DockerHub, and afterwards the contents of the used images are transformed into OCI-compatible root filesystems. This means you can use all DockerHub images directly, without worrying about the image format. Neat! Now you know why you need a Docker environment to build LinuxKit, and to build your LinuxKit OS afterwards.

LinuxKit build your image

To build your LinuxKit iso-image, you can use the following command. We have created a separate docker.yml file to reflect our changes; the resulting iso-image will therefore be named docker.iso automatically.
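The build step is essentially a one-liner (depending on your LinuxKit version the binary may still be called moby, and the output-format flags differ slightly between releases):

```shell
# Build the OS image described by docker.yml; among the outputs
# is an iso-image named docker.iso, after the yml file.
linuxkit build docker.yml
```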

LinuxKit push image to VMware

After you have built your iso-image, you can push it to a VMware datastore with the following command. Important: there is a small problem in the current version of LinuxKit: you cannot push to a VMware datacenter if there are multiple hosts located in it. Thanks to Dan Finneran @thebsdbox who helped me a lot! There is already a GitHub PR merged which makes it possible to push without a problem; it will be included in the next version (0.2) of LinuxKit.
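The push looks roughly like this; the flag names are an assumption and may differ in your LinuxKit version, and the URL and datastore values are placeholders for your environment:

```shell
# Upload the iso to a vCenter datastore.
linuxkit push vcenter \
  -url 'https://user:password@vcenter.example.com/sdk' \
  -datastore datastore1 \
  docker.iso
```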

Terraform

Sure, at some point in time we may use InfraKit to get our things up and running, but as of today there are hardly any alternatives to Terraform. Terraform is open source software, but you can purchase an enterprise license if you need one. When you start to dig into the parts and pieces of Terraform, you will recognize that it is not the easiest piece of software, but it is incredibly powerful.

Just download Terraform from the website and unpack it. The only thing you will see after extraction is a single binary called terraform.

Before you can do anything with it, you need a configuration which describes what you would like to get. Terraform works with states: you plan a state, you apply a state and you destroy a state, and Terraform tries to keep your resources consistent with it. Now to the config.

Terraform example

If you run this example with ./terraform plan and ./terraform apply, you will get three LinuxKit VMware virtual machines which save their persistent state to the hard drive.
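The full round trip, then, is:

```shell
./terraform plan     # preview: the three VMs that would be created
./terraform apply    # create the three LinuxKit virtual machines
./terraform destroy  # tear them down again when no longer needed
```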

Conclusion

You can use the information from this blog post to build up a basic infrastructure, but to get a running Docker Swarm cluster with managers and workers there is more work to do. There are a few glitches which currently make it a little complicated, for example using the VMware customization against the running LinuxKit virtual machines. For now it might be better to go with VMware templates, because you will have more possibilities. Maybe we will post an update on this soon. Stay tuned!


DevOps Gathering 2018

In February 2018 I will give my first official talk at the DevOps Gathering 2018 in Bochum, Germany. I would like to write some lines about why I am doing it, and I would also like to recommend this conference to you. And now, here comes the story.

Peter Rossbach is one of the founders of bee42 gmbh, and bee42 gmbh is one of the conference organizers. Bernhard and I have known Peter for ten years now. He is one of those people you meet once and never forget. We did a lot of work together on our Apache Tomcat installations in the past, and we always discussed many things besides work. One of those things was Docker, back in 2013. In 2013 it was way too early for us to jump on the Docker container train; we already had containers (OpenVZ) on-premise, which we had implemented with Kir Kolyshkin.

We tried Docker on a regular yearly basis in the following years, but there was too much that did not work the way we needed it to for production use. After some years had passed this way, we started a new attempt this year. And yes, we reached the point where things began to run smoothly. This year we did a lot for our developers, our company, our customers and ourselves (the Ops), and Docker was our enabler. That is why my talk is called Docker: Ops unleashed.

But now, for me, it is time to give something back, back to the community! And therefore I have decided to give a talk, to share our experience and what the motivation to make a change could be. I am doing this on my own, privately. This winter, together with my colleague, we founded a Docker Meetup in southern Austria, and I am proud to now be allowed to call myself "Docker Community Leader". But there was more: DockerCon EU 17 was really great. We met a lot of great people and got a lot of insights!

Why do I always write "we"? Because you are probably lost without a team, and a team is more than the sum of the individual skills. If you have someone to share your thoughts with, you can go even further! You might be motivated to give a talk, for example.

Finally, if you can manage to come to the [DevOps Gathering 2018](https://devops-gathering.io/) in Bochum, Germany, please do! It will be a great conference, and I am sure there will be a great audience too. And maybe we will meet there!

Have a lot of fun!

-M


1. Meetup: Allgemeines & Umfrage / General & Survey

We are planning a Docker Meetup in the next months and have created a Google Forms poll, which you can see below. The poll is provided in German, as we currently expect only German-speaking people to come to our first Meetup. If you do not understand German but would like to attend (if you are from Italy or Slovenia, for example), please contact us! If you are interested in giving a lightning talk or sharing your Docker story (approximately 10 minutes) in English, please contact us as well. You can find our contact information in the left menu, or you can head over to our Meetup page.

We are planning a first Docker Meetup in Spittal an der Drau in the coming months. The Meetup will presumably take place on the premises of bfi-Spittal. Due to the space situation, we have to limit attendance to 12 people! The organization of the Meetup (agenda, RSVP, …) will happen through our Meetup page (registration required). A date for this Meetup has not been fixed yet, because we first want to collect the topics that are of interest to the attendees. For this reason you will find a survey below these lines, which we kindly ask you to fill in.

The topics proposed so far come from Bernhard Rausch (CI/CD with GitLab) and me (Mario Kleinsasser, Docker 101), since these are areas we know very well from our own experience. If you have further suggestions, you are welcome to add them in the survey.

We have set the time frame for the Meetup at 2-3 hours; it will presumably start at 18:30.

Thank you for filling in the survey!

Survey

Meetup location

Will be announced shortly.


Linux, Golang, govendor and Visual Studio Code

Vim. Yes, I am a long-term Vim user and I think I will still be a Vim user in a couple of years. But today I would like to show you that Microsoft's Visual Studio Code is a great addition for an occasional Golang coder like me. Yes, I know, Vim is also a great Golang development IDE, but I often also need, for example, a preview of a Markdown file.

So let's have a look at Visual Studio Code:

Installation

The installation is straightforward. Head over to the official download page and download and install the appropriate package for your operating system. In my case this is Linux (Ubuntu, to be precise). After the installation you can start the editor via the startup menu of your operating system, or just type code in your console.

If you do this while using a remote graphical session like X2GO or XRDP or something similar, nothing will happen, because there is currently a little problem with this setup. But you can solve it: just read this issue, and at the bottom of it you can find the (Ubuntu) solution.

Configuration

After Visual Studio Code opens, you should change some user settings. You can open the settings screen through the menu, or press Ctrl+Shift+P to open the quick command palette. Now start typing, for example "open settings", and select the correct entry to edit the user settings. Here are the settings I have overridden in my environment.

First, I disabled auto save. In my opinion it is a little annoying when writing code, as every time you type the code is parsed immediately, which can slow down the editor a lot. Second, I lowered the font size; the default of 14 is too large for me, and I like to see more code on screen to follow the flow of the code more easily. go.toolsGopath is important, as it tells the Go plugin (more on this later) where to install its tool dependencies. go.inferGopath is also important, because it tells the Go plugin to use the currently opened folder as the GOPATH, which is really useful.
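In the user settings.json this boils down to a few overrides (the tools path is a placeholder for your own setup):

```json
{
    // No auto save: parse on demand instead of on every keystroke.
    "files.autoSave": "off",
    // Smaller font, more code on screen.
    "editor.fontSize": 12,
    // Where the Go plugin installs its helper tools.
    "go.toolsGopath": "/home/you/vscode-go-tools",
    // Use the currently opened folder as GOPATH.
    "go.inferGopath": true
}
```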

golang-plugin

Open the extension browser (Ctrl+Shift+P, "extension install"), search for the Go extension and install it. The installation will also install all of the extension's dependencies automatically, for example golint. All dependencies are installed in the path defined by go.toolsGopath.

govendor

govendor is a simple Golang solution for resolving your project dependencies. You can get it from its GitHub project page. It is pretty easy to install and use; just read through the documentation and follow the given steps :-).
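The basic workflow is just a handful of commands, run inside your project directory under the GOPATH:

```shell
# Install govendor itself into $GOPATH/bin.
go get -u github.com/kardianos/govendor

# Create the vendor/ directory and vendor.json in the current project.
govendor init

# Copy the external packages your code already imports into vendor/.
govendor add +external
```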

Conclusion

If you have set up everything correctly, you get a nice and useful editor for graphical desktop environments. Visual Studio Code is no replacement for Vim or Emacs or the editor of your choice, but it is a useful and powerful addition.

Have fun!

-M


Docker TOTD 3 – Consensus and Syslog

If you make a mistake and do not correct it, this is called a mistake. [Confucius]

Today we faced a problem with our Docker Swarm caused by a permanently restarting service. The service in question was Prometheus, which we use to monitor our Docker environment.

The story starts in the middle of last week, when the new Prometheus (version 2) was released. In the configuration of our docker-swarm.yml, which we use for the Prometheus service, we stupidly still used prometheus:latest. Did you notice the latest? We had been warned (at DockerCon) not to use it. Yes, there are a lot of examples on the internet which use exactly this, but it is a very bad idea. latest literally means unknown, because you will never know which image is currently referenced by the latest tag. latest is only a convention, not a guarantee! Therefore, pin the version of the image you really want to use, e.g. prometheus:1.1.1.
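In the stack file this is a single changed line; image name and tag here follow the example from the text and are illustrative:

```yaml
services:
  prometheus:
    # Bad: may silently jump to a new major version on the next pull.
    # image: prometheus:latest
    # Good: an explicit, immutable tag.
    image: prometheus:1.1.1
```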

In our case, caused by an unplanned service update, the Prometheus image was freshly pulled (you know, latest) and corrupted the Prometheus database. Furthermore, the Prometheus configuration format changed between the versions, which in turn caused a permanent restart of the service. This happened on the weekend, which would not have been too bad, but it put the container engine under stress.

This is documented in this GitHub issue. The result of this bug is that the syslog gets spammed with a lot of pointless messages, which will fill up your log partition after some time (maybe hours, but it will fill up).

At this point it gets icky. In a default Ubuntu setup, for example, the /var partition contains the log directory and, of course, the lib/docker directory too. If the /var partition is full, Docker can no longer write its Raft data, and your Docker Swarm will be nearly dead.

In our case we had made a configuration mistake: we used four Docker Swarm manager nodes, not three and not five. Now we come to the ugly level. Bad luck: the filled-up /var partition killed two of our Docker Swarm managers. The containers continued to work, but the cluster orchestration was messed up, because two out of four nodes were dead. No quorum anymore, no consensus, no manageability.

But no panic, there are ways to bring all services back online with some Linux voodoo (truncating syslog files, ...). To sum it up, what are the lessons learned?

  1. Watch out for the correct number of Docker Swarm managers (there has to be an odd number of them: 3, 5, 7, ...)
  2. Never ever use the latest tag if you are not the maintainer of the image!
  3. Restrict syslog so it cannot fill up your partition: place the syslog logs on a separate partition, or disable the syslog forwarding in the systemd journald config with ForwardToSyslog=false. journald's own default is to use at most 10% of the disk space for its log data.
  4. Maybe use LinuxKit. It is made of containers, which can be restricted so they do not use up all system resources. If you ever asked yourself why you should have a look at it, read number three again: a Docker host is not a general purpose server, and you likely do not need the default syslog and much more. This is what LinuxKit is designed for.
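The quorum arithmetic behind lesson one is easy to check: a Raft majority is floor(n/2)+1, so a fourth manager adds no fault tolerance over three:

```shell
# For n managers, compute the quorum size and the number of
# manager failures the swarm can survive.
for n in 3 4 5; do
  majority=$(( n / 2 + 1 ))
  tolerance=$(( n - majority ))
  echo "$n managers: quorum=$majority, tolerates $tolerance failure(s)"
done
# 3 managers: quorum=2, tolerates 1 failure(s)
# 4 managers: quorum=3, tolerates 1 failure(s)
# 5 managers: quorum=3, tolerates 2 failure(s)
```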

That's all for today.

-M

(Image by Walter Grassroot, Wikipedia)

