Speaker at DevOps Gathering 2018

This year, for the first time, I attended the DevOps Gathering 2018 conference in Bochum, Germany as a speaker! Long story short, it was a great experience for me!

The idea for giving this talk was born in October 2017. I knew that Peter Rossbach (a long-term friend of ours) had launched a conference in Bochum, Germany for the first time in March 2017. Because of our own journey along the DevOps way, which started at the beginning of 2017, I decided to try to give something back to the container community. Of course, we are already participating in various Open Source projects, but I was also looking for the chance to give a talk, to find out whether I am comfortable with it. After I submitted my CFP for the DevOps Gathering 2018, the organizers decided to give me the chance to speak to a larger audience.

I started the detailed preparation of the talk at the end of last year and continuously updated it as things changed at work. There are always recent learnings you would like to reflect in a talk. There are fine details here and there; on the last weekend before the talk, for example, I decided to recreate a graphic I use in the presentation myself.

My trip to the DevOps Gathering 2018 started on Monday afternoon with the drive to Salzburg airport. When I arrived there, I noticed that my flight to Düsseldorf was delayed by an hour. Not a huge problem, as I had plenty of time, and after some waiting the plane to Düsseldorf took off. After a short flight (one and a half hours) I took the train from Düsseldorf to Bochum and arrived there at approximately 9 pm. On the first day the speakers' dinner took place at a Greek restaurant in Bochum, which I managed to attend. That was the first time I met the other speakers. And obviously it was great (see the pictures)!

The next day I arrived at the conference location at 8 am. I already knew the boys and girls from bee42 gmbh, so it was a comfortable situation for me. The conference took place at the GData campus, a very nice location. And as the hour hand came close to 9 am, the location filled up with people. The DevOps Gathering was sold out, and in total there were more than 150 attendees.

On this day, after the lunch break, my talk took place at 1:15 pm. I was on stage early because I had included a video (with audio) in my talk. Audio is always an interesting thing: you never know whether it works during a presentation if you do not try it beforehand. I talked to the tech staff and we managed to get it up and running, audio included. By the way, all tracks were recorded. Here are the videos of the tracks from last year, and I am sure that the new ones will pop up there too in the following days.

From my point of view, my talk ran very smoothly. No huge problems, only some minor ones. My talk, like all the other talks, lasted 45 minutes (in English, of course) with Q&A afterwards. And to sum it up, it was really fun! Yes, I think it is exciting for me to give talks. If you never try things out and leave your comfort zone, you will never know what is exciting and satisfying for you.

During the whole day I had the chance to talk with the other speakers and of course with attendees too, and that was very interesting! In the evening the DevOps Gathering party took place at the GData campus. Drinks, food, music, cool people and tons of fun! If you have the chance, attend the next conference! It is worth it! I got in contact with a lot of people there, for example Roland Huß (Red Hat), Docker captain Viktor Farcic (CloudBees), Thomas Fricke (CTO of Endocode), and Jan Bruder (Rancher Labs), just to name a few of many.

On the second day I had to travel back and therefore had to leave the conference early. But from the talks which I saw on the second day, I can say that it was great too!

After my journey to the DevOps Gathering 2018 I can say that it was very exciting and enjoyable to give a talk, to meet a lot of great people, and to learn many new things. The discussions we had were great too. Everyone has his or her own experience, but we were always able to understand and respect each other's position, and this is what makes conferences so wonderful! Side note: I did this adventure at my own expense. Why? As I wrote above, if you never leave your comfort zone you will never find out what is exciting and enjoyable for you. Now I know that I would like to continue this way! There was also an attendee questionnaire about the conference in general and the speakers. I am curious what my marks are, and now I am waiting for the videos of the DevOps Gathering 2018.

You can find all speakers and their talks here – click. Have a lot of fun!


Terraform(ing) Blue-Green Swarms of Docker (VMware vCenter)


Terraform(ing) Blue-Green Swarms of Docker will enable you to update your Docker Swarm hosts to the current Docker CE version without an outage. For example, imagine the following situation. You have some Docker Swarm managers up and running and of course a bunch of Docker Swarm workers. If you are forced to update your operating system, or if you would like to update from a previous version of Docker to a newer one, you will have to handle this change in place on your Docker Swarm workers. That means you will drain one Docker host, update it, and bring it back into the Docker Swarm as active. If you have five Docker Swarm worker hosts, this results in the loss of a fifth of your capacity, and each of the remaining four Docker Swarm worker hosts has to handle a quarter more workload. And if something goes wrong, maybe because the new Docker version has a bug which hits you, you might be out of order shortly.

Therefore it is much better if you can create fresh Docker Swarm workers side by side with the existing ones and then, once everything is up and running, drain an old-version Docker Swarm worker. The load will be shifted over to the new Docker Swarm workers, and if something goes wrong, you can simply switch back by reactivating the old Docker Swarm worker host and draining the new Docker Swarm workers afterwards.
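On a Docker Swarm manager, the drain and switch-back steps described above map onto the standard `docker node update` command. A minimal sketch (the node names are placeholders, not names from our setup):

```shell
# Sketch of the blue-green switch, to be run on a Docker Swarm manager.
# Node names like worker-blue-1 / worker-green-1 are placeholders.

# Drain an old (blue) worker: its tasks get rescheduled onto the
# remaining active workers, including the freshly created green ones.
drain_node() {
  docker node update --availability drain "$1"
}

# Roll back: reactivate the blue worker and drain the green one instead.
rollback_node() {
  docker node update --availability active "$1"
  docker node update --availability drain "$2"
}

# Usage (on a manager):
#   drain_node worker-blue-1
#   rollback_node worker-blue-1 worker-green-1
```

`docker node ls` shows the availability column, so you can watch the tasks move before draining the next host.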

The downside of working this way is that you need a lot of resources while both the blue and the green Docker Swarm workers are running, and you have to install the Docker Swarm worker hosts. The first issue costs money, the second one time. Losing time is always worse, therefore we will use Terraform to do it for us.


You will need existing Docker Swarm managers for this job. Ideally, the Docker Swarm managers are not also used as Docker Swarm workers; they should not run workload containers. They do not need as many resources as the Docker Swarm workers. If you handle it this way, you can update the Docker Swarm managers during work hours without any hassle. Therefore, separate the Docker Swarm managers from your Docker Swarm workers.

It might be possible to create the Docker Swarm managers through Terraform too, but that is not an easy task. Terraform has only limited provisioning capabilities, which is obvious but reasonable, as it is a tool to build infrastructure. Don't use it to handle software installation tasks. If you need those, use Puppet, Chef, or whatever, or write something yourself.

Example Terraform file

Terraform file explanation

In this Terraform file we use VMware templates to distinguish between the Ubuntu versions and the installed Docker versions. This is similar to the Docker image usage (line 56). We are using PowerDNS to register the Docker worker hosts automatically in our DevOps DNS (optional, lines 65-70). The most important parts of this Terraform file are the provisioners (lines 123-134). These lines take care that the newly created Docker host joins the Docker Swarm as a worker and, of course, leaves the Docker Swarm if you destroy the Docker Swarm workers through terraform destroy.
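The full file is not reproduced here, but the join/leave provisioners could be sketched roughly like this (the resource body, variable names and the way the join token is supplied are assumptions, not our exact configuration):

```hcl
# Hypothetical sketch of the swarm join/leave provisioners.
resource "vsphere_virtual_machine" "worker" {
  # ... VM settings cloned from the VMware template ...

  # Join the swarm right after the VM has been created.
  provisioner "remote-exec" {
    inline = [
      "docker swarm join --token ${var.worker_join_token} ${var.manager_ip}:2377",
    ]
  }

  # Leave the swarm cleanly when `terraform destroy` removes the worker.
  provisioner "remote-exec" {
    when   = "destroy"
    inline = ["docker swarm leave"]
  }
}
```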


You can take this file as a boilerplate. Use it to bring up the blue Docker Swarm workers. Later, you copy this file, change the configuration, e.g. IP addresses, and bring up the green Docker Swarm workers. After you have transferred the workload from blue to green, you can destroy the blue Docker Swarm workers and prepare for the next update. Todos: You might need to put a small script on your Terraform-created Docker Swarm worker hosts to perform additional tasks after creation or before destruction. For example, the PowerDNS entry creation is a bad hack, because it deletes all entries. It would be better to have a script which performs this task after startup, from the Docker Swarm worker host's point of view.

Have fun -M

Terraform(ing) Docker hosts with LinuxKit on-premise (VMware vCenter)


This blog post does not aim to be a fully-fledged step-by-step tutorial on how to create and bootstrap a Docker Swarm cluster with VMware vCenter on-premise. Instead, it should give you an idea of what is possible and why we do it this way. There are different ways to achieve different goals, and the way we explain here is only one of them.

Today we are running around 30 Docker hosts which are installed as classic VMware virtual machines, all based on Ubuntu Linux. This means that we have to update our Docker hosts manually every three months to keep up with the current Docker CE version. Pointless work. We are provisioning our Docker hosts with Puppet, but even so, it would take time to create new Docker hosts to rotate the Docker Swarm workers. Yes, we could create a blue-green infrastructure to rotate them, but this uses computing resources if we create them beforehand or let them run all the time. If we created them on demand, every three months, it would take time to bring them up, update them, and so on. This is the point to introduce new players: Docker LinuxKit as the Docker host OS and HashiCorp Terraform for IaC (Infrastructure as Code). Yes, there is also Docker InfraKit, and we are currently evaluating it, but there would be more work to do.


LinuxKit is an operating system project started by Docker to provide a toolkit for building custom, minimal, immutable Linux distributions. The benefit you get if you choose LinuxKit is not only a custom-made Linux distribution which reflects your needs; you also get a platform which enables you to push a resulting ISO image to a VMware datastore, for example. Several cloud providers are already built into the LinuxKit toolkit.

LinuxKit example

Downloading and building LinuxKit is very well described on the LinuxKit GitHub page. After you have built the linuxkit binary, you need a YAML file that describes the composition of the LinuxKit operating system you would like to create. The example linuxkit.yml included in the sources is a good starting point, but after some work you will recognize that the simple example may not be enough. Therefore, the next lines show the basic example we start from to include some additional packages we need.
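Our full file is not shown here, but a stripped-down sketch of such a linuxkit.yml looks roughly like the following. The image tags are placeholders; pin the concrete, current versions in a real file, as discussed below:

```yaml
# Sketch of a linuxkit.yml; replace <tag> placeholders with pinned versions.
kernel:
  image: linuxkit/kernel:<tag>
  cmdline: "console=tty0"
init:
  - linuxkit/init:<tag>
  - linuxkit/runc:<tag>
  - linuxkit/containerd:<tag>
services:
  - name: open-vm-tools
    image: linuxkit/open-vm-tools:<tag>   # needed on VMware ESXi for IP reporting
  - name: sshd
    image: linuxkit/sshd:<tag>
  - name: docker
    image: docker:<tag>-dind              # a plain Docker Hub image, no special LinuxKit build
files:
  - path: root/.ssh/authorized_keys
    contents: "ssh-rsa AAAA... user@host"
```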

LinuxKit example explanation

The first thing you need to know is that LinuxKit uses containerd as the container engine to power most of the needed operating system daemons. As you can see in the linuxkit.yml, the different sections use images, known as container images, to build up the system. Important: these images use the OCI (Open Container Initiative) image standard. I will come back to this point later, so just keep this information in mind.

The first line of the linuxkit.yml takes a Linux kernel image and loads it. The file does not use any :latest tag as an image definition, and you should avoid them too, because you want to know which versions are operating in your operating system. Please do not copy the file as it is, because it will be outdated almost immediately!

After the definition of the kernel comes the init section. This section contains the things which are needed immediately, for example the containerd image. This image is responsible for the upcoming service containers.

Next in the row are the services you will probably need for your environment. We need the open-vm-tools image because we are running the resulting image on the VMware ESXi infrastructure; without the tools it would not be possible to retrieve the IP address information from the VMware vCenter, and this is a must, as we are going to build the virtual machines with Terraform later.

The SSH daemon should be self-explanatory, but you will need a root/.ssh/authorized_keys file to access the running LinuxKit OS. Therefore, look at the files section, where you can describe your configuration needs.

Now we install the Docker engine, because we would like to build up a Docker Swarm worker, and this Docker Swarm worker should join an existing Docker Swarm manager later. Important: you will notice that for the Docker engine we are using the Docker Hub image as usual, no special LinuxKit image! But how does this work? As I said before, containerd uses OCI standard images, which are not the same as Docker images. Let's have a look into it.

LinuxKit persistent disk magic

When it comes to the point that you have to start up your Docker Swarm cluster, or even a single Docker Swarm host, with LinuxKit, you will sooner or later ask yourself how to persist your Docker Swarm data, or at least the information about which Docker containers are started.

The Docker data lives inside the /var/lib/docker folder. Therefore, if you persist this folder, you persist the current state of the Docker host.

The colleagues at Docker have done their work the right way. Look at the lines where the images format and mount are loaded. These two images do the magic which enables LinuxKit to persist its data. You can look up the documentation on GitHub for details. For the impatient, here's the summary.

The format image takes the first block device it finds, and if there is no Linux partition on it, it formats it. If it finds a Linux partition, nothing happens. Very convenient! After the disk format is done (or not), the partition gets mounted via the mount image and the corresponding mountie configuration lines. Et voilà, there you go: magical persistence with LinuxKit iso-turbo-boost mode. Genius!
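In the linuxkit.yml this persistence pair could look roughly like the following (the tags are placeholders and the mount target is our /var/lib/docker use case; check the LinuxKit documentation for the exact options):

```yaml
onboot:
  # format: partitions and formats the first block device, but only
  # if it does not already carry a Linux partition
  - name: format
    image: linuxkit/format:<tag>
  # mount: mounts that partition, here under /var/lib/docker,
  # so the Docker state survives a reboot
  - name: mount
    image: linuxkit/mount:<tag>
    command: ["/usr/bin/mountie", "/var/lib/docker"]
```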

This is one of the most important features when it comes to infrastructure as code, because regardless of what you use, InfraKit or Terraform, you will eventually need some kind of persistence to get a reliable infrastructure.

LinuxKit OCI and Docker images

The really cool thing about LinuxKit is that it is a toolset. This means that during the linuxkit build command, which we will see later, the image components described by the linuxkit.yml are downloaded from Docker Hub, and afterwards the contents of the used images are transformed into OCI-compatible root filesystems. This means you can use all Docker Hub images directly, without worrying about the image format. Neat! Now you know why you need a Docker environment to build LinuxKit and to build your LinuxKit OS afterwards.

LinuxKit build your image

To build your LinuxKit ISO image, you can use the following command. We have created a separate docker.yml file to reflect our changes. The resulting ISO image will therefore automatically be named docker.iso.
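The build invocation could look like this (the exact flags depend on your LinuxKit version; check `linuxkit build --help`):

```shell
# Build a bootable ISO from docker.yml; the output is named docker.iso.
linuxkit build -format iso-bios docker.yml
```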

LinuxKit push image to VMware

After you have built your ISO image, you can push it to a VMware datastore with the following command. Important: there is a small problem in the current version of LinuxKit. You cannot push to a VMware datacenter if there are multiple hosts located in it. Thanks to Dan Finneran @thebsdbox who helped me a lot! There is already a merged GitHub PR which makes it possible to push without a problem. The PR will be included in the next version (0.2) of LinuxKit.


Sure, at some point in time we will maybe use InfraKit to get our things up and running. But as of today there are hardly any alternatives to Terraform. Terraform is Open Source software, but you can purchase an enterprise license if you need one. When you start to dig into the parts and pieces of Terraform, you will recognize that it is not the easiest piece of software, but it is incredibly powerful.

Just download Terraform from the website and unpack it. The only thing you will see after extraction is a single binary called terraform.

Before you can do anything with it, you need a configuration which describes what you would like to get. Terraform works with states: you plan a state, you apply a state, and you destroy a state. Terraform will try to keep your resources consistent. Now it comes to the config.
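The plan/apply/destroy lifecycle, run in the directory containing the configuration, looks like this:

```shell
terraform plan      # preview which resources would be created or changed
terraform apply     # create the resources, i.e. realize the planned state
terraform destroy   # tear the managed resources down again
```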

Terraform example

If you run this example with terraform plan and terraform apply, you will get three LinuxKit VMware virtual machines which save their persistent state to the hard drive.


You can use the information from this blog post to build up a basic infrastructure. But to build a running Docker Swarm cluster with managers and workers, there is more work to do. There are a few glitches which make it a little bit complicated at the moment to use the VMware customization against the running LinuxKit virtual machines, for example. It might therefore be better to go with VMware templates for now, because you will have more possibilities. Maybe we will post an update on this soon. Stay tuned!

DevOps Gathering 2018

In February 2018 I will give my first official talk at the DevOps Gathering 2018 in Bochum, Germany. Now I would like to write some lines about why I am doing it, and I would also like to recommend this conference to you. And now, here comes the story.

Peter Rossbach is one of the founders of the bee42 gmbh, and the bee42 gmbh is one of the conference organizers. Bernhard and I have known Peter for ten years now. He is one of those people who you meet and never forget. We did a lot of work together on our Apache Tomcat installations at work in the past, and therefore we always discussed a lot of things besides work. One of these things was Docker; back then it was 2013. In 2013 it was way too early for us to jump on the Docker container train. We already had containers (OpenVZ) on premise, which we implemented with Kir Kolyshkin.

We tried Docker on a regular yearly basis over the next years, but there was too much which was not working as we needed it for production use. After some years had passed this way, we started a new attempt this year. And yes, we managed to reach the point where things began to run smoothly. This year we did a lot for our developers, our company, our customers and ourselves (the Ops), and Docker was our enabler. Therefore my talk is called Docker: Ops unleashed.

But now, for me, it is time to give something back – back to the community! Therefore I have decided to give a talk, to share our experience, and to share what the motivation could be to make a change. I am going to do this on my own, privately. In the meantime, together with my colleague, I founded a Docker Meetup in southern Austria, and I am proud to now be allowed to call myself "Docker Community Leader". But there was more. DockerCon EU 17 was really great. We met a lot of great people and got a lot of insights!

Why do I always write "we"? We, because you are probably lost without a team, and a team is more than the sum of its individual skills. If you have someone to share your thoughts with, you can go even further! You might be motivated to give a talk, for example.

Finally, if you can manage to come to the [DevOps Gathering 2018](https://devops-gathering.io/) in Bochum, Germany, please do! It will be a great conference, and I am sure there will be a great audience too. And maybe we can meet there!

Have a lot of fun!


1st Meetup: General & Survey

We are planning a Docker Meetup for the next months. Therefore we have created a Google Forms poll, which you can see below. The poll is provided in German, as we currently expect only German-speaking people to come to our first Meetup. If you do not understand German but would like to attend (if you are from Italy or Slovenia, for example), please contact us! If you are interested in holding a lightning talk or sharing your Docker story (approximately 10 minutes) in English, please contact us as well. You can find our contact information in the left menu, or you can head over to our Meetup page.

We are planning a first Docker Meetup in Spittal an der Drau in the next months. The Meetup will presumably take place in the rooms of the bfi-Spittal. Due to the space situation, we have to limit the number of participants to 12! The organization of the Meetup (agenda, RSVP, …) will be handled via our Meetup page (registration required). A date for this Meetup has not been set yet, because we would first like to collect the topics which are of interest to the participants. For this reason you will find a corresponding survey below these lines, and we kindly ask you to fill it out.

The topics suggested so far come from Bernhard Rausch (CI/CD with GitLab) and me (Mario Kleinsasser, Docker 101), since these are areas we know very well from our own experience. If you have further suggestions, you are welcome to add them in the survey.

We have set the time frame for the Meetup at 2-3 hours, with the Meetup expected to start at 6:30 pm.

Thank you for filling out the survey!


Meetup location

Will be announced shortly.

Linux, Golang, govendor and Microsoft Code

Vim. Yes, I am a long-term Vim user, and I think I will still be a Vim user in a couple of years. But today I would like to show you that Microsoft Code is a really great addition for occasional Golang coders like me. Yes, I know, Vim is also a great Golang development IDE, but I often also need a visualization of a Markdown file, for example.

So let's have a look at Microsoft Code:


The installation is straightforward. Head over to the official download page and download and install the appropriate package for your operating system. In my case, this is Linux (Ubuntu, to be precise). After the installation, you can start the editor via the startup menu of your operating system, or you can just type code in your console.

If you do this and you are using a remote graphical session like X2Go or XRDP or something similar, nothing will happen, because there is currently a little problem with this setup. But you can solve it: just read this issue, and at the bottom of it you can find the (Ubuntu) solution.


After Microsoft Code opens, you should change some user settings. You can open the settings screen through the menu, or you can press Ctrl+Shift+P, which opens the quick command palette. Now start typing, for example "open settings", and select the correct entry to edit the user settings. Here are the settings which I have overridden in my environment.

First, I disabled auto save. In my opinion, this is a little bit annoying if you are writing code, as every time you type, the code is parsed immediately, which can slow down the editor performance a lot. Second, I lowered the font size. The default of 14 is too huge for me; I like to see more code on screen to follow the flow of the code more easily. go.toolsGopath is important, as it tells the Microsoft Code Golang plugin – more on this later – where to install the plugin dependencies. go.inferGopath is also important, because it tells the Golang plugin to use the currently opened folder as the GOPATH variable – this is really useful.
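In the user settings.json, these overrides might look like the following (the font size and tools path are example values, not a recommendation):

```json
{
  "files.autoSave": "off",
  "editor.fontSize": 12,
  "go.toolsGopath": "/home/user/go-tools",
  "go.inferGopath": true
}
```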


Open the extension browser (Ctrl+Shift+P – "extension install"), search for the Golang extension and install it. The installation will also install all Golang extension dependencies automatically, for example golint. All the dependencies will be installed in the path defined by go.toolsGopath.


govendor is a simple Golang solution to manage your project dependencies. You can get it from its GitHub project page. It is pretty easy to install and use. Just read through the documentation and follow the given steps :-).
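A typical govendor workflow, run inside your project folder, could look like this:

```shell
go get -u github.com/kardianos/govendor   # install govendor into your GOPATH
govendor init                             # create the vendor/ folder and vendor.json
govendor add +external                    # copy referenced external packages into vendor/
```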


If you have set up everything correctly, you will get a nice and useful editor for graphical desktop environments. Microsoft Code is no replacement for Vim or Emacs or the editor of your choice, but it is a useful and powerful addition.

Have fun!

