Contributing to Docker (or others)



Over the last weeks I was busy getting two pull requests merged into docker-machine. Here I will tell you the story of the experience I gained and why the work was worth it. I am writing this during our journey and flights to DockerCon EU 2017, because I am in a good mood, excited to be there soon, and because it is another way to support the community too.

Last summer we started to use GitLab more extensively, because we got used to the integrated CI/CD pipelines in combination with Docker. For this setup we also installed an appropriate number of GitLab runners. After a month or two we recognized that we were facing build peaks during the day, and therefore we decided to use our connection to the Microsoft Azure cloud to set up a GitLab autoscale runner.

DockerCon EU 17 break on

Yes, I really wanted to write this blog post during DockerCon EU 17, but there was too much fun and information there! It was really exciting, and therefore this post was delayed until now … 🙂 . But happily you might find a review of our DockerCon EU 17 journey and impressions here soon!

DockerCon EU 17 break off

This was the point where the problems started. The first thing I noticed was that I was not able to reach the Azure VM which is created when the GitLab autoscale runner starts. To figure out what was happening, I started the docker-machine binary and tried to create an Azure VM manually, because the GitLab autoscale runner uses docker-machine in the background.
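To reproduce the problem outside of GitLab, you can create a single test VM with the docker-machine Azure driver directly. The sketch below shows roughly what such a call looks like; the subscription ID, resource group, network names and location are placeholders, not the values from our setup.

```shell
# Create a single test VM with the docker-machine Azure driver.
# All values below are placeholders - use your own subscription,
# resource group, virtual network and subnet.
docker-machine create \
  --driver azure \
  --azure-subscription-id "xxxx-xxxx-xxxx-xxxx" \
  --azure-resource-group "gitlab-runner-rg" \
  --azure-vnet "onprem-connected-vnet" \
  --azure-subnet "runner-subnet" \
  --azure-location "westeurope" \
  test-machine

# Inspect what was created and try to reach the VM.
docker-machine ls
docker-machine ssh test-machine
```

Running the create step by hand makes it much easier to watch the Azure side (routing tables, storage accounts) while the machine comes up.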

First contribution

After a few tries and some debugging runs of docker-machine I realized that the network routing table, which is used to establish the site-to-site connection to the Azure cloud, gets deleted upon the creation of a VM in the Azure cloud. That is really bad, because it does not only interrupt the connection to the freshly created machine; it makes the whole subnet unreachable. If there are other VMs in the same subnet, they are not reachable anymore either. Ouch.

Open source is great! Why? You can look at the code! And yes, even if you are not a full-time programmer – DO IT! I am an ops guy, and I was able to do this too! OK, to be fair, we ops people are often forced to solve problems on our own; especially if your main work is with Linux, you learn that you can handle a lot of tasks in a more elegant manner if you just write a script, or two… (= code).

So yes, after some digging, I found the place in the code to change, and I changed it for our on-premise usage and tests. After some time I thought it would be useful to others, so I filed an issue and also put in a corresponding pull request.

I had already learned to use Git, but through this pull request I learned a lot more! Thanks to Joffrey, who was very kind and supported me; after some additional work I was able to get my pull request merged. No more deleted routing table entries!

Second contribution

But the story does not end here 🙂 – during our on-premise tests we also recognized that the Azure storage accounts of deleted VMs are not deleted with them! After some days of running the GitLab autoscale runner, we had messed up our Azure resource group with lots of orphaned storage accounts. Not nice 😉

I guess you know what is coming now? Correct: filing an issue. But wait! Always check whether there is already an issue filed for a problem! And yes, there was one already. So once more I changed the code for on-premise use, tested it and opened a pull request. And again Joffrey was so kind to help me with my questions. After a while, this pull request was merged too, and hopefully it helps someone out there.


Yes, you can help others! There are plenty of things you can contribute to the community, not only code. You can also support others by filing issues or writing documentation for projects you use (in no way limited to Docker). There was also a great talk at DockerCon EU 17 held by Ashley McNamara on this topic. To quote one of her slides and to end this blog post:

We are a Community
of coders. But if
all we do is code
then we've lost the community.

Blog picture information

The earth seen from Apollo 17.

New development release of our border controller


In the last few days we updated the development branch of our border controller. You can look up the latest information about it on GitHub; be sure to choose the edge branch. The changelog contains information about the latest changes.

We are really impressed that the border controller has been downloaded 2,700 times from Docker Hub to date – hooray!

On-premise GitLab with autoscale docker-machine Microsoft Azure runners


This post is about the experience we gained last week while we were expanding our on-premise Gitlab installation with autoscale Docker machine runners located in the Microsoft Azure cloud.


We have been running a quite large GitLab installation for our development colleagues for about five years. Since March this year we have also been using our GitLab installation for CI/CD/CD (continuous integration/continuous delivery/continuous deployment) with Docker. Since our developers started to love the flexibility and the power of GitLab in combination with Docker, the number of build jobs is continuously rising. After approximately six months we have run nearly 7,000 pipelines and nearly 15,000 jobs.

Some of the pipelines run quite long, for example Maven builds or multi Docker image builds (microservices). Therefore we were running out of local on-premise GitLab runners. To be fair, this would not have been a huge problem, because we have a really huge VMware environment on site, but we wanted to test the GitLab autoscaling feature in a real-world, real-life environment.

Our company is a Microsoft enterprise customer, and therefore we had the possibility to test these things in an environment that is a little bit different from the usual one.

Cloud differences

As mentioned beforehand, we have a more sophisticated on-premise cloud integration. Currently we have a site-to-site connection to Microsoft. Therefore we are able to use the Microsoft Azure cloud as if it were an offsite office reachable over the WAN (wide area network).

GitLab autoscale runner configuration

At first glance we just followed the instructions from the GitLab documentation. The documentation is quite sufficient, especially together with the corresponding docker+machine documentation for the Azure driver.

Important: Persist the docker+machine data! Otherwise, every time the Docker GitLab runner container restarts, your information about the created cloud VMs will be lost!

GitLab runner configuration

Forward, we have to go back!

At this point, the troubles were starting.

GitLab autoscale runner Microsoft Azure authentication

We decided to run the GitLab autoscale runner with the official GitLab runner Docker image, as we do with all our other runners. After the basic GitLab runner configuration (as provided in the documentation), the docker+machine driver will try to connect to the Microsoft Azure API and start to authenticate the given subscription. Because we run the GitLab runner as a Docker container, you have to view the container logs to see the Microsoft login instructions.

Important: The Microsoft Azure cloud login information is stored inside a hidden folder in the /root directory of the GitLab runner container! You should really persist the /root folder, or you will have to authenticate every time you start the runner!

Important: Use the MACHINE_STORAGE_PATH environment variable to control where docker-machine stores the virtual machine inventory! Otherwise you will lose it every time the container restarts!
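Putting the two warnings together, a runner container start could look like the following sketch. The host paths and the MACHINE_STORAGE_PATH value are examples, not taken from our actual setup; the point is that the runner config, /root (Azure login tokens) and the machine storage path all live on the host.

```shell
# Sketch: run the official gitlab-runner image with persisted state.
# Host paths below are examples - adapt them to your environment.
docker run -d --name gitlab-autoscale-runner --restart always \
  -e MACHINE_STORAGE_PATH=/srv/docker-machine \
  -v /srv/gitlab-runner/config:/etc/gitlab-runner \
  -v /srv/gitlab-runner/root:/root \
  -v /srv/docker-machine:/srv/docker-machine \
  gitlab/gitlab-runner:latest
```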

If you miss this step, you will have to re-login. You will notice the problem if you look at the log messages of the container: every five seconds you will see a login attempt. Each login attempt is only valid for a short time period, so every time you would have to enter a new login code online. You can work around this dead-lock situation by opening an interactive shell into the Docker GitLab autoscale runner and entering the docker-machine ls command. This command will wait until you have entered the provided code online.
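The workaround boils down to a single command; the container name is an example:

```shell
# Trigger the interactive Azure device login inside the runner container.
# docker-machine ls blocks until the printed code has been entered online.
docker exec -it gitlab-autoscale-runner docker-machine ls
```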

Docker GitLab autoscale swarm configuration

Important: This is a swarm compose file!
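The compose file itself did not survive the migration of this post. The following is a minimal reconstruction of what a swarm deployment of the runner could look like under the constraints discussed above; all paths and names are assumptions.

```yaml
version: "3.0"
services:
  runner:
    image: gitlab/gitlab-runner:latest
    environment:
      # Keep the docker-machine inventory on a persisted path
      - MACHINE_STORAGE_PATH=/srv/docker-machine
    volumes:
      - /srv/gitlab-runner/config:/etc/gitlab-runner
      - /srv/gitlab-runner/root:/root
      - /srv/docker-machine:/srv/docker-machine
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.role == manager
```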

Microsoft Azure cloud routing table is deleted

Because we connect to the Microsoft Azure cloud from within our WAN, there are special routing tables configured in the Microsoft Azure cloud: the network traffic has to find its way to the cloud and back from it to our site. After we started the first virtual machine we recognized that we could not connect to it; there was no chance to establish an SSH connection. After some research we found out that the routing table had been deleted. After some more tests and drilling down into the docker+machine source code, we were able to pin the problem down to one Microsoft Azure API function. To get rid of the problem we wrote a small patch.

The pull request is available at GitHub, and you can also find all further information there. If you like, please vote the patch up.

Microsoft Azure cloud storage accounts are not deleted

After we sailed around the first problem, we immediately hit the next one. The GitLab autoscale runner does what it is meant to do: it creates virtual machines and deletes them. But the storage accounts are left over. As a result, your Microsoft Azure resource group gets messed up with orphaned storage accounts.

Yes, you guessed it: I wrote the next patch and submitted a pull request at GitHub. You can read up on all the details there. If you like, please vote the patch up there :). Thank you.


After a lot of hacking, we are proud to have a fully working GitLab autoscale runner configuration with Microsoft Azure on-premise integration up and running. Regarding the issues found, we have also contacted our Microsoft enterprise cloud consultant to let Microsoft know that there might be a bug in the Microsoft Azure API.

Blog picture image

The blog picture of this post shows the Sojourner rover from the NASA Pathfinder mission. In my opinion an appropriate picture, because you never know what you will have to face in uncharted territory.

SSL offloading – Nginx – Apache Tomcat – showcase

Regarding our post covering SSL offloading with Nginx and Apache Tomcat, we decided to create a small showcase project. You will find the showcase project under the following link. The project should clarify all configuration tasks needed to get up and running with Nginx and Apache Tomcat. Furthermore, you will find extensive documentation there. If you have questions, you can open an issue at the GitHub project. Have fun -M

Docker TOTD

If you edit a Docker bind-mounted file (-v sourcefile:destinationfile) you may have recognized that you can face a stale file handle under certain circumstances, especially if you edit the file with Vim on your Docker host.

This is because Vim copies the content of the original file to a new one, and after you save your changes, Vim exchanges the two files. The result of this operation is that the inode of the file changes.

Docker uses the inode of the file for the bind mount, and therefore, correctly, the file handle becomes stale after this operation.
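You can observe the inode change without Vim. The copy-and-rename dance below mimics Vim's default write strategy, while the in-place redirect mimics what backupcopy=yes does:

```shell
# Mimic Vim's default save (write a new file, then rename it over the old one):
echo "hello" > demo.txt
before=$(stat -c %i demo.txt)
cp demo.txt demo.txt.tmp
echo "edited" > demo.txt.tmp
mv demo.txt.tmp demo.txt          # the rename replaces the inode

after_rename=$(stat -c %i demo.txt)

# Mimic backupcopy=yes (write the existing file in place):
echo "edited again" > demo.txt    # truncate + write keeps the inode
after_inplace=$(stat -c %i demo.txt)

echo "$before $after_rename $after_inplace"
rm -f demo.txt
```

The first inode differs from the second, while the in-place write keeps the inode stable — which is exactly why the bind mount survives only the second style of editing.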


Just open Vim without specifying the file. Afterwards type :set backupcopy=yes and open the file you would like to edit with :e yourfile. With this option you edit the original file handle and not a copy.


Github issue


Docker Endeavor – Episode 3 – Orbit

Challenger Orbit


It’s been two months since the last Docker Endeavor post, but we weren’t lazy! On the contrary, we built a lot of new stuff, changed a lot of things, and of course we learned a lot too! In between I passed my master exam, and therefore the last two months were really busy. Besides this, Bernhard and I met Niclas Mietz, a fellow of our old colleague Peter Rossbach from Bee42. We met Niclas because we had booked a GitLab CI/CD workshop in Munich (in June) – and funnily enough, Bernhard and I were the only ones attending this workshop! Therefore we had a really good time with Niclas, because we had the chance to ask him everything we wanted to know specifically for our needs!
Thanks to Bee42 and the DevOps Gathering for mentioning us on Twitter – what a motivation to go on with this blog!
Also, Kir Kolyshkin, one of the container fathers, whom we met in 2009, is now working as a developer for Docker. We are very proud to know him!

Review from the last episode

In the last episode we talked about our ingress-controller, the border-controller and the docker-controller. For now we have cancelled the docker-controller and the ingress-controller, because they added too much complexity, and we managed to get up and running with a border-controller in conjunction with externally created swarm networks and Docker Swarm internal DNS lookups.

GitLab CI/CD/CD

Yes, we are going further! Our current production environment is still powered by our workhorse OpenVZ. But we are now also providing a handful of Docker Swarm services in production, development & staging. To get both CI (continuous integration) and CD/CD (continuous delivery / continuous deployment) up and running, we decided to support three basic strategies.

  • First, we use GitLab to create deployment setups for our department, DevOps. We have just transitioned our Apache Tomcat setup to an automatic Docker image build powered by GitLab. Based on this, we created a transition repository where the developer can place his or her .war package. This file then gets bundled with our Docker Tomcat image built beforehand, pushed to our private Docker registry, and afterwards deployed to the Docker Swarm. KISS – keep it simple & stupid.

  • Second, the developers of our development department use GitLab, including the GitLab runners, to build a full CI pipeline with Sonar, QF-GUI tests, Maven and so on.

  • Third, we have projects which combine both the CI and the CD/CD mechanisms, for production and testing/staging environments.
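For the first strategy, a strongly simplified .gitlab-ci.yml of such a transition repository could look like the sketch below. The registry name, image tag and deploy command are illustrative assumptions, not our actual pipeline:

```yaml
stages:
  - build
  - deploy

build-image:
  stage: build
  script:
    # Bundle the .war placed in this repository with the prebuilt Tomcat image
    - docker build -t registry.example.com/tomcat-app:latest .
    - docker push registry.example.com/tomcat-app:latest

deploy-swarm:
  stage: deploy
  script:
    # Roll the swarm service over to the freshly pushed image
    - docker service update --image registry.example.com/tomcat-app:latest tomcat-app
```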


Update of the border-controller

Our border-controller now only uses the Docker internal Swarm DNS service to configure the backends. We do not use the docker-controller anymore; that project of ours is therefore deprecated. Furthermore, in the latest development version of the border-controller I have included the possibility to send the border-controller IP address to a PowerDNS server (via its API). Thanks to our colleague Ilia Bakulin from Russia, who is part of my team now! He did a lot of research and supported us in getting this service up and running. We will need it in the future for dynamic DNS changes. If you are interested in this project, just have a look at our GitHub project site or directly use our border-controller Docker image from Docker Hub. Please be patient, we are DevOps, not developers. 🙂
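For illustration, registering an A record through the PowerDNS HTTP API looks roughly like the call below. The server name, zone, record name, address and API key are placeholders, not our real infrastructure:

```shell
# Replace (or create) the A record for the border-controller via the
# PowerDNS API. Server, zone, name, IP and key below are placeholders.
curl -s -X PATCH \
  -H "X-API-Key: ${PDNS_API_KEY}" \
  -d '{
        "rrsets": [{
          "name": "border.example.com.",
          "type": "A",
          "ttl": 60,
          "changetype": "REPLACE",
          "records": [{"content": "", "disabled": false}]
        }]
      }' \
  "http://pdns.example.com:8081/api/v1/servers/localhost/zones/example.com."
```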

Currently we are not using Traefik for the border-controller, for two main reasons.

  • First, our Nginx-based border-controller does not need to run on a Docker Swarm manager node, because we are not using the Docker socket interface with it. Instead we use the built-in Docker Swarm DNS service discovery to get the IP addresses for the backend configuration. This also implies that we don’t need to mount the Docker socket into the border-controller.

  • Second, in the latest version the border-controller is able to use the PowerDNS API to automatically register the load balancer’s IP address and DNS name in the PowerDNS system. That is important from the user’s point of view, because normally they use a domain name in the browser.


Actual Docker Swarm state

Currently we run approximately 155 containers.


In this blog post we talked about CI/CD/CD pipelines and strategies with GitLab and our own border-controller based on Nginx. In addition we gave you some information on what we did over the last two months.


The blog headline picture shows the Space Shuttle Challenger in orbit during the STS-7 mission (22 June 1983; NASA photo STS07-32-1702).


Nginx Reverse Proxy with SSL offloading and Apache Tomcat backends

Nginx SSL offloading

In our current native Docker environment, we are using Nginx as our border controller (link) to manage the traffic and the (sticky) user sessions for our Apache Tomcat servers. But together with our developers we found out that there is a major problem with HTTPS encryption on Nginx when using the Apache Tomcat HTTP connector as the backend interface.

The problem

If Apache Tomcat is not configured correctly (server.xml and web.xml), some of the redirect links created automatically by Apache Tomcat itself will still point to http resource URLs. This leads to double requests and of course to a broken application if you are using a modern browser like Chrome (insecure content in a secure context).

The solution(s)

Apache Tomcat server.xml

You have to modify the Apache Tomcat server.xml and add the parameters scheme="https", secure="true" and proxyPort="443". Afterwards your HTTP connector setting should look like the following code, and the request object in Apache Tomcat will carry the correct scheme.
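The connector snippet referenced here is missing from the archived post; a connector with the mentioned parameters looks like this (the port numbers are the Tomcat defaults):

```xml
<!-- HTTP connector behind an SSL-offloading reverse proxy -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           scheme="https" secure="true"
           proxyPort="443" />
```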


Usually you will enable the X-Forwarded-* headers in the Nginx configuration. On the backend you can then retrieve the headers inside your Java code (in the case of Apache Tomcat). But that would be the manual way to do it. To support these headers out of the box, you can add a filter to your web.xml; afterwards the x-forwarded-proto header is automatically reflected in the Apache Tomcat request object. Here is the needed part of the web.xml.
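The web.xml fragment is likewise missing from the archive. Tomcat ships a RemoteIpFilter for exactly this purpose, which can be wired in like so:

```xml
<!-- Honor X-Forwarded-Proto (and X-Forwarded-For) set by Nginx -->
<filter>
  <filter-name>RemoteIpFilter</filter-name>
  <filter-class>org.apache.catalina.filters.RemoteIpFilter</filter-class>
  <init-param>
    <param-name>protocolHeader</param-name>
    <param-value>x-forwarded-proto</param-value>
  </init-param>
</filter>
<filter-mapping>
  <filter-name>RemoteIpFilter</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>
```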


After some research we figured out how to configure Apache Tomcat to work seamlessly with Nginx as a reverse proxy in front of Apache Tomcat backends.

Lenovo Y50-70 replace keyboard

I didn’t find a full tutorial on how to do it, so I decided to post it here. It was my first try and it took me about 3 hours last night.

Step 1 – Order a replacement

I started from … and ordered it at …

y50-70 the new keyboard

Step 2 – Follow this tutorial

Step 3 – Take some time (a bottle of wine) and go on like this

I marked the places to work on with the blue tools…

y50-70 under the motherboard

y50-70 speakers

Remove the black foil

y50-70 remove the black foil

y50-70 remove the power cable

y50-70 remove the last screw

y50-70 ahhrrrr

y50-70 ahrr

y50-70 got it

y50-70 too early

y50-70 great joy

y50-70 and all the way back

y50-70 and on and on

y50-70 foil is back

… and all the way back.

Traefik Ingress Controller for Docker Swarm Overlay Network Routing Mesh including sticky sessions


This post covers the problematic topic of how to realize sticky sessions in a Docker Swarm overlay network setup.


Well, the first thing you have to know is that a deployed Docker stack which starts a couple of containers (services) will usually also start up an overlay network that provides an intercommunication layer for the stack’s services. At first sight that may not seem very useful if you only have one service in your Docker stack compose file, but it becomes very useful if you have more than one service inside your compose file.

Docker swarm compose

Before we can dive into the problem with the Docker overlay network routing mesh in the case of sticky sessions, we need some information about the Docker stack mechanism. Before the Docker stack mechanism rose up (roughly before Docker Engine 17.x-ce) there was (and is) Docker Compose. If you are not using a Docker swarm, you will still need and use docker-compose when you want to start up a Docker service on your single Docker host. When we talk about Docker Swarm, we are talking about a larger number of Docker hosts, greater than one. When you need a Docker service started on a Docker swarm, you use, for example, the command docker stack deploy. This command takes the same input YAML file as docker-compose does, with additional possible configuration options. You can read more about it here. The current config language version is 3.0, but newer versions are already in the pipeline as the Docker Engine version gets updated.

Docker compose example

The following example shows you a fully working Docker stack compose file, including all relevant information to deploy a Docker stack with an application service and an ingress controller service (based on Traefik).
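The embedded compose file did not survive the migration of this post. The sketch below reconstructs it from the description that follows: the echohttp image, the port mappings and the Traefik v1 label names are taken from the text, while everything else — including the exact line layout the text refers to — is an assumption.

```yaml
version: "3.0"
services:
  app:
    image: n0r1skcom/echohttp:latest
    deploy:
      replicas: 2
      labels:
        # Labels read by Traefik to register the containers as backends
        - "traefik.port=8080"
        - "traefik.backend.loadbalancer.sticky=true"
        # Essential: tell Traefik which overlay network to use
        - "traefik.docker.network=mystack_default"
  lb:
    image: traefik:latest
    command: --docker --docker.swarmmode --docker.watch --web
    ports:
      - "25580:80"    # HTTP entrypoint
      - "25581:8080"  # Traefik dashboard
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      placement:
        constraints:
          - node.role == manager
networks:
  default:
    driver: overlay
```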

You have to deploy this compose YAML file exactly with the command docker stack deploy -c compose.yml mystack. The reason why is explained in the next section. You have to read the next section to understand what is going on here – THE EXAMPLE WILL NOT WORK WITHOUT MODIFICATIONS – READ THE NEXT SECTION. The next section also gives you a lot of background information about the compose details, and these details are essential!

Traefik ingress controller

If you want to run the compose file shown above, you have to modify it at one point. The Traefik ingress controller is specified in the lb service section of the compose file, and you have to change the placement constraint. If you are running the example on a single Docker host which has Docker Swarm enabled, you can delete the whole placement part; otherwise you have to define a valid Docker Swarm manager or leader. You can find these settings between lines 41 and 43 of the above Docker stack compose file.

After you have changed this setting, you can deploy the Docker stack compose file with the following command: docker stack deploy -c compose.yml mystack. You have to use mystack as the stack name, because this name is used in line 18 of the Docker stack compose file above. There you see the entry - "". The first part is derived from the mystack name we specified when running the docker stack command; the second part comes from the network section of the Docker compose file, which you see between lines 47 and 49.

You can also see this naming if you run the docker stack deploy command. Here is the full output of the deploy command:

Now we check whether our deployed stack is running. We can do this with the command docker stack ps mystack. The output is shown as follows:

OK, it seems our stack is running. We have two app containers running from the image n0r1skcom/echohttp:latest, a simple image built by us to quickly get basic HTTP request/response information. We will see the usage of this in a second. Furthermore, a load balancer based on traefik:latest is up and running. As you can see in the Docker stack compose file above, we did not specify any exposed ports for the application containers. These containers run a Golang HTTP server on port 8080, but it is not possible to reach them from the outside network directly. We can only call them through the deployed Traefik load balancer, which we exposed on ports 25580 (the port 80 mapping of Traefik) and 25581 (the dashboard port of Traefik); see lines 29-31. Now we take a look at whether we can reach the dashboard of Traefik. Open a web browser and point it to the IP address of one of your Docker hosts with the given port, for example http://:25581. It will work with any of the Docker hosts due to the Docker overlay network routing mesh! I started this Docker stack on a local Docker host, therefore I will point my browser there. You should see the following screenshot:

Traefik Dashboard

And wow! This needs some explanation. First, on the right-hand side of the screenshot you can see the backends that Traefik is using for our service. But wait, where are they coming from? Traefik uses the /var/run/docker.sock Docker interface, as specified in lines 32 and 33 of the Docker compose file. This is the reason why the Traefik load balancer has to run on a Docker Swarm manager or leader, because only these Docker hosts can provide the needed swarm information. Furthermore, the app containers need special labels. These labels are defined in lines 16 to 20; there we label our app containers so the Traefik load balancer finds them and can use them as backends. To get this working, line number 20 is essential – without this line, Traefik will not add the container as a backend! Now all lines of the Docker compose file are explained.

Last but not least, we should check whether the cookie-based sticky session ingress load balancing is working. To do this, open up a browser and enter the URL of the exposed Traefik HTTP port, for example http://:25580. I will use the local host once again, and you should see the following output:

HTTP output

On the left-hand side of the screenshot you can see the output of our n0r1skcom/echohttp:latest container. It shows the hostname of the container you are connected to. In this case the container got the dynamic hostname df78eb066abb, and you can also see the local IP address of the container; that IP address is the VIP (virtual IP) of the Docker overlay network mesh. On the right-hand side of the screenshot you can see the Chrome developer console, which shows the load balancing cookie we received from the Traefik load balancer; this cookie shows the backend we are bound to. Congratulations! Now you can press CTRL+r as often as you like: within this browser session you are nailed to that backend by the sticky cookie.

You can test the opposite behavior with curl, because curl fires a new request every time and does not present the cookie. Here is the example output:
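A quick way to watch the alternating backends is a small curl loop. The host and port below match my local test setup and will differ on yours, and the grep assumes the echohttp response contains the container hostname, as described above:

```shell
# Without a cookie jar, every request is new to Traefik,
# so the requests round-robin across the app backends.
for i in 1 2 3 4; do
  curl -s http://localhost:25580/ | grep -i hostname
done
```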

As you can see, you alternate between the started backends. Great! Now we can scale our cluster to, let’s say, five backends. This can be done with the command docker service scale mystack_app=5, with the following output including docker stack ps mystack:

Now that we have five backends, we can check this via the Traefik dashboard at http://:25581:

Traefik scaled service

Congratulations once again! You have dynamically scaled your service and you still have session stickiness. You can check whether all backends are responding via the curl command from above.

Graphic about what you have built

The following graphic shows more than we built today; we will describe the border controller (load balancer) in one of the follow-up posts!



This is the first comprehensive hitchhiker’s guide to the Traefik ingress controller for the Docker Swarm overlay network routing mesh including sticky sessions. The information shown in this post is a summary of many sources, including GitHub issues, and also a lot of trial and (catch) error. If you have any further questions, do not hesitate to contact us! Leave a comment if you like; you are welcome!