Docker Endeavor – Episode 3 – Orbit

Challenger Orbit

General

It’s been two months since the last Docker Endeavor post, but we weren’t lazy! On the contrary, we built a lot of new stuff, changed a lot of things, and of course learned a lot too! In between I passed my master’s exam, so the last two months were really busy. Besides that, Bernhard and I met Niclas Mietz, a fellow of our old colleague Peter Rossbach from Bee42. We met Niclas because we booked a Gitlab CI/CD workshop in Munich (in June) – and funnily enough, Bernhard and I were the only ones attending! So we had a really good time with Niclas, because we had the chance to ask him everything we wanted to know specifically for our needs!
Thanks to Bee42 and the DevOps Gathering for mentioning us on Twitter – what a motivation to keep this blog going!
Also, Kir Kolyshkin, one of the container fathers, whom we met back in 2009, is now working as a developer for Docker. We are very proud to know him!

Review from the last episode

In the last episode we talked about our ingress-controller, the border-controller, and the docker-controller. For now we have dropped the docker-controller and the ingress-controller because they added too much complexity; we managed to get everything up and running with a border-controller in conjunction with externally created swarm networks and Docker Swarm internal DNS lookups.

Gitlab CI/CD/CD

Yes, we are going further! Our current productive environment is still powered by our workhorse OpenVZ, but we are now also providing a handful of Docker Swarm services in production, development, and staging. To get both CI (continuous integration) and CD/CD (continuous delivery / continuous deployment) up and running, we decided to support three basic strategies.

  • At first, we use Gitlab to create deployment setups for our department, DevOps. We’ve just transitioned our Apache Tomcat setup to an automatic Docker image build powered by Gitlab. Based on this we created a transition repository where the developer can place his or her .war package. This file then gets bundled with our Docker Tomcat image, built beforehand, pushed to our private Docker registry, and afterwards deployed to the Docker Swarm (see the sketch after this list). KISS – keep it simple & stupid.

  • Second, our development department uses Gitlab including the Gitlab runners to build a full CI pipeline, including Sonar, QF-GUI tests, Maven, and so on.

  • Third, we have projects which combine both the CI and the CD/CD mechanisms, for production and testing/staging environments.
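
To make the first strategy more concrete, here is a minimal sketch of what such a pipeline could look like. This is not our actual configuration: the registry address, image name, and service name are hypothetical, and the Dockerfile in the transition repository is assumed to simply copy the .war into our prebuilt Tomcat image.

    # .gitlab-ci.yml in the transition repository (sketch)
    stages:
      - build
      - deploy

    build_image:
      stage: build
      script:
        # bundle the developer's .war with the prebuilt Tomcat base image
        # (docker login to the registry is assumed to be handled elsewhere)
        - docker build -t registry.example.com/devops/tomcat-app:$CI_COMMIT_SHA .
        - docker push registry.example.com/devops/tomcat-app:$CI_COMMIT_SHA

    deploy_swarm:
      stage: deploy
      script:
        # roll the new image out to the running Docker Swarm service
        - docker service update --image registry.example.com/devops/tomcat-app:$CI_COMMIT_SHA tomcat_app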

Gitlab-CI-CD-Overview

Update of the border-controller

Our border-controller now only uses the Docker internal Swarm DNS service to configure the backends. We do not use the docker-controller anymore, so that project of ours is deprecated. Furthermore, in the latest development version of the border-controller I’ve included the possibility to send the border-controller IP address to a PowerDNS server (via its API). Thanks to our colleague Ilia Bakulin from Russia, who is now part of my team! He did a lot of research and helped us get this service up and running. We will need it in the future for dynamic DNS changes. If you are interested in this project, just have a look at our Github project site or directly use our border-controller Docker image from DockerHub. Please be patient, we are DevOps, not developers. 🙂
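
For illustration, registering the border-controller’s IP address via the PowerDNS HTTP API boils down to a single PATCH request against the zone. The host name, zone, API key, and address below are made up; the URL scheme is the standard PowerDNS 4.x API:

    curl -s -X PATCH \
      -H 'X-API-Key: changeme' \
      --data '{"rrsets": [{"name": "border.example.com.", "type": "A", "ttl": 60,
                           "changetype": "REPLACE",
                           "records": [{"content": "192.0.2.10", "disabled": false}]}]}' \
      http://pdns.example.com:8081/api/v1/servers/localhost/zones/example.com.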

Currently we are not using Traefik for the border-controller, for two main reasons.

  • First, our Nginx based border-controller does not need to run on a Docker Swarm manager node, because we are not using the Docker socket interface with it. Instead we use the built-in Docker Swarm DNS service discovery to get the IP addresses for the backend configuration (see the sketch after this list). This also implies that we don’t need to mount the Docker socket into the border-controller.

  • Second, in the latest version the border-controller is able to use the PowerDNS API to automatically register the load balancer’s IP address and DNS name in the PowerDNS system. That is important from the user’s point of view, because normally they use a domain name in the browser.
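
To illustrate the first point, the general pattern of DNS-based backend discovery in Nginx looks roughly like the snippet below. This is a sketch, not the actual border-controller configuration; the service name tasks.mystack_app and the port are placeholders:

    # Use the swarm's embedded DNS server and re-resolve regularly,
    # so scaling events are picked up without a reload
    resolver 127.0.0.11 valid=10s;

    server {
        listen 80;

        location / {
            # "tasks.<service>" resolves to the IPs of all tasks of a service;
            # using a variable forces Nginx to resolve at request time
            set $backend "tasks.mystack_app";
            proxy_pass http://$backend:8080;
        }
    }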

Border-Controller-Overview

Current Docker Swarm state

Currently we run approximately 155 containers.

Summary

In this blog post we talked about CI/CD/CD pipelines and strategies with Gitlab and our own border-controller based on Nginx. In addition, we gave you some information on what we did during the last two months.

Orbit

The blog headline picture shows the Space Shuttle Challenger in orbit during the STS-7 mission (22 June 1983, NASA photo STS07-32-1702).

Challenger Orbit

Traefik Ingress Controller for Docker Swarm Overlay Network Routing Mesh including sticky sessions

Intro

This post covers the tricky topic of how to realize sticky sessions in a Docker swarm overlay network setup.

General

Well, the first thing you have to know is that a deployed Docker stack which starts a couple of containers (services) will usually also start up an overlay network that provides an intercommunication layer for the services of this stack. At first sight that may not be very useful if you only have one service in your Docker stack compose file, but it becomes very useful if you have more than one service inside your compose file.

Docker swarm compose

Before we can dive into the problem with the Docker overlay network routing mesh when sticky sessions are needed, we need some information about the Docker stack mechanism. Before the Docker stack mechanism appeared (roughly before Docker engine 17.x-ce) there was (and is) Docker compose. If you are not using a Docker swarm, you will still need and use docker-compose when you want to start up a Docker service on your single Docker host. When we talk about Docker swarm, we are talking about more than one Docker host. When you need a Docker service started on a Docker swarm, you have to use the command docker stack deploy, for example. This command uses the same input yaml file as docker-compose does, with additional configuration options. You can read more about it here. The current compose file format version is 3.0, but newer versions are already in the pipeline as the Docker engine version gets updated.
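
As a quick illustration (with a hypothetical compose.yml), the two commands look like this:

    # single Docker host, no swarm: classic docker-compose
    docker-compose -f compose.yml up -d

    # Docker swarm: deploy the same yaml file as a stack named "mystack"
    docker stack deploy -c compose.yml mystack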

Docker compose example

The following example shows you a fully working Docker stack compose file, including all relevant information to deploy a Docker stack with an application service and an ingress controller service (based on Traefik).
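
A minimal sketch of such a compose file is shown below, assuming Traefik 1.x. The image names, published ports, network name, and manager placement are taken from the description in the following sections; the exact Traefik flags and labels are assumptions, and the line numbers referenced further down refer to the original file, not to this sketch.

    version: "3"

    services:
      app:
        image: n0r1skcom/echohttp:latest
        networks:
          - net
        deploy:
          replicas: 2
          labels:
            # labels read by Traefik to register the tasks as backends
            - "traefik.docker.network=mystack_net"
            - "traefik.backend.loadbalancer.sticky=true"
            - "traefik.port=8080"

      lb:
        image: traefik:latest
        command: --docker --docker.swarmmode --docker.watch --web
        ports:
          - "25580:80"
          - "25581:8080"
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
        networks:
          - net
        deploy:
          placement:
            constraints:
              - node.role == manager

    networks:
      net:
        driver: overlay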

You have to deploy this compose yaml file exactly with the command: docker stack deploy -c compose.yml mystack. The reason for this is explained in the next section. Read the next section before trying it – THE EXAMPLE WILL NOT WORK WITHOUT MODIFICATIONS. The next section also gives you a lot of background information about the compose details, and these details are essential!

Traefik ingress controller

If you want to run the compose file shown above, you have to modify it at one point. The Traefik ingress controller is specified in the lb service section of the compose file, and you have to change the placement constraint. If you are running the example on a single Docker host which has Docker swarm enabled, you can delete the whole placement part; otherwise you have to define a valid Docker swarm manager or leader node. You can find these settings between lines 41 and 43 of the above Docker stack compose file.

After you have changed this setting, you can deploy the Docker stack compose file with the following command: docker stack deploy -c compose.yml mystack. You have to use mystack as the stack name, because this name is used in line 18 of the Docker stack compose file above. There you see the entry - "traefik.docker.network=mystack_net". The first part of that network name comes from the mystack name we specified when running the docker stack command; the second part comes from the network section of the Docker compose file, which you see between lines 47 and 49.

You can also see this naming if you run the docker stack deploy command. Here is the full output from the deploy command:

Now we check whether our deployed stack is running. We can do this with the command: docker stack ps mystack. The output is shown as follows:

OK, it seems that our stack is running. We have two app containers running from the image n0r1skcom/echohttp:latest, a simple image built by us to quickly get basic HTTP request/response information. We will see it in use in a second. Furthermore, a loadbalancer based on traefik:latest is up and running. As you can see in the Docker stack compose file above, we did not specify any exposed ports for the application containers. These containers run a Golang HTTP server on port 8080, but it is not possible to reach them from the outside network directly. We can only call them through the deployed Traefik loadbalancer, which we exposed on ports 25580 (the port 80 mapping of Traefik) and 25581 (the dashboard port of Traefik). See lines 29-31.

Now we take a look at whether we can reach the dashboard of Traefik. Open a web browser and point it to the IP address of one of your Docker hosts with the given port, for example http://<docker-host-ip>:25581. It will work with any of the Docker hosts thanks to the Docker overlay network routing mesh! I’ve started this Docker stack on a local Docker host, therefore I will point my browser to http://127.0.0.1:25581. You should see the following screenshot:

Traefik Dashboard

And wow! Now this needs some explanation. First, on the right hand side of the screenshot you will see the backends that Traefik is using for our service. But wait, where are they coming from? Traefik uses the /var/run/docker.sock Docker interface, which is specified in lines 32 and 33 of the Docker compose file. This is the reason why the Traefik loadbalancer has to run on a Docker swarm manager or leader: only these Docker hosts can provide the needed Docker swarm information. Furthermore, the app containers need special labels. These labels are defined in lines 16 to 20; there we label our app containers so the Traefik loadbalancer finds them and can use them as backends. To get this working, line 20 is essential – without this line, Traefik will not add the container as a backend! Now all lines of the Docker compose file are explained.

Last but not least, we should check whether the sticky sessions based on cookie ingress loadbalancing are working. To do this, open up a browser and enter the URL of the exposed Traefik HTTP port, for example http://<docker-host-ip>:25580. I will once again use http://127.0.0.1:25580, and you should see the following output:

HTTP output

On the left hand side of the screenshot you can see the output from our n0r1skcom/echohttp:latest container. It shows the hostname of the container you are connected to. In this case the container got the dynamic hostname df78eb066abb, and the local IP address of this container is 10.0.0.3. The IP address 10.0.0.2/32 is the VIP (virtual IP) of the Docker overlay network mesh. On the right hand side of the screenshot you can see the Chrome developer console, which shows the loadbalancing cookie we received from the Traefik loadbalancer; this cookie shows that we are bound to the 10.0.0.3 backend. Congratulations! Now you can press Ctrl+R as often as you like – with this browser session, you are nailed to the 10.0.0.3 backend by this sticky cookie.

You can test the opposite behavior with curl, because curl fires a new request every time and does not send the cookie back. Here is the example output:
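
If you want to reproduce this yourself, a quick test could look like the following (assuming the stack is published on 127.0.0.1:25580 as above):

    # without a cookie jar every request starts a new session,
    # so the responses alternate between the backends
    curl -s http://127.0.0.1:25580
    curl -s http://127.0.0.1:25580

    # with a cookie jar curl sends the sticky cookie back
    # and stays on the same backend
    curl -s -c /tmp/cookies.txt -b /tmp/cookies.txt http://127.0.0.1:25580
    curl -s -c /tmp/cookies.txt -b /tmp/cookies.txt http://127.0.0.1:25580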

As you can see, you are alternating between the started backends. Great! Now we can scale our service to, let’s say, five backends. This can be done with the command: docker service scale mystack_app=5, with the following output including docker stack ps mystack:

Now that we have five backends, we can check this with the Traefik dashboard at http://<docker-host-ip>:25581:

Traefik scaled service

Congratulations once again! You have dynamically scaled your service and you still have session stickiness. You can check whether all backends are responding via the curl command from above.

Graphic about what you have built

The following graphic shows more than we built today, but we will describe the border controller (loadbalancer) in one of the follow-up posts!

DockerSwarmController

Summary

This is the first comprehensive hitchhiker’s guide to the Traefik Ingress Controller for the Docker Swarm overlay network routing mesh, including sticky sessions. The information shown in this post is a summary of many sources, including Github issues, and also a lot of trial and error (try and catch, if you like). If you have any further questions, do not hesitate to contact us! Leave a comment if you like, you are welcome!

GitHub and animated gifs…

For our n0r1skcom/echo DockerHub image we wanted to add a gif (see above) with console output to the corresponding GitHub project README.

But that wasn’t as easy as we thought, because GitHub caches images with atmos/camo, and that brings some problems with bigger GIFs…

So we had to disable image caching via the HTTP headers of our source images, but these images are located in our WordPress media library and we didn’t want to disable image caching in general.

The solution for us was to configure the serving webserver (in our case Apache) to set some caching/expiry headers via a LocationMatch directive and a fancy regex.
Our regex matches all pictures with the filename prefix “nocache_”, so every other uploaded image isn’t touched in any way.

Apache configuration sample
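
A minimal sketch of such a configuration, assuming mod_headers is enabled (the upload path, file extensions, and exact header values are assumptions):

    <LocationMatch "/wp-content/uploads/.*nocache_.*\.(gif|png|jpg)$">
        # tell GitHub's camo proxy (and browsers) not to cache these images
        Header set Cache-Control "no-cache, no-store, must-revalidate"
        Header set Pragma "no-cache"
        Header set Expires "0"
    </LocationMatch>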

WordPress SSL (https) and Reverse Proxy (Nginx, Apache httpd)

As you can see, this blog is accessible through SSL (https) encryption only. Normally this is not a huge problem, but WordPress is a little bit clunky when it comes to a setup that also includes a reverse proxy.

General

The following text is a summary of several pages that can be found on the internet but often lack information. The WordPress blog that you are currently reading runs on an Apache httpd on localhost. In front of it, there is a second Apache httpd which acts as a reverse proxy for different tasks. One of these tasks is to offload SSL (https) encryption.

WordPress installation

In the described setup you should first install the WordPress software via http (port 80) without SSL. If you enable SSL at this point, chances are good that you will end up in a redirect loop.

Configure SSL (https)

On the reverse proxy, configure SSL as usual, but be aware that you have to set RequestHeader set X-Forwarded-Proto "https" inside the SSL virtual host! This is important because otherwise the URLs generated by WordPress will be http links and you will get browser warnings later. Do not force a permanent redirect from http to https at this point, or you will not be able to install the necessary WordPress plugin which takes care of your URLs.

After you have enabled basic https support, install the WordPress extension SSL Insecure Content Fixer and configure it to use the X-Forwarded-Proto header. Afterwards you have to modify the wp-config.php to reflect these settings. If you want to use Jetpack, you also have to specify SERVER_PORT, otherwise you will receive an error message on wordpress.com during the configuration of your social media connections (“There was an error retrieving your site settings.”). You also have to force admin SSL usage.

Hopefully this will help some people out there to get this up and running. If this config does not help you, leave a comment!

-M

Apache http reverse proxy config
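
A minimal sketch of the SSL-offloading virtual host could look like this; the hostname, certificate paths, and backend address are assumptions, and mod_ssl, mod_proxy_http, and mod_headers must be enabled:

    <VirtualHost *:443>
        ServerName blog.example.com

        SSLEngine on
        SSLCertificateFile    /etc/ssl/certs/blog.example.com.crt
        SSLCertificateKeyFile /etc/ssl/private/blog.example.com.key

        # tell WordPress that the original request came in via https
        RequestHeader set X-Forwarded-Proto "https"

        ProxyPreserveHost On
        ProxyPass        / http://127.0.0.1:8080/
        ProxyPassReverse / http://127.0.0.1:8080/
    </VirtualHost>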

Nginx reverse proxy (in a Docker environment)
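
A comparable Nginx server block, for example when the proxy runs as a Docker container in front of a WordPress container, could look like this (hostname, certificate paths, and the upstream name wordpress are assumptions):

    server {
        listen 443 ssl;
        server_name blog.example.com;

        ssl_certificate     /etc/nginx/certs/blog.example.com.crt;
        ssl_certificate_key /etc/nginx/certs/blog.example.com.key;

        location / {
            # "wordpress" is the name of the WordPress container/service
            proxy_pass http://wordpress:80;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-Proto https;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }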

WordPress wp-config.php
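
The wp-config.php additions described above amount to something like the following sketch (the exact snippet recommended by the SSL Insecure Content Fixer plugin may differ slightly):

    <?php
    // if the reverse proxy terminated SSL, tell WordPress about it
    if (isset($_SERVER['HTTP_X_FORWARDED_PROTO']) && $_SERVER['HTTP_X_FORWARDED_PROTO'] === 'https') {
        $_SERVER['HTTPS'] = 'on';
        // Jetpack needs the original port, otherwise wordpress.com reports
        // "There was an error retrieving your site settings."
        $_SERVER['SERVER_PORT'] = 443;
    }

    // force SSL for wp-admin / wp-login
    define('FORCE_SSL_ADMIN', true);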