It’s been two months since the last Docker Endeavor post, but we weren’t lazy! On the contrary, we built a lot of new stuff, changed a lot of things, and of course learned a lot too! In between I passed my master exam, so the last two months were really busy. Besides this, Bernhard and I met Niclas Mietz, a fellow of our old colleague Peter Rossbach from Bee42. We met Niclas because we booked a Gitlab CI/CD workshop in Munich (in June) – and funnily enough, Bernhard and I were the only ones attending! So we had a really good time with Niclas, because we had the chance to ask him everything we wanted to know specifically for our needs!
Thanks to Bee42 and the DevOps Gathering for mentioning us on Twitter – what a motivation to go on with this blog!
Also, Kir Kolyshkin, one of the container fathers, whom we met back in 2009, is now working as a developer for Docker. We are very proud to know him!
Review from the last episode
In the last episode we talked about our ingress-controller, the border-controller, and the docker-controller. For now we have dropped the docker-controller and the ingress-controller because they added too much complexity, and we managed to get everything up and running with a border-controller in conjunction with externally created Swarm networks and Docker Swarm internal DNS lookups.
Yes, we are going further! Our current production environment is still powered by our workhorse OpenVZ, but we are now also providing a handful of Docker Swarm services in production, development, and staging. To get both CI (continuous integration) and CD (continuous delivery / continuous deployment) up and running, we decided to support three basic strategies.
- First, we use Gitlab to create deployment setups for our department, DevOps. We’ve just transitioned our Apache Tomcat setup to an automatic Docker image build powered by Gitlab. On top of this we created a transition repository where a developer can place his or her .war package. This file then gets bundled with our Docker Tomcat image, built beforehand, pushed to our private Docker registry, and finally deployed to the Docker Swarm. KISS – keep it simple, stupid.
Second, the developers of our development department use Gitlab including the Gitlab runners to build a full CI pipeline, including Sonar, QF-GUI tests, Maven and so on.
Third, we have projects which combine both the CI and the CD mechanisms, for production and testing/staging environments.
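The first strategy could be sketched as a minimal .gitlab-ci.yml like the following. This is just an illustration – the registry URL, image name, and service name are assumptions, not our actual setup:

```yaml
stages:
  - build
  - deploy

build-image:
  stage: build
  script:
    # bundle the developer's .war with our prebuilt Tomcat base image
    - docker build -t registry.example.com/devops/tomcat-app:$CI_COMMIT_SHORT_SHA .
    # push the result to the private Docker registry
    - docker push registry.example.com/devops/tomcat-app:$CI_COMMIT_SHORT_SHA

deploy-swarm:
  stage: deploy
  script:
    # roll the Swarm service over to the freshly pushed image
    - docker service update --image registry.example.com/devops/tomcat-app:$CI_COMMIT_SHORT_SHA tomcat-app
```

The accompanying Dockerfile would simply `COPY` the .war into the webapps directory of the Tomcat base image – that is the whole trick behind "keep it simple, stupid".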
Update of the border-controller
Our border-controller now uses only the Docker internal Swarm DNS service to configure the backends. We do not use the docker-controller anymore, so that project of ours is deprecated. Furthermore, in the latest development version of the border-controller I’ve included the possibility to send the border-controller’s IP address to a PowerDNS server (via its API). Thanks to our colleague Ilia Bakulin from Russia, who is now part of my team! He did a lot of research and helped us get this service up and running. We will need it in the future for dynamic DNS changes. If you are interested in this project, have a look at our Github project site or directly use our border-controller Docker image from DockerHub. Please be patient, we are DevOps, not developers. 🙂
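To illustrate the DNS-based backend configuration: inside a Swarm overlay network, Docker’s embedded DNS server (reachable at 127.0.0.11) resolves tasks.&lt;service-name&gt; to the IP addresses of all running tasks of that service. A minimal Nginx sketch of this idea (service name and port are assumptions) could look like this:

```nginx
# Docker's embedded DNS server; short TTL so scaled services are picked up
resolver 127.0.0.11 valid=10s;

server {
    listen 80;

    location / {
        # tasks.tomcat resolves to every task IP of the "tomcat" Swarm service;
        # using a variable forces Nginx to re-resolve at request time
        set $backend "tasks.tomcat";
        proxy_pass http://$backend:8080;
    }
}
```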
Currently we are not using Traefik for the border-controller; for us there are two main reasons against it.
Actual Docker Swarm state
Currently we run approximately 155 containers.
In this blog post we talked about CI/CD pipelines and strategies with Gitlab and about our own border-controller based on Nginx. In addition we gave you some information on what we did during the last two months.
The blog headline picture shows the Space Shuttle Challenger in orbit during mission STS-7 (NASA photo STS07-32-1702, 22 June 1983).
Nginx SSL offloading
In our current native Docker environment, we are using Nginx as our border controller to handle the traffic and manage sticky user sessions for our Apache Tomcat servers. But together with our developers we found out that there is a major problem with https encryption on Nginx when using the Apache Tomcat http connector as the backend interface.
If Apache Tomcat is not configured correctly (server.xml and web.xml), some of the redirect links automatically created by Apache Tomcat itself will still point to http resource URLs. This leads to double requests and, of course, to a broken application if you are using a modern browser like Chrome (insecure content in a secure context).
Apache Tomcat server.xml
You have to modify the Apache Tomcat server.xml to add the parameters scheme="https", secure="true", and proxyPort="443". Afterwards your http connector setting should look like the following code, and the request object inside Apache Tomcat will carry the correct scheme.
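The referenced connector snippet is missing from this extract, so here is a minimal sketch of what the http connector in server.xml could look like (the port numbers are assumptions):

```xml
<!-- HTTP connector behind an SSL-offloading reverse proxy -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           scheme="https"
           secure="true"
           proxyPort="443" />
```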
Usually you will enable the x-forwarded-for and x-forwarded-proto headers in the Nginx configuration. On the backend you could then retrieve the headers inside your Java code (in the case of Apache Tomcat), but that would be a manual way to do it. To be compatible with these headers out of the box, you can add a filter to your web.xml. Afterwards the x-forwarded-proto header will automatically be evaluated and reflected in the Apache Tomcat request object. Here is the needed part of the web.xml.
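The web.xml fragment itself is not included in this extract. Tomcat ships a RemoteIpFilter for exactly this purpose, so the filter could look roughly like this sketch (not necessarily the original configuration):

```xml
<!-- evaluate X-Forwarded-For / X-Forwarded-Proto sent by the reverse proxy -->
<filter>
    <filter-name>RemoteIpFilter</filter-name>
    <filter-class>org.apache.catalina.filters.RemoteIpFilter</filter-class>
    <init-param>
        <param-name>protocolHeader</param-name>
        <param-value>x-forwarded-proto</param-value>
    </init-param>
</filter>
<filter-mapping>
    <filter-name>RemoteIpFilter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>
```

With this in place, request.getScheme() and request.isSecure() report https whenever the proxy sets the header, and Tomcat generates correct redirect links.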
After some research we figured out how to configure Apache Tomcat backends to work seamlessly behind Nginx as a reverse proxy.
As you can see, this blog is accessible through SSL (https) encryption only. Normally this is not a huge problem, but WordPress is a little bit clunky when it comes to a setup that also includes a reverse proxy.
The following text is a summary of several pages which can be found on the internet but often lack information. The WordPress blog that you are currently reading is running on an Apache httpd on localhost. In front of it, there is a second Apache httpd which acts as a reverse proxy for different tasks. One of these tasks is to offload SSL (https) encryption.
In the described setup you should first install the WordPress software over plain http (port 80) without SSL. If you enable SSL right away, chances are good that you will end up in a redirect loop.
Configure SSL (https)
On the reverse proxy, configure SSL as usual, but be aware that you have to set RequestHeader set X-Forwarded-Proto "https" inside the SSL virtual host! This is important because otherwise the URLs generated by WordPress will be http links and you will get browser warnings later. Do not force a permanent redirect from http to https at this point, or you will not be able to install the necessary WordPress plugin that takes care of your URLs.
After you have enabled basic https support, install the WordPress plugin SSL Insecure Content Fixer and configure it to use the X-Forwarded-Proto header. Afterwards you have to modify wp-config.php to reflect these settings. If you want to use Jetpack, you also have to specify SERVER_PORT, otherwise you will receive an error message on wordpress.com during the configuration of your social media connections (“There was an error retrieving your site settings.”). You also have to force admin SSL usage.
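The wp-config.php part could look roughly like the following sketch (an assumption based on our description above, adapt it to your own setup):

```php
<?php
// detect https that was terminated at the reverse proxy via X-Forwarded-Proto
if (isset($_SERVER['HTTP_X_FORWARDED_PROTO'])
    && $_SERVER['HTTP_X_FORWARDED_PROTO'] === 'https') {
    $_SERVER['HTTPS'] = 'on';
}

// force SSL for wp-admin and logins
define('FORCE_SSL_ADMIN', true);
```

Together with the SERVER_PORT setting mentioned above, this makes WordPress (and Jetpack) believe it is served over https even though the local httpd only speaks http.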
Hopefully this will help some people out there to get this up and running. If this config does not help you, leave a comment!
Apache http reverse proxy config
RequestHeader set X-Forwarded-Proto "https"
SSLProtocol ALL -SSLv2 -SSLv3
Deny from ALL
ProxyPass /server-status !
ProxyPass / http://127.0.0.1:8880/
ProxyPassReverse / http://127.0.0.1:8880/
Redirect permanent / https://www.n0r1sk.com/
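The directives above belong to two different virtual hosts (the plain http vhost only redirects, the SSL vhost offloads and proxies). An assembled sketch could look like this – the certificate paths are assumptions, and "Deny from ALL" is written in the newer Require syntax:

```apache
# http vhost: redirect everything to https
<VirtualHost *:80>
    ServerName www.n0r1sk.com
    Redirect permanent / https://www.n0r1sk.com/
</VirtualHost>

# https vhost: SSL offloading, proxy to the local backend httpd
<VirtualHost *:443>
    ServerName www.n0r1sk.com
    SSLEngine on
    SSLProtocol ALL -SSLv2 -SSLv3
    SSLCertificateFile /etc/ssl/certs/n0r1sk.pem
    SSLCertificateKeyFile /etc/ssl/private/n0r1sk.key

    # tell WordPress that the original request was https
    RequestHeader set X-Forwarded-Proto "https"

    # keep server-status local, proxy everything else to the backend
    <Location /server-status>
        Require local
    </Location>
    ProxyPass /server-status !
    ProxyPass / http://127.0.0.1:8880/
    ProxyPassReverse / http://127.0.0.1:8880/
</VirtualHost>
```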
Nginx reverse proxy
We don’t use Nginx for this at the moment, but it should work in the same manner. Just be sure that the X-Forwarded-Proto header is submitted by the reverse proxy to the backend.
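In Nginx the equivalent could look like this sketch (server name and backend address are assumptions):

```nginx
server {
    listen 443 ssl;
    server_name www.n0r1sk.com;

    location / {
        proxy_pass http://127.0.0.1:8880;
        # forward the original scheme so WordPress generates https URLs
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }
}
```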
$_SERVER['SERVER_PORT'] = 443;