On-Premise Gitlab with autoscale Docker machine Microsoft Azure runners

Sojourner

This post is about the experience we gained last week while we were expanding our on-premise Gitlab installation with autoscale Docker machine runners located in the Microsoft Azure cloud.

Preface

We have been running a fairly large Gitlab installation for our development colleagues for about five years. Since March of this year we have also been using it for CI/CD/CD (continuous integration/continuous delivery/continuous deployment) with Docker. Because our developers have come to love the flexibility and power of Gitlab in combination with Docker, the number of build jobs keeps rising. After approximately six months we have run nearly 7000 pipelines and nearly 15000 jobs.

Some of the pipelines run quite long, for example Maven builds or multi Docker image builds (microservices). Therefore we are running out of local on-premise Gitlab runners. To be fair, this would not be a huge problem because we have a really huge VMware environment on site, but we wanted to test the Gitlab autoscaling feature in a real-world, real-life environment.

Our company is a Microsoft enterprise customer, and therefore we had the opportunity to test these things in an environment that is a little different from the usual one.

Cloud differences

As mentioned above, we have a more sophisticated on-premise cloud integration. Currently we have a site-to-site connection to Microsoft. Therefore we are able to use the Microsoft Azure cloud as if it were an offsite office reachable over the WAN (wide area network).

Gitlab autoscale runner configuration

At first glance we just followed the instructions from the Gitlab documentation. The documentation is largely sufficient, especially together with the corresponding docker+machine documentation for the Azure driver.

Important: Persist the docker+machine data! Otherwise, every time the Docker Gitlab runner container restarts, the information about the created cloud virtual machines will be lost!

Gitlab runner configuration
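A minimal sketch of what the relevant part of such a config.toml can look like with the Azure driver (all values below are placeholders, not our production settings):

```toml
# Illustrative sketch only - see the Gitlab and docker+machine docs for the
# full list of Azure driver options.
concurrent = 10

[[runners]]
  name = "azure-autoscale-runner"
  url = "https://gitlab.example.com/"
  token = "RUNNER_TOKEN"
  executor = "docker+machine"
  [runners.docker]
    image = "alpine:latest"
  [runners.machine]
    IdleCount = 2                      # number of idle VMs kept ready
    IdleTime = 1800                    # seconds before an idle VM is removed
    MaxBuilds = 10                     # builds per VM before it is recycled
    MachineDriver = "azure"
    MachineName = "gitlab-runner-%s"   # %s is replaced with a unique machine name
    MachineOptions = [
      "azure-subscription-id=00000000-0000-0000-0000-000000000000",
      "azure-resource-group=gitlab-runners",
      "azure-location=westeurope",
      "azure-size=Standard_D2_v2"
    ]
```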

Forward, we have to go back!

At this point, the trouble started.

Gitlab autoscale runner Microsoft Azure authentication

We decided to run the Gitlab autoscaling runner with the official Gitlab runner Docker image, as we do with all our other runners. After the basic Gitlab runner configuration (as provided in the documentation), the docker+machine driver will try to connect to the Microsoft Azure API and start to authenticate the given subscription. Because we run the runner as a Docker container, you have to check the logs to see the Microsoft login instructions.

Important: The Microsoft Azure cloud login information is stored inside a hidden folder in the /root directory of the Gitlab runner container! You should really persist the /root folder, or you will have to authenticate every time you start the runner!

Important: Use the MACHINE_STORAGE_PATH environment variable to control where docker-machine stores the virtual machine inventory! Otherwise you will lose it every time the container restarts!

If you miss this step, you will have to log in again. You will notice the problem if you look at the log messages of the container: every five seconds you will see a login attempt. Each attempt is only valid for a short time period, and every time you would have to enter a new login code online. You can work around this dead-lock situation by opening an interactive shell into the Docker Gitlab autoscale runner and entering the docker-machine ls command. This command will wait until you have entered the provided code online.
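In practice the workaround looks roughly like this (the container name is a placeholder):

```sh
# Follow the runner logs to catch the Azure device-login instructions
docker logs -f <runner-container>

# Open an interactive shell in the runner container and list the machines;
# docker-machine will block until the shown device code has been entered online
docker exec -it <runner-container> bash
docker-machine ls
```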

Docker Gitlab autoscale swarm configuration

Important: This is a swarm compose file!
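A rough sketch of such a swarm service definition, with the persistence hints from above applied (paths, hostnames and the storage path are assumptions):

```yaml
version: "3"

services:
  runner:
    image: gitlab/gitlab-runner:latest
    environment:
      # Tell docker-machine where to keep its VM inventory ...
      - MACHINE_STORAGE_PATH=/root/.docker/machine
    volumes:
      # ... and persist it together with the Azure login data in /root
      - /srv/gitlab-runner/root:/root
      # Persist the runner configuration (config.toml)
      - /srv/gitlab-runner/config:/etc/gitlab-runner
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.hostname == docker-host-a   # placeholder constraint
```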

Microsoft Azure cloud routing table is deleted

Because we connect to the Microsoft Azure cloud from within our WAN, there are special routing tables configured in the Azure cloud; the network traffic has to find its way to the cloud and back from it to our site. After we started the first virtual machine we realized that we could not connect to it. There was no chance to establish an ssh connection. After some research we found out that the routing table had been deleted. After some more tests and drilling down into the docker+machine source code, we were able to pin the problem down to one Microsoft Azure API function. To get rid of the problem we wrote a small patch.

The pull request is available on Github, where you can also find all the details. If you like, please vote the patch up.

Microsoft Azure cloud storage accounts are not deleted

After we sailed around the first problem, we immediately hit the next one. The Gitlab autoscale runner does what it is meant to do: it creates virtual machines and deletes them. But the storage accounts are left over. As a result, your Microsoft Azure resource group gets messed up with orphaned storage accounts.

Yes, you guessed it, I wrote the next patch and submitted a pull request on Github. You can read up on all the details there. If you like, please vote the patch up there :). Thank you.

Conclusion

After a lot of hacking, we are proud to have a fully working Gitlab autoscale runner configuration with Microsoft Azure on-premise integration up and running. Regarding the issues found, we have also contacted our Microsoft enterprise cloud consultant to let Microsoft know that there might be a bug in the Microsoft Azure API.

Blog picture image

The blog picture of this post shows the Sojourner rover from the NASA Pathfinder mission. In my opinion an appropriate picture, because you never know what you will have to face in uncharted territory.

SSL offloading – NGinx – Apache Tomcat – showcase

Following up on our post covering SSL – NGinx – Apache Tomcat offloading, we decided to create a small showcase project. You will find it under the following link: https://github.com/n0r1sk/https-nginx-tomcat. This project should clarify all configuration tasks needed to get up and running with Nginx and Apache Tomcat. Furthermore, you will find extensive documentation there. If you have questions, you can open an issue at the Github project. Have fun -M

Docker: Enabling mailing in php:apache (running WordPress)

Starting from

When using the php:apache image from https://hub.docker.com/_/php/, mail support is not enabled. On my Docker host, postfix is already enabled and configured to accept and forward mails from my Docker containers, so I should be able to send mails.

When I decided to install WordPress by myself, I ended up with this Dockerfile and did the configuration using a volume mounted over /var/www/html:
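A minimal sketch of what such a Dockerfile boils down to (the extension list is an assumption):

```dockerfile
# Sketch only: the stock image plus the PHP extension WordPress needs
FROM php:apache

# WordPress needs at least the mysqli extension
RUN docker-php-ext-install mysqli

# The WordPress files themselves live in a volume mounted over /var/www/html
```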

At the end I realized that mails are not working. Php’s answer when asking for the sendmail_path is this:

And the PHP mail() result is this:

The manual approach first:

Entering the container and installing sSMTP (the container name wordpress below is just a placeholder):
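```sh
# Open a shell in the running container ("wordpress" is an assumption)
docker exec -it wordpress bash

# Inside the container: install sSMTP as a lightweight sendmail replacement
apt-get update && apt-get install -y ssmtp
```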

and then configuring it. The ssmtp.conf essentially just points sSMTP at the postfix relay on the Docker host (the values below are examples, not my exact file):
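```ini
# /etc/ssmtp/ssmtp.conf - example values; the mailhub must point to the host
# running postfix as seen from inside the container (here the docker0 gateway)
root=postmaster@example.com
mailhub=172.17.0.1
rewriteDomain=example.com
FromLineOverride=YES
```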

Changing the sendmail_path for PHP is easy, as there is no php.ini yet; a drop-in file in the conf.d directory of the official image is enough:
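```sh
# Inside the container: point PHP's mail() at sSMTP via an extra ini file
echo 'sendmail_path = "/usr/sbin/ssmtp -t"' > /usr/local/etc/php/conf.d/mail.ini
```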

After restarting Apache (for example by simply restarting the container):
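```sh
# Restart from the Docker host so the new ini file is picked up
# ("wordpress" is again a placeholder container name)
docker restart wordpress
```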

PHP responds with a valid sendmail_path, which can be checked like this:
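```sh
# Quick check from inside the container
php -r 'echo ini_get("sendmail_path"), PHP_EOL;'
```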

Now PHP mail() and also WordPress mailing are working fine ;-).

The new Dockerfile:
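A sketch of how it can look (the baseline from above plus the mail bits; package names and paths may differ in your setup):

```dockerfile
FROM php:apache

# WordPress prerequisite plus sSMTP for outgoing mail
RUN apt-get update \
 && apt-get install -y --no-install-recommends ssmtp \
 && rm -rf /var/lib/apt/lists/* \
 && docker-php-ext-install mysqli

# Bake in the sSMTP configuration shown above
COPY ssmtp.conf /etc/ssmtp/ssmtp.conf

# Point PHP's mail() at sSMTP
RUN echo 'sendmail_path = "/usr/sbin/ssmtp -t"' > /usr/local/etc/php/conf.d/mail.ini
```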

You need to have the above ssmtp.conf file to build the image.

Docker Endeavor – Episode 3 – Orbit

Challenger Orbit

General

It’s been two months since the last Docker Endeavor post, but we weren’t lazy! On the contrary, we built a lot of new stuff, changed a lot of things and of course learned a lot too! In between I passed my master exam, and therefore the last two months were really busy. Besides that, Bernhard and I met Niclas Mietz, a fellow of our old colleague Peter Rossbach from Bee42. We met Niclas because we booked a Gitlab CI/CD workshop in Munich (in June) – and funnily enough, Bernhard and I were the only ones attending this workshop! Therefore we had a really good time with Niclas, because we had the chance to ask him everything we wanted to know specifically for our needs!
Thanks to Bee42 and the DevOps Gathering for mentioning us on Twitter – what a motivation to go on with this blog!
Also, one of the fathers of containers, Kir Kolyshkin, whom we met in 2009, is now working as a developer for Docker. We are very proud to know him!

Review from the last episode

In the last episode we talked about our ingress-controller, the border-controller and the docker-controller. For now we have dropped the docker-controller and the ingress-controller because they added too much complexity, and we managed to get up and running with a border-controller in conjunction with externally created swarm networks and Docker Swarm internal DNS lookups.

Gitlab CI/CD/CD

Yes, we are going further! Our current productive environment is still powered by our work horse OpenVZ, but we are now also providing a handful of Docker Swarm services in production, development and staging. To get both CI (continuous integration) and CD/CD (continuous delivery / continuous deployment) up and running, we decided to support three basic strategies.

  • First, we use Gitlab to create deployment setups for our department, DevOps. We have just transitioned our Apache Tomcat setup to an automatic Docker image build powered by Gitlab. Based on this we created a transition repository where the developer can place his or her .war package. This file then gets bundled with our pre-built Docker Tomcat image, pushed to our private Docker registry and afterwards deployed to the Docker Swarm (a sketch of such a pipeline follows this list). KISS – keep it simple & stupid.

  • Second, the developers of our development department use Gitlab including the Gitlab runners to build a full CI pipeline, including Sonar, QF-GUI tests, Maven and so on.

  • Third, we have projects which combine both the CI and the CD/CD mechanisms, for productive and testing/staging environments.
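To make the first strategy a bit more concrete, here is a rough sketch of what such a deployment pipeline can look like in a .gitlab-ci.yml. The image names, registry URL and deploy command are illustrative assumptions, not our actual pipeline:

```yaml
stages:
  - build
  - deploy

build-image:
  stage: build
  script:
    # Bundle the developer's .war with the pre-built Tomcat base image
    - docker build -t registry.example.com/apps/myapp:$CI_COMMIT_SHA .
    - docker push registry.example.com/apps/myapp:$CI_COMMIT_SHA

deploy-swarm:
  stage: deploy
  when: manual
  environment: production
  script:
    # Roll the swarm service over to the freshly pushed image
    - docker service update --image registry.example.com/apps/myapp:$CI_COMMIT_SHA myapp
```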

Gitlab-CI-CD-Overview

Update of the border-controller

Our border-controller now only uses the Docker internal Swarm DNS service to configure the backends. We do not use the docker-controller anymore, so that project of ours is deprecated. Furthermore, in the latest development version of the border-controller I’ve included the possibility to send the border-controller IP address to a PowerDNS server (via its API). Thanks to our colleague Ilia Bakulin from Russia, who is part of my team now! He did a lot of research and helped us get this service up and running. We will need it in the future for dynamic DNS changes. If you are interested in this project, just have a look at our Github project site or directly use our border-controller Docker image from DockerHub. Please be patient, we are DevOps, not developers. 🙂

Currently we are not using Traefik for the border-controller, for two main reasons.

  • First, our Nginx based border-controller does not need to run on a Docker Swarm manager node, because we are not using the Docker socket interface with it. Instead we are using the built-in Docker Swarm DNS service discovery to get the IP addresses for the backend configuration (see the sketch after this list). This also implies that we don’t need to mount the Docker socket into the border-controller.

  • Second, in the latest version the border-controller is able to use the PowerDNS API to automatically register the load balancer’s IP address and DNS name in the PowerDNS system. That is important from the user’s point of view, because normally they use a domain name in the browser.
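To illustrate the first point: inside any container attached to the swarm network, the service tasks can be resolved via the embedded Docker DNS server, which is all the border-controller needs to build its backend configuration. A sketch of the idea in plain Nginx terms (service name and port are assumptions):

```nginx
# Sketch only - the real border-controller generates its config from these lookups
server {
    listen 80;

    # 127.0.0.11 is the embedded DNS server Docker provides inside containers
    resolver 127.0.0.11 valid=10s;

    location / {
        # Using a variable forces Nginx to re-resolve the name at runtime;
        # "tasks.myapp" returns the IPs of all tasks of the swarm service
        set $swarm_backend tasks.myapp;
        proxy_pass http://$swarm_backend:8080;
    }
}
```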

Border-Controller-Overview

Current Docker Swarm state

Currently we run approximately 155 containers.

Summary

In this post we talked about CI/CD/CD pipelines and strategies with Gitlab and our own border-controller based on Nginx. In addition we gave you some information about what we did over the last two months.

Orbit

The blog headline picture shows the Space Shuttle Challenger in orbit during the STS-7 mission (photo STS07-32-1702, 22 June 1983).


Traefik Ingress Controller for Docker Swarm Overlay Network Routing Mesh including sticky sessions

Intro

This post covers the tricky topic of how to realize sticky sessions in a Docker swarm overlay network setup.

General

Well, the first thing you have to know is that a deployed Docker stack which starts a couple of containers (services) will usually also start up an overlay network that provides an intercommunication layer for this stack’s services. At first sight that may not seem very useful if you only have one service in your Docker stack compose file, but it becomes very useful if you have more than one service inside your compose file.

Docker swarm compose

Before we can dive into the problem with the Docker overlay network routing mesh when sticky sessions are needed, we need some information about the Docker stack mechanism. Before the Docker stack mechanism appeared (roughly before Docker engine 17.x-ce) there was (and is) Docker compose. If you are not using a Docker swarm, you will still need and use docker-compose when you want to start up a Docker service on your single Docker host. When we talk about Docker swarm, we are talking about more than one Docker host. When you need a Docker service started on a Docker swarm, you have to use the command docker stack deploy, for example. This command uses the same input yaml file as docker-compose does, with additional possible configuration options. You can read more about it here. The current compose file format version is 3.0, but newer versions are already in the pipeline as the Docker engine version gets updated.

Docker compose example

The following example shows you a fully working Docker stack compose file, including all relevant information to deploy a Docker stack consisting of an application service and an ingress controller service (based on Traefik).
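Roughly, the file has the following structure (a hedged sketch using Traefik 1.x flags and labels; the ports and the network/label names follow the description below, everything else is an assumption, and the line numbers quoted in the next sections refer to the complete original file):

```yaml
version: "3"

services:
  app:
    image: n0r1skcom/echohttp:latest
    networks:
      - net
    deploy:
      replicas: 2
      labels:
        # Traefik (1.x) service discovery labels
        - "traefik.docker.network=mystack_net"        # network Traefik uses to reach the backend
        - "traefik.backend.loadbalancer.sticky=true"  # enable sticky sessions via cookie
        - "traefik.frontend.rule=PathPrefix:/"        # route everything to this backend
        - "traefik.port=8080"                         # without a port label Traefik ignores the service

  lb:
    image: traefik:latest
    command:
      - "--docker"
      - "--docker.swarmmode"
      - "--docker.watch"
      - "--web"              # enables the dashboard on port 8080
    ports:
      - "25580:80"           # HTTP entrypoint
      - "25581:8080"         # Traefik dashboard
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock   # Traefik reads swarm state from the Docker socket
    networks:
      - net
    deploy:
      placement:
        constraints:
          - node.role == manager   # socket access requires a manager node

networks:
  net:
    driver: overlay
```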

You have to deploy this compose yaml file exactly with the command: docker stack deploy -c compose.yml mystack. The reason why is explained in the next section. You have to read the next section to understand what is going on here – THE EXAMPLE WILL NOT WORK WITHOUT MODIFICATIONS – READ THE NEXT SECTION. The next section also gives you a lot of background information about the compose details, and these details are essential!

Traefik ingress controller

If you want to run the compose file shown above, you have to modify it at one point. The Traefik ingress controller is specified in the lb service section of the compose file, and you have to change the placement constraint. If you are running the example on a single Docker host which has Docker swarm enabled, you can delete the whole placement part; otherwise you have to define a valid Docker swarm manager or leader. You can find these settings between lines 41 and 43 of the Docker stack compose file.

After you have changed this setting, you can deploy the Docker stack compose file with the following command: docker stack deploy -c compose.yml mystack. You have to use mystack as the stack name, because this name is used in line 18 of the Docker stack compose file. There you see the entry - "traefik.docker.network=mystack_net". The first part of the value comes from the mystack name we specified when running the docker stack command; the second part comes from the network section of the Docker compose file, which you see between lines 47 and 49.

You can also see this naming if you run the docker stack deploy command. Here is the full output of the deploy command:

Now we check whether our deployed stack is running. We can do this with the command: docker stack ps mystack. The output looks as follows:

OK, it seems our stack is running. We have two app containers running from the image n0r1skcom/echohttp:latest, a simple image built by us to quickly get basic http request/response information. We will see it in use in a second. Furthermore, a loadbalancer based on traefik:latest is up and running. As you can see in the Docker stack compose file above, we did not specify any exposed ports for the application containers. These containers run a Go http server on port 8080, but it is not possible to reach them from the outside network directly. We can only call them through the deployed Traefik loadbalancer, which we exposed on ports 25580 (the port 80 mapping of Traefik) and 25581 (the dashboard port of Traefik). See lines 29-31. Now we take a look at whether we can reach the dashboard of Traefik. Open a web browser and point it to the ip address of one of your Docker hosts with the given port, for example http://<docker-host>:25581. It will work with any of the Docker hosts thanks to the Docker overlay network routing mesh! I’ve started this Docker stack on a local Docker host, therefore I will point my browser to http://127.0.0.1:25581. You should see the following screenshot:

Traefik Dashboard

And wow! Now this needs some explanation. First, on the right hand side of the screenshot you will see the backends that Traefik is using for our service. But wait, where do they come from? Traefik uses the /var/run/docker.sock Docker interface. This is specified in lines 32 and 33 of the Docker compose file. This is the reason why the Traefik loadbalancer has to run on a Docker swarm manager or leader, because only these Docker hosts can provide the needed Docker swarm information. Furthermore, the app containers need special labels. These labels are defined in lines 16 to 20. There we label our app containers so that the Traefik loadbalancer finds them and can use them as backends. To get this working, line number 20 is essential – without this line, Traefik will not add the container as a backend! Now all lines of the Docker compose file are explained.

Last but not least, we should check whether the cookie-based sticky session ingress loadbalancing is working. To do this, open up a browser and enter the URL of the exposed Traefik http port, for example http://<docker-host>:25580. I will once again use http://127.0.0.1:25580, and you should see the following output:

HTTP output

On the left hand side of the screenshot you can see the output of our n0r1skcom/echohttp:latest container. It shows the hostname of the container you are connected to. In this case the container got the dynamic hostname df78eb066abb, and the local ip address of this container is 10.0.0.3. The ip address 10.0.0.2/32 is the VIP (virtual ip) of the Docker overlay network mesh. On the right hand side of the screenshot you can see the Chrome developer console, which shows the loadbalancing cookie we received from the Traefik loadbalancer; this cookie shows that we are bound to the 10.0.0.3 backend. Congratulations! Now you can press CTRL+R as often as you like; within this browser session you are nailed to the 10.0.0.3 backend by this sticky cookie.

You can test the opposite behavior if you use curl, because curl fires a new request every time and does not send the cookie back. Here is the example output:
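If you want to reproduce both behaviours yourself, the test boils down to something like this (a sketch; commands only, the echoed output will differ in your setup):

```sh
# Without a cookie every request may land on a different backend
curl -s http://127.0.0.1:25580/

# Store the sticky cookie handed out by Traefik ...
curl -s -c cookies.txt http://127.0.0.1:25580/

# ... and send it back: now every request hits the same backend
curl -s -b cookies.txt http://127.0.0.1:25580/
```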

Running plain curl repeatedly, you alternate between the started backends. Great! Now we can scale our service to, let’s say, five backends. This can be done with the command docker service scale mystack_app=5, with the following output including docker stack ps mystack:

Now that we have five backends, we can check this in the Traefik dashboard at http://<docker-host>:25581:

Traefik scaled service

Congratulations once again! You have dynamically scaled your service and you still have session stickiness. You can check whether all backends are responding via the curl command from above.

Graphic about what you have built

The following graphic shows more than we have built today, but we will describe the border controller (loadbalancer) in one of the follow-up posts!

DockerSwarmController

Summary

This is the first comprehensive hitchhiker’s guide to a Traefik ingress controller for the Docker Swarm overlay network routing mesh including sticky sessions. The information shown in this post is a summary of many sources, including Github issues, and also a lot of try and (catch) error. If you have any further questions, do not hesitate to contact us! Leave a comment if you like, you are welcome!

Docker Endeavor – Episode 2 – Liftoff

Atlantis Liftoff

Review of episode 1

In episode one we wrote about the challenges we faced during the last two years of our Docker experiments. Some of the problems we found still exist today, but overall we got a Docker infrastructure up and running. This episode will cover how we did it.

Liftoff

The blog picture of this post, which you can see at the bottom, shows how we decided to set up our on-premise Docker infrastructure. In the following sections we explain the core components and also provide further information, for example Github issues, where available. The explanation is structured from the outside in – therefore VMware is the first thing we will explain.

VMware

When we started to build up our Docker environment, we began with a single Docker host just to find out how far Docker had actually progressed. We decided to go with Ubuntu hosts because we have had Linux experience for a long time, so this seemed a convenient way for us. Soon after the first tests the first questions came up. One of them was how we should power the Docker environment as a whole infrastructure – should we install Docker (based on Ubuntu) on bare metal or not? We read about it and came to the conclusion that installing Docker on bare metal is a bad idea, for several reasons.

  • Bare metal operating system upgrades (not updates) are often a huge pain, and the need to reinstall the whole system is common. That is really ugly if you have limited hardware resources. It is much easier to build up a new virtual Docker host and start the containers you need freshly on that host.
  • Other projects like Rancher prove that it is not unusual to run Docker in Docker to remove the operating system dependency and to build a cloud operating system like CoreOS or RancherOS (but that was too much for us).
  • No, you don’t like hardware! As simple as this sentence is, hardware, networking, fibre channel connections and so on are always a real challenge. Therefore a datacenter always needs a lot of people doing these kinds of tasks. To avoid them, use a hypervisor of some kind. We have VMware at work, so we chose that one.

NFS-Storage

One single Docker host is easy to manage; it is like hacking 127.0.0.1. You can use Docker volumes, you can place the data on your local harddisk, and everything is pretty easy. But you will not be fail-safe. Therefore, if you ever plan to use Docker in production, you have to have more than one Docker host. And this is where the problems start. After we set up our second Docker host (now we have five) we quickly realized that we have to share data between hosts. Yes, we know about data containers and so on, but these solutions are always limited in multi Docker host scenarios. A data container on Docker host A is and will always be on this host. If it fails, it is gone. Kubernetes provides volume drivers which enable containers to write directly to external storage (AWS, GCE, …), but we are on-premise. OK, they also support NFS, but managing Kubernetes pods, kube-proxy and other stuff is not easy. For this reason we decided to follow the KISS principle (keep it simple & stupid) and set up a central NFS server for our shared data.

As we write this we can literally look into the future and hear people screaming in our comments: “Oh my god, NFS! They are using this f**** old crap piece of insecure software with this super-duper perfect Docker software!!!” – yeah, only three words on this… It just works.

The NFS share is organized in multiple sections. On the one hand, every Docker host has its own area where host-specific Docker container configuration files can reside. For example, this is useful if one of the Docker hosts holds more than one ip address because some kind of “external ip address” is needed to provide a DNS entry with the correct information. In the bottom picture this is the reason why Docker host A is not in the Docker swarm. As you can see there, a container is deployed whose role is to be the border-controller. The border-controller will be explained in one of the following blog posts. On the other hand, the NFS share also covers an area where shared data is persisted. For example, if you have a MariaDB running in one of your containers (and logically only one container), then this container may be started on Docker hosts C-E in case of trouble, because you deployed it as a Docker Swarm service. Therefore it is absolutely necessary to keep the data of the MariaDB container on a destination that is reachable from any possible Docker host.
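Sticking with the MariaDB example, a swarm service using such an NFS-backed path could look roughly like this (paths, names and the password are placeholders):

```yaml
version: "3"

services:
  mariadb:
    image: mariadb:latest
    environment:
      - MYSQL_ROOT_PASSWORD=changeme   # placeholder
    volumes:
      # /srv/nfs/shared is the NFS share mounted on every Docker host,
      # so the data directory is reachable no matter where the task lands
      - /srv/nfs/shared/mariadb:/var/lib/mysql
    deploy:
      replicas: 1
```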

Maybe there will be a better solution in the future to achieve this goal, but currently this is a valid setup.

Docker swarm

As you can see in the picture, we are using a Docker swarm setup. For example, this is helpful for automatic deployments, as a Docker stack service can be updated easily. The swarm also makes it possible to guarantee that a service with a defined number of replica containers is always running, regardless of whether there is one or many Docker hosts. But currently you have to be careful, because Docker swarm does some things that you may not be aware of.

  • Docker swarm does not support the --net=host configuration. This means that you will not be able to configure a needed network interface from inside a Docker container on the Docker host the container is running on. For example, an “external” ip address cannot move with the Docker container if the container is started on another Docker host. If you need such a setup, you cannot use Docker swarm at the moment. Now you know why, in the picture at the bottom, some Docker hosts are not part of the swarm – more information here -> #25873
  • Docker swarm uses overlay networks between the Docker hosts, which is an impressive feature. But the Docker swarm mesh network does not support sticky sessions. This means that a service published as a Docker stack will open up the exposed port on all Docker hosts of the Docker swarm. E.g. if you have five Docker hosts and you are running a Docker stack with a service that starts two Docker containers, then these two containers will be reachable through all five Docker hosts on the published port(s). Furthermore, if you start more Docker containers in a service than you have Docker hosts, e.g. ten Docker containers running on five Docker hosts, you will still only have five entry points (through the five Docker hosts) to the Docker containers. This mesh network routes the incoming traffic to a Docker container inside the Docker stack service randomly, and you will not necessarily reach the same Docker container on the next request. Therefore some kind of ingress-controller is needed. That is why in the bottom picture an ingress-controller (Traefik based) is shown to manage these requests if stickiness is a must-have. In most cases it is a must for us, because the service running in the Docker swarm stack usually has no session database…

We know that a lot of people are saying “make stateless services” or “your applications have to use a session database” and so on, but this is not the reality of real-life applications which have a long history. This is the point where theory (Docker) meets practice (real life). We will show you such an ingress-controller in one of the following posts.

Client

The client in this kind of setup can only connect to a DNS name, e.g. example.com. And of course the user on the client would like to enter only the domain name in the browser. A user will not, and will never want to, learn ip ports like example.com:30001. Now you will say: “Meh, just publish the service on 80 and/or 443!”. Ouch, if you do this in a Docker stack service, ports 80 and 443 are burned up on all Docker hosts! Starting only one service of this kind will render ports 80 and 443 unavailable for any further services. This is why cloud companies like AWS, GCE, Azure and many more provide a service that is able to map a Docker swarm stack using a dynamically exposed port to a fixed ip address, which in turn is covered by a DNS server. This is the only way it is possible to have many services with ports 80/443 running in parallel. We call this service “border-controller” and now you know why!

But if you are on-premise, no such service is available. You are out of luck, and if your users have to access a domain name as usual and you would like to provide the service/application behind this domain name via the Docker environment, you have to set up a border-controller as you can see in the bottom picture. But there are some pitfalls. For example, if you use Traefik as ingress-controller and as border-controller, you will currently mess up the stickiness of your application -> #1574. We will show how we managed this in one of the following posts.

Summary

This post contains a lot of information about the components of an on-premise Docker environment with Docker swarm stacks/services. Stay tuned, we will provide more insights soon. If you have any questions, don’t hesitate to leave a comment, you are welcome!

Docker Environment

Docker Endeavor – Episode 1 – Pre Flight

General

This blog series will give you an insight into how our usage of Docker evolved. We will start with some information about our working environment and our first attempts to use Docker at work. Hopefully you can learn from our pitfalls, and maybe you will have some fun reading it. These articles will be longer, and therefore it will take some time between the posts.

About the picture

The picture was taken from NASA and shows the Space Shuttle Endeavour. It beautifully shows the cargo bay (no containers there, but a similar idea, just in and for space), and therefore we chose this picture and of course the name Endeavor for our Docker posting series. Endeavour means to try to do something; in our case we try to run Docker on on-premise infrastructure.

Pre-Flight

Bernhard and I have been doing containers since 2008. We started with OpenVZ and a handful of containers, just one or two applications and some sort of load balancing. The applications we empowered were monolithic blocks of Apache Tomcat in conjunction with Java and the web application bundled together. And this is still the case nowadays. The deployment process is basically based on building Debian binary packages; after the build, the packages are uploaded to a private repository. But we moved on, and two years ago we started to change our deployment and began to use a self-written Python program. This is where we are today.

Docker

Now it comes to Docker. Bernhard and I know Peter Rossbach from the Tomcat project as a committer and as a consultant. He was one of the first who joined the Docker community. Therefore we decided to give Docker a try, but three years ago this was a tough task for us. Too tough. There were too many problems, on the Docker side (load balancing) and of course on the developer side. So for us – most of the time we are the Ops in DevOps – this was impossible to lift. So we canceled our first Docker experiment but kept it on our radar. Time went by, and at the end of 2016 and the beginning of 2017 we started a new approach. One of the key components, the load balancing, is much better now (Traefik) but in some circumstances still a pain. Why? We would like to run Docker on-premise! So there is no sexy GCE external IP load balancer and much more. There are hundreds of problems when it comes to fitting Docker into a grown heterogeneous infrastructure. You need an example? What do you do if the only way to the internet is an http proxy server? Yes, you have to change this first. And now think about the fact that this proxy model is a decade old and you tell someone that you need direct routing. Guess what, that’s not easy to achieve.

But on our way we met Timo Reimann, a very nice contact. After a few chats we were able to find our way to set up Docker, and today we are running about 150 containers in production – we started approximately two months ago.

To be continued

But we will tell you more about our odyssey next time, in part two of this blog series. Hopefully this will help the one or the other who has to manage a lot of technical problems with Docker in a real-life and not a lab environment!

WordPress SSL (https) and Reverse Proxy (Nginx, Apache httpd)

As you can see, this blog is accessible through SSL (https) encryption only. Normally this is not a huge problem, but WordPress is a little bit clunky when it comes to a setup that also includes a reverse proxy.

General

The following text is a summary of several pages that can be found on the internet but often lack information. The WordPress blog that you are currently reading runs on an Apache httpd on localhost. In front of it, there is a second Apache httpd which acts as a reverse proxy for different tasks. One of these tasks is to offload SSL (https) encryption.

WordPress installation

In the described setup you should first install the WordPress software over http (port 80) without SSL. If you enable SSL at this point, chances are good that you will end up in a redirect loop.

Configure SSL (https)

On the reverse proxy, configure SSL as usual, but be aware that you have to set RequestHeader set X-Forwarded-Proto "https" inside the SSL virtual host! This is important because otherwise the URLs generated by WordPress will be http links and you will get browser warnings later. Do not force a permanent redirect from http to https at this point, or you will not be able to install the necessary WordPress plugin which takes care of your URLs.

After you have enabled basic https support, install the WordPress extension SSL Insecure Content Fixer and configure it to use the X-Forwarded-Proto header. Afterwards you have to modify the wp-config.php to reflect these settings. If you want to use Jetpack, you also have to specify SERVER_PORT, otherwise you will receive an error message on wordpress.com during the configuration of your social media connections (There was an error retrieving your site settings.). You also have to force admin SSL usage.

Hopefully this will help some people out there to get this up and running. If this config does not help you, leave a comment!

-M

Apache http reverse proxy config
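A minimal sketch of such an SSL-offloading vhost (server name, certificate paths and the backend address are placeholders):

```apache
# Requires mod_ssl, mod_headers and mod_proxy_http
<VirtualHost *:443>
    ServerName blog.example.com

    SSLEngine on
    SSLCertificateFile      /etc/ssl/certs/blog.example.com.crt
    SSLCertificateKeyFile   /etc/ssl/private/blog.example.com.key

    # Tell WordPress that the original request came in via https
    RequestHeader set X-Forwarded-Proto "https"

    ProxyPreserveHost On
    ProxyPass        / http://127.0.0.1:80/
    ProxyPassReverse / http://127.0.0.1:80/
</VirtualHost>
```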

Nginx reverse proxy (in a Docker environment)
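Roughly the same thing with Nginx as the reverse proxy in a Docker environment (the upstream container name wordpress is an assumption):

```nginx
server {
    listen 443 ssl;
    server_name blog.example.com;

    ssl_certificate     /etc/nginx/certs/blog.example.com.crt;
    ssl_certificate_key /etc/nginx/certs/blog.example.com.key;

    location / {
        proxy_set_header Host              $host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Real-IP         $remote_addr;
        proxy_pass http://wordpress:80;
    }
}
```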

WordPress wp-config.php
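The wp-config.php additions boil down to honouring the forwarded protocol header and forcing SSL for the admin area (a sketch; adapt to your setup):

```php
// Additions to wp-config.php - trust the reverse proxy's X-Forwarded-Proto header
if (isset($_SERVER['HTTP_X_FORWARDED_PROTO']) && $_SERVER['HTTP_X_FORWARDED_PROTO'] === 'https') {
    $_SERVER['HTTPS'] = 'on';
    // Jetpack needs the correct port as well (see above)
    $_SERVER['SERVER_PORT'] = 443;
}

// Force SSL for the WordPress admin area
define('FORCE_SSL_ADMIN', true);
```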