Lenovo Y50-70 replace keyboard

I didn’t find a full tutorial on how to do it, so I decided to post it here. It was my first try and took me about 3 hours last night.

Step 1 – Order a replacement

I started from http://pcsupport.lenovo.com/us/en/products/laptops-and-netbooks/lenovo-y-series-laptops/y50-70-notebook-lenovo/80ej/80ejcto/parts and ordered it at amazon.de

[Photo: the new keyboard]

Step 2 – Follow this tutorial

Step 3 – Take some time (a bottle of wine) and go on like this

I marked the places to work on with the blue tools…

[Photos: under the motherboard; the speakers]

Remove the black foil.

[Photos: removing the black foil; removing the power cable; removing the last screw; wrestling the keyboard out ("ahhrrrr", "ahrr"); got it; too early; great joy; and all the way back, on and on, until the foil is back in place]

… and all the way back.

- Markus Neuhold

Traefik Ingress Controller for Docker Swarm Overlay Network Routing Mesh including sticky sessions

Intro

This post covers the tricky topic of how to realize sticky sessions in a Docker swarm overlay network setup.

General

The first thing you have to know is that a deployed Docker stack which starts a couple of containers (services) will usually also start up an overlay network that provides an intercommunication layer for the services of this stack. At first sight that may not seem very useful if you only have one service in your Docker stack compose file, but it becomes very useful as soon as you have more than one service inside your compose file.

Docker swarm compose

Before we can dive into the problem with the Docker overlay network routing mesh when sticky sessions are needed, we need some background on the Docker stack mechanism. Before the Docker stack mechanism appeared (roughly before Docker engine 17.x-ce) there was (and is) Docker compose. If you are not using a Docker swarm, you will still need and use docker-compose when you want to start up a Docker service on your single Docker host. When we talk about Docker swarm, we are talking about more than one Docker host. When you need a Docker service started on a Docker swarm, you use the command docker stack deploy. This command takes the same input YAML file as docker-compose does, with additional configuration options. You can read more about it here. The current config language version is 3.0, but newer versions are already in the pipeline as the Docker engine version gets updated.
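
For illustration, the two entry points side by side (the file name is arbitrary):

    # single Docker host, classic compose
    docker-compose -f compose.yml up -d

    # Docker swarm, the same version-3 YAML deployed as a stack
    docker stack deploy -c compose.yml mystack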

Docker compose example

The following example shows a fully working Docker stack compose file, including all relevant information to deploy a Docker stack with an application service and an ingress controller service (based on Traefik).
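
The original compose file is not preserved in this copy of the post, so here is a minimal reconstruction. The image names, published ports and the labels discussed below are taken from the text; everything else is an assumption, and the line numbers quoted in the following sections refer to the original file, not to this sketch.

    version: "3"

    services:
      app:
        image: n0r1skcom/echohttp:latest
        networks:
          - net
        deploy:
          replicas: 2
          labels:
            # labels evaluated by Traefik (lines 16-20 in the original file)
            - "traefik.frontend.rule=PathPrefix:/"
            - "traefik.docker.network=mystack_net"
            - "traefik.backend.loadbalancer.sticky=true"
            - "traefik.port=8080"

      lb:
        image: traefik:latest
        command: --docker --docker.swarmmode --docker.watch --web
        ports:
          - "25580:80"
          - "25581:8080"
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
        networks:
          - net
        deploy:
          placement:
            constraints:
              - node.role == manager

    networks:
      net:
        driver: overlay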

You have to deploy this compose YAML file with exactly this command: docker stack deploy -c compose.yml mystack. The reason why is explained in the next section, and you have to read it to understand what is going on here – THE EXAMPLE WILL NOT WORK WITHOUT MODIFICATIONS – READ THE NEXT SECTION. The next section also gives you a lot of background information about the compose details, and these details are essential!

Traefik ingress controller

If you want to run the compose file shown above, you have to modify it at one point. The Traefik ingress controller is specified in the lb service section of the compose file, and you have to change the placement constraint. If you are running the example on a single Docker host which has Docker swarm enabled, you can delete the whole placement part; otherwise you have to point it to a valid Docker swarm manager or leader. You can find these settings between lines 41 and 43 of the original Docker stack compose file.

After you have changed this setting, you can deploy the Docker stack compose file with the following command: docker stack deploy -c compose.yml mystack. You have to use mystack as the stack name, because this name is used in line 18 of the original compose file. There you see the entry - "traefik.docker.network=mystack_net". The first part of that network name comes from the mystack name we specified when running the docker stack command; the second part comes from the network section of the compose file, which you see between lines 47 and 49.

You can also see this naming when you run the docker stack deploy command. Here is the full output of the deploy command:
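
The original console capture is not preserved; illustratively, the deploy prints one line per created resource:

    Creating network mystack_net
    Creating service mystack_app
    Creating service mystack_lb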

Now we check whether our deployed stack is running, with the command: docker stack ps mystack. The output looks as follows:
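
Again illustrative rather than the original capture (task IDs and node names are made up):

    ID            NAME           IMAGE                      NODE    DESIRED STATE  CURRENT STATE
    w1x2y3z4a5b6  mystack_app.1  n0r1skcom/echohttp:latest  node-1  Running        Running about a minute ago
    c7d8e9f0a1b2  mystack_app.2  n0r1skcom/echohttp:latest  node-2  Running        Running about a minute ago
    e3f4a5b6c7d8  mystack_lb.1   traefik:latest             node-1  Running        Running about a minute ago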

OK, it seems our stack is running. We have two app containers running from the image n0r1skcom/echohttp:latest, a simple image built by us to quickly get basic http request/response information. We will see it in use in a second. Furthermore, a loadbalancer based on traefik:latest is up and running. As you can see in the Docker stack compose file above, we did not specify any exposed ports for the application containers. These containers run a golang http server on port 8080, but it is not possible to reach them from the outside network directly. We can only call them through the deployed Traefik loadbalancer, which we exposed on ports 25580 (the port 80 mapping of Traefik) and 25581 (the dashboard port of Traefik). See lines 29-31.

Now we take a look at whether we can reach the dashboard of Traefik. Open a web browser and point it to the ip address of one of your Docker hosts with the given port, for example http://<docker-host-ip>:25581. It will work with any of the Docker hosts, thanks to the Docker overlay network routing mesh! I started this Docker stack on a local Docker host, therefore I will point my browser to http://127.0.0.1:25581. You should see the following screenshot:

[Screenshot: Traefik dashboard]

And wow! This needs some explanation. First, on the right hand side of the screenshot you see the backends that Traefik is using for our service. But wait, where do they come from? Traefik uses the /var/run/docker.sock Docker interface, which is specified in lines 32 and 33 of the original compose file. This is the reason why the Traefik loadbalancer has to run on a Docker swarm manager or leader: only these Docker hosts can provide the needed Docker swarm information. Furthermore, the app containers need special labels. These labels are defined in lines 16 to 20; there we label our app containers so that the Traefik loadbalancer finds them and can use them as backends. To get this working, line number 20 is essential: without this line, Traefik will not add the container as a backend! Now all lines of the Docker compose file are explained.

Last but not least, we should check whether the cookie-based sticky session ingress loadbalancing is working. To do this, open up a browser and enter the URL of the http-exposed Traefik port, for example http://<docker-host-ip>:25580. I will once again use http://127.0.0.1:25580, and you should see the following output:

[Screenshot: HTTP output]

On the left hand side of the screenshot you can see the output from our n0r1skcom/echohttp:latest container. It shows the hostname of the container you are connected to. In this case the container got the dynamic hostname df78eb066abb, and the local ip address of this container is 10.0.0.3. The ip address 10.0.0.2/32 is the VIP (virtual ip) of the Docker overlay network mesh. On the right hand side of the screenshot you can see the Chrome developer console, showing the loadbalancing cookie we received from the Traefik loadbalancer; the cookie shows that we are bound to the 10.0.0.3 backend. Congratulations! Now you can press Ctrl+R as often as you like: within this browser session you are nailed to the 10.0.0.3 backend by this sticky cookie.

You can test the opposite behavior with curl, because curl fires a new request every time and does not send the cookie back. Here is the example output:
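
A sketch of the idea (the exact response format of the echohttp image is not reproduced here):

    $ curl http://127.0.0.1:25580    # answered by one backend, e.g. df78eb066abb
    $ curl http://127.0.0.1:25580    # answered by the other backend: curl sends no
                                     # cookie back, so the requests alternate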

As you can see, you are alternating between the started backends. Great! Now we can scale our service to, let's say, five backends. This can be done with the command docker service scale mystack_app=5, with the following output (including docker stack ps mystack):
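
Illustratively (the docker stack ps mystack listing, not reproduced here, would now show five mystack_app tasks):

    $ docker service scale mystack_app=5
    mystack_app scaled to 5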

Now that we have five backends, we can check them on the Traefik dashboard at http://<docker-host-ip>:25581:

[Screenshot: Traefik scaled service]

Congratulations once again! You have dynamically scaled your service and you still have session stickiness. You can check that all backends are responding via the curl command from above.

Graphic about what you have built

The following graphic shows more than we built today; we will describe the border controller (loadbalancer) in one of the follow-up posts!

[Graphic: DockerSwarmController]

Summary

This is the first comprehensive hitchhiker's guide to the Traefik ingress controller for the Docker swarm overlay network routing mesh, including sticky sessions. The information shown in this post is a summary of many sources, including Github issues, and a lot of try and (catch) error. If you have any further questions, do not hesitate to contact us! Leave a comment if you like, you are welcome!

- Mario Kleinsasser and Bernhard Rausch

Docker Endeavor – Episode 2 – Liftoff

[Picture: Atlantis liftoff]

Review of episode 1

In episode one we wrote about the challenges we faced during the last two years of our Docker experiments. Some of the problems we found still exist today, but overall we got a Docker infrastructure up and running. This episode covers how we did it.

Liftoff

The blog picture of this post, which you can see at the bottom, shows how we decided to set up our on-premise Docker infrastructure. In the following sections we explain the core components, and we will also provide further information, for example Github issues, where available. The explanation is structured from the outside in, so VMWare is the first thing we will explain.

VMWare

When we started to build up our Docker environment, we began with one single Docker host, just to try out how far Docker had actually progressed. We decided to go with Ubuntu hosts because we have had Linux experience for a long time, so this seemed a convenient way for us. Soon after the first tests, the first questions came up. One of them was how we should power the Docker environment as a whole infrastructure: should we install Docker (based on Ubuntu) on bare metal or not? We read about it and came to the conclusion that installing Docker on bare metal is a bad idea, for several reasons.

  • Bare metal operating system upgrades (not updates) are often a huge pain, and the need to reinstall the whole system is common. That is really ugly if you have limited hardware resources. It is much easier to build up a new virtual Docker host and start up the containers you need freshly on this Docker host.
  • Other projects like Rancher prove that it is not unusual to run Docker in Docker to remove the operating system dependency and to build a cloud operating system like CoreOS or RancherOS (but that was too much for us).
  • No, you don’t like hardware! As simple as this sentence is: hardware, networking, fiber channel connections, and so on are always a real challenge. Therefore a datacenter always needs a lot of people who do these kinds of tasks. To avoid these tasks, use a hypervisor of some kind. We have VMWare at work, so we chose this one.

NFS-Storage

One single Docker host is easy to manage; it is like hacking 127.0.0.1. You can use Docker volumes, you can place the data on your local harddisk, and everything is pretty easy. But you will not be fail-safe. Therefore, if you ever plan to use Docker in production, you have to have more than one Docker host. And this is where the problems start. After we set up our second Docker host (now we have five), we quickly realized that we have to share data between hosts. Yes, we know about data containers and so on, but these solutions are always limited in multi-Docker-host scenarios. A data container on Docker host A is, and always will be, on this host. If it fails, it is gone. Kubernetes provides volume drivers which enable containers to write directly to external storage (AWS, GCE, …), but we are on-premise. OK, they also support NFS, but managing Kubernetes pods, kube-proxy and other stuff is not easy. For this reason we decided to follow the KISS principle (keep it simple, stupid) and set up a central NFS server for our shared data.

As we write this we literally can look into the future and we will hear the people screaming in our comments: “Oh my god, NFS! They are using this f**** old crap piece of insecure software with this super-duper perfect Docker software!!!” – yeah, only three words on this… It just works.

The NFS share is organized in multiple sections. On the one hand, every Docker host has its own area where host-specific Docker container configuration files can reside. For example, this is useful if one of the Docker hosts holds more than one ip address because some kind of "external ip address" is needed to provide a DNS entry with the correct information. In the bottom picture, this is the reason why Docker host A is not in the Docker swarm. As you can see there, a container is deployed whose role is to be the border-controller; the border-controller will be explained in one of the following blog posts. On the other hand, the NFS share also covers an area where shared data is persisted. For example, if you have a MariaDB running in one of your containers (and logically only one container), this container may be started on any of the Docker hosts C-E in case of trouble, because you deployed it as a Docker swarm service. Therefore it is absolutely necessary to hold the data of the MariaDB container in a location that is reachable from any possible Docker host.
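
To make this concrete, a purely hypothetical sketch of such a layout (server name, paths and subnet are made up):

    # /etc/exports on the central NFS server
    /srv/docker/hosts    10.0.10.0/24(rw,sync,no_subtree_check)
    /srv/docker/shared   10.0.10.0/24(rw,sync,no_subtree_check)

    # /etc/fstab on every Docker host
    nfsserver:/srv/docker/shared   /data/shared   nfs   defaults   0   0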

Maybe there will be a better solution in the future to achieve this goal, but currently this is a valid setup.

Docker swarm

As you can see in the picture, we are using a Docker swarm setup. This is helpful, for example, for automatic deployments, as a Docker stack service can be updated easily. The swarm also makes it possible to guarantee that a service with a defined number of replica containers is always running, regardless of whether there is one Docker host or many. But currently you have to be careful, because Docker swarm does some things that you may not be aware of.

  • Docker swarm does not support the --net=host configuration. This means that you will not be able to configure a needed network interface from inside a Docker container on the Docker host the container is running on. For example, an "external" ip address cannot move with the Docker container if the container is started on another Docker host. If you need such a setup, you cannot use Docker swarm at the moment. Now you know why, in the picture at the bottom, some Docker hosts are not part of the swarm – more information here -> #25873
  • Docker swarm uses overlay networks between the Docker hosts, which is an impressive feature. But the Docker swarm mesh network does not support sticky sessions. This means that a service published as a Docker stack will open up the exposed port on all Docker hosts of the Docker swarm. E.g. if you have five Docker hosts and you are running a Docker stack with a service that starts two Docker containers, then these two containers will be reachable through all five Docker hosts on the published port(s). Furthermore, if you start more Docker containers in a service than you have Docker hosts, e.g. ten Docker containers running on five Docker hosts, you will still only have five entry ways (through the five Docker hosts) to the Docker containers. This mesh network routes the incoming traffic to a Docker container inside the Docker stack service randomly, and you will not necessarily reach the same Docker container on the next request. Therefore some kind of ingress-controller is needed. That is why the bottom picture shows an ingress-controller (Traefik based) to manage these requests if stickiness is a must-have. In most cases for us it is a must, because in most cases the service running in the Docker swarm stack has no session database…

We know that a lot of people are saying "make stateless services" or "your applications have to use a session database" and so on, but this is not the reality of real-life applications which have a long history. This is the point where theory (Docker) meets practice (real life). We will show you such an ingress-controller in one of the following posts.

Client

The client in this kind of setup can only connect to a DNS name, e.g. example.com. And of course the user on the client would like to just put the domain name into the browser. A user will not, and should never have to, learn ip ports like example.com:30001. Now you will say: "Meh, just publish the service on 80 and/or 443!". Ouch: if you do this in a Docker stack service, ports 80 and 443 are burned up on all Docker hosts! Starting only one service of this kind will render ports 80 and 443 unavailable for any further services. This is why cloud companies like AWS, GCE, Azure and many more provide a service that is able to map a Docker swarm stack that uses a dynamically exposed port to a fixed ip address, which in turn is covered by a DNS server. This is the only way it is possible to have many services with port 80/443 running in parallel. We call this service "border-controller", and now you know why!

But if you are on-premise, there is no such service available. You are out of luck, and if your users have to access a domain name as usual and you would like to provide the service/application behind this domain name via the Docker environment, you have to set up a border-controller as you can see in the bottom picture. But there are some pitfalls. For example, if you use Traefik as both ingress-controller and border-controller, you will currently mess up the stickiness of your application -> #1574. We will show how we managed this in one of the following posts.

Summary

This post contains a lot of information about the many components of an on-premise Docker environment with Docker swarm stacks/services. Stay tuned, we will provide more insights soon. If you have any questions, don't hesitate to leave a comment, you are welcome!

[Graphic: Docker environment]

- Mario Kleinsasser and Bernhard Rausch

Docker Endeavor – Episode 1 – Pre Flight

General

This blog series will give you an insight into how our usage of Docker has developed. We will start with some information about our working environment and our first attempts to use Docker at work. Hopefully you can learn from our pitfalls, and maybe you will have some fun reading it. These articles will be longer, and therefore it will take some time between posts.

About the picture

The picture was taken from NASA and shows the Space Shuttle Endeavour. It beautifully shows the cargo bay (no containers there, but a similar idea, just in and for space), and therefore we chose this picture, and of course the name Endeavor, for our Docker posting series. To endeavour means to try to do something; in our case, we try to run Docker on on-premise infrastructure.

Pre-Flight

Bernhard and I have been doing containers since 2008. We started with OpenVZ and a handful of containers, just one or two applications and some sort of load balancing. The applications we ran were monolithic blocks of Apache Tomcat bundled together with Java and the web application, and this is still the case nowadays. The deployment process is basically based on building Debian binary packages; after the build, the packages are uploaded to a private repository. But we moved on, and two years ago we started to change our deployment and began using a self-written Python program. This is where we are today.

Docker

Now it comes to Docker. Bernhard and I know Peter Rossbach from the Tomcat project, as a committer and as a consultant. He was one of the first to join the Docker community. Therefore we decided to give Docker a try, but three years ago this was a tough task for us. Too tough. There were too many problems, on the Docker side (load balancing) and of course on the developer side. So for us (most of the time we are the Ops in DevOps) this was impossible to lift. We cancelled our first Docker experiment but kept it on our radar. Time went by, and at the end of 2016 and the beginning of 2017 we started a new approach. One of the key components, the load balancing, is much better now (Traefik) but in some circumstances still a pain. Why? We would like to run Docker on-premise! So there is no sexy GCE external IP load balancer and much more. There are hundreds of problems when it comes to fitting Docker into a grown heterogeneous infrastructure. You need an example? What do you do if the only way to the internet is an http proxy server? Yes, you have to change this first. And now think about the fact that this proxy model is a decade old and you are telling someone that you need direct routing. Guess what, that's not easy to achieve.

But on our way we met Timo Reimann, a very nice contact. After a few chats we were able to find our way to set up Docker, and today we are running about 150 containers in production. We started approximately two months ago.

To be continued

But we will tell more about our odyssey next time, in part two of this blog series. Hopefully this will help the one or the other who has to manage a lot of technical problems with Docker in a real-life, not a lab, environment!

- Mario Kleinsasser and Bernhard Rausch

GitHub and animated gifs…

For our n0r1skcom/echo DockerHub image we wanted to add a gif (see above) with console output to the corresponding GitHub project README.

But that wasn't as easy as we thought, because GitHub caches images with atmos/camo, and that brings in some problems with bigger gifs…

So we had to disable image caching via the http headers of our source images, but these images are located in our WordPress media library and we didn't want to disable image caching in general.

The solution for us was to configure the serving webserver (in our case Apache) to set some caching/expiry headers via a LocationMatch directive and a fancy regex.
Our regex matches all pictures whose filename starts with the prefix "nocache_", so every other uploaded image isn't touched in any way.

Apache configuration sample
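
The original sample is not preserved here; a sketch of how such a configuration could look (the exact header values are assumptions, and mod_headers must be enabled):

    # match every image whose filename starts with "nocache_"
    <LocationMatch "nocache_.*\.(gif|png|jpe?g)$">
        Header set Cache-Control "no-cache, no-store, must-revalidate"
        Header set Pragma "no-cache"
        Header set Expires "0"
    </LocationMatch>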

- Bernhard Rausch and Mario Kleinsasser

OpenSSH on IBM i (AS400) – some hints

Preface

I was asked to repost this article from our old wiki. So here it is, with the content back from 2011. If I find some time I'll post how to restrict ssh access to users with a predefined group profile. Or better: let me know if you are interested in it 🙂

Prerequisites

Install the Portable Application Solutions Environment (i5/OS PASE), which is shipped as i5/OS option 33.

Installation

Install IBM Portable Utilities for i5/OS (*BASE) and OpenSSH, OpenSSL, zlib Libs (Opt 1) from your i5/OS installation media in drive OPTxx.
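
In CL this could look as follows (a sketch; 5733SC1 is the product ID of IBM Portable Utilities for i5/OS, and OPTxx stands for your optical drive):

    RSTLICPGM LICPGM(5733SC1) DEV(OPTxx) OPTION(*BASE)
    RSTLICPGM LICPGM(5733SC1) DEV(OPTxx) OPTION(1)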

Setup

For setup, use CL (Command Language) commands or the built-in terminal to change the configuration files.

Config file location

After the first call of WRKLNK, the DETAIL and DSPOPT parameters don't have to be specified anymore. If you are more familiar with vi, use these commands…
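
A sketch of both ways (the openssh-3.5p1 path assumes the V5R4 layout of 5733SC1 and differs on later releases):

    /* browse the IFS with WRKLNK */
    WRKLNK OBJ('/QOpenSys/QIBM/UserData/SC1/OpenSSH/*') DETAIL(*EXTENDED) DSPOPT(*ALL)

    /* or edit directly with vi from a PASE shell */
    CALL QP2TERM
      vi /QOpenSys/QIBM/UserData/SC1/OpenSSH/openssh-3.5p1/etc/sshd_config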

(Auto)start ssh daemon

From V6.1 onward, the start is done with an integrated CL command. System-wide key files are generated at first start!!! Autostart can be configured as well.
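
The integrated command is:

    STRTCPSVR SERVER(*SSHD)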

At V5.4 some more work has to be done, with QSECOFR or a user meeting the following prerequisites…

  • The userid that starts the daemon must have *ALLOBJ special authority
  • The userid that starts the daemon must be 8 or fewer characters long
  • Before starting sshd for the first time, you will need to generate host keys by starting a PASE shell (STRQSH or CALL QP2TERM), for example as sketched below
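
A sketch of the host key generation inside the PASE shell (path and key types assume the V5R4 openssh-3.5p1 layout):

    cd /QOpenSys/QIBM/UserData/SC1/OpenSSH/openssh-3.5p1/etc
    ssh-keygen -t rsa1 -f ssh_host_key -N ""
    ssh-keygen -t rsa  -f ssh_host_rsa_key -N ""
    ssh-keygen -t dsa  -f ssh_host_dsa_key -N ""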

Start the sshd daemon within the same job, or in a new job started from the PASE shell or from CL…
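
A sketch of these variants (paths assume the V5R4 layout; the job name is arbitrary):

    /* 1) within the same job, from a PASE shell (STRQSH or CALL QP2TERM): */
    /QOpenSys/QIBM/ProdData/SC1/OpenSSH/openssh-3.5p1/sbin/sshd

    /* 2) in a new job via CL (QP2SHELL runs a PASE program); from a shell,
          the same SBMJOB can be issued through the system utility: */
    SBMJOB CMD(CALL PGM(QP2SHELL) PARM('/QOpenSys/QIBM/ProdData/SC1/OpenSSH/openssh-3.5p1/sbin/sshd')) JOB(SSHD)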

For autostart, contact your AS400 sysadmin to plan a job scheduler entry (WRKJOBSCDE) with the QSECOFR profile, in order to be sure that all things will run.

Stopping sshd

From V6.1 onward use…
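
This is again the integrated TCP server command:

    ENDTCPSVR SERVER(*SSHD)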

In V5.4 you have to find the running job and 'kill' it…
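
For example, list the active jobs (the function column shows PGM-sshd):

    WRKACTJOB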

and stop the job using selection 4 (End) for the job with the function PGM-sshd. If more than one job is listed, then there are active connections to your system.

Enable public key authentication

Uncomment the following lines in the sshd_config file.
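
In a stock sshd_config these lines ship commented out; the sketch below shows them enabled (defaults may vary by release):

    PubkeyAuthentication yes
    AuthorizedKeysFile   .ssh/authorized_keys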

Generate keys and exchange them on a per-user basis, as on any other linux/unix based system. Be aware that public key authentication will not work if public (write) authority is set on certain directories or files … just read on.
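
A generic sketch (user and host names are placeholders):

    # on the client
    ssh-keygen -t rsa
    # append the public key to the user's authorized_keys on the IBM i
    cat ~/.ssh/id_rsa.pub | ssh myuserid@as400 'mkdir -p .ssh && cat >> .ssh/authorized_keys'
    # then fix the permissions as described in the hints below
    ssh myuserid@as400 'chmod go-w /home/myuserid && chmod -R go-rwx /home/myuserid/.ssh'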

Nice hints

Check these points before connecting via ssh to the AS400:

  • The userid that is connecting must be 8 or fewer characters long
  • For public key authentication verify the permissions on the userid’s directories and files
  • The userid’s home directory must not have public write authority ( chmod go-w /home/myuserid )
  • The userid’s /home/myuserid/.ssh directory and /home/myuserid/.ssh/authorized_keys file must not have any public authorities (chmod go-rwx /home/userid/.ssh and chmod go-rwx /home/myuserid/.ssh/authorized_keys )

Once connected, you will be at a PASE for i command line.

Restrictions on ssh, sftp or scp in PASE shell

The PASE shell (STRQSH or CALL QP2TERM) is not a true TTY device. This can cause problems when trying to use ssh, sftp or scp within one of these sessions. Try this as a work-around:

  • For ssh: use the -T option to not allocate a tty when connecting
  • For sftp and scp: use the ssh-agent utility and public key authentication to avoid sftp and scp prompting for passwords or passphrases (see the sketch after this list)
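
For example (user and host names are placeholders):

    # ssh without allocating a pseudo-tty
    ssh -T myuser@remotehost

    # let ssh-agent hold the key so sftp/scp don't prompt
    eval $(ssh-agent)
    ssh-add
    sftp myuser@remotehost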

References and Links

  • IBM Redbooks on this topic
  • Another straightforward guide
  • Using chroot to restrict jail access to specific directories
  • Some security considerations

- Markus Neuhold

ACS – Run SQL Scripts – Saving result data to .csv and other

In my last post about ACS 1.1.7.0 I mentioned that it is the first version really suitable for developers. Now I want to share another nice-to-know…

While IBM support explains how to save result data to .csv or .xls files using Run SQL Scripts in iSeries Navigator, in ACS this feature is grayed out.

Just add these two lines to the AcsConfig.properties file to enable saving results:

- Markus Neuhold

Quality of service…

Most of the time when we try out some new software, we catch some bugs, have compatibility problems and so on.

Let me give an example:
We work with an Ubuntu desktop in a virtual environment and wanted to upgrade it to the newest 17.04 release, because our main working tool (terminator) crashed every once in a while, leaving you sitting there without your already opened ssh sessions to many servers, or with open vi's holding configuration files / script code / …
Upgrading wasn't the problem - we used the software updater and everything was fine. Until our monitoring software (zabbix) sent us an e-mail about a problem with our configuration management agent (puppet) - and BAM, there was the first problem…
So I wanted to install a new puppet agent via the puppet debian repositories -> there is actually no debian package for ubuntu 17.04…
Next problem -> the upgrade also removed the old puppet package, which included facter -> so our monitoring reported a backup problem, because our backup script uses facter variables… the backup works, but the monitoring part doesn't…

That's only a single example, but especially when it comes to docker, which is really a great enhancement for IT in general, there are bugs and errors (not only in docker itself) everywhere.

So getting a system up and running with quality is really hard work, with lots of testing, searching, reading & implementing.

For me, my part is to raise quality by bringing every system to a certain standard, which includes backup / monitoring / configuration management & scripts for automation. I think these are four of the many important things when developing new systems.

- Bernhard Rausch

WordPress SSL (https) and Reverse Proxy (Nginx, Apache httpd)

As you can see, this blog is accessible through SSL (https) encryption only. Normally this is not a huge problem, but WordPress is a little bit clunky when it comes to a setup that also includes a reverse proxy.

General

The following text is a summary of several pages which can be found on the internet but which often lack information. The WordPress blog that you are currently reading runs on an Apache httpd on localhost. In front of it, there is a second Apache httpd which acts as a reverse proxy for different tasks. One of these tasks is to offload SSL (https) encryption.

WordPress installation

In the described setup you should first install the WordPress software on http (port 80) without SSL. If you enable SSL at this point, chances are good that you will end up in a redirect loop.

Configure SSL (https)

On the reverse proxy, configure SSL as usual, but be aware that you have to set RequestHeader set X-Forwarded-Proto "https" inside the SSL virtual host! This is important, as otherwise the URLs generated by WordPress will be http links and you will get browser warnings later. Do not force a permanent redirect from http to https at this point, or you will not be able to install the necessary WordPress plugin which takes care of your URLs.

After you have enabled basic https support, install the WordPress extension SSL Insecure Content Fixer and configure it to use the X-Forwarded-Proto header. Afterwards you have to modify wp-config.php to reflect these settings. If you want to use Jetpack, you also have to specify SERVER_PORT, otherwise you will receive an error message on wordpress.com during the configuration of your social media connections ("There was an error retrieving your site settings."). You also have to force admin SSL usage.

Hopefully this will help some people out there to get this up and running. If this config does not help you, leave a comment!

-M

Apache http reverse proxy config
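
Our actual vhost is not reproduced here; a minimal sketch, assuming mod_ssl, mod_proxy, mod_proxy_http and mod_headers are loaded, with placeholder names and paths:

    <VirtualHost *:443>
        ServerName blog.example.com

        SSLEngine on
        SSLCertificateFile    /etc/ssl/certs/blog.example.com.crt
        SSLCertificateKeyFile /etc/ssl/private/blog.example.com.key

        # tell WordPress that the client connection is https
        RequestHeader set X-Forwarded-Proto "https"

        ProxyPreserveHost On
        ProxyPass        / http://127.0.0.1:80/
        ProxyPassReverse / http://127.0.0.1:80/
    </VirtualHost>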

Nginx reverse proxy

We don't use Nginx at the moment, but it should work in the same manner. Just be sure that the X-Forwarded-Proto header is submitted by the reverse proxy to the backend.
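
A hypothetical equivalent for Nginx could look like this:

    server {
        listen 443 ssl;
        server_name blog.example.com;

        ssl_certificate     /etc/ssl/certs/blog.example.com.crt;
        ssl_certificate_key /etc/ssl/private/blog.example.com.key;

        location / {
            proxy_set_header Host $host;
            # tell WordPress that the client connection is https
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_pass http://127.0.0.1:80;
        }
    }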

WordPress wp-config.php
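
A sketch of the relevant wp-config.php part, reflecting the settings described above:

    // honor the X-Forwarded-Proto header set by the reverse proxy
    if (isset($_SERVER['HTTP_X_FORWARDED_PROTO']) && $_SERVER['HTTP_X_FORWARDED_PROTO'] === 'https') {
        $_SERVER['HTTPS'] = 'on';
        $_SERVER['SERVER_PORT'] = 443; // needed for Jetpack
    }

    // force SSL for login and wp-admin
    define('FORCE_SSL_ADMIN', true);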

- Mario Kleinsasser

Okay, Houston, we’ve had a problem here.


This quote perfectly reflects the essence of what makes our job as DevOps thrilling. Sometimes it's like on Apollo 13: you are writing an e-mail, and just one second later the master caution is triggered and you have no idea what happened. And for me that is the moment when our job rises out of the often boring daily business and the engineer within us awakens. We literally take our slide rules and do what we can do best, as Einstein said: "Scientists investigate that which already is; Engineers create that which has never been."

Therefore, on this blog I will write about all the technology and engineering stuff that surrounds me like a satellite, and about the things that I am interested in. And sometimes it will be sarcastic. Have fun and follow up!

- Mario Kleinsasser