Contributing to Docker (or others)

General

Over the last weeks I was busy getting two pull requests merged into docker-machine. Here I will tell you the story, the experiences I gained, and why it was worth the work. I am writing this during our journey and flights to Docker Con EU 2017, because I am in a good mood, excited to be there soon, and it's another way to support the community too.

Last summer we started to use Gitlab more extensively because we had gotten used to the integrated CI/CD pipelines in combination with Docker. For this setup we also installed an appropriate number of Gitlab runners. After a month or two we recognized that we were facing build peaks during the day, and therefore we decided to use our Microsoft Azure cloud connection to set up a Gitlab autoscale runner.

Docker Con EU 17 break on

Yes, I really wanted to write this blog post during Docker Con EU 17, but there was too much fun and information there! It was really exciting, and therefore this post was delayed until now … 🙂. But happily, you will find a review of our Docker Con EU 17 journey and impressions here soon!

Docker Con EU 17 break off

This was the point where the problems started. The first thing I noticed was that I was not able to reach the Azure VM that is created when the Gitlab autoscale runner starts. To figure out what was happening, I ran the docker-machine binary by hand and tried to create an Azure VM manually, because the Gitlab autoscale runner uses docker-machine in the background.
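
For reference, such a manual create call looks roughly like this (the subscription ID, resource group, network names and machine name are placeholders, not our real values):

    docker-machine create --driver azure \
      --azure-subscription-id "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" \
      --azure-resource-group "gitlab-runners" \
      --azure-location "westeurope" \
      --azure-vnet "our-vnet" \
      --azure-subnet "our-subnet" \
      test-vm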

First contribution

After a few tries and some debugging runs of docker-machine I realized that the network routing table, which is used to establish the site-to-site connection to the Azure cloud, gets deleted upon the creation of a VM in the Azure cloud. That is really bad, because it does not only interrupt the connection to the freshly created machine, it renders the whole subnet unreachable. Any other VMs in the same subnet are not reachable anymore either. Ouch.

Open source is great! Why? You can look at the code! And yes, even if you are not a full-time programmer – DO IT! I am an Ops guy, and I was able to do this too! OK, to be fair, we are often forced to solve problems on our own; especially if your main work is with Linux, you learn that you can do a lot of tasks in a more elegant manner if you just write a script, or two… (= to code).

So yes, after some digging, I found the place where the code had to be changed, and I changed it for our on-premise usage and tests. After some time I thought that it would be useful to others, so I filed an issue and also put in a corresponding pull request.

I had already learned to use Git, but through this pull request I learned a lot more! Thanks to Joffrey, who was very kind and supported me; after some additional work I was able to get my pull request merged. No more deleted routing table entries!

Second contribution

But the story does not end here 🙂 – during our on-premise tests we also recognized that the Azure storage accounts of deleted VMs are not deleted either! After some days of running the Gitlab autoscale runner, we had messed up our Azure resource group with lots of orphaned storage accounts. Not nice 😉

I guess you know what is coming now? Correct, filing an issue. But wait! Always check whether there is already a filed issue for a problem! And yes, there was one already filed. So once more I changed the code for on-premise use, tested it and opened a pull request. And again Joffrey was so kind to help me with my questions. After a while, this pull request was merged too, and hopefully it helps someone out there.

Conclusion

Yes, you can help others! There are plenty of things you can do for the community, not only coding. You can also support others by filing issues or writing documentation about projects you use (by no means limited to Docker). There was also a great talk on this topic at Docker Con EU 17, held by Ashley McNamara. To quote one of her slides and to end this blog post:

We are a Community
of coders. But if
all we do is code
then we've lost the community.

Blog picture information

The Earth, seen from Apollo 17.

We are going to Docker Con EU 2017

Docker Con EU 17

Bernhard and I will attend Docker Con EU 2017! We are glad to meet some of the people there who are in contact with us on Slack or Github! Follow us on Twitter; we might post some updates there!

Image copyright by Docker Inc!

-M

New development release of our border controller

In the last few days we have updated the development branch of our border controller. You can look up the latest information about it on Github. Be sure to choose the edge branch. The changelog contains information about the latest changes.

We are really impressed that the border controller has been downloaded 2700 times from Docker Hub to date – hooray!

On-Premise Gitlab with autoscale Docker machine Microsoft Azure runners

This post is about the experience we gained last week while we were expanding our on-premise Gitlab installation with autoscale Docker machine runners located in the Microsoft Azure cloud.

Preface

We have been running a quite large Gitlab installation for our development colleagues for about five years. Since March this year we have also been using our Gitlab installation for CI/CD/CD (continuous integration/continuous delivery/continuous deployment) with Docker. Since our developers started to love the flexibility and the power of Gitlab in combination with Docker, the number of build jobs is rising continuously. After approximately six months we have run nearly 7000 pipelines and nearly 15000 jobs.

Some of the pipelines run quite long, for example Maven builds or multi Docker image builds (microservices). Therefore we are running out of local on-premise Gitlab runners. To be fair, this would not be a huge problem because we have a really huge VMware environment on site, but we wanted to test the Gitlab autoscaling feature in a real-world, real-life environment.

Our company is a Microsoft enterprise customer, and therefore we have the possibility to test these things in a slightly different environment than usual.

Cloud differences

As mentioned before, we have a more sophisticated on-premise cloud integration. Currently we have a site-to-site connection to Microsoft. Therefore we are able to use the Microsoft Azure cloud as if it were an offsite office reachable over the WAN (wide area network).

Gitlab autoscale runner configuration

At first glance we just followed the instructions from the Gitlab documentation. The documentation is fairly sufficient, especially together with the corresponding docker+machine documentation for the Azure driver.

Important: Persist the docker+machine data! Otherwise, every time the Docker Gitlab runner container restarts, your information about the created cloud VMs will be lost!

Gitlab runner configuration
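
A sketch of the relevant config.toml part for such an autoscale runner (all values are placeholders, not our real settings):

    concurrent = 10

    [[runners]]
      name = "azure-autoscale-runner"
      url = "https://gitlab.example.com/"
      token = "RUNNER_TOKEN"
      executor = "docker+machine"
      [runners.docker]
        image = "docker:latest"
      [runners.machine]
        IdleCount = 2
        IdleTime = 1800
        MachineDriver = "azure"
        MachineName = "gitlab-runner-%s"
        MachineOptions = [
          "azure-subscription-id=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
          "azure-resource-group=gitlab-runners",
          "azure-location=westeurope",
          "azure-vnet=our-vnet",
          "azure-subnet=our-subnet"
        ]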

Forward, we have to go back!

At this point, the troubles were starting.

Gitlab autoscale runner Microsoft Azure authentication

We decided to run the Gitlab autoscale runner with the official Gitlab runner Docker image, as we do with all our other runners. After the basic Gitlab runner configuration (as provided in the documentation), the docker+machine driver will try to connect to the Microsoft Azure API and will start to authenticate the given subscription. Because we are running the Gitlab runner as a Docker container, you have to view the container logs to see the Microsoft login instructions.
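
For example (assuming the container is named gitlab-runner):

    docker logs -f gitlab-runner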

Important: The Microsoft Azure cloud login information is stored inside a hidden folder in the /root directory of the Gitlab runner container! You should really persist the /root folder, or you will have to authenticate every time you start the runner!

Important: Use the MACHINE_STORAGE_PATH environment variable to control where docker-machine stores the virtual machine inventory! Otherwise you will lose it every time the container restarts!
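
A start command that persists both locations could look like this sketch (the host paths and the container name are just examples):

    docker run -d --name gitlab-runner --restart always \
      -e MACHINE_STORAGE_PATH=/root/.docker/machine \
      -v /srv/gitlab-runner/config:/etc/gitlab-runner \
      -v /srv/gitlab-runner/root:/root \
      gitlab/gitlab-runner:latest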

If you miss this step, you will have to log in again. You will notice this problem if you look at the log messages of the container. Every five seconds you will see a login attempt. Each login attempt is only valid for a short time period, and every time you would have to enter a new login code online. You can work around this deadlock situation if you open an interactive shell into the Docker Gitlab autoscale runner and enter the docker-machine ls command. This command will wait until you have entered the provided code online.
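
For example:

    # open an interactive shell in the runner container
    docker exec -it gitlab-runner bash
    # inside the container: this command now waits for the online login
    docker-machine ls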

Docker Gitlab autoscale swarm configuration

Important: This is a swarm compose file!
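
A sketch of such a swarm compose file (image, host paths and placement are examples; deploy it with docker stack deploy):

    version: "3.2"

    services:
      gitlab-runner:
        image: gitlab/gitlab-runner:latest
        environment:
          # keep the docker-machine inventory inside the persisted /root
          - MACHINE_STORAGE_PATH=/root/.docker/machine
        volumes:
          - /srv/gitlab-runner/config:/etc/gitlab-runner
          - /srv/gitlab-runner/root:/root
        deploy:
          replicas: 1
          placement:
            constraints:
              - node.role == manager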

Microsoft Azure cloud routing table is deleted

Because we are connecting to the Microsoft Azure cloud from within our WAN, there are special routing tables configured in the Microsoft Azure cloud, so that the network traffic finds its way to the cloud and back from it to our site. After we started the first virtual machine, we recognized that we could not connect to it. There was no chance to establish an SSH connection. After some research we found out that the routing table had been deleted. After some more tests and drilling down into the docker+machine source code, we were able to pin the problem down to one Microsoft Azure API function. To get rid of the problem we wrote a small patch.

The pull request is available at Github, and you can also find all further information there. If you like, please vote up the patch.

Microsoft Azure cloud storage accounts are not deleted

After we sailed around the first problem, we immediately hit the next one. The Gitlab autoscale runner does what it is meant to do: it creates virtual machines and deletes them. But the storage accounts are left over. As a result, your Microsoft Azure resource group gets messed up with orphaned storage accounts.

Yes, you guessed it, I wrote the next patch and submitted a pull request at Github. You can read up on all the details there. If you like, please vote up the patch there :). Thank you.

Conclusion

After a lot of hacking, we are proud to have a fully working Gitlab autoscale runner configuration with Microsoft Azure on-premise integration up and running. Regarding the issues found, we have also contacted our Microsoft enterprise cloud consultant to let Microsoft know that there might be a bug in the Microsoft Azure API.

Blog picture information

The blog picture of this post shows the Sojourner rover from the NASA Pathfinder mission. In my opinion an appropriate picture, because you never know what you will face in uncharted territory.

SSL offloading – Nginx – Apache Tomcat – showcase

Following up on our post covering SSL – Nginx – Apache Tomcat offloading, we decided to create a small showcase project. You will find it under the following link: https://github.com/n0r1sk/https-nginx-tomcat. This project should clarify all configuration tasks needed to get up and running with Nginx and Apache Tomcat. Furthermore, you will find extensive documentation there. If you have questions, you can open an issue at the Github project. Have fun -M

Docker: Enabling mailing in php:apache (running WordPress)

Starting point

When using the php:apache image from https://hub.docker.com/_/php/, mail support is not enabled. On my Docker host, Postfix is already enabled and configured to accept and forward mails from my Docker containers. So I should be able to send mails.

When I decided to install WordPress by myself, I ended up with this Dockerfile and did the configuration using a volume over /var/www/html:
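
A minimal sketch of it (the PHP version is an example; WordPress itself lives in the /var/www/html volume):

    FROM php:7.1-apache
    # WordPress needs the MySQL extension
    RUN docker-php-ext-install mysqli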

In the end I realized that mails were not working. PHP's answer when asking for the sendmail_path is this:
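
(output along these lines; the exact format may differ)

    $ php -i | grep sendmail_path
    sendmail_path => no value => no value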

And the PHP mail() result is this:
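
(a sketch of the call and its result)

    $ php -r 'var_dump(mail("user@example.com", "test", "test"));'
    bool(false)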

The manual approach first:

Entering the container:
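
(assuming the container is named wordpress)

    docker exec -it wordpress bash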

Then installing sSMTP:
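
    # inside the container
    apt-get update && apt-get install -y ssmtp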

and configuring it. Here is my ssmtp.conf:
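
A sketch of that file, with the mailhub pointing to the Postfix on the Docker host (addresses and domains are examples):

    # /etc/ssmtp/ssmtp.conf
    root=postmaster
    # the Postfix on the Docker host (docker0 bridge address)
    mailhub=172.17.0.1
    rewriteDomain=example.com
    hostname=blog.example.com
    FromLineOverride=YES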

Changing the sendmail_path for PHP is easy, as there is no php.ini yet:
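
(the conf.d path is the configuration scan directory of the official php image)

    echo 'sendmail_path = "/usr/sbin/ssmtp -t"' > /usr/local/etc/php/conf.d/mail.ini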

After restarting Apache,
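
(for example by simply restarting the container from the Docker host)

    docker restart wordpress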

PHP is responding with a valid sendmail_path:
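
    $ php -i | grep sendmail_path
    sendmail_path => /usr/sbin/ssmtp -t => /usr/sbin/ssmtp -t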

Now PHP mail() and also WordPress mailing are working fine ;-).

The new Dockerfile:
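
(a sketch that combines the manual steps from above; the PHP version is an example)

    FROM php:7.1-apache
    # mail support: sSMTP forwards mails to the Postfix on the Docker host
    RUN apt-get update \
        && apt-get install -y ssmtp \
        && rm -rf /var/lib/apt/lists/*
    COPY ssmtp.conf /etc/ssmtp/ssmtp.conf
    RUN echo 'sendmail_path = "/usr/sbin/ssmtp -t"' > /usr/local/etc/php/conf.d/mail.ini
    # WordPress needs the MySQL extension
    RUN docker-php-ext-install mysqli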

You need to have the above ssmtp.conf file to build the image.

Docker TOTD

If you edit a Docker bind mounted file (-v sourcefile:destinationfile), you may have noticed that you face a stale file handle under certain circumstances, especially if you edit the file with Vim on your Docker host.

This is because Vim copies the content of the original file to a new one, and after you save the changes, Vim swaps the two files. The result of this operation is that the inode of the file changes.

Docker uses the inode of the file for the bind mount, and therefore, quite correctly, the file handle inside the container goes stale after this operation.

Workaround

Just open Vim without specifying the file. Afterwards, type :set backupcopy=yes and open the file you would like to edit with :e yourfile. With this option you will edit the original file handle and not a copy.

Source

Github issue
Stackoverflow

Vim-Config:
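
To make this permanent, a two-line ~/.vimrc sketch:

    " ~/.vimrc
    " keep the original file (and its inode) when writing
    set backupcopy=yes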

Docker Endeavor – Episode 3 – Orbit

General

It's been two months since the last Docker Endeavor post, but we weren't lazy! On the contrary, we built a lot of new stuff, changed a lot of things, and of course learned a lot too! In between I passed my master's exam, and therefore the last two months were really busy. Besides this, Bernhard and I met Niclas Mietz, a fellow of our old colleague Peter Rossbach from Bee42. We met Niclas because we booked a Gitlab CI/CD workshop in Munich (in June) – and funnily, Bernhard and I were the only ones attending this workshop! Therefore we had a really good time with Niclas, because we had the chance to ask him everything we wanted to know specifically for our needs!
Thanks to Bee42 and the DevOps Gathering for mentioning us on Twitter – what a motivation to go on with this blog!
Also, Kir Kolyshkin, one of the container fathers, whom we met in 2009, is now working as a developer for Docker. We are very proud to know him!

Review from the last episode

In the last episode we talked about our ingress-controller, the border-controller and the docker-controller. For now we have dropped the docker-controller and the ingress-controller, because they added too much complexity, and we managed to get up and running with a border-controller in conjunction with externally created swarm networks and Docker Swarm internal DNS lookups.

Gitlab CI/CD/CD

Yes, we are going further! Our current productive environment is still powered by our workhorse OpenVZ. But we are now also providing a handful of Docker Swarm services in production / development & staging. To get both CI (continuous integration) and CD/CD (continuous delivery / continuous deployment) up and running, we decided to basically support three strategies.

  • At first, we use Gitlab to create deployment setups for our department, DevOps. We've just transitioned our Apache Tomcat setup to an automatic Docker image build powered by Gitlab. Based on this we created a transition repository where the developer can place his or her .war package. This file then gets bundled with our Docker Tomcat image, built beforehand, and the result is also pushed to our private Docker registry. Afterwards it is deployed to the Docker Swarm. KISS – keep it simple & stupid.

  • Second, the developers of our development department use Gitlab, including the Gitlab runners, to build a full CI pipeline, including Sonar, QF-GUI tests, Maven and so on.

  • Third, we have projects which combine both the CI and the CD/CD mechanisms, for productive and testing/staging environments.

Gitlab-CI-CD-Overview

Update of the border-controller

Our border-controller now only uses the Docker internal Swarm DNS service to configure the backends. We do not use the docker-controller anymore; therefore this project of ours is deprecated. Furthermore, in the latest development version of our border-controller I've included the possibility to send the border-controller IP address to a PowerDNS server (via its API). Thanks to our colleague Ilia Bakulin from Russia, who is part of my team now! He did a lot of research and supported us in getting this service up and running. We will need it in the future for dynamic DNS changes. If you are interested in this project, just have a look at our Github project site or directly use our border-controller Docker image from DockerHub. Please be patient, we are DevOps, not developers. 🙂

Currently we are not using Traefik for the border-controller, for two main reasons.

  • First, our Nginx based border-controller does not need to run on a Docker Swarm manager node, because we are not using the Docker socket interface with it. Instead we are using the built-in Docker Swarm DNS service discovery to get the IP addresses for the backend configuration (see the lookup example after this list). This also implies that we don't need to mount the Docker socket into the border-controller.

  • Second, in the latest version the border-controller is able to use the PowerDNS API to automatically register the load balancer's IP address and DNS name in the PowerDNS system. That is important from the users' point of view, because normally they use a domain name in the browser.
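
For illustration, this is how the backend task IPs can be resolved from within a container attached to the same overlay network (the service name is an example):

    # returns one A record per running task of the service
    nslookup tasks.my-backend-service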

Border-Controller-Overview

Current Docker Swarm state

Currently we run approximately 155 containers.

Summary

In this blog post we talked about CI/CD/CD pipelines and strategies with Gitlab and about our own border-controller based on Nginx. In addition we gave you some information on what we did during the last two months.

Orbit

The blog headline picture shows the Space Shuttle Challenger in orbit during the STS-7 mission (22 June 1983; NASA photo STS07-32-1702).

Nginx Reverse Proxy with SSL offloading and Apache Tomcat backends

In our current native Docker environment, we are using Nginx as our border controller (link) to manage the traffic and the (sticky) user sessions for our Apache Tomcat servers. But together with our developers we found out that there is a major problem when combining HTTPS encryption on Nginx with the Apache Tomcat HTTP connector as the backend interface.

The problem

If Apache Tomcat is not configured correctly (server.xml and web.xml), some of the automatically created redirect links (created by Apache Tomcat itself) in the application will still point to http resource URLs. This leads to double requests and, of course, to a broken application if you are using a modern browser like Chrome (insecure content in a secure context).

The solution(s)

Apache Tomcat server.xml

You have to modify the Apache Tomcat server.xml and add the parameters scheme="https", secure="true" and proxyPort="443". Afterwards your HTTP connector setting should look like the following code, and the request object in Apache Tomcat will have the correct scheme.
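
(the ports are common defaults; adapt them to your setup)

    <Connector port="8080" protocol="HTTP/1.1"
               connectionTimeout="20000"
               redirectPort="8443"
               scheme="https" secure="true" proxyPort="443" />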

web.xml

Usually you will enable the x-forwarded-for header in the Nginx configuration. Afterwards, on the backend, you can retrieve the header inside your – in the case of Apache Tomcat – Java code. But this would be a manual way to do it. To be compatible with this header out of the box, you can add a filter to your web.xml. Afterwards the x-forwarded-proto header will automatically be set inside the Apache Tomcat request object. Here is the needed part of the web.xml:
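
A sketch using Tomcat's built-in RemoteIpFilter (the filter name is arbitrary):

    <filter>
      <filter-name>RemoteIpFilter</filter-name>
      <filter-class>org.apache.catalina.filters.RemoteIpFilter</filter-class>
      <init-param>
        <param-name>protocolHeader</param-name>
        <param-value>x-forwarded-proto</param-value>
      </init-param>
    </filter>
    <filter-mapping>
      <filter-name>RemoteIpFilter</filter-name>
      <url-pattern>/*</url-pattern>
    </filter-mapping>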

Summary

After some research we figured out how to configure Apache Tomcat to work seamlessly with Nginx as a reverse proxy in front of Apache Tomcat backends.

Lenovo Y50-70 replace keyboard

I didn’t find a full tutorial on how to do it, so I decided to post it here. It was my first try and took me about 3 hours last night.

Step 1 – Order a replacement

I started from http://pcsupport.lenovo.com/us/en/products/laptops-and-netbooks/lenovo-y-series-laptops/y50-70-notebook-lenovo/80ej/80ejcto/parts and ordered it at amazon.de.

y50-70 the new keyboard

Step 2 – Follow this tutorial

Step 3 – Take some time (a bottle of wine) and go on like this

I marked the places to work on with the blue tools…

y50-70 under the motherboard

y50-70 speakers

Remove the black foil

y50-70 remove the black foil

y50-70 remove the power cable

y50-70 remove the last screw

y50-70 ahhrrrr

y50-70 ahrr

y50-70 got it

y50-70 too early

y50-70 great joy

y50-70 and all the way back

y50-70 and on and on

y50-70 foil is back

… and all the way back.
