Docker TOTD 3 – Consensus and Syslog

If you make a mistake and do not correct it, this is called a mistake. [Confucius]

Today we faced a problem with our Docker Swarm caused by a permanently restarting service. The service in question was Prometheus, which we use for monitoring our Docker environment.

The story starts in the middle of last week, when the new Prometheus (version 2) was released. In the docker-swarm.yml we use for the Prometheus service, we stupidly still used prometheus:latest. Did you notice the latest? We had been warned (at DockerCon) not to use this. Yes, there are a lot of examples on the internet which do exactly this, but it is a very bad idea. latest literally means unknown, because you will not know which image is referenced by the latest tag. latest is only a convention, not a guarantee! Therefore, pin the version of the image which you really want to use, e.g. prometheus:1.1.1.
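A minimal sketch of what the fix looks like in the stack file (only the tag matters here):

    # docker-swarm.yml (fragment)
    services:
      prometheus:
        # pin an exact, known-good version instead of :latest
        image: prometheus:1.1.1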

In our case, caused by an unplanned service update, the Prometheus image was freshly pulled (you know, latest) and corrupted the Prometheus database. Furthermore, the Prometheus configuration format changed between the versions, which in turn caused a permanent restart of the service. That happened on the weekend, which wouldn't have been bad, but it stressed the container engine.

This is documented in this GitHub issue. The result of this bug is that syslog gets spammed with a lot of pointless messages, which will fill up your log partition after some time (maybe hours, but it will fill up).

At this point it gets icky. In a default Ubuntu setup, for example, the /var partition contains the log directory and of course the lib/docker directory too. If the /var partition of the system is full, Docker cannot write its Raft data anymore and your Docker Swarm will be nearly dead.

In our case we had made a configuration mistake, because we used four Docker Swarm manager nodes, not three and not five. Now we come to the ugly level. With bad luck, the filled-up /var partition killed two of our Docker Swarm managers. The containers continued to work, but the cluster orchestration was messed up, because two out of four nodes were dead. With four managers the quorum is three (floor(4/2) + 1), so two surviving managers means no quorum anymore, no consensus, no manageability.

But, no panic, there are ways to bring all services back online with some Linux voodoo (truncating syslog files, …). A hedged sketch of that voodoo, assuming a default Ubuntu layout:
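    # free space on /var immediately by emptying the spammed syslog file
    sudo truncate -s 0 /var/log/syslog
    # shrink the systemd journal too (the target size is just an example)
    sudo journalctl --vacuum-size=200M
    # verify /var has free space again
    df -h /var

To sum it up, what are the lessons learned?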

  1. Watch out for the correct number of Docker Swarm managers (there has to be an odd number of them: 3, 5, 7, …)
  2. Never ever use the latest tag if you are not the maintainer of the image!
  3. Restrict syslog so it cannot fill up your partition. Place the syslog logs on a separate partition, or disable syslog forwarding in the systemd journald config with ForwardToSyslog=false (see the sketch after this list). journald's default configuration is to use a maximum of 10% of the disk space for its log data.
  4. Maybe use LinuxKit. It is made out of containers, which can be restricted so that they do not use up all system resources. If you ever asked yourself why you should have a look at it, read number two. The Docker host is not a general purpose server; you likely do not need the default syslog and much more. This is what LinuxKit is designed for.
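A minimal journald sketch for lesson three, assuming a systemd-based host (the drop-in file name and the 5% cap are illustrative):

    # /etc/systemd/journald.conf.d/99-docker-host.conf
    [Journal]
    # stop duplicating every message into /var/log/syslog
    ForwardToSyslog=false
    # cap the journal itself (10% of the filesystem is journald's default)
    SystemMaxUse=5%

Restart journald afterwards with systemctl restart systemd-journald.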

That's all for today.

-M

(Image by Walter Grassroot, Wikipedia)


Docker TOTD – container id to aufs diff mount id

Ever asked yourself where to find the information about which aufs diff folder maps to a running container? It's pretty simple if you just poke around a little bit in the /var/lib/docker folder. But let's start at the beginning.

In the default Docker engine configuration, aufs is the storage driver used for images and container diffs. You can read more about the aufs driver in the official Docker aufs documentation here.

To sum it up, the base entity of every running container is its underlying image. This insight directs us to the /var/lib/docker/image folder, more precisely to the /var/lib/docker/image/aufs/layerdb/mounts/ folder.

Inside the mounts folder you will find directories named after your running containers (by full container ID). Inside these directories you will find a file called mount-id. This file contains the aufs id, which will help you find the correct aufs diff directory of the running container in the /var/lib/docker/aufs/diff/ folder.

Here is a full example – a sketch following the steps above, assuming the default aufs layout and at least one running container:
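    # pick a running container; docker ps -q --no-trunc prints full container IDs
    CONTAINER_ID=$(docker ps -q --no-trunc | head -n 1)
    # read the aufs mount id recorded for that container
    MOUNT_ID=$(cat /var/lib/docker/image/aufs/layerdb/mounts/${CONTAINER_ID}/mount-id)
    # list the container's writable layer, i.e. its aufs diff directory
    ls /var/lib/docker/aufs/diff/${MOUNT_ID}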

Happy hacking – M

(Featured image taken from the official Docker aufs documentation here)


DockerCon EU 2017 – recap…

First of all

Mario & I were really excited going to DockerCon EU 2017 and didn't know what was waiting for us. Monday started at about 04:00 AM by driving to Klagenfurt (Austria) and flying to Copenhagen via Vienna. After arriving at Copenhagen airport we managed to buy a metro/bus/train ticket so we could ride to our hotel near the Bella Center. As we arrived at our hotel we stored our luggage, connected our laptops via WiFi and had a short look whether everything was fine @work. I had some battery problems with my laptop the whole week (it always needed the power cable to start up) and also with my mobile phone lacking battery power, so I was forced to charge my mobile via my laptop during the day… But everything went fine in the end and I had enough battery power to stay connected. 🙂

Arriving at Bella Center

Afterwards we went to the Bella Center to register and get our name badges. In our opinion the location was really big, and some work was still going on to finalize it for the public crowd. After registration we called an old friend of ours to meet – Peter Rossbach. We met him and his team (Niclas Mietz, …) shortly after registration. It was nice to see them and we started to chat about what we were all waiting for. In the evening the welcome reception took place in the expo hall, where we got our T-shirts and also got a sneak peek of what was waiting for us over the next days. And for me that was the start of collecting some Docker pins… 🙂
We also had some nice chats with Peter Rossbach & Niclas Mietz, whom we had met before, and also with Docker Captain Dieter Reuter, whom we hadn't met before but knew was working closely with Peter.

Day one – the keynote

The next day started with the first keynote. But wait, there was also a breakfast before that, and yes, that breakfast definitely needs to be mentioned here. 🙂 I have never seen such a big conference with such great catering throughout the whole week! Starting with the free coffee from the espresso machines and going on to the really good meals/dishes throughout the whole conference.
But what about the keynote? In my opinion the keynote started very well. The speakers gave an overview of the upcoming new features and of the keynote's theme, MTA (Modernizing Traditional Applications). The demos were a bit odd but funny to watch, and the announcement that Kubernetes will be integrated into the Docker ecosystem beside Docker Swarm was really interesting.

Day two – the keynote

The keynote of day two wasn't that good in my opinion, because the speakers focused too much on the teleprompter and so the talks dragged on at length… It felt like everyone in the audience was waiting for it to end so another great day of more meaningful talks could start. Also, for me, the MetLife story was used too much – everything was just “MetLife”… I understand that this is a good partner for Docker to showcase what can be done, but it's not always about enterprise.
Also, revealing that IBM is becoming another platinum partner with its Bluemix, so that customers can now choose from some of the biggest IT companies to assist them in migrating to a Docker environment, wasn't the announcement the audience was waiting for…

Mario’s tracks

Here is a description of the tracks I (Mario) attended, including a short recap of each.

Play with Docker (PWD): Inside Out (Community Track)

That was the first track I attended and it was a great one. Unfortunately there are no slides online, but you can watch the video. The track was about the latest changes to the Play with Docker online application, and for me one of the highlights was that there will be the possibility to use a directly attached SSH console to communicate with the PWD instance. Play with Docker (PWD): Inside Out

Learning Docker from Square One (Using Docker Track)

This one was held by Chloe Condon and, as the topic states, it was a basic track about Docker. But it was a very cool track, because Chloe is a great, fun and refreshing person. I really liked her style of presentation!

Experience the Swarm API in Virtual Reality (Community Track)

My second community track, and it was a little bit of a strange one, but not a bad one. The speaker created a 3D virtual reality view of the Docker API! Interesting and unusual.

LinuxKit Deep Dive (Black Belt Track)

Wow, this was one of the heavy tracks. In 30 minutes the audience and I got a really tough and very interesting inside view of what has happened to and with LinuxKit in the past six months. For me this was great, because together with the additional information about the Kubernetes integration into Docker, the steps taken really made sense.

A Story of Cultural Change: PayPal’s 2 Year Journey to 150,000 Containers with Docker (Transform Track)

One of my personal favorites from DockerCon EU 17! Why? I am not a pure technician; I have to organize a lot of management stuff too, but of course I am also a visionary engineer. Therefore I really liked the talk by Meghdoot Bhattacharya about the transition from the classic virtual-machine-driven infrastructure to the (not perfect at this time) Docker container infrastructure. We never went this way, because we already started with containers in 2006 (OpenVZ), and as such we are happy to have missed the interim stage of having hundreds or thousands of virtual machines to manage. For me it was a confirmation that we did it right in the past, we are doing it right in the present and probably we are going to do it the right way in the future.

Tuesday afternoon meeting with Michael Dielman (Docker Booth)

On Tuesday afternoon we had a short meeting with Michael Dielman, who is the account executive of Docker for the DACH region. Michael is a really cool person and we had a great, inspiring talk. During the meeting we came in contact with two other Docker employees as well, namely Andreas Lambrecht and Andreas Wilke – was nice to meet you guys! Together with Michael we discussed a lot of topics, including the Docker EE version and why we are currently not going to use Docker EE on-prem. He did not push us (our company) to switch from Docker CE to Docker EE; on the contrary, he and his colleagues were really interested in how we manage it on-prem. One question we faced from nearly all the people we met was whether we are using Docker, and more specifically Docker Swarm, in a productive environment. Our answer was: of course, yes, why not – followed by a longer explanation of how we manage it.

Eureka! The Open Solution to Solving Storage Persistence (Ecosystem B Track)

This was a short talk about the progress of RexRay, held by Chris Duchesne, which was very interesting too, because it addresses one of the main problems in the Docker ecosystem currently – persistent container storage. It was an impressive short talk, but it was also very informative and we were able to get a lot of benefits and ideas out of it.

My Journey To Go (Transform Track)

The last track of the first day for me, held by Ashley McNamara, and it was my favorite talk number two. Ashley's presentation was very brisk and inspiring. Hopefully a lot of people will follow her example to find a way to contribute and to get in touch with technology.

Tips and Tricks of the Docker Captains (Using Docker Track)

First track of Wednesday, presented by Adrian Mouat. Some of the tips we knew already, but there was a bunch of new things in the presentation for us. A very good and informative track.

Docker to the rescue of an Ops Team (Community Track)

Held by Rachid Zarouali, this was a classic transition track. Interesting insights on how they moved from the “old” fashioned deployment way to the much more modern CI/CD way. Too bad that they haven't gotten further in terms of CI/CD so far, because managing configuration with Puppetmaster for Docker volume mounts is a little bit odd. But maybe they just haven't finished it yet, so maybe we will hear from him next time.

Practical Design Patterns in Docker Networking (Using Docker Track)

Another great talk and one of my favorites. Don Finneran gave the audience a lot of information about Docker networking, and yes, there were some things I did not know about until this talk.

Gordon’s Secret Session: Kubernetes on Docker (Black Belt Track)

Brilliant! A huge talk about the Kubernetes integration in Docker. A lot of information and insights were given to the audience in a very short time.

Bernhard’s tracks

In the following you can read through my (Bernhard's) recaps of the talks I attended.

Play with Docker (PWD): Inside Out (Community Track)

The first one I attended – the talk was really good and we got some insights into the future of PWD! The guys are really working hard on enhancing the features of PWD, including a directly attachable SSH console, getting rid of the ReCaptcha and also removing the 4-hour limit. Some really nice features for the next releases! It was a really good talk, watch it!
Play with Docker (PWD): Inside Out

What’s New in Docker (Best Practices Track)

Nice talk from Vivek Saraswat with some more detailed information on the newest Docker features. For the Docker CE, which we use in production, overlay2 being preferred over aufs as the new storage driver and the addition of “--chown” to ADD/COPY in the Dockerfile were the highlights for me in this talk.
What’s New in Docker
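A quick sketch of that --chown flag (the user and file names are made up for illustration):

    FROM alpine:3.6
    RUN adduser -D app
    # set the owner during COPY instead of an extra RUN chown layer
    COPY --chown=app:app app.conf /etc/app/app.conf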

Experience the Swarm API in Virtual Reality (Community Track)

A fancy talk about visualizing the Docker API so you can “walk” through it in virtual reality. Not that important for production use cases, but fun to watch.
Experience the Swarm API in Virtual Reality

Tales of Training: Scaling CodeLabs with Swarm Mode and Docker-Compose (Community Track)

The talk was about building a robust training environment that is scalable, reproducible & simply usable by other trainers. Not that useful for us, but nice to see what others implement with Docker.
Tales of Training: Scaling CodeLabs with Swarm Mode and Docker-Compose

Docker?!?! But I’m a SysAdmin (Using Docker Track)

“In the first step it's all about what you specifically need” – that sums up what this talk was all about. Ask yourself some questions to make the right decisions for your needs. The best part of this talk for me was that Mike Coleman offered that you can write to him if you have something important to share with the Docker community! Thanks for that, we'll really come back to him!
Docker?!?! But I’m a SysAdmin

Creating Effective Images (Using Docker Track)

Learning about what you can do yourself to get simple & small container images which only have the tools installed that the running application needs.
Creating Effective Images
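A hedged sketch of one way to get there – a multi-stage build that ships only the compiled binary (names and versions are illustrative):

    # build stage: full Go toolchain
    FROM golang:1.9 AS build
    WORKDIR /src
    COPY main.go .
    RUN CGO_ENABLED=0 go build -o /app main.go

    # final stage: tiny base image containing nothing but the binary
    FROM alpine:3.6
    COPY --from=build /app /app
    ENTRYPOINT ["/app"]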

Tuesday afternoon meeting with Michael Dielman (Docker Booth)

For me it was really cool to meet Michael Dielman and some guys (Andreas Wilke) from Docker who work in Europe and whom you can maybe contact in the future if the need arises. I pretty much enjoyed the chat with Andreas Lambrecht at the after party, because we have things in common in our IT history regarding the company branches, so he understood why we work on things the way we do today.

Eureka! The Open Solution to Solving Storage Persistence (Ecosystem Track)

Sad that neither the video nor the slides are online by now; hopefully they will be uploaded! It was a really great talk by Chris Duchesne with some information about RexRay and how persistent container storage could be implemented with it. We are looking forward to testing this in our environment!

Back to the Future: Containerize Legacy Applications (Use Case Track)

Modernizing Traditional Applications (MTA) was the theme of this DockerCon, and this talk was about modernizing the infrastructure of your legacy application. After that, you use this new infrastructure to modernize the application itself, using the new and in every way more flexible delivery methods to, in the end, deploy your application faster. For me the talk underlined what we are going through in our environment, though for us without the tool set for “converting” the application stack to our Docker environment. I think this tool set will help people out there who won't get into the details of Docker in the first step but need to make the step forward to a containerized world.
Back to the Future: Containerize Legacy Applications

Tips and Tricks of the Docker Captains (Using Docker Track)

A good talk to start the second day. We got some tips and tricks around the Docker environment. Some of them we already knew & some others were new to us. Everyone using Docker should have a look at the video & slides; I think everyone can take something out of this cool talk by Adrian Mouat!
Tips and Tricks of the Docker Captains

Empowering Docker with Linked Data Principles (Community Track)

Running your applications in the Docker environment gives you masses of metadata around your running application stack. So what if you want to search for some metadata that some of your applications have in common? Semantic data may be the key. A really nice talk about what you may need in the future…
Empowering Docker with Linked Data Principles

Monitoring Containers: Follow the Data (Ecosystem Track)

Hmm, what should I say. I expected a more technical talk about monitoring containers in a Docker environment rather than a business talk about Datadog…

Container Orchestration from Theory to Practice (Black Belt Track)

Cool talk from Stephen Day & Laura Frank about the internals of Docker SwarmKit. Everyone who wants to know some internals, have a look at the video!
Container Orchestration from Theory to Practice

Docker to the Rescue of an Ops Team (Community Track)

A talk about the transition to Docker with CI/CD, but as Mario wrote above about this talk, I also don't understand why they use Puppet rather than migrating the whole thing, including configurations, to CI/CD… We also use Puppet for our Docker hosts (and of course our non-Docker environments), but no further. Maybe we will see more from him in a future DockerCon track. It would be interesting to see their further development.
Docker to the Rescue of an Ops Team

Taking Docker to Production: What You Need to Know and Decide (Using Docker Track)

Cool, what a talk to end DockerCon day two. I'm also a gaming kid of the 80s/90s, so following that talk by Bret Fisher was really cool. Not only because of the games, but they made the talk easy to follow and concentrate on at the end of the day! Summing up the talk, I would say: start using Docker the simplest way you are comfortable with, don't make CI/CD a show-stopping requirement, and also don't start with a persistent application like a database container. So, really cool talk, have a look at it, you won't regret it!
Taking Docker to Production: What You Need to Know and Decide

Bernhard’s track summary

In this short summary I want to highlight four of the tracks I attended, because they provided a lot of information which we will be needing and using in our environment.
Favorite No. 1 – Mike Coleman – Docker?!?! But I'm a SysAdmin
Favorite No. 2 – Chris Duchesne – Eureka! The Open Solution to Solving Storage Persistence (Ecosystem Track)
Favorite No. 3 – Adrian Mouat – Tips and Tricks of the Docker Captains
Favorite No. 4 – Bret Fisher – Taking Docker to Production: What You Need to Know and Decide

DockerCon party

A shuttle service via buses took us to the DockerCon “after day one” party. Really cool that we didn't have to look at metro/bus/train plans to find our way to the after party. Everything was organized so that everyone felt comfortable: the shuttle service brought us to the after party and back to the Bella Center afterwards. 😉
Arriving there, we went into an old train workshop hall where everyone could drop their jackets/backpacks so that you could fully enjoy the party! Everything was organized – drinks, meals/dishes, music, arcade games, life-sized Asteroids and Bricks & Pong where you could challenge others. I think most of the DockerCon attendees were at the party and enjoyed that evening. There was enough time to meet the people you had already met the first day, but also to meet new people and chat with them about their journey to Docker in their environments.
It was a really cool and greatly organized event, and I hope this wasn't the last one we'll attend.

Moby summit

The last day (for us) at DockerCon started a bit badly for me, because I had forgotten to select the “Join Moby Summit” radio button some weeks before DockerCon, so I thought I wouldn't be able to attend the summit that day. Mario and I went to the Bella Center and to the breakfast, and Mario got into the summit because he had checked that early enough before DockerCon.
So I went to the open place outside the Expo area and looked for a free desk with a power plug to get my laptop started – you know, my battery problems… After a while of sitting there, trying some things out with Portainer and reading a bit through the RexRay documentation and the Portus installation, a woman came out of the Expo area and said that there were some free places at the Moby Summit. Yes!, I thought, and I found myself fumbling my equipment back into my backpack in a hurry and literally running to the Moby Summit, where Mario was already sitting with a free chair beside him. So I also had the opportunity to listen to the talks from about 09:45. There were some interesting talks about Moby itself, InfraKit & LinuxKit.
After that we unfortunately had to leave to catch our flight back to Austria… A good end to DockerCon EU 2017!


Docker on Citrix Linux VDA: Dev and Ops powered

A real story.

What do you do if developers (your friends in this case) are forced by policy to use Windows at work? These developers are already using all-inclusive CI/CD pipelines (GitLab, Docker/DinD) and are realizing that they are lost on their local workstations. Why? They are using standard business workstations. For economic reasons there is no 16 GB Intel Core i7 option available for this kind of workstation. This means these developers are trapped: they have already built up testing, staging and education CI/CD pipelines with a lot of additional technologies, like Elasticsearch, Apache Ignite, Minio, …, but they aren't able to run them during the development process on their local workstations, because they immediately run out of local resources.

What do you do as an Ops in this situation?

In my opinion there is something like a Hippocratic oath for Ops: you help. And sometimes you have to go new ways to help. In our case we were sure that we had to implement a central system for our devs, because they are located at different branch offices. Even if we were able to buy workstation hardware powerful enough to run all the needed stacks, we could not manage it, because it would require managing the local Docker installations. There are different approaches and possibilities to run Docker on a Windows workstation, for example Vagrant (managing local VMs) or native Docker for Windows (LinuxKit based). But in our opinion none of these options is really convenient if you would like to use Docker Swarm in combination with centralized hardware. That is not a problem of Docker Swarm, because you can install the Swarm manager under Windows (hybrid Docker Swarm) too. The problem resides within the workstation installation itself, because workstations tend to break and have to be reinstalled. Imagine what would happen if the reinstallation of the workstations were handled by a different IT department. In the worst case the installation just restores a default image, and all of the previous work, including the Docker installation, would probably be lost. Now you could argue that you could back up the installation and so on and so forth, but as you can see, this is getting complex and it is lacking the “keep it simple, stupid” (KISS) approach.

Therefore the first thought was to use X2GO, which we have been using for several years for our common Linux terminal server that my team and I work on (because we have to use Windows by policy too). But the downside of X2GO is that it consumes a lot of network bandwidth if you would like to have 24-bit color depth on your desktop. For this reason, X2GO is only suitable if you can use it at LAN network speed. But our developers are spread over different international branch offices, and X2GO therefore won't work. Besides these technical reasons, there are also user experience reasons. For X2GO you need an installed client, just as you need the Citrix Receiver, but unlike the X2GO client, the Citrix Receiver is already installed on our workstations and is managed by a central department.

Some background

Bernhard and I were Windows admins before we became fully fledged open source Linux and web DevOps in the early 2000s. We used a lot of Citrix technologies back then (Citrix MetaFrame and MetaFrame XP). We always followed our colleagues who continued the Windows Citrix journey, and we also followed the news on this topic. So we knew that it should be possible to use Citrix Linux VDA to run a Linux desktop terminal server. This fits our use case perfectly. In combination with Docker Swarm, we would get an infrastructure able to support our developers, with the benefit of doing it centrally.

Current status

We did it. Now there is (currently) one Linux terminal server, which is also a Swarm manager, and we created some additional Linux worker VMs (without Citrix) to handle the load. Furthermore, the deployed Docker Swarm stack files use some placement rules so that containers are not placed on the terminal server. Now the developers, who are already GitLab CI/CD and Docker aware, can use this terminal server to develop their applications on a central desktop server with 64+ GB of RAM (only the VM is restricted; up to 512 GB of RAM is possible) powered by Cisco UCS blades (32 cores). They can use all of the Docker power without any restrictions; they can develop, they can test, they can try, they can learn. All of that is real because we are talking the same language – containers.
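A hedged sketch of such a placement rule in a stack file (the service name and node label are illustrative, not our actual setup):

    services:
      devstack:
        image: devstack:1.0
        deploy:
          placement:
            constraints:
              # never schedule this service on the terminal server node
              - node.labels.role != terminal-server

The label itself would be set once with something like docker node update --label-add role=terminal-server <node-name>.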

Summary

Look at the attached screenshots. In some areas they are blurred out to comply with policy, but I guess you get the idea. Worlds are coming closer and closer, and that is great! With this combination of a Citrix Linux terminal server and Docker Swarm as the container platform we achieve a lot of useful benefits. First, the developers (the users) can use their corporate login seamlessly, because the Linux installation is fully integrated with Microsoft Active Directory. The Citrix StoreFront portal is fully integrated into the common workflow of our colleagues. Furthermore, secure access to the development environment is provided through Citrix as well. This includes internationally bandwidth-optimized access too. Second, in combination with Docker Swarm, the developers have all the power in their hands to get the stacks up and running as close as they need to the other environments (test, staging, production). Third, the Ops (we) get better application stacks, which enables us to focus on our work towards realizing infrastructure as code in the future. And last, the setup is scalable in terms of Docker and in terms of Citrix terminal servers too! That's it for today.

If you have any questions, please contact us. We will answer them.


Contributing to Docker (or others)

General

Over the last weeks I was busy getting two pull requests merged for docker-machine. Here I will tell you the story, the experiences I gained, and why it was worth the work. I am writing this during our journey and flights to DockerCon EU 2017, because I am in a good mood, excited to be there soon, and it's another way to support the community too.

Last summer we started to use GitLab more extensively, because we got used to the integrated CI/CD pipelines in combination with Docker. For this setup we installed an appropriate number of GitLab runners too. After a month or two we recognized that we were facing some build peaks during the day, and therefore we decided to use our Microsoft Azure cloud connection to set up a GitLab autoscale runner.
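For context, a minimal sketch of the relevant fragment of the runner's config.toml (all values are illustrative, not our production settings):

    [[runners]]
      executor = "docker+machine"
      [runners.machine]
        IdleCount = 0
        MachineDriver = "azure"
        MachineName = "auto-runner-%s"
        MachineOptions = [
          "azure-subscription-id=00000000-0000-0000-0000-000000000000",
          "azure-resource-group=gitlab-runners",
        ]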

Docker Con EU 17 break on

Yes, I really wanted to write this blog post during DockerCon EU 17, but there was too much fun and information there! It was really exciting, and therefore this post was delayed until now… 🙂 But happily you might find a review of our DockerCon EU 17 journey and impressions here soon!

Docker Con EU 17 break off

This was the point where the problems started. The first thing I noticed was that I was not able to reach the Azure VM which is created when the GitLab autoscale runner starts. To figure out what was happening, I started the docker-machine binary manually and tried to create an Azure VM by hand, because the GitLab autoscale runner uses docker-machine in the background.
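A hedged sketch of such a manual test run (the subscription ID and resource names are placeholders):

    # create a VM by hand, the same way the autoscale runner does internally
    docker-machine create --driver azure \
      --azure-subscription-id 00000000-0000-0000-0000-000000000000 \
      --azure-resource-group gitlab-runners \
      --azure-vnet runner-vnet \
      --azure-subnet runner-subnet \
      debug-vm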

First contribution

After a few tries and some debugging runs of docker-machine I realized that the network routing table, which is used to establish the site-to-site connection to the Azure cloud, gets deleted upon the creation of a VM in the Azure cloud. That is really bad, because it does not only interrupt the connection to the freshly created machine, it makes the whole subnet unreachable. If there are other VMs in the same subnet, they are not reachable anymore either. Ouch.

Open source is great! Why? You can look at the code! And yes, even if you are not a full-time programmer – DO IT! I am an Ops guy, and I was able to do this too! OK, to be fair, we are often forced to solve problems on our own; especially if your main work is with Linux, you learn that you can do a lot of tasks in a more elegant manner if you just write a script, or two… (= code).

So yes, after some digging I found the place where to change the code, and I changed it for on-prem usage and tests. After some time I thought that it would be useful to others, so I filed an issue and also put in a corresponding pull request.

I had already learned to use Git, but through this pull request I learned a lot more! Thanks to Joffrey, who was very kind and supported me; after some additional work I was able to get my pull request merged. No more deleted routing table entries!

Second contribution

But the story does not end here 🙂 – during our on-prem tests we also recognized that the Azure storage accounts of deleted VMs are not deleted either! After some days of running the GitLab autoscale runner, we had messed up our Azure resource group with lots of orphaned storage accounts. Not nice 😉

I guess you know what is coming now? Correct: filing an issue. But wait! Always check whether there is already a filed issue for a problem! And yes, there was one already. So once more I changed the code for on-prem use, tested it and opened a pull request. And again Joffrey was so kind to help me with my questions. After a while this pull request was merged too, and hopefully it helps someone out there.

Conclusion

Yes, you can help others! There are plenty of things you can do for the community, not only coding. You can also support others by filing issues or writing documentation about projects you use (in no way limited to Docker). There was also a great talk at DockerCon EU 17 held by Ashley McNamara on this topic. To quote one of her slides and to end this blog post:

We are a community of coders. But if all we do is code, then we've lost the community.

Blog picture information

The earth seen from Apollo 17.


We are going to Docker Con EU 2017

Bernhard and I will attend DockerCon EU 2017! Glad to see some people there who are in contact with us on Slack or GitHub! Follow us on Twitter – we might post some updates there!

Image copyright by Docker Inc!

-M

