Docker Endeavor – Episode 1 – Pre Flight

Estimated reading time: 3 mins

General

This blog series will give you an insight into how our use of Docker has developed. We will start with some information about our working environment and about the first time we tried to use Docker at work. Hopefully you can learn from our pitfalls, and maybe you will have some fun reading it. These articles will be longer, so it will take some time between the posts.

About the picture

The picture is from NASA and shows the Space Shuttle Endeavour. It beautifully shows the cargo bay (no containers in there, but a similar idea, just in and for space), which is why we chose this picture and, of course, the name Endeavor for our Docker posting series. To endeavour means to try to do something; in our case, we try to run Docker on our on-premise infrastructure.

Pre-Flight

Bernhard and I have been doing containers since 2008. We started with OpenVZ and a handful of containers, just one or two applications and some sort of load balancing. The applications we ran were monolithic blocks: Apache Tomcat, Java and the web application bundled together. And this is still the case today. The deployment process is basically based on building Debian binary packages; after the build, the packages are uploaded to a private repository. But we kept going, and two years ago we began to change our deployment with a self-written Python program. This is where we are today.

Docker

Now it comes to Docker. Bernhard and I know Peter Rossbach from the Tomcat project, as a committer and as a consultant. He was one of the first to join the Docker community. Therefore we decided to give Docker a try, but three years ago this was a tough task for us. Too tough. There were too many problems, on the Docker side (load balancing) and of course on the developer side. So for us, who are most of the time the Ops in DevOps, this was impossible to lift. We canceled our first Docker experiment but kept it on our radar.

Time went by, and at the end of 2016 / beginning of 2017 we started a new approach. One of the key components, the load balancing, is much better now (Traefik), but in some circumstances still a pain. Why? We would like to run Docker on-premise! So there is no fancy GCE external IP load balancer and the like. There are hundreds of problems when it comes to fitting Docker into a grown, heterogeneous infrastructure. You need an example? What do you do if the only way to the internet is an HTTP proxy server? Yes, you have to change this first. And now think about the fact that this proxy model is a decade old and you have to tell someone that you need direct routing. Guess what, that's not easy to achieve.
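
As a side note: while you fight for direct routing, the Docker daemon itself can at least be pointed at an existing proxy. The following is only a minimal sketch of the systemd drop-in approach from the Docker documentation; proxy.example.com and port 3128 are placeholders for whatever your environment uses, and this only covers the daemon (image pulls and the like), not the traffic inside your containers.

    # /etc/systemd/system/docker.service.d/http-proxy.conf
    [Service]
    Environment="HTTP_PROXY=http://proxy.example.com:3128"
    Environment="HTTPS_PROXY=http://proxy.example.com:3128"
    Environment="NO_PROXY=localhost,127.0.0.1,.internal.example.com"

    # afterwards: systemctl daemon-reload && systemctl restart docker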

But on our way we met Timo Reimann, a very nice contact. After a few chats we were able to find our way to set up Docker, and today we are running about 150 containers in production; we started approximately two months ago.

To be continued

We will tell you more about our odyssey next time, in part two of this blog series. Hopefully this will help the one or the other who has to manage a lot of technical problems with Docker in a real-life environment and not a lab!

Posted on: Tue, 02 May 2017 15:07:32 +0200 by Mario Kleinsasser , Bernhard Rausch
  • General
  • Docker
  • Doing Linux since 2000 and containers since 2009. I like to hack new and interesting stuff: containers, Python, DevOps, automation and so on. Interested in science, and I like to read (if I find the time). My motto is "Imagination is more important than knowledge. [Einstein]". Interesting contacts are always welcome - nice to meet you out there - if you like, do not hesitate to contact me!
    CloudArchitect/SysOpsEngineer; loves to get things ordered the right way: "A tidy house, a tidy mind."; configuration management fetishist; loves backups; impressed by Docker; always up for getting in contact with interesting people - do not hesitate to write a comment or to contact me!