This blog series will give you an insight into how our use of Docker has developed. We will start with some information about our working environment and when we first tried to use Docker at work. Hopefully you can learn from our pitfalls, and maybe you will have some fun reading it. These articles will be on the longer side, so there will be some time between the posts.
The picture is from NASA and shows the Space Shuttle Endeavour. It beautifully shows the cargo bay (no containers there, but a similar idea, just in and for space), which is why we chose this picture and, of course, the name Endeavour for our Docker posting series. To endeavour means to try to do something; in our case, we try to run Docker on on-premise infrastructure.
Bernhard and I have been doing containers since 2008. We started with OpenVZ and a handful of containers, just one or two applications and some sort of load balancing. The applications we ran were monolithic blocks: Apache Tomcat, Java, and the web application bundled together. And this is still the case today. The deployment process is basically based on building Debian binary packages; after the build, the packages are uploaded to a private repository. But we kept moving: two years ago we started to change our deployment and switched to a self-written Python program. This is where we are today.
Now it comes to Docker. Bernhard and I know Peter Rossbach from the Tomcat project, as a committer and as a consultant. He was one of the first to join the Docker community. So we decided to give Docker a try, but three years ago this was a tough task for us. Too tough. There were too many problems, on the Docker side (load balancing) and of course on the developer side. For us, being most of the time the Ops in DevOps, this was impossible to lift. So we cancelled our first Docker experiment but kept it on our radar. Time went by, and at the end of 2016 and the beginning of 2017 we started a new approach. One of the key components, load balancing, is much better now (traefik), but in some circumstances still a pain. Why? We would like to run Docker on-premise! So there is no sexy GCE external IP load balancer and much more. There are hundreds of problems when it comes to fitting Docker into a grown, heterogeneous infrastructure. You need an example? What do you do if the only way to the internet is an HTTP proxy server? Yes, you have to change this first. And now consider that this proxy model is a decade old, and you have to tell someone that you need direct routing. Guess what, that's not easy to achieve.
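To illustrate the proxy problem: on a systemd-based host, the Docker daemon can at least be pointed at such a proxy with a drop-in unit. This is a minimal sketch; the proxy address and internal domain are placeholders, not our actual setup:

```ini
# /etc/systemd/system/docker.service.d/http-proxy.conf
# Make the Docker daemon pull images through the corporate proxy
# (addresses below are placeholders).
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1,.internal.example.com"
```

After adding the file, run `systemctl daemon-reload` and `systemctl restart docker`. Note that this only covers the daemon itself; processes inside containers and image builds still need their own proxy settings, which is exactly the kind of friction we mean.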
But along the way we met Timo Reimann, a very nice contact. After a few chats we were able to find our way to set up Docker, and today we are running about 150 containers in production - we started approximately two months ago.
We will tell more about our odyssey next time, in part two of this blog series. Hopefully this will help those of you who have to manage a lot of technical problems with Docker in a real-life environment, not a lab!