Testing Remote TCP/IP Connectivity – IBM i (AS400)

Preface

I’m used to running telnet as a quick check whether a remote server is reachable and listening on a specific port. Trying this on i5/OS with the telnet CMD may give you a headache!
After some research I ended up using openssl in PASE to get the job done on IBM i (AS400).

telnet vs openssl syntax

On a Telnet 5250 command line you first have to enter PASE, typically using
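
    CALL QP2TERM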

Then run
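
    openssl s_client -connect HOST:PORT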

instead of
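
    telnet HOST PORT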

as telnet is not installed in PASE.

Examples

Success, with server using SSL

with openssl
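
A sketch of what this looks like (host, port, and output are illustrative placeholders):

    openssl s_client -connect myhost.example.com:443
    CONNECTED(00000003)
    ...

The CONNECTED line means the TCP connection succeeded; the server's certificate chain and session details follow.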

with telnet
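
With a classic telnet client (not available in PASE), success looks roughly like this:

    telnet myhost.example.com 443
    Trying 192.0.2.10...
    Connected to myhost.example.com.
    Escape character is '^]'.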

Failure

with openssl
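
If the port is unreachable, openssl fails at the TCP level; the exact error text varies by platform, but it looks roughly like this:

    openssl s_client -connect myhost.example.com:4711
    connect: Connection refused
    connect:errno=...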

with telnet
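
telnet reports the failure in a similar way (wording again varies by platform):

    telnet myhost.example.com 4711
    Trying 192.0.2.10...
    telnet: Unable to connect to remote host: Connection refused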

Success, with server not using SSL

with openssl
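
Against a plaintext service the TCP connect still succeeds, but the TLS handshake fails afterwards; expect something like:

    openssl s_client -connect myhost.example.com:23
    CONNECTED(00000003)
    ...SSL routines...wrong version number...

For a pure reachability check this still counts as success: the CONNECTED line proves the port is open and listening.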

with telnet
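
telnet simply connects and may show the service's banner:

    telnet myhost.example.com 23
    Trying 192.0.2.10...
    Connected to myhost.example.com.
    Escape character is '^]'.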

– Markus Neuhold, IBM i (AS400) sysadmin since 1997, Linux fanboy, loving open source, Docker, and all about tech and science.

Docker South Austria meets DevOps & Security Meetup in Vienna

Just to let you know, I will give a talk at the DevOps & Security Meetup Vienna. The topic of my talk will be GitOps: DevOps in Real Life.

Here is the featured text of the talk, translated from German:

Introducing container technologies in an on-premises environment brings many changes with it. Especially the evolution of collaboration between teams, as well as the constant change and advancement of the technologies, lead to ever new challenges, but also to ever new, creative, and innovative solutions. Since the beginning of this transformation two years ago we have improved continuously and introduced the GitOps method. Many of these developments happen in public clouds, but they are just as available on-premises. In my talk I will show that using these technologies on-premises is possible and works excellently. Special guests: GitLab, Puppet, Prometheus, Elastic, Docker, CoreDNS, Kubernetes, and many more 🙂

If you like, please join us in Vienna on the 3rd of July!

-M


Kubernetes the roguelike way

It has been a while since the last post, but we have had a busy time. Together with my colleagues I wrote a large piece of documentation called Kubernetes the roguelike way over the last few weeks.

The documentation is about our on-premises Kubernetes setup. We will keep extending it as we make progress. Have a lot of fun, and if you have suggestions, please open an issue in the GitLab project.

-M

Speaker at DevOps Gathering 2018

This year, and for the first time, I attended the DevOps Gathering 2018 conference in Bochum, Germany as a speaker! Long story short, it was an extremely great experience for me!

The idea for giving this talk was born in October 2017. I knew that Peter Rossback (a long-time friend of ours) had launched a conference in Bochum, Germany for the first time in March 2017. Due to our own journey along the DevOps way, which started at the beginning of 2017, I decided to try to give something back to the container community. Of course, we are already contributing to various open source projects, but I was looking for the chance to give a talk too, to find out if I would be comfortable with it. After I put in my CFP for the DevOps Gathering 2018, the organizers decided to give me the chance to speak to a larger audience.

I started the detailed preparation of the talk at the end of last year and continuously updated it as things changed at work. There are always last-minute insights you would like to reflect in a talk. There are fine details here and there; on the last weekend before the talk, for example, I decided to recreate one of the graphics in the presentation myself.

My trip to the DevOps Gathering 2018 started on Monday afternoon with the drive to Salzburg airport. When I arrived there, I noticed that my flight to Düsseldorf was delayed by an hour. Not a huge problem, as I had plenty of time, and after some waiting the plane to Düsseldorf took off. After a short flight (an hour and a half) I took the train from Düsseldorf to Bochum and arrived there at approximately 9 pm. That first evening the speakers' dinner took place at a Greek restaurant in Bochum, which I managed to attend. That was the first time I met the other speakers. And obviously it was great (see the pictures)!

The next day I arrived at the conference location at 8 am. I already knew the people from Bee42 GmbH, so it was a comfortable situation for me. The conference took place at the GData campus, a very nice location. As the hour hand came close to 9 am, the location filled up with people. The DevOps Gathering was sold out, and in sum there were more than 150 attendees.

On that day, my talk took place at 1:15 pm, right after the lunch break. I was on stage early because I had included a video (with audio) in my talk. Audio is always an interesting thing: you never know if it works during a presentation if you do not try it beforehand. I talked to the tech staff and we managed to get it up and running, audio included. By the way, all tracks were recorded. Here are the videos of the tracks from last year, and I am sure the new ones will pop up there too in the following days.

From my point of view, my talk ran very smoothly. No huge problems, only some minor ones. My talk, like all the others, lasted 45 minutes (in English, of course) with Q&A afterwards. And to sum it up, it was really fun! Yes, I find it exciting to give talks. If you never try things out and leave your comfort zone, you will never know what is exciting and satisfying for you.

During the whole day I had the opportunity to talk with the other speakers, and of course with attendees too, and that was very interesting! In the evening the DevOps Gathering party took place at the GData campus. Drinks, food, music, cool people, and tons of fun! If you have the chance, attend the next conference if it is possible for you! It is worth it! I came in contact with a lot of people there, for example Roland Huß (Red Hat), Docker captain Viktor Farcic (CloudBees), Thomas Fricke (CTO of Endocode), and Jan Bruder (Rancher Labs), just to name a few.

On the second day I had to travel back, so I had to leave the conference early. But judging from the talks I saw on the second day, it was great too!

After my journey to the DevOps Gathering 2018 I can say that it was very exciting and enjoyable to give a talk, to meet a lot of great people, and finally to learn many new things. The discussions we had were great too. Everyone has his or her own experience, but we were always able to understand and respect each other's positions, and this is what makes conferences so wonderful! Side note: I did this adventure at my own expense. Why? As I wrote above, if you never leave your comfort zone, you will never find out what is exciting and enjoyable for you. Now I know that I would like to continue this way! There was also an attendee questionnaire about the conference in general and the speakers. I am curious what my marks will be, and now I am waiting for the videos of the DevOps Gathering 2018.

You can find all speakers and their talks here – click. Have a lot of fun!

-M-


Terraform(ing) Blue-Green Swarms of Docker (VMware vCenter)

Preface

Terraform(ing) Blue-Green Swarms of Docker will enable you to update your Docker Swarm hosts to the current Docker-CE version without an outage. For example, imagine the following situation: you have some Docker Swarm managers up and running, and of course a bunch of Docker Swarm workers. If you are forced to update your operating system, or if you would like to update from a previous version of Docker to a newer one, you have to handle this change in place on your Docker Swarm workers. This means you drain one Docker host, update it, and bring it back into the Docker Swarm as active. If you have five Docker Swarm worker hosts, this results in the loss of a fifth of your capacity, and each of the remaining worker hosts has to absorb an additional twentieth of the total workload (25% more load per host). And if something goes wrong, maybe because the new Docker version has a bug that hits you, you might quickly face an outage.

Therefore it is much better if you can create fresh Docker Swarm workers side by side with the existing ones and then, once everything is up and running, drain the old-version Docker Swarm workers. The load will be pulled over to the new Docker Swarm workers, and if something goes wrong, you can simply switch back by reactivating the old Docker Swarm worker hosts and draining the new ones afterwards.

The downside of working this way is that you need a lot of resources while both the blue and the green Docker Swarm workers are running, and you have to install the new Docker Swarm worker hosts. The first issue costs money, the second one time. Losing time is always worse, therefore we will use Terraform to do the installation for us.

Prerequisites

You will need existing Docker Swarm managers to do this job. In the best case the Docker Swarm managers are not also used as Docker Swarm workers; they should not have workload containers running. They do not need as many resources as the Docker Swarm workers. If you handle it this way, you can update the Docker Swarm managers during work hours without any hassle. Therefore, separate the Docker Swarm managers from your Docker Swarm workers.

It might be possible to create the Docker Swarm managers through Terraform too, but that is not an easy task. Terraform has only limited provisioning capabilities, which is deliberate, as it is a tool to build infrastructure. Don't use it to handle software installation tasks; if you need those, use Puppet, Chef, whatever, or write something yourself.

Example Terraform file
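
The original file is not reproduced here. A heavily abridged, hypothetical sketch of its core part could look like this (resource arguments, variable names, and the swarm join token are placeholders; the PowerDNS registration, connection settings, and most VM settings are omitted):

    resource "vsphere_virtual_machine" "worker" {
      count = "${var.worker_count}"
      name  = "docker-worker-blue-${count.index}"
      # ... template, CPU, memory, disk and network configuration ...
      # ... connection settings for the remote-exec provisioners ...

      # join the swarm as a worker right after the VM is created
      provisioner "remote-exec" {
        inline = [
          "docker swarm join --token ${var.swarm_worker_token} ${var.swarm_manager_ip}:2377",
        ]
      }

      # leave the swarm cleanly when the worker is destroyed
      provisioner "remote-exec" {
        when = "destroy"
        inline = [
          "docker swarm leave",
        ]
      }
    }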

Terraform file explanation

In this Terraform file we use VMware templates to distinguish between the Ubuntu versions and the installed Docker versions, similar to how Docker images are used. We use PowerDNS to register the Docker worker hosts automatically in our DevOps DNS (optional). The most important part of the Terraform file are the provisioners: they take care that a newly created Docker host joins the Docker Swarm as a worker and, of course, that it leaves the Docker Swarm when you destroy the Docker Swarm workers through terraform destroy.

Conclusion

You can take this file as a boilerplate. Use it to bring up the blue Docker Swarm workers. Later, you copy the file, change the configuration (e.g. IP addresses), and bring up the green Docker Swarm workers. After you have transferred the workload from blue to green, you can destroy the blue Docker Swarm workers and prepare for the next update. To-dos: you might need to put a small script on your Terraform-created Docker Swarm worker hosts to perform additional tasks after creation or before destruction. For example, the PowerDNS entry creation is a bad hack, because it deletes all entries. It would be better to have a script which does this task after startup, from the Docker Swarm worker host's point of view.

Have fun -M


Terraform(ing) Docker hosts with LinuxKit on-premise (VMware vCenter)

Preface

This blog post does not aim to be a fully-fledged step-by-step tutorial on how to create and bootstrap a Docker Swarm cluster with VMware vCenter on-premises. Instead, it should give you an idea of what is possible and why we do it this way. There are different ways to achieve different goals, and the approach we explain here is only one of them.

Today we are running around 30 Docker hosts which are installed as classic VMware virtual machines, all based on Ubuntu Linux. This means we have to update our Docker hosts manually every three months to stay on the current Docker-CE version. Pointless work. We provision our Docker hosts with Puppet, but even so, it takes time to create new Docker hosts in order to rotate the Docker Swarm workers. Yes, we could create a blue-green infrastructure to rotate them, but this consumes computing resources if we create the hosts beforehand or let them run all the time. If we created them on demand, every three months, it would take time to bring them up, update them, and so on. This is the point where we introduce new players: Docker LinuxKit as the Docker host OS and HashiCorp Terraform as IaC (Infrastructure as Code). Yes, there is also Docker InfraKit, and we are currently evaluating it, but there will be more work to do.

LinuxKit

LinuxKit is an operating system project started by Docker to provide a toolkit for building custom, minimal, immutable Linux distributions. The benefit of going with LinuxKit is not only a custom-made Linux distribution which reflects your needs; you also get a platform which enables you, for example, to push a resulting ISO image to a VMware datastore. Several cloud providers are already built into the LinuxKit toolkit.

LinuxKit example

Downloading and building LinuxKit is very well described on the LinuxKit GitHub page. After you have built the linuxkit binary, you need a YAML file that describes the composition of the LinuxKit operating system you would like to create. The example linuxkit.yml included in the sources is a good starting point, but after some work you will recognize that the simple example may not be enough. Therefore, the next lines show the basic example we start from, including some additional packages we need.
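
The original file is not reproduced here; the following is a hypothetical, heavily abbreviated reconstruction based on the upstream LinuxKit examples (every <tag> is a placeholder, the SSH key is a dummy, and the real docker service needs additional mounts and capabilities):

    kernel:
      image: linuxkit/kernel:<tag>
      cmdline: "console=tty0 console=ttyS0"
    init:
      - linuxkit/init:<tag>
      - linuxkit/runc:<tag>
      - linuxkit/containerd:<tag>
    onboot:
      # format the first disk (only if it has no Linux partition yet)
      - name: format
        image: linuxkit/format:<tag>
      # mount the persistent partition to /var/lib/docker
      - name: mount
        image: linuxkit/mount:<tag>
        command: ["/usr/bin/mountie", "/var/lib/docker"]
    services:
      - name: open-vm-tools
        image: linuxkit/open-vm-tools:<tag>
      - name: sshd
        image: linuxkit/sshd:<tag>
      # a plain DockerHub image, transformed to OCI format at build time
      - name: docker
        image: docker:<tag>-dind
    files:
      - path: root/.ssh/authorized_keys
        contents: "ssh-rsa AAAA... user@example"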

LinuxKit example explanation

The first thing you need to know is that LinuxKit uses containerd as the container engine to power most of the needed operating system daemons. As you can see in the linuxkit.yml, the different sections use images, known as container images, to build up the system. Important: these images use the OCI (Open Container Initiative) image standard. I will come back to this point later, so just keep this information in mind.

The first lines of the linuxkit.yml take a Linux kernel image and load it. The file does not use any :latest tag as image definition, and you should avoid them too, because you want to know which versions are running in your operating system. Please do not copy the file as it is, because it will be outdated almost immediately!

After the kernel definition comes the init section. This section contains the components that are needed immediately, for example the containerd image, which will be responsible for the upcoming service containers.

Next in line are the services you will probably need for your environment. We need the open-vm-tools image because we run the resulting image on the VMware ESXi infrastructure; without the tools it would not be possible to retrieve the IP address information from the VMware vCenter, and this is a must, as we are going to build the virtual machines with Terraform later.

The ssh daemon should be self-explanatory, but you will need a root/.ssh/authorized_keys entry to access the running LinuxKit OS. For this, look at the files section, where you can describe your configuration needs.

Now we install the Docker engine, because we would like to build a Docker Swarm worker, and this Docker Swarm worker should join an existing Docker Swarm manager later. Important: you will notice that for the Docker engine we use the DockerHub image as usual, not a special LinuxKit image! But how does this work? As I said before, containerd uses OCI standard images, which are not the same as Docker images. Let's have a look into it.

LinuxKit persistent disk magic

When it comes to the point that you have to start up your Docker Swarm cluster, or even a single Docker Swarm host, with LinuxKit, you will sooner or later ask yourself how to persist your Docker Swarm data, or at least the information about which Docker containers are started.

The Docker data lives inside the /var/lib/docker folder. Therefore, if you persist this folder, you are able to persist the current state of the Docker host.

The colleagues at Docker have done their work the right way. Look at the lines where the format and mount images are loaded. These two images do the magic which enables LinuxKit to persist its data. You can look up the documentation on GitHub for details. For the impatient, here's the summary.

The format image takes the first block device it finds and, if there is no Linux partition on it, formats it. If it finds a Linux partition, nothing happens. Very convenient! After the disk format is done (or not), the partition gets mounted via the mount image and the corresponding mountie configuration lines. Et voilà, there you go: magical persistence with LinuxKit iso-turbo-boost mode. Genius!

This is one of the most important features when it comes to infrastructure as code, because regardless of whether you choose InfraKit or Terraform, you will eventually need some kind of persistence to get a reliable infrastructure.

LinuxKit OCI and Docker images

The really cool thing about LinuxKit is that it is a toolset. During the linuxkit build command, which we will see later, the image components described by the linuxkit.yml are downloaded from DockerHub, and afterwards the contents of the used images are transformed into OCI-compatible root filesystems. This means you can use all the DockerHub images directly, without worrying about the image format. Neat! Now you know why you need a Docker environment to build LinuxKit and to build your LinuxKit OS afterwards.

LinuxKit build your image

To build your LinuxKit ISO image, you can use the following command. We have created a separate docker.yml file to reflect the changes. The resulting ISO image will therefore be named docker.iso automatically.
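
Something like this (the output-format flag has changed names across LinuxKit releases, so check linuxkit build -help for your version):

    linuxkit build -format iso-bios docker.yml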

LinuxKit push image to VMware

After you have built your ISO image, you can push it to a VMware datastore with the following command. Important: there is a small problem in the current version of LinuxKit: you cannot push to a VMware datacenter if there are multiple hosts located in it. Thanks to Dan Finneran @thebsdbox, who helped me a lot! A GitHub PR which makes it possible to push without problems has already been merged; it will be included in the next version (0.2) of LinuxKit.
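
Roughly like this (URL, credentials, and datastore name are placeholders; the exact flags depend on your LinuxKit version, see linuxkit push vcenter -help):

    linuxkit push vcenter -url 'https://user:password@vcenter.example.com/sdk' -datastore datastore1 docker.iso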

Terraform

Sure, at some point in time we will maybe use InfraKit to get things up and running, but as of today there are hardly any alternatives to Terraform. Terraform is open source software, but you can purchase an enterprise license if you need one. When you start to dig into the parts and pieces of Terraform, you will realize that it is not the easiest piece of software, but it is incredibly powerful.

Just download Terraform from the website and unpack it. The only thing you will see after extraction is a single binary called terraform.

Before you can do anything with it, you need a configuration which describes what you would like to get. Terraform works with states: you plan a state, you apply it, and eventually you destroy it. Terraform will try to keep your resources consistent. Now let's get to the config.
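
The basic workflow looks like this (run from the directory containing the binary and your .tf files):

    ./terraform plan     # preview what would be created or changed
    ./terraform apply    # create or update the resources
    ./terraform destroy  # tear everything down again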

Terraform example

If you run this example with ./terraform plan and ./terraform apply, you will get three LinuxKit VMware virtual machines which save their persistent state to the hard drive.

Conclusion

You can use the information from this blog post to build up a basic infrastructure, but to build a running Docker Swarm cluster with managers and workers there is more work to do. There are a few glitches which make it a little complicated at the moment, for example using the VMware customization against running LinuxKit virtual machines. For now it might be better to go with VMware templates, because you will have more possibilities. Maybe we will post an update on this soon. Stay tuned!

-M
