What do you do if developers (your friends, in this case) are forced by policy to use Windows at work? These developers are already using all-inclusive GitLab CI/CD Docker/DinD pipelines and realize that they are at a loss on their local workstations. Why? They are using standard business workstations. For economic reasons there is no 16 GB Intel Core i7 option available for this class of workstation. This means these developers are trapped: they have already built testing, staging and education CI/CD pipelines with a lot of additional technologies, like Elasticsearch, Apache Ignite, Minio, …, but they cannot run them during development on their local workstations because they immediately run out of local resources.
In my opinion there is something like a Hippocratic oath for Ops: you help. And sometimes you have to find new ways to help. In our case we were sure that we had to implement a central system for our devs because they are located at different branch offices. Even if we were able to buy workstation hardware powerful enough to run all the needed stacks, we could not manage it, because that would require managing the local Docker installations. There are different approaches to running Docker on a Windows workstation, for example Vagrant (managing local VMs) or native Docker for Windows (LinuxKit based). But in our opinion none of these options is really convenient if you want to use Docker Swarm in combination with centralized hardware. That is not a shortcoming of Docker Swarm itself, because you can also run the Swarm manager under Windows (a hybrid Docker Swarm). The problem lies in the workstation installation itself: workstations tend to break and have to be reinstalled. Imagine what would happen if the reinstallation of the workstations were handled by a different IT department. In the worst case the reinstallation just restores a default image, and all of the previous work, including the Docker installation, would probably be lost. Now you can argue that you can back up the installation and so on and so forth, but as you can see, this is getting complex and violates the "keep it simple, stupid" (KISS) principle.
Therefore our first thought was to use X2GO, which we have been using for several years for the shared Linux terminal server that my team and I work on (because we have to use Windows by policy too). But the downside of X2GO is that it consumes a lot of bandwidth if you want 24-bit color depth on your desktop. For this reason, X2GO is only suitable at LAN network speed. Our developers, however, are spread over different international branch offices, so X2GO won't work. Besides these technical reasons, there are also user-experience reasons. X2GO needs an installed client, just as Citrix needs the Citrix Receiver, but unlike the X2GO client, the Citrix Receiver is already installed on our workstations and is managed by a central department.
Bernhard and I were Windows admins before we became fully fledged open-source Linux and web DevOps engineers in the early 2000s. Back then we used a lot of Citrix technologies (Citrix Metaframe and Metaframe XP). We always kept in touch with our colleagues who continued the Windows Citrix journey, and we followed the news on this topic as well. So we knew it should be possible to use the Citrix Linux VDA to run a Linux desktop terminal server. This fits our use case perfectly. In combination with Docker Swarm, we would get an infrastructure that supports our developers, with the benefit of doing it centrally.
We did it. There is (currently) one Linux terminal server, which is also a Swarm manager, and we created some additional Linux worker VMs (without Citrix) to distribute the load. Furthermore, the deployed Docker Swarm stack files use placement rules so that no containers are placed on the terminal server. The developers, who are already GitLab CI/CD and Docker aware, can now use this terminal server to develop their applications on a central desktop server with 64+ GB of RAM (only the VM is restricted; up to 512 GB of RAM are possible), powered by Cisco UCS blades (32 cores). They can use all of the Docker power without any restrictions: they can develop, they can test, they can try, they can learn. All of this is possible because we are speaking the same language - containers.
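Such placement rules can be expressed directly in a Swarm stack file. Here is a minimal sketch (the service name, image and node label are illustrative assumptions, not our actual configuration), assuming the terminal server node has been labeled with `docker node update --label-add terminal=true <node>`:

```yaml
version: "3.7"

services:
  elasticsearch:
    # illustrative example service; any stack service works the same way
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
    deploy:
      replicas: 2
      placement:
        constraints:
          # keep workload containers off the Citrix terminal server node
          - node.labels.terminal != true
```

Deployed with `docker stack deploy -c stack.yml devstack`, the Swarm scheduler then places all tasks of this service on the worker VMs only.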
Have a look at the attached screenshots. Some areas are blurred out to comply with policy, but I guess you get the idea. Worlds are coming closer and closer, and that is great! With this combination of a Citrix Linux terminal server and Docker Swarm as the container platform we gain a lot of useful benefits. First, the developers (the users) can use their corporate login seamlessly, because the Linux installation is fully integrated with Microsoft Active Directory. The Citrix StoreFront portal fits fully into the usual workflow of our colleagues. Furthermore, secure access to the development environment is also provided through Citrix, including bandwidth-optimized international access. Second, in combination with Docker Swarm, the developers have all the power in their hands to get their stacks up and running as close as they need to the other environments (test, staging, production). Third, the Ops (we) get better application stacks, which lets us focus on realizing infrastructure as code in the future. And last, the setup is scalable in terms of Docker and in terms of Citrix terminal servers too! That's it for today.
If you have any questions, please contact us. We will answer them.