Retired Tools
Multi-Nodes / Multi-Cluster
What is it? What was the plan?
Please refer to the basics section for an introduction to Kubernetes and clusters.
On the university’s internal infrastructure, we were provided with a single node.
The initial plan was to set up multiple clusters:
- one for production, where only finalized services would be deployed;
- one for testing, where new services could be tested before being moved to the production cluster;
- and one for development, where experimentation and exploration could take place.
Since we only had access to a single node, it was not possible to create multiple clusters. Instead, we split the node into virtual machines (VMs): one VM hosted Harvester and another ran Rancher, with an Nginx reverse proxy in front to route requests to the appropriate VM.
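As an illustration, such a reverse proxy could look roughly like the following (hostname, IP, and certificate paths are placeholders; this is a sketch, not the retired configuration):

```nginx
# Hypothetical Nginx server block: requests for the Rancher hostname
# are forwarded to the Rancher VM. An analogous block would route the
# Harvester hostname to the Harvester VM.
server {
    listen 443 ssl;
    server_name rancher.example.org;               # placeholder hostname

    ssl_certificate     /etc/nginx/tls/fullchain.pem;  # placeholder paths
    ssl_certificate_key /etc/nginx/tls/privkey.pem;

    location / {
        proxy_pass https://10.0.0.10;              # placeholder Rancher VM IP
        proxy_set_header Host $host;
        # Rancher's UI and cluster agents rely on WebSockets.
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```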
The plan was to fully automate management, making administration straightforward and efficient.
Why was it retired?
This overall solution is too complex for the current MSD use case. It is simpler to have everything on a single node and operate a single-node cluster.
This approach fully utilizes the available infrastructure without dividing resources across virtual machines. Additionally, management becomes easier since everything is centralized on one node. In the university context, having multiple clusters is unnecessary, as high availability is not required.
OpenTofu / Terraform
What is it?
Terraform, and its fork OpenTofu, are tools for automating infrastructure; this category of tooling is commonly summarized as “Infrastructure as Code” (IaC).
The desired setup is described in *.tf or *.tofu files, from which a plan is generated; the resources under management are tracked in a “state file”.
The plan is then realized through an apply: the actual state of the target is compared with the desired configuration, and any differences in the target are reconciled.
In short, the *.tf/*.tofu files declare the desired state, and Terraform/OpenTofu automatically brings the target into that state.
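To make this concrete, here is a minimal sketch of such a file, assuming the hashicorp/kubernetes provider and a kubeconfig at the default path; the namespace name is purely illustrative:

```hcl
# Minimal *.tf / *.tofu sketch (illustrative; assumes the
# hashicorp/kubernetes provider and a local kubeconfig).
terraform {
  required_providers {
    kubernetes = {
      source = "hashicorp/kubernetes"
    }
  }
}

provider "kubernetes" {
  config_path = "~/.kube/config"
}

# Desired state: a namespace named "msd-test" must exist.
resource "kubernetes_namespace" "msd_test" {
  metadata {
    name = "msd-test"
  }
}
```

Running “tofu plan” (or “terraform plan”) shows the difference between this declared state and the target; “tofu apply” then creates, updates, or deletes resources until the target matches.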
Why did we use it in the past?
As described, we used Terraform/OpenTofu to automate the infrastructure. The plan was to specify the entire setup for machines, etc., in the *.tf or *.tofu files.
This included automatically provisioning machines and deploying services as outlined in the Multi-Nodes / Multi-Cluster section.
Initially, OpenTofu was used to create users, projects, namespaces, Git repositories for Fleet, and GitLab runners.
Why was it retired?
OpenTofu is better suited for provisioning infrastructure. A tool like Ansible is more appropriate for configuring components on the single node. Currently, tests are being conducted to determine if everything can also be managed using ArgoCD.
Rancher Fleet
What is it?
Rancher Fleet is the built-in “GitOps tool” of Rancher. It automates deployments to the clusters and is comparable to ArgoCD or Flux.
A “fleet.yaml” file must be placed in the Git repository; whenever the repository changes, Fleet updates the deployed service accordingly (see the sketch after the link below).
- More on Fleet: fleet
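A minimal sketch of such a fleet.yaml, assuming the service is packaged as a Helm chart in a ./chart subdirectory of the repository (names and values are illustrative):

```yaml
# fleet.yaml sketch: tells Fleet how to deploy the contents of this
# repository path. Assumes a Helm chart in ./chart; illustrative only.
defaultNamespace: msd-test
helm:
  releaseName: example-service
  chart: ./chart
  values:
    replicaCount: 1
```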
Why did we use it in the past?
Rancher Fleet was used to deploy services to the clusters. The services were stored in Git repositories, and Fleet deployed them from there. The GitRepo resources that pointed Fleet at those repositories were Kubernetes manifests created and applied through OpenTofu.
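Such a GitRepo manifest could look roughly like this (repository URL, names, and paths are placeholders):

```yaml
# Sketch of a Fleet GitRepo resource: Fleet watches the referenced
# repository and deploys the bundles found under the listed paths.
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: example-services                         # placeholder name
  namespace: fleet-default
spec:
  repo: https://git.example.org/msd/services     # placeholder URL
  branch: main
  paths:
    - services/example-service
```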
Why was it retired?
A major issue with Fleet is that it doesn’t handle CRDs well. It is not possible to deploy CRDs since Fleet does not recognize them. As a result, the CRDs had to be manually deployed to the cluster, which undermines the entire automation process.
Therefore, Fleet was replaced by ArgoCD, as ArgoCD handles CRDs more effectively.
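For comparison, the ArgoCD counterpart is an Application resource; a minimal sketch with placeholder repository and namespace names:

```yaml
# Sketch of an ArgoCD Application: ArgoCD syncs the manifests at the
# given repository path into the destination namespace.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example-service
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.org/msd/services   # placeholder URL
    targetRevision: main
    path: services/example-service
  destination:
    server: https://kubernetes.default.svc
    namespace: msd-test
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```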
Harvester
What is it?
Harvester is another application from SUSE; it is used to create and manage virtual machines.
Harvester can be integrated as a cloud provider in Rancher, which allows new Kubernetes clusters to be created on Harvester through Rancher’s management.
- More on Harvester (v1.2): docs
Why did we use it in the past?
Harvester was used to provide VMs on the single node. The idea was to create one VM for Rancher and one VM for Harvester: the Rancher VM managed the Kubernetes clusters, while the Harvester VM created nested virtual machines for the virtualized clusters. Everything was managed through Rancher, which integrated Harvester as a cloud provider.
The plan was to fully automate the management with OpenTofu.
Why was it retired?
A multi-node cluster is too complex for the MSD. It is simpler to have everything on a single node and operate a single-node cluster. This approach fully utilizes the infrastructure without splitting it across VMs.
See also Multi-Nodes / Multi-Cluster.
k3s
What is it?
K3s is another application from SUSE; it is a lightweight Kubernetes distribution for setting up clusters on bare metal. K3s is aimed at IoT devices and is therefore very lean and easy to install.
- More on k3s: k3s
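“Easy to install” here means the upstream quick-start is essentially one command (shown for reference; our actual provisioning may have differed):

```sh
# Upstream k3s quick-start: installs and starts a single-node server.
curl -sfL https://get.k3s.io | sh -

# Verify the node is registered once the service is running.
sudo k3s kubectl get nodes
```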
Why did we use it in the past?
Initially, K3s was used to set up the Kubernetes single-node cluster, and later served as the foundation for the Rancher VM.
Why was it retired?
Unfortunately, K3s had the disadvantage that after a few weeks the node became very slow, and eventually the single-node cluster became unresponsive. The iscsid service was suspected to be the cause of the slowdown.
As a result, K3s was replaced by RKE2, which is much more stable and is now also used in Harvester as the basis for new clusters.
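For reference, the upstream RKE2 server quick-start looks similar (a sketch of the replacement setup, not our exact provisioning steps):

```sh
# Upstream RKE2 quick-start: install the server and enable its service.
curl -sfL https://get.rke2.io | sh -
sudo systemctl enable --now rke2-server.service
```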