Overview of Tools

A brief overview of the tools used in our tech stack.


🔗 General

This section includes general information, recommended learning resources, and introductory material for Kubernetes.



📖 How to read this page

This page provides an overview of the key tools and technologies used in our Kubernetes-based platform. The tools are grouped by their role in the system:

  • 🧱 Cluster Infrastructure: Components that provide the technical foundation of our Kubernetes setup.
  • ⚙️ Deployment & GitOps Workflow: Tools that automate and manage application deployments.
  • 🛠️ Developer Tools: Utilities that help you interact with and understand the cluster.

Each tool entry explains what it does, why we use it, and links to more detailed documentation where available.


🧱 Cluster Infrastructure

☸️ Kubernetes

Kubernetes is an open-source platform for container orchestration. It allows running and managing multiple services in a scalable and declarative way. Beyond just deploying containers, Kubernetes also handles networking, persistent storage, service discovery, rolling updates, and much more.

In our setup, Kubernetes acts as the target environment for applications that were initially developed and tested using Docker. While Docker is ideal for local development — offering fast feedback and simple container builds — it has limitations when it comes to running complex, distributed systems on shared infrastructure.

Kubernetes solves these challenges by providing a scalable, structured, and production-grade environment for running containerized applications. It allows us to manage microservices, handle dynamic scaling, and ensure reliable networking and storage — features that are hard or impossible to achieve with Docker alone in a shared, centralized setup.

Students first build and test their services locally using Docker. After that, they create Helm Charts to package their applications for deployment. These charts can be tested in a local Minikube environment, before being deployed to our central cluster using ArgoCD as part of the CI/CD pipeline.

By using Kubernetes, we give students hands-on experience with real-world deployment workflows and infrastructure — far beyond running a single container.
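The workflow described above can be sketched as a command sequence. This is an illustrative outline only — the service name `my-service`, the chart path `./chart`, and the namespace are placeholders, not our actual repository layout:

```shell
# 1. Build and test the service locally with Docker
docker build -t my-service:dev .
docker run --rm -p 8080:8080 my-service:dev

# 2. Package the application as a Helm Chart
helm lint ./chart
helm package ./chart

# 3. Try the chart in a local Minikube cluster before pushing
minikube start
helm install my-service ./chart --namespace my-namespace --create-namespace

# 4. Push to Git; the CI/CD pipeline and ArgoCD take over from here
git push
```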


💾 Longhorn

Longhorn is a distributed block storage system designed specifically for Kubernetes. It is lightweight, reliable, and integrates seamlessly into existing clusters.

In our setup, Longhorn plays a key role in providing persistent storage and integrates smoothly with our RKE2-based cluster. While the system’s primary disk has limited space (around 50 GB), we have an additional 1 TB disk available. Using the Bitnami Helm chart for Longhorn, we configure this larger disk as Longhorn’s storage backend and set the resulting Longhorn StorageClass as the cluster’s default.

This setup allows students to request persistent storage for their applications by simply creating a Persistent Volume Claim (PVC) — without needing to worry about disk paths or physical volumes. Longhorn automatically handles the provisioning and lifecycle of these volumes.
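A PVC of this kind can be as simple as the following sketch. The names and the requested size are hypothetical; since Longhorn is the default StorageClass in our cluster, `storageClassName` could also be omitted entirely:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-data        # placeholder name
  namespace: my-namespace  # placeholder namespace
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: longhorn
```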

Another advantage of Longhorn is its built-in web UI, which allows for easy monitoring and management of storage resources. Even though students interact with the cluster primarily through Helm or ArgoCD, they benefit indirectly from the flexibility and reliability that Longhorn provides.

By handling all volume management transparently, Longhorn is an essential component of our cluster infrastructure.


☸️ Minikube

Minikube is a lightweight, local Kubernetes cluster designed for development and testing purposes. It allows students to test their Helm Charts and application deployments in a real Kubernetes environment on their local machine — after building and running them initially with Docker.

We use Minikube to bridge the gap between local Docker-based development and deploying to the shared cluster. It provides a realistic Kubernetes runtime where students can experiment with manifests, Helm templates, and configurations without risking the stability of the main environment.

While we are currently using Minikube, we are also evaluating k3d as a possible future alternative. One of k3d’s strengths is that it allows cluster creation via simple configuration files, which could simplify setup even further. For now, however, Minikube remains our primary local Kubernetes platform — as referenced in various sections of this documentation.
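To illustrate what k3d’s file-based cluster creation looks like, here is a minimal sketch of a k3d config in its “Simple” format (the exact `apiVersion` depends on the installed k3d release; the cluster name and node counts are placeholders):

```yaml
apiVersion: k3d.io/v1alpha5
kind: Simple
metadata:
  name: dev-cluster  # placeholder name
servers: 1           # number of control-plane nodes
agents: 2            # number of worker nodes
```

A cluster would then be created with `k3d cluster create --config <file>`, instead of assembling the equivalent command-line flags by hand.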


🚜 RKE2

RKE2 (Rancher Kubernetes Engine 2) is a Kubernetes distribution developed by SUSE, designed for secure, stable, and maintainable cluster installations. It is particularly well-suited for bare-metal environments, where full control over the Kubernetes setup is required.

In our setup, RKE2 serves as the technical foundation of the cluster. It manages the installation and operation of the core Kubernetes components on our servers and forms the base layer for all higher-level tools like Helm, ArgoCD, and K9s. Students do not interact directly with RKE2; instead, they access the cluster through abstracted interfaces provided by these tools.

We are planning to provide students with a dedicated kubeconfig in the future, granting them read-only access to the cluster and extended permissions within their own namespace. As of now, however, access to the cluster is only available via the integrated toolchain.
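As a rough illustration of how such namespace-scoped permissions could be modeled in Kubernetes RBAC (this is a hypothetical sketch, not our actual configuration — the namespace, user, and resource list are placeholders; cluster-wide read-only access would be a separate ClusterRoleBinding, e.g. to the built-in `view` ClusterRole):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: student-edit
  namespace: student-a   # the student's own namespace
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "pods/log", "services", "deployments", "configmaps"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: student-edit-binding
  namespace: student-a
subjects:
  - kind: User
    name: student-a      # placeholder user from the kubeconfig
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: student-edit
  apiGroup: rbac.authorization.k8s.io
```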

We chose RKE2 because it has proven to be significantly more stable and reliable than our previous setup, which was based on k3s. While k3s frequently required full reinstallation after just a few weeks of operation, RKE2 has shown itself to be a robust and production-grade alternative that requires much less ongoing maintenance.



⚙️ Deployment & GitOps Workflow

🔄 CI/CD Workflow

Our platform uses a full CI/CD pipeline to automate the process from code changes to deployed applications.

  • The CI (Continuous Integration) part is handled by GitLab CI. Whenever students push changes to their repositories, the pipeline automatically builds Docker images, creates Helm Charts, and pushes both to our internal container and chart registries.
  • The CD (Continuous Deployment) part is managed by ArgoCD, which detects updated Helm Charts in Git and automatically deploys them to the Kubernetes cluster.

This automation ensures consistent and secure deployment workflows, and gives students insight into real-world DevOps processes.
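Conceptually, the CI part of such a pipeline can be sketched as a `.gitlab-ci.yml` along these lines. This is a simplified, hypothetical example — the job names, chart path, and registry targets are placeholders and rely on GitLab’s predefined CI variables, not our actual pipeline definition:

```yaml
stages:
  - build
  - package

build-image:
  stage: build
  script:
    # Build and push the Docker image, tagged with the commit SHA
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA

package-chart:
  stage: package
  script:
    # Package the Helm Chart and push it to an OCI chart registry
    - helm package chart/ --version 0.1.$CI_PIPELINE_IID
    - helm push my-service-0.1.$CI_PIPELINE_IID.tgz oci://$CI_REGISTRY/charts
```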

  • 🔍 See our CI Doc Site here

  • 🔍 See our ArgoCD Doc Site here


📦 Helm

Helm is a tool used to manage Kubernetes resources through so-called Helm Charts — structured packages that bundle multiple components such as Deployments, Services, and Ingresses. Unlike kubectl, which requires manually applying individual YAML files, Helm allows for centralized installation, updating, and removal of complete application packages — similar to package managers in traditional operating systems.

We use Helm to make our Kubernetes resources structured, reusable, and maintainable. One of Helm’s major strengths lies in its templating system: configuration values such as ports, image tags, or hostnames are defined centrally in a values.yaml file and then injected into the appropriate resources. This reduces duplication, minimizes errors, and makes adjustments much easier.
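The templating idea looks roughly like this — a hypothetical excerpt, with placeholder image name and port, not one of our actual charts:

```yaml
# values.yaml (central configuration values)
image:
  repository: registry.example.com/my-service
  tag: "1.4.2"
service:
  port: 8080
---
# templates/deployment.yaml (excerpt): the values above are injected
# into the manifest via Helm's templating syntax
containers:
  - name: my-service
    image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
    ports:
      - containerPort: {{ .Values.service.port }}
```

Changing the image tag or port then only requires editing `values.yaml` (or passing `--set` at install time), never the templates themselves.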

Helm is also particularly useful for configuring the applications themselves. Through environment variables defined and templated in the Helm Chart, we can pass parameters to applications — for example, to a Java or Spring Boot service via the application.yml. This enables dynamic configuration per environment (e.g. database URLs, log levels, API keys) without modifying the application code.
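For a Spring Boot service, this typically means templating environment variables that Spring’s relaxed binding maps onto `application.yml` properties (e.g. `SPRING_DATASOURCE_URL` overrides `spring.datasource.url`). A hypothetical deployment-template excerpt, with placeholder value names:

```yaml
# templates/deployment.yaml (excerpt) — per-environment configuration
# injected into the container without changing application code
env:
  - name: SPRING_DATASOURCE_URL
    value: {{ .Values.database.url | quote }}
  - name: LOGGING_LEVEL_ROOT
    value: {{ .Values.logLevel | default "INFO" | quote }}
```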

In our setup, students define their Kubernetes manifests as Helm Charts. These charts are versioned and stored in an internal chart registry. Deployment to the cluster is then handled automatically by ArgoCD — Helm handles the packaging, ArgoCD handles the delivery.

As such, Helm is a key part of our workflow, ensuring that our resources are traceable, reusable, and automatable — both at the infrastructure level and for application configuration.


🔁 ArgoCD

ArgoCD is a GitOps tool that automates the deployment of applications to Kubernetes clusters. It continuously monitors specified Git repositories and compares the desired state (as defined in Git) with the actual state in the cluster. When changes are detected — for example, through updates to a Helm Chart — ArgoCD automatically synchronizes the deployment.

In our setup, ArgoCD is a key component of the CI/CD pipeline. It handles the continuous deployment of resources to the cluster by referencing Helm Charts stored in our internal chart registry. When a chart is updated, ArgoCD ensures the corresponding application is automatically deployed or updated in the cluster.

Previously, such deployments had to be triggered manually using tools like helm upgrade. With ArgoCD, this process is now fully automated, improving both efficiency and reliability.

One of ArgoCD’s most valuable features in our context is its web-based UI. Students are granted read-only access to the overall system, but for their own application within ArgoCD, they are given extended permissions. This enables them to take actions such as rolling back a deployment, manually triggering a redeploy, or deleting a failed deployment so that ArgoCD redeploys the latest working version. This level of access fosters independence and provides real-world experience with GitOps workflows.

ArgoCD also simplifies the workflow for the cluster administrators. Instead of triggering deployments manually, a shared Application.yaml is created in collaboration with the students, which ArgoCD uses to manage deployment automatically.
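Such an Application manifest can look roughly like the following sketch. All names, the chart registry URL, and the version constraint are placeholders — this is not our actual `Application.yaml`:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-service
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://charts.example.com   # placeholder chart registry
    chart: my-service
    targetRevision: 0.1.*                 # track new chart versions
  destination:
    server: https://kubernetes.default.svc
    namespace: my-namespace               # placeholder target namespace
  syncPolicy:
    automated:
      prune: true     # remove resources deleted from the chart
      selfHeal: true  # revert manual drift in the cluster
```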

As such, ArgoCD is a core tool in our setup — enabling automated, traceable, and reliable deployments in Kubernetes, with benefits for both students and the cluster team.


🔐 Sealed Secrets

Sealed Secrets is a tool for securely managing sensitive information such as passwords, API tokens, or database credentials in a GitOps workflow. It allows us to store encrypted secrets safely in Git repositories without exposing any plaintext values.

In our setup, Sealed Secrets is used to encrypt Kubernetes Secrets before committing them to Git. This enables fully automated deployments with ArgoCD, while ensuring that sensitive data remains protected — even if the Git repository is public or shared.

Students actively use kubeseal, the CLI tool that encrypts secrets using a public key we provide via a shared repository. Before pushing their Helm-based applications to Git, students are expected to encrypt any required secrets themselves. This ensures that sensitive data never leaves their local machine in plaintext and that all secret handling remains secure and auditable.
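The encryption step looks roughly like this — a sketch with placeholder file names, secret values, and namespace; the certificate file stands in for the public key distributed via our shared repository:

```shell
# 1. Create the Secret locally; --dry-run=client means nothing is sent
#    to any cluster, the manifest is only written to a file
kubectl create secret generic db-credentials \
  --namespace my-namespace \
  --from-literal=password='changeme' \
  --dry-run=client -o yaml > secret.yaml

# 2. Encrypt it with the shared public certificate
kubeseal --cert sealed-secrets.pem --format yaml \
  < secret.yaml > sealed-secret.yaml

# 3. Remove the plaintext file and commit only the SealedSecret
rm secret.yaml
git add sealed-secret.yaml
```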

We chose Sealed Secrets because it integrates well with our Helm-based deployment flow and provides a clear separation between secret creation and secret usage. Students benefit from a secure and practical method of handling credentials as part of real-world deployment processes.

  • 🔍 More on Sealed Secrets

  • 🔍 See our Sealed Secrets Doc Site here



🛠️ Developer Tools

☸️ K9s

K9s is a terminal-based UI tool for interacting with Kubernetes clusters. It provides a live, navigable overview of resources like Pods, Deployments, Services, Logs, and Events — without the need to constantly enter kubectl commands manually.

In our setup, K9s is primarily used by the cluster team to monitor and inspect resources during development, testing, and operation. While students do not yet have direct access to the shared cluster, this is planned for the future using a custom kubeconfig, as described in the RKE2 section.

However, students can already benefit from K9s in their local environments, such as when testing their Helm Charts using Minikube. Instead of typing long kubectl commands (e.g. kubectl get pods -n my-namespace), K9s provides a fast and intuitive way to navigate the state of the cluster, inspect logs, and understand what’s running.
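Getting started takes a single command; the namespace and kubeconfig path below are placeholders:

```shell
# Launch K9s against the current kubeconfig context
k9s

# Start directly in a specific namespace
k9s -n my-namespace

# Point K9s at a dedicated kubeconfig (e.g. a local Minikube cluster)
k9s --kubeconfig ~/.kube/minikube-config
```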

K9s makes Kubernetes more accessible, especially for beginners, by visualizing relationships and statuses that would otherwise require deep command-line knowledge. It is not a replacement for kubectl, but an enhancement — and a useful learning tool for developing Kubernetes intuition.

  • 🔍 More on k9s



Last modified May 17, 2025: modify devops cluster docs (3b9c6d0)