MSD Watcher Backend

The MSD watcher backend server

Note: All paths in this section are relative to the package root, i.e. /backend/

The backend project exposes a few simple API endpoints for the frontend. These interface with the Kubernetes API (and, in the future, Kafka) and clean that data up into a more workable format. The backend also acts as the web server that serves the frontend.

Important Dependencies

Functionality

Serving the frontend

This is done by using /src/main/resources/META-INF/resources as the build target for the frontend. Quarkus serves everything in that directory as static content, which is enough for our use case.
Every URL that is not matched by an endpoint Quarkus defines itself is handled by the NotFoundExceptionMapper.java found in the core package. It checks whether the index.html generated by the frontend build exists and, if so, redirects all traffic to it. Since the frontend is a SPA, it handles the rest of the routing itself and everything works as it should.
One additional note: if no index.html is found (e.g. because the frontend build failed or is still in progress), we redirect to a generic 404 page.
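The fallback decision described above can be sketched in plain Java. This is an illustrative sketch only, not the actual NotFoundExceptionMapper implementation; the class and method names are hypothetical, and the real mapper works with JAX-RS responses rather than returning path strings.

```java
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical sketch of the fallback decision: if the frontend build produced
// an index.html, unknown URLs are redirected to it; otherwise a generic 404
// page is served. Class and method names are illustrative.
public class FrontendFallback {

    /** Returns the path to redirect to for a URL Quarkus does not handle itself. */
    public static String resolveFallback(Path webRoot) {
        // index.html is the SPA entry point produced by the frontend build
        if (Files.exists(webRoot.resolve("index.html"))) {
            return "/index.html"; // the SPA takes over client-side routing
        }
        // Frontend build missing or still in progress: serve a generic 404 page
        return "/404.html";
    }

    public static void main(String[] args) throws Exception {
        Path empty = Files.createTempDirectory("webroot-empty");
        System.out.println(resolveFallback(empty)); // no index.html yet -> /404.html

        Path built = Files.createTempDirectory("webroot-built");
        Files.createFile(built.resolve("index.html"));
        System.out.println(resolveFallback(built)); // build present -> /index.html
    }
}
```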

The request flow is now as follows:

graph TD
    A[Quarkus gets a request] --> B[Quarkus checks if this URL matches an endpoint defined by itself]
    B -->|Yes| C[Quarkus answers]
    B -->|No| D[Check if an index.html for the frontend exists]
    D -->|Yes| E[Serve the frontend at this URL, let it handle the routing]
    D -->|No| F[Serve a generic 404 html page]

Interfacing with the Kubernetes API

Each Kubernetes resource that was deemed important gets its own class in the kubernetes package. Each of these classes contains a getRaw*** method that queries the Kubernetes API for a list of the desired resource. This raw list is exposed under /<endpoint_identifier>/raw. Most of these classes also expose a parsed list that provides the JSON schema the frontend needs.
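The raw-to-parsed step each resource class performs can be sketched as follows. This is a hedged sketch: the real classes query the Kubernetes API via getRaw***, while here a hypothetical record stands in for the raw API object, and the class name is illustrative.

```java
import java.util.List;
import java.util.Map;
import java.util.UUID;

// Illustrative sketch of the raw -> parsed step for a simple resource.
// RawDeployment is a stand-in for the raw Kubernetes API object returned
// by a getRaw*** method; names are hypothetical.
public class DeploymentParser {

    /** Stand-in for the raw Kubernetes API object. */
    public record RawDeployment(String uid, String name, String namespace) {}

    /** Trims the raw objects down to the id/name/namespace schema the frontend expects. */
    public static List<Map<String, Object>> parse(List<RawDeployment> raw) {
        return raw.stream()
                .map(d -> Map.<String, Object>of(
                        "id", UUID.fromString(d.uid()),
                        "name", d.name(),
                        "namespace", d.namespace()))
                .toList();
    }

    public static void main(String[] args) {
        var raw = List.of(new RawDeployment(
                "123e4567-e89b-12d3-a456-426614174000", "game", "msd"));
        System.out.println(parse(raw));
    }
}
```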

Currently, the following Kubernetes resources are implemented:

Daemon Set (endpoint identifier /deamon-sets), parsed endpoint: yes. Parsed JSON schema:
[
  {
   "id": UUID,
   "name": Name,
   "namespace": Namespace
  },
  …
]
Deployment (endpoint identifier /deployments), parsed endpoint: yes. Parsed JSON schema:
[
  {
   "id": UUID,
   "name": Name,
   "namespace": Namespace
  },
  …
]
Endpoint (endpoint identifier /endpoints), parsed endpoint: yes. Parsed JSON schema:
[
  {
   "id": UUID,
   "name": Name,
   "namespace": Namespace,
   "targetIds": [
    targetUUID,
    …
   ]
  },
  …
]
Namespace (endpoint identifier /namespaces), parsed endpoint: yes. Parsed JSON schema:
[
  {
   "id": UUID,
   "name": Name
  },
  …
]
Pod (endpoint identifier /pods), parsed endpoint: yes. Parsed JSON schema:
[
  {
   "id": UUID,
   "name": Name,
   "namespace": Namespace,
   "node": NodeName,
   "appName": appNameFromLabels,
   "status": {
    "phase": podStatusPhase,
    "startTime": podStartTime
   },
   "ownerId": ownerUUID
  },
  …
]
Replica Set (endpoint identifier /replica-sets), parsed endpoint: yes. Parsed JSON schema:
[
  {
   "id": UUID,
   "name": Name,
   "namespace": Namespace,
   "ownerId": ownerUUID
  },
  …
]
Service (endpoint identifier /services), parsed endpoint: no.
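The Pod schema is the most involved of the parsed schemas above; assembling it could look roughly like this. This is a hypothetical sketch: the record fields and where they come from (app label, owner reference) are assumptions, and the real class reads them from the Kubernetes API object instead.

```java
import java.util.Map;
import java.util.UUID;

// Illustrative sketch of assembling the parsed Pod schema shown above.
// RawPod and its fields are hypothetical stand-ins for the raw API object.
public class PodParser {

    public record RawPod(String uid, String name, String namespace, String node,
                         String appLabel, String phase, String startTime, String ownerUid) {}

    /** Builds one entry of the parsed Pod list, including the nested status object. */
    public static Map<String, Object> parse(RawPod p) {
        return Map.of(
                "id", UUID.fromString(p.uid()),
                "name", p.name(),
                "namespace", p.namespace(),
                "node", p.node(),
                "appName", p.appLabel(),
                "status", Map.of("phase", p.phase(), "startTime", p.startTime()),
                "ownerId", UUID.fromString(p.ownerUid()));
    }

    public static void main(String[] args) {
        var pod = new RawPod("123e4567-e89b-12d3-a456-426614174000", "game-abc", "msd",
                "node-1", "game", "Running", "2025-02-04T10:00:00Z",
                "223e4567-e89b-12d3-a456-426614174000");
        System.out.println(parse(pod).get("status"));
    }
}
```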

Future Goals

  • Find some way to know what is happening in the MSD
    • e.g. a Kafka listener that captures all events, matches them to a pod, and sends that information to the frontend via websocket
  • Properly build the services endpoint
    • Goal: match each pod to a service (e.g. the core MSD services game, trading, map, etc.)
  • Test the endpoints
    • Difficulty: the Kubernetes classes need a kubectl environment to work. Maybe mock all Kubernetes API results?
  • Role-based auth system using RBAC and JWT.
    This is implemented but not used on any of the endpoints. Currently only one admin user is created on startup
    with credentials from the .env (see .env.example and the classes in the auth package).
    • Todos:
      • Define the needed roles
        • Which endpoints can each role access?
        • Maybe create an endpoint to register new users (default GUEST role?)
      • Implement in the endpoints using annotations (see e.g. AuthController:72)
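The "mock all Kubernetes API results?" idea above could be approached by hiding the Kubernetes client behind a small interface and swapping in a canned fake for tests. This is only one possible design, sketched with hypothetical names; it does not reflect the actual project structure.

```java
import java.util.List;

// One possible way to make the endpoints testable without a kubectl
// environment: a facade over the real Kubernetes client plus a test double
// that returns canned data. All names here are illustrative.
public class MockingSketch {

    /** Facade over the real Kubernetes client. */
    interface KubernetesFacade {
        List<String> listPodNames();
    }

    /** Test double: returns canned data instead of calling the cluster. */
    static class FakeKubernetes implements KubernetesFacade {
        public List<String> listPodNames() {
            return List.of("game-abc", "trading-def");
        }
    }

    /** Endpoint logic depends only on the facade, so tests can inject the fake. */
    static long countPodsWithPrefix(KubernetesFacade k8s, String prefix) {
        return k8s.listPodNames().stream().filter(n -> n.startsWith(prefix)).count();
    }

    public static void main(String[] args) {
        System.out.println(countPodsWithPrefix(new FakeKubernetes(), "game"));
    }
}
```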
Last modified February 4, 2025: fix go & npm dependencies (8ff1fa0)