Deploying your player to the production environment

After testing everything locally, you need to deploy your player service to the production environment, so that you can interact with other player services and have a codefight. This page describes how to do this.

The Big Picture

The ultimate goal of deployment is to collect the contributions of many developers, or many developer teams, into one production environment, with the least possible manual effort and the best possible quality assurance. This is depicted in the image below, with the production environment labelled as Kubernetes Cluster. (What exactly this is will be explained further down.)

The Deployment Sequence

The following diagram shows the main steps in deploying your player service to the production environment. It assumes that the configuration scripts needed for deployment are all properly set up (see Setting Up the Deployment Scripts).

Sequence for deploying your player service to the production environment

Step 1: Commit and Push Your Changes

If all is properly set up, you just need to commit and push your changes to the GitLab repository of your player. The push will trigger the CI pipeline, which will create a Docker image of your player (see next step).
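
For example, a typical round trip looks like this (a sketch; the remote name origin and the branch name main are assumptions, use your own):

# Stage and commit your changes, then push to trigger the CI pipeline.
git add .
git commit -m "Describe your change here"
git push origin main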

We have Semantic Versioning enabled. This means that the process will only go beyond step 2 if you increment the version number in the Chart.yaml file of your player service's Helm chart:

# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.21

The convention according to Semantic Versioning is:

  • Just a bugfix -> increment the patch version (e.g. 0.1.20 -> 0.1.21)
  • New feature -> increment the minor version (e.g. 0.1.20 -> 0.2.0)
  • Breaking change -> increment the major version (e.g. 0.1.20 -> 1.0.0)
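
For example, a patch bump can be done by hand or scripted (a sketch, assuming your chart lives in the helm-chart directory as in the skeleton, and that GNU sed is available):

# Bump the chart patch version (0.1.20 -> 0.1.21, as in the example above).
# The path helm-chart/Chart.yaml matches the skeleton layout; adjust if needed.
sed -i 's/^version: 0\.1\.20$/version: 0.1.21/' helm-chart/Chart.yaml
git add helm-chart/Chart.yaml
git commit -m "Bump chart version to 0.1.21"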

DEBUG

Before running the build, it is sometimes necessary to delete the Docker image tagged latest from the GitLab registry, because the latest tag is not always updated properly. This will be fixed as soon as possible.

To check your Docker images, go to your GitLab repository (like https://gitlab.com/my_organization_or_name/my_player_repo) and then click on Deploy >> Container Registry in the menu on the left. You should see something like this:

Check Registry

Just delete the latest image by using the three-dots-menu on the right.
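
Alternatively, the tag can be deleted via the GitLab API. This is a sketch: the project ID, the repository ID, and a personal access token with api scope (here $GITLAB_TOKEN) are placeholders you need to fill in yourself.

# Delete the 'latest' tag from the GitLab Container Registry via the API.
# <project-id> and <repository-id> are placeholders; $GITLAB_TOKEN must be
# a personal access token with api scope.
curl --request DELETE \
  --header "PRIVATE-TOKEN: $GITLAB_TOKEN" \
  "https://gitlab.com/api/v4/projects/<project-id>/registry/repositories/<repository-id>/tags/latest"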

First Intermezzo: What is a Docker Image?

A Docker image is a file that contains everything needed to run your player service. It is basically an OS-agnostic executable. There is a ton of instructive material on the web explaining the concept behind Docker and why it is so useful and popular. This, for instance, is a good starting point.
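
To get a feel for it, you can pull and run any image locally (assuming Docker is installed; the image path below is the skeleton's and only serves as an example, and the port mapping 8080:8080 is an assumption matching the skeleton's configuration):

# Pull an image from a registry and run it as a throwaway container.
docker pull registry.gitlab.com/the-microservice-dungeon/player-teams/skeletons/player-skeleton-java-springboot:latest
docker run --rm -p 8080:8080 registry.gitlab.com/the-microservice-dungeon/player-teams/skeletons/player-skeleton-java-springboot:latest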

Step 2: Running the CI Pipeline to Create a Docker Image

The CI pipeline is triggered by executing a git push command. The precondition is that you have properly set up the GitLab CI/CD pipeline for your player service (see Setting Up the GitLab CI Pipeline in gitlab-ci.yml).

You can check the status of your build by going to your repository (like https://gitlab.com/my_organization_or_name/my_player_repo) and then clicking on Build >> Pipelines in the menu on the left. You should see something like this:

Check Build Pipeline (1)

By clicking on the latest running pipeline, you can see the details of the build:

Check Build Pipeline (2)

It is important that the third stage, helm, is also present. This means that a new Docker image with the new version number has actually been created and pushed to the GitLab registry. Only if all three stages show a green checkmark can you proceed to the next step. Otherwise, you need to fix the issues with your build and push your changes again.
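
Optionally, if you have the GitLab CLI (glab) installed, you can check the pipeline from the terminal instead of the web UI:

# Show the status of the pipeline for the current branch (requires glab).
glab ci status
# Follow the log output of the running pipeline interactively.
glab ci trace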

Step 3: Monitoring of the GitLab Registry by Rancher/Fleet

Before going to Rancher, you can check the GitLab Container Registry to see if your new Docker image is actually there and tagged latest. Go to your GitLab repository (like https://gitlab.com/my_organization_or_name/my_player_repo) and then click on Deploy >> Container Registry in the menu on the left. You should see something like this:

Check Registry

The Fleet service in the production environment is monitoring the GitLab Container Registry for new Docker images tagged latest. If it finds a new image, it will pull it and deploy it to the Kubernetes cluster.
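
You can also verify from the command line that the latest tag now points to your new build, for example by checking the digest it resolves to (the image path below is a placeholder for your own registry path):

# Pull the 'latest' tag and print its digest; if the digest changed after
# your pipeline ran, the tag was updated. Replace the placeholder path.
docker pull registry.gitlab.com/<your-group>/<your-player-repo>:latest
docker image inspect --format '{{index .RepoDigests 0}}' \
  registry.gitlab.com/<your-group>/<your-player-repo>:latest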

Second Intermezzo: What is Kubernetes?

Kubernetes is a container orchestration system. It is a tool for managing containerized applications in a clustered environment, and it is used by many companies to run their applications in the cloud. As with Docker, there is a lot of material on the web explaining the concept behind Kubernetes and why it is so widely used.

Kubernetes' own tutorial is a good summary of the concept and suitable as a starting point. Another useful section in their material, Kubernetes Basics, introduces the main tools like kubectl and minikube.
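
To get hands-on, a few basic commands illustrate how such a cluster is inspected (a sketch, assuming a local minikube installation as introduced in the tutorials above):

# Start a local single-node cluster for experimenting.
minikube start
# List the nodes of the cluster and all pods across all namespaces.
kubectl get nodes
kubectl get pods --all-namespaces
# Show configuration details and recent events of a single pod.
kubectl describe pod <pod-name> --namespace <namespace>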

Step 4: Updating Your Player's Docker Image in the Kubernetes Cluster of the Production Environment

You can check the status of the deployment by going to Rancher. Log in with the credentials you received from the Microservice Dungeon Admin. When you click on Apps >> Installed Apps in the menu on the left, you see your player service listed there.

Rancher

You can reload the latest version of your player service by clicking on your player, navigating to Deployments and deleting the pod via the three-dots menu on the right. This will cause Kubernetes to recreate the pod, pulling the most recent version of your image from the GitLab registry.

Rancher Reload Pod
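
If you have kubectl access to the cluster, you can achieve the same effect from the command line. This is a sketch; the deployment name and namespace are placeholders that have to match your Helm chart and fleet.yaml:

# Restart the deployment; because imagePullPolicy is "Always" in values.yaml,
# the new pod pulls the most recent 'latest' image from the registry.
kubectl rollout restart deployment/<your-player-deployment> \
  --namespace player-your-playername-here
# Watch the replacement pod come up.
kubectl get pods --namespace player-your-playername-here --watch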

Setting Up the Deployment Scripts

Setting Up the GitLab CI Pipeline in gitlab-ci.yml

The GitLab CI pipeline is defined in the .gitlab-ci.yml file in the root directory of your player service repository. It is automatically triggered by a git push command. The file should look like this:

image: docker:20.10.16
services:
  - docker:20.10.16-dind

include:
  - project: "the-microservice-dungeon/devops-team/common-ci-cd"
    ref: "main"
    file: "helm/package-publish.yaml"

variables:
  # Use TLS https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#tls-enabled
  DOCKER_HOST: tcp://docker:2376
  DOCKER_TLS_CERTDIR: "/certs"
  IMAGE_TAG: $CI_REGISTRY_IMAGE:latest
  PATH_TO_CHART: "helm-chart"
  CHART_NAME: "player-skeleton-java-springboot" # ! TODO: Update to your player name

stages:
  - build
  - build_container_image
  - helm

maven-build:
  image: maven:latest
  stage: build
  script: "mvn clean package -B"
  artifacts:
    paths:
      - target/*.jar

docker-build:
  stage: build_container_image
  image:
    name: gcr.io/kaniko-project/executor:v1.14.0-debug
  services:
    - docker:dind
  script:
    - /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
      --destination "$IMAGE_TAG"
      --destination "${CI_REGISTRY_IMAGE}:${CI_COMMIT_TAG}"
      --destination "${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHORT_SHA}"

helm-package-publish:
  rules:
    - if: $CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH
      changes:
        - ${PATH_TO_CHART}/**/*
    - if: '$CI_PIPELINE_SOURCE == "web"'
      when: always

This file is already present if you are using a skeleton player. You just need to update your player name in there: the CHART_NAME variable needs to be set to the name of your player service. This is the same name as in the name field of Chart.yaml in your Helm chart (see below).
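
Before pushing, you can check locally that the chart is well-formed and that the names match (assuming Helm is installed and the chart lives in the helm-chart directory, as in the skeleton):

# Validate the chart structure and templates.
helm lint helm-chart
# The chart name in Chart.yaml must match CHART_NAME in .gitlab-ci.yml.
grep "^name:" helm-chart/Chart.yaml
grep "CHART_NAME" .gitlab-ci.yml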

Helm Chart

The Helm chart is a set of files that describes how to deploy your player service to the Kubernetes cluster. It is located in the helm-chart directory of your player service repository. It should contain at least the following files:

  • Chart.yaml
  • values.yaml
  • templates/deployment.yaml
  • templates/service.yaml
  • templates/ingress.yaml

The Chart.yaml file contains the metadata of your player service. It should look like this:

apiVersion: v2
name: player-skeleton-java-springboot # ! TODO: Update to your player name
description: A Helm chart for Kubernetes

# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application

# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0

The values.yaml file contains the configuration of your player service. It should look like this:

name: player-skeleton-java-springboot # ! TODO: Update to your player name
replicas: 1

loadBalancerNodePort: 31090

image:
  name: "registry.gitlab.com/the-microservice-dungeon/player-teams/skeletons/player-skeleton-java-springboot" #TODO: Change this to your image path
  tag: "latest"
  imagePullPolicy: "Always"
serviceName: "Player-Skeleton-Java-Springboot" #TODO: Change this to the name of your Player Application

lbTargetPort: 8090
port: 8080
lbNodePort: 31090

env:
  DATA_ENDPOINT_PORT: "8090"
  GAME_HOST: "http://game-service.game"
  GAME_PORT: "8080"
  RABBITMQ_USERNAME: "admin"
  RABBITMQ_PASSWORD: "admin"
  RABBITMQ_HOST: "rabbitmq-service.rabbitmq"
  RABBITMQ_PORT: "5672"
  PLAYER_NAME: "Player-Skeleton-Java-Springboot" #TODO: Substitute with correct player name
  PLAYER_EMAIL: "Player-Skeleton-Java-Springboot@test.com" #TODO: Substitute with correct player email
  DEV_MODE: "false"

ingress:
  enabled: true # True if you run the player on the cluster, false if you run it locally in minikube
  hostname: player-skeleton-java-springboot.goedel.debuas.de #TODO: Change this to your ingress hostname
  path: /
  classname: traefik

The other files should be available in your skeleton, and not require any changes.
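
A quick way to check that your values are picked up correctly is to render the templates locally, without deploying anything (assuming Helm is installed; my-player is an arbitrary release name chosen for this example):

# Render the chart with your values.yaml and print the resulting
# Kubernetes manifests to stdout for inspection.
helm template my-player helm-chart --values helm-chart/values.yaml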

Fleet Configuration in fleet.yaml

In the repository msd-fleet-system, there needs to be a folder with the name of your player service, like player/player-monte. This folder must contain a fleet.yaml file, which should look like this:

namespace: player-your-playername-here # TODO: Change this to your player name
helm:
  releaseName: player-your-playername-here # TODO: Change this to your player name
  targetNamespace: player-your-playername-here # TODO: Change this to your player name
  repo: "https://gitlab.com/api/v4/projects/42239222/packages/helm/stable"
  chart: "player-your-playername-here"  # TODO: Change this to your player name
#  version: "v0.1.23"
  values:
    name: player-your-playername-here # TODO: Change this to your player name
    namespace: player-your-playername-here # TODO: Change this to your player name
    serviceName: "player-monte"
    port: 8080
    targetPort: 8090
    env:
    - name: GAME_HOST
      value: "http://game-service.game:8080"
    - name: RABBITMQ_PASSWORD
      value: "admin"
    - name: RABBITMQ_HOST
      value: "rabbitmq-service.rabbitmq"
    - name: RABBITMQ_PORT
      value: "5672"
    - name: LOGGING_LEVEL
      value: "debug"
    ingress:
      enabled: true
      hostname: player-your-playername-here.goedel.debuas.de # TODO: Change this to your player name
      path: /
      classname: traefik
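
Once Fleet has picked up your configuration, you can verify the deployment with kubectl (a sketch: the target namespace must match your fleet.yaml, and on Rancher setups Fleet's GitRepo resources typically live in the fleet-default namespace):

# Check that Fleet has synced the fleet repository.
kubectl get gitrepos --namespace fleet-default
# Check that your player pod is running in its target namespace.
kubectl get pods --namespace player-your-playername-here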