Currently Worked On
- 1: API Reference (RealTime-MSD)
- 1.1: Game Service
- 1.1.1: OpenAPI (Real Time MSD)
- 1.1.2: AsyncAPI (Real Time MSD)
- 1.2: Map Service
- 1.2.1: Custom map
- 1.2.2: OpenAPI
- 1.2.3: AsyncAPI
- 1.3: Robot Service
- 1.3.1: Robot Actions
- 1.3.2: OpenAPI (Real Time MSD)
- 1.3.3: AsyncAPI (Real Time MSD)
- 1.4: Trading Service
- 1.4.1: Tradeables
- 1.4.2: Economy
- 1.4.3: OpenAPI (Real Time MSD)
- 1.4.4: AsyncAPI (Real Time MSD)
- 1.5: Gamelog Service
- 1.5.1: OpenAPI (Real Time MSD)
- 1.5.2: AsyncAPI (Real Time MSD)
- 1.6: MSD Dashboard
- 1.6.1: MSD Dashboard
- 1.6.2: MSD Dashboard Docker Api
- 2: Dashboard as Microfrontend
- 3: Access to the Rancher cluster and deployment of players
- 4: DevOps-Team Contribution of Bernhard
- 5: DevOps-Team Contribution of Matia
- 6: DevOps-Team Contribution of Omar
- 7: Event Authorization in Kafka/Redpanda
- 8: Extensions for the MSD-Dashboard
- 9: Functional Trading Service Implementation
- 10: Improved Player Dev Env
- 11: Libraries for Player Health Checks
- 12: Map WYSIWYG Editor
- 13: Peer to Peer Communication
- 14: Pluggable Strategy and Typical Strategy Patterns
- 15: Praxisprojekt Real Time MSD
- 16: Real Time MSD
- 17: Reinforcement Learning
- 18: Test Microservice Framework
- 19: Test Microservice Usage
1 - API Reference (RealTime-MSD)
API Reference, by Robin Lemmer.

1.1 - Game Service
The game service is responsible for creating, starting and ending games, as well as creating and registering players.
The players receive an id and a queue, which they can use to send commands and receive the outcome.
Events
Before a game
Before the start of a game there are the following game-service-related events:
- Game Created
- Player “name” joined the game (optional; if no player joins, this event is not produced)
- Game started

Excursus: The game world is created through a REST call to the map service. The game world is automatically sized according to the game's maximum number of players.
Game ending
The final event is:
- Game ended (with this event a game ends; no commands can be issued anymore. The GameLog service should provide a scoreboard and trophies. When a new game starts, you have to join it again to play.)
Repository Link Game
1.1.1 - OpenAPI (Real Time MSD)
1.1.2 - AsyncAPI (Real Time MSD)
1.2 - Map Service
Map Service Technical View
Map Size
The map size depends on the number of players: for fewer than 10 players the map is 15x15, from 10 up to 20 players it is 20x20, and above that the side length equals the number of players. There is, however, an option when starting a new game to override this behavior with a specific size.
Map Structure
The map (also called the gameworld) consists of fields, referred to as planets. Each planet is located on the map by two coordinates (x and y), has neighbouring planets, and, depending on its position, a movement difficulty that influences the amount of energy needed to move a robot there. There are four different areas: inner, mid, outer and border. Planets in the inner part of the map have the highest difficulty (3), whereas planets in the middle and outer parts have difficulties of 2 and 1, respectively.
The area also affects which type of resource a planet may contain. While the outer area only contains coal, planets in the middle area may contain iron and gem deposits, and those in the innermost area gold and platin.
Additionally, some planets in the outer area have a space station. Space stations are locations at which trading takes place, new robots spawn, and robots generally have a higher regeneration rate. Lastly, not every field on the map is a planet a robot can move to. Some fields are blank and represent obstacles; these are called black holes.
Service-oriented Functions
Repository Link Map
1.2.1 - Custom map
Custom game world configuration
When creating a new game, clients have the option to customize the map layout and resource distribution. Otherwise, a default map is generated.
In order to create a custom map, the `gameworldSettings` field has to be provided as part of the request body when creating a new game at the game service. See `POST /games` of the game service for details. Below is an example of what a request body may look like.
Example request body for creating a new game
```json
{
  "maxRounds": 58,
  "maxPlayers": 6,
  "gameworldSettings": {
    "map": {
      "type": "custom",
      "size": 5,
      "options": {
        "layout": "XOOXX;OOOOO;OXXOO;XXXXO;OOOOO"
      }
    },
    "resources": {
      "minAmount": 500,
      "maxAmount": 5000,
      "areaSettings": {
        "border": {
          "coal": 1.0
        },
        "inner": {
          "iron": 0.4,
          "gem": 0.1
        }
      }
    }
  }
}
```
As can be seen in the example, `gameworldSettings` is an object with two keys: `map` and `resources`.
- `map`: Settings that change the layout of the map.
- `resources`: Settings that change the distribution and richness of resources.

All of these settings are optional. If either or both of `map` and `resources` are left out, the default generation takes precedence.
Map Settings
Map settings allow shaping the layout of the map. There are three fields:
- `type` sets the map type. A detailed explanation of all supported types can be found below. If left blank, the service defaults to type `default`.
- `size` allows specifying a map size, overriding the default behavior which calculates the map size based on the number of participating players.
- `options` is an additional map containing options that strongly correlate with the map type. These are optional and explained for each map type below. Unsupported options are ignored. Note that type `custom` is an exception: it requires a valid layout as part of the options.
Map types and options
1. DEFAULT - default
The `default` map is chosen when `type` is left blank or no `gameworldSettings` are provided. It is randomly generated (as all maps are) and covered with planets, with some black holes as obstacles in the `outer` and `mid` areas of the map.

Supported Options: none
Example:
```json
{
  "map": {
    "type": "default",
    "size": 10
  }
}
```
Example Output:
```text
O | O | O | O | O | O | O | O | O | O |
O | X | O | O | O | O | O | X | O | O |
O | O | X | O | X | X | O | O | O | O |
O | O | X | X | O | O | X | O | O | O |
O | O | O | X | O | O | O | O | O | O |
O | X | X | O | O | O | O | X | O | O |
O | O | O | O | O | O | X | O | O | O |
O | O | X | O | X | O | O | O | O | O |
O | O | X | X | O | O | X | X | X | O |
O | O | O | O | O | O | O | O | O | O |
```
2. CUSTOM
Type `custom` allows for a complete customization of the layout. This is done by providing a `layout` option.

Supported Options:
- `layout` (String): The layout is a string consisting of the two characters `X` and `O` (the letter, not the numeral), with `;` as the delimiter. `X` represents a planet while `O` results in a black hole.

Caution:
- The length of each block (or row) must be equal. For instance, `"XOO;OO;OO"` is invalid.
- The layout must be square and match the map size. It is therefore recommended to provide a fixed size for this type of map.
Example:
```json
{
  "map": {
    "type": "custom",
    "size": 5,
    "options": {
      "layout": "XOOXX;OOOOO;OXXOO;XXXXO;OOOOO"
    }
  }
}
```
Example Output:
```text
X | O | O | X | X |
O | O | O | O | O |
O | X | X | O | O |
X | X | X | X | O |
O | O | O | O | O |
```
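The layout constraints above can be checked with a small sketch (a hypothetical `parse_layout` helper, not part of the map service's API):

```python
def parse_layout(layout: str) -> list[list[bool]]:
    """Validate a custom `layout` string and return a grid where True marks
    a planet (X) and False a black hole (O). Sketch only; the map service's
    actual error handling is not documented here."""
    rows = layout.split(";")
    size = len(rows)
    grid = []
    for row in rows:
        if len(row) != size:
            raise ValueError(f"layout must be square: each row needs "
                             f"{size} characters, got {len(row)}")
        if set(row) - {"X", "O"}:
            raise ValueError("layout may only contain the letters X and O")
        grid.append([c == "X" for c in row])
    return grid
```

For the example layout above, this yields a 5x5 grid, while the invalid `"XOO;OO;OO"` from the caution note is rejected.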
3. CORRIDOR
Type `corridor` creates a map of black holes containing random rows and columns filled with planets (“corridors”). These corridors are cut off from each other, meaning that there can’t be two corridors next to each other. Note that there is always at least one row or column with planets present; it is not possible to create a map that consists only of black holes.

Supported Options:
- `frequency` (Numeric between 0 and 1): The probability of a corridor occurring in a row / column.
- `width` (Numeric > 0): The width of each corridor. The default is 1.
Example:
```json
{
  "map": {
    "type": "corridor",
    "size": 10,
    "options": {
      "frequency": 0.2,
      "width": 1
    }
  }
}
```
Example Output:
```text
O | O | O | O | O | O | O | O | O | O |
X | O | X | X | X | X | X | X | X | X |
X | O | X | X | X | X | X | X | X | X |
X | O | X | X | X | X | X | X | X | X |
X | O | X | X | X | X | X | X | X | X |
O | O | O | O | O | O | O | O | O | O |
X | O | X | X | X | X | X | X | X | X |
X | O | X | X | X | X | X | X | X | X |
X | O | X | X | X | X | X | X | X | X |
X | O | X | X | X | X | X | X | X | X |
```
4. ISLANDS
Type `islands` creates a “sea” of black holes with islands of planets in it. Islands are clusters of planets that are isolated from each other, meaning that robots can’t travel between them. These islands are randomly placed and vary in size.

Internally, the algorithm used to create this type of map is a flood-fill algorithm. It randomly chooses a starting planet from the list of yet-unvisited planets using the `frequency` value. Islands are then created recursively by visiting all neighbouring planets and marking each one as either a planet or a black hole using the `size` option.

Supported Options:
- `frequency` (Numeric between 0 and 1): Controls the number of islands appearing on the map. The default is 0.5.
- `size` (Numeric between 0 and 1): Controls the island size. The default is 0.75.
Example:
```json
{
  "map": {
    "type": "islands",
    "size": 10,
    "options": {
      "size": 0.6,
      "frequency": 0.2
    }
  }
}
```
Example Output:
```text
O | O | O | X | X | X | X | O | O | X |
X | X | O | O | X | X | O | X | O | X |
X | X | O | X | X | O | O | O | O | O |
X | X | X | X | O | O | X | X | X | X |
X | X | O | O | X | O | X | O | X | X |
X | X | O | X | O | O | O | O | O | O |
X | O | X | O | O | O | O | X | O | O |
X | O | O | O | O | O | O | O | O | O |
O | O | X | O | O | O | O | O | X | O |
X | X | X | X | O | O | O | X | X | X |
```
5. MAZE
Type `maze` creates a maze where the aisles are planets and the walls are black holes. Strategically, this means there are longer distances for robots to traverse to reach certain areas. It is also possible to flank opponents in a fight or trap fleeing opponents in a dead end.

A map of type `maze` is generated by choosing a starting planet and recursively branching out in all directions, marking neighbours as visitable planets. If a neighbour already has a neighbour that is a visitable planet, it will be declared either a wall or a visitable planet using the `clusters` probability.

Options:
- `clusters` (Numeric between 0 and 1): Controls the probability of “plazas”/clusters of adjacent planets appearing. The default is 0.02.
Example:
```json
{
  "map": {
    "type": "maze",
    "size": 10,
    "options": {
      "clusters": 0.2
    }
  }
}
```
Example Output:
```text
O | O | O | O | O | O | O | O | O | O |
O | X | O | X | X | X | X | O | X | O |
O | O | O | O | O | O | O | X | O | O |
O | X | O | O | O | X | O | X | O | O |
X | O | O | X | O | O | O | X | O | X |
O | O | O | X | X | O | O | O | O | O |
O | X | O | O | X | O | X | O | X | O |
O | O | O | O | O | O | X | O | O | O |
O | O | X | O | O | X | O | X | X | O |
O | O | O | O | X | O | O | O | O | O |
```
Resources Settings
By default, resources are distributed as follows:
- `COAL` can be found in the `BORDER` and `OUTER` regions of the map with a probability of 80%.
- `IRON` can be found in the `MID` region of the map with a probability of 50%.
- `GEM` can be found in the `MID` region of the map with a probability of 30%.
- `GOLD` can be found in the `INNER` region of the map with a probability of 20%.
- `PLATIN` can be found in the `INNER` region of the map with a probability of 10%.
- The resource amount on planets is always 10,000 by default.
Using the `resources` settings, you have more control over the amount and placement of those resources:
- `minAmount`: Controls the minimum amount of resources on a planet.
- `maxAmount`: Controls the maximum amount of resources on a planet.
- `areaSettings`: Enables a detailed specification of which resource type is placed in each area with which probability.

As with all other settings, these are optional.
`amount`-Settings

The amount of resources each deposit contains is a random number between `minAmount` and `maxAmount`. Depending on which value is set, the following behavior is to be expected:

| minAmount | maxAmount | Behavior |
|---|---|---|
| Value A | Value B | Random value between A and B |
| null | Value B | Random value between 0 and B |
| Value A | null | Random value between A and 10,000 |
| null | null | Default of 10,000 |
`area`-Settings

The `areaSettings` enable detailed control over which resource type is placed in which area with which probability. The setting is therefore a nested map by area and resource with a probability between 0 and 1.

Caution: Providing an area setting overrides the default placement entirely, meaning a game world with the following setting would have no resources in the `MID` and `OUTER` areas.
Example:
```json
{
  "areaSettings": {
    "border": {
      "coal": 1.0
    },
    "inner": {
      "platin": 0.3,
      "gold": 0.5
    }
  }
}
```
Map areas
There are four map areas:
- The `BORDER` (red) region contains all planets at the edge of the map.
- The `INNER` (yellow) region contains all planets that are more than `(size - 1) / 3` planets away from the edge.
- The `MID` (green) region contains all planets that are more than `(size - 1) / 6` planets away from the edge.
- The `OUTER` (blue) region contains all planets not in any of the above regions.

For the example above, the `INNER` region is 3 planets away from the edge (`(10 - 1) / 3 = 3`) and the `MID` region 1 planet away from the edge (`(10 - 1) / 6 = 1` because of integer division).

Therefore, a map with a size of 50 would have an `INNER` region that starts `(50 - 1) / 3 = 16` planets away from the edge and a `MID` region that starts after `(50 - 1) / 6 = 8` planets.
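The area thresholds can be sketched as a small classifier (a hypothetical helper; integer division as in the examples above):

```python
def planet_area(x: int, y: int, size: int) -> str:
    """Classify a planet by its distance to the nearest edge of the map,
    using the integer-division thresholds described above."""
    dist = min(x, y, size - 1 - x, size - 1 - y)  # steps to the closest edge
    if dist == 0:
        return "BORDER"
    if dist > (size - 1) // 3:
        return "INNER"
    if dist > (size - 1) // 6:
        return "MID"
    return "OUTER"
```

For `size = 10` this reproduces the thresholds above: a planet 4 steps from the edge is `INNER`, 2 steps is `MID`, 1 step is `OUTER`.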
1.2.2 - OpenAPI
1.2.3 - AsyncAPI
1.3 - Robot Service
Robot Service Technical View
Robot and its Information
The robot has several variables that you must keep an eye on. You can obtain the information for your robot using this REST call:

`GET http://{defaultHost}/robots/{robot-uuid}`
Response Payload Example
```json
{
  "id": "497f6eca-6276-4993-bfeb-53cbbbba6f08",
  "player": "ae2cfcf0-e870-4360-a41e-3b3bb3312234",
  "planet": "2faf337d-d8d1-40fc-983e-5f130540496b",
  "alive": true,
  "maxHealth": 100,
  "maxEnergy": 60,
  "energyRegen": 8,
  "attackDamage": 5,
  "miningEfficiency": 10,
  "miningSpeed": 1.0,
  "movementSpeed": 1.0,
  "attackSpeed": 0.2,
  "energyRegenSpeed": 1.0,
  "health": 75,
  "energy": 43,
  "healthLevel": 3,
  "damageLevel": 2,
  "miningSpeedLevel": 0,
  "miningLevel": 4,
  "miningEfficiencyLevel": 2,
  "energyLevel": 3,
  "energyRegenLevel": 2,
  "energyRegenSpeedLevel": 0,
  "movementSpeedLevel": 0,
  "attackSpeedLevel": 4,
  "storageLevel": 0,
  "inventory": {
    "maxStorage": 20,
    "usedStorage": 5,
    "coal": 3,
    "iron": 2,
    "gem": 0,
    "gold": 0,
    "platin": 0
  }
}
```
Robot Data
- General: `id`, `player`, `planet`, `alive`. These are self-explanatory.
- Max stats of the robot according to its current upgrade status: `maxHealth`, `maxEnergy`, `energyRegen`, `attackDamage`, `miningEfficiency`, `miningSpeed`, `movementSpeed`, `attackSpeed`, `energyRegenSpeed`. These variables can be improved through upgrades bought from the trading service.
- Current status of the robot: `health`, `energy`. Your current pool of health and energy. A robot does not die at 0 energy; it simply cannot use any action except `regenerate`.
- Current upgrade levels (see the payload example above): `healthLevel`, `damageLevel`, `miningEfficiencyLevel`, `miningLevel`, `energyLevel`, `energyRegenLevel`, `storageLevel`, `miningSpeedLevel`, `movementSpeedLevel`, `attackSpeedLevel`, `energyRegenSpeedLevel`. You can only upgrade to the next level via the trading service.
- Object `inventory` with the attributes: `maxStorage`, `usedStorage`, `coal`, `iron`, `gem`, `gold`, `platin`.
Spawning a Robot
The spawning of a robot is a direct result of a “buy robot” command sent to the trading service.
If the player has enough money, the trading service issues a REST call to the robot service.
Upgrade transaction
- Trading service: receives the command from the player and processes it, checks whether the player has enough money, and withdraws the money
- Robot service: validates the robot and performs the upgrade

The robot service only has to validate the given robot and check whether the bought upgrade is possible.
Repository Link Robot
1.3.1 - Robot Actions
Robot Actions
Commands
:::info
Not all action-related events are presented here with an actual example, because they all work in the same manner. For the missing events, please refer to the AsyncAPI.
:::
Valid command types for a robot:
- `move`
- `fight`
- `mine`
- `regenerate`
Actions
Most actions require energy. If the robot does not have enough energy left, the action will fail. Every action has an action duration, after which the command finishes processing. During that time, no further command can be issued for the same robot.
Movement action
- robot service: receives and processes the command, issues request to map, checks if two planets are neighbours, processes the results and throws event according to the result
- map service: provides neighbours of a planet
A successful result of the move must include all planet data of the new position. This info must be obfuscated so that not every player can simply read the most recent planet data of all visited planets; therefore, the planet info is obfuscated via the command UUID. After a successful movement, two events are thrown: the first indicates the success of the movement and contains the remaining energy of the robot, the planet data of the target planet, and the UUIDs of all robots located there; the second is mapped to the command UUID and provides all neighbours of the target planet. If a player tries to move a robot to a planet that is not reachable, the robot service just throws an event reporting the failure for that specific robot.
Fighting action
- Robot service: receives and processes the command and throws event according to the result.
Mining action
- Robot service: receives and processes the command, issues requests to map, processes the results and throws event according to the result
- Map service: handles the amount of available resources
To determine whether the request is valid and the corresponding robot can mine the resource at its location, the robot service first requests the type of the resource from the map service. The robot service then sends a mining request to the map service, which returns the amount that can actually be mined (the requested value or below).
Regeneration action
- Robot service: receives and processes the command and throws event according to the result
Be careful not to mix this up with the energy restoration you can buy with a command to the trading service. This action does not require energy.
1.3.2 - OpenAPI (Real Time MSD)
1.3.3 - AsyncAPI (Real Time MSD)
1.4 - Trading Service
Trading Service Technical View
Lifecycle Technical information
- Trading keeps a bank account for every player
- Trading debits bank account based on trading operation (Buy/Sell)
- Trading doesn’t care about game mechanics, it will happily sell you an upgrade for a non-existing robot and charge your account
- Trading announces prices at the start of each round
Player and Money
Trading saves a `Player`. The playerId is known by listening to the player-registered event of the game service.
Trading is also the only service which saves the money of a registered `Player`, as a numeric attribute `money`.
Events for player
- `BankAccountInitialized`: produced when a player's bank account is initialized.
- `BankAccountCleared`: produced when a bank account has been cleared (at the end of a game).
- `BankAccountTransactionBooked`: produced when the bank account has been charged.
- `TradablePrices`: produced at the start of each round, when prices are announced.
- `TradableSold`: produced when something has been sold.
- `TradableBought`: produced when something has been bought.
Service-oriented Functions
1.4.1 - Tradeables
Tradeables
Tradeables are Items, Resources, Restorations or Upgrades. They have a name, a price and a type.
Resources
Resources types
Before you will be able to afford more than just your starting robot you will have to mine resources.
There are five resource types, which can be found on the planets. These are the starting selling prices for the resources.
| Value | Name | Price |
|---|---|---|
| COAL | Coal | 5 |
| IRON | Iron | 15 |
| GEM | Gem | 30 |
| GOLD | Gold | 50 |
| PLATIN | Platin | 60 |
Items
| Value | Name | Description | Price |
|---|---|---|---|
| ROBOT | Robot | Buys another robot | 100 |
Restorations
| Value | Name | Description | Price |
|---|---|---|---|
| HEALTH_RESTORE | Health restoration | Heals the robot to full HP | 50 |
| ENERGY_RESTORE | Energy restoration | Restores the robot to full energy | 75 |
Upgrades
Upgrades improve the variables of your robot. For example, a bigger health pool.
Upgrade types
| Value | Description |
|---|---|
| STORAGE_N | Storage Level N=1-5 Upgrade |
| HEALTH_N | Health Points Level N=1-5 Upgrade |
| DAMAGE_N | Damage Points Level N=1-5 Upgrade |
| MINING_SPEED_N | Mining Speed Level N=1-5 Upgrade |
| MINING_N | Mining Strength Level N=1-5 Upgrade |
| MAX_ENERGY_N | Energy Capacity Level N=1-5 Upgrade |
| ENERGY_REGEN_N | Energy Regen Level N=1-5 Upgrade |
| MOVEMENT_SPEED_N | Movement Speed Level N=1-5 Upgrade |
| ATTACK_SPEED_N | Attack Speed Level N=1-5 Upgrade |
| MINING_EFFICIENCY_N | Mining Efficiency Level N=1-5 Upgrade |
| ENERGY_REGEN_SPEED_N | Energy Regen Speed Level N=1-5 Upgrade |
Upgrade Prices
| Level | Price |
|---|---|
| 1 | 50 |
| 2 | 300 |
| 3 | 1500 |
| 4 | 4000 |
| 5 | 15000 |
Upgrade Restriction
There are two restrictions when it comes to buying upgrades:
- You can only buy one upgrade per robot per command. The reason is that an upgrade is seen as a single action; just imagine giving your car to the shop for tuning.
- You can only buy an upgrade to the next level of the variable you want to improve. For example, you can only upgrade HEALTH_1 to HEALTH_2.
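The “next level only” restriction can be sketched like this (a hypothetical helper, not the trading service's API):

```python
def can_buy_upgrade(current_level: int, tradable: str) -> bool:
    """Check the 'next level only' restriction for an upgrade tradable
    such as 'HEALTH_2' (illustrative only)."""
    requested = int(tradable.rsplit("_", 1)[1])   # e.g. 'HEALTH_2' -> 2
    return 1 <= requested <= 5 and requested == current_level + 1
```

So a robot at HEALTH_1 may buy `HEALTH_2`, but not jump straight to `HEALTH_3`.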
1.4.2 - Economy
Warning
The economy is subject to change and is currently removed as a game mechanic. However, it is likely that an economy concept will become part of the game at a later point, so keep the economy described here in mind.

Economy
Price Economy
The special items and the resources in the game have a simulated economy, meaning that prices are adjusted according to various parameters.
Every item and resource has its own economy entity. An economy consists of a buy/sell history and a stock/demand. The item stock does not influence the number of items that can be bought; likewise, the resource demand does not influence how many resources can be sold. They are only virtual parameters to simulate price adjustments. Additionally, there is another parameter that determines the time range over which the history is analysed.
This economy basically implements a very easy form of price adjustments:
more items are bought => less stock => price high
less items are bought => more stock => price low
more resources are sold => less demand => price low
less resources are sold => more demand => price high
These economies will calculate new prices after every command-execution. The prices will then be published through their corresponding events.
All prices will always be integers.
Resources sell-price adjustments
A calculation is performed that gradually changes the prices of the resources every round. For this calculation, only the number of resources of a certain type sold in the past matters.
newPrice = ceil(originalPrice * historyFactor)
This factor is calculated as follows (if the factor is greater than or equal to 1, it is set to 1 and the price stays the same):
historyFactor = resourceDemand / soldAmountInTimeRange
Example 1
demand = 10; sold = 15
factor = 10 / 15 = 0.66
factor <= 1
=> price will be changed by a factor of 0.66
Example 2
demand = 10; sold = 3
factor = 10 / 3 = 3.33
factor > 1
=> price will stay the same
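The sell-price adjustment above can be sketched as follows (a hypothetical helper; the zero-sales case is an assumption):

```python
import math

def resource_sell_price(original_price: int, demand: int,
                        sold_in_range: int) -> int:
    """Sell-price adjustment: the price only drops when more was sold than
    the virtual demand; otherwise it stays at the original price."""
    if sold_in_range == 0:        # assumption: no sales means no adjustment
        return original_price
    factor = min(demand / sold_in_range, 1)  # factor capped at 1
    return math.ceil(original_price * factor)
```

With the starting coal price of 5, Example 1 yields `ceil(5 * 10/15) = 4`.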
Items buy-price adjustment
Item prices are calculated with a buy-history factor and an additional round adjustment:
newPrice = ceil(originalPrice * historyFactor * roundAdjust)
The buy-history factor is calculated as follows (if the factor is smaller than or equal to 1, it is set to 1 and the price stays the same):
historyFactor = boughtAmountInTimeRange / itemStock
Example 1:
stock = 5; bought = 3
factor = 3 / 5 = 0.6
factor <= 1
=> price will stay the same
Example 2:
stock = 2; bought = 3
factor = 3 / 2 = 1.5
factor > 1
=> price will be changed by a factor of 1.5
Also, items become more expensive in the endgame phases, when players have accumulated more wealth. This ensures fair play.
roundAdjust = floor(200 * (1 / (1 + e^(-0.014 * currentRound) * 199)))
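Both formulas can be combined into a small sketch (hypothetical helpers; note that because of the floor, `roundAdjust` starts at 1 and only approaches 200 late in the game):

```python
import math

def round_adjust(current_round: int) -> int:
    """Logistic round factor from the formula above (1 early, ~200 late)."""
    return math.floor(200 / (1 + math.exp(-0.014 * current_round) * 199))

def item_buy_price(original_price: int, stock: int, bought_in_range: int,
                   current_round: int) -> int:
    """Buy-price: history factor (floored at 1) times the round factor."""
    factor = max(bought_in_range / stock, 1) if stock else 1
    return math.ceil(original_price * factor * round_adjust(current_round))
```

At round 0 a robot (price 100) bought when `stock = 2, bought = 3` costs `ceil(100 * 1.5 * 1) = 150`.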
1.4.3 - OpenAPI (Real Time MSD)
1.4.4 - AsyncAPI (Real Time MSD)
1.5 - Gamelog Service
GameLog Service Technical View
Warning
Gamelog is currently being rewritten. The documentation here is outdated; please refer to the GameLog repository.

There are multiple scoreboards:
- The global scoreboard
- Multiple scoreboards for the different event categories: Fighting, Mining, Traveling, Trading
The weighting for calculating the scores won’t be changeable during the game.
Repository Link GameLog
1.5.1 - OpenAPI (Real Time MSD)
1.5.2 - AsyncAPI (Real Time MSD)
1.6 - MSD Dashboard
1.6.1 - MSD Dashboard
Hint
The following part of the documentation is embedded from the README of the msd-dashboard repository. If you experience any issues on this page, just visit the repository directly.

MSD-Dashboard
Mission and Vision
One recurring issue with playing the Microservice Dungeon game was that while players could compete against each other, nobody really knew what was happening at any given moment. They didn’t know which player was dominating or getting smashed by others unless they painstakingly compiled logs themselves. To address this problem, we introduced the Dashboard—a tool designed to observe, analyze, and provide real-time insights into gameplay, enabling users to track and evaluate live events more effectively.
The Dashboard serves multiple purposes. Firstly, it significantly aids in player development by allowing easy creation, starting, and stopping of games. Users can also effortlessly create opponents (custom player) and, after fulfilling certain conditions, even compete against their own player. These features are invaluable for testing and refining player strategies. Secondly, the Dashboard enhances larger code-fights where tracking numerous players becomes challenging. It provides a comprehensive view of the game’s status and ongoing events, which would otherwise be difficult to monitor.
From the start of a game, the Dashboard offers real-time monitoring through a live map displaying robots, resources, and detailed information about participating players and planets. Additionally, it presents statistics both graphically and textually, some of which remain accessible even after the game ends for comprehensive analysis.
Table of Contents
- Architecture and Operation of the Dashboard
- Setup Guide - For Developers
- Player-Guide: Getting Started
- Further Instructions for Use
- FAQ
- How to Report Bugs
Architecture and Operation of the Dashboard
Architecture and General Function
The Dashboard is built using Angular, a popular open-source web application framework developed by Google. Angular provides a robust platform for building dynamic single-page applications (SPAs) with a rich user interface.
What is Angular?
Angular is a TypeScript-based open-source framework for building web applications. It extends HTML with additional attributes and binds data to HTML with powerful templating. Angular is known for its speed, performance, and ease of development, making it a preferred choice for modern web applications.
External Services
In addition to Angular, the Dashboard relies on several external services that provide endpoints for fetching game data. These services include:
- Game Service: The primary service for game-related operations.
- MSD-Dashboard-Backend: Returns information about all robots and planets currently present.
- Gamelog Service: Provides scoreboards, map data, and makes it possible to map player names to player IDs.
- MSD-Dashboard-Docker-API: Tailored to the Dashboard’s needs, it starts Docker containers with specific configurations to enable the implementation of custom players.
Internal Services
Within the Dashboard, the central service, Match-Data-Fetch-Service, is responsible for data collection. This service operates as follows:
- Regular Data Fetching: The Match-Data-Fetch-Service calls the data-fetching methods of the respective services at regular intervals, typically three times per game round.
- HTTP Requests: These methods execute HTTP requests to the external service endpoints.
- Data Aggregation: The results from these requests are passed back to the Match-Data-Fetch-Service.
- Data Distribution: The collected data is made available to all other internal services.

Key Considerations:
- Real-Time Data Retrieval: Since external services/APIs only provide data for the current round and do not store historical data, the Dashboard must fetch data each round to ensure a comprehensive view of the game.
- Data Consistency: Regular and timely data fetching is crucial for maintaining accurate and complete game data within the Dashboard.
How Are Information and Changes Calculated from This Data?
To provide comprehensive game data, information on players, robots, and planets is collected for each round. These datasets are temporarily stored and further processed for detailed analysis.
The Match-Data-Service handles this processing by:
- Data Comparison: Comparing the current round’s data with the previous round’s data, focusing on robots.
- Change Detection: Identifying new robots, killed robots, purchased upgrades, and calculating financial transactions such as money earned from selling resources and purchasing robots and upgrades.
Data Persistence and Usage:
- Robot Data: The raw data and derived information are persisted and utilized by various services for further analysis and functionality.
- Planet Data: While planet data is also stored and used for the live map, it does not require the same level of detailed comparison and analysis as robot data.
How Does the Custom Player Feature Work?
The custom player feature allows players to run as Docker containers on the local machine. Here’s how it works:
-
Player Data Creation:
- Create data for the player, including the name, email, and Docker image to be used.
-
Configuration File Creation:
- Generate a configuration file for each custom player, stored in JSON format.
- This file includes essential environment variables such as player email and player name, which must be unique across all players, and other user-defined configuration variables.
- The Dashboard automatically creates (if not specified) and updates the configuration file.
-
Container Creation and Launch:
- The internal ‘Docker-Api-Service’ sends an HTTP request to the external ‘MSD-Dashboard-Docker-Api’.
- The API uses the provided information (container name, image name, port, and configuration file) to create and start the container.
- The variables in the configuration file are set as environment variables of the container.
- The API utilizes the Node.js library ‘Dockerode’ to interface with the Docker engine and manage the container lifecycle.
- The ‘MSD-Dashboard-Docker-API’ provides feedback on the success of the container creation and start-up process.
Similarly, the MSD-Dashboard-Docker-Api provides endpoints to stop and delete containers. At the end of each game, all containers are stopped and deleted.
Setup Guide - For Developers
Welcome to the setup guide for developers. This will walk you through the steps required to clone the repository and get the Dashboard running on your machine.
Prerequisites
Before you begin, ensure that you have the following installed on your system:
Important: It is crucial to have the local development environment, including the dashboard-backend, the dashboard-docker-api and the gamelog, up and running for the Dashboard to function correctly. Please follow the steps provided in their respective links to set up these components before proceeding.
Local Setup
Step 1: Prepare the Directory
First, you need to create or navigate to the directory where you want to clone the repository. Open your terminal or command prompt and use the cd command to navigate to your desired directory.
Step 2: Clone the Repository
Run the following command in your terminal to clone the repository:
git clone https://github.com/MaikRoth/msd-dashboard.git
This will create a copy of the repository in your current directory.
Step 3: Navigate to the Repository Folder
Once the repository is cloned, navigate into the repository folder by running:
cd msd-dashboard
Replace msd-dashboard with the correct folder name if it’s different.
Step 4: Install Dependencies
In the repository folder, run the following command to install all the necessary dependencies:
npm install
This command will download and install all the required Node.js packages.
Step 5: Run the Application
Finally, to start the Dashboard, run:
ng serve
This will start the Angular development server, and the Dashboard should be accessible at http://localhost:4200.
Docker Container Setup
Step 1: Clone the Repository
Follow the same steps as in the local setup to clone the repository.
Step 2: Navigate to the Repository Folder
cd msd-dashboard
Step 3: Docker Container Setup
In PowerShell (or another terminal), set up the Docker containers by running:
docker-compose up
This command will create and start the necessary Docker containers.
Usage
After completing the installation, you can access the Dashboard by navigating to http://localhost:4200 in your web browser.
Troubleshooting
If you encounter any issues during the setup, make sure all prerequisites are correctly installed and that you’re following the steps in the correct order.
Player-Guide: Getting Started
If you use the local development environment, the dashboard should be available at localhost:4200. It will navigate you to the ‘Control Panel’ tab. Here, you can:
- Create a game
- Customize it
- Add players
- Start the game
After starting a game, you will be automatically navigated to the map. It takes a few rounds (usually until round 3) to finish loading. From there, you can start exploring the application and manually stop the game if needed. The data seen in the match statistics tab is available even after stopping the game, but it will be deleted when you create a new game.
Player-Guide: How to Play Against Your Own Player
The Dashboard allows you to compete against your own player or other custom players. Here’s how it works:
- The Dashboard creates a Docker container from the Docker image of the player on your local machine.
- It overrides all important variables (e.g., player name, player email, game service URL).
- The player runs in the Docker container and joins the game automatically.
- You can add more than one instance of a specific player to your game.
Requirements
To play against your own player, your player needs to fulfill certain requirements.
1. Docker Image
You must provide the Docker image of your player. You can do this by either:
- Adding it to our microservice-dungeon registry: registry.gitlab.com/the-microservice-dungeon/devops-team/msd-image-registry
- Having the image on your local machine or any other registry.
2. Environment Variables
Your player must read certain variables from environment variables. This is important because the Dashboard needs to change the values of these variables to start the player correctly as a container. The following environment/system variables need to be implemented in your player with exactly these names:
PLAYER_NAME
PLAYER_EMAIL
GAME_HOST
RABBITMQ_HOST
Important: Please make sure to name these exactly as written here.
Other variables that are not required but may be necessary in some cases:
RABBITMQ_USERNAME
RABBITMQ_PASSWORD
RABBITMQ_PORT
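In a Node.js player, reading these variables could look like the sketch below. The fallback values are assumptions for local runs, not defaults defined by the Dashboard.

```javascript
// Sketch: reading the required (and optional) environment variables in a
// Node.js player. All fallback values are illustrative assumptions.
function loadPlayerConfig(env = process.env) {
  return {
    playerName: env.PLAYER_NAME ?? 'local-player',
    playerEmail: env.PLAYER_EMAIL ?? 'local-player@example.com',
    gameHost: env.GAME_HOST ?? 'http://localhost:8080',
    rabbitMqHost: env.RABBITMQ_HOST ?? 'localhost',
    rabbitMqUsername: env.RABBITMQ_USERNAME ?? 'admin',
    rabbitMqPassword: env.RABBITMQ_PASSWORD ?? 'admin',
    rabbitMqPort: Number(env.RABBITMQ_PORT ?? 5672),
  };
}
```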
Adding Your Player to the Game
After fulfilling the requirements, visit the dashboard interface at localhost:4200 and start a game via the interface. The following steps explain how to add players to the game.
- Open Menu:
- Click the ‘Add Custom Player’ button.
- Click the ‘Select Own Player’ button. A menu will open where you must enter the details of the player you want to add.
- Enter Image Registry:
- Insert the registry of your image if it is in one.
- The default input is the microservice-dungeon registry. If your player is registered there, you don’t need to change anything in this line.
- If the Docker image of your player is on your local machine, leave the input field empty.
- Enter Image Name:
- Insert the name of your Docker image. If the image is in the microservice-dungeon registry, the name is usually something like player-hackschnitzel.
- Enter Image Tag:
- Insert the tag of the Docker image. The default input is latest, so you can leave it as is unless you want the image with a specific tag.
- Provide Port:
- Provide a port to map the container port to the same port on the host machine (port:port).
- Leaving the field empty or set to 0 will result in a random port assignment (this should be fixed in the future in the Docker API to avoid port assignment when no value is provided).
Adding a Configuration to Your Player
After entering the details of your player image, the Dashboard will ask if you want to add configurations. This allows you to pass additional environment/system variables to your player for further customization. For example, you could have an environment variable named ‘STRATEGY’ to change the strategy of your player based on the given input. This allows you to start your player with different strategies. If you don’t have any configurations to add, just press ‘No, continue without’.
If you decide to add a configuration, a file picker will open. The file you select must be a text file with a single JSON object in it. The file name is not important. It could look like this:
```json
{
  "STRATEGY": "aggressive",
  "port": 43553,
  "MAX_NUMBER_ROBOTS": 100
}
```
Playing Against Standard Players
For this feature, you don’t need any special requirements. You can simply add one or more of the standard players to your game. Just press the ‘Add Custom Player’ button and then click on their name. Standard players cannot be configured.
Important: It might take some time to pull the Docker images for the first time.
Further Instructions for Use
- Dashboard Usage: Ensure that the Dashboard remains in the foreground at all times. Switching browser tabs or using other applications may disrupt regular data fetching, leading to incomplete game data on the Dashboard (hopefully this can be fixed in the future).
- Game Spectating: When spectating a game, start observing from the beginning (in case you ever intend to start a game through other sources than the dashboard). This ensures accurate data calculations, especially for metrics like player ‘balance’, which rely on complete game data.
FAQ
How do I play against my own player?
Why does the Dashboard show different values than those logged in my player?
- The Dashboard retrieves and calculates game data by fetching it from an API backend, which provides the current state of all robots and planets. The Dashboard continuously fetches this data, manually assigns round numbers, and calculates changes between rounds. Occasionally, specific information may be lost or assigned to incorrect round numbers, leading to discrepancies.
When creating a game with ‘previous’ settings, will the custom players retain the old configuration, or do I need to provide a new configuration file?
- Custom players will retain the exact configuration provided in the last game. You do not need to provide a new configuration file unless you intend to make changes. Currently, there is no way to see whether a configuration file was provided or not.
How to Report Bugs
The preferred method for reporting bugs is to create an issue on GitLab and provide a detailed description of the problem.
If you encounter any difficulties, you can also message me directly via Discord: bronzescrub or use the appropriate Discord channels on the ArchiLab Discord server.
Authors
1.6.2 - MSD Dashboard Docker Api
MSD-Dashboard-Docker-Api
The API is closely tied to the MSD Dashboard and is used to start, stop, and delete predefined players as Docker containers using a given configuration.
The API offers various endpoints:
- /docker/configureAndRunWithUpload: Requires a file named playerConfig.json containing a single JSON object whose variables will be set as environment variables in the Docker container, as well as an image name, port, and container name. The API creates a container with these configurations.
- /docker/stop: Stops the Docker container with the provided name.
- /docker/stopAndRemove: Stops and deletes the Docker container with the provided name.
- /docker/remove: Deletes the Docker container with the provided name.
When the API is terminated, all Docker containers created by the API are automatically stopped and deleted.
Note: At startup, the API pulls the images for the standard players if they are not already available, as these are necessary for a specific Dashboard feature.
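A call to the stop endpoint could be assembled as in the sketch below. Only the endpoint path and the port come from this page; the JSON body shape is an assumption, not the API's documented contract.

```javascript
// Sketch: building a request against the Docker API's stop endpoint.
// Only the path /docker/stop and port 3100 come from the docs above;
// the JSON body shape is an assumption.
function buildStopRequest(containerName, baseUrl = 'http://localhost:3100') {
  return {
    url: `${baseUrl}/docker/stop`,
    options: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ containerName }),
    },
  };
}

// Usage with Node 18+ global fetch:
//   const { url, options } = buildStopRequest('player-1');
//   const response = await fetch(url, options);
```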
Local Installation
Step 1: Clone the repository:
git clone https://gitlab.com/florianlemmer/dashboard-docker-api
Step 2: Install the dependencies:
npm install
Step 3: Run the project:
npm start
Alternatively:
node server.js
The API will then be accessible at localhost:3100.
2 - Dashboard as Microfrontend
Documentation for the Dashboard as Microfrontend
Situation Description:
- University project ⇒ limited time, constantly changing developers ⇒ Maintenance problematic
- Ensuring long-term functionality
- Ensuring a consistent UI
- Implementation of the necessary infrastructure
Criteria for Technology Selection (derived from the situation)
- What (special) requirements are there for our dashboard?
- Real-time data, more complex data through the map, many user interactions ⇒ Performance
- Maintainability, as simple as possible and as automated as possible
- Possibility of easy connection of player frontends
- Communication between the micro-frontends, will be necessary for the map
- Technology-independent development of the frontends
Technology Comparison
iFrame
iFrames are HTML elements that allow embedding another HTML page within an existing page.
Implementation:
- Creation of the Root Application: A simple HTML page with iFrames acting as containers.
- Embedding Micro-Frontends: Each Micro-Frontend application is loaded within an iFrame.
- Communication: Use window.postMessage for communication between the iFrames.
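A minimal sketch of such postMessage communication, with an origin check on the receiving side. The origin and message-type names here are purely illustrative, not taken from the MSD codebase.

```javascript
// Illustrative postMessage protocol between a host page and an embedded
// iFrame. Origin and message-type names are assumptions.
const TRUSTED_ORIGIN = 'http://localhost:4200';

function makeRobotMessage(robotId, planetId) {
  return { type: 'ROBOT_MOVED', robotId, planetId };
}

// The host would send with:
//   iframe.contentWindow.postMessage(makeRobotMessage('r1', 'p7'), TRUSTED_ORIGIN);
// The embedded frontend registers the handler via:
//   window.addEventListener('message', (e) => handleMessage(e, onRobotMoved));
function handleMessage(event, onRobotMoved) {
  if (event.origin !== TRUSTED_ORIGIN) return false; // drop untrusted senders
  if (event.data && event.data.type === 'ROBOT_MOVED') {
    onRobotMoved(event.data);
    return true;
  }
  return false;
}
```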
Web Components
Web Components are a collection of technologies that enable the creation of reusable and encapsulated HTML elements, which can be used across any web application. They consist of three main technologies:
- Custom Elements: Allows the definition of custom HTML elements.
- Shadow DOM: Provides encapsulation of the DOM and styles, ensuring that the implementation of a Web Component is isolated from the rest of the page.
- HTML Templates: Enables the creation of templates for reusable markup structures.
Webpack 5 Module Federation
Module Federation is a concept and feature introduced in Webpack 5. It allows different web applications and microfrontends to share and import modules at runtime, without the need for these modules to be duplicated in each project. This can significantly improve the reusability and integration of modules across different projects.
Webpack is a widely used open-source module bundler tool for JavaScript applications. It processes modules with dependencies and generates static assets that can be efficiently loaded by browsers. Webpack allows developers to leverage modern JavaScript features and libraries while reducing the complexity of managing dependencies and builds.
Implementation:
- Configure ESLint

```json
{ "extends": ["next/babel", "next/core-web-vitals"] }
```
- Configure the Remote Application

```javascript
const NextFederationPlugin = require("@module-federation/nextjs-mf");
const { FederatedTypesPlugin } = require("@module-federation/typescript");

const federationConfig = {
  name: "remote",
  filename: "static/chunks/remoteEntry.js",
  exposes: {
    "./Home": "./src/component/home.tsx",
  },
  shared: {},
};

const nextConfig = {
  reactStrictMode: true,
  typescript: {
    ignoreBuildErrors: true,
  },
  webpack(config, options) {
    config.plugins.push(
      new NextFederationPlugin(federationConfig),
      new FederatedTypesPlugin({ federationConfig })
    );
    return config;
  },
};

module.exports = nextConfig;
```
- Create a Component to Use in the Host Project

```javascript
import React from 'react';

const Home = () => {
  return <div>Welcome to the Remote Home Component!</div>;
};

export default Home;
```
- Configure the Host Application

```javascript
const NextFederationPlugin = require("@module-federation/nextjs-mf");
const { FederatedTypesPlugin } = require("@module-federation/typescript");

const nextConfig = {
  reactStrictMode: true,
  typescript: {
    ignoreBuildErrors: true,
  },
  webpack(config, options) {
    const { isServer } = options;
    const remotes = {
      remote: `remote@http://localhost:3001/_next/static/chunks/remoteEntry.js`,
    };
    const federationConfig = {
      name: "host",
      remotes: remotes,
      shared: {},
    };
    config.plugins.push(
      new NextFederationPlugin(federationConfig),
      new FederatedTypesPlugin({ federationConfig })
    );
    return config;
  },
};

module.exports = nextConfig;
```
- Use the Exported Component in the Host

```javascript
import React from 'react';

const RemoteHome = React.lazy(() => import('remote/Home'));

const IndexPage = () => {
  return (
    <React.Suspense fallback="Loading Remote Component...">
      <RemoteHome />
    </React.Suspense>
  );
};

export default IndexPage;
```
RSBuild
RSBuild is a framework specifically designed for the development of micro-frontend architectures. It offers several advantages, but also some disadvantages compared to other technologies for micro-frontend containers.
For more information, see: RSBuild Example
Single SPA
Single-SPA is a micro-frontend framework that enables multiple micro-frontends to be combined in a single web application. It loads and renders individual micro-frontends at runtime as needed, ensuring they work together.
- Application Registration: Each micro-frontend application is registered with Single-SPA and loaded based on routes and conditions.
- Lifecycle Hooks: Single-SPA uses lifecycle hooks (bootstrap, mount, unmount) to start, render, and remove micro-frontends.
- Routing: Single-SPA manages routing and directs navigation to the appropriate micro-frontends.
- Framework-Agnostic: Supports micro-frontends built with various frameworks like React, Angular, Vue, etc.
Implementation:
- Setup the Root Application: Create a root application with Single-SPA.
- Register Micro-Frontends: Each micro-frontend is registered as a separate application and loaded as needed.
- Communication Between Micro-Frontends: Use global event bus or shared state mechanisms for communication.
```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8" />
  <meta name="viewport" content="width=device-width, initial-scale=1.0" />
  <meta http-equiv="X-UA-Compatible" content="ie=edge" />
  <title>Root Config</title>
  <script type="systemjs-importmap">
    {
      "imports": {
        "single-spa": "https://cdn.jsdelivr.net/npm/single-spa@5.9.0/lib/system/single-spa.min.js",
        "react": "https://cdn.jsdelivr.net/npm/react@17.0.2/umd/react.production.min.js",
        "react-dom": "https://cdn.jsdelivr.net/npm/react-dom@17.0.2/umd/react-dom.production.min.js"
      }
    }
  </script>
  <script type="systemjs-importmap">
    {
      "imports": {
        "@single-spa/welcome": "https://unpkg.com/single-spa-welcome/dist/single-spa-welcome.js",
        "@MEA/root-config": "//localhost:9000/MEA-root-config.js",
        "@MEA/React-MicroFrontend": "//localhost:8080/MEA-React-MicroFrontend.js",
        "@MEA/React-MicroFrontend2": "//localhost:8081/MEA-React-MicroFrontend2.js"
      }
    }
  </script>
</head>
<body>
</body>
</html>
```
index.html
```html
<single-spa-router>
  <nav>
    <application name="@org/navbar"></application>
  </nav>
  <route path="settings">
    <application name="@org/settings"></application>
  </route>
  <main>
    <route default>
      <h1>Hello</h1>
      <application name="@MEA/React-MicroFrontend"></application>
      <application name="@MEA/React-MicroFrontend2"></application>
    </route>
  </main>
</single-spa-router>
```
microfrontend-layout.html
```javascript
import { registerApplication, start } from "single-spa";
import {
constructApplications,
constructRoutes,
constructLayoutEngine,
} from "single-spa-layout";
import microfrontendLayout from "./microfrontend-layout.html";
const routes = constructRoutes(microfrontendLayout);
const applications = constructApplications({
routes,
loadApp({ name }) {
return System.import(name);
},
});
const layoutEngine = constructLayoutEngine({ routes, applications });
applications.forEach(registerApplication);
layoutEngine.activate();
start();
```
root-config.js
Example repository: https://github.com/erdinc61/mfederation-test
Dynamic UI Composition
Dynamic UI Composition is an architectural approach where the backend acts as a gateway and dynamically assembles the UI components at runtime. This can be achieved through a Backend-for-Frontend (BFF) pattern or a gateway that integrates the various micro-frontends and presents them as a cohesive page.
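As a sketch, the gateway idea boils down to fetching HTML fragments from the individual micro-frontends and assembling them server-side. The fragment shape ({ id, html }) below is an illustrative assumption, not a defined contract.

```javascript
// Sketch of server-side composition in a BFF/gateway: HTML fragments
// fetched from the micro-frontends are stitched into one page.
// The fragment shape ({ id, html }) is an illustrative assumption.
function composePage(fragments) {
  const sections = fragments.map(
    (fragment) => `<section id="${fragment.id}">${fragment.html}</section>`
  );
  return `<main>\n${sections.join('\n')}\n</main>`;
}
```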
Technology comparison
| Technology | Advantages | Disadvantages |
|---|---|---|
| iFrames | - Easy Integration: Minimal effort to embed.<br>- Isolation: Each application runs in isolation, avoiding conflicts. | - Communication: Difficult and often cumbersome communication between the main application and the embedded content.<br>- Performance: Higher resource consumption and potentially slower loading.<br>- User Experience: Limited design options and poorer user experience. |
| Web Components | - Standardized: Works in all modern browsers without additional libraries.<br>- Reusability: Components can be used independently of the framework.<br>- Encapsulation: Styles and scripts of a Web Component are isolated from the rest of the document.<br>- Framework Independence: Can be used in any web application, regardless of the framework used. | - Browser Compatibility: Limited support in older browsers.<br>- Boilerplate Code: Creating Web Components often requires more boilerplate code compared to frameworks like React or Vue, which can make development somewhat cumbersome.<br>- Styling: Shadow DOM offers good isolation for styles but can also make sharing global styles difficult; managing styles between Shadow DOM and the outer page can be complicated.<br>- Performance: Using Shadow DOM and custom elements can increase the initial load time, especially when many components are rendered simultaneously. |
| Webpack 5 Module Federation | - Performance: Efficient use of shared modules.<br>- Integration: Seamless integration of micro-frontends, regardless of the framework used. | - Dependency on Build Tools (Webpack 5).<br>- Incompatibilities: Different versions of the same library can lead to unpredictable errors and compatibility issues.<br>- Maintenance Effort: Maintaining the configuration and shared dependencies can be time-consuming and error-prone.<br>- Scalability: Managing shared dependencies and configuration can become complex and difficult to scale with a large number of micro-frontends. |
| Single SPA | - Flexibility: Supports various frameworks like React, Vue, Angular, and more.<br>- Modularity: Each micro-frontend can be developed and deployed independently.<br>- Efficient Loading: Loads micro-frontends on demand and can share common resources.<br>- Good Developer Experience: Provides APIs and tools for easy management of micro-frontends. | - Complexity: Can be more complex to set up and maintain than simpler approaches.<br>- Styling: Isolating and managing styles between different micro-frontends can be challenging and lead to inconsistencies. |
| Dynamic UI Composition | - Flexibility: Allows dynamic and context-dependent assembly of UI components.<br>- Encapsulation: Backend handles the logic of composition, simplifying frontend development. | - Complexity: Requires a complex backend infrastructure and increases overall system complexity.<br>- Performance: Can introduce additional latencies when the backend dynamically assembles the UI. |
| RSBuild | - Performance: Optimized performance through efficient bundling and lazy-loading strategies; minimizes load time and improves user experience through asynchronous loading of modules.<br>- Isolation: RSBuild offers strong isolation between different micro-frontends, preventing errors in one module from affecting others, enabling a more robust and stable application.<br>- Flexibility: Supports various frontend frameworks and libraries, allowing developers to choose the best technologies for their needs; enables integration of existing projects without major restructuring. | - Community and Support: Compared to established technologies like Webpack or Module Federation, the community around RSBuild might be smaller.<br>- Dependencies: Strong dependency on the RSBuild infrastructure and provided tools, which could limit flexibility in choosing alternative solutions.<br>- Migration: Potential issues with updating or migrating to new versions of RSBuild. |
Our Technologies:
Webpack:
- Webpack: Module bundler for modern JavaScript applications.
- Webpack Module Federation Plugin: Enables module federation, where modules can be shared between different Webpack builds.
JavaScript Frameworks/Libraries:
- React
- Vue.js
- Angular
Development Tools
- ESLint
- Prettier
Styling
- CSS-in-JS
- SASS/SCSS
- Tailwind CSS
API Communication
- Axios: Promise-based HTTP client for the browser and Node.js:

```javascript
import axios from "axios";

const API_URL = "https://jsonplaceholder.typicode.com";

export const fetchPosts = async () => {
  const response = await axios.get(`${API_URL}/posts`);
  return response.data;
};
```
- React Query: For data fetching and state management in React applications. You need to create a provider and a client to use React Query:

```typescript
import { QueryClient, QueryClientProvider } from "@tanstack/react-query";
import { ReactNode } from "react";
import { ReactQueryDevtools } from "@tanstack/react-query-devtools";

const queryClient = new QueryClient();

interface ProvidersProps {
  children: ReactNode;
}

export function Providers({ children }: ProvidersProps) {
  return (
    <QueryClientProvider client={queryClient}>
      {children}
      <ReactQueryDevtools initialIsOpen={false} />
    </QueryClientProvider>
  );
}
```
React Query example:
```typescript
import { useQuery } from "@tanstack/react-query";
import { fetchPosts } from "../services/api";

const PostsPage = () => {
  const { data, error, isLoading } = useQuery({
    queryKey: ["posts"],
    queryFn: fetchPosts,
  });

  if (isLoading) return <div>Loading...</div>;
  if (error instanceof Error)
    return <div>An error occurred: {error.message}</div>;

  return (
    <div>
      <h1>Posts</h1>
      <ul>
        {data.map((post: { id: number; title: string; body: string }) => (
          <li key={post.id}>
            <h2>{post.title}</h2>
            <p>{post.body}</p>
          </li>
        ))}
      </ul>
    </div>
  );
};

export default PostsPage;
```
State Management
- Redux: Centralized state management for React applications.
- RobotService -> WebSocket endpoint
- Planet-Service -> WebSocket endpoint
- Trading-Service -> WebSocket endpoint
Microfrontend Technologies:
- Single-SPA and Qiankun are ideal for complex microfrontend applications that need to integrate different frameworks.
- PuzzleJS and Piral are good for simpler, fast-to-develop microfrontends.
- Web Components are ideal for creating reusable UI components across different frameworks.
- Mosaic is best suited for large enterprises with complex requirements and the need for extensive support.
https://github.com/erdinc61/mfederation-test
Conclusion: Setting up the repositories was well documented and allowed for a smooth start. Expanding the project is easy and intuitive. You quickly become familiar with the structure, and the loose coupling of the components is good. However, the project does not appear to be kept fully up to date.
Architecture:
Current Architecture:
The MSD dashboard queries the MSD dashboard backend service and the game log service at regular intervals. These services listen to the Kafka queue and process most, if not all, events (I have not compared every single event). The processed events are written to their databases, providing the dashboard frontend with information on robots, planets, scoreboards, and possibly also achievements (which represents an interesting extension possibility).
The MSD dashboard frontend allows the starting and ending of games as well as the creation of custom players via REST calls to the game service. If selected by the user, these custom players are started via the MSD dashboard Docker API.
Monorepos vs. Multi-Repos
What is a Monorepo?
A monorepo hosts multiple projects or components within a single repository. This approach promotes code and resource sharing, simplifies dependency management, and ensures consistency.
What is a Multi-Repo Approach?
A multi-repository structure involves housing individual projects or components in separate repositories. This provides autonomy to different teams working on different projects. This isolation allows for a focused and structured development process where teams can independently manage their codebase.
Monorepos
Pro:
- Easier coordination and synchronization: Frontend and backend can easily collaborate as both parts of the application are in the same repository.
- Consistent code quality and standards: Shared repository promotes uniform code standards and best practices.
- Simplified dependency management: Centralized management of dependencies reduces version conflicts.
- More efficient CI/CD pipelines: Integrated CI/CD pipelines enable consistent builds and releases for the entire system.
- Cohesion: The frontend serves as a complement to the backend and is not meaningful on its own, while the backend could also operate independently. Therefore, frontend and backend should be considered a unit and merged together to ensure efficient and better development and maintenance of the application.
Contra:
- Complexity and performance issues: Large monorepos can cause slow build and pull processes, and it’s easy to lose track.
- Scalability issues: Growing monorepos can be more difficult to scale and maintain.
- Size issue: Too large repositories can make it difficult for developers to get started.
- Version management: The shared repository leads to unclear version management.
Multi-Repos
Advantages:
- Isolation and independence: Issues in one project do not affect other areas.
- Flexibility in tool selection: Teams can use the tools and workflows best suited to their needs.
- Lower complexity and faster builds: Smaller repositories lead to faster build and test cycles.
- Easier entry: A smaller repository allows developers to get into the project more quickly and focus on specific aspects of the application. This makes understanding and editing the code easier, leading to more efficient and focused development work.
Disadvantages:
- Difficulty managing dependencies: Complex and error-prone dependency management.
- Increased administrative overhead: Multiple repositories require more coordination and management.
- Potential communication issues: Isolated work can lead to additional coordination effort.
https://www.gitkraken.com/blog/git-multi-repo-vs-git-mono-repo
https://www.thoughtworks.com/en-us/insights/blog/agile-engineering-practices/monorepo-vs-multirepo
Our Repo Strategy
Placement of Repos
The repositories are located in a dedicated area within the “core-services.” We are following a multi-repo approach. This not only allows for centralized management but also facilitates coordination with the various microfrontends and other components. Most importantly, this approach significantly eases the maintenance and expansion of individual components.
Placement of Individual Components
- Microfrontends: Each microfrontend component is managed in its own repository. This allows for independent development, testing, and deployment of individual frontend parts.
- Design System: The design system is maintained in a larger, central repository. This ensures that all frontend components can access a consistent design and common UI components.
Deployment of a New Component
When deploying a new microfrontend component, the process follows these steps:
- The microfrontend is developed in its own repository.
- After completion and testing, it is deployed.
- It is then integrated into the container (shell), making it part of the overall application.
Necessary Changes in Various Repositories
- Core Services Repositories
- DevOps Repositories
- Docs-Repo
Naming the Feature Branch
For the development of new features, a specific branch naming convention is used:
- Feature Branch: MSD-{ticketnumber}-{tickettitle} (Example: MSD-10-Add_websocket_endpoint). This ensures clear assignment of branches to specific tickets and their descriptions.
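The convention can also be checked mechanically, e.g. in a CI hook. The regular expression below is a sketch of the pattern described above, not an existing project script:

```javascript
// Sketch: validating the MSD-{ticketnumber}-{tickettitle} branch convention.
const FEATURE_BRANCH = /^MSD-\d+-\w+$/;

function isValidFeatureBranch(name) {
  return FEATURE_BRANCH.test(name);
}
```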
For deploying the microfrontends, we have three options:
- Content Delivery Network, i.e., a server we rent (not a real option)
- Deployment via own Docker containers in which the frontends run
- Deployment via the respective backend service, which then serves the frontend under a standard URL; e.g., the MapService would serve the map microfrontend at http://localhost:3000/map.
If we want separate repositories for the frontends, option 2 makes sense. Option 3 with separate repositories for the frontends is feasible but cumbersome.
The implementation of a new microfrontend would proceed as follows:
- Implement the code
- Configure e.g., Webpack Module Federation (plugin)
- Build (e.g., tsc compile or bundling of the files)
- Deploy the microfrontend via one of the above options.
- Integrate the microfrontend into the host/container frontend
Ideas for Map implementation
Approach I OREO-Concept
OREO-Concept Description
The OREO concept describes the simple layering of the Map-Frontend and the Robot-Frontend. These are completely independent of each other. As standalone components, we already leverage the advantages of micro-frontends here, with comparatively little effort.
OREO-Concept Advantages
- Independence: Each frontend can be developed, deployed, and scaled independently, allowing for greater flexibility and faster development cycles.
- Modularity: The separation of concerns ensures that each component can be maintained and updated without affecting the other, reducing the risk of introducing bugs.
- Scalability: Independent components can be scaled individually based on their specific needs, optimizing resource usage.
OREO-Concept Disadvantages
- Integration Complexity: Combining independent frontends can introduce complexity in terms of integration and communication between the components, though with this approach that complexity remains limited.
- Overhead: Managing multiple independent components can introduce overhead in terms of deployment, monitoring, and maintenance.
OREO-Concept Naming
The name derives from the visual analogy of the two frontends as the cookies of an OREO, simply stacked on top of each other.
Approach II Narcos-Concept
Narcos-Concept Description
This approach involves the container, in this case, the Robot-Map container, determining the position of the robots. The robots are independent applications. The Robot-Map container determines the position of the robot’s planet based on the provided planetId and positions the robot accordingly.
In this approach, not only are the map and the planets independent of the robots, but each individual robot is also independent. This is a very complex approach that allows for very loose coupling. In our situation, this approach is overkill, and the benefits do not outweigh the effort required to implement it.
Narcos-Concept Advantages
- High Flexibility: Since each robot is an independent application, they can be developed, tested, and deployed independently.
- Scalability: Individual robots can be scaled independently according to demand and load.
- Maintainability: Changes to one robot do not directly affect other robots or the map, making maintenance easier.
- Reusability: Robots can be reused in different contexts or projects because they are independent.
- Technology Independence: Different robots can be developed with different technologies, allowing the best tools to be chosen for each task.
Narcos-Concept Disadvantages
- Complexity: Managing and coordinating many independent applications can be very complex.
- Performance Overhead: Communication between the independent units can lead to performance overhead.
- Deployment: Deploying many independent units can be more complex and requires a well-thought-out CI/CD pipeline.
- Debugging: Debugging can be more difficult as issues may arise in the interaction between the independent units.
- Initial Effort: The initial effort to set up the infrastructure and communication mechanisms is high.
Narcos-Concept Naming
This concept is reminiscent of Pablo Escobar, who commands his Narcos. These are independent units that are assigned to cities by him. Hence the name Narcos-Concept.
Narcos-Concept Implementation
An implementation of this concept, along with more information, can be found here: https://gitlab.com/sjannsen/MSD-Map-Approach-2
Approach III WW3-Concept
WW3-Concept Description
In this concept, the coordination of the Map-Frontend and Robot-Frontend is done through a shared state. The container application provides a map grid. Here, the planets can register and position themselves. Each planet represents its own frontend component. The robots, which are also independent frontend components, can use the state to find the position of the corresponding planet and position themselves accordingly. The difference from the Narcos-Concept lies particularly in the independent positioning of the components on a grid provided by the container. In the Narcos-Concept, these are positioned by the container. Unfortunately, within the given timeframe of the project, we were not able to implement this approach successfully. Of the three approaches, this is the most complex and requires the most effort.
WW3-Concept Advantages
- Independent Positioning: Components can position themselves independently on the grid, allowing for more flexibility and dynamic interactions.
- Modularity: Each planet and robot being its own frontend component promotes modularity and separation of concerns.
- Scalability: The approach scales very well as each component can be developed and deployed independently.
- High Fault Tolerance: Due to the loose coupling, the system has a high resistance to failures.
WW3-Concept Disadvantages
- Complexity: The approach is the most complex among the three, requiring significant effort to implement and maintain.
- Coordination Overhead: Managing the shared state and ensuring consistent positioning introduces additional overhead.
- Implementation Time: The complexity and effort required make this by far the most time-consuming approach to implement, especially in comparison to the other two.
WW3-Concept Naming
The name derives from the analogy of independent units in the military, which coordinate together in an operational area.
Connecting a Player Frontend
Example of dynamically loading frontends in Single SPA: https://gitlab.com/sjannsen/Single-SPA-Test
To load player frontends, the GameService needs to store the name and the URL when a player registers. Under an endpoint, it then needs to provide JSON of the following pattern:
[
{
name: '@MEA/React-MicroFrontend',
url: '//localhost:8080/MEA-React-MicroFrontend.js',
},
{
name: '@MEA/React-MicroFrontend2',
url: '//localhost:8081/MEA-React-MicroFrontend2.js',
},
]
This is all the container application needs to dynamically import the player frontends. Further details can be found here: https://gitlab.com/sjannsen/Single-SPA-Test#dynamic-integration-of-microfrontends
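The fetch-and-register flow could be sketched as follows. This is only a sketch: the helper names (`toRegistration`, `registrationsFor`) and the `activeWhen` route are assumptions, not part of the actual services. In the real container app, each entry would be passed to `singleSpa.registerApplication` with an app factory of `() => System.import(entry.url)`.

```typescript
// Sketch only: helper names and the activeWhen route are assumptions.
type PlayerFrontend = { name: string; url: string };

// Map one entry of the GameService JSON to a Single-SPA registration shape.
// In the container app, `appUrl` would become `() => System.import(entry.url)`.
function toRegistration(entry: PlayerFrontend) {
  return {
    name: entry.name,
    appUrl: entry.url,
    activeWhen: ["/"], // assumed: show player frontends on every route
  };
}

// The entries would be fetched from the GameService endpoint described above.
function registrationsFor(entries: PlayerFrontend[]) {
  return entries.map(toRegistration);
}
```

Each resulting entry would then be handed to Single-SPA, which dynamically imports the player bundle from the given URL.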
Where to go from here
We have presented and evaluated three possible approaches for building the MSD dashboard in micro-frontends. Approach I, in particular, can be realized with a reasonable amount of effort. The feature of integrating player frontends is also feasible with some effort, and this documentation provides starting points for it.

As the next possible step, the MSD dashboard could be migrated to a micro-frontend technology like Single-SPA or similar. The current state of the map will initially remain unchanged. After that, the feature of integrating player frontends could be implemented. Finally, the map could be converted into micro-frontends.

The advantages of micro-frontends are not particularly relevant in our situation from a pragmatic perspective, as we do not have business use cases where loose coupling is of particular importance, such as when outages are associated with financial loss or when we have many teams working simultaneously on an application. However, this would create a very realistic working environment, as found in many large companies like Rewe Digital, Zalando, etc.
3 - Access to the Rancher cluster and deployment of players
Main project:
The project aims to deploy a player onto the Rancher cluster. This involves creating a personal access account for the cluster.
Below, I describe my past tasks in this project:
- Started the MSD project at the beginning of the summer semester 2023
- Assigned to the Admin group together with Maik, with Fijola assigning us tasks
- The main tasks in the first semester were getting familiar with “K3s,” “K8s,” and the “Goedel Cluster”, as well as understanding the current MSD environment
- This phase ended with a presentation by Maik and me on the basics (kubectl commands)
- Various experiments, ranging from small to larger ones, were conducted with the cluster, all orchestrated by Fijola
- After the Code Fight, the goal was to work on a specific topic to complete our individual projects. The idea was to create a script that could restore the entire cluster. However, this turned out to be a short-term solution.
- At the beginning of 2024, it became clear that we could use Terraform. Fijola set up Terraform.
- I then took on the final project of automating the deployment of players.
- Additionally, I have written the documentation for the OpenTofu (currently still referred to as Terraform) page, which includes only the key points. The more detailed documentation has been stored in the DevOps Guide, where both the code and the general handling of OpenTofu are explained.
4 - DevOps-Team Contribution of Bernhard
Contributed to establishing a new standard for how the custom Helm charts are structured, and then applied it to the core service gamelog and the player skeletons (Kotlin, Python, Rust, TypeScript). The changes have already been made and merged.
Configured the Bitnami MariaDB chart so it can serve as a central database, reducing the resources needed for deployment in Minikube or Kubernetes. The chart has a shell script in its values file that sets up all needed namespaces, databases, and users for each core service. The values file is completely deployment-ready and only waits for the core services' Helm charts to be changed to link to it and include it in the deployment setup for Kubernetes and Minikube.
Configured the Bitnami PostgreSQL database to be a central database like MariaDB, with the same goal of reducing the resources needed for deployment. It likewise incorporates a shell script in its values file to set up the namespaces, databases, and users for the services that depend on it. After the requirements for setting up the users and databases were changed from an SQL file to a shell script for potentially more possibilities, the script no longer worked and still needs to be fixed. The individual commands have been verified to work.
The SurrealDB Helm chart configuration is a one-to-one extraction from the dashboard, the only service that uses it. An attempt was made to give it a setup script like the other two databases, but the complexity of the Helm chart configuration made this not worthwhile, since only one service uses it.
During development, a lot was learned about Kubernetes and Helm charts, including how user-unfriendly Helm charts can be when writing one from scratch or changing one written by someone else. This was made harder by the sparse and sometimes confusing documentation. Bitnami's chart standard, with common variables, good default values, and all likely changes configurable from the values file, makes working with Helm charts a lot easier.
5 - DevOps-Team Contribution of Matia
(tbd)
6 - DevOps-Team Contribution of Omar
Contributions Overview
Helm Charts for MariaDB
- Objective: Simplify database deployments across multiple environments.
- Outcome: Developed a minimal Helm chart for MariaDB, leveraging the Bitnami Helm chart as a foundation. This integration standardizes our database setups and reduces deployment complexities.
Minikube and Kubernetes Structure Updates
- Objective: Update and optimize the local development environment to support the latest Kubernetes features.
- Outcome: Revamped the Minikube setup and integrated new Helm charts for core services. This update enhances local testing capabilities and better simulates our production environment.
These contributions are pivotal for maintaining our project’s competitiveness and agility in adopting new technologies and architectural paradigms.
7 - Event Authorization in Kafka/Redpanda
Old Architecture
Services

Integration

New Architecture
Changes
Player(Java)
The PlayerExternalEventListener.java has been adapted to function as a @KafkaListener, enabling it to receive events sent via Kafka (Redpanda) that are intended for each player. The listener is configured to listen to two topics: one is a public topic where messages are broadcast to every player in the game, and the other is a private topic dedicated to each individual player, identified by their unique player ID. For player developers, nothing changes in terms of how events are produced. Events are generated and handled exactly as before.
Game
The PlayerExchangeManager.kt has been replaced by the PlayerTopicManager.kt to ease the transition from RabbitMQ queues to Kafka event streams. This shift involved some minor adjustments to ensure seamless integration. The PlayerTopicManager.kt is now responsible for creating and managing all necessary topics, enabling event delivery to every player. In addition, we added an AdminClient that is used to empty the players' topics and the public topic before each game. It is also used to create a user, and that user's read permission, when a new player is registered or joins a game.
local-dev-environment
Some .yaml files were adjusted to define Redpanda-Connect instead of RabbitMQ in our Docker cluster. Redpanda Console was modified to be usable when authentication is required. The ACL setup commands needed for the core services were also added to Redpanda's start script.
kafka-rabbitmq-connector
The KafkaRabbitMQConnectorApplication.java is now a KafkaKafkaConnectorApplication.java, which means some minor adjustments were made to use Kafka instead of RabbitMQ.
redpanda-connect
Redpanda-Connect is Redpanda's connector solution and uses a simple config-style setup to process events from any supported source and send them to any supported sink. In our case we use the Kafka part of the connector and both listen and send to Kafka, which is Redpanda. The connector takes the events, like the old KafkaRabbitMQConnector did, and sends them to the player-specific topics or the public topic. A big problem with Redpanda-Connect is that it can only send an event to one topic, not to multiple topics. This is a limitation for events that can occur in multiplayer and are sent to two players, such as a fight that is sent to the two players involved.
Fog of War
The Fog of War is maintained using Kafka's authentication and authorization features. Players authenticate with a username and password. Kafka's Access Control Lists (ACLs) authorize each player and ensure that players only have READ access to their private topic and the public topic. Each player therefore has to add a password in their application.properties like this: dungeon.playerPassword=${PLAYER_PASSWORD:password}.
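The read rule enforced by the ACLs can be mirrored in a tiny helper. This is only an illustrative sketch; the function names are hypothetical and the actual enforcement happens inside Kafka/Redpanda, not in application code.

```typescript
// Hypothetical mirror of the ACL rule above: a player may only read
// the public topic and their own private topic.
function readableTopics(playerId: string): string[] {
  return ["player-public", `player-${playerId}`];
}

function mayRead(playerId: string, topic: string): boolean {
  return readableTopics(playerId).includes(topic);
}
```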
What Topics do the Services need?
As we want to have dedicated user permissions even for our internal services, we need to know which topics they read, write, or create. The services always create the topics they write to, and no other service ever writes to a topic it did not create. A notable exception is the topic "command": it seems to be published only by the GameService, and no service listens to it, as they all get the commands via a REST call from Game. In addition, this topic is auto-created by Kafka/Redpanda and not by Game.
Here is a table of needed permission of the Services for each Topic:
| Service | Create / Write | Read |
|---|---|---|
| Game | status, playerStatus, roundStatus, command, error | robot |
| Map | gameworld, planet, error | robot |
| Robot | robot, error | status, roundStatus, gameworld, trade-sell, trade-buy |
| Trading | bank, trade-sell, trade-buy, prices, error | status, roundStatus, playerStatus, robot, bank, trade-sell, trade-buy, prices |
| Redpanda Connect | status, roundStatus, planet, robot, bank, prices, trade-sell, trade-buy, error | |
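The permission table can also be encoded as data, for example to sanity-check a service's configuration. This is only a sketch mirroring the table; the real ACLs live in Redpanda, and the helper name is an assumption.

```typescript
// The service permission table, encoded as data (sketch only; the
// authoritative ACLs are created in Redpanda via rpk).
const permissions: Record<string, { write: string[]; read: string[] }> = {
  game:    { write: ["status", "playerStatus", "roundStatus", "command", "error"],
             read:  ["robot"] },
  map:     { write: ["gameworld", "planet", "error"],
             read:  ["robot"] },
  robot:   { write: ["robot", "error"],
             read:  ["status", "roundStatus", "gameworld", "trade-sell", "trade-buy"] },
  trading: { write: ["bank", "trade-sell", "trade-buy", "prices", "error"],
             read:  ["status", "roundStatus", "playerStatus", "robot", "bank",
                     "trade-sell", "trade-buy", "prices"] },
};

// Check whether a service is allowed to write to a topic.
function canWrite(service: string, topic: string): boolean {
  return permissions[service]?.write.includes(topic) ?? false;
}
```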
Cli Commands needed for ACL
Based on the now-known topics, we can create the ACL (Access Control List) entries that get bound to the users we create. Each service gets its own user, to which its needed permissions are bound. To ensure separation, the write/create and read permissions each get their own binding. The bindings also include the consumer group each service consumes from, ensuring that each service only uses its correct one. Game gets an extra binding that allows it to create topics prefixed with "player-"; this allows it to create the public topic and all individual player topics.
rpk acl user create game -p game --user admin --password admin --sasl-mechanism SCRAM-SHA-256
rpk acl user create map -p map --user admin --password admin --sasl-mechanism SCRAM-SHA-256
rpk acl user create robot -p robot --user admin --password admin --sasl-mechanism SCRAM-SHA-256
rpk acl user create trading -p trading --user admin --password admin --sasl-mechanism SCRAM-SHA-256
rpk acl user create redpanda-connect -p redpanda-connect --user admin --password admin --sasl-mechanism SCRAM-SHA-256
rpk acl create --allow-principal User:game --group game --operation read --topic robot --user admin --password admin --sasl-mechanism SCRAM-SHA-256
rpk acl create --allow-principal User:game --group game --operation create --operation write --topic status --topic playerStatus --topic roundStatus --topic command --topic error --user admin --password admin --sasl-mechanism SCRAM-SHA-256
rpk acl create --allow-principal User:game --group game --operation all --resource-pattern-type prefixed --topic player- --user admin --password admin --sasl-mechanism SCRAM-SHA-256
rpk acl create --allow-principal User:map --group map --operation read --topic robot --user admin --password admin --sasl-mechanism SCRAM-SHA-256
rpk acl create --allow-principal User:map --group map --operation create --operation write --topic gameworld --topic planet --topic error --user admin --password admin --sasl-mechanism SCRAM-SHA-256
rpk acl create --allow-principal User:robot --group robot --operation read --topic status --topic roundStatus --topic gameworld --topic trade-sell --topic trade-buy --user admin --password admin --sasl-mechanism SCRAM-SHA-256
rpk acl create --allow-principal User:robot --group robot --operation create --operation write --topic robot --topic error --user admin --password admin --sasl-mechanism SCRAM-SHA-256
rpk acl create --allow-principal User:trading --group trading --operation read --topic status --topic roundStatus --topic playerStatus --topic robot --topic bank --topic trade-sell --topic trade-buy --topic prices --user admin --password admin --sasl-mechanism SCRAM-SHA-256
rpk acl create --allow-principal User:trading --group trading --operation create --operation write --topic bank --topic trade-sell --topic trade-buy --topic prices --topic error --user admin --password admin --sasl-mechanism SCRAM-SHA-256
rpk acl create --allow-principal User:redpanda-connect --group redpanda-connect --operation all --topic status --topic roundStatus --topic planet --topic robot --topic bank --topic prices --topic trade-sell --topic trade-buy --topic error --user admin --password admin --sasl-mechanism SCRAM-SHA-256
rpk acl create --allow-principal User:redpanda-connect --group redpanda-connect --operation all --resource-pattern-type prefixed --topic player- --user admin --password admin --sasl-mechanism SCRAM-SHA-256
For players, we create their user using their name, as this is set by the players themselves and known before joining a game. The topics are deleted and recreated by the GameService each game to remove their contents, as otherwise new players would consume the old events in the public topic. The user is created for each new player that is registered with the GameService.
This is an example of rpk CLI user creation, on which we based the information needed for the Kafka API:
rpk acl user create player-monte -p password
rpk acl create --allow-principal User:player-monte --group PLAYERID --operation read --topic player-PLAYERID --topic player-public
The PLAYERID needs to be substituted by the player UUID that is given to each player with their registration with the Game Service.
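The substitution could be sketched as a small helper that produces the two rpk commands for a newly registered player. The helper name is hypothetical; the command strings follow the example above.

```typescript
// Hypothetical helper performing the PLAYERID substitution described above,
// producing the two rpk commands for a newly registered player.
function playerAclCommands(playerName: string, playerId: string, password: string): string[] {
  return [
    `rpk acl user create player-${playerName} -p ${password}`,
    `rpk acl create --allow-principal User:player-${playerName} ` +
      `--group ${playerId} --operation read ` +
      `--topic player-${playerId} --topic player-public`,
  ];
}
```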
If a potential Fog of War violation is acceptable, the prefixed resource pattern type with "player-" as the prefix can be used; the player then has read access to the events in all player topics.
rpk acl create --allow-principal User:player-monte --operation read --resource-pattern-type prefixed --topic player-
Issue with Kafka SASL Authentication
When Kafka’s SASL authentication is enabled, the player no longer receives the critical GAME STARTED event needed to kick off further processing. Notably, other services integrated with Kafka, using the same SASL authentication setup, continue to function without issues. Despite confirming that permissions and configurations are correct, the player fails to receive the first event when SASL authentication is enabled.
8 - Extensions for the MSD-Dashboard
The Dashboard is a tool that offers real-time monitoring of an existing game through a live map displaying robots, resources, and detailed information about participating players and planets. The Dashboard will be extended by the following features:
- Party Creation: Player can add standard players and their own player to the game
- Match Statistics: Charts display information about the current score of the game
- Player Maps: Option to switch to player specific maps showing the perspective of a single player
- Play-Again-Button: Button for playing again with same settings including game configurations and added players
Hint
The following part of the documentation is embedded from the README of the msd-dashboard repository. If you experience any issues on this page, just visit the repository directly.
MSD-Dashboard
Mission and Vision
One recurring issue with playing the Microservice Dungeon game was that while players could compete against each other, nobody really knew what was happening at any given moment. They didn’t know which player was dominating or getting smashed by others unless they painstakingly compiled logs themselves. To address this problem, we introduced the Dashboard—a tool designed to observe, analyze, and provide real-time insights into gameplay, enabling users to track and evaluate live events more effectively.
The Dashboard serves multiple purposes. Firstly, it significantly aids in player development by allowing easy creation, starting, and stopping of games. Users can also effortlessly create opponents (custom player) and, after fulfilling certain conditions, even compete against their own player. These features are invaluable for testing and refining player strategies. Secondly, the Dashboard enhances larger code-fights where tracking numerous players becomes challenging. It provides a comprehensive view of the game’s status and ongoing events, which would otherwise be difficult to monitor.
From the start of a game, the Dashboard offers real-time monitoring through a live map displaying robots, resources, and detailed information about participating players and planets. Additionally, it presents statistics both graphically and textually, some of which remain accessible even after the game ends for comprehensive analysis.
Table of Contents
- Architecture and Operation of the Dashboard
- Setup Guide - For Developers
- Player-Guide: Getting Started
- Further Instructions for Use
- FAQ
- How to Report Bugs
Architecture and Operation of the Dashboard
Architecture and General Function
The Dashboard is built using Angular, a popular open-source web application framework developed by Google. Angular provides a robust platform for building dynamic single-page applications (SPAs) with a rich user interface.
What is Angular?
Angular is a TypeScript-based open-source framework for building web applications. It extends HTML with additional attributes and binds data to HTML with powerful templating. Angular is known for its speed, performance, and ease of development, making it a preferred choice for modern web applications.
External Services
In addition to Angular, the Dashboard relies on several external services that provide endpoints for fetching game data. These services include:
- Game Service: The primary service for game-related operations.
- MSD-Dashboard-Backend: Returns information about all robots and planets currently present.
- Gamelog Service: Provides scoreboards, map data, and makes it possible to map player names to player IDs.
- MSD-Dashboard-Docker-API: Tailored to the Dashboard’s needs, it starts Docker containers with specific configurations to enable the implementation of custom players.
Internal Services
Within the Dashboard, the central service, Match-Data-Fetch-Service, is responsible for data collection. This service operates as follows:
- Regular Data Fetching: The Match-Data-Fetch-Service calls the data-fetching methods of the respective services at regular intervals, typically three times per game round.
- HTTP Requests: These methods execute HTTP requests to the external service endpoints.
- Data Aggregation: The results from these requests are passed back to the Match-Data-Fetch-Service.
- Data Distribution: The collected data is made available to all other internal services.

Key Considerations:
- Real-Time Data Retrieval: Since external services/APIs only provide data for the current round and do not store historical data, the Dashboard must fetch data each round to ensure a comprehensive view of the game.
- Data Consistency: Regular and timely data fetching is crucial for maintaining accurate and complete game data within the Dashboard.
How Are Information and Changes Calculated from This Data?
To provide comprehensive game data, information on players, robots, and planets is collected for each round. These datasets are temporarily stored and further processed for detailed analysis.
The Match-Data-Service handles this processing by:
- Data Comparison: Comparing the current round’s data with the previous round’s data, focusing on robots.
- Change Detection: Identifying new robots, killed robots, purchased upgrades, and calculating financial transactions such as money earned from selling resources and purchasing robots and upgrades.
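The round-to-round comparison for robots could be sketched as a simple set diff. The `Robot` shape and helper name are assumptions; only the ids matter for detecting new and killed robots.

```typescript
// Sketch of the change detection described above: compare the current
// round's robots with the previous round's robots by id.
type Robot = { id: string };

function diffRobots(previous: Robot[], current: Robot[]) {
  const prevIds = new Set(previous.map((r) => r.id));
  const currIds = new Set(current.map((r) => r.id));
  return {
    newRobots: current.filter((r) => !prevIds.has(r.id)),     // appeared this round
    killedRobots: previous.filter((r) => !currIds.has(r.id)), // gone this round
  };
}
```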
Data Persistence and Usage:
- Robot Data: The raw data and derived information are persisted and utilized by various services for further analysis and functionality.
- Planet Data: While planet data is also stored and used for the live map, it does not require the same level of detailed comparison and analysis as robot data.
How Does the Custom Player Feature Work?
The custom player feature allows players to run as Docker containers on the local machine. Here’s how it works:
-
Player Data Creation:
- Create data for the player, including the name, email, and Docker image to be used.
-
Configuration File Creation:
- Generate a configuration file for each custom player, stored in JSON format.
- This file includes essential environment variables such as player email and player name, which must be unique across all players, and other user-defined configuration variables.
- The Dashboard automatically creates (if not specified) and updates the configuration file.
-
Container Creation and Launch:
- The internal ‘Docker-Api-Service’ sends an HTTP request to the external ‘MSD-Dashboard-Docker-Api’.
- The API uses the provided information (container name, image name, port, and configuration file) to create and start the container.
- The variables in the configuration file are set as environment variables of the container
- The API utilizes the Node.js library ‘Dockerode’ to interface with the Docker engine and manage the container lifecycle.
- The ‘MSD-Dashboard-Docker-API’ provides feedback on the success of the container creation and start-up process.
Similarly, the MSD-Dashboard-Docker-Api provides endpoints to stop and delete containers. At the end of each game, all containers are stopped and deleted.
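The container creation step could look roughly as follows. The builder function is a hypothetical sketch; the option field names follow the Docker Engine API conventions that Dockerode's createContainer accepts, and the real MSD-Dashboard-Docker-API may assemble them differently.

```typescript
// Hypothetical sketch of how the MSD-Dashboard-Docker-API could assemble
// container options from the request data (name, image, port, config file).
function buildContainerOptions(
  containerName: string,
  image: string,
  hostPort: number,
  config: Record<string, string | number>,
) {
  return {
    name: containerName,
    Image: image,
    // Variables from the configuration file become container env vars.
    Env: Object.entries(config).map(([key, value]) => `${key}=${value}`),
    HostConfig: {
      // Map the container port to the same port on the host (port:port).
      PortBindings: { [`${hostPort}/tcp`]: [{ HostPort: String(hostPort) }] },
    },
  };
}
// The API would then pass this object to Dockerode's
// docker.createContainer(...) and start the resulting container.
```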
Setup Guide - For Developers
Welcome to the setup guide for developers. This will walk you through the steps required to clone the repository and get the Dashboard running on your machine.
Prerequisites
Before you begin, ensure that you have the following installed on your system:
Important: It is crucial to have the local development environment, including the dashboard-backend, the dashboard-docker-api and the gamelog up and running for the Dashboard to function correctly. Please follow the steps provided in their respective links to set up these components before proceeding.
Local Setup
Step 1: Prepare the Directory
First, you need to create or navigate to the directory where you want to clone the repository. Open your terminal or command prompt and use the cd command to navigate to your desired directory.
Step 2: Clone the Repository
Run the following command in your terminal to clone the repository:
git clone https://github.com/MaikRoth/msd-dashboard.git
This will create a copy of the repository in your current directory.
Step 3: Navigate to the Repository Folder
Once the repository is cloned, navigate into the repository folder by running:
cd msd-dashboard
Replace msd-dashboard with the correct folder name if it's different.
Step 4: Install Dependencies
In the repository folder, run the following command to install all the necessary dependencies:
npm install
This command will download and install all the required Node.js packages.
Step 5: Run the Application
Finally, to start the Dashboard, run:
ng serve
This will start the Angular development server and the Dashboard should be accessible at http://localhost:4200.
Docker Container Setup
Step 1: Clone the Repository
Follow the same steps as in the local setup to clone the repository.
Step 2: Navigate to the Repository Folder
cd msd-dashboard
Step 3: Docker Container Setup
In Powershell, set up the Docker container by running:
docker-compose up
This command will create and start the necessary Docker containers.
Usage
After completing the installation, you can access the Dashboard by navigating to http://localhost:4200 in your web browser.
Troubleshooting
If you encounter any issues during the setup, make sure all prerequisites are correctly installed and that you’re following the steps in the correct order.
Player-Guide: Getting Started
If you use the local development environment, the dashboard should be available at localhost:4200. It will navigate you to the ‘Control Panel’ tab. Here, you can:
- Create a game
- Customize it
- Add players
- Start the game
After starting a game, you will be automatically navigated to the map. It takes a few rounds (usually until round 3) to finish loading. From there, you can start exploring the application and manually stop the game if needed. The data seen in the match statistics tab is available even after stopping the game, but it will be deleted when you create a new game.
Player-Guide: How to Play Against Your Own Player
The Dashboard allows you to compete against your own player or other custom players. Here’s how it works:
- Creates a Docker container from the Docker image of the player on your local machine.
- Overrides all important variables (e.g., player name, player email, game service URL).
- The player runs in the Docker container and joins the game automatically.
- You can add more than one instance of a specific player to your game.
Requirements
To play against your own player, your player needs to fulfill certain requirements.
1. Docker Image
You must provide the Docker image of your player. You can do this by either:
- Adding it to our microservice-dungeon registry: registry.gitlab.com/the-microservice-dungeon/devops-team/msd-image-registry
- Having the image on your local machine or any other registry.
2. Environment Variables
Your player must read and set certain variables from environment variables. This is important because the dashboard needs to change the values of certain variables to start the player correctly as a container. The following environment/system variables need to be implemented in your player with the exact same names:
PLAYER_NAME
PLAYER_EMAIL
GAME_HOST
RABBITMQ_HOST
Important: Please make sure to name these exactly as written here.
Other not required, but potentially necessary variables in some cases:
RABBITMQ_USERNAME
RABBITMQ_PASSWORD
RABBITMQ_PORT
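A Node.js-based player could read the required variables roughly as follows. The helper name and the fallback defaults are assumptions for local runs without the Dashboard; only the environment variable names come from the list above.

```typescript
// Sketch: read the variables the Dashboard sets when starting the player
// as a container. Fallback values are assumptions for local development.
function readPlayerConfig(env: Record<string, string | undefined>) {
  return {
    playerName: env.PLAYER_NAME ?? "player-local",
    playerEmail: env.PLAYER_EMAIL ?? "player@example.com",
    gameHost: env.GAME_HOST ?? "http://localhost:8080",
    rabbitmqHost: env.RABBITMQ_HOST ?? "localhost",
  };
}
// In a real player: const config = readPlayerConfig(process.env);
```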
Adding Your Player to the Game
After fulfilling the requirements, visit the dashboard interface at localhost:4200 and start a game via the interface. The following steps explain how to add players to the game.
- Open Menu:
- Click the ‘Add Custom Player’ button.
- Click the ‘Select Own Player’ button. A menu will open where you must enter the details of the player you want to add.


- Enter Image Registry:
- Insert the registry of your image if it is in one.
- The default input is the microservice-dungeon registry. If your player is registered there, you don’t need to change anything in this line.
- If the Docker image of your player is on your local machine, leave the input field empty.

- Enter Image Name:
- Insert the name of your Docker image. If the image is in the microservice-dungeon registry, the name is usually something like player-hackschnitzel.

- Enter Image Tag:
- Insert the tag of the Docker image. The default input is latest, so you can leave it as is unless you want the image with a specific tag.

- Provide Port:
- Provide a port to map the container port to the same port on the host machine (port:port).
- Leaving the field empty or set to 0 will result in a random port assignment (this should be fixed in the future in the Docker API to avoid port assignment when no value is provided).

Adding a Configuration to Your Player
After entering the details of your player image, the Dashboard will ask if you want to add configurations. This allows you to pass additional environment/system variables to your player for further customization. For example, you could have an environment variable named ‘STRATEGY’ to change the strategy of your player based on the given input. This allows you to start your player with different strategies. If you don’t have any configurations to add, just press ‘No, continue without’.
If you decide to add a configuration, a file picker will open. The file you select must be a text file with a single JSON object in it. The file name is not important. It could look like this:
{
"STRATEGY": "aggressive",
"port": 43553,
"MAX_NUMBER_ROBOTS": 100
}
Playing Against Standard Players
For this feature, you don’t need any special requirements. You can simply add one or more of the standard players to your game. Just press the ‘Add Custom Player’ button and then click on their name. Standard players cannot be configured.
Important: It might take some time to pull the Docker images for the first time.
Further Instructions for Use
- Dashboard Usage: Ensure that the Dashboard remains in the foreground at all times. Switching browser tabs or using other applications may disrupt regular data fetching, leading to incomplete game data on the Dashboard (hopefully this can be fixed in the future).
- Game Spectating: When spectating a game, start observing from the beginning (in case you ever intend to start a game through other sources than the dashboard). This ensures accurate data calculations, especially for metrics like player ‘balance’, which rely on complete game data.
FAQ
How do I play against my own player?
Why does the Dashboard show different values than those logged in my player?
- The Dashboard retrieves and calculates game data by fetching it from an API backend, which provides the current state of all robots and planets. The Dashboard continuously fetches this data, manually assigns round numbers, and calculates changes between rounds. Occasionally, specific information may be lost or assigned to incorrect round numbers, leading to discrepancies.
When creating a game with ‘previous’ settings, will the custom players retain the old configuration, or do I need to provide a new configuration file?
- Custom players will retain the exact configuration provided in the last game. You do not need to provide a new configuration file unless you intend to make changes. Currently, there is no way to see whether a configuration file was provided.
How to Report Bugs
The preferred method for reporting bugs is to create an issue on GitLab with a detailed description of the problem.
If you encounter any difficulties, you can also message me directly via Discord: bronzescrub or use the appropriate Discord channels on the ArchiLab Discord server.
Authors
9 - Functional Trading Service Implementation
10 - Improved Player Dev Env
What did we do?
- Wrote a development guide for new players (should minikube get involved?)
- Overview of events: when and which events are thrown (new images in static)
- Videos of the player, not too long
- Error and question page with possible solutions
What we need to do!
- Finish the requirements for a development environment for a microservice architecture
- Topic “technical means”? Unsure about the specifications
- Another video on player internals and some features of an existing player
- (Maybe a Markdown checkup)
Problems or questions
- Technical means: we might have problems implementing those (kubefwd, kubeVPN). What exactly is expected of us?
- Does GitLab block the linked videos? Do they work in the documentation?
11 - Libraries for Player Health Checks
Libraries for Player Health Checks
Monitoring the health of a player, especially in distributed systems, is a critical task. The current challenge lies in detecting when a player has fallen “behind,” i.e., when it fails to process game events and calculate commands in real-time. If not monitored, the player can enter a “delusional state,” where it incorrectly assumes itself to be operating in past rounds, thus sending inaccurate or outdated commands. These erroneous commands might succeed “by chance,” but their long-term impact can destabilize the game or cause malfunctions in its logic.
To mitigate this, we are implementing a library in each of the main player languages—Java, Kotlin, TypeScript, Rust, and Python—to facilitate health monitoring. Each library will provide a set of tools and endpoints to monitor and query the player’s state, ensuring timely detection of discrepancies between the player’s internal state and the current game state.
Key Objectives
- Detecting Delays and Missing Events:
- The system will automatically check whether a player has processed all game events up to the current round. If the player is lagging behind in event processing, it will be flagged as “behind.”
- Preventing Erroneous Command Execution:
- By analyzing the player’s internal state and comparing it with the game’s current round, the system can determine if the commands the player intends to send are valid or likely to cause errors due to being outdated.
- Health Check Endpoints:
- Each player will expose health check endpoints within the distributed system, allowing external services to query its status. These endpoints will return whether the player is behind and provide additional diagnostic information if necessary.
Current Implementation in Java
We have updated the implementation of the health check mechanism in the Java player to include the following features:
- Actuator-Based Health Check for Liveness and Readiness: The player now uses the Spring Boot Actuator `/actuator/health` endpoint for both Kubernetes Liveness and Readiness probes. This ensures that Kubernetes can monitor the basic health status of the player, restarting it if necessary.
- Scheduled Round Check Mechanism: Using Spring Boot’s scheduler (`@Scheduled`), the player automatically fires a `/roundCheck` request at regular intervals to determine if the player is “behind” the current game round. This check compares the player’s last processed round with the current round retrieved from the game server. If the player is behind, a warning is logged for further analysis.
- Round Comparison Logic: The health check logic compares the current game round (retrieved from the game server) with the player’s last processed round. If the player’s round is less than the current game round, it is considered “behind.” This information is logged for further analysis.
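The round-comparison rule can be sketched in plain Java; the class and method names below are illustrative, not the actual player code:

```java
// Illustrative sketch of the round-comparison logic: a player is "behind"
// when the game has advanced past its last processed round.
class RoundCheck {
    private volatile long lastProcessedRound;

    // called whenever the player finishes processing a round's events
    void onRoundProcessed(long round) {
        lastProcessedRound = round;
    }

    // compared against the current round retrieved from the game server
    boolean isBehind(long currentGameRound) {
        return lastProcessedRound < currentGameRound;
    }
}
```

In the scheduled check described above, this comparison runs at regular intervals and a warning is logged whenever it reports the player as behind.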
Logging:
The system utilizes logging to track when a player falls behind and when discrepancies between the player state and the game state are detected. This information can be critical for debugging and ensuring the overall stability of the game environment.
Future Work
In addition to the Java implementation, similar libraries will be developed for the other player languages—Kotlin, TypeScript, Rust, and Python. Each library will follow the same fundamental design principles, ensuring consistency across the different implementations, while being adapted to the unique features and idioms of each language.
Additional Resources
Last Update: 2024-07-30
12 - Map WYSIWYG Editor
Hint
The following part of the documentation is embedded from the README of the map-editor repository. If you experience any issues on this page, just visit the repository directly.
map-editor
This template should help get you started developing with Vue 3 in Vite.
Recommended IDE Setup
- VSCode
- Volar as a Vue plugin (and disable Vetur)
Type Support for `.vue` Imports in TS
TypeScript cannot handle type information for `.vue` imports by default, so we replace the `tsc` CLI with `vue-tsc` for type checking. In editors, we need the TypeScript Vue Plugin (Volar) to make the TypeScript language service aware of `.vue` types.
If the standalone TypeScript plugin doesn’t feel fast enough to you, Volar has also implemented a Take Over Mode that is more performant. You can enable it by the following steps:
- Disable the built-in TypeScript Extension:
  - Run `Extensions: Show Built-in Extensions` from VSCode’s command palette
  - Find `TypeScript and JavaScript Language Features`, right click and select `Disable (Workspace)`
- Reload the VSCode window by running `Developer: Reload Window` from the command palette.
Customize configuration
See Vite Configuration Reference.
Project Setup
npm install
Compile and Hot-Reload for Development
npm run dev
Type-Check, Compile and Minify for Production
npm run build
Run Unit Tests with Vitest
npm run test:unit
Run End-to-End Tests with Nightwatch
# When using CI, the project must be built first.
npm run build
# Runs the end-to-end tests
npm run test:e2e
# Runs the tests only on Chrome
npm run test:e2e -- --env chrome
# Runs the tests of a specific file
npm run test:e2e -- tests/e2e/example.ts
# Runs the tests in debug mode
npm run test:e2e -- --debug
Run Headed Component Tests with Nightwatch Component Testing
npm run test:unit
npm run test:unit -- --headless # for headless testing
Lint with ESLint
npm run lint
13 - Peer to Peer Communication
What happened so far?
An opinion poll was created with Google Forms. While learning the MSD and testing on the local dev environment, I fixed the map-not-showing bug in the local_dev_environment.
Opinion poll results
TODO
14 - Pluggable Strategy and Typical Strategy Patterns
What happened so far?
- Researched how best to develop a pluggable strategy in java
- Checked out how strategies are realized in player M.O.N.T.E.
What will happen next?
- Implement feature to alter the strategy of player M.O.N.T.E. based on given parameters
- Add a guide on how strategies could be implemented for future players
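As a sketch of what a parameter-driven pluggable strategy could look like in Java (all names here are hypothetical and not taken from player M.O.N.T.E.):

```java
import java.util.Map;

// Hypothetical sketch: the concrete strategy is selected from a parameter
// (e.g. an environment variable) instead of being hard-coded.
interface Strategy {
    String nextCommand();
}

class AggressiveStrategy implements Strategy {
    public String nextCommand() { return "fight"; }
}

class EconomicStrategy implements Strategy {
    public String nextCommand() { return "mine"; }
}

class StrategyFactory {
    private static final Map<String, Strategy> STRATEGIES = Map.of(
            "aggressive", new AggressiveStrategy(),
            "economic", new EconomicStrategy());

    // unknown or missing names fall back to the economic strategy
    static Strategy forName(String name) {
        if (name == null) return STRATEGIES.get("economic");
        return STRATEGIES.getOrDefault(name, STRATEGIES.get("economic"));
    }
}
```

Because the strategy is resolved by name, a player can be started with different behaviors purely through its configuration.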
15 - Praxisprojekt Real Time MSD
Table of Contents
Architecture
Events
Changed Event
RobotsRevealedEvent: Reveals all robots on the planet to the player.
- contains the robotRevealed data of all robots on the planet
- is sent when a robot spawns or moves
- is only sent to the player of the robot who triggered the event
New Event
ActivityDetectedEvent: Informs players with robots on a planet when activity was executed by another robot.
- contains the open data (robotRevealed) of a single robot (for more information check out the Asyncapi)
- is sent after the following robot actions: `move`, `fight`, `regenerate`, `upgrade`, `restoration`
- is sent to every player that has a robot standing on the planet where the activity happened. For `move`, that means players from both the source and target planet.
Action Duration
In order to keep a balanced game experience we introduce action times. Those times determine how long it takes until the action is completed.
Action | Time |
---|---|
Fighting | 3 sec |
Mining | 3 sec |
Movement | 3 sec |
Regenerate | 3 sec |
Upgrade | 3 sec |
These timings are temporary and subject to change in order to improve the game balance.
Robot Locking
A robot can only execute one command at a time. This is ensured by the robot command locker. It uses a local Redis server with a distributed lock pattern to save (lock) the IDs of robots that are currently performing an action. When the action duration is over, the robot ID is released (unlocked) again. In case of errors, robot IDs are automatically released after a certain time to avoid deadlocks.
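An in-memory sketch of the lease-based locking semantics (the real service uses Redis with a distributed lock pattern; class and method names here are illustrative only):

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative in-memory version of the robot command locker: a robot ID is
// locked for the duration of its action, and the lease expires automatically
// so a processing error cannot produce a permanent deadlock.
class RobotCommandLocker {
    private final Map<UUID, Long> lockedUntil = new ConcurrentHashMap<>();
    private final long leaseMillis;

    RobotCommandLocker(long leaseMillis) {
        this.leaseMillis = leaseMillis;
    }

    // returns false while a non-expired lease for this robot exists
    boolean tryLock(UUID robotId) {
        long now = System.currentTimeMillis();
        boolean[] acquired = {false};
        lockedUntil.compute(robotId, (id, until) -> {
            if (until != null && until > now) {
                return until;            // an action is still running
            }
            acquired[0] = true;          // free or expired: take the lock
            return now + leaseMillis;
        });
        return acquired[0];
    }

    // release the robot once its action duration is over
    void unlock(UUID robotId) {
        lockedUntil.remove(robotId);
    }
}
```

The atomic `compute` stands in for the atomicity a Redis `SET NX PX`-style lock provides in the distributed case.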
Robot Upgrades
New Upgrades
- Movement Speed: reduces travel time between planets
- Regeneration Speed: reduces regeneration time
- Mining Efficiency: replaces the old mining speed upgrade
- Attack Speed: reduces attack time
Upgrade Changes
- Mining Speed: reduces mining time
Upgrade Values
Upgrade | Effect | Level 0 | Level 1 | Level 2 | Level 3 | Level 4 | Level 5 |
---|---|---|---|---|---|---|---|
Attack Speed | Percentage of action time | 100 % | 80 % | 65 % | 50 % | 35 % | 20 % |
Mining Speed | Percentage of action time | 100 % | 80 % | 65 % | 50 % | 35 % | 20 % |
Movement Speed | Percentage of action time | 100 % | 80 % | 65 % | 50 % | 35 % | 20 % |
Regeneration Speed | Percentage of action time | 100 % | 80 % | 65 % | 50 % | 35 % | 20 % |
Mining Efficiency | Mining Amount | 2 | 5 | 10 | 15 | 20 | 40 |
Note: While the per-level improvement of the speed upgrades is on paper not as high as that of the existing upgrades, it allows for more flexibility and better reaction times.
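Putting the action durations and the speed-upgrade percentages together, the effective action time is simply base time × level factor. A small illustrative helper (not part of the actual services) using the values from the table above:

```java
// Effective action duration = base action time × speed-upgrade factor.
// Factors correspond to levels 0-5 in the upgrade table:
// 100 %, 80 %, 65 %, 50 %, 35 %, 20 %.
class ActionTiming {
    private static final double[] SPEED_FACTOR = {1.00, 0.80, 0.65, 0.50, 0.35, 0.20};

    static long effectiveMillis(long baseMillis, int upgradeLevel) {
        return Math.round(baseMillis * SPEED_FACTOR[upgradeLevel]);
    }
}
```

With the 3-second base time, a level-1 speed upgrade yields 2.4 s and a level-5 upgrade yields 0.6 s.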
MSD Dashboard
The control panel, the map with the sidebar, and the scoreboard are functional. The map is updated every 0.5 seconds. The match statistics do not yet work with the Real Time MSD.
Player Monte can be deployed to the game through the dashboard. The Dashboard-Docker-Api, including the necessary changes, has been added as new infrastructure to the local-dev-env to allow this.
MONTE and Skeletons
Player MONTE and the Java Skeleton have been modified to be compatible with the Real Time MSD.
TODO
The following things still need implementation or some changes:
- PriceChangedEvent (needs trading economics)
- Change other skeletons to work in the realtime msd
- Tests are outdated and not working
- Add the new capabilities in msd-dashboard-backend
- Websockets in dashboard
- Kubernetes
How to use the Local-Dev-Environment of the Real Time MSD
Clone the `local_dev_env` Repository
Clone the DevOps-Team/local-dev-environment GitLab repository to your local machine and switch to the branch `PP-RealTime-MSD-Robin`.
# HTTP
git clone https://gitlab.com/the-microservice-dungeon/devops-team/local-dev-environment.git
# SSH
git clone git@gitlab.com:the-microservice-dungeon/devops-team/local-dev-environment.git
# Switch Branch (if branch does not exist locally yet)
git checkout -b PP-RealTime-MSD-Robin origin/PP-RealTime-MSD-Robin
Next Steps
Follow the README of the local-dev-environment repository (branch: PP-RealTime-MSD-Robin).
API Reference
Game Service
Robot Service
Trading Service
The API Reference lists more services, which have not been altered for the Real Time MSD.
16 - Real Time MSD
Table of Contents
Overview
Before going into detail about what changes have to be made, we want to emphasize the architectural difference we gain by removing rounds from the game.
Old Architecture
In this architecture the player communicates directly only with the game service, which controls the rounds and forwards the commands to the different services.
New Architecture
With this new architecture we remove the game service as a gateway for commands, which reduces the overall responsibility of the service: it now only performs general game management, e.g. creating, starting, and ending games. In-game commands are now sent directly to the corresponding service: all trading-related commands to the trading-service and everything else to the robot-service. The robot-service assumes the responsibility of ensuring that robots can only perform one action at a time.
Planned Changes
Event Changes
Service | Old Event | New Event | Change |
---|---|---|---|
Game | RoundStatusEvent | | Removed |
Game | GameStatusEvent | | Game Timer ends the game |
Trading | TradeblePricesEvent | | One-time full list of all prices at game start |
Trading | | PriceChangedEvent | Only contains price changes |
Robot | RobotsRevealedEvent | | Only contains robots on the same planet |
Robot | | ActivityDetectedEvent | Informs players with robots on a planet when activity was executed |
Robot | RobotAttackedEvent | | Interrupts the target’s action |
Command Responsibility
With the removal of the rounds we don’t need to keep the game service as a command controller. The command responsibilities are now as follows:
Service | Commands |
---|---|
Robot | Battle, Mining, Movement, Regenerate |
Trading | Buying, Selling |
Action Duration
In order to keep a balanced game experience we introduce action times. Those times determine how long it takes until the action is completed.
Action | Time |
---|---|
Fighting | 3 sec |
Mining | 3 sec |
Movement | 3 sec |
Regenerate | 3 sec |
Upgrade | 3 sec |
These timings are temporary and subject to change in order to improve the game balance.
Robot Upgrades
Now that actions are no longer round-bound and have a completion time, we can add new robot upgrades and change how some upgrades work to be more logically correct.
New Upgrades
- Movement Speed: reduces travel time between planets
- Regeneration Speed: reduces regeneration time
- Mining Efficiency: replaces the old mining speed upgrade
- Attack Speed: reduces attack time
Upgrade Changes
- Mining Speed: reduces mining time
Upgrade Values
Upgrade | Effect | Level 0 | Level 1 | Level 2 | Level 3 | Level 4 | Level 5 |
---|---|---|---|---|---|---|---|
Attack Speed | Percentage of action time | 100 % | 50 % | 33 % | 25 % | 20 % | 16 % |
Mining Speed | Percentage of action time | 100 % | 50 % | 33 % | 25 % | 20 % | 16 % |
Movement Speed | Percentage of action time | 100 % | 50 % | 33 % | 25 % | 20 % | 16 % |
Regeneration Speed | Percentage of action time | 100 % | 50 % | 33 % | 25 % | 20 % | 16 % |
Mining Efficiency | Mining Amount | 2 | 5 | 10 | 15 | 20 | 40 |
Dashboard
The dashboard needs a small change: instead of displaying the current round, it now has to display the current game time.
Docs
The OpenAPI and AsyncAPI specifications need to change to reflect the new command responsibilities.
Checklist
- Removed Rounds
- Change Command Responsibility
- Robot Locking
- Action Times
- Robot Upgrades
- Updated docs (open/async api):
- game
- Tests
TODO
The following things still need implementation or some changes:
- automatic unlocking of a robot after a certain time has passed (in case of processing errors)
- attacking a robot will interrupt the target’s action
- event for interrupted actions
- ActivityDetectedEvent
- PriceChangedEvent (needs trading economics)
- fixing the dashboard (needs changes in dashboard-backend (rust))
- change the skeletons to work in the realtime msd
- update remaining docs (open and async api)
17 - Reinforcement Learning
This README serves as an entry point to this project. I recommend reading it for a broad overview and then reading the READMEs of the individual components to get a better understanding of what they do.
18 - Test Microservice Framework
(tbd)
19 - Test Microservice Usage
(tbd)