Horizontal Scaling

Horizontal scaling can be achieved by balancing the load across multiple instances of each MPC node. This improves throughput and enables failover. We support two different types of horizontal scaling of the TSM.

Multiple TSMs

In this setup you deploy several identical TSMs, each consisting of one set of MPC nodes. All instances of MPC Node 1 (one in each TSM) are configured to share the same database. Similarly, all instances of MPC Node 2 share the same database, and so forth. With this setup, an SDK can use any of the TSMs.

As an example, the following figure shows a setup with two TSMs (TSM A and TSM B), each consisting of two MPC nodes. A number of SDKs are configured so that some of them connect to TSM A while the others connect to TSM B.

This is a simple way of improving throughput and/or providing failover. The application can spin up new TSMs if needed, and the SDKs can be dynamically configured to use a random TSM, the TSM with the most free resources, etc.
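To illustrate, here is a minimal Go sketch of such a selection strategy. The TSM endpoint addresses and the pickTSM helper are hypothetical, and how the chosen endpoints are passed on depends on how your SDKs are configured:

package main

import (
    "fmt"
    "math/rand"
)

// Each TSM is identified by the endpoints of its MPC nodes.
// All addresses below are hypothetical examples.
var tsms = [][]string{
    {"https://node1.tsm-a.example.com", "https://node2.tsm-a.example.com"}, // TSM A
    {"https://node1.tsm-b.example.com", "https://node2.tsm-b.example.com"}, // TSM B
}

// pickTSM selects a TSM uniformly at random; a more elaborate strategy
// could instead pick the TSM with the most free resources.
func pickTSM() []string {
    return tsms[rand.Intn(len(tsms))]
}

func main() {
    endpoints := pickTSM()
    fmt.Println("configuring SDK with node endpoints:", endpoints)
}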

In this setup, however, it is up to your application to decide which TSM a given SDK should use. To increase performance, you have to add a complete TSM to the setup, i.e., you cannot add just one MPC node at a time. This may be difficult to coordinate if the MPC nodes are controlled by different organizations.

One TSM with Multiple Node Instances

Alternatively, you can run multiple instances of a single MPC node behind a standard (round-robin) load balancer. Again, all instances of a given MPC node share the same database. In this setup, all instances of a given MPC node must also be configured to route sessions to each other (see the configuration example below).
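As an illustration of the load balancer in front of a node's instances, the following sketch implements a minimal round-robin reverse proxy using only the Go standard library. The instance addresses and ports are hypothetical, and in practice you would typically use an off-the-shelf load balancer rather than writing your own:

package main

import (
    "log"
    "net/http"
    "net/http/httputil"
    "net/url"
    "sync/atomic"
)

func main() {
    // Hypothetical internal addresses of two instances of the same MPC node.
    targets := []string{"http://10.0.0.11:8000", "http://10.0.0.12:8000"}
    proxies := make([]*httputil.ReverseProxy, len(targets))
    for i, t := range targets {
        u, err := url.Parse(t)
        if err != nil {
            log.Fatal(err)
        }
        proxies[i] = httputil.NewSingleHostReverseProxy(u)
    }
    var next uint64
    // Hand each incoming request to the next instance in turn.
    handler := func(w http.ResponseWriter, r *http.Request) {
        i := atomic.AddUint64(&next, 1) % uint64(len(proxies))
        proxies[i].ServeHTTP(w, r)
    }
    log.Fatal(http.ListenAndServe(":8000", http.HandlerFunc(handler)))
}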

The benefit of this configuration is that from the outside, i.e., seen from the SDK, there is only a single TSM. Also, the number of instances can be scaled up or down at each MPC node independently of how the other MPC nodes are scaled.

The following example shows a setup with a TSM consisting of three MPC nodes. Node 1 is configured with two node instances, Node 2 with three node instances, and Node 3 runs in the standard configuration with a single node instance.

In the previous section, the MPC node instances used the standard configuration, except that all instances of each node connected to the same database. Here, however, to run more than one instance of a node, each instance must include the following section in its configuration file:

# This setting enables multiple instances of the same player to be placed behind a load balancer.
# Each instance will either handle sessions itself or route the traffic to other instances.
[MultiInstance]
# IP address where this instance can be reached from the other instances. If not specified, an
  # auto-detected address is used, which might not be the address you want if there are multiple
  # IP addresses associated with the system.
  Address = ""
  # Port number the multi instance server listens on and announces to other nodes. This port MUST
  # ONLY be accessible by other nodes representing the same player.
  Port = 7000
# How often to run the cleanup job that purges old routing entries from the database.
  CleanupInterval = "5m"
# Every CleanupInterval, the cleanup job runs with this probability. 0 means never and 100
  # means always.
  CleanupProbability = 25
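
As a concrete (and purely hypothetical) example, two instances of the same MPC node, reachable internally at 10.0.0.11 and 10.0.0.12, could each use the following settings in their respective configuration files:

# Instance 1 (hypothetical internal address)
[MultiInstance]
  Address = "10.0.0.11"
  Port = 7000
  CleanupInterval = "5m"
  CleanupProbability = 25

# Instance 2 (hypothetical internal address)
[MultiInstance]
  Address = "10.0.0.12"
  Port = 7000
  CleanupInterval = "5m"
  CleanupProbability = 25

Since the routing entries are kept in the node's shared database (see the cleanup settings above), the instances representing the same player can find each other and route sessions accordingly. Remember that the announced port must only be reachable by those instances.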