Load tests - Backend

INFO

Load tests are part of the Backend repository; their only task is to verify whether the Backend can handle a given number of requests per second.

MQTT testing

Run command:

```bash
python -m tests.load.load_test_mqtt
```

This creates synthetic load in the form of Unit instances that send MQTT messages. Additional Backend ENV variables let you configure the load using the following parameters:

| Backend variable | What it does |
| --- | --- |
| `PU_TEST_LOAD_MQTT_DURATION` | Test duration in seconds |
| `PU_TEST_LOAD_MQTT_UNIT_COUNT` | Number of Unit instances that will send requests |
| `PU_TEST_LOAD_MQTT_RPS` | Load (RPS) that each Unit will generate |
| `PU_TEST_LOAD_MQTT_VALUE_TYPE` | Type of sent variables: Text or Number |
| `PU_TEST_LOAD_MQTT_DUPLICATE_COUNT` | Number of consecutive duplicate messages |
| `PU_TEST_LOAD_MQTT_MESSAGE_SIZE` | Size of MQTT messages in characters |
| `PU_TEST_LOAD_MQTT_POLICY_TYPE` | Policy type for processing all messages in the test: LastValue, NRecords, TimeWindow, Aggregation |
| `PU_TEST_LOAD_MQTT_WORKERS` | Number of multiprocessing worker processes creating the load |
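As an illustration, the variables above could be combined like this before launching the test. The specific values are arbitrary examples, not recommended defaults:

```shell
# Hypothetical example configuration for the MQTT load test.
export PU_TEST_LOAD_MQTT_DURATION=60        # run for 60 seconds
export PU_TEST_LOAD_MQTT_UNIT_COUNT=100    # 100 synthetic Units
export PU_TEST_LOAD_MQTT_RPS=10            # each Unit sends 10 messages per second
export PU_TEST_LOAD_MQTT_VALUE_TYPE=Number
export PU_TEST_LOAD_MQTT_POLICY_TYPE=LastValue
export PU_TEST_LOAD_MQTT_WORKERS=4
# Expected total load: UNIT_COUNT * RPS = 1000 messages per second.
```

With these variables exported, run `python -m tests.load.load_test_mqtt` in the same shell.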

DANGER

The Backend always uses only 1 Gunicorn worker to process MQTT messages. It can handle ~4000 RPS for system topics (`domain.com/+/+/+/pepeunit`). For DataPipe topics with the pattern `domain.com/+/pepeunit`, throughput can reach ~25000 RPS.

REST and GQL testing

Run command:

```bash
locust -f tests/load/locustfile.py
```

This creates synthetic load in the form of GQL and REST requests to the most heavily loaded endpoints. Additional Backend ENV variables let you configure the load using the following parameters:

| Backend variable | What it does |
| --- | --- |
| `LOCUST_HEADLESS` | Run Locust in headless (CLI) mode |
| `LOCUST_USERS` | Number of Users that will generate load |
| `LOCUST_SPAWN_RATE` | Number of Users spawned per second; e.g. ramping from 0 to 400 Users at a spawn rate of 10 takes ~40 seconds |
| `LOCUST_RUN_TIME` | Test duration in seconds |
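A minimal sketch of a headless run ramping to 400 Users, assuming the standard Locust environment variables shown above (the values are illustrative, not defaults):

```shell
# Hypothetical example configuration for a headless Locust run.
export LOCUST_HEADLESS=true
export LOCUST_USERS=400       # target number of concurrent Users
export LOCUST_SPAWN_RATE=10   # +10 Users per second -> ~40 s ramp-up
export LOCUST_RUN_TIME=120s   # stop the test after 2 minutes
```

With these variables exported, `locust -f tests/load/locustfile.py` picks them up without any extra CLI flags.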

WARNING

Each client waits 1 second between requests, therefore RPS ≈ the number of Users.
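With a constant 1-second wait per client, the expected throughput is a simple back-of-the-envelope calculation:

```shell
# Each User sends one request, then waits 1 second, so RPS ≈ users / wait.
users=400
wait_seconds=1
echo $((users / wait_seconds))   # prints 400
```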

DANGER

REST and GQL requests are processed in multiple threads, so increasing the number of workers increases throughput almost linearly. 4 Gunicorn workers can handle ~400 RPS without restarts. Percentiles in milliseconds:

```text
Type     Name                             50%    66%    75%    80%    90%    95%    98%    99%  99.9% 99.99%   100% # reqs
--------|---------------------------|--------|------|------|------|------|------|------|------|------|------|------|------
GET      /pepeunit                          6      8     10     11     19     31     54     73    180    230    240  13227
GET      /pepeunit/api/v1/metrics/         14     18     21     24     33     45     64     82    180    230    270  13126
POST     /pepeunit/graphql                 10     12     14     16     24     38     60     72    150    270    290  13257
--------|---------------------------|--------|------|------|------|------|------|------|------|------|------|------|------
         Aggregated                        10     13     16     18     27     39     60     75    180    270    290  39610
```