
Locust.io: Controlling the request per second parameter

I have been trying to load test my API server using Locust.io on EC2 compute-optimized instances. It provides an easy-to-configure option for setting the wait time between consecutive requests and the number of concurrent users. In theory, rps = #_users / wait_time. However, while testing, this rule breaks down beyond a fairly low #_users threshold (in my experiment, around 1200 users). The variables hatch_rate and #_of_slaves, even in a distributed test setting, had little to no effect on the RPS.
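For concreteness, a back-of-the-envelope sketch of the expected relationship, using the numbers from the setup described below:

```python
# Expected throughput under the rps = #_users / wait_time model.
# 450 ms is the mean wait between consecutive requests (see setup).
users = 1200
mean_wait_s = 0.45

expected_rps = users / mean_wait_s
print(f"expected: {expected_rps:.0f} requests/sec")  # ~2667
```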

Experiment info

The test was done on a c3.4xlarge AWS EC2 compute node (AMI image) with 16 vCPUs, General Purpose SSD storage and 30 GB RAM. During the test, CPU utilization peaked at 60% (depending on the hatch rate, which controls the number of concurrent processes spawned), staying under 30% on average.

Locust.io

Setup: uses pyzmq, with each vCPU core set up as a slave. A single POST request, with a request body of ~20 bytes and a response body of ~25 bytes. Request failure rate: < 1%, with a mean response time of 6 ms.

Variables: time between consecutive requests set to 450 ms (min: 100 ms, max: 1000 ms), hatch rate at a comfortable 30 per second, and RPS measured by varying #_users.
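A minimal locustfile sketch of this setup, written against the Locust API of that era (0.7.x); the endpoint path and payload are placeholders, not the actual test code:

```python
from locust import HttpLocust, TaskSet, task

class ApiTasks(TaskSet):
    @task
    def post_payload(self):
        # Placeholder endpoint with a ~20-byte request body.
        self.client.post("/api/endpoint", data="x" * 20)

class ApiUser(HttpLocust):
    task_set = ApiTasks
    min_wait = 100   # ms between consecutive requests
    max_wait = 1000
```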

Locust.io throughput graph

The RPS follows the equation as predicted for up to 1000 users. Increasing #_users beyond that has diminishing returns, with a cap reached at roughly 1200 users. #_users isn't the only independent variable here; changing the wait time affects the RPS as well. However, moving the experiment to a 32-core instance (c3.8xlarge) or to 56 cores (in a distributed setup) doesn't affect the RPS at all.

So really, what is the way to control the RPS? Is there something obvious I am missing here?

asked Dec 30 '14 by sidi


People also ask

How do you control requests per second in Locust?

Locust is centered around running a certain number of users, with those users being fairly independent from each other. You can control the throughput of each user using wait_time = constant_throughput(x) (https://docs.locust.io/en/stable/writing-a-locustfile.html#wait-time-attribute).
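A minimal sketch of that approach using the current Locust API; the endpoint and payload are placeholders:

```python
from locust import HttpUser, task, constant_throughput

class ApiUser(HttpUser):
    # Each user runs at most 1 task per second, so total
    # throughput is roughly number_of_users * 1 requests/sec.
    wait_time = constant_throughput(1)

    @task
    def post_payload(self):
        self.client.post("/api/endpoint", json={"key": "value"})
```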

How do you set RPS in Locust?

The code your Locust users run needs to be written in such a way that each user is only making one request. Then, in your Locust User class, you set wait_time = constant(1) or wait_time = constant_pacing(1), whichever behavior you want. You can see this sort of pattern in all of the code examples for custom shapes.
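A sketch contrasting the two options (the /health endpoint is a placeholder): constant(1) sleeps for 1 second after each task finishes, while constant_pacing(1) starts each iteration 1 second after the previous one started, absorbing the request's own duration:

```python
from locust import HttpUser, task, constant, constant_pacing

class FixedWaitUser(HttpUser):
    wait_time = constant(1)  # sleep 1 s after each task completes

    @task
    def one_request(self):
        self.client.get("/health")  # placeholder endpoint

class PacedUser(HttpUser):
    # Start a new iteration every 1 s regardless of how long the
    # request itself took (as long as it took under 1 s), so each
    # user contributes ~1 request/sec.
    wait_time = constant_pacing(1)

    @task
    def one_request(self):
        self.client.get("/health")
```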

What does RPS mean in Locust?

Current RPS: the current number of requests per second.

Is Locust better than JMeter?

Due to its event-based approach and "test as code" capability, Locust is very scalable. It has a large and steadily growing user base that prefers it to JMeter for these reasons.


1 Answer

(one of the Locust authors here)

First, why do you want to control the RPS? One of the core ideas behind Locust is to describe user behavior and let that generate load (requests, in your case). The question Locust is designed to answer is: how many concurrent users can my application support?

I know it is tempting to go after a certain RPS number, and sometimes I "cheat" as well by striving for an arbitrary one.

But to answer your question: are you sure your Locusts don't end up in a deadlock? As in, they complete a certain number of requests and then go idle because they have no other task to perform? It's hard to tell what's happening without seeing the test code.

Distributed mode is recommended for larger production setups, and most real-world load tests I've run have been on multiple but smaller instances. But it shouldn't matter if you are not maxing out the CPU. Are you sure you are not saturating a single CPU core? Not sure what OS you are running, but if it's Linux, what is your load value?
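A quick way to eyeball that from the load generator itself (a hedged sketch; Unix-only, since os.getloadavg() isn't available on Windows):

```python
import os
import multiprocessing

# Compare the 1-minute load average to the number of cores: a load
# near 1.0 on a 16-core box can still mean one core is pegged if the
# load-generating process is single-threaded.
load1, load5, load15 = os.getloadavg()
cores = multiprocessing.cpu_count()
print(f"1-min load: {load1:.2f} on {cores} cores "
      f"({load1 / cores:.0%} of aggregate capacity)")
```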

answered Oct 15 '22 by cgbystrom