%%
date:: [[2021-04-19]], [[2021-04-03]], [[2021-04-02]], [[2022-09-29]]
%%
# [[Increasing throughput of a load test]]
Increasing the throughput of a script makes it capable of simulating high load, and is what distinguishes an automated [[Functional testing|functional test]] script from a [[Load Testing]] one.
A higher throughput may also be more realistic in cases where the application experiences heavy user load in production.
Additional instances of the script running on other load generators will increase throughput; however, it is good practice to also make the script itself more efficient at producing load so that fewer load generators are needed.
Here are some ways to increase throughput at the script level.
##### [[Number of virtual users]]
This is the most obvious way to increase load. The more instances of your script running per load generator, the more requests are executed.
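As a rough illustration, here is a minimal sketch of the idea in Python, where each virtual user is simply a thread looping through the same scripted request. The target URL and user count are placeholder assumptions, not values from a real test.

```python
# Minimal sketch: scaling load by adding virtual users. Each virtual user is a
# thread that repeatedly executes the same scripted request. The URL and user
# count below are hypothetical placeholders.
import threading
import urllib.request

TARGET_URL = "http://localhost:8080/"   # hypothetical system under test
VIRTUAL_USERS = 10                      # more users per generator = more requests

def virtual_user(iterations: int = 100) -> None:
    """One virtual user running the scripted request in a loop."""
    for _ in range(iterations):
        with urllib.request.urlopen(TARGET_URL) as response:
            response.read()

threads = [threading.Thread(target=virtual_user) for _ in range(VIRTUAL_USERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```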
##### [[Dynamic think time and pacing]]
Not including think time is a common mistake in load testing scripts. Think time is the amount of time a user spends “thinking”, that is, the delay between one request and the next. Firing off many requests immediately one after another may sound like a great way to apply extra load to your server, but it can actually backfire. Leaving out think time is resource-intensive for the load generator: it has to work overtime to send the requests and may struggle to process each one before dispatching it. The load generator itself can then become a bottleneck, queuing requests before they are ever sent to your servers, and the high response times it reports reflect that queuing rather than how your application servers handle the load.
To prevent this from happening, add think time to your scripts that reflects a user’s normal wait time. This will space out the requests and lead to more accurate results.
Keep in mind, however, that decreasing think time also increases throughput, because each thread can send more requests when it waits less between them. The right balance of think time needs to be struck in order to mimic production behaviour.
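A minimal sketch of dynamic think time, continuing the same illustrative Python setup: each iteration sleeps for a randomised interval after its request. The target URL and the 1–5 second range are assumptions chosen for illustration.

```python
# Minimal sketch of dynamic think time: the virtual user pauses for a
# randomised interval after each request instead of firing them back to back.
# The target URL and the 1-5 second range are illustrative assumptions.
import random
import time
import urllib.request

TARGET_URL = "http://localhost:8080/"   # hypothetical system under test

for _ in range(10):                     # one virtual user's iterations
    with urllib.request.urlopen(TARGET_URL) as response:
        response.read()
    # Think time: the delay a real user would spend reading the response.
    # Shrinking this range raises throughput; widening it lowers it.
    time.sleep(random.uniform(1.0, 5.0))
```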
##### Using [[Concurrent requests]]
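Real browsers fetch a page’s embedded resources (images, stylesheets, scripts) in parallel rather than one after another, so issuing those requests concurrently from the script both makes each iteration more realistic and raises the number of requests a single virtual user can produce. Below is a minimal sketch in Python using a thread pool; the base URL and resource paths are hypothetical placeholders.

```python
# Minimal sketch of concurrent requests: a single virtual user fetches a page's
# embedded resources in parallel, the way a browser would. The base URL and
# resource paths are hypothetical placeholders.
from concurrent.futures import ThreadPoolExecutor
import urllib.request

BASE_URL = "http://localhost:8080"      # hypothetical system under test
RESOURCES = ["/", "/style.css", "/app.js", "/logo.png"]

def fetch(path: str) -> int:
    """Fetch one resource and return its HTTP status code."""
    with urllib.request.urlopen(BASE_URL + path) as response:
        return response.status

with ThreadPoolExecutor(max_workers=len(RESOURCES)) as pool:
    statuses = list(pool.map(fetch, RESOURCES))
print(statuses)
```

The thread pool keeps the example dependency-free; dedicated load testing tools expose the same idea through their own parallel or batched request features.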