- I’ve talked a lot about the number of users as if it were a measure of throughput (how much load is being generated), but that’s actually a bit of an oversimplification.
- More users doesn’t necessarily mean more load. For example, 1000 users could send a combined total of 100 requests per minute, while 100 users could send 1000 requests in the same amount of time. A user that clicks every link on the page and triggers requests to multiple servers will have a different load profile from a user that just navigates to one page and refreshes it.
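The arithmetic behind those two scenarios can be sketched as a one-line helper (a hypothetical function for illustration, not part of any load testing tool):

```python
# Total load depends on both the user count and each user's request rate.
def total_rpm(users: int, requests_per_user_per_minute: float) -> float:
    """Requests per minute generated by a group of users."""
    return users * requests_per_user_per_minute

# 1000 users sending 0.1 requests/minute each -> 100 requests/minute total.
assert total_rpm(1000, 0.1) == 100
# 100 users sending 10 requests/minute each -> 1000 requests/minute total.
assert total_rpm(100, 10) == 1000
```

The same total load can come from many combinations of user count and per-user rate, which is why user count alone is ambiguous.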
- So there’s a missing variable here, and that’s the number of requests per second, which is the more accurate measure of test throughput. Most of the time, when people talk about user concurrency, what they’re actually looking for is a way to express how much load they want to apply.
- If this is the case, the savvy load tester (that’s you) can reduce the number of users and increase the throughput of each user. This is because most load testing tools require more resources to increase the number of users than to increase the throughput per user. Reducing the number of users while increasing their throughput will maintain the expected load on the server while reducing the number of machines that need to be provisioned (and paid for).
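To keep the server-side load constant while shrinking the user count, divide the target throughput by the new number of users. A minimal sketch (the helper name and the 500 req/s target are assumptions for illustration):

```python
# Per-user request rate needed to sustain a target total throughput.
def per_user_rate(target_rps: float, users: int) -> float:
    """Requests per second each user must send to hit the target throughput."""
    return target_rps / users

# Hitting a target of 500 req/s with different user counts:
assert per_user_rate(500, 1000) == 0.5  # each user sends a request every 2 s
assert per_user_rate(500, 100) == 5.0   # each user sends 5 requests per second
```

Both configurations apply the same 500 req/s to the server, but the second needs only a tenth of the simulated users, and therefore fewer load generator machines.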