%%
date:: [[2022-10-27]]
Parent:: [[Load Testing]]
Friend:: [[Server metrics]]
%%
# [[Load testing metrics]]
Below are some common metrics that are generated by your load testing tool. They pertain to the load test scenario you executed and will give you insight into the user experience (in terms of response time) of your application under load.
**Concurrent users** is the number of users running at the same time. In JMeter or Gatling speak, this is also referred to as the number of threads. Given the same script, a larger number of users increases the load on your application. Note that user concurrency says nothing about throughput: having 1000 users doesn't necessarily translate to 1000 requests per second, nor does it mean that all 1000 users are actively using the application. It only means that there are 1000 instances of your script currently ongoing, and some of those could be executing think time or other wait times that you've scripted.
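The gap between concurrency and throughput can be sketched with Little's Law. All the numbers below are invented for illustration; real scripts have variable response and think times:

```python
# Why 1000 concurrent users does not mean 1000 requests per second:
# with think time in the script, each user spends most of its
# iteration waiting, not sending requests.

def throughput_per_second(users, response_time_s, think_time_s):
    """Approximate steady-state throughput via Little's Law:
    throughput = concurrency / time spent per iteration."""
    return users / (response_time_s + think_time_s)

# 1000 users, 0.5 s response time, 4.5 s scripted think time
rate = throughput_per_second(1000, 0.5, 4.5)
print(f"{rate:.0f} requests/second")  # 200, far below 1000
```

Dropping the think time to zero in the same formula pushes the rate toward 2000 requests per second, which is why scripts without think time generate far more aggressive load for the same user count.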
**Response time** is the time between when the load testing tool sends a transaction from the load generator and when it receives the response, measured to the last byte. Because it is measured by the load testing tool, it includes network latency and is affected by bottlenecks on the generator itself (such as high resource utilization). Both JMeter and Gatling ignore think time when calculating this. Response time is a useful metric for gauging how long your application took to process requests: a higher response time means a longer processing time.
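As a rough sketch of what "time to last byte" means, the snippet below times a simulated response stream. `FakeResponse` and its 10 ms chunk delay are stand-ins for a real HTTP exchange, not anything JMeter or Gatling exposes; the point is only that the clock stops after the final chunk is read, not when the first byte arrives:

```python
import time

class FakeResponse:
    """Stand-in for an HTTP response: yields the body in chunks,
    with a small delay per chunk to mimic network transfer."""
    def __init__(self, chunks, delay_s=0.01):
        self.chunks = iter(chunks)
        self.delay_s = delay_s

    def read_chunk(self):
        try:
            chunk = next(self.chunks)
        except StopIteration:
            return b""  # body exhausted: last byte already received
        time.sleep(self.delay_s)
        return chunk

def time_to_last_byte(response):
    start = time.perf_counter()
    while response.read_chunk():
        pass  # keep draining until the last byte arrives
    return time.perf_counter() - start

elapsed = time_to_last_byte(FakeResponse([b"header", b"body", b"tail"]))
print(f"{elapsed * 1000:.0f} ms")  # roughly 30 ms: three 10 ms chunks
```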
The **transaction rate** measures the throughput of your load test. JMeter sometimes reports throughput in terms of samples per second, which is a similar concept but not the same. Generally speaking, a JMeter sample is a single request, and multiple requests can be grouped into transaction controllers, so samples and transactions are not interchangeable; both, however, describe how quickly your test is sending requests to your application. The transaction rate, more than the number of concurrent users, describes the load your application is handling: you can expect a higher transaction rate to correspond to higher load.
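In its simplest form, the transaction rate is just completed transactions divided by test duration. The figures below are made up; real tools report this per interval and per transaction name:

```python
# Transaction rate = completed transactions / test duration.

def transaction_rate(n_transactions, duration_s):
    """Average transactions per second over the test window."""
    return n_transactions / duration_s

# 12000 transactions completed over a 10-minute (600 s) test
print(transaction_rate(12000, 600))  # 20.0 transactions/second
```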
The **failed rate/error rate** is normally expressed as the number of failed transactions divided by the total number of transactions executed, and is often represented as a percentage: an error rate of 40% means that 40% of all transactions failed. Whether a transaction failed is determined by the script and can be caused by many issues: a verification may not find the expected content on the page because an unexpected response was returned (such as an error page), or the load testing tool may have waited too long for a response and hit a connection timeout. High error rates indicate either script errors or application errors and should never be ignored.
The **passed rate** is similar to the error rate but measures the other side of the coin: it expresses how many of the transactions during the test passed.
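Both rates fall out of the same pass/fail tally and always sum to 100%. The sample results below are invented:

```python
# Error rate and passed rate from a list of transaction outcomes.

def error_rate(results):
    """results: list of booleans, True = transaction passed.
    Returns the percentage of transactions that failed."""
    failed = sum(1 for passed in results if not passed)
    return 100.0 * failed / len(results)

def passed_rate(results):
    """Percentage of transactions that passed: the other side
    of the same coin."""
    return 100.0 - error_rate(results)

results = [True, True, False, True, False]  # 2 of 5 failed
print(error_rate(results))   # 40.0
print(passed_rate(results))  # 60.0
```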