%% Last Updated: - [[2021-03-30]] %%

[Tim Koopmans](https://flood.io/blog/author/tim-koopmans/), co-founder of [Flood](http://flood.io), coined the acronym SPEAR to describe the different aspects of performance that should be considered in load testing. This is a great starting point when thinking about what we want our nonfunctional requirements to cover.

#### [[Scalability]]

Scalability is the application’s ability to cope with increasing demand by increasing the amount of server resources. This could mean scaling _up_ (increasing the resources of an existing server) or scaling _out_ (adding more nodes to shoulder the load). What happens when more users than expected sign up in response to a promotion on your site?

#### [[Performance]]

The most common performance metric is page response time, but there are other considerations here, such as throughput (requests per minute) and the number of concurrent sessions that need to be supported. Things like the total size of the resources on the page, whether or not a CDN is being used, and what to cache are also worth discussing.

#### [[Elasticity]]

Elasticity is a relatively new aspect of performance testing, brought about by advances in cloud infrastructure that allow applications to adapt to changes in load. Unlike scalability, elasticity emphasises scaling down as much as it does scaling up. Testing that virtual machines scale up when load increases is important, but testing that they also scale back down when load decreases can help save on unnecessary costs.

#### [[Availability]]

To test for high availability, ask yourself what would happen when (not _if_) your application’s server fails. Is there another server that the load balancer will seamlessly send traffic to? Does throughput fluctuate wildly? If users are connected to one server that fails, is your application smart enough to make new connections to another server? Or will it simply serve up an error page that users won’t know what to do with? Disaster recovery is best tested when there’s no disaster imminent.

#### [[Reliability]]

Reliability encompasses a lot of scenarios, but they all come down to whether or not your application returns the expected responses. Does your error rate increase when you increase the duration of your load test? Are you adding verification steps to your load testing scripts to check that an HTTP 200 response from the application isn’t actually an error page?
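
As a rough illustration of that last point, here is a minimal [Locust](https://locust.io) sketch in Python that treats an HTTP 200 as a failure if the body looks like an error page. The host, the path, and the `"error"` marker string are placeholder assumptions; substitute whatever your own application actually returns.

```python
from locust import HttpUser, task, between


class WebsiteUser(HttpUser):
    # Simulated users pause 1-3 seconds between requests
    wait_time = between(1, 3)

    @task
    def load_home_page(self):
        # catch_response=True lets the script decide for itself whether
        # the request counts as a success or a failure
        with self.client.get("/", catch_response=True) as response:
            if response.status_code != 200:
                response.failure(f"Unexpected status: {response.status_code}")
            elif "error" in response.text.lower():
                # A 200 whose body looks like an error page is still a failure
                response.failure("HTTP 200, but the body looks like an error page")
            else:
                response.success()
```

Run headless with something like `locust -f loadtest.py --host https://your-app.example.com --users 100 --spawn-rate 10 --run-time 10m --headless`, and the failure ratio in Locust’s summary becomes the error-rate trend over the duration of the test.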
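
Returning to the [[Elasticity]] point above: one way to confirm that infrastructure actually scales back down after a load test is to poll the instance count while the test winds down. The sketch below assumes an AWS Auto Scaling group (the group name is hypothetical) and uses boto3; other clouds expose equivalent APIs.

```python
import time

import boto3

# Hypothetical Auto Scaling group name; substitute your own
ASG_NAME = "web-app-asg"


def in_service_count(client) -> int:
    """Return the number of instances currently in service in the group."""
    response = client.describe_auto_scaling_groups(AutoScalingGroupNames=[ASG_NAME])
    instances = response["AutoScalingGroups"][0]["Instances"]
    return sum(1 for i in instances if i["LifecycleState"] == "InService")


def watch_scale_down(poll_seconds: int = 60, duration_seconds: int = 1800) -> None:
    """Log the in-service instance count as load ramps down."""
    client = boto3.client("autoscaling")
    deadline = time.time() + duration_seconds
    while time.time() < deadline:
        count = in_service_count(client)
        print(f"{time.strftime('%H:%M:%S')} instances in service: {count}")
        time.sleep(poll_seconds)


if __name__ == "__main__":
    watch_scale_down()
```

If the count never drops back toward the group’s minimum after the test ends, you are likely paying for capacity you no longer need.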