# Load testing for modern engineering teams
## Executive Summary
The way software is built and shipped to end users today puts pressure on engineering organizations to deliver faster, cheaper, and more frequently, without compromising quality.
Major trends in the software testing industry like Agile, DevOps, TestOps, and Site Reliability Engineering challenge the traditional role silos through collaboration, automation, and the acceptance of the inevitability of application failures.
Load testing validates performance and reliability, and it is a shared responsibility across the various roles in a team. Modern engineering teams that have embraced these emerging trends and best practices in software delivery may find traditional load testing solutions slow, inefficient, and incapable of adapting to faster-paced work methodologies.
This whitepaper looks at the trends and challenges in the software industry that affect today's load testing process. It describes how a developer-oriented and team-centric solution can improve software quality by using collaborative testing to achieve higher release velocity.
According to McKinsey research [^mckinsey], best-in-class tools are the primary driver of developer velocity and are the top contributor to business success, enabling greater productivity, visibility, and coordination.
## Testing trends that challenge traditional testing
Traditional load testing follows the waterfall model, where project activities are organized into independent phases that follow a sequential order as they cascade down to the final phase, deployment. In this model, load testing was carried out as part of the testing phase, if at all, after the code had been developed and before it was deployed. Performance testing was a luxury activity, mostly required by organizations that could afford expensive solutions, a skilled workforce, and another costly step in their software process. The first performance testing tools were built in this context: they enabled testers to create large end-to-end test suites that exhaustively targeted the system under test, but at a cost.

The waterfall methodology has declined in popularity in recent years, yet many testing tools have not changed. They now face considerable challenges due to some emerging trends in performance testing.
### Agility in performance
Switching to Agile methodologies has completely changed software delivery practices, improving the way we develop, test, and ship software for our customers:
- We build better products, experimenting and delivering software more frequently.
- We build software faster through better tools and team collaboration.
- We build better digital experiences, focusing more on performance and reliability.
These modern engineering practices enable software teams to move faster with exceptional quality. However, they also come with these concerns:
- Frequent, fast-paced development makes long-term vendor contracts and tightly coupled testing frameworks expensive to set up and cumbersome to use.
- Tighter collaboration between teams pushes testing earlier in the cycle, so developer buy-in for tooling is essential.
- The focus on digital experiences increases both the likelihood and the visibility of failures due to inadequate performance testing.
Without a modern testing solution, teams struggle with collaboration and bureaucratic testing processes, restraining the full potential of this new era.
> An application's technology stack is the top aspect for successful Agile and DevOps automation: 65% of respondents considered it essential or very important. (World Quality Report 2021, Capgemini.) [^worldqualityreport]
### Shift-left performance testing and SDETs
Roles within engineering teams are changing. Agile and shift-left testing encourage developers to participate in testing activities, creating a new breed of developer called Software Development Engineers in Test (SDETs) or Software Engineers in Test (SETs). SDETs/SETs are a hybrid role between the traditional QA and development roles, and they are in high demand in the software industry. They enable teams to test early and continuously, incorporating testing into every activity in the software development process.
> According to the World Quality Report 2021 by Capgemini, shift-left testing is one of the most important ways to increase testing efficiency. [^worldqualityreport]
When developers don’t participate in testing, testing slows to a crawl. Developers write the code, so they often know the details required to implement tests efficiently. However, processes and tools need to adapt to developer workflows to bring development roles into testing; otherwise, they hamper collaboration.
![[coder-tester.png]]
SETs collaborate with product managers to define more practical and targeted testing approaches. At the same time, they work with engineers to create test cases and automate the testing process. The rise of SDET and SET roles is the inevitable effect of shift-left testing and the trend away from silos within Agile teams.
However, traditional load testing solutions were not designed to be used by developers. They involve languages and frameworks with little regard for developer ergonomics, test scripts that need to be managed separately from application code, and test execution components that don't leverage modern infrastructure.
Modern testing solutions like k6 fill this gap and deliver the advantages of new technologies, especially when compared to more traditional, QA-siloed tools.
### Shift-right performance testing and TestOps
Testing is shifting left towards the beginning of the lifecycle of a product feature, but it's also simultaneously shifting right: performance testing now extends past traditional testing cycles to follow code through deployment and beyond that, into production.
#### TestOps and Continuous Testing
Engineering teams are running on stricter deadlines than ever, and a proven way to reduce the amount of time spent on testing while increasing software quality is to include performance testing in delivery pipelines. Automating testing through deployment and delivery, as well as development, ensures that performance stays front of mind for every release.
In projects that implement application infrastructure as code, having performance tests that are also written in code makes TestOps possible. TestOps is the emerging discipline blending testing and operations. TestOps empowers testers to spin up their own test environments, avoid downtime due to environment contention, and reduce dependencies on integrated components.
TestOps also empowers site reliability engineers to incorporate testing at an infrastructure level, such as to validate Kubernetes configuration, dynamic scaling, and disaster recovery procedures.
Without tools that allow code-driven test scripts that integrate with Continuous Delivery frameworks, the gulf between testing and operations remains. Many teams struggle to incorporate GUI-based tools into code-based pipelines.
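As a sketch of what a code-driven performance test can look like, the following is a minimal k6 script. The target URL, load level, and duration are illustrative placeholders; the script is executed with the k6 binary (`k6 run script.js`), not with Node.js, which is what lets it live in version control and run from a delivery pipeline like any other code.

```javascript
// Minimal k6 load test sketch. Run with: k6 run script.js
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  vus: 10,          // 10 concurrent virtual users
  duration: '30s',  // sustain the load for 30 seconds
};

export default function () {
  // Placeholder endpoint; replace with the system under test.
  const res = http.get('https://test.k6.io/');
  check(res, {
    'status is 200': (r) => r.status === 200,
  });
  sleep(1); // pacing between iterations per virtual user
}
```

Because the test is plain code, it can be reviewed in pull requests, versioned alongside the application, and triggered automatically from CI.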
#### Site Reliability Engineering (SRE) and testing in production
SRE is one of the latest trends and a sign of maturity in the software industry. The basic SRE goal is to meet increasing customer expectations in terms of availability, scalability, and performance. This goal often leads to extending the scope of testing through to production environments.
Site Reliability Engineering practices accept the benefits and complexity of today’s infrastructure and embrace system failures as given. SRE seeks to meet customer expectations by establishing Service Level Objectives (SLOs) in terms of availability and performance. SLOs are an evolution of the older concept of test requirements, which often failed to quantitatively describe desired application states.
![[k6-thresholds.png]]
Defining SLOs is a sign of a mature software organization. It is fundamental for business and operations because it aligns core objectives across engineering, product teams, and customers, providing a common framework for reliability and end-user performance across the whole organization.
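For illustration, k6 lets a team encode SLO-style limits directly in a test script as thresholds, so a test run fails when the limits are breached. The fragment below is a sketch; the specific metric values are placeholders, not recommendations.

```javascript
// k6 options fragment: thresholds turn SLO-style limits into pass/fail criteria.
export const options = {
  thresholds: {
    // Fewer than 1% of requests may fail.
    http_req_failed: ['rate<0.01'],
    // 95% of requests must complete in under 500ms.
    http_req_duration: ['p(95)<500'],
  },
};
```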
Because SRE embraces system failures, an SLO also defines an application's error budget: the percent of allowed errors for a period of time. Teams monitor the error budget burndown rate in production to verify the SLO status and address possible issues immediately, ideally before the SLO is breached.
However, monitoring or testing *only* in production is risky, as rapid software releases often introduce regressions. Service Level Agreements are crucial for the business, and teams should mitigate the risk of violations by testing SLOs earlier in the software delivery cycle.
As an inevitable effect of this trend, more and more SREs are participating in testing. According to the SRE Report 2020, 30% of SRE respondents were participating in QA testing, and 41% of SRE teams were using testing tools.
The participation of SREs in testing breaks the silo between engineering managers, QA, and Operational teams, bringing QA to the shared organizational goals of performance and reliability.
### Enabling end-to-end performance and reliability testing today
In the face of the transformation driven by these emerging trends, software organizations have to align their processes, teams, and tools to the shared goals of performance and reliability. All three elements (process, team, and tool) are connected, and a weakness in one negatively impacts the other two.
The wrong tool selection is one of the main factors restraining cross-functional testing and developer participation.
> 67% of respondents considered "Whole team testing" challenging or very challenging. (State of Testing 2021). [^stateoftesting]
Combined with best practices and talent management, choosing the right tool can unlock the benefits of modern testing practices. However, the testing tool needs to enable three critical factors: developer enablement, continuous testing, and collaboration.
#### Developer enablement
Developers have strong tooling preferences based on previous development experiences. They will choose tools that minimize barriers, boost their productivity, and work well with their existing workflows. To help bring developers into testing, a developer-oriented tool should:
- Support a familiar language to create and maintain advanced tests efficiently
- Be open-source with a strong community aligned with their principles
- Integrate easily with other open-source tools
#### Continuous testing
Agile practices encourage continuous software delivery, DevOps accelerates software cycles through automation, and TestOps facilitates efficient testing across environments. Testing earlier and continuously will ensure that software regressions don't breach your Service Level Agreements (SLAs). In this rapid environment, the testing solution should:
- Support testing SLOs and error budgets
- Integrate automated testing into software delivery cycles
- Alert stakeholders of failures
#### Collaboration
Today, various roles participate in performance testing: engineering managers, performance engineers, SREs, developers, QA testers, SDETs, business analysts, and more. While the bulk of the official responsibility for testing usually falls on one team, depending on the organization structure, everyone is ultimately responsible for quality. Other roles support testing teams by actively incorporating tests into their day-to-day activities. To enable this collaboration, the testing solution should:
- Be flexible enough to adapt to the workflows of participants from different areas
- Enable cross-functional team collaboration
- Democratize testing by making test suites easily accessible to all
## Unlocking collaboration with k6 Cloud
At k6, we have experienced the challenges highlighted in this whitepaper. We built k6 (our open-source tool) and k6 Cloud (our SaaS offering) to solve the testing problems in this new era. Both solutions can improve your testing processes significantly, bringing cross-functional teams together to do performance testing.
k6 provides a [rich ecosystem](https://k6.io/docs/integrations/) for various team workflows and multiple options for test creation, including a scripting language and GUI tools. k6 Cloud offers a centralized platform for cross-functional collaboration, reducing operational expenses in [building and maintaining load testing infrastructure](https://k6.io/what-to-consider-when-building-or-buying-a-load-testing-solution/) and allowing your team to focus their efforts on building and releasing a performant product.
The k6 Cloud platform offers:
- **Scalability**: It lets you ramp up your load tests easily by running hundreds of machines across multiple geographical regions in minutes.
- **Automation**: We've built k6 with native support to automate your performance testing, adding valuable features like trending analysis, test comparison, and notifications.
- **Collaboration**: Our platform includes key collaboration features, such as role-based access control (RBAC) for projects, test concurrency, and advanced sharing capabilities.
- **Unified reporting**: You get access to a single source of truth to further the collaboration and communication between managers and engineers. Customizable dashboards provide test results in real time, letting you make decisions faster.
- **Integrations**: k6 Cloud provides rich integrations with your favorite monitoring tools and platforms for further test visualization, data correlation and incident response.
> k6 Cloud Enterprise has enabled our quality engineering team to build more confident testing and streamline the process for deployment of new features and products, creating an all-around first-class experience for our customers.
> Eric Stone
> Software Engineer. [Carvana Case Study](https://k6.io/case-study-carvana/)
> Performance and reliability of our platform are of the utmost importance for Olo, our brands, and our partners. As a result, k6 has become an invaluable part of our test stack. k6 helps us quickly experiment with new ideas and verify that releases are production-ready.
> Jake Travisano
> Staff Software Engineer in Test. [Olo Case Study](https://k6.io/case-study-olo/)
[^mckinsey]: Citation required.
[^worldqualityreport]: Capgemini, Sogeti, Micro Focus. (2021). _World quality report 2020-2021_. Accessed in May 2021 from [Sogeti](https://www.sogeti.com/explore/reports/continuous-testing-report-2020/)
[^stateoftesting]: Citation required.