%% Last Updated: [[2020-09-04]] %%

# The Myth of Continuous Performance Testing | LinkedIn

[Readwise URL](https://readwise.io/bookreview/4358728) | [Source URL](https://www.linkedin.com/pulse/myth-continuous-performance-testing-stephen-townshend/)

---

![](https://readwise-assets.s3.amazonaws.com/static/images/article1.be68295a7e40.png)

---

Often this means building our performance testing into our deployment pipeline. But does automated performance testing actually deliver what it should? ^83125919

---

The age-old problem with load testing assets is that when the application changes, our test suites have a habit of breaking. The effort required to repair or rebuild them can be substantial, to the point where it is not cost-effective in a rapid, iterative life cycle. ^83125920

---

Load test tool vendors are trying to tell a story about being DevOps and CI/CD ready. The reality is that, from a technical viewpoint, they have not significantly evolved in over a decade (with a few rare exceptions). ^83125921

---

We can define NFRs, but in my experience these are often numbers plucked out of the air without any real connection to what matters. ^83125922

---

A better way would be to track performance over time. We could compare back to the past dozen runs and look for degradation over time. This is tricky to implement; nothing on the market does it out of the box. And how, even then, do you determine when to fail the build? Still, this is possible and a good avenue for future investigation. ^83125923

---

So, what if the environment we are testing in is not production-like? What can we say from the results? The best we can do is draw comparisons between builds, and only about response time. On top of that, the response times we observe will not necessarily reflect production, and capacity and stability cannot be measured accurately. It is a good early indication of some performance issues, but that is all.
^83194048

---

The conclusion I keep coming back to is that there are some limited things we can test continually, but there is also a place for some 'big bang' performance testing at less frequent milestones. ^83194049

---

## My thoughts

I mostly disagree with Stephen's points here. Any good load testing tool should include a way to measure performance trends over time, and with the [[Test as code]] movement picking up steam, implementing [[Continuous improvement in performance testing]] is trivial. Given this article was written in 2017, I'm chalking this one up to being outdated.
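The trend-tracking idea from the highlights (compare a new build against the past dozen runs and fail on degradation) is simple to sketch, even if no tool does it out of the box. The following is a minimal illustration, assuming per-build mean response times are archived somewhere; the function name, window size, and threshold are my own hypothetical choices, not from the article.

```python
import statistics

def detect_regression(history, latest, window=12, threshold=1.20):
    """Flag a performance regression by comparing the latest build's
    mean response time (seconds) against the mean of up to `window`
    previous runs.

    history   -- list of per-build mean response times, oldest first
    latest    -- the current build's mean response time
    threshold -- fail if latest exceeds threshold x the baseline mean

    All numbers here are illustrative defaults, not recommendations.
    """
    recent = history[-window:]
    if not recent:
        return False  # no history yet, nothing to compare against
    baseline = statistics.mean(recent)
    return latest > baseline * threshold

# Example: builds hovering around 0.50s, then one build jumps to 0.90s.
past_runs = [0.48, 0.52, 0.50, 0.49, 0.51, 0.50]
print(detect_regression(past_runs, 0.51))  # steady build -> False
print(detect_regression(past_runs, 0.90))  # degraded build -> True
```

In a pipeline, a `True` result would be the point at which you fail the build; the open question the article raises — what threshold and window are meaningful for your system — still has to be answered per project.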