## [[The Observer Effect]] in performance testing

A key thing to understand about any [[Software Testing|testing]], but especially performance testing, is that it's almost impossible to test an application and measure its behaviour under load or other circumstances without the tests themselves influencing the results.

Ideally, our load tests would be [[Making load testing scripts more realistic|as realistic as possible]]. Practically, a test is still just an experiment, and it's very likely that the real production results will vary somewhat. We can tweak and tune our tests, but at some point we get diminishing marginal returns for the effort spent. That's why it's a good strategy to use both testing and observability to:

- Capture [[Baseline test|baseline]] performance, ideally in [[Testing in production|production]].
- Run a baseline test in your test environment before any changes, and compare how close you got to the baseline production data.
- Make the change(s) in the application, run the test, and then compare to the baselines.
- Assume you didn't get the testing exactly right and run [[Chaos Engineering|chaos tests]] to check for unexpected issues.
- Continue to monitor [[Testing in production|in production]].
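The comparison steps above can be sketched in code. This is a minimal illustration, not a prescribed implementation: the latency samples, the 95th-percentile metric, and the 10% tolerance are all hypothetical choices standing in for whatever baseline data and thresholds your own system uses.

```python
from statistics import quantiles

def percentile(samples, pct):
    """Return the pct-th percentile of a list of latency samples (ms)."""
    # quantiles with n=100 yields the 1st..99th percentile cut points
    return quantiles(samples, n=100)[pct - 1]

def compare_to_baseline(baseline_ms, test_ms, pct=95, tolerance=0.10):
    """Flag a test run whose p95 latency drifts more than `tolerance`
    (as a fraction) from the production baseline."""
    base = percentile(baseline_ms, pct)
    test = percentile(test_ms, pct)
    drift = (test - base) / base
    return {
        "baseline_p95": base,
        "test_p95": test,
        "drift": drift,
        "within_tolerance": abs(drift) <= tolerance,
    }

# Hypothetical latency samples (ms): production baseline vs. test environment
production = [120, 135, 128, 142, 150, 131, 125, 139, 160, 145]
test_env = [118, 124, 126, 129, 130, 137, 140, 143, 155, 158]

print(compare_to_baseline(production, test_env))
```

A report like this only tells you how far the test environment deviates from the production baseline; it doesn't make the test "right". That gap is exactly why the chaos testing and continued production monitoring steps still matter.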