WOPR22 was held May 21-23, 2014, in Malmö, Sweden. It was hosted by Maria Kedemo and Verisure. Eric Proegler was the Content Owner.
Attendees
Fredrik Fristedt, Andy Hohenner, Paul Holland, Martin Hynie, Emil Johansson, Maria Kedemo, John Meza, Eric Proegler, Bob Sklar, Paul Stapleton, Andy Still, Neil Taitt, and Mais Tawfik Ashkar.
WOPR22 Theme: Early Performance Testing
Performance testing is usually thought of as an activity conducted against “completed” software deployed on production hardware, or on a copy of production hardware. These tests are designed for “realism”: simulating a sample of user activity at scales chosen to match expected production loads, in environments with “identical” resource capacity and configuration. The value of this kind of test lies in its perceived accuracy in predicting how the live system will perform and how reliable it will be.
This approach is used to validate completed, assembled systems right before go-live, and it can be effective for that purpose. But what do we do when the software is not complete, when the system is not yet assembled, or when the software will be deployed in multiple environments with different workloads?
In agile and iterative development methodologies, the time between writing code and deploying it has become very short, to the point where high-fidelity simulation before deployment is often impractical or impossible. Cloud deployments and virtualization have made the quantity of resources actually available very hard to determine. All of these challenges are pressuring the testing, measurement, and evaluation of performance, scalability, and reliability to evolve so that incomplete and rapidly changing systems can still be tested.
We would like to hear how you test for performance and reliability against systems that are still in development, are undergoing major refactoring, or are not yet deployed to production-class environments. WOPR22 will share the experiences and learning of practitioners who are testing in these new contexts. We want to learn with you how to provide actionable performance and reliability feedback earlier in the life cycle, and how to apply that feedback more broadly.
Questions WOPR22 May Address
Your experience might touch on one or more of these questions, or it may not. If you have a story that relates to the theme, we would like to hear it.
- When do you start testing for performance and reliability in your projects? What are your entry criteria?
- How does your approach to performance testing change when testing a system that does not have “production-class” resources?
- How do you test components for performance and reliability? What techniques do you use to substitute for missing pieces of a system?
- What do you do when you do not have a usage model for end users? How do you decide which activities to simulate? What does your load model look like?
- Have you added performance/reliability testing to a Continuous Integration process? How is that going? (A sketch of one such check appears after this list.)
- How do you report test results when “realism” isn’t present?
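To make the Continuous Integration question above concrete, here is a minimal sketch of what an early, low-fidelity performance gate in a CI pipeline might look like. It is illustrative only, not drawn from any WOPR22 experience report; the endpoint URL, sample size, concurrency, and latency budget are hypothetical placeholders that a real project would tune to its own component and environment.

    # Minimal sketch of a smoke-level performance check for a CI pipeline.
    # All names below (ENDPOINT, budgets, sample sizes) are hypothetical
    # placeholders, not a prescription. Uses only the Python standard library.
    import statistics
    import time
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    ENDPOINT = "http://localhost:8080/health"  # hypothetical component endpoint
    REQUESTS = 50             # a small sample: a trend signal, not a forecast
    CONCURRENCY = 5           # modest parallelism for a shared CI runner
    P95_BUDGET_SECONDS = 0.5  # fail the build if p95 latency exceeds this budget

    def timed_request(_):
        """Issue one request and return its wall-clock latency in seconds."""
        start = time.perf_counter()
        with urlopen(ENDPOINT) as response:
            response.read()
        return time.perf_counter() - start

    if __name__ == "__main__":
        with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
            latencies = list(pool.map(timed_request, range(REQUESTS)))
        # quantiles(n=20) yields 19 cut points; the last is the 95th percentile.
        p95 = statistics.quantiles(latencies, n=20)[-1]
        print(f"median={statistics.median(latencies):.3f}s p95={p95:.3f}s")
        if p95 > P95_BUDGET_SECONDS:
            # A nonzero exit code turns the latency budget into a build gate.
            raise SystemExit(f"p95 {p95:.3f}s exceeds budget {P95_BUDGET_SECONDS}s")

Because numbers like these come from a non-production environment, they are best read as trends between builds rather than predictions of live behavior, which connects back to the question above about reporting results when “realism” isn’t present.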