WOPR14 was held in Montreal on April 22-24, 2010, and was hosted by CRIM. James Bach was the Content Owner.
Attendees
Goranka Bjedov, Michael Bolton, Mike Bonar, Jeremy Brown, Marcel Butz, Francine Cloutier, Ross Collard, Siddhartha Devadhar, Dan Downing, Paul Holland, Pam Holt, Alain Lafreniere, Yury Makedonov, Greg McNelly, Gregory Pope, Eric Proegler, Raymond Rivest, Rob Sabourin, Chris Steele, Keith Stobie, Mohit Verma
Theme: Reliability Heuristics
A heuristic is a fallible method of solving a problem or making a decision. Testing, indeed all engineering, relies on heuristics. Since heuristics are fallible, we must examine them to learn the conditions under which they are useful and those under which they may fail. The immediate corollary of heuristic use is that you must develop the skill, knowledge, and judgment to apply them wisely.
One kind of heuristic is a “rule of thumb”: a fast and frugal method that lets us quickly guess a solution to an otherwise difficult problem. Another kind of heuristic is a mnemonic, which helps us remember things under pressure.
Indeed, any tool used by a practitioner that does not guarantee a complete and correct solution (for example, a hammer, a power saw, or a regression test suite) is a heuristic.
We encounter familiar situations in our work every day, repeatedly deploying predetermined solutions without thinking much about them. Our biases (often unrecognized) and our experiences drive our actions. Since these rules seem to be correct, or at least are rarely proven wrong, we retain them as valuable artifacts.
At WOPR13, we discussed performance testing heuristics. WOPR14 will focus on heuristics for reliability testing.
What are the reliability testing heuristics by which you work? How do you develop them? How well do your heuristics work for you? How do you ensure that you are using them properly? Equally important, where have you been surprised? For instance, how do you decide that you have done enough testing? How do you know your coverage and oracles were adequate to justify your analysis of reliability?
At WOPR14, we will discuss these questions. We will review specific first-person experiences, compare findings, and share intuitions.
We hope to learn more about how to conduct effective, credible, and timely reliability testing. We encourage insightful, talented people of varied experience levels and backgrounds to apply. Even if you do not believe you have a relevant experience report, we welcome people who work in the performance and reliability testing disciplines to contribute to the discussion.
Please bring us your first-person experience report that addresses one or more of these questions:
- What heuristics do you use to plan and conduct reliability testing?
- How reliable are your heuristics, and what are the exception cases?
- How do they help predict how systems scale – or fail?
- What happens if we apply them incorrectly?
- What are the characteristics of situations in which you recognized a pattern you had seen before, allowing you to step ahead and plan, diagnose, and/or solve?
- Are there certain characteristic ways in which failure rate and other measurements change as load increases?
- What do you know before you start?
- What rules are you developing right now?
- What rules have you recently abandoned?