At the end of WOPR22, we asked the group for their takeaways. Here is the list we generated, with attribution where we could remember who said it. 1. “We need to think about Performance Testing (in terms) other than just load testing.” – Andy Hohenner 2. “Our first performance test for each build was a […]
WOPR22 Brainstorm: Illusion of Reality
Our last brainstorming exercise examined the ways in which we acknowledge that our performance model is likely to be inaccurate compared to reality. Since much of performance testing centers on “realistic” simulation, and iterative development forces us to test faster with less information, it was instructive to review how we approach simulation fidelity. […]
WOPR22 Brainstorm: Pain Points in Iterative, Rapid, Early Testing
A facilitated discussion began by listing areas where it is difficult to apply performance testing techniques to iterative development. Some discussion followed each point, but they are listed here as a collection of some of the difficulties we face in trying to provide performance feedback early and often. Any attendee (or website visitor!) is […]
WOPR22 Brainstorm: Optimizations
During WOPR22, we spent a few minutes before lunch collecting systems-based (as opposed to code-based) optimizations that the group had used in the past. The exercise was time- and hunger-bound. This is not an exhaustive or ranked list, and it does not include enough information to use these as heuristics. It might be a source of ideas. This […]
WOPR22 Practitioner Tool Survey
During WOPR22, we took an informal survey of the tools that practitioners had used regularly over the last year. It should be remembered that tools are frequently chosen for us by non-practitioners, for reasons other than fitness for purpose. This survey does not meet any standard of statistical significance, and it does not include many well-known tools – just the […]