The Workshop on Performance and Reliability (WOPR) announces its 26th Workshop. WOPR26 will be hosted by Tricentis in Melbourne, Australia, on 5–7 March 2018. The traditional Pre-WOPR Dinner will take place on Sunday, 4 March.

Tim Koopmans of Flood.IO is the Content Owner for WOPR26.

Cognitive Biases in Performance Testing

As human beings, we’re all guilty of cognitive biases. A cognitive bias refers to “a systematic pattern of deviation from norm or rationality in judgment, whereby inferences about other people and situations may be drawn in an illogical fashion.” Cognitive biases are part of the human condition; we can only hope to be aware of our biases, not eliminate them.

Performance testing is subject to a host of decision making, belief, and behavioral biases. We sometimes joke about the tendency for unskilled individuals to overestimate their own ability and the tendency for experts to underestimate their own ability, also known as the Dunning–Kruger effect. These jokes are more ironic when we consider some of the logical leaps we make based on heuristics or hunches to determine what to test, how to test it, and how to interpret results.

People are pattern- and narrative-matchers. Even in the relatively narrow context of our work, we’re susceptible to selecting the story we find interesting, compelling, familiar, and/or confirmatory – and then using data to support the narratives we like.

This workshop will explore the wide range of cognitive biases and their effects on your performance testing experiences. What are the common reasoning mistakes we make, and why do we keep making them? What mistakes do our stakeholders make, and how can we help them avoid those mistakes?

This could be related to the approach you took, the tools you used, the analysis you made, or how that analysis was received and consumed. Part psychological, part technical, you might leave this workshop feeling a little less biased (or at least more aware of your biases), and maybe even have a cathartic release in having shared and reflected on your experiences and mistakes.

Many people who work in IT think they know how to performance test: “Just apply the same load as production in a test environment, and then you know if the system will scale and we can go live”. Consider:

  • How many assumptions and shortcuts do we take in building our load model?
  • How many factors do we ignore in declaring equivalency between production and test environments?
  • How thorough is our analysis going to be?
  • What will we really decide to do – or not do – based on the results?

The fields of software, hardware, network, and systems engineering are rife with common biases. Both these and the biases more closely associated with performance testing and engineering are worthy of reflection.

  1. What about social biases? Ever spent more time and energy discussing metrics that everyone in the room is already familiar with? That could be considered Shared Information bias.
  2. Ever bolstered or defended the status quo, or been on the other side of that? System Justification bias.
  3. How about memory biases? Forgetting information that can otherwise be found online or is recorded somewhere? Google Effect/Digital Amnesia.
  4. How about a postmortem analysis of an event becoming less accurate because of interference from post-event news of how your site crashed, and from pressure from management? Misinformation Effect.
  5. Some performance testers still push back against virtualized and cloud load injectors, well after modern computing has moved on. If you ask why, there is typically an anecdote of resource overcommitment at the virtualization host level, which has hardened into a refusal to trust any data not generated by physical machines (which have their own operating system background processes, resource contention, etc.). Anchoring/Focalism.
  6. Unprofitably spending time debating individual script steps and test data characteristics, instead of researching the load model? Debating a stray CPU spike instead of investigating a flood of errors mid-test? Bikeshedding.
  7. Continuing to run tests that don’t yield useful information, or trying to salvage obsolete test artifacts, because someone thinks a lot of valuable time was spent building them? Sunk Cost.

Please come share your experience with encountering, suffering from, or suffering because of cognitive biases in a performance engineering/testing context. More about Experience Reports here.

Conference Location and Dates

WOPR26 will be hosted by Tricentis in Melbourne, Australia, on 5–7 March 2018. The traditional Pre-WOPR Dinner will take place on Sunday, 4 March.

If you would like to attend WOPR26, please submit your application soon. We will begin sending invitations in late November.

About WOPR

WOPR is a peer workshop where practitioners share experiences in system performance and reliability, network with others interested in these topics, and help build a community of professionals with common interests. WOPR is not vendor-centric, consultant-centric, or end user-centric, but strives to accommodate a mix of viewpoints and experiences. We are looking for people who are interested in system performance, reliability, testing, and quality assurance.

WOPR has been running since 2003, and over the years has included many of the world’s most skillful and well-known performance testers and engineers. To learn more about WOPR, visit our About page, connect on LinkedIn and Facebook, or follow @WOPR_Workshop.


WOPR is not-for-profit. We do ask WOPR participants to help us offset expenses, as employers greatly benefit from the learning their employees can get from WOPR. The expense-sharing amount for WOPR26 is $400 AUD. If you are invited to the workshop, you will be asked to pay the expense-sharing fee to indicate acceptance of your invitation. We are happy to discuss the fee if needed.

Applying for WOPR

WOPR conferences are invitation-only and sometimes over-subscribed. For WOPR26, we plan to limit attendance to about 20 people. We usually have more applications and presentations than can fit into the workshop; not everyone who submits a presentation will be invited to WOPR, and not everyone invited to WOPR will be asked to present.

Our selection criteria are weighted heavily towards practitioners and towards interesting ideas expressed in WOPR applications. We welcome anyone with relevant experiences or interests, and we reserve seats to identify and support promising up-and-comers. Please apply, and see what happens.

The WOPR organizers will select presentations, and invitees will be notified by email according to the timeline above. You can apply for WOPR26 here.

© 2003–2023 WOPR