WOPR6


WOPR6 was held in Mountain View, California, in April 2006, and was hosted by Google. Ross Collard was the Content Owner.

Attendees

Henry Amistadi, Scott Barber, Bill Barnett, Goranka Bjedov, Angela Cavesina, Ross Collard, Dan Gold, Corey Goldberg, Linda Hamm, Julian Harty, Douglas Hoffman, Andy Hohenner, Paul Holland, James Lyndsay, Shelton Mar, Antony Marcano, Neil McCarthy, Michael Pearl, Craig Rapin, Harry Robinson, Robert Sabourin, Roland Stens, Brian Warren, Donna Williamson, Nick Wolf

Theme: Evolving Perceptions of Performance Testing

Perceptions of “things” change.  Whether the change results from education, from experience, or from the “thing” itself changing may or may not be particularly relevant to the holder of the perception.  In fact, the holder of the perception may not even be aware that their perception has changed!

Occasionally, enough individuals experience a similar change in their perception of a “thing” that the “thing” effectively becomes redefined.  When a critical mass of individuals has experienced a similar change and the new perception becomes more “pervasive” than the old one, a paradigm shift is said to have occurred.

As a less esoteric description, consider the cellular telephone.  In the year 2000, the cell phone was a common device.  Many people had one, and most of them used it as, well, a phone.  There were a few people who had really expensive phones that did “neat tricks”, like doubling as a PDA, but most folks really weren’t interested.  The cell phone paradigm was that “a phone is a phone”.  By 2005, it became difficult to find a cell phone that wasn’t also a camera, PDA, MP3 player, web browser, email device or more! The cell phone paradigm had shifted to something more like “a cell phone is the single electronic device that I need to carry”.

With that in mind, WOPR6 hopes to address the question: Is the software industry currently undergoing a paradigm shift in its approach to performance testing systems?

Some examples of ideas, innovations, or trends that may, in retrospect, turn out to be part of such a paradigm shift include:

–       In 2005, Midwest Research claimed that the load generation test tool market was poised for some degree of restructuring starting in early 2006, due to a new breed of tool being introduced by both IBM and Microsoft and to increased pressure from open-source tools.

–       2005 has been a year of significant growth for Software Test and Performance, both the magazine and the conference.

–       SQE has put more focus on performance testing in 2005 than at any time since Alberto Savoia’s articles and presentations in the late 1990s. Interestingly enough, the underlying topics are the same.

–       Seemingly out of the blue, “agile performance testing” is being defined, debated, used and promoted in articles, blogs, marketing materials and conferences.

–       Web-centric performance testers are reaching out to their peers in other markets, such as embedded, real-time, mainframe and telecom, to share lessons as TCP/IP becomes a thread that ties these previously distinct markets together.

Experience Reports

WOPR6 is searching for reports of relevant experiences from past projects or current initiatives that demonstrate or contradict the view that the software industry is experiencing a paradigm shift related to performance testing.

Experience reports of successful or failed attempts to influence an organization’s perception of, or approach to, performance testing through actual performance testing projects are being sought.

New approaches to performance testing that you are currently applying on projects (Innovation Reports) are also being sought, to stimulate discussion, dialogue and brainstorming about how these new approaches may succeed or fail based on the group’s collective experiences and wisdom.  (Innovation experiences may be presented in whiteboard style.)

Anticipated Topics of Interest

Some things you may wish to include in your experience report:

– Have you taken part in projects in which organizational management mandated one or more of the following adjustments? How well did it work? What was the relative net gain or loss of that adjustment?

– Start performance testing earlier in that company’s SDLC.

– Shift the focus of performance testing from being requirements validation centric to being investigation centric.

– Revise performance test planning to follow a more agile, exploratory or reactive approach than it previously had.

– Change the performance testing aspect of the project plan for more collaboration with the developers for test design and/or tuning.

– Change the performance testing aspect of the project plan for more collaboration with the operations system managers, system administrators and technical support staff.

– Increase the realism/accuracy of load models and simulations.

– Have you served as an “agent of change” for an organization’s approach to performance testing?

– What was the “as-was” approach?

– What was the “as-is” approach?

– What is the “to-be” approach?

– What metrics were used to estimate the value of the change?

– What new approaches or methods have you been using on recent projects that you would not have used in years prior?

– What is the real or perceived value of this change?

– Where did you get the idea for the new approach or method?

– What was the response to the new approach or method from the project team and/or management a) before implementing it and b) after implementing it?

– Have you acquired or built new and more powerful testware – automated test frameworks, test tools, test labs and equipment?

– Have you re-skilled or up-skilled your performance test professionals to increase their competency, e.g., better tool use, better use of basic math and probability calculations, better modeling?

Potential Workshop Activities

Throughout the workshop, the facilitator and organizers will periodically convene to consider supplementing experience reports with activities such as the ones listed below.  The intent behind these activities is to facilitate brainstorming, smaller group discussions, and constructive debate, and generally to enhance experience sharing.

– Brainstorms of what performance testing approaches are or are not undergoing a significant transformation.

– Break-out sessions exploring perceptions of the current direction of performance testing.

– Explorations of perceived gaps in current performance testing paradigms.

– Identification of the driving forces behind current performance testing paradigms (and whether they are driving transformation or stagnation).

– Innovations, trends and blind alleys – speculations and discussions about which of today’s hot areas will be the winners and which the also-rans.

Intent of the Organizers

It is the intent of the organizers to bring together individuals with experience and expertise in various aspects of performance testing including, but not limited to:

–       Tool Vendors/Developers/Influencers

–       Authors and Columnists

–       Consulting Practitioners

–       Managers and Executives of teams that conduct performance testing

–       Performance Testers who have been working for the same (non-consulting) organization for several years

–       Academics/Researchers

–       Market Analysts

With the purpose of:

– Determining if there is a noticeable ongoing transformation of performance testing paradigms in the software industry.

– If there is an ongoing transformation, exploring whether it is positive or negative and what the next phase of this paradigm transformation may be.

– If there is not an ongoing transformation in performance testing paradigms, exploring whether there should be one and what would influence such a transformation, assuming it would benefit the software industry as a whole.

– Providing an opportunity for industry influencers to network, share ideas and increase their knowledge through direct interaction with one another.

Background

In terms of maturity, performance testing of web-centric systems has lagged well behind functional testing of these same systems.  In the same way that the software industry’s view of the value of and approach to functional testing of these systems seemed to evolve dramatically in about 2000, the industry’s view on performance testing appears to be transforming now. Below is a very brief synopsis of the relevant, web-centric history of systems performance testing.

–       Pre-1995: Performance testing of single-user embedded, real-time and other types of hardware-based systems, as well as client/server and mainframe multi-user (distributed and single-server) systems, had evolved to a relatively high degree of maturity.  Because users could make use of these systems in only limited and predictable ways, performance testing was straightforward, if both technically and mathematically intensive.

–       Roughly 1995-1998: The multi-user application portion of the software industry, which was becoming more distinct from the embedded and real-time system industry, was growing a new branch that would ultimately look to replace the client/server and mainframe segments.  This branch was dominated by web-based software. During this period, very little of the knowledge gained from performance testing other types of systems seemed to transfer to the performance testing of these new web-based systems, leaving the most common practice around performance testing to be “generate enough random load to guess how many web servers we need, then add one.” This method did work relatively well for purely static text-and-graphics web sites.

–       Roughly 1998-2000:  Web sites commonly evolved beyond purely static text and graphics, and Alberto Savoia published several articles detailing why the “random load to estimate the number of web servers” approach was inadequate.  Those articles were widely viewed as “overkill” at the time. Performance testing of these systems evolved very little, though load generation software became commonly available.

–       Roughly 2000 to 2003: The common practices of performance testing were largely dictated by load generation software vendors who wanted to make performance testing appear to be easy.

–       Roughly 2000 to 2003: Some individuals and isolated teams began observing and implementing the same concepts Savoia had written about several years before, and started looking to other industry segments, such as real-time and mainframe, for ideas about how to improve their performance testing.  Many articles and books were published about very technical performance tuning, and organizations began to commonly perform some sort of abbreviated performance testing at the tail end of many development projects. Additionally, the embedded software, telecom and other industry segments were starting to build TCP/IP-based systems and applications to take advantage of the Internet architecture and/or digital communications.

–       2003: Many of the individuals who were advocating more focus on performance testing and observing the flaws in the industry’s commonplace viewpoint at the time became aware of one another and began collaborating, publishing and influencing load generation tool vendors.

–       2003-2005:  There was an explosion of awareness and a near revolution of the industry’s commonplace view of performance testing, but few organizations seemed to be ready to expend the time or money to change their practices to accommodate their new view.

–       2005-2006:  There are entire magazines and conferences dedicated to performance testing; general testing conferences are soliciting and accepting presentations about performance testing; and there are countless articles and blog entries about “new” approaches to performance testing. “Lessons learned” from the performance testing of client/server, embedded, real-time, mainframe, and telecom systems are being applied to web-based applications.  Load generation software vendors are even conforming to this evolving view and providing tools that better accommodate the resulting types of performance testing.
