Test Leadership Congress is an international conference featuring emerging trends, the latest developments, experience, and practical advice in software quality leadership and management.

The 2020 conference program is designed for comfortable learning and maximized interactive participation from the safety of your home:

Summer Season: July 20th through August 21st
Tutorials: September-October 2020

Learn details and ways to participate: https://testleadershipcongress-ny.com/program/
Tuesday, August 18, 2020 • 8:00pm - 9:00pm EDT
Testopsy: What Really Happens During Testing?


Every software product we use has been tested, in some way. The people who test may or may not be professionally trained. They may or may not have designed their tests based on systematic methods. So, how do testers learn their craft? How do they perform testing?  

Field studies of software projects are difficult to find, partly due to the expense of carrying out such studies and partly due to the confidential or otherwise socially sensitive nature of engineering work. Furthermore, books about software testing are generally not written based on close and systematic research into actual practice, but rather upon the opinions and recollections of apparent experts (prominent examples include Beizer 1995; Copeland 2004; Hetzel 1993; Kaner 1999; 2002).

However, the literature of expertise warns us that experts may not have a clear understanding of their own methods and processes; they may be unreliable witnesses to their own work, despite being fully able to perform that work successfully (Johnson 1983).

I am systematically studying how testers actually test. This includes their reasoning processes, mental schemata, social behavior, and work products. This study calls for the standard methods of ethnomethodological research: trained observation, activity recording, note-taking, spoken protocol analysis, and subsequent coding and analysis. These are well-established tools for studying human expertise (Clancey 2006).

There have been efforts in the testing industry to informally record and study testing. James Bach calls such studies “testopsies” (Bach 2020) and recommends them as regular professional practice for process improvement. In the academic world, some small studies of testing practices have been published. There have also been studies of analogous problem-solving processes, such as exploring the functionality of an electronic toy (Klahr 2002), solving a computerized puzzle (Weinberg 1966), working a photocopy machine (Suchman 1987), or navigating a warship (Hutchins 1995).

Among the elements I propose to study is the development and representation of mental models held by testers. These may include models of the product, project, and the process of testing itself. Mental models are much discussed in the research literature of human-computer interaction as well as the literature of expertise in general (Gentner and Stevens 2014). 

I take as a premise that intentional test design begins with a model that has been constructed on the spot or retrieved from memory. (In the absence of some such model, the tester would not be able to operate, or even recognize, a computer!) This model is then amended and improved as the tester learns about the product through interviews, reading, or exploration of the product itself. Mental models are input, mediator, and outcome of software testing.

Published test design techniques are usually based explicitly on some model of the product, which is then “covered” with test cases via some heuristic. There is also such a thing as “model-based testing,” but it does not refer to testing based on mental models. Instead, it refers to the use of explicitly specified models as a basis for the automatic generation of test data, test procedures, or test oracles (Apfelbaum 1995). My research will focus mainly on mental models; however, one aspect of it will be the examination of the interaction between mental models and explicitly specified models.
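To make that distinction concrete, here is a minimal, hypothetical sketch of model-based testing in the Apfelbaum sense: an explicitly specified state model (a coin-operated turnstile, a common textbook example) from which test sequences and their expected end states are generated automatically. The turnstile model and the `generate_tests` function are illustrative inventions, not taken from any real tool or from the speaker's research.

```python
# Hypothetical model-based testing sketch: an explicit state model of a
# turnstile, used to mechanically enumerate test sequences with expected
# outcomes. No human mental model is involved in generating these tests.

# (current state, event) -> next state
TRANSITIONS = {
    ("locked", "coin"): "unlocked",
    ("locked", "push"): "locked",
    ("unlocked", "push"): "locked",
    ("unlocked", "coin"): "unlocked",
}

def generate_tests(start="locked", depth=2):
    """Enumerate every event sequence of length `depth` from `start`,
    paired with the end state the model predicts (the test oracle)."""
    tests = [([], start)]
    for _ in range(depth):
        expanded = []
        for sequence, state in tests:
            for (src, event), nxt in TRANSITIONS.items():
                if src == state:
                    expanded.append((sequence + [event], nxt))
        tests = expanded
    return tests

for sequence, expected in generate_tests():
    print(sequence, "->", expected)
```

The point of the contrast: everything the generator "knows" is in the explicit transition table, whereas the mental models discussed above are constructed and revised in the tester's head as learning proceeds.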

My hypothesis is that both novice and expert testers rely, in different ways, on extensive tacit knowledge (Collins 2010) to accomplish the mission of testing. Testing requires social competence and therefore cannot be performed by machines (Collins 1998).

Humans perform sensemaking (Weick 1995), modeling, and risk evaluation (Kasperson et al. 1988), all of which are socially situated. I further hypothesize that claims as to the automatability of testing are based on an impoverished folk theory of testing that cannot account for the behavior and success of real testers, nor for the expectations put on testers by stakeholders.

By studying testers in the act of testing, I hope to uncover and document a more robust and respectable account of what testers really do, what competencies good testing requires, and how those competencies arise.

My talk will cover, among other topics:
- What is my research about?
- What is the nature of my research?
- What could strengthen it?
- What progress have I made so far?
- How can every tester contribute?


Rihab Loukil

"I'm a context-driven tester, a Peer Advisor on the RST (Rapid Software Testing) class, James Bach's student, and a Ph.D. student in software testing. Reference: https://www.satisfice.com/ So, if you are a recruiter, here is how I can help you: I am conducting research about what...
