“What makes a good testing challenge?”
This was the question I kept asking myself in the run-up to the 29th of November, when I would run a practical, 50-minute testing challenge for attendees at the second annual WeTest Weekend Workshops conference in Wellington.
I’d originally proposed the idea of a lab-style challenge to organisers Katrina and Aaron as a vehicle for someone else to star in, but when they asked me to take it on instead I thought to myself: “Why not? How hard could it really be?!”
A week out from the conference, though, I was rueing that cavalier attitude, as I found myself still unable to pin down an idea that didn’t seem incredibly contrived. “I could make them test a website… I could make them test an app… give prizes for the best bug, best report…”
All of the things that passed through my head seemed so mundane and predictable, and worst of all, I didn’t see any way that the participants could actually gain from taking part. If I asked them simply to test something, no matter what the scenario and what the judging criteria, they would likely fall back on their normal approach.
Only when I realised this did I start to make progress. I realised that my goal wasn’t actually to run a hands-on testing challenge – I wanted to run a brains-on thinking challenge. It was pointless just asking them to do what they always did; I needed instead to concoct a challenge that forced them to think differently.
This was the breakthrough I needed, and it inspired me to start thinking about my audience. I wanted to create a situation where they couldn’t simply apply their usual processes or heuristics for approaching a problem. To achieve this, I incorporated a series of specific pressures and traps into the challenge.
First, I used the implicit time pressure enforced by the conference schedule – I had a 50-minute slot which, allowing for set-up and wrap-up time, meant about 40 minutes of actual testing time. To build on this, I devised a scenario where the time limit was everything – the fate of the world (literally, within the scenario) depended upon them completing their mission in time.
Second, I heaped on more pressure by setting scope expectations that were wildly unachievable in the time available. While I acknowledged that full testing of the assigned products was impossible, I specified a series of focus areas for the participants. This seemed like a concession to the time available, but it was still too much to do. To stand a chance, they would need to think fast, be resourceful, and prioritise.
Third, I gave them no up-front guidance in terms of what we were looking for in the product they were testing – no indication of what was acceptable or desirable. I literally just gave them the context of the scenario, and the promise that I (playing the role of product owner) would answer any questions they had to the best of my ability. I was, essentially, forcing them to consider context first, rather than falling back on explicitly stated requirements.
Fourth, I made no explicit concession for the reporting of their findings. There was no allowance for reporting in the time available, and this was a bit of a trap. The scenario made it clear I needed information before their time was up, and so instead of falling into the familiar “report at the end” pattern, they needed to figure out a way of communicating information dynamically that worked for me, as the product owner.
Fifth, the main criterion against which the products needed to be judged – both according to the scenario and any advice I provided – was something I knew most of the attendees had little to no experience in: security-style permissions testing. This meant they would be unlikely to be able to test the products practically, and I hoped it would ensure they spent more time thinking about their approaches and being resourceful than banging their heads against the proverbial wall.
To dress all of this up in a way that would immediately grab them and keep them engaged in what would – I hoped – be a high-pressured challenge, I created a relatively elaborate scenario involving superheroes, super villains and the end of the world as we know it to frame (and, to an extent, obscure) the pressures and traps I’d devised.
So, “what makes a good testing challenge?”
Well, based on the positive feedback from those who took part, as well as the observations from Aaron and myself in our roles within the challenge, the approach I took to crafting this workshop appears to be one way to construct a good challenge.
To varying degrees, all of the teams uncovered and attempted to deal with the various pressures that were implicit in the challenge they were set. Some consciously recognised them and set about devising strategies to overcome these pressures. Others somewhat unconsciously factored these restrictions into their approaches and what they chose to do.
Indeed, there were a range of things I found particularly interesting:
- One team began by rigorously questioning and interrogating the bounds of the challenge to better understand their mission. They nearly defeated the challenge within 5 minutes by proving that their product was, by design, insufficient for the stated purpose, and only some quick thinking on my part ensured that there was a way that their system could work and that they could thus continue.
- Another team took a very clearly structured approach, with one senior team member delegating the investigation of various areas to individuals and acting as the direct conduit between the team and myself as the product owner.
- A further team quickly resigned themselves to the fact that they lacked the skills to test the application for the desired purpose, and so instead turned to online research in an attempt to discover the relevant information to inform my decision.
- Another team initially appeared to have not seen the wood for the trees and set about a traditionally phased approach to the testing by identifying tests they would later execute. Only with some pointed questions did they realise that their approach would likely leave them high and dry, and they immediately adapted outside of their evident comfort zones.
- The remaining two teams worked in apparent chaos for much of the allocated time, with little observable planning or structure to their approach, but they reported to me regularly and were consistently providing useful information, so while the structure was invisible to the outside observer, it was most certainly there.
Ultimately, what was most pleasing was that each team took a slightly different approach, and generally worked together to pool their strengths to help them face down the essentially impossible task they had been set. And this meant that not only were they thinking outside of their normal boxes, they were doing so collaboratively and learning from one another too.
This was both a delight and a relief to me: a week on from the day when I’d been blankly pondering how to set a good challenge, I had run one which seemed to have inspired and engaged the attendees, and given them cause to confront different problems and adopt different approaches than those they were used to.
This goes to show that in testing it is very easy to miss the wood for the trees – to focus on the performance of testing itself rather than the way we think about how we will perform it. For me, a good challenge is one that forces the attendees to challenge their thinking, and by considering your audience and establishing a context that asks them some difficult questions, this can be achieved quite elegantly.
Note: I have deliberately refrained from detailing the specifics of the challenge or the approaches taken by the teams in too much detail here. This is because I intend to run this challenge again in various forms in the future. However, I am happy to share the challenge details if you wish to run it, or something like it, at other events. If you’re interested, please get in touch.