Lately I’ve been asked by a number of people either for advice on how to build a Visual Test Model (VTM) using mind-mapping software, or to review such a model that they (or testers on their team) had developed.
This is good news – it means that more and more people are starting to incorporate this skill into their testing toolbox, which can only help to make their testing more effective, encouraging them to question their own understanding of how they approach a testing problem, and open it up to collaborative review from other testers or stakeholders.
However, it’s also bad news because it suggests that there is potentially a dearth of information out there to help people think about how to actually create a VTM and to have confidence in doing so – although there are a few testing courses which cover the subject well.
Don’t get me wrong, there are lots of great blogs about using visual test models – Aaron Hodder talks about incorporating them with Session Based Test Management, Katrina Clokie examines how they can evolve, and Adam Knight and Darren McMillan both talk about various uses for visual models in testing.
But I’ve yet to find many articles online which look at how to approach building a model, and examine the considerations that are often important in how you go about it.
As such, given that I have accumulated a fair amount of experience both creating VTMs and teaching others – both formally and informally – about them, I decided to attempt to organise my own thinking a little, and share some of the key ideas that I often discuss with other testers about the practice.
Probably the most common thing I see when a tester makes the move to develop a VTM having previously operated in a more factory-style, test-case driven environment is that they simply transpose the information from their test cases into the more visual format – often just collecting test “steps” into sequential branches of a mind map.
This might be OK – often the use of a more visual format alone might start prompting testers to think a little further, or at least better enable review and feedback of test ideas – but a mind-map does not a model make, or to put it less awkwardly – just making something look prettier doesn’t make it better.
The key then, is to allow yourself to think big. Loosen the shackles. Include anything and everything that you think might be relevant to the software or your testing. Remember that you’re trying to represent your mental model visually – so if it’s in your head and will influence how you evaluate the product, it’s relevant.
This is often a challenge to testers who have previously had requirements documents held to their heads like weapons. They can become conditioned to only testing those requirements which are explicitly stated, and may not consider – or may write off as invalid or out of scope – other ideas which don’t seem relevant to them.
A useful tool to help thinking beyond the specification is James Bach’s Heuristic Test Strategy Model (HTSM). Considering the ‘Product Elements’ and ‘Project Environment’ factors in particular often prompts me to think about something I would otherwise have overlooked, or makes me question whether that type of test might be important.
Other strategies for developing your model might be to explore the product and see what things you discover that might prove relevant. The touring heuristics proposed by Mike Kelly are another nice way of fleshing out your understanding of a product, as are exploratory techniques like “galumphing” and “creep and leap” as proposed by James Bach.
You might look at these techniques and think “but isn’t that testing?” – and you’d be right. But that’s very much the point of consciously building a model. Deliberately examining your own understanding forces a more active investigation of the product, and actually gets you engaging with it earlier. From the very moment I begin a project I consider myself actively testing, because the process of learning about the product and modelling it is itself testing.
Ultimately, you can include an almost endless amount of information in your model – choosing what not to include can be just as difficult as deciding what is relevant. I usually advocate “the more the merrier” because you can never be 100% sure that some piece of information is not relevant to your testing, but the exclusions will often be determined by the purpose of your test project.
For instance, if you are testing an existing off-the-shelf product to determine its suitability for use in a specific hardware setup or environment, there would likely be less emphasis on the finer details of its functionality than if you were testing an in-development product of the same type. In each case, your model might reflect this focus, with the former likely emphasising platform and compatibility elements, and the latter potentially focusing more on functions and operations.
As we’ve touched on purpose, now seems to be a good time to think about how you might structure your VTM.
The first, and most obvious, question which structure poses is around what format you should use for your model. In my experience, most testers in most situations seem to enjoy working with mind maps, and have found the format flexible enough that it can usually suit their needs.
However, you needn’t always use a mind map to create your VTM – in some situations, you may find that another format is more applicable.
For example, last year I ran a WeTest Weekend Workshops session on Session Based Test Management, and participants were challenged to visually model a Flash-based “horse lunge” game. Most participants chose to create a mind map, and they generally made it work. Later though, some colleagues and I used the same game to teach graduate testers to build state-transition models, and this proved to be a far more effective visual format for modelling this game.
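For readers unfamiliar with the format, a state-transition model is simply a set of states plus the events that move you between them – a structure that suits games and workflows far better than a radial mind map does. As a rough sketch only (the states and events below are invented for illustration, not taken from the actual game), such a model might look like this in code:

```python
# A minimal state-transition model, represented as a dictionary:
# each state maps an event to the state that event leads to.
# States and events here are hypothetical, for illustration only.
transitions = {
    "idle":      {"start": "walking"},
    "walking":   {"speed_up": "trotting", "stop": "idle"},
    "trotting":  {"speed_up": "cantering", "slow_down": "walking"},
    "cantering": {"slow_down": "trotting", "stop": "idle"},
}

def run(events, state="idle"):
    """Follow a sequence of events through the model, failing on any
    event that is not valid in the current state."""
    for event in events:
        if event not in transitions[state]:
            raise ValueError(f"'{event}' is not valid in state '{state}'")
        state = transitions[state][event]
    return state

# Each distinct path through the model is a candidate test case:
print(run(["start", "speed_up", "speed_up", "slow_down"]))  # trotting
```

Enumerating the transitions like this also makes gaps visible – if “cantering” has no “speed_up” entry, is that a missing requirement or a deliberate limit? Those are exactly the kinds of questions a good visual model should surface.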
As such, I’d encourage anyone developing a visual model to consider any format that might best communicate the information you’re trying to get across, or best suit the particular features of the product or testing.
However, given the proliferation of mind map based VTMs, it’s worth spending some time thinking about how mind maps might be structured for this purpose. Again, this is something which comes down to the purpose of the model, or the information objectives driving it.
In my own work, I usually prefer to structure my VTMs around the functions or features of the product (or part of the product) that I’m testing. This is in part because I am most often engaged to functionally test the product, but mainly because I find that this sort of structure is the clearest way to represent the product (and, against it, my testing) to the business stakeholders.
[Image: an example of a function-based VTM. Note that this was created for demonstrative use only and is not necessarily accurate.]
Modelling the product around functions or use-cases that business users are familiar with gives them an immediate sense of orientation when reviewing my model, especially as I’m careful to use the business’ language too. This gives them something they can relate to, and means that they can more easily understand the way in which I’ve planned my testing. It takes something that’s usually foreign to them (testing) and familiarises it by structuring it around something they know (their business/product).
However, this needn’t always be the approach you take. You may decide instead to structure your model around different types of testing or test ideas. One approach I have used previously is to actually take the ‘Product Elements’ section of the HTSM referenced previously and use each factor as a node to build ideas on. This is especially useful in building a broad view of a product or system.
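To make that approach a little more concrete, here’s a rough sketch (in Python, purely as an illustrative skeleton) of a model built on the HTSM’s ‘Product Elements’ factors, with each factor as a top-level node. The factor names are as I recall them from the HTSM, and the child ideas are invented placeholders – you’d replace them with product-specific ideas:

```python
# A mind-map skeleton using the HTSM 'Product Elements' factors as
# top-level nodes. The child ideas are invented placeholders.
vtm = {
    "Structure":  ["code modules", "config files", "installed components"],
    "Function":   ["core features", "error handling", "calculations"],
    "Data":       ["inputs/outputs", "data lifecycle", "boundary values"],
    "Interfaces": ["UI", "APIs", "import/export"],
    "Platform":   ["OS versions", "browsers", "external dependencies"],
    "Operations": ["common use cases", "extreme use", "unusual sequences"],
    "Time":       ["concurrency", "timeouts", "date handling"],
}

def to_outline(tree, indent=0):
    """Render the model as a plain-text outline, one node per line -
    roughly what you'd paste into a mind-mapping tool's outline view."""
    lines = []
    for node, children in tree.items():
        lines.append("  " * indent + node)
        lines.extend("  " * (indent + 1) + child for child in children)
    return "\n".join(lines)

print(to_outline(vtm))
```

The appeal of this structure is breadth: every factor gets at least a cursory pass, which is exactly what you want when building an initial wide-angle view of a system.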
Again, I should stress here that a VTM is ultimately a representation of how you see or understand the system you’re working with and how you might test it. As such, the way you structure it will likely depend on how you orientate your thinking about the project, and so there is no right or wrong way to structure a model.
Regardless of how you structure your model, and what information you do or don’t include in it, the presentation of it is an absolutely critical factor. It is also probably my second most common complaint when I review models created by testers. As I’ve just re-iterated and will say again – I really can’t stress this enough – a VTM represents your thinking about the product and about your testing.
As such, it is really important that you think about what the model says about your understanding. Recently I attended a meeting where a tester was walking other testers and SMEs through their VTM for review. The model itself actually contained a lot of really good information, and I could tell that the tester had a deep understanding of the functionality.
Unfortunately that was not the impression which the VTM conveyed. The tester actually struggled at times to find the next element of the model that they wished to talk about, there were lines criss-crossing everywhere, and we spent a large portion of the meeting watching the mind map scroll back and forth past our eyes. It was less a mind map than it was a brain dump.
Had I not been paying careful attention (I was there in part to review the model itself) I might have missed the understanding which lay beneath the scattered exterior of the model and gotten the impression that the tester actually knew nothing. In fact, if I’d not known differently, I might have suspected that the model had been made deliberately obtuse, in an attempt to convey a high level of complexity!
This highlights the importance of taking care in how we present our VTMs. They do represent your understanding of the product and your testing, and so unless you want that questioned, I recommend you take time to ensure that it sends the right message.
Sometimes that message might be “I know nothing” or “I don’t know what this does” or “I have questions!” – I often include big question marks on a VTM when I’m having trouble getting important information – but even then, you need to present that image in the right way. Not knowing things is fine, as long as you’re aware you don’t know them, and don’t convey a more general impression of ignorance or even apathy.
The prior example also highlights that you must understand your model. Had the tester spent some time beforehand refining and understanding their model and – importantly – talking to it, then I’m confident the model would have ended up in a better state. It’s important to remember that these VTMs are, at the end of the day, communication tools.
The very point of them is that they’re visual, open, accessible and on display. You want people to see them, because that’s where they can provide value. A crucial part of that will be your ability to talk to them – be it an informal chat when you’ve stuck one up on the wall by your desk, or at a formal walkthrough. You need to be able to talk the talk.
Another point worth mentioning is that, especially when working as part of a team of testers, it might well be worthwhile establishing some design and formatting conventions for your VTMs. If you use mind mapping software, use the same software (I can’t recommend Xmind enough). Try to use the same kind of icons or numbering for the same purpose. Ideally agree on a basic colour scheme and design.
There is a time and a place for innovation, and if there is a reason to be different go with it. But generally, when working within a project, setting an expectation as to how your models can be interpreted in a consistent way is important. It will make it easier for stakeholders to engage with them, and that is ultimately what you want – because that’s where a visual model starts to pay off, when it creates a feedback or communication loop.
Something else I’ve seen is that there is sometimes a tendency to treat a VTM as a “deliverable” – in much the same way as ‘traditional’ documentation in a factory style approach to testing. In other words, the VTM can sometimes be treated as a product rather than a tool – once the initial model is created, it’s pushed aside and forgotten, a box ticked.
To do that is to waste the time invested in developing the model in the first place. It is a tool for communicating your thinking, but also a tool for developing your thinking, guiding your testing, and reporting on its progress. In testing, it is the cognitive activity that is the real product – the thing which has value to a project. The value of a VTM is in how it can clarify and communicate that thinking.
In order to get full value from a VTM then, we must remember that it is evolutionary. It may start out as an indication of our planned testing, but as we start to actively investigate the product, it should be updated with the knowledge and information we discover. We should use this to revise and re-prioritise our ideas and plans.
Katrina Clokie recently published a nice blog post about this very phenomenon, demonstrating how a VTM can and probably should be revised over time to reflect the continuous learning that should occur in a well-run testing project. Her example was taken from a fairly simple training activity, and yet the model was almost completely re-worked – imagine how much change you might expect in a complex and challenging real life project.
I believe that Visual Test Models are an incredibly valuable tool for the thinking tester. In opening up the mind of the tester to a project, and encouraging the tester and other stakeholders to challenge and expand the horizons of their understanding of the product and their testing, they can really enable good, thoughtful testing practices.
This article has been an attempt to collect my thoughts on some of the considerations that might be important or useful when approaching the creation of a VTM. It’s a practice which I’m really invested in, and I hope that this may help others to adopt and evolve the use of VTMs in their testing projects.
What I’m especially interested in though, is to see and hear ideas from other people who have experience in using VTMs or who are inspired to try the approach. I’m sure there are other things that I haven’t covered here which should factor into the creation of such a model, and hopefully as a professional group we can continue to develop innovative new ways of modelling and visually representing our testing and test coverage.