Requirement Review, Impact Analysis and Test Planning with Mind Maps
Professionals working in quality or test roles need some form of planning to discover and describe how we think testing will be done and what it will look like. If you are like me, you have seen varied and sundry tools and planning models put forward as “the solution” that fixes the shortcomings of the others. Each of these will, in turn, be replaced by some other technique that will “fix” the problem.
The fundamental question these models all try to address is “How will you plan testing so we can show what has been done, what remains to be done and how long it will take?”
The problem in answering that question is that it is usually asked at a time when the major points within a project are not clearly understood. When that happens “I don’t know yet” may be the most correct answer, but few Project Managers will be satisfied with that.
I have been using a tool I find helpful for both requirements analysis and test planning. I find that requirements analysis needs to be done so the test team and I can understand the intent. This gives us a foundation for meaningful test planning and testing to occur.
Mind Maps for Requirement Analysis
Each organization will describe requirements a little differently.
There may be “business requirements” which describe some need that is not being met by the people using the software or the results of the software. There may be “solution requirements” which may be a bit more granular in that they are associated with a given business requirement and describe one aspect of how that requirement can be fulfilled. Then there may be “solution detail requirements” which may be extremely granular and get into the technical aspects of addressing the need.
Unfortunately, people documenting requirements are often imprecise in their language. Even when they believe they are being extremely precise, there are some pieces that will be missed.
I start with mind maps to see if any of these gaps can be identified. I pull the requirements from their repository to visually represent them in some logical manner. Usually these are by function or component within the software. Sometimes, if it is for a project on a well-known existing system, I map only the change. Other times, if it is a larger project or a significant change to an existing project, I will map the current system with the changes highlighted.
I began doing this several years ago when I encountered documented requirements I did not understand. Individually they made sense, but when I tried to consider them as a group I found I could not make sense of them. By mapping them visually, by function, the reason became clear. I noticed there were several requirements that, taken as written, contradicted each other when viewed together. By asking for clarification, I was able to get the lead developer and designer to work with me to understand what the true scope of the project was.
By representing the requirements visually, by specific function area, I was able to identify potential conflicts and gaps I was not noticing when the requirements were in a tabular form.
When a single requirement has a relationship with more than one functional area, I am able to see that and identify that with dotted lines between them. The more dotted lines I end up with, the greater the likelihood some of the requirements need to be reconsidered or restated. This is another form of ambiguity resolution I was not able to effectively contribute to before I began applying mind maps to the question of requirements.
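The idea of flagging requirements that span functional areas can be sketched programmatically. Below is a minimal Python sketch, with invented requirement IDs and area names for illustration: each requirement is tagged with the functional areas it touches, and any requirement touching more than one area, the equivalent of a dotted line on the map, is surfaced for review.

```python
# Map each requirement (hypothetical IDs) to the functional areas it
# touches; more than one area is a "dotted line" on the mind map.
requirements = {
    "REQ-101": {"registration"},
    "REQ-102": {"registration", "email"},
    "REQ-103": {"ordering"},
    "REQ-104": {"ordering", "email", "registration"},
}

def cross_functional(reqs):
    """Return requirements spanning more than one functional area,
    widest-reaching first (most likely to need restating)."""
    flagged = {rid: areas for rid, areas in reqs.items() if len(areas) > 1}
    return sorted(flagged.items(), key=lambda kv: -len(kv[1]))

for rid, areas in cross_functional(requirements):
    print(f"{rid} spans {len(areas)} areas: {', '.join(sorted(areas))}")
```

The more entries this report produces, the stronger the signal that some requirements need to be reconsidered or restated.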
By updating the requirements mind map as more information is available, I am able to assist the design and development teams in understanding what the product owner and other stakeholders believe they are requesting. I can also help identify potential trouble-spots beyond the requirements.
Mind Maps for Impact Analysis
In some instances when the requirements appear to be understood, they are masking potential problems within the software. If, for example, a requirement can be expressed simply and any development work to make that change is minimal, one question remains to be answered: “What is the impact of this change?”
This is critical, and often overlooked in many models for estimation. If testing is to be some percentage of development effort, for example 30 percent, and development work takes one hour, can testing really be finished in 18 minutes? What part of the system is being touched? How does that impact other areas?
What about logical flows within the application? Do we know what passes through them? The importance of these questions became clear to me in ways I had not appreciated a few years ago.
The system being modified allowed customers to register on a retail website and order products.
Language options were available so customers could receive emails and other communication in their preferred language. The change was simple: allow customers to select from a new group of languages so the demand for those communication templates could be measured. The templates would be created, but until the new languages were ready customers would get information in English, along with a message saying their language preference was not completely ready and an indication of when it would be.
This seemed straightforward. A slow roll-out meant that the marketing staff had time to finish their translation work and make sure it was correct, then roll each piece out as it was ready.
Development time, including required documentation, was 90 minutes. When I examined what was being changed and the areas that were potentially being impacted, I got nervous. I looked for documentation on that logical flow and found none. When I asked others about it, the response was there was none because “it was so well understood.”
When I began looking at the logical flows that touched this seemingly simple change, I ended up with over 300 scenarios that potentially needed exercising.
By using a mind map as a decision tree, I was able to track impact areas for this change. I also was able to find logical paths to focus on and eliminate others that were effectively within the same variable set as other paths.
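The explosion from a "simple" change to 300-plus scenarios comes from combining decision variables, and the pruning step keeps only one path per effectively distinct variable set. A rough Python sketch of that idea, using invented variables for the language-preference example:

```python
from itertools import product

# Invented decision variables for the language-preference change.
variables = {
    "customer_type": ["new", "existing"],
    "language": ["english", "new_language"],
    "template_ready": [True, False],
    "channel": ["email", "sms"],
}

# Every combination of variable values is a candidate scenario.
all_paths = [dict(zip(variables, combo)) for combo in product(*variables.values())]

def prune(paths):
    """Collapse paths that share an effective variable set: for an
    English-language customer, template readiness changes nothing."""
    seen, kept = set(), []
    for p in paths:
        key = tuple(sorted(
            (k, v) for k, v in p.items()
            if not (k == "template_ready" and p["language"] == "english")
        ))
        if key not in seen:
            seen.add(key)
            kept.append(p)
    return kept

print(len(all_paths), "candidate paths,", len(prune(all_paths)), "after pruning")
```

Here 16 candidates reduce to 12; on the real project the same collapsing of equivalent paths is what made 300-plus scenarios tractable.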
From that project on, I looked to mind maps to help visually model logical flows, particularly when people believe them to be well understood. This project changed my thinking in many ways and forced me to reconsider much of what I believed about requirements and impact to systems.
Looking at “what is touched by the change”, as opposed to “what does the change touch”, made the full scope of the project clear. I now try to apply the same concept in each project I work on, where it is appropriate. By taking this approach, the test team found 14 critical bugs in 38 hours of testing these logical paths.
Mind Maps for Test Planning and Reporting
It is these logical paths that proved interesting at the time, and they are why I now use this approach for test planning. By identifying what possible relationships exist and what paths are available to get to them, I can model behavior without exercising the application itself. This gives me the chance to talk with product owners, subject matter experts and designers to see if my understanding is correct and to get their input on the likelihood of these paths being followed.
Once I have possible paths identified, I can then map what functions are available for testing right then.
When working in iterations, in an Agile model, I can apply this idea to what is being worked on in that sprint, what was delivered in previous sprints and relationships between them.
I can also see if I missed any possible paths in testing by comparing the test reports with the map showing possible paths. With each iteration or delivery cycle we can add pieces to the mind map as they become available.
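Comparing the map of possible paths against the test reports reduces to a set difference. A minimal sketch, with made-up path names standing in for branches of the mind map:

```python
# Paths identified on the mind map (hypothetical names).
possible_paths = {
    "register/new-language/email",
    "register/english/email",
    "order/new-language/confirmation",
    "order/english/confirmation",
}

# Paths the session reports show as exercised so far.
tested_paths = {
    "register/new-language/email",
    "order/english/confirmation",
}

# Anything on the map but not in the reports is a coverage gap.
untested = sorted(possible_paths - tested_paths)
for path in untested:
    print("still to test:", path)
```

Rerunning this comparison each iteration, as new pieces are added to the map, keeps the "what is left to be tested" answer current.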
By examining what areas of the application can be tested and what possible paths exist that may touch these areas, the nature of testing expands with each code delivery. Sprint planning meetings, or the project plan depending on the project environment, give me an idea when certain functions will be made available. From this I can see what is expected to be impacted based on what logical paths, which were identified earlier, may open up in the current iteration.
Within the mind map I can add the location in the project SharePoint or wiki where the test reports are saved. For me, these are not simply tagged lists of steps to be done, expected results and actual results. Instead they are more in the line of Session Reports from SBTM. The purpose of why the testing is being done is described, who is doing the testing, what environment they are working in and information around what test data they are using is recorded.
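One way to keep such session reports consistent is a simple record structure. A sketch in Python, where the field names are my own, loosely following SBTM session-report conventions rather than any prescribed format:

```python
from dataclasses import dataclass, field

@dataclass
class SessionReport:
    """Loosely modeled on an SBTM session report."""
    charter: str        # purpose: why this testing is being done
    tester: str         # who is doing the testing
    environment: str    # what environment they are working in
    test_data: str      # what test data they are using
    notes: list = field(default_factory=list)  # narrative of what was done and seen
    bugs: list = field(default_factory=list)   # problems found along the way

report = SessionReport(
    charter="Explore language-preference selection for unreleased languages",
    tester="P. Tester",
    environment="staging",
    test_data="customer accounts with mixed language settings",
)
report.notes.append("Selected an unreleased language; English fallback message shown.")
print(report.charter, "-", report.tester)
```

The point is not the code but the shape: purpose, person, environment and data are always captured, and the narrative grows as the session proceeds.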
Other things are recorded as well, including a narrative of what the testers did and what they observed.
By keeping the mind map openly available, anyone with an interest in the project can look and see at any time what testing has been done, what is currently in progress and what is left to be tested. By linking the test reports into the mind map, I can give anyone viewing it full access to what the testers did and saw. (My current client’s tool does not allow embedding hyperlinks in mind maps. I give the full, explicit link so people can copy and paste it directly into their browser and get the test report they are interested in.)
This can help Subject Matter Experts weigh in and ask questions about the testing and note potential flaws in a report, either what was done, how it was done or what the results were.
The Underlying Purpose of this Model
This reflects on the question of "How does the software behave?"
Usually, I'm not testing to validate requirements. I find that documented requirements are one part of the puzzle to suitability for purpose. By comparing documented requirements with the actual behavior of the software, I can report the behavior more fully and relate what I and the test team are observing with expectations for the software, one part of which consists of documented requirements.
In the end, I have a full visual representation of what has been exercised, how it has been exercised and how thoroughly. This gives me something I can present to stakeholders to describe what we tested, how it was tested, and what our findings were. It gives me the chance to describe any variations encountered and ask them if they see these as a problem. This sets up the question of “Are you comfortable with this or would you be more comfortable with more testing?”
Rather than talking about tests and test cases and what has been run and not run, which I've found to be of little real value to most people, I talk about the business processes we have exercised. I talk about the depth to which we exercised them.
When a reference is needed, I find mind maps give me a useful tool to present abstract ideas clearly to people with an interest in the software.
In the many years since I worked on this project, I still find the examples hold up. I have been able to adapt some, and sometimes all, of these techniques to meet the needs of the project and team I am working with. While no single approach works everywhere for every project, I find this to be a versatile tool that can be readily adapted as needed.
I remember sitting watching "The King's Speech" one evening some time ago. Did you ever see it? It's an interesting study. Most people watching it saw it as the story of one man, a Prince who would, eventually and reluctantly, be crowned King. He did not want to be King, and as he had an elder brother, the heir, it seemed unlikely he would be. Of course, the brother had some inappropriate "relationships." The result of those eventually led to the elder brother, by then King Edward VIII, abdicating the throne in 1936.
Most people who see the film take away a story of a triumph of will on the part of the man who became King George VI. With the assistance of the speech therapist, of course.
I noticed one distinct thing early on. The Prince, Albert, not yet George, and his wife Elizabeth pay a visit to the speech therapist (Lionel Logue). After a brief, unsuccessful visit with Albert alone, this second visit consisted of Albert and Elizabeth telling Logue how they wanted him to do his job.
It was interesting, because Logue had been asking questions that made Bertie/Albert (not yet George) uncomfortable, questions whose purpose Bertie/Albert did not understand. Logue explained that the answers might hold information he needed to help Bertie/Albert. But Bertie and Elizabeth would have nothing to do with such flim-flam and silliness.
They wanted him to fix the physical problem of his stammer.
How many times does someone come in and demand the team do something that will not serve the needs of the project, team or company? When we, as software professionals, push back, they, the project manager or business analyst or manager or maybe a would-be scrum master, tell us to do what we are told. That is our job. So just do it.
Being the compliant, obedient people we are, we just give in and do it their way. Right? Testers should focus on finding bugs. Unless we should focus on making sure "the software works." Maybe we should focus on ensuring confidence in the functionality. Perhaps we should focus on all of these things.
Nonsense. We might do those things on some projects, based on the needs of the project when they are the right thing to do. Here, by "right" I mean within a reasonable professional code of ethics. Of course, it might boil down to "keeping your job" but that has never really held much sway for me. At least not in the last 15 or 20 years or so.
How do you approach or respond to someone who is telling you what you should be doing? What about when they have no real expertise or experience within their argument, other than "I'm your customer and this is what I want"?
I am reminded of the philosopher-poets who wrote:
You can't always get what you want
But if you try sometimes, well, you just might find
You get what you need
What is Wanted vs What is Needed
Many people confuse wants and needs. There really is a difference, no matter what people trying to sell you something might say. Sorting out what is needed from what is wanted can be really hard.
There is the noise, buzz, and clamour of managers, project managers, someone else saying they "need it NOW!" Then there is the voice in the back of the head that says, "Something doesn't quite feel right. Something is out of sorts."
How do we sort out what is really needed? The easy way is to tell them "that won't work." I'm not sure that works either. The not-quite-as-easy way is to say "Well, I'm not sure that will work, and here are my concerns..."
What is often wanted from us, as testers, is for us to say "OK! I'll do precisely that!" Which will make the requester/demander go away happy. Odds are they'll be back, not so happy, but that won't be for a while, so that is just fine for now. We can figure out something to tell them later. Not today.
The option that Logue used in the film (and in real life) was simple. He said, "OK, we'll do it that way." He then proceeded to do "physical exercises" and "training" to deal with the stammer. He did this knowing that the chances of it working were incalculably small.
In the process of working through these exercises, they conversed. They talked. At one point it became clear that Bertie was actually left-handed but had been trained to act right-handed. This led Logue to comment that this was not uncommon. There were questions posed as "interesting ideas" that Bertie answered, simply because he was relaxed. His guard was down and he was more open.
In the end Bertie came to rely on Logue. He even offered an apology, in a very Royal Family sort of way, for his previous “bad behavior.” In the end Logue did what was needed. It started by simply being willing to help.
Importantly, he took pains not to dismiss the words used to express the wishes and desires of Bertie and Elizabeth. He made sure they knew he wanted to help. He focused on being willing to "do what they wanted" until it became clear something else was needed.
We can push back, gently. We can offer help. We can set the conditions. We must also know what to push back against. We must know why.
I'm not sure I can do what Logue did, at least on a regular basis. I've tried, with various levels of success. Some folks were OK with that. Other folks wanted something like "that and only that." They wanted me to precisely do that one thing, exactly what they said. I have a hard time with that, particularly when they can't answer basic questions around the intent of the software.
Granted, working as a consultant or contractor, you may have a bit of leeway that an "employee" may not have on the surface. Know how you can contribute, then do so.
You may not get a CVO (Commander of the Victorian Order, an "Honour" given for personal service to the Monarch of Britain) out of it, but I expect you'll be able to sleep at night.
I remember reading an article on how Quality Engineering was well beyond and more complex than software testing. In the course of reading that article, it struck me that the author had a totally different understanding of those two terms.
I wanted to examine them and explore some ideas.
Software Quality Engineering, simply put, is applying the practices of Quality Engineering, as a discipline, to the development and creation of software. It is part of a defined quality program focused on how software is made. It focuses on process, measurement, and compliance with the goals established by the development team for the project.
Unfortunately, like most things in software, the formal, intended meanings of concepts have been forgotten or ignored. People will hear a term that sounds familiar and apply their understanding to it. If you use your favorite search engine and look up “Software Quality Engineering” most of the responses you get will be around software testing. For most people in software, a “Quality Engineer” does software testing.
Let us look at Software Testing then. Again, a significant problem is people hear a term and apply their understanding of similar terms to the new one. The result is confusion.
Common View of Testing
A view which some consider to be the "real" or "correct" view, is that testing validates behavior. Tests "pass" or "fail" based on expectations and the point of testing is to confirm those expectations.
Introducing the concept of “Quality” alongside this conception of testing brings in other problems. The question of “Quality” is often tied to a “voice of authority.” For some people that “authority” is the near-legendary Jerry Weinberg: “Quality is value to some person.” For others the “authority” is Joseph Juran: “fitness for use.”
How do we know about the software we are working on? What is it that gives us the touch points to be able to measure this?
There are the classic measures used by advocates of testing as validation or pass/fail: counts of test cases planned, executed, passed, failed and blocked.
These may shed some light on testing or on the perceived progress of testing for some organizations. Many organizations will assert with great confidence that these measures tell them all they need to know about their testing.
However, they speak nothing about the software itself or the quality of the software being tested. Nor do they tell us anything meaningful about the testing that is being done.
One response, a common one, is that the question of the “quality of the software” is not a concern of “testing,” that it is a concern for “quality engineering.” Thus, testing is independent of the concerns of overall quality.
Good testing, like everything involved in creating good software, takes disciplined, thoughtful work. Following the precise steps dictated is not testing. It is following a cookbook recipe or a script. Testing takes consideration beyond the simple, straightforward path.
When people ask me what testing is, my working definition is:
Software testing is a systematic evaluation
of the behavior of a piece of software,
based on some model.
By using models that are relevant to the project, epic or story, we can select appropriate methods and techniques in place of relying on organizational comfort-zones. If one model we use is “conformance to documented requirements” we exercise the software one way. If we are interested in aspects of performance or load capacity, we’ll exercise the software in another way.
There is no rule limiting a tester to using a single model. Most software projects will need multiple models to be considered in testing. There are some concepts that are important in making this work.
When it comes to “documented requirements,” they serve as information points for us. Many times, they make reasonable starting points for good, meaningful testing.
To perform good testing we need good communication. Real communication is not documents which are emailed back and forth. Communication itself is shared and is bi-directional. It is not a lecture or a monologue. Communication requires conversation. This helps make sure all parties are in alignment.
Good testing looks at the reason behind the project. We need to be able to identify and understand what the intended change will do. We need to be able to show how the business problem is addressed by the change being implemented.
Good testing looks to understand the impact of what is being done to the broader application and software system. It is not enough to identify the intended impact areas. We must be able to illuminate areas within the system which may be impacted, whether intended or not. We must also identify people who may find their work processes changed by the change in software.
Good testing looks at these reasons and purposes for the changes and compares them to the team and company purpose and values. Are they in alignment with the mission, purpose and core values of the organization? Good testing includes a willingness to report variances in these fundamental considerations beyond requirements and code.
Good testing can exercise the design before a single line of code is written. Good testing can help search out implied or undocumented requirements to catch variances before design is finalized.
Good testing can help product owners, designers and developers in demonstrating the impact of changes on people who will be working with the software. Good testing can help build consensus within the team as to the very behavior of the software.
Good testing can navigate between function level testing to broader aspects of testing, by following multiple roles within the application and evaluating what people using or impacted by the change will experience.
Good testing can help bring the voice of the customer, internal and external, to the conversation when nothing or no one else does. Good testing challenges assurances. It investigates possibilities and asks questions about what is discovered.
Good testing challenges assumptions and presumptions. It looks for ways in which those assumptions and presumptions are not valid or are not appropriate in the project being worked on.
Good testing serves the stakeholders of the project by being in service to them.
How Does Testing Serve Stakeholders?
Testing provides information.
Applying test approaches to design can help reveal weak points in the design, before any code is written. Doing the same thing with documented requirements can head off problems in design before design begins.
Then there is the reason the work is being done in the first place. Applying the same mindset or approach to evaluate the business problem being addressed, before requirements are considered, can help clarify thinking around not only the reason driving the work, but also define some of the expected outcomes. By doing this from the very inception, before the project is actually a project, many of the problems which come up later can be reduced if not avoided altogether.
In that sense, the idea of “quality engineering” takes on a third definition. That is, software quality engineering is building quality software products from the very inception. To make that happen takes very similar skills to what is required for good software testing.