By Moss Drake, PNSQC Board Member
In software, there are numerous ways of testing systems, and the assumption is that we all know exactly what others mean when they say they are going to “test” something. This understanding of the landscape is part of what we know as software quality professionals. But, as George Bernard Shaw said, “The single biggest problem in communication is the illusion that it has taken place.” Recently, I attended an online workshop on Agile testing strategies at XP2020 that can help you break through this illusion.
“Context is what shapes meaning”
At XP2020, I was fortunate enough to attend a workshop called “Puzzling out a Test Strategy” led by Wouter Lagerweij. He started by explaining that this exercise would help answer these questions:
- Do we really have to spend all this time testing? (why)
- What do we need to test? (what)
- When do we test? (when)
- Where should we focus testing? (where)
We started with three views of the testing landscape for a project: the System map showed how the components of the system fit together, the Process map showed how the system would be developed, and the Pipeline map gave an overview of release management. Additionally, each team had a set of “stickies” to place on the maps to indicate where each type of testing should take place. The stickies represented test strategies such as integration testing, contract testing, unit testing, and others (see the Testing Goals sidebar).
Testing Goals
- Integration Testing: Test the integration between your code and the surrounding environment (file, data, config, framework).
- UI End-to-End Testing: Verify that all the components of the system work well together and form a usable whole.
- Unit Testing: Unit tests prove to the developer that the code does what they thought it does. Also known as micro-tests.
- BDD/Functional Testing: Functional tests, also known as BDD tests, ATDD tests, and Specification by Example, are meant to demonstrate that the implemented system does what was asked by the customer or stakeholder.
- Manual Regression Testing: Manually verify that no existing functionality was negatively impacted when implementing new functionality.
- Exploratory Testing: Discover issues not predicted before implementation.
- Smoke Testing: Verify that a component is deployed correctly, functioning, and able to reach all its dependencies.
- Contract Testing: Test the contract in use between consumers (“clients”) and providers (“services”) to ensure the consumer’s expectations of the provider are met in current and future versions of both.
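To make the first few of these definitions concrete, here is a minimal sketch in Python with pytest (an assumed toolchain, not one prescribed by the workshop); `parse_order` and the temporary `orders.json` file are hypothetical illustrations, not from the exercise:

```python
import json

def parse_order(raw: dict) -> tuple[str, int]:
    """Hypothetical code under test: extract the fields we care about."""
    return raw["sku"], int(raw["quantity"])

# Unit test: proves to the developer that the code does what they
# thought it does, with no file, network, or framework involved.
def test_parse_order_extracts_sku_and_quantity():
    assert parse_order({"sku": "A-1", "quantity": "3"}) == ("A-1", 3)

# Integration test: exercises the seam between our code and the
# surrounding environment: in this case, a real file on disk.
def test_parse_order_reads_from_file(tmp_path):
    order_file = tmp_path / "orders.json"
    order_file.write_text(json.dumps({"sku": "A-1", "quantity": "3"}))
    assert parse_order(json.loads(order_file.read_text())) == ("A-1", 3)
```

The point is not the code itself but the boundary: the moment a test touches something outside the unit (a file, here), teams may start disagreeing about which sticky it belongs under.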
Breaking into small groups, we discussed when and how to apply the various types of testing for each of the views. This simulated a team planning a test strategy for a project. Our group started with integration tests and end-to-end tests.
From the beginning of the conversation, I noticed we didn’t necessarily agree on the definition of “integration tests.” Even though the high-level definitions were right in front of us, we each had slightly different opinions about how to apply integration testing to the project. Not everyone had the same level of technical expertise, nor even the same understanding of the project. Sound familiar?
As we explored the strategy for the Process view, we had to drill down into how we envisioned the actual testing. Questions arose: Will integration tests be written before development? During development? When should the tests run? During development and integration? Again during the release to beta? Every time we release to production?
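One way a team could encode its answers to those scheduling questions, assuming a Python codebase with pytest (my assumption, not something the workshop prescribed), is to tag each test with its strategy and let each pipeline stage select a subset:

```python
import pytest

# Hypothetical tagging scheme: every test declares the strategy it
# implements, and each pipeline stage picks the markers it wants.
# (The marker names would be registered in pytest.ini.)

@pytest.mark.unit
def test_discount_calculation():
    # Runs on every local build, before and during development.
    assert round(100 * 0.9, 2) == 90.0

@pytest.mark.integration
def test_settings_round_trip(tmp_path):
    # Runs when changes are integrated, since it touches the filesystem.
    path = tmp_path / "settings.txt"
    path.write_text("ok")
    assert path.read_text() == "ok"

@pytest.mark.smoke
def test_deployed_component_responds():
    # Placeholder: would run after every release, to beta and to production.
    assert True
```

A developer might then run `pytest -m unit` locally while the release pipeline runs `pytest -m "integration or smoke"`, turning the group’s answers into an executable policy.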
This led to other questions:
- Should we approach test design from outside in or inside out (black box vs. white box)?
- What is our architecture: monolith, client-server, n-tier, microservices?
- Should we focus on UI tests during discovery and leave integration tests until implementation?
The exercise went through several iterations, allowing us to look at the other testing goals applied to each view. This provided opportunities to compare strategies such as unit testing vs. behavioral (BDD) testing, and manual regression vs. exploratory testing.
Getting Specific
Overall, the exercise provided multiple opportunities for team discussion. One question that came up about testing services, for example, was: “What does it mean to do integration testing on a service?” Will this include the back end all the way to the database? Or is it limited to just the I/O of the service itself? Can the service under test be simply a stub?
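Our group did not settle on a single answer, but here is a sketch of the narrowest interpretation: testing just the I/O of the service with its back end stubbed out. The `InventoryService` and `FakeDatabase` names are hypothetical illustrations, not from the workshop:

```python
class InventoryService:
    """Hypothetical service under test; its only dependency is a database."""
    def __init__(self, db):
        self.db = db

    def in_stock(self, sku: str) -> bool:
        return self.db.count(sku) > 0

class FakeDatabase:
    """Stub standing in for the back end, so the test stops at the service's I/O."""
    def __init__(self, counts: dict):
        self.counts = counts

    def count(self, sku: str) -> int:
        return self.counts.get(sku, 0)

def test_in_stock_against_stubbed_back_end():
    service = InventoryService(FakeDatabase({"A-1": 2}))
    assert service.in_stock("A-1")
    assert not service.in_stock("B-9")
```

Swap `FakeDatabase` for a real database connection and the very same test becomes something closer to end-to-end, which is exactly why a team has to agree on where “integration testing” stops.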
The takeaway from this exercise can be summed up as: “It’s important that the development team know what they mean when they’re talking about tests.” By mapping the context of the project in these various ways, the team can open up, explore what the various options mean, and discover where they’re in agreement.
“How To” is Only the Beginning
Now, let’s take a short leap to a larger conclusion: this is also the benefit of attending a conference. There are tons of code schools and online classes explaining “how” to accomplish something, but it is only when you meet with others who share your goals and problems that you start to learn the context.
Context is the who, what, why, where and when of the problem. Do these questions look familiar? They’re the same questions that get answered while using the test strategy tool.
As we have seen, context also shapes meaning. PNSQC, as a conference, provides the context for you as an attendee to give shape to the meaning of your career as a software quality professional.
The mission statement of PNSQC is “Achieving higher quality software through knowledge exchange.” The goal of sharing this exercise is twofold. First, we hope some of you will read this and want to know more about the benefits of this exercise; if so, please contact Wouter and let him know you heard about it through PNSQC. Second, if you have experienced a similar “aha” moment at work, at an event, or elsewhere, and would like to share it with a larger audience, please let us know.