Follow-up Q&A to PNSQC webinar with Clyneice Chaney
In our last webinar, Clyneice Chaney gave a comprehensive look at a topic she could spend hours talking about: creating an effective test strategy. Due to time constraints, not all the attendees’ questions could be answered, but we’re sharing the responses now.
View the full webinar below, and, if you want an even more in-depth guide to testing strategies, Chaney will be giving two workshops in October at PNSQC, in addition to her Main Stage talk, “Trimming Down your QA Effort While Maintaining Quality.”
Q: Is it worth doing formal design inspections if a team does not have an architect?
A: Yes, the lead technical personnel on the team would provide the walkthrough. The organization can decide what “formal” means, and there is definitely flexibility. I’ve seen some interesting approaches, even on agile projects, where they structure the design review during scrum planning sessions. If you search for new or flexible approaches to design on agile projects, you’ll find some interesting approaches in use.
Q: Are these Test Inventory items in priority order, with Path First and Regression last?
A: Yes, the priority order is based on the risk analysis score. If you use the 1-3 scale in the spreadsheet I sent, or create your own scale (I like 1-5), the items with the highest scores get the highest priority in the inventory. So priority in this case isn’t user priority but risk-based priority: the probability of the feature failing and the impact of the damage if it fails.
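As a rough illustration of that scoring (a minimal sketch assuming 1-5 scales; the sample items, field names, and layout are my own, not the exact formulas in the spreadsheet Chaney distributes), a risk-based priority can be computed as probability times impact and used to sort the inventory:

```python
# Illustrative risk-based prioritization: score = probability x impact.
# The 1-5 scales and the sample items are assumptions, not the webinar spreadsheet.

inventory = [
    # (test inventory item, probability of failure 1-5, impact if it fails 1-5)
    ("Checkout payment path", 4, 5),
    ("Profile picture upload", 2, 2),
    ("Search result paging",   3, 3),
]

scored = [
    {"item": name, "probability": p, "impact": i, "risk_score": p * i}
    for name, p, i in inventory
]

# Highest risk score = highest test priority, regardless of user-facing priority.
for row in sorted(scored, key=lambda r: r["risk_score"], reverse=True):
    print(f'{row["risk_score"]:>2}  {row["item"]}')
```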
Q: I am familiar with SIT and UAT Testing. Where do those fall in your list of tests?
A: Once your inventory is set up, you decide which tests are executed in which of the cycles defined in your organization. You want to push as many high-priority test items in the inventory as early as possible in the life cycle. That provides opportunities to adjust the cycles you have planned for testing if issues arise.
Q: Is there an online version of this that can be filled out and be more automatic?
A: The spreadsheet with the calculating formulas is in Excel. Other organizations have used it in an online fashion.
Q: Can you define a “moderate” project?
A: I think this question is in relation to the use of the inventory for larger-scale projects. Unfortunately, sizing is not a uniform concept in our industry. Definitions of size vary considerably, although most organizations use some designation of small, medium, and large.
Here’s my take on it (roughly how I’d categorize things; keep in mind this is more or less arbitrary):
The “size” of a project is a composite of other factors like complexity, source lines of code, number of features/business value, etc. A very small product can deliver a large amount of value. That being said, here’s an example of sizing for an organization (a rough code sketch of these bands follows the list):
- 2m+ SLOC is a large to huge project. These are generally so complex that few if any people are ‘fluent’ in the entire system; rather, responsibility tends to be modularized along the structure of the code. These projects often deliver enormous business value and may be mission critical. They are also sometimes under a heavy strain of technical debt and other legacy concerns.
- 100k – 2m SLOC is a medium-sized project. This is my middle ground: the project is complex enough to require some expert knowledge, has likely accrued some degree of technical debt, and is likely also delivering some degree of business value.
- 10k – 100k SLOC is a small project, but still complex enough that you will want expert consideration; if you are open source, consider getting people you trust to review your commits.
- Anything less than 10k SLOC is tiny, really. That doesn’t mean it can’t deliver any value at all, and many very interesting projects have a very tiny footprint (e.g. Camping, whose source is ~2 kb (!)). Non-experts can generally drive value concerns (fix bugs and add features) without having to know too much about the domain.
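Purely as an illustration of those bands (the thresholds mirror the list above; the function name and bucket labels are my own, not anything from the webinar), a sizing rule might look like this:

```python
def project_size_by_sloc(sloc: int) -> str:
    """Rough size bucket from source lines of code, using the bands above."""
    if sloc >= 2_000_000:
        return "large to huge"
    if sloc >= 100_000:
        return "medium"
    if sloc >= 10_000:
        return "small"
    return "tiny"

print(project_size_by_sloc(250_000))  # -> "medium"
```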
Another example:
Other Factors to Consider

| Project Team Size | Duration | Cost |
|---|---|---|
| < 5 people | < 3 months | < $50K |
| ~ 5-10 people | ~ 3-12 months | ~ $50K-$500K |
| > 10 people | > 12 months | > $500K |
As you can see, size is relative. Most organizations use some variation of small, medium, and large, but the definitions vary greatly by industry. For example, in the space I’m working in, large really means long: projects that take a year or more.
With regard to size and the inventory approach: I find that more than 200 high-level requirements is in the moderate range. Therefore, I would use a modular approach for the inventory and create an inventory for each module/component, etc. If you have integrated the use of the inventory into a tool like Quality Manager, it is possible to manage larger-scale inventories with less effort.
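As a loose sketch of that modular idea (the module names, items, and roll-up are invented for illustration; this is not how Quality Manager or the original spreadsheet organizes things), one risk-scored inventory per component could be kept and summarized like this:

```python
# Illustrative only: one risk-scored inventory per module/component,
# rolled up so a large project stays manageable. Module names are made up.

inventories = {
    "billing":   [("Invoice generation", 4, 5), ("Tax calculation", 3, 4)],
    "reporting": [("Monthly export", 2, 3), ("Dashboard charts", 2, 2)],
}

for module, items in inventories.items():
    scored = sorted(
        ((name, p * i) for name, p, i in items),
        key=lambda t: t[1],
        reverse=True,
    )
    top_name, top_score = scored[0]
    print(f"{module}: {len(items)} items, highest risk {top_score} ({top_name})")
```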
Q: We would like to do testing at each phase (Unit, System, Integration, and Acceptance) — what advice do you have?
A: The inventory lists the set of tests, and you assign the phase in which each selected test inventory objective is executed. I add another column to my spreadsheet called “Phase.” This column allows me to associate a test ID in the inventory with specific phases. I’m a big shift-left proponent, so I attempt to identify how early in the project a test ID, particularly a risk-associated test ID, can begin to be evaluated. For me, the higher the risk, the earlier I want to begin to test, so I would assign it under unit or component testing, and depending on the test, it could also show up in acceptance as well as integration.
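To make that concrete, here is a minimal sketch of deriving a “Phase” column from the risk score; the thresholds, phase names, and test IDs are assumptions for illustration, not a prescribed rule:

```python
# Minimal sketch of a "Phase" column driven by risk score (shift-left rule of thumb).
# Thresholds and phase names are illustrative; in practice a test ID can also be
# repeated in later phases such as integration or acceptance.

def earliest_phase(risk_score: int) -> str:
    """The higher the risk, the earlier the phase in which testing begins."""
    if risk_score >= 12:
        return "Unit/Component"
    if risk_score >= 6:
        return "System/Integration"
    return "Acceptance"

inventory = [
    {"test_id": "T-001", "risk_score": 20},
    {"test_id": "T-002", "risk_score": 9},
    {"test_id": "T-003", "risk_score": 3},
]

for row in inventory:
    row["phase"] = earliest_phase(row["risk_score"])
    print(row["test_id"], row["phase"])
```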
Watch the webinar to see what else Chaney discussed.