“Working software is the primary measure of progress”… so states one of the principles of the Agile Manifesto. This means that if you ain’t got no working software, you ain’t got no real progress. This mindset, at the very least, should be the conviction of agile teams. It stands in stark contrast to the traditional practice of measuring progress by the activities completed in a project timeline.
How would you know if the software is working? When it meets the various criteria it is expected to fulfill. The way to ensure those criteria are met is through testing. It is first and foremost the developer’s responsibility to test his or her software to ensure it is working. This should hardly come as a surprise; it is not a new idea.
This idea of developers testing their own software is where things start to get interesting. In terms of effort, there are essentially two approaches to testing: manual and automated. In this post, I would like to explore manual testing.
From past experience, I have to admit that the most tempting approach for developers is manual testing (unless they have become test infected). This is especially true when one is under time pressure.
With manual testing, I have the following advantages:
- Focus on coding the functionality right from the get-go. I can save time by not figuring out how to test the code beforehand.
- Do quick checks without spending time writing unit or functional test code. How many quick checks I do, though, will depend on how confident I am in my code.
- I do not need to learn how to use a test harness framework.
- Avoid the pain of setting up, configuring, and updating the build environment for automated tests to run, on both my development machine and the build machine.
I can still produce working software quickly, and much faster than with the automated testing approach. So as a developer, why wouldn’t I just test manually?
The unseen slippery slope starts when I get comfortable with the manual testing approach. Among the many unrealistic precognition skills expected of developers, a common one is estimating the effort needed to deliver a certain piece of functionality. Since I am already comfortable with manual testing, the estimate I give will, by default, be based on the development and manual testing effort needed. Therefore, for any given feature, the estimate will be development effort + manual testing effort.
Except that the effort to retest the other features, to ensure they are not broken, is not accounted for. A few things will start to happen:
1. More bugs and issues will be discovered as time goes by since not all aspects of the software are retested upon every change
2. With point (1) left unchecked, the team will fail to deliver working software every iteration (or for the Scrum folks, a potentially shippable product increment every Sprint)
3. The team spends more time fixing bugs and less time on new features
4. The team asks for more time
5. In the meantime, when an estimate for a feature is requested, the estimate given continues to be the development effort + manual testing effort, further masking the unaccounted manual work required to retest other parts of the system. This gives a false view of progress, since without retesting we do not know how much of the software is actually working.
Who, then, is going to take up the slack of retesting the other features? (A really bad strategy would be to add testers to the team, pass the buck to them, and introduce testing iterations/sprints.)
Ideally, the effort estimation should be given along these lines:
- Effort for the 1st feature: development effort + manual testing effort for 1st feature
- Effort for 2nd feature: development effort + manual testing effort for 2nd feature + manual testing effort for 1st feature (to ensure nothing is broken)
- Effort for 3rd feature: development effort + manual testing effort for 3rd feature + manual testing effort for 2nd feature + manual testing effort for 1st feature (for the same reason)
- Effort for nth feature: development effort + manual testing effort for nth feature + manual testing effort for (n-1)th feature + manual testing effort for (n-2)th feature + … + manual testing effort for 1st feature
This estimation is probably more realistic, though not necessarily easier for the project/product sponsor to swallow.
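To make the arithmetic concrete, here is a minimal sketch of the estimation scheme above, assuming a constant development effort and a constant manual testing effort per feature (the effort values and function names are hypothetical, purely for illustration):

```python
# Sketch of the cumulative-retesting estimate, under the simplifying
# assumption that every feature costs `dev` units to develop and
# `test` units to test manually (hypothetical units, e.g. days).

def feature_effort(n, dev=3, test=1):
    """Estimated effort for the nth feature: its own development and
    testing, plus manually retesting all n-1 earlier features."""
    return dev + test + (n - 1) * test

def project_effort(features, dev=3, test=1):
    """Total estimated effort to deliver the first `features` features."""
    return sum(feature_effort(n, dev, test) for n in range(1, features + 1))

for n in (1, 2, 5, 10):
    print(f"feature {n}: effort {feature_effort(n)}, "
          f"cumulative {project_effort(n)}")
```

Even in this best case, the per-feature estimate grows linearly with the number of features already delivered, and the cumulative project effort grows quadratically; with dev=3 and test=1, the 10th feature alone costs 13 units and the first ten features cost 85 in total.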
Even this approach is not without its own problems. Consider these issues:
1. In a development team setting, you may not be the one retesting the feature that you developed. This is especially true if someone else is working on a related feature and needs to run a regression test on the feature you previously developed to ensure nothing is broken. It may take more effort for that person to retest your feature, and some scenarios that require testing may be left out (unless you documented the manual test steps, which I suspect most developers would loathe to do).
2. In the best case scenario, the manual testing effort for each feature is constant. Even then, the testing effort for each successive feature grows linearly, and the cumulative manual testing effort across the project grows quadratically. If the manual testing effort differs across features, the growth can be steeper still. If you start to feel that this is not a sustainable strategy, you are right.
3. These figures are, after all, estimates. What happens if the development effort overshoots? Will the manual testing time be reduced? If there is more than one feature that needs to be retested, will those be compromised as well? (We know the answer to this, but let’s keep quiet to prevent embarrassment.)
In conclusion, what seems like a good idea (with good intentions), delivering working software quickly via manual testing, can easily degenerate into a drag.
Would we still have working software? Nope.
How, then, is the progress? None.
Manual testing sounded like a good idea, except now it isn’t anymore.
If only we had started differently...
...and then we move on to another project and repeat the same approach.