Thursday, October 23, 2008

Treating Test-Driven Development as a matter of technique

At the heart of good development is good programming and at the heart of good programming is the ability to think through how things are being done and what needs to be achieved.
Thinking through how things are done is a lower-level concern, involving the nuts and bolts of how the function integrates with the surrounding code, mostly around exception handling, mapping the functional domain to the programming language, and using the correct system calls.
Thinking through what needs to be achieved is a higher-level concern, presumably starting from some sort of requirements specification, which governs the test inputs and result expectations from those tests.
Once the “what” and the “how” are combined with a certain skill, one should have a product that does what a user expects without exploding while at it.
Test-driven development (TDD) is an ideal solution to the “what” and a significant help to the “how”. Writing tests first inevitably forces you to understand what needs to be achieved, to model it in terms of method calls, and to define the test inputs and outputs. Less churn in the definition of what methods are supposed to do translates into less churn modifying the code implementation to match the method definitions.
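To make that concrete, here is a minimal sketch of the test-first flow, assuming JUnit 4 and a hypothetical DiscountCalculator class that does not exist yet; the names and the pricing rule are made up purely for illustration:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Written before DiscountCalculator exists: the test pins down the "what"
    // (inputs and expected outputs) and forces a decision on the method signature.
    public class DiscountCalculatorTest {

        @Test
        public void hundredDollarOrderGetsTenPercentOff() {
            DiscountCalculator calculator = new DiscountCalculator();
            assertEquals(90.0, calculator.priceAfterDiscount(100.0), 0.001);
        }

        @Test(expected = IllegalArgumentException.class)
        public void negativeAmountIsRejected() {
            // Even the lower-level "how" concern of input validation and
            // exception handling is decided here, before any production code.
            new DiscountCalculator().priceAfterDiscount(-5.0);
        }
    }

Only once these expectations are written, and failing, does the implementation of priceAfterDiscount follow, at which point the method signature and the error-handling contract are already settled.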
Technique versus choice
Despite significant literature on the subject, TDD is often approached as a matter of personal choice. The argument invariably lands in the ditch of unproven results and how teams have succeeded in developing products with a code-first, test-later approach. For TDD supporters, here are a couple of arguments that should help tow the discussion out of the ditch and give it a second chance.
The cost argument against TDD is rooted in the difficulty of moving fast while coding volatile areas of the system, invariably surrounded by statements such as “this code will change next week, and it will cost us more to fix the tests”. The problem is, while this argument is perfectly valid at discrete points in time, it is prone to misinterpreting the cause of volatility as intrinsic to the system rather than to the phase of development. Absent a formal understanding of the software development phases, technique is replaced by individual judgment as to whether TDD is right for the project, rather than whether it is right for the phase of the project.
IRUP to the rescue
In general, we all acknowledge that a product under development matures over its course, with the nature of changes becoming smaller and smaller over time. A quick glance at the IRUP map of disciplines and phases moves the discussion from general acknowledgement to specifics, shining a revealing light (more like a hand-drawn red rectangle) on where TDD is harmful and where it is necessary:
[Image: IRUP map of disciplines and phases]
Elaboration, when coding helps design…
During the elaboration phase, while the analysis and design work is reaching its peak, it is counterproductive to try and write tests first. During this phase, the entire team is after the “unknowns”, such as whether a design choice can scale or whether a new technology supports certain features. There is little point in hardening the quality of the code used for these exercises while concepts are being vetted.
Think of most of the code built during this phase as the prototype that should be thrown away once the key design concepts are validated or proven.
Construction, when coding follows design…
During the construction phase, on the other hand, the bigger decisions have already been made and the design is being translated from high level into actual code. Not doing TDD has the more obvious effect of risking a miscalculation of the time required to automate the unit tests, often followed by the schedule-constrained decision to skip test automation altogether.
The less obvious, and far more nefarious, consequence of writing the code before tests is that it inverts the flow of design from the “business modeling”, “requirements”, “analysis & design” chain. Up to that point, the system design is driven from end-user needs to the final product; when developers skip TDD during the construction phase, the flow runs from code to end user, premised on the assumption that the developer can short-circuit his own design decisions to match the original design direction.
For a skilled developer, the result is just additional work in the form of “sculpting” the results, iterating between what should be done (the original design) and the output of what is being coded. For a less skilled developer, the result is often a mismatch between the code and the original design.
There are individuals who can do this in a single iteration, but usually that happens when the developer is both designer and programmer for the system.
As with any framework, IRUP is not a golden rule, but its matrix of phases and disciplines offers a temporal and conceptual separation that supports better decisions as to when and where TDD should be followed. In the end, it should still be a matter of choice, but not a philosophical one.
