Recently there have been a few discussions of testing and test-driven development (TDD) in the blogosphere, which inspired me to think a bit about my own development methodology, and about testing in particular. Although TDD is quite well defined, I have a feeling that different people interpret it in subtly different ways. So in this post I want to write a few words on what TDD means to me.

Let me start by declaring openly that I usually don't write tests before code. Why this is so is probably a topic for a separate post, but in brief: I usually write my code bottom-up, starting with small utilities and infrastructure code and gradually building up towards the final solution. Given this methodology, writing tests before code doesn't make sense to me, for two reasons. First, the interfaces aren't yet stable (there's usually little added functionality at each layer, and the code is mostly interfaces). Second, I don't believe testing should be applied to each small function - that's too fine grained. Eventually I do write tests that aim to provide the desired level of coverage, but I do this at the abstraction layer for which it makes the most sense - and this usually isn't the lowest layer, which means the tests come after the code.
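
To make this a bit more concrete, here is a small, purely hypothetical Python sketch (the names and the parsing task are invented for illustration). The tiny helpers aren't tested individually; the test targets the layer where the behavior actually means something, and covers the helpers indirectly:

    # Hypothetical bottom-up code: small helpers composed into parse_record.
    def _strip_comment(line):
        return line.split('#', 1)[0].strip()

    def _split_fields(line):
        return [f.strip() for f in line.split(',')]

    def parse_record(line):
        """Parse a 'key, value' line, ignoring trailing comments."""
        line = _strip_comment(line)
        if not line:
            return None
        key, value = _split_fields(line)
        return (key, int(value))

    # The test exercises the higher abstraction layer; the helpers are
    # covered indirectly, without a dedicated test for each one.
    def test_parse_record():
        assert parse_record('temp, 42  # sensor reading') == ('temp', 42)
        assert parse_record('  # comment-only line') is None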

What I do before writing code is think about the testability of my design. Think hard. Design usually means breaking the solution into components (or modules, or whatever you want to call them), figuring out the responsibility of each component and the interactions between them. There are many aspects to consider when judging a design, and I add testability as an important first-class criterion. A design that doesn't lend itself to being tested is, in my view, a bad design, and when I see that happening I start thinking about alternatives.

How does a design lend itself to being tested? By being composed of components that can be tested in isolation [1]. Isolation is extremely important for debugging problems when they inevitably arise. It is not easy to achieve - good isolation (a.k.a. encapsulation) of components is a kind of holy grail; you never really find the best division into components, you only approximate it. I think testability serves as a good guide here - components that are easy to test separately usually make up a pretty good design, because testability and isolation usually go hand in hand. The more coupled a design is, the harder it is to test its components in isolation. As a corollary, when components are easy to test in isolation, the design is well decoupled. There are many techniques for decoupling [2], but they are not what this post is about. The point I'm trying to make is that testability is a great guide for judging the results of applying these techniques [3]. This is why testability must be an important part of the design of software.
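
Here's a minimal, hypothetical Python sketch of one such technique - dependency injection, mentioned in footnote [2] (the class and method names are invented). Because ReportGenerator receives its data source from the outside instead of constructing it internally, a test can hand it a fake and exercise it in complete isolation from any database or network:

    # Hypothetical sketch: the component is decoupled from its data source.
    class ReportGenerator:
        def __init__(self, fetcher):
            # 'fetcher' is anything with a get_totals() -> dict method.
            self._fetcher = fetcher

        def summary(self):
            totals = self._fetcher.get_totals()
            return 'total: {}'.format(sum(totals.values()))

    # In a test, inject a fake instead of the real (slow, stateful) fetcher.
    class FakeFetcher:
        def get_totals(self):
            return {'a': 2, 'b': 3}

    def test_summary_in_isolation():
        assert ReportGenerator(FakeFetcher()).summary() == 'total: 5'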

So, to reiterate, my development methodology can still be called TDD, if you let the final "D" stand for "Design". Test-Driven Design - plan your tests while you're designing the software, not afterwards. In a way, this pushes thinking about testing even earlier than classic TDD does, assuming that one does (at least some) design before writing code [4]. In classic TDD you do the design, and then start writing tests + code. In Test-Driven Design, you do the design + test planning, and then start writing code + tests. Yes, first code, then tests. It takes discipline, but over the years I've found that this discipline comes with experience.

So, I said in the beginning that I usually don't write tests before code. What is this "usually" about? In some cases I do write tests before code. This happens when the code is done (the full functionality is implemented), and over time problems are discovered - whether by me or by users opening issues and bug reports. When I suspect a bug, I first write a test that reproduces it and only then go on to fix it. This keeps the reproducer safely in the test suite, ready to protect me from reintroducing the bug in some future release. At this point the design is pretty much done and the APIs are stable, so writing the tests before the code makes more sense.
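
As a hypothetical example of what such a bug-first test might look like (the slugify function and the bug are invented for illustration): suppose a user reports that slugify crashes on an empty title. The reproducer goes into the test suite first, fails, and then the fix makes it pass - and it stays there to guard future releases:

    # Hypothetical example - slugify and the reported bug are invented.
    import re

    def slugify(title):
        # The fix: handle empty/whitespace-only titles instead of assuming
        # the title contains at least one word.
        words = re.findall(r'[a-z0-9]+', title.lower())
        return '-'.join(words) if words else 'untitled'

    def test_slugify_empty_title_bug():
        # Written before the fix to reproduce the reported bug; it must
        # keep passing in every future release.
        assert slugify('') == 'untitled'
        assert slugify('   ') == 'untitled'

    def test_slugify_regular_title():
        assert slugify('Test Driven Design!') == 'test-driven-design'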

I want to conclude by stating what is, in my eyes, the biggest benefit of comprehensive testing. No, it's not that testing catches the most bugs. That simply isn't true - what catches the most bugs is careful review of the code, whether by the original programmer or by others. Testing is essential for keeping quality high, but it's not the biggest bug slayer out there. What it's most important for, I think, is giving the developer the confidence to change the code. IIRC this dawned on me when reading the book "Refactoring" by Martin Fowler, many years ago. No piece of code comes out perfect the first time you write it, and changing it - tweaking APIs, optimizing, moving stuff around - is an important part of a good programmer's routine. But doing that without a big safety net of tests is simply crazy, which is why many programmers don't refactor. Having comprehensive tests empowers you to change the code at will, knowing that its behavior stays correct even when considerable changes are applied.
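
A tiny, hypothetical illustration of what such a safety net buys you (word_counts is an invented example function): the test below pins down what word_counts returns, not how it computes it, so the implementation can be rewritten - say, replacing a hand-rolled loop with collections.Counter - while the test keeps confirming that the behavior hasn't changed:

    # Hypothetical illustration - word_counts is invented for this example.
    from collections import Counter

    def word_counts(text):
        # Originally a hand-rolled dict-building loop; refactored to use
        # Counter. The test below didn't have to change at all.
        return dict(Counter(text.lower().split()))

    def test_word_counts():
        assert word_counts('to be or not to be') == {
            'to': 2, 'be': 2, 'or': 1, 'not': 1,
        }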

[1] This definition is naturally recursive. A complex design may consist of large components that should be further divided into sub-components. It should be possible to test these sub-components in isolation as well.
[2] Dependency injection is one powerful technique that comes to mind.
[3] Not the only guide, of course. A design in general, and decoupling in particular, is judged by many criteria - testability is just one of them.
[4] What design before coding? Didn't I just say in an earlier paragraph that I write my code bottom-up? Yes. I design the program top-down, and then code it bottom-up.