
Spike, Test, Break, Fix

I first heard about test-driven development (TDD) around 2006-ish and I still hear about it often. TDD encourages you to write effective tests because you always see both states: the failing state and the passing state. This is great, but early in a feature it's easy to write the wrong code as you learn more about the problem. Duplicating that exploratory effort in tests is a waste of time.

After many years of building software on small to medium-sized teams, I prefer a leaner approach that accomplishes the same goal: something I call spike, test, break, fix.

Spike

I like to start a feature by just writing the code! I'm calling this a spike because my goal is to hack together the feature as quickly as possible. Adding tests would only slow me down. This approach is especially good for building web UIs because tools such as Create React App offer hot reloading, which lets you run the app while you write the code.

It's incredible what you can learn by spiking a feature. Most of the time, I learn that my first approach is wrong or that I misunderstood some key integration point. If I had written tests for all of that, I would have been sad.

Spiking originally meant exploring uncharted territory, such as trying out a third-party library for the first time. However, I prefer to spike pretty much everything. There are always unknown unknowns in any new feature.

There's an added benefit to spiking: you can pause and split a patch into smaller pieces that are easier for your teammates to review. A study of a Cisco team showed that code reviews are most effective when the patch has 400 lines of changes or fewer. I use techniques like git add --patch to prepare smaller patches for review.

Test

Once I get the feature working and I think the approach is shippable (but not polished), I write the tests. I commit to the approach I have chosen.

I like writing tests because I get to think about edge cases and how to make my feature more robust. If writing tests ever feels like a chore to you, I recommend investing in helper functions for common setup or user simulations. In a well-architected test suite, adding tests should be a breeze.
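
For example, here's a minimal sketch of such a helper, assuming Jest, React Testing Library, and a Redux Toolkit store. The usersSlice reducer and UserProfile component are made-up names for illustration:

import React from 'react';
import '@testing-library/jest-dom';
import { render, screen } from '@testing-library/react';
import { Provider } from 'react-redux';
import { configureStore } from '@reduxjs/toolkit';
import usersReducer from './usersSlice'; // hypothetical slice
import UserProfile from './UserProfile'; // hypothetical component

// Common setup: each test gets a fresh store, optionally seeded with state.
function renderWithStore(ui, { preloadedState } = {}) {
  const store = configureStore({
    reducer: { users: usersReducer },
    preloadedState,
  });
  return { store, ...render(<Provider store={store}>{ui}</Provider>) };
}

test('shows the user name once loaded', async () => {
  renderWithStore(<UserProfile userId="42" />, {
    preloadedState: { users: { 42: { name: 'Ada' } } },
  });
  expect(await screen.findByText('Ada')).toBeInTheDocument();
});

With a helper like that, most new tests are a couple of lines of setup plus the assertions that matter.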

Break

The feature has been written and the tests are passing. Next, I break the feature temporarily just to see the tests fail. Usually, this is as easy as commenting out some code. For example, imagine I had this if statement in a React effect:

useEffect(() => {
  if (user === undefined && userId) {
    dispatch(fetchUser({ userId }));
  }
}, [user, userId]);

I'd comment out the first part of the if statement and make sure a test fails:

useEffect(() => {
  if (
    // user === undefined &&
    userId
  ) {
    dispatch(fetchUser({ userId }));
  }
}, [user, userId]);

I'd also comment out the second part and make sure a test fails (and so on):

useEffect(() => {
  if (
    user === undefined
    // && userId
  ) {
    dispatch(fetchUser({ userId }));
  }
}, [user, userId]);
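
For reference, here's a sketch of the tests I'd expect to fail in each of those broken states. It assumes Jest and the renderWithStore helper from the Test step, plus a made-up api module that fetchUser calls under the hood:

import React from 'react';
import * as api from './api'; // hypothetical module used by fetchUser
import UserProfile from './UserProfile'; // hypothetical component
import { renderWithStore } from './testHelpers';

// Auto-mock the api module so api.getUser becomes a jest.fn().
jest.mock('./api');

// Should fail when `user === undefined` is commented out.
test('does not refetch a user that is already loaded', () => {
  renderWithStore(<UserProfile userId="42" />, {
    preloadedState: { users: { 42: { name: 'Ada' } } },
  });
  expect(api.getUser).not.toHaveBeenCalled();
});

// Should fail when `userId` is commented out.
test('does not fetch when there is no userId', () => {
  renderWithStore(<UserProfile userId={undefined} />);
  expect(api.getUser).not.toHaveBeenCalled();
});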

Testing is hard. This break step usually reveals a mistake I made, probably in how the test was set up or in its assertions. This is a crucial step that cannot be skipped!

If it's too cumbersome to comment out the feature code, I sometimes just change a static value in one of the assertions to make it fail. This is less ideal, though.
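
For example, reusing the hypothetical test from the Test step, changing the expected text is enough to force a failure:

test('shows the user name once loaded', async () => {
  renderWithStore(<UserProfile userId="42" />, {
    preloadedState: { users: { 42: { name: 'Ada' } } },
  });
  // Less ideal: break the assertion instead of the feature code.
  expect(await screen.findByText('Bob')).toBeInTheDocument(); // the UI renders 'Ada'
});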

Fix

When I've watched each test fail, I put everything back to make them pass again. At that point, the cycle is complete. I am free to refactor my implementation and make other simplifications. I can be confident that the tests will catch any regressions.

Is TDD still useful?

Yes, definitely. I like to use TDD for well-understood code changes, such as adding a new adapter implementation alongside other proven ones. That is, I like TDD for no-brainer changes where I am confident about what code I need to write.
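
As a sketch of that kind of change, imagine a shared contract test that every existing storage adapter already passes; the S3StorageAdapter and storageContractTests names are made up for illustration:

import { storageContractTests } from './storageContractTests'; // hypothetical shared suite
import { S3StorageAdapter } from './S3StorageAdapter'; // hypothetical new adapter

// Written first, TDD style: the same contract every existing adapter passes.
// It fails until the new adapter implements the storage interface correctly.
describe('S3StorageAdapter', () => {
  storageContractTests(() => new S3StorageAdapter({ bucket: 'test-bucket' }));
});

Because the tests already exist in spirit, there's nothing to explore, and watching them go from red to green is exactly what TDD is good at.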