Sunday, November 29, 2015

Everybody is doing TDD, take two

In my previous post Everybody is doing TDD, I tried to make a point by telling a story. But most people missed the point and argued about unrelated problems. I guess that was mostly my fault, so in this post I will attempt to explain the point directly.

My claim is this: sooner or later, every software development effort slips into the same kind of workflow:

  1. Define a test case.
  2. Change or extend the implementation so that the above test case passes.
  3. Execute the test case. If it passes, go to 1. If it fails, go to 2.
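To make the loop concrete, here is a toy model of it in Python. All names here are hypothetical illustrations: each test case is an (input, expected) pair, and `improve()` stands in for the programmer's work in step 2.

```python
# Toy model of the workflow. The "specification" is: square the input.
test_cases = [(2, 4), (3, 9), (10, 100)]

def implementation(x):
    return 0  # initial, deliberately wrong implementation

def improve(impl, failing_case):
    # Step 2: the programmer changes the code. In this toy, one "fix"
    # is enough to produce the correct function.
    return lambda x: x * x

for inp, expected in test_cases:               # step 1: define a test case
    while implementation(inp) != expected:     # step 3: execute the test case
        # step 2: change the implementation until the case passes
        implementation = improve(implementation, (inp, expected))
```

The point of the sketch is only the control flow: steps 2 and 3 repeat until the test case passes, and only then does the loop move on to the next case.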
This workflow is so far the only one we know of that produces software that fits its specification. The only difference is in how the workflow is implemented. For example, as a software development process, the steps can be:
  1. Create a specification.
  2. Implement an application according to specification.
  3. Have testers make sure the implementation conforms to the specification. If this fails, return to step 2.
This process works as long as no corners are cut, which means defining the specification in great detail and having an army of testers, so the whole specification can be validated. But much worse is the situation where the whole workflow is non-conscious, as presented by Josh in the previous post:
  1. The test case is kept in the programmer's memory as the steps to run the use case.
  2. Change or add code to fix the current use case.
  3. Run the steps as defined in 1.
It should be obvious why this is so bad, but that is not the point of this post. The point is that no other workflow exists that gives a software developer the ability to create software that works according to a specification. Or at least I don't know of any.

That is why I pose a question, primarily to those who claim TDD doesn't work: does any other workflow exist that cannot be reduced to this one?

There are a few options, and none of them are good.

The first option is to implement the software properly the first time. I believe this is just a dream, possible only for the simplest of use cases. If the software gets even a little more complex, it becomes impossible to create it without executing the test cases during development.

The second option came from the comments on the previous post: that a developer can "feel" when software is correct. While I agree a developer can limit the number of tests that need to be executed by applying experience and intuition to the code itself, there is still a huge possibility of bias. So while good experience makes it possible to come up with new test cases and minimize testing, it is not a full replacement.

So if we agree that this is the best workflow to follow, it raises the question: "Which implementation of this workflow is best?" But that is a question for another post (with an obvious conclusion).


  2. Reduction to this workflow does not mean that this workflow can be understood and implemented with the same results in any manner.

    I call this the "Lowest Common Denominator" problem. In any given situation, there are some common things that are understood between all parties, and these are the things that can be communicated "clearly".

    These concepts are used to divide up the problem so that they can be solved, hence denominator. The problem is that in order for the parties to agree, they have to use the simplest methods of communicating that every party can understand, which is where the "common" and "lowest" (simplest) come from.

    The issue here is that there are more efficient ways of doing things, that are not as simple, but not all parties will have equal knowledge of all of these things, presenting a dilemma: Do you aim for higher efficiency or higher commonality?

    This is something every given team/organization should decide for themselves, because they work on a spectrum. If you aim for more commonality, you lower the bar on what techniques are common to all parties, and you will lose the benefits that un-common techniques may provide.

    If you aim for efficiency of techniques, you enter the area where not all parties understand the techniques. This is also exemplified by the "single person vs. large team" spectrum. Single people are able to use efficiencies that even a 2-person team cannot, because of the immediate nature of communication inside one's own head.

    As soon as the communication must be externalized, there is a MASSIVE loss of efficiency in the communication, and as the team grows the communication problems grow factorially.

    So while your assertion may hold that everything can be broken down into Create Test, Implement Thing To Be Tested, Verify Test, this does not mean that all different types of processes give the same results merely because of this.

    To give an analogy, I can say that all data in a database could be stored in a single table with 3 fields:

    - Group
    - Type
    - Value

    All of these fields can be BLOBs, so we only need 1 storage type, and from that all data actions done in any other method can be implemented.

    This is true, but it is not efficient.