03/07/2012

TDD in practice: ABC's of TDD

Prelude

This is my humble opinion of TDD:
  • PRO: It's super easy to get equipped with a testing framework and learn the basics. 
  • CON: In comparison, it's unbelievably hard to actually start unit testing and applying TDD. 
Hopefully, by now we have a common understanding of what we should be testing (see TDD in practice: Where does it fit in? - we can only test against known results and behaviours).

Let’s reiterate and approach the same conclusion from a different angle:

Method A: void SendEmail()
Analysis: The above method takes no parameters and returns no values. The only clue we have of its purpose and intended behaviour is the method name.
Conclusion: Not testable

Method B: int Sum(params int[] values)
Analysis: The above method takes parameters and returns a value. Because the parameters are numeric in nature and the method's intention is to sum up these values, we can infer that the expected result should be the sum of the parameters.
Conclusion: Easily testable
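
To illustrate why, here is a minimal sketch of such a test - assuming an NUnit-style framework and a hypothetical Calculator class that exposes Sum:

using NUnit.Framework;

[TestFixture]
public class CalculatorTests
{
    [Test]
    public void Sum_GivenThreeValues_ReturnsTheirTotal()
    {
        // Arrange: the Calculator class is a hypothetical home for Sum.
        var calculator = new Calculator();

        // Act
        int result = calculator.Sum(1, 2, 3);

        // Assert: the expected result is simply the sum of the parameters.
        Assert.AreEqual(6, result);
    }
}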

Although Method A is not testable, don't be discouraged - that doesn't mean it can never be.

Test Driven Development revolves around:
  1. Red: Write the test the way you want the implementation to work. 
  2. Green: Implement the functionality required to make the test pass (with a focus on loose coupling and high cohesion). 
  3. Refactor: Refactor your implementation and enforce Separation of Concerns. 
  4. Repeat
Relating to the above-mentioned:
  • (SoC)Separation of Concerns: The process of separating a computer program into distinct features that overlap in functionality as little as possible. 
  • (LC)Loose coupling: In computing and systems design a loosely coupled system is one where each of its components has, or makes use of, little or no knowledge of the definitions of other separate components. 
  • (HC)High cohesion: In computer programming, cohesion is a measure of how strongly related the pieces of functionality expressed by the source code of a software module are. 
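
To make the cycle concrete, here is a rough sketch of one pass through it, using the Sum test from the prelude to drive the implementation (the Calculator class is hypothetical):

// Red: the test written earlier fails first - Calculator.Sum does not
// exist yet, and a compilation failure counts as a failing test.

// Green: write just enough production code to make the test pass.
public class Calculator
{
    public int Sum(params int[] values)
    {
        int total = 0;
        foreach (int value in values)
        {
            total += value;
        }
        return total;
    }
}

// Refactor: with the test green, the loop could be replaced by
// System.Linq's Enumerable.Sum (return values.Sum();) - rerunning the
// test verifies that the behaviour is unchanged.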

I suggest everyone should follow these steps when applying Test Driven Development:

Step 1: Investigate and Plan
Our downfall with TDD is that we've been conditioned by tutorials to think that applying TDD is simple and that we should jump head-first into coding.

You have to start planning out the feature you want to implement.

I have found that enforcing the Single Responsibility Principle has allowed me to identify different components. The Liskov Substitution Principle makes for a good guideline to determine if an abstraction should be implemented as an interface or using the Template Method Pattern.
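
As a rough illustration of the Template Method Pattern - the report exporter below is a made-up example, not a prescription:

// The abstract class fixes the skeleton of the algorithm (the template
// method); subclasses override individual steps, not the overall flow.
public abstract class ReportExporter
{
    public string Export()
    {
        string data = LoadData();
        return Format(data);
    }

    protected abstract string LoadData();
    protected abstract string Format(string data);
}

public class CsvReportExporter : ReportExporter
{
    protected override string LoadData()
    {
        return "a;b;c";
    }

    protected override string Format(string data)
    {
        return data.Replace(';', ',');
    }
}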

Once you have set out some behavioural guidelines (user stories), have a pretty good idea of the functionality you need to implement, and have identified reusable components, you are ready to go.

Relating to the above-mentioned:
  • (SRP)Single responsibility principle: Every class should have a single responsibility, and that responsibility should be entirely encapsulated by the class. All its services should be narrowly aligned with that responsibility. 
  • (LSP)Liskov substitution principle: Concrete implementations of an abstraction should be interchangeable without altering any of the desirable properties of the program (correctness, task performed, etc.) 
  • Template method pattern: A template method defines the program skeleton of an algorithm. One or more of the algorithm steps can be overridden by subclasses to allow differing behaviours while ensuring that the overarching algorithm is still followed. 
  • User Story: A sentence that captures the context (who, what, where, when) and the expected behaviour. In agile development (XP, Scrum, etc.) the form is usually As a <user/role> - I want <goal> - So that <benefit>. In behaviour driven development (BDD) and acceptance test driven development (ATDD) the form is usually Given <context> - When <condition/s> - Then <expected result/s/behaviour/s>. 
Step 2: Red-Green-Refactor
I'd like to revisit Method A from the prelude.

If Method A were fully implemented within a single method, it would be in an untestable state.

We would approach this problem by refactoring in-scope functionality into testable methods and abstracting out-of-scope functionality into implementations that can be tested independently.

This allows us to test the method indirectly: we test the refactored in-scope methods and the abstracted implementations that the method under test uses.

Abstract out-of-scope functionality that can be tested independently
If the responsibility of certain functionality lies outside the scope of the class (SRP), that functionality should be refactored out of the method/class into a concrete implementation. The concrete implementation then needs to be abstracted with an interface or an abstract class using the template method pattern (DI, TMP).

This rule also highlights the Dependency Inversion Principle, which is a specific form of decoupling that's very useful while applying TDD. You can then write unit tests for the low-level implementations and use stubs/fakes/mocks within your higher-level implementations (that depend on the abstraction) to verify the behaviour of those high-level components.

The implementation of the abstraction can then be supplied via the constructor or via parameters to the method (SP).

Relating to the above-mentioned:
  • (DI)Dependency inversion principle: High-level implementations should not depend on low-level implementations. Both should depend on abstractions. (A high-level implementation should depend on the abstraction of a low-level implementation) 
  • (SP)Strategy pattern: The strategy pattern allows you to supply a strategy (behaviour) to a high level implementation at run-time. 
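
To tie this back to Method A from the prelude, here is a hedged sketch of what such a refactoring could look like. The IEmailGateway interface, the WelcomeEmailSender class and the fake are illustrative names, and the test assumes an NUnit-style framework:

using NUnit.Framework;

// The out-of-scope functionality (actually delivering mail) is abstracted.
public interface IEmailGateway
{
    void Send(string to, string subject, string body);
}

public class WelcomeEmailSender
{
    private readonly IEmailGateway _gateway;

    // The concrete implementation is supplied via the constructor (DI/SP).
    public WelcomeEmailSender(IEmailGateway gateway)
    {
        _gateway = gateway;
    }

    public void SendEmail(string userName, string address)
    {
        _gateway.Send(address, "Welcome", "Hello " + userName);
    }
}

// A fake implementation lets us verify the high-level behaviour in a test.
public class FakeEmailGateway : IEmailGateway
{
    public string LastRecipient;

    public void Send(string to, string subject, string body)
    {
        LastRecipient = to;
    }
}

[TestFixture]
public class WelcomeEmailSenderTests
{
    [Test]
    public void SendEmail_SendsToTheUsersAddress()
    {
        var gateway = new FakeEmailGateway();
        var sender = new WelcomeEmailSender(gateway);

        sender.SendEmail("Alice", "alice@example.com");

        Assert.AreEqual("alice@example.com", gateway.LastRecipient);
    }
}

In production a real gateway (for example one wrapping SmtpClient) would be supplied via the constructor; in the test, the fake stands in for it.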
In the hope of not making this post too long, it will have to be cut short for now. It is evident that a good understanding of SOLID principles, GRASP and design patterns is required to properly apply Test Driven Development.

Test Driven Development is awesome - it promotes good coding practices and quality code. Once you know implementations adhere to desired behaviours, the fear of maintaining (by refactoring) and extending a system almost entirely disappears.

There are two advanced fields within TDD - Behaviour Driven Development (BDD) and Acceptance Test Driven Development (ATDD) - which, although outside the scope of this post, carry great benefits. For example, ATDD tests double as confirmation that a specific feature is done, which serves as an asset within project management.

My opinion
I still consider TDD an implementation detail - it's very useful within the context of its application (the implementation of functionality).

There are other fields to explore, such as:
  • Architectural design (like CQRS, layering, distributed systems, client-server, online-offline) 
  • Project management (PRINCE2, Agile/Scrum, XP) 
  • Program design (domain driven design, metadata driven design, model driven design, design by contract, AOP) 
Perhaps we are not intended to learn everything - but I believe we should know enough to fend for ourselves.

Contrary to the common expression that a chain is only as strong as its weakest link, it's the average skill within a programming team - not its weakest member - that dominates the quality of implementation within a project.

20/06/2012

TDD in practice: Where does it fit in?

Lately I've been delving deeper into Domain Driven Design and Test Driven Development.

If you had asked me two months ago, "Do you know and/or use test driven development?", I'd have said: yes, but I only write tests for algorithms or methods with expected results or behaviour.

Since then, my view has changed.

The question I was asking was not how to do test driven development, but rather when.

Let's take a look at the well-known rules for TDD set out by Uncle Bob:
  1. You are not allowed to write any production code unless it is to make a failing unit test pass.
  2. You are not allowed to write any more of a unit test than is sufficient to fail; and compilation failures are failures.
  3. You are not allowed to write any more production code than is sufficient to pass the one failing unit test.
These are great rules. Too bad most developers don't know how to implement them.

It becomes pretty clear why, when we examine what I was implying two months ago.

We can only test against known results and behaviours.

We struggle to incorporate TDD into our coding practices because we are focused on code, instead of context.

Domain driven design gives a neat explanation of a domain: whenever a context is implied, a boundary is formed.

Ubiquitous language implies that, through classification, a domain entity can be given an appropriate name that describes its context and purpose.


As an example, I'll describe a user within different contexts of healthcare.

A person that uses a system is called a user.
A person that needs to be billed is treated as an account. (Accounting)
A person who has clinical data is considered a patient. (Clinical)
A user may be a patient.
A user may be a doctor.

Let's classify the above into User Roles (roles a user may fulfil) and Domain Entities:
User Roles: patient, doctor
Domain entities: account, patient
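
As a rough sketch (every name below is illustrative), that classification could look like this in code:

using System.Collections.Generic;

// Roles a user may fulfil within the system.
public enum UserRole
{
    Patient,
    Doctor
}

public class User
{
    public string Name;
    public List<UserRole> Roles = new List<UserRole>();
}

// Domain entities carry their own context and purpose.
public class Account   // Accounting context: a person that needs to be billed.
{
    public decimal OutstandingBalance;
}

public class Patient   // Clinical context: a person who has clinical data.
{
    public List<string> ClinicalNotes = new List<string>();
}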

The reason we struggle to test domain entities is that we never define them.

Let's take a look at the average programmer's model abstraction evolution:

In the beginning
UI -> Database

After we've learnt about presentation patterns
UI -> View Model/Presenter -> Database

After we've learnt about ORMs
UI -> View Model/Presenter -> Data Model -> Database

At this stage, another level of abstraction comes along if you're doing SOA
UI -> View Model/Presenter -> Data Transfer Object -> Data Model -> Database

Let's look at the last example in terms of context:

  • The UI defines the data of the View Model (or rather, the View Model contains the data for the UI).
  • The DTO contains the data required by the View Model (unlike the DTO, the View Model also has UI-specific properties like state).
  • The Data Model represents the structure of how the object is stored in the database.
  • In a simple case, the DTO is effectively a partial 'view' of the Data Model.
The reason we are struggling to practice TDD is that nowhere in this chain is the domain represented (meaning we don't know where to apply the boundaries/rules).

We are effectively just mapping one object to another along the chain.

Let's add in the Domain Entity.
UI -> View Model/Presenter -> Data Transfer Object -> Domain Entity -> Data Model -> Database

What's interesting here is that by simply adding the domain entity, we have given context to the entire chain. For example, the UI validation in the View Model/Presenter should mirror/conform to that of the Domain Entity.

The role TDD plays in DDD is that of verifying that the domain boundaries (or rules) are in place.
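
As a hedged sketch - the Patient entity and its rule below are made up purely for illustration - a test that verifies such a boundary could look like this:

using System;
using System.Collections.Generic;
using NUnit.Framework;

// The domain entity enforces the rule, so every layer in the chain
// (UI validation included) has a single definition to conform to.
public class Patient
{
    private readonly List<string> _clinicalNotes = new List<string>();

    public void AddClinicalNote(string note)
    {
        if (string.IsNullOrWhiteSpace(note))
        {
            throw new ArgumentException("A clinical note may not be empty.");
        }
        _clinicalNotes.Add(note);
    }
}

[TestFixture]
public class PatientTests
{
    [Test]
    public void AddClinicalNote_WithEmptyNote_IsRejected()
    {
        var patient = new Patient();

        // The boundary (rule) is verified directly against the domain entity.
        Assert.Throws<ArgumentException>(() => patient.AddClinicalNote(""));
    }
}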

By defining boundaries, you create context. Context, in turn, allows you to define behaviour, which creates structure. Structure leads to reuse and better code.

Disclaimer: TDD in itself has many benefits and other use cases outside DDD.

In my opinion, the main reason you should adopt TDD is that it takes the fear out of modifying and extending an existing system, because you can verify that everything works as it should.

We have learnt that just because a system builds successfully doesn't mean it works the way it should.


17/04/2012

We build boxes

Today I will take a more philosophical approach to software development.

We as developers are loath to admit that our golden goose is building boxes.

Consider for a moment how many authentication and user management systems you have implemented.

Every one of them is similar in many ways - but never entirely reusable.

Our golden goose is building boxes.

We take ideas and build a box for it.

Often clients will request something that the system wasn't designed to do - because the new idea doesn't fit into the old box.

I guess what I'm trying to say here is that constraints (the design of the box) are important.

In a sense we are always trying to build boxes for ideas. One for every part of the system that will be neatly stacked into the big box that is the final product.

Modular programming is rarely applied to general scenarios. The responsibility of a box (module) should be defined (even if only barely) while keeping in mind how it will fit into the big box.

There is no excuse not to plan your boxes and how they will fit together to form your final product.

I think building boxes is a natural reaction to software change requests.

Despite the almost negative character I have portrayed regarding building boxes, they are definitely important.

I have seen (and experienced) many times that businesses suffer because of a lack of constraints in quoting projects for clients.

Defining your box and sticking to it requires commitment - in business this is essential to guard against the never-ending stream of new ideas.

Don't be ashamed, embrace building boxes and you will get better at it.