
Driven by Test


Have you ever worked in a company that has a separate test team from the development team? Maybe that’s the situation you’re in right now. Did you ever ask yourself the question “What do those testing guys do in their dark, windowless basement all day anyway?” or “Why are the testers kept apart as a separate functional unit from the developers and designers?” What answers did you find?

Maybe you’ve worked in a traditional software development life cycle, and do have answers, but did you ever ask yourself “Do I really believe them?”

Two commonly held views (though not the only arguments) are:

  1. Testing is a discipline in its own right, for which we need to hire people with expertise in the specific area of software test. Developers are experts at developing; testers are experts at testing. It may be unreasonable to expect anyone to be highly competent at both, and difficult to hire people who have both skills to a high degree;
  2. Testing is there to assure the quality of the code produced by the development team, confirming that it actually meets the original requirements as stated in the functional requirements specification, or whatever pre-design documents initiated the development process. We don’t want the test team’s independent interpretation of the requirements to be polluted by knowledge of how the development team put the code together.

In recent years there has been a huge migration within the software engineering industry away from waterfall and other classical software development techniques towards lightweight ‘agile’ methodologies. Amongst other things, these agile methods challenge the separation of test and development. They question whether software practitioners have to be narrowly disciplined, arguing instead that it is genuinely useful to have people who are reasonably skilled at both test and development, or at the very least that both disciplines can cohabit within a development team. Let’s take a look at testing in agile, and see how this might be true.

Test First

Go back far enough into agile history, and you will see that a long-running theme in agile has been the practice of test-first design (TFD). Traditionally, software would be developed, then handed over to testers for testing. The implication is that developers don’t know how to test, and so are expected to write the code, let someone else test it and provide feedback, and then fix the faults.

With TFD, the developers either have some knowledge of testing or work closely with someone who does. Together, they design the set of tests that will test the as yet unwritten code. In designing the tests, they will think about modes of failure, and hence how the developer might avoid or handle those. They read the functional requirements, or obtain sufficiently detailed input requirements from the stakeholder, to be able to construct a set of tests that will confirm the code meets these input requirements.

Having constructed these tests, the developer can now go ahead and develop code to pass them, the tests now being an executable specification of the original requirement. The benefits of test-first development cited by agile experts are:

  1. You were going to write the tests anyway, so why not write them first to support the development process? It won’t take any longer to do it this way round;
  2. The tests will force the developer to analyse the requirements before beginning development. Having thought longer about what is to be developed, particularly with a focus on its robustness as well as its correctness, there is an opinion that this may actually shorten the development time, since less code is developed that is subsequently discarded.

TFD does not mandate knowing all the requirements for a piece of functionality when coding begins. There is latitude in the technique for code to be developed incrementally. However, as each increment of functionality is begun, the first thing that is constructed is the set of tests for that new behaviour.
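To make this concrete, here is a minimal Python sketch of test-first design. The requirement, the `parse_price` function and its behaviour are all invented for illustration; the point is that the tests, including a deliberate failure-mode test, are written before the implementation exists.

```python
# Hypothetical requirement: parse a price string such as "£4.99" into pence.
# In test-first design these tests are written before parse_price exists;
# the implementation below is then written to make them pass.

def test_pounds_and_pence():
    assert parse_price("£4.99") == 499

def test_whole_pounds():
    assert parse_price("£4.00") == 400

def test_failure_mode_empty_input():
    # Thinking about modes of failure up front is part of designing the tests.
    try:
        parse_price("")
    except ValueError:
        return
    raise AssertionError("expected ValueError for empty input")

def parse_price(text):
    """Convert a price string like '£4.99' into an integer number of pence."""
    cleaned = text.strip().lstrip("£")
    if not cleaned:
        raise ValueError("empty price string")
    pounds, _, pence = cleaned.partition(".")
    return int(pounds) * 100 + int(pence or 0)

# Run the tests (a real project would use a runner such as pytest).
test_pounds_and_pence()
test_whole_pounds()
test_failure_mode_empty_input()
```

Once these tests pass, they become the executable specification of the requirement described above.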

[Image: Dev Testing 1]

Test Driven

By far the most widely heard term in test-first development is Test Driven Development (TDD). Consequently, a cynic might say that it is also the most widely misunderstood! Differing literature calls it Test Driven Design or Test Driven Development, so even the expansion of the abbreviation is ambiguous.

There are two interpretations of TDD that are generally cited. Actually, the terms Test Driven Development and Test Driven Design would neatly describe each of these.

In the case of design, the usual explanation is that in order for a test to compile, there must be classes and methods of the classes to be tested already written. Hence when writing the tests, the construction of those tests makes us determine what classes, methods, and interconnections between those classes will be needed. This just-in-time design makes the developer write literally just enough code to pass the tests, thereby avoiding creating something more than was ever in the requirement.

In the case of development, a significant part of the TDD community would describe TDD as meaning TFD + refactoring. Refactoring is an activity where given a block of working code (code that is passing all its tests), we incrementally adjust the structure and dynamic behaviour of the code to improve its adaptability, extensibility and maintainability, without causing any tests to fail, and without adding any extra functionality. In this interpretation, there is no implied choice of classes and methods being driven by the code written in the tests. Rather there may be design involved prior to test construction, as discussed in AMDD later.

There is unease among many agile developers at the thought of having the test code drive the architectural design, while an equally significant group of agile practitioners quite strongly support the notion that tests should drive the incremental design. Needless to say, if the agile model-driven development ideas are practised, as discussed later, the notion of tests driving the design would no longer be the primary interpretation. The activity diagram below shows the TFD + refactoring view of TDD.

[Image: Dev Testing 2 – activity diagram of the TFD + refactoring view of TDD]
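The TFD + refactoring cycle, often summarised as red–green–refactor, can be sketched in plain Python. The requirement and the `free_shipping_count` function are invented for illustration:

```python
# A hypothetical red-green-refactor pass. Invented requirement: count how
# many items in an order qualify for free shipping (price of 50 or more).

# Red: this test is written first, and initially fails because no
# implementation exists yet.
def test_free_shipping_count():
    assert free_shipping_count([49.99, 50.0, 120.0]) == 2
    assert free_shipping_count([]) == 0

# Green: the first version is the simplest loop that passes the test, e.g.
#     count = 0
#     for price in prices:
#         if price >= 50:
#             count = count + 1
#     return count

# Refactor: restructure without changing behaviour and without adding
# any extra functionality; the test above must remain green throughout.
def free_shipping_count(prices):
    return sum(1 for price in prices if price >= 50)

test_free_shipping_count()  # still passes after the refactoring step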

Acceptance Test Driven Development

It is very common for those who practise TDD to apply it at the unit test and low-level integration test stages of development. In many cases, companies would still hand over to a separate test team to do the actual system testing or functional UAT. Often this test team is at best semi-automated, and the regression test suite grows progressively larger as agile development progresses from sprint to sprint.

If manual testing is involved, the duration of the manual regression tests grows until compromises start to be applied. Maybe regression does not get done completely, or is done less often. Maybe the test team grows until its salary bill is greater than that of the development team. Almost certainly it becomes impossible to run the regression tests within the agile iteration in which the code being tested was written. The outcome is that developers have to come back to code they developed in a previous sprint to address integration issues. This makes sprint planning very difficult, and requires the developer to drop what is foremost in their mind to go back and remember what the now-questioned code did. It can stretch the gap between tasks selected for development and their demonstration to the stakeholders to far longer than a single sprint. It means design decisions made on new code, begun since the code failing regression testing was written, may be invalid because we have to adjust older code on which it depends. All in all, the combination of a separate test team culture and an over-dependence on manual testing can ultimately kill the benefits of using agile development at all.

With ATDD, the high-level functional tests for the actual functional requirements are developed at the front of the sprint (development cycle) in which the code that realises those requirements will be developed. Where possible, these tests are automated. Ideally we should be aiming for automation in the upper-nineties percentage range, so that the manual testing doesn’t drag the project to its knees. These automated functional tests are derived from the requirement source, and confirmed with that source to make sure they represent an ‘executable statement of the requirements’. The developers now proceed to design, produce unit tests, develop low-level code, refactor it, and integrate it. As the implementation approaches completion, the already-developed automated acceptance tests can be run against the code to prove (or otherwise) that the code has realised what the stakeholders requested.

This approach has been growing in popularity among agile teams in recent years, and a number of tools have evolved to support it. The Gherkin language and the tools that execute it, such as Cucumber and SpecFlow, permit better traceability between the actual formal requirements and the executable tests. Also, the now venerable use-case driven approach to defining functional requirements continues to support this traceability, as use cases very easily provide requirements for developers, functional test scenarios for testing, and user documentation.
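A plain-Python sketch can mirror the Given/When/Then shape that Gherkin tools formalise. The scenario, the `Basket` class and the discount code are all hypothetical; in a real ATDD cycle the scenario text would be agreed with the stakeholder before the implementation is begun.

```python
# Hypothetical acceptance test, mirroring a Gherkin-style scenario:
#   Given a basket containing two items at £10 each
#   When the customer applies the discount code "SAVE10"
#   Then the total is £18

class Basket:
    """Minimal invented domain class, just enough to make the test runnable."""
    DISCOUNTS = {"SAVE10": 10}  # assumed table: code -> percent off

    def __init__(self):
        self._items = []       # prices held in pence to avoid float rounding
        self._percent_off = 0

    def add_item(self, pence, quantity=1):
        self._items.extend([pence] * quantity)

    def apply_discount(self, code):
        self._percent_off = self.DISCOUNTS.get(code, 0)

    def total_pence(self):
        subtotal = sum(self._items)
        return subtotal * (100 - self._percent_off) // 100

def test_discount_code_reduces_total():
    # Given
    basket = Basket()
    basket.add_item(1000, quantity=2)
    # When
    basket.apply_discount("SAVE10")
    # Then
    assert basket.total_pence() == 1800

test_discount_code_reduces_total()
```

Because the acceptance test is automated, it can be run as the implementation approaches completion, and then joins the regression pack unchanged.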

[Image: Dev Testing 3]

Agile Model Driven Development

The newest kid on the block as far as test-first approaches are concerned is AMDD. While not strictly a test-first approach to development, it does address one of the main concerns the wary developer cites when challenging TDD as an approach.

We have already said that the quality of design in test-driven agile comes from the extensive refactoring of code once it is passing its tests. We have implied that the need for the tests to compile forces the developer to define the classes and methods needed in the implemented code, and that the code would therefore need refactoring because of this ad hoc design strategy.

In traditional methodologies, the architectural and detailed design would have been done before any code was cut. This was an efficient approach: design is usually performed by people who are experienced, and who have previously ‘been around the block’ as coders themselves. Producing upfront designs before coding therefore makes sense, since it will minimise the amount of code that gets thrown away during refactoring. AMDD simply states that doing some modelling and design before developing the unit tests, and possibly in tandem with developing the automated acceptance tests, can improve efficiency. It removes time wasted developing code that is subsequently refactored out of the implementation. It is, however, not a replacement for TFD + refactoring; it is merely an additional activity to improve overall architecture and throughput.

As a rule, AMDD delivers at two points in an agile development process. First, there is usually a high-level workshop or brainstorming design phase in what is often called ‘sprint zero’. Sprint zero is the iteration before actual coding begins, when initial requirements are gathered, key stakeholders identified, agile processes defined and teams formed. This initial high-level design activity comes up with the overall high-level architecture and is usually driven by those members of the team with a lot of software architecture experience. In some cases, this sprint zero activity might occur once at the beginning of a project. In other cases, where the iterations of development (sprints) are grouped together to form releases, there may be a high-level design phase in a sprint zero at the beginning of each release.

Second, AMDD requires each product backlog item (an individual requirement) to also undergo a lower level, more detailed design modelling activity prior to coding up the tests. This activity would usually be conducted by the developer and tester who have picked up this item for development. The design might involve some informal use of modelling notation such as UML, but is not generally regarded as a formal modelling ‘step’ in a methodology. It is more of a design brainstorming activity, to at least come up with a viable structure for the classes, methods and dynamic behaviour of the use case, before going ahead with programming the unit tests and empty classes and methods.

[Image: Dev Testing 4]


Whichever of these approaches to test-first design and development is adopted, a few things are fundamentally clear.

Firstly, the need for automated testing is unquestionable. The sheer volume of manual regression testing that builds up during an iteratively developed project eventually spills outside of the iterations and severely distorts the process. Hence automating absolutely as much as possible should be a fundamental goal of whoever is driving the choice of software development process.

Secondly, the benefits to be gained by applying test-first approaches should not be dismissed. There is plenty of evidence of past projects, large and small, where the use of TDD has become routine, and where the practitioners would strongly advocate that new projects adopt the approach too. TFD ensures that testing is not skipped or deferred to the end of the project when time is running out. It educates the developers in what the code to be written should do at a detailed level, and ensures they consider robustness issues at the same time. It provides an automated suite that, while testing the code you are developing today, becomes part of the regression test pack without modification tomorrow.

Thirdly, without a trusted set of automated tests, dependable refactoring could not be performed. If you live in fear of altering your code lest you break it irreparably, or lest you have to go through a whole manual testing cycle again, refactoring just won’t happen.

Lastly, to be able to produce this set of trusted automated tests, and to be able to apply test-driven development at all, it is essential that test expertise is embedded in each agile team, and becomes a first-class part of the development activity itself. Keeping all the test expertise in a separate test team prevents this, and leads back to legacy waterfall-style development and test procedures, with all the missed opportunity that implies.

Further reading

  1. A further, more extensive overview of TFD, TDD, and Agile Model-Driven Development.
  2. “Test-Driven Development by Example” by Kent Beck, pub. Addison Wesley, 2003. A great if slightly pedantic detailed treatise on how to conduct TDD, but hey, pedantic is good!
  3. An article co-authored by Scott Ambler, one of the leading proponents of modelling in agile development. It gives good insights into what constitutes ‘just enough’ modelling for agile development, both in the sprint zero phase and at each iteration.
  4. “The Cucumber Book: Behaviour-Driven Development for Testers and Developers”, by Matt Wynne and Aslak Hellesøy, available as an e-book.
  5. “Agile Testing: Key Points for Unlearning” by MVL Expedith, available to read on the Scrum Alliance web site. This article explains one by one why the once-treasured key justifications for separate test teams don’t support agile development, and in some cases can even hinder it.

By Sean D Smith

Sean Smith is one of Britain’s top consultants, with a staggering 34 years of IT industry experience and a PhD to his name. He is a leader in Microsoft Enterprise Application development, software engineering and software development best practices. He has lectured at the Royal Military College of Science, Loughborough University and Southampton University.


About the Author

Sean D Smith is a DevOps and Agile Development specialist. Working in Agile Development for the last twenty years, he has worked for the likes of TFL and Learning Tree International and as an independent software consultant. Sean also has an impressive teaching career with years as a lecturer at the University of Southampton and an instructor at Learning Tree International. Sean now puts his teaching experience to use blogging about DevOps, Agile Development and Agile Test Automation.

Education, Membership & Awards

Sean graduated from the University of Kent at Canterbury with a BSc (Hon) in Electronics and a PhD in Digital Communications. He went on to become a Chartered Engineer and Member of the Institute of Engineering and Technology.
