Test Automation for Endur/Findur : Part 2 - Beyond Unit Testing

In Part 1 of this series I looked at where defects are typically found in an Endur/Findur implementation.  In Part 2 I look at testing solutions designed specifically for Endur/Findur implementations.

Part 1 showed that unit testing might catch less than a quarter of the defects.  This is not an argument against unit testing, but it does suggest that we need to do something more.  We need tests that interact with Endur and exercise the configuration that makes up a sizeable part of the solution.

I'll refer to these tests as automated system tests. I'll talk about the risks and rewards of attempting to automate system testing in Endur/Findur.

We've already written about why test automation fails, but here I want to talk specifically about how that relates to Endur.

I'll start with what I think is the most important concept:

Automated System Testing of Endur/Findur is not simply extending the scope of Unit Tests so they run connected to Endur/Findur.

I'll show why I believe this to be so and what can follow when you make this realisation.

Unit Testing : The Effort/Reward Pay-Off

All test automation is predicated on the belief that an investment of upfront effort is rewarded.  That reward can come in many forms: improved quality, lower cost of quality, and so on.

If we simplify slightly and think of "reward" as a single dimension then we can represent the typical pay-off as a graph.

[Figure: unit testing effort/reward pay-off]

Put simply

  • One unit test is better than none.

  • Two unit tests are better than one.

  • And so on.

Of course, you can write good tests and you can write bad or ineffectual ones, but, broadly speaking, the pay-off is linear; each unit test is incrementally beneficial and independent of the others.

Automated System Testing : The Effort/Reward Pay-Off

Suppose, for a moment, that we want to write an automated system test for invoicing; we want to check the values on an invoice for a range of situations.  Does the invoice use the correct settlement prices, does it handle long/short day, is the payment date derived correctly, and so on.

To automate such a test, we will need the following:

  1. The ability to book and modify deals of the required type.

  2. The ability to load historical prices.

  3. The ability to make the deal "fix" against the historical price(s).

  4. The ability to process documents (invoices) from one state to another.

  5. The ability to check the invoice - either the resulting document (PDF?) or the results from the output script.

That's a sizeable investment just to test invoicing.  The ability to book and modify deals on its own is a significant piece of work for any implementation using more than one or two toolsets.
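
To make the shape of that investment a little more concrete, here is a minimal sketch of those capabilities as a set of harness interfaces.  Every name below is a hypothetical illustration, not an Endur/Findur API type; each interface would ultimately be implemented against whatever scripting mechanism (OpenJVS, OpenComponents, tasks) your implementation uses.

```java
import java.time.LocalDate;
import java.util.Map;

// Hypothetical harness interfaces for an automated invoice test.
// None of these are Endur/Findur API types; each stands for one of the
// capabilities listed above and must be backed by real scripting work.

interface DealBooker {
    /** Books a deal of the required type and returns its deal number. */
    int bookDeal(String toolset, String instrumentType, Map<String, Object> attributes);

    /** Amends an existing deal (e.g. a volume or price change). */
    void amendDeal(int dealNumber, Map<String, Object> changes);
}

interface HistoricalPriceLoader {
    /** Loads a historical/settlement price for an index on a given date. */
    void loadPrice(String indexName, LocalDate resetDate, double price);
}

interface FixingService {
    /** Makes the deal's resets fix against the loaded historical prices. */
    void fixDeal(int dealNumber, LocalDate upToDate);
}

interface DocumentProcessor {
    /** Moves a document (e.g. an invoice) from one status to another and
     *  returns an identifier for the generated document. */
    long processDocument(int dealNumber, String documentType,
                         String fromStatus, String toStatus);
}

interface InvoiceChecker {
    /** Compares the generated invoice (or the output script's results)
     *  against the expected field values. */
    void assertInvoice(long documentId, Map<String, Object> expectedValues);
}
```

The interfaces themselves look trivial; the effort sits in the implementations behind them.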

If I draw the effort/reward pay-off, we can see that, in contrast to unit testing, there is considerable effort with no immediate reward.

[Figure: automated system testing risk/reward]

However, if we suppose that we have achieved success with automated invoice testing (quite a bold assumption, as we will see), then consider the additional work needed to automate the checking of confirmation documents.

To automate such a test, we will need the following:

  1. The ability to book and modify deals of the required type - check

  2. The ability to process documents (invoices) from one state to another - check

  3. The ability to check the confirmation - 50% check

Depending on how good a job we did with the invoice checker, we already have almost everything we need.

Now the shape of the graph changes.  Things are looking up, literally.  With a relatively small incremental effort we have achieved a significantly larger return: the ability to automatically test all of our confirmations.
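
As a sketch, the incremental piece for confirmations might look something like the test below.  It reuses the hypothetical DealBooker and DocumentProcessor interfaces from the earlier sketch; only the confirmation checker is new, and much of that can be adapted from the invoice checker.

```java
import java.util.Map;

// Hypothetical confirmation test.  DealBooker and DocumentProcessor are the
// interfaces sketched above for invoice testing; only the checker is new.

interface ConfirmationChecker {
    void assertConfirmation(long documentId, Map<String, Object> expectedValues);
}

class ConfirmationRegressionTest {
    private final DealBooker booker;                 // reused from invoicing
    private final DocumentProcessor documents;       // reused from invoicing
    private final ConfirmationChecker confirmations; // the "50%" new piece

    ConfirmationRegressionTest(DealBooker booker, DocumentProcessor documents,
                               ConfirmationChecker confirmations) {
        this.booker = booker;
        this.documents = documents;
        this.confirmations = confirmations;
    }

    void checkCommoditySwapConfirmation() {
        int dealNumber = booker.bookDeal("Commodity", "Swap",
                Map.of("Counterparty", "ACME Trading", "Volume", 1000.0));
        long documentId = documents.processDocument(dealNumber, "Confirmation",
                "Generated", "Sent to CP");
        confirmations.assertConfirmation(documentId,
                Map.of("Counterparty", "ACME Trading", "Volume", "1,000"));
    }
}
```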

Now imagine we wish to automate P&L testing.  We will need the following:

  1. The ability to book and modify deals of the required type - check

  2. The ability to load historical prices - check

  3. The ability to load forward curves

  4. The ability to run a task (or otherwise trigger a simulation)

  5. The ability to check simulation results.

There is still some work to be done, to be sure, but we're no longer starting from scratch.  If you've proven the effort/reward pay-off so far, you should have no difficulty persuading someone to invest further to automate P&L testing.
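
The genuinely new capabilities are items 3 to 5.  As a sketch, and again using purely hypothetical names rather than Endur/Findur API calls, they might be expressed as further additions to the same harness:

```java
import java.time.LocalDate;
import java.util.Map;

// Hypothetical additions to the harness for P&L testing.  Curve loading,
// simulation runs and result checking are new; deal booking and historical
// price loading already exist from the invoice and confirmation tests.

interface ForwardCurveLoader {
    /** Saves a forward curve (delivery date -> price) for an index as of a date. */
    void loadCurve(String indexName, LocalDate asOfDate, Map<LocalDate, Double> prices);
}

interface SimulationRunner {
    /** Runs a saved simulation (or the task that triggers it) over a portfolio
     *  and returns a handle to the stored results. */
    long runSimulation(String simulationName, String portfolio);
}

interface SimulationResultChecker {
    /** Asserts a named result (e.g. MTM or P&L) for a deal, within a tolerance. */
    void assertResult(long simulationRunId, int dealNumber, String resultName,
                      double expectedValue, double tolerance);
}
```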

The Inflection Point

If you buy the premise that system test automation requires some investment before it begins to pay off, the questions are: When do I get my pay-back? When does the graph change? What is the inflection point?

Maybe the effort/reward actually looks like this...

[Figure: failed system testing pay-off]

The pay-off never really materialises; the investment is likely to be terminated before it delivers a return.

In fact, I will go beyond saying maybe the effort/reward looks like this.  I will say it is likely to look like this ... unless you understand why automated system testing is more than just extended unit testing.

Why Automated System Testing is not just an extension in scope of Unit Testing

Suppose you are merrily writing unit tests and you discover that one area is proving difficult.  Maybe you're using some third-party code (such as Endur/Findur) that doesn't expose interfaces, is hard to mock, or whatever.  The unit tests you were planning to write will either take longer or be reduced in effectiveness.
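
One common workaround, sketched below with purely hypothetical names, is to hide the awkward third-party call behind a small interface you own so that the unit test can substitute a hand-written fake; even so, some tests end up harder or weaker than planned.

```java
import java.time.LocalDate;

// A common workaround: hide the hard-to-mock third-party call behind a small
// interface you own.  The names here are hypothetical, not Endur/Findur API.

interface SettlementPriceSource {
    double priceFor(String indexName, LocalDate resetDate);
}

class PaymentCalculator {
    private final SettlementPriceSource prices;

    PaymentCalculator(SettlementPriceSource prices) {
        this.prices = prices;
    }

    /** Pure calculation that can now be unit tested with a fake price source. */
    double payment(String indexName, LocalDate resetDate, double volume) {
        return prices.priceFor(indexName, resetDate) * volume;
    }
}
```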

What does that matter to the rest of the unit testing efforts?  It doesn't matter at all.  If one unit test is not possible it has absolutely no impact on any other unit tests.  They are, by definition, independent.

Contrast that with the example we looked at earlier: automating the testing of invoicing.  Suppose you find that there is no mechanism for automatically loading historical prices.  Now your whole idea of fully automated system testing of invoices is in tatters; if you have to stop and manually intervene, your ability to run overnight regressions has gone out the window.

One weak link in the chain and the value of test automation (even the viability of test automation) is called into question.

With unit testing you don't need to know where you're heading; just test each class on its own and you can be sure you're making small, incremental contributions to the greater good.

With automated system testing you have to have a plan; a plan you can reliably expect to succeed before you start. Automated system testing needs to be designed and architected.

Step 1 to successful system test automation is therefore to recognise that it is not just extending the scope of unit testing.

Step 2 is to recognise that this requires a viable plan; a design; a test architecture.

Step 3 is to recognise that not all testers are designers and test architects.

Moving The Goal Posts (in a good way)

The key to reliably hitting the inflection point is, of course, to start there.

To return to the example of automating the testing of invoices: the way deals and historical prices are captured in Endur is, mostly, standard.  It's generic functionality that we need to automate before we can get to the interesting (and probably customer-specific) bit: the invoice proper.

[Figure: start here]

If you could start with the generic capabilities already covered then...

  1. The effort/reward pay-off comes sooner.

  2. The risk that the whole thing is not possible is greatly reduced.

In Part 3 I'll look at how this can be achieved and discuss SpecFlow as a test automation tool and its use with Endur/Findur.