Friday, 21 March 2014

Writing Automated Tests? Make sure they fail!

I'm currently adding to the automated test pack that we use to smoke test our LIVE site after a deploy.

It's a LIVE site, so we don't know what data will be returned and we can't exactly do any data building or importing. So this is just a suite of scenarios that checks some basic functionality without being either constructive or destructive.

I've written a scenario using our wonderful in-house-developed test framework, Substeps. It navigates to a public page, searches for a term that DEFINITELY exists, and checks that some results are returned. Simple, right?
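
For anyone who hasn't seen it, the gist of the scenario is roughly this. This isn't actual Substeps syntax, just a bare-bones Selenium WebDriver sketch, and the URL and element IDs are made up stand-ins for the real ones:

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    public class SearchSmokeSketch {
        public static void main(String[] args) {
            WebDriver driver = new FirefoxDriver();
            try {
                // Hypothetical URL and IDs - stand-ins for the real page
                driver.get("https://example.com/search");
                driver.findElement(By.id("searchBox")).sendKeys("a term that definitely exists");
                driver.findElement(By.id("searchButton")).click();

                // Naive verification: is the results container on the page?
                boolean containerPresent = !driver.findElements(By.id("resultsContainer")).isEmpty();
                System.out.println("Results container present: " + containerPresent);
            } finally {
                driver.quit();
            }
        }
    }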

Well, because I don't know exactly what data will be returned, I can't write a verification step for it. So I'm just checking that the containers appear. Again, pretty simple, because a helpful developer has given them all unique and sensible ID attributes.

The problem here is that these particular elements are always on the page - they are just hidden until they are required and populated. So even if the search fails or no results are returned, the final few steps will always pass.
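
In WebDriver terms (again, a rough sketch with made-up IDs rather than our actual steps), the difference between the check I had and the check I needed looks something like this. The first one simply cannot fail on this page; the second one can:

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;

    public class ResultsChecks {
        // Passes whenever the container exists in the DOM, even while it is
        // hidden - which on this page is always. A check that cannot fail.
        static boolean containerIsPresent(WebDriver driver) {
            return !driver.findElements(By.id("resultsContainer")).isEmpty();
        }

        // Only passes once the container has actually been made visible,
        // i.e. once the search has populated and revealed it.
        static boolean containerIsVisible(WebDriver driver) {
            WebElement container = driver.findElement(By.id("resultsContainer"));
            return container.isDisplayed();
        }
    }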

With a little bit more digging I did find some IDs that are only generated when the search results are returned.

So the moral is: when you've written a test that passes, nobble it to make sure that it fails! Run it with different data, or with a couple of steps missing, to simulate a failure. While a test that passes is a good thing, a test that always passes is anything but.
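
By way of illustration (same made-up IDs as before, not our real steps), nobbling the scenario can be as simple as re-running the verification after a search that should match nothing, and making sure it really does blow up:

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;

    public class NobbleTheTest {
        // Re-run the verification after a search that should match nothing.
        // If the "results are shown" check still passes here, it was never
        // really checking anything.
        static void proveTheCheckCanFail(WebDriver driver) {
            driver.get("https://example.com/search");
            driver.findElement(By.id("searchBox")).sendKeys("zzz-term-that-matches-nothing-zzz");
            driver.findElement(By.id("searchButton")).click();

            boolean visible = driver.findElement(By.id("resultsContainer")).isDisplayed();
            if (visible) {
                throw new AssertionError("Check passed with no results - it proves nothing");
            }
            System.out.println("Check failed as expected - it is doing its job");
        }
    }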


[Footnote: Yes, I realise that in this instance I am definitely writing automated checks rather than automated tests. But quite frankly I'm getting sick and tired of that argument so let's just forget about it for now, eh?]
