Thursday, 11 September 2014

Do you really need a test automation framework?

Virtually every project I have worked on has, at some point in its life, had a close encounter with a well-meaning, all-things-to-all-men test automation framework that has caused more harm than good, and cost a fortune to build in the process.

Maybe it's due to the ubiquitous SDET (Software Development Engineer in Test), or a well-meaning tester with no programming background, or a very demanding test manager, or just a plain old developer who should know better, but the continuing prevalence of massive test automation frameworks is slightly worrying. I both frequently hear about them and very often see them. If they appear on my projects, they usually die a swift death.

So what's the problem? 


The premise behind these frameworks is that the only way to execute or optimise automated testing is to run vast swathes of system tests, usually from the highest interface into the system, using a "special" test framework. If this is your only way to provide any kind of code coverage, or any guarantee of safe delivery, then it points to a poor delivery process or a badly engineered system that is no doubt flaky and incredibly difficult to change and maintain.

These frameworks are typically maintained by a tester, test team, or a specialist bunch of mercenary developers. 

You will normally find the tests executed by these frameworks to be long-running, high-maintenance, and vague in what they actually do. In many cases they will be driven through a UI, which typically means something like WebDriver, wrapped in an obviously completely necessary layer of abstraction, and maybe some fruity BDD framework to describe what the tests are doing.

Being system tests, they will typically be run in an integrated, full-stack environment. This in itself, regardless of test design, is a complexity that most would wish to avoid: data management, permissions, versioning, infrastructure availability, and so on all come into play here.

They very often sit in a separate test project that is completely decoupled from the code they are testing, meaning that version synchronisation issues at the feature level become a real problem. They are also hardly ever created by the people who need the feedback from these tests the most - the developers.

The cost of both building and maintaining test frameworks for any enterprise sized solution can become astronomical in comparison to the actual value and risk reduction they deliver.

What I typically see is a very high test execution failure rate that can't be attributed to actual code changes. I have seen some fairly large enterprise projects where the test run failure rate was between 69% and 91%, with those failures not being tied to a code or configuration change. That is quite shocking. Equally, I have seen failure rates lower than 10%, but that does seem rare. If you couple the failure rate with the typical cost of building a test framework for any reasonably complex system, then the value equation becomes quite clear. Just multiply the day rate of everyone involved in building the framework by the number of days it takes to build, do the same for the ongoing maintenance cost, deduct all of that from your expected profit, and work out whether the cost is justified.

The other cost-related issue lies with the constant rebuilding of test frameworks. As test frameworks are not typically used in production, their change management tends to be less rigid. I'm not sure what the average lifespan of a test framework is, but based on experience I would hazard a guess at less than two years.

Essentially, these test frameworks are one of the symptoms of a badly planned test/design approach applied to your system.  

And the cause?


So do you really need to build this all-singing, all-dancing test automation framework? There are many causes and reasons why test frameworks appear.

Confidence
Test managers, testers, release managers, project managers, etc. may have no understanding of how developer-driven tests may be helping to reduce the risk of failure when adding and changing features. This encourages regression test phases, release test phases, and pre-prod test phases, all usually heavily laden with time-draining manual and automated test framework antics. (Steve Smith goes into the detail of release testing, and how dubious that activity is, and that's without even talking about test frameworks!)

Inexperience
Engineers who have limited experience of commercial development, or engineers who should know better, choose to knock out something that works over something that works and is maintainable. Very often you see the "responsibility for quality" foisted upon testers or QA engineers within a team or organisation. These individuals very often don't have a programming background and will create, with the best of intentions, a safety net in the form of a test framework that will typically be riddled with design anti-patterns and difficult-to-maintain code. Again, this encourages the inclusion of multiple test phases, due to the lack of confidence caused by the general absence of valuable and consistent testing feedback.

Cost
Project and test managers may believe that tester or QA headcount can be reduced through the use of automation. This leads to attempts to automate the types of tests that a tester would execute. In reality, the things that get automated are just simple checks, not the complex interactive tests that a human being can perform. This in turn either increases project costs, because you are paying for both manual and automation testers, or leads to poorer quality code being delivered because less interactive testing takes place. Either way, there is a hit on the money you have to spend.

Legacy Systems
We've probably all worked on a legacy system with no tests and no documentation. If we need to make changes to this system, or refactor it into something more manageable, then we do need some kind of safety net. This can rear its head in the form of large numbers of system tests being run by a test automation framework.

What's the solution?

A decent test approach during the entire life cycle of your system, from concept to the end of its life. 

That approach would typically include a combination of collaboration tests (isolated tests that use test doubles) and contract tests, coupled with those few tests that give you the warm fuzzy feeling that your system is hanging together nicely. See this great video from J.B. Rainsberger ("Integrated Tests Are A Scam") about how to design your tests to optimise not only how the system is tested, but also how it is designed.
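To make that a little more concrete, here is a minimal sketch of a collaboration test in Java with JUnit. The class and interface names are invented purely for illustration, and the test double is just a hand-rolled stub rather than anything from a mocking library:

 import static org.junit.Assert.assertEquals;

 import org.junit.Test;

 public class PremiumCalculatorTest {

     // Collaborator interface - in the real system this might hit a database or a web service.
     interface RateLookup {
         double rateFor(String productCode);
     }

     // The unit under test talks to its collaborator through the interface,
     // so it neither knows nor cares how rates are really fetched.
     static class PremiumCalculator {
         private final RateLookup rates;

         PremiumCalculator(RateLookup rates) {
             this.rates = rates;
         }

         double premiumFor(String productCode, double sumInsured) {
             return sumInsured * rates.rateFor(productCode);
         }
     }

     @Test
     public void calculatesPremiumUsingTheLookedUpRate() {
         // Test double: a stub standing in for the real rate lookup, so no database,
         // network, or full-stack environment is needed to run this test.
         RateLookup stubRates = productCode -> 0.05;

         PremiumCalculator calculator = new PremiumCalculator(stubRates);

         assertEquals(500.0, calculator.premiumFor("HOME-01", 10_000), 0.001);
     }
 }

A matching contract test would then run the same expectations against the real RateLookup implementation, which is what keeps these isolated tests honest.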

If you join this all up with the testing executed by an expert interactive tester, then you will never have the need for a test framework ever again! Honest!

Of course, that's a very idealistic and simplistic way to put it, but that is the bare bones of it. I think some of the bigger problems really do come from those legacy systems with no tests, the ones that you need to refactor but have no safety net for. The instant reaction is to add a mountain of system tests driven by a framework to give you the sense that the risk of breakage is covered. In my mind, it's probably more cost-effective, and safer, to have some good testers and a small number of warm and fuzzy smoke tests to help you over the hump of making a change, with some collaboration and possibly contract tests included in the work required for each change. Over time, especially during a refactor, the test approach will modernise and provide more built-in safety.

Finally..


I have spent a number of years during my career either building or contributing to test automation frameworks. My experience is generally negative (hence this post). However, there is a test framework that I built in 2006 still running a couple of hundred system tests every single day on an insurance policy management system. The reason for its existence was, and still is, confidence: not in the software itself, but in the infrastructure on which it is hosted. It's still there, still going strong. I've not done the sums to work out the value, but I'm sure either the company using it, or the consulting company maintaining it, has a lot to thank me for. Value is in the eye of the beholder, after all!



Wednesday, 10 September 2014

Regression testing

What is regression testing? Why do it? What value does it have? My short answer to these questions is that regression testing, as a concept, is about searching for new faults introduced through changes to the code base, and about learning more about the system you are building.

Usually, in modern development, an automated build pipeline is constantly executing checks that help detect whether the changes made to your code have introduced new faults. Couple this with the interactive testing that we do throughout the development of any story, and we have a combined effort that you could describe as regression testing.

In my mind, that's pretty much it. Its main purpose is to find faults, and to learn more about the system.

Many people become confused about the term regression testing, and rightly question its worth when they encounter automation-only solutions, typically driven through a UI or API, or regression testing run as a phase that occurs at some point on the critical path to delivery. Both of these approaches fall into the realm of delivery anti-patterns.

If you think of your entire test and development approach as a way to minimise the risk of regression and to deliver the right thing in the most effective and safe way, you very rarely have to think about regression testing as a separate activity, or justify the need for it.

That's my rather simplistic view. For more detail, have a read of Michael Bolton's presentation "Things Could Get Worse: Ideas About Regression Testing".

Thursday, 27 February 2014

Vagrant - simple environment provisioning

Vagrant is a tool that I'm using more and more these days.
Vagrant provides easy to configure, reproducible, and portable work environments built on top of industry-standard technology and controlled by a single consistent workflow to help maximize the productivity and flexibility of you and your team. 
To achieve its magic, Vagrant stands on the shoulders of giants. Machines are provisioned on top of VirtualBox, VMware, AWS, or any other provider. Then, industry-standard provisioning tools such as shell scripts, Chef, or Puppet, can be used to automatically install and configure software on the machine.
I use it with VirtualBox to quickly install and launch different virtual machines for both development and testing tasks. Some of the teams that I work with have used Vagrant within their build process to facilitate test automation, or to provide environment consistency in a delivery pipeline.

It's very easy to set up, and the process is similar on most operating systems:
  1. Install VirtualBox
  2. Install Vagrant 
  3. Navigate to where you want to install a vm
  4. Go to a site like http://www.vagrantbox.es/ and choose a prebuilt box
  5. Then do:
     $ vagrant box add {title} {url}
     $ vagrant init {title}
     $ vagrant up
This will create, initialise, and launch the box of your choice in VirtualBox. At step 4, you can just as easily point to your own custom-built boxes.

Once you have the box built, you can use the following commands to bring it up, shut it down, or rebuild it:
# to start the box
 $ vagrant up
# to stop the box
 $ vagrant halt
# to rebuild the box
 $ vagrant destroy --force && vagrant up
You can then bring tools like Puppet into the equation to manage installations and configuration on the box.

Tuesday, 18 February 2014

Alternative views of the test pyramid diagram

warning.. there is an attempt at humour in here..
A lot of people like to use simple diagrams to explain their test approach on a project or system. One that crops up a lot is the test pyramid, as described in Alister Scott's WatirMelon blog. I love this diagram, and thought that it gave people a high-level understanding of what you are going to test and how, both automatically and as a human, on a project. I was wrong! After a recent series of criticisms of a test strategy I produced, I realised that not everyone sees the world like I do. So here are a couple of takes on the test pyramid to help other people understand the diagram a bit better. It's so difficult to be all things to all people!

Pyramid at sunrise
Obviously, the sun represents what humans do, with their minds etc., and the pyramid describes the effort invested in certain test automation activities.

When I was looking at the diagram above, I couldn't help noticing similarities to pac-man, so here is my second attempt.

pac-man tests
For more information see my brief entry on the test pyramid.

Wednesday, 5 February 2014

Low-ceremony HTTP request mocking with Betamax

Whilst researching effective HTTP request mocking in JUnit tests, I came across a great project called Betamax, created by Rob Fletcher.
Betamax is a tool for mocking external HTTP resources such as web services and REST APIs in your tests. The project was inspired by the VCR library for Ruby.
You don’t want 3rd party downtime, network issues or resource constraints (such as the Twitter API’s rate limit) to break your tests. Writing custom stub web server code and configuring the application to connect to a different URI when under test is tedious and might not accurately simulate the real service. 
Betamax aims to solve these problems by intercepting HTTP connections initiated by your application and replaying previously recorded responses.
The first time a test annotated with @Betamax is run, any HTTP traffic is recorded to a tape, and subsequent test runs will play back the recorded HTTP response from the tape without actually connecting to the external server.
Tapes are stored to disk as YAML files and can be modified (or even created) by hand and committed to your project’s source control repository so they can be shared by other members of your team and used by your CI server. Different tests can use different tapes to simulate various response conditions. Each tape can hold multiple request/response interactions. An example tape file can be found here.
Betamax works with JUnit and Spock. Betamax is written in Groovy, but can be used to test applications written in any JVM language.
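To give a feel for the mechanics, here is a rough sketch of a JUnit test using Betamax. The URL, tape name, and expected response body are invented for illustration, and it assumes Betamax's default proxy port of 5555; on the first run the interaction is recorded to the tape, and on every run after that it is replayed from the YAML file:

 import static org.junit.Assert.assertEquals;

 import java.io.BufferedReader;
 import java.io.InputStreamReader;
 import java.net.HttpURLConnection;
 import java.net.InetSocketAddress;
 import java.net.Proxy;
 import java.net.URL;

 import org.junit.Rule;
 import org.junit.Test;

 import co.freeside.betamax.Betamax;
 import co.freeside.betamax.Recorder;

 public class StatusClientTest {

     // The Recorder rule starts the Betamax proxy for each test and loads the named tape.
     @Rule
     public Recorder recorder = new Recorder();

     @Betamax(tape = "status service")   // hypothetical tape name; stored as a YAML file under the tape root
     @Test
     public void readsStatusFromTheExternalService() throws Exception {
         // Route the request through the Betamax proxy so the interaction is recorded and replayed.
         Proxy betamaxProxy = new Proxy(Proxy.Type.HTTP, new InetSocketAddress("localhost", 5555));
         HttpURLConnection connection =
                 (HttpURLConnection) new URL("http://api.example.com/status").openConnection(betamaxProxy);

         try (BufferedReader reader =
                      new BufferedReader(new InputStreamReader(connection.getInputStream()))) {
             assertEquals(200, connection.getResponseCode());
             assertEquals("OK", reader.readLine());   // assumes the recorded tape contains this body
         }
     }
 }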

For more information please visit:


Mocking out system dependencies with MockServer

Have you ever had the problem of running a web application in a development environment that had multiple system dependencies, only to find that those dependencies are unavailable?

MockServer, built by James D Bloom, can help you solve this problem.
MockServer is for mocking of any system you integrate with via HTTP or HTTPS (i.e. services, web sites, etc).
MockServer supports:
  • mocking of any HTTP / HTTPS response when any request is matched 
  • recording requests and responses to analyse how a system behaves 
  • verifying which requests and responses have been sent as part of a test
MockServer is built using Java and has bindings for both Java and JavaScript, although any language that can send JSON over HTTP can easily use the MockServer API. There are some .NET bindings in the pipeline, but until those come along you could use Nancy to provide similar capabilities.
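As a flavour of how it works, here is a minimal sketch using the Java client. The port, path, and response body are invented for illustration; the application under test would simply be pointed at http://localhost:1080 instead of the real downstream system:

 import static org.mockserver.integration.ClientAndServer.startClientAndServer;
 import static org.mockserver.model.HttpRequest.request;
 import static org.mockserver.model.HttpResponse.response;

 import org.mockserver.integration.ClientAndServer;

 public class DependencyStubExample {

     public static void main(String[] args) {
         // Start MockServer on an arbitrary local port.
         ClientAndServer mockServer = startClientAndServer(1080);

         // When the application asks for a customer, reply with a canned JSON body.
         mockServer
                 .when(request()
                         .withMethod("GET")
                         .withPath("/customers/42"))
                 .respond(response()
                         .withStatusCode(200)
                         .withBody("{\"id\": 42, \"name\": \"Test Customer\"}"));

         // ... exercise the application against http://localhost:1080 here, then shut the mock down.
         mockServer.stop();
     }
 }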

For more information on MockServer please visit

http://www.mock-server.com/