Saturday, 24 September 2011

UI Automation: Avoiding Failure


I have developed and worked on many UI automation frameworks using both commercial and open-source tools; some have been more successful than others. Regardless of the tool you use, there are some common indicators that can help you identify whether you are going down the wrong path.

No business backed strategy to begin UI automation

UI automation is expensive; it requires a lot of time and effort. Make sure you have enough secured resource to carry out your plans. If you don’t, that resource will be pulled elsewhere and you will end up with several unfinished automation projects that have cost the business money but delivered no value.

No clear reason to automate

Why are we automating? If you have no clear objectives, what’s the point?

Some of the reasons why we should automate:
  • to reduce the feedback loop between code implementation and possible defect detection
  • to reduce the amount of time we spend in resource intensive activities such as regression testing
  • to remove human error from test execution

…and we should automate when:
  • we have stable functionality
  • we have the skill necessary to automate

No design pattern is being applied to the test design

There is no need to reinvent the wheel with UI automation. Using a recognised design pattern like the page object model will reduce the time it takes to automate. It will also give you cleaner and easier to understand tests. Other engineers will be able to pick up the project and understand it.
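To make the idea concrete, here is a minimal page object sketch in Python. The FakeDriver below is an in-memory stand-in for a real browser driver (such as Selenium WebDriver), and all the locator names and driver methods are illustrative assumptions, not a real API:

```python
class FakeDriver:
    """In-memory stand-in for a browser driver, so the sketch runs anywhere."""
    def __init__(self):
        self.fields = {}

    def type(self, locator, value):
        self.fields[locator] = value

    def click(self, locator):
        # Pretend that clicking the login button logs the user in.
        if locator == "login-button":
            self.fields["welcome"] = "Hello, " + self.fields.get("username", "")

    def text_of(self, locator):
        return self.fields.get(locator, "")


class LoginPage:
    """Page object: the locators and interactions for one page live in ONE place."""
    USERNAME = "username"
    PASSWORD = "password"
    SUBMIT = "login-button"

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.type(self.USERNAME, user)
        self.driver.type(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)
        return HomePage(self.driver)


class HomePage:
    WELCOME = "welcome"

    def __init__(self, driver):
        self.driver = driver

    def welcome_message(self):
        return self.driver.text_of(self.WELCOME)


# The test reads like the user journey and never mentions a locator directly:
home = LoginPage(FakeDriver()).login("alice", "s3cret")
print(home.welcome_message())  # -> Hello, alice
```

When a locator changes, you fix it in one page object rather than in every test that touches that screen.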

Tests are brittle

If small changes to the AUT cause the tests to break then your tests are too brittle and not easily maintainable. 

Things to look out for:

  • Too many checkpoints per test. Always ask yourself what the value of a checkpoint is before implementing it. The more checkpoints we have, the greater the chance of a test failing. In automation I only check critical information. Many people add checkpoints for page layout, sizing, or applied styles. This can be useful, but it becomes hazardous when you start cross-browser testing or testing at different resolutions.
  • Hard-coded data. Tests should be driven by a driver class or datasheet if data is required. If you don’t have complete control over your test data, it could change and your tests will break. Being able to feed data in gives you much greater control over the tests and makes them far more useful.
  • Over-engineered tests. Is your test code more complex than the code it is testing? Make sure your tests are not failing because of over-complicated code. I've seen this happen so many times!
  • Tests that depend on the results of other tests. This is a very common mistake. If a test that others depend on fails, every dependent test will also fail.
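The hard-coded data point can be sketched as a simple data-driven loop. In a real project the rows would live in an external datasheet under the test team's control; the inline CSV and the greet() function here are hypothetical stand-ins for real test data and a real application under test:

```python
import csv
import io

# Stand-in for an external datasheet the test team controls.
TEST_DATA = """username,expected
alice,Hello alice
bob,Hello bob
"""

def greet(username):
    # Stand-in for the behaviour under test.
    return "Hello " + username

def run_data_driven(raw_csv):
    """Feed every row through the same test logic; no values are hard-coded."""
    outcomes = []
    for row in csv.DictReader(io.StringIO(raw_csv)):
        outcomes.append((row["username"], greet(row["username"]) == row["expected"]))
    return outcomes

print(run_data_driven(TEST_DATA))  # -> [('alice', True), ('bob', True)]
```

Adding a new case is then a new row of data, not a new copy of the test.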

No control over the data that drives the tests. When the data changes, the tests fail.

Data control is one of the most common reasons why UI automation projects fail. Any automation project should have its own environment where the data is guaranteed. If the environment is shared, then you must have a way of protecting the data your tests use. It’s very frustrating when your data is tampered with and the result is failed tests.

Tests fail because page or application objects are identified using ambiguous names or auto-generated identifiers.

UI automation requires us to identify page objects through their properties or location. Many applications generate unique, changing names for objects. Relying on these names is not safe. Try to get developers to add stable, meaningful identifiers to objects; it will keep the tests from becoming brittle and save time during test creation.

Long XPath expressions are also a common cause of test failure. Try to avoid them!
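The contrast between locator styles can be shown in a few lines. All the locators below are hypothetical examples, and the heuristic function is just a rough illustration of what makes a locator risky, not a real tool:

```python
# Illustrative locators (all hypothetical):
STABLE = "save-button"                                 # developer-added id
AUTOGEN = "ctl00_MainContent_ctl07_btnSave"            # regenerated each build
LONG_XPATH = "//div[2]/table/tbody/tr[5]/td[3]/span/a" # positional, brittle

def locator_risk(locator):
    """Crude heuristic flagging locators that are likely to be brittle."""
    if locator.startswith("//") and locator.count("/") > 4:
        return "high: long positional XPath"
    if "ctl0" in locator:
        return "high: auto-generated identifier"
    return "low: stable identifier"

print(locator_risk(LONG_XPATH))  # -> high: long positional XPath
print(locator_risk(AUTOGEN))     # -> high: auto-generated identifier
print(locator_risk(STABLE))      # -> low: stable identifier
```

A stable id survives layout changes and new builds; the other two break as soon as the page structure or build output shifts.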

More time is spent on maintaining tests than testing

This is a sure sign you have problems. 

Things to look out for:
  • A large number of tests fail on each test run
  • Engineers are “babysitting” test runs. Helping the tests complete is not automation!
  • You seem to be spending all your resource on one application. You never get to automate anything else
  • New test creation drops

Nobody knows what the test coverage is

One thing I’ve found in agile environments is that although UI automation is high on the agenda, there is little understanding of what the test coverage is. Developers and testers churn out tests, get excited, but there is no understanding of what really has been tested. This means that you still have to invest in manual regression runs to assure yourself. UI automation is a safety harness, so we need to know how much protection it is giving us.