Sunday, 30 October 2011

GTAC 2011 - Cloudy with a Chance of Test - Videos

Just got back from the Google Test Automation Conference 2011, which is probably one of the best software development conferences out there, and it's free! The focus is on test automation, but the crowd is a mixed bunch from across the broad spectrum of development.

Highlights for me:

Keynote from Alberto Savoia

Test is Dead (Don't take this literally, please!)

http://www.youtube.com/watch?v=gQclnI_8Vmg&list=PLBB2CAFDDBD7B7265

Keynote from Hugh Thompson

How hackers see bugs:

Overview of Angular and its test capabilities from Miško Hevery

All of the videos for GTAC 2011 are here: 

Saturday, 24 September 2011

UI Automation: Avoiding Failure


I have developed and worked on many UI automation frameworks using both commercial and open source tools; some have been more successful than others. Regardless of the tool you use, there are some common indicators that can help you identify if you are going down the wrong path.

No business backed strategy to begin UI automation

UI automation is expensive; it requires a lot of time and effort. Make sure you have enough secured resource to carry out your plans. If you don’t, that resource will be pulled somewhere else and you will end up with several unfinished automation projects that have cost the business money but bring no value.

No clear reason to automate

Why are we automating?  If you have no clear objectives, what’s the point?

Some of the reasons why we should automate
  • to reduce the feedback loop between code implementation and possible defect detection
  • to reduce the amount of time we spend in resource intensive activities such as regression testing
  • to remove human error from test execution

       …and we should automate when:
  • we have stable functionality
  • we have the skill necessary to automate

No design pattern is being applied to the test design

There is no need to reinvent the wheel with UI automation. Using a recognised design pattern like the page object model will reduce the time it takes to automate. It will also give you cleaner, easier-to-understand tests, so other engineers will be able to pick up the project and understand it.
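A minimal sketch of the page object model described above. The page class (the names `LoginPage` and the locators are illustrative) owns all the selectors and exposes intent-level methods, so tests never touch raw locators; the `driver` can be any object with a `find_element(by, value)` method, such as a Selenium WebDriver.

```python
class LoginPage:
    # Locators live in one place; if the UI changes, only these lines change.
    USERNAME = ("id", "username")
    PASSWORD = ("id", "password")
    SUBMIT = ("id", "submit")

    def __init__(self, driver):
        # `driver` is anything exposing find_element(by, value),
        # e.g. a Selenium WebDriver or a stub in unit tests.
        self.driver = driver

    def login(self, user, password):
        # Tests call this intent-level method rather than poking at locators.
        self.driver.find_element(*self.USERNAME).send_keys(user)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()
```

A test then reads as `LoginPage(driver).login("user", "pass")`, and a change to the login form touches one class instead of every test.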

Tests are brittle

If small changes to the AUT cause the tests to break then your tests are too brittle and not easily maintainable. 

Things to look out for:

  • Too many checkpoints per test. Always ask yourself what the value of a checkpoint is before implementing it. The more checkpoints we have, the greater the chance of a test failing. In automation I only check for critical information. Many people add checkpoints for page layout, sizing, or applied styles. This can be useful, but it becomes hazardous once you start cross-browser testing or testing at different resolutions.
  • Hard coded data. Tests should be driven by a driver class or datasheet if data is required. If you don’t have complete control over your test data then it could change and your tests will break. Being able to feed data in gives you much greater control over the tests and makes them much more useful.
  • Over engineered. Is your test code more complex than the code it is testing? Make sure that your tests are not failing due to over complicated code. I've seen this happen so many times!
  • Tests that depend on the results of another test. This is a very common mistake: if one test in the chain fails, every dependent test fails with it.
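The hard-coded data point above can be sketched as a data-driven test loader: the cases live in a datasheet (a CSV here; the file name and columns are illustrative) rather than in the test body, so swapping the sheet changes the run without touching test logic.

```python
import csv
import io

# Stand-in for an external datasheet; in practice this would be a file
# under the test project's control.
DATASHEET = io.StringIO(
    "username,password,expect_success\n"
    "alice,s3cret,true\n"
    "bob,wrong,false\n"
)

def load_login_cases(sheet):
    """Yield (username, password, expect_success) tuples from a datasheet."""
    for row in csv.DictReader(sheet):
        yield row["username"], row["password"], row["expect_success"] == "true"
```

Each yielded case then drives one independent test run, so a data change is an edit to the sheet, not to the tests.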

No control over the data that drives the tests. When the data changes, the tests fail.

Data control is one of the most common reasons why UI automation projects fail. Any automation project should have its own environment where data is guaranteed. If the environment is shared, then you must have a way of protecting the data your tests use. It's very frustrating when your data is tampered with and the result is failed tests.

Tests fail because the identification of page or application objects is using ambiguous names, or auto generated identifiers.

UI automation requires us to identify page objects through their properties or location. Many applications generate unique, changing names for objects, and relying on these names is fragile. Try to get developers to add decent identifiers to objects; it will help keep the tests from becoming brittle and save time during test creation.

Long XPath identifiers are also a common cause of test failure. Try to avoid them!

More time is spent on maintaining tests than testing

This is a sure sign you have problems. 

Things to look out for:
  • A large number of tests fail on each test run
  • Engineers are "babysitting" test runs. Helping the tests complete is not automation!
  • You seem to be spending all your resource on one application. You never get to automate anything else
  • New test creation drops

Nobody knows what the test coverage is

One thing I’ve found in agile environments is that although UI automation is high on the agenda, there is little understanding of what the test coverage is. Developers and testers churn out tests, get excited, but there is no understanding of what really has been tested. This means that you still have to invest in manual regression runs to assure yourself. UI automation is a safety harness, so we need to know how much protection it is giving us.


Sunday, 13 December 2009

Quick Tips: Defect Tracking with Google Wave

On an agile project I'd been using Skype to track defect conversations I was having with developers. We had no easy way to track bugs, and the need to get things fixed ASAP removed the benefit of formally tracking bugs. The biggest problem with using Skype was the distributed and fractured information.

Solution: Google Wave! Create a wave with all your developers and testers and call it after the name of your sprint. List defects one by one, give them an id and description. Use colour coding to indicate the state of the defect. Developers and testers can comment on each individual part of the wave/defect. Completely simple! All the information is contained within one location and is accessible to all. Next level communication!

Thursday, 23 July 2009

expoQA Conference October 2009

Yes, it’s that time again. The successful and highly acclaimed quality assurance conference expo:QA returns to Madrid for another year!

expo:QA was held for the first time in 2004 and has become a point of reference for experts in the field of quality assurance not just in Spain but all over the world. With a varied and interesting speaker base, this conference gives you plenty to think about, with great ideas jumping out at you from all sides.

Whilst much of the conference is in Spanish, as the years have gone by, there has been more and more emphasis placed on accommodating a multinational audience. Many of the speakers now deliver in both Spanish and English (at different times, of course!).

It’s great to see that this year’s agenda includes many agile based presentations, which goes to demonstrate the way this conference keeps going from strength to strength by adopting and including new trends.

Check out the website:

http://www.expoqa.com/en/index.php

Sunday, 29 March 2009

Testing in a New or Transitional Agile Environment



Agile environments need many practices set up and functioning before a tester can really flourish – continuous integration, environment management, good software engineering practices, a solid development process etc. Without even these basic elements in place, the tester is left to manage an ad hoc flow of user stories, support issues, and goodwill.

Issues that seem to be common for testers in these environments:

  • Iteration planning is a quick guessing meeting. This is the most important part of any iteration as it sets up the focus and objectives for upcoming work. It is also an opportunity for the team to extract decent acceptance criteria from the product owners.
  • Test estimations reduced by product owners or developers. Just remember who the experts are here!  Don’t put yourself in a situation where you have to cram a full regression test into 3 minutes because a PO thinks that is enough time!
  • Acceptance criteria either not identified or too vague to be of any real value (See above!). Not having good acceptance criteria means that a story has no real objective, and will be too vague to test. Without the defined goal posts that acceptance criteria gives us, testers will often find themselves beaten up over failed expectations if the story doesn’t do what the PO wanted it to do. Comments like “This hasn’t been QA’d properly” or “the testers didn’t catch this” are quite common in this situation and push accountability on to the test team.
  • Stories getting to the tester too late. This usually happens when stories are ill defined and extend past the original estimation. Again, acceptance criteria will usually help focus estimations.
  • In smaller environments where developers are shared across multiple projects, there is often a stream of “under the radar” work that eventually flows into the hands of the tester. This is work being done, perhaps for the good of the business, that puts an extra burden on the team. In this type of environment, work never gets tested properly. Consider using Kanban if this is the case!
  • No supporting development processes such as continuous integration, or automated testing. This means that the tester is usually engaged in large amounts of regression testing rather than exploring new functionality. Consider adding automation tasks to stories.
  • No decent environment management system in place meaning that it’s very difficult to have consistency with test, development, and production environments. This is a must if you wish to be efficient and effective with your deployment pipeline. You will need to set aside developer, DevOps, and tester time to get this up and running. To secure this time you need to be able to sell the benefits to your management team. Reduced delivery time is always a good benefit to use in this circumstance.
  • Testers being treated as a quality gate at the end of the iteration rather than an integral part of the team. This is a cultural change that is required in the team. A strong, test "savvy" development manager or a solid QA/test director should be pushing this change. Ground-up changes are usually quite difficult. Embedded cultural changes such as this usually require strong and determined leadership.
  • No sense of quality ownership by the team. This is common in those teams with no test automation at any level, and where acceptance criteria are either weak or missing. This links in with many of the points above. The more we can infiltrate into the minds of the developers, the better! All the suggested practices above and below will help define this ownership.

What can the tester do to change this?

The tester in an agile environment needs to become a proponent of process improvement. To avoid some of the issues above, an agile tester must engage in some of the following:
  • Highlight software engineering opportunities for the team, and be proactive in providing possible solutions. A great way to do this is to start a software engineering work group that regularly gets together proactive and innovative developers and testers to implement the engineering practices that can improve the efficiency and effectiveness of the teams.
  • Work to rule. This is tough, but if you have been asked to succeed in agile, then you must follow the basic work flows that have been designed. If things are not working, use the retrospectives to make changes that can be agreed on by the team.
  • Be alert! Use the tools that you have to keep track of what is being developed, supported, and released to help you get an understanding of the work output. I have had a lot of success monitoring RSS feeds and change logs from source control systems as they give me the ability to hone test analysis to a specific part of the system. You also get information on changes to parts of the system that your developers may not have mentioned to you!
  • Publish your ideal test architecture as part of the test strategy. This will allow others to see what your perspective is on what is needed to develop successfully, and may prompt them to help you out, especially if your ideas are compelling!
  • Measure your Boomerang! This is work that comes back to the development process after it is released. In ISEB circles this is known as the defect detection rate. It is one of the most useful measurements we have as it is a real indicator of how effective your quality practices are.
  • Measure whatever seems important, as it can help you push the importance of what you are doing. This is one of the most important things we can do. I once did this to get unit testing up and running in one environment: a simple weekly report on the number of tests per project provided the impetus to get developers writing tests on a regular basis.


Testing can be a thankless task, but being proactive during a transitional period will bring benefit to your team. The team will probably be going through a lot of change acceptance, and placing an integrated tester directly into the team is another difficult change to manage.

Here are a couple of great resources on agile development and lean development that could help you formulate ideas in these tricky environments.





Friday, 27 March 2009

When Should We Automate the User Interface in an Agile Environment?

Many misunderstand automation in the context of agile. Picking up a user interface test tool and automating from the word go is never going to work. Automation is often associated directly with UI tests, but in an agile environment automation refers to the whole test strategy. One of our primary goals in agile automation is to develop automated unit tests that will help us monitor the pulse of our project. From there we can look at developing integration tests, coupling together groups of unit tests, or we can look straight to UI automation. UI automation allows us to do both integration and system testing.

In an agile environment we have to look at the real return on investment when deciding what and how we automate in terms of the UI.

Examining the following factors may make this task easier:

1. Risk - A business or safety critical application is always a candidate. Use the other factors to assess the correct moment to automate. Low business impact applications should really be avoided unless work flows are simple.

2. Maturity/Stability - If the web application is still in primary development stages then there will be lots of changes to the UI. Sometimes it is better to wait for a Beta release before beginning UI automation. At this point there will usually be a lower frequency of change or less impacting changes. Waiting until this point saves a lot of time, and reduces the maintenance overhead.

3. Resource - Automation is labour intensive. If you can only devote a small amount of time to UI automation then you are probably not going to have success. Time block an automation project if necessary, it will bring benefit.

4. Change Index - Web applications with a high change index usually require the largest amount of maintenance. Keeping up with a site that has constantly changing layout or content can kill an automation project.

5. Complexity - Large complex systems that use a host of differing technologies, and contain work flows that cross these technologies, should be avoided. Unless, that is, you have the necessary resources and tools to combat this.

6. Technology - If you don't have well supported UI automation tools such as QuickTest Pro or Selenium, or the expertise to use the ones you have got, then your automation project could extend far beyond the original plan. The problem of not having expertise in your automation applications, or even in automation, is that you could find yourself with a very brittle framework that requires constant maintenance, increasing the cost and burden of automation.

Friday, 20 March 2009

Tools: Web Service Testing

A colleague put me onto a great free XML development tool called Liquid XML Studio. This is an incredibly feature-rich tool that, beyond XML development, lets you build and test SOAP requests so easily that I couldn't imagine a developer or tester being without it! It's another great tool to add to your arsenal of agile test tools.

Some of the useful features:
  • Web Service Call Composer
  • XPath Expression Builder
  • HTML Documentation Generation
  • XML Diff - Compare XML Files
  • Microsoft Visual Studio Integration (2005 & 2008)
  • XML Schema Editor
  • XML Data Binding
A great alternative is SoapUI, an open source tool designed for web service testing. It allows you to inspect and invoke web services, and it has become a strong component of my current web test framework.
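Under the hood, the request composers in these tools are building SOAP envelopes like the one sketched below with Python's standard library. The service namespace and operation name (`GetQuote`, `symbol`) are illustrative; in practice tools such as SoapUI or Liquid XML Studio generate them from the WSDL for you.

```python
from xml.etree import ElementTree as ET

SOAP_ENV = "http://schemas.xmlsoap.org/soap/envelope/"

def build_soap_request(operation, params, ns="http://example.com/service"):
    """Build a SOAP 1.1 request envelope as a string.

    `ns` is a placeholder service namespace; a real client would take it,
    and the operation name, from the service's WSDL.
    """
    ET.register_namespace("soap", SOAP_ENV)
    envelope = ET.Element(f"{{{SOAP_ENV}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_ENV}}}Body")
    op = ET.SubElement(body, f"{{{ns}}}{operation}")
    for key, value in params.items():
        ET.SubElement(op, f"{{{ns}}}{key}").text = str(value)
    return ET.tostring(envelope, encoding="unicode")
```

The resulting string is what gets POSTed to the service endpoint with a `SOAPAction` header; being able to see and hand-edit this envelope is exactly what makes these tools so useful for testing.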