Friday, 20 February 2009

Managing Defects

In an agile world we have to tackle defects differently to cope with the pace and working practices. Here's one way of doing it.

Develop a defect notebook, where defects are noted down and passed on immediately to the developer. Do this instead of logging every defect and you will save everybody time and greatly improve the quality of code that gets built.

It's a simple process.

When you detect a defect, inform the implicated developer immediately and discuss it. This is an important discussion and changes the way that we build the code. In the developer/tester discussion, make sure you cover the following points.
  • Is this a defect? Between you and the developer, you will be able to establish whether it really is a defect or expected behaviour. Get a product owner involved if necessary. Most importantly, note it down in a notebook.
  • Can it be corrected now? The developer will be able to decide whether the defect can be fixed immediately or whether it will require more complex analysis. Ideally, the developer will fix it in the moment, without disrupting the work rate and, most importantly for the tester, reducing the need to raise a defect report.
  • Does this defect require more analysis? If the answer is yes, then before logging the defect, arrange a follow-up the same day or, at the latest, the following day. This keeps the defect fresh and close to the time that the original code was developed. This matters because the longer the gap between the code being written and the defect being resolved, the slower the fix will be. If a developer has to wait weeks before tackling a defect, it is likely that they will have forgotten the thought process that went into the development, increasing the time required to analyse and eventually fix it.
If the defect requires more than a few hours to fix, then log it. It may be the case that the defect gets moved further down the backlog or requires input from others, and it is in these cases that there is a benefit in tracking and monitoring a defect. The discussion with the developer can lead to an immediate solution or to the need to invest more time in analysis. We only log the defect when we know that we have identified a serious issue.

Although we are not logging all defects, we still have our notes, which we can use to prompt follow-up actions on analysis or to make sure that the developer has done the promised fix. This notebook can become vital, so look after it!

Your developers will more than likely adapt to this way of working, as it gives them a quick reaction time on the quality of their code.

Wednesday, 18 February 2009

Quick Tips: Learn From Others!

It's great to find other sources of good Agile QA info. Have a look at this great site, which is full of more than just agile test tips.

http://www.quicktestingtips.com/tips/

Check out the authors for even more info.

Tuesday, 17 February 2009

Exploratory Testing

Even outside of the Agile arena I use exploratory testing. It's a great way to discover the workings of software and how to break it!

An agile tester is often told that they should only think about exploratory testing when testing outside of the basic user story acceptance tests; all other testing is covered by unit and integration testing. But how often is this the case? Agile development is never perfect, and finding a project with a fully functioning, good-quality CI implementation that executes all unit and integration tests is not as easy as some seem to think. So, as testers, we need some rules that let us apply exploratory testing only when it is really necessary. The following is a set of guidelines that I use to identify exploratory test opportunities.
  • Always use in conjunction with planned tests on high impact stories. Cover as much as you can!
  • Use when trying to reproduce system failure.
  • Use when defect clusters have been identified. This will flush out even more defects.
  • Always use when you have a good technical understanding of the system architecture. You will already be aware of what usually breaks certain systems.
And when executing exploratory testing:
  • Demonstrate a plan of action. Even a quick outline of what you aim to achieve by carrying out certain actions will give others confidence in what you are doing.
  • Write down all tests that are performed. I use a test shorthand that describes navigation/action/result in just one sentence, for example: "login page > submit blank password > inline validation error shown". This enables you to create more tests further down the line.
  • Let the system risk analysis guide you to critical areas of the application. This is where exploratory testing pays off.
  • Sit near to, or with the development team to enable quick solutions to problems and questions.
  • Never rely on just doing exploratory testing.

Remember - completely unplanned, random actions on an application are not exploratory testing, but rather, bad testing.

Friday, 13 February 2009

Checking Mail Automatically

In those situations where you need to automatically test the delivery of email, and you are unable to get access to a mail server through an API, what other options do you have to check this delivery automatically? Running a script to open up your mail client and check for mail can be cumbersome and unreliable. One method that I have used successfully is identifying a mail service that gives access via an Atom or RSS feed.

Googlemail provides a limited but reliable Atom feed for its mail service. Sign in to your googlemail account, open another browser tab or window and go to https://mail.google.com/mail/feed/atom/. You will now see all your unread messages!

The .NET WebClient class allows you to supply credentials against this URL; you can then load the response into an XML object and use the DOM to access particular tags and get the info you require.
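
As a rough illustration, here is a minimal C# sketch of that idea. Treat it as an outline rather than a finished implementation: it assumes the feed accepts basic authentication with your googlemail credentials (the values below are placeholders), and that it uses the Atom 0.3 namespace with a fullcount element and entry/title elements for unread messages, which is how the feed looked at the time of writing.

    using System;
    using System.Net;
    using System.Xml;

    class GmailFeedCheck
    {
        static void Main()
        {
            // Placeholder credentials - substitute your own googlemail account details.
            var client = new WebClient();
            client.Credentials = new NetworkCredential("your.account", "your.password");

            // Pull down the unread-mail Atom feed as raw XML.
            string feedXml = client.DownloadString("https://mail.google.com/mail/feed/atom/");

            // Load it into an XmlDocument so we can walk the DOM.
            var doc = new XmlDocument();
            doc.LoadXml(feedXml);

            // Register the Atom 0.3 namespace used by the feed for XPath queries.
            var ns = new XmlNamespaceManager(doc.NameTable);
            ns.AddNamespace("atom", "http://purl.org/atom/ns#");

            // <fullcount> holds the number of unread messages.
            XmlNode count = doc.SelectSingleNode("//atom:fullcount", ns);
            Console.WriteLine("Unread messages: " + (count != null ? count.InnerText : "0"));

            // Each <entry><title> is the subject line of an unread message.
            foreach (XmlNode title in doc.SelectNodes("//atom:entry/atom:title", ns))
            {
                Console.WriteLine("Subject: " + title.InnerText);
            }
        }
    }

In a real test you would wrap the subject or count check in an assertion rather than printing it to the console.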

Monday, 2 February 2009

Agile Test Strategy

[THIS POST HAS BEEN UPDATED: Please visit Agile Test strategy Updated!]

Over the years I have had to design really heavyweight test strategies to help sell/communicate to the business the reasons why we are testing and what we will do during each project. However, I have always found that no matter how good the intentions set out in the test strategy are, what actually occurs during the test phase of a project is almost unrecognisable from them. Fortunately, on the last project I worked on I had the opportunity to develop a lean test strategy that was useful, practical, reusable and, above all, CMMI friendly!

Although not strictly a company that practised agile or lean development, we were trying to reduce the bureaucracy of traditional technical processes. The following is probably as lightweight as a test strategy can get, but it works.

The idea was to make a statement of intention that loosely binds some of the more important test practices that can help a team move forward. The phases are fairly typical of agile development, although they do not represent a definite task execution flow.

Phase: Project Set Up
  • Understand the project
  • Collect information about the project
  • Create a test knowledge repository
Phase: Planning & Analysis or Release Planning
  • Assist in the definition and scope of stories
  • Develop test plans based on planning session
Phase: Development Iterations
  • Risk analysis during the sprint/iteration planning
  • Construct acceptance tests for each story
  • Develop business functionality validation plan
  • Document and write tests for defects
  • Automate with both unit and UI tests
  • Assist in functional review/demo
  • Accept User stories
Phase: Hardening Iterations
  • Regression test
  • Business acceptance testing
  • Develop release readiness plan
  • Run performance tests
Phase: Release
  • Assist in release readiness
  • Plan test release data and tests
  • Accept the Release
Using this as the basis for every project, we can deliver software that has undergone rigorous testing but has not been delayed by the burden of traditional test documentation.