Software Development, Test Engineering, Architecture, Agile, Lean, Automation, Process Improvement
Sunday, 13 December 2009
Quick Tips: Defect Tracking with Google Wave
On an agile project I'd been using Skype to track defect conversations I was having with developers. We had no easy way to track bugs, and the need to get things fixed ASAP removed the benefit of formally tracking them. The biggest problem with using Skype was how distributed and fractured the information became.
Solution: Google Wave! Create a wave with all your developers and testers, and name it after your sprint. List defects one by one, giving each an id and description. Use colour coding to indicate the state of each defect. Developers and testers can comment on each individual part of the wave/defect. Completely simple! All the information is contained in one location and is accessible to all. Next-level communication!
Thursday, 23 July 2009
expoQA Conference October 2009
Yes, it’s that time again. The successful and highly acclaimed quality assurance conference expo:QA returns to Madrid for another year!
expo:QA was held for the first time in 2004 and has become a point of reference for experts in the field of quality assurance not just in Spain but all over the world. With a varied and interesting speaker base, this conference gives you plenty to think about, with great ideas jumping out at you from all sides.
Whilst much of the conference is in Spanish, as the years have gone by, there has been more and more emphasis placed on accommodating a multinational audience. Many of the speakers now deliver in both Spanish and English (at different times, of course!).
It’s great to see that this year’s agenda includes many agile-based presentations, which demonstrates how this conference keeps going from strength to strength by adopting and including new trends.
Check out the website:
http://www.expoqa.com/en/index.php
Sunday, 29 March 2009
Testing in a New or Transitional Agile Environment
Agile environments need many practices set up and functioning before a tester can really flourish: continuous integration, environment management, good software engineering practices, a solid development process, and so on. Without even these basic elements in place, the tester is left to manage an ad hoc flow of user stories, support issues, and goodwill.
Issues that seem to be common for testers in these environments:
- Iteration planning is treated as a quick guessing meeting. In fact, it is the most important part of any iteration, as it sets the focus and objectives for upcoming work. It is also an opportunity for the team to extract decent acceptance criteria from the product owners.
- Test estimates get reduced by product owners or developers. Just remember who the experts are here! Don’t put yourself in a situation where you have to cram a full regression test into 3 minutes because a PO thinks that is enough time!
- Acceptance criteria either not identified or too vague to be of any real value (see above!). Not having good acceptance criteria means that a story has no real objective and will be too vague to test. Without the defined goal posts that acceptance criteria give us, testers will often find themselves beaten up over failed expectations if the story doesn’t do what the PO wanted it to do. Comments like “This hasn’t been QA’d properly” or “the testers didn’t catch this” are quite common in this situation and push accountability onto the test team.
- Stories getting to the tester too late. This usually happens when stories are ill defined and extend past the original estimation. Again, acceptance criteria will usually help focus estimations.
- In smaller environments where developers are shared resources working across several streams, there is often a flow of “under the radar” work that eventually lands in the hands of the tester. This is work done, perhaps for the good of the business, that puts an extra burden on the team. In this type of environment, such work never gets tested properly. Consider using Kanban if this is the case!
- No supporting development processes such as continuous integration, or automated testing. This means that the tester is usually engaged in large amounts of regression testing rather than exploring new functionality. Consider adding automation tasks to stories.
- No decent environment management system in place meaning that it’s very difficult to have consistency with test, development, and production environments. This is a must if you wish to be efficient and effective with your deployment pipeline. You will need to set aside developer, DevOps, and tester time to get this up and running. To secure this time you need to be able to sell the benefits to your management team. Reduced delivery time is always a good benefit to use in this circumstance.
- Testers being treated as a quality gate at the end of the iteration rather than an integral part of the team. This is a cultural change that is required in the team. A strong, test "savvy" development manager or a solid QA/test director should be pushing this change. Ground-up changes are usually quite difficult. Embedded cultural changes such as this usually require strong and determined leadership.
- No sense of quality ownership by the team. This is common in those teams with no test automation at any level, and where acceptance criteria are either weak or missing. This links in with many of the points above. The more we can infiltrate into the minds of the developers, the better! All the suggested practices above and below will help define this ownership.
What can the tester do to change this?
The tester in an agile environment needs to become a proponent for process improvement. To avoid some of the issues above, an agile tester must engage in some of the following:
- Highlight software engineering opportunities for the team, and be proactive in providing possible solutions. A great way to do this is to start a software engineering work group that gets proactive and innovative developers and testers together on a regular basis to implement the engineering practices that can improve the efficiency and effectiveness of the teams.
- Work to rule. This is tough, but if you have been asked to succeed in agile, then you must follow the basic workflows that have been designed. If things are not working, use the retrospectives to make changes that the team can agree on.
- Be alert! Use the tools that you have to keep track of what is being developed, supported, and released to help you get an understanding of the work output. I have had a lot of success monitoring RSS feeds and change logs from source control systems as they give me the ability to hone test analysis to a specific part of the system. You also get information on changes to parts of the system that your developers may not have mentioned to you!
- Publish your ideal test architecture as part of the test strategy. This will allow others to see what your perspective is on what is needed to develop successfully, and may prompt them to help you out, especially if your ideas are compelling!
- Measure your boomerang! This is work that comes back into the development process after it has been released. In ISEB circles this is known as the defect detection percentage. It is one of the most useful measurements we have, as it is a real indicator of how effective your quality practices are.
- Measure whatever seems important, as it can help you push the importance of what you are doing. This is one of the most important things we can do. I once did this to get unit testing up and running in one environment: a simple weekly report on the number of tests per project provided the impetus to get developers writing tests on a regular basis.
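The boomerang measure above comes down to a single ratio. Here is a minimal sketch; the function name and the example counts are illustrative, not from the post:

```python
def defect_detection_percentage(found_in_test, found_in_production):
    """Percentage of total defects caught before release.

    A high value suggests effective quality practices; a low value
    means work is 'boomeranging' back after release.
    """
    total = found_in_test + found_in_production
    if total == 0:
        return 100.0  # no defects found anywhere, so nothing escaped
    return 100.0 * found_in_test / total

# Example: 45 defects caught in test, 5 escaped to production.
print(defect_detection_percentage(45, 5))  # 90.0
```

Tracked week on week, a falling percentage is an early warning that your quality practices are slipping.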
Testing can be a thankless task, but being proactive during a transitional period will bring benefit to your team. The team will probably be absorbing a lot of change already, and placing an integrated tester directly into the team is yet another difficult change to manage.
Here are a couple of great resources on agile development and lean development that could help you formulate ideas in these tricky environments.
Labels:
agile,
Change,
Testing,
Transition
Friday, 27 March 2009
When Should We Automate the User Interface in an Agile Environment?
Many misunderstand automation in the context of agile. Picking up a user interface test tool and automating from the word go is never going to work. Automation is often associated directly with UI tests, but in an agile environment automation refers to the whole test strategy. One of our primary goals in agile automation is to develop automated unit tests that help us monitor the pulse of our project. From there we can look at developing integration tests, coupling together groups of unit tests, or we can look straight to UI automation. UI automation allows us to do both integration and system testing.
In an agile environment we have to look at the real return on investment when deciding what and how we automate in terms of the UI.
Examining the following factors may make this task easier:
1. Risk - A business or safety critical application is always a candidate. Use the other factors to assess the correct moment to automate. Low business-impact applications should really be avoided unless workflows are simple.
2. Maturity/Stability - If the web application is still in primary development stages then there will be lots of changes to the UI. Sometimes it is better to wait for a Beta release before beginning UI automation. At this point there will usually be a lower frequency of change or less impacting changes. Waiting until this point saves a lot of time, and reduces the maintenance overhead.
3. Resource - Automation is labour intensive. If you can only devote a small amount of time to UI automation then you are probably not going to have success. Time-block an automation project if necessary; it will bring benefit.
4. Change Index - Web applications with a high change index usually require the largest amount of maintenance. Keeping up with a site that has constantly changing layout or content can kill an automation project.
5. Complexity - Large complex systems that use a host of differing technologies, and contain work flows that cross these technologies, should be avoided. Unless, that is, you have the necessary resources and tools to combat this.
6. Technology - If you don't have well supported UI automation tools such as QuickTest Pro or Selenium, or the expertise to use the ones you have got, then your automation project could extend far beyond the original plan. The problem with lacking expertise in your automation tools, or even in automation generally, is that you could find yourself with a very brittle framework that requires constant maintenance, increasing the cost and burden of automation.
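One way to weigh the six factors above is a rough scoring sketch. The factor names, the 0-5 scale, and the threshold of 20 are all illustrative assumptions, not from the post; unfavourable factors (change index, complexity) are inverted so that a higher rating always favours automating:

```python
# Rate each factor 0-5, where higher always favours automation:
# "low_change" inverts the change index, "simplicity" inverts complexity.
FACTORS = ["risk", "stability", "resource", "low_change",
           "simplicity", "tooling"]

def automation_score(ratings):
    """ratings: dict of factor name -> 0..5."""
    missing = set(FACTORS) - set(ratings)
    if missing:
        raise ValueError(f"unrated factors: {sorted(missing)}")
    return sum(ratings[f] for f in FACTORS)

def should_automate(ratings, threshold=20):
    # Threshold of 20 out of a possible 30 is an arbitrary bar.
    return automation_score(ratings) >= threshold

candidate = {"risk": 5, "stability": 4, "resource": 3,
             "low_change": 4, "simplicity": 3, "tooling": 4}
print(automation_score(candidate))   # 23
print(should_automate(candidate))    # True
```

The point is not the arithmetic but forcing the conversation: a candidate that scores poorly on stability or resource is probably not ready, however risky the application is.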
Labels:
agile,
Automation,
quicktest,
selenium,
test
Friday, 20 March 2009
Tools: Web Service Testing
A colleague put me onto a great free XML development tool called Liquid XML Studio. This is an incredibly feature-rich XML development tool that, beyond XML authoring, lets you build and test SOAP requests so easily that I couldn't imagine a developer or tester being without it! It's another great tool to add to your arsenal of agile test tools.
Some of the useful features:
- Web Service Call Composer
- XPath Expression Builder
- HTML Documentation Generation
- XML Diff - Compare XML Files
- Microsoft Visual Studio Integration (2005 & 2008)
- XML Schema Editor
- XML Data Binding
A great alternative is soapUI - an open source tool designed for web service testing. It allows you to inspect and invoke web services, and it has become a strong component of my current web test framework.
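Building a SOAP 1.1 request by hand shows what tools like these send under the hood. A minimal sketch; the service namespace, operation, and parameter are hypothetical, and a real call would POST the envelope over HTTP:

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
ET.register_namespace("soap", SOAP_NS)

def build_envelope(operation, params, service_ns="http://example.com/stock"):
    """Build a SOAP 1.1 request envelope as a string."""
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, f"{{{service_ns}}}{operation}")
    for name, value in params.items():
        ET.SubElement(op, f"{{{service_ns}}}{name}").text = str(value)
    return ET.tostring(envelope, encoding="unicode")

request = build_envelope("GetStockPrice", {"StockName": "ACME"})
print(request)
```

Being able to compose and tweak envelopes like this is exactly the kind of inspection these tools make easy through their UIs.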
Labels:
agile,
SOAP,
soapui,
web services,
xml liquid studio
Friday, 20 February 2009
Managing Defects
In an agile world we have to tackle defects in a different manner to cope with the pace and working practices. Here's one way of coping.
Develop a defect notebook, where defects are noted down and passed on immediately to the developer. Do this instead of logging every defect and you will save everybody time and greatly improve the quality of code that gets built.
It's a simple process.
When you detect a defect, inform the implicated developer immediately and discuss it. This is an important discussion and changes the way that we work with the code. In the developer/tester discussion, make sure you cover the following points.
- Is this a defect? Between you and the developer, you will be able to establish whether it really is a defect or expected behaviour. Get a product owner involved if necessary. Most importantly, note it down in a notebook.
- Can it be corrected now? The developer will be able to make a decision as to whether the defect can be fixed immediately or whether it will require more complex analysis. Ideally, the developer will fix it in the moment, not disrupting the work rate, and most importantly for the tester, reducing the need to raise a defect report.
- Does this defect require more analysis? If so, then before logging the defect, arrange a follow-up the same day or, at the latest, the following day. This keeps the defect fresh and closer to the time that the original code was developed. This is important: the longer the turnaround between the developer writing the code and the defect being resolved, the slower the fix will become. If a developer has to wait weeks before tackling a defect, it is likely that they will have forgotten the thought process that went into the development, increasing the time required to analyse and eventually fix it.
The discussion with the developer can lead to an immediate solution or to the need to invest more time in analysis. If the defect requires more than a few hours to fix, then log it. It may be the case that the defect gets moved further down the backlog or requires input from others. It is in these cases that there is a benefit in tracking and monitoring a defect. We only log the defect when we know that we have identified a serious issue.
Although we are not logging all defects, we still have our notes, which we can use to prompt follow-up actions on analysis, or to ensure that the developer has delivered the promised fix. This notebook can become vital, so look after it!
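The triage decision above can be sketched as plain data. The field names and the three-hour threshold are illustrative assumptions; the post only says to log defects that need more than a few hours to fix:

```python
# A minimal defect notebook: a list of entries plus a triage helper.
notebook = []

def note_defect(description, is_defect, est_fix_hours):
    """Record a defect note and decide what to do with it.

    The 3-hour cut-off is one reading of 'more than a few hours'.
    """
    entry = {
        "description": description,
        "is_defect": is_defect,          # agreed with the developer?
        "est_fix_hours": est_fix_hours,
    }
    if not is_defect:
        entry["action"] = "expected behaviour - no action"
    elif est_fix_hours <= 3:
        entry["action"] = "fix now - no report raised"
    else:
        entry["action"] = "log in tracker and schedule"
    notebook.append(entry)
    return entry["action"]

print(note_defect("Save button disabled after validation error", True, 1))
print(note_defect("Totals drift on concurrent edits", True, 8))
```

Whether the notebook is paper or a script, the value is the same: a lightweight record that prompts follow-ups without the ceremony of a full defect report.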
Your developers will more than likely adapt to this way of working, as it gives them a quick quality feedback loop on their code.
Wednesday, 18 February 2009
Quick Tips: Learn From Others!
It's always good to find other sources of agile QA info. Have a look at this great site, which is full of more than just agile test tips.
http://www.quicktestingtips.com/tips/
Check out the authors for even more info:
Tuesday, 17 February 2009
Exploratory Testing
Even outside of the agile arena I use exploratory testing. It's a great way to discover the workings of software and how to break it!
An agile tester is often told that they should only think about exploratory testing when testing outside of the basic user story acceptance tests. All other testing is covered by unit and integration testing. But how often is this the case? Agile development is never perfect, and finding a project with a fully functioning, good-quality CI implementation that executes all unit and integration tests is not as easy as some seem to think. So, as testers, we need some rules that let us use exploratory testing only when it is really necessary. The following is a set of guidelines that I use to identify exploratory test opportunities.
- Always use in conjunction with planned tests on high impact stories. Cover as much as you can!
- Use when trying to reproduce system failure.
- Use when defect clusters have been identified. This will flush out even more defects.
- Always use when you have a good technical understanding of the system architecture. You will already be aware of what usually breaks certain systems.
And when executing exploratory testing:
- Demonstrate a plan of action. Even a quick outline of what you aim to achieve by carrying out certain actions will give confidence in your actions.
- Write down all tests that are performed. I use a test short hand that describes navigation/action/result in just one sentence. This enables you to create more tests further down the line.
- Let the system risk analysis guide you to critical areas of the application. This is where exploratory testing pays off.
- Sit near to, or with the development team to enable quick solutions to problems and questions.
- Never rely on just doing exploratory testing.
Remember - completely unplanned, random actions on an application are not exploratory testing, but rather bad testing.
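The navigation/action/result shorthand mentioned in the session notes above can be captured with a tiny helper. The exact formatting is an assumption; the post only says each note is a single sentence covering all three parts:

```python
# A minimal exploratory-session log using navigation/action/result shorthand.
session_log = []

def note(navigation, action, result):
    """Record one exploratory test as a single shorthand line."""
    line = f"{navigation} / {action} / {result}"
    session_log.append(line)
    return line

note("Login page", "submit empty form", "inline errors shown")
note("Search", "query with 1000-char string", "500 error returned")
for line in session_log:
    print(line)
```

Notes in this shape are cheap to write during a session and detailed enough to turn into scripted tests later.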
Labels:
agile,
exploratory test,
QA
Friday, 13 February 2009
Checking Mail Automatically
In those situations where you need to automatically test the delivery of email, and you are unable to get access to a mail server through an API, what other options do you have to check this delivery automatically? Running a script to open up your mail client and check for mail can be cumbersome and unreliable. One method that I have used successfully is identifying a mail service that gives access via an Atom or RSS feed.
Googlemail provides a limited but reliable Atom feed for its mail service. Sign in to your Googlemail account, open another browser tab or window and go to https://mail.google.com/mail/feed/atom/. You will now see all your unread messages!
The .NET WebClient class will allow you to supply credentials for this URL; you can then create an XML object and use the DOM to access particular tags and get the info you require.
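The same idea can be sketched in Python. Fetching the real feed needs authenticated HTTP (urllib.request playing the WebClient role), so to keep the sketch self-contained it parses a saved sample instead; the feed contents here are invented, though the namespace matches the Atom 0.3 format the feed used:

```python
import xml.etree.ElementTree as ET

# A saved sample of the unread-mail feed (contents are made up).
SAMPLE_FEED = """<?xml version="1.0" encoding="UTF-8"?>
<feed version="0.3" xmlns="http://purl.org/atom/ns#">
  <title>Inbox for tester@example.com</title>
  <fullcount>2</fullcount>
  <entry><title>Build 341 passed</title></entry>
  <entry><title>Your order confirmation</title></entry>
</feed>"""

NS = {"atom": "http://purl.org/atom/ns#"}

def unread_subjects(feed_xml):
    """Return the subject line of every unread message in the feed."""
    root = ET.fromstring(feed_xml)
    return [entry.findtext("atom:title", namespaces=NS)
            for entry in root.findall("atom:entry", NS)]

print(unread_subjects(SAMPLE_FEED))
```

An automated check then reduces to asserting that the expected subject appears in the list within some timeout.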
Monday, 2 February 2009
Agile Test Strategy
[THIS POST HAS BEEN UPDATED: Please visit Agile Test strategy Updated!]
Over the years I have had to design really heavyweight test strategies to help sell/communicate to the business the reasons why we are testing and what we will do during each project. However, I have always found that no matter how good the intentions set out in the test strategy are, what actually occurs during the test phase of a project bears almost no resemblance to them. Fortunately, on the last project I worked on I had the opportunity to develop a lean test strategy that was useful, practical, reusable, and above all, CMMi friendly!
Although not strictly a company that practised agile or lean development, we were trying to reduce the bureaucracy of traditional technical processes. The following is probably as lightweight as a test strategy can get, but it works.
The idea was to make a statement of intention that loosely binds some of the more important test practices that can help a team move forward. The phases are fairly typical of agile development, although they do not represent a definite task execution flow.
Phase: Project Set Up
- Understand the project
- Collect information about the project
- Create a test knowledge repository
- Assist in the definition and scope of stories
- Develop test plans based on planning session
- Risk analysis during the sprint/iteration planning
- Construct acceptance tests for each story
- Develop business functionality validation plan
- Document and write tests for defects
- Automate with both unit and UI tests
- Assist in functional review/demo
- Accept User stories
- Regression test
- Business acceptance testing
- Develop release readiness plan
- Run performance tests
- Assist in release readiness
- Plan test release data and tests
- Accept the Release
Labels:
agile,
cmmi,
scrum,
test strategy,
Testing
Friday, 30 January 2009
Agile Metrics
As a tester you will always find yourself absorbed in metrics creation, reporting on anything and everything that your management team wishes to report on. However, in agile development environments it is sometimes very difficult to assess the importance of metrics. I've been looking at some ways to provide management teams with interesting and useful information to measure our success.
One of the most fundamental approaches we can take towards metrics collection in an agile environment is to consider the return on investment that a collection of metrics can give us. That is, how can our metrics help us provide more business value?
Using measurement goals, such as allowing no more than 5 serious defects to be found in production after release, gives us more incentive to produce less buggy code.
Measuring the time it takes from the conception of a user story to seeing that user story providing business value also gives us an idea of how we are performing as a team. This is known as cycle time.
These metrics give us valuable information that provide us with the correct means to motivate a team.
Using the following principles, we can develop metrics that give us a decent ROI:
- Achieving SLAs relating to service and availability of products and systems.
- Low cycle time, that is, a high throughput of change.
- Low amount of unplanned work.
- Effective process improvement practices.
- Effective change management.
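Cycle time as described above is simple to compute once you record when a story was conceived and when it started delivering value. A minimal sketch; the story ids and dates are invented:

```python
from datetime import date

def cycle_time_days(conceived, delivered):
    """Days from story conception to the story providing business value."""
    return (delivered - conceived).days

# Hypothetical stories: (id, conceived, delivering value in production).
stories = [
    ("STORY-101", date(2009, 1, 5), date(2009, 1, 19)),
    ("STORY-102", date(2009, 1, 7), date(2009, 1, 28)),
]
times = [cycle_time_days(conceived, delivered)
         for _, conceived, delivered in stories]
print(times)                    # [14, 21]
print(sum(times) / len(times))  # 17.5, the average cycle time
```

Trending the average per sprint is usually more telling than any single number: a shrinking cycle time is direct evidence of improving throughput.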