Saturday 31 December 2011

Kanbanery – Simple work management to keep you moving


Kanbanery is a great tool for managing tasks, your work load, and, of course, your Kanban process. Based on Kanban principles, it gives you a solid way to manage tasks for both solo and group work. With just three states you get a very clear view of what needs doing, what is being done, and what is done.

The tool's simplicity focuses you on what is really important in your current workload. The pricing includes a free plan that allows two people to collaborate on a project, whilst other very reasonably priced plans open this up to both small and large teams.

Well worth a look. Check it out here – www.Kanbanery.com


Wednesday 28 December 2011

Automate a SOAP web service using C#

In web-service-rich systems you can leverage the interfaces these services provide and test as much of the system as possible. Web services give you a great opportunity to do quick regression and integration testing of business logic without going through a front-end system or creating complicated test harnesses across your system. Automation at this level is extremely valuable to any deployment pipeline.

This quick example shows you how to:
  • Automate a web service with a SOAP binding using Visual Studio and C#
  • Set up a simple test framework using NUnit
Automating the web service

For this example we are going to use the banking utility API from http://www.ibanbic.be, which provides several IBAN and BIC conversion facilities. The service uses the SOAP protocol.

To begin with we need to create a new class library project in Visual Studio with a reference to NUnit. Download NUnit here if you don't have it. You will need to add the following reference: ..\NUnit-2.5.10.11092\bin\net-2.0\framework\nunit.framework.dll. You may also need to add a reference to System.ServiceModel.

Once all the references are added, we need to create a C# proxy client class based on the contract (WSDL) of the target web service. We do this using Microsoft's svcutil tool, which you can read more about here.
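As a rough sketch, generating the proxy from a Visual Studio command prompt looks like this. The WSDL URL below is an assumption for illustration only; substitute the real ?WSDL address of the service you are targeting.

```shell
# Generate a C# proxy client class and a matching app.config from the service WSDL.
# The URL here is illustrative -- use the target service's actual ?WSDL address.
svcutil "http://www.ibanbic.be/IBANBIC.asmx?WSDL" /out:proxyClient.cs /config:app.config
```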

Once the proxy client class is created, add it to the VS project. During the process of creating the proxy client class an app.config file should have been created as well; add this to the project too.

ProjectName\Services\proxyClient.cs

Invoking the web service

If you don't know how the service works, use a tool like soapUI to interrogate the different requests and responses. This will allow you to become familiar with the data that needs to be sent with a request, and the expected response.

For this example we are going to invoke the service's calculateIBAN1 operation, which takes two parameters, ISOCountry and account, and returns an IBAN number.

The code is very simple. I've placed a method inside a class called Converters.cs, which sits inside a solution folder called Modules.



using System;

namespace WebServiceAutomationSOAP.Modules
{
    class Converters
    {
        public string CalculateIBAN(string isoCode, string account)
        {
            try
            {
                // "IBANBICSoap" is the endpoint name svcutil generated into app.config
                BANBICSoapClient client = new BANBICSoapClient("IBANBICSoap");
                string iban = client.calculateIBAN1(isoCode, account);
                return iban;
            }
            catch (Exception e)
            {
                // Return the exception text so tests can assert on failure cases
                return e.ToString();
            }
        }
    }
}


We can now call this method and get back either an IBAN number or the text of an exception.

Setting up the test framework

As with many other examples on this blog, the test framework uses NUnit and separates business logic from test logic using module classes for business, and test classes for tests. As we saw above, we created a module class called Converters.cs. Each module class has a corresponding test class.

Create a class called ConverterTests.cs and place it in a solution folder called Tests. Inside this class we will create two tests, one that tests for the correct IBAN creation, and another to check that the correct exception is thrown.


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using NUnit.Framework;

namespace WebServiceAutomationSOAP
{
    [TestFixture]
    public class ConverterTests
    {
        [SetUp]
        public void SetUp()
        {
            // Do some set up configuration here
        }

        [Test]
        public void TestCorrectIBANIsCreatedUsingCalculateIBAN1()
        {
            string expectedIBAN = "IBAN ES44 0000 0333 0000 0000 3434";
            Modules.Converters converters = new Modules.Converters();
            Assert.AreEqual(expectedIBAN, converters.CalculateIBAN("ES", "000003333434"));
        }

        [Test]
        public void TestCorrectExceptionIsCreatedWhenIncorrectBankCodeIsUsedWithCalculateIBAN1()
        {
            // Replace this with the exception text the service actually returns
            string expectedException = "This is the expected exception";
            Modules.Converters converters = new Modules.Converters();
            Assert.AreEqual(expectedException, converters.CalculateIBAN("ES", "cwsaSDFASD"));
        }

        [TearDown]
        public void TearDown()
        {
            // Clean up here
        }
    }
}

Expanding the idea

Once you have this simple test framework in place you can begin to build a suite of tests around a web service very easily. To expand this further you can think about placing the creation of the web service client in the constructor of a base class that can then be inherited by all module classes.
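A minimal sketch of that idea, reusing the BANBICSoapClient proxy and endpoint name from this example. The ModuleBase class name is my own invention, not part of the generated code.

```csharp
using System;

namespace WebServiceAutomationSOAP.Modules
{
    // Shared base for all module classes: the SOAP client is created once in
    // the base constructor and inherited, rather than newed up in every method.
    public class ModuleBase
    {
        protected BANBICSoapClient Client { get; private set; }

        protected ModuleBase()
        {
            // "IBANBICSoap" is the endpoint name from the generated app.config
            Client = new BANBICSoapClient("IBANBICSoap");
        }
    }

    // Converters now simply inherits the ready-made client.
    class Converters : ModuleBase
    {
        public string CalculateIBAN(string isoCode, string account)
        {
            try
            {
                return Client.calculateIBAN1(isoCode, account);
            }
            catch (Exception e)
            {
                return e.ToString();
            }
        }
    }
}
```

Every new module class then gets a working client for free by inheriting ModuleBase.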

You could also add data driving by using either a constants class, an application configuration file, or a library such as FileHelpers to manage excel or text files.

Of course, SOAP is not the only web service type out there. RESTful services have become much more widespread, with Google, Yahoo, and Twitter adopting these over SOAP services. I will give some examples soon of how to automate these!

Monday 12 December 2011

Engineering Support Teams (DevOps)


In agile environments, product owners who aren't in touch with development, or who are not familiar with engineering practices, often push out of the backlog the technical stories that are required for a successful project.

Sometimes we need technical stories that allocate time to set up a CI project, create a mocking framework, create an environment, or write automated deployment scripts in order to continue developing and releasing a project. When we don't have these mechanisms in place, we have to hope we have a team that can pick up functionality as we fire it out and position it in the deployment pipeline so that it drops happily into production.

Without these technical stories and engineering practices in place, the cycle time (time between conception and release of a story) climbs and we either never release, or if we do release, it’s a pressurised situation of manic updates done manually on live servers by technicians that may have no idea about the code they are deploying.

If you have multiple teams or programmes, and if you have the budget, an engineering support team is essential. This is a team that engages primarily in DevOps, but whose members will also be domain experts capable of taking a holistic view of the project and identifying any unforeseen issues.

You will often see this type of team in large organisations, but in the organisations that may need it most, the start-ups that must be capable of continuous delivery to stay ahead of the competition, it is rarely seen, owing to the difficulty of justifying a couple of heads working on technical stories instead of functionality. It takes a really technically savvy manager to understand and sell its worth to the business.

If you don’t have this team, and you are not afforded the necessary time to do technical user stories in your sprints or iterations, then you need to think outside of the box and form a group that can focus on these software engineering practices.

A software engineering group is one such alternative: usually a selection of the most ambitious or creative engineers from your teams who are willing to get together on a regular basis to throw around current engineering problems and work on solutions between them. By using slack days or lunch times, the group can fire through some of these completely necessary technical elements of the delivery pipeline and keep cycle time to a minimum.

Agile Test Strategy (Updated)

A couple of years ago I posted a very simple test strategy for agile-based projects; here is an updated version.


Test and quality objectives


Objectives are a fundamental part of any strategy as they show us where we need to be going, and they allow us to invest in the right activities. My test strategies typically include the following objectives:
  • Provide the customer with a system that they really need with quality baked in
  • Automate as much of the testing, configuration, and deployment as possible
  • Engage the business and customers in testing
  • Provide the business with the ability to confidently deliver features or complete systems without considering lengthy or multiple test iterations

Phase – Project conception & early estimation meetings


This is the most important part of the project. The wrong idea here could result in many man-hours down the drain and a lot of money lost for your business. Who is really bothered about reducing the defect detection rate in a product or system that no one will ever use? Idea bugs are the most costly bugs and the ones that we should really focus on. This is sometimes out of the hands of most QA and test engineers, and usually comes down to having either a very strong-willed architect or a product manager who really understands what the customer needs and how to prove those ideas. I believe this warrants a completely separate post, but you can find a very good explanation of idea bugs here. Whatever happens, there are tasks that we can take on:
  • Encourage pretotyping to prove the product concept over writing stories for unproven ideas
  • Understand any technology
  • Begin to sketch out a test strategy for the project

Phase – Early stage sprinting


During the early sprints our focus should be on proving the architecture and system concepts, building the team, and putting in place development and test frameworks.
Planning
  • Write technical stories or ensure the following tasks exist (These are all MUSTS)
    • Set up a CI project on your CI server
    • Set up unit test framework
    • Set up framework for front end script automation (JSUnit)
    • Set up build scripts
    • Set up all the necessary environments for now and think about what you may need for the future (Identify any needs you have further down the line)
  • Help scope out the stories and product
  • Get acceptance criteria clear in each story
  • Engage architects and technical experts in all planning discussions, never make assumptions on how things should work
  • Liaise with test leads or test managers to ensure you have covered off any global test strategies or ideals
  • Raise any performance considerations
Sprint
  • If the architecture is not yet baked then get to understand proposed solutions, arguments for and against. If it is decided, get to know why it was chosen, capabilities, etc
  • Ensure that plans are in place to make the code testable at more than just the unit level
  • Work with the product owners to get more insight on what the product is supposed to be doing
  • Update and test build and deploy mechanisms into the QA environments
  • Look for opportunities for early automation
  • Pair program / test / analyse

Phase – Normal sprinting


During normal sprinting we shape the product and deploy to UAT or beta environments on a regular basis to get feedback from customers; we may even be deploying direct to production. Regression testing is a key part of this process, so it must be as efficient as possible. Throughout the strategy we are automating every process we can. Unit, integration, front-end, and build scripts all combine to provide a high level of confidence in what we are delivering.


Planning
  • Clarify acceptance criteria
  • Write testing tasks for all user stories
    • Unit
    • Integration
    • Scripting
  • Plan exploratory strategy
  • Ensure environments are ready
  • Plan crowd source testing
  • Ensure that the correct people are involved in the different planning activities. The tester should become a conduit during the planning, ensuring that information is passing to the right people.
  • Ensure UAT or beta environments are ready
  • Set up performance environments
Sprint
  • Write acceptance tests based on acceptance criteria
  • Carry out automation tasks
  • Maintain build scripts and environments
  • Carry out exploratory testing
  • Pair up with developers
  • Performance test
  • Test release mechanisms
  • Write post release test plans
There are probably a lot more activities we could include, and I would encourage folk to experiment. If you understand your objectives, you will definitely understand what activities you need to implement.

Thursday 17 November 2011

Using Selenium WebDriver with C#, NUnit and the Page Object Pattern

In this quick tutorial I am going to show you how to do the following:

  • Set up a Selenium WebDriver test project using the C# implementation
  • Use C# and NUnit to build a test framework
  • Use the page object model to build a simple automation test suite
  • Apply some simple data management


Part One - Setting up the project

  • Create a C# .NET class library project using the .NET 3.5 framework
  • Add a reference to the NUnit framework:

    ..\NUnit-2.5.10.11092\bin\net-2.0\framework\nunit.framework.dll
  • Download the Selenium WebDriver .NET bindings and add references to the assemblies in:

     ..\net35\*.dll
  • Add a reference to System.Configuration to the project, and then add an application configuration file

Your solution should look like this:



Part Two – Set up a simple test framework using the page object model

Design patterns such as the page object model take the pain out of planning how to manage and set up tests; they also greatly reduce maintenance. I have chosen the page object model because I believe it is the easiest to understand, and it gives a robust framework that requires very little maintenance in UI automation terms. Read more about automation design patterns here.

For this example I am using a web application I'm working on at the moment: a simple tool to manage restaurant table availability. It consists of an account login page and several account utility pages. In this test I have decided to automate the account login page and the account home page.

Set up some base classes for the project

Create a class called Base.cs which will contain all your common variables, objects, methods, etc. For the moment, we just create a new WebDriver instance.


using System;
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.Configuration;
using System.Linq;
using System.Text;
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;

namespace Automated
{

    public class Base
    {
        public static IWebDriver driver;
        public static string _baseUrl;
        static Base()
        {
            _baseUrl = ConfigurationManager.AppSettings["baseUrl"];
        }
       
        public void NavigateTo(string url)
        {
            var navigateToThisUrl = _baseUrl + url;
            driver.Navigate().GoToUrl(navigateToThisUrl);
        }
        public void GetDriver()
        {
            //driver = new ChromeDriver();
            driver = new FirefoxDriver();
        }
    }
}

You will notice a reference to the application configuration file we created earlier. In it we are going to put the target URL of the application under test. It doesn't need to go here, but having it here makes the value easily configurable without building a data management framework.

Add the following contents:

  <appSettings>
    <add key="baseUrl" value="http://www.mynewapplication.com" />
  </appSettings>


There is also a NavigateTo function. This takes our application base URL and tacks on any path we may need to visit directly. We call this method inside the page objects.

For this example I need two page object models, so I create classes called AccountLogin.cs and AccountHome.cs in a solution folder called Pages. We then model the pages in the classes.

By model, what we mean is identifying actions and elements on a page and placing them inside methods that can be called on that page object. We may group these actions together as part of, or as a complete, workflow on a page. We may also include in the page object any elements that can serve as assertions.

My two page classes look like this

Pages\AccountLogin.cs
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using OpenQA.Selenium.Firefox;
using OpenQA.Selenium.IE;
using OpenQA.Selenium;

namespace Automated
{
    public class AccountLogin : Base
    {

        public AccountLogin NavigateToLogin()
        {
            NavigateTo("/account/login");
            return new AccountLogin();
        }
      
        public AccountHome LoginAs(string username, string password)
        {
            driver.FindElement(By.Id("email")).SendKeys(username);
            driver.FindElement(By.Id("password")).SendKeys(password);
            driver.FindElement(By.Id("submit")).Submit();
            return new AccountHome();
        }
    }
}

In the AccountLogin page I have modelled the login action (LoginAs). This action fills in the login fields and then submits the login form, finally returning the following page, account home. I am using the FindElement method, with the IDs of the fields and the submit button as the identifiers.

Pages\AccountHome.cs

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using OpenQA.Selenium;

namespace Automated
{
    class AccountHome:Base
    {

       
        public bool User_Welcome_Name(String userFullName)
        {
            IWebElement userWelcome = driver.FindElement(By.ClassName("Welcome"));
            bool result = userWelcome.Text.Contains(userFullName);
            return result;
        }
    }
}

In the account home page model I have identified an object on the page that will tell me if the login has been successful. I have modelled this object as a method that returns a Boolean value: it returns true if the user details it finds match those expected. I use WebDriver's FindElement method to locate the text by the CSS class name "Welcome", then look in the text to see if the user's name is contained within.

Bringing the two page objects together as a test

Where possible I try to marry a test class up to a page object and the actions within that object; for example, the login test class only contains tests that exercise the login page. It's similar to functional decomposition, only we are breaking an application down into pages. This keeps the testing simple. Sometimes we may also need to introduce workflow testing, where we test a flow through several pages.

A test class usually contains a SetUp method, one or more tests, and a TearDown method. The SetUp method will usually do something like get the WebDriver object and make it available to all tests within the class. The TearDown will usually tidy up the test environment or reset values for the next set of tests. Either of these can be run globally for all tests from a SetUpFixture, or within each test class to run before and after each test.
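A minimal sketch of the global route, assuming NUnit 2.x semantics where a SetUpFixture's SetUp and TearDown run once for the whole namespace. The GlobalSetup class name is my own:

```csharp
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;

namespace Automated
{
    // Runs once for the whole namespace: the browser is opened before any
    // test executes and closed again after the last one finishes.
    [SetUpFixture]
    public class GlobalSetup
    {
        [SetUp]
        public void RunBeforeAnyTests()
        {
            // Reuses the public static driver field on the Base class
            Base.driver = new FirefoxDriver();
        }

        [TearDown]
        public void RunAfterAllTests()
        {
            Base.driver.Quit();
        }
    }
}
```

If you go this route, the per-class SetUp would no longer call GetDriver(), since the browser is already open.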

A test procedure using Selenium requires the following to happen:

  1. Instantiate the WebDriver for the browser you wish to automate. This is what we do in the Set Up method, calling the GetDriver method from the base class. This opens the specified browser.
  2. Navigate to a start page. We call the NavigateTo function in the base class to do this. This method grabs our base URL from the app.config file and appends whatever route we give it. See data management below for how to manage different data types such as URLs.
  3. Execute one or more actions, and assert the outcome of those actions.
  4. Tear Down the WebDriver if required, and run clean up commands.

The first test I have written for this example is a successful login test. It navigates to the account login page, fills in the login fields, and submits the login form; it then waits for the account home page to display before executing a checkpoint action to make sure we are on the right page.

The test class looks like this

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using NUnit.Framework;

namespace Automated
{
    [TestFixture]
    class LoginTests : AccountLogin
    {

        [SetUp]
        public void Setup()
        {
            GetDriver();
            driver.Manage().Timeouts().ImplicitlyWait(new TimeSpan(0, 0, 30));
        }

        [Test]
        public void Login_Successful()
        {
            AccountLogin accountLogin = NavigateToLogin();
            // Assert.IsTrue(accountLogin.Sign_Button_Visible());
            AccountHome accountHome = LoginAs("tamgus.bultgreb@yahoo.com", "itsasecret");
            Assert.IsTrue(accountHome.User_Welcome_Name("Tamgus Bultgreb"));

        }

        [TearDown]
        public void TearDown()
        {
            // Some funky stuff here..
        }

    }
}

As you can see, there are some hard-coded values in the test, such as the user name and password. These should be extracted out of the test and placed in a test data repository; more about that later.

Running the tests

There are several ways to run the tests: from the command line, with NUnit's own GUI, or with a test add-in such as TestDriven.NET. Whilst developing I go for TestDriven.NET, as it allows me to run tests quickly from Visual Studio, but at test runtime, with tests being triggered from a CI server, you need to go down the command line route.
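A sketch of the command line route, assuming the compiled test assembly is named Automated.dll (the assembly name and paths are assumptions for this example) and the NUnit 2.5 console runner is on the path:

```shell
# Run the compiled test assembly with the NUnit console runner and
# write results to an XML file that a CI server can pick up.
nunit-console.exe Automated.dll /xml:TestResult.xml
```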

Managing the test data

Your tests will almost always need some kind of data to drive them. In the example above we needed to provide a username, a password, and some URLs; we also needed to provide data for some checkpoints.

There are several ways of managing data, each with its own merits; from experience, the main mechanisms I have seen used successfully are constants files and datasheets. Constants files are basically static classes that contain all the data your tests will use, assigned to variables. This is a nice way to keep all your test artefacts manageable inside the project. It does mean, though, that data can't be easily manipulated close to runtime, and if you wish to change data the test project must be rebuilt. It also stops you from using dynamic data at runtime.

Datasheets give you a lot of flexibility when feeding data into tests: you can implement different scenarios with the same tests, feed data into your datasheets dynamically, and update data at any moment up to test runtime. However, using datasheets tends to remove some of the robustness of your tests. When you are working with constants files you are immediately aware of any break in your data; datasheets tend to promote a more "detached" approach to data management, which can often lead to incorrect data being used within tests.

When choosing which approach to take, consider the following:

Test flexibility
Data reliability

Test flexibility refers to some of the objectives of your automation efforts. If you want flexibility in the data going into your tests, or you want a business representative to feed data into your tests, the datasheet route is more favourable. If your data is simple and unchanging, constants files usually give a more stable and robust test project, which should always be one of your main objectives.

Data reliability refers to how "safe" your data is from change. This is the point I push more than any other in automation. Apart from bad programming, data management is the biggest killer of automation projects. Uncontrolled data means lots of failed tests. A good example would be not having complete control over a user in an application: someone changes the password for that user, or the language the user sees the application in, and any test using either of those items breaks. In our example above we have a test called successful login; if someone else has access to that user and its data, they could change the password, and the successful login test will now fail for data reasons, not because of a code bug. This adds to the maintenance debt.

Here is an example of how I can add data values into the base class in the form of constants and use that to drive the tests.

In the base class, add test data values in the static constructor

public class Base
{
    public static IWebDriver driver;

    public static string _baseUrl, userName, password, userFullName;

    static Base()
    {
        _baseUrl = ConfigurationManager.AppSettings["baseUrl"];
        userName = "livetest999@gmail.com";
        password = "testtest";
        userFullName = "Live Test";        
    }
}

In the test, you can now reference these values as follows
   
[Test]
public void Login_Successful()
{

    AccountLogin accountLogin = NavigateToLogin();
    // Assert.IsTrue(accountLogin.Sign_Button_Visible());
    AccountHome accountHome = LoginAs(userName, password);
    Assert.IsTrue(accountHome.User_Welcome_Name(userFullName));

}

If you do go down the datasheet route, be prepared to invest time in setting up a data management mechanism in your test suite that can read in and parse data for the tests. Libraries such as FileHelpers can greatly facilitate this.
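A rough sketch of what that could look like with FileHelpers. The record layout, file path, and class names here are all illustrative assumptions, not part of the project above:

```csharp
using FileHelpers;

namespace Automated
{
    // Shape of one row in a CSV datasheet: userName,password,fullName
    [DelimitedRecord(",")]
    public class LoginRecord
    {
        public string UserName;
        public string Password;
        public string FullName;
    }

    public class TestData
    {
        // Reads every row from the datasheet into typed records.
        public static LoginRecord[] LoadLogins()
        {
            var engine = new FileHelperEngine(typeof(LoginRecord));
            return (LoginRecord[])engine.ReadFile("TestData\\logins.csv");
        }
    }
}
```

A test could then loop over LoadLogins() and run the same login scenario for each record, which is exactly the flexibility the datasheet approach buys you.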

Ok, that’s it for now. Hopefully you can now set up a test project using the .NET version of WebDriver, implement tests using a common design pattern, and apply a simple data management technique. We have not touched on a lot of UI automation here, just the basics. In the coming months I would like to explore cross-browser testing and localisation testing in more detail.

For more information on NUnit visit their site.

For in-depth information on WebDriver visit the Selenium WebDriver Project Wiki.

Sunday 30 October 2011

GTAC 2011 - Cloudy with a Chance of Test - Videos

Just got back from the Google Test Automation Conference 2011, which is probably one of the best software development conferences out there, and it's free! The focus is on test automation, but the crowd is a mixed bunch from across the broad spectrum of development.

Highlights for me:

Keynote from Alberto Savoia

Test is Dead (Don't take this literally, please!)

http://www.youtube.com/watch?v=gQclnI_8Vmg&list=PLBB2CAFDDBD7B7265

Keynote from Hugh Thompson

How hackers see bugs:

Overview of Angular and its test capabilities from Miško Hevery

All of the videos for GTAC 2011 are here: 

Saturday 24 September 2011

UI Automation: Avoiding Failure


I have developed and worked on many UI automation frameworks using both commercial and open source tools; some of these have been more successful than others. Regardless of the tool you use, there are some common indicators that can help you identify if you are going down the wrong path.

No business backed strategy to begin UI automation

UI automation is expensive; it requires a lot of time and effort. Make sure you have secured enough resource to carry out your plans. If you don't, that resource will be pulled elsewhere and you will end up with several unfinished automation projects that have cost the business money but bring no value.

No clear reason to automate

Why are we automating?  If you have no clear objectives, what’s the point?

Some of the reasons why we should automate:
  • to reduce the feedback loop between code implementation and possible defect detection
  • to reduce the amount of time we spend in resource intensive activities such as regression testing
  • to remove human error from test execution

       …and we should automate when:
  • we have stable functionality
  • we have the skill necessary to automate

No design pattern is being applied to the test design

There is no need to reinvent the wheel with UI automation. Using a recognised design pattern like the page object model will reduce the time it takes to automate. It will also give you cleaner and easier to understand tests. Other engineers will be able to pick up the project and understand it.

Tests are brittle

If small changes to the AUT cause the tests to break then your tests are too brittle and not easily maintainable. 

Things to look out for:

  • Too many checkpoints per test. Always ask yourself what the value of a checkpoint is before implementing it. The more checkpoints we have the greater the chance of test failing. In automation I only test for critical information. Many people will put checkpoints in for page layout, sizing, or styles applied. This is useful but is something that can become hazardous when you start cross browser testing, or testing with different resolutions.
  • Hard coded data. Tests should be driven by a driver class or datasheet if data is required. If you don’t have complete control over your test data then it could change and your tests will break. Being able to feed data in gives you much greater control over the tests and makes them much more useful.
  • Over engineered. Is your test code more complex than the code it is testing? Make sure that your tests are not failing due to over complicated code. I've seen this happen so many times!
  • Tests that depend on the test results of another test. This is a very common mistake. If one test with dependencies fails all dependent tests will also fail.

No control over the data that drives the tests. When the data changes, the tests fail.

Data control is one of the most common reasons why UI automation projects fail. Any automation project should have its own environment where data is guaranteed. If the environment is shared, then you must have a way of protecting the data your tests use. It's very frustrating when your data is tampered with and the result is failed tests.

Tests fail because page or application objects are identified using ambiguous names or auto-generated identifiers

UI automation requires us to identify page objects through their properties or location. Many applications generate unique, changing names for objects; relying on these names is not safe. Try to get developers to add decent identifiers to objects. It will help keep the tests from becoming brittle and save time during test creation.

Large XPath identifiers are also a common cause of test failure. Try to avoid them!
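To illustrate, a quick sketch of the difference; both locators here are invented for this example and do not belong to the application above:

```csharp
using OpenQA.Selenium;

public class LocatorExamples
{
    public void Examples(IWebDriver driver)
    {
        // Brittle: a long positional XPath that breaks as soon as the layout shifts.
        driver.FindElement(By.XPath("/html/body/div[3]/div[2]/form/table/tbody/tr[4]/td[2]/input"));

        // Robust: a short, developer-supplied id (the id value is illustrative).
        driver.FindElement(By.Id("login-submit"));
    }
}
```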

More time is spent on maintaining tests than testing

This is a sure sign you have problems. 

Things to look out for:
  • A large number of tests fail on each test run
  • Engineers are “Baby sitting” test runs. Helping the tests complete is not automation!
  • You seem to be spending all your resource on one application. You never get to automate anything else
  • New test creation drops

Nobody knows what the test coverage is

One thing I’ve found in agile environments is that although UI automation is high on the agenda, there is little understanding of what the test coverage is. Developers and testers churn out tests and get excited, but there is no understanding of what has really been tested. This means you still have to invest in manual regression runs to reassure yourself. UI automation is a safety harness, so we need to know how much protection it is giving us.