
Monday, 14 December 2015

Test Tooling Drift - Integration Test Client Anti-Pattern

Test Tooling Drift is a test tool anti-pattern that seems to be a common occurrence on many of the teams that I work with.

In applications that integrate with external systems, clients are usually created in the code to connect to those systems. These clients usually form part of the code base of the application, and they communicate using protocols such as HTTP, TCP, etc.

When testing this type of application, whether it's writing automated checks or creating tools to facilitate testing, you may find teams (test or development) creating their own test clients to handle some of the testing or checking code that is used against those external systems.

An example of this could be a transaction manager that provides transactional capabilities to a payment system. The payment system will have a client that connects to this transaction manager, and there will be a contract between the application client and the transaction system to facilitate functional or technical change. Changes in the contract will usually be handled within the development process of those working on the team. They may even use mechanisms like consumer driven contract tests to facilitate contract change and approval.

In this scenario it's common to see a separate test client, created purely for testing, being used to communicate with a system such as the transaction manager. As this client is different, if a contract change is implemented between the application client and the transaction manager, there is room for error to creep into our test client should we not also implement that change. The test client then has the potential to shield or mask issues in the contract, such as protocol issues, network issues, schema issues, etc. This is where the drift occurs. Of course, the biggest problem here is the time spent keeping these clients in sync. We are most definitely violating the DRY principle.

I've seen this anti-pattern occurring a lot on mobile application projects. Many mobile applications call an endpoint to facilitate some functionality, and that is done using a connection client built into the application code. When testing this integration, or running contract tests against the endpoint, you will see tests and checks using tools such as Runscope, SoapUI, or Postman, even though neither these tools nor the clients they use to connect to endpoints sit inside your application. Whilst these tools can call the endpoint and validate certain aspects of your contract, they are not doing it in exactly the same way as your application client. Inconsistencies are most prominent in request headers, request creation, and the deserialization of responses into objects to validate or use within test code.

If you want to reduce the risk of failure you should certainly be using the client from the application to make calls to these endpoints during your testing and checking. Tools such as Runscope, Postman, and SoapUI are great for investigating and understanding integrations, but they are tools that use their own way of constructing requests to your endpoints.
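To make this concrete, here is a minimal sketch of a check that reuses the application's own client. The PaymentClient class, its constructor, and GetTransaction are hypothetical stand-ins for whatever client your application actually ships:

using NUnit.Framework;
// Reference the application's own client assembly - not a test-only copy.
using MyApp.Clients;

[TestFixture]
public class TransactionManagerChecks
{
    [Test]
    public void ApplicationClientCanRetrieveATransaction()
    {
        // Same client class, serialisers, and headers as the production code,
        // so a contract change that breaks the app also breaks this check.
        var client = new PaymentClient("https://transactions.test.example.com");

        var transaction = client.GetTransaction("TX-123");

        Assert.IsNotNull(transaction);
        Assert.AreEqual("TX-123", transaction.Id);
    }
}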

If you are an API provider you might want to make use of consumer driven contract testing to ensure you stay aligned with your consumers. Though this can become untenable when you are providing a mass-consumed API such as the Twitter API, at which point you have to move towards suggesting implementations and best practices for consumers.

Monday, 9 November 2015

What really is Continuous Testing?

Continuous Testing. You'll keep hearing this term more and more. Don't be alarmed! It's just a term being used by some to describe practices that have been with us for many years, and a term used by many to cope with the fact that testing can happen at the same time as iterative development. Yes, it's true!

The term is being used to describe the context under which activities such as automated unit, integration, and performance tests are executed, with that context being the release of frequent small batches of code to production, with minimal interaction from humans. It refers to the process of assessing risk on a continual basis in high frequency delivery environments.

It describes the inclusion of automated checks (or tests) into your development workflow, and a 'shift left' in terms of where testing happens within that workflow. The human interaction aspect of continuous testing refers to the increasing validation of business requirements before they hit development teams, through prototyping and business value analysis, and an increased focus on the amount of in-team testing and crowd testing that occurs.

Most will recognise these activities to be an integral part of any lean, agile, XP, continuous deployment or continuous delivery workflow or pipeline.

Wikipedia describes continuous testing as:
"...the process of executing automated tests as part of the software delivery pipeline to obtain immediate feedback on the business risks associated with a software release candidate."
I'm not a big fan of this term to describe these activities, as it conjures up the notion that continuous testing is a separate set of activities and not integral to a successful development workflow or pipeline. It drives a wedge once again between the concepts of testing and development, which need to go hand in hand. A wedge that troubled us so much in the waterfall years, and one that has persisted throughout the general adoption of agile practices.

Do we really need the term continuous testing to group together activities, processes, mechanisms, etc. that are already happily described in detail by those championing devops, continuous delivery, continuous deployment, xp, etc.? What value is this terminology bringing to the software delivery table? I just find it confusing.

I would hazard a guess and state that the term is being used in organisations where testing is sequential and costly, and not yet integral to the development workflow, and where little or no test automation exists to gain traction with difficult-to-implement activities. Conceptually I can see how the term can help those in waterfall environments, or pseudo-agile environments, to begin embracing a shift left in their testing process.

Whilst more mature developers and teams won't be impacted too much by the term, it's the consultancies, the large archaic organisations, and misinformed software professionals that worry me. Already terms such as devops, test automation, etc. have been hijacked by many to construct new functional silos within the development ecosystem, rather than embracing the cultural change they represent. The very same could happen with continuous testing. Look out soon for job ads with the title "wanted - head of continuous testing"!

When you read some of the writing and blog posts on continuous testing, you will notice that some have attached roles to continuous testing, as if it was now someone's responsibility. The holistic value that continuous testing attempts to describe gets blown away once it becomes someone's responsibility.

Whilst I completely agree with and promote most of the practices I see being written about, I can't help but think that grouping things like automated unit or performance testing, or monitoring, under the banner of continuous testing could encourage the removal of the responsibility for quality from the entire team, handing it over to a separate group and taking many organisations right back in time. Please don't let there be a Head of Continuous Testing appearing in your organisation!

Some Continuous Testing sources

https://www.stickyminds.com/interview/putting-quality-first-through-continuous-testing-starwest-2015-interview-adam-auerbach
https://blog.parasoft.com/continuous-testing-devops-infographic
https://www.soasta.com/webinars/continuous-testing-in-devops/


Tuesday, 17 March 2015

Continuous Delivery - Madrid 26 Feb 2015

Here are the slides to accompany the talk I did on continuous delivery and testing in February at AfterTest in Madrid. 


Many thanks to expoQA and Graham Moran for inviting me along to talk, and Miguel Angel Nicolao at panel.es for an excellent write-up. Thanks to everyone who attended, I hope it was useful. There was certainly some interesting debate after the talk!

The next AfterTest is in Barcelona on 26/3/2015 with Javier Pello.

Thursday, 11 September 2014

Do you really need a test automation framework?

Virtually every project I have worked on has at some time in its life had a close encounter with a well-meaning, all-things-to-all-men test automation framework that has caused more harm than good, and cost a fortune to build in the process.

Maybe it's due to the ubiquitous SDET (Software Development Engineer in Test), or a well-meaning tester with no programming background, or a very demanding test manager, or just a plain old developer who should know better, but the continuing prevalence of massive test automation frameworks is slightly worrying. I both frequently hear about them and very often see them. If they appear on my projects, they usually die a swift death.

So what's the problem? 


The premise is that there is no way to execute or optimise automated testing other than executing vast swathes of system tests, usually from the highest interface into a system, using a "special" test framework. If this is your only way to provide any kind of code coverage or guarantee of safe delivery, then it points to a poor delivery process or a badly engineered system that is no doubt flaky and incredibly difficult to change and maintain.

These frameworks are typically maintained by a tester, test team, or a specialist bunch of mercenary developers. 

You will normally find the tests executed by these frameworks to be long running, high maintenance, and vague in what they actually do. In many cases they will be driven through a UI, which typically means something like WebDriver, wrapped in an obviously completely necessary layer of abstraction, and maybe some fruity BDD framework to describe what the tests are doing. 

Being system tests typically means they are run in an integrated full-stack environment. This in itself, regardless of test design, is a complexity that most would wish to avoid. Data management, permissions, versioning, infrastructure availability, etc. all come into play here.

They very often sit in a separate test project that is completely decoupled from the code they are testing, meaning that version synchronisation issues at the feature level become a real problem. They are also hardly ever created by the people that need the feedback most from these tests - the developers.

The cost of both building and maintaining test frameworks for any enterprise sized solution can become astronomical in comparison to the actual value and risk reduction they deliver.

What I typically see is a very high test execution failure rate that can't be attributed to actual code changes. I have seen some fairly large enterprise projects where the test run failure rate was between 69% and 91%, with those failures not tied to a code or configuration change. That is quite shocking. Equally, I have seen failure rates lower than 10%, but that does seem rare. If you couple the failure rate with the typical cost of building a test framework for any reasonably complex system, then the value becomes quite clear. Just multiply the day rate of all those involved in building the framework by the number of days it takes to build, do the same for the ongoing maintenance cost, deduct all that from your expected profit, and work out whether that cost is justified.
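As a rough back-of-the-envelope sketch of that sum (every figure below is invented purely for illustration):

using System;

class FrameworkCostModel
{
    static void Main()
    {
        // All figures are assumptions - plug in your own.
        const decimal dayRate = 450m;             // average day rate per person
        const int builders = 4;                   // people building the framework
        const int buildDays = 60;                 // elapsed days to build it
        const decimal yearlyMaintenance = 20000m; // ongoing upkeep per year
        const int lifespanYears = 2;              // frameworks rarely last longer

        decimal totalCost = (dayRate * builders * buildDays)
                          + (yearlyMaintenance * lifespanYears);

        // 148,000 in this example - deduct that from the expected profit and
        // ask whether the risk reduction you actually get is worth it.
        Console.WriteLine("Lifetime framework cost: {0:N0}", totalCost);
    }
}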

The other cost-related issue lies with the constant rebuilding of test frameworks. As test frameworks are not typically used in production, their change management becomes less rigid. I'm not sure what the average lifespan of a test framework is, but based on experience I would hazard a guess at less than 2 years.

Essentially, these test frameworks are one of the symptoms of a badly planned test/design approach applied to your system.  

and the Cause?


So do you really need to build this all-singing, all-dancing test automation framework? There are many causes and reasons why test frameworks appear.

Confidence
Test managers, testers, release managers, project managers, etc. may have no understanding of how developer-driven tests may be helping to reduce the risk of failure when adding and changing features. This encourages regression test phases, release test phases, and pre-prod test phases, all usually heavily laden with time-draining manual and automated test framework antics. (Steve Smith goes into the detail of release testing, and how dubious that activity is, and that's without even talking about test frameworks!)

Inexperience
Engineers who have limited experience of commercial development, or engineers that should know better, choose to knock out something that works over something that works and is maintainable. Very often you see the "responsibility for quality" foisted upon testers or QA engineers within a team or organisation. These individuals very often don't have a programming background and will create, with the best intentions, a safety net in the form of a test framework that will typically be riddled with design anti-patterns and difficult-to-maintain code. Again, this encourages the inclusion of multiple test phases due to the lack of confidence provided by the general lack of valuable and consistent testing feedback.

Cost
Project and test managers may believe that tester or QA headcount can be reduced through the use of automation. This leads to attempts to automate the types of tests that a tester would execute. In reality, the things that get automated are just simple checks and not the complex interactive tests that a human being can execute. This in turn either leads to increased project costs, due to maintaining both manual and automation testers, or to poorer quality code being delivered due to the reduced amount of interactive testing that takes place. Either way, there is a hit on the money you have to spend.

Legacy Systems
We've probably all worked on a legacy system with no tests and no documentation. If we need to make changes to this system, or refactor it into something more manageable, then we do need some kind of safety net. This can rear its head in the form of large amounts of system tests being run by a test automation framework.

What's the solution?

A decent test approach during the entire life cycle of your system, from concept to the end of its life. 

The decent approach would typically include a combination of collaborative (isolated tests that use test doubles) and contract tests, coupled with those few tests that give you the warm fuzzy feeling that your system is hanging together nicely. See this great video from J.B. Rainsberger ("Integrated Tests Are A Scam") about how to design your tests to optimise not only how the system is tested, but also how it is designed.

If you join this all up with the testing executed by an expert interactive tester, then you will never have the need for a test framework ever again! Honest!

Of course, that's a very idealistic and simplistic way to put it, but those are the bare bones of it. I think that some of the bigger problems really do come from those legacy systems with no tests, the ones that you need to refactor but have no safety net for. The instant reaction is to add a mountain of system tests driven by a framework to give you the sense that the risk of breakage is covered. In my mind, it's probably more cost effective, and safer, to have some good testers and a small number of warm and fuzzy smoke tests to help you over the hump of making a change, where each change sees some collaborative and possibly contract tests included in the work required for that change. Over time, especially during a refactor, the test approach will modernise and provide more built-in safety.

Finally..


I have spent a number of years during my career either building or contributing to test automation frameworks. My experience is generally negative (hence this post). However, there is a test framework that I built in 2006 still running a couple of hundred system tests every single day on an insurance policy management system. The cause of its existence was, and still is, confidence: not in the software itself, but in the infrastructure on which it is hosted. It's still there, still going strong. I've not done the sums yet to work out the value, but I'm sure either the company using it, or the consulting company maintaining it, has a lot to thank me for. Value is in the eye of the beholder after all!



Wednesday, 10 September 2014

Regression testing

What is regression testing? Why do it? What value does it have? My short answer to these questions is that regression testing, as a concept, is about searching for new faults introduced through changes to the code base, and about learning more about the system you are building.

Usually in modern development, through an automated build pipeline, you are constantly executing checks that help detect whether the changes made to your code have introduced new faults. Couple this with the interactive testing that we do throughout the development of any story and we have a combined effort that you could describe as regression testing.

In my mind, that's pretty much it. Its main purpose is to find faults, and to learn more about the system.

Many people become confused about the term regression testing, and rightly question its worth when they judge automated-only solutions, typically through a UI or API, or look to have regression testing as a phase that occurs at some point on the critical path to delivery. Both of these approaches fall into the realm of delivery anti-patterns.

If you think of your entire test and development approach as a way to minimise the risk of regression and to deliver the right thing in the most effective and safe way, you very rarely have to think about regression testing as a separate activity.

That's my rather simplistic view. For more detail, have a read of Michael Bolton's presentation "Things Could Get Worse: Ideas About Regression Testing".

Tuesday, 18 February 2014

Alternative views of the test pyramid diagram

warning.. there is an attempt at humour in here..
A lot of people like to use simple diagrams to explain their test approach on a project or system. One that crops up a lot is the test pyramid as described in Alister Scott's WatirMelon blog. I love this diagram, and thought that it gave people a high-level understanding of what and how you are going to be testing, both automatically and as a human, on a project. I was wrong! After a recent series of criticism about a test strategy I produced, I realised that not everyone sees the world like I do. So here are a couple of takes on the test pyramid to help other people understand the diagram a bit better. It's so difficult to be all things to all people!

Pyramid at sunrise
Obviously, the sun is what humans do, with their minds etc., and the pyramid describes the effort invested in certain test automation activities.

When I was looking at the diagram above, I couldn't help noticing similarities to pac-man, so here is my second attempt.

pac-man tests
For more information see my brief entry on the test pyramid.

Wednesday, 5 February 2014

Low ceremony http request mocking with Betamax

Whilst researching effective HTTP request mocking in JUnit tests I came across a great project called Betamax, created by Rob Fletcher.
Betamax is a tool for mocking external HTTP resources such as web services and REST APIs in your tests. The project was inspired by the VCR library for Ruby.
You don’t want 3rd party downtime, network issues or resource constraints (such as the Twitter API’s rate limit) to break your tests. Writing custom stub web server code and configuring the application to connect to a different URI when under test is tedious and might not accurately simulate the real service. 
Betamax aims to solve these problems by intercepting HTTP connections initiated by your application and replaying previously recorded responses.
The first time a test annotated with @Betamax is run, any HTTP traffic is recorded to a tape, and subsequent test runs will play back the recorded HTTP response from the tape without actually connecting to the external server.
Tapes are stored to disk as YAML files and can be modified (or even created) by hand and committed to your project’s source control repository so they can be shared by other members of your team and used by your CI server. Different tests can use different tapes to simulate various response conditions. Each tape can hold multiple request/response interactions. An example tape file can be found here.
Betamax works with JUnit and Spock. Betamax is written in Groovy, but can be used to test applications written in any JVM language.

For more information please visit:


Mocking out system dependencies with MockServer

Have you ever had the problem of running a web application in a development environment that had multiple system dependencies, only to find that those dependencies are unavailable?

MockServer built by James D Bloom can help you solve this problem.
MockServer is for mocking of any system you integrate with via HTTP or HTTPS (i.e. services, web sites, etc).
MockServer supports:
  • mocking of any HTTP / HTTPS response when any request is matched 
  • recording requests and responses to analyse how a system behaves 
  • verifying which requests and responses have been sent as part of a test
MockServer is built using Java and has bindings for both Java and JavaScript, although any language that can send JSON via HTTP can easily use the MockServer API. There are some .NET bindings in the pipeline, but until those come along you could use Nancy to provide similar capabilities.
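As a sketch of that JSON-over-HTTP route from C#, something like the following should work against a locally running MockServer. The /expectation path and the httpRequest/httpResponse JSON shape are written from memory, so treat them as assumptions and check the MockServer documentation for your version:

using System.Net;

class MockServerExpectationExample
{
    static void Main()
    {
        // JSON expectation: respond 200 "OK" to any GET /status request.
        var expectation = @"{
            ""httpRequest"": { ""method"": ""GET"", ""path"": ""/status"" },
            ""httpResponse"": { ""statusCode"": 200, ""body"": ""OK"" }
        }";

        using (var client = new WebClient())
        {
            // MockServer listens on port 1080 by default.
            client.UploadString("http://localhost:1080/expectation", "PUT", expectation);
        }
    }
}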

For more information on MockServer please visit

http://www.mock-server.com/

Sunday, 5 May 2013

GTAC 2013 - Videos now up

I wasn't lucky enough to get an invite to this year's GTAC, but the videos look good.

See the conference here:

http://www.youtube.com/playlist?list=PLSIUOFhnxEiCODb8XQB-RUQ0RGNZ2yW7d

Wednesday, 23 January 2013

Test Doubles

Sometimes your system under test is made up of multiple systems. Some of those systems may not be in your control, and others may be very difficult to operate in a non-production environment. When this happens you can use a test double to mimic those particular parts of the system.

A test double is a generic term used to describe the various ways of mimicking a system, and doubles can be classified as follows:
  • Dummy objects - objects that are passed around but never actually used. Usually they are just used to fill parameter lists.
  • Fake objects - objects that actually have working implementations, but usually take some shortcut which makes them not suitable for production (an InMemoryTestDatabase is a good example).
  • Stubs - provide canned answers to calls made during the test, usually not responding at all to anything outside what's programmed in for the test.
  • Spies - These are stubs that also record some information based on how they were called. One form of this might be an email service that records how many messages it was sent.
  • Mocks - These are pre-programmed with expectations which form a specification of the calls they are expected to receive. They can throw an exception if they receive a call they don't expect and are checked during verification to ensure they got all the calls they were expecting.
Test doubles can be used across the entire test life cycle, facilitating the delivery of code by solving tricky integration or environment problems. However, at some point it may be necessary to remove those doubles and carry out more realistic system integration tests. You should always assess the risk of using test doubles and keep at the back of your mind that these are not real systems or objects; they are just helping development move along.
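As a quick hand-rolled illustration in C#, here is a stub and a spy for the email service mentioned above (the IEmailService interface is invented for the example):

public interface IEmailService
{
    void Send(string to, string body);
}

// Stub: a canned, do-nothing implementation so the code under test
// can run without a real mail server.
public class StubEmailService : IEmailService
{
    public void Send(string to, string body) { /* canned no-op */ }
}

// Spy: a stub that also records how it was called, so a test can
// later verify how many messages were sent.
public class SpyEmailService : IEmailService
{
    public int MessagesSent { get; private set; }

    public void Send(string to, string body)
    {
        MessagesSent++;
    }
}

A test would hand the spy to the code under test and assert on MessagesSent afterwards; a mock, by contrast, would also verify the expected calls itself.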

For more information read:

Tuesday, 11 December 2012

The Automation Pyramid

Think about using the test automation pyramid when planning your test automation strategy.

The test automation pyramid was devised by Mike Cohn to describe the value of different types of automated tests in the context of an n-tier application. The concept is very simple: invest more time and effort in those tests that are lower down the pyramid than in those at the peak, as the tests lower down the pyramid provide the most value in terms of quick feedback and reliability, whereas those at the peak are expensive to implement, brittle, and time consuming.

The traditional pyramid is split into three layers, Unit testing at the base, integration/API tests in the middle layer, and UI tests forming the peak of the pyramid. Many now opt to describe the UI layer as the ‘end to end’ layer as this phrase better represents those types of test.


Useful posts on the subject:

http://martinfowler.com/bliki/TestPyramid.html by Martin Fowler

http://www.mountaingoatsoftware.com/blog/the-forgotten-layer-of-the-test-automation-pyramid
by Mike Cohn

Tuesday, 13 November 2012

Automated smoke tests in production

If you can, don’t be afraid to run your automated tests in production. A production environment is a place where automated tests can give real value, especially after a release. Instant feedback on the success of a change in production could be worth a lot of money to your organisation.

As a minimum, run automated smoke tests before and after a release in production: firstly to establish a baseline, and secondly to assure that nothing has broken after the release.

If you are limited by the data you can use or create during a test then just consider non-transactional tests. Any way that you can speed up the feedback loop when a change has occurred is a bonus.
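A non-transactional smoke check can be as small as the NUnit sketch below; the status page URL is hypothetical:

using System.Net;
using NUnit.Framework;

[TestFixture]
public class ProductionSmokeTests
{
    [Test]
    public void StatusEndpointRespondsWithOk()
    {
        // Read-only check: hits a status page and creates no data.
        var request = (HttpWebRequest)WebRequest.Create("https://www.example.com/status");

        using (var response = (HttpWebResponse)request.GetResponse())
        {
            Assert.AreEqual(HttpStatusCode.OK, response.StatusCode);
        }
    }
}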

Obviously not all systems or organisations are conducive to this sort of strategy, so when designing a new system it's worth thinking about the ability to run automated tests in a live environment.

Monday, 22 October 2012

Testing Webservices with SpecFlow

I have been looking for a way to test multiple SOAP web services as part of a complete integrated end-to-end workflow that can at the same time provide valuable business documentation. The requirements are quite simple:
  • Workflows can be written using natural language
  • Multiple web services can be easily executed in sequence
  • Development time must be minimal
My immediate thought was to use a cucumber type test framework, and after a recommendation I started to investigate SpecFlow.

SpecFlow is a way of binding business requirements to code through specification by example in .NET. It supports both behaviour driven development (BDD) and test driven development (TDD). SpecFlow, like any other natural language test framework, can also be used as a tool to combine documentation and testing of existing code, and that is exactly what I have used it for.

Using this method, in a feature scenario written in Gherkin I can specify the details of an arbitrary web service: the contract location, the methods to be used, and what the response should be.

In the binding statements, which SpecFlow uses to manage the logic required to execute the scenarios, I can implement the call to the web service. There is a great example of this framework being used here, with multiple web services being called inside one feature.
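As a rough sketch of the shape this takes (the scenario wording, service details, and the SoapClient helper are all invented for illustration):

using NUnit.Framework;
using TechTalk.SpecFlow;

// The feature file (Gherkin):
//
//   Scenario: Policy service returns an active policy
//     Given the web service at "http://services.example.com/Policy.svc"
//     When I call the "GetPolicy" method with policy id "12345"
//     Then the response should contain "Active"

[Binding]
public class WebServiceSteps
{
    private string _serviceUrl;
    private string _response;

    [Given(@"the web service at ""(.*)""")]
    public void GivenTheWebServiceAt(string url)
    {
        _serviceUrl = url;
    }

    [When(@"I call the ""(.*)"" method with policy id ""(.*)""")]
    public void WhenICallTheMethod(string method, string policyId)
    {
        // SoapClient is a hypothetical helper that wraps the SOAP plumbing.
        _response = SoapClient.Call(_serviceUrl, method, policyId);
    }

    [Then(@"the response should contain ""(.*)""")]
    public void ThenTheResponseShouldContain(string expected)
    {
        StringAssert.Contains(expected, _response);
    }
}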

This is probably not the most beautiful solution I have used to test services in a SOA environment, but it provides the ability to get accessible system knowledge into the test, and it's extremely quick to set up.

Wednesday, 17 October 2012

Developers in test. Yes, really!

I have mainly worked in high growth businesses, either in the form of start ups, or strategic projects in large corporations. My role typically involves promoting the use of sensible software engineering practices and software delivery patterns to help produce a product that works, and that can have frequent low risk change applied to it. In this type of environment, the team structure is very much organismic in nature. What this usually means is that there are very few people dedicated to, or specialising in, test activities. 

However, this does not mean that testing gets completely side stepped. We can still achieve the quality objectives of the organisation without dedicated specialists. Given the right means, I have found developers in this type of environment can become some of the best testers that you will come across.

How does that work?


There are a number of ways to bring effective testing to the forefront of product engineering.

Software Engineering Practices
I always ensure that developers are equipped with decision making power on how their work environments are structured, on the tools that they use, and on the delivery mechanism used to push regular updates to a product. I ensure that teams use sensible practices such as CI, zero branching, infrastructure as code, contract testing, and the like. I push continuous delivery, and actively promote the learnings I have made from the many great people I have worked with.

People
You need to hire engineers on the team that take a holistic and caring approach to software development. These are the people that have built successful products from the ground up, or have been pivotal players in very successful product teams.

Test Activities
I find that coaching teams using principles from folk like James Bach and Michael Bolton is incredibly useful in upskilling developers quickly in the art of testing. These two guys have spent their careers honing a testing approach, and are so well drilled that you will always come away from any of their writings or teachings with more than a handful of powerful testing ideas. I personally think they are great guys that should be listened to a lot more. Their pragmatic, and often dogmatic, approach is contributing to the changing face of testing.

At some point organismic structures become mechanistic. This is when professional testers are hired. This is when test managers are hired, or maybe a head of QA. At this point it is always really good to have facts and figures to assess just how successful the new order is compared to your pre-existing "testerless" state.





Sunday, 16 September 2012

Digging into Compiled Code


I recently had to test a number of changes to a .NET web service which had no test automation, no regression tests, and no specification apart from the service contract and a Subversion change log. In addition, there was no indication as to when the service had last been released, so I had no idea from the change log which changes were live and which required testing.

Fortunately I had access to the live binaries, which meant that I was able to decompile them using Red Gate's Reflector and drill into individual methods. This gave me the ability to cross reference whether the changes listed in the change log were actually live or not.

It took about an hour to analyse the decompiled code, but this reduced the potential test time from approximately four days down to less than one. It also gave reassurance that no untested code would be released.

A decompiler is a great tool that gives you further insight into the code you are testing. Red Gate's .NET Reflector is one of the most common for .NET, and one I use a lot. For Java there are many plugins available for most common IDEs; I'm currently playing with the "Java Decompiler Project".

Friday, 27 July 2012

Achieving an expected level of quality with limited resource and budget

Sometimes there is just no money for testing or QA. Testers leave your team and don't get replaced. The team dwindles, but the developer base either maintains or grows. Your reduced team has more and more to do. The worst case scenario here is that your remaining testers become overworked, can't do their job properly, get thoroughly demotivated, and leave, and who could blame them? You now have even less resource.

Despite the scenario above, when it does happen you will still hear the mantra of "quality is not negotiable", and, probably even more so, requests from product and company leaders to support everything and everyone.

So what is possible? How can you achieve the expected system and product quality with a limited budget?

Looking back at some of the successful projects in which I have been involved, and which have also been struck by similar limited test resource scenarios, it is possible to identify some common characteristics that contributed to their success from both a product and quality perspective.

- a product that the customer really needs
- customer input from the start
- a team that cares about what they are building
- a product manager that knows how to ship products and that trusts in the development team
- a strong work ethic
- innovation throughout the team
- the right tools
- a simple iterative development process

Without going into the psychological aspects of building the right team and processes, most of the above I would weigh as being far more important to foster or implement in a product development team than fretting too much about test and QA resource. Why? Having all the above, for me, goes a long way to ensuring the quality of both the idea and build of your product. Good people, processes, and tools will do far more for you than hammering your application to death, and don’t usually come out of your budget. If you don't have much of the above then life will be difficult.

As a final comment, if you are faced with the scenario described above, you should ask yourself, and maybe the business, the following questions:

- Can we compromise the quality of the system?
- Is quality negotiable for this product?
- Will the customers accept a less than perfect solution?

If the answer is yes to any of these questions then you have the answer as to why you have no budget, and with this knowledge you can then focus your test and quality efforts in a different and more effective manner.

Sunday, 17 June 2012

Unit testing databases using c# and Nunit

I have been looking at ways to regression test the data access layer of a .NET application that has a heavy reliance on stored procedures. There are tools that can help do this, and I did consider both DbFit and NDbUnit, but neither of them could satisfy my criteria.

Criteria

  • Ease of use – No time to train developers or testers on how to use a new test framework
  • Ability to use straight SQL statements
  • The tool or mechanism must integrate easily into this and other projects.  I also need to provide unit testing for a data warehouse project, so something that I could use on both would be perfect
  • The generated tests must be easy to put under version control. Being able to tightly couple tests with a specific version or schema is very important, but more about that another time.
The .NET application I'm trying to test already has a data access layer that could be used to create these tests, but the implementation of this particular layer is complicated and would require the test engineers working on the project to have a high level of .NET understanding.

Solution


The solution I came up with creates a very simple data access layer using System.Data.SqlClient and NUnit (download NUnit here). The only complexity that the tester needs to think about is the way they construct the SQL and how they write assertions.

Using standard NUnit test fixtures, in the test set-up I connect to a database, and then in the tests I execute queries and stored procedures, using simple asserts to validate the results.

Here is how it's done.

Create a standard class library project that references:

NUnit
nunit.framework

Microsoft
System.Data

I'm using a factory pattern with interfaces that allow easy creation of a database session management class which can be used throughout multiple test fixtures. The session manager has methods that create connections to a database defined in a factory, query the database, and execute stored procedures.

The test fixture:

using System;
using NUnit.Framework;
using Codedetective.Database;


namespace Codedetective.Tests
{
    [TestFixture]
    public class DbChecks
    {
        readonly IDatabaseSessionFactory _dbFactory;


        public DbChecks()
        {
            _dbFactory = DatabaseSessionFactory.Create
                (@"Database=CodeDectiveExamples;Data Source=local\test;
                    User=*******;Password=******");
        }


        [Test, Description("Identify whether TestSet 501 is created and active")]
        public void DbTest01()
        {
            using (var session = _dbFactory.Create())
            {
                var query = session.CreateQuery(
                    @"select count(*) from testset
                      where testSetId = 501 and Active = '1'");
                var result = query.GetSingleResult<int>();
                Console.WriteLine("test 1 " + ((result == 1) ? "passed" : "failed"));
                Assert.AreEqual(1, result);
            }
        }
    }
}
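For multi-row results, a second check in the same fixture could use GetResults<T> to map rows onto a simple entity. The TestSet class here is a hypothetical POCO whose property names match the column names:

public class TestSet
{
    public int TestSetId { get; set; }
    public string Name { get; set; }
}

[Test, Description("List the active test sets")]
public void DbTest02()
{
    using (var session = _dbFactory.Create())
    {
        var query = session.CreateQuery(
            @"select TestSetId, Name from testset where Active = '1'");

        // Rows are mapped onto TestSet instances by matching property
        // names to column names via reflection.
        foreach (var testSet in query.GetResults<TestSet>())
        {
            Console.WriteLine("{0}: {1}", testSet.TestSetId, testSet.Name);
        }
    }
}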

The database session manager:

using System.Data.SqlClient;


namespace Codedetective.Database
{
    public class DatabaseSession : IDatabaseSession
    {
        public string ConnectionString { get; set; }
        private SqlConnection _connection;
        private SqlTransaction _transaction;


        public DatabaseSession(string connectionString)
        {
            ConnectionString = connectionString;
        }


        public SqlConnection GetConnection()
        {
            if (_connection == null)
            {
                InitializeConnection();
            }


            return _connection;
        }


        public SqlTransaction GetTransaction()
        {
            if (_transaction == null)
            {
                InitializeConnection();
            }


            return _transaction;
        }


        private void InitializeConnection()
        {
            _connection = new SqlConnection(ConnectionString);
            _connection.Open();


            _transaction = _connection.BeginTransaction();
        }


        public void Dispose()
        {
            if (_transaction != null)
                _transaction.Dispose();


            if (_connection != null)
                _connection.Dispose();
        }


        public IDatabaseQuery CreateQuery(string query)
        {
            var command = GetConnection().CreateCommand();


            command.CommandText = query;
            command.Transaction = _transaction;


            return new DatabaseQuery(command);
        }


        public IDatabaseNoQuery CreateNoQuery(string insertstring)
        {
            var command = GetConnection().CreateCommand();
            command.CommandText = insertstring;
            command.Transaction = _transaction;


            return new DatabaseNoQuery(command);
        }


        public void Commit()
        {
            _transaction.Commit();
        }


        public void Rollback()
        {
            _transaction.Rollback();
        }
    }
}

The interface of the DatabaseSession class:

using System;
using System.Data.SqlClient;


namespace Codedetective.Database
{
    public interface IDatabaseSession : IDisposable
    {
        IDatabaseQuery CreateQuery(string query);
        IDatabaseNoQuery CreateNoQuery(string insertstring);


        SqlConnection GetConnection();
        SqlTransaction GetTransaction();


        void Commit();
        void Rollback();
    }
}

The factory that we use to create the database session manager:

namespace Codedetective.Database
{
    public class DatabaseSessionFactory : IDatabaseSessionFactory
    {
        public string ConnectionString { get; set; }


        public IDatabaseSession Create()
        {
            return new DatabaseSession(ConnectionString);
        }


        public static IDatabaseSessionFactory Create(string connectionString)
        {
            var sessionFactory = new DatabaseSessionFactory
            {
                ConnectionString = connectionString
            };


            return sessionFactory;
        }
    }
}

The interface for DatabaseSessionFactory:

namespace Codedetective.Database
{
    public interface IDatabaseSessionFactory
    {
        IDatabaseSession Create();
    }
}

Finally, we create the methods that will be used to execute queries and stored procedures:

using System;
using System.Collections.Generic;
using System.Data.SqlClient;


namespace Codedetective.Database
{
    public class DatabaseQuery : IDatabaseQuery
    {
        SqlCommand Command { get; set; }


        public DatabaseQuery(SqlCommand command)
        {
            Command = command;
        }


        public void AddParameter(string name, object value, System.Data.DbType dbType)
        {
            var parameter = Command.Parameters.AddWithValue(name, value);
            parameter.DbType = dbType;
        }


        public TResult GetSingleResult<TResult>()
        {
            return (TResult)Convert.ChangeType(Command.ExecuteScalar(), typeof(TResult));
        }


        public IEnumerable<TResult> GetResults<TResult>()
        {
            Type resultType = typeof(TResult);
            IList<TResult> result = new List<TResult>();


            if (resultType.FullName.StartsWith("System."))
            {
                using (var reader = Command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        var value = reader.GetValue(0);
                        result.Add((TResult)(value != DBNull.Value ? value : null));
                    }
                }
            }
            else
            {
                var properties = typeof(TResult).GetProperties();


                using (var reader = Command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        var entity = Activator.CreateInstance<TResult>();


                        foreach (var property in properties)
                        {
                            var value = reader[property.Name];
                            property.SetValue(entity, value != DBNull.Value ? value : null, null);
                        }
                        result.Add(entity);
                    }
                }
            }
            return result;
        }
    }
}

The interface for DatabaseQuery is IDatabaseQuery:

using System.Collections.Generic;
using System.Data;


namespace Codedetective.Database
{
    public interface IDatabaseQuery
    {
        void AddParameter(string name, object value, DbType dbType);


        TResult GetSingleResult<TResult>();
        IEnumerable<TResult> GetResults<TResult>();
    }
}

Now, for stored procedure execution, we create a class called DatabaseNoQuery:

using System.Data.SqlClient;


namespace Codedetective.Database
{
    public class DatabaseNoQuery : IDatabaseNoQuery
    {
        SqlCommand Command { get; set; }


        public DatabaseNoQuery(SqlCommand command)
        {
            Command = command;
        }


        public void AddParameter(string name, object value, System.Data.DbType dbType)
        {
            var parameter = Command.Parameters.AddWithValue(name, value);
            parameter.DbType = dbType;
        }


        public int ExecuteInsert()
        {
            int rows = Command.ExecuteNonQuery();
            return rows;
        }
    }
}


The interface for DatabaseNoQuery is IDatabaseNoQuery:

namespace Codedetective.Database
{
    public interface IDatabaseNoQuery
    {
        int ExecuteInsert();
        void AddParameter(string name, object value, System.Data.DbType dbType);
    }
}
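To round things off, here is how a stored procedure call might look through this layer in a test; the dbo.AddTestSet procedure is invented for illustration:

[Test, Description("Exercise a stored procedure through the session layer")]
public void StoredProcedureTest01()
{
    using (var session = _dbFactory.Create())
    {
        var proc = session.CreateNoQuery("exec dbo.AddTestSet @Name, @Active");
        proc.AddParameter("@Name", "smoke run", System.Data.DbType.String);
        proc.AddParameter("@Active", 1, System.Data.DbType.Int32);

        Assert.AreEqual(1, proc.ExecuteInsert());

        // Roll back so the test leaves the database as it found it.
        session.Rollback();
    }
}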

This is a long way from being a tool such as DbFit, which opens up automation to even non-programmers, but it serves a purpose, and does it well. The entire team can now write these tests, which can be run alongside the rest of the project tests.

Monday, 12 December 2011

Agile Test Strategy (Updated)

A couple of years ago I posted a very simple test strategy for agile-based projects; here is an updated version.


Test and quality objectives


Objectives are a fundamental part of any strategy as they show us where we need to be going, and they allow us to invest in the right activities. My test strategies typically include the following objectives:
  • Provide the customer with a system that they really need with quality baked in
  • Automate as much of the testing, configuration, and deployment as possible
  • Engage the business and customers in testing
  • Provide the business with the ability to confidently deliver features or complete systems without considering lengthy or multiple test iterations

Phase – Project conception & early estimation meetings


This is the most important part of the project. The wrong idea here could result in many man hours down the drain and a lot of money lost for your business. Who is really bothered about reducing the defect detection rate in a product or system that no one will ever use? Idea bugs are the most costly bugs and the ones that we should really focus on. This is sometimes out of the hands of most QA and test engineers, and usually comes down to having either a very strong-willed architect or a product manager that really understands what the customer needs and how to prove those ideas. I believe that this warrants a completely separate post, but you can find a very good explanation of idea bugs here. Whatever happens, there are tasks that we can take on.
  • Encourage pretotyping to prove the product concept over writing stories for unproven ideas
  • Understand any technology
  • Begin to sketch out a test strategy for the project

Phase – Early stage sprinting


During the early sprints our focus should be on proving the architecture and system concepts, building the team, and putting in place development and test frameworks.
Planning
  • Write technical stories or ensure the following tasks exist (These are all MUSTS)
    • Set up a CI project on your CI server
    • Set up unit test framework
    • Set up framework for front end script automation (JSUnit)
    • Set up build scripts
    • Set up all the necessary environments for now and think about what you may need for the future (Identify any needs you have further down the line)
  • Help scope out the stories and product
  • Get acceptance criteria clear in each story
  • Engage architects and technical experts in all planning discussions, never make assumptions on how things should work
  • Liaise with test leads or test managers to ensure you have covered off any global test strategies or ideals
  • Raise any performance considerations
Sprint
  • If the architecture is not yet baked then get to understand proposed solutions, arguments for and against. If it is decided, get to know why it was chosen, capabilities, etc
  • Ensure that plans are in place to make the code testable at more than just the unit level
  • Work with the product owners to get more insight on what the product is supposed to be doing
  • Update and test build and deploy mechanisms into the QA environments
  • Look for opportunities for early automation
  • Pair program / test / analyse

Phase – Normal sprinting


During normal sprinting we shape the product and deploy to UAT or beta environments on a regular basis to get feedback from customers; we may even be deploying direct to production. Regression testing is a key part of this process so it must be as efficient as possible. Throughout the strategy we are automating every process we can. Unit, integration, front end, and build scripts all combine to provide a high level of confidence in what we are delivering.


Planning
  • Clarify acceptance criteria
  • Write testing tasks for all user stories
    • Unit
    • Integration
    • Scripting
  • Plan exploratory strategy
  • Ensure environments are ready
  • Plan crowd source testing
  • Ensure that the correct people are involved in the different planning activities. The tester should become a conduit during the planning, ensuring that information is passing to the right people.
  • Ensure UAT or beta environments are ready
  • Set up performance environments
Sprint
  • Write acceptance tests based on acceptance criteria
  • Carry out automation tasks
  • Maintain build scripts and environments
  • Carry out exploratory testing
  • Pair up with developers
  • Performance test
  • Test release mechanisms
  • Write post release test plans
There are probably a lot more activities that we can include, and I would encourage folk to experiment. If you understand your objectives, you will definitely understand what activities you will need to implement.

Sunday, 30 October 2011

GTAC 2011 - Cloudy with a Chance of Test - Videos

Just got back from the Google Test Automation Conference 2011, which is probably one of the best software development conferences out there, and it's free! The focus is on test automation, but the crowd is a mixed bunch from across the broad spectrum of development.

Highlights for me:

Keynote from Alberto Savoia

Test is Dead (Don't take this literally, please!)

http://www.youtube.com/watch?v=gQclnI_8Vmg&list=PLBB2CAFDDBD7B7265

Keynote from Hugh Thompson

How hackers see bugs:

Overview of Angular and its test capabilities from Miško Hevery

All of the videos for GTAC 2011 are here: 

Sunday, 29 March 2009

Testing in a New or Transitional Agile Environment



Agile environments need many practices set up and functioning before a tester can really flourish – continuous integration, environment management, good software engineering practices, a solid development process etc. Without even these basic elements in place, the tester is left to manage an ad hoc flow of user stories, support issues, and goodwill.

Issues that seem to be common for testers in these environments:

  • Iteration planning is a quick guessing meeting. This is the most important part of any iteration as it sets up the focus and objectives for upcoming work. It is also an opportunity for the team to extract decent acceptance criteria from the product owners.
  • Test estimations reduced by product owners or developers. Just remember who the experts are here!  Don’t put yourself in a situation where you have to cram a full regression test into 3 minutes because a PO thinks that is enough time!
  • Acceptance criteria either not identified or too vague to be of any real value (See above!). Not having good acceptance criteria means that a story has no real objective, and will be too vague to test. Without the defined goal posts that acceptance criteria gives us, testers will often find themselves beaten up over failed expectations if the story doesn’t do what the PO wanted it to do. Comments like “This hasn’t been QA’d properly” or “the testers didn’t catch this” are quite common in this situation and push accountability on to the test team.
  • Stories getting to the tester too late. This usually happens when stories are ill defined and extend past the original estimation. Again, acceptance criteria will usually help focus estimations.
  • In smaller environments where developers are shared resources, there is often a stream of "under the radar" work that eventually flows into the hands of the tester. This is work being done, perhaps for the good of the business, that puts an extra burden on the team. In this type of environment, work never gets tested properly. Consider using Kanban if this is the case!
  • No supporting development processes such as continuous integration, or automated testing. This means that the tester is usually engaged in large amounts of regression testing rather than exploring new functionality. Consider adding automation tasks to stories.
  • No decent environment management system in place meaning that it’s very difficult to have consistency with test, development, and production environments. This is a must if you wish to be efficient and effective with your deployment pipeline. You will need to set aside developer, DevOps, and tester time to get this up and running. To secure this time you need to be able to sell the benefits to your management team. Reduced delivery time is always a good benefit to use in this circumstance.
  • Testers being treated as a quality gate at the end of the iteration rather than an integral part of the team. This is a cultural change that is required in the team. A strong, test "savvy" development manager or a solid QA/test director should be pushing this change. Ground-up changes are usually quite difficult. Embedded cultural changes such as this usually require strong and determined leadership.
  • No sense of quality ownership by the team. This is common in those teams with no test automation at any level, and where acceptance criteria are either weak or missing. This links in with many of the points above. The more we can infiltrate into the minds of the developers, the better! All the suggested practices above and below will help define this ownership.

What can the tester do to change this?

The tester in an agile environment needs to become a proponent of process improvement. To avoid some of the issues above, an agile tester must engage in some of the following:
  • Highlight software engineering opportunities for the team, and be proactive in providing possible solutions. A great way to do this is to start a software engineering work group that brings together proactive and innovative developers and testers on a regular basis to implement the engineering practices that can improve the efficiency and effectiveness of the teams.
  • Work to rule. This is tough, but if you have been asked to succeed in agile, then you must follow the basic work flows that have been designed. If things are not working, use the retrospectives to make changes that can be agreed on by the team.
  • Be alert! Use the tools that you have to keep track of what is being developed, supported, and released to help you get an understanding of the work output. I have had a lot of success monitoring RSS feeds and change logs from source control systems as they give me the ability to hone test analysis to a specific part of the system. You also get information on changes to parts of the system that your developers may not have mentioned to you!
  • Publish your ideal test architecture as part of the test strategy. This will allow others to see what your perspective is on what is needed to develop successfully, and may prompt them to help you out, especially if your ideas are compelling!
  • Measure your Boomerang! This is work that comes back to the development process after it is released. In ISEB circles this is known as the defect detection rate. It is one of the most useful measurements we have as it is a real indicator of how effective your quality practices are.
  • Measure whatever seems important, as it can help you push the importance of what you are doing. This is one of the most important things we can do. I once did this to get unit testing up and running in one environment. A simple weekly report on the number of tests per project provided the impetus to get developers writing tests on a regular basis.


Testing can be a thankless task, but being proactive during a transitional period will bring benefit to your team. The team will probably be going through a lot of change acceptance, and placing an integrated tester directly into the team is another difficult change to manage.

Here are a couple of great resources on agile development and lean development that could help you formulate ideas in these tricky environments.