Sunday, 23 December 2012

Browser Developer Tools


There is more to a browser than meets the eye! Not much more, but there are some great browser development tools that you should definitely pay attention to if you want to seriously test a UI manually through a browser.

I've added this to the list of things a tester should know or do, as I still see many testers taking what is basically a point-and-click approach to manual browser testing. This is fine for simple, user-scenario-based testing, but you could be missing valuable information just under the surface.

Take this simple scenario: on a login page, a user enters a correct user name but an incorrect password. As it is bad practice to tell the user exactly why they could not log in, the page displays a message stating that "either the user name or password is incorrect". This is perfect for the user, but for a hacker trying to gain entry to the system, it gives no valuable detail on which to base their next attempt.

At this point a hacker may look at the communication being sent between the user interface and any back end system. In this scenario, the user interface receives a message containing an exception indicating that the login failed, but not the reason why. However, not every developer follows good practice, and there may be an instance where this message does contain enough detail to give a hacker more ammunition for their next attempt at breaking into the system.

I have seen something very similar to the following on a popular content management system. It's a JSON object returned to the UI from a service after a failed login attempt:

{
    "exception": "LOGIN_FAIL",
    "detail": "PASSWORD_ERROR"
}

Given that this scenario is a real possibility, and applicable to many other areas of a system, a tester needs to be able to easily assess these types of vulnerability.
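
This sort of check can even be automated. Below is a minimal sketch, assuming a hypothetical login endpoint and the message format above, of an NUnit test that asserts a failed login response does not reveal which credential was wrong:

using System.Net.Http;
using System.Text;
using NUnit.Framework;

[TestFixture]
public class LoginResponseChecks
{
    [Test]
    public void FailedLoginDoesNotRevealWhichFieldWasWrong()
    {
        using (var client = new HttpClient())
        {
            // Hypothetical endpoint and payload; substitute your own login service
            var content = new StringContent(
                "{\"username\":\"knownUser\",\"password\":\"wrong\"}",
                Encoding.UTF8, "application/json");

            var response = client.PostAsync("https://example.com/api/login", content).Result;
            var body = response.Content.ReadAsStringAsync().Result;

            // The service may report that the login failed, but never why
            StringAssert.DoesNotContain("PASSWORD_ERROR", body);
            StringAssert.DoesNotContain("USER_NOT_FOUND", body);
        }
    }
}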

Most browsers have a set of developer tools built in that allow you to view the requests and responses processed by the browser. Wherever the user interface informs a user about an action that has occurred, and there is some degree of sensitivity or security attached to that message or feature, it always pays to have a look at what is going on in the background.

Don’t just stop at the requests and responses; there are a whole host of other areas that you can look at, such as the resources that are loading, the way CSS classes change, JavaScript errors, page performance, and much more.

Both Chrome and Firefox offer a decent tool set, either built in or as an additional plugin:

https://developers.google.com/chrome-developer-tools/

http://getfirebug.com/whatisfirebug

Tuesday, 11 December 2012

The Automation Pyramid

Think about using the test automation pyramid when planning your test automation strategy.

The test automation pyramid was used by Mike Cohn to describe the relative value of different types of automated tests in the context of an n-tier application. The concept is very simple: invest more time and effort in the tests lower down the pyramid than in those at the peak, as the lower tests provide the most value in terms of quick feedback and reliability, whereas those at the peak are expensive to implement, brittle, and time-consuming.

The traditional pyramid is split into three layers: unit tests at the base, integration/API tests in the middle, and UI tests forming the peak. Many now opt to describe the UI layer as the ‘end to end’ layer, as this phrase better represents those types of test.


Useful posts on the subject:

http://martinfowler.com/bliki/TestPyramid.html by Martin Fowler

http://www.mountaingoatsoftware.com/blog/the-forgotten-layer-of-the-test-automation-pyramid by Mike Cohn

Tuesday, 13 November 2012

Automated smoke tests in production

If you can, don’t be afraid to run your automated tests in production. A production environment is a place where automated tests can give real value, especially after a release. Instant feedback on the success of a change in production could be worth a lot of money to your organisation.

As a minimum, run automated smoke tests in production before and after a release: firstly to establish a baseline, and secondly to confirm that the release has broken nothing.

If you are limited by the data you can use or create during a test, then consider non-transactional tests. Any way that you can speed up the feedback loop when a change has occurred is a bonus.
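
As a rough illustration, here is a minimal sketch of how a read-only check might be tagged so that only production-safe tests are run against the live environment; the URL and category name are my own assumptions:

using System.Net;
using NUnit.Framework;

[TestFixture]
public class ProductionSmokeTests
{
    // Only read-only, non-transactional checks carry this category
    [Test, Category("ProductionSafe")]
    public void HomePageRespondsWithOk()
    {
        var request = (HttpWebRequest)WebRequest.Create("https://example.com/");
        using (var response = (HttpWebResponse)request.GetResponse())
        {
            Assert.AreEqual(HttpStatusCode.OK, response.StatusCode);
        }
    }
}

The category can then act as a filter, for example via the NUnit console runner's /include option, so the transactional bulk of the suite never touches production.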

Obviously not all systems or organisations are conducive to this sort of strategy, so it’s worth thinking about the ability to run automated tests in a live environment when designing a new system.

Monday, 22 October 2012

Testing Webservices with SpecFlow

I have been looking for a way to test multiple SOAP web services as part of a complete, integrated, end-to-end workflow that can at the same time provide valuable business documentation. The requirements are quite simple:
  • Workflows can be written using natural language
  • Multiple web services can be easily executed in sequence
  • Development time must be minimal
My immediate thought was to use a Cucumber-style test framework, and after a recommendation I started to investigate SpecFlow.

SpecFlow is a way of binding business requirements to code through specification by example in .NET. It supports both behaviour driven development (BDD) and test driven development (TDD). SpecFlow, like any other natural language test framework, can also be used as a tool to combine documentation and testing of existing code, and that is exactly what I have used it for.

Using this method for generating a call to an arbitrary web service, I can specify the specifics of the service in a Gherkin feature scenario: the contract location, the methods to be used, and what the response should be.

In the binding statements, which SpecFlow uses to map scenario steps to the logic required to execute them, I implement the actual call to the web service. There is a great example of this framework being used here, with multiple web services being called inside one feature.
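
As a minimal sketch of what such a binding class can look like (the step wording and the CustomerServiceClient wrapper are hypothetical, not taken from that example):

using NUnit.Framework;
using TechTalk.SpecFlow;

[Binding]
public class WebServiceSteps
{
    // Hypothetical wrapper around a generated service proxy
    private CustomerServiceClient _client;
    private string _response;

    [Given(@"the customer service at ""(.*)""")]
    public void GivenTheCustomerServiceAt(string endpoint)
    {
        _client = new CustomerServiceClient(endpoint);
    }

    [When(@"I request the status of customer (\d+)")]
    public void WhenIRequestTheStatusOfCustomer(int customerId)
    {
        _response = _client.GetStatus(customerId);
    }

    [Then(@"the response should be ""(.*)""")]
    public void ThenTheResponseShouldBe(string expected)
    {
        Assert.AreEqual(expected, _response);
    }
}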

This is probably not the most beautiful solution I have used to test services in a SOA environment, but it provides the ability to get accessible system knowledge into the test, and it’s extremely quick to set up.

Wednesday, 17 October 2012

Developers in test. Yes, really!

I have mainly worked in high-growth businesses, either in the form of start-ups or strategic projects in large corporations. My role typically involves promoting the use of sensible software engineering practices and software delivery patterns to help produce a product that works, and that can have frequent, low-risk change applied to it. In this type of environment, the team structure is very much organismic in nature. What this usually means is that there are very few people dedicated to, or specialising in, test activities.

However, this does not mean that testing gets completely side stepped. We can still achieve the quality objectives of the organisation without dedicated specialists. Given the right means, I have found developers in this type of environment can become some of the best testers that you will come across.

How does that work?


There are a number of ways to bring effective testing to the forefront of product engineering.

Software Engineering Practices
I always ensure that developers are equipped with decision-making power over how their work environments are structured, the tools that they use, and the delivery mechanism used to push regular updates to a product. I ensure that teams use sensible practices such as CI, zero branching, infrastructure as code, contract testing, and the like. I push continuous delivery, and actively promote lessons learned from the many great people I have worked with.

People
You need to hire engineers who take a holistic and caring approach to software development. These are the people who have built successful products from the ground up, or who have been pivotal players in very successful product teams.

Test Activities
I find that coaching teams using principles from folk like James Bach and Michael Bolton is incredibly useful in upskilling developers quickly in the art of testing. These two guys have spent their careers honing a testing approach, and are so well drilled that you will always come away from any of their writings or teachings with more than a handful of powerful testing ideas. I personally think they are great guys who should be listened to a lot more. Their pragmatic, and often dogmatic, approach is contributing to the changing face of testing.

At some point organismic structures become mechanistic. This is when professional testers are hired, then test managers, or maybe a head of QA. At this point it is always really good to have facts and figures to assess just how successful the new order is compared to your pre-existing "testerless" state.

Sunday, 16 September 2012

Digging into Compiled Code


I recently had to test a number of changes to a .NET web service which had no test automation, no regression tests, and no specification apart from the service contract and a Subversion change log. There was also no indication of when the service was last released, so I had no idea from the change log which changes were live and which required testing.

Fortunately I had access to the live binaries, which meant that I was able to decompile them using Red Gate's Reflector and drill into individual methods. This gave me the ability to cross-reference whether the changes listed in the change log were actually live or not.

It took about an hour to analyse the decompiled code, but this reduced the potential test time from approximately four days to less than one. It also gave reassurance that no untested code would be released.

A decompiler is a great tool that gives you further insight into the code you are testing. Red Gate's .NET Reflector is one of the most common for .NET, and I use it a lot. For Java there are many plugins available for most common IDEs; I'm currently playing with the "Java Decompiler Project".

Wednesday, 8 August 2012

Scrum in under 10 minutes - video

A great introduction to Scrum in under 10 minutes!  

For more information on scrum, visit www.scrum.org.

Friday, 27 July 2012

Achieving an expected level of quality with limited resource and budget

Sometimes there is just no money for testing or QA. Testers leave your team and don’t get replaced. The team dwindles, but the developer base either holds steady or grows. Your reduced team has more and more to do. The worst case scenario here is that your remaining testers become overworked, can’t do their job properly, get thoroughly demotivated, and leave; and who could blame them? You now have even less resource.

When the scenario above does happen, you will still hear the mantra of “quality is not negotiable” and, probably even more so, requests from product and company leaders to support everything and everyone.

So what is possible? How can you achieve the expected system and product quality with a limited budget?

Looking back at some of the successful projects in which I have been involved, and which were also struck by similar limited-test-resource scenarios, it is possible to identify some common characteristics that contributed to their success from both a product and quality perspective:

- a product that the customer really needs
- customer input from the start
- a team that cares about what they are building
- a product manager that knows how to ship products and that trusts in the development team
- a strong work ethic
- innovation throughout the team
- the right tools
- a simple iterative development process

Without going into the psychological aspects of building the right team and processes, I would weigh most of the above as far more important to foster or implement in a product development team than fretting too much about test and QA resource. Why? Having all of the above, for me, goes a long way to ensuring the quality of both the idea and the build of your product. Good people, processes, and tools will do far more for you than hammering your application to death, and don’t usually come out of your budget. If you don't have much of the above, then life will be difficult.

As a final comment, if you are faced with the scenario described above, you should ask yourself, and maybe the business, the following questions:

- Can we compromise the quality of the system?
- Is quality negotiable for this product?
- Will the customers accept a less than perfect solution?

If the answer is yes to any of these questions then you have the answer as to why you have no budget, and with this knowledge you can focus your test and quality efforts in a different, more effective manner.

Thursday, 26 July 2012

Kanban eBook

I blogged about Kanbanery a while back but failed to notice their really concise ebook on Kanban. Definitely worth a quick read if you are thinking of doing Kanban and need a quick introduction.

Kanban eBook

Tuesday, 24 July 2012

Integration testing of stored procedures using C# and NUnit

In a recent post I described setting up a database test framework to test an MS SQL database. The purpose of creating this framework was to ensure that in stored-procedure-heavy applications we could still get a high level of automated test coverage (which is incredibly important for any application that uses any kind of continuous delivery process). That framework didn’t include an easy way to iterate through a result set and was limited to testing just the number of results returned. (See this post for the original framework. Let me know if you need the source and I'll get it zipped up.)


I have now extended the framework to include the capability of executing a stored procedure with parameters and then iterating through the result set by mapping the output to a data model.

Model the data

First set up a model of your data:

public class CustomerBasicAccountModel
{
    public string CustomerID { get; set; }
    public string CustomerSortCode { get; set; }
    public string CustomerAccountNumber { get; set; }
    public string CustomerAffiliation { get; set; }
}

To facilitate reading the data returned from a stored procedure, I extend SqlDataReader with a Select extension method so that the reader can be consumed using LINQ. This allows me to easily manage the data once it is returned from the database.

using System;
using System.Collections.Generic;
using System.Data.SqlClient;

namespace Database.Integration.DAL
{
    static class Extensions
    {

        public static IEnumerable<T> Select<T>(this SqlDataReader reader,
                                       Func<SqlDataReader, T> projection)
        {
            while (reader.Read())
            {
                yield return projection(reader);
            }
        }
    }
}

Finally, in the test we can do something like this:

[Test]
public void TestSortCode()
{
    using (var session = _dbFactory.Create())
    {
        // Execute the stored procedure with its parameter; the extended
        // framework is assumed to run the command as a stored procedure
        var query = session.CreateQuery(@"[GetCustomerBasicAccount]");
        query.AddParameter("@customerid", 3, DbType.Int32);

        List<CustomerBasicAccountModel> customerBasicAccount;
        using (var dataReader = query.GetAllResults())
        {
            // Map each row onto the model, translating DBNull into null
            customerBasicAccount = dataReader.Select(r => new CustomerBasicAccountModel
            {
                CustomerSortCode = r["CustomerSortCode"] is DBNull
                    ? null
                    : (string)r["CustomerSortCode"]
            }).ToList();
        }

        Assert.AreEqual("278222", customerBasicAccount[0].CustomerSortCode);
    }
}

Although this is a somewhat crude way to run integration tests on a database, it really does the trick. My original idea was that developers would simply use the existing data mapping within a project and write the integration tests using that, but it has always been difficult to get them to commit to doing this. This framework is generic and can be used in any .NET project with ease.

Sunday, 17 June 2012

Unit testing databases using C# and NUnit

I have been looking at ways to regression test the data access layer of a .NET application that has a heavy reliance on stored procedures. There are tools that can help do this, and I did consider both DbFit and NDbUnit, but neither of them could satisfy my criteria.

Criteria

  • Ease of use – No time to train developers or testers on how to use a new test framework
  • Ability to use straight SQL statements
  • The tool or mechanism must integrate easily into this and other projects. I also need to provide unit testing for a data warehouse project, so something that I could use on both would be perfect
  • The generated tests must be easy to put under version control. Being able to tightly couple tests with a specific version or schema is very important, but more about that another time.
The .NET application I'm trying to test already has a data access layer that could be used to create these tests, but the implementation of this particular layer is complicated and would require the test engineers working on the project to have a high level of .NET code understanding.

Solution


The solution I came up with creates a very simple data access layer using System.Data.SqlClient and NUnit (download NUnit here). The only complexity that the tester needs to think about is the way they construct the SQL and how they write assertions.

Using standard NUnit test fixtures, I connect to a database in the test set-up, and then in the tests I execute queries and stored procedures, using simple asserts to validate the results.

Here is how it's done.

Create a standard class library project that references:

NUnit: nunit.framework
Microsoft: System.Data

I'm using a factory pattern with interfaces that allow easy creation of a database session management class which can be used throughout multiple test fixtures. The session manager has methods that create connections to a database defined in a factory, query the database, and execute stored procedures.

The test fixture:

using System;
using NUnit.Framework;
using Codedetective.Database;


namespace Codedetective.Tests
{
    [TestFixture]
    public class DbChecks
    {
        readonly IDatabaseSessionFactory _dbFactory;


        public DbChecks()
        {
            _dbFactory = DatabaseSessionFactory.Create
                (@"Database=CodeDectiveExamples;Data Source=local\test;
                    User=*******;Password=******");
        }


        [Test, Description("Identify whether TestSet 501 is created and active")]
        public void DbTest01()
        {
            using (var session = _dbFactory.Create())
            {
                var query = session.CreateQuery(
                    @"select count(*) from testset
                      where testSetId = 501 and Active = '1'");
                var result = query.GetSingleResult<int>();
                Console.WriteLine("test 1 " + ((result == 1) ? "passed" : "failed"));
                Assert.AreEqual(1, result);
            }
        }
    }
}

The database session manager:

using System.Data.SqlClient;

namespace Codedetective.Database
{
    public class DatabaseSession : IDatabaseSession
    {
        public string ConnectionString { get; set; }
        private SqlConnection _connection;
        private SqlTransaction _transaction;


        public DatabaseSession(string connectionString)
        {
            ConnectionString = connectionString;
        }


        public SqlConnection GetConnection()
        {
            if (_connection == null)
            {
                InitializeConnection();
            }


            return _connection;
        }


        public SqlTransaction GetTransaction()
        {
            if (_transaction == null)
            {
                InitializeConnection();
            }


            return _transaction;
        }


        private void InitializeConnection()
        {
            _connection = new SqlConnection(ConnectionString);
            _connection.Open();


            _transaction = _connection.BeginTransaction();
        }


        public void Dispose()
        {
            if (_transaction != null)
                _transaction.Dispose();


            if (_connection != null)
                _connection.Dispose();
        }


        public IDatabaseQuery CreateQuery(string query)
        {
            var command = GetConnection().CreateCommand();


            command.CommandText = query;
            command.Transaction = _transaction;


            return new DatabaseQuery(command);
        }


        public IDatabaseNoQuery CreateNoQuery(string insertstring)
        {
            var command = GetConnection().CreateCommand();
            command.CommandText = insertstring;
            command.Transaction = _transaction;


            return new DatabaseNoQuery(command);
        }


        public void Commit()
        {
            _transaction.Commit();
        }


        public void Rollback()
        {
            _transaction.Rollback();
        }
    }
}

The interface of the DatabaseSession class:

using System;
using System.Data.SqlClient;


namespace Codedetective.Database
{
    public interface IDatabaseSession : IDisposable
    {
        IDatabaseQuery CreateQuery(string query);
        IDatabaseNoQuery CreateNoQuery(string insertstring);


        SqlConnection GetConnection();
        SqlTransaction GetTransaction();


        void Commit();
        void Rollback();
    }
}

The factory that we use to create the database session manager:

namespace Codedetective.Database
{
    public class DatabaseSessionFactory : IDatabaseSessionFactory
    {
        public string ConnectionString { get; set; }


        public IDatabaseSession Create()
        {
            return new DatabaseSession(ConnectionString);
        }


        public static IDatabaseSessionFactory Create(string connectionString)
        {
            var sessionFactory = new DatabaseSessionFactory
            {
                ConnectionString = connectionString
            };


            return sessionFactory;
        }
    }
}

The interface for DatabaseSessionFactory:

namespace Codedetective.Database
{
    public interface IDatabaseSessionFactory
    {
        IDatabaseSession Create();
    }
}

Finally, we create the methods that will be used to execute queries and stored procedures:

using System;
using System.Collections.Generic;
using System.Data.SqlClient;


namespace Codedetective.Database
{
    public class DatabaseQuery : IDatabaseQuery
    {
        SqlCommand Command { get; set; }


        public DatabaseQuery(SqlCommand command)
        {
            Command = command;
        }


        public void AddParameter(string name, object value, System.Data.DbType dbType)
        {
            var parameter = Command.Parameters.AddWithValue(name, value);
            parameter.DbType = dbType;
        }


        public TResult GetSingleResult<TResult>()
        {
            return (TResult)Convert.ChangeType(Command.ExecuteScalar(), typeof(TResult));
        }


        public IEnumerable<TResult> GetResults<TResult>()
        {
            Type resultType = typeof(TResult);
            IList<TResult> result = new List<TResult>();


            if (resultType.FullName.StartsWith("System."))
            {
                using (var reader = Command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        var value = reader.GetValue(0);
                        result.Add((TResult)(value != DBNull.Value ? value : null));
                    }
                }
            }
            else
            {
                var properties = typeof(TResult).GetProperties();


                using (var reader = Command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        var entity = Activator.CreateInstance<TResult>();


                        foreach (var property in properties)
                        {
                            var value = reader[property.Name];
                            property.SetValue(entity, value != DBNull.Value ? value : null, null);
                        }
                        result.Add(entity);
                    }
                }
            }
            return result;
        }
    }
}

The interface for DatabaseQuery is IDatabaseQuery:

using System.Collections.Generic;
using System.Data;


namespace Codedetective.Database
{
    public interface IDatabaseQuery
    {
        void AddParameter(string name, object value, DbType dbType);


        TResult GetSingleResult<TResult>();
        IEnumerable<TResult> GetResults<TResult>();
    }
}

Now, for stored procedure execution, we create a class called DatabaseNoQuery:

using System.Data.SqlClient;


namespace Codedetective.Database
{
    public class DatabaseNoQuery : IDatabaseNoQuery
    {
        SqlCommand Command { get; set; }


        public DatabaseNoQuery(SqlCommand command)
        {
            Command = command;
        }


        public void AddParameter(string name, object value, System.Data.DbType dbType)
        {
            var parameter = Command.Parameters.AddWithValue(name, value);
            parameter.DbType = dbType;
        }


        public int ExecuteInsert()
        {
            int rows = Command.ExecuteNonQuery();
            return rows;
        }
    }
}


The interface for DatabaseNoQuery is IDatabaseNoQuery:

namespace Codedetective.Database
{
    public interface IDatabaseNoQuery
    {
        int ExecuteInsert();
        void AddParameter(string name, object value, System.Data.DbType dbType);
    }
}
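
To round this off, here is a minimal sketch of a test that executes a stored procedure through CreateNoQuery and then rolls the change back; the CreateTestSet procedure is a hypothetical example, not part of the framework:

[Test, Description("Execute a stored procedure and roll the change back")]
public void DbTest02()
{
    using (var session = _dbFactory.Create())
    {
        // Call the (hypothetical) stored procedure as a text command
        var insert = session.CreateNoQuery(@"exec CreateTestSet @name, @active");
        insert.AddParameter("@name", "smoke", System.Data.DbType.String);
        insert.AddParameter("@active", 1, System.Data.DbType.Int32);

        // Expect one affected row from the insert inside the procedure
        Assert.AreEqual(1, insert.ExecuteInsert());

        // Roll back so the test leaves no data behind
        session.Rollback();
    }
}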

This is a long way from being a tool such as DbFit, which opens up automation to even non-programmers, but it serves a purpose and does it well. The entire team can now write these tests, which can be run alongside the rest of the project tests.

Thursday, 22 March 2012

Solving system Integration configuration problems

I am doing a lot of semi-automated application configuration management at the moment. I'm working on a system that has about 30 to 40 interdependent services, web sites, and applications. With an expanding team splitting from one large team into several, the organisation needs to be able to easily replicate development, test, and UAT environments on a team-by-team basis. We could use virtualisation to replicate a master environment set consisting of database and code servers, giving each team a clone of this, but we would still need to invest a lot of time configuring endpoints and database connection strings.

Whilst trying to solve this problem I came across a really interesting tool that almost does what I think is required.


Octopus is an automated deployment tool for .NET, developed by Paul Stovell, that uses NuGet and configuration conventions to push out applications in a very effective way to multiple environments.


Octopus has a central server that pushes packages out to "tentacles" that sit on servers. The tentacles look after the deployment on the server side, meaning permission issues are never a problem.

One of the great things is the way you can store all your application or web configuration variables and easily assign different configuration sets to different environments. This is one of the key features for me.

This is pretty much what I want, except that it is heavily dependent on using NuGet packages. This is not a massive issue, and if my proof of concept goes well, I will try to convince the teams that they need to be generating NuGet packages as part of their build process. Octopus actually links into tools like TeamCity very well to do this, so it may even be possible to leverage the generated build artefacts and use them as the packages.


Friday, 16 March 2012

ThoughtWorks Tech Radar - March 2012

The latest ThoughtWorks Technology Radar is now out!

http://www.thoughtworks.com/radar
"The ThoughtWorks Technology Advisory Board is a group of senior technology leaders within ThoughtWorks. They produce the ThoughtWorks Technology Radar to help decision makers understand emerging technologies and trends that affect the market today. This group meets regularly to discuss the global technology strategy for ThoughtWorks and the technology trends that significantly impact our industry." ThoughtWorks 2012
Check out the archives for some notable trends over the past few years. Interesting reading.

Sunday, 19 February 2012

scrum.org

Whilst looking for a course that would give me some professional recognition for my Scrum knowledge, I came across scrum.org. Scrum.org was founded c.2010 by Ken Schwaber, one of the original founders of the Scrum Alliance. He resigned from the alliance in 2009 to pursue his belief that a transformation was required in the way the principles and fundamentals of Scrum were delivered to users of Scrum. There is more about that here.

Scrum.org offers a number of different training and assessment packages, with several tailored towards developers, focusing on the core engineering practices that developers and scrum masters really need to understand to be successful in a Scrum environment.

For more information on Scrum, visit scrum.org, and have a read of Henrik Kniberg’s "Scrum and XP from the Trenches".

Friday, 10 February 2012

blazemeter - JMeter cloud testing

I've been using Apache's performance test tool JMeter for many years now, but have struggled to find enough time and resource to create a decent distributed network of JMeter engines that can simulate large and realistic load. Today I stumbled upon http://blazemeter.com/, which allows just that. It's a cloud-based performance test service that gives you access to multiple instances of JMeter, geographically distributed on a variety of different server types.

To use BlazeMeter you either upload your own JMeter scripts or give BlazeMeter a URL and let it write the script for you; you then create your scenarios before finally executing your test. BlazeMeter has many great features, including some detailed analysis and test management capabilities that make it a compelling option when it comes to investing in performance testing.

BlazeMeter operates a similar business model to browsermob.com (now Neustar Web Performance), where you pay for the number of test engines, different server types, on-demand usage, etc.

There is a comprehensive free trial available that will allow you to experiment with features and tests for a couple of weeks.

For more information on JMeter visit http://jmeter.apache.org/.

Monday, 16 January 2012

Data driven tests

FileHelpers is a great .NET library that allows you to easily import data into your code from a number of different data sources.

I use it to data-drive Selenium tests, reading in all the records in a set-up method and then using the data either by iterating through it or by targeting rows for specific data sets.
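
As a minimal sketch, assuming a hypothetical login.csv data file, the record class and set-up method might look something like this:

using FileHelpers;
using NUnit.Framework;

// Hypothetical record matching the columns of login.csv
[DelimitedRecord(",")]
public class LoginRecord
{
    public string UserName;
    public string Password;
    public string ExpectedMessage;
}

[TestFixture]
public class DataDrivenLoginTests
{
    private LoginRecord[] _records;

    [SetUp]
    public void ReadTestData()
    {
        // Read every row of the CSV into strongly typed records
        var engine = new FileHelperEngine<LoginRecord>();
        _records = engine.ReadFile(@"TestData\login.csv");
    }
}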

This quick start guide is pretty much all you need to get data-driving your tests:

http://www.filehelpers.com/quick_start.html

Sunday, 15 January 2012

Cloud testing


You might not be able to get permanent test resource into your organisation, but a relatively small one-off project cost might be acceptable to your business. Several cloud-based test services have gained popularity recently, providing on-demand testing for individual or multiple projects.

Neustar Web Performance (formerly BrowserMob)

This is a performance test service that allows you to test your applications from various geographical locations using Selenium scripts.

The interface allows you a good degree of control over the test, and gives you plenty of analytics both during and after the test.

This is a great tool for those teams with a limited budget that need high performance but can’t step up and invest in dedicated performance test environments.


uTest offers a global community of testers who are paid by the test or by the number of defects that they find.

With features like bug tracking and test planning, and a seemingly endless list of possible platforms on which to test, this type of service is set to become big business during 2012.