Monday, 14 December 2015

Test Tooling Drift - Integration Test Client Anti-Pattern

Test Tooling Drift is a test tool anti-pattern that seems to be a common occurrence on many of the teams that I work with.

In applications that integrate with external systems, clients are usually created in the code to connect to those systems. These clients usually form part of the application's code base, and they communicate using protocols such as HTTP, TCP, etc.

When testing this type of application, whether that's writing automated checks or creating tools to facilitate testing, you may find teams (test or development) creating their own test clients to handle the testing or checking code that is run against those external systems.

An example of this could be a transaction manager that provides transactional capabilities to a payment system. The payment system will have a client that connects to this transaction manager, and there will be a contract between the application client and the transaction manager to facilitate functional or technical change. Changes to the contract will usually be handled within the development process by those working on the team. They may even use mechanisms like consumer-driven contract tests to facilitate contract change and approval.

In this scenario it's common to see a separate test client, created purely for testing, being used to communicate with a system such as the transaction manager. Because this client is different, whenever a contract change is implemented between the application client and the transaction manager, there is room for error to creep into our test client if we don't also implement that change there. The test client can shield or mask issues occurring in the contract, such as protocol issues, network issues, schema issues, etc. This is where drift occurs. Of course, the biggest problem here is the time spent keeping these clients in sync. We are most definitely violating the DRY principle.
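To make the drift concrete, here's a rough sketch in Python, assuming a hypothetical payment application whose client is built on the requests library; every class, endpoint, and header name below is invented purely for illustration:

```python
import requests

# The application's own client, updated when the contract changed to require an
# Idempotency-Key header on every payment request (a hypothetical change).
class TransactionManagerClient:
    def __init__(self, base_url, api_key):
        self.base_url = base_url
        self.api_key = api_key

    def create_payment(self, payment, idempotency_key):
        return requests.post(
            f"{self.base_url}/payments",
            json=payment,
            headers={
                "Authorization": f"Bearer {self.api_key}",
                "Idempotency-Key": idempotency_key,  # added with the contract change
            },
        )

# A separate client living in the test code base. Nobody updated it when the
# contract changed, so it still sends the old request shape - checks that use it
# keep passing against whatever it was built to talk to, while the real
# integration has quietly moved on.
class TestTransactionManagerClient:
    def __init__(self, base_url, api_key):
        self.base_url = base_url
        self.api_key = api_key

    def create_payment(self, payment):
        return requests.post(
            f"{self.base_url}/payments",
            json=payment,
            headers={"Authorization": f"Bearer {self.api_key}"},  # drifted: no Idempotency-Key
        )
```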

I've seen this anti-pattern a lot on mobile application projects. Many mobile applications call an endpoint to facilitate some functionality, and that call is made using a connection client built into the application code. When testing this integration, or running contract tests against this endpoint, you will see tests and checks using tools such as Runscope, SoapUI, or Postman, even though neither these tools nor the clients they use to connect to endpoints sit inside your application. Whilst these tests can call the endpoint and validate certain aspects of your contract, they are not doing it in exactly the same way as your application client. Inconsistencies are most prominent in request headers, request creation, and the deserialization of responses into the objects that test code validates or uses.

If you want to reduce the risk of failure, you should be using the client from the application itself to make calls to these endpoints during your testing and checking. Tools such as Runscope, Postman, and SoapUI are great for investigating and understanding integrations, but they construct requests to your endpoints in their own way.
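As a rough sketch of what that can look like, here is a check that imports and exercises the hypothetical TransactionManagerClient from the earlier example (the module path, environment variables, and response fields are all assumptions), so the check goes through exactly the same request construction and deserialization as the application:

```python
import os
import pytest

# Reuse the application's own client (hypothetical class from the sketch above),
# so the check exercises exactly the same headers, serialization, and protocol
# handling that production traffic does.
from payments.transaction_manager import TransactionManagerClient  # hypothetical module path


@pytest.fixture
def client():
    return TransactionManagerClient(
        base_url=os.environ.get("TM_BASE_URL", "https://tm.test.example.com"),
        api_key=os.environ.get("TM_API_KEY", "test-key"),
    )


def test_create_payment_contract(client):
    response = client.create_payment(
        payment={"amount": 1000, "currency": "GBP"},
        idempotency_key="check-0001",
    )
    assert response.status_code == 201
    body = response.json()
    assert "payment_id" in body  # assumed response field, for illustration
```

Because the check depends on the same client class the application ships with, a contract change either breaks both or neither, rather than silently diverging.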

If you are an API provider, you might want to make use of consumer-driven contract testing to ensure you stay aligned with your consumers, though this can become untenable when you are providing a mass-consumed API such as the Twitter API, at which point you have to move towards suggesting implementations and best practices for consumers.
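In practice you would normally reach for a dedicated tool such as Pact for this, but the underlying idea is small enough to sketch by hand. Below, the consumer side publishes the interaction it relies on as a plain contract file, and the provider side replays that file against its own API; all file names, fields, and endpoints here are invented for illustration:

```python
import json
import requests

# --- Consumer side: describe the interaction the consuming client relies on ---
contract = {
    "description": "create a payment",
    "request": {
        "method": "POST",
        "path": "/payments",
        "headers": ["Authorization", "Idempotency-Key"],
    },
    "response": {
        "status": 201,
        "required_fields": ["payment_id", "status"],
    },
}

with open("payment_contract.json", "w") as f:
    json.dump(contract, f, indent=2)

# --- Provider side: replay the published contract against the real API ---
def verify_contract(base_url, contract_path="payment_contract.json"):
    with open(contract_path) as f:
        c = json.load(f)
    response = requests.request(
        c["request"]["method"],
        base_url + c["request"]["path"],
        json={"amount": 1000, "currency": "GBP"},  # representative payload
        headers={"Authorization": "Bearer test", "Idempotency-Key": "contract-0001"},
    )
    assert response.status_code == c["response"]["status"]
    body = response.json()
    for field in c["response"]["required_fields"]:
        assert field in body, f"provider no longer returns '{field}'"
```

Tools like Pact formalise exactly this exchange, with the contract file acting as the agreement that both sides verify independently.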

Monday, 9 November 2015

What really is Continuous Testing?

Continuous Testing. You'll keep hearing this term more and more. Don't be alarmed! It's just a term being used by some to describe practices that have been with us for many years, and a term used by many to cope with the fact that testing can happen at the same time as iterative development. Yes, it's true!

The term is being used to describe the context under which activities such as automated unit, integration, and performance tests are executed, with that context being the release of frequent small batches of code to production with minimal human interaction. It refers to the process of assessing risk on a continual basis in high-frequency delivery environments.

It describes the inclusion of automated checks (or tests) in your development workflow, and a 'shift left' in terms of where testing happens within that workflow. The human aspect of continuous testing refers to the increasing validation of business requirements before they hit development teams, through prototyping and business value analysis, and an increased focus on the amount of in-team testing and crowd testing that occurs.

Most will recognise these activities to be an integral part of any lean, agile, XP, continuous deployment or continuous delivery workflow or pipeline.

Wikipedia describes continuous testing as:

"...the process of executing automated tests as part of the software delivery pipeline to obtain immediate feedback on the business risks associated with a software release candidate."

I'm not a big fan of this term to describe these activities, as it conjures up the notion that continuous testing is a separate set of activities and not integral to a successful development workflow or pipeline. It once again drives a wedge between the concepts of testing and development, which need to go hand in hand. A wedge that troubled us so much in the waterfall years, and a wedge that has persisted throughout the general adoption of agile practices.

Do we really need the term continuous testing to group together activities, processes, mechanisms, etc. that are already happily described in detail by those championing devops, continuous delivery, continuous deployment, xp, etc.? What value is this terminology bringing to the software delivery table? I just find it confusing.

I would hazard a guess that the term is being used in organisations where testing is sequential and costly, not yet integral to the development workflow, and where little or no test automation exists to gain traction with difficult-to-implement activities. Conceptually, I can see how the term can help those in waterfall, or pseudo-agile, environments to begin embracing a shift left in their testing process.

Whilst more mature developers and teams won't be impacted too much by the term, it's the consultancies, the large archaic organisations, and the misinformed software professionals that worry me. Already terms such as devops, test automation, etc. have been hijacked by many to construct new functional silos within the development ecosystem rather than being embraced as cultural change. The very same could happen with continuous testing. Look out soon for job ads with the title "wanted - head of continuous testing"!

When you read some of the writing and blog posts on continuous testing, you will notice that some have attached roles to continuous testing, as if it was now someone's responsibility. The holistic value that continuous testing attempts to describe gets blown away once it becomes someone's responsibility.

Whilst I completely agree with and promote most of the practices I see being written about, I can't help but think that grouping things like automated unit or performance testing, or monitoring, under the banner of continuous testing could encourage the removal of the responsibility for quality from the entire team, handing it over to a separate group and taking many organisations right back in time. Please don't let there be a Head of Continuous Testing appearing in your organisation!

Some Continuous Testing sources

https://www.stickyminds.com/interview/putting-quality-first-through-continuous-testing-starwest-2015-interview-adam-auerbach
https://blog.parasoft.com/continuous-testing-devops-infographic
https://www.soasta.com/webinars/continuous-testing-in-devops/


Tuesday, 25 August 2015

What works for their project, won't necessarily work for yours!

Sometimes it's great to seek inspiration for your development practices from companies or speakers that are seen as super successful. They may have blogged their practices, or delivered enthusiastic talks at development conferences all over the world. Whilst insight from these people and companies is valuable, we have to remember that they have designed their particular processes, practices, or mechanisms around very specific needs.

I write this because more and more I see companies that I work with and speak to suffering because someone has, for whatever reason, sought to solve a problem using a solution based entirely on another company's needs. The result has been costly in many of the cases that I have seen, and these include branching strategies that have caused severe project delays and complexity, architecture that has devalued a company, and test automation approaches that have crippled budgets and delayed projects.

A solution should be the result of thinking through your needs or problem domain, and providing what you deem to be the most apt way of meeting those needs or solving that problem.

Don't take the cheap quick route out, unless you are solving the same problem!

Use others' solutions as ideas, or starting points for thinking, but always keep your own needs or problem domain at the forefront.

Wednesday, 18 March 2015

Small batch sizes - how small is small?

Many agile, continuous delivery and lean practitioners advocate only pushing small batches through your delivery process at any one time. The benefits of doing so have been described in detail many times, with Eric Ries giving a great overview.

Essentially the benefits are:
  • Faster feedback 
  • Problems are instantly localised 
  • Reduced risk 
  • Reduced overhead 

How small is small?

One thing that I don’t often see articulated in too much detail is what a small batch size actually looks like or means.

It's Relative..


That’s right, it’s really relative to your current state. Small batch sizing is about continually optimising your manageable workload until you get to the point where the benefits described are realised.

A small batch could be any of the following:
  • Enough lines of code to enable a particular method call 
  • A feature branch 
  • A service 
  • An individual component 
  • A change or changes that take days, or a similarly short amount of time, to produce 
The important thing to remember is that it really is relative to what you may have been or are doing at a given point in time. If you are pushing thousands of lines to production during a release, and then you gradually reduce that to hundreds, then your batch size is relatively small, and you have probably received many of the benefits associated with small batch sizes. If any of the above are giving you some or all of the benefits mentioned then you are probably pushing relatively small batch sizes through your system.

As a rule of thumb, small batch sizes are not typically associated with:
  • Making changes to multiple parts of a system 
  • Delivering big feature sets that typically move you to a major version release 
  • Delivering multiple components 
  • Multiple changes to a database 
  • Changes that take many iterations 
Although your customer might not be ready for, or have a need to receive, smaller incremental updates, that does not stop you from delivering in that way to a pre-production platform or similar.

Tuesday, 17 March 2015

Continuous Delivery - Madrid 26 Feb 2015

Here are the slides to accompany the talk I did on continuous delivery and testing in February at AfterTest in Madrid. 


Many thanks to expoQA and Graham Moran for inviting me along to talk, and to Miguel Angel Nicolao at panel.es for an excellent write-up. Thanks to everyone who attended, I hope it was useful. There was certainly some interesting debate after the talk!

The next AfterTest is in Barcelona on 26/3/2015 with Javier Pello.

Thursday, 11 September 2014

Do you really need a test automation framework?

Virtually every project I have worked on has, at some time in its life, had a close encounter with a well-meaning, all-things-to-all-men test automation framework that has caused more harm than good, and cost a fortune to build in the process.

Maybe it's due to the ubiquitous SDET (Software Development Engineer in Test), or a well-meaning tester with no programming background, or a very demanding test manager, or just a plain old developer who should know better, but the continuing prevalence of massive test automation frameworks is slightly worrying. I both frequently hear about them, and very often see them. If they appear on my projects, they usually die a swift death.

So what's the problem? 


The premise is that there is no way to execute or optimise automated testing apart from executing vast swathes of system tests from, usually, the highest interface into a system using a “special” test framework. If this is your only way to provide any kind of code coverage, or guarantee of safe delivery, then it points to a poor delivery process or a badly engineered system that is no doubt flaky and incredibly difficult to change and maintain. 

These frameworks are typically maintained by a tester, test team, or a specialist bunch of mercenary developers. 

You will normally find the tests executed by these frameworks to be long-running, high-maintenance, and vague in what they actually do. In many cases they will be driven through a UI, which typically means something like WebDriver, wrapped in an obviously completely necessary layer of abstraction, and maybe some fruity BDD framework to describe what the tests are doing.

Being system tests typically means that the tests are run in an integrated full-stack environment. This in itself, regardless of test design, is a complexity that most would wish to avoid. Data management, permissions, versioning, infrastructure availability, etc. all come into play here.

They very often sit in a separate test project that is completely decoupled from the code they are testing, meaning that version synchronisation issues at the feature level become a real problem. They are also hardly ever created by the people that need the feedback most from these tests - the developers.

The cost of both building and maintaining test frameworks for any enterprise sized solution can become astronomical in comparison to the actual value and risk reduction they deliver.

What I typically see is a very high test execution failure rate that can’t be attributed to actual code changes. I have seen some fairly large enterprise projects where the test run failure rate was between 69% and 91%, with those failures not tied to a code or configuration change. That is quite shocking. Equally, I have seen failure rates lower than 10%, but that does seem rare. If you couple the failure rate with the typical cost of building a test framework for any reasonably complex system, then the value becomes quite clear. Just multiply the day rate of all those involved in building the framework by the number of days it takes to build, do the same for the ongoing maintenance cost, deduct all of that from your expected profit, and work out whether that cost is justified.
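As a back-of-the-envelope illustration of that sum, with every figure below invented purely for the sake of the example:

```python
# All figures are hypothetical, purely to illustrate the back-of-the-envelope sum.
day_rate = 450.0                 # average day rate of each person building the framework
builders = 3                     # people involved in the initial build
build_days = 60                  # working days spent building it
maintenance_days_per_year = 40   # ongoing upkeep, in person-days per year
years_in_service = 2             # rough lifespan before it gets rebuilt

build_cost = day_rate * builders * build_days
maintenance_cost = day_rate * maintenance_days_per_year * years_in_service
total_cost = build_cost + maintenance_cost

print(f"Build cost:       £{build_cost:,.0f}")        # £81,000
print(f"Maintenance cost: £{maintenance_cost:,.0f}")  # £36,000
print(f"Total cost:       £{total_cost:,.0f}")        # £117,000
# Deduct that from the expected profit of the project and ask whether the
# risk reduction the framework actually delivers justifies the spend.
```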

The other cost-related issue lies with the constant rebuilding of test frameworks. As test frameworks are not typically used in production, their change management becomes less rigid. I'm not sure what the average lifespan of a test framework is, but based on experience I would hazard a guess at less than two years.

Essentially, these test frameworks are one of the symptoms of a badly planned test/design approach applied to your system.  

and the Cause?


So do you really need to build this all-singing, all-dancing test automation framework? There are many causes and reasons why test frameworks appear.

Confidence
Test managers, testers, release managers, project managers, etc. may have no understanding of how developer-driven tests may be helping to reduce the risk of failure when adding and changing features. This encourages regression test phases, release test phases, and pre-prod test phases, all usually heavily laden with time-draining manual and automated test framework antics. (Steve Smith goes into the detail of release testing, and how dubious that activity is, and that's without even talking about test frameworks!)

Inexperience
Engineers who have limited experience of commercial development, or engineers who should know better, choose to knock out something that works over something that works and is maintainable. Very often you see the “responsibility of quality” foisted upon testers or QA engineers within a team or organisation. These individuals often don't have a programming background and will create, with the best intentions, a safety net in the form of a test framework that will typically be riddled with design anti-patterns and difficult-to-maintain code. Again, this encourages the inclusion of multiple test phases, due to the lack of confidence caused by the absence of valuable and consistent testing feedback.

Cost
Project and test managers who believe that the tester or QA headcount can be reduced through the use of automation. This leads to attempts to automate the types of tests that a tester may execute. In reality, the things that get automated are just simple checks and not the complex interactive tests that a human being can execute. This in turn either leads to increased project costs, due to maintaining both manual and automation testers, or leads to poorer quality code being delivered due to the reduced amount of interactive testing that takes place. Either way, there is a hit on the money you have to spend. 

Legacy Systems
We've probably all worked on a legacy system with no tests and no documentation. If we need to make changes to this system, or refactor it into something more manageable, then we do need some kind of safety net. This can rear its head in the form of large numbers of system tests being run by a test automation framework.

What's the solution?

A decent test approach during the entire life cycle of your system, from concept to the end of its life. 

The decent approach would typically include a combination of collaboration tests (isolated tests that use test doubles) and contract tests, coupled with those few tests that give you the warm fuzzy feeling that your system is hanging together nicely. See this great video from J.B. Rainsberger ("Integrated Tests Are A Scam") about how to design your tests to optimise not only how the system is tested, but also how it is designed.
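To give a flavour of the collaboration test side of that combination, here is a minimal sketch using Python's unittest.mock; the PaymentService and TransactionManagerClient names are hypothetical, carried over from the earlier examples:

```python
from unittest.mock import Mock

# Hypothetical unit under test: collaborates with a transaction manager client,
# but never touches the network in this test.
class PaymentService:
    def __init__(self, tm_client):
        self.tm_client = tm_client

    def pay(self, amount, currency, idempotency_key):
        response = self.tm_client.create_payment(
            payment={"amount": amount, "currency": currency},
            idempotency_key=idempotency_key,
        )
        return response.status_code == 201


def test_pay_reports_success_when_transaction_manager_accepts_payment():
    # Collaboration test: the real client is replaced with a test double, so the
    # test is fast, isolated, and pins down how the service talks to its
    # collaborator. A matching contract test (against the real transaction
    # manager) then proves the double's assumptions still hold.
    tm_client = Mock()
    tm_client.create_payment.return_value = Mock(status_code=201)

    service = PaymentService(tm_client)

    assert service.pay(1000, "GBP", "order-42") is True
    tm_client.create_payment.assert_called_once_with(
        payment={"amount": 1000, "currency": "GBP"},
        idempotency_key="order-42",
    )
```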

If you join this all up with the testing executed by an expert interactive tester, then you will never have the need for a test framework ever again! Honest!

Of course, that's a very idealistic and simplistic way to put it, but that is the bare bones of it. I think that some of the bigger problems really do come from those legacy systems with no tests, the ones that you need to refactor but have no safety net for. The instant reaction is to add a mountain of system tests driven by a framework to give you the sense that the risk of breakage is covered. In my mind, it's probably more cost-effective, and safer, to have some good testers and a small number of warm and fuzzy smoke tests to help you through the hump of making a change, where each change includes some collaboration and possibly contract tests in the work required. Over time, especially during a refactor, the test approach will modernise and provide more built-in safety.

Finally..


I have spent a number of years during my career either building or contributing to test automation frameworks. My experience is generally negative (hence this post). However, there is a test framework that I built in 2006 still running a couple of hundred system tests every single day on an insurance policy management system. The cause of its existence there was, and still is, confidence, not in the software itself, but in the infrastructure on which it is hosted. It's still there, still going strong. I've not done the sums yet to work out the value, but I'm sure either the company using it, or the consulting company maintaining it, has a lot to thank me for. Value is in the eye of the beholder, after all!



Wednesday, 10 September 2014

Regression testing

What is regression testing? Why do it? What value does it have? My short answer to these questions is that regression testing, as a concept, is about searching for new faults introduced through changes to the code base, and about learning more about the system you are building.

Usually in modern development, through an automated build pipeline, you are constantly executing checks that help detect whether the changes made to your code have introduced new faults. Couple this with the interactive testing that we do throughout the development of any story, and we have a combined effort that you could describe as regression testing.

In my mind, that's pretty much it. Its main purpose is to find faults, and to learn more about the system.

Many people become confused by the term regression testing, and rightly question its worth when they judge either automated-only solutions, typically through a UI or API, or approaches that treat regression testing as a phase occurring at some point on the critical path to delivery. Both of these approaches fall into the realm of delivery anti-patterns.

If you think of your entire test and development approach as a way to minimise the risk of regression and to deliver the right thing in the most effective and safe way, you very rarely have to think about or understand the need for regression testing.

That's my rather simplistic view. For more detail, have a read of Michael Bolton’s presentation "Things Could Get Worse: Ideas About Regression Testing".