
Monday, 14 December 2015

Test Tooling Drift - Integration Test Client Anti-Pattern

Test Tooling Drift is a test tool anti-pattern that seems to be a common occurrence on many of the teams that I work with.

In applications that integrate with external systems, clients are usually created in the code to connect to those systems. These clients usually form part of the application's code base and communicate over protocols such as HTTP, TCP, and so on.

When testing this type of application, whether writing automated checks or creating tools to facilitate testing, you may find teams (test or development) creating their own test clients to handle the testing or checking code that is run against those external systems.

An example of this could be a transaction manager that provides transactional capabilities to a payment system. The payment system will have a client that connects to the transaction manager, and there will be a contract between that application client and the transaction manager to facilitate functional or technical change. Changes to the contract will usually be handled within the team's development process; they may even use mechanisms such as consumer-driven contract tests to facilitate contract change and approval.

In this scenario it's common to see a separate test client, created purely for testing, being used to communicate with a system such as the transaction manager. Because this client is different, whenever a contract change is implemented between the application client and the transaction manager there is room for error to creep into the test client, should we not also implement that change there. The test client has the potential to shield or mask real issues occurring in the contract, such as protocol issues, network issues, schema issues, etc. This is where drift occurs. Of course, the biggest problem here is the time spent keeping the two clients in sync. We are most definitely violating the DRY principle here.
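
As a rough sketch of the problem (class names here are hypothetical, not from any real code base): the application owns one client for the transaction manager, and the test tooling quietly grows a second one that duplicates the same contract knowledge.

    // Application code: the client the production payment system actually uses.
    // All knowledge of the contract (paths, headers, payload shape) lives here.
    class TransactionManagerClient {
        String beginTransaction(String paymentId) {
            return "POST /transactions {\"paymentId\":\"" + paymentId + "\"}";
        }
    }

    // Test tooling: a second client written only for the checks. It duplicates the
    // contract knowledge above, so when the real contract changes (say, a new
    // mandatory header) this copy silently drifts unless someone updates it too.
    class TestOnlyTransactionClient {
        String beginTransaction(String paymentId) {
            return "POST /transactions {\"paymentId\":\"" + paymentId + "\"}";
        }
    }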

I've seen this anti-pattern occurring a lot on mobile application projects. Many mobile applications call an endpoint to facilitate some functionality, and they do so using a connection client built into the application code. When testing this integration, or running contract tests against the endpoint, you will see tests and checks using tools such as Runscope, SoapUI or Postman, even though neither these tools nor the clients they use to connect to endpoints sit inside your application. Whilst these tests can call the endpoint and validate certain aspects of your contract, they are not doing it in exactly the same way as your application client. Inconsistencies are most prominent in request headers, request creation and the deserialization of responses into objects to validate or use within test code.

If you want to reduce the risk of failure, you should certainly be using the client from the application to make calls to these endpoints during your testing and checking. Tools such as Runscope, Postman and SoapUI are great for investigating and understanding integrations, but they use their own means of constructing requests to your endpoints.
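
A minimal sketch of that alternative, assuming JUnit 4 and the hypothetical TransactionManagerClient from the earlier sketch: the check drives the integration through the same client class the application ships, so request construction, headers and deserialization are exercised exactly as they are in production.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class TransactionManagerContractTest {

        // The same class the application uses in production, not a copy.
        private final TransactionManagerClient client = new TransactionManagerClient();

        @Test
        public void requestIsBuiltExactlyAsProductionBuildsIt() {
            String request = client.beginTransaction("payment-123");
            assertEquals("POST /transactions {\"paymentId\":\"payment-123\"}", request);
        }
    }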

If you are an API provider you might want to make use of consumer-driven contract testing to ensure you stay aligned with your consumers. This can become untenable when you provide a mass-consumed API such as the Twitter API, at which point you have to move towards suggesting implementations and best practices for your consumers.
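
As a rough sketch of the consumer-driven idea (the endpoint and expectations below are invented for illustration): each consumer publishes an example of the request it depends on, and the provider's build replays those examples against its own service, failing if the response no longer satisfies them. Real tooling such as Pact automates the publishing and verification, but the principle is just this.

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class ConsumerContractCheck {
        public static void main(String[] args) throws Exception {
            // Expectation published by a consumer: GET /transactions/42 must return
            // 200 with a JSON body. Run by the provider against a locally deployed build.
            URL url = new URL("http://localhost:8080/transactions/42");
            HttpURLConnection connection = (HttpURLConnection) url.openConnection();
            connection.setRequestMethod("GET");
            connection.setRequestProperty("Accept", "application/json");

            int status = connection.getResponseCode();
            String contentType = connection.getHeaderField("Content-Type");
            if (status != 200 || contentType == null || !contentType.contains("application/json")) {
                throw new AssertionError("Contract broken: status=" + status + ", contentType=" + contentType);
            }
            System.out.println("Consumer expectation still satisfied");
        }
    }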

Wednesday, 23 January 2013

Test Doubles

Sometimes your system under test is made up of multiple systems. Some of those systems may not be in your control, and others may be very difficult to operate in a non-production environment. When this happens you can use a test double to mimic those particular parts of the system.

A test double is a generic term used to describe the various ways of mimicking a system, and can be classified as follows:
  • Dummy objects - objects that are passed around but never actually used. Usually they are just used to fill parameter lists.
  • Fake objects - objects that actually have working implementations, but usually take some shortcut which makes them not suitable for production (an InMemoryTestDatabase is a good example).
  • Stubs - provide canned answers to calls made during the test, usually not responding at all to anything outside what's programmed in for the test.
  • Spies - These are stubs that also record some information based on how they were called. One form of this might be an email service that records how many messages it was sent.
  • Mocks - These are pre-programmed with expectations which form a specification of the calls they are expected to receive. They can throw an exception if they receive a call they don't expect and are checked during verification to ensure they got all the calls they were expecting.
Test doubles can be used across the entire test life cycle, facilitating the delivery of code by solving tricky integration or environment problems. However, at some point it may be necessary to remove those doubles and carry out more realistic system integration tests. You should always assess the risk of using test doubles and keep at the back of your mind that these are not real systems or objects; they are just helping development move along.
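
A hand-rolled sketch of a few of these categories, built around a hypothetical EmailService dependency (the names are illustrative only; mocking libraries such as Mockito generate this kind of double for you):

    // The dependency we want to replace in tests.
    interface EmailService {
        void send(String to, String body);
    }

    // Stub: canned behaviour, just enough for the code under test to proceed.
    class StubEmailService implements EmailService {
        public void send(String to, String body) { /* deliberately does nothing */ }
    }

    // Spy: a stub that also records how it was called,
    // e.g. how many messages it was sent.
    class SpyEmailService implements EmailService {
        int messagesSent = 0;
        public void send(String to, String body) { messagesSent++; }
    }

    // Mock: pre-programmed with an expectation, fails fast on unexpected calls,
    // and is verified at the end of the test.
    class MockEmailService implements EmailService {
        private final String expectedRecipient;
        private boolean received = false;

        MockEmailService(String expectedRecipient) { this.expectedRecipient = expectedRecipient; }

        public void send(String to, String body) {
            if (!to.equals(expectedRecipient)) {
                throw new AssertionError("Unexpected call: send to " + to);
            }
            received = true;
        }

        void verifyExpectations() {
            if (!received) throw new AssertionError("Expected send() was never called");
        }
    }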

For more information read Martin Fowler's bliki entry on test doubles: http://martinfowler.com/bliki/TestDouble.html

Tuesday, 11 December 2012

The Automation Pyramid

Think about using the test automation pyramid when planning your test automation strategy.

The test automation pyramid was used by Mike Cohn to describe the relative value of different types of automated tests in the context of an n-tier application. The concept is very simple: invest more time and effort in the tests lower down the pyramid than in those at the peak, because the lower layers provide the most value in terms of quick feedback and reliability, whereas tests at the peak are expensive to implement, brittle and time-consuming.

The traditional pyramid is split into three layers: unit tests at the base, integration/API tests in the middle, and UI tests forming the peak. Many now opt to describe the UI layer as the ‘end to end’ layer, as this phrase better represents those types of test.
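
As a small illustration of the cost difference between the layers (class, page and element names below are hypothetical; JUnit 4 and Selenium WebDriver are assumed): the same rule is checked once at the unit level, in milliseconds with no environment, and once through the UI, which needs a browser and a deployed application and is far more brittle.

    import org.junit.Test;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;
    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.assertFalse;

    public class PaymentValidationTests {

        // Base of the pyramid: fast, isolated, cheap to write and maintain.
        @Test
        public void unitLevelRejectsNegativeAmounts() {
            assertFalse(new PaymentValidator().isValid(-10));
        }

        // Peak of the pyramid: the same rule checked end to end through the UI.
        @Test
        public void uiLevelRejectsNegativeAmounts() {
            WebDriver driver = new ChromeDriver();
            try {
                driver.get("http://localhost:8080/payments/new"); // assumed deployed app
                driver.findElement(By.name("amount")).sendKeys("-10");
                driver.findElement(By.id("submit")).click();
                assertEquals("Amount must be positive",
                        driver.findElement(By.id("error")).getText());
            } finally {
                driver.quit();
            }
        }
    }

    // Minimal stand-in for the production rule so the unit test compiles.
    class PaymentValidator {
        boolean isValid(double amount) { return amount > 0; }
    }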


Useful posts on the subject:

http://martinfowler.com/bliki/TestPyramid.html by Martin Fowler

http://www.mountaingoatsoftware.com/blog/the-forgotten-layer-of-the-test-automation-pyramid by Mike Cohn