Test Tooling Drift is a test tooling anti-pattern that I see occurring on many of the teams I work with.
In applications that integrate with external systems, clients are usually created in the code to connect to those systems. These clients typically form part of the application's code base and communicate using protocols such as HTTP, TCP and so on.
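To make that concrete, here is a minimal sketch of such an application-owned client. The class name, base URL and endpoint are invented for illustration; the point is that this code ships with the application itself.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical application-owned client for an external system.
// In a real code base this class lives in the application's production code.
public class ExternalServiceClient {

    private final HttpClient http = HttpClient.newHttpClient();
    private final String baseUrl;

    public ExternalServiceClient(String baseUrl) {
        this.baseUrl = baseUrl;
    }

    public String getResource(String id) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(baseUrl + "/resources/" + id))
                .header("Accept", "application/json")
                .GET()
                .build();
        HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
        return response.body();
    }
}
```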
When testing this type of application, whether that's writing automated checks or creating tools to facilitate testing, you may find teams (test or development) creating their own test clients to handle some of the testing or checking code that is run against those external systems.
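The anti-pattern is a second, hand-rolled client that lives only in the test code and rebuilds the same request logic. A hypothetical example, mirroring the client sketched above:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical test-only client, written separately from the application client.
// It rebuilds the same URL, headers and parsing by hand, which is where the
// duplication (and later the drift) comes from.
public class ExternalServiceTestClient {

    private final HttpClient http = HttpClient.newHttpClient();
    private final String baseUrl;

    public ExternalServiceTestClient(String baseUrl) {
        this.baseUrl = baseUrl;
    }

    public String fetchResourceForChecks(String id) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(baseUrl + "/resources/" + id))
                .header("Accept", "application/json") // duplicated from the application client
                .GET()
                .build();
        return http.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }
}
```

The two classes start out almost identical, which is exactly why they are so easy to let slip apart.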
An example of this could be a transaction manager that provides transactional capabilities to a payment system. The payment system will have a client that connects to this transaction manager, and there will be a contract between the application client and the transaction manager to facilitate functional or technical change. Changes to the contract will usually be handled within the team's development process. They may even use mechanisms like consumer-driven contract tests to facilitate contract change and approval.
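As a rough sketch of what such a consumer-driven contract test could look like, here is a hypothetical Pact JVM (JUnit 5) consumer test between the payment system and the transaction manager. The provider and consumer names, the endpoint, the payload and the stand-in client are all assumptions for illustration, and the exact annotations and package names vary between Pact versions.

```java
import au.com.dius.pact.consumer.MockServer;
import au.com.dius.pact.consumer.dsl.PactDslWithProvider;
import au.com.dius.pact.consumer.junit5.PactConsumerTestExt;
import au.com.dius.pact.consumer.junit5.PactTestFor;
import au.com.dius.pact.core.model.RequestResponsePact;
import au.com.dius.pact.core.model.annotations.Pact;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;

import static org.junit.jupiter.api.Assertions.assertEquals;

@ExtendWith(PactConsumerTestExt.class)
@PactTestFor(providerName = "transaction-manager")
class TransactionManagerContractTest {

    // The consumer's expectation of the provider, recorded as a pact.
    @Pact(provider = "transaction-manager", consumer = "payment-system")
    RequestResponsePact transactionStatus(PactDslWithProvider builder) {
        return builder
                .given("transaction 123 exists")
                .uponReceiving("a request for the status of transaction 123")
                    .path("/transactions/123")
                    .method("GET")
                .willRespondWith()
                    .status(200)
                    .body("{\"status\": \"SETTLED\"}")
                .toPact();
    }

    @Test
    @PactTestFor(pactMethod = "transactionStatus")
    void statusCanBeReadThroughTheApplicationClient(MockServer mockServer) throws Exception {
        // Drive the application's own client at Pact's mock provider, so the
        // code path that talks to the real transaction manager is what gets verified.
        TransactionManagerClient client = new TransactionManagerClient(mockServer.getUrl());
        assertEquals("SETTLED", client.getStatus("123"));
    }

    // Minimal stand-in for the application's own client (hypothetical).
    static class TransactionManagerClient {
        private final String baseUrl;
        TransactionManagerClient(String baseUrl) { this.baseUrl = baseUrl; }
        String getStatus(String id) throws Exception {
            var request = java.net.http.HttpRequest.newBuilder(
                            java.net.URI.create(baseUrl + "/transactions/" + id))
                    .header("Accept", "application/json")
                    .GET().build();
            var body = java.net.http.HttpClient.newHttpClient()
                    .send(request, java.net.http.HttpResponse.BodyHandlers.ofString()).body();
            // Naive extraction just for the sketch; the real client would deserialize properly.
            return body.replaceAll(".*\"status\"\\s*:\\s*\"([^\"]+)\".*", "$1");
        }
    }
}
```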
In this scenario it's common to see a separate test client, created purely for testing, being used to communicate with a system such as the transaction manager. Because this client is different, if a contract change is implemented between the application's client and the transaction manager, there is room for error to creep into our test client should we not also implement that contract change. The test client then has the potential to shield or mask issues occurring in the contract, such as protocol issues, network issues, schema issues and so on. This is where the drift occurs. Of course, the biggest problem is the time spent having to keep these clients in sync. We are most definitely violating the DRY principle.
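To illustrate the drift with an invented contract change: suppose the transaction manager starts requiring an API version header. The application client gets updated; the separately maintained test client does not, so checks built on it keep passing while quietly exercising the old contract.

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Hypothetical illustration of drift after a contract change: the transaction
// manager now requires an X-Api-Version header (assumed for this sketch).
public class DriftExample {

    // The application client was updated alongside the contract change.
    static HttpRequest applicationRequest(String baseUrl, String id) {
        return HttpRequest.newBuilder(URI.create(baseUrl + "/transactions/" + id))
                .header("Accept", "application/json")
                .header("X-Api-Version", "2") // added when the contract changed
                .GET()
                .build();
    }

    // The separately maintained test client was forgotten, so it still builds
    // the old request and no longer reflects what the application really sends.
    static HttpRequest testClientRequest(String baseUrl, String id) {
        return HttpRequest.newBuilder(URI.create(baseUrl + "/transactions/" + id))
                .header("Accept", "application/json")
                .GET()
                .build();
    }
}
```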
I've seen this anti-pattern occurring a lot on mobile application projects. Many mobile applications will call an endpoint to facilitate some functionality, and that will be done using a connection client built into the application code. When testing this integration, or running contract tests against this endpoint, you will see tests and checks using tools such as Runscope, SoapUI or Postman, even though none of these tools, nor the clients they use to connect to endpoints, sit inside your application. Whilst these tests can call the endpoint and validate certain aspects of your contract, they are not doing it in exactly the same way as your application client. Inconsistencies are most prominent in request headers, request creation and the deserialization of responses into objects to validate or use within test code.
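A small, hypothetical illustration of why the deserialization point matters: the application maps responses into its own typed model, so a contract change shows up there, while a loose assertion of the kind you might write in one of these tools can keep passing. The field names and payload below are invented.

```java
import com.fasterxml.jackson.databind.ObjectMapper;

// Hypothetical response model owned by the application; field names are invented.
class TransactionResult {
    public String transactionId;
    public String status;
}

public class DeserializationDriftExample {

    public static void main(String[] args) throws Exception {
        // Imagine the provider renames "status" to "state" in a contract change.
        String payload = "{\"transactionId\": \"123\", \"state\": \"SETTLED\"}";

        // A loose check in an external tool, along the lines of
        // "the response mentions a transactionId", still passes.
        System.out.println("Loose check passes: " + payload.contains("\"transactionId\""));

        // The application's own mapping notices immediately: with Jackson's default
        // settings the unknown "state" property fails deserialization.
        TransactionResult result = new ObjectMapper().readValue(payload, TransactionResult.class);
        System.out.println(result.status); // not reached; readValue throws above
    }
}
```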
If you want to reduce the risk of failure, you should certainly be using the client from the application to make calls to these endpoints during your testing and checking. Tools such as Runscope, Postman and SoapUI are great for investigating and understanding integrations, but they use their own way of constructing requests to your endpoints.
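In practice that can be as simple as an automated check that imports the application's client directly, here reusing the hypothetical ExternalServiceClient sketched earlier.

```java
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertNotNull;

// Hypothetical integration check that reuses the application's own client
// instead of a separately written test client.
class ExternalServiceIntegrationTest {

    @Test
    void resourceCanBeFetchedThroughTheProductionClient() throws Exception {
        // The base URL would normally come from test configuration.
        ExternalServiceClient client = new ExternalServiceClient("https://staging.example.com");

        // Any header, serialization or protocol change made to the production
        // client is automatically exercised here, so there is nothing to drift.
        String body = client.getResource("123");
        assertNotNull(body);
    }
}
```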
If you are an API provider, you might want to make use of consumer-driven contract testing to ensure you stay aligned with your consumers. This can become untenable when you are providing a mass-consumed API such as the Twitter API, at which point you have to move towards suggesting implementations and best practices for consumers.