Thursday, 6 June 2013

RevealJS - HTML Presentation Made Easy

If you are looking for an alternative presentation authoring tool, then have a look at reveal.js.

reveal.js is a framework for easily creating beautiful presentations using HTML. You'll need a browser with support for CSS 3D transforms to see it in its full glory.

I've used it a couple of times now in several workshops and the crowd loved it. A certain degree of HTML knowledge is required to use it, but the results are worth it. Check out their site to see a presentation in action.

http://lab.hakim.se/reveal-js/#/

One of the things I love about reveal.js is the ability to broadcast presentations and update any viewer's device with the page you are currently presenting (multiplexing). It uses a Socket.io server to broadcast events from the master to the clients. See a demo at http://revealjs.jit.su/.



Sunday, 5 May 2013

GTAC 2013 - Videos now up

I wasn't lucky enough to get an invite to this year's GTAC, but the videos look good.

See the conference here:

http://www.youtube.com/playlist?list=PLSIUOFhnxEiCODb8XQB-RUQ0RGNZ2yW7d

Friday, 15 February 2013

Versioning Test Code

In some development environments it is quite common to see test code, especially UI test code, managed in a version control system completely separately from the application code it is testing. In this anti-pattern, tests are developed and maintained against the latest application code only. The danger is that when you need to fix issues in a previous release you have no automated tests to cover regression testing; the test code base has moved on and can only be executed against the code currently being developed.

So whenever you make a release branch, make sure you branch your test code alongside it. 

It’s just common sense: versioning your test code alongside application code and running tests frequently allows regressions to be picked up quickly.

Wednesday, 23 January 2013

Test Doubles

Sometimes your system under test is made up of multiple systems. Some of those systems may not be in your control, and others may be very difficult to operate in a non-production environment. When this happens you can use a test double to mimic those particular parts of the system.

A test double is a generic term used to describe the various ways of mimicking a system, and can be classified as follows:
  • Dummy objects - objects that are passed around but never actually used. Usually they are just used to fill parameter lists.
  • Fake objects - objects that actually have working implementations, but usually take some shortcut which makes them not suitable for production (an InMemoryTestDatabase is a good example).
  • Stubs - provide canned answers to calls made during the test, usually not responding at all to anything outside what's programmed in for the test.
  • Spies - These are stubs that also record some information based on how they were called. One form of this might be an email service that records how many messages it was sent.
  • Mocks - These are pre-programmed with expectations which form a specification of the calls they are expected to receive. They can throw an exception if they receive a call they don't expect and are checked during verification to ensure they got all the calls they were expecting.
Test doubles can be used across the entire test life cycle, facilitating the delivery of code by solving tricky integration or environment problems. However, at some point it may be necessary to remove those doubles and carry out more realistic system integration tests. You should always assess the risk of using test doubles and keep at the back of your mind that these are not real systems or objects, and are just helping development move along.
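As a rough Python sketch of three of the categories above (the `EmailService` and weather-service names are invented for illustration), a stub returns canned answers, a spy records its calls, and a mock carries expectations that are verified afterwards:

```python
import unittest.mock as mock

# Stub: provides a canned answer, nothing more.
class StubWeatherService:
    def current_temp(self, city):
        return 21  # same canned answer for any city

# Spy: a stub that also records how it was called.
class SpyEmailService:
    def __init__(self):
        self.sent = []  # records every message passed in

    def send(self, to, body):
        self.sent.append((to, body))

# Code under test: notifies a user via whatever email service it is given.
def notify_user(email_service):
    email_service.send("user@example.com", "hello")

# Mock: pre-programmed expectations, verified after the exercise phase.
mock_email = mock.Mock()
notify_user(mock_email)
mock_email.send.assert_called_once_with("user@example.com", "hello")

# Spy: exercise, then inspect what was recorded.
spy = SpyEmailService()
notify_user(spy)
print(len(spy.sent))  # 1
```

Note the difference in emphasis: the spy lets the test inspect what happened afterwards, while the mock fails the test itself if its expectations are not met.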

For more information read Martin Fowler's bliki entry on test doubles: http://martinfowler.com/bliki/TestDouble.html

Sunday, 23 December 2012

Browser Developer Tools


There is more to a browser than meets the eye! Not much more, but there are some great browser developer tools that you should definitely pay attention to if you want to seriously test a UI manually through a browser.

I've added this to the list of things a tester should know or do, as I still see many testers taking what is basically a point-and-click approach to manual browser testing. This is fine for simple user-scenario-based testing, but you could be missing valuable information just under the surface.

Take this simple scenario: on a login page, a user enters a correct user name but an incorrect password. As it is bad practice to tell the user the exact reason why they could not log in, the page displays a message stating "either the user name or password is incorrect". This is perfect for the user; for a hacker trying to gain entry to the system, it gives no valuable detail on which to base their next attempt.

At this point a hacker may look at the communication between the user interface and any back-end system. In this scenario, the user interface receives a message containing an exception indicating that the login failed, but not the reason why. However, not every developer follows good practice, and there may be an instance where this message contains enough detail to give a hacker more ammunition for their next attempt at breaking into the system.

I have seen something very similar to the following on a popular content management system; it’s a JSON object returned to the UI from a service after a failed login attempt:

{
  "exception": "LOGIN_FAIL",
  "detail": "PASSWORD_ERROR"
}

Given that this scenario is a real possibility, and applicable to many other areas of a system, a tester needs to be able to easily assess these types of vulnerability.

Most browsers have a set of development tools built in that allow you to view the requests and responses that are processed by a browser. In any instance where you are informing a user of an action that has occurred through the user interface, and there is some degree of sensitivity or security related to that message or feature, then it always pays to have a look at what is going on in the background.

Don’t just stop at the requests and responses; there is a whole host of other areas you can look at, such as the resources being loaded, the way CSS classes change, JavaScript errors, page performance, and much more.

Both Chrome and Firefox offer a decent tool set, either built in or as an additional plugin:

https://developers.google.com/chrome-developer-tools/

http://getfirebug.com/whatisfirebug

Tuesday, 11 December 2012

The Automation Pyramid

Think about using the test automation pyramid when planning your test automation strategy.

The test automation pyramid was used by Mike Cohn to describe the relative value of different types of automated test in the context of an n-tier application. The concept is very simple: invest more time and effort in the tests lower down the pyramid than in those at the peak. Tests lower down the pyramid provide the most value in terms of quick feedback and reliability, whereas those at the peak are expensive to implement, brittle, and time consuming.

The traditional pyramid is split into three layers: unit tests at the base, integration/API tests in the middle, and UI tests forming the peak. Many now opt to describe the UI layer as the ‘end to end’ layer, as that phrase better represents those types of test.
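To make the trade-off concrete, here is what a base-of-the-pyramid test might look like in Python (the `validate_password` rule is invented purely for illustration). It runs in milliseconds with no browser, server, or test data setup, which is exactly why the base layer gives the quickest and most reliable feedback:

```python
import unittest

def validate_password(password):
    # Hypothetical business rule: at least 8 characters and one digit.
    return len(password) >= 8 and any(c.isdigit() for c in password)

class PasswordRulesTest(unittest.TestCase):
    def test_accepts_valid_password(self):
        self.assertTrue(validate_password("s3curepass"))

    def test_rejects_short_password(self):
        self.assertFalse(validate_password("s3cure"))

    def test_rejects_password_without_digit(self):
        self.assertFalse(validate_password("longpassword"))

if __name__ == "__main__":
    unittest.main()
```

Exercising the same rule through the UI would mean driving a browser to a form and asserting on an error message: slower, more brittle, and testing the same logic. That is the pyramid's argument in miniature.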


Useful posts on the subject:

http://martinfowler.com/bliki/TestPyramid.html by Martin Fowler

http://www.mountaingoatsoftware.com/blog/the-forgotten-layer-of-the-test-automation-pyramid
by Mike Cohn

Tuesday, 13 November 2012

Automated smoke tests in production

If you can, don’t be afraid to run your automated tests in production. A production environment is a place where automated tests can give real value, especially after a release. Instant feedback on the success of a change in production could be worth a lot of money to your organisation.

As a minimum, run automated smoke tests before and after a release in production: first to establish a baseline, and second to assure nothing has broken by the release.

If you are limited by the data you can use or create during a test, then consider non-transactional tests. Any way you can speed up the feedback loop when a change has occurred is a bonus.
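A minimal sketch of such a non-transactional smoke check (the health endpoint and its fields are assumptions, not a real API): it only reads state, so it creates no test data in production.

```python
import json

def smoke_check(fetch, url="https://example.com/health"):
    """Read-only smoke check. `fetch` is any callable returning
    (status_code, body) for a URL, so the check stays transport-agnostic."""
    status, body = fetch(url)
    if status != 200:
        return False
    payload = json.loads(body)
    # Assert only on read-only state: the service is up and reports healthy.
    return payload.get("status") == "ok"

# In a real run, fetch would wrap urllib or requests; a fake response
# stands in here so the sketch is self-contained.
def fake_fetch(url):
    return 200, '{"status": "ok", "version": "1.4.2"}'

print(smoke_check(fake_fetch))  # True
```

Run the same check before and after the release and compare: if the baseline passed and the post-release run fails, you have near-instant feedback that the change broke something.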

Obviously, not all systems or organisations are conducive to this sort of strategy, so when designing a new system it’s worth thinking about the ability to run automated tests in a live environment from the start.