Testing, testing and more testing!

The importance of testing the apps we build, and why we do it.

It is blindingly obvious that testing is utterly important. You can't go live without at least some testing time. But what really is testing, and do we give it the space, time and money it deserves? That's the question I'd like to ask of the whole industry, because I think the answer is a resounding NO! And that worries me.

What is app testing? 

First off, I'm going to look at what testing consists of. What is testing? What are the options, and what do they mean in terms of time and value? I'll look at what we're prepared to pay for and why this needs addressing, and I'll ask why we're not being more forceful about it.

Testing can mean many things, so I'll keep it in the context of a web application in this instance. It's a limited viewpoint, I know, but the same message carries across to other disciplines.

Testing a website

In testing a website, we talk about functional testing, user acceptance testing and user interface testing. The latter two often work hand in hand and are done prior to building, whereas functional testing traditionally happens after the build process and is about eliminating bugs in the system. Fine, but that only works to an extent. It's the functional testing we're talking about here. If the others haven't been done prior to building, there are bigger issues that need dealing with first. Like why not?!


We're given scopes to look through and quote for on a day-to-day basis. Sometimes they appear complete and other times they don't and need some reworking. The complete ones are almost certainly not perfect and have missed one or two bits of business logic somewhere along the line. These aren't picked up until later in development, and only surface when an astute developer asks the question of their project manager. If they're not picked up, they surface when a user of the site does something unexpected, or at least something that the developers didn't expect the user to do. And that's usually where the bugs appear.

What sort of testing can you employ and how does it help? 

Well, there are two main options: manual testing or automated testing. You could put someone in front of a computer and ask them to run through every single possibility and flag bugs, then run through it again and again until there are no bugs, in each separate browser. Alternatively, you could automate the testing if you're building the application in a framework that has testing support built in.


The first of these two options, manual testing, is what is normally assumed. Unfortunately, it's often treated as a case of going through the application or site after build and checking that everything appears to work as expected. This is fine but doesn't really deal with the issue at hand - those pesky bugs that hide under bits of code you never thought to look under. If you choose to go down the route of manual testing, preparation is key - all scenarios must be thought through, written up into test cases and tested by a person, or people, independent of the project. Clearly, the people involved in the project need to be testing as well, but they can often be too close to it to notice what are sometimes glaring issues. Sufficient testing means a full set of written tests, with results properly recorded step by step. This costs money; often half or more of the build cost. Doing any less than this can't be considered sufficient.
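
To make that concrete, a written manual test case might look something like the sketch below. The feature, the steps and the discount code are purely hypothetical, for illustration only; the point is that the steps, the expected result and the actual result are all recorded so the test can be repeated exactly.

    Test case MT-014: guest checkout with an expired discount code
    Preconditions: basket contains one item; the code SPRING10 has expired
    Steps:
      1. Go to the basket and choose "Checkout as guest"
      2. Enter the code SPRING10 and apply it
      3. Complete the payment details and submit the order
    Expected result: the code is rejected with a clear message and the order total is unchanged
    Actual result, tester and date: recorded on every run, pass or fail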


If the framework of choice, such as Ruby on Rails, has a test suite built in, there is a second option that costs significantly less, yet gives almost the same quality assessment of the project as the manual process. Depending on the choice of test suites (many can be employed, each one testing a different facet of the application; e.g. separate front-end and back-end suites), the tests can be written to be readable by people not affiliated with the project, or by those on the project in a non-technical capacity. This means the tests can be peer reviewed and cross-checked for completeness against the technical specification.

Usually testing in this way happens during the build process: the tests are written before any code, and the code is then written to pass all the relevant tests. Two methods used here are behaviour-driven development and test-driven development. The first tests that the behaviours expected by the user actually happen, and the latter is often used to test the business logic. Both can be used together.

Automated testing costs on average approximately one-third of the build cost. With it come guarantees that the application has been tested almost to destruction, with all kinds of possibilities tested out. If you want an extra layer of security, do some manual testing as well. Get people not involved in the project to run through the system, and if anything is spotted it can be dealt with promptly. Again, tests must be written for these, and should detail the behaviour that the user should experience. As long as the results are recorded, including the steps taken, then this is well worth doing.
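
To show how readable these automated tests can be, here is a minimal RSpec-style sketch. The Order model, its attributes and the delivery-charge rule are invented for the example rather than taken from any real project, but the shape is typical of a behaviour-style test in a Rails application.

    # spec/models/order_spec.rb
    # Hypothetical business rule: orders under £50 attract a delivery charge.
    require "rails_helper"

    RSpec.describe Order, type: :model do
      it "adds a delivery charge to orders under £50" do
        order = Order.new(subtotal: 30)
        expect(order.total).to eq(30 + Order::DELIVERY_CHARGE)
      end

      it "ships orders of £50 or more for free" do
        order = Order.new(subtotal: 50)
        expect(order.total).to eq(50)
      end
    end

Someone who has never seen the codebase can still read those two descriptions and tick them off against the specification, which is exactly the peer review described above; written before the implementation, the same examples then drive the code.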

There is one further major benefit to automated testing

The tester doesn't go away. If you update the application further down the line and want to add some more functionality, you can quickly make sure that the changes you have made work and, more importantly, that all the previous tests still pass. If you decide on the manual process, because of a resourcing issue at the time for instance, remember that you'll have to run through every single test again whenever you make any changes if you want to guarantee the same level of quality. This is the value of automated testing: continuous, solid, quantitative tests that never go away.
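
One habit that makes the most of this, sketched here with the same hypothetical Order model as above: when a bug does slip through, write a test that reproduces it before fixing it, so that exact regression can never silently return.

    # spec/models/order_spec.rb (addition)
    # Hypothetical regression test added after a bug report:
    # a discount larger than the subtotal produced a negative total.
    it "never produces a negative total when the discount exceeds the subtotal" do
      order = Order.new(subtotal: 10, discount: 25)
      expect(order.total).to eq(0)
    end

Every future change runs against this example too, which is what "the tester doesn't go away" means in practice.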


If making sure your application is robust is this economical, why then is it perceived to be too expensive? I'll make an analogy to motorcycling. I recently bought some new motorcycle gear that cost me nearly £900. I know I could have bought gear for less than £200, but I also know I'd be compromising my own life if I did so. Testing an application is the same, if a little less terminal. Usually, though, it's the first thing to be thrown out: the budget needs to be reduced and the scope needs to stay the same, so the testing budget suffers. The issue is that the expectation is still that the application will be fully tested. I think there is a certain belief, too, that the £200 gear will save your life. It may well do, but not before you've had a few skin grafts and organ transplants.

Should we be prepared to pay more for this part of the build? Yes. If the budget does need to be reduced, the reduction must come out of the scope, or the quality of the end product will be severely compromised, and for that no guarantees can be provided.

Summarising app testing - manual v automated

I know this is something of a long post. It's important that we understand the unequivocal requirement for proper testing, whether manual or automated, and that our clients are educated to that effect. They pay millions of pounds and dollars for software systems that run their organisation, on the basis that those systems are tested and won't break. Even so, they do sometimes break, if rarely. Even less often do they break badly. That's a testament to the amount of testing that goes into them. All software has bugs; it's almost impossible to eliminate them all. So why can't budget be put into testing the bespoke systems that are being built? Education. That's all. The budget is there for testing, but if the knowledge and understanding of the whys and wherefores isn't there, then it won't be allowed to be spent on what appears to be a frivolous activity. That lack of understanding is why.


If you do need an app for your customers, we can help there too. We’ll help figure out what your customers need, build it, support it and maintain it. Discover more today