
Test Execution

What?
"The processing of a test case suite by the software under test, producing an outcome"
BS 7925-1, British Computer Society Specialist Interest Group in Software Testing (BCS SIGIST)

The vast majority of tests require the code to be exercised for execution to take place. For some design techniques, however, this is not the case: for instance, white box techniques such as Symbolic Testing and Static Analysis.

Test execution is on the syllabus for the ISEB Test Practitioner exam.

Why?
This seems, on the surface, such an obvious question: of course we have to actually run the tests. Look a little deeper, though, and the question becomes: is it really so important?

Well, yes. If we did not execute the tests, how would we know whether the software is behaving as we expect? At the same time, execution is merely one stage in the testing process.

Proper execution of the suite is every bit as important as having good test cases. Even in ad hoc testing, there has to be an awareness of execution procedures.

Who?
The selection of personnel to execute test cases depends on many factors. In some organisations, test execution is not seen as a distinct discipline: the "Tester" is responsible for the whole testing process. He will analyse, choose the test design technique, write the test cases, execute them and chase the resulting defects. This would be the case in a small organisation, one at a low maturity level, or one with little test automation. This is fine in that one person is in control; the downside is that the tester has to be a jack of all trades.

Larger or highly automated organisations may have clearer demarcations. Here a Test Analyst will choose the design technique and write the script, while a Test Technician is required merely to start the job, ensure it completes and log the results. Execution can thus be a low-level role. Routinely following another person's script can be very mundane, and the routine element is even worse if the scripts are automated.

The complexity of the Software Under Test (SUT) will also play a part: where the program is complex, manual testing by a domain specialist will be required.

Where?
Everywhere that testing takes place is a de facto location for test execution. In certain cases execution itself may be complex, or the SUT has to be tested in multiple environments (Windows, Mac, Web, etc.). In these instances, test execution may be outsourced to a testing lab.

When?
Again, any time testing takes place there will be an element of execution. Ideally, test execution takes place in the context of a well thought out test process: analysis, traceability from requirements and a properly configured environment.

The other extreme is where execution takes place in a test process vacuum: no planning takes place, it is difficult to work out which requirements are being tested, and defects are not re-creatable due to insufficient logging information.

How?
Firstly, I have to explain what successful "test execution" means. A test case exercises code, and while the code is being exercised, defects or failures may be exposed. A successful execution is one which tells us the result of a test, whether or not that result is the one we want. This extends to the test suite level: a suite may have hundreds of test cases, and once they have all been executed, the suite has been executed.
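
As a minimal sketch (in Python, with illustrative names of my own choosing rather than anything prescribed by BS 7925-1 or the ISEB syllabus), execution can be modelled as running each case's input against the SUT and recording whether the actual outcome matched the prediction:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class TestCase:
    name: str
    test_input: Any
    expected: Any

def execute(case: TestCase, sut: Callable[[Any], Any]) -> bool:
    # A "successful execution" yields a result we can record,
    # whether that result is the one we wanted or not.
    actual = sut(case.test_input)
    return actual == case.expected

def execute_suite(suite: list, sut: Callable[[Any], Any]) -> dict:
    # The suite has been executed once every case in it has been run.
    return {case.name: execute(case, sut) for case in suite}

# A trivial illustrative SUT that doubles its input.
suite = [TestCase("doubles two", 2, 4), TestCase("doubles zero", 0, 0)]
print(execute_suite(suite, sut=lambda x: x * 2))
```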

Planning is essential for successful execution. The execution itself needs to be scheduled, and the environment needs to be configured in advance. The wider test process planning also plays a part: analysis and requirements are the foundations on which success is built, along with a sound mechanism for recording results and outputs, since it is from these that the debugging of defects will proceed.

Logically, one executes first those cases which are deemed highest risk. This might be part of a larger risk mitigation strategy, such as one aligned with the Capability Maturity Model Integration (CMMI). Alternatively, at the other extreme, it can be the tester's experience and knowledge which determines the risk.

Personally, I try to establish a ranking of priorities amongst a suite of test cases. As a manual systems tester, execution can take two or three days, and tight scheduling means efficient use of time is important. Cases which, if they fail, might block other tests have to be run first; then come cases where I think the chance of failure is high, or which are central to the architecture. This process continues until I am executing tests that will catch only obscure data exceptions.
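
As an illustrative sketch of that kind of ranking (the rank names and example cases are hypothetical, not a prescribed scheme):

```python
# Hypothetical priority ranks: lower rank means run earlier.
BLOCKING = 0    # failure would block other tests
HIGH_RISK = 1   # likely to fail, or central to the architecture
NORMAL = 2
OBSCURE = 3     # e.g. rare data exceptions

cases = [
    ("customer search", NORMAL),
    ("login", BLOCKING),
    ("report totals", HIGH_RISK),
    ("date rollover", OBSCURE),
]

# Execute in priority order, so a tight schedule spends
# its time where the risk is highest.
for name, rank in sorted(cases, key=lambda c: c[1]):
    print(f"executing: {name}")
```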

The other important variable is the environment in which execution takes place. A huge range of platform, operating system and other interface combinations is available, and each combination can have an effect on the SUT. This means that the test case suite may have to be run many times. However, we can never test in every possible environment, so a scope will need to be established for which combinations to test.
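
A minimal sketch of scoping those combinations, assuming a simple cross-product of hypothetical environment dimensions that is then cut down to an agreed scope:

```python
from itertools import product

platforms = ["Windows", "Mac"]
browsers = ["IE", "Firefox"]   # an illustrative second dimension

# The full cross-product of environment combinations...
all_combos = list(product(platforms, browsers))

# ...can never all be tested, so an explicit scope is agreed.
in_scope = [combo for combo in all_combos if combo != ("Mac", "IE")]

for platform, browser in in_scope:
    # The whole suite may have to be run once per in-scope environment.
    print(f"run suite on {platform} / {browser}")
```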

Lastly we have the question: manual or automated? This brief article cannot answer that question other than simplistically. Broadly, manual execution requires the tester to conduct the test himself, while automated execution requires a tool to execute the test suite. Of course, we can never achieve true automation, as somebody has to set the rules for the tool.

In general, automation is more likely to pay off if the test case suite can be executed over and over again. This makes demands on the test environment; for instance, databases may need to be rolled back after each pass through the suite.
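
As a sketch of that repeatability requirement, assuming a hypothetical rollback hook (how a real database would actually be restored will vary):

```python
from typing import Callable

def run_pass(suite: dict, rollback: Callable[[], None]) -> dict:
    # One automated pass through the suite: each case is a
    # zero-argument callable returning pass (True) or fail (False).
    results = {name: case() for name, case in suite.items()}
    rollback()  # hypothetical hook: restore the database to its start state
    return results

# The same suite can then be executed over and over again.
suite = {"smoke": lambda: True, "totals": lambda: 1 + 1 == 2}
for _ in range(3):
    print(run_pass(suite, rollback=lambda: None))
```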

Manual execution is ideally suited to more in-depth tests which require knowledge of the SUT or other domain knowledge.

Once the test suite has been exercised, we need an audit trail of the execution: all the relevant details must be logged. Again, this can be automated or done manually on paper. (A sketch of such a log record follows the two lists below.)

Likely information to log includes:
 *Time and date
 *Who ran the test
 *Environment - platforms, operating systems
 *Overall results - percentage of tests passed and other metrics

At the individual test case level, useful information includes:
 *Requirement tested
 *Input
 *Output
 *Predicted outcome
 *Actual outcome
 *Pass/fail - according to the pass/fail criteria
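
As a minimal sketch of such an audit trail (the field names are illustrative, not taken from any standard), each suite-level record could capture the details listed above, with the case-level details nested inside it:

```python
import json
from datetime import datetime, timezone

def log_execution(tester, environment, case_results, path="execution_log.jsonl"):
    # case_results maps each case name to its requirement, input,
    # predicted and actual outcomes, and pass/fail verdict.
    passed = sum(1 for r in case_results.values() if r["verdict"] == "pass")
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),  # time and date
        "tester": tester,                                     # who ran the test
        "environment": environment,                # platform, operating system
        "pass_rate": passed / len(case_results),   # overall results metric
        "cases": case_results,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example usage with one hypothetical test case record.
log_execution(
    tester="A. Tester",
    environment={"platform": "PC", "os": "Windows"},
    case_results={
        "login": {"requirement": "REQ-12", "input": "valid user",
                  "predicted": "session opened", "actual": "session opened",
                  "verdict": "pass"},
    },
)
```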
