Comparing function testing and performance testing


Most people involved in Quality Assurance have a good understanding of functional testing, but a good understanding of non-functional testing is quite rare.

The differences between function testing and performance testing are therefore a good starting point for understanding system performance. The differences are vast.

On this page we discuss six categories of differences:

  1. Differences in test purpose.

  2. Differences in test requirements.

  3. Differences in test cases and test case descriptions.

  4. Differences in test execution.

  5. Differences in produced test results.

  6. Differences in presentation of test results.


1. Differences in test purpose

Function test purposes

  • Function tests are performed to ensure correct processing of requested services by the tested system.

  • Function tests shall cover all meaningful aspects of using each service provided by the tested system, i.e. achieve high test coverage. This means many test cases.

Performance test purposes

  • Performance tests are performed to understand the non-functional behaviour of the tested system under specified conditions.

  • Performance testing is therefore a learning process.

  • Performance tests assume working functionality, i.e. scenarios used in a performance test must be preceded by function tests to verify working functionality.

  • A system's performance can always be improved, i.e. performance testing of a system has no real end.

  • Performance tests need not cover all services provided by the tested system, i.e. there is no test coverage requirement.

  • The selection of services used in performance test cases is based on criteria, such as:

    • Frequently used services.

    • Availability-critical services.

    • Resource-critical services.

    • etc.



2. Differences in test requirements

Function test requirements

Stated functional requirements are specifications of what shall be tested and in most cases also how it shall be tested.

Performance test requirements

Stated performance requirements are specifications of what non-functional characteristics shall be measured, but not how the measurements shall be done.

Stated performance requirements shall specify what measurement data shall be collected and how measurement data shall be evaluated.

Each requirement usually contains a number of different conditions that must apply, such as what types of services shall load the tested system, traffic intensity, traffic patterns, test duration, resource conditions on the tested system, etc.
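
As an illustration, one such requirement with its conditions could be captured as structured data. Every name and value below is an invented example, not taken from any standard or tool:

```python
# Hypothetical sketch of a performance requirement and its conditions.
# All service names, intensities, and thresholds are invented examples.
requirement = {
    "id": "PERF-REQ-001",
    "metric": "response time, 95th percentile",   # what shall be measured
    "threshold_ms": 500,                          # how it shall be evaluated
    "conditions": {
        "services": ["login", "search"],          # types of services loading the system
        "traffic_intensity_rps": 200,             # requests per second
        "traffic_pattern": "constant",
        "test_duration_minutes": 60,
        "resources": {"max_cpu_percent": 80},     # resource conditions on the tested system
    },
}
```

Note that the requirement states what to measure and how to evaluate it, but says nothing about how the measurement itself shall be performed.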



3. Differences in test cases and test case descriptions

Function test cases and test case descriptions

  • A function test case describes a number of steps that shall be checked (preconditions) before the test execution can start.

  • A function test case describes a number of steps that shall be done during the function test execution.

  • A function test case describes a number of steps that shall be checked (postconditions) after the test execution is finished.

  • A function test case passes only if all specified steps pass: precondition steps, execution steps, and postcondition steps.

  • A function test case can be positive, testing what shall work, or negative, testing what shall not work.

Performance test cases and test case descriptions

  • A performance test case is considerably more complex than a function test case.

  • A performance test case is applied to one or more specified function test cases (scenarios).

  • A performance test case is normally performed on positive function test cases.

  • Performance measurements of negative function test cases can be done, but are usually regarded as pointless.

  • The test cases are performed by a number of concurrently simulated users.

  • A performance test case shall specify conditions that shall apply when measurement data are captured.

  • The specified test conditions can be external, such as the applied load on the tested system, test duration, etc.

  • The specified test conditions can be internal, such as resource conditions on the tested system: CPU load, memory usage, queue sizes etc.

  • To create the required measurement conditions for different types of performance metrics, measurement tools apply different load patterns.

  • A performance test case can specify several load patterns to be used.

  • A performance test case shall specify a test duration, ranging from a few minutes up to several weeks depending on the measured category of characteristics.

  • A performance test case shall specify what performance data shall be collected during the performance test.

  • A performance test case shall specify how collected performance data shall be processed, evaluated, and presented.

  • A performance test case shall specify how performance metrics shall be evaluated on a scale ranging from Excellent to Unacceptable with a number of steps in between.

  • A performance test case requires, depending on the test duration, large amounts of test data to ensure that each service request is to some extent unique.

  • The test data includes identities of simulated users as well as data that will be mapped into every service request.
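
The elements listed above could be bundled into one description; the following sketch uses invented field names and example values, not any real tool's format:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of what a performance test case description bundles
# together. All field names and example values are invented.
@dataclass
class PerformanceTestCase:
    scenarios: list            # positive function test cases to drive
    concurrent_users: int      # number of concurrently simulated users
    load_patterns: list        # load patterns to apply during the test
    duration_minutes: int      # from a few minutes up to several weeks
    external_conditions: dict  # e.g. applied load on the tested system
    internal_conditions: dict  # e.g. CPU load, memory usage, queue sizes
    metrics_to_collect: list   # what performance data shall be collected
    evaluation_scale: list = field(default_factory=lambda: [
        "Excellent", "Good", "Acceptable", "Poor", "Unacceptable"])

case = PerformanceTestCase(
    scenarios=["login", "search"],
    concurrent_users=500,
    load_patterns=["ramp-up", "steady-state"],
    duration_minutes=60,
    external_conditions={"target_load_rps": 200},
    internal_conditions={"max_cpu_percent": 80},
    metrics_to_collect=["response_time", "throughput"],
)
```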



4. Differences in test execution

Function test cases execution

  • Multiple function test cases can be applied concurrently on the tested system by function testers or function test tools.

  • Function tests can be done manually by testers or automated by function test tools.

  • A function test has no specified duration time.

Performance test case execution

  • Only one performance test case can be applied on a tested system at a time.

  • Performance test cases can only be executed by a performance test tool. This is to ensure that the test is repeated identically every time.

  • A performance test case may have a duration from a few minutes up to several weeks depending on the measured category of characteristics.

  • Performance test tools must execute on their own servers, separate from the tested system, to avoid any impact on collected measurement data.

  • To avoid any network impact on collected measurement data, the performance test tool shall be connected as closely as possible to the tested system, for example via a switch.
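
The core of what a performance test tool does during execution can be sketched as follows: a number of concurrently simulated users call the tested system for a fixed duration while response times are recorded. Here `serve_request()` is an invented stand-in for the tested system:

```python
import random
import threading
import time

# serve_request() stands in for the tested system; the sleep simulates
# processing time. In a real test this would be a network call.
def serve_request():
    time.sleep(random.uniform(0.001, 0.005))

# One simulated user: call the service repeatedly until the test
# duration expires, recording each response time.
def simulated_user(duration_s, samples):
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        start = time.perf_counter()
        serve_request()
        samples.append(time.perf_counter() - start)  # list.append is thread-safe in CPython

samples = []
users = [threading.Thread(target=simulated_user, args=(0.2, samples))
         for _ in range(5)]
for user in users:
    user.start()
for user in users:
    user.join()
print(f"{len(samples)} requests measured by 5 simulated users")
```

Real tools add coordinated load patterns, unique test data per request, and result storage on servers separate from the tested system, as noted above.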



5. Differences in produced test results

Function tests

  • Execution of one function test case will produce just one test result – passed or failed.

  • At the end of a function test case execution the result is clear. There is no need to do any evaluation.

Performance tests

  • Execution of a performance test case will collect performance data that can produce a number of different performance metrics about the tested system.

  • Every performance test result is evaluated on a scale ranging from Excellent to Unacceptable with a number of steps in between.
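
As a hypothetical illustration of such an evaluation, a measured metric can be mapped onto the scale; the thresholds below are invented examples, not recommended values:

```python
# Hypothetical mapping of a measured 95th-percentile response time (ms)
# onto the rating scale. All thresholds are invented examples.
def rate_response_time(p95_ms):
    if p95_ms <= 200:
        return "Excellent"
    if p95_ms <= 500:
        return "Good"
    if p95_ms <= 1000:
        return "Acceptable"
    if p95_ms <= 2000:
        return "Poor"
    return "Unacceptable"

print(rate_response_time(350))   # → Good
print(rate_response_time(3000))  # → Unacceptable
```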



6. Differences in presentation of test results

Function test presentation

  • Function test results are usually presented as number of passed or failed test cases.

Performance test presentation

  • Performance test results are usually presented as graphs that show, on a timeline, the behaviour pattern of the tested system during the test execution.

  • Performance test results can also be presented as figures showing the aggregated behaviour of the system during the test execution, such as a processing availability of 99.99999% of processed requests.

  • The aggregated behaviour figures usually display the metrics with the greatest impact on the tested system, also called KPIs (Key Performance Indicators).
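
As a sketch of how such aggregated figures could be computed from collected per-request data; the helper functions and sample numbers are invented for illustration:

```python
# Aggregated KPI figures computed from collected per-request data.
# The request counts and latencies below are invented sample data;
# real data would come from the performance test tool.
def availability_percent(total_requests, failed_requests):
    return 100.0 * (total_requests - failed_requests) / total_requests

def percentile(sorted_values, p):
    # nearest-rank percentile (a simple approximation; library
    # implementations differ in interpolation method)
    k = max(0, int(round(p / 100.0 * len(sorted_values))) - 1)
    return sorted_values[k]

latencies_ms = sorted([120, 95, 300, 110, 250, 105, 130, 90, 115, 100])
print(f"availability: {availability_percent(10_000_000, 1):.5f}%")
print(f"p95 latency:  {percentile(latencies_ms, 95)} ms")
```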
