domain: The set from
which valid input and/or output values can be selected.
driver: A software
component or test tool that replaces a component that takes care of the
control and/or the calling of a
component or system.
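For illustration, a minimal sketch of a test driver in Python (the function and values below are made up): the driver stands in for the calling component and controls the invocation of the component under test.

```python
# Illustrative test driver; compute_discount is a hypothetical component under test.

def compute_discount(order_total):
    """Component under test (placeholder implementation)."""
    return order_total * 0.1 if order_total >= 100 else 0.0

def driver():
    # The driver supplies the inputs and checks the outputs that the real
    # calling component would normally provide and consume.
    cases = [(50, 0.0), (100, 10.0), (250, 25.0)]
    for total, expected in cases:
        actual = compute_discount(total)
        assert actual == expected, f"total={total}: expected {expected}, got {actual}"

if __name__ == "__main__":
    driver()
    print("driver: all calls passed")
```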
dynamic
analysis: The
process of evaluating behavior, e.g. memory performance, CPU
usage, of a system or component
during execution.
dynamic analysis
tool: A
tool that provides run-time information on the state of the software
code. These tools are most
commonly used to identify unassigned pointers, check pointer arithmetic, monitor the
allocation, use and de-allocation of memory, and flag memory leaks.
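As a rough sketch of the kind of run-time information such a tool reports, the snippet below uses Python's standard-library tracemalloc to monitor memory allocation during execution (the grow function and the loop sizes are hypothetical).

```python
import tracemalloc

def grow(store):
    store.append("x" * 10_000)  # allocation that is never released

tracemalloc.start()
store = []
for _ in range(1_000):
    grow(store)

snapshot = tracemalloc.take_snapshot()
for stat in snapshot.statistics("lineno")[:3]:
    print(stat)  # reports the source lines holding the most allocated memory
tracemalloc.stop()
```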
dynamic
comparison: Comparison
of actual and expected results, performed while the
software is being executed, for
example by a test execution tool.
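A minimal sketch of dynamic comparison (the conversion function is just an example): the actual output is compared with the expected result while the software is running, as a test execution tool would do.

```python
def to_celsius(fahrenheit):
    return (fahrenheit - 32) * 5 / 9

expected = 100.0
actual = to_celsius(212)          # comparison happens at run time
assert actual == expected, f"expected {expected}, got {actual}"
print("dynamic comparison passed")
```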
dynamic testing:
Testing
that involves the execution of the software of a component or
system.
efficiency: The capability
of the software product to provide appropriate performance,
relative to the amount of
resources used under stated conditions.
efficiency
testing: The
process of testing to determine the efficiency of a software product.
elementary
comparison testing: A
black box test design technique in which test cases are
designed to execute combinations
of inputs using the concept of condition determination
coverage.
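An illustrative set of test cases built on condition determination coverage for a decision with two conditions (the function, names and values are made up): each condition is shown to independently affect the outcome.

```python
def may_drive(age, has_licence):
    return age >= 18 and has_licence

test_cases = [
    # (age, has_licence, expected)
    (20, True,  True),   # both conditions true -> decision true
    (17, True,  False),  # flipping only the age condition changes the outcome
    (20, False, False),  # flipping only the licence condition changes the outcome
]

for age, licence, expected in test_cases:
    assert may_drive(age, licence) is expected
print("all elementary comparison test cases passed")
```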
emulator: A device,
computer program, or system that accepts the same inputs and produces
the same outputs as a given
system.
entry criteria: The set of
generic and specific conditions for permitting a process to go
forward with a defined task, e.g.
test phase. The purpose of entry criteria is to prevent a
task from starting which would
entail more (wasted) effort compared to the effort needed
to remove the failed entry
criteria.
entry point: The first
executable statement within a component.
equivalence
partition: A
portion of an input or output domain for which the behavior of a
component or system is assumed to
be the same, based on the specification.
equivalence
partition coverage: The
percentage of equivalence partitions that have been
exercised by a test suite.
equivalence
partitioning: A
black box test design technique in which test cases are designed
to execute representatives from
equivalence partitions. In principle test cases are designed
to cover each partition at least
once.
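A small sketch of equivalence partitioning with illustrative values: the input domain of an "age" field is split into partitions whose behavior is assumed to be the same, and one representative per partition is executed.

```python
def classify_age(age):
    if age < 0:
        return "invalid"
    if age < 18:
        return "minor"
    return "adult"

partitions = {
    "invalid (< 0)":  (-5, "invalid"),
    "minor (0..17)":  (10, "minor"),
    "adult (>= 18)":  (40, "adult"),
}

for name, (representative, expected) in partitions.items():
    assert classify_age(representative) == expected, name
print("each equivalence partition covered at least once")
```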
error: A human action
that produces an incorrect result.
error guessing: A test design
technique where the experience of the tester is used to
anticipate what defects might be
present in the component or system under test as a result
of errors made, and to design
tests specifically to expose them.
error tolerance:
The
ability of a system or component to continue normal operation despite
the presence of erroneous inputs.
exception
handling: Behavior
of a component or system in response to erroneous input, from
either a human user or from
another component or system, or to an internal failure.
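A minimal sketch of exception handling (the function name is hypothetical): the component responds to erroneous input from a caller with defined behavior instead of terminating abnormally.

```python
def parse_quantity(raw):
    try:
        value = int(raw)
        if value < 0:
            raise ValueError("quantity must be non-negative")
        return value
    except (TypeError, ValueError):
        # defined behavior in response to erroneous input
        return 0

print(parse_quantity("3"))     # 3
print(parse_quantity("abc"))   # 0  (erroneous input handled)
print(parse_quantity(None))    # 0  (erroneous input handled)
```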
executable
statement: A
statement which, when compiled, is translated into object code, and
which will be executed
procedurally when the program is running and may perform an
action on data.
exercised: A program
element is said to be exercised by a test case when the input value
causes the execution of that
element, such as a statement, decision, or other structural
element.
exhaustive
testing: A
test approach in which the test suite comprises all combinations of
input values and preconditions.
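A sketch of what an exhaustive suite looks like, and why it is rarely practical (the domains below are hypothetical): the suite contains every combination of input values.

```python
from itertools import product

# Tiny demo domains: two inputs with 3 valid values each.
modes = ["create", "update", "delete"]
roles = ["guest", "user", "admin"]

exhaustive_suite = list(product(modes, roles))   # every combination: 3 x 3 = 9 cases
print(len(exhaustive_suite))

# With realistic domains the number of combinations explodes, e.g.
# 10,000 ids x 1,000,000 amounts = 10**10 cases, which is why exhaustive
# testing is usually infeasible in practice.
```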
exit criteria: The set of
generic and specific conditions, agreed upon with the stakeholders,
for permitting a process to be
officially completed. The purpose of exit criteria is to
prevent a task from being
considered completed when there are still outstanding parts of
the task which have not been
finished. Exit criteria are used to report against and to plan
when to stop testing.
exit point: The last
executable statement within a component.
expected result:
The
behavior predicted by the specification, or another source, of the
component or system under
specified conditions.
experience-based
test design technique: Procedure to derive and/or select test cases based
on the tester’s experience,
knowledge and intuition.
exploratory
testing: An
informal test design technique where the tester actively controls the
design of the tests as those
tests are performed and uses information gained while testing to
design new and better tests.
fail: A test is deemed
to fail if its actual result does not match its expected result.
failure: Deviation of the
component or system from its expected delivery, service or result.
failure mode: The physical or
functional manifestation of a failure. For example, a system in
failure mode may be characterized
by slow operation, incorrect outputs, or complete
termination of execution.
Failure Mode and
Effect Analysis (FMEA): A systematic approach to risk identification
and analysis that identifies
possible failure modes and attempts to prevent their
occurrence.
Failure Mode,
Effect and Criticality Analysis (FMECA): An extension of FMEA, as in
addition to the basic FMEA, it
includes a criticality analysis, which is used to chart the
probability of failure modes
against the severity of their consequences. The result
highlights failure modes with
relatively high probability and severity of consequences,
allowing remedial effort to be
directed where it will produce the greatest value.
failure rate: The ratio of the
number of failures of a given category to a given unit of
measure, e.g. failures per unit
of time, failures per number of transactions, failures per
number of computer runs.
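A small worked example (the figures are made up):

```python
failures = 12            # failures observed in the category of interest
operating_hours = 400    # unit of measure: hours of operation

failure_rate = failures / operating_hours
print(f"{failure_rate:.3f} failures per hour")   # 0.030 failures per hour
```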
false-fail
result:
A test result in which a defect is reported although no such defect actually
exists in the test object.
false-pass
result:
A test result which fails to identify the presence of a defect that is actually
present in the test object.
fault seeding: The process of
intentionally adding known defects to those already in the
component or system for the
purpose of monitoring the rate of detection and removal, and
estimating the number of remaining
defects.
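One common way the seeding figures are used to estimate remaining defects is a simple ratio estimate, sketched below with illustrative numbers; it assumes seeded and genuine defects are equally likely to be found.

```python
seeded_total = 20   # known defects intentionally inserted
seeded_found = 15   # seeded defects detected by the test effort
real_found   = 60   # genuine (non-seeded) defects detected

estimated_real_total = real_found * seeded_total / seeded_found   # 80.0
estimated_remaining  = estimated_real_total - real_found          # 20.0
print(estimated_real_total, estimated_remaining)
```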
fault seeding
tool: A
tool for seeding (i.e. intentionally inserting) faults in a component or
system.
fault tolerance:
The
capability of the software product to maintain a specified level of
performance in cases of software
faults (defects) or of infringement of its specified
interface. [ISO 9126] See also reliability,
robustness.
Fault Tree
Analysis (FTA): A
technique used to analyze the causes of faults (defects). The
technique visually models how
logical relationships between failures, human errors, and
external events can combine to
cause specific faults to occur.
feasible path: A path for which
a set of input values and preconditions exists which causes it
to be executed.
feature: An attribute of
a component or system specified or implied by requirements
documentation (for example
reliability, usability or design constraints).
finite state
machine: A
computational model consisting of a finite number of states and
transitions between those states,
possibly with accompanying actions.
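A minimal finite state machine sketch (the states and events are hypothetical): a finite set of states plus transitions between them, with the accompanying action represented here by printing each transition.

```python
transitions = {
    ("closed", "open_door"):  "open",
    ("open",   "close_door"): "closed",
    ("closed", "lock"):       "locked",
    ("locked", "unlock"):     "closed",
}

state = "closed"
for event in ["open_door", "close_door", "lock", "unlock"]:
    state = transitions[(state, event)]
    print(f"{event} -> {state}")
```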
formal review: A review characterized
by documented procedures and requirements, e.g.
inspection.
frozen test
basis: A
test basis document that can only be amended by a formal change control
process.
Function Point
Analysis (FPA): Method
aiming to measure the size of the functionality of
an information system. The
measurement is independent of the technology. This
measurement may be used as a
basis for the measurement of productivity, the estimation of
the needed resources, and project
control.
functional
integration: An
integration approach that combines the components or systems
for the purpose of getting a
basic functionality working early.
functional
requirement: A
requirement that specifies a function that a component or system
must perform.
functional test
design technique: Procedure
to derive and/or select test cases based on an
analysis of the specification of
the functionality of a component or system without
reference to its internal
structure.
functional
testing: Testing
based on an analysis of the specification of the functionality of a
component or system.
functionality: The capability
of the software product to provide functions which meet stated
and implied needs when the
software is used under specified conditions.
functionality
testing: The
process of testing to determine the functionality of a software
product.
hazard analysis: A technique
used to characterize the elements of risk. The result of a hazard
analysis will drive the methods
used for development and testing of a system.
heuristic
evaluation: A
static usability test technique to determine the compliance of a user
interface with recognized
usability principles (the so-called “heuristics”).
high level test
case: A
test case without concrete (implementation level) values for input data
and expected results. Logical
operators are used; instances of the actual values are not yet
defined and/or available.
horizontal
traceability: The
tracing of requirements for a test level through the layers of test
documentation (e.g. test plan,
test design specification, test case specification and test
procedure specification or test
script).
hyperlink: A pointer
within a web page that leads to other web pages.
hyperlink tool: A tool used to
check that no broken hyperlinks are present on a web site.
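A rough sketch of what such a tool does, using only the Python standard library (the starting URL is hypothetical); a real tool would crawl a whole site and handle redirects and relative URLs more carefully.

```python
from html.parser import HTMLParser
from urllib.request import urlopen
from urllib.error import URLError, HTTPError

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(v for k, v in attrs
                              if k == "href" and v and v.startswith("http"))

def check_page(url):
    collector = LinkCollector()
    collector.feed(urlopen(url).read().decode("utf-8", errors="replace"))
    for link in collector.links:
        try:
            urlopen(link, timeout=5)
        except (HTTPError, URLError) as exc:
            print(f"broken: {link} ({exc})")

# check_page("https://example.com")  # hypothetical starting page
```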
impact analysis:
The
assessment of change to the layers of development documentation, test
documentation and components, in
order to implement a given change to specified
requirements.
incident: Any event
occurring that requires investigation.
incident
logging: Recording
the details of any incident that occurred, e.g. during testing.
incident
management: The
process of recognizing, investigating, taking action and disposing
of incidents. It involves logging
incidents, classifying them and identifying the impact.
incident
management tool: A
tool that facilitates the recording and status tracking of
incidents. They often have
workflow-oriented facilities to track and control the allocation,
correction and re-testing of
incidents and provide reporting facilities.
incident report:
A
document reporting on any event that occurred, e.g. during the testing,
which requires investigation.
incremental
development model: A
development life cycle where a project is broken into a
series of increments, each of
which delivers a portion of the functionality in the overall
project requirements. The
requirements are prioritized and delivered in priority order in the
appropriate increment. In some
(but not all) versions of this life cycle model, each
subproject follows a ‘mini
V-model’ with its own design, coding and testing phases.
incremental
testing: Testing
where components or systems are integrated and tested one or
some at a time, until all the
components or systems are integrated and tested.
independence of
testing: Separation
of responsibilities, which encourages the
accomplishment of objective
testing.
infeasible path:
A
path that cannot be exercised by any set of possible input values.
informal review:
A
review not based on a formal (documented) procedure.
input: A variable
(whether stored within a component or outside) that is read by a
component.