Articles On Testing

Welcome to http://www.articlesontesting.com !!!

ISTQB Certification Foundation Level Terms 2


co-existence: The capability of the software product to co-exist with other independent
software in a common environment sharing common resources.

compiler: A software tool that translates programs expressed in a high order language into
their machine language equivalent.

complexity: The degree to which a component or system has a design and/or internal
structure that is difficult to understand, maintain and verify.

compliance: The capability of the software product to adhere to standards, conventions or
regulations in laws and similar prescriptions.

compliance testing: The process of testing to determine the compliance of the component or
system.

component: A minimal software item that can be tested in isolation.

component integration testing: Testing performed to expose defects in the interfaces and
interaction between integrated components.

component specification: A description of a component’s function in terms of its output
values for specified input values under specified conditions, and required non-functional
behavior (e.g. resource-utilization).

component testing: The testing of individual software components.

compound condition: Two or more single conditions joined by means of a logical operator
(AND, OR or XOR), e.g. ‘A>B AND C>1000’.

concurrency testing: Testing to determine how the occurrence of two or more activities
within the same interval of time, achieved either by interleaving the activities or by
simultaneous execution, is handled by the component or system.

condition: A logical expression that can be evaluated as True or False, e.g. A>B. See also test
condition.

condition coverage: The percentage of condition outcomes that have been exercised by a test
suite. 100% condition coverage requires each single condition in every decision statement
to be tested as True and False.
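
As an illustration (the code below is invented for this article, not part of the glossary), here is a Python sketch of a decision containing two single conditions, and a two-test suite that achieves 100% condition coverage:

    # Hypothetical function whose decision contains two single conditions.
    def grant_discount(age, total):
        # Single conditions: (age > 60) and (total > 100)
        return age > 60 and total > 100

    # For 100% condition coverage each single condition must evaluate to
    # both True and False at least once across the suite.
    tests = [
        (65, 150.0),  # age > 60 -> True,  total > 100 -> True
        (30, 50.0),   # age > 60 -> False, total > 100 -> False
    ]
    for age, total in tests:
        grant_discount(age, total)

Two tests suffice here because each single condition takes both outcomes; testing all combinations of the conditions would require further tests.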

condition determination coverage: The percentage of all single condition outcomes that
independently affect a decision outcome that have been exercised by a test case suite.
100% condition determination coverage implies 100% decision condition coverage.

condition determination testing: A white box test design technique in which test cases are
designed to execute single condition outcomes that independently affect a decision
outcome.

condition testing: A white box test design technique in which test cases are designed to
execute condition outcomes.

condition outcome: The evaluation of a condition to True or False.

configuration: The composition of a component or system as defined by the number, nature,
and interconnections of its constituent parts.

configuration auditing: The function to check on the contents of libraries of configuration
items, e.g. for standards compliance.

configuration control: An element of configuration management, consisting of the
evaluation, co-ordination, approval or disapproval, and implementation of changes to
configuration items after formal establishment of their configuration identification.

configuration control board (CCB): A group of people responsible for evaluating and
approving or disapproving proposed changes to configuration items, and for ensuring
implementation of approved changes.

configuration identification: An element of configuration management, consisting of
selecting the configuration items for a system and recording their functional and physical
characteristics in technical documentation.

configuration item: An aggregation of hardware, software or both, that is designated for
configuration management and treated as a single entity in the configuration management
process.

configuration management: A discipline applying technical and administrative direction and
surveillance to: identify and document the functional and physical characteristics of a
configuration item, control changes to those characteristics, record and report change
processing and implementation status, and verify compliance with specified requirements.

configuration management tool: A tool that provides support for the identification and
control of configuration items, their status over changes and versions, and the release of
baselines consisting of configuration items.

consistency: The degree of uniformity, standardization, and freedom from contradiction
among the documents or parts of a component or system.

continuous representation: A capability maturity model structure wherein capability levels
provide a recommended order for approaching process improvement within specified
process areas.

control flow: A sequence of events (paths) in the execution through a component or system.

control flow analysis: A form of static analysis based on a representation of sequences of
events (paths) in the execution through a component or system.

control flow graph: An abstract representation of all possible sequences of events (paths) in
the execution through a component or system.

conversion testing: Testing of software used to convert data from existing systems for use in
replacement systems.

cost of quality: The total costs incurred on quality activities and issues, often split into
prevention costs, appraisal costs, internal failure costs and external failure costs.

COTS: Acronym for Commercial Off-The-Shelf software.

coverage: The degree, expressed as a percentage, to which a specified coverage item has been
exercised by a test suite.

coverage analysis: Measurement of achieved coverage to a specified coverage item during
test execution referring to predetermined criteria to determine whether additional testing is
required and if so, which test cases are needed.

coverage item: An entity or property used as a basis for test coverage, e.g. equivalence
partitions or code statements.

coverage tool: A tool that provides objective measures of which structural elements (e.g.
statements, branches) have been exercised by a test suite.

cyclomatic complexity: The number of independent paths through a program. Cyclomatic
complexity is defined as: L – N + 2P, where
- L = the number of edges/links in a graph
- N = the number of nodes in a graph
- P = the number of disconnected parts of the graph (e.g. a called graph and a subroutine)
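
As a worked example (the graph is constructed here for illustration), a function with a single if-else statement has a control flow graph with four nodes and four edges, giving cyclomatic complexity 2:

    # Control flow graph of a single if-else:
    #   1 (decision) -> 2 (then-branch)
    #   1 (decision) -> 3 (else-branch)
    #   2 -> 4 (join/exit)
    #   3 -> 4 (join/exit)
    edges = [(1, 2), (1, 3), (2, 4), (3, 4)]
    nodes = {1, 2, 3, 4}

    L = len(edges)  # 4 edges
    N = len(nodes)  # 4 nodes
    P = 1           # one connected graph
    print(L - N + 2 * P)  # 4 - 4 + 2 = 2 independent paths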

daily build: A development activity where a complete system is compiled and linked every
day (usually overnight), so that a consistent system is available at any time including all
latest changes.

data definition: An executable statement where a variable is assigned a value.

data driven testing: A scripting technique that stores test input and expected results in a table
or spreadsheet, so that a single control script can execute all of the tests in the table. Data
driven testing is often used to support the application of test execution tools such as
capture/playback tools.
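
A minimal data-driven sketch in Python (the function and data are invented for illustration): the test data sits in a table and a single control script iterates over it, much as a spreadsheet-driven tool would.

    import unittest

    def add(a, b):  # hypothetical system under test
        return a + b

    # Table of (input 1, input 2, expected result) rows, as might be
    # read from a spreadsheet or CSV file.
    TEST_TABLE = [
        (1, 1, 2),
        (2, 3, 5),
        (-1, 1, 0),
    ]

    class DataDrivenExample(unittest.TestCase):
        def test_add_from_table(self):
            for a, b, expected in TEST_TABLE:
                with self.subTest(a=a, b=b):
                    self.assertEqual(add(a, b), expected)

    if __name__ == "__main__":
        unittest.main()

Adding a new test then means adding a row to the table, not writing a new script.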

data flow: An abstract representation of the sequence and possible changes of the state of
data objects, where the state of an object is any of: creation, usage, or destruction.

data flow analysis: A form of static analysis based on the definition and usage of variables.

data flow coverage: The percentage of definition-use pairs that have been exercised by a test
suite.

data flow testing: A white box test design technique in which test cases are designed to
execute definition and use pairs of variables.

database integrity testing: Testing the methods and processes used to access and manage the
data(base), to ensure access methods, processes and data rules function as expected and
that during access to the database, data is not corrupted or unexpectedly deleted, updated or
created.

debugging: The process of finding, analyzing and removing the causes of failures in
software.

debugging tool: A tool used by programmers to reproduce failures, investigate the state of
programs and find the corresponding defect. Debuggers enable programmers to execute
programs step by step, to halt a program at any program statement and to set and examine
program variables.

decision: A program point at which the control flow has two or more alternative routes. A
node with two or more links to separate branches.

decision condition coverage: The percentage of all condition outcomes and decision
outcomes that have been exercised by a test suite. 100% decision condition coverage
implies both 100% condition coverage and 100% decision coverage.

decision condition testing: A white box test design technique in which test cases are
designed to execute condition outcomes and decision outcomes.

decision coverage: The percentage of decision outcomes that have been exercised by a test
suite. 100% decision coverage implies both 100% branch coverage and 100% statement
coverage.

decision outcome: The result of a decision (which therefore determines the branches to be
taken).

decision table: A table showing combinations of inputs and/or stimuli (causes) with their
associated outputs and/or actions (effects), which can be used to design test cases.
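
For illustration, an invented decision table for a simple login rule; each column (rule) is a combination of causes with its resulting effects:

    Conditions           Rule 1   Rule 2   Rule 3
    valid username?      T        T        F
    valid password?      T        F        -
    Actions
    grant access         X        -        -
    show error message   -        X        X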

decision table testing: A black box test design technique in which test cases are designed to
execute the combinations of inputs and/or stimuli (causes) shown in a decision table.

decision testing: A white box test design technique in which test cases are designed to
execute decision outcomes.

defect: A flaw in a component or system that can cause the component or system to fail to
perform its required function, e.g. an incorrect statement or data definition. A defect, if
encountered during execution, may cause a failure of the component or system.

defect based test design technique: A procedure to derive and/or select test cases targeted at
one or more defect categories, with tests being developed from what is known about the
specific defect category.

defect density: The number of defects identified in a component or system divided by the
size of the component or system (expressed in standard measurement terms, e.g. lines-of-code,
number of classes or function points).
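
For example, a component of 10,000 lines of code in which 25 defects have been identified has a defect density of 25 / 10 = 2.5 defects per KLOC (thousand lines of code).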

Defect Detection Percentage (DDP): The number of defects found by a test phase, divided
by the number found by that test phase and any other means afterwards.
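
For example, if a system test phase finds 90 defects and 10 further defects are found afterwards (e.g. in production), the DDP of that phase is 90 / (90 + 10) = 90%.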

defect management: The process of recognizing, investigating, taking action and disposing
of defects. It involves recording defects, classifying them and identifying the impact.

defect management tool: A tool that facilitates the recording and status tracking of defects
and changes. Such tools often have workflow-oriented facilities to track and control the
allocation, correction and re-testing of defects, and provide reporting facilities.

defect masking: An occurrence in which one defect prevents the detection of another.

defect report: A document reporting on any flaw in a component or system that can cause the
component or system to fail to perform its required function.

defect taxonomy: A system of (hierarchical) categories designed to be a useful aid for
reproducibly classifying defects.

definition-use pair: The association of the definition of a variable with a subsequent use of
that variable. Variable uses include computational use (e.g. multiplication) and predicate
use (directing the execution of a path).
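
An illustrative snippet (invented for this entry) marking definitions and uses of the variable wage:

    def pay(hours, rate):
        wage = hours * rate    # definition of 'wage'
        if wage > 1000:        # predicate use: directs the path taken
            wage = wage * 0.9  # computational use, then a redefinition
        return wage            # computational use

    # One definition-use pair is the definition 'wage = hours * rate'
    # together with the predicate use 'wage > 1000'.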

deliverable: Any (work) product that must be delivered to someone other than the (work)
product’s author.

design-based testing: An approach to testing in which test cases are designed based on the
architecture and/or detailed design of a component or system (e.g. tests of interfaces
between components or systems).

desk checking: Testing of software or specification by manual simulation of its execution.

development testing: Formal or informal testing conducted during the implementation of a
component or system, usually in the development environment by developers.

documentation testing: Testing the quality of the documentation, e.g. user guide or
installation guide.

ISTQB Certification Foundation Level Terms 1

acceptance criteria: The exit criteria that a component or system must satisfy in order to be
accepted by a user, customer, or other authorized entity.

acceptance testing: Formal testing with respect to user needs, requirements, and business
processes conducted to determine whether or not a system satisfies the acceptance criteria
and to enable the user, customers or other authorized entity to determine whether or not to
accept the system.

accessibility testing: Testing to determine the ease by which users with disabilities can use a
component or system.

accuracy: The capability of the software product to provide the right or agreed results or effects
with the needed degree of precision.

actual result: The behavior produced/observed when a component or system is tested.

ad hoc testing: Testing carried out informally; no formal test preparation takes place, no
recognized test design technique is used, there are no expectations for results and
arbitrariness guides the test execution activity.

adaptability: The capability of the software product to be adapted for different specified
environments without applying actions or means other than those provided for this purpose
for the software considered.

agile testing: Testing practice for a project using agile methodologies, such as extreme
programming (XP), treating development as the customer of testing and emphasizing the
test-first design paradigm.

alpha testing: Simulated or actual operational testing by potential users/customers or an
independent test team at the developers’ site, but outside the development organization.
Alpha testing is often employed for off-the-shelf software as a form of internal acceptance
testing.

analyzability: The capability of the software product to be diagnosed for deficiencies or causes
of failures in the software, or for the parts to be modified to be identified.

anomaly: Any condition that deviates from expectation based on requirements specifications,
design documents, user documents, standards, etc. or from someone’s perception or
experience. Anomalies may be found during, but not limited to, reviewing, testing,
analysis, compilation, or use of software products or applicable documentation.

attack: Directed and focused attempt to evaluate the quality, especially reliability, of a test
object by attempting to force specific failures to occur.

attractiveness: The capability of the software product to be attractive to the user.

audit: An independent evaluation of software products or processes to ascertain compliance
to standards, guidelines, specifications, and/or procedures based on objective criteria,
including documents that specify:
(1) the form or content of the products to be produced
(2) the process by which the products shall be produced
(3) how compliance to standards or guidelines shall be measured.

audit trail: A path by which the original input to a process (e.g. data) can be traced back
through the process, taking the process output as a starting point. This facilitates defect
analysis and allows a process audit to be carried out.

automated testware: Testware used in automated testing, such as tool scripts.

availability: The degree to which a component or system is operational and accessible when
required for use. Often expressed as a percentage.

back-to-back testing: Testing in which two or more variants of a component or system are
executed with the same inputs, the outputs compared, and analyzed in cases of
discrepancies.

baseline: A specification or software product that has been formally reviewed or agreed upon,
that thereafter serves as the basis for further development, and that can be changed only
through a formal change control process.

basic block: A sequence of one or more consecutive executable statements containing no
branches. Note: A node in a control flow graph represents a basic block.

basis test set: A set of test cases derived from the internal structure of a component or
specification to ensure that 100% of a specified coverage criterion will be achieved.

behavior: The response of a component or system to a set of input values and preconditions.

benchmark test: (1) A standard against which measurements or comparisons can be made.
(2) A test that is used to compare components or systems to each other or to a standard
as in (1).

bespoke software: Software developed specifically for a set of users or customers. The
opposite is off-the-shelf software.

best practice: A superior method or innovative practice that contributes to the improved
performance of an organization under given context, usually recognized as ‘best’ by other
peer organizations.

beta testing: Operational testing by potential and/or existing users/customers at an external
site not otherwise involved with the developers, to determine whether or not a component
or system satisfies the user/customer needs and fits within the business processes. Beta
testing is often employed as a form of external acceptance testing for off-the-shelf software
in order to acquire feedback from the market.

big-bang testing: A type of integration testing in which software elements, hardware
elements, or both are combined all at once into a component or an overall system, rather
than in stages.

black-box testing: Testing, either functional or non-functional, without reference to the
internal structure of the component or system.

black-box test design technique: Procedure to derive and/or select test cases based on an
analysis of the specification, either functional or non-functional, of a component or system
without reference to its internal structure.

blocked test case: A test case that cannot be executed because the preconditions for its
execution are not fulfilled.

bottom-up testing: An incremental approach to integration testing where the lowest level
components are tested first, and then used to facilitate the testing of higher level
components. This process is repeated until the component at the top of the hierarchy is
tested.

boundary value: An input value or output value which is on the edge of an equivalence
partition or at the smallest incremental distance on either side of an edge, for example the
minimum or maximum value of a range.

boundary value analysis: A black box test design technique in which test cases are designed
based on boundary values.
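
For example, if a field accepts integers from 1 to 100, boundary value analysis selects tests at 0, 1, 100 and 101: the minimum and maximum of the range plus the values at the smallest incremental distance outside each edge.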

boundary value coverage: The percentage of boundary values that have been exercised by a
test suite.

branch: A basic block that can be selected for execution based on a program construct in
which one of two or more alternative program paths is available, e.g. case, jump, go to,
if-then-else.

branch coverage: The percentage of branches that have been exercised by a test suite. 100%
branch coverage implies both 100% decision coverage and 100% statement coverage.

branch testing: A white box test design technique in which test cases are designed to execute
branches.

buffer: A device or storage area used to store data temporarily for differences in rates of data
flow, time or occurrence of events, or amounts of data that can be handled by the devices
or processes involved in the transfer or use of the data.

buffer overflow: A memory access defect due to the attempt by a process to store data
beyond the boundaries of a fixed length buffer, resulting in overwriting of adjacent
memory areas or the raising of an overflow exception.

business process-based testing: An approach to testing in which test cases are designed
based on descriptions and/or knowledge of business processes.

Capability Maturity Model (CMM): A five level staged framework that describes the key
elements of an effective software process. The Capability Maturity Model covers best practices
for planning, engineering and managing software development and maintenance.
[CMM] See also Capability Maturity Model Integration (CMMI).

Capability Maturity Model Integration (CMMI): A framework that describes the key
elements of an effective product development and maintenance process. The Capability
Maturity Model Integration covers best-practices for planning, engineering and managing
product development and maintenance. CMMI is the designated successor of the CMM.
[CMMI]

capture/playback tool: A type of test execution tool where inputs are recorded during
manual testing in order to generate automated test scripts that can be executed later (i.e.
replayed). These tools are often used to support automated regression testing.

CASE: Acronym for Computer Aided Software Engineering.

CAST: Acronym for Computer Aided Software Testing.

cause-effect graph: A graphical representation of inputs and/or stimuli (causes) with their
associated outputs (effects), which can be used to design test cases.

cause-effect graphing: A black box test design technique in which test cases are designed
from cause-effect graphs.

certification: The process of confirming that a component, system or person complies with
its specified requirements, e.g. by passing an exam.

changeability: The capability of the software product to enable specified modifications to be
implemented.

classification tree: A tree showing equivalence partitions hierarchically ordered, which is
used to design test cases in the classification tree method. See also classification tree
method.

classification tree method: A black box test design technique in which test cases, described
by means of a classification tree, are designed to execute combinations of representatives
of input and/or output domains.

code: Computer instructions and data definitions expressed in a programming language or in
a form output by an assembler, compiler or other translator.

code coverage: An analysis method that determines which parts of the software have been
executed (covered) by the test suite and which parts have not been executed, e.g. statement
coverage, decision coverage or condition coverage.