Second Software Testing Benchmark Workshop
TestBench'10
Co-located with ICST 2010
Third International Conference on Software Testing, Verification and Validation
April 10, 2010
Paris, France
Motivation and Objectives
A significant and fundamental software engineering problem faced by software testing researchers
is the difficulty of comparing different testing tools and techniques in such a way that sound
conclusions can be drawn about their relative merits. Benchmarks have been used successfully in
other domains such as databases, computer architecture, and text retrieval, and research in speech
recognition and natural language processing has been driven by international competitions in which
software produced by different labs is exposed to common data sets. It is argued that these
benchmarks have provided a significant impetus for research and have defined the key challenges.
There is evidence that the time is right for the development of a software testing benchmark, and
this was confirmed by the results of the previous TestBench workshop, which identified a number of
“proto-benchmarks”; however, considerable diversity remains in evaluation strategies.
The aim of this workshop is to move the development of testing benchmarks into the next phase by
combining it with the incentive of a competition, where different tools are publicly compared
using the same target systems. This will provide a greater incentive for tool and technique
developers to use a standard benchmark, thereby helping to identify the significant common
problems faced by such tools, and serving to focus and accelerate research in this important area.
The workshop will initially explore the feasibility of such a competition by considering the
range of possible testing tools that can be compared and scoping it appropriately. It will also
aim to identify any issues associated with such a competition. Secondly, it will consider the
range of systems that may form the basis of the benchmark. These may be drawn from systems already
used by researchers or contributed by the providers of such systems. Again, any issues with such systems
(e.g. necessary additional documentation, technical details etc.) will be identified.
The final part will be the establishment of a working group aimed at running the competition
proper, either at the following year's ICST, or over the year with the aim of announcing the
results at ICST (this depends on the outcomes of the earlier phases of the workshop).
It is envisaged that once established this competition will become an annual event at ICST.
The workshop has three objectives:
- To investigate the feasibility of a testing tools “competition”
- To identify suitable benchmark systems for use in a competition and any issues associated
with the use of such systems
- To work towards running a tools competition at ICST’11
The workshop will consist of a mixture of formal presentations and working sessions.
Submissions
Contributions are welcomed from both academia and industry describing experiences and resources to
support the formation of a benchmark for software testing.
To participate in the workshop it is necessary to provide either:
- a test data generation tool, along with an indication of the suite of programs that have
been used to evaluate it
and/or
- a set of programs that may form part of a candidate benchmark suite.
Submissions should take the form of short position papers between 2 and
4 pages long which describe either the test data generation tool (along
with the evaluation carried out to date, programs used, results, tool(s)
used to measure coverage etc.) or the candidate benchmark suite itself
(along with details of any tool evaluations that have used the suite).
Any program/benchmark suites must be publicly available and the paper
must provide clear download instructions. Ideally, tools should also be
downloadable, but this is not a requirement at present.
The proceedings of the workshop
will be published in the IEEE Digital Library.
Authors of accepted submissions will then be given a series of tasks to
work on prior to the workshop. This will typically involve running tools
on previously untried target systems and reporting on the experience, and any
problems encountered, at the workshop.
The aim of the workshop will be to produce a joint paper which reports on
these experiences; outlines the key issues, common problems, and significant
barriers encountered when using the various candidate problems; identifies
the possible benchmark set; and maps out the way forward to establishing and
running a competition.
Formatting
Submissions should be short position papers of 2 to 4 pages, in PDF format, conforming to the
IEEE Proceedings (8.5 by 11-inch) style.
Please use the
Word templates or LaTeX files for preparation.
Submissions should be sent by email to Marc Roper.
Position paper due: February 1, 2010
Notification of acceptance: February 22, 2010
Organisation
- Marc Roper, University of Strathclyde, Glasgow, UK