ManyBugs and IntroClass Benchmarks for Automated Repair of C Programs

ManyBugs and IntroClass are sets of C programs that contain defects and are associated with test suites. They are intended to support reproducible and comparative studies of research techniques in automatic patch generation for defect repair. They are released under a BSD license.

Publications

We would appreciate it if work that uses either of these datasets cited the following publication:
The ManyBugs and IntroClass Benchmarks for Automated Repair of C Programs, by Claire Le Goues, Neal Holtschulte, Edward K. Smith, Yuriy Brun, Premkumar Devanbu, Stephanie Forrest, and Westley Weimer. IEEE Transactions on Software Engineering (TSE), vol. 41, no. 12, December 2015, pp. 1236-1256, DOI: 10.1109/TSE.2015.2454513.

Downloads

The overview README replicates much of this information in flat text and also includes the license terms for these datasets.

The GenProg v2.0 README describes how to build and run the repair executable, compiled from the GenProg source code available on the GenProg website; this README is the same as the one that ships with the code. Running AE and TRPAutoRepair requires passing different flags to the same executable. Every run of the GenProg executable enumerates the values of all of its command-line arguments, simplifying reproduction.
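As a concrete illustration, the sketch below drives the single repair executable under the three search strategies from Python. It is a minimal sketch under stated assumptions, not the authoritative invocation: the executable name, the per-scenario configuration file name, and the --search/--seed flag spellings are illustrative guesses, so defer to the GenProg v2.0 README and each scenario's configuration file for the actual flags.

#!/usr/bin/env python3
"""Minimal sketch: drive a single GenProg-style `repair` executable under
different search strategies.  The executable name, configuration-file name,
and the `--search`/`--seed` flags are illustrative assumptions; the GenProg
v2.0 README and the scenario's configuration file are authoritative."""
import subprocess
from pathlib import Path

REPAIR_BIN = Path("./repair")             # assumed path to the repair executable
CONFIG = Path("configuration-default")    # assumed per-scenario configuration file

# Hypothetical mapping from technique name to the extra flags it needs.
SEARCH_FLAGS = {
    "genprog":       ["--search", "ga"],
    "ae":            ["--search", "ae"],
    "trpautorepair": ["--search", "trp"],
}

def run_repair(technique: str, seed: int) -> int:
    """Invoke the repair executable for one technique and random seed."""
    cmd = [str(REPAIR_BIN), str(CONFIG), "--seed", str(seed), *SEARCH_FLAGS[technique]]
    print("running:", " ".join(cmd))
    return subprocess.run(cmd).returncode

if __name__ == "__main__":
    for technique in SEARCH_FLAGS:
        run_repair(technique, seed=0)

Because each real run enumerates its full argument values, saving that output together with the seed is normally enough to reproduce the run.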

ManyBugs

The README explains the ManyBugs scenarios and results in detail. Note that a newer, Docker-based method for running ManyBugs scenarios now exists; see https://github.com/squaresLab/BugZoo. Links to alternative harnesses appear below, though note that our results are with respect to the harnesses linked here.

ManyBugs/ contains the ManyBugs programs and defects and the baseline results of running GenProg 2.0, TRPAutoRepair, and AE on them. The results and the benchmark scenarios are stored in separate directories.

IntroClass

The README explains the IntroClass scenarios and results in detail.

IntroClass/ contains the IntroClass programs and defects, the baseline results of running GenProg 2.0 and AE on those defects, and scripts to help you use the scenarios. There is one directory per IntroClass program; each program directory includes support for running your own experiments as well as the results of our baseline experiments.
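The sketch below shows one way to iterate over an IntroClass-style layout, compiling each submission and scoring it against its black-box tests. It is a hedged illustration: the root path, the per-submission .c files, and the tests/blackbox/*.in and *.out naming are assumptions, so defer to the IntroClass README and the scripts shipped with the benchmark for the actual layout and test protocol.

#!/usr/bin/env python3
"""Hedged sketch of iterating over an IntroClass-style layout.  The directory
and file names used here (program directories, one `.c` file per submission,
`tests/blackbox/<case>.in` / `<case>.out` pairs) are assumptions made for
illustration; the IntroClass README and its scripts define the real layout."""
import subprocess
from pathlib import Path

INTROCLASS = Path("IntroClass")  # assumed root of the unpacked benchmark

def run_blackbox_tests(binary: Path, test_dir: Path) -> tuple[int, int]:
    """Run every assumed <case>.in / <case>.out pair and count passes."""
    passed = total = 0
    for case_in in sorted(test_dir.glob("*.in")):
        expected = case_in.with_suffix(".out").read_text()
        result = subprocess.run([str(binary)], stdin=case_in.open(),
                                capture_output=True, text=True, timeout=10)
        total += 1
        passed += int(result.stdout == expected)
    return passed, total

for program_dir in sorted(p for p in INTROCLASS.iterdir() if p.is_dir()):
    tests = program_dir / "tests" / "blackbox"        # assumed test location
    for source in sorted(program_dir.glob("*/*.c")):  # assumed one .c per submission
        binary = source.with_suffix("")
        if subprocess.run(["gcc", "-o", str(binary), str(source)]).returncode != 0:
            continue  # skip submissions that do not compile
        passed, total = run_blackbox_tests(binary, tests)
        print(f"{source}: {passed}/{total} black-box tests pass")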

IntroClass.tar.gz is a downloadable single-file archive of the IntroClass benchmarks.

Virtual Machines

VirtualMachines/ and its subdirectories contain the ISOs for the two versions of Fedora used in the virtual machines. The baseline ManyBugs results were obtained on a machine running Fedora 13. We also provide a Fedora 20 image suitable for running many of the scenarios.

Errata and updates

We update the scenarios and associated datasets from time to time. We hope to link to test suites and defects contributed by others and to add further categorization information to the datasets as it is brought to our attention. We welcome feedback and errata from users who find problems or have questions about these benchmarks and the associated information, as we hope these artifacts will continue to evolve.

With respect to the ManyBugs scenarios, we rely heavily on the default input/output test-case behavior defined by the original developers of the underlying projects, because we lack the context and domain knowledge required to capture original developer intent perfectly and programmatically. Although we performed several sanity checks, as described in the article, there is evidence that some of these test suites (like many test suites in practice) are inadequate for making strong claims about correctness; libtiff's test suite is particularly weak. We are grateful to Qi et al. for studying some of these issues in depth: their replication package for Kali provides alternative harnesses for executing some of these scenarios, with additional correctness checks based on their intuitions and insights regarding desired program behavior.
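To make the adequacy concern concrete, the schematic example below (not drawn from any ManyBugs scenario) shows how a sparse test suite can accept a patch that simply deletes functionality: both versions pass the weak suite, and only a held-out input distinguishes them, which is the gap that additional correctness checks in alternative harnesses aim to close.

"""Schematic illustration (not taken from ManyBugs) of test-suite inadequacy:
a functionality-deleting "repair" passes a sparse suite just as well as the
correct program, and only a held-out input exposes the difference."""

def correct(xs):
    """Intended behavior: sum of the positive elements."""
    return sum(x for x in xs if x > 0)

def deleting_patch(xs):
    """Degenerate patch that deletes the summation logic entirely."""
    return 0

# A weak test suite: both inputs happen to have the answer 0.
weak_suite = [([], 0), ([-1, -2], 0)]

for impl in (correct, deleting_patch):
    assert all(impl(xs) == want for xs, want in weak_suite)
    print(f"{impl.__name__}: passes the weak suite")

# A held-out check exposes the difference between the two versions.
print("held-out input [1, 2]:", correct([1, 2]), "vs", deleting_patch([1, 2]))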

In 2017, we published a note on the creation and use of the ManyBugs benchmark, particularly addressing some of the issues outlined by Qi et al. and including results obtained using other harnesses for executing the scenarios: Clarifications on the Construction and Use of the ManyBugs Benchmark, by Claire Le Goues, Yuriy Brun, Stephanie Forrest, and Westley Weimer. IEEE Transactions on Software Engineering (TSE), vol. 43, no. 11, November 2017, pp. 1089-1090, DOI: 10.1109/TSE.2017.2755651.
