Using Rule Mining for Automatic Test Oracle Generation

Authors Alejandra Duque-Torres
Anastasiia Shalygina
Dietmar Pfahl
Rudolf Ramler
Editors Horst Lichter
Selin Aydin
Thanwadee Sunetnanta
Toni Anwar
Title Using Rule Mining for Automatic Test Oracle Generation
Book title Proceedings of the 8th International Workshop on Quantitative Approaches to Software Quality co-located with 27th Asia-Pacific Software Engineering Conference (APSEC 2020)
Type In conference proceedings
Publisher ceur-ws.org
Month December
Year 2020
Pages 21-28
SCCH ID# 20106
Abstract

Software testing is essential for checking the quality of software, but it is also a costly and time-consuming activity. The mechanism that determines the correct output of the System Under Test (SUT) for a given input space is called the test oracle. The test oracle problem is a known bottleneck in situations where tests are generated automatically and no model of the correct behaviour of the SUT exists. To overcome this bottleneck, we developed a method that generates test oracles by comparing information extracted from object state data created during the execution of two subsequent versions of the SUT. In our initial proof of concept, we derive the relevant information in the form of rules using the Association Rule Mining (ARM) technique. We validate our method on the Stack class from a custom version of the Java Collection classes and discuss the lessons learned from our experiment. The test suite that we use in our experiment to execute the different SUT versions is automatically generated using Randoop. Other approaches to generate object state data could be used instead. Our proof of concept demonstrates that our method is applicable and that we can detect the presence of failures that are missed by regression testing alone. Automatic analysis of the set of violated association rules provides valuable information for localizing faults in the SUT by directly pointing to the faulty method. This kind of information cannot be found in the execution traces of failing tests.
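The following Python sketch illustrates the general idea described in the abstract; it is not the authors' implementation. It assumes that object states observed after each method call have been encoded as sets of boolean attributes (the attribute names and the helper functions mine_rules and check_violations are hypothetical), and it reduces rule mining to single-antecedent implications with support and confidence thresholds. Rules mined from the states of the baseline version are then checked against the states of the new version, and violated rules are reported per method.

# Minimal sketch of a rule-based oracle: mine attribute implications from the
# object states of version 1, then flag states of version 2 that violate them.
from collections import defaultdict
from itertools import permutations

def mine_rules(records, min_support=0.2, min_confidence=1.0):
    """Mine implications "a -> b" over state attributes, separately per method."""
    rules = defaultdict(set)
    by_method = defaultdict(list)
    for method, attrs in records:
        by_method[method].append(attrs)
    for method, snapshots in by_method.items():
        n = len(snapshots)
        all_attrs = {x for s in snapshots for x in s}
        for a, b in permutations(all_attrs, 2):
            with_a = [s for s in snapshots if a in s]
            if len(with_a) / n < min_support:
                continue  # antecedent too rare to yield a reliable rule
            confidence = sum(1 for s in with_a if b in s) / len(with_a)
            if confidence >= min_confidence:
                rules[method].add((a, b))
    return rules

def check_violations(rules, records):
    """Report rules mined from the old version that the new version's states violate."""
    violations = []
    for method, attrs in records:
        for a, b in rules.get(method, ()):
            if a in attrs and b not in attrs:
                violations.append((method, f"{a} -> {b}"))
    return violations

if __name__ == "__main__":
    # Hypothetical state data from two versions of a Stack-like class.
    v1 = [("push", frozenset({"notEmpty", "sizeIncreased"})),
          ("push", frozenset({"notEmpty", "sizeIncreased"})),
          ("pop",  frozenset({"sizeDecreased"}))]
    v2 = [("push", frozenset({"notEmpty"})),  # "sizeIncreased" missing: candidate fault
          ("pop",  frozenset({"sizeDecreased"}))]
    for method, rule in check_violations(mine_rules(v1), v2):
        print(f"violated rule in {method}: {rule}")

Because violations are grouped by the method whose state broke the rule, the report points directly at a candidate faulty method, which mirrors the fault-localization benefit claimed in the abstract.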