TY - GEN
T1 - Retrofitting automatic testing through library tests reusing
AU - Ma, Lei
AU - Zhang, Cheng
AU - Yu, Bing
AU - Zhao, Jianjun
N1 - Publisher Copyright:
© 2016 IEEE.
PY - 2016/7/5
Y1 - 2016/7/5
N2 - Test cases are useful for program comprehension. Developers often understand the dynamic behavior of systems by running their test cases. As manual testing is expensive, automatic testing has been extensively studied to reduce the cost. However, without sufficient knowledge of the software under test, it is difficult for automated testing techniques to create effective test cases, especially for software that requires complex inputs. In this paper, we propose to reuse existing test cases from the libraries of the software under test to generate better test cases. We observe that, when developers start to test the target software, the test cases of its dependent libraries are often already available. We therefore perform program analysis on these artifacts to extract relevant code fragments and build test sequences, and then seed these sequences into the random test generator GRT to generate test cases for the target software. Preliminary experiments show that the technique significantly improves the effectiveness of GRT. Our in-depth analysis reveals that several dependency metrics are good indicators of the potential benefit of applying our technique to specific programs and their libraries.
AB - Test cases are useful for program comprehension. Developers often understand the dynamic behavior of systems by running their test cases. As manual testing is expensive, automatic testing has been extensively studied to reduce the cost. However, without sufficient knowledge of the software under test, it is difficult for automated testing techniques to create effective test cases, especially for software that requires complex inputs. In this paper, we propose to reuse existing test cases from the libraries of the software under test to generate better test cases. We observe that, when developers start to test the target software, the test cases of its dependent libraries are often already available. We therefore perform program analysis on these artifacts to extract relevant code fragments and build test sequences, and then seed these sequences into the random test generator GRT to generate test cases for the target software. Preliminary experiments show that the technique significantly improves the effectiveness of GRT. Our in-depth analysis reveals that several dependency metrics are good indicators of the potential benefit of applying our technique to specific programs and their libraries.
UR - http://www.scopus.com/inward/record.url?scp=84979742700&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84979742700&partnerID=8YFLogxK
U2 - 10.1109/ICPC.2016.7503725
DO - 10.1109/ICPC.2016.7503725
M3 - Conference contribution
AN - SCOPUS:84979742700
T3 - IEEE International Conference on Program Comprehension
BT - Proceedings of the 24th IEEE International Conference on Program Comprehension, ICPC 2016 - co-located with ICSE 2016
PB - IEEE Computer Society
T2 - 24th IEEE International Conference on Program Comprehension, ICPC 2016
Y2 - 16 May 2016 through 17 May 2016
ER -