Abstract:
Software testability is the degree to which code reveals its faults, particularly during automated testing; the program under test must lend itself to being exercised effectively. The success of a test, on the other hand, depends on the coverage achieved by the test data produced by a particular test data generation algorithm. Little empirical evidence has been presented to clarify whether and how software testability affects test coverage. In this article, we propose a technique to address this question. The testability of programs is characterized using a variety of source code metrics, and the proposed framework builds machine learning models from the coverage achieved on the Software Under Test (SUT) by various automatically generated test suites. Because the resulting models can predict the code coverage a particular test data generation algorithm will achieve before the algorithm is even run, the cost of additional testing is reduced. Predicted coverage thus serves as a concrete proxy for the testability of source code. The correlation between code coverage and maintainability is also relevant when assessing testability: high code coverage combined with well-maintained code facilitates the creation of comprehensive test cases and ensures thorough testing of critical paths and edge cases.
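To illustrate the core idea of predicting coverage from source code metrics, the following minimal sketch trains a regression model on code-level features and uses its output as a proxy for testability. The metric set, the random-forest choice, and the synthetic data are assumptions for illustration only, not the study's actual setup.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Stand-in for static source code metrics of each unit in the SUT,
# e.g. lines of code, cyclomatic complexity, nesting depth, coupling.
X = rng.uniform(size=(500, 4))

# Stand-in for coverage achieved by an automated test data generation tool;
# in practice this would be measured by running the generated test suites.
y = np.clip(0.9 - 0.5 * X[:, 1] - 0.2 * X[:, 2] + rng.normal(0, 0.05, 500), 0, 1)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Predicted coverage acts as a proxy for testability: code units expected to
# reach low coverage can be flagged before the generation algorithm is run.
print("R^2 on held-out code units:", r2_score(y_test, model.predict(X_test)))

In practice the feature matrix would be computed by a static analysis tool over the SUT, and the target coverage would come from actually executing the generated test suites once during model training.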
Description:
Supervised by
Ms. Lutfun Nahar Lota,
Assistant Professor
Department of Computer Science and Engineering (CSE),
Islamic University of Technology (IUT),
Board Bazar, Gazipur-1704, Bangladesh