Exploring The Effect of Code Coverage And Maintainability for Identifying Software Testability


dc.contributor.author Abrar, Md. Fahim
dc.contributor.author Alam, Muntasir Bin
dc.date.accessioned 2024-09-05T08:01:29Z
dc.date.available 2024-09-05T08:01:29Z
dc.date.issued 2023-05-30
dc.identifier.citation [1] N. Anwar and S. Kar, “Review paper on various software testing techniques strategies,” Global Journal of Computer Science and Technology, pp. 43–49, 05 2019. [2] G. Fraser and A. Arcuri, “A large-scale evaluation of automated unit test generation using EvoSuite,” ACM Transactions on Software Engineering and Methodology, vol. 24, pp. 1–42, 12 2014. [3] T. Heričko and B. Šumak, “Exploring maintainability index variants for software maintainability measurement in object-oriented systems,” Applied Sciences, vol. 13, 02 2023. [4] N. Kasisopha, S. Rongviriyapanish, and P. Meananeatra, “Method evaluation for software testability on object oriented code,” 09 2020, pp. 308–313. [5] V. Terragni, P. Salza, and M. Pezzè, “Measuring software testability modulo test quality,” 07 2020, pp. 241–251. [6] O.-J. Oluwatosin, A. Balogun, S. Basri, A. Akintola, and A. Bajeh, “Object oriented measures as testability indicators: An empirical study,” Journal of Engineering Science and Technology, vol. 15, pp. 1092–1108, 04 2020. [7] M. Zakeri-Nasrabadi and S. Parsa, “Learning to predict software testability,” 03 2021, pp. 1–5. [8] R. Sharma and A. Saha, “A systematic review of software testability measurement techniques,” 09 2018, pp. 299–303. en_US
dc.identifier.uri http://hdl.handle.net/123456789/2158
dc.description Supervised by Ms. Lutfun Nahar Lota, Assistant Professor, Department of Computer Science and Engineering (CSE), Islamic University of Technology (IUT), Board Bazar, Gazipur-1704, Bangladesh en_US
dc.description.abstract Software testability is the ability of code to reveal its flaws, especially during automated testing. The success of a test, in turn, depends on the coverage achieved by the test data that a particular test data generation algorithm produces. Whether and how software testability affects test coverage, however, has received little empirical attention. In this article, we propose a technique to clarify this issue. The testability of programs is characterized using a variety of source code metrics, and our proposed framework builds machine learning models from the coverage of the Software Under Test (SUT) achieved by various automatically generated test suites. Because the resulting models can anticipate the code coverage offered by a particular test data generation algorithm before the algorithm is even run, the cost of additional testing is reduced. Predicted coverage thus serves as a concrete proxy for measuring the testability of source code. The correlation between code coverage and maintainability is crucial in assessing software testability: high code coverage combined with well-maintained code facilitates the creation of comprehensive test cases and ensures thorough testing of critical paths and edge cases. en_US
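The abstract's core idea, fitting a model that maps source code metrics to the coverage an automated test generator achieves, can be sketched minimally. This is an illustrative example, not the thesis's actual pipeline: the metric values, coverage figures, and the choice of a single-feature least-squares fit are all assumptions made for demonstration.

```python
# Illustrative sketch (hypothetical data): predict code coverage from a
# source-code metric, using predicted coverage as a testability proxy.

def fit_linear(xs, ys):
    """Least-squares fit y = a*x + b for one predictive metric."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    a = num / den
    return a, my - a * mx

# Invented training data: maintainability index of a class vs. the branch
# coverage an automated test generator achieved on it.
maintainability = [55.0, 62.0, 70.0, 78.0, 85.0]
coverage = [0.42, 0.51, 0.63, 0.71, 0.82]

a, b = fit_linear(maintainability, coverage)

def predicted_coverage(mi):
    """Estimated coverage for a class the generator has not been run on."""
    return a * mi + b

print(round(predicted_coverage(75.0), 2))  # → 0.68
```

In the full framework described in the abstract, many metrics and a richer learner would replace this single-feature fit, but the workflow is the same: train on classes where generated-suite coverage is known, then estimate coverage (hence testability) for new code without running the generator.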
dc.language.iso en en_US
dc.publisher Department of Computer Science and Engineering (CSE), Islamic University of Technology (IUT), Board Bazar, Gazipur-1704, Bangladesh en_US
dc.title Exploring The Effect of Code Coverage And Maintainability for Identifying Software Testability en_US
dc.type Thesis en_US

