One of the biggest challenges in fixing software bugs can be finding them.
With support from the National Science Foundation, computer scientists at The University of Texas at Dallas are tackling some of the hardest-to-find errors, called variability bugs, which appear only when software is configured to work with certain hardware.
Austin Mordahl, a doctoral student in software engineering, and Dr. Shiyi Wei, assistant professor of computer science, have developed a framework for detecting hard-to-find variability bugs. They presented their research in August at a software engineering conference in Estonia.
The researchers presented their framework for detecting variability bugs at the recent ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering in Estonia. Variability bugs generally cannot be detected with standard software analysis tools, said Dr. Shiyi Wei, assistant professor of computer science in the Erik Jonsson School of Engineering and Computer Science.
“It’s difficult to test and analyze software for variability bugs because of all the different hardware involved,” Wei said. “You have to test many software configurations to find these bugs, and you cannot test all configurations.”
A software program can have millions of configurations that allow it to run across different models and brands of computers and devices. Typical software analysis tools test only one configuration at a time, said Austin Mordahl, a software engineering doctoral student and computer science research assistant working on the project.
“The analogy I use is that it’s like ordering a pizza, where the initial codebase of a program is the full range of topping options you have at the start, and the end product contains the selected elements. But standard tools are only able to analyze the finished pizza,” Mordahl said. “So if you don’t select the part of the code that contains a bug for inclusion in the final product (let’s say you skipped the anchovies), then no matter how good your standard tool is, it will never find the problem, because the bug simply doesn’t exist in your executable.”
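The pizza analogy can be sketched in code. The example below is a hypothetical illustration, not taken from the study (which targets configurable C software, where options are typically preprocessor flags); the function and option names are invented. The bug exists only in configurations where COLOR is enabled without VERBOSE, so a tool that analyzes one configuration at a time can easily miss it:

```python
# Hypothetical illustration of a variability bug: the faulty code path
# is reachable only under one specific combination of options.

def build_message(config):
    """Build a status message from the enabled configuration options."""
    msg = "status: ok"
    prefix = None
    if config.get("VERBOSE"):
        prefix = "[LOG] "
    if config.get("COLOR"):
        # Bug: assumes VERBOSE already set a prefix. With COLOR enabled
        # but VERBOSE disabled, prefix is None and this raises TypeError.
        msg = prefix + msg
    return msg

# Three of the four configurations behave correctly:
build_message({})                                # "status: ok"
build_message({"VERBOSE": True})                 # "status: ok"
build_message({"VERBOSE": True, "COLOR": True})  # "[LOG] status: ok"
# Only {"COLOR": True} crashes, so testing the other three finds nothing.
```

A single-configuration tool that never builds the COLOR-only variant never sees the crashing code path at all, which is exactly the situation Mordahl describes.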
The researchers tested 1,000 configurations of each of three programs, 3,000 configurations in total, in an effort to identify variability bugs. The project detected and confirmed 77 bugs, including 52 variability bugs.
Mordahl won the ACM Student Research Competition at the International Conference on Software Engineering in May for a paper describing this research, titled “Towards the Detection and Characterization of Variability Bugs in Configurable C Software: An Empirical Study.”
Millions of people depend on highly configurable software, but these systems lack adequate automated tools to keep them secure and reliable, Wei said. He and his fellow researchers plan to continue developing and improving their framework, and they hope their dataset can support future research on analyses of highly configurable code.
“Configurable software is one of the most common types we use,” Wei said. “This is why it is so important to improve the quality of this software.”
Researchers at UT Dallas collaborated on the project with computer scientists from the University of Texas at Austin, the University of Maryland, and the University of Central Florida.
Researchers Create Automated System to Improve Computer Bug Reporting
When software is not working properly, many frustrated users file bug reports online. Too often, however, their explanations are unclear or incomplete, leaving developers without enough information to resolve the issue, said Dr. Andrian Marcus, professor of computer science at the Jonsson School.
With funding from the National Science Foundation, Marcus is working with other computer scientists to create a more effective way for users to report issues to developers. The researchers developed a tool that gives users feedback on the quality of their reports. The research was recently presented at the ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, where it won the ACM SIGSOFT Distinguished Paper Award.
“Thousands of these reports go to developers,” Marcus said. “The first thing they need to do is reproduce the bug.”
Without enough information to reproduce the error, developers spend excessive time fixing the problem, if they can fix it at all, he said. The researchers aim to bridge the gap between users’ descriptions of what happened and the technical information developers need.
Marcus teamed up with Dr. Vincent Ng, professor of computer science and an expert in natural language processing and machine learning, and two doctoral students: Jing Lu, research assistant in computer science, and Oscar Chaparro PhD’19, now an assistant professor at the College of William & Mary. The UT Dallas researchers also collaborated with other researchers from William & Mary and the University of Sannio in Italy.
The team has designed an automated approach that can analyze the text of a bug report, assess the quality of the information, and provide feedback to users who report bugs.
“Our research automatically identifies these components and allows the machine to determine the steps to reproduce an error,” Marcus said.
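Marcus’s description can be illustrated with a deliberately simplified sketch. The team’s actual tool relies on natural language processing and machine learning; the toy heuristic below, with invented keyword patterns, only conveys the idea of flagging which standard report components (observed behavior, expected behavior, steps to reproduce) appear to be missing from a report’s text:

```python
import re

# Toy sketch of bug-report quality feedback (not the researchers' method,
# which uses NLP and machine learning). Each pattern is an invented
# stand-in for detecting one standard component of a good report.
COMPONENTS = {
    "observed behavior": re.compile(r"\b(crash|error|fail|freez|wrong)\w*", re.I),
    "expected behavior": re.compile(r"\b(should|expected|supposed to)\b", re.I),
    "steps to reproduce": re.compile(r"\b(steps?|click|open|then|reproduce)\b", re.I),
}

def report_feedback(text):
    """Return the report components that appear to be missing from text."""
    return [name for name, pattern in COMPONENTS.items()
            if not pattern.search(text)]

report_feedback("The app crashes.")
# -> ["expected behavior", "steps to reproduce"]
```

A real system would go much further, for example extracting the individual reproduction steps rather than merely checking that some are present, but the feedback loop is the same: analyze the text, then tell the reporter what is missing.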
The platform identified errors in bug reports with more than 90% accuracy. The researchers’ long-term goal is to create an interactive system that produces better reports.
“We don’t teach people how to write bug reports, so they write gibberish,” Marcus said. “If we can create a system that gets the information from a conversation, we think there will be better bug reports coming out of it.”