Analyzing the Potency of Pretrained Transformer Models for Automated Program Repair

Tags: artificial intelligence, automated program repair, computer-aided software engineering, fine-tuning, transformer models, software debugging
Abstract:
Manually finding and fixing bugs is cumbersome work that consumes valuable resources in the software development cycle. In this work, we examine the capability of pretrained transformer models to tackle the task of automated program repair. Previous research has focused on inherently different machine learning architectures for this use case. Our contributions include a novel dataset for fine-tuning the models, a windowing technique that augments the pretrained model, and an evaluation on the commonly used Defects4J benchmark along with an ablation study. The findings demonstrate that fine-tuning on our dataset yields better model performance than fine-tuning on Bugs2Fix. Our model enhancements significantly boost overall performance, enabling the resulting models to reach parity with the current state of the art by fixing 30 bugs on Defects4J in 27 minutes. This shows that pretrained transformers are promising for automated bug fixing and should be considered by future research. However, as with existing state-of-the-art solutions, performance still needs to be improved to provide practical benefits to end users.