Read between the lines - Functionality Extraction From READMEs

Prince Kumar, Srikanth Tamilselvam, Dinesh Garg


Abstract
While text summarization is a well-known NLP task, in this paper, we introduce a novel and useful variant of it called functionality extraction from Git README files. Though this task is text2text generation at an abstract level, it has its own peculiarities and challenges that make existing text2text generation systems not very useful. The motivation behind this task stems from a recent surge in research and development activities around the use of large language models for code-related tasks, such as code refactoring and code summarization. We also release a human-annotated dataset called FuncRead and develop a battery of models for the task. Our exhaustive experiments show that small fine-tuned models beat any baseline models that can be designed using popular black-box or white-box large language models (LLMs) such as ChatGPT and Bard. Our best fine-tuned 7B CodeLlama model exhibits 70% and 20% gains in F1 score over ChatGPT and Bard, respectively.
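To make the task concrete, here is a minimal sketch of one plausible input/output format: a README goes in, and a bullet list of the functionalities it describes comes out. The prompt template, the zero-shot use of an off-the-shelf CodeLlama checkpoint, and the toy README are all illustrative assumptions; the paper's models are fine-tuned on FuncRead, and its exact prompt format may differ.

```python
# Hypothetical sketch of functionality extraction from a README.
# The prompt wording and zero-shot setup are assumptions, not the
# authors' fine-tuned pipeline from the paper.
from transformers import pipeline

readme = """\
# csvkit
A suite of command-line tools for working with CSV files.
- Convert Excel to CSV
- Query CSV with SQL
- Print CSV column statistics
"""

prompt = (
    "Extract the functionalities offered by the tool described in this "
    "README as a bullet list.\n\n"
    "README:\n" + readme + "\nFunctionalities:\n"
)

# Off-the-shelf checkpoint used here only for illustration; the paper's
# best model is a 7B CodeLlama fine-tuned on the FuncRead dataset.
generator = pipeline("text-generation", model="codellama/CodeLlama-7b-hf")
result = generator(prompt, max_new_tokens=128, do_sample=False)
print(result[0]["generated_text"])
```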
Anthology ID: 2024.findings-naacl.251
Volume: Findings of the Association for Computational Linguistics: NAACL 2024
Month: June
Year: 2024
Address: Mexico City, Mexico
Editors: Kevin Duh, Helena Gomez, Steven Bethard
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 3977–3990
URL: https://aclanthology.org/2024.findings-naacl.251
Cite (ACL):
Prince Kumar, Srikanth Tamilselvam, and Dinesh Garg. 2024. Read between the lines - Functionality Extraction From READMEs. In Findings of the Association for Computational Linguistics: NAACL 2024, pages 3977–3990, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
Read between the lines - Functionality Extraction From READMEs (Kumar et al., Findings 2024)
PDF: https://aclanthology.org/2024.findings-naacl.251.pdf
Copyright: 2024.findings-naacl.251.copyright.pdf