Kelly Lockhart
2025
Proceedings of the Third Workshop for Artificial Intelligence for Scientific Publications
Alberto Accomazzi | Tirthankar Ghosal | Felix Grezes | Kelly Lockhart
Overview of the Third Workshop for Artificial Intelligence for Scientific Publications
Kelly Lockhart | Alberto Accomazzi | Felix Grezes | Tirthankar Ghosal
Proceedings of the Third Workshop for Artificial Intelligence for Scientific Publications
The Workshop for Artificial Intelligence for Scientific Publications (WASP), formerly the Workshop on Information Extraction from Scientific Publications (WIESP), was started in 2022 to provide a platform for researchers to discuss work on information extraction, mining, generation, and knowledge discovery from scientific publications using Natural Language Processing and Machine Learning techniques. The third WASP workshop was held as a hybrid event at the 14th International Joint Conference on Natural Language Processing and 4th Asia-Pacific Chapter of the Association for Computational Linguistics in Mumbai, India on December 23rd, 2025. The workshop saw great interest, with 29 submissions, of which 16 were accepted. The program consisted of contributed research talks, two keynote talks, a panel discussion, and one shared task, the Telescope Reference and Astronomy Categorization Shared task (TRACS).
Overview of TRACS: the Telescope Reference and Astronomy Categorization Dataset & Shared Task
Felix Grezes | Jennifer Lynn Bartlett | Kelly Lockhart | Alberto Accomazzi | Ethan Seefried | Anjali Pandiri | Tirthankar Ghosal
Proceedings of the Third Workshop for Artificial Intelligence for Scientific Publications
To evaluate the scientific influence of observational facilities, astronomers examine the body of publications that have utilized data from those facilities. This depends on curated bibliographies that annotate and connect data products to the corresponding literature, enabling bibliometric analyses to quantify data impact. Compiling such bibliographies is a demanding process that requires expert curators to scan the literature for relevant names, acronyms, and identifiers, and then to determine whether and how specific observations contributed to each publication. These bibliographies have value beyond impact assessment: for research scientists, explicit links between data and literature form an essential pathway for discovering and accessing data. Accordingly, by building on the work of librarians and archivists, telescope bibliographies can be repurposed to directly support scientific inquiry. In this context, we present the Telescope Reference and Astronomy Categorization Shared task (TRACS) and its accompanying dataset, which comprises more than 89,000 publicly available English-language texts drawn from space telescope bibliographies. These texts are labeled according to a new, compact taxonomy developed in consultation with experienced bibliographers.
2024
INDUS: Effective and Efficient Language Models for Scientific Applications
Bishwaranjan Bhattacharjee | Aashka Trivedi | Masayasu Muraoka | Muthukumaran Ramasubramanian | Takuma Udagawa | Iksha Gurung | Nishan Pantha | Rong Zhang | Bharath Dandala | Rahul Ramachandran | Manil Maskey | Kaylin Bugbee | Michael M. Little | Elizabeth Fancher | Irina Gerasimov | Armin Mehrabian | Lauren Sanders | Sylvain V. Costes | Sergi Blanco-Cuaresma | Kelly Lockhart | Thomas Allen | Felix Grezes | Megan Ansdell | Alberto Accomazzi | Yousef El-Kurdi | Davis Wertheimer | Birgit Pfitzmann | Cesar Berrospi Ramis | Michele Dolfi | Rafael Teixeira De Lima | Panagiotis Vagenas | S. Karthik Mukkavilli | Peter W. J. Staar | Sanaz Vahidinia | Ryan McGranaghan | Tsengdar J. Lee
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: Industry Track
Large language models (LLMs) trained on general-domain corpora have shown remarkable results on natural language processing (NLP) tasks. However, previous research demonstrated that LLMs trained on domain-focused corpora perform better on specialized tasks. Inspired by this insight, we developed INDUS, a comprehensive suite of LLMs tailored for the closely related domains of Earth science, biology, physics, heliophysics, planetary sciences, and astrophysics, and trained using curated scientific corpora drawn from diverse data sources. The suite of models includes: (1) an encoder model trained using domain-specific vocabulary and corpora to address NLP tasks, (2) a contrastive-learning-based text embedding model trained using a diverse set of datasets to address information retrieval tasks, and (3) smaller versions of these models created using knowledge distillation for applications that have latency or resource constraints. We also created three new scientific benchmark datasets, Climate-Change NER (entity recognition), NASA-QA (extractive QA), and NASA-IR (IR), to accelerate research in these multi-disciplinary fields. We show that our models outperform both general-purpose (RoBERTa) and domain-specific (SciBERT) encoders on these new tasks as well as existing tasks in the domains of interest. Furthermore, we demonstrate the use of these models in two industrial settings: as a retrieval model for large-scale vector search applications, and in automatic content tagging systems.