Carina Negreanu


2024

Solving Data-centric Tasks using Large Language Models
Shraddha Barke | Christian Poelitz | Carina Negreanu | Benjamin Zorn | José Cambronero | Andrew Gordon | Vu Le | Elnaz Nouri | Nadia Polikarpova | Advait Sarkar | Brian Slininger | Neil Toronto | Jack Williams
Findings of the Association for Computational Linguistics: NAACL 2024

Large language models are rapidly replacing help forums like StackOverflow, and are especially helpful to non-professional programmers and end users. These users are often interested in data-centric tasks, like spreadsheet manipulation and data wrangling, which are hard to solve if the intent is only communicated using a natural-language description, without including data. But how do we decide how much data and which data to include in the prompt? This paper makes two contributions towards answering this question. First, we create a dataset of real-world NL-to-code tasks manipulating tabular data, mined from StackOverflow posts. Second, we introduce a novel cluster-then-select prompting technique, which adds the most representative rows from the input data to the LLM prompt. Our experiments show that LLM performance is indeed sensitive to the amount of data passed in the prompt, and that for tasks with a lot of syntactic variation in the input table, our cluster-then-select technique outperforms a random selection baseline.
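
The abstract does not spell out the featurization or clustering choices, so the following is a minimal sketch of the cluster-then-select idea under assumed details: serialize each table row as text, embed rows with character-level TF-IDF, run k-means, and include the row nearest each centroid in the prompt. The function name and feature settings are illustrative, not the paper's exact method.

```python
# Hedged sketch of cluster-then-select row sampling for an LLM prompt.
# TF-IDF + k-means are assumed stand-ins; the paper's featurization may differ.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import pairwise_distances_argmin_min

def select_representative_rows(rows: list[str], k: int = 5) -> list[str]:
    """Cluster serialized rows; return the row closest to each centroid."""
    k = min(k, len(rows))
    # Character n-grams capture syntactic variation (e.g. mixed date formats).
    features = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)).fit_transform(rows)
    kmeans = KMeans(n_clusters=k, n_init=10, random_state=0).fit(features)
    # For each cluster centroid, find the index of its nearest actual row.
    nearest, _ = pairwise_distances_argmin_min(kmeans.cluster_centers_, features)
    return [rows[i] for i in nearest]

rows = ["1,Alice,2021-03-01", "2,Bob,03/04/2021", "3,Carol,2021-05-09"]
sample = "\n".join(select_representative_rows(rows, k=2))
prompt = f"Task: normalize the date column.\nSample rows:\n{sample}"
```

Picking the row nearest each centroid (rather than the centroid itself) keeps the prompt grounded in real data while still covering the table's syntactic clusters.
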

2023

CodeFusion: A Pre-trained Diffusion Model for Code Generation
Mukul Singh | José Cambronero | Sumit Gulwani | Vu Le | Carina Negreanu | Gust Verbruggen
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing

Imagine a developer who can only change their last line of code—how often would they have to start writing a function from scratch before it is correct? Auto-regressive models for code generation from natural language have a similar limitation: they do not easily allow reconsidering earlier tokens generated. We introduce CodeFusion, a pre-trained diffusion code generation model that addresses this limitation by iteratively denoising a complete program conditioned on the encoded natural language. We evaluate CodeFusion on the task of natural language to code generation for Bash, Python, and Microsoft Excel conditional formatting (CF) rules. Experiments show that CodeFusion (75M parameters) performs on par with state-of-the-art auto-regressive systems (350M-175B parameters) in top-1 accuracy and outperforms them in top-3 and top-5 accuracy due to its better balance in diversity versus quality.
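
As a rough illustration of the decoding loop the abstract describes (iteratively denoising a complete program conditioned on the encoded natural language), here is a toy sketch. All modules below are placeholder linear layers, not CodeFusion's actual architecture; only the control flow is the point.

```python
# Toy sketch of diffusion-style code generation: start from noise and
# repeatedly refine embeddings for ALL token positions at once.
# Module choices and the timestep encoding are illustrative assumptions.
import torch
import torch.nn as nn

dim, seq_len, vocab, steps = 64, 16, 100, 10
encoder = nn.Linear(dim, dim)           # stand-in for the NL encoder
denoiser = nn.Linear(2 * dim + 1, dim)  # stand-in for the denoiser
decoder = nn.Linear(dim, vocab)         # projects embeddings to code tokens

cond = encoder(torch.randn(1, 1, dim)).expand(1, seq_len, dim)  # encoded NL
x = torch.randn(1, seq_len, dim)        # begin from pure noise
for t in reversed(range(steps)):        # iterative denoising schedule
    t_feat = torch.full((1, seq_len, 1), t / steps)
    # Unlike auto-regressive decoding, every position stays revisable here,
    # so earlier "tokens" can be reconsidered at each step.
    x = denoiser(torch.cat([x, cond, t_feat], dim=-1))
code_tokens = decoder(x).argmax(dim=-1)  # a complete candidate program
```
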

InstructExcel: A Benchmark for Natural Language Instruction in Excel
Justin Payan | Swaroop Mishra | Mukul Singh | Carina Negreanu | Christian Poelitz | Chitta Baral | Subhro Roy | Rasika Chakravarthy | Benjamin Van Durme | Elnaz Nouri
Findings of the Association for Computational Linguistics: EMNLP 2023

With the evolution of Large Language Models (LLMs) we can solve increasingly complex NLP tasks across various domains, including spreadsheets. This work investigates whether LLMs can generate code (Excel OfficeScripts, a TypeScript API for executing many tasks in Excel) that solves Excel-specific tasks provided via natural language user instructions. To do so we introduce a new large-scale benchmark, InstructExcel, created by leveraging the ‘Automate’ feature in Excel to automatically generate OfficeScripts from users’ actions. Our benchmark includes over 10k samples covering 170+ Excel operations across 2,000 publicly available Excel spreadsheets. Experiments across various zero-shot and few-shot settings show that InstructExcel is a hard benchmark for state-of-the-art models like GPT-4. We observe that (1) using GPT-4 over GPT-3.5, (2) providing more in-context examples, and (3) dynamic prompting can help improve performance on this benchmark.
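
To make the task format concrete, here is a minimal sketch of few-shot prompt construction with dynamic example selection for NL-to-OfficeScript generation. The lexical-overlap retrieval heuristic and the sample OfficeScript are illustrative assumptions, not the benchmark's exact setup.

```python
# Hedged sketch: pick the demonstrations most similar to the query
# ("dynamic prompting"), then assemble a few-shot NL -> OfficeScript prompt.
def score(a: str, b: str) -> float:
    """Crude Jaccard word overlap used to rank candidate demonstrations."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(1, len(wa | wb))

def build_prompt(query: str, pool: list[tuple[str, str]], k: int = 3) -> str:
    # Dynamic prompting: choose the k pool examples most similar to the query.
    shots = sorted(pool, key=lambda ex: score(query, ex[0]), reverse=True)[:k]
    demos = "\n\n".join(f"Instruction: {nl}\nOfficeScript:\n{code}"
                        for nl, code in shots)
    return f"{demos}\n\nInstruction: {query}\nOfficeScript:\n"

# Hypothetical pool entry; the OfficeScript uses the public ExcelScript API.
pool = [("Bold the header row",
         'function main(workbook: ExcelScript.Workbook) {\n'
         '  workbook.getActiveWorksheet().getRange("1:1")'
         '.getFormat().getFont().setBold(true);\n'
         '}')]
print(build_prompt("Make the first row bold", pool, k=1))
```
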