Sharan Narang


2023

Understanding HTML with Large Language Models
Izzeddin Gur | Ofir Nachum | Yingjie Miao | Mustafa Safdari | Austin Huang | Aakanksha Chowdhery | Sharan Narang | Noah Fiedel | Aleksandra Faust
Findings of the Association for Computational Linguistics: EMNLP 2023

Large language models (LLMs) have shown exceptional performance on a variety of natural language tasks. Yet, their capabilities for HTML understanding – i.e., parsing the raw HTML of a webpage, with applications to automation of web-based tasks, crawling, and browser-assisted retrieval – have not been fully explored. We contribute HTML understanding models (fine-tuned LLMs) and an in-depth analysis of their capabilities under three tasks: (i) Semantic Classification of HTML elements, (ii) Description Generation for HTML inputs, and (iii) Autonomous Web Navigation of HTML pages. While previous work has developed dedicated architectures and training procedures for HTML understanding, we show that LLMs pretrained on standard natural language corpora transfer remarkably well to HTML understanding tasks. For instance, when fine-tuned on data from the MiniWoB benchmark, LLMs successfully complete 50% more tasks using 192x less data compared to the previous best supervised model. We create and open-source a large-scale HTML dataset distilled and auto-labeled from CommonCrawl.

Scaling Laws vs Model Architectures: How does Inductive Bias Influence Scaling?
Yi Tay | Mostafa Dehghani | Samira Abnar | Hyung Chung | William Fedus | Jinfeng Rao | Sharan Narang | Vinh Tran | Dani Yogatama | Donald Metzler
Findings of the Association for Computational Linguistics: EMNLP 2023

There has been a lot of interest in the scaling properties of Transformer models. However, little has been done to investigate how different inductive biases and model architectures affect scaling behaviour. Do model architectures scale differently? If so, how does inductive bias affect scaling behaviour? How does this influence upstream (pretraining) and downstream (transfer) performance? This paper conducts a systematic study of the scaling behaviour of ten diverse model architectures such as Transformers, Switch Transformers, Universal Transformers, Dynamic Convolutions, Performers, and recently proposed MLP-Mixers. Via extensive experiments, we show that (1) architecture is indeed an important consideration when performing scaling and (2) the best-performing model can fluctuate at different scales. We believe that the findings outlined in this work have significant implications for how model architectures are currently evaluated in the community.

Character-Aware Models Improve Visual Text Rendering
Rosanne Liu | Dan Garrette | Chitwan Saharia | William Chan | Adam Roberts | Sharan Narang | Irina Blok | Rj Mical | Mohammad Norouzi | Noah Constant
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Current image generation models struggle to reliably produce well-formed visual text. In this paper, we investigate a key contributing factor: popular text-to-image models lack character-level input features, making it much harder to predict a word’s visual makeup as a series of glyphs. To quantify this effect, we conduct a series of experiments comparing character-aware vs. character-blind text encoders. In the text-only domain, we find that character-aware models provide large gains on a novel spelling task (WikiSpell). Applying our learnings to the visual domain, we train a suite of image generation models, and show that character-aware variants outperform their character-blind counterparts across a range of novel text rendering tasks (our DrawText benchmark). Our models set a much higher state-of-the-art on visual spelling, with 30+ point accuracy gains over competitors on rare words, despite training on far fewer examples.

2022

ByT5: Towards a Token-Free Future with Pre-trained Byte-to-Byte Models
Linting Xue | Aditya Barua | Noah Constant | Rami Al-Rfou | Sharan Narang | Mihir Kale | Adam Roberts | Colin Raffel
Transactions of the Association for Computational Linguistics, Volume 10

Most widely used pre-trained language models operate on sequences of tokens corresponding to word or subword units. By comparison, token-free models that operate directly on raw text (bytes or characters) have many benefits: They can process text in any language out of the box, they are more robust to noise, and they minimize technical debt by removing complex and error-prone text preprocessing pipelines. Because byte or character sequences are longer than token sequences, past work on token-free models has often introduced new model architectures designed to amortize the cost of operating directly on raw text. In this paper, we show that a standard Transformer architecture can be used with minimal modifications to process byte sequences. We characterize the trade-offs in terms of parameter count, training FLOPs, and inference speed, and show that byte-level models are competitive with their token-level counterparts. We also demonstrate that byte-level models are significantly more robust to noise and perform better on tasks that are sensitive to spelling and pronunciation. As part of our contribution, we release a new set of pre-trained byte-level Transformer models based on the T5 architecture, as well as all code and data used in our experiments.

2021

Do Transformer Modifications Transfer Across Implementations and Applications?
Sharan Narang | Hyung Won Chung | Yi Tay | Liam Fedus | Thibault Fevry | Michael Matena | Karishma Malkan | Noah Fiedel | Noam Shazeer | Zhenzhong Lan | Yanqi Zhou | Wei Li | Nan Ding | Jake Marcus | Adam Roberts | Colin Raffel
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

The research community has proposed copious modifications to the Transformer architecture since it was introduced over three years ago, relatively few of which have seen widespread adoption. In this paper, we comprehensively evaluate many of these modifications in a shared experimental setting that covers most of the common uses of the Transformer in natural language processing. Surprisingly, we find that most modifications do not meaningfully improve performance. Furthermore, most of the Transformer variants we found beneficial were either developed in the same codebase that we used or are relatively minor changes. We conjecture that performance improvements may strongly depend on implementation details and correspondingly make some recommendations for improving the generality of experimental results.

2020

On Task-Level Dialogue Composition of Generative Transformer Model
Prasanna Parthasarathi | Sharan Narang | Arvind Neelakantan
Proceedings of the First Workshop on Insights from Negative Results in NLP

Task-oriented dialogue systems help users accomplish tasks such as booking a movie ticket and ordering food via conversation. Generative models parameterized by a deep neural network are widely used for next-turn response generation in such systems. It is natural for users of the system to want to accomplish multiple tasks within the same conversation, but the ability of generative models to compose multiple tasks is not well studied. In this work, we begin by studying the effect of training on human-human task-oriented dialogues on the ability of Transformer generative models to compose multiple tasks. To that end, we propose and explore two solutions: (1) creating synthetic multiple-task dialogue data for training from human-human single-task dialogues and (2) forcing the encoder representation to be invariant to single- and multiple-task dialogues using an auxiliary loss. The results of our experiments highlight the difficulty that even the sophisticated variant of the Transformer model has in learning to compose multiple tasks from single-task dialogues.