Roles of Scaling and Instruction Tuning in Language Perception: Model vs. Human Attention

Changjiang Gao, Shujian Huang, Jixing Li, Jiajun Chen
Abstract
Recent large language models (LLMs) have shown strong abilities to understand natural language. Since most of them share the same basic structure, i.e., the transformer block, the likely contributors to their success during training are scaling and instruction tuning. However, how these factors affect the models' language perception is unclear. This work compares the self-attention of several existing LLMs (LLaMA, Alpaca, and Vicuna) at different sizes (7B, 13B, 30B, 65B) with eye saccades, an aspect of human reading attention, to assess the effects of scaling and instruction tuning on language perception. Results show that scaling enhances human resemblance and improves effective attention by reducing reliance on trivial patterns, whereas instruction tuning does not; however, instruction tuning significantly enhances the models' sensitivity to instructions. We also find that current LLMs are consistently closer to non-native than to native speakers in attention, suggesting sub-optimal language perception across all models. Our code and data used in the analysis are available on GitHub.
Anthology ID:
2023.findings-emnlp.868
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2023
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
13042–13055
URL:
https://aclanthology.org/2023.findings-emnlp.868
DOI:
10.18653/v1/2023.findings-emnlp.868
Cite (ACL):
Changjiang Gao, Shujian Huang, Jixing Li, and Jiajun Chen. 2023. Roles of Scaling and Instruction Tuning in Language Perception: Model vs. Human Attention. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 13042–13055, Singapore. Association for Computational Linguistics.
Cite (Informal):
Roles of Scaling and Instruction Tuning in Language Perception: Model vs. Human Attention (Gao et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-emnlp.868.pdf