Novel or Drivel? Variants of Invariants for Teaching NLP in the LLM Era

Marius Micluța-Câmpeanu


Abstract
The ubiquitous adoption of large language models by students prompts teachers to redesign courses and evaluation methods, especially in computer science and natural language processing (NLP), where the impact is more tangible. Our contribution is two-fold. First, we attempt to define invariants for the role of education itself, given the over-abundance of information that appears to be more accessible than ever before. Then, we present our approach and materials used for an introductory course in NLP for undergraduate students, drawing inspiration from software engineering best practices. Our vision regarding large language models is to rely on local models to cultivate a sense of ownership and sovereignty in an age where every bit of independence and privacy gets eroded.
Anthology ID:
2026.teachingnlp-1.17
Volume:
Proceedings of the Seventh Workshop on Teaching Natural Language Processing (TeachNLP 2026)
Month:
March
Year:
2026
Address:
Rabat, Morocco
Editors:
Matthias Aßenmacher, Laura Biester, Claudia Borg, György Kovács, Margot Mieskes, Sofia Serrano
Venues:
TeachingNLP | WS
Publisher:
Association for Computational Linguistics
Pages:
129–133
URL:
https://aclanthology.org/2026.teachingnlp-1.17/
Cite (ACL):
Marius Micluța-Câmpeanu. 2026. Novel or Drivel? Variants of Invariants for Teaching NLP in the LLM Era. In Proceedings of the Seventh Workshop on Teaching Natural Language Processing (TeachNLP 2026), pages 129–133, Rabat, Morocco. Association for Computational Linguistics.
Cite (Informal):
Novel or Drivel? Variants of Invariants for Teaching NLP in the LLM Era (Micluța-Câmpeanu, TeachingNLP 2026)
PDF:
https://aclanthology.org/2026.teachingnlp-1.17.pdf