Karan Bania
2026
lrnnx: A library for Linear RNNs
Karan Bania | Soham Kalburgi | Manit Tanwar | Dhruthi | Aditya Nagarsekar | Harshvardhan Mestha | Naman Chibber | Raj Deshmukh | Anish Sathyanarayanan | Aarush Rathore | Pratham Chheda
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 4: Student Research Workshop)
Linear recurrent neural networks (LRNNs) provide a structured approach to sequence modeling that bridges classical linear dynamical systems and modern deep learning, offering both expressive power and theoretical guarantees on stability and trainability. In recent years, multiple LRNN-based architectures have been proposed, each introducing distinct parameterizations, discretization schemes, and implementation constraints. However, existing implementations are fragmented across different software frameworks, often rely on framework-specific optimizations, and in some cases require custom CUDA kernels or lack publicly available code altogether. As a result, using, comparing, or extending LRNNs requires substantial implementation effort. To address this, we introduce lrnnx, a unified software library that implements several modern LRNN architectures under a common interface. The library exposes multiple levels of control, allowing users to work directly with core components or higher-level model abstractions. lrnnx aims to improve accessibility, reproducibility, and extensibility of LRNN research and applications. We make our code available under a permissive MIT license.
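To make the core idea concrete, the following is a minimal, generic sketch of the diagonal linear recurrence h_t = a ⊙ h_{t-1} + b ⊙ x_t that underlies many LRNN architectures. This is an illustration of the mathematical structure only; the function name and parameterization are hypothetical and do not reflect lrnnx's actual interface.

```python
import numpy as np

def diagonal_linear_rnn(a, b, x):
    """Run a diagonal linear recurrence h_t = a * h_{t-1} + b * x_t.

    a, b : (d,) diagonal state-transition and input vectors (hypothetical
           parameterization; concrete LRNN layers differ in how these are
           derived, e.g. via discretization of a continuous-time system).
    x    : (T, d) input sequence.
    Returns the (T, d) array of hidden states.
    """
    h = np.zeros_like(x[0])
    states = []
    for x_t in x:
        h = a * h + b * x_t  # elementwise, so each channel evolves independently
        states.append(h)
    return np.stack(states)

# Stability guarantee in miniature: |a| < 1 keeps the state bounded
# for bounded inputs, which is one of the theoretical properties
# LRNN parameterizations are designed to preserve.
a = np.array([0.9, 0.5])
b = np.array([1.0, 1.0])
x = np.ones((4, 2))
h = diagonal_linear_rnn(a, b, x)
```

Because the recurrence is linear and diagonal, it can also be computed with a parallel scan instead of the sequential loop shown here, which is what makes these models efficient to train in practice.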