Tianqi Chen
2025
ARXSA: A General Negative Feedback Control Theory in Vision-Language Models
Zeyu Zhang | Tianqi Chen | Yuki Todo
Findings of the Association for Computational Linguistics: EMNLP 2025
The Transformer model has been increasingly applied across various domains, driven by the self-attention mechanism, which offers robust data-processing capabilities and has contributed substantially to the model's success. In the self-attention mechanism, three core matrices derived from the same data batch are computed together to determine correlations between input elements. Drawing inspiration from the efficiency and stability that negative feedback structures confer on predictive control systems, the concept of vertical training is introduced to integrate data across multiple batches. Accordingly, this paper proposes an autoregressive with exogenous inputs (ARX) formulation of the self-attention mechanism, transforming the Encoder block into a negative feedback predictive control system. A network architecture based on this method is also proposed, enabling the ARX-style self-attention to transmit data from batches at previous time points. The effectiveness of the proposed approach is validated through comparative experimental evaluations.
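As a rough illustration of the mechanism the abstract refers to, the sketch below implements standard scaled dot-product self-attention on a single batch and a hypothetical way an output from a previous batch could enter as an exogenous feedback term. The combination function `arx_style_attention` and the coefficients `a` and `b` are illustrative assumptions for exposition only, not the architecture defined in the paper.

```python
import torch
import torch.nn.functional as F

def self_attention(x, w_q, w_k, w_v):
    """Standard scaled dot-product self-attention on one batch.

    x: (batch, seq_len, d_model); w_q, w_k, w_v: (d_model, d_k) projections
    producing the three core matrices mentioned in the abstract.
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.transpose(-2, -1) / (k.shape[-1] ** 0.5)
    return F.softmax(scores, dim=-1) @ v

def arx_style_attention(x_t, y_prev, w_q, w_k, w_v, a=0.9, b=0.1):
    """Hypothetical ARX-flavored variant: the current output also depends on
    the attention output of the previous batch (y_prev), treated as an
    exogenous input, loosely mirroring y_t = a * f(x_t) + b * y_{t-1}.
    The weights a and b are placeholders, not values from the paper.
    """
    y_t = self_attention(x_t, w_q, w_k, w_v)
    return a * y_t + b * y_prev

# Minimal usage: two successive batches sharing the same projections.
d_model, d_k = 8, 8
w_q, w_k, w_v = (torch.randn(d_model, d_k) for _ in range(3))
x_prev, x_t = torch.randn(2, 4, d_model), torch.randn(2, 4, d_model)
y_prev = self_attention(x_prev, w_q, w_k, w_v)
print(arx_style_attention(x_t, y_prev, w_q, w_k, w_v).shape)  # (2, 4, 8)
```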
2012
Semi-Supervised Technical Term Tagging With Minimal User Feedback
Behrang QasemiZadeh | Paul Buitelaar | Tianqi Chen | Georgeta Bordea
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)
In this paper, we address the problem of extracting technical terms automatically from an unannotated corpus. We introduce a technology term tagger that is based on Liblinear Support Vector Machines and employs linguistic features, including Part of Speech tags and Dependency Structures, together with user feedback, to identify technology-related terms. Our experiments show the applicability of our approach, which achieves acceptable precision and recall.
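For orientation, the following is a minimal sketch of the kind of classifier the abstract describes: a LIBLINEAR-backed SVM (via scikit-learn's LinearSVC) trained on per-token linguistic features to tag term spans. The toy tokens, the simplified feature set, and the B/I/O label scheme are assumptions for illustration; the paper's actual features include Dependency Structures and user feedback, which are not reproduced here.

```python
from sklearn.svm import LinearSVC
from sklearn.feature_extraction import DictVectorizer
from sklearn.pipeline import make_pipeline

# Toy tokens with hypothetical POS-based features; B/I/O labels mark term spans.
train_tokens = [
    {"word": "support", "pos": "NN", "prev_pos": "JJ"},
    {"word": "vector", "pos": "NN", "prev_pos": "NN"},
    {"word": "machine", "pos": "NN", "prev_pos": "NN"},
    {"word": "performs", "pos": "VBZ", "prev_pos": "NN"},
    {"word": "well", "pos": "RB", "prev_pos": "VBZ"},
]
train_labels = ["B-TERM", "I-TERM", "I-TERM", "O", "O"]

# LinearSVC wraps LIBLINEAR; DictVectorizer one-hot encodes the token features.
tagger = make_pipeline(DictVectorizer(), LinearSVC())
tagger.fit(train_tokens, train_labels)

test_tokens = [{"word": "vector", "pos": "NN", "prev_pos": "NN"}]
print(tagger.predict(test_tokens))
```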