Offline Reinforcement Learning from Human Feedback in Real-World Sequence-to-Sequence Tasks

Julia Kreutzer, Stefan Riezler, Carolin Lawrence


Abstract
Large volumes of interaction logs can be collected from NLP systems deployed in the real world. How can this wealth of information be leveraged? Using such interaction logs in an offline reinforcement learning (RL) setting is a promising approach. However, due to the nature of NLP tasks and the constraints of production systems, a series of challenges arises. We present a concise overview of these challenges and discuss possible solutions.
Anthology ID:
2021.spnlp-1.4
Volume:
Proceedings of the 5th Workshop on Structured Prediction for NLP (SPNLP 2021)
Month:
August
Year:
2021
Address:
Online
Editors:
Zornitsa Kozareva, Sujith Ravi, Andreas Vlachos, Priyanka Agrawal, André Martins
Venue:
spnlp
Publisher:
Association for Computational Linguistics
Pages:
37–43
URL:
https://aclanthology.org/2021.spnlp-1.4
DOI:
10.18653/v1/2021.spnlp-1.4
Cite (ACL):
Julia Kreutzer, Stefan Riezler, and Carolin Lawrence. 2021. Offline Reinforcement Learning from Human Feedback in Real-World Sequence-to-Sequence Tasks. In Proceedings of the 5th Workshop on Structured Prediction for NLP (SPNLP 2021), pages 37–43, Online. Association for Computational Linguistics.
Cite (Informal):
Offline Reinforcement Learning from Human Feedback in Real-World Sequence-to-Sequence Tasks (Kreutzer et al., spnlp 2021)
PDF:
https://aclanthology.org/2021.spnlp-1.4.pdf