Erratum: Measuring and Improving Consistency in Pretrained Language Models

Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard Hovy, Hinrich Schütze, Yoav Goldberg


Abstract
During production of this paper, an error was introduced to the formula at the bottom of the right column of page 1020. In the last two terms of the formula, the n and m subscripts were swapped. The correct formula is:

L_c = \sum_{n=1}^{k} \sum_{m=n+1}^{k} D_{KL}(Q_n^{r_i} \,\|\, Q_m^{r_i}) + D_{KL}(Q_m^{r_i} \,\|\, Q_n^{r_i})

The paper has been updated.
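As a minimal sketch of what the corrected loss computes, the snippet below sums the symmetrized pairwise KL divergence over k discrete distributions. It assumes each Q_n^{r_i} is given as a plain list of probabilities; the function names and representation are illustrative, not the paper's implementation.

```python
import math

def kl(p, q):
    # D_KL(p || q) for discrete distributions given as probability lists.
    # Terms with p_i == 0 contribute 0 by convention.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def consistency_loss(dists):
    # L_c: sum over all unordered pairs (n, m), n < m, of the
    # symmetrized KL divergence D_KL(Q_n || Q_m) + D_KL(Q_m || Q_n).
    k = len(dists)
    total = 0.0
    for n in range(k):
        for m in range(n + 1, k):
            total += kl(dists[n], dists[m]) + kl(dists[m], dists[n])
    return total
```

Because the loss is a sum over unordered pairs with both KL directions included, it is zero exactly when all k distributions agree, which is the consistency objective the erratum's formula expresses.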
Anthology ID:
2021.tacl-1.83
Volume:
Transactions of the Association for Computational Linguistics, Volume 9
Month:
Year:
2021
Address:
Cambridge, MA
Editors:
Brian Roark, Ani Nenkova
Venue:
TACL
Publisher:
MIT Press
Pages:
1407–1407
URL:
https://aclanthology.org/2021.tacl-1.83
DOI:
10.1162/tacl_x_00455
Cite (ACL):
Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard Hovy, Hinrich Schütze, and Yoav Goldberg. 2021. Erratum: Measuring and Improving Consistency in Pretrained Language Models. Transactions of the Association for Computational Linguistics, 9:1407–1407.
Cite (Informal):
Erratum: Measuring and Improving Consistency in Pretrained Language Models (Elazar et al., TACL 2021)
PDF:
https://aclanthology.org/2021.tacl-1.83.pdf