Geoffrey T. LaFlair
2021
Jump-Starting Item Parameters for Adaptive Language Tests
Arya D. McCarthy | Kevin P. Yancey | Geoffrey T. LaFlair | Jesse Egbert | Manqian Liao | Burr Settles
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
A challenge in designing high-stakes language assessments is calibrating the test item difficulties, either a priori or from limited pilot test data. While prior work has addressed ‘cold start’ estimation of item difficulties without piloting, we devise a multi-task generalized linear model with BERT features to jump-start these estimates, rapidly improving their quality with as few as 500 test-takers and a small sample of item exposures (≈6 each) from a large item bank (≈4,000 items). Our joint model provides a principled way to compare test-taker proficiency, item difficulty, and language proficiency frameworks like the Common European Framework of Reference (CEFR). It also enables difficulty estimates for new items without piloting them first, which in turn limits item exposure and thus enhances test item security. Finally, using operational data from the Duolingo English Test, a high-stakes English proficiency test, we find that the difficulty estimates derived using this method correlate strongly with lexico-grammatical features associated with reading complexity.
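To make the abstract's modeling idea concrete, here is a minimal sketch, not the paper's code: a Rasch-style generalized linear model in which each item's difficulty is a linear function of fixed item features (random vectors below standing in for BERT embeddings), fit jointly with test-taker abilities from sparse responses (≈6 exposures per item, mirroring the abstract). All names, sizes, and the plain gradient-ascent fit are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: in the paper, x_i would be BERT features of item i.
n_items, n_takers, n_feats = 200, 50, 16
X = rng.normal(size=(n_items, n_feats))
w_true = rng.normal(size=n_feats)
b_true = X @ w_true                        # latent item difficulties
theta_true = rng.normal(size=n_takers)     # latent test-taker abilities

# Sparse pilot data: each item is answered by only ~6 test-takers.
items, takers = [], []
for i in range(n_items):
    for j in rng.choice(n_takers, size=6, replace=False):
        items.append(i)
        takers.append(j)
items, takers = np.array(items), np.array(takers)
p_correct = 1 / (1 + np.exp(-(theta_true[takers] - b_true[items])))
y = rng.binomial(1, p_correct)

# Joint maximum-likelihood fit: abilities theta and feature weights w are
# learned together by gradient ascent on the Bernoulli log-likelihood of
# logit = theta_j - x_i . w.
theta = np.zeros(n_takers)
w = np.zeros(n_feats)
counts = np.bincount(takers, minlength=n_takers)
lr = 0.5
for _ in range(2000):
    p = 1 / (1 + np.exp(-(theta[takers] - (X @ w)[items])))
    resid = y - p                          # dL/dlogit
    theta += lr * np.bincount(takers, weights=resid, minlength=n_takers) / np.maximum(counts, 1)
    w -= lr * (X[items] * resid[:, None]).sum(axis=0) / len(y)  # dlogit/dw = -x_i

# A never-piloted item's difficulty is predicted from its features alone.
print("corr(predicted, true difficulty):", np.corrcoef(X @ w, b_true)[0, 1])
```

Because difficulty is tied to item features rather than to per-item free parameters, the last line shows how the fitted weights extend to items that were never piloted, which is the "jump-start" the abstract describes.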
2020
Machine Learning–Driven Language Assessment
Burr Settles | Geoffrey T. LaFlair | Masato Hagiwara
Transactions of the Association for Computational Linguistics, Volume 8
We describe a method for rapidly creating language proficiency assessments, and provide experimental evidence that such tests can be valid, reliable, and secure. Our approach is the first to use machine learning and natural language processing to induce proficiency scales based on a given standard, and then use linguistic models to estimate item difficulty directly for computer-adaptive testing. This alleviates the need for expensive pilot testing with human subjects. We used these methods to develop an online proficiency exam called the Duolingo English Test, and demonstrate that its scores align significantly with other high-stakes English assessments. Furthermore, our approach produces test scores that are highly reliable, while generating item banks large enough to satisfy security requirements.
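The computer-adaptive testing step mentioned in the abstract can also be sketched briefly. The following is our illustration under a standard Rasch response model, not the Duolingo English Test implementation; the `bank` difficulties stand in for values a linguistic model would estimate directly, and `next_item`, `update_theta`, and all parameters are hypothetical names for this example.

```python
import math
import random

random.seed(0)

def sigmoid(z: float) -> float:
    return 1 / (1 + math.exp(-z))

def next_item(theta: float, bank: dict[int, float], used: set[int]) -> int:
    """Under a Rasch model, Fisher information p(1-p) is maximized when an
    item's difficulty matches the current ability estimate, so pick the
    unused item with the nearest estimated difficulty."""
    return min((i for i in bank if i not in used), key=lambda i: abs(bank[i] - theta))

def update_theta(theta: float, responses: list[tuple[float, int]]) -> float:
    """One Newton step on the Rasch log-likelihood in theta, clipped for
    stability when all responses so far agree."""
    grad = sum(y - sigmoid(theta - b) for b, y in responses)
    info = sum(sigmoid(theta - b) * (1 - sigmoid(theta - b)) for b, _ in responses)
    return max(-4.0, min(4.0, theta + grad / max(info, 1e-6)))

# Simulate a short adaptive session against a synthetic item bank whose
# difficulties stand in for model-estimated values.
bank = {i: random.gauss(0.0, 1.0) for i in range(1000)}
true_theta, theta = 1.2, 0.0
used, responses = set(), []
for _ in range(15):
    i = next_item(theta, bank, used)
    used.add(i)
    y = int(random.random() < sigmoid(true_theta - bank[i]))  # simulated answer
    responses.append((bank[i], y))
    theta = update_theta(theta, responses)
print(f"true ability: {true_theta:.2f}  estimate after 15 items: {theta:.2f}")
```

Selecting each item adaptively is what lets a short session pin down ability, and a large model-calibrated bank means any given item is exposed to few test-takers, consistent with the security claims above.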