Ulrike Pado

Also published as: Ulrike Padó


Summarization Evaluation meets Short-Answer Grading
Margot Mieskes | Ulrike Padó
Proceedings of the 8th Workshop on NLP for Computer Assisted Language Learning


Work Smart - Reducing Effort in Short-Answer Grading
Margot Mieskes | Ulrike Padó
Proceedings of the 7th Workshop on NLP for Computer Assisted Language Learning


Question Difficulty – How to Estimate Without Norming, How to Use for Automated Grading
Ulrike Padó
Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications

Question difficulty estimates guide test creation, but are too costly for small-scale testing. We empirically verify that Bloom’s Taxonomy, a standard tool for difficulty estimation during question creation, reliably predicts question difficulty observed after testing in a short-answer corpus. We also find that difficulty is mirrored in the amount of variation in student answers, which can be computed before grading. We show that question difficulty and its approximations are useful for automated grading, allowing us to identify the optimal feature set for grading each question even in an unseen-question setting.


Get Semantic With Me! The Usefulness of Different Feature Types for Short-Answer Grading
Ulrike Padó
Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers

Automated short-answer grading is key to help close the automation loop for large-scale, computerised testing in education. A wide range of features on different levels of linguistic processing has been proposed so far. We investigate the relative importance of the different types of features across a range of standard corpora (both from a language skill and content assessment context, in English and in German). We find that features on the lexical, text similarity and dependency level often suffice to approximate full-model performance. Features derived from semantic processing particularly benefit the linguistically more varied answers in content assessment corpora.


Short Answer Grading: When Sorting Helps and When it Doesn’t
Ulrike Pado | Cornelia Kiefer
Proceedings of the Fourth Workshop on NLP for Computer-Assisted Language Learning


A Flexible, Corpus-Driven Model of Regular and Inverse Selectional Preferences
Katrin Erk | Sebastian Padó | Ulrike Padó
Computational Linguistics, Volume 36, Issue 4 - December 2010


Automated Assessment of Spoken Modern Standard Arabic
Jian Cheng | Jared Bernstein | Ulrike Pado | Masanori Suzuki
Proceedings of the Fourth Workshop on Innovative Use of NLP for Building Educational Applications


Flexible, Corpus-Based Modelling of Human Plausibility Judgements
Sebastian Padó | Ulrike Padó | Katrin Erk
Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)


Modelling Semantic Role Plausibility in Human Sentence Processing
Ulrike Padó | Matthew Crocker | Frank Keller
11th Conference of the European Chapter of the Association for Computational Linguistics