%0 Conference Proceedings
%T LEGOEval: An Open-Source Toolkit for Dialogue System Evaluation via Crowdsourcing
%A Li, Yu
%A Arnold, Josh
%A Yan, Feifan
%A Shi, Weiyan
%A Yu, Zhou
%Y Ji, Heng
%Y Park, Jong C.
%Y Xia, Rui
%S Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: System Demonstrations
%D 2021
%8 August
%I Association for Computational Linguistics
%C Online
%F li-etal-2021-legoeval
%X We present LEGOEval, an open-source toolkit that enables researchers to easily evaluate dialogue systems in a few lines of code using the online crowdsourcing platform Amazon Mechanical Turk. Compared to existing toolkits, LEGOEval features a flexible task design by providing a Python API that maps to commonly used React.js interface components. Researchers can easily personalize their evaluation procedures with our built-in pages, as if playing with LEGO blocks. Thus, LEGOEval provides a fast, consistent method for reproducing human evaluation results. Beyond its flexible task design, LEGOEval also offers an easy API for reviewing collected data.
%R 10.18653/v1/2021.acl-demo.38
%U https://aclanthology.org/2021.acl-demo.38
%U https://doi.org/10.18653/v1/2021.acl-demo.38
%P 317-324