%0 Conference Proceedings
%T Dynatask: A Framework for Creating Dynamic AI Benchmark Tasks
%A Thrush, Tristan
%A Tirumala, Kushal
%A Gupta, Anmol
%A Bartolo, Max
%A Rodriguez, Pedro
%A Kane, Tariq
%A Gaviria Rojas, William
%A Mattson, Peter
%A Williams, Adina
%A Kiela, Douwe
%Y Basile, Valerio
%Y Kozareva, Zornitsa
%Y Stajner, Sanja
%S Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: System Demonstrations
%D 2022
%8 May
%I Association for Computational Linguistics
%C Dublin, Ireland
%F thrush-etal-2022-dynatask
%X We introduce Dynatask: an open source system for setting up custom NLP tasks that aims to greatly lower the technical knowledge and effort required for hosting and evaluating state-of-the-art NLP models, as well as for conducting model in the loop data collection with crowdworkers. Dynatask is integrated with Dynabench, a research platform for rethinking benchmarking in AI that facilitates human and model in the loop data collection and evaluation. To create a task, users only need to write a short task configuration file from which the relevant web interfaces and model hosting infrastructure are automatically generated. The system is available at https://dynabench.org/ and the full library can be found at https://github.com/facebookresearch/dynabench.
%R 10.18653/v1/2022.acl-demo.17
%U https://aclanthology.org/2022.acl-demo.17
%U https://doi.org/10.18653/v1/2022.acl-demo.17
%P 174-181