To Train or Not to Train: Predicting the Performance of Massively Multilingual Models
Shantanu Patankar, Omkar Gokhale, Onkar Litake, Aditya Mandke, Dipali Kadam
- Anthology ID: 2022.sumeval-1.2
- Volume: Proceedings of the First Workshop on Scaling Up Multilingual Evaluation
- Month: November
- Year: 2022
- Address: Online
- Editors: Kabir Ahuja, Antonios Anastasopoulos, Barun Patra, Graham Neubig, Monojit Choudhury, Sandipan Dandapat, Sunayana Sitaram, Vishrav Chaudhary
- Venue: SUMEval
- Publisher: Association for Computational Linguistics
- Pages: 8–12
- URL: https://aclanthology.org/2022.sumeval-1.2
- Cite (ACL): Shantanu Patankar, Omkar Gokhale, Onkar Litake, Aditya Mandke, and Dipali Kadam. 2022. To Train or Not to Train: Predicting the Performance of Massively Multilingual Models. In Proceedings of the First Workshop on Scaling Up Multilingual Evaluation, pages 8–12, Online. Association for Computational Linguistics.
- Cite (Informal): To Train or Not to Train: Predicting the Performance of Massively Multilingual Models (Patankar et al., SUMEval 2022)
- PDF: https://aclanthology.org/2022.sumeval-1.2.pdf
Export citation
@inproceedings{patankar-etal-2022-train,
    title = "To Train or Not to Train: Predicting the Performance of Massively Multilingual Models",
    author = "Patankar, Shantanu  and
      Gokhale, Omkar  and
      Litake, Onkar  and
      Mandke, Aditya  and
      Kadam, Dipali",
    editor = "Ahuja, Kabir  and
      Anastasopoulos, Antonios  and
      Patra, Barun  and
      Neubig, Graham  and
      Choudhury, Monojit  and
      Dandapat, Sandipan  and
      Sitaram, Sunayana  and
      Chaudhary, Vishrav",
    booktitle = "Proceedings of the First Workshop on Scaling Up Multilingual Evaluation",
    month = nov,
    year = "2022",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.sumeval-1.2",
    pages = "8--12",
}
<?xml version="1.0" encoding="UTF-8"?>
<modsCollection xmlns="http://www.loc.gov/mods/v3">
<mods ID="patankar-etal-2022-train">
    <titleInfo>
        <title>To Train or Not to Train: Predicting the Performance of Massively Multilingual Models</title>
    </titleInfo>
    <name type="personal">
        <namePart type="given">Shantanu</namePart>
        <namePart type="family">Patankar</namePart>
        <role>
            <roleTerm authority="marcrelator" type="text">author</roleTerm>
        </role>
    </name>
    <name type="personal">
        <namePart type="given">Omkar</namePart>
        <namePart type="family">Gokhale</namePart>
        <role>
            <roleTerm authority="marcrelator" type="text">author</roleTerm>
        </role>
    </name>
    <name type="personal">
        <namePart type="given">Onkar</namePart>
        <namePart type="family">Litake</namePart>
        <role>
            <roleTerm authority="marcrelator" type="text">author</roleTerm>
        </role>
    </name>
    <name type="personal">
        <namePart type="given">Aditya</namePart>
        <namePart type="family">Mandke</namePart>
        <role>
            <roleTerm authority="marcrelator" type="text">author</roleTerm>
        </role>
    </name>
    <name type="personal">
        <namePart type="given">Dipali</namePart>
        <namePart type="family">Kadam</namePart>
        <role>
            <roleTerm authority="marcrelator" type="text">author</roleTerm>
        </role>
    </name>
    <originInfo>
        <dateIssued>2022-11</dateIssued>
    </originInfo>
    <typeOfResource>text</typeOfResource>
    <relatedItem type="host">
        <titleInfo>
            <title>Proceedings of the First Workshop on Scaling Up Multilingual Evaluation</title>
        </titleInfo>
        <name type="personal">
            <namePart type="given">Kabir</namePart>
            <namePart type="family">Ahuja</namePart>
            <role>
                <roleTerm authority="marcrelator" type="text">editor</roleTerm>
            </role>
        </name>
        <name type="personal">
            <namePart type="given">Antonios</namePart>
            <namePart type="family">Anastasopoulos</namePart>
            <role>
                <roleTerm authority="marcrelator" type="text">editor</roleTerm>
            </role>
        </name>
        <name type="personal">
            <namePart type="given">Barun</namePart>
            <namePart type="family">Patra</namePart>
            <role>
                <roleTerm authority="marcrelator" type="text">editor</roleTerm>
            </role>
        </name>
        <name type="personal">
            <namePart type="given">Graham</namePart>
            <namePart type="family">Neubig</namePart>
            <role>
                <roleTerm authority="marcrelator" type="text">editor</roleTerm>
            </role>
        </name>
        <name type="personal">
            <namePart type="given">Monojit</namePart>
            <namePart type="family">Choudhury</namePart>
            <role>
                <roleTerm authority="marcrelator" type="text">editor</roleTerm>
            </role>
        </name>
        <name type="personal">
            <namePart type="given">Sandipan</namePart>
            <namePart type="family">Dandapat</namePart>
            <role>
                <roleTerm authority="marcrelator" type="text">editor</roleTerm>
            </role>
        </name>
        <name type="personal">
            <namePart type="given">Sunayana</namePart>
            <namePart type="family">Sitaram</namePart>
            <role>
                <roleTerm authority="marcrelator" type="text">editor</roleTerm>
            </role>
        </name>
        <name type="personal">
            <namePart type="given">Vishrav</namePart>
            <namePart type="family">Chaudhary</namePart>
            <role>
                <roleTerm authority="marcrelator" type="text">editor</roleTerm>
            </role>
        </name>
        <originInfo>
            <publisher>Association for Computational Linguistics</publisher>
            <place>
                <placeTerm type="text">Online</placeTerm>
            </place>
        </originInfo>
        <genre authority="marcgt">conference publication</genre>
    </relatedItem>
    <identifier type="citekey">patankar-etal-2022-train</identifier>
    <location>
        <url>https://aclanthology.org/2022.sumeval-1.2</url>
    </location>
    <part>
        <date>2022-11</date>
        <extent unit="page">
            <start>8</start>
            <end>12</end>
        </extent>
    </part>
</mods>
</modsCollection>
%0 Conference Proceedings
%T To Train or Not to Train: Predicting the Performance of Massively Multilingual Models
%A Patankar, Shantanu
%A Gokhale, Omkar
%A Litake, Onkar
%A Mandke, Aditya
%A Kadam, Dipali
%Y Ahuja, Kabir
%Y Anastasopoulos, Antonios
%Y Patra, Barun
%Y Neubig, Graham
%Y Choudhury, Monojit
%Y Dandapat, Sandipan
%Y Sitaram, Sunayana
%Y Chaudhary, Vishrav
%S Proceedings of the First Workshop on Scaling Up Multilingual Evaluation
%D 2022
%8 November
%I Association for Computational Linguistics
%C Online
%F patankar-etal-2022-train
%U https://aclanthology.org/2022.sumeval-1.2
%P 8-12
Markdown (Informal)
[To Train or Not to Train: Predicting the Performance of Massively Multilingual Models](https://aclanthology.org/2022.sumeval-1.2) (Patankar et al., SUMEval 2022)