@inproceedings{zeinalipour-etal-2025-farsense,
title = "{F}ar{S}ense: A Comprehensive Commonsense Benchmark and Evaluation Framework for the {F}arsi Language",
author = "Zeinalipour, Kamyar and
Jamshidi, Neda and
Hejazi, Seyedehbahareh and
Maggini, Marco and
Bianchini, Monica and
Paoletti, Simone and
Gori, Marco",
editor = "Inui, Kentaro and
Sakti, Sakriani and
Wang, Haofen and
Wong, Derek F. and
Bhattacharyya, Pushpak and
Banerjee, Biplab and
Ekbal, Asif and
Chakraborty, Tanmoy and
Singh, Dhirendra Pratap",
booktitle = "Proceedings of the 14th International Joint Conference on Natural Language Processing and the 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics",
month = dec,
year = "2025",
address = "Mumbai, India",
publisher = "The Asian Federation of Natural Language Processing and The Association for Computational Linguistics",
url = "https://aclanthology.org/2025.ijcnlp-long.187/",
pages = "3529--3599",
ISBN = "979-8-89176-298-5",
abstract = "Although Farsi is widely spoken, no comprehensive benchmark exists for assessing commonsense reasoning in language models. We therefore present \textbf{FarSense}, a 6{-}task benchmark for Farsi covering True/False judgment, multiple-choice questions, Explanation, Cause{-}Effect inference, Counterfactual reasoning, and Knowledge Completion. Starting from Farsi{-}Wikipedia, we filtered noise and retained {\textasciitilde}4,210 passages, rewrote them into realistic daily scenarios, and derived the above tasks from each scenario. Scenario and task generation quality was first judged via native{-}speaker annotations on outputs from five major LLMs{---}GPT{-}4o, Gemini-2.5-Flash, Mistral-Large, Qwen{-}Plus, and DeepSeek{-}Chat. Gemini-2.5-Flash demonstrated the highest performance, leading to its use in generating a large-scale dataset, subsequently finalized through meticulous two-step human validation. Using \textbf{FarSense}, we measured the commonsense ability of the same five flagship LLMs and also fine{-}tuned six compact models (1B{--}24B parameters) before re{-}evaluating them. To ensure broad applicability, task wording was designed to minimize dialectal, cultural, or religious bias. Experiments show that targeted fine{-}tuning yields substantial gains, confirming \textbf{FarSense} as a reliable, openly licensed resource for advancing reproducible commonsense understanding research in Farsi NLP. We publicly release all code and data at https://github.com/KamyarZeinalipour/FarSense."
}
<?xml version="1.0" encoding="UTF-8"?>
<modsCollection xmlns="http://www.loc.gov/mods/v3">
<mods ID="zeinalipour-etal-2025-farsense">
<titleInfo>
<title>FarSense: A Comprehensive Commonsense Benchmark and Evaluation Framework for the Farsi Language</title>
</titleInfo>
<name type="personal">
<namePart type="given">Kamyar</namePart>
<namePart type="family">Zeinalipour</namePart>
<role>
<roleTerm authority="marcrelator" type="text">author</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Neda</namePart>
<namePart type="family">Jamshidi</namePart>
<role>
<roleTerm authority="marcrelator" type="text">author</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Seyedehbahareh</namePart>
<namePart type="family">Hejazi</namePart>
<role>
<roleTerm authority="marcrelator" type="text">author</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Marco</namePart>
<namePart type="family">Maggini</namePart>
<role>
<roleTerm authority="marcrelator" type="text">author</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Monica</namePart>
<namePart type="family">Bianchini</namePart>
<role>
<roleTerm authority="marcrelator" type="text">author</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Simone</namePart>
<namePart type="family">Paoletti</namePart>
<role>
<roleTerm authority="marcrelator" type="text">author</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Marco</namePart>
<namePart type="family">Gori</namePart>
<role>
<roleTerm authority="marcrelator" type="text">author</roleTerm>
</role>
</name>
<originInfo>
<dateIssued>2025-12</dateIssued>
</originInfo>
<typeOfResource>text</typeOfResource>
<relatedItem type="host">
<titleInfo>
<title>Proceedings of the 14th International Joint Conference on Natural Language Processing and the 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics</title>
</titleInfo>
<name type="personal">
<namePart type="given">Kentaro</namePart>
<namePart type="family">Inui</namePart>
<role>
<roleTerm authority="marcrelator" type="text">editor</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Sakriani</namePart>
<namePart type="family">Sakti</namePart>
<role>
<roleTerm authority="marcrelator" type="text">editor</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Haofen</namePart>
<namePart type="family">Wang</namePart>
<role>
<roleTerm authority="marcrelator" type="text">editor</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Derek</namePart>
<namePart type="given">F</namePart>
<namePart type="family">Wong</namePart>
<role>
<roleTerm authority="marcrelator" type="text">editor</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Pushpak</namePart>
<namePart type="family">Bhattacharyya</namePart>
<role>
<roleTerm authority="marcrelator" type="text">editor</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Biplab</namePart>
<namePart type="family">Banerjee</namePart>
<role>
<roleTerm authority="marcrelator" type="text">editor</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Asif</namePart>
<namePart type="family">Ekbal</namePart>
<role>
<roleTerm authority="marcrelator" type="text">editor</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Tanmoy</namePart>
<namePart type="family">Chakraborty</namePart>
<role>
<roleTerm authority="marcrelator" type="text">editor</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Dhirendra</namePart>
<namePart type="given">Pratap</namePart>
<namePart type="family">Singh</namePart>
<role>
<roleTerm authority="marcrelator" type="text">editor</roleTerm>
</role>
</name>
<originInfo>
<publisher>The Asian Federation of Natural Language Processing and The Association for Computational Linguistics</publisher>
<place>
<placeTerm type="text">Mumbai, India</placeTerm>
</place>
</originInfo>
<genre authority="marcgt">conference publication</genre>
<identifier type="isbn">979-8-89176-298-5</identifier>
</relatedItem>
<abstract>Although Farsi is widely spoken, no comprehensive benchmark exists for assessing commonsense reasoning in language models. We therefore present FarSense, a 6-task benchmark for Farsi covering True/False judgment, multiple-choice questions, Explanation, Cause-Effect inference, Counterfactual reasoning, and Knowledge Completion. Starting from Farsi-Wikipedia, we filtered noise and retained ~4,210 passages, rewrote them into realistic daily scenarios, and derived the above tasks from each scenario. Scenario and task generation quality was first judged via native-speaker annotations on outputs from five major LLMs—GPT-4o, Gemini-2.5-Flash, Mistral-Large, Qwen-Plus, and DeepSeek-Chat. Gemini-2.5-Flash demonstrated the highest performance, leading to its use in generating a large-scale dataset, subsequently finalized through meticulous two-step human validation. Using FarSense, we measured the commonsense ability of the same five flagship LLMs and also fine-tuned six compact models (1B–24B parameters) before re-evaluating them. To ensure broad applicability, task wording was designed to minimize dialectal, cultural, or religious bias. Experiments show that targeted fine-tuning yields substantial gains, confirming FarSense as a reliable, openly licensed resource for advancing reproducible commonsense understanding research in Farsi NLP. We publicly release all code and data at https://github.com/KamyarZeinalipour/FarSense.</abstract>
<identifier type="citekey">zeinalipour-etal-2025-farsense</identifier>
<location>
<url>https://aclanthology.org/2025.ijcnlp-long.187/</url>
</location>
<part>
<date>2025-12</date>
<extent unit="page">
<start>3529</start>
<end>3599</end>
</extent>
</part>
</mods>
</modsCollection>
%0 Conference Proceedings
%T FarSense: A Comprehensive Commonsense Benchmark and Evaluation Framework for the Farsi Language
%A Zeinalipour, Kamyar
%A Jamshidi, Neda
%A Hejazi, Seyedehbahareh
%A Maggini, Marco
%A Bianchini, Monica
%A Paoletti, Simone
%A Gori, Marco
%Y Inui, Kentaro
%Y Sakti, Sakriani
%Y Wang, Haofen
%Y Wong, Derek F.
%Y Bhattacharyya, Pushpak
%Y Banerjee, Biplab
%Y Ekbal, Asif
%Y Chakraborty, Tanmoy
%Y Singh, Dhirendra Pratap
%S Proceedings of the 14th International Joint Conference on Natural Language Processing and the 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics
%D 2025
%8 December
%I The Asian Federation of Natural Language Processing and The Association for Computational Linguistics
%C Mumbai, India
%@ 979-8-89176-298-5
%F zeinalipour-etal-2025-farsense
%X Although Farsi is widely spoken, no comprehensive benchmark exists for assessing commonsense reasoning in language models. We therefore present FarSense, a 6-task benchmark for Farsi covering True/False judgment, multiple-choice questions, Explanation, Cause-Effect inference, Counterfactual reasoning, and Knowledge Completion. Starting from Farsi-Wikipedia, we filtered noise and retained ~4,210 passages, rewrote them into realistic daily scenarios, and derived the above tasks from each scenario. Scenario and task generation quality was first judged via native-speaker annotations on outputs from five major LLMs—GPT-4o, Gemini-2.5-Flash, Mistral-Large, Qwen-Plus, and DeepSeek-Chat. Gemini-2.5-Flash demonstrated the highest performance, leading to its use in generating a large-scale dataset, subsequently finalized through meticulous two-step human validation. Using FarSense, we measured the commonsense ability of the same five flagship LLMs and also fine-tuned six compact models (1B–24B parameters) before re-evaluating them. To ensure broad applicability, task wording was designed to minimize dialectal, cultural, or religious bias. Experiments show that targeted fine-tuning yields substantial gains, confirming FarSense as a reliable, openly licensed resource for advancing reproducible commonsense understanding research in Farsi NLP. We publicly release all code and data at https://github.com/KamyarZeinalipour/FarSense.
%U https://aclanthology.org/2025.ijcnlp-long.187/
%P 3529-3599
Markdown (Informal)
[FarSense: A Comprehensive Commonsense Benchmark and Evaluation Framework for the Farsi Language](https://aclanthology.org/2025.ijcnlp-long.187/) (Zeinalipour et al., IJCNLP-AACL 2025)
ACL
Kamyar Zeinalipour, Neda Jamshidi, Seyedehbahareh Hejazi, Marco Maggini, Monica Bianchini, Simone Paoletti, and Marco Gori. 2025. FarSense: A Comprehensive Commonsense Benchmark and Evaluation Framework for the Farsi Language. In Proceedings of the 14th International Joint Conference on Natural Language Processing and the 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics, pages 3529–3599, Mumbai, India. The Asian Federation of Natural Language Processing and The Association for Computational Linguistics.