@inproceedings{costa-jussa-etal-2025-lcfo,
title = "{LCFO}: Long Context and Long Form Output Dataset and Benchmarking",
author = "Costa-juss{\`a}, Marta R. and
Andrews, Pierre and
Meglioli, Mariano Coria and
Chen, Joy and
Chuang, Joe and
Dale, David and
Ropers, Christophe and
Mourachko, Alexandre and
S{\'a}nchez, Eduardo and
Schwenk, Holger and
Tran, Tuan A. and
Turkatenko, Arina and
Wood, Carleigh",
editor = "Che, Wanxiang and
Nabende, Joyce and
Shutova, Ekaterina and
Pilehvar, Mohammad Taher",
booktitle = "Findings of the Association for Computational Linguistics: ACL 2025",
month = jul,
year = "2025",
address = "Vienna, Austria",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2025.findings-acl.556/",
doi = "10.18653/v1/2025.findings-acl.556",
pages = "10672--10700",
ISBN = "979-8-89176-256-5",
abstract = "This paper presents the Long Context and Form Output (LCFO) benchmark, a novel evaluation framework for assessing gradual summarization and summary expansion capabilities across diverse domains. LCFO consists of long input documents (5k words average length), each of which comes with three summaries of different lengths (20{\%}, 10{\%}, and 5{\%} of the input text), as well as approximately 15 questions and answers (QA) related to the input content. Notably, LCFO also provides alignments between specific QA pairs and corresponding summaries in 7 domains. The primary motivation behind providing summaries of different lengths is to establish a controllable framework for generating long texts from shorter inputs, i.e. summary expansion. To establish an evaluation metric framework for summarization and summary expansion, we provide human evaluation scores for human-generated outputs, as well as results from various state-of-the-art large language models (LLMs). GPT-4o-mini achieves best human scores among automatic systems in both summarization and summary expansion tasks ({\ensuremath{\approx}} +10{\%} and +20{\%}, respectively). It even surpasses human output quality in the case of short summaries ({\ensuremath{\approx}} +7{\%}). Overall automatic metrics achieve low correlations with human evaluation scores ({\ensuremath{\approx}} 0.4) but moderate correlation on specific evaluation aspects such as fluency and attribution ({\ensuremath{\approx}} 0.6)."
}

<?xml version="1.0" encoding="UTF-8"?>
<modsCollection xmlns="http://www.loc.gov/mods/v3">
<mods ID="costa-jussa-etal-2025-lcfo">
<titleInfo>
<title>LCFO: Long Context and Long Form Output Dataset and Benchmarking</title>
</titleInfo>
<name type="personal">
<namePart type="given">Marta</namePart>
<namePart type="given">R</namePart>
<namePart type="family">Costa-jussà</namePart>
<role>
<roleTerm authority="marcrelator" type="text">author</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Pierre</namePart>
<namePart type="family">Andrews</namePart>
<role>
<roleTerm authority="marcrelator" type="text">author</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Mariano</namePart>
<namePart type="given">Coria</namePart>
<namePart type="family">Meglioli</namePart>
<role>
<roleTerm authority="marcrelator" type="text">author</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Joy</namePart>
<namePart type="family">Chen</namePart>
<role>
<roleTerm authority="marcrelator" type="text">author</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Joe</namePart>
<namePart type="family">Chuang</namePart>
<role>
<roleTerm authority="marcrelator" type="text">author</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">David</namePart>
<namePart type="family">Dale</namePart>
<role>
<roleTerm authority="marcrelator" type="text">author</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Christophe</namePart>
<namePart type="family">Ropers</namePart>
<role>
<roleTerm authority="marcrelator" type="text">author</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Alexandre</namePart>
<namePart type="family">Mourachko</namePart>
<role>
<roleTerm authority="marcrelator" type="text">author</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Eduardo</namePart>
<namePart type="family">Sánchez</namePart>
<role>
<roleTerm authority="marcrelator" type="text">author</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Holger</namePart>
<namePart type="family">Schwenk</namePart>
<role>
<roleTerm authority="marcrelator" type="text">author</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Tuan</namePart>
<namePart type="given">A</namePart>
<namePart type="family">Tran</namePart>
<role>
<roleTerm authority="marcrelator" type="text">author</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Arina</namePart>
<namePart type="family">Turkatenko</namePart>
<role>
<roleTerm authority="marcrelator" type="text">author</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Carleigh</namePart>
<namePart type="family">Wood</namePart>
<role>
<roleTerm authority="marcrelator" type="text">author</roleTerm>
</role>
</name>
<originInfo>
<dateIssued>2025-07</dateIssued>
</originInfo>
<typeOfResource>text</typeOfResource>
<relatedItem type="host">
<titleInfo>
<title>Findings of the Association for Computational Linguistics: ACL 2025</title>
</titleInfo>
<name type="personal">
<namePart type="given">Wanxiang</namePart>
<namePart type="family">Che</namePart>
<role>
<roleTerm authority="marcrelator" type="text">editor</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Joyce</namePart>
<namePart type="family">Nabende</namePart>
<role>
<roleTerm authority="marcrelator" type="text">editor</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Ekaterina</namePart>
<namePart type="family">Shutova</namePart>
<role>
<roleTerm authority="marcrelator" type="text">editor</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Mohammad</namePart>
<namePart type="given">Taher</namePart>
<namePart type="family">Pilehvar</namePart>
<role>
<roleTerm authority="marcrelator" type="text">editor</roleTerm>
</role>
</name>
<originInfo>
<publisher>Association for Computational Linguistics</publisher>
<place>
<placeTerm type="text">Vienna, Austria</placeTerm>
</place>
</originInfo>
<genre authority="marcgt">conference publication</genre>
<identifier type="isbn">979-8-89176-256-5</identifier>
</relatedItem>
<abstract>This paper presents the Long Context and Form Output (LCFO) benchmark, a novel evaluation framework for assessing gradual summarization and summary expansion capabilities across diverse domains. LCFO consists of long input documents (5k words average length), each of which comes with three summaries of different lengths (20%, 10%, and 5% of the input text), as well as approximately 15 questions and answers (QA) related to the input content. Notably, LCFO also provides alignments between specific QA pairs and corresponding summaries in 7 domains. The primary motivation behind providing summaries of different lengths is to establish a controllable framework for generating long texts from shorter inputs, i.e. summary expansion. To establish an evaluation metric framework for summarization and summary expansion, we provide human evaluation scores for human-generated outputs, as well as results from various state-of-the-art large language models (LLMs). GPT-4o-mini achieves best human scores among automatic systems in both summarization and summary expansion tasks (≈ +10% and +20%, respectively). It even surpasses human output quality in the case of short summaries (≈ +7%). Overall automatic metrics achieve low correlations with human evaluation scores (≈ 0.4) but moderate correlation on specific evaluation aspects such as fluency and attribution (≈ 0.6).</abstract>
<identifier type="citekey">costa-jussa-etal-2025-lcfo</identifier>
<identifier type="doi">10.18653/v1/2025.findings-acl.556</identifier>
<location>
<url>https://aclanthology.org/2025.findings-acl.556/</url>
</location>
<part>
<date>2025-07</date>
<extent unit="page">
<start>10672</start>
<end>10700</end>
</extent>
</part>
</mods>
</modsCollection>

%0 Conference Proceedings
%T LCFO: Long Context and Long Form Output Dataset and Benchmarking
%A Costa-jussà, Marta R.
%A Andrews, Pierre
%A Meglioli, Mariano Coria
%A Chen, Joy
%A Chuang, Joe
%A Dale, David
%A Ropers, Christophe
%A Mourachko, Alexandre
%A Sánchez, Eduardo
%A Schwenk, Holger
%A Tran, Tuan A.
%A Turkatenko, Arina
%A Wood, Carleigh
%Y Che, Wanxiang
%Y Nabende, Joyce
%Y Shutova, Ekaterina
%Y Pilehvar, Mohammad Taher
%S Findings of the Association for Computational Linguistics: ACL 2025
%D 2025
%8 July
%I Association for Computational Linguistics
%C Vienna, Austria
%@ 979-8-89176-256-5
%F costa-jussa-etal-2025-lcfo
%X This paper presents the Long Context and Form Output (LCFO) benchmark, a novel evaluation framework for assessing gradual summarization and summary expansion capabilities across diverse domains. LCFO consists of long input documents (5k words average length), each of which comes with three summaries of different lengths (20%, 10%, and 5% of the input text), as well as approximately 15 questions and answers (QA) related to the input content. Notably, LCFO also provides alignments between specific QA pairs and corresponding summaries in 7 domains. The primary motivation behind providing summaries of different lengths is to establish a controllable framework for generating long texts from shorter inputs, i.e. summary expansion. To establish an evaluation metric framework for summarization and summary expansion, we provide human evaluation scores for human-generated outputs, as well as results from various state-of-the-art large language models (LLMs). GPT-4o-mini achieves best human scores among automatic systems in both summarization and summary expansion tasks (≈ +10% and +20%, respectively). It even surpasses human output quality in the case of short summaries (≈ +7%). Overall automatic metrics achieve low correlations with human evaluation scores (≈ 0.4) but moderate correlation on specific evaluation aspects such as fluency and attribution (≈ 0.6).
%R 10.18653/v1/2025.findings-acl.556
%U https://aclanthology.org/2025.findings-acl.556/
%U https://doi.org/10.18653/v1/2025.findings-acl.556
%P 10672-10700

Markdown (Informal)

[LCFO: Long Context and Long Form Output Dataset and Benchmarking](https://aclanthology.org/2025.findings-acl.556/) (Costa-jussà et al., Findings 2025)

ACL

Marta R. Costa-jussà, Pierre Andrews, Mariano Coria Meglioli, Joy Chen, Joe Chuang, David Dale, Christophe Ropers, Alexandre Mourachko, Eduardo Sánchez, Holger Schwenk, Tuan A. Tran, Arina Turkatenko, and Carleigh Wood. 2025. LCFO: Long Context and Long Form Output Dataset and Benchmarking. In Findings of the Association for Computational Linguistics: ACL 2025, pages 10672–10700, Vienna, Austria. Association for Computational Linguistics.