Alexander Johnson
2025
NovAScore: A New Automated Metric for Evaluating Document Level Novelty
Lin Ai | Ziwei Gong | Harshsaiprasad Deshpande | Alexander Johnson | Emmy Phung | Ahmad Emami | Julia Hirschberg
Proceedings of the 31st International Conference on Computational Linguistics
The rapid expansion of online content has intensified the issue of information redundancy, underscoring the need for solutions that can identify genuinely new information. Despite this challenge, the research community has seen a decline in focus on novelty detection, particularly with the rise of large language models (LLMs). Additionally, previous approaches have relied heavily on human annotation, which is time-consuming, costly, and particularly challenging when annotators must compare a target document against a vast number of historical documents. In this work, we introduce NovAScore (Novelty Evaluation in Atomicity Score), an automated metric for evaluating document-level novelty. NovAScore aggregates the novelty and salience scores of atomic information, providing high interpretability and a detailed analysis of a document’s novelty. With its dynamic weight adjustment scheme, NovAScore offers enhanced flexibility and an additional dimension to assess both the novelty level and the importance of information within a document. Our experiments show that NovAScore strongly correlates with human judgments of novelty, achieving a 0.626 Point-Biserial correlation on the TAP-DLND 1.0 dataset and a 0.920 Pearson correlation on an internal human-annotated dataset.
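The abstract describes aggregating per-atom novelty and salience scores into a document-level score via a dynamic weighting scheme. The snippet below is a minimal illustrative sketch of one way such a salience-weighted aggregation could look; the function name, variable names, and the salience-proportional weighting are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (assumptions, not the authors' method): combine per-atom
# novelty scores into a document-level score, weighting each atomic content
# unit by its salience so that important information counts for more.

def novascore_like_aggregate(atom_novelty, atom_salience):
    """Return a document-level novelty score in [0, 1].

    atom_novelty:  per-atom novelty scores (1.0 = new, 0.0 = redundant)
    atom_salience: per-atom salience scores (higher = more important)
    """
    if not atom_novelty:
        return 0.0
    total_salience = sum(atom_salience)
    if total_salience == 0:
        # Fall back to an unweighted average when no atom is salient.
        return sum(atom_novelty) / len(atom_novelty)
    # Salient atoms contribute proportionally more to the document score.
    weights = [s / total_salience for s in atom_salience]
    return sum(w * n for w, n in zip(weights, atom_novelty))


# Example: three atomic units; the most salient one is novel.
print(novascore_like_aggregate([1.0, 0.0, 1.0], [0.6, 0.3, 0.1]))  # 0.7
```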
2018
Discourse Coherence: Concurrent Explicit and Implicit Relations
Hannah Rohde | Alexander Johnson | Nathan Schneider | Bonnie Webber
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Theories of discourse coherence posit relations between discourse segments as a key feature of coherent text. Our prior work suggests that multiple discourse relations can be simultaneously operative between two segments for reasons not predicted by the literature. Here we test how this joint presence can lead participants to endorse seemingly divergent conjunctions (e.g., BUT and SO) to express the link they see between two segments. These apparent divergences are not symptomatic of participant naivety or bias, but arise reliably from the concurrent availability of multiple relations between segments – some available through explicit signals and some via inference. We believe that these new results can both inform future progress in theoretical work on discourse coherence and lead to higher levels of performance in discourse parsing.