Yifan Sun
2025
Enhancing the Comprehensibility of Text Explanations via Unsupervised Concept Discovery
Yifan Sun | Danding Wang | Qiang Sheng | Juan Cao | Jintao Li
Findings of the Association for Computational Linguistics: ACL 2025
Concept-based explainable approaches have emerged as a promising method in explainable AI because they can interpret models in a way that aligns with human reasoning. However, their adoption in the text domain remains limited. Most existing methods rely on predefined concept annotations and cannot discover unseen concepts, while other methods that extract concepts without supervision often produce explanations that are not intuitively comprehensible to humans, potentially diminishing user trust. These methods fall short of discovering comprehensible concepts automatically. To address this issue, we propose ECO-Concept, an intrinsically interpretable framework that discovers comprehensible concepts without concept annotations. ECO-Concept first utilizes an object-centric architecture to extract semantic concepts automatically. The comprehensibility of the extracted concepts is then evaluated by large language models. Finally, the evaluation result guides subsequent model fine-tuning to obtain more understandable explanations using relatively comprehensible concepts. Experiments show that our method achieves superior performance across diverse tasks. Further concept evaluations validate that the concepts learned by ECO-Concept surpass current counterparts in comprehensibility.
The Staircase of Ethics: Probing LLM Value Priorities through Multi-Step Induction to Complex Moral Dilemmas
Ya Wu | Qiang Sheng | Danding Wang | Guang Yang | Yifan Sun | Zhengjia Wang | Yuyan Bu | Juan Cao
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Ethical decision-making is a critical aspect of human judgment, and the growing use of LLMs in decision-support systems necessitates a rigorous evaluation of their moral reasoning capabilities. However, existing assessments rely primarily on single-step evaluations, failing to capture how models adapt to evolving ethical challenges. To address this gap, we introduce Multi-step Moral Dilemmas (MMDs), the first dataset specifically constructed to evaluate the evolving moral judgments of LLMs across 3,302 five-stage dilemmas. This framework enables a fine-grained, dynamic analysis of how LLMs adjust their moral reasoning across escalating dilemmas. Our evaluation of nine widely used LLMs reveals that their value preferences shift significantly as dilemmas progress, indicating that models recalibrate moral judgments based on scenario complexity. Furthermore, pairwise value comparisons demonstrate that while LLMs often prioritize the value of care, this value can be superseded by fairness in certain contexts, highlighting the dynamic and context-dependent nature of LLM ethical reasoning. Our findings call for a shift toward dynamic, context-aware evaluation paradigms, paving the way for more human-aligned and value-sensitive development of LLMs.