Jae Hee Lee

Also published as: Jae-Hee Lee


2024

Enhancing Zero-Shot Chain-of-Thought Reasoning in Large Language Models through Logic
Xufeng Zhao | Mengdi Li | Wenhao Lu | Cornelius Weber | Jae Hee Lee | Kun Chu | Stefan Wermter
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

Recent advancements in large language models have showcased their remarkable generalizability across various domains. However, their reasoning abilities still leave significant room for improvement, especially in scenarios that require multi-step reasoning. Although large language models possess extensive knowledge, they often fail to use that knowledge effectively to establish a coherent line of reasoning, and they sometimes hallucinate because their reasoning procedures are unconstrained by logical principles. To improve the zero-shot chain-of-thought reasoning ability of large language models, we propose LoT (Logical Thoughts), a self-improvement prompting framework that leverages principles rooted in symbolic logic, particularly Reductio ad Absurdum, to systematically verify and rectify the reasoning process step by step. Experimental evaluations on language tasks across diverse domains, including arithmetic, commonsense, symbolic, causal inference, and social problems, demonstrate the efficacy of logic-enhanced reasoning. The implementation code for LoT is available at: https://github.com/xf-zhao/LoT.
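
The sketch below is a minimal, hypothetical illustration of the verify-and-revise idea described in the abstract, not the authors' released implementation (see the repository above for that); the llm callable and the prompt wording are assumptions made for illustration only.

# Illustrative sketch only: a verify-and-revise loop in the spirit of LoT.
# `llm` is a hypothetical stand-in for a large-language-model completion call.

def llm(prompt: str) -> str:
    """Hypothetical text-completion call; replace with a real model API."""
    raise NotImplementedError

def logical_thoughts(question: str, max_steps: int = 8) -> list[str]:
    """Build a chain of thought, checking each step via reductio ad absurdum."""
    steps: list[str] = []
    for _ in range(max_steps):
        context = question + "\n" + "\n".join(steps)
        step = llm(f"{context}\nNext reasoning step (or 'DONE'):")
        if step.strip() == "DONE":
            break
        # Ask the model to assume the step is false and look for a contradiction.
        review = llm(
            f"{context}\nProposed step: {step}\n"
            "Assume this step is false and reason from the premises. "
            "Does that assumption lead to a contradiction? "
            "Answer VALID or INVALID, then explain."
        )
        if review.strip().upper().startswith("INVALID"):
            # Revise the step using the critique before committing to it.
            step = llm(
                f"{context}\nFlawed step: {step}\nCritique: {review}\nRevised step:"
            )
        steps.append(step)
    return steps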

2023

Visually Grounded Continual Language Learning with Selective Specialization
Kyra Ahrens | Lennart Bengtson | Jae Hee Lee | Stefan Wermter
Findings of the Association for Computational Linguistics: EMNLP 2023

A desirable trait of an artificial agent acting in the visual world is the ability to continually learn a sequence of language-informed tasks while balancing sufficient specialization on each task against building generalized knowledge for transfer. Selective specialization, i.e., a careful selection of the model components to specialize for each task, is one strategy for controlling this trade-off. However, designing selection strategies requires insight into the role each model component plays in learning specialized versus generalizable representations, which remains a gap in current research. With this work, we therefore provide an extensive analysis of selection strategies for visually grounded continual language learning. Because suitable benchmarks for this purpose are lacking, we introduce two novel diagnostic datasets that provide enough control and flexibility for a thorough model analysis. We assess various heuristics for module specialization strategies, as well as quantifiable measures, for two different types of model architectures. Finally, we design conceptually simple approaches based on our analysis that outperform common continual learning baselines. Our results demonstrate the need for further efforts towards better aligning continual learning algorithms with the learning behaviors of individual model parts.
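
The following is a minimal, hypothetical illustration of selective specialization as described in the abstract, not the paper's actual models, datasets, or selection heuristics; the module names and the simple requires_grad toggling strategy are assumptions made for illustration, written in PyTorch.

# Illustrative sketch only: specialize a chosen subset of modules for the
# current task by freezing all other modules, so the selection strategy
# controls the specialization/generalization trade-off.
import torch.nn as nn

class MultiModuleModel(nn.Module):
    """Toy visually grounded model split into named modules."""
    def __init__(self):
        super().__init__()
        self.vision_encoder = nn.Linear(512, 256)
        self.language_encoder = nn.Linear(300, 256)
        self.fusion = nn.Linear(512, 128)
        self.task_head = nn.Linear(128, 10)

def specialize(model: nn.Module, selected: set[str]) -> None:
    """Enable gradients only for the modules selected for the current task."""
    for name, module in model.named_children():
        requires_grad = name in selected
        for param in module.parameters():
            param.requires_grad = requires_grad

model = MultiModuleModel()
# Example: a selection heuristic might pick the fusion layer and the task head.
specialize(model, selected={"fusion", "task_head"})
print([name for name, p in model.named_parameters() if p.requires_grad])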

2010

A Post-processing Approach to Statistical Word Alignment Reflecting Alignment Tendency between Part-of-speeches
Jae-Hee Lee | Seung-Wook Lee | Gumwon Hong | Young-Sook Hwang | Sang-Bum Kim | Hae-Chang Rim
Coling 2010: Posters