Camillo Jose Taylor


2025

ControlText: Unlocking Controllable Fonts in Multilingual Text Rendering without Font Annotations
Bowen Jiang | Yuan Yuan | Xinyi Bai | Zhuoqun Hao | Alyson Yin | Yaojie Hu | Wenyu Liao | Lyle Ungar | Camillo Jose Taylor
Findings of the Association for Computational Linguistics: EMNLP 2025

This work demonstrates that diffusion models can achieve font-controllable multilingual text rendering using just raw images, without font label annotations. Visual text rendering remains a significant challenge. While recent methods condition diffusion models on glyphs, it is impossible to retrieve exact font annotations from large-scale, real-world datasets, which prevents user-specified font control. To address this, we propose a data-driven solution that integrates a conditional diffusion model with a text segmentation model, using segmentation masks to capture and represent fonts in pixel space in a self-supervised manner. This eliminates the need for any ground-truth font labels and lets users customize text rendering with any multilingual font of their choice. Experiments provide a proof of concept of our algorithm in zero-shot text and font editing across diverse fonts and languages, offering valuable insights for the community and industry toward generalized visual text rendering.
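
To make the conditioning idea concrete, here is a minimal sketch of one training step: a segmentation model produces a glyph mask from the raw image itself, and the denoiser is conditioned on that mask, so font appearance is learned without any font labels. The module names (GlyphSegmenter, GlyphConditionedDenoiser), shapes, and the toy forward process are illustrative assumptions, not the paper's actual architecture.

# A minimal sketch of self-supervised font conditioning, assuming PyTorch.
# GlyphSegmenter and GlyphConditionedDenoiser are hypothetical stand-ins.
import torch
import torch.nn as nn

class GlyphSegmenter(nn.Module):
    """Stand-in text segmentation model: maps an image to a soft glyph mask."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(3, 1, kernel_size=3, padding=1)

    def forward(self, image):                  # image: (B, 3, H, W)
        return torch.sigmoid(self.net(image))  # mask:  (B, 1, H, W)

class GlyphConditionedDenoiser(nn.Module):
    """Stand-in denoiser that sees the noisy image concatenated with the mask."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(3 + 1, 3, kernel_size=3, padding=1)

    def forward(self, noisy, mask, t):
        # Timestep t is ignored by this toy network; a real diffusion
        # denoiser would embed it.
        return self.net(torch.cat([noisy, mask], dim=1))

segmenter, denoiser = GlyphSegmenter(), GlyphConditionedDenoiser()
image = torch.randn(2, 3, 64, 64)
t = torch.randint(0, 1000, (2,))
noise = torch.randn_like(image)
noisy = image + noise               # toy stand-in for the forward process
mask = segmenter(image).detach()    # pixel-space font representation; the
                                    # segmenter is treated as frozen here
loss = ((denoiser(noisy, mask, t) - noise) ** 2).mean()
loss.backward()

At inference time, the user would supply a mask rendered in any font of their choice in place of the self-generated one, which is what makes the rendering font-controllable.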

Towards Rationality in Language and Multimodal Agents: A Survey
Bowen Jiang | Yangxinyu Xie | Xiaomeng Wang | Yuan Yuan | Zhuoqun Hao | Xinyi Bai | Weijie J Su | Camillo Jose Taylor | Tanwi Mallick
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)

This work discusses how to build more rational language and multimodal agents and what criteria define rationality in intelligent systems. Rationality is the quality of being guided by reason, characterized by decision-making that aligns with evidence and logical principles. It plays a crucial role in reliable problem-solving by ensuring well-grounded and consistent solutions. Despite their progress, large language models (LLMs) often fall short of rationality due to their bounded knowledge space and inconsistent outputs. In response, recent efforts have shifted toward developing multimodal and multi-agent systems, as well as integrating modules such as external tools, program code, symbolic reasoners, utility functions, and conformal risk controls, rather than relying solely on a single LLM for decision-making. This paper surveys state-of-the-art advancements in language and multimodal agents, assesses their role in enhancing rationality, and outlines open challenges and future research directions. We maintain an open repository at https://github.com/bowen-upenn/Agent_Rationality.
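
Of the modules named above, conformal risk control is the most self-contained to illustrate. Below is a minimal sketch of split conformal prediction under stated assumptions (fabricated probabilities and labels, hypothetical function names); it illustrates the general technique, not code from any surveyed system.

# A minimal split-conformal sketch: calibrate a score threshold so that
# prediction sets contain the true label with probability >= 1 - alpha.
# All data here is fabricated for illustration.
import numpy as np

def conformal_threshold(cal_probs, cal_labels, alpha=0.1):
    """Nonconformity = 1 - p(true label); return the ceil((n+1)(1-alpha))/n
    empirical quantile of the calibration scores."""
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    q = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(scores, min(q, 1.0), method="higher")

def prediction_set(probs, qhat):
    """All labels whose nonconformity stays under the calibrated threshold."""
    return np.where(1.0 - probs <= qhat)[0]

rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(4), size=200)   # fake class probabilities
cal_labels = rng.integers(0, 4, size=200)         # fake true labels
qhat = conformal_threshold(cal_probs, cal_labels, alpha=0.1)
print(prediction_set(rng.dirichlet(np.ones(4)), qhat))

The appeal for agent rationality is that the output is a set with a coverage guarantee rather than a single possibly overconfident answer, so downstream decisions can account for the model's uncertainty.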

2024

A Peek into Token Bias: Large Language Models Are Not Yet Genuine Reasoners
Bowen Jiang | Yangxinyu Xie | Zhuoqun Hao | Xiaomeng Wang | Tanwi Mallick | Weijie J Su | Camillo Jose Taylor | Dan Roth
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing

This study introduces a hypothesis-testing framework to assess whether large language models (LLMs) possess genuine reasoning abilities or primarily depend on token bias. We go beyond evaluating LLMs on accuracy alone; rather, we investigate their token bias in solving logical reasoning tasks. Specifically, we develop carefully controlled synthetic datasets featuring conjunction-fallacy and syllogistic problems. Our framework outlines a list of hypotheses under which token biases are readily identifiable, with all null hypotheses assuming genuine reasoning capabilities of LLMs. The findings in this study suggest, with statistical guarantees, that most LLMs still struggle with logical reasoning. While they may perform well on classic problems, their success largely depends on recognizing superficial patterns with strong token bias, raising concerns about their actual reasoning and generalization abilities.
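
As an illustration of the kind of test such a framework can perform: under the null hypothesis of genuine reasoning, perturbing surface tokens (e.g., swapping the entity names in a syllogism) should leave correctness unchanged, which an exact McNemar test on paired outcomes can check. The outcome data and helper below are fabricated for illustration; they are not the paper's datasets or statistics.

# A minimal sketch of a paired test for token bias, pure Python.
from math import comb

def mcnemar_exact(b, c):
    """Two-sided exact McNemar p-value from the discordant-pair counts:
    b = correct on original but wrong on perturbed, c = the reverse."""
    n = b + c
    if n == 0:
        return 1.0
    k = min(b, c)
    p = sum(comb(n, i) for i in range(k + 1)) / 2 ** n  # one tail
    return min(1.0, 2 * p)

# Paired correctness on original vs. token-perturbed logic problems
# (hypothetical outcomes; 1 = correct).
original  = [1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1]
perturbed = [0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1]

b = sum(o == 1 and p == 0 for o, p in zip(original, perturbed))
c = sum(o == 0 and p == 1 for o, p in zip(original, perturbed))
print(f"discordant pairs b={b}, c={c}, p={mcnemar_exact(b, c):.4f}")
# A small p-value rejects the null of genuine reasoning: accuracy hinges
# on the surface tokens rather than the underlying logical structure.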