Da Tang
2022
Toward Annotator Group Bias in Crowdsourcing
Haochen Liu | Joseph Thekinen | Sinem Mollaoglu | Da Tang | Ji Yang | Youlong Cheng | Hui Liu | Jiliang Tang
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Crowdsourcing has emerged as a popular approach for collecting annotated data to train supervised machine learning models. However, annotator bias can lead to defective annotations. Though a few works investigate individual annotator bias, group effects among annotators are largely overlooked. In this work, we reveal that annotators within the same demographic group tend to show consistent group bias in annotation tasks, and we thus conduct an initial study on annotator group bias. We first empirically verify the existence of annotator group bias in various real-world crowdsourcing datasets. Then, we develop GroupAnno, a novel probabilistic graphical framework that captures annotator group bias with an extended Expectation Maximization (EM) algorithm. We conduct experiments on both synthetic and real-world datasets. Experimental results demonstrate the effectiveness of our model in modeling annotator group bias in label aggregation and model learning over competitive baselines.
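The abstract does not spell out GroupAnno's exact parameterization, so purely as an illustration of EM-based label aggregation with group-level bias, here is a minimal Dawid-Skene-style sketch in which all annotators in the same demographic group share one confusion matrix. The function name, signature, and toy data below are hypothetical and are not the paper's model or API.

```python
import numpy as np

def em_group_aggregation(labels, groups, n_classes, n_groups, n_iter=50):
    """EM label aggregation with one confusion matrix per annotator
    group (a simplified, Dawid-Skene-like sketch, not GroupAnno itself).

    labels: (n_items, n_annotators) int array; -1 marks a missing label.
    groups: (n_annotators,) int array mapping each annotator to a group.
    """
    n_items, n_annot = labels.shape
    # Initialize the posterior over true labels with a soft majority vote.
    post = np.ones((n_items, n_classes)) / n_classes
    for i in range(n_items):
        obs = labels[i][labels[i] >= 0]
        if len(obs):
            post[i] = np.bincount(obs, minlength=n_classes) + 1e-2
            post[i] /= post[i].sum()
    for _ in range(n_iter):
        # M-step: re-estimate per-group confusion matrices and the class prior.
        conf = np.full((n_groups, n_classes, n_classes), 1e-2)  # smoothing
        for i in range(n_items):
            for a in range(n_annot):
                if labels[i, a] >= 0:
                    conf[groups[a], :, labels[i, a]] += post[i]
        conf /= conf.sum(axis=2, keepdims=True)  # P(observed | true, group)
        prior = post.mean(axis=0)
        # E-step: recompute the posterior over each item's true label.
        log_post = np.tile(np.log(prior), (n_items, 1))
        for i in range(n_items):
            for a in range(n_annot):
                if labels[i, a] >= 0:
                    log_post[i] += np.log(conf[groups[a], :, labels[i, a]])
        log_post -= log_post.max(axis=1, keepdims=True)
        post = np.exp(log_post)
        post /= post.sum(axis=1, keepdims=True)
    return post, conf

# Tiny synthetic check: group 1's annotators are systematically noisier.
rng = np.random.default_rng(0)
truth = rng.integers(0, 2, size=200)
groups = np.array([0, 0, 1, 1])
flip = np.array([0.1, 0.1, 0.35, 0.35])  # per-annotator flip rate
labels = np.where(rng.random((200, 4)) < flip,
                  1 - truth[:, None], truth[:, None])
post, conf = em_group_aggregation(labels, groups, n_classes=2, n_groups=2)
print((post.argmax(1) == truth).mean())  # aggregated-label accuracy
print(conf[:, 0, 0])                     # per-group P(obs=0 | true=0)
```

Sharing one confusion matrix per group is what lets the group-level bias show up directly as a parameter estimate; the learned `conf` for the noisier group should have visibly smaller diagonal entries.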
2018
Subgoal Discovery for Hierarchical Dialogue Policy Learning
Da Tang | Xiujun Li | Jianfeng Gao | Chong Wang | Lihong Li | Tony Jebara
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Developing agents that can engage in complex goal-oriented dialogues is challenging, partly because the main learning signals are very sparse in long conversations. In this paper, we propose a divide-and-conquer approach that discovers and exploits the hidden structure of the task to enable efficient policy learning. First, given successful example dialogues, we propose the Subgoal Discovery Network (SDN) to divide a complex goal-oriented task into a set of simpler subgoals in an unsupervised fashion. We then use these subgoals to learn a multi-level policy via hierarchical reinforcement learning. We demonstrate our method by building a dialogue agent for the composite task of travel planning. Experiments with simulated and real users show that our approach performs competitively against a state-of-the-art method that requires human-defined subgoals. Moreover, we show that the learned subgoals are often human-comprehensible.
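To make the two-level control flow concrete, here is a minimal, hypothetical sketch of hierarchical Q-learning with fixed subgoals on a toy chain task: a manager policy picks subgoals, a worker policy earns an intrinsic reward for reaching the chosen subgoal, and the sparse task reward trains only the manager. The paper's SDN instead discovers subgoals from example dialogues and uses neural policies; the states, subgoals, and rewards below are all illustrative assumptions.

```python
import random
from collections import defaultdict

# Toy chain task: walk from state 0 to state N_STATES - 1.
N_STATES = 8
SUBGOALS = [3, 7]        # hypothetical subgoals a discovery step might output
ACTIONS = [-1, +1]
ALPHA, GAMMA, EPS, EPISODES = 0.5, 0.95, 0.1, 2000

q_hi = defaultdict(float)  # manager: Q(state, subgoal)
q_lo = defaultdict(float)  # worker:  Q((state, subgoal), action)

def eps_greedy(q, state, choices):
    if random.random() < EPS:
        return random.choice(choices)
    return max(choices, key=lambda c: q[(state, c)])

for _ in range(EPISODES):
    s = 0
    while s != N_STATES - 1:
        g = eps_greedy(q_hi, s, SUBGOALS)      # manager picks a subgoal
        s0, ext_return, disc = s, 0.0, 1.0
        while s != g and s != N_STATES - 1:
            a = eps_greedy(q_lo, (s, g), ACTIONS)
            s2 = min(max(s + a, 0), N_STATES - 1)
            ext = 1.0 if s2 == N_STATES - 1 else 0.0   # sparse task reward
            intr = 1.0 if s2 == g else 0.0             # intrinsic subgoal reward
            done = s2 == g or s2 == N_STATES - 1
            best = 0.0 if done else max(q_lo[((s2, g), b)] for b in ACTIONS)
            q_lo[((s, g), a)] += ALPHA * (intr + GAMMA * best - q_lo[((s, g), a)])
            ext_return += disc * ext                   # accumulate for manager
            disc *= GAMMA
            s = s2
        best_hi = 0.0 if s == N_STATES - 1 else max(q_hi[(s, g2)] for g2 in SUBGOALS)
        q_hi[(s0, g)] += ALPHA * (ext_return + disc * best_hi - q_hi[(s0, g)])
```

The key design point the sketch preserves is the division of labor: the worker never sees the sparse task reward directly, so its learning signal stays dense, while the manager reasons only over the much shorter sequence of subgoal choices.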