Yedid Hoshen
2024
From Zero to Hero: Cold-Start Anomaly Detection
Tal Reiss | George Kour | Naama Zwerdling | Ateret Anaby Tavor | Yedid Hoshen
Findings of the Association for Computational Linguistics: ACL 2024
2018
Non-Adversarial Unsupervised Word Translation
Yedid Hoshen | Lior Wolf
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Unsupervised word translation from non-parallel inter-lingual corpora has attracted much research interest. Very recently, neural network methods trained with adversarial loss functions achieved high accuracy on this task. Despite the impressive success of the recent techniques, they suffer from the typical drawbacks of generative adversarial models: sensitivity to hyper-parameters, long training times, and lack of interpretability. In this paper, we make the observation that two sufficiently similar distributions can be aligned correctly with iterative matching methods. We present a novel method that first aligns the second moment of the word distributions of the two languages and then iteratively refines the alignment. Extensive experiments on word translation of European and non-European languages show that our method achieves better performance than recent state-of-the-art deep adversarial approaches and is competitive with the supervised baseline. It is also efficient, easy to parallelize on CPU, and interpretable.
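The two-stage idea described in the abstract — align the second moments of the two embedding distributions, then alternate nearest-neighbor matching with a Procrustes refinement of the linear map — can be sketched on toy data as follows. This is an illustrative reconstruction under stated assumptions, not the authors' implementation: the function name, the whitening step, and the exact matching rule are assumptions for the sketch.

```python
import numpy as np

def align_embeddings(X, Y, iters=5):
    """Toy sketch of distribution alignment via iterative matching.

    (1) Project each embedding set onto its principal axes so the
        second moments agree (a simple moment-alignment stand-in).
    (2) Alternate: match each source vector to its nearest target,
        then refine an orthogonal map W by solving Procrustes on
        the matched pairs.
    """
    def whiten(Z):
        # Center, then rotate onto the principal axes of Z.
        Z = Z - Z.mean(axis=0)
        _, _, Vt = np.linalg.svd(Z, full_matrices=False)
        return Z @ Vt.T

    X, Y = whiten(X), whiten(Y)
    d = X.shape[1]
    W = np.eye(d)                       # current linear map X -> Y
    for _ in range(iters):
        XW = X @ W
        # Nearest-neighbor matching: squared distances source x target.
        dist = ((XW[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
        nn = dist.argmin(axis=1)
        # Orthogonal Procrustes on the matched pairs refines W.
        U, _, Vt = np.linalg.svd(X.T @ Y[nn])
        W = U @ Vt
    return W, nn
```

Because the map is constrained to be orthogonal, each Procrustes step has a closed-form SVD solution, which is what makes this approach fast on CPU and free of adversarial training instabilities.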