Zhifeng Hu
2020
Obfuscation for Privacy-preserving Syntactic Parsing
Zhifeng Hu | Serhii Havrylov | Ivan Titov | Shay B. Cohen
Proceedings of the 16th International Conference on Parsing Technologies and the IWPT 2020 Shared Task on Parsing into Enhanced Universal Dependencies
The goal of homomorphic encryption is to encrypt data such that another party can operate on it without being explicitly exposed to the content of the original data. We introduce an idea for a privacy-preserving transformation on natural language data, inspired by homomorphic encryption. Our primary tool is obfuscation, relying on the properties of natural language. Specifically, a given English text is obfuscated using a neural model that aims to preserve the syntactic relationships of the original sentence so that the obfuscated sentence can be parsed instead of the original one. The model works at the word level, and learns to obfuscate each word separately by changing it into a new word that has a similar syntactic role. The text obfuscated by our model leads to better performance on three syntactic parsers (two dependency parsers and one constituency parser) in comparison to an upper-bound random substitution baseline. More specifically, the results demonstrate that as more terms are obfuscated (by their part of speech), the substitution upper bound degrades significantly, while the neural model maintains relatively high parsing performance. All of this is done without much sacrifice of privacy compared to the random substitution upper bound. We further analyze the results and discover that the substituted words have similar syntactic properties, but different semantic content, compared to the original words.
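As a rough illustration of the substitution idea described in the abstract (not the paper's neural model), the sketch below performs same-POS word replacement, the kind of obfuscation the random substitution baseline relies on; the toy lexicon and the `obfuscate` helper are hypothetical.

```python
# Hypothetical illustration: obfuscate a sentence by replacing each word with a
# random word of the same part of speech, so that a parser run on the
# obfuscated sentence should recover (roughly) the same structure.
import random

# Toy POS lexicon; a real system would use a tagger and a much larger vocabulary.
POS_LEXICON = {
    "NOUN": ["committee", "garden", "violin", "theory"],
    "VERB": ["examined", "painted", "carried", "ignored"],
    "ADJ":  ["quiet", "bright", "narrow", "ancient"],
    "DET":  ["the", "a"],
}

def obfuscate(tagged_sentence, rng=random.Random(0)):
    """Replace each (word, pos) pair with a random same-POS word when possible."""
    out = []
    for word, pos in tagged_sentence:
        candidates = [w for w in POS_LEXICON.get(pos, []) if w != word]
        out.append(rng.choice(candidates) if candidates else word)
    return out

# Each content word is swapped for another word with the same POS tag,
# e.g. "the old dog chased a cat" -> something like "a ancient violin examined the garden".
sentence = [("the", "DET"), ("old", "ADJ"), ("dog", "NOUN"),
            ("chased", "VERB"), ("a", "DET"), ("cat", "NOUN")]
print(" ".join(obfuscate(sentence)))
```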
2018
A Simple yet Effective Joint Training Method for Cross-Lingual Universal Dependency Parsing
Danlu Chen | Mengxiao Lin | Zhifeng Hu | Xipeng Qiu
Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies
This paper describes Fudan’s submission to the CoNLL 2018 shared task on Universal Dependency Parsing. We jointly train models when two languages are similar according to linguistic typology and then ensemble the models using a simple re-parse algorithm. We outperform the baseline method by 4.4% (2.1%) on average on the development (test) set in the CoNLL 2018 UD Shared Task.
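As a hedged illustration of how parser ensembling can work (the paper's re-parse algorithm may differ), the toy sketch below takes a majority vote over per-token head predictions; the `ensemble_heads` helper and the example inputs are made up for illustration.

```python
# Hypothetical sketch of a voting-style ensemble over dependency parsers:
# each member parser predicts a head index for every token, votes are tallied,
# and the most popular head wins. A full re-parse would additionally decode a
# well-formed tree from the aggregated scores, e.g. with Chu-Liu/Edmonds.
from collections import Counter

def ensemble_heads(predictions):
    """predictions: list of head sequences, one per parser, e.g. [[2, 0, 2], ...]."""
    n_tokens = len(predictions[0])
    ensembled = []
    for i in range(n_tokens):
        votes = Counter(heads[i] for heads in predictions)
        ensembled.append(votes.most_common(1)[0][0])
    return ensembled

# Three parsers disagree on the last token; majority voting keeps head 2.
print(ensemble_heads([[2, 0, 2], [2, 0, 4], [2, 0, 2]]))  # -> [2, 0, 2]
```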