Hassan S. Shavarani

Also published as: Hassan Shavarani


2024

Unified Examination of Entity Linking in Absence of Candidate Sets
Nicolas Ong | Hassan Shavarani | Anoop Sarkar
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)

Despite remarkable strides made in the development of entity linking systems in recent years, a comprehensive comparative analysis of these systems using a unified framework is notably absent. This paper addresses this oversight by introducing a new black-box benchmark and conducting a comprehensive evaluation of all state-of-the-art entity linking methods. We use an ablation study to investigate the impact of candidate sets on the performance of entity linking. Our findings uncover exactly how much these entity linking systems depend on candidate sets, and how much this dependence limits the general applicability of each system. We present an alternative approach to candidate sets, demonstrating that leveraging the entire in-domain candidate set can serve as a viable substitute for certain models. Finally, we show the trade-off that less restrictive candidate sets impose on some models in terms of increased inference time and memory footprint.
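The candidate-set ablation the abstract describes can be pictured with a small sketch. The snippet below is illustrative only (toy embeddings, a made-up alias table, and a hypothetical `link` scorer, none of which come from the benchmark): it contrasts scoring a mention against a precomputed candidate set with scoring it against the entire in-domain entity set, which is where the inference-time and memory trade-off arises.

```python
# Toy illustration of candidate-set restriction in entity linking.
# Embeddings and the scorer are stand-ins; real systems use learned encoders.
import numpy as np

rng = np.random.default_rng(0)
ENTITIES = ["Paris", "Paris_Texas", "Paris_Hilton", "London", "Berlin"]
ENT_VECS = {e: rng.normal(size=64) for e in ENTITIES}              # toy embeddings
ALIAS_TABLE = {"paris": ["Paris", "Paris_Texas", "Paris_Hilton"]}  # candidate set

def link(mention_vec, candidates):
    """Return the highest-scoring entity among the given candidates."""
    scores = {e: float(mention_vec @ ENT_VECS[e]) for e in candidates}
    return max(scores, key=scores.get)

mention_vec = ENT_VECS["Paris"] + 0.1 * rng.normal(size=64)  # noisy mention

# Restrictive setting: only alias-table candidates are scored (fast, but it
# fails whenever the gold entity is missing from the table).
print(link(mention_vec, ALIAS_TABLE["paris"]))

# Unrestricted setting: score every in-domain entity (higher coverage, at
# the cost of inference time and memory, as the abstract notes).
print(link(mention_vec, ENTITIES))
```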

2023

SpEL: Structured Prediction for Entity Linking
Hassan Shavarani | Anoop Sarkar
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing

Entity linking is a prominent thread of research focused on structured data creation by linking spans of text to an ontology or knowledge source. We revisit the use of structured prediction for entity linking, which classifies each individual input token as an entity and then aggregates the token-level predictions. Our system, called SpEL (Structured prediction for Entity Linking), is a state-of-the-art entity linking system that applies several new ideas to the task: two refined fine-tuning steps, a context-sensitive prediction aggregation strategy, a reduction of the model’s output vocabulary size, and a fix for a common problem in entity linking systems, the mismatch between training and inference tokenization. Our experiments show that we can outperform the state of the art on the commonly used AIDA benchmark dataset for entity linking to Wikipedia. Our method is also very compute-efficient in terms of both parameter count and inference speed.
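A minimal sketch of the token-level aggregation idea, assuming per-token entity labels have already been predicted (this is not the released SpEL code, and the `aggregate` helper is hypothetical): contiguous tokens sharing a non-O entity label are merged into a single linked span.

```python
# Hedged sketch of structured prediction for entity linking: classify each
# token, then aggregate contiguous same-label tokens into linked spans.
def aggregate(tokens, labels):
    """Merge contiguous tokens sharing a non-O entity label into spans."""
    spans, start = [], None
    for i, lab in enumerate(labels + ["O"]):  # sentinel flushes the last span
        if start is not None and (i == len(labels) or lab != labels[start]):
            spans.append((" ".join(tokens[start:i]), labels[start]))
            start = None
        if i < len(labels) and lab != "O" and start is None:
            start = i
    return spans

tokens = ["Angela", "Merkel", "visited", "Paris", "."]
labels = ["Angela_Merkel", "Angela_Merkel", "O", "Paris", "O"]
print(aggregate(tokens, labels))
# [('Angela Merkel', 'Angela_Merkel'), ('Paris', 'Paris')]
```

SpEL's actual aggregation is context-sensitive rather than this simple label-equality merge, but the span-from-tokens structure is the same.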

2021

Better Neural Machine Translation by Extracting Linguistic Information from BERT
Hassan S. Shavarani | Anoop Sarkar
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume

Approaches that add linguistic information (syntax or semantics) to neural machine translation (NMT) have mostly focused on using point estimates from pre-trained models. Directly using the capacity of massive pre-trained contextual word embedding models such as BERT (Devlin et al., 2019) has been only marginally useful in NMT, because effective fine-tuning for NMT is difficult to achieve without making training brittle and unreliable. We augment NMT by extracting dense, fine-tuned, vector-based linguistic information from BERT instead of using point estimates. Experimental results show that our method of incorporating linguistic information helps NMT generalize better in a variety of training contexts and is no more difficult to train than conventional Transformer-based NMT.
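As a rough illustration of the general recipe (contextual vectors from a frozen BERT feeding an NMT encoder, rather than fine-tuning BERT end-to-end), here is a sketch using the HuggingFace transformers API. The `project` layer and the model choice are assumptions for illustration; the paper's actual method extracts fine-tuned, dense linguistic vectors rather than raw hidden states.

```python
# Minimal sketch, assuming HuggingFace transformers and bert-base-cased:
# extract contextual vectors from a frozen BERT and project them into the
# NMT model's feature space as extra inputs.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
bert = AutoModel.from_pretrained("bert-base-cased")
bert.eval()  # BERT stays frozen here; only the projection would be trained

source = "The cat sat on the mat ."
with torch.no_grad():
    enc = tokenizer(source, return_tensors="pt")
    hidden = bert(**enc).last_hidden_state   # (1, seq_len, 768)

# Hypothetical projection into the NMT encoder's embedding space.
project = torch.nn.Linear(768, 512)
nmt_features = project(hidden)               # consumed by the NMT encoder
print(nmt_features.shape)                    # e.g. torch.Size([1, 9, 512])
```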

Translation-based Supervision for Policy Generation in Simultaneous Neural Machine Translation
Ashkan Alinejad | Hassan S. Shavarani | Anoop Sarkar
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

In simultaneous machine translation, finding an agent with an optimal action sequence of reads and writes, one that maintains a high level of translation quality while minimizing the average lag in producing target tokens, remains an extremely challenging problem. We propose a novel supervised learning approach for training an agent that can detect the minimum number of reads required for generating each target token, by comparing simultaneous translations against full-sentence translations during training to generate oracle action sequences. These oracle sequences can then be used to train a supervised model for action generation at inference time. Our approach provides an alternative to current heuristic methods in simultaneous translation by introducing a new training objective, which is easier to train than previous attempts to train the agent with reinforcement learning techniques for this task. Our experimental results show that our novel training method for action generation produces much higher-quality translations while minimizing the average lag in simultaneous translation.
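The oracle construction can be sketched in a few lines. The interfaces below (`translate_prefix`, and a word-by-word dictionary standing in for an MT model) are hypothetical; the actual method compares simultaneous against full-sentence translations produced by a trained NMT model. The idea is the same: WRITE a target token as soon as translating the source prefix already agrees with the full-sentence translation, and READ another source token otherwise.

```python
# Toy sketch of generating oracle READ/WRITE action sequences.
def oracle_actions(source, reference, translate_prefix):
    """translate_prefix(src_prefix, n) gives the first n target tokens the
    MT model emits from a source prefix; reference is the full translation."""
    actions, read, written = [], 0, 0
    while written < len(reference):
        if read < len(source) and (
            read == 0
            or translate_prefix(source[:read], written + 1)
            != reference[: written + 1]
        ):
            read += 1
            actions.append("READ")
        else:
            written += 1
            actions.append("WRITE")
    return actions

# Stand-in "model": a word-by-word dictionary, so token i needs i+1 reads.
table = {"guten": "good", "morgen": "morning"}
prefix_mt = lambda src, n: [table[w] for w in src][:n]
print(oracle_actions(["guten", "morgen"], ["good", "morning"], prefix_mt))
# ['READ', 'WRITE', 'READ', 'WRITE']
```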

2020

Multi-class Multilingual Classification of Wikipedia Articles Using Extended Named Entity Tag Set
Hassan S. Shavarani | Satoshi Sekine
Proceedings of the Twelfth Language Resources and Evaluation Conference

Wikipedia is a great source of general world knowledge that can guide NLP models toward better-motivated predictions. Structuring Wikipedia is an initial step towards this goal, as it can facilitate fine-grained classification of articles. In this work, we introduce the Shinra 5-Language Categorization Dataset (SHINRA-5LDS), a large multilingual and multi-labeled set of annotated Wikipedia articles in Japanese, English, French, German, and Farsi using the Extended Named Entity (ENE) tag set. We evaluate the dataset using the best available models for ENE label classification, and show that currently available classification models struggle with large datasets that use fine-grained tag sets.
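To make the task format concrete, here is a toy multi-label classification sketch. The articles and ENE-style tags below are invented for illustration, not SHINRA-5LDS data; the point is only that each Wikipedia article can carry several fine-grained labels at once.

```python
# Hedged sketch of multi-label article classification with scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

articles = [
    "Paris is the capital and most populous city of France.",
    "Albert Einstein was a theoretical physicist born in Ulm.",
    "The Seine is a river flowing through Paris into the Channel.",
]
labels = [["City", "Capital"], ["Person", "Scientist"], ["River"]]  # toy tags

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)                 # one binary column per tag
X = TfidfVectorizer().fit_transform(articles)

clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
for tag, p in zip(mlb.classes_, clf.predict_proba(X[:1])[0]):
    print(f"{tag}: {p:.2f}")                  # per-tag probabilities
```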

2018

Simultaneous Translation using Optimized Segmentation
Maryam Siahbani | Hassan Shavarani | Ashkan Alinejad | Anoop Sarkar
Proceedings of the 13th Conference of the Association for Machine Translation in the Americas (Volume 1: Research Track)

Top-down Tree Structured Decoding with Syntactic Connections for Neural Machine Translation and Parsing
Jetic Gū | Hassan S. Shavarani | Anoop Sarkar
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

The addition of syntax-aware decoding in Neural Machine Translation (NMT) systems requires an effective tree-structured neural network, a syntax-aware attention model, and a language generation model that is sensitive to sentence structure. Recent approaches resort to sequential decoding, either by adding neural network units to capture bottom-up structural information or by serialising structured data into a sequence. We exploit a top-down tree-structured model called DRNN (Doubly-Recurrent Neural Networks), first proposed by Alvarez-Melis and Jaakkola (2017), to create an NMT model called Seq2DRNN that combines a sequential encoder with tree-structured decoding augmented with a syntax-aware attention model. Unlike previous approaches to syntax-based NMT, which use dependency parsing models, our method uses constituency parsing, which we argue provides useful information for translation. In addition, we use the syntactic structure of the sentence to add new connections to the tree-structured decoder neural network (Seq2DRNN+SynC). We compare our NMT model with sequential and state-of-the-art syntax-based NMT models and show that our model produces more fluent translations with better reordering. Since our model is capable of doing translation and constituency parsing at the same time, we also compare our parsing accuracy against other neural parsing models.
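The doubly-recurrent idea can be sketched briefly: each tree node's state combines an ancestral recurrence (from its parent) and a fraternal recurrence (from its previous sibling). This is an illustration of the two recurrences, not the Seq2DRNN implementation, and the `combine` layer is a simplification of how Alvarez-Melis and Jaakkola (2017) merge the two states and predict tree topology.

```python
# Illustrative sketch of a doubly-recurrent (DRNN-style) tree decoder cell.
import torch
import torch.nn as nn

H = 16
ancestral = nn.GRUCell(H, H)  # parent -> child recurrence
fraternal = nn.GRUCell(H, H)  # previous sibling -> next sibling recurrence
combine = nn.Linear(2 * H, H)

def node_state(parent_h, prev_sibling_h, x):
    """State of a tree node from its two recurrences over input x."""
    a = ancestral(x, parent_h)
    f = fraternal(x, prev_sibling_h)
    return torch.tanh(combine(torch.cat([a, f], dim=-1)))

root_h = torch.zeros(1, H)
x = torch.randn(1, H)                               # embedding of a symbol
child1 = node_state(root_h, torch.zeros(1, H), x)   # first child: no sibling
child2 = node_state(root_h, child1, x)              # second child sees child1
print(child1.shape, child2.shape)                   # torch.Size([1, 16]) twice
```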

2015

Learning segmentations that balance latency versus quality in spoken language translation
Hassan Shavarani | Maryam Siahbani | Ramtin Mehdizadeh Seraj | Anoop Sarkar
Proceedings of the 12th International Workshop on Spoken Language Translation: Papers