Fabio Massimo Zanzotto

Also published as: F. Zanzotto, Fabio Massimo Zanzotto, Fabio Zanzotto


2024

Linguistic Fingerprint in Transformer Models: How Language Variation Influences Parameter Selection in Irony Detection
Michele Mastromattei | Fabio Massimo Zanzotto
Proceedings of the 3rd Workshop on Perspectivist Approaches to NLP (NLPerspectives) @ LREC-COLING 2024

This paper explores the correlation among linguistic diversity, sentiment analysis, and transformer model architectures. We investigate how different English variations impact transformer-based models for irony detection. To conduct our study, we used the EPIC corpus to extract five diverse English variation-specific datasets and applied the KEN pruning algorithm to five different architectures. Our results reveal several similarities between optimal subnetworks, which provide insights into which linguistic variations share strong resemblances and which exhibit greater dissimilarities. We discovered that optimal subnetworks across models share at least 60% of their parameters, emphasizing the significance of parameter values in capturing and interpreting linguistic variations. This study highlights the inherent structural similarities between models trained on different variants of the same language, as well as the critical role of parameter values in capturing these nuances.

A Tree-of-Thoughts to Broaden Multi-step Reasoning across Languages
Leonardo Ranaldi | Giulia Pucci | Federico Ranaldi | Elena Sofia Ruzzetti | Fabio Massimo Zanzotto
Findings of the Association for Computational Linguistics: NAACL 2024

Reasoning methods, best exemplified by the well-known Chain-of-Thought (CoT), empower the reasoning abilities of Large Language Models (LLMs) by prompting them to solve complex tasks in a step-by-step manner. Although these methods achieve significant success, the ability to deliver multi-step reasoning remains limited to English because of the imbalance in the distribution of pre-training data, which leaves other languages at a disadvantage. In this paper, we propose Cross-lingual Tree-of-Thoughts (Cross-ToT), a method for aligning CoT reasoning across languages. Through a self-consistent cross-lingual prompting mechanism inspired by the Tree-of-Thoughts approach, the proposed method produces multi-step reasoning paths in different languages that jointly lead to the final solution. Experimental evaluations show that our method significantly outperforms existing prompting methods, reducing the number of interactions and achieving state-of-the-art performance.

Less is KEN: a Universal and Simple Non-Parametric Pruning Algorithm for Large Language Models
Michele Mastromattei | Fabio Massimo Zanzotto
Findings of the Association for Computational Linguistics: ACL 2024

Investigating the Impact of Data Contamination of Large Language Models in Text-to-SQL translation
Federico Ranaldi | Elena Sofia Ruzzetti | Dario Onorati | Leonardo Ranaldi | Cristina Giannone | Andrea Favalli | Raniero Romagnoli | Fabio Massimo Zanzotto
Findings of the Association for Computational Linguistics: ACL 2024

Understanding a textual description in order to generate code seems to be an established capability of instruction-following Large Language Models (LLMs) in zero-shot scenarios. However, this translation ability may be influenced by having already seen the target textual descriptions and the related code, an effect known as Data Contamination. In this study, we investigate the impact of Data Contamination on the performance of GPT-3.5 in Text-to-SQL code-generation tasks. Hence, we introduce a novel method to detect Data Contamination in GPTs and examine GPT-3.5’s Text-to-SQL performance using the well-known Spider dataset and our new, unfamiliar dataset Termite. Furthermore, we analyze GPT-3.5’s efficacy on databases with modified information via an adversarial table disconnection (ATD) approach, which complicates Text-to-SQL tasks by removing structural pieces of information from the database. Our results indicate a significant performance drop of GPT-3.5 on the unfamiliar Termite dataset, even with ATD modifications, highlighting the effect of Data Contamination on LLMs in Text-to-SQL translation tasks.

HANS, are you clever? Clever Hans Effect Analysis of Neural Systems
Leonardo Ranaldi | Fabio Zanzotto
Proceedings of the 13th Joint Conference on Lexical and Computational Semantics (*SEM 2024)

Large Language Models (LLMs) have been exhibiting outstanding abilities to reason about the cognitive states, intentions, and reactions of the people involved in an interaction, helping humans guide and comprehend day-to-day social interactions effectively. Accordingly, several multiple-choice question (MCQ) benchmarks have been proposed to construct solid assessments of these abilities. However, earlier work demonstrates the presence of an inherent “order bias” in LLMs, posing challenges to appropriate evaluation. In this paper, we investigate LLMs’ resilience through a series of probing tests using four MCQ benchmarks. By introducing adversarial examples, we show a significant performance gap, mainly when varying the order of the choices, which reveals a selection bias and calls the models’ reasoning abilities into question. Observing a correlation between first positions and model choices due to positional bias, we hypothesize the presence of structural heuristics in the decision-making process of the LLMs, strengthened by the inclusion of significant examples in few-shot scenarios. Finally, by using the Chain-of-Thought (CoT) technique, we elicit the models to reason, mitigating the bias and obtaining more robust models.

A Trip Towards Fairness: Bias and De-Biasing in Large Language Models
Leonardo Ranaldi | Elena Sofia Ruzzetti | Davide Venditti | Dario Onorati | Fabio Massimo Zanzotto
Proceedings of the 13th Joint Conference on Lexical and Computational Semantics (*SEM 2024)

Cheap-to-Build Very Large-Language Models (CtB-LLMs) with affordable training are emerging as the next big revolution in natural language processing and understanding. These CtB-LLMs are democratizing access to trainable Very Large-Language Models (VLLMs) and, thus, may represent the building blocks of many NLP systems solving downstream tasks. Hence, any bias in CtB-LLMs, small or large, may cause widespread harm. In this paper, we performed a large-scale investigation of the bias of three families of CtB-LLMs, and we showed that debiasing techniques are effective and usable. Indeed, according to current tests, the LLaMA and OPT families exhibit significant bias in gender, race, religion, and profession. In contrast to analyses of other LLMs, we discovered that bias depends not on the number of parameters but on perplexity. Finally, debiasing OPT with LoRA reduces bias by up to 4.12 points in the normalized stereotype score.

2023

Measuring bias in Instruction-Following models with P-AT
Dario Onorati | Elena Sofia Ruzzetti | Davide Venditti | Leonardo Ranaldi | Fabio Massimo Zanzotto
Findings of the Association for Computational Linguistics: EMNLP 2023

Instruction-Following Language Models (IFLMs) are promising and versatile tools for solving many downstream, information-seeking tasks. Given their success, there is an urgent need for a shared resource to determine whether existing and new IFLMs are prone to produce biased language interactions. In this paper, we propose the Prompt Association Test (P-AT): a new resource for testing the presence of social biases in IFLMs. P-AT stems from WEAT (Caliskan et al., 2017) and generalizes the notion of measuring social biases to IFLMs. In essence, we cast WEAT word tests as prompted classification tasks and associate with them a metric, the bias score. Our resource consists of 2310 prompts. We then experimented with several families of IFLMs, discovering gender and race biases in all the analyzed models. We expect P-AT to be an important tool for quantifying bias across different dimensions and, therefore, for encouraging the creation of fairer IFLMs before their distortions have consequences in the real world.

Exploring Linguistic Properties of Monolingual BERTs with Typological Classification among Languages
Elena Sofia Ruzzetti | Federico Ranaldi | Felicia Logozzo | Michele Mastromattei | Leonardo Ranaldi | Fabio Massimo Zanzotto
Findings of the Association for Computational Linguistics: EMNLP 2023

The impressive achievements of transformers force NLP researchers to delve into how these models represent the underlying structure of natural language. In this paper, we propose a novel standpoint to investigate this issue: using typological similarities among languages to observe how their respective monolingual models encode structural information. We compare transformers layer-wise for typologically similar languages to observe whether these similarities emerge at particular layers. For this investigation, we propose using Centered Kernel Alignment to measure similarity among weight matrices. We found that syntactic typological similarity is consistent with the similarity between the weights in the middle layers, which are the pretrained BERT layers to which syntax encoding is generally attributed. Moreover, we observe that domain adaptation on semantically equivalent texts enhances this similarity among weight matrices.

Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics: Tutorial Abstracts
Fabio Massimo Zanzotto | Sameer Pradhan
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics: Tutorial Abstracts

Modeling Easiness for Training Transformers with Curriculum Learning
Leonardo Ranaldi | Giulia Pucci | Fabio Massimo Zanzotto
Proceedings of the 14th International Conference on Recent Advances in Natural Language Processing

Directly learning from complex examples is generally problematic for humans and machines alike. Indeed, a better strategy is exposing learners to examples in a reasonable, pedagogically motivated order. Curriculum Learning (CL) has been proposed to import this strategy into the training of machine learning models. In this paper, building on Curriculum Learning, we propose a novel, linguistically motivated measure of example complexity for organizing examples during learning. Our complexity measure, LRC, is based on length, rarity, and comprehensibility. The resulting learning model is CL-LRC, that is, CL with LRC. Experiments on downstream tasks show that CL-LRC outperforms existing CL and non-CL methods for training BERT and RoBERTa from scratch. Furthermore, we analyzed different measures, including perplexity, loss, and the learning curves of different models pre-trained from scratch, showing that CL-LRC performs better than the state of the art.

The Dark Side of the Language: Pre-trained Transformers in the DarkNet
Leonardo Ranaldi | Aria Nourbakhsh | Elena Sofia Ruzzetti | Arianna Patrizi | Dario Onorati | Michele Mastromattei | Francesca Fallucchi | Fabio Massimo Zanzotto
Proceedings of the 14th International Conference on Recent Advances in Natural Language Processing

Pre-trained Transformers are challenging human performance in many Natural Language Processing tasks. The massive datasets used for pre-training seem to be the key to their success on existing tasks. In this paper, we explore how a range of pre-trained natural language understanding models perform on genuinely unseen sentences provided by classification tasks over a DarkNet corpus. Surprisingly, results show that syntactic and lexical neural networks perform on par with pre-trained Transformers even after fine-tuning. Only after what we call extreme domain adaptation, that is, retraining with the masked language model task on the entire novel corpus, do pre-trained Transformers reach their standard high results. This suggests that huge pre-training corpora may give Transformers unexpected help, since they are exposed to many of the possible sentences.

PreCog: Exploring the Relation between Memorization and Performance in Pre-trained Language Models
Leonardo Ranaldi | Elena Sofia Ruzzetti | Fabio Massimo Zanzotto
Proceedings of the 14th International Conference on Recent Advances in Natural Language Processing

Large Language Models (LLMs) are impressive machines with the ability to memorize, and possibly generalize from, learning examples. We present here a small, focused contribution to the analysis of the interplay between memorization and performance of BERT in downstream tasks. We propose PreCog, a measure for evaluating memorization from pre-training, and we analyze its correlation with BERT’s performance. Our experiments show that highly memorized examples are better classified, suggesting that memorization is an essential key to BERT’s success.

2022

Lacking the Embedding of a Word? Look it up into a Traditional Dictionary
Elena Sofia Ruzzetti | Leonardo Ranaldi | Michele Mastromattei | Francesca Fallucchi | Noemi Scarpato | Fabio Massimo Zanzotto
Findings of the Association for Computational Linguistics: ACL 2022

Word embeddings are powerful dictionaries, which may easily capture language variations. However, these dictionaries fail to give sense to rare words, which are surprisingly often covered by traditional dictionaries. In this paper, we propose using definitions retrieved from traditional dictionaries to produce word embeddings for rare words. For this purpose, we introduce two methods: Definition Neural Network (DefiNNet) and Define BERT (DefBERT). In our experiments, DefiNNet and DefBERT significantly outperform state-of-the-art as well as baseline methods devised for producing embeddings of unknown words. In fact, DefiNNet significantly outperforms FastText, which implements a method for the same task based on n-grams, and DefBERT significantly outperforms the BERT method for OOV words. Hence, definitions in traditional dictionaries are useful for building word embeddings for rare words.

Change My Mind: How Syntax-based Hate Speech Recognizer Can Uncover Hidden Motivations Based on Different Viewpoints
Michele Mastromattei | Valerio Basile | Fabio Massimo Zanzotto
Proceedings of the 1st Workshop on Perspectivist Approaches to NLP @LREC2022

Hate speech recognizers may mislabel sentences by not considering the different opinions that society has on selected topics. In this paper, we show how explainable machine learning models based on syntax can help to understand the motivations that induce a sentence to be offensive to a certain demographic group. By comparing and contrasting the results, we show the key points that make a sentence labeled as hate speech and how this varies across different ethnic groups.

Every time I fire a conversational designer, the performance of the dialogue system goes down
Giancarlo Xompero | Michele Mastromattei | Samir Salman | Cristina Giannone | Andrea Favalli | Raniero Romagnoli | Fabio Massimo Zanzotto
Proceedings of the Thirteenth Language Resources and Evaluation Conference

Incorporating handwritten domain scripts into neural-based task-oriented dialogue systems may be an effective way to reduce the need for large sets of annotated dialogues. In this paper, we investigate how the use of domain scripts written by conversational designers affects the performance of neural-based dialogue systems. To support this investigation, we propose the Conversational-Logic-Injection-in-Neural-Network system (CLINN) where domain scripts are coded in semi-logical rules. By using CLINN, we evaluated semi-logical rules produced by a team of differently-skilled conversational designers. We experimented with the Restaurant domain of the MultiWOZ dataset. Results show that external knowledge is extremely important for reducing the need for annotated examples for conversational systems. In fact, rules from conversational designers used in CLINN significantly outperform a state-of-the-art neural-based dialogue system when trained with smaller sets of annotated dialogues.

2020

KERMIT: Complementing Transformer Architectures with Encoders of Explicit Syntactic Interpretations
Fabio Massimo Zanzotto | Andrea Santilli | Leonardo Ranaldi | Dario Onorati | Pierfrancesco Tommasino | Francesca Fallucchi
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)

Syntactic parsers have dominated natural language understanding for decades. Yet, their syntactic interpretations are losing centrality in downstream tasks due to the success of large-scale textual representation learners. In this paper, we propose KERMIT (Kernel-inspired Encoder with Recursive Mechanism for Interpretable Trees) to embed symbolic syntactic parse trees into artificial neural networks and to visualize how syntax is used in inference. We experimented with KERMIT paired with two state-of-the-art transformer-based universal sentence encoders (BERT and XLNet) and we showed that KERMIT can indeed boost their performance by effectively embedding human-coded universal syntactic representations in neural networks.

2018

SyntNN at SemEval-2018 Task 2: is Syntax Useful for Emoji Prediction? Embedding Syntactic Trees in Multi Layer Perceptrons
Fabio Massimo Zanzotto | Andrea Santilli
Proceedings of the 12th International Workshop on Semantic Evaluation

In this paper, we present SyntNN as a way to include traditional syntactic models in the multilayer neural networks used for SemEval-2018 Task 2, emoji prediction. The model builds on the distributed tree embedder, also known as the distributed tree kernel. Initial results are extremely encouraging, but additional analysis is needed to overcome the problem of overfitting.

2015

Squibs: When the Whole Is Not Greater Than the Combination of Its Parts: A “Decompositional” Look at Compositional Distributional Semantics
Fabio Massimo Zanzotto | Lorenzo Ferrone | Marco Baroni
Computational Linguistics, Volume 41, Issue 1 - March 2015

2014

Compositional Distributional Semantics Models in Chunk-based Smoothed Tree Kernels
Nghia The Pham | Lorenzo Ferrone | Fabio Massimo Zanzotto
Proceedings of the Third Joint Conference on Lexical and Computational Semantics (*SEM 2014)

haLF: Comparing a Pure CDSM Approach with a Standard Machine Learning System for RTE
Lorenzo Ferrone | Fabio Massimo Zanzotto
Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014)

Towards Syntax-aware Compositional Distributional Semantic Models
Lorenzo Ferrone | Fabio Massimo Zanzotto
Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers

2013

Transducing Sentences to Syntactic Feature Vectors: an Alternative Way to “Parse”?
Fabio Massimo Zanzotto | Lorenzo Dell’Arciprete
Proceedings of the Workshop on Continuous Vector Space Models and their Compositionality

Linear Compositional Distributional Semantics and Structural Kernels
Lorenzo Ferrone | Fabio Massimo Zanzotto
Proceedings of the Joint Symposium on Semantic Processing. Textual Inference and Structures in Corpora

SemEval-2013 Task 5: Evaluating Phrasal Semantics
Ioannis Korkontzelos | Torsten Zesch | Fabio Massimo Zanzotto | Chris Biemann
Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013)

2011

Senso Comune, an Open Knowledge Base of Italian Language
Guido Vetere | Alessandro Oltramari | Isabella Chiari | Elisabetta Jezek | Laure Vieu | Fabio Massimo Zanzotto
Traitement Automatique des Langues, Volume 52, Numéro 3 : Ressources linguistiques libres [Free Language Resources]

Proceedings of TextGraphs-6: Graph-based Methods for Natural Language Processing
Irina Matveeva | Alessandro Moschitti | Lluís Màrquez | Fabio Massimo Zanzotto
Proceedings of TextGraphs-6: Graph-based Methods for Natural Language Processing

Distributed Structures and Distributional Meaning
Fabio Massimo Zanzotto | Lorenzo Dell’Arciprete
Proceedings of the Workshop on Distributional Semantics and Compositionality

Linguistic Redundancy in Twitter
Fabio Massimo Zanzotto | Marco Pennacchiotti | Kostas Tsioutsiouliklis
Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing

2010

Syntactic/Semantic Structures for Textual Entailment Recognition
Yashar Mehdad | Alessandro Moschitti | Fabio Massimo Zanzotto
Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics

Proceedings of TextGraphs-5 - 2010 Workshop on Graph-based Methods for Natural Language Processing
Carmen Banea | Alessandro Moschitti | Swapna Somasundaran | Fabio Massimo Zanzotto
Proceedings of TextGraphs-5 - 2010 Workshop on Graph-based Methods for Natural Language Processing

Expanding textual entailment corpora from Wikipedia using co-training
Fabio Massimo Zanzotto | Marco Pennacchiotti
Proceedings of the 2nd Workshop on The People’s Web Meets NLP: Collaboratively Constructed Semantic Resources

Estimating Linear Models for Compositional Distributional Semantics
Fabio Massimo Zanzotto | Ioannis Korkontzelos | Francesca Fallucchi | Suresh Manandhar
Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010)

Generic Ontology Learners on Application Domains
Francesca Fallucchi | Maria Teresa Pazienza | Fabio Massimo Zanzotto
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

In ontology learning from texts, ontology-rich domains offer large structured domain knowledge repositories, and general corpora can rely on large general structured knowledge repositories such as WordNet (Miller, 1995). Ontology learning methods are more useful in ontology-poor domains. Yet, under these conditions, these methods do not perform particularly well, as training material is insufficient. In this paper we present an LSP ontology learning method that can exploit models learned from a generic domain to extract new information in a specific domain. In our model, we first learn a model from training data and then use the learned model to discover knowledge in a specific domain. We tested our model adaptation strategy using a background domain to learn the is-a networks of the Earth Observation Domain as a specific domain. We demonstrate that our method captures domain knowledge better than other generic models: our model better captures what is expected by domain experts than a baseline method based only on WordNet. The latter correlates better with non-domain annotators asked to produce an ontology for the specific domain.

2009

Singular Value Decomposition for Feature Selection in Taxonomy Learning
Francesca Fallucchi | Fabio Massimo Zanzotto
Proceedings of the International Conference RANLP-2009

Efficient kernels for sentence pair classification
Fabio Massimo Zanzotto | Lorenzo Dell’Arciprete
Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing

SVD Feature Selection for Probabilistic Taxonomy Learning
Francesca Fallucchi | Fabio Massimo Zanzotto
Proceedings of the Workshop on Geometrical Models of Natural Language Semantics

Proceedings of the 2009 Workshop on Applied Textual Inference (TextInfer)
Chris Callison-Burch | Ido Dagan | Christopher Manning | Marco Pennacchiotti | Fabio Massimo Zanzotto
Proceedings of the 2009 Workshop on Applied Textual Inference (TextInfer)

2008

Encoding Tree Pair-Based Graphs in Learning Algorithms: The Textual Entailment Recognition Case
Alessandro Moschitti | Fabio Massimo Zanzotto
Coling 2008: Proceedings of the 3rd Textgraphs workshop on Graph-based Algorithms for Natural Language Processing

Yet another Platform for Extracting Knowledge from Corpora
Francesca Fallucchi | Fabio Massimo Zanzotto
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)

The research field of “extracting knowledge bases from text collections” seems to be mature: its target and its working hypotheses are clear. In this paper we propose YAPEK, i.e., Yet Another Platform for Extracting Knowledge from corpora, intended as a common base for collecting the majority of algorithms that extract knowledge bases from corpora. The idea is that, when many knowledge extraction algorithms are collected under the same platform, relative comparisons are clearer and many algorithms can be leveraged to extract more valuable knowledge for final tasks such as Textual Entailment Recognition. As we want to collect many knowledge extraction algorithms, YAPEK is based on the three working hypotheses of the area: the basic hypothesis, the distributional hypothesis, and point-wise assertion patterns. In YAPEK, these three hypotheses define two spaces: the space of the target textual forms and the space of the contexts. The platform guarantees the possibility of rapidly implementing many models for extracting knowledge from corpora, as it gives clear entry points for modeling what really differs among the algorithms: the feature spaces, the distances in these spaces, and the actual algorithm.

2007

Shallow Semantic in Fast Textual Entailment Rule Learners
Fabio Massimo Zanzotto | Marco Pennacchiotti | Alessandro Moschitti
Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing

2006

A Dependency-based Algorithm for Grammar Conversion
Alessandro Bahgat Shehata | Fabio Massimo Zanzotto
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)

In this paper we present a model to translate one grammatical formalism into another. The model is applicable only under restrictive conditions. However, it is fairly useful for many purposes: parsing evaluation, researching methods for truly combining different parsing outputs to reach better parsing performance, and building larger syntactically annotated corpora for data-driven approaches. The model has been tested on a case study: the translation of the Turin Tree Bank Grammar to the Shallow Grammar of the CHAOS Italian parser.

Mixing WordNet, VerbNet and PropBank for studying verb relations
Maria Teresa Pazienza | Marco Pennacchiotti | Fabio Massimo Zanzotto
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)

In this paper we present a novel resource for studying the semantics of verb relations. The resource is created by mixing sense-relational knowledge from WordNet, frame knowledge from VerbNet, and corpus knowledge from PropBank. As a result, a set of about 1000 frame pairs is made available. A frame pair represents a pair of verbs in a particular semantic relation, accompanied by specific information such as the syntactic-semantic frames of the two verbs, the mapping among their thematic roles, and a set of textual examples extracted from the Penn Treebank. We specifically focus on four relations: Troponymy, Causation, Entailment, and Antonymy. The different steps required for the mapping are described in detail, and statistics on mutual resource coverage are reported. We also propose a practical use of the resource for the task of Textual Entailment acquisition and for Question Answering. A first attempt to automate the mapping among verb arguments is also presented: early experiments show that simple techniques can achieve good results, up to 85% F-Measure.

Automatic Learning of Textual Entailments with Cross-Pair Similarities
Fabio Massimo Zanzotto | Alessandro Moschitti
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics

Discovering Asymmetric Entailment Relations between Verbs Using Selectional Preferences
Fabio Massimo Zanzotto | Marco Pennacchiotti | Maria Teresa Pazienza
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics

Similarity between Pairs of Co-indexed Trees for Textual Entailment Recognition
Fabio Massimo Zanzotto | Alessandro Moschitti
Proceedings of TextGraphs: the First Workshop on Graph Based Methods for Natural Language Processing

2005

Discovering Entailment Relations Using “Textual Entailment Patterns”
Fabio Massimo Zanzotto | Maria Teresa Pazienza | Marco Pennacchiotti
Proceedings of the ACL Workshop on Empirical Modeling of Semantic Equivalence and Entailment

2004

Ontological resources and question answering
Roberto Basili | Dorte H. Hansen | Patrizia Paggio | Maria Teresa Pazienza | Fabio Massimo Zanzotto
Proceedings of the Workshop on Pragmatics of Question Answering at HLT-NAACL 2004

Large Scale Experiments for Semantic Labeling of Noun Phrases in Raw Text
Louise Guthrie | Roberto Basili | Fabio Zanzotto | Kalina Bontcheva | Hamish Cunningham | David Guthrie | Jia Cui | Marco Cammisa | Jerry Cheng-Chieh Liu | Cassia Farria Martin | Kristiyan Haralambiev | Martin Holub | Klaus Macherey | Fredrick Jelinek
Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04)

A2Q: An Agent-based Architecture for Multilingual Q&A
Roberto Basili | Nicola Lorusso | Maria Teresa Pazienza | Fabio Massimo Zanzotto
Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04)

A Similarity Measure for Unsupervised Semantic Disambiguation
Roberto Basili | Marco Cammisa | Fabio Massimo Zanzotto
Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04)

2002

Knowledge-Based Multilingual Document Analysis
R. Basili | R. Catizone | L. Padro | M.T. Pazienza | G. Rigau | A. Setzer | N. Webb | F. Zanzotto
COLING-02: SEMANET: Building and Using Semantic Networks

Decision Trees as Explicit Domain Term Definitions
Roberto Basili | Maria Teresa Pazienza | Fabio Massimo Zanzotto
COLING 2002: The 19th International Conference on Computational Linguistics

2001

Multilingual Authoring: the NAMIC Approach
Roberto Basili | Maria Teresa Pazienza | Fabio Massimo Zanzotto | Roberta Catizone | Andrea Setzer | Nick Webb | Yorick Wilks | Lluís Padró | German Rigau
Proceedings of the ACL 2001 Workshop on Human Language Technology and Knowledge Management

2000

The Italian Syntactic-Semantic Treebank: Architecture, Annotation, Tools and Evaluation
S. Montemagni | F. Barsotti | M. Battista | N. Calzolari | O. Corazzari | A. Zampolli | F. Fanciulli | M. Massetani | R. Raffaelli | R. Basili | M. T. Pazienza | D. Saracino | F. Zanzotto | N. Mana | F. Pianesi | R. Delmonte
Proceedings of the COLING-2000 Workshop on Linguistically Interpreted Corpora

Tuning Lexicons to New Operational Scenarios
Roberto Basili | Maria Teresa Pazienza | Michele Vindigni | Fabio Massimo Zanzotto
Proceedings of the Second International Conference on Language Resources and Evaluation (LREC’00)