Claudio Giuliano

Also published as: C. Giuliano


2014

Identification of Bilingual Terms from Monolingual Documents for Statistical Machine Translation
Mihael Arcan | Claudio Giuliano | Marco Turchi | Paul Buitelaar
Proceedings of the 4th International Workshop on Computational Terminology (Computerm)

2013

Outsourcing FrameNet to the Crowd
Marco Fossati | Claudio Giuliano | Sara Tonelli
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

2010

FBK-IRST: Semantic Relation Extraction Using Cyc
Kateryna Tymoshenko | Claudio Giuliano
Proceedings of the 5th International Workshop on Semantic Evaluation

Extending English ACE 2005 Corpus Annotation with Ground-truth Links to Wikipedia
Luisa Bentivogli | Pamela Forner | Claudio Giuliano | Alessandro Marchetti | Emanuele Pianta | Kateryna Tymoshenko
Proceedings of the 2nd Workshop on The People’s Web Meets NLP: Collaboratively Constructed Semantic Resources

2009

Kernel Methods for Minimally Supervised WSD
Claudio Giuliano | Alfio Massimiliano Gliozzo | Carlo Strapparava
Computational Linguistics, Volume 35, Number 4, December 2009

Wikipedia as Frame Information Repository
Sara Tonelli | Claudio Giuliano
Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing

Fine-Grained Classification of Named Entities Exploiting Latent Semantic Kernels
Claudio Giuliano
Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL-2009)

2008

Instance-Based Ontology Population Exploiting Named-Entity Substitution
Claudio Giuliano | Alfio Gliozzo
Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008)

2007

FBK-IRST: Kernel Methods for Semantic Relation Extraction
Claudio Giuliano | Alberto Lavelli | Daniele Pighin | Lorenza Romano
Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007)

FBK-irst: Lexical Substitution Task Exploiting Domain and Syntagmatic Coherence
Claudio Giuliano | Alfio Gliozzo | Carlo Strapparava
Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007)

Instance Based Lexical Entailment for Ontology Population
Claudio Giuliano | Alfio Gliozzo
Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)

2006

Exploiting Shallow Linguistic Information for Relation Extraction from Biomedical Literature
Claudio Giuliano | Alberto Lavelli | Lorenza Romano
11th Conference of the European Chapter of the Association for Computational Linguistics

Simple Information Extraction (SIE): A Portable and Effective IE System
Claudio Giuliano | Alberto Lavelli | Lorenza Romano
Proceedings of the Workshop on Adaptive Text Extraction and Mining (ATEM 2006)

Syntagmatic Kernels: a Word Sense Disambiguation Case Study
Claudio Giuliano | Alfio Gliozzo | Carlo Strapparava
Proceedings of the Workshop on Learning Structured Information in Natural Language Applications

2005

Domain Kernels for Word Sense Disambiguation
Alfio Gliozzo | Claudio Giuliano | Carlo Strapparava
Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05)

2004

Pattern abstraction and term similarity for Word Sense Disambiguation: IRST at Senseval-3
Carlo Strapparava | Alfio Gliozzo | Claudio Giuliano
Proceedings of SENSEVAL-3, the Third International Workshop on the Evaluation of Systems for the Semantic Analysis of Text

A Critical Survey of the Methodology for IE Evaluation
A. Lavelli | M. E. Califf | F. Ciravegna | D. Freitag | C. Giuliano | N. Kushmerick | L. Romano
Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04)

We survey the evaluation methodology adopted in Information Extraction (IE), as defined in the MUC conferences and in later independent efforts applying machine learning to IE. We point out a number of problematic issues that may hamper the comparison between results obtained by different researchers. Some of them are common to other NLP tasks: e.g., the difficulty of exactly identifying the effects on performance of the data (sample selection and sample size), of the domain theory (features selected), and of algorithm parameter settings. Issues specific to IE evaluation include: how leniently to assess inexact identification of filler boundaries, the possibility of multiple fillers for a slot, and how the counting is performed. We argue that, when specifying an information extraction task, a number of characteristics should be clearly defined. However, only a few of them are usually specified explicitly in published papers. Our aim is to elaborate a clear and detailed experimental methodology and propose it to the IE community. The goal is to reach widespread agreement on this proposal so that future IE evaluations will adopt the proposed methodology, making comparisons between algorithms fair and reliable. In order to achieve this goal, we will develop and make available to the community a set of tools and resources that incorporate a standardized IE methodology.