Alan K. Melby

Also published as: Alan Melby


2024

The Multi-Range Theory of Translation Quality Measurement: MQM scoring models and Statistical Quality Control
Arle Lommel | Serge Gladkoff | Alan Melby | Sue Ellen Wright | Ingemar Strandvik | Katerina Gasova | Angelika Vaasa | Andy Benzo | Romina Marazzato Sparano | Monica Foresi | Johani Innis | Lifeng Han | Goran Nenadic
Proceedings of the 16th Conference of the Association for Machine Translation in the Americas (Volume 2: Presentations)

The year 2024 marks the 10th anniversary of the Multidimensional Quality Metrics (MQM) framework for analytic translation quality evaluation. The MQM error typology has been widely used by practitioners in the translation and localization industry and has served as the basis for many derivative projects. The annual Conference on Machine Translation (WMT) shared tasks on both human and automatic translation quality evaluation have used the MQM error typology. The metric stands on two pillars: the error typology and the scoring model. The scoring model calculates the quality score from annotation data, detailing how to convert error-type and severity counts into numeric scores in order to determine whether the content meets specifications. Previously, only the raw scoring model had been published. This April, the MQM Council published the Linear Calibrated Scoring Model, officially presented herein, along with the Non-Linear Scoring Model, which had not previously been published.
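The scoring idea the abstract describes — converting per-severity error counts into a normalized numeric score — can be illustrated with a minimal sketch. This is a hypothetical example, not the calibrated or non-linear models the paper presents; the severity weights (minor = 1, major = 5, critical = 25) follow commonly published MQM defaults, and the function name and pass threshold are illustrative assumptions.

```python
# Illustrative sketch of a raw MQM-style scoring calculation.
# Severity weights follow widely cited MQM defaults; the calibrated and
# non-linear models described in the paper are more involved than this.

SEVERITY_WEIGHTS = {"neutral": 0, "minor": 1, "major": 5, "critical": 25}

def raw_mqm_score(error_counts, word_count):
    """Convert per-severity error counts into a quality score.

    error_counts: mapping such as {"minor": 4, "major": 1}
    word_count:   size of the evaluated sample, in words
    Returns a score where 100 means no penalties at all.
    """
    # Sum weighted penalties across all annotated errors.
    penalty_total = sum(
        SEVERITY_WEIGHTS[sev] * n for sev, n in error_counts.items()
    )
    # Normalize per word so samples of different sizes are comparable.
    penalty_per_word = penalty_total / word_count
    return 100 * (1 - penalty_per_word)

# Example: 4 minor + 1 major error in a 500-word sample.
score = raw_mqm_score({"minor": 4, "major": 1}, word_count=500)
# Whether this meets specifications would depend on a project-specific
# threshold (hypothetical here), e.g. score >= 95.
```

In practice, the pass/fail threshold and any calibration of weights come from the project specifications, which is exactly the gap the calibrated and non-linear models are meant to address.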

Labels on Translation Output: a triple win
Alan Melby
Proceedings of the 16th Conference of the Association for Machine Translation in the Americas (Volume 2: Presentations)

In the 2023 edition of the ASTM International translation standard (F2575), the labels BRT and UMT have been standardized. The label BRT stands for ‘Bilingually Reviewed Translation, by a qualified language professional’. The label UMT covers everything else, from raw machine translation, to MT where only the target text is checked, to human translation that does not involve a qualified professional. Thus, UMT could be expanded as ‘Unreviewed or Missing-qualifications Translation’. This presentation will argue that the use of the labels BRT and UMT is a triple win: the ‘consumers’ (end users) of a translation win because they have useful information for risk analysis (harm from errors); MT developers win because they have useful metadata when selecting training material; and professional translators win by increasing their visibility to the public. The presentation will give a history of these two labels and enlist the help of the entire AMTA community in promoting their use.

2018

Tutorial: MQM-DQF: A Good Marriage (Translation Quality for the 21st Century)
Arle Lommel | Alan Melby
Proceedings of the 13th Conference of the Association for Machine Translation in the Americas (Volume 2: User Track)

Translation API Cases and Classes (TAPICC)
Alan Melby
Proceedings of the AMTA 2018 Workshop on The Role of Authoritative Standards in the MT Environment

2015

QT21: A new era for translators and the computer
Alan Melby
Proceedings of Translating and the Computer 37

Quality evaluation of four translations of a kidney document: focus on reliability
Alan K. Melby
Proceedings of Machine Translation Summit XV: User Track

2014

LexTerm Manager: Design for an Integrated Lexicography and Terminology System
Joshua Elliot | Logan Kearsley | Jason Housley | Alan Melby
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)

We present a design for a multi-modal database system for lexical information that can be accessed in either lexicographical or terminological views. The use of a single merged data model makes it easy to transfer common information between termbases and dictionaries, thus facilitating information sharing and re-use. Our combined model is based on the LMF and TMF metamodels for lexicographical and terminological databases and is compatible with both, thus allowing for the import of information from existing dictionaries and termbases, which may be transferred to the complementary view and re-exported. We also present a new Linguistic Configuration Model, analogous to a TBX XCS file, which can be used to specify multiple language-specific schemata for validating and understanding lexical information in a single database. Linguistic configurations are mutable and can be refined and evolved over time as understanding of documentary needs improves. The system is designed with a client-server architecture using the HTTP protocol, allowing for the independent implementation of multiple clients for specific use cases and easy deployment over the web.

2012

Linport as a standard for interoperability between translation systems
Alan K. Melby | Tyler A. Snow
Proceedings of Translating and the Computer 34

Reliably Assessing the Quality of Post-edited Translation Based on Formalized Structured Translation Specifications
Alan K. Melby | Jason Housley | Paul J. Fields | Emily Tuioti
Workshop on Post-Editing Technology and Practice

Post-editing of machine translation has become more common in recent years. This has created the need for a formal method of assessing the performance of post-editors in terms of whether they are able to produce post-edited target texts that follow project specifications. This paper proposes the use of formalized structured translation specifications (FSTS) as a basis for post-editor assessment. To determine if potential evaluators are able to reliably assess the quality of post-edited translations, an experiment used texts representing the work of five fictional post-editors. Two software applications were developed to facilitate the assessment: the Ruqual Specifications Writer, which aids in establishing post-editing project specifications; and Ruqual Rubric Viewer, which provides a graphical user interface for constructing a rubric in a machine-readable format. Seventeen non-experts rated the translation quality of each simulated post-edited text. Intraclass correlation analysis showed evidence that the evaluators were highly reliable in evaluating the performance of the post-editors. Thus, we assert that using FSTS specifications applied through the Ruqual software tools provides a useful basis for evaluating the quality of post-edited texts.

2006

The Potential and Limitations of MT Paradigm
Daniel Marcu | Alan Melby
Proceedings of the 7th Conference of the Association for Machine Translation in the Americas: Invited Talks

2000

Accessibility of Multilingual Terminological Resources - Current Problems and Prospects for the Future
Gerhard Budin | Alan K. Melby
Proceedings of the Second International Conference on Language Resources and Evaluation (LREC’00)

1999

Integrating Translation Technologies Using SALT
Gerhard Budin | Alan K. Melby | Sue Ellen Wright | Deryle Lonsdale | Arle Lommel
Proceedings of Translating and the Computer 21

1996

Panel: The limits of automation: optimists vs skeptics.
Eduard Hovy | Ken Church | Denis Gachot | Marge Leon | Alan Melby | Sergei Nirenburg | Yorick Wilks
Conference of the Association for Machine Translation in the Americas

1994

Machine translation and philosophy of language
Alan Melby
Proceedings of the Second International Conference on Machine Translation: Ten years on

1991

TEI-TERM: an SGML-based interchange format for terminology files
Alan Melby | Sue Ellen Wright
Proceedings of Translating and the Computer 13: The theory and practice of machine translation – a marriage of convenience?

1988

Lexical Transfer: Between a Source Rock and a Hard Target
Alan K. Melby
Coling Budapest 1988 Volume 2: International Conference on Computational Linguistics

1986

Lexical Transfer: A Missing Element in Linguistics Theories
Alan K. Melby
Coling 1986 Volume 1: The 11th International Conference on Computational Linguistics

1984

Machine translation with post editing versus a three-level integrated translator aid system
Alan K. Melby
Proceedings of the International Conference on Methodology and Techniques of Machine Translation: Processing from words to language

The standard design for a computer-assisted translation system consists of data entry of the source text, machine translation, and post-editing (i.e. revision) of the raw machine translation. This paper discusses this standard design and presents an alternative three-level design consisting of word processing integrated with terminology aids, simple source-text processing, and a link to an off-line machine translation system. Advantages of the new design are discussed.

1983

COMPUTER-ASSISTED TRANSLATION SYSTEMS: The Standard Design and A Multi-level Design
Alan K. Melby
First Conference on Applied Natural Language Processing

1982

Multi-Level Translation Aids in a Distributed System
Alan K. Melby
Coling 1982: Proceedings of the Ninth International Conference on Computational Linguistics

1980

ITS: Interactive Translation System
Alan K. Melby | Melvin R. Smith | Jill Peterson
COLING 1980 Volume 1: The 8th International Conference on Computational Linguistics

1977

Pitch Contour Generation in Speech Synthesis: A Junction Grammar Approach
Alan K. Melby | William J. Strong | Eldon G. Lytle | Ronald Millett
American Journal of Computational Linguistics (February 1977)

1975

Junction Grammar as a Base for Natural Language Processing
Eldon G. Lytle | Dennis Packard | Daryl Gibb | Alan K. Melby | Floyd H. Billings, Jr.
American Journal of Computational Linguistics (September 1975)