Title: Learning-based MT Approaches for Languages with Limited Resources
1. Learning-based MT Approaches for Languages with Limited Resources
- Alon Lavie
- Language Technologies Institute
- Carnegie Mellon University
- Joint work with
- Jaime Carbonell, Lori Levin, Kathrin Probst, Erik
Peterson, Christian Monson, Ariadna Font-Llitjos,
Alison Alvarez, Roberto Aranovich
2. Outline
- Rationale for limited-resource learning-based MT
- Roadmap for limited-resource learning-based MT
- Framework overview
- Elicitation
- Learning transfer rules
- Automatic rule refinement
- Example prototypes
- Implications for MT with vast parallel data
- Conclusions and future directions
3. Why Machine Translation for Languages with Limited Resources?
- We are in the age of information explosion
- The internet + web + Google → anyone can get the information they want anytime
- But what about the text in all those other languages?
- How do they read all this English stuff?
- How do we read all the stuff that they put online?
- MT for these languages would enable:
- Better government access to native, indigenous and minority communities
- Better minority and native community participation in information-rich activities (health care, education, government) without giving up their languages
- Civilian and military applications (disaster relief)
- Language preservation
4. The Roadmap to Learning-based MT
- Automatic acquisition of necessary language resources and knowledge using machine learning methodologies:
- Learning morphology (analysis/generation)
- Rapid acquisition of broad-coverage word-to-word and phrase-to-phrase translation lexicons
- Learning of syntactic structural mappings
- Tree-to-tree and string-to-tree structure transformations [Knight et al., Eisner, Melamed]
- Learning syntactic transfer rules with resources (grammar, parses) for just one of the two languages
- Automatic rule refinement and/or post-editing
- A framework for integrating the acquired MT resources into effective MT prototype systems
- Effective integration of acquired knowledge with statistical/distributional information
5. CMU's AVENUE Approach
- Elicitation: use bilingual native informants to produce a small high-quality word-aligned bilingual corpus of translated phrases and sentences
- Building elicitation corpora from feature structures
- Feature detection and navigation
- Transfer-rule learning: apply ML-based methods to automatically acquire syntactic transfer rules for translation between the two languages
- Learn from major language to minor language
- Translate from minor language to major language
- XFER + Decoder:
- XFER engine produces a lattice of possible transferred structures at all levels
- Decoder searches and selects the best-scoring combination
- Rule refinement: refine the acquired rules via a process of interaction with bilingual informants
- Morphology learning
- Word and phrase bilingual lexicon acquisition
6. AVENUE Architecture
7. The Transfer Engine
8. The Transfer Engine
- Some unique features:
- Works with either learned or manually-developed transfer grammars
- Handles rules with or without unification constraints
- Supports interfacing with servers for morphological analysis and generation
- Can handle ambiguous source-word analyses and/or SL segmentations represented in the form of lattice structures
9. The Lattice Decoder
- Simple stack decoder, similar in principle to SMT/EBMT decoders
- Searches for the best-scoring path of non-overlapping lattice arcs
- Scoring based on a log-linear combination of scoring components (no MER training yet)
- Scoring components:
- Standard trigram LM
- Fragmentation: how many arcs to cover the entire translation?
- Length penalty
- Rule scores (not fully integrated yet)
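The search described above can be sketched as dynamic programming over lattice arcs. This is a minimal illustration, not the AVENUE decoder: the arc format, weights, and scores are invented, and the trigram LM component is omitted for brevity.

```python
from math import log

# Minimal stack-decoder sketch (hypothetical arcs and weights): each arc
# covers a source span [start, end) and carries a translation plus a rule
# score. We search for the best-scoring chain of non-overlapping arcs that
# covers the whole input, combining components log-linearly.

def decode(arcs, length, weights):
    """arcs: list of (start, end, words, rule_score); returns (score, words)."""
    best = {0: (0.0, [])}  # best[i] = best hypothesis covering positions [0, i)
    for i in range(length):
        if i not in best:
            continue
        base_score, base_words = best[i]
        for start, end, words, rule_score in arcs:
            if start != i:
                continue
            # Log-linear combination: rule score plus a fragmentation
            # penalty (one per arc) and a length penalty (one per word).
            score = (base_score
                     + weights["rule"] * rule_score
                     + weights["frag"] * 1.0
                     + weights["len"] * len(words))
            if end not in best or score > best[end][0]:
                best[end] = (score, base_words + words)
    return best.get(length)

arcs = [
    (0, 2, ["the", "house"], log(0.6)),   # phrasal arc covering both words
    (0, 1, ["the"], log(0.9)),
    (1, 2, ["house"], log(0.7)),
]
weights = {"rule": 1.0, "frag": -0.5, "len": -0.1}
score, words = decode(arcs, 2, weights)
print(words)
```

With these weights the single phrasal arc wins over the two word arcs, because the fragmentation penalty is paid only once.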
10. Outline
- Rationale for learning-based MT
- Roadmap for learning-based MT
- Framework overview
- Elicitation
- Learning transfer rules
- Automatic rule refinement
- Example prototypes
- Implications for MT with vast parallel data
- Conclusions and future directions
11. Data Elicitation for Languages with Limited Resources
- Rationale:
- Large volumes of parallel text not available → create a small maximally-diverse parallel corpus that directly supports the learning task
- Bilingual native informant(s) can translate and align a small pre-designed elicitation corpus, using an elicitation tool
- Elicitation corpus designed to be typologically and structurally comprehensive and compositional
- Transfer-rule engine and new learning approach support acquisition of generalized transfer rules from the data
12. Elicitation Tool: English-Chinese Example
13. Elicitation Tool: English-Chinese Example
14. Elicitation Tool: English-Hindi Example
15. Elicitation Tool: English-Arabic Example
16. Elicitation Tool: Spanish-Mapudungun Example
17. Designing Elicitation Corpora
- What do we want to elicit?
- Diversity of linguistic phenomena and constructions
- Syntactic structural diversity
- How do we construct an elicitation corpus?
- Typological elicitation corpus: based on elicitation and documentation work of field linguists (e.g. Comrie 1977, Bouquiaux 1992); initial corpus size 1000 examples
- Structural elicitation corpus: based on a representative sample of English phrase structures; 120 examples
- Organized compositionally: elicit simple structures first, then use them as building blocks
- Goal: minimize size, maximize linguistic coverage
18. Typological Elicitation Corpus
- Feature detection
- Discover what features exist in the language and where/how they are marked
- Example: does the language mark gender of nouns? How and where is it marked?
- Method: compare translations of minimal pairs, i.e. sentences that differ in only ONE feature
- Elicit translations/alignments for detected features and their combinations
- Dynamic corpus navigation based on feature detection: no need to elicit for combinations involving non-existent features
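The minimal-pair test above can be stated very compactly. The sketch below uses invented data and function names, purely to illustrate the logic: if two elicited sentences differ in exactly one source-side feature and their translations differ, the target language marks that feature somewhere.

```python
# Toy sketch of minimal-pair feature detection (data and names invented):
# compare the translations of two sentences that differ in one feature only.

def detect_feature(feats_a, trans_a, feats_b, trans_b):
    """Return (differing feature, whether the target marks it)."""
    diff = [f for f in feats_a if feats_a[f] != feats_b[f]]
    if len(diff) != 1:
        raise ValueError("not a minimal pair")
    return diff[0], trans_a != trans_b

# Spanish-style toy pair: only gender differs, and the translations differ,
# so gender is marked in the target language.
feature, marked = detect_feature(
    {"subj-gender": "m", "num": "sg"}, "el gato pequeño",
    {"subj-gender": "f", "num": "sg"}, "la gata pequeña",
)
print(feature, marked)
```

If the two translations had been identical, the same test would conclude that the feature is unmarked, and elicitation for combinations involving it could be skipped.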
19. Typological Elicitation Corpus
- Initial typological corpus of about 1000 sentences was manually constructed
- New construction methodology for building an elicitation corpus using:
- A feature specification: lists the inventory of available features and their values
- A definition of the set of desired feature structures:
- Schemas define sets of desired combinations of features and values
- A multiplier algorithm generates the comprehensive set of feature structures
- A generation grammar and lexicon: an NLG generator produces NL sentences from the feature structures
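The multiplier step can be pictured as a cartesian product over the schema's features. The sketch below is an assumption about the idea, not AVENUE's implementation: the data structures and names are invented.

```python
from itertools import product

# Sketch of the "multiplier" idea: a schema names the features to vary, the
# feature specification lists the values each feature may take, and the
# multiplier emits every combination as one feature structure for the NLG
# generator to realize as a sentence.

feature_spec = {
    "num": ["sg", "pl"],
    "person": ["1", "2", "3"],
    "tense": ["past", "pres"],
}

def multiply(schema, spec):
    names = list(schema)
    for values in product(*(spec[n] for n in names)):
        yield dict(zip(names, values))

structures = list(multiply(["num", "person"], feature_spec))
print(len(structures))   # 2 values x 3 values = 6 feature structures
```

Schemas keep the combinatorics in check: multiplying only the features a schema names avoids generating the full cross-product of every feature in the specification.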
20. Structural Elicitation Corpus
- Goal: create a compact, diverse sample corpus of syntactic phrase structures in English in order to elicit how these map into the elicited language
- Methodology:
- Extracted all CFG rules from the Brown section of the Penn TreeBank (122K sentences)
- Simplified the POS tag set
- Constructed a frequency histogram of extracted rules
- Pulled out the simplest phrases for the most frequent rules for NPs, PPs, ADJPs, ADVPs, SBARs and sentences
- Some manual inspection and refinement
- Resulting corpus of about 120 phrases/sentences representing common structures
- See [Probst and Lavie, 2004]
21. Outline
- Rationale for learning-based MT
- Roadmap for learning-based MT
- Framework overview
- Elicitation
- Learning transfer rules
- Automatic rule refinement
- Example prototypes
- Implications for MT with vast parallel data
- Conclusions and future directions
22Transfer Rule Formalism
SL the old man, TL ha-ish ha-zaqen NPNP
DET ADJ N -gt DET N DET ADJ ( (X1Y1) (X1Y3)
(X2Y4) (X3Y2) ((X1 AGR) 3-SING) ((X1 DEF
DEF) ((X3 AGR) 3-SING) ((X3 COUNT)
) ((Y1 DEF) DEF) ((Y3 DEF) DEF) ((Y2 AGR)
3-SING) ((Y2 GENDER) (Y4 GENDER)) )
- Type information
- Part-of-speech/constituent information
- Alignments
- x-side constraints
- y-side constraints
- xy-constraints,
- e.g. ((Y1 AGR) (X1 AGR))
23. Transfer Rule Formalism (II)
SL: the old man, TL: ha-ish ha-zaqen

NP::NP [DET ADJ N] -> [DET N DET ADJ]
(
 (X1::Y1) (X1::Y3) (X2::Y4) (X3::Y2)
 ((X1 AGR) = 3-SING)
 ((X1 DEF) = DEF)
 ((X3 AGR) = 3-SING)
 ((X3 COUNT) = +)
 ((Y1 DEF) = DEF)
 ((Y3 DEF) = DEF)
 ((Y2 AGR) = 3-SING)
 ((Y2 GENDER) = (Y4 GENDER))
)

- Value constraints, e.g. ((X1 AGR) = 3-SING)
- Agreement constraints, e.g. ((Y2 GENDER) = (Y4 GENDER))
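The behavioral difference between the two constraint kinds can be shown with a small checker. This is a sketch with invented data structures, not the XFER engine's internals: a value constraint pins a feature to a constant, while an agreement constraint unifies one feature with another, propagating a value in either direction.

```python
# Toy constraint checker over feature structures (representation invented):
# fs maps node names like "X1" to feature dicts.

def check(constraints, fs):
    """Return True if every constraint is satisfiable, filling in features."""
    for kind, (node_a, feat_a), rhs in constraints:
        if kind == "value":                      # e.g. ((X1 AGR) = 3-SING)
            if fs[node_a].setdefault(feat_a, rhs) != rhs:
                return False
        else:                                    # e.g. ((Y2 GENDER) = (Y4 GENDER))
            node_b, feat_b = rhs
            a, b = fs[node_a].get(feat_a), fs[node_b].get(feat_b)
            if a is None:
                fs[node_a][feat_a] = b           # unify: copy value across
            elif b is None:
                fs[node_b][feat_b] = a
            elif a != b:
                return False                     # clash: rule does not apply
    return True

constraints = [
    ("value", ("X1", "AGR"), "3-SING"),
    ("agree", ("Y2", "GENDER"), ("Y4", "GENDER")),
]
fs = {"X1": {"AGR": "3-SING"}, "Y2": {}, "Y4": {"GENDER": "M"}}
print(check(constraints, fs), fs["Y2"]["GENDER"])
```

Here the agreement constraint copies the gender M from Y4 onto Y2, which is exactly how the rule above makes the second Hebrew adjective agree with its noun.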
24. Rule Learning - Overview
- Goal: acquire syntactic transfer rules
- Use available knowledge from the source side (grammatical structure)
- Three steps:
- Flat seed generation: first guesses at transfer rules; flat syntactic structure
- Compositionality learning: use previously learned rules to learn hierarchical structure
- Constraint learning: refine rules by learning appropriate feature constraints
25. Flat Seed Rule Generation
26. Flat Seed Rule Generation
- Create a flat transfer rule specific to the sentence pair, partially abstracted to POS
- Words that are aligned word-to-word and have the same POS in both languages are generalized to their POS
- Words that have complex alignments (or not the same POS) remain lexicalized
- One seed rule for each translation example
- No feature constraints associated with seed rules (but mark the example(s) from which it was learned)
27. Compositionality Learning
28. Compositionality Learning
- Detection: traverse the c-structure of the English sentence, add compositional structure for translatable chunks
- Generalization: adjust constituent sequences and alignments
- Two implemented variants:
- Safe compositionality: there exists a transfer rule that correctly translates the sub-constituent
- Maximal compositionality: generalize the rule if supported by the alignments, even in the absence of an existing transfer rule for the sub-constituent
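The "safe" variant can be illustrated on the sequence level. The sketch below is a simplification under invented data structures: it ignores alignments and c-structure traversal and only shows how a chunk covered by an existing rule gets folded into a constituent label.

```python
# Toy sketch of safe compositionality: if a previously learned rule already
# translates a sub-constituent, replace that chunk of the flat seed rule's
# left-hand side with the constituent label, producing hierarchy.

def compose(seed_lhs, learned):
    """seed_lhs: list of POS tags; learned: {(pos, ...): constituent label}."""
    out, i = [], 0
    while i < len(seed_lhs):
        # Greedily try the longest chunk starting at i first.
        for j in range(len(seed_lhs), i, -1):
            span = tuple(seed_lhs[i:j])
            if j - i > 1 and span in learned:
                out.append(learned[span])   # fold chunk into a constituent
                i = j
                break
        else:
            out.append(seed_lhs[i])         # no rule covers it: keep as-is
            i += 1
    return out

learned = {("DET", "ADJ", "N"): "NP"}
print(compose(["DET", "ADJ", "N", "V", "DET", "N"], learned))
```

The first DET ADJ N chunk collapses into NP because a learned NP rule covers it, while the final DET N stays flat; maximal compositionality would additionally generalize chunks that are merely supported by the alignments.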
29. Constraint Learning
30. Constraint Learning
- Goal: add appropriate feature constraints to the acquired rules
- Methodology:
- Preserve general structural transfer
- Learn specific feature constraints from the example set
- Seed rules are grouped into clusters of similar transfer structure (type, constituent sequences, alignments)
- Each cluster forms a version space: a partially ordered hypothesis space with a specific and a general boundary
- The seed rules in a group form the specific boundary of a version space
- The general boundary is the (implicit) transfer rule with the same type, constituent sequences, and alignments, but no feature constraints
31. Constraint Learning: Generalization
- The partial order of the version space:
- Definition: a transfer rule tr1 is strictly more general than another transfer rule tr2 if all f-structures that are satisfied by tr2 are also satisfied by tr1
- Generalize rules by merging them:
- Deletion of a constraint
- Raising two value constraints to an agreement constraint, e.g.
  ((x1 num) = pl), ((x3 num) = pl) →
  ((x1 num) = (x3 num))
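The raising move on this slide can be sketched directly. This is an illustration under an invented rule representation, not the learner's code: two value constraints that pin the same feature of different nodes to the same value are replaced by a single agreement constraint, which is strictly more general.

```python
# Toy sketch of the generalization step: raise paired value constraints to an
# agreement constraint. A constraint is ((node, feature), value); an
# agreement constraint is (path_a, path_b).

def raise_to_agreement(value_constraints):
    """Replace ((a f) = v), ((b f) = v) by ((a f) = (b f))."""
    out = list(value_constraints)
    for i, (path_a, val_a) in enumerate(value_constraints):
        for path_b, val_b in value_constraints[i + 1:]:
            same_feature = path_a[1] == path_b[1]
            if same_feature and path_a != path_b and val_a == val_b:
                out.remove((path_a, val_a))
                out.remove((path_b, val_b))
                out.append((path_a, path_b))    # agreement constraint
    return out

value_constraints = [(("x1", "num"), "pl"), (("x3", "num"), "pl")]
print(raise_to_agreement(value_constraints))
```

The result accepts singular-singular examples as well as the observed plural-plural ones, which is exactly the direction of movement from the specific boundary of the version space toward the general one.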
32. Automated Rule Refinement
- Bilingual informants can identify translation errors and pinpoint the errors
- A sophisticated trace of the translation path can identify likely sources for the error and do blame assignment
- Rule refinement operators can be developed to modify the underlying translation grammar (and lexicon) based on characteristics of the error source:
- Add or delete feature constraints from a rule
- Bifurcate a rule into two rules (general and specific)
- Add or correct lexical entries
- See [Font-Llitjos, Carbonell & Lavie, 2005]
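The bifurcation operator mentioned above can be sketched in a few lines. The rule representation and the example constraint here are invented for illustration, not drawn from the refinement module.

```python
# Toy sketch of the "bifurcate" operator: keep a general copy of the rule
# and add a specific copy with an extra constraint that captures the context
# in which the error was observed.

def bifurcate(rule, extra_constraint):
    general = dict(rule)
    specific = dict(rule, constraints=rule["constraints"] + [extra_constraint])
    return general, specific

rule = {"lhs": "NP", "rhs": ["DET", "N"], "constraints": []}
general, specific = bifurcate(rule, (("x2", "num"), "sg"))
print(len(general["constraints"]), len(specific["constraints"]))
```

The general copy preserves coverage on examples that translated correctly, while the constrained copy can be given a different target-side realization for the error case.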
33. Outline
- Rationale for learning-based MT
- Roadmap for learning-based MT
- Framework overview
- Elicitation
- Learning transfer rules
- Automatic rule refinement
- Example prototypes
- Implications for MT with vast parallel data
- Conclusions and future directions
34. AVENUE Prototypes
- General XFER framework under development for the past three years
- Prototype systems so far:
- German-to-English, Dutch-to-English
- Chinese-to-English
- Hindi-to-English
- Hebrew-to-English
- In progress or planned:
- Mapudungun-to-Spanish
- Quechua-to-Spanish
- Arabic-to-English
- Native-Brazilian languages to Brazilian Portuguese
35. Challenges for Hebrew MT
- Paucity of existing language resources for Hebrew:
- No publicly available broad-coverage morphological analyzer
- No publicly available bilingual lexicons or dictionaries
- No POS-tagged corpus or parse tree-bank corpus for Hebrew
- No large Hebrew/English parallel corpus
- Scenario well suited for the CMU transfer-based MT framework for languages with limited resources
36. Hebrew-to-English MT Prototype
- Initial prototype developed within a two-month intensive effort
- Accomplished:
- Adapted an available morphological analyzer
- Constructed a preliminary translation lexicon
- Translated and aligned the elicitation corpus
- Learned XFER rules
- Developed a (small) manual XFER grammar as a point of comparison
- System debugging and development
- Evaluated performance on unseen test data using automatic evaluation metrics
38. Morphology Example
- Input word: BWRH
- Possible segmentations over character positions 0-4:

  0     1     2     3     4
  |---------BWRH----------|
  |--B--|----WR-----|-LH--|
  |--B--|--H--|----WRH----|
39. Morphology Example
Y0: ((SPANSTART 0) (SPANEND 4) (LEX BWRH) (POS N) (GEN F) (NUM S) (STATUS ABSOLUTE))
Y1: ((SPANSTART 0) (SPANEND 2) (LEX B) (POS PREP))
Y2: ((SPANSTART 1) (SPANEND 3) (LEX WR) (POS N) (GEN M) (NUM S) (STATUS ABSOLUTE))
Y3: ((SPANSTART 3) (SPANEND 4) (LEX LH) (POS POSS))
Y4: ((SPANSTART 0) (SPANEND 1) (LEX B) (POS PREP))
Y5: ((SPANSTART 1) (SPANEND 2) (LEX H) (POS DET))
Y6: ((SPANSTART 2) (SPANEND 4) (LEX WRH) (POS N) (GEN F) (NUM S))
Y7: ((SPANSTART 0) (SPANEND 4) (LEX BWRH) (POS LEX))
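These ambiguous analyses are what the transfer engine consumes as a lattice. The sketch below uses a simplified arc format and a consistent subset of the spans above, purely to show how adjacent arcs chain into alternative segmentations of the input word.

```python
# Sketch of the morphology lattice: each analysis is an arc (LEX, SPANSTART,
# SPANEND) over character positions of the input word; any chain of adjacent
# arcs from position 0 to the end is one candidate segmentation.

def segmentations(arcs, start, end):
    if start == end:
        yield []
        return
    for lex, s, e in arcs:
        if s == start:
            for rest in segmentations(arcs, e, end):
                yield [lex] + rest

arcs = [("BWRH", 0, 4), ("B", 0, 1), ("H", 1, 2),
        ("WR", 1, 3), ("WRH", 2, 4), ("LH", 3, 4)]
segs = sorted("-".join(s) for s in segmentations(arcs, 0, 4))
print(segs)
```

The three chains recovered here correspond to the whole-word reading and the two prefixed readings; in the real system each arc also carries its feature structure, and the decoder chooses among the competing paths.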
40. Sample Output (dev-data)
- maxwell anurpung comes from ghana for israel four years ago and since worked in cleaning in hotels in eilat
- a few weeks ago announced if management club hotel that for him to leave israel according to the government instructions and immigration police
- in a letter in broken english which spread among the foreign workers thanks to them hotel for their hard work and announced that will purchase for hm flight tickets for their countries from their money
41. Evaluation Results
- Test set of 62 sentences from the Haaretz newspaper, 2 reference translations
42. Hebrew-English Test Suite Evaluation
43. Outline
- Rationale for learning-based MT
- Roadmap for learning-based MT
- Framework overview
- Elicitation
- Learning transfer rules
- Automatic rule refinement
- Learning morphology
- Example prototypes
- Implications for MT with vast parallel data
- Conclusions and future directions
44. Implications for MT with Vast Amounts of Parallel Data
- Learning word/short-phrase translations vs. learning long phrase-to-phrase translations
- Phrase-to-phrase MT is ill suited for long-range reorderings → ungrammatical output
- Recent work on hierarchical Stat-MT [Chiang, 2005] and parsing-based MT [Melamed et al., 2005]
- Learning general tree-to-tree syntactic mappings is equally problematic
- Meaning is a hybrid of complex, non-compositional phrases embedded within a syntactic structure
- Some constituents can be translated in isolation, others require contextual mappings
45. Implications for MT with Vast Amounts of Parallel Data
- Our approach for learning transfer rules is applicable to the large-data scenario, subject to solutions for several challenges:
- No elicitation corpus → break down parallel sentences into reasonable learning examples
- Working with less reliable automatic word alignments rather than manual alignments
- Effective use of reliable parse structures for ONE language (i.e. English) and automatic word alignments in order to decompose the translation of a sentence into several compositional rules
- Effective scoring of the resulting very large transfer grammars, and scaled-up transfer decoding
46. Implications for MT with Vast Amounts of Parallel Data
- Example:
  [Chinese source sentence; characters not preserved in this transcript]
  He freq with J Zemin Pres via phone
  He freq talked with President J Zemin over the phone
47. Implications for MT with Vast Amounts of Parallel Data
- Same example, with corresponding constituents NP1, NP2, NP3 marked on both the source and the target side:
  [Chinese source sentence; characters not preserved in this transcript]
  He freq with J Zemin Pres via phone
  He freq talked with President J Zemin over the phone
48. Conclusions
- There is hope yet for wide-spread MT between many of the world's language pairs
- MT offers a fertile yet extremely challenging ground for learning-based approaches that leverage diverse sources of information:
- Syntactic structure of one or both languages
- Word-to-word correspondences
- Decomposable units of translation
- Statistical language models
- Provides a feasible solution to MT for languages with limited resources
- Extremely promising approach for addressing the fundamental weaknesses in current corpus-based MT for languages with vast resources
49. Future Research Directions
- Automatic transfer rule learning:
- In the large-data scenario: from large volumes of uncontrolled parallel text, automatically word-aligned
- In the absence of morphology or POS-annotated lexica
- Learning mappings for non-compositional structures
- Effective models for rule scoring, for:
- Decoding: using scores at runtime
- Pruning the large collections of learned rules
- Learning unification constraints
- Integrated XFER engine and decoder
- Improved models for scoring tree-to-tree mappings, integration with LM and other knowledge sources in the course of the search
50. Future Research Directions
- Automatic rule refinement
- Morphology learning
- Feature detection and corpus navigation
52. Mapudungun-to-Spanish Example
English: I didn't see Maria
Mapudungun: pelafiñ Maria
Spanish: No vi a María
53. Mapudungun-to-Spanish Example
English: I didn't see Maria

Mapudungun: pelafiñ Maria
  pe   -la   -fi     -ñ                   Maria
  see  -neg  -3.obj  -1.subj.indicative   Maria

Spanish: No vi a María
  No   vi                           a    María
  neg  see.1.subj.past.indicative   acc  Maria
54-63. pe-la-fi-ñ Maria: Bottom-Up Analysis
[Flattened tree diagrams; the analysis is built up one morpheme at a time:]
- V pe ("see")
- VSuff la attaches (negation)
- VSuffG over VSuff la: pass all features up
- VSuff fi attaches (object person 3)
- VSuffG over la-fi: pass all features up from both children
- VSuff ñ attaches (person 1, number sg, mood ind)
- VSuffG over la-fi-ñ: pass all features up from both children
- V over pe + VSuffG: pass all features up from both children; check that (1) negation is present and (2) tense is undefined
- NP over N Maria (person 3, number sg, human)
- VP over the V; S over VP + NP: check that the NP is human; pass features up
64-75. Transfer to Spanish: Top-Down
[Flattened tree diagrams; the Spanish structure is built from the Mapudungun analysis:]
- Create the corresponding Spanish S and VP nodes from the Mapudungun tree
- Pass all features to the Spanish side
- Pass all features down; pass object features down to the NP
- The accusative marker "a" on the object NP is introduced because human = +, by the rule:

VP::VP [VBar NP] -> [VBar "a" NP]
(
 (X1::Y1) (X2::Y3)
 ((X2 type) = (NOT personal))
 ((X2 human) =c +)
 (X0 = X1)
 ((X0 object) = X2)
 (Y0 = X0)
 ((Y0 object) = (X0 object))
 (Y1 = Y0)
 (Y3 = (Y0 object))
 ((Y1 objmarker person) = (Y3 person))
 ((Y1 objmarker number) = (Y3 number))
 ((Y1 objmarker gender) = (Y3 gender))
)

- Pass person, number, and mood features to the Spanish verb; assign tense = past
- Spanish "no" is introduced because negation is present
- Lexical transfer: pe → ver
- ver is inflected to vi (person 1, number sg, mood indicative, tense past)
- Pass features over to the Spanish side: Maria → María

75. I Didn't See Maria
- Final result: No vi a María