Title: Transfer-based MT with Strong Decoding for a Miserly Data Scenario

1. Transfer-based MT with Strong Decoding for a Miserly Data Scenario
- Alon Lavie
- Language Technologies Institute
- Carnegie Mellon University
- Joint work with: Lori Levin, Jaime Carbonell, Stephan Vogel, Kathrin Probst, Erik Peterson, Ari Font-Llitjos, Rachel Reynolds, Richard Cohen
2. Rationale and Motivation
- Our Transfer-based MT approach is specifically designed for limited-data scenarios
- The Hindi SLE was the first open-domain, large-scale test for our system, but Hindi turned out not to be a limited-data scenario: 1.5 million words of parallel text
- Lessons learned by the end of the SLE:
  - The basic XFER system did not have a strong decoder
  - Noisy statistical lexical resources interfere with transfer rules in our basic XFER system
3. Rationale and Motivation
- Research Questions:
  - How would we do in a more realistic minority-language scenario, with very limited resources?
  - How does XFER compare with EBMT and SMT under such a scenario?
  - How well can we do when we add a strong decoder to our XFER system?
  - What is the effect of Multi-Engine combination when using a strong decoder?
4. A Limited Data Scenario for Hindi-to-English
- Put together a scenario with miserly data resources:
  - Elicited Data corpus: 17,589 phrases
  - Cleaned portion (top 12) of LDC dictionary: 2,725 Hindi words (23,612 translation pairs)
  - Manually acquired resources during the SLE:
    - 500 manual bigram translations
    - 72 manually written phrase transfer rules
    - 105 manually written postposition rules
    - 48 manually written time expression rules
  - No additional parallel text!
5. Learning Transfer Rules from Elicited Data
- Rationale:
  - Large bilingual corpora are not available
  - Bilingual native informant(s) can translate and word-align a well-designed elicitation corpus, using our elicitation tool
- Controlled Elicitation Corpus designed to be typologically comprehensive and compositional
- Significantly enhance the elicitation corpus using a new technique for extracting appropriate data from an uncontrolled corpus
- Transfer-rule engine and learning approach support acquisition of generalized transfer rules from the data
6. The CMU Elicitation Tool
7. Elicited Data Collection
- Goal: Acquire high-quality word-aligned Hindi-English data to support system development, especially grammar development and automatic grammar learning
- Recruited a team of 20 bilingual speakers
- Extracted a corpus of phrases (NPs and PPs) from the Brown Corpus section of the Penn TreeBank
- Extracted corpus divided into files and assigned to translators, here and in India
- Controlled Elicitation Corpus also translated into Hindi
- Resulting in a total of 17,589 word-aligned translated phrases
8. Rule Learning - Overview
- Goal: Acquire syntactic transfer rules
- Use available knowledge from the source side (grammatical structure)
- Three steps (see the sketch after this list):
  - Flat Seed Generation: first guesses at transfer rules, with no syntactic structure
  - Compositionality: use previously learned rules to add structure
  - Seeded Version Space Learning: refine rules by generalizing with validation (learn appropriate feature constraints)
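Below is a toy, runnable Python sketch of these three steps. All of the data structures, function names, and simplifications are assumptions for illustration, not the actual learner's representations.

```python
# Toy sketch of the three-step rule-learning pipeline; all representations
# here are invented simplifications, not the real system's.

def flat_seed_generation(aligned_phrases):
    """Step 1: one flat rule per word-aligned phrase pair (no structure)."""
    return [{"src": list(src), "tgt": list(tgt), "align": align,
             "constraints": {}}
            for src, tgt, align in aligned_phrases]

def compositionality(rule, learned_rules):
    """Step 2: where a previously learned rule covers a contiguous
    sub-span of the source side, fold it in as a constituent label."""
    for sub in learned_rules:
        n = len(sub["src"])
        for i in range(len(rule["src"]) - n + 1):
            if rule["src"][i:i + n] == sub["src"]:
                rule["src"][i:i + n] = [sub["label"]]
                break
    return rule

def seeded_vs_learning(rule, validated_examples):
    """Step 3 (a crude stand-in for version-space search): keep only the
    feature constraints that hold across all validated examples."""
    first = validated_examples[0]
    rule["constraints"] = {k: v for k, v in first.items()
                           if all(ex.get(k) == v for ex in validated_examples)}
    return rule

# Seed a rule from one aligned pair, then fold in a learned NP rule:
seed = flat_seed_generation([(["N", "POSTP"], ["PREP", "N"],
                              [(2, 1), (1, 2)])])[0]
print(compositionality(seed, [{"src": ["N"], "label": "NP"}])["src"])
# ['NP', 'POSTP']
```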
9. Examples of Learned Rules (I)
{NP,14244}  Score: 0.0429
NP::NP  [N] -> [DET N]
( (X1::Y2) )

{NP,14434}  Score: 0.0040
NP::NP  [ADJ CONJ ADJ N] -> [ADJ CONJ ADJ N]
( (X1::Y1) (X2::Y2) (X3::Y3) (X4::Y4) )

{PP,4894}  Score: 0.0470
PP::PP  [NP POSTP] -> [PREP NP]
( (X2::Y1) (X1::Y2) )
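Reading the notation: in the PP rule, (X2::Y1) (X1::Y2) says that source constituent 2 (the postposition) fills target slot 1 (as a preposition), while source constituent 1 (the NP) fills slot 2. A tiny interpreter for just this alignment notation, written as an illustrative assumption rather than the engine's actual code:

```python
# Minimal interpreter for the Xi::Yj constituent-alignment notation above;
# the function and its data layout are illustrative assumptions.

def apply_alignment(src_constituents, alignment, tgt_len):
    """Place translated source constituents into 1-indexed target slots.
    Unaligned target slots (e.g. an inserted DET) are left as None."""
    tgt = [None] * tgt_len
    for x, y in alignment:
        tgt[y - 1] = src_constituents[x - 1]
    return tgt

# PP::PP [NP POSTP] -> [PREP NP] with ((X2::Y1) (X1::Y2)):
print(apply_alignment(["the market", "in"], [(2, 1), (1, 2)], 2))
# ['in', 'the market']
```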
10. Manual Grammar Development
- Manual grammar was developed only late into the SLE exercise, after morphology and lexical resource issues were resolved
- Covers mostly NPs, PPs and VPs (verb complexes)
- 70 grammar rules, covering basic and recursive NPs and PPs, and verb complexes of the main tenses in Hindi
11. Manual Transfer Rules: Example
;; PASSIVE OF SIMPLE PAST (NO AUX) WITH LIGHT VERB
;; passive of 43 (7b)
{VP,28}
VP::VP  [V V V] -> [Aux V]
(
  (X1::Y2)
  ((x1 form) = root)
  ((x2 type) =c light)
  ((x2 form) = part)
  ((x2 aspect) = perf)
  ((x3 lexwx) = 'jAnA')
  ((x3 form) = part)
  ((x3 aspect) = perf)
  (x0 = x1)
  ((y1 lex) = be)
  ((y1 tense) = past)
  ((y1 agr num) = (x3 agr num))
  ((y1 agr pers) = (x3 agr pers))
  ((y2 form) = part)
)
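One way to read this rule procedurally: the x-side equations are preconditions on the three source verbs, and the y-side equations build the English auxiliary and copy agreement features across. A loose Python paraphrase follows; the dicts and the demo values are assumptions, not the engine's feature structures.

```python
# Loose procedural paraphrase of rule {VP,28}; plain dicts stand in for
# the engine's unification-based feature structures.

def vp28_passive(x1, x2, x3):
    # x-side constraints gate rule application ("=c" requires the value
    # to already be present rather than being assignable)
    if not (x1["form"] == "root"
            and x2["type"] == "light" and x2["form"] == "part"
            and x2["aspect"] == "perf"
            and x3["lexwx"] == "jAnA" and x3["form"] == "part"
            and x3["aspect"] == "perf"):
        return None
    # y-side: Aux is past-tense "be", agreeing in number/person with x3
    y1 = {"lex": "be", "tense": "past",
          "agr num": x3["agr num"], "agr pers": x3["agr pers"]}
    # (X1::Y2): the main verb x1 surfaces as a participle
    y2 = {"lex": x1["lex"], "form": "part"}
    return [y1, y2]

x1 = {"lex": "likh", "form": "root"}    # hypothetical main verb
x2 = {"type": "light", "form": "part", "aspect": "perf"}
x3 = {"lexwx": "jAnA", "form": "part", "aspect": "perf",
      "agr num": "sg", "agr pers": 3}
print(vp28_passive(x1, x2, x3))
```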
12. Manual Transfer Rules: Example
;; NP1 ke NP2 -> NP2 of NP1
;; Example: jIvana ke eka aXyAya
;;          life   of (one) chapter
;;          -> a chapter of life

{NP,12}
NP::NP  [PP NP1] -> [NP1 PP]
(
  (X1::Y2)
  (X2::Y1)
  ((x2 lexwx) = 'kA')
)

{NP,13}
NP::NP  [NP1] -> [NP1]
( (X1::Y1) )

{PP,12}
PP::PP  [NP Postp] -> [Prep NP]
(
  (X1::Y2)
  (X2::Y1)
)
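A hedged trace of how these rules might combine on the slide's own example, "jIvana ke eka aXyAya" -> "a chapter of life"; the data layout and the constraint check are simplified assumptions about the engine:

```python
# Simplified trace of rules {PP,12} and {NP,12} on the slide's example;
# data layout and constraint check are illustrative assumptions.

def apply_pp12(np_translation):
    # PP,12: [NP Postp] -> [Prep NP]; 'ke' (citation form 'kA') -> "of"
    return ["of"] + np_translation

def apply_np12(pp, np1_translation):
    # NP,12 fires only for the 'kA' family of postpositions
    if pp["postp_lexwx"] != "kA":
        return None
    # (X1::Y2) (X2::Y1): the PP and NP1 constituents swap positions
    return np1_translation + apply_pp12(pp["np_translation"])

pp = {"postp_lexwx": "kA", "np_translation": ["life"]}   # "jIvana ke"
print(" ".join(apply_np12(pp, ["a", "chapter"])))        # a chapter of life
```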
13. Adding a Strong Decoder
- XFER system produces a full lattice
- Edges are scored using word-to-word translation probabilities, trained from the limited bilingual data
- Decoder uses an English LM (70M words)
- Decoder can also reorder words or phrases (up to 4 positions ahead)
- For XFER (strong), ONLY edges from the basic XFER system are used! (A toy sketch of this scoring-plus-reordering scheme follows.)
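The sketch below illustrates the idea of combining edge (translation) scores with an LM under a bounded reordering window. The toy bigram LM and the exhaustive permutation search are illustrative stand-ins; the real decoder works over the full lattice.

```python
# Toy sketch of strong decoding: translation (edge) scores plus an English
# LM, with words allowed to move up to 4 positions. Illustrative only.

import math
from itertools import permutations

def lm_logprob(words, bigrams):
    """Toy bigram LM with a fixed penalty for unseen bigrams."""
    seq = ["<s>"] + words + ["</s>"]
    return sum(bigrams.get((a, b), -10.0) for a, b in zip(seq, seq[1:]))

def decode(candidates, bigrams, window=4):
    """Pick the best-scoring candidate, also considering local
    reorderings that move no word more than `window` positions."""
    best, best_score = None, -math.inf
    for words, edge_score in candidates:
        for perm in permutations(range(len(words))):
            if any(abs(i - j) > window for i, j in enumerate(perm)):
                continue                    # enforce the reordering limit
            seq = [words[j] for j in perm]
            score = edge_score + lm_logprob(seq, bigrams)
            if score > best_score:
                best_score, best = score, seq
    return best

bigrams = {("<s>", "the"): -0.5, ("the", "market"): -0.7,
           ("market", "in"): -3.0, ("in", "the"): -0.8,
           ("market", "</s>"): -0.6}
print(decode([(["market", "in", "the"], -1.0)], bigrams))
# ['in', 'the', 'market']
```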
14. Testing Conditions
- Tested on a section of the JHU-provided data: 258 sentences with four reference translations
- SMT system (stand-alone)
- EBMT system (stand-alone)
- XFER system (naïve decoding)
- XFER system with strong decoder:
  - No grammar rules (baseline)
  - Manually developed grammar rules
  - Automatically learned grammar rules
- XFER+SMT with strong decoder (MEMT)
15. Results on JHU Test Set

System                           BLEU    M-BLEU   NIST
EBMT                             0.058   0.165    4.22
SMT                              0.093   0.191    4.64
XFER (naïve), manual grammar     0.055   0.177    4.46
XFER (strong), no grammar        0.109   0.224    5.29
XFER (strong), learned grammar   0.116   0.231    5.37
XFER (strong), manual grammar    0.135   0.243    5.59
XFER+SMT                         0.136   0.243    5.65
16. Effect of Reordering in the Decoder
17. Observations and Lessons (I)
- XFER with strong decoder outperformed SMT even without any grammar rules:
  - SMT trained on elicited phrases that are very short
  - SMT has insufficient data to train more discriminative translation probabilities
- XFER takes advantage of morphology:
  - Token coverage without morphology: 0.6989
  - Token coverage with morphology: 0.7892
- Manual grammar currently quite a bit better than automatically learned grammar:
  - Learned rules did not use version-space learning
  - Large room for improvement in rule learning
  - Importance of effective, well-founded scoring of learned rules
18. Observations and Lessons (II)
- A strong decoder for the XFER system is essential, even with extremely limited data
- XFER system with manual or automatically learned grammar outperforms SMT and EBMT in the extremely limited data scenario
  - Where is the cross-over point?
- MEMT based on the strong decoder produced the best results in this scenario
- Reordering within the decoder provided very significant score improvements
- Much room for more sophisticated grammar rules
  - Strong decoder can carry some of the reordering burden
- Conclusion: transfer rules (both manual and learned) offer significant contributions that can complement existing data-driven approaches
  - Also in medium and large data settings?
19. Conclusions
- Initial steps toward the development of a statistically grounded transfer-based MT system, with:
  - Rules that are scored based on a well-founded probability model
  - Strong and effective decoding that incorporates the most advanced techniques used in SMT decoding
- Working from the opposite end of research on incorporating models of syntax into standard SMT systems (Knight et al.)
- Our direction makes sense in the limited-data scenario
20. Future Directions
- Significant work on automatic rule learning (especially Seeded Version Space Learning)
- Improved leveraging of manual grammar resources, and interaction with bilingual speakers
- Developing a well-founded model for assigning scores (probabilities) to transfer rules
- Improving the strong decoder to better fit the specific characteristics of the XFER model
- MEMT with improved:
  - combination of output from different translation engines with different scorings
  - strong decoding capabilities