Automatic Rule Learning for Resource-Limited Machine Translation
Provided by: AlonL

1
Automatic Rule Learning for Resource-Limited
Machine Translation
  • Faculty:
  • Alon Lavie, Jaime Carbonell, Lori Levin, Ralf Brown
  • Students:
  • Katharina Probst, Erik Peterson, Christian Monson,
    Ariadna Font-Llitjos, Rachel Reynolds, Alison Alvarez

2
Transfer with Strong Decoding
3
Transfer Rule Formalism
SL: the old man  TL: ha-ish ha-zaqen
NP::NP [DET ADJ N] -> [DET N DET ADJ]
(
 (X1::Y1) (X1::Y3) (X2::Y4) (X3::Y2)
 ((X1 AGR) = 3-SING)
 ((X1 DEF) = DEF)
 ((X3 AGR) = 3-SING)
 ((X3 COUNT) = +)
 ((Y1 DEF) = DEF)
 ((Y3 DEF) = DEF)
 ((Y2 AGR) = 3-SING)
 ((Y2 GENDER) = (Y4 GENDER))
)
  • Type information
  • Part-of-speech/constituent information
  • Alignments
  • x-side constraints
  • y-side constraints
  • xy-constraints,
  • e.g. ((Y1 AGR) = (X1 AGR))

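The pieces of the formalism listed above (type, constituent sequences, alignments, and the three kinds of constraints) fit naturally into a small data structure. A minimal sketch in Python; the class and field names are illustrative, not the actual XFER engine representation:

```python
from dataclasses import dataclass, field

@dataclass
class TransferRule:
    """Illustrative container for one transfer rule (hypothetical names,
    not the real XFER engine API)."""
    rule_type: str    # e.g. "NP::NP" (source::target constituent type)
    x_side: list      # source constituent sequence
    y_side: list      # target constituent sequence
    alignments: list  # (x_index, y_index) pairs, 1-based as on the slide
    constraints: list = field(default_factory=list)

# The NP rule from the slide: "the old man" -> "ha-ish ha-zaqen"
rule = TransferRule(
    rule_type="NP::NP",
    x_side=["DET", "ADJ", "N"],
    y_side=["DET", "N", "DET", "ADJ"],
    alignments=[(1, 1), (1, 3), (2, 4), (3, 2)],
    constraints=[
        (("X1", "AGR"), "3-SING"),            # x-side value constraint
        (("Y2", "GENDER"), ("Y4", "GENDER")),  # y-side agreement constraint
    ],
)
```

The value/agreement distinction falls out of the tuple shape: a string on the right-hand side is a value constraint, a path is an agreement constraint.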
4
The Transfer Engine
5
Rule Learning - Overview
  • Goal: acquire syntactic transfer rules
  • Use available knowledge from the source side
    (grammatical structure)
  • Three steps:
  • Flat Seed Generation: first guesses at transfer
    rules; flat syntactic structure
  • Compositionality: use previously learned rules to
    add hierarchical structure
  • Seeded Version Space Learning: refine rules by
    learning appropriate feature constraints

6
Flat Seed Rule Generation
7
Flat Seed Generation
  • Create a transfer rule that is specific to the
    sentence pair, but abstracted to the POS level.
    No syntactic structure.

8
Compositionality
9
Compositionality - Overview
  • Traverse the c-structure of the English sentence,
    add compositional structure for translatable
    chunks
  • Adjust constituent sequences, alignments
  • Remove unnecessary constraints, i.e. those that
    are contained in the lower-level rule

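The chunk-substitution step can be sketched as follows. `add_composition` is a hypothetical helper operating only on the constituent sequence; the real compositionality step must also remap the alignments and drop the constraints covered by the lower-level rule:

```python
def add_composition(seed_x, sub_x, label):
    """Sketch: replace a POS subsequence covered by a previously learned
    rule with that rule's constituent label.  E.g. the flat seed x-side
    ["det","adv","adj","n","aux",...] becomes ["NP","aux",...] once an
    NP rule covers ["det","adv","adj","n"]."""
    n = len(sub_x)
    for i in range(len(seed_x) - n + 1):
        if seed_x[i:i + n] == sub_x:
            return seed_x[:i] + [label] + seed_x[i + n:]
    return seed_x  # no covered chunk found; rule stays flat

# The x-side of the flat seed rule from the slides' German example
x = ["det", "adv", "adj", "n", "aux", "neg", "v", "det", "n"]
print(add_composition(x, ["det", "adv", "adj", "n"], "NP"))
# -> ['NP', 'aux', 'neg', 'v', 'det', 'n']
```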
10
Seeded Version Space Learning
11
Seeded Version Space Learning Overview
  • Goal: add appropriate feature constraints to the
    acquired rules
  • Methodology:
  • Preserve general structural transfer
  • Learn specific feature constraints from the
    example set
  • Seed rules are grouped into clusters of similar
    transfer structure (type, constituent sequences,
    alignments)
  • Each cluster forms a version space: a partially
    ordered hypothesis space with a specific and a
    general boundary
  • The seed rules in a group form the specific
    boundary of a version space
  • The general boundary is the (implicit) transfer
    rule with the same type, constituent sequences,
    and alignments, but no feature constraints

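The grouping into version spaces is a clustering on the structural signature. A toy sketch, with a hypothetical dict representation of seed rules:

```python
from collections import defaultdict

def version_spaces(seed_rules):
    """Sketch: cluster seed rules whose type, constituent sequences, and
    alignments match exactly; each cluster is one version space, and its
    seed rules form the specific boundary."""
    spaces = defaultdict(list)
    for r in seed_rules:
        key = (r["type"], tuple(r["x"]), tuple(r["y"]),
               tuple(sorted(r["align"])))
        spaces[key].append(r)
    return spaces

# Hypothetical seed rules: two share a transfer structure, one differs.
seeds = [
    {"type": "S::S", "x": ["NP", "v"], "y": ["NP", "v"],
     "align": [(1, 1), (2, 2)], "constraints": [("x2 tense", "past")]},
    {"type": "S::S", "x": ["NP", "v"], "y": ["NP", "v"],
     "align": [(1, 1), (2, 2)], "constraints": [("x2 tense", "pres")]},
    {"type": "NP::NP", "x": ["det", "n"], "y": ["det", "n"],
     "align": [(1, 1), (2, 2)], "constraints": []},
]
spaces = version_spaces(seeds)  # two version spaces: S::S (2 seeds), NP::NP (1)
```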
12
Seeded Version Space Learning Generalization
  • The partial order of the version space:
  • Definition: a transfer rule tr1 is strictly more
    general than another transfer rule tr2 if all
    f-structures that are satisfied by tr2 are also
    satisfied by tr1.
  • Generalize rules by merging them:
  • Deletion of a constraint
  • Raising two value constraints to an agreement
    constraint, e.g.
  • ((x1 num) = pl), ((x3 num) = pl) →
  • ((x1 num) = (x3 num))

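The two generalization operators above are directly expressible in code. A minimal sketch; the path/value pair representation of constraints is illustrative:

```python
def delete_constraint(constraints, c):
    """Operator 1: generalize by dropping one constraint."""
    return [k for k in constraints if k != c]

def raise_to_agreement(constraints, path1, path2):
    """Operator 2: replace two identical value constraints, e.g.
    ((x1 num) = pl) and ((x3 num) = pl), by one agreement constraint
    ((x1 num) = (x3 num)), which any matching value satisfies."""
    vals = dict(constraints)
    if path1 in vals and vals.get(path1) == vals.get(path2):
        rest = [(p, v) for p, v in constraints if p not in (path1, path2)]
        return rest + [(path1, path2)]
    return constraints  # values differ or are absent: cannot raise

cs = [(("x1", "num"), "pl"), (("x3", "num"), "pl")]
print(raise_to_agreement(cs, ("x1", "num"), ("x3", "num")))
# -> [(('x1', 'num'), ('x3', 'num'))]
```

Both operators move strictly upward in the partial order: every f-structure satisfying the old constraints still satisfies the new ones.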
13
Seeded Version Space Learning
  • Group seed rules into version spaces as above.
  • Make use of the partial order of rules in the
    version space. The partial order is defined via
    the f-structures satisfying the constraints.
  • Generalize in the space by repeated merging of
    rules:
  • Deletion of a constraint
  • Moving value constraints to agreement
    constraints, e.g.
  • ((x1 num) = pl), ((x3 num) = pl) →
  • ((x1 num) = (x3 num))
  • Check the translation power of the generalized
    rules against the sentence pairs




14
Seeded Version Space Learning: The Search
  • The Seeded Version Space algorithm itself is the
    repeated generalization of rules by merging
  • A merge is successful if the set of sentences
    that can correctly be translated with the merged
    rule is a superset of the union of the sets that
    can be translated with the unmerged rules (i.e.,
    check the power of the rule)
  • Merge until no more successful merges

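The search described above amounts to a greedy merge loop. A toy sketch, where `merge` keeps only shared constraints (one realization of constraint deletion) and `translates_ok` stands in for the real power check against the sentence pairs:

```python
def merge(r1, r2):
    """Toy merge: identical structure required; keep only the
    constraints the two rules share."""
    if r1["x"] != r2["x"] or r1["y"] != r2["y"]:
        return None
    shared = [c for c in r1["constraints"] if c in r2["constraints"]]
    return {**r1, "constraints": shared}

def vsl_search(rules, translates_ok):
    """Greedy search: keep merging rule pairs while some merge is
    'successful', i.e. translates_ok confirms the merged rule still
    covers everything the two unmerged rules covered."""
    rules = list(rules)
    progressed = True
    while progressed:
        progressed = False
        for i in range(len(rules)):
            for j in range(i + 1, len(rules)):
                m = merge(rules[i], rules[j])
                if m is not None and translates_ok(m, rules[i], rules[j]):
                    rules = [r for k, r in enumerate(rules) if k not in (i, j)]
                    rules.append(m)
                    progressed = True
                    break
            if progressed:
                break
    return rules

# Two hypothetical seed rules differing only in an agr value constraint.
r1 = {"x": ["NP", "v"], "y": ["NP", "v"],
      "constraints": [("y1 agr", "3-sing"), ("y1 case", "nom")]}
r2 = {"x": ["NP", "v"], "y": ["NP", "v"],
      "constraints": [("y1 agr", "3-plu"), ("y1 case", "nom")]}
result = vsl_search([r1, r2], lambda m, a, b: True)  # merges to one rule
```

Each successful merge shrinks the rule set by one, so the loop terminates when no more successful merges exist, exactly as the slide states.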
15
Seeded VSL: Some Open Issues
  • Three types of constraints:
  • X-side: constrain applicability of a rule
  • Y-side: assist in generation
  • X-Y: transfer features from SL to TL
  • Which of the three types improves translation
    performance?
  • Use rules without features to populate the
    lattice; the decoder will select the best
    translation
  • Learn only X-Y constraints, based on a list of
    universal projecting features
  • Other notions of version spaces of feature
    constraints
  • Current feature learning is specific to rules
    that have identical transfer components
  • An important issue during transfer is to
    disambiguate among rules that have the same SL
    side but different TL sides: can we learn
    effective constraints for this?

16
Examples of Learned Rules
17
Manual Transfer Rules Example
PASSIVE OF SIMPLE PAST (NO AUX) WITH LIGHT VERB
passive of 43 (7b)
{VP,28}
VP::VP [V V V] -> [Aux V]
(
 (X1::Y2)
 ((x1 form) = root)
 ((x2 type) =c light)
 ((x2 form) = part)
 ((x2 aspect) = perf)
 ((x3 lexwx) = 'jAnA')
 ((x3 form) = part)
 ((x3 aspect) = perf)
 (x0 = x1)
 ((y1 lex) = be)
 ((y1 tense) = past)
 ((y1 agr num) = (x3 agr num))
 ((y1 agr pers) = (x3 agr pers))
 ((y2 form) = part)
)
18
Manual Transfer Rules Example
[Tree diagrams omitted: Hindi NP "jIvana ke eka aXyAya" (NP1 P Adj N)
maps to English NP "one chapter of life" (Adj N P NP1)]

NP1 ke NP2 -> NP2 of NP1
Ex: jIvana ke eka aXyAya
    life of (one) chapter
    -> a chapter of life

{NP,12}
NP::NP [PP NP1] -> [NP1 PP]
( (X1::Y2) (X2::Y1)
  ((x2 lexwx) = 'kA') )

{NP,13}
NP::NP [NP1] -> [NP1]
( (X1::Y1) )

{PP,12}
PP::PP [NP Postp] -> [Prep NP]
( (X1::Y2) (X2::Y1) )
19
A Limited Data Scenario for Hindi-to-English
  • Put together a scenario with miserly data
    resources:
  • Elicited data corpus: 17,589 phrases
  • Cleaned portion (top 12%) of the LDC dictionary:
    2,725 Hindi words (23,612 translation pairs)
  • Manually acquired resources during the SLE:
  • 500 manual bigram translations
  • 72 manually written phrase transfer rules
  • 105 manually written postposition rules
  • 48 manually written time expression rules
  • No additional parallel text!!

20
Manual Grammar Development
  • Covers mostly NPs, PPs and VPs (verb complexes)
  • 70 grammar rules, covering basic and recursive
    NPs and PPs, verb complexes of main tenses in
    Hindi (developed in two weeks)

21
Adding a Strong Decoder
  • XFER system produces a full lattice
  • Edges are scored using word-to-word translation
    probabilities, trained from the limited bilingual
    data
  • Decoder uses an English LM (70m words)
  • Decoder can also reorder words or phrases (up to
    4 positions ahead)
  • For XFER (strong), ONLY edges from the basic
    XFER system are used!

22
Testing Conditions
  • Tested on a section of JHU-provided data: 258
    sentences with four reference translations
  • SMT system (stand-alone)
  • EBMT system (stand-alone)
  • XFER system (naïve decoding)
  • XFER system with strong decoder:
  • No grammar rules (baseline)
  • Manually developed grammar rules
  • Automatically learned grammar rules
  • XFER+SMT with strong decoder (MEMT)

23
Results on JHU Test Set (very miserly training
data)
24
Effect of Reordering in the Decoder

25
Observations and Lessons (I)
  • XFER with strong decoder outperformed SMT even
    without any grammar rules in the miserly data
    scenario:
  • SMT trained on elicited phrases that are very
    short
  • SMT has insufficient data to train more
    discriminative translation probabilities
  • XFER takes advantage of morphology:
  • Token coverage without morphology: 0.6989
  • Token coverage with morphology: 0.7892
  • Manual grammar currently somewhat better than
    automatically learned grammar:
  • Learned rules did not yet use version-space
    learning
  • Large room for improvement on learning rules
  • Importance of effective, well-founded scoring of
    learned rules

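The token-coverage numbers above (0.6989 without morphology, 0.7892 with) are just the lexicon hit rate over corpus tokens. A toy sketch, with a made-up suffix-stripping analyzer and invented word forms:

```python
def token_coverage(tokens, lexicon, analyze=None):
    """Illustrative: fraction of corpus tokens with a translation-lexicon
    entry, optionally also looking up stems proposed by a morphological
    analyzer (surface form -> candidate stems)."""
    hits = 0
    for t in tokens:
        forms = {t}
        if analyze is not None:
            forms |= set(analyze(t))
        if forms & lexicon:
            hits += 1
    return hits / len(tokens)

def strip_suffix(token):
    """Hypothetical stand-in analyzer: strip a hyphenated suffix."""
    return [token.split("-")[0]]

lexicon = {"ghar", "jana"}            # hypothetical stem entries
tokens = ["ghar", "jana", "ghar-on"]  # "ghar-on": invented inflected form
print(token_coverage(tokens, lexicon))               # 2 of 3 tokens covered
print(token_coverage(tokens, lexicon, strip_suffix)) # all 3 covered
```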
26
Observations and Lessons (II)
  • MEMT (XFER and SMT) based on strong decoder
    produced best results in the miserly scenario.
  • Reordering within the decoder provided very
    significant score improvements
  • Much room for more sophisticated grammar rules
  • Strong decoder can carry some of the reordering
    burden

27
Conclusions
  • Transfer rules (both manual and learned) offer
    significant contributions that can complement
    existing data-driven approaches
  • Also in medium and large data settings?
  • Initial steps to development of a statistically
    grounded transfer-based MT system with
  • Rules that are scored based on a well-founded
    probability model
  • Strong and effective decoding that incorporates
    the most advanced techniques used in SMT decoding
  • Working from the opposite end of research on
    incorporating models of syntax into standard
    SMT systems (Knight et al.)
  • Our direction makes sense in the limited data
    scenario

28
Future Directions
  • Continued work on automatic rule learning
    (especially Seeded Version Space Learning)
  • Improved leveraging from manual grammar
    resources, interaction with bilingual speakers
  • Developing a well-founded model for assigning
    scores (probabilities) to transfer rules
  • Improving the strong decoder to better fit the
    specific characteristics of the XFER model
  • MEMT with improved:
  • combination of output from different translation
    engines with different scorings
  • strong decoding capabilities

29
Flat Seed Generation - Example
  • The highly qualified applicant did not accept the
    offer.
  • Der äußerst qualifizierte Bewerber nahm das
    Angebot nicht an.
  • Alignment: ((1,1),(2,2),(3,3),(4,4),(6,8),(7,5),
    (7,9),(8,6),(9,7))

S::S [det adv adj n aux neg v det n] ->
     [det adv adj n v det n neg vpart]
(alignments:
   (x1::y1) (x2::y2) (x3::y3) (x4::y4) (x6::y8)
   (x7::y5) (x7::y9) (x8::y6) (x9::y7)
 constraints:
   ((x1 def) = +)
   ((x4 agr) = 3-sing)
   ((x5 tense) = past)
   ...
   ((y1 def) = +)
   ((y3 case) = nom)
   ((y4 agr) = 3-sing)
   ...
)
30
Compositionality - Example

S::S [det adv adj n aux neg v det n] ->
     [det adv adj n v det n neg vpart]
(alignments:
   (x1::y1) (x2::y2) (x3::y3) (x4::y4) (x6::y8)
   (x7::y5) (x7::y9) (x8::y6) (x9::y7)
 constraints:
   ((x1 def) = +)
   ((x4 agr) = 3-sing)
   ((x5 tense) = past)
   ...
   ((y1 def) = +)
   ((y3 case) = nom)
   ((y4 agr) = 3-sing)
   ...
)

NP::NP [det ADJP n] -> [det ADJP n]
( (x1::y1)
  ((y3 agr) = 3-sing)
  ((x3 agr) = 3-sing)
  ... )

S::S [NP aux neg v det n] -> [NP v det n neg vpart]
(alignments:
   (x1::y1) (x3::y5) (x4::y2) (x4::y6) (x5::y3) (x6::y4)
 constraints:
   ((x2 tense) = past)
   ...
   ((y1 def) = +)
   ((y1 case) = nom)
   ...
)
31
Seeded Version Space Learning Example
S::S [NP aux neg v det n] -> [NP v det n neg vpart]
(alignments:
   (x1::y1) (x3::y5) (x4::y2) (x4::y6) (x5::y3) (x6::y4)
 constraints:
   ((x2 tense) = past)
   ...
   ((y1 def) = +)
   ((y1 case) = nom)
   ((y1 agr) = 3-sing)
   ((y3 agr) = 3-sing)
   ((y4 agr) = 3-sing)
)

S::S [NP aux neg v det n] -> [NP v det n neg vpart]
(alignments:
   (x1::y1) (x3::y5) (x4::y2) (x4::y6) (x5::y3) (x6::y4)
 constraints:
   ((x2 tense) = past)
   ((y1 def) = +)
   ((y1 case) = nom)
   ((y4 agr) = (y3 agr))
)

S::S [NP aux neg v det n] -> [NP v det n neg vpart]
(alignments:
   (x1::y1) (x3::y5) (x4::y2) (x4::y6) (x5::y3) (x6::y4)
 constraints:
   ((x2 tense) = past)
   ((y1 def) = +)
   ((y1 case) = nom)
   ((y1 agr) = 3-plu)
   ((y3 agr) = 3-plu)
   ((y4 agr) = 3-plu)
)
32
Preliminary Evaluation
  • English to German
  • Corpus of 141 ADJPs, simple NPs and sentences
  • 10-fold cross-validation experiment
  • Goals
  • Do we learn useful transfer rules?
  • Does Compositionality improve generalization?
  • Does VS-learning improve generalization?

33
Summary of Results
  • Average translation accuracy on the
    cross-validation test set was 62%
  • Without VS-learning: 43%
  • Without Compositionality: 57%
  • Average number of VSs: 24
  • Average number of sentences per VS: 3.8
  • Average number of merges per VS: 1.6
  • Percentage of compositional rules: 34%

34
Conclusions
  • New paradigm for learning transfer rules from
    pre-designed elicitation corpus
  • Geared toward languages with very limited
    resources
  • Preliminary experiments validate approach
    compositionality and VS-learning improve
    generalization

35
Future Work
  • Larger, more diverse elicitation corpus
  • Additional languages (Mapudungun)
  • Less information on TL side
  • Reverse translation direction
  • Refine the various algorithms
  • Operators for VS generalization
  • Generalization VS search
  • Layers for compositionality
  • User interactive verification

36
Why Machine Translation for Minority and
Indigenous Languages?
  • Commercial MT economically feasible for only a
    handful of major languages with large resources
    (corpora, human developers)
  • Is there hope for MT for languages with limited
    resources?
  • Benefits include
  • Better government access to indigenous
    communities (Epidemics, crop failures, etc.)
  • Better participation of indigenous communities
    in information-rich activities (health care,
    education, government) without giving up their
    languages
  • Language preservation
  • Civilian and military applications (disaster
    relief)

37
MT for Minority and Indigenous Languages: Challenges
  • Minimal amount of parallel text
  • Possibly competing standards for
    orthography/spelling
  • Often relatively few trained linguists
  • Access to native informants possible
  • Need to minimize development time and cost

38
Transfer Rule Formalism (II)
SL: the old man  TL: ha-ish ha-zaqen
NP::NP [DET ADJ N] -> [DET N DET ADJ]
(
 (X1::Y1) (X1::Y3) (X2::Y4) (X3::Y2)
 ((X1 AGR) = 3-SING)
 ((X1 DEF) = DEF)
 ((X3 AGR) = 3-SING)
 ((X3 COUNT) = +)
 ((Y1 DEF) = DEF)
 ((Y3 DEF) = DEF)
 ((Y2 AGR) = 3-SING)
 ((Y2 GENDER) = (Y4 GENDER))
)
  • Value constraints
  • Agreement constraints