Learning-based MT Approaches for Languages with Limited Resources - PowerPoint PPT Presentation

Slides: 144
Provided by: AlonL
Learn more at: http://www.cs.cmu.edu
Transcript and Presenter's Notes



1
Learning-based MT Approaches for Languages with
Limited Resources
  • Alon Lavie
  • Language Technologies Institute
  • Carnegie Mellon University
  • Joint work with
  • Jaime Carbonell, Lori Levin, Kathrin Probst, Erik
    Peterson, Christian Monson, Ariadna Font-Llitjos,
    Alison Alvarez, Roberto Aranovich

2
Outline
  • Rationale for learning-based MT
  • Roadmap for learning-based MT
  • Framework overview
  • Elicitation
  • Learning transfer rules
  • Automatic rule refinement
  • Learning Morphology
  • Example prototypes
  • Implications for MT with vast parallel data
  • Conclusions and future directions

3
Machine Translation: Where are we today?
  • Age of Internet and Globalization: great demand
    for MT
  • Multiple official languages of UN, EU, Canada,
    etc.
  • Documentation dissemination for large
    manufacturers (Microsoft, IBM, Caterpillar)
  • Economic incentive is still primarily within a
    small number of language pairs
  • Some fairly good commercial products in the
    market for these language pairs
  • Primarily a product of rule-based systems after
    many years of development
  • Pervasive MT between most language pairs still
    non-existent and not on the immediate horizon

4
Approaches to MT: The Vauquois MT Triangle

Interlingua
  give-information+personal-data (name=alon_lavie)
Analysis / Transfer / Generation
  s [vp accusative_pronoun chiamare proper_name]
  s [np possessive_pronoun name] [vp be proper_name]
Direct
  Mi chiamo Alon Lavie
  My name is Alon Lavie
5
Progression of MT
  • Started with rule-based systems
  • Very large expert human effort to construct
    language-specific resources (grammars, lexicons)
  • High-quality MT extremely expensive → only for a
    handful of language pairs
  • Along came EBMT and then SMT
  • Replaced human effort with extremely large
    volumes of parallel text data
  • Less expensive, but still only feasible for a
    small number of language pairs
  • We traded human labor for data
  • Where does this take us in 5-10 years?
  • Large parallel corpora for maybe 25-50 language
    pairs
  • What about all the other languages?
  • Is all this data (with very shallow
    representation of language structure) really
    necessary?
  • Can we build MT approaches that learn deeper
    levels of language structure and how they map
    from one language to another?

6
Why Machine Translation for Languages with
Limited Resources?
  • We are in the age of information explosion
  • The internet/web/Google → anyone can get the
    information they want anytime
  • But what about the text in all those other
    languages?
  • How do they read all this English stuff?
  • How do we read all the stuff that they put
    online?
  • MT for these languages would enable
  • Better government access to native indigenous and
    minority communities
  • Better minority and native community
    participation in information-rich activities
    (health care, education, government) without
    giving up their languages.
  • Civilian and military applications (disaster
    relief)
  • Language preservation

7
The Roadmap to Learning-based MT
  • Automatic acquisition of necessary language
    resources and knowledge using machine learning
    methodologies
  • Learning morphology (analysis/generation)
  • Rapid acquisition of broad coverage word-to-word
    and phrase-to-phrase translation lexicons
  • Learning of syntactic structural mappings
  • Tree-to-tree structure transformations [Knight et
    al., Eisner, Melamed] require parse trees for
    both languages
  • Learning syntactic transfer rules with resources
    (grammar, parses) for just one of the two
    languages
  • Automatic rule refinement and/or post-editing
  • A framework for integrating the acquired MT
    resources into effective MT prototype systems
  • Effective integration of acquired knowledge with
    statistical/distributional information

8
CMU's AVENUE Approach
  • Elicitation: use bilingual native informants to
    produce a small high-quality word-aligned
    bilingual corpus of translated phrases and
    sentences
  • Building Elicitation corpora from feature
    structures
  • Feature Detection and Navigation
  • Transfer-rule Learning: apply ML-based methods to
    automatically acquire syntactic transfer rules
    for translation between the two languages
  • Learn from major language to minor language
  • Translate from minor language to major language
  • XFER Decoder:
  • XFER engine produces a lattice of possible
    transferred structures at all levels
  • Decoder searches and selects the best scoring
    combination
  • Rule Refinement: refine the acquired rules via a
    process of interaction with bilingual informants
  • Morphology Learning
  • Word and Phrase bilingual lexicon acquisition

9
AVENUE Architecture
10
Outline
  • Rationale for learning-based MT
  • Roadmap for learning-based MT
  • Framework overview
  • Elicitation
  • Learning transfer rules
  • Automatic rule refinement
  • Learning Morphology
  • Example prototypes
  • Implications for MT with vast parallel data
  • Conclusions and future directions

11
Data Elicitation for Languages with Limited
Resources
  • Rationale
  • Large volumes of parallel text not available →
    create a small maximally-diverse parallel corpus
    that directly supports the learning task
  • Bilingual native informant(s) can translate and
    align a small pre-designed elicitation corpus,
    using elicitation tool
  • Elicitation corpus designed to be typologically
    and structurally comprehensive and compositional
  • Transfer-rule engine and new learning approach
    support acquisition of generalized transfer-rules
    from the data

12
Elicitation Tool: English-Chinese Example
13
Elicitation Tool: English-Chinese Example
14
Elicitation Tool: English-Hindi Example
15
Elicitation Tool: English-Arabic Example
16
Elicitation Tool: Spanish-Mapudungun Example
17
17
Designing Elicitation Corpora
  • What do we want to elicit?
  • Diversity of linguistic phenomena and
    constructions
  • Syntactic structural diversity
  • How do we construct an elicitation corpus?
  • Typological Elicitation Corpus: based on
    elicitation and documentation work of field
    linguists (e.g. Comrie 1977, Bouquiaux 1992);
    initial corpus size about 1000 examples
  • Structural Elicitation Corpus: based on a
    representative sample of English phrase
    structures; about 120 examples
  • Organized compositionally: elicit simple
    structures first, then use them as building
    blocks
  • Goal: minimize size, maximize linguistic coverage

18
Typological Elicitation Corpus
  • Feature Detection
  • Discover what features exist in the language and
    where/how they are marked
  • Example: does the language mark gender of nouns?
    How and where is it marked?
  • Method: compare translations of minimal pairs,
    i.e. sentences that differ in only ONE feature
  • Elicit translations/alignments for detected
    features and their combinations
  • Dynamic corpus navigation based on feature
    detection: no need to elicit for combinations
    involving non-existent features

19
Typological Elicitation Corpus
  • Initial typological corpus of about 1000
    sentences was manually constructed
  • New construction methodology for building an
    elicitation corpus using
  • A feature specification: lists the inventory of
    available features and their values
  • A definition of the set of desired feature
    structures:
  • Schemas define sets of desired combinations of
    features and values
  • A multiplier algorithm generates the comprehensive
    set of feature structures
  • A generation grammar and lexicon: an NLG generator
    produces NL sentences from the feature structures

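The schema-and-multiplier step above can be sketched as a cartesian product over feature value ranges. The feature names and values below are invented for illustration; AVENUE's actual feature specification is much richer.

```python
from itertools import product

# Sketch of the multiplier: expand a schema (features with the value
# ranges to be covered) into the full set of feature structures.
def multiply(schema):
    names = sorted(schema)
    return [dict(zip(names, values))
            for values in product(*(schema[n] for n in names))]

schema = {"num": ["sg", "pl"],
          "person": ["1", "2", "3"],
          "tense": ["past", "pres"]}
structures = multiply(schema)      # 2 * 3 * 2 = 12 feature structures
```

Each resulting dict would then be fed to the generation grammar to produce one elicitation sentence.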
20
Structural Elicitation Corpus
  • Goal: create a compact, diverse sample corpus of
    syntactic phrase structures in English in order
    to elicit how these map into the elicited
    language
  • Methodology
  • Extracted all CFG rules from Brown section of
    Penn TreeBank (122K sentences)
  • Simplified POS tag set
  • Constructed frequency histogram of extracted
    rules
  • Pulled out simplest phrases for most frequent
    rules for NPs, PPs, ADJPs, ADVPs, SBARs and
    Sentences
  • Some manual inspection and refinement
  • Resulting corpus of about 120 phrases/sentences
    representing common structures
  • See Probst and Lavie, 2004

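The rule-extraction step above can be sketched with a minimal bracketed-parse reader and a frequency counter. This is a toy stand-in for the actual Penn TreeBank processing, with invented example trees.

```python
from collections import Counter

# Sketch: extract CFG rules from bracketed parses like
# "(S (NP (DET the) (N man)) (VP (V sleeps)))" and histogram them.
def parse(s):
    tokens = s.replace("(", " ( ").replace(")", " ) ").split()
    def read(i):
        label, i = tokens[i + 1], i + 2        # skip "(" and the label
        children = []
        while tokens[i] != ")":
            if tokens[i] == "(":
                child, i = read(i)
                children.append(child)
            else:                              # terminal word: skip it
                i += 1
        return (label, children), i + 1
    return read(0)[0]

def rules(tree, counts):
    label, children = tree
    if children:                               # internal node -> one rule
        counts[label + " -> " + " ".join(c[0] for c in children)] += 1
        for c in children:
            rules(c, counts)

counts = Counter()
for line in ["(S (NP (DET the) (N man)) (VP (V sleeps)))",
             "(S (NP (DET a) (N dog)) (VP (V barks)))"]:
    rules(parse(line), counts)
# counts["S -> NP VP"] == 2
```

Sorting `counts` by frequency gives the histogram from which the most common NP, PP, ADJP, etc. rules would be pulled.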
21
Outline
  • Rationale for learning-based MT
  • Roadmap for learning-based MT
  • Framework overview
  • Elicitation
  • Learning transfer rules
  • Automatic rule refinement
  • Learning Morphology
  • Example prototypes
  • Implications for MT with vast parallel data
  • Conclusions and future directions

22
Transfer Rule Formalism
SL: the old man, TL: ha-ish ha-zaqen

NP::NP : [DET ADJ N] -> [DET N DET ADJ]
(
  (X1::Y1) (X1::Y3) (X2::Y4) (X3::Y2)
  ((X1 AGR) = *3-SING)
  ((X1 DEF) = *DEF)
  ((X3 AGR) = *3-SING)
  ((X3 COUNT) = +)
  ((Y1 DEF) = *DEF)
  ((Y3 DEF) = *DEF)
  ((Y2 AGR) = *3-SING)
  ((Y2 GENDER) = (Y4 GENDER))
)
  • Type information
  • Part-of-speech/constituent information
  • Alignments
  • x-side constraints
  • y-side constraints
  • xy-constraints,
  • e.g. ((Y1 AGR) = (X1 AGR))

23
Transfer Rule Formalism (II)
SL: the old man, TL: ha-ish ha-zaqen

NP::NP : [DET ADJ N] -> [DET N DET ADJ]
(
  (X1::Y1) (X1::Y3) (X2::Y4) (X3::Y2)
  ((X1 AGR) = *3-SING)
  ((X1 DEF) = *DEF)
  ((X3 AGR) = *3-SING)
  ((X3 COUNT) = +)
  ((Y1 DEF) = *DEF)
  ((Y3 DEF) = *DEF)
  ((Y2 AGR) = *3-SING)
  ((Y2 GENDER) = (Y4 GENDER))
)
  • Value constraints
  • Agreement constraints

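As an illustration of the two constraint kinds, here is a minimal sketch that checks value and agreement constraints over feature structures stored as nested dicts. The dict representation is an assumption for the sketch, not AVENUE's internal format.

```python
# Sketch: feature structures as nested dicts. A value constraint such as
# ((X1 AGR) = *3-SING) pins a path to a constant; an agreement constraint
# such as ((Y2 GENDER) = (Y4 GENDER)) requires two paths to carry the
# same (defined) value.
def get(fs, path):
    for key in path:
        if not isinstance(fs, dict):
            return None
        fs = fs.get(key)
    return fs

def value_ok(fs, path, value):
    return get(fs, path) == value

def agree_ok(fs, path_a, path_b):
    a, b = get(fs, path_a), get(fs, path_b)
    return a is not None and a == b

fs = {"X1": {"AGR": "3-SING", "DEF": "DEF"}, "X3": {"AGR": "3-SING"}}
value_ok(fs, ("X1", "AGR"), "3-SING")          # True
agree_ok(fs, ("X1", "AGR"), ("X3", "AGR"))     # True
agree_ok(fs, ("X1", "DEF"), ("X3", "DEF"))     # False: X3 has no DEF
```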
24
Rule Learning - Overview
  • Goal: acquire syntactic transfer rules
  • Use available knowledge from the source side
    (grammatical structure)
  • Three steps:
  • Flat Seed Generation: first guesses at transfer
    rules; flat syntactic structure
  • Compositionality Learning: use previously learned
    rules to learn hierarchical structure
  • Constraint Learning: refine rules by learning
    appropriate feature constraints

25
Flat Seed Rule Generation
26
Flat Seed Rule Generation
  • Create a flat transfer rule specific to the
    sentence pair, partially abstracted to POS
  • Words that are aligned word-to-word and have the
    same POS in both languages are generalized to
    their POS
  • Words that have complex alignments (or not the
    same POS) remain lexicalized
  • One seed rule for each translation example
  • No feature constraints associated with seed rules
    (but mark the example(s) from which it was
    learned)

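The seed-generation procedure above can be sketched as follows. The alignment representation and the toy Hebrew example are illustrative assumptions; the real system works over the full elicitation corpus.

```python
# Sketch of flat seed rule generation. Each side is a list of (word, POS)
# pairs; `align` maps a source index to the target indices it aligns to.
# A word with a one-to-one, same-POS alignment generalizes to its POS;
# anything with a complex alignment or POS mismatch stays lexicalized.
def seed_rule(src, tgt, align):
    fan_in = {}                                # target index -> #sources
    for js in align.values():
        for j in js:
            fan_in[j] = fan_in.get(j, 0) + 1
    src_side = [w for w, _ in src]
    tgt_side = [w for w, _ in tgt]
    for i, (word, pos) in enumerate(src):
        js = align.get(i, [])
        if len(js) == 1 and fan_in[js[0]] == 1 and tgt[js[0]][1] == pos:
            src_side[i] = pos
            tgt_side[js[0]] = pos
    return src_side, tgt_side

src = [("the", "DET"), ("old", "ADJ"), ("man", "N")]
tgt = [("ha-", "DET"), ("ish", "N"), ("ha-", "DET"), ("zaqen", "ADJ")]
align = {0: [0, 2], 1: [3], 2: [1]}            # "the" aligns to both ha-
seed_rule(src, tgt, align)
# -> (['the', 'ADJ', 'N'], ['ha-', 'N', 'ha-', 'ADJ'])
```

Note how "the" stays lexical because of its one-to-many alignment, exactly the behavior described above.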
27
Compositionality Learning
28
Compositionality Learning
  • Detection: traverse the c-structure of the
    English sentence, add compositional structure for
    translatable chunks
  • Generalization: adjust constituent sequences and
    alignments
  • Two implemented variants:
  • Safe Compositionality: there exists a transfer
    rule that correctly translates the
    sub-constituent
  • Maximal Compositionality: generalize the rule if
    supported by the alignments, even in the absence
    of an existing transfer rule for the
    sub-constituent

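The detection-and-generalization step can be sketched as a span substitution: when a lower rule covers an aligned sub-span of a flat seed rule, that span is replaced by the constituent label on both sides. The spans, labels, and flat rule below are illustrative.

```python
# Sketch of the compositionality step: replace an aligned sub-span of a
# flat seed rule with a constituent label (e.g. NP) when a previously
# learned rule covers that sub-span.
def compose(src_side, tgt_side, src_span, tgt_span, label):
    i, j = src_span                      # half-open [i, j) on source side
    k, l = tgt_span                      # half-open [k, l) on target side
    new_src = src_side[:i] + [label] + src_side[j:]
    new_tgt = tgt_side[:k] + [label] + tgt_side[l:]
    return new_src, new_tgt

# "DET ADJ N V" / "DET N DET ADJ V": the NP sub-span is generalized
flat = (["DET", "ADJ", "N", "V"], ["DET", "N", "DET", "ADJ", "V"])
composed = compose(flat[0], flat[1], (0, 3), (0, 4), "NP")
# composed == (['NP', 'V'], ['NP', 'V'])
```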
29
Constraint Learning
30
Constraint Learning
  • Goal: add appropriate feature constraints to the
    acquired rules
  • Methodology:
  • Preserve general structural transfer
  • Learn specific feature constraints from example
    set
  • Seed rules are grouped into clusters of similar
    transfer structure (type, constituent sequences,
    alignments)
  • Each cluster forms a version space: a partially
    ordered hypothesis space with a specific and a
    general boundary
  • The seed rules in a group form the specific
    boundary of a version space
  • The general boundary is the (implicit) transfer
    rule with the same type, constituent sequences,
    and alignments, but no feature constraints

31
Constraint Learning: Generalization
  • The partial order of the version space:
  • Definition: a transfer rule tr1 is strictly more
    general than another transfer rule tr2 if all
    f-structures that are satisfied by tr2 are also
    satisfied by tr1.
  • Generalize rules by merging them:
  • Deletion of constraint
  • Raising two value constraints to an agreement
    constraint, e.g.
  • ((x1 num) = *pl), ((x3 num) = *pl) →
  • ((x1 num) = (x3 num))

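The two generalization moves above can be sketched over constraints kept as plain tuples. Note the real learner also validates each merge against the example set; this sketch shows only the constraint algebra.

```python
# Sketch of rule generalization by merging. Constraints are tuples:
# ("val", path, value) for value constraints and ("agr", path_a, path_b)
# for agreement constraints. Merging keeps only the shared constraints
# (deleting the rest) and raises pairs of value constraints on the same
# feature with equal values into one agreement constraint, e.g.
# ((x1 num) = *pl), ((x3 num) = *pl)  ->  ((x1 num) = (x3 num)).
def merge(c1, c2):
    shared = [c for c in c1 if c in c2]
    vals = [c for c in shared if c[0] == "val"]
    raised, used = [], set()
    for a in vals:
        for b in vals:
            if a < b and a[1][1] == b[1][1] and a[2] == b[2]:
                raised.append(("agr", a[1], b[1]))
                used.update([a, b])
    return [c for c in shared if c not in used] + raised

r1 = [("val", ("x1", "num"), "pl"), ("val", ("x3", "num"), "pl"),
      ("val", ("x1", "def"), "def")]
r2 = [("val", ("x1", "num"), "pl"), ("val", ("x3", "num"), "pl")]
merged = merge(r1, r2)
# merged == [('agr', ('x1', 'num'), ('x3', 'num'))]
```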
32
Automated Rule Refinement
  • Bilingual informants can identify translation
    errors and pinpoint their location
  • A sophisticated trace of the translation path can
    identify likely sources for the error and do
    Blame Assignment
  • Rule Refinement operators can be developed to
    modify the underlying translation grammar (and
    lexicon) based on characteristics of the error
    source
  • Add or delete feature constraints from a rule
  • Bifurcate a rule into two rules (general and
    specific)
  • Add or correct lexical entries
  • See Font-Llitjos, Carbonell & Lavie, 2005

33
Outline
  • Rationale for learning-based MT
  • Roadmap for learning-based MT
  • Framework overview
  • Elicitation
  • Learning transfer rules
  • Automatic rule refinement
  • Learning Morphology
  • Example prototypes
  • Implications for MT with vast parallel data
  • Conclusions and future directions

34
Morphology Learning
  • Goal: unsupervised learning of morphemes and
    their function from raw monolingual data
  • Segmentation of words into morphemes
  • Identification of morphological paradigms
    (inflections and derivations)
  • Learning association between morphemes and their
    function in the language
  • Organize the raw data in the form of a network of
    paradigm candidate schemes
  • Search the network for a collection of schemes
    that represent true morphology paradigms of the
    language
  • Learn mappings between the schemes and
    features/functions using minimal pairs of
    elicited data
  • Construct analyzer based on the collection of
    schemes and the acquired function mappings

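The scheme-building step can be sketched directly on the small vocabulary from the next slide: a scheme pairs a set of candidate suffixes (c-suffixes) with the candidate stems (c-stems) that combine with every one of them. `c_stems` is a hypothetical helper name for illustration.

```python
# Sketch: compute the c-stems of a scheme, i.e. the stems s such that
# s + suffix is a vocabulary word for EVERY suffix in the scheme
# ("" is the null suffix, written Ø on the slides).
def c_stems(vocab, suffixes):
    stems = set()
    for word in vocab:
        for cut in range(1, len(word) + 1):    # every prefix is a candidate
            stems.add(word[:cut])
    return sorted(s for s in stems
                  if all(s + suf in vocab for suf in suffixes))

vocab = {"blame", "blamed", "blames", "roamed", "roaming",
         "roams", "solve", "solves", "solving"}
c_stems(vocab, {"", "s", "d"})   # ['blame']
c_stems(vocab, {"e", "es"})      # ['blam', 'solv']
```

These outputs reproduce two schemes from the slide: Ø.s.d has the single c-stem blame, and e.es has c-stems blam and solv.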
35
Example Vocabulary: blame, blamed, blames, roamed,
roaming, roams, solve, solves, solving

Ø.s.d    blame
Ø.s      blame, solve
e.es     blam, solv
me.mes   bla
s        blame, roam, solve
36
Ø.s.d       blame
e.es.ed     blam
me.mes.med  bla
e.es        blam, solv
Ø.s         blame, solve
me.mes      bla
e.ed        blam
Ø.d         blame
me.med      bla
s.d         blame
es.ed       blam
mes.med     bla
Ø           blame, blames, blamed, roams, roamed,
            roaming, solve, solves, solving
e           blam, solv
me          bla
s           blame, roam, solve
es          blam, solv
mes         bla
ed          blam, roam
d           blame, roame
med         bla, roa
37
a.as.o.os.tro 1 cas
  • Spanish Newswire Corpus
  • 40,011 Tokens
  • 6,975 Types

a.as.o.os     43  african, cas, jurídic, l, ...
a.as.os       50  afectad, cas, jurídic, l, ...
a.as.o        59  cas, citad, jurídic, l, ...
a.o.os       105  impuest, indonesi, italian, jurídic, ...
as.o.os       54  cas, implicad, jurídic, l, ...
a.as         199  huelg, incluid, industri, inundad, ...
a.os         134  impedid, impuest, indonesi, inundad, ...
as.os         68  cas, implicad, inundad, jurídic, ...
a.o          214  id, indi, indonesi, inmediat, ...
as.o          85  intern, jurídic, just, l, ...
a.tro          2  cas, cen
o.os         268  human, implicad, indici, indocumentad, ...
a           1237  huelg, ib, id, iglesi, ...
as           404  huelg, huelguist, incluid, industri, ...
os           534  humorístic, human, hígad, impedid, ...
o           1139  hub, hug, human, huyend, ...
tro           16  catas, ce, cen, cua, ...
38
a.as.o.os.tro 1 cas
C-Suffixes / C-Stems; Level 5 = 5 c-suffixes;
counts are c-stem type counts
(same scheme network as the previous slide)
39
Adjective Inflection Class
From the spurious c-suffix tro
(same scheme network as above, with the schemes
involving the spurious c-suffix tro removed)
40
Basic Search Procedure
(the search operates over the scheme network shown
on the preceding slides)
41
Outline
  • Rationale for learning-based MT
  • Roadmap for learning-based MT
  • Framework overview
  • Elicitation
  • Learning transfer rules
  • Automatic rule refinement
  • Learning Morphology
  • Example prototypes
  • Implications for MT with vast parallel data
  • Conclusions and future directions

42
AVENUE Prototypes
  • General XFER framework under development for past
    three years
  • Prototype systems so far
  • German-to-English, Dutch-to-English
  • Chinese-to-English
  • Hindi-to-English
  • Hebrew-to-English
  • In progress or planned
  • Mapudungun-to-Spanish
  • Quechua-to-Spanish
  • Arabic-to-English
  • Native-Brazilian languages to Brazilian Portuguese

43
Challenges for Hebrew MT
  • Paucity of existing language resources for Hebrew
  • No publicly available broad coverage
    morphological analyzer
  • No publicly available bilingual lexicons or
    dictionaries
  • No POS-tagged corpus or parse tree-bank corpus
    for Hebrew
  • No large Hebrew/English parallel corpus
  • Scenario is well suited for the CMU transfer-based
    MT framework for languages with limited resources

44
Hebrew-to-English MT Prototype
  • Initial prototype developed within a two month
    intensive effort
  • Accomplished
  • Adapted available morphological analyzer
  • Constructed a preliminary translation lexicon
  • Translated and aligned Elicitation Corpus
  • Learned XFER rules
  • Developed (small) manual XFER grammar as a point
    of comparison
  • System debugging and development
  • Evaluated performance on unseen test data using
    automatic evaluation metrics

45
Morphology Example
  • Input word: BWRH
  • Possible segmentations over character positions 0-4:
    BWRH
    B + WR + H
    B + H + WRH

46
Morphology Example
  • Y0 ((SPANSTART 0) (SPANEND 4) (LEX BWRH) (POS N) (GEN F) (NUM S) (STATUS ABSOLUTE))
  • Y1 ((SPANSTART 0) (SPANEND 2) (LEX B) (POS PREP))
  • Y2 ((SPANSTART 1) (SPANEND 3) (LEX WR) (POS N) (GEN M) (NUM S) (STATUS ABSOLUTE))
  • Y3 ((SPANSTART 3) (SPANEND 4) (LEX LH) (POS POSS))
  • Y4 ((SPANSTART 0) (SPANEND 1) (LEX B) (POS PREP))
  • Y5 ((SPANSTART 1) (SPANEND 2) (LEX H) (POS DET))
  • Y6 ((SPANSTART 2) (SPANEND 4) (LEX WRH) (POS N) (GEN F) (NUM S))
  • Y7 ((SPANSTART 0) (SPANEND 4) (LEX BWRH) (POS LEX))

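The analyses above form a lattice over positions 0-4, and full morphological segmentations of BWRH are exactly the paths from position 0 to 4. A small sketch using the spans as they appear on the slide:

```python
# Sketch: the Y0..Y7 analyses as (start, end, lex) arcs over positions
# 0..4; complete segmentations are the arc paths from 0 to 4.
arcs = [(0, 4, "BWRH"), (0, 2, "B"), (1, 3, "WR"), (3, 4, "LH"),
        (0, 1, "B"), (1, 2, "H"), (2, 4, "WRH"), (0, 4, "BWRH")]

def paths(pos, end=4):
    if pos == end:
        return [[]]
    found = []
    for start, stop, lex in arcs:
        if start == pos:
            found += [[lex] + rest for rest in paths(stop, end)]
    return found

segmentations = paths(0)   # 5 paths, e.g. ['B', 'H', 'WRH']
```

The XFER engine works over exactly this kind of lattice, so no single segmentation has to be chosen before transfer.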
47
Sample Output (dev-data)
  • maxwell anurpung comes from ghana for israel four
    years ago and since worked in cleaning in hotels
    in eilat
  • a few weeks ago announced if management club
    hotel that for him to leave israel according to
    the government instructions and immigration
    police
  • in a letter in broken english which spread among
    the foreign workers thanks to them hotel for
    their hard work and announced that will purchase
    for hm flight tickets for their countries from
    their money

48
Evaluation Results
  • Test set of 62 sentences from Haaretz newspaper,
    2 reference translations

49
Outline
  • Rationale for learning-based MT
  • Roadmap for learning-based MT
  • Framework overview
  • Elicitation
  • Learning transfer rules
  • Automatic rule refinement
  • Learning Morphology
  • Example prototypes
  • Implications for MT with vast parallel data
  • Conclusions and future directions

50
Implications for MT with Vast Amounts of Parallel
Data
  • Learning word/short-phrase translations vs.
    learning long phrase-to-phrase translations
  • Phrase-to-phrase MT is ill suited for long-range
    reorderings → ungrammatical output
  • Recent work on hierarchical Stat-MT [Chiang,
    2005] and parsing-based MT [Melamed et al., 2005]
  • Learning general tree-to-tree syntactic mappings
    is equally problematic
  • Meaning is a hybrid of complex, non-compositional
    phrases embedded within a syntactic structure
  • Some constituents can be translated in isolation,
    others require contextual mappings

51
Implications for MT with Vast Amounts of Parallel
Data
  • Our approach for learning transfer rules is
    applicable to the large data scenario, subject to
    solutions for several challenges
  • No elicitation corpus → break down parallel
    sentences into reasonable learning examples
  • Working with less reliable automatic word
    alignments rather than manual alignments
  • Effective use of reliable parse structures for
    ONE language (i.e. English) and automatic word
    alignments in order to decompose the translation
    of a sentence into several compositional rules.
  • Effective scoring of resulting very large
    transfer grammars, and scaled up transfer
    decoding

52
Implications for MT with Vast Amounts of Parallel
Data
  • Example
  • [Chinese source sentence; characters lost in this
    transcript]
  • Gloss: He freq with J Zemin Pres via phone
  • He freq talked with President J Zemin over
    the phone

53
Implications for MT with Vast Amounts of Parallel
Data
  • Example (repeated from the previous slide), with
    the corresponding NP1, NP2, NP3 constituents
    marked on both the source and target sides
54
Conclusions
  • There is hope yet for wide-spread MT between many
    of the world's language pairs
  • MT offers a fertile yet extremely challenging
    ground for learning-based approaches that
    leverage from diverse sources of information
  • Syntactic structure of one or both languages
  • Word-to-word correspondences
  • Decomposable units of translation
  • Statistical Language Models
  • Provides a feasible solution to MT for languages
    with limited resources
  • Extremely promising approach for addressing the
    fundamental weaknesses in current corpus-based MT
    for languages with vast resources

55
Future Research Directions
  • Automatic Transfer Rule Learning
  • In the large-data scenario: from large volumes
    of uncontrolled parallel text, automatically
    word-aligned
  • In the absence of morphology or POS annotated
    lexica
  • Learning mappings for non-compositional
    structures
  • Effective models for rule scoring for
  • Decoding using scores at runtime
  • Pruning the large collections of learned rules
  • Learning Unification Constraints
  • Integrated Xfer Engine and Decoder
  • Improved models for scoring tree-to-tree
    mappings, integration with LM and other knowledge
    sources in the course of the search

56
Future Research Directions
  • Automatic Rule Refinement
  • Morphology Learning
  • Feature Detection and Corpus Navigation

57
(No Transcript)
58
Mapudungun-to-Spanish Example
English: I didn't see Maria
Mapudungun: pelafiñ Maria
Spanish: No vi a María
59
Mapudungun-to-Spanish Example
English: I didn't see Maria
Mapudungun: pelafiñ Maria
  pe -la -fi -ñ  Maria
  see -neg -3.obj -1.subj.indicative  Maria
Spanish: No vi a María
  No  vi  a  María
  neg see.1.subj.past.indicative acc Maria
60
pe-la-fi-ñ Maria: bottom-up analysis. Slides 60-69
animate the construction of the Mapudungun tree;
only the step captions survive in the transcript:
  • V over pe
  • VSuff la: Negation
  • VSuffG over VSuff: pass all features up
  • VSuff fi: object person = 3
  • VSuffG: pass all features up from both children
  • VSuff ñ: person = 1, number = sg, mood = ind
  • VSuffG: pass all features up from both children
  • V: pass all features up from both children; check
    that 1) negation 2) tense is undefined
  • NP over N Maria: person = 3, number = sg, human
  • S over NP and VP: check that NP is human; pass
    features up from VP
70
Transfer to Spanish: Top-Down. Slides 70-81 animate
the transfer; the surviving step captions:
  • Pass all features to the Spanish side, then pass
    all features down; pass object features down
  • The accusative marker a on objects is introduced
    because the object is human, via the rule:

VP::VP : [VBar NP] -> [VBar "a" NP]
(
  (X1::Y1) (X2::Y3)
  ((X2 type) = (*NOT* personal))
  ((X2 human) =c +)
  (X0 = X1)
  ((X0 object) = X2)
  (Y0 = X0)
  ((Y0 object) = (X0 object))
  (Y1 = Y0)
  (Y3 = (Y0 object))
  ((Y1 objmarker person) = (Y3 person))
  ((Y1 objmarker number) = (Y3 number))
  ((Y1 objmarker gender) = (Y3 gender))
)

  • Pass person, number, and mood features to the
    Spanish verb; assign tense = past
  • no is introduced because of the negation
  • Lexical transfer: pe → ver, inflected to vi
    (person = 1, number = sg, mood = indicative,
    tense = past)
  • Features pass over to the Spanish side;
    Maria → María
  • Result: No vi a María ("I didn't see Maria")
82
(No Transcript)
83
Conclusions
  • Transfer rules (both manual and learned) offer
    significant contributions that can complement
    existing data-driven approaches
  • Also in medium and large data settings?
  • Initial steps to development of a statistically
    grounded transfer-based MT system with
  • Rules that are scored based on a well-founded
    probability model
  • Strong and effective decoding that incorporates
    the most advanced techniques used in SMT decoding
  • Working from the opposite end of research on
    incorporating models of syntax into standard
    SMT systems [Knight et al.]
  • Our direction makes sense in the limited data
    scenario

84
Missing Science
  • Monolingual learning tasks
  • Learning morphology: morphemes and their meaning
  • Learning syntactic and semantic structures:
    grammar induction
  • Bilingual Learning Tasks
  • Automatic acquisition of word and phrase
    translation lexicons
  • Learning structural mappings (syntactic,
    semantic, non-compositional)
  • Models that effectively combine learned symbolic
    knowledge with statistical information: new
    decoders

85
AVENUE Partners
86
The Transfer Engine
87
Seeded VSL: Some Open Issues
  • Three types of constraints:
  • X-side: constrain applicability of rule
  • Y-side: assist in generation
  • X-Y: transfer features from SL to TL
  • Which of the three types improves translation
    performance?
  • Use rules without features to populate lattice,
    decoder will select the best translation
  • Learn only X-Y constraints, based on list of
    universal projecting features
  • Other notions of version-spaces of feature
    constraints
  • Current feature learning is specific to rules
    that have identical transfer components
  • An important issue during transfer is to
    disambiguate among rules that have the same SL
    side but different TL sides: can we learn
    effective constraints for this?

88
Examples of Learned Rules (Hindi-to-English)
89
(No Transcript)
90
Future Directions
  • Continued work on automatic rule learning
    (especially Seeded Version Space Learning)
  • Use Hebrew and Hindi systems as test platforms
    for experimenting with advanced learning research
  • Rule Refinement via interaction with bilingual
    speakers
  • Developing a well-founded model for assigning
    scores (probabilities) to transfer rules
  • Redesigning and improving decoder to better fit
    the specific characteristics of the XFER model
  • Improved leveraging from manual grammar resources
  • MEMT with improved:
  • combination of output from different translation
    engines with different confidence scores
  • strong decoding capabilities

91
Seeded Version Space Learning
  1. Group seed rules into version spaces as above.
  2. Make use of the partial order of rules in the
     version space. The partial order is defined via
     the f-structures satisfying the constraints.
  3. Generalize in the space by repeated merging of
     rules:
     • Deletion of constraint
     • Moving value constraints to agreement
       constraints, e.g.
       ((x1 num) = *pl), ((x3 num) = *pl) →
       ((x1 num) = (x3 num))
  4. Check translation power of generalized rules
     against sentence pairs
92
Seeded Version Space Learning: The Search
  • The Seeded Version Space algorithm itself is the
    repeated generalization of rules by merging
  • A merge is successful if the set of sentences
    that can correctly be translated with the merged
    rule is a superset of the union of sets that can
    be translated with the unmerged rules, i.e. check
    power of rule
  • Merge until no more successful merges

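The merge loop above can be sketched as a control structure: try pairwise merges, accept one only if the merged rule still translates every sentence the two rules did, and stop when no merge succeeds. `merge_fn` and `covers` stand in for the real rule merger and translation check; the toy versions below are illustrative.

```python
# Sketch of the seeded version-space search loop. Rules are opaque
# values; `covers` returns the set of example sentences a rule
# translates correctly, and `merge_fn` proposes a generalization.
def generalize(rules, merge_fn, covers):
    rules = list(rules)
    changed = True
    while changed:
        changed = False
        for i in range(len(rules)):
            for j in range(i + 1, len(rules)):
                cand = merge_fn(rules[i], rules[j])
                # successful merge: coverage is a superset of the union
                if covers(cand) >= covers(rules[i]) | covers(rules[j]):
                    rules = [r for k, r in enumerate(rules)
                             if k not in (i, j)] + [cand]
                    changed = True
                    break
            if changed:
                break
    return rules

coverage = {"a": {1}, "b": {2}, "c": {3}, "ab": {1, 2}, "abc": {3},
            "ac": {1}, "bc": {2}}
toy_merge = lambda x, y: "".join(sorted(set(x + y)))
result = generalize(["a", "b", "c"], toy_merge,
                    lambda r: coverage.get(r, set()))
# "a" and "b" merge into "ab"; merging in "c" would lose coverage
```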
93
AVENUE Architecture
Run-Time Module
Learning Module
SL Input
SL Parser
Morphology Pre-proc
Elicitation Process
Transfer Rule Learning
Transfer Rules
Transfer Engine
TL Output
TL Generator
Decoder
User
94
Learning Transfer-Rules for Languages with
Limited Resources
  • Rationale
  • Large bilingual corpora not available
  • Bilingual native informant(s) can translate and
    align a small pre-designed elicitation corpus,
    using elicitation tool
  • Elicitation corpus designed to be typologically
    comprehensive and compositional
  • Transfer-rule engine and new learning approach
    support acquisition of generalized transfer-rules
    from the data

95
The Transfer Engine
96
Transfer Rule Formalism
SL: the man, TL: der Mann

NP::NP : [DET N] -> [DET N]
(
  (X1::Y1) (X2::Y2)
  ((X1 AGR) = *3-SING)
  ((X1 DEF) = *DEF)
  ((X2 AGR) = *3-SING)
  ((X2 COUNT) = +)
  ((Y1 AGR) = *3-SING)
  ((Y1 DEF) = *DEF)
  ((Y2 AGR) = *3-SING)
  ((Y2 GENDER) = (Y1 GENDER))
)
  • Type information
  • Part-of-speech/constituent information
  • Alignments
  • x-side constraints
  • y-side constraints
  • xy-constraints, e.g. ((Y1 AGR) = (X1 AGR))
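The two constraint types can be made concrete with a small sketch. This is a hypothetical Python rendering, not the actual transfer engine: constraints are encoded as `((slot, feature), value)` for value constraints and `((slot1, feature1), (slot2, feature2))` for agreement constraints.

```python
def satisfies(fs, constraint):
    """Check one constraint against the feature structures of a rule's
    constituents. fs maps a slot name ('X1', 'Y2', ...) to its feature dict.
    A value constraint is ((slot, feat), value); an agreement constraint
    is ((slot1, feat1), (slot2, feat2))."""
    (slot, feat), rhs = constraint
    lhs_val = fs.get(slot, {}).get(feat)
    if isinstance(rhs, tuple):              # agreement: both sides must match
        slot2, feat2 = rhs
        return lhs_val == fs.get(slot2, {}).get(feat2)
    return lhs_val == rhs                   # value constraint

fs = {"X1": {"AGR": "3-SING", "DEF": "DEF"},
      "Y1": {"AGR": "3-SING"}}
satisfies(fs, (("X1", "AGR"), "3-SING"))        # value constraint: True
satisfies(fs, (("Y1", "AGR"), ("X1", "AGR")))   # agreement constraint: True
```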

97
Transfer Rule Formalism (II)
SL: the man, TL: der Mann
NP::NP [DET N] -> [DET N]
(
 (X1::Y1) (X2::Y2)
 ((X1 AGR) = 3-SING)
 ((X1 DEF) = DEF)
 ((X2 AGR) = 3-SING)
 ((X2 COUNT) = +)
 ((Y1 AGR) = 3-SING)
 ((Y1 DEF) = DEF)
 ((Y2 AGR) = 3-SING)
 ((Y2 GENDER) = (Y1 GENDER))
)
  • Value constraints
  • Agreement constraints

98
Rule Learning - Overview
  • Goal: Acquire Syntactic Transfer Rules
  • Use available knowledge from the source side
    (grammatical structure)
  • Three steps:
  • Flat Seed Generation: first guesses at transfer
    rules; flat syntactic structure
  • Compositionality: use previously learned rules to
    add hierarchical structure
  • Seeded Version Space Learning: refine rules by
    generalizing with validation (learn appropriate
    feature constraints)

99
Examples of Learned Rules (I)
100
A Limited Data Scenario for Hindi-to-English
  • Put together a scenario with miserly data
    resources
  • Elicited Data corpus 17589 phrases
  • Cleaned portion (top 12%) of LDC dictionary:
    2725 Hindi words (23612 translation pairs)
  • Manually acquired resources during the SLE
  • 500 manual bigram translations
  • 72 manually written phrase transfer rules
  • 105 manually written postposition rules
  • 48 manually written time expression rules
  • No additional parallel text!!

101
Manual Grammar Development
  • Covers mostly NPs, PPs and VPs (verb complexes)
  • 70 grammar rules, covering basic and recursive
    NPs and PPs, verb complexes of main tenses in
    Hindi (developed in two weeks)

102
Manual Transfer Rules Example
;; PASSIVE OF SIMPLE PAST (NO AUX) WITH LIGHT VERB
;; passive of 43 (7b)
{VP,28}
VP::VP [V V V] -> [Aux V]
(
 (X1::Y2)
 ((x1 form) = root)
 ((x2 type) =c light)
 ((x2 form) = part)
 ((x2 aspect) = perf)
 ((x3 lexwx) = 'jAnA')
 ((x3 form) = part)
 ((x3 aspect) = perf)
 (x0 = x1)
 ((y1 lex) = be)
 ((y1 tense) = past)
 ((y1 agr num) = (x3 agr num))
 ((y1 agr pers) = (x3 agr pers))
 ((y2 form) = part)
)
103
Manual Transfer Rules Example
[Parse trees: Hindi NP → [PP [NP [N jIvana]] [P ke]] [NP1 [Adj eka] [N aXyAya]]
 vs. English NP → [NP1 one chapter] [PP [P of] [NP life]]]

NP1 ke NP2 -> NP2 of NP1
Ex: jIvana ke eka aXyAya
    life of (one) chapter
    => a chapter of life

{NP,12}
NP::NP [PP NP1] -> [NP1 PP]
(
 (X1::Y2) (X2::Y1)
 ((x2 lexwx) = 'kA')
)
{NP,13}
NP::NP [NP1] -> [NP1]
( (X1::Y1) )
{PP,12}
PP::PP [NP Postp] -> [Prep NP]
(
 (X1::Y2) (X2::Y1)
)
104
Adding a Strong Decoder
  • XFER system produces a full lattice
  • Edges are scored using word-to-word translation
    probabilities, trained from the limited bilingual
    data
  • Decoder uses an English LM (70m words)
  • Decoder can also reorder words or phrases (up to
    4 positions ahead)
  • For XFER(strong) , ONLY edges from basic XFER
    system are used!
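The decoder's limited reordering can be sketched as a windowed permutation filter. This is an illustrative brute-force enumeration under an assumed interpretation of "up to 4 positions ahead"; the real decoder searches incrementally rather than enumerating permutations.

```python
from itertools import permutations

def windowed_reorderings(words, window=4):
    """All orderings in which no word moves more than `window` positions
    from where it started (sketch of limited decoder reordering)."""
    results = []
    for perm in permutations(range(len(words))):
        if all(abs(new_pos - old_pos) <= window
               for new_pos, old_pos in enumerate(perm)):
            results.append([words[i] for i in perm])
    return results

# With a window of 1, only the identity and single adjacent swaps survive:
print(windowed_reorderings(["a", "b", "c"], window=1))
```

The factorial enumeration is only viable for very short spans; it is meant to show the constraint, not the search strategy.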

105
Testing Conditions
  • Tested on a section of the JHU-provided data: 258
    sentences with four reference translations
  • SMT system (stand-alone)
  • EBMT system (stand-alone)
  • XFER system (naïve decoding)
  • XFER system with strong decoder
  • No grammar rules (baseline)
  • Manually developed grammar rules
  • Automatically learned grammar rules
  • XFER+SMT with strong decoder (MEMT)

106
Results on JHU Test Set (very miserly training
data)
107
Effect of Reordering in the Decoder

108
Observations and Lessons (I)
  • XFER with strong decoder outperformed SMT even
    without any grammar rules in the miserly data
    scenario
  • SMT Trained on elicited phrases that are very
    short
  • SMT has insufficient data to train more
    discriminative translation probabilities
  • XFER takes advantage of Morphology
  • Token coverage without morphology 0.6989
  • Token coverage with morphology 0.7892
  • Manual grammar currently somewhat better than
    automatically learned grammar
  • Learned rules did not yet use version-space
    learning
  • Large room for improvement on learning rules
  • Importance of effective well-founded scoring of
    learned rules

109
Observations and Lessons (II)
  • MEMT (XFER and SMT) based on strong decoder
    produced best results in the miserly scenario.
  • Reordering within the decoder provided very
    significant score improvements
  • Much room for more sophisticated grammar rules
  • Strong decoder can carry some of the reordering
    burden

110
Conclusions
  • Transfer rules (both manual and learned) offer
    significant contributions that can complement
    existing data-driven approaches
  • Also in medium and large data settings?
  • Initial steps to development of a statistically
    grounded transfer-based MT system with
  • Rules that are scored based on a well-founded
    probability model
  • Strong and effective decoding that incorporates
    the most advanced techniques used in SMT decoding
  • Working from the opposite end of research on
    incorporating models of syntax into standard
    SMT systems [Knight et al.]
  • Our direction makes sense in the limited data
    scenario

111
Future Directions
  • Continued work on automatic rule learning
    (especially Seeded Version Space Learning)
  • Improved leveraging from manual grammar
    resources, interaction with bilingual speakers
  • Developing a well-founded model for assigning
    scores (probabilities) to transfer rules
  • Improving the strong decoder to better fit the
    specific characteristics of the XFER model
  • MEMT with improved:
  • Combination of output from different translation
    engines with different scorings
  • Strong decoding capabilities

112
Rule Learning - Overview
  • Goal: Acquire Syntactic Transfer Rules
  • Use available knowledge from the source side
    (grammatical structure)
  • Three steps:
  • Flat Seed Generation: first guesses at transfer
    rules; no syntactic structure
  • Compositionality: use previously learned rules to
    add structure
  • Seeded Version Space Learning: refine rules by
    generalizing with validation

113
Flat Seed Generation
  • Create a transfer rule that is specific to the
    sentence pair, but abstracted to the POS level.
    No syntactic structure.

114
Flat Seed Generation - Example
  • The highly qualified applicant did not accept the
    offer.
  • Der äußerst qualifizierte Bewerber nahm das
    Angebot nicht an.
  • ((1,1),(2,2),(3,3),(4,4),(6,8),(7,5),(7,9),(8,6),(
    9,7))

S::S [det adv adj n aux neg v det n] -> [det adv adj n v det n neg vpart]
(
 ;; alignments:
 (x1::y1) (x2::y2) (x3::y3) (x4::y4) (x6::y8) (x7::y5) (x7::y9) (x8::y6) (x9::y7)
 ;; constraints:
 ((x1 def) = +)
 ((x4 agr) = 3-sing)
 ((x5 tense) = past)
 ...
 ((y1 def) = +)
 ((y3 case) = nom)
 ((y4 agr) = 3-sing)
 ...
)
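A minimal sketch of how such a flat seed rule could be assembled from the POS-tagged, word-aligned pair. The representation is hypothetical, and constraint extraction from the pair's f-structures is omitted:

```python
def flat_seed_rule(src_pos, tgt_pos, alignments):
    """First-guess transfer rule: flat structure, POS-level constituents.
    `alignments` are 1-based (src, tgt) index pairs as on the slide."""
    return {
        "type": ("S", "S"),
        "x_side": list(src_pos),
        "y_side": list(tgt_pos),
        "alignments": [(f"x{i}", f"y{j}") for i, j in alignments],
        "constraints": [],  # would be filled from the sentence pair's features
    }

rule = flat_seed_rule(
    ["det", "adv", "adj", "n", "aux", "neg", "v", "det", "n"],
    ["det", "adv", "adj", "n", "v", "det", "n", "neg", "vpart"],
    [(1, 1), (2, 2), (3, 3), (4, 4), (6, 8), (7, 5), (7, 9), (8, 6), (9, 7)],
)
```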
115
Compositionality - Overview
  • Traverse the c-structure of the English sentence,
    add compositional structure for translatable
    chunks
  • Adjust constituent sequences, alignments
  • Remove unnecessary constraints, i.e. those that
    are contained in the lower-level rule
  • Adjust constraints: use the f-structure of the
    correct translation vs. the f-structures of
    incorrect translations to introduce context
    constraints

116
Compositionality - Example

S::S [det adv adj n aux neg v det n] -> [det adv adj n v det n neg vpart]
(
 (x1::y1) (x2::y2) (x3::y3) (x4::y4) (x6::y8) (x7::y5) (x7::y9) (x8::y6) (x9::y7)
 ((x1 def) = +)
 ((x4 agr) = 3-sing)
 ((x5 tense) = past)
 ...
 ((y1 def) = +)
 ((y3 case) = nom)
 ((y4 agr) = 3-sing)
 ...
)

NP::NP [det ADJP n] -> [det ADJP n]
(
 (x1::y1) ...
 ((y3 agr) = 3-sing)
 ((x3 agr) = 3-sing)
 ...
)

S::S [NP aux neg v det n] -> [NP v det n neg vpart]
(
 (x1::y1) (x3::y5) (x4::y2) (x4::y6) (x5::y3) (x6::y4)
 ((x2 tense) = past)
 ...
 ((y1 def) = +)
 ((y1 case) = nom)
 ...
)
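The index bookkeeping when a learned NP rule replaces its covered chunk can be sketched as follows. This is a hypothetical helper covering only sequence and alignment adjustment; the real learner also removes constraints subsumed by the sub-rule:

```python
def compose(x_side, y_side, alignments, x_span, y_span, label="NP"):
    """Splice a learned sub-rule into a flat seed rule: the source and
    target chunks it covers collapse to one constituent, and the
    1-based alignment indices shift accordingly."""
    (xs, xe), (ys, ye) = x_span, y_span

    def shift(i, s, e):
        # indices before the span keep their value, indices inside map to
        # the span's slot, indices after shrink by the collapsed length
        return i if i < s else (s if i <= e else i - (e - s))

    new_x = x_side[:xs - 1] + [label] + x_side[xe:]
    new_y = y_side[:ys - 1] + [label] + y_side[ye:]
    new_al = sorted({(shift(i, xs, xe), shift(j, ys, ye))
                     for i, j in alignments})
    return new_x, new_y, new_al

new_x, new_y, new_al = compose(
    ["det", "adv", "adj", "n", "aux", "neg", "v", "det", "n"],
    ["det", "adv", "adj", "n", "v", "det", "n", "neg", "vpart"],
    [(1, 1), (2, 2), (3, 3), (4, 4), (6, 8), (7, 5), (7, 9), (8, 6), (9, 7)],
    x_span=(1, 4), y_span=(1, 4),
)
# new_x == ['NP', 'aux', 'neg', 'v', 'det', 'n']
```

Run on the slide's flat rule with the NP rule covering positions 1–4 on both sides, this reproduces the constituent sequences and alignments of the composed S rule.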
117
Seeded Version Space Learning: Overview
  • Goal: further generalize the acquired rules
  • Methodology:
  • Preserve general structural transfer
  • Consider relaxing specific feature constraints
  • Seed rules are grouped into clusters of similar
    transfer structure (type, constituent sequences,
    alignments)
  • Each cluster forms a version space: a partially
    ordered hypothesis space with a specific and a
    general boundary
  • The seed rules in a group form the specific
    boundary of a version space
  • The general boundary is the (implicit) transfer
    rule with the same type, constituent sequences,
    and alignments, but no feature constraints

118
Seeded Version Space Learning
  [Version-space diagram]
  1. Group seed rules into version spaces as above.
  2. Make use of the partial order of rules in the version space; the
     partial order is defined via the f-structures satisfying the
     constraints.
  3. Generalize in the space by repeated merging of rules:
    • Deletion of a constraint
    • Moving value constraints to agreement constraints, e.g.
      ((x1 num) = pl), ((x3 num) = pl) → ((x1 num) = (x3 num))
  4. Check the translation power of generalized rules against sentence
     pairs




119
Seeded Version Space Learning Example
S::S [NP aux neg v det n] -> [NP v det n neg vpart]
(
 (x1::y1) (x3::y5) (x4::y2) (x4::y6) (x5::y3) (x6::y4)
 ((x2 tense) = past)
 ...
 ((y1 def) = +)
 ((y1 case) = nom)
 ((y1 agr) = 3-sing)
 ((y3 agr) = 3-sing)
 ((y4 agr) = 3-sing)
)

S::S [NP aux neg v det n] -> [NP v det n neg vpart]
(
 (x1::y1) (x3::y5) (x4::y2) (x4::y6) (x5::y3) (x6::y4)
 ((x2 tense) = past)
 ((y1 def) = +)
 ((y1 case) = nom)
 ((y4 agr) = (y3 agr))
)

S::S [NP aux neg v det n] -> [NP v det n neg vpart]
(
 (x1::y1) (x3::y5) (x4::y2) (x4::y6) (x5::y3) (x6::y4)
 ((x2 tense) = past)
 ((y1 def) = +)
 ((y1 case) = nom)
 ((y1 agr) = 3-plu)
 ((y3 agr) = 3-plu)
 ((y4 agr) = 3-plu)
)
120
Preliminary Evaluation
  • English to German
  • Corpus of 141 ADJPs, simple NPs and sentences
  • 10-fold cross-validation experiment
  • Goals
  • Do we learn useful transfer rules?
  • Does Compositionality improve generalization?
  • Does VS-learning improve generalization?

121
Summary of Results
  • Average translation accuracy on the cross-validation
    test set was 62%
  • Without VS-learning: 43%
  • Without Compositionality: 57%
  • Average number of VSs: 24
  • Average number of sentences per VS: 3.8
  • Average number of merges per VS: 1.6
  • Percentage of compositional rules: 34%

122
Conclusions
  • New paradigm for learning transfer rules from
    pre-designed elicitation corpus
  • Geared toward languages with very limited
    resources
  • Preliminary experiments validate approach
    compositionality and VS-learning improve
    generalization

123
Future Work
  • Larger, more diverse elicitation corpus
  • Additional languages (Mapudungun)
  • Less information on TL side
  • Reverse translation direction
  • Refine the various algorithms
  • Operators for VS generalization
  • Generalization VS search
  • Layers for compositionality
  • User interactive verification

124
Seeded Version Space Learning Generalization
  • The partial order of the version space
  • Definition: A transfer rule tr1 is strictly more
    general than another transfer rule tr2 if all
    f-structures that are satisfied by tr2 are also
    satisfied by tr1.
  • Generalize rules by merging them
  • Deletion of constraint
  • Raising two value constraints to an agreement
    constraint, e.g.
  • ((x1 num) = pl), ((x3 num) = pl) →
  • ((x1 num) = (x3 num))
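The raising operator can be sketched directly. The encoding is hypothetical: a value constraint is `((slot, feat), value)` and an agreement constraint pairs two feature paths:

```python
def raise_to_agreement(c1, c2):
    """Generalization operator sketch: two value constraints that pin the
    same feature to the same value, e.g. ((x1 num) = pl) and
    ((x3 num) = pl), become one agreement constraint
    ((x1 num) = (x3 num))."""
    (s1, f1), v1 = c1
    (s2, f2), v2 = c2
    if f1 == f2 and v1 == v2 and s1 != s2:
        return ((s1, f1), (s2, f2))   # agreement constraint
    return None                       # pair is not raisable

raise_to_agreement((("x1", "num"), "pl"), (("x3", "num"), "pl"))
# -> (('x1', 'num'), ('x3', 'num'))
```

Deletion of a constraint, the other operator, is simply dropping an element from the rule's constraint set.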

125
Seeded Version Space Learning Merging Two Rules
  • Merging algorithm proceeds in three steps.
  • To merge tr1 and tr2 into trmerged
  • Copy all constraints that are both in tr1 and tr2
    into trmerged
  • Consider tr1 and tr2 separately. For the
    remaining constraints in tr1 and tr2 , perform
    all possible instances of raising value
    constraints to agreement constraints.
  • Repeat step 1.
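A simplified reading of the three steps, with rules represented as sets of hypothetical `((slot, feat), value)` constraints: keep shared constraints, raise the remaining same-feature, same-value pairs within each rule, then keep the raised constraints both rules now share.

```python
def merge_rules(tr1, tr2):
    """Sketch of the three-step merge of two transfer rules' constraints."""
    def raisings(constraints):
        cs = list(constraints)
        out = set()
        for i in range(len(cs)):
            for j in range(i + 1, len(cs)):
                (s1, f1), v1 = cs[i]
                (s2, f2), v2 = cs[j]
                if f1 == f2 and v1 == v2 and s1 != s2:
                    lo, hi = sorted((s1, s2))
                    out.add(((lo, f1), (hi, f2)))  # agreement constraint
        return out

    shared = tr1 & tr2                      # step 1
    raised = raisings(tr1 - shared) & raisings(tr2 - shared)  # steps 2-3
    return shared | raised

tr1 = {(("x1", "num"), "pl"), (("x3", "num"), "pl"), (("x2", "tense"), "past")}
tr2 = {(("x1", "num"), "sg"), (("x3", "num"), "sg"), (("x2", "tense"), "past")}
merged = merge_rules(tr1, tr2)
# merged keeps the shared tense constraint plus ((x1 num) = (x3 num))
```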

126
Seeded Version Space Learning: The Search
  • The Seeded Version Space algorithm itself is the
    repeated generalization of rules by merging
  • A merge is successful if the set of sentences
    that can correctly be translated with the merged
    rule is a superset of the union of sets that can
    be translated with the unmerged rules, i.e. check
    power of rule
  • Merge until no more successful merges

127
Constructing a Network of Candidate Pattern Sets
(An Example)
Example Vocabulary blame blamed blames
roamed roaming roams solve
solves solving
128
Example Vocabulary blame blamed blames
roamed roaming roams solve
solves solving
Ø.s blame solve
129
Example Vocabulary blame blamed blames
roamed roaming roams solve
solves solving
Ø.s.d blame
Ø.s blame solve
130
Example Vocabulary blame blamed blames
roamed roaming roams solve
solves solving
Ø.s.d blame
Ø.s blame solve
131
Example Vocabulary blame blamed blames
roamed roaming roams solve
solves solving
Ø.s.d blame
Ø.s blame solve
s blame roam solve
132
Example Vocabulary blame blamed blames
roamed roaming roams solve
solves solving
Ø.s.d blame
Ø.s blame solve
s blame roam solve
133
Example Vocabulary blame blamed blames
roamed roaming roams solve
solves solving
Ø.s.d blame
Ø.s blame solve
e.es blam solv
s blame roam solve
134
Example Vocabulary blame blamed blames
roamed roaming roams solve
solves solving
Ø.s.d blame
Ø.s blame solve
e.es blam solv
s blame roam solve
135
Example Vocabulary blame blamed blames
roamed roaming roams solve
solves solving
Ø.s.d blame
Ø.s blame solve
e.es blam solv
me.mes bla
s blame roam solve
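The scheme construction built up over these slides amounts to: propose every stem/suffix split of every word, then collect, for each candidate c-suffix set, the c-stems that occur with all of its suffixes. A minimal sketch on the slide's vocabulary:

```python
from collections import defaultdict

vocab = ["blame", "blamed", "blames", "roamed", "roaming", "roams",
         "solve", "solves", "solving"]

# Every split point of every word proposes a candidate (c-stem, c-suffix)
# pair; the empty string plays the role of the null suffix Ø.
stem_suffixes = defaultdict(set)
for word in vocab:
    for i in range(1, len(word) + 1):       # c-stems must be non-empty
        stem_suffixes[word[:i]].add(word[i:])

def scheme(suffixes):
    """The c-stems that occur with *every* c-suffix in the candidate set."""
    return sorted(s for s, sufs in stem_suffixes.items()
                  if set(suffixes) <= sufs)

print(scheme(["", "s"]))      # Ø.s  -> ['blame', 'solve']
print(scheme(["s"]))          # s    -> ['blame', 'roam', 'solve']
print(scheme(["e", "es"]))    # e.es -> ['blam', 'solv']
```

These match the scheme nodes on the slides (Ø.s: blame solve; s: blame roam solve; e.es: blam solv).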
136
Add Test to the Generate
  • Finite state hub searching algorithm (Johnson and
    Martin, 2003) can weed out unlikely morpheme
    boundaries to speed up network generation

[Finite-state network diagrams over the characters of the vocabulary;
recoverable scheme nodes:]
t.ting.ts: res, retrea
Ø.ing.s: rest, retreat, roam
t.ting: res, retrea
Ø.ing: rest, retreat, retry, roam
137
a.as.o.os.tro 1: cas
Each c-suffix is a random variable with a value
equal to the count of the c-stems that occur with
that suffix.
Use χ² Test:
  Reject hypothesis a - as (p-value << 0.005)
  Accept hypothesis a - tro (p-value 0.2)
a.as.o.os 43: african, cas, jurídic, l, ...
a.as.os 50: afectad, cas, jurídic, l, ...
a.as.o 59: cas, citad, jurídic, l, ...
a.o.os 105: impuest, indonesi, italian, jurídic, ...
as.o.os 54: cas, implicad, jurídic, l, ...
a.as 199: huelg, incluid, industri, inundad, ...
a.os 134: impedid, impuest, indonesi, inundad, ...
as.os 68: cas, implicad, inundad, jurídic, ...
a.o 214: id, indi, indonesi, inmediat, ...
as.o 85: intern, jurídic, just, l, ...
a.tro 2: cas, cen
o.os 268: human, implicad, indici, indocumentad, ...
as 404: huelg, huelguist, incluid, industri, ...
a 1237: huelg, ib, id, iglesi, ...
os 534: humorístic, human, hígad, impedid, ...
o 1139: hub, hug, human, huyend, ...
tro 16: catas, ce, cen, cua, ...
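One plausible instantiation of the test above (an assumed setup, not necessarily the exact statistic used in AVENUE): compare the observed number of c-stems shared by two c-suffixes against the count expected if the suffixes attached independently. The total number of c-stems, `N = 5000`, is a hypothetical figure for illustration.

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

N = 5000                   # assumed total number of c-stems (hypothetical)
both = 199                 # c-stems occurring with both 'a' and 'as'
only_a = 1237 - both
only_as = 404 - both
neither = N - both - only_a - only_as

stat = chi2_2x2(both, only_a, only_as, neither)
# stat lands far above the 0.005 critical value (7.88 for one degree of
# freedom), so independence of 'a' and 'as' is rejected: they plausibly
# share a paradigm. The same computation for 'a' and 'tro' (2 shared
# c-stems) yields a small statistic, so 'tro' stays out.
```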
138
  • Currently each c-stem is implicitly weighted equally
  • Weight c-stems by:
  • Length
  • Length of the longest c-suffix that attaches
  • Frequency
a.as.o.os 43: african, cas, jurídic, l, ...
a.as.os 50: afectad, cas, jurídic, l, ...
a.as.o 59: cas, citad, jurídic, l, ...
a.o.os 105: impuest, indonesi, italian, jurídic, ...
as.o.os 54: cas, implicad, jurídic, l, ...
a.as 199: huelg, incluid, industri, inundad, ...
a.os 134: impedid, impuest, indonesi, inundad, ...
as.os 68: cas, implicad, inundad, jurídic, ...
a.o 214: id, indi, indonesi, inmediat, ...
as.o 85: intern, jurídic, just, l, ...
o.os 268: human, implicad, indici, indocumentad, ...
a 1237: huelg, ib, id, iglesi, ...
as 404: huelg, huelguist, incluid, industri, ...
os 534: humorístic, human, hígad, impedid, ...
o 1139: hub, hug, human, huyend, ...
139
  • Sub-network density: every descendant of
    a.as.o.os is in the network; not true for
    a.as.o.os.tro
  • Some schemes are absent from this network
    (e.g. a.os.tro)
a.as.o.os 43: african, cas, jurídic, l, ...
a.as.os 50: afectad, cas, jurídic, l, ...
a.as.o 59: cas, citad, jurídic, l, ...
a.o.os 105: impuest, indonesi, italian, jurídic, ...
as.o.os 54: cas, implicad, jurídic, l, ...
a.as 199: huelg, incluid, industri, inundad, ...
a.os 134: impedid, impuest, indonesi, inundad, ...
as.os 68: cas, implicad, inundad, jurídic, ...
a.o 214: id, indi, indonesi, inmediat, ...
as.o 85: intern, jurídic, just, l, ...
o.os 268: human, implicad, indici, indocumentad, ...
a 1237: huelg, ib, id, iglesi, ...
as 404: huelg, huelguist, incluid, industri, ...
os 534: humorístic, human, hígad, impedid, ...
o 1139: hub, hug, human, huyend, ...
140
Word-to-Morpheme Segmentation
  • De facto standard measure for unsupervised
    morphology induction
  • Prerequisite for many NLP tasks
  • Machine Translation
  • Speech Recognition of highly inflecting languages
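Once a paradigm's c-suffixes are known, segmentation itself reduces to longest-suffix matching. A sketch with a hypothetical helper; a real analyzer must also decide which paradigm applies to a given word:

```python
def segment(word, paradigm_suffixes):
    """Strip the longest matching c-suffix of a known paradigm.
    Ø (the empty suffix) is the fallback when nothing matches."""
    for suffix in sorted(paradigm_suffixes, key=len, reverse=True):
        if suffix and word.endswith(suffix):
            return word[:-len(suffix)], suffix
    return word, ""

print(segment("jurídicas", {"a", "as", "o", "os"}))   # ('jurídic', 'as')
```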

141
[Parse trees: English S → [NP [Det The] [N trees]] [VP [V fell]];
Spanish: Los árboles cayeron]
  • Subject number marked on:
  • N-head (-es)
  • dependent Det (El vs. Los), and
  • governing V (-ó vs. -eron)
142
  • Morphology Learning: AVENUE Approach
  • Organize the raw data in the form of a network of
    paradigm candidate schemes
  • Search the network for a collection of schemes
    that represent true morphology paradigms of the
    language
  • Learn mappings between the schemes and
    features/functions using minimal pairs of elicited
    data
  • Construct an analyzer based on the collection of
    schemes and the acquired function mappings
143