Transcript and Presenter's Notes

Title: What is coming


1
What is coming
  • Today:
  • Probabilistic models
  • Improving classical models
  • Latent Semantic Indexing
  • Relevance feedback (Chapter 5)
  • Monday Feb 5:
  • Chapter 5 continued
  • Wednesday Feb 7:
  • Web Search Engines
  • Chapter 13; Google paper

2
Announcement: Free Food Event
  • Where & When: GWC 487, Tuesday Feb 6th, 12:15-1:30
  • What: Pizza, soft drinks
  • Catch: A pitch on going to graduate school at ASU CSE
  • Meet the admissions committee, some graduate students, some faculty members
  • Silver lining: Did we mention pizza?
  • Also not as boring as time-sharing condo presentations.

Make Sure to Come!!
3
Problems with Vector Model
  • No semantic basis!
  • Keywords are plotted as axes
  • But are they really independent?
  • Are they orthogonal?
  • No support for Boolean queries
  • How do you ask for papers that don't contain a keyword?

4
Probabilistic Model
  • Objective: capture the IR problem in a probabilistic framework
  • Given a user query, there is an ideal answer set
  • Querying is a specification of the properties of this ideal answer set (clustering)
  • But what are these properties?
  • Guess at the beginning what they could be (i.e., guess an initial description of the ideal answer set)
  • Improve by iteration

5
Probabilistic Model
  • An initial set of documents is retrieved somehow
  • The user inspects these docs looking for the relevant ones (in truth, only the top 10-20 need to be inspected)
  • The IR system uses this information to refine the description of the ideal answer set
  • By repeating this process, it is expected that the description of the ideal answer set will improve
  • Keep in mind the need to guess, at the very beginning, the description of the ideal answer set
  • The description of the ideal answer set is modeled in probabilistic terms

6
Probabilistic Ranking Principle
  • Given a user query q and a document dj,
  • estimate the probability that the user will find
    the document dj interesting (i.e., relevant).
  • The model assumes that this probability of
    relevance depends on the query and the document
    representations only.
  • Ideal answer set is referred to as R and should
    maximize the probability of relevance. Documents
    in the set R are predicted to be relevant.
  • But,
  • how to compute probabilities?
  • what is the sample space?

7
The Ranking
  • Probabilistic ranking is computed as
  • sim(q, dj) = P(dj relevant to q) / P(dj non-relevant to q)
  • This is the odds of the document dj being relevant
  • Taking the odds minimizes the probability of an erroneous judgement
  • Definitions:
  • wij ∈ {0, 1}
  • P(R | vec(dj)): probability that the given doc is relevant
  • P(¬R | vec(dj)): probability that the doc is not relevant

8
The Ranking
  • sim(dj, q) = P(R | vec(dj)) / P(¬R | vec(dj))
              = [P(vec(dj) | R) × P(R)] / [P(vec(dj) | ¬R) × P(¬R)]   (Bayes' rule)
              ~ P(vec(dj) | R) / P(vec(dj) | ¬R)
  • P(vec(dj) | R): probability of randomly selecting the document dj from the set R of relevant documents
  • P(R) and P(¬R) are the same for all docs, so they drop out of the ranking
9
Bayesian Inference
  • Schools of thought in probability:
  • frequentist
  • epistemological

10
Basic Probability
  • Basic Axioms:
  • 0 ≤ P(A) ≤ 1
  • P(certain) = 1
  • P(A ∨ B) = P(A) + P(B), if A and B are mutually exclusive

11
Basic Probability
  • Conditioning:
  • P(A) = P(A ∧ B) + P(A ∧ ¬B)
  • P(A) = Σi P(A ∧ Bi), where {Bi}, ∀i, is a set of exhaustive and mutually exclusive events
  • P(A) + P(¬A) = 1
  • Independence:
  • P(A | K): belief in A given the knowledge K
  • If P(A | B) = P(A), we say A and B are independent
  • If P(A | B ∧ C) = P(A | C), we say A and B are conditionally independent, given C
  • P(A ∧ B) = P(A | B) P(B)
  • P(A) = Σi P(A | Bi) P(Bi)  (checked numerically below)
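A quick numeric check of the total-probability identity above; a minimal Python sketch with made-up values:

    # Total probability: P(A) = sum_i P(A|Bi) P(Bi) over an exhaustive,
    # mutually exclusive partition {Bi}. The numbers are illustrative only.
    p_b = [0.5, 0.3, 0.2]          # P(Bi); must sum to 1
    p_a_given_b = [0.9, 0.4, 0.1]  # P(A|Bi)

    p_a = sum(pa * pb for pa, pb in zip(p_a_given_b, p_b))
    print(p_a)  # 0.45 + 0.12 + 0.02 = 0.59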

12
Bayesian Inference
  • Bayes' Rule: the heart of Bayesian techniques
  • P(H | e) = P(e | H) P(H) / P(e)
  • where H is a hypothesis and e is the evidence
  • P(H): prior probability
  • P(H | e): posterior probability
  • P(e | H): probability of e if H is true
  • P(e): a normalizing constant; treating it as such, we write
  • P(H | e) ~ P(e | H) P(H)
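A minimal sketch of this rule in Python (the numbers are illustrative, not from the slides):

    # Bayes' rule: P(H|e) = P(e|H) P(H) / P(e), with P(e) expanded by
    # total probability over H and not-H.
    def posterior(prior_h, p_e_given_h, p_e_given_not_h):
        evidence = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
        return p_e_given_h * prior_h / evidence

    # P(H) = 0.3, P(e|H) = 0.8, P(e|not H) = 0.2  ->  P(H|e) is about 0.63
    print(posterior(0.3, 0.8, 0.2))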

13
The Ranking
  • sim(dj, q) ~ P(vec(dj) | R) / P(vec(dj) | ¬R)
  • where vec(dj) is of the form (k1, ¬k2, k3, ..., kt)
  • Using the pairwise independence assumption among keywords:
  • sim(dj, q) ~ [Π P(ki | R) × Π P(¬ki | R)] / [Π P(ki | ¬R) × Π P(¬ki | ¬R)]
  • where the products over P(ki | ·) range over keywords present in dj, and the products over P(¬ki | ·) range over keywords NOT present in dj
  • P(ki | R): probability that the index term ki is present in a document randomly selected from the set R of relevant documents
14
The Ranking
  • sim(dj, q) ~ log { [Π P(ki | R) × Π P(¬ki | R)] / [Π P(ki | ¬R) × Π P(¬ki | ¬R)] }
  • Rearranging and dropping the factor K that is the same for all documents:
  • sim(dj, q) ~ Σi wiq × wij × ( log [P(ki | R) / P(¬ki | R)] + log [P(¬ki | ¬R) / P(ki | ¬R)] )
  • where P(¬ki | R) = 1 - P(ki | R) and P(¬ki | ¬R) = 1 - P(ki | ¬R)

15
The Initial Ranking
  • sim(dj, q) ~ Σi wiq × wij × ( log [P(ki | R) / (1 - P(ki | R))] + log [(1 - P(ki | ¬R)) / P(ki | ¬R)] )
  • How do we get the probabilities P(ki | R) and P(ki | ¬R)?
  • Estimates based on assumptions:
  • P(ki | R) = 0.5
  • P(ki | ¬R) = ni / N, where ni is the number of docs that contain ki
  • Use this initial guess to retrieve an initial ranking
  • Improve upon this initial ranking (see the sketch below)
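To make the initial ranking concrete, here is a minimal Python sketch (hypothetical names: df maps each index term to its document frequency ni, N is the collection size; binary weights wij = wiq = 1 for the terms present):

    import math

    def bim_initial_weights(df, N):
        # Initial estimates: P(ki|R) = 0.5 and P(ki|not R) = ni/N
        weights = {}
        for term, ni in df.items():
            p_rel = 0.5
            p_nrel = ni / N
            weights[term] = (math.log(p_rel / (1 - p_rel))
                             + math.log((1 - p_nrel) / p_nrel))
        return weights

    def bim_score(doc_terms, query_terms, weights):
        # sim(dj, q): sum of term weights over terms shared by doc and query
        return sum(weights[t] for t in query_terms & doc_terms if t in weights)

    # Toy usage: a 100-doc collection with two index terms
    w = bim_initial_weights({"model": 30, "query": 10}, N=100)
    print(bim_score({"model", "ranking"}, {"model", "query"}, w))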

16
Improving the Initial Ranking
  • sim(dj, q) ~ Σi wiq × wij × ( log [P(ki | R) / (1 - P(ki | R))] + log [(1 - P(ki | ¬R)) / P(ki | ¬R)] )
  • Let
  • V: the set of docs initially retrieved
  • Vi: the subset of retrieved docs that contain ki
  • Reevaluate the estimates (V and Vi below denote the set sizes):
  • P(ki | R) = Vi / V
  • P(ki | ¬R) = (ni - Vi) / (N - V)
  • Repeat recursively

Relevance Feedback..
17
Improving the Initial Ranking
  • sim(dj, q) ~ Σi wiq × wij × ( log [P(ki | R) / (1 - P(ki | R))] + log [(1 - P(ki | ¬R)) / P(ki | ¬R)] )
  • To avoid problems when V = 1 or Vi = 0:
  • P(ki | R) = (Vi + 0.5) / (V + 1)
  • P(ki | ¬R) = (ni - Vi + 0.5) / (N - V + 1)
  • Alternatively (both variants plug into the sketch below):
  • P(ki | R) = (Vi + ni/N) / (V + 1)
  • P(ki | ¬R) = (ni - Vi + ni/N) / (N - V + 1)

Relevance Feedback..
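One feedback iteration under the adjusted estimates above might look as follows; a sketch assuming, as in the earlier snippet, that df holds document frequencies and N the collection size, with V and Vi counted from the top of the current ranking:

    import math

    def bim_feedback_weights(df, N, V, Vi):
        # Re-estimate with the 0.5 adjustment: P(ki|R) = (Vi + 0.5)/(V + 1),
        # P(ki|not R) = (ni - Vi + 0.5)/(N - V + 1)
        weights = {}
        for term, ni in df.items():
            vi = Vi.get(term, 0)
            p_rel = (vi + 0.5) / (V + 1)
            p_nrel = (ni - vi + 0.5) / (N - V + 1)
            weights[term] = (math.log(p_rel / (1 - p_rel))
                             + math.log((1 - p_nrel) / p_nrel))
        return weights

    # Toy usage: V = 10 docs retrieved; 7 contain "model", 2 contain "query"
    w = bim_feedback_weights({"model": 30, "query": 10}, N=100, V=10,
                             Vi={"model": 7, "query": 2})
    print(w)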
18
Pluses and Minuses
  • Advantages:
  • Docs are ranked in decreasing order of their probability of relevance
  • Disadvantages:
  • need to guess the initial estimates for P(ki | R)
  • the method does not take tf and idf factors into account

19
Brief Comparison of Classic Models
  • The Boolean model does not provide for partial matches and is considered the weakest classic model
  • Salton and Buckley did a series of experiments indicating that, in general, the vector model outperforms the probabilistic model on general collections
  • This also seems to be the view of the research community

20
Alternative Probabilistic Models
  • Probability Theory:
  • Semantically clear
  • Computationally clumsy
  • Why Bayesian Networks?
  • A clear formalism for combining evidence
  • Modularize the world (dependencies)
  • Bayesian Network Models for IR:
  • Inference Network (Turtle & Croft, 1991)
  • Belief Network (Ribeiro-Neto & Muntz, 1996)

21
Bayesian Networks
  • Definition:
  • Bayesian networks are directed acyclic graphs (DAGs) in which the nodes represent random variables, the arcs portray causal relationships between these variables, and the strengths of these causal influences are expressed by conditional probabilities.

22
Bayesian Networks
  • yi: parent nodes (in this case, root nodes)
  • x: a child node
  • the yi cause x
  • Y: the set of parents of x
  • The influence of Y on x can be quantified by any function F(x,Y) such that Σx F(x,Y) = 1 and 0 ≤ F(x,Y) ≤ 1
  • For example, F(x,Y) = P(x | Y)

23
Bayesian Networks
  • Given the dependencies declared in a Bayesian network, the joint probability can be computed as a product of local conditional probabilities, for example,
  • P(x1, x2, x3, x4, x5) = P(x1) P(x2 | x1) P(x3 | x1) P(x4 | x2, x3) P(x5 | x3)
  • P(x1): the prior probability of the root node
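A minimal sketch of this factorization for the five-node example; the conditional-probability entries are made up for illustration:

    # Network: x1 -> x2, x1 -> x3, {x2, x3} -> x4, x3 -> x5 (binary variables)
    p_x1 = 0.6                                           # P(x1 = T)
    p_x2 = {True: 0.7, False: 0.1}                       # P(x2 = T | x1)
    p_x3 = {True: 0.8, False: 0.5}                       # P(x3 = T | x1)
    p_x4 = {(True, True): 0.9, (True, False): 0.6,
            (False, True): 0.4, (False, False): 0.05}    # P(x4 = T | x2, x3)
    p_x5 = {True: 0.75, False: 0.2}                      # P(x5 = T | x3)

    def bern(p_true, value):
        # P(X = value) for a binary X with P(X = T) = p_true
        return p_true if value else 1 - p_true

    def joint(x1, x2, x3, x4, x5):
        # P(x1,...,x5) = P(x1) P(x2|x1) P(x3|x1) P(x4|x2,x3) P(x5|x3)
        return (bern(p_x1, x1) * bern(p_x2[x1], x2) * bern(p_x3[x1], x3)
                * bern(p_x4[(x2, x3)], x4) * bern(p_x5[x3], x5))

    print(joint(True, True, True, True, False))  # one cell of the joint table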

24
Bayesian Networks
  • In a Bayesian network, each variable x is conditionally independent of all its non-descendants, given its parents.
  • For example:
  • P(x4, x5 | x2, x3) = P(x4 | x2, x3) P(x5 | x3)

25
An Example Bayes Net
  • Typically, networks written in the causal direction wind up being most compact
  • they need the fewest probabilities to be specified

26
Two Models
Inference Network model
Belief network model
27
Comparison
  • The Inference Network Model came first and is the better known
  • Used in the Inquery system
  • The Belief Network adopts a set-theoretic view:
  • a clearly defined sample space
  • a separation between the query and document portions
  • it is able to reproduce any ranking produced by the Inference Network, while the converse is not true (for example, the ranking of the standard vector model)

28
Belief Network Model
  • Like the Inference Network Model:
  • Epistemological view of the IR problem
  • Random variables associated with documents, index terms, and queries
  • Contrary to the Inference Network Model:
  • Clearly defined sample space
  • Set-theoretic view
  • Different network topology

29
Belief Network Model
  • The Probability Space
  • Define:
  • K = {k1, k2, ..., kt}: the sample space (a concept space)
  • u ⊆ K: a subset of K (a concept)
  • ki: an index term (an elementary concept)
  • k = (k1, k2, ..., kt): a vector associated with each u such that gi(k) = 1 iff ki ∈ u
  • ki: a binary random variable associated with the index term ki (ki = 1 iff gi(k) = 1 iff ki ∈ u)

30
Belief Network Model
  • A Set-Theoretic View
  • Define:
  • a document dj and a query q as concepts in K
  • a generic concept c in K
  • a probability distribution P over K, as
  • P(c) = Σu P(c | u) P(u), with P(u) = (1/2)^t
  • P(c) is the degree of coverage of the space K by c

31
Belief Network Model
  • Network topology:
  • query side
  • document side

32
Belief Network Model
  • Assumption:
  • P(dj | q) is adopted as the rank of the document dj with respect to the query q. It reflects the degree of coverage provided to the concept dj by the concept q.

33
Belief Network Model
  • The rank of dj:
  • P(dj | q) = P(dj ∧ q) / P(q); since P(q) is the same for all documents, rank by P(dj ∧ q)
  • P(dj ∧ q) = Σu P(dj ∧ q | u) P(u)
  •           = Σu P(dj | u) P(q | u) P(u)
  •           = Σk P(dj | k) P(q | k) P(k)

34
Belief Network Model
  • For the vector model:
  • Define a vector ki given by
  • ki = k | (gi(k) = 1 ∧ ∀j≠i gj(k) = 0)
  • ⇒ in the state ki, only the node ki is active and all the others are inactive

35
Belief Network Model
  • For the vector model, define:
  • P(q | k) = wi,q / |q|   if k = ki ∧ gi(q) = 1
  •          = 0            if k ≠ ki ∨ gi(q) = 0
  • P(¬q | k) = 1 - P(q | k)
  • ⇒ wi,q / |q| is a normalized version of the weight of the index term ki in the query q

36
Belief Network Model
  • For the vector model, define:
  • P(dj | k) = wi,j / |dj|   if k = ki ∧ gi(dj) = 1
  •           = 0             if k ≠ ki ∨ gi(dj) = 0
  • P(¬dj | k) = 1 - P(dj | k)
  • ⇒ wi,j / |dj| is a normalized version of the weight of the index term ki in the document dj (see the sketch below)
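Putting slides 33-36 together: restricting the sum to the singleton states ki, the constant P(ki) = (1/2)^t drops out and the rank reduces to Σi (wi,j/|dj|) × (wi,q/|q|), i.e. the vector model's cosine ranking. A minimal Python sketch with hypothetical weight dictionaries:

    import math

    def belief_net_rank(doc_weights, query_weights):
        # Rank ~ sum over singleton states ki of P(dj|ki) P(q|ki),
        # with P(dj|ki) = w_ij/|dj| and P(q|ki) = w_iq/|q|
        doc_norm = math.sqrt(sum(w * w for w in doc_weights.values()))
        q_norm = math.sqrt(sum(w * w for w in query_weights.values()))
        shared = set(doc_weights) & set(query_weights)
        return sum((doc_weights[t] / doc_norm) * (query_weights[t] / q_norm)
                   for t in shared)

    # Toy usage with tf-idf-style weights
    print(belief_net_rank({"model": 1.2, "rank": 0.5},
                          {"model": 0.8, "query": 1.0}))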

37
Inference Network Model
  • Epistemological view of the IR problem
  • Random variables associated with documents, index
    terms and queries
  • A random variable associated with a document dj
    represents the event of observing that document

38
Inference Network Model
  • Nodes:
  • documents (dj)
  • index terms (ki)
  • queries (q, q1, and q2)
  • user information need (I)
  • Edges:
  • from dj to its index term nodes ki, indicating that the observation of dj increases the belief in the variables ki

39
Inference Network Model
  • dj has index terms k2, ki, and kt
  • q has index terms k1, k2, and ki
  • q1 and q2 model a Boolean formulation
  • q1 = (k1 ∧ k2) ∨ ki
  • I = q ∨ q1

40
Inference Network Model
  • Definitions:
  • ki, dj, and q: random variables
  • k = (k1, k2, ..., kt): a t-dimensional vector
  • ki ∈ {0, 1}, ∀i, so k has 2^t possible states
  • dj ∈ {0, 1}, ∀j; q ∈ {0, 1}
  • The rank of a document dj is computed as P(q ∧ dj)
  • q and dj are short representations for q = 1 and dj = 1
  • (dj stands for a state where dj = 1 and ∀l≠j, dl = 0, because we observe one document at a time)

41
Inference Network Model
  • P(q ∧ dj) = Σk P(q ∧ dj | k) P(k)
  •           = Σk P(q ∧ dj ∧ k)
  •           = Σk P(q | dj ∧ k) P(dj ∧ k)
  •           = Σk P(q | k) P(k | dj) P(dj)
  • P(¬(q ∧ dj)) = 1 - P(q ∧ dj)

42
Inference Network Model
  • Since the instantiation of dj makes all index term nodes mutually independent, P(k | dj) can be written as a product, so
  • P(q ∧ dj) = Σk P(q | k) × ( Π{i: gi(k)=1} P(ki | dj) ) × ( Π{i: gi(k)=0} P(¬ki | dj) ) × P(dj)
  • remember that gi(k) = 1 if ki = 1 in the vector k, and 0 otherwise

43
Inference Network Model
  • The prior probability P(dj) reflects the probability associated with the event of observing a given document dj
  • Uniform over N documents:
  • P(dj) = 1/N
  • P(¬dj) = 1 - 1/N
  • Based on the norm of the vector dj:
  • P(dj) = 1/|dj|
  • P(¬dj) = 1 - 1/|dj|

44
Inference Network Model
  • For the Boolean Model:
  • P(dj) = 1/N
  • P(ki | dj) = 1 if gi(dj) = 1, 0 otherwise
  • P(¬ki | dj) = 1 - P(ki | dj)
  • ⇒ only the nodes associated with the index terms of the document dj are activated

45
Inference Network Model
  • For the Boolean Model:
  • P(q | k) = 1 if ∃ qcc such that (qcc ∈ qdnf) ∧ (∀ki, gi(k) = gi(qcc)), 0 otherwise
  • P(¬q | k) = 1 - P(q | k)
  • ⇒ one of the conjunctive components of the query must be matched exactly by the active index terms in k (see the sketch below)
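A minimal sketch of this Boolean evaluation (hypothetical representation: qdnf lists the satisfying states of the query's disjunctive normal form as sets of active terms; "k3" stands in for the slide's ki):

    def boolean_q_given_k(k_active, qdnf):
        # P(q|k) = 1 iff some conjunctive component qcc of the DNF has
        # exactly the same active terms as the state k, else 0
        return 1 if any(k_active == qcc for qcc in qdnf) else 0

    # q = (k1 AND k2) OR k3 over the vocabulary {k1, k2, k3}; its
    # satisfying states, written as sets of active terms:
    qdnf = [frozenset({"k1", "k2"}), frozenset({"k1", "k2", "k3"}),
            frozenset({"k3"}), frozenset({"k1", "k3"}), frozenset({"k2", "k3"})]
    print(boolean_q_given_k(frozenset({"k1", "k2"}), qdnf))  # 1
    print(boolean_q_given_k(frozenset({"k1"}), qdnf))        # 0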

46
Inference Network Model
  • For a tf-idf ranking strategy:
  • P(dj) = 1 / |dj|
  • P(¬dj) = 1 - 1 / |dj|
  • ⇒ the prior probability reflects the importance of document normalization

47
Inference Network Model
  • For a tf-idf ranking strategy:
  • P(ki | dj) = fi,j
  • P(¬ki | dj) = 1 - fi,j
  • ⇒ the relevance of an index term ki is determined by its normalized term-frequency factor fi,j = freqi,j / maxl freql,j

48
Inference Network Model
  • For a tf-idf ranking strategy:
  • Define a vector ki given by
  • ki = k | (gi(k) = 1 ∧ ∀j≠i gj(k) = 0)
  • ⇒ in the state ki, only the node ki is active and all the others are inactive

49
Inference Network Model
  • For a tf-idf ranking strategy:
  • P(q | k) = idfi   if k = ki ∧ gi(q) = 1
  •          = 0      if k ≠ ki ∨ gi(q) = 0
  • P(¬q | k) = 1 - P(q | k)
  • ⇒ we can sum up the individual contributions of each index term, weighted by its normalized idf

50
Inference Network Model
  • For a tf-idf ranking strategy:
  • Since P(q | k) = 0 ∀k ≠ ki, we can rewrite P(q ∧ dj) as
  • P(q ∧ dj) = Σki P(q | ki) P(ki | dj) ( Πl≠i P(¬kl | dj) ) P(dj)
  •           = ( Πl P(¬kl | dj) ) P(dj) × Σki P(ki | dj) P(q | ki) / P(¬ki | dj)

51
Inference Network Model
  • For a tf-idf ranking strategy:
  • Applying the previous probabilities, we get
  • P(q ∧ dj) = Cj × (1/|dj|) × Σi fi,j × idfi × (1 / (1 - fi,j))
  • ⇒ Cj = Πl (1 - fl,j), which varies from document to document
  • ⇒ the ranking is distinct from the one provided by the vector model (see the sketch below)
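A minimal sketch of this ranking (hypothetical inputs: doc_freqs holds one document's raw term frequencies, df the document frequencies, N the collection size; taking idfi = log(N/ni)/log N as the normalized idf is an assumption, since the slides only say "normalized idf"):

    import math

    def inference_net_tfidf_rank(doc_freqs, df, N):
        # P(q AND dj) = (1/|dj|) * sum_i [ f_ij * idf_i * prod_{l!=i} (1 - f_lj) ],
        # the term-by-term form of Cj * sum_i f_ij * idf_i / (1 - f_ij),
        # which stays finite when some f_ij = 1
        max_f = max(doc_freqs.values())
        f = {t: fr / max_f for t, fr in doc_freqs.items()}       # f_ij
        idf = {t: math.log(N / df[t]) / math.log(N) for t in f}  # assumed normalization
        norm = math.sqrt(sum(v * v for v in f.values()))         # |dj|
        total = 0.0
        for i in f:
            others = 1.0
            for l in f:
                if l != i:
                    others *= 1 - f[l]                           # prod over l != i
            total += f[i] * idf[i] * others
        return total / norm

    print(inference_net_tfidf_rank({"model": 3, "query": 1},
                                   {"model": 30, "query": 10}, N=100))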

52
Inference Network Model
  • Combining evidential sources:
  • Let I = q ∨ q1
  • P(I ∧ dj) = Σk P(I | k) P(k | dj) P(dj)
  •           = Σk [1 - (1 - P(q | k))(1 - P(q1 | k))] P(k | dj) P(dj)
  • ⇒ this may yield retrieval performance that surpasses the performance of the query nodes in isolation (Turtle & Croft)
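The OR combination above is a one-liner; a minimal sketch with illustrative probabilities:

    def or_combine(p_q, p_q1):
        # P(I|k) for I = q OR q1: I is false only if both queries are false
        return 1 - (1 - p_q) * (1 - p_q1)

    print(or_combine(0.3, 0.5))  # 0.65: either query alone would give less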

53
Bayesian Network Models
  • Computational costs:
  • Inference Network Model: one document node at a time, so the cost is linear in the number of documents
  • Belief Network: only the states that activate each query term are considered
  • The networks do not impose additional costs because they contain no cycles.

54
Bayesian Network Models
  • Impact:
  • The major strength is the combination of distinct evidential sources to support the rank of a given document.