Title: Information Retrieval and Web Search
1 Information Retrieval and Web Search
- IR Models: the Vector Space Model
- Instructor: Rada Mihalcea
- Invited Lecturer: András Csomai
- Class web page: http://lit.csci.unt.edu/classes/CSCE5200
- Note: Some slides in this set were adapted from an IR course taught by Ray Mooney at UT Austin, who in turn adapted them from Joydeep Ghosh, who in turn adapted them
2 Topics
- Vector-based model
- TF-IDF weighting
- Similarity measures
- Inner product
- Euclidean distance
- Cosine
- Naïve implementation
- Practical implementation
- Weighting methods
3 IR Models
- User task: retrieval (ad hoc, filtering) or browsing
4 Vector-Space Model
- t distinct terms remain after preprocessing
- These unique terms form the VOCABULARY
- The terms are treated as orthogonal and form a vector space
- Dimension = t = |vocabulary|
- 2 terms → a two-dimensional space; n terms → an n-dimensional space
- Each term i in a document or query j is given a real-valued weight wij
- Both documents and queries are expressed as t-dimensional vectors
- dj = (w1j, w2j, ..., wtj)
5 Vector-Space Model
- Query as vector
- We regard the query as a short document
- We return the documents ranked by the closeness of their vectors to the query, which is also represented as a vector
- The vector model was developed in the SMART system (Salton, c. 1970) and is standardly used by TREC participants and web IR systems
6 Graphic Representation
- Example:
- D1 = 2T1 + 3T2 + 5T3
- D2 = 3T1 + 7T2 + T3
- Q = 0T1 + 0T2 + 2T3
- Is D1 or D2 more similar to Q?
- How do we measure the degree of similarity? Distance? Angle? Projection?
7 Document Collection Representation
- A collection of n documents can be represented in the vector space model by a term-document matrix (see the sketch below)
- An entry in the matrix corresponds to the weight of a term in a document; zero means the term has no significance in the document or simply does not occur in it
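A minimal sketch (in Python, with illustrative document names) of how such a term-document matrix of raw counts could be built; zero entries mean the term does not occur in that document:

    from collections import Counter

    # Toy collection; rows of the matrix are vocabulary terms, columns are documents.
    docs = {"d1": "new york times", "d2": "new york post", "d3": "los angeles times"}

    counts = {doc_id: Counter(text.split()) for doc_id, text in docs.items()}
    vocabulary = sorted({term for c in counts.values() for term in c})
    matrix = {term: [counts[d][term] for d in docs] for term in vocabulary}

    for term, row in matrix.items():
        print(f"{term:10s} {row}")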
8 Term Weights: Term Frequency
- More frequent terms in a document are more important, i.e. more indicative of the topic
- fij = frequency of term i in document j
- May want to normalize the term frequency (tf) by the frequency of the most common term in the document:
- tfij = fij / maxi{fij}
9 Term Weights: Inverse Document Frequency
- Terms that appear in many different documents are less indicative of the overall topic
- dfi = document frequency of term i = number of documents containing term i
- idfi = inverse document frequency of term i = log2(N / dfi)
- (N = total number of documents)
- An indication of a term's discrimination power
- The log is used to dampen the effect relative to tf
- Note the difference: document frequency vs. corpus frequency
10 TF-IDF Weighting
- A typical weighting is tf-idf weighting:
- wij = tfij × idfi = tfij × log2(N / dfi) (sketched in code below)
- A term occurring frequently in the document but rarely in the rest of the collection is given high weight
- Experimentally, tf-idf has been found to work well
- It has also been shown theoretically to work well (Papineni, NAACL 2001)
- More weighting schemes next time
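A hedged sketch of the weighting formula above (function and variable names are illustrative):

    import math

    def tf_idf(f_ij, max_f_j, df_i, n_docs):
        """w_ij = (f_ij / max_f_j) * log2(N / df_i)."""
        tf = f_ij / max_f_j               # term frequency, normalized by the most frequent term
        idf = math.log2(n_docs / df_i)    # inverse document frequency
        return tf * idf

    # A term occurring 5 times in a document whose most frequent term occurs 10 times,
    # and appearing in 100 of 10,000 documents:
    print(tf_idf(5, 10, 100, 10_000))     # 0.5 * log2(100) ≈ 3.32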
11 Computing TF-IDF: An Example
- Given a document containing terms with the following frequencies:
- A(3), B(2), C(1)
- Assume the collection contains 10,000 documents and the document frequencies of these terms are:
- A(50), B(1300), C(250)
- Then (checked in the sketch below):
- A: tf = 3/3; idf = log(10000/50) = 5.3; tf-idf = 5.3
- B: tf = 2/3; idf = log(10000/1300) = 2.0; tf-idf = 1.3
- C: tf = 1/3; idf = log(10000/250) = 3.7; tf-idf = 1.2
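Note that these particular figures come out with the natural logarithm rather than log2; a small sketch that reproduces them (rounding idf to one decimal, as the slide does):

    import math

    N = 10_000
    freqs = {"A": 3, "B": 2, "C": 1}
    dfs = {"A": 50, "B": 1300, "C": 250}
    max_f = max(freqs.values())

    for term in freqs:
        tf = freqs[term] / max_f
        idf = round(math.log(N / dfs[term]), 1)   # natural log gives 5.3, 2.0, 3.7
        print(term, f"tf={tf:.2f}", f"idf={idf}", f"tf-idf={tf * idf:.1f}")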
12 Query Vector
- The query vector is typically treated as a document and is also tf-idf weighted
- An alternative is for the user to supply weights for the given query terms
13 Similarity Measure
- We now have vectors for all documents in the collection and a vector for the query; how do we compute similarity?
- A similarity measure is a function that computes the degree of similarity between two vectors
- Using a similarity measure between the query and each document:
- It is possible to rank the retrieved documents in the order of presumed relevance
- It is possible to enforce a certain threshold so that the size of the retrieved set can be controlled
14 Desiderata for Proximity
- If d1 is near d2, then d2 is near d1
- If d1 is near d2, and d2 is near d3, then d1 is not far from d3
- No document is closer to d than d itself
- Sometimes it is a good idea to determine the maximum possible similarity as the "distance" between a document d and itself
15 First Cut: Euclidean Distance
- The distance between vectors d1 and d2 is the length of the vector d1 - d2
- Euclidean distance: dist(d1, d2) = √(Σi (wi1 - wi2)²)
- Exercise: Determine the Euclidean distance between the vectors (0, 3, 2, 1, 10) and (2, 7, 1, 0, 0) (worked below)
- Why is this not a great idea?
- We still haven't dealt with the issue of length normalization
- Long documents would be more similar to each other by virtue of length, not topic
- However, we can implicitly normalize by looking at angles instead
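A one-liner for the exercise (answer in the comment):

    import math

    d1, d2 = (0, 3, 2, 1, 10), (2, 7, 1, 0, 0)
    euclidean = math.sqrt(sum((a - b) ** 2 for a, b in zip(d1, d2)))
    print(euclidean)   # sqrt(4 + 16 + 1 + 1 + 100) = sqrt(122) ≈ 11.05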
16 Second Cut: Manhattan Distance
- Also called the "city block" measure
- Based on the idea that in American cities you generally cannot follow a straight line between two points
- Uses the formula: dist(d1, d2) = Σi |wi1 - wi2|
- Exercise: Determine the Manhattan distance between the vectors (0, 3, 2, 1, 10) and (2, 7, 1, 0, 0) (worked below)
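The same exercise with the city-block formula:

    d1, d2 = (0, 3, 2, 1, 10), (2, 7, 1, 0, 0)
    manhattan = sum(abs(a - b) for a, b in zip(d1, d2))
    print(manhattan)   # 2 + 4 + 1 + 1 + 10 = 18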
17 Third Cut: Inner Product
- Similarity between the vector for document dj and the query q can be computed as the vector inner product (sketched below):
- sim(dj, q) = dj · q = Σi wij × wiq
- where wij is the weight of term i in document j and wiq is the weight of term i in the query
- For binary vectors, the inner product is the number of matched query terms in the document (the size of the intersection)
- For weighted term vectors, it is the sum of the products of the weights of the matched terms
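A minimal sketch of the inner product over sparse term-weight vectors (dictionaries mapping term to weight; the example terms are illustrative):

    def inner_product(doc, query):
        # Sum of the products of the weights of the terms shared by doc and query.
        return sum(w * query[t] for t, w in doc.items() if t in query)

    # Binary vectors: counts the matched query terms.
    print(inner_product({"car": 1, "insurance": 1, "best": 1},
                        {"car": 1, "insurance": 1}))               # 2
    # Weighted vectors: sum of products of the matched weights.
    print(inner_product({"car": 0.5, "insurance": 1.2},
                        {"car": 2.0, "insurance": 1.0}))           # 0.5*2.0 + 1.2*1.0 = 2.2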
18 Properties of the Inner Product
- Favors long documents with a large number of unique terms
- Again, the issue of normalization
- Measures how many terms matched, but not how many terms are not matched
19 Inner Product: Example 1
20 Inner Product: Exercise
21 Cosine Similarity
- The closeness of vectors d1 and d2 is captured by the cosine of the angle θ between them
- Note: this is a similarity, not a distance
22 Cosine Similarity
- Cosine of the angle between two vectors (sketched below):
- cosSim(dj, q) = (dj · q) / (|dj| |q|) = Σi wij × wiq / (√(Σi wij²) × √(Σi wiq²))
- The denominator involves the lengths of the vectors
- So the cosine measure is also known as the normalized inner product
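A sketch of the normalized inner product over the same sparse representation used above:

    import math

    def cosine_similarity(d, q):
        dot = sum(w * q[t] for t, w in d.items() if t in q)     # inner product
        len_d = math.sqrt(sum(w * w for w in d.values()))       # |d|
        len_q = math.sqrt(sum(w * w for w in q.values()))       # |q|
        return dot / (len_d * len_q) if len_d and len_q else 0.0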
23 Cosine Similarity: Exercise
- Exercise: Rank the following by decreasing cosine similarity:
- Two documents that have only frequent words (the, a, an, of) in common
- Two documents that have no words in common
- Two documents that have many rare words in common (wingspan, tailfin)
24 Example
- Documents: Austen's Sense and Sensibility (SAS) and Pride and Prejudice (PAP); Bronte's Wuthering Heights (WH)
- cos(SAS, PAP) = .996 × .993 + .087 × .120 + .017 × 0.0 ≈ 0.999
- cos(SAS, WH) = .996 × .847 + .087 × .466 + .017 × .254 ≈ 0.888
25 Cosine Similarity vs. Inner Product
- Cosine similarity measures the cosine of the angle between two vectors: the inner product normalized by the vector lengths
- CosSim(dj, q) = (dj · q) / (|dj| |q|)
- InnerProduct(dj, q) = dj · q
- D1 = 2T1 + 3T2 + 5T3   CosSim(D1, Q) = 10 / √((4+9+25)(0+0+4)) = 0.81
- D2 = 3T1 + 7T2 + 1T3   CosSim(D2, Q) = 2 / √((9+49+1)(0+0+4)) = 0.13
- Q = 0T1 + 0T2 + 2T3
- D1 is 6 times better than D2 using cosine similarity, but only 5 times better using the inner product (checked in the sketch below)
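The numbers above can be checked directly, treating D1, D2 and Q as the coordinate vectors (2, 3, 5), (3, 7, 1) and (0, 0, 2):

    import math

    def cos_sim(x, y):
        dot = sum(a * b for a, b in zip(x, y))
        return dot / math.sqrt(sum(a * a for a in x) * sum(b * b for b in y))

    D1, D2, Q = (2, 3, 5), (3, 7, 1), (0, 0, 2)
    print(cos_sim(D1, Q))                       # 10 / sqrt(38 * 4) ≈ 0.81
    print(cos_sim(D2, Q))                       # 2 / sqrt(59 * 4)  ≈ 0.13
    print(sum(a * b for a, b in zip(D1, Q)))    # inner product = 10
    print(sum(a * b for a, b in zip(D2, Q)))    # inner product = 2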
26 Comments on Vector Space Models
- Simple, mathematically based approach
- Considers both local (tf) and global (idf) word occurrence frequencies
- Provides partial matching and ranked results
- Tends to work quite well in practice despite obvious weaknesses
- Allows efficient implementation for large document collections
27 Problems with the Vector Space Model
- Missing semantic information (e.g. word sense)
- Missing syntactic information (e.g. phrase structure, word order, proximity information)
- Assumption of term independence (e.g. ignores synonymy)
- Lacks the control of a Boolean model (e.g., requiring a term to appear in a document)
- Given a two-term query "A B", it may prefer a document containing A frequently but not B over a document that contains both A and B, but both less frequently
28 Naïve Implementation
- Convert all documents in collection D to tf-idf weighted vectors dj over the keyword vocabulary V
- Convert the query to a tf-idf-weighted vector q
- For each dj in D (the whole loop is sketched below):
- Compute the score sj = cosSim(dj, q)
- Sort documents by decreasing score
- Present the top-ranked documents to the user
- Time complexity: O(|V| · |D|), bad for large V and D!
- |V| = 10,000, |D| = 100,000, |V| · |D| = 1,000,000,000
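A sketch of this naïve loop, assuming documents and the query are already tf-idf weighted sparse vectors (dicts mapping term to weight):

    import math

    def naive_retrieve(doc_vectors, query):
        def cos(d, q):
            dot = sum(w * q.get(t, 0.0) for t, w in d.items())
            norm = (math.sqrt(sum(w * w for w in d.values()))
                    * math.sqrt(sum(w * w for w in q.values())))
            return dot / norm if norm else 0.0

        # Score every document in the collection, then sort by decreasing score.
        scores = [(doc_id, cos(vec, query)) for doc_id, vec in doc_vectors.items()]
        return sorted(scores, key=lambda pair: pair[1], reverse=True)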
29 Practical Implementation
- Based on the observation that documents containing none of the query keywords do not affect the final ranking
- Try to identify only those documents that contain at least one query keyword
- Achieved in practice through the use of an inverted index
30 Step 1: Preprocessing
- Implement the preprocessing functions (sketch below):
- for tokenization
- for stop word removal
- for stemming
- Input: documents that are read one by one from the collection
- Output: tokens to be added to the index (no punctuation, no stop words, stemmed)
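A minimal sketch of such a preprocessing function; the stop-word list here is a tiny illustrative one, and a real system would plug in a stemmer (e.g. Porter's) at the marked step:

    import re

    STOP_WORDS = {"the", "a", "an", "of", "and", "to", "in", "is"}

    def preprocess(text):
        tokens = re.findall(r"[a-z0-9]+", text.lower())       # tokenization, punctuation dropped
        tokens = [t for t in tokens if t not in STOP_WORDS]    # stop word removal
        # stemming would go here, e.g. tokens = [stem(t) for t in tokens]
        return tokens

    print(preprocess("The quick brown foxes, jumping over a lazy dog."))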
31 Step 2: Indexing
- Build an inverted index, with an entry for each word in the vocabulary
- Input: tokens obtained from the preprocessing module
- Output: an inverted index for fast access
32 Step 2 (cont'd)
- Many data structures are appropriate for fast access:
- B-trees, skip lists, hashtables
- We need:
- one entry for each word in the vocabulary
- for each such entry, a list of all the documents where it appears, together with the corresponding frequency (→ TF)
- for each such entry, the number of documents in which the word appears (→ DF, needed for IDF; index sketch below)
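A sketch of such an index as a hashtable (a Python dict), keeping per-document frequencies and the document frequency of each term:

    from collections import defaultdict, Counter

    def build_index(docs):
        """docs: dict mapping doc_id -> list of preprocessed tokens."""
        postings = defaultdict(dict)                 # term -> {doc_id: frequency}  (-> TF)
        for doc_id, tokens in docs.items():
            for term, freq in Counter(tokens).items():
                postings[term][doc_id] = freq
        df = {term: len(entry) for term, entry in postings.items()}   # -> DF, used for IDF
        return postings, df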
33 Step 2 (cont'd)
34 Step 2 (cont'd)
- TF and IDF for each token can be computed in one pass
- Cosine similarity also requires document lengths
- A second pass is needed to compute document vector lengths:
- Remember that the length of a document vector is the square root of the sum of the squares of the weights of its tokens
- Remember that the weight of a token is TF × IDF
- Therefore, we must wait until the IDFs are known (and therefore until all documents are indexed) before document lengths can be determined
- Do a second pass over all documents: keep a list or hashtable with all document ids, and for each document determine its length (sketch below)
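A sketch of that second pass, assuming the postings and document frequencies returned by a build_index function like the one sketched earlier:

    import math

    def document_lengths(postings, df, n_docs):
        sq_sum = {}                                  # doc_id -> sum of squared tf-idf weights
        for term, entry in postings.items():
            idf = math.log2(n_docs / df[term])
            for doc_id, tf in entry.items():
                weight = tf * idf                    # weight of the token = TF x IDF
                sq_sum[doc_id] = sq_sum.get(doc_id, 0.0) + weight * weight
        return {doc_id: math.sqrt(s) for doc_id, s in sq_sum.items()}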
35 Time Complexity of Indexing
- The complexity of creating the vector and indexing a document of n tokens is O(n)
- So indexing m such documents is O(mn)
- Computing token IDFs can be done during the same first pass
- Computing vector lengths is also O(mn)
- The complete process is O(mn), which is also the complexity of just reading in the corpus
36 Step 3: Retrieval
- Use the inverted index (from Step 2) to find the limited set of documents that contain at least one of the query words
- Incrementally compute the cosine similarity of each indexed document as the query words are processed one by one
- To accumulate a total score for each retrieved document, store the retrieved documents in a hashtable, where the document id is the key and the partially accumulated score is the value (sketch below)
- Input: query and inverted index (from Step 2)
- Output: similarity values between the query and the documents
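A sketch of this accumulator loop, reusing the index and document lengths sketched earlier; query terms are weighted by idf only (i.e. a query term frequency of 1), and normalizing by the query length is omitted since it does not change the ranking:

    import math

    def retrieve(query_tokens, postings, df, doc_lengths, n_docs):
        scores = {}                                   # doc_id -> partially accumulated score
        for term in query_tokens:
            if term not in postings:                  # skip query words absent from the index
                continue
            idf = math.log2(n_docs / df[term])
            for doc_id, tf in postings[term].items():
                scores[doc_id] = scores.get(doc_id, 0.0) + (tf * idf) * idf
        return {doc_id: s / doc_lengths[doc_id] for doc_id, s in scores.items()}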
37 Step 4: Ranking
- Sort the hashtable of retrieved documents by the value of the cosine similarity:
- sort { $retrieved{$b} <=> $retrieved{$a} } keys %retrieved (Python equivalent below)
- Return the documents in descending order of their relevance
- Input: similarity values between the query and the documents
- Output: ranked list of documents in decreasing order of relevance
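The equivalent of that Perl one-liner in Python:

    scores = {"doc1": 0.81, "doc7": 0.13, "doc3": 0.45}    # illustrative similarity values
    ranked = sorted(scores, key=scores.get, reverse=True)
    print(ranked)    # ['doc1', 'doc3', 'doc7']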
38 What Weighting Methods?
- Weights are applied to both document terms and query terms
- Direct impact on the final ranking
- → direct impact on the results
- → direct impact on the quality of the IR system
39 Standard Evaluation Measures
Starts with a CONTINGENCY table:

                     retrieved    not retrieved
    relevant             w              x          n1 = w + x
    not relevant         y              z
                     n2 = w + y                    N = total number of documents
40 Precision and Recall
- Recall: from all the documents that are relevant out there, how many did the IR system retrieve?
- Recall = w / (w + x)
- Precision: from all the documents that are retrieved by the IR system, how many are relevant?
- Precision = w / (w + y) (sketch below)
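Both measures follow directly from the contingency table (the counts here are illustrative):

    def precision_recall(w, x, y):
        precision = w / (w + y)    # relevant among the retrieved
        recall = w / (w + x)       # retrieved among the relevant
        return precision, recall

    print(precision_recall(w=30, x=20, y=10))   # (0.75, 0.6)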