Title: CSci 8980: Data Mining (Fall 2002)
1. CSci 8980: Data Mining (Fall 2002)
- Vipin Kumar
- Army High Performance Computing Research Center
- Department of Computer Science
- University of Minnesota
- http://www.cs.umn.edu/~kumar
2. Sampling
- Sampling is the main technique employed for data selection.
- It is often used for both the preliminary investigation of the data and the final data analysis.
- Statisticians sample because obtaining the entire set of data of interest is too expensive or time consuming.
- Sampling is used in data mining because it is too expensive or time consuming to process all the data.
3. Sampling
- The key principle for effective sampling is the following: using a sample will work almost as well as using the entire data set, if the sample is representative.
- A sample is representative if it has approximately the same property (of interest) as the original set of data.
4. Types of Sampling
- Simple Random Sampling
  - There is an equal probability of selecting any particular item.
- Sampling without replacement
  - As each item is selected, it is removed from the population.
- Sampling with replacement
  - Objects are not removed from the population as they are selected for the sample.
  - In sampling with replacement, the same object can be picked more than once.
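A minimal sketch of the two variants using Python's standard random module; the population of 20 integer ids and the sample size of 5 are arbitrary choices for illustration.

```python
import random

population = list(range(20))   # 20 data objects, identified here by integer ids

# Sampling without replacement: each selected item is removed from the pool,
# so no object can appear twice in the sample.
sample_without = random.sample(population, k=5)

# Sampling with replacement: objects stay in the pool, so the same object
# can be picked more than once.
sample_with = [random.choice(population) for _ in range(5)]

print("without replacement:", sample_without)
print("with replacement:   ", sample_with)
```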
5. Sample Size
[Figure: the same data set shown at three sample sizes: 8000 points, 2000 points, and 500 points.]
6. Sample Size
- What sample size is necessary to get at least one object from each of 10 groups?
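A small simulation sketch of this question, assuming the 10 groups are equally likely to be sampled from; the sample sizes tried below are arbitrary.

```python
import random

def prob_all_groups(sample_size, n_groups=10, trials=10_000):
    """Estimate the probability that a random sample of the given size
    contains at least one object from each group."""
    hits = 0
    for _ in range(trials):
        seen = {random.randrange(n_groups) for _ in range(sample_size)}
        if len(seen) == n_groups:
            hits += 1
    return hits / trials

for size in (10, 20, 40, 60, 80):
    print(size, prob_all_groups(size))
```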
7. Discretization
- Some techniques don't use class labels (unsupervised discretization).
[Figure: the original data and three unsupervised discretizations of it: equal interval width, equal frequency, and K-means.]
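A quick sketch of the two simplest unsupervised schemes, equal interval width and equal frequency, on a made-up 1-D attribute; the K-means variant would instead cluster the values.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=10.0, scale=3.0, size=200)   # made-up 1-D attribute

k = 4  # number of intervals (bins)

# Equal interval width: bins of identical width spanning the value range.
width_edges = np.linspace(x.min(), x.max(), k + 1)
width_bins = np.digitize(x, width_edges[1:-1])

# Equal frequency: bin edges at quantiles, so each bin gets ~the same count.
freq_edges = np.quantile(x, np.linspace(0, 1, k + 1))
freq_bins = np.digitize(x, freq_edges[1:-1])

print("equal width counts:    ", np.bincount(width_bins, minlength=k))
print("equal frequency counts:", np.bincount(freq_bins, minlength=k))
```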
8. Discretization
- Some techniques use class labels (supervised discretization).
  - Entropy-based approach
[Figure: entropy-based discretization with 3 categories for both x and y, and with 5 categories for both x and y.]
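A minimal sketch of the core step of an entropy-based approach: choosing the single cut point on one attribute that minimizes the weighted class entropy of the two resulting intervals. The data are made up; in practice the step is applied recursively with a stopping criterion.

```python
import numpy as np

def entropy(labels):
    """Class entropy of a 1-D array of labels."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def best_cut(x, y):
    """Cut point on attribute x minimizing the weighted class entropy."""
    order = np.argsort(x)
    x, y = x[order], y[order]
    best = (np.inf, None)
    for i in range(1, len(x)):
        if x[i] == x[i - 1]:
            continue                      # only cut between distinct values
        cut = (x[i] + x[i - 1]) / 2
        left, right = y[:i], y[i:]
        w = (len(left) * entropy(left) + len(right) * entropy(right)) / len(y)
        best = min(best, (w, cut))
    return best                           # (weighted entropy, cut point)

# Made-up example: the class changes around x = 5.
x = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], dtype=float)
y = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
print(best_cut(x, y))                     # cut near 5.5, weighted entropy 0.0
```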
9. Aggregation
- Combine data objects or attributes.
- Aggregated data has more stable behavior (less variability).
[Figure: standard deviation of average monthly precipitation vs. standard deviation of average yearly precipitation.]
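A small sketch, with simulated precipitation data, of the effect the slide's figure shows: values aggregated over a year vary less than the underlying monthly values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up data: 30 years x 12 months of precipitation for one location.
monthly = rng.gamma(shape=2.0, scale=40.0, size=(30, 12))

# Aggregate: average over the months within each year.
yearly_avg = monthly.mean(axis=1)

# Variability of the raw monthly values vs. the aggregated yearly averages.
print("std of monthly values:       ", monthly.std())
print("std of yearly average values:", yearly_avg.std())
```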
10. Dimensionality Reduction
- Principal Components Analysis
- Singular Value Decomposition
- Curse of Dimensionality
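A minimal sketch of principal components analysis computed through the singular value decomposition, on made-up data; keeping k = 2 components is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 5))          # made-up data: 100 records, 5 attributes
Xc = X - X.mean(axis=0)                # center each attribute

# PCA via the singular value decomposition of the centered data matrix.
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 2                                  # keep the top-2 principal components
X_reduced = Xc @ Vt[:k].T              # project onto the first k components

explained = (s**2) / (s**2).sum()
print("variance explained by each component:", np.round(explained, 3))
print("reduced shape:", X_reduced.shape)
```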
11. Feature Subset Selection
- Redundant features
- duplicate much or all of the information
contained in one or more other attributes, e.g.,
the purchase price of a product and the amount of
sales tax paid contain much the same information.
- Irrelevant features
- contain no information that is useful for the
data mining task at hand, e.g., students' ID
numbers should be irrelevant to the task of
predicting students' grade point averages.
12. Mapping Data to a New Space
- Fourier transform
- Wavelet transform
[Figure: a time series of two sine waves, the same two sine waves plus noise, and the corresponding frequency-domain (Fourier) representation.]
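A short sketch of the "two sine waves plus noise" example; the frequencies (7 Hz and 18 Hz), noise level, and sampling rate are made up, but the Fourier transform still recovers the two dominant frequencies from the noisy signal.

```python
import numpy as np

# Made-up signal: two sine waves (7 Hz and 18 Hz) plus Gaussian noise.
t = np.linspace(0.0, 1.0, 512, endpoint=False)
signal = np.sin(2 * np.pi * 7 * t) + 0.5 * np.sin(2 * np.pi * 18 * t)
noisy = signal + 0.4 * np.random.default_rng(3).normal(size=t.size)

# Fourier transform: the two underlying frequencies stand out as peaks
# even though the time-domain signal looks noisy.
spectrum = np.abs(np.fft.rfft(noisy))
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])

top = freqs[np.argsort(spectrum)[-2:]]
print("two strongest frequencies (Hz):", sorted(top))
```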
13. Classification Outline
- Decision Tree Classifiers
- What are Decision Trees?
- Tree Induction
- ID3, C4.5, CART
- Tree Pruning
- Other Classifiers
- Memory Based
- Neural Net
- Bayesian
14. Classification: Definition
- Given a collection of records (training set):
  - Each record contains a set of attributes; one of the attributes is the class.
- Find a model for the class attribute as a function of the values of the other attributes.
- Goal: previously unseen records should be assigned a class as accurately as possible.
- A test set is used to determine the accuracy of the model. Usually, the given data set is divided into training and test sets, with the training set used to build the model and the test set used to validate it.
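A minimal sketch of this workflow using scikit-learn on a synthetic two-attribute data set; the data, the 70/30 split, and the classifier settings are illustrative assumptions, not course material.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Made-up data: 2 attributes, class = 1 when their sum exceeds a threshold.
rng = np.random.default_rng(4)
X = rng.uniform(size=(300, 2))
y = (X.sum(axis=1) > 1.0).astype(int)

# Divide the given data into a training set (to build the model)
# and a test set (to validate it).
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = DecisionTreeClassifier().fit(X_train, y_train)

# Accuracy on previously unseen records estimates how well the model generalizes.
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```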
15. Classification: Example
[Figure: a training set whose records have two categorical attributes, one continuous attribute, and a class label; a classifier is learned from the training set.]
16. Classification Techniques
- Decision Tree based Methods
- Rule-based Methods
- Memory based reasoning
- Neural Networks
- Genetic Algorithms
- Naïve Bayes and Bayesian Belief Networks
- Support Vector Machines
17. Decision Tree Based Classification
- Decision tree models are well suited for data mining:
  - Inexpensive to construct
  - Easy to interpret
  - Easy to integrate with database systems
  - Comparable or better accuracy in many applications
18. Example Decision Tree
[Figure: an example decision tree for training data with two categorical attributes, one continuous attribute, and a class label. The root splits on Refund: Yes leads to leaf NO; No leads to a split on MarSt. MarSt = Married leads to leaf NO; Single or Divorced leads to a split on TaxInc: < 80K leads to leaf NO, ≥ 80K leads to leaf YES.]
The splitting attribute at a node is determined
based on the Gini index.
19. Another Example of Decision Tree
[Figure: a second decision tree for the same data. The root splits on MarSt: Married leads to leaf NO; Single or Divorced leads to a split on Refund. Refund = Yes leads to leaf NO; Refund = No leads to a split on TaxInc: < 80K leads to leaf NO, ≥ 80K leads to leaf YES.]
There could be more than one tree that fits the
same data!
20. Decision Tree Algorithms
- Many algorithms:
  - Hunt's Algorithm (one of the earliest)
  - CART
  - ID3, C4.5
  - SLIQ, SPRINT
- General structure:
  - Tree induction
  - Tree pruning
21. Hunt's Method
- An example:
  - Attributes: Refund (Yes, No), Marital Status (Single, Married, Divorced), Taxable Income (continuous)
  - Class: Cheat, Don't Cheat
[Figure: Hunt's method applied step by step to this data, starting from a single node labeled Don't Cheat.]
22. Tree Induction
- Greedy strategy: choose to split the records based on an attribute that optimizes the splitting criterion.
- Two phases at each node:
  - Split-determining phase
    - How to split a given attribute?
    - Which attribute to split on? Use the splitting criterion.
  - Splitting phase
    - Split the records into children.
23. Splitting Based on Nominal Attributes
- Each partition has a subset of values signifying it.
- Multi-way split: use as many partitions as distinct values.
- Binary split: divides the values into two subsets; need to find the optimal partitioning.
[Figure: a multi-way split and two alternative binary splits on a nominal attribute.]
24. Splitting Based on Ordinal Attributes
- Each partition has a subset of values signifying it.
- Multi-way split: use as many partitions as distinct values.
- Binary split: divides the values into two subsets; need to find the optimal partitioning.
- What about this split? (A binary split that groups non-adjacent values ignores the ordering of an ordinal attribute.)
25. Splitting Based on Continuous Attributes
- Different ways of handling:
  - Discretization to form an ordinal categorical attribute
    - Static: discretize once at the beginning
    - Dynamic: ranges can be found by equal-interval bucketing, equal-frequency bucketing (percentiles), or clustering
  - Binary decision: (A < v) or (A ≥ v)
    - Consider all possible splits and find the best cut
    - Can be more compute intensive
26. Splitting Criterion
- Gini Index
- Entropy and Information Gain
- Misclassification error
27. Splitting Criterion: GINI
- Gini index for a given node t:
  GINI(t) = 1 - Σ_j [p(j | t)]²
  (NOTE: p(j | t) is the relative frequency of class j at node t.)
- Measures the impurity of a node:
  - Maximum (1 - 1/n_c) when records are equally distributed among all classes, implying the least interesting information
  - Minimum (0.0) when all records belong to one class, implying the most interesting information
28. Examples for Computing GINI
- C1 = 0, C2 = 6: P(C1) = 0/6 = 0, P(C2) = 6/6 = 1
  Gini = 1 - P(C1)² - P(C2)² = 1 - 0 - 1 = 0
- C1 = 1, C2 = 5: P(C1) = 1/6, P(C2) = 5/6
  Gini = 1 - (1/6)² - (5/6)² = 0.278
- C1 = 2, C2 = 4: P(C1) = 2/6, P(C2) = 4/6
  Gini = 1 - (2/6)² - (4/6)² = 0.444
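A few lines of Python reproducing the three node computations above from the per-class counts:

```python
def gini(counts):
    """Gini index of a node, given its per-class record counts."""
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

print(gini([0, 6]))   # 0.0
print(gini([1, 5]))   # ≈ 0.278
print(gini([2, 4]))   # ≈ 0.444
```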
29. Splitting Based on GINI
- Used in CART, SLIQ, SPRINT.
- Splitting criterion: minimize the Gini index of the split.
- When a node p is split into k partitions (children), the quality of the split is computed as
  GINI_split = Σ_{i=1..k} (n_i / n) · GINI(i)
  where n_i = number of records at child i, and n = number of records at node p.
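A short sketch of GINI_split; the per-class counts of the two hypothetical children are made up for illustration.

```python
def gini(counts):
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

def gini_split(children):
    """Weighted Gini of a split; `children` is a list of per-class count lists."""
    n = sum(sum(c) for c in children)
    return sum(sum(c) / n * gini(c) for c in children)

# Hypothetical binary split: child N1 has 5/2 records of C1/C2, child N2 has 1/4.
print(gini_split([[5, 2], [1, 4]]))   # ≈ 0.371
```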
30. Binary Attributes: Computing GINI Index
- Splits into two partitions.
- Effect of weighting the partitions: larger and purer partitions are sought.
[Figure: a binary split on attribute B (Yes/No) into child nodes N1 and N2.]
31. Categorical Attributes: Computing Gini Index
- For each distinct value, gather the counts for each class in the dataset.
- Use the count matrix to make decisions.
[Figure: count matrices for a multi-way split and for a two-way split (find the best partition of values).]
32. Continuous Attributes: Computing Gini Index
- Use binary decisions based on one value.
- Several choices for the splitting value:
  - Number of possible splitting values = number of distinct values.
- Each splitting value has a count matrix associated with it:
  - Class counts in each of the partitions, A < v and A ≥ v.
- Simple method to choose the best v:
  - For each v, scan the database to gather the count matrix and compute its Gini index.
  - Computationally inefficient! Repetition of work.
33. Continuous Attributes: Computing Gini Index...
- For efficient computation, for each attribute:
  - Sort the attribute on its values.
  - Linearly scan these values, each time updating the count matrix and computing the Gini index.
  - Choose the split position that has the least Gini index.
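A sketch of the sort-then-scan computation for one continuous attribute; the attribute values and labels below are made up in the style of the Taxable Income example.

```python
import numpy as np

def best_gini_split(values, labels):
    """Sort one continuous attribute, then scan it once, updating the class
    count matrix and the Gini of the split (A < v vs. A >= v) at each step."""
    order = np.argsort(values)
    values, labels = values[order], labels[order]
    classes = np.unique(labels)
    total = np.array([(labels == c).sum() for c in classes], dtype=float)
    left = np.zeros_like(total)
    n = len(labels)

    best_g, best_v = float("inf"), None
    for i in range(1, n):
        left[classes == labels[i - 1]] += 1   # record i-1 moves to the left partition
        if values[i] == values[i - 1]:
            continue                          # only consider cuts between distinct values
        right = total - left
        g = 0.0
        for part in (left, right):
            m = part.sum()
            g += m / n * (1.0 - ((part / m) ** 2).sum())
        if g < best_g:
            best_g, best_v = g, (values[i] + values[i - 1]) / 2
    return best_v, best_g

# Made-up attribute values (in thousands) with a binary class.
v = np.array([60, 70, 75, 85, 90, 95, 100, 120, 125, 220], dtype=float)
y = np.array([0,  0,  0,  1,  1,  1,  0,   0,   0,   0])
print(best_gini_split(v, y))   # best cut near 97.5 with Gini ≈ 0.30
```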
34. Alternative Splitting Criteria Based on INFO
- Entropy at a given node t:
  Entropy(t) = - Σ_j p(j | t) log2 p(j | t)
  (NOTE: p(j | t) is the relative frequency of class j at node t.)
- Measures the homogeneity of a node:
  - Maximum (log2 n_c) when records are equally distributed among all classes, implying the least information
  - Minimum (0.0) when all records belong to one class, implying the most information
- Entropy-based computations are similar to the GINI index computations.
35. Examples for Computing Entropy
- C1 = 0, C2 = 6: P(C1) = 0/6 = 0, P(C2) = 6/6 = 1
  Entropy = -0 log2 0 - 1 log2 1 = 0 - 0 = 0
- C1 = 1, C2 = 5: P(C1) = 1/6, P(C2) = 5/6
  Entropy = -(1/6) log2 (1/6) - (5/6) log2 (5/6) = 0.65
- C1 = 2, C2 = 4: P(C1) = 2/6, P(C2) = 4/6
  Entropy = -(2/6) log2 (2/6) - (4/6) log2 (4/6) = 0.92
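A few lines of Python reproducing the three entropy computations above from the per-class counts:

```python
from math import log2

def entropy(counts):
    """Entropy of a node, given its per-class record counts."""
    n = sum(counts)
    return -sum(c / n * log2(c / n) for c in counts if c > 0)

print(entropy([0, 6]))   # 0.0
print(entropy([1, 5]))   # ≈ 0.65
print(entropy([2, 4]))   # ≈ 0.92
```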
36. Splitting Based on INFO...
- Information Gain:
  GAIN_split = Entropy(p) - Σ_{i=1..k} (n_i / n) Entropy(i)
  where the parent node p is split into k partitions and n_i is the number of records in partition i.
- Measures the reduction in entropy achieved because of the split. Choose the split that achieves the most reduction (maximizes GAIN).
- Used in ID3 and C4.5.
- Disadvantage: tends to prefer splits that result in a large number of partitions, each being small but pure.
37. Splitting Based on INFO...
- Gain Ratio:
  GainRATIO_split = GAIN_split / SplitINFO, where SplitINFO = - Σ_{i=1..k} (n_i / n) log2 (n_i / n)
  and the parent node p is split into k partitions with n_i records in partition i.
- Adjusts Information Gain by the entropy of the partitioning (SplitINFO). Higher-entropy partitioning (a large number of small partitions) is penalized!
- Used in C4.5.
- Designed to overcome the disadvantage of Information Gain.
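A small sketch computing both GAIN_split and GainRATIO_split from per-class counts; the parent and child counts below are hypothetical.

```python
from math import log2

def entropy(counts):
    n = sum(counts)
    return -sum(c / n * log2(c / n) for c in counts if c > 0)

def gain_and_gain_ratio(parent, children):
    """Information gain and gain ratio of a split; counts are per-class lists."""
    n = sum(parent)
    weighted = sum(sum(c) / n * entropy(c) for c in children)
    gain = entropy(parent) - weighted
    split_info = -sum(sum(c) / n * log2(sum(c) / n) for c in children)
    return gain, gain / split_info if split_info > 0 else 0.0

# Hypothetical split of a node with class counts [6, 6] into three children.
print(gain_and_gain_ratio([6, 6], [[4, 0], [1, 4], [1, 2]]))
```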
38. Splitting Criteria Based on Classification Error
- Classification error at a node t:
  Error(t) = 1 - max_j p(j | t)
- Measures the misclassification error made by a node:
  - Maximum (1 - 1/n_c) when records are equally distributed among all classes, implying the least interesting information
  - Minimum (0.0) when all records belong to one class, implying the most interesting information
39. Examples for Computing Error
- C1 = 0, C2 = 6: P(C1) = 0/6 = 0, P(C2) = 6/6 = 1
  Error = 1 - max(0, 1) = 1 - 1 = 0
- C1 = 1, C2 = 5: P(C1) = 1/6, P(C2) = 5/6
  Error = 1 - max(1/6, 5/6) = 1 - 5/6 = 1/6
- C1 = 2, C2 = 4: P(C1) = 2/6, P(C2) = 4/6
  Error = 1 - max(2/6, 4/6) = 1 - 4/6 = 1/3
40. Comparison Among Splitting Criteria
For a 2-class problem
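A brief sketch tabulating the three measures for a 2-class node as the fraction p of records in class C1 varies; all three are 0 when the node is pure and largest at p = 0.5.

```python
import numpy as np

# For a 2-class problem, compare the impurity measures as a function of p.
p = np.linspace(0.1, 0.9, 9)

gini = 1 - p**2 - (1 - p)**2
entropy = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
error = 1 - np.maximum(p, 1 - p)

for row in zip(p, gini, entropy, error):
    print("p=%.1f  gini=%.3f  entropy=%.3f  error=%.3f" % row)
```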
41. C4.5
- Simple depth-first construction.
- Sorts continuous attributes at each node.
- Needs the entire data to fit in memory.
- Unsuitable for large datasets:
  - Needs out-of-core sorting.
- Classification accuracy shown to improve when entire datasets are used!
42. Decision Tree for Boolean Function
43. Decision Tree for Boolean Function
- Can simplify the tree.