1
Online Objective Reduction to Deal with
Many-Objective Problems
  • A. Jaimes, C. Coello Coello, J. E. Barrientos
  • CINVESTAV-IPN, Mexico
  • @ Intl. Conf. on Evolutionary Multi-Criterion
    Optimization (EMO) 2009

2
Contents
  • Research Issue and Objective
  • Issues in existing offline reduction algorithms
  • Online Objective Reduction
  • Offline objective reduction algorithm
  • Online objective reduction algorithms REDGA-S and
    REDGA-X
  • Experimental Studies
  • Conclusion

3
Introduction
  • It is known that it is difficult to find the true
    Pareto front in high-dimensional problems (where
    the # of objectives > 3),
  • since the number of individuals in the Pareto set
    increases with the number of objectives
  • The domination relation does not work well when a
    problem has a large number of objectives
  • Several algorithms have been proposed to reduce
    the number of objectives:
  • by Brockhoff and Zitzler, 2006
  • by Deb and Saxena, 2006
  • by Jaimes and Coello, 2008

4
Research Issues
  • Existing algorithms' computational cost is high,
    since they require running a MOEA twice
  • First, explore the original problem with a MOEA
    (e.g., NSGA-II). Based on the solutions produced,
    the objective reduction algorithm decides which
    objectives to remove. Then, run the MOEA again to
    solve the problem with the reduced set of objectives
  • The computational cost of solving the reduced
    problem is smaller than that of the original
    problem; however, the total computational cost is
    high

5
  • It could be too aggressive to reduce a number of
    objectives from the beginning
  • Reducing some objectives implies a loss of
    information that could be important for converging
    to the true Pareto front
  • Too much objective reduction may damage a MOEA's
    search ability
  • Successive reduction could be better, but no one
    has investigated this issue

6
Research Objectives
  • Propose an online objective reduction algorithm
  • Less computational cost,
  • since it does not require an offline objective
    reduction phase
  • Successive reduction of objectives
  • Reduce the most redundant objectives successively
  • This reduces the damage to a MOEA's search ability
    caused by too-aggressive objective reduction
  • The proposed algorithm is an extension of the
    authors' prior work [1]
  • Incorporate an existing objective reduction
    algorithm into the main loop of a MOEA
  • Investigate its computational cost and the
    quality of solutions

[1] A. L. Jaimes et al., "Objective Reduction Using
a Feature Selection Technique," GECCO 2008
7
Objective Reduction Algorithm
  • The algorithm, which finds a k-sized objective
    subset (KOSSA), uses a correlation matrix to
    estimate the conflict between each pair of
    objectives
  • It takes a set of non-dominated solutions as input

Dissimilarity Measure: 1 - ρ_{X,Y}
  • ρ_{X,Y} = cov(X, Y) / (σ_X σ_Y): correlation
    coefficient ∈ [-1, 1]
  • cov(X, Y): covariance between X's and Y's objective
    values
  • σ_X: standard deviation of X's objective values
  • µ_X: average of X's objective values
  • The result of 1 - ρ_{X,Y}:
  • 0 → X and Y are completely positively correlated
  • 2 → X and Y are completely negatively correlated
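
A minimal sketch of this dissimilarity computation in
Python (the function name and array layout are mine, not
the paper's):

    import numpy as np

    def dissimilarity_matrix(F):
        """Pairwise dissimilarity 1 - rho between objectives.

        F: (n_solutions, n_objectives) array of objective values
        for a set of non-dominated solutions. Returns an (m, m)
        matrix where 0 means two objectives are completely
        positively correlated and 2 means completely negatively
        correlated (i.e., maximally conflicting).
        """
        rho = np.corrcoef(F, rowvar=False)  # correlation between objective columns
        return 1.0 - rho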

8
(Figure: the 8 objectives plotted in the similarity
space; one point is labeled "8th objective")
  • Project the objectives into a similarity space where
    distance indicates the dissimilarity between
    objectives
  • 8 objectives in this example
  • For each objective, form a cluster with its p closest
    neighbors
  • p = 2 in this example
  • Select the most compact cluster
  • Compare the distances to the farthest neighbor
  • Shorter distance → more compact → more similar
  • Remove the p neighbors in the most compact cluster
  • The distance to the farthest neighbor is considered
    the error introduced by removing p objectives
  • Go back to the 2nd step and repeat

9
(Annotations on the reduction algorithm's pseudocode:)
  • A contains the distances r_{i,j} between objectives
  • r_{i,q}: distance from objective i to its q-th
    closest (farthest) neighbor
  • If q objectives cannot be reduced, decrease q
  • When only k objectives are left, the algorithm stops
  • min_i r_{i,q} > error means that the next reduction
    would introduce a larger error, so try to reduce the
    error by decreasing the size of the clusters
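
A sketch of the full reduction loop as I read these two
slides (the cluster-shrinking branch is simplified away;
`kossa_reduce` and its arguments are hypothetical names):

    import numpy as np

    def kossa_reduce(F, k, p):
        """Greedily remove correlated objectives until k remain.

        F: (n_solutions, m) objective values of non-dominated
        solutions; k: target subset size; p: cluster size.
        Returns the retained objective indices and the largest
        error introduced by any removal step.
        """
        keep = list(range(F.shape[1]))
        error = 0.0
        while len(keep) > k:
            q = min(p, len(keep) - k)  # never drop below k objectives
            A = 1.0 - np.corrcoef(F[:, keep], rowvar=False)  # dissimilarities
            # distance from each objective to its q-th closest neighbor
            # (row sorted ascending; column 0 is the self-distance 0)
            r_q = np.sort(A, axis=1)[:, q]
            center = int(np.argmin(r_q))            # most compact cluster
            error = max(error, float(r_q[center]))  # error of this removal
            neighbors = np.argsort(A[center])[1:q + 1]  # q closest neighbors
            for j in sorted(neighbors.tolist(), reverse=True):
                del keep[j]  # remove the neighbors, keep the center
        return keep, error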
10
The Proposed Online Reduction Algorithm
  • Two variants of online reduction
  • Successive reduction (REDGA-S)
  • Objectives are reduced successively during the
    search until k objectives remain
  • The interval (in generations) between reductions
    is mainly determined by the maximum number of
    generations and the number of objectives to reduce
  • Mixed reduction (REDGA-X)
  • A less aggressive scheme than REDGA-S
  • Objectives are reduced successively, but the
    entire objective set is periodically reintegrated
    during the search to counterbalance the loss of
    information

11
(Annotations on the REDGA-S pseudocode:)
  • Use the entire objective set during the first Gpre
    and the last Gpost generations
  • Run the MOEA for G generations with population P and
    reduced objective set F
  • Reduce k objectives
  • G: # of generations until the next reduction
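
The schedule this pseudocode describes might look like
the following sketch; `run_moea` and `reduce_objs` are
placeholder callables (e.g., an NSGA-II burst and the
KOSSA reduction above), not the authors' API:

    def redga_s(pop, m, k_min, k_step,
                g_max, g_pre, g_post, run_moea, reduce_objs):
        """Successive reduction: shrink the objective set every
        G generations until k_min objectives remain."""
        full = list(range(m))                  # all m objective indices
        steps = -(-(m - k_min) // k_step)      # ceil: number of reductions
        G = (g_max - g_pre - g_post) // steps  # gens per reduced set
        pop = run_moea(pop, full, g_pre)       # first Gpre gens: entire set
        objs = list(full)
        while len(objs) > k_min:
            objs = reduce_objs(pop, objs, min(k_step, len(objs) - k_min))
            pop = run_moea(pop, objs, G)
        return run_moea(pop, full, g_post)     # last Gpost gens: entire set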
12
(Timeline over Gmax generations: evaluate the population
with the entire objective set F during the first Gpre
generations, then with |F| - k objectives, then |F| - 2k,
and so on down to k objectives, and finally with the
entire set F again during the last Gpost generations)
13
(Annotations on the REDGA-X pseudocode:)
  • Gred: # of generations (per cycle) using the reduced
    objective set
  • Gint: # of generations (per cycle) using the entire
    objective set
  • Reduce k objectives and run the MOEA for Gred
    generations
  • Reintegrate the entire objective set and run the
    MOEA for Gint generations
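
A corresponding sketch of the REDGA-X cycle, with the
same placeholder callables as above (the slide's Pred
corresponds to g_red / (g_red + g_int)):

    def redga_x(pop, m, k_min, k_step,
                g_max, g_red, g_int, run_moea, reduce_objs):
        """Mixed reduction: alternate bursts on a successively
        reduced objective set with bursts on the entire set."""
        full = list(range(m))
        objs = list(full)
        spent = 0
        while spent < g_max:
            if len(objs) > k_min:  # keep reducing until k_min remain
                objs = reduce_objs(pop, objs, min(k_step, len(objs) - k_min))
            pop = run_moea(pop, objs, g_red)  # Gred gens, reduced set
            pop = run_moea(pop, full, g_int)  # Gint gens, entire set
            spent += g_red + g_int
        return pop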
14
Experimental Studies
  • Two types of experiments
  • Overall Assessment
  • Investigate the computational cost of the
    proposed method and the effect on solutions
    caused by the removal of objectives
  • Use real (wall-clock) computation time as the
    stopping criterion
  • Search Ability Assessment
  • Investigate whether an objective reduction method
    increases or decreases the number of generations
    needed to find solutions
  • Investigate the quality of individuals after a
    certain number of generations
  • Both types of experiments compare NSGA-II
    equipped with the reduction (REDGA) against plain
    NSGA-II

15
Problems
  • Problems in experiments
  • 0/1 multi-objective knapsack problem with 200
    items
  • 4, 6, 8 and 10 objectives
  • DTLZ2BZ, a variant of DTLZ2, with 30 variables
  • 4, 6, 8 and 10 objectives

16
Metrics
  • The additive ε-indicator, I_ε(A, B), is used as the
    metric:

    I_ε(A, B) = inf{ε ∈ ℝ : every individual in B is
    ε-dominated by some individual in A},

  • where an individual z¹ ε-dominates z² iff
    z¹_i ≤ z²_i + ε for every objective i
  • I.e., the ε-indicator is the minimum value (infimum)
    of ε such that the individuals in A ε-dominate all
    individuals in B

(Figure: individuals in A and individuals in B; adding ε
to all objectives of the individuals in B, an individual
in A comes to dominate all individuals in B)

  • The smaller I_ε(A, B) and the larger I_ε(B, A), the
    better A is relative to B
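
A small sketch of the additive ε-indicator under this
standard definition (arrays of objective vectors,
minimization assumed):

    import numpy as np

    def eps_indicator(A, B):
        """Additive epsilon-indicator I_eps(A, B).

        A, B: (n, m) arrays of objective vectors. Returns the
        smallest eps such that every point in B is eps-dominated
        by some point in A: max over b of min over a of
        max_i (a_i - b_i).
        """
        diff = A[:, None, :] - B[None, :, :]  # diff[i, j, k] = A[i,k] - B[j,k]
        return float(diff.max(axis=2).min(axis=0).max())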

17
Simulation Configurations
  • NSGA-II parameters
  • # of individuals: 300
  • Crossover probability: 0.9
  • Mutation probability: 1/N
  • Stopping criterion: run for 2, 4, 6 and 10 seconds
    on problems with 4, 6, 8 and 10 objectives,
    respectively

18
  • REDGA-S parameters
  • Minimum # of objectives (k): 3
  • Generations to run before (Gpre) and after (Gpost)
    reduction: 20% and 5% of the total generations
  • (No max # of generations is defined. How is it
    obtained??)
  • # of objectives to reduce in one reduction:
  • All objectives → REDGA-S-1
  • An intermediate number (2, 3, and 4 objectives when
    a problem has 6, 8 and 10 objectives, respectively)
    → REDGA-S-m
  • REDGA-X parameters
  • Percentage of generations using the reduced
    objective set (Pred): 0.85
  • # of objectives to reduce in one reduction:
  • Same as REDGA-S-m → REDGA-X-m

19
  • All reduction schemes outperform NSGA-II
  • REDGA-S-m achieves the best results
  • Reintegrating the entire objective set (REDGA-X-m)
    does not work
  • Successive reduction leads to good results,
    although REDGA-S-1 runs more generations

20
  • REDGA-S-1 achieves better results than in DTLZ2BZ
  • The knapsack problem's objective functions are
    computationally expensive → REDGA-S-1 can
    evaluate many more generations and find better
    results

21
Search Ability Assessment
  • Run the MOEAs for 200 generations and investigate
    the quality of individuals
  • In DTLZ2BZ, solutions on the true Pareto front have
    the property D = sqrt(Σ_i f_i²) = 1,
  • where D is the distance from a solution's objective
    vector to the origin (the true Pareto front is the
    unit hypersphere)
  • By investigating the solutions' D values, it is
    possible to assess their quality
  • (Distribution is not considered here)
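
A one-line sketch of this D computation (assuming the
rows of F are the solutions' objective vectors):

    import numpy as np

    def d_values(F):
        """D_i = sqrt(sum_k f_ik^2): distance of each solution's
        objective vector from the origin. For DTLZ2-type fronts,
        D = 1 indicates perfect convergence."""
        return np.linalg.norm(F, axis=1)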

22
  • Distribution of D values

23
  • Average and standard deviation of the D values
  • The quality of NSGA-II decays as the number of
    objectives increases
  • REDGA-S-m outperforms the other objective
    reduction schemes

24
Conclusion
  • The authors propose algorithms to reduce
    objectives online
  • REDGA-S-m can find good individuals in a short
    time (the first experimental study) and improves
    the MOEA's search ability (the second experimental
    study)
  • When objective functions are computationally
    expensive (e.g., the knapsack problem), the
    strategy of reducing many objectives at a time
    (REDGA-S-1) can outperform successive objective
    reduction
  • More generations are evaluated in a given period,
    so individuals evolve further
  • The method still requires the minimum # of
    objectives as a parameter
  • We're trying to remove this parameter