Title: KNOWLEDGE-BASED SOLUTION TO DYNAMIC OPTIMIZATION PROBLEMS USING CULTURAL ALGORITHMS
1KNOWLEDGE-BASED SOLUTION TO DYNAMIC OPTIMIZATION
PROBLEMS USING CULTURAL ALGORITHMS
- by
- Saleh M. Saleem
- Computer Science Department
- Wayne State University
- Detroit, MI 48202
- sms_at_cs.wayne.edu
2Outlines
- Introduction
- Applications in Dynamic Environments
- Current Approaches for dynamic optimization.
- Cultural Algorithm framework
- System Description
- Problem Generator DF1 description.
- Example runs
- Experiments setup
- System performance in static environments
- Experiments in Magnitude Dominant Environments
- Experiments in Frequency Dominant Environments
- System performance in deceptive environments
- Conclusion and Future work
3Our general problem area
- In general we consider real-valued function optimization problems: max f(x) or min f(x).
- The problem is to find x to maximize f(x), where x = (x1, ..., xn) ∈ R^n.
4Introduction
- Over the years many approaches have been developed to track optimal solutions in real-time dynamic environments.
- The motivation for this study is the fact that the Cultural Algorithm (CA) naturally contains self-adaptive components that can make it an ideal model for dynamic environments.
- Our goal in this study is to evaluate the different types of knowledge required to track optimal solutions in real-time dynamic environments.
- In this presentation we will show that
- The search emerges into three main search phases.
- Which knowledge source is most active depends upon the problem's dynamic behavior and the phase of the search.
- Different knowledge sources interact symbiotically to solve a problem.
- The CA becomes more useful as the problem complexity increases.
5Applications in Dynamic Environments
- Fraud detection in the AAA insurance claims Sternberg Reynolds 97. Used Cultural Algorithms in reengineering a rule-based fraud detection expert system when the perpetrators behind the fraudulent claims change their fraud-producing strategies.
- Job shop scheduling Bierwirth 94, Lin 97. A new job can arrive at any time and has to be integrated into the schedule.
- Changing peaks problem Morrison De Jong 99, Branke 99. Finding the highest peak in a multi-dimensional landscape where the peaks' parameters change over time.
6Current Approaches in Optimizing Dynamic Problems
- Reinitializing approach Karr 95, 95b, Kidwell 94, Pipe 94
- Adapting mutation Cobb 90, Grefenstette 92
- Self-Adaptation Reynolds Sternberg 97
- Memory support Trojanowski 97, Mori 96
- Modifying the selection operator Goldberg 87, Ghosh 98
7The Reinitializing Approach
- Simple restart from scratch with every environmental change Karr 95, 95b, Kidwell 94, Pipe 94.
- Injecting some solutions from the old problem into the newly initialized problem Louis 96, 97.
8Adaptive Mutation
- Hyper-mutation Cobb 1990.
- Random-immigrants Grefenstette 1992.
- Standard mutation.
- A comparative study concluded that hyper-mutation performed best in slowly changing environments, but with a big change random-immigrants perform better Cobb 93.
9Self-Adaptive Mutation
- Using Cultural Algorithms to influence the search in a self-adaptive way Reynolds Sternberg 97.
- The belief space knowledge is used to reason about and influence the mutation on the search space in a self-adaptive way.
- The mutation direction and step size are dynamically adjusted by the influence function to meet the needs of the new search.
10Memory Support
- Use memory to store the ancestors of an individual Trojanowski 1997.
- Store the best individual in every generation and use them to generate offspring Mori 96.
11Modifying the Selection Operator
- To maintain diversity, individuals in less populated areas are given a boost in their performance score over those in crowded areas Goldberg 1987.
- Taking an individual's age into account to maintain diversity Ghosh 1998.
12Cultural Algorithms for Self-Adaptive Search
- Developed by R. Reynolds in 1979 as a computational framework in which to describe the evolution of social systems Reynolds 95.
- The CA is a dual inheritance evolutionary system derived from models of cultural evolution Reynolds 93.
- Any of the dynamic approaches discussed above can be used within the Cultural Algorithm framework.
13Previous Cultural Algorithms Applications in
Dynamic Environments
- Systems with offline historical dynamics
- Nazzal 1997 generates archeological minimum spanning trees for resource networks in the Valley of Oaxaca. The change occurs offline over a long period of time.
- Sternberg 1997 experiments on fraud detection when the perpetrators behind the fraudulent claims change their fraud-producing strategies.
- Real-Time Dynamic Optimization
- Saleem 2000 evaluates the contribution of the belief space knowledge for solving problems in dynamic environments.
14Cultural Algorithms Components
- Belief space.
- Population space.
- Communication channels
- Acceptance function.
- Update function.
- Influence function.
15Cultural Algorithm framework
16The Cultural Algorithm
- Initialize population Pop(0)
- Initialize beliefs Blf(0)
- Initialize communication protocol
- t = 0
- Repeat
- Evaluate Pop(t)
- Communicate (Pop(t), Blf(t)) (acceptance function)
- Adjust (Blf(t))
- Communicate (Blf(t), Pop(t)) (influence function)
- t = t + 1
- Select (Pop(t), Pop(t-1))
- Until (halting condition)
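The loop above can be sketched in code. This is a minimal illustration, not the study's implementation: the sphere objective, population size, mutation step, and a belief space holding only situational knowledge (the best exemplar) are illustrative assumptions.

```python
import random

def cultural_algorithm(fitness, dim=2, pop_size=20, generations=50, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    belief = {"best": None, "best_fit": float("-inf")}

    for _ in range(generations):
        # Acceptance function: the top 25% of the population may update beliefs.
        accepted = sorted(pop, key=fitness, reverse=True)[: max(1, pop_size // 4)]

        # Adjust the belief space (here, only the situational knowledge).
        for ind in accepted:
            f = fitness(ind)
            if f > belief["best_fit"]:
                belief["best"], belief["best_fit"] = list(ind), f

        # Influence function: generate the next population around the exemplar.
        pop = [[g + rng.gauss(0, 0.1) for g in belief["best"]]
               for _ in range(pop_size)]
    return belief

result = cultural_algorithm(lambda x: -sum(g * g for g in x))
```

With the full belief space, the influence step would be chosen among the five knowledge sources described in the following slides.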
17Belief Space Configuration
- The knowledge in the belief space represents generalizations of the properties of a good solution in the population space.
- The Belief Space contains
- Topographical Knowledge
- History Knowledge
- Domain Knowledge
- Normative Knowledge
- Situational Knowledge
- A basic method for selecting which knowledge will influence the variation operators.
18Belief Space Representation
19Situational Knowledge Chung 97
S(t) = { E(t) }, where E(t) is the best individual in the population at time t.
20Domain Knowledge
21Normative Knowledge Chung 97
22Updating the Normative Knowledge Chung 97
- Lower bound l_i and upper bound u_i give the range of variable x_i at time t.
(Figure showing the interval-update cases A through F omitted.)
23Updating the Normative Knowledge
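A hedged sketch of the interval update: a bound widens whenever an accepted value falls outside it, and tightens toward an accepted value only when that individual's fitness beats the score recorded at the bound, roughly the rule in Chung 97. Function and variable names are illustrative.

```python
def update_normative(lo, hi, lo_score, hi_score, accepted):
    """accepted: list of (value, fitness) pairs from accepted individuals."""
    for x, f in accepted:
        if x < lo or (x > lo and f > lo_score):   # widen, or tighten with proof
            lo, lo_score = x, f
        if x > hi or (x < hi and f > hi_score):
            hi, hi_score = x, f
    return lo, hi, lo_score, hi_score
```

Starting from [0, 1], accepted values inside the interval with better fitness pull the bounds inward, while any accepted value outside pushes the bound out.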
24History Knowledge
- The history knowledge was represented as a list (or window) of change events.
- When a change event occurs, the history knowledge stores the current best and the moving direction from the previous optimum.
- After each change event the history knowledge recomputes the average moving distance and direction over the events in the window.
- The window size used in this study was 2.
- The history knowledge also detects stagnation in the population by checking progress in the best solution.
25Update History Knowledge
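The update just described can be sketched as a small class: on each change event we record the shift from the previous optimum, keep only the last `window` events (2 in the study), and expose the average. Class and method names are illustrative assumptions.

```python
import math

class HistoryKnowledge:
    def __init__(self, window=2):
        self.window = window
        self.events = []                           # (distance, unit direction)

    def record_change(self, prev_best, new_best):
        delta = [n - p for n, p in zip(new_best, prev_best)]
        dist = math.sqrt(sum(d * d for d in delta))
        direction = [d / dist if dist else 0.0 for d in delta]
        self.events.append((dist, direction))
        self.events = self.events[-self.window:]   # sliding window of events

    def average_distance(self):
        return sum(d for d, _ in self.events) / len(self.events)
```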
26Topographical Knowledge
- Example of a landscape grid
- The topographical knowledge is a grid of cells mapping the problem landscape.
- Only those cells containing promising solutions are divided, and further divided, into smaller cells, as shown in the example above.
27Topographical Knowledge Representation
The data structure representing the topographical knowledge stores the grid as a list of cells, where each cell contains the interval boundaries in each dimension, the best solution found in that cell, and pointers to the cell's children (if the cell was divided).
28Updating Topographical Knowledge
- A cell is divided if it contains an accepted individual with a fitness value better than its current best.
- When an environmental change occurs
- All links to children become nil.
- Best solutions in the original grid are reevaluated, and cells are divided if their fitness values improved.
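The cell-splitting rule can be sketched for two dimensions: a cell splits into four children when a contained, accepted individual improves on the cell's best. The depth limit and the four-way split factor are illustrative assumptions, not the paper's settings.

```python
class Cell:
    def __init__(self, bounds):
        self.bounds = bounds                  # [(lo, hi), (lo, hi)]
        self.best_fit = float("-inf")
        self.children = []

    def split(self):
        (xl, xh), (yl, yh) = self.bounds
        xm, ym = (xl + xh) / 2, (yl + yh) / 2
        self.children = [Cell([(a, b), (c, d)])
                         for a, b in ((xl, xm), (xm, xh))
                         for c, d in ((yl, ym), (ym, yh))]

    def update(self, point, fit, depth=0, max_depth=3):
        if not all(lo <= v <= hi for v, (lo, hi) in zip(point, self.bounds)):
            return                            # point lies outside this cell
        if fit > self.best_fit:
            self.best_fit = fit
            if not self.children and depth < max_depth:
                self.split()                  # promising cell: refine the grid
        for child in self.children:
            child.update(point, fit, depth + 1, max_depth)
```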
29The Acceptance Function
- The acceptance function determines which individuals and their behaviors can impact the belief space knowledge.
- The acceptance function here selects the top 25% of the population.
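The rule above, sketched directly: accept the top 25% of the population by fitness (at least one individual).

```python
def accept(population, fitness, fraction=0.25):
    # Sort by fitness, best first, and keep the top fraction.
    k = max(1, int(len(population) * fraction))
    return sorted(population, key=fitness, reverse=True)[:k]

elite = accept([3, 1, 4, 1, 5, 9, 2, 6], lambda x: x)   # → [9, 6]
```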
30Influence Function
- The influence function determines which knowledge will be applied to guide the problem solution.
- Depending on the nature of the problem environment, the influence function can be different for different problem domains.
31Situational Knowledge Influence Function Chung97
32Normative Knowledge Influence function Chung97
(Figure: mutation step within the current interval between the lower and upper boundary.)
33Domain Knowledge Influence Function
- The domain knowledge influence function mutates individuals relative to the difference in fitness value from the best solution found so far.
- When a change event occurs, the step size increases relative to the drop in the fitness value.
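A sketch of this rule: the mutation step scales with the gap between an individual's fitness and the best found so far, so the fitness drop after a change event automatically enlarges the step. The scaling constant is an illustrative assumption.

```python
import random

def domain_influence(x, fit_x, best_fit, rng, scale=0.1):
    step = scale * abs(best_fit - fit_x)      # bigger gap, bigger step
    return [g + rng.gauss(0, step) for g in x]
```

An individual already at the best fitness receives a zero step, while one far below it is mutated aggressively.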
34History Knowledge Influence Function
- The history knowledge influence function generates individuals relative to the average moving distance and direction.
- We use, here, a roulette wheel with three different portion sizes to generate individuals
- Relative to the moving distance from the previous optimum.
- Relative to the moving direction from the previous optimum.
- Relative to the entire domain range.
- Note that the highest percentage of generated individuals is in the overlap area between the moving distance and the moving direction, W3 in the next figure.
35History Knowledge Influence Function (cont.)
In this study we use portion sizes a = 45%, ß = 45%, and f = 10%.
36Topographical Knowledge Influence Function
- Maintain a list of the best n cells in the grid (bestcells).
- Use a roulette wheel with three different portion sizes to generate individuals
- Within the best cell in the grid, bestcells[0].
- Within the best n cells in the grid, bestcells.
- Within any cell in the entire grid.
In this study we use portion sizes a = 45%, ß = 45%, and f = 10%.
37Influence Demon
- The influence demon determines which knowledge source will influence the search.
- The influence demon uses a roulette wheel to randomly select an influence function relative to its average performance.
- All influence operators are initialized with an equal proportion of the wheel.
- Portion size is then adjusted according to each operator's average performance.
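The demon can be sketched as follows: every operator starts with an equal wheel portion, and its portion is nudged toward its recent performance. The moving-average weights and the floor on portion size are illustrative assumptions.

```python
import random

class InfluenceDemon:
    def __init__(self, operators):
        self.perf = {op: 1.0 for op in operators}   # equal initial portions

    def choose(self, rng):
        total = sum(self.perf.values())
        r, acc = rng.uniform(0, total), 0.0
        for op, portion in self.perf.items():       # roulette-wheel selection
            acc += portion
            if r <= acc:
                return op
        return op

    def reward(self, op, score, floor=0.1):
        # Exponential moving average of the operator's performance.
        self.perf[op] = max(floor, 0.9 * self.perf[op] + 0.1 * score)
```

The floor keeps every operator selectable, so a source that was useless in one phase can still win the wheel in a later one.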
38Problem Generator DF1 Morrison 99
- The problem is to find the highest peak in a multi-dimensional, multi-peak (N peaks) landscape in a real-time dynamic environment.
39DF1 Dynamic Variables Morrison 99
- The dynamic variables in the cones-world environments are
- Height Hi ∈ [H-base, H-base + H-range]
- Slope Ri ∈ [R-base, R-base + R-range]
- Location (xi, yi) ∈ [-1, 1]
- The performance function
- The fitness of any individual (x, y) is the max value over all cones, where N is the number of cones.
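The cones-world fitness can be written directly from this definition: each cone i contributes H_i minus R_i times the distance from (x, y) to its centre, and an individual scores the maximum over all N cones. The example cones below are illustrative values, not the study's settings.

```python
import math

def df1_fitness(x, y, cones):
    return max(h - r * math.sqrt((x - cx) ** 2 + (y - cy) ** 2)
               for h, r, cx, cy in cones)

cones = [(10.0, 20.0, 0.0, 0.0),      # (height, slope, centre x, centre y)
         (14.0, 70.0, 0.5, 0.5)]
print(df1_fitness(0.0, 0.0, cones))   # at the first cone's centre → 10.0
```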
40Specifying The Dynamics Morrison 99
- The problem generator provides a standard method to easily describe the dynamic behavior of each changing variable using the logistic function.
- where A specifies the change magnitude of each dynamic variable: height Ah, slope Ar, and location Ac.
41The Logistic Function
- Examples
- When A = 2.2, the iteration converges to Y = 0.5455.
- When A = 3.5, it cycles among the values Y = 0.3828, 0.5009, 0.8750, ...
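These behaviours come from iterating the logistic map Y(t+1) = A * Y(t) * (1 - Y(t)), which DF1 uses to drive each dynamic variable; for A = 2.2 the iteration settles on the fixed point 1 - 1/A. The starting value and step count below are illustrative.

```python
def logistic(a, y0=0.3, steps=200):
    y = y0
    for _ in range(steps):
        y = a * y * (1 - y)   # logistic map update
    return y

print(round(logistic(2.2), 4))   # → 0.5455, the fixed point 1 - 1/2.2
```

Larger A values give cycles, and values near 4 give chaotic step sizes, which is how DF1 spans the step-size categories listed later.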
42Embedding the DF1 into the CA framework
(Figure: the DF1 generator drives the dynamic environment in the population space; the acceptance function and belief-space update carry information from the population space to the belief space, and the influence function carries it back to the evolutionary operators and performance function.)
43Example runs
- Here we will discuss two example runs in a static environment to demonstrate how the system behaves
- when the search converges to the optimum solution (first example), and
- when the system initially converges to a false peak (second example).
44Problems Settings
- The two examples share the same problem setting: 32 cones in a static environment, with heights, slopes, and locations randomly generated in the ranges 5 to 20, 20 to 30, and -1 to 1 respectively.
- We consider the system to have found the optimum if the difference between the best solution and the optimum is at most 1E-10 (0.0000000001).
45First Example run
- Number 1 represents the normative knowledge, 2 the situational knowledge, 3 the domain knowledge, 4 the history knowledge, and 5 the topographical knowledge influence operator.
- As shown in the second figure from the left, the topographical knowledge operator (5) was the dominant operator at the beginning of the run until the best solution came close enough to the optimum (first figure), at which point the situational knowledge operator (2) became the dominant operator until the end of the run.
46First Example run (cont.)
- This figure shows the initial roulette wheel assignments of 1/5 for the five operators used by the system.
- As shown, the T operator's wheel portion increases until around generation 11, when the S and DS operators take the lead. The S and DS operators gain almost identical wheel percentages because of the indirect interaction between the knowledge structures.
47First Example run (cont.)
- The above figure highlights the different phases the search takes in terms of the type of knowledge structures used to guide it. At the beginning of the run the T operator dominates the search in terms of the number of selected solutions, until around generation 9 when its share decreases in favor of the S operator.
- It suggests that the search takes two main phases.
- First, the T operator determines the most promising region that may contain the optimal solution, from the start until around generation 9 (coarse-grained phase).
- Second, the S operator leads the search, as a fine-tuning operator, within the region suggested by the T operator, from generation 10 until the end of the run (fine-tuning phase).
48First Example run (cont.)
These two figures show how different knowledge structures indirectly influence each other. The convergence of the topographical knowledge range helps the normative knowledge interval converge at a much faster rate than we will see in the next example.
49Second Example run
- In this example the search initially converges to a false peak, as shown in the first figure above.
- The search begins as in the first example: the T operator (5) leads at the beginning of the run until around generation 10, when the search is led by the situational (2) and domain (3) operators before it stagnates at a fitness value of 18.75.
- As shown in the second figure, the normative knowledge (1) generates the first best solution that shifts the search to a new promising region.
- After that the topographical knowledge leads the search for a short period before the situational knowledge operator (2) dominates again and leads the search to the optimum solution.
50Second Example run (Cont.)
- This figure shows the normative knowledge interval range.
- The normative knowledge converged initially to a false peak.
- When stagnation is detected, the system is triggered to introduce diversity into the population, causing the normative knowledge to enlarge its interval for backtracking the search.
- During the backtracking search, the normative knowledge completely controls the search and searches for a new promising region within its interval.
- Note that the interval range did not converge as quickly as in the first example, because the normative knowledge here is not influenced by any other knowledge structure.
- The normative knowledge slowly converged around the new promising region.
- Note also that the normative knowledge reduces the search space to its interval size, which suggests that it expedites the search for recovery.
51Observations
- The runs exhibit three phases of search, in each of which different knowledge structures dominate. The phases are: the coarse-grained phase, where the search is for the most promising region (first phase); the fine-tuning phase, where the promising region is explored in more detail; and the backtracking search, which occurs when the population stagnates (stabilizes).
- The search phases are defined based on the improvement in the best solution's fitness value:
- The coarse-grained phase is when the improvement is > c (in our examples, from the start until generation 10).
- The fine-tuning phase is when the improvement is ≤ c.
- The backtracking phase is when the improvement is ≤ c′.
- The constant c here was 0.01 and c′ was zero.
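The phase definition can be sketched directly, with c = 0.01 and c′ = 0 as in the study: classify a generation by the improvement in the best fitness since the previous generation.

```python
def search_phase(improvement, c=0.01, c_prime=0.0):
    if improvement > c:
        return "coarse-grained"   # large gains: still locating the region
    if improvement > c_prime:
        return "fine-tuning"      # small gains: refining within the region
    return "backtracking"         # no gain: stagnation detected

print(search_phase(0.5))      # → coarse-grained
print(search_phase(0.001))    # → fine-tuning
print(search_phase(0.0))      # → backtracking
```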
52Experiments Framework
- The problems generated by the problem generator DF1 are classified in terms of
- Problem complexity
- Number of cones (1 to 100).
- Number of dimensions (1 to 10).
- Dynamic behaviors
- Shift magnitude
- Same step size
- Small step size (1.05 to 1.80)
- Large step size (1.81 to 2.90)
- Different step sizes
- Few different step sizes (2.91 to 3.50)
- Chaotic step size (3.51 to 3.99)
- Changing frequency
- High frequency (< 60 generations)
- Medium frequency (61 to 100 generations)
- Low frequency (> 101 generations)
53Experiments Settings
- In this study's experiments, problems were selected according to the categories above to generate problems in
- Static environments
- Exponentially increasing problem complexity (4, 8, 16, 32, 64 cones environments)
- The run continues until a system reaches the optimum or to a maximum of 1000 generations.
- Dynamic magnitude-dominant environments
- Exponentially increasing problem complexity (4, 8, 16, 32, 64 cones environments)
- Changing heights, slopes, and locations for
- Large magnitude (selected randomly within the ranges in the table above)
- Low frequency (change occurs every 300 generations)
- Every run contains 10 change events.
54Experiments Settings (cont.)
- Dynamic frequency-dominant environments
- Exponentially increasing problem complexity (4, 8, 16, 32, 64 cones environments)
- Changing heights, slopes, and locations for
- Low magnitude (selected randomly between 1.05 and 1.80)
- High frequency (selected randomly between 20 and 60)
- Every run contains 10 change events.
- Dynamic deceptive environments
- Experiments on four-cone environments for deceptive and non-deceptive settings.
- Changing locations for
- Large magnitude (selected randomly within the ranges in the table above)
- Low frequency (change occurs every 300 generations)
- Every run contains 10 change events.
55Experiments
- The experiments focus on
- The contribution of different knowledge structures to the problem-solving process.
- How the system reacts to increases in problem complexity (scaling up the problem complexity).
- All systems are compared with a population-only evolutionary system (Evolutionary Programming) so that the impact of adding knowledge in the belief space can be assessed.
- The results of each experiment setting are the average of 20 runs for each system (CA and EP).
- The systems were implemented in Java 1.2 in the JBuilder environment.
- The experiments were conducted on a PC with a P-II 400 MHz processor, 128 MB RAM, and the Windows 2000 operating system.
56Static Environments
- Here we investigate the impact of increasing complexity on problem solving in static environments.
- We also identify the contribution of the various types of knowledge structures to problem solving in each of the three search phases.
57. 4 Cones Experiments
58Table Description
- The above table presents the results for 20 runs.
- The columns are
- Run gives the run sequence number.
- Gen gives the number of generations for CA and EP.
- Difference gives the difference between the best and the optimal solution.
- CPUtime gives the execution time, in milliseconds.
- CA/EPtime gives the difference in CPU time for the CA over the population-only system.
- T gives the number of generations in which the Topographical knowledge produced the best.
- H gives the number of generations in which the History knowledge produced the best.
- DS gives the number of generations in which the Domain knowledge produced the best.
- S gives the number of generations in which the Situational knowledge produced the best.
- N gives the number of generations in which the Normative knowledge produced the best.
59Success ratio
- The success ratio for CA is much less sensitive to the problem complexity than EP's.
- It is clear from the results that knowledge in the CA contributes to achieving a success ratio higher than the population-only system in all environments.
- The relative advantage of the knowledge-based approach improves as problem complexity increases.
60CPU time
- The CA produced a higher success ratio than the population-only model, and the CA consumed much less CPU time than the population-only approaches.
- Also, the CPU time taken by the CA rises more slowly than the population-only system's with increasing problem complexity.
61Observations
- The belief space knowledge's contribution to the problem-solving process increases the success ratio and decreases the required CPU time relative to the performance of the population-only system.
- The decrease in CPU time suggests that belief space knowledge was significantly useful in expediting the search.
- The increase in the success ratio suggests that belief space knowledge was significantly useful in selecting the most promising region and recovering from false peaks.
62Contribution of Different Knowledge Structures
- Now we will look at the contribution of different knowledge structures in the coarse-grained, fine-tuning, and backtracking phases across all environments, regardless of complexity.
- The tables in each phase combine the runs from all of the environments.
- The tables show the number of times that a knowledge structure produced the best solution in a generation (overall).
- Each cell represents the likelihood that the operator producing the best solution in one generation (row) is followed by the operator that produces the best in the next.
63Coarse-grain phase
- In this phase, as expected, the topographical knowledge operator is the dominant operator in guiding the search.
- Row FG here represents the first transition, when the population is randomly initialized in generation zero. In 60 out of 100 runs the first transition was to the topographical operator T.
- The situational knowledge was the second contributor in this phase.
- The highest transition percentage from the T operator to a different operator was to the S operator (31%), and the highest transition from the S operator to a different operator was to the T operator (38%).
- This suggests that the two knowledge structures work symbiotically.
- History knowledge was not productive here, since the experiment lacked the dynamic information content it can exploit.
64Fine-Tuning Phase
- The dominant knowledge contributor here is the situational knowledge operator, S, at 60%.
- Since the domain knowledge operator behaves similarly to the situational knowledge operator, it competes with it in producing the best. Thus the D operator is the second contributor in this phase.
65Backtracking Phase
66Backtracking phase (cont.)
- In this phase the normative knowledge's (N) contribution increased to 25%, up from its percentage in the coarse-grained and fine-tuning phases.
- The normative knowledge produces the first best solution to break stagnation in the search. The chart shows an example of how the normative knowledge proceeds during the backtracking search.
- The transition table shows that 20% of the time the T operator generates the best after the N operator has shifted the search to a new search space.
- The rarity of the reverse transition from T to N (only 2.7%) suggests that control of the search is transferred from the normative to the topographical knowledge operator.
67Summary
- The experiments in static environments show that
- The cultural system is less sensitive to the problem complexity than a population-only approach, in terms of both solution quality and execution time.
- Different knowledge structures can work symbiotically to solve a problem.
- The dominant knowledge operators in the coarse-grained, fine-tuning, and backtracking phases are the topographical, situational, and normative knowledge operators, respectively.
68Dynamic Environments with Magnitude-Dominant Environmental Changes
- The experiments here investigate the contribution of different knowledge structures in environments with infrequent changes of high magnitude.
- The experiment settings were as shown earlier:
- Exponentially increasing problem complexity (4, 8, 16, 32, 64 cones environments)
- Changing heights, slopes, and locations for
- Large magnitude (selected randomly within the ranges in the table above)
- Low frequency (change occurs every 300 generations)
- Every run contains 10 change events.
69. 4 Cones Experiments
70Table Description
- The above table presents the results for forty consecutive runs (20 runs for EP and 20 runs for CA).
- The table's columns are
- Run: the run sequence number.
- AvrGen: the average number of generations required by CA and EP.
- AvrDiffer: the average difference, over 10 environmental changes, between the best solution found by each system and the optimum solution.
- AvrCPUtime: the average execution time per change event, in milliseconds.
- CA/EPt: the percentage difference in CPU time for the cultural system over the population-only system.
- T, H, DS, S, N: the number of generations in which each of the CA knowledge structures produced the best solution (Topographical T, History H, Domain DS, Situational S, Normative N).
71Success Ratio
- Even in dynamic environments, the CA produced better solution quality and was less sensitive to increases in problem complexity than the population-only system.
- When the problem becomes dynamic, the CA is even less sensitive than the CA in static environments. The reason is perhaps that the history knowledge becomes more useful by exploiting its knowledge about previous environments.
72CPU time
- The difference in CPU time between the CA and the population-only system becomes larger when the problem shifts from static to dynamic. This suggests that as the problem becomes more complex, the belief space knowledge becomes more and more useful.
- The rate of increase in CPU time as the problem becomes more complex shows that the CA in dynamic environments is less sensitive than the CA in a static one, especially when the problem becomes complex, as with 64 cones.
73Contribution of Different Knowledge Structures in
Dynamic Environments
- Now we will look at the contribution of different knowledge structures in the coarse-grained, fine-tuning, and backtracking phases.
- The tables in each phase summarize a total of 100 runs in dynamic environments.
- The tables show the number of times and percentages that a knowledge structure produced the best solution, and which influence operator generates the best in the next generation.
74Coarse-grained phase
75Observation
- The history knowledge shows an increase in its contribution to producing the best solution.
- The increase in the history knowledge's contribution seems to affect the contribution of the topographical knowledge more than any other knowledge source.
- The reason is perhaps that the history and topographical knowledge operators are both coarse-grained operators, so they compete to generate the best solution.
- The other knowledge structures' contributions are similar to those in static environments.
76Fine-tuning phase
77Observation
- The D operator becomes less of a contributor, as expected, in the fine-tuning phase.
- The reason is perhaps that in static environments the D operator works similarly to the S operator, as a fine-tuning operator, but in dynamic environments the D operator contributes by generating diversity in the population, becoming a diversity generator instead of a fine-tuning operator.
- Since D and S were both fine-tuning operators in the static environment, S is more dominant in the dynamic environments; thus the S contribution increases to compensate for the change in the D operator.
- The contributions of the other knowledge structures are the same as in static environments.
78Backtracking Phase
79Observation
- The backtracking search is needed when the prediction of the most promising region by the topographical and history knowledge fails.
- Thus, as expected, the contributions of the H and T operators decreased somewhat from their contributions in static environments.
- The D operator becomes more useful as a variation generator in the backtracking phase.
- The normative knowledge operator N is the main contributor in this phase, because the N operator generates the first best individual after stagnation, shifting the search to explore a new search space, as shown in the second example run earlier.
80Dynamic Environments with Frequency-Dominant Environmental Changes
- Here we investigate the effect of a high frequency of change coupled with low shift magnitude on CA versus EP performance, in terms of solution quality and execution time, for exponentially increasing problem complexity.
- Another goal of these experiments is to investigate the contribution of different knowledge structures and whether they behave differently than in previous experiments.
- The experiment settings were as shown earlier:
- Exponentially increasing problem complexity (4, 8, 16, 32, 64 cones environments)
- Changing heights, slopes, and locations for
- Low magnitude (selected randomly between 1.05 and 1.80)
- High frequency (selected randomly between 20 and 60)
- Every run contains 10 change events.
81Success Ratio
- The first figure, on the left, shows that the CA produces a much higher success ratio than the population-only system in all of the experiments.
- EP produced zero success ratios in all of the experiments, perhaps because EP requires at least 105 generations to reach the optimum, more time than allowed here.
- The second chart suggests that a high frequency of change makes the problem harder than high magnitude does, and has the greatest effect on the CA success ratio.
- Although the success ratio is significantly lower than in the previous two experiments, it decreases almost linearly relative to the exponential increase in the problem complexity.
82CPU time
- In the first figure, the increase in problem complexity did not show a consistent increase in CPU time, perhaps because changes occur so frequently that both systems run to the maximum allowable time.
- Since the success ratio for the EP system was zero in all of the high-frequency experiments, the CPU time for EP does not reflect the time required to reach the optimum solution as in the previous experiments.
- The second figure suggests that the CPU time for the CA in high-frequency environments is less sensitive to increases in complexity than in high-magnitude environments.
83Contribution of Different Knowledge Structures in
High frequency Environments
- Now we will look at the contribution of the different knowledge structures in the coarse-grained, fine-tuning, and backtracking phases.
- The tables in each phase are the results of a total of 100 runs.
- The tables show the number of times and percentages that a knowledge structure produced the best solution, and which influence operators generate the best in the next generation after a given operator has generated the best.
84Coarse-grained Phase
85Fine-tuning Phase
86Backtracking phase
- The backtracking search shifted the search five times into unexplored search space, but it did not succeed in reaching the optimum solution because of the time limitation.
- The results suggest that the backtracking search is not as effective with a very high frequency of change as with high-magnitude, low-frequency changes.
87Summary
- The results in the coarse-grained and fine-tuning phases show that all the knowledge sources contribute in the same way as in high-magnitude environments.
- This suggests that moving between low and high frequencies of change does not change the contribution percentages of the belief space knowledge sources.
- The backtracking search was not useful in high-frequency environments, because the time limitation did not let the system continue backtracking.
- In general, the knowledge contributions in magnitude-dominant and frequency-dominant environments are shown to be almost the same in all phases except the backtracking phase.
88Deceptive Environments
- The results of the comparative study between the CA and self-adaptive EP motivated these experiments, which examine how the system behaves on problems in deceptive environments.
- Goldberg (1987) defines a problem to be deceptive if it contains two blocks, A and B, where the average fitness of A is greater than that of B even though B includes a solution with a greater fitness value than any solution in A.
- The experiment settings were as shown earlier
- Experiments on four-cone environments, for both deceptive and non-deceptive settings.
- Changing locations with
- Large magnitude (selected randomly within the ranges in the table above)
- Low frequency (a change occurs every 300 generations)
- Every run contains 10 change events.
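Goldberg's definition can be illustrated with a minimal check over two hypothetical blocks of fitness values (the helper and the numbers below are illustrative, not from the experiments):

```python
# A minimal sketch of Goldberg's (1987) deception condition. The helper
# and the fitness values are hypothetical, for illustration only.
def is_deceptive(block_a, block_b):
    """True if A's average fitness beats B's, while B still contains a
    solution better than every solution in A."""
    avg_a = sum(block_a) / len(block_a)
    avg_b = sum(block_b) / len(block_b)
    return avg_a > avg_b and max(block_b) > max(block_a)

# Hypothetical blocks: A looks better on average, but B hides the optimum.
A = [10.0, 11.0, 12.0]   # average 11.0, best 12.0
B = [2.0, 3.0, 14.0]     # average ~6.3, best 14.0
print(is_deceptive(A, B))  # True
```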
89Deceptive Environments Example
90Experiments in Deceptive Environments
- To ensure generation of deceptive environments,
we imposed some restrictions on the problem
generator DF1. - The slopes, and the heights were static but the
locations were dynamic. - The cones heights were 10.0, 13.0, 14.0, 11.0
and the slopes were 20, 20, 70, 20 respectively,
where cones number 1 and 3 centered in same
location and assigned the same moving directions
to generate deceptive environments with cone
number 2. - The following table gives the results of the 4
cones experiments in deceptive environments. - The CA success ratio was 93 versus 100 in
problem complexity for magnitude dominant
environments. - The experiments show the increasing use of the
backtracking search (44 times out of 200 runs).
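This deceptive landscape can be sketched with the standard DF1 cone field, where fitness is the upper envelope of the cones. The heights and slopes below come from the settings above; the cone centers are illustrative placeholders, not the experiments' values:

```python
import math

# A minimal sketch of the DF1 cone field under the deceptive restriction
# described above: static heights and slopes, cones 1 and 3 sharing a
# center. The centers are illustrative placeholders.
HEIGHTS = [10.0, 13.0, 14.0, 11.0]
SLOPES  = [20.0, 20.0, 70.0, 20.0]
CENTERS = [(0.0, 0.0), (0.6, 0.6), (0.0, 0.0), (-0.7, 0.4)]

def df1(x, y):
    """DF1 fitness: the landscape is the upper envelope of the cones."""
    return max(h - s * math.hypot(x - cx, y - cy)
               for h, s, (cx, cy) in zip(HEIGHTS, SLOPES, CENTERS))

# At the shared center, the narrow cone 3 (height 14, slope 70) gives the
# true optimum; a short distance away it drops below the broad cone 2,
# whose wide basin makes it the deceptively attractive peak.
print(df1(0.0, 0.0))  # 14.0
```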
914 Cones Experiments in Deceptive Environments
92Experiments in Non-deceptive Environments
- Since the maximum success ratio that any system can attain is 100, and the CA success ratio in the four-cone environments was 100 for the magnitude-dominant environments, we use the same environmental setting as our non-deceptive setting for this experiment.
- To achieve a fair comparison between the deceptive and non-deceptive experiments, we set the slopes and heights of the cones to be static while the locations remained dynamic.
- The results were, as expected, a success ratio of 100 with one instance of backtracking search.
934 Cones Experiments in Non-deceptive Environments
94Coarse-grained phase in Non-deceptive Environments
- This chart compares the knowledge sources' contributions in magnitude-dominant environments (Mag), where all cone variables change, and non-deceptive environments (static H, S), where only the cone locations change.
- The domain operator (D) shows a significant improvement in the coarse-grained phase when the heights and slopes were static.
- This improvement is achieved, perhaps, because the D operator generates a step size relative to the drop in fitness value.
- This suggests that the D operator is sensitive to changes in heights and slopes.
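The step-size idea can be sketched as follows. This is only an illustration of "step size relative to the drop in fitness," with an assumed proportional scaling; it is not the system's actual D operator:

```python
import random

# An illustrative sketch of "step size relative to the drop in fitness":
# a large drop (a large environmental shift) allows a proportionally
# larger move. The proportional form and the `scale` constant are
# assumptions for illustration, not the actual D operator.
def domain_step(x, prev_best, cur_best, scale=0.1, rng=random):
    drop = max(0.0, prev_best - cur_best)  # fitness lost since the change
    step = scale * drop                    # step size grows with the drop
    return x + rng.uniform(-step, step)

rng = random.Random(0)
# A drop of 10.0 allows moves of up to +/- 1.0 around the current point.
moved = domain_step(1.0, prev_best=14.0, cur_best=4.0, rng=rng)
```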
95Coarse-grained phase in Deceptive versus
Non-deceptive Environments
- Although the deceptive experiments were static in heights and slopes, the contribution of the D operator was much less successful than in non-deceptive environments.
- The reason, perhaps, is that the step size generated by the D operator in deceptive environments is too large.
- This also suggests that the D operator is sensitive to large differences in slopes.
96Fine-tuning phase in Non-deceptive Environments
- In the fine-tuning phase as well, the D operator increases its contribution in non-deceptive environments (static S, H) relative to the magnitude-dominant environments, where all cone variables are dynamic.
- The increase in the D contribution suggests that the D operator works as a fine-tuning operator in dynamic environments with static heights and slopes.
97Fine-tuning phase in Deceptive versus
Non-deceptive Environments
- The main difference in knowledge contributions for the fine-tuning phase is still the domain knowledge operator.
- The differences again suggest that the D operator is sensitive to large differences in slopes.
98Backtracking phase in Deceptive Environments
- The backtracking phase in deceptive environments shows an increase in the contributions of the T, H, S, and N operators and a decrease in the contribution of the D operator, relative to their levels in the backtracking phase for magnitude-dominant environments.
- The D operator's contribution in deceptive environments is limited to generating diversity.
99Observations
- The backtracking search using normative knowledge was successful in recovering from 31 of the 44 false-peak convergences.
- This suggests that a major contributor to the system's success is the role of normative knowledge in the backtracking search.
- Backtracking means going back to the basics (that is, back to the basic interval schemata in the normative knowledge). It is analogous to rethinking a problem in human society when the current method fails to solve it: conservatives usually advocate going back to the basics and searching for a solution based on basic principles.
- The normative knowledge backtracks the search and samples within its intervals to shift the search into new, unexplored search space, as shown in the following example in deceptive environments.
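The reseeding step can be sketched as below, assuming the normative knowledge keeps one promising interval [lo, hi] per variable (the interval values are illustrative):

```python
import random

# A minimal sketch of backtracking via normative knowledge: on stagnation
# at a false peak, the search is reseeded by sampling uniformly inside
# the basic per-variable intervals. The intervals are illustrative.
def backtrack(normative_intervals, n, rng=random):
    """Resample n candidate solutions from the normative intervals."""
    return [[rng.uniform(lo, hi) for lo, hi in normative_intervals]
            for _ in range(n)]

intervals = [(-1.0, 1.0), (-1.0, 1.0)]  # one (lo, hi) pair per dimension
pop = backtrack(intervals, n=5, rng=random.Random(1))
```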
100Example Run in Deceptive Environments
101Example (cont.)
102Conclusion
- The goal was to investigate the role that knowledge plays in guiding the search of an evolutionary population in dynamic environments. The results were compared against those of a population-only adaptive system.
- The five categories of knowledge, taken together, are effective in all of the classes of dynamic environments tested: magnitude-dominant, frequency-dominant, and deceptive.
- However, each has a different role to play in the problem-solving process.
- In the coarse-grained phase, the topographical knowledge operator was the main contributor in determining the region containing the most promising solution.
- In the fine-tuning phase, the situational operator was the main contributor to exploring the local search space.
- In the backtracking phase, the normative knowledge operator was the main contributor, backtracking the search and recovering it from a false peak.
103Conclusion (cont.)
- Each of these phases emerged in all experiments, but depending upon the problem dynamics, certain phases were more prevalent than others in problem solving (e.g., backtracking increased in deceptive environments).
- The knowledge sources interact with each other through the population space, in the sense that one knowledge structure generates patterns that are then exploited by others. This produces sequences of symbiotic operators (T→S, N→S, N→T, H→S, H→T), and thereby the problem-solving phases.
- The CA approach was less sensitive than the population-only approach to scaling of the problem complexity, in terms of both the success ratio and the CPU time.
- The results also suggest a social interpretation (e.g., the notion of going back to the basics).
104Future Work
- Future work is to investigate whether the success ratios produced by Cultural Algorithms using EP as the population model in different types of environments can be generalized to other population models, such as a GA or ES.
- Another interesting study would be to modify and expand this Cultural Algorithm model for optimization and search to applications such as the stock market.
105The End
106Conclusion
- In the experiments, the CA problem-solving process emerged as three search phases (coarse-grained, fine-tuning, and backtracking), in which the topographical, situational, and normative knowledge structures, respectively, are the main problem-solving contributors.
- The topographical knowledge was shown to be useful in detecting the most promising region and avoiding the need for a backtracking search.
- The history knowledge is useful in detecting a stagnation situation in the search and in helping to guide the search in the coarse-grained phase in dynamic environments.
- The domain knowledge operator contributes to the search in different roles
- It works as a fine-tuning operator in static environments.
- If only the locations are dynamic, the operator plays a more significant role in both the coarse-grained phase and the fine-tuning phase.
- The domain operator is sensitive to large differences in slopes.
- In completely dynamic environments, where all variables are changing, the domain operator works as a diversity generator.
107Conclusion (cont.)
- The situational knowledge operator (S) works as a fine-tuning operator.
- The normative knowledge operator (N) is a major contributor to the system's success ratio.
- The N operator backtracks and recovers the search when other knowledge sources lead it to a false peak.
- It is, as noted earlier, analogous to conservatives rethinking a problem in human society and going back to the basics when the current method fails.
- Throughout all of the experiments here, the use of knowledge produced significant improvements over population-only systems in terms of both solution quality and execution time.
- The CA system is less sensitive to increases in problem complexity than the population-only system.
108Observations
- The runs exhibit three phases of search, and in each phase different knowledge structures dominate the search. The phases are the Coarse-grained phase, where the search is for the most promising region (the first phase); the Fine-tuning phase, where the promising region is explored in more detail; and the Backtracking phase, which occurs when the population stagnates (stabilizes).
- The examples also suggest that
- The first 10 generations represent the Coarse-grained phase.
- The Backtracking phase starts when a stagnation situation is detected and lasts until 10 generations after the normative knowledge produces the new best.
- The Fine-tuning phase follows the coarse-grained and backtracking phases.
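The stagnation test that triggers the backtracking phase can be sketched as follows; this is a minimal illustration assuming a simple no-improvement-over-a-window criterion, where the window size and tolerance are illustrative, not the system's actual settings:

```python
# A minimal sketch of the stagnation test that triggers the backtracking
# phase: if the best fitness has not improved over a window of recent
# generations, the population is considered to have stabilized. The
# window size and tolerance are illustrative assumptions.
def stagnated(best_history, window=10, tol=1e-9):
    if len(best_history) < window:
        return False
    recent = best_history[-window:]
    return max(recent) - recent[0] <= tol

# Hypothetical best-so-far trace: improvement, then a flat stretch.
trace = [1.0, 2.5, 3.0, 3.2] + [3.2] * 10
print(stagnated(trace))  # True
```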