Title: Smooth, Unconstrained Nonlinear Optimization without Gradients
1 Smooth, Unconstrained Nonlinear Optimization without Gradients
2 Hooke-Jeeves or Pattern Search
Characteristics
- Zero order (see the sketch after this list)
- No derivatives
- No line searches
- Works on discontinuous domains
- No proof of convergence
- A tool for when other tools fail
- References
  - Evolution and Optimum Seeking by Hans-Paul Schwefel
  - Mark Johnson code handout
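To make the zero-order, derivative-free character concrete, here is a minimal sketch of a single exploratory probe: it perturbs each coordinate by plus or minus a step and keeps any improvement, using only function values. The discontinuous test function and all names are illustrative, not taken from the slides or from iSIGHT.

```python
# Illustrative discontinuous objective: a quadratic bowl with a jump at x[0] = 1,
# where gradients are unavailable or unreliable but function values still work.
def f(x):
    return x[0] ** 2 + x[1] ** 2 + (5.0 if x[0] > 1.0 else 0.0)

def exploratory_probe(f, x, step):
    """One zero-order probe: try +/- step on each coordinate, keep any improvement."""
    best, f_best = list(x), f(x)
    for i in range(len(best)):
        for delta in (step, -step):
            trial = list(best)
            trial[i] += delta
            f_trial = f(trial)
            if f_trial < f_best:
                best, f_best = trial, f_trial
                break
    return best, f_best

print(exploratory_probe(f, [2.0, -1.5], step=0.5))
# -> ([1.5, -1.0], 8.25): moved downhill using function evaluations only
```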
3 Hooke-Jeeves
Together with downhill simplex, it is the simplest algorithm in iSIGHT.
It has essentially no formal diagnostics for its outputs.
The algorithm is an unconstrained optimization algorithm, but it can also be used in constrained situations.
Expected number of iterations n: the search terminates roughly when StepSizeReductionFactor^n < TerminationStepSize.
StepSizeReductionFactor lies between 0 and 1 (default 0.5). The larger the value, the slower the convergence.
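As a quick check of that termination estimate, the snippet below solves StepSizeReductionFactor^n < TerminationStepSize for n, using the default reduction factor of 0.5 and an assumed termination step size of 1e-6 (the termination value is illustrative, not from the slides).

```python
import math

# Solve StepSizeReductionFactor**n < TerminationStepSize for n.
# The reduction factor is the iSIGHT default quoted on the slide; the
# termination step size of 1e-6 is an assumed, illustrative value.
reduction_factor = 0.5
termination_step = 1e-6

n = math.ceil(math.log(termination_step) / math.log(reduction_factor))
print(n)  # -> 20 step-size reductions before termination
```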
6 Hooke-Jeeves Algorithm
Termination step size e; step size reduction factor rho.
Step 0 (Initialization): Choose a starting point, an accuracy bound e > 0, and initial step lengths (current value times rho). If the current value is 0.0, make the step length rho.
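A minimal sketch of the full loop follows: exploratory moves around the base point, a pattern move along the direction that just succeeded, and step-size reduction until the termination step size is reached. It assumes an objective f that takes a NumPy vector; the function and parameter names are illustrative and are not iSIGHT's.

```python
import numpy as np

def hooke_jeeves(f, x0, rho=0.5, eps=1e-6, max_iter=10000):
    """Minimize f by Hooke-Jeeves pattern search (sketch).

    rho: step size reduction factor (0 < rho < 1)
    eps: termination step size
    """
    base = np.asarray(x0, dtype=float)
    # Step 0: initial step length per variable is |x_i| * rho, or rho if x_i is 0.0
    step = np.where(base != 0.0, np.abs(base) * rho, rho)
    f_base = f(base)

    def explore(point, f_point):
        # Exploratory move: probe +/- step along each coordinate, keep improvements
        point = point.copy()
        for i in range(len(point)):
            for delta in (step[i], -step[i]):
                trial = point.copy()
                trial[i] += delta
                f_trial = f(trial)
                if f_trial < f_point:
                    point, f_point = trial, f_trial
                    break
        return point, f_point

    for _ in range(max_iter):
        new, f_new = explore(base, f_base)
        if f_new < f_base:
            # Pattern move: jump further along the successful direction,
            # then explore around the jumped-to point
            pattern = new + (new - base)
            base, f_base = new, f_new
            cand, f_cand = explore(pattern, f(pattern))
            if f_cand < f_base:
                base, f_base = cand, f_cand
        else:
            # Exploration failed: shrink all steps; stop once they fall below eps
            step = step * rho
            if np.all(step < eps):
                break
    return base, f_base
```

For example, hooke_jeeves(lambda x: np.sum((x - 3.0) ** 2), [10.0, -4.0]) converges to approximately (3, 3) using function evaluations only, with no derivatives or line searches.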
8 Note change
11 Lab
- Rerun the Spring_Start.desc file using Hooke_Jeeves with the default step size.
- Does it reach the same optimum?
- How many function calls did it take?
- Is this more or less efficient than Steepest Descent?
- On the next slide, label the X1 Step Size, X2 Step Size, and algorithm step number next to each row for the first 7 run counters.
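If you want to replicate the function-call comparison outside iSIGHT, a simple counting wrapper around the objective does the job (inside iSIGHT, the run counters on the next slide track this). The sketch below reuses the hooke_jeeves function from above; the quadratic is a stand-in objective, not the actual Spring_Start.desc spring model.

```python
# Count objective evaluations by wrapping the function being minimized.
class CountingObjective:
    def __init__(self, func):
        self.func = func
        self.calls = 0

    def __call__(self, x):
        self.calls += 1
        return self.func(x)

# Stand-in quadratic objective, not the real spring model.
spring = CountingObjective(lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2)
x_best, f_best = hooke_jeeves(spring, [0.0, 0.0])  # hooke_jeeves sketch from above
print(spring.calls, x_best, f_best)  # number of function calls, optimum found
```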
12 Spring Hooke Initial Steps