ESI 6448 Discrete Optimization Theory

Transcript and Presenter's Notes


1
ESI 6448 Discrete Optimization Theory
  • Lecture 29

2
Last class
  • Subgradient algorithm
  • simple and easy to implement
  • difficult to choose step lengths
  • theoretical or practical stopping criteria
  • example (STSP)

3
Solving LD
  • Large number of constraints
  • a constraint (or cutting-plane) generation
    approach is required
  • uses a separation problem
  • Alternative approach
  • use the subgradient algorithm to solve LD

4
Gradient
  • When f : R^m → R is differentiable at u, the
    gradient vector ∇f(u) is (∂f/∂u_1, ..., ∂f/∂u_m).
  • The gradient vector is the local direction of
    maximum increase of f(u), and ∇f(v) = 0 solves
    min{f(u) : u ∈ R^m}.
  • The classical steepest descent method for
    minimizing f(u) is given by the sequence of
    iterations u^{t+1} = u^t − μ_t ∇f(u^t).
  • a method for finding a local minimum of a function
    when the gradient of the function can be computed
  • With appropriate assumptions on the sequence of
    step sizes μ_t, the iterates u^t converge to
    a minimizing point.
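
As a concrete illustration (not from the slides; the quadratic objective and all names are mine), a minimal steepest-descent loop in Python:

    import numpy as np

    def steepest_descent(grad, u0, step, max_iter=1000):
        """Minimize f via u^{t+1} = u^t - mu_t * grad_f(u^t)."""
        u = np.asarray(u0, dtype=float)
        for t in range(max_iter):
            g = grad(u)
            if np.linalg.norm(g) < 1e-8:   # (near-)stationary point reached
                break
            u = u - step(t) * g
        return u

    # f(u) = ||u||^2 has gradient 2u; diminishing steps mu_t = 1/(t+1)
    u_min = steepest_descent(lambda u: 2.0 * u, [3.0, -4.0], lambda t: 1.0 / (t + 1))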

5
Subgradient
  • A subgradient at u of a convex function f : R^m → R
    is a vector γ(u) ∈ R^m s.t. f(v) ≥ f(u) +
    γ(u)^T (v − u) for all v ∈ R^m.
  • generalization of the gradient
  • γ(u) can be viewed as the slope of a hyperplane
    that supports the set {(v, w) ∈ R^{m+1} : w ≥ f(v)}
    at (v, w) = (u, f(u))

[Figure: graph of f(v) with the supporting hyperplane w = f(u) + γ(u)^T (v − u) touching at the point (u, f(u))]
6
Subgradient algorithm
  • At each iteration one takes a step from the
    present point u^k in the direction opposite to a
    subgradient d − Dx(u^k).
  • The difficulty is in choosing the step lengths
    {μ_k}, k = 1, 2, ...
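
A minimal sketch of one iteration (names and the orientation min_{u≥0} z(u) with dualized constraints Dx ≤ d are assumptions consistent with the definitions above):

    import numpy as np

    def subgradient_step(u, x_u, D, d, mu):
        """One iteration of the subgradient algorithm for min_{u>=0} z(u).

        x_u is an optimal solution of the Lagrangian relaxation IP(u);
        a subgradient of z at u is then d - D @ x_u.  Step against it,
        then project back onto u >= 0 (multipliers of dualized
        inequality constraints must stay nonnegative).
        """
        gamma = d - D @ x_u                       # subgradient of z(.) at u^k
        u_next = np.maximum(u - mu * gamma, 0.0)  # opposite step + projection
        return u_next, gamma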

7
Step lengths
  • (a) guarantees convergence, but is slow
  • (b) and (c) lead to faster convergence, but have
    difficulties
  • (b) needs sufficiently large μ_0 and ρ
  • (c) requires a dual upper bound (unknown)
  • use a primal lower bound instead and, if the
    iterates do not converge, increase it
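
The rules themselves appear to have been on a figure; the following sketch matches the standard step-length rules in Wolsey's Integer Programming (the constants shown are illustrative assumptions):

    def step_length(rule, k, mu0=2.0, rho=0.95,
                    z_uk=None, w_target=None, gamma_norm2=None, eps=1.0):
        """Step-length rules for the subgradient algorithm.

        (a) mu_k = mu0/(k+1): the series diverges while mu_k -> 0,
            which guarantees convergence but is slow.
        (b) mu_k = mu0 * rho**k: geometric; works only when mu0 and
            rho are sufficiently large.
        (c) mu_k = eps*(z(u^k) - w_target)/||gamma||^2: Polyak-type;
            w_target should be the (unknown) optimal dual value, so a
            primal bound is substituted and adjusted if iterates stall.
        """
        if rule == "a":
            return mu0 / (k + 1)
        if rule == "b":
            return mu0 * rho ** k
        if rule == "c":
            return eps * (z_uk - w_target) / gamma_norm2
        raise ValueError("rule must be 'a', 'b', or 'c'")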

8
Stopping criteria
  • Ideally, the subgradient algorithm can be stopped
    when we find a subgradient equal to 0.
  • Typically, the stopping rule is either
  • to stop after a fixed number of iterations, or
  • to stop if the function has not increased by at
    least a certain amount within a given number of
    iterations
  • Without convergence, we can stop at iteration t
    if
  • s^t = 0
  • if data are integral, then z(u^t) − z < 1
  • after a specific number of subgradient iterations
    has occurred, i.e. t ≥ t_max
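
A direct transcription of these three tests (a sketch; the names and the max-problem orientation are my assumptions, with z_best the value of the best known primal solution):

    import numpy as np

    def should_stop(gamma_t, z_ut, z_best, t, t_max, integral_data=True):
        """Stopping tests for the subgradient algorithm at iteration t."""
        if np.allclose(gamma_t, 0.0):            # s^t = 0: u^t solves the dual
            return True
        if integral_data and z_ut - z_best < 1:  # z(u^t) - z < 1: gap closed
            return True
        return t >= t_max                        # iteration limit reached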

9
UFL

10
STSP
  • w_LD = max z(u)
  • Step direction
  • z(u) is computed by minimizing over 1-trees
  • Follow the directions of subgradients.
  • the dualized constraints are the degree equality
    constraints
  • Dual variable u is unbounded in sign.
  • Step size (using rule (c)), as in the sketch below
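
A sketch of the update this slide describes, in the spirit of Held-Karp (the 1-tree computation is left to an oracle; using a known tour cost as the rule-(c) target is my assumption):

    import numpy as np

    def stsp_update(u, degrees, z_u, tour_cost, eps=1.0):
        """One ascent step for w_LD = max z(u) over the 1-tree relaxation.

        Convention assumed: z(u) = min_T { c(T) + sum_i u_i*(2 - deg_i(T)) },
        so gamma_i = 2 - degrees[i] is a subgradient direction.  The
        dualized degree constraints are equalities, hence u is free in
        sign and no projection onto u >= 0 is needed.
        """
        gamma = 2.0 - np.asarray(degrees, dtype=float)
        norm2 = float(gamma @ gamma)
        if norm2 == 0.0:              # every degree equals 2: the 1-tree is a tour
            return u, gamma
        mu = eps * (tour_cost - z_u) / norm2   # rule (c), tour_cost as target
        return u + mu * gamma, gamma           # follow the subgradient (ascent)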

11
Dual sol to primal sol
  • When the dual variables u approach the set of optimal
    dual solutions, the x(u) obtained is close to a
    primal feasible solution.
  • STSP: many nodes of the 1-tree have degree 2
  • UFL: many clients are served exactly once
  • Is there a heuristic to convert x(u) into a feasible
    solution without greatly decreasing its value (max)
    or increasing its cost (min)?
  • fixing variables?

12
Heuristics
  • Set-covering problem: min ∑_{j∈N} c_j x_j s.t.
    ∑_{j∈N} a_ij x_j ≥ 1 for i∈M, x_j ∈ {0,1} for j∈N
  • z(u) = ∑_{i∈M} u_i + min{∑_{j∈N} (c_j − ∑_{i∈M} u_i a_ij) x_j :
    x ∈ B^n}
  • One possibility
  • take an optimal sol x(u), drop all rows covered
    by x(u), i.e. the rows i∈M s.t. ∑_{j∈N} a_ij x_j(u) ≥ 1,
    and solve the remaining smaller covering problem by a
    greedy heuristic (see the sketch after this list)
  • If y is the heuristic sol, then x^H = x(u) + y
    is a feasible sol.
  • check if it can be improved by changing values of x
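
A sketch of this completion heuristic (names are mine; it assumes every row can be covered by some column, i.e. the instance is feasible):

    def complete_cover_greedy(x_u, a, c):
        """Complete x(u) to a feasible set cover.

        Drop the rows already covered by x(u), then repeatedly add the
        column with the best cost per newly covered row (the classical
        greedy rule).  a[i][j] is the 0/1 matrix, c[j] the column cost.
        """
        m, n = len(a), len(c)
        uncovered = {i for i in range(m)
                     if not any(a[i][j] and x_u[j] for j in range(n))}
        x = list(x_u)
        while uncovered:
            candidates = [j for j in range(n) if not x[j]
                          and any(a[i][j] for i in uncovered)]
            j_best = min(candidates,
                         key=lambda j: c[j] / sum(a[i][j] for i in uncovered))
            x[j_best] = 1                     # y_j = 1 in the slide's notation
            uncovered = {i for i in uncovered if not a[i][j_best]}
        return x                              # x^H = x(u) + y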

13
Lagrange heuristics
  • use the Lagrangian for variable fixing
  • If z̄ is the incumbent value, then any better
    feasible sol x satisfies ∑_{i∈M} u_i + ∑_{j∈N} (c_j −
    ∑_{i∈M} u_i a_ij) x_j ≤ cx < z̄
  • Let N1 = {j∈N : c_j − ∑_{i∈M} u_i a_ij > 0} and
    N0 = {j∈N : c_j − ∑_{i∈M} u_i a_ij < 0}
  • i) If k∈N1 and ∑_{i∈M} u_i + ∑_{j∈N0} (c_j − ∑_{i∈M} u_i a_ij)
    + (c_k − ∑_{i∈M} u_i a_ik) ≥ z̄, then x_k = 0 in any
    better feasible solution.
  • ii) If k∈N0 and ∑_{i∈M} u_i + ∑_{j∈N0\{k}} (c_j − ∑_{i∈M}
    u_i a_ij) ≥ z̄, then x_k = 1 in any better feasible
    solution.
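
The two tests translate directly into code (a sketch; c_bar[j] stands for the reduced cost c_j − ∑_i u_i a_ij, u_sum for ∑_i u_i, and z_inc for the incumbent value z̄):

    def lagrangian_fixing(c_bar, u_sum, z_inc):
        """Return index sets (fix_to_0, fix_to_1) per tests i) and ii)."""
        n0 = [j for j, cb in enumerate(c_bar) if cb < 0]       # the set N0
        base = u_sum + sum(c_bar[j] for j in n0)               # z(u), the dual bound
        fix_to_0 = {j for j, cb in enumerate(c_bar)
                    if cb > 0 and base + cb >= z_inc}          # test i):  x_j = 0
        fix_to_1 = {j for j in n0 if base - c_bar[j] >= z_inc} # test ii): x_j = 1
        return fix_to_0, fix_to_1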

14
Example

15
Lagrangian dual
  • z = max{cx : A^1 x ≤ b^1, A^2 x ≤ b^2, x ∈ Z^n}
  • choices based on trade-offs between
  • the strength of the resulting LD bound w_LD
  • ease of solution of the Lagrangian relaxation
    IP(u)
  • ease of solution of the LD w_LD = min_{u≥0} z(u)

16
Generalized Assignment Problem

17
Decomposition
  • max{cx : x∈X} where, for some K ≥ 1, X is given by
    A^1 x^1 + A^2 x^2 + ... + A^K x^K = b
    D^1 x^1 ≤ d^1
    ...
    D^K x^K ≤ d^K
  • the sets X^k = {x^k ∈ Z^{n_k} : D^k x^k ≤ d^k} are
    independent, and the joint constraints ∑_{k=1}^K A^k x^k = b
    link together the different sets of variables
  • cut generation for each subset X^k
  • LR to dualize the joint constraints

18
LR Decomposition
  • Assume each X^k is bounded for k = 1, ..., K
  • solve an equivalent problem of the form shown below,
    where each matrix B^k has a very large number of
    columns, one for each of the feasible points in
    X^k, and each vector λ^k contains the corresponding
    variables
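
The reformulation itself appears to have been an image; a standard statement of the master problem (following the usual Dantzig-Wolfe development, so the exact notation is an assumption), with {x^{k,t}}, t = 1, ..., T_k, the points of X^k:

    \begin{align*}
    z = \max\ & \sum_{k=1}^{K} \sum_{t=1}^{T_k} \bigl(c^k x^{k,t}\bigr)\,\lambda_{k,t}\\
    \text{s.t. } & \sum_{k=1}^{K} \sum_{t=1}^{T_k} \bigl(A^k x^{k,t}\bigr)\,\lambda_{k,t} = b\\
    & \sum_{t=1}^{T_k} \lambda_{k,t} = 1 \qquad \text{for } k = 1,\dots,K\\
    & \lambda_{k,t} \in \{0,1\} \qquad \text{for all } k, t
    \end{align*}

The columns of the slide's matrix B^k are then the vectors A^k x^{k,t}.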

19
Example
  • UFL
  • locations j = 1, ..., n correspond to the indices
    k = 1, ..., K
  • for each nonempty subset S ⊆ M of clients, let
    λ_{S,j} = 1 if depot j satisfies the demand of
    the clients in S

20
Dantzig-Wolfe reformulation
  • IP Master Problem

21
Example
  • UFL
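
With the λ_{S,j} variables of slide 19, the UFL master takes the following shape (a hedged sketch following the usual development; c_ij as the value of serving client i from depot j and f_j as the fixed cost of opening j are my assumptions about the course's notation):

    \begin{align*}
    \max\ & \sum_{j=1}^{n} \sum_{\emptyset \neq S \subseteq M}
            \Bigl(\sum_{i \in S} c_{ij} - f_j\Bigr)\,\lambda_{S,j}\\
    \text{s.t. } & \sum_{j=1}^{n} \sum_{S \ni i} \lambda_{S,j} = 1
            \qquad \text{for } i \in M\\
    & \sum_{\emptyset \neq S \subseteq M} \lambda_{S,j} \le 1
            \qquad \text{for } j = 1,\dots,n\\
    & \lambda_{S,j} \in \{0,1\}
    \end{align*}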

22
Today
  • Convert a dual solution into a primal solution
  • using heuristics
  • a greedy heuristic to solve the smaller remaining
    problem
  • convert into a primal feasible solution
  • fix variables w/o degrading the value
  • Lagrangian dual
  • which constraints should be dualized?
  • Decomposition
  • Dantzig-Wolfe reformulation