- EvaluableProblem
    - CorrectedMultigridableProblem
        - QCorrectedMultigridableProblem
        - WCorrectedMultigridableProblem
    - HarmonicProblem
        - MultigridableProblem(HarmonicProblem, SamplingReducer)
- MROptimizerState
- MultigridOptimizer
- OptimizerGdes
- OptimizerGdesP
    - OptimizerGdesQ
- OptimizerML
- OptimizerMLH
- OptimizerMLdH
- SimpleReducer
    - SamplingReducer
        - SplineReducer
class CorrectedMultigridableProblem(EvaluableProblem) |
|
Take a MultigridableProblem and make another multigridable problem
out of it. A correction term is added so that a fixed point of the
fine grid solution is also a fixed point of the coarse grid solution.
The correction comes from the Bouman ICIP'99 paper and is:
E' = E - <r,x>
where r = grad_x1 E1(R(x0)) - R grad_x0 E0(x0)
This implementation inherits: updateFine
It encapsulates the rest of the EvaluableProblem routines |
| - __init__(self, p)
- p is a MultigridableProblem instance which we will encapsulate
- canReduce(self)
- dHerr(self, x, h=1e-06) from EvaluableProblem
- gerr(self, x, h=1e-06) from EvaluableProblem
- getE(self, x)
- Return corrected criterion
- getEg(self, x)
- Return corrected criterion and gradient
- getEgDh(self, x)
- getInitialX(self)
- getReduced(self, x)
- returns (pc,xc), the coarsened problem and the coarsened x
- numDh(self, x, h=1e-06) from EvaluableProblem
- numg(self, x, h=1e-06) from EvaluableProblem
- reduceX(self, x)
- updateFine(self, xf, xc0, xc1)
|
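The correction above can be sketched in a few lines (an illustrative sketch, not the module's code; `coarse_grad`, `coarse_Eg`, `fine_grad`, and `R` are hypothetical stand-ins for the coarse and fine gradients and the restriction operator):

```python
import numpy as np

def correction_vector(coarse_grad, R, fine_grad, x0):
    # r = grad_x1 E1(R(x0)) - R grad_x0 E0(x0), evaluated once
    # at the current fine-level solution x0
    return coarse_grad(R @ x0) - R @ fine_grad(x0)

def corrected_Eg(coarse_Eg, r, x):
    # E'(x) = E(x) - <r,x>, so grad E'(x) = grad E(x) - r
    E, g = coarse_Eg(x)
    return E - np.dot(r, x), g - r
```

By construction, the gradient of E' at R(x0) equals the restricted fine gradient, so a stationary point of the fine problem stays stationary on the coarse grid.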
class EvaluableProblem |
|
Prototype multidimensional optimization problem
E=(x0-1)^4+(x1-2)^4+(x2-3)^4+(x0-1)^2*(x1-2)^2*(x2-3)^2
The minimum is clearly at (1,2,3) |
| - dHerr(self, x, h=1e-06)
- gerr(self, x, h=1e-06)
- getE(self, x)
- getEg(self, x)
- getEgDh(self, x)
- getInitialX(self)
- numDh(self, x, h=1e-06)
- numg(self, x, h=1e-06)
|
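As a concrete illustration, the prototype criterion and its analytic gradient can be written out directly (a sketch; only the formula above comes from the class, the helper layout is ours):

```python
# E = (x0-1)^4 + (x1-2)^4 + (x2-3)^4 + (x0-1)^2*(x1-2)^2*(x2-3)^2
def getE(x):
    u, v, w = x[0] - 1.0, x[1] - 2.0, x[2] - 3.0
    return u**4 + v**4 + w**4 + u**2 * v**2 * w**2

def getEg(x):
    # analytic gradient of E
    u, v, w = x[0] - 1.0, x[1] - 2.0, x[2] - 3.0
    g = [4*u**3 + 2*u * v**2 * w**2,
         4*v**3 + 2*v * u**2 * w**2,
         4*w**3 + 2*w * u**2 * v**2]
    return getE(x), g

def numg(x, h=1e-6):
    # central-difference check, in the spirit of numg()/gerr()
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((getE(xp) - getE(xm)) / (2 * h))
    return g
```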
class HarmonicProblem(EvaluableProblem) |
|
Larger 1D prototype problem, whose solution is a harmonic function
E=Int f'^2+T f^2 dx
evaluated at x=0..1 on a N point grid. Fix f[0]=1
Use MirrorOnBounds boundary conditions |
| - __init__(self, T=1.0, xshape=(10,))
- dHerr(self, x, h=1e-06) from EvaluableProblem
- gerr(self, x, h=1e-06) from EvaluableProblem
- getE(self, xp)
- getEg(self, xp)
- getEgDh(self, xp)
- getInitialX(self)
- newx(self, xp)
- numDh(self, x, h=1e-06) from EvaluableProblem
- numg(self, x, h=1e-06) from EvaluableProblem
|
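One possible discretization of the criterion (an assumed sketch using forward differences and a rectangle rule; the class's exact stencil and boundary handling may differ):

```python
import numpy as np

def harmonic_E(f, T=1.0):
    # E = Int f'^2 + T f^2 dx on [0,1], sampled on a len(f)-point grid
    dx = 1.0 / (len(f) - 1)
    smoothness = np.sum(np.diff(f) ** 2) / dx   # Int f'^2 dx
    regularizer = T * dx * np.sum(f ** 2)       # Int T f^2 dx
    return smoothness + regularizer
```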
class MROptimizerState |
|
Encapsulates the state of one level of a multiresolution optimizer.
After instantiation, solveMR() is called to solve the problem.
Then, the final state of the problem can be queried using
getProblem() or getOptimizer().
New parameters include: optimizerClass (defaults to OptimizerGdesQ)
and minlevel, the minimum level at which we optimize (default is 0,
optimize at all levels). If xtollast is set, it replaces xtol for
level 0. maxlevel is the maximum level to which we descend. |
| - __init__(self, problem, par=None, level=0)
- Accepts an EvaluableProblem instance implementing the
  Multigridable protocol (methods canReduce(),
  getReduced(), and updateFine())
- canReduce(self)
- getOptimizer(self)
- getProblem(self)
- getReduced(self)
- Returns a reduced version of itself, passing the current
optimization state along
- smoothToConvergence(self)
- Optimizes until convergence is reached.
Returns a reference to self.
- solveMR(self)
- Finds a solution to the problem using a multiresolution approach.
  Returns a reference to self.
|
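The control flow of solveMR() can be pictured as a recursion over the Multigridable protocol (a conceptual sketch with a hypothetical toy problem, not the class's code):

```python
import numpy as np

class ToyProblem:
    # Hypothetical minimal problem implementing the Multigridable
    # protocol: canReduce(), getReduced(), updateFine()
    def __init__(self, n):
        self.n = n
    def canReduce(self):
        return self.n > 2
    def getReduced(self, x):
        return ToyProblem(self.n // 2), x[::2]   # subsample to the coarse grid
    def updateFine(self, xf, xc0, xc1):
        xf = xf.copy()
        xf[::2] += xc1 - xc0                     # inject the coarse improvement
        return xf

def solve_mr(problem, x, optimize, level=0, maxlevel=1000):
    # descend to the coarsest level first, then smooth on the way up
    if problem.canReduce() and level < maxlevel:
        pc, xc0 = problem.getReduced(x)
        xc1 = solve_mr(pc, xc0, optimize, level + 1, maxlevel)
        x = problem.updateFine(x, xc0, xc1)
    return optimize(problem, x)   # "smoothToConvergence" at this level
```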
class MultigridOptimizer |
|
A class representing a multigrid optimizer. Usage:
m=MultigridOptimizer(OptimizerMLdH,
inismooth=3,presmooth=1,
postsmooth=2,intsmooth=2,verbose=0,
abstol=1e-50,reltol=1e-5)
x=m.optimizeFMG(problem) |
| - __init__(self, optimizer, presmooth=3, postsmooth=3, intsmooth=2, verbose=0, maxiter=100, abstol=1e-50, reltol=1e-05)
- makeTcycle(self, p, x, level=0, callbackE=None)
- Returns a value improved by a multigrid T cycle
Its main characteristic is not to iterate at the fine level
when iteration at the coarse level has also improved the
fine-level criterion.
- makeVcycle(self, p, x, level=0, callbackE=None, maxlevel=1000, parentopt=None)
- Returns a value improved by a multigrid V cycle
- makeWcycle(self, p, x, level=0, callbackE=None)
- Returns a value of x improved by a multigrid W cycle
- optimizeFMG(self, p, level=0, maxiter=500, callbackE=None)
- Full multigrid optimizer of problem p. Returns the optimal value
x found.
|
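A schematic V cycle, in the spirit of makeVcycle() (assumed structure, not the class's code; `smooth` stands for one smoothing pass of the chosen optimizer):

```python
def v_cycle(p, x, smooth, presmooth=3, postsmooth=3, level=0, maxlevel=1000):
    # pre-smooth at this level
    for _ in range(presmooth):
        x = smooth(p, x)
    # recurse on the coarse problem, then bring the coarse update back up
    if p.canReduce() and level < maxlevel:
        pc, xc0 = p.getReduced(x)
        xc1 = v_cycle(pc, xc0, smooth, presmooth, postsmooth,
                      level + 1, maxlevel)
        x = p.updateFine(x, xc0, xc1)
    # post-smooth at this level
    for _ in range(postsmooth):
        x = smooth(p, x)
    return x
```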
class MultigridableProblem(HarmonicProblem, SamplingReducer) |
|
An extension of HarmonicProblem to show the multigrid protocol.
The following methods are added:
getReduced(x) which gets a reduced version of the problem and
the parameters x
updateFine(xf,xc0,xc1) which updates xf at fine level with the
difference xc1-xc0 at the coarse level
canReduce() returns true if further reduction is possible
The following implementation is only prototypical and will
probably have to be modified for a specific problem. |
| - __init__(self, T=1.0, xshape=(10,)) from HarmonicProblem
- canReduce(self)
- dHerr(self, x, h=1e-06) from EvaluableProblem
- expand(self, x, targetlen=None) from SimpleReducer
- expandexceptfirst(self, x, targetlen=None) from SimpleReducer
- gerr(self, x, h=1e-06) from EvaluableProblem
- getE(self, xp) from HarmonicProblem
- getEg(self, xp) from HarmonicProblem
- getEgDh(self, xp) from HarmonicProblem
- getInitialX(self) from HarmonicProblem
- getReduced(self, x)
- returns (pc,xc), the coarsened problem and the coarsened x
- newx(self, xp) from HarmonicProblem
- numDh(self, x, h=1e-06) from EvaluableProblem
- numg(self, x, h=1e-06) from EvaluableProblem
- reduce(self, x) from SimpleReducer
- reduceX(self, x)
- reduceexceptfirst(self, x) from SimpleReducer
- simpleexpand(self, x, targetlen=None) from SamplingReducer
- simplereduce(self, x) from SamplingReducer
- updateFine(self, xf, xc0, xc1)
- returns an updated array xf based on the coarse x before (xc0)
and after (xc1) coarse level smoothing
|
class OptimizerGdes |
|
Gradient descent optimizer for EvaluableProblem-like classes.
Just go down along the gradient, adapting the step size as needed.
Usage:
o=OptimizerGdes(evaluableProblemInstance)
while not o.hasConverged():
o.makeStep()
print o.getX()
optional parameters to the constructor include the starting point
'startx', tolerances 'abstol', 'reltol', verbosity flag 'verbose'.
Callback function 'cbf' can also be specified. |
| - __init__(self, problem, startx=None, abstol=1e-50, reltol=1e-06, verbose=0, xtol=1e-06, startstep=1.0, minstep=1e-50, stepf=10.0, cbf=None)
- getE(self)
- getStep(self)
- getX(self)
- hasConverged(self)
- hasConvergedE(self)
- makeStep(self)
- makeSuccessfulStep(self)
- setStep(self, step)
- setX(self, x)
- showEnvironment(self, title=None, grad=None, step=None)
|
class OptimizerGdesP |
|
Gradient descent optimizer for EvaluableProblem-like classes.
Just go down along the gradient, adapting the step size as needed.
It works almost exactly like bigoptimize.OptimizerGdes, except that
parameters are given as a parameter object
Usage:
o=OptimizerGdesP(evaluableProblemInstance,parameters=None)
while not o.hasConverged():
o.makeStep()
print o.getX()
optional parameters include the starting point
'startx', tolerances 'xtol', starting step 'startstep',
step mult. factors 'stepf' and 'stepdownf', verbosity flag 'verbose'.
Callback function 'cbf' can also be specified, which is then
called whenever the criterion is evaluated.
Some optimization has been performed to minimize the number
of evaluations needed. |
| - __init__(self, problem, par=None)
- getE(self)
- getEg(self)
- getIterCount(self)
- getX(self)
- hasConverged(self)
- makeStep(self)
- makeSuccessfulStep(self)
- setX(self, x)
- smoothToConvergence(self)
- Iterate until convergence is reached. By virtue of the
stopping criterion, the last evaluation is always the best
|
class OptimizerML |
|
Prototype optimizer for EvaluableProblem-like classes. It uses
a Marquardt-Levenberg-like method based on the gradient alone,
without the Hessian. Usage:
o=OptimizerML(evaluableProblemInstance)
while not o.hasConverged():
o.makeStep()
print o.getX()
optional parameters to the constructor include the starting point
'startx', tolerances 'abstol', 'reltol', verbosity flag 'verbose',
lambda multiplication factor 'lambdaf', maximum and minimum
values for lambda 'maxlambda' and 'minlambda'. Callback function 'cbf'
can also be specified. |
| - __init__(self, problem, startx=None, abstol=1e-50, reltol=1e-06, verbose=0, xtol=1e-06, lambdaf=10.0, maxlambda=1e+100, minlambda=1e-50, cbf=None)
- getE(self)
- getX(self)
- hasConverged(self)
- hasConvergedE(self)
- makeStep(self)
- makeSuccessfulStep(self)
- setX(self, x)
|
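The lambda control can be sketched like this (assumed behaviour based on the 'lambdaf' and 'maxlambda' parameters; not the class's code). Without a Hessian, the damped update degenerates to a scaled gradient step x - g/lambda:

```python
def ml_step(getEg, x, lam, lambdaf=10.0, maxlambda=1e100):
    # one Marquardt-Levenberg-style step without a Hessian:
    # lambda damps the step, growing on failure, shrinking on success
    E, g = getEg(x)
    while lam < maxlambda:
        xn = [xi - gi / lam for xi, gi in zip(x, g)]
        En, _ = getEg(xn)
        if En < E:
            return xn, lam / lambdaf   # success: be bolder next time
        lam *= lambdaf                 # failure: damp the step more
    return x, lam
```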
class OptimizerMLH |
|
Optimizer for EvaluableProblem-like classes that support the
'getEgH' method. A Marquardt-Levenberg-like method using the gradient
and the full Hessian approximated from first derivatives. Usage:
o=OptimizerMLH(evaluableProblemInstance)
while not o.hasConverged():
o.makeStep()
print o.getX()
optional parameters to the constructor include the starting point
'startx', tolerances 'abstol', 'reltol', verbosity flag 'verbose',
lambda multiplication factor 'lambdaf', maximum and minimum
values for lambda 'maxlambda' and 'minlambda'. Callback function 'cbf'
can also be specified. |
| - __init__(self, problem, startx=None, abstol=1e-50, reltol=1e-06, xtol=1e-06, verbose=0, lambdaf=10.0, maxlambda=1e+100, minlambda=1e-50, cbf=None)
- hasConverged(self)
- hasConvergedE(self)
- makeStep(self)
- makeSuccessfulStep(self)
- setX(self, x)
|
class OptimizerMLdH |
|
Prototype optimizer for EvaluableProblem-like classes. It uses
a Marquardt-Levenberg-like method based on the gradient and the
diagonal of the Hessian. Usage:
o=OptimizerMLdH(evaluableProblemInstance)
while not o.hasConverged():
o.makeStep()
print o.getX()
optional parameters to the constructor include the starting point
'startx', tolerances 'abstol', 'reltol', verbosity flag 'verbose',
lambda multiplication factor 'lambdaf', maximum and minimum
values for lambda 'maxlambda' and 'minlambda'. Callback function 'cbf'
can also be specified. |
| - __init__(self, problem, startx=None, abstol=1e-50, reltol=1e-06, xtol=1e-06, verbose=0, lambdaf=10.0, maxlambda=1e+100, minlambda=1e-50, cbf=None)
- getE(self)
- getX(self)
- hasConverged(self)
- hasConvergedE(self)
- makeStep(self)
- makeSuccessfulStep(self)
- setX(self, x)
|
class QCorrectedMultigridableProblem(CorrectedMultigridableProblem) |
|
Quadratic extension to the Bouman correction.
The corrected problem is:
E' = E - betacorr*<r,x> + <r,x>^2
This ensures that E' does not tend to -infty in any direction,
and consequently guarantees the existence of a minimum. |
| - __init__(self, p)
- p is a MultigridableProblem instance which we will encapsulate
- canReduce(self) from CorrectedMultigridableProblem
- dHerr(self, x, h=1e-06) from EvaluableProblem
- gerr(self, x, h=1e-06) from EvaluableProblem
- getE(self, x)
- Return corrected criterion
- getEg(self, x)
- Return corrected criterion and gradient
- getEgDh(self, x)
- Return corrected criterion and gradient
- getInitialX(self) from CorrectedMultigridableProblem
- getReduced(self, x)
- returns (pc,xc), the coarsened problem and the coarsened x
- numDh(self, x, h=1e-06) from EvaluableProblem
- numg(self, x, h=1e-06) from EvaluableProblem
- reduceX(self, x) from CorrectedMultigridableProblem
- updateFine(self, xf, xc0, xc1) from CorrectedMultigridableProblem
|
class SimpleReducer |
|
The SimpleReducer class assembles the expand and reduce operations
based on linear averaging and linear interpolation. It works
for matrices with arbitrary dimensions. |
| - expand(self, x, targetlen=None)
- expand arbitrary vector x
- expandexceptfirst(self, x, targetlen=None)
- expand x except the first dimension
- reduce(self, x)
- reduce arbitrary vector/matrix x
- reduceexceptfirst(self, x)
- reduce x except the first dimension
- simpleexpand(self, x, targetlen=None)
- expand 1D vector by 3pt averaging
- simplereduce(self, x)
- reduce 1D vector by 3pt averaging
|
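A sketch of the 1D reduce/expand pair (assumed weights 1/4, 1/2, 1/4 and linear interpolation; the class's exact stencils may differ):

```python
import numpy as np

def simplereduce(x):
    # 3-point averaging with weights 1/4, 1/2, 1/4, then subsampling
    xp = np.pad(x, 1, mode='edge')            # simple edge handling
    y = 0.25 * xp[:-2] + 0.5 * xp[1:-1] + 0.25 * xp[2:]
    return y[::2]                             # keep every other sample

def simpleexpand(x, targetlen=None):
    # linear interpolation back to a finer grid
    n = targetlen if targetlen is not None else 2 * len(x) - 1
    src = np.linspace(0.0, 1.0, len(x))
    dst = np.linspace(0.0, 1.0, n)
    return np.interp(dst, src, x)
```

With these choices the two operations are consistent: reducing an N-point vector and expanding the result restores the original length.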
class WCorrectedMultigridableProblem(CorrectedMultigridableProblem) |
|
Windowed extension to the Bouman correction.
The corrected problem is:
E' = E - <r,x>*W(<r,x>)
where W(z) = exp(-(z/sigma)^2) is a Gaussian-shaped window
whose size sigma^2 = 20*zm^2 is calculated from
zm = |I R x - x| * |r| / 2
This ensures that E' coincides with E except in a small
neighborhood of the original point. |
| - __init__(self, p)
- p is a MultigridableProblem instance which we will encapsulate
- canReduce(self) from CorrectedMultigridableProblem
- dHerr(self, x, h=1e-06) from EvaluableProblem
- gerr(self, x, h=1e-06) from EvaluableProblem
- getE(self, x)
- Return corrected criterion
- getEg(self, x)
- Return corrected criterion and gradient
- getEgDh(self, x)
- Return corrected criterion, gradient and Hessian
- getInitialX(self) from CorrectedMultigridableProblem
- getReduced(self, x)
- returns (pc,xc), the coarsened problem and the coarsened x
- getTau(self, x)
- Calculate the scalar product tau2=<r,x-x0>
- getWindow(self, tau2)
- Calculate the window value
- numDh(self, x, h=1e-06) from EvaluableProblem
- numg(self, x, h=1e-06) from EvaluableProblem
- reduceX(self, x) from CorrectedMultigridableProblem
- updateFine(self, xf, xc0, xc1) from CorrectedMultigridableProblem
| |