
Boundary and Constraint Handling

Another reasonably good method for handling boundaries (box constraints) in connection with CMA-ES is described in A Method for Handling Uncertainty in Evolutionary Optimization... (2009) (pdf); see the prepended Addendum and Section IV B. The method applies a penalty that is a quadratic function of the distance to the feasible domain, with adaptive penalty weights, and is implemented in the Matlab and Python codes below.
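A minimal, simplified sketch of the penalty term just described (the helper name `box_penalty` is hypothetical, and the weights are simply given here, whereas the cited method adapts them during the run):

```python
def box_penalty(x, lower, upper, weights):
    """Quadratic penalty on the distance to the feasible box [lower, upper].

    `weights` stands in for the adaptive penalty weights of the cited
    method; the real implementation adapts them online, which this
    sketch omits.
    """
    total = 0.0
    for xi, lo, up, w in zip(x, lower, upper, weights):
        # distance of xi to the interval [lo, up]; zero if xi is inside
        d = max(lo - xi, 0.0, xi - up)
        total += w * d * d
    return total
```

The penalty is zero for feasible points and grows quadratically with the distance to the box, so it leaves the objective untouched inside the feasible domain.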

Addressing non-linear constraints is more intricate. In benign cases the optimum is not very close to the constraint and simple resampling works fine. The provided implementations resample automatically when the objective function returns NaN. A simple-to-implement alternative is to return Σi (xi - domain_middlei)2 + a_large_constant as the objective function value for infeasible solutions. This may often work better than resampling, yet it can still work poorly if the optimum lies at the domain boundary. The class cma.ConstrainedFitnessAL provides non-linear constraint handling that works well when the optimum is at the boundary, provided the function is sufficiently smooth.
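The simple alternative for infeasible solutions can be sketched as a plain wrapper around the objective function (the helper name `make_penalized` is hypothetical; any CMA-ES implementation would then minimize the returned function instead of the original one):

```python
def make_penalized(objective, lower, upper, large_constant=1e12):
    """Return an objective that replaces the value of infeasible points.

    Infeasible solutions get a_large_constant plus the squared distance
    to the middle of the box domain, as described above, so that the
    search is pushed back toward feasibility without resampling.
    """
    middle = [(lo + up) / 2.0 for lo, up in zip(lower, upper)]

    def penalized(x):
        if all(lo <= xi <= up for xi, lo, up in zip(x, lower, upper)):
            return objective(x)  # feasible: evaluate as usual
        # infeasible: large offset plus squared distance to the domain middle
        return large_constant + sum((xi - mi) ** 2 for xi, mi in zip(x, middle))

    return penalized
```

Because every infeasible value exceeds `large_constant`, feasible solutions always rank better, while the distance term still gives the search a gradient back into the box.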


For a very short and general overview of boundary and constraint handling in connection with CMA-ES (as of 2014, no longer entirely up-to-date), see the appendix of The CMA Evolution Strategy: A Tutorial, p. 34f.

Initial Values

After the encoding of variables (see above), the initial solution point x0 and the initial standard deviation (step-size) σ0 must be chosen. In a practical application, one often wants to start by trying to improve a given solution locally. In this case we choose a rather small σ0 (say in [0.001, 0.1], given the x-values "live" in [0, 10]). Thereby we can also check whether the initial solution is possibly a local optimum. When a global optimum is sought on rugged or multimodal landscapes, σ0 should be chosen such that the global optimum or final desirable location (or at least some of its domain of attraction)