Resource: Evolutionary Computing, A. E. Eiben
1. What Are Evolution Strategies (ES)?
Evolution strategies (ES) are another member of the evolutionary algorithm family.
2. Introductory Example
2.1 Task
Minimise f : R^n → R
2.2 Original algorithm
The “two-membered ES” (the (1+1)-ES) uses
- Vectors from R^n directly as chromosomes
- Population size 1
- Only mutation, creating one child
- Greedy selection
2.3 Pseudocode
------------------------------------------------------------
Set t = 0
Create initial point x^t = 〈x_1^t, …, x_n^t〉
REPEAT UNTIL (TERMINATION CONDITION satisfied) DO
  Draw z_i from a normal distribution for all i = 1, …, n
  y_i^t = x_i^t + z_i
  IF f(x^t) < f(y^t) THEN x^(t+1) = x^t
  ELSE x^(t+1) = y^t
  Set t = t + 1
OD
------------------------------------------------------------
2.4 Explanation
As shown in the pseudocode above, given a current solution x^t in the form of a vector of length n, a new candidate x^(t+1) is created by adding a random number z_i, for i = 1, …, n, to each of the n components.
The random numbers z_i:
A Gaussian, or normal, distribution is used with zero mean and standard deviation σ for drawing the random numbers.
This distribution is symmetric about zero, and the probability of drawing a value falls off rapidly as its magnitude grows relative to the standard deviation σ, so small perturbations are much more likely than large ones.
The step size σ:
Thus the σ value is a parameter of the algorithm that determines the extent to which given values xi are perturbed by the mutation operator.
For this reason σ is often called the mutation step size.
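Before looking at how σ is adjusted, here is a minimal runnable Python sketch of the two-membered ES loop above with a fixed step size σ; the sphere objective, the sampling range for the initial point, and the iteration budget are illustrative assumptions rather than part of the original algorithm.
------------------------------------------------------------
import numpy as np

def one_plus_one_es(f, n, sigma=0.1, iterations=2000, seed=0):
    # Two-membered ES: one parent, one child, greedy selection (minimisation).
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, size=n)        # initial point x^0 (assumed range)
    for _ in range(iterations):
        z = rng.normal(0.0, sigma, size=n)    # z_i drawn from N(0, sigma) for all i
        y = x + z                             # candidate y^t = x^t + z
        if f(y) <= f(x):                      # take the child unless the parent is strictly better
            x = y
    return x

# Illustrative objective: sphere function f(x) = sum of x_i^2, minimum at the origin.
sphere = lambda v: float(np.sum(v ** 2))
print(one_plus_one_es(sphere, n=5))
------------------------------------------------------------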
Theoretical studies motivated an on-line adjustment of step sizes by the famous 1/5 success rule. This rule states that the ratio of successful mutations (those in which the child is fitter than the parent) to all mutations should be 1/5.
- If the ratio is greater than 1/5, the step size should be increased to make a wider search of the space.
- If the ratio is less than 1/5 then it should be decreased to concentrate the search more around the current solution.
The rule is executed at periodic intervals.
For instance, after k iterations each σ is reset by
- σ ← σ / c if p_s > 1/5
- σ ← σ · c if p_s < 1/5
- σ ← σ if p_s = 1/5
where p_s is the relative frequency of successful mutations measured over a number of trials, and the parameter c is in the range [0.817, 1].
As is apparent, using this mechanism the step sizes change based on feedback from the search process.
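A small Python sketch of this reset step, assuming the success ratio p_s has been measured over the last k mutations; c = 0.9 is one admissible choice from the range given above.
------------------------------------------------------------
def one_fifth_rule(sigma, successes, trials, c=0.9):
    # Rechenberg's 1/5 success rule: adjust sigma based on the observed
    # success ratio p_s = successes / trials.
    p_s = successes / trials
    if p_s > 1/5:
        return sigma / c      # too many successes: widen the search
    if p_s < 1/5:
        return sigma * c      # too few successes: concentrate the search
    return sigma              # exactly 1/5: leave sigma unchanged

# Example: 3 successful mutations in the last k = 20 trials, so sigma shrinks.
sigma = one_fifth_rule(sigma=0.1, successes=3, trials=20)
------------------------------------------------------------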
2.5 Conclusion
This example illustrates some essential characteristics of evolution strategies:
- Evolution strategies are typically used for continuous parameter optimisation.
- There is a strong emphasis on mutation for creating offspring.
- Mutation is implemented by adding some random noise drawn from a Gaussian distribution.
- Mutation parameters are changed during a run of the algorithm.
3. Representation
Chromosomes consist of three parts:
- Object variables: x_1, …, x_n
- Strategy parameters:
  - Mutation step sizes: σ_1, …, σ_{n_σ}
  - Rotation angles: α_1, …, α_{n_α}
Full size: 〈x_1, …, x_n, σ_1, …, σ_n, α_1, …, α_k〉, where k = n(n−1)/2 (the number of i, j pairs). This is the general form of individuals in ES.
The σ values represent the mutation step sizes, and their number n_σ is usually either 1 or n. For any reasonable self-adaptation mechanism at least one σ must be present.
The α values, which represent interactions between the step sizes used for different variables, are not always used.
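As a concrete (purely illustrative) data layout, the general individual 〈x_1, …, x_n, σ_1, …, σ_n, α_1, …, α_k〉 could be held in Python as follows; the class name and the initial values are assumptions, not part of the ES definition.
------------------------------------------------------------
import numpy as np
from dataclasses import dataclass

@dataclass
class Individual:
    # General ES individual: object variables, step sizes, rotation angles.
    x: np.ndarray        # object variables x_1, ..., x_n
    sigmas: np.ndarray   # mutation step sizes, n_sigma of them (usually 1 or n)
    alphas: np.ndarray   # rotation angles, k = n(n-1)/2 of them (may be absent)

n = 4
ind = Individual(
    x=np.zeros(n),
    sigmas=np.full(n, 0.5),              # here n_sigma = n
    alphas=np.zeros(n * (n - 1) // 2),   # k = 4*3/2 = 6 pairwise angles
)
------------------------------------------------------------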
4. Mutation
4.1 Main mechanism
Changing values by adding random noise drawn from a normal distribution.
The mutation operator in ES is based on a normal (Gaussian) distribution requiring two parameters: the mean ξ and the standard deviation σ.
Mutations are then realised by adding some Δx_i to each x_i, where the Δx_i values are randomly drawn using the given Gaussian N(ξ, σ), with the corresponding probability density function. In practice the mean ξ is set to zero, so that
x_i' = x_i + N(0, σ)
where x_i' is the new (mutated) value of x_i.
N(0,σ) here denotes a random number drawn from a Gaussian distribution with zero mean and standard deviation σ.
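Written out with NumPy (the values of σ and x below are arbitrary examples), this basic mutation is a one-liner:
------------------------------------------------------------
import numpy as np

rng = np.random.default_rng()
sigma = 0.3
x = np.array([1.0, -2.0, 0.5])
x_new = x + rng.normal(0.0, sigma, size=x.shape)   # x_i' = x_i + N(0, sigma) for every i
------------------------------------------------------------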
4.2 Key ideas
- σ is part of the chromosome 〈x_1, …, x_n, σ〉
- σ is also mutated into σ' (see later how)
- Self-adaptation
4.3 The simplest case
In the simplest case we would have one step size that applies to all the components x_i, and candidate solutions of the form 〈x_1, …, x_n, σ〉.
Mutations are then realised by replacing 〈x_1, …, x_n, σ〉 by 〈x_1', …, x_n', σ'〉, where σ' is the mutated value of σ and x_i' = x_i + N(0, σ').
4.4 Mutate the value of σ
The mutation step sizes are not set by the user; rather, σ coevolves with the solutions.
In order to achieve this behaviour:
- modify the value of σ first
- mutate the x_i values with the new σ value.
The rationale behind this is that a new individual 〈x', σ'〉 is effectively evaluated twice:
- First, it is evaluated directly for its viability during survivor selection, based on f(x').
- Second, it is evaluated for its ability to create good offspring.
This happens indirectly: a given step size σ evaluates favourably if the offspring generated by using it prove viable (in the first sense).
To sum up, an individual 〈x', σ'〉 represents both a good x' that survived selection and a good σ' that proved successful in generating this good x' from x.
4.5 Uncorrelated Mutation with One Step Size (σ)
In the case of uncorrelated mutation with one step size, the same distribution is used to mutate each x_i; therefore we only have one strategy parameter σ in each individual.
This σ is mutated each time step by multiplying it by a term e^Γ, with Γ a random variable drawn each time from a normal distribution with mean 0 and standard deviation τ.
Since N(0, τ) = τ·N(0, 1), the mutation mechanism is specified by the following formulas:
- σ' = σ · e^(τ·N(0,1))
- x_i' = x_i + σ' · N_i(0,1)
Furthermore, since standard deviations very close to zero are unwanted (they would have, on average, a negligible effect), the following boundary rule is used to force step sizes to be no smaller than a threshold ε_0:
- σ' < ε_0 ⇒ σ' = ε_0
Tips:
- N(0,1) denotes a draw from the standard normal distribution
- N_i(0,1) denotes a separate draw from the standard normal distribution for each variable i.
The proportionality constant τ is an external parameter to be set by the user.
It is usually inversely proportional to the square root of the problem size:
- τ ∝ 1/√n
The parameter τ can be interpreted as a kind of learning rate, as in neural networks.
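Combining the formulas of this subsection, a short Python sketch of uncorrelated mutation with one self-adapted step size could look as follows; the concrete values of ε_0 and the starting σ are illustrative assumptions.
------------------------------------------------------------
import numpy as np

def mutate_one_sigma(x, sigma, eps0=1e-6, rng=None):
    # Uncorrelated mutation with one step size:
    # mutate sigma first (lognormal update), then the object variables with the new sigma.
    rng = np.random.default_rng() if rng is None else rng
    n = len(x)
    tau = 1.0 / np.sqrt(n)                           # learning rate, tau proportional to 1/sqrt(n)
    sigma_new = sigma * np.exp(tau * rng.normal())   # sigma' = sigma * e^(tau * N(0,1))
    sigma_new = max(sigma_new, eps0)                 # boundary rule: sigma' < eps0  =>  sigma' = eps0
    x_new = x + sigma_new * rng.normal(size=n)       # x_i' = x_i + sigma' * N_i(0,1)
    return x_new, sigma_new

# Example: mutate a 5-dimensional individual 〈x, sigma〉.
x, sigma = np.zeros(5), 0.5
x_new, sigma_new = mutate_one_sigma(x, sigma)
------------------------------------------------------------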