Talk:Presentation
Combinatorial APS?
For some problems that are at least partly discrete, the basic APS applies a quantisation operator. If the quantum is $q_{d}$ for
the dimension $d$, the operator is defined by
$$
x_{i,d}\leftarrow q_{d}\left\lfloor 0.5+x_{i,d}/q_{d}\right\rfloor\tag{1}
$$
where $\left\lfloor \cdot \right\rfloor$ is the floor operator.
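As an illustration, a minimal Python sketch of operator (1) could look like this; the function name quantise and the use of one quantum per dimension are only illustrative choices, not part of APS itself.

    import math

    def quantise(x, q):
        """Round each coordinate x[d] to the nearest multiple of its quantum q[d],
        following equation (1): x_d <- q_d * floor(0.5 + x_d / q_d)."""
        return [q[d] * math.floor(0.5 + x[d] / q[d]) for d in range(len(x))]

    # Example, with a quantum of 0.5 on every dimension:
    # quantise([1.23, -0.74, 2.0], [0.5, 0.5, 0.5]) -> [1.0, -0.5, 2.0]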
Now, for combinatorial problems, it is certainly better to define specific operators for Expansion, Contraction, and Local search, similarly to what has been done for PSO, for example (http://clerc.maurice.free.fr/pso/pso_tsp/Discrete_PSO_TSP.htm).
Let us consider the canonical case in which the search space is the set of permutations of $D$ different elements. We know how to "subtract" two permutations, how to "add" two permutations, and how to "multiply" a permutation by a real coefficient. At first glance, that is all we need to define a combinatorial APS.
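To make this concrete, here is one possible Python sketch of such an algebra, modelled on the discrete PSO idea linked above: the "difference" of two permutations is a list of transpositions that transforms one into the other, "adding" means applying those transpositions, and "multiplying" by a coefficient in [0, 1] means keeping only that fraction of them. These particular definitions are an assumption for illustration, not official APS operators.

    def perm_subtract(a, b):
        """Return a list of transpositions (i, j) such that applying them to b yields a
        ("a - b", i.e. the 'velocity' leading from b to a)."""
        b = list(b)
        swaps = []
        for i in range(len(b)):
            if b[i] != a[i]:
                j = b.index(a[i], i + 1)   # a[i] is somewhere further on in b
                b[i], b[j] = b[j], b[i]
                swaps.append((i, j))
        return swaps

    def perm_add(p, swaps):
        """Apply a list of transpositions to the permutation p ("position + velocity")."""
        p = list(p)
        for i, j in swaps:
            p[i], p[j] = p[j], p[i]
        return p

    def perm_scale(c, swaps):
        """Multiply by a real coefficient c in [0, 1]: keep only that fraction of the transpositions."""
        return swaps[:int(round(c * len(swaps)))]

    # Example:
    # A = [2, 0, 1, 3]; B = [0, 1, 2, 3]
    # v = perm_subtract(A, B)   # transpositions turning B into A
    # perm_add(B, v) == A       # True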
Local search
The easiest one: we just have to apply a "small" number of transpositions.
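A minimal sketch, assuming the "small" number of transpositions is simply a parameter k (whether the new permutation is then kept would of course depend on its objective value):

    import random

    def local_search(p, k=2):
        """Apply k random transpositions to the permutation p (a small random move)."""
        p = list(p)
        for _ in range(k):
            i, j = random.sample(range(len(p)), 2)
            p[i], p[j] = p[j], p[i]
        return p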
Expansion (reflection)
We have three "positions" (i.e. permutations), $X_{best}$, $X_{worst}$, and $X_{worst2}$ (the second worst), which we have to combine. In standard APS this combination is
$$
Y=X_{worst2}+(X_{worst}-X_{best})\tag{2}
$$
It is formally equivalent to the PSO operator "new_position = position + velocity", since the "velocity" is in any case the difference of two positions.
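Using the perm_subtract and perm_add helpers sketched above, one possible combinatorial reading of equation (2) is the following; again, just one possible interpretation, not a definitive implementation.

    def expansion(x_best, x_worst2, x_worst):
        """Combinatorial reading of Y = X_worst2 + (X_worst - X_best):
        the 'velocity' is the list of transpositions turning X_best into X_worst,
        and it is applied to X_worst2 (the result is always a permutation)."""
        velocity = perm_subtract(x_worst, x_best)   # X_worst - X_best
        return perm_add(x_worst2, velocity)         # X_worst2 + velocity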
Contraction
$$
Y=\frac{1}{3}\left(X_{best}+X_{worst2}+X_{worst}\right)\tag{3}
$$
However, in APS the formulae for Expansion/Reflection and Contraction are in fact applied only to some dimensions, not to all, according to an adaptive probability threshold. To "simulate" the same method, we should apply the combinatorial equivalent only on a subset of the dimensions. The difficulty is that we are then not sure that the result is still a permutation (we may generate the same element twice). How can we cope with that?
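For the continuous case, this dimension-wise mechanism could be sketched as follows; the parameter p stands for the adaptive threshold (its adaptation rule is not shown), and the dimensions that are not selected keep the coordinates of $X_{worst2}$, which is only an assumption here. Applied to a permutation, the same scheme may indeed produce a vector with a repeated element, which is exactly the repair problem raised above.

    import random

    def contraction_partial(x_best, x_worst2, x_worst, p=0.5):
        """Apply the contraction formula (3) only on dimensions selected with probability p."""
        y = list(x_worst2)
        for d in range(len(y)):
            if random.random() < p:
                y[d] = (x_best[d] + x_worst2[d] + x_worst[d]) / 3.0
        return y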
Which order for the phases?
In standard APS, after initialisation, we have three phases in the loop, namely
- Expansion/Reflection
- Contraction
- Local search
However, there is for the moment no theoretical reason for this order, so it may be modified. For example, let $p_{Exp}(t)$, $p_{Con}(t)$, and $p_{Ls}(t)$ be the probabilities of improvement when using each phase at time $t$. Of course, we do not know them. However, they may be adaptively estimated by analysing the history, or, at least, they may be sorted. For example, we often have $$ p_{Con}(t) > p_{Ls}(t) > p_{Exp}(t) $$
In such a case, if we want an algorithm that quickly finds an acceptable solution (but with a higher risk of being trapped in a local optimum), the order should be
- Contraction
- Local search
- Expansion/Reflection
Another method could be simply to randomly permute the phases at each iteration.
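Both ideas, sorting the phases by estimated improvement probability or shuffling them, can be sketched together as follows; the success/trial counts and the function name order_phases are only illustrative.

    import random

    def order_phases(history, shuffle=False):
        """history maps a phase name to (successes, trials) counted over recent iterations.
        Return the phases sorted by estimated improvement probability (best first),
        or in random order if shuffle is True."""
        phases = list(history)
        if shuffle:
            random.shuffle(phases)
            return phases
        def p_hat(phase):
            success, trials = history[phase]
            return success / trials if trials else 0.0
        return sorted(phases, key=p_hat, reverse=True)

    # Example, matching the inequality given above (Contraction comes first):
    # order_phases({"Expansion": (2, 20), "Contraction": (9, 20), "Local search": (5, 20)})
    # -> ['Contraction', 'Local search', 'Expansion']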