Talk:Presentation

From Adaptive Population based Simplex
Latest revision as of 20:47, 18 June 2013

Combinatorial APS?

For some problems that are at least partly discrete, the basic APS applies a quantisation operator. If the quantum is $q_{d}$ for the dimension $d$, the operator is defined by

$$ x_{i,d}\leftarrow q_{d}\left\lfloor 0.5+x_{i,d}/q_{d}\right\rfloor \tag{1} $$

where $\left\lfloor \cdot \right\rfloor$ is the floor operator.
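As a concrete illustration, Eq. (1) simply snaps a coordinate to the nearest multiple of its quantum. A minimal sketch in Python (the function name `quantise` is ours, not from any APS code):

```python
import math

def quantise(x, q):
    """Snap x to the nearest multiple of the quantum q, as in Eq. (1):
    x <- q * floor(0.5 + x / q)."""
    return q * math.floor(0.5 + x / q)
```

For example, with a quantum $q = 0.5$, a coordinate $2.3$ becomes $2.5$.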

Now, for combinatorial problems, it is certainly better to define specific operators for Expansion, Contraction, and Local search, similarly to what has been done, for example, for PSO (http://clerc.maurice.free.fr/pso/pso_tsp/Discrete_PSO_TSP.htm).

Let us consider the canonical case in which the search space is the set of permutations of $D$ different elements. We know how to "subtract" two permutations, how to "add" two permutations, and how to "multiply" a permutation by a real coefficient. At first glance, this is all we need to define a combinatorial APS.
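To make these three operations concrete, here is one possible Python sketch, following the usual discrete-PSO convention that a "velocity" is a list of transpositions; the helper names (`subtract`, `add`, `multiply`) are ours, and the greedy construction below is only one way to build a minimal transposition list:

```python
def subtract(y, x):
    """'Velocity' y - x: a minimal list of transpositions that,
    applied to x, yields y (greedy position-fixing construction)."""
    x = list(x)
    swaps = []
    pos = {v: i for i, v in enumerate(x)}  # value -> index in x
    for i, target in enumerate(y):
        if x[i] != target:
            j = pos[target]
            # swap positions i and j in x, keeping the index map in sync
            pos[x[i]], pos[x[j]] = j, i
            x[i], x[j] = x[j], x[i]
            swaps.append((i, j))
    return swaps

def add(x, swaps):
    """'Position + velocity': apply a list of transpositions to x."""
    x = list(x)
    for i, j in swaps:
        x[i], x[j] = x[j], x[i]
    return x

def multiply(c, swaps):
    """'Coefficient * velocity': keep a fraction c of the transpositions."""
    k = int(round(c * len(swaps)))
    return swaps[:max(0, min(k, len(swaps)))]
```

Note that the length of `subtract(y, x)` is then exactly the minimum number of transpositions needed to turn $X$ into $Y$, which can serve as a quasi-distance between permutations.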

Local search

This is the easiest operator: we just have to apply a "small" number of transpositions.
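A minimal sketch of this operator, assuming a uniformly random choice of the positions to swap (the article does not specify how the transpositions are drawn):

```python
import random

def local_search(x, n_swaps=2, rng=random):
    """Perturb permutation x by a 'small' number of random transpositions."""
    x = list(x)
    for _ in range(n_swaps):
        i, j = rng.sample(range(len(x)), 2)  # two distinct positions
        x[i], x[j] = x[j], x[i]
    return x
```

Since only transpositions are applied, the result is always a valid permutation.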

Expansion (reflection)

We have three "positions" (i.e. permutations), $X_{best}$, $X_{worst}$, and $X_{worst2}$ (the second worst), which we have to combine. In standard APS this combination is $$ Y=X_{worst2}+(X_{worst}-X_{best})\tag{2} $$

It is formally equivalent to the PSO operator "new_position = position + velocity", since the "velocity" is in any case the difference of two positions.
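With the permutation "subtract" and "add" defined as transposition lists, Eq. (2) can be sketched directly (a self-contained sketch; the compact `subtract`/`add` helpers below are one possible implementation, not the article's):

```python
def subtract(y, x):
    # Transpositions turning x into y (the 'velocity' y - x).
    x, swaps = list(x), []
    for i in range(len(x)):
        if x[i] != y[i]:
            j = x.index(y[i], i + 1)
            x[i], x[j] = x[j], x[i]
            swaps.append((i, j))
    return swaps

def add(x, swaps):
    # Apply transpositions to x (the 'position + velocity').
    x = list(x)
    for i, j in swaps:
        x[i], x[j] = x[j], x[i]
    return x

def expansion(x_best, x_worst, x_worst2):
    """Combinatorial analogue of Eq. (2): Y = X_worst2 + (X_worst - X_best)."""
    return add(x_worst2, subtract(x_worst, x_best))
```

Because the "velocity" is applied as transpositions, $Y$ is automatically a valid permutation.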

Contraction

$$ Y=\frac{1}{3}\left(X_{best}+X_{worst2}+X_{worst}\right)\tag{3} $$
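The "average of three permutations" in Eq. (3) is not uniquely defined on permutations. One possible reading, which we propose here only as an assumption (a Borda-style rank aggregation, not the article's definition): average, for each element, its position in the three permutations, then rebuild a permutation by sorting elements on that mean position.

```python
def contraction(x_best, x_worst2, x_worst):
    """One possible combinatorial reading of Eq. (3): rank the elements
    by their mean position across the three permutations (ties broken
    by element value), which always yields a valid permutation."""
    perms = (x_best, x_worst2, x_worst)
    n = len(x_best)
    mean_pos = {v: sum(p.index(v) for p in perms) / 3 for v in range(n)}
    return sorted(range(n), key=lambda v: (mean_pos[v], v))
```

When the three permutations coincide, this reading returns that common permutation, as one would expect of a barycentre.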


However, in APS, the formulae for Expansion/Reflection and Contraction are in fact applied only to some dimensions, not to all, according to an adaptive probability threshold. In order to "simulate" the same method, we should apply the combinatorial equivalent formula only on a subset of the dimensions. The difficulty is that we are then not sure that the global result is still a permutation (we may generate the same element twice). How to cope with this?
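One common repair strategy from the combinatorial-optimisation literature (our suggestion, not the article's answer) is to keep the first occurrence of each element and refill the conflicting positions with the missing elements:

```python
import random

def repair(candidate, rng=random):
    """Turn a length-n sequence over {0..n-1} that may contain duplicates
    back into a permutation: keep the first occurrence of each element,
    then refill the remaining slots with the missing elements."""
    n = len(candidate)
    out = list(candidate)
    seen, slots = set(), []
    for i, v in enumerate(out):
        if v in seen:
            slots.append(i)   # duplicated value: this slot must be refilled
        else:
            seen.add(v)
    missing = [v for v in range(n) if v not in seen]
    rng.shuffle(missing)      # arbitrary assignment of the missing elements
    for i, v in zip(slots, missing):
        out[i] = v
    return out
```

The components that were not modified keep their values, and the result is guaranteed to be a permutation again.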

Which order for the phases?

In standard APS, after initialisation, we have three phases in the loop, namely

  1. Expansion/Reflection
  2. Contraction
  3. Local search

However, there is for the moment no theoretical reason for this order, so it may be modified. For example, let $p_{Exp}(t)$, $p_{Con}(t)$, and $p_{Ls}(t)$ be the probabilities of improvement when using each phase at time $t$. Of course, we do not know them. However, they may be adaptively estimated by analysing the history, or, at least, they may be sorted. For example, we often have $$ p_{Con}(t) > p_{Ls}(t) > p_{Exp}(t) $$
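A minimal sketch of such an adaptive estimate, assuming we simply count, for each phase, how often it improved the solution (the smoothing and data layout are our choices):

```python
def phase_order(history, phases=("Expansion", "Contraction", "LocalSearch")):
    """Sort phases by their empirical improvement rate, estimated from
    'history', a dict mapping a phase name to (improvements, trials).
    Add-one smoothing keeps unseen phases from being ranked by 0/0."""
    def rate(name):
        wins, trials = history.get(name, (0, 0))
        return (wins + 1) / (trials + 2)
    return sorted(phases, key=rate, reverse=True)
```

Re-sorting at each iteration then makes the loop order track whichever phase is currently paying off.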

In such a case, if we want an algorithm that quickly finds an acceptable solution (but with a higher risk of being trapped in a local optimum), the order should be

  1. Contraction
  2. Local search
  3. Expansion/Reflection

Another method could be simply to randomly permute the phases at each iteration.
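This variant is trivial to sketch (phase names as in the lists above; the driver shape is our assumption):

```python
import random

def random_phase_order(phases=("Expansion", "Contraction", "LocalSearch"),
                       rng=random):
    """Draw a fresh random order of the three phases for one iteration."""
    order = list(phases)
    rng.shuffle(order)
    return order
```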