Talk:Presentation

From Adaptive Population based Simplex
Latest revision as of 20:47, 18 June 2013

Combinatorial APS?

For some problems that are at least partly discrete, the basic APS applies a quantisation operator. If the quantum is $q_{d}$ for the dimension $d$, the operator is defined by

$$ x_{i,d}\leftarrow q_{d}\left\lfloor 0.5+x_{i,d}/q_{d}\right\rfloor\tag{1} $$

where $\left\lfloor \cdot\right\rfloor$ is the floor operator.
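As a quick illustration, here is a minimal Python sketch of operator (1); the function name `quantise` is ours, not taken from any APS code.

```python
import math

def quantise(x, q):
    """Snap x to the nearest multiple of the quantum q, as in operator (1):
    x <- q * floor(0.5 + x / q)."""
    return q * math.floor(0.5 + x / q)
```

For example, `quantise(3.7, 1.0)` gives `4`, and `quantise(3.2, 0.5)` gives `3.0`.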

Now, for combinatorial problems, it is certainly better to define specific operators for Expansion, Contraction, and Local search, similarly to what has been done, for example, for PSO (http://clerc.maurice.free.fr/pso/pso_tsp/Discrete_PSO_TSP.htm).

Let us consider the canonical case in which the search space is the set of permutations of $D$ different elements. We know how to "subtract" two permutations, how to "add" two permutations, and how to "multiply" a permutation by a real coefficient. At first glance, this is all we need to define a combinatorial APS.
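For concreteness, here is one possible Python reading of these three operations, borrowed from the discrete-PSO convention in which a "velocity" is a list of transpositions. All names and conventions below are our assumptions, not part of APS.

```python
def subtract(y, x):
    """y "minus" x: a minimal list of transpositions turning x into y."""
    x, v = list(x), []
    for i in range(len(x)):
        if x[i] != y[i]:
            j = x.index(y[i], i + 1)   # locate the element that belongs at i
            x[i], x[j] = x[j], x[i]
            v.append((i, j))
    return v

def add(x, v):
    """x "plus" a transposition list v: apply the transpositions in order."""
    x = list(x)
    for i, j in v:
        x[i], x[j] = x[j], x[i]
    return x

def multiply(c, v):
    """Scale a transposition list by a coefficient c in [0, 1]:
    keep only the first floor(c * len(v)) transpositions."""
    return v[: int(c * len(v))]
```

By construction, `add(x, subtract(y, x))` returns `y`, and the length of `subtract(y, x)` is the quasi-distance between the two permutations.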

Local search

This is the easiest operator: we just apply a "small" number of random transpositions to the current permutation.
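A minimal sketch, where `k` is a hypothetical "small" step size (APS does not fix a value):

```python
import random

def local_search(x, k=2, rng=random):
    """Perturb permutation x by k random transpositions."""
    x = list(x)
    for _ in range(k):
        i, j = rng.sample(range(len(x)), 2)
        x[i], x[j] = x[j], x[i]
    return x
```

The result is always a valid permutation, since transpositions never create duplicates.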

Expansion (reflection)

We have three "positions" (i.e. permutations), $X_{best}$, $X_{worst}$, and $X_{worst2}$ (the second worst), which we have to combine. In standard APS this combination is $$ Y=X_{worst2}+(X_{worst}-X_{best})\tag{2} $$

It is formally equivalent to the PSO operator "new_position = position + velocity", since the "velocity" is in any case the difference of two positions.
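Under the transposition-list convention sketched earlier (our assumption, restated here so the example is self-contained), formula (2) becomes:

```python
def subtract(y, x):
    """y "minus" x: a minimal transposition list turning x into y (assumed convention)."""
    x, v = list(x), []
    for i in range(len(x)):
        if x[i] != y[i]:
            j = x.index(y[i], i + 1)
            x[i], x[j] = x[j], x[i]
            v.append((i, j))
    return v

def add(x, v):
    """Apply the transposition list v to permutation x."""
    x = list(x)
    for i, j in v:
        x[i], x[j] = x[j], x[i]
    return x

def expansion(x_worst2, x_worst, x_best):
    """Formula (2): Y = X_worst2 + (X_worst - X_best)."""
    return add(x_worst2, subtract(x_worst, x_best))
```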

Contraction

We "add" the three permutations $X_{best}$, $X_{worst2}$, and $X_{worst}$, and we "multiply" the result by $1/3$: $$ Y=(1/3)(X_{best}+X_{worst2}+X_{worst})\tag{3} $$
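A sum of permutations scaled by $1/3$ has no unique combinatorial meaning. One possible reading (purely our assumption) is to start from $X_{best}$ and move a third of the way, in transposition steps, toward each of the other two points:

```python
def subtract(y, x):
    """Minimal transposition list turning x into y (assumed convention)."""
    x, v = list(x), []
    for i in range(len(x)):
        if x[i] != y[i]:
            j = x.index(y[i], i + 1)
            x[i], x[j] = x[j], x[i]
            v.append((i, j))
    return v

def add(x, v):
    """Apply the transposition list v to permutation x."""
    x = list(x)
    for i, j in v:
        x[i], x[j] = x[j], x[i]
    return x

def contraction(x_best, x_worst2, x_worst):
    """One discrete reading of (3): from X_best, apply a third of the
    transpositions toward X_worst2, then a third toward X_worst."""
    y = list(x_best)
    for x in (x_worst2, x_worst):
        v = subtract(x, y)
        y = add(y, v[: len(v) // 3])
    return y
```

Other readings are possible (e.g. a majority vote per dimension followed by a repair step); the point is only that the result stays a permutation.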


However, in APS the formulae for Expansion/Reflection and Contraction are in fact applied only to some dimensions, not to all, according to an adaptive probability threshold. In order to "simulate" the same method, we should apply the combinatorial equivalent of each formula only on a subset of the dimensions. The difficulty is that we are then no longer sure that the global result is still a permutation (we may generate the same element twice). How can we cope with this?
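One possible answer (an assumption on our part, not something APS specifies) is a repair step: keep the first occurrence of each element and refill the duplicated slots with the missing elements.

```python
def repair(candidate):
    """Turn a candidate vector over elements 0..n-1, possibly containing
    duplicates, back into a permutation: keep first occurrences, then fill
    the freed slots with the missing elements in increasing order."""
    n = len(candidate)
    seen, slots = set(), []
    for e in candidate:
        if e in seen:
            slots.append(None)        # duplicated slot, to be refilled
        else:
            seen.add(e)
            slots.append(e)
    missing = iter(e for e in range(n) if e not in seen)
    return [next(missing) if s is None else s for s in slots]
```

For instance, `repair([0, 2, 2, 1, 1])` gives `[0, 2, 3, 1, 4]`. The drawback is that the repair itself moves the point, so the "simulated" partial update is only approximate.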

Which order for the phases?

In standard APS, after initialisation, we have three phases in the loop, namely

  1. Expansion/Reflection
  2. Contraction
  3. Local search

However, there is for the moment no theoretical reason for this order, so it may be modified. For example, let $p_{Exp}(t)$, $p_{Con}(t)$, and $p_{Ls}(t)$ be the probabilities of improvement when using each phase at time $t$. Of course, we do not know them. However, they may be adaptively estimated by analysing the history, or, at least, they may be sorted. For example, we often have $$ p_{Con}(t) > p_{Ls}(t) > p_{Exp}(t) $$

In such a case, if we want an algorithm that quickly finds an acceptable solution (but with a higher risk of being trapped in a local optimum), the order should be

  1. Contraction
  2. Local search
  3. Expansion/Reflection

Another method could be simply to permute the phases randomly at each iteration.
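As a sketch of the adaptive-estimation idea (the class and counters below are hypothetical, not part of APS), one can track, for each phase, how often it improved the best solution and sort the phases by that observed rate:

```python
class PhaseScheduler:
    """Order the phases by their observed improvement rate."""

    def __init__(self, phases):
        self.tries = {p: 1 for p in phases}  # start at 1 to avoid division by zero
        self.wins = {p: 1 for p in phases}   # optimistic prior: every phase "works"

    def record(self, phase, improved):
        """Call after each application of a phase; improved is True/False."""
        self.tries[phase] += 1
        self.wins[phase] += bool(improved)

    def order(self):
        """Phases sorted by estimated probability of improvement, best first."""
        return sorted(self.tries, key=lambda p: self.wins[p] / self.tries[p],
                      reverse=True)
```

At each iteration the loop would run the phases in `scheduler.order()`; this estimates the $p_{\cdot}(t)$ ranking from history instead of fixing it in advance.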