Talk:Presentation

From Adaptive Population based Simplex
Revision as of 17:30, 14 June 2013

Combinatorial APS?

For some problems that are at least partly discrete, the basic APS applies a quantisation operator. If the quantum is $q_{d}$ for the dimension $d$, the operator is defined by

$$ x_{i,d}\leftarrow q_{d}\left\lfloor 0.5+x_{i,d}/q_{d}\right\rfloor \tag{1} $$

where $\left\lfloor \cdot \right\rfloor$ is the floor operator.
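As a quick illustration of Eq. (1), here is a minimal Python sketch; the function name, the NumPy representation, and the example values are choices made for this illustration, not part of APS itself.

import numpy as np

def quantise(x, q):
    # Eq. (1): snap each coordinate x[d] to the nearest multiple of its quantum q[d],
    # i.e. x_d <- q_d * floor(0.5 + x_d / q_d)
    x = np.asarray(x, dtype=float)
    q = np.asarray(q, dtype=float)
    return q * np.floor(0.5 + x / q)

# Example with a quantum of 0.5 on every dimension:
print(quantise([1.26, -0.8, 3.49], [0.5, 0.5, 0.5]))  # [ 1.5 -1.   3.5]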

Now, for combinatorial problems, it is certainly better to define specific operators for Expansion, Contraction, and Local search, similarly to what has been done, for example, for PSO (see http://clerc.maurice.free.fr/pso/pso_tsp/Discrete_PSO_TSP.htm). It means that we have to define a distance. Let us consider the canonical case in which the search space is the set of permutations of $D$ different elements. As we know, to transform a given permutation $X$ into another one $Y$, there exists a minimum number of transpositions, so we define the quasi-distance between $X$ and $Y$ as this minimum number. Here are a few ideas of how the specific operators could be defined. In what follows, we keep the same adaptive probability threshold $p$ as in standard APS, which means that only the three operators described below have to be modified.
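This quasi-distance is easy to compute: the minimum number of transpositions needed to turn $X$ into $Y$ is $D$ minus the number of cycles of the permutation that maps $X$ onto $Y$. A minimal Python sketch (the function name is illustrative):

def quasi_distance(x, y):
    # Minimum number of transpositions turning permutation x into permutation y,
    # computed as D minus the number of cycles of the mapping x -> y.
    d = len(x)
    pos_in_y = {v: i for i, v in enumerate(y)}
    sigma = [pos_in_y[v] for v in x]   # where each element of x sits in y
    seen = [False] * d
    cycles = 0
    for i in range(d):
        if not seen[i]:
            cycles += 1
            j = i
            while not seen[j]:
                seen[j] = True
                j = sigma[j]
    return d - cycles

# Example: two transpositions are needed to go from (0,1,2,3) to (1,0,3,2)
print(quasi_distance([0, 1, 2, 3], [1, 0, 3, 2]))  # 2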

Expansion (reflection)

We have three "positions" (i.e. permutations), $X_{best}$, $X_{worst}$, and $X_{worst2}$ (the second worst), and we have to combine them. In standard APS this combination is $$ Y=X_{worst2}+\left(X_{worst}-X_{best}\right)\tag{2} $$

It is formally equivalent to the PSO operator "new_position = position + velocity", since the "velocity" is in any case the difference of two positions. We already know how to compute the difference of two permutations (a list of transpositions) and how to add such a difference to a permutation, so we could apply these two operators here. However, in APS the above formula is in fact applied only to some dimensions, not all of them, according to the probability threshold $p$. In order to "simulate" the same method, we have to apply the combinatorial equivalent of the formula only on a subset of the dimensions.
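A minimal sketch of such an Expansion, under two assumptions that the text above leaves open: the difference $X_{worst}-X_{best}$ is represented as the list of transpositions produced by a greedy decomposition, and the threshold $p$ is simulated by keeping only the transpositions that touch a randomly selected subset of dimensions. Other choices are clearly possible.

import random

def difference(x, y):
    # One possible "y - x": a list of transpositions of positions (i, j)
    # that turns permutation x into permutation y (greedy decomposition).
    x = list(x)
    pos = {v: i for i, v in enumerate(x)}
    transpositions = []
    for i, target in enumerate(y):
        if x[i] != target:
            j = pos[target]
            transpositions.append((i, j))
            pos[x[i]], pos[x[j]] = j, i
            x[i], x[j] = x[j], x[i]
    return transpositions

def add(x, velocity):
    # "position + velocity": apply the transpositions to a copy of permutation x.
    y = list(x)
    for i, j in velocity:
        y[i], y[j] = y[j], y[i]
    return y

def expansion(x_best, x_worst, x_worst2, p):
    # Combinatorial analogue of Eq. (2): Y = X_worst2 + (X_worst - X_best),
    # with the threshold p applied through a random subset of dimensions.
    velocity = difference(x_best, x_worst)              # X_worst - X_best
    selected = {d for d in range(len(x_best)) if random.random() < p}
    velocity = [(i, j) for (i, j) in velocity if i in selected or j in selected]
    return add(x_worst2, velocity)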

Contraction

We "add" the three permutations $X_{best}$, $X_{worst}, and $X_{worst2}$, and we "multiply" the result by 1/3.

Local search

The easiest one: we just have to apply a "small" number of transpositions.
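A minimal sketch, assuming that "small" simply means a handful of random transpositions (the default of two swaps is an arbitrary choice of this example):

import random

def local_search(x, n_swaps=2):
    # Perturb permutation x by a "small" number of random transpositions.
    y = list(x)
    for _ in range(n_swaps):
        i, j = random.sample(range(len(y)), 2)
        y[i], y[j] = y[j], y[i]
    return y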