In mathematics, the spiral optimization (SPO) algorithm is a metaheuristic inspired by spiral phenomena in nature.
The first SPO algorithm was proposed for two-dimensional unconstrained optimization[1] based on two-dimensional spiral models. It was later extended to n-dimensional problems by generalizing the two-dimensional spiral model to an n-dimensional spiral model.[2] Two effective settings have been proposed for the SPO algorithm: the periodic descent direction setting[3] and the convergence setting.[4]
The motivation for focusing on spiral phenomena was the insight that the dynamics generating logarithmic spirals combine diversification and intensification behavior: the diversification behavior supports a global search (exploration), while the intensification behavior enables an intensive search around the current best solution (exploitation).
The SPO algorithm is a multipoint search algorithm that requires no gradient of the objective function and uses multiple spiral models that can be described as deterministic dynamical systems. As the search points follow logarithmic spiral trajectories toward the common center, defined as the current best point, better solutions can be found and the common center can be updated.
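As an illustration of this spiral dynamics, the following minimal sketch traces a single point along a logarithmic spiral trajectory generated by the update $x(k+1) = x^{\star} + r\,R(\theta)(x(k) - x^{\star})$ in two dimensions; the values of the contraction rate r and the rotation angle θ, and the choice of a fixed center at the origin, are illustrative assumptions only.

```python
# A minimal sketch of the two-dimensional spiral model underlying SPO.
# r and theta are illustrative values, not recommended settings.
import numpy as np

r, theta = 0.95, np.pi / 4                      # contraction rate and rotation angle
R = np.array([[np.cos(theta), -np.sin(theta)],  # 2-D rotation matrix R(theta)
              [np.sin(theta),  np.cos(theta)]])

center = np.zeros(2)        # common center x* (held fixed here for illustration)
x = np.array([2.0, 1.0])    # a single search point

for _ in range(10):
    # spiral update: rotate about the center and contract toward it
    x = center + r * R @ (x - center)
    print(x)
```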
The general SPO algorithm for a minimization problem under the maximum iteration $k_{\max}$ (termination criterion) is as follows:
0) Set the number of search points $m \geq 2$ and the maximum iteration number $k_{\max}$.
1) Place the initial search points $x_i(0) \in \mathbb{R}^n~(i=1,\ldots,m)$, determine the center $x^{\star}(0) = x_{i_{\text{b}}}(0)$ with $i_{\text{b}} = \operatorname*{argmin}_{i=1,\ldots,m}\{f(x_i(0))\}$, and set $k = 0$.
2) Decide the step rate $r(k)$ by a rule.
3) Update the search points:
$$x_i(k+1) = x^{\star}(k) + r(k)\,R(\theta)\,\bigl(x_i(k) - x^{\star}(k)\bigr) \quad (i=1,\ldots,m).$$
4) Update the center:
$$x^{\star}(k+1) = \begin{cases} x_{i_{\text{b}}}(k+1) & \text{if } f(x_{i_{\text{b}}}(k+1)) < f(x^{\star}(k)),\\ x^{\star}(k) & \text{otherwise}, \end{cases}$$
where $i_{\text{b}} = \operatorname*{argmin}_{i=1,\ldots,m}\{f(x_i(k+1))\}$.
5) Set $k := k+1$. If $k = k_{\max}$ is satisfied, terminate and output $x^{\star}(k)$; otherwise, return to Step 2).
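The following is a minimal Python sketch of the loop in Steps 0)–5). The composite rotation matrix (built here from plane rotations in consecutive coordinate planes), the constant step rate r, the initialization range, and all function names are illustrative assumptions, not the specific choices of the cited settings.

```python
# A minimal sketch of the general SPO minimization loop (illustrative choices only).
import numpy as np

def composite_rotation(n, theta):
    """Build a simple composite rotation matrix R(theta) in R^n
    as a product of rotations in consecutive coordinate planes."""
    R = np.eye(n)
    for i in range(n - 1):
        G = np.eye(n)
        G[i, i] = G[i + 1, i + 1] = np.cos(theta)
        G[i, i + 1] = -np.sin(theta)
        G[i + 1, i] = np.sin(theta)
        R = G @ R
    return R

def spo_minimize(f, n, m=20, k_max=200, r=0.95, theta=np.pi / 4, seed=0):
    rng = np.random.default_rng(seed)
    R = composite_rotation(n, theta)
    # Steps 0)-1): random initial search points and initial center
    x = rng.uniform(-5.0, 5.0, size=(m, n))
    values = np.array([f(xi) for xi in x])
    center = x[np.argmin(values)].copy()
    best = values.min()
    for _ in range(k_max):
        # Step 3): spiral update of every search point around the center
        # (Step 2) is simplified here to a constant step rate r)
        x = center + r * (x - center) @ R.T
        values = np.array([f(xi) for xi in x])
        # Step 4): update the center only if a better point was found
        i_b = np.argmin(values)
        if values[i_b] < best:
            best, center = values[i_b], x[i_b].copy()
    return center, best

# Example: minimize the sphere function in 5 dimensions
x_star, f_star = spo_minimize(lambda x: np.sum(x**2), n=5)
print(x_star, f_star)
```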
The search performance depends on the composite rotation matrix $R(\theta)$, the step rate $r(k)$, and the initial points $x_i(0)~(i=1,\ldots,m)$. The following settings have been shown to be effective.
The first setting, the periodic descent direction setting, is effective for high-dimensional problems under a finite maximum iteration number $k_{\max}$. The conditions on $R(\theta)$ and the initial points $x_i(0)~(i=1,\ldots,m)$ together ensure that the spiral models generate descent directions periodically, while the condition on $r(k)$ exploits these periodic descent directions within the iteration budget $k_{\max}$.
In particular, $R(\theta)$ and the initial points must satisfy
$$\min_{i=1,\ldots,m}\Bigl\{\max_{j=1,\ldots,m}\bigl\{\operatorname{rank}\bigl[\,d_{j,i}(0)~~R(\theta)d_{j,i}(0)~~\cdots~~R(\theta)^{2n-1}d_{j,i}(0)\,\bigr]\bigr\}\Bigr\} = n,$$
where $d_{j,i}(0) = x_j(0) - x_i(0)$. Note that this condition is satisfied by almost every random placement of the initial points, so in practice no explicit check is needed.
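For illustration, a numerical check of this rank condition could look like the following sketch (the function name and the small usage example are hypothetical); as noted above, such a check is unnecessary in practice when the initial points are placed randomly.

```python
# A sketch of how the rank condition could be verified numerically for a given
# rotation matrix R and initial points x of shape (m, n).
import numpy as np

def satisfies_rank_condition(x, R):
    m, n = x.shape
    ranks = np.zeros((m, m), dtype=int)
    for i in range(m):
        for j in range(m):
            d = x[j] - x[i]                      # d_{j,i}(0) = x_j(0) - x_i(0)
            # columns d, R d, R^2 d, ..., R^(2n-1) d
            cols, v = [], d.copy()
            for _ in range(2 * n):
                cols.append(v)
                v = R @ v
            ranks[j, i] = np.linalg.matrix_rank(np.column_stack(cols))
    # min over i of the max over j must equal n
    return ranks.max(axis=0).min() == n

# Example: random points in R^2 with a 45-degree rotation
rng = np.random.default_rng(1)
pts = rng.standard_normal((4, 2))
th = np.pi / 4
R2 = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
print(satisfies_rank_condition(pts, R2))
```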
The second setting, the convergence setting, ensures that the SPO algorithm converges to a stationary point under the maximum iteration $k_{\max} = \infty$. The settings of $R(\theta)$ and the initial points $x_i(0)~(i=1,\ldots,m)$ are the same as in the periodic descent direction setting above, while $r(k)$ is chosen differently so that convergence is guaranteed.
Owing to its simple structure and concept, many extended studies of the SPO algorithm have been conducted; these have improved its global search performance and led to novel applications.[6][7][8][9][10][11]