ISSN: 2641-3086
Trends in Computer Science and Information Technology
Research Article       Open Access      Peer-Reviewed

A Study of Global Numerical Maximization using Hybrid Chemical Reaction Algorithms

Ransikarn Ngambusabongsopa1, Vincent Havyarimana2* and Zhiyong Li

1College of Computer Science and Electronic Engineering, Hunan University, 410082, Changsha, China
2Department of Applied Sciences, Ecole Normale Supérieure, 6983, Bujumbura, Burundi
*Corresponding author: Havyarimana Vincent, Department of Applied Sciences, Ecole Normale Supérieure, 6983, Bujumbura, Burundi, Tel: (+257) 79830258; Fax: (+257) 22258948; E-mail: havincent12@gmail.com ; havicent14@hnu.edu.cn
Received: 28 February, 2017 | Accepted: 06 March, 2017 | Published: 07 March, 2017
Keywords: Chemical reaction optimization; Maximization; Mutation operator; Polynomial mutation

Cite this as

Ngambusabongsopa R, Havyarimana V, Li Z (2017) A Study of Global Numerical Maximization using Hybrid Chemical Reaction Algorithms. Trends Comput Sci Inf Technol 2(1): 001-011. DOI: 10.17352/tcsit.000004

Several approaches have been proposed to solve global numerical optimization problems. Most researchers have tested the robustness of their algorithms by generating results for the minimization case. In this paper, we focus on maximization problems using several hybrid chemical reaction optimization algorithms that have shown success in minimization: orthogonal chemical reaction optimization (OCRO), the hybrid algorithm based on particle swarm and chemical reaction optimization (HP-CRO), real-coded chemical reaction optimization (RCCRO), and the hybrid mutation chemical reaction optimization algorithm (MCRO). The aim of this paper is to demonstrate that approaches inspired by chemical reaction optimization are not limited to minimization but are also suitable for maximization. Moreover, an experimental comparison with other maximization algorithms is presented and discussed.

Introduction

Optimization [1] is prevalent in various fields of science, engineering, economics, and other related areas. Over the past few decades, many optimization frameworks have been proposed to solve existing optimization problems. They rely on predefined constraints, which delimit the search space, to find the variables of the functions to be optimized. In general, an optimization problem involves minimizing or maximizing a function systematically by selecting input values from a given feasible set [2]. The most famous framework is the evolutionary algorithm (EA), a heuristic approach inspired by biological evolution and the social behavior of species. Several evolutionary algorithms have been applied to optimization. Simulated annealing (SA) is inspired by annealing in metallurgy, the physical process of increasing the crystal size of a material and reducing its defects through a controlled cooling procedure [3]. The genetic algorithm (GA) [4] is based on the Darwinian principle of the 'survival of the fittest' and the natural process of evolution through reproduction. The memetic algorithm (MA) [5] is inspired by Dawkins' notion of a meme. Particle swarm optimization (PSO) [6], developed by Eberhart and Kennedy, models the social behavior of bird flocking and fish schooling. Ant colony optimization (ACO) [7] mimics ants, which are able to discover the shortest route between their nest and a food source. The shuffled frog leaping algorithm (SFL) [8] was introduced to combine the benefits of genetic-based algorithms and the social-behavior-based PSO. Bat algorithms (BAs) are inspired by the echolocation behavior of bats [9]. Harmony search (HS) [10] is based on the natural musical performance process that occurs when a musician searches for an optimal state of harmony.
Finally, chemical reaction optimization (CRO), developed and proposed by Lam and Li [11,12], simulates the energy-driven behavior of molecules in chemical reactions.

Although abundant approaches have been suggested to solve optimization problems, researchers still measure the capability of their algorithms by comparing the optimal results with those of previous methods in the minimization case. They have corroborated that the published algorithms are suitable for solving minimization problems; nevertheless, this does not warrant that they are appropriate for absolutely all optimization problems. As a result, simulation based on maximization is considered in this paper. We solve maximization problems by experimenting with several optimization algorithms based on the CRO framework: an effective version of RCCRO (RCCRO4), HP-CRO2 (the best version of the hybrid algorithm combining PSO with CRO), OCRO (a hybrid of orthogonal crossover with CRO), and the recently established MCRO (a hybrid of the polynomial mutation operator with CRO).

The rest of this paper is organized as follows: a brief review of optimization algorithms based on CRO is introduced in Section 2. In Section 3, optimization problem functions and evaluation methods which are used for measuring the performance of algorithms are presented. The experimental results and discussions for different experiments are illustrated in Section 4. Finally, Section 5 highlights conclusion and our future work.

Optimization algorithms based on CRO

CRO [11,12] is a framework that mimics molecular interactions in chemical reactions to reach a low-energy stable state. Potential energy is the energy stored in a molecule with respect to its molecular configuration; the system becomes disordered when potential energy is converted to other forms [11]. The molecules, stored in a container, are the manipulated agents. Each molecule carries a profile that includes several attributes, such as the molecular structure (ω), the current potential energy (PE), and the current kinetic energy (KE). In a CRO reaction, the initial reactants in high-energy states undergo a sequence of collisions. Molecules collide either with other molecules or with the walls of the container, pass through energy barriers, and become the final products in low-energy stable states. Typically, CRO includes three phases: initialization, iteration, and the final phase. In each run, the algorithm begins by initializing the population of molecules in a container, after which a particular number of iterations is performed. When the stopping criteria are satisfied, the global optimal solution is output.
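For illustration, the three phases can be sketched as follows. This is our outline in Python, not the authors' implementation; `random_molecule` and `elementary_reaction` are hypothetical helpers standing in for the details described below.

```python
import random

def cro_outline(f, random_molecule, elementary_reaction,
                pop_size=10, max_fes=1000):
    """Outline of the three CRO phases, written for maximization.
    `random_molecule` creates one solution; `elementary_reaction`
    perturbs the population in place and returns how many function
    evaluations it spent.  Both are hypothetical helpers."""
    # initialization phase: fill the container with molecules
    population = [random_molecule() for _ in range(pop_size)]
    fes = pop_size
    best = max(population, key=f)
    # iteration phase: collisions until the FE budget is exhausted
    while fes < max_fes:
        fes += elementary_reaction(population)
        best = max(population + [best], key=f)  # track the global best
    # final phase: output the best solution found
    return best
```

The stopping criterion here is a fixed evaluation budget; the concrete collisions that `elementary_reaction` would perform are described in the following subsections.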

This section provides a brief review of RCCRO, regarded as the best variant of the original CRO, and of several hybrid optimization algorithms based on CRO whose performance has been demonstrated in the minimization case: HP-CRO, OCRO, and MCRO.

Real-coded chemical reaction optimization (RCCRO): The concept of CRO is captured from the phenomenon of driving high-energy molecules to stable states through various types of elementary reactions. In the CRO algorithm, four types of elementary reactions can take place: on-wall ineffective collision, intermolecular ineffective collision, decomposition, and synthesis. On-wall and intermolecular ineffective collisions are local searches, whereas decomposition and synthesis are global searches. The characteristics and descriptions of these four elementary reactions are as follows [12,13].

On-wall ineffective collision is the reaction created when a molecule hits the wall of the container and then bounces back. This reaction only slightly changes the molecular structure (ω) and is accepted when PEω + KEω ≥ PEώ holds. The new molecular structure (ώ) is produced by the neighborhood search operator as ώ = neighborhood(ω). The central energy buffer is updated by extracting and storing a certain portion of the molecule's KE. The profile of the molecule is updated as KEώ = (PEω − PEώ + KEω) × r, where r is a random number with r ∈ [KELossRate, 1]. The new PE, i.e., the new fitness, is computed from the new structure ώ as f(ώ).
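The update rule above can be written down directly. The sketch below is our illustration under the minimization convention (PE = f(ω)); the dict-based molecule representation and the `neighborhood` helper are assumptions, not the authors' code.

```python
import random

def on_wall_ineffective_collision(mol, f, neighborhood,
                                  ke_loss_rate=0.2, buffer=0.0):
    """Sketch of the on-wall ineffective collision (minimization
    convention, PE = f(w)).  `mol` is a dict with keys "w", "pe",
    "ke"; returns the updated central energy buffer."""
    w_new = neighborhood(mol["w"])           # w' = neighborhood(w)
    pe_new = f(w_new)                        # PE_w' = f(w')
    # energy conservation: feasible only if PE_w + KE_w >= PE_w'
    if mol["pe"] + mol["ke"] >= pe_new:
        surplus = mol["pe"] - pe_new + mol["ke"]
        r = random.uniform(ke_loss_rate, 1.0)    # r in [KELossRate, 1]
        mol["ke"] = surplus * r                  # KE_w' = surplus * r
        buffer += surplus * (1.0 - r)            # remainder to the buffer
        mol["w"], mol["pe"] = w_new, pe_new
    return buffer
```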

Intermolecular ineffective collision refers to two or more molecules that collide with one another and then separate. This reaction also slightly changes the molecular structures, similarly to the on-wall ineffective collision. The profiles of the molecules and the central energy buffer are updated when PEω1 + PEω2 + KEω1 + KEω2 ≥ PEώ1 + PEώ2. The new molecular structures are produced from their own neighborhoods by the neighborhood search operator. The number of molecules is unchanged after the collision.

Decomposition represents the situation in which a molecule hits the wall of the container and then splits into two or more molecules. This elementary reaction is applied to leave a local search and explore other regions (i.e., global search). The profiles of the molecules and the central energy buffer are updated when PEω + KEω ≥ PEώ1 + PEώ2 holds, or when the energy buffer can make up the difference. This reaction produces significantly different molecular structures for the resultant molecules.

Synthesis occurs when two or more molecules collide and combine into a single molecule. The profiles of the molecules and the central energy buffer are updated when PEω1 + PEω2 + KEω1 + KEω2 ≥ PEώ. This reaction strongly and significantly alters the resultant molecular structure. The number of molecules is reduced after the collision.
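The acceptance conditions of the two global-search reactions reduce to simple energy inequalities. A minimal sketch, assuming the minimization convention and that decomposition may draw on the central energy buffer:

```python
def decomposition_feasible(pe_w, ke_w, pe_w1, pe_w2, buffer=0.0):
    """Decomposition is allowed when the parent molecule's energy,
    topped up by the central buffer, covers both resultants' PE."""
    return pe_w + ke_w + buffer >= pe_w1 + pe_w2

def synthesis_feasible(pe_w1, ke_w1, pe_w2, ke_w2, pe_new):
    """Synthesis is allowed when the two colliding molecules' total
    energy covers the merged molecule's PE."""
    return pe_w1 + pe_w2 + ke_w1 + ke_w2 >= pe_new
```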

RCCRO is the most powerful variant of the original CRO, and RCCRO4 is its best version. The RCCRO4 procedure is presented as Algorithm 1.

Hybrid algorithm based on particle swarm and chemical reaction optimization (HP-CRO): HP-CRO [14] is an approach for optimizing functions based on the PSO and CRO algorithms. It combines the PSOupdate operator with local search operators, which makes the algorithm efficient. In addition, the structure of the algorithm can easily cover the whole search space thanks to the difference between its two boundary-handling constraints. HP-CRO contains two elementary reactions: on-wall ineffective collision and intermolecular ineffective collision. Several parameters are used in HP-CRO: inertia weight w = 0.729, local weight c1 = 1.49445, global weight c2 = 1.49445, and random numbers r1 and r2. There are two versions of HP-CRO, HP-CRO1 and HP-CRO2, of which HP-CRO2 is the best. The HP-CRO procedure is shown as Algorithm 2; for more details, see [14]. We note that we have changed the condition of the algorithm to the maximization operation.
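The PSOupdate operator with the parameter values quoted above follows the standard inertia-weight PSO rule. The one-dimensional sketch below is our illustration; the function name and scalar particle representation are assumptions, not taken from [14].

```python
import random

def pso_update(x, v, pbest, gbest,
               w=0.729, c1=1.49445, c2=1.49445):
    """Standard inertia-weight PSO update for one scalar dimension,
    using the parameter values quoted in the text.  pbest/gbest are
    the particle's own best and the global best positions."""
    r1, r2 = random.random(), random.random()   # r1, r2 ~ U(0, 1)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v_new, v_new
```

Note that a particle already sitting at both its personal best and the global best, with zero velocity, stays put under this rule.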

Orthogonal chemical reaction optimization (OCRO): OCRO [15] is an algorithm that hybridizes quantization orthogonal crossover (QOX) with CRO. It creates new molecules by two types of elementary reactions, on-wall ineffective collision and intermolecular ineffective collision, as in the original CRO. Moreover, it uses the QOX search operator to create new molecules: the two elementary reactions serve as local search, while QOX works as a global search operator. The synthesis and decomposition elementary reactions are not included in OCRO. The algorithm has three main phases, as in the original CRO: 1) initialization, 2) iteration, and 3) the final phase. In the iteration phase, the type of search is identified by comparing a random number t with MoleColl: QOX search takes place when t is smaller than MoleColl; otherwise, a local search, either an on-wall or an intermolecular ineffective collision, is performed. Any new local minimum found is checked and, if better, recorded as the new global minimum; a further test then determines whether the collision is intermolecular. The final global minimum is presented as the optimal result. The OCRO procedure is shown in Algorithm 3; see [15] for a fuller description.

A hybrid mutation chemical reaction optimization algorithm (MCRO): The hybrid optimization algorithm MCRO [16] combines a mutation operator with CRO; a turning operator is also included in the algorithm. The most appropriate mutation operator in the MCRO framework is the polynomial mutation operator. What distinguishes MCRO from the other two hybrid CRO algorithms (HP-CRO2 and OCRO) is that MCRO injects its new components into all four elementary reactions of CRO, whereas HP-CRO2 and OCRO add their new components to the core procedure as replacements for the two elementary reactions (synthesis and decomposition) that serve as global search in RCCRO. Hence, the main algorithm of MCRO is similar to RCCRO4 but differs in the sub-algorithms. The main procedures, namely on-wall ineffective collision, intermolecular ineffective collision, decomposition, synthesis, and the neighborhood search operator, are discussed as follows:

The turning operator is merged into the neighborhood search operator, which works in three types of elementary reactions of RCCRO: on-wall ineffective collision, intermolecular ineffective collision, and decomposition. The turning operator transforms the molecular structure drawn from the neighborhood of the operand, which greatly improves the optimal quality and reliability of the algorithm.

The mutation operator is applied to each new molecule in the initialization phase of the algorithm and is incorporated into the change of the molecular structure. The results before and after mutation are compared, and the better one is recorded as the new global optimum. The mutation operator can widen the search space by randomly sampling new points, and it gives every elementary reaction at least twice the opportunity of RCCRO to generate a better result. The related procedures are presented in Algorithms 4, 5, 6, and 7; further information can be found in [16].
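The polynomial mutation operator used by MCRO is the standard one from real-coded evolutionary computation. Below is a single-variable sketch; the distribution index eta = 20 is an assumed, commonly used value, not a parameter stated in the text.

```python
import random

def polynomial_mutation(x, lower, upper, eta=20.0):
    """Standard polynomial mutation of one real variable.  eta is
    the distribution index (assumed value; larger eta means
    smaller perturbations around x)."""
    u = random.random()
    if u < 0.5:
        delta = (2.0 * u) ** (1.0 / (eta + 1.0)) - 1.0
    else:
        delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta + 1.0))
    x_new = x + delta * (upper - lower)
    return min(upper, max(lower, x_new))   # clip to the feasible box
```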

The minimization performance of these four algorithms has been discussed in [16]. The ranking, from the most powerful algorithm to the weakest, is MCRO, OCRO, HP-CRO2, and RCCRO4, respectively.

Problem functions and evaluation methods

Maximization operation: As previously mentioned, discovering the best optimal solution of a problem is the main purpose of optimization. In general, the function to be optimized is called an objective function, a cost function (minimization), or a utility function (maximization). An optimal solution is the minimum/maximum of the objective function and is known as a global optimal solution. An optimization problem involves minimizing or maximizing a function systematically by selecting input values from a given feasible set. The problem function f(x) is a scalar, where the variable x represents a particular solution and is usually a vector of n components. An optimization problem can be subject to designated constraints C, defined as C = {c1, c2, . . . , cm}, which limit the feasible region. In the literature, the standard formulation of an optimization problem is largely stated in terms of minimization. Unless both the feasible region and the objective function of a minimization problem are convex, there may be more than one local minimum. A local minimum x∗ is defined as a point for which the following expression holds in some neighborhood of x∗ [13,17]:

f(x∗) ≤ f(x)          (1)

The goal of minimization is to find the minimum solution ś ∈ S such that f(ś) ≤ f(s), ∀ s ∈ S.

Mathematically, a minimization problem has the following form:

min f(x) subject to x ∈ Rⁿ, with Cᵢ(x) = 0 for i ∈ E and Cᵢ(x) ≤ 0 for i ∈ I          (2)

where R, E, and I denote the real number set, the index set for equality constraints, and the index set for inequality constraints, respectively.

With the same concept, the aim of the maximization operation is to find the maximum solution ś ∈ S such that f(ś) ≥ f(s), ∀ s ∈ S.

Mathematically, a maximization problem has the following form:

max f(x) subject to x ∈ Rⁿ, with Cᵢ(x) = 0 for i ∈ E and Cᵢ(x) ≤ 0 for i ∈ I          (3)
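In this paper the algorithms' comparison conditions are adapted to Eq. (3) directly; an alternative worth noting is that any minimizer can be reused unchanged by negating the objective, since max f(x) = −min(−f(x)). A one-line illustration:

```python
def as_minimization(f):
    """Wrap a maximization objective so that an unmodified
    minimizer can be reused: max f(x) = -min over x of (-f(x))."""
    return lambda x: -f(x)
```

For example, minimizing `as_minimization(f)` over a grid recovers the maximizer of f.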

Benchmark functions and parameters: The benchmark functions in this paper are the same as in the previous CRO publications [13-16]; all experiments solve the 23 objective problem functions. These benchmark functions are classified into three categories, as shown in Table 1: category I contains the high-dimensional unimodal functions, category II the high-dimensional multimodal functions, and category III the low-dimensional multimodal functions. More details are given in [13-16].

This research adopts the main parameters of the CRO framework [13], as shown in Table 2. Moreover, the individual parameters of each algorithm are taken from its original work; more details are given in [14-16].

Evaluation methods: Three evaluation methods are used in our experiments to compare the performance of the competing algorithms: optimal solution quality, convergence speed, and statistical hypothesis testing.

(1) Optimal solution quality evaluation: Each run of an optimization algorithm on an objective function outputs its most promising result, named the optimal solution. Since most previous publications focused on solving minimization, their authors considered this result the best global minimum. In contrast, in this paper we call the optimal result the best global maximum, because we are interested in solving maximization problems.

For the CRO-based algorithms in our simulation, each iteration yields a local maximum as the best result of the round. Accordingly, the current global maximum is replaced by the local maximum whenever the new local maximum is better. The best global maximum of each run is produced when the program meets the stopping condition of the algorithm. The optimal solution quality is evaluated by comparing the mean of the best global maximum (the optimal solution) of each function over all runs (25 in this paper). If the optimal solutions of the competitors are equal, the second comparison key is the standard deviation: the lower standard deviation wins.
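This comparison rule can be sketched as follows (our illustration; `runs_a` and `runs_b` hold the best global maxima of the 25 runs of two competing algorithms on one function):

```python
from statistics import mean, stdev

def better_quality(runs_a, runs_b):
    """Compare two algorithms on one function (maximization): the
    higher mean of the best-found values wins; on a tie, the lower
    standard deviation wins.  Returns 'A', 'B' or 'tie'."""
    ma, mb = mean(runs_a), mean(runs_b)
    if ma != mb:
        return "A" if ma > mb else "B"
    sa, sb = stdev(runs_a), stdev(runs_b)
    if sa != sb:
        return "A" if sa < sb else "B"
    return "tie"
```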

(2) Convergence speed evaluation: Besides optimal solution quality, convergence speed is an essential issue, indicating how quickly the competing algorithms reach the global optimal solution. In our experiment, the convergence speed of an algorithm is measured by counting the number of function evaluations (FEs) before the algorithm converges to the acceptable solution. Since no acceptable solution has been published for maximization, we determine it, for each objective function, by averaging the mean optimal solutions of the four compared algorithms. A strong algorithm has to generate a result at least as good as the acceptable solution, and an algorithm with fewer FEs is superior to one with more FEs. Furthermore, we draw a convergence curve for specific functions in a particular run. The ideal curve should begin at a low value and then grow to the highest value, the opposite of minimization.
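The FE count to the acceptable solution can be sketched as follows (our illustration; `history` is the trace of objective values of one run in evaluation order):

```python
def fes_to_acceptable(history, acceptable):
    """Return the 1-based number of function evaluations after which
    the best-so-far value (maximization) first reaches `acceptable`,
    or None if the run never gets there."""
    best = float("-inf")
    for fe, value in enumerate(history, start=1):
        best = max(best, value)          # best-so-far, maximization
        if best >= acceptable:
            return fe
    return None
```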

(3) Statistical hypothesis testing: In order to strengthen our experiments beyond the above two evaluation methods, we verify the aptitude of the competing algorithms with a statistical hypothesis test, the Friedman test [18], a well-known non-parametric test. The Friedman rank is obtained by transforming the quality results of the contestant algorithms into ranks for each objective function; average ranks are assigned when results are tied.
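The ranking step of the Friedman test can be sketched as follows (our illustration; the test statistic and p-value computation are omitted):

```python
def friedman_average_ranks(results):
    """`results[i][j]` is algorithm j's quality on function i (higher
    is better, maximization).  Ranks are assigned per function, with
    ties sharing the average rank; returns each algorithm's mean rank
    over all functions (lower mean rank = better algorithm)."""
    n_fun, n_alg = len(results), len(results[0])
    totals = [0.0] * n_alg
    for row in results:
        order = sorted(range(n_alg), key=lambda j: -row[j])
        rank = [0.0] * n_alg
        i = 0
        while i < n_alg:
            k = i
            # extend the tie group while values are equal
            while k + 1 < n_alg and row[order[k + 1]] == row[order[i]]:
                k += 1
            avg = (i + k) / 2.0 + 1.0    # average rank for the tie group
            for t in range(i, k + 1):
                rank[order[t]] = avg
            i = k + 1
        for j in range(n_alg):
            totals[j] += rank[j]
    return [t / n_fun for t in totals]
```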

Experimental results and discussion

The purpose of the conducted experiments is to evaluate the performance of the competitor algorithms RCCRO4, HP-CRO2, OCRO, and MCRO by running each objective function 25 times in the maximization setting. We note that we have changed the conditions of the algorithms to maximization. Our simulation is built using the C# programming language, and the experiments are executed on a computer with an Intel Core i7 CPU at 3.10 GHz and 4 GB of RAM. Following the evaluation methods described in Section 3.3, the simulation results are presented below; the best results are marked in bold.

(1) The results of the optimal solution quality evaluation are illustrated in Tables 3-5. Table 3 presents the optimal solution quality of MCRO, OCRO, HP-CRO2, and RCCRO4 for category I, which contains seven high-dimensional unimodal functions (f1-f7). MCRO performs best for f2, f3, f4, f5, f6, and f7; only for f1 is the best result generated by RCCRO4. The ranking of optimal solution quality for the category I functions, from best to worst, is: MCRO, HP-CRO2, RCCRO4, and OCRO, respectively.

Table 4 compares the optimal solution quality for the 6 functions in category II, the high-dimensional multimodal functions. MCRO performs best for f11, f12, and f13, while HP-CRO2 has the best results for f8, f9, and f10. Therefore, the ranking of optimal solution quality for the category II functions is led jointly by MCRO and HP-CRO2, followed by RCCRO4 and OCRO, respectively.

Table 5 compares the results for the category III, or low-dimensional multimodal, functions. MCRO is the most outstanding in this category because it generates the best results for all 10 functions, f14-f23. MCRO is followed by RCCRO4, while HP-CRO2 ranks third and OCRO fourth.

The overall ranking of optimal solution quality is represented in Table 6, showing that MCRO performs the best in the optimal solution quality evaluation. MCRO is followed successively by HP-CRO2, RCCRO4 and OCRO.

(2) The results of the convergence speed evaluation for the 4 compared algorithms are presented in Table 7. All algorithms achieve the best convergence on f14, reaching the acceptable solution at 1 FE; MCRO, OCRO, and HP-CRO2 lead on f18, reaching the acceptable solution at 1 FE; MCRO and OCRO show the best convergence speed on f20, again at 1 FE; MCRO and HP-CRO2 report the best results for f16; and MCRO and RCCRO4 generate the best results for f4, reaching the acceptable solution at 1 FE. In summary, MCRO achieves the fastest convergence on 11 functions (f4, f10, f14, f15, f16, f17, f18, f19, f20, f21, and f23), and on 7 of these 11 it reaches the acceptable solution at 1 FE; on the other 12 functions, MCRO ranks second. HP-CRO2 shows the best convergence on 12 functions (f2, f3, f5, f6, f7, f8, f9, f11, f12, f14, f16, and f18); although HP-CRO2 performs best for this many functions, only 3 of the 12 reach the acceptable solution at 1 FE, and on the remaining 11 functions it ranks second for 7 and third for 4. OCRO generates the best results on 4 functions (f14, f18, f20, and f22), and RCCRO4 shows the best convergence speed on 3 functions (f1, f4, and f14). The average convergence speed ranking over all functions, from fastest to slowest, is led by MCRO, with HP-CRO2 second, followed by OCRO and RCCRO4, which share the same rank.

Besides evaluating convergence speed by the number of function evaluations (FEs), we compare the algorithms MCRO, OCRO, HP-CRO2, and RCCRO4 by drawing the convergence curves of specific functions in a particular run, selecting 2 functions from each category: category I (f3, f7), category II (f11, f12), and category III (f16, f22). We note that an appropriate curve for the maximization operation should rise, as opposed to minimization. Figures 1-3 show that MCRO remains the most outstanding among the four algorithms.

(3) As mentioned above, we use the Friedman test for statistical hypothesis testing. The Friedman rank is obtained by transforming the results of the compared algorithms into ranks for each function; average ranks are assigned when results are tied. Table 8 presents the results of the Friedman test for optimal solution quality and convergence speed. In the comparison of optimal solution quality, MCRO achieves the best rank, followed by HP-CRO2, RCCRO4, and OCRO, respectively. In the comparison of convergence speed, MCRO again performs best and HP-CRO2 is second, as in the optimal solution quality comparison, while OCRO is third and RCCRO4 last, differently from the optimal solution quality comparison.

In the statistical test, comparing both the optimal solution quality and the convergence speed on the basis of the corresponding best Friedman ranks, 1.59 and 1.22, the Friedman statistic values are 21.76398 and 35.96694, and the p-values are 7.30387E-05 and 7.60987E-08, respectively. These results explicitly display significant differences across the competing algorithms.

The performance measured by these three methods leads to the conclusion that MCRO is clearly the most outstanding algorithm for solving maximization problems.

Conclusion

This paper is concerned with solving optimization problems in the maximization setting. We evaluated several CRO-based approaches that had already shown promise for minimization in previous publications: RCCRO4, HP-CRO2, OCRO, and MCRO. The evaluation results show that MCRO, a hybrid of CRO and the polynomial mutation operator, is the best among the compared algorithms for both minimization and maximization, in terms of both optimal solution quality and convergence speed. However, there are some variations in the ranking of the other competitors: HP-CRO2 is third for minimization but second for maximization in both optimal solution quality and convergence speed; OCRO, the second best for minimization, is third in convergence speed and worst in optimal solution quality for maximization; and RCCRO4, the worst for minimization, is third in optimal solution quality and worst in convergence speed for maximization.

These results strongly support MCRO as a promising algorithm for solving optimization (minimization or maximization) problems.

  1. Kita H, Yabumoto Y, Mori N, Nishikawa Y (1996) Multi-objective optimization by means of the thermodynamical genetic algorithm. Lecture Notes in Computer Science 1141: 504–512. Link: https://goo.gl/i2jSYU
  2. Wikipedia: Mathematical optimization. Link: https://goo.gl/iYWRMx
  3. Metropolis N, Rosenbluth A, Rosenbluth M, Teller A, Teller E (1953) Equation of state calculations by fast computing machines. J Chem Phys 21: 1087–1092. Link: https://goo.gl/gc15MC
  4. Holland J (1992) Adaptation in natural and artificial systems: An introductory analysis with applications to biology, control, and artificial intelligence. MIT Press, Cambridge. Link: https://goo.gl/tfWTEq
  5. Dawkins R (1976) The selfish gene. Oxford University Press. Link: https://goo.gl/iU0gqm
  6. Kennedy J, Eberhart R (1995) Particle swarm optimization. Proceedings of the IEEE International Conference on Neural Networks 4: 1942–1948. Link: https://goo.gl/5nIs4E
  7. Dorigo M, Maniezzo V, Colorni A (1996) Ant system: optimization by a colony of cooperating agents. IEEE Trans Syst Man Cybern 26: 29–41. Link: https://goo.gl/rBSyyw
  8. Liong SY, Atiquzzaman M (2004) Optimal design of water distribution network using shuffled complex evolution. J Inst Eng Singapore 44: 93–107. Link: https://goo.gl/iM0i1h
  9. Yang XS (2010) A new metaheuristic bat-inspired algorithm. University of Cambridge, Department of Engineering. Link: https://goo.gl/NAcNAc
  10. Geem ZW, Kim JH, Loganathan GV (2001) A new heuristic optimization algorithm: harmony search. Simulation 76: 60–68. Link: https://goo.gl/xJYIaO
  11. Lam AY, Li VO (2010) Chemical-reaction-inspired metaheuristic for optimization. IEEE Transactions on Evolutionary Computation 14: 381–399.
  12. Lam AY, Li VO (2012) Chemical reaction optimization: A tutorial. Memetic Computing 4: 3–17. Link: https://goo.gl/vptngX
  13. Lam AY, Li VO, Yu JJ (2012) Real-coded chemical reaction optimization. IEEE Transactions on Evolutionary Computation 16: 339–353. Link: https://goo.gl/Y790Cx
  14. Nguyen TT, Li ZY, Zhang SW, Truong TK (2014) A hybrid algorithm based on particle swarm and chemical reaction optimization. Expert Systems with Applications 41: 2134–2143. Link: https://goo.gl/9pq2Aj
  15. Li ZY, Li Z, Nguyen TT, Chen SM (2015) Orthogonal chemical reaction optimization algorithm for global numerical optimization problems. Expert Systems with Applications 42: 3242–3252. Link: https://goo.gl/fxkWDt
  16. Ngambusabongsopa R, Li ZY, Eldesouky E (2015) A hybrid mutation chemical reaction optimization algorithm for global numerical optimization. Mathematical Problems in Engineering 2015: 1–18. Link: https://goo.gl/ypoEWw
  17. Wang G, Guo L (2013) A novel hybrid bat algorithm with harmony search for global numerical optimization. Journal of Applied Mathematics 2013: 1–21. Link: https://goo.gl/39XmFP
  18. Derrac J, García S, Molina D, Herrera F (2011) A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm and Evolutionary Computation 1: 3–18.
© 2017 Ngambusabongsopa R, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.