involves a competitive selection that weeds out poor solutions. The solutions with high "fitness" are "recombined" with other solutions by swapping parts of one solution with another. Solutions are also "mutated" by making a small change to a single element of the solution. Recombination and mutation are used to generate new solutions that are biased towards regions of the search space in which good solutions have already been seen. Several main schools of EA have evolved during the last 30 years: Genetic Algorithms (GA), Evolutionary Programming (EP), Evolution Strategies (ES), Differential Evolution (DE), Ant Colony Optimization (ACO), the Immunology System Method (ISM), Scatter Search (SS), Particle Swarm (PS), the Self-Organizing Migrating Algorithm (SOMA), etc.
Genetic algorithms are an optimization technique based on natural evolution. They incorporate the survival-of-the-fittest idea into a search algorithm, providing a method of searching that does not need to explore every possible solution in the feasible region to obtain a good result. In nature, the fittest individuals are most likely to survive and mate; therefore the next generation should be fitter and healthier because they were bred from healthy parents. This same idea is applied to a problem by first 'guessing' solutions and then combining the fittest solutions to create a new generation of solutions which should be better than the previous generation. A random mutation element is also included to account for the occasional 'mishap' in nature.
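The guess-select-recombine-mutate loop described above can be sketched as follows. The OneMax toy objective (maximize the number of 1-bits) and all parameter values are illustrative assumptions, not taken from the text:

```python
import random

def ga_onemax(n_bits=20, pop_size=30, generations=60, p_mut=0.05, seed=1):
    """Minimal GA sketch: tournament selection, one-point crossover,
    bit-flip mutation, applied to the OneMax toy problem."""
    rng = random.Random(seed)
    fitness = lambda ind: sum(ind)  # OneMax: count of 1-bits
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def select():
            # Tournament selection: the fitter of two random individuals wins.
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        new_pop = []
        while len(new_pop) < pop_size:
            p1, p2 = select(), select()
            cut = rng.randrange(1, n_bits)           # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ (rng.random() < p_mut) for bit in child]  # bit-flip mutation
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

best = ga_onemax()
```

The fittest individuals of each generation are thus biased into the next one, while mutation keeps injecting the occasional 'mishap'.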
Differential evolution (DE) is one of the efficient evolutionary computing techniques that have proven effective for optimization problems in many practical applications. However, the performance of DE is not always flawless, and fast convergence to the global optimum is not guaranteed: the search can stagnate, resulting in low accuracy of the obtained results. In this paper, an enhanced differential evolution (EDE) algorithm is proposed that integrates excited arbitrary confined search (EACS) to augment the performance of the basic DE algorithm. EACS is a local search method that is invoked to swap the present solution for a superior candidate in its neighbourhood. Only a small subset of arbitrarily selected variables is used in each step of the local exploration to randomly decide the subsequent provisional solution. The proposed EDE has been tested on the standard IEEE 30-bus test system. The simulation results clearly show the better performance of the proposed algorithm in reducing the real power loss while keeping the control variables within their limits.
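As a point of reference for the basic DE algorithm that EACS is meant to enhance, here is a minimal sketch of the classic DE/rand/1/bin scheme on a toy sphere function. The objective and the parameter settings (F, CR, population size) are illustrative assumptions, not the paper's configuration:

```python
import random

def de_sphere(dim=5, pop_size=20, iters=200, F=0.5, CR=0.9, seed=0):
    """Basic DE/rand/1/bin sketch minimizing the sphere function sum(x_i^2)."""
    rng = random.Random(seed)
    f = lambda x: sum(v * v for v in x)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(iters):
        for i in range(pop_size):
            # Pick three distinct vectors other than the target vector i.
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            j_rand = rng.randrange(dim)
            trial = [a[k] + F * (b[k] - c[k])            # differential mutation
                     if (rng.random() < CR or k == j_rand) else pop[i][k]
                     for k in range(dim)]
            if f(trial) <= f(pop[i]):                    # greedy selection
                pop[i] = trial
    return min(pop, key=f)

best = de_sphere()
```

On such a smooth unimodal function basic DE converges quickly; stagnation of the kind EDE targets typically appears on multimodal landscapes.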
Surveys of existing methods for multi-objective problems were presented in Jozefowiez et al. (2008) and Zhou et al. (2011). In Jozefowiez et al. (2008), the authors examined multi-objective versions of several variants of the Vehicle Routing Problem (VRP) in terms of their objectives, their characteristics and the types of algorithms proposed to solve them. A survey of the state of the art of multi-objective evolutionary algorithms was proposed by Zhou et al. (2011). This paper covers algorithmic frameworks for multi-objective combinatorial problems proposed during the last eight years. However, in the literature reviewed, there are few works considering the multi-objective version of the MDVRPB. Multi-objective metaheuristic approaches for combinatorial problems were presented in Doerner et al. (2004), Liu et al. (2006) and Lau et al. (2009). A multi-objective methodology based on Pareto Ant Colony Optimization for solving a portfolio problem was introduced by Doerner et al. (2004). A multi-objective mixed zero-one integer-programming model for the vehicle routing problem with balanced workload and delivery time was introduced by Liu et al. (2006); in this work, a heuristic-based solution method was developed. A fuzzy multi-objective evolutionary algorithm for the optimization of vehicle routing problems with multiple depots, multiple customers, and multiple products was proposed by Lau et al. (2009). In this work, two objectives were considered: minimization of the traveling distance and of the traveling time.
2. Imperialist competitive algorithm (ICA)
Imperialist competitive algorithm (ICA) is a population-based meta-heuristic algorithm proposed by Atashpaz-Gargari and Lucas (2007), inspired by the socio-political process of imperialism and imperialistic competition. The algorithm's capability in dealing with different types of optimization problems has been demonstrated by the authors. Similar to other evolutionary algorithms, ICA starts with an initial population of solutions, called countries, representing the concept of nations. Reflecting the objective function value of each solution, some of the best countries in the population are chosen to be the ‘imperialists’ and the rest are assumed to be the ‘colonies’ of those imperialists. The set of one imperialist and its colonies is called an ‘empire’. Over time, imperialists try to extend their own characteristics to the colonies they govern; however, this is not a totally controlled procedure, and revolutions may happen in any country. Countries can also defect from their empire to another if they see a higher chance of promotion there. ICA has been used extensively to solve different kinds of optimization problems, for example stock market forecasting, digital filter design, traveling salesman problems, multi-objective optimization, the integrated product mix-outsourcing problem and scheduling problems [27, 28].
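A minimal sketch of the ICA mechanics just described, showing only assimilation (colonies moving toward their imperialist), revolution, and the colony-imperialist swap; the imperialistic-competition step between empires is omitted for brevity, and the sphere objective and all parameter values are illustrative assumptions:

```python
import random

def ica_sphere(dim=3, n_countries=30, n_imp=3, iters=300, beta=2.0, p_rev=0.2, seed=0):
    """Simplified ICA sketch minimizing sum(x_i^2)."""
    rng = random.Random(seed)
    cost = lambda x: sum(v * v for v in x)
    countries = sorted(([rng.uniform(-5, 5) for _ in range(dim)]
                        for _ in range(n_countries)), key=cost)
    imps, cols = countries[:n_imp], countries[n_imp:]   # best countries become imperialists
    owner = [i % n_imp for i in range(len(cols))]       # assign colonies round-robin
    for _ in range(iters):
        for c_idx in range(len(cols)):
            imp = imps[owner[c_idx]]
            # Assimilation: move the colony a random fraction of the way
            # toward its imperialist (possibly overshooting).
            col = [x + beta * rng.random() * (ix - x) for x, ix in zip(cols[c_idx], imp)]
            if rng.random() < p_rev:                    # revolution: random perturbation
                k = rng.randrange(dim)
                col[k] += rng.uniform(-1, 1)
            cols[c_idx] = col
            # A colony that outperforms its imperialist takes its place.
            if cost(col) < cost(imps[owner[c_idx]]):
                imps[owner[c_idx]], cols[c_idx] = col, imps[owner[c_idx]]
    return min(imps, key=cost)

best = ica_sphere()
```

The colony-imperialist swap corresponds to the 'promotion' mentioned above: quality, not initial rank, decides who ends up governing an empire.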
A significant part of heuristics comprises metaheuristic methods, which differ from the classical methods in that they combine stochastic and deterministic components. This means that they aim at global optimization, not only at local extremes. The big advantage of metaheuristics is that they are not built to solve only one concrete type of problem; they describe a general algorithm, showing only the way some procedures should be applied to obtain a solution to the problem. This procedure is defined only descriptively, as a black box, and the implementation depends on the specific type of problem. The group of the best-known metaheuristics includes evolutionary algorithms, which are inspired by processes in nature (for example genetic algorithms, particle swarm optimization, differential evolution, ant colony optimization, etc.).
been used as it is implemented in MATLAB. The branch-and-bound algorithm, named after the work of , has become the most commonly used tool for solving NP-hard optimization problems. The branch-and-bound method consists of a systematic enumeration of candidate solutions by means of a state space search: the set of candidate solutions is thought of as forming a rooted tree with the full set as the root. The algorithm explores branches of this tree, which represent subsets of the solution set. Before enumerating the candidate solutions of a branch, the branch is checked against upper and lower estimated bounds on the optimal solution, and is discarded if it cannot produce a better solution than the best one found so far by the algorithm. The algorithm depends on the efficient estimation of the lower and upper bounds of a region/branch of the search space and approaches exhaustive enumeration as the size (n-dimensional volume) of the region moves to zero. To get an upper bound on the objective function, the branch-and-bound procedure must find feasible points. There are techniques for finding feasible points faster before or during the branch-and-bound procedure. Currently, the MATLAB/intlinprog module uses these techniques only at the root node, not during the branch-and-bound iterations. Figure 1 shows the flowchart.
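The bound-and-prune scheme described above can be illustrated on a small 0-1 knapsack instance. This is a toy sketch, not the MATLAB/intlinprog implementation: the fractional (LP) relaxation supplies the upper bound used to discard branches, and each feasible point found updates the incumbent:

```python
def knapsack_bb(values, weights, capacity):
    """Branch-and-bound sketch for 0-1 knapsack."""
    # Sort items by value density so the greedy relaxation bound is valid.
    order = sorted(range(len(values)), key=lambda i: values[i] / weights[i], reverse=True)
    v = [values[i] for i in order]
    w = [weights[i] for i in order]
    best = 0

    def bound(i, cap):
        # Upper bound on the remaining items: greedily fill the leftover
        # capacity, allowing one fractional item (LP relaxation).
        total = 0.0
        while i < len(v) and w[i] <= cap:
            total += v[i]; cap -= w[i]; i += 1
        if i < len(v):
            total += v[i] * cap / w[i]
        return total

    def branch(i, cap, value):
        nonlocal best
        if value > best:
            best = value                                 # new incumbent (feasible point)
        if i == len(v) or value + bound(i, cap) <= best:
            return                                       # prune: bound cannot beat incumbent
        if w[i] <= cap:
            branch(i + 1, cap - w[i], value + v[i])      # branch: take item i
        branch(i + 1, cap, value)                        # branch: skip item i

    branch(0, capacity, 0)
    return best
```

On the classic instance below, the bound prunes the subtree that skips both of the two densest items without enumerating it.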
The genetic algorithm is a randomized search and optimization technique guided by the principles of natural genetic systems and natural evolution [10,14]. It incorporates the "survival of the fittest" idea into a search algorithm. In nature, the fittest individuals are most likely to survive and mate; therefore the next generation should be fitter and healthier because they were bred from healthy parents. This same idea is applied by a GA to a problem by first 'guessing' possible solutions of the problem and then combining the fittest solutions to create a new generation of solutions which should be better than the previous generation. A random mutation element is also included to make up for the occasional mishap in nature. Genetic algorithms have been applied to a large number of real-world problems. Using such "evolutionary computing algorithms" for combinatorial optimization problems is a well-studied problem-solving approach. The benefit of evolutionary computing is not only its simplicity but also its ability to obtain global optima. The main advantages of the genetic algorithm are that it works with a coding of the variables, which discretizes the search space, and that it works on a population of points at a time, in parallel. It is a nondeterministic approach that supports multi-objective optimization.
A crossover operator called edge assembly crossover (EAX) was proposed by Nagata and Kobayashi in 1997. EAX has two important features: it preserves the parents' edges using a novel approach, and it adds new edges by applying a greedy method analogous to building a minimal spanning tree. Two individual tours, denoted A and B, are selected as parents. EAX first merges A and B into a single graph denoted R, from which it extracts an even cycle that alternates between edges of A and edges of B, called an AB-cycle. All of the edges in this AB-cycle are then deleted from graph R. The same procedure is repeated until all of the edges in graph R are eliminated. Finally, in order to obtain a valid solution, EAX uses a greedy method to merge the resulting distinct subtours together.
We can find applications of the Generalized Traveling Salesman Problem in the scheduling of machine processes in industry, postal routing and the layout of networks. Noon & Bean  employed a Lagrangian relaxation to compute a lower bound on the total cost of an optimal solution. Many studies of the Generalized Traveling Salesman Problem and its applications have aimed to obtain a reduction of this problem to a Traveling Salesman Problem, since many heuristics already exist for the latter. The first method for reducing a Generalized Traveling Salesman Problem to a Traveling Salesman Problem was presented by Lien et al. . More recently, Mestria  proposed a hybrid heuristic algorithm to solve the Clustered Traveling Salesman Problem (CTSP). The hybrid heuristic algorithm uses several variable neighborhood structures, combining intensification (local search operators) and diversification (a constructive heuristic and a perturbation routine). Experimental results show that the proposed hybrid heuristic obtains competitive results within reasonable computational time.
Abstract—The Traveling Salesman Problem (TSP) is a well-known NP-hard (Nondeterministic Polynomial) problem. A number of sequential algorithms have been defined to solve the TSP. In this paper the authors demonstrate an alternative way of solving the TSP in parallel, by modifying Prim's algorithm and integrating the modified algorithm with a random algorithm, making the whole process more computationally intensive in visiting all the nodes/cities. This paper discusses converting the sequential algorithm into a parallel version to be computed on a High Performance Computer (HPC) using Message Passing Interface (MPI) libraries running on the ROCKS open-source system. The outcome of load balancing and cluster performance is also discussed.
Research on liner shipping problems has been relatively scarce; for an overview of earlier research on the topic, please refer to Christiansen et al.  and Christiansen et al. . Since these reviews, interest in the field has increased and a number of articles have been published, with various approaches to and scopes of the LSNDP. The work of Shintani et al.  has a detailed description of the problem cost structure and includes consideration of repositioning empty containers. The network design problem considered by Agarwal and Ergun  generates multiple services and handles transhipments; a Benders decomposition and a column-generation-based algorithm are implemented. These scale to large instances, but a drawback is that transhipment costs are excluded. The model of Alvarez  considers transhipment costs and finds solutions for large instances with a heuristic column generation approach. The Branch-and-Cut method of Reinhardt and Pisinger  has the first model considering
Proceedings of the International MultiConference of Engineers and Computer Scientists 2012 Vol II,
Adding new constraints to the master problem modifies its structure. For the subproblem to identify new attractive columns for the modified master problem, those modifications must be accounted for in the subproblem: essentially, the dual information associated with the new constraints must be considered at the subproblem level. Each extra constraint corresponds to the scheduling of a specific task in a specific time slot. The dual value associated with an extra constraint can be seen as a prize or a penalty for scheduling that task in that time slot. Therefore, the modifications to the subproblem are easy to make, because they only affect the calculation of the state-transition costs in the dynamic programming formulation.
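A toy sketch of the point made above: the duals enter the pricing subproblem only through the transition costs of the DP. The scheduling model here (tasks placed in order, each in a strictly later time slot) is a hypothetical simplification for illustration, not the paper's actual subproblem:

```python
def subproblem_dp(cost, dual):
    """Pricing-subproblem sketch. cost[t][s] is the base cost of scheduling
    task t in slot s; dual[t][s] is the dual value (prize/penalty) of the
    extra master constraint for that (task, slot) pair. The duals only
    shift the transition costs; the DP recursion itself is unchanged."""
    T, S = len(cost), len(cost[0])
    INF = float("inf")
    # best[t][s]: minimum reduced cost of scheduling tasks 0..t with task t in slot s
    best = [[INF] * S for _ in range(T)]
    for s in range(S):
        best[0][s] = cost[0][s] - dual[0][s]
    for t in range(1, T):
        for s in range(t, S):                 # task t needs a slot after task t-1
            prev = min(best[t - 1][u] for u in range(s))
            best[t][s] = prev + cost[t][s] - dual[t][s]   # dual-adjusted transition
    return min(best[T - 1][s] for s in range(T - 1, S))
```

With zero duals the subproblem prices pure costs; a positive dual on a (task, slot) pair acts as a prize that pulls the optimal column toward that assignment.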
algorithm. A set of eligible channels E(k) is determined in order to assign a possible channel upon a new call request in cell k. In this case E(k) = S − (O(k) ∪ P(k)), where S is the entire set of available channels, O(k) is the set of channels allocated to the existing calls in cell k, and P(k) is the set of channels used in neighboring cells that are closer than the reuse distance to cell k. The channel allocation matrix A includes all the information related to channel usage. The initial population consists of solution vectors whose length equals the magnitude of E(k). Each entry of a solution vector is a unique integer. The remaining (t − 1) integers in all the solution vectors are determined as the channels allocated to the ongoing calls in cell k.
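The eligible-channel set E(k) = S − (O(k) ∪ P(k)) is a plain set difference; a minimal sketch (the channel numbers are illustrative, not from the text):

```python
def eligible_channels(S, O, P):
    """E(k) = S - (O(k) | P(k)): channels neither in use by existing calls
    in cell k nor blocked by neighboring cells within the reuse distance."""
    return sorted(set(S) - (set(O) | set(P)))

# Example: 10 channels total, 3 busy locally, channels 2 and 5 blocked by neighbors.
E = eligible_channels(range(10), O=[0, 1, 2], P=[2, 5])
```

The length of E then fixes the length of the solution vectors in the initial population.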
At each step the algorithm considers a growing set of potential exchanges (starting with r = 2). These exchanges are chosen in such a way that a feasible tour may be formed at any stage of the process. If the exploration succeeds in finding a new shorter tour, the current tour is replaced with the new tour. The Lin-Kernighan algorithm belongs to the class of so-called local optimization algorithms [17, 18]. The algorithm is specified in terms of exchanges (or moves) that can convert one tour into another. Given a feasible tour, the algorithm repeatedly performs exchanges that reduce the length of the current tour, until a tour is reached for which no exchange yields an improvement. This process may be repeated many times from initial tours generated in some randomized way. The algorithm is described below in more detail.
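Lin-Kernighan itself is intricate, but its r = 2 special case, plain 2-opt, already illustrates the exchange-based local optimization described above. This sketch is a simplified stand-in, not the full LK algorithm:

```python
import math

def tour_length(tour, dist):
    return sum(dist[tour[i - 1]][tour[i]] for i in range(len(tour)))

def two_opt(tour, dist):
    """2-opt local search: repeatedly reverse a tour segment whenever doing so
    shortens the tour; stop when no exchange yields an improvement."""
    improved = True
    n = len(tour)
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                if i == 0 and j == n - 1:
                    continue                       # would reverse the whole tour
                # Replace edges (a,b) and (c,d) with (a,c) and (b,d).
                a, b, c, d = tour[i], tour[i + 1], tour[j], tour[(j + 1) % n]
                if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d]:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour

# Example: a "crossing" tour on the unit square is repaired to the optimal tour.
pts = [(0, 0), (1, 0), (1, 1), (0, 1)]
dist = [[math.hypot(ax - bx, ay - by) for bx, by in pts] for ax, ay in pts]
tour = two_opt([0, 2, 1, 3], dist)   # length drops from ~4.83 to 4.0
```

LK generalizes this by chaining such exchanges into deeper sequential moves (r = 3, 4, ...), while always keeping a feasible tour reachable.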
The 0-1 Knapsack Problem (0-1 KP) is a special case of the general 0-1 linear problem in which the allocation of items to a knapsack is considered. Knapsack problems appear in real-world decision-making processes in a wide variety of fields such as production, logistics, distribution and financial problems (Marchand et al., 1999; Kellerer et al., 2004; Gorman & Ahire, 2006; Wascher & Schumann, 2007; Granmo et al., 2007; Nawrocki et al., 2009; Vanderster et al., 2009). Dantzig (1957) is believed to be among the first to introduce the knapsack problem, which was later shown to be NP-hard (Garey & Johnson, 1979). There are many practical applications of the knapsack problem; it has become the object of numerous studies, and a great number of papers have been devoted to solving it. In this problem, different items with various profits (p) and weights (w) are considered, and the knapsack has a capacity limit (C). The general KP model is a binary problem, stated as follows:
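The model itself did not survive extraction; the standard 0-1 KP statement, written with the profits p_i, weights w_i and capacity C already introduced, is:

```latex
\max \sum_{i=1}^{n} p_i x_i
\quad \text{s.t.} \quad \sum_{i=1}^{n} w_i x_i \le C,
\qquad x_i \in \{0,1\}, \quad i = 1,\dots,n,
```

where x_i = 1 if item i is placed in the knapsack and x_i = 0 otherwise.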
To remove or reduce noise and effects from other unexpected sources, and to further enhance the signal components of interest, a reconstruction approach is used to filter and reassemble the frequency components so as to reconstruct the signal without loss of the information of interest. The scheme of the reconstruction method is illustrated in Fig. 2. Each signal is transformed to its FFT spectrum. Then, eighteen band-pass filters are applied to select specific frequency bands within the signal. Finally, all eighteen frequency segments are reassembled to reconstruct a new signal. The functions of these eighteen band-pass filters are listed in Table 1, which shows the criteria for defining them. In this table, the frequency components are obtained by calculating the corresponding vibration characteristic frequencies of the shafts, gears and bearings. The frequency order is the ratio of the characteristic frequency to the shaft rotating frequency.
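The split-filter-reassemble idea can be sketched with a naive DFT (used here instead of an FFT library to stay self-contained). The two-band split and the test signal below are illustrative assumptions, not the eighteen filters of Table 1; the key property shown is that disjoint bands covering the whole spectrum reassemble into the original signal without loss:

```python
import cmath
import math

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

def band_reconstruct(x, bands):
    """Band-pass each (lo, hi) bin range of the spectrum separately, then sum
    the filtered time-domain components to reassemble the signal."""
    X = dft(x)
    parts = []
    for lo, hi in bands:
        Xf = [X[k] if lo <= k < hi else 0 for k in range(len(X))]  # rectangular band-pass
        parts.append(idft(Xf))
    return [sum(vals) for vals in zip(*parts)]

# Two sinusoids; split the spectrum into two disjoint bands and reassemble.
n = 32
x = [math.sin(2 * math.pi * 3 * t / n) + 0.5 * math.sin(2 * math.pi * 10 * t / n)
     for t in range(n)]
y = band_reconstruct(x, [(0, n // 2), (n // 2, n)])
```

In practice one would keep only the bands around the characteristic frequencies of interest, which is where the noise rejection comes from.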
The aim of this article is to perform a computational study to analyze the impact of formulations and of the solution strategy on the algorithmic performance of two classical optimization problems: the traveling salesman problem and the cutting stock problem. In order to assess the algorithmic performance on both problems, three dependent variables were used: solution quality, computing time and number of iterations. The results are useful for choosing the solution approach to each specific problem. In the STSP, the results demonstrate that the multistage decision formulation is better than the conventional formulations, solving 90.47% of the instances, compared with MTZ (76.19%) and DFJ (14.28%). The results for the CSP demonstrate that the cutting patterns formulation is better than the standard formulation with symmetry-breaking inequalities when the objective function is to minimize the trim loss when cutting the rolls.
Optimization problems, with or without constraints, arise in various fields such as science, engineering, economics and the management sciences, wherever numerical information is processed. In recent times, many problems in business situations and engineering design have been modeled as optimization problems for taking optimal decisions. In fact, numerical optimization techniques have made deep inroads into almost all branches of engineering and mathematics.
Now consider Fig. 3. The minor change is that the edge between node A and node D has been removed, while the edge weights remain the same as in Fig. 1. On this graph we can see the use of backtracking and how it is useful for determining a Hamiltonian cycle.

As shown in Fig. 4, we start from vertex A, assuming it is the starting vertex. There are two edges out of it: one to node C and the other to node B, so we can continue along either. Let us start with vertex C; in the algorithm later, we consider first the vertex that comes first in the matrix notation. Through a recursive call we reach node D, but the edge E(D, A) does not exist, so we have a Hamiltonian path but not a Hamiltonian cycle. We therefore backtrack to B; B has no edge remaining, so we again backtrack to C and continue with its child node D. Then A-C-D-B-A is the Hamiltonian cycle. There may be many Hamiltonian cycles in a given graph; finding the minimal one is the travelling salesman problem.

In Fig. 4, to find a Hamiltonian cycle we are actually finding the shortest path between C and B containing all the remaining vertices except A. We can reduce the algorithm's complexity by adding the logic to start the tour from the vertex whose out-degree is minimum. Then in Fig. 4 there is no need to explore node B (as a child of node C) further, because we know that the path must lead to node B and from it to A; the moment we get node B as a child of node C, we need not go further, as that branch will not lead to a Hamiltonian cycle.

Now we develop the algorithm for the Hamiltonian cycle and the TSP. Every recursive algorithm requires a base step to stop the recursion from going further. So when edges = N and there is an edge between the current node and the starting node, we can output the string we have built as the path, and if one wants the length it can also be calculated. So the code will be:
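A sketch of the backtracking procedure described in the walkthrough; the matrix encoding and function names are illustrative assumptions. The base step is exactly the one stated above: when all N vertices are on the path and an edge back to the starting node exists, a cycle is reported; otherwise the recursion backtracks:

```python
def hamiltonian_cycles(adj, start=0):
    """Enumerate Hamiltonian cycles by backtracking: extend the path vertex by
    vertex; when all N vertices are used and an edge back to the start exists,
    yield the cycle (the base step); dead ends backtrack automatically."""
    n = len(adj)
    path = [start]
    used = [False] * n
    used[start] = True

    def extend():
        if len(path) == n:
            if adj[path[-1]][start]:          # edge back to the start closes the cycle
                yield path + [start]
            return
        for v in range(n):
            if not used[v] and adj[path[-1]][v]:
                used[v] = True
                path.append(v)
                yield from extend()
                path.pop()                    # backtrack
                used[v] = False

    yield from extend()

def tsp_by_enumeration(weights, start=0):
    """TSP sketch: the minimum-weight Hamiltonian cycle over all cycles found."""
    adj = [[w > 0 for w in row] for row in weights]
    best = None
    for cyc in hamiltonian_cycles(adj, start):
        length = sum(weights[cyc[i]][cyc[i + 1]] for i in range(len(cyc) - 1))
        if best is None or length < best:
            best = length
    return best

# Example on a complete 4-vertex graph (weights are illustrative).
weights = [[0, 10, 15, 20],
           [10, 0, 35, 25],
           [15, 35, 0, 30],
           [20, 25, 30, 0]]
best = tsp_by_enumeration(weights)
```

This brute-force enumeration is exponential; the out-degree heuristic mentioned above only prunes the search, it does not change the worst case.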