
In the document Tiayyba Riaz (pages 50-55)

1.7 Deeper Into DNA Metabarcoding

1.7.2 Metaheuristics

One emerging class of approximate methods is metaheuristics, which have been designed to solve a very general class of combinatorial optimization problems. The term metaheuristics was first introduced by Glover (1986); however, Kirkpatrick et al. (1983) had already proposed a well-known metaheuristic technique called Simulated Annealing (SA) in 1983. According to Glover, "a metaheuristic refers to a master strategy that guides and modifies other heuristics to produce solutions beyond those that are normally generated in a quest for local optimality". Metaheuristics are not problem specific; they provide a general algorithmic framework which can be applied to different optimization problems with relatively few modifications, using domain-specific knowledge to adapt them to a specific problem (Blum and Roli, 2003). There is no standard and commonly accepted definition for the term metaheuristics; however, in the last few years different researchers have tried to propose definitions for the term (Osman and Laporte, 1996, Stützle, 1999, Voss et al., 1999). The simplest of these definitions is the one given by Osman and Laporte (1996), which says:

Definition 5. "A metaheuristic is formally defined as an iterative generation process which guides a subordinate heuristic by combining intelligently different concepts for exploring and exploiting the search space; learning strategies are used to structure information in order to find efficiently near-optimal solutions."

The main goal of metaheuristic algorithms is to overcome the main disadvantage of iterative local search, namely its inability to escape from local minima. Different strategies have been devised to achieve this. They include either allowing low-quality solutions or generating new starting solutions in a more intelligent way than just using random initial solutions. Many of the proposed methods make use of objective functions, information about previously made decisions, or probabilistic models during the search to escape from local minima.

Metaheuristic methods have many interesting applications in almost all fields of scientific research, including psychology, biology and physics. A number of applications have been discussed (Beer, 1996, Osman and Kelly, 1996, Vidal, 1993), and a useful survey of metaheuristics is given in (Osman and Laporte, 1996). In this section we discuss the two most studied and used methods, Simulated Annealing and Tabu Search. We make use of these methods in our primer set design approach, which is presented in chapter 3 of this thesis.

Simulated Annealing

Simulated Annealing (SA) is the oldest among the metaheuristics and was independently proposed by Kirkpatrick et al. (1983) and Černý (1985). The concept of the simulated annealing algorithm is taken from physical annealing in metallurgy. The technique of physical annealing involves heating and controlled cooling of a material to increase the size of its crystals and reduce their defects. Controlled cooling means lowering the temperature very slowly and spending a long time at low temperatures in order to grow solids with a perfect smooth structure. If cooling is done too fast, the resulting crystals will have irregularities and defects. This undesirable situation is avoided by a careful annealing process in which the temperature descends slowly through several temperature levels and each temperature is held long enough to allow the solid to reach thermal equilibrium.

Such a state corresponds to a state of minimum energy, and the solid is said to be in a ground state. There exists a strong analogy between combinatorial optimization problems and the physical annealing of solids (crystals): the set of solutions of the problem can be associated with the states of the physical system, the objective function corresponds to the physical energy of the solid, and the globally optimal solution corresponds to the ground state of the solid.

Simulated annealing uses the idea of basic local search; however, it allows moves of inferior quality (Reeves, 1995) to escape from local minima.

s ← GenerateInitialSolution()
T ← T0
while termination conditions not met do
    s' ← PickAtRandom(N(s))
    if (f(s') < f(s)) then
        s ← s'
    else
        Accept s' as new solution with probability Paccept(T, s, s')
    end if
    Update(T)
end while

Figure 1.2: Simulated Annealing Algorithm

The algorithm starts by generating a tentative solution s' and initializing a temperature T. The solution s' is accepted if it improves the objective function value; however, if s' is worse than the current solution s, it is accepted with a probability which depends on the difference Δ = f(s') − f(s) in objective function between the current solution s and the tentative solution s', and on the temperature T. The probability of acceptance is computed following the Boltzmann distribution as e^(−Δ/T). The probability Paccept of accepting worse solutions is defined as:

    Paccept(T, s, s') = 1           if f(s') < f(s)
                        e^(−Δ/T)    otherwise

A simplified version of the simulated annealing algorithm is shown in Figure 1.2.
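The piecewise acceptance rule above translates directly into code. The following is a minimal sketch for a minimization problem; the function name p_accept is illustrative, not from the thesis:

```python
import math

def p_accept(T, f_current, f_tentative):
    """Boltzmann acceptance probability Paccept(T, s, s') for minimization."""
    if f_tentative < f_current:
        return 1.0  # improving moves are always accepted
    delta = f_tentative - f_current  # delta = f(s') - f(s) >= 0
    return math.exp(-delta / T)
```

At T = 1, a move that worsens the objective by 1 is accepted with probability e^(−1) ≈ 0.37; at T = 0.1 the same move is accepted with probability e^(−10) ≈ 0.00005, which illustrates how lowering the temperature suppresses inferior moves.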

At the start of the algorithm the temperature is high, and the probability of accepting inferior quality solutions is also high, but it decreases gradually, so that the method converges to a simple iterative improvement algorithm as the temperature is lowered. In the beginning, when the probability of accepting inferior quality solutions is high, the improvement in the current solution is low but a large part of the solution space is explored; the algorithm eventually tends to converge to a local minimum when the probability is lowered. The probability of accepting inferior quality solutions is controlled by two factors: the difference of the objective functions and the temperature. At a fixed temperature, the higher the difference Δ = f(s') − f(s), the lower the probability of accepting a move from s to s'. On the other hand, the higher the temperature, the higher the probability of accepting inferior quality solutions.

An appropriate temperature lowering scheme (defined by the Update(T) function in Figure 1.2) is crucial for the performance of the algorithm. Such a scheme is called an annealing schedule or cooling schedule. It is defined by an initial temperature T0 and a rule saying how the new temperature is obtained from the previous one. Such a scheme also defines the number of iterations to be performed at each temperature and a termination condition.

An appropriate cooling schedule guarantees convergence to a global optimum; however, such a schedule is not feasible in applications because it is too slow for practical purposes. Therefore, faster cooling schedules are adopted in practice. One of the most used cooling schedules follows a geometric law: Tk+1 = αTk, where α ∈ (0, 1) and k is the number of iterations (Blum and Roli, 2003). Such a schedule corresponds to an exponential decay of the temperature. More successful variants are non-monotonic cooling schedules (Lourenço et al., 2001), which are characterized by alternating phases of cooling and reheating, thus providing a balance between revisiting some regions and exploring new regions of the search space. In actual applications a good strategy can be to vary the cooling rule during the search: the temperature can be constant or linearly decreasing at the beginning, in order to sample the search space, and T might then follow a geometric rule at the end of the search to converge to a local minimum. SA has been successfully applied to several combinatorial optimization problems, such as the Quadratic Assignment Problem (Connolly, 1990) and Job Shop Scheduling Problems (van Laarhoven et al., 1992).
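Combining the acceptance rule of Figure 1.2 with the geometric schedule Tk+1 = αTk gives a compact SA loop. The following sketch minimizes a toy one-dimensional objective; the objective, neighborhood, and parameter values are illustrative assumptions, not from the thesis:

```python
import math
import random

def simulated_annealing(f, x0, T0=10.0, alpha=0.95, steps_per_T=50, T_min=1e-3):
    """Minimize f starting from x0, with geometric cooling T <- alpha * T."""
    x, fx = x0, f(x0)
    best, fbest = x, fx
    T = T0
    while T > T_min:  # termination condition: temperature floor
        for _ in range(steps_per_T):  # iterations held at each temperature level
            x_new = x + random.uniform(-1.0, 1.0)  # pick a random neighbor
            f_new = f(x_new)
            delta = f_new - fx
            # accept improving moves always, worse moves with prob e^(-delta/T)
            if delta < 0 or random.random() < math.exp(-delta / T):
                x, fx = x_new, f_new
                if fx < fbest:
                    best, fbest = x, fx
        T *= alpha  # geometric cooling schedule
    return best, fbest

random.seed(0)
x, fx = simulated_annealing(lambda x: (x - 3.0) ** 2, x0=-10.0)
```

On this convex toy objective the search settles near x = 3; for multimodal objectives, the non-monotonic reheating variants mentioned above can be substituted for the geometric Update(T) step.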

Tabu Search

The basic idea of Tabu Search (TS) was first introduced by Glover (1986). It is among the most cited and used metaheuristics for combinatorial optimization problems. The basic idea of TS is to use information about the search history to guide local search approaches to escape from local minima and to implement an explorative strategy. This is done by using a short term memory called the tabu list, which is a small list storing some forbidden solutions.

TS uses a local search algorithm that in each step tries to make the best possible move from the current solution s to a neighboring solution s', even if that move worsens the objective function value. To prevent the local search from immediately returning to a previously visited solution, and to avoid cycling, moves to recently visited solutions are forbidden. This is done by keeping track of previously visited solutions, adding them to the tabu list, and forbidding moves to them. These moves are forbidden for a pre-specified number of algorithm iterations, for example t iterations.

Forbidding possible moves dynamically restricts the neighborhood N(s) of the current solution s to a subset A(s) of admissible solutions. At each iteration the best solution from the allowed subset A(s) is chosen as the new current solution. Additionally, this solution is added to the tabu list and one of the solutions already in the tabu list is removed, usually in FIFO (First In First Out) order. The basic algorithm for TS is shown in Figure 1.3.

s ← GenerateInitialSolution()
sbest ← s
tabulist ← ∅
while termination conditions not met do
    A(s) ← GenerateAdmissibleSolutions(s)
    s ← ChooseBestOf(N(s) | tabulist)
    Update(tabulist)
    if (f(s) < f(sbest)) then
        sbest ← s
    end if
end while

Figure 1.3: Simple Tabu Search Algorithm

Removal of elements from the tabu list is important for two reasons. First, the size of the tabu list is kept small for fast access, so when the list is full and there is no more room for new elements, the element added earliest has to be removed. Second, it is important to remove already added solutions from the list so that they become available again for subsequent moves. The algorithm stops when a termination condition is met or if the allowed set is empty, i.e. all the solutions in N(s) are forbidden by the tabu list; however, this rarely happens, because the size of the tabu list is usually very small compared to the actual neighborhood size |N(s)|.
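The steps of Figure 1.3 can be sketched concretely. The following minimal TS implementation minimizes a toy objective over bit tuples with a bit-flip neighborhood and a FIFO tabu list of full solutions; the problem and all names are illustrative, not from the thesis:

```python
from collections import deque

def tabu_search(f, x0, n_iters=100, tabu_size=5):
    """Minimize f over bit tuples; FIFO tabu list of recently visited solutions."""
    x = tuple(x0)
    best, fbest = x, f(x)
    tabu = deque(maxlen=tabu_size)  # FIFO: oldest entry drops out automatically
    tabu.append(x)
    for _ in range(n_iters):
        # admissible set A(s): all bit-flip neighbors not currently tabu
        neighbors = [x[:i] + (1 - x[i],) + x[i + 1:] for i in range(len(x))]
        admissible = [n for n in neighbors if n not in tabu]
        if not admissible:  # every move forbidden (rare for small tabu lists)
            break
        # move to the best admissible neighbor, even if it is worse than x
        x = min(admissible, key=f)
        tabu.append(x)
        if f(x) < fbest:
            best, fbest = x, f(x)
    return best, fbest

# toy objective: number of bits differing from a target pattern
target = (1, 0, 1, 0, 1)
best, fbest = tabu_search(lambda x: sum(a != b for a, b in zip(x, target)),
                          (0, 0, 0, 0, 0))
```

Because the best admissible neighbor is taken even when it worsens f, the search can climb out of a local minimum, while the tabu list stops it from immediately sliding back in.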

The size of the tabu list (tabu size) controls the memory of the search process. With a small tabu size the search will concentrate on small areas of the search space, whereas a large tabu size forces the search process to explore larger regions, because it forbids revisiting a higher number of solutions. The tabu size can be varied during the search, leading to more robust algorithms. An example of dynamically changing the size of the tabu list is presented in (Battiti and Protasi, 1997), where the tabu size is increased if solutions are repeated and decreased if there are no improvements.
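A much-simplified sketch of this idea follows; the actual reactive scheme of Battiti and Protasi (1997) is more elaborate, and all names here are illustrative assumptions:

```python
from collections import deque

def adapt_tabu_size(tabu, seen, x, size, min_size=3, max_size=50):
    """Grow the tabu tenure when a solution repeats (cycling); shrink it otherwise."""
    if x in seen:
        size = min(max_size, size + 1)  # repetition detected: forbid more moves
    else:
        seen.add(x)
        size = max(min_size, size - 1)  # no repetition: relax the restriction
    # re-wrap the existing entries in a deque with the new capacity
    return deque(tabu, maxlen=size), size
```

Calling this once per iteration with the current solution lets the tabu list expand while the search is cycling and contract while it is making progress.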

Another important consideration is that the short term memory used as the tabu list does not actually contain full solutions, because managing a list of solutions is computationally very inefficient. Instead of adding complete solutions to the list, some attributes of the solutions are stored, since storing attributes is much more efficient than storing complete solutions. Because more than one attribute can be considered, a tabu list is built for each of them. The set of attributes and the corresponding tabu lists define the tabu conditions, which are used to filter the neighborhood N(s) of a solution s and generate the allowed set A(s). Although managing attributes is more efficient than managing full solutions, it may introduce a loss of information, as forbidding an attribute means assigning tabu status to probably more than one solution, since more than one solution can have the same attributes. A major disadvantage of this phenomenon is that an unvisited good quality solution can be excluded from the allowed set. To overcome this problem, aspiration criteria are defined which allow a solution to be included in the allowed set A(s) even if it is forbidden by the tabu conditions. Aspiration criteria define the aspiration conditions that are used to increase the size of the allowed set A(s) by adding more elements to it during the search process. The most commonly used aspiration criterion selects solutions which are better than the current best one. The Tabu Search algorithm with aspiration conditions is shown in Figure 1.4.

s ← GenerateInitialSolution()
Initialize tabu lists (tl1, tl2, ..., tln)
k ← 0
while termination conditions not met do
    AllowedSet(s, k) ← {s' ∈ N(s) | s' does not violate a tabu condition,
                        or it satisfies at least one aspiration condition}
    s ← ChooseBestOf(AllowedSet(s, k))
    UpdateTabuListsAndAspirationConditions()
    k ← k + 1
end while

Figure 1.4: Tabu Search Algorithm with aspiration condition
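Attribute-based tabu lists and the best-so-far aspiration criterion can be sketched together. Here the stored attribute is the index of the flipped bit rather than the full solution, and a tabu move is admitted whenever it would improve on the best solution found; the problem and all names are illustrative assumptions:

```python
from collections import deque

def tabu_search_attr(f, x0, n_iters=100, tenure=3):
    """Tabu search over bit tuples storing move attributes (flipped bit index),
    with a best-so-far aspiration criterion."""
    x = tuple(x0)
    best, fbest = x, f(x)
    tabu = deque(maxlen=tenure)  # attribute tabu list: recently flipped indices
    for _ in range(n_iters):
        candidates = []
        for i in range(len(x)):
            n = x[:i] + (1 - x[i],) + x[i + 1:]
            fn = f(n)
            # tabu condition on the attribute i; aspiration: a tabu move is
            # still allowed if it beats the best solution found so far
            if i not in tabu or fn < fbest:
                candidates.append((fn, i, n))
        if not candidates:
            break
        fn, i, x = min(candidates)  # best allowed move, even if worse than x
        tabu.append(i)
        if fn < fbest:
            best, fbest = x, fn
    return best, fbest

target = (1, 0, 1, 0, 1)
best, fbest = tabu_search_attr(lambda x: sum(a != b for a, b in zip(x, target)),
                               (0, 0, 0, 0, 0))
```

Storing only bit indices makes each tabu test O(tenure) regardless of solution size, at the cost of forbidding every solution reachable by flipping that bit; the aspiration test is what keeps an unvisited improving solution from being lost to that coarser filter.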

To date, TS appears to be one of the most successful metaheuristics. For many problems, TS implementations are among the algorithms giving the best tradeoff between solution quality and the computation time required (Nowicki and Smutnicki, 1996, Vaessens et al., 1996). However, the empirical success of this algorithm requires a very careful choice of parameter values and implementation data structures, which includes managing the tabu size, deciding the number of iterations t, and carefully choosing the aspiration criteria.
