The user-level extensibility of CP languages has been an important goal for over a decade. In the traditional global search approach to CP (namely heuristic-based tree search interleaved with propagation), higher-level abstractions for describing new constraints include indexicals [17]; (possibly enriched) deterministic finite automata (DFAs) via the automaton [2] and regular [11] generic constraints; and multi-valued decision diagrams (MDDs) via the mdd [5] generic constraint. Usually, a generic but efficient propagation algorithm achieves a suitable level of local consistency by processing the higher-level description of the new constraint. In the more recent local search approach to CP (called constraint-based local search, CBLS, in [14]), higher-level abstractions for describing new constraints include invariants [9]; a subset of first-order logic with arithmetic via combinators [16] and differentiable invariants [15]; and existential monadic second-order logic for constraints on set decision variables [1]. Usually, a generic but incremental algorithm maintains the constraint and variable violations by processing the higher-level description of the new constraint.


While using unbounded auxiliary variables one can avoid local traps and achieve polynomial efficiency in the analog search times, the question is whether one can design a continuous-time dynamical system for k-SAT using only bounded variables (implementation-friendly), while preserving as many of the desirable features of the system as possible. At a recent conference [30] we presented an implementation-friendly model for solving k-SAT (see below), which is a cellular neural network model similar to those used in CNN computers and also to Hopfield models. Here we rigorously define the parameter region where a one-to-one correspondence between solutions and fixed points can be achieved. We also show that the optimal region of parameters (where the system is most efficient) is independent of the properties of the problem. Most importantly, our system does not get trapped in local minima: finding the solution is a single continuous-time process which needs no ‘‘intervention’’ or tuning of parameters.


The meta-heuristic employs diversification to escape local minima. The search procedure is depicted in Algorithm 3. The diversification also moves toward a better objective function value by selecting a set of flights and setting their delays to zero. The selection is not totally random, as it uses a reverse sequence of probabilities derived from (4) with a ratio of 1.5. Once the constraints are satisfied, the diversification level is set to a higher value in order to ensure more modification of the flights’ delays, and the state changes according to the new violations. This process continues until the constraints are satisfied again, while the objective function is monitored to store the best solution found so far, until timing out or reaching the maximum number of iterations.


Besides the importance of the practical aspects, solving complex NRPs, which are NP-hard [1], also raises a scientific challenge to researchers. NRPs are a special type of scheduling problem with a wide range of heterogeneous and specific constraints; they are thus over-constrained and hard to solve efficiently. They have been extensively studied in the Operational Research, Artificial Intelligence and local search (meta-heuristics) communities for more than 40 years [2-4]. Exact procedures, in particular Operational Research techniques such as linear programming [5], integer programming [6] and mixed-integer programming [7], have been proposed to tackle the problems. Another exact procedure, constraint programming, which originated from Artificial Intelligence research, also forms an important research direction in solving NRPs [8]. Its flexibility in modelling complex logical constraints makes constraint programming a strong candidate to model and solve NRPs. However, due to the exponential growth of the search space with problem size, exact procedures including constraint programming are computationally expensive for solving large-scale nurse rostering problems.


The technique described before is called branch and prune. For the sake of simplification of the domain’s representation, most solving strategies impose that only single F-boxes be represented (as opposed to a union of F-boxes). In that sense, pruning corresponds to narrowing the original F-box into a smaller one, where the lengths of some F-intervals are decreased by filtering algorithms (or, if an F-interval becomes empty, the original F-box is proved inconsistent). The branching step usually consists of splitting the original F-box into two smaller F-boxes by splitting one of the original variable domains around an F-number, which in most algorithms is the F-number representing the mid-value of the F-interval of its domain.
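The branching step above can be sketched as follows; this is a minimal illustration, assuming an F-box is represented as a list of closed floating-point intervals, one per variable (the names `Interval` and `split_box` are ours, not the paper's):

```python
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

def split_box(box, var):
    """Split a box into two smaller boxes around the floating-point
    mid-value of the chosen variable's F-interval."""
    iv = box[var]
    mid = iv.lo + (iv.hi - iv.lo) / 2.0   # an F-number inside the interval
    left, right = box.copy(), box.copy()
    left[var] = Interval(iv.lo, mid)
    right[var] = Interval(mid, iv.hi)
    return left, right

box = [Interval(0.0, 4.0), Interval(-1.0, 1.0)]
lo_half, hi_half = split_box(box, 0)      # split the first variable's domain
```

A full branch-and-prune loop would alternate this split with the filtering (pruning) step, discarding boxes proved inconsistent.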


Abstract. Multilocal programming aims to locate all the local solutions of an optimization problem. A stochastic method based on a multistart strategy and a derivative-free filter local search for solving general constrained optimization problems is presented. The filter methodology is integrated into a coordinate search paradigm in order to generate a set of trial approximations that might be acceptable if they improve the constraint violation or the objective function value relative to the current one. Preliminary numerical experiments with a benchmark set of problems show the effectiveness of the proposed method.
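As a hedged sketch of the acceptance rule described in this abstract (the function names and the aggregation of constraint violations below are our assumptions, not the paper's notation), a trial point is acceptable if it improves either the objective value f or the constraint violation h relative to the current point:

```python
def violation(g_values):
    """Aggregate constraint violation h(x): sum of positive parts of
    constraints written as g_i(x) <= 0 (an illustrative choice)."""
    return sum(max(0.0, g) for g in g_values)

def filter_acceptable(f_trial, h_trial, f_cur, h_cur):
    """Filter test: the trial is acceptable if it is not dominated by
    the current point in (objective, violation) space."""
    return f_trial < f_cur or h_trial < h_cur

# A trial that worsens f but reduces infeasibility is still acceptable:
print(filter_acceptable(f_trial=5.2, h_trial=0.1, f_cur=4.8, h_cur=0.7))
```

The coordinate search would generate such trials along the coordinate directions and keep those passing this test.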


Heuristic search algorithms can be classified as being either instance-based or model-based [137, 178]. Instance-based heuristics generate new candidate solutions using only the current solution or set of solutions, while model-based heuristics rely on a probabilistic model that is updated according to previously seen solutions. Most traditional search methods can be considered instance-based, including genetic algorithms [77, 47], simulated annealing [89, 167], tabu search [43, 44, 45], iterated local search [106, 107], and others. On the other hand, model-based search (MBS) is a more recent approach, which has gained increasing popularity in the last decade. Model-based search algorithms generate candidate solutions using a parameterized probabilistic model that is updated using previously seen solutions in such a way that the search concentrates on the regions containing high-quality solutions [178]. Figure 2.7 presents the main scheme of model-based search. Clearly, EDAs belong to this class of algorithms. Other algorithms that can be classified as model-based include ant colony optimization (ACO) [32, 34, 33], the cross-entropy method [142], and stochastic gradient ascent [141, 12].


To overcome the problem of the disruption of building blocks, a growing interest arose in methods that are able to learn the structure of the problem on the fly and use this information to ensure efficient growth and mixing of BBs [24]. EDAs (Estimation of Distribution Algorithms) [17], also known as PMBGAs (Probabilistic Model Building Genetic Algorithms) [24], differ from the simple GA in the way they process the population of promising solutions (mating pool) and generate new individuals [20]. Instead of the traditional recombination and mutation operators, these algorithms use probabilistic modeling of promising solutions to guide the exploration of the search space. The main feature of EDAs is to prevent the disruption of important partial solutions, which is done by giving them a high probability of being present in the offspring population [13].
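The scheme above can be illustrated with a minimal UMDA-style EDA, one of the simplest members of this family (the excerpt does not commit to a particular variant, so this sketch is an assumption): promising solutions are selected, a univariate probability vector is estimated from them, and offspring are sampled from that model instead of applying crossover and mutation.

```python
import random

def umda(fitness, n_bits, pop_size=100, n_select=50, generations=50, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        selected = pop[:n_select]                      # mating pool of promising solutions
        probs = [sum(ind[i] for ind in selected) / n_select
                 for i in range(n_bits)]               # univariate probabilistic model
        pop = [[1 if rng.random() < p else 0 for p in probs]
               for _ in range(pop_size)]               # sample offspring from the model
    return max(pop, key=fitness)

best = umda(fitness=sum, n_bits=20)                    # onemax as a toy objective
```

Because each bit is modeled independently, this simplest variant cannot capture linkage between BBs; the more advanced EDAs cited above replace the univariate model with one that learns such structure.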


Future work will address other useful operations such as projection of polyhedra, conversions to and from other representations, and operations that are specific to symbolic state-space exploration algorithms. For this particular application, IRVA in their present form are still impractical, since they only provide efficient representations of polyhedra in spaces of small dimension. (Indeed, the size of an IRVA grows with the number of components of the polyhedron it represents, and simple polyhedra such as n-cubes have exponentially many components in the spatial dimension n.) We plan to tackle this problem by applying to IRVA the reduction techniques proposed in [4], which seems feasible thanks to the acyclicity of their transition relation. This would substantially improve the efficiency of the data structure for large spatial dimensions.


Given an undirected graph g and an instance tr of VarRootedSpanningTree(g, s, t), an edge e = (u, v) such that e ∈ E(g) \ E(tr) is called a replacing edge of tr. We denote by rpl(tr) the set of replacing edges of tr. Given e ∈ rpl(tr), an edge e′ that belongs to the path between the two endpoints of e on tr is called a replaceable edge of e. We denote by rpl(tr, e) the set of replaceable edges of e. Intuitively, a replacing edge e is an edge that is not in the tree tr but that can be added to tr (this edge insertion creates a cycle C when we ignore the orientations of the edges of tr), and all edges of this cycle except e are replaceable edges of e.
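Under the definitions above, both sets can be computed directly; the following sketch is illustrative (function names are ours), assuming the graph and tree are given as edge lists over the same vertices:

```python
from collections import defaultdict

def replacing_edges(graph_edges, tree_edges):
    """rpl(tr): edges of g that are not in the tree tr."""
    tree = {frozenset(e) for e in tree_edges}
    return [e for e in graph_edges if frozenset(e) not in tree]

def tree_path(tree_edges, u, v):
    """Edges on the unique u-v path in the tree, found by DFS."""
    adj = defaultdict(list)
    for a, b in tree_edges:
        adj[a].append(b)
        adj[b].append(a)
    def dfs(node, target, parent):
        if node == target:
            return []
        for nxt in adj[node]:
            if nxt != parent:
                path = dfs(nxt, target, node)
                if path is not None:
                    return [(node, nxt)] + path
        return None
    return dfs(u, v, None)

def replaceable_edges(tree_edges, e):
    """rpl(tr, e): the tree edges on the cycle that inserting e creates."""
    u, v = e
    return tree_path(tree_edges, u, v)

tree = [(1, 2), (2, 3), (3, 4)]
graph = tree + [(1, 4)]
# (1, 4) is a replacing edge; its replaceable edges form the path 1-2-3-4.
```

Inserting (1, 4) into the tree creates the cycle 1-2-3-4-1, and every edge of that cycle except (1, 4) is replaceable, matching the intuition stated above.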


Following Voudris [9], each real-valued variable in a continuous optimization problem is represented by a number of bits. Therefore, flip-bit mutation is used as a local search method. The local search moves from one solution to another by flipping the value of a bit in the solution. It starts from a random bit and then examines all possible bits.
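A minimal sketch of this bit-flip local search, assuming first-improvement acceptance over a bit-string objective (a detail the excerpt does not specify):

```python
import random

def bit_flip_local_search(bits, fitness, seed=0):
    rng = random.Random(seed)
    bits = bits[:]
    n = len(bits)
    start = rng.randrange(n)      # start from a random bit...
    best = fitness(bits)
    for i in range(n):            # ...then examine all possible bits
        j = (start + i) % n
        bits[j] ^= 1              # flip bit j
        f = fitness(bits)
        if f > best:
            best = f              # keep the improving flip
        else:
            bits[j] ^= 1          # undo the non-improving flip
    return bits

result = bit_flip_local_search([0, 1, 0, 0, 1], fitness=sum)
```

With `sum` as the toy objective, every 0-bit flip is improving, so a single pass reaches the all-ones string regardless of the random starting bit.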


Kleene Algebra [BP12]. Moreover, Thierry Coquand and Vincent Siles described and formally verified a procedure in [CS11] based on Brzozowski’s derivatives [Brz64]. Alexander Krauss and Tobias Nipkow also came up with a decision procedure for regular expression equivalence and used it to prove equations in relation algebra in Isabelle/HOL [KN12]. Marco Almeida, Nelma Moreira and Rogério Reis developed an algorithm in [AMR10] that, without constructing the underlying automata, decides the equivalence of regular expressions by testing the equivalence of their partial derivatives. This algorithm was then implemented and verified in Coq by Nelma Moreira, David Pereira and Simão Melo de Sousa [MPMdS12]. Another example is the work of Tobias Nipkow and Dmitriy Traytel in [NT14], which formalises a unified framework for verified regular expression decision procedures.
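Brzozowski's derivatives, which several of the cited procedures build on, can be sketched compactly; matching a string then amounts to taking successive derivatives and testing nullability at the end (the constructor names below are illustrative, not from any of the cited formalisations):

```python
from dataclasses import dataclass

class Re: pass

@dataclass(frozen=True)
class Empty(Re): pass            # matches nothing

@dataclass(frozen=True)
class Eps(Re): pass              # matches only the empty string

@dataclass(frozen=True)
class Chr(Re):
    c: str

@dataclass(frozen=True)
class Alt(Re):
    l: Re
    r: Re

@dataclass(frozen=True)
class Seq(Re):
    l: Re
    r: Re

@dataclass(frozen=True)
class Star(Re):
    r: Re

def nullable(r):
    """Does r accept the empty string?"""
    if isinstance(r, (Eps, Star)): return True
    if isinstance(r, Alt): return nullable(r.l) or nullable(r.r)
    if isinstance(r, Seq): return nullable(r.l) and nullable(r.r)
    return False                 # Empty, Chr

def deriv(r, a):
    """Brzozowski derivative of r with respect to symbol a."""
    if isinstance(r, Chr):
        return Eps() if r.c == a else Empty()
    if isinstance(r, Alt):
        return Alt(deriv(r.l, a), deriv(r.r, a))
    if isinstance(r, Seq):
        d = Seq(deriv(r.l, a), r.r)
        return Alt(d, deriv(r.r, a)) if nullable(r.l) else d
    if isinstance(r, Star):
        return Seq(deriv(r.r, a), r)
    return Empty()               # Empty, Eps

def matches(r, s):
    for a in s:
        r = deriv(r, a)
    return nullable(r)

ab_star = Star(Seq(Chr('a'), Chr('b')))   # the regular expression (ab)*
```

The verified equivalence checkers cited above go one step further: they compare two expressions by exploring all derivatives of the pair up to similarity, rather than matching a single string.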


favored the adaptation of specialized cognitive mechanisms for detecting cheaters. In fact, evolutionary psychologists who study the cognitive structure of the human brain are converging toward the conclusion that humans do not develop general analytical skills that are applied to a variety of specific problems, as implied by the unitary cognitive foundation that characterizes the traditional model of the individual in neoclassical economic theory. Rather, the human brain appears to have evolved a domain-specific human reasoning architecture (CLARK; KARMILOFF-SMITH, 1991). More generally, the authors report that growing evidence indicates that humans have reasoning, learning, and preference circuits that: (i) are complexly specialized for solving the specific adaptive problems our hominid ancestors regularly encountered; (ii) reliably develop in all normal human beings; (iii) develop without any conscious effort; (iv) develop without any formal instruction; (v) are applied without any awareness of their underlying logic; and (vi) are distinct from more general abilities to process information or behave intelligently. Indeed, the authors add, these reasoning, learning, and preference circuits have all the hallmarks of what people usually think of as “instincts” (for discussion see TOOBY; COSMIDES, 1992). The authors comment that “this view implies [furthermore] that cultural differences are vastly overstated, because beneath existing surface variability all humans share the same set of preference-generating and decision-making devices” (COSMIDES; TOOBY, 1994, p. 330; see also PINKER, 1994). Grounded on such findings, Axelrod (1986) investigated the emergence and stability of norms on the basis of a double indirect reciprocity mechanism he labeled “metanorms”, which specifies the agents’ willingness to punish someone who did not punish violators of the said norms.


Research design: A research design is similar to an architectural blueprint (Yin, 1994; 2002). It is a plan, as described by Merriam (1991), for assembling, organizing and integrating data, and it results in a specific finding. For this study, the research design is determined by several important factors, such as how the researcher shaped the problems, the research inquiry statements developed earlier, and the types of findings desired. Based on the research setting, the Phases developed for collecting the data follow a kind of cyclic pattern, repeated over and over again (Glaser, 2002). It is important to note that this cycle only stopped when the researcher was confident with the information collected, especially when the same specific pattern of behavior emerged over and over again. One way to justify that the findings are accurate is to verify the data and findings with the respondents (Anshel, 2002). The researcher only reached the final Phase when the general picture reaffirmed itself over and over again (Burns, 1994). The Phases of the research cycle (Fig. 1) provide a clearer picture of the whole process.


A GA for the CTSP which finds inter-cluster paths and then intra-cluster paths was developed in Potvin & Guertin [37]. Comparisons were performed with the heuristic proposed in Gendreau, Laporte & Potvin [14] and with lower bounds obtained using the procedure suggested in Jongens & Volgenant [23]. This GA solved problems with up to 500 vertices and with 4 and 10 clusters. A Two-Level Genetic Algorithm (TLGA) was proposed for the CTSP in Ding, Cheng & He [8]. At the lower level, the algorithm finds Hamiltonian cycles in each cluster, whereas at the higher level, it randomly chooses an edge to be deleted from the cycle in each cluster and simultaneously determines the routing among clusters. Computational results demonstrated that the TLGA outperformed the classical GA developed by the same authors [8].


The governmental entities should at least ensure that there are appropriate channels for unemployed people to find the majority of existing opportunities and that those channels are working and being promoted correctly. It seems that “in several countries that have been particularly hard-hit by the global economic and financial crisis – among them Spain, Italy, Greece, and Ireland – search via informal channels clearly outweighs the use of the public employment services as a job search method” (Bachmann and Baumgarten, 2013). This situation presents two different opportunities: for the private sector, which can develop new informal channels through the creation of good job-searching platforms (such as the one I am proposing in the current project), and for the public sector, which could analyse those informal channels and understand their advantages in order to optimize and enhance public employment services. Those organizations also have the duty of guaranteeing the existence of fair and unbiased services and opportunities, since some people can face more difficulties during the job-searching process. As some authors stated, “efforts should be made such that search becomes less costly and more worthwhile for women, particularly if they have many family responsibilities” (Bachmann and Baumgarten, 2013). With the development of new tools and platforms that facilitate access to information, education, training and career opportunities, the ones who struggle the most to get employed could find new and easier ways of doing so.


Keywords: Vehicle routing; Multiple deliverymen; Iterated Local Search; Large Neighborhood Search.
Abstract: This paper addresses the vehicle routing problem with time windows and multiple deliverymen, a variant of the vehicle routing problem which includes the decision of the crew size of each delivery vehicle, besides the usual scheduling and routing decisions. This problem arises in the distribution of goods in congested urban areas where, due to the relatively long service times, it may be difficult to serve all customers within regular working hours. Given this difficulty, an alternative consists in resorting to additional deliverymen to reduce the service times, which typically leads to extra costs in addition to travel and vehicle usage costs. The objective is to define routes for serving clusters of customers, while minimizing the number of routes, the total number of assigned deliverymen, and the distance traveled. Two metaheuristic approaches based on Iterated Local Search and Large Neighborhood Search are proposed to solve this problem. The performance of the approaches is evaluated using sets of instances from the literature.


As a result, major search companies have invested significant resources into geographic search technologies, also often called local search. This paper studies geographic search queries, i.e., text queries such as "hotel Newyork" that employ geographical terms in an attempt to restrict results to a particular region or location. The main motivation is to identify opportunities for improving geographical search and related technologies, and we perform an analysis of 36 million queries from the recently released AOL query trace. First, this paper identifies typical properties of geographic search (geo) queries based on a manual examination of several thousand queries. Based on these observations, this paper builds a classifier that separates the trace into geo and non-geo queries. It then investigates the properties of geo queries in more detail, and relates them to web sites and users associated with such queries.
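The excerpt does not describe the classifier's features, so as a purely illustrative stand-in, a toy gazetteer-based separator might look like this (the place list is a hypothetical placeholder, not data from the paper):

```python
# Tiny stand-in gazetteer of place names (hypothetical).
GAZETTEER = {"new york", "newyork", "paris", "chicago", "texas"}

def is_geo_query(query):
    """Label a query as geo if any term or adjacent term pair is a
    known place name."""
    terms = query.lower().split()
    unigrams = set(terms)
    bigrams = {" ".join(p) for p in zip(terms, terms[1:])}
    return bool((unigrams | bigrams) & GAZETTEER)

print(is_geo_query("hotel Newyork"))   # a geo query
print(is_geo_query("cheap laptops"))   # a non-geo query
```

A real classifier over a 36-million-query trace would need far richer features (ambiguous place names, context terms like "hotel" or "map"), which is precisely what the manual examination step is for.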


Abstract — This paper introduces a novel improved evolutionary algorithm, which combines genetic algorithms and hill climbing. Genetic Algorithms (GA) belong to a class of well-established optimization meta-heuristics, and their behavior has been studied and analyzed in great detail. Various modifications have been proposed by different researchers, for example modifications to the mutation operator. These modifications usually change the overall behavior of the algorithm. This paper presents a binary GA with a modified mutation operator, which is based on the well-known Hill Climbing Algorithm (HCA). The resulting algorithm, referred to as GAHC, also uses an elite tournament selection operator. This selection operator preserves the best individual from the GA population during the selection process while maintaining the positive characteristics of the standard tournament selection. This paper discusses the GAHC algorithm and compares its performance with a standard GA.
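The abstract does not spell out the HCA-based operator, so the following is only an assumption-laden sketch of the general idea: instead of flipping a random bit unconditionally, the mutation takes one hill-climbing step, keeping the single bit flip that best improves the individual's fitness.

```python
def hill_climbing_mutation(individual, fitness):
    """One greedy hill-climbing step over all single-bit flips
    (illustrative; the paper's exact operator may differ)."""
    best = individual[:]
    best_f = fitness(best)
    for i in range(len(individual)):
        trial = individual[:]
        trial[i] ^= 1            # flip bit i
        f = fitness(trial)
        if f > best_f:           # keep the best improving flip seen so far
            best, best_f = trial, f
    return best

child = hill_climbing_mutation([0, 1, 0, 1], fitness=sum)
```

If no flip improves the individual, the operator returns it unchanged, so the mutation never degrades a local optimum.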


Research conducted over the past decades shows that IS users have different perceptions, priorities, and cultural habits. These variables influence user perceptions about the success of these systems. Fewer studies evaluate what makes a system successful or how to evaluate this success. Bokhari (2005, p. 211) suggests that an IS can be considered successful if it satisfies its users’ needs and achieves the objectives and goals of the organization. However, measuring IS success is complex, because numerous factors affect its development and operation. The literature shows that this complexity leads to the development of measurement instruments that evaluate indirectly related variables, such as user satisfaction, system use, service quality, and information quality (Li, 1997; McHaney & Cronan, 1998; Doll & Torkzadeh, 1989; Ives, Olson, & Baroudi, 1983).
