Simultaneously, to the best of our knowledge, this is the first project proposed to optimize microcavity structures with a focus on robustness. Different types of microcavities were optimized, and the strategy proposed in this study proved effective, leading to structures with a higher Quality Factor than those previously described in the literature while remaining robust to the growth process. Microcavity structures have attracted the attention of scientists and engineers and have been applied to both technological and purely scientific purposes. The optimization of microcavity parameters is a challenging task, mainly because of uncertainties related to the growth process, which cause the synthesis of semiconductor nanodevices with undesirable layer thicknesses. The optimization procedure proposed here was able to find satisfactory results, surpassing the known experimental solution. Moreover, the procedure found parameter sets that minimized the problems caused by the uncertainty. The results present a high Quality Factor despite the uncertainties involved, which can assist experts in the development of optimized structures. Many microcavity structures with different resonance peaks were optimized. As expected, an increase in the value of the resonance peak led to a higher Quality Factor. In all optimization cases, the shift of the desired peak position was minimized.
The literature describes different ways to deal with this nonlinear relation. If the head loss term in (7) is neglected, the relation between power and flow becomes linear. In this case, the computed power is larger than the one given by the real curve, producing larger errors in the region of large flows. This is undesirable because current practice indicates that hydro stations tend to operate at maximum power as much as possible. Another approximation consists of using a constant value for the head loss, corresponding to the maximum discharge flow, as illustrated by the red line in Figure 1. This is a more conservative approach and is attractive because the error is small in the region of large discharge flows. The adoption of a particular approximation depends on the nature of the station. For large reservoirs, even when flows are large, we can assume that the head barely changes, so using a constant head will not introduce large errors. For small reservoirs, however, larger head variations can easily occur.
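The two approximations can be sketched numerically. All constants below (efficiency, gross head, loss coefficient, maximum flow) are illustrative assumptions, not values from the text, and a simple quadratic head-loss law is assumed:

```python
# Sketch of the two head-loss approximations discussed above.
# All constants are illustrative assumptions, not taken from the source.

RHO_G_ETA = 9.81 * 0.9   # gravity times assumed efficiency (per unit density)
K_LOSS = 2.0e-5          # assumed quadratic head-loss coefficient
H_GROSS = 50.0           # assumed gross head in metres
Q_MAX = 100.0            # assumed maximum discharge flow

def power_exact(q):
    """Nonlinear curve: quadratic head loss k*q^2 reduces the net head."""
    return RHO_G_ETA * q * (H_GROSS - K_LOSS * q**2)

def power_no_loss(q):
    """Linear approximation: head loss neglected (overestimates power)."""
    return RHO_G_ETA * q * H_GROSS

def power_const_loss(q):
    """Conservative approximation: constant loss taken at maximum discharge."""
    return RHO_G_ETA * q * (H_GROSS - K_LOSS * Q_MAX**2)

# Near maximum flow the conservative model matches the real curve,
# while the linear model overestimates the power.
assert abs(power_const_loss(Q_MAX) - power_exact(Q_MAX)) < 1e-9
assert power_no_loss(Q_MAX) > power_exact(Q_MAX)
```

The assertions make the trade-off explicit: the no-loss model is always optimistic, while the constant-loss model is exact at `Q_MAX` and pessimistic elsewhere.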
2011). The energy of ultrasonic backscattering (Arthur et al., 2003) and tissue attenuation (Ueno et al., 1990) are also known to depend on temperature and have been proposed as means to monitor it using, for example, the gray-level information in B-mode images (Alvarenga et al., 2017; Teixeira et al., 2014). Later, in the 2000s, photoacoustic (PA) imaging was proposed as a technique for temperature estimation based on the temperature dependence of the PA signal amplitude (Larina et al., 2005; Pramanik and Wang, 2009; Schüle et al., 2004; Shah et al., 2008). Temperature estimation using PA imaging can be an interesting approach due to its good spatial resolution and optical contrast, besides being capable of imaging at greater depths than purely optical techniques (Wang and Hu, 2012; Xu and Wang, 2006). The generation of PA-based thermal images during thermotherapy procedures has been investigated by analyzing the laser-induced pressure profile in PA images (Larina et al., 2005).
The floorplanning of VLSI circuits is of utmost importance, since the physical layout of a chip determines its performance and reliability. A hybrid simulated annealing (HSA) algorithm is presented in (J. Chen, Zhu, & Ali, 2011) for non-slicing VLSI floorplanning. The HSA uses a greedy algorithm to create an initial B*-tree, a new operation to explore the search space, and an innovative search strategy to balance global exploration and local exploitation. The cost function combines the area and the wirelength used in the chip. In the B*-tree layout representation, the root represents the module at the bottom-left corner of the floorplan; the right child of a given node is adjacent to its parent on its right side, and the left child is located above and adjacent to its parent node. The modules to be placed are sorted in decreasing order by an evaluation function that compares the height and width of each module with the corresponding dimensions of the floorplanning region, multiplied by a pair of weights. Two experiments were carried out, using area optimization and simultaneous area and wirelength optimization as objective functions. Since only the new approach and one comparison method were implemented in the same programming language on the same platform, the running times of the other comparison methods were not listed. Benchmark tests from MCNC were used to gather results and draw conclusions. The HSA algorithm showed the best improvement on the area-only objective function tests. Regarding simultaneous area and wirelength optimization, the tests show that the new approach achieves better average area results but slightly worse average wirelength usage compared with the other tested methods.
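A toy sketch of the B*-tree placement convention described above (right child to the parent's right, left child above it) may help make the representation concrete. Class and field names are illustrative assumptions, and the y-coordinates, which a real implementation derives from a contour structure, are omitted for brevity:

```python
# Minimal sketch of deriving x-coordinates from a B*-tree, following the
# convention described in the text: right child placed to the parent's
# right, left child stacked above the parent. Names are illustrative.

class Node:
    def __init__(self, name, width, height):
        self.name, self.width, self.height = name, width, height
        self.left = None    # module placed above the parent
        self.right = None   # module placed to the parent's right

def assign_x(node, x=0):
    """DFS that fixes each module's x-coordinate; the y-coordinate would
    come from a contour data structure, omitted here."""
    coords = {}
    if node is None:
        return coords
    coords[node.name] = x
    coords.update(assign_x(node.right, x + node.width))  # shift right by width
    coords.update(assign_x(node.left, x))                # stacked above: same x
    return coords

# The root sits at the bottom-left corner of the floorplan.
root = Node("A", 4, 2)
root.right = Node("B", 3, 3)
root.left = Node("C", 2, 1)
print(assign_x(root))  # {'A': 0, 'B': 4, 'C': 0}
```

The simulated-annealing moves in the HSA then amount to tree edits (swapping, moving or rotating nodes), each of which maps back to a new placement.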
The evolutionary technique used in our GMLA is based on Classifier Systems, a machine-learning technique based on genetic algorithms (GAs) that is capable of learning syntactically simple rules. The adopted classifier-system scheme is shown in Figure 2(a) and works as follows. A message received from the environment can activate one or more classifiers. As classifiers are selected, they execute their rules. Afterwards, the selected classifiers are rewarded based on their performance. The GA maintains a population of classifiers for the optimization problem at hand. In this case there are individuals (classifiers) represented by their genotypes, which are usually a set of bits or characters (in our case the genotype is the Markov model). This population is evolved by the GA after a predetermined number of consults. At each generation, a new set of artificial creatures (classifiers) is generated, whose answers are based on fragments of the best-adapted previous individuals.
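The activate-act-reward cycle described above can be sketched in a few lines. This is a generic toy classifier system, not the GMLA itself: the rule encoding, selection by strength and the reinforcement update are illustrative assumptions:

```python
# Toy sketch of the classifier-system cycle: an environment message
# activates matching classifiers, one acts, and its strength is updated
# by a reward. All rules, strengths and rewards are illustrative.

class Classifier:
    def __init__(self, condition, action, strength=1.0):
        self.condition = condition  # '#' is a wildcard matching any bit
        self.action = action
        self.strength = strength

    def matches(self, message):
        return all(c in ('#', m) for c, m in zip(self.condition, message))

def step(classifiers, message, reward_fn, lr=0.2):
    """One consult: activate matching classifiers, pick the strongest,
    perform its action, and reinforce it with the received reward."""
    active = [c for c in classifiers if c.matches(message)]
    if not active:
        return None
    chosen = max(active, key=lambda c: c.strength)
    reward = reward_fn(chosen.action)
    chosen.strength += lr * (reward - chosen.strength)  # reinforcement update
    return chosen.action

rules = [Classifier("1#0", "A"), Classifier("##0", "B", strength=2.0)]
action = step(rules, "110", lambda a: 1.0 if a == "A" else 0.0)
print(action)  # 'B' wins on strength, then is penalized by the zero reward
```

In a full system, the GA would periodically replace weak classifiers with recombined fragments of the strongest ones, which is the evolution step the paragraph refers to.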
In order to tackle such issues, most research has focused on finding other methods, the so-called unconventional optimization models. These are procedural models that push the process model into the background, the central role in modeling being given to the algorithmic procedure that adjusts the system. Such procedural models have been a very active research subject over the last decade for the modeling and control of optimization processes. This kind of approach was imposed by the major difficulty of rigorously characterizing, mathematically, the behavior of large and complex technological processes, of which MOFJSSP is an example; in such cases, no single approach can easily cover all the possible states of the system. Such models include evolutionary algorithms (genetic algorithms in particular), agent-based models (negotiation-based techniques, Ant Colony Optimization, Particle Swarm Optimization and the Wasp Behavior Model), neural networks, expert systems, knowledge-based systems and fuzzy techniques.
Adhinarayanan et al. presented an efficient and reliable particle swarm optimization (PSO) algorithm for solving economic dispatch (ED) problems with smooth cost functions as well as cubic fuel-cost functions. Practical ED problems have non-smooth cost functions with equality and inequality constraints, which makes finding the global optimum difficult with purely mathematical approaches. For such cases, the PSO was applied to the ED problem with the real power of each generator in the system as the state variables. However, when the incremental cost of each unit is assumed to be equal, the complexity involved may be reduced by using the incremental cost as the state variable. The proposed PSO algorithm was tested on a 3-generator system with smooth cost functions and on 3-, 5- and 26-generator systems with cubic fuel-cost functions. The results were compared with a genetic algorithm (GA) and showed better quality and computational efficiency. Al-Rashidi et al. presented a PSO algorithm to solve the Economic-Emission Dispatch (EED) problem. This problem has received increasing attention due to the deregulation of the power industry and strict environmental regulations. It was formulated as a highly nonlinear constrained multi-objective optimization problem with conflicting objective functions. The PSO algorithm was used to solve the formulated problem on two standard test systems, namely the 30-bus and 14-bus systems. The results obtained show that the PSO algorithm outperformed most previously proposed algorithms used to solve the same EED problem, including an evolutionary algorithm, a stochastic search technique, linear programming, and an adaptive Hopfield neural network. PSO was able to find the Pareto-optimal solution set for the multi-objective problem.
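A minimal PSO on a toy economic-dispatch instance may make the setup concrete: generator outputs are the state variables and the power-balance equality constraint is handled with a penalty. The cost coefficients, limits, demand and PSO parameters below are invented for illustration and are not from the cited studies:

```python
# Hedged sketch of PSO applied to a small economic-dispatch problem.
# Coefficients, limits and PSO settings are illustrative assumptions.

import random

COST = [(0.008, 7.0, 200.0), (0.009, 6.3, 180.0), (0.007, 6.8, 140.0)]  # (a, b, c)
LIMITS = [(10, 85), (10, 80), (10, 70)]   # assumed (Pmin, Pmax) per unit
DEMAND = 150.0                            # assumed total load in MW

def fitness(p):
    """Quadratic fuel cost plus a penalty for violating power balance."""
    cost = sum(a * x * x + b * x + c for (a, b, c), x in zip(COST, p))
    return cost + 1e4 * abs(sum(p) - DEMAND)

def pso(iters=300, n=30, w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for lo, hi in LIMITS] for _ in range(n)]
    vel = [[0.0] * len(LIMITS) for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=fitness)
    for _ in range(iters):
        for i, p in enumerate(pos):
            for d, (lo, hi) in enumerate(LIMITS):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - p[d])
                             + c2 * rng.random() * (gbest[d] - p[d]))
                p[d] = min(max(p[d] + vel[i][d], lo), hi)  # clamp to unit limits
            if fitness(p) < fitness(pbest[i]):
                pbest[i] = p[:]
        gbest = min(pbest, key=fitness)
    return gbest

best = pso()
print(round(sum(best), 2))  # should land close to the 150 MW demand
```

The penalty weight dominates the fuel cost, so feasibility (power balance) is restored first and the cost is minimized within the feasible region.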
The simplest multi-objective method is to form a composite objective function as the weighted sum of the objectives, where the weight of an objective is proportional to the preference factor assigned to that particular objective. This method of scalarizing an objective vector into a single composite objective function converts the multi-objective optimization problem into a single-objective one. In an ideal multi-objective optimization procedure, multiple trade-off solutions are found, and higher-level information is used to choose one of them. It follows that single-objective optimization is a degenerate case of multi-objective optimization. Srinivas and Deb (1994) developed the Non-dominated Sorting Genetic Algorithm (NSGA), in which a ranking selection method emphasizes the current non-dominated solutions and a niching method maintains diversity in the population. Chitra and Subbaraj (2010) applied NSGA to the shortest-path routing problem and compared its validity with single-objective optimization. However, NSGA suffers from three weaknesses: computational complexity, a non-elitist approach, and the need to specify a sharing parameter.
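The weighted-sum scalarization is simple enough to state in one function; the objective values and weights below are illustrative:

```python
# Minimal sketch of weighted-sum scalarization: a vector of objectives is
# collapsed into one composite objective, turning the multi-objective
# problem into a single-objective one. Weights are assumed to encode the
# decision maker's preference factors; values are illustrative.

def weighted_sum(objectives, weights):
    """Composite objective: sum_i w_i * f_i(x)."""
    assert len(objectives) == len(weights)
    return sum(w * f for w, f in zip(weights, objectives))

# Two conflicting objectives evaluated at some candidate solution:
f = [3.0, 8.0]                            # e.g. cost and delay
print(round(weighted_sum(f, [0.7, 0.3]), 2))  # 4.5
```

The known limitation, which motivates Pareto-based methods such as NSGA, is that a single weight vector returns a single trade-off point, and no weight choice can reach non-convex parts of the Pareto front.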
The first use of GAs and multi-agent systems (MAS) for query optimization in distributed DBMSs was proposed in Ghaemi, Fard, Tabatabaee, and Sadeghizadeh (2008). They define the following agents: a Query Distributor Agent (QDA) to divide the query into sub-queries, Local Optimizer Agents (LOAs) that apply a local genetic algorithm, and a Global Optimizer Agent (GOA) responsible for finding the best join order between the sites. In a comparison with a dynamic programming method, the authors verified that their approach takes less time to process the queries. An extension of Ghaemi et al. (2008) focused on building an adaptive system is given by Zafarani, Derakhshi, Asil, and Asil (2010) and Feizi-Derakhshi, Asil, and Asil (2010). The results have shown a reduction of up to 29% in response time. It is worth noting the distinctions between the present work and the work of Ghaemi et al. (2008), Zafarani et al. (2010) and Feizi-Derakhshi et al. (2010). First, their methods are supposed to run in a distributed environment; secondly, they run outside of the DBMS; and lastly, the agents have a limited and different way of interacting. Basically, one agent (QDA) breaks the query into pieces and distributes part of the analysis of the relations of some data source to a registered agent (LOA), which will execute a standalone version of a genetic algorithm to find a possible order to join the associated relations. Finally, the last agent (GOA) will try to minimize the network traffic, by executing the partial plan defined by
Due to the increasing interest in augmenting fuzzy systems with learning and adaptation capabilities, there are works that join them with other soft computing techniques. In (Cordón et al., 2004), a brief introduction to models and applications of genetic fuzzy systems is presented, together with some of the key references on the topic. In (Coello, 2002), we can find a survey of state-of-the-art theoretical and numerical techniques used in genetic algorithms to deal with constraint-handling optimization problems. Genetic algorithms are also used to solve multi-objective optimization problems, and a hybrid approach combining fuzzy logic and a genetic algorithm is proposed in (Sakawa, 2002).
The demand for different levels of Quality of Service (QoS) in IP networks is growing, mainly to serve multimedia applications. However, not only do quality indicators have conflicting features, but the problem of determining routes subject to more than two QoS constraints is NP-complete (Nondeterministic Polynomial Time Complete). This work proposes an algorithm to optimize multiple Quality of Service indices of Multi Protocol Label Switching (MPLS) IP networks. The approach aims at minimizing the network cost and the number of simultaneous request rejections, as well as performing load balancing among routes. The proposed algorithm, the Variable Neighborhood Multiobjective Genetic Algorithm (VN-MGA), is a Genetic Algorithm based on the Elitist Non-Dominated Sorting Genetic Algorithm (NSGA-II), with the particular feature that different parts of a solution are encoded differently, at Level 1 and Level 2. In order to improve results, both representations are needed. At Level 1, the first part of the solution is encoded by taking as decision variables the arcs that form the routes to be followed by each request (whilst the second part of the solution is kept constant), whereas at Level 2, the second part of the solution is encoded by taking the sequence of requests as decision variables, while the first part is kept constant. The Pareto fronts obtained by VN-MGA dominate the fronts obtained by fixed-neighborhood encoding schemes. Besides the potential benefits of applying the proposed approach to packet routing optimization in MPLS networks, this work raises the theoretical issue of the systematic application of variable encodings, which allow variable neighborhood searches, as operators inside general evolutionary computation algorithms.
Fig. 6 presents scatter graphs showing the relationship between connectivity, locality and synonymity of the NNg(14, 11) family, which corresponds to the family of representations with the highest value of k that was completely enumerated. Although the scatter graphs only show the properties for the NNg(14, 11) family, the results are similar for the remaining NNg( , k) families. These scatter graphs show that there are representations with low locality that reach high connectivity values. The chart at the top right of Fig. 6 contradicts the idea presented in  that synonymously redundant representations do not allow the connectivity between the phenotypes to increase in comparison with the corresponding non-redundant representations. The circle in this chart shows that there is a set of synonymously neutral representations that have high connectivity values. The NNg( , k) family has a variety of representations with different properties, which provides an important basis for the study of the relationships between them and will allow studying to what extent these properties can affect the performance of an evolutionary algorithm.
Maenhout and Vanhoucke propose a heuristic optimization procedure for the integrated personnel shift and task re-scheduling problem (IPSTrSP). This problem can be divided into two phases. In the first phase, the scheduler has to compose an integrated schedule, making many deterministic assumptions about, for example, the workers' availability, the number of tasks, or even the timing of the tasks. These assumptions may not entirely represent the real context due to existing operational variability. For each worker, a line-of-work is composed, to which a particular task is assigned. It is important to keep in mind that a worker works in shifts; a task is assigned to a worker, and different shifts are assigned to that task. The goal is to determine the minimum personnel cost. In the second phase, more accurate information becomes available, and some estimated values may turn out to be wrong. Therefore, the original personnel roster needs to be adapted, and a re-scheduling problem has to be solved. The goal of re-scheduling is to rebuild the schedule while minimizing the number of deviations from the original schedule and re-establishing the feasibility of the personnel roster.
We propose an optimization approach for the feature selection problem that considers a "chaotic" version of the antlion optimizer, a nature-inspired algorithm that mimics the hunting mechanism of antlions in nature. The balance between exploration of the search space and exploitation of the best solutions is a challenge in multi-objective optimization. The exploration/exploitation rate is controlled by the parameter I, which limits the random-walk range of the ants/prey. This variable is increased iteratively in a quasi-linear manner to decrease the exploration rate as the optimization progresses. This quasi-linear schedule of I may lead to premature convergence in some cases and to trapping in local minima in others. The chaotic system proposed here attempts to improve the tradeoff between exploration and exploitation. The methodology is evaluated using different chaotic maps on a number of feature selection datasets. To ensure generality, we used ten biological datasets, but we also used other types of data from various sources. The results are compared with the particle swarm optimizer and with genetic algorithm variants for feature selection using a set of quality metrics.
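The contrast between a quasi-linear schedule for I and a chaotic modulation can be sketched with a logistic map. This does not reproduce the exact update rule of the proposed method; the schedule shape, the map and all constants are assumptions for illustration:

```python
# Illustrative sketch: a quasi-linear schedule for the ratio I versus a
# logistic-map (chaotic) modulation of the same schedule. Constants and
# the schedule shape are assumptions, not the source method's exact rule.

def linear_I(t, t_max, w=2.0):
    """Baseline: I grows quasi-linearly, shrinking the random-walk range
    monotonically as the optimization progresses."""
    return 1.0 + 10 ** w * (t / t_max)

def chaotic_I(t, t_max, x0=0.7, w=2.0):
    """Chaotic variant: a logistic map perturbs the schedule so that the
    exploration rate does not decay monotonically."""
    x = x0
    for _ in range(t):           # iterate the logistic map t times
        x = 4.0 * x * (1.0 - x)  # chaotic in [0, 1] for r = 4
    return (1.0 + 10 ** w * (t / t_max)) * x

for t in (0, 50, 100):
    print(round(linear_I(t, 100), 2), round(chaotic_I(t, 100), 2))
```

The chaotic factor stays in [0, 1] but visits that interval irregularly, which is the mechanism such variants use to escape the premature convergence of a monotone schedule.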
In this work I proposed the incorporation of preferences into an EA in order to better help the DM in problems with many objectives. In this way, the main challenges faced by EAs can be addressed more efficiently: there is no need to adopt huge population sizes, because only a small portion of the front is required, so the computational cost is reduced, and the evaluation of the alternatives to choose the most preferred one becomes easier. Furthermore, the preferences provide a way of comparing even non-dominated solutions, restoring the selective pressure that Pareto-based algorithms lack with many objectives. Also, in order to simplify the process of defining the size of the region of interest (ROI), I proposed a new self-adaptive method, named MyIdea, whose ROI size depends only on the choice of the reference point; it extends the method proposed by Wierzbicki in section 184.108.40.206 with a more natural way of thinking.
(DNA, RNA and proteins), the assembly of fragments, the recognition of genes, the determination of protein structures and the unification of the representation of gene and gene product attributes across species. Due to the great amount of information and its complexity, tools based on conventional computation have shown themselves to be limited in tackling complex biological problems. One explanation for this difficulty is the inefficiency of conventional tools in working with large volumes of data, as well as problems in finding the optimal solution (or a good approximate one). Along those lines, computational intelligence (CI) techniques, such as genetic algorithms, are increasingly being used to solve problems in Molecular Biology. The growth in the use of CI techniques is due to specific characteristics such as their ability to learn automatically from data while producing relevant results. Based on the biological inspiration for new computational methods, several computer scientists independently studied evolutionary systems with the idea that evolution could be used as an optimization tool for engineering problems. Many authors propose methods inspired by biology: Rechenberg introduced "evolution strategies", while Fogel, Owens, and Walsh developed "evolutionary programming", among others. To evolutionary-computation researchers, the mechanisms of evolution seem well suited to some of the most pressing computational problems in many fields, since many computational problems require searching through a huge number of possible solutions.
tion increasing gene transfer in mice models. (a–d) Amyloid fibrils, formed from a self-assembling 12-amino-acid peptide named enhancing factor C (EF-C), are transduction enhancers. (a) Inoculating HeLa cells (blue) with viral particles carrying the MLV glycoprotein labelled with yellow fluorescent protein (MLV-YFP, green), in the absence (top row of the panel) or presence of rhodamine-labelled fibrils (Rho-EF-C, red), shows that the fibrils, which are not cytotoxic, enhance viral replication. (b) Quantitative comparison of the fibril effect on the increase of the rate of fusion of virions with cells. (c) EF-C fibrils increase lentiviral transduction of 293T cells independently of the viral glycoprotein. (d) EF-C fibril-mediated lentiviral gene transfer is effective with several different cell types, namely human glioblastoma (U87MG), endocrine pancreatic tumour (BON), myeloid KG-1, peripheral blood mononuclear (PBL) and haematopoietic (CD34) stem cells. Importantly, EF-C fibrils can be immobilized (e–g) and allow efficient gene transfer into mice (h–j). (e) Z-stack images of immobilized Rho-EF-C fibrils only (top left), MLV-YFP only (bottom left) and immobilized Rho-EF-C fibrils exposed to virus (right; the upper image shows both channels merged, and the lower images show them separately). (f) EF-C fibril coating facilitates lentiviral infection to a degree similar to an established standard method based on (g) RetroNectin (RN); (h) combined, the two have an additive effect. (i) EF-C fibril coating also prompts lentiviral gene transfer to mouse cells. (j) Treating bone marrow cells with the EF-C amyloid fibrils and transducing them with a GFP-labelled lentiviral vector before transplanting the cells into recipient mice results in a high success rate, with a number of GFP-positive cells in peripheral blood, demonstrating that EF-C amyloid fibrils facilitate retroviral gene transfer (adapted with permission).
The evolutionarily optimized network for faster consensus consists of a ring and many tree structures connected to various nodes on the ring. Although Ramanujan networks are known to have fast consensus characteristics, the optimized network is neither a Ramanujan graph nor a random regular network in which all nodes have the same degree. Most of the nodes have the same degree, but some nodes have more links and others fewer links than the majority. For slow consensus, a core-with-line network was obtained, in which one end of a string of nodes is a dense clique with many nodes. Finally, two clique graphs were connected by a single link and the consensus property of the resulting network was observed. The consensus dynamics of the above networks show that consensus is formed first among agents with denser links, and then a global consensus is formed. This demonstrates that both local and global topological properties contribute to the overall behavior of the network dynamics.
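The "denser parts agree first" behaviour can be seen in a standard averaging-consensus simulation, where each agent repeatedly moves its value toward its neighbours' values. The graph (a dense triangle attached to a pendant node) and the step size are illustrative assumptions, not the optimized topologies of the study:

```python
# Small sketch of averaging-consensus dynamics on an illustrative graph:
# a dense triangle {0, 1, 2} attached by a single link to node 3.
# Graph and step size are assumptions, not the study's networks.

def consensus_step(values, adj, eps=0.25):
    """One step of x_i <- x_i + eps * sum_j (x_j - x_i) over neighbours j."""
    return [x + eps * sum(values[j] - x for j in adj[i])
            for i, x in enumerate(values)]

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
x = [1.0, 0.0, 0.5, 4.0]
for _ in range(100):
    x = consensus_step(x, adj)

# The dynamics preserve the average, so global consensus lands on the
# mean of the initial values, (1.0 + 0.0 + 0.5 + 4.0) / 4 = 1.375.
assert all(abs(v - 1.375) < 1e-6 for v in x)
```

With a small enough step size this is the linear system x' = (I - eps*L)x, L being the graph Laplacian; the triangle's values merge within a few steps while the pendant node catches up more slowly, mirroring the local-then-global consensus described above.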
Abstract—Structural optimization tools have found widespread application in engineering design and development activities because of the increasing demand for lightweight and rigid products. In this milieu, researchers have exploited the potential of nonparametric structural optimization tools such as topology optimization by coupling and integrating them with other compatible software tools. The first part of this paper deals with the mathematical and physical fundamentals of the topology optimization problem formulation. The second part describes an application of topology optimization using an integrated design approach. The objective is to find the optimal design proposal for a thin-walled aerospace component for which a predecessor design exists. ANSYS is used to develop the FE model of the initial design space of the component. TOSCA is used for topology optimization, data reduction and smoothing of the optimization results into CAD-compatible output. The optimized model is then imported into CATIA to incorporate the refinements necessary for manufacturing, machining and geometric restrictions. To validate the new design proposal, the optimized model is imported back into ANSYS and analyzed again under the stipulated loads and boundary conditions. Finally, a comparison is made between the predecessor design and the new optimized design proposal. The comparison shows that the new design proposal, assisted by optimization and simulation, is more reliable and lighter, with enhanced structural performance.
Abstract—This paper presents an algorithm for the reduction of a higher-order linear interval system into a stable lower-order linear interval system by means of a genetic algorithm (GA). In this algorithm, the numerator and denominator polynomials are determined by minimizing the integral square error (ISE) using the GA. The algorithm is simple, rugged and computer-oriented. It is shown that the algorithm has several advantages; for example, the reduced-order models retain the steady-state value and stability of the original system. A numerical example illustrates the proposed algorithm.
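The GA-plus-ISE idea can be sketched on a toy (non-interval) instance: a tiny GA searches for a first-order model R(s) = p/(s + p) that minimizes the ISE between its step response and that of a second-order plant, with the steady-state value retained by fixing the DC gain at the plant's. The plant, the restriction to one free parameter and all GA settings are illustrative assumptions, not the paper's algorithm:

```python
# Hedged sketch: GA minimizing the ISE between the step responses of a
# second-order plant and a first-order reduced model with matched DC gain.
# Plant, parameterization and GA settings are illustrative assumptions.

import math
import random

def step_second_order(t, wn=2.0, zeta=0.5):
    """Step response of G(s) = wn^2 / (s^2 + 2*zeta*wn*s + wn^2)."""
    wd = wn * math.sqrt(1 - zeta ** 2)
    phi = math.acos(zeta)
    return 1 - math.exp(-zeta * wn * t) * math.sin(wd * t + phi) / math.sqrt(1 - zeta ** 2)

def ise(p, T=10.0, dt=0.01):
    """Discretized ISE between the plant and R(s) = p/(s+p) (DC gain 1)."""
    err = 0.0
    for i in range(int(T / dt)):
        t = i * dt
        y_red = 1 - math.exp(-p * t)          # step response of p/(s+p)
        err += (step_second_order(t) - y_red) ** 2 * dt
    return err

def ga(pop_size=20, gens=40, seed=3):
    """Tiny real-coded GA over the single pole p in [0.1, 10]."""
    rng = random.Random(seed)
    pop = [rng.uniform(0.1, 10.0) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=ise)
        parents = pop[:pop_size // 2]          # truncation selection
        children = [min(max((rng.choice(parents) + rng.choice(parents)) / 2
                            + rng.gauss(0, 0.3), 0.1), 10.0)  # blend + mutation
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return min(pop, key=ise)

p = ga()
print(round(p, 2), round(ise(p), 4))
```

Because the reduced model's DC gain is pinned to the plant's, every candidate automatically retains the steady-state value, which is the property the abstract highlights; the GA only trades off transient accuracy via the ISE.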