The New York City Tunnel WDN is taken up as case study II for testing the performance of the CE method. The layout of the WDN is shown in Figure 2. The network consists of 20 nodes, 21 pipes, and 1 loop, and is fed by gravity from a reservoir at a fixed head of 300 ft (91.44 m). The ground elevation of all nodes is 0. This system is in place and requires expansion. The pipe lengths, existing pipe diameters, and nodal demands are given in Table 6, and a Hazen-Williams constant of 100 is assumed for both the old tunnels and the new pipes. The system constraint is the minimum pressure head requirement for all nodes, which is also given in Table 6. Fifteen commercially available pipe diameters and their unit costs are listed in Table 7. No velocity constraint is considered for this network. The objective is to determine whether a new pipe should be laid parallel to an existing pipe and, if so, what the diameter of the parallel pipe should be, while the system provides the minimum hydraulic gradients. This network was first studied in  and thereafter by a number of other researchers (; ; ). Due to pipe aging, the existing gravity flow tunnels are inadequate to meet the pressure requirements at nodes 16, 17, 18, 19, and 20 under the projected demands. Therefore, new pipes can be added in parallel to the existing pipes to meet the minimum pressure head requirements. For this problem, 16 candidate diameters are available: the 15 commercially available diameters and the 'zero diameter-zero unit cost' option. Considering all 21 pipes for possible duplication results in 16^21 possible designs.
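The size of the quoted design space follows directly from the 16 options per pipe; a quick check:

```python
# Size of the search space for the New York City Tunnel problem:
# 16 candidate diameters per pipe (15 commercial sizes plus the
# "zero diameter-zero unit cost" option) over 21 duplicable pipes.
n_options = 16
n_pipes = 21

n_designs = n_options ** n_pipes
print(n_designs)   # 19342813113834066795298816, i.e. about 1.9e25 designs
```

A space of this size rules out exhaustive enumeration, which is what motivates stochastic optimizers such as the CE method here.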
Quality of Service in MANETs implies guaranteed delivery of packets belonging to specific flows at higher priority, so as to satisfy loss and delay performance requirements. In MANETs, nodes operate on remaining battery power, the availability of which can vary widely across nodes. The nodes may be mobile, so links in the optimal path from source to destination may break due either to mobility or to low battery power. Providing QoS guarantees over such highly unreliable links therefore requires fast or even proactive routing recovery, along with transport- and application-layer optimization, which may start even before the link failure actually occurs. Measurements at the data link and MAC layers thus need to be used at the network, transport, and application layers to avoid wasting transmit power on data frames rendered useless by link failure.
lected at multiple sensors without mutual communication between these sensors, and sets the multiple coding rates at these different sensor nodes. DSC works in a unique way, profoundly different from the typical data aggregation methods [9,10,11,12,13,14]. A data aggregation method combines data flows and attempts to remove the redundant data among them by collecting information from multiple correlated sensor nodes. In order to prevent redundant data from flowing into the whole network, data aggregation usually has to be performed early, in the local area. This results in a bottleneck problem at the aggregation points and a lack of load balancing, which deteriorates the network lifetime. In DSC design, because the inherent redundancy among the information bits collected from the sensor nodes is already removed, there is no need for data aggregation. DSC information bits are collected independently, separately, and at different data rates, and reach the sink node for source decoding. Therefore, it is desirable for efficient WSN designs to support such multi-rate data flow in the network. Due to the nature of distributed and collaborative signal processing in WSNs, these multi-rate data transmission requirements of DSC are not limited to a small category of applications, but are rather general in many signal-processing-related WSN applications. To date there is a lack of multi-rate WSN designs, while on the other hand the complexities of both sensor nodes and networks keep growing, so there is a strong need for such efforts.
Vladisavljević and Williams, 2005) and ultrasound cavitation (Sivakumar et al., 2014, Tang et al., 2013, Tang et al., 2012). For the production of nanoemulsions, intense shear should be applied in order to overcome the Laplace pressure and break up droplets into smaller (nanometre-scale) dimensions (Sivakumar et al., 2014). The developed high-energy input techniques adequate for the production of nanoemulsions include the use of high-pressure homogenizers, ultrasonicators and microfluidizers (Sivakumar et al., 2014, Landfester 2006). Low-energy input techniques adequate for the production of nanoemulsions have also been developed, such as phase inversion temperature, solvent diffusion and spontaneous emulsification (Sivakumar et al., 2014). However, low-energy input techniques have their own limitations (Sivakumar et al., 2014), including the use of a large quantity of surfactant, usually not of the food-grade type, and instability after long-term storage, which can be improved when the droplet disruption is provided predominantly by high-energy input techniques (Santana et al., 2013, Sivakumar et al., 2014, Tang et al., 2013).
cause additional losses that comprise both friction and turbulence components. The generic single (local) head loss can be calculated using equation 3.12, where ζ is the coefficient of singular loss, which depends on the geometry of the singularity and on the Reynolds number. Each element of the network has a specific value of singular loss (Table 3.3 and Appendix I). Usually the values, especially for valves, are determined experimentally, and the data must be provided by the manufacturers. It is important to note that the friction loss of these elements is not included in the local resistance factor; instead, it is accounted for as part of the main friction loss by including their length and diameter when calculating the pipeline length.
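Equation 3.12 is not reproduced here, but assuming the standard form of the singular-loss relation, h = ζ v²/(2g), a minimal sketch is:

```python
G = 9.81  # gravitational acceleration, m/s^2

def singular_head_loss(zeta: float, velocity: float) -> float:
    """Head loss (m) across a single fitting, assuming h = zeta * v^2 / (2g).

    zeta     : dimensionless singular-loss coefficient for the element
    velocity : mean flow velocity through the element, m/s
    """
    return zeta * velocity ** 2 / (2 * G)

# illustrative values only: zeta = 0.2 for a fully open valve, water at 2 m/s
h = singular_head_loss(0.2, 2.0)   # ~0.041 m of head lost at this fitting
```

The total singular loss of a pipeline is then the sum of this term over all fittings, each with its own tabulated ζ.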
fluoride contents different from those listed on the labels of all the mineral waters evaluated. Additionally, some of the brands did not list the amount of fluoride on the label, and four of them had fluoride levels above that specified and thus could potentially cause dental fluorosis if consumed regularly by children while the teeth are being formed. The results of the present study corroborate those found by Grec et al. 15 in a study conducted in São Paulo (SP), in
This article presents a harmony search optimization-based method for the optimal placement of UPFCs with a view to reducing network loss. Harmony search optimization is inspired by the musical process of searching for a perfect state of harmony. The harmony in music is analogous to the solution vector in optimization, and the musicians' improvisations are analogous to local and global search schemes in optimization techniques. The method determines the optimal locations and parameters of the UPFCs to be placed. Test results on the IEEE 30-bus system exhibit the superiority of the developed algorithm.
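A minimal sketch of the harmony search loop, for a generic minimization problem rather than the specific UPFC placement formulation; the memory size, HMCR, PAR, and bandwidth values are chosen for illustration only:

```python
import random

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.05,
                   iters=2000, seed=0):
    """Minimal harmony search sketch for minimising f over box bounds.

    hms  = harmony memory size
    hmcr = harmony memory considering rate
    par  = pitch adjusting rate
    bw   = pitch-adjustment bandwidth, as a fraction of each variable's range
    """
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    costs = [f(x) for x in memory]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:              # take a value from memory
                val = rng.choice(memory)[d]
                if rng.random() < par:           # ...and adjust its "pitch"
                    val += rng.uniform(-bw, bw) * (hi - lo)
            else:                                # improvise a random value
                val = rng.uniform(lo, hi)
            new.append(min(max(val, lo), hi))
        cost = f(new)
        worst = max(range(hms), key=lambda i: costs[i])
        if cost < costs[worst]:                  # replace the worst harmony
            memory[worst], costs[worst] = new, cost
    best = min(range(hms), key=lambda i: costs[i])
    return memory[best], costs[best]

# toy usage: a sphere function stands in for the network-loss objective
sol, loss = harmony_search(lambda x: sum(v * v for v in x), [(-5, 5)] * 3)
```

In the UPFC application, the decision vector would encode candidate locations and device parameters, and f would be the network loss returned by a power-flow evaluation.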
In Step 2, all the optimal designs (the machines on the IPF front) are obtained in the continuous space. We then need to design at least one machine by discretizing the parameters that must be discrete (typically, for our PMG, the number of conductors in a slot must be an integer, and the wire diameter must be chosen from a table of commercial round wires). For this, we use the OM with a simple procedure: successive rounding of the continuous parameters to the nearest discrete values, restarting the optimization after each discretization. Of course, a more sophisticated procedure could be imagined, but this one is sufficient for our PMG to obtain a machine very close to the one obtained in Step 2, as can be seen in Fig. 8, where a red cross marks this solution.
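The rounding step of this procedure can be sketched as follows; the wire-diameter table here is illustrative, not the actual commercial table used in the design:

```python
def snap_to_table(value: float, table: list[float]) -> float:
    """Round a continuous parameter to the nearest allowed discrete value."""
    return min(table, key=lambda d: abs(d - value))

# hypothetical table of commercial round-wire diameters (mm)
wire_table = [0.5, 0.63, 0.8, 1.0, 1.25, 1.6, 2.0]

d = snap_to_table(0.91, wire_table)   # nearest table entry is 1.0
n = round(13.4)                       # conductors per slot must be an integer
```

In the successive-rounding procedure, one parameter is snapped at a time, fixed at its discrete value, and the continuous optimization is restarted over the remaining free parameters.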
Abstract: An economic production quantity (EPQ) model is analyzed for trended demand and for units subject to a constant rate of deterioration. The system allows rework of imperfect units, and the preventive maintenance time is random. The proposed methodology, a search method used to study the model, is validated with a numerical example. A sensitivity analysis is carried out to determine the critical model parameters. It is observed that the rate of change of demand and the deterioration rate have a significant impact on the decision variables and on the total cost of the inventory system. The model is highly sensitive to the production and demand rates.
The EMC of electronic systems is often achieved by filter networks. The components used take an increasing share of the total system weight and volume. Therefore, the optimal filter for different requirements is determined with prior simulation estimates. The filters are characterized in terms of the number and type of filter stages, component values, or attenuation. For this purpose, passive, active, and hybrid filter structures are used. The active filters are investigated using a feedforward structure (Fig. 1). Furthermore, the frequency characteristic of the filter is described by the use of A
In this contribution, optimization procedures based on particle swarm intelligence have been investigated in detail, aiming to efficiently solve the optimal resource allocation for signal-to-noise-plus-interference ratio (SNIR) optimization of optical code paths (OCPs) in WDM/OCDM networks, considering physical-layer imperfections. The SNIR model accounts for multiple access interference (MAI) between OCPs based on 2-D (time/wavelength) codes, amplified spontaneous emission (ASE) at cascaded amplified spans, group velocity dispersion (GVD), and polarization mode dispersion (PMD) effects. The features of the particle swarm optimization (PSO) algorithm are attractive due to its performance-complexity tradeoff and its fairness relative to other optimization methods that use numerical methods, matrix inversion, or other heuristic approaches. The PSO-based resource allocation dynamically regulates the transmitted power and the number of active OCPs in order to maximize the aggregate throughput of the WDM/OCDM network. The numerical results show a penalty when the ASE, GVD, and PMD effects are considered. This penalty represents the received power reduction due to temporal spreading. Indeed, when the ASE, GVD, and PMD effects are included in the optical network model, the power penalty increases substantially as the number of OCPs and/or the bit rate grows.
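As a sketch of the underlying optimizer, not the authors' full SNIR-constrained formulation, a minimal PSO for a generic minimization problem looks like this; all parameter values are illustrative:

```python
import random

def pso(f, bounds, n_particles=20, iters=300, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization sketch: minimise f over box bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal bests
    pcost = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]        # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d, (lo, hi) in enumerate(bounds):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive pull + social pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            cost = f(pos[i])
            if cost < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], cost
                if cost < gcost:
                    gbest, gcost = pos[i][:], cost
    return gbest, gcost

# toy stand-in: a sphere function in place of the negated throughput objective
best, cost = pso(lambda x: sum(v * v for v in x), [(-1, 1)] * 4)
```

In the resource-allocation setting, each particle position would encode the per-OCP transmitted powers (and active-OCP selection), and f would evaluate the resulting aggregate throughput under the SNIR model.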
important factor which influences the grain size and the quality of the casting is the amount of modifier introduced into the mould. The amount of cobalt aluminate in the primary slurry is highly variable (ranging from 1 to 10% or higher) and depends on the specification requirements, the alloy being cast, the section thickness, and other factors [9-11]. On the grounds of the results obtained, it was found that the optimal concentration of cobalt aluminate powder in the ceramic mould to produce casting elements made from the Inconel 713C superalloy is about 5-6 %mass; in the case of S6K, however, it is 2 %mass. A higher concentration of modifier does not change the grain size significantly and does not improve the mechanical properties of the castings. The next step in the study is therefore to define the chemical and physical properties that cobalt aluminate should have in order to achieve the best nucleating effect.
Computational Fluid Dynamics allows many calculations to be performed both in steady conditions (performance curves of the machine) and in transient flow regimes. Such calculations can be very complex, and the computational cost increases when dynamic simulations are performed. Thus, a first study was carried out to find the minimum number of elements in the 3D fluid-dynamic mesh necessary to obtain reliable results for the flow inside the PAT runner. Then the performance curves of a specific PAT model were computed, for both pump and turbine mode. A preliminary transient calculation was carried out in order to obtain a first result on the inertial response of the machine to a sudden finite variation of discharge.
The optimal results are robust against fatigue. However, from a practical viewpoint, that is, if one must design a real structure, even the robust optimal design would hardly be manufacturable. To obtain a viable optimal design that can be used in practice, geometric constraints could be included in the volume minimization. The simplest possibility is to consider that the plate is composed of a base plate of uniform, fixed thickness plus external layers of variable thickness. A more elaborate proposal is to add constraints on the maximum thickness gradient, such that the maximum difference in thickness between adjacent elements must not exceed a certain amount.
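The adjacent-element constraint can be illustrated with a minimal 1-D sketch; the function names, thickness values, and limit are hypothetical, for illustration only:

```python
def max_adjacent_step(thickness: list[float]) -> float:
    """Largest thickness difference between adjacent elements (1-D layout)."""
    return max(abs(a - b) for a, b in zip(thickness, thickness[1:]))

def satisfies_gradient_constraint(thickness: list[float], limit: float) -> bool:
    """True if no adjacent pair of elements differs by more than `limit`."""
    return max_adjacent_step(thickness) <= limit

t = [4.0, 4.5, 5.5, 5.0]                           # element thicknesses, mm
ok = satisfies_gradient_constraint(t, limit=1.0)   # largest step is 1.0 mm
```

For a 2-D plate mesh, the same check would run over every pair of elements sharing an edge, and the limit would enter the optimization as one inequality constraint per pair.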
the fill rate, and the maximization of demand fulfilled within a coverage distance. The lead time was implied in the cost of the safety stock, but it was not related to transportation decisions. The method proposed was a hybrid of NSGA-II and an assignment heuristic. Pinto-Varela et al. (2011) presented a bi-objective optimization model for the design of supply chains considering economic and environmental criteria. In their model, time was considered from the point of view of a multi-period approach. Different transportation modes may exist, but they are not associated with time. They solved three small examples with commercial mathematical programming software. The review by Mansouri et al. (2012) emphasized the importance of multiobjective optimization techniques as a decision support tool in supply chain management. Although order promising decisions and network design decisions were identified as important criteria, none of the works reviewed integrated them in a multiobjective approach. Chaabane et al. (2012) presented a multi-period multiobjective optimization problem where cost and environmental objectives were optimized. In their mixed-integer programming model, the selection of transportation modes was considered as a decision variable, but it was not connected with time. They used commercial mathematical programming software to solve small instances of the problem. Sadjady and Davoudpour (2012) studied a supply chain design problem where cost and time were tied to transportation alternatives. The approach, however, was to optimize a single objective function where the lead time from the transportation alternative was transformed into a cost function. The cost objective function was optimized using a Lagrangian relaxation method. As proposed by Olivares-Benitez et al. (2012), the cost and time criteria may not be comparable and should be treated as separate objectives (Sohrabi et al., 2015).
Tonnon et al.  used an interactive procedure to solve multiple-objective optimization problems. A fuzzy set was used to model the engineer's judgment on each objective function. The properties of the obtained compromise solution were investigated, along with the links between the present method and those based on fuzzy logic. The uncertainty affecting the parameters is modelled by means of fuzzy relations or fuzzy numbers, whose probabilistic meaning is clarified by random set and possibility theory. Bounds on the probability of satisfying a constraint can be calculated, and procedures that consider the lower bound as a constraint or as an objective criterion are presented. Some theorems keep the computational effort particularly limited for a vast class of practical problems. The relations with a recent formulation in the context of convex modelling are also presented. In the paper of Wang et al. , a fuzzy-decision-making procedure is applied to find the optimal feed policy of a fed-batch fermentation process for fuel ethanol production using a genetically engineered Saccharomyces yeast 1400 (pLNH33). The policy considers control variables such as feed flow rate, feed concentration, and fermentation time. By assigning a membership function to each of the objectives, the general multiple-objective optimization problem can be converted into a maximizing decision problem. In order to obtain a global solution, a hybrid search method based on differential evolution is introduced.
Figure 15 shows a WMSN application example where a fixed network infrastructure is available. In such a case, some nodes are able to continuously monitor physical scalar sensor data to predict an event occurrence, by means of existing models or methods defined by an expert . As soon as an event occurs, another set of nodes must transmit multimedia data, e.g., audio and video streams, with QoE assurance to headquarters or IoT platforms, such as those provided by the semantic system , sensor4cities [12, 13], i-SCOPE , and other IoT platforms. In this context, multimedia content provides more precise information than simple scalar data, enabling specialists, mobile users, or computer vision software to visually verify the real impact of the event, avoid false-positive alarms, become aware of what is happening in the environment, plan actions, detect objects or intruders, and analyse scenes. Sensor4cities [12, 13] is an example of a tool that allows users to request scalar or multimedia data from the monitored area via webpages and social networks, e.g., Twitter or Facebook. After receiving scalar or real-time multimedia flows, sensor4cities can share them with a control center or mobile user for further analysis. However, in this chapter we focus only on how to disseminate video flows and physical scalar sensor data with QoS or QoE assurance, robustness, and reliability to feed smart cities or IoT platforms in static WMSN scenarios.
informational technologies and the capabilities of automated systems. For this purpose it is necessary to develop an automated system for steel design that allows several criteria of optimality and a wide range of restrictions on steel structural designs to be considered. This will make it possible to accelerate the design process, reduce the labor input of the designer, and essentially increase the quality of the design solutions for steel structures.
The Internet of Things (IoT) is becoming a reality, and new and advanced applications are expected to emerge. For applications with reliability requirements to work well in IoT environments, robust data transport is needed. Approaches like TCP are known to be inadequate in sensor network environments, while UDP has been included in the 6LoWPAN stack, allowing low-power, limited-processing devices to participate in the IoT. However, UDP provides no reliability. One way of providing reliability is to use link-layer acknowledgements, but this mechanism may lead to inefficient use of resources if applied unconditionally throughout the whole network. Another way is to request confirmation of sent messages at the application layer, but this is an end-to-end process that can only be applied to specific message-type transactions; if used for all data, it also causes long delays and inefficient use of resources. Here we address the design of a cross-layer reactive mechanism that improves the reliability of data delivery, in order to support applications that require some level of reliability when delivering data notifications. This mechanism introduces link-layer reliability at specific nodes, gradually and only when needed, and has no scaling problems. Results show that this mechanism can improve data delivery and the use of network resources.
Genetic algorithms, proposed in Holland (1975), are inspired by the evolution of species; in accordance with Darwin's theory, they perform an extensive search in order to find the strongest chromosome, the one best adapted to its environment. The best group of genes is selected from the others by crossover and mutation. Genetic algorithms are simple, robust, flexible, and able to find the global optimal solution. They are especially useful for problems in which other optimization techniques encounter difficulties (Goldberg, 1989). A basic genetic algorithm consists of the random creation of an initial population and a cycle of three stages, namely:
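The cycle referred to above, commonly selection, crossover, and mutation, can be sketched minimally; OneMax serves as a toy fitness, and all parameter values are illustrative:

```python
import random

def ga(fitness, n_bits=20, pop_size=30, gens=100, pc=0.8, pm=0.02, seed=0):
    """Minimal GA sketch: tournament selection, one-point crossover,
    bit-flip mutation. Maximises `fitness` over bit strings."""
    rng = random.Random(seed)
    # random creation of the initial population
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        new_pop = []
        while len(new_pop) < pop_size:
            # stage 1: selection (binary tournament)
            p1 = max(rng.sample(pop, 2), key=fitness)
            p2 = max(rng.sample(pop, 2), key=fitness)
            # stage 2: one-point crossover with probability pc
            if rng.random() < pc:
                cut = rng.randrange(1, n_bits)
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            # stage 3: bit-flip mutation with per-bit probability pm
            child = [b ^ 1 if rng.random() < pm else b for b in child]
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

# toy usage: OneMax, i.e. maximise the number of 1-bits in the chromosome
best = ga(sum)
```

Each pass through the loop applies the three stages to produce a new generation, which is the cycle a basic genetic algorithm repeats until a stopping criterion is met.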