The most widely applied methodologies are Robust Optimization and Genetic Algorithms, and the articles analyzed demonstrate their effectiveness. Another widely discussed methodology is Monte Carlo simulation, used to generate uncertainty scenarios. For future studies, there is a clear need for more complete analysis tools that consider distributed generation and the characteristics of the increasingly widespread wind and solar plants. It is also necessary to integrate distributed energy storage systems, such as electric vehicles, into the analysis, among other aspects.
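As a minimal illustration of the Monte Carlo approach mentioned above, the sketch below draws load and wind-output scenarios from assumed distributions; the Gaussian models and all parameter values are assumptions made for this sketch, not taken from the surveyed articles.

```python
import random

def generate_scenarios(n_scenarios, horizon_hours, seed=42):
    """Draw Monte Carlo uncertainty scenarios for hourly load and wind output.
    Gaussian load and truncated-Gaussian wind are modelling assumptions made
    for this sketch only."""
    rng = random.Random(seed)
    scenarios = []
    for _ in range(n_scenarios):
        load = [rng.gauss(100.0, 10.0) for _ in range(horizon_hours)]           # MW
        wind = [max(0.0, rng.gauss(30.0, 15.0)) for _ in range(horizon_hours)]  # MW, non-negative
        scenarios.append({"load": load, "wind": wind})
    return scenarios

scenarios = generate_scenarios(200, 24)
```

Each scenario could then be fed to a robust or stochastic planning model as one realization of the uncertain parameters.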
In order to be consumed, electricity must be generated, transformed and transported over long distances, which are especially long in cases such as Brazil's, where the generation centers lie very far from the consumption centers. Electricity is thus a form of energy highly dependent on infrastructure. The amount of power a transmission line can carry over many kilometers depends fundamentally on its physical parameters and on the properties of associated equipment such as transformers. Since distribution lines, even in urban environments, represent huge infrastructure investments, errors or improprieties in planning the expansion of such assets may cause economic losses for years. The need to keep available power always above instantaneous consumption justifies electric power microgeneration as an advantageous measure, due to its incremental contribution to the overall power balance, as discussed in Section 1.2 and demonstrated in Chapter 4. In many situations, microgenerators enable the load to assume a flatter behavior, i.e., with less variance in power demand, given that microgenerators can sustain part of the load during the daily peak, when the unit price of a kWh is highest.
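The variance-reduction claim can be checked with a toy calculation: subtracting an assumed microgeneration profile that covers part of the peak from a stylized daily load curve lowers the variance of the net demand. All numbers below are illustrative, not measured data.

```python
def variance(xs):
    """Population variance of a list of numbers."""
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / len(xs)

# Stylized hourly demand (MW) with midday and evening peaks (illustrative).
load = [50, 48, 47, 50, 60, 75, 85, 90, 80, 70, 95, 100, 90, 70, 60, 55]
# Assumed microgeneration output, concentrated in the peak hours.
microgen = [0, 0, 0, 0, 5, 10, 15, 15, 10, 5, 15, 20, 15, 5, 0, 0]
net_load = [l - g for l, g in zip(load, microgen)]

flatter = variance(net_load) < variance(load)  # peak shaving lowers variance
```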
different energy supply systems in the same area. This may seem impractical; however, there are situations in which this kind of solution is present or expected in real life. The most remarkable is when the trend of energy supply in the region where the demand is located is changing, for instance because the energy mix in that region has been altered by the availability of new energy sources (e.g., renewable generation) or by the obsolescence of existing power plants that are replaced with new technologies using different energy carriers. This includes, for example, a change from power to gas (leading to “less electric” demand) or from gas to power (leading to “more electric” demand). In these cases, end users can be induced to change the technologies they use. However, an end user could also decide to keep the previously used technology and integrate it with a new one, with the prospect of using either technology depending on convenience, e.g., to manage a shortage of supply for one energy carrier or large price fluctuations among the carriers that can provide the same service. The demand side can thus switch the source providing a given service based on each energy carrier's price, the availability of technologies, or simply preference. In the presence of multiple end users acting on the same system, the customer choices can be modeled as random, so that the dependent demand becomes stochastic.
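A hedged sketch of how random customer choices make the dependent demand stochastic: each end user picks the cheaper carrier with an assumed probability and otherwise keeps a carrier chosen at random (habit or preference). The 80/20 behavioural split and the prices are assumptions for illustration only, not values from the text.

```python
import random

def choose_carrier(prices, rng, p_rational=0.8):
    """One end user's choice: the cheaper carrier with probability p_rational,
    otherwise a uniformly random carrier (modelling habit or preference)."""
    cheaper = min(prices, key=prices.get)
    if rng.random() < p_rational:
        return cheaper
    return rng.choice(sorted(prices))

rng = random.Random(0)
prices = {"electricity": 0.20, "gas": 0.12}  # $/kWh-equivalent, illustrative
choices = [choose_carrier(prices, rng) for _ in range(1000)]
gas_share = choices.count("gas") / len(choices)  # stochastic, close to 0.9 here
```

Aggregating many such independent choices yields a demand per carrier that is itself a random variable, which is exactly the stochastic dependent demand described above.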
Heuristic methods are optimization algorithms inspired by natural processes, commonly used for problems with a high level of complexity, namely those of a combinatorial nature. Meta-heuristics can be applied to solve problems from different fields, independently of the nature of the variables involved. In the specific case of power systems, meta-heuristics are an important tool with extensive potential to solve several problems, from daily operation to planning studies. The dimension associated with such problems, as well as the need to obtain appropriate solutions within a limited period of time, favors the application of meta-heuristics, which are becoming increasingly popular in the power systems research community.
The distribution system operation planning stage is concerned with the determination of capacitor bank, transformer and voltage regulator settings. This must be accomplished considering the active and reactive power injections of distributed generators and at the main substation, as well as physical and regulatory issues, in order to improve network performance. This is a quite complex Optimal Power Flow (OPF) problem because it involves technical factors, such as steady-state voltage regulation, and economic factors, such as loss reduction and energy bids from independent power producers. This work proposes two approaches to solve this operation planning problem. The first uses a genetic algorithm similar to that developed by Chu and Beasley, but with a different strategy to create the initial population. This algorithm can provide good-quality solutions and, in some cases, even optimal ones. The second is based on the use of sensitivities, where good-quality solutions are obtained in computing times much lower than those of the proposed genetic algorithm. Besides being applicable to short-term operation planning of distribution networks, the proposed methods could also assist the utility operator in setting up conditions for establishing contracts with independent power producers. The results presented here, using radial distribution systems of 34, 70 and 135 buses, demonstrate the potential of the proposed algorithms.
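The Chu–Beasley scheme replaces one individual per generation and keeps the population free of duplicates. The skeleton below follows that spirit on a toy objective (matching a target vector of discrete settings) standing in for the OPF-based evaluation of capacitor, transformer and regulator settings; the population size, rates and objective are illustrative assumptions, not the paper's actual algorithm.

```python
import random

def chu_beasley_ga(cost, n_vars, n_levels, pop_size=20, generations=300, seed=1):
    """Minimal Chu-Beasley-style GA: one-point crossover, single-gene mutation,
    and the child replaces the worst individual only if it improves on it and
    is not already in the population (duplicate avoidance)."""
    rng = random.Random(seed)
    pop = [[rng.randrange(n_levels) for _ in range(n_vars)] for _ in range(pop_size)]
    for _ in range(generations):
        p1, p2 = rng.sample(pop, 2)
        cut = rng.randrange(1, n_vars)
        child = p1[:cut] + p2[cut:]
        child[rng.randrange(n_vars)] = rng.randrange(n_levels)  # mutation
        worst = max(pop, key=cost)
        if child not in pop and cost(child) < cost(worst):
            pop[pop.index(worst)] = child
    return min(pop, key=cost)

# Toy stand-in objective: distance of the settings vector from a target profile.
target = [3, 1, 4, 2, 0]
cost = lambda s: sum(abs(a - b) for a, b in zip(s, target))
best = chu_beasley_ga(cost, n_vars=5, n_levels=5)
```

In the paper's setting, `cost` would instead run a power flow and combine losses, voltage deviations and energy bids.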
Electric vehicle (EV) technology has been developed for more than a hundred years, since the advent of chemical batteries as an energy storage device providing electricity to the vehicle powertrain. The powertrain is the group of components that deliver power to the driving wheels, including the engine or electric motor (KLOMP, 2010). In the late 1800s, engineers from France, England, the United States and other countries started to expand the construction of electric vehicle prototypes. In 1897, the first commercial electric vehicle was introduced into the New York City taxi fleet. Creating a new market in the automotive industry, Pope Manufacturing Co. took advantage of this opportunity and became the first large-scale electric car manufacturer in the United States (EMADI, 2014). An interesting fact occurred in 1899, when the Belgian racing driver Camille Jenatzy drove the car called La Jamais Contente past 100 km/h, setting an important milestone in the evolution of the technology (LARMINIE; LOWRY, 2004).
Linear programming is an optimization method suitable for solving problems in which the objective function and the constraints appear as linear functions of the decision variables. In the present optimization problem, the controllable components of the system are represented by decision variables, while the energy production from the photovoltaic modules and the consumption at each time step, as well as the technical specifications of the system components, determine the equality and inequality constraints. However, the technical minimum of the CHP plant is about 20% of the maximum output power; thus the generator operating range is not continuous and depends on whether the plant is on or off. The decision on the generator on/off state can be included in the optimization problem by adding a binary variable, but this turns the energy management problem into a mixed-integer linear programming (MILP) problem. As an alternative to MILP, which is much harder to solve than a linear program, strategy S1 proposes the introduction of an additional rule that defines the generator on/off state, as shown by eq. (8):
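For context, the standard textbook way to encode such a discontinuous operating range with a binary commitment variable (the generic MILP formulation, not eq. (8) itself) is:

```latex
u_t \in \{0,1\}, \qquad 0.2\,P^{\max}\,u_t \;\le\; P_t \;\le\; P^{\max}\,u_t ,
```

so that $P_t = 0$ when $u_t = 0$ and $P_t \in [0.2\,P^{\max}, P^{\max}]$ when $u_t = 1$; strategy S1 avoids the binary variable by fixing the on/off state in advance with a rule.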
The development of the plan is executed in two phases. During the first phase, the design of the plan is developed with different kinds of options. The decision makers (Ministry of Defense) can choose one option, and the detailed plan is then developed according to that intention.
• Planning of the implementation: The approved 10-year strategic plan is the basis for the development of short-term detailed planning activities. Taking into account the approved annual budget and the prognoses for the next year, very detailed short-term plans are developed (annual and 1+n year plans).
Nowadays, ICEs are the motors most commonly used in powertrains for land vehicles. Typical characteristics of an ICE are shown in Figure 2.3; its torque–speed curves are far from the ideal performance characteristic shown before. An ICE starts operating smoothly around the idle speed. Good combustion quality and maximum torque are reached at an intermediate engine speed. As the speed increases further, the torque decreases due to a reduction in the air introduced into the cylinders and the additional power losses caused by mechanical friction and hydraulic viscosity.
but GPRS was considered in cases where PLC was not feasible due to either technical or economic restrictions. In the backhaul, broadband technologies like ADSL were defined, although GPRS has been pointed out as an alternative in segments where ADSL coverage may not be available. For local control, ZigBee was considered the preferred solution for fast and low-cost deployment of monitoring and control. The second version of the InovGrid reference architecture focused on the integration of a control and management structure at the level of an HV/MV substation. This was introduced in the spirit of the Multi-Microgrid concept, in order to take advantage of MV DER to increase system reliability. The architecture and control schemes were updated, and a new entity was defined, the Smart Substation Controller (SSC), which introduced a new layer into the previous architecture definition. The main objective of the SSC was to manage the controllable devices within the MV network, among which are the DTCs that, in turn, control LV devices. The presence of SSCs allows some of the centralized functions in the DMS to be delegated to this entity in order to coordinate MV devices more efficiently, while at the same time enabling the exploration of enhanced and distributed management and control algorithms. The SSC was also defined with a more detailed monitoring scheme in mind, which allows, for instance, fault events to be located and isolated automatically without having to wait for a centralized system to take over, with clear benefits in reducing the time of system restoration services. Coordinated voltage control systems were also considered to benefit from a greater degree of control by managing flexible entities such as MGs. This architecture accounts for a greater interaction with the market domain, where newer services were expected to be easily deployed. The final version of the InovGrid architecture, presented in Fig.
2.22, established the possibility of customers participating in system services by allowing the DSO to control, through the customer HAN, the operation of microgeneration devices and flexible loads such as EVs.
This two-segment reaction curve naturally defines two types of primary frequency regulation: the first resembles the conventional droop response of synchronous machines, while the second is similar (but in a continuous version) to the operation of load or generation shedding relays. If the penetration of flexible devices reaches significant levels, the use of this controller results in two values for the network stiffness: the quasi-steady-state stiffness and the transient, or emergency, stiffness. The second is particularly relevant in the case of isolated systems or areas within large power systems that disconnect during an emergency (microgrids). For a high-penetration scenario, it is important to mention that potential stability problems have been found for the widespread implementation of frequency-responsive devices, because of the non-instantaneous propagation of frequency waves due to the existence of reactive components in the network. This means that when a power imbalance close to a power plant causes it to slow down, the frequency measured at a node several hundred kilometers away will take an interval on the order of seconds to reflect it. This calls for a stability analysis and possibly for the inclusion of a power system stabilizer (PSS) in the responsive controllers. It is proposed that the payments to storage devices for the provision of the mentioned services be computed, in $ per kWh, for the available power and for the executed energy. For the power component, the following performance characteristics are taken into account, differentiating between upward and downward adjustments:
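The two-segment reaction curve described at the start of this passage can be sketched as follows, assuming a droop slope for small frequency deviations and a steeper, continuous "shedding-like" slope beyond a threshold; all slopes, thresholds and the per-unit saturation are illustrative assumptions, not values from the text.

```python
def primary_response(delta_f_hz, k_droop=0.5, k_emerg=2.0, f_th=0.3, p_max=1.0):
    """Power adjustment (per unit) of a flexible device for a frequency
    deviation delta_f_hz. Segment 1 (|df| <= f_th) mimics conventional droop;
    segment 2 (|df| > f_th) acts like a continuous shedding relay, saturating
    at p_max. Under-frequency (negative deviation) yields a positive
    adjustment (less consumption / more injection)."""
    df = abs(delta_f_hz)
    if df <= f_th:
        p = k_droop * df
    else:
        p = k_droop * f_th + k_emerg * (df - f_th)
    p = min(p, p_max)
    return p if delta_f_hz <= 0 else -p
```

The kink at `f_th` is what produces the two distinct stiffness values: the aggregate slope seen by the network changes once deviations enter the emergency segment.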
The main objective of this project was to study the allocation of workers to workstations in the assembly line of the GE Power Controls Portugal plant, in order to increase productivity, since the number of units produced was below the set target. The project started with a study of the assembly line and the methods used by the workers. The first two steps, together with the daily analysis of the number of units produced, made it possible to build a new Skill Matrix with a new classification scale. The empirical use of this data redefined the teams that constituted the assembly lines, and this redefinition raised daily output to the intended levels. The use of Operational Research techniques launched the project into its most interesting phase, making it possible to optimise the choice of new allocations of assembly line workers to workstations and to have a benchmark against the allocations made previously. Cross-training of workers, combined with rotation between workstations, substantially improved the balance of the workload as well as the equity among workers. This action also reduced the formation of subgroups. The variation in client needs led to the creation of three scenarios of worker allocation to workstations using the same Operational Research techniques. Rules for changing models on the assembly lines were also defined during the project.
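The worker-to-workstation allocation is essentially an assignment problem. As a stand-in for the Operational Research model actually used in the project, the brute-force sketch below searches all allocations of n workers to n workstations minimising total assembly time; the time matrix is invented for illustration (for larger instances one would use the Hungarian algorithm instead of exhaustive search).

```python
from itertools import permutations

def best_allocation(times):
    """times[w][s]: time worker w needs at workstation s (one worker per
    station). Returns the allocation (station index per worker) with the
    minimum total time, found by exhaustive search over all permutations."""
    n = len(times)
    total = lambda perm: sum(times[w][s] for w, s in enumerate(perm))
    best = min(permutations(range(n)), key=total)
    return list(best), total(best)

# Illustrative 3x3 time matrix (minutes per unit), not plant data.
times = [[9, 2, 7],
         [6, 4, 3],
         [5, 8, 1]]
allocation, total_time = best_allocation(times)
```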
generation poses new challenges for the dynamic voltage stability analysis of an electric power system. The practical importance of dynamic voltage stability analysis is to help in designing and selecting counter-measures to avoid voltage collapse and enhance system stability. The impact of wind integration on reactive reserve requirements is a current area of interest for renewable integration studies and power system operators. This paper studies a new wind power plant model with reactive power management; active power and frequency management are also taken into account. The developed model can be used to represent, in a simplified way, an entire wind farm in order to simulate the dynamic voltage stability of the system, whatever the technology involved in the wind turbine. The system is completely modelled by a single dynamic converter model with appropriate control loops intended to reproduce the overall response of a wind farm for different grid events, such as faults or voltage and reactive power management at the point of common coupling.
its planning must take into consideration production planning, maintenance decisions, the inherent reliability of the equipment, and market and commercial requirements (Al-Turki, 2011). Maintenance and production have a close relationship: both have cooperative objectives in their planning, each based upon the other's plans and viewpoints. As the main goal of the organisation is to produce in order to satisfy demand through maximum utilization of available resources, maintenance enters this perspective as a means to maximize asset value and availability. For this cooperation to function properly, information flowing back and forth between operations (production) and equipment condition (maintenance) is a key enabler of planning and decision making (Swanson, 1997). Figure 5 illustrates this relationship.
Between the optical source and the λ/4 wave plate, the polarized light is injected at 45° into a polarization-maintaining fiber (PMF). After the light crosses the PMF, the λ/4 wave plate, which is oriented at 45° with respect to the birefringence axis of the PMF, transforms the two orthogonal polarization modes into circular polarization modes with opposite rotation directions. At the end of the fiber, reflection occurs and the process is reversed. Due to the reflection configuration, the states of polarization swap, and each polarization component travels the optical path traversed by the other. In the end, both polarizations pass through the same optical path, so all reciprocal effects are compensated while a phase difference proportional to the electric current to be measured is maintained. Due to the operation in reflection, this scheme also doubles the sensitivity with respect to the first one, for the same number of fiber turns around the wire.
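For reference, the textbook relation for this class of sensor (added here for clarity, not stated in the original text): with $V$ the Verdet constant of the fiber, $N$ the number of fiber turns around the conductor and $I$ the enclosed current, the phase difference accumulated between the two circular modes in the reflective scheme is

```latex
\Delta\phi = 4\,V\,N\,I ,
```

twice the $2VNI$ of an equivalent transmissive interferometer, which is the doubling of sensitivity mentioned above.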
From the 24 CT scanners that entered the audit, most were RT-dedicated scanners. The majority of the centres used a constant kVp value for the planning CTs and a customized CT-to-RED curve. Nevertheless, a general failure of the CT-to-RED conversion was observed in bone (92% failures) and dense bone (75% failures), probably due to the use of different reference materials for CT calibration. The weak influence of this kind of deviation on dose calculations was verified in one centre where a study with different CT-to-RED curves was performed, confirming previously published results. In Fig. 4 the audit measurements and the corresponding calibration curve (2011) are compared with the internal curve used in that centre since 2001, when a different CIRS phantom model had been used for CT calibration. The deviations in dense bone were explained by the extrapolation of the old curve in the high-density region. The different shapes, sizes and compositions of the two CIRS phantoms could also contribute to the reported deviations. Nevertheless, dose differences of only up to 0.5% were found when the dose distributions for Test-Case 4 (box technique) were compared using the old and the new CT-to-RED curves (in a 2D dose difference analysis). Despite the confirmed reduced influence on dose calculations, most centers have replaced
Electricity networks link power generation to electricity demand. Thus, the geography of consumer patterns, that is, their location in space, consumption times and magnitudes, eventually determines the layout of electricity networks. DER are changing the distributed electricity consumption-and-generation morphology and therefore impact network planning. However, traditional network planning routines rarely consider the structure and propensity of consumers to adopt new technologies. The reason is that planning is usually rooted in forecasting, and a common forecasting assumption is that the future resembles the past and present, both structurally and in human behaviour. New technologies, however, may disrupt a sustained pattern and invalidate this assumption. Consequently, the usual oversimplified representations of DER adoption dynamics are unable to capture large-scale technology diffusion, and this results in suboptimal network investment decisions. Recent studies suggest that predicting the future spatial distribution of DER such as PV, EV and HVAC may bring high economic value to energy utilities. In a first case study, the absence or availability of accurate DER adoption forecasts could decrease or increase network companies' revenues by several millions of U.S. dollars per TWh consumed.
A series of recently published papers by Caro et al., which extend the well-known WLS approach to Dependent Weighted Least Squares (DWLS), is also important to mention. The key idea is that assuming independence among measurement errors is inappropriate, especially when measurements are acquired from the same substation. The authors suggest a way to take measurement dependencies into account. Here, the active and reactive power measurements provided by SCADA are actually computed from the primary raw measurements: voltage magnitudes, current magnitudes and current-voltage phase angles. In other words, voltages, currents and phase angles, which are directly measured, are affected by statistically independent errors, whilst power injections and flows, which are derived from them, are necessarily affected by dependent errors. It is shown that a proper model of the correlation among measurement errors can significantly improve estimation quality. Furthermore, the same paradigm has been analyzed for gross-error scenarios, where a single bad measurement of voltage or current can result in a measurement vector containing multiple bad data. Accordingly, the authors propose modifying the Largest Normalized Residual Test (LNRT) to account for the statistical correlation between measurements, achieving a significant improvement in multiple bad data identification.
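The core of the DWLS idea is weighted least squares with a full (non-diagonal) error-covariance matrix $R$, i.e. a generalized least-squares estimate $\hat{x} = (H^{T}R^{-1}H)^{-1}H^{T}R^{-1}z$. The toy sketch below estimates one scalar state from two direct measurements; the numbers are illustrative, and the formulation is the textbook one rather than the exact algorithm of the cited papers.

```python
def gls_estimate(z, R):
    """Generalized least squares for H = [1, 1]^T (two direct measurements of
    one scalar state) with a 2x2 error-covariance matrix R, possibly carrying
    an off-diagonal (dependent-error) term."""
    (a, b), (c, d) = R
    det = a * d - b * c
    Rinv = [[d / det, -b / det], [-c / det, a / det]]
    hrh = sum(Rinv[i][j] for i in range(2) for j in range(2))         # H^T R^-1 H
    hrz = sum(Rinv[i][j] * z[j] for i in range(2) for j in range(2))  # H^T R^-1 z
    return hrz / hrh

z = [1.02, 0.98]                                      # p.u. readings (illustrative)
wls = gls_estimate(z, [[0.01, 0.00], [0.00, 0.04]])   # independent errors
dwls = gls_estimate(z, [[0.01, 0.01], [0.01, 0.04]])  # correlated errors
```

With independent errors the estimate is the usual precision-weighted average; the off-diagonal term shifts the weights, which is exactly the effect DWLS exploits.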