MOCAPIRA - Monte Carlo parallel implementation for reliability assessment

Academic year: 2021

Share "MOCAPIRA - Monte Carlo parallel implementation for reliability assessment"

Copied!
80
0
0

Texto

(1)

FACULDADE DE ENGENHARIA DA UNIVERSIDADE DO PORTO

Monte Carlo Parallel Implementation

for Reliability Assessment

Inês Maria Afonso Trigo de Freitas Alves

Mestrado Integrado em Engenharia Eletrotécnica e de Computadores

Supervisor: Professor Vladimiro Henrique Barrosa Pinto de Miranda, Ph.D.

Second Supervisor: Leonel de Magalhães Carvalho, Ph.D.


Resumo

The Monte Carlo Simulation is one of the most important tools for studying the reliability of the electric power system. It can simulate the various states of the system's components and incorporate their stochastic behaviour. There are two main types of Monte Carlo simulation, the so-called non-chronological and chronological. In the non-chronological case, states are sampled as if they were snapshots of the system, with no temporal dependence between them. In the chronological case, a virtual clock is set in motion and, as time flows, sequences of events are generated. Due to its chronological nature, it is possible to include other time-dependent data, such as production from renewable sources; however, the associated computational effort has raised barriers to its widespread use.

The main objective of this dissertation is to investigate the feasibility of a parallel implementation of the Monte Carlo Simulation on GPUs and to verify the resulting time gains.

To that end, the main objective was divided into three phases of increasing complexity. First, the non-sequential Monte Carlo Simulation was parallelised. Here it was possible to reduce the computational time of the simulation, but only for low coefficients of variation, which are not usual in probabilistic studies. Next, a parallel version of the Cross-Entropy method was implemented, with good results, since this implementation is faster for the recommended sample size. Finally, the sequential Monte Carlo Simulation was parallelised, making it faster for all coefficients of variation tested.

The sequential Monte Carlo Simulation was then applied to a system with penetration of renewables. An even greater difference between the computational times of the parallel and serial implementations was observed, indicating that the parallelisation of this method becomes more efficient as the complexity of the analysed system grows.


Abstract

The Monte Carlo Simulation is one of the most powerful tools for power systems adequacy assessment. It is capable of sampling several states of the system and incorporating the stochastic behaviour of its components. This can be done in both sequential and non-sequential implementations. The latter is similar to a set of photographs of the system taken randomly through time, while the former is capable of recreating the lifeline of the electric system, allowing for the inclusion of time-dependent issues like renewable power production. Despite these advantages, the simulation time of the Monte Carlo method remains its major weakness.

The main goal of this dissertation is to research the feasibility of parallelising Monte Carlo Simulations with GPUs and to verify the time-efficiency of this change.

The main goal was divided into three parts of increasing complexity. Firstly, the non-sequential Monte Carlo was parallelised. The resulting implementation was able to run faster than its serial counterpart, however only for very small coefficients of variation, which are not usually used in probabilistic studies. Then the Cross-Entropy method was parallelised, with good results, since its implementation is faster for the recommended sample sizes. Lastly, the parallelisation of the sequential Monte Carlo proved a success, since it is faster than its serial version for all coefficients of variation tested.

The sequential Monte Carlo was then applied to a system with penetration of renewables. This is where the most significant changes in computational times were found, indicating that the parallelisation of this method increases efficiency with the system’s complexity.


Acknowledgments

I start this list of acknowledgements by expressing my sincere gratitude to my supervisors, Professor Doctor Vladimiro Miranda and Doctor Leonel Carvalho, for their cheerful encouragement, unparalleled support and key insights.

A special mention of thanks goes to my family. I am grateful to them for providing me with the means and opportunity to pursue my aspirations and improve my education. In particular, I must thank my mother for always believing in me.

This list could not be complete without a reference to my beloved boyfriend Vitor, for his care, presence, support, enthusiasm and, most of all, his love.

Lastly, I would like to thank all of my friends for hearing me say Monte Carlo countless times and for being there to encourage me. In particular, I must thank João and Filipe who, together with Vitor, made me see programming in a different light, thus making this project possible.

Inês Trigo


"He who fails to plan is planning to fail."

Winston Churchill


Contents

1 Introduction 1

1.1 Context and Motivation . . . 1

1.2 Objectives and Achievements . . . 2

1.3 Dissertation Outline . . . 2

2 Reliability Assessment 3

2.1 Adequacy and Security . . . 3

2.2 Functional Zones and Hierarchical Levels . . . 4

2.3 Reliability Indices on HLI . . . 6

2.4 Data Collection for Reliability Studies . . . 6

2.5 Reliability Assessment Methods . . . 7

2.5.1 Analytical Methods of Reliability Assessment . . . 7

2.5.2 Simulation based Methods . . . 8

2.5.3 Non-Sequential Monte Carlo Simulation Method . . . 10

2.5.4 Sequential Monte Carlo Simulation Method . . . 11

2.5.5 Variance Reduction . . . 12

2.6 The non-sequential MCS Method for the Adequacy Assessment of Power Systems 14

2.6.1 Models of the System Components . . . 14

2.7 The sequential MCS Method for the Adequacy Assessment of Power Systems . . 16

2.7.1 Models of the system Components . . . 16

2.7.2 Load . . . 18

2.8 Agent-based technology applied to power systems reliability . . . 18

3 Parallel Computing 21

3.1 Important concepts . . . 22

3.2 GPU and CUDA Programming . . . 22

3.3 Numba enabled GPU Programming . . . 24

4 Methodology 25

4.1 Non-sequential Monte Carlo . . . 25

4.2 Importance Sampling and Cross Entropy . . . 27

4.3 Sequential Monte Carlo . . . 29

4.4 IEEE Reliability Test System 79 . . . 32

4.4.1 Description of the Reliability Test System - Load Model . . . 32

4.4.2 Description of the Reliability Test System - Generation Model . . . 34

4.5 Random Number Generator . . . 35

4.5.1 Numba’s random number generator . . . 35

4.5.2 Python’s random number generator . . . 35


5 Results 37

5.1 Non-sequential Monte Carlo Simulation . . . 37

5.2 Cross Entropy . . . 40

5.3 Sequential Monte Carlo Simulation . . . 42

5.3.1 Parallelising the generation of system components lifelines . . . 43

5.3.2 Sampling several years at a time in parallel . . . 43

5.4 Applying the parallelised version of the Monte Carlo Simulation to a system with a high penetration of renewables . . . 45

5.4.1 Modified IEEE RTS 79 . . . 45

5.4.2 Influence of the variation in hydro plants in the system reliability . . . . 45

5.4.3 Influence of wind farms in the system reliability . . . 48

5.5 Influence of hydro plants and wind farms in the system reliability . . . 51

5.6 Analysis and discussion of the results . . . 52

6 Conclusion 55

6.1 Future work . . . 56


List of Figures

2.1 Division of the System Reliability . . . 3

2.2 Organisation of the Power System in functional zones . . . 4

2.3 Hierarchical levels . . . 4

2.4 Evolution of the functional zones and hierarchical levels . . . 5

2.5 Normal Distribution N(0, 1) with two standard deviations shaded . . . 10

2.6 Example of a system lifeline. The orange line corresponds to the hourly load variation, the blue line to the variation in generation, and the shaded grey area to the period with load curtailment. . . 12

2.7 Single bus model for HLI . . . 14

2.8 Two-state model for a generating unit . . . 15

2.9 Example of a daily peak load variation curve . . . 15

2.10 Multistate Markov . . . 16

2.11 Analytical modelling of a dam . . . 17

2.12 Basic Architecture of an agent . . . 19

2.13 Non-synchronised approach to an agent-based MCS . . . 19

3.1 Differences between serial and parallel programming. Using serial programming, a single instruction can be executed at a given moment while the use of parallel programming allows for the execution of several instructions at once. . . 21

3.2 Difference of cores between a CPU and GPU . . . 22

3.3 Organisation of a CUDA application . . . 23

3.4 GPU memory hierarchy . . . 23

4.1 States of a generator’s lifeline . . . 30

4.2 Peak load variation throughout a year. It is possible to visualise the peaks during winter and summer and the valleys during spring and autumn . . . 34

4.3 Sampled values according to the random distribution . . . 35

5.1 Comparison of computational times for the non-sequential MCS . . . 39

5.2 Possible implementations of a parallelised sequential Monte Carlo Simulation . . 42

5.3 Comparison between the recorded simulation times of both parallel implementations of the MCS . . . 44

5.4 Example of a hydropower plant lifeline. Figure 5.4a illustrates the state of the system, where two failures (state = 0) can be observed. Figure 5.4b illustrates the lifeline of the system, where both the failures and the variability of production can be observed. . . 46

5.5 Comparison of the system LOLE with and without hydropower plants variability . . . 48
5.6 Example of a vector of states of a wind farm 2000 h lifeline . . . 49


5.7 Example of a wind farm 2000 h lifeline. The maximum capacity is represented by the orange line, at 120 MW. The wind farm is not able to produce its maximum capacity during the 2000 h sampled; therefore, high values of LOLE and EPNS are to be expected . . . 49
5.8 Comparison of the system LOLE with and without wind farms . . . 50


List of Tables

4.1 Weekly peak load, in pu . . . 32

4.2 Daily peak load, in pu, in relation to the peak of each week . . . 32

4.3 Hourly peak load, in pu, in relation to the peak of each day . . . 33

4.4 Generation System Data . . . 34

5.1 Serial non-sequential MCS results for several values of β . . . 37

5.2 Parallel non-sequential MCS results for several values of β . . . 38

5.3 Average number of iterations for several β . . . 38

5.4 Comparison between the computational times and estimations for both serial and parallel implementations for β = 0.1, β = 0.05 and β = 0.01 . . . 38

5.5 Comparison between the computational times and estimations of serial and parallel implementations for β = 0.005 and β = 0.001 . . . 39

5.6 Comparison between computational times for the non-sequential MCS for β = 0.05 with different sample sizes . . . 40

5.7 Comparison between computational times for both serial and parallel implementations of CE . . . 40

5.8 Comparison between several values of the forced outage rate (for): the original and those obtained after CE implemented in series or in parallel . . . 41

5.9 Estimated LOLP for several values of for and β = 0.05 . . . 41

5.10 Average number of years sampled for several β . . . 42

5.11 Results of the serial implementation of the sequential MCS . . . 43

5.12 Results of the parallel MCS implementation, parallelising the sampling of generator states . . . 43

5.13 Results of the parallel MCS implementation, parallelising the sampling and evaluation of several years at a time . . . 43

5.14 Comparison between computational times for the sequential MCS for β = 0.05 with different numbers of years sampled at a time . . . 44

5.15 IEEE MRTS 79 Generation system data . . . 45

5.16 Reliability indices for the IEEE RTS 79 and its modified version . . . 47

5.17 Variation of LOLE and EPNS with the number of hydropower plants on the electric system . . . 47

5.18 Reliability indices for the IEEE RTS 79 and its modified version . . . 50

5.19 Variation of LOLE and EPNS with the number of windmills represented in multiples of the number used on the RTS . . . 50

5.20 Reliability indices for the IEEE RTS 79 and its modified version . . . 51

5.21 Variation of LOLE and EPNS with the different combinations of power plants . . 51

5.22 Power plants needed to achieve the same reliability indices as the baseline . . . 52


5.23 Comparison between computational times for both serial and parallel implementations . . . 53


Abbreviations

API Application Programming Interface

CE Cross-Entropy
CI Confidence Interval
CPU Central Processing Unit
CUDA Compute Unified Device Architecture
EENS Expected Energy Not Supplied
EPNS Expected Power Not Supplied
F&D Frequency and Duration
for Forced outage rate
GA Genetic Algorithms
GPU Graphics Processing Unit
HL0 Hierarchical Level 0
HLI Hierarchical Level I
HLII Hierarchical Level II
HLIII Hierarchical Level III
IEEE MRTS 79 IEEE Modified Reliability Test System
IEEE RTS 79 IEEE Reliability Test System
JIT Just in Time
LOLE Loss of Load Expectation
LOLP Loss of Load Probability
MCS Monte Carlo Simulation
MTTF Mean Time To Failure
MTTR Mean Time To Repair
OPF Optimal Power Flow
PBM Population-based Methods
PSO Particle Swarm Optimization
pu Per unit


Chapter 1

Introduction

1.1 Context and Motivation

In the past decades, the climate changes caused, amongst other factors, by increasing carbon emissions [1] have become increasingly apparent [2], which has alerted the environmental movement to the need for a transition to an emission-free and sustainable energy system. This entails a reorganisation of the electric system.

Currently, the electric system is experiencing a transition from a centralised system composed of a few very large fossil-fuelled units to a decentralised one, composed of several smaller generators powered by renewable sources [3]. This change of environment, however, brings a new set of complications to reliability assessment.

Reliability studies aim to quantify the inherent risks of the power system, using forced outage rates of equipment and load forecasting as inputs for decision-making processes involving the planning and operation of the system. As a matter of fact, there is a reliability cost/worth relationship that links investment with increments in reliability. The shift the generating system is experiencing implies that reliability studies must also accommodate the uncertainty and stochastic behaviour associated with renewable sources of energy.

The Monte Carlo Simulation method is widely used in reliability estimation since not only does it provide accurate frequency and duration assessments, but it also allows the computation of reliability indices such as Loss of Load Probability and Loss of Load Expectation. It defines the expected frequency of occurrence of an event as its probability of occurrence, in accordance with the frequentist theory of sampling. It is a stochastic model and can include both chronological and non-chronological events, thus allowing for the inclusion of the forecast models needed for renewables [4].

Unfortunately, this method comes with a significant downside: the considerable simulation time needed to provide accurate estimations [4]. Several studies have shown that it is possible to speed up the process by reducing the number of iterations needed, with methods such as Importance Sampling [5] and Control Variates [6].


Nowadays, with the recent research and technical breakthroughs in semiconductors, it is possible to utilise Graphics Processing Units (GPUs) and their parallelising capabilities [7] for fast model iterations, opening the door for flexible and high-performance paradigms to be applied to the Monte Carlo Simulation.
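Monte Carlo trials are statistically independent, which is exactly what makes them amenable to massively parallel hardware. As a rough illustration of that decomposition (not the GPU implementation developed in this dissertation), the sketch below splits a toy two-unit adequacy experiment into batches evaluated concurrently with Python's standard library; all system values are invented for the example.

```python
# Illustrative sketch only: Monte Carlo trials are independent, so the
# workload decomposes into batches that can be evaluated concurrently.
# This CPU version uses threads from the standard library; the thesis
# targets GPU threads via CUDA/Numba, but the decomposition is the same.
from concurrent.futures import ThreadPoolExecutor
import random

def batch_loss_of_load(n_states: int, seed: int) -> int:
    """Count sampled states with loss of load in a toy 2-unit system."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_states):
        # each 50 MW unit is up with probability 0.98 (for = 0.02)
        cap = sum(50.0 for _ in range(2) if rng.random() > 0.02)
        failures += cap < 60.0          # constant 60 MW load
    return failures

batches = [(25_000, s) for s in range(4)]          # 4 independent batches
with ThreadPoolExecutor(max_workers=4) as pool:
    counts = list(pool.map(lambda a: batch_loss_of_load(*a), batches))
lolp_hat = sum(counts) / 100_000                   # pooled LOLP estimate
```

On a GPU each batch would map to many lightweight threads rather than four OS threads, but the pooling of independent partial results is identical.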

The goal of this dissertation is to accelerate the Monte Carlo Simulation for reliability assessment by implementing it in parallel on a GPU.

1.2 Objectives and Achievements

The main objective of this dissertation is to implement a Monte Carlo Simulation, in parallel, to be applied in reliability assessment studies.

To accomplish this goal, the approach used was to divide it into smaller milestones, listed below:

• Implement a parallel version of the non-sequential Monte Carlo;

• Implement a parallel version of the Cross-Entropy method;

• Implement a parallel version of the sequential Monte Carlo;

• Reliability assessment of a system with penetration of renewable power sources;

1.3 Dissertation Outline

Besides the Introduction, this document has five more chapters:

Chapter 2, where a theoretical introduction to some important concepts regarding electric systems reliability assessment is made.

Chapter 3, where an introduction is made to parallel computing as well as to the tools utilised throughout this dissertation: CUDA and Numba.

Chapter 4, where the methodology and the implemented algorithms are explained. An introduction is also made to the test system utilised as input, the IEEE RTS 79, and to the random number generators utilised.

Chapter 5, where the results of the implementation of the algorithms described in the previous chapter are presented and discussed. Both serial and parallel implementations are compared in terms of estimations and computational time. Initially, the results of the parallelisation of the non-sequential Monte Carlo are discussed, followed by the Cross-Entropy method and the sequential Monte Carlo. Lastly, the parallelised version of the sequential Monte Carlo is utilised to simulate the IEEE RTS 79 modified to include the uncertainty brought into the electric system by generators powered by renewable sources, and these results are compared to those of the original IEEE RTS 79.


Chapter 2

Reliability Assessment

This chapter provides a theoretical introduction to some important concepts used throughout this dissertation, such as the definition of reliability, the reliability assessment methods and their differences.

2.1 Adequacy and Security

Reliability is the term used to describe the ability of the electric system to provide an adequate supply of energy. However, this concept has a wide range of meanings and cannot be associated with a single definition; therefore, it is usual to create subdivisions of the term [8]. A simple but effective division is the one between the two essential pillars of a power system: system adequacy and system security, as shown in Figure 2.1.

Figure 2.1: Division of the System Reliability [8]

System adequacy refers to the static conditions of the system, in other words, the existence of adequate facilities to satisfy consumer demand [8]. This includes the facilities necessary to generate sufficient energy, as well as the associated transmission and distribution required to transport it from the generation points to the consumer.

On the other hand, system security refers to the ability of the system to respond to dynamic or transient disturbances arising within it, such as load or breaker switching, fuse disconnection, islanding and rare natural phenomena like lightning [9]. Transient disturbances are associated with local and widespread disturbances and, even though their duration does not exceed a few milliseconds, can cause a sudden loss of significant generation and damage transmission components.


It is important to note that most probabilistic techniques, such as the Monte Carlo Simulation, are used for reliability studies in the domain of adequacy assessment [8].

2.2 Functional Zones and Hierarchical Levels

The modern power system is a vast, complex and highly integrated system that can be represented with varying degrees of detail, thus allowing for several different models of the system's components as well as different techniques to solve them.

This variety of models and techniques, in conjunction with the fact that the system can be divided into subsystems which can be analysed separately, has led to a categorisation of the power system into several Functional zones. Adequacy studies are conducted for each of these zones.

Figure 2.2: Organisation of the Power System in functional zones [8]

The most intuitive approach for dividing the electric system is the separation of its main functional zones, namely generation, transmission and distribution [10].

Functional zones, shown in Figure 2.2, can subsequently be organised into hierarchical levels [10].


The first level, Hierarchical Level I (HLI), refers only to the generation facilities. Here, adequacy studies aim to find the capacity to supply the system load [8]. Hierarchical Level II (HLII) refers to both the generation and transmission facilities. The HLII models aim to determine the ability of the system to supply bulk consumption points [8]. Lastly, Hierarchical Level III (HLIII) includes all three functional zones. Studies on this level are not usually conducted since it includes the entire power system, meaning that not only would the computation time required for its accurate simulation be unfeasible, but "the results would be so vast that meaningful interpretation would be close to impossible" [10]. The analysis of the distribution functional zone is usually conducted separately.

(a) Inclusion of the energy zone
(b) Inclusion of decentralised generation

Figure 2.4: Evolution of the functional zones and hierarchical levels [11]

In the last decades, hierarchical concepts have been revised, mainly due to important changes in the power industry [11]. A new functional zone, Energy, was added, corresponding to Hierarchical Level 0 (HL0), Figure 2.4a. This new zone is especially important to accommodate the renewable sources of energy that cannot be stored and are characterised by the variability and intermittency of their primary energy resources, unlike the generation zone, in which the generating units have as many primary resources as needed.

A new organisation of the system is characterised by the decentralisation of generation, pushing it directly into the distribution zone, Figure 2.4b [12].

All in all, despite its simple nature, this division into functional zones and hierarchical levels is used since most utilities separate their activities according to these zones or are entirely responsible for them [8].


2.3 Reliability Indices on HLI

Adequacy studies have as a primary outcome the assessment of reliability indices, which can be divided into predictive and post-performance indices. The former provide an insight into the future of the power system and are associated with planning; post-performance indices provide a realistic view of the system and are used for reporting purposes. In this dissertation, only predictive indices at HLI are considered, some of them being:

• Expected failure rate - λ (failures/year)

• Expected repair rate - µ (year⁻¹)

• Mean time to repair - r (hours), equivalent to r = 1/µ

• Forced outage rate - for, a probability that defines the unavailability of a generating group

• Loss of Load Probability - LOLP, which gives the probability of load curtailment

• Loss of Load Expectation - LOLE (hours), which represents the average number of hours, days or weeks with load curtailment during the evaluation period

• Expected Power Not Supplied - EPNS (MW), which gives the average load curtailed

• Expected Energy Not Supplied - EENS (MWh), which represents the average energy curtailed during the evaluation period
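To make the last four indices concrete, the following sketch estimates them from a vector of sampled hourly generation-minus-load margins. The array `margins` and all numeric values are hypothetical placeholders; the 8736-hour year (52 weeks) matches the IEEE RTS 79 load model used later in the document.

```python
# Hedged sketch: estimating HLI predictive indices from sampled hourly
# states. `margins` is a hypothetical array of (generation - load) values
# in MW, one per sampled hour; the distribution is a toy stand-in.
import numpy as np

rng = np.random.default_rng(42)
margins = rng.normal(200.0, 150.0, size=8736)   # one sampled "year" of hours

deficit = np.maximum(-margins, 0.0)   # MW curtailed in each sampled hour
lolp = np.mean(deficit > 0)           # Loss of Load Probability
lole = lolp * 8736                    # LOLE (hours/year), 8736 h per RTS year
epns = deficit.mean()                 # Expected Power Not Supplied (MW)
eens = epns * 8736                    # Expected Energy Not Supplied (MWh)
```

The point of the sketch is the relationship between the indices: LOLE and EENS are the probability-type indices LOLP and EPNS scaled by the length of the evaluation period.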

2.4 Data Collection for Reliability Studies

In order to conduct a meaningful reliability study, it is fundamental to have meaningful data, which is not always easy to obtain and carries a high degree of uncertainty.

Although the data that can be collected is unlimited, it is neither efficient nor desirable to collect, analyse and store more data than required for the intended purpose. It is essential to identify its utility and process it accordingly.

Initially, field data is obtained by documenting the failures as they take place, as well as the various outage durations associated with these failures. This data is then analysed to create the statistical indices used in reliability assessment models [10].

Data can be collected through a number of approaches, namely the component approach and the unit approach [13]. The component approach is more useful for providing data for the predictive assessment of future system performance, while the unit approach is considered useful in assessing chronological changes in the reliability of the system.


2.5 Reliability Assessment Methods

Power system reliability indices can be calculated using a variety of methods. The two main approaches are analytical and simulation-based. The vast majority of techniques have been analytical, since simulation techniques generally require a large amount of computing power. Analytical techniques, on the other hand, are simple: they rely on mathematical models to calculate the exact value of reliability indices or close approximations [4]. With the advances in technology and computational power, simulation techniques started gaining traction [14]. These methods provide estimations of the reliability indices bounded by a confidence interval, and rely on probabilistic models [14].

The majority of models assume that events in the system are independent; however, if needed, common cause failure events can also be used.

2.5.1 Analytical Methods of Reliability Assessment

Analytical methods calculate the reliability indices by obtaining the probability density function of the system states. With that information, the reliability indices can be calculated according to equation 2.1 [15]:

E[H(x)] = \sum_{x \in A} H(x) f(x)    (2.1)

Here x corresponds to a system state that contains the states of all the components of the system, A is the set that contains all system states, f(x) is the probability of the system state x and H(x) is the outcome of the test function H for the system state x. E[H(x)] is a given reliability index, mathematically modelled by H. In other words, these models combine the probability density function of the system state with a probabilistic load model to construct the system risk model used to calculate the reliability indices.
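As a minimal illustration of equation 2.1, consider a hypothetical system of two 50 MW units with a forced outage rate of 0.02 each, serving a constant 60 MW load. Taking H(x) = 1 for states with load curtailment, the enumeration below yields the exact LOLP; all numbers are invented for the example.

```python
# Hedged sketch of equation 2.1 for a toy system: two 50 MW units, each
# with forced outage rate 0.02, serving a constant 60 MW load.
# H(x) = 1 when the state sheds load, so E[H] is the exact LOLP.
from itertools import product

units = [(50.0, 0.02), (50.0, 0.02)]   # (capacity MW, forced outage rate)
load = 60.0

lolp = 0.0
for state in product([0, 1], repeat=len(units)):   # 0 = down, 1 = up
    prob = 1.0
    cap = 0.0
    for up, (c, for_) in zip(state, units):
        prob *= (1.0 - for_) if up else for_       # f(x): state probability
        cap += c if up else 0.0
    if cap < load:          # H(x) = 1: loss of load in this state
        lolp += prob        # accumulate H(x) * f(x)
# lolp == 1 - 0.98**2 = 0.0396 for this toy system
```

The enumeration visits all 2^n states, which is exactly why analytical methods need state-space reduction assumptions on real systems.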

Despite the fact that these kinds of methods are computationally efficient, the complex behaviour of the systems implies that they can only be modelled mathematically if some assumptions are made, resulting in a vast number of states that might end up being trimmed, which leads to questionable trustworthiness of the reliability indices.

Examples of indices calculated with analytical methods are the Loss of Load Expectation (LOLE) and Frequency and Duration (F&D). Both can incorporate long-term load forecast uncertainty as well as scheduled maintenance and the evolution of the power system.

Within the analytical methods, there is also the subgroup of Population-based Methods (PBM).

2.5.1.1 Population-based Methods

The main thesis supporting these methods is that power systems are, in fact, very reliable: the states in which a loss of load occurs are a small minority. On the other hand, some of the states that provoke a loss of load have an infinitesimal probability of actually occurring, rendering them irrelevant to the final index.

The PBM are designed to find, among all states, those that are relevant to the computation of the reliability indices. Searching solely for the significant states reduces the time that would be needed to analyse all of the possible states the system might occupy, as is necessary for simulation-based methods.

The PBM are state enumeration methods, not probabilistic ones, hence not allowing for the definition of confidence intervals [16]. The stopping criterion relies only on the variance between simulated states: after a certain number of iterations without significant variance, the process is interrupted.

The estimation of the index is done through equation 2.2:

E[F(x)] = \sum_{i \in D} p_i F_i    (2.2)

Here D refers to the set of visited failure states, p_i is the probability of occurrence of state i and F_i the value of the reliability test function in that state.

Initially, the goal was to progress the population towards states with a higher probability of failure, using Genetic Algorithms (GA). In subsequent work, other experiments were conducted with new criteria, looking for states with the highest Expected Power Not Supplied (EPNS) [17]. Later, the GA was successfully replaced with Particle Swarm Optimisation (PSO) [18], Artificial Immune Systems [19] and Ant Colonies [20]. New formulations of the search were also tested with multiple criteria, aiming to maximise both the loss of load and its cost, or the loss of load and its probability of occurrence [21].

All in all, the PBM mechanisms led to the expected results with less computational effort than the simulation-based methods [17], due to the reduction in visited states. However, it is not yet possible to avoid visiting the same state multiple times, which reduces the efficiency of the algorithm. It is also worth noting that the states must be enumerated and kept in memory in order to keep a record of the visited states and avoid counting multiple instances. The queries to this memory imply an increasing computational effort as the algorithm progresses, precisely when this search matters most. In addition, there is a lack of methods to evaluate the error, and the stopping criterion relies solely on reaching a plateau, through which false convergence can be mistaken for a result, making these methods far from ideal when dealing with large power systems.

2.5.2 Simulation based Methods

The Monte Carlo Simulation is an essential tool for analysing events that exhibit probabilistic behaviour [22]. It provides estimations of the reliability indices within a confidence interval by simulating the behaviour of the power system [23].

This method aims to create a sample that captures the behaviour of the power system through the random sampling of states and their analysis, in order to assess the mean value of the results and use it to estimate the global behaviour of the system through the behaviour of the sample. The advantage of the MCS over analytical methods is that the number of iterations is not related to the size of the system, but instead to the variance of its states [4]. The flexibility of this method also allows using it for both HLI and HLII studies.

In its most simple form, the MCS is no more than a systematic application of the statistical concept of sampling [24]. It relies on samples of dimension N drawn from the state space and the subsequent estimation of their expected value (equation 2.3) and variance (equation 2.4).

E(X) = \sum_i p(X_i) X_i = \frac{1}{N} \sum_{i=1}^{N} X_i    (2.3)

V(X) = E([X - E(X)]^2) = \frac{1}{N} \sum_{i=1}^{N} (X_i - E(X))^2    (2.4)

Equations 2.3 and 2.4 are only valid when working with discrete and finite sets, where p(X_i) denotes the probability of a particular state X_i occurring.

Consider a sampling process that produces samples of size N, and let \bar{X} be the mean of such a sample. Since \bar{X} is itself a random variable, generated from multiple samples of size N, the expected value of the sample-mean distribution equals the expected value of the original random variable (equation 2.5), while its variance is related to the variance of X and decreases with the size of the sample (equation 2.6) [24].

E(\bar{X}) = E\left(\frac{1}{N} \sum_{i=1}^{N} X_i\right) = \frac{1}{N} \sum_{i=1}^{N} E(X_i) = \frac{1}{N} N \mu = \mu    (2.5)

V(\bar{X}) = V\left(\frac{1}{N} \sum_{i=1}^{N} X_i\right) = \frac{1}{N^2} \sum_{i=1}^{N} V(X_i) = \frac{1}{N^2} N \sigma^2 = \frac{\sigma^2}{N}    (2.6)

The convergence of MCS methods is monitored through the coefficient of variation β of the estimates of the reliability indices, calculated according to:

β² = V(X̄) / Ê(X)²    (2.7)

Rearranging Equation 2.7 to isolate N, it is possible to get:

N = V(F) / [β·Ê(F)]²    (2.8)

which shows that the coefficient of variation β is related to the number of iterations, meaning that, if we want to reduce the number of iterations for the same β, we must reduce the variance of the data [25].

The knowledge of the variance of the samples allows for the easy estimation of an interval of confidence for the estimated values.


Figure 2.5: Normal Distribution N(0, 1) with two standard deviations shaded [26]

Figure 2.5 represents the probability density function of the normal distribution N(0, 1). With mean µ = 0 and variance σ² = 1, the symmetrical interval centred on the mean, with a semi-width of two standard deviations, corresponds to a probability obtained by the integral of the distribution between the limits of the interval. Therefore, if an interval of two standard deviations is marked on the base, there is a level of trust α = 95.44% that the value we are looking for lies inside said interval.

In practice, a confidence interval of 95% is used instead of 95.44%. While the latter corresponds to 2 standard deviations, the former equates to 1.96 standard deviations. The determination of the 95% confidence interval (CI) for Ê(X) is therefore done through Equation 2.9:

CI(95%) = [Ê − 1.96·σ̂/√N, Ê + 1.96·σ̂/√N]    (2.9)
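As an illustrative sketch of Equations 2.7 and 2.9 (function and sample values are hypothetical, not the dissertation's implementation), the convergence coefficient and the 95% confidence interval can be computed from a batch of sampled outcomes:

```python
import math

def beta_and_ci(samples):
    # sample mean and variance (Equations 2.3 and 2.4)
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    # coefficient of variation of the estimate (Equation 2.7, with V(mean) = var / n)
    beta = math.sqrt(var / n) / mean
    # 95% confidence interval around the estimate (Equation 2.9)
    half = 1.96 * math.sqrt(var) / math.sqrt(n)
    return beta, (mean - half, mean + half)

# toy batch of binary loss-of-load outcomes
beta, ci = beta_and_ci([0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0])
```

In practice, the simulation keeps drawing batches until β falls below the target tolerance.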

The simulation process can follow one of two approaches:

• Random - examining random samples, generated at random intervals of time;

• Sequential - examining each sample through the time interval of the simulated period, in chronological order.

This division in the sampling process calls for a subdivision of the simulation-based methods into sequential and non-sequential [10]. Pseudo-sequential [27] and quasi-sequential [28] MCS methods were also proposed throughout the evolution of the method; these adopt neither a pure state-space nor a chronological representation.

2.5.3 Non-Sequential Monte Carlo Simulation Method

A system state in MCS can be viewed as an aggregation of the states of its components and the load state. In non-sequential MCS, these states are sampled without considering any time dependency between consecutive states, similarly to a set of pictures of the system.

The reliability indices are estimated according to Equation 2.10:

Ê(H(X)) = (1/N) ∑_{i=1}^{N} H(x_i)    (2.10)


where x_i is a sampled system state, N the number of sampled states, H(x_i) the outcome of the test function H for each system state, and Ê(H(X)) the estimate of a given reliability index that is modelled mathematically by H.

Taking the calculation of LOLP̂ as an example, a possible test function H [8] is:

H_LOLP(x_i) = { 1 if x_i is a failure state
              { 0 if x_i is a success state    (2.11)

For each state in which there is a load curtailment, H(x_i) takes the value 1; otherwise, if x_i is a success state, H(x_i) equals 0. This function can easily be changed to fit other reliability indices.

If, in turn, H(x) is the amount of load curtailment associated with the state x, E(H) represents the system EPNS.
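These two test functions can be sketched as follows (illustrative names and toy curtailment values, not the dissertation's implementation), both plugged into the estimator of Equation 2.10:

```python
def h_lolp(curtailment):
    # Equation 2.11: 1 for a failure state, 0 for a success state
    return 1 if curtailment > 0 else 0

def h_epns(curtailment):
    # amount of load curtailed in the state (MW)
    return curtailment

# toy curtailment (MW) of four sampled states
states = [0.0, 15.0, 0.0, 5.0]
lolp = sum(h_lolp(c) for c in states) / len(states)   # LOLP estimate
epns = sum(h_epns(c) for c in states) / len(states)   # EPNS estimate
```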

Some efforts were made to allow non-sequential MCS to accommodate chronological aspects of the power system operation, such as load and generation variations; however, those have some limitations [29]. It is also not possible to model non-Markovian processes, since non-sequential MCS is based on the state-space representation [14].

2.5.4 Sequential Monte Carlo Simulation Method

The sequential MCS method goes through the system by turning on a virtual clock [15]. Following the flow of time, system states are generated in a time sequence, creating the lifeline of the power system. Since this approach can reproduce the operation of the power system, it is easy to include variations that cannot be modelled through Markovian models, such as load and capacity fluctuations caused by renewable power sources [30].

For the characterisation of a chronological simulation, it is necessary to know the probability density functions associated with the times of operation and failure of each component [14]. It is also necessary to have a model for the load variation, usually based on predictions, in order to build a chronological and deterministic load curve [14].

Resembling the non-sequential MCS, the estimates of the reliability indices are calculated according to Equation 2.12, where {x_n} is the sequence of system states over period i, S_i the number of states in that sequence, and NY the number of simulated periods:

Ê[H(X)] = (1/NY) ∑_{i=1}^{NY} H({x_n}_{n=1}^{S_i})    (2.12)

The test function H is evaluated at the end of each simulated period. For example, in order to estimate the LOLP index:

H({x_n}_{n=1}^{S_i}) = (1/T) ∑_{n=1}^{S_i} d(x_n) × H_LOLP(x_n)    (2.13)

where x_n is the n-th state of the sequence, T is the duration of the simulated period, and d(x_n) is the duration of state x_n.


Figure 2.6: Example of a system lifeline. The orange line corresponds to the hourly load variation, the blue line to the variation in the generation, and the shaded grey area to the period with load curtailment.

The index thus corresponds to the total time during which the system is unable to supply the load, illustrated by Figure 2.6, divided by the duration of the simulated period.

2.5.5 Variance Reduction

Equation 2.8 shows that, in order to reduce the number of iterations while maintaining the same interval of confidence, the variance between the sampled states must be reduced, making variance reduction a very important topic. In fact, it is one of the main reasons that brought this method back into use in place of analytical methods. It should, however, be noted that variance reduction is an alternative way of processing the same information: before it can be achieved, a computational price has to be paid to obtain and deal with the required information [4].

2.5.5.1 Control Variate

The primary trait of the control variate method is the use of information obtained by another method, usually analytical, to reduce the variance. The problem is solved in two parts: first the analytical one, and then MCS is used to calculate the difference between the solution of the total problem and the analytical part.

If Z is a random variable that is strongly correlated with another random variable F, then define a new random variable Y:

Y = F − (Z − E(Z)) (2.14)

Y and F have the same expected value:

E(Y) = E(F) − (E(Z) − E(Z)) = E(F)    (2.15)


the variance of Y is given by:

V(Y) = V(F − Z) = V(F) + V(Z) − 2·cov(F, Z)    (2.16)

where cov(F, Z) is the covariance between F and Z. If F and Z are strongly correlated, V(Y) will be smaller than V(F), and since both have the same expected value, E(F) may be estimated more efficiently through the sampling of Y than of F:

Ê(F) = (1/N*) ∑_{i=1}^{N*} Y_i    (2.17)

where N* is the new number of system-state samples required, which should be smaller than N.

This technique can be used to evaluate reliability indices in HLII [31]. The load curtailment due to generation capacity shortage can be the control variable Z, and the curtailment attributed to the composite generation and transmission system the variable F. This technique, however, cannot be used to evaluate bus indices of composite systems unless a proper control variate for the purpose can be found [14].
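A toy sketch of the control variate estimator of Equations 2.14 and 2.17, assuming paired observations of F and Z and an exactly known E(Z) (all names and numbers are illustrative):

```python
def control_variate_estimate(f_samples, z_samples, ez):
    # Equation 2.14: Y_i = F_i - (Z_i - E(Z)); Equation 2.17: average the Y_i
    ys = [f - (z - ez) for f, z in zip(f_samples, z_samples)]
    return sum(ys) / len(ys)

# F strongly correlated with Z, whose expectation ez is known analytically
f = [10.2, 11.1, 9.8, 10.9]
z = [10.0, 11.0, 10.0, 11.0]
est = control_variate_estimate(f, z, ez=10.5)
```

Because Y fluctuates less than F, the same precision is reached with fewer samples.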

2.5.5.2 Importance Sampling

The importance sampling (IS) method is based on the distortion of the probability distribution of x, and therefore of F(x), in order to increase the probability of meaningful events (events that lead to loss of load) without changing the expected value [32].

As discussed before, the probabilistic analysis of a power system can be seen as the determination of the expected value of a test function of the system, F(x). Multiplying and dividing each term of the expected-value definition by a new distribution function p*(x) with a known distribution:

E(F) = ∑_{x∈X} [F(x)·p(x)/p*(x)]·p*(x) = E(F*)    (2.18)

which can be interpreted as a new test function F*(x) given by:

F*(x) = F(x)·p(x)/p*(x)    (2.19)

where p*(x) represents a new distribution of the state probabilities. The mean remains the same, E(F*) = E(F), but the variance differs. When the new density is built from an auxiliary model Z as p*(x) = Z(x)·p(x)/E(Z), the expected value of F can be estimated using:

Ê(F) = (1/N*) ∑_{i=1}^{N*} F(x_i)·E(Z)/Z(x_i)    (2.20)


where the vectors x_i are sampled according to p*(x). Note that the efficiency of this process depends on the relationship between F(x) and Z(x); the knowledge kept in the model Z allows for the reduction of the variance of the Monte Carlo estimation, similarly to the control variate [4].
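A minimal numerical sketch of the re-weighting in Equations 2.18 and 2.19 on a toy discrete state space (all names and probabilities are illustrative): states are sampled from the distorted density p*, yet the weighted average still targets E(F).

```python
import random

# toy discrete space: F(x) and the original density p(x)
F = {0: 0.0, 1: 0.0, 2: 1.0}
p = {0: 0.70, 1: 0.25, 2: 0.05}        # rare state 2 carries the "failure"
p_star = {0: 0.40, 1: 0.30, 2: 0.30}   # distorted density boosting state 2

rng = random.Random(0)
states = list(p_star)
weights = [p_star[s] for s in states]

n = 50000
total = 0.0
for _ in range(n):
    x = rng.choices(states, weights)[0]   # sample from p*, not p
    total += F[x] * p[x] / p_star[x]      # F*(x), Equation 2.19
estimate = total / n                       # approximates E(F) = 0.05
```

The rare event is now hit in roughly 30% of the draws, while the weight p(x)/p*(x) removes the bias.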

2.5.5.3 Cross Entropy

In the last section, the importance of having the right relationship between F(x) and Z(x) was explained. In some cases, a Monte Carlo sampling can be conducted to develop an adequate sampling density function. This method is called Cross-Entropy (CE) [33].

The CE optimisation method follows the same basic steps as the MCS previously discussed. Since the probability distribution for the unavailability of a generation unit belongs to the exponential family (Bernoulli distribution), the CE method is suitable to be applied [34]. In this case, the parameters to be optimised are the generating units' unavailabilities u. This is done in an iterative way until the convergence criterion is verified. The outcome is a distorted unavailability for each generating unit that favours the loss-of-load cases and can be used directly in the importance sampling method [34] [35].

This method will be discussed in detail in Section 4.2.

2.6 The non-sequential MCS Method for the Adequacy Assessment of Power Systems

From this point on in this dissertation, every discussion, conclusion and result will pertain to Hierarchical Level I (HLI), which focuses solely on the production functional zone.

It is modelled as a single bus, with all the load depending on it and all the generation groups connected to it, as illustrated in Figure 2.7.

Figure 2.7: Single bus model for HLI [4]

2.6.1 Models of the System Components

The simplest representation of a generator is one with two working states, namely unit-up and unit-down. The generator is assumed to transition between states in a random manner, due to non-programmed incidents, with a failure rate λ, and to change from failure to operational with a repair rate µ, which corresponds to µ = 1/r, where r represents the average time to repair.


Figure 2.8: Two-state model for a generating unit [15]

The probability of a generator being out of service, also known as the forced outage rate (for), is given by:

for = λ / (λ + µ)    (2.21)

It is common to use the parameters Mean Time to Failure (MTTF) and Mean Time to Repair (MTTR) instead of λ and µ. These parameters can be obtained using [4]:

MTTF = 1/λ    (2.22)

MTTR = 1/µ    (2.23)

Figure 2.9: Example of a daily peak load variation curve

The load, in turn, is represented by a daily peak load variation curve, which is composed of the daily peaks of the active energy consumption. These values are usually known from prediction studies. Each set of 365 days of data is organised into a diagram where the x-axis represents the percentage of days in which the peak load is exceeded, and the y-axis represents the value of the peak load, as shown in Figure 2.9.


2.7 The sequential MCS Method for the Adequacy Assessment of Power Systems

2.7.1 Models of the System Components

In sequential MCS it is possible to implement not only two-state but also multi-state Markov models [36].

In the two-state model (Figure 2.8), a component has its maximum capacity available when in the up-state and zero capacity in the down-state. Since the duration of the states is exponentially distributed [4], the residence times in the up and down states are given by:

T_Up = −(1/λ)·ln U_1    (2.24)

T_Down = −(1/µ)·ln U_2    (2.25)

where T_Up is the residence time in the up state and T_Down the residence time in the down state. U_1 and U_2 are uniformly distributed numbers sampled from the interval [0, 1].
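Equations 2.24 and 2.25 correspond to inverse-transform sampling of exponential durations; a minimal sketch with hypothetical rates:

```python
import math
import random

def residence_times(lam, mu, rng):
    # Equations 2.24 and 2.25: exponential durations via inverse transform
    u1 = 1.0 - rng.random()   # uniform in (0, 1], avoids log(0)
    u2 = 1.0 - rng.random()
    return -math.log(u1) / lam, -math.log(u2) / mu

rng = random.Random(42)
lam, mu = 1.0 / 1000.0, 1.0 / 50.0      # MTTF = 1000 h, MTTR = 50 h
t_up, t_down = residence_times(lam, mu, rng)

# the sampled up-times average out to the MTTF
mean_up = sum(residence_times(lam, mu, rng)[0] for _ in range(20000)) / 20000
```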

Figure 2.10: Multistate Markov model [15]

Some of the system components might have to be modelled through a multi-state Markov chain [37][38], as illustrated in Figure 2.10, in case the component is part of an aggregation of N identical components. In this case, the maximum capacity of state k is given by:

CC_k = (N − k) × C    (2.26)

where N is the number of components and C is the capacity of one component [38]. Knowing that the duration of the states is exponentially distributed, the residence times in state C_0, with all the components working, and in state C_N, with none of the components working, are given by:

T_C0 = −(1/(N·λ))·ln U_1    (2.27)

T_CN = −(1/(N·µ))·ln U_2    (2.28)


Figure 2.11: Analytical modelling of a dam

where T_C0 is the residence time in C_0 and T_CN the residence time in C_N. The residence time in the remaining states is calculated through Equation 2.29:

T_Ck = min( −(1/((N − k)·λ))·ln U_1, −(1/(k·µ))·ln U_2 )    (2.29)

The model to use is chosen based on the type of generation unit that is to be simulated.
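Equation 2.29, with Equations 2.27 and 2.28 as its edge cases, can be sketched as follows (hypothetical parameters, not the dissertation's implementation):

```python
import math
import random

def multistate_residence(k, n, lam, mu, rng):
    # residence time in state C_k of the multi-state Markov model
    u1, u2 = 1.0 - rng.random(), 1.0 - rng.random()   # uniform in (0, 1]
    if k == 0:        # all units up: only failures possible (Eq. 2.27)
        return -math.log(u1) / (n * lam)
    if k == n:        # all units down: only repairs possible (Eq. 2.28)
        return -math.log(u2) / (n * mu)
    # competing failure/repair exponential clocks (Eq. 2.29)
    return min(-math.log(u1) / ((n - k) * lam), -math.log(u2) / (k * mu))

rng = random.Random(7)
t = multistate_residence(2, 5, 1.0 / 2000.0, 1.0 / 40.0, rng)
```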

2.7.1.1 Conventional Generating Units

Conventional generating units are those that convert the thermal energy contained in fossil fuels or the atomic nucleus into electricity [39]. A two-state Markov chain can model them [40]. In the up-state they are capable of producing their maximum capacity; likewise, the capacity is zero when they are in the down-state.

2.7.1.2 Hydro Generating Units

Hydropower generating units are responsible for converting the potential energy of water into electricity [41]. A two-state Markov chain, with transitions that follow an exponential probability distribution, can model them [40].

The capacity of the generating unit, however, follows a time-dependent model.

It depends on the levels of water stored in the reservoirs and on the inflows. The dam can be modelled through the topographic survey of its contour lines, allowing for an estimation of the area and, consequently, the volume of each level, as can be seen in Figure 2.11; a step approximation is obtained that can be implemented in an analytic model. This, however, might become a very complex method.

A more straightforward but robust approach is using the several hydrological series [14], which are based on historical observations, have probabilities associated with them, and create a relationship between the total water stored in the dam and the power produced by the corresponding unit in each month of the year. Despite the probabilistic nature of the hydrological series, they are a deterministic input of the sequential MCS method. At every simulated year, a new hydrological year is sampled, and each hydro unit is assigned to its corresponding hydrological series. The capacity available from the series is then multiplied by the maximum capacity obtained from the failure/repair stochastic model [15].

2.7.1.3 Wind Generating Units

Wind farms are responsible for converting the kinetic energy of the wind into electricity [42]. They are modelled by a multi-level Markov chain where each state corresponds to the failure of a single wind turbine inside the wind farm [15].

Akin to the hydropower generating units, the maximum capacity obtained from the wind farm lifeline state is multiplied by the corresponding value taken from the hourly wind time series. Once more, despite having a probabilistic nature, the series are used as deterministic input parameters of the sequential MCS method. They differ, however, in the amount of data available: since wind farms are a much newer technology when compared to hydropower, there is not as much data available, leading to the usage of artificially generated wind series [14].

2.7.2 Load

The sequential MCS demands a chronological description of the evolution of demand; it is no longer possible to use classified load diagrams. A representation that contains a load level for each hour of the day is necessary [10].

It is also possible to model the uncertainty of these values, since they come from a prediction. One of the methods consists of associating each point of the load diagram with an uncertainty of Gaussian nature. During the simulation, it is then necessary to sample the value of the load at each step [14].

Alternatively, one can associate the uncertainty of the load with the global value of the load diagram, resulting in multiple diagrams that are higher or lower than the average load. In this case, the sampling of the load is made at the beginning of the simulation for each period [14].

2.8 Agent-based technology applied to power systems reliability

In order to accelerate the MCS method at the computational level, agent-based technology has been used to ascertain the reliability of power systems [43][11]. These approaches divide the MCS algorithm into several tasks and submit them to agent processing [44].


Figure 2.12: Basic Architecture of an agent [11]

An agent, as its name suggests, is, at its core, "something that acts" [11]. It cannot be categorised as a mere program, since it has many attributes that distinguish it: agents can operate under autonomous control, perceive the environment where they are included, and persist over a prolonged time. Agents are also able to adapt and change, even taking up different goals [11].

Figure 2.13: Non-synchronised approach to an agent-based MCS [11]

The main goal of this agent-based MCS is to concentrate the computationally expensive tasks on agents that can replicate and work together. Monte Carlo techniques facilitate this kind of approach due to their inherent capacity to produce independent blocks of information, such as state vectors. In order to assess the reliability indices, an agent can create as many blocks of the system as necessary. This approach was proven to produce reliable and robust results [11].


In this dissertation, a similar approach is proposed. Instead of using agents, the tasks will be divided and parallelised between the several threads of a GPU.


Chapter 3

Parallel Computing

The demand for computational resources keeps growing, reaching a point where it cannot be accommodated by a single processing unit [45]. In fact, the gap between the performance of the Central Processing Unit (CPU) and the performance promised by Moore's law, a projection of the evolution of the number of transistors in an integrated circuit [46], continues to widen [45]. There is a need for a more powerful approach, using parallel processing.

Before introducing parallel programming, it is essential to look back at serial computing. This technique consists of breaking a large task into smaller instructions that are executed one by one on the CPU of a computer [47]. Only one instruction can be executed at any moment. Parallel computing, on the other hand, is characterised by enabling the execution of more than one instruction simultaneously, thus allowing more dynamic simulation and modelling as well as the handling of larger datasets. However, most algorithms require adaptations in order to run concurrently and give the same result, meaning that this approach can, in some cases, require considerable overhead [7].

(a) Serial programming (b) Parallel programming

Figure 3.1: Differences between serial and parallel programming [48].

Using serial programming, a single instruction can be executed at a given moment while the use of parallel programming allows for the execution of several instructions at once.


3.1 Important concepts

It was previously seen that serial programming is usually done on the CPU. This section introduces the concept of a CPU and compares it to a GPU. A CPU can be compared to the brain of a computer [49]. A modern computer has between one and four processing cores [47]. On the other hand, a Graphics Processing Unit (GPU) is a particular type of microprocessor, almost like a specialised type of CPU. The GPU has a limited instruction set, but is able to process operations considerably faster than a typical CPU [50]. Unlike the CPU, the GPU can have thousands of processing cores running simultaneously [51]. Each core is slower than a typical CPU core; however, GPU cores can be extremely efficient at basic mathematical operations [50]. The GPU is especially well-suited to address problems that can be expressed as data-parallel computations, in which the same program is executed on many data elements in parallel, which makes it a suitable candidate for accelerating the MCS method.

Figure 3.2: Difference of cores between a CPU and GPU [49]

That said, the first thing one must think about before attempting to parallelise an algorithm is to verify whether the problem under consideration can be solved in parallel.

Lastly, it is important to note that a parallelising compiler can work in two different ways, fully automatic or programmer dictated [52]. In the former, the compiler analyses the source code and identifies regions that can be parallelised. This analysis aims to find the time-consuming parts of the code and ascertain whether they are capable of being parallelised. The chosen parts are usually loops and vector or matrix operations, where the same steps are performed repeatedly [52]. Programmer-dictated approaches, on the other hand, use compiler flags so that the programmer explicitly tells the compiler how to parallelise the code [52].

In this dissertation, automatic parallelisation was used, since it represents the first step towards a parallelised MCS. It is important to note that automating the parallelisation process might lead to lower performance of the overall system and allows for less flexibility.

3.2 GPU and CUDA Programming

Compute Unified Device Architecture (CUDA) is an Application Programming Interface (API) created by NVIDIA for the general-purpose usage of GPUs [50]. It provides a set of


subroutines, as well as communication protocols, to be used by high-level programming languages such as Python [50]. It provides the means for users to perform serial or simple parallel computing on the CPU while offloading massively parallel tasks to the GPU [47].

A CUDA program is composed of two primary components: a host program, which runs on the CPU, and a GPU kernel, a function launched by the host and executed on the device, the GPU. The kernel execution can proceed independently of the host execution.

Figure 3.3: Organisation of a CUDA application [52]

An application starts by executing code on the CPU host. At a specific moment, the host code invokes a GPU kernel that is executed on a GPU grid, which is composed of several independent groups of threads called thread blocks (Figure 3.4). The GPU executes the kernel in parallel by using multiple threads.

Figure 3.4: GPU memory hierarchy [52]


3.3 Numba enabled GPU Programming

Numba is an open-source just-in-time (JIT) compiler for Python [53] that supports CUDA GPU programming by directly compiling a restricted subset of Python code into CUDA kernels. NumPy arrays are transferred between the CPU and the GPU automatically.
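Since a CUDA kernel is essentially a function executed once per thread index, its structure can be sketched in plain Python (hypothetical names; a real Numba kernel would carry the @cuda.jit decorator and obtain its index from cuda.grid(1), and the loop below would be replaced by the GPU running the bodies in parallel):

```python
def square_kernel_body(i, x, out):
    # work done by one GPU thread; i plays the role of cuda.grid(1)
    if i < len(x):              # guard: grids usually launch more threads than data
        out[i] = x[i] * x[i]

def launch(kernel_body, n_threads, *args):
    # CPU stand-in for a kernel launch
    for i in range(n_threads):
        kernel_body(i, *args)

x = [float(v) for v in range(8)]
out = [0.0] * len(x)
launch(square_kernel_body, 32, x, out)   # excess threads are masked by the guard
```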


Chapter 4

Methodology

In this chapter, the methodology used throughout this dissertation is presented.

In order to understand how a parallelised version of the MCS might be implemented, it is first necessary to comprehend its serial version and pinpoint the tasks that take the majority of the computational time and are able to be parallelised. The chapter starts by presenting and explaining the serial algorithms for the several case studies, and then the steps that will be parallelised. Lastly, an introduction is made to the test system used as input (the IEEE RTS 79) and to the random number generator utilised.

4.1 Non-sequential Monte Carlo

Algorithm 1: Non-sequential Monte Carlo for Reliability Assessment
Result: LOLP̂
PGen as the capacity of each system generator;
for of each generator;
PL as the values of the peak load variation curve;
β;
maxit as the maximum number of iterations;
k = 0;
while β² < V(X)/(N × LOLP̂²) and k < maxit do
    Sample the vector of states (x_i ∈ X);
    Compute the probability of loss of load in the sampled state x_i;
    Estimate LOLP̂ = avg(X);
    Compute the variance between the sampled states;
end

The algorithm used to implement a non-sequential MCS is presented in Algorithm 1. Here, the main goal is to obtain an estimate LOLP̂ for a convergence coefficient β known a priori.

The first step is to sample a vector of states, according to Algorithm 2.


Algorithm 2: Sampling of the system state for the non-sequential MCS
Result: Available capacity
PGen as the max capacity of each system generator;
for of each generator;
while the states of all generators have not been sampled do
    Draw a number r, between 0 and 1, according to the uniform distribution;
    if r < for then
        The generator is considered to be in its down state;
        Reduce the available capacity by the capacity of the failed generator;
    end
end

The vector of states is obtained by generating a random number between 0 and 1, according to the uniform distribution, and comparing it to the generator's for. If this value is smaller than the generator's for, the generator is considered out of service.

The total capacity of the system is then reduced by the maximum capacity of the out-of-service generator. Next, this capacity is used to compute the LOLP for this iteration, according to Algorithm 3.
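A plain-Python sketch of Algorithm 2 (illustrative names and toy data; the dissertation's parallel version runs this logic per GPU thread):

```python
import random

def sample_available_capacity(caps, fors, rng):
    # Algorithm 2: draw one system state and return its available capacity
    available = sum(caps)
    for cap, f in zip(caps, fors):
        if rng.random() < f:     # generator sampled in its down state
            available -= cap
    return available

rng = random.Random(0)
caps = [100.0, 50.0, 50.0]       # MW, toy three-unit system
fors = [0.02, 0.05, 0.05]
capacity = sample_available_capacity(caps, fors, rng)
```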

Algorithm 3: Loss of load probability for the non-sequential MCS
Result: Loss of load probability
Peak load from the peak load variation curve;
Probability of each peak load occurring, from the peak load variation curve;
Available capacity;
if Available capacity > maximum value of the peak load variation curve then
    Probability = 0;
else if Available capacity < minimum value of the peak load variation curve then
    Probability = 1;
else if the available capacity is in the list of values then
    Probability = probability of the corresponding peak load occurring;
else
    Find the closest higher value and its index;
    Obtain the approximate probability through linear interpolation between that value and the one immediately before it;
end

To calculate the loss of load probability, the peak load variation curve values are used as input. As seen in Section 2.6.1, the curve represents the daily peak loads throughout a year, organised according to their probability of occurrence.

If the available capacity is higher than the largest value of the daily peak load, which has the smallest probability of occurrence, then it can be concluded that the probability of loss of load is zero.


On the other hand, if the capacity is smaller than the minimum value of the peak load, which has a 100% probability of occurring, then LOLP_k = 1.

If the available capacity lies between the maximum and the minimum of the peak load variation curve, then it is necessary to look it up in the list of daily peak loads and find the corresponding probability.

This list, however, is finite and therefore cannot contain all combinations of loss-of-load scenarios. In that case, it is necessary to compute the probability for a value that is not present in the list. For that, the linear interpolation method is used. It starts by finding the closest higher number in the list of peak loads, with its respective index and probability of occurrence. The loss of load probability is then calculated according to the equation of a line, as seen in Equations 4.1, 4.2 and 4.3:

LOLP_i = m·C + b    (4.1)

m = [Probability(x − 1) − Probability(x)] / [PeakLoad(x − 1) − PeakLoad(x)]    (4.2)

b = Probability(x − 1) − PeakLoad(x − 1)·m    (4.3)

where x is the index of the closest higher value, and C is the available capacity of the sampled state.
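The lookup and interpolation step of Algorithm 3 can be sketched as follows (hypothetical names and toy curve; `peaks` is sorted in descending order with the corresponding probabilities of being exceeded, mirroring Figure 2.9):

```python
def lolp_of_state(capacity, peaks, probs):
    # Algorithm 3: peaks descending, probs[i] = probability that peaks[i] is reached
    if capacity > peaks[0]:
        return 0.0
    if capacity < peaks[-1]:
        return 1.0
    for i, pk in enumerate(peaks):
        if pk == capacity:
            return probs[i]
        if pk < capacity:
            # line through points i-1 and i, evaluated at the capacity (Eqs. 4.1-4.3)
            m = (probs[i - 1] - probs[i]) / (peaks[i - 1] - peaks[i])
            b = probs[i - 1] - peaks[i - 1] * m
            return m * capacity + b
    return 1.0

# toy descending peak-load curve
peaks = [100.0, 90.0, 80.0, 70.0]
probs = [0.01, 0.10, 0.50, 1.00]
val = lolp_of_state(85.0, peaks, probs)
```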

Looking into Algorithm 1, it is possible to observe that the most computationally heavy parts are the sampling of new states and the evaluation of each state by calculating its LOLP. These will be the parts executed in parallel, by simulating multiple years at a time.

Numba allows the user to decide which functions to parallelise. To take advantage of this tool, the steps described by Algorithms 2 and 3 are implemented as individual functions.

It is also worth noting that, to use Numba and CUDA efficiently, vectors must be implemented as numpy arrays, and all variables that will be used in parallel must be initialised with the correct size and type, so that memory can be allocated for them.

4.2 Importance Sampling and Cross Entropy

The cross-entropy method is an auxiliary method for Monte Carlo that distorts the forced outage rate, for, of the generators in order to incite the appearance of low-probability events, thus reducing the computational effort of the MCS process.

To obtain a new set of forced outage rates, the original for of each generator and the system load are needed, just like in the non-sequential Monte Carlo, as well as the sample size N for the optimisation process (usually 10 000 samples) [28] [15]. The multilevel parameter ρ, typically


Algorithm 4: Cross Entropy for Monte Carlo Reliability Assessment
Result: a new, optimised value of for (for_new)
PGen as the max capacity of each system generator;
for of each generator;
Set the sample size N and the multilevel parameter ρ;
Set the maximum number of iterations itmax;
k = 0;
while L̂_k < L_max and k < k_max do
    for_old = for_new;
    Sample X_i using for_old (Algorithm 2);
    Evaluate each state X_i according to Equation 4.4;
    Sort S(X_i) in descending order;
    if S[(1 − ρ)N] > L then L̂_k = S[(1 − ρ)N] else L̂_k = L;
    For all i: if S_i < L_max then H[X_i] = 1;
    Compute W according to Equation 4.6;
    Update for_new for each generator according to Equation 4.7;
end

between 0.01 and 0.1 [33] [54], and a smoothing parameter α, which can be used to prevent the occurrence of 0's and 1's in the new for [33] [54], are also required. Lastly, the maximum number of iterations allowed is also used as input, to prevent the program from running indefinitely.

The algorithm is initialised by using the original value of for to sample a vector of states of size N. Each state is evaluated according to Equation 4.4:

S(X_i) = ∑_{j=1}^{NG} X_ij × C_j    (4.4)

where X_ij corresponds to the state of generator j in sampled state i, NG is the number of generators, and C_j the maximum capacity of the generator. Thereafter, the evaluated states S[i] are ordered in descending order, i.e. S[1] > S[2] > ... > S[N] [15]. Set L̂_k as S[(1 − ρ)N] if this is greater than L; otherwise, set L̂_k = L.

Next, the test function for the LOLE index is evaluated according to Equation 4.5:

H(X_i) = { 1 if S(X_i) < L̂_k
         { 0 if S(X_i) ≥ L̂_k    (4.5)

Then the likelihood ratio W_i(X_i, for_new, for_old) is calculated. This represents the correction applied to compensate for the distortion introduced during the sampling process:

W_i(X_i, for_new, for_old) = [∏_{j=1}^{NG} (1 − for_new,j)^X_ij × for_new,j^(1−X_ij)] / [∏_{j=1}^{NG} (1 − for_old,j)^X_ij × for_old,j^(1−X_ij)]    (4.6)


Algorithm 5: Sequential Monte Carlo for Reliability Assessment
Result: LOLÊ
MTTF of each generator that composes the system;
MTTR of each generator that composes the system;
PGen as the capacity of each generator;
PL as the hourly peak load;
maxit as the maximum number of iterations;
while β² < V(X)/(N × LOLÊ²) and k < maxit do
    Sample the lifeline states for each system generator;
    Compute the total system capacity;
    i = 0;
    while i < 8760 do
        if Capacity(h = i) < Peak load(h = i) then LOLE_k = LOLE_k + 1;
    end
    Estimate LOLÊ = average(LOLE_k);
    Compute the variance between the sampled states;
end

A distortion of the probability is determined by Equation 4.7.

for_{new,j} = 1 - \frac{1}{N_G} \times \frac{\sum_{i=1}^{N} H(X_i) W_i(X_i, for_{new}, for_{old}) X_{ij}}{\sum_{i=1}^{N} H(X_i) W_i(X_i, for_{new}, for_{old})} \qquad (4.7)

When the CE optimisation algorithm is finished, an MCS algorithm based on IS techniques can be run using the optimal parameter vector. The estimation of the LOLP is then made according to Equation 4.8.

\hat{LOLP} = \frac{1}{M} \sum_{i=1}^{M} H(X_i) \times W_i(X_i, for_{new}, for_{old}) \qquad (4.8)

This is done iteratively until the stopping criterion is reached.
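Continuing the earlier sketch, and assuming a sample already drawn with the distorted parameters, the parameter update below uses the standard Bernoulli cross-entropy form (a ratio of weighted sums per generator), followed by the importance-sampling estimate of the LOLP. The 500 MW loss-of-load threshold and all system data are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical 4-generator system; all figures are illustrative.
C = np.array([100.0, 200.0, 300.0, 400.0])
for_orig = np.array([0.02, 0.04, 0.06, 0.08])  # original FORs
for_old = np.array([0.10, 0.15, 0.20, 0.25])   # FORs used to draw the sample

N = 10000
X = (rng.random((N, C.size)) >= for_old).astype(float)
S = X @ C
H = (S < 500.0).astype(float)                  # loss-of-load indicator (assumed level)

# Likelihood ratio (original density over sampling density).
W = (np.prod((1 - for_orig) ** X * for_orig ** (1 - X), axis=1)
     / np.prod((1 - for_old) ** X * for_old ** (1 - X), axis=1))

# Standard Bernoulli CE update: distorted FOR for each generator j.
hw = H * W
for_new = 1.0 - (hw[:, None] * X).sum(axis=0) / hw.sum()

# IS estimate of the LOLP (Equation 4.8, with M = N samples).
lolp_hat = np.mean(H * W)
```

In practice this block would sit inside the iteration loop of Algorithm 4, with `for_old` replaced by `for_new` at each pass until the stopping criterion is met.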

4.3 Sequential Monte Carlo

Sequential MCS aims to estimate the LOLE using, as inputs, the MTTF and MTTR of each generator, their maximum capacity, and the hourly peak load of the system over a specific time frame, usually a year.

The state of the system is sampled by drawing the lifeline states of each generator that composes the system and then processing them.

The evaluation of the state is done by comparing the hourly produced capacity with the peak load. If the generation is greater than the consumption at a given hour, according to Equation 4.9, the consumers are supplied. Otherwise, there is a load curtailment, and it is noted that this hour of the year was not supplied.

H(X_i) = \begin{cases} 1 & \text{if Available capacity} < \text{Load} \\ 0 & \text{if Available capacity} \geq \text{Load} \end{cases} \qquad (4.9)

The LOLE of each sampled state is the sum of the hours that experience load curtailment, as demonstrated by Equation 4.10.

LOLE_k = \sum_{i=1}^{8760} H(X_i) \qquad (4.10)

The LOLE is estimated by computing the mean over all sampled states. The cycle ends when the stopping criterion is achieved, according to Equation 2.7, or when the maximum number of iterations is exceeded.
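The hourly comparison of Equations 4.9 and 4.10 can be sketched as follows; the capacity and load series below are randomly generated placeholders for one sampled year, not data from the test system.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical hourly series for one sampled year (values are illustrative).
hours = 8760
capacity = rng.uniform(2400.0, 3405.0, hours)  # available capacity per hour, MW
load = 2850.0 * rng.uniform(0.4, 1.0, hours)   # hourly load, MW

# Equation 4.9: H = 1 in every hour where the load is curtailed.
H = (capacity < load).astype(int)

# Equation 4.10: LOLE contribution of this sampled year, in hours/year.
lole_k = H.sum()
```

Averaging `lole_k` over many sampled years gives the L̂OLE of Algorithm 5.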

Algorithm 6: Simulating the lifeline of a generator, during the period of a year, for the sequential Monte Carlo Simulation
Result: Lifeline states
Input: MTTF of a generator;
       MTTR of a generator;
Initialise a vector of zeros, with size 8760, for the system life states;
i = 0;
while i < 8760 do
    Draw a number, U1, between 0 and 1, using the uniform distribution;
    time_up = −(1/λ) ln U1;
    Lifeline vector elements from i to i + time_up become 1;
    i = i + time_up;
    Draw a number, U2, between 0 and 1, using the uniform distribution;
    time_down = −(1/µ) ln U2;
    i = i + time_down;
end

The system states are simulated over a period of one year using Equations 2.24 and 2.25, and an hourly vector of states is obtained, an example of which can be observed in Figure 4.1.
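A minimal Python sketch of Algorithm 6, assuming up and repair durations are drawn by inverse-transform sampling from exponential distributions with rates λ = 1/MTTF and µ = 1/MTTR; the MTTF and MTTR values used in the call are illustrative, not the test-system data.

```python
import math
import random

def sample_lifeline(mttf, mttr, hours=8760, seed=0):
    """Sample the hourly up/down lifeline of one generator (sketch of Algorithm 6)."""
    rng = random.Random(seed)
    state = [0] * hours              # vector of zeros; 1 will mark up-hours
    i = 0.0
    while i < hours:
        # Inverse-transform draw of the up time: -(1/lambda) ln U1, lambda = 1/MTTF.
        time_up = -mttf * math.log(1.0 - rng.random())
        for h in range(int(i), min(int(i + time_up), hours)):
            state[h] = 1             # generator available during the up period
        i += time_up
        # Inverse-transform draw of the repair time: -(1/mu) ln U2, mu = 1/MTTR.
        time_down = -mttr * math.log(1.0 - rng.random())
        i += time_down               # hours stay 0 while the generator is down
    return state

lifeline = sample_lifeline(mttf=1000.0, mttr=100.0)  # illustrative parameters
```

Using `1.0 - rng.random()` keeps the uniform draw inside (0, 1], so the logarithm is always defined.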


This vector can later be used to compute the capacity of the system by merely multiplying it by the maximum capacity of the generator and grouping all of the generators' states following the method described in Section 2.7.
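For example, the grouping step described above amounts to scaling each generator's lifeline by its capacity and summing across generators; the four-hour lifelines and capacities below are toy values.

```python
import numpy as np

# Hypothetical lifelines of three generators (1 = up), one row per generator.
lifelines = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
])
capacities = np.array([100.0, 200.0, 300.0])  # MW, illustrative

# Hourly system capacity: each lifeline scaled by its capacity, then summed.
system_capacity = capacities @ lifelines      # -> [600. 400. 300. 600.]
```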

In sequential Monte Carlo, parallelisation can be achieved in one of two forms:

• Generate the lifeline a year at a time, by sampling multiple generators states in parallel, and evaluate in the CPU.

• Generate several years at a time and evaluate them in parallel.

Both options are explored. Once again, it is worth noting that all variables present in the parallel processing must be initialised with the final size and type.
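The second option can be sketched with CPU threads standing in for the GPU work-items discussed in the text; this is a structural illustration only, with invented generator parameters, not the GPU implementation itself.

```python
import math
import random
from concurrent.futures import ThreadPoolExecutor

def sample_year(seed, mttf=1000.0, mttr=100.0, hours=8760):
    """Sample one year's lifeline of a single hypothetical generator."""
    rng = random.Random(seed)
    state = [0] * hours
    i = 0.0
    while i < hours:
        up = -mttf * math.log(1.0 - rng.random())      # exponential up time
        for h in range(int(i), min(int(i + up), hours)):
            state[h] = 1
        down = -mttr * math.log(1.0 - rng.random())    # exponential repair time
        i += up + down
    return state

# Generate several years at a time and evaluate them concurrently;
# one independent seed per year keeps the workers' streams separate.
with ThreadPoolExecutor(max_workers=4) as pool:
    years = list(pool.map(sample_year, range(8)))      # 8 sampled years
```

On a GPU the same structure would map one year (or one generator) to each thread, which is why every variable must be pre-allocated with its final size and type.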

During the sampling of states, a vector of zeroes with 8760 elements is used, which is later changed according to the sampling result. It is assumed that every year starts with all the generators in the on-state, which is not representative of reality and will introduce an error in the final estimated value.


4.4 IEEE Reliability Test System 79

The IEEE Reliability Test System (IEEE RTS 79) was created from the need for a standardised test for reliability assessment. It defines a system sufficiently broad to provide a basis for both HLI and HLII studies since it describes a load model, generation system and transmission network.

Its annual peak load is 2850 MW, and the total installed generation is 3405 MW, allowing for a reserve of 555 MW. In this dissertation, it will be used as input data for the MCS for reliability assessment at HLI.

4.4.1 Description of the Reliability Test System - Load Model

The load is described by hourly measurements throughout a year, on a per unit (pu) basis. It is expressed in a way that represents daily, weekly and seasonal patterns.

Load data is organised in weeks; the peak load occurs in week 51, during winter, similarly to what would happen in reality, as illustrated in Table 4.1.

Table 4.1: Weekly peak load, in pu

Week  Peak Load   Week  Peak Load   Week  Peak Load   Week  Peak Load
  1   0.862        14   0.75         27   0.755        40   0.724
  2   0.9          15   0.721        28   0.816        41   0.743
  3   0.878        16   0.8          29   0.801        42   0.744
  4   0.834        17   0.754        30   0.88         43   0.8
  5   0.88         18   0.837        31   0.722        44   0.881
  6   0.841        19   0.87         32   0.776        45   0.885
  7   0.832        20   0.88         33   0.8          46   0.909
  8   0.806        21   0.856        34   0.72         47   0.94
  9   0.74         22   0.811        35   0.726        48   0.89
 10   0.737        23   0.9          36   0.705        49   0.942
 11   0.715        24   0.887        37   0.78         50   0.97
 12   0.727        25   0.896        38   0.695        51   1
 13   0.704        26   0.861        39   0.724        52   0.952
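As a hypothetical example of how the pu tables combine, the hourly load in MW can be obtained by scaling the annual peak by the weekly, daily and hourly factors; only the week-51 value below comes from Table 4.1, while the daily and hourly factors are invented placeholders.

```python
# Illustrative reconstruction of one hourly load value in MW:
# hourly load = annual peak (2850 MW) x weekly pu x daily pu x hourly pu.
ANNUAL_PEAK_MW = 2850.0

weekly_pu = {1: 0.862, 51: 1.0, 52: 0.952}  # subset of Table 4.1
daily_pu = 0.93                             # hypothetical entry of Table 4.2
hourly_pu = 0.85                            # hypothetical hourly factor

load_mw = ANNUAL_PEAK_MW * weekly_pu[51] * daily_pu * hourly_pu
```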

Table 4.2 gives the daily peak load as a percentage of the weekly peak load. The same load distribution is applied to all seasons.

Table 4.2: Daily peak load, in pu, in relation to the peak of each week

Day   Monday  Tuesday  Wednesday  Thursday  Friday  Saturday  Sunday
