• Nenhum resultado encontrado

Numerical Investigation of One-Dimensional Stochastic Neural Field Equations with Delay

N/A
N/A
Protected

Academic year: 2021

Share "Numerical Investigation of One-Dimensional Stochastic Neural Field Equations with Delay"

Copied!
80
0
0

Texto

(1)

Numerical Investigation of One-Dimensional Stochastic

Neural Field Equations with Delay

Madalena França Martins Rolão Nabais

Thesis to obtain the Master of Science Degree in

Mathematics and Applications

Supervisor(s):

Prof. Pedro Miguel Rita da Trindade e Lima

Examination Committee

Chairperson: Prof. Adélia da Costa Sequeira dos Ramos Silva

Supervisor: Prof. Pedro Miguel Rita da Trindade e Lima

Member of the Committee: Prof. Wolfram Erlhagen

(2)
(3)

“I open at the close.” J.K.Rowling , Harry Potter and the Deathly Hallows

(4)
(5)

Acknowledgments

I want to thank my supervisor, Professor Pedro Lima for all his support, patience, advice and guidance throughout the development of this thesis. I am very grateful for the wonderful introduction to the world of Neural Field Equations.

I would also like to thank my family for the love, the laughs and the motivation. I can not express how much I love you all.

(6)
(7)

Resumo

Equac¸ ˜oes do campo neuronal (ECN/NFE) s ˜ao um tipo particular de equac¸ ˜oes integro-diferenciais que ajudam a modelar a evoluc¸ ˜ao espacial e temporal de vari ´aveis como actividade sin ´aptica em populac¸ ˜oes de neur ´onios, o que por sua vez permite uma an ´alise detalhada do comportamento din ˆamico deste tipo de populac¸ ˜oes. Devido `a complexidade dos modelos e `a sua import ˆancia em contexto real, como em Rob ´otica e Neuroci ˆencia, t ˆem vindo a desenvolver-se diversos m ´etodos num ´ericos para a resoluc¸ ˜ao correcta e eficiente das ECNs .

Esta tese analisa ECNs determin´ısticas e estoc ´asticas unidimensionais, tendo em conta velocidade de propagac¸ ˜ao dos sinais, ru´ıdo e est´ımulo externo, e apresenta um m ´etodo num ´erico para as resolver, que ´e devidamente testado quanto `a sua converg ˆencia e complexidade. O m ´etodo pertence `a classe das aproximac¸ ˜oes espectrais de Galerkin, baseia-se na regra dos trap ´ezios para a aproximar os integrais e no m ´etodo de Euler-Maruyama para a resoluc¸ ˜ao do sistema de equac¸ ˜oes diferenciais estoc ´asticas. A linguagem de programac¸ ˜ao MATLAB ´e comparada com a mais recente Julia quanto `a rapidez de execuc¸ ˜ao. Adicionalmente, o algoritmo aqui proposto ´e comparado com outro, que faz uso da Transfor-mada R ´apida de Fourier. Para terminar, apresentamos algumas simulac¸ ˜oes que ilustram os resultados do algoritmo e analisamos as diferenc¸as importantes no comportamento das soluc¸ ˜oes induzidas pela velocidade de propagac¸ ˜ao finita, pelo ru´ıdo e quando os dois factores coexistem.

Palavras-chave:

Equac¸ ˜oes do campo neuronal estoc ´asticas; Velocidade de propagac¸ ˜ao finita; M ´etodo de Galerkin; Euler-Maruyama; Julia.

(8)
(9)

Abstract

Neural field equations (NFE) are a particular type of integro-differential equations that model the spa-tiotemporal evolution of variables like synaptic activity in populations of neurons, which in turn enables a detailed analysis of the dynamic behaviour of this type of populations. Due to the complexity of the mod-els and their importance in real life applications such as Robotics and Neuroscience, several numerical methods have been developed for the correct and efficient resolution of NFEs.

This thesis analyses deterministic and stochastic one-dimensional NFEs, taking into account the trans-mission speed, noise and external stimulus, and presents a numerical method to solve them, which is properly tested for its convergence and complexity. The numerical method belongs to the class of Galerkin-type spectral approximations, relies on the trapezoidal rule to solve the integrals and in Euler-Maruyama’s method to solve the system of stochastic differential equations. The programming language MATLAB is compared with the up-and-coming Julia with respect to computational speed. Additionally, we compare the proposed algorithm with another one, that uses the Fast Fourier Transform. Finally, we present several simulations that illustrate the performance of the algorithm and analyse the important differences in behaviour induced by finite transmission speed, noise and when both factors co-exist.

Keywords:

Stochastic neural field equations; Finite propagation speed; Galerkin method; Euler-Maruyama; Julia.

(10)
(11)

Contents

Acknowledgments . . . v

Resumo . . . vii

Abstract . . . ix

List of Tables . . . xiii

List of Figures . . . xv

Nomenclature . . . xix

1 Introduction 1 1.1 Motivation . . . 1

1.2 Basic elements of Neuronal Systems . . . 1

1.3 Mathematical Models . . . 2

1.4 Deterministic Neural Field Equations . . . 4

1.4.1 NFE with delay . . . 4

1.4.2 Numerical methods for the deterministic NFE . . . 5

1.5 Objectives and Thesis Outline . . . 6

2 Stochastic Neural Field Equation 7 2.1 Problem Statement . . . 7

2.2 Numerical methods for SNFE . . . 9

3 Implementation 11 3.1 Numerical Algorithm . . . 11

3.1.1 Some notes on implementation . . . 16

3.1.1.1 Delay-free case . . . 16

3.1.1.2 Delay-dependent case . . . 16

3.2 Choosing programming language . . . 17

3.3 Comparing performance of programming languages . . . 18

3.4 Complexity of the algorithm . . . 19

3.5 Comparing algorithms . . . 21

4 Simulations 23 4.1 Formulation of the problems . . . 23

(12)

4.2 Numerical results . . . 25 4.2.1 Deterministic case . . . 25 4.2.1.1 Delay-free . . . 25 4.2.1.2 Delay-dependent . . . 29 4.2.1.3 Conclusions . . . 33 4.2.2 Stochastic case . . . 33 4.2.2.1 Delay-free . . . 33 4.2.2.2 Delay-dependent . . . 39 4.2.2.3 Conclusions . . . 48

5 Conclusions and Future work 49

Bibliography 53

(13)

List of Tables

3.1 Comparing performance of programming languages: Time performance of the Julia and MATLAB programming languages in the case with delay . . . 19 3.2 Complexity Analysis: Time

performance of the algorithm in the case

without delay . . . 20 3.3 Complexity Analysis: Time

performance of the algorithm in the case

with delay . . . 20 3.4 Complexity Analysis: case without delay . . . 20 3.5 Comparing algorithms: Time performance of Algorithm 1 and Algorithm 2 in the

delay-dependent case . . . 22

4.1 Numerical values of U (xk, 4)for the deterministic case without delay, using a Heaviside type firing rate function . . . 26 4.2 Convergence with respect to hxfor the deterministic case without delay, using a Heaviside

type firing rate function . . . 26 4.3 Numerical values of U (xk, 4)for the deterministic case without delay, using a continuous

firing rate function . . . 26 4.4 Convergence with respect to hxfor the deterministic case without delay, using a

continu-ous firing rate function . . . 27 4.5 Numerical values of U (0, tn)to obtain convergence with respect to ht, for the deterministic

case without delay . . . 27 4.6 Convergence with respect to ht, for the deterministic case without delay . . . 27 4.7 Summary-table for the delay-free and delay-dependent, deterministic case with σ=13 and

without noise (=0) . . . 33 4.8 Number of paths with a certain number of bumps, for the strong noise scenario in the

delay-free case, at t=10 and σ = 0.4 and 3, out of 100 paths . . . 37 4.9 Number of paths with a certain number of bumps, for the strong noise scenario in the

delay-free case, at t=10 and t=40 and σ = 13, out of 100 paths . . . 37 4.10 Summary-table for stochastic, delay-free case for σ=0.4 and 3 . . . 38 4.11 Summary-table for stochastic, delay-free case for σ=13 . . . 38

(14)

4.12 Number of paths with a certain number of bumps considering the strong noise scenario in the delay-free and delay-dependent cases for σ=3, at t=10 out of 100 paths . . . 41 4.13 Number of paths with a certain number of bumps, for the weak noise scenario in the

delay-free and delay-dependent cases for σ=13, at t=10 (considering [0,5]-[5,10]) out of 100 paths . . . 43 4.14 Number of paths with a certain number of bumps, for the weak noise scenario in the

delay-free and delay-dependent cases for σ=13, at t=40 (considering [0,5]-[5,40]) out of 100 paths . . . 43 4.15 Number of paths with a certain number of bumps, for the strong noise scenario in the

delay-free and delay-dependent cases for σ=13, at t=10 (considering [0,5]-[5,10]) out of 100 paths . . . 45 4.16 Number of paths with a certain number of bumps, for the strong noise scenario in the

delay-free and delay-dependent cases for σ=13, at t=40 (considering [0,5]-[5,40]) out of 100 paths . . . 46 4.17 Summary-table for the delay-free and delay-dependent case, considering the

determinis-tic, weak noise and strong noise for σ=3, at t=10 . . . 48 4.18 Summary-table for the delay-free and delay-dependent case, weak noise and σ=13, at t=10 48 4.19 Summary-table for the delay-free and delay-dependent case, strong noise and σ=13, at

(15)

List of Figures

1.1 Single neuron drawing, adapted from [1] . . . 2 1.2 Caricature of a neural field with delay, adapted from [15] . . . 5

3.1 Comparing performance of programming languages: Time performance of the Julia (Blue) and MATLAB (Red) programming languages in the case without delay . . . 19 3.2 Comparing algorithms: Time performance of Algorithm 1 (Blue) and Algorithm 2 (Red) in

the delay-free case . . . 21

4.1 Dispersion of the external stimulus inputs, for σ=0.4, 3 and 13 . . . 24 4.2 Numerical solution profile of the delay-free deterministic NFE for the values presented in

4.2.1.1, which matches the solution in ([44], pag 56) . . . 25 4.3 Numerical solution profiles of the deterministic, delay-free NFE with σ=0.4, at time t=5

(Left); t=10 (Right) . . . 28 4.4 Numerical solution profiles of the deterministic, delay-free NFE with σ=3, at time t=5 (Left);

t=10 (Right) . . . 28 4.5 Numerical solution profiles of the deterministic, delay-free NFE with σ=13, at time t=5

(Left); t=10 (Right) . . . 28 4.6 (Top) Numerical solution profiles of the delay-free and delay-dependent deterministic NFE

with σ=0.4, at time t = 5 (Left); t=10 (Right); (Bottom) Evolution of the solution maximum for the delay-free (Blue) and delay-dependent (Red) deterministic NFE with σ=0.4 . . . . 29 4.7 (Top) Numerical solution profiles of the delay-free and delay-dependent deterministic NFE

with σ=3, at time t = 5 (Left); t=10 (Right); (Bottom) Evolution of the solution maximum for the delay-free (Blue) and delay-dependent (Red) deterministic NFE with σ=3 . . . 30 4.8 Numerical solution profiles of the delay-free and delay-dependent deterministic NFE with

finite transmission speed v=10 and σ=13, at time t = 5 (Left); t=10 (Right) . . . 30 4.9 Numerical solution profiles of the delay-free and delay-dependent deterministic NFE with

finite transmission speed v=80 and σ=13, at time t = 5 (Left); t=10 (Right) . . . 31 4.10 Numerical solution profiles of the delay-free and delay-dependent deterministic NFE with

(16)

4.11 Numerical solution profiles of the delay-free and delay-dependent deterministic NFE with finite transmission speed v=10 and σ=13, at time t = 10 (Left); t=40 (Right), for the ex-tended external input . . . 32 4.12 Numerical solution profiles of the delay-free and delay-dependent deterministic NFE with

finite transmission speed v=80 and σ=13, at time t = 10 (Left); t=40 (Right), for the ex-tended external input . . . 32 4.13 Graphics of max, mean and min of the delay-free stochastic NFE for σ=0.4, at t=5 (Left);

t=10 (Right) and with weak noise (



=0.05) . . . 34 4.14 Graphics of max, mean and min of the delay-free stochastic NFE for σ=0.4, at t=5 (Left);

t=10 (Right) and with strong noise (



=0.5) . . . 34 4.15 Graphics of max, mean and min of the delay-free stochastic NFE for σ=3, at t=5 (Left); t=10

(Right) and with weak noise (



=0.05) . . . 34 4.16 Graphics of max, mean and min of the delay-free stochastic NFE for σ=3, at t=5 (Left); t=10

(Right) and with strong noise (



=0.5) . . . 35 4.17 Graphics of max, mean and min of the delay-free stochastic NFE for σ=13, at (a) t=5; (b)

t=10; (c) t=40 and with weak noise (



=0.05) . . . 35 4.18 Graphics of max, mean and min of the delay-free stochastic NFE for σ=13, at (a) t=5; (b)

t=10; (c) t=40 and with strong noise (



=0.5) . . . 36 4.19 Numerical solution profile with two bumps, for the delay-free stochastic NFE, σ=13, at

t=10 and with weak noise (=0.05) . . . 37 4.20 Some numerical solution profiles with one-bump (a); two-bumps (b),(c); three-bumps

(d),(e); five-bumps (f); seven-bumps (g); eight-bumps (h); ten-bumps (i) for the delay-free stochastic NFE, for σ=13, at t=10 and with strong noise (=0.5) . . . 38 4.21 Graphic of mean for the delay-free and delay-dependent stochastic NFE for σ=3, at t=10,

with weak noise



=0.05 and finite transmission speed of (a) v=10; (b) v=80 . . . 39 4.22 Graphics of max, mean and min of the delay-dependent stochastic NFE for σ=3, at t=10,

with weak noise (



=0.05) and finite transmission speed (a) v=10; (b) v=80 (Right) . . . 39 4.23 Graphic of mean for the delay-free and delay-dependent stochastic NFE for σ=3, at t=5,

with strong noise



=0.5 and finite transmission speed of (a) v=10; (b) v=80 . . . 40 4.24 Graphic of mean for the delay-free and delay-dependent stochastic NFE for σ=3, at t=10,

with strong noise



=0.5 and finite transmission speed of (a) v=10; (b) v=80 . . . 40 4.25 Graphics of max, mean and min of the delay-dependent stochastic NFE for σ=3, at t=10,

with strong noise (



=0.5) and finite transmission speed (a) v=10 ; (b) v=80 . . . 41 4.26 Numerical solution profiles with three-bumps(a); seven-bumps (b); nine-bumps(c) for the

delay-dependent stochastic NFE (v=80), σ=3, at t=10 and with strong noise (=0.5) . . . 41 4.27 Graphic of mean for the delay-free and delay-dependent stochastic NFE for σ=13, at t=10,

with weak noise (



=0.05) and finite transmission speed of (a) v=10; (b) v=80 . . . 42 4.28 Graphics of max, mean and min of the delay-dependent stochastic NFE for σ=13, at t=10,

(17)

4.29 Graphic of mean for the delay-free and delay-dependent stochastic NFE for σ=13, at t=40, with weak noise



=0.05 and finite transmission speed of (a) v=10; (b) v=80 . . . 43 4.30 Graphic of mean for the delay-free and delay-dependent stochastic NFE for σ=13, at t=5,

with strong noise



=0.5 and finite transmission speed of (a) v=10; (b) v=80 . . . 44 4.31 Graphic of mean for the delay-free and delay-dependent stochastic NFE for σ=13, at t=10,

with strong noise



=0.5 and finite transmission speed of (a) v=10; (b) v=80 . . . 44 4.32 Graphics of max, mean and min of the delay-dependent stochastic NFE for σ=13, with

strong noise (



=0.5) and finite transmission speed (a) v=10; (b) v=80 . . . 45 4.33 Graphic of mean for the delay-free and delay-dependent stochastic NFE for σ=13, at t=40,

(18)
(19)

Nomenclature

Greek symbols

α Constant (related to the decay rate of the potential)

 Noise

Ω Interval of integration

τ Delay

ξ Parameter to model the spatial correlation length

Roman symbols

F Connectivity between neurons

I External sources of excitation

S Firing rate of the neurons

(20)
(21)

Chapter 1

Introduction

1.1

Motivation

Neural field equations (NFEs) are a particular type of partial integro-differential equations and constitute an important tool for analysing the dynamic behaviour of populations of neurons by describing the spa-tiotemporal evolution of variables such as synaptic activity. They are important in fields such as Robotics, notably in the creation of autonomous robots that interact with other agents in order to solve a mutual task, since their architecture is strongly inspired by the processing principles and neuronal circuitry in the primate brain. They also play an important role in Neuroscience, where they serve as a method of interpreting experimental data, including information obtained from EEG, fMRI and optical imaging. As in other sciences, we are always searching for the most consistent and realistic model that translates the phenomena that happens in our brain. This can be achieved if the effects of random processes and delays are taken into account.

We introduce the delayed deterministic NFE in Section 1.4.1 and the notion of noise in Chapter 2, since delays and noise arise naturally when modelling neurobiological systems.

1.2

Basic elements of Neuronal Systems

The nervous system is composed by the central nervous system (CNS) and the peripheral one (PNS). The central nervous system comprises the brain and the spinal cord and the peripheral nervous system consists of the nerves and the ganglia outside the brain and spinal chord.

It contains two main types of cells: neurons and glial cells. The basic unit of the nervous system is the neuron, while glial cells are supporter cells that are required for energy supply, structural stabilization of brain tissue and protection of neurons. Glial cells do not affect directly the processing of information, therefore we will only focus on neurons.

Neurons are cells that use electrical and chemical signs to carry information throughout the human body, from receiving sensory input from the external world, to sending motor commands to our muscles. They can be decomposed in three parts: thesoma (cell body) is responsible for processing information,

(22)

dendrites are responsible for carrying information from other neurons to the soma (input) and lastly the axon, that carries the information from the soma to other neurons (output), this can all be observed in

Figure 1.1.

Figure 1.1: Single neuron drawing, adapted from [1]

To carry the information, neurons communicate between themselves and with other cells by sending signals, using specific connections called synapses. According to a lower estimate from 2009, the human nervous system contains about 0.89 × 1011 neurons that are connected by 1014synapses. The passage of the signal across the synapse is not immediate, that is, there exists a delay when passing information from neuron to neuron. The signals emitted during the synapse are actually small electrical pulses, called action potentials or spikes, that are caused by the change of voltage in the membrane cell of a neuron. These action potentials trigger the release of other neurotransmitters, which means that neurons communicate with each other by firing and that the firing of a single neuron can trigger the firing of the connected neurons, generating an intricate pattern of connectivity that may vary according to the area of the brain where they are located.

Neurons exist in different locations of the body and have different tasks accordingly, nevertheless, it is assumed that the brain has the largest concentration of neurons, in particular in the cerebral cortex which plays a key role in controlling memory, attention, perception, awareness, thought, language and other important processes.

1.3

Mathematical Models

The investigation of biological neural networks in animal brains inspired mathematicians to create ar-tificial neural networks (ANN). Scientists were searching for computational models with the intent of developing neural networks that would solve problems as a human brain does. As a result of this, in 1943, Warren McCulloch and Walter Pitts created a computational model for neural networks based on mathematics and algorithms.

The ANN functions like the human brain, i.e the ANN is based on a set of connected basic units called artificial neurons. The connection is also called a synapse and transmits signals from one artificial

(23)

neu-ron to another one. Once an artificial neuneu-ron receives this signal, it can process it and send a new signal to the connected neighbours.

As neural networks are made of connected neurons, it is important to mathematically understand these connections (synapses) and how they are triggered. In 1952, A.H. Hodgkin and A.F.Huxley, see [2], authored a paper where it is described how action potentials in neurons are initiated and propagated, using the language of electric circuits and translating them into a system of four ordinary differential equations. Further investigation of the propagation of nervous stimulus lead in 1962 to the FitzHugh-Nagumo equations, [3], [4], [5], where the Hodgkin-Huxley system is reduced to two equations that describe the propagation of impulses in the nervous system.

The high density of neurons and the huge number of synapses that occur in the cortex, suggest the creation of a continuum approximation model of neural activity and therefore in 1956, a first attempt was made by Beurle, see [6]. In this model, it is considered a continuously distributed population of excitatory neurons, with a fixed threshold to send signals to each other. Moreover, by focusing on the fraction of neurons becoming active per unit of time, Beurle was able to analyse the triggering and propagation of large scale brain activity. This was also studied by Griffith in 1963 [7] and 1965 [8], where a model is described in which neural activity is represented by a field quantity φ, with the neurons as sources of it. It is shown that, with certain physically realistic assumptions, φ satisfies a moderately non-linear differential equation. In the second article (of 1965) the equation firstly introduced in 1963 is further examined, in particular, the stability of critical solutions is investigated and it is concluded that in certain cases, general solutions tend toward critical ones. Nonetheless, the models created by these two au-thors consisted of networks of excitatory neurons with no refractory or recovery variable.

It was Wilson and Cowan in the 1970s [9], who extended the theory to include both inhibitory and ex-citatory neurons as well as refractoriness. Posterior to the works of Wilson and Cowan, Amari (1977) developed further work into neural fields, particularly on pattern formation under natural assumptions of connectivity and action potentials, thus providing the formulations for neural field models as we know them today, see [10].

Following the seminal works by Wilson and Cowan [9] and Amari [10] in developing continuum models for neural fields, there has been a considerable interest in the study of spatiotemporal dynamics of neu-ronal activity. The continuum models are usually translated as non-linear integro-differencial equations (as seen in Section 1.4), although other approaches may be taken, see [11]. Concerning neural fields, topics such as stationary states, travelling waves and pattern formation have been studied extensively (see [12] and references within).

(24)

1.4

Deterministic Neural Field Equations

The idea of neural field equations is to treat the cerebral cortex as a continuous space and describe the spatiotemporal dynamics of the neural interactions.

Let us start by analysing a deterministic neural field equation (NFE).

∂U (x, t)

∂t = I(x, t) − αU (x, t) + Z

F (| x − y |)S(U (y, t))dy (1.1)

where t ∈ [0, T ], x ∈ Ω ⊂ R.

As referenced in Section 1.3, this equation was firstly introduced by Wilson and Cowan [9] and then by Amari [10] to describe the dynamics of spatially localized populations containing both excitatory and inhibitory neurons. On a neurological context, U (x, t) is the membrane potential in point x at time t; I represents the external sources of excitation; F (| x − y |) represents the connectivity strength between the different neurons x and y and can be positive or negative, depending on whether the connectivity is excitatory or inhibitory; S is the dependency between the firing rate of the neurons and their membrane potentials (a sigmoidal or Heaviside function); α is a constant related to the decay rate of the potential. On a mathematical context, U : Ω × [0, T ] −→ R is the unknown continuous function; I, F and S are given functions; α is a constant.

Equation (1.1) is completed with an initial condition of the type

U (x, 0) = U0(x)

where U0is a given function.

1.4.1

NFE with delay

Delays arise naturally when we model neurobiological systems. According to equation (1.1), the prop-agation speed of neuronal interactions is infinite. However, in reality, action potentials and synaptic processing can have delays in the order of milliseconds. Effective delays can also be induced by the spike-generation dynamics. In a less theoretical notion, the transference of signals between neurons does not happen instantly, but considering it so allows to find simpler solutions for the equations. In order to make the neural field models more realistic, we must take into account that the propagation speed of neuronal interactions is finite. In our context, the delay τ is defined as τ =| x−yv | (the time spent by the signal to travel between x and y) and v is the propagation speed.

In the case of equation (1.1), we need to guarantee that τ ≈ 0, so that the transmission of signals be-tween neurons is instantaneous. In the case of our present equation (1.2), we have τ > 0.

Such a model was discussed in [13] and [14]:

∂U (x, t)

∂t = I(x, t) − αU (x, t) + Z

(25)

where t ∈ [0, T ], x ∈ Ω ⊂ R and τ > 0 is the delay, which depends on the space variables as a result of the finite propagation speed of nervous stimulus in the brain.

In this case, the initial condition takes the form

U (x, t) = U0(x, t)

with t ∈ [−τmax, 0]where τmax= |Ω|v is the maximum value of the delay and | Ω | is the cardinality of Ω.

Figure 1.2: Caricature of a neural field with delay, adapted from [15]

1.4.2

Numerical methods for the deterministic NFE

As explained in the introduction of Section 1.4, NFEs serve not only as a method of interpreting experi-mental data but also play an important role in cognitive robotics. Due to this, neural field equations (in several dimensions) are nowadays a very important subject of research.

Simulations play a fundamental role in studying brain dynamics in computational neuroscience and in understanding diseases such as Parkinson’s, enabling the analysis of the effect of different treatments. Hence, it is very important to develop an efficient, fast and reliable numerical method to solve this type of equation, however, simulations of integro-differential equations in several spatial dimensions require a high computational effort, which can explain why there exists few studies concerning their numerical analysis.

In 2010, Faye and Faugeras [14] proposed a method to solve delayed NFEs using approaches from the theory of delayed differential equations to prove the existence and uniqueness of a solution for these equations. Furthermore, through Lyapunov analysis they gave sufficient conditions for the solutions to be asymptotically stable. As for the numerical method, it was presented a way to solve a system of NFEs similar to equation (1.2), for different p (number of populations of neurons) and q (spatial dimension). In the case of p=q=1 and Ω=R, the system is discretized in time in order to turn the initial equation into a finite number of equations, then the integral term is also discretized using the trapezoidal rule, thus obtaining a numerical scheme. Other papers that dealt with this case (p=q=1 and Ω=R) are [16], [17], [18], [19], [20]. In [21], [22] Jackiewicz, Rahman and Welfert applied Gaussian quadrature rules and interpolation to approximate the solution of integro-differential equations, modelling neural networks which are similar to NFEs without delay.

(26)

the delay-dependent (1.2) and the delay-free (1.1) equations grows very fast as the discretization step is reduced. Therefore, in this case, creating an efficient method poses a greater challenge than consid-ering simply one dimension. This efficiency can be achieved by means of low rank methods (see [23]), when the kernel is approximated by polynomial interpolation, which allows for a significant reduction of the dimensions of the matrices. In [24], the authors use an iterative method to solve linear systems of equations, which takes into account the special form of the matrix to introduce parallel computation. In regards to the delay-dependent equation (1.2), we cite again [14] as the authors also developed a method for one population of neurons in two dimensions. It consists in applying a quadrature rule in space to reduce the problem to a system of delay differential equations, which in turn is solved by a standard algorithm for this type of equation. In [25], [26] a more efficient concept was presented, where the authors introduce a new method to deal with the convolution kernel of the equation and use Fast Fourier Transforms to reduce the computational effort demanded by the numerical integration.

Finally, we cite [27] where it is described a numerical method for the approximation of the solutions (in the one population of neurons in two-dimensional case), including a space dependent delay in the inte-grand function. This method uses a rank reduction technique based on polynomial interpolation using Chebyshev nodes in Ω. While the numerical integration is carried out on a fine grid, the approximate so-lution is computed only at a much smaller set of points (the Chebyshev nodes), which enables a drastic reduction of the computational effort without losing accuracy. The advantages of this algorithm are its stability, accuracy and efficiency when dealing with the two-dimenstional case.

1.5

Objectives and Thesis Outline

The objective of this work is to study the influence of delay and additive noise in neural field equations. With this purpose, we started by introducing the deterministic equations without delay (1.1) and with delay (1.2) and in the next chapter we present the stochastic equations without delay (2.1) and with delay (2.2). In order to solve these equations, we will describe a computational algorithm, which is coded in Julia and is based on the ideas in [28] and [29].

We will begin by evaluating whether Julia is the adequate programming language to implement our algorithm, followed by an analysis of its complexity. Additionally, we will compare its time efficiency with an algorithm that uses the Fast Fourier Transform. Subsequently, we examine the convergence of the method using the deterministic delay-free case and study the influence of delay by itself (4.2.1.2), additive noise by itself (4.2.2.1) and how the two-factors interact with each other (4.2.2.2).

(27)

Chapter 2

Stochastic Neural Field Equation

In this chapter, we follow the ideas used by K ¨uhn and Riedler [30], to introduce the notion of noise and model a stochastic neural field. The authors cite [31], [32] and [33] as motivation for the approach taken, since it is well known that intra and interneuron dynamics are subject to fluctuations [31] and many meso or macroscale continuum models have stochastic perturbations due to finite size effects [32], [33]. For SNFEs, there exists direct motivation to understand the relation between noise and short term working memory [34], as well as noise induced phenomena [35] in perceptual bistability [36]. The sources of noise include irregularity of spikes, non-homogeneous/irregular connectivity and perturbation of external stimulus.

We consider the effect of additive noise in NFEs, more precisely, the stochastic integro-differencial equation of the form:

dU (x, t) = dW (x, t) + [I(x, t) − αU (x, t) + Z

F (| x − y |)S(U (y, t))dy]dt (2.1)

where t ∈ [0, T ], x ∈ Ω ⊂ R,  is a constant that describes the level of noise and W (x, t) is a Q-Wiener process. This equation will be analysed in detail in the next paragraphs.

2.1

Problem Statement

We are concerned with finding a numerical solution for the stochastic neural field equation (SNFE) (2.1). Just as in the case of deterministic neural field equations, we have the option to study SNFE with or without delay (τ ). In the delay-dependent case, equation (2.1) is replaced by

dU (x, t) = dW (x, t) + [I(x, t) − αU (x, t) + Z

F (| x − y |)S(U (y, t − τ ))dy]dt (2.2)

Equation (2.1) is completed with the initial condition

(28)

where U0is a given stochastic process. In the case of equation (2.2) we have

U (x, t) = U0(x, t), t ∈ [−τmax, 0], x ∈ Ω

It is assumed that U (x, t) satisfies periodic boundary conditions in space and that the domains are of the form Ω = [−L, L], including the limit case where L → ∞.

We begin by summarising some theoretical concepts which are used in the formation of equations (2.1) and (2.2).

Remark 2.1

Q-Wiener Process

Let us recall the definition of Q-Wiener process in [ Definition 2.1.9, [37] ].

Assume V is a real Hilbert space and let us fix an element Q ∈ L(V ), nonnegative, symmetric and with finite trace.

Definition: A V -value stochastic process W (t), t ∈ [0, T ], on a probability space (Ω, F , P ) is

called a (standard) Q-Wiener process if: • W (0) = 0,

• W has P -a.s (almost surely) continuous trajectories,

• the increments of W are independent, i.e the random variables

W (t1), W (t2) − W (t1), · · · , W (tn) − W (tn−1)

are independent for all 0 ≤ t1< · · · < tn≤ T, n ∈ N, • the increments have the following Gaussian laws:

P ◦ (W (t) − W (s))−1 = N (0, (t − s)Q) for all 0 ≤ s ≤ t ≤ T.

Note that there exists a complete orthonormal system {ek} in V and a bounded sequence of nonnegative real numbers {λk} such that

Qek = λk k ∈ N

In [Proposition 4.3 , [38]], we have the proof that the following statement holds: Assume that W (t) is a Q-Wiener process, then

• W is a Gaussian process on V and

(29)

• For arbitrary t ≥ 0, W has the expansion W (t) = ∞ X i=1 pλjβj(t)ej

where βj are a sequence of independent scalar Wiener processes on (Ω, F , P ) and the series W (t) is convergent in L2(Ω, F , P ).

2.2

Numerical methods for SNFE

There is a relatively small amount of recent work on SNFEs. In 2007, Brackley and Turner [39] studied the incorporation of a source of noise into a continuum neural field model by allowing the firing thresh-old to fluctuate noisily about a mean value, i.e, the firing rate function S is a gain function which has a random firing threshold. This kind of fluctuating gain functions are also considered in [40], where the authors have analysed the propagation of travelling wavefronts in models of neural media that incorpo-rate randomness, such that translational invariance is broken. Bressloff and Webber [41] considered the effects of extrinsic multiplicative noise on SNFE, while Bressloff and Wilkerson [42] analyse the in-fluence of extrinsic noise on travelling pulses in a neural field model. These previous papers dealt with issues such as front diffusion and effects of noise on the wave speed. Further work includes Hutt et al. [20], where it is studied the spatiotemporal dynamics of a generic integro-differential equation subject to additive random fluctuations and the influence of these fluctuations on Turing bifurcations. Kilpatrick and Ermentrout [43] are interested in the effect of additive noise on stationary bump solutions.

K ¨uhn and Riedler [30] analysed the effect of additive noise on NFEs, where an Amari type model is considered (see [10]) driven by a Q-Wiener process. The authors provide a starting point for an efficient computation of this type of equation, demonstrating that a good finite dimensional approximation of the SNFE can be achieved using the Galerkin method.

In [44], the ideas of K ¨uhn and Riedler are used and adapted to include the delay-dependent case. Fur-thermore, the author proposes a new numerical algorithm for solving SNFEs with delays, which include the use of the Galerkin method and utilises the Fast Fourier Transform to reduce the computational effort needed to solve these equations. Recently in [45], the authors explored an efficient numerical integra-tion scheme for solving two-dimensional neural field equaintegra-tions in the presence of external stimulus input in both deterministic and stochastic scenarios. The method also uses a Galerkin-type approximation and the efficiency of the implementation is based on the characteristics of the partial integro-differential equation in discussion and its two-dimensional spatial domain.

The algorithm presented in this text gives the numerical solution of deterministic and stochastic neural field equations, in the presence of external stimulus and finite transmission speed (delay) . The main idea of this algorithm, is to transform a SNFE, which is essentially a stochastic integro-differential equa-tion, into a system of stochastic differential equations (SDEs), which in turn can be solved by a numerical method appropriate for this type of equation. There are many numerical methods for solving SDEs. The

(30)

simplest one is the Euler-Maruyama method, which is a generalisation of the Euler method for ordi-nary differential equations. We also mention the Milstein method ([46], pg.345), named after Grigori N.Milstein who first published the scheme in 1974, which uses It ˆo’s lemma to add the second order term to the Euler-Maruyama scheme, increasing the accuracy by one. The previous methods are explained in Kloeden and Platen’s book [46], which describes a method based on the stochastic Taylor series ex-pansion, although this method presents some difficulties when the Wiener process is multidimensional ([46], Chapter 5).

In this text, we will be using the Euler-Maruyama method.

Method 2.2

The Euler-Maruyama method: is a method of obtaining the approximate solution of an

stochastic differential equation. Consider:

dUt= a(Ut)dt + b(Ut)dWt

with initial condition U0= u0and where Wtis a Wiener process.

We want to solve this equation on a time interval [0,T], thus the Euler-Maruyama approximation to the solution Ut, Vt, is obtained through:

• partition the interval [0,T] into n equal time subintervals of width ht> 0, 0 < τ0< τ1< · · · < τn= T and ht=Tn

• V0=u0

• recursively define Vkfor 1 ≤ k ≤ n by:

Vk+1= Vk+ aht(Vk) + b(Vk)∆Wk

where ∆Wk = Wτn+1 − Wτn

Notice that ∆Wk are independent and identically distributed normal random variables with ex-pected value zero and variance ht.

(31)

Chapter 3

Implementation

The following chapter serves as a guide to understand the implementation of the algorithm used to solve the deterministic and stochastic NFEs, to choose the adequate programming language to implement it (MATLAB vs Julia), to evaluate its complexity and finally to test the efficiency of our algorithm and compare it to the efficiency of an algorithm that uses the Fast Fourier Transform.

3.1

Numerical Algorithm

The code implemented in this work is based on the algorithm developed by Kulikov and his co-authors described in [28] and [29]. This algorithm is based on the Karhunen-Lo ´eve formula, which is used to expand the solution U (x, t) (see Theorem 3.1) into the form:

U (x, t) = ∞ X k=0 uk(t)vk(x) (3.1) Theorem 3.1

Theorem: Karhunen-Lo `eve

Let X: T × Ω → R be a stochastic process on a compact subset T ⊆ Rdsuch that the covariance function K is a Mercer kernel on T . Let TK:L2(T ) → L2(T ), be the integral operator for Lebesgue measure with symbol K and let ek be the continuous eigenfunction of TK corresponding to the eigenvalue λk > 0given by Mercer’s Theorem. Set

Zk= 1 λk Z T Xtek(t)dt

Then { Zk : k=1,2, · · · } is a sequence of pairwise uncorrelated random variables in L2(Ω), with mean 0 and variance 1. Furthermore,

Xt= ∞ X k=1

(32)

where the series converges in L2(Ω). This expansion is known as the Karhunen-Lo ´eve expan-sion.

Notes: As we are considering one-dimensional SNFE, we have that d = 1 and T =[a,b]. Moreover,

ekis an orthonormal basis in L2([a, b]).

For the general Karhunen-Lo `eve theorem and proof see [47], for more information on Mercer’s theorem and Mercer kernels consult Appendix A.

where vk(x) are the eigenfunctions of the covariance operator of the noise term in equation (2.2). These eigenfunctions establish an orthonormal basis in the Hilbert space associated to the problem under consideration. Knowing this, we need a formula to calculate the coefficient functions uk(t), which are random distributions. It is important to observe that expanding our solution into the form (3.1) using Theorem 3.1, allows the integrations in time and in space to be fulfilled separately.

In order to derive a formula for the coefficient functions, we take the inner product of equation (2.2) with the basis functions vk(x):

< dU (x, t), vk(x) >= [< I(x, t), vk(x) > −α < U (x, t), vk(x) > + (

Z Ω

< F (| x − y |)S(U (y, t − d(x, y)))dy, vk(x) >]dt +  < dW (x, t), vk(x) > (3.2)

Since the delay τ in equation (2.2) is space-dependent (it depends on the distance between x and y) we represent, in this chapter, τ as d(x, y).

Here, we follow K ¨uhn and Riedler [30] and define the inner product of any two functions in the integral form as:

< f (x), g(x) >= Z

f (x)g(x)dx (3.3)

Furthermore, we can expand dW (x, t) as:

dW (x, t) = ∞ X k=0

vk(x)λkdβk(t) (3.4)

where βk(t)are a sequence of independent scalar Wiener processes and λkare the eigenvalues of the covariance operator Q, which defines the Q-Wiener process.

To simplify the algorithm, we deal with a particular case, described by [30] p.7, where the correlation function obeys the rule:

E{W (x, t)W (y, s)} = min(t, s) 1 2ξexp (

−π 4

| x − y |2 ξ2 ) for any space points and x, y ∈ Ω and time instants t, s > 0.

The E{·} refers to the expectation operator and the fixed parameter ξ determines the spatial correlation length. Following [48], we choose ξ << 2L and derive the eigenvalues λk of the covariance operator Q

(33)

in the SNFE (2.2), which satisfy the formula:

λ2k= exp (−ξ 2k2 4π )

Taking into account the orthonormality of the basis functions vk(x), the inner product in the form (3.3) and expansion (3.4), substituting it in equation (3.2) yields the formula for describing the evolution of the coefficient functions uk(t)as follows:

duk(t) = [< I(x, t), vk(x) > −αuk(t) + F Sk(U (y, t − d(x, y)))]dt + λkdβk(t) (3.5)

where F Sk(U (y, t − d(x, y)))denotes the delay dependent, non-linear term in the stochastic differential equation. This term also links all SDEs (3.5) with different k’s into a unified system that requires the SDEs to be solved simultaneously. Using the expansion of the solution presented in (3.1) and the inner product definition (3.3) the term F Sk(U (y, t − d(x, y)))is evaluated as:

Z Ω vk(x)[ Z Ω F (| x − y |)S( ∞ X l=0 ul(t − d(x, y))vl(y))dy]dx (3.6)

When using the Karhunen-Lo ´eve theorem, we obtain an infinite expansion of the solution U (x, t). In order to truncate this infinite series we use a Galerkin-type approximation, i.e we choose a sufficiently large positive number K and replace the exact solution U (x, t) to equation (2.2) with a finite approxima-tion: U (x, t) = K X k=0 uk(t)vk(x) (3.7) Method 3.1 Galerkin method

Following the definition in [49], let G be a non-linear operator, with domain of definition in a Banach space X and a range of values in a Banach space Y . To solve the equation

G(x) = h

by the Galerkin method, one chooses a linearly independent system {φi}∞

1 of elements from Y and a linearly independent system {ψj}∞

1 of functionals from the space Y∗ (dual to Y). An approximation solution x of the equation presented above is

xn= K X i=1

(34)

The numerical coefficients c1, · · · , cnare found from the system of equations < G( K X i=1 ciφi), ψj>=< h, ψj> j = 1, · · · , n

Following the example of K ¨uhn and Riedler [30], the inner product between two functions is defined as < f (x), g(x) >=R

Ωf (x)g(x) dx.

Note: in our particular case, we have that the coordinate and the projection systems are identical

( φi = ψi = vi) where vi are the eigenfunctions of the covariance operator of the noise, as explained before, therefore n=K.

The accuracy of the Galerkin method depends strongly on the number of eigenfunctions chosen (K) to truncate the series, since the higher the dimension K of the space HK where we search for the approximate solution, the smaller is the error of the approximation.

In equation (3.7), we obtain a truncated series of the infinite expansion, however, this equation does not account for the continuous-time properties of the eigenfunctions vk(x)and the coefficient functions uk(t).

In this paper we adapt the eigenfunctions considered by K ¨uhn and Riedler [[30], Example 2.1] and assume: vk(x) =      1 √ 2L, if k = 0 1 √ Lcos( kπx L ), if k ≥ 1 k = 0, · · · , K

which have been accommodated to deal with the delay condition.

This orthonormal basis is of the conventional Fourier type where all the sinusoidal eigenfunctions of the Fourier series are absent, because if the functions I and F are even the exact solution to the SNFE is also an even function. At the same time, the coefficient functions uk(t)obey the SDEs of the system in (3.5). Subsequently, we need to adapt the rest of our results to the approximation (3.7). To do this, we have to first discretize the eigenfunctions vk(x)in space and second, to derive a discretized version of equations (3.5).

To implement these discretization steps, we start by introducing an equidistant space mesh:

x := {xi= x0+ ihx, i = 0, 1, · · · , N }

where x0= −L, xN = L, hx= 2LN (step size in space) and where N is a chosen number of the prefixed discretization steps in the space interval [-L,L]. This subdivisionx of the space interval is demanded to

satisfy the condition N >> K, that is, the number of space-discretization steps utilized must be much larger than the number of basis functions in the Galerkin approximation (3.7). In this way, we significantly improve the accuracy of the numerical integration.

Having applied the Galerkin approximation and, then, replaced all the space-dependent functions in formulas (3.5) and (3.6) with their values calculated at the mesh nodesx, we obtain the space-discretized

(35)

SDEs of the form:

duk(t) = [< I(x, t), vk> −αuk(t) + F Sk(Uk(x, t − d(x, x)))]dt + λkdβk(t) (3.8)

wherex is the (N +1)-dimensional vector in which x(i) = x0+ ihxandvk := vk(x).Here and below, we takevkas the (N +1)-dimensional vector with entries vk(xi), i = 0, 1, · · · , N.

To complete the discretization, we replace all integrals with quadrature rules, in our case, the trapezoidal rule. Applying the quadrature rule to < I(x, t), vk>and F Sk(Uk(x, t − d(x, x))) yields:

< I(x, t), vk >= hx N −1 X i=0 I(xi, t)vk(xi) (3.9) F Sk(Uk(x, t − d(x, x))) = h2x N −1 X i=0 vk(xi)[ N −1 X j=0 F (| xj− xi|) × S( K X l=0 ul(t − d(xi, xj))vl(xj))] (3.10)

where (3.9) and (3.10) are utilized in the system (3.8) for any subscript k = 0, 1, · · · , K.

Notice that it is taken into account that all the integrands are even functions and therefore, as the values at the initial and last points of the introduced mesh coincide, we double the corresponding values at the initial mesh node and reduce our summation index ranges by one in the mentioned integral approxima-tions.

The system in (3.8) can be represented in a vectorized form, for each k = 0, 1, · · · , K as:

du(t) = [hxV × I(x, t) − αu(t) + FS(Uk(x, t − d(x, x)))]dt + dW (3.11)

whereu(t) = (u0(t), u1(t), · · · , uK(t))T,V is the rectangular matrix of size (K + 1)×N whose (k, i)-entry

refers to the value of the kth eigenfunction vkcomputed at the space point xi, i.e vk(xi), k = 0, 1, · · · , K and i = 0, 1, · · · , N − 1. FS(Uk(x, t − d(x, x))) represents the (K + 1)-dimensional column-vector whose kth entry is calculated by formula (3.10) and finally dW is the zero-mean white Gaussian process with

the diagonal covariance matrix Λdt in which Λ is a diagonal matrix where Λ = diag{λ2

0, λ21, · · · , λ2K}. It is important to notice that the last entries of the vectors vkare excluded from matrixV because of the

explanation given above about the Trapezoidal rule.

Now, to solve the system (3.11) we can simply use a commonly known method for solving SDEs. Here, we use the explicit Euler-Maruyama method which is applied to an equidistant mesh with sufficiently small step size ht, prefixed in the time interval [0,T] of SNFE (2.2).

(36)

3.1.1

Some notes on implementation

3.1.1.1 Delay-free case

Let us first analyse the case without delay. We are simply considering equations:

du(t) = [hxV × I(x, t) − αu(t) + FS(Uk(x, t))]dt + dW (3.12)

We can compute the non-linear termFS(Uk(x, t)) efficiently by:

FS(Uk(x), t)) = h2xV × F × s (3.13)

whereF is the symmetric matrix of dimension N × N whose (i, j)-entry means F (| xj− xi|) with i, j = 0, 1, · · · , N − 1 and s is a time-variant column-vector of size N and its ith entry is evaluated by the

formula S(uT(t)Vi)at each time instant t used, whereV

istands for the ith column in the matrixV. Observe that the biggest computational effort is the calculation of the term h2

xV × F, and that this product

is time-invariant, thus, it can be pre-computed before starting (3.12) by the Euler-Maruyama scheme. In conclusion, at each time step in our time integration of SDEs (3.12) we need only to evaluate vector

s and multiply it by the already computed h2

xV × F, which saves a significant amount of computational

work.

3.1.1.2 Delay-dependent case

When we are performing simulations with this algorithm, in the case of infinite transmission speed be-tween neurons, one can set the time step htand space step hxindependently and by taking into account accuracies of the time and space discretizations. In contrast, when delay is considered, we have to en-sure that the value (t − d(xi, xj)) belongs to the time discretization mesh accepted in the numerical simulation. By setting the time step size by the formula :

ht= hx

v (3.14)

we guarantee this condition , i.e for each pair of nodes (x, y) belonging to the mesh, there exists an integer a, such that

d(x, y) = ahx= avht

When there is delay, it may happen that (t − d(x, y)) < 0. In this case, the value of U (x, tj − d(x, y)) will depend on the initial condition of the deterministic (1.2) or stochastic (2.2) equations. This initial condition is also discretized with the predefined step size hxand htsatisfying the required relation in the space-time domain [−L, L] × [−τmax, 0].

When programming the algorithm for solving the system (3.11), we have to pay attention to the nec-essary efforts needed to perform some of the practical calculations. Here, the evaluation formula of

(37)

FS(Uk(x, t − d(x, x))) is given by :

FS(Uk(x, t − d(x, x))) = h2xV × (F ◦ S) (3.15)

where matrix V and matrix F have already been defined. Matrix S is the time-variant square

ma-trix (N × N ) whose (i, j)-entry is the value of S(Uk(xj, t − d(xi, xj))) already computed in past time (t − d(xi, xj)). In formula (3.15), ◦ refers a specifically defined multiplication of two matrices. This prod-uct returns a column-vector whose ith entry is obtained by the inner prodprod-uct of the ith row of the first matrix with the ith column of the second one.

Here and below, we consider N as the chosen number of the prefixed discretization steps in the space interval [-L,L]; K is the number of basis functions; n is the chosen number of the prefixed discretization steps in the time interval [0,T]; ns is the number of paths/trajectories.

3.2

Choosing programming language

Having an algorithm to solve SNFEs, it is now time to choose a programming language to implement it. It is of utter importance for it to be as efficient as possible, since the algorithm presented contains very expensive computations, such as operations involving high dimensional matrices, summations and a complex chain of iterative loops to obtain the final solution.

When solving numerical problems, interpreted languages such as MATLAB or Python are preferred as they are simple to program in, offer easy ways to plot data and do complex mathematics. However, with these programs, the speed is usually not very high. Recently, a high-level programming language called Julia was developed, which merges the simplicity of Python with the efficiency of lower level languages such as C/C#. We will be choosing between Julia and MATLAB to implement the proposed algorithm. In the 1970s, Cleve Moler a numerical analyst specialized in matrix computation and chairman of the computer science department at the University of New Mexico, started developing MATLAB. While par-ticipating in an effort organized by Argonne National Laboratory to produce a library of reliable and efficient Fortran subroutines for computing the eigenvalues of matrices (EISPACK) and solving systems of linear equations (LINPACK), Moler created MATLAB (contraction for ”Matrix Laboratory”) as a way to make these packages easier for students to use. It soon spread to other universities and its commercial potential was recognised, leading to the rewritting of the original MATLAB code in C and the creation of MathWorks in 1984. MATLAB is a high level language. Using MATLAB Coder, the codes written in MAT-LAB can be converted to C++, Java and Python. Moreover, it has a very large (and growing) database of built-in algorithms and functions for algebra, statistics and image processing which makes it a simple language to use. It also has a large set of options when it comes to graphical output, allowing to plot data very easily and facilitating the user interaction with it, by providing ways to change colors, sizes, scales , etc. by using graphical interactive tools. In contrast, as it is an interpreted language, it uses large amounts of memory and therefore, it can lead to poor execution speed. Furthermore, MATLAB’s

(38)

licensing is expensive, unless it can be provided by an academic or professional deal.

Julia is an open source, high level programming language designed for high performance numerical analysis and computational science. Its inception can be traced back to 2009, when the lead developers Alan Edelman, Jeff Bezanson, Stefan Karpinski and Viral Shah strove for a language more efficient in numerical computing. Julia provides the ease and expressiveness of high level numerical computing, in the same way as languages as MATLAB and Python, but also supports general programming, as Julia builds upon the lineage of mathematical programming languages. Moreover, Julia borrows some features from popular dynamic languages and its programs compile to efficient native code for multiple platforms via LLVM (Low Level Virtual Machine), which makes it different from the interpreters used in the previously mentioned languages. With a deep knowledge of how Julia works, it is possible to achieve programs nearly as fast as C.

In terms of comparing MATLAB and Julia some characteristics stand out. MATLAB, for example, when accessing matrices uses the same bracket type as functions calls, making the code harder to under-stand and the initial index starts at 1, whilst Julia incorporates distinguished brackets for this feature and the initial index starts at 0, as most other programming languages. MATLAB is neither type safe nor equipped with the proper namespaces, but Julia can optionally use type declarations and object orien-tation is built in. MATLAB, as said before, is an interpreted language leading to longer processing time. Although tricks can be used to speed up computations, it is not as fast as Julia assures to be with its just-in-time compiling. As Julia promises faster computations, using a high syntax language and having a graphical output (still being built) but very similar to MATLAB, for the particular uses we need in this text, the results will be presented coded in Julia. A comparison of performance of Julia and MATLAB will be done in3.3.

3.3

Comparing performance of programming languages

As the discussion in Section3.2 concludes, Julia appears to be the most suitable and efficient

program-ming language for the implementation of our algorithm.

The algorithm was initially programmed in Julia and then, adapted to MATLAB. To test the differences between the computing time of both languages, we used the functions described in Problem 1 of 4.1,

on a time interval t ∈ [0, 4] and a space interval of [−50, 50], with N and K varying. The number of time steps is n=200 for the delay-free case and for the case with delay, to satisfy the formula ht=hx

v , n also varies. The tests were only performed for the cases without noise because each iteration is made inde-pendently, thus, in order to obtain the performance of the cases with noise, we simply need to multiply the elapsed time for one trajectory by the number of trajectories computed.

(39)

N = 500, K = 50 N = 1000, K = 100 N = 2500, K = 250 N = 5000, K = 500 0.0 2.5 5.0 7.5 10.0

Nodes(N) and basis functions(K)

Time (in seconds)

Julia Matlab

Figure 3.1: Comparing performance of programming languages: Time performance of the Julia (Blue) and MATLAB (Red) programming languages in the case without delay

From Figure 3.1 we observe that, for the first pair (nodes, basis functions), the Julia implementation is slightly quicker, but with a larger number of nodes MATLAB becomes faster. In spite of this, the difference is only around 0.5 to 1 second, hence either MATLAB or Julia could be used for the implementation in the delay-free case. The computing times in the delay case are reported in Table 3.1.

(Nodes, Basis functions) | n | Performing time (s), Julia | Performing time (s), MATLAB
(5000, 500) | 2000 | 4725.89 | 4818.22
(2500, 250) | 1000 | 662.44 | 722.35
(1000, 100) | 400 | 54.69 | 52.35
(500, 50) | 200 | 5.97 | 4.59

Table 3.1: Comparing performance of programming languages: Time performance of the Julia and MATLAB programming languages in the case with delay

Here, Julia proves slightly faster for the larger values of (nodes, basis functions). Therefore, we choose Julia to present the results of our method.

3.4 Complexity of the algorithm

To solve integro-differential equations such as the NFE and the SNFE numerically, we must develop efficient and correct algorithms, since these equations require a high computational effort.

The correctness is tested, to some extent, in Chapter 4; in this section we assess the algorithm's efficiency by analysing its complexity. It is important to decide with respect to which parameter the complexity should be measured. We could consider both the number of discretization points in space (N) and the number of basis functions (K), yet it suffices to study the complexity in terms of N: since N has to be much greater than K in order to preserve the accuracy of Galerkin's method, we always take N = 10K. By equation (3.11), at each step in time K inner products need to be computed, and each of them requires a sum of N terms; we thus obtain an order of O(K × N) = O(N²/10) = O(N²), and therefore it is sufficient to consider the complexity with respect to the number of points in the mesh.
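Each of these inner products is evaluated with the trapezoidal rule; a minimal sketch, assuming both functions are sampled at the N + 1 mesh points of [−L, L] (the function name is ours):

```julia
# One inner product by the trapezoidal rule on an equidistant mesh with
# N subintervals: O(N) work, repeated for each of the K basis functions.
function inner_product(f::AbstractVector, vk::AbstractVector, hx::Real)
    g = f .* vk                                 # integrand at the N + 1 nodes
    return hx * (sum(g) - (g[1] + g[end]) / 2)  # trapezoidal weights
end

# Usage with N = 1000 on [-50, 50]:
xs = range(-50, 50; length = 1001)
inner_product(sin.(xs ./ 10), exp.(-abs.(xs) ./ 20), step(xs))
```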

Suppose that t(N) is the time needed to finish one simulation, where the discretization parameter is N. We assume that

t(N) = C N^w,

hence

t(2N) = C (2N)^w.

We want to find w, which is the complexity order of our algorithm; thus we perform the following calculation:

\frac{t(2N)}{t(N)} = \frac{(2N)^w}{N^w} = 2^w \;\Longleftrightarrow\; w = \log_2 \frac{t(2N)}{t(N)}    (3.16)

We now apply this complexity estimate to the algorithm described in Section 3.1. To this end, we measure the time a simulation takes to run, fixing n = 200 and ns = 1 (we consider the cases without noise) and setting the (nodes, basis functions) values to (1000, 100), (2000, 200) and (4000, 400). We consider the functions of Problem 1 (see Section 4.1) on the time interval t ∈ [0, 4] and the space interval [−50, 50].

Although we can expect an order of approximately N², t(N) does not depend solely on the number of operations implied by equation (3.11).

Without delay:

(Nodes, Basis functions) | Performing time (s)
(1000, 100) | 0.6844
(2000, 200) | 2.1067
(4000, 400) | 8.1207

Table 3.2: Complexity Analysis: Time performance of the algorithm in the case without delay

With delay:

(Nodes, Basis functions) | n | Performing time (s)
(1000, 100) | 400 | 54.69
(2000, 200) | 800 | 324.13
(4000, 400) | 1600 | 2484.23

Table 3.3: Complexity Analysis: Time performance of the algorithm in the case with delay

Considering Table 3.2, we can now obtain the order of our method in the deterministic delay-free case, which approaches the expected value w ≈ 2 as N grows (Table 3.4).

N | 2N | w = log₂(t(2N)/t(N))
1000 | 2000 | 1.6221
2000 | 4000 | 1.9467

Table 3.4: Complexity Analysis: case without delay
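As a quick check, the entries of Table 3.4 follow directly from the timings of Table 3.2 through formula (3.16):

```julia
# Empirical order w = log2(t(2N) / t(N)), formula (3.16)
order(t_N, t_2N) = log2(t_2N / t_N)

order(0.6844, 2.1067)   # ≈ 1.6221  (N = 1000 → 2000)
order(2.1067, 8.1207)   # ≈ 1.9467  (N = 2000 → 4000)
```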

As discussed above, in the delay-dependent case the step size in time, h_t, varies with h_x and v (see formula (3.14)); therefore, as a rule, when we change the values of N and K we need to change the value of n as well. Since we are trying to compute the complexity with respect to N, n should remain constant, which is incompatible with this rule. Indeed, as n grows proportionally to N in this case, the total cost behaves roughly as O(n × N²) = O(N³); this is consistent with Table 3.3, where the time ratios between consecutive rows (≈ 5.9 and ≈ 7.7) approach 2³ = 8.

Here and below, all computations were made on a PC with processor Intel(R) Core(TM) i5-5200U CPU 2.20GHz×4 and with 4 GB of installed memory (RAM).

3.5 Comparing algorithms

With the growing interest in finding accurate and efficient methods for deterministic and stochastic NFEs, many algorithms are being developed or improved. In order to evaluate the efficiency of our method, we compare it with another algorithm, proposed by Hutt and Rougier ([25], [26]) and implemented in an online, open-source simulator. That simulator approximates the solution of equation (1.2) in the two-dimensional case; for this reason, the code was adapted to one dimension, to be in accordance with this text (this was done by the Master's student Tiago Sequeira [50]). The authors suggest a numerical method that is based on the Fast Fourier Transform (FFT) and resembles the Ritz-Galerkin method for partial differential equations. Here and below, Algorithm 1 denotes the algorithm implemented in this text and Algorithm 2 denotes the one-dimensional version of the algorithm proposed by Hutt and Rougier. Since in Algorithm 2 the number of basis functions K is equal to N, we compare the algorithms by using the same values of N. As input parameters we use the same as in Section 3.3.

[Figure: bar chart of computation time (seconds, 0 to 10) against the pairs (N, K) = (500, 50), (1000, 100), (2500, 250), (5000, 500).]

Figure 3.2: Comparing algorithms: Time performance of Algorithm 1 (blue) and Algorithm 2 (red) in the delay-free case

For Algorithm 2, in the delay-free case, the computation time increases slowly with N, in contrast with Algorithm 1, whose computation time grows faster with N and K.


(Nodes, Basis functions) | n | Performing time (s), Algorithm 1 | Performing time (s), Algorithm 2
(5000, 500) | 2000 | 4725.89 | 953.91
(2500, 250) | 1000 | 662.44 | 125.88
(1000, 100) | 400 | 54.69 | 10.11
(500, 50) | 200 | 5.97 | 2.35

Table 3.5: Comparing algorithms: Time performance of Algorithm 1 and Algorithm 2 in the delay-dependent case

Although we cannot account for the accuracy of this method, it appears that using the Fast Fourier Transform considerably reduces the time needed to find a solution of the neural field equations.


Chapter 4

Simulations

In this chapter we intend not only to confirm the accuracy of the computational method presented in Section 3.1, but also to study the effects of random disturbances (additive noise) and delay on the dynamical behaviour of stationary solutions of the NFE, in the presence of spatially heterogeneous external inputs. To do this, we solve, with the proposed algorithm, the previously discussed variations of our problem:

• NFE without delay, equation (1.1)

• NFE with delay, equation (1.2)

• SNFE without delay, equation (2.1)

• SNFE with delay, equation (2.2)

To confirm the convergence of our method (considering only the deterministic case without delay), we begin by testing our algorithm under the conditions proposed in Problem 1 of Section 4.1. We confirm that the numerical solution profile obtained with our algorithm matches the one presented in [44] and test the convergence of the method with respect to time (h_t) and space (h_x). Furthermore, we present the results gathered in the deterministic, delay-free case for the different variations of Problem 2 of Section 4.1. Lastly, by changing the delay and additive noise conditions, we test the effect that these parameters have on stationary solutions.

4.1 Formulation of the problems

Here and below, we say that a solution has a bump wherever it rises above 0, i.e. in regions where U > 0. In all examples concerning the following problems we take α = 1 and ξ = 0.1.

• Problem 1

In this first case, we will apply the input functions presented in [44]. The firing rate function S(x) is the Heaviside function; the connectivity kernel is given by

F(x) = 2 e^{-0.08|x|} \left( 0.08 \sin(\pi|x|/10) + \cos(\pi|x|/10) \right)

As referenced in [44], it was proved that such oscillatory connectivity kernels support the formation of stationary stable multi-bump solutions induced by external inputs (see [51]). The external input has the form

I(x, t) = -3.39967 + 8 \exp\left(-\frac{x^2}{18}\right)

• Problem 2

In the second case, the functions used were introduced in [29] and [28]. Just as in [44], the firing rate function S(x) is the Heaviside function and the connectivity kernel is given by

F(x) = 2 e^{-0.08|x|} \left( 0.08 \sin(\pi|x|/10) + \cos(\pi|x|/10) \right)

However, the external stimulus is given by

I(x, t) = \begin{cases} -3.39967 + 8 \exp\left(-\frac{x^2}{2\sigma^2}\right), & t \in [0, 5] \\ -2.89967, & t \in (5, 10] \end{cases}

which was inspired by an example considered by Ferreira (see [51, p. 37]). Within the given time interval, the number of bumps of the solution, as well as their shape, stabilizes in most cases; however, it is not excluded that the position of the bumps may shift with time.

Notice that in this case we assume that I(x, t) has a Gaussian form on the interval t ∈ [0, 5]. In this text, we consider the values σ = 0.4, 3 and 13.

[Figure: the three Gaussian input profiles on x ∈ [−100, 100], for σ = 0.4, 3 and 13.]

Figure 4.1: Dispersion of the external stimulus inputs, for σ = 0.4, 3 and 13

As we can observe in Figure 4.1, as the value of σ increases so does the width of the "bell" of the Gaussian; hence a higher value of σ yields a wider dispersion of the external input signal. In both Problem 1 and Problem 2 we consider homogeneous initial conditions, that is, U_0(x) ≡ 0 in the delay-free case and U_0(x, t) ≡ 0 for t ∈ [−τ_max, 0] in the delay-dependent case. This problem is closely related to the simulation of working memory in the cortex. Indeed, I(x, t) represents a transient external stimulus (e.g. a light signal) which is turned off after a certain amount of time; the stationary solution results from the neuronal activity and contains the information about the stimulus, which is kept in the cortex after the stimulus is removed. The ingredients of both problems translate directly into Julia, as sketched below.
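A sketch of these ingredients (the function names are ours, and the kernel is written in its even form with |x|, following the multi-bump kernels of [51]):

```julia
# Connectivity kernel shared by Problems 1 and 2 (oscillatory, decaying)
F(x) = 2 * exp(-0.08 * abs(x)) * (0.08 * sin(pi * abs(x) / 10) + cos(pi * abs(x) / 10))

# Problem 1: time-independent external input (note 18 = 2σ² with σ = 3)
I1(x, t) = -3.39967 + 8 * exp(-x^2 / 18)

# Problem 2: Gaussian input switched off at t = 5 (σ ∈ {0.4, 3, 13})
I2(x, t; σ = 3.0) = t <= 5 ? -3.39967 + 8 * exp(-x^2 / (2 * σ^2)) : -2.89967
```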


4.2 Numerical results

4.2.1 Deterministic case

4.2.1.1 Delay-free

Since there are many papers handling the deterministic case without delay (e.g. [28], [29], [44]), for this particular case we confirm experimentally that the numerical profile of the solution matches the solution obtained in [44] (Figure 4.2) and that the method converges as h_x and h_t tend to zero. We perform numerical experiments similar to those described in the articles cited above.

We consider the functions as defined in Problem 1 of Section 4.1, where the time interval is [0, 4] and the space interval is [−50, 50]. As we do not have an exact solution to compare our results with, we estimate the convergence order p using the following formula:

p \approx \frac{\log\left( |u_h - u_{h/2}| \,/\, |u_{h/2} - u_{h/4}| \right)}{\log 2}    (4.1)

We assume that, in the deterministic delay-free case, the algorithm has order O(h_x²) + O(h_t), due to the trapezoidal rule used to approximate the integrals on the space interval [−L, L] and to the Euler-Maruyama method, which is applied on an equidistant mesh with step size h_t on the time interval [0, T].

[Figure: numerical solution profile on x ∈ [−40, 40], with values between about −5 and 15.]

Figure 4.2: Numerical solution profile of the delay-free deterministic NFE for the values presented in Section 4.2.1.1, which matches the solution in [44, p. 56]


Convergence with respect to h_x

Fixing the point in time t_n = 4, we analyse the results obtained at particular space points. We consider n = 10000 and N = 500, 1000 and 2000. The specific points studied are x_k = −20, 0, 40.

x_k | N = 500 | N = 1000 | N = 2000
−20 | −0.8749172393822017 | −0.8837014425500683 | −0.8793940270049149
0 | 16.156610958739112 | 16.144467841467986 | 16.149605437828715
40 | −2.840408508693375 | −2.8420592044320023 | −2.8411795875993135

Table 4.1: Numerical values of U(x_k, 4) for the deterministic case without delay, using a Heaviside-type firing rate function

Considering Table 4.1 and using formula (4.1), we can now obtain the observed order of convergence with respect to h_x:

x_k | p
−20 | 1.028088912475725
0 | 1.2409733714071207
40 | 0.9081271051774218

Table 4.2: Convergence with respect to h_x for the deterministic case without delay, using a Heaviside-type firing rate function
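Formula (4.1) is straightforward to apply; for instance, the x_k = −20 row of Table 4.2 is recovered from the corresponding entries of Table 4.1 (a sketch):

```julia
# Convergence order p from solutions computed on meshes of spacing h, h/2, h/4
order_p(u_h, u_h2, u_h4) = log(abs(u_h - u_h2) / abs(u_h2 - u_h4)) / log(2)

# Values of U(-20, 4) from Table 4.1 (N = 500, 1000, 2000):
order_p(-0.8749172393822017, -0.8837014425500683, -0.8793940270049149)  # ≈ 1.0281
```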

Notice that this order of convergence is not p ≈ 2 or higher, as would be expected. This is due to the fact that the integrand function contains the Heaviside function, which is not continuous; hence we cannot guarantee that, in this case, the trapezoidal rule converges at the optimal rate. To test how the smoothness of the integrand function influences the convergence order, we carried out an additional numerical experiment with a different firing rate function,

S(x) = \frac{1}{1 + \exp(-10(x - 1))}

which is twice continuously differentiable on the interval [−50, 50], that is, S ∈ C²([−50, 50]). A sketch of both firing rate functions is given below.
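For reference, the two firing rate functions compared in this experiment, in Julia (we adopt the convention S(0) = 1 for the Heaviside case, which is an assumption of ours):

```julia
# Discontinuous firing rate used in Problems 1 and 2 (convention: S(0) = 1)
S_heaviside(x) = x >= 0 ? 1.0 : 0.0

# Smooth sigmoid used in the additional experiment; infinitely differentiable,
# in particular C^2 on [-50, 50]
S_sigmoid(x) = 1 / (1 + exp(-10 * (x - 1)))
```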

x_k | N = 500 | N = 1000 | N = 2000
−20 | −0.8486316507584067 | −0.8490291027973098 | −0.848990004213313
0 | 16.079226060995953 | 16.077001492301452 | 16.076908369300035
40 | −2.835011243884671 | −2.8350439969193095 | −2.8350401849275984

Table 4.3: Numerical values of U(x_k, 4) for the deterministic case without delay, using a continuous firing rate function
