Hydrodynamic limit for spatially structured interacting neurons

Guilherme Ost de Aguiar

Thesis presented to the Instituto de Matemática e Estatística of the Universidade de São Paulo in fulfillment of the requirements for the degree of Doutor em Ciências (Doctor of Science)

Program: Statistics
Advisor: Prof. Dr. Antonio Galves

During the development of this work the author received financial support from CAPES and CNPq. This thesis was produced at the Núcleo de Apoio à Pesquisa em Modelagem Estocástica e Complexidade/MaCLinC as part of the activities of the Centro de Pesquisa, Inovação e Difusão em Neuromatemática, funded by FAPESP (2013/07699-0).


Hydrodynamic limit for spatially structured interacting neurons

This version of the thesis contains the corrections and changes suggested by the Examining Committee during the defense of the original version, held on 17/07/2015. A copy of the original version is available at the Instituto de Matemática e Estatística da Universidade de São Paulo.

Examining Committee:

• Prof. Dr. Jefferson Antonio Galves (advisor) - IME-USP

• Prof. Dr. Pablo Augusto Ferrari - IME-USP

• Profa. Dra. Eva Löcherbach - U-Cergy

• Prof. Dr. Jacob Ricardo Fraiman Maus - U. San Andrés


Acknowledgments

First of all, I would like to thank my parents, Danilo and Ledir, for all the affection, love, understanding and support throughout my life. As a small repayment for everything you two have done for me, I dedicate the completion of this step to you.

I also would like to thank Aline, an incredible woman with whom I have shared these past 9 years. Along the way you have always been by my side, giving me support, attention and affection. I hope you have enjoyed this journey full of discoveries as much as I have. You will always be in my heart. I would like to thank my advisor, Prof. Antonio Galves, for having dedicated part of his scientific life to teaching me so many things. You have shown me the beauty of the world of research through enthusiasm and hard work. I also thank you for giving me the opportunity to be part of the great NeuroMat team. I will always be in your debt.

I cannot forget to express my gratitude to NUMEC and everyone who passes through it (students, professors, cleaners and guards). This lab is a place of excellence and has undoubtedly contributed a great deal to my education. I would like to thank Lourdes, who has shown immeasurable dedication in helping everyone at NUMEC.

I also thank all the friends I have made here in São Paulo: Douglas (o gaúcho), Rodrigo (o goiano), Karina, Manuel, Hugo, Aldana, Bruno, Andressa, Pedro, Gabriel, Michelle and many others. I have had a great time with you all.

I am indebted to Professor E. Presutti for countless illuminating discussions, and for all the teaching, attention and hospitality during my visit to the Gran Sasso Science Institute (GSSI) last fall.

Finally, I would like to thank the Department of Statistics of the Universidade de São Paulo, CAPES and CNPq for financial support, and also the jury members of my PhD defense.


Abstract

We study the hydrodynamic limit of a stochastic system of neurons whose interactions are given by Kac potentials that mimic chemical and electrical synapses and leak currents. The system consists of $\varepsilon^{-2}$ neurons embedded in $[0,1)^2$, each spiking randomly according to a point process with rate depending on both its membrane potential and position. When neuron $i$ spikes, its membrane potential is reset to 0, while the membrane potential of neuron $j$ is increased by a positive value $\varepsilon^2 a(i,j)$ if $i$ influences $j$. Furthermore, between consecutive spikes, the system follows a deterministic motion due both to electrical synapses and to leak currents. The electrical synapses are involved in the synchronization of the membrane potentials of the neurons, while the leak currents inhibit the activity of all neurons, simultaneously attracting their membrane potentials to 0. We show that the empirical distribution of the membrane potentials converges, as $\varepsilon$ vanishes, to a probability density $\rho_t(u,r)$ which is proved to obey a nonlinear PDE of hyperbolic type.

Keywords: hydrodynamic limit, piecewise deterministic Markov processes, neuronal systems, interacting particle systems


Resumo

In this thesis we study the hydrodynamic limit of a stochastic system of neurons whose interactions are given by Kac potentials that mimic electrical and chemical synapses and leak currents. The system consists of $\varepsilon^{-2}$ neurons embedded in $[0,1)^2$, each spiking randomly according to a point process with rate depending on both its membrane potential and position. When neuron $i$ spikes, its membrane potential is reset to 0, while the membrane potential of neuron $j$ is increased by a positive value $\varepsilon^2 a(i,j)$ if $i$ influences $j$. Moreover, between consecutive spikes, the system follows a deterministic motion due to the electrical synapses and the leak currents. The electrical synapses are involved in the synchronization of the membrane potentials of the neurons, while the leak currents inhibit the activity of all neurons, simultaneously attracting all membrane potentials to 0. In the main result of this thesis, we show that the empirical distribution of the membrane potentials converges, as the parameter $\varepsilon$ tends to 0, to a probability density $\rho_t(u,r)$ that satisfies a nonlinear partial differential equation of hyperbolic type.

Keywords: hydrodynamic limit, piecewise deterministic Markov processes, neuronal systems, interacting particle systems


Contents

List of Figures

1 Introduction

2 Model Definition and Main Results
2.1 Boundedness of the Membrane Potentials
2.2 Tightness of the Sequence of Laws $P^{(\varepsilon)}_{[0,T]}$

3 The Auxiliary Process and the Coupling Algorithm
3.1 Coupling the Auxiliary and True Processes
3.2 Consequences of the Coupling Algorithm

4 Hydrodynamic Limit for the Auxiliary Process
4.1 Hydrodynamic Evolution of the Auxiliary Process
4.2 The Limit Trajectory of the Auxiliary Process
4.3 Convergence of $\rho^{(\delta,\ell,E,\tau)}$ as $\ell, E, \tau \to 0$ and its Consequences

5 Hydrodynamic Limit for the True Process

6 Appendices
6.1 Proof of Theorem 4
6.2 Proof of Proposition 3.1
6.3 Proof of Theorem 5
6.4 Proof of Theorem 2 for general firing rates

Bibliography


List of Figures

2.1 The $\varepsilon$-mesh $\Lambda_\varepsilon$ of the set $[0,1)^2$
3.1 The red dots represent the centers of each half-open square $C_m$ with side length $\ell$


Chapter 1

Introduction

In this thesis we present a stochastic process which describes a population of spatially structured interacting neurons. Our aim is to study the hydrodynamic limit of this process and to characterize its limit law. Beyond its intrinsic mathematical interest, the analysis of the hydrodynamic behavior of neuronal systems is an important issue in neurobiology. For instance, the most common imaging techniques, including EEG and fMRI, do not measure individual neuron activity but rather a resulting effect driven by the interactions of large subpopulations of neurons. Thus, the rigorous mathematical modeling of EEG and fMRI data requires a collective description (a "macroscopic equation") derived from many interacting neurons ("large microscopic systems"), a typical setting in the study of hydrodynamic limits of stochastic particle systems.

In a nutshell, neurons are electrically excitable cells whose activity consists of sudden peaks, called action potentials and often referred to as spikes. More specifically, spikes are short-lasting electrical pulses in the membrane potential of the cell, and the higher the membrane potential, the higher the probability of a spike occurring. Thus, it is quite natural to assume that the generating mechanism of spikes is given by a point process in which the spiking rate of a given neuron depends on its membrane potential. In this thesis we work under that assumption and, additionally, assume that the membrane potential evolves under the effect of chemical and electrical synapses and leak currents.

Electrical synapses are due to so-called gap-junction channels between neurons, which induce a constant sharing of potential. The distinctive aspect of electrical synapses is their reciprocity: they are neither excitatory nor inhibitory, but rather synchronizing. For each pair of neurons $(i,j)$, we modulate this synchronizing strength by $b(i,j)$, where $(i,j) \mapsto b(i,j)$ is a nonnegative symmetric function. For instance, if $N$ is the size of the set of neurons, $b(i,j) = N^{-1}$ for $i \neq j$ and $b(i,i) = 0$, the electrical synapses would push the membrane potential of each neuron toward the average membrane potential of the system. In the general case, the membrane potential of each neuron is also attracted to a mean value, although this value may vary from neuron to neuron depending on the shape of the function $b(i,j)$.

In contrast with electrical synapses, chemical synapses are point events which can be described as follows. Each neuron $i$ with membrane potential $U$ spikes randomly at rate $\varphi(U,i)$, where $U \mapsto \varphi(U,i)$ is a nondecreasing function, positive for $U > 0$ and vanishing at 0. This last assumption implies the absence of external stimuli. When neuron $i$ spikes, its membrane potential is immediately reset to a resting potential 0. Simultaneously, the neurons which are influenced by neuron $i$ receive an additional positive value to their membrane potential. Specifically, the membrane potential of neuron $j$ is increased by the value $a(i,j)$ at each spike of $i$, if the latter influences the former. The positivity of the function $(i,j) \mapsto a(i,j)$ means that all chemical synapses are of the excitatory type.

In addition to the synapses, neurons lose potential to the environment over time through leakage channels, which push the membrane potential of each neuron down toward the resting state. This constant outgoing flow of potential is referred to in the neurobiological literature as leak currents. For an account of these subjects we refer the reader to [GK02].


Our model is inspired by the ones introduced in [DMGLP15], [DO14] and [GL13]. For a critical reader's guide to these papers, together with the one in [FL14], we refer to [GL15]. Our model is also an example of the piecewise deterministic Markov processes introduced in 1984 by Davis [Dav84]. Such processes combine random jump events, in our case due to the chemical synapses, with deterministic continuous evolutions, in our case due both to electrical synapses and leak currents. Piecewise deterministic Markov processes have also been used to model neuronal systems by other authors; see for instance [DMGLP15], [DO14], [FL14], [MRW12] and [RT14].

In the study of hydrodynamic limits a mean-field type assumption is quite frequent. This means that $a(i,j) = b(i,j) = N^{-1}$ for any pair of neurons $(i,j)$, with $N$ being the size of the population of neurons. For recent neuromathematical models adopting the mean-field assumption see, among others, [DMGLP15] and [FL14]. However, a more realistic description should incorporate the mutual distances among neurons. In order to achieve such an accurate description, we use Kac potentials and the ideas and techniques developed for such potentials in statistical mechanics. In our context, this means that the functions $a(i,j)$ and $b(i,j)$ considered here are quite general but are scaled by the factor $N^{-1}$, where $N$ stands for the size of the set of neurons. For accounts of hydrodynamic limits and Kac potentials we refer respectively to [DMP91]-[KL99] and [Pre08].

To the best of our knowledge, this is the first time that stochastic modeling of spatially structured neuronal networks in which the occurrence of spikes is described by Poisson processes has been addressed. Most mathematical models of neuronal systems that also take spatial locations into account involve Brownian random components; see for instance [Tou14].

For each $\varepsilon > 0$, the set of neurons is denoted by $\Lambda_\varepsilon = (\varepsilon\mathbb{Z})^2 \cap [0,1)^2$ and the state of our system at time $t \ge 0$ is specified by $U^{(\varepsilon)}(t) = (U^{(\varepsilon)}_i(t),\ i \in \Lambda_\varepsilon)$, with $U^{(\varepsilon)}_i(t) \in \mathbb{R}_+$. For each neuron $i \in \Lambda_\varepsilon$ and time $t \ge 0$, $U^{(\varepsilon)}_i(t)$ represents the membrane potential of neuron $i$ at time $t$. Our main result, Theorem 2, shows that the empirical distribution of the membrane potentials converges, as $\varepsilon \to 0$, to a law having, at each time $t$, $\rho_t(u,r)\,du\,dr$ as a probability density. This means that, in the limit, for any set $C \subset [0,1]^2$, interval $I \subset \mathbb{R}_+$ and time $t \ge 0$, $\int_C \int_I \rho_t(u,r)\,du\,dr$ is the limiting fraction of neurons located in $C$ whose membrane potentials lie in $I$ at time $t$. This limit density $\rho_t(u,r)$ is the unique solution of a nonlinear PDE of hyperbolic type.

The strategy for proving this theorem can be described in the following way. We identify the process with its empirical distribution and, as a first step, we show that the sequence of laws of the empirical distributions is tight. Once tightness is proven, we identify the limiting law as supported by the solutions of the PDE via a coupling argument. Specifically, we first approximate the true process by a family of processes $Y^{(\varepsilon,\delta,\ell,E,\tau)}$, discrete in space and time, for which the analysis of the hydrodynamic limit is somewhat easier. Once the convergence to $Y^{(\delta,\ell,E,\tau)}$ is established, we obtain the result by taking $\delta, \ell, E, \tau \to 0$. A similar approach was recently used in [DMGLP15]; in the present work, we generalize their approach to the case of spatially structured interacting neurons. Finally, we show that the solutions of the PDE are unique in order to get full convergence.


Chapter 2

Model Definition and Main Results

For each $\varepsilon > 0$, let $\Lambda_\varepsilon = (\varepsilon\mathbb{Z})^2 \cap [0,1)^2$ be an $\varepsilon$-mesh of the set $[0,1)^2$. The set $\Lambda_\varepsilon$ represents the set of neurons and its size is $|\Lambda_\varepsilon| = \varepsilon^{-2}$; see Figure 2.1. We consider a continuous-time Markov process $(U^{(\varepsilon)}(t))_{t \ge 0}$ taking values in $\mathbb{R}_+^{\Lambda_\varepsilon}$. For each $t \ge 0$ and neuron $i \in \Lambda_\varepsilon$, $U^{(\varepsilon)}_i(t)$ models the membrane potential of neuron $i$ at time $t$. The global configuration at time $t \ge 0$ is denoted by $U^{(\varepsilon)}(t) = (U^{(\varepsilon)}_i(t),\ i \in \Lambda_\varepsilon)$.

As usual in the theory of Markov processes, the dynamics of the process is given through its infinitesimal generator $L$. We assume that the action of $L$ on any smooth test function $f : \mathbb{R}_+^{\Lambda_\varepsilon} \to \mathbb{R}$ is given by

\[ Lf(u) = \sum_{i \in \Lambda_\varepsilon} \varphi(u_i, i)\,\big[ f(u + \Delta_i(u)) - f(u) \big] - \sum_{i \in \Lambda_\varepsilon} \frac{\partial f}{\partial u_i}(u) \Big[ \alpha u_i + \varepsilon^2 \sum_{j \in \Lambda_\varepsilon} b(i,j)(u_i - u_j) \Big], \tag{2.0.1} \]

where for all $i \in \Lambda_\varepsilon$ the function $\Delta_i : \mathbb{R}_+^{\Lambda_\varepsilon} \to \mathbb{R}_+^{\Lambda_\varepsilon}$ is defined by

\[ (\Delta_i(u))_j = \begin{cases} \varepsilon^2 a(i,j), & \text{if } j \neq i, \\ -u_i, & \text{if } j = i, \end{cases} \]

with $a : [0,1)^2 \times [0,1)^2 \to \mathbb{R}_+$ a Lipschitz continuous function such that $a(r,r) = 0$ for all $r \in [0,1)^2$, $\alpha$ a nonnegative parameter, $b : [0,1)^2 \times [0,1)^2 \to \mathbb{R}_+$ a symmetric Lipschitz continuous function also satisfying $b(r,r) = 0$ for all $r \in [0,1)^2$, and

Assumption 2.1. $\varphi \in C^1(\mathbb{R}_+ \times [0,1)^2, \mathbb{R}_+)$ is increasing in the first variable and such that $\varphi(0,r) = 0$ for all $r \in [0,1)^2$.
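The dynamics generated by (2.0.1) can be simulated directly: between spikes the potentials follow the deterministic drift, and in a small time step $dt$ each neuron $i$ spikes with probability approximately $\varphi(u_i,i)\,dt$, after which it is reset to 0 and every other neuron $j$ gains $\varepsilon^2 a(i,j)$. A minimal time-discretized sketch in Python; the concrete choices $\varphi(u,i) = u$, $a \equiv b \equiv 1$ off the diagonal, and the step size are illustrative assumptions, not taken from the thesis:

```python
import random

random.seed(0)

eps = 0.25                         # mesh parameter; Lambda_eps has eps^{-2} = 16 sites
sites = [(x * eps, y * eps) for x in range(4) for y in range(4)]
alpha, dt, steps = 1.0, 0.001, 2000

def phi(u, i):
    # Illustrative firing rate: increasing in u, phi(0, i) = 0, as in Assumption 2.1.
    return u

a = b = lambda i, j: 0.0 if i == j else 1.0    # toy Kac-type kernels

u = {i: random.random() for i in sites}        # initial potentials in [0, 1)

for _ in range(steps):
    drift = {}
    for i in sites:
        # Leak plus gap-junction drift: -alpha*u_i - eps^2 * sum_j b(i,j)(u_i - u_j).
        coupling = eps**2 * sum(b(i, j) * (u[i] - u[j]) for j in sites)
        drift[i] = -alpha * u[i] - coupling
    # Each neuron spikes in [t, t+dt) with probability ~ phi(u_i, i) * dt.
    spikers = [i for i in sites if random.random() < phi(u[i], i) * dt]
    for i in sites:
        u[i] = max(u[i] + dt * drift[i], 0.0)
    for i in spikers:
        for j in sites:
            if j != i:
                u[j] += eps**2 * a(i, j)       # chemical synapse: gain eps^2 a(i,j)
        u[i] = 0.0                             # reset the spiking neuron

print(min(u.values()), max(u.values()))
```

The sketch trades the exact jump-process construction for a simple Euler scheme, which is enough to visualize the competition between leak, synchronization and excitatory resets.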

Figure 2.1: The $\varepsilon$-mesh $\Lambda_\varepsilon$ of the set $[0,1)^2$.


The first term in (2.0.1) describes how the chemical synapses are incorporated in our model. A neuron $i$ with potential $u_i$ spikes at rate $\varphi(u_i, i)$. Intuitively, this means that for any initial configuration $u \in \mathbb{R}_+^{\Lambda_\varepsilon}$ of the membrane potentials,

\[ P\big( U(t) = u + \Delta_i(u) \,\big|\, U(0) = u \big) = \varphi(u_i, i)\, t + o(t), \quad \text{as } t \to 0. \]

Thus, the function $\varphi(\cdot, i)$ is called the firing or spiking rate of neuron $i$. Notice that under this assumption neurons may have different spiking rates, i.e., the function $\varphi(\cdot, i)$ may differ from $\varphi(\cdot, j)$. The function $a(\cdot,\cdot)$, appearing in the definition of $\Delta_i(\cdot)$, mimics the chemical synapses. The value $\varepsilon^2 a(i,j)$ corresponds to the energy added to the membrane potential of neuron $j$ when neuron $i$ spikes.

The second term in (2.0.1) represents both the electrical synapses and the leak currents. These describe the deterministic time evolution of the system between two consecutive spikes. More specifically, if there are no spikes in a time interval $[a,b]$, the membrane potential of each neuron $i \in \Lambda_\varepsilon$ obeys the ordinary differential equation

\[ \frac{d}{dt} U^{(\varepsilon)}_i(t) = -\alpha U^{(\varepsilon)}_i(t) - \varepsilon^2 \sum_{j \in \Lambda_\varepsilon} b(i,j) \big[ U^{(\varepsilon)}_i(t) - U^{(\varepsilon)}_j(t) \big]. \tag{2.0.2} \]

The function $b(\cdot,\cdot)$ incorporates the action of the gap-junction channels. The value $\varepsilon^2 b(i,j)$ corresponds to the synchronization strength between the neurons $i$ and $j$. Notice also that the first term on the right-hand side of (2.0.2) pushes the membrane potential of neuron $i$ toward the resting state 0, so that we interpret $\alpha$ as the rate at which the membrane potential of each neuron decreases due to the leak channels.

Defining $\lambda^{(\varepsilon)}_i = \varepsilon^2 \sum_{j \in \Lambda_\varepsilon} b(i,j)$ and $\tilde b(i,j) = \big( \lambda^{(\varepsilon)}_i \big)^{-1} b(i,j)$, the maps $i \mapsto \lambda^{(\varepsilon)}_i$ and $(i,j) \mapsto \tilde b(i,j)$ are automatically Lipschitz continuous, $\varepsilon^2 \sum_{j \in \Lambda_\varepsilon} \tilde b(i,j) = 1$, and we can rewrite the ODE (2.0.2) as

\[ \frac{d}{dt} U^{(\varepsilon)}_i(t) = -\alpha U^{(\varepsilon)}_i(t) - \lambda^{(\varepsilon)}_i \big( U^{(\varepsilon)}_i(t) - \bar U^{(\varepsilon)}_i(t) \big), \]

where for each $t \ge 0$ and $i \in \Lambda_\varepsilon$,

\[ \bar U^{(\varepsilon)}_i(t) = \varepsilon^2 \sum_{j \in \Lambda_\varepsilon} \tilde b(i,j)\, U^{(\varepsilon)}_j(t). \]

We call $\bar U^{(\varepsilon)}_i(t)$ the local average potential of neuron $i$ at time $t$. Thus, the second term of both ODEs is, in fact, pushing the membrane potential of neuron $i$, with rate $\lambda^{(\varepsilon)}_i$, toward an average value which depends on $i$ itself.
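The rewriting above is purely algebraic and easy to check numerically: for any configuration, the drift $-\alpha u_i - \varepsilon^2 \sum_j b(i,j)(u_i - u_j)$ coincides with $-\alpha u_i - \lambda^{(\varepsilon)}_i (u_i - \bar U_i)$, and the normalized kernel $\tilde b$ satisfies $\varepsilon^2 \sum_j \tilde b(i,j) = 1$. A quick sketch, with an arbitrary made-up symmetric kernel $b$:

```python
import random

random.seed(2)

eps = 0.5
sites = [(x * eps, y * eps) for x in range(2) for y in range(2)]
alpha = 0.7

def b(i, j):
    # Arbitrary symmetric nonnegative kernel with b(i, i) = 0 (illustrative choice).
    return 0.0 if i == j else 1.0 + abs(i[0] - j[0]) + abs(i[1] - j[1])

u = {i: random.random() for i in sites}

for i in sites:
    lam_i = eps**2 * sum(b(i, j) for j in sites)          # lambda_i^{(eps)}
    bt = {j: b(i, j) / lam_i for j in sites}              # normalized kernel b~(i, .)
    ubar_i = eps**2 * sum(bt[j] * u[j] for j in sites)    # local average potential
    # Normalization: eps^2 * sum_j b~(i, j) = 1.
    assert abs(eps**2 * sum(bt.values()) - 1.0) < 1e-9
    # The two forms of the drift in (2.0.2) agree exactly.
    drift1 = -alpha * u[i] - eps**2 * sum(b(i, j) * (u[i] - u[j]) for j in sites)
    drift2 = -alpha * u[i] - lam_i * (u[i] - ubar_i)
    assert abs(drift1 - drift2) < 1e-12

print("drift forms agree on all sites")
```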

We shall study a simpler situation in which the rates $\lambda^{(\varepsilon)}_i$ (and consequently the function $(i,j) \mapsto b(i,j)$) do not change with $\varepsilon$, keeping all other properties. Hereafter we shall therefore assume that there exist functions $\lambda : [0,1)^2 \to \mathbb{R}_+$ and $b : [0,1)^2 \times [0,1)^2 \to \mathbb{R}_+$ satisfying:

(i) $\lambda$ is Lipschitz continuous;

(ii) $b$ is Lipschitz continuous and such that for each $i \in \Lambda_\varepsilon$, $\varepsilon^2 \sum_{j \in \Lambda_\varepsilon} b(i,j) = 1$;

(iii) between consecutive spikes the membrane potential of each neuron $i \in \Lambda_\varepsilon$ obeys

\[ \frac{d}{dt} U^{(\varepsilon)}_i(t) = -\alpha U^{(\varepsilon)}_i(t) - \lambda_i \big( U^{(\varepsilon)}_i(t) - \bar U^{(\varepsilon)}_i(t) \big), \tag{2.0.3} \]

where for each $t \ge 0$ and $i \in \Lambda_\varepsilon$, $\bar U^{(\varepsilon)}_i(t) = \varepsilon^2 \sum_{j \in \Lambda_\varepsilon} b(i,j)\, U^{(\varepsilon)}_j(t)$.

For notational convenience we shall write, for any $r \in [0,1)^2$, $\lambda_r$ instead of $\lambda(r)$.


In particular, the solution of the linear system (2.0.3), starting at time 0 from $u$, is given by $\Psi_t(u) = e^{At} u$, where $A$ is a symmetric matrix whose entries depend on $\alpha$, $\varepsilon^2$, $b$ and $\lambda$:

\[ A = (A_{i,j} : i,j \in \Lambda_\varepsilon), \qquad A_{i,j} = \begin{cases} \varepsilon^2 \lambda_i\, b(i,j), & \text{if } i \neq j, \\ -\alpha - \lambda_i, & \text{if } i = j. \end{cases} \tag{2.0.4} \]

In the result below, Theorem 1, we prove the existence and uniqueness of the process described above and provide a uniform control on the maximal membrane potential of the system. The proof of Theorem 1 is omitted here since it is analogous, modulo a small modification of the notation, to the proof of Theorem 1 given in [DMGLP15]. In what follows, for any vector $u \in \mathbb{R}_+^{\Lambda_\varepsilon}$,

\[ \|u\| = \max_{i \in \Lambda_\varepsilon} u_i. \]

With this notation, the maximum membrane potential at time $t$ is $\|U^{(\varepsilon)}(t)\|$.

Theorem 1. Assume that the function $\varphi$ satisfies Assumption 2.1.

(i) Given $\varepsilon > 0$ and $u \in \mathbb{R}_+^{\Lambda_\varepsilon}$, there exists a unique strong Markov process $U^{(\varepsilon)}(t)$ taking values in $\mathbb{R}_+^{\Lambda_\varepsilon}$, starting from $u$, whose generator is given by (2.0.1).

(ii) Let $P^{(\varepsilon)}_u$ be the probability law under which the initial condition of the process $U^{(\varepsilon)}(t)$ is $U^{(\varepsilon)}(0) = u \in \mathbb{R}_+^{\Lambda_\varepsilon}$. Then for any $R > 0$ and $T > 0$ there exists a constant $C > 0$ such that

\[ \sup_{u : \|u\| \le R} P^{(\varepsilon)}_u \Big[ \sup_{t \le T} \big\| U^{(\varepsilon)}(t) \big\| < C \Big] \ge 1 - c_1 e^{-c_2 \varepsilon^{-2}}, \tag{2.0.5} \]

where $c_1$ and $c_2$ are suitable positive constants. The constants $C$, $c_1$ and $c_2$ do not depend on $\varepsilon$.

We now focus on the hydrodynamic limit of the process $(U^{(\varepsilon)}(t))_{t \ge 0}$. We suppose that for all $\varepsilon > 0$ the following assumption holds.

Assumption 2.2. There exists a smooth function $\psi_0 : \mathbb{R}_+ \times [0,1)^2 \to \mathbb{R}_+$ fulfilling the conditions:

(i) for each $r \in [0,1)^2$, $\psi_0(\cdot, r)$ is a probability density on $\mathbb{R}_+$ whose support is $[0, R_0]$;

(ii) $\psi_0(\cdot, r) > 0$ on $[0, R_0)$;

(iii) $\big( U^{(\varepsilon)}_i(0) \big)_{i \in \Lambda_\varepsilon}$ is a sequence of independent random variables, $U^{(\varepsilon)}_i(0)$ being distributed according to $\psi_0(u,i)\,du$.

Remark 2.1. The above assumption can be weakened. Indeed, all proofs work under the assumption in which items (i) and (ii) are replaced by (i') and (ii'), where

(i') for each $r \in [0,1)^2$, $\psi_0(\cdot, r)$ is a probability density on $\mathbb{R}_+$ with compact support $[0, R_0(r)]$ and $\psi_0(\cdot, r) > 0$ on $[0, R_0(r))$;

(ii') there exists a positive parameter $R_0$ such that $\sup_{r \in [0,1)^2} R_0(r) \le R_0 < \infty$.

Since the state space of the process changes with $\varepsilon$, it is convenient to identify our process $(U^{(\varepsilon)}(t))_{t \ge 0}$ with an element of a suitable space which is independent of $\varepsilon$. The identification is achieved through the map

\[ \mathbb{R}_+^{\Lambda_\varepsilon} \ni U^{(\varepsilon)}(t) \mapsto \mu^{(\varepsilon)}_t := \varepsilon^2 \sum_{i \in \Lambda_\varepsilon} \delta_{(U^{(\varepsilon)}_i(t),\, i)}. \]

In this way we identify our process with the element $t \mapsto \mu^{(\varepsilon)}_t$ of the Skorohod space $D(\mathbb{R}_+, \mathcal{S}')$, where $\mathcal{S}$ is the Schwartz space of all smooth functions $\phi : \mathbb{R}_+ \times [0,1)^2 \to \mathbb{R}$ and $\mathcal{S}'$ is the dual space of $\mathcal{S}$. The element $\mu^{(\varepsilon)}_t$ has the nice biological interpretation of being the empirical distribution of the membrane potentials of the neurons at time $t$.
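Concretely, $\mu^{(\varepsilon)}_t$ acts on a test function $\phi(u,r)$ by averaging it over the current configuration, and its total mass is $\varepsilon^2 |\Lambda_\varepsilon| = 1$, so it is a probability measure on $\mathbb{R}_+ \times [0,1)^2$. A small sketch; the configuration and the Gaussian-type test function below are illustrative choices:

```python
import math
import random

random.seed(3)

eps = 0.1
sites = [(x * eps, y * eps) for x in range(10) for y in range(10)]   # eps^{-2} = 100 neurons
u = {i: random.uniform(0.0, 2.0) for i in sites}                     # some configuration

def mu(phi):
    # Action of the empirical measure: mu(phi) = eps^2 * sum_i phi(U_i, i).
    return eps**2 * sum(phi(u[i], i) for i in sites)

# Total mass: phi = 1 gives eps^2 * |Lambda_eps| = 1, i.e. a probability measure.
assert abs(mu(lambda v, i: 1.0) - 1.0) < 1e-12

# An arbitrary smooth test function of potential and position.
phi = lambda v, i: math.exp(-v) * math.cos(i[0] + i[1])
print(mu(phi))
```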

For any fixed $T > 0$, we denote the restriction of the process to $[0,T]$ by $\mu^{(\varepsilon)}_{[0,T]}$, which belongs to the space $D([0,T], \mathcal{S}')$. We write $P^{(\varepsilon)}_{[0,T]}$ for the law on $D([0,T], \mathcal{S}')$ of the process $\mu^{(\varepsilon)}_{[0,T]}$. Our main result shows that for any positive $T$, the sequence of laws $P^{(\varepsilon)}_{[0,T]}$ converges, as $\varepsilon \to 0$, to a law $P_{[0,T]}$ on $D([0,T], \mathcal{S}')$ which is supported by a deterministic trajectory

\[ \rho := \big( \rho_t(u,r)\,du\,dr \big)_{t \in [0,T],\ u \in \mathbb{R}_+,\ r \in [0,1)^2}. \]

The function $\rho_t(u,r)$ is interpreted as the limit density function and is proved to solve the nonlinear PDE

\[ \frac{\partial \rho_t(u,r)}{\partial t} + \frac{\partial \big[ V(u,r,\rho_t)\, \rho_t(u,r) \big]}{\partial u} = -\varphi(u,r)\, \rho_t(u,r), \qquad t > 0,\ u > 0,\ r \in [0,1)^2, \tag{2.0.6} \]

where $V(u,r,\rho_t) = -\alpha u - \lambda_r \big( u - \bar u_t(r) \big) + p_t(r)$, and where for each $t \ge 0$ and $r \in [0,1)^2$,

\[ \bar u_t(r) = \int_{[0,1)^2} \int_0^\infty u\, b(r,r')\, \rho_t(u,r')\, du\, dr', \qquad p_t(r) = \int_{[0,1)^2} \int_0^\infty a(r',r)\, \varphi(u,r')\, \rho_t(u,r')\, du\, dr' \tag{2.0.7} \]

are respectively the limit average potential and the limit value added to the membrane potential of the neurons near position $r$.

The boundary conditions of (2.0.6) are specified by

\[ \rho_0(u,r) = \psi_0(u,r), \qquad \rho_t(0,r) = v(t,r), \tag{2.0.8} \]

where $\psi_0(u,r)$ is given, while $v(t,r)$ has to be derived together with (2.0.6). From our analysis we deduce that

\[ v(t,r) = \frac{q_t(r)}{\lambda_r \bar u_t(r) + p_t(r)}, \tag{2.0.9} \]

where $q_t(r)$ is the proportion of neurons close to position $r$ spiking at time $t$, i.e.,

\[ q_t(r) = \int_0^\infty \varphi(u,r)\, \rho_t(u,r)\, du. \]

The expression for $v(t,r)$ in (2.0.9) comes from conservation of mass in the following sense. The function $V(u,r,\rho_t)$ may be interpreted as the limit velocity field, so that the proportion of neurons close to location $r$ which are moving away from the membrane potentials around 0 at time $t$ is exactly $V(0,r,\rho_t)\, \rho_t(0,r)$. On the other hand, the proportion of neurons close to $r$ spiking at time $t$ (these neurons will have potential very close to 0) is precisely $q_t(r)$. Conservation of mass holds once we set $V(0,r,\rho_t)\, \rho_t(0,r) = q_t(r)$, which implies equation (2.0.9).
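Spelling out the mass-balance step, the algebra behind (2.0.9) is a one-line evaluation of the velocity field at the boundary $u = 0$:

\[ V(0,r,\rho_t) = -\alpha \cdot 0 - \lambda_r \big( 0 - \bar u_t(r) \big) + p_t(r) = \lambda_r \bar u_t(r) + p_t(r), \]

so the condition $V(0,r,\rho_t)\, \rho_t(0,r) = q_t(r)$ yields

\[ \rho_t(0,r) = \frac{q_t(r)}{\lambda_r \bar u_t(r) + p_t(r)} = v(t,r). \]

In particular $V(0,r,\rho_t) > 0$ as soon as $\bar u_t(r) > 0$ or $p_t(r) > 0$: the velocity field at the boundary points into $(0,\infty)$, consistent with neurons being reinjected at potential 0 after spiking.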

Since we may have $\psi_0(0,r) \neq v(0,r)$, i.e., $\psi_0(0,r) \neq q_0(r) / \big( \lambda_r \bar u_0(r) + p_0(r) \big)$, the function $\rho_t(u,r)$ may fail to be continuous, so that we need a weak formulation of (2.0.6).


We say that $\rho_t(u,r)$ is a weak solution of (2.0.6)-(2.0.8) if, for every test function $f \in C^1(\mathbb{R}_+)$ with compact support and each $r \in [0,1)^2$, the real-valued function $\mathbb{R}_+ \ni t \mapsto \int_0^\infty f(u)\, \rho_t(u,r)\, du$ is continuous in $t$, differentiable in $t > 0$, and

\[ \frac{\partial}{\partial t} \int_0^\infty f(u)\, \rho_t(u,r)\, du - \int_0^\infty f'(u)\, V(u,r,\rho_t)\, \rho_t(u,r)\, du - f(0)\, V(0,r,\rho_t)\, v(t,r) = - \int_0^\infty \varphi(u,r)\, f(u)\, \rho_t(u,r)\, du, \tag{2.0.10} \]

\[ \int_0^\infty f(u)\, \rho_0(u,r)\, du = \int_0^\infty f(u)\, \psi_0(u,r)\, du, \]

where $V(u,r,\rho_t) = -\alpha u - \lambda_r \big( u - \bar u_t(r) \big) + p_t(r)$, with $\bar u_t(r)$ and $p_t(r)$ as in (2.0.7).

The solution of (2.0.10) can be computed explicitly by the method of characteristics. Characteristics are curves along which the PDE reduces to an ODE. They are defined by the equation

\[ \frac{dx(t,r)}{dt} = V\big( x(t,r), r, \rho_t \big). \tag{2.0.11} \]

The solution of (2.0.11) on the interval $[s,t]$ with value $u$ at time $s$ is denoted by $T_{s,t}(u,r)$, $u \in \mathbb{R}_+$. Its explicit expression is

\[ T_{s,t}(u,r) = e^{-(\alpha+\lambda_r)(t-s)}\, u + \int_s^t e^{-(\alpha+\lambda_r)(t-h)} \big[ \lambda_r \bar u_h(r) + p_h(r) \big]\, dh. \]

The statement of our main theorem is the following.

Theorem 2. Under Assumptions 2.1 and 2.2, for any fixed $T > 0$,

\[ P^{(\varepsilon)}_{[0,T]} \xrightarrow{\;w\;} P_{[0,T]} \quad \text{in } D\big( [0,T], \mathcal{S}' \big) \text{ as } \varepsilon \to 0, \]

where $P_{[0,T]}$ is the law on $D([0,T], \mathcal{S}')$ supported by the distribution-valued trajectory $\omega_t$ given by

\[ \omega_t(\phi) = \int_{[0,1)^2} \int_0^\infty \phi(u,r)\, \rho_t(u,r)\, du\, dr, \qquad t \in [0,T], \]

for all $\phi \in \mathcal{S}$. The function $\rho_t(u,r)$ is the unique weak solution of (2.0.6)-(2.0.8) with $v_0 = \psi_0$ and $v_1$ given by (2.0.9). Furthermore, $\rho_t(u,r)$ is continuous on $\mathbb{R}_+ \times \mathbb{R}_+ \times [0,1)^2 \setminus \{ (t, T_{0,t}(0,r), r) : (t,r) \in \mathbb{R}_+ \times [0,1)^2 \}$, where it is also differentiable in $t$ and $u$ and its derivatives satisfy (2.0.6). Moreover, for any $t \ge 0$ and $r \in [0,1)^2$, $\rho_t(u,r)$ has compact support in $u$ and

\[ \rho_t(0,r) = \frac{q_t(r)}{\lambda_r \bar u_t(r) + p_t(r)} \qquad \text{and} \qquad \int_0^\infty \rho_t(u,r)\, du = 1. \]

The explicit expression of the solution $\rho_t(u,r)$ for $(t,u,r)$ with $u \ge T_{0,t}(0,r)$ is

\[ \rho_t(u,r) = \psi_0\big( T^{-1}_{0,t}(u,r), r \big) \exp\Big\{ - \int_0^t \big[ \varphi\big( T^{-1}_{s,t}(u,r), r \big) - \alpha - \lambda_r \big]\, ds \Big\}, \tag{2.0.12} \]

and for $(t,u,r)$ such that $u = T_{s,t}(0,r)$ for some $0 < s \le t$,

\[ \rho_t(u,r) = \frac{q_s(r)}{\lambda_r \bar u_s(r) + p_s(r)} \exp\Big\{ - \int_s^t \big[ \varphi\big( T_{s,h}(0,r), r \big) - \alpha - \lambda_r \big]\, dh \Big\}. \tag{2.0.13} \]

Theorem 3. Assume Assumptions 2.1 and 2.2. If additionally for all $r \in [0,1)^2$,

\[ \psi_0(0,r) = \frac{q_0(r)}{\lambda_r \bar u_0(r) + p_0(r)}, \qquad \text{where } q_0(r) = \int_0^\infty \varphi(u,r)\, \psi_0(u,r)\, du \]

and

\[ \bar u_0(r) = \int_{[0,1)^2} \int_0^\infty u\, b(r,r')\, \psi_0(u,r')\, du\, dr', \qquad p_0(r) = \int_{[0,1)^2} \int_0^\infty a(r',r)\, \varphi(u,r')\, \psi_0(u,r')\, du\, dr', \]

then $\rho_t(u,r)$ is continuous on the whole space $\mathbb{R}_+ \times \mathbb{R}_+ \times [0,1)^2$.

The estimate (2.0.5) provided by Theorem 1 implies that, with probability going to 1 as $\varepsilon \to 0$, all the membrane potentials are uniformly bounded in the time interval $[0,T]$. Therefore, we are allowed to change the values of the spiking rate $\varphi$ for those values of membrane potential not reached by the system of neurons. In doing so we can suppose, without loss of generality, that the function $\varphi$ satisfies the following stronger condition.

Assumption 2.3. $\varphi \in C^1(\mathbb{R}_+ \times [0,1)^2, \mathbb{R}_+)$ is non-decreasing in the first variable, Lipschitz continuous, bounded, and constant for all $u \ge u_0$, for some $u_0 > 0$. We denote by $\varphi^* = \|\varphi\|_\infty$ the sup norm of $\varphi$.

This argument is made precise at the end of Appendix 6.4.

2.1 Boundedness of the Membrane Potentials

Hereafter, we work under Assumption 2.3. Exploiting this assumption we are able to prove a result stronger than Theorem 1. Its proof is analogous to the proof of Proposition 1 in [DMGLP15], so we omit it here.

Proposition 2.1. Let $\varphi$ be any function satisfying Assumption 2.3.

(i) Given $\varepsilon > 0$ and $u \in \mathbb{R}_+^{\Lambda_\varepsilon}$, there exists a unique strong Markov process $U^{(\varepsilon)}(t)$ taking values in $\mathbb{R}_+^{\Lambda_\varepsilon}$, starting from $u$, whose generator is given by (2.0.1).

(ii) Let $N^{(\varepsilon)}(t)$ be the total number of spikes in the time interval $[0,t]$. For any $t \ge 0$ it holds that

\[ N^{(\varepsilon)}(t) \le \tilde N^{(\varepsilon)}(t) \quad \text{stochastically,} \]

where $\tilde N^{(\varepsilon)}(t)$ is the total number of events in the time interval $[0,t]$ of a Poisson process with rate $\varepsilon^{-2} \varphi^*$.

(iii) For any given $T > 0$ it holds that

\[ \sup_{t \le T} \big\| U^{(\varepsilon)}(t) \big\| \le \big\| U^{(\varepsilon)}(0) \big\| + a^* \varepsilon^2 N^{(\varepsilon)}(T), \]

where $a^* = \|a\|_\infty$. In particular, there exist positive constants $c_1$ and $c_2$ such that for any $\varepsilon > 0$ and any $U^{(\varepsilon)}(0)$,

\[ P^{(\varepsilon)}_{U^{(\varepsilon)}(0)} \Big[ \sup_{t \le T} \big\| U^{(\varepsilon)}(t) \big\| \le \big\| U^{(\varepsilon)}(0) \big\| + 2 a^* \varphi^* T \Big] \ge 1 - c_1 e^{-c_2 T \varepsilon^{-2}}. \tag{2.1.1} \]

The constants $c_1$ and $c_2$ do not depend on $\varepsilon$.

2.2 Tightness of the Sequence of Laws $P^{(\varepsilon)}_{[0,T]}$

In this section we prove the tightness of the sequence $P^{(\varepsilon)}_{[0,T]}$ under Assumption 2.3. This is the first step in the proof of Theorem 2. Although the proof of tightness is similar to the one provided in [DMGLP15], we keep it here for the sake of completeness.

Proposition 2.2. Assume Assumption 2.3, and assume also that $U^{(\varepsilon)}(0) = u^{(\varepsilon)}$ satisfies Assumption 2.2. Then the sequence of laws $P^{(\varepsilon)}_{[0,T]}$ of $\mu^{(\varepsilon)}_{[0,T]}$ is tight in $D\big( [0,T], \mathcal{S}' \big)$.


Proof. For any test function $\phi \in \mathcal{S}$ and all $t \in [0,T]$ we write

\[ \mu^{(\varepsilon)}_t(\phi) = \varepsilon^2 \sum_{i \in \Lambda_\varepsilon} \phi\big( U^{(\varepsilon)}_i(t), i \big). \]

By [Mit83], we only have to check the tightness of $\big( \mu^{(\varepsilon)}_t(\phi),\ t \in [0,T] \big) \in D([0,T], \mathbb{R})$ for each fixed $\phi \in \mathcal{S}$. For that we use the tightness criterion provided by Theorem 2.6.2 of [DMP91]. The criterion requires the existence of a positive constant $c$ such that

\[ \sup_{t \le T} E\big[ \gamma^{(\varepsilon)}_t \big]^2 \le c, \qquad \sup_{t \le T} E\big[ \sigma^{(\varepsilon)}_t \big]^2 \le c, \tag{2.2.1} \]

where $\gamma^{(\varepsilon)}_t$ and $\sigma^{(\varepsilon)}_t$ are respectively given by

\[ \gamma^{(\varepsilon)}_t = L\big[ \mu^{(\varepsilon)}_t(\phi) \big], \qquad \sigma^{(\varepsilon)}_t = L\big[ \mu^{(\varepsilon)}_t(\phi)^2 \big] - 2 \mu^{(\varepsilon)}_t(\phi)\, L\big[ \mu^{(\varepsilon)}_t(\phi) \big], \]

$L$ being the generator given by (2.0.1). In order to show (2.2.1), we compute $\gamma^{(\varepsilon)}_t$. By definition,

\[ \begin{aligned} \gamma^{(\varepsilon)}_t = {} & \varepsilon^2 \sum_j \sum_{i \neq j} \varphi\big( U^{(\varepsilon)}_j(t), j \big) \Big[ \phi\big( U^{(\varepsilon)}_i(t) + \varepsilon^2 a(j,i), i \big) - \phi\big( U^{(\varepsilon)}_i(t), i \big) \Big] \\ & + \varepsilon^2 \sum_j \varphi\big( U^{(\varepsilon)}_j(t), j \big) \Big[ \phi(0,j) - \phi\big( U^{(\varepsilon)}_j(t), j \big) \Big] \\ & - \alpha \varepsilon^2 \sum_j \phi'\big( U^{(\varepsilon)}_j(t), j \big)\, U^{(\varepsilon)}_j(t) - \varepsilon^2 \sum_j \phi'\big( U^{(\varepsilon)}_j(t), j \big)\, \lambda_j \Big[ U^{(\varepsilon)}_j(t) - \bar U^{(\varepsilon)}_j(t) \Big]. \end{aligned} \]

From simple calculations we deduce from the expression above that

\[ \begin{aligned} \gamma^{(\varepsilon)}_t = {} & \varepsilon^4 \sum_j \sum_{i \neq j} \varphi\big( U^{(\varepsilon)}_j(t), j \big)\, \phi'\big( U^{(\varepsilon)}_i(t), i \big)\, a(j,i) + \varepsilon^2 \sum_j \varphi\big( U^{(\varepsilon)}_j(t), j \big)\, \phi(0,j) \\ & - \varepsilon^2 \sum_j \varphi\big( U^{(\varepsilon)}_j(t), j \big)\, \phi\big( U^{(\varepsilon)}_j(t), j \big) - \alpha \varepsilon^2 \sum_j \phi'\big( U^{(\varepsilon)}_j(t), j \big)\, U^{(\varepsilon)}_j(t) \\ & - \varepsilon^2 \sum_j \phi'\big( U^{(\varepsilon)}_j(t), j \big)\, \lambda_j \Big[ U^{(\varepsilon)}_j(t) - \bar U^{(\varepsilon)}_j(t) \Big] + O(\varepsilon^2), \end{aligned} \]

with

\[ O(\varepsilon^2) := \varepsilon^2 \sum_j \sum_{i \neq j} \varphi\big( U^{(\varepsilon)}_j(t), j \big) \Big[ \phi\big( U^{(\varepsilon)}_i(t) + \varepsilon^2 a(j,i), i \big) - \phi\big( U^{(\varepsilon)}_i(t), i \big) - \varepsilon^2 a(j,i)\, \phi'\big( U^{(\varepsilon)}_i(t), i \big) \Big]. \]

Now, Assumption 2.3 implies that $\varphi$ is bounded, and since $\phi, \phi', \phi'', a, \lambda$ are also bounded, there is a positive constant $c$ such that

\[ \big| \gamma^{(\varepsilon)}_t \big| \le c \Big( 1 + \varepsilon^2 \sum_j U^{(\varepsilon)}_j(t) + \varepsilon^2 \sum_j \bar U^{(\varepsilon)}_j(t) \Big) \le c \Big( 1 + 2 \sup_{t \le T} \big\| U^{(\varepsilon)}(t) \big\| \Big). \]

By Assumption 2.2 and Proposition 2.1, it follows that $\sup_{t \le T} E\big[ \gamma^{(\varepsilon)}_t \big]^2 \le c$ for a positive constant $c$ not depending on $\varepsilon$.

We now turn to the proof of (2.2.1) for $\sigma^{(\varepsilon)}_t$. For that purpose, we write $L = L_{\mathrm{fire}} + L_{(\alpha+\lambda)}$, where $L_{\mathrm{fire}} \phi$ and $L_{(\alpha+\lambda)} \phi$ are given respectively by the first and the second term on the right-hand side of (2.0.1). Notice that $L_{(\alpha+\lambda)}$ acts as a derivative, so that we have

\[ L_{(\alpha+\lambda)}\big[ \mu^{(\varepsilon)}_t(\phi)^2 \big] = 2 \mu^{(\varepsilon)}_t(\phi)\, L_{(\alpha+\lambda)}\big[ \mu^{(\varepsilon)}_t(\phi) \big]. \]

The equality above is directly verified. Thus, it follows that

\[ \sigma^{(\varepsilon)}_t = L_{\mathrm{fire}}\big[ \mu^{(\varepsilon)}_t(\phi)^2 \big] - 2 \mu^{(\varepsilon)}_t(\phi)\, L_{\mathrm{fire}}\big[ \mu^{(\varepsilon)}_t(\phi) \big]. \]

Since $\big| 2 \mu^{(\varepsilon)}_t(\phi) \big| \le c$ and we have already proven the bound for $L_{\mathrm{fire}}\big[ \mu^{(\varepsilon)}_t(\phi) \big]$, it remains only to bound, uniformly in $t \le T$ and in $\varepsilon$, the $L^2$-norm of $L_{\mathrm{fire}}\big[ \mu^{(\varepsilon)}_t(\phi)^2 \big]$. By definition,

\[ \begin{aligned} L_{\mathrm{fire}}\big[ \mu^{(\varepsilon)}_t(\phi)^2 \big] = {} & \varepsilon^4 \sum_j \sum_{i,k \neq j} \varphi\big( U^{(\varepsilon)}_j(t), j \big) \Big[ \phi\big( U^{(\varepsilon)}_i(t) + \varepsilon^2 a(j,i), i \big)\, \phi\big( U^{(\varepsilon)}_k(t) + \varepsilon^2 a(j,k), k \big) - \phi\big( U^{(\varepsilon)}_i(t), i \big)\, \phi\big( U^{(\varepsilon)}_k(t), k \big) \Big] \\ & + \varepsilon^4 \sum_j \varphi\big( U^{(\varepsilon)}_j(t), j \big) \Big[ \phi^2(0,j) - \phi^2\big( U^{(\varepsilon)}_j(t), j \big) \Big] \\ & + 2 \varepsilon^4 \sum_j \sum_{i \neq j} \varphi\big( U^{(\varepsilon)}_j(t), j \big) \Big[ \phi(0,j) - \phi\big( U^{(\varepsilon)}_j(t), j \big) \Big] \Big[ \phi\big( U^{(\varepsilon)}_i(t) + \varepsilon^2 a(j,i), i \big) - \phi\big( U^{(\varepsilon)}_i(t), i \big) \Big]. \end{aligned} \]


Chapter 3

The Auxiliary Process and the Coupling Algorithm

In this chapter we define an auxiliary process which we shall later prove to be close to the true process as $\varepsilon \to 0$. This uniform closeness in the limit $\varepsilon \to 0$ is the content of Theorem 4. The proof of that result is based on a coupling algorithm designed so that neurons in both processes spike together as often as possible. In Chapter 4 we analyse the hydrodynamic limit of the auxiliary process, and in Chapter 5 we provide the proofs of Theorems 2 and 3.

Throughout this chapter $\varepsilon$ is kept fixed, so we omit the superscript $\varepsilon$ from $U^{(\varepsilon)}(t)$ and from all variables involved in the definition of the auxiliary process. Before defining the auxiliary process we introduce three partitions.

Definition 2 (Partition on space). Let $\ell > 0$ be a fixed parameter such that $\ell^{-1}$ is an integer. We partition the set $[0,1)^2$ into half-open squares of side length $\ell$:

\[ \mathcal{C}_\ell = \big\{ C_{(m_1,m_2)} : (m_1, m_2) \in (\ell\mathbb{Z})^2 \cap [0,1)^2 \big\}, \qquad C_{(m_1,m_2)} = [m_1, m_1+\ell) \times [m_2, m_2+\ell). \]

Since we shall not use the particular form chosen for the elements of $\mathcal{C}_\ell$, we take any enumeration of the set $(\ell\mathbb{Z})^2 \cap [0,1)^2$ and assume that $\mathcal{C}_\ell = \{ C_m : m = 1, \ldots, \ell^{-2} \}$. For each square $C_m$ we denote by $i_m$ its center.

Definition 3 (Partition on time). Let $\delta$ and $\tau$ be positive numbers such that $\tau$ divides $\delta$. We partition the interval $[0, \delta)$ into intervals of length $\tau$:

\[ \mathcal{J}_\tau = \{ J_h : h = 1, \ldots, \delta\tau^{-1} \}, \qquad J_h = \big[ \delta - h\tau,\ \delta - (h-1)\tau \big). \]

Let us explain the role of the partitions $\mathcal{C}_\ell$ and $\mathcal{J}_\tau$ in the definition of the auxiliary process. The auxiliary process is denoted by $Y^{(\delta,\ell,E,\tau)}(n\delta)$ (the parameter $E$ will appear below) and is defined at the discrete times $n\delta$, $n \in \mathbb{N}$. Its definition is such that neurons in the square $C_m$, having potential $U \ge 0$, spike with the constant rate $\varphi(U, i_m)$ in the time interval $[n\delta, (n+1)\delta)$. Thus, neurons in the same square spike according to the same spiking rate $u \mapsto \varphi(u, i_m)$. Moreover, within each such interval, all firing events after the first one are suppressed.

The configuration of Y^{(δ,ℓ,E,τ)} is updated at every time interval [nδ, (n+1)δ). Neurons in a common square share the same updating rule, so that we need to specify it, in each square, only for a single neuron. For that sake, denote by Ȳ_{i_m}^{(δ,ℓ,E,τ)}(nδ) the average potential attributed to neuron i_m in the auxiliary process at time nδ, and take i ∈ C_m. Conditionally on Ȳ_{i_m}^{(δ,ℓ,E,τ)}(nδ) = ȳ(i_m), suppose first that i has not spiked during the interval [nδ, (n+1)δ). Then the value of its membrane potential at time (n+1)δ is obtained by first letting its current potential evolve, for a time δ, under the attraction of ȳ(i_m), and then taking into account the effect of the spikes in the interval [0, δ). If, on the other hand, i has spiked in the interval J_h, its potential is updated by first setting its current potential to 0 and then applying the earlier updating rule during the interval [δ−(h−1)τ, δ). This means that the potential of i is then attracted for a time (h−1)τ by ȳ(i_m), and next the effect of the spikes during [δ−(h−1)τ, δ) is taken into account. Before giving the precise definition of the auxiliary process, we need to introduce a third partition.

Figure 3.1: The red dots represent the centers of the half-open squares C_m of side length ℓ.

Definition 4 (Partition of the membrane potential at time 0). Let E be a positive real number which divides R_0. We then partition the interval [0, R_0] into subintervals

I_E = {I_k : k = 1, …, R_0E^{−1}}, I_k = [(k−1)E, kE).

For each I_k we denote its center by E_k.
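The discretization of a potential to the center of its subinterval can be sketched as follows (an illustrative snippet with our own names, assuming the half-open convention I_k = [(k−1)E, kE)):

```python
import math

def Phi0(u, E):
    """Discretize u in [0, R0) to the center E_k of the subinterval
    I_k = [(k-1)E, kE) containing it (Definition 4)."""
    k = math.floor(u / E) + 1          # index of the subinterval containing u
    return (k - 0.5) * E               # its center E_k

assert Phi0(0.3, 0.5) == 0.25          # 0.3 lies in I_1 = [0, 0.5)
assert Phi0(1.2, 0.5) == 1.25          # 1.2 lies in I_3 = [1.0, 1.5)
```

By construction the discretization error |Φ_0(u) − u| is at most E/2.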

For each neuron i ∈ Λ_ε, the value Y_i^{(δ,ℓ,E,τ)}(0) is defined by first picking a point in [0, R_0] according to the probability density ψ_0(u, i) du and then redefining it as E_k if the chosen value belongs to I_k. The precise definition of the auxiliary process is given now.

The definition of the process proceeds by induction. Initially, we consider the map [0, R_0] ∋ u ↦ Φ_0(u) which assigns Φ_0(u) = E_k if u ∈ I_k, and we then put

Y_i^{(δ,ℓ,E,τ)}(0) = Φ_0(U_i(0)), for each i ∈ Λ_ε. (3.0.1)

Now suppose that the configuration Y^{(δ,ℓ,E,τ)}(nδ) = y = (y_i, i ∈ Λ_ε) is given, and consider a sequence of independent exponential random variables (ξ_i)_{i∈Λ_ε}, independent of everything else, whose rates are ϕ(y_i, i_m) when i ∈ C_m. Notice that we keep the spiking intensity of the neurons constant. We write N(m, h) to denote the number of neurons in C_m spiking in the interval J_h ∈ J_τ,

N(m, h) = Σ_{i∈C_m} 1{ξ_i ∈ J_h}, J_h = [δ − hτ, δ − (h−1)τ), (3.0.2)

while the contribution, due to spikes of other neurons, to the membrane potential of those neurons in C_m which spike in J_h is given by

S(m, h) = ε² Σ_{m′=1}^{ℓ^{−2}} Σ_{s=1}^{h−1} a(i_{m′}, i_m) N(m′, s), h = 2, …, δτ^{−1},

and for h = 1 we set S(m, 1) = 0. Neurons which do not spike in [nδ, (n+1)δ) will have their membrane potentials increased by

S(m) = ε² Σ_{m′=1}^{ℓ^{−2}} Σ_{h=1}^{δτ^{−1}} a(i_{m′}, i_m) N(m′, h). (3.0.3)

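The counting variables and summed synaptic inputs in (3.0.2)–(3.0.3) can be computed directly from a list of spike times. The following sketch is ours (toy one-dimensional "centers", generic kernel a); note that J_1 is the last subinterval of [0, δ), so a neuron spiking in J_h only collects the spikes of J_s with s < h, which occur later in time.

```python
import math

def interval_index(xi, delta, tau):
    """Index h with xi in J_h = [delta - h*tau, delta - (h-1)*tau)."""
    return math.ceil((delta - xi) / tau)

def counts_and_inputs(xis, squares, a, centers, delta, tau, eps):
    """N(m,h), S(m,h) and S(m) of (3.0.2)-(3.0.3). xis[i] is the spike
    time of neuron i in [0, delta) (None if no spike), squares[i] its
    square label m, a(r', r) the chemical-synapse kernel."""
    H, M = round(delta / tau), len(centers)
    N = [[0] * (H + 1) for _ in range(M)]
    for xi, m in zip(xis, squares):
        if xi is not None:
            N[m][interval_index(xi, delta, tau)] += 1
    # spikers in J_h only see spikes of J_s, s < h (later subintervals)
    S_mh = [[eps ** 2 * sum(a(centers[mp], centers[m]) * N[mp][s]
                            for mp in range(M) for s in range(1, h))
             for h in range(H + 1)] for m in range(M)]
    # non-spikers collect the contribution of every spike in [0, delta)
    S_m = [eps ** 2 * sum(a(centers[mp], centers[m]) * N[mp][h]
                          for mp in range(M) for h in range(1, H + 1))
           for m in range(M)]
    return N, S_mh, S_m

# tiny example: two squares, kernel a = 1, delta = 1, tau = 1/4
N, S_mh, S_m = counts_and_inputs([0.9, 0.1, None], [0, 1, 0],
                                 lambda rp, r: 1.0, [0.0, 1.0], 1.0, 0.25, 1.0)
assert N[0][1] == 1 and N[1][4] == 1   # spike at 0.9 is in J_1, at 0.1 in J_4
assert S_m[0] == 2.0                   # non-spikers in square 0 see both spikes
```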

The average potential of neuron i_m (at time nδ) is defined by

ȳ(i_m) = ε² Σ_{m′=1}^{ℓ^{−2}} Σ_{i∈C_{m′}} b(i_m, i_{m′}) y_i.

Notice that the electrical synaptic strength is constant in each square C_{m′}. Setting for simplicity ȳ(m) := ȳ(i_m) and λ_m := λ_{i_m}, we write

Φ_{t,ȳ(m)}(y_i) = e^{−t(α+λ_m)} y_i + (λ_m/(α+λ_m))(1 − e^{−t(α+λ_m)}) ȳ(m), 0 ≤ t ≤ δ, i ∈ C_m, (3.0.4)

for the deterministic flow attracting the value y_i to ȳ(m), and set

Y_i^{(δ,ℓ,E,τ)}((n+1)δ) = Φ_{δ,ȳ(m)}(y_i) + S(m), i ∈ C_m, if ξ_i > δ. (3.0.5)

Hence neurons which did not spike follow the deterministic flow for a time δ. Afterwards, we add to their membrane potentials the value S(m), generated by the spiking of other neurons, only at the end of the interval [nδ, (n+1)δ).

For those neurons which spike in the interval J_h, we set

Y_i^{(δ,ℓ,E,τ)}((n+1)δ) = Φ_{(h−1)τ,ȳ(m)}(0) + S(m, h), i ∈ C_m, if ξ_i ∈ J_h. (3.0.6)

This is the value of the membrane potential of a neuron initially having potential 0, following the deterministic flow for the remaining time (h−1)τ and receiving an additional potential S(m, h), due to spikes of other neurons in the time interval [δ−(h−1)τ, δ).
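The flow (3.0.4) and the two update rules just described translate into a few lines of Python; this is an illustrative transcription with our own function names, not code from the thesis.

```python
import math

def Phi(t, ybar_m, y_i, alpha, lam_m):
    """Deterministic flow (3.0.4): exponential relaxation of y_i toward
    the equilibrium value lam_m/(alpha+lam_m) * ybar_m."""
    e = math.exp(-t * (alpha + lam_m))
    return e * y_i + (lam_m / (alpha + lam_m)) * (1.0 - e) * ybar_m

def update_no_spike(y_i, ybar_m, S_m, delta, alpha, lam_m):
    """Rule (3.0.5): flow for the full time delta, then add S(m)."""
    return Phi(delta, ybar_m, y_i, alpha, lam_m) + S_m

def update_spike(h, ybar_m, S_mh, tau, alpha, lam_m):
    """Rule (3.0.6): reset to 0 upon spiking in J_h, flow for the
    remaining time (h-1)*tau, then add S(m, h)."""
    return Phi((h - 1) * tau, ybar_m, 0.0, alpha, lam_m) + S_mh

assert Phi(0.0, 2.0, 5.0, 1.0, 1.0) == 5.0          # no time elapsed
assert abs(Phi(100.0, 2.0, 5.0, 1.0, 1.0) - 1.0) < 1e-6  # -> lam/(a+lam)*ybar
```

A neuron spiking in J_1 (the last subinterval) has no remaining flow time, so its updated potential is just S(m, 1) = 0.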

Remark 3.1. Notice that all the variables N(m, h), S(m, h), S(m) and ȳ(m) also depend on n. We shall stress this dependency in the analysis of the hydrodynamic limit for Y^{(δ,ℓ,E,τ)} in Section 4.

Remark 3.2. Even though the auxiliary process Y^{(δ,ℓ,E,τ)} is defined in such a way that it is close to the true process, we could have chosen the distribution of the spiking neurons in the auxiliary process differently. The choice we have made is convenient, especially in the analysis of the hydrodynamic limit for Y^{(δ,ℓ,E,τ)}.

3.1 Coupling the Auxiliary and True Processes

In this section, we present a coupling algorithm for the two processes (U(nδ))_{n≥1} and (Y^{(δ,ℓ,E,τ)}(nδ))_{n≥1}. The algorithm is designed so that neurons in both processes spike together as often as possible.

At time 0 we set, for each i ∈ Λ_ε, Y_i^{(δ,ℓ,E,τ)}(0) = Φ_0(U_i(0)). Then, for n ≥ 0, the input of the algorithm is the configuration (U(nδ), Y^{(δ,ℓ,E,τ)}(nδ)) and its output is the new configuration (U((n+1)δ), Y^{(δ,ℓ,E,τ)}((n+1)δ)). The following auxiliary variables are used in the algorithm.

• (u, y) ∈ R₊^{Λ_ε} × R₊^{Λ_ε}, representing the configurations of membrane potentials in the two processes, and ȳ(m) = ε² Σ_{m′} Σ_{i∈C_{m′}} b(i_{m′}, i_m) y_i, representing the average membrane potential of the neuron i_m.

• Independent random times ξ_i^1, ξ_i^2, ξ_i ∈ (0, ∞), i ∈ Λ_ε, indicating possible times of updates.

• q = (q_i, i ∈ Λ_ε) ∈ {0, 1}^{Λ_ε}. The variable q_i marks the possible spike of the neuron i in the auxiliary process.

• β = (β_i, i ∈ Λ_ε) ∈ {0, 1, …, δτ^{−1}}^{Λ_ε}. The variable β_i indicates in which subinterval of length τ the neuron i has spiked in the auxiliary process. The condition β_i = 0 means the neuron i has not spiked.


The deterministic flows followed by the processes U and Y^{(δ,ℓ,E,τ)} are part of the coupling algorithm. Recall that the deterministic flow of the process Y_i^{(δ,ℓ,E,τ)} is denoted by Φ_{t,ȳ(m)}(y_i), see equation (3.0.4), while the deterministic flow of U_i at time t is Ψ_{t,u}^i(u_i) := (Ψ_t(u))_i = (e^{At}u)_i, see (2.0.4) and the formulas therein.

The coupling algorithm can be described as follows. Conditionally on the random vector (U(nδ), Y^{(δ,ℓ,E,τ)}(nδ)) = (u, y), we attach to each neuron i two independent random clocks ξ_i^1 and ξ_i^2. For i ∈ C_m, ξ_i^1 has intensity ϕ(Ψ_{t,u}^i(u_i), i) ∧ ϕ(y_i, i_m), while ξ_i^2 has intensity |ϕ(Ψ_{t,u}^i(u_i), i) − ϕ(y_i, i_m)|. Random clocks associated to different neurons are independent. If ξ_i^1 rings first, then the neuron i spikes in both processes and the coupling is successful. On the other hand, if ξ_i^2 rings first, then the neuron i fires only in the process in which the membrane potential of i at time ξ_i^2− is the largest. Whenever the neuron i fires in the interval J_h in the auxiliary process, we set q_i = 1 and β_i = h and disregard further spikes of i in the auxiliary process. Thus, all other possible spikes of i will be considered only in the true process U_i. For this reason we also consider a random clock ξ_i with intensity ϕ(Ψ_{t,u}^i(u_i), i) whose rings indicate the next spikes of i in the true process. All the random clocks are considered only if they ring in the time interval [0, δ).

The algorithm is given below.

Algorithm 1 Coupling algorithm

1: Input: U^{(ε)}(nδ), Y^{(δ,ℓ,E,τ)}(nδ)
2: Output: U^{(ε)}((n+1)δ), Y^{(δ,ℓ,E,τ)}((n+1)δ)
3: Initial values: (u, y) ← (U^{(ε)}(nδ), Y^{(δ,ℓ,E,τ)}(nδ)), q_i ← 0 and β_i ← 0 for all i ∈ Λ_ε, L ← δ
4: while L > 0 do
5:   For each i ∈ Λ_ε, choose independent random times
     • ξ_i^1 with intensity ϕ(Ψ_{t,u}^i(u_i), i) ∧ ϕ(y_i, i_m) for all neurons in C_m
     • ξ_i^2 with intensity |ϕ(Ψ_{t,u}^i(u_i), i) − ϕ(y_i, i_m)| for all neurons in C_m
     • ξ_i with intensity ϕ(Ψ_{t,u}^i(u_i), i)
     and set R = inf_{i∈Λ_ε: q_i=0} (ξ_i^1 ∧ ξ_i^2) ∧ inf_{i∈Λ_ε: q_i=1} ξ_i
6:   if R ≥ L then (stop situation)
7:     y_i ← Φ_{δ,ȳ(m)}(y_i) + S(m), for all i ∈ Λ_ε ∩ C_m such that q_i = 0
8:     y_i ← Φ_{(β_i−1)τ,ȳ(m)}(0) + S(m, β_i), for all i ∈ Λ_ε ∩ C_m such that q_i = 1
9:     u_i ← Ψ_{L,u}^i(u_i), L ← 0
10:  else if R = ξ_i^1 < L then
11:    L ← L − R, q_i ← 1, β_i ← δτ^{−1} − ⌊Rτ^{−1}⌋
12:    u_i ← 0, u_j ← Ψ_{R,u}^j(u_j) + ε²a(i, j) for all j ≠ i
13:  else if R = ξ_i^2 < L then
14:    if ϕ(Ψ_{R,u}^i(u_i), i) > ϕ(y_i, i_m) then
15:      L ← L − R, u_i ← 0, u_j ← Ψ_{R,u}^j(u_j) + ε²a(i, j) for all j ≠ i
16:    end if
17:    if ϕ(Ψ_{R,u}^i(u_i), i) ≤ ϕ(y_i, i_m) then
18:      L ← L − R, q_i ← 1, β_i ← δτ^{−1} − ⌊Rτ^{−1}⌋, u_i ← Ψ_{R,u}^i(u_i) for all i ∈ Λ_ε
19:    end if
20:  else if R = ξ_i < L then
21:    L ← L − R, u_i ← 0, u_j ← Ψ_{R,u}^j(u_j) + ε²a(i, j) for all j ≠ i
22:  end if
23: end while
24: (U^{(ε)}((n+1)δ), Y^{(δ,ℓ,E,τ)}((n+1)δ)) ← (u, y)
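The clock race at the heart of the algorithm can be illustrated for constant rates (the true-process rate actually varies along the flow; this is a simplified sketch with our own names): the ξ^1 clock with the minimum rate produces coupled spikes, while the ξ^2 clock with rate equal to the rate difference produces spikes in only one of the two processes.

```python
import random

def coupled_spike(rate_true, rate_aux, rng):
    """One race between xi^1 (rate min) and xi^2 (rate |difference|).
    Returns (spike_in_true, spike_in_aux) for the first ring; the sum
    of the two clocks reproduces the marginal rate of each process."""
    r_min = min(rate_true, rate_aux)
    r_diff = abs(rate_true - rate_aux)
    xi1 = rng.expovariate(r_min) if r_min > 0 else float("inf")
    xi2 = rng.expovariate(r_diff) if r_diff > 0 else float("inf")
    if xi1 <= xi2:
        return True, True                  # coupled spike: both processes fire
    if rate_true > rate_aux:
        return True, False                 # only the true process fires
    return False, True                     # only the auxiliary process fires

rng = random.Random(0)
# equal rates: the difference clock never rings, so every spike is coupled
assert all(coupled_spike(2.0, 2.0, rng) == (True, True) for _ in range(100))
```

When the two rates agree the coupling always succeeds, which is why the difference of the potentials controls the probability of a decoupling.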


3.2 Consequences of the Coupling Algorithm

Theorem 4 is the main result of this section. It states that typically the difference of the potentials Δ_i(n) = |U_i(nδ) − Y_i^{(δ,ℓ,E,τ)}(nδ)| is small (proportional to δ). In addition, it claims that the proportion of neurons having large values of Δ_i(n) is also small (again proportional to δ).

Definition 5. A label i ∈ Λ_ε is called "good at time kδ" if for all n ≤ k the following is true:

(i) either ξ_i^1 rings first and ξ_i does not ring on the interval [(n−1)δ, nδ];

(ii) or neither ξ_i^1 nor ξ_i^2 rings on the interval [(n−1)δ, nδ].

We denote by G_n the set of good labels at time nδ and by B_n = Λ_ε \ G_n the set of bad labels. For i ∈ G_n we set Δ_i(n) := |U_i(nδ) − Y_i^{(δ,ℓ,E,τ)}(nδ)|, so that the maximum distance between the membrane potentials of the true and auxiliary processes, over the good labels, is

θ_n = max{Δ_i(k) : i ∈ G_n, k ≤ n}.

We now state Theorem 4. Its proof is postponed to Appendix 6.1.

Theorem 4. Grant Assumption 2.3. For any given T > 0, there exist δ_0 > 0 and a constant C, depending on ‖ϕ‖_∞ and on T, such that for all δ ≤ δ_0,

θ_n ≤ Cδ and ε²|B_n| ≤ Cδ for all n such that nδ ≤ T,

with probability ≥ 1 − c1δ^{−1}e^{−c2ε^{−2}δ^4}. The constants c1 and c2 do not depend on ε and δ.

For any test function φ ∈ S and n ≥ 1, we write

ν_{nδ}^{(δ,ℓ,E,τ)}(φ) = ε² Σ_{i∈Λ_ε} φ(Y_i^{(δ,ℓ,E,τ)}(nδ), i).

As a by-product of Theorem 4, we obtain an upper bound for the L¹-distance between the random variables μ_{nδ}(φ) and ν_{nδ}^{(δ,ℓ,E,τ)}(φ), for each test function φ ∈ S. This result will be used, in Section 5, in the analysis of the hydrodynamic limit for U. Let

T = {t ∈ [0, T] : t = n2^{−q}T, n, q ∈ N}.

Remember that P_u^{(ε)} denotes the law under which the true process U^{(ε)}(t) satisfies the condition U(0) = u. We write P̃_u^{(ε)} to denote the law under which the process Y^{(δ,ℓ,E,τ)}(·) satisfies Y^{(δ,ℓ,E,τ)}(0) = Φ_0(u) = (Φ_0(u_i), i ∈ Λ_ε), and write Q_u^{(ε)} to denote the joint law of the true and auxiliary processes induced by the coupling algorithm provided above. We denote the associated expectations by E_u^{(ε)} and Ẽ_u^{(ε)}, and, by abuse of notation, the joint expectation by Q_u^{(ε)}.

Proposition 3.1. Take t ∈ T, δ ∈ {2^{−q}T, q ∈ N}, let n be such that t = δn, and fix φ ∈ S. Then there exists a constant C, not depending on δ, such that

Q_u^{(ε)}[|μ_t(φ) − ν_t^{(δ,ℓ,E,τ)}(φ)|] ≤ C‖ϕ‖_Lip (e^{−Cε^{−2}δ^4}/δ + δ). (3.2.1)


Chapter 4

Hydrodynamic Limit for the Auxiliary Process

In this section, we initially describe the random evolution of the membrane potentials in the auxiliary process. Next, we define a deterministic version of this evolution taking into account the average behaviour of the auxiliary process in each time interval [nδ, (n+1)δ). Besides, we also consider the random variables which count the number of neurons of the auxiliary process in a given square with a given potential, and from the dynamics of these variables we define a second deterministic evolution. The main theorem of this section, Theorem 5, states that both the random potentials and the counting variables become deterministic as ε → 0, and that they are described respectively by the first and second deterministic evolutions.

In the remainder of the section these deterministic evolutions will be used to define the hydrodynamic evolution of the auxiliary process. When necessary we shall stress the dependence both on ε and n, writing Y^{(ε,δ,ℓ,E,τ)}, N_n^{(ε)}(m, h), ȳ_n^{(ε)}(m), S_n^{(ε)}(m, h) and S_n^{(ε)}(m).

4.1 Hydrodynamic Evolution of the Auxiliary Process

Throughout the subsection the parameters δ, ℓ, E, τ are kept fixed, so that we omit the superscripts in all variables considered below. In what follows we work within a fixed square C_m, and accordingly we also drop the dependency on m unless confusion may arise.

We denote by E_n^{(ε)} the random set of potentials which the auxiliary process (restricted to C_m) assumes at time nδ. By (3.0.1), we have E_0^{(ε)} = {E_{0,k}^{(ε)} : k = 1, …, R_0E^{−1}}, where we set E_{0,k}^{(ε)} = E_k. At time δ, the potential of the neurons which spike in J_h = [δ−hτ, δ−(h−1)τ), independently of their initial membrane potentials, will be a value E_{1,h}^{(ε)} ∈ E_1^{(ε)}. By (3.0.6), we immediately see that

E_{1,h}^{(ε)} = Φ_{(h−1)τ, ȳ_0^{(ε)}(m)}(0) + S_1^{(ε)}(m, h), h = 1, …, δτ^{−1}. (4.1.1)

On the other hand, at time δ, the membrane potential of those neurons which initially had potential E_{0,k}^{(ε)} and do not spike will be a value E_{1,k+δτ^{−1}}^{(ε)} ∈ E_1^{(ε)}. Recalling (3.0.5), it is readily verified that

E_{1,k+δτ^{−1}}^{(ε)} = Φ_{δ, ȳ_0^{(ε)}(m)}(E_{0,k}^{(ε)}) + S_1^{(ε)}(m), k = 1, …, |E_0^{(ε)}|, (4.1.2)

where E_{0,k}^{(ε)} ∈ E_0^{(ε)}. Thus, we may split the elements of the finite set E_1^{(ε)} into two groups. The first group consists of those potentials satisfying (4.1.2), reached only by neurons which do not spike in [0, δ). On the other hand, due to spikes of neurons in the interval [0, δ), some potentials are "created" at time δ. This leads to the second group of potentials, those satisfying (4.1.1). Moreover,

the following chain of inequalities holds:

0 = E_{1,1}^{(ε)} < … < E_{1,δτ^{−1}}^{(ε)} < E_{1,1+δτ^{−1}}^{(ε)} < … < E_{1,R_0E^{−1}+δτ^{−1}}^{(ε)}.

Iterating the argument above, for each nδ ≤ T, we may also split the elements of the set E_n^{(ε)} into two groups. The potentials belonging to the first group satisfy

E_{n,k+δτ^{−1}}^{(ε)} = Φ_{δ, ȳ_{n−1}^{(ε)}(m)}(E_{n−1,k}^{(ε)}) + S_n^{(ε)}(m), k = 1, …, |E_{n−1}^{(ε)}|,

where E_{n−1,k}^{(ε)} ∈ E_{n−1}^{(ε)}, while the potentials of the second group satisfy

E_{n,h}^{(ε)} = Φ_{(h−1)τ, ȳ_{n−1}^{(ε)}(m)}(0) + S_n^{(ε)}(m, h), h = 1, …, δτ^{−1}.

From our definitions, we also have that

0 = E_{n,1}^{(ε)} < … < E_{n,δτ^{−1}}^{(ε)} < E_{n,1+δτ^{−1}}^{(ε)} < … < E_{n,R_0E^{−1}+nδτ^{−1}}^{(ε)}.

Now, writing

e_n^{(ε)}(m) = Ẽ_{Y^{(ε,δ,ℓ,E,τ)}(0)}^{(ε)}[Ȳ_{i_m}^{(ε,δ,ℓ,E,τ)}(nδ)], 1 ≤ m ≤ ℓ^{−2}, nδ ≤ T, (4.1.3)

to denote the expected value of the local average membrane potential Ȳ_{i_m}^{(ε,δ,ℓ,E,τ)} at time nδ, we set D_0^{(ε)} = E_0^{(ε)} and then recursively define, for k = 1, …, |D_{n−1}^{(ε)}|,

D_{n,k+δτ^{−1}}^{(ε)} := Φ_{δ, e_{n−1}^{(ε)}(m)}(D_{n−1,k}^{(ε)}) + Ẽ_{Y^{(ε,δ,ℓ,E,τ)}(0)}^{(ε)}[S_n^{(ε)}(m)], with D_{n−1,k}^{(ε)} ∈ D_{n−1}^{(ε)}, (4.1.4)

D_{n,h}^{(ε)} := Φ_{(h−1)τ, e_{n−1}^{(ε)}(m)}(0) + Ẽ_{Y^{(ε,δ,ℓ,E,τ)}(0)}^{(ε)}[S_n^{(ε)}(m, h)], h = 1, …, δτ^{−1}. (4.1.5)

Given E_{n,k}^{(ε)} ∈ E_n^{(ε)}, we write η_n^{(ε)}(m, k) to denote the number of neurons of Y^{(ε,δ,ℓ,E,τ)}, in C_m, with membrane potential E_{n,k}^{(ε)} at time nδ. Finally, we write

ζ_0^{(ε)}(m, k) = Ẽ_{Y^{(ε,δ,ℓ,E,τ)}(0)}^{(ε)}[η_0^{(ε)}(m, k)]

to denote the expected number of neurons of Y^{(ε,δ,ℓ,E,τ)} in the square C_m whose potential at time 0 is E_{0,k}^{(ε)}, and iteratively we set

ζ_n^{(ε)}(m, k+δτ^{−1}) = ζ_{n−1}^{(ε)}(m, k) e^{−δϕ(D_{n−1,k}^{(ε)}, i_m)}, D_{n−1,k}^{(ε)} ∈ D_{n−1}^{(ε)}, (4.1.6)

and for h = 1, …, δτ^{−1},

ζ_n^{(ε)}(m, h) = Σ_k ζ_{n−1}^{(ε)}(m, k) [e^{−(δ−hτ)ϕ(D_{n−1,k}^{(ε)}, i_m)} − e^{−(δ−(h−1)τ)ϕ(D_{n−1,k}^{(ε)}, i_m)}]. (4.1.7)

Suppose we have computed the number of neurons of Y^{(ε,δ,ℓ,E,τ)}, in C_m, with a given potential E_{n−1,k}^{(ε)}. Then the probability that a neuron with such potential does not spike in the interval [0, δ) is exactly e^{−δϕ(E_{n−1,k}^{(ε)}, i_m)}. Thus, we expect that the number of neurons having potential E_{n,k+δτ^{−1}}^{(ε)} at the next step satisfies

η_n^{(ε)}(m, k+δτ^{−1}) ≈ η_{n−1}^{(ε)}(m, k) e^{−δϕ(E_{n−1,k}^{(ε)}, i_m)}.

Similarly, the expected number of neurons having potential E_{n−1,k}^{(ε)} which spike in the interval J_h = [δ−hτ, δ−(h−1)τ) is precisely

η_{n−1}(m, k) [e^{−(δ−hτ)ϕ(E_{n−1,k}^{(ε)}, i_m)} − e^{−(δ−(h−1)τ)ϕ(E_{n−1,k}^{(ε)}, i_m)}].

Then, summing over k we get the random version of (4.1.7).
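The deterministic bookkeeping (4.1.6)–(4.1.7) in a fixed square is a short recursion on the class masses; the following sketch is ours (generic rate function, dictionary-indexed classes), and the assertion checks the telescoping identity behind it: survivors plus spikers exactly conserve the mass.

```python
import math

def zeta_step(zeta_prev, D_prev, phi, i_m, delta, tau):
    """One step of (4.1.6)-(4.1.7) in a fixed square with center i_m:
    class k survivors are shifted to index k + delta/tau, spikers are
    redistributed over the delta/tau new classes indexed by h."""
    H = round(delta / tau)
    survivors = {k + H: z * math.exp(-delta * phi(D_prev[k], i_m))
                 for k, z in zeta_prev.items()}
    spikers = {h: sum(z * (math.exp(-(delta - h * tau) * phi(D_prev[k], i_m))
                           - math.exp(-(delta - (h - 1) * tau) * phi(D_prev[k], i_m)))
                      for k, z in zeta_prev.items())
               for h in range(1, H + 1)}
    return {**spikers, **survivors}

phi = lambda u, r: 1.0                         # constant rate, for the check
zeta1 = zeta_step({1: 0.5, 2: 0.5}, {1: 0.2, 2: 0.7}, phi, None, 1.0, 0.5)
# the h-sum telescopes to (1 - e^{-delta*phi}), so total mass is conserved
assert abs(sum(zeta1.values()) - 1.0) < 1e-12
```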

We shall show that the random membrane potentials E_{n,k}^{(ε)} are close (proportionally to ε^{1/2}) to the deterministic values D_{n,k}^{(ε)} defined above. Furthermore, it will be shown that the collection of counting variables η_n^{(ε)}(m, k) is close to the values ζ_n^{(ε)}(m, k). Here, close again means proportionally to ε^{1/2}.

Theorem 5. There exist positive constants C, c1 and c2, not depending on ε, such that for all n with 0 ≤ nδ ≤ T, E_{n,k}^{(ε)} ∈ E_n^{(ε)} and D_{n,k}^{(ε)} ∈ D_n^{(ε)},

|E_{n,k}^{(ε)} − D_{n,k}^{(ε)}| ≤ Cε^{1/2}, ε²|η_n^{(ε)}(m, k+δτ^{−1}) − ζ_n^{(ε)}(m, k+δτ^{−1})| ≤ Eℓ²ε^{1/2},

for k = 1, …, |E_n^{(ε)}|, and

ε²|η_n^{(ε)}(m, h) − ζ_n^{(ε)}(m, h)| ≤ τℓ²ε^{1/2}, h = 1, …, δτ^{−1},

with probability ≥ 1 − c1e^{−c2ε^{−1}}.

The proof is given in Appendix 6.3.

Remark 4.1. The constants C, c1 and c2 given in Theorem 5, which do not depend on ε, turn out to have a bad dependency on the parameters δ, ℓ, E and τ. However, all these parameters are fixed in this section, so that Theorem 5 implies that both E_{n,k}^{(ε)} and D_{n,k}^{(ε)}, as well as ε²η_n(m, k) and ε²ζ_n(m, k), are close to each other as ε → 0 (keeping δ, ℓ, E, τ fixed).

4.2 The Limit Trajectory of the Auxiliary Process

As a consequence of Theorem 5 we shall prove that the law of ν converges, in the hydrodynamic limit, to a limit law denoted by ρ_{nδ}^{(δ,ℓ,E,τ)}(u, r), to be defined below. The limits as ε → 0 of D_{n,k}^{(ε)} and ζ_n^{(ε)}(m, k) will appear in its definition. In what follows we make explicit the dependence on δ, ℓ, E, τ, writing D_{n,k}^{(ε,δ,ℓ,E,τ)}, ζ_n^{(ε,δ,ℓ,E,τ)}(m, k), e_n^{(ε,δ,ℓ,E,τ)}(m), S_n^{(ε,δ,ℓ,E,τ)}(m) and S_n^{(ε,δ,ℓ,E,τ)}(m, h).

We set, for each 1 ≤ k ≤ R_0E^{−1} and 1 ≤ m ≤ ℓ^{−2},

ζ_0^{(δ,ℓ,E,τ)}(m, k) := lim_{ε→0} ε² ζ_0^{(ε,δ,ℓ,E,τ)}(m, k), I_{0,k} = I_k ∈ I_E.

By Assumption 2.2 this limit exists and its value is equal to ∫_{C_m} ∫_{I_k} ψ_0(u, i_m) du dr. The value ζ_0^{(δ,ℓ,E,τ)}(m, k) has the nice probabilistic meaning of being the limiting fraction of neurons, inside C_m, whose membrane potential is D_{0,k}^{(ε,δ,ℓ,E,τ)} = D_{0,k}^{(δ,ℓ,E,τ)} = E_k.

The function ρ_0^{(δ,ℓ,E,τ)}(u, r) is then obtained by distributing the number ζ_0^{(δ,ℓ,E,τ)}(m, k) uniformly over the rectangle I_{0,k} × C_m:

ρ_0^{(δ,ℓ,E,τ)}(u, r) := ζ_0^{(δ,ℓ,E,τ)}(m, k) / (|I_{0,k}| ℓ²), (u, r) ∈ I_{0,k} × C_m. (4.2.1)

For nδ ≤ T, we denote by D_{n,h}^{(δ,ℓ,E,τ)} and D_{n,k+δτ^{−1}}^{(δ,ℓ,E,τ)} the limits as ε → 0 of D_{n,h}^{(ε,δ,ℓ,E,τ)} and D_{n,k+δτ^{−1}}^{(ε,δ,ℓ,E,τ)}. Taking the limit as ε → 0 in the expressions (4.1.4) and (4.1.5), it follows that

D_{n,k+δτ^{−1}}^{(δ,ℓ,E,τ)} = Φ_{δ, e_{n−1}^{(δ,ℓ,E,τ)}(m)}(D_{n−1,k}^{(δ,ℓ,E,τ)}) + s_n^{(δ,ℓ,E,τ)}(m), (4.2.2)

D_{n,h}^{(δ,ℓ,E,τ)} = Φ_{(h−1)τ, e_{n−1}^{(δ,ℓ,E,τ)}(m)}(0) + s_n^{(δ,ℓ,E,τ,h)}(m),

where for each n ≥ 0 the functions e_n^{(δ,ℓ,E,τ)}(m), s_n^{(δ,ℓ,E,τ)}(m) and s_n^{(δ,ℓ,E,τ,h)}(m) are obtained by letting ε → 0:

e_n^{(δ,ℓ,E,τ)}(m) = lim_{ε→0} e_n^{(ε,δ,ℓ,E,τ)}(m),

s_n^{(δ,ℓ,E,τ)}(m) = lim_{ε→0} Ẽ_{Y^{(ε,δ,ℓ,E,τ)}(0)}^{(ε)}[S_n^{(ε,δ,ℓ,E,τ)}(m)],

s_n^{(δ,ℓ,E,τ,h)}(m) = lim_{ε→0} Ẽ_{Y^{(ε,δ,ℓ,E,τ)}(0)}^{(ε)}[S_n^{(ε,δ,ℓ,E,τ)}(m, h)].

We also need to compute the limit as ε → 0 of the numbers ζ_n^{(ε,δ,ℓ,E,τ)}(m, k). Letting ε → 0 in (4.1.6), it is clear that

ζ_n^{(δ,ℓ,E,τ)}(m, k+δτ^{−1}) = ζ_{n−1}^{(δ,ℓ,E,τ)}(m, k) e^{−δϕ(D_{n−1,k}^{(δ,ℓ,E,τ)}, i_m)}. (4.2.3)

Similarly, sending ε → 0 in (4.1.7), we have that

ζ_n^{(δ,ℓ,E,τ)}(m, h) = Σ_k ζ_{n−1}^{(δ,ℓ,E,τ)}(m, k) [e^{−(h−1)τϕ(D_{n−1,k}^{(δ,ℓ,E,τ)}, i_m)} − e^{−hτϕ(D_{n−1,k}^{(δ,ℓ,E,τ)}, i_m)}]. (4.2.4)

Now, consider the set of intervals I_n^{(δ,ℓ,E,τ)} = {I_{n,k}^{(δ,ℓ,E,τ)}}, where for h = 1, …, δτ^{−1} the intervals are of the form

I_{n,h}^{(δ,ℓ,E,τ)} = [D_{n,h}^{(δ,ℓ,E,τ)}, D_{n,h+1}^{(δ,ℓ,E,τ)}),

while for k = 1, …, |D_{n−1}^{(δ,ℓ,E,τ)}|, I_{n,k+δτ^{−1}}^{(δ,ℓ,E,τ)} is the interval centered at the value D_{n,k+δτ^{−1}}^{(δ,ℓ,E,τ)} whose length satisfies

|I_{n,k+δτ^{−1}}^{(δ,ℓ,E,τ)}| = e^{−(α+λ_m)δ} |I_{n−1,k+δτ^{−1}}^{(δ,ℓ,E,τ)}|. (4.2.5)

Finally, we set

ρ_{nδ}^{(δ,ℓ,E,τ)}(u, r) = ζ_n^{(δ,ℓ,E,τ)}(m, k) / (|I_{n,k}^{(δ,ℓ,E,τ)}| ℓ²), (u, r) ∈ I_{n,k}^{(δ,ℓ,E,τ)} × C_m. (4.2.6)

Notice that ρ_{nδ}^{(δ,ℓ,E,τ)}(u, r) is obtained by distributing the number ζ_n^{(δ,ℓ,E,τ)}(m, k) uniformly over the rectangle I_{n,k} × C_m. Furthermore, for all r ∈ [0, 1)², the function ρ_{nδ}^{(δ,ℓ,E,τ)}(·, r) is a probability density on R₊, i.e.,

1 = ∫_0^∞ ρ_{nδ}^{(δ,ℓ,E,τ)}(u, r) du.
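The normalization can be checked mechanically: building the piecewise-constant density (4.2.6) from class masses that sum to ℓ² in a square, the integral over the potential axis is 1. A small illustrative sketch (our own names, toy numbers):

```python
def density_from_masses(zeta_m, intervals):
    """Piecewise-constant density (4.2.6) in one square C_m of area ell^2:
    zeta_m[k] is the limiting mass of potential class k, intervals[k] its
    potential interval (lo, hi)."""
    ell2 = sum(zeta_m.values())        # total mass in the square equals ell^2
    def rho(u):
        for k, (lo, hi) in intervals.items():
            if lo <= u < hi:
                return zeta_m[k] / ((hi - lo) * ell2)
        return 0.0
    return rho

zeta_m = {1: 0.03, 2: 0.01}            # masses summing to ell^2 = 0.04
intervals = {1: (0.0, 0.5), 2: (0.5, 1.5)}
rho = density_from_masses(zeta_m, intervals)
integral = sum(rho((lo + hi) / 2) * (hi - lo) for lo, hi in intervals.values())
assert abs(integral - 1.0) < 1e-12     # rho(., r) is a probability density
```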

As an immediate consequence of the definition of ρ_{nδ}^{(δ,ℓ,E,τ)} and of Theorem 5, we obtain:

Corollary 4.1 (Hydrodynamic limit for the auxiliary process). Let t ∈ T and δ ∈ {2^{−q}T, q ∈ N} be such that t = δn for some positive integer n, and let φ ∈ S. Then almost surely, as ε → 0,

ν_t^{(δ,ℓ,E,τ)}(φ) → ∫_{[0,1)²} ∫_0^∞ φ(u, r) ρ_t^{(δ,ℓ,E,τ)}(u, r) du dr.

4.3 Convergence of ρ_{nδ}^{(δ,ℓ,E,τ)} as ℓ, E, τ → 0 and its Consequences

We shall next prove that the limit evolution ρ_{nδ}^{(δ,ℓ,E,τ)}(u, r) converges, as ℓ, E, τ → 0, to a function denoted by ρ^{(δ)}(u, r). Its explicit expression will be given in Proposition 4.1 below. Before going to this proposition, we shall make some considerations which motivate the definitions of all the ingredients involved in the definition of ρ^{(δ)}(u, r).

The convergence of ρ_0^{(δ,ℓ,E,τ)}(u, r) is direct. Indeed, by definition (4.2.1) we have ρ_0^{(δ,ℓ,E,τ)}(u, r) = ρ_0^{(ℓ,E)}(u, r), and by smoothness of ψ_0, defining ρ_0^{(δ)}(u, r) = ψ_0(u, r), it follows that

lim_{E,ℓ→0} ‖ρ_0^{(ℓ,E)} − ρ_0^{(δ)}‖_∞ = 0. (4.3.1)

Now, we set ū_0^{(δ)}(r) := lim_{ℓ,E,τ→0} e_0^{(δ,ℓ,E,τ)}(m) and δp_0^{(δ)}(r) := lim_{ℓ,E,τ→0} s_1^{(δ,ℓ,E,τ)}(m), where the index m = m(r, ℓ) is such that, for each ℓ, r ∈ C_m. Let us compute their explicit expressions. Recall that |I_{0,k}^{(δ,ℓ,E,τ)}| = E, so that we write E_{0,k}^{(E)} instead of E_{0,k} to stress the dependence on E. By equality (4.1.3),

e_0^{(ε,δ,ℓ,E,τ)}(m) = ε² Σ_{m′=1}^{ℓ^{−2}} Σ_{k=1}^{R_0E^{−1}} b(i_m, i_{m′}) E_{0,k}^{(E)} ζ_0^{(ε,δ,ℓ,E,τ)}(m′, k),

so that, taking the limit as ε → 0, we get from (4.2.1) that

e_0^{(δ,ℓ,E,τ)}(m) = Eℓ² Σ_{m′,k} b(i_{m′}, i_m) E_{0,k}^{(E)} ρ_0^{(δ,ℓ,E,τ)}(E_{0,k}^{(E)}, i_{m′}).

From this last expression and using the uniform convergence in (4.3.1), we immediately have

ū_0^{(δ)}(r) = ∫_{[0,1)²} ∫_0^∞ u b(r′, r) ρ_0^{(δ)}(u, r′) du dr′. (4.3.2)

We now derive the expression of δp_0^{(δ)}(r). Notice that by definition, see (3.0.3),

Ẽ_{Y^{(ε,δ,ℓ,E,τ)}(0)}^{(ε)}[S_1^{(ε,δ,ℓ,E,τ)}(m)] = ε² Σ_{m′=1}^{ℓ^{−2}} Σ_{k=1}^{R_0E^{−1}} a(i_{m′}, i_m) ζ_0^{(ε,δ,ℓ,E,τ)}(m′, k) (1 − e^{−δϕ(E_{0,k}^{(E)}, i_{m′})}).

Thus, it follows as before that

s_1^{(δ,ℓ,E,τ)}(m) = Eℓ² Σ_{m′=1}^{ℓ^{−2}} Σ_{k=1}^{R_0E^{−1}} a(i_{m′}, i_m) ρ_0^{(δ,ℓ,E,τ)}(E_{0,k}^{(E)}, i_{m′}) (1 − e^{−δϕ(E_{0,k}^{(E)}, i_{m′})}).

Therefore, using again (4.3.1) and then taking ℓ, E → 0 in the above expression, we deduce that

δp_0^{(δ)}(r) := lim_{ℓ,E,τ→0} s_1^{(δ,ℓ,E,τ)}(m) = ∫_{[0,1)²} ∫_0^∞ a(r′, r) ρ_0^{(δ)}(u, r′)(1 − e^{−δϕ(u,r′)}) du dr′, (4.3.3)

which implies that

p_0^{(δ)}(r) = ∫_{[0,1)²} ∫_0^∞ a(r′, r) ρ_0^{(δ)}(u, r′) (1 − e^{−δϕ(u,r′)})/δ du dr′.
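As a sanity check on the last formula, the factor (1 − e^{−δϕ(u,r′)})/δ converges to the rate ϕ(u, r′) as δ → 0, so p_0^{(δ)} approximates the instantaneous chemical-synapse input per unit time; a quick numerical check (our own sketch, not part of the thesis):

```python
import math

def spike_prob_rate(phi_val, delta):
    """(1 - e^{-delta*phi})/delta: the per-unit-time spiking probability
    factor appearing in the formula for p_0^{(delta)}(r)."""
    return (1.0 - math.exp(-delta * phi_val)) / delta

phi_val = 3.0
errors = [abs(spike_prob_rate(phi_val, d) - phi_val) for d in (0.1, 0.01, 0.001)]
assert errors[0] > errors[1] > errors[2]   # the error shrinks with delta
assert errors[2] < 0.01                    # close to phi for small delta
```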


Notice that, by the considerations above and (4.2.2), it follows that

D_{1,1+δτ^{−1}}^{(δ,ℓ,E,τ)} = Φ_{δ, e_0^{(δ,ℓ,E,τ)}(m)}(D_{0,1}^{(δ,ℓ,E,τ)}) + s_1^{(δ,ℓ,E,τ)}(m) = Φ_{δ, e_0^{(δ,ℓ,E,τ)}(m)}(E_{0,1}^{(E)}) + s_1^{(δ,ℓ,E,τ)}(m) → x_0(r), as E, ℓ → 0,

where x_0(r) = (λ_r/(α+λ_r))(1 − e^{−δ(α+λ_r)}) ū_0^{(δ)}(r) + δp_0^{(δ)}(r). Now, we claim that for (u, r) with u ≥ x_0(r), it holds true that

0 (r).Now, we claim that for (u, r) withu≥x0(r), it holds true that

E_{0,k}^{(E)} → Φ_{δ, ū_0^{(δ)}(r)}^{−1}(u) − e^{(α+λ_r)δ} p_0^{(δ)}(r) δ, as E, ℓ → 0, (4.3.4)

where the index k = k(u, ℓ, E) is such that, for each ℓ and E, u ∈ I_{1,k+δτ^{−1}}^{(δ,ℓ,E,τ)}. Indeed, using again (4.2.2),

|E_{0,k}^{(E)} − Φ_{δ, ū_0^{(δ)}(r)}^{−1}(u) + e^{(α+λ_r)δ} p_0^{(δ)}(r) δ| ≤ |Φ_{δ, e_0^{(δ,ℓ,E,τ)}(m)}^{−1}(E_{1,k+δτ^{−1}}^{(δ,ℓ,E,τ)}) − Φ_{δ, ū_0^{(δ)}(r)}^{−1}(u)| + |e^{(α+λ_r)δ} p_0^{(δ)}(r) δ − e^{(α+λ_m)δ} s_1^{(δ,ℓ,E,τ)}(m)|.

The second term on the right-hand side of the inequality above goes to 0 as E, ℓ → 0, thanks to (4.3.3). Now, using that by definition |u − E_{1,k+δτ^{−1}}^{(δ,ℓ,E,τ)}| → 0 as E, ℓ → 0, together with (4.3.2), it is not difficult to conclude that the first term on the right-hand side also goes to 0 as E, ℓ → 0.

Now, observe that by equations (4.2.3), (4.2.5) and (4.2.6), for (u, r) ∈ I_{1,k+δτ^{−1}}^{(δ,ℓ,E,τ)} × C_m,

ρ_δ^{(δ,ℓ,E,τ)}(u, r) = ρ_0^{(δ,ℓ,E,τ)}(Φ_{δ, e_0^{(δ,ℓ,E,τ)}(m)}^{−1}(u) − e^{δ(α+λ_m)} s_1^{(δ,ℓ,E,τ)}(m), r) e^{−δ[ϕ(E_{0,k}^{(E)}, i_m) − α − λ_m]},

where E_{0,k}^{(E)} = D_{0,k}^{(E)} is such that Φ_{δ, e_0^{(δ,ℓ,E,τ)}(m)}^{−1}(u) − e^{δ(α+λ_m)} s_1^{(δ,ℓ,E,τ)}(m) ∈ I_{0,k}^{(δ,ℓ,E,τ)}. As a consequence of this equality and (4.3.4), we deduce that

ρ_δ^{(δ)}(u, r) = ρ_0^{(δ)}(Φ_{δ, ū_0^{(δ)}(r)}^{−1}(u) − e^{δ(α+λ_r)} p_0^{(δ)}(r) δ, r) e^{−δ[ϕ(Φ_{δ, ū_0^{(δ)}(r)}^{−1}(u) − e^{δ(α+λ_r)} p_0^{(δ)}(r) δ, r) − α − λ_r]},

for u ≥ x_0(r) = (λ_r/(α+λ_r))(1 − e^{−δ(α+λ_r)}) ū_0^{(δ)}(r) + δp_0^{(δ)}(r). This formula expresses the flow of potentials of those neurons which do not spike in the interval [0, δ).

Now, take a pair (u, r) with u < x_0(r). Then, for all τ and ℓ small enough, we have (u, r) ∈ I_{1,h}^{(δ,ℓ,E,τ)} × C_m, where h = h(u, r, ℓ) and m = m(r, ℓ). By (4.2.6) and (4.2.4), it holds that

ρ_δ^{(δ)}(u, r) = E Σ_{k=1}^{R_0E^{−1}} [ρ_0^{(δ)}(E_{0,k}^{(E)}, i_m) / (|I_{1,h}^{(δ,ℓ,E,τ)}| τ^{−1})] [e^{−(h−1)τϕ(E_{0,k}^{(E)}, i_m)} − e^{−hτϕ(E_{0,k}^{(E)}, i_m)}] τ^{−1}. (4.3.5)

Let us first compute lim_{τ,ℓ→0} |I_{1,h}^{(δ,ℓ,E,τ)}| τ^{−1}. By definition we have

|I_{1,h}^{(δ,ℓ,E,τ)}| τ^{−1} = e_0^{(δ,ℓ,E,τ)}(m) (λ_m/(α+λ_m)) · e^{−hτ(α+λ_m)}(e^{τ(α+λ_m)} − 1)/τ + (s_1^{(δ,ℓ,E,τ,h+1)}(m) − s_1^{(δ,ℓ,E,τ,h)}(m))/τ,

where, for τ small enough, δ − hτ ≈ δ − (h−1)τ is the time a neuron has to spike so that at time δ its membrane potential is in I_{1,h}^{(δ,ℓ,E,τ)}. From this and (4.3.2), we may conclude that

e_0^{(δ,ℓ,E,τ)}(m) (λ_m/(α+λ_m)) · e^{−hτ(α+λ_m)}(e^{τ(α+λ_m)} − 1)/τ → λ_r e

