
DOI:10.1214/105051606000000079

©Institute of Mathematical Statistics, 2006

POSITIVE RECURRENCE OF PROCESSES ASSOCIATED TO CRYSTAL GROWTH MODELS

BY E. D. ANDJEL, M. V. MENSHIKOV$^1$ AND V. V. SISKO$^2$

Université de Provence, University of Durham and Universidade de São Paulo

We show that certain Markov jump processes associated to crystal growth models are positive recurrent when the parameters satisfy a rather natural condition.

1. Introduction. Gates and Westcott studied some Markov processes representing crystal growth models. In these models particles accumulate on a finite set of sites. The first two theorems in the present paper study the model obtained when this set of sites is one-dimensional. To define the process associated to that model, let

$$r(a,b,c)=\begin{cases}\beta_2, & \text{if } b<\min\{a,c\},\\ \beta_1, & \text{if } \min\{a,c\}\le b<\max\{a,c\},\\ \beta_0, & \text{if } \max\{a,c\}\le b,\end{cases}$$

and for each $n\ge2$, let $X^n(t)=(X_1^n(t),\dots,X_n^n(t))$ be a Markov jump process on $\{\mathbb{Z}_+\}^n$ such that

$$X_i^n(t)\to X_i^n(t)+1,\quad 1\le i\le n,\quad\text{at rate } r\bigl(X_{i-1}^n(t),X_i^n(t),X_{i+1}^n(t)\bigr).$$

While these rates are well defined if $1<i<n$, their expression for $i=1$ or $i=n$ depends on which boundary conditions we adopt. In this paper we will consider two boundary conditions:

(a) Zero boundary conditions: $X_0^n(t)=X_{n+1}^n(t)=0$ for all $t\ge0$.

(b) Periodic boundary conditions: $X_0^n(t)=X_n^n(t)$ and $X_{n+1}^n(t)=X_1^n(t)$ for all $t\ge0$.
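For readers who wish to experiment, the rate function and the zero-boundary jump rates can be sketched in Python (an illustration only; the function names and the `beta` triple are ours, not the paper's):

```python
def r(a, b, c, beta):
    """Growth rate r(a, b, c) of a column of height b with neighbors a and c;
    beta = (beta0, beta1, beta2) with beta0 < beta1 < beta2."""
    if b < min(a, c):
        return beta[2]
    if b < max(a, c):   # here min(a, c) <= b < max(a, c)
        return beta[1]
    return beta[0]      # max(a, c) <= b

def rates(x, beta):
    """Jump rates of the n columns of X^n under zero boundary conditions,
    i.e. with X_0 = X_{n+1} = 0."""
    padded = (0,) + tuple(x) + (0,)
    return [r(padded[i - 1], padded[i], padded[i + 1], beta)
            for i in range(1, len(x) + 1)]
```

For instance, `rates((0, 0, 0), (1.0, 2.0, 3.0))` returns `[1.0, 1.0, 1.0]`: a flat configuration grows everywhere at the slow rate $\beta_0$.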

We now define, for $1\le i\le n$,

$$\nabla_iX^n(t)=X_i^n(t)-X_{i+1}^n(t).$$

Received April 2005; revised November 2005.

$^1$Supported in part by FAPESP (2004/13610-2).

$^2$Supported by FAPESP (2003/00847-1).

AMS 2000 subject classifications. Primary 60J20; secondary 60G55, 60K25, 60J10, 80A30.

Key words and phrases. Positive recurrent, Markov chain, invariant measure, exponentially decaying tails.


Let

$$\tilde r(u,v)=\begin{cases}\beta_2, & \text{if } \min\{u,v\}>0,\\ \beta_1, & \text{if } \min\{u,v\}\le0<\max\{u,v\},\\ \beta_0, & \text{if } \max\{u,v\}\le0.\end{cases}$$

Then $r(a,b,c)=\tilde r(a-b,c-b)$. Therefore, the process $(\nabla_1X^n(t),\dots,\nabla_nX^n(t))$ is Markovian too.
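The identity $r(a,b,c)=\tilde r(a-b,c-b)$ is easy to confirm by an exhaustive check on a small grid (a throwaway verification script; all names are ours):

```python
def r(a, b, c, beta):
    if b < min(a, c):
        return beta[2]
    if b < max(a, c):
        return beta[1]
    return beta[0]

def r_tilde(u, v, beta):
    """Rate written as a function of the height differences u = a - b, v = c - b."""
    if min(u, v) > 0:
        return beta[2]
    if max(u, v) > 0:   # min(u, v) <= 0 < max(u, v)
        return beta[1]
    return beta[0]      # max(u, v) <= 0

# exhaustive check of r(a, b, c) == r_tilde(a - b, c - b) on a small grid
beta = (1.0, 2.0, 3.0)
assert all(r(a, b, c, beta) == r_tilde(a - b, c - b, beta)
           for a in range(6) for b in range(6) for c in range(6))
```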

Gates and Westcott in [4] considered this process under periodic boundary conditions and parameters $\beta$ such that

$$0<\beta_0<\beta_1<\beta_2.\tag{1.1}$$

There, they proved that it is positive recurrent if either $n\le4$ or $(n-1)^2\beta_0<\beta_2$.

In an earlier paper [3], the same authors studied the case in which the parameters $\beta$ satisfy (1.1) and $\beta_1=\frac12(\beta_0+\beta_2)$. For this case, they gave, for any $n$, an explicit expression for a stationary measure of the process, thus proving its positive recurrence.

In this paper we show that (1.1) is a sufficient condition for positive recurrence for all values of $n$. More explicitly, we prove the following:

THEOREM 1.1. Let $\beta_0$, $\beta_1$ and $\beta_2$ satisfy (1.1). Then, for periodic boundary conditions and for any $n\ge2$, the process $(\nabla_1X^n(t),\dots,\nabla_nX^n(t))$ is positive recurrent and its unique invariant measure has exponentially decaying tails.

THEOREM 1.2. Let $\beta_0$, $\beta_1$ and $\beta_2$ satisfy (1.1). Then, for zero boundary conditions and for any $n\ge2$, the process $(\nabla_1X^n(t),\dots,\nabla_{n-1}X^n(t))$ is positive recurrent and its unique invariant measure has exponentially decaying tails.

In the statement of the last theorem the process considered is

$$(\nabla_1X^n(t),\dots,\nabla_{n-1}X^n(t))$$

rather than

$$(\nabla_1X^n(t),\dots,\nabla_nX^n(t)),$$

because for zero boundary conditions we clearly have $\lim_{t\to\infty}\nabla_nX^n(t)=\infty$. Note that the former process is Markovian too.

In Section 3 we prove that all the processes considered, starting from $(0,\dots,0)$, are such that

$$\sup_{i=1,\dots,n-1,\ t\ge0}P\bigl(|\nabla_iX^n(t)|\ge k\bigr)$$

decays exponentially in $k$. It is then easy to conclude that Theorems 1.1 and 1.2 hold. The proof is first given for zero boundary conditions and relies on an induction on $n$, some coupling arguments and Doléans exponential martingales.

When $\beta_1=\frac12(\beta_0+\beta_2)$, the invariant measure given in [3] is such that the distribution of $\nabla_iX^n$ is the same for all $n$. It is therefore natural to ask whether, under condition (1.1), there is a lower bound on the constant associated with the exponential decay of the invariant measure that is valid for all $n$. Unfortunately, our proof does not provide such a bound and its existence remains an open problem.

In Section 2 we prove the positive recurrence of a class of discrete time Markov chains. The precise result is stated in Theorem 1.3. A consequence of this theorem is that the processes considered in Theorems 1.1 and 1.2 are positive recurrent (see Examples 1 and 2 below). This proof does not give any information about the decay of the invariant measure, but we believe that many readers will be interested in it because it is much shorter than the proof given in Section 3 and because it treats many models which are not covered by Theorems 1.1 and 1.2 (see Example 3).

To describe the class of Markov chains considered in Section 2, let $G=(V,E)$ be a connected nonoriented finite graph, where $V=\{1,\dots,n\}$. The elements of $V$ are seen as different columns, on which particles accumulate with time. A Markov chain $\zeta_t$, $t\ge0$, from that class will have as state space $Y=\mathbb{Z}^{n-1}$ and its transition probability matrix will be denoted by $Q$. The $i$th coordinate of the chain represents the difference between the number of particles in columns $i$ and $n$.

In Theorem 1.3 we impose some conditions on $Q$, using the following functions:

$$f_{ij}(y)=\begin{cases}y(i)-y(j), & \text{if } i<n,\ j<n,\\ y(i), & \text{if } i<n,\ j=n,\\ -y(j), & \text{if } i=n,\ j<n,\end{cases}$$

where $y\in Y$ and $1\le i,j\le n$. Hence, in all cases $f_{ij}(y)$ is the algebraic difference between the number of particles in columns $i$ and $j$.

For $i=1,\dots,n$, let $e_i\in Y$ be the vector that we have to add to an element of $Y$ when column $i$ increases by one unit, that is,

$$e_1=(1,0,\dots,0),\quad\dots,\quad e_{n-1}=(0,0,\dots,1),\quad e_n=(-1,\dots,-1,-1).$$
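In code, with column $n$ serving as the implicit reference of height $0$, the functions $f_{ij}$ and the vectors $e_i$ can be sketched as follows (illustrative helper names, not the paper's):

```python
def f(i, j, y):
    """Algebraic height difference f_ij(y) between columns i and j (1-based);
    y is a point of Y = Z^(n-1) and column n is the implicit reference, height 0."""
    n = len(y) + 1
    yi = y[i - 1] if i < n else 0
    yj = y[j - 1] if j < n else 0
    return yi - yj

def e(i, n):
    """Vector e_i added to y in Y when column i gains one particle."""
    if i < n:
        return tuple(1 if k == i - 1 else 0 for k in range(n - 1))
    return (-1,) * (n - 1)   # column n grows: every difference to it drops by 1
```

One can check the additivity $f_{ik}(y)=f_{ij}(y)+f_{jk}(y)$, which the arguments of Section 2 use repeatedly.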

We now state our third theorem:

THEOREM 1.3. Consider a graph $G=(V,E)$ and a Markov chain $\zeta_t$ with state space $Y=\mathbb{Z}^{n-1}$. Let $\delta$ and $M$ be constants such that $\delta,M\in(0,1)$. Suppose that, for the transition probability matrix $Q$ of $\zeta_t$, the following conditions hold:

(i) $Q(x,y)=0$ unless $y=x$ or $y=x+e_i$ for some $1\le i\le n$.

(ii) $\inf_{y\in Y}Q(y,y+e_i)\ge\delta$ for all $1\le i\le n$.

(iii) Suppose that $y\in Y$, $\{i,j\}\in E$, and $f_{ij}(y)>0$. Then $Q(y,y+e_i)\le Q(y,y+e_j)$.

(iv) Suppose that $y\in Y$, $i\in V$ and, for all $\ell$ such that $\{i,\ell\}\in E$, we have $f_{i\ell}(y)>0$. Then

$$Q(y,y+e_i)\le Q(y,y+e_\ell)-M\quad\text{for any }\ell\text{ as above.}$$

Then, the Markov chain $\zeta_t$ is positive recurrent.

We end this section with three examples of Markov chains to which this theorem can be applied. All these examples can be seen as crystal growth models. The first and second examples treat the embedded chains appearing in Theorems 1.2 and 1.1, respectively. The third example does not require the graph to have a one-dimensional structure.

EXAMPLE 1. Adopting the same notation as in Theorem 1.2, consider the process

$$Z(t)=\bigl(Z_1(t),\dots,Z_{n-1}(t)\bigr),\quad\text{where}\quad Z_i(t)=\sum_{k=1}^{n-i}\nabla_{n-k}X^n(t),\qquad i=1,\dots,n-1.$$

This process is Markovian because it is obtained by applying a one-to-one map to another Markovian process. The embedded Markov chain of the process $Z(t)$ has as state space $Y=\mathbb{Z}^{n-1}$ and its probability transition matrix $Q$ is such that, for all $y\in Y$, we have $Q(y,z)=0$ unless $z=y+e_i$ for some $i=1,\dots,n$.

Now, let $G=(V,E)$ be given by

$$V=\{1,\dots,n\},\qquad E=\bigl\{\{i,i+1\}:i=1,\dots,n-1\bigr\};$$

then, using (1.1), it is easy to check that this chain satisfies all the hypotheses of Theorem 1.3. Hence, it is positive recurrent. This implies that the process $Z(t)$ is positive recurrent because its rates are bounded. Since this process is obtained by applying a one-to-one map to the process $(\nabla_1X^n(t),\dots,\nabla_{n-1}X^n(t))$, this last process is positive recurrent too.

EXAMPLE 2. In this example $(\nabla_1X^n(t),\dots,\nabla_nX^n(t))$ is as in Theorem 1.1. We start by noting that in this case $\sum_{i=1}^n\nabla_iX^n(t)\equiv0$. Therefore, $(\nabla_1X^n(t),\dots,\nabla_{n-1}X^n(t))$ is also Markovian and it suffices to show that this last process is positive recurrent. This is done almost exactly as in the previous example, the only difference being that now

$$E=\bigl\{\{i,i+1\}:i=1,\dots,n-1\bigr\}\cup\bigl\{\{1,n\}\bigr\}.$$


EXAMPLE 3. Suppose that condition (1.1) is satisfied and that $\beta_2\le1$. Then, let $G=(V,E)$ be any admissible graph, that is, it is nonoriented, finite and connected. As in the other examples, we take $V=\{1,\dots,n\}$. Then, define the matrix of transition probabilities as follows: for $y\in Y$ and $i=1,\dots,n$, let

$$Q(y,y+e_i)=\begin{cases}\beta_0/n, & \text{if } f_{i\ell}(y)>0\ \text{for all }\ell\text{ such that }\{i,\ell\}\in E,\\ \beta_2/n, & \text{if } f_{i\ell}(y)\le0\ \text{for all }\ell\text{ such that }\{i,\ell\}\in E,\\ \beta_1/n, & \text{otherwise,}\end{cases}$$

and let

$$Q(y,y)=1-\sum_{i=1}^nQ(y,y+e_i).$$

It is now easy to see that the conditions of Theorem 1.3 are satisfied with $M=(\beta_1-\beta_0)/n$ and $\delta=\beta_0/n$.
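A concrete instance of this transition matrix is easy to compute row by row (a sketch of Example 3's construction; the edge-set encoding and all names are ours):

```python
def q_row(y, edges, n, beta):
    """Probabilities Q(y, y + e_i), i = 1..n, of Example 3 for a state y in Z^(n-1);
    edges is a set of frozensets {i, j}; beta = (beta0, beta1, beta2), beta2 <= 1.
    Assumes the graph is connected, so every column has at least one neighbor."""
    def f(i, j):                       # height difference; column n = reference 0
        yi = y[i - 1] if i < n else 0
        yj = y[j - 1] if j < n else 0
        return yi - yj

    row = []
    for i in range(1, n + 1):
        diffs = [f(i, l) for l in range(1, n + 1) if frozenset((i, l)) in edges]
        if all(d > 0 for d in diffs):
            row.append(beta[0] / n)    # column i strictly above all its neighbors
        elif all(d <= 0 for d in diffs):
            row.append(beta[2] / n)    # column i below or level with all neighbors
        else:
            row.append(beta[1] / n)
    return row                         # Q(y, y) = 1 - sum(row)
```

On the path graph 1–2–3 with $y=(2,1)$ (heights $2,1,0$) and $\beta=(0.2,0.3,0.6)$, the row is $[0.2/3,\,0.3/3,\,0.6/3]$ and $Q(y,y)$ keeps the remaining mass.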

2. Positive recurrence. To prove Theorem 1.3, we will apply the following variation of Foster's theorem:

THEOREM 2.1. Let $\zeta_t$ be an irreducible Markov chain on a countable state space $Y$. Then, $\zeta_t$ is positive recurrent if there exist a positive function $f$ defined on $Y$, a bounded strictly positive integer-valued function $k$ also defined on $Y$ and a finite subset $A$ of $Y$ such that the following inequalities hold:

$$E\bigl[f\bigl(\zeta_{t+k(\zeta_t)}\bigr)-f(\zeta_t)\mid\zeta_t=y\bigr]\le-1,\qquad y\notin A,\tag{2.1}$$

$$E\bigl[f\bigl(\zeta_{t+k(\zeta_t)}\bigr)\mid\zeta_t=y\bigr]<\infty,\qquad y\in A.\tag{2.2}$$

This theorem follows easily from Theorem 2.2.4 of [2]. Although in that reference the Markov chain is also assumed to be aperiodic, this extra condition is not needed if we are only interested in positive recurrence instead of ergodicity.

Throughout this section we adopt the same notation as in Theorem 1.3, assume its hypotheses and let

$$p=|E|\quad\text{(i.e., the cardinality of }E\text{).}$$

We start by defining the functions $f$ and $k$ to which we will apply Theorem 2.1. Let

$$f(y)=\sum_{\{i,j\}\in E}f_{ij}^2(y).$$

We need the following lemmas.

LEMMA 2.1. For all $\{i,j\}\in E$ and all $y\in Y$, we have

$$E\bigl[f_{ij}^2(\zeta_{t+1})-f_{ij}^2(\zeta_t)\mid\zeta_t=y\bigr]\le1.$$


PROOF. The lemma is a consequence of the following observations:

(i) $|f_{ij}(\zeta_{t+1})-f_{ij}(\zeta_t)|\le1$ almost surely,

(ii) from condition (iii) of Theorem 1.3, it follows that, for $\zeta_t$ such that $|f_{ij}(\zeta_t)|\ge1$, we have

$$P\bigl(|f_{ij}(\zeta_{t+1})|=|f_{ij}(\zeta_t)|+1\bigr)\le P\bigl(|f_{ij}(\zeta_{t+1})|=|f_{ij}(\zeta_t)|-1\bigr).$$

LEMMA 2.2. Suppose $C\ge(1+p)/(2M)$ and let

$$D=\bigl\{y\in Y:|f_{ij}(y)|>C\ \forall\{i,j\}\in E\bigr\}.$$

Then inequality (2.1) holds for $k\equiv1$ and any $y\in D$.

In words, the set $D$ is the set of all states of the Markov chain $\zeta_t$ such that, for every $i,j\in\{1,\dots,n\}$ such that $i$ and $j$ are neighboring columns, the absolute value of the difference between the heights of columns $i$ and $j$ is greater than $C$.

PROOF OF LEMMA 2.2. For a given $y\in D$, let $u$ be in the set of the columns that have the maximal number of particles and let $v$ be such that $\{u,v\}\in E$. We write

$$E\bigl[f(\zeta_{t+1})-f(\zeta_t)\mid\zeta_t=y\bigr]=E\bigl[f_{uv}^2(\zeta_{t+1})-f_{uv}^2(\zeta_t)\mid\zeta_t=y\bigr]+\sum_{\{i,j\}\in E,\,\{i,j\}\ne\{u,v\}}E\bigl[f_{ij}^2(\zeta_{t+1})-f_{ij}^2(\zeta_t)\mid\zeta_t=y\bigr].$$

It is easy to see that $f_{uv}(y)=|f_{uv}(y)|>C$. From condition (iv) of Theorem 1.3, it follows that the first term on the right-hand side is bounded above by $-2CM+1$. By Lemma 2.1, the second term is less than or equal to $p-1$. Putting the bounds together, we complete the proof.

Note that the number of elements in the set $Y\setminus D$ is infinite. Therefore, we are not yet ready to apply Theorem 2.1 and finish the proof of Theorem 1.3.

Before the formulation of the last lemma we need some notation. Let

$$0<C_1<C_2<\cdots<C_p<\infty.$$

The values of these constants will be determined later. Let

$$D_0=\bigl\{y\in Y:|f_{ij}(y)|<C_p\ \forall\{i,j\}\in E\bigr\},$$
$$D_1=\bigl\{y\in Y:|f_{ij}(y)|\ge C_1\ \forall\{i,j\}\in E\bigr\},$$
$$D_m=D_m'\cap D_m'',\qquad m=2,\dots,p,$$

where

$$D_m'=\bigl\{y\in Y:\text{there exists }\{u,v\}\in E\text{ such that }|f_{uv}(y)|\ge C_m\bigr\}$$

and

$$D_m''=\bigl\{y\in Y:\text{for any }\{i,j\}\in E\text{ we have }|f_{ij}(y)|\notin[C_{m-1},C_m)\bigr\}.$$

Since $E$ has $p$ elements, we have

$$Y=D_0\cup D_1\cup D_2\cup\cdots\cup D_p.$$

The following lemma constitutes the main step in the proof of Theorem 1.3.

LEMMA 2.3. There exist positive constants $C_1,\dots,C_p$ and positive integers $k_1,\dots,k_p$ such that, for every $m\in\{1,\dots,p\}$, inequality (2.1) holds for $y\in D_m$ if $k(y)=k_m$.

PROOF. We take an increasing sequence of integers $C_m$ such that

$$C_1\ge\frac{1+p}{2M},\qquad C_m\ge\max\biggl\{pC_{m-1},\ \frac{1+p+p^2C_{m-1}}{2M\delta^{pC_{m-1}}}\biggr\},\quad m=2,\dots,p,$$

and let

$$k_m=\begin{cases}1, & \text{if } m=1,\\ 1+pC_{m-1}, & \text{if } m=2,\dots,p.\end{cases}$$

Recall that the sets $D_m$ depend on the constants $C_m$. By Lemma 2.2, inequality (2.1) holds for $y\in D_1$ if $k(y)=k_1$.

Suppose that $y\in D_m$ for some $m=2,\dots,p$. Let $\ell$ be in the set of the columns that have the maximal number of particles. Consider the subgraph $\tilde G=(V,\tilde E)$ of the graph $G=(V,E)$, where

$$\tilde E=\bigl\{\{i,j\}\in E:|f_{ij}(y)|<C_{m-1}\bigr\}.$$

Denote by $O$ the largest connected subgraph of $\tilde G$ containing $\ell$.

Let us prove that $O\ne V$. Assume the converse. Recall that $|E|=p$. Then, for any $\{i,j\}\in E$, we have $|f_{ij}(y)|<pC_{m-1}\le C_m$. This contradicts $y\in D_m$.

Also, since the graph $G$ is connected, we see that there exists $\{u,v\}\in E$ such that $u\in O$, $v\notin O$, and $|f_{uv}(y)|\ge C_{m-1}$. Since $y\in D_m''$, we have $|f_{uv}(y)|\ge C_m$.

Let us prove that $f_{uv}(y)\ge C_m$. Assume the converse. Then we have $f_{uv}(y)\le-C_m$, and therefore,

$$f_{\ell v}(y)=f_{\ell u}(y)+f_{uv}(y)\le(p-1)C_{m-1}-C_m<0.$$

This contradicts the fact that the column $\ell$ has the largest number of particles.


Suppose that $\tilde y$ is obtained from $y$ by adding $pC_{m-1}$ particles to the column $u$. It is easy to see that for $\tilde y$ the column $u$ has at least one more particle than any other column.

To complete the proof of the lemma, write

$$E\bigl[f(\zeta_{t+k_m})-f(\zeta_t)\mid\zeta_t=y\bigr]=E\bigl[f(\zeta_{t+k_m-1})-f(\zeta_t)\mid\zeta_t=y\bigr]+\sum_{z\in Y}E\bigl[f(\zeta_{t+k_m})-f(\zeta_{t+k_m-1})\mid\zeta_{t+k_m-1}=z\bigr]Q_{yz}^{(k_m-1)}.$$

By Lemma 2.1, the first term of the right-hand side is bounded above by $p(k_m-1)$.

For the second term write

$$\sum_{z\in Y}E\bigl[f(\zeta_{t+k_m})-f(\zeta_{t+k_m-1})\mid\zeta_{t+k_m-1}=z\bigr]Q_{yz}^{(k_m-1)}=\sum_{z\in Y}\sum_{\{i,j\}\in E,\,\{i,j\}\ne\{u,v\}}E\bigl[f_{ij}^2(\zeta_{t+k_m})-f_{ij}^2(\zeta_{t+k_m-1})\mid\zeta_{t+k_m-1}=z\bigr]Q_{yz}^{(k_m-1)}+\sum_{z\in Y}E\bigl[f_{uv}^2(\zeta_{t+k_m})-f_{uv}^2(\zeta_{t+k_m-1})\mid\zeta_{t+k_m-1}=z\bigr]Q_{yz}^{(k_m-1)},$$

where, by Lemma 2.1, the first term of the right-hand side is bounded above by $p-1$, while the second term can be written as

$$\sum_{z\in Y,\,z\ne\tilde y}E\bigl[f_{uv}^2(\zeta_{t+k_m})-f_{uv}^2(\zeta_{t+k_m-1})\mid\zeta_{t+k_m-1}=z\bigr]Q_{yz}^{(k_m-1)}+E\bigl[f_{uv}^2(\zeta_{t+k_m})-f_{uv}^2(\zeta_{t+k_m-1})\mid\zeta_{t+k_m-1}=\tilde y\bigr]Q_{y\tilde y}^{(k_m-1)},$$

which, by Lemma 2.1 and conditions (iii) and (iv) of Theorem 1.3, is less than or equal to

$$\sum_{z\in Y,\,z\ne\tilde y}Q_{yz}^{(k_m-1)}+(1-2C_mM)Q_{y\tilde y}^{(k_m-1)}\le1-2C_mM\delta^{k_m-1}.$$

The last inequality follows from condition (ii) of Theorem 1.3 and the fact that it is possible for $\zeta_t$ to reach $\tilde y$ from $y$ in $k_m-1$ steps.

Putting these bounds together, we get

$$E\bigl[f(\zeta_{t+k_m})-f(\zeta_t)\mid\zeta_t=y\bigr]\le p(k_m-1)+(p-1)+1-2C_mM\delta^{k_m-1},$$

which is less than or equal to $-1$ by our choice of the sequences $k_m$ and $C_m$.

We can now complete the proof of Theorem 1.3: Let $A=D_0$, which is finite. Then define the function $k$ as follows: $k(y)=1$ if $y\in D_0\cup D_1$ and $k(y)=k_m$ if $y\in D_m\setminus(D_0\cup D_1\cup\cdots\cup D_{m-1})$, $m=2,\dots,p$.

Inequality (2.2) follows from the fact that the function $k$ is bounded and the transition matrix $Q$ is such that, for any $x$, $Q(x,y)=0$ except for finitely many values of $y$. Since inequality (2.1) follows from Lemma 2.3, we see that the hypotheses of Theorem 2.1 are fulfilled.

REMARK. Theorem 1.3 is still valid if we replace condition (iv) by the following condition:

(iv$'$) Suppose that $y\in Y$, $i\in V$ and, for all $\ell$ such that $\{i,\ell\}\in E$, we have $f_{i\ell}(y)<0$. Then

$$Q(y,y+e_i)\ge Q(y,y+e_\ell)+M\quad\text{for any }\ell\text{ as above.}$$

The proof of the theorem requires the following minor modifications:

(a) In the proof of Lemma 2.2 the column $u$ is in the set of the columns that have the minimal number of particles.

(b) In the proof of Lemma 2.3 let $\ell$ be one of the columns having the smallest number of particles. Then, $f_{uv}(y)\le-C_m$. Suppose that $\tilde y$ is obtained from $y$ by adding at least $pC_{m-1}$ particles to each column that is a neighbor of $u$ and belongs to the set $O$. Then for $\tilde y$ the column $u$ has at least one particle less than any neighboring column. Also, we take an increasing sequence of integers $C_m$ such that

$$C_1\ge\frac{1+p}{2M},\qquad C_m\ge\max\biggl\{pC_{m-1},\ \frac{1+p+p^3C_{m-1}}{2M\delta^{p^2C_{m-1}}}\biggr\},\quad m=2,\dots,p,$$

and let

$$k_m=\begin{cases}1, & \text{if } m=1,\\ 1+p^2C_{m-1}, & \text{if } m=2,\dots,p.\end{cases}$$

3. Exponential decay of the invariant distribution. We start this section by introducing some notation.

For $0\le s\le t$ and $1\le j\le n$, let

$$\Delta_{s,t}^j(X^n)=X_j^n(t)-X_j^n(s),$$

and for $1\le j\le n-1$, let

$$D_j\bigl(X^n(t)\bigr)=\sup_{1\le i\le j}\nabla_iX^n(t).$$

Let $P_X$ be the probability associated to this process with initial condition $X$. If this initial condition is $X^n(0)=(0,\dots,0)$, we will write $P_0$.


Throughout this section $T_a$, $T_b$ and $T_c$ will be independent Poisson random variables with parameters $a$, $b$ and $c$, respectively, and $C_i$ and $K_i$ will be strictly positive constants.

Coupling techniques will often be used in this section. These techniques allow us to compare two different processes, or two versions of the same process starting from different initial configurations, and we will assume that the reader is familiar with them.

3.1. Construction of the processes, coupling. The inductive argument used in this section requires a simultaneous construction, on the same probability space, of the processes for all values of $n$ and for the two types of boundary conditions. This is done as follows:

Let $(N_{0,j},N_{1,j},N_{2,j};\ j\in\mathbb{N})$ be a collection of independent Poisson processes whose parameters are $\beta_0$ for $N_{0,j}$, $\beta_1-\beta_0$ for $N_{1,j}$ and $\beta_2-\beta_1$ for $N_{2,j}$. We let $X_j^n(\cdot)$ increase by 1 at time $t$ if one of the following conditions is satisfied:

(1) $r(X_{j-1}^n(t),X_j^n(t),X_{j+1}^n(t))=\beta_0$ and $N_{0,j}$ jumps at time $t$,
(2) $r(X_{j-1}^n(t),X_j^n(t),X_{j+1}^n(t))=\beta_1$ and $N_{0,j}+N_{1,j}$ jumps at time $t$,
(3) $r(X_{j-1}^n(t),X_j^n(t),X_{j+1}^n(t))=\beta_2$ and $N_{0,j}+N_{1,j}+N_{2,j}$ jumps at time $t$.
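This graphical construction can be emulated numerically. The sketch below replaces the three superposed Poisson clocks by an equivalent thinning scheme: every column is offered candidate ticks at the maximal rate $\beta_2$ and a tick is accepted with probability $r(\cdot)/\beta_2$. This reproduces the law of the zero-boundary process, but not the paper's simultaneous coupling across different $n$; names and structure are ours:

```python
import random

def simulate(n, beta, t_max, seed=0):
    """Simulate X^n with zero boundary conditions up to time t_max by thinning:
    candidate ticks arrive at total rate n * beta2, hit a uniform column j, and
    are accepted with probability r(X_{j-1}, X_j, X_{j+1}) / beta2, so accepted
    ticks of column j occur at exactly its current rate."""
    rng = random.Random(seed)
    x = [0] * n
    t = rng.expovariate(n * beta[2])
    while t <= t_max:
        j = rng.randrange(n)                  # the tick hits a uniform column
        a = x[j - 1] if j > 0 else 0          # zero boundary on the left
        c = x[j + 1] if j < n - 1 else 0      # zero boundary on the right
        if x[j] < min(a, c):
            rate = beta[2]
        elif x[j] < max(a, c):
            rate = beta[1]
        else:
            rate = beta[0]
        if rng.random() < rate / beta[2]:     # thinning acceptance
            x[j] += 1
        t += rng.expovariate(n * beta[2])
    return x
```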

The following lemma is an immediate consequence of our simultaneous construction of all processes. It allows us to compare two processes with different, and not necessarily deterministic, initial conditions.

LEMMA 3.1. Suppose $X^n(\cdot)$ and $\tilde X^n(\cdot)$ are versions of the process with zero boundary conditions constructed as above (with the same Poisson processes):

(a) If $P(X_j^n(0)\ge\tilde X_j^n(0))=1$ for all $1\le j\le n$, then

$$P\bigl(X_j^n(t)\ge\tilde X_j^n(t),\ \forall t\ge0\bigr)=1\quad\text{for all }1\le j\le n.$$

(b) If $P(X_j^n(0)-\tilde X_j^n(0)=k)=1$ for all $1\le j\le n$, then

$$P\bigl(X_j^n(t)-\tilde X_j^n(t)=k,\ \forall t\ge0\bigr)=1\quad\text{for all }1\le j\le n.$$

REMARK. Suppose that $j<n$, $t>0$ and let $X\in\{\mathbb{Z}_+\}^n$ be such that $\nabla_jX\ge0$. Let $X^n(\cdot)$ be the process with zero boundary conditions starting from $X$ and let $X^j(\cdot)$ be the process with zero boundary conditions starting from the restriction of $X$ to the first $j$ coordinates. Then, with the above simultaneous construction of $X^n(\cdot)$ and $X^j(\cdot)$, we have

$$\{\nabla_jX^n(s)\ge0,\ \forall\,0\le s\le t\}\subset\{X_i^n(s)=X_i^j(s),\ \forall\,0\le s\le t,\ i\le j\}.$$


3.2. An auxiliary process and some exponential martingales. We introduce an auxiliary process on $\{\mathbb{Z}_+\}^r\times\{\mathbb{Z}_+\}^r\times\{\mathbb{Z}_+\}^{r+1}$, which we denote

$$(X^r,Z^r,X^{r+1}).$$

The first and third marginals of this process evolve as processes with zero boundary conditions in $\{\mathbb{Z}_+\}^r$ and in $\{\mathbb{Z}_+\}^{r+1}$, respectively, and their jumps follow the same collection of Poisson processes $(N_{0,j},N_{1,j},N_{2,j};\ j\in\mathbb{N})$. The second marginal performs the same jumps as $X^r$ and, at any given time $t$, also increases all its coordinates simultaneously by one unit when one of the following two conditions is satisfied:

(i) $X_{r+1}^{r+1}(t)>X_r^{r+1}(t)$, $X_{r-1}^{r+1}(t)\le X_r^{r+1}(t)$ and the process $N_{1,r}$ jumps at time $t$.

(ii) $X_{r+1}^{r+1}(t)>X_r^{r+1}(t)$, $X_{r-1}^{r+1}(t)>X_r^{r+1}(t)$ and the process $N_{2,r}$ jumps at time $t$.

When any of these conditions is satisfied, it may happen that the $r$th coordinate of the $X^r$ process also jumps at time $t$. If that is the case, it is understood that the $Z^r$ process performs both jumps. This means that $Z_r^r$ increases by two units, while all the other coordinates of $Z^r$ increase by one unit.

The following lemma is a consequence of the construction of this process.

LEMMA 3.2. Suppose the auxiliary process starts with all its $3r+1$ coordinates equal to $0$; then

$$P\bigl(X_i^r(t)\le X_i^{r+1}(t)\le Z_i^r(t),\ \forall1\le i\le r,\ t\ge0\bigr)=1\tag{3.1}$$

and

$$P\bigl(Z_i^r(t)-X_i^r(t)=Z_1^r(t)-X_1^r(t),\ \forall2\le i\le r,\ t\ge0\bigr)=1.\tag{3.2}$$

PROOF. One just needs to check that if the initial condition is such that

$$X_i^r(0)\le X_i^{r+1}(0)\le Z_i^r(0)\qquad\forall1\le i\le r$$

and

$$Z_i^r(0)-X_i^r(0)=Z_1^r(0)-X_1^r(0)\qquad\forall2\le i\le r,$$

then no jump of the process can break any of these inequalities. This is straightforward except for the inequality involving the $r$th coordinate of $Z^r$ and $X^{r+1}$. However, since the coordinates of $X^{r+1}$ can only make jumps of one unit, we may assume that $X_r^{r+1}=Z_r^r$ and, when this happens, conditions (i) and (ii) guarantee that any jump of the $r$th coordinate of $X^{r+1}$ due to its $(r+1)$st coordinate is simultaneous to a jump of all the coordinates of $Z^r$.

We will later need the following:


LEMMA 3.3. Suppose the auxiliary process starts with all its $3r+1$ coordinates equal to $0$, let

$$u_s=r\bigl(X_{r-1}^{r+1}(s),X_r^{r+1}(s),X_{r+1}^{r+1}(s)\bigr)-r\bigl(X_{r-1}^{r+1}(s),X_r^{r+1}(s),0\bigr),$$

and let

$$v_s=r\bigl(X_r^{r+1}(s),X_{r+1}^{r+1}(s),0\bigr);$$

then, for all $\alpha\ge0$,

$$(1+\alpha)^{Z_r^r(t)-X_r^r(t)}\exp\biggl(-\alpha\int_0^tu_s\,ds\biggr)\tag{3.3}$$

and

$$(1+\alpha)^{X_{r+1}^{r+1}(t)}\exp\biggl(-\alpha\int_0^tv_s\,ds\biggr)\tag{3.4}$$

are martingales.

PROOF. We will apply Theorem T2 on page 165 of [1] and use as much as possible the notation of that reference. Consider the point process $Z_r^r(t)-X_r^r(t)$ and let $\mathcal{F}_t$ be the $\sigma$-algebra generated by the Poisson processes $N_{i,j}$, $0\le i\le2$, $1\le j\le r+1$. Then, the intensity of this point process is the $\mathcal{F}_t$-predictable process $\lambda_t=\lim_{s\uparrow t}u_s$. Let $\mu_s\equiv1+\alpha$. Since $\lambda$ and $\mu$ are bounded, condition (2.1) in that reference is satisfied. It then follows from the referred theorem that

$$(1+\alpha)^{Z_r^r(t)-X_r^r(t)}\exp\biggl(-\alpha\int_0^tu_s\,ds\biggr)\tag{3.5}$$

is a local martingale. Since for $0\le s\le t$, $Z_r^r(s)-X_r^r(s)$ is increasing, positive and bounded by a Poisson random variable of parameter $(\beta_2-\beta_0)t$, it is also a martingale. A similar argument shows that (3.4) is a martingale too.

3.3. Zero boundary conditions. In this subsection we consider the process with zero boundary conditions. In several parts of our proofs the following observations will play an important role. Suppose $n\in\mathbb{N}$, $0<i<n$, and $0\le s<t$; then

$$\{\nabla_1X^n(u)\ge0,\ \forall u\in[s,t]\}\subset\{\Delta_{s,t}^1(X^n)=N_{0,1}(t)-N_{0,1}(s)\}\tag{3.6}$$

and

$$\{\nabla_iX^n(u)>0,\ \forall u\in[s,t]\}\subset\{\Delta_{s,t}^{i+1}(X^n)\ge N_{0,i+1}(t)-N_{0,i+1}(s)+N_{1,i+1}(t)-N_{1,i+1}(s)\}.\tag{3.7}$$

The strategy of the proof is to proceed by induction on the number of coordinates. The initial lemma provides the result we need when we have two coordinates. The second lemma takes advantage of the boundary effects to show that the first and last coordinates are unlikely to be much higher than their respective neighbors. This provides the initial statement for the induction in index $i$ performed in the proof of Theorem 3.1.


LEMMA 3.4. There exist $\alpha_2,\gamma_2>0$, and $0<d_2<\beta_1$ such that

$$P_0\bigl(|\nabla_1X^2(t)|\ge k\bigr)\le\exp(-\alpha_2k)\qquad\forall k\in\mathbb{N},\ t\ge0,$$

and

$$P_0\bigl(X_2^2(t)\ge d_2t\bigr)\le\exp(-\gamma_2t)\qquad\forall t\ge0.$$

PROOF. First note that $|\nabla_1X^2(t)|$ is a continuous time birth and death process on $\mathbb{Z}_+$ such that:

• $0\to1$ at rate $2\beta_0$,

• $n\to n+1$ at rate $\beta_0$ if $n\ge1$,

• $n\to n-1$ at rate $\beta_1$ if $n\ge1$.

Since $\beta_0<\beta_1$, this birth and death process is positive recurrent and its invariant measure has an exponentially decaying tail. It is then easy to couple two versions of this process, one starting from its invariant distribution and the other one from the point mass at $0$, in such a way that, with probability $1$, the former is at all times above the latter. This proves the first assertion of the lemma. For the second assertion, note that the process $X_1^2(t)+X_2^2(t)$ increases by one unit at a rate which is bounded above by $\beta_0+\beta_1$. Therefore, it can be coupled with a Poisson process $Z(t)$ of parameter $\beta_0+\beta_1$ in such a way that

$$P\bigl(X_1^2(t)+X_2^2(t)\le Z(t)\bigr)=1.$$

The second assertion then follows from the first assertion, the inequality

$$2X_2^2(t)\le X_1^2(t)+X_2^2(t)+|\nabla_1X^2(t)|$$

and standard large deviations estimates for $Z(t)$.
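The invariant measure of this birth and death process is explicit via detailed balance, and its geometric tail (ratio $\beta_0/\beta_1<1$) can be checked numerically. A small illustration under assumed sample values $\beta_0=1$, $\beta_1=2$ (names and truncation ours):

```python
def stationary(beta0, beta1, nmax=100):
    """Normalized stationary weights pi(0..nmax) of the birth and death chain
    with birth rate 2*beta0 at state 0, beta0 at states n >= 1, and death
    rate beta1 at states n >= 1 (truncated at nmax)."""
    pi = [1.0]
    for n in range(1, nmax + 1):
        birth = 2 * beta0 if n == 1 else beta0   # rate upward out of state n - 1
        pi.append(pi[-1] * birth / beta1)        # detailed balance
    z = sum(pi)
    return [p / z for p in pi]

pi = stationary(1.0, 2.0)
# geometric tail: pi(n + 1) / pi(n) = beta0 / beta1 = 0.5 for every n >= 1
assert abs(pi[2] / pi[1] - 0.5) < 1e-12
```

Truncating at `nmax` introduces a normalization error of order $(\beta_0/\beta_1)^{\texttt{nmax}}$, which is negligible here.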

LEMMA 3.5. There exist $C$ and $\alpha>0$ such that, for all $n\in\mathbb{N}$, $k\in\mathbb{N}$, and $t\ge0$, we have

$$P_0\bigl(\nabla_1X^n(t)\ge k\bigr)\le C\exp(-\alpha k)\tag{3.8}$$

and

$$P_0\bigl(\nabla_{n-1}X^n(t)\le-k\bigr)\le C\exp(-\alpha k).\tag{3.9}$$

PROOF. Since the process starting from $X\equiv0$ is invariant under the map

$$(x_1,x_2,\dots,x_{n-1},x_n)\to(x_n,x_{n-1},\dots,x_2,x_1),$$

under $P_0$ the random variables $\nabla_1X^n(t)$ and $-\nabla_{n-1}X^n(t)$ have the same distribution. Hence, it suffices to prove (3.8). We assume without loss of generality that $k\ge4\beta_1$, since this additional condition can be dropped by adjusting the constants $C$ and $\alpha$. Moreover, we also assume that

$$t>\frac{k}{2\beta_1},\tag{3.10}$$

because when this inequality fails, we have

$$P_0\bigl(\nabla_1X^n(t)\ge k\bigr)\le P\bigl((N_{0,1}+N_{1,1})(t)\ge k\bigr)\le P(T_{k/2}\ge k),$$

which decays exponentially in $k$. Let

$$\tau_t=\sup\{s:0\le s\le t\text{ such that }\nabla_1X^n(s)=0\},$$

and let

$$\ell=\min\{q\in\mathbb{N}:(t-q)\beta_1\le k/2\}.$$

It follows from this definition and our assumptions on $k$ and $t$ that $t-\ell\ge1$, and

$$2(t-\ell)\beta_1\le k\le2(t-\ell+1)\beta_1\le4(t-\ell)\beta_1.\tag{3.11}$$

We will show that both

$$P_0\bigl(\nabla_1X^n(t)\ge k,\ \tau_t>\ell\bigr)\quad\text{and}\quad P_0\bigl(\nabla_1X^n(t)\ge k,\ \tau_t\le\ell\bigr)$$

decay exponentially in $k$ with constants which depend on $\beta_0$ and $\beta_1$ but not on $t$ or $n$.

For the first of these terms, write

$$P_0\bigl(\nabla_1X^n(t)\ge k,\ \tau_t>\ell\bigr)\le P_0\bigl(\Delta_{\ell,t}^1(X^n)\ge k\bigr)\le P_0\bigl((N_{0,1}+N_{1,1})(t)-(N_{0,1}+N_{1,1})(\ell)\ge k\bigr)\le P\bigl(T_{\beta_1(t-\ell)}\ge k\bigr)\le P(T_{k/2}\ge k),$$

which decays exponentially in $k$.

For the second term, let $m\le\ell$ and note that

$$\{\tau_t\in[m-1,m),\ \nabla_1X^n(t)>0\}\subset\{\nabla_1X^n(s)>0,\ \forall s\in[m,t]\}.$$

Hence, we deduce from (3.6) that on $\{\tau_t\in[m-1,m),\ \nabla_1X^n(t)>0\}$ we have

$$\Delta_{m-1,t}^1(X^n)=\Delta_{m-1,m}^1(X^n)+\Delta_{m,t}^1(X^n)\le N_{0,1}(t)-N_{0,1}(m)+(N_{0,1}+N_{1,1})(m)-(N_{0,1}+N_{1,1})(m-1),$$

and from (3.7) that

$$\Delta_{m,t}^2(X^n)\ge(N_{0,2}+N_{1,2})(t)-(N_{0,2}+N_{1,2})(m).$$

Since on $\{\nabla_1X^n(t)>0,\ \tau_t\in[m-1,m)\}$ we also have $\Delta_{m-1,t}^1(X^n)>\Delta_{m,t}^2(X^n)$, we can write

$$\begin{aligned}P_0\bigl(\nabla_1X^n(t)\ge k,\ \tau_t\in[m-1,m)\bigr)&\le P_0\bigl(\nabla_1X^n(t)>0,\ \tau_t\in[m-1,m)\bigr)\\ &\le P\bigl(N_{0,1}(t)-N_{0,1}(m)+(N_{0,1}+N_{1,1})(m)-(N_{0,1}+N_{1,1})(m-1)\\ &\qquad\ge(N_{0,2}+N_{1,2})(t)-(N_{0,2}+N_{1,2})(m)\bigr)\\ &=P\bigl(T_{\beta_0(t-m)+\beta_0+\beta_1}\ge T_{\beta_1(t-m)}\bigr),\end{aligned}$$

which decays exponentially in $t-m$. Adding this over $1\le m\le\ell$, we obtain that

$$P_0\bigl(\nabla_1X^n(t)\ge k,\ \tau_t\le\ell\bigr)$$

decays exponentially in $t-\ell$. The lemma now follows from (3.11).

We will now state and prove the main result of this section. Theorem 1.2 follows immediately from it.

THEOREM 3.1. For all $n\ge2$, there exist $\alpha_n>0$ and $a_n$ such that

$$P_0\bigl(|\nabla_iX^n(t)|\ge k\bigr)\le a_n\exp(-\alpha_nk)\qquad\forall t\ge0,\ 1\le i\le n-1,\ k\in\mathbb{N},\tag{3.12}$$

and there exist $\gamma_n>0$, $b_n$, and $0<d_n<\beta_1$ such that

$$P_0\bigl(X_n^n(t)\ge d_nt\bigr)\le b_n\exp(-\gamma_nt)\qquad\forall t\ge0.\tag{3.13}$$

The second inequality of the conclusion of this theorem says that the coordinates next to the boundary grow at a speed which is strictly smaller than $\beta_1$. This will play an important role in the inductive step: in time intervals on which coordinate $r$ remains higher than coordinate $r+1$, the first $r$ coordinates behave as a process with $r$ coordinates and zero boundary conditions. Hence, coordinate $r$ grows at a rate which is strictly smaller than $\beta_1$, while coordinate $r+1$ grows at least at that rate. This implies that coordinate $r$ is unlikely to remain higher than coordinate $r+1$ for a long time.

PROOF OF THEOREM 3.1. We proceed by induction on $n$. For $n=2$, (3.12) and (3.13) follow from Lemma 3.4. Suppose that (3.12) and (3.13) hold for $n\in\{1,\dots,r\}$.
