
DOI: 10.1214/13-AAP929

© Institute of Mathematical Statistics, 2014

LOSS OF MEMORY OF HIDDEN MARKOV MODELS AND LYAPUNOV EXPONENTS¹

BY PIERRE COLLET² AND FLORENCIA LEONARDI³

École Polytechnique and Universidade de São Paulo

In this paper we prove that the asymptotic rate of exponential loss of memory of a finite state hidden Markov model is bounded above by the difference of the first two Lyapunov exponents of a certain product of matrices. We also show that this bound is in fact realized, namely for almost all realizations of the observed process we can find symbols where the asymptotic exponential rate of loss of memory attains the difference of the first two Lyapunov exponents. These results are derived in particular for the observed process and for the filter; that is, for the distribution of the hidden state conditioned on the observed sequence. We also prove similar results in total variation.

1. Introduction. Let $(X_t)_{t\in\mathbb{Z}}$ be a Markov chain over a finite alphabet $A$. We consider a probabilistic function $(Z_t)_{t\in\mathbb{Z}}$ of this chain, a model introduced by Petrie (1969). More precisely, there is another finite alphabet $B$ and for any $X_t$ we choose at random a $Z_t$ in $B$. The random choice of $Z_t$ depends only on the value $X_t$ of the original process at time $t$. The process $(Z_t)$ is the observed process and $(X_t)$ is the hidden process. This model is called a hidden Markov process.

We are interested in the asymptotic loss of memory of the processes $(X_t)_{t\in\mathbb{Z}}$ and $(Z_t)_{t\in\mathbb{Z}}$ conditioned on the observed sequence. For example, if the conditional probability of $Z_t$ given $X_t$ does not depend on $X_t$, the process $(Z_t)_{t\in\mathbb{Z}}$ is an independent process. Another trivial example is when there is no random choice, namely $Z_t = X_t$; in this case the process $(Z_t)_{t\in\mathbb{Z}}$ is Markovian. However, as we will see, under natural assumptions, the process $(Z_t)_{t\in\mathbb{Z}}$ has infinite memory. On the other hand, a particularly interesting question from the point of view of applications is to consider the loss of memory of the filter; that is, the distribution of $X_0$ conditioned on the past observed sequence $Z_{-1}, \ldots, Z_{-n+1}$ and for different initial conditions on $X_{-n}$; see, for example, Cappé, Moulines and Rydén (2005).

Received November 2011.

¹Supported in part by FAPESP's project Consistent estimation of stochastic processes with variable length memory (2009/09411-8), CNPq's project Stochastic systems with variable length interactions (476501/2009-1) and USP project Mathematics, computation, language and the brain (11.1.9367.1.5).

²Supported in part by the DynEurBraz European project.

³Supported in part by a CNPq-Brazil fellowship.

MSC2010 subject classifications. Primary 60J2; secondary 62M09.

Key words and phrases. Random functions of Markov chains, Oseledec's theorem, perturbed processes.


Our goal is to investigate how fast these processes lose memory.

Exponential upper bounds for this asymptotic loss of memory have been obtained in various papers; see, for example, Douc, Moulines and Ritov (2009), Douc et al. (2009) and references therein. For the case of projections of Markov chains and the relation with Gibbs measures, see Chazottes and Ugalde (2011) and references therein.

In the present paper, under generic assumptions, we prove that the asymptotic rate of exponential loss of memory is bounded above by the difference of the first two Lyapunov exponents of a certain product of matrices. We also show that this bound is in fact realized, namely for almost all realizations of the process $(Z_t)_{t\in\mathbb{Z}}$, we can find symbols where the asymptotic exponential rate of loss of memory attains the difference of the first two Lyapunov exponents. As far as we know, our results provide the first lower bounds for the loss of memory of these processes.

Similar results (in particular lower bounds) are also obtained in the total variation distance.

Conditioned on the observed sequence $Z_{-1}, \ldots, Z_{-n+1}$, we have considered different possibilities for the initial distribution at time $-n$, namely one can either give the initial distribution of $X_{-n}$ or the initial distribution of $Z_{-n}$. Similarly one can ask for the distribution of $X_0$ (the hidden present state) or of $Z_0$ (the observable present state).

As an application, we consider the case of a randomly perturbed Markov chain with two symbols. We show that the asymptotic rate of loss of memory can be expanded in powers of the perturbation with a logarithmic singularity. This was our original motivation, coming from our previous work with Galves [Collet, Galves and Leonardi (2008)].

The relation between products of random matrices and hidden Markov models was previously described in Jacquet, Seroussi and Szpankowski (2008). In that paper it was proved in particular that the first Lyapunov exponent equals minus the entropy of the process.

The content of the paper is as follows. In Section 2 we give a precise definition of the asymptotic exponential rate of loss of memory and state the main results about the relation of this rate with the first two Lyapunov exponents.

Proofs are given in Section 3. They rely on more general propositions which allow us to treat at once the different situations of initial distributions and present distributions. In Section 4 we give the application to the random perturbation of a two-state Markov chain.

2. Definitions and main results. Let $(X_t)_{t\in\mathbb{Z}}$ be an irreducible aperiodic Markov chain over a finite alphabet $A$ with transition probability matrix $p(\cdot\mid\cdot)$ and unique invariant measure $\pi$. Without loss of generality we will assume $A = \{1, 2, \ldots, k\}$. In the sequel, we will use the shorthand notation $x_r^s$ for a sequence of symbols $(x_r, \ldots, x_s)$ ($r \leq s$). Consider another finite alphabet $B = \{1, 2, \ldots, \ell\}$ and a process $(Z_t)_{t\in\mathbb{Z}}$, a probabilistic function of the Markov chain $(X_t)_{t\in\mathbb{Z}}$ over $B$. That is, there exists a matrix $q(\cdot\mid\cdot) \in \mathbb{R}^{k\times\ell}$ such that for any $n \geq 0$, any $z_0^n \in B^{n+1}$ and any $x_0^n \in A^{n+1}$, we have

$$P(Z_0^n = z_0^n \mid X_0^n = x_0^n) = \prod_{i=0}^{n} P(Z_i = z_i \mid X_i = x_i) = \prod_{i=0}^{n} q(z_i \mid x_i). \qquad (2.1)$$

From now on, the symbol $\bar z$ will represent an element in $B^{\mathbb{Z}}$. Define the shift operator $S : B^{\mathbb{Z}} \to B^{\mathbb{Z}}$ by $(S\bar z)_n = z_{n+1}$. The shift is invertible, and its inverse is given by $(S^{-1}\bar z)_n = z_{n-1}$.

To state our results we will need the following hypotheses:

(H1) $\min_{i,j} p(j\mid i) > 0$, $\min_{i,m} q(m\mid i) > 0$.

(H2) $\det(p) \neq 0$.

(H3) $\operatorname{rank}(q) = k$.

Note that hypothesis (H3) implies $\ell \geq k$.

For the convenience of the reader we recall Oseledec's theorem in finite dimension; see, for example, Katok and Hasselblatt (1995), Ledrappier (1984). As usual, we denote $\log^+(x) = \max(\log(x), 0)$.

OSELEDEC'S THEOREM. Let $(\Omega, \mu)$ be a probability space and let $T$ be a measurable transformation of $\Omega$ such that $\mu$ is $T$-ergodic. Let $L_\omega$ be a measurable function from $\Omega$ to $\mathcal{L}(\mathbb{R}^k)$ (the space of linear operators of $\mathbb{R}^k$ into itself). Assume the function $L_\omega$ satisfies

$$\int \log^+\|L_\omega\|\, d\mu(\omega) < +\infty.$$

Then there exist $\lambda_1 > \lambda_2 > \cdots > \lambda_s$, with $s \leq k$, and there exists an invariant set $\tilde\Omega \subset \Omega$ of full measure ($\mu(\Omega \setminus \tilde\Omega) = 0$) such that for all $\omega \in \tilde\Omega$ there exist $s+1$ sub-vector spaces

$$\mathbb{R}^k = V_\omega^{(1)} \supset V_\omega^{(2)} \supset \cdots \supset V_\omega^{(s+1)} = \{0\}$$

such that for any $v \in V_\omega^{(j)} \setminus V_\omega^{(j+1)}$ ($1 \leq j \leq s$) we have

$$\lim_{n\to+\infty} \frac{1}{n} \log\|L_\omega^{[n]} v\| = \lambda_j,$$

where $L_\omega^{[n]} = L_{T^{n-1}(\omega)} \cdots L_\omega$. Moreover, the subspaces satisfy the relation $L_\omega V_\omega^{(j)} \subset V_{T\omega}^{(j)}$.


The numbers $\lambda_1, \lambda_2, \ldots, \lambda_s$ are called the Lyapunov exponents.

In the sequel we will use this theorem with $\Omega = B^{\mathbb{Z}}$, $\mu$ the stationary ergodic measure of the process $(Z_t)_{t\in\mathbb{Z}}$ [Cappé, Moulines and Rydén (2005)], $T = S^{-1}$ and $L_{\bar z}$ the linear operator in $\mathbb{R}^k$ with matrix given by

$$(L_{\bar z})_{i,j} = q(z_0\mid i)\, p(j\mid i).$$

With this notation, we have, for example,

$$P\bigl(X_0 = a,\ Z_{-n+1}^{-1} = z_{-n+1}^{-1},\ X_{-n} = b\bigr) = \bigl\langle \theta_b,\ L^{[n-1]}_{S^{-1}\bar z}\, 1_a \bigr\rangle,$$

where $(\theta_b)_i = p(i\mid b)$ and $1_a$ is the basis vector with component number $a$ equal to one.

From now on we will use the $\ell^2$ norm $\|\cdot\|$ and the corresponding scalar product on $\mathbb{R}^k$. Note that from our definition of $L_{\bar z}$ we have

$$\sup_{\bar z}\|L_{\bar z}\| < +\infty.$$

Therefore we can apply Oseledec's theorem to get the existence of the Lyapunov exponents.
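For concreteness, the following is a minimal numerical sketch (not part of the paper): it simulates the observed process and estimates the first two Lyapunov exponents of the products $L^{[n]}_{\bar z}$ by the standard QR re-orthogonalization method. The two-state matrices `p` and `q` are hypothetical placeholders chosen to satisfy (H1)–(H3); they are not taken from the paper.

```python
import numpy as np

# Minimal sketch (not from the paper): estimate the first two Lyapunov
# exponents of the products L^{[n]} by QR re-orthogonalization.
# p and q are hypothetical placeholder matrices satisfying (H1)-(H3).
rng = np.random.default_rng(0)

p = np.array([[0.85, 0.15],
              [0.20, 0.80]])      # p(j|i): row i sums to 1
q = np.array([[0.70, 0.30],
              [0.35, 0.65]])      # q(m|i): row i sums to 1
k = p.shape[0]

def L_matrix(z0):
    # (L_z)_{i,j} = q(z_0 | i) p(j | i)
    return q[:, z0][:, None] * p

def sample_obs(n):
    # simulate the hidden chain and keep only the observed symbols
    x, zs = 0, []
    for _ in range(n):
        zs.append(rng.choice(q.shape[1], p=q[x]))
        x = rng.choice(k, p=p[x])
    return zs

n = 20000
zs = sample_obs(n)
Q = np.eye(k)
log_r = np.zeros(k)
for z in zs:
    Q, R = np.linalg.qr(L_matrix(z) @ Q)
    log_r += np.log(np.abs(np.diag(R)))

lam = np.sort(log_r / n)[::-1]
print("lambda_1 ~", lam[0], " lambda_2 ~", lam[1],
      " lambda_2 - lambda_1 ~", lam[1] - lam[0])
```

For such a model the printed $\lambda_1$ should be close to minus the entropy rate of $(Z_t)_{t\in\mathbb{Z}}$, in line with the relation recalled above from Jacquet, Seroussi and Szpankowski (2008), and $\lambda_2 - \lambda_1$ is the rate appearing in the results below.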

For any $\bar z \in B^{\mathbb{Z}}$, for probabilities $\rho$ on $A$, $\eta$ on $B$ and any integer $n$, we define two probabilities on $A$ by

$$\nu^{[n]}_{\bar z,\rho}(a) = \frac{\sum_{b\in A} P(X_0=a,\ Z_{-n+1}^{-1}=z_{-n+1}^{-1},\ X_{-n}=b)\,\rho(b)}{\sum_{b\in A} P(Z_{-n+1}^{-1}=z_{-n+1}^{-1},\ X_{-n}=b)\,\rho(b)}, \qquad a \in A,$$

and

$$\sigma^{[n]}_{\bar z,\eta}(a) = \frac{\sum_{c\in B} P(X_0=a,\ Z_{-n+1}^{-1}=z_{-n+1}^{-1},\ Z_{-n}=c)\,\eta(c)}{\sum_{c\in B} P(Z_{-n+1}^{-1}=z_{-n+1}^{-1},\ Z_{-n}=c)\,\eta(c)}, \qquad a \in A.$$

These are the probabilities of $X_0$ conditioned on the observed string $z_{-n+1}^{-1}$ when the distribution of $X_{-n}$ is $\rho$ (resp., the distribution of $Z_{-n}$ is $\eta$).

When $\rho$ is a Dirac measure concentrated on $b$ we will simply denote the measure $\nu^{[n]}_{\bar z,\rho}$ by $\nu^{[n]}_{\bar z,b}$, and similarly for $\sigma^{[n]}_{\bar z,\eta}$. We can now state our main results.

THEOREM 2.1. Under hypothesis (H1), for each $a \in A$, for any probabilities $\rho$ and $\rho'$ on $A$,

$$\limsup_{n\to+\infty} \frac{1}{n} \log \bigl|\nu^{[n]}_{\bar z,\rho}(a) - \nu^{[n]}_{\bar z,\rho'}(a)\bigr| \leq \lambda_2 - \lambda_1,$$

$\mu$-almost surely. Similarly, under hypothesis (H1), for each $a \in A$, for any probabilities $\eta$ and $\eta'$ on $B$,

$$\limsup_{n\to+\infty} \frac{1}{n} \log \bigl|\sigma^{[n]}_{\bar z,\eta}(a) - \sigma^{[n]}_{\bar z,\eta'}(a)\bigr| \leq \lambda_2 - \lambda_1,$$

$\mu$-almost surely.


REMARK. When $A = B$ and $q$ is the identity matrix, $(Z_t)_{t\in\mathbb{Z}} = (X_t)_{t\in\mathbb{Z}}$ is a Markov chain. The second part of hypothesis (H1) does not hold, but it is easy to adapt the proof of Theorem 2.1 for this particular case. It is easy to verify recursively that the matrices $L^{[n]}_{\bar z}$ are of rank one. The Lyapunov exponents can be computed explicitly. One gets $\lambda_1 = -H(p)$ (where $H(p)$ is the entropy of the Markov chain with transition probability $p$) from the ergodic theorem, and $\lambda_2 = -\infty$ with multiplicity $k-1$.
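As a hedged numerical illustration of Theorem 2.1 (again with placeholder matrices, not from the paper), the filter $\nu^{[n]}_{\bar z,\rho}$ can be evaluated by the usual forward predict/update recursion; comparing two runs started from different initial distributions $\rho$ and $\rho'$, the printed slopes should stabilize near $\lambda_2 - \lambda_1$. For very large $n$ the difference between the two filters drops below double precision, so only moderate values of $n$ are used.

```python
import numpy as np

# Hedged sketch (not from the paper): forward predict/update filtering for the
# distribution of X_0 given the past observations, started from two different
# initial distributions.  p and q are hypothetical placeholders.
rng = np.random.default_rng(1)

p = np.array([[0.85, 0.15],
              [0.20, 0.80]])      # p(j|i)
q = np.array([[0.70, 0.30],
              [0.35, 0.65]])      # q(m|i)

def sample_obs(n):
    x, zs = 0, []
    for _ in range(n):
        zs.append(rng.choice(2, p=q[x]))
        x = rng.choice(2, p=p[x])
    return zs

def filter_x0(zs, rho):
    # distribution of X_0 given Z_{-n+1}^{-1} = zs (in chronological order)
    # when X_{-n} is distributed according to rho
    pi = rho @ p                  # predict X_{-n+1}
    for z in zs:
        pi = pi * q[:, z]         # update with the observation z
        pi = pi / pi.sum()
        pi = pi @ p               # predict the next hidden state
    return pi

zs = sample_obs(200)
rho1, rho2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
for n in (10, 20, 40):
    d = np.abs(filter_x0(zs[-n:], rho1) - filter_x0(zs[-n:], rho2)).max()
    print(n, np.log(d) / n)       # slope should approach lambda_2 - lambda_1
```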

THEOREM 2.2. Under hypotheses (H1)–(H2), for $\mu$-almost all $\bar z$ there exist $a, b, c \in A$ (which may depend on $\bar z$) such that

$$\limsup_{n\to+\infty} \frac{1}{n} \log \bigl|\nu^{[n]}_{\bar z,b}(a) - \nu^{[n]}_{\bar z,c}(a)\bigr| = \lambda_2 - \lambda_1.$$

Under hypotheses (H1)–(H3), for $\mu$-almost all $\bar z$ there exist $a \in A$, $b, c \in B$ (which may depend on $\bar z$) such that

$$\limsup_{n\to+\infty} \frac{1}{n} \log \bigl|\sigma^{[n]}_{\bar z,b}(a) - \sigma^{[n]}_{\bar z,c}(a)\bigr| = \lambda_2 - \lambda_1.$$

As a corollary, we derive equivalent results for the loss of memory of the process $(Z_t)_{t\in\mathbb{Z}}$. For any $\bar z \in B^{\mathbb{Z}}$, for probabilities $\rho$ on $A$, $\eta$ on $B$ and any integer $n$, we define two probabilities on $B$ by

$$\tilde\nu^{[n]}_{\bar z,\rho}(e) = \frac{\sum_{b\in A} P(Z_0=e,\ Z_{-n+1}^{-1}=z_{-n+1}^{-1},\ X_{-n}=b)\,\rho(b)}{\sum_{b\in A} P(Z_{-n+1}^{-1}=z_{-n+1}^{-1},\ X_{-n}=b)\,\rho(b)}, \qquad e \in B,$$

and

$$\tilde\sigma^{[n]}_{\bar z,\eta}(e) = \frac{\sum_{c\in B} P(Z_0=e,\ Z_{-n+1}^{-1}=z_{-n+1}^{-1},\ Z_{-n}=c)\,\eta(c)}{\sum_{c\in B} P(Z_{-n+1}^{-1}=z_{-n+1}^{-1},\ Z_{-n}=c)\,\eta(c)}, \qquad e \in B.$$

COROLLARY 2.3. Under hypothesis (H1), for each $e \in B$, for any probabilities $\rho$ and $\rho'$ on $A$,

$$\limsup_{n\to+\infty} \frac{1}{n} \log \bigl|\tilde\nu^{[n]}_{\bar z,\rho}(e) - \tilde\nu^{[n]}_{\bar z,\rho'}(e)\bigr| \leq \lambda_2 - \lambda_1,$$

$\mu$-almost surely. Similarly, under hypothesis (H1), for each $e \in B$, for any probabilities $\eta$ and $\eta'$ on $B$,

$$\limsup_{n\to+\infty} \frac{1}{n} \log \bigl|\tilde\sigma^{[n]}_{\bar z,\eta}(e) - \tilde\sigma^{[n]}_{\bar z,\eta'}(e)\bigr| \leq \lambda_2 - \lambda_1,$$

$\mu$-almost surely.


Moreover, under hypotheses (H1)–(H3), for $\mu$-almost all $\bar z$ there exist $e \in B$, $b, c \in A$ (which may depend on $\bar z$) such that

$$\limsup_{n\to+\infty} \frac{1}{n} \log \bigl|\tilde\nu^{[n]}_{\bar z,b}(e) - \tilde\nu^{[n]}_{\bar z,c}(e)\bigr| = \lambda_2 - \lambda_1.$$

Under hypotheses (H1)–(H3), for $\mu$-almost all $\bar z$ there exist $e, b, c \in B$ (which may depend on $\bar z$) such that

$$\limsup_{n\to+\infty} \frac{1}{n} \log \bigl|\tilde\sigma^{[n]}_{\bar z,b}(e) - \tilde\sigma^{[n]}_{\bar z,c}(e)\bigr| = \lambda_2 - \lambda_1.$$

From a practical point of view, one can prove various lower bounds for the quantity $\lambda_2 - \lambda_1$. As an example we give the following result. Let

$$\Delta = \frac{1}{\min_{m,i}\{q(m\mid i)\}}.$$

PROPOSITION 2.4. Under hypotheses (H1)–(H2) we have

$$\lambda_2 - \lambda_1 \geq \frac{1}{k-1}\log\det(p) - \frac{k}{k-1}\log\Delta.$$
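As a hedged numerical illustration (not from the paper): for a two-state model with $\det(p) = 0.65$ and $\min_{m,i} q(m\mid i) = 0.3$ (the illustrative placeholder matrices used in the sketches above), we have $k = 2$ and $\Delta = 10/3$, so the bound reads

$$\lambda_2 - \lambda_1 \ \geq\ \log 0.65 - 2\log\tfrac{10}{3} \ \approx\ -2.84;$$

that is, at the symbols where the rate is attained (Theorem 2.2), the memory cannot be lost at an exponential rate faster than roughly $e^{-2.84\,n}$.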

We now state a related result using the total variation distance between the distributions. This result will only use a weaker version of hypothesis (H3), namely

(H3′) there exist $b, c \in B$ and $i \in A$ such that $q(b\mid i) \neq q(c\mid i)$. (2.2)

Note that under this hypothesis, we do not assume any relation between the cardinality of $A$ and the cardinality of $B$ (we require of course the cardinality of $B$ to be at least two).

We recall that the total variation distance $\mathrm{TV}(\nu_1, \nu_2)$ between two measures $\nu_1$ and $\nu_2$ on $A$ is defined by

$$\mathrm{TV}(\nu_1, \nu_2) = \frac{1}{2}\sum_{a\in A}\bigl|\nu_1(a) - \nu_2(a)\bigr|.$$

Similar definitions are given for two measures on $B$.

It follows at once that under hypothesis (H1) we have, $\mu$-almost surely,

$$\limsup_{n\to+\infty} \frac{1}{n}\log \mathrm{TV}\bigl(\nu^{[n]}_{\bar z,\rho},\ \nu^{[n]}_{\bar z,\rho'}\bigr) \leq \lambda_2 - \lambda_1,$$

and similarly for the measures $\sigma$, $\tilde\nu$ and $\tilde\sigma$. In order to state a lower bound for these quantities, we need to recall a result about Lyapunov dimensions.


We denote by $s$ the number of different Lyapunov exponents, and by $m_i$ ($1 \leq i \leq s$) the multiplicity of the exponent $\lambda_i$, namely

$$m_i = \dim V_\omega^{(i)} - \dim V_\omega^{(i+1)}.$$

It follows from Oseledec's theorem that these numbers are $\mu$-almost surely constant.

THEOREM 2.5. Assume hypotheses (H1)–(H2). Then for $\mu$-almost every $\bar z$ and for any pair $(b, c)$ of elements in $B$ satisfying (H3′) we have

$$\liminf_{n\to\infty} \frac{1}{n}\log \mathrm{TV}\bigl(\sigma^{[n]}_{\bar z,b},\ \sigma^{[n]}_{\bar z,c}\bigr) \ \geq\ \lambda_s - \lambda_1 + \sum_{i=2}^{s} m_i(\lambda_i - \lambda_2) \ \geq\ 2\log|\det p| - k\log\Delta.$$

REMARK. Similar lower bounds for the total variation distance between the measures $\nu^{[n]}_{\bar z,b}$ and $\nu^{[n]}_{\bar z,c}$ can be proven under hypotheses (H1)–(H2). In the case of the total variation distance between $\tilde\sigma^{[n]}_{\bar z,b}$ and $\tilde\sigma^{[n]}_{\bar z,c}$ (resp., between $\tilde\nu^{[n]}_{\bar z,b}$ and $\tilde\nu^{[n]}_{\bar z,c}$) we can also prove the same lower bounds, but this requires the full set of hypotheses (H1)–(H3).

3. Proofs. We begin by proving some lemmas which will be useful later. We introduce the order $(\mathbb{R}^k, \leq)$ given by $v \leq w$ if and only if $v_i \leq w_i$ for all $i = 1, \ldots, k$. When needed, we will also make use of the symbols $<$, $>$ and $\geq$, defined in an analogous way. Note that since the matrices $L_{\bar z}$ have strictly positive entries, if $v \leq w$, then $L_{\bar z} v \leq L_{\bar z} w$. We will use the notation $1 \in \mathbb{R}^k$ for the vector with components $(1)_i = 1$ for each $i = 1, \ldots, k$, and the notation $1_a \in \mathbb{R}^k$ for the vector with components $(1_a)_a = 1$ and $(1_a)_i = 0$ for $i \neq a$.

LEMMA 3.1. Under hypothesis (H1), if $\xi \in V^{(2)}_{\bar z} \setminus \{0\}$, then $\xi$ has two nonzero components of opposite signs, $\mu$-almost surely.

PROOF. Assume there exists $\xi \in V^{(2)}_{\bar z} \setminus \{0\}$ with $\xi_i \geq 0$ for all $i = 1, \ldots, k$. Then, from hypothesis (H1) it follows that there exists $\alpha > 0$ such that, for all $\bar z$,

$$L_{\bar z}\xi \geq \alpha\|\xi\|\, 1.$$

One may take, for example,

$$\alpha = \frac{1}{k}\inf_{z_0,i,j} q(z_0\mid i)\,p(j\mid i) = \frac{1}{k}\inf_{\bar z,i,j}(L_{\bar z})_{i,j}.$$

We can apply $L^{[n-1]}_{S^{-1}\bar z}$ to both sides, use monotonicity and take norms, to obtain

$$\|L^{[n]}_{\bar z}\xi\| \geq \alpha\|\xi\|\,\|L^{[n-1]}_{S^{-1}\bar z} 1\|.$$


Let $w \in V^{(1)}_{S^{-1}\bar z} \setminus V^{(2)}_{S^{-1}\bar z}$. Then

$$\|L^{[n-1]}_{S^{-1}\bar z} w\| \leq \|L^{[n-1]}_{S^{-1}\bar z}|w|\| \leq \|w\|\,\|L^{[n-1]}_{S^{-1}\bar z} 1\| \leq \frac{\|w\|}{\alpha\|\xi\|}\,\|L^{[n]}_{\bar z}\xi\|.$$

Therefore

$$\|L^{[n]}_{\bar z}\xi\| \geq \frac{\alpha\|\xi\|}{\|w\|}\,\|L^{[n-1]}_{S^{-1}\bar z} w\|,$$

and using Oseledec's theorem we have $\mu$-almost surely that

$$\lim_{n\to+\infty} \frac{1}{n}\log\|L^{[n]}_{\bar z}\xi\| \geq \lim_{n\to+\infty} \frac{1}{n}\log\|L^{[n-1]}_{S^{-1}\bar z} w\| = \lambda_1,$$

which contradicts the fact that $\xi \in V^{(2)}_{\bar z} \setminus \{0\}$.

LEMMA 3.2. Under hypothesis (H1) we have $\operatorname{Codim}(V^{(2)}_{\bar z}) = 1$, $\mu$-almost surely.

PROOF. Assume $\operatorname{Codim}(V^{(2)}_{\bar z}) \geq 2$. Since any vector $w_1$ of norm one in the cone $C_k = \{w : w > 0\}$ does not belong to $V^{(2)}_{\bar z}$ (by Lemma 3.1), the vector space $V^{(2)}_{\bar z} \oplus \mathbb{R} w_1$ is of codimension at least one, $\mu$-almost surely. Therefore we can find a vector $w_2$ of norm one in $C_k \setminus (V^{(2)}_{\bar z} \oplus \mathbb{R} w_1)$. Note that

$$\inf_{\gamma\in\mathbb{R},\ y\in V^{(2)}_{\bar z}} \|w_1 - \gamma w_2 - y\| > 0, \qquad (3.1)$$

since otherwise the minimum would be reached at a finite nonzero pair $(\gamma, y)$, which would contradict $w_2 \in C_k \setminus (V^{(2)}_{\bar z} \oplus \mathbb{R} w_1)$. Let $\bar z$ be a fixed element in $B^{\mathbb{Z}}$. Define

$$\gamma_n = \max_i \frac{(L^{[n]}_{\bar z} w_1)_i}{(L^{[n]}_{\bar z} w_2)_i} \quad\text{and}\quad \delta_n = \min_i \frac{(L^{[n]}_{\bar z} w_1)_i}{(L^{[n]}_{\bar z} w_2)_i}.$$

Let

$$\varphi = \inf_{\bar z}\min_{i,j,r,s} \frac{(L_{\bar z})_{r,j}(L_{\bar z})_{s,i}}{(L_{\bar z})_{s,j}(L_{\bar z})_{r,i}}.$$

It follows from hypothesis (H1) that $\varphi > 0$. Let

$$\alpha = \frac{1-\sqrt{\varphi}}{1+\sqrt{\varphi}} < 1.$$

From the Birkhoff–Hopf theorem [see, e.g., Cavazos-Cadena (2003)], there exists a constant $\beta > 0$ such that for all $\bar z \in B^{\mathbb{Z}}$ and all $n$,

$$1 \leq \frac{\gamma_n}{\delta_n} \leq 1 + \beta\alpha^n. \qquad (3.2)$$


We now prove that

$$\frac{\gamma_n}{1+\beta\alpha^n} \leq \delta_{n+1} \leq \gamma_{n+1} \leq \gamma_n.$$

To see this, observe that $\delta_{n+1} \leq \gamma_{n+1}$ by definition. We also have, by monotonicity of $L_{\bar z}$,

$$\gamma_{n+1} = \max_i \frac{(L^{[n+1]}_{\bar z} w_1)_i}{(L^{[n+1]}_{\bar z} w_2)_i} = \max_i \frac{(L_{S^{-n}\bar z} L^{[n]}_{\bar z} w_1)_i}{(L^{[n+1]}_{\bar z} w_2)_i} \leq \max_i \frac{(L_{S^{-n}\bar z}\,\gamma_n L^{[n]}_{\bar z} w_2)_i}{(L^{[n+1]}_{\bar z} w_2)_i} = \gamma_n,$$

and also

$$\delta_{n+1} \geq \delta_n = \gamma_n\,\frac{\delta_n}{\gamma_n} \geq \frac{\gamma_n}{1+\beta\alpha^n}.$$

Since the sequence $(\gamma_n)$ is decreasing, there exist $\gamma_\infty$ and $\beta' > 0$ such that

$$\gamma_n - \gamma_\infty \leq \beta'\alpha^n.$$

On the other hand, it follows immediately from (3.2) that for any $i = 1, \ldots, k$ we have

$$-\frac{\gamma_n\beta\alpha^n\,(L^{[n]}_{\bar z} w_2)_i}{1+\beta\alpha^n} \leq \bigl(L^{[n]}_{\bar z} w_1\bigr)_i - \gamma_n\bigl(L^{[n]}_{\bar z} w_2\bigr)_i \leq 0.$$

Then there exists $\beta'' > 0$ such that

$$\frac{\|L^{[n]}_{\bar z} w_1 - \gamma_n L^{[n]}_{\bar z} w_2\|}{\|L^{[n]}_{\bar z} w_2\|} \leq \beta''\alpha^n.$$

This implies

$$\frac{\|L^{[n]}_{\bar z} w_1 - \gamma_\infty L^{[n]}_{\bar z} w_2\|}{\|L^{[n]}_{\bar z} w_2\|} \leq |\gamma_n - \gamma_\infty|\,\frac{\|L^{[n]}_{\bar z} w_2\|}{\|L^{[n]}_{\bar z} w_2\|} + \frac{\|L^{[n]}_{\bar z} w_1 - \gamma_n L^{[n]}_{\bar z} w_2\|}{\|L^{[n]}_{\bar z} w_2\|} \leq (\beta' + \beta'')\alpha^n.$$

Since $w_1$ and $w_2$ are linearly independent, we have $w_1 - \gamma_\infty w_2 \neq 0$. This and the previous inequality imply that

$$\lim_{n\to+\infty} \frac{1}{n}\log\|L^{[n]}_{\bar z}(w_1 - \gamma_\infty w_2)\| \leq \lambda_1 + \log\alpha < \lambda_1,$$

so $w_1 - \gamma_\infty w_2 \in V^{(2)}_{\bar z} \setminus \{0\}$, and this contradicts (3.1).

The proof of Theorem 2.1 will follow from the next proposition.


PROPOSITION 3.3. Let $(\psi_a)_{a\in A}$ be a basis of $\mathbb{R}^k$ satisfying $\psi_a \geq 0$ for any $a$. Let $\theta_1 > 0$ and $\theta_2 > 0$ be two vectors in $\mathbb{R}^k$. Then

$$\limsup_{n\to\infty} \frac{1}{n}\log\left|\frac{\langle\theta_1, L^{[n-1]}_{S^{-1}\bar z}\psi_a\rangle}{\langle\theta_1, L^{[n-1]}_{S^{-1}\bar z}\sum_{a'}\psi_{a'}\rangle} - \frac{\langle\theta_2, L^{[n-1]}_{S^{-1}\bar z}\psi_a\rangle}{\langle\theta_2, L^{[n-1]}_{S^{-1}\bar z}\sum_{a'}\psi_{a'}\rangle}\right| \leq \lambda_2 - \lambda_1.$$

PROOF. By Lemma 3.1, since $(\psi_a)_i \geq 0$ for any $i = 1, \ldots, k$, then $\psi_a \notin V^{(2)}_{\bar z}$, $\mu$-almost surely. In the same way, from Lemma 3.1 we have

$$\psi = \sum_{a\in A}\psi_a \in V^{(1)}_{\bar z} \setminus V^{(2)}_{\bar z}.$$

Note also that since $(\psi_a)_{a\in A}$ form a basis of nonnegative vectors, we must have $\psi_i > 0$ for all $i = 1, \ldots, k$. Therefore, by Lemma 3.2, we have that for any $a \in A$,

$$\psi_a = u_a\psi + \xi_a, \qquad (3.3)$$

where $\xi_a \in V^{(2)}_{\bar z}$, $u_a \neq 0$, and this decomposition is unique. Then

$$\frac{\langle\theta_j, L^{[n-1]}_{S^{-1}\bar z}\psi_a\rangle}{\langle\theta_j, L^{[n-1]}_{S^{-1}\bar z}\psi\rangle} = u_a + \frac{\langle\theta_j, L^{[n-1]}_{S^{-1}\bar z}\xi_a\rangle}{\langle\theta_j, L^{[n-1]}_{S^{-1}\bar z}\psi\rangle}, \qquad j = 1, 2.$$

Define for any $n$ and $\bar z$

$$\gamma(n, \bar z) = \frac{\langle\theta_1, L^{[n-1]}_{S^{-1}\bar z}\psi\rangle}{\langle\theta_2, L^{[n-1]}_{S^{-1}\bar z}\psi\rangle} \qquad (3.4)$$

and let

$$R = \max\Bigl\{\sup_i \frac{(\theta_1)_i}{(\theta_2)_i},\ \sup_i \frac{(\theta_2)_i}{(\theta_1)_i}\Bigr\}.$$

Then we have

$$\langle\theta_1, L^{[n-1]}_{S^{-1}\bar z}\psi\rangle = \sum_{i=1}^{k}(\theta_1)_i\bigl(L^{[n-1]}_{S^{-1}\bar z}\psi\bigr)_i \leq R\sum_{i=1}^{k}(\theta_2)_i\bigl(L^{[n-1]}_{S^{-1}\bar z}\psi\bigr)_i = R\,\langle\theta_2, L^{[n-1]}_{S^{-1}\bar z}\psi\rangle,$$

and similarly

$$\langle\theta_2, L^{[n-1]}_{S^{-1}\bar z}\psi\rangle \leq R\,\langle\theta_1, L^{[n-1]}_{S^{-1}\bar z}\psi\rangle.$$

In other words, for any $n$ and $\bar z$,

$$R^{-1} \leq \gamma(n, \bar z) \leq R. \qquad (3.5)$$


Then

$$\left|\frac{\langle\theta_1, L^{[n-1]}_{S^{-1}\bar z}\psi_a\rangle}{\langle\theta_1, L^{[n-1]}_{S^{-1}\bar z}\psi\rangle} - \frac{\langle\theta_2, L^{[n-1]}_{S^{-1}\bar z}\psi_a\rangle}{\langle\theta_2, L^{[n-1]}_{S^{-1}\bar z}\psi\rangle}\right| = \langle\theta_1, L^{[n-1]}_{S^{-1}\bar z}\psi\rangle^{-1}\,\bigl|\langle\theta_1 - \gamma(n,\bar z)\theta_2,\ L^{[n-1]}_{S^{-1}\bar z}\xi_a\rangle\bigr|.$$

Note that

$$\langle\theta_1, L^{[n-1]}_{S^{-1}\bar z}\psi\rangle \geq \frac{1}{k}\,\|L^{[n-1]}_{S^{-1}\bar z}\psi\|\,\inf_i(\theta_1)_i$$

and

$$\bigl|\langle\theta_1 - \gamma(n,\bar z)\theta_2,\ L^{[n-1]}_{S^{-1}\bar z}\xi_a\rangle\bigr| \leq \|\theta_1 - \gamma(n,\bar z)\theta_2\|\cdot\|L^{[n-1]}_{S^{-1}\bar z}\xi_a\| \leq \bigl(\|\theta_1\| + R\|\theta_2\|\bigr)\,\|L^{[n-1]}_{S^{-1}\bar z}\xi_a\|.$$

Then, using Oseledec's theorem, the result follows.

PROOF OF THEOREM 2.1. We observe that

$$\sum_{b\in A} P\bigl(X_0=a,\ Z_{-n+1}^{-1}=z_{-n+1}^{-1},\ X_{-n}=b\bigr)\rho(b) = \sum_{b\in A}\ \sum_{x_{-n+1}^{-1}\in A^{n-1}} p(x_{-n+1}\mid b)\,\rho(b)\, q(z_{-1}\mid x_{-1})\, p(a\mid x_{-1}) \times \prod_{l=1}^{n-2} q(z_{-l-1}\mid x_{-l-1})\, p(x_{-l}\mid x_{-l-1}) = \langle\theta_\rho,\ L^{[n-1]}_{S^{-1}\bar z}\psi_a\rangle,$$

where $\theta_\rho, \psi_a \in \mathbb{R}^k$ are given by

$$(\theta_\rho)_i = \sum_{b\in A}\rho(b)\,p(i\mid b) \quad\text{and}\quad \psi_a = 1_a. \qquad (3.6)$$

Therefore,

$$\nu^{[n]}_{\bar z,\rho}(a) = \frac{\langle\theta_\rho, L^{[n-1]}_{S^{-1}\bar z}\psi_a\rangle}{\langle\theta_\rho, L^{[n-1]}_{S^{-1}\bar z}\sum_{a'}\psi_{a'}\rangle},$$

and the first statement of Theorem 2.1 follows from Proposition 3.3, since the conditions on $(\psi_a)_{a\in A}$, $\theta_\rho$ and $\theta_{\rho'}$ can be immediately verified using hypothesis (H1).

The second part follows similarly by noting that

$$\sum_{c\in B} P\bigl(X_0=a,\ Z_{-n+1}^{-1}=z_{-n+1}^{-1},\ Z_{-n}=c\bigr)\eta(c) = \langle\theta_\eta,\ L^{[n-1]}_{S^{-1}\bar z}\psi_a\rangle,$$

where

$$(\theta_\eta)_i = \sum_{c\in B}\sum_{x\in A} p(i\mid x)\, q(c\mid x)\,\eta(c) \quad\text{and}\quad \psi_a = 1_a. \qquad (3.7)$$

This finishes the proof of Theorem 2.1.

Before proceeding with the proof of Theorem 2.2 we will prove a useful lemma.

LEMMA 3.4. Let $(\psi_a)_{a\in A}$ be a basis of $\mathbb{R}^k$ such that $\psi_a \geq 0$ for all $a$. Let $\tilde\Omega$ be a set of full $\mu$-measure where Oseledec's theorem holds. Then for any $\bar z \in \tilde\Omega$, there exists a symbol $a = a(\bar z) \in A$ such that $\xi_a \in V^{(2)}_{\bar z} \setminus V^{(3)}_{\bar z}$, where $\xi_a$ is the unique vector in $V^{(2)}_{\bar z}$ satisfying

$$\psi_a = u_a\sum_{b\in A}\psi_b + \xi_a$$

for some real number $u_a$.

PROOF. Assume $\xi_a \in V^{(3)}_{\bar z}$ for all $a$. Then, as $\operatorname{Codim}(V^{(3)}_{\bar z}) \geq 2$, the set $\{\psi_a\}$ generates a subspace of codimension at least one. This contradicts the fact that the set of vectors $\{\psi_a : a \in A\}$ forms a basis of $\mathbb{R}^k$.

The proof of Theorem 2.2 will follow from the next proposition.

PROPOSITION 3.5. Let $(\psi_a)_{a\in A}$ be a basis of $\mathbb{R}^k$ satisfying $\psi_a \geq 0$ for any $a$. Let $(\theta_j)_{j\in A}$ be another basis of $\mathbb{R}^k$ such that $\theta_j > 0$. Then for $\mu$-almost every $\bar z$ there exist $a \in A$ and two indices $r, s \in \{1, \ldots, k\}$ such that

$$\limsup_{n\to\infty} \frac{1}{n}\log\left|\frac{\langle\theta_r, L^{[n-1]}_{S^{-1}\bar z}\psi_a\rangle}{\langle\theta_r, L^{[n-1]}_{S^{-1}\bar z}\sum_{a'}\psi_{a'}\rangle} - \frac{\langle\theta_s, L^{[n-1]}_{S^{-1}\bar z}\psi_a\rangle}{\langle\theta_s, L^{[n-1]}_{S^{-1}\bar z}\sum_{a'}\psi_{a'}\rangle}\right| \geq \lambda_2 - \lambda_1. \qquad (3.8)$$

PROOF. Let $\tilde\Omega$ be a set of full $\mu$-measure where Oseledec's theorem holds. Applying Lemma 3.4, for any $\bar z \in \tilde\Omega$ we find a symbol $a = a(\bar z) \in A$ such that

$$\psi_a = u_a\sum_{b\in A}\psi_b + \xi_a$$

with $\xi_a \in V^{(2)}_{\bar z} \setminus V^{(3)}_{\bar z}$. Let

$$\tilde\xi_a(n, \bar z) = \frac{L^{[n-1]}_{S^{-1}\bar z}\xi_a}{\|L^{[n-1]}_{S^{-1}\bar z}\xi_a\|} \in V^{(2)}_{S^{-n}\bar z}. \qquad (3.9)$$
