
arXiv:1311.2002v1 [math.PR] 8 Nov 2013

Simulation of multivariate distributions with fixed marginals and correlations

Mark Huber∗ and Nevena Marić

November 11, 2013

Abstract

Consider the problem of drawing random variates (X1, . . . , Xn) from a distribution where the marginal of each Xi is specified, as well as the correlation between every pair Xi and Xj. For given marginals, the Fréchet-Hoeffding bounds put a lower and upper bound on the correlation between Xi and Xj. Hence any achievable correlation can be uniquely represented by a convexity parameter λij ∈ [0, 1], where 1 gives the maximum correlation and 0 the minimum correlation. We show that for a given convexity parameter matrix, the worst case is when the marginal distributions are all Bernoulli random variables with parameter 1/2 (fair 0-1 coins). It is the worst case in the sense that given a convexity parameter matrix that is obtainable when the marginals are all fair 0-1 coins, it is possible to simulate from any marginals with the same convexity parameter matrix. In addition, we characterize completely the set of convexity parameter matrices for symmetric Bernoulli marginals in two, three and four dimensions.

1 Introduction

Consider the problem of simulating a random vector (X1, . . . , Xn) where for all i the cumulative distribution function (cdf) of Xi is Fi, and for all i and j the correlation between Xi and Xj should be ρij ∈ [−1, 1]. The correlation here is the usual notion

Corr(X, Y) = E[(X − E[X])(Y − E[Y])] / (SD(X)SD(Y)) = (E[XY] − E[X]E[Y]) / (SD(X)SD(Y)),

for standard deviations SD(X) and SD(Y) that are finite.

∗Claremont McKenna College (email: mhuber@cmc.edu)

Let Ω denote the set of nonnegative definite matrices with entries in [−1, 1] and all diagonal entries equal to 1. Then it is well known that any correlation matrix (ρij) must lie in Ω.

This problem, in different guises, appears in numerous fields: physics [16], engineering [11], ecology [4], and finance [12], to name just a few. Due to its applicability in the generation of synthetic optimization problems, it has also received special attention from the simulation community [9], [8].

It is a very well studied problem with a variety of approaches. When the marginals are normal and the distribution is continuous with respect to Lebesgue measure, this is just the problem of generating a multivariate normal with specified correlation matrix. It is well known how to accomplish this (see, for instance [6], p. 223) for any matrix in Ω.

For marginals that are not normal, the question is very much harder. A common method is to employ families of copulas (see for instance [15]), but there are very few techniques that apply to general marginals. Instead, different families of copulas typically focus on different marginal distributions. Devroye and Letac [3] showed that if the marginals are beta distributed with equal parameters at least 1/2, then when the dimension is three it is possible to simulate such a vector where the correlation matrix is any matrix in Ω. This set of beta distributions includes the important case of uniform [0, 1] marginals, but they have not been able to extend their technique to higher dimensions.

Chaganty and Joe [1] characterized the achievable correlation matrices when the marginals are Bernoulli. When the dimension is 3 their characterization is easily checkable; in higher dimensions they give a number of inequalities that grows exponentially in the dimension.

For the case of general marginals, in statistics there is a tradition of using transformations of multivariate normal vectors dating back to Mardia [14] and Li and Hammond [13]. This approach relies heavily on developing usable numerical methods. In this paper we approach the same problem using exclusively probabilistic techniques.


The convexity matrix approach. Any two random variables X and Y have correlation in [−1, 1], but if the marginal distributions of X and Y are fixed, it is generally not possible to build a bivariate distribution with any given correlation in [−1, 1]. For instance, for X and Y both exponentially distributed, the correlation must lie in [1 − π²/6, 1]. The range of achievable correlations is always a closed interval.

For two dimensions it is well known how to find the minimum and maximum correlation. These come from the inverse transform method, which works as follows. First, given a cdf F, define the pseudoinverse of the cdf as

F⁻¹(u) = inf{x : F(x) ≥ u}. (1)

When U is uniform over the interval [0, 1] (write U ∼ Unif([0, 1])), F⁻¹(U) is a random variable with cdf F (see for instance p. 28 of [2]). Since U and 1 − U have the same distribution, both can be used in the inverse transform method. The random variables U and 1 − U are antithetic random variables. Of course Corr(U, U) = 1 and Corr(U, 1 − U) = −1, so these represent an easy way to get the maximum and minimum correlation when the marginals are uniform random variables.
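As a small numerical illustration (a sketch only; the exponential marginal, seed, and sample size are our choices, not the paper's), the inverse transform method with antithetic uniforms can be used to check the lower correlation bound 1 − π²/6 quoted above for exponential marginals:

```python
import math
import random

def exp_inv_cdf(u, rate=1.0):
    # Pseudoinverse cdf of the Exponential(rate) distribution,
    # as in equation (1): F^{-1}(u) = -ln(1 - u) / rate.
    return -math.log(1.0 - u) / rate

random.seed(1)
n = 100_000
# Clamp away from 0 and 1 to keep the logarithm finite.
us = [min(max(random.random(), 1e-12), 1.0 - 1e-12) for _ in range(n)]
xs = [exp_inv_cdf(u) for u in us]          # inverse transform with U
ys = [exp_inv_cdf(1.0 - u) for u in us]    # inverse transform with 1 - U

mx, my = sum(xs) / n, sum(ys) / n
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / n)
sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
corr_antithetic = cov / (sx * sy)
# Both marginals are Exponential(1); the antithetic coupling drives the
# sample correlation down toward 1 - pi^2/6, about -0.645.
```

The estimate lands near 1 − π²/6 ≈ −0.645, the minimum achievable correlation for two exponential marginals.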

The following theorem comes from work of Fréchet [7] and Hoeffding [10].

Theorem 1 (Fréchet-Hoeffding bound). For X1 with cdf F1 and X2 with cdf F2, and U ∼ Unif([0, 1]):

Corr(F1⁻¹(U), F2⁻¹(1 − U)) ≤ Corr(X1, X2) ≤ Corr(F1⁻¹(U), F2⁻¹(U)).

In other words, the maximum correlation between X1 and X2 is achieved when the same uniform is used in the inverse transform method to generate both. The minimum correlation between X1 and X2 is achieved when antithetic random variates are used in the inverse transform method.

Now consider (X1, . . . , Xn), where each Xi has cdf Fi, and the correlation between Xi and Xj is ρij. Then let

ρ⁻ij = Corr(Fi⁻¹(U), Fj⁻¹(1 − U)) and ρ⁺ij = Corr(Fi⁻¹(U), Fj⁻¹(U)).

For the ρij to be achievable, it must be true that ρij ∈ [ρ⁻ij, ρ⁺ij], so there exists λij ∈ [0, 1] so that

ρij = λijρ⁺ij + (1 − λij)ρ⁻ij.

Given a correlation matrix (ρij) we will refer to (λij) as the convexity parameter matrix.

Theorem 2. Suppose that for a convexity parameter matrix Λ, it is possible to simulate (B1, . . . , Bn) where for all i, P(Bi = 1) = P(Bi = 0) = 1/2 and

(∀i, j)(P(Bi = Bj) = λij).

Then for any specified set of marginals F1, . . . , Fn it is possible to simulate in linear time from a multivariate distribution on (X1, . . . , Xn) such that

(∀i)(Xi ∼ Fi) and (∀i, j)(Corr(Xi, Xj) = λijρ⁺ij + (1 − λij)ρ⁻ij).

The next result characterizes when such a multivariate Bernoulli exists in two, three, and four dimensions, and gives necessary conditions for higher dimensions.

Theorem 3. Suppose (B1, B2, . . . , Bn) are random variables with P(Bi = 1) = P(Bi = 0) = 1/2 for all i. When n = 2, it is possible to simulate (B1, B2) for any λ12 ∈ [0, 1]. When n = 3, it is possible to simulate (B1, B2, B3) if and only if

1 + 2 min{λ12, λ13, λ23} ≥ λ12 + λ13 + λ23 ≥ 1.

When n = 4, it is possible to simulate (B1, B2, B3, B4) if and only if

1 − (1/2)ℓ ≤ (1/2)(u − 1),

where

ℓ = min(λ14 + λ24 + λ13 + λ23, λ14 + λ34 + λ12 + λ23, λ24 + λ34 + λ12 + λ13),

u = min_{i,j,k} (λij + λjk + λik), the minimum running over all three-element subsets {i, j, k} of {1, 2, 3, 4}.


2 The algorithm

To present our algorithm, we begin by defining a matrix that measures the chance that any two components are equal.

Definition 1. For a random vector (X1, . . . , Xn), the concurrence matrix A = (aij) is defined as aij = P(Xi = Xj).

Let Bern(p) denote the Bernoulli distribution with parameter p, so for X ∼ Bern(p), P(X = 1) = p and P(X = 0) = 1 − p.

Proposition 1. For a random vector (B1, . . . , Bn) with Bi ∼ Bern(1/2) for all i, the concurrence matrix and convexity matrix are the same matrix.

Proof. Let (B1, . . . , Bn) be a random vector with marginals Bern(1/2), and let i ≠ j be elements of {1, . . . , n}.

Let 1(expression) denote the indicator function that is 1 if the expression is true and 0 otherwise. The inverse transform method for generating Bern(1/2) random variables is B = 1(U > 1/2) where U ∼ Unif([0, 1]). From the Fréchet-Hoeffding bound (Theorem 1), the correlation between Bi and Bj can range anywhere from −1 to 1. Hence

Corr(Bi, Bj) = λij(1) + (1 − λij)(−1) ⇒ λij = (1/2)(Corr(Bi, Bj) + 1).

Now denote P(Bi = a, Bj = b) by pab. Then

P(Bi = Bj) = p11 + p00 = aij,
P(Bi = 1) = p10 + p11 = 1/2,
P(Bj = 1) = p01 + p11 = 1/2,
Σ_{a,b} P(Bi = a, Bj = b) = p00 + p01 + p10 + p11 = 1.

Adding the top three equations and then using the fourth gives

3p11 + p10 + p01 + p00 = aij + 1 = 2p11 + 1.

Hence p11 = aij/2. That means

Corr(Bi, Bj) = (E[BiBj] − E[Bi]E[Bj]) / (SD(Bi)SD(Bj)) = ((1/2)aij − (1/2)(1/2)) / ((1/2)(1/2)) = 2aij − 1,

and so λij = (1/2)(Corr(Bi, Bj) + 1) = aij.

Given the ability to generate from a multivariate Bernoulli with given convexity/concurrence matrix, generating from a multivariate distribution with general marginals with the same convexity matrix turns out to be easy. Suppose the n by n convexity matrix Λ = (λij) and cdfs F1, . . . , Fn are given.

Generating multivariate distributions with given convexity matrix

1) Draw U uniformly from [0, 1].

2) Draw (B1, . . . , Bn) (Bi ∼ Bern(1/2)) with concurrence matrix Λ = (λij).

3) For all i ∈ {1, . . . , n}, let Xi ← Fi⁻¹(UBi + (1 − U)(1 − Bi)).

That is, if Bi = 1 then choose Xi with the inverse transform method and U; otherwise use the inverse transform method and 1 − U. Theorem 2 can now be restated in terms of this algorithm.
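A minimal Python sketch of steps 1-3 (the step 2 sampler is passed in as a black box, since constructing it is the subject of Section 3; all function names here are ours, not the paper's):

```python
import random

def sample_with_convexity(inv_cdfs, draw_bernoullis, rng=random):
    # One draw of (X_1, ..., X_n).  `inv_cdfs` holds the pseudoinverse
    # cdfs F_i^{-1}; `draw_bernoullis(rng)` must return (B_1, ..., B_n)
    # with Bern(1/2) marginals and P(B_i = B_j) = lambda_ij.
    u = rng.random()                       # step 1
    bs = draw_bernoullis(rng)              # step 2
    # Step 3: component i uses U when B_i = 1 and 1 - U when B_i = 0.
    return tuple(f(u) if b == 1 else f(1.0 - u)
                 for f, b in zip(inv_cdfs, bs))
```

For instance, with uniform marginals (the pseudoinverse is the identity) and a sampler that always returns B1 = B2, the two outputs coincide, giving the maximum-correlation coupling; a sampler that always returns B1 ≠ B2 gives the antithetic coupling X2 = 1 − X1.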

Theorem 4. The above algorithm generates (X1, . . . , Xn) with Xi ∼ Fi for all i and convexity matrix Λ.

Proof. Each Xi can be written as BiFi⁻¹(U) + (1 − Bi)Fi⁻¹(1 − U). That is, it is a mixture of Fi⁻¹(U) and Fi⁻¹(1 − U) with weights equal to 1/2. Since Fi⁻¹(U) and Fi⁻¹(1 − U) have the same distribution with cdf Fi, the random variable Xi also shares this distribution. Hence they all have the same mean (call it µi) and standard deviation (call it σi).

That means the correlations satisfy

Corr(Xi, Xj)σiσj = E[(Xi − µi)(Xj − µj)]
= E[E[(Xi − µi)(Xj − µj) | Bi, Bj]]
= Σ_{a,b∈{0,1}} P(Bi = a, Bj = b) E[(Xi − µi)(Xj − µj) | Bi = a, Bj = b].

Note that when Bi = Bj = 0, then Xi = Fi⁻¹(1 − U) and Xj = Fj⁻¹(1 − U), which are maximally correlated. So

E[(Xi − µi)(Xj − µj) | Bi = 0, Bj = 0] = ρ⁺ijσiσj.

Following this same logic further gives

E[(Xi − µi)(Xj − µj) | Bi = 1, Bj = 1] = ρ⁺ijσiσj,
E[(Xi − µi)(Xj − µj) | Bi = 1, Bj = 0] = ρ⁻ijσiσj,
E[(Xi − µi)(Xj − µj) | Bi = 0, Bj = 1] = ρ⁻ijσiσj.

Hence

Corr(Xi, Xj) = ρ⁺ijP(Bi = Bj) + ρ⁻ijP(Bi ≠ Bj) = ρ⁺ijaij + ρ⁻ij(1 − aij).

Theorem 2 is an immediate consequence.

The complete algorithm for generating multivariate distributions where component i has cdf Fi and Corr(Xi, Xj) = ρij is as follows.

Generating multivariate distributions with given correlation matrix

1) For every {i, j} ⊆ {1, . . . , n} do (where A ∼ Unif([0, 1])):

2) ρ⁻ij ← Corr(Fi⁻¹(A), Fj⁻¹(1 − A)), ρ⁺ij ← Corr(Fi⁻¹(A), Fj⁻¹(A)).

3) λij ← (ρij − ρ⁻ij)/(ρ⁺ij − ρ⁻ij).

4) Draw U ← Unif([0, 1]).

5) Draw (B1, . . . , Bn) (Bi ∼ Bern(1/2)) with concurrence matrix Λ = (λij).

6) For all i ∈ {1, . . . , n}, let Xi ← Fi⁻¹(UBi + (1 − U)(1 − Bi)).

Line 2 just generates the maximum and minimum correlation bounds from Theorem 1. Line 3 is set up so that ρij = λijρ⁺ij + (1 − λij)ρ⁻ij.
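When ρ⁺ij and ρ⁻ij have no closed form, line 2 can be carried out by Monte Carlo (a sketch; the estimator, sample size, and function names are our choices, not the paper's):

```python
import math
import random

def corr(xs, ys):
    # Sample (Pearson) correlation of two equal-length lists.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / n)
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
    return cov / (sx * sy)

def convexity_parameter(inv_f, inv_g, rho, n=200_000, seed=0):
    # Lines 2-3: estimate rho+ and rho- using a shared set of uniforms,
    # then solve rho = lam * rho+ + (1 - lam) * rho- for lam.
    rng = random.Random(seed)
    us = [rng.random() for _ in range(n)]
    xs = [inv_f(u) for u in us]
    rho_plus = corr(xs, [inv_g(u) for u in us])
    rho_minus = corr(xs, [inv_g(1.0 - u) for u in us])
    return (rho - rho_minus) / (rho_plus - rho_minus)
```

For uniform marginals (pseudoinverse the identity) the estimated bounds are exactly ±1, so this returns λ = (ρ + 1)/2, e.g. λ = 0.75 for ρ = 0.5.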

Note that when the marginals for i and j are the same distribution, ρ⁺ij = 1. However, even for equal marginals, the minimum correlation is often strictly greater than −1. See [5] and the references therein for the minimum correlation for a variety of common distributions.

This algorithm is only applicable when it is actually possible to simulate from the multivariate symmetric Bernoulli distribution with specified concurrence matrix. The next section gives necessary and sufficient conditions for this to be possible for two, three, and four dimensions.

3 Attainable convexity matrices

In this section we prove Theorem 3 which gives necessary and sufficient conditions for dimensions two, three, and four for when it is possible to create a multivariate Bernoulli 1/2 with a given convexity matrix. We break the Theorem into three parts.

Lemma 1. For any λ12 ∈ [0, 1], there exists a unique joint distribution on {0, 1}² such that (B1, B2) with this distribution has B1, B2 ∼ Bern(1/2) and P(B1 = B2) = λ12.

Proof. Let pij = P(B1 = i, B2 = j). Then the equations that are necessary and sufficient to meet the distribution and convexity conditions are:

p10 + p11 = 0.5, p01 + p11 = 0.5, p11 + p00 = λ12, and p00 + p01 + p10 + p11 = 1.

This system of linear equations has full rank, so there exists a unique solution. It is then easy to verify that

p00 = (1/2)λ12, p01 = (1/2)[1 − λ12], p10 = (1/2)[1 − λ12], p11 = (1/2)λ12

satisfies the equations.

This provides an alternate algorithm to that found in [5] for simulating from bivariate distributions with correlation between ρ⁻12 and ρ⁺12.
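Lemma 1's solution gives an immediate bivariate sampler (a sketch; the function name is ours): flip a fair coin for B1, then let B2 agree with B1 with probability λ12.

```python
import random

def bivariate_symmetric_bernoulli(lam, rng=random):
    # Lemma 1's joint law: p00 = p11 = lam/2, p01 = p10 = (1 - lam)/2,
    # i.e. B2 copies B1 with probability lam and flips it otherwise.
    b1 = 1 if rng.random() < 0.5 else 0
    b2 = b1 if rng.random() < lam else 1 - b1
    return b1, b2
```

Both marginals are Bern(1/2) by symmetry, and P(B1 = B2) = λ12 by construction.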

Lemma 2. A random vector (B1, B2, B3) with Bi ∼ Bern(1/2) exists (and is possible to simulate from in a constant number of steps) if and only if the concurrence matrix satisfies

1 ≤ λ12 + λ13 + λ23 ≤ 1 + 2 min{λ12, λ13, λ23}.

Proof. Letting pijk = P(B1 = i, B2 = j, B3 = k), there are three conditions from the marginals:

Σ_{j,k∈{0,1}} p1jk = 0.5,  Σ_{i,k∈{0,1}} pi1k = 0.5,  Σ_{i,j∈{0,1}} pij1 = 0.5.

Then there are three conditions from the correlations:

Σ_{k∈{0,1}} (p00k + p11k) = λ12,  Σ_{j∈{0,1}} (p0j0 + p1j1) = λ13,  Σ_{i∈{0,1}} (pi00 + pi11) = λ23.

A seventh condition is Σ_{i,j,k} pijk = 1. Since eight equations are needed, suppose that p111 = α.

This 8 by 8 system of equations has full rank, so there is a unique solution. It is easy to verify that the solution is

p000 = (1/2)(λ12 + λ13 + λ23 − 1) − α
p001 = (1/2)(1 − (λ13 + λ23)) + α
p010 = (1/2)(1 − (λ12 + λ23)) + α
p011 = (1/2)λ23 − α
p100 = (1/2)(1 − (λ12 + λ13)) + α
p101 = (1/2)λ13 − α
p110 = (1/2)λ12 − α
p111 = α.

In order for this solution to yield probabilities, all must lie in [0, 1]. The p011, p101, and p110 equations give the restriction

α ≤ (1/2) min{λ12, λ13, λ23}. (2)

Of course α ≥ 0, so the p111 equation can be satisfied. The p000 equation requires that

α ≤ (1/2)(λ12 + λ13 + λ23 − 1). (3)

With these two conditions, equations p001, p010, and p100 give the constraint

α ≥ (1/2)(λ12 + λ13 + λ23 − min{λ12, λ13, λ23} − 1). (4)

The lower bound in (4) is below the upper bound in equation (3), but not necessarily below the upper bound in equation (2). So this gives another restriction:

(1/2)(λ12 + λ13 + λ23 − min{λ12, λ13, λ23} − 1) ≤ (1/2) min{λ12, λ13, λ23}. (5)

As long as (3) and (5) are satisfied, there will exist a solution. If they are satisfied with strict inequality, there will be an infinite number of solutions.
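Combined with α ≥ 0, conditions (3) and (5) rearrange to exactly the inequality of Lemma 2, which is cheap to check directly (a sketch; the function name is ours):

```python
def trivariate_feasible(l12, l13, l23):
    # Lemma 2: (B1, B2, B3) with Bern(1/2) marginals and pairwise
    # concurrences (l12, l13, l23) exists iff their sum lies between
    # 1 and 1 + 2 * (smallest of the three entries).
    s = l12 + l13 + l23
    return 1.0 <= s <= 1.0 + 2.0 * min(l12, l13, l23)
```

For instance, all λij = 0.3 fails the lower bound (the sum is 0.9 < 1), while all λij = 1/2 (independent fair coins) passes.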

Note that not all positive definite correlation matrices are attainable with Bern(1/2) marginals. For instance, if λ12 = λ13 = λ23 = 0.3, then ρ12 = ρ13 = ρ23 = −0.4. With diagonal entries 1, the ρ values form a positive definite matrix, but it is impossible to build a multivariate distribution with Bern(1/2) marginals with these correlations.

4 Asymmetric Bernoulli distributions

Now consider the problem of drawing a multivariate Bernoulli (X1, . . . , Xn) where Xi ∼ Bern(pi), where pi is not necessarily 1/2, and the concurrence matrix is given.

Lemma 3. An n-dimensional multivariate Bernoulli distribution where the marginal of component i is Bern(pi) and the concurrence matrix is Λ exists if and only if an (n + 1)-dimensional multivariate Bernoulli distribution exists with Bern(1/2) marginals and concurrence matrix

( Λ   p )
( pᵀ  1 ),

where p = (p1, . . . , pn)ᵀ.

Proof. Suppose such an (n + 1)-dimensional distribution exists with Bern(1/2) marginals and the specified concurrence matrix. Let (B1, . . . , Bn+1) be a draw from this distribution. Then set Xi = 1(Bi = Bn+1). The concurrence matrix gives P(Xi = 1) = pi, and for i ≠ j, P(Xi = Xj) = P(Bi = Bj) = λij.

Conversely, suppose such an n-dimensional distribution with Bern(pi) marginals exists. Let Bn+1 ∼ Bern(1/2) be independent of the Xi, and set Bi = Bn+1Xi + (1 − Bn+1)(1 − Xi). Then P(Bi = 1) = (1/2)pi + (1/2)(1 − pi) = 1/2, and P(Bi = Bn+1) = pi, the correct concurrence parameter. Finally, for i ≠ j,

P(Bi = Bj) = P(Xi = Xj) = λij.

This result gives an alternate way of deriving Lemma 2 using the Fréchet-Hoeffding bounds. Consider a bivariate asymmetric Bernoulli (X, Y) where X ∼ Bern(p) and Y ∼ Bern(q). The pseudoinverse cdf of Bern(p) is given by Fp⁻¹(U) = 1(U > 1 − p). Hence the minimum and maximum correlation of X and Y can be directly found using Theorem 1:

ρ⁻(X, Y) = ((p + q − 1)1(p + q > 1) − pq) / √(pq(1 − p)(1 − q)),

ρ⁺(X, Y) = (min(p, q) − pq) / √(pq(1 − p)(1 − q)).
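These two bounds are straightforward to evaluate (a sketch; the function name is ours, and the indicator 1(p + q > 1) is coded as max(p + q − 1, 0)):

```python
import math

def bernoulli_corr_bounds(p, q):
    # Minimum and maximum achievable Corr(X, Y) for X ~ Bern(p) and
    # Y ~ Bern(q), from the Frechet-Hoeffding bounds above.
    denom = math.sqrt(p * q * (1.0 - p) * (1.0 - q))
    lo = (max(p + q - 1.0, 0.0) - p * q) / denom
    hi = (min(p, q) - p * q) / denom
    return lo, hi
```

For p = q = 1/2 this recovers the full range [−1, 1], consistent with the proof of Proposition 1; for unequal parameters the interval is strictly smaller.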

Suppose that we want to simulate X and Y so that P(X = Y) = r. For what r is this possible?

Let pij = P(X = i, Y = j). Then

P(X = Y) = r = p11 + p00,
P(X = 1) = p = p10 + p11,
P(Y = 1) = q = p01 + p11.

Adding the three equations above we have

p + q + r = p11 + p00 + p10 + p11 + p01 + p11 = 1 + 2p11,

and therefore p11 = (p + q + r − 1)/2. On the other hand,

E(XY) = E(1(X = 1, Y = 1)) = P(X = 1, Y = 1) = p11.

The Fréchet-Hoeffding Theorem provides the condition

E(Fp⁻¹(U)Fq⁻¹(1 − U)) ≤ E(XY) ≤ E(Fp⁻¹(U)Fq⁻¹(U)),

which is equivalent to

(p + q − 1)1(p + q > 1) ≤ (p + q + r − 1)/2 ≤ min(p, q).

Solving the inequalities for r we get the following condition:

|1 − (p + q)| ≤ r ≤ 2 min(p, q) + 1 − (p + q).

Letting p = λ13, q = λ23, and r = λ12, we obtain an equivalent condition to the one in Lemma 2.

As a further consequence of this new proof technique, we obtain a simple method for simulating from the 3-dimensional symmetric Bernoulli distribution for given concurrence probabilities, when possible.

Generating trivariate symmetric Bernoulli

Input: 3 by 3 matrix (λij) with 1 ≤ λ12 + λ13 + λ23 ≤ 1 + 2 min{λ12, λ13, λ23}.
Output: (B1, B2, B3) where (∀i)(Bi ∼ Bern(1/2)) and (∀i, j)(P(Bi = Bj) = λij).

1) Draw B3 ← Bern(1/2) and U ← Unif([0, 1]).

2) X1 ← 1(U ∈ [0, λ13])

3) X2 ← 1(U ∈ [(1 + λ13 − λ23 − λ12)/2, (1 + λ13 + λ23 − λ12)/2])

4) B1 ← B3X1 + (1 − B3)(1 − X1)

5) B2 ← B3X2 + (1 − B3)(1 − X2)
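The five steps transcribe directly to code (a sketch; the function name is ours):

```python
import random

def trivariate_symmetric_bernoulli(l12, l13, l23, rng=random):
    # Assumes the input satisfies the feasibility condition of Lemma 2.
    b3 = 1 if rng.random() < 0.5 else 0              # step 1
    u = rng.random()
    x1 = 1 if u <= l13 else 0                        # step 2
    lo = (1.0 + l13 - l23 - l12) / 2.0               # step 3
    hi = (1.0 + l13 + l23 - l12) / 2.0
    x2 = 1 if lo <= u <= hi else 0
    b1 = b3 * x1 + (1 - b3) * (1 - x1)               # step 4
    b2 = b3 * x2 + (1 - b3) * (1 - x2)               # step 5
    return b1, b2, b3
```

Note that one uniform U drives both indicator intervals; the overlap of the two intervals is what produces the prescribed concurrence λ12 between B1 and B2.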

Proposition 2. The above algorithm simulates (B1, B2, B3) where Bi ∼ Bern(1/2) for all i and P(Bi = Bj) = λij for all i and j.

Proof. Note 1 ≤ λ12 + λ13 + λ23 ≤ 1 + 2 min{λ12, λ13, λ23} if and only if

0 ≤ (1 + λ13 − λ23 − λ12)/2 ≤ λ13 ≤ (1 + λ13 + λ23 − λ12)/2 ≤ 1.

Hence

P(X1 = 1) = λ13 − 0 = λ13,

P(X2 = 1) = (1 + λ13 + λ23 − λ12)/2 − (1 + λ13 − λ23 − λ12)/2 = λ23,

P(X1 = X2) = (λ13 − (1 + λ13 − λ23 − λ12)/2) + (1 − (1 + λ13 + λ23 − λ12)/2) = λ12.

Steps 4 and 5 are the construction in the proof of Lemma 3 with B3 in the role of Bn+1, so (B1, B2, B3) has Bern(1/2) marginals with P(B1 = B3) = λ13, P(B2 = B3) = λ23, and P(B1 = B2) = P(X1 = X2) = λ12.

Lemma 3 also allows a characterization of the 4-dimensional symmetric Bernoulli concurrence (and so also convexity) matrices.

Lemma 4. A random vector (B1, B2, B3, B4) with Bi ∼ Bern(1/2) exists (and is possible to simulate in a constant number of steps) if and only if, for

ℓ = min(λ14 + λ24 + λ13 + λ23, λ14 + λ34 + λ12 + λ23, λ24 + λ34 + λ12 + λ13),

u = min_{i,j,k} (λij + λjk + λik), the minimum running over all three-element subsets {i, j, k} of {1, 2, 3, 4},

it is true that

1 − (1/2)ℓ ≤ (1/2)(u − 1).

Proof. By using Lemma 3, the problem is reduced to finding a distribution for (X1, X2, X3) where Xi ∼ Bern(λi4) and the upper 3 by 3 minor of Λ is the new concurrence matrix. Just as in Lemma 2, this gives eight equations of full rank with a single parameter α. Letting qijk = P(X1 = i, X2 = j, X3 = k), the unique solution is

q000 = (1/2)(λ12 + λ13 + λ23) − (1/2) − α
q001 = −(1/2)(λ14 + λ24 + λ13 + λ23) + 1 + α
q010 = −(1/2)(λ14 + λ34 + λ12 + λ23) + 1 + α
q011 = (1/2)(λ24 + λ34 + λ23) − (1/2) − α
q100 = −(1/2)(λ24 + λ34 + λ12 + λ13) + 1 + α
q101 = (1/2)(λ14 + λ34 + λ13) − (1/2) − α
q110 = (1/2)(λ14 + λ24 + λ12) − (1/2) − α
q111 = α.

All of these right hand sides lie in [0, 1] if and only if 1 − (1/2)ℓ ≤ (1/2)(u − 1), and α is chosen to lie in [1 − (1/2)ℓ, (1/2)(u − 1)].

As with the 3-dimensional case, this proof can be used to simulate a 4-dimensional multivariate symmetric Bernoulli: generate (X1, X2, X3) using any feasible α and the q distribution, then generate B4 ∼ Bern(1/2), and then set Bi to be B4Xi + (1 − B4)(1 − Xi) for i ∈ {1, 2, 3}.
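A sketch of that 4-dimensional generator (the function name and input format are ours; rather than precomputing ℓ and u, we choose the free parameter α directly as the smallest value that keeps all eight probabilities of the displayed solution nonnegative):

```python
import random

def four_dim_symmetric_bernoulli(lam, rng=random):
    # `lam[(i, j)]` for i < j in {1,...,4} are the target concurrence
    # probabilities P(Bi = Bj).  Build the law q of (X1, X2, X3) with
    # Xi ~ Bern(lam[(i, 4)]) from the displayed solution, then fold in
    # an independent fair coin B4 as in Lemma 3.
    l = lam
    def q_of(alpha):
        return {
            (0, 0, 0): (l[(1, 2)] + l[(1, 3)] + l[(2, 3)]) / 2 - 0.5 - alpha,
            (0, 0, 1): 1 + alpha - (l[(1, 4)] + l[(2, 4)] + l[(1, 3)] + l[(2, 3)]) / 2,
            (0, 1, 0): 1 + alpha - (l[(1, 4)] + l[(3, 4)] + l[(1, 2)] + l[(2, 3)]) / 2,
            (0, 1, 1): (l[(2, 4)] + l[(3, 4)] + l[(2, 3)]) / 2 - 0.5 - alpha,
            (1, 0, 0): 1 + alpha - (l[(2, 4)] + l[(3, 4)] + l[(1, 2)] + l[(1, 3)]) / 2,
            (1, 0, 1): (l[(1, 4)] + l[(3, 4)] + l[(1, 3)]) / 2 - 0.5 - alpha,
            (1, 1, 0): (l[(1, 4)] + l[(2, 4)] + l[(1, 2)]) / 2 - 0.5 - alpha,
            (1, 1, 1): alpha,
        }
    # Smallest alpha making the three "1 + alpha - ..." entries nonnegative.
    base = q_of(0.0)
    alpha = max(0.0, *(-base[k] for k in [(0, 0, 1), (0, 1, 0), (1, 0, 0)]))
    q = q_of(alpha)
    if any(prob < -1e-12 for prob in q.values()):
        raise ValueError("infeasible concurrence matrix")
    # Sample (X1, X2, X3) from q by inversion over the eight atoms.
    r, acc = rng.random(), 0.0
    x1, x2, x3 = 1, 1, 1  # fallback against floating-point rounding
    for xs, prob in q.items():
        acc += prob
        if r <= acc:
            x1, x2, x3 = xs
            break
    b4 = 1 if rng.random() < 0.5 else 0
    return tuple(b4 * x + (1 - b4) * (1 - x) for x in (x1, x2, x3)) + (b4,)
```

With all λij = 1/2 this reproduces pairwise-independent fair coins.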

5 Conclusions

For higher dimensions, the correlation matrix can be written as a convexity matrix whose parameters indicate where on the line from the lower to the upper bound the correlation lies. With a simple algorithm, we have shown that when viewed as a convexity matrix problem, the worst case for marginals is when they are all symmetric Bernoulli random variables. It is the worst case in the sense that if it is possible to build a symmetric Bernoulli multivariate distribution with the given convexity matrix, then it is possible to build a multivariate distribution with that convexity matrix for any marginal distributions. For symmetric Bernoulli marginals, the convexity matrix is also the concurrence matrix that gives the probabilities that any pair of random variables are the same.

For three and four dimensions, the set of convexity matrices that yield a symmetric Bernoulli distribution is characterized completely. For five or higher dimensions, every subset of three and four variables gives these characterizations as necessary conditions.

Acknowledgement. Support from the National Science Foundation (grant DMS-1007823) is gratefully acknowledged.

References

[1] N. R. Chaganty and H. Joe. Range of correlation matrices for dependent Bernoulli random variables. Biometrika, 93:197–206, 2006.

[2] L. Devroye. Non-uniform random variate generation. Springer, 1986.

[3] L. Devroye and G. Letac. Copulas in three dimensions with prescribed correlations. 2010. arXiv:1004.3146v1.

[4] C. T. S. Dias, A. Samaranayaka, and B. Manly. On the use of correlated Beta random variables with animal population modelling. Ecological Modelling, 215(4): 293–300, 2008.

[5] V. M. Dukic and N. Marić. Minimum correlation in construction of multivariate distributions. Physical Review E, 87(3):032114, 2013.

[6] G. S. Fishman. Monte Carlo: concepts, algorithms, and applications. Springer-Verlag, 1996.

[7] M. Fréchet. Sur les tableaux de corrélation dont les marges sont données. Annales de l'Université de Lyon, Section A, 14:53–77, 1951.

[8] S. G. Henderson, B. A. Chiera, and R. M. Cooke. Generating dependent quasi-random numbers. In Proceedings of the Winter Simulation Conference 2000, volume 1. IEEE, 2000.

[9] R. R. Hill and C. H. Reilly. Composition for multivariate random vari-ables. Proceedings of the Winter Simulation Conference 1994, 332–339, IEEE, 1994.

[10] W. Hoeffding. Masstabinvariante korrelatiostheorie. Schriften des Mathematischen Instituts und des Instituts f¨ur Angewandte Mathematik der Universitat Berlin, 5:179–233, 1940.

[11] D. G. Lampard. A stochastic process whose successive intervals between events form a first order Markov chain: I.Journal of Applied Probability, pages 648–668, 1968.

[12] A. J. Lawrance and P. A. W. Lewis. A new autoregressive time series model in exponential variables (NEAR(1)). Advances in Applied Probability, 13(4):826–845, 1981.

[13] S. T. Li and J. L. Hammond. Generation of pseudorandom numbers with specified univariate distributions and correlation coefficients. IEEE Transactions on Systems, Man and Cybernetics, pages 557–561, 1975.

[14] K. V. Mardia. A translation family of bivariate distributions and Fréchet's bounds. Sankhyā: The Indian Journal of Statistics, Series A, pages 119–122, 1970.

[15] R. B. Nelsen. An Introduction to Copulas. Springer, 1999.
