

4 Approach to Computing Transitive Closure

4.1 Replacing the Parameterized Vector with a Linear Combination of Constant Vectors

To find constant vectors whose linear combination represents the parameterized vector, we can apply the following theorem.

Theorem 1. Let $v_p$ be a vector in $\mathbb{Z}^d$ and let $p_i$, $i = 1, 2, \ldots, q$, be its parameterized coordinates, where $q$ is the number of parameterized coordinates. We may replace vector $v_p$ with a linear combination of a constant vector $v_c$, $v_c \in \mathbb{Z}^d$, and unit normal vectors $e_i$, $e_i \in \mathbb{Z}^d$, as follows:

\[
v_p = v_c + \sum_i p_i \times e_i . \tag{2}
\]

If $v_c = 0$, then $v_c$ can be omitted from (2).

Proof. Without loss of generality, we may assume that the first $n$ positions of $v_p$ have constant coordinates and the last $q$ positions have parameterized ones.

Then, we can write:

\[
\begin{pmatrix}
c_1 \\ \vdots \\ c_n \\ p_{n+1} \\ \vdots \\ p_d
\end{pmatrix}
=
\begin{pmatrix}
c_1 \\ \vdots \\ c_n \\ 0 \\ \vdots \\ 0
\end{pmatrix}
+
\begin{pmatrix}
0 \\ \vdots \\ 0 \\ p_{n+1} \\ \vdots \\ p_d
\end{pmatrix},
\tag{3}
\]

where $d - n = q$. Further, the second vector can be written as a linear combination of unit normal vectors $e_k$ and parameterized coefficients $p_{n+1}, \ldots, p_d$ occupying the last $q$ positions:

\[
\begin{pmatrix}
0 \\ \vdots \\ 0 \\ p_{n+1} \\ \vdots \\ p_d
\end{pmatrix}
=
\begin{pmatrix}
0 \\ \vdots \\ 0 \\ p_{n+1} \\ \vdots \\ 0
\end{pmatrix}
+ \ldots +
\begin{pmatrix}
0 \\ \vdots \\ 0 \\ 0 \\ \vdots \\ p_d
\end{pmatrix}
= p_{n+1} \times
\begin{pmatrix}
0 \\ \vdots \\ 0 \\ 1 \\ \vdots \\ 0
\end{pmatrix}
+ \ldots + p_d \times
\begin{pmatrix}
0 \\ \vdots \\ 0 \\ 0 \\ \vdots \\ 1
\end{pmatrix}.
\tag{4}
\]

Substituting (4) into (3), we obtain:

\[
\begin{pmatrix}
c_1 \\ \vdots \\ c_n \\ p_{n+1} \\ \vdots \\ p_d
\end{pmatrix}
=
\begin{pmatrix}
c_1 \\ \vdots \\ c_n \\ 0 \\ \vdots \\ 0
\end{pmatrix}
+ p_{n+1} \times e_{n+1} + \ldots + p_d \times e_d,
\tag{5}
\]

which proves Theorem 1.

It is obvious that if $v_c = 0$, then $v_c$ can be omitted without affecting the result.
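As a brief numerical illustration of Theorem 1 (the concrete numbers are ours and serve only as an example), consider a vector in $\mathbb{Z}^4$ whose first two coordinates are constant and whose last two coordinates are the parameters $p_3$ and $p_4$:

\[
v_p =
\begin{pmatrix} 2 \\ 0 \\ p_3 \\ p_4 \end{pmatrix}
=
\underbrace{\begin{pmatrix} 2 \\ 0 \\ 0 \\ 0 \end{pmatrix}}_{v_c}
+ p_3 \times
\underbrace{\begin{pmatrix} 0 \\ 0 \\ 1 \\ 0 \end{pmatrix}}_{e_3}
+ p_4 \times
\underbrace{\begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \end{pmatrix}}_{e_4},
\]

so the single parameterized vector is replaced by the constant vector $v_c$ and the unit normal vectors $e_3$ and $e_4$.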

Property 1. Replacing parameterized vectors with a linear combination of vectors with constant coordinates can be done in polynomial time.

Proof. To check each position in vector $v_p$, $v_p \in \mathbb{Z}^d$, the algorithm requires $d$ operations. In the worst case, all $d$ positions can be parameterized coordinates, hence $d$ unit normal vectors $e_k$, $e_k \in \mathbb{Z}^d$, must be created. This defines the $O(d^2)$ time complexity of replacing parameterized vectors.
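To make the procedure behind Theorem 1 and Property 1 concrete, a minimal Python sketch is given below. It is our own illustration; the function name replace_parameterized and the convention of marking a parameterized coordinate with None are assumptions made here, not notation from the paper.

\begin{verbatim}
from typing import List, Optional, Tuple

def replace_parameterized(vp: List[Optional[int]]) -> Tuple[List[int], List[List[int]]]:
    """Split a possibly parameterized vector vp into a constant vector vc plus
    one unit normal vector per parameterized coordinate (cf. Theorem 1).
    A coordinate stored as None stands for a parameterized coordinate p_i."""
    d = len(vp)
    vc = [x if x is not None else 0 for x in vp]   # constant part: parameters zeroed
    units = []
    for i, x in enumerate(vp):                     # one check per position, d in total
        if x is None:                              # parameterized coordinate found
            e = [0] * d
            e[i] = 1                               # unit normal vector e_i
            units.append(e)
    return vc, units

# (2, 0, p3, p4) -> vc = (2, 0, 0, 0) plus e_3 and e_4
print(replace_parameterized([2, 0, None, None]))
\end{verbatim}

When all $d$ coordinates are parameterized, the loop builds $d$ unit vectors of length $d$ each, which is exactly the $O(d^2)$ worst case stated above.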

4.2 Algorithm for Computing Transitive Closure

The idea of the algorithm presented in this section is the following. Given a set $D$ of $m$ dependence distance vectors in the $n$-dimensional integer space derived from a union of dependence relations $R$ (it describes all the dependences in a loop), we first replace all parameterized vectors with constant vectors using Theorem 1 presented in subsection 4.1. As a result, we get $k$, $k \ge m$, dependence distance vectors with constant coordinates. This allows us to get rid of parameterized vectors and to form an integer matrix $A$, $A \in \mathbb{Z}^{n \times k}$, by inserting the dependence distance vectors with constant coordinates into the columns of $A$. The columns of $A$ span the vector space $V$.

To decrease the complexity of further computations, redundant dependence distance vectors are eliminated from matrix $A$ by finding a subset of $l$, $l \le k$, linearly independent columns of $A$. This subset of dependence distance vectors forms the basis $B$, $B \in \mathbb{Z}^{n \times l}$, of $A$ and generates the same vector space $V$ as $A$ does [15]. Every element of vector space $V$ can be expressed uniquely as a finite linear combination of the basis dependence distance vectors belonging to $B$.
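One possible realization of this step is sketched below in Python. It relies on numpy and computes ranks over the rationals, whereas the paper applies integer Gaussian elimination [15,16]; the function name basis_columns is ours.

\begin{verbatim}
import numpy as np

def basis_columns(A: np.ndarray) -> np.ndarray:
    """Keep a maximal set of linearly independent columns of A
    (a basis of the column space C(A))."""
    selected = []
    for j in range(A.shape[1]):
        candidate = selected + [A[:, j]]
        # keep column j only if it increases the rank of the selected set
        if np.linalg.matrix_rank(np.column_stack(candidate)) > len(selected):
            selected.append(A[:, j])
    return np.column_stack(selected)

# the third column is the sum of the first two and is dropped
A = np.array([[1, 0, 1],
              [0, 1, 1]])
print(basis_columns(A))   # columns (1,0) and (0,1)
\end{verbatim}

Each column is kept only if it enlarges the rank of the columns already selected, so the result has exactly $l$ linearly independent columns spanning the same vector space $V$ as $A$.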

After $B$ is completed, we can work out relation $T$ representing the exact transitive closure of $R$ or its over-approximation. For each vertex $x$ in the data dependence graph (where $x$ is the source of a dependence, $x \in \operatorname{domain} R$), we can identify all vertices $y$ (the destinations of dependences, $y \in \operatorname{range} R$) that are connected with $x$ by a path of length equal to or greater than 1, where $y$ is calculated as $x$ plus a linear combination of the basis dependence distance vectors $B$, i.e., $y = x + B \times z$, $z \in \mathbb{Z}^l$. The part $B \times z$ of the formula represents all possible paths in the dependence graph, represented by relation $R$, connecting $x$ and $y$. Moreover, we have to preserve the lexicographic order for $y$ and $x$, i.e., $y - x \succ 0$. Below, we present the algorithm in a formal way.

Algorithm. Calculating the exact transitive closure of a relation describing all the dependences in a parameterized perfectly nested loop, or its over-approximation.

Input: Dependence distance set $D_{n \times m} = \{d_1, d_2, \ldots, d_m\}$, where $m$ is the number of $n$-dimensional dependence distance vectors.

Output: Exact transitive closure of the relation describing all the dependences in the loop or its over-approximation.

Method:

1. Replace each parameterized dependence distance vector in $D_{n \times m}$ with a linear combination of vectors with constant coordinates. For this purpose, apply Theorem 1 presented in subsection 4.1.

2. Using all constant dependence vectors, form matrix $A$, $A \in \mathbb{Z}^{n \times k}$, $k \ge m$, whose columns span $C(A)$ [22].

3. Extract a finite subset of $l$, $l \le k$, linearly independent columns from matrix $A \in \mathbb{Z}^{n \times k}$ over field $\mathbb{Z}^n$ that can represent (generate) every vector in $C(A)$. Form matrix $B_{n \times l}$, representing the basis of the set of dependence distance vectors, where the linearly independent vectors are the columns of matrix $B_{n \times l}$. For this purpose, apply the Gaussian elimination algorithm [15,16].

4. Calculate relation $T$ representing the exact transitive closure of the dependence relation describing all the dependences in the input loop, or its over-approximation, as follows:

\[
T = \bigl\{\, [x] \to [y] \;\bigm|\; \exists\, z \in \mathbb{Z}^{l} \text{ s.t. } y = x + B_{n \times l} \times z \;\wedge\; y - x \succ 0 \;\wedge\; y \in \operatorname{range} R \;\wedge\; x \in \operatorname{domain} R \,\bigr\},
\tag{6}
\]
where:

– $R$ is the dependence relation describing all the dependences in the input loop,

– $B_{n \times l} \times z$ represents a linear combination of the basis dependence distance vectors $d_i$ (the columns of $B_{n \times l}$), $1 \le i \le l$,

– $y - x \succ 0$ imposes the lexicographically forward constraints on tuples $x$ and $y$ of $T$.
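Because the columns of $B_{n \times l}$ are linearly independent, the existential constraint in (6) can be checked for a concrete pair $(x, y)$ by solving one linear system and testing the solution for integrality. The Python sketch below illustrates this check only; it is our own illustration, not the Presburger-arithmetic formulation used by the algorithm, and it omits the conditions $x \in \operatorname{domain} R$ and $y \in \operatorname{range} R$, which would be tested against relation $R$ separately.

\begin{verbatim}
import numpy as np

def lex_positive(v: np.ndarray) -> bool:
    """True if v is lexicographically greater than the zero vector."""
    for c in v:
        if c != 0:
            return bool(c > 0)
    return False

def related_by_T(x: np.ndarray, y: np.ndarray, B: np.ndarray) -> bool:
    """Check that y = x + B*z holds for some integer vector z and that
    y - x is lexicographically positive. Assumes the columns of B are
    linearly independent, so z is unique whenever it exists."""
    diff = (y - x).astype(float)
    z, *_ = np.linalg.lstsq(B.astype(float), diff, rcond=None)
    z_int = np.rint(z)
    in_lattice = np.allclose(B @ z_int, diff)   # y - x lies in the lattice spanned by B
    return in_lattice and lex_positive(y - x)

B = np.array([[1, 0],
              [0, 1]])                          # two basis distance vectors in a 2-D space
print(related_by_T(np.array([1, 1]), np.array([2, 3]), B))  # True:  (1,2) = 1*d1 + 2*d2
print(related_by_T(np.array([2, 3]), np.array([1, 1]), B))  # False: not lexicographically forward
\end{verbatim}

In the algorithm itself, all of these constraints are kept symbolic and handled with Presburger arithmetic rather than tested point by point.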

Let us demonstrate that for the exact transitive closure $R^+$ and relation $T$, formed according to (6), the following condition is satisfied: $R^+ \subseteq T$. To prove this, let us note that relation $T$ represents all possible paths between vertices $x$ (standing for dependence sources, $x \in \operatorname{domain} R$) and vertices $y$ (standing for dependence destinations, $y \in \operatorname{range} R$) in the dependence graph represented by relation $R$.

Indeed, a linear combination of the basis dependence distance vectors $B_{n \times l} \times z$:

– reproduces all dependence distance vectors exposed for the loop,

– describes all existing (true) paths between any pair of $x$ and $y$ as a linear combination of all dependence distance vectors exposed for the loop,

– can describe non-existing (false) paths in the dependence graph represented by relation $R$.

The last case occurs when, on a path between $x$ and $y$ described by $T$, there exists a vertex $w$ such that $w \in \operatorname{range} R \wedge w \notin \operatorname{domain} R$. Such a case is presented in Figure 1, where $x_2 \in \operatorname{range} R \wedge x_2 \notin \operatorname{domain} R$. Relation $T$, built according to (6), describes the false path between $x_1$ and $x_4$ depicted with the dotted line.

Fig. 1. False path in a dependence graph represented by relation $T$
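To make the over-approximation concrete, consider the following small instance (our own numbers, not those of Figure 1). Take a one-dimensional iteration space with $R = \{1 \to 2,\ 3 \to 5\}$. The dependence distance vectors are $d_1 = 1$ and $d_2 = 2$; after basis extraction only $d_1$ remains, so $B = (1)$. Formula (6) then admits the pair $1 \to 5$ (take $z = 4$), since $5 - 1 = 4 \succ 0$, $1 \in \operatorname{domain} R$, and $5 \in \operatorname{range} R$, although $R^+ = R$ contains no chain from $1$ to $5$: the intermediate vertex $2$ belongs to $\operatorname{range} R$ but not to $\operatorname{domain} R$. Hence $R^+ \subsetneq T$ for this relation.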

Summing up, we conclude that relation $T$ describes all existing paths in the dependence graph represented by relation $R$ and can describe non-existing paths, i.e., $R^+ \subseteq T$; when relation $T$ does not represent false paths, $R^+ = T$.

4.3 Time Complexity

The first three steps of the proposed algorithm can be accomplished in polynomial time.

1. As we have proved in subsection 4.1, the task of replacing parameterized vectors with a linear combination of vectors with constant coordinates can be done in $O(d^2)$ operations.

2. The task of forming a dependence matrix using all $k$ constant dependence vectors in $\mathbb{Z}^n$ requires $O(kn)$ operations (memory accesses).

3. The task of identifying a set of linearly independent columns of matrix $A$, $A \in \mathbb{Z}^{n \times k}$, with constant coordinates to find the basis can be done in polynomial time by the Gaussian elimination algorithm. According to [23], this computation can be done in $O(ldk)$ arithmetic operations.

To calculate relation $T$ in step 4 of the algorithm, we use Presburger arithmetic. In general, calculations based on Presburger arithmetic are not characterized by polynomial time complexity [1].