A Modified Precondition in the Gauss-Seidel Method

Alimohammad Nazari, Sajjad Zia Borujeni

Department of Mathematics, Faculty of Science, Arak University, Arak, Iran. Email: a-nazari@araku.ac.ir, z-borujeni@arshad.araku.ac.ir. Received March 6, 2012; revised June 28, 2012; accepted August 21, 2012.

ABSTRACT

In recent years, a number of preconditioners have been applied to solve linear systems with the Gauss-Seidel method (see [1-13]). In this paper we use $S_l$ instead of $(S + S_m)$ and compare it with M. Morimoto's preconditioner [3] and H. Niki's preconditioner [5] to obtain a better convergence rate. A numerical example is given which shows the advantage of our method.

Keywords: Preconditioning; Gauss-Seidel Method; Regular Splitting; Z-Matrix; Nonnegative Matrix

1. Introduction

Consider the linear system
$$
Ax = b, \qquad (1)
$$
where $A = (a_{ij}) \in \mathbb{R}^{n\times n}$ is a known nonsingular matrix and $x, b \in \mathbb{R}^{n}$ are vectors. For any splitting $A = M - N$ with a nonsingular matrix $M$, the basic splitting iterative method can be expressed as
$$
x^{(k+1)} = M^{-1}N x^{(k)} + M^{-1}b, \qquad k = 0, 1, 2, \ldots \qquad (2)
$$
Assume that $a_{ii} \neq 0$, $i = 1, 2, \ldots, n$; without loss of generality we can write
$$
A = I - L - U, \qquad (3)
$$
where $I$ is the identity matrix and $L$ and $U$ are the strictly lower triangular and strictly upper triangular parts of $A$, respectively. In order to accelerate the convergence of the iterative method for solving the linear system (1), the original linear system (1) is transformed into the following preconditioned linear system
$$
PAx = Pb, \qquad (4)
$$
where $P$, called a preconditioner, is a nonsingular matrix.
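As a minimal illustration (assuming NumPy, the unit-diagonal convention of (3), and helper names of our own choosing), the splitting iteration (2) and the preconditioned system (4) can be set up as follows:

```python
import numpy as np

def gauss_seidel_splitting(A):
    """Return M, N with A = M - N: M is the lower triangle of A (incl. diagonal), N the strict upper part."""
    M = np.tril(A)      # for A = I - L - U with unit diagonal this is I - L
    N = M - A           # equals U, so that A = M - N
    return M, N

def splitting_iteration(A, b, x0, num_iters=50, P=None):
    """Basic iteration (2): x_{k+1} = M^{-1}(N x_k + b), optionally applied to P A x = P b as in (4)."""
    if P is not None:
        A, b = P @ A, P @ b
    M, N = gauss_seidel_splitting(A)
    x = x0.copy()
    for _ in range(num_iters):
        x = np.linalg.solve(M, N @ x + b)
    return x

def spectral_radius(A):
    """Spectral radius of the Gauss-Seidel iteration matrix T = M^{-1} N of A."""
    M, N = gauss_seidel_splitting(A)
    return max(abs(np.linalg.eigvals(np.linalg.solve(M, N))))
```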

In 1991, Gunawardena et al. [2] considered the modified Gauss-Seidel method with $P = I + S$, where $S = (s_{ij})$ is given by
$$
s_{ij} =
\begin{cases}
-a_{i,\,i+1}, & j = i+1,\ i = 1, 2, \ldots, n-1,\\
0, & \text{otherwise.}
\end{cases}
$$
Then the preconditioned matrix $A_S = (I + S)A$ can be written as
$$
A_S = (I + S)(I - L - U) = (I - D) - (L + E) - (U - S + SU),
$$
where $D$ and $E$ are the diagonal and strictly lower triangular parts of $SL$, respectively. If $a_{i,\,i+1}a_{i+1,\,i} \neq 1$, $1 \le i \le n-1$, then $[(I - D) - (L + E)]^{-1}$ exists. Therefore, the preconditioned Gauss-Seidel iterative matrix $T_S$ for $A_S$ becomes
$$
T_S = [(I - D) - (L + E)]^{-1}(U - S + SU),
$$
which is referred to as the modified Gauss-Seidel iterative matrix. Gunawardena et al. proved the following inequality:
$$
\rho(T_S) \le \rho(T) < 1,
$$
where $\rho(T)$ denotes the spectral radius of the Gauss-Seidel iterative matrix $T$.

Morimoto et al. [3] have proposed the following preconditioner,

$$
P_{s+m} = I + S + S_m .
$$
In this preconditioner, $S_m = (s^{m}_{ij})$ is defined by
$$
s^{m}_{ij} =
\begin{cases}
-a_{i,\,l_i}, & j = l_i,\ 1 \le i < n-1,\\
0, & \text{otherwise,}
\end{cases}
$$
where $l_i = \min I_i$ and $I_i = \{\, j : |a_{ij}| \text{ is maximal for } i+1 < j \le n \,\}$, for $1 \le i < n-1$. The preconditioned matrix $A_{s+m} = (I + S + S_m)A$ can then be written as
$$
A_{s+m} = (I + S + S_m)(I - L - U)
        = (I - D_{s+m}) - (L + E_{s+m}) - (U - S - S_m + SU + S_m U + F),
$$
where $D_{s+m}$, $E_{s+m}$ and $F$ are the diagonal, strictly lower and strictly upper triangular parts of $(S + S_m)L$, respectively. Assume that the following inequalities are satisfied:
$$
0 < a_{i,\,i+1}a_{i+1,\,i} + a_{i,\,l_i}a_{l_i,\,i} < 1, \quad 1 \le i < n-1,
\qquad
0 < a_{i,\,i+1}a_{i+1,\,i} < 1, \quad i = n-1.
$$
Then $(I - D_{s+m}) - (L + E_{s+m})$ is nonsingular, and the preconditioned Gauss-Seidel iterative matrix $T_{s+m}$ for $A_{s+m}$ is defined by
$$
T_{s+m} = [(I - D_{s+m}) - (L + E_{s+m})]^{-1}(U - S - S_m + SU + S_m U + F).
$$
Morimoto et al. [3] proved that $\rho(T_{s+m}) \le \rho(T_S)$.
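The two upper preconditioners introduced so far can be assembled mechanically from $A$: $S$ takes the first superdiagonal, and $S_m$ takes, in each row $i \le n-2$, the entry of largest modulus beyond the first superdiagonal (the smallest such column when the maximum is attained more than once). A minimal sketch of one reading of these definitions, assuming NumPy and 0-based indexing (helper names are ours):

```python
import numpy as np

def gunawardena_S(A):
    """S of Gunawardena et al. [2]: S[i, i+1] = -A[i, i+1] for i = 0..n-2, zero elsewhere."""
    n = A.shape[0]
    S = np.zeros_like(A)
    for i in range(n - 1):
        S[i, i + 1] = -A[i, i + 1]
    return S

def morimoto_Sm(A):
    """S_m of Morimoto et al. [3]: in row i (i = 0..n-3) put -A[i, l_i] at column l_i,
    where l_i is the first column j >= i+2 on which |A[i, j]| is maximal."""
    n = A.shape[0]
    Sm = np.zeros_like(A)
    for i in range(n - 2):
        cols = np.arange(i + 2, n)
        l_i = cols[np.argmax(np.abs(A[i, cols]))]   # argmax returns the first maximiser
        Sm[i, l_i] = -A[i, l_i]
    return Sm

# Example use: A_S = (I + S) A and A_{s+m} = (I + S + S_m) A, e.g.
# I = np.eye(A.shape[0]); A_S = (I + gunawardena_S(A)) @ A
```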

To extend the preconditioning effect to the last row, Morimoto et al. [7] proposed the preconditioner $P_R = I + R$, where $R = (r_{ij})$ is defined by
$$
r_{nj} =
\begin{cases}
-a_{nj}, & 1 \le j \le n-1,\\
0, & \text{otherwise.}
\end{cases}
$$
The elements $a^{R}_{ij}$ of $A_R = (I + R)A$ are given by
$$
a^{R}_{ij} = a_{ij}, \quad 1 \le i < n, \qquad
a^{R}_{nj} = a_{nj} - \sum_{k=1}^{n-1} a_{nk}a_{kj}, \quad 1 \le j \le n,
$$
and Morimoto et al. proved that $\rho(T_R) \le \rho(T)$ holds, where $T_R$ is the iterative matrix for $A_R$. They also presented combined preconditioners, given by combinations of $R$ with any upper preconditioner, and showed that the convergence rates of the combined methods are better than those of the Gauss-Seidel method applied with other upper preconditioners [7]. In [14], Niki et al. considered the preconditioner $P_{SR} = I + S + R$. Denote $A_{SR} = M_{SR} - N_{SR}$. In [5], Niki et al. proved that if the following inequality is satisfied,

n1 ,

1 =1,

= = 1, 2, ,

n R

nj nj nj nk kj

k k j

a a a a a j

 

 (5)

then $\rho(T_{SR}) \le \rho(T_S)$ holds, where $T_{SR}$ is the iterative matrix for $A_{SR}$. For matrices that do not satisfy Equation (5), by putting
$$
r_{nj} =
\begin{cases}
\displaystyle \sum_{k=1,\,k\ne j}^{n-1} a_{nk}a_{kj} - a_{nj}, & 1 \le j < n,\\
0, & \text{otherwise,}
\end{cases}
$$

Equation (5) is satisfied. Therefore, Niki et al. [5] proposed a new preconditioner $P_G = I + \gamma G$, where $G = (g_{ij})$ is given by
$$
g_{nj} =
\begin{cases}
\displaystyle \sum_{k=1,\,k\ne j}^{n-1} a_{nk}a_{kj} - a_{nj}, & 1 \le j \le n-1,\\
0, & \text{otherwise.}
\end{cases}
$$
Put $A_G = P_G A = (a^{G}_{ij}) = M_G - N_G$, and $T_G = M_G^{-1}N_G$.

Replacing $P_G$ by $P_G = I + S + S_m + \gamma G$ and setting $\gamma = 1$, the Gauss-Seidel splitting of $A_G$ can be written as
$$
A_G = \bigl[(I - D_{s+m}) - (L + E_{s+m}) + GA\bigr] - (U - S - S_m + SU + S_m U + F),
$$
where the product $GA$ is nonzero only in its last row, which is constructed from the elements $g_{nj}$. Thus, if the preconditioner $G$ is used, then all of the rows of $A$ are subject to preconditioning.

Niki et al. [5] proved that, under the condition $\gamma_u > \gamma$,
$$
\rho(T_R) \ge \rho(T_G),
$$
where $\gamma_u$ is the upper bound of those values of $\gamma$ for which $\rho(T_G) < 1$. By setting $a^{G}_{nj} = 0$, they obtained
$$
\gamma_j = \frac{-a_{nj}}{\,g_{nj} + \sum_{k=1,\,k\ne j}^{n-1} g_{nk}a_{kj}\,} .
$$
Niki et al. [5] proved that the preconditioner $G$ satisfies Equation (5) unconditionally. Moreover, they reported that the convergence rate of the Gauss-Seidel method using the preconditioner $G$ is better than that of the SOR method using the optimum $\omega$ found by numerical computation. They also reported that there is an optimum $\gamma_{\mathrm{opt}}$ in the range $\gamma_u > \gamma_{\mathrm{opt}} > \gamma_m$, which produces an extremely small $\rho(T_G)$, where $\gamma_m$ is the upper bound of the values of $\gamma$ for which $a^{G}_{nj} \le 0$ for all $j$.
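As a concrete sketch of this last-row preconditioner (our own helper, 0-based indexing, following the formula for $g_{nj}$ displayed above), $G$ and the combined preconditioner $I + S + S_m + \gamma G$ can be built as follows:

```python
import numpy as np

def niki_G(A):
    """G of Niki et al. [5]: g_{nj} = sum_{k != j, k <= n-1} a_{nk} a_{kj} - a_{nj} for j < n."""
    n = A.shape[0]
    G = np.zeros_like(A)
    for j in range(n - 1):
        acc = sum(A[n - 1, k] * A[k, j] for k in range(n - 1) if k != j)
        G[n - 1, j] = acc - A[n - 1, j]
    return G

def P_G(A, gamma=1.0):
    """P = I + S + S_m + gamma*G, reusing the gunawardena_S and morimoto_Sm helpers sketched earlier."""
    return np.eye(A.shape[0]) + gunawardena_S(A) + morimoto_Sm(A) + gamma * niki_G(A)
```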

In this paper we use a different preconditioner for solving (1) by the Gauss-Seidel method, assuming that none of the components of the matrix $A$ is zero. If the largest component of column $i+1$ is not $a_{i,\,i+1}$, then the value of $\rho(T)$ will be improved.

2. Main Result

In this section we replace the $S + S_m$ of Morimoto by $S_l = S_n + S_m$, where $S_n = (s^{n}_{ij})$ is defined by
$$
s^{n}_{ij} =
\begin{cases}
-a_{k_i,\,i+1}, & j = i+1,\ i = 1, 2, \ldots, n-2,\\
-a_{i,\,i+1}, & j = i+1,\ i = n-1,\\
0, & \text{otherwise,}
\end{cases}
$$
where $|a_{k_i,\,i+1}| = \max\{\, |a_{k,\,i+1}| : k = i, i+2, i+3, \ldots, n \,\}$, and $S_m$ has the same form as that proposed by Morimoto et al. [3].
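Under our reading of this definition, $S_n$ places at position $(i, i+1)$ the negative of the largest-modulus entry of column $i+1$ taken over the rows $k = i, i+2, \ldots, n$ (for $i = n-1$ it reduces to the usual entry $-a_{n-1,\,n}$), and $S_l = S_n + S_m$. A sketch with 0-based indexing (helper names are ours; $S_m$ as sketched in Section 1):

```python
import numpy as np

def nazari_Sn(A):
    """S_n: at (i, i+1) place -A[k_i, i+1], where |A[k_i, i+1]| is maximal over rows k in {i, i+2, ..., n-1}."""
    n = A.shape[0]
    Sn = np.zeros_like(A)
    for i in range(n - 2):
        candidates = [i] + list(range(i + 2, n))          # candidate rows of column i+1
        k_i = max(candidates, key=lambda k: abs(A[k, i + 1]))
        Sn[i, i + 1] = -A[k_i, i + 1]
    Sn[n - 2, n - 1] = -A[n - 2, n - 1]                   # last superdiagonal entry, as for S
    return Sn

def nazari_Sl(A):
    """S_l = S_n + S_m (S_m as in Morimoto et al. [3], sketched earlier)."""
    return nazari_Sn(A) + morimoto_Sm(A)
```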

The preconditioned matrix $A_{S_l} = (I + S_l)A$ can then be written as
$$
A_{S_l} = (I + S_l)(I - L - U)
        = (I - D_{S_l}) - (L + E_{S_l}) - (U - S_l + S_l U + F_{S_l}),
$$
where $D_{S_l}$, $E_{S_l}$ and $F_{S_l}$ are the diagonal, strictly lower and strictly upper triangular parts of $S_l L$, respectively. Assume that the following inequalities are satisfied:

$$
\begin{array}{ll}
a_{i,\,i+1} - a_{k_i,\,i+1} - a_{i,\,l_i}a_{l_i,\,i+1} \le 0, & i = 1, 2, \ldots, n-2,\\
0 < a_{k_i,\,i+1}a_{i+1,\,i} + a_{i,\,l_i}a_{l_i,\,i} < 1, & i = 1, 2, \ldots, n-2,\\
0 < a_{i,\,i+1}a_{i+1,\,i} < 1, & i = n-1.
\end{array} \qquad (6)
$$

Therefore $M_{S_l}^{-1}$ exists, and the preconditioned Gauss-Seidel iterative matrix $T_{S_l}$ for $A_{S_l}$ is defined by
$$
T_{S_l} = M_{S_l}^{-1}N_{S_l}
        = [(I - D_{S_l}) - (L + E_{S_l})]^{-1}(U - S_l + S_l U + F_{S_l}).
$$

For $A = (a_{ij})$ and $B = (b_{ij}) \in \mathbb{R}^{n\times n}$, we write $A \ge B$ whenever $a_{ij} \ge b_{ij}$ holds for all $i, j = 1, 2, \ldots, n$. $A$ is nonnegative ($A \ge 0$) if $a_{ij} \ge 0$ for $i, j = 1, 2, \ldots, n$, and $A \ge B$ if and only if $A - B \ge 0$.

Definition 2.1 (Young, [15]). A real $n \times n$ matrix $A = (a_{ij})$ with $a_{ij} \le 0$ for all $i \ne j$ is called a Z-matrix.

Definition 2.2 (Varga, [16]). A matrix $A$ is irreducible if the directed graph associated to $A$ is strongly connected.

Lemma 2.3. If $A$ is an irreducible diagonally dominant Z-matrix with unit diagonal, and if assumption (6) holds, then the preconditioned matrix $A_{S_l}$ is a diagonally dominant Z-matrix.

Proof. The elements $a^{S_l}_{ij}$ of $A_{S_l}$ are given by
$$
a^{S_l}_{ij} =
\begin{cases}
a_{ij} - a_{k_i,\,i+1}a_{i+1,\,j} - a_{i,\,l_i}a_{l_i,\,j}, & 1 \le i \le n-2,\ 1 \le j \le n,\\
a_{ij} - a_{i,\,i+1}a_{i+1,\,j}, & i = n-1,\ 1 \le j \le n,\\
a_{ij}, & i = n,\ 1 \le j \le n.
\end{cases} \qquad (7)
$$

Since $A$ is a diagonally dominant Z-matrix, we have
$$
0 \le a_{k_i,\,i+1}a_{i+1,\,j} < 1 \ \text{ for } j \ne i+1, \qquad
0 \le a_{i,\,l_i}a_{l_i,\,j} < 1 \ \text{ for } j \ne l_i, \qquad
1 - a_{k_i,\,i+1}a_{i+1,\,i} > 0. \qquad (8)
$$

Therefore, the following inequalities hold:
$$
\begin{array}{ll}
p_i = a_{k_i,\,i+1}a_{i+1,\,i} \ge 0, &
q_i = a_{i,\,l_i}a_{l_i,\,i} \ge 0,\\[1mm]
r_i = \displaystyle\sum_{j=1}^{i-1} a_{k_i,\,i+1}a_{i+1,\,j} \ge 0, &
s_i = \displaystyle\sum_{j=1}^{i-1} a_{i,\,l_i}a_{l_i,\,j} \ge 0,\\[1mm]
t_i = -a_{k_i,\,i+1}\displaystyle\sum_{j=i+1}^{n} a_{i+1,\,j} \ge 0, &
v_i = -a_{i,\,l_i}\displaystyle\sum_{j=i+1}^{n} a_{l_i,\,j} \ge 0
\end{array}
$$
(for $i = n-1$ the terms involving $l_i$ are absent). We denote $p_n = q_n = r_n = s_n = t_n = v_n = 0$. Then the following inequality holds:
$$
p_i + q_i + r_i + s_i - t_i - v_i
  = a_{k_i,\,i+1}\sum_{j=1}^{n} a_{i+1,\,j} + a_{i,\,l_i}\sum_{j=1}^{n} a_{l_i,\,j} \le 0 .
$$
Furthermore, if $a_{i,\,i+1} \ne 0$ and $\sum_{j=1}^{n} a_{i+1,\,j} > 0$ for some $i < n$, then we have
$$
p_i + q_i + r_i + s_i - t_i - v_i < 0 \quad \text{for some } i < n. \qquad (9)
$$
Let $d_i(S_l)$, $l_i(S_l)$ and $u_i(S_l)$ be the sums of the moduli of the elements in row $i$ of the diagonal, strictly lower triangular and strictly upper triangular parts of $A_{S_l}$, respectively. The following equations hold:
$$
\begin{array}{ll}
d_i(S_l) = a^{S_l}_{ii} = 1 - p_i - q_i, & 1 \le i \le n,\\[1mm]
l_i(S_l) = \displaystyle\sum_{j=1}^{i-1} \bigl|a^{S_l}_{ij}\bigr| = l_i + r_i + s_i, & 1 \le i \le n,\\[1mm]
u_i(S_l) = \displaystyle\sum_{j=i+1}^{n} \bigl|a^{S_l}_{ij}\bigr| = u_i - t_i - v_i, & 1 \le i \le n,
\end{array} \qquad (10)
$$
where $l_i$ and $u_i$ are the sums of the elements in row $i$ of $L$ and $U$ for $A = I - L - U$, respectively. Since $A$ is a diagonally dominant Z-matrix, by (8) and by condition (6) the following relations hold:

$$
\begin{array}{ll}
1 - a_{k_i,\,i+1}a_{i+1,\,i} - a_{i,\,l_i}a_{l_i,\,i} > 0 & \text{for } j = i,\\[1mm]
a_{ij} - a_{k_i,\,i+1}a_{i+1,\,j} - a_{i,\,l_i}a_{l_i,\,j} \le 0 & \text{for } j \ne i,\ j \ne i+1,\\[1mm]
a_{i,\,i+1} - a_{k_i,\,i+1} - a_{i,\,l_i}a_{l_i,\,i+1} \le 0 & \text{for } j = i+1.
\end{array}
$$
Therefore $l_i(S_l) \ge 0$, $u_i(S_l) \ge 0$, and $A_{S_l}$ is a Z-matrix. Moreover, by (9) and by the assumption, we can easily obtain
$$
d_i(S_l) - l_i(S_l) - u_i(S_l)
  = d_i - l_i - u_i - (p_i + q_i + r_i + s_i - t_i - v_i) > 0 \quad \text{for all } i. \qquad (11)
$$
Therefore, $A_{S_l}$ satisfies the condition of diagonal dominance.

Lemma 2.4 [10, Lemma 2]. An upper bound on the spectral radius $\rho(T)$ of the Gauss-Seidel iteration matrix $T$ is given by
$$
\rho(T) \le \max_i \frac{u_i}{1 - l_i},
$$
where $l_i$ and $u_i$ are the sums of the moduli of the elements in row $i$ of the triangular matrices $L$ and $U$, respectively.
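This bound is easy to check numerically. A short sketch (our helper names; the formula assumes a unit-diagonal matrix, so that the denominator is $1 - l_i$):

```python
import numpy as np

def gs_bound(A):
    """Upper bound of Lemma 2.4 for a unit-diagonal matrix: max_i u_i / (1 - l_i)."""
    l = np.abs(np.tril(A, -1)).sum(axis=1)   # row sums of moduli, strictly lower part
    u = np.abs(np.triu(A, 1)).sum(axis=1)    # row sums of moduli, strictly upper part
    return np.max(u / (1.0 - l))

def gs_rho(A):
    """Actual spectral radius of the Gauss-Seidel iteration matrix T = M^{-1} N."""
    M = np.tril(A)
    N = M - A
    return max(abs(np.linalg.eigvals(np.linalg.solve(M, N))))

# For a strictly diagonally dominant Z-matrix with unit diagonal,
# gs_rho(A) <= gs_bound(A) < 1 is expected.
```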

Theorem 2.5. Let $A$ be a nonsingular diagonally dominant Z-matrix with unit diagonal elements and let condition (6) hold. Then $\rho(T_{S_l}) < 1$.

Proof. From (11) and $u_i(S_l) \ge 0$ we have $d_i(S_l) - l_i(S_l) > u_i(S_l) \ge 0$ for all $i$. This implies that
$$
\frac{u_i(S_l)}{d_i(S_l) - l_i(S_l)} < 1. \qquad (12)
$$
Hence, by Lemma 2.4, we have $\rho(T_{S_l}) < 1$.

Definition 2.6. Let A be an n real matrix. Then, =

A MN is referred to as:

1) a regular splitting, if M is nonsingular, and

1

0 M  0.

N

2) a weak regular splitting, if M is nonsingular, and

1

0

M  M N1 0.

3) a convergent splitting, if

1

< 1. M N

 

n n
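These properties can be verified numerically for a concrete splitting $A = M - N$; a small sketch (tolerance and helper names are ours):

```python
import numpy as np

def is_regular_splitting(M, N, tol=1e-12):
    """Regular splitting: M nonsingular, M^{-1} >= 0 and N >= 0 entrywise."""
    M_inv = np.linalg.inv(M)
    return np.all(M_inv >= -tol) and np.all(N >= -tol)

def is_weak_regular_splitting(M, N, tol=1e-12):
    """Weak regular splitting: M nonsingular, M^{-1} >= 0 and M^{-1} N >= 0 entrywise."""
    M_inv = np.linalg.inv(M)
    return np.all(M_inv >= -tol) and np.all(M_inv @ N >= -tol)

def is_convergent_splitting(M, N):
    """Convergent splitting: spectral radius of M^{-1} N is less than 1."""
    return max(abs(np.linalg.eigvals(np.linalg.solve(M, N)))) < 1.0
```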

Lemma 2.7 (Varga, [16]). Let $A \in \mathbb{R}^{n\times n}$ be a nonnegative and irreducible matrix. Then

1) $A$ has a positive real eigenvalue equal to its spectral radius $\rho(A)$;

2) to $\rho(A)$ there corresponds an eigenvector $x > 0$;

3) $\rho(A)$ is a simple eigenvalue of $A$;

4) $\rho(A)$ increases whenever any entry of $A$ increases.

Corollary 2.8 [16, Corollary 3.20]. If $A = (a_{ij})$ is a real, irreducibly diagonally dominant $n \times n$ matrix with $a_{ij} \le 0$ for all $i \ne j$ and $a_{ii} > 0$ for all $1 \le i \le n$, then $A^{-1} > O$.

Theorem 2.9 [16, Theorem 3.29]. Let $A = M - N$ be a regular splitting of the matrix $A$. Then $A$ is nonsingular with $A^{-1} \ge O$ if and only if $\rho(M^{-1}N) < 1$, where
$$
\rho(M^{-1}N) = \frac{\rho(A^{-1}N)}{1 + \rho(A^{-1}N)} < 1 .
$$

Theorem 2.10 (Gunawardena et al. [2, Theorem 2.2]). Let $A$ be a nonnegative matrix. Then

1) if $\alpha x \le Ax$ for some nonnegative vector $x$, $x \ne 0$, then $\alpha \le \rho(A)$;

2) if $Ax \le \beta x$ for some positive vector $x$, then $\rho(A) \le \beta$.

Moreover, if $A$ is irreducible and if $0 \ne \alpha x \le Ax \le \beta x$ for some nonnegative vector $x$, then $\alpha \le \rho(A) \le \beta$ and $x$ is a positive vector.

Let $B$ be a real Banach space, $B^{*}$ its dual, and $L(B)$ the space of all bounded linear operators mapping $B$ into itself. We assume that $B$ is generated by a normal cone $K$ [17]. As is defined in [17], the operator $A \in L(B)$ has the property "d" if its dual $A^{*}$ possesses a Frobenius eigenvector in the dual cone $K^{*}$, which is defined by
$$
K^{*} = \{\, x^{*} \in B^{*} : \langle x, x^{*} \rangle = x^{*}(x) \ge 0 \ \text{for all } x \in K \,\}.
$$
As is remarked in [1,17], when $B = \mathbb{R}^{n}$ and $K = \mathbb{R}^{n}_{+}$, all real matrices have the property "d". Therefore the case we are discussing fulfills the property "d". For the space of all $n \times n$ matrices, the theorem of Marek and Szyld can be stated as follows:

Theorem 2.11 (Marek and Szyld [17, Theorem 3.15]). Let $A_1 = M_1 - N_1$ and $A_2 = M_2 - N_2$ be weak regular splittings with $T_1 = M_1^{-1}N_1$ and $T_2 = M_2^{-1}N_2$, and let $x \ge 0$, $y \ge 0$ be such that $T_1 x = \rho(T_1)x$ and $T_2 y = \rho(T_2)y$. If $M_1^{-1} \ge M_2^{-1}$, and if either $(A_1 - A_2)x \ge 0$ and $A_1 x \ge 0$, or $(A_1 - A_2)y \ge 0$ and $A_1 y \ge 0$ with $y > 0$, then $\rho(T_1) \le \rho(T_2)$. Moreover, if $M_1^{-1} > M_2^{-1}$ and $N_1 \ge N_2$, then $\rho(T_1) < \rho(T_2)$.

Now, in the following theorem, we prove that $A_{S_l} = (I + S_n + S_m)A = M_{S_l} - N_{S_l}$ is a Gauss-Seidel convergent regular splitting.

Theorem 2.12. Let $A$ be an irreducibly diagonally dominant Z-matrix with unit diagonal, and let condition (6) hold. Then $A_{S_l} = M_{S_l} - N_{S_l}$ is a Gauss-Seidel convergent regular splitting. Moreover,
$$
\rho(T_{S_l}) \le \rho(T_{s+m}) < 1 .
$$

Proof. If $A$ is an irreducibly diagonally dominant Z-matrix, then by Lemma 2.3, $A_{S_l}$ is a diagonally dominant Z-matrix, so we have $A_{S_l}^{-1} \ge 0$. By the hypothesis $D_{S_l} \ge 0$ we have $(I - D_{S_l})^{-1} \ge I$, and the strictly lower triangular matrix $L + E_{S_l}$ has nonnegative elements. By considering the Neumann series, the following inequality holds:
$$
M_{S_l}^{-1} = [(I - D_{S_l}) - (L + E_{S_l})]^{-1}
 = \Bigl\{ I + (I - D_{S_l})^{-1}(L + E_{S_l}) + \bigl[(I - D_{S_l})^{-1}(L + E_{S_l})\bigr]^{2} + \cdots + \bigl[(I - D_{S_l})^{-1}(L + E_{S_l})\bigr]^{n-1} \Bigr\}(I - D_{S_l})^{-1} \ge 0 .
$$
Direct calculation shows that $N_{S_l} \ge 0$ holds. Thus, by Definition 2.6, $A_{S_l} = M_{S_l} - N_{S_l}$ is a Gauss-Seidel convergent regular splitting. Also, from [3] we have $A_{s+m} = M_{s+m} - N_{s+m}$ and
$$
M_{s+m}^{-1} = \Bigl\{ I + (I - D_{s+m})^{-1}(L + E_{s+m}) + \cdots + \bigl[(I - D_{s+m})^{-1}(L + E_{s+m})\bigr]^{n-1} \Bigr\}(I - D_{s+m})^{-1} \ge 0 .
$$
Direct comparison of the elements of $(I - D_{s+m})^{-1}$ and $(I - D_{S_l})^{-1}$, and of $L + E_{s+m}$ and $L + E_{S_l}$, shows that
$$
(I - D_{s+m})^{-1} \le (I - D_{S_l})^{-1}, \qquad L + E_{s+m} \le L + E_{S_l}.
$$
Thus $0 \le M_{s+m}^{-1} \le M_{S_l}^{-1}$.

Furthermore, since $S_n \ge S$, we have $S_l = S_n + S_m \ge S + S_m$. From Lemma 2.7, $x \ge 0$ is an eigenvector of $T_{S_l}$ corresponding to $\rho(T_{S_l})$, and $A_{S_l}x \ge A_{s+m}x \ge 0$. Therefore, from Theorem 2.11, $\rho(T_{S_l}) \le \rho(T_{s+m})$

holds. Denote
$$
A_{smr} = (I + S + S_m + R)A, \qquad
A_{S_l r} = (I + S_l + R)A, \qquad
A_{G} = (I + S + S_m + \gamma G)A, \qquad
A_{GS_l} = (I + S_l + \gamma G)A,
$$
and let $T_{smr}$, $T_{S_l r}$, $T_G$ and $T_{GS_l}$ be the iterative matrices associated with $A_{smr}$, $A_{S_l r}$, $A_G$ and $A_{GS_l}$, respectively. Then we can prove $\rho(T_{S_l r}) \le \rho(T_{smr})$ and $\rho(T_{GS_l}) \le \rho(T_G)$ similarly. In summary, we have the following inequalities:
$$
\rho(T_{GS_l}) \le \rho(T_G) \le \rho(T_{S_l r}) \le \rho(T_{smr})
 \le \rho(T_{S_l}) \le \rho(T_{s+m}) \le \rho(T_S) \le \rho(T).
$$

Remark 2.13. W. Li [18] used M-matrices instead of irreducible diagonally dominant Z-matrices; therefore Lemma 2.3 and Theorems 2.5 and 2.12 also hold for M-matrices.

3. Numerical Results

In this section, we test a simple example to compare and contrast the characteristics of the different preconditioners. Consider the matrix
$$
A = \begin{pmatrix}
 1 & -0.2 & -0.6 & -0.2\\
-0.2 &  1 & -0.3 & -0.1\\
-0.1 & -0.2 &  1 & -0.3\\
-0.2 & -0.3 & -0.2 &  1
\end{pmatrix}.
$$

Applying the Gauss-Seidel method, we have $\rho(T) = 0.5099$. By using the preconditioner $P = I + S$, we find that $A_S$ and $T_S$ have the following forms:
$$
A_S = \begin{pmatrix}
 0.96 & 0 & -0.66 & -0.22\\
-0.23 & 0.94 & 0 & -0.19\\
-0.16 & -0.29 & 0.94 & 0\\
-0.2 & -0.3 & -0.2 & 1
\end{pmatrix},
\qquad
T_S = \begin{pmatrix}
0 & 0 & 0.6875 & 0.2292\\
0 & 0 & 0.1682 & 0.2582\\
0 & 0 & 0.1689 & 0.1187\\
0 & 0 & 0.2217 & 0.1470
\end{pmatrix},
$$
and $\rho(T_S) = 0.3205$.
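Using the helpers sketched in the previous sections (assumed to be in scope), these quantities can be recomputed; the following short driver script is ours:

```python
import numpy as np

A = np.array([[ 1.0, -0.2, -0.6, -0.2],
              [-0.2,  1.0, -0.3, -0.1],
              [-0.1, -0.2,  1.0, -0.3],
              [-0.2, -0.3, -0.2,  1.0]])
I = np.eye(4)

A_S  = (I + gunawardena_S(A)) @ A                     # P = I + S
A_sm = (I + gunawardena_S(A) + morimoto_Sm(A)) @ A    # P = I + S + S_m
A_Sl = (I + nazari_Sl(A)) @ A                         # P = I + S_l (this paper)

for name, B in [("T", A), ("T_S", A_S), ("T_{s+m}", A_sm), ("T_{S_l}", A_Sl)]:
    print(name, round(gs_rho(B), 4))
# The values reported in this section are 0.5099, 0.3205, 0.2581 and 0.2328, respectively.
```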

Using the preconditioner $P_{s+m}$, we obtain
$$
A_{s+m} = \begin{pmatrix}
 0.90 & -0.12 & -0.06 & -0.4\\
-0.25 &  0.91 & -0.02 & -0.09\\
-0.16 & -0.29 &  0.94 & 0\\
-0.2 & -0.3 & -0.2 & 1
\end{pmatrix},
\qquad
T_{s+m} = \begin{pmatrix}
0 & 0.133 & 0.067 & 0.444\\
0 & 0.037 & 0.040 & 0.221\\
0 & 0.034 & 0.023 & 0.144\\
0 & 0.044 & 0.030 & 0.184
\end{pmatrix},
$$
and $\rho(T_{s+m}) = 0.2581$. For $P_l = I + S_l$, we have
$$
A_{S_l} = \begin{pmatrix}
 0.88 & -0.02 & -0.09 & -0.41\\
-0.25 &  0.91 & -0.02 & -0.09\\
-0.16 & -0.29 &  0.94 & 0\\
-0.2 & -0.3 & -0.2 & 1
\end{pmatrix},
\qquad
T_{S_l} = \begin{pmatrix}
0 & 0.023 & 0.102 & 0.465\\
0 & 0.006 & 0.050 & 0.227\\
0 & 0.006 & 0.033 & 0.149\\
0 & 0.007 & 0.042 & 0.191
\end{pmatrix},
$$
and $\rho(T_{S_l}) = 0.2328$.

For $P_{smr} = I + S + S_m + R$, we have
$$
A_{smr} = \begin{pmatrix}
 0.90 & -0.12 & -0.06 & -0.4\\
-0.25 &  0.91 & -0.02 & -0.09\\
-0.16 & -0.29 &  0.94 & 0\\
-0.08 & -0.08 & -0.21 & 0.87
\end{pmatrix},
\qquad
T_{smr} = \begin{pmatrix}
0 & 0.133 & 0.067 & 0.444\\
0 & 0.037 & 0.040 & 0.221\\
0 & 0.034 & 0.023 & 0.144\\
0 & 0.024 & 0.015 & 0.096
\end{pmatrix},
$$
and $\rho(T_{smr}) = 0.1694$. For $P_{S_l r} = I + S_l + R$, we have
$$
A_{S_l r} = \begin{pmatrix}
 0.88 & -0.02 & -0.09 & -0.41\\
-0.25 &  0.91 & -0.02 & -0.09\\
-0.16 & -0.29 &  0.94 & 0\\
-0.08 & -0.08 & -0.21 & 0.87
\end{pmatrix},
\qquad
T_{S_l r} = \begin{pmatrix}
0 & 0.023 & 0.102 & 0.465\\
0 & 0.006 & 0.050 & 0.227\\
0 & 0.006 & 0.033 & 0.149\\
0 & 0.004 & 0.022 & 0.100
\end{pmatrix},
$$
and $\rho(T_{S_l r}) = 0.1415$.

From the above results, we have $(g_{nj}) = (0.28,\ 0.38,\ 0.41,\ 0)$. Then, with $\gamma = 1$, $A_G$ and $T_G$ have the forms
$$
A_G = \begin{pmatrix}
 0.90 & -0.12 & -0.06 & -0.4\\
-0.25 &  0.91 & -0.02 & -0.09\\
-0.16 & -0.29 &  0.94 & 0\\
-0.04 & -0.06 & -0.07 & 0.78
\end{pmatrix},
\qquad
T_G = \begin{pmatrix}
0 & 0.133 & 0.067 & 0.444\\
0 & 0.037 & 0.040 & 0.221\\
0 & 0.034 & 0.024 & 0.144\\
0 & 0.012 & 0.008 & 0.050
\end{pmatrix},
$$
and $\rho(T_G) = 0.1218$. For $P_{GS_l} = I + S_l + \gamma G$ with $\gamma = 1$, we have
$$
A_{GS_l} = \begin{pmatrix}
 0.88 & -0.02 & -0.09 & -0.41\\
-0.25 &  0.91 & -0.02 & -0.09\\
-0.16 & -0.29 &  0.94 & 0\\
-0.04 & -0.06 & -0.07 & 0.78
\end{pmatrix},
\qquad
T_{GS_l} = \begin{pmatrix}
0 & 0.023 & 0.102 & 0.465\\
0 & 0.006 & 0.050 & 0.227\\
0 & 0.006 & 0.033 & 0.149\\
0 & 0.002 & 0.011 & 0.052
\end{pmatrix},
$$
and $\rho(T_{GS_l}) = 0.0940$. Since the preconditioned matrices differ only in the values of their last rows, the related iteration matrices also differ only in these values, as is shown in the above results. Thus, for other values of $\gamma$, the elements of $A_G$ and $A_{GS_l}$ differ from those shown above only in the last rows. Therefore, we hereafter show only the last rows.
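Under our reading of Section 1, the scaling factors $\gamma_1$, $\gamma_2$ and $\gamma_3$ used below are the values of $\gamma$ that zero the first, second and third entry of the preconditioned last row. Since only the last row of $GA$ is nonzero, each can be obtained from a single dot product; a sketch (helper name ours, with $G$ from the earlier `niki_G` sketch):

```python
import numpy as np

def gamma_zeroing(A, G, j):
    """gamma such that ((I + S + S_m + gamma*G) A)[n-1, j] = 0; the upper preconditioners
    S and S_m do not touch row n, so only the gamma*G*A term enters the last row."""
    n = A.shape[0]
    coeff = G[n - 1, :] @ A[:, j]      # coefficient of gamma in the (n, j) entry
    return -A[n - 1, j] / coeff

# With G = niki_G(A) for the test matrix A, j = 0, 1, 2 give approximately
# 1.227, 1.240 and 1.5625, matching gamma_1, gamma_2 and gamma_3 = gamma_opt below.
```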

By putting $\gamma_1 = 1.22699$, the matrices $A_{G\gamma_1}$, $T_{G\gamma_1}$, $A_{GS_l\gamma_1}$ and $T_{GS_l\gamma_1}$ have the following forms (last rows only):
$$
A_{G\gamma_1} = (0,\ -0.003,\ -0.042,\ 0.733), \qquad
T_{G\gamma_1} = (0,\ 0.0021,\ 0.0015,\ 0.0093),
$$
and $\rho(T_{G\gamma_1}) = 0.0779$, and
$$
A_{GS_l\gamma_1} = (0,\ -0.003,\ -0.042,\ 0.733), \qquad
T_{GS_l\gamma_1} = (0,\ 0.00036,\ 0.0021,\ 0.0096),
$$
and $\rho(T_{GS_l\gamma_1}) = 0.0509$.

For $\gamma_2 = 1.23966$ we have
$$
A_{G\gamma_2} = (0.0021,\ 0,\ -0.0413,\ 0.7310), \qquad
T_{G\gamma_2} = (0,\ 0.0015,\ 0.0011,\ 0.0069),
$$
and $\rho(T_{G\gamma_2}) = 0.0751$, and
$$
A_{GS_l\gamma_2} = (0.0021,\ 0,\ -0.0413,\ 0.7310), \qquad
T_{GS_l\gamma_2} = (0,\ 0.00026,\ 0.0016,\ 0.0071),
$$
and $\rho(T_{GS_l\gamma_2}) = 0.0483$.

For $\gamma_3 = \gamma_{\mathrm{opt}} = 1.5625$ we have
$$
A_{G\gamma_3} = (0.0547,\ 0.0781,\ 0,\ 0.6609), \qquad
T_{G\gamma_3} = (0,\ -0.0154,\ -0.0102,\ -0.0629),
$$
and $\rho(T_{G\gamma_3}) = 0.0227$, and
$$
A_{GS_l\gamma_3} = (0.0547,\ 0.0781,\ 0,\ 0.6609), \qquad
T_{GS_l\gamma_3} = (0,\ -0.0026,\ -0.0144,\ -0.0654),
$$
and $\rho(T_{GS_l\gamma_3}) = 0.0216$.

From the numerical results, we see that the method with the preconditioner $P_{GS_l}$ at $\gamma = \gamma_{\mathrm{opt}}$ produces a spectral radius smaller than those of the preconditioners introduced above.

REFERENCES

[1] T. Z. Huang, G. H. Cheng and X. Y. Cheng, “Modified SOR-Type Iterative Method for Z-Matrices,” Applied Mathematics and Computation, Vol. 175, No. 1, 2006, pp. 258-268. doi:10.1016/j.amc.2005.07.050

[2] A. D. Gunawardena, S. K. Jain and L. Snyder, “Modified Iterative Method for Consistent Linear Systems,” Linear Algebra and Its Applications, Vol. 154-156, 1991, pp. 123-143. doi:10.1016/0024-3795(91)90376-8

[3] M. Morimoto, K. Harada, M. Sakakihara and H. Sawami, “The Gauss-Seidel Iterative Method with the Preconditioning Matrix (I + S + Sm),” Japan Journal of Industrial and Applied Mathematics, Vol. 21, No. 1, 2004, pp. 25-34. doi:10.1007/BF03167430

[4] D. Noutsos and M. Tzoumas, “On Optimal Improvements of Classical Iterative Schemes for Z-Matrices,” Journal of Computational and Applied Mathematics, Vol. 188, No. 1, 2006, pp. 89-106. doi:10.1016/j.cam.2005.03.057

[5] H. Niki, T. Kohno and M. Morimoto, “The Preconditioned Gauss-Seidel Method Faster than the SOR Method,” Journal of Computational and Applied Mathematics, Vol. 218, No. 1, 2008, pp. 59-71.

doi:10.1016/j.cam.2007.07.002

[6] H. Niki, T. Kohno and K. Abe, “An Extended GS Method for Dense Linear System,” Journal of Computational and Applied Mathematics, Vol. 231, No. 1, 2009, pp. 177-186.

(7)

[7] M. Morimoto, H. Kotakemori, T. Kohno and H. Niki, “The Gauss-Seidel Method with Preconditioner (I + R),” Transactions of the Japan Society for Industrial and Applied Mathematics, Vol. 13, 2003, pp. 439-445.

[8] H. Kotakemori, K. Harada, M. Morimoto and H. Niki, “A Comparison Theorem for the Iterative Method with the Preconditioner (I + Smax),” Journal of Computational and Applied Mathematics, Vol. 145, No. 2, 2002, pp. 373-378.

doi:10.1016/S0377-0427(01)00588-X

[9] A. Hadjidimos, D. Noutsos and M. Tzoumas, “More on Modifications and Improvements of Classical Iterative Schemes for Z-Matrices,” Linear Algebra and Its Applications, Vol. 364, 2003, pp. 253-279.

doi:10.1016/S0024-3795(02)00570-0

[10] T. Kohno, H. Kotakemori, H. Niki and M. Usui, “Improving the Gauss-Seidel Method for Z-Matrices,” Linear Algebra and Its Applications, Vol. 267, No. 1-3, 1997, pp. 113-123.

[11] W. Li and W. Sun, “Modified Gauss-Seidel Type Methods and Jacobi Type Methods for Z-Matrices,” Linear Algebra and Its Applications, Vol. 317, 2000, pp. 227-240.

doi:10.1016/S0024-3795(00)00140-3

[12] J. P. Milaszewicz, “On Modified Jacobi Linear Operators,”

Linear Algebra and Its Applications, Vol. 51, 1983, pp. 127-136. doi:10.1016/0024-3795(83)90153-2

[13] J. P. Milaszewicz, “Improving Jacobi and Gauss-Seidel Iterations,” Linear Algebra and Its Applications, Vol. 93, 1987, pp. 161-170. doi:10.1016/S0024-3795(87)90321-1

[14] H. Niki, K. Harada, M. Morimoto and M. Sakakihara, “The Survey of Preconditioners Used for Accelerating the Rate of Convergence in the Gauss-Seidel Method,” Journal of Computational and Applied Mathematics, Vol. 164-165, 2004, pp. 587-600. doi:10.1016/j.cam.2003.11.012

[15] D. M. Young, “Iterative Solution of Large Linear Systems,” Academic Press, New York, 1971.

[16] R. S. Varga, “Matrix Iterative Analysis,” Prentice-Hall, Englewood Cliffs, 1962.

[17] I. Marek and D. Szyld, “Comparison Theorems for Weak Splittings of Bound Operators,” Numerische Mathematik, Vol. 58, No. 1, 1990, pp. 387-397.

doi:10.1007/BF01385632

[18] W. Li, “Comparison Results for Solving Preconditioned Linear Systems,” Journal of Computational and Applied Mathematics, Vol. 176, No. 2, 2005, pp. 319-329.
