
A New Iterative Solution Method for Solving Multiple Linear Systems

Saeed Karimi

Department of Mathematics, Persian Gulf University, Bushehr, Iran. Email: karimi@pgu.ac.ir

Received July 25, 2012; revised August 25, 2012; accepted September 8, 2012

ABSTRACT

In this paper, a new iterative solution method is proposed for solving the multiple linear systems $A^{(i)} x^{(i)} = b^{(i)}$, for $1 \le i \le s$, where the coefficient matrices $A^{(i)}$ and the right-hand sides $b^{(i)}$ are arbitrary in general. The proposed method is based on the global least squares (GL-LSQR) method. A linear operator $\mathcal{A}: \mathbb{R}^{n\times s} \to \mathbb{R}^{n\times s}$ is defined to connect all the linear systems together. To approximate all numerical solutions of the multiple linear systems simultaneously, the GL-LSQR method is applied to the operator and the approximate solutions are obtained recursively. The presented method is compared with the well-known LSQR method. Finally, numerical experiments on test matrices are presented to show the efficiency of the new method.

Keywords: Iterative Method; Multiple Linear Systems; LSQR Method; GL-LSQR Method; Projection Method

1. Introduction

We want to solve, using the global least squares (GL-LSQR) method, the following linear systems:

$$A^{(i)} x^{(i)} = b^{(i)}, \quad 1 \le i \le s, \qquad (1)$$

where the $A^{(i)}$ are arbitrary matrices of order $n$, and in general $A^{(i)} \ne A^{(j)}$ and $b^{(i)} \ne b^{(j)}$ for $i \ne j$. In spite of that, in many practical applications the coefficient matrices and the right-hand sides are not arbitrary, and often there is information that can be shared among them. Multiple linear systems (1) arise in many problems in scientific computing and engineering applications, including recursive least squares computations [1,2], wave scattering problems [3,4], numerical methods for integral equations [4,5], and image restoration [6].

Many authors, see [7,8], have investigated how to approximate the solutions of the multiple linear systems (1) in the special case of the same coefficient matrix but different right-hand sides, i.e.,

$$A x^{(i)} = b^{(i)}, \quad 1 \le i \le s. \qquad (2)$$

Recently, S. Karimi and F. Toutounian proposed an iterative method for solving the linear systems (2) with some advantages over the existing methods; see [9-11] for more details. In [12], Tony F. Chan and Michael K. Ng presented the Galerkin projection method for solving the linear systems (1). They focused on the seed

projection method, which generates a Krylov subspace from a set of direction vectors obtained by solving one of the systems, called the seed system, by the CG method, and then projects the residuals of the other systems onto the generated Krylov subspace to get the approximate solutions. Note that the coefficient matrices in this method are real symmetric positive definite.

In this paper, we propose a new method to solve the linear systems (1) simultaneously, where the coefficient matrices and right-hand sides are arbitrary. We define a linear operator $\mathcal{A}: \mathbb{R}^{n\times s} \to \mathbb{R}^{n\times s}$ to connect all the linear systems (1) together. Then we apply the GL-LSQR method [13] to the linear operator and obtain the approximate solutions recursively and simultaneously. In the new method, the linear operator $\mathcal{A}$ is reduced to a lower global bidiagonal matrix form, via a procedure we call $\mathcal{A}$-Bidiag. We obtain a recurrence formula for generating the sequence of approximate solutions. Our new method has certain advantages over the existing methods. In the new method, the coefficient matrices $A^{(i)}$ and right-hand sides $b^{(i)}$ are arbitrary. Also, we do not need to store the basis vectors, we do not need to predetermine a subspace dimension, and the approximate solutions and residuals are cheaply computed at every stage of the algorithm, because they are updated with short-term recurrences.


The remainder of the paper is organized as follows. Section 2 recalls the GL-LSQR method. Section 3 presents the new method for solving the multiple linear systems (1); we also show how to reduce the Sylvester equation, in a special case, to the multiple linear systems (1). Section 4 is devoted to some numerical experiments and a comparison of the new method with the well-known LSQR method, when the latter is applied to the $s$ linear systems independently. Finally, we make some concluding remarks in Section 5.

We use the following notations. For $X$ and $Y$, two matrices in $\mathbb{R}^{n\times s}$, we consider the inner product $\langle X, Y \rangle_F = \mathrm{tr}(X^T Y)$, where $\mathrm{tr}(\cdot)$ denotes the trace of a matrix. The associated norm is the Frobenius norm, denoted by $\|\cdot\|_F$. The notation $X \perp_F Y$ means that $\langle X, Y \rangle_F = 0$, and $X(:,i)$ denotes the $i$th column of $X$. Finally, we use the notation $*$ for the following product:

$$\mathbb{V}_m * y = \sum_{j=1}^{m} y_j V_j, \qquad (3)$$

where $\mathbb{V}_m = [V_1, V_2, \ldots, V_m]$, $V_j \in \mathbb{R}^{n\times s}$ for $1 \le j \le m$, and $y = (y_1, y_2, \ldots, y_m)^T \in \mathbb{R}^m$. In the same way, we define

$$\mathbb{V}_m * T = \left[\mathbb{V}_m * T(:,1),\; \mathbb{V}_m * T(:,2),\; \ldots,\; \mathbb{V}_m * T(:,m)\right], \qquad (4)$$

where $T$ is an $m \times m$ matrix. It is easy to show that the following relations are satisfied:

$$\mathbb{V}_m * (y + z) = \mathbb{V}_m * y + \mathbb{V}_m * z, \qquad (\mathbb{V}_m * T) * y = \mathbb{V}_m * (T y), \qquad (5)$$

where $y$ and $z$ are two vectors of $\mathbb{R}^m$.
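To make the $*$ notation concrete, the following Python/NumPy sketch implements the product (3), its matrix extension (4), and checks the relations (5) on random data; the function names `star` and `star_mat` are ours, not the paper's.

```python
import numpy as np

def star(V_blocks, y):
    """Product (3): [V_1, ..., V_m] * y = sum_j y_j V_j, each V_j an n-by-s block."""
    return sum(yj * Vj for yj, Vj in zip(y, V_blocks))

def star_mat(V_blocks, T):
    """Product (4): block column j of the result is [V_1, ..., V_m] * T(:, j)."""
    return [star(V_blocks, T[:, j]) for j in range(T.shape[1])]

# Check the relations (5) on random data.
rng = np.random.default_rng(0)
n, s, m = 6, 3, 4
V = [rng.standard_normal((n, s)) for _ in range(m)]
y, z = rng.standard_normal(m), rng.standard_normal(m)
T = rng.standard_normal((m, m))

assert np.allclose(star(V, y + z), star(V, y) + star(V, z))   # V*(y+z) = V*y + V*z
assert np.allclose(star(star_mat(V, T), y), star(V, T @ y))   # (V*T)*y = V*(Ty)
```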

2. The Global Least Squares (GL-LSQR) Method

In this section, we recall some fundamental properties of the GL-LSQR algorithm [13], which is an iterative method for solving the multiple linear systems (2). When all the $b^{(i)}$'s are available simultaneously, the multiple linear systems (2) can be written as

$$AX = B, \qquad (6)$$

where $A$ is an $n \times n$ nonsingular arbitrary matrix, and $B$ and $X$ are $n \times s$ rectangular matrices whose columns are $b^{(1)}, b^{(2)}, \ldots, b^{(s)}$ and $x^{(1)}, x^{(2)}, \ldots, x^{(s)}$, respectively. For solving the matrix Equation (6), the GL-LSQR method uses a procedure, namely the Global-Bidiag procedure, to reduce $A$ to the global lower bidiagonal form. The Global-Bidiag procedure can be described as follows.

Global-Bidiag (starting matrix $B$; reduction to global lower bidiagonal form):

$$\beta_1 U_1 = B, \qquad \alpha_1 V_1 = A^T U_1,$$

$$\left.\begin{array}{l} \beta_{i+1} U_{i+1} = A V_i - \alpha_i U_i \\ \alpha_{i+1} V_{i+1} = A^T U_{i+1} - \beta_{i+1} V_i \end{array}\right\} \quad i = 1, 2, \ldots, \qquad (7)$$

where $U_i, V_i \in \mathbb{R}^{n\times s}$. The scalars $\alpha_i \ge 0$ and $\beta_i \ge 0$ are chosen so that $\|U_i\|_F = \|V_i\|_F = 1$.

With the definitions

$$\mathbb{U}_k \equiv [U_1, U_2, \ldots, U_k], \qquad \mathbb{V}_k \equiv [V_1, V_2, \ldots, V_k],$$

$$T_k \equiv \begin{bmatrix} \alpha_1 & & & \\ \beta_2 & \alpha_2 & & \\ & \beta_3 & \ddots & \\ & & \ddots & \alpha_k \\ & & & \beta_{k+1} \end{bmatrix},$$

the recurrence relations (7) may be rewritten as:

$$\mathbb{U}_{k+1} * (\beta_1 e_1) = B, \qquad (8)$$

$$A \mathbb{V}_k = \mathbb{U}_{k+1} * T_k, \qquad (9)$$

$$A^T \mathbb{U}_{k+1} = \mathbb{V}_k * T_k^T + \alpha_{k+1} V_{k+1} e_{k+1}^T. \qquad (10)$$

For the Global-Bidiag procedure, we have the following propositions.
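As a quick numerical sanity check (our own, not from the paper), the sketch below runs $k$ steps of the Global-Bidiag procedure (7) on random data and verifies relations (8) and (9); the helper names are hypothetical.

```python
import numpy as np

def global_bidiag(A, B, k):
    """k steps of Global-Bidiag (7); returns U_1..U_{k+1}, V_1..V_k, T_k, beta_1."""
    U, V, alphas, betas = [], [], [], []
    beta = np.linalg.norm(B, 'fro'); betas.append(beta); U.append(B / beta)
    R = A.T @ U[0]
    alpha = np.linalg.norm(R, 'fro'); alphas.append(alpha); V.append(R / alpha)
    for i in range(k):
        W = A @ V[i] - alphas[i] * U[i]
        beta = np.linalg.norm(W, 'fro'); betas.append(beta); U.append(W / beta)
        R = A.T @ U[i + 1] - beta * V[i]
        alpha = np.linalg.norm(R, 'fro'); alphas.append(alpha); V.append(R / alpha)
    # T_k: (k+1)-by-k lower bidiagonal, alphas on the diagonal, betas below it.
    T = np.zeros((k + 1, k))
    for j in range(k):
        T[j, j], T[j + 1, j] = alphas[j], betas[j + 1]
    return U[:k + 1], V[:k], T, betas[0]

star = lambda blocks, y: sum(c * blk for c, blk in zip(y, blocks))

rng = np.random.default_rng(1)
n, s, k = 8, 3, 4
A, B = rng.standard_normal((n, n)), rng.standard_normal((n, s))
U, V, T, beta1 = global_bidiag(A, B, k)

assert np.allclose(star(U, beta1 * np.eye(k + 1)[:, 0]), B)   # relation (8)
for j in range(k):                                            # relation (9), column j
    assert np.allclose(A @ V[j], star(U, T[:, j]))
```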

Proposition 1 [13]. Suppose that $k$ steps of the Global-Bidiag procedure have been taken. Then the block vectors $V_1, V_2, \ldots, V_k$ and $U_1, U_2, \ldots, U_{k+1}$ are F-orthonormal bases of the Krylov subspaces $\mathcal{K}_k(A^T A, V_1)$ and $\mathcal{K}_{k+1}(A A^T, U_1)$, respectively.

Proposition 2 [13]. The Global-Bidiag procedure will stop at step $m$ if and only if $m = \min\{m_1, l\}$, where $m_1$ and $l$ are the grades of $V_1$ and $U_1$ with respect to $A^T A$ and $A A^T$, respectively.

Proposition 3 [13]. Let $\mathbb{U}_k$ be the matrix defined by $\mathbb{U}_k \equiv [U_1, U_2, \ldots, U_k]$, where the matrices $U_i \in \mathbb{R}^{n\times s}$, $i = 1, \ldots, k$, are generated by the Global-Bidiag procedure. Then

$$\|\mathbb{U}_k * h\|_F = \|h\|_2.$$

By using the Global-Bidiag procedure, the GL-LSQR algorithm constructs an approximate solution of the form $X_k = \mathbb{V}_k * y_k$, where $y_k = [y_k^{(1)}, \ldots, y_k^{(k)}]^T \in \mathbb{R}^k$ solves the least-squares problem

$$\min_X \|B - AX\|_F.$$

The main steps of the GL-LSQR algorithm can be summarized as follows.

Algorithm 1: GL-LSQR algorithm

1) Set $X_0 = 0$
2) $\beta_1 = \|B\|_F$, $U_1 = B/\beta_1$, $\alpha_1 = \|A^T U_1\|_F$, $V_1 = A^T U_1/\alpha_1$
3) Set $W_1 = V_1$, $\bar\phi_1 = \beta_1$, $\bar\rho_1 = \alpha_1$
4) For $i = 1, 2, \ldots$ until convergence, Do:
5) $\hat W_i = A V_i - \alpha_i U_i$
6) $\beta_{i+1} = \|\hat W_i\|_F$
7) $U_{i+1} = \hat W_i/\beta_{i+1}$
8) $\hat S_i = A^T U_{i+1} - \beta_{i+1} V_i$
9) $\alpha_{i+1} = \|\hat S_i\|_F$
10) $V_{i+1} = \hat S_i/\alpha_{i+1}$
11) $\rho_i = (\bar\rho_i^2 + \beta_{i+1}^2)^{1/2}$
12) $c_i = \bar\rho_i/\rho_i$
13) $s_i = \beta_{i+1}/\rho_i$
14) $\theta_{i+1} = s_i \alpha_{i+1}$
15) $\bar\rho_{i+1} = c_i \alpha_{i+1}$
16) $\phi_i = c_i \bar\phi_i$
17) $\bar\phi_{i+1} = -s_i \bar\phi_i$
18) $X_i = X_{i-1} + (\phi_i/\rho_i) W_i$
19) $W_{i+1} = V_{i+1} - (\theta_{i+1}/\rho_i) W_i$
20) If $|\bar\phi_{i+1}|$ is small enough then stop
21) EndDo

More details about the GL-LSQR algorithm can be found in [13].
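A compact Python/NumPy transcription of Algorithm 1 (our reading of it; the function name, tolerance, and iteration cap are ours) looks as follows. Note that $|\bar\phi_{i+1}|$ serves directly as the residual F-norm, so no extra residual computation is needed.

```python
import numpy as np

def gl_lsqr(A, B, tol=1e-10, maxit=1000):
    """GL-LSQR (Algorithm 1): minimizes ||B - A X||_F, all columns at once."""
    X = np.zeros_like(B)
    beta = np.linalg.norm(B, 'fro'); U = B / beta
    R = A.T @ U
    alpha = np.linalg.norm(R, 'fro'); V = R / alpha
    W = V.copy()
    phi_bar, rho_bar = beta, alpha
    for _ in range(maxit):
        # One Global-Bidiag step (7).
        Wh = A @ V - alpha * U
        beta = np.linalg.norm(Wh, 'fro'); U = Wh / beta
        Sh = A.T @ U - beta * V
        alpha = np.linalg.norm(Sh, 'fro'); V = Sh / alpha
        # Plane rotation annihilating beta_{i+1} (steps 11-17).
        rho = np.hypot(rho_bar, beta)
        c, s = rho_bar / rho, beta / rho
        theta, rho_bar = s * alpha, c * alpha
        phi, phi_bar = c * phi_bar, -s * phi_bar
        # Short-term updates of the solution and direction matrix (steps 18-19).
        X = X + (phi / rho) * W
        W = V - (theta / rho) * W
        if abs(phi_bar) < tol:          # |phi_bar| = ||B - A X||_F
            return X
    return X

rng = np.random.default_rng(2)
n, s = 40, 5
A = 4 * np.eye(n) + 0.5 * rng.standard_normal((n, n))   # well-conditioned test matrix
B = rng.standard_normal((n, s))
X = gl_lsqr(A, B)
print(np.linalg.norm(B - A @ X, 'fro'))                  # close to the tolerance
```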

3. The GL-LSQR-Like Operator Method

In this section, we propose a new method for solving the linear systems (1). To this end, we define the following linear operator:

$$\mathcal{A}: \mathbb{R}^{n\times s} \to \mathbb{R}^{n\times s}, \qquad (11)$$

$$\mathcal{A}(X) = \left[A^{(1)} X(:,1), \ldots, A^{(s)} X(:,s)\right], \qquad (12)$$

where $A^{(j)}$, $j = 1, \ldots, s$, are the coefficient matrices of the multiple linear systems (1). Therefore, the linear systems (1) can be written as

$$\mathcal{A}(X) = B, \qquad (13)$$

where $B$ is an $n \times s$ rectangular matrix whose columns are $b^{(1)}, b^{(2)}, \ldots, b^{(s)}$, the right-hand sides of the linear systems (1).

Definition 1: Let $\mathcal{A}$ be the linear operator (11). Then

$$\mathcal{A}^T(X) = \left[(A^{(1)})^T X(:,1), \ldots, (A^{(s)})^T X(:,s)\right].$$
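In code, the operator (12) and its transpose from Definition 1 act column by column. A minimal NumPy sketch (function names ours) also confirms that $\mathcal{A}^T$ is the adjoint of $\mathcal{A}$ with respect to the inner product $\langle \cdot, \cdot \rangle_F$:

```python
import numpy as np

def apply_op(As, X):
    """Operator (12): column j of the result is A^{(j)} X[:, j]."""
    return np.column_stack([As[j] @ X[:, j] for j in range(len(As))])

def apply_op_T(As, X):
    """Definition 1: column j of the result is (A^{(j)})^T X[:, j]."""
    return np.column_stack([As[j].T @ X[:, j] for j in range(len(As))])

rng = np.random.default_rng(3)
n, s = 5, 3
As = [rng.standard_normal((n, n)) for _ in range(s)]
X, Y = rng.standard_normal((n, s)), rng.standard_normal((n, s))

# <A(X), Y>_F = <X, A^T(Y)>_F for the Frobenius inner product tr(X^T Y).
assert np.isclose(np.trace(apply_op(As, X).T @ Y),
                  np.trace(X.T @ apply_op_T(As, Y)))
```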

Consider the block operator equation (13). Similar to the well-known block Krylov subspace

$$\mathcal{K}_m(A; R) = \mathrm{span}\{R, AR, A^2 R, \ldots, A^{m-1} R\},$$

where $R$ is the residual of the operator equation (13), we define the following block Krylov-like subspace.

Definition 2: Let $\mathcal{A}$ be the linear operator (11). Then

$$\mathcal{K}_m(\mathcal{A}; R) = \mathrm{span}\{R, \mathcal{A}(R), \ldots, \mathcal{A}^{m-1}(R)\},$$

where $\mathcal{A}^i = \underbrace{\mathcal{A} \circ \mathcal{A} \circ \cdots \circ \mathcal{A}}_{i\ \mathrm{times}}$ and $\mathcal{A} \circ \mathcal{A}$ is the composition of two operators.

Definition 3: Let $\mathcal{A}$ be the linear operator (11) and $\mathbb{V}_m = [V_1, V_2, \ldots, V_m] \in \mathbb{R}^{n \times ms}$, where $V_j \in \mathbb{R}^{n\times s}$, $j = 1, 2, \ldots, m$. Then

$$\mathcal{A}(\mathbb{V}_m) = \left[\mathcal{A}(V_1), \mathcal{A}(V_2), \ldots, \mathcal{A}(V_m)\right] \in \mathbb{R}^{n \times ms}.$$

To approximate the solution of the block operator Equation (13), we present a new algorithm, which will be referred to as the $\mathcal{A}$-GL-LSQR algorithm. It is based on a Global-Bidiag-like procedure, which will be referred to as $\mathcal{A}$-Bidiag. The $\mathcal{A}$-Bidiag procedure reduces the linear operator $\mathcal{A}$ to the lower bidiagonal matrix form. This procedure can be described as follows.

$\mathcal{A}$-Bidiag (starting matrix $B$; reduction to lower bidiagonal matrix form):

$$\beta_1 U_1 = B, \qquad \alpha_1 V_1 = \mathcal{A}^T(U_1),$$

$$\left.\begin{array}{l} \beta_{i+1} U_{i+1} = \mathcal{A}(V_i) - \alpha_i U_i \\ \alpha_{i+1} V_{i+1} = \mathcal{A}^T(U_{i+1}) - \beta_{i+1} V_i \end{array}\right\} \quad i = 1, 2, \ldots, \qquad (14)$$

where $U_i, V_i \in \mathbb{R}^{n\times s}$. The scalars $\alpha_i \ge 0$ and $\beta_i \ge 0$ are chosen so that $\|U_i\|_F = \|V_i\|_F = 1$.
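The $\mathcal{A}$-Bidiag recurrence (14) transcribes directly, reusing column-wise applications of $A^{(j)}$ and $(A^{(j)})^T$; the sketch below (our own construction) also tests the F-orthonormality of the $U_i$ blocks asserted by Proposition 4.

```python
import numpy as np

def op_bidiag(As, B, k):
    """k steps of the A-Bidiag procedure (14) for the operator (12)."""
    op  = lambda X: np.column_stack([As[j] @ X[:, j]   for j in range(len(As))])
    opT = lambda X: np.column_stack([As[j].T @ X[:, j] for j in range(len(As))])
    U, V = [], []
    beta = np.linalg.norm(B, 'fro'); U.append(B / beta)
    R = opT(U[0])
    alpha = np.linalg.norm(R, 'fro'); V.append(R / alpha)
    for i in range(k):
        W = op(V[i]) - alpha * U[i]
        beta = np.linalg.norm(W, 'fro'); U.append(W / beta)
        R = opT(U[i + 1]) - beta * V[i]
        alpha = np.linalg.norm(R, 'fro'); V.append(R / alpha)
    return U, V

rng = np.random.default_rng(4)
n, s, k = 7, 3, 3
As = [rng.standard_normal((n, n)) for _ in range(s)]
B = rng.standard_normal((n, s))
U, V = op_bidiag(As, B, k)

# Gram matrix of the U blocks under <X, Y>_F = tr(X^T Y): should be the identity.
G = np.array([[np.trace(Ui.T @ Uj) for Uj in U] for Ui in U])
assert np.allclose(G, np.eye(k + 1), atol=1e-10)
```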

Similar to the Global-Bidiag procedure, we define

$$\mathbb{U}_m \equiv [U_1, U_2, \ldots, U_m], \qquad \mathbb{V}_m \equiv [V_1, V_2, \ldots, V_m],$$

$$T_m \equiv \begin{bmatrix} \alpha_1 & & & \\ \beta_2 & \alpha_2 & & \\ & \beta_3 & \ddots & \\ & & \ddots & \alpha_m \\ & & & \beta_{m+1} \end{bmatrix}.$$

According to the notation $*$ and by using Definition 3, the recurrence relations (14) may be rewritten as:

$$\mathbb{U}_{m+1} * (\beta_1 e_1) = B, \qquad (15)$$

$$\mathcal{A}(\mathbb{V}_m) = \mathbb{U}_{m+1} * T_m, \qquad (16)$$

$$\mathcal{A}^T(\mathbb{U}_{m+1}) = \mathbb{V}_m * T_m^T + \alpha_{m+1} V_{m+1} e_{m+1}^T. \qquad (17)$$

Proposition 4. Suppose that $m$ steps of the $\mathcal{A}$-Bidiag procedure have been taken. Then the block vectors $V_1, \ldots, V_m$ and $U_1, \ldots, U_{m+1}$ are F-orthonormal bases of the Krylov-like subspaces $\mathcal{K}_m(\mathcal{A}^T \mathcal{A}; V_1)$ and $\mathcal{K}_{m+1}(\mathcal{A} \mathcal{A}^T; U_1)$, respectively. The proof of this proposition is similar to that given in [13].

In the $\mathcal{A}$-GL-LSQR method, the approximate solution is given by

$$X_m = \mathbb{V}_m * y_m, \qquad (18)$$

where $y_m \in \mathbb{R}^m$ solves the least-squares problem

$$\min_X \|B - \mathcal{A}(X)\|_F.$$

Let the residual

$$R_m = B - \mathcal{A}(X_m) \qquad (19)$$

be defined. According to the linearity of the operator $\mathcal{A}$, it is easy to show that $\mathcal{A}(X_m) = \mathcal{A}(\mathbb{V}_m) * y_m$.

Also, it readily follows from (15), (16) and the properties of the product $*$ that the equation

$$R_m = B - \mathcal{A}(\mathbb{V}_m) * y_m = \mathbb{U}_{m+1} * (\beta_1 e_1) - (\mathbb{U}_{m+1} * T_m) * y_m = \mathbb{U}_{m+1} * (\beta_1 e_1 - T_m y_m)$$

holds to working accuracy. To minimize the $m$th residual norm $\|R_m\|_F$, since $\mathbb{U}_{m+1}$ is F-orthonormal, by using Proposition 3 we choose $y_m$ so that

$$\|R_m\|_F = \|\beta_1 e_1 - T_m y_m\|_2 \qquad (20)$$

is minimum. This minimization problem is carried out by applying the QR decomposition [13], where a unitary matrix $Q_m$ is determined so that

$$Q_m \begin{bmatrix} T_m & \beta_1 e_1 \end{bmatrix} = \begin{bmatrix} R_m & f_m \\ 0 & \bar\phi_{m+1} \end{bmatrix}, \qquad R_m = \begin{bmatrix} \rho_1 & \theta_2 & & & \\ & \rho_2 & \theta_3 & & \\ & & \ddots & \ddots & \\ & & & \rho_{m-1} & \theta_m \\ & & & & \rho_m \end{bmatrix}, \qquad f_m = \begin{bmatrix} \phi_1 \\ \phi_2 \\ \vdots \\ \phi_m \end{bmatrix},$$

where $\rho_i$, $\theta_i$, $\phi_i$ and $\bar\phi_{m+1}$ are scalars. The QR factorization is determined by constructing the $m$th plane rotation $Q_{m,m+1}$ to operate on rows $m$ and $m+1$ of the transformed $[T_m \;\; \beta_1 e_1]$ to annihilate $\beta_{m+1}$. This gives the following simple recurrence relation:

$$\begin{bmatrix} c_m & s_m \\ -s_m & c_m \end{bmatrix} \begin{bmatrix} \bar\rho_m & 0 & \bar\phi_m \\ \beta_{m+1} & \alpha_{m+1} & 0 \end{bmatrix} = \begin{bmatrix} \rho_m & \theta_{m+1} & \phi_m \\ 0 & \bar\rho_{m+1} & \bar\phi_{m+1} \end{bmatrix},$$

where $\bar\rho_1 = \alpha_1$, $\bar\phi_1 = \beta_1$, and the scalars $c_m$ and $s_m$ are the nontrivial elements of $Q_{m,m+1}$. The quantities $\bar\rho_m$ and $\bar\phi_m$ are intermediate scalars that are subsequently replaced by $\rho_m$ and $\phi_m$.
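The rotation recurrence can be checked against a dense least-squares solve: running it on a random lower bidiagonal $T_m$ should reproduce $y_m = R_m^{-1} f_m$ and the residual value $|\bar\phi_{m+1}|$. The sketch below is our own verification, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(8)
m = 6
alphas = rng.uniform(0.5, 2.0, m + 1)   # alpha_1 .. alpha_{m+1}
betas  = rng.uniform(0.5, 2.0, m + 1)   # beta_1 .. beta_{m+1}

# Lower bidiagonal T_m ((m+1) x m), as in Section 2.
T = np.zeros((m + 1, m))
for j in range(m):
    T[j, j] = alphas[j]          # alpha_{j+1}
    T[j + 1, j] = betas[j + 1]   # beta_{j+2}

rho_bar, phi_bar = alphas[0], betas[0]   # rho_bar_1 = alpha_1, phi_bar_1 = beta_1
rhos, thetas, phis = [], [0.0], []       # thetas[0] is an unused placeholder
for i in range(m):
    beta_next = T[i + 1, i]
    rho = np.hypot(rho_bar, beta_next)
    c, s = rho_bar / rho, beta_next / rho
    rhos.append(rho)
    phis.append(c * phi_bar)
    thetas.append(s * alphas[i + 1])
    rho_bar, phi_bar = c * alphas[i + 1], -s * phi_bar

# Back-substitution y = R_m^{-1} f_m (R_m upper bidiagonal: rhos on the
# diagonal, thetas on the superdiagonal).
y = np.zeros(m)
y[m - 1] = phis[m - 1] / rhos[m - 1]
for i in range(m - 2, -1, -1):
    y[i] = (phis[i] - thetas[i + 1] * y[i + 1]) / rhos[i]

b = betas[0] * np.eye(m + 1)[:, 0]       # beta_1 * e_1
y_ref = np.linalg.lstsq(T, b, rcond=None)[0]
assert np.allclose(y, y_ref)
assert np.isclose(abs(phi_bar), np.linalg.norm(b - T @ y))   # |phi_bar_{m+1}|
```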

Setting $y_m = R_m^{-1} f_m$, we have

$$X_m = \mathbb{V}_m * (R_m^{-1} f_m) \qquad (21)$$

$$= (\mathbb{V}_m * R_m^{-1}) * f_m. \qquad (22)$$

Letting

$$\mathbb{P}_m \equiv \mathbb{V}_m * R_m^{-1} \equiv [P_1, P_2, \ldots, P_m],$$

then

$$X_m = \mathbb{P}_m * f_m.$$

The matrix $P_m$, the last $n \times s$ block column of $\mathbb{P}_m$, can be computed from the previous $P_{m-1}$ and $V_m$ by the simple update

$$P_m = (V_m - \theta_m P_{m-1})/\rho_m; \qquad (23)$$

also note that

$$f_m = \begin{bmatrix} f_{m-1} \\ \phi_m \end{bmatrix}, \qquad \phi_m = c_m \bar\phi_m.$$

Thus, $X_m$ can be updated at each step, via the relation

$$X_m = X_{m-1} + \phi_m P_m.$$

The residual norm $\|R_m\|_F$ is computed directly from the quantity $\bar\phi_{m+1}$ as

$$\|R_m\|_F = |\bar\phi_{m+1}|.$$

Some of the work in (23) can be eliminated by using the matrices $W_m = \rho_m P_m$ in place of $P_m$. The main steps of the $\mathcal{A}$-GL-LSQR algorithm can be summarized as follows.

Algorithm 2: $\mathcal{A}$-GL-LSQR algorithm

1) Set $X_0 = 0$
2) $\beta_1 = \|B\|_F$, $U_1 = B/\beta_1$, $\alpha_1 = \|\mathcal{A}^T(U_1)\|_F$, $V_1 = \mathcal{A}^T(U_1)/\alpha_1$
3) Set $W_1 = V_1$, $\bar\phi_1 = \beta_1$, $\bar\rho_1 = \alpha_1$
4) For $i = 1, 2, \ldots$ until convergence, Do:
5) $\hat W_i = \mathcal{A}(V_i) - \alpha_i U_i$
6) $\beta_{i+1} = \|\hat W_i\|_F$
7) $U_{i+1} = \hat W_i/\beta_{i+1}$
8) $\hat S_i = \mathcal{A}^T(U_{i+1}) - \beta_{i+1} V_i$
9) $\alpha_{i+1} = \|\hat S_i\|_F$
10) $V_{i+1} = \hat S_i/\alpha_{i+1}$
11) $\rho_i = (\bar\rho_i^2 + \beta_{i+1}^2)^{1/2}$
12) $c_i = \bar\rho_i/\rho_i$
13) $s_i = \beta_{i+1}/\rho_i$
14) $\theta_{i+1} = s_i \alpha_{i+1}$
15) $\bar\rho_{i+1} = c_i \alpha_{i+1}$
16) $\phi_i = c_i \bar\phi_i$
17) $\bar\phi_{i+1} = -s_i \bar\phi_i$
18) $X_i = X_{i-1} + (\phi_i/\rho_i) W_i$
19) $W_{i+1} = V_{i+1} - (\theta_{i+1}/\rho_i) W_i$
20) If $|\bar\phi_{i+1}|$ is small enough then stop
21) EndDo

As we observe, the $\mathcal{A}$-GL-LSQR algorithm has certain advantages: we obtain the approximate solutions of the multiple linear systems (1) simultaneously, and the residual norm is cheaply computed at every stage of the algorithm.
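Putting the pieces together, a runnable Python/NumPy sketch of Algorithm 2 follows; it is our transcription (function name, stopping tolerance, and the test problem are ours). The only changes relative to the GL-LSQR sketch of Section 2 are that $A V_i$ and $A^T U_{i+1}$ become the column-wise operator applications $\mathcal{A}(V_i)$ and $\mathcal{A}^T(U_{i+1})$.

```python
import numpy as np

def a_gl_lsqr(As, B, tol=1e-8, maxit=2000):
    """A-GL-LSQR (Algorithm 2), our transcription: column j of the returned X
    approximates the solution of A^{(j)} x^{(j)} = b^{(j)}."""
    op  = lambda X: np.column_stack([As[j] @ X[:, j]   for j in range(len(As))])
    opT = lambda X: np.column_stack([As[j].T @ X[:, j] for j in range(len(As))])
    X = np.zeros_like(B)
    beta = np.linalg.norm(B, 'fro'); U = B / beta
    R = opT(U)
    alpha = np.linalg.norm(R, 'fro'); V = R / alpha
    W = V.copy()
    phi_bar, rho_bar = beta, alpha
    for _ in range(maxit):
        Wh = op(V) - alpha * U                          # A-Bidiag step (14)
        beta = np.linalg.norm(Wh, 'fro'); U = Wh / beta
        Sh = opT(U) - beta * V
        alpha = np.linalg.norm(Sh, 'fro'); V = Sh / alpha
        rho = np.hypot(rho_bar, beta)                   # rotation, steps 11-17
        c, s = rho_bar / rho, beta / rho
        theta, rho_bar = s * alpha, c * alpha
        phi, phi_bar = c * phi_bar, -s * phi_bar
        X = X + (phi / rho) * W                         # short-term updates
        W = V - (theta / rho) * W
        if abs(phi_bar) < tol:                          # |phi_bar| = ||B - A(X)||_F
            break
    return X

rng = np.random.default_rng(5)
n, s = 30, 4
As = [4 * np.eye(n) + 0.5 * rng.standard_normal((n, n)) for _ in range(s)]
B = rng.standard_normal((n, s))
X = a_gl_lsqr(As, B)
print(max(np.linalg.norm(As[j] @ X[:, j] - B[:, j]) for j in range(s)))   # small
```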

As an application of the new method, we apply it to solve the Sylvester equation in a special case. Consider the Sylvester equation

$$AX - XB = C,$$

where $A \in \mathbb{R}^{n\times n}$, $C \in \mathbb{R}^{n\times s}$ and $B \in \mathbb{R}^{s\times s}$ are known and $X \in \mathbb{R}^{n\times s}$ is unknown.

By using the Schur decomposition for the symmetric matrix $B$, the above Sylvester equation is turned into a set of linear systems as follows. There is a unitary matrix $Q$ such that $B = Q \Lambda Q^T$, where $\Lambda$ is a diagonal matrix whose diagonal elements are the eigenvalues of $B$. So we have

$$AXQ - XQ\Lambda = CQ = \hat C,$$

and by taking $\hat X = XQ$, the Sylvester equation is converted to the following $s$ linear systems:

$$\hat A^{(j)} \hat x^{(j)} = \hat c^{(j)}, \qquad j = 1, \ldots, s,$$

where $\hat A^{(j)} = A - \lambda_j I$, and $\hat x^{(j)}$ and $\hat c^{(j)}$ are the $j$th columns of $\hat X$ and $\hat C$, respectively.
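This reduction is easy to verify numerically. The sketch below is ours; it uses `numpy.linalg.eigh` for the spectral decomposition of the symmetric $B$ and a direct solver for the shifted systems (where the paper would use $\mathcal{A}$-GL-LSQR), then reconstructs $X$ and checks the Sylvester residual.

```python
import numpy as np

rng = np.random.default_rng(6)
n, s = 20, 4
A = 5 * np.eye(n) + 0.2 * rng.standard_normal((n, n))
M = rng.standard_normal((s, s)); B = (M + M.T) / 2      # symmetric B
C = rng.standard_normal((n, s))

lam, Q = np.linalg.eigh(B)                              # B = Q diag(lam) Q^T
C_hat = C @ Q

# Shifted systems (A - lambda_j I) xhat_j = chat_j; these are exactly the
# multiple systems (1) that A-GL-LSQR handles, solved directly here.
X_hat = np.column_stack([np.linalg.solve(A - lam[j] * np.eye(n), C_hat[:, j])
                         for j in range(s)])
X = X_hat @ Q.T                                         # undo X_hat = X Q

print(np.linalg.norm(A @ X - X @ B - C))                # ~ 0
```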

4. Numerical Experiments

In this section, all the numerical experiments were computed in double precision with some MATLAB codes. For all the examples, the initial guess $X_0$ was taken to be the zero matrix. We consider two sets of numerical experiments. The first set of numerical experiments contains matrices of the form

$$A^{(i)} = A - \lambda_i I, \qquad i = 1, 2, \ldots, s,$$

obtained from the Sylvester equation explained in the previous section, where

$$A = \mathrm{tridiag}\left(-1 + \frac{10}{n+1},\; 4,\; -1 - \frac{10}{n+1}\right)$$

and $\lambda_i$ is the $i$th eigenvalue of the matrix

$$B = \mathrm{tridiag}\left(-1 + \frac{1}{s+1},\; 2,\; -1 + \frac{1}{s+1}\right).$$

The right-hand side matrix $C$ is taken as $\mathrm{rand}(n, s)$, where the function rand creates a random $n \times s$ matrix. The matrices of the second set of experiments arise from the three-point centered discretization of the operator

$$-\frac{\mathrm{d}}{\mathrm{d}x}\left(a(x) \frac{\mathrm{d}u}{\mathrm{d}x}\right)$$

on $[0,1]$, where the function $a(x)$ is given by $a(x) = c + dx$, with $c$ and $d$ two parameters. The discretization is performed using a grid size of $h = 1/65$, yielding matrices of size 64, with the values $c_k = 0.1551 \times (0.9524)^k$ and $d_k = 7.7566 \times (0.9524)^k$, $k = 1, 2, \ldots, s$. The right-hand sides of these systems are generated randomly. All the tests were stopped as soon as the 2-norms of the residuals became less than or equal to $10^{-7}$.
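For reference, the first test set can be generated as below; this is our reading of the (partly garbled) tridiagonal definitions, so the band values should be treated as assumptions. The resulting shifted matrices $A^{(i)}$ and random right-hand sides are exactly what the $\mathcal{A}$-GL-LSQR sketch of Section 3 consumes.

```python
import numpy as np

def tridiag(n, lower, diag, upper):
    """Tridiagonal matrix with constant bands."""
    return (np.diag(np.full(n - 1, lower), -1)
            + np.diag(np.full(n, diag))
            + np.diag(np.full(n - 1, upper), 1))

n, s = 64, 4
# Assumed band values, reconstructed from the garbled formulas above.
A = tridiag(n, -1 + 10 / (n + 1), 4, -1 - 10 / (n + 1))
Bmat = tridiag(s, -1 + 1 / (s + 1), 2, -1 + 1 / (s + 1))
lam = np.linalg.eigvalsh(Bmat)              # Bmat is symmetric: real eigenvalues

As = [A - lam[i] * np.eye(n) for i in range(s)]     # A^{(i)} = A - lambda_i I
rng = np.random.default_rng(7)
C = rng.standard_normal((n, s))                     # random right-hand sides

# Residual check with a direct solve, mirroring the 1e-7 stopping criterion.
for i in range(s):
    x = np.linalg.solve(As[i], C[:, i])
    print(i, np.linalg.norm(As[i] @ x - C[:, i]) <= 1e-7)
```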

We display the convergence history in Figure 1 and Figure 2 for the systems corresponding to the first set of matrices and the second set of matrices, respectively. Figure 1 shows that the $\mathcal{A}$-GL-LSQR algorithm converges well; Figure 2, however, shows slow convergence, which can be remedied by using a reliable preconditioner. We did not deal with preconditioning techniques in this paper.

Figure 1. Convergence history of the LSQR algorithm and the new algorithm for the first set matrices with s = 4 and n.

Figure 2. Convergence history of the LSQR algorithm and the new algorithm for the second set matrices with s = 2.

5. Conclusion

We proposed a new method for solving the multiple linear systems $A^{(i)} x^{(i)} = b^{(i)}$, for $1 \le i \le s$, where the coefficient matrices $A^{(i)}$ and the right-hand sides $b^{(i)}$ are arbitrary in general. This method has certain advantages: it can be applied to arbitrary coefficient matrices $A^{(i)}$ and right-hand sides $b^{(i)}$; it is not needed to store the basis vectors; it is not needed to predetermine a subspace dimension; and the approximate solutions and residuals are cheaply computed at every stage of the algorithm simultaneously, because they are updated with short-term recurrences. Applying a reliable preconditioner to the linear systems of Equation (1) may increase the convergence rate, which has not been discussed in this paper.


6. Acknowledgements

The author would like to thank the anonymous referees for their helpful suggestions, which greatly improved the article. This work is financially supported by the research committee of Persian Gulf University.

REFERENCES

[1] C. C. Paige and M. A. Saunders, “LSQR: An Algorithm for Sparse Linear Equations and Sparse Least Squares,” ACM Transactions on Mathematical Software, Vol. 8, No. 1, 1982, pp. 43-71. doi:10.1145/355984.355989

[2] R. Plemmons, “FFT-Based RLS in Signal Processing,” Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Minneapolis, 27-30 April 1993, pp. 571-574.

[3] W. E. Boyse and A. A. Seidl, “A Block QMR Method for Computing Multiple Simultaneous Solutions to Complex Symmetric Systems,” SIAM Journal on Scientific Computing, Vol. 17, No. 1, 1996, pp. 263-274.

[4] R. Kress, “Linear Integral Equations,” Springer-Verlag, New York, 1989. doi:10.1007/978-3-642-97146-4

[5] D. O’Leary, “The Block Conjugate Gradient Algorithm and Related Methods,” Linear Algebra and Its Applications, Vol. 29, 1980, pp. 293-322. doi:10.1016/0024-3795(80)90247-5

[6] A. Jain, “Fundamentals of Digital Image Processing,” Prentice-Hall, Englewood Cliffs, 1989.

[7] S. Karimi and F. Toutounian, “The Block Least Squares Method for Solving Nonsymmetric Linear Systems with Multiple Right-Hand Sides,” Applied Mathematics and Computation, Vol. 177, No. 2, 2006, pp. 852-862. doi:10.1016/j.amc.2005.11.038

[8] G. Golub and C. Van Loan, “Matrix Computations,” 2nd Edition, Johns Hopkins University Press, Baltimore, 1989.

[9] A. El Guennouni, K. Jbilou and H. Sadok, “The Block Lanczos Method for Linear Systems with Multiple Right-Hand Sides,” Applied Numerical Mathematics, Vol. 51, No. 2-3, 2004, pp. 243-256.

[10] A. El Guennouni, K. Jbilou and H. Sadok, “A Block Bi-CGSTAB Algorithm for Multiple Linear Systems,” Electronic Transactions on Numerical Analysis, Vol. 16, 2003, pp. 129-142.

[11] S. Karimi and F. Toutounian, “On the Convergence of the BL-LSQR Algorithm for Solving Matrix Equations,” International Journal of Computer Mathematics, Vol. 88, No. 1, 2011, pp. 171-182. doi:10.1080/00207160903365883

[12] T. F. Chan and M. K. Ng, “Galerkin Projection Methods for Solving Multiple Linear Systems,” SIAM Journal on Scientific Computing, Vol. 21, No. 3, 1999, pp. 836-850. doi:10.1137/S1064827598310227

[13] F. Toutounian and S. Karimi, “Global Least Squares (GL-LSQR) Method for Solving General Linear Systems with Several Right-Hand Sides,” Applied Mathematics and Computation, Vol. 178, No. 2, 2006, pp. 452-460.
