http://scma.maragheh.ac.ir

ON MULTIPLICATIVE (STRONG) LINEAR PRESERVERS OF MAJORIZATIONS

MOHAMMAD ALI HADIAN NADOSHAN1∗ AND ALI ARMANDNEJAD2

Abstract. In this paper, we study some kinds of majorizations on M_n and their linear or strong linear preservers. Also, we find the structure of linear or strong linear preservers which are multiplicative, i.e., linear or strong linear preservers Φ with the property Φ(AB) = Φ(A)Φ(B) for every A, B ∈ M_n.

1. Introduction

An n×n nonnegative real matrix A is doubly stochastic if all of its row and column sums are one. For X, Y ∈ M_n, it is said that X is majorized by Y (written X ≺ Y) if there exists a doubly stochastic matrix D such that X = DY. If we omit the nonnegativity condition, we obtain the set of all generalized n×n doubly stochastic matrices, denoted by GD_n. By considering other kinds of stochastic matrices, such as row stochastic, upper triangular row stochastic, tridiagonal row stochastic, and even stochastic matrices, we obtain various kinds of majorizations. Let T : M_n → M_n be a linear operator. For a relation ∼ on M_n, it is said that T is a linear preserver of ∼ if T X ∼ T Y whenever X ∼ Y. Similarly, it is said that T is a strong linear preserver of ∼ if

X ∼ Y ⇔ T X ∼ T Y.

Also, it is said that T is a multiplicative linear preserver of ∼ if, in addition, T(XY) = T(X)T(Y) for all X, Y ∈ M_n. In this paper, we characterize all multiplicative linear or strong linear preservers of some kinds of majorizations. It is well known that nonzero linear and multiplicative operators on M_n are inner automorphisms. We use this fact in

2010 Mathematics Subject Classification. 15A04, 15A21.

Key words and phrases. Doubly stochastic matrix, linear preserver, multiplicative map.

Received: 24 December 2015, Accepted: 27 January 2016.

∗Corresponding author.

some proofs, but we prefer to prove the other theorems without using it. Let P_n be the set of all n×n permutation matrices, and let E_ij be the n×n matrix whose (i,j) entry is 1 and all other entries are 0.

2. Multiplicative linear preservers of majorizations

In this section, we find the multiplicative linear preservers of some kinds of majorizations. Let X, Y ∈ M_n. X is said to be multivariately majorized by Y (written X ≺_m Y) if X = DY for some n×n doubly stochastic matrix D, and directionally majorized by Y (written X ≺_d Y) if for every a ∈ R^n there exists a doubly stochastic matrix D_a such that Xa = D_a Ya. In [6] and [7], the authors found the linear preservers and strong linear preservers of multivariate and directional majorizations as follows.

In this paper, the letter J denotes the matrix with all entries equal to 1, and tr(x) denotes the sum of the coordinates of a vector x.
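As a concrete numerical illustration of the relation X = DY with a doubly stochastic D (a minimal plain-Python sketch; the matrices D and Y below are arbitrary sample data, not taken from the paper):

```python
# Illustrative check that a doubly stochastic D yields X = DY, i.e. X is majorized by Y.

def matmul(A, B):
    # Plain-Python matrix product.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def is_doubly_stochastic(D, tol=1e-12):
    # Nonnegative entries, each row sum and each column sum equal to 1.
    n = len(D)
    ok_entries = all(D[i][j] >= -tol for i in range(n) for j in range(n))
    ok_rows = all(abs(sum(D[i]) - 1) < tol for i in range(n))
    ok_cols = all(abs(sum(D[i][j] for i in range(n)) - 1) < tol for j in range(n))
    return ok_entries and ok_rows and ok_cols

D = [[0.5, 0.5], [0.5, 0.5]]          # a doubly stochastic matrix (sample choice)
Y = [[3.0, 1.0], [1.0, 5.0]]          # sample matrix
X = matmul(D, Y)                      # X = DY, hence X ≺ Y
print(is_doubly_stochastic(D), X)     # True [[2.0, 3.0], [2.0, 3.0]]
```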

Theorem 2.1 ([7], Theorem 1). Let T : M_{n,m} → M_{n,m} be a linear function. Then, T preserves multivariate majorization if and only if T preserves directional majorization if and only if one of the following holds:

(a) There exist A_1, …, A_m ∈ M_{n,m} such that

T X = ∑_{i=1}^m tr(x_i) A_i,

where x_i is the i-th column of X.

(b) There exist P ∈ P_n and R, S ∈ M_m such that T X = PXR + JXS.

Theorem 2.2 ([6], Theorem 2.4). Let T : M_{n,m} → M_{n,m} be a linear function. Then, T strongly preserves multivariate majorization if and only if T strongly preserves directional majorization if and only if there exist P ∈ P_n and R, S ∈ M_m such that T X = PXR + JXS and R(R + nS) is invertible.

Let T : M_n → M_n have the form T X = PXR + JXS for some P ∈ P_n. Then T is a multiplicative linear preserver if and only if QTQ^t is a multiplicative linear preserver for every Q ∈ P_n. So, in the proof of the following theorem, we may assume without loss of generality that P = I.

Theorem 2.3. Let T : M_n → M_n be a linear function. Then, T is a multiplicative linear preserver of multivariate or directional majorization if and only if T has one of the following forms:

(i) T X = 0 for all X ∈ M_n.

(ii) There exists a permutation P ∈ P_n such that

T X = PXP^{-1},

for all X ∈ M_n.

(iii) There exist a permutation P ∈ P_n and a scalar γ ≠ −n such that

T X = (γP + J)X(γP + J)^{-1},

for all X ∈ M_n.

Proof. It is easy to prove that each of the conditions (i), (ii), and (iii) is sufficient for T to be a multiplicative linear preserver of multivariate majorization, so we only prove the necessity of the conditions. It is clear that E_ij E_kl = 0 if and only if j ≠ k. Let T be of the form (a) in Theorem 2.1. Then

T X = ∑_{i=1}^n tr(x_i) A_i,

where A_1, …, A_n ∈ M_n. Let 1 ≤ k ≤ n, and put X = Y = E_kk. Then

A_k A_k = T(X)T(Y) = T(XY) = T(X) = A_k.

So A_k² = A_k. On the other hand, set X = E_kk and Y = E_jk (j ≠ k). Then

A_k A_k = T(X)T(Y) = T(XY) = T(0) = 0.

Therefore A_k = 0, and hence T = 0.
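The matrix-unit identity used in this proof, E_ij E_kl = δ_{jk} E_il (so the product vanishes exactly when j ≠ k), can be checked exhaustively for a small n; the following plain-Python sketch is only an illustration:

```python
# Verify E_ij E_kl = E_il when j = k and 0 when j != k (matrix units in M_n, 0-based indices).

def unit(n, i, j):
    # E_ij: 1 in position (i, j), zeros elsewhere.
    return [[1 if (r, c) == (i, j) else 0 for c in range(n)] for r in range(n)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

n = 3
for i in range(n):
    for j in range(n):
        for k in range(n):
            for l in range(n):
                prod = matmul(unit(n, i, j), unit(n, k, l))
                expected = unit(n, i, l) if j == k else [[0] * n for _ in range(n)]
                assert prod == expected
print("E_ij E_kl = delta_jk E_il verified for n = 3")
```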

Now, suppose that T has the form (b) in Theorem 2.1. Then

T(E_ij) = E_ij R + J E_ij S,

where E_ij R is the matrix whose i-th row is (r_{j1}, …, r_{jn}) and whose other rows are zero, and J E_ij S is the matrix each of whose rows is (s_{j1}, …, s_{jn}). Since k ≠ j,

0 = T(0) = T(E_ij E_kl) = T(E_ij)T(E_kl)
  = r_{jk} A_r + (∑_{m=1}^n r_{jm}) A_s + s_{jk} B_r + (∑_{m=1}^n s_{jm}) B_s,

where A_r (resp. A_s) is the matrix whose i-th row is (r_{l1}, …, r_{ln}) (resp. (s_{l1}, …, s_{ln})) and whose other rows are zero, and B_r (resp. B_s) is the matrix each of whose rows is (r_{l1}, …, r_{ln}) (resp. (s_{l1}, …, s_{ln})). So

(2.1) s_{jk} [r_{l1} … r_{ln}] + (∑_{m=1}^n s_{jm}) [s_{l1} … s_{ln}] = 0,

and

(2.2) r_{jk} [r_{l1} … r_{ln}] + (∑_{m=1}^n r_{jm}) [s_{l1} … s_{ln}] = 0.

We consider two cases.

Case 1: Let s_jk = 0 for all j, k (1 ≤ j ≠ k ≤ n). Put l = j in (2.1); then s_jj = 0, and hence S = 0. Again, put l = j in (2.2); then r_jk = 0 for all j ≠ k, and hence R = diag(r_11, …, r_nn). Since T is multiplicative, we have R = T(I) = T(I·I) = R², and hence r_ii = 0 or r_ii = 1 for each i.

If R ≠ 0 and R ≠ I, without loss of generality we may assume that

R = [0, 0, 0; 0, 1, 0; 0, 0, L],

where L is a diagonal matrix whose diagonal entries are either 0 or 1. Set

X = [1, 1, 0; 0, 1, 0; 0, 0, 0] ∈ M_n.

An easy calculation shows that T(X²) ≠ T(X)², which is a contradiction. So R = I or R = 0, and hence T = I or T = 0.

Case 2: Let s_jk ≠ 0 for some j, k (1 ≤ j ≠ k ≤ n). Now, if ∑_{m=1}^n s_{jm} = 0, then R = 0. If ∑_{m=1}^n s_{jm} ≠ 0, then R = γS and T has the form T(X) = (γI + J)XS. Since k ≠ j in (2.1) is arbitrary, we conclude that all s_jk (k ≠ j) are the same for each j, say β_j. Also, we denote s_jj by α_j.

Now, let j = k. Then E_ij E_kl = E_il and T(E_ij)T(E_kl) = T(E_il). If we write this in matrix form, we have

γ² s_jj A_s + γ(∑_{m=1}^n s_{jm}) A_s + γ s_jj B_s + (∑_{m=1}^n s_{jm}) B_s = γ A_s + B_s,

where A_s is the matrix whose i-th row is (s_{l1}, …, s_{ln}) and whose other rows are zero, and B_s is the matrix each of whose rows is (s_{l1}, …, s_{ln}). So

(γα_j + ∑_{m=1}^n s_{jm} − 1) [s_{l1} … s_{ln}] = 0,
(γβ_j + ∑_{m=1}^n s_{jm}) [s_{l1} … s_{ln}] = 0.

If S = 0, we are done. Assume S ≠ 0. Then

γα_j + α_j + (n−1)β_j − 1 = 0,
γβ_j + α_j + (n−1)β_j = 0.

If γ = −n or γ = 0, there is no solution. Otherwise, we have

α_j = (n+γ−1)/(γ(n+γ)) and β_j = −1/(γ(n+γ)),

for all j (1 ≤ j ≤ n). Now, we have

(1/(γ(n+γ))) (γI + J)((n+γ)I − J) = I,

where γI + J has diagonal entries γ+1 and off-diagonal entries 1, and (n+γ)I − J has diagonal entries n+γ−1 and off-diagonal entries −1. This shows that S^{-1} = γI + J, and the proof is complete. □
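The closing identity, equivalently (γI + J)^{-1} = ((n+γ)I − J)/(γ(n+γ)), can be sanity-checked numerically; n and γ below are arbitrary sample values with γ ∉ {0, −n}:

```python
# Check (gamma*I + J) * ((n+gamma)*I - J) / (gamma*(n+gamma)) = I  (the P = I case).

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

n, gamma = 4, 2.5                     # sample values
I = [[float(i == j) for j in range(n)] for i in range(n)]
J = [[1.0] * n for _ in range(n)]
A = [[gamma * I[i][j] + J[i][j] for j in range(n)] for i in range(n)]
B = [[((n + gamma) * I[i][j] - J[i][j]) / (gamma * (n + gamma))
      for j in range(n)] for i in range(n)]
P = matmul(A, B)                      # should be the identity matrix
assert all(abs(P[i][j] - I[i][j]) < 1e-12 for i in range(n) for j in range(n))
print("inverse identity holds")
```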

In the following section, we consider multiplicative strong linear preservers of some kinds of majorizations.

3. Multiplicative strong linear preservers of majorizations

Similar to the case of linear preservers of multivariate and directional majorizations, analogous results hold for the strong linear preservers of multivariate and directional majorizations. Now, we state a theorem that is useful in the sequel.

Theorem 3.1 ([8], Theorem 1.1). Let F be an arbitrary field and φ : M_n(F) → M_n(F) a bijective linear map satisfying

φ(AB) = φ(A)φ(B),

for all A, B ∈ M_n(F). Then, there exists an invertible matrix T ∈ M_n(F) such that

φ(A) = TAT^{-1},

for every A ∈ M_n(F).

A nonnegative matrix A ∈ M_{n,m} is row stochastic if the sum of each of its rows is 1. Denote the set of all row stochastic matrices by R_n. For A, B ∈ M_{n,m}, it is said that A is matrix majorized from the left (resp. from the right) by B if A = RB (resp. A = BR) for some row stochastic matrix R ∈ M_n (resp. R ∈ M_m), denoted by ≺_l (resp. ≺_r). In [6], the following result was obtained.

Theorem 3.2 ([6], Theorem 5.2). A linear operator T : M_n → M_n strongly preserves the matrix majorization ≺_l if and only if there exist a permutation matrix P and an invertible matrix L in M_n such that T X = PXL for all X ∈ M_n. Moreover, if T I = I, then L = P^t.

The structure of the multiplicative strong linear preservers of the matrix majorization from the left is as follows; the case of the matrix majorization from the right is similar.

Theorem 3.3. A linear operator T : M_n → M_n is a multiplicative strong linear preserver of the matrix majorization ≺_l if and only if there exists a permutation matrix P such that T X = PXP^t for all X ∈ M_n.

Proof. Since I² = I, T(I)² = T(I), and hence T(I)(T(I) − I) = 0. But T(I) = PL is invertible. So T(I) = I, and hence by Theorem 3.2, T X = PXP^t for all X ∈ M_n, which shows that T is multiplicative as well. □
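That a similarity X ↦ PXP^t by a permutation matrix is automatically multiplicative (since P^tP = I) can be verified on sample data; the permutation and the matrices below are arbitrary illustrative choices:

```python
import random

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def T(X, P):
    # T(X) = P X P^t for a permutation matrix P.
    Pt = [list(row) for row in zip(*P)]
    return matmul(matmul(P, X), Pt)

n = 3
P = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]            # a sample permutation matrix
random.seed(0)
X = [[random.randint(-5, 5) for _ in range(n)] for _ in range(n)]
Y = [[random.randint(-5, 5) for _ in range(n)] for _ in range(n)]
lhs = T(matmul(X, Y), P)                          # T(XY)
rhs = matmul(T(X, P), T(Y, P))                    # T(X)T(Y)
assert lhs == rhs
print("T(XY) = T(X)T(Y)")
```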

Now, we find the structure of multiplicative strong linear preservers of lw-column majorization. In fact, we show that the multiplicative strong linear preservers of lw-column majorization and of the matrix majorization ≺_l on M_n are the same.

For x, y ∈ M_{n,1}, it is said that x is lw-majorized by y if x = Ry for some R ∈ R_n. Let A, B ∈ M_{n,m}. It is said that A is lw-column majorized by B if every column of A is lw-majorized by the corresponding column of B.

Theorem 3.4 ([2], Theorem 3.10). Let T : M_{n,m} → M_{n,m} be a linear function. Then, T strongly preserves lw-column majorization if and only if there exist b_1, …, b_m ∈ ∪_{i=1}^m Span{e_i} and P_1, …, P_m ∈ P_n such that B := [b_1| ⋯ |b_m] is invertible and T X = [P_1Xb_1| … |P_mXb_m].

The following theorem characterizes all multiplicative strong linear preservers of lw-column majorization on M_n.

Theorem 3.5. Let T : M_n → M_n be a linear operator. Then, T is a multiplicative strong linear preserver of lw-column majorization if and only if T X = PXP^t for some permutation P ∈ P_n.

Proof. By Theorem 3.4, there exist permutation matrices P_1, …, P_n ∈ P_n and vectors b_1, …, b_n ∈ ∪_{i=1}^n span{e_i} such that B := [b_1| ⋯ |b_n] is invertible and

T X = [P_1Xb_1| ⋯ |P_nXb_n].

By Theorem 3.1, T also has the form T X = CXC^{-1}. Now, suppose

[P_1Xb_1| ⋯ |P_nXb_n] = [CXD_1| ⋯ |CXD_n],

where D_k is the k-th column of C^{-1}. So P_kXb_k = CXD_k. Since B is invertible, it may be assumed that (b_k)_{i_k} ≠ 0 for some index i_k. Putting X = E_{j i_k} in this equation gives

(b_k)_{i_k} ((P_k)_{1j}, …, (P_k)_{nj})^t = (D_k)_{i_k} (C_{1j}, …, C_{nj})^t.

Since P_k and C are invertible, (b_k)_i = 0 if and only if (D_k)_i = 0. Since j and k vary independently, we have P_k = γC and D_k = γb_k. Put P := γC. Then

T X = [P_1Xb_1| ⋯ |P_nXb_n] = PXP^t. □

4. Multiplicative linear preservers of generalized majorizations

For x, y ∈ R^n, it is said that x is gs-majorized by y (denoted y ≻_gs x) if there exists a generalized doubly stochastic matrix D such that x = Dy; see [3]. For A, B ∈ M_{n,m}, it is said that B is gd-majorized by A (written A ≻_gd B) if Ax ≻_gs Bx for all x ∈ R^m. Actually, A ≻_gd B if and only if, for every x ∈ R^m, there exists a generalized doubly stochastic matrix D_x such that Bx = D_x(Ax). The following theorem gives the possible structures of all linear functions from M_{n,m} to M_{n,k} that preserve ≻_gd.

Theorem 4.1 ([3], Theorem 1.3). Let T : M_{n,m} → M_{n,k} be a linear function that preserves ≻_gd. Then, one of the following holds:

(i) There exist A_1, …, A_m ∈ M_{n,k} such that

T(X) = ∑_{j=1}^m tr(x_j) A_j,

where x_j is the j-th column of X.

(ii) There exist R, S ∈ M_{m,k} and an invertible matrix D ∈ GD_n such that

T(X) = DXR + JXS.

(iii) There exist S ∈ M_{m,k}, a ∈ R^m, r_1, …, r_k ∈ R, and invertible matrices D_1, …, D_k ∈ GD_n such that

T(X) = [r_1D_1Xa| ⋯ |r_kD_kXa] + JXS.

Now, we characterize the multiplicative linear preservers of gd-majorization on M_n.

Lemma 4.2. Let S ∈ M_n. The linear function T : M_n → M_n defined by T X = JXS is multiplicative if and only if T = 0.

Proof. It is obvious that for all i, j, k ∈ {1, …, n}, T(E_ij) is the matrix each of whose rows is (s_{j1}, …, s_{jn}); in particular, T(E_ij) = T(E_kj). Then, for k ≠ j,

0 = T(0) = T(E_ij E_kl) = T(E_ij)T(E_kl) = T(E_ij)T(E_jl) = T(E_ij E_jl) = T(E_il).

So, T = 0. □

Theorem 4.3. Let T : M_n → M_n be a linear function. Then, T is a multiplicative linear preserver of gd-majorization if and only if T has one of the following forms:

(i) T X = 0 for all X ∈ M_n.

(ii) There exists an invertible matrix D ∈ GD_n such that

T X = DXD^{-1},

for all X ∈ M_n.

(iii) There exist an invertible matrix D ∈ GD_n and a scalar γ ≠ −n such that

T X = (γD + J)X(γD + J)^{-1},

for all X ∈ M_n.

Proof. Let T : M_n → M_n be a linear function that preserves ≻_gd. If T is of the form (i) or (ii) in Theorem 4.1, the proof is similar to that of Theorem 2.3. If T is of the form (iii), we show that T = 0.

First, suppose n = 2. If r_1 = 0, we can choose D_1 = D_2, and T is of the form (ii) in Theorem 4.1. So, without loss of generality, we can assume that

T(X) = [Xa | rDXa] + JXS.

Now, let

a = (α, β)^t, D = [d, 1−d; 1−d, d],

and, for convenience, t = r(2d−1). Suppose X = [1, 0; −1, 0] and Y = [0, 1; 0, −1]. So,

T(X) = α[1, t; −1, −t],

and

T(Y) = β[1, t; −1, −t].

Since X² = X, T(X)² = T(X), and hence α²(1−t) = α. So α = 0 or α = 1/(1−t). Note that if t = 1, then α = 0. On the other hand, since Y² = −Y, β = 0 or β = −1/(1−t), and if t = 1, then β = 0 and T = 0 by Lemma 4.2. So, assume that t ≠ 1. Since XY = Y, αβ(1−t) = β, and if α = 0, we conclude that β = 0. Similarly, since YX = −X, we conclude that if β = 0, then α = 0. So, we have α = β = 0 or α = −β = 1/(1−t).
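The computations with X and Y above reduce to the identity [1, t; −1, −t]² = (1−t)[1, t; −1, −t], which a quick numeric check confirms (t below is a sample value):

```python
# Check M^2 = (1 - t) M for M = [[1, t], [-1, -t]] (sample t).

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

t = 0.25                                # any t works; t = 1 makes M^2 = 0
M = [[1.0, t], [-1.0, -t]]
M2 = matmul(M, M)
assert all(abs(M2[i][j] - (1 - t) * M[i][j]) < 1e-12
           for i in range(2) for j in range(2))
print("M^2 = (1 - t) M")
```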

An easy calculation shows that

T(E_11) = [α+s_11, rαd+s_12; s_11, rα(1−d)+s_12],

T(E_12) = [−α+s_21, −rαd+s_22; s_21, −rα(1−d)+s_22],

T(E_22) = [s_21, −rα(1−d)+s_22; −α+s_21, −rαd+s_22].

The solution of the equations

T(E_12)T(E_11) = 0, T(E_11)T(E_22) = 0,

is

s_11 = −α/(1+t), s_12 = −αr(t + d(1−t))/(1+t),
s_21 = αt/(1+t), s_22 = αr(1 − d + dt)/(1+t).

Note that α = −s_11(1+t). Assume that t ≠ −1 (if t = −1, then α = 0 and we are done). Calculating T(E_11)T(E_21) shows that

(α + s_11)(rαd + s_12 + s_11) = 0.

Now, if α + s_11 = 0 and α ≠ 0, we have t = 0. Since D is invertible, d ≠ 1/2. So, r = 0 and we are done. If rαd + s_12 + s_11 = 0, then t = 1 or t = −1, which is a contradiction.

Now, suppose that n ≥ 3. Let

T(X) = [r_1D_1Xa| … |r_nD_nXa] + JXS,

for some S ∈ M_n, a ∈ R^n, r_1, …, r_n ∈ R, and invertible matrices D_1, …, D_n ∈ GD_n. Without loss of generality, assume that r_1, a_1 ≠ 0, where a_1 is the first coordinate of a. Set X = E_11 − E_21, the matrix with 1 in position (1,1), −1 in position (2,1), and zeros elsewhere.

Since the columns of X sum to zero, JX = 0, and the (i,j) entry of T(X) is a_1 r_j ((D_j)_{i,1} − (D_j)_{i,2}). Since X² = X, T(X)² = T(X), that is, entrywise,

a_1² r_j ∑_{m=1}^n r_m ((D_m)_{i,1} − (D_m)_{i,2}) ((D_j)_{m,1} − (D_j)_{m,2}) = a_1 r_j ((D_j)_{i,1} − (D_j)_{i,2}),

for all 1 ≤ i, j ≤ n. The first column of the matrix on the right-hand side of this equation cannot be zero, since otherwise the first and second columns of D_1 would be equal and D_1 could not be invertible. Let β = (D_1)_{i,1} − (D_1)_{i,2} for some i with (D_1)_{i,1} − (D_1)_{i,2} ≠ 0, and let α = ∑_{m=1}^n r_m ((D_m)_{i,1} − (D_m)_{i,2}) ((D_1)_{m,1} − (D_1)_{m,2}) be the corresponding sum on the left-hand side. It is clear that α ≠ 0. So, a_1²r_1α = a_1r_1β.

If we put X = E_12 − E_22, we get X² = −X and hence T(X)² = −T(X), which gives the equation a_2²r_1α = −a_2r_1β. Again, without loss of generality, we may assume that a_2 ≠ 0. Shifting the column vector (1, −1, 0, …, 0)^t through the columns in the same way, we see that a must have the form

a = (a_1, −a_1, 0, …, 0)^t.

Now, set X = E_22 − E_32. Then T(X) ≠ 0, and we can obtain the corresponding α and β. Using these α and β, and setting X = E_21 − E_31, we get X² = 0 and T(X)² = 0. This implies that a_1 = 0 or r_1 = 0, and we reach a contradiction. So, T(X) has the form T(X) = JXS, and hence by Lemma 4.2, we have T = 0. □

5. Multiplicative strong linear preservers of generalized majorizations

The definitions, theorems, and proofs for multiplicative strong linear preservers of gs- and gd-majorization are similar to those for multivariate and directional majorizations; see [3].

Now, we consider gut-majorizations.

An n×m matrix R is called g-row stochastic if the sum of the entries in each of its rows is 1. Denote the set of all n×n upper triangular g-row stochastic matrices by R_n^{gut}. By E, we mean the matrix whose entries in the last column are 1 and whose other entries are 0. For A, B ∈ M_{n,m}, it is said that A is gut-majorized by B, written A ≺_gut B, if there exists R ∈ R_n^{gut} such that A = RB. In [4], the authors found the structure of strong linear preservers of ≺_gut as follows.

Theorem 5.1 ([4], Theorem 1.3). Let T : M_{n,m} → M_{n,m} be a linear function. Then, T strongly preserves ≺_gut if and only if T X = AXR + EXS for some R, S ∈ M_m and an invertible matrix A ∈ R_n^{gut} such that R(R+S) is invertible.

Let T : M_n → M_n be a strong linear preserver of gut-majorization. It is obvious that T is multiplicative if and only if QTQ^{-1} is multiplicative for every Q ∈ R_n^{gut}.

Theorem 5.2. Let T : M_n → M_n be a multiplicative strong linear preserver of gut-majorization. Then, there exists Q ∈ R_n^{gut} such that T X = QXQ^{-1} for every X ∈ M_n.

Proof. Without loss of generality, we can suppose that T X = XR + EXS. By Theorem 3.1, there exists an invertible matrix D ∈ M_n such that

T X = XR + EXS = DXD^{-1}.

If X is an arbitrary n×n matrix whose n-th row is zero, we have XR = DXD^{-1}, that is, XRD = DX. Write, in block form,

D^{-1} = [A_1, u; v^t, a_nn], RD = [B_1, w; z^t, b_nn],

where A_1, B_1 ∈ M_{n−1}, u = (a_1n, …, a_{n−1,n})^t, v^t = (a_n1, …, a_{n,n−1}), w = (b_1n, …, b_{n−1,n})^t, and z^t = (b_n1, …, b_{n,n−1}), and take

X = [X_1, 0; 0, 0].

We have A_1X_1B_1 = X_1. Since X_1 ∈ M_{n−1} is arbitrary, A_1 = B_1 = I_{n−1}. If we take X_1 = I_{n−1}, then

[I_{n−1}, w; v^t, ∗] = [I_{n−1}, 0; 0, 0],

and hence a_n1 = ⋯ = a_{n,n−1} = b_1n = ⋯ = b_{n−1,n} = 0. So,

D^{-1} = [I_{n−1}, u; 0, a_nn], RD = [I_{n−1}, 0; z^t, b_nn],

and

D = [I_{n−1}, −(1/a_nn)u; 0, 1/a_nn],

where u = (a_1n, …, a_{n−1,n})^t and z^t = (b_n1, …, b_{n,n−1}). Putting X = E_1n in the equation XRD = DX gives b_n1 = ⋯ = b_{n,n−1} = 0 and b_nn = 1, so RD = I and hence R = D^{-1}.

Now,

XR + EXS = DXD^{-1},

and hence

XRD + EXSD = DX.

Putting X = E_n1, …, E_nn in this equation, we get

−a_1n/a_nn = ⋯ = −a_{n−1,n}/a_nn = (1 − a_nn)/a_nn,

(SD)_11 = ⋯ = (SD)_nn, and (SD)_ij = 0 for i ≠ j. So,

SD = −(a_1n/a_nn) I ⇒ S = −(a_1n/a_nn) D^{-1},

D^{-1} = [I_{n−1}, (α, …, α)^t; 0, 1+α],

and

D = [I_{n−1}, (−α/(1+α), …, −α/(1+α))^t; 0, 1/(1+α)],

where α = a_1n = ⋯ = a_{n−1,n}. Therefore,

T(X) = XD^{-1} − (α/(1+α)) EXD^{-1} = (I − (α/(1+α)) E) XD^{-1}.

Since I − (α/(1+α))E = D, we get T X = DXD^{-1}, and the proof is complete. □

Acknowledgments. The authors would like to thank the referee(s) for useful comments and remarks.

References

1. T. Ando, Majorization, doubly stochastic matrices, and comparison of eigenvalues, Linear Algebra and its Applications, 118 (1989) 163–248.

2. A. Armandnejad, F. Akbarzadeh, and Z. Mohammadi, Row and column-majorization on M_{n,m}, Linear Algebra and its Applications, 437 (2012) 1025–1032.

3. A. Armandnejad and H. Heydari, Linear preserving gd-majorization functions from M_{n,m} to M_{n,k}, Bull. Iranian Math. Soc., 37(1) (2011) 215–224.

4. A. Armandnejad and A. Ilkhanizadeh Manesh, gut-Majorization and its linear preservers, Electronic Journal of Linear Algebra, 23 (2012) 646–654.

5. A. Armandnejad, Z. Mohammadi, and F. Akbarzadeh, Linear preservers of G-row and G-column majorization on M_{n,m}, Bull. Iranian Math. Soc., 39(5) (2013) 865–880.

6. A.M. Hasani and M. Radjabalipour, The structure of linear operators strongly preserving majorizations of matrices, Electronic Journal of Linear Algebra, 15 (2006) 260–268.

7. C.K. Li and E. Poon, Linear operators preserving directional majorization, Linear Algebra and its Applications, 325 (2001) 15–21.

8. P. Šemrl, Maps on matrix spaces, Linear Algebra and its Applications, 413(2-3) (2006) 364–393.

1 Department of Mathematics, Vali-e-Asr University of Rafsanjan, Zip Code: 7718897111, Rafsanjan, Iran. E-mail address: ma.hadiann@gmail.com

2 Department of Mathematics, Vali-e-Asr University of Rafsanjan, Zip
