Quantitative Methods Applied to Accounting
(Métodos Quantitativos Aplicados à Contabilidade)
PhD in Accounting Sciences (Doutorado em Ciências Contábeis), 2019
Prof. Otávio R. Medeiros
The Basics of Matrix Algebra
Definitions
• Matrix: rectangular array of real numbers with m rows and n columns:

$$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}$$

where the $a_{ij}$ are the matrix elements.
Definitions
• Size or dimension of a matrix: number of rows x number of columns: m x n.
• A matrix with only one row is a row vector, with dimension 1 x n.
• A matrix with only one column is a column vector, with dimension m x 1.
Definitions
• Trace: sum of the elements of the leading diagonal of a square matrix.
• Diagonal matrix: square matrix with all elements off the leading diagonal equal to zero.
• Identity matrix: diagonal matrix with all elements on the leading diagonal equal to one.
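These definitions are easy to check numerically. The sketch below uses numpy (not part of the slides) with illustrative values:

```python
import numpy as np

# Illustrative 3 x 3 matrix (values are not from the slides).
A = np.array([[5.0, 1.0, 2.0],
              [0.0, 7.0, 3.0],
              [4.0, 6.0, 9.0]])

print(np.trace(A))        # 21.0: sum of the leading diagonal 5 + 7 + 9
D = np.diag(np.diag(A))   # diagonal matrix keeping only A's leading diagonal
I = np.eye(3)             # 3 x 3 identity matrix
print(np.allclose(I @ A, A))  # True: multiplying by I leaves A unchanged
```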
Definitions
• Rank of a matrix: maximum number of linearly independent rows or columns contained in the matrix, e.g.:

$$\operatorname{rank}\begin{pmatrix} 3 & 4 \\ 7 & 9 \end{pmatrix} = 2, \qquad \operatorname{rank}\begin{pmatrix} 3 & 6 \\ 2 & 4 \end{pmatrix} = 1$$
The Rank of a Matrix
• You can think of an m x n matrix as a set of m row vectors, each
having n elements; or you can think of it as a set of n column vectors, each having m elements.
• The rank of a matrix is defined as (a) the maximum number of linearly independent column vectors in the matrix or (b) the maximum number of linearly independent row vectors in the matrix. Both definitions are equivalent.
• For an m x n matrix,
– If m is less than n, then the maximum rank of the matrix is m.
– If m is greater than n, then the maximum rank of the matrix is n.
• The rank of a matrix would be zero only if the matrix had no elements.
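A minimal numpy check of the rank definition, using the two 2 x 2 matrices from the slides' rank example (numpy itself is an assumption; the slides use no software):

```python
import numpy as np

# The two matrices from the rank example in the slides.
A = np.array([[3, 4],
              [7, 9]])   # rows are linearly independent
B = np.array([[3, 6],
              [2, 4]])   # row 1 is 1.5 times row 2

print(np.linalg.matrix_rank(A))  # 2 (full rank)
print(np.linalg.matrix_rank(B))  # 1
```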
Full Rank Matrices
• When all of the vectors in a matrix are linearly
independent, the matrix is said to be full rank. Consider the matrices A and B below.
• Notice that row 2 of matrix A is a scalar multiple of row 1; that is, row 2 is equal to twice row 1.
• Therefore, rows 1 and 2 are linearly dependent. Matrix A has only one linearly independent row, so its rank is 1. Hence, matrix A is not full rank.
Full Rank Matrices
• Now, look at matrix B. All of its rows are linearly
independent, so the rank of matrix B is 3. Matrix B is full rank.
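Matrices A and B themselves are not reproduced in this extraction, but the dependence described above (row 2 equal to twice row 1) is easy to demonstrate with a hypothetical stand-in:

```python
import numpy as np

# Hypothetical stand-in for matrix A: row 2 is exactly twice row 1,
# so the rows are linearly dependent.
A = np.array([[1, 2, 3],
              [2, 4, 6]])

print(np.linalg.matrix_rank(A))  # 1: only one linearly independent row
```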
Example
• Consider the matrix X, shown below.
• What is its rank?
(A) 0   (B) 1   (C) 2   (D) 3   (E) 4
Example
• Solution
• The correct answer is (C). Since the matrix has more than zero elements, its rank must be greater than zero. And
since it has fewer rows than columns, its maximum rank is equal to the maximum number of linearly independent
rows. And because neither row is linearly dependent on the other row, the matrix has 2 linearly independent rows; so its rank is 2.
Example
• Consider the matrix Y, shown below.
• What is its rank?
(A) 0   (B) 1   (C) 2   (D) 3   (E) 4
Solution
• The correct answer is (C). Since the matrix has more than zero elements, its rank must be greater than zero. And
since it has fewer columns than rows, its maximum rank is equal to the maximum number of linearly independent
columns.
• Columns 1 and 2 are independent, because neither can be derived as a scalar multiple of the other. However, column 3 is linearly dependent on columns 1 and 2, because column 3 is equal to column 1 plus column 2. That leaves the matrix with a maximum of two linearly independent columns, namely column 1 and column 2. So the matrix rank is 2.
Definitions
• Trace of a matrix (square, n x n): $\operatorname{tr}(A) = \sum_{i=1}^{n} a_{ii}$
• $\operatorname{tr}(cA) = c\,\operatorname{tr}(A)$
• If A is m x n and B is n x m, then AB and BA are square matrices and tr(AB) = tr(BA).
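The property tr(AB) = tr(BA) holds even when AB and BA have different sizes; a quick numpy sketch with random illustrative matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 3))   # A is 2 x 3
B = rng.normal(size=(3, 2))   # B is 3 x 2

# AB is 2 x 2 and BA is 3 x 3, yet the traces agree.
print(np.isclose(np.trace(A @ B), np.trace(B @ A)))  # True
```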
Matrix Operations
• Equality between matrices: A = B if and only if A and B have the same size and aij = bij for all i, j.
Matrix Operations
• Addition of matrices: A+B= C iff A and B have the same size and aij + bij = cij i, j.
$$\begin{pmatrix} 2 & 4 \\ 3 & 5 \end{pmatrix} + \begin{pmatrix} -1 & 2 \\ 4 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 6 \\ 7 & 6 \end{pmatrix}$$
Matrix Operations
• Multiplication of a scalar k by a matrix A:
k.A = k.[aij], i.e. every element of the matrix is multiplied by the scalar.
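Both operations can be verified with numpy, using the matrices that appear in the slides' worked examples (the scalar 3 is an illustrative choice):

```python
import numpy as np

A = np.array([[2, 4],
              [3, 5]])
B = np.array([[-1, 2],
              [ 4, 1]])

print(A + B)   # [[1 6]
               #  [7 6]]  -- element-by-element sum
print(3 * A)   # every element multiplied by the scalar 3
```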
Laws of matrix addition and scalar
multiplication
Let A, B, C be matrices of the same size m x n , 0 the
zero matrix, and c and d scalars. Then:
1. Closure law: A+B is an m x n matrix
2. Associative law: (A+B)+C = A+(B+C)
3. Commutative law: A+B = B+A
4. Identity law: A+0 = A
5. Inverse law: A+(−A) = 0
Laws of matrix addition and scalar
multiplication (cont.)
6. Closure law: cA is an m x n matrix
7. Associative law: c(dA) = (cd)A
8. Distributive law: (c+d)A = cA+dA
9. Distributive law: c(A+B) = cA+cB
10. Monoidal law: 1A = A
11. Identity law: AI = A and IB = B
Matrix Multiplication
DEFINITION. Let A = [aij] be an m x p matrix and B = [bij] be a p x n matrix. Then the product of the matrices A and B, denoted by A.B (or simply AB), is the m x n matrix whose (i, j)th entry, for 1 ≤ i ≤ m and 1 ≤ j ≤ n, is the product of the ith row of A and the jth column of B; more specifically, the (i, j)th entry of AB is

$$a_{i1}b_{1j} + a_{i2}b_{2j} + \cdots + a_{ip}b_{pj}$$
Matrix Multiplication
• Example:

$$\begin{pmatrix} 2 & 4 \\ 3 & 5 \end{pmatrix}\begin{pmatrix} -1 & 2 \\ 4 & 1 \end{pmatrix} = \begin{pmatrix} 2(-1)+4\cdot 4 & 2\cdot 2+4\cdot 1 \\ 3(-1)+5\cdot 4 & 3\cdot 2+5\cdot 1 \end{pmatrix} = \begin{pmatrix} 14 & 8 \\ 17 & 11 \end{pmatrix}$$

• In general, A.B ≠ B.A
Matrix Multiplication
• Multiplication of matrices is only possible if they are conformable, i.e.
A (m x n) x B (n x p) = C (m x p)
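A numpy check of the worked product above, and of the warning that matrix multiplication is not commutative (numpy is an assumption of this sketch):

```python
import numpy as np

A = np.array([[2, 4],
              [3, 5]])   # 2 x 2
B = np.array([[-1, 2],
              [ 4, 1]])  # 2 x 2, so A and B are conformable

print(A @ B)   # [[14  8]
               #  [17 11]]
print(np.array_equal(A @ B, B @ A))  # False: multiplication is not commutative
```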
Transpose of a matrix
• DEFINITION: Let A = [aij] be an m x n matrix. Then the transpose of A is the n x m matrix A′ (or A^T) obtained by interchanging the rows and columns of A, so that the (i, j)th entry of A′ is aji:

$$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}; \qquad A' = \begin{pmatrix} a_{11} & a_{21} & \cdots & a_{m1} \\ a_{12} & a_{22} & \cdots & a_{m2} \\ \vdots & \vdots & & \vdots \\ a_{1n} & a_{2n} & \cdots & a_{mn} \end{pmatrix}$$
Laws of Matrix Transpose
• Let A and B be matrices of the appropriate sizes so that the following operations make sense, and c a scalar. Then
1. (A+B)′ = A′+B′
2. (AB)′ = B′A′
3. (cA)′ = cA′
4. (A′)′ = A
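Law 2 is the least obvious of the four; a numpy sketch with random illustrative matrices confirms the reversed order of the factors:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(2, 3))
B = rng.normal(size=(3, 4))

# Law 2: transposing a product reverses the order of the factors.
print(np.allclose((A @ B).T, B.T @ A.T))  # True
```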
Some Matrices with Simple Structure
• DEFINITION: Let A = [aij ] be a square n x n matrix. Then A is:
– Scalar if aij = 0 for all i ≠ j and aii = ajj for all i, j. (Equivalently: A = cIn for some scalar c, which explains the term “scalar.”)
– Diagonal if aij = 0 for all i ≠ j (equivalently: the off-diagonal entries of A are 0).
Square matrices
• Diagonal matrix:

$$\begin{pmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{pmatrix}$$
Square matrices
• Scalar matrix: a diagonal matrix with λ1 = λ2 = ... = λn.
Square matrices
• Identity matrix I:

$$I = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$

• Note: A.I = I.A = A, where A has the same size as I.
Matrix Inverses
• DEFINITION 2.5.1. Let A be a square matrix. Then a
(two-sided) inverse for A is a square matrix B of the same size as A such that AB = I = BA. If such a B exists, then the matrix A is said to be invertible.
Determinants
• DEFINITION: The determinant of a square n x n matrix A = [aij] is the scalar quantity det A (or |A|), defined recursively as follows: if n = 1, then det A = a11; otherwise, we suppose that determinants are defined for all square matrices of size less than n, and set

$$\det A = \sum_{k=1}^{n} (-1)^{k+1} a_{k1} M_{k1}(A) = a_{11}M_{11}(A) - a_{21}M_{21}(A) + \cdots + (-1)^{n+1} a_{n1}M_{n1}(A)$$

where Mij(A) is the determinant of the (n−1) x (n−1) matrix obtained from A by deleting the ith row and jth column of A.
• This method is known as the Laplace expansion (after Pierre-Simon Laplace*) or the cofactor expansion. It can be used recursively for matrices of any size. * https://en.wikipedia.org/wiki/Pierre-Simon_Laplace
Determinant of 2nd order: the Laplace expansion method
• Example: 2 x 2 matrix:

$$\begin{vmatrix} 3 & 1 \\ 2 & 2 \end{vmatrix} = 3\cdot 2 - 2\cdot 1 = 6 - 2 = 4$$
Determinant of 3rd order: the Laplace expansion method
• Example: 3 x 3 matrix. Consider the matrix

$$A = \begin{pmatrix} 2 & 3 & 2 \\ 1 & 1 & 2 \\ 3 & 2 & 2 \end{pmatrix}$$

• For the Laplace expansion, we can start by choosing any row or column. Assume we chose the 1st row:

$$\det A = 2\begin{vmatrix} 1 & 2 \\ 2 & 2 \end{vmatrix} - 3\begin{vmatrix} 1 & 2 \\ 3 & 2 \end{vmatrix} + 2\begin{vmatrix} 1 & 1 \\ 3 & 2 \end{vmatrix} = 2(-2) - 3(-4) + 2(-1) = 6$$
Determinant of 3rd order: Sarrus' rule*
• Sarrus' rule can be used for 3 x 3 matrices only.
• See Hajrizaj (2009).
* After Pierre Frédéric Sarrus.
Determinant of 3rd order: Sarrus' rule
• Example: 3 x 3 matrix. Copy the first two columns to the right of the matrix and sum the products along the diagonals:

$$\begin{vmatrix} 2 & 3 & 2 \\ 1 & 1 & 2 \\ 3 & 2 & 2 \end{vmatrix} = 2\cdot 1\cdot 2 + 3\cdot 2\cdot 3 + 2\cdot 1\cdot 2 - 2\cdot 1\cdot 3 - 2\cdot 2\cdot 2 - 3\cdot 1\cdot 2 = 4 + 18 + 4 - 6 - 8 - 6 = 6$$
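The Sarrus result can be confirmed with numpy (which computes determinants by LU factorization rather than Sarrus' rule, hence the rounding):

```python
import numpy as np

# The 3 x 3 matrix from the Sarrus example.
A = np.array([[2, 3, 2],
              [1, 1, 2],
              [3, 2, 2]])

print(round(np.linalg.det(A)))  # 6, matching 4 + 18 + 4 - 6 - 8 - 6
```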
Inverse matrix
• The inverse of a square matrix A, denoted A⁻¹, is the matrix which, pre- or post-multiplied by A, gives the identity matrix.
• B = A⁻¹ if and only if BA = AB = I.
• Matrix A has an inverse if and only if det A ≠ 0 (i.e. A is non-singular).
• (A.B)⁻¹ = B⁻¹.A⁻¹
• (A⁻¹)′ = (A′)⁻¹; if A is symmetric and non-singular, then A⁻¹ is symmetric.
• If det A ≠ 0 and A is a square matrix of size n, then A has rank n, i.e. A is a full rank square matrix.
• If a square matrix of size n has rank < n, then its determinant is zero and so the matrix is not invertible, i.e. the matrix is singular.
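The reversed-order property (AB)⁻¹ = B⁻¹A⁻¹ can be checked numerically; the matrix B below is an illustrative choice, not from the slides:

```python
import numpy as np

A = np.array([[2.0, 3.0, 2.0],
              [1.0, 1.0, 2.0],
              [3.0, 2.0, 2.0]])
B = np.diag([2.0, 3.0, 4.0])   # another non-singular matrix (illustrative)

# (AB)^{-1} = B^{-1} A^{-1}: note the reversed order of the factors.
lhs = np.linalg.inv(A @ B)
rhs = np.linalg.inv(B) @ np.linalg.inv(A)
print(np.allclose(lhs, rhs))  # True
```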
Steps for finding an inverse matrix
1) Compute the determinant: (Laplace expansion) take any row or column and obtain the determinant by summing the products of each element of that row or column by its respective cofactor. The matrix is invertible only if the determinant is non-zero.
2) Find the minors of the elements aij, which are the determinants of the submatrices obtained after exclusion of the i-th row and j-th column.
3) Form the cofactor matrix: the matrix where each element is substituted by its cofactor, i.e. its minor multiplied by (−1)^(i+j).
4) Form the adjugate (adjoint) matrix: the transpose of the cofactor matrix.
5) Divide the adjugate matrix by the determinant: A⁻¹ = Adj(A)/det A.
Example
• 3 x 3 matrix:

$$A = \begin{pmatrix} 2 & 3 & 2 \\ 1 & 1 & 2 \\ 3 & 2 & 2 \end{pmatrix}, \qquad \det A = 6$$

$$\text{cofactor matrix} = \begin{pmatrix} -2 & 4 & -1 \\ -2 & -2 & 5 \\ 4 & -2 & -1 \end{pmatrix}, \qquad \operatorname{Adj} A = \begin{pmatrix} -2 & -2 & 4 \\ 4 & -2 & -2 \\ -1 & 5 & -1 \end{pmatrix}$$

$$A^{-1} = \frac{\operatorname{Adj} A}{\det A} = \begin{pmatrix} -1/3 & -1/3 & 2/3 \\ 2/3 & -1/3 & -1/3 \\ -1/6 & 5/6 & -1/6 \end{pmatrix}$$
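A numpy check of the worked example: the computed inverse satisfies AA⁻¹ = I, and scaling it by det A = 6 recovers the adjugate matrix.

```python
import numpy as np

A = np.array([[2.0, 3.0, 2.0],
              [1.0, 1.0, 2.0],
              [3.0, 2.0, 2.0]])

A_inv = np.linalg.inv(A)
print(np.allclose(A @ A_inv, np.eye(3)))  # True: A A^{-1} = I

# Multiplying A^{-1} by det A = 6 recovers the adjugate matrix.
print(np.allclose(6 * A_inv, [[-2, -2, 4],
                              [4, -2, -2],
                              [-1, 5, -1]]))  # True
```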
Properties of Inverse Matrices

$$AA^{-1} = A^{-1}A = I$$
$$(A^{-1})^{-1} = A$$
$$(AB)^{-1} = B^{-1}A^{-1}$$
$$(A')^{-1} = (A^{-1})'$$
Matrix differentiation (1)
Example (1)
Matrix differentiation (2)
Let a be a column vector

$$a = \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{pmatrix}$$

and x a column vector

$$x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}.$$

Then we can write:

$$a'x = a_1 x_1 + a_2 x_2 + \cdots + a_n x_n$$
Example (2)
Let a be the column vector

$$a = \begin{pmatrix} 2 \\ 3 \\ 4 \\ -1 \end{pmatrix}$$

and x a column vector

$$x = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix}.$$

Then we can write:

$$a'x = 2x_1 + 3x_2 + 4x_3 - x_4$$
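Example (2) can be checked numerically. The x values below are illustrative (not from the slides); the finite-difference loop previews the standard rule that each partial derivative of the linear form a′x equals the coefficient aᵢ:

```python
import numpy as np

a = np.array([2.0, 3.0, 4.0, -1.0])
x = np.array([0.5, -1.0, 2.0, 3.0])   # illustrative values for x

f = a @ x          # a'x = 2*x1 + 3*x2 + 4*x3 - x4
print(f)           # 3.0

# Each partial derivative of the linear form a'x equals the coefficient a_i.
h = 1e-6
grad = np.array([(a @ (x + h * np.eye(4)[i]) - f) / h for i in range(4)])
print(np.allclose(grad, a, atol=1e-4))  # True
```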
Matrix differentiation (3)
Let A be a symmetric n x n matrix and x a column vector. Hence:

$$x'Ax = \sum_{i=1}^{n}\sum_{j=1}^{n} a_{ij} x_i x_j = a_{11}x_1^2 + 2a_{12}x_1x_2 + 2a_{13}x_1x_3 + \cdots + a_{22}x_2^2 + 2a_{23}x_2x_3 + \cdots + a_{nn}x_n^2$$

The derivative of x'Ax with respect to the vector x will be:

$$\frac{\partial (x'Ax)}{\partial x} = 2Ax = 2\begin{pmatrix} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n \\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n \\ \vdots \\ a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n \end{pmatrix}, \qquad \frac{\partial (x'Ax)}{\partial x'} = 2x'A$$
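The rule ∂(x′Ax)/∂x = 2Ax for symmetric A can be verified with a finite-difference check; the matrix and vector below are illustrative values, not from the slides:

```python
import numpy as np

# Symmetric matrix A and vector x with illustrative values (not from the slides).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
x = np.array([1.0, -2.0, 0.5])

f = x @ A @ x          # the quadratic form x'Ax (a scalar)
analytic = 2 * A @ x   # the rule: d(x'Ax)/dx = 2Ax for symmetric A

# Forward-difference approximation of each partial derivative.
h = 1e-6
numeric = np.array([
    ((x + h * np.eye(3)[i]) @ A @ (x + h * np.eye(3)[i]) - f) / h
    for i in range(3)
])
print(np.allclose(numeric, analytic, atol=1e-4))  # True
```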
Example (3)

REFERENCES
• Hajrizaj, Dardan (2009). New Method to Compute the Determinant of a 3x3 Matrix. International Journal of Algebra, Vol. 3, no. 5, 211–219. Available at:
http://m-hikari.com/ija/ija-password-2009/ija-password5-8-2009/hajrizajIJA5-8-2009.pdf
• Theory and exercises about matrices can be found in:
– Shores, T.S. (2000). Applied Linear Algebra and Matrix Analysis. McGraw Hill College. With exercises!
– Brooks, C. (2014). Introductory Econometrics for Finance, 3rd ed. Cambridge University Press. Exercises in chapter 2!
– Heij et al. (2004). Econometric Methods with Applications in Business and Economics. Oxford UP. Appendix A: Matrix Methods (with exercises!).
– Kutner et al. Applied Linear Statistical Models, 5th ed. Chapter 5: Matrix Approach to Simple Linear Regression Analysis (with exercises!).