As stated in Appendix A, an advantage of matrix algebra is its adaptability to computer use. Using matrix methods, large systems of simultaneous linear equations can be programmed for general computer solution in only a few systematic steps. For example, the simplicity of programming matrix addition and multiplication was demonstrated in Section A.9. To solve a system of equations using matrix methods, it is first necessary to define and compute the inverse matrix.
If a square matrix is nonsingular (its determinant is not zero), it possesses an inverse matrix. When a system of simultaneous linear equations consisting of n equations in n unknowns is expressed as AX = B, the coefficient matrix A is a square matrix of dimensions n × n. Consider this system of linear equations:

AX = B     (B.1)
The inverse of matrix A, symbolized as A−1, is defined by the relationship

A−1A = AA−1 = I     (B.2)

where I is the identity matrix. Premultiplying both sides of matrix Equation (B.1) by A−1 gives

A−1AX = A−1B

Since A−1A = I and IX = X, reducing yields

X = A−1B
Thus, the inverse is used to find the matrix of unknowns, X. The following points should be emphasized regarding matrix inversions:

1. Only square matrices possess inverses.
2. A square matrix possesses an inverse only if it is nonsingular, that is, only if its determinant is nonzero.
3. The inverse of a nonsingular matrix is unique.
Several general methods are available for finding a matrix inverse; two are considered here. Before proceeding to the general cases, however, consider the specific case of finding the inverse of a 2 × 2 matrix using simple elementary matrix operations. Let any 2 × 2 matrix be symbolized as A, and let

A = | a  b |        A−1 = | w  x |
    | c  d |              | y  z |
By applying Equation (B.2) and recalling the definition of an identity matrix I as given in Section A.4, it is possible to calculate w, x, y, and z of A−1 in terms of a, b, c, and d. Substituting the appropriate values into AA−1 = I gives

| a  b | | w  x |   | 1  0 |
| c  d | | y  z | = | 0  1 |

By matrix multiplication

aw + by = 1        ax + bz = 0
cw + dy = 0        cx + dz = 1

Solving these four equations simultaneously yields

w = d/(ad − bc)    x = −b/(ad − bc)
y = −c/(ad − bc)   z = a/(ad − bc)

The determinant of A, symbolized as |A|, equals ad − bc. Thus, for any 2 × 2 matrix composed of the elements a, b, c, and d, its inverse is simply

A−1 = (1/|A|) |  d  −b |
              | −c   a |
The inverse of A can be found using the method of adjoints with the following equation:

A−1 = (1/|A|) adj A
The adjoint of A is obtained by first replacing each element of the matrix by its signed minor, or cofactor, and then transposing the resulting matrix. The cofactor of element aij equals (−1)i+j times the determinant of the elements remaining after row i and column j are removed from the matrix. This procedure is illustrated in Figure B.1, which shows the computation of the cofactor of a12.
Using this procedure, the inverse of the following A matrix is found:
For this A matrix, the cofactors are calculated as follows:
Following the procedure above, the matrix of cofactors is
Transposing this cofactor matrix produces the following adjoint of A:
The determinant of A is the sum of the products of the elements in the first row of the original matrix times their respective cofactors. Since the cofactors were already obtained in the previous step, this simplifies to
The inverse of A is now calculated as
Again, a check on the arithmetical work is obtained by using the definition of an inverse, that is, by verifying that AA−1 = I.
A system of equations can be modified using the following three steps without changing its solution:

1. Any two rows may be interchanged.
2. Any row may be multiplied by a nonzero constant.
3. Any row may be replaced by the sum of itself and a multiple of any other row.
If elementary row transformations are successively performed on A such that A is transformed into I, and if throughout the procedure the same row transformations are also done to the same rows of the identity matrix I, the I matrix will be transformed into A−1. This procedure is illustrated using the same matrix used to demonstrate the method of adjoints.
Initially, the original matrix and the identity matrix are listed side by side:
When the following three row transformations are performed on A and I, they are transformed into matrices A1 and I1, respectively:
After doing these operations, the transformed matrices A1 and I1 are
Notice that as a result of these three row transformations, the first column of A1 is equivalent to the first column of a 3 × 3 identity matrix. For matrices having more than three rows, this same general procedure would be followed to convert the first element of every row except the first to zero.
Next, the following three elementary row transformations are done on matrices A1 and I1 to transform them into matrices A2 and I2:
After doing these operations, the transformed matrices A2 and I2 are:
Notice that after this second series of steps is completed, the second column of A2 conforms to column two of a 3 × 3 identity matrix. Again, for matrices having more than three rows, this same general procedure would be followed to convert the second element of every row except the second to zero.
Finally, the following three row transformations are applied to matrices A2 and I2 to transform them into matrices A3 and I3:
Following these operations, the transformed matrices A3 and I3 are:
Notice that through these nine elementary row transformations, the original A matrix is transformed into the identity matrix and the original identity matrix is transformed into A−1, which can be verified by multiplying it by the A matrix. Also note that A−1 obtained by this method agrees exactly with the inverse obtained by the method of adjoints. This is because any nonsingular matrix has a unique inverse.
The amount of work involved in inverting a matrix increases greatly with its size, since the number of necessary row transformations equals the square of the number of rows or columns (nine for the 3 × 3 example above). For this reason, it is impractical to invert large matrices by hand; the work is more conveniently done with a computer. Since the procedure of elementary row transformations is systematic, it is easily programmed.
TABLE B.1 Inverse Algorithm in BASIC, C, FORTRAN, and PASCAL

[Table B.1 contains parallel code listings in the BASIC, FORTRAN, C, and Pascal languages.]
Table B.1 shows algorithms, written in BASIC, C, FORTRAN, and Pascal programming languages, for calculating the inverse of any n × n nonsingular matrix A. Students should review the code in their preferred language to gain familiarity with the computer procedures.
Use the MATRIX software to do each problem.