Applications Of Matrices To Linear Equations


Consider the system of equations

\[\begin{array}{l}3x + y + 2z = 3\\2x - 3y - z =  - 3\\\;\;x + 2y + z = 4\end{array}\]

We can write this system in matrix form as

\[\left[ \ \begin{matrix}   3 & \ \ 1 & \ \ 2  \\   2 & -3 & -1  \\   1 & \ 2 & \ \ 1  \\\end{matrix}\  \right]\ \left[ \ \begin{matrix}   x  \\   y  \\   z  \\\end{matrix}\  \right]=\left[ \begin{matrix}   \ \ 3  \\   -3  \\   \ \ 4  \\\end{matrix}\  \right]\]
While studying determinants, we’d learnt how to solve linear systems using Cramer’s rule. We’ll now learn how to use matrices to do the same.

Note that the matrix equation can be written as

\[A\ X=B\]

where

\[A=\left[ \ \begin{matrix}   3 & \ \ 1 & \ \ 2  \\   2 & -3 & -1  \\   1 & \ 2 & \ \ 1  \\\end{matrix}\  \right],\ \ X=\ \left[ \ \begin{matrix}   x  \\   y  \\   z  \\\end{matrix}\  \right],\ \ B=\left[ \begin{matrix}   \ \ 3  \\   -3  \\   \ \ 4  \\\end{matrix}\  \right]\]

A simple linear equation \(ax=b,\) where \(a,x,b\) are reals and \(a\ne 0,\) has the solution

\[x=\frac{b}{a}={{a}^{-1}}b\]

Can we do something similar for matrices? Can we define a matrix inverse so that

\[X={{A}^{-1}}B\]

In the case of square matrices, it turns out we can! The inverse of a square matrix \(A\) should be another square matrix \({{A}^{-1}}\) of the same order such that

\[\boxed{A\,{{A}^{-1}}={{A}^{-1}}\,A=I}\]
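For concreteness, for a general \(2\times 2\) matrix this inverse is given by the standard formula

\[A=\left[ \begin{matrix}   a & b  \\   c & d  \\\end{matrix} \right]\ \Rightarrow \ {{A}^{-1}}=\frac{1}{ad-bc}\left[ \begin{matrix}   \ \ d & -b  \\   -c & \ \ a  \\\end{matrix} \right]\qquad \left( ad-bc\ne 0 \right)\]

and a direct multiplication confirms that \(A{{A}^{-1}}={{A}^{-1}}A=I.\)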

Let us understand how to arrive at \({{A}^{-1}}\) using the example of a 3 × 3 matrix:

\[A=\left[ \ \begin{matrix}   {{a}_{1}} & {{b}_{1}} & {{c}_{1}}  \\   {{a}_{2}} & {{b}_{2}} & {{c}_{2}}  \\   {{a}_{3}} & {{b}_{3}} & {{c}_{3}}  \\\end{matrix}\  \right]\]

Note that the determinant of \(A\), denoted as \(\det A\) or \(\left| A \right|\), by expansion along \({{R}_{1}}\) is

\[\left| A \right|={{a}_{1}}\left( \left| \begin{matrix}   {{b}_{2}} & {{c}_{2}}  \\   {{b}_{3}} & {{c}_{3}}  \\\end{matrix} \right| \right)+{{b}_{1}}\left( -\left| \begin{matrix}   {{a}_{2}} & {{c}_{2}}  \\   {{a}_{3}} & {{c}_{3}}  \\\end{matrix} \right| \right)+{{c}_{1}}\left( \left| \begin{matrix}   {{a}_{2}} & {{b}_{2}}  \\   {{a}_{3}} & {{b}_{3}}  \\\end{matrix} \right| \right)\]

This can be written as a row-column product:

\[\left| A \right|=\left[ {{a}_{1}}\ \ \ {{b}_{1}}\ \ \ {{c}_{1}} \right]\ \left[ \begin{matrix}   \ \ \ \left| \begin{matrix}   {{b}_{2}} & {{c}_{2}}  \\   {{b}_{3}} & {{c}_{3}}  \\\end{matrix} \right|  \\   -\left| \begin{matrix}   {{a}_{2}} & {{c}_{2}}  \\   {{a}_{3}} & {{c}_{3}}  \\\end{matrix} \right|  \\   \ \ \ \left| \begin{matrix}   {{a}_{2}} & {{b}_{2}}  \\   {{a}_{3}} & {{b}_{3}}  \\\end{matrix} \right|  \\\end{matrix} \right]\]

The entries of this column are precisely the co-factors of the elements of \({{R}_{1}}\). We had also seen that the sum of the products of the elements of any row (or column) with the co-factors of a different row (or column) is zero.

For example,

\[\left[ {{a}_{2}}\ \ \ {{b}_{2}}\ \ \ {{c}_{2}} \right]\ \left[ \begin{matrix}   \ \ \ \left| \begin{matrix}   {{b}_{2}} & {{c}_{2}}  \\   {{b}_{3}} & {{c}_{3}}  \\\end{matrix} \right|  \\   -\left| \begin{matrix}   {{a}_{2}} & {{c}_{2}}  \\   {{a}_{3}} & {{c}_{3}}  \\\end{matrix} \right|  \\   \ \ \ \left| \begin{matrix}   {{a}_{2}} & {{b}_{2}}  \\   {{a}_{3}} & {{b}_{3}}  \\\end{matrix} \right|  \\\end{matrix} \right]=0\]

whereas

\[\left[ {{a}_{2}}\ \ \ {{b}_{2}}\ \ \ {{c}_{2}} \right]\ \left[ \begin{matrix}   -\left| \begin{matrix}   {{b}_{1}} & {{c}_{1}}  \\   {{b}_{3}} & {{c}_{3}}  \\\end{matrix} \right|  \\   +\left| \begin{matrix}   {{a}_{1}} & {{c}_{1}}  \\   {{a}_{3}} & {{c}_{3}}  \\\end{matrix} \right|  \\   -\left| \begin{matrix}   {{a}_{1}} & {{b}_{1}}  \\   {{a}_{3}} & {{b}_{3}}  \\\end{matrix} \right|  \\\end{matrix} \right]=\left| A \right|\ \text{again}\qquad \left\{ \begin{gathered}  \text{This corresponds to} \\  \text{expansion along }{{R}_{\text{2}}} \\ \end{gathered} \right\}\]
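As a quick numerical check, consider the coefficient matrix of our original system: the co-factors of its first row are \(-1,-3,7\) (these are computed later in this section), and multiplying them with the elements of the second row gives

\[\left( 2 \right)\left( -1 \right)+\left( -3 \right)\left( -3 \right)+\left( -1 \right)\left( 7 \right)=-2+9-7=0\]

as claimed.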

This suggests an interesting idea. For each element \({{a}_{ij}}\) of \(A\), denote its co-factor by \({{C}_{ij}}.\) Consider the matrix composed of the co-factors, but with the co-factors of rows arranged as columns.

For example, consider the matrix

\[A=\left[ \ \begin{matrix}   {{a}_{1}} & {{a}_{2}} & {{a}_{3}}  \\   {{a}_{4}} & {{a}_{5}} & {{a}_{6}}  \\   {{a}_{7}} & {{a}_{8}} & {{a}_{9}}  \\\end{matrix}\  \right]\]

with corresponding co-factors \({{C}_{1}},{{C}_{2}},\ldots ,{{C}_{9}}.\)

Now, consider the matrix

\[\widetilde{A}=\left[ \ \begin{matrix}   {{C}_{1}} & {{C}_{4}} & {{C}_{7}}  \\   {{C}_{2}} & {{C}_{5}} & {{C}_{8}}  \\   {{C}_{3}} & {{C}_{6}} & {{C}_{9}}  \\\end{matrix}\  \right]\]

Note that

\[A\widetilde{A}=\left[ \ \begin{matrix}   \left| A \right| & 0 & 0  \\   0 & \left| A \right| & 0  \\   0 & 0 & \left| A \right|  \\\end{matrix}\  \right]\]
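To see why, note that the \(\left( i,j \right)\) entry of \(A\widetilde{A}\) is the product of the \(i\)-th row of \(A\) with the \(j\)-th column of \(\widetilde{A}\), i.e., with the co-factors of the \(j\)-th row of \(A\):

\[{{\left( A\widetilde{A} \right)}_{ij}}=\sum\limits_{k}{{{a}_{ik}}{{C}_{jk}}}=\left\{ \begin{matrix}   \left| A \right| & \text{if}\ i=j  \\   0 & \text{if}\ i\ne j  \\\end{matrix} \right.\]

by the two expansion properties we just recalled.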

It is important that you are absolutely sure of this step. The matrix \(\widetilde{A}\) is called the adjoint of A and is sometimes denoted as adj(A). Note that \(\widetilde{A}A\) is the same as \(A\widetilde{A}\). Thus,

\[A\widetilde{A}=\widetilde{A}A=\left| A \right|\ \left[ \ \begin{matrix}   1 & 0 & 0  \\   0 & 1 & 0  \\   0 & 0 & 1  \\\end{matrix}\  \right]=\left| A \right|I\]

\[\boxed{\begin{align}  & \Rightarrow A\left( \frac{\widetilde{A}}{\left| A \right|} \right)=\left( \frac{\widetilde{A}}{\left| A \right|} \right)A=I \\  & \ \Rightarrow A{{A}^{-1}}={{A}^{-1}}A=I\ \ \ \text{where}\ {{A}^{-1}}=\frac{\widetilde{A}}{\left| A \right|}\  \\ \end{align}}\]

We have succeeded in evaluating the inverse of a matrix (note that this construction requires \(\left| A \right|\ne 0\)). Let us apply this to our original problem.

\[A=\left[ \ \begin{matrix}   3 & \ \ 1 & \ \ 2  \\   2 & -3 & -1  \\   1 & \ \ 2 & \ \ 1  \\\end{matrix}\  \right],\qquad \left| A \right|=8\]
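The value \(\left| A \right|=8\) follows from expansion along \({{R}_{1}}\):

\[\left| A \right|=3\left( -3+2 \right)-1\left( 2+1 \right)+2\left( 4+3 \right)=-3-3+14=8\]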

Carefully observe each of the elements of \({{A}^{-1}}:\)

\[{{A}^{-1}}=\frac{\widetilde{A}}{\left| A \right|}=\frac{1}{8}\left[ \begin{matrix}   -1 & 3 & 5  \\   -3 & 1 & 7  \\   7 & -5 & -11  \\\end{matrix} \right]=\left[ \begin{matrix}   -1/8 & 3/8 & 5/8  \\   -3/8 & 1/8 & 7/8  \\   7/8 & -5/8 & -11/8  \\\end{matrix} \right]\]
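For instance, the \(\left( 1,1 \right)\) entry of \(\widetilde{A}\) is the co-factor of the element \(3\) in \(A\):

\[{{C}_{11}}=\left| \begin{matrix}   -3 & -1  \\   \ \ 2 & \ \ 1  \\\end{matrix} \right|=-3+2=-1\]

The remaining entries are obtained similarly, with the co-factors of each row placed along the corresponding column.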

Since we had \(AX=B,\)

\[ \begin{align}& \qquad \;{{A}^{-1}}\left( AX \right)={{A}^{-1}}B\\\Rightarrow  \qquad &X={{A}^{-1}}B=\left[ \begin{matrix}   -1/8 & 3/8 & 5/8  \\   -3/8 & 1/8 & 7/8  \\   7/8 & -5/8 & -11/8  \\\end{matrix} \right]\ \ \left[ \begin{matrix}   \ \ 3  \\   -3  \\   \ \ 4  \\\end{matrix}\  \right]\ \ =\ \ \left[ \begin{matrix}   \ \ 1  \\   \ \ 2  \\   -1  \\\end{matrix}\  \right]\\\Rightarrow \qquad &\left[ \ \begin{matrix}   x  \\   y  \\   z  \\\end{matrix}\  \right]\ =\ \left[ \begin{matrix}   \ \ 1  \\   \ \ 2  \\   -1  \\\end{matrix}\  \right]\Rightarrow \,\,\,\,x=1,y=2,\ z=-1\end{align}\]
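Substituting these values back into the original equations confirms the solution:

\[3\left( 1 \right)+2+2\left( -1 \right)=3,\qquad 2\left( 1 \right)-3\left( 2 \right)-\left( -1 \right)=-3,\qquad 1+2\left( 2 \right)+\left( -1 \right)=4\]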

We make the following observations:

– If \(\left| A \right|\ne 0,\) then \({{A}^{-1}}\) always exists, which means that the system has a unique solution.

– If \(\left| A \right|=0,\) two cases arise, as we saw while studying determinants.

Since

\[X={{A}^{-1}}B=\frac{\widetilde{A}}{\left| A \right|}B\]

\(\Rightarrow \qquad \text{If}\ \widetilde{A}B\ne 0,\) no solution exists, since \(X\) becomes undefined

\(\Rightarrow \qquad \text{If}\ \widetilde{A}B=0,\) the system has an infinite number of solutions.

– For a homogeneous system, i.e., B = 0,

\(\Rightarrow \text{If}\ \left| A \right|\ne 0,\) the system has only one solution, namely the trivial solution X = 0

\(\Rightarrow \text{If}\ \left| A \right|=0,\) then since \(\widetilde{A}B=0\)  too, the system has an infinite number of solutions.
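For instance (a small \(2\times 2\) illustration, constructed here for concreteness): the homogeneous system \(x+y=0,\ 2x+2y=0\) has

\[\left| A \right|=\left| \begin{matrix}   1 & 1  \\   2 & 2  \\\end{matrix} \right|=0\]

and admits the infinite family of solutions \(\left( x,y \right)=\left( t,-t \right),\) whereas \(x+y=0,\ x-y=0\) has \(\left| A \right|=-2\ne 0\) and only the trivial solution \(x=y=0.\)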

Example - 23

Consider the system of equations

\[\begin{align}  & x+y+\ z=5 \\  & x+2y+3z=9 \\ & x+3y+\lambda z=\mu  \\ \end{align}\]

Find \(\lambda \ \text{and}\ \mu \) for which this system has

(a) a unique solution

(b) no solution

(c) infinite solutions

Solution:

\[\Delta =\left| \ \begin{matrix}   1 & 1 & 1  \\   1 & 2 & 3  \\   1 & 3 & \lambda   \\\end{matrix}\  \right|=\lambda -5\]
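Expanding along \({{R}_{1}}\):

\[\Delta =1\left( 2\lambda -9 \right)-1\left( \lambda -3 \right)+1\left( 3-2 \right)=\lambda -5\]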

(a) For a unique solution, \(\Delta \ne 0\Rightarrow \ \ \ \lambda \ne 5\)

(b) For no solution

\[\Delta =0\qquad \text{and}\qquad \widetilde{A}B\ne 0\]

\[\Delta =0\ \ \Rightarrow \ \ \lambda =5\]

For \(\lambda =5,\) the coefficient matrix becomes

\[A=\left[ \ \begin{matrix}   1 & 1 & 1  \\   1 & 2 & 3  \\   1 & 3 & 5  \\\end{matrix}\  \right]\]

\[\Rightarrow \quad \widetilde{A}=\left[ \ \begin{matrix}   \ \ 1 & -2 & \ \ 1  \\   -2 & \ \ 4 & -2  \\   \ \ 1 & -2 & \ \ 1  \\\end{matrix}\  \right]\]

\[\Rightarrow \quad \widetilde{A}B=\left[ \ \begin{matrix}   \ \ 1 & -2 & \ \ 1  \\   -2 & \ \ 4 & -2  \\   \ \ 1 & -2 & \ \ 1  \\\end{matrix}\  \right]\ \left[ \ \begin{matrix}   5  \\   9  \\   \mu   \\\end{matrix}\  \right]\ =\ \left[ \ \begin{matrix}   \mu -13  \\   -2\mu +26  \\   \mu -13  \\\end{matrix}\  \right]\]

For \(\widetilde{A}B\ne 0,\) we require \(\mu \ne 13.\)

Therefore, the system will have no solution if \(\lambda =5,\ \mu \ne 13\)

(c) For infinitely many solutions,

\[\begin{align}&\Delta =0,\ \ \widetilde{A}B=0\\\\\Rightarrow \qquad &\lambda =5,\ \ \mu =13\end{align}\]
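As a check (a verification added here for concreteness): for \(\lambda =5,\ \mu =13,\) the third equation equals twice the second minus the first, so the system effectively contains only two independent equations and reduces to the one-parameter family

\[\left( x,y,z \right)=\left( 1+t,\ 4-2t,\ t \right),\qquad t\in \mathbb{R}\]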

TRY YOURSELF - IV

Q.1    Let \(A=\left[ \begin{matrix}   2 & 3  \\   6 & 5  \\\end{matrix} \right]\ \text{and}\ B=\left[ \begin{matrix}   3 & 7  \\   4 & 0  \\\end{matrix} \right].\ \ \text{Evaluate}\ {{\left( A+B \right)}^{2}}.\)

Q.2    If \(A=\left[ \begin{matrix}   -4 & -3 & -3  \\   1 & 0 & 1  \\   4 & 4 & 3  \\\end{matrix} \right]\ ,\ \ \text{prove that}\ \widetilde{A}=A.\)

Q.3    For  \(A=\left[ \begin{matrix}   \ \ 1 & \ \ 1 & -1  \\   \ \ 2 & \ \ 0 & \ \ 3  \\   -3 & -1 & \ \ 2  \\\end{matrix}\  \right]\ ,\ \ B=\left[ \begin{matrix}   \ \ 1 & 3  \\   \ \ 0 & 2  \\   -1 & 4  \\\end{matrix}\  \right],\ C=\left[ \begin{matrix}   1 & 2 & 3 & -4  \\   2 & 0 & -2 & 1  \\\end{matrix} \right],\)

verify that \(\left( AB \right)C=A\left( BC \right)\)

Q.4    Find the inverse of

\[A=\left[ \begin{matrix}   \ \ 1 & -2 & 3  \\   \ \ 0 & -1 & 4  \\   -2 & 2 & 1  \\\end{matrix} \right]\]

Q.5    Find X such that

\[A=\left[ \begin{matrix}   \ \ 1 & \ \ 0 & -4  \\   \ \ 0 & -1 & \ 2  \\   -1 & \ \ 2 & \ 1  \\\end{matrix} \right],\ \ B\ =\left[ \begin{matrix}   \ 0 & 0 & 1  \\   0 & 1 & 0  \\   1 & 0 & 0  \\\end{matrix} \right]\]