# Gauss-Jordan Elimination

**Row reduction** is the process of performing row operations to transform a matrix into (reduced) row echelon form. In reduced row echelon form, each successive row of the matrix has fewer dependencies than the previous one, so solving systems of equations becomes a much easier task. The idea behind row reduction is to convert the matrix into an "equivalent" version in order to simplify certain matrix computations. Its two main purposes are to solve systems of linear equations and to calculate the inverse of a matrix.

Carl Friedrich Gauss championed the use of row reduction, to the extent that it is commonly called **Gaussian elimination**. It was further popularized by Wilhelm Jordan, who attached his name to the process by which row reduction is used to compute matrix inverses, **Gauss-Jordan elimination**.

## Explanation

A system of equations can be represented in a couple of different matrix forms. One way is to express the system as the product of the matrix of coefficients and the column vector of its variables. The square matrix is called the **coefficient matrix** because it consists of the coefficients of the variables in the system of equations:

$\text{Product of matrices: } \quad \begin{array}{c c c c c c c} 2x & + & 4y & + & 7z & = & 4\\ 3x & + & 3y & + & 2z & = & 8\\ 5x & + & 6y & + & 3z & = & 0 \\ \end{array} \longrightarrow \left[ \begin{array}{c c c} 2 & 4 & 7\\ 3 & 3 & 2 \\ 5 & 6 & 3 \\ \end{array}\right] \left[ \begin{array}{c} x \\ y \\ z\\ \end{array} \right] = \left[ \begin{array}{c} 4 \\ 8 \\ 0 \\ \end{array} \right].$

An alternate representation called an **augmented matrix** is created by stitching the coefficient matrix and the column of constants together, separated by a vertical bar. The coefficient matrix is placed on the left of this vertical bar, while the constants on the right-hand side of each equation are placed on the right of the vertical bar:

$\text{Augmented matrix: } \quad \quad \begin{array}{c c c c c c c} 2x & + & 4y & + & 7z & = & 4\\ 3x & + & 3y & + & 2z & = & 8\\ 5x & + & 6y & + & 3z & = & 0 \\ \end{array} \longrightarrow \left[ \begin{array}{c c c | c} 2 & 4 & 7 & 4\\ 3 & 3 & 2 & 8\\ 5 & 6 & 3 & 0\\ \end{array}\right].$
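As a concrete sketch, both representations for the system above can be built in plain Python with lists of rows (the variable names here are illustrative, not from the text):

```python
# Coefficient matrix and right-hand-side constants from the system above.
A = [[2, 4, 7],
     [3, 3, 2],
     [5, 6, 3]]
b = [4, 8, 0]

# The augmented matrix [A | b] appends each constant to its row.
augmented = [row + [const] for row, const in zip(A, b)]
# → [[2, 4, 7, 4], [3, 3, 2, 8], [5, 6, 3, 0]]
```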

The matrices that represent these systems can be manipulated in such a way as to provide easy-to-read solutions. This manipulation is called row reduction. Row reduction techniques transform the matrix into **reduced row echelon form** without changing the solutions to the system.

The **reduced row echelon form** of a matrix $A$ $\big($denoted $\text{rref}(A)\big)$ is a matrix of equal dimensions that satisfies the following:

- The leftmost non-zero element in each row is $1$. This element is known as a pivot.
- Any column can have at most $1$ pivot. If a column has a pivot, then the rest of the elements in the column will be $0$.
- For any two columns $C_{1}$ and $C_{2}$ that have pivots in rows $R_{1}$ and $R_{2},$ respectively, if the pivot in $C_{1}$ is to the left of the pivot in $C_{2}$, then $R_{1}$ is above $R_{2}$. In other words, for any two pivots $P_{1}$ and $P_{2}$, if $P_{2}$ is to the right of $P_{1}$, then $P_{2}$ is below $P_{1}$.
- Rows that consist of only zeroes are at the bottom of the matrix.

To convert any matrix to its reduced row echelon form, Gauss-Jordan elimination is performed. There are three elementary row operations used to achieve reduced row echelon form:

- Switch two rows.
- Multiply a row by any non-zero constant.
- Add a scalar multiple of one row to any other row.
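The three elementary row operations can be sketched as small Python helpers acting on a matrix stored as a list of rows (a minimal sketch; the function names are my own, not standard):

```python
def swap_rows(M, i, j):
    """Switch rows i and j in place."""
    M[i], M[j] = M[j], M[i]

def scale_row(M, i, c):
    """Multiply row i by a non-zero constant c."""
    M[i] = [c * x for x in M[i]]

def add_multiple(M, i, j, c):
    """Add c times row j to row i."""
    M[i] = [x + c * y for x, y in zip(M[i], M[j])]
```

For example, dividing the first row of $\left[\begin{smallmatrix} 2 & 6 & -2\\ 1 & 6 & -4 \\ -1 & 4 & 9 \end{smallmatrix}\right]$ by 2 is `scale_row(M, 0, 0.5)`.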

Find $\text{rref}(A)$ using Gauss-Jordan elimination, where $A = \left[ \begin{array}{c c c} 2 & 6 & -2\\ 1 & 6 & -4 \\ -1 & 4 & 9 \\ \end{array}\right].$

The leftmost element in the first row must be 1, so the first row is divided by 2:

$\left[ \begin{array}{c c c} 2 & 6 & -2\\ 1 & 6 & -4 \\ -1 & 4 & 9 \\ \end{array}\right] \ce{->[\large \text{Divide the first row by 2.}]} \left[ \begin{array}{c c c} 1 & 3 & -1\\ 1 & 6 & -4 \\ -1 & 4 & 9 \\ \end{array}\right].$

The top left element is a pivot, so the rest of the elements in the first column must be 0. This can be done by subtracting the first row from the second row. Furthermore, the first row can be added to the third row to obtain the necessary 0s in the first column:

$\left[ \begin{array}{c c c} 1 & 3 & -1\\ 1 & 6 & -4 \\ -1 & 4 & 9 \\ \end{array}\right] \ce{->[\large R_2 - R_1 \text{ and } R_3 + R_1]} \left[ \begin{array}{c c c} 1 & 3 & -1\\ 0 & 3 & -3 \\ 0 & 7 & 8 \\ \end{array}\right].$

Now that the leftmost column is $\left[ \begin{array}{c} 1 \\ 0 \\ 0 \\ \end{array}\right],$ the middle element can be made 1 by dividing the second row by 3:

$\left[ \begin{array}{c c c} 1 & 3 & -1\\ 0 & 3 & -3 \\ 0 & 7 & 8 \\ \end{array}\right] \ce{->[\large \text{Divide the second row by 3.}]} \left[ \begin{array}{c c c} 1 & 3 & -1\\ 0 & 1 & -1 \\ 0 & 7 & 8 \\ \end{array}\right].$

The top and bottom elements in the second column can be made 0 with the appropriate row operations:

$\left[ \begin{array}{c c c} 1 & 3 & -1\\ 0 & 1 & -1 \\ 0 & 7 & 8 \\ \end{array}\right] \ce{->[\large R_1 - 3R_2 \text{ and } R_3 - 7R_2]} \left[ \begin{array}{c c c} 1 & 0 & 2\\ 0 & 1 & -1 \\ 0 & 0 & 15 \\ \end{array}\right].$

With the middle column now $\left[ \begin{array}{c} 0 \\ 1 \\ 0 \\ \end{array}\right],$ the method proceeds to the third column by dividing the third row by 15:

$\left[ \begin{array}{c c c} 1 & 0 & 2\\ 0 & 1 & -1 \\ 0 & 0 & 15 \\ \end{array}\right] \ce{->[\large \text{Divide the third row by 15.} ]} \left[ \begin{array}{c c c} 1 & 0 & 2\\ 0 & 1 & -1 \\ 0 & 0 & 1 \\ \end{array}\right].$

In the final step of the process, multiples of the third row are added to the first and second rows so that the last column becomes $\left[ \begin{array}{c} 0 \\ 0 \\ 1 \\ \end{array}\right]:$

$\left[ \begin{array}{c c c} 1 & 0 & 2\\ 0 & 1 & -1 \\ 0 & 0 & 1 \\ \end{array}\right] \ce{->[\large R_1 - 2R_3 \text{ and } R_2 + R_3. ]} \left[ \begin{array}{c c c} 1 & 0 & 0\\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array}\right]. \ _\square$
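The procedure carried out by hand above can be sketched as a short routine in plain Python (a hedged sketch: the function name is my own, and it adds partial pivoting, i.e. swapping in the row with the largest entry, for numerical stability, which the hand computation did not need):

```python
def rref(M):
    """Return the reduced row echelon form of M (a list of rows)
    using Gauss-Jordan elimination."""
    M = [list(map(float, row)) for row in M]  # work on a copy
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        if r == rows:
            break
        # Choose a pivot row: the largest entry in column c at or below row r.
        p = max(range(r, rows), key=lambda k: abs(M[k][c]))
        if abs(M[p][c]) < 1e-12:
            continue  # no pivot in this column
        M[r], M[p] = M[p], M[r]
        # Scale the pivot row so the pivot becomes 1.
        piv = M[r][c]
        M[r] = [x / piv for x in M[r]]
        # Subtract multiples of the pivot row to zero out the rest of column c.
        for k in range(rows):
            if k != r:
                f = M[k][c]
                M[k] = [x - f * y for x, y in zip(M[k], M[r])]
        r += 1
    return M
```

Applied to the matrix $A$ from the example, this routine reduces it to the identity matrix, matching the result obtained by hand.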

## Solving for Variables

## Computing Inverses

Given an equation $A x = b$ with a unique solution for $x$, it is possible to solve the equation by calculating $A^{-1}$ and then evaluating $A^{-1}A x = x = A^{-1} b.$ However, it takes much more time to calculate $A^{-1}$ $($and then multiply by $b)$ than to solve for $x$ directly. For that reason, $A^{-1}$ is worth computing only when the inverse itself is needed, for instance to solve the same system for many different right-hand sides.

To compute the inverse, perform row reduction on the augmented matrix $[A \mid I]$, where $I$ is the identity matrix, to obtain a matrix of the form $[I \mid B]$. Then $B = A^{-1}$ is the desired inverse.
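This procedure can be sketched in plain Python as follows (an illustrative sketch with names of my own; it assumes $A$ is square and invertible, and uses partial pivoting for numerical stability):

```python
def inverse_via_rref(A):
    """Invert A by row-reducing [A | I] to [I | B]; then B = A^(-1).
    Sketch only: assumes A is square and invertible."""
    n = len(A)
    # Augment A with the identity matrix on the right.
    M = [list(map(float, row)) + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for c in range(n):
        # Partial pivoting: bring the largest entry in column c to the diagonal.
        p = max(range(c, n), key=lambda k: abs(M[k][c]))
        M[c], M[p] = M[p], M[c]
        piv = M[c][c]
        M[c] = [x / piv for x in M[c]]        # make the pivot 1
        for k in range(n):                    # zero out the rest of column c
            if k != c:
                f = M[k][c]
                M[k] = [x - f * y for x, y in zip(M[k], M[c])]
    return [row[n:] for row in M]             # the right half is the inverse
```

A quick check of the result is to multiply $A$ by the returned matrix and confirm the product is the identity.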

## See Also

**Cite as:** Gauss-Jordan Elimination. *Brilliant.org*. Retrieved from https://brilliant.org/wiki/gaussian-elimination/