The uniqueness of reduced row-echelon form

In this short note, I’ll define reduced row-echelon form (RREF) explicitly and explain why it is unique, that is, why row-reduction is guaranteed to produce the same answer no matter how you go about it.

A matrix is in RREF if:

  1. All the rows consisting entirely of zeros are at the bottom.
  2. In each non-zero row, the leftmost non-zero entry is a 1. These are called the leading ones.
  3. Each leading one is further to the right than the leading ones of previous rows.
  4. The column of each leading one is “clean”, that is, all other entries in that column are 0. (An example follows below.)
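
For example, here is one matrix in RREF (just an illustration; many shapes are possible), with leading ones in columns 1, 3 and 4:

\[
\begin{pmatrix}
1 & 2 & 0 & 0 & 5 \\
0 & 0 & 1 & 0 & -1 \\
0 & 0 & 0 & 1 & 3 \\
0 & 0 & 0 & 0 & 0
\end{pmatrix}
\]

The zero row is at the bottom, each leading entry is a 1, the leading ones move strictly to the right, and columns 1, 3 and 4 are clean.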

Now, why is there only one reduced row-echelon form for any given matrix? To explain this we’ll need the following simple but important fact about elementary row operations:

If a matrix N is obtained from a matrix M by a sequence of elementary row operations, then every row of N is a linear combination of the rows of M.

(This is simply because each elementary row operation replaces a row by a linear combination of some rows.)
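
For instance, to take a toy case: adding twice the first row to the second row of a two-row matrix with rows r_1 and r_2 gives

\[
\begin{pmatrix} r_1 \\ r_2 \end{pmatrix}
\;\longrightarrow\;
\begin{pmatrix} r_1 \\ r_2 + 2r_1 \end{pmatrix},
\]

and both new rows are linear combinations of r_1 and r_2. Since a linear combination of linear combinations of the rows of M is again a linear combination of the rows of M, the property persists through any sequence of operations.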

Also, each elementary row operation can be undone, so we can “run row-reduction backwards”. This means that if we can reduce some matrix M to two different RREF matrices A and B, then we can also start with A, perform operations to get back to M, and then reduce to B. By the same token we can get from B to A by using row operations. Therefore it is enough to show the following:

If A and B are two matrices in RREF such that each row of either one is a linear combination of the rows of the other, then A = B.
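
Schematically, since each type of elementary operation has an inverse of the same type (a swap undoes itself, scaling a row by c ≠ 0 is undone by scaling by 1/c, and adding c times another row is undone by subtracting c times it), the two routes just described look like

\[
A \xrightarrow{\ \text{undo}\ } M \xrightarrow{\ \text{reduce}\ } B
\qquad\text{and}\qquad
B \xrightarrow{\ \text{undo}\ } M \xrightarrow{\ \text{reduce}\ } A,
\]

so, by the fact above, every row of B is a linear combination of the rows of A, and every row of A is a linear combination of the rows of B.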

Let’s argue that now. First we show that A and B must have their leading ones in the same positions. Start with the first row: say A’s first row has its leading one in column number i, while B’s first row has its leading one in column number j. Then i can’t be bigger than j: if it were, every row of A would be zero in all columns before column i, in particular in column j, so no linear combination of rows of A could have a non-zero entry in column j, and yet the first row of B is such a linear combination and has its leading one there. Similarly, i can’t be less than j, and so they must be equal.
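
A tiny numerical instance of that step (the numbers are chosen only for illustration): take

\[
A = \begin{pmatrix} 0 & 1 & 4 \\ 0 & 0 & 0 \end{pmatrix},
\qquad
c_1\,(0,\ 1,\ 4) + c_2\,(0,\ 0,\ 0) = (0,\ c_1,\ 4c_1).
\]

Every linear combination of the rows of A has a 0 in column 1, so no row of B, each being such a combination, can have its leading one in column 1: here i = 2, and j = 1 is ruled out.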

Now, we know that each row of B is a linear combination of rows of A. Note that column i = j of A is clean: its only non-zero entry is the leading one in the first row. So the linear combination for the first row of B definitely uses (by “uses” I mean “uses with a non-zero coefficient”) the first row of A, otherwise it couldn’t have a non-zero entry in column i = j; but the linear combinations for all the other rows of B cannot possibly use the first row of A, otherwise they’d have some non-zero entry in column i = j, which is impossible since B is in RREF and thus column i = j is supposed to be clean.
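
In symbols (the notation here is introduced just for this computation): write b_r for the rows of B, a_s for the rows of A, and c_{r,s} for the coefficients expressing the former in terms of the latter. Looking at column i = j,

\[
b_r = \sum_s c_{r,s}\, a_s
\qquad\Longrightarrow\qquad
(b_r)_i = \sum_s c_{r,s}\, (a_s)_i = c_{r,1},
\]

where the last equality holds because column i of A is clean: the first row of A has a 1 there and every other row has a 0. For r = 1 the entry (b_1)_i is B’s first leading one, so c_{1,1} = 1; for r > 1 that entry lies in the clean column of B’s first leading one, so c_{r,1} = 0.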

So if we delete the first rows of A and B, we get two new matrices that (1) are still in RREF, (2) still have the property that each row of one is a linear combination of the rows of the other (the previous paragraph shows this for the rows of B, and the same argument with the roles of A and B swapped shows it for the rows of A). Then we can apply the same argument as above to show that these new matrices have the leading ones of their first rows in the same position. Repeating this until all rows are exhausted, we see that A and B must have their leading ones in the same positions.

And now it is easy to see that A and B must coincide: take any non-zero row of A, say the k-th. It must be a linear combination of rows of B and we want to show that, in fact, it is just the k-th row of B. Well, the linear combination must use the k-th row of B with coefficient 1, since that is the only way to get the leading one of A’s k-th row: it sits in the same column as the leading one of B’s k-th row, and the rest of that column in B is all zeros. And it can’t use any other non-zero row R of B, since if it did, it’d be stuck with a non-zero entry in the column of R’s leading 1, a column in which A’s k-th row has a 0.
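
The same bookkeeping spells out this last step; below, d_r are the coefficients and j_r is the column of the leading one of B’s r-th non-zero row (notation introduced just for this computation):

\[
a_k = \sum_r d_r\, b_r
\qquad\Longrightarrow\qquad
(a_k)_{j_r} = \sum_s d_s\, (b_s)_{j_r} = d_r
\quad\text{for every non-zero row } b_r,
\]

where the second equality holds because column j_r of B is clean. Since A and B have their leading ones in the same columns, (a_k)_{j_k} = 1 while (a_k)_{j_r} = 0 for r ≠ k; so d_k = 1, every other d_r is 0 (the zero rows of B contribute nothing in any case), and a_k = b_k.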

Omar Antolín Camarena