Inverse transformations are linear
Proving that inverse transformations are always linear transformations
Inverse transformations are linear transformations
The inverse of an invertible linear transformation is itself a linear transformation.
This means that the inverse transformation $T^{-1}$ is closed under addition and closed under scalar multiplication, which are exactly the two conditions it needs to satisfy in order to be a linear transformation.
In other words, as long as the original transformation $T$
is a linear transformation itself, and
is invertible (its inverse $T^{-1}$ is defined, so you can find it),
then the inverse of $T$, $T^{-1}$, is also a linear transformation.
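Here's a quick sketch of why that's true, written out in LaTeX. It uses nothing but the linearity of $T$ and the fact that $T(T^{-1}(\vec{a})) = \vec{a}$ for every vector $\vec{a}$.

```latex
% Additivity: apply T to T^{-1}(a) + T^{-1}(b) and use the linearity of T
\begin{align*}
T\big(T^{-1}(\vec{a}) + T^{-1}(\vec{b})\big)
  &= T\big(T^{-1}(\vec{a})\big) + T\big(T^{-1}(\vec{b})\big) = \vec{a} + \vec{b} \\
\Longrightarrow \quad T^{-1}(\vec{a} + \vec{b})
  &= T^{-1}(\vec{a}) + T^{-1}(\vec{b})
\end{align*}

% Scalar multiplication: the same idea with a scalar c
\begin{align*}
T\big(c\,T^{-1}(\vec{a})\big)
  &= c\,T\big(T^{-1}(\vec{a})\big) = c\,\vec{a} \\
\Longrightarrow \quad T^{-1}(c\,\vec{a})
  &= c\,T^{-1}(\vec{a})
\end{align*}
```

Those are exactly the two closure conditions, so $T^{-1}$ is linear.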
Inverse transformations as matrix-vector products
Remember that any linear transformation can be represented as a matrix-vector product. Normally, we rewrite the linear transformation $T(\vec{x})$ as

$$T(\vec{x}) = A\vec{x}$$

where $A$ is a square matrix.

But because the inverse transformation $T^{-1}$ is a linear transformation as well, we can also write the inverse as a matrix-vector product, $T^{-1}(\vec{x}) = A^{-1}\vec{x}$.

The matrix $A^{-1}$ is the inverse of the matrix $A$ that we used to define the transformation $T$.
In other words, given a linear transformation $T$ and its inverse $T^{-1}$, if we want to express both of them as matrix-vector products, we know that the matrices we use to do so will be inverses of one another.
The reason this is true is because we already know that taking the composition of the inverse transformation $T^{-1}$ with $T$ will always give us the identity matrix $I_n$, where $n$ is the dimension of the domain and codomain, $\mathbb{R}^n$:

$$T^{-1}(T(\vec{x})) = I_n\vec{x}$$

Going the other way (applying the transformation $T$ to the inverse transformation $T^{-1}$) gives the same result:

$$T(T^{-1}(\vec{x})) = I_n\vec{x}$$
The only way these can be true is if the matrices that are part of the matrix-vector products are inverses of one another.
Remember that matrix multiplication is not commutative, which means that, given two matrices $A$ and $B$, $AB$ is not equal to $BA$. We can’t change the order of the matrix multiplication and still get the same answer. But based on these compositions of $T^{-1}$ with $T$ and vice versa, if $B$ is the matrix that represents $T^{-1}$, we’re saying that $BA$ and $AB$ must both equal $I_n$, which means they must equal each other. The only way that $AB$ and $BA$ can both equal $I_n$ is if $A$ and $B$ are inverses of one another.
This is why, when we represent the inverse transformation $T^{-1}$ as a matrix-vector product, we know that the matrix we use must be the inverse of the matrix we used to represent $T$. So if we represent $T$ as $T(\vec{x}) = A\vec{x}$ with the matrix $A$, that means $T^{-1}$ can only be represented as $T^{-1}(\vec{x}) = A^{-1}\vec{x}$ with the inverse of $A$, $A^{-1}$.
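If you want to see this numerically, here's a small Python sketch using NumPy. The matrix and vector are made up purely for illustration; the point is that composing the two matrix-vector products in either order gives back the original vector, and that the two matrices multiply to the identity in both orders.

```python
import numpy as np

# A made-up invertible 2x2 matrix representing T(x) = A x
A = np.array([[2.0, 1.0],
              [5.0, 3.0]])

A_inv = np.linalg.inv(A)   # matrix representing the inverse transformation

x = np.array([4.0, -7.0])  # any test vector

# Composing in either order should return x (up to floating-point rounding)
print(A_inv @ (A @ x))     # T^{-1}(T(x)) -> [ 4. -7.]
print(A @ (A_inv @ x))     # T(T^{-1}(x)) -> [ 4. -7.]

# And the matrix products in both orders equal the identity matrix
print(np.allclose(A_inv @ A, np.eye(2)))   # True
print(np.allclose(A @ A_inv, np.eye(2)))   # True
```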
Finding the matrix inverse
We’ve said that the inverse transformation $T^{-1}$ can be represented as the matrix-vector product $T^{-1}(\vec{x}) = A^{-1}\vec{x}$. Here’s how we can find $A^{-1}$.

If we start with $A$, but augment it with the identity matrix $I_n$, then all we have to do to find $A^{-1}$ is work on the augmented matrix until $A$ is in reduced row-echelon form.

In other words, given the matrix $A$, we’ll start with the augmented matrix

$$[\,A \mid I_n\,]$$

Through the process of putting $A$ in reduced row-echelon form, $I_n$ will be transformed into $A^{-1}$, and we’ll end up with

$$[\,I_n \mid A^{-1}\,]$$
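Here's a sketch of that procedure in Python, using SymPy's exact row reduction and another made-up matrix: augment $A$ with $I_3$, row-reduce the whole thing, and read $A^{-1}$ off the right-hand block.

```python
from sympy import Matrix, eye

# An arbitrary invertible 3x3 matrix (made up for illustration)
A = Matrix([[1, 2, 0],
            [0, 1, 1],
            [2, 0, 1]])

# Build the augmented matrix [ A | I_3 ]
augmented = A.row_join(eye(3))

# Row-reduce; the left block becomes I_3 and the right block becomes A^{-1}
rref_form, _pivots = augmented.rref()
A_inverse = rref_form[:, 3:]

print(A_inverse)
print(A_inverse == A.inv())   # True: matches SymPy's built-in inverse
```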
This seems like a magical process, but there’s a very simple reason why it works. Remember earlier in the course that we learned how one row operation could be expressed as an elimination matrix $E$. And that if we performed lots of row operations, through matrix multiplication of $E_1$, $E_2$, $E_3$, etc., we could find one consolidated elimination matrix.

That’s exactly what we’re doing here. We’re performing row operations on $A$ to change it into $I_n$. All those row operations could be expressed as the consolidated elimination matrix $E$. And we’re saying that if we multiply $A$ by $E$, we’ll get the identity matrix, so $EA = I_n$. But as you know, $A^{-1}A = I_n$, which means $E = A^{-1}$. And because we apply those same row operations to the identity matrix on the right side of the augmented matrix, the right side turns into $EI_n = E = A^{-1}$, which is exactly the inverse we’re looking for.
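To see the elimination-matrix idea in action, here's a short Python sketch (my own illustration, again with a made-up matrix). It applies every row operation to $A$ and to a running copy of $I_n$ at the same time, and the accumulated matrix $E$ comes out equal to $A^{-1}$.

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Row-reduce A to the identity while applying the same row
    operations to I; the accumulated matrix E is then A's inverse."""
    A = A.astype(float)
    n = A.shape[0]
    E = np.eye(n)                      # starts as I_n, collects every row operation

    for col in range(n):
        # Swap in a row with a nonzero pivot if needed
        pivot_row = col + np.argmax(np.abs(A[col:, col]))
        if A[pivot_row, col] == 0:
            raise ValueError("matrix is not invertible")
        A[[col, pivot_row]] = A[[pivot_row, col]]
        E[[col, pivot_row]] = E[[pivot_row, col]]

        # Scale the pivot row so the pivot entry becomes 1
        scale = A[col, col]
        A[col] /= scale
        E[col] /= scale

        # Zero out the rest of the column
        for row in range(n):
            if row != col:
                factor = A[row, col]
                A[row] -= factor * A[col]
                E[row] -= factor * E[col]

    return E   # E times the original A is I_n, so E is the inverse of A

# A made-up invertible 3x3 matrix, just for illustration
A = np.array([[0.0, 2.0, 1.0],
              [1.0, 1.0, 0.0],
              [3.0, 0.0, 1.0]])

E = gauss_jordan_inverse(A)
print(np.allclose(E @ A, np.eye(3)))      # True
print(np.allclose(E, np.linalg.inv(A)))   # True
```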
Finding the inverse transformation
Example
Find $T^{-1}$.

Because $A$ is a $3\times 3$ matrix, its associated identity matrix is $I_3$. So we’ll augment $A$ with $I_3$.
Now we need to put $A$ into reduced row-echelon form. We start by switching two of the rows to get a $1$ into the first entry of the first row.
Now we’ll zero out the rest of the first column.
Find the pivot entry in the second row.
Zero out the rest of the second column.
Find the pivot entry in the third row.
Zero out the rest of the third column.
Now that $A$ has been put into reduced row-echelon form on the left side of the augmented matrix, the identity matrix on the right side has been turned into the inverse matrix $A^{-1}$. So we can say that the inverse transformation is $T^{-1}(\vec{x}) = A^{-1}\vec{x}$, where $A^{-1}$ is the matrix now sitting on the right side of the augmented matrix.
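Since the specific matrix from this example isn't shown above, here's the same sequence of steps carried out in Python on a stand-in $3\times 3$ matrix (the numbers are made up purely for illustration); each block mirrors one of the steps we just walked through.

```python
import numpy as np

# Stand-in 3x3 matrix (made up); augment it with I_3 to form [ A | I_3 ]
A = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 3.0],
              [4.0, -3.0, 8.0]])
aug = np.hstack([A, np.eye(3)])

# Switch two rows to get a 1 into the first entry of the first row
aug[[0, 1]] = aug[[1, 0]]

# Zero out the rest of the first column
aug[2] -= 4 * aug[0]

# The pivot entry in the second row is already 1;
# zero out the rest of the second column (only the third row needs fixing)
aug[2] += 3 * aug[1]

# Make the pivot entry in the third row a 1, then zero out the rest of the third column
aug[2] /= 2
aug[0] -= 3 * aug[2]
aug[1] -= 2 * aug[2]

A_inv = aug[:, 3:]                        # the right block is now A^{-1}
print(np.allclose(A @ A_inv, np.eye(3)))  # True
```

With a different matrix the particular row operations change, but the pattern is always the same: work column by column, get a $1$ in the pivot position, and zero out everything else in that column on both sides of the augmented matrix.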