Inverse transformations are linear


Proving that inverse transformations are always linear transformations

Inverse transformations are linear transformations

The inverse of an invertible linear transformation ???T??? is itself a linear transformation.


This means that the inverse transformation ???T^{-1}??? is closed under addition and closed under scalar multiplication.

???T^{-1}(\vec{u}+\vec{v})=T^{-1}(\vec{u})+T^{-1}(\vec{v})???

???T^{-1}(c\vec{u})=cT^{-1}(\vec{u})???

In other words, as long as the original transformation ???T???

  1.  is a linear transformation itself, and

  2.  is invertible (its inverse is defined, you can find its inverse),

then the inverse of ???T???, ???T^{-1}???, is also a linear transformation.
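
If you want to see these two properties numerically, here's a minimal sketch in Python with NumPy. It assumes ???T??? is multiplication by some invertible matrix (which is justified in the next section) and computes ???T^{-1}(\vec{y})??? by solving ???T(\vec{x})=\vec{y}??? for ???\vec{x}???; the particular matrix and vectors are just randomly chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A randomly chosen (and therefore almost surely invertible) matrix defining T(x) = Ax
A = rng.standard_normal((3, 3))

def T(x):
    return A @ x

def T_inv(y):
    # T^{-1}(y) is the unique x with T(x) = y, found by solving the linear system
    return np.linalg.solve(A, y)

u = rng.standard_normal(3)
v = rng.standard_normal(3)
c = 2.5

# T^{-1} undoes T
print(np.allclose(T_inv(T(u)), u))  # True

# Closed under addition: T^{-1}(u + v) = T^{-1}(u) + T^{-1}(v)
print(np.allclose(T_inv(u + v), T_inv(u) + T_inv(v)))  # True

# Closed under scalar multiplication: T^{-1}(cu) = cT^{-1}(u)
print(np.allclose(T_inv(c * u), c * T_inv(u)))  # True
```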

Inverse transformations as matrix-vector products

Remember that any linear transformation can be represented as a matrix-vector product. Normally, we rewrite the linear transformation ???T: \mathbb{R}^n\to \mathbb{R}^n??? as

???T(\vec{x})=A\vec{x}???, where ???A??? is a square ???n\times n??? matrix

But because the inverse transformation ???T^{-1}??? is a linear transformation as well, we can also write the inverse as a matrix-vector product.

???T^{-1}(\vec{x})=A^{-1}\vec{x}???

The matrix ???A^{-1}??? is the inverse of the matrix ???A??? that we used to define the transformation ???T???.

In other words, given a linear transformation ???T??? and its inverse ???T^{-1}???, if we want to express both of them as matrix-vector products, we know that the matrices we use to do so will be inverses of one another.

The reason this is true is that we already know the composition of the inverse transformation ???T^{-1}??? with ???T??? always gives us the identity transformation, which is represented by the identity matrix ???I_n???, where ???n??? is the dimension of the domain and codomain, ???\mathbb{R}^n???.

???(T^{-1}\circ T)(\vec{x})=I_n\vec{x}???

Going the other way, applying the transformation ???T??? to the output of the inverse transformation ???T^{-1}???, gives the same result:

???(T\circ T^{-1})(\vec{x})=I_n\vec{x}???

The only way these can be true is if the matrices that are part of the matrix-vector products are inverses of one another.

???(T^{-1}\circ T)(\vec{x})=I_n\vec{x}\quad\quad\to\quad\quad(T^{-1}\circ T)(\vec{x})=A^{-1}A\vec{x}???

???(T\circ T^{-1})(\vec{x})=I_n\vec{x}\quad\quad\to\quad\quad(T\circ T^{-1})(\vec{x})=AA^{-1}\vec{x}???

Remember that matrix multiplication is not commutative, which means that, given two matrices ???A??? and ???B???, ???AB??? is not, in general, equal to ???BA???; we can’t change the order of the multiplication and count on getting the same answer. But these compositions of ???T??? with ???T^{-1}???, and vice versa, tell us that the matrix representing ???T^{-1}??? has to multiply with ???A??? to give ???I_n??? in both orders. A matrix that does that is, by definition, the inverse of ???A???, so the matrix representing ???T^{-1}??? can only be ???A^{-1}???.

This is why, when we represent the inverse transformation ???T^{-1}??? as a matrix-vector product, we know that the matrix we use must be the inverse of the matrix we used to represent ???T???. So if we represent ???T??? as ???T(\vec{x})=A\vec{x}??? with the matrix ???A???, that means ???T^{-1}??? can only be represented as ???T^{-1}(\vec{x})=A^{-1}\vec{x}??? with the inverse of ???A???, ???A^{-1}???.
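
As a quick numerical sanity check, here's a minimal sketch in Python with NumPy, using an arbitrary invertible ???3\times 3??? matrix of our own choosing, that confirms both compositions reduce to the identity matrix.

```python
import numpy as np

# An arbitrary invertible 3x3 matrix representing T(x) = Ax
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])

A_inv = np.linalg.inv(A)   # the matrix representing T^{-1}
I3 = np.eye(3)

# (T^{-1} o T)(x) = A^{-1}Ax = I_3 x
print(np.allclose(A_inv @ A, I3))  # True

# (T o T^{-1})(x) = AA^{-1}x = I_3 x
print(np.allclose(A @ A_inv, I3))  # True

# So applying T and then T^{-1} returns the original vector
x = np.array([1.0, -2.0, 4.0])
print(np.allclose(A_inv @ (A @ x), x))  # True
```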

Finding the matrix inverse

We’ve said that the inverse transformation ???T^{-1}??? can be represented as the matrix-vector product ???T^{-1}(\vec{x})=A^{-1}\vec{x}???. Here’s how we can find ???A^{-1}???.

If we start with ???A??? and augment it with the identity matrix, then all we have to do to find ???A^{-1}??? is row-reduce the augmented matrix until ???A??? is in reduced row-echelon form.

In other words, given the matrix ???A???, we’ll start with the augmented matrix

???\begin{bmatrix}A\ |\ I\end{bmatrix}???

Through the process of putting ???A??? in reduced row-echelon form, ???I??? will be transformed into ???A^{-1}???, and we’ll end up with

???\begin{bmatrix}I\ |\ A^{-1}\end{bmatrix}???

This seems like a magical process, but there’s a very simple reason why it works. Remember that earlier in the course we learned how a single row operation can be expressed as an elimination matrix ???E???, and how, if we perform lots of row operations, we can multiply the individual matrices ???E_1???, ???E_2???, ???E_3???, etc. together to get one consolidated elimination matrix.

That’s exactly what we’re doing here. We’re performing row operations on ???A??? to change it into ???I???, and all of those row operations together can be expressed as a single elimination matrix ???E???. Saying that these row operations turn ???A??? into the identity matrix is saying ???EA=I???. But we know ???A^{-1}A=I???, which means ???E=A^{-1}???. And because the same row operations are applied to the right half of the augmented matrix, the identity matrix on that side gets turned into ???EI=E=A^{-1}???, which is exactly what we read off at the end.
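
Here's one way to sketch that procedure in code, in Python with NumPy (the function name `gauss_jordan_inverse` and the partial-pivoting detail are our own choices for the sketch): augment ???A??? with ???I???, row-reduce the left half to ???I???, and read ???A^{-1}??? off the right half.

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Invert A by row-reducing the augmented matrix [A | I] to [I | A^{-1}]."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])  # the augmented matrix [A | I]

    for col in range(n):
        # Pick the row (at or below the diagonal) with the largest entry in this
        # column as the pivot and swap it up -- partial pivoting keeps the
        # arithmetic stable and handles zeros on the diagonal.
        pivot = col + np.argmax(np.abs(M[col:, col]))
        if np.isclose(M[pivot, col], 0.0):
            raise ValueError("A is singular, so A^{-1} does not exist")
        M[[col, pivot]] = M[[pivot, col]]

        M[col] /= M[col, col]  # scale the pivot row so the pivot entry is 1
        for row in range(n):
            if row != col:
                M[row] -= M[row, col] * M[col]  # zero out the rest of the column

    return M[:, n:]  # the left half is now I, so the right half is A^{-1}

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])

print(np.allclose(gauss_jordan_inverse(A), np.linalg.inv(A)))  # True
```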

Finding the inverse transformation

Example

Find ???A^{-1}???.

[Image: the ???3\times 3??? matrix ???A???]

Because ???A??? is a ???3\times 3??? matrix, its associated identity matrix is ???I_3???. So we’ll augment ???A??? with ???I_3???.

[Image: the augmented matrix ???\begin{bmatrix}A\ |\ I_3\end{bmatrix}???]

Now we need to put ???A??? into reduced row-echelon form. We start by switching ???R_2??? and ???R_1???, to get a ???1??? into the first entry of the first row.

[Image: the augmented matrix after swapping ???R_1??? and ???R_2???]

Now we’ll zero out the rest of the first column.

[Image: the augmented matrix after zeroing out the rest of the first column]

Find the pivot entry in the second row.

[Image: the augmented matrix after finding the pivot entry in the second row]

Zero out the rest of the second column.

[Image: the augmented matrix after zeroing out the rest of the second column]

Find the pivot entry in the third row.

[Image: the augmented matrix after finding the pivot entry in the third row]

Zero out the rest of the third column.

[Image: the augmented matrix after zeroing out the rest of the third column]

Now that ???A??? has been put into reduced row-echelon form on the left side of the augmented matrix, the identity matrix on the right side has been turned into the inverse matrix ???A^{-1}???. So we can say

[Image: the resulting augmented matrix ???\begin{bmatrix}I_3\ |\ A^{-1}\end{bmatrix}???]
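
The entries of ???A??? for this example are shown in the images above, so here's the same ???\begin{bmatrix}A\ |\ I_3\end{bmatrix}\to\begin{bmatrix}I_3\ |\ A^{-1}\end{bmatrix}??? workflow sketched in Python with SymPy on a stand-in ???3\times 3??? matrix of our own choosing (not the matrix from this example), which, like the example, needs a row swap to get a ???1??? into the first pivot position.

```python
import sympy as sp

# A stand-in invertible 3x3 matrix (not the one from this example)
A = sp.Matrix([[0, 2, 1],
               [1, 1, 0],
               [2, 0, 3]])

# Augment A with I_3, then put the augmented matrix in reduced row-echelon form
augmented = A.row_join(sp.eye(3))
rref_matrix, _ = augmented.rref()

# The left half is now I_3, and the right half is A^{-1}
A_inv = rref_matrix[:, 3:]

print(A_inv)
print(A * A_inv == sp.eye(3))  # True
```

Either way, once the left half of the augmented matrix has been reduced to ???I_3???, the right half is the inverse we're after.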