Generalization of the inverse of a matrix.
I believe we pay too much attention to implementation, and too little attention to the study of the concept that is implemented. I have been on the teams of many Data Science and Machine Learning projects, and I would always reiterate one simple idea; that is, “If you do not know the math, you don’t know it at all.”

This is a piece of philosophy I deeply believe in. With the advent of packages like numpy, matplotlib, scikit-learn etc., implementing a machine learning model with a moderately difficult data set and problem is fairly simple. The magic then stays in being able to tweak the algorithm and get something new (or weird) out of the model. And, for you to be capable of doing so, you will have to know the mechanism behind it.
The Moore-Penrose pseudoinverse is the soul of PCA (Principal Component Analysis), one of the most popular dimensionality reduction techniques.
How do we define the inverse of a matrix?
Provided that the matrix is square and non-singular, we simply divide the adjoint (adjugate) of the matrix by its determinant.
Mathematically, for $A \in \mathbb{R}^{n \times n}$ and $\det(A) \neq 0$, the inverse of $A$ is defined as,

$$A^{-1} = \frac{\operatorname{adj}(A)}{\det(A)}$$
Of course, the above method is computationally very expensive. Hence, we can get the inverse of the matrix recursively using the Faddeev-LeVerrier algorithm (read about that in this blog of mine).
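As a quick illustration of the adjugate-over-determinant construction, here is a minimal numpy sketch on a small hypothetical 3×3 matrix (the matrix and the helper name are just for illustration), compared against `np.linalg.inv`:

```python
import numpy as np

def adjugate_inverse(A):
    """Invert a square, non-singular matrix via adj(A) / det(A)."""
    n = A.shape[0]
    cof = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            # Minor: delete row i and column j, then take the determinant.
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            cof[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    adj = cof.T                      # adjugate = transpose of the cofactor matrix
    return adj / np.linalg.det(A)

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])      # hypothetical non-singular matrix
print(np.allclose(adjugate_inverse(A), np.linalg.inv(A)))   # True
```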
Now, how do we deal with matrices that are non-square? How do you find the inverse of a matrix $A \in \mathbb{R}^{m \times n}$ with $m \neq n$?
This is where the generalization of the inverse of a matrix comes in, named the Moore-Penrose pseudoinverse.
For every $A \in \mathbb{R}^{m \times n}$, there exists a pseudoinverse $A^{\dagger} \in \mathbb{R}^{n \times m}$. ($A^{\dagger}$ is read as “A dagger”.)
$A^{\dagger}$ is mathematically defined as,

$$A^{\dagger} = (A^{T}A)^{-1}A^{T}$$
This is dimensionally consistent. Please check and verify.
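As a sanity check, here is a minimal numpy sketch on an arbitrary 4×2 matrix (chosen purely for illustration) confirming that the shapes work out:

```python
import numpy as np

A = np.random.rand(4, 2)                  # hypothetical m x n matrix with m=4, n=2
A_dagger = np.linalg.inv(A.T @ A) @ A.T   # (A^T A)^{-1} A^T
print(A.shape, A_dagger.shape)            # (4, 2) (2, 4): dimensionally consistent
```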
Now, say we have a non-square matrix $A \in \mathbb{R}^{m \times n}$ with $m \neq n$.

It is impossible to find $A^{-1}$ by the conventional method $A^{-1} = \operatorname{adj}(A)/\det(A)$, since the determinant is only defined for square matrices. So, we use the generalized inverse $A^{\dagger}$.

So, we first compute $A^{T}A$, which is a square $n \times n$ matrix, and then its inverse $(A^{T}A)^{-1}$.

Hence,

$$A^{\dagger} = (A^{T}A)^{-1}A^{T},$$

which is the pseudoinverse or the generalized inverse.
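To make the steps concrete, here is a minimal numpy sketch on a hypothetical 3×2 matrix (not the matrix from the original worked example), computing $A^{T}A$, its inverse, and finally $A^{\dagger}$, compared against numpy's built-in `np.linalg.pinv`:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])           # hypothetical 3x2 (non-square) matrix

AtA = A.T @ A                        # A^T A, a 2x2 square matrix
AtA_inv = np.linalg.inv(AtA)         # (A^T A)^{-1}
A_dagger = AtA_inv @ A.T             # A^dagger = (A^T A)^{-1} A^T, shape 2x3

print(A_dagger)
print(np.allclose(A_dagger, np.linalg.pinv(A)))   # True: matches the built-in pseudoinverse
```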
For a square, non-singular matrix (i.e., $m = n$ and $\det(A) \neq 0$), $A^{\dagger} = A^{-1}$.
In detail,

$$A^{\dagger} = (A^{T}A)^{-1}A^{T} = A^{-1}(A^{T})^{-1}A^{T} = A^{-1}$$
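A quick numerical check of this identity, again with a hypothetical non-singular square matrix:

```python
import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])           # hypothetical non-singular 2x2 matrix
# For a square, non-singular matrix the pseudoinverse equals the ordinary inverse.
print(np.allclose(np.linalg.pinv(A), np.linalg.inv(A)))   # True
```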
Some properties of the generalized inverse are (checked numerically in the sketch after this list),
1. $AA^{\dagger}A = A$
2. $A^{\dagger}AA^{\dagger} = A^{\dagger}$
3. $(AA^{\dagger})^{T} = AA^{\dagger}$ and $(A^{\dagger}A)^{T} = A^{\dagger}A$, i.e., both products are symmetric.
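A minimal sketch, assuming a hypothetical 3×2 matrix, that checks each property with numpy:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [0.0, 3.0]])           # hypothetical 3x2 matrix
Ad = np.linalg.pinv(A)               # Moore-Penrose pseudoinverse

print(np.allclose(A @ Ad @ A, A))          # property 1: A A^dagger A = A
print(np.allclose(Ad @ A @ Ad, Ad))        # property 2: A^dagger A A^dagger = A^dagger
print(np.allclose((A @ Ad).T, A @ Ad))     # property 3: A A^dagger is symmetric
print(np.allclose((Ad @ A).T, Ad @ A))     #             A^dagger A is symmetric
```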
One important point to remember is that $A^{\dagger}$ always exists and is unique. (The formula $(A^{T}A)^{-1}A^{T}$ only applies when $A^{T}A$ is invertible, i.e., when the columns of $A$ are linearly independent; in general, $A^{\dagger}$ is obtained from the singular value decomposition.)
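For completeness, here is a sketch with a deliberately rank-deficient, hypothetical matrix, showing that the pseudoinverse still exists via the SVD even when $A^{T}A$ is singular and the closed-form formula breaks down:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])           # hypothetical rank-1 matrix: second column = 2 * first column

# A^T A is singular here, so (A^T A)^{-1} A^T cannot be used...
print(np.linalg.matrix_rank(A.T @ A))      # 1, i.e. A^T A is not invertible

# ...but the pseudoinverse still exists and is unique, computed via the SVD.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
s_inv = np.zeros_like(s)
s_inv[s > 1e-12] = 1.0 / s[s > 1e-12]      # invert only the significant singular values
A_dagger = Vt.T @ np.diag(s_inv) @ U.T
print(np.allclose(A_dagger, np.linalg.pinv(A)))   # True
```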
Cheers!