Preparation Notes - Hell lot of Math and Algorithms; very helpful.
Note : These notes are made during my preparation for my exams. They might be crude and may not be in their best form of presentation. Read at your own risk.
Subject : IC 021 - Computational Techniques in Control Engineering
Professor : Dr. A Ramakalyan.
Difficulty : Runs to Heavens and goes around it times.
Books:
1. B N Datta’s Numerical Methods for Linear Control Systems. - referenced as [1]
2. Gilbert Strang’s Linear Algebra and Its Applications. - referenced as [2]
3. S. Kumaresan’s Linear Algebra - A Geometric Approach. - referenced as [3]
1. Computing $e^{At}$ [1]
There are many ways of doing this; one of the most popular is the Taylor series method.
We simply expand $e^{At}$ as per the Taylor series.
Hence, $e^{At}$ can be written as,
$$e^{At} = I + At + \frac{(At)^2}{2!} + \frac{(At)^3}{3!} + \cdots$$
- The drawbacks to this method are that a large number of terms is needed for convergence, and even when convergence occurs, the answer can be totally wrong.
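To see the method concretely, here is a minimal sketch of the truncated Taylor sum in Python (the truncation length and the test matrix are my own choices; for matrices with large $\|At\|$, this approach degrades exactly as noted above):

```python
import numpy as np

def expm_taylor(A, n_terms=30):
    """Truncated Taylor series: e^A ~ sum_{k=0}^{n_terms-1} A^k / k!."""
    S = np.eye(A.shape[0])          # k = 0 term
    T = np.eye(A.shape[0])          # running term A^k / k!
    for k in range(1, n_terms):
        T = T @ A / k               # build A^k / k! incrementally
        S = S + T
    return S

A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # e^A = [[cos 1, sin 1], [-sin 1, cos 1]]
print(expm_taylor(A))
```

For this small, well-behaved matrix the sum converges quickly; the trouble described above shows up when $\|A\|$ is large or $A$ is highly non-normal.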
Hence, I will now get into the Pade’s Scaling and Squaring algorithm for this task.
Suppose that $f(x)$ is a power series presented as,
$$f(x) = c_0 + c_1 x + c_2 x^2 + \cdots$$
Then the Padé approximation of $f(x)$ is the rational function
$$R_{pq}(x) = \frac{N_{pq}(x)}{D_{pq}(x)},$$
where $N_{pq}(x)$ is a polynomial of degree $p$ and $D_{pq}(x)$ is a polynomial of degree $q$.
The coefficients of $N_{pq}$ and $D_{pq}$ are chosen such that the terms containing $x, x^2, \ldots, x^{p+q}$ are cancelled out in $f(x)D_{pq}(x) - N_{pq}(x)$. The order of the Padé approximation is $p+q$. The ratio $R_{pq}$ is unique and the coefficients of $N_{pq}$ and $D_{pq}$ always exist.
Similarly, the Padé approximation for $e^{A}$ is given by,
$$R_{pq}(A) = \left[D_{pq}(A)\right]^{-1} N_{pq}(A),$$
Where,
$$N_{pq}(A) = \sum_{k=0}^{p} \frac{(p+q-k)!\, p!}{(p+q)!\, k!\, (p-k)!}\, A^k$$
and
$$D_{pq}(A) = \sum_{k=0}^{q} \frac{(p+q-k)!\, q!}{(p+q)!\, k!\, (q-k)!}\, (-A)^k$$
- It can be shown (Exercise 5.16) that if $p$ and $q$ are large, or if $A$ is a matrix having all its eigenvalues with negative real parts, then $D_{pq}(A)$ is nonsingular.
- The difficulty of round-off errors in the Padé method can be controlled by scaling and squaring.
- Since $e^A$ is the same as $\left(e^{A/m}\right)^m$, the idea is to choose $m = 2^s$ so that $e^{A/2^s}$ can be computed accurately, and then find $e^A$ by repeated squaring.
- This method has favorable numerical properties when $\|A\|_\infty / 2^s \le \tfrac{1}{2}$.
Let $s$ be chosen such that $\|A\|_\infty / 2^s \le \tfrac{1}{2}$; then Moler and Van Loan (1978) have shown that there exists an $E$ such that,
$$\left[R_{pq}\!\left(A/2^s\right)\right]^{2^s} = e^{A+E},$$
Where $\dfrac{\|E\|_\infty}{\|A\|_\infty} \le \epsilon(p,q)$, with
$$\epsilon(p,q) = 2^{3-(p+q)}\, \frac{p!\, q!}{(p+q)!\,(p+q+1)!}$$
Given an error tolerance $\delta$, we should choose $p = q$ such that $\epsilon(p,p) \le \delta$.
The Algorithm is as follows:
Input : $A$, error tolerance $\delta$.
Output : $F = e^{A+E}$ with $\|E\|_\infty \le \delta\, \|A\|_\infty$.
Step 1: Choose $s$ such that $\|A\|_\infty / 2^s \le \tfrac{1}{2}$; set $A \leftarrow A/2^s$.
Step 2: Choose $\delta$ and $p = q$, the smallest non-negative integer satisfying,
$$2^{3-2q}\, \frac{(q!)^2}{(2q)!\,(2q+1)!} \le \delta$$
Step 3: Set $D = I$, $N = I$, $X = I$, $c = 1$.
Step 4: For $k = 1, 2, \ldots, q$ do: $c \leftarrow \dfrac{c\,(q-k+1)}{(2q-k+1)\,k}$; $X \leftarrow AX$; $N \leftarrow N + cX$; $D \leftarrow D + (-1)^k cX$.
Step 5: Solve for $F$: $DF = N$.
Step 6: For $k = 1, 2, \ldots, s$ do, $F \leftarrow F^2$.
- The algorithm requires about $2\left(q + s + \tfrac{1}{3}\right)n^3$ flops.
- The algorithm is numerically stable when $A$ is normal. When $A$ is non-normal, an analysis of the stability property becomes difficult because, in this case, $\|e^{At}\|$ may grow before it decays during the squaring process; this is known as the “hump” phenomenon.
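The six steps above can be sketched in Python as follows. This is a minimal sketch of the scaling-and-squaring idea with a diagonal ($p = q$) Padé approximant, not a production implementation; the function name and the test matrix are my own:

```python
import math
import numpy as np

def expm_pade(A, delta=1e-12):
    """Scaling and squaring with a diagonal (p = q) Pade approximant."""
    n = A.shape[0]
    # Step 1: scale so that ||A / 2^s||_inf <= 1/2
    nrm = np.linalg.norm(A, np.inf)
    s = 0
    while nrm / 2.0**s > 0.5:
        s += 1
    A = A / 2.0**s
    # Step 2: smallest q with epsilon(q, q) <= delta
    def eps(q):
        return 2.0**(3 - 2*q) * math.factorial(q)**2 / (
            math.factorial(2*q) * math.factorial(2*q + 1))
    q = 0
    while eps(q) > delta:
        q += 1
    # Steps 3-4: build numerator N and denominator D via the coefficient recurrence
    D = np.eye(n); N = np.eye(n); X = np.eye(n); c = 1.0
    for k in range(1, q + 1):
        c = c * (q - k + 1) / ((2*q - k + 1) * k)
        X = A @ X                       # X = (A/2^s)^k
        N = N + c * X
        D = D + (-1)**k * c * X         # (-A)^k picks up the sign (-1)^k
    # Step 5: solve D F = N
    F = np.linalg.solve(D, N)
    # Step 6: undo the scaling by repeated squaring
    for _ in range(s):
        F = F @ F
    return F

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
print(expm_pade(A))                      # ~ [[cos 1, sin 1], [-sin 1, cos 1]]
```

With the default tolerance this selects $q = 5$ for this matrix, which is close to the degrees used in practice.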
Example at Page 143 of [1]
We have another effective method, called the matrix decomposition method, which uses the real Schur form to do so.
Let $A$ be transformed to a real Schur matrix $R$ using an orthogonal similarity transformation:
$$U^T A U = R$$
Then,
$$e^A = U\, e^R\, U^T$$
Note that $R$ is upper (quasi-)triangular.
The algorithm is:
Step 1: Transform $A$ to $R$ using the QR iteration algorithm.
Step 2: Compute $G = e^R$ (assuming the diagonal entries $r_{ii}$ are distinct):
for $i = 1, \ldots, n$ do, $g_{ii} = e^{r_{ii}}$
end.
for $p = 1, \ldots, n-1$ do
for $i = 1, \ldots, n-p$ do
set $j = i + p$,
$$g_{ij} = \frac{r_{ij}\,(g_{ii} - g_{jj}) + \sum_{k=i+1}^{j-1}\left(g_{ik} r_{kj} - r_{ik} g_{kj}\right)}{r_{ii} - r_{jj}}$$
end
end
Step 3: Now we have $G = e^R$; then we compute $e^A = U G U^T$.
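Steps 1-3 can be sketched in Python using SciPy's complex Schur form, so that $R$ is strictly triangular and the 2×2-block handling of the real form is avoided. The recurrence assumes distinct eigenvalues, and the example matrix is my own:

```python
import numpy as np
from scipy.linalg import schur, expm

A = np.array([[1.0, 2.0], [0.5, -1.0]])  # eigenvalues +-sqrt(2), distinct
# Step 1: Schur form, A = U R U^H (complex form => R strictly triangular)
R, U = schur(A, output='complex')
n = R.shape[0]
G = np.zeros((n, n), dtype=complex)
# Step 2a: diagonal entries are scalar exponentials
for i in range(n):
    G[i, i] = np.exp(R[i, i])
# Step 2b: Parlett-style recurrence, superdiagonal by superdiagonal
for p in range(1, n):
    for i in range(n - p):
        j = i + p
        num = R[i, j] * (G[i, i] - G[j, j])
        num += sum(G[i, k] * R[k, j] - R[i, k] * G[k, j]
                   for k in range(i + 1, j))
        G[i, j] = num / (R[i, i] - R[j, j])  # requires r_ii != r_jj
# Step 3: transform back
expA = (U @ G @ U.conj().T).real
print(np.allclose(expA, expm(A)))            # agrees with scipy's expm
```

The division by $r_{ii} - r_{jj}$ is exactly why this simple recurrence breaks down for repeated or clustered eigenvalues, which is where block methods come in.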
2. Lyapunov Stability theorem: [1]
Lyapunov Stability theorem,
The system:
$$\dot{x}(t) = A\, x(t)$$
is asymptotically stable $\iff$ for any symmetric positive definite matrix $Q$, there exists a unique symmetric positive definite matrix $P$ satisfying the equation:
$$A^T P + P A = -Q$$
Kronecker Product and Lyapunov:
For example,
if
$$A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}$$
and,
$$B = \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix}$$
Then, the Kronecker product will look like,
$$A \otimes B = \begin{bmatrix} a_{11}B & a_{12}B \\ a_{21}B & a_{22}B \end{bmatrix}$$
It is evident that,
$$\operatorname{vec}(AXB) = \left(B^T \otimes A\right)\operatorname{vec}(X)$$
Now, we formulate the Lyapunov equation using the Kronecker product. Note that $-Q$ is negative definite, since $Q$ is positive definite and symmetric.
Vectorisation of a matrix: stack all the columns of the matrix into one column vector,
$$\operatorname{vec}(X) = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix},$$ where $x_i$ is the $i$-th column of $X$.
Going directly to the final answer,
$$\left(I \otimes A^T + A^T \otimes I\right)\operatorname{vec}(P) = -\operatorname{vec}(Q)$$
Verify it once.
If $A$ is stable, then the equation has a unique solution $P$.
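The Kronecker formulation above translates directly to code. A minimal sketch (the helper name `lyap_kron` and the example matrices are my own; note that NumPy's `ravel` must use column-major order to match $\operatorname{vec}$):

```python
import numpy as np

def lyap_kron(A, Q):
    """Solve A^T P + P A = -Q via (I (x) A^T + A^T (x) I) vec(P) = -vec(Q)."""
    n = A.shape[0]
    I = np.eye(n)
    K = np.kron(I, A.T) + np.kron(A.T, I)
    # vec() stacks columns, so ravel/reshape in Fortran (column-major) order
    p = np.linalg.solve(K, -Q.ravel(order='F'))
    return p.reshape((n, n), order='F')

A = np.array([[-1.0, 1.0], [0.0, -2.0]])   # stable: eigenvalues -1, -2
Q = np.eye(2)
P = lyap_kron(A, Q)
print(np.allclose(A.T @ P + P @ A, -Q))    # residual check passes
```

This is $O(n^6)$ because it solves an $n^2 \times n^2$ system, which is why the Schur (Bartels-Stewart) method below is preferred in practice.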
Schur method of Lyapunov:
Method proposed by Bartels and Stewart. It is based on the reduction of $A$ to the real Schur form $R$.
Step 1: Reduce the problem.
For,
$$A^T X + X A = -Q,$$
with $U^T A U = R$ (real Schur form), the problem is reduced to,
$$R^T Y + Y R = -C,$$
Where,
$Y = U^T X U$, and $C = U^T Q U$.
Step 2: Solution of the reduced form.
Let,
$$Y = [y_1, y_2, \ldots, y_n]$$
and
$$C = [c_1, c_2, \ldots, c_n]$$ (columns).
Assume that the columns $y_1$ through $y_{k-1}$ have been computed, and consider the following two cases,
Case 1: $r_{k+1,k} = 0$. Then $y_k$ is determined by solving the quasi-triangular system,
$$\left(R^T + r_{kk} I\right) y_k = -c_k - \sum_{i=1}^{k-1} r_{ik}\, y_i$$
If, in particular, $R$ is upper triangular, that is, there are no “Schur bumps” on the diagonal, then each $y_k$, $k = 1, \ldots, n$, can be obtained by solving a triangular system as above.
Case 2: $r_{k+1,k} \ne 0$; this indicates a Schur bump (a $2 \times 2$ diagonal block), and the columns $y_k$ and $y_{k+1}$ are solved for together as one linear system.
Example at Page 274 of [1].
If $A$ doesn’t have eigenvalues on the unit circle (so that $A + I$ is nonsingular), we modify $A$ as,
$$B = (A + I)^{-1}(A - I)$$
Use this in the Discrete Lyapunov equation, which is given by,
$$A^T X A - X = -Q$$
The resulting equation is,
$$B^T X + X B = -2\,(A+I)^{-T} Q\, (A+I)^{-1}$$
You will get a similar equation to that of the normal (continuous-time) Lyapunov equation.
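The Cayley-transform reduction sketched above can be checked numerically: solve the transformed continuous equation (here via the Kronecker form from earlier) and verify the discrete residual. The example matrix is my own; any $A$ with no eigenvalue at $-1$ works:

```python
import numpy as np

A = np.array([[0.5, 0.1], [0.0, -0.3]])   # Schur-stable: |eigenvalues| < 1
Q = np.eye(2)
I = np.eye(2)
B = np.linalg.solve(A + I, A - I)          # Cayley transform (A+I)^-1 (A-I)
Ainv = np.linalg.inv(A + I)
Qt = 2.0 * Ainv.T @ Q @ Ainv               # transformed right-hand side
# solve the continuous equation B^T X + X B = -Qt via the Kronecker form
K = np.kron(I, B.T) + np.kron(B.T, I)
x = np.linalg.solve(K, -Qt.ravel(order='F'))
X = x.reshape((2, 2), order='F')
# X should also satisfy the original discrete Lyapunov equation
print(np.allclose(A.T @ X @ A - X, -Q))
```

The factor of 2 in $\tilde{Q}$ comes from expanding $(A-I)^T X (A+I) + (A+I)^T X (A-I) = 2(A^T X A - X)$.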
3. Modified QR algorithms: [4]
Modified Gram-Schmidt method:
for $i = 1, \ldots, n$
  $v_i = a_i$
end
for $i = 1, \ldots, n$
  $r_{ii} = \|v_i\|_2$; $q_i = v_i / r_{ii}$
  - for $j = i+1, \ldots, n$
    $r_{ij} = q_i^T v_j$; $v_j = v_j - r_{ij}\, q_i$
  - end
end
It requires about $2mn^2$ operations.
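The loop structure above maps one-to-one onto a short Python sketch (function name and test matrix are my own):

```python
import numpy as np

def mgs_qr(A):
    """Modified Gram-Schmidt QR: same loop structure as the pseudocode above."""
    m, n = A.shape
    V = A.astype(float).copy()        # working copies v_i of the columns a_i
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for i in range(n):
        R[i, i] = np.linalg.norm(V[:, i])
        Q[:, i] = V[:, i] / R[i, i]
        for j in range(i + 1, n):     # orthogonalise the *remaining* columns now
            R[i, j] = Q[:, i] @ V[:, j]
            V[:, j] -= R[i, j] * Q[:, i]
    return Q, R

A = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
Q, R = mgs_qr(A)
print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(2)))
```

The key difference from classical GS is that each new $q_i$ is removed from all remaining columns immediately, which is what gives MGS its better numerical orthogonality.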
Practice example from the above link.
Original algorithm of QR decomposition:
Aim is to write matrix $A$ in terms of $Q$ and $R$: $A = QR$.
Here, $Q$ is an orthogonal matrix with orthonormal columns, and $R$ is an upper-triangular matrix.
Now, we take an example with a $3 \times 3$ matrix $A$ whose columns are $a_1$, $a_2$ and $a_3$.
We will first find $q_1$, $q_2$ and $q_3$ as the three orthonormal vectors using the GS strategy.
We define a partial basis $V = [v_1, v_2, v_3]$, which will be updated at each step.
For the first step,
we find $v_1$ as,
$$v_1 = a_1$$
similarly, we find $v_2$ as follows,
$$v_2 = a_2 - \frac{v_1^T a_2}{v_1^T v_1}\, v_1$$
Now, we update $V$ with $v_1$ and $v_2$.
Now, we find $v_3$ as follows,
$$v_3 = a_3 - \frac{v_1^T a_3}{v_1^T v_1}\, v_1 - \frac{v_2^T a_3}{v_2^T v_2}\, v_2$$
Hence, the partial basis is,
$$V = [v_1 \;\; v_2 \;\; v_3]$$
This partial basis is an orthogonal set, hence we now make it orthonormal.
Hence, from the norm of the first column,
$$q_1 = \frac{v_1}{\|v_1\|}$$
Similarly,
$$q_2 = \frac{v_2}{\|v_2\|}$$
and
$$q_3 = \frac{v_3}{\|v_3\|}$$
Hence, the orthonormal $Q$ is,
$$Q = [q_1 \;\; q_2 \;\; q_3]$$
Now, we have $A$ and $Q$; we now find $R = Q^T A$, which by theory has to be upper-triangular.
Make up the algorithm from the above steps.
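The steps above can be assembled into code as follows. This is a sketch of classical Gram-Schmidt followed by $R = Q^T A$; the example matrix is my own choice, any full-rank $3 \times 3$ works:

```python
import numpy as np

A = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])      # full rank, det = -2
Q = np.zeros_like(A)
for k in range(3):
    # subtract projections onto the already-built orthonormal columns
    v = A[:, k] - Q[:, :k] @ (Q[:, :k].T @ A[:, k])
    Q[:, k] = v / np.linalg.norm(v)  # normalise to get q_k
R = Q.T @ A                          # upper-triangular by construction
print(np.allclose(np.tril(R, -1), 0))   # below-diagonal entries vanish
print(np.allclose(Q @ R, A))            # A = QR recovered
```

$R$ comes out upper-triangular because $q_k$ is orthogonal to $a_1, \ldots, a_{k-1}$ by construction, so $q_k^T a_j = 0$ for $j < k$.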
4. Condition Number: [5]
As a rule of thumb, if the condition number $\kappa(A) = 10^k$, then you may lose up to $k$ digits of accuracy, on top of what would be lost to the numerical method due to loss of precision from arithmetic methods.
A problem with a low condition number is said to be well-conditioned, while a problem with a high condition number is said to be ill-conditioned. The condition number is a property of the problem.
The condition number is mathematically defined as,
$$\kappa(A) = \|A\| \cdot \|A^{-1}\|$$
We define the $\infty$-norm as follows,
For a matrix $A \in \mathbb{R}^{m \times n}$,
The $\infty$-norm is,
$$\|A\|_\infty = \max_{1 \le i \le m} \sum_{j=1}^{n} |a_{ij}|$$ (the maximum absolute row sum).
So, for a given matrix, we compute $\|A\|_\infty$ and $\|A^{-1}\|_\infty$; therefore,
$$\kappa_\infty(A) = \|A\|_\infty\, \|A^{-1}\|_\infty$$
In the worked example this value comes out large; thus, the matrix (or system) is ill-conditioned.
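The computation above in code, with an ill-conditioned example matrix of my own (a pair of nearly parallel rows), checked against NumPy's built-in:

```python
import numpy as np

# kappa_inf(A) = ||A||_inf * ||A^-1||_inf  (maximum-absolute-row-sum norm)
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])        # nearly singular: det = 1e-4
kappa = np.linalg.norm(A, np.inf) * np.linalg.norm(np.linalg.inv(A), np.inf)
print(kappa)                          # ~4e4: expect to lose ~4-5 digits
print(np.linalg.cond(A, np.inf))      # the same value via numpy's helper
```

By the rule of thumb above, $\kappa \approx 10^{4.6}$ means roughly 4-5 of the ~16 double-precision digits are at risk when solving $Ax = b$ with this matrix.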
The question paper that came for this exam was pretty sadistic.
Give it a try if you can, and tell me how awesome it feels. I will probably score around 30/50.