Friday, 24 November 2017

Moore-Penrose Pseudoinverse

Generalization of the inverse of a matrix.

I believe we pay too much attention to implementation, and too little
attention to the study of the concept being implemented. I have been
on the teams of many Data Science and Machine Learning projects, and
I would always reiterate one simple idea: “If you do not
know the math, you don’t know it at all.”
This is a piece of philosophy I deeply believe in. With the advent of packages like numpy, matplotlib, scikit-learn etc., implementing a machine learning model on a moderately difficult data set and problem is fairly simple.
The magic then lies in being able to tweak the algorithm and get something new (or weird) out of the model. And for you to be capable of doing so, you will have to know the mechanism behind it.
The Moore-Penrose pseudoinverse is at the soul of PCA (Principal Component Analysis), one of the most popular dimensionality reduction techniques.


How do we define the inverse of a matrix?
Provided that the matrix is a square matrix and non-singular, we simply divide the adjoint of the matrix by its determinant.
Mathematically, for a square, non-singular matrix $A$, the inverse of $A$ is defined as,

$$A^{-1} = \frac{\operatorname{adj}(A)}{\det(A)}$$

Of course, the above method is computationally very expensive. Hence, we can get the inverse of the matrix recursively using the Faddeev-LeVerrier equation (read about that in this blog of mine).


Now, how do we deal with matrices that are non-square? How do you find the inverse of a matrix that looks like this,

This is where the generalization of the inverse of a matrix comes in, named the Moore-Penrose pseudoinverse.


For every $A \in \mathbb{R}^{m \times n}$, there exists a pseudoinverse $A^{\dagger}$ ($A^{\dagger}$ is read as “A dagger”).
For a matrix with full column rank, $A^{\dagger}$ is mathematically defined as,

$$A^{\dagger} = \left(A^{\top}A\right)^{-1}A^{\top}$$

This is dimensionally consistent. Please check and verify.


Now, say we have a non-square matrix $A$.
It is impossible to find $A^{-1}$ by the conventional method above. So, we use the generalized inverse $A^{\dagger}$.
So, we first compute $A^{\top}A$, and then its inverse $\left(A^{\top}A\right)^{-1}$.
Hence,

$$A^{\dagger} = \left(A^{\top}A\right)^{-1}A^{\top},$$

which is the pseudoinverse or the generalized inverse.


For a square, non-singular matrix (i.e., $m = n$), the pseudoinverse reduces to the ordinary inverse,

$$A^{\dagger} = A^{-1}.$$


Some properties of the generalized inverse are,
1. $A A^{\dagger} A = A$
2. $A^{\dagger} A A^{\dagger} = A^{\dagger}$
3. $\left(A A^{\dagger}\right)^{\top} = A A^{\dagger}$ and $\left(A^{\dagger} A\right)^{\top} = A^{\dagger} A$
One important point to remember is that $A^{\dagger}$ always exists and is unique.
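As a quick numerical sanity check with numpy (a minimal sketch; the matrix below is an arbitrary full-column-rank example, not the one from the post):

```python
import numpy as np

# An arbitrary non-square (3x2) matrix -- illustrative only.
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

# Library routine (computed via the SVD internally).
A_dagger = np.linalg.pinv(A)

# For a full-column-rank A, the left pseudoinverse (A^T A)^(-1) A^T agrees with it.
A_left = np.linalg.inv(A.T @ A) @ A.T
print(np.allclose(A_dagger, A_left))                     # True

# Two of the defining properties: A A+ A = A and A+ A A+ = A+.
print(np.allclose(A @ A_dagger @ A, A))                  # True
print(np.allclose(A_dagger @ A @ A_dagger, A_dagger))    # True
```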



Cheers!

Friday, 17 November 2017

The Linear Quadratic Regulator

Optimal Control and Linear-Quadratic-Regulator (LQR)

Today, I will not write a long introductory passage to kick off this post, because an introduction to Optimal Control would require a blog post of its own. However, I will add in small tidbits as and when needed.
To understand the topic, we need some basic definitions with us.

1.
A control system can be represented in terms of State Space as follows,

$$\dot{x}(t) = A x(t) + B u(t), \qquad y(t) = C x(t) + D u(t)$$

In the above formulation,
$x$ is the state vector; $x \in \mathbb{R}^{n}$.
$y$ is the output vector; $y \in \mathbb{R}^{q}$.
$u$ is the input vector; $u \in \mathbb{R}^{p}$.
$A$ is the System Matrix; $A \in \mathbb{R}^{n \times n}$.
$B$ is the Input Matrix; $B \in \mathbb{R}^{n \times p}$.
$C$ is the Output Matrix; $C \in \mathbb{R}^{q \times n}$.
$D$ is the Feed-forward Matrix; $D \in \mathbb{R}^{q \times p}$.
Now, for a system to be controllable, we first define a matrix $\mathcal{C}$, called the controllability matrix, such that,

$$\mathcal{C} = \begin{bmatrix} B & AB & A^{2}B & \cdots & A^{n-1}B \end{bmatrix}$$

The system is controllable if $\mathcal{C}$ has full row rank (i.e. $\operatorname{rank}(\mathcal{C}) = n$).

We will assume that we deal with Controllable systems only.
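A minimal numpy sketch of this controllability check (the system matrices below are hypothetical, chosen only for illustration):

```python
import numpy as np

def controllability_matrix(A, B):
    """Build [B, AB, A^2 B, ..., A^(n-1) B] for an n-state system."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

# Hypothetical 2-state, single-input system (numbers chosen only for illustration).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])

C_mat = controllability_matrix(A, B)
print(np.linalg.matrix_rank(C_mat) == A.shape[0])  # True -> controllable
```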

Usually, a single-input system’s state feedback controller is designed using the eigenvalue assignment, or pole placement, method.

2.
Pole placement is the methodology of finding the control law in the form $u = -Kx$.
So, the state space representation changes to,

$$\dot{x} = (A - BK)\, x$$

$K$ is found by matching the closed-loop characteristic polynomial to the desired one,

$$\det\left(sI - (A - BK)\right) = (s - p_{1})(s - p_{2})\cdots(s - p_{n})$$

Here, $p_{1}, \dots, p_{n}$ are the desired pole locations.
However, for a multi-input system the feedback gain $K$ is not unique.
Linear Quadratic Control strategy is used to deal with this issue.
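A minimal pole-placement sketch with scipy (the system and the desired poles are hypothetical choices, purely for illustration):

```python
import numpy as np
from scipy.signal import place_poles

# Hypothetical single-input system; desired closed-loop poles are arbitrary choices.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
desired_poles = np.array([-4.0, -5.0])

K = place_poles(A, B, desired_poles).gain_matrix   # feedback law u = -K x
print(np.sort(np.linalg.eigvals(A - B @ K)))       # approximately [-5., -4.]
```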

Now, we dive into the Linear Quadratic Regulator (LQR) formulation, for an $m$-input and $n$-state system. Consider a system,

$$\dot{x}(t) = A x(t) + B u(t), \qquad x(0) = x_{0}$$

Our aim is to find an open-loop control $u(t)$, for $t \in [0, T]$, such that we minimize:

$$J = x(T)^{\top} Q_{f}\, x(T) + \int_{0}^{T} \left( x^{\top} Q x + u^{\top} R u \right) dt$$

where $Q$ and $Q_{f}$ are symmetric positive semi-definite matrices, and
$R$ is a symmetric positive definite matrix. Note that $Q$, $Q_{f}$ and $R$ are fixed and given data.
The controller’s aim is basically to keep $x(t)$ close to $0$, especially at $t = T$, which is the final time.
In $J$,
  • $Q$ works against the transient response.
  • $Q_{f}$ works against the final state.
  • $R$ works against the control effort.
The above formulation can regulate the output near $0$.
Note that, we can define, and as where,
We can now have a theorem as follows,
For a system with a fixed initial condition, $x(0) = x_{0}$, and a fixed final time, we define our time horizon as $[0, T]$ such that $T$ is given. We find $u(t)$ such that our cost function $J$ is minimized. $J$ is defined as,

Here, the first term of $J$ is the final (terminal) cost and the second term is the recurring (running) cost.


Now, we will formulate some important functions that will convert the problem, which is a constrained optimal control problem, into an unconstrained optimal control problem. [THIS MAY NOT MAKE SENSE TO YOU YET, WHICH IS NATURAL. HOLD ON.]

Note that $L(x, u) = x^{\top} Q x + u^{\top} R u$ is called the Lagrangian.
$H$ is the Hamiltonian, defined in terms of $L$ and the costate $\lambda$ as,

$$H(x, u, \lambda) = L(x, u) + \lambda^{\top}\left(Ax + Bu\right)$$

The above definition is in terms of the system dynamics as defined in the theorem; we define it along the same lines, simply for convenience of computation.

The necessary conditions can then be written as a set of differential equations (in $x$ and $\lambda$, obviously) with split boundary conditions at $t = 0$ and $t = T$. Now, we can easily define $u$ in terms of $x$ and/or $\lambda$.
As mentioned earlier, the solution is found by converting from a constrained optimal problem to an unconstrained optimal problem using a Lagrange multiplier function $\lambda(t)$:

Notice that,

Therefore,

As the Hamiltonian Function is defined in , thus,

The necessary condition for an optimal solution is that the variation of the modified cost, with respect to all variations of the system, be minimal at all times from $t = 0$ to $t = T$.
We will define the solution analytically in the next post and formulate the Riccati equation that will lay the foundation for some amazing control strategies.
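As a look-ahead, the infinite-horizon version of this problem is solved in practice through the algebraic Riccati equation; here is a minimal scipy sketch on a hypothetical system of my own choosing (not the derivation promised above, just the numerical shortcut):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical system and weights, for illustration only.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)           # state weight (symmetric positive semi-definite)
R = np.array([[1.0]])   # control weight (symmetric positive definite)

# Solve the algebraic Riccati equation A'P + PA - P B R^-1 B' P + Q = 0.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)        # optimal state-feedback gain, u = -K x

print(K)
print(np.linalg.eigvals(A - B @ K))    # closed-loop poles (stable)
```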
Cheers!

Sunday, 5 November 2017

i!

Define the Factorial of a Complex number.

In the usual sense, the factorial is defined as,

$$n! = n \cdot (n-1) \cdot (n-2) \cdots 2 \cdot 1$$

Now, the not-so-usual definition is based on the famous Gamma function,

$$\Gamma(z) = \int_{0}^{\infty} t^{z-1} e^{-t}\, dt$$

There is a unique and very useful property,

$$\Gamma(n + 1) = n!$$

To extend this into the complex domain, we will first have to go through Analytic Continuation; please read about Analytic Continuation here.
Therefore, after analytic continuation, we can write it as,

For,
So, now, clearly,

$$i! = \Gamma(1 + i)$$

By

Clearly,

For easier computation, please catch that,

Let’s break it down,


If you have reached this far, you obviously know how to solve the above integral.
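As a numerical cross-check (a minimal sketch, assuming scipy is available; scipy.special.gamma accepts complex arguments):

```python
import numpy as np
from scipy.special import gamma

# i! = Gamma(1 + i), using the Gamma-function extension of the factorial.
i_factorial = gamma(1 + 1j)
print(i_factorial)        # approximately (0.498 - 0.155j)
print(abs(i_factorial))   # its modulus, roughly 0.52
```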
Cheers!

Saturday, 26 August 2017

A new Kilogram? What? How?

A new Kilogram? What? How?

To understand this, let us see how time is defined. We all know the SI unit of time is the second, and it is defined by measuring the electronic transition frequency of caesium atoms: 9,192,631,770 periods of that radiation make one second. Similarly, length (SI unit metre) is defined as the distance travelled by light in vacuum in 1/299,792,458 of a second.
The inspiration to write this blog was derived from Veritasium|How We’re Redefining the kg.
We see that all these fundamental quantities are defined against a given, fixed standard.
However, the kilogram is set as the mass of a metal cylinder kept in Paris. That is not quite ideal, huh!
So, NIST is trying to define the unit of mass, i.e., the kilogram, using some already standardized quantities like Planck's constant and the Avogadro number. Planck's constant takes a dive into advanced applied physics (no, not quantum physics :p ); therefore, it demands my blog's space and time.
The engineering behind it is pretty simple, to put it in the simplest of the terms;
There is a balancing device, which has a mass unit and a coil
unit. The mass unit is balanced against the magnetic force from
the coil, until and unless the two forces are equal. To know more, read Watt Balance or Kibble Balance.
Let’s look into the working now; in this process we will generate some cool equations and learn some new concepts, as and when required.
First, the watt balance: the principle of operation itself says that the mechanical power equals the electrical power.
Taking into account all the usual scientific nomenclature, we can write,

$$m g = B I l$$

Here, $m$ is the mass, $g$ is the acceleration due to gravity, $B$ is the magnetic flux density (in tesla (T)), $I$ is the current in the coil, and $l$ is the length of the conductor in the field.
The above equation is for the weighing mode of operation of the watt balance, in which the weights are matched on both sides.
There is another mode, called the velocity mode, in which the mass ($m$) is lifted to a height and then the coil is moved back and forth in the magnetic field. This motion induces a voltage, $V$, in the coil. By Faraday's motional emf expression,

$$V = B l v$$

Here, $v$ is the velocity of the conductor in the magnetic field.
Now, the weighing-mode equation can be written as,

$$B l = \frac{m g}{I}$$

and the velocity-mode equation can be written as,

$$B l = \frac{V}{v}$$

Equating the two, we have,

$$\frac{m g}{I} = \frac{V}{v},$$

which can be written as,

$$m g v = V I$$
Interestingly, $m g v$ is the mechanical power (refer to the Encyclopedia of Electrochemical Power Sources for more info) and $V I$ is the electrical power.
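To make the bookkeeping concrete, here is a minimal sketch of the $m g v = V I$ relation with made-up numbers (illustrative only, nothing like the real experiment's values):

```python
# A minimal sketch of the watt-balance bookkeeping, with made-up numbers.
# Weighing mode:   m * g = B * l * I
# Velocity mode:   V = B * l * v
# Eliminating B*l: m * g * v = V * I

g = 9.81      # m/s^2, local gravitational acceleration (illustrative value)
v = 0.002     # m/s, coil velocity measured by the interferometer (illustrative)
V = 1.0       # volt, induced voltage in velocity mode (illustrative)
I = 0.0196    # ampere, balancing current in weighing mode (illustrative)

m = (V * I) / (g * v)   # mechanical power = electrical power
print(f"inferred mass: {m:.4f} kg")   # roughly 1 kg with these numbers
```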
You must all be thinking, how does Planck’s constant come to play. Well, please hold your horses. We are almost there.
To measure $V$ very precisely, we go into a concept from superconductivity called the Josephson effect. Please read up on its working here. It is also significant because it is the basis of the standard for the volt.
When a DC voltage is applied to a Josephson junction, the junction experiences an oscillation of frequency (read more),

$$f = \frac{2 e V}{h}$$

Here, $f$ is the frequency, $e$ is the elementary charge, and $h$ is Planck's constant ($\approx 6.626 \times 10^{-34}\ \mathrm{J\,s}$).
The above equation can be written as,

$$V = \frac{h f}{2 e}$$

For many junctions (say $n$) it is,

$$V = \frac{n h f}{2 e}$$
The voltage measured this way is extraordinarily accurate; refer this.
Now, if we write $I$ as $V/R$ (where $R$ is the resistance offered by the circuit), then the power balance changes to,

$$m g v = \frac{V^{2}}{R}$$
Another question now is, how do we measure $R$? For that we will use the idea of the Quantum Hall Effect. The quantum Hall effect is the standard for resistance; please refer to the paper for more information. But suffice it to say, the resistance is defined as,

$$R = \frac{h}{i\, e^{2}}$$

Here, $i$ is an integer.
Please note that, without the integer, i.e., $\frac{h}{e^{2}}$, this is called the von Klitzing constant; this guy got a Nobel for it. Please read more here.

Using the above equation in the power balance, we get,

$$m g v = \frac{V^{2}\, i\, e^{2}}{h}$$

Substituting $V = \dfrac{h f}{2 e}$, this comes to,

$$m g v = \frac{i\, h\, f^{2}}{4}$$

This is for a single Josephson junction; for $n$ junctions, it becomes,

$$m g v = \frac{n^{2}\, i\, h\, f^{2}}{4},$$

which can look more elegant if we write it as,

$$m = \frac{n^{2}\, i\, h\, f^{2}}{4\, g\, v}$$
We have seen that the electrical quantities $V$ and $R$ (and hence $f$, $h$ and $e$) can be measured very accurately. Similarly, there is a need to measure the mechanical factors very accurately as well. The velocity $v$ is measured using a laser interferometer, and the local acceleration due to gravity $g$ is measured using a gravimeter.
So, the scientists at NIST are just putting in some mass, getting a value of $h$ out, and refining the experiment till they get a very accurate $h$; once $h$ is fixed, the same balance can be run the other way round to realize the kilogram.
Cheers!

Monday, 7 August 2017

Study of Brachistochrone Problem.

This blog will deal with one of the most elegant topics in mathematics: the famous Brachistochrone curve. Vsauce did a great job of explaining it in this video. That looks good to me, but it doesn't feel good, because of course the video does not contain tons of fancy mathematical vocabulary and millions of lines of LaTeX.

To get started, we will just put forward a small fact: this topic is from a larger field of mathematics called the Calculus of Variations.

Before we get into the analysis of the Brachistochrone curve (which happens to be the heart of this post), let us brush up on some concepts.


1. Partial Differentiation (advanced preliminary):

Let us look at how we define the derivative of a given function. Say we have $f(x)$; then,

$$f'(x) = \lim_{h \to 0} \frac{f(x + h) - f(x)}{h},$$

provided this limit exists.

Now, for a function of many variables it is not easy to compute the total derivative (the usual derivative). Therefore, we calculate the partial derivative. You might have already guessed it by now; we define the partial derivative as follows: say we have a function $f(x, y)$, then,

$$\frac{\partial f}{\partial x} = \lim_{h \to 0} \frac{f(x + h, y) - f(x, y)}{h}$$
Let us look into a small example now, say we have,

Now,

similarly,

and,
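As a hands-on version of the idea, here is a minimal sympy sketch with a hypothetical three-variable function (my own choice of $f$, not necessarily the one from the example above):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# A hypothetical function of three variables, chosen only for illustration.
f = x**2 * y + sp.sin(y * z)

print(sp.diff(f, x))   # 2*x*y
print(sp.diff(f, y))   # x**2 + z*cos(y*z)
print(sp.diff(f, z))   # y*cos(y*z)
```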


2. Idea of Speed and Time (basic preliminary):

Let us consider the following scenario (figure made using GeoGebra):

In the figure, we have denoted the path ACB as $P_1$ and the path ADB as $P_2$.
Consider that the time taken for a body to move from A to B along $P_1$ is $t_1$; similarly, the time taken along $P_2$ is $t_2$.
Also, consider that the distance along $P_1$ is $d_1$ and the distance along $P_2$ is $d_2$.

Then,
Speed along $P_1$ $= \dfrac{d_1}{t_1}$ and Speed along $P_2$ $= \dfrac{d_2}{t_2}$.

3. Idea of Distance (basic preliminary):

The distance between two points $(x_1, y_1)$ and $(x_2, y_2)$ is given by,

$$d = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}$$

In case the distance is measured from the origin $(0, 0)$ to some point $(x, y)$, we get,

$$d = \sqrt{x^2 + y^2}$$

Now that the notion is clear, we shall proceed into the analysis.
[NOTE: Some topics which are new to me (or you) will be explained as and when required.]


The time needed for a body to travel from $A$ to $B$ is given by,

$$t = \frac{\text{distance}}{\text{speed}}$$

for a linear path.

For a curve or a non-linear path, we consider piece-wise linear distance ($ds$) and speed ($v$). We can define $t$ as,

$$t = \int \frac{ds}{v}$$

Now, we must understand the fact that all distance is composed of $x$ and $y$ coordinates. We consider the point $A$ as the origin $(0, 0)$ and $B$ as $(x, y)$. Hence,

$$ds = \sqrt{dx^2 + dy^2} = \sqrt{1 + \left(\frac{dy}{dx}\right)^2}\, dx$$

Also, since the translation is in both the $x$ and $y$ axes, we can say that the speed is gained by the fall through the height $y$, and by energy conservation,

$$\tfrac{1}{2} m v^2 = m g y \quad \Rightarrow \quad v = \sqrt{2 g y}$$

Using these two expressions in the time integral, and writing $y' = \dfrac{dy}{dx}$, we have,

$$t = \int \frac{\sqrt{1 + (y')^2}}{\sqrt{2 g y}}\, dx$$

Therefore, the function (the integrand) is,

$$f(y, y') = \frac{\sqrt{1 + (y')^2}}{\sqrt{2 g y}}$$


Please notice that the time integral can be written as,

$$t = \int_{x_1}^{x_2} f\left(y, y'\right)\, dx$$

In our case, $f$ is $\dfrac{\sqrt{1 + (y')^2}}{\sqrt{2 g y}}$.

This form closely follows the Euler-Lagrange equation.

It is pretty simple to state the problem now: find the $y(x)$ that makes this integral stationary.


The Euler-Lagrange equation

For any functional of the form,

$$J[y] = \int_{x_1}^{x_2} f\left(y, y', x\right)\, dx,$$

where $y = y(x)$, $J$ has a stationary point if the Euler-Lagrange equation, given by,

$$\frac{\partial f}{\partial y} - \frac{d}{dx}\frac{\partial f}{\partial y'} = 0,$$

is satisfied.

Therefore, for our analysis,


Stationary value

This is a value at the stationary point.
A stationary point is the point at which the first derivative of a function becomes zero,

Side note :
Find the stationary points of .


Beltrami Identity

Our $f$ (from the time integral above) is such a cool expression because $x$ does not appear explicitly in it; therefore, of course, $\dfrac{\partial f}{\partial x} = 0$. This again leads us to another beautiful form, the Beltrami identity, which is given by,

$$f - y' \frac{\partial f}{\partial y'} = C,$$

where $C$ is a constant.

We now find $\dfrac{\partial f}{\partial y'}$, substitute it into the Beltrami identity, and simplify.

Rearranging a little gives,

$$y \left(1 + \left(y'\right)^2\right) = \frac{1}{2 g C^2} = k \ \text{(a constant)}$$


This is the equation of a cycloid as per [4]; I am yet to figure out exactly how that identification works.
The solution of the above can be expressed using parametric equations,

$$x = \frac{k}{2}\left(\theta - \sin\theta\right), \qquad y = \frac{k}{2}\left(1 - \cos\theta\right)$$
To derive the above equation, please refer Math-stackexchange|Derive the parametric equations of a cycloid.

I have even plotted the solutions; we can see that it is a cycloid,

If you wish to play with the visualisation please go to Cycloid | Parametric @ Desmos by Pragyaditya
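For anyone who prefers matplotlib to Desmos, here is a quick sketch of the same parametric cycloid (the scale parameter is arbitrary):

```python
import numpy as np
import matplotlib.pyplot as plt

a = 1.0                                # the cycloid's scale parameter (arbitrary)
theta = np.linspace(0, 2 * np.pi, 400)

# Parametric form of the cycloid, measuring y downwards as in the derivation.
x = a * (theta - np.sin(theta))
y = a * (1 - np.cos(theta))

plt.plot(x, y)
plt.gca().invert_yaxis()               # the bead falls downward from the origin
plt.xlabel("x"); plt.ylabel("y")
plt.title("Brachistochrone: an inverted cycloid")
plt.show()
```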

Cheers!


References:
1. Stationary Points
2. Euler Lagrange Equation
3. Derivation of Beltrami Identity
4. Introduction to calculus of variations
5. Brachistochrone @ Wolfram

Saturday, 29 July 2017

A Nice Problem Involving Gamma and Beta Function.

[Playing with Gamma and Beta functions].
Solve :

Let,

The Beta function is defined as:

$$B(x, y) = \int_{0}^{1} t^{x-1} (1 - t)^{y-1}\, dt, \qquad \operatorname{Re}(x) > 0,\ \operatorname{Re}(y) > 0$$
Now, there are many properties of this beautiful mathematical tool; some of them are listed below (a small numerical check follows the list),

  1. It is symmetric: $B(x, y) = B(y, x)$.
  2. We can write Beta functions in terms of Gamma functions as follows: $B(x, y) = \dfrac{\Gamma(x)\,\Gamma(y)}{\Gamma(x + y)}$.
  3. We can obviously add factorials into the mix too: for positive integers $m, n$, $B(m, n) = \dfrac{(m-1)!\,(n-1)!}{(m+n-1)!}$.
  4. We can define it in terms of trigonometric angles too: $B(x, y) = 2\displaystyle\int_{0}^{\pi/2} \sin^{2x-1}\theta\, \cos^{2y-1}\theta\, d\theta$.
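A small numerical check of properties 1 and 2 and of the integral definition, using scipy (the arguments $a, b$ are arbitrary):

```python
import numpy as np
from scipy.special import beta, gamma
from scipy.integrate import quad

a, b = 2.5, 4.0   # arbitrary positive arguments

# Property 1: symmetry.
print(np.isclose(beta(a, b), beta(b, a)))                           # True

# Property 2: B(a, b) = Gamma(a) * Gamma(b) / Gamma(a + b).
print(np.isclose(beta(a, b), gamma(a) * gamma(b) / gamma(a + b)))   # True

# Definition as an integral: B(a, b) = integral_0^1 t^(a-1) (1-t)^(b-1) dt.
integral, _ = quad(lambda t: t**(a - 1) * (1 - t)**(b - 1), 0, 1)
print(np.isclose(beta(a, b), integral))                             # True
```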

Now,

We will solve both separately,

Similarly, for the other integral,

Therefore,

Cheers!

Friday, 21 July 2017

Integration : Problem 2


:

Monday, 22 May 2017

Life of $\pi$

Is $\pi = \frac{22}{7}$?

There was an interesting problem that had come up during one of the JEE exams.

It asked us to integrate,

$$\int_{0}^{1} \frac{x^{4}\left(1 - x\right)^{4}}{1 + x^{2}}\, dx$$

Using polynomial long division,

$$\frac{x^{4}\left(1 - x\right)^{4}}{1 + x^{2}} = x^{6} - 4x^{5} + 5x^{4} - 4x^{2} + 4 - \frac{4}{1 + x^{2}}$$

Using linearity and integrating term by term from $0$ to $1$,

$$\int_{0}^{1} \frac{x^{4}\left(1 - x\right)^{4}}{1 + x^{2}}\, dx = \frac{1}{7} - \frac{2}{3} + 1 - \frac{4}{3} + 4 - \pi,$$

which comes to,

$$\frac{22}{7} - \pi$$
Since the integrand is strictly positive on $(0, 1)$, the area cannot be negative. It is evident from the above that,

$$\frac{22}{7} - \pi > 0, \qquad \text{i.e.} \qquad \pi < \frac{22}{7}$$

Also, $\pi \neq \frac{22}{7}$ is evident from the fact that the integral is strictly greater than zero.
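Assuming the integral in question is the classic one shown above, here is a quick numerical cross-check with scipy:

```python
import numpy as np
from scipy.integrate import quad

# The classic JEE / Dalzell integral (assumed to be the one discussed above):
#   integral_0^1 x^4 (1 - x)^4 / (1 + x^2) dx = 22/7 - pi
value, _ = quad(lambda x: x**4 * (1 - x)**4 / (1 + x**2), 0, 1)

print(value)            # ~0.00126
print(22 / 7 - np.pi)   # the same number
print(value > 0)        # True, hence 22/7 > pi
```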

Cheers!

Tuesday, 25 April 2017

Preparation of exam.

Preparation Notes - a hell of a lot of Math and Algorithms; very helpful.

Note: These notes were made during my preparation for my exams. They might be crude and may not be in their best form of presentation. Read at your own risk.


Subject : IC 021 - Computational Techniques in Control Engineering

Professor : Dr. A Ramakalyan.

Difficulty : Runs to Heavens and goes around it times.

Books:
1. B. N. Datta’s Numerical Methods for Linear Control Systems - referenced as [1]
2. Gilbert Strang’s Linear Algebra and Its Applications - referenced as [2]
3. S. Kumaresan’s Linear Algebra - A Geometric Approach - referenced as [3]


1. Computing $e^{At}$ [1]

  1. There are many ways of doing this; one of the most popular is the Taylor series method.
    We simply expand $e^{At}$ as per the Taylor series.
    Hence, $e^{At}$ can be written as,

    $$e^{At} = I + At + \frac{A^{2}t^{2}}{2!} + \frac{A^{3}t^{3}}{3!} + \cdots$$

    • The drawbacks of this method are that a large number of terms is needed for convergence, and even when convergence occurs, the answer can be totally wrong. (A small numerical comparison with a library routine appears at the end of this section.)
  2. Hence, I will now get into the Padé scaling and squaring algorithm for this task.
    Suppose that a function $f$ is given by a power series,

    $$f(x) = \sum_{k=0}^{\infty} c_{k} x^{k}$$

    Then the Padé approximation of $f$ is the rational function,

    $$R_{pq}(x) = \frac{N_{pq}(x)}{D_{pq}(x)}$$

    The numerator $N_{pq}$ and denominator $D_{pq}$ are chosen such that the terms containing $x, x^{2}, \dots, x^{p+q}$ are cancelled out in $f(x) - R_{pq}(x)$. The order of the Padé approximation is $p + q$. The ratio is unique, and the coefficients of $N_{pq}$ and $D_{pq}$ always exist.

    Similarly, the Padé approximation for $e^{A}$ is given by,

    $$R_{pq}(A) = \left[D_{pq}(A)\right]^{-1} N_{pq}(A)$$

    where,

    $$N_{pq}(A) = \sum_{j=0}^{p} \frac{(p + q - j)!\, p!}{(p + q)!\, j!\, (p - j)!}\, A^{j}$$

    and

    $$D_{pq}(A) = \sum_{j=0}^{q} \frac{(p + q - j)!\, q!}{(p + q)!\, j!\, (q - j)!}\, (-A)^{j}$$

    • It can be shown (Exercise 5.16) that if $p$ and $q$ are large, or if $A$ is a matrix having all its eigenvalues with negative real parts, then $D_{pq}(A)$ is nonsingular.
    • The difficulty of round-off errors in the Padé method can be controlled by scaling and squaring.
    • Since $e^{A}$ is the same as $\left(e^{A/m}\right)^{m}$, the idea is to pick $m$ as a power of $2$ so that $e^{A/m}$ can be computed accurately, and then find $e^{A}$ by repeated squaring.
    • This method has favorable numerical properties when $\left\|A\right\|/m$ is small.

      Let be chosen such that , then Moler and Van Loan(1978) have shown that there exists an such that ,

      Where , with

      Given an error tolerance of we should choose such that

      The Algorithm is as follows:
      Input : ,
      Output : with

      Step 1: Choose such that , set .
      Step 2: Choose and smallest non-negative integer satisfying,

      Step 3: Set
      Step 4: For do

      Step 5: Solve for : .
      Step 6: For do,
    • The algorithm requires about flops.
    • The algorithm is numerically stable when $A$ is normal. When $A$ is non-normal, an analysis of the stability property becomes difficult because, in this case, $e^{At}$ may grow before it decays during the squaring process; this is known as the “hump” phenomenon.
      Example at Page 143 of [1]
  3. We have another effective method, called the matrix decomposition method, which uses the real Schur form.

    Let $A$ be transformed to a real Schur matrix $R$ using an orthogonal similarity transformation:

    $$Q^{\top} A Q = R$$

    Then,

    $$e^{At} = Q\, e^{Rt}\, Q^{\top}$$

    Note that $R$ is upper (quasi-)triangular.

    The algorithm is:
    Step 1: Transform $A$ to the real Schur form $R$ using the QR iteration algorithm.

    Step 2: Compute :
    for do,

    end.

    for do
    for do
    set

    end
    end
    Step 3: Now that we have $e^{Rt}$, we compute $e^{At} = Q\, e^{Rt}\, Q^{\top}$.
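Here is the small numerical comparison mentioned earlier: a naive truncated Taylor series against scipy.linalg.expm (the test matrix is an arbitrary, well-behaved example; scipy's routine itself uses a Padé-approximation-with-scaling-and-squaring scheme):

```python
import numpy as np
from scipy.linalg import expm

def expm_taylor(A, terms=30):
    """Naive truncated Taylor series for e^A -- fine for small ||A||,
    unreliable in general (the drawback mentioned above)."""
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        result = result + term
    return result

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # a small, well-behaved test matrix

print(np.allclose(expm_taylor(A), expm(A)))   # True for this benign example
```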


2. Lyapunov Stability theorem: [1]

Lyapunov stability theorem:
The system

$$\dot{x}(t) = A x(t)$$

is asymptotically stable if and only if, for any symmetric positive definite matrix $Q$, there exists a unique symmetric positive definite matrix $P$ satisfying the equation:

$$A^{\top} P + P A = -Q$$


Kronecker Product and Lyapunov:

For example,
if

and,

Then, the Kronecker product will look like,

It is evident that,

Now, we formulate the Lyapunov equation using the Kronecker product. Note that $-Q$ is negative (semi-)definite, while $P$ is positive definite and symmetric.

Vectorisation of a matrix: stack all the columns of the matrix into one long column vector.

Going directly to the final answer,

$$\left(I \otimes A^{\top} + A^{\top} \otimes I\right) \operatorname{vec}(P) = -\operatorname{vec}(Q)$$

Verify it once.

If $A$ is stable, then the Lyapunov equation has a unique solution.
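A minimal numerical check of the vectorised form, using an arbitrary stable test matrix (assuming numpy and scipy are available):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Solve A^T P + P A = -Q via vectorisation, as sketched above.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # a stable test matrix (eigenvalues -1, -2)
Q = np.eye(2)                  # symmetric positive definite
n = A.shape[0]
I = np.eye(n)

# vec(A^T P + P A) = (I kron A^T + A^T kron I) vec(P), column-stacking convention.
M = np.kron(I, A.T) + np.kron(A.T, I)
vec_P = np.linalg.solve(M, -Q.flatten(order="F"))
P = vec_P.reshape(n, n, order="F")

print(P)
print(np.allclose(A.T @ P + P @ A, -Q))                    # True
print(np.allclose(P, solve_continuous_lyapunov(A.T, -Q)))  # agrees with SciPy
```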


Schur method for Lyapunov:
The method was proposed by Bartels and Stewart. It is based on the reduction of $A$ to real Schur form.

Step 1: Reduce the problem.
For,

is reduced to,

Where,
, and .

Step 2: Solution of the reduced form.
Let,


and
Assume that the columns through have been computed, and consider the following two cases,

Case 1: . Then is determined by solving the quasi-triangular system,

If, in particular, $R$ is upper triangular, that is, there are no “Schur bumps” on the diagonal, then each column can be obtained
by solving an upper triangular system as follows:


Case 2: this shows a Schur Bump.

Example at Page 274


If $A$ doesn’t have eigenvalues on the unit circle, we modify it as,

Use this in the Discrete Lyapunov which is given by,

The resulting equation is,

You will get a similar equation to that of the normal Lyapunov equation.


3. Modified QR algorithms:[4]

  1. Modified Gram-Schmidt method:
    for

    end

    for

    -for

    -end
    end
    It requires operations.
    Practice example from the above link

  2. Original algorithm of QR decomposition:
    The aim is to write a matrix $A$ in terms of $Q$ and $R$, i.e. $A = QR$.
    Here, $Q$ is an orthogonal matrix with orthonormal columns, and $R$ is an upper-triangular matrix.

    Now, we take an example,

    We will first find , and as the three orthonormal vectors using the GS-strategy.
    Now, we will define , and as,

    We define a partial basis as which will be updated at each step.

    For the first step,
    we find as,

    similarly, we find as follows,

    Now, we update ,

    Now, we find as follows,

    Hence, the partial basis is,

This partial basis is an orthogonal set; hence we now make it orthonormal,

Hence,
norm of first column,
Similarly,

and

Hence, the orthonormal is,

    Now, we have $A$ and $Q$; we now find $R = Q^{\top} A$, which by theory has to be upper-triangular,

Make up the algorithm from the above steps; a working sketch in code follows.
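A minimal working sketch of modified Gram-Schmidt QR in numpy (the test matrix is an arbitrary choice, not the one worked out above):

```python
import numpy as np

def mgs_qr(A):
    """QR factorisation by modified Gram-Schmidt (a sketch of the steps above)."""
    A = A.astype(float).copy()
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for k in range(n):
        R[k, k] = np.linalg.norm(A[:, k])
        Q[:, k] = A[:, k] / R[k, k]
        for j in range(k + 1, n):
            R[k, j] = Q[:, k] @ A[:, j]
            A[:, j] = A[:, j] - R[k, j] * Q[:, k]  # orthogonalise against q_k immediately
    return Q, R

# A hypothetical full-rank test matrix.
A = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
Q, R = mgs_qr(A)
print(np.allclose(Q @ R, A))            # True
print(np.allclose(Q.T @ Q, np.eye(3)))  # columns are orthonormal
print(np.allclose(R, np.triu(R)))       # R is upper-triangular
```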


4. Condition Number:[5]

As a rule of thumb, if the condition number is $\kappa(A) \approx 10^{k}$, then you may lose up to $k$ digits of accuracy on top of what would be lost to the numerical method due to loss of precision from arithmetic methods.

A problem with a low condition number is said to be well-conditioned, while a problem with a high condition number is said to be ill-conditioned. The condition number is a property of the problem.

The condition number is mathematically defined as,

$$\kappa(A) = \left\|A\right\| \cdot \left\|A^{-1}\right\|$$
We define the -norm as follows,
For a matrix

The -norm is,


So, for a given matrix,

We get,

Hence,

and,

Therefore,

Thus, the matrix (or system) is ill-conditioned.
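A quick illustration with numpy (the nearly-singular matrix below is my own example, not the one from the notes):

```python
import numpy as np

# A nearly singular (hence ill-conditioned) 2x2 example -- illustrative only.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])

kappa = np.linalg.cond(A)         # defaults to the 2-norm condition number
print(kappa)                      # ~4e4, so expect to lose roughly 4-5 digits
print(np.linalg.cond(A, np.inf))  # the same idea with the infinity norm
```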


The question paper that came in this exam was pretty sadistic.
Give it a try if you can, and tell me how awesome it feels. I will probably score around 30/50.

Sunday, 23 April 2017

a FAILED mathematical approach into the search for the E.T.


Are we alone?

Scientists have been searching for another form of intelligent life for decades now. We have searched and we have failed; we have evidence and yet we do not understand what it is.
Numerous physicists, astronomers, conspiracy theorists and even the common man ( :p ) have spent most, or at least a significant part, of their lives looking into the big blue sky and asking the question, “Am I alone?”
It sounds highly unlikely that we are the ONLY intelligent form of life in this huge universe.
I don’t care how small the probability of any life in this universe, other than us, is; until and unless it is exactly zero, I want to keep looking.
There have been so many such things; let us take the Nazca Lines of Peru. These are usually called geoglyphs. They took a long time to come to light: Pedro Cieza de León mistook them for trail marks; some people mistook them for irrigation lines.
It was not until the days of flight and aeroplanes that people saw these clearly; that is when they understood that these were not trail marks or anything of the sort. These were signs and drawings.
The one I like the most is as follows:



Now, what I ask is,
- How did they make these?
- Who did they make these for?
Unfortunately, these questions are not yet answered.

Some archaeologists and geologists studied these lines and the results are shocking. These are not just lines with some nonsensical scribbling; they were made to last for a long time. Let alone the design, the methodology of design is scientific too. An excerpt from Wikipedia says the following,
On the ground, most of the lines are formed by a shallow trench with a
depth between 10 and 15 cm (4 and 6 in). Such trenches were made by
removing the reddish-brown iron oxide-coated pebbles that cover the
surface of the Nazca Desert. When this gravel is removed, the
light-colored clay earth which is exposed in the bottom of the trench
produces lines which contrast sharply in color and tone with the
surrounding land surface. This sublayer contains high amounts of lime
which, with the morning mist, hardens to form a protective layer that
shields the lines from winds, thereby preventing erosion.


Are these just drawings after all?
These are maps, some theorists say. Don't believe them? Have a look at the image below.



One of my favorite conspiracy theories is by Erich von Däniken; it is called the Cargo Cult theory.
Erich von Däniken’s theory is the most famous approach to solve the
mystery of Nazca. He had the idea that long time ago visitors from
other stars visited the earth and naturally Nazca. At this place they
landed, during the landing stones was blown away by the power of
rocket propulsion. By approaching more the power was increasing and
the cleaned band broader. In this way the first trapezes emerged.
Later the Aliens disappeared and left confused people. Like in the
modern cargo cults they tried to call the Gods back by drawing lines,
figures and trapezes. Never Däniken said the formations was made by
Aliens. He discovered the GGF/Mandala/Zodiac and the mirror -
Formation and compares them with modern VASIS or PAPI-Signs.
To read more please visit : Conspiracy theories of Nazca Lines.

Mathematics:

There is a mathematical formula by the American astronomer and astrophysicist Frank Drake [3]. He is widely regarded as the father of modern SETI (Search for Extra-Terrestrial Intelligence).
The criticism related to Drake's equation is not about the correctness of the equation itself, but about the highly ambiguous estimations of the various variables used in the equation.
The equation is as follows (a tiny numerical sketch follows the variable list below),

$$N = R_{*} \cdot f_{p} \cdot n_{e} \cdot f_{l} \cdot f_{i} \cdot f_{c} \cdot L$$
Here,
$N$ = the number of civilizations in our galaxy with which communication might be possible (i.e. which are on our current past light cone);
$R_{*}$ = the average rate of star formation in our galaxy
$f_{p}$ = the fraction of those stars that have planets
$n_{e}$ = the average number of planets that can potentially support life per star that has planets
$f_{l}$ = the fraction of planets that could support life that actually develop life at some point
$f_{i}$ = the fraction of planets with life that actually go on to develop intelligent life (civilizations)
$f_{c}$ = the fraction of civilizations that develop a technology that releases detectable signs of their existence into space
$L$ = the length of time for which such civilizations release detectable signals into space
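Here is that tiny numerical sketch. Every input value below is a made-up, illustrative guess, which is exactly the criticism being made about the equation:

```python
# A small numerical sketch of the Drake equation; all parameter values are guesses.
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    return R_star * f_p * n_e * f_l * f_i * f_c * L

N = drake(R_star=1.5,   # stars formed per year in our galaxy (guess)
          f_p=0.9,      # fraction of stars with planets (guess)
          n_e=2.0,      # potentially habitable planets per such star (guess)
          f_l=0.1,      # fraction that develop life (guess)
          f_i=0.01,     # fraction that develop intelligent life (guess)
          f_c=0.1,      # fraction that emit detectable signals (guess)
          L=10_000)     # years a civilisation stays detectable (guess)
print(N)                # ~2.7 communicating civilisations, with these guesses
```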
For people wondering what a light cone is, here is a picture. It is
a glorified way of looking into the dynamics of time.


In special and general relativity, a light cone is the path that a flash of light, emanating from a single event (localized to a single point in space and a single moment in time) and traveling in all directions, would take through spacetime


Criticism:

  1. One major setback of this equation is that it takes no account of the cosmological developmental phases and time; the value of $N$ would heavily rely on the speed of cosmological development [1].
  2. There is no derivation or methodology of how this equation came into being; which suggests that it is just a stab in the dark.
  3. The factor $n_{e}$ determines habitability; how in the fuck's world do you find that? Oxygen and water make Earth habitable for humans; and probably on some planet, nitrogen and alcohol make it habitable [2].

Here is a beautiful image about Drake’s equation,


Conclusion:

  1. This is a very badly written equation; not because it is wrong, but because it has no methodology involved. Looks like a couple of fresh college graduates wrote this while having some really potent weed.
  2. Drake's equation therefore runs straight into the Fermi paradox.
The Fermi paradox or Fermi’s paradox, named after physicist Enrico
Fermi, is the apparent contradiction between the lack of evidence and
high probability estimates, e.g., those given by the Drake equation,
for the existence of extraterrestrial civilizations.

An interesting take comes from xkcd [3]: it adds another term, which is the amount of bullshit you are going to buy from the Drake equation.




Cheers!

Monday, 10 April 2017

Euler - Riemann Zeta Function.

“Madam, I have just come from a country where people are hanged if
they talk.” ― Leonhard Euler
The Riemann zeta function, denoted $\zeta(s)$, is a function of a complex variable $s$ that analytically continues the sum of the Dirichlet series [1],

$$\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^{s}}, \qquad \operatorname{Re}(s) > 1.$$

The entire blog will be divided into the following parts:
  1. What is Analytic Continuation?
  2. What is Dirichlet Series?
  3. Recurrence relation between Bernoulli Numbers.
  4. Relationship between Bernoulli, Riemann and Euler.

1.

Before we talk about Analytic continuation, we will have to know what an Analytic function is.

There are multiple ways of defining the Analytic functions[2]. The one that I find most comfortable is given below:
: A function $f(z)$ is said to be analytic in a region $R$ of the complex plane if $f(z)$ has a derivative at each point of $R$ and if $f(z)$ is single-valued.

For better understanding, I will work through an example.
: Consider a given function $f(z)$; determine whether it is analytic or not.
: We will use the Cauchy-Riemann equations.
For a given $f(z) = u(x, y) + i\,v(x, y)$,
we have the definitions of the partial derivatives of $u$ and $v$.

Now, according to the Cauchy-Riemann equations, $f(z)$ is an analytic function iff,

$$\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y} \qquad \text{and} \qquad \frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x}$$
Now, for the given ,

Hence, the relations are,

It is clear that,

Therefore, the given $f(z)$ is not an analytic function.
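Since the specific function of the example is not shown above, here is a minimal sympy sketch of the same Cauchy-Riemann check, using a stand-in function of my own choosing (the classic non-analytic $f(z) = \bar{z}$):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# Stand-in example: f(z) = conj(z) = x - i*y, a classic non-analytic function.
u = x        # real part
v = -y       # imaginary part

cr1 = sp.simplify(sp.diff(u, x) - sp.diff(v, y))   # 0 if the first C-R equation holds
cr2 = sp.simplify(sp.diff(u, y) + sp.diff(v, x))   # 0 if the second C-R equation holds
print(cr1, cr2)                                    # 2, 0  -> C-R fails, not analytic
```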

Analytic continuation is a pretty simple concept to understand, really.
: Say $f$ is an analytic function defined over a non-empty open subset $U$ of the complex plane $\mathbb{C}$. If $V$ is a larger open subset of $\mathbb{C}$, containing $U$, and $F$ is an analytic function defined on $V$ such that,

$$F(z) = f(z) \qquad \text{for all } z \in U,$$

then $F$ is called an analytic continuation of $f$.
In other words, analytic continuation is a method of extending the domain of an analytic function. [3]

For some cool examples on Analytic continuation refer Virginia Tech’s paper here [4].

2.

A Dirichlet series [5] is any series of the general form,

$$\sum_{n=1}^{\infty} \frac{a_{n}}{n^{s}}$$

Now, with a slight modification (taking $a_{n} = 1$ for every $n$),

we get,

$$\sum_{n=1}^{\infty} \frac{1}{n^{s}}$$

For a function defined as,

$$\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^{s}},$$

$\zeta(s)$ is the Riemann zeta function.

3.

Definitions,[6]

Therefore, is given as,

More generally,

for ,

or equivalently,

The above relation is symbolically written as,

$$(B + 1)^{n} = B^{n},$$

where, on expansion, all $k$-th powers of $B$ (i.e. $B^{k}$) must be written as $B_{k}$ and treated as Bernoulli numbers.
The expansion is done on the basis of the Binomial Theorem, which states that,

$$(x + y)^{n} = \sum_{k=0}^{n} \binom{n}{k}\, x^{k}\, y^{n-k}$$
Therefore, for ,

Therefore, for , after expanding for we have,


4.

Euler found a formula which easily defines the **even-numbered zeta** functions as follows [7]:

$$\zeta(2n) = \frac{(-1)^{n+1}\, (2\pi)^{2n}\, B_{2n}}{2\,(2n)!}$$

Interestingly, the odd-indexed Bernoulli numbers (beyond $B_{1}$) are all zero, so there is no such counterpart for the odd-numbered zeta functions, and yet values like $\zeta(3)$ exist. Well, that is beyond the scope of this blog.
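As a sanity check of Euler's formula and of the vanishing odd Bernoulli numbers, here is a small sympy sketch (assuming sympy is available):

```python
import sympy as sp

# Check Euler's formula zeta(2n) = (-1)^(n+1) * (2*pi)^(2n) * B_(2n) / (2*(2n)!)
for n in range(1, 5):
    lhs = sp.zeta(2 * n)
    rhs = (-1)**(n + 1) * (2 * sp.pi)**(2 * n) * sp.bernoulli(2 * n) / (2 * sp.factorial(2 * n))
    print(n, sp.simplify(lhs - rhs) == 0)            # True for each n

# The odd-indexed Bernoulli numbers (beyond B_1) all vanish:
print([sp.bernoulli(k) for k in (3, 5, 7, 9)])       # [0, 0, 0, 0]
```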
Cheers!