Given the following Markov Matrix:
import numpy, scipy.linalg
A = numpy.array([[0.9, 0.1],[0.15, 0.85]])
The stationary probability vector exists and is equal to [0.6, 0.4]. This is easy to verify by taking a large power of the matrix:
B = A.copy()
for _ in range(10):  # repeated squaring: B ends up as A**(2**10)
    B = numpy.dot(B, B)
Here B[0] is [0.6, 0.4]. So far, so good. According to Wikipedia:
A stationary probability vector is defined as a vector that does not change under application of the transition matrix; that is, it is defined as a left eigenvector of the probability matrix, associated with eigenvalue 1:
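That definition can be checked directly for the claimed vector; a quick NumPy sketch:

```python
import numpy as np

# the Markov matrix and the claimed stationary vector from above
A = np.array([[0.9, 0.1], [0.15, 0.85]])
pi = np.array([0.6, 0.4])

# "does not change under application of the transition matrix":
# as a left eigenvector, pi @ A should reproduce pi
assert np.allclose(pi @ A, pi)
```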
So I should be able to compute the left eigenvector of A with eigenvalue 1, and this should also give me the stationary probability. SciPy's implementation of eig has a left keyword:
scipy.linalg.eig(A,left=True,right=False)
Gives:
(array([ 1.00+0.j, 0.75+0.j]),
 array([[ 0.83205029, -0.70710678],
        [ 0.5547002 ,  0.70710678]]))
This says that the dominant left eigenvector (the first column) is [0.83205029, 0.5547002]. Am I reading this incorrectly? How do I get [0.6, 0.4] using the eigenvalue decomposition?
The [0.83205029, 0.5547002] is just [0.6, 0.4] multiplied by ~1.39. Although from a "physical" point of view you need the eigenvector whose components sum to 1, scaling an eigenvector by some factor does not change its "eigenness": if v A = λ v, then obviously (c v) A = c (v A) = λ (c v).
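This scale invariance is easy to confirm numerically; a small sketch:

```python
import numpy as np

A = np.array([[0.9, 0.1], [0.15, 0.85]])
v = np.array([0.6, 0.4])  # left eigenvector of A with eigenvalue 1

# any nonzero multiple of v is still a left eigenvector with eigenvalue 1
for c in (1.0, 1.39, -7.0):
    assert np.allclose((c * v) @ A, c * v)
```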
So, to get [0.6, 0.4], normalize the eigenvector:
>>> v = scipy.linalg.eig(A,left=True,right=False)[1][:,0]
>>> v
array([ 0.83205029, 0.5547002 ])
>>> v / sum(v)
array([ 0.6, 0.4])
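As an alternative (not from the original answer), the same vector can be obtained without eig by computing the null space of A.T - I and normalizing, assuming scipy.linalg.null_space is available (SciPy ≥ 1.1):

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[0.9, 0.1], [0.15, 0.85]])

# a left eigenvector of A for eigenvalue 1 spans the null space of A.T - I
ns = null_space(A.T - np.eye(2))
pi = ns[:, 0] / ns[:, 0].sum()  # normalize (this also fixes the sign)
print(pi)  # approximately [0.6, 0.4]
```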