giddy.markov.sojourn_time
- giddy.markov.sojourn_time(p, summary=True)
Calculate sojourn time based on a given transition probability matrix.
- Parameters:
- p : array
(k, k), a Markov transition probability matrix.
- summary : bool
If True and the Markov chain has absorbing states, whose sojourn times are infinite, print information about those states. Default is True.
- Returns:
- array
(k, ), sojourn times. Each element is the expected time the Markov chain spends in the corresponding state before leaving it.
Notes
Refer to [Ibe09] for more details on sojourn times for Markov chains.
Examples
>>> from giddy.markov import sojourn_time
>>> import numpy as np
>>> p = np.array([[.5, .25, .25], [.5, 0, .5], [.25, .25, .5]])
>>> sojourn_time(p)
array([2., 1., 2.])
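These values match the standard closed form for the expected sojourn time of a discrete-time chain, 1 / (1 - p_ii), where p_ii is the self-transition probability of state i (see [Ibe09]). A quick check, reusing p and the numpy import from the example above (illustrative only, not necessarily the library's exact implementation):

>>> 1.0 / (1.0 - p.diagonal())
array([2., 1., 2.])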
Non-ergodic Markov chains with rows of zeros (absorbing states):
>>> p = np.array([[.5, .25, .25], [.5, 0, .5], [0, 0, 0]])
>>> sojourn_time(p)
Sojourn times are infinite for absorbing states! In this Markov Chain, states [2] are absorbing states.
array([ 2., 1., inf])
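If you need to flag absorbing states programmatically rather than relying on the printed message, one possible check (a sketch, not part of the giddy API) is to look for states whose self-transition probability is 1 or whose row of p is all zeros:

>>> import numpy as np
>>> p = np.array([[.5, .25, .25], [.5, 0, .5], [0, 0, 0]])
>>> np.where((p.diagonal() == 1) | (p.sum(axis=1) == 0))[0]
array([2])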