
What is the probability that a Markov chain works?

Let's play a game with a coin. We will flip this coin repeatedly, until either we get the sequence heads, heads, tails, or we get the sequence tails, heads, heads; in the former case you win, in the latter I win. For example, if the tosses come up \(\text{HHHTTHHTTH}\ldots\), then you won by toss 4 (because tosses 2–4 read \(\text{HHT}\)), even though \(\text{THH}\) appeared later, by toss 7. Had the second toss been tails, I would have won instead, since \(\text{HHT}\) would only have appeared by toss 8. What is the probability that you win?

This kind of problem appears relatively often (or at least I once solved two such problems in a row). Although there is a specialized solution that works for this particular problem (if either of the first two tosses is tails, you lose; otherwise you win), I will explain a more general method that applies to many problems of this type.

First, we observe the possible states of the game. There are nine states, which we can encode as \(\text{|*, |H*, |T*, HH*, HT*, TH*, TT*, HHT, THH}\). A pipe (|) indicates the beginning of the game, while an asterisk (*) indicates the time just before the current toss. Thus \(\text{|T*}\) indicates that we got tails as the first toss, and now we're at the second toss, while \(\text{HT*}\) indicates the last two tosses were heads and tails, respectively. After each toss, our state might change to something else; for example, after a toss of heads from \(\text{HT*}\), we end up in the state \(\text{TH*}\), since now the last two tosses are tails and heads in that order. (The fact that the third-last toss was heads no longer matters.) \(\text{HHT}\) is a win for you, and \(\text{THH}\) is a win for me; once we enter either of these, we won't go anywhere else.
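To make the transitions concrete, here is a minimal sketch in Python (the state names and the `play` helper are my own; the names simply drop the trailing asterisk from the notation above):

```python
# Transition table for the nine states described above; names drop the
# trailing '*', and '|' marks the beginning of the game.
TRANSITIONS = {
    "|":   {"H": "|H",  "T": "|T"},
    "|H":  {"H": "HH",  "T": "HT"},
    "|T":  {"H": "TH",  "T": "TT"},
    "HH":  {"H": "HH",  "T": "HHT"},  # tails completes HHT: you win
    "HT":  {"H": "TH",  "T": "TT"},
    "TH":  {"H": "THH", "T": "HT"},   # heads completes THH: I win
    "TT":  {"H": "TH",  "T": "TT"},
    "HHT": {"H": "HHT", "T": "HHT"},  # absorbing
    "THH": {"H": "THH", "T": "THH"},  # absorbing
}

def play(tosses):
    """Follow the chain through a sequence of tosses; return the final state."""
    state = "|"
    for toss in tosses:
        state = TRANSITIONS[state][toss]
    return state

print(play("HHHTTHHTTH"))  # → HHT (you win by toss 4, as in the example)
```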

Now we can construct a directed graph whose vertices are the states and whose edges are the possible transitions, with each edge weighted by the probability of that transition. In other words, we create a Markov chain, as shown in the following diagram, made lovingly in Microsoft Paint:

A Markov chain of the problem

Red arrows indicate the state change on getting heads, while orange arrows indicate the state change on getting tails. (The exception is the two rightmost, finishing states, where the single arrow applies whatever the toss; once we reach one of those, the state never changes again.) For the Markov chain itself, the colors can be ignored.

Now, we put a number into each state, indicating the probability that you win. The winning states are easy: \(\text{HHT}\) has probability \(1\) and \(\text{THH}\) has probability \(0\).

There is an important equality that always holds: \(P(\text{state }u) = \sum ((\text{weight of arrow leaving }u) \cdot P(\text{state pointed to by the arrow}))\). For example, let's apply this to the state \(\text{HH*}\). Suppose it has probability \(p\). Then, according to the equality, we have \(p = \frac{1}{2} \cdot 1 + \frac{1}{2} \cdot p\); the first summand corresponds to following the arrow to \(\text{HHT}\) (getting tails), and the second to the loop back to \(\text{HH*}\) (getting heads). Solving gives \(p = 1\). This means that once we reach \(\text{HH*}\), you will win no matter what.
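As a quick sanity check (my own sketch, not part of the original argument), repeated substitution into the equation \(p = \frac{1}{2} + \frac{1}{2}p\) converges to the same answer:

```python
# Fixed-point equation for HH*: p = 1/2 * 1 + 1/2 * p.
# Algebraically, p/2 = 1/2, so p = 1; repeated substitution
# starting from 0 converges to that value.
p = 0.0
for _ in range(60):
    p = 0.5 * 1 + 0.5 * p
print(p)  # converges to 1.0
```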

Similarly, we can solve for \(\text{HT*, TH*, TT*}\). This is a bit harder, since these three states give a system of linear equations in three variables, but solving it yields all zeroes. Once you reach any of these three states, you're doomed.
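For the record, here is that system read off the diagram, together with a check (a sketch of mine) that the all-zero assignment is its only solution:

```python
from fractions import Fraction

# Equations read off the diagram, with P(THH) = 0 already plugged in:
#   P(HT*) = P(TH*)/2 + P(TT*)/2   (heads -> TH*, tails -> TT*)
#   P(TH*) = 0/2      + P(HT*)/2   (heads -> THH, tails -> HT*)
#   P(TT*) = P(TH*)/2 + P(TT*)/2   (heads -> TH*, tails -> TT*)
half = Fraction(1, 2)
x = y = z = Fraction(0)  # the claimed solution: all zeroes
assert x == half * y + half * z
assert y == half * 0 + half * x
assert z == half * y + half * z

# The coefficient matrix [[1,-1/2,-1/2],[-1/2,1,0],[0,-1/2,1/2]] has a
# nonzero determinant (cofactor expansion along the first row), so the
# all-zero assignment is the system's only solution.
det = 1 * (1 * half - 0) + half * (-half * half - 0) - half * (half * half - 0)
print(det)  # → 1/4
```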

We can then continue to \(\text{|H*}\) and \(\text{|T*}\). For the former, \(P(\text{|H*}) = \frac{1}{2} P(\text{HH*}) + \frac{1}{2} P(\text{HT*}) = \frac{1}{2} \cdot 1 + \frac{1}{2} \cdot 0 = \frac{1}{2}\). The latter is computed the same way, giving \(P(\text{|T*}) = 0\). Finally, for the starting state this yields \(P(\text{|*}) = \frac{1}{2} \cdot \frac{1}{2} + \frac{1}{2} \cdot 0 = \frac{1}{4}\): at the beginning you have only a \(\frac{1}{4}\) probability of winning. Yes, I'm a jerk.
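The whole chain can also be solved in one go. The following sketch (my own; the state names match the diagram with the asterisks dropped) builds the linear system \(P(u) = \sum (\text{weight}) \cdot P(\text{successor})\) for the seven non-absorbing states and solves it exactly over fractions:

```python
from fractions import Fraction

# Successors (on heads, on tails) for the non-absorbing states.
T = {
    "|":  ("|H", "|T"), "|H": ("HH", "HT"), "|T": ("TH", "TT"),
    "HH": ("HH", "HHT"), "HT": ("TH", "TT"), "TH": ("THH", "HT"),
    "TT": ("TH", "TT"),
}
states = list(T)
idx = {s: i for i, s in enumerate(states)}
n = len(states)
half = Fraction(1, 2)

# Build A x = b encoding P(u) - sum over successors of P(succ)/2 = const;
# the known values P(HHT) = 1 and P(THH) = 0 go to the right-hand side.
A = [[Fraction(0)] * n for _ in range(n)]
b = [Fraction(0)] * n
for s in states:
    A[idx[s]][idx[s]] = Fraction(1)
    for succ in T[s]:
        if succ == "HHT":
            b[idx[s]] += half
        elif succ == "THH":
            pass
        else:
            A[idx[s]][idx[succ]] -= half

# Gauss-Jordan elimination, exact over Fraction (the matrix is nonsingular
# because HHT/THH are reachable absorbing states).
for col in range(n):
    piv = next(r for r in range(col, n) if A[r][col] != 0)
    A[col], A[piv] = A[piv], A[col]
    b[col], b[piv] = b[piv], b[col]
    for r in range(n):
        if r != col and A[r][col] != 0:
            f = A[r][col] / A[col][col]
            A[r] = [a - f * c for a, c in zip(A[r], A[col])]
            b[r] -= f * b[col]

P = {s: b[idx[s]] / A[idx[s]][idx[s]] for s in states}
print(P["|"])  # → 1/4, matching the hand computation
```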

A Markov chain after putting the probabilities of winning from states

This visualization lets us solve most problems of this sort more cleanly, without getting confused about how states lead to each other or what the equations are. It is not much simpler than writing the equations without a diagram, but it helps make sure you don't miss anything, and it also shows which nodes you can probably compute earlier. In the worst case, yes, you need to set up a system of linear equations and solve it by hand as usual, but then at least you know that is the best method available.
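If you want to double-check an answer obtained this way, a quick Monte Carlo simulation works well. Here is one such sketch of mine for this game, estimating your winning probability:

```python
import random

# Flip a fair coin until HHT or THH shows up; return True if HHT came first.
def hht_first(rng):
    last3 = ""
    while True:
        last3 = (last3 + rng.choice("HT"))[-3:]
        if last3 == "HHT":
            return True
        if last3 == "THH":
            return False

rng = random.Random(0)  # fixed seed for reproducibility
trials = 100_000
wins = sum(hht_first(rng) for _ in range(trials))
print(wins / trials)    # close to the exact answer 1/4
```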

Sample problems (read: the two problems I did that made me want to write this note):

Note by Ivan Koswara
1 year, 11 months ago


Thanks for the detailed and well-written note! It gave me a head start on Markov chains in probability... Pawan Kumar · 1 year, 6 months ago


Check this article: http://www.sciencedirect.com/science/article/pii/S0167715210002270 Collatz Lothal · 1 year, 10 months ago


