# Law of Iterated Expectation

The **Law of Iterated Expectation** states that the expected value of a random variable equals the expected value of its conditional expectation given a second random variable; equivalently, it is the weighted average of the conditional expectations, weighted by the probabilities of the conditioning outcomes. Intuitively speaking, the law states that the expected outcome of an event can be calculated using casework on the possible outcomes of an event it depends on. For instance, suppose the probability of rain tomorrow depends on whether it rains today, and all of the following are known:

- The probability of rain today
- The probability of rain tomorrow *given* that it rained today
- The probability of rain tomorrow *given* that it did *not* rain today

the probability of rain tomorrow can be calculated by considering both cases (it rained today/it did not rain today) in turn. To use specific numbers, suppose that

- The probability of rain today is 70%
- If it rains today, it will rain tomorrow with probability 30%
- If it does not rain today, it will rain tomorrow with probability 90%

In this case, the probability of rain tomorrow is

\[0.7 \cdot 0.3 + 0.3 \cdot 0.9 = 0.21 + 0.27 = 0.48 = 48\%\]
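The casework above can be checked with a few lines of Python, using the same numbers:

```python
# Total probability of rain tomorrow, by casework on today's weather.
p_rain_today = 0.7
p_tomorrow_given_rain = 0.3  # it rains tomorrow, given rain today
p_tomorrow_given_dry = 0.9   # it rains tomorrow, given no rain today

p_rain_tomorrow = (p_rain_today * p_tomorrow_given_rain
                   + (1 - p_rain_today) * p_tomorrow_given_dry)
print(round(p_rain_tomorrow, 2))  # 0.48
```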

The Law of Iterated Expectation is useful when the probability distribution of both a random variable \(X\) and a conditional random variable \(Y|X\) is known, and the probability distribution of \(Y\) is desired. This occurs extremely often in practice, especially in economics and poker.

## Formal definition

Let \(X,Y\) be random variables. Then

\[\mathbb{E}[X] = \mathbb{E}[\mathbb{E}[X|Y]]\]

where \(X|Y\) is the conditional probability distribution of \(X\) given \(Y\).

If \(Y\) takes finitely many values, and \(A_i\) denotes the event that \(Y\) takes its \(i\)th value (so \(A_1, A_2, \ldots, A_n\) partition the sample space), the law can be written in the more natural form

\[\mathbb{E}[X] = \sum_{i=1}^{n}\mathbb{E}[X|A_i]P(A_i)\]

This means that the expected value of \(X\) can be calculated from the probability distribution of \(X|Y\) and \(Y\), which is often useful both in theory and practice.
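The identity can be verified directly on a small discrete joint distribution. The numbers below are made up purely for illustration:

```python
# Check E[X] = sum_i E[X | A_i] P(A_i) on a small joint distribution
# for (X, Y).  Keys are (x, y) pairs, values are P(X=x, Y=y).
joint = {
    (0, 'a'): 0.1, (1, 'a'): 0.3,
    (0, 'b'): 0.4, (1, 'b'): 0.2,
}

# Direct expectation of X from the joint distribution.
e_x = sum(x * p for (x, _), p in joint.items())

# Iterated expectation: condition on each value of Y in turn.
e_iter = 0.0
for y0 in {y for (_, y) in joint}:
    p_y = sum(p for (_, y), p in joint.items() if y == y0)
    e_x_given_y = sum(x * p for (x, y), p in joint.items() if y == y0) / p_y
    e_iter += e_x_given_y * p_y

print(round(e_x, 10), round(e_iter, 10))  # both 0.5
```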

## Example

For example, consider a star basketball player who scores (2 points) 80% of the time when unguarded, but only 40% of the time when guarded. Against the team's current opponent, the player will be guarded 70% of the time. Then, when the player shoots, the Law of Iterated Expectation says that:

\[\mathbb{E}[\text{points scored}] = \mathbb{E}[\text{points scored}\,|\,\text{guarded}]P(\text{guarded})+\mathbb{E}[\text{points scored}\,|\,\text{unguarded}]P(\text{unguarded})\]

\[\mathbb{E}[\text{points scored}] = 2 \cdot 0.4 \cdot 0.7 + 2 \cdot 0.8 \cdot 0.3 = 0.56 + 0.48 = 1.04\]

so the player scores an average of 1.04 points every time he shoots.
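The same arithmetic as a short script, using the numbers above:

```python
# Expected points per shot for the star player, by casework on
# whether the shot is guarded.
points_per_make = 2
p_guarded = 0.7
p_make_guarded = 0.4
p_make_unguarded = 0.8

expected = (points_per_make * p_make_guarded * p_guarded
            + points_per_make * p_make_unguarded * (1 - p_guarded))
print(round(expected, 2))  # 1.04
```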

This law has practical application for the opposing team: for each defensive scheme, they can calculate how many points the opposing team will score (on average), so long as they know

- How often each player will be guarded
- How well each player shoots while guarded, and how well they shoot while unguarded

which allows the opponent to pick the best defensive scheme possible.
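This comparison can be sketched in code. The two-player offense, the shooting percentages, and the two schemes below are all made-up numbers for illustration:

```python
# Hypothetical comparison of two defensive schemes.
# shooting[player] = (P(make | guarded), P(make | unguarded)); each make is 2 points.
shooting = {'star': (0.4, 0.8), 'role': (0.3, 0.5)}

# For each scheme: player -> (P(guarded), fraction of the team's shots).
schemes = {
    'double-team star': {'star': (0.9, 0.5), 'role': (0.2, 0.5)},
    'balanced':         {'star': (0.6, 0.5), 'role': (0.6, 0.5)},
}

def expected_points_per_shot(scheme):
    """Iterated expectation over who shoots and whether they are guarded."""
    total = 0.0
    for player, (p_guarded, shot_share) in scheme.items():
        p_g, p_u = shooting[player]
        total += shot_share * 2 * (p_g * p_guarded + p_u * (1 - p_guarded))
    return total

# The defense prefers the scheme with the lowest expected points per shot.
for name, scheme in schemes.items():
    print(name, round(expected_points_per_shot(scheme), 2))
```

With these assumed numbers, double-teaming the star concedes fewer expected points per shot (0.90) than guarding everyone equally (0.94), so it would be the better scheme.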

## Bayes' theorem and joint distributions

An important identity that simplifies this kind of reasoning is the multiplication rule for joint probabilities:

\[P(A \cap B) = P(A) \cdot P(B|A) = P(B) \cdot P(A|B)\]

This theorem makes logical sense: the probability that events \(A\) and \(B\) both occur is the same as the probability that \(A\) occurs, then \(B\) occurs. The probability that \(A\) occurs is simply \(P(A)\), and the probability that \(B\) subsequently occurs is \(P(B|A)\). Equivalently, the order can be reversed, leading to the second equality.

Note that when \(A,B\) are independent, this law becomes the more familiar \(P(A\cap B) = P(A) \cdot P(B)\).

This theorem is important because it allows the calculation of \(P(A|B)\) given \(P(A \cap B)\) and \(P(B)\), which is useful as \(P(A|B)\) is used in the law of iterated expectation.
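A concrete check of both factorizations, using a standard example (drawing two cards from a 52-card deck without replacement) rather than anything from the article:

```python
from fractions import Fraction

# Multiplication rule: A = "first card is an ace", B = "second card is an ace".
p_a = Fraction(4, 52)          # P(A)
p_b_given_a = Fraction(3, 51)  # P(B|A): one ace is already gone
p_both = p_a * p_b_given_a
print(p_both)  # 1/221

# The reversed factorization gives the same joint probability.
p_b = Fraction(4, 52)          # P(B): by symmetry, the second card is
                               # an ace with the same probability
p_a_given_b = Fraction(3, 51)  # P(A|B)
print(p_b * p_a_given_b == p_both)  # True
```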


This law can also be rearranged into Bayes' theorem, which states that:

\[P(B|A) = \frac{P(A|B)P(B)}{P(A)}\]

which allows for the same calculations as above.
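A short sketch of such a calculation, with made-up numbers (a prior \(P(B)\) plus the conditional probabilities of an observation \(A\) under \(B\) and its complement):

```python
from fractions import Fraction

# Bayes' theorem: P(B|A) = P(A|B) P(B) / P(A), where P(A) is expanded
# by casework on B (the law of total probability).
p_b = Fraction(1, 100)              # prior P(B)
p_a_given_b = Fraction(9, 10)       # P(A|B)
p_a_given_not_b = Fraction(5, 100)  # P(A|not B)

p_a = p_a_given_b * p_b + p_a_given_not_b * (1 - p_b)
p_b_given_a = p_a_given_b * p_b / p_a
print(p_b_given_a)  # 2/13
```

Note how small the posterior remains despite the likelihood ratio favoring \(B\): the low prior dominates, which is exactly the kind of effect Bayes' theorem makes precise.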

On planet Brilliantia, there are two types of creatures: mathematicians and non-mathematicians.

Mathematicians tell the truth \(\frac{6}{7}\) of the time and lie only \(\frac{1}{7}\) of the time, while non-mathematicians tell the truth \(\frac{1}{5}\) of the time and lie \(\frac{4}{5}\) of the time.

It is also known that there is a \(\frac{2}{3}\) chance a creature from Brilliantia is a mathematician and a \(\frac{1}{3}\) chance that it is a non-mathematician, but there is no way of differentiating from these two types.

You are visiting Brilliantia on a research trip. During your stay, you come across a creature who states that it has found a one line proof for Fermat's Last Theorem. Immediately after that, a second creature shows up and states that the first creature's statement was a true one.

If the probability that the first creature's statement was actually true is \(\frac{a}{b}\), for some coprime positive integers \(a, b\), find the value of \(b - a\).


**Cite as:** Law of Iterated Expectation. *Brilliant.org*. Retrieved from https://brilliant.org/wiki/law-of-iterated-expectation/