In this article, we are going to explore the Gambler’s Fallacy with Python and p5.js by simulating sequences of coin flips.

If I flip a coin 4 times, and the probability of it landing heads or tails is 50% for each outcome, what is the order of likelihood of the following sequences occurring?

A) H H H T
B) H H H H
C) H H T T

For example, if you think A is most likely, followed by B, followed by C, choose A, B, C.

Make a prediction, then press the `Play/pause` button to see a simulation.

You can check out the code for the visualisation above on the p5.js site.

If you got this right then well done! If not, then you have just committed **the Gambler’s Fallacy**. Don’t worry, you are by no means alone in this initial assessment of the relative likelihoods of the sequences.

## Explanation of the Gambler’s Fallacy

The reason so many people commit the Gambler’s Fallacy in this and many other situations involving probability is a belief that what has happened previously must affect what happens next. However, in situations like flipping a coin, this is not the case: each new outcome is **independent of the others**. With 4 coins there are `2 ^ 4 = 16` possible sequences, and each is equally likely to occur.
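To make this concrete, here is a minimal check that multiplies out the probabilities of the independent flips for each of the three sequences from the question:

```python
# Each flip is independent, so the probability of any specific
# 4-flip sequence is (1/2)^4, regardless of its pattern.
p_hhht = 0.5 * 0.5 * 0.5 * 0.5  # H H H T
p_hhhh = 0.5 ** 4               # H H H H
p_hhtt = 0.5 ** 4               # H H T T

print(p_hhht, p_hhhh, p_hhtt)  # all 0.0625, i.e. 1/16
```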

Apart from the misconception about prior outcomes affecting the next outcome, another thing about this example which makes it easy to be mistaken is that we might think of different, related questions, such as “what are the odds of getting 3 heads and 1 tail?” What is different in these cases is that order is unimportant, and there are multiple ways of achieving these combinations.
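The distinction between a specific sequence and an unordered combination can be counted directly with `math.comb` (binomial coefficients):

```python
from math import comb

total = 2 ** 4  # 16 equally likely sequences of 4 flips

# Only one ordering gives 4 heads, but there are C(4, 3) = 4
# orderings with exactly 3 heads and 1 tail, so that combination
# is 4 times as likely even though each individual ordering is not.
ways_4_heads = comb(4, 4)  # 1
ways_3_heads = comb(4, 3)  # 4
ways_2_heads = comb(4, 2)  # 6

print(ways_4_heads / total)  # 0.0625
print(ways_3_heads / total)  # 0.25
print(ways_2_heads / total)  # 0.375
```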

The Gambler’s Fallacy is just one of the ways in which humans are systematically poor at statistical reasoning, a phenomenon that is becoming better understood as more research is carried out and awareness of these weaknesses slowly becomes part of our culture. There is a fascinating book on this topic by Daniel Kahneman called “Thinking, Fast and Slow,” which goes into great detail about many of the perceptual errors we are subject to due to the way our brains work.


## Simulating the Gambler’s Fallacy with Python

Now for some code. The Python code below provides a simulation of the visualisation above.

```python
import itertools
import random

NUM_COINS = 4
NUM_FLIPS = 1000


def coin_flip():
    return "T" if random.random() < 0.5 else "H"


def new_sequence(event_counter):
    seq = ""
    for i in range(NUM_COINS):
        seq += coin_flip()
    event_counter[seq] += 1


perms = ["".join(x) for x in itertools.product("TH", repeat=NUM_COINS)]
event_counter = {key: 0 for key in perms}

for i in range(NUM_FLIPS):
    new_sequence(event_counter)

for k, v in event_counter.items():
    proportion = str(round(v / NUM_FLIPS * 100, 2)) + "%"
    print(k, proportion)
```

A few points about this listing:

- It uses `itertools.product` as an easy way to generate all possible sequences of coin flips
- You can modify the constants `NUM_COINS` and `NUM_FLIPS` to explore different scenarios
- A weighted coin could be simulated by editing this line: `return "T" if random.random() < 0.5 else "H"`
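For the weighted-coin point, one possible sketch is shown below; the `p_tails` parameter and the 0.6 bias are just illustrative values, not part of the original listing:

```python
import random

def biased_coin_flip(p_tails=0.6):
    # A coin weighted towards tails: p_tails is the probability of "T".
    return "T" if random.random() < p_tails else "H"

flips = [biased_coin_flip() for _ in range(10_000)]
print(flips.count("T") / len(flips))  # roughly 0.6
```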

A sample output from the above listing is:

```
TTTT 6.4%
TTTH 6.4%
TTHT 7.2%
TTHH 4.6%
THTT 5.8%
THTH 5.6%
THHT 6.3%
THHH 6.8%
HTTT 7.2%
HTTH 5.9%
HTHT 5.1%
HTHH 6.1%
HHTT 6.7%
HHTH 6.5%
HHHT 7.1%
HHHH 6.3%
```

In the long run, the expected relative frequency of each sequence is `6.25%` (`1/16 × 100`). The values above for `1000` trials are reasonably close to this, and would approach it more closely with more trials.
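As a rough sketch of that convergence, one can compare the largest deviation from 6.25% across all 16 sequences at different trial counts (the `run_trials` helper and the seed here are illustrative additions, and exact numbers vary from run to run):

```python
import itertools
import random
from collections import Counter

def run_trials(n, rng):
    counter = Counter()
    for _ in range(n):
        seq = "".join("T" if rng.random() < 0.5 else "H" for _ in range(4))
        counter[seq] += 1
    # Largest absolute deviation (in percentage points) from the
    # expected 6.25% across all 16 possible sequences.
    return max(abs(counter[s] / n * 100 - 6.25)
               for s in ("".join(p) for p in itertools.product("TH", repeat=4)))

rng = random.Random(42)
for n in (1_000, 100_000):
    print(n, round(run_trials(n, rng), 2))
```

With more trials the deviation shrinks, which is the law of large numbers at work.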

Happy computing!