First, let's show that the Gambler's Fallacy is actually a fallacy by simulating a bunch of coin tosses in a program. We'll keep track of all the times a streak of 10 heads was obtained and the following toss after it and see if the number of heads following the streak is significantly less than the number of tails. We do not consider 11 heads in a row as two streaks of 10 heads but instead start searching for a completely new streak after every streak. Here is a Python 3 program to do that (we also count the number of heads and tails obtained in general to show that the simulated coin is not biased).

import random

seed = 0
num_trials = 1000000
head_streak_length_limit = 10

random.seed(seed)

num_heads = 0
num_tails = 0
heads_streak_length = 0
next_was_head = 0
next_was_tail = 0

for trial in range(num_trials):
    result = random.choice(['head', 'tail'])

    # Count overall heads and tails to show the coin is unbiased.
    if result == 'head':
        num_heads += 1
    else:
        num_tails += 1

    if heads_streak_length < head_streak_length_limit:
        # Still building up a streak: extend it or reset it.
        if result == 'head':
            heads_streak_length += 1
        else:
            heads_streak_length = 0
    else:
        # A full streak was just completed: record the following toss,
        # then start searching for a completely new streak.
        if result == 'head':
            next_was_head += 1
        else:
            next_was_tail += 1
        heads_streak_length = 0

print('heads / tails:', round(num_heads/num_tails, 4))
print('after heads streak was head / after heads streak was tail:', round(next_was_head/next_was_tail, 4))

And here are the results:

heads / tails: 1.002
after heads streak was head / after heads streak was tail: 1.0579

Both the ratio of heads and tails in general and the ratio of heads and tails following a streak of 10 heads in a row are close to 1. You can try changing the streak length limit to see how making it shorter does not significantly change this ratio.
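To make that experiment easier, here is a sketch that wraps the same logic in a function parameterised by the streak length (the function name and the particular list of limits are my own choices) and prints the ratio for several streak lengths at once:

```python
import random

def streak_follow_counts(streak_limit, num_trials, seed=0):
    """Count heads vs tails on the toss immediately after each streak of
    `streak_limit` heads, restarting the search after every completed
    streak, exactly as in the program above."""
    rng = random.Random(seed)
    streak = next_head = next_tail = 0
    for _ in range(num_trials):
        result = rng.choice('HT')
        if streak < streak_limit:
            # Still building up a streak: extend it or reset it.
            streak = streak + 1 if result == 'H' else 0
        else:
            # A full streak was just completed: record the next toss.
            if result == 'H':
                next_head += 1
            else:
                next_tail += 1
            streak = 0
    return next_head, next_tail

for limit in (2, 4, 6, 8):
    h, t = streak_follow_counts(limit, 1_000_000)
    print(limit, round(h / t, 4))
```

Whatever limit you pick, the printed ratio stays close to 1 (shorter limits give more streaks and hence a tighter ratio).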

So why does this happen? It's a simple matter of probability theory. The probability of getting a head and the probability of getting a tail are both 0.5, and each toss is independent of the previous ones. Therefore the probability of 10 heads followed by another head is $0.5^{10} \times 0.5 = 0.5^{11}$, and the probability of 10 heads followed by a tail is also $0.5^{10} \times 0.5 = 0.5^{11}$. It makes no difference what the previous tosses were: the next toss being heads always has a probability of 0.5, so both heads and tails are equally likely.
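We can verify this by brute force rather than by simulation: enumerate all $2^{11}$ equally likely outcomes of 11 tosses, keep only those that start with 10 heads, and check what fraction of those end in a head (a small sketch using exact fractions):

```python
from fractions import Fraction
from itertools import product

# All 2^11 equally likely outcomes of 11 tosses.
outcomes = list(product('HT', repeat=11))

# Condition on the first 10 tosses all being heads.
after_streak = [s for s in outcomes if s[:10] == ('H',) * 10]

# Among those, the 11th toss is heads exactly half the time.
p_head = Fraction(sum(s[10] == 'H' for s in after_streak), len(after_streak))
print(p_head)  # 1/2

# The streak itself is rare: only 2 of the 2048 outcomes contain it.
print(Fraction(len(after_streak), len(outcomes)))  # 1/1024 = 0.5^10
```

The streak is rare, but once it has happened, the conditional probability of the next toss being heads is exactly 1/2.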

So then the more interesting question is: why do we think otherwise? Why do we think that as the chain of consecutive heads gets longer, the probability of getting another head goes down? It could be because we expect the coin to be fair, and a coin that never gives tails is a biased coin, but that doesn't really mean anything in terms of predicting what the next toss will be. It is entirely possible for the fairest possible coin to give a million heads in a row. There is nothing stopping this from happening. Not only that, but a million heads in a row is just as likely to happen as any other random looking sequence resulting from a million coin tosses. Let's write down a sequence of heads and tails as a string of 'H's and 'T's. The sequence 'HHHHHH' is just as likely to come up as the sequence 'HHHTTH', yet we think of the first sequence as being unlikely whilst thinking of the second sequence as... what? Is the second sequence more likely to come up than the first? Would you bet more money on the second sequence than on the first? This is what I think is the root cause of the Gambler's Fallacy.
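As a quick sanity check on that claim, here is a sketch (the trial count and seed are arbitrary choices of mine) that generates many 6-toss sequences and counts how often each of the two specific sequences comes up:

```python
import random

random.seed(0)

# Each specific 6-toss sequence has probability 1/64, so out of 64,000
# trials we expect each target to come up roughly 1000 times.
targets = {'HHHHHH': 0, 'HHHTTH': 0}
num_trials = 64_000
for _ in range(num_trials):
    seq = ''.join(random.choice('HT') for _ in range(6))
    if seq in targets:
        targets[seq] += 1

print(targets)
```

Both counts land near 1000: the "unlikely" all-heads sequence comes up about as often as the "random looking" one.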

What I think happens in our mind is that we don't really predict whether the next toss will give a heads or a tails, but whether the whole sequence will turn out to be an 'interesting' looking sequence or not. We love patterns, and when we see that a random sequence generator (coin tosses in this case) is giving out a sequence that looks like a simple pattern, we become suspicious of the randomness behind the sequence generator and assume that it isn't really random. We're pretty terrible at generating random sequences manually precisely because we assume that a random process will not produce things like a streak of heads. But it's not really just a streak of heads that would make us suspicious, is it? It's also, for example, a sequence of alternating heads and tails, or a long sequence of heads followed by a long sequence of tails. I think that the source of the Gambler's Fallacy is that we categorise entire sequences into interesting and uninteresting ones. As we watch a sequence being generated one toss at a time, we expect that interesting sequences of that length are much less likely to happen than uninteresting ones, which is true. This makes us expect the next toss to break the interestingness of the current sequence, which, in the case of a streak of heads, means the next toss turning out to be tails. Of course, although interesting sequences become less likely as the length grows, the probability of the pattern breaking on the next toss is equal to the probability of it continuing, so the expectation is still incorrect.

So what is the probability of an interesting sequence? This is a subjective question, but we can still try to investigate it. Ideally, a survey would be conducted on a population of people to check what is considered interesting on average. But that would be weird, so I'll just conduct the survey on myself. I have generated all the possible sequences of 4 coin tosses and annotated each sequence according to what I find interesting in it. I also did the same for 6 coin tosses, to see how the ratio of interesting to uninteresting sequences changes with sequence length.

Here is what I find interesting in sequences of 4 coin tosses.

TTTT | same |
TTTH | not interesting |
TTHT | not interesting |
TTHH | switched |
THTT | not interesting |
THTH | repeated |
THHT | mirrored |
THHH | not interesting |
HTTT | not interesting |
HTTH | mirrored |
HTHT | repeated |
HTHH | not interesting |
HHTT | switched |
HHTH | not interesting |
HHHT | not interesting |
HHHH | same |

And here is what I find interesting in sequences of 6 coin tosses (organised into two columns to reduce vertical space).

TTTTTT | same | TTTTTH | not interesting |
TTTTHT | not interesting | TTTTHH | not interesting |
TTTHTT | not interesting | TTTHTH | not interesting |
TTTHHT | not interesting | TTTHHH | switched |
TTHTTT | not interesting | TTHTTH | doubled |
TTHTHT | not interesting | TTHTHH | not interesting |
TTHHTT | mirrored | TTHHTH | not interesting |
TTHHHT | not interesting | TTHHHH | not interesting |
THTTTT | not interesting | THTTTH | not interesting |
THTTHT | mirrored | THTTHH | not interesting |
THTHTT | not interesting | THTHTH | alternated |
THTHHT | not interesting | THTHHH | not interesting |
THHTTT | not interesting | THHTTH | not interesting |
THHTHT | not interesting | THHTHH | doubled |
THHHTT | not interesting | THHHTH | not interesting |
THHHHT | mirrored | THHHHH | not interesting |
HTTTTT | not interesting | HTTTTH | mirrored |
HTTTHT | not interesting | HTTTHH | not interesting |
HTTHTT | doubled | HTTHTH | not interesting |
HTTHHT | don't know | HTTHHH | not interesting |
HTHTTT | not interesting | HTHTTH | not interesting |
HTHTHT | alternated | HTHTHH | not interesting |
HTHHTT | not interesting | HTHHTH | mirrored |
HTHHHT | not interesting | HTHHHH | not interesting |
HHTTTT | not interesting | HHTTTH | not interesting |
HHTTHT | not interesting | HHTTHH | mirrored |
HHTHTT | not interesting | HHTHTH | not interesting |
HHTHHT | doubled | HHTHHH | not interesting |
HHHTTT | switched | HHHTTH | not interesting |
HHHTHT | not interesting | HHHTHH | not interesting |
HHHHTT | not interesting | HHHHTH | not interesting |
HHHHHT | not interesting | HHHHHH | same |

Among the 4 coin toss sequences, 8 out of the 16 sequences were considered interesting, meaning that you have a 50% chance of generating an interesting sequence. But among the 6 coin toss sequences, only 16 out of the 64 sequences were considered interesting (not counting the one I marked as "don't know"), meaning that you have a 25% chance of generating an interesting sequence. Now I don't know if the patterns I identified in the tables above are the only interesting patterns for any sequence length, since maybe some patterns only emerge in very long sequences, but it is safe to say that the longer the sequence, the greater the proportion of uninteresting, random looking sequences will be. Therefore, the longer the sequence, the less likely it is to be interesting. This might be what is going on when we think that the probability of getting a tails goes up as the streak of heads gets longer: we'd really be comparing how likely it is for a sequence of a given length to be uninteresting, which becomes less likely as the length grows.
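The annotations above can be approximated with a few mechanical rules, which lets us extend the count to longer sequences. The sketch below is my own attempt at formalising the tables: "repeated" (4 tosses) and "doubled" (6 tosses) are treated as the same first-half-equals-second-half rule, the categories overlap so the first matching rule wins (which is why a couple of labels differ from the tables, e.g. THTTHT is both doubled and mirrored), and the one sequence I marked "don't know" comes out as not interesting:

```python
from itertools import product

def classify(seq):
    """Classify a coin-toss string of 'H'/'T' into one of the pattern
    categories used in the tables above, or 'not interesting'.
    Categories overlap, so the first matching rule wins."""
    n = len(seq)
    if len(set(seq)) == 1:
        return 'same'                      # e.g. HHHH
    if n % 2 == 0 and seq[:n // 2] == seq[n // 2:]:
        return 'doubled'                   # e.g. TTHTTH ("repeated" for 4 tosses)
    if all(seq[i] != seq[i + 1] for i in range(n - 1)):
        return 'alternated'                # e.g. THTHTH
    if seq == seq[::-1]:
        return 'mirrored'                  # palindrome, e.g. HTTH
    if n % 2 == 0 and len(set(seq[:n // 2])) == 1 and len(set(seq[n // 2:])) == 1:
        return 'switched'                  # e.g. HHHTTT
    return 'not interesting'

def interesting_fraction(n):
    seqs = [''.join(p) for p in product('HT', repeat=n)]
    hits = sum(classify(s) != 'not interesting' for s in seqs)
    return hits, len(seqs)

for n in (4, 6, 8, 10):
    hits, total = interesting_fraction(n)
    print(n, hits, total, round(hits / total, 4))
```

These rules reproduce the counts from the tables (8 of 16 for length 4, 16 of 64 for length 6), and the interesting fraction keeps shrinking as the sequences get longer.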

Interesting patterns are interesting

I have to admit that whilst annotating interesting sequences I was more interested in being consistent in my reasoning than in saying what I genuinely found interesting. This is because I didn't actually feel that, for example, all "repeated" patterns were interesting, just some of them. But I wanted to avoid explaining myself too much based on what I felt. This means that there might be more going on in feeling that a sequence is aesthetically pleasing than simple mathematical patterns. It would be interesting if someone ran an experiment where people are monitored with an EEG to see what is really considered interesting and what is not. Who knows, maybe it might even turn out to be a kind of Rorschach test, where different patterns of interest indicate different personalities. And maybe it might even be an interesting machine learning challenge to try to make a program predict which sequences would be considered interesting by humans, in order to quantify how interesting a given sequence is.