"Pareidolia" refers to the common human practice of looking at randomness and seeing patterns. Some standard examples are when you see a basketball player make several shots in a row and interpret that as a "hot hand," not just the kind of streak that will happen every now and then among thousands of basketball players taking shots where each shot has a roughly 50:50 chance of going in. Or when you see a stock market adviser have several above-average years in a row and interpret that as evidence that future returns are likely to follow the same pattern, rather than as the kind of random streak that will happen every now and then when there are thousands of stock market advisers, each with a roughly 50:50 change of an above-average performance in any given year.
How good are you at perceiving randomness? Here's an example from Steven Pinker's 2011 book The Better Angels of Our Nature. This example and others were discussed in an article by Aatish Bhatia, "Empirical Zeal: What Does Randomness Look Like" in the December 21, 2012, issue of Wired magazine.
Consider the two panels with a scattering of points. The points in one panel are distributed randomly; the points in the other are not. Which is which?
The most common answer is that the pattern on the right is random. The pattern on the left seems to have certain gaps and clusters and curves, which you can imagine as having some underlying meaning. But given the lead-in of the discussion here, you may be unsurprised to find that the random distribution is the one on the left. The distribution on the right is actually a representation of the pattern of glow-worms on a cave ceiling. The glow-worms compete for food and thus avoid being too close to each other. The greater evenness of the spacing is actually a giveaway that some underlying process is at work. Randomness is lumpy.
This may seem counterintuitive. After all, "random" refers to an equal probability of outcomes occurring--like where points land in these panels. But an equal probability of something happening does not mean an equally spread out set of outcomes.
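A small simulation makes the point concrete. The sketch below (a rough illustration, not a reproduction of the book's panels) compares uniformly random points in a unit square with a perfectly even grid, standing in for the glow-worms' spaced-out pattern. The closest pair among the random points is typically far closer than any pair on the grid--random points clump.

```python
import math
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def min_nearest_neighbor(points):
    """Smallest distance between any pair of points."""
    return min(
        math.dist(p, q)
        for i, p in enumerate(points)
        for q in points[i + 1:]
    )

n = 100
# Uniformly random points in the unit square -- "lumpy" randomness.
random_pts = [(random.random(), random.random()) for _ in range(n)]

# A 10x10 grid: evenly spaced, like glow-worms keeping their distance.
grid_pts = [(i / 10, j / 10) for i in range(10) for j in range(10)]

print(min_nearest_neighbor(random_pts))  # small: some points cluster
print(min_nearest_neighbor(grid_pts))    # exactly 0.1: even spacing
```

With 100 random points in the square, some pairs almost always land much closer together than the grid spacing of 0.1, which is exactly the lumpiness the eye misreads as a pattern.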
As an example, imagine that you flip a coin twice. On average, you expect to get one head and one tail. Now repeat this experiment of two coin flips 100 times. If every single time out of 100 you got one head and one tail, you could be extremely confident that you were not seeing a random outcome. After all, random chance suggests that one-quarter of the time you would expect to see two heads, one-quarter of the time you would expect to see two tails, and only half the time one of each. In other words, if you don't see lumpy clusters, the odds are good that you aren't seeing randomness.
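The two-flip experiment is easy to simulate. This sketch runs it 100 times and tallies the outcomes; it also computes the chance that all 100 experiments would come up one head and one tail, which is (1/2)^100--vanishingly small.

```python
import random

random.seed(1)  # fixed seed for reproducibility

# Repeat the two-coin-flip experiment 100 times and tally outcomes.
counts = {"two heads": 0, "one of each": 0, "two tails": 0}
for _ in range(100):
    heads = sum(random.randint(0, 1) for _ in range(2))
    if heads == 2:
        counts["two heads"] += 1
    elif heads == 1:
        counts["one of each"] += 1
    else:
        counts["two tails"] += 1

print(counts)

# Probability that every one of the 100 experiments yields
# exactly one head and one tail: (1/2) per experiment, so...
p_all_even = 0.5 ** 100
print(p_all_even)  # on the order of 1e-30: effectively impossible
```

In a typical run, roughly a quarter of the experiments land on two heads and a quarter on two tails, so the "perfectly even" result that intuition expects is in fact overwhelming evidence against randomness.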
Separating what is random from what is an underlying pattern is of course the central task in figuring out what is happening in any complex system: the weather, outbreaks of disease, the path of an economy. Beware the dreaded phrase, "It can't be just a coincidence." Sometimes, it can. Many people have a degree of pareidolia, and they will tend to assume that clusters must have an explanation other than randomness. A compelling reason for a course or two in statistics is to help people harness and shape their intuitions about what constitutes evidence of randomness or pattern.