The usual starting point for analyzing these kinds of strategic interactions is to consider what would happen if all parties were completely rational. It might seem intuitively obvious that movie studios will send out their better-quality movies to be reviewed, but not send out their lower-quality movies. However, it turns out that if all parties are fully rational, movie studios would release every movie for review. The authors explain the underlying mathematical game theory by offering an illustration along these lines.
Say that the quality of a movie can be measured on a scale between 0 and 100. Now say that studios decide that they will only release their better movies for review: for example, the studios might decide not to release for review any movie with quality below 50. In this situation, when moviegoers see that a movie has not been released for review, they will infer that its quality lies somewhere between 0 and 50, which (if quality is evenly spread across that range) works out to an average value of 25.
But if consumers are going to assume that all unreviewed movies have a quality value of 25, then it makes sense for the movie studios to release for review all movies with quality higher than 25, because they lose profits whenever a movie with a true quality of, say, 40 is treated by moviegoers as a 25.
Now movie studios are releasing for review all movies with a quality score over 25, and moviegoers will assume that the remaining movies are between 0 and 25, or an average of 12.5. Given these expectations, it pays for the studios to release for review all movies with a quality score above 12.5, so that they don't face a situation where a movie with a true quality score of 20 is treated by moviegoers as a 12.5.
However, now consumers will assume that all unreviewed movies have a quality value below 12.5. And as this cycle of inference and counterinference continues, eventually the movie studios will release all movies (except perhaps the single worst movie, which consumers will then know is the single worst movie) for review.
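To see how quickly this unraveling plays out, here is a minimal sketch in Python, assuming (purely for illustration) that movie quality is evenly distributed on the 0-to-100 scale, so that consumers infer the midpoint of whatever range is being withheld:

```python
# A minimal sketch of the unraveling argument. Assumptions (not from the
# paper): quality is uniformly distributed on [0, 100], so the average of any
# withheld range [0, t] is t/2, and studios respond to consumer beliefs each round.

threshold = 50.0  # studios initially withhold any movie with quality below 50
for step in range(1, 11):
    inferred = threshold / 2  # consumers' inference about an unreviewed movie
    print(f"round {step}: withhold below {threshold:.3f}, "
          f"consumers assume unreviewed quality is {inferred:.3f}")
    # Every movie better than the inferred value is now worth releasing for
    # review, so the withholding threshold falls to the inferred value.
    threshold = inferred

# The threshold halves each round (50, 25, 12.5, ...), so in the limit every
# movie except the very worst is released for review.
```

The cutoff halves every round, which is why the fully rational prediction is that essentially every movie gets screened for critics.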
After identifying the purely rational outcome, the next step in this kind of analysis is to look at the underlying assumptions, and to think about which assumptions are most likely to be violated in this setting. The authors emphasize two such assumptions: 1) Consumers are always aware when movies haven't been released for review; and 2) Consumers draw fully rational conclusions when a movie isn't released for review. Of course, in the real world neither of these assumptions holds true. The authors find that of the 1,414 movies that had wide release in the U.S. market from 2000 through 2009, about 11% were not released for review, and that share has been higher in recent years.
Movie-goers who often don't notice that a movie hasn't been released for review, or who don't draw the rational inference when that happens, are likely to end up going to low-quality movies they would not otherwise have attended. As a result, they are more likely to be disappointed in their movie experience when going to an unreviewed movie than to a movie that was released for review. The authors set out to test whether this implication holds true.
To measure what critics think of the quality of a movie, they use data from Metacritic.com, a website that pulls together and averages the ratings of more than 30 movie critics from newspapers, magazines, and websites. To measure what audiences think of a movie, they look at user reviews at the Internet Movie Database (IMDB). They plot a graph with the movie critic ratings on the horizontal axis and the movie-watcher ratings on the vertical axis. Movies that were released for review are solid dots; movies that had a "cold open," meaning they were not screened for critics before release (although they were reviewed later), are hollow dots. Here is the graph:
What patterns emerge here?
1) Notice that the dots form a generally upward-sloping pattern, which tells you that when the critics tend to rate a movie more highly (on the horizontal axis), moviegoers also tend to rate the movie more highly (on the vertical axis).
2) Cold-opened movies, the hollow dots, tend to have lower quality. "No cold-opened movie has a metacritic rating higher than 67. The average rating for those movies is 30, 17 points below the sample average of 47."
3) The darker straight line is the best-fit line for movies that were screened in advance; the lighter straight line is the best-fit line for movies that were not. The lighter line lies below the darker line. In other words, for a movie of a given quality level as judged by the critics, audiences rate it higher when it was reviewed in advance than when it was cold-opened. This pattern suggests that reviews help people sort out which movies they would prefer to see, and that without reviews, people are more likely to end up disappointed.
After doing statistical calculations to adjust for factors such as whether the movie features well-known stars, the size of the production budget, the movie's rating, its genre, and other characteristics, they find: "[C]old opening is correlated with a 10–30 percent increase in domestic box-office revenue, and a pattern of fan disappointment, consistent with the hypothesis that some moviegoers do not infer low quality from cold opening."
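To make the idea of "adjusting for factors" concrete, here is a rough sketch of a regression of this general form, run on simulated data. It is not the authors' specification: the variable names, the controls, and the numbers are invented for illustration only.

```python
# Hedged illustration of a cold-opening regression on SIMULATED data.
# Variable names (cold_open, star, budget, rating, genre, log_revenue) are
# hypothetical, not the authors'.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "cold_open": rng.integers(0, 2, n),                 # 1 = not screened for critics
    "star": rng.integers(0, 2, n),                      # well-known star indicator
    "budget": rng.lognormal(3.0, 1.0, n),               # production budget, $ millions
    "rating": rng.choice(["G", "PG", "PG-13", "R"], n), # MPAA rating
    "genre": rng.choice(["comedy", "drama", "horror"], n),
})
# Simulated outcome: log of domestic box-office revenue, with a built-in
# cold-open effect of about 20 percent so the regression has something to find.
df["log_revenue"] = (0.2 * df["cold_open"] + 0.5 * df["star"]
                     + 0.3 * np.log(df["budget"]) + rng.normal(0, 1, n))

# Regress log revenue on the cold-open indicator plus observable controls.
# With a log outcome, the cold_open coefficient is (roughly) the proportional
# change in revenue associated with cold opening, holding the controls fixed.
model = smf.ols(
    "log_revenue ~ cold_open + star + np.log(budget) + C(rating) + C(genre)",
    data=df,
).fit()
print(model.summary().tables[1])
```

The point of the controls is that cold-opened movies differ from other movies in many observable ways (budgets, stars, genres), so a raw comparison of revenues would mix the effect of cold opening with those differences.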
So here's some advice you can use: If you're not sure whether a movie has been released for review by critics before it was distributed, find out. If it hasn't been released for review, think twice about whether you really want to see it. Maybe you do! Or maybe you are not paying enough attention to the signal the movie studio is sending by choosing a cold opening. Here is the authors' explanation, based in part on interviews with studio executives (footnotes omitted):
"[P]roduction budgets and personnel are decided early in the process. The number of theaters which agree to show the film is contracted far in advance of any cold-opening decision. Cold-opening decisions are made after distribution contracts have been signed and according to a major distributor and studio executives “are not a part of the contract.” There are no contracted decision rights about whether to cold open or not. The cold-opening decision is almost always made late in the process. After the film is completed, there is often audience surveying and test screenings. As one senior marketing and public relations (PR) veteran put it, “If a movie is not shown to critics, a decision has been made that the film will not be well received by them … After the PR executives have seen the film, if they believe the film will be poorly reviewed, they will have a heart to heart with the marketing execs and filmmakers about the pros and cons of screening for critics. ...
"A key ingredient in this story is that executives must think some moviegoers are strategically naïve, in the sense that those moviegoers ... will not deduce from the lack of reviews that quality is lower than they think. (Otherwise, the decision to cold open would be tantamount to allowing critics to say the quality is low)."