Think back to the last time you wanted to know what other people thought of a movie you were interested in seeing. There’s a good chance that you went to rottentomatoes.com and searched for the film’s “Tomatometer” percentage score (and no, I am not referring to a fruit that’s gone bad).
For those who are unaware, Rotten Tomatoes is an aggregation website for movie and television reviews. It takes certified critic reviews from different websites, interprets each review as either good (fresh) or bad (rotten) and generates a percentage score representing the share of critics who reviewed the movie positively. So if a movie has 100 aggregated reviews and only 25 are positive, it will have a 25 percent Tomatometer score.
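To make that arithmetic concrete, here is a minimal Python sketch of the aggregation described above; the fresh/rotten verdicts are invented for illustration, and this is a toy model rather than Rotten Tomatoes’ actual pipeline.

```python
# Toy model of the Tomatometer: every review is reduced to a binary
# fresh/rotten verdict, and the score is simply the share of fresh ones.
# (Hypothetical data; not Rotten Tomatoes' actual methodology.)

def tomatometer(verdicts):
    """Return the percentage of reviews judged fresh (True)."""
    return 100 * sum(verdicts) / len(verdicts)

# 25 positive reviews out of 100, as in the example above.
reviews = [True] * 25 + [False] * 75
print(f"Tomatometer: {tomatometer(reviews):.0f}%")  # -> Tomatometer: 25%
```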
What if I told you that you are likely misinterpreting that Rotten Tomatoes score and neglecting more accurate ways to view film criticism?

The average Rotten Tomatoes user will read a 97 percent score as the mark of a near-masterpiece. “It has to be good,” they might say, or commercials will tell them, “It’s the best-reviewed movie of the year!”
In actuality, 97 percent of the critics gave it a relatively good review, which means it is entirely possible that half of those critics scored the movie a mere 6/10. While this is not the case for every film, it is not uncommon, and it causes Tomatometer scores to be skewed and misleading.
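To see how far apart the two numbers can drift, here is a hedged sketch using a made-up score distribution: 97 of 100 reviews clear the fresh bar, yet the average rating lands at a far less dazzling 6.7/10. (The data is invented, and the 6/10 fresh cutoff is an illustrative assumption, not an official rule.)

```python
# Hypothetical distribution for a "97 percent fresh" film: nearly half
# of the positive reviews are a lukewarm 6/10. (Invented data; the 6/10
# fresh cutoff is an illustrative assumption, not an official rule.)
scores = [6.0] * 48 + [7.5] * 49 + [4.0] * 3  # 100 reviews, each out of 10

fresh_pct = 100 * sum(s >= 6.0 for s in scores) / len(scores)
average = sum(scores) / len(scores)

print(f"Tomatometer:    {fresh_pct:.0f}%")   # -> 97%
print(f"Average rating: {average:.1f}/10")   # -> 6.7/10
```

A film full of 6/10 shrugs and a film full of 9/10 raves can post the exact same Tomatometer; the percentage alone cannot tell them apart.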
Research shows that when someone is told a movie is bad before seeing it, their judgment of the movie after watching it changes. To back up this claim, I will refer to C.C. Lilford’s final Communication Research project, where they set up an experiment similar to one conducted by Professors Robert O. Wyatt and David P. Badger at the University of Florida.
Students with differing background knowledge of the film’s critical reception were shown a movie (in Lilford’s experiment, 1989’s “Pink Cadillac”). Group one, which had no prior knowledge of the film’s negative Tomatometer score, gave the film more generous feedback than group two, which was shown negative reviews beforehand. Wyatt and Badger found the same result in their experiments.
Despite these flaws, Rotten Tomatoes has one advantage its users actively defend: the site is convenient. Not everyone (especially college students) can afford to see every movie and decide for themselves, so they look to Rotten Tomatoes to see if it’s worth watching.
There’s nothing wrong with that mindset, but it’s even better to have accurate data. So how can we fix this problem? How can we get a more reliable answer to whether a movie is worth watching while retaining the convenience that people love about the site? In my mind, there are two distinct solutions.
Solution number one is to pay attention to a Rotten Tomatoes statistic that nearly everyone overlooks: the average rating, printed in fine print below the Tomatometer score. The average rating runs on a scale from zero to 10, and it is a better indicator of quality than the Tomatometer.
Let’s look at how much of a difference the average rating makes. Recent Golden Globe winner “If Beale Street Could Talk” has a 95 percent fresh rating, only slightly higher than the 2018 release “Bumblebee,” which has a 92 percent rating. Compare their average ratings, however, and “Beale Street” sits at an 8.7/10 while “Bumblebee” sits at a 7/10, creating far more separation at the top. Next time you visit Rotten Tomatoes, take a look at the average rating for a more truthful answer to the question “Is it good or not?”
Finally, solution number two involves more web browsing on your part as an audience member, but I can guarantee it will be worth it in the long run: rely on only two or three critical opinions that you trust.
By that, I mean find two or three critics whose taste in movies or television consistently lines up with yours and with whom you generally agree. The hard part is now over! Whenever you are deciding whether to see a movie, you have two or three people whose opinions you trust.
No more trying to interpret data and no more comparing critical views. If the reviewers you follow say a movie is bad and not worth your time, consider weighing their opinion more heavily than an aggregated number on a website.