Monday, February 27, 2006

Thumbs Up, Thumbs Down

I would really hate to be the editor of the academic journal Quantitative Marketing & Economics right now. It's about to publish an article that is already receiving a failing grade from critics.

Marketing professors from Duke University, Florida Atlantic University and Carnegie Mellon University have co-authored a study of film criticism, and their findings have been utterly lambasted by the very people whose work they surveyed.


According to the news release put out by Duke, the study "examines the meaning of silence by professional film critics". It finds that "many film critics, faced with far too many movies to write about, tend to avoid writing reviews of bad films that they’ve seen. At the same time, a few critics, faced with the same overwhelming choice, tend to avoid reviewing good movies that they’ve watched". The researchers also released a ranking of the critics who "provide the most information about poorer movies" and the "most information about the finer flicks". Talk about a study built on a false premise.

The research utterly misses the mark, primarily because the professors apparently don't know a thing about film criticism or the assignment system in place at some of the nation's largest-circulation newspapers. It's story assignments, stupid. As Pat Saperstein, a senior editor at Variety, puts it in a letter to Romenesko: "[T]he top reviewers with the most seniority are assigned the highest-profile, most prestigious films, duh. Younger, more inexperienced reviewers get the films the top critic doesn't want -- the genre films, kids' pics and other lower-prestige fare. That's the way it works at every paper I've ever heard of -- so why isn't it mentioned in the study?"

(I have a question about methodology as well: How do they determine which films critics actually saw but didn't review -- and, to put that in perspective, 527 films were released domestically last year -- and how do they define the "information" critics provide about those films?)

And I'm sure a good number of news consumers have picked up on this as well -- a reviewer not reviewing a film does not mean a damn thing. And here's the kicker: According to the news release, "the researchers are now exploring the relationship between a movie’s critical acclaim and its box office sales. Among other things, they aim to pinpoint the critics who have the biggest impact on ticket sales."

I'm already shaking my head at their next endeavor: this sequel rests on another false premise and presumably builds upon the results of the first study -- critics don't grade for box office, they grade for quality. I'm willing to save them some grant money and predict they'll find little linkage; for example, this weekend's #1 film ("Tyler Perry's Madea's Family Reunion") wasn't even screened in advance for critics.

1 comment:

Anonymous said...

You raise some points that I missed in my comments on this research. They never quite explain how they know whether a critic saw a film in the first place.

I'm an academic, so I'm sympathetic to them to some extent, but this was (is?) a really sloppy study. Someone fell asleep when this research was being reviewed for publication.