Screenplays or Statistics – Can algorithms make hit films?

Photo by Volumetric

It’s taken me several days to gather my thoughts on the subject of computer algorithms being used to analyze screenplays for the right DNA that spawns a hit movie.  That’s the focus of this article in the New York Times about Worldwide Motion Picture Group and its CEO/”mad scientist” Vinny Bruzzese.  Like the writers and film professionals interviewed for the article, my reaction is mixed.  At first blush, of course, words like sacrilege and abomination come to mind and then give way to feisty paragraphs about the humanity in the craft, the beauty in uncertainty synthesized through each writer’s soul and unique voice.  Naturally, I do believe all that and have even seen it manifest on screen — but not always.

There’s no avoiding the truth:  every criticism one might sling at the notion of computer analysis of a screenplay can just as easily apply to the longstanding human analysis that produces a tremendous volume of motion picture entertainment.  To be fair, cinema, and especially American cinema, is probably the most derivative and formulaic of all popular media; and that’s only sometimes a bad thing.  When Pauline Kael reviewed Star Wars, her criticism was that it was composed entirely of successful scenes from other movies, and she was absolutely right — but that is also precisely why it was such a hit.  Star Wars is basically every great western and war film we’ve ever seen set in a galaxy far, far away; and as revolutionary as its approach was for its time, its narrative and characters are equally dependent on tapping into nearly every ritual known to our subconscious film literacy.

For as long as there have been motion picture executives, there has been a persistent faith in the ability to crack the code for a hit movie. And for as long as there have been great filmmakers, there has been an understanding (often unspoken) that such a code is a figment of wishful thinking. It should be no surprise of course to find computer scientists insisting that indeed such a code exists and that it can be understood if we lift the fog of human, let alone writerly, emotion from the analysis.  Meanwhile, there is no denying that throughout film history, many surefire hits have flopped like suffocating mackerels on fishing trawlers, and many risky bets have redefined the medium.  Among the latter, of course, is Star Wars.

Today, the industry is far more bifurcated than it was in the 1970s.  Hollywood studios produce almost exclusively “safe bets” in the form of $100 million blockbusters, while independents of varying size raise relative drippings to produce a much broader range of fare, still mostly operating on human instinct. Studio films, which must certainly be described as formulaic, continue to yield a mixed bag of finished products that run the gamut in my opinion from quite good to really, really not.  I thought, for instance, that the first Iron Man was very solid within the context of an action comic-book movie; that Green Lantern was forgettable; and that Sherlock Holmes, which banks on many of the elements that work in Iron Man, was also soporific. Regardless of my opinions, though, Holmes and Iron Man both grossed about a half-billion dollars while Green Lantern barely broke even on its $200 million budget.  All of these films are based on what we can call formulaic scripts, so where might computer analysis have played a role in predicting success or failure?  One might be tempted to say the winning ingredient in this data set is Robert Downey, Jr., which would be a reasonable assumption; and there’s no question stars bring the investments.  Even I went to see Holmes, fully expecting not to like it, solely because Downey was playing the lead.

So, if there are 20 million or so viewers out there just like me, producers can analyze the scripts all they want; we’re still ponying up the price of a ticket to see a performer we like in a classic role just out of curiosity. Meanwhile, I very much doubt script analysis alone could have predicted the financial success of Holmes and Iron Man or necessarily the failure of Green Lantern.  Any filmmaker knows that the action and structure on paper represent only the barest of bones for the finished film. Guy Ritchie didn’t bring Sherlock Holmes to life in a way that worked for me personally, but it clearly worked for plenty of fans; and Bruzzese’s analysis cannot see the production design or style of shooting or cutting or even Robert Downey Jr.’s insouciant charm. And it is the combination of these and other disparate elements, all wrangled by a team of professionals making dozens of choices a day, that makes hit movies.  This is true whether they’re huge spectacles or tiny glimpses into a single moment in a life.

But I’m avoiding the ontological question.  Is Mr. Bruzzese’s magic machine a relatively benign tool for certain film producers to do more of what they’re already doing, or is it yet another step toward removing the humanity from the creative process? It’s hard to say in this case whether this technology is truly disruptive or just another false idol for executives seeking the elusive promised land of the sure thing. Most of the films I and like-minded viewers consider great work barely register in the world of “blockbuster hits,” and I expect these works will continue to be produced, warts and all, without the aid of algorithmic analysis.  Meanwhile, if major producers want to spend many thousands of dollars to discover, as I predict, that hit-making is still a crapshoot, so be it.

I asked my friend, screenwriter Craig Fernandez, for his take on the whole thing, and his response sums it up well…

A lot of what passes as screenwriting in Hollywood is by the numbers/work by committee, but not work worth watching, not work that will ever be remembered, not work that begins with a broken person sitting at a typewriter telling a story that was telling itself.  If I may paraphrase Mark Twain, the difference between a script written by an invested writer and one written by an algorithm is ‘the difference between lightning and a lightning bug.’

It’s interesting that Fernandez describes the writer as a “broken person.” In so many ways, art is about wrestling with something that is fundamentally flawed in us, and this is an endeavor that neither computers nor many executives understand.

David Newhoff
David is an author, communications professional, and copyright advocate. After more than 20 years providing creative services and consulting in corporate communications, he shifted his attention to law and policy, beginning with advocacy of copyright and the value of creative professionals to America’s economy, core principles, and culture.
