It’s taken me several days to gather my thoughts on the subject of computer algorithms being used to analyze screenplays for the right DNA that spawns a hit movie. That’s the focus of this article in the New York Times about Worldwide Motion Picture Group and its CEO/”mad scientist” Vinny Bruzzese. Like the writers and film professionals interviewed for the article, my reaction is mixed. At first blush, of course, words like sacrilege and abomination come to mind and then give way to feisty paragraphs about the humanity in the craft, the beauty in uncertainty synthesized through each writer’s soul and unique voice. Naturally, I do believe all that and have even seen it manifest on screen — but not always.
There's no avoiding the truth: every criticism one might sling at the notion of computer analysis of a screenplay can just as easily apply to the longstanding human analysis that produces a tremendous volume of motion picture entertainment. To be fair, cinema, and especially American cinema, is probably the most derivative and formulaic of all popular media; and that's only sometimes a bad thing. When Pauline Kael reviewed Star Wars, her criticism was that it was composed entirely of successful scenes from other movies, and she was absolutely right — but that is also precisely why it was such a hit. Star Wars is basically every great western and war film we've ever seen set in a galaxy far, far away; and as revolutionary as its approach was for its time, its narrative and characters are equally dependent on tapping into nearly every ritual known to our subconscious film literacy.
For as long as there have been motion picture executives, there has been a persistent faith in the ability to crack the code for a hit movie. And for as long as there have been great filmmakers, there has been an understanding (often unspoken) that such a code is a figment of wishful thinking. It should be no surprise of course to find computer scientists insisting that indeed such a code exists and that it can be understood if we lift the fog of human, let alone writerly, emotion from the analysis. Meanwhile, there is no denying that throughout film history, many surefire hits have flopped like suffocating mackerels on fishing trawlers, and many risky bets have redefined the medium. Among the latter, of course, is Star Wars.
Today, the industry is far more bifurcated than it was in the 1970s. Hollywood studios produce almost exclusively "safe bets" in the form of $100 million blockbusters, while independents of varying size raise relative drippings to produce a much broader range of fare, still mostly operating on human instinct. Studio films, which must certainly be described as formulaic, continue to yield a mixed bag of finished products that run the gamut in my opinion from quite good to really, really not. I thought, for instance, that the first Iron Man was very solid within the context of an action comic-book movie; that Green Lantern was forgettable; and that Sherlock Holmes, which banks on many of the elements that work in Iron Man, was also soporific. Regardless of my opinions, though, Holmes and Iron Man both grossed about a half-billion dollars while Green Lantern barely broke even on its $200 million budget. All of these films are based on what we can call formulaic scripts, so where might computer analysis have played a role in predicting success or failure? One might be tempted to say the winning ingredient in this data set is Robert Downey, Jr., which would be a reasonable assumption; and there's no question that stars bring in the investment. Even I went to see Holmes, fully expecting not to like it, solely because Downey was playing the lead.
So, if there are 20 million or so viewers out there just like me, producers can analyze the scripts all they want; we're still ponying up the price of a ticket to see a performer we like in a classic role just out of curiosity. Meanwhile, I very much doubt script analysis alone could have predicted the financial success of Holmes and Iron Man or necessarily the failure of Green Lantern. Any filmmaker knows that the action and structure on paper represent only the barest of bones for the finished film. Guy Ritchie didn't bring Sherlock Holmes to life in a way that worked for me personally, but it clearly worked for plenty of fans; and Bruzzese's analysis cannot see the production design or style of shooting or cutting or even Robert Downey Jr.'s insouciant charm. And it is the combination of these and other disparate elements, all wrangled by a team of professionals making dozens of choices a day, that makes hit movies. This is true whether they're huge spectacles or tiny glimpses into a single moment in a life.
But I'm avoiding the ontological question. Is Mr. Bruzzese's magic machine a relatively benign tool for certain film producers to do more of what they're already doing, or is it yet another step toward removing the humanity from the creative process? It's hard to say in this case whether this technology is truly disruptive or just another false idol for executives seeking the elusive promised land of the sure thing. Most of the films that I and like-minded viewers consider great barely register in the world of "blockbuster hits," and I expect these works will continue to be produced, warts and all, without the aid of algorithmic analysis. Meanwhile, if major producers want to spend many thousands of dollars to discover, as I predict, that hit-making is still a crapshoot, so be it.
I asked my friend, screenwriter Craig Fernandez, for his take on the whole thing, and his response sums it up well…
A lot of what passes as screenwriting in Hollywood is by the numbers/work by committee, but not work worth watching, not work that will ever be remembered, not work that begins with a broken person sitting at a typewriter telling a story that was telling itself. If I may paraphrase Mark Twain, the difference between a script written by an invested writer and one written by an algorithm is ‘the difference between lightning and a lightning bug.’
It’s interesting that Fernandez describes the writer as a “broken person.” In so many ways, art is about wrestling with something that is fundamentally flawed in us, and this is an endeavor that neither computers nor many executives understand.
Such things are very much a moving target, one that trends with many aspects of daily life, culture, and the news of the day.
Movies, like music, MUST have certain aspects of familiarity to be any 'good' to a wide audience.
For instance:
How can you have a dramatic "twist" to a plot if there's nothing to 'twist' off of? If there's no base of familiarity, the audience gets easily lost. Now, you could rightfully say that there is too much catering to the lowest common denominator, and I would agree… and if computers spit out screenplays, you can bet your ass that it would be this kind of non-challenging-to-the-mind drivel. I can see nothing monumental coming from such things. As long as computers are told what the human condition is, and can't 'feel' it for themselves, screenwriters don't have much to fear, IMO.
P.S. woohoo! we got a ‘two-fer’ today! Thanks Dave!
Thanks, James. It’s just because I’m playing catch-up. Always.
Algorithms will first predict, then write, hit songs before they can do movies. But I don’t doubt it’s coming. Hit movies and great movies, as you point out, are often different things. Great movies will take longer….
That it is likely to happen is hard to argue, but it’s possible that nobody will care. Did you see my piece about the robot band? I remain confident that when we remove the human from the art, the humans will no longer care. Or perhaps we’re talking about a point far enough in the future that it’s all a transhuman world. As long as there’s scotch.
Hasn’t that been predicted (and made) years ago?
My personal prediction: I’ll be listening to the first algorithmically written hits when I’m zooming around in my cold-fusion-powered flying car.
So is this blog post about computers making screenplays (what the title and James's comment seem to allude to) or computers judging the quality of screenplays? Because one problem is significantly harder than the other.
Actually, the technology being discussed uses data learned from hit films and analyzes screenplays in development, along with some other metrics, with the goal of producing a hit film. So, there are a few issues this raises — the inevitable (as Cat says) production of a script by a computer, but also the fact that a hit film results from many more elements than just the data one can read from a screenplay. A good film, of course, is an entirely different matter.
The second one is just a specialization of the document classification problem, which is itself a specialization of the supervised learning problem, for which there are a whole lot of very good approaches (support vector machines, Bayesian networks, neural networks, random forests, etc.).
In fact, if you give me a dataset mapping screenplays to their gross revenue, I can probably design you an algorithm specific to this problem, implementing one of the above approaches (incl. some feature reduction algorithms before the data hits the learner), that will probably predict the success of any new screenplay better than chance. I say probably, because I'd have to try it first to give you a definitive answer. Fortunately, with cross validation you can judge the quality of an algorithm without even having to test it on new screenplays (with possible errors due to statistical variance, but not very high if you have a lot of data to work with).
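Just to make that concrete, here is the rough shape of what I mean: a minimal sketch in Python with scikit-learn (my choice of tooling, not anything the article says Bruzzese actually uses). The screenplay snippets and "hit" labels below are invented; a real run would use full scripts paired with revenue data.

```python
# Toy sketch: word features -> feature reduction -> SVM classifier,
# with cross-validation to estimate accuracy without new screenplays.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectPercentile, chi2
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

# Hypothetical corpus: in reality, full screenplay texts.
screenplays = [
    "INT. SPACESHIP - NIGHT. The hero stares at the blinking console.",
    "EXT. DESERT - DAY. Two outlaws argue over the last canteen of water.",
    "INT. COURTROOM - DAY. The lawyer paces before a silent jury.",
    "EXT. CITY ROOFTOP - NIGHT. The masked vigilante watches the streets below.",
    "INT. FARMHOUSE KITCHEN - MORNING. A family eats breakfast in silence.",
    "EXT. BATTLEFIELD - DAWN. Soldiers crouch behind a ruined wall.",
]
hit = [1, 0, 0, 1, 0, 1]  # 1 = grossed above some chosen threshold, 0 = did not

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(stop_words="english")),    # split into words, weight by frequency
    ("reduce", SelectPercentile(chi2, percentile=50)),    # crude feature reduction before the learner
    ("clf", LinearSVC()),                                 # a support vector machine
])

# Cross-validation judges the model without holding out brand-new screenplays.
# With six toy documents the number is meaningless, but the mechanics are the same.
scores = cross_val_score(pipeline, screenplays, hit, cv=3)
print("estimated accuracy: %.2f (+/- %.2f)" % (scores.mean(), scores.std()))
```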
I wouldn't know how to begin algorithmically generating an entire movie from scratch, however (generating music is far more tractable in this regard, because the problem has been well studied). I suppose you could try to "brute force" it with some kind of neural network with an insane number of neurons (on the order of trillions) and train it on tons and tons of data using cutting-edge deep learning techniques, but you'd probably need more computational power than exists on this planet to actually run something like that.
Basically, what I am getting at is that it's a MUCH BIGGER jump to go from document classification to something generating an entire coherent movie from scratch.
Of course it is, and I can think of so many better uses for the technology. As stated, the difference between the human analysis used to produce a "hit" film and computer analysis is probably negligible. Neither computers nor many studio execs actually know how to make a movie, so the analysis is equally relevant or meaningless case by case. Often, it's likely to be a matter of computers helping movies suck faster. None of this will produce Marjane Satrapi's "Persepolis," which is about as good an example of a singular human film product as I can think of.
I'm not sure what you are trying to say here. You have a set of features and a class label (which can be a binary decision between "movie sucks" and "movie doesn't suck"). You train a learning algorithm on the features and class labels and build a model of the data from that. Afterward, you can give it unlabeled features and it will be able to predict the class label (whether the movie will suck or not). Classification learning algorithms are very, very advanced these days, so advanced that it's incredibly difficult for a human to even reason about how the algorithm came up with any given decision (we're talking about levels and levels of correlations). They definitely beat any human at classification if you pick the right features (but there are also automatic feature selection algorithms; that's the big focus of research these days).
Of course, all this requires you to define what constitutes a sucky movie, at the minimum.
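And by "define" I mean nothing fancy; in practice the label is just a rule you pick. A hypothetical example in Python, with invented figures, where "sucks" means the film grossed less than its production budget:

```python
# Hypothetical labeling rule: the definition, not the algorithm, decides what "sucks" means.
def label(gross_millions, budget_millions):
    return 0 if gross_millions < budget_millions else 1  # 0 = "sucks" (flop), 1 = "doesn't suck"

# Invented numbers for illustration only.
films = [("Film A", 520.0, 110.0), ("Film B", 220.0, 200.0), ("Film C", 95.0, 150.0)]
y = [label(gross, budget) for _, gross, budget in films]
print(y)  # [1, 1, 0]
```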
There’s nothing binary about “movie sucks” or “movie doesn’t suck.” One man’s meat, and all that. In the post, I reference three Hollywood movies — one that I and the masses liked; one that neither I nor the masses liked; and one that I didn’t like, the masses did like, but I went to see anyway. That doesn’t even get into the films I personally think are brilliant, none of which fit into the category of major studio films, but are also not universally popular worldwide among 15-28 year-olds. There is no doubt an algorithm could analyze certain data and predict probability of a “hit,” but the entire enterprise is a waste of time and will never be applied by artists who actually want to say something of value. Many great films deal with subjects studios wouldn’t touch because they’re painful or controversial (i.e. not popular), but they are important, brilliant, even flawed but solid work. So, these analyses remain relevant only to people already “writing” films by committee and taking no risks. While these represent the biggest films in terms of money, they are the minority of production overall.
How do you get a computer to understand 'subtlety'? How does a program discern a joke? What's the programming-language equivalent of poor taste? How do you show it the lines, and how to stay on this side of them?
And are we talking just screenplay? Or are we getting into filming (&/or animation) too? [Conception to execution] That opens up a can of worms so big as to be pointless.
And as David said, wouldn't there be a thousand better uses of one's programming time, off the top of anyone's head?
But it can learn. All those examples you gave are just less abstract representations of a general learning problem.
I know what a computer can do pretty well: when given a set of feature vector and class label pairs {X, y}, generate a predictive model h(x) that returns, for any future feature vector X', the associated true class label y' with minimum error. I.e., given a set of {X, y}, generate h(x) such that h(X') = y' + ϵ (ϵ being the error term, which all supervised learning algorithms seek to minimize).
In the past 10-20 years especially, CS folks have developed algorithms that are crazy good at accomplishing the above. Humans are still generally better at feature selection, though, so that's where most of the research is these days. But you'd be surprised at how good just splitting a document into words and running some simple frequency analysis can be for developing a workable feature vector and answering any weird philosophical categorization questions you want.
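A toy illustration of that "split it into words and count them" idea, in plain Python (the scene text is invented):

```python
import re
from collections import Counter

def word_frequency_features(text):
    """Lowercase the text, split it into words, and count occurrences: a crude bag-of-words vector."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words)

scene = "EXT. GALAXY - NIGHT. The smuggler smiles. The princess does not smile back."
print(word_frequency_features(scene))
# Counter({'the': 2, 'ext': 1, 'galaxy': 1, ...}) -- each document becomes a sparse vector of counts
```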
I’m not surprised in the least. But just as with computer/robot musicians, it is a pointless endeavor unless we’re interested in art that synthesizes the robot condition. While that makes for interesting philosophical rumination, it is in practical terms on the other side of the event horizon where we are quite possibly extinct.
David,
Now, I don't see how it's pointless in any way. I'm talking about the classification problem. This is not a general AI thing, but a specific application of machine learning to classify instances of data. This is something that computers are really good at, today. You can thank the last 15 years of CS/ML research for coming up with some amazingly good classification/regression algorithms. I was just giving the abstract definition because it doesn't matter what X and y are. CS, like mathematics, uses abstract representations and variables to convey concepts that are general; the classification problem is the same regardless of what meaning you assign to X or y.
X in this case can be the attributes of a screenplay, and y could be whether the movie makes over $50 million or not. If you give me a whole bunch of {X, y} pairs, I can generate a predictive model [that's the h(x)] that returns a y' given an X' (subject to a margin of error). h(x) is a function in the hypothesis space of possible functions that produce y'; classification algorithms strive to produce an h(x) that minimizes the expected error term.
That is, tell you whether a movie is likely to make $50 million given the attributes of the screenplay. Whether this will work depends on whether there is a separating hyperplane in some combination of attributes, but modern machine learning algorithms can often find useful separating hyperplanes in very complicated and noisy data. However, it's possible to have no separating hyperplane. If I generate {X, y} pairs from purely random labels, for instance, nothing distinctive can be "learned" from the data, and the algorithm will do no better than chance.
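A quick sanity check of that last point, assuming numpy and scikit-learn and purely synthetic data (nothing to do with real screenplays): when the label actually depends on the features, a linear classifier finds the hyperplane; when the label is random noise, it falls back to roughly coin-flip accuracy.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 20))                   # 400 fake "screenplays", 20 numeric attributes

y_signal = (X[:, 0] + X[:, 1] > 0).astype(int)   # label depends on the features: linearly separable
y_noise = rng.integers(0, 2, size=400)           # label is a coin flip: nothing to learn

print("separable:", cross_val_score(LinearSVC(), X, y_signal, cv=5).mean())  # close to 1.0
print("random:   ", cross_val_score(LinearSVC(), X, y_noise, cv=5).mean())   # close to 0.5
```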
This is not the same thing as producing a movie from scratch. That's more of a general AI problem that is in the realm of genetic algorithms [novelty search] and neural networks (neural networks trained by deep learning algorithms in particular); it's a whole lot harder, but it is an area of extremely active research (IMO, most ML research is trying to crack this problem).
Also, the cutting-edge deep learning techniques are so computationally and data expensive that current computers can't really run them to their fullest potential, so it might require inventing new techniques for deep learning and novelty search that are less computationally expensive, or simply building faster computers.
Again, M. I understand how. I merely assert that there’s no value in doing it. In short, who cares? I don’t mean that to be flip. I’m being quite literal.
Seriously? Machine learning is all over the place (finance, medicine, manufacturing, science/engineering, and apparently film). You know, I'd be more interested in finding some industry that doesn't have any application for machine learning…
If the subject is AI created art, then I’m serious. I’m not dismissing the entire science or all applications.
Why would an AI make art? You might have to ask the AI that question to get a real answer, but I assume the answer would be something along the lines of "because I can." 🙂
Agreed. Assuming this conversation can take place because it may be on the other side of the event horizon.
Why do I picture in my head the computer spitting out a jumble of the most popular phrases and a mash-up of the most popular scenes, as determined by the algorithm? There needs to be a human to make sure the output makes any sense, and that the resulting muddle takes the audience on a worthwhile journey.
Anyone can take the best scenes out of all movies and smash them together… but that would not make for a very good movie. Not one I would watch with any satisfaction, that is.