There’s no question that digital technologies have fostered tremendous opportunity for any content creator with a dream and the passion to pursue it; but these technologies have also created a dynamic, even volatile, market with forces both great and small competing for ad dollars, investors, and sustainable viewer attention. At the same time, we hear an awful lot about how content creators must adapt to new business models, which can be both true and false, depending on exactly what that means. Certainly, we have new platforms for distribution and marketing and new ways for artists and creators to break out and build a fan base; but when it comes to building a professional career or an entertainment-based business in the digital age, are all the rules really so different, or do many of the fundamentals still apply even as we learn to use new tools?
To discuss some of these topics, I met with Dan Goodman, who co-founded Believe Entertainment Group in New York City with partner William H. Masterson III in 2010. Believe Entertainment Group is essentially a traditional financier/studio producer of original content for the digital-only market. Their slate of shows includes the popular series The LeBrons, which has just launched season two on Xbox LIVE; and a brand-new show called EpicEDM for fans of electronic dance music, which is the first studio-originated content series designed specifically for the Twitter platform.
I believe (pun intended) companies like Goodman’s represent the future of what we now call television. As the worlds of TV and the Internet continue to converge, it will be the producers who build sustainable models based on professional, quality content who will be the studios we talk about in years to come.
To learn more about Believe Entertainment Group and their shows, visit their website here.
Dan and I spoke in Believe Entertainment Group’s Manhattan office.
It’s taken me several days to gather my thoughts on the subject of computer algorithms being used to analyze screenplays for the right DNA that spawns a hit movie. That’s the focus of this article in the New York Times about Worldwide Motion Picture Group and its CEO/“mad scientist” Vinny Bruzzese. Like the writers and film professionals interviewed for the article, my reaction is mixed. At first blush, of course, words like sacrilege and abomination come to mind and then give way to feisty paragraphs about the humanity in the craft, the beauty in uncertainty synthesized through each writer’s soul and unique voice. Naturally, I do believe all that and have even seen it manifest on screen — but not always.
There’s no avoiding the truth: every criticism one might sling at the notion of computer analysis of a screenplay can just as easily apply to the longstanding human analysis that produces a tremendous volume of motion picture entertainment. To be fair, cinema, and especially American cinema, is probably the most derivative and formulaic of all popular media; and that’s only sometimes a bad thing. When Pauline Kael reviewed Star Wars, her criticism was that it was composed entirely of successful scenes from other movies, and she was absolutely right — but that is also precisely why it was such a hit. Star Wars is basically every great western and war film we’ve ever seen set in a galaxy far, far away; and as revolutionary as its approach was for its time, its narrative and characters are equally dependent on tapping into nearly every ritual known to our subconscious film literacy.
For as long as there have been motion picture executives, there has been a persistent faith in the ability to crack the code for a hit movie. And for as long as there have been great filmmakers, there has been an understanding (often unspoken) that such a code is a figment of wishful thinking. It should be no surprise of course to find computer scientists insisting that indeed such a code exists and that it can be understood if we lift the fog of human, let alone writerly, emotion from the analysis. Meanwhile, there is no denying that throughout film history, many surefire hits have flopped like suffocating mackerels on fishing trawlers, and many risky bets have redefined the medium. Among the latter, of course, is Star Wars.
Today, the industry is far more bifurcated than it was in the 1970s. Hollywood studios produce almost exclusively “safe bets” in the form of $100 million blockbusters, while independents of varying size raise relative drippings to produce a much broader range of fare, still mostly operating on human instinct. Studio films, which must certainly be described as formulaic, continue to yield a mixed bag of finished products that run the gamut in my opinion from quite good to really, really not. I thought, for instance, that the first Iron Man was very solid within the context of an action comic-book movie; that Green Lantern was forgettable; and that Sherlock Holmes, which banks on many of the elements that work in Iron Man, was also soporific. Regardless of my opinions, though, Holmes and Iron Man both grossed about a half-billion dollars while Green Lantern barely broke even on its $200 million budget. All of these films are based on what we can call formulaic scripts, so where might computer analysis have played a role in predicting success or failure? One might be tempted to say the winning ingredient in this data set is Robert Downey, Jr., which would be a reasonable assumption; and there’s no question stars bring the investments. Even I went to see Holmes, fully expecting not to like it, solely because Downey was playing the lead.
So, if there are 20 million or so viewers out there just like me, producers can analyze the scripts all they want; we’re still ponying up the price of a ticket to see a performer we like in a classic role just out of curiosity. Meanwhile, I very much doubt script analysis alone could have predicted the financial success of Holmes and Iron Man or necessarily the failure of Green Lantern. Any filmmaker knows that the action and structure on paper represent only the barest of bones for the finished film. Guy Ritchie didn’t bring Sherlock Holmes to life in a way that worked for me personally, but it clearly worked for plenty of fans; and Bruzzese’s analysis cannot see the production design or style of shooting or cutting or even Robert Downey Jr.’s insouciant charm. And it is the combination of these and other disparate elements, all wrangled by a team of professionals making dozens of choices a day, that makes hit movies. This is true whether they’re huge spectacles or tiny glimpses into a single moment in a life.
But I’m avoiding the ontological question. Is Mr. Bruzzese’s magic machine a relatively benign tool for certain film producers to do more of what they’re already doing, or is it yet another step toward removing the humanity from the creative process? It’s hard to say in this case whether this technology is truly disruptive or just another false idol for executives seeking the elusive promised land of the sure thing. Most of the films I and like-minded viewers consider great work barely register in the world of “blockbuster hits,” and I expect these works will continue to be produced, warts and all, without the aid of algorithmic analysis. Meanwhile, if major producers want to spend many thousands of dollars to discover, as I predict, that hit-making is still a crapshoot, so be it.
I asked my friend, screenwriter Craig Fernandez, for his take on the whole thing, and his response sums it up well…
A lot of what passes as screenwriting in Hollywood is by the numbers/work by committee, but not work worth watching, not work that will ever be remembered, not work that begins with a broken person sitting at a typewriter telling a story that was telling itself. If I may paraphrase Mark Twain, the difference between a script written by an invested writer and one written by an algorithm is ‘the difference between lightning and a lightning bug.’
It’s interesting that Fernandez describes the writer as a “broken person.” In so many ways, art is about wrestling with something that is fundamentally flawed in us, and this is an endeavor that neither computers nor many executives understand.
There is certainly no shortage of copyright in the news these days, and readers of this blog might wonder about my silence on subjects like the Supreme Court’s ruling in Kirtsaeng or the testimony before Congress by Register of Copyrights Maria Pallante calling for the next great overhaul of the law. For starters, when I began writing IOM, I never intended for it to overemphasize copyright as a topic; and I have stated repeatedly in posts and comments that there are plenty of sites (see blogroll) hosted by legal experts in intellectual property, which I do not presume to be. In fact, one of my ongoing criticisms of the Web is that its mechanisms tend to bring out the armchair expert on all subjects, regardless of their complexity, which invariably reduces even the most intricate matters to popular sentiment based on prejudices already held before discussion began. An illusion of discourse heading in no particular direction.
I write this blog from two main perspectives — as an artist working to navigate a changing career in the middle of tremendous upheaval and churn; and as a citizen with a measure of Socratic humility, admitting that my observations are limited and that there are always experts who know more than I about many things. I bet if I walked into my local diner and talked to the 50+ crowd, I could gather a smattering of opinions on, say, North Korea but probably receive blank stares on copyright. If I did the same thing with a bunch of local sixteen-year-olds, I might get blank stares on North Korea and an earful on the evils of copyright. Odds are, of course, few of these opinions will be grounded in quality journalism, let alone first-source expertise. Yes, the Internet makes it possible to cut through bumper-sticker politics and acquire expert information, but it’s also a great tool for repeating the bumper stickers, which is why amateurs can make a whole career out of repeating what people want to hear, regardless of substance. So it is with copyright.
If uninformed declaratives about copyright are the froth in your latte, then TechDirt is the site for you. I read Mike Masnick’s post, for instance, concerning Pallante’s testimony, and the typical blogger thing to do would be for someone like me to critique that post fallacy by fallacy; but the prospect of doing so is almost as tedious as it is futile. After all, both Masnick and I are about as expert on copyright law as we probably are on plumbing. Those opposed to strong copyright protections already agree with his post, and those in favor will agree with mine. Meanwhile, I’m betting a large segment of the American population neither knows nor cares to know about the inner workings of these laws; so I often find myself wondering about the value of us amateurs arguing via blog over some of the more fleeting and granular aspects of a legal system that will likely take several years to evolve into its next incarnation.
So, for anyone who reads this blog and is not knee-deep in the gore of the copyright battle, the big picture as I see it is this: I believe the copyright system will change over the next decade or so, but if that change is predicated too much on the self-serving premises of its tech-industry antagonists, the results for artists in particular, and for society in general, will be regressive rather than progressive. It would be like allowing the oil industry to overly influence emissions policy. “Copyright stifles innovation” is a popular meme and a cornerstone premise of the entire cabal aligned against the system, but this assertion is never supported by solid examples or data, which leads one to conclude that innovation describes what is contemporary and popular, regardless of whether or not it is economically progressive or, dare I say, fair. We generalists could boil down the details to a few fundamental questions when considering the future of copyright: Is enterprise-scale piracy innovation or exploitation? Is the right of the author a civil right or a government handout? Is copyright relevant for the individual or just a tool for big corporations?
These may be questions my kids’ generation will have to answer, but in order to do so honestly, they will need to come to terms with certain practical realities that don’t require legal scholarship. First, they’ll need to recognize that the Internet is not an extension of themselves, but a technological piece of infrastructure over which just a few corporations wield unprecedented power. Next, they’ll need to see past the selfish habit of acquiring media for free and accept that there is no such thing as an economy based on free stuff, that someone always pays and who pays makes a difference. They’ll need to recognize that no matter what they believe about big media companies and lobbyists, flesh-and-blood, independent artists and small creative businesses are experiencing tangible and measurable harm. In fact, as I write this, musician and activist David Lowery, speaking at the Canadian Music Week’s Global Forum, just said the following: “The first week our new Camper Van Beethoven album came out, I watched one seed on BitTorrent distribute more copies than we sold.” I think you have to be both daft and depraved to describe this as innovation, and this kind of spin has no business informing the future of copyright.
I was asked the other day by a gadfly baiting me on Twitter if a “win” for me would be the triumph of the RIAA and the MPAA. I don’t know what that means, and neither does the gadfly; but these implicit accusations are typical of the associative politics to which neither conservatives nor progressives are immune. Such interactions are circular, boring, and meaningless. And the hypocrisy is off the charts. I won’t pretend I’m a legal scholar, but the number of tech utopians who presume to lecture the creative community about how to make albums, motion pictures, and other works is truly staggering.
As I say, this blog was never intended to be all about copyright, and it occurs to me that part of its intent was to share observations from the perspective of developing new film projects in the current landscape. I admit that I am too easily attracted to the broad discussion, and I shall make an effort to steer this blog to be a little more film-project focused, if for no other reason than film is next and may be more vulnerable than music. It’s been a long time since Lars Ulrich was pilloried on the steps of Napster, and today we see musicians, from fairly obscure to the biggest names, coming forward to talk about artists’ rights in the digital age, and not without reason. The truth is I don’t care if I or one of my colleagues develops a new film as a self-produced project, a deal with a Netflix, a traditional studio, or an established indie production company — whatever best serves the work. But there is not one of these paths that is not founded on the right of the author to retain first choice in the process by establishing a precedent of ownership in the work. Beyond that fundamental reality are many intricate details for professionals to work out and a whole lot of amateur-hour bullshit that deserves once and for all to be moved to the fringes of the debate.
The Illusion of More is my personal blog from December 2011 to December 2025. As of February 2026, I am no longer posting new blogs or other content, but I hope you enjoy this archive. Please do not attribute any of my writings here to my current or previous employers.