
How Does Slated Use AI?

TL;DR: Slated relies on humans to read scripts and on AI for financial analysis and matchmaking.

Written by Mary C.
Updated today

With AI technology advancing rapidly, members of our community may be curious about how Slated leverages AI in service of its mission to help great movies get made.

Slated Analytics, founded in 2015, was one of the first companies to use artificial intelligence in the independent film finance space. We launched first-of-its-kind financial projections and an AI-fueled matchmaking system, driven by the ethos that data is not competitive with art, but can instead empower filmmakers.

From the outset, we used data to expose industry trends, debunk myths, and arm filmmakers with studio-grade tools. Our research has been published in multiple studies featured in The Hollywood Reporter, Screen Daily, IndieWire, and other outlets. It was clear to us that incorporating AI components selectively and carefully could lead to demonstrably better outcomes for filmmakers and investors alike.

But today, whether to use AI is not really the question. Instead, when and when not to use it is the whole ballgame. Our experience serving tens of thousands of filmmakers and executive producing over 70 movies serves as our guide for when Slated uses AI and when we choose to avoid it.

First, here are a few ways in which Slated currently uses AI:

  • Credit Scoring / Personal Scores / Team Score - Our custom models crunch your work history, including total produced credits, box office, and award nominations and wins, and adjust for each person's role in each project as well as recency and depreciation. (See the first sketch after this list.)

  • Financial Analysis / Financial Score - When you buy Financial Analysis, our AI- and ML-driven forecasting processes datapoints from thousands of films and compares them to your film's budget, team details, finance structure, genre, Script Score, and other factors to assign data-driven probabilities for outcomes including box office, ancillary revenues, release pattern, and suggested marketing budget.

  • Marketplace Matching - When you use our platform to connect with other people or with projects, you'll benefit from a targeted matchmaking algorithm that takes personal preferences and privacy settings into account, including members' individual scores and their desired parameters for who may contact them. (See the second sketch after this list.)

  • Third-Party Services - Like any company, Slated uses a variety of third-party services, from customer management software to server hosting, many of which utilize modern AI technologies to deliver their products. In some cases, we may recommend AI tools directly to filmmakers. For example, some of our favorite pitch deck artists use a combination of AI and human design to create show-stopping pitch decks better than anything that was possible just a few years ago.
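
To make the scoring mechanics above concrete, here is a minimal, purely illustrative sketch of how produced credits might be weighted by role and depreciated by recency. The weights, caps, and depreciation rate below are placeholder assumptions, not Slated's actual proprietary model.

```python
from dataclasses import dataclass
from datetime import date

# Placeholder role weights -- illustrative only, not Slated's real model.
ROLE_WEIGHTS = {"director": 1.0, "producer": 0.8, "writer": 0.7, "editor": 0.5}

@dataclass
class Credit:
    role: str           # e.g. "director"
    year: int           # release year of the project
    box_office: float   # worldwide gross in USD
    award_wins: int
    award_nominations: int

def credit_score(credits: list[Credit], depreciation: float = 0.95) -> float:
    """Toy weighted score: each credit contributes based on role and
    results, with older credits depreciated by recency."""
    this_year = date.today().year
    total = 0.0
    for c in credits:
        base = (
            10.0                                 # base value of a produced credit
            + 5.0 * c.award_wins
            + 2.0 * c.award_nominations
            + min(c.box_office / 1_000_000, 50)  # capped box-office bonus
        )
        age = max(this_year - c.year, 0)
        total += base * ROLE_WEIGHTS.get(c.role, 0.4) * depreciation ** age
    return round(total, 1)
```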
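
Similarly, here is a simplified sketch of the matchmaking gate described above: a connection is only surfaced when both sides' privacy settings and score preferences are satisfied. The field names and thresholds are illustrative placeholders, not our actual algorithm or data model.

```python
from dataclasses import dataclass

@dataclass
class Member:
    name: str
    score: float              # e.g. a personal or Team Score
    open_to_contact: bool     # privacy setting
    min_contact_score: float  # preference for who may reach out

def can_connect(sender: Member, recipient: Member) -> bool:
    """Both the recipient's privacy setting and their minimum-score
    preference must be satisfied before a match is surfaced."""
    return recipient.open_to_contact and sender.score >= recipient.min_contact_score

alice = Member("Alice", score=82, open_to_contact=True, min_contact_score=70)
bob = Member("Bob", score=65, open_to_contact=True, min_contact_score=50)
print(can_connect(bob, alice))  # False: Bob's 65 is below Alice's 70 threshold
print(can_connect(alice, bob))  # True
```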

We may choose to use AI in other areas in the future. For example, we may offer filmmakers script breakdowns that generate helpful materials that can be used for production and casting. (If we do, the use of AI will be opt-in and clearly indicated.)

However, we feel there is one area in which AI should NOT be used: screenplay feedback. To that end, Slated Analytics does not use generative AI to create the feedback in our Script Analysis product.

Instead, we use our signature three-reader system: we remove the cover page and have three story analysts review the script independently. Think of it like a mini poll. If one of the three readers issues a Recommend, the project will most likely achieve a 70+ Script Score, even if the other two readers pass. We do this because we've found that getting one in three professionals to love a script is a strong indication of the project's ability to draw positive responses when it's sent out. (A toy version of this rule is sketched below.)
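
As a minimal illustration of that one-in-three rule (the verdict labels and score bands here are simplifying assumptions, not our exact internal thresholds):

```python
def likely_score_band(verdicts: list[str]) -> str:
    """Aggregate three independent, blind reads: a single Recommend
    is treated as a strong positive signal."""
    assert len(verdicts) == 3, "the system uses exactly three readers"
    if "recommend" in (v.lower() for v in verdicts):
        return "likely 70+ Script Score"
    return "likely below 70"

print(likely_score_band(["Pass", "Recommend", "Pass"]))  # likely 70+ Script Score
```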

Reading every script three times is very labor intensive. For each submission, we dedicate 18 hours of work on average, generating over 10 pages of human-written analysis. In total, every set of Script Analysis requires work from five different members of our staff, including a supervisor and a manager. In many ways, Slated's Script Analysis is the most ANTI-AI option for script feedback. But in our humble opinion, it's simply the most robust and reliable method.

At least so far, a human review of the script is still the only way to predict the human response when material is sent out to producers or screened before an audience. And the data continues to back that up.

After reviewing over 10,000 projects, our published research has shown that scripts with 75+ Script Scores are 4-5 times more likely to yield Fresh reviews on Rotten Tomatoes. As far as we know, our script analysis is still the only coverage that has been proven to predict film quality prior to production.

Slated also puts its money where its mouth is by considering scripts that score 75+ for its EP VIP program. Since launching analysis in 2015, this system has produced over 70 completed feature success stories. The vast majority of those films now have overwhelmingly Fresh reviews on Rotten Tomatoes. To our knowledge, our script vetting system is still the only one of its kind, and the only one proven to predict Rotten Tomatoes ratings.

While many companies are using AI for screenplay reviews these days (and some even spit out notes that sound smart), none has passed our sniff test of providing sound feedback that would make the movie better. For that reason, Slated has doubled down on human-driven screenplay analysis.

Our process also has safeguards in place to protect against the use of generative AI in the written assessment. Here are some of the specific safeguards baked into our Script Analysis process:

  • As mentioned above, every set of Script Analysis requires work from five members of our staff. For every screenplay, three experienced story analysts spend 3+ hours each reading and writing about the material. Next, a supervisor proofreads the combined coverages and reviews them for adherence to our policies. Last, a manager does a final quality assurance spot check prior to delivery.

  • Our team does their work on a custom platform that tracks the contributions from each story analyst, as well as the supervisor and manager. We can see a transcript of digital fingerprints showing our staff's actions on each analysis. (A sketch of such an audit trail appears after this list.)

  • Since our analysis launched in 2015, we've shared hundreds of analysis samples on the public internet, most of which pre-date modern generative AI by many years. As anyone can see, our process and format have not changed materially in that time. Our published studies further create a third-party public record of our process and its outputs, against which any new analysis can be compared.

  • Some people may notice that all of our reader comments follow a similar format. That's by design. We have a very regimented approach to story analysis that requires our experts to go through an exhaustive process to arrive at their conclusions. While we give our analysts appropriate leeway to express their professional opinion, every citation, every score, and every comment is driven by a list of policies and requirements outlined in our 50+ page rubric. That's why every individual coverage has eleven sections (eight of which contain three or more citations) and speaks to similar questions about the script's story mechanics.

  • From time to time, we utilize independent AI detectors as part of our quality assurance process to verify that no generative AI was used in the creation of analysis comments.

  • Lastly, the average Slated Analytics analyst has worked at the company for between 6 and 10 years, covered over 1,800 projects, and logged more than 10,000 hours in our system. We know their writing styles so well that we can identify each analyst's work from a blind sample. Were there suddenly a marked deviation in someone's writing style, we would spot it quickly.
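
For the curious, here is a hypothetical sketch of what a single entry in an audit trail like the one described above might look like. The field names and actions are illustrative; our internal schema is not public.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AnalysisEvent:
    """One entry in a hypothetical audit trail for a coverage document."""
    analysis_id: str
    staff_role: str   # "analyst", "supervisor", or "manager"
    staff_id: str
    action: str       # e.g. "draft_saved", "proofread_complete", "qa_spot_check"
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Replaying the trail confirms every section was written in-platform by a
# tracked staff member rather than pasted in from an outside tool.
trail = [
    AnalysisEvent("scr-1042", "analyst", "a-07", "draft_saved"),
    AnalysisEvent("scr-1042", "supervisor", "s-02", "proofread_complete"),
    AnalysisEvent("scr-1042", "manager", "m-01", "qa_spot_check"),
]
```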

Our broader opinion of AI in filmmaking is that it will continue to be used by filmmakers, executives, and companies to be more efficient, and that's okay. In part, it's okay because it's not that new. Any time a writer or story analyst uses automated proofreading, for instance, that's AI. Likewise, editors now have generative AI baked into popular editing software like Adobe Premiere. These are generally good things.

To some extent, how AI is used is the choice of the artist and they will either benefit or suffer from the result. While we have some opinions about the most ethical ways to approach AI in storytelling, we also think the work will speak for itself.

In other words, from what we've seen so far, good movies are still good movies and bad movies are still bad movies. When artists use AI in unsophisticated ways, it harms the film. When they use it in smart, targeted ways, it can make the film better. We face a similar proposition in evaluating material, since we also executive produce.

However the movie is made, a filmmaker's job is to ultimately make a group of people feel something. The best predictor of how humans will feel about a movie remains how humans feel about the script. If we ever go to a movie theater and see a packed room full of robots laughing and crying, we may reconsider our approach. In the meantime, we'll continue to rely exclusively on smart humans for script feedback.
