Will Generative AI products debut in Hollywood?
Sam Altman’s been busy.
OpenAI has been making rounds in Hollywood. This highlights the push I mentioned in a previous article.
There’s a major push by OpenAI to expand into new markets - and new industries.
The media industry is a blue ocean waiting to be sailed.
Today, we cover:
Why Hollywood?
What’s the roadmap for GenAI in media?
What are potential business models?
GenAI in Hollywood is coming. Other use cases and business models will follow early adopters.
Let’s explore this a bit.
Why Hollywood?
Starting with Hollywood is smart.
It taps into an influential industry looking for new storytelling and workflow tools. Hollywood has the influence to drive mass adoption - far beyond US shores.
When Hollywood utilizes GenAI, it garners significant media attention. That drives curiosity, interest, and willingness to try it out. Better yet? There's immediate proof of concept.
The proof of concept is before our eyes. So there’s more immediate buy-in. For both studios and OpenAI? That’s a powerful credibility and advertising boost - with a quality product.
Quality products make other industries sit up and notice. That generates enthusiasm for widespread adoption.
But adoption won’t be overnight. It's a gradual process requiring patience and strategic implementation.
Implementation means thinking about mass adoption - it affects business models and processes.
One commentator makes a great point about this: Just like cars took decades to replace horses, Hollywood is likely to adopt generative AI gradually. It will start in studios with resources, then spread as it proves its value.
As Hollywood begins to see the practical benefits of generative AI? Its use will expand. Smaller test projects will pave the way. They show the tech's potential to improve storytelling, film, and production workflows.
The roadmap for the media industry using GenAI will start small. Then scale up.
But what does that roadmap look like?
What’s the roadmap for GenAI in media?
OpenAI is trying to drive film and media adoption. That is clear. What is not so clear is the time frame.
I’ve talked to a few media industry friends about this, and we came up with a rough roadmap:
First Stage: Proofs of Concept
We will likely see commercials and shorts as the first proofs of concept. Their potential to generate excitement and drive investment in future projects? Very high. We’ll likely see this in two areas:
Commercials. An AI label these days generates excitement. It gives a perception of novelty - AI product or not. This can drive sales and boost reputation. It captures the imagination of potential customers. It builds a perception that a company is cutting edge. Coke was the first boat into this blue ocean.
Shorts. They are a sandbox. Designers, animators, and studios will test out generative AI tools. They’ll also test out processes and estimate costs. Risk and cost are smaller in these environments. The vertical video format of IG and TikTok is extremely popular too. Shorts can drive adoption far more easily than in the past.
Any time you have new tech? You have to test with trial and error. The first proofs of concept will be in commercials and shorts.
Second Stage: Process Augmentation
Augmenting and enhancing workflows will be next. Once commercials and shorts have proven consistent value and revenue? Process augmentation.
These won’t be end-to-end media quite yet. More mundane parts of the media process will be generative. These may include:
Storyboard Prototyping: Tools assist designers in creating frameworks and wireframes for initial storyboards. These may even include visual and text suggestions from text or image prompts.
Asset Creation: AI is used to generate background images, textures, and even characters. Artists need less time and effort for these tasks.
Animation and Visual Effects: Simple animation sequences and VFX can be automated or semi-automated. This streamlines post-production workflows.
Voice Synthesis: Synthetic voice tracks for characters or narration. Real voices can be used to create synthetic models.
The human is very much in the loop in this process. These tools set the foundation. But the designer would then build the rest themselves. The workflow is enhanced, but the creativity and how it’s expressed? That belongs to a human.
Third Stage: Full-Scale Integration
At this stage, AI becomes a core part of media creation. The human is still in the loop - but the AI agent takes a larger role. The AI agent ends up a junior partner that completes small tasks.
Creative Collaboration: AI and creatives work together closely, expanding the limits of storytelling and design.
Scene and Episode Generation: AI can now craft entire scenes or episodes, maintaining coherence and emotional depth.
Personalized Content: Viewers get content tailored in real-time to their preferences, making media more interactive.
Accessibility: High-quality content creation tools become accessible to everyone, leading to more diverse stories and voices in media.
This phase marks a shift towards AI-driven media that's more immersive, interactive, and inclusive.
What are potential business models?
There’s a major market emerging: short-form media.
There's a clear trend toward shorter-form media content, which is picking up pace rapidly. It's a blue ocean ripe for product innovation, driven by fast-paced content consumption.
Deloitte in their annual 2024 Digital Media Trends survey found:
Generative AI is an emerging market for Gen Z and Millennials. Businesses can expand their future reach and influence.
What types of business models targeting Gen Z and Millennials could we create? Here’s two interesting ones that popped up in my discussion with media friends:
Generative AI Advertising Platforms.
Campaign Packages. This model will provide a package of services for the entire ad campaign process. It will include AI-generated ad creation, A/B testing, and analytics. You can set pricing based on the campaign's size. This includes its duration and the platforms it aims to reach.
Custom Ad Creation. Brands seek unique, high-quality ads. You could offer a service to create them. This would involve closer collaboration with the brand. The goal is to make and improve AI-generated ads that match their strategy and brand.
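The A/B testing piece of a campaign package can be sketched in a few lines. This is a minimal illustration with made-up impression and click numbers, not real campaign data:

```python
# Minimal sketch of A/B testing two AI-generated ad variants.
# All impression/click counts are hypothetical, illustrative data.

def conversion_rate(clicks: int, impressions: int) -> float:
    """Fraction of impressions that converted to clicks."""
    return clicks / impressions

# Hypothetical campaign results for two generated ad variants
variant_a = {"impressions": 10_000, "clicks": 320}
variant_b = {"impressions": 10_000, "clicks": 410}

rate_a = conversion_rate(variant_a["clicks"], variant_a["impressions"])
rate_b = conversion_rate(variant_b["clicks"], variant_b["impressions"])

# Pick the better-performing variant to keep in rotation
winner = "A" if rate_a > rate_b else "B"
print(f"A: {rate_a:.2%}, B: {rate_b:.2%} -> ship variant {winner}")
```

A real platform would layer statistical significance testing and per-channel analytics on top, but the core loop - generate variants, measure, keep the winner - is this simple.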
Licensed Synthetic Voice Models.
Licensing Agreements. Celebrities and voice actors could make licensing deals with AI companies. Companies would create synthetic models of their voices. The agreements would detail the terms of use in various types of media. They would also note the duration of use and any content limits.
Tiered Pricing Structure. We could use a tiered pricing model. Basic usage, like short voice clips for personal use, would have a lower fee. Commercial use in advertising or entertainment would have a higher fee. Special projects, like video games or animated features, could have custom pricing. The price will depend on the size and visibility of the project.
Revenue Sharing. Revenue-sharing models could be attractive. The voice talent gets a percentage of profits from the use of their synthetic voice. This would provide ongoing income. It would also motivate voice actors to use their synthetic voices in projects.
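Combining the tiered fees with a revenue share is simple arithmetic. Here's a back-of-the-envelope sketch - every tier name, fee, and rate below is a hypothetical assumption, not a real market figure:

```python
# Hypothetical sketch: tiered license fee plus revenue share for a
# synthetic voice model. All tiers, fees, and rates are illustrative
# assumptions, not real pricing.

# Flat license fee per usage tier
TIER_FEES = {
    "personal": 50,       # short voice clips for personal use
    "commercial": 5_000,  # advertising or entertainment use
    "custom": 25_000,     # e.g. a video game or animated feature
}

# Share of project revenue paid back to the voice talent
REVENUE_SHARE_RATE = 0.05  # 5%, illustrative

def license_cost(tier: str, project_revenue: float = 0.0) -> float:
    """Total payout to the voice talent: flat tier fee + revenue share."""
    if tier not in TIER_FEES:
        raise ValueError(f"unknown tier: {tier}")
    return TIER_FEES[tier] + REVENUE_SHARE_RATE * project_revenue

# A commercial campaign earning $200,000 would owe
# 5,000 + 0.05 * 200,000 = 15,000
print(license_cost("commercial", 200_000))  # 15000.0
```

The revenue-share term is what makes the deal attractive to voice talent: the flat fee covers the license up front, while the percentage gives them ongoing income as a project succeeds.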
These are by no means comprehensive. But they do show possible business models for short-form media.
GenAI can quickly produce short, engaging products and marketing, delivering value with fewer resources.
Up Next
I’ve been traveling a bit while writing this, so thanks for bearing with me.
My first course for LinkedIn Learning is coming out next week, April 1st. Be on the lookout for it. I’ll be sending out a link too.
I’m also finishing up a foundational course for LinkedIn Learning on AI Strategy. I can’t wait to share it.
On top of that, I’m hard at work on:
How the 5 V’s of data affect your AI strategy
Breaking down barriers in AI strategy: How to overcome friction.
I’ll be updating this as time allows.
See you next time! 🍻