AI Comes For Hollywood
From fake actresses to Sora 2, Hollywood reaches a reckoning point
Originally posted on my CMMEdia substack newsletter - join me there as I post regularly.
Also, Pop Passport for my podcast/posts on travel and pop culture
Hollywood has always lived on the edge of technological disruption - from sound to color to digital effects. But the arrival of generative AI (text, image, video) poses not just a shift in tools but a challenge to the very identity of what it means to be a creator, performer, or studio. At stake: rights, value, authenticity - and whether humans remain central to storytelling. With Hollywood struggling and its economy at a crossroads, here comes AI to bring more slop nobody asked for.
Two flashpoints in 2025 crystallize this tension: Sora 2, OpenAI’s ambitious video-generation tool, and Tilly Norwood, a fully AI-generated “actress” whose unveiling prompted outrage from studios, unions, and artists.
Sora 2, Studio Backlash & the Creator vs. Platform Struggle
What is Sora 2?
Sora 2 is OpenAI’s next-gen text-to-video model, enabling users to generate short, hyperrealistic videos - including depictions of public figures, characters, or scenarios from prompts. Within days of its release, Hollywood’s major players reacted - not with cautious optimism, but with alarm.
What the Studios, Unions & Agencies Are Saying
CAA (Creative Artists Agency) issued a public warning: Sora “poses risk to creators’ rights,” particularly regarding likeness, copyright, and compensation.
Several agencies (WME, UTA) indicated they would opt out, refusing to allow their clients’ likenesses to be used without control.
Motion Picture Association (MPA) weighed in: existing copyright law must be enforced; OpenAI must take “decisive action” to protect creators.
OpenAI, under pressure, promised to introduce more “granular control” - allowing rights holders to opt out, request takedowns, and set rules for use of characters / likeness.
But critics point out that these are reactive, after-the-fact solutions. The core issue remains: Sora 2 already enables the creation of content using copyrighted characters or recognizable personalities, raising the question of whether the burden falls on rights holders to police misuse.
Watch this example of a Sora 2 Pro-created short film, generated 100% from text to video.
Why It Matters: Creators vs Studios
Displacement & Devaluation
If a future studio can conjure “actors” or “extras” from prompts, what happens to the value of hiring real actors, background performers, or even minor roles? The risk is that performance becomes a commodity, diluted by hyper-scalable AI.
Loss of Control Over Likeness, Voice & Identity
Generative AI models are trained on vast datasets, often drawing implicitly from existing performances, images, voice patterns, and more. The question: who owns what is “derived,” and who gets credit or payment? Studios and talent agencies insist that creators must have veto power.
The Platform vs. Creator Economy Shift
If AI models become the medium, studios may become gatekeepers not of content production but of model access, licensing, and regulation. The tension shifts to creators vs. platforms, rather than creators vs. studios.
Regulation & Norms Are Behind the Curve
Legal precedent and contract language in entertainment haven’t caught up to generative AI. What does “performance residual” mean when the “actor” never existed? Who monitors posthumous likeness reuse?
Tilly Norwood: The AI “Actress” That Sparked Uproar
Who (or What) Is Tilly Norwood?
Tilly Norwood is a fully AI-generated character (or “actress”) created by Dutch AI production studio Particle6 / Xicoia, under founder Eline van der Velden.
Tilly already has social media profiles, modeling images, video snippets, and was pitched as a next-generation “star” — possibly to be represented by talent agencies.
Hollywood Pushback
SAG-AFTRA (the union for actors) issued a strong statement:
“To be clear, ‘Tilly Norwood’ is not an actor, it’s a character generated by a computer program… trained on the work of countless professional performers — without permission or compensation.”
They condemned the concept of replacing human performers with “synthetics.”
Critics also attacked the practice of training on the work of many artists without compensation; SAG-AFTRA highlighted that Tilly was built on performances used without consent.
Some industry insiders framed Tilly as a test case — a provocative play to push boundaries, see who folds, who resists, and to normalize AI performers.
The Gap Between Promise & Reality
Interestingly, reviews of Tilly’s debut performances (e.g. “AI Commissioner”) point to the uncanny valley: awkward lip-sync, wooden acting, and odd movement interpolation. Critics described the work as technically intriguing but emotionally hollow.
This may underscore a deeper point: humans care not just about look and voice, but about intangible qualities - subtlety, empathy, lived experience - which AI struggles to replicate convincingly (yet).
Still: the symbolic act of presenting Tilly as “signable talent” is a shift in stakes. It forces the question: if AI could do “good enough,” do we slide toward a lower bar for hiring human performers?
These controversies are not isolated. They reflect a larger collision: the industrial media paradigm (studios, gatekeepers, contracts) vs the platform / generative AI paradigm (models, democratized access, data-driven creativity).
Power is shifting: The entities controlling the data, models, and deployment pipelines may become the new gatekeepers, displacing traditional studios or intermediaries.
Value is being rethought: What is scarce in a world where images and performances can be synthesized? Authentic human creativity, relationships, lived experience, trusted brands.
Legal & ethical lag: Contracts, IP law, personality rights, residuals - many frameworks in entertainment were never designed for synthetic humans.
Audience trust is fragile: Deepfakes, misattribution, “synthetic resurrection” of deceased icons (e.g. creating videos of dead celebrities) strain legitimacy and can lead to backlash.
In effect, Hollywood is heading into a storm. Whether it survives or transforms depends not only on the technology, but on political will, regulation, collective bargaining, and public sentiment.
Why Media Literacy (Now) Is Crucial - Especially for Young Audiences
As I have written and spoken about on many occasions, this technological, cultural shift isn’t happening in abstraction. It has deep implications for how people consume, interpret, and trust media. Below are key reasons why media literacy must be front and center as we move into this new era.
Read my previous posts on media literacy resources for kids here: “Media Psychology, Literacy x Misinformation” and “Why We Need Media Psychology.”
1. Fact vs. Fabrication Will Blur Faster
Sora 2–style tools can generate videos that look real. If viewers can’t distinguish AI output from documentary footage, the potential for misinformation, propaganda, and deepfake scandals skyrockets. The question “will this ever happen?” has already become “how often is this happening?”
Audiences must learn to ask:
Who created this video?
Is it watermarked or traceable?
Are there signs of uncanny movement or physics errors?
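On the “watermarked or traceable” question: the emerging industry standard here is C2PA (Content Credentials), which embeds signed provenance metadata inside media files in JUMBF containers. As a purely illustrative sketch (not a real verifier, and not any tool named in this post), one can at least detect whether such metadata appears to be present by scanning a file’s raw bytes for the C2PA/JUMBF markers:

```python
# Crude heuristic: check whether a media file appears to carry embedded
# C2PA "Content Credentials" provenance metadata. This does NOT verify
# authenticity or signatures - real validation requires the official
# C2PA tooling - it only flags whether provenance data seems present.
from pathlib import Path

def has_c2pa_markers(path: str) -> bool:
    """Return True if the file contains byte patterns suggesting an
    embedded C2PA manifest (a JUMBF box labeled 'c2pa')."""
    data = Path(path).read_bytes()
    # C2PA manifests are stored in JUMBF boxes; seeing both markers
    # together is a strong hint that Content Credentials are embedded.
    return b"jumb" in data and b"c2pa" in data
```

A real check would parse the box structure and cryptographically validate the manifest with a C2PA SDK; absence of markers proves nothing either, since metadata is easily stripped on re-encode or screen capture. That fragility is exactly why media literacy can’t rely on technical signals alone.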
2. Likeness, Identity & Synthetic Personas
The Tilly Norwood saga shows that “actors” may no longer be human. That means: new rules for representation, identity, consent. Young people may grow up in a world where a “star” might be synthetic, or a social-media persona entirely AI. Understanding behind-the-scenes mechanics is essential to not be manipulated.
3. Gatekeeping & Power Recognition
Generative AI centralizes power in model owners and platform designers. Audiences should be aware: which companies build and control these systems? Whose interests are encoded into them? Media literacy empowers citizens to ask, “who built this filter, and why?”
4. Valuing Human Creativity vs Algorithmic Output
As AI becomes more advanced, we must preserve the idea that creativity, nuance, emotion, and lived experience matter — not just output quality or novelty. Recognizing the difference, and demanding transparency (e.g. “this is AI”) is part of cultural maturity.
5. Legal, Ethical & Civic Awareness
Young creators and audiences will increasingly face questions about copyright, deepfakes, AI rights — and their voice will matter in shaping future norms. Media literacy isn’t just defensive: it’s empowerment.
Hollywood is not just fighting for its future — it’s wrestling with what it means to be human in a world where images, voices, and “actors” can be manufactured. Sora 2 and Tilly Norwood are early but intense skirmishes in a war for narrative control, livelihood, and authenticity. As readers, viewers, creators — we are not powerless observers. Demanding transparency, calling for regulation, supporting human voices, and teaching savvy media habits will be among the most creative acts of this era.