
Ben Affleck Got AI Wrong - Here's What He Missed

Ben Affleck went viral breaking down AI on Joe Rogan's podcast. He's thoughtful, well-researched - and completely missing what's actually happening in AI.

Sharon Sciammas, AI & Growth Leader


Photo by Jakob Owens on Unsplash


Ben Affleck went viral last week breaking down AI on Joe Rogan's podcast. Smart guy, thoughtful take. He compared AI to electricity, defended human creativity, and predicted Hollywood has nothing to worry about.

He's wrong.

Not because he's stupid—he's clearly done his homework. But because he's making the same mistake most people make when predicting technology: he's extrapolating from current limitations instead of understanding exponential curves.

Let me show you where his logic breaks down.

Claim #1: "AI Progress Is Plateauing"

Affleck's big economic argument: GPT-5 is only 25% better than GPT-4 but costs four times as much in electricity and data. His conclusion? We're hitting diminishing returns.

Sounds reasonable. It's also completely missing what's actually happening.

What he's ignoring:

While OpenAI's scaling might be plateauing, the entire industry is accelerating in a different direction. Groq just raised $750 million for inference chips that run 7.41x faster while cutting costs by 89%. Elon Musk built a 200,000-GPU supercomputer in 214 days and plans to scale to a million. China's DeepSeek trained a model matching GPT-4 performance for $6 million instead of $100 million—running on export-restricted chips that aren't even the best available.

This isn't a plateau. It's a global arms race with nations treating AI as a geopolitical weapon.

Affleck is watching one company's diminishing returns while missing that hardware acceleration, algorithmic efficiency, and international competition are exploding simultaneously. When something becomes strategic to nations, the "it's getting too expensive" argument falls apart.

Claim #2: "AI Outputs Are Average Because They Go to the Mean"

Here's where Affleck reveals he doesn't understand how these models actually work.

He says: "ChatGPT, Claude, Gemini—they are really shitty because by nature they go to the mean, to the average."

The problem: This fundamentally misunderstands generative AI architecture.

Diffusion models don't "average" training data. They generate novel outputs from noise. When Runway or Sora creates a video, it isn't averaging existing footage—it's creating something that has never existed before. That's not averaging; that's generation.
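You can see the "generation, not averaging" distinction in a toy sketch. This is not a real diffusion model—the two-mode "training set" and the denoising rule are invented purely for illustration—but it shows the key behavior: sampling starts from noise and converges to one of the modes the data actually contains, never to their mean.

```python
import random

# Toy illustration (NOT a real diffusion model): why generative
# sampling is not "averaging the training data."
# Hypothetical training distribution with two modes, at -1.0 and +1.0.
# The mean of this data is 0.0 -- a value no real sample looks like.
training_modes = [-1.0, 1.0]

def sample(seed: int, steps: int = 50) -> float:
    """Start from pure noise, then iteratively 'denoise' toward the
    nearest mode -- a crude stand-in for a reverse diffusion process."""
    rng = random.Random(seed)
    x = rng.gauss(0.0, 3.0)  # begin as pure noise
    for _ in range(steps):
        nearest = min(training_modes, key=lambda m: abs(m - x))
        x += 0.2 * (nearest - x)   # small step toward the data manifold
        x += rng.gauss(0.0, 0.05)  # residual noise keeps outputs novel
    return x

samples = [sample(seed) for seed in range(100)]

# Every sample lands near -1 or +1 -- never near the mean (0.0).
print(min(samples), max(samples))
```

Different seeds produce different, never-before-seen values clustered around the real modes; none of them collapse to the average. That's the structural reason "they go to the mean" is the wrong mental model.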

What Affleck is actually observing is the garbage-in-garbage-out problem. Default prompts with zero context produce generic results. But with proper prompting, constraints, and guidance, these same models produce highly specific, non-average outputs.
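The gap between a default prompt and an engineered one is easy to make concrete. A minimal sketch—the brand profile, helper names, and constraints here are hypothetical placeholders, not any real client's system:

```python
# Sketch: "default" prompting vs. engineered prompting.
# The brand profile below is a made-up example for illustration.

def default_prompt(topic: str) -> str:
    # Zero context: this is what casual users send,
    # and it regresses toward generic, "average" output.
    return f"Write a social post about {topic}."

def engineered_prompt(topic: str, brand: dict) -> str:
    # Context, constraints, and guidance steer the model
    # toward specific, non-average output.
    return "\n".join([
        f"You are the voice of {brand['name']}.",
        f"Tone: {brand['tone']}. Audience: {brand['audience']}.",
        f"Never use these words: {', '.join(brand['banned_words'])}.",
        f"Write a social post about {topic}.",
        "Constraints: under 60 words, one concrete detail, end with a question.",
    ])

brand = {
    "name": "Acme Outdoors",  # hypothetical brand
    "tone": "dry, practical, no hype",
    "audience": "experienced backpackers",
    "banned_words": ["game-changer", "revolutionary"],
}

print(default_prompt("trail running shoes"))
print(engineered_prompt("trail running shoes", brand))
```

Send the first prompt to any frontier model and you get the "mean" Affleck complains about; send the second and the output is pinned to a voice, an audience, and hard constraints. Same model, different inputs.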

I know this because I build with these tools daily. My multi-agent marketing systems using Claude Code don't produce "average" content—they produce deterministic, brand-specific outputs because I've engineered the inputs properly.

Affleck is judging AI by what casual users get, not what skilled practitioners can achieve. That's like judging photography by tourist snapshots instead of Ansel Adams.

Claim #3: "AI Can't Write Meaningful Stories"

Affleck makes a great point about Good Will Hunting's emotional authenticity—that scene where Robin Williams breaks through came from real human experience, 50 takes, one perfect moment.

Then he concludes AI can never do this.

Here's what he's missing: Stories aren't as unique as we think.

On the podcast, Affleck described his research process for The Town: interviewing prisoners, hanging with ex-cons, visiting FBI task forces, gathering human moments. That's exactly right—that's what makes stories authentic.

But here's the thing: AI doesn't replace that research. It accelerates what comes after.

The human part: Talk to people, record experiences, gather authentic details.
The AI part: Generate 10 script variations, test different story structures, iterate faster, edit relentlessly.

Affleck thinks this is either/or. It's actually both/and.

And about those stories being "unique"—Pixar literally codified storytelling into frameworks. Marvel turned superhero films into a formula. The music industry has chord progressions that generate hits. Every romantic comedy follows the same beats.

My prediction: we'll get a two-tier content industry:

Tier 1: AI-generated mass market content using proven frameworks (the new Marvel formula)
Tier 2: Human-crafted prestige content with authentic emotional depth (the new indie/arthouse)

The gap between them won't be in the story structure—it'll be in the human performance and emotional nuance. Which is exactly what Affleck got right about those 50 takes of Good Will Hunting.

What Affleck Actually Got Right

The human performance moment matters more than ever.

He's absolutely correct that the final scene of Good Will Hunting—50 takes to get one perfect emotional beat—is irreplaceable. AI won't replace actors bringing lived experience to roles.

But here's what changes: More time for human creativity (acting, directing, emotional depth) BECAUSE less time on mechanical work (writing drafts, shot planning, VFX rendering).

And he's right about one other thing: it's all about money.

Film is high-risk, high-cost, uncertain ROI. Most projects fail financially. No one knows what will hit.

But this is exactly why AI adoption will accelerate faster than he thinks.

When an industry is capital-intensive with high failure rates, tools that reduce risk and cost get adopted FAST. Want to test 10 story concepts before committing? AI makes that affordable. Want to previsualize an entire film before shooting? AI enables that.

The economics argument cuts against Affleck's slow adoption prediction.

The Real Future (Not Affleck's Incremental Vision)

Here's the pattern we keep seeing:

  • "AI can't drive" → Self-driving cars
  • "AI can't beat humans at chess/Go" → AlphaGo dominates
  • "AI can't write code" → Claude Code, Cursor, Copilot everywhere
  • "AI can't create art" → Midjourney, DALL-E, Runway
  • "AI can't make movies" → ???

The gap between "AI can't do this" and "AI does this routinely" keeps shrinking. It's now measured in months, not years.

Affleck assumes AI will remain at today's capability with incremental improvements. That's the mistake. We're not on a linear curve—we're on an exponential one.

Why This Matters

Affleck's perspective is common among people who understand the current state but don't work with these systems daily. He's done the research, talked to experts, formed thoughtful opinions.

But he hasn't built a multi-agent content system. He hasn't watched Claude Code architect an entire marketing workflow. He hasn't experienced the velocity difference between human-only and human+AI processes.

The opportunity isn't in debating IF AI will transform filmmaking. It's in positioning for WHEN.

Whether that's helping traditional companies adopt AI workflows, building AI-native production systems, or consulting on creative industry transformation—the gap between "Hollywood doesn't believe this yet" and "this becomes obvious reality" is where fortunes are made.

Affleck will be right about one thing: Human creativity, emotional depth, and authentic performance will matter MORE, not less.

He's just wrong about the timeline and scope of transformation.
