🤖 AI Brief: Actors strike over AI, the FTC targets OpenAI, and Meta preps their commercial LLM

Plus: Anthropic launches their Claude 2 LLM

Today is July 17, 2023.

This week’s issue explores the continuing themes of creative job disruption, government regulation, and the open vs. closed-source battle. One thread that ties these stories together?

The role of AI in history is yet to be written, and we’re seeing major stakeholders all pursue their own agendas to stay relevant and influential in the age of AI. There’s a lot of politics at play and human pain here.

As always, I write my weekly AI memo so you, the busy reader, can rapidly digest this news and come away smarter.

In this issue:

  • 🧨 Actors on strike over AI, worried they will be “replaced by machines”

  • 🔨 The FTC investigates OpenAI, asks for unprecedented disclosures

  • 🤯 Meta’s free commercial LLM is “imminent” and could shake up the LLM world

  • 🔎 Anthropic launches Claude 2, plus other news items

  • 🧪 The latest science experiments, including some amazing image-to-video tech

🧨 Actors on strike over AI, worried they will be “replaced by machines”

Credit: Netflix (Black Mirror / “Joan is Awful”)

Black Mirror, meet real life.

The ongoing actors' strike centers primarily on declining pay in the era of streaming, but the second-most important issue is actually the role of AI in moviemaking.

Driving the news: SAG-AFTRA is accusing Hollywood studios of offering background performers just one day's pay to get scanned — then the studios would own that actor's likeness for eternity with no further consent or compensation. Studios are pushing back, saying this is a gross misrepresentation of their proposal.

Why this matters:

  • Overall pay for actors has been declining in the era of streaming: supporting actors on Orange Is the New Black revealed they were paid as little as $27.30 a year in residuals due to how streaming shows compensate actors — even as far smaller network TV parts earned them significantly more. Many interviewed by the New Yorker said they worked second jobs while starring on the show.

  • With 160,000 members, many actors are concerned about a living wage: outside of the superstars, the chief concern for working actors is simply making a living at all -- something that's increasingly difficult in today's industry.

  • Voice actors have already been screwed by AI: numerous voice actors shared earlier this year that they had unknowingly signed away likenesses of their voices for AI duplication, in perpetuity. Actors fear the same will happen to them now.

What are movie studios saying?

  • Studios have pushed back, insisting their proposal is "groundbreaking" -- but they haven't elaborated on how it would actually protect actors.

  • Studio execs also clarified that the license is not in perpetuity, but rather for a single movie. But SAG-AFTRA still sees that as a threat to actors' livelihoods, since digital twins could substitute for them across multiple shooting days.

What's SAG-AFTRA saying?

  • President Fran Drescher is holding firm: “If we don’t stand tall right now, we are all going to be in trouble, we are all going to be in jeopardy of being replaced by machines.”

The main takeaway: we're in the throes of watching AI disrupt numerous industries, and creatives are really feeling the heat. The double whammy of the AI threat combined with streaming services disrupting earnings is putting extreme pressure on the movie industry. We're in an unprecedented time where screenwriters and actors are both on strike, and the gulf between studios and these creatives appears very, very wide.

🔨 The FTC investigates OpenAI, asks for unprecedented disclosures

The FTC (Federal Trade Commission) is now investigating OpenAI -- and recently hit them with a 20-page demand letter informing the company it’s now in the FTC’s crosshairs. For the curious readers, here's the full document.

Why this matters:

  • The FTC believes existing consumer protection laws apply to AI, even if AI legislation has yet to arrive from Congress.

  • In general, the FTC has been aggressive towards tech companies. Lina Khan (the FTC chair) has charted a deliberate agenda of going after tech, including trying to block Microsoft's acquisition of Activision (the FTC lost) and trying to block Meta from acquiring a VR startup (the FTC lost that one as well). The losses have not deterred her from continuing an aggressive posture.

  • The fines and penalties for FTC violations can be large: Facebook paid $5B in 2019 and Twitter paid $150M in 2022.

So what's the FTC investigating here? Two major angles are part of the FTC’s investigation:

  • "Unfair or deceptive privacy or data security practices" -- this is the stuff that resulted in big fines for Facebook and Twitter.

  • “Unfair or deceptive practices relating to risk of harm to consumers, including reputational harm" -- this follows several lawsuits from individuals against OpenAI alleging defamation from the AI's hallucinations, such as making up criminal records.

What must OpenAI do? Nineteen pages of individual demands from the FTC follow. The most notable asks are for details on how the model was trained, the data used to train it, and numerous proprietary details OpenAI has refused to share to date.

The main takeaway: In total, this would represent an unparalleled level of disclosure required from OpenAI. But the bigger risk is whether open-source models and other AI creators with fewer resources would be subject to the same scrutiny -- if so, that would represent a big chilling effect on innovation in the LLM space.

🤯 Meta’s free commercial LLM is “imminent” and could shake up the LLM world

We've previously reported that Meta planned to release a commercially-licensed version of its open-source language model, LLaMA. Now we know the launch is right around the corner, according to a report from the Financial Times (paywalled).

Why this matters:

  • OpenAI, Google, and others currently charge for access to their LLMs -- and they're closed-source, meaning you can't download the weights or fine-tune the models yourself.

  • Meta will offer a commercial license for their open-source LLaMA LLM, which means companies can freely adopt and profit from this AI model for the first time.

  • Meta's current LLaMA LLM is already the most popular open-source foundation model in use. Many of the new open-source LLMs you're seeing released use LLaMA as the foundation, and now they can be put to commercial use.

Meta's chief AI scientist Yann LeCun is clearly excited here: "The competitive landscape of AI is going to completely change in the coming months, in the coming weeks maybe, when there will be open source platforms that are actually as good as the ones that are not."

Why could this be game-changing for Meta?

  • Open-source enables them to harness the brainpower of an unprecedented developer community. These improvements then drive rapid progress that benefits Meta's own AI development.

  • The ability to fine-tune open-source models is affordable and fast. In his leaked memo, Google AI engineer Luke Sernau flagged this as one of the biggest weaknesses of closed-source models, which can't be tuned with cutting-edge techniques like LoRA.

  • Dozens of popular open-source LLMs are already built on top of LLaMA: this opens the floodgates for commercial use, as developers have already been tinkering with these models.

How are OpenAI and Google responding?

  • Google seems pretty intent on the closed-source route. Even though an internal memo from an AI engineer called them out for having "no moat" with their closed-source strategy, executive leadership isn't budging.

  • OpenAI is feeling the heat and plans to release their own open-source model. Rumor has it this won't be anywhere near GPT-4's power, but it clearly shows they're worried and don't want to lose market share. Meanwhile, Altman is pitching global regulation of AI models as his big policy goal.

The main takeaway: we’re in the early innings of the open-source vs. closed-source debate. Expect Meta’s move to significantly accelerate the intensity of the battle, now that their model can be used commercially.

🔎 Quick Scoops

Anthropic introduces their Claude 2 LLM, with improved performance, longer responses, and better memory. (Anthropic)

Harvard will teach students using an AI instructor next semester, enabling a 1:1 student-teacher ratio for their popular CS50 coding course. (Futurism)

Elon Musk announces his AI startup, xAI. Composed of AI veterans, Musk intends to chart his own contrarian course in the world of AI and will leverage Twitter’s immense volume of data. (Twitter)

Stability AI co-founder accuses the company of tricking him into selling his stake for $100, per a new lawsuit. Cyrus Hodes sold his 15% stake for just $100. That stake would be worth over $500M today. (ARTNews)

Brave sells copyrighted data for AI training, engineer reveals. AI projects using third-party sellers like Brave may be able to avoid direct copyright infringement – but that doesn’t make it right. (StackDiary)

🧪 Science Experiments

HyperDreamBooth: the fastest personalized text-to-image model yet

  • Google Research team shows that using only a single input image, HyperDreamBooth is able to personalize a text-to-image diffusion model 25x faster than DreamBooth

  • Project page here

DreamBooth just got a big upgrade.

Pika Labs releases image-conditioned video generation

  • What it does: upload an image with a prompt and it’ll animate the image. For example: “a girl in the wind” modifies an image to have animated, blowing hair

  • Lots of early interest in this tech – you’ll have to join their Discord to test it.

Very impressive for an early science experiment.

NIFTY: Neural Object Interaction Fields for Guided Human Motion Synthesis

  • Generating realistic 3D motions of humans interacting with objects (e.g. sitting in a chair) is really hard. These researchers (from Google, NVIDIA, and Stanford) propose a “human motion diffusion model” that uses little data to do this. While early, this looks really promising.

  • Project page here

How does a human realistically sit in a chair? Quite the hard problem to solve.

CLIPascene: Scene Sketching with Different Types and Levels of Abstraction

  • We’ve seen diffusion models able to convert a sketch into a scene. But what about the reverse? This group of researchers demonstrates a novel method to do exactly that.

AI can now do the reverse of sketch-to-scene — quite cool!

👋 How I can help

We crossed 15k subs! Our audience continues to grow.

Here are other ways you can work together with me:

  • If you’re an employer looking to hire tech talent, my search firm (Candidate Labs) helps AI companies hire the best out there. We work on roles ranging from ML engineers to sales leaders, and we’ve worked with leading AI companies like Writer, Tome, Twelve Labs and more to help them make critical hires. Book a call here.

  • If you would like to sponsor this newsletter, shoot me an email at [email protected] 

As always — have a great week!