🤖 AI Brief: AI fails the EU AI Act, artists angry at Marvel, and politicians embrace AI content

Plus: Google's new speaking LLM

Today is June 26, 2023.

If there’s a theme to the major news events that stood out to me over the past week, it’s that we’re in the early innings of grappling with AI.

As legislation in the EU races ahead, AI model providers are trying to understand how they fit in. At the same time, everyone from artists to politicians is seeing AI pervade their work without fully grasping its long-term implications.

Lots of fascinating topics to watch in the months and years ahead!

In this issue:

  • 📣 Leading AI Language Models Fall Short of Upcoming EU Regulations, Stanford Study Warns

  • 🛠️ Millions of task workers now work in AI annotation, often with no idea why

  • 📺️ AI-generated political content spreads, highlighting gaps in election laws

  • 🎨 Marvel’s “Secret Invasion” AI-generated intro angers artists

  • 🧪 The latest science experiments, including new video generation AI models and Google’s new speaking LLM

📣 Leading AI Language Models Fall Short of Upcoming EU Regulations, Stanford Study Warns

A Stanford study concludes that the world's leading AI language models could fail to meet the standards set by the EU's new AI Act, exposing their providers to significant regulatory risk and potentially heavy fines for non-compliance.

Leading AI models largely fail to comply across 12 key dimensions, researchers found. Credit: Stanford University

Why this matters:

  • The EU AI Act is on its way to becoming law: it's now in its final stages after passage through parliament, so there's no way to head off its arrival. Any final changes will be small tweaks.

  • Penalties for non-compliance are serious: fines of up to €20,000,000 or 4% of worldwide revenue, whichever is greater, are possible.

  • Open-source models face the same standards as closed-source models: this includes registration with the EU, transparency requirements, and safety considerations.

  • Other countries will use it as an example: as legislation gets developed in the USA, it's likely they'll look to the EU for inspiration.
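To make the penalty rule above concrete, here's a minimal sketch of the "greater of €20M or 4% of worldwide revenue" calculation (the function name and example revenue figure are hypothetical, for illustration only):

```python
def max_fine_eur(worldwide_revenue_eur: float) -> float:
    """Maximum possible fine: the greater of a flat €20M or 4% of worldwide revenue."""
    return max(20_000_000, 0.04 * worldwide_revenue_eur)

# For a company with €10B in worldwide revenue:
max_fine_eur(10_000_000_000)  # 4% of revenue is €400M, which exceeds the €20M floor
```

For large providers, the 4%-of-revenue prong dominates; the €20M floor mainly bites for smaller companies.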

What did the researchers find?

  • Across 12 key requirements for generative AI, the leading 10 models fell short. Most scored half or less of the 48 possible points.

  • Hugging Face's open-source BLOOM performed the best, securing 36/48 points.

  • OpenAI's GPT-4 scored 25/48 points, roughly middle of the pack.

  • Anthropic's Claude scored 7/48 points, second from the bottom.

Areas of failure differed between closed-source and open-source models:

  • Open-source models generally outperformed on transparency around data sources and resource utilization, which is unsurprising given their openly documented releases.

  • Closed-source models excelled in areas such as comprehensive documentation and risk mitigation.

What are the issues to watch next here?

  • Many elements of the AI Act remain murky, the researchers argue, so additional clarity is needed. Look out for tweaks to the law as it goes through additional refinement.

  • How open-source and closed-source projects adapt in the next few months will be interesting to observe. OpenAI in particular will have to be more open, and open-source projects may have to wrestle with registration requirements and post-deployment model risks.

🛠️ Millions of task workers now work in AI annotation, often with no idea why

This deep-dive report from The Verge covers an emerging underclass: the millions of task workers, often in developing countries, paid pennies to annotate images, voice recordings, and text for training AI systems.

  • In a highly ironic twist, many have no idea they’re working for AI systems, and several workers interviewed were surprised to learn they worked for subsidiaries of tech companies like Scale AI.

  • This “supply chain” of human labor is deliberately obfuscated, often for confidentiality reasons, The Verge found. Remotasks, one of the companies operating in Africa identified by The Verge, lists no connection to Scale AI despite being its subsidiary.

The task workers themselves have mixed feelings about their labor:

“I read and I Googled and found I am working for a 25-year-old billionaire,” said one worker, who was labeling the emotions of people calling to order Domino’s pizza. “I really am wasting my life here if I made somebody a billionaire and I’m earning a couple of bucks a week.”

Another worker, who discovered they had likely been helping train ChatGPT, summed up the opinion of their peers: “People were angry that these companies are so profitable but paying so poorly.”

📺️ AI-generated political content spreads, highlighting gaps in election laws

From local elections to national races, The New York Times highlights a number of political races seeing a rise in AI-generated content.

  • One Canadian mayoral candidate generated multiple images of a crime-ridden Toronto in order to emphasize his tough-on-crime message, the Times found. None were labeled as AI-generated.

  • Errors in AI-generated images aren’t dissuading their use, the Times found. The same candidate also distributed materials of a woman with three arms (a common AI image error), and nonetheless gained traction in the mayoral race.

  • And a Twitch livestream debate of AI Trump vs. AI Biden shows how eerily AI can imitate major political figures, raising broader questions about how to separate real content from AI-generated content.

🎨 Marvel’s “Secret Invasion” AI-generated intro angers artists

Marvel broke new ground last week with the first major television show to use an AI-generated opening sequence. Its Secret Invasion series, in which an alien shape-shifting species infiltrates Earth, features an abstract, mind-melting intro that fans quickly determined was AI-generated.

Credit: Marvel Studios

After confirming that visual effects group Method Studios used AI to generate the intro, Marvel has been taking flak from visual effects artists.

Jeff Simpson, a concept artist who worked directly on the Secret Invasion TV series for Marvel, tweeted: “Secret Invasion intro is AI generated. I’m devastated. I believe AI to be unethical, dangerous, and designed solely to eliminate artists’ careers.”

The arrival of AI tools in the visual effects space comes at a time when Hollywood writers are striking over pay and AI concerns, and visual effects artists similarly complain that large studios like Marvel have been the driving force behind lower pay, longer hours, and nearly unprofitable operating margins for many smaller studios.

(Editor’s note: Marvel doesn’t make this intro publicly available themselves, but you can probably find it via a YouTube search.)

🔎 Quick Scoops

OpenAI successfully lobbied the EU to water down the AI Act, leading to chatbot platforms avoiding a “high risk” label. (Time)

The US Senate previews its AI legislation strategy. While the US is still playing catch-up, leading politicians recognize that speed is critical. (NBC News)

AI-generated child sexual abuse images are on the rise, and law enforcement agencies are ill-equipped to stop them. Worse, automated detection systems built to recognize known real images could be overwhelmed by synthetic media. (The Washington Post)

Voice-generating AI platform ElevenLabs raises $19M and launches an AI voice detection tool. (TechCrunch)

Dropbox launches universal AI search, among many other AI tools. Expect natural language search to become an increasingly prevalent UI paradigm across many platforms. (The Verge)

🧪 Science Experiments

DragGAN is now available directly on Hugging Face

  • This demo made waves when it was first released weeks ago, and is a proof of concept of how generative AI can unlock a new set of powers in image manipulation

  • Now you can try it here without installing anything else

Source: Hugging Face

Midjourney 5.2 releases, includes new “Zoom Out” feature

Source: Reddit

zeroscope_v2 XL video generation model releases

  • This is a watermark-free video model, now capable of generating high-quality video at up to 1024×576

  • See it in action here

Source: Zeroscope team / Hugging Face

Google presents AudioPaLM, an LLM that can speak and listen

  • AudioPaLM fuses text-based and speech-based language models (PaLM-2 and AudioLM) into a unified multimodal architecture

  • See it in action here

😀 A reader’s commentary

Thanks Bradley!

That’s it! Have a great week!

And as always, we want to hear from you on how useful this is. The more feedback you provide, the better our newsletter gets! What else would you like to see covered?

So take the poll below — your signals are helpful!