🤖 AI Brief: AI's "risk of extinction," more Nvidia magic, and GenAI's trillion $ opportunity

Plus: how many Americans are really using ChatGPT?

Today is June 5, 2023. We’re back after taking a one-week break for Memorial Day!

As I debated last week whether I should send an issue anyway, a friend helped put things in perspective: we’re only in the early innings of generative AI, and staying informed on this topic also means not getting burned out.

With stories on regulation, AI safety, and the future of work just beginning to pick up steam, it’s important to consume the right amount of news so you don’t end up tuning out the things that matter most.

So that’s why I continue to write this: my goal is to distill the most impactful news into something that brings value to you, while keeping the cadence and volume just right.

In this issue:

  • 💀 AI leaders warn of “risk of extinction”

  • 📊 A majority of US adults are familiar with ChatGPT, but usefulness is mixed

  • 🪖 A US military AI drone “killed” its operator in a simulation… or did it?

  • 💰️ Generative AI spend set to hit $1.3T by 2032, says Bloomberg

  • 🧪 The latest science experiments, including Nvidia’s 2D video to 3D model tech

💀 AI leaders warn of “risk of extinction”

Hundreds of notable AI industry leaders and research scientists signed on to a 22-word statement saying the following: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

  • What’s notable about this letter is the breadth of its signers, who include Sam Altman, CEO of OpenAI; Demis Hassabis, CEO of Google’s DeepMind unit; Dario Amodei, CEO of Anthropic; and Emad Mostaque, CEO of Stability AI.

  • A number of AI scientists also signed, most notably Geoffrey Hinton and Yoshua Bengio, who won the Turing Award for their work on neural networks.

Why does this matter?

  • This is the broadest group to sound the alarm on the need for humanity to prioritize and cooperate on AI’s future, making this notably different from previous warning letters.

  • In the last few weeks, several notable pushes for governance and regulation have emerged, including OpenAI’s own call for the governance of superintelligence via a global organization.

The challenge now is taking global action on AI in a coordinated and thoughtful way, while also accounting for the rise of powerful open-source models that could prove difficult to regulate. We’ll be watching all of this closely as it develops.

📊 A majority of US adults are familiar with ChatGPT, but usefulness is mixed

ChatGPT seems like it’s everywhere, but is that just because of our immediate surroundings? A new poll from the Pew Research Center sheds light on how Americans more broadly are interacting with ChatGPT.

The most interesting nuggets from the poll:

  • 58% of US adults are familiar with ChatGPT, but just 14% have actually tried it.

  • Those with higher household incomes and more formal education were more likely to have heard about ChatGPT.

  • Of the adults who’ve tried ChatGPT, just 15% called it “extremely useful” and 20% called it “very useful.” Do the math (35% of the 14% who’ve tried it) and that’s only about 5% of all US adults(!)

In many ways, this highlights how we’re still in the early innings of the generative AI ballgame.

Credit: Pew Research Center

🪖 A US military AI drone “killed” its operator in a simulation… or did it?

The US Air Force now says that Col. Tucker ‘Cinco’ Hamilton, its Chief of AI Test and Operations, “mis-spoke” when he described a simulation in which an AI-enabled drone attacked its own human operator after being denied permission to eliminate a threat.

Unsurprisingly, this story quickly picked up traction online as the topic of militarized AI has also become top-of-mind in an era of vastly expanding AI capabilities.

Here’s what we do know:

  • Col. Hamilton described “training” an AI-enabled drone that “killed the operator” in simulations during a talk at the Royal Aeronautical Society, where he was extensively and directly quoted.

  • Several days later, the USAF issued a denial, clarifying that he was simply describing a “thought experiment.”

Our take: the Royal Aeronautical Society’s direct quotations of Hamilton suggest this was more than a thought experiment. What is publicly known is that the military is testing AI in other capacities, including unmanned F-16s trained in advanced dogfighting.

💰️ Generative AI spend set to hit $1.3T by 2032, says Bloomberg

Anytime we see extrapolations ten years out, we recommend taking them with a healthy grain of salt. But what’s interesting to us about the latest research from Bloomberg Intelligence is its callout of who may reap the rewards of the generative AI boom.

Here’s who the winners could be:

  • As incumbents, Amazon’s cloud division, Microsoft, Google, and Nvidia are especially well-positioned.

  • One reason: revenue from AI servers could touch $134 billion per year by 2032.

  • Another reason: revenue from infrastructure capable of training AI models is projected to rise to $247 billion by 2032.
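For the curious, the headline number is easy to sanity-check. A minimal sketch, assuming the widely reported companion figures (a roughly $40B generative AI market in 2022 compounding at about 42% a year, neither of which is quoted above):

```python
# Back-of-the-envelope check on Bloomberg's $1.3T-by-2032 projection.
# The 2022 base (~$40B) and ~42% CAGR are widely reported companion
# figures, not numbers from this issue; treat both as assumptions.
base_2022_bn = 40        # generative AI revenue in 2022, in $B (assumed)
cagr = 0.42              # compound annual growth rate (assumed)
years = 10               # 2022 -> 2032

projection_bn = base_2022_bn * (1 + cagr) ** years
print(f"2032 projection: ${projection_bn:,.0f}B")  # ≈ $1,333B, i.e. ~$1.3T
```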

Will the gold rush into generative AI heavily favor existing technology incumbents? At this point in time, it certainly seems that way. Thousands of new AI startups and tools are launching, but whether any of them will emerge to grab serious market share is something we’re watching closely.

Credit: Bloomberg

🔎 Quick Scoops

Nvidia reaches $1T market cap thanks to AI surge. What crypto crash? Nvidia is now flying high as the premier maker of the chips used to power and train AI models. (The Guardian)

ChatGPT is putting copywriters out of work. An in-depth look at how some well-paid US writers have seen their work dwindle as employers choose ChatGPT over human labor. (Washington Post)

Parents and students favor ChatGPT over human tutors. The tutoring industry could find itself in the throes of disruption as more people turn to chatbots over human tutors. (VentureBeat)

Japan decides copyright doesn’t apply to AI training data. This move is part of the Japanese government’s strategy to foster its own AI industry. (Technomancers)

AI is now an insult in popular culture. Calling something “made by ChatGPT” is now a way of criticizing its quality. (The Atlantic)

A lawyer gets in trouble for using ChatGPT to prepare a court filing. Oof. A filing full of bogus, hallucinated case citations got this lawyer in trouble. (New York Times)

🧪 Science Experiments

Nvidia’s Neuralangelo creates 3D models from 2D video

  • Nvidia continues to push the boundaries of AI, and this proof of concept is its latest work in neural surface reconstruction, an AI-based alternative to traditional photogrammetry.

  • Project page here.

Credit: Nvidia

Undetectable watermarks for LLMs are possible

  • Can you watermark a language model’s output so that the watermark is undetectable without a secret key, leaving the text indistinguishable from ordinary output? These researchers think so.

  • Research paper here.

Segment Anything gets a high-quality upgrade

  • Meta’s open-source Segment Anything project impressed many, but some found its mask quality lacking. These researchers built a variant that segments at significantly higher quality (a minimal usage sketch follows below).

  • Paper here, Github repo here.

Comparison of Segment Anything vs. the new HQ variant
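If you want to kick the tires, here’s a minimal sketch using Meta’s published segment-anything API. The checkpoint path and click coordinates are placeholders, and the HQ variant’s interface may differ slightly, so check its repo:

```python
# A minimal sketch of prompt-based segmentation with Meta's published
# segment-anything API. The HQ variant advertises a similar interface,
# but check its repo for exact names; the checkpoint path and the click
# coordinates below are placeholders.
import cv2
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

# SAM expects an RGB image; OpenCV loads BGR, so convert.
image = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# Ask for masks of whatever sits under a single foreground click.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[500, 375]]),  # (x, y) pixel of interest
    point_labels=np.array([1]),           # 1 = foreground point
)
print(masks.shape, scores)  # three candidate masks, ranked by score
```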

Language models know when they’re hallucinating

  • A very interesting paper out of Microsoft’s AI research division. Through various prompting techniques, the authors show you can get a language model to recognize and correct its own hallucinations (a toy sketch of the idea follows below). It’s one of many studies underway on how to address this challenge.

  • Research paper here.
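To make the idea concrete, here’s a toy sketch of self-verification prompting in the spirit of this line of work. The prompts are illustrative, not the paper’s actual templates, and complete() is a stand-in for whatever chat-completion API you use:

```python
# Toy self-check loop: draft an answer, have the model audit its own
# draft for unsupported claims, then regenerate if anything is flagged.
# The prompts are illustrative, not the paper's templates; complete()
# is a placeholder for any chat-completion call.
def complete(prompt: str) -> str:
    raise NotImplementedError("wire this up to your LLM API of choice")

def answer_with_self_check(question: str) -> str:
    draft = complete(f"Answer concisely:\n{question}")

    # Second pass: the model audits its own draft.
    verdict = complete(
        "List any claims in the answer below that you cannot verify, "
        f"or reply OK if there are none.\nQuestion: {question}\nAnswer: {draft}"
    )
    if verdict.strip().upper() == "OK":
        return draft

    # Third pass: regenerate, steering away from the flagged claims.
    return complete(
        "Rewrite the answer, removing or correcting the unverified claims.\n"
        f"Question: {question}\nAnswer: {draft}\nFlagged: {verdict}"
    )
```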

Credit: arXiv

😀 A reader’s commentary

That’s it! Have a great week!

And as always, we want to hear how useful this is for you. The more feedback you provide, the better this newsletter gets! What else would you like to see covered?

So take the poll below — your signals are helpful!