🤖 AI Brief: Bard does math, DeepMind's algorithm magic, and Meta's new music engine

Plus: Palantir's $400M contract with US Spec Ops

Today is June 12, 2023.

Happy Monday! Last week brought a number of interesting developments and new science projects to try (like Meta’s new open-source music generator). It also saw a lot of noise (articles and news that I purposely filter out of this newsletter).

May you have a fruitful and productive week!

In this issue:

  • 🧠 Bard improves at math, coding, and more through new upgrades

  • 🛑 OpenAI still not training GPT-5, prioritizing other ideas instead

  • 🔨 Meta criticized by US senators for “leaking” open-source LLaMA LLM

  • 🏃‍♂️ Google’s DeepMind discovers faster sorting algorithms using deep reinforcement learning

  • 🧪 The latest science experiments, including Meta’s new music generation engine

🧠 Bard improves at math, coding, and more through new upgrades

One of the known weaknesses of language models is their poor math capability, owing to inherent limitations of the transformer architecture: they function as next-token prediction engines rather than calculators.

  • This week, Google announced their solution to this weakness by enabling Bard to perform implicit code execution in the background.

  • For queries that Bard believes will benefit from logical reasoning, a second system kicks in that improves the accuracy of Bard’s responses to computation-based problems.

  • Google’s team cautions that this still isn’t perfect, but they measured roughly a 30% accuracy improvement over the baseline on computation-based prompts. Read the full blog post from Google here.
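To make the idea concrete, here is a minimal sketch of the routing pattern described above: a classifier decides whether a prompt is computational, and if so, the answer comes from deterministic code execution instead of free-form text generation. Everything here (the heuristic, the function names) is a hypothetical illustration, not Bard’s actual implementation, which uses a learned system rather than regexes.

```python
import re

def looks_computational(prompt: str) -> bool:
    """Crude heuristic stand-in for Bard's learned routing: does the
    prompt contain arithmetic or an explicit request to compute?"""
    has_arithmetic = bool(re.search(r"\d+\s*[-+*/^%]\s*\d+", prompt))
    keywords = ("calculate", "reverse", "prime", "solve")
    return has_arithmetic or any(kw in prompt.lower() for kw in keywords)

def answer(prompt: str) -> str:
    if looks_computational(prompt):
        # Route to a deterministic computation path instead of relying
        # on next-token prediction. Here: evaluate simple arithmetic.
        match = re.search(r"(\d+)\s*([-+*/])\s*(\d+)", prompt)
        if match:
            a, op, b = int(match.group(1)), match.group(2), int(match.group(3))
            results = {"+": a + b, "-": a - b, "*": a * b, "/": a / b}
            return str(results[op])
    # Fall back to the language model's direct, prediction-based answer.
    return "<LLM free-text answer>"

print(answer("What is 17 * 23?"))  # computed, not predicted -> "391"
```

The key design point is that the second system only activates for queries it believes will benefit, so ordinary conversational prompts still flow through the model unchanged.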

🛑 OpenAI still not training GPT-5, prioritizing other ideas instead

One of the constant questions OpenAI CEO Sam Altman encounters is “when is GPT-5 coming out?” As he toured several countries last week to promote OpenAI’s agenda, this question came up again.

He was happy to reinforce some points and offer more context:

  • Months after explaining that the company wasn’t working on GPT-5, Altman confirmed this again: “we have a lot of work to do before we start that model.”

  • Altman set further expectations on any hypothetical release timeline, explaining that it took six months to release GPT-4 even after OpenAI finished training it. Read more here.

🔨 Meta criticized by US senators for “leaking” open-source LLaMA LLM

A bipartisan group of US senators from the same committee that questioned OpenAI CEO Sam Altman is now going after Meta.

Meta’s LLaMA model notably leaked to the public in February and has been central to the red-hot pace of open-source LLM improvements in recent months.

  • This is clearly worrying the US government: “The open dissemination of LLaMA represents a significant increase in the sophistication of the AI models available to the general public, and raises serious questions about the potential for misuse or abuse,” the senators wrote to CEO Mark Zuckerberg.

  • Meta “failed to conduct any meaningful risk assessment,” the letter says, noting how the company should have anticipated broad dissemination of the model after it was released to a smaller group at first.

🏃‍♂️ Google’s DeepMind discovers faster sorting algorithms using deep reinforcement learning

Google DeepMind's AlphaDev AI has improved several sorting algorithms in C++, including optimizing one algorithm for a 70% speed increase over the previous best method.

  • The recent achievement came about by adapting the AlphaZero AI, which famously mastered complex games like chess and Go, into a code-focused version dubbed AlphaDev.

  • As computer chips approach fundamental physical limits due to their nanoscale transistors, the need for better software efficiency and optimization becomes increasingly paramount.

Read the full report from Google, published in Nature.
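For context on what AlphaDev actually optimized: the gains came from fixed-size sorting routines (sorting 3, 4, or 5 elements) that larger sort functions call constantly, where AlphaDev searched for shorter assembly instruction sequences. The Python below is only an illustrative sketch of the sorting-network structure of such routines; the real work happened at the assembly level, and `compare_exchange`/`sort3` are names invented here for illustration.

```python
def compare_exchange(a, i, j):
    """One min/max step: put a[i], a[j] in order. This branch-level
    primitive is the kind of operation AlphaDev's search rearranges."""
    if a[i] > a[j]:
        a[i], a[j] = a[j], a[i]

def sort3(a):
    """A three-step sorting network that sorts any 3 elements.
    AlphaDev's discovery shaved instructions off the assembly
    implementations of small fixed-size routines like this one."""
    compare_exchange(a, 0, 1)
    compare_exchange(a, 1, 2)
    compare_exchange(a, 0, 1)
    return a

print(sort3([3, 1, 2]))  # -> [1, 2, 3]
```

Because these tiny routines are invoked trillions of times a day inside library sort calls, even removing a single instruction compounds into a measurable speedup.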

🔎 Quick Scoops

Meta plans to add generative AI across numerous platforms, as the company details its AI strategy in its clearest form yet. (Axios)

OpenAI was sued for defamation by a radio host, who is angry that the chatbot falsely claims he embezzled funds and defrauded a nonprofit. (Ars Technica)

Chief Information Officers worry using AI to write code will create new problems, among them growing technical debt and orphan code. (Wall Street Journal - 💰️ Paywalled)

Microsoft’s Bing chatbot gets in trouble for serving up canned AI answers for “Chrome,” in a sign of a growth hack gone wrong. After journalists pointed this out, the company reversed course. (The Verge)

DeSantis campaign is running ads with AI-generated imagery, mixing real and fake images in a way that obscures which content is actually AI-generated. (Twitter)

Palantir wins $400M contract with US Special Operations to integrate LLMs and AI, as part of reducing cognitive load for the military and its warfighters. (MarketWatch)

🧪 Science Experiments

Google adds vision into speech recognition algorithm, vastly improving robustness

  • Existing technology for deciphering audio suffers when the inputs are noisy

  • By injecting visual streams in as well, prediction of audio is vastly improved

  • See their research here

    Predictions of what people are saying improve when you interpret the visuals too! Credit: Google
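The intuition behind the item above can be sketched as a late-fusion step: when noisy audio makes the acoustic model uncertain, blending in predictions from a visual (lip-reading) stream can recover the right answer. The numbers and the 50/50 weighting below are toy assumptions for illustration, not Google’s actual architecture, which fuses the streams inside the model rather than at the output.

```python
import numpy as np

# Toy per-frame phoneme probabilities from two unimodal models
# (hypothetical stand-ins for audio and visual encoders).
audio_probs = np.array([0.30, 0.45, 0.25])   # noise pushes audio toward class 1
visual_probs = np.array([0.70, 0.20, 0.10])  # lip movements clearly favor class 0

# Late fusion: average the two streams so a confident visual
# prediction can rescue an uncertain audio one.
fused = 0.5 * audio_probs + 0.5 * visual_probs

print(audio_probs.argmax())  # audio alone picks the wrong class -> 1
print(fused.argmax())        # fused prediction recovers class 0 -> 0
```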

LLMs integrated with Notebooks could add new superpowers, Stephen Wolfram shares

Meta releases their open-source music generation model

Nested Diffusion provides a way to progressively see image results via Stable Diffusion 1.5

Witness how Stable Diffusion increasingly generates a more detailed image via Nested Diffusion.

😀 A reader’s commentary

That’s it! Have a great week!

And as always, we want to hear from you on how useful this is. The more feedback you provide, the better our newsletter gets! What else would you like to see covered?

So take the poll below — your signals are helpful!