🤖 AI Brief: coding jobs disrupted, open-source's free speech debate, and more AI Act criticism.

Plus: how Nvidia's $1T valuation could be threatened

Today is July 3, 2023.

This week’s issue explores a number of AI topics now exploding into the mainstream: white-collar job disruption, the benefits of open-source models, and further criticism of the EU’s AI Act.

I also feature a fascinating read from SemiAnalysis that shows how Nvidia’s AI moat could quickly disappear; this is once again a reminder of how fast the AI space moves even in the early innings.

In this issue:

  • 🧑‍💻 AI tools could change the programming profession in profound ways

  • 🤬 Uncensored AI chatbots provoke intense debates over free speech

  • 💣️ EU companies criticize the EU’s AI Act as “catastrophic” in an open letter

  • 🖥️ Nvidia’s AI moat could be threatened as AMD chips deliver better performance

  • 🧪 The latest science experiments, including new image-to-3D-mesh tech that’s better than ever

🧑‍💻 AI tools could change the programming profession in profound ways

“Software is eating the software industry” as skilled workers are becoming more productive and AI starts automating knowledge work, the Wall Street Journal reports (note: paywalled article) in its deep dive on AI’s impact on coders.

Why this matters: while knowledge workers in the past have typically benefited from technology improvements, AI has the potential to upend this trend. A “lost generation of early-career developers” could be the result here as hiring practices shift quickly.

It’s already happening:

  • Since the release of ChatGPT and GitHub’s Copilot last year, experienced programmers have become more productive while companies have slowed down the hiring of junior engineers.

  • Adoption of AI tooling is rapid: 70% of programmers are using or planning to use AI tools, a Stack Overflow survey found.

  • Employment data now shows that junior engineers have also been the first to be laid off over the past year.

Experts expect this same force to come for other white-collar jobs, and the rapid pace of AI may catch a number of professions by surprise in a “cautionary tale for us all,” writes the Journal.

I get asked a lot about what I think about AI and the future of jobs. This is yet another data point to support my own belief: no one really knows, but we should expect our old assumptions to not hold.

🤬 Uncensored AI chatbots provoke intense debates over free speech

We’ve covered the open-source community and how dozens of language models, many unrestricted, have been released in the past few months. The debate over free speech and safety protocols has only gotten more heated in the last few weeks, reports the New York Times (note: paywalled article).

At the heart of the debate:

  • “This is about ownership and control,” explains Eric Hartford, an ex-Microsoft engineer who created the uncensored WizardLM language model.

  • What Hartford and other open-source proponents want: “If I ask my model a question, I want an answer, I do not want it arguing with me.”

Some open-source models are taking a middle ground, but it isn’t easy:

  • Open Assistant, based on Meta’s LLaMA and developed with help from 13,500 volunteers, is experiencing deep community debate about how its safety systems should work.

  • One faction questions whether there should be safety systems in place at all.

Could this lead to a world where everyone has their own personalized chatbot? 

  • Possibly, and that’s the ideal outcome that many open-source LLM creators envision.

  • “Democrats deserve their model. Republicans deserve their model. Christians deserve their model. Muslims deserve their model,” Hartford explained in a blog post on uncensored models. “Every demographic and interest group deserves their model. Open source is about letting people choose.”

What’s clear is that open-source proponents aren’t intimidated by legislation such as the EU’s AI Act. They’re moving full steam ahead as they pursue their vision.

💣️ EU companies criticize the EU’s AI Act as “catastrophic” in an open letter

The EU Parliament greenlit a draft of the AI Act last month, putting it on the path to become law in the coming years. Now a group of 150 EU executives is sounding the alarm over the AI Act, claiming that it may “jeopardise Europe’s competitiveness and technology sovereignty.”

These are some major companies voicing criticism:

  • Renault, Heineken, Airbus, and Siemens are just some of the major EU companies sharing concerns.

  • “The EU AI Act, in its current form, has catastrophic implications for European competitiveness,” says one of its signatories.

They argue that the law is too restrictive:

  • AI innovation could be stifled due to strict rules targeting generative AI systems, the signers say.

  • Registration requirements, risk assessments, and transparency disclosures would subject AI systems to “disproportionate” compliance costs and liability risks.

  • The worst-case outcome? AI companies withdraw from Europe rather than meet compliance requirements.

The EU Parliament doesn’t seem swayed, though:

  • “It is a pity that the aggressive lobby of a few are capturing other serious companies,” said Dragoș Tudorache, a Member of the European Parliament who led the development of the AI Act.

  • Tudorache stressed that the process behind the legislation was collaborative and “industry-led.”

We’re still several years from the AI Act becoming law. Expect debates here to continue, especially as other countries like the US begin to shape their own AI legislation.

🖥️ Nvidia’s AI moat could be threatened as AMD chips deliver better performance

Nvidia’s hardware has so far been the preferred choice for developing and training AI models, helping catapult the company’s market capitalization past $1 trillion in recent weeks.

The moat is now under threat, says SemiAnalysis:

  • Open-source software from MosaicML (which Databricks just acquired for $1.3B) is leveling the playing field, helping AMD’s MI250 GPU achieve nearly 80% of the performance of Nvidia’s A100 GPU on AI-related tasks.

  • And Mosaic hasn’t even tested its software on AMD’s new MI300 yet, which could unlock further gains.

This is a good thing overall: it’s long been the dream of AI engineers to have a hardware-agnostic tech stack, where their code can work across multiple GPU platforms and not require GPU-level programming. That future may be arriving earlier than expected.
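The hardware-agnostic idea can be sketched as a dispatch layer: model code calls one interface, and a registry supplies whichever accelerator backend happens to be present. This is a loose, hypothetical illustration (the names and structure are mine, not MosaicML’s actual API):

```python
# Hypothetical sketch of hardware-agnostic dispatch: user code targets one
# interface; a registry maps it onto whatever accelerator backend exists.
BACKENDS = {}

def register(name):
    """Decorator that records a backend implementation under a vendor name."""
    def wrap(fn):
        BACKENDS[name] = fn
        return fn
    return wrap

@register("nvidia")
def matmul_cuda(a, b):
    # A real stack would launch a CUDA kernel here.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)] for row in a]

@register("amd")
def matmul_rocm(a, b):
    # A real stack would launch a ROCm/HIP kernel here.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)] for row in a]

def matmul(a, b, preferred=("amd", "nvidia")):
    # Model code never names a vendor; the first available backend wins.
    for name in preferred:
        if name in BACKENDS:
            return BACKENDS[name](a, b)
    raise RuntimeError("no accelerator backend registered")

result = matmul([[1, 2]], [[3], [4]])  # [[11]] regardless of backend
```

The point of the design is that swapping Nvidia for AMD (or anything else) changes only which backend is registered, not the model code itself.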

AMD has long played catchup to Big Green in the GPU world. It’s fascinating that software improvements, released as open-source libraries by other companies, could be the accelerant here for them.

🔎 Quick Scoops

Databricks will buy MosaicML for $1.3B, marking one of the largest acquisitions in the generative AI space and a sign of how hot the AI market is right now. (TechCrunch)

AI-generated music is provoking deep debates in the music industry, writes Rolling Stone in this fascinating deep dive. (Rolling Stone / paywalled)

Microsoft, OpenAI sued for privacy violations by 16 people, alleging that the businesses committed “theft” of personal information. (The Register)

Meta offers more transparency on how AI influences content feeds, another step in their “openness” philosophy regarding AI. (The Verge)

🧪 Science Experiments

High-quality 3D object generation now possible from a single image

  • Magic123 enables textured 3D mesh generation from a single unposed image, with outputs significantly better than those of prior approaches.

  • See it in action here, including comparisons to other approaches.

Lifelike animated motion now possible from AI models trained on synthetic data

  • AI models have so far struggled to generate accurate 3D human poses from real images.

  • BEDLAM is the first synthetic dataset of bodies exhibiting lifelike animated motion, all generated from an AI model.

  • See it in action here.

Adding a “backspace” ability to LLMs can help avoid compounding errors

  • A team from Stanford shows that incorporating a “backtracking” ability into LLM output generation helps avoid compounding errors and poor results.

  • What’s more, their approach can be added into existing LLMs without major architectural changes.

  • Full paper here.
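The core idea can be shown with a toy generation loop (my own illustration, not the Stanford team’s implementation): a special backspace token lets the model delete its previous token instead of compounding an early mistake.

```python
# Toy sketch of backtracking generation: when the model emits a special
# <backspace> token, the last generated token is removed rather than
# appended to, so an early error doesn't cascade through the output.
BACKSPACE = "<backspace>"

def generate(model_step, prompt, max_steps=20):
    """model_step(tokens) returns the next token, BACKSPACE, or None (stop)."""
    tokens = list(prompt)
    for _ in range(max_steps):
        nxt = model_step(tokens)
        if nxt is None:       # model signals end of sequence
            break
        if nxt == BACKSPACE:
            if tokens:        # undo the most recent token
                tokens.pop()
        else:
            tokens.append(nxt)
    return tokens

# Scripted stand-in for a model: it emits a wrong token, "notices",
# backspaces, then emits the correction.
script = iter(["cat", BACKSPACE, "dog", None])
out = generate(lambda toks: next(script), ["the"])
# out == ["the", "dog"]
```

In the paper this ability is learned by the model itself; the sketch above only shows why the mechanism stops errors from compounding, since a bad token can be retracted before later tokens are conditioned on it.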

😀 A reader’s commentary

The most valuable thing for me to hear is that I’m offering something differentiated and useful (that’s why I wrote this in the first place; I wasn’t happy with the quality of AI coverage). Thanks George!

That’s it! Have a great week — and for folks in the USA, enjoy your July 4th!

And as always, we want to hear from you on how useful this is. The more feedback you provide, the better our newsletter gets! What else would you like to see covered?

So take the poll below — your signals are helpful!