🤖 AI Brief: AI licensed by the government, doctors prefer AI answers, and more.
Plus: why 61% of Americans think AI is a "threat to humanity."
Today is May 21, 2023.
This week gave us a lot of glimpses into the future of AI, from the emergence of open-source as a power player to calls for regulation of AI models in the United States. Many of our stories touch on two key themes playing out in real time:
Will open-source beat closed-source AI models? The rapid progress here is even causing OpenAI to play defense and consider releasing an open-source model of its own.
Will AI models be licensed by regulatory bodies in the future? This idea is now taking hold in both the US and EU, and could usher in a world where AI models can no longer simply be released into the wild.
As always, I write my weekly AI memo so you, the busy reader, can rapidly digest this news and come away smarter.
In this issue:
🧨 OpenAI CEO testifies before Congress, calls for regulations
🤯 OpenAI to launch an open source model
🏥 Google’s MedPaLM 2 AI beats actual doctor answers in a new study
🔎 61% of Americans consider AI a “threat to humanity”, and other quick scoops
🧪 The latest science experiments, including some mind-blowing image manipulation tech
🧨 The Big Read: OpenAI CEO testifies before Congress, calls for regulations
OpenAI CEO Sam Altman speaks before the US Senate (Photo credit: NYTimes)
During a 3-hour hearing before the US Senate on the future of AI, OpenAI CEO Sam Altman was able to speak to a curious and receptive audience – a big difference from past hearings where tech CEOs have been grilled.
We wrote a full breakdown of all the key moments, and for those with a lot of time, you can watch the entire hearing here.
The most notable bombshell: Altman proposed that the US establish an agency to regulate and license AI models.
The agency would license companies working on advanced AI models and revoke licenses if safety standards are violated.
AI systems that can "self-replicate and self-exfiltrate into the wild" and manipulate humans into ceding control would count as violations, in Altman’s view.
Why this matters:
Senators called AI an “atomic bomb” moment and there’s bipartisan consensus that AI is a serious matter. AI is one of the few issues to cut through political gridlock right now.
OpenAI’s proposal to license AI models may benefit OpenAI the most: at a time when open-source is seeing rapid gains, licensing requirements could crimp progress on that front.
One remarkable moment:
Altman was asked if he could lead such an agency: “Would you be qualified, if we promulgated those rules, to administer those rules?” Sen. Kennedy (R-La.) asked.
But Altman demurred and said he would recommend others: “I love my current job,” he said.
What to expect next:
A bipartisan Senate group is already getting to work on AI legislation. The US still trails the EU in drafting any rules (the EU’s AI Act is nearing finalization), so this is just a first step.
Generative AI is a top priority for the G7 meeting in Hiroshima. Multiple countries have started a coordinated process to regulate generative AI, though specifics remain unclear.
🤯 OpenAI to launch an open source model
As pressure from open-source models heats up, OpenAI is planning to launch an open-source model in addition to its current set of closed models (GPT-4, GPT-3.5, and GPT-3).
Our full report covers the nuances of the situation, but the story comes down to this:
OpenAI’s DALL-E 2 image model has already lost mindshare to the open-source Stable Diffusion.
The rapid progress on the open-source LLM front in the past two months is concerning to OpenAI
Releasing an open-source model is a defensive move: alongside their closed-source models, it could enable OpenAI to control the ecosystem and the overall narrative
And one day later, OpenAI CEO Sam Altman called for licensing of AI models in front of Congress (this would likely slow down open-source). What a coincidence!
Driving the conversation: a leaked “we have no moat” memo from Google about the power of open-source is likely fueling the same debate within OpenAI.
🏥 Google’s MedPaLM 2 AI beats actual doctor answers in a new study
AI continues to transform how professions work. Researchers at Google recently shared findings on a customized version of Google’s PaLM 2 language model that passed US medical test questions with 86.5% accuracy and, more importantly, generated answers that a panel of doctors preferred over actual doctor-written answers. Our full breakdown is here.
A panel of human doctors judged Med-PaLM 2’s answers to be consistently better than real doctor answers. (Credit: arXiv)
How to make sense of this:
Expect domain-specific models to be the future: LLMs will increasingly be fine-tuned to perform specific functions better. Bloomberg’s own finance LLM, BloombergGPT, is another example.
Doctors could be augmented: Doctors (at least in the US) are already in short supply. AI may not replace doctors, but as the pace of progress keeps up, each doctor could see their efficacy magnified.
Few jobs are safe from AI reinvention: even roles that require years of study are finding that AI can do more and more of their work. Outside of hands-on jobs like construction and manufacturing, expect AI to be everywhere.
🔎 Quick Scoops
Neeva, a Google search competitor, is shutting down. Founded by the former head of Google’s ad business, Neeva saw its business vision become uncertain as LLMs transformed search.
Religious chatbots in India are popular, but some are also condoning violence. Millions in India seem comfortable using chatbots posing as Indian deities, but some of the responses they generate pose real risks.
61% of Americans consider AI a threat to humanity, according to a new Reuters poll. Conservative voters were notably more concerned.
People in China are using chatbots to recreate deceased family members. This has been attempted in the past, but the power of LLMs has made it possible in a totally new way.
OpenAI (finally) launches the official ChatGPT iOS app. Hopefully this sweeps aside the sketchy apps posing as official ChatGPT clients that ran amok on the iOS app store.
An analysis of what Google’s recent I/O event means for the AI wars. Great in-depth breakdown by Stratechery.
🧪 Science Experiments
Point-based image manipulation using generative AI is possible
DragGAN enables users to alter pose, shape, expression, and more by simply dragging points on an image.
DragGAN: absolutely crazy that this is possible so early in generative AI’s lifecycle.
XRayGPT open-source model released
XRayGPT aims at the automated analysis of chest radiographs based on given X-ray images. It was fine-tuned on medical data (100k patient-doctor conversations) plus 30k radiology conversations.
How XRayGPT works.
FrugalGPT improves LLM usage costs
An “LLM cascade” method learns which combinations of LLMs can answer queries best at the lowest cost. Researchers found this approach could match the performance of the best LLM (GPT-4) at a significant cost reduction (98%).
Chart showing how FrugalGPT scales performance against cost.
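To make the cascade idea concrete, here is a minimal sketch of how such routing could work: query the cheapest model first, and escalate to a pricier one only when a learned scorer judges the answer unreliable. The model functions, scorer, and prices below are hypothetical stand-ins, not FrugalGPT's actual components.

```python
# Sketch of an LLM cascade: try models cheapest-first, escalate when a
# reliability scorer is not confident enough in the answer. All models,
# prices, and the scorer here are toy stand-ins for illustration only.

def cascade(query, models, scorer, threshold=0.8):
    """Return (model_name, answer, cost) from the first model whose answer
    the scorer rates at or above the threshold."""
    for name, model, cost in models:  # models sorted by ascending cost
        answer = model(query)
        if scorer(query, answer) >= threshold:
            return name, answer, cost
    # No answer cleared the bar: fall back to the most capable model's answer.
    return name, answer, cost

# Toy stand-ins for real LLM API calls and a learned reliability scorer.
def cheap_model(q):      return "maybe"
def expensive_model(q):  return "42"
def toy_scorer(q, a):    return 0.9 if a == "42" else 0.3

models = [("cheap", cheap_model, 0.001), ("expensive", expensive_model, 0.06)]
name, answer, cost = cascade("What is 6 * 7?", models, toy_scorer)
print(name, answer)  # prints: expensive 42 (the cheap answer scored too low)
```

In the paper's framing, the savings come from the many queries the cheap model handles on its own; only the hard residue ever reaches the expensive model.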
That’s it! Have a great week!
And as always, we want to hear from you on how useful this is. The more feedback you provide, the better our newsletter gets! What else would you like to see covered?
So take the poll below: your feedback really helps!