This Week in AI: OpenAI’s talent retention woes

Written by Alyssa Stringer
Published on Aug. 7, 2024, 5:33 p.m.

Hiya, folks, welcome to TechCrunch’s regular AI newsletter.

This week in AI, OpenAI lost another co-founder.

John Schulman, who played a pivotal role in the development of ChatGPT, OpenAI’s AI-powered chatbot platform, has left the company for rival Anthropic. Schulman announced the news on X, saying that his decision stemmed from a desire to deepen his focus on AI alignment — the science of ensuring AI behaves as intended — and engage in more hands-on technical work.

But one can’t help but wonder if the timing of Schulman’s departure, which comes as OpenAI president Greg Brockman takes an extended leave through the end of the year, was opportunistic.

Earlier the same day Schulman announced his exit, OpenAI revealed that it plans to switch up the format of its DevDay event this year, opting for a series of on-the-road developer engagement sessions instead of a splashy one-day conference. A spokesperson told TechCrunch that OpenAI wouldn’t announce a new model during DevDay, suggesting that work on a successor to the company’s current flagship, GPT-4o, is progressing at a slow pace. (The delay of Nvidia’s Blackwell GPUs could slow the pace further.)

Could OpenAI be in trouble? Did Schulman see the writing on the wall? Well, the outlook at Sam Altman’s empire is undoubtedly gloomier than it was a year ago.

Ed Zitron, PR pro and all-around tech pundit, recently outlined in his newsletter the many obstacles that stand in the way of OpenAI’s path to continued success. It’s a well-researched and thorough piece, and I won’t do it an injustice by retreading it here. But Zitron’s points about the mounting pressure on OpenAI to perform are worth spotlighting.

OpenAI is reportedly on track to lose $5 billion this year. To cover the rising costs of headcount (AI researchers are very, very expensive), model training and model serving at scale, the company will have to raise an enormous tranche of cash within the next 12 to 24 months. Microsoft would be the obvious benefactor; it has a 49% stake in OpenAI and, despite their sometime rivalry, a close working relationship with OpenAI’s product teams. But with Microsoft’s capital expenditures growing 75% year-over-year (to $19 billion) in anticipation of AI returns that have yet to materialize, does it really have the appetite to pour untold billions more into a long-term, risky bet?

This reporter would be surprised if OpenAI, the most prominent AI company in the world, failed to source the money that it needs from somewhere in the end. There’s a very real possibility this lifeline will come with less favorable terms, however — and perhaps the long-rumored alteration of the company’s capped-profit structure.

Surviving will likely mean OpenAI moves further away from its original mission and into uncharted and uncertain territory. And perhaps that was too tough a pill for Schulman (and co.) to swallow. It’s hard to blame them; with investor and enterprise skepticism ramping up, the entire AI industry, not just OpenAI, faces a reckoning.

News

Apple Intelligence has its limits: Apple gave users the first real taste of its Apple Intelligence features with the release of the iOS 18.1 developer beta last month. But as Ivan writes, the Writing Tools feature stumbles when it comes to swearing and touchy topics, like drugs and murder.

Google’s Nest Learning Thermostat gets a makeover: After nine long years, Google is finally refreshing the device that gave Nest its name. The company on Tuesday announced the launch of the Nest Learning Thermostat 4, which arrives 13 years after the release of the original and nearly a decade after the third-generation model, ahead of next week’s Made by Google 2024 event.

X’s chatbot spread election misinfo: Grok has been spreading false information about Vice President Kamala Harris on X, the social network formerly known as Twitter. That’s according to an open letter penned by five secretaries of state and addressed to Tesla, SpaceX and X CEO Elon Musk, which claims that X’s AI-powered chatbot wrongly suggested Harris isn’t eligible to appear on some 2024 U.S. presidential ballots.

YouTuber sues OpenAI: A YouTube creator is seeking to bring a class action lawsuit against OpenAI, alleging that the company trained its generative AI models on millions of transcripts from YouTube videos without notifying or compensating the videos’ owners.

AI lobbying ramps up: AI lobbying at the U.S. federal level is intensifying amid a continued generative AI boom and an election year that could influence future AI regulation. The number of groups lobbying the federal government on AI-related issues grew from 459 in 2023 to 556 in the period from January to July 2024.

Research paper of the week

“Open” models like Meta’s Llama family, which can be used more or less however developers choose, can spur innovation — but they also present risks. Sure, many have licenses that impose restrictions, as well as built-in safety filters and tooling. But beyond those, there’s not much to prevent a bad actor from using open models to spread misinformation, for example, or spin up a content farm.

There may be in the future.

A team of researchers hailing from Harvard, the nonprofit Center for AI Safety, and elsewhere proposes in a technical paper a “tamper-resistant” method of preserving a model’s “benign capabilities” while preventing the model from acting undesirably. In experiments, they found their method effective at preventing “attacks” on models (like tricking a model into providing info it shouldn’t) at only a slight cost to the model’s accuracy.

There is a catch. The method doesn’t scale well to larger models due to “computational challenges” that require “optimization” to reduce the overhead, the researchers explain in the paper. So, while the early work is promising, don’t expect to see it deployed anytime soon.
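For intuition, here’s a toy sketch of the general recipe behind this sort of tamper-resistant training: simulate a fine-tuning attack in an inner loop, then update the real weights so the attack fails while a benign task still succeeds. To be clear, this is a hypothetical illustration, not the authors’ algorithm; the model, data and every hyperparameter below are invented.

```python
# Toy illustration of tamper-resistant training. NOT the paper's actual
# algorithm; the model, data, loss weighting and step counts are all invented.
import torch
import torch.nn.functional as F

# Stand-in for a full LLM: a single linear classifier.
W = torch.randn(2, 16, requires_grad=True)
b = torch.zeros(2, requires_grad=True)
outer_opt = torch.optim.Adam([W, b], lr=1e-3)

def forward(x, W, b):
    return x @ W.t() + b

for step in range(200):
    benign_x, benign_y = torch.randn(32, 16), torch.randint(0, 2, (32,))
    harmful_x, harmful_y = torch.randn(32, 16), torch.randint(0, 2, (32,))

    # Objective 1: preserve "benign capabilities".
    benign_loss = F.cross_entropy(forward(benign_x, W, b), benign_y)

    # Inner loop: simulate an attacker fine-tuning the weights toward a
    # "harmful" objective. create_graph=True lets the defender backprop
    # through the simulated attack, which is the step that gets expensive
    # at scale.
    Wa, ba = W, b
    for _ in range(3):
        inner = F.cross_entropy(forward(harmful_x, Wa, ba), harmful_y)
        gW, gb = torch.autograd.grad(inner, (Wa, ba), create_graph=True)
        Wa, ba = Wa - 0.1 * gW, ba - 0.1 * gb

    # Objective 2: make the simulated attack fail, i.e. keep the attacker's
    # post-fine-tuning loss high (note the minus sign).
    attack_loss = F.cross_entropy(forward(harmful_x, Wa, ba), harmful_y)

    total = benign_loss - attack_loss
    outer_opt.zero_grad()
    total.backward()
    outer_opt.step()
```

Note that differentiating through even three simulated attack steps multiplies the cost of every update, which squares with the “computational challenges” the researchers cite for larger models.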

Model of the week

A new image-generating model emerged on the scene recently, and it appears to give incumbents like Midjourney and OpenAI’s DALL-E 3 a run for their money.

Called Flux.1, the model — or rather, family of models — was developed by Black Forest Labs, a startup founded by ex-Stability AI researchers, many of whom were involved with the creation of Stable Diffusion and its many follow-ups. (Black Forest Labs announced its first funding round last week: a $31 million seed led by Andreessen Horowitz.)

The most sophisticated Flux.1 model, Flux.1 Pro, is gated behind an API. But Black Forest Labs released two smaller models, Flux.1 Dev and Flux.1 Schnell (German for “fast”), on the AI dev platform Hugging Face with light restrictions on commercial usage. Both are competitive with Midjourney and DALL-E 3 in terms of the quality of images they can generate and how well they’re able to follow prompts, claims Black Forest Labs. And they’re especially good at inserting text into images, a skill that’s eluded image-generating models historically.
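If you want to kick the tires, the openly released weights load much like any other diffusion model on Hugging Face. Below is a minimal sketch assuming the diffusers library’s Flux support and the black-forest-labs/FLUX.1-schnell repo name as published on Hugging Face; treat the exact calls as illustrative, and check the model card before relying on them.

```python
# Minimal sketch: generating an image with Flux.1 Schnell via Hugging Face's
# diffusers library. Assumes a recent diffusers release with Flux support and
# a GPU with sufficient memory; repo and class names are as published on
# Hugging Face, but verify against the model card.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell",
    torch_dtype=torch.bfloat16,  # half precision keeps memory usage manageable
)
pipe.enable_model_cpu_offload()  # park idle submodules in CPU RAM

image = pipe(
    'a storefront sign that reads "This Week in AI"',  # text rendering is a claimed strength
    num_inference_steps=4,  # Schnell is distilled for few-step generation
    guidance_scale=0.0,     # per the model card, Schnell runs without guidance
).images[0]
image.save("flux_schnell_sample.png")
```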

Black Forest Labs has opted not to share what data it used to train the models (which is some cause for concern given the copyright risks inherent in this sort of AI image generation), and the startup hasn’t gone into great detail as to how it intends to prevent misuse of Flux.1. It’s taking a decidedly hands-off approach for now — so user beware.

Grab bag

Generative AI companies are increasingly embracing the fair use defense when it comes to training models on copyrighted data without the blessing of that data’s owners. Take Suno, the AI music-generating platform, which recently argued in court that it’s entitled to use songs belonging to artists and labels without their knowledge — and without compensating them.

This is Nvidia’s (perhaps wishful) thinking, too, reportedly. According to a 404 Media report out this week, Nvidia is training a massive video-generating model, code-named Cosmos, on YouTube and Netflix content. High-level management greenlit the project, which they believe will survive courtroom battles thanks to the current interpretation of U.S. copyright law.

So, will fair use save the Sunos, Nvidias, OpenAIs and Midjourneys of the world from legal hellfire? TBD — and the lawsuits will take ages to play out, assuredly. It could well turn out that the generative AI bubble bursts before a precedent is established. If that doesn’t end up being the case, either creators — from artists to musicians to writers to lyricists to videographers — can expect a big payday or they’ll be forced to live with the uncomfortable fact that anything they make public is fair game for a generative AI company’s training.
