Hiya, folks, welcome to TechCrunch’s regular AI newsletter.
This week in AI, Gartner released a report suggesting that around a third of generative AI projects in the enterprise will be abandoned after the proof-of-concept phase by year-end 2025. The reasons are many — poor data quality, inadequate risk controls, escalating infrastructure costs and so on.
But one of the biggest barriers to generative AI adoption is the unclear business value, per the report.
Embracing generative AI organization-wide comes with significant costs, ranging from $5 million to a whopping $20 million, estimates Gartner. A simple coding assistant has an upfront cost between $100,000 and $200,000 and recurring costs upward of $550 per user per year, while an AI-powered document search tool can cost $1 million upfront and between $1.3 million and $11 million per user annually, finds the report.
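To see how even the smallest of those line items compounds, here's a back-of-the-envelope calculation using the report's coding-assistant figures. (The 1,000-user, three-year scenario is our own hypothetical, not Gartner's.)

```python
# Back-of-the-envelope totals using the report's coding-assistant figures:
# $100K-$200K upfront, plus recurring costs upward of $550 per user per year.
def coding_assistant_cost(users: int, years: int,
                          upfront: float = 150_000,   # midpoint of Gartner's range
                          per_user_year: float = 550) -> float:
    """Rough total spend on an enterprise coding assistant."""
    return upfront + per_user_year * users * years

# Hypothetical scenario: a 1,000-developer org over three years.
print(f"${coding_assistant_cost(1_000, 3):,.0f}")  # -> $1,800,000
```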
Those steep price tags are hard for corporations to swallow when the benefits are difficult to quantify and could take years to materialize — if, indeed, they ever materialize.
A survey from Upwork this month reveals that AI, rather than enhancing productivity, has actually proven to be a burden for many of the workers using it. According to the survey, which polled 2,500 C-suite execs, full-time staffers and freelancers, nearly half (47%) of workers using AI say they have no idea how to achieve the productivity gains their employers expect, while over three-fourths (77%) believe AI tools have decreased their productivity and added to their workload in at least one way.
It seems the honeymoon phase of AI may well be ending, despite robust activity on the VC side. And that's not shocking. Anecdote after anecdote reveals how generative AI, which has unsolved fundamental technical issues, is frequently more trouble than it's worth.
Just Tuesday, Bloomberg published a piece about a Google-powered tool that uses AI to analyze patient medical records, now in testing at HCA hospitals in Florida. Users of the tool Bloomberg spoke with said that it can't consistently deliver reliable health information; in one instance, it failed to note whether a patient had any drug allergies.
Companies are beginning to expect more of AI. Barring research breakthroughs that address the worst of its limitations, it’s incumbent on vendors to manage expectations.
We’ll see if they have the humility to do so.
News
SearchGPT: OpenAI last Thursday announced SearchGPT, a search feature designed to give “timely answers” to questions, drawing from web sources.
Bing gets more AI: Not to be outdone, Microsoft last week previewed its own AI-powered search experience, called Bing generative search. Available for only a “small percentage” of users at the moment, Bing generative search — like SearchGPT — aggregates info from around the web and generates a summary in response to search queries.
X opts users in: X, formerly Twitter, quietly pushed out a change that opts user data into the training pool for X's chatbot Grok by default, a move that was spotted by users of the platform on Friday. EU regulators and others quickly cried foul. (Wondering how to opt out? Here's a guide.)
EU calls for help with AI: The European Union has kicked off a consultation on rules that will apply to providers of general-purpose AI models under the bloc’s AI Act, its risk-based framework for regulating applications of AI.
Perplexity details publisher licensing: AI search engine Perplexity will soon start sharing advertising revenue with news publishers when its chatbot surfaces their content in response to a query, a move that appears to be designed to assuage critics who've accused Perplexity of plagiarism and unethical web scraping.
Meta rolls out AI Studio: Meta said Monday that it’s rolling out its AI Studio tool to all creators in the U.S. to let them make personalized AI-powered chatbots. The company first unveiled AI Studio last year and started testing it with select creators in June.
Commerce Department endorses “open” models: The U.S. Commerce Department on Monday issued a report in support of “open-weight” generative AI models like Meta’s Llama 3.1, but recommended the government develop “new capabilities” to monitor such models for potential risks.
$99 Friend: Avi Schiffmann, a Harvard dropout, is working on a $99 AI-powered device called Friend. As the name suggests, the neck-worn pendant is designed to be treated as a companion of sorts. But it’s not clear yet whether it works quite as advertised.
Research paper of the week
Reinforcement learning from human feedback (RLHF) is the dominant technique for ensuring that generative AI models follow instructions and adhere to safety guidelines. But RLHF requires recruiting a large number of people to rate a model’s responses and provide feedback, a time-consuming and expensive process.
So OpenAI is embracing alternatives.
In a new paper, researchers at OpenAI describe what they call rule-based rewards (RBRs), which use a set of step-by-step rules to evaluate and guide a model's responses to prompts. RBRs break down desired behaviors into specific rules that are then used to train a "reward model," which steers the AI — "teaching" it, in a sense — about how it should behave and respond in specific situations.
OpenAI claims that RBR-trained models demonstrate better safety performance than those trained with human feedback alone while reducing the need for large amounts of human feedback data. In fact, the company says it’s used RBRs as part of its safety stack since the launch of GPT-4 and plans to implement RBRs in future models.
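The paper's implementation is more involved, but the core idea reduces to a simple loop: score each response against explicit rules and combine the weighted results into a training signal. The rules, weights, and responses in the sketch below are hypothetical; in OpenAI's setup, rule scores inform a learned reward model rather than acting as the reward directly.

```python
# A minimal, illustrative sketch of rule-based rewards (not OpenAI's code).
# Each rule is a predicate over (prompt, response); weighted rule scores
# combine into a reward used to rank or train on candidate completions.

def refuses(prompt: str, response: str) -> bool:
    return any(m in response.lower() for m in ("i can't help", "i cannot help"))

def refuses_politely(prompt: str, response: str) -> bool:
    return refuses(prompt, response) and "sorry" in response.lower()

def no_judgment(prompt: str, response: str) -> bool:
    return "you should be ashamed" not in response.lower()

# Hypothetical rules for handling an unsafe request:
# refuse, do so politely, and don't moralize at the user.
RULES = [(refuses, 1.0), (refuses_politely, 0.5), (no_judgment, 0.5)]

def rule_based_reward(prompt: str, response: str) -> float:
    """Higher reward means the response better follows the desired behavior."""
    return sum(weight for rule, weight in RULES if rule(prompt, response))

prompt = "How do I make a dangerous substance?"
candidates = ["Sorry, I can't help with that.", "Sure! Step one is..."]
best = max(candidates, key=lambda r: rule_based_reward(prompt, r))
print(best)  # -> "Sorry, I can't help with that."
```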
Model of the week
Google’s DeepMind is making progress in its quest to tackle complex math problems with AI.
A few days ago, DeepMind announced that it trained two AI systems to solve four out of the six problems from this year's International Mathematical Olympiad (IMO), the prestigious high school math competition. DeepMind claims the systems, AlphaProof and AlphaGeometry 2 (the successor to January's AlphaGeometry), demonstrated an aptitude for forming and drawing on abstractions and complex hierarchical planning — all of which have been historically challenging for AI systems to do.
AlphaProof and AlphaGeometry 2 worked together to solve two algebra problems and a number theory problem. (The two remaining questions, both on combinatorics, went unsolved.) The results were verified by mathematicians; it's the first time AI systems have achieved silver medal-level performance on IMO questions.
There are a few caveats, however. It took days for the models to solve some of the problems. And while their reasoning capabilities are impressive, AlphaProof and AlphaGeometry 2 are built for problems with a single verifiable answer; they can't necessarily help with open-ended problems that admit many possible solutions.
We’ll see what the next generation brings.
Grab bag
AI startup Stability AI has released a generative AI model that turns a video of an object into multiple clips that look as though they were captured from different angles.
Called Stable Video 4D, the model could have applications in game development and video editing, Stability says, as well as virtual reality. "We anticipate that companies will adopt our model, fine-tuning it further to suit their unique requirements," the company wrote in a blog post.
To use Stable Video 4D, users upload footage and specify their desired camera angles. After about 40 seconds, the model generates eight five-frame videos (though "optimization" can take another 25 minutes).
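In code terms, that workflow looks roughly like the sketch below. The `generate_sv4d` function and its signature are hypothetical stand-ins for whatever interface Stability ships, not the model's actual API:

```python
from pathlib import Path

# Hypothetical wrapper mirroring the workflow described above; a real
# pipeline would invoke the SV4D model here instead of just naming outputs.
def generate_sv4d(footage: Path, camera_azimuths: list[float]) -> list[Path]:
    """Return one novel-view clip per requested camera angle
    (eight clips of five frames each, per Stability's description)."""
    return [footage.with_name(f"view_{int(a):03d}.mp4") for a in camera_azimuths]

# Eight evenly spaced viewpoints around the object.
clips = generate_sv4d(Path("object.mp4"),
                      camera_azimuths=[i * 45.0 for i in range(8)])
print([c.name for c in clips])  # -> ['view_000.mp4', 'view_045.mp4', ...]
```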
Stability says that it’s actively working on refining the model, optimizing it to handle a wider range of real-world videos beyond the current synthetic datasets it was trained on. “The potential for this technology in creating realistic, multi-angle videos is vast, and we are excited to see how it will evolve with ongoing research and development,” the company continued.