Generative AI Still Needs to Prove Its Usefulness

Generative AI took the world by storm in November 2022, with the release of OpenAI’s service ChatGPT. One hundred million people started using it, practically overnight. Sam Altman, OpenAI’s CEO, became a household name. At least half a dozen companies raced to compete with OpenAI, each hoping to build a better system. OpenAI itself sought to outdo GPT-4, its flagship model introduced in March 2023, with a successor, presumably to be called GPT-5. And virtually every company scrambled to find ways of incorporating ChatGPT (or similar technology from other companies) into its business.

There is just one thing: Generative AI doesn’t actually work that well, and maybe it never will.

Fundamentally, the engine of generative AI is fill-in-the-blanks, or what I like to call “autocomplete on steroids.” Such systems are great at predicting what might sound plausible in a given context, but not at understanding at a deeper level what they are saying; an AI is constitutionally incapable of fact-checking its own work. This has led to massive problems with “hallucination,” in which the system asserts, without qualification, things that aren’t true, while making boneheaded errors in everything from arithmetic to science. As they say in the military: “frequently wrong, never in doubt.”

Systems that are frequently wrong and never in doubt make for fabulous demos, but are often lousy products in themselves. If 2023 was the year of AI hype, 2024 has been the year of AI disillusionment. Something that I argued in August 2023, to initial skepticism, has since come to be felt more widely: generative AI might turn out to be a dud. The profits aren’t there—estimates suggest that OpenAI’s 2024 operating loss may be $5 billion—and OpenAI’s valuation of more than $80 billion doesn’t line up with that lack of profits. Meanwhile, many customers seem disappointed with what they can actually do with ChatGPT, relative to the extraordinarily high expectations that had become commonplace.

Furthermore, essentially every big company seems to be working from the same recipe, building bigger and bigger language models, yet all winding up in more or less the same place: models that are about as good as GPT-4, but not a whole lot better. That means no individual company has a “moat” (a business’s ability to defend its product over time), which in turn means that profits are dwindling. OpenAI has already been forced to cut prices; now Meta is giving away similar technology for free.

As I write this, OpenAI has been demoing new products but not actually releasing them. Unless it comes out with a major advance worthy of the name GPT-5 before the end of 2025, one that is decisively better than what its competitors can offer, the bloom will be off the rose. The enthusiasm that propped up OpenAI will diminish, and since it is the poster child for the whole field, the entire thing may well soon go bust.
