Is AI a doomed bubble? These economists seem to think so!


With the advent of OpenAI’s ChatGPT, and Silicon Valley behemoths’ desperate attempts to catch up, it’s safe to say that the chatbot wars are heating up. Big investors are rushing to pour dollars into language- and code-generating systems in a bid to be first to market with the next iteration of AI bots.

Despite the hype, however, some experts think it’s all one giant echo chamber. In a scathing essay published in Salon, a pair of them argue that the froth surrounding the AI hype cycle is headed for investors’ worst fear: a bubble. When it pops, they argue, it will reveal large language model (LLM)-powered systems to be far less paradigm-shifting than originally thought.

They say it’s a lot of ‘smoke and mirrors’.

Gary N. Smith, Fletcher Jones Professor of Economics at Pomona College, and technology consultant Jeffrey Lee Funk write:

“The undeniable magic of the human-like conversations generated by GPT will undoubtedly enrich many who peddle the false narrative that computers are now smarter than us and can be trusted to make decisions for us.”

“The AI bubble,” they add, “is inflating rapidly.”

The experts ground their argument in the observation that many investors appear to fundamentally misunderstand the technology behind these easily humanised language models. While the chatbots, especially ChatGPT and the OpenAI-powered Bing search, sound impressively human, they are not actually synthesising information. As such, the authors claim, they fail to provide thoughtful, analytical, or even correct answers.

Instead, the algorithms function more like the predictive-text features on smartphones and in email programs: they merely predict which words are likely to come next in a sentence. Each response to a prompt is a probability calculation, not the artificial general intelligence alluded to by many futurists. The material at hand is not ‘understood’, which leads to the phenomenon of AI hallucination: a serious flaw made all the more troublesome by the models’ proclivity to sound confident, often to the point of turning belligerent, even when delivering wrong answers.

Smith and Funk held nothing back.

“Trained on unimaginable amounts of text, they string together words in coherent sentences based on statistical probability of words following other words. But they are not ‘intelligent’ in any real way — they are just automated calculators that spit out words.”
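To make the “automated calculator” point concrete, here is a minimal, purely illustrative Python sketch. It is a toy bigram model over an invented three-sentence corpus, nothing resembling a production LLM, but it captures the mechanism Smith and Funk describe: each next word is sampled from the observed frequency of words following other words.

```python
import random
from collections import Counter, defaultdict

# Toy training text (illustrative only).
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def next_word(word):
    """Sample the next word from the observed frequency distribution.

    No meaning is involved: the choice is a weighted coin flip over
    whatever happened to follow `word` in the training text.
    """
    candidates = following[word]
    words = list(candidates)
    weights = list(candidates.values())
    return random.choices(words, weights=weights)[0]

# Generate a "sentence": plausible-looking word chains, zero understanding.
word = "the"
sentence = [word]
for _ in range(8):
    word = next_word(word)
    sentence.append(word)
print(" ".join(sentence))
```

Real LLMs replace the frequency table with a neural network over tokens and a vastly larger context window, but the structure of the criticism stands: the output is sampled from a probability distribution over next words, not reasoned into existence.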

The two seemingly disregard the fact that AI chatbots can, at the very least, augment workflows. Various AI optimists have naturally taken the other side of the bet, writing off the tech’s sometimes funny, sometimes genuinely horrifying remarks and errors as part of the process of progress. They often argue that more data, freely supplied by way of public usage, will solve the chatbots’ fact-checking woes.

If that’s the case, it would imply that all that’s needed is a patch or an upgrade, not a fundamental rework of these AI architectures. Naturally, it’s also a tempting narrative for current and prospective investors in AI bots, who would rather believe they have purchased a motorcycle with a broken headlight than one whose engine doesn’t run.

According to Smith and Funk, more data could actually make the programs’ issues worse, precisely because of this fundamental lack of human-like understanding.

“Training it on larger databases will not solve the inherent problem: LLMs are unreliable because they do not know what words mean. Period,” the experts said. “In fact, training on future databases that increasingly include the BS spouted by LLMs will make them even less trustworthy,” they added.

To put it another way, if the experts’ fears play out and misleading chatbots begin to clog up the internet with cheap, unchecked, easy-to-synthesise content, it will become increasingly difficult for search engines to sort out what’s real and what’s hogwash. Trustworthy information could become even scarcer, and the internet could become a victim of its own success.

Certainly, this pessimistic take is only one side of the equation, and it could prove incorrect. It goes without saying that AI-augmented workflows are already a functional part of some jobs. Regardless of who is right, however, it’s worth separating the practical use cases of current models from the fluffy, panacea-like picture painted by AI optimists.

After all, powerful players are pouring money into LLMs, and continue to do so. Surely there’s some form of value there if self-interested mega-corporations like Microsoft are backing a string of generative AI technologies? They have, if nothing else, a vested interest in getting these AI products into consumers’ hands.

“That astonishing and sudden dip,” write Smith and Funk, referring to Google’s steep market-cap losses following the shaky release of its Bard chatbot, “speaks to the degree to which AI has become the latest obsession for investors.”

“Yet their confidence in AI — indeed, their very understanding of and definition of it,” they add, “is misplaced.”
