Comments by Brian Shilhavy
Editor, Health Impact News
Sam Altman and OpenAI finally released their long-anticipated version 5.0 of ChatGPT last week. So many users complained that OpenAI quickly had to restore access to the previous 4.0 versions. (Source.)
Almost everyone in the financial sector is now talking about the AI bubble, either as something that will turn out to be no big deal, or as something far worse.
But every once in a while someone says the quiet part out loud: “What if this is as good as it’s going to get for ‘generative AI’?”
Oh no, that simply could not be true, given how much money has been invested in it based on claims about what it will allegedly do in the future.
That’s like saying “high cholesterol does not lead to heart disease.”
Oh no, that couldn’t be true, because pharmaceutical companies made billions of dollars telling people it was true and then selling them drugs to treat it.
There are just too many “truths” in our society that cannot actually be true, because admitting it would be too costly and too deadly (it would mean arresting mass murderers).
Generative AI is much the same. The problem is that you can only fake it for so long before everyone else figures it out too.
The latest one to say the quiet part out loud on AI: Cal Newport, writing in The New Yorker.
What If A.I. Doesn’t Get Much Better Than This?
GPT-5, a new release from OpenAI, is the latest product to suggest that progress on large language models has stalled.
Excerpts:
Much of the euphoria and dread swirling around today’s artificial-intelligence technologies can be traced back to January, 2020, when a team of researchers at OpenAI published a thirty-page report titled “Scaling Laws for Neural Language Models.”
The team was led by the A.I. researcher Jared Kaplan, and included Dario Amodei, who is now the C.E.O. of Anthropic. They investigated a fairly nerdy question: What happens to the performance of language models when you increase their size and the intensity of their training?
Back then, many machine-learning experts thought that, after they had reached a certain size, language models would effectively start memorizing the answers to their training questions, which would make them less useful once deployed.
But the OpenAI paper argued that these models would only get better as they grew, and indeed that such improvements might follow a power law…
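To make the “power law” idea concrete, here is a minimal sketch of the kind of relationship the paper describes: predicted loss falling smoothly as a power of model size. The constants below are illustrative assumptions chosen only to show the shape of the curve, not the paper’s fitted values.

    # A minimal sketch of power-law scaling: predicted loss falls as
    # (n_c / n_params) ** alpha as the model grows. The constants are
    # illustrative assumptions, not fitted values from the paper.

    def predicted_loss(n_params: float, n_c: float = 8.8e13, alpha: float = 0.076) -> float:
        """Hypothetical test loss for a model with n_params parameters."""
        return (n_c / n_params) ** alpha

    for n in (1e8, 1e9, 1e10, 1e11):  # 100M- to 100B-parameter models
        print(f"{n:.0e} parameters -> predicted loss {predicted_loss(n):.3f}")

Run as written, the predicted loss falls from about 2.83 to 1.67 as the model grows a thousandfold: steady improvement, but with each tenfold jump in size buying a smaller absolute gain. That tension is exactly what the scaling debate below is about.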
A few months after the paper was published, OpenAI seemed to validate the scaling law by releasing GPT-3, which was ten times larger—and leaps and bounds better—than its predecessor, GPT-2.
Suddenly, the theoretical idea of artificial general intelligence, which performs as well as or better than humans on a wide variety of tasks, seemed tantalizingly close. If the scaling law held, A.I. companies might achieve A.G.I. by pouring more money and computing power into language models.
Within a year, Sam Altman, the chief executive at OpenAI, published a blog post titled “Moore’s Law for Everything,” which argued that A.I. will take over “more and more of the work that people now do” and create unimaginable wealth for the owners of capital. “This technological revolution is unstoppable,” he wrote.
“The world will change so rapidly and drastically that an equally drastic change in policy will be needed to distribute this wealth and enable more people to pursue the life they want.”
It’s hard to overstate how completely the A.I. community came to believe that it would inevitably scale its way to A.G.I. In 2022, Gary Marcus, an A.I. entrepreneur and an emeritus professor of psychology and neural science at N.Y.U., pushed back on Kaplan’s paper, noting that “the so-called scaling laws aren’t universal laws like gravity but rather mere observations that might not hold forever.”
The negative response was fierce and swift.
“No other essay I have ever written has been ridiculed by as many people, or as many famous people, from Sam Altman and Greg Brockman to Yann LeCun and Elon Musk,”
Marcus later reflected.
Over the following year, venture-capital spending on A.I. jumped by eighty per cent.
After that, however, progress seemed to slow. OpenAI did not unveil a new blockbuster model for more than two years, instead focussing on specialized releases that became hard for the general public to follow.
Some voices within the industry began to wonder if the A.I. scaling law was starting to falter.
A contemporaneous TechCrunch article summarized the general mood:
“Everyone now seems to be admitting you can’t just use more compute and more data while pretraining large language models and expect them to turn into some sort of all-knowing digital god.”
But such observations were largely drowned out by the headline-generating rhetoric of other A.I. leaders. “A.I. is starting to get better than humans at almost all intellectual tasks,” Amodei recently told Anderson Cooper.
In an interview with Axios, he predicted that half of entry-level white-collar jobs might be “wiped out” in the next one to five years. This summer, both Altman and Mark Zuckerberg, of Meta, claimed that their companies were close to developing superintelligence.
Then, last week, OpenAI finally released GPT-5, which many had hoped would usher in the next significant leap in A.I. capabilities. Early reviewers found some features to like.
Within hours, users began expressing disappointment with the new model on the r/ChatGPT subreddit. One post called it the “biggest piece of garbage even as a paid user.”
In an Ask Me Anything (A.M.A.) session, Altman and other OpenAI engineers found themselves on the defensive, addressing complaints. Marcus summarized the release as “overdue, overhyped and underwhelming.”
In the aftermath of GPT-5’s launch, it has become more difficult to take bombastic predictions about A.I. at face value, and the views of critics like Marcus seem increasingly moderate.
Such voices argue that this technology is important, but not poised to drastically transform our lives. They challenge us to consider a different vision for the near-future—one in which A.I. might not get much better than this.
I recently asked Marcus and two other skeptics to predict the impact of generative A.I. on the economy in the coming years.
“This is a fifty-billion-dollar market, not a trillion-dollar market,”
Ed Zitron, a technology analyst who hosts the “Better Offline” podcast, told me. Marcus agreed:
“A fifty-billion-dollar market, maybe a hundred.”
The linguistics professor Emily Bender, who co-authored a well-known critique of early language models, told me that “the impacts will depend on how many in the management class fall for the hype from the people selling this tech, and retool their workplaces around it.” She added,
“The more this happens, the worse off everyone will be.”
Related:
News about the Impending Market Crash over the AI Bubble is now Going Mainstream
Comment on this article at HealthImpactNews.com.
This article was written by Human Superior Intelligence (HSI)