by Brian Shilhavy
Editor, Health Impact News
I have published dozens of articles since the beginning of 2023 when the AI frenzy started, showing that it was poised to be the biggest market bubble of all time.
While there were a few dissenters back then, now more people are waking up, and many financial analysts are now sounding the alarm.
Of course, the bubble should have burst back in 2023, when the banks serving Big Tech failed: wealthy tech billionaires held accounts at banks like Silicon Valley Bank that far exceeded the FDIC insurance limit of $250,000 per depositor.
There were bank runs back then, many people could not get access to their funds, and total panic ensued.
But Big Tech billionaire voices, such as Mark Cuban, who stood to lose hundreds of millions, convinced the U.S. Government to effectively bail these banks out by covering ALL accounts above the $250,000 limit, and the Big Tech billionaires suffered no losses.
But how much longer can this bubble last?
Back in 2023 I was a very small minority voice publishing the truth about the false claims of LLM (or “Generative”) AI, and usually I had to create my own graphics.
Today, when I looked for a graphic for this article, I had dozens of choices, and the one I chose is from a LinkedIn article published two years ago, in July of 2023, showing that I was not the only voice sounding the alarm even back then.
Yesterday (July 29, 2025), the headline news on the Dow Jones publication “MarketWatch” kicked this warning into the mainstream media, at least in the financial news sector, with the following article being the headline article for most of the day.
Why the man behind ‘The Hater’s Guide to the AI Bubble’ thinks Wall Street’s hottest trade will go bust
Excerpts:
Ahead of earnings reports from several of the major so-called AI hyperscalers due this week, MarketWatch spoke with Zitron to learn more about his perspective. Zitron elaborated on some of the points he made in a recent edition of his newsletter, “Where’s Your Ed At” and explained why he believes the AI boom will eventually lead to a painful bust on par with the collapse of the dot-com bubble.
MarketWatch: In your view, what are some of investors’ most common misconceptions about generative AI and its feasibility as a business?
Zitron: It doesn’t make any money or profit. Really, depending on the company, it is one or both. It’s one of the strangest things I’ve ever seen. It’s not like there are a few incumbents that are profitable but only making a little money. Even the two largest companies making the most revenues, OpenAI and Anthropic, are burning through billions of dollars a year.
MarketWatch: There has been a lot of talk about the potential for AGI — artificial general intelligence. How close are we to developing that?
Zitron: We are nowhere. We don’t have proof it’s even possible. We just don’t. Even Meta, which is currently giving these egregious sums of money to AI scientists — their lead AI scientist said scaling up large language models isn’t going to create AGI.
We do not know how human beings are conscious. We don’t know how human thinking works. How are we going to simulate that in a computer?
Furthermore, there’s no proof that you can make a computer conscious, and right now, they can’t even get agents right.
How the hell are they meant to make a conscious or automated computer? These models have no concept of right or wrong, or rules, or really anything.
They are just looking over a large corpus of data and generating, as they are probabilistic, the most likely thing that you may want it to. It is kind of crazy that they can do it, but what they are doing is not thinking.
Reasoning models are not actually reasoning. They do not reason. They do not have human thought, or any thought. They are just large language models that just spit out answers based on what the user wants.
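Zitron's point that these models are probabilistic text generators rather than thinkers can be illustrated with a deliberately tiny sketch. This is a hypothetical toy bigram model, not how production LLMs are actually built (they use neural networks trained on billions of tokens), but the underlying principle he describes is the same: the output is simply the statistically likely continuation of the input, with no understanding involved.

```python
# Toy "language model": count which word follows which in a tiny corpus,
# then emit the statistically most likely follower. No rules, no concept
# of right or wrong -- just frequencies.
corpus = "the cat sat on the mat the cat ran".split()

counts = {}
for prev, nxt in zip(corpus, corpus[1:]):
    counts.setdefault(prev, {}).setdefault(nxt, 0)
    counts[prev][nxt] += 1

def next_word(word):
    """Return the most frequent follower of `word` -- no 'thinking' involved."""
    followers = counts.get(word, {})
    return max(followers, key=followers.get) if followers else None

print(next_word("the"))  # prints: cat  ("cat" follows "the" twice, "mat" once)
```

Scale this idea up by many orders of magnitude and you get something that can produce remarkably fluent text, yet at no point does anything resembling reasoning occur.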
The only thing I disagree with in this interview is the analyst's claim that “the AI boom will eventually lead to a painful bust on par with the collapse of the dot-com bubble.”
Ah, no. When you look at the total spending on AI right now, this bubble is MUCH bigger than the dot-com bubble, which I lived through; I even started my own ecommerce company during that era, and it is still operational today.
When this AI bubble bursts, the effects will be FAR WORSE!
I have had readers email me and complain that I have been saying this for over two years now, but there is no way I can predict when it will happen, as there are still billions, if not trillions, of dollars being spent by Big Tech to hold this market up.
It won’t last, and the longer it takes for this bubble to burst, the worse the crash will be. It will be truly devastating.
Here are several other articles published during the past couple of weeks that support the fact that this current AI frenzy is a huge bubble that will come crashing down at some point, because the technology, as it functions TODAY, does not even work very well.
Maybe AI Isn’t Going to Replace You at Work After All
AI fails at tasks where accuracy must be absolute to create value.
by Charles Hugh Smith
oftwominds.com
Excerpts:
In reviewing the on-going discussions about how many people will be replaced by AI, I find a severe lack of real-world examples. I’m remedying this deficiency with an example of AI’s failure in the kind of high-value work that many anticipate will soon be performed by AI.
Few things in life are more pervasively screechy than hype, which brings us to the current feeding-frenzy of AI hype. Since we all read the same breathless claims and have seen the videos of robots dancing, I’ll cut to the chase: Nobody posts videos of their robot falling off a ladder and crushing the roses because, well, the optics aren’t very warm and fuzzy.
For the same reason, nobody’s sharing the AI tool’s error that forfeited the lawsuit. The only way to really grasp the limits of these tools is to deploy them in the kinds of high-level, high-value work that they’re supposed to be able to do with ease, speed and accuracy, because nobody’s paying real money to watch robots dance or read a copycat AI-generated essay on Yeats that’s tossed moments after being submitted to the professor.
In the real world of value creation, optics don’t count, accuracy counts.
Nobody cares if the AI chatbot that churned out the Yeats homework hallucinated mid-stream because nobody’s paying for AI output that has zero scarcity value: an AI-generated class paper, song or video joins 10 million similar copycat papers / songs / videos that nobody pays attention to because they can create their own in 30 seconds.
So let’s examine an actual example of AI being deployed to do the sort of high-level, high-value work that it’s going to need to nail perfectly to replace us all at work. My friend Ian Lind, whom I’ve known for 50 years, is an investigative reporter with an enviably lengthy record of the kind of journalism few have the experience or resources to do. (His blog is www.iLind.net.)
Let’s summarize AI’s fundamental weaknesses:
1. AI doesn’t actually “read” the entire collection of texts. In human terms, it gets “bored” and stops once it has enough to generate a credible response.
2. AI has digital dementia. It doesn’t necessarily remember what you asked for in the past nor does it necessarily remember its previous responses to the same queries.
3. AI is fundamentally, irrevocably untrustworthy. It makes errors that it doesn’t detect (because it didn’t actually “read” the entire trove of text) and it generates responses that are “good enough,” meaning they’re not 100% accurate, but they have the superficial appearance of being comprehensive and therefore acceptable. This is the “shoot from the hip” response Ian described.
In other words, 90% is good enough, as who cares about the other 10% in a college paper, copycat song or cutesy video.
But in real work, the 10% of errors and hallucinations actually matter, because the entire value creation of the work depends on that 10% being right, not half-assed.
4. AI agents will claim their response is accurate when it is obviously lacking, they will lie to cover their failure, and then lie about lying. If pressed, they will apologize and then lie again. Read this account to the end: Diabolus Ex Machina.
In summary: AI fails at tasks where accuracy must be absolute to create value. Lacking this, it’s not just worthless, it’s counter-productive and even harmful, creating liabilities far more consequential than the initial errors. (Full article.)
Where LLMs Are Falling Short
by Stephanie Palazzolo
The Information
Excerpts:
I’m back from the International Conference on Machine Learning in Vancouver, one of the biggest annual meetups for artificial intelligence researchers. And this year, the conference underscored all the ways in which large language models are still falling short of everyone’s expectations, despite the immense progress that got us to this point.
Researchers even called into question some of the most promising techniques that have gained popularity over the last year, such as chain-of-thought reasoning, or asking the models to describe how they arrived at an answer—their “thoughts,” so to speak.
For instance, one research paper presented at the event explained how chain-of-thought can actually hurt model performance on certain tasks due to “overthinking.”
In one example, models were shown strings of letters that followed some rule that the model didn’t know. Then the models were shown another string of letters and asked whether it followed the unknown rule or not.
Humans typically perform better on this task when they’re told to go with their gut feeling. However, the models performed worse when asked to explain their reasoning.
Models are great at finding patterns, but when there are so many possibilities for what the unknown rule or pattern might be, they tend to overthink and end up at the wrong answer, the paper argued. (Full article.)
I asked AI and my financial planner the same questions. Here’s how they stacked up.
Google’s Gemini gave me similar savings advice as my financial planner — but here’s why I don’t plan on getting rid of my human adviser anytime soon
by Genna Contino
MarketWatch
Excerpts:
When I logged onto Zoom for my first-ever session with a professional financial planner, I noticed there was a third user on the call: a meeting notetaker powered by artificial intelligence.
As Charlotte, N.C.-based certified financial planner Rob Bacharach talked through the best way to reach my various goals, the bot dutifully listened alongside me, taking notes about retirement contributions and building an emergency fund.
This interaction got me thinking about the broader uses of AI in financial planning, so I asked Google’s generative AI bot Gemini many of the same questions I asked my adviser. It responded with surprisingly similar advice — but it still can’t perform key functions that a financial planner can do, such as automating transfers to savings.
Through this experiment, I found I was much more comfortable talking through my situation with an actual human who can show empathy and trustworthiness — qualities that made me comfortable and confident in my decisions.
“It’s the human adviser who has the expertise, empathy, context and strategy — and that unique combination is what makes human advice so powerful,”
said Pam Krueger, the founder of Wealthramp, a platform that connects investors to advisers.
“AI can support good planning, it can really help save time on ‘tasks,’ but it can’t be the planner.”
And yet, more people are turning to generative AI for money help: The fastest-growing category of queries between March and April 2025 on OpenAI’s ChatGPT was economics, finance and taxes, according to data from Sensor Tower. (Full Article.)
How AI’s Funding Hype is Affecting the Big Tech Labor Force
As I published earlier this month (July, 2025), AI is NOT replacing humans in the workforce, but SPENDING ON AI is forcing mass layoffs. See:
Has the AI Apocalypse Arrived? Tens of Thousands Being Laid Off in Big Tech – AI has to Either Replace Them or AI Spending Must Stop
And these layoffs are continuing. More will most certainly follow.
Intel Confirms Mass Layoffs, Over 24,000 Jobs To Be Cut This Year
Intel hopes to have just 75,000 core employees by year-end, about a third fewer than it had at the end of 2024.
by Jibin Joseph
PCMag
Excerpts:
After months of speculation, Intel has confirmed it is eliminating thousands of jobs in mass layoffs this year.
The company planned to trim 15% of its global workforce last quarter and, in its latest earnings report, has confirmed that a majority of those cuts have already been initiated. Some of those cuts have reportedly impacted workers at Folsom and Santa Clara units in California, as well as Hillsboro and Aloha in Oregon. Other impacted sites include Arizona, Texas, and Israel.
Intel isn’t stopping there. By the end of this year, it hopes to have only 75,000 core employees. According to The Verge, the company had 99,500 core employees at the end of 2024, which means it will have reduced its headcount by one-third or nearly 24,500 people by the end of this year.
Over the last few years, Intel has lost market share to rivals like TSMC and has struggled to develop products that meet the demands of the AI industry.
Last year, it cut 15,000 jobs. (Full article.)
Indeed, Glassdoor to lay off 1,300 staff amid AI push
by Ram Iyer
TechCrunch
Excerpts:
Recruit Holdings, the Japanese parent of Indeed and Glassdoor, said on Friday it is laying off about 1,300 employees at the two companies. The layoffs are part of a broader restructuring that involves Glassdoor’s operations being integrated within Indeed, and an increasing focus on using AI.
As part of the restructuring, Glassdoor’s current CEO, Christian Sutherland-Wong, is leaving the company on October 1. LaFawn Davis, chief people and sustainability officer at Indeed, is also leaving the company.
The job cuts come as tech companies across the world roll back their sustainability initiatives and cut jobs to balance out extensive spending on integrating AI into their businesses.
Tens of thousands of people stand to lose jobs at Microsoft, TikTok, Match, Intel, and Meta, per announcements in just the past couple of months. (Full article.)
Meta AI Researcher Warns of ‘Metastatic Cancer’ Afflicting Company Culture
by Kalley Huang
The Information
Excerpts:
Over the past month, Meta Platforms has overhauled its stumbling efforts in artificial intelligence by making a head-spinning series of hires of leaders and researchers from outside the company. But a key question has lingered: Why has Meta struggled so badly in AI that it’s injecting so much fresh blood into the company to lead a turnaround?
One outgoing research scientist from Meta’s generative AI group has come up with his own diagnosis of the company’s AI problems—and his assessment isn’t pretty. In a more than 2,000-word essay that has circulated inside Meta in recent days, the research scientist, Tijmen Blankevoort, paints a bleak picture of cultural and organizational dysfunction inside Meta that he argues has stymied its progress in AI.
“I have yet to meet someone in Meta-GenAI that truly enjoys being there. Someone that feels like they want to stay in Meta for a long time because it’s such a great place,”
wrote Blankevoort, referring to the nearly 2,000-person group that develops Meta’s flagship AI model, Llama.
“You’ll be hard pressed to find someone that really believes in our AI mission. To most, it’s not even clear what our mission is.”
The Demand for More Data Centers and Energy to Power the AI Bubble
When the AI hype began at the end of 2022, it spurred massive spending on the computer chips that are needed to run these powerful AI programs, and also the energy needed to run these new data centers.
This is just one more aspect of how the AI hype is poised to potentially destroy our economy.
As AI booms, data centers threaten energy grid and water supplies, expert says
by Courtney Sakry
TechXplore
The unseen infrastructure powering artificial intelligence (AI) isn’t digital—it’s physical: massive data centers filled with thousands of computer servers. As the popularity of AI tools continues to grow, it has triggered a once-in-a-generation construction boom for larger and more powerful data centers.
Now, the recently announced AI Action Plan is calling for even more infrastructure to power them.
Virginia Tech’s Landon Marston, associate professor of civil and environmental engineering, explains what the rapid expansion of data centers could mean for our power grids, water supply, and communities.
“The primary driver for energy consumption is the IT equipment itself—the servers run 24/7 to process data. The second major driver is cooling. All that electronic equipment generates a tremendous amount of heat, and data centers must run massive cooling systems to keep servers from overheating.
AI-specific servers are especially power-hungry because of the intense calculations they perform,” Marston said.
“It could lead to data centers being built without adequate grid planning, increasing the risk of local blackouts,” said Marston.
“It could also allow facilities to be built without proper consideration of local water availability, water infrastructure, and financial agreements that ensure long-term sustainability of the water system.”
Big Tech Wants Nuclear-Powered AI Now, But Here’s What They’re Not Telling Us
Experts say tech companies’ nuclear reactor plans are immensely challenging at best.
by Emily Forlini
PCMag
Excerpts:
In the past year, nuclear power has reentered the conversation, hailed as a way to advance energy-hungry AI technology without devastating the planet and causing our electrical bills to skyrocket. So tech companies are going nuclear—announcing plans to add more reactors in an all-out battle to secure as much data center power as possible and one-up each other’s AI ambitions.
It won’t be easy. And it might not work at all.
In 2027, Microsoft will reopen Pennsylvania’s Three Mile Island—nearly half a century after an infamous partial meltdown at the plant. A Microsoft spokesperson tells us that nuclear energy will help build “a decarbonized grid for our company, our customers, and the world.” Also in 2027, Meta plans to reopen an abandoned Illinois reactor. Meanwhile, both Amazon and Google have invested in new reactor tech.
President Trump has issued four executive orders to promote the nuclear industry. The administration and Westinghouse this month announced the opening of 10 new reactors in the US, with construction starting in 2030.
But the company’s last reactors in Georgia didn’t fare so well: They were seven years late, $18 billion over budget, and bankrupted the company. This time, Westinghouse says it will use Google’s AI products to streamline development. Essentially, AI will help in creating reactors to power AI.
What could go wrong?
To find out how feasible all of this is, I recently spoke to several experts in nuclear power. They point out major hurdles to growing our nation’s nuclear capabilities, foremost among them high costs and long construction timelines.
Then there’s the potential public panic over a Chernobyl happening in their backyards. Plus, everything rests on new technology known as small modular reactors (SMRs), which is still in development and completely unproven at scale.
It could be a long time—decades, even—before your conversations with ChatGPT are powered by nuclear energy, and nothing is guaranteed. Here’s what Big Tech isn’t telling you about the challenges ahead.
AI in Wyoming may soon use more electricity than state’s human residents
Proposed data center would demand 5x Wyoming’s current power use at full deployment.
by BENJ EDWARDS
ArsTechnica
Excerpts:
On Monday, Mayor Patrick Collins of Cheyenne, Wyoming, announced plans for an AI data center that would consume more electricity than all homes in the state combined, according to The Associated Press. The facility, a joint venture between energy infrastructure company Tallgrass and AI data center developer Crusoe, would start at 1.8 gigawatts and scale up to 10 gigawatts of power use.
The project’s energy demands are difficult to overstate for Wyoming, the least populous US state. The initial 1.8-gigawatt phase, consuming 15.8 terawatt-hours (TWh) annually, is more than five times the electricity used by every household in the state combined.
That figure represents 91 percent of the 17.3 TWh currently consumed by all of Wyoming’s residential, commercial, and industrial sectors combined. At its full 10-gigawatt capacity, the proposed data center would consume 87.6 TWh of electricity annually—double the 43.2 TWh the entire state currently generates. (Full article.)
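As a sanity check on the figures quoted above (assuming continuous full-load operation, which is how such capacity-to-consumption conversions are typically annualized), the arithmetic works out:

```python
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

# Initial phase: 1.8 gigawatts running around the clock
initial_twh = 1.8 * HOURS_PER_YEAR / 1000  # GWh -> TWh
print(round(initial_twh, 1))  # prints: 15.8  (matches the article's 15.8 TWh)

# Share of Wyoming's current total consumption of 17.3 TWh
print(round(initial_twh / 17.3 * 100))  # prints: 91  (percent)

# Full build-out: 10 gigawatts
full_twh = 10 * HOURS_PER_YEAR / 1000
print(round(full_twh, 1))  # prints: 87.6  (roughly double the 43.2 TWh the state generates)
```

In other words, the article's numbers are internally consistent: a single data center at full capacity would draw about twice the electricity the entire state currently produces.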
Other Ways the AI Hype is Affecting Society and the Economy
ChatGPT outage shows just how many people are using AI at work
‘I no longer know how to work’: ChatGPT outage causes panic among some employees
by Weston Blasi
MarketWatch
The public outcry over Tuesday’s ChatGPT outage showed just how many people use the artificial-intelligence service on a regular basis.
OpenAI’s 400 million ChatGPT users experienced “elevated error rates” worldwide on Tuesday as consumers reported thousands of technical issues, according to data from DownDetector.com, a website that tracks online service outages.
“I just stayed up till 4:30 a.m. getting my last part of a project done so ChatGPT could review it,” one Reddit user posted beneath a screenshot showing a “something has gone wrong” error message from the app. “Lovely way to head to bed.”
“I no longer know how to work,” an X user posted about the outage.
“Millions forced to use brain as ChatGPT takes day off,” another Reddit user joked.
ChatGPT had 400 million weekly active users as of February. (Full article.)
AI Threats Raise Demand for Cybersecurity Products That Don’t Exist (Yet)
by Aaron Holmes
The Information
Excerpts:
Artificial intelligence that handles complex tasks with minimal human oversight, also known as an agent, is creating a bevy of security holes that require plugging.
The problem: Tools to protect companies against the risks posed by the newfangled AI don’t exist yet, according to cybersecurity sellers, investors and IT executives.
That’s why some corporate security executives say they’ve blocked employees from using new products like OpenAI’s ChatGPT Agent mode, which takes over customers’ web browsers to send emails or shop online on their behalf.
Investors are trying to fund startups aiming to fix agent-related security problems that have already emerged publicly, and existing cybersecurity firms are racing to build new products as well.
“A lot of enterprises are now realizing they want to buy a solution for these new vulnerabilities, but there isn’t really a solution on the market right now,”
said Jim Routh, a former chief information security officer at companies including MassMutual, Aetna and KPMG.
Among the AI security holes that don’t have clear solutions: preventing AI agents from taking harmful actions like deciding to wipe out a company’s code base as a way to fix a small bug; giving agents access to third-party applications such as Gmail or Salesforce so they can send emails or log data in applications—without compromising workers’ passwords; ensuring that apps employees create by vibe coding with AI don’t include any harmful code; and blocking AI agents from interacting with malicious websites set up by hackers.
Conclusion: AI Hype Could Destroy our Economy and Society
The truth about generative AI today is that it presents a very real threat to our economy and society, but for the OPPOSITE reasons that AI believers tell us.
The danger of AI today is that it is over-hyped and has infected all sectors of our society, which will end in catastrophe because Big Tech has wasted our national wealth on science fiction. And they now have an ignorant President who believes their lies and is looking to get rich off it himself, along with his family.
Those who believe the AI hype, including most people in the Alternative Media who publish articles every day sowing fear among their audiences that AI is going to take their jobs away, or replace humans altogether and destroy the world, are FALSE PROPHETS.
The Technocrats thrive on this false fear and use it to keep funding their AI enterprises, even though by now most of them know that this “new” technology is over-hyped and that we have a huge AI bubble that is going to eventually cause economic chaos.
If you claim to be a believer and disciple of Jesus Christ, you need to repent of your sins, if you are sowing fear over AI to your audiences.
The truths contained in the ancient scriptures completely contradict the concept of “transhumanism”.
While technology can be used in human beings, such as prosthetics and pacemakers, it can NOT create a new being that is a “transhuman.”
To learn more about this, see:
What is Life?
The Brain Myth: Your Intellect and Thoughts Originate in Your Heart, Not Your Brain
Also learn how the Human Superior Intelligence network is far superior to any technology network that man and Satan can create.
Comment on this article at HealthImpactNews.com.
This article was written by Human Superior Intelligence (HSI)