
Between AI Hype and Skepticism: The Reality of Generative AI

People on both sides of the AI spectrum are earning money and driving traffic by making hyperbolic claims about generative AI (genAI). On one side are the AI zealots, boosters, and cheerleaders. Many of these folks (e.g., OpenAI CEO Sam Altman, NVIDIA CEO Jensen Huang) make their livelihoods by marketing genAI as a potential savior of humanity. These boosters are keen to cite AI’s potential existential risks, but their warnings serve primarily as marketing. By emphasizing how dangerous genAI can be in the wrong hands, boosters like Altman position their initiatives as matters of life-and-death importance. As one would expect, this tactic is particularly common before funding rounds.

On the other side of the spectrum are the cynics and skeptics, some of whom go so far as to label the AI boosters “charlatans” or “grifters.” The skeptics fall into several camps: there are internal skeptics (researchers and academics currently working on AI initiatives), and there are external skeptics, who may have worked on AI projects in the past but now build their personal brands primarily by debunking genAI hype (e.g., NYU professor Gary Marcus, British technology pundit Ed Zitron). The boosters’ behavior is relatively straightforward, so let’s devote a bit more time to the motley world of AI skepticism.

Societal harm and ethical issues

Cynics have many reasons to be skeptical about the current genAI craze. According to Gartner’s 2025 Hype Cycle for AI, genAI has recently moved from the Peak of Inflated Expectations into the Trough of Disillusionment. This isn’t to say that genAI won’t continue to have a positive impact on society, business, education, health, and other facets of life; however, many researchers are increasingly questioning whether the pros do, in fact, outweigh the cons.

Many ethics-focused academic researchers are concerned about the algorithmic biases, misinformation, and other potential societal harms posed by genAI. For example, in their book, “The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want,” researchers Emily Bender (University of Washington) and Alex Hanna (DAIR) describe some chatbots as “a racist pile of linear algebra,” given the tools’ propensity to perpetuate racial inequities.

Aside from misinformation and algorithmic bias, many AI skeptics are concerned about climate impacts and the enormous amount of energy required to operate the data centers that power genAI models. Others worry about genAI eroding critical thinking skills, driving widespread job displacement, and leaving millions of workers behind.

Corporate hype and untenable economics

Not to be outdone, another branch of vocal AI skeptics focuses on the precarious financial situation allegedly faced by many of the big players in the genAI space. For example, Ed Zitron frequently predicts that the genAI bubble will burst, leading to a collapse in stock valuations and a subsequent depression. Likewise, Gary Marcus describes the genAI landscape as economically unstable, given the exorbitant costs of powering the models combined with a lack of corresponding revenue.

Marcus, Zitron, and other vocal critics take issue with the idea that current genAI models, and the continuous scaling of LLMs in particular, can ever lead to artificial general intelligence (AGI) or superintelligence. To be fair, Marcus doesn’t think AGI is unattainable per se; he simply favors a hybrid, neurosymbolic AI approach over the current, widespread approach of scaling LLMs.

Existential risks

Yet another branch of genAI criticism focuses on potential existential harms, such as AI-fueled wars, human extinction, and widespread social instability. For example, Eliezer Yudkowsky, co-founder of the Machine Intelligence Research Institute (MIRI), wrote a March 2023 essay in Time magazine warning that without a halt to frontier AI development, “literally everyone on Earth will die.” Co-written with AI researcher Nate Soares, Yudkowsky’s forthcoming book has an equally incendiary title: “If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All.”

Geoffrey Hinton, who helped develop neural network technologies back in the 1980s, also believes AI poses existential risks. In an April 2025 interview with CBS News, Hinton says, “The best way to understand it emotionally is we are like somebody who has this really cute tiger cub. Unless you can be very sure that it’s not gonna want to kill you when it’s grown up, you should worry.” Yoshua Bengio, a former colleague of Hinton’s and another neural network pioneer, has likewise been outspoken about the potential for AI to outsmart humans and stage a takeover.

To reiterate, AI boosters are also keen to cite AI’s potential existential risks, although this is usually an effort to market their projects, fund a particular organization, or position themselves as experts on AI policy. To be sure, genAI companies are often chasing government contracts and private equity capital, so it makes sense that they’d position their organizations as being the key to preventing existential harms.

AI centrists take a relatively measured approach

As a quick caveat, “centrist” is a relative term: one person’s centrist is another’s “zealot,” and yet another’s “doomer.” Nevertheless, there are some people (e.g., Anthropic CEO Dario Amodei) who talk about the potential negative effects of current genAI models, even when it goes against their financial interests to do so.

Although Amodei is deep in the AI arms race, he’s a realist about many genAI issues. I very much agree with him that AGI is primarily a marketing term, and I appreciate his honesty about the uncertain financial trajectory of genAI companies, especially with regard to scaling LLMs (which his own company is doing).

In a recent interview on Alex Kantrowitz’s Big Technology podcast, Amodei says, “On the underlying technology, I’ve started to become more confident. There isn’t no uncertainty about it. [There is uncertainty.] I think the exponential that we’re on could still totally peter out. I think there’s maybe, I don’t know, a 20 or 25% chance that some time in the next two years, the models just stop getting better—for reasons we don’t understand, or maybe for reasons we do understand, you know, like data or compute availability.”

Although some detractors have accused the Anthropic CEO of being an AI doomer, I find much of his commentary to be realistic and refreshing.

Key takeaways

In our current, over-hyped genAI environment, there are plenty of vocal AI cheerleaders and skeptics. While wading through the assortment of opinions, it’s important to assess the speakers’ financial motives. The CEO of the world’s largest semiconductor company is naturally going to describe AI as being as important as electricity; conversely, many outspoken AI critics are trying to sell books, drive traffic, and build their personal brands by questioning the technology. Although the potential (and the risks) of genAI are real, it’s best to take the hyperbole of the boosters (and the doomers) with a grain of salt.
