Desperate for power, AI hosts turn to nuclear industry


As data centers grow to run ever-larger artificial intelligence (AI) models amid breakneck adoption, the electricity needed to power vast numbers of GPU-filled servers is skyrocketing.

The compute capacity needed to power AI large language models (LLMs) has grown four to five times per year since 2010, and that includes the biggest models released by OpenAI, Meta, and Google DeepMind, according to a study by Epoch AI, a research institute investigating key AI trends.

[Chart: Epoch AI]

AI service providers such as Amazon Web Services, Microsoft, and Google have been on the hunt for power providers to meet the growing electricity demands of their data centers, and that has landed them squarely in front of nuclear power plants. The White House recently announced plans to support the development of new nuclear power plants as part of its initiative to increase carbon-free electricity or green power sources.

AI as energy devourer

The computational power required for sustaining AI’s rise is doubling roughly every 100 days, according to the World Economic Forum (WEF). At that rate, the organization said, it is urgent for the progression of AI to be balanced “with the imperatives of sustainability.”

“The environmental footprint of these advancements often remains overlooked,” the Geneva-based nongovernmental think tank stated. For example, to achieve a tenfold improvement in AI model efficiency, computational power demand could surge by up to 10,000 times. The energy required to run AI tasks is already accelerating, with an annual growth rate between 26% and 36%.

“This means by 2028, AI could be using more power than the entire country of Iceland used in 2021,” the WEF said.
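Those growth rates compound quickly. A minimal sketch of the arithmetic, assuming a purely hypothetical baseline of 100 TWh/year (the WEF supplies only the growth rates, not a starting figure):

```python
# Rough projection of AI energy demand under the WEF's 26%-36%
# annual growth rates. The 100 TWh/year baseline is an assumed,
# illustrative figure, not a sourced one.
def project(baseline_twh: float, annual_growth: float, years: int) -> float:
    """Apply compound annual growth to a baseline energy figure."""
    return baseline_twh * (1 + annual_growth) ** years

baseline = 100.0  # TWh/year, assumed for illustration
for rate in (0.26, 0.36):
    print(f"{rate:.0%} growth over 5 years: "
          f"{project(baseline, rate, 5):.0f} TWh/year")
```

Even at the low end, demand roughly triples in five years; at the high end it more than quadruples, which is the trajectory behind the Iceland comparison.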

Put simply, “AI is not very green,” said Jack Gold, principal analyst with tech industry research firm J. Gold Associates.

LLMs, the algorithmic foundation of generative AI, are trained on vast amounts of data scraped from the internet and other sources. It is the process of training those models, not the act of chatbots and other AI tools answering user queries based on that data — known as “inference” — that requires the overwhelming majority of compute and electrical power.

And, while LLMs won’t be training themselves 100% of the time, the data centers in which they’re located require that peak power always be available. “If you turn on every light in your house, you don’t want them to dim. That’s the real issue here,” Gold said.

“The bottom line is these things are taking a ton of power. Every time you plug in an Nvidia H100 module or anyone’s GPU for that matter, it’s a kilowatt of power being used. Think about 10,000 of those or 100,000 of those, like Elon Musk wants to deploy,” Gold said.
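Gold’s figures translate into a simple back-of-the-envelope calculation. A sketch, using his roughly 1 kW per H100-class GPU and an assumed 1.3x overhead factor for cooling and networking (the overhead value is illustrative, not from the article):

```python
# Back-of-the-envelope facility power draw for a GPU cluster,
# using ~1 kW per H100-class accelerator (per Gold). The 1.3x
# overhead for cooling/networking is an assumed, PUE-style factor.
def cluster_power_mw(gpu_count: int, kw_per_gpu: float = 1.0,
                     overhead: float = 1.3) -> float:
    """Total facility draw in megawatts, including overhead."""
    return gpu_count * kw_per_gpu * overhead / 1000.0

for n in (10_000, 100_000):
    print(f"{n:,} GPUs -> ~{cluster_power_mw(n):.0f} MW")
```

At 100,000 GPUs, that works out to well over 100 MW of continuous draw — on the order of a small power plant’s output dedicated to a single cluster.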

The hunt for power heats up

Rather than adding new green energy to meet AI’s power demands, tech companies are seeking power from existing electricity resources. That could raise prices for other customers and hold back emission-cutting goals, according to The Wall Street Journal and other sources.

According to sources cited by the WSJ, the owners of about one-third of US nuclear power plants are in talks with tech companies to provide electricity to new data centers needed to meet the demands of an artificial-intelligence boom.

For example, Amazon Web Services is expected to close a deal with Constellation Energy to directly supply the cloud giant with electricity from nuclear power plants. An Amazon subsidiary also purchased a nuclear-powered data center in Pennsylvania for $650 million, and it plans to build 15 new data centers on its campus that will feed off that power, according to Pennsylvania newspaper The Citizens’ Voice.

(AWS did not respond to a request for comment from Computerworld at the time of this article’s publication.)

One glaring problem with bringing new power online is that nuclear power plants can take a decade or more to build, Gold said.

“The power companies are having a real problem meeting the demands now,” Gold said. “To build new plants, you’ve got to go through all kinds of hoops. That’s why there’s a power plant shortage now in the country. When we get a really hot day in this country, you see brownouts.”

The available energy could go to the highest bidder. Ironically, though, the bill for that power will be borne by AI users, not its creators and providers. “Yeah, [AWS] is paying a billion dollars a year in electrical bills, but their customers are paying them $2 billion a year. That’s how commerce works,” Gold said.

“Interestingly enough, Bill Gates has an investment in a smallish nuclear power company that wants to build next-generation power plants. They want to build new plants, so it’s like a mini-Westinghouse,” Gold said. “He may be onto something, because if we keep building all these AI data centers, we’re going to need that power.”

“What we really need to do is find green AI, and that’s going to be tough,” Gold added.

AI as infrastructure planner

The US Department of Energy (DOE) is researching potential problems that may result from growing data center energy demands and how they may pose risks to the security and resilience of the electric grid. The agency is also employing AI to analyze and help maintain power grid stability.

The DOE’s recently released AI for Energy Report recognized that “AI itself may lead to significant load growth that adds burden to the grid.” At the same time, a DOE spokesperson said, “AI has the potential to reduce the cost to design, license, deploy, operate, and maintain energy infrastructure by hundreds of billions of dollars.”

AI-powered tools can substantially reduce the time required to consolidate and organize the DOE’s disparate information sources and optimize their data structure for use with AI models.

The DOE’s Argonne Lab has initiated a three-year pilot project with multiple work streams to assess using foundation models and other AI to improve siting, permitting, and environmental review processes, and help improve the consistency of reviews across agencies.

“We’re using AI to help support efficient generation and grid planning, and we’re using AI to help understand permitting bottlenecks for energy infrastructure,” the spokesperson said.

The future of AI is smaller, not bigger

Even as the massive, expanding data centers run by the likes of Amazon, IBM, and Google require ever more power for LLMs, a shift is taking place that will likely play a key role in reducing future power needs.

Smaller, industry- or business-focused algorithmic models can often deliver better results tailored to an organization’s specific needs.

Organizations plan to invest 10% to 15% more on AI initiatives over the next year and a half compared to calendar year 2022, according to an IDC survey of more than 2,000 IT and line-of-business decision makers. Sixty-six percent of enterprises worldwide said they would be investing in genAI over the next 18 months, according to IDC research. Among organizations indicating that genAI will see increased IT spending in 2024, internal infrastructure will account for 46% of the total spend. The problem: a key piece of hardware needed to build out that AI infrastructure — the processors — is in short supply.

LLMs with hundreds of billions or even a trillion parameters are devouring compute cycles faster than the chips they require can be manufactured or upscaled; that can strain server capacity and lead to an unrealistically long time to train models for a particular business use.

Nvidia, the leading GPU maker, has been supplying the lion’s share of the processors for the AI industry. Nvidia rivals such as Intel and AMD have announced plans to produce new processors to meet AI demands.

“Sooner or later, scaling of GPU chips will fail to keep up with increases in model size,” said Avivah Litan, a vice president distinguished analyst with Gartner Research. “So, continuing to make models bigger and bigger is not a viable option.”

Additionally, the more amorphous data LLMs ingest, the greater the possibility of bad and inaccurate outputs. GenAI tools are basically next-word predictors, meaning flawed information fed into them can yield flawed results. (LLMs have already made some high-profile mistakes and can produce “hallucinations” where the next-word generation engines go off the rails and produce bizarre responses.)

The solution is likely that LLMs will shrink down and use proprietary information from organizations that want to take advantage of AI’s ability to automate tasks and analyze big data sets to produce valuable insights.

David Crane, undersecretary for infrastructure at the US Department of Energy’s Office of Clean Energy, said he’s “very bullish” on emerging designs for so-called small modular reactors, according to Bloomberg.

“In the future, a lot more AI is going to run on edge devices anyways, because they’re all going to be inference based, and so within two to three years that’ll be 80% to 85% of the workloads,” Gold said. “So, that becomes a more manageable problem.”
