AI and Electricity


Andrew Ng, the famous AI scientist (and the guy who never seems to age), once said, “AI is the new electricity,” and he was absolutely right about the analogy. This article, however, is not about the analogy. It is about actual electricity, the kind measured in amperes, volts, and kilowatt-hours, and why it is a huge barrier to the growth of artificial intelligence for businesses and model-training giants like OpenAI.

The Hidden Energy Cost Behind NVIDIA’s Data Center GPUs

In 2023 alone, NVIDIA sold 3.76 million data center GPUs. Powering that many GPUs requires roughly the same amount of energy needed to power the entire city of Phoenix, Arizona: about 620,000 homes consuming roughly 11.2 terawatt-hours in total, or approximately 18,132 kWh per home per year. In dollar terms, that is about $1.6 billion worth of electricity per year just to run them.
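The dollar figure can be sanity-checked from the per-home consumption number. A minimal back-of-envelope sketch, assuming a retail rate of about $0.14 per kWh (the rate is my assumption, not a figure from this article, though Phoenix-area residential rates are in that ballpark):

```python
homes = 620_000
kwh_per_home_per_year = 18_132
usd_per_kwh = 0.14  # assumed typical residential rate; not from the article

total_kwh = homes * kwh_per_home_per_year   # ~1.12e10 kWh, i.e. ~11.2 TWh
annual_cost = total_kwh * usd_per_kwh       # ~$1.6 billion per year

print(f"{total_kwh / 1e9:.1f} TWh, ${annual_cost / 1e9:.2f}B per year")
```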


A Growing Energy Appetite for Artificial Intelligence

What does this mean for AI? When you add the roughly 3.85 million data center GPUs shipped by AMD and Intel in 2023, you need another power grid as big as Phoenix's; between the three companies, electricity needs grow by the equivalent of two major cities every year. Keep in mind, this discussion is limited to data center GPUs, not PC or SoC GPUs. Beyond the three companies currently leading the data center GPU business in the U.S., at least seven others are not far behind. The U.S. government may have imposed sanctions on some of them, but they do make some great GPUs.

The human brain contains about 100 billion neurons and runs on just 20 watts of power, while NVIDIA's most powerful GPU has around 80 billion transistors—roughly equivalent to 0.2 billion neurons—and requires 700 watts of power to operate. And people talk about Artificial General Intelligence or the Singularity competing with nature. But that is a topic for another article.
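Taking those figures at face value, the efficiency gap works out as follows. This is a rough illustration only; "neuron-equivalents" is obviously a loose unit borrowed from the comparison above:

```python
brain_neurons = 100e9
brain_watts = 20
gpu_neuron_equiv = 0.2e9   # the article's rough transistor-to-neuron equivalence
gpu_watts = 700

brain_eff = brain_neurons / brain_watts   # 5e9 neuron-equivalents per watt
gpu_eff = gpu_neuron_equiv / gpu_watts    # ~2.9e5 neuron-equivalents per watt
ratio = brain_eff / gpu_eff               # brain is ~17,500x more power-efficient

print(f"brain is ~{ratio:,.0f}x more power-efficient")
```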

So startups can afford neither the GPUs themselves nor the electricity to power them unless they are backed by deep-pocketed companies like Microsoft and Google, or by VCs. But whether individual startups survive or die, the matter is bigger than that: the growth of AI itself is at stake, and the question is how to solve this problem.

Nuclear Fusion

Researchers are testing many methods to solve this, but I cannot go into all of them in detail right now. What I can say is that major industry giants are actively working on it.

For example:

Sam Altman, CEO of OpenAI, is personally involved in and has invested in a clean-energy startup called Helion, which hopes to build a controllable hydrogen fusion reactor producing a virtually infinite amount of energy by fusing hydrogen atoms. Scientists have already demonstrated the principle in hydrogen bombs; fusion differs from nuclear fission, which has given us both nuclear weapons and decades of sustained nuclear electricity. Engineers still need to turn fusion power into a reliable, sustained source of electricity—a feat that more than 30 startups are working on globally, with no commercial fusion reactor likely to power the grid for at least 10 years. And don't even get me started on solar and wind: despite over $10 trillion in global investments, these technologies still fail to meet demand, and we need to acknowledge that reality.

GPUs Like the Human Brain

Apart from nuclear fusion, low-power GPU technology using integer operations instead of floating-point calculations could be a solution. However, investment in integer-based GPU technology remains limited, as the industry has yet to realize its full potential. Developing these GPUs is mathematically daunting, and researchers need to explore ways to create low-power GPUs, whether integer-based or otherwise, that drastically reduce power consumption, approaching the efficiency of the human brain.
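The integer-based idea can be sketched in a few lines: quantize floating-point values to 8-bit integers, run the expensive multiply-accumulate entirely in integer arithmetic, and rescale once at the end. This is a minimal illustration of the general technique (int8 quantization), with made-up values, not any specific vendor's implementation:

```python
def quantize(xs, bits=8):
    """Map floats to signed integers with a single shared scale factor."""
    scale = max(abs(v) for v in xs) / (2 ** (bits - 1) - 1)
    q = [round(v / scale) for v in xs]
    return q, scale

# Illustrative float weights and activations
w = [0.5, -1.2, 0.8, 0.1]
a = [1.0, 0.3, -0.7, 2.0]

qw, sw = quantize(w)
qa, sa = quantize(a)

# The dot product runs entirely in integer arithmetic...
int_dot = sum(x * y for x, y in zip(qw, qa))
# ...and one float rescale recovers the approximate result.
approx = int_dot * sw * sa

exact = sum(x * y for x, y in zip(w, a))   # reference float result
print(f"exact={exact:.4f}, int8 approx={approx:.4f}")
```

Integer multiply-accumulate units are dramatically smaller and less power-hungry than floating-point ones, which is why this trade of a little precision for a lot of energy is attractive.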

So what is next?

AI growth is definitely crippled when your technology depends on high-power data center GPUs to train large AI models on huge datasets, and you cannot finish before the cutting-edge models become commoditized—usually within two years.

The most advanced models then become cheap and, in many cases, trainable on older, commercially available data center GPUs in the cloud, which are far more affordable and much faster to train on. As a result, you don't need the power that cutting-edge large models require on the latest GPUs, a race that has already priced out 99% of startups: a single cutting-edge large model can easily cost $100 million over a three- to six-month training period.

The good news is that IRVINEi avoids large-model training issues and high electricity costs by using technology that doesn't rely on massive model training. Consumers and businesses manage power consumption through IRVINEi's OVAL AI & IoT Hub, which includes a GPU and features a Single Interface Dashboard (SID).

This hub brings you your own AI Personal Assistant, AI Bodyguard, and Business Manager. IRVINEi's proprietary technology combines Large Language Models (LLMs) and Computer Vision (CV) to give you a personalized AI experience, making you feel like Iron Man with your own Jarvis.
