
An overlooked efficiency solution in the AI race: general-purpose computing

Efficient Computer believes hyper-specialization of GPUs has missed a critical opportunity for distributed intelligence – until now.

Published October 28, 2024

Photo credit: Efficient Computer

As the market for AI hardware and services approaches $1 trillion in value, chipmakers are focused on highly specialized, energy-intensive GPUs to run the computing infrastructure.

But Brandon Lucia thinks there’s an overlooked opportunity for embedding intelligence into everything – and doing it far more efficiently: general-purpose computing.

“We're in the business of accelerating everything – and that's the way that we like to think about the problem space,” said Lucia, in an interview.

Lucia is the co-founder and CEO of Efficient Computer, a chipmaker spun out of Carnegie Mellon University that developed a CPU architecture it claims is 100 times more efficient than other general-purpose CPUs. The company emerged from stealth in March with $16 million in seed financing led by Eclipse.

Lucia thinks the chip industry’s hyper-specialization on GPUs for AI and ML has caused many people to overlook important areas for innovation – holding back efficiency improvements in CPUs that could have much wider efficiency benefits across the economy.

“There are a lot of other offerings in the market today that are opting to specialize, and specialization is a way of focusing your hardware on only one narrow stripe of software in the world,” explained Lucia. “That would work if there was only exactly one thing that mattered for applications.”

CPUs, of course, are the foundation of our modern devices – laptops, smartphones, wearables, appliances, and data center servers. They’ve become radically more powerful, but are reaching the limits of their efficiency potential. 

In a traditional von Neumann architecture, 90-99% of the energy consumed in a processor goes to moving data back and forth from the memory. “You’re literally moving the instruction spatially through some bits in your chip, and that takes energy. It’s extravagantly wasteful,” explained Lucia.

Efficient Computer “wiped the chalkboard completely clean” and developed an entirely new spatial data flow architecture. It contains a grid of square processing elements that analyzes how one instruction interacts with another, creating a direct connection without having to move data extraneously into and out of intermediate memory structures. 

“Instead of fetching a new instruction every cycle, which is very, very expensive, we fetch the instruction once, plunk it down on the tile, and then it stays there for millions or billions of processor cycles,” explained Lucia. “We get rid of those costs and that makes us one to two orders of magnitude more energy efficient than other general purpose processors that are out there in the market.”
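The fetch-once economics Lucia describes can be sketched with a toy energy model. This is purely illustrative and not Efficient Computer’s actual design: the energy constants below are assumed arbitrary units, chosen only to show why amortizing the instruction-fetch cost over millions of cycles pays off.

```python
# Toy model: per-cycle instruction fetch (von Neumann style) vs. fetching
# once and pinning the instruction on a tile (spatial dataflow style).
# Energy figures are assumed illustrative units, not measured values.

FETCH_ENERGY = 10.0    # assumed cost of moving an instruction from memory
EXECUTE_ENERGY = 1.0   # assumed cost of the arithmetic itself

def von_neumann_energy(cycles: int) -> float:
    """Pay the fetch cost on every cycle, then execute."""
    return cycles * (FETCH_ENERGY + EXECUTE_ENERGY)

def spatial_dataflow_energy(cycles: int) -> float:
    """Fetch once, keep the instruction resident, then only pay execution."""
    return FETCH_ENERGY + cycles * EXECUTE_ENERGY

cycles = 1_000_000
ratio = von_neumann_energy(cycles) / spatial_dataflow_energy(cycles)
print(f"energy ratio over {cycles:,} cycles: {ratio:.1f}x")
```

With these assumed constants, the fetch cost dominates, so the advantage converges toward the per-cycle fetch-to-execute ratio as cycle counts grow; the real-world gains Lucia cites depend on actual circuit-level costs.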

The architecture can be programmed like any other CPU, avoiding restrictive APIs or a new coding language.


Real-world efficiency benefits

Lucia, a professor in electrical and computer engineering at Carnegie Mellon, worked on the underlying architecture for seven years with his co-founders. Efficient Computer is now working with early customers to build proof-of-concept edge applications.

The startup’s focus is on resource-constrained devices that require battery power, such as wearables, civil infrastructure, and industrial applications. By reducing the need for frequent battery replacements and enabling more sophisticated on-device processing, Lucia said there are huge downstream efficiency benefits.

“If you have battery power, that means you have to go and recharge the device all the time. But there's actually bigger societal consequences,” he said. 

Devices running on the spatial data flow architecture would last years longer – while also supporting new data-rich, real-time applications.

“It brings the lifetime of these devices way out, so you can consider applications that are just not possible today,” said Lucia. “And so that's really motivating for us because that means you have fewer truck rolls going into the world, you have fewer batteries getting thrown into the ocean. Those don't become a waste product of doing a very large scale deployment like that.”

Lucia thinks general-purpose computing has been ignored in favor of processors that are optimized for a narrow range of tasks, like AI and ML workloads. These architectures are designed for repetitive operations, like the matrix multiplications in large-scale AI models, but are not as efficient for the more diverse computing needs out in the real world.

He gave an example of a smart camera that processes video and images using onboard computing power. This processing might include things like cleaning up the image, recognizing objects, or running AI algorithms to interpret what’s in the frame.

In a typical smart camera, energy consumption is split between two types of tasks. Around 60% of the energy goes into running AI algorithms to analyze the images, and 40% is spent on non-AI tasks. A specialized processor might be very good at AI, but less efficient at non-AI tasks. Optimizing only for AI leaves a huge amount of untapped efficiency potential. 

“We can take the 60% and we do a killer job at making that fast and efficient, but then we also take that other 40% and we can do a killer job at making that fast and efficient too,” said Lucia.
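Lucia’s 60/40 arithmetic follows an Amdahl’s-law pattern: the share of the workload you don’t accelerate becomes a hard floor on total savings. A quick back-of-the-envelope script makes this concrete; the 10x speedup factors below are hypothetical, chosen only to illustrate the shape of the argument.

```python
# Amdahl's-law-style energy estimate for the smart-camera split described
# above (60% AI, 40% non-AI). Speedup factors are hypothetical.

def total_energy(ai_share: float, ai_speedup: float, other_speedup: float) -> float:
    """Energy after acceleration, relative to an unaccelerated baseline of 1.0."""
    other_share = 1.0 - ai_share
    return ai_share / ai_speedup + other_share / other_speedup

# Specialized accelerator: assumed 10x better at AI, no help elsewhere.
specialized = total_energy(0.60, ai_speedup=10.0, other_speedup=1.0)
# General-purpose efficiency: assumed 10x across both shares.
general = total_energy(0.60, ai_speedup=10.0, other_speedup=10.0)

print(f"specialized: {specialized:.2f} of baseline energy")  # 0.46
print(f"general:     {general:.2f} of baseline energy")      # 0.10
```

Under these assumptions, accelerating only the AI share can never cut energy below the untouched 40%, while improving both shares compounds – which is the “killer job on both” point Lucia is making.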

AI enthusiasts often point to the ways the technology could radically accelerate innovation and reshape the economy – across materials discovery, manufacturing efficiency, resource exploration, weather forecasting, grid modeling, and more. 

“The innovative potential of this thing [AI] is absolutely enormous in terms of inventing things that fundamentally wouldn't be possible with traditional techniques – across every industry, but especially around sustainability and clean energy production,” said Crusoe CEO Chase Lochmiller at a recent Latitude Media event.

And chipmakers like NVIDIA say there’s still plenty of headroom for improving GPU efficiency and AI modeling.

But Lucia thinks there’s lots of untapped innovation to come for CPUs – which could extend the life of devices, and enable a wide variety of applications that rely on distributed intelligence. The impact could be equally transformative.

“If you're working in infrastructure intelligence, if you're working in the area of making smart grids smarter, or industrial or civil infrastructure smarter, now is a critical moment of change.”

EVENT
Transition-AI 2024 | Washington DC | December 3

Join industry experts for a one-day conference on the impacts of AI on the power sector across three themes: reliability, customer experience, and load growth.

Register