Photo credit: Emvat Mosakovskis / Shutterstock
When Mark Waclawiak looked at Avangrid's data infrastructure in 2020, he saw a problem familiar to utilities across the country: scattered data.
The company's operational information was spread across three different outage management systems, spanning service territories in New York, Maine, and Connecticut. Most of the data lived in Excel spreadsheets and Access databases. Even basic questions about reliability metrics required manual compilation from multiple systems.
"We were significantly limited," Waclawiak told attendees at Latitude Media's Transition-AI conference. "That is one of the open secrets. A lot of data management is in real base infrastructure — Access and Excel. So we were significantly limited.”
Before focusing on AI applications, Waclawiak's team started with a simple goal: get more value from outage data. They set up an on-premises SQL Server database and began unifying data across their operating companies. The process wasn't flashy, but the results came quickly.
"Within one year we're paying off in wild and incredible returns on just doing core infrastructure development for data and bringing in some data analytics," Waclawiak explained. This allowed Avangrid to show efficacy to regulators and brought “significant buy-in from our regulators and funding."
The success gave Waclawiak's team the credibility to expand. They started bringing in more data sources, like additional outage data, geospatial data, and weather data. Today, Avangrid is exploring more advanced applications, including computer vision for infrastructure inspection. Those applications were possible because of the data foundation — which also helped the utility build an internal team with technical expertise.
Consequently, Avangrid has emerged as one of the leading U.S. utilities with a team specifically devoted to artificial intelligence.
"We had for years these data scientists, these data engineers, these data analysts that are sitting day by day next to electrical engineers and line supervisors to understand what the underlying data means," he explained.
More cautious approaches to adoption are unfolding at other utilities as well.
Kevin Jones, who leads Dominion Energy’s synchrophasor program, calls his utility’s approach to AI adoption “aggressive incrementalism.”
For instance, Dominion's synchrophasors collect detailed measurements of grid dynamics to improve visibility into real-time conditions and help the utility respond to outages. "You don't need AI to get the information that you need from that signal," Jones explained at the conference. "It's basic signal processing, basic power system dynamics, control theory, statistics, nothing really crazy."
Like Waclawiak, though, he sees potential for using AI more once the fundamentals are in place: "There is an interesting way that once you have then characterized those dynamics over time and space, then you can start to look at the very highly nonlinear relationship between say all of those features” using artificial intelligence, he said.
Many utilities are developing pilot programs for AI, but very few have moved from limited trials to full-scale deployments.
A recent partnership between Consumers Energy and Utilidata offers insights into how power providers might take a measured approach to scaling. Consumers recently received $20 million in federal GRIP funding to deploy Utilidata's AI platform across 18,000 smart meters. That first phase is designed to help the utility uncover insights that will inform a bigger rollout.
"The GRIP award is a great down payment on what could end up being a large-scale deployment once we understand how to get as much value out of it [as possible] and make sure it's adding the right value for our customers," Ryan Jackson, who leads corporate strategy at Consumers Energy, told Latitude Media in a previous interview.
Speaking at Transition-AI, Ruth Scotti, who formerly led innovation efforts at Consumers, explained the technical challenges behind the deployment.
"When you try to really implement at scale, you encounter a lot of complexity and it's hard integrating with a utility’s legacy system to find compatibility," Scotti explained at Transition-AI. "If you have a solution that is much more data rich, it could overwhelm the existing utility system."
Josh Wong, CEO of ThinkLabs and founder of Opus One Solutions, said he has adapted to utilities' unique constraints and timelines. ThinkLabs was spun out of GE to build an AI-based grid orchestration "copilot" for grid planners.
The startup works with data that utilities already have, rather than expecting them to overhaul their systems.
"For us, we train based on specific utility circuit data,” Wong said at the event. “They have that data available. It's not perfect, but it exists. They have been using it to design, plan and operate the system for decades.”
One key concern that shapes how – and how fast – utilities invest in AI is data privacy.
"Utilities are very concerned about their data being taken to train public models or multi-customer models that can be sold to another utility," Wong noted. ThinkLabs has adapted to that concern by focusing on deep learning rather than large language models, allowing utilities to maintain control of their operational data.
Learn about the pathways to adopting AI-based solutions in the power sector in a first-of-its-kind study published by Latitude Intelligence and Indigo Advisory Group.
Despite these constraints, Wong sees clear financial incentives for utilities to adopt AI, particularly around operational efficiency: "With every dollar of opex saved, you can spend typically six to eight dollars in capex in return. So I think the workflow efficiency value equation is very strong with AI type of tools."
The challenges aren't just technical or cultural — they're also about market structure. Utilidata estimates 50 million meters across the U.S. could soon be replaced with AI-enabled upgrades. But Angela Kassahun, the company's director of policy and market development, sees states struggling to evaluate and approve the technologies.
For example, traditional benefit-cost analyses don't easily capture how AI systems improve over time as they learn. "Everything from how AI is procured to how it's evaluated in utility labs just looks different because the benefits of artificial intelligence are immediate — but they're richer and long-term as AI algorithms are trained and more use cases are unlocked," Kassahun explained.
Some states are experimenting with new approaches. Michigan, for example, created an accelerated pilot framework that can approve new technology trials in 90 days — very fast by utility standards.
"From a policy and regulatory perspective, I think a lot of states across the country are trying to grapple and balance how to improve reliability in the face of the more frequent extreme weather events while also preparing a distribution grid that can accommodate more distributed energy resources," said Kassahun. "And these things are really primed for an AI solution."
The industry's adoption of artificial intelligence will likely follow the same path as other technologies in the power sector — steady, incremental, and focused on specific operational problems. But as Avangrid's Waclawiak pointed out, this measured approach can still deliver strong results.
"We look at our organization as being able to answer questions that we previously thought were unanswerable," he said. "And that really was just from bringing in different data sets, making it available, clean and having it essentially intersect with our existing data sets."