AI-generated image credit: Gold Flamingo
A single, resounding message emerged from PG&E’s Innovation Summit last week: AI’s potential to address grid challenges in a scalable, affordable way is too big to ignore.
Two weeks after releasing its R&D strategy report, which for the first time elaborated its vision to become an “AI-enabled utility,” PG&E’s embrace of AI was on full display at the event. CEO Patti Poppe and other executives walked through the ways they’re integrating AI to address several dozen problem statements — from wildfire risk forecasting to task automation for linemen and engineers. They clearly view AI as a critical tool for meeting their strategic initiatives.
This sentiment is largely shared by the broader industry. But despite the growing interest in AI within utilities, widespread rollout of AI-enabled solutions beyond early adopters remains limited.
Industry executives and market studies often point to technical and institutional constraints — lack of internal expertise, data access and governance, and the risks of unproven technology — as the reasons for slow uptake. But there’s another major underlying factor: a misconception of what AI even is, and by extension how it should be used.
Multiple recent studies, including reports from IBM and Itron, show overwhelming support for AI by utility executives and their acknowledgement that the technology can be an integral tool to address some of their top challenges.
Our research earlier this year reflected the same level of enthusiasm, with nearly all of the utilities we spoke with acknowledging the potential role AI could play long-term. Early movers like AES, Portland General Electric, and Avangrid are actively testing and rolling out AI-enabled solutions across some of their biggest strategic priorities, including asset inspection and monitoring, and distribution grid visibility and orchestration.
These stats and activities are promising. But our research also found that actual adoption is still slow-moving, with a majority of the market still in a “wait and see” mode, or only experimenting with AI at the department level. There is still a lack of centralized, utility-wide protocols for testing and deploying AI-enabled solutions.
While the utilities that remain in a holding pattern recognize the long-term potential for AI, a lack of understanding of the inner workings of AI — the so-called “black box” — causes fears around data accuracy, governance, and interpretability.
Of course, not all AI solutions and tools are the same. And treating every AI-enabled solution with the same level of caution is holding utilities back from capturing meaningful value today, while also depriving them of the opportunity to increase their level of comfort and expertise around these technologies.
AI is often viewed in binary terms: either you launch autonomous, end-to-end tools, or you don’t use it at all. This perception stems from personal experience with tools like ChatGPT or Perplexity, which use AI to fully automate tasks. It frames AI as a standalone alternative to “more conventional solutions,” rather than as a component that can enhance them.
This over-generalization reinforces, for many utilities, the perception that they need to make radical changes to hiring and data management before considering deploying any AI-backed solutions.
Learn about the pathways to adopting AI-based solutions in the power sector in a first-of-its-kind study published by Latitude Intelligence and Indigo Advisory Group.
The reality is that there are multiple types of AI, like machine learning or deep learning, that can be utilized in nuanced, specific ways that don’t involve the rollout of an autonomous, black-box solution — like merging disparate data sets to inform more accurate forecasts, or analyzing image-based data to anticipate potential equipment failure.
For example, PG&E’s meteorology team uses machine learning to incorporate weather and environmental data to help develop five-day rolling fire-risk forecasts, and its customer insights team leverages AI to analyze hundreds of thousands of customer survey results. Both applications increased comfort with the tools over time, validated data assumptions and accuracy, and ultimately provided the utility with a path to continuously iterate and expand the use of AI.
To be fair, concerns around data, workforce readiness, and the reliability of AI tools are all valid and aren’t solely the result of misperceptions. Utilities also weigh a number of other factors when evaluating if and when to incorporate AI, especially when debating internal solutions versus vendor solutions, with cost and reliability largely driving decisions to adopt or not.
But all of these concerns become more manageable when taking a nuanced approach to evaluating AI inside a utility.
At our Transition AI 2024 event in two weeks, we’ll be taking a deeper dive into AI adoption trends and the broader factors shaping utility AI roadmaps. There is no universal AI adoption plan or path that utilities can follow, given the vast differences in their strategic priorities, regulatory requirements, and regional considerations. But the leaders all share one key approach: targeting specific outcomes, rather than deploying end-to-end AI solutions.
As Stephanie Sheldrick, senior director of customer insights, strategy & analytics at PG&E, put it at last week's summit: “We had a vision of where we wanted to go in the long term and what was possible, but we knew we would have to start small and bite-sized to get the proof of concept going and learn and iterate on that.”