
Under the hood of data center power demand

Microsoft’s former VP of energy breaks down ways to deal with AI-driven load growth.

Listen to the episode on:
Catalyst

Driven by the AI boom, data centers’ energy demand could account for 9% of U.S. power generation by 2030, according to the Electric Power Research Institute. That's more than double current usage. 

So how do we meet that demand? And what impacts will it have on the grid and decarbonization?

In this episode, Shayle talks to Brian Janous, former vice president of energy at Microsoft and current co-founder of Cloverleaf Infrastructure. Brian talks through the options for meeting data center demand, including shaping computational loads to avoid system peaks and deploying grid-enhancing technologies.

He and Shayle also cover topics like:

  • Why AI-driven demand will be big, even with “zombie requests” in the interconnection queue
  • How hyperscalers are “coming to grips” with the reality that they may not hit decarbonization targets as quickly as planned
  • Why Brian thinks efficiency improvement alone “isn’t going to save us” from rising load growth
  • Why Brian argues that taking data centers off-grid is not a solution 
  • Options for shaping data center load, such as load shifting, microgrids, and behind-the-meter generation
  • How hyperscalers could speed up interconnection by shaping computational loads

Recommended resources

Make sure to listen to our new podcast, Political Climate — an insider’s view on the most pressing policy questions in energy and climate. Tune in every other Friday for the latest takes from hosts Julia Pyper, Emily Domenech, and Brandon Hurlbut. Available on Apple, Spotify, or wherever you get your podcasts.

Be sure to also check out Living Planet, a weekly show from Deutsche Welle that brings you the stories, facts, and debates on the key environmental issues affecting our planet. Tune in to Living Planet every Friday on Apple, Spotify, or wherever you get your podcasts.


Transcript:

Announcer: Latitude Media, podcasts at the frontier of climate technology.

Shayle Kann: I'm Shayle Kann, and this is Catalyst.

Brian Janous: Dublin has one of the highest concentrations of data centers in the world. Data centers account for almost 20% of the entire country of Ireland's electricity. So the grid operator was going to establish a moratorium, but we were actually able to go convince them that what you should actually do is require data centers to be dispatchable in exchange for a grid connection.

Shayle Kann: Okay, yes, it's a conversation about data centers and energy, but this one's a little different, trust me. I am Shayle Kann, I invest in revolutionary climate technologies at Energy Impact Partners. Welcome. All right, so first, big news: we have Catalyst swag at long last, after thousands, maybe millions of you have requested it. We've finally made swag. How do you get it? You ask, great question. Go to latitudemedia.com/referrals and you'll get a unique link to send to your friends to tell them about the show. For every friend who signs up using your unique referral code, you'll have a chance to win some of our highly coveted swag. All right, onto the show. So it is the topic du jour, which is data center energy demand. There are breathless reports coming out almost daily at this point claiming data centers could consume up to maybe 10% of U.S. electricity by 2030.

That was the high end of a recent EPRI report. There are also skeptics on the other side who are saying that data center energy efficiency is going to improve and erase the majority of that growth, and that maybe this is actually a phantom problem that's not real. There are also public market investors right now who are throwing dollars after dollars at everyone from utilities to independent power producers to First Solar, which is going to supply solar power to companies that are going to build generation to supply these data centers. And look, that version of the conversation, which is mostly about exactly how much energy demand will come from data centers over the next few years, is important. And it is interesting, but it's also not really the conversation that I want to have, at least not primarily. I'll just posit at the start that I think the amount of demand for electricity from data centers will be a lot, and certainly a lot more than it has been historically.

And if that's true, then I think the real interesting questions are about how we might meet that demand, the impacts it'll have on the broader electricity sector and on customers, and the differences in the role that data centers might play on the grid in the future versus what they have in the past. That, I think, is the conversation that's not quite being had enough right now. Except I've been having it frequently for a couple of years now with my buddy Brian Janous. Brian spent about 12 years running energy for Microsoft, so he was there long before any of this current wave of AI energy demand showed up. But he was also there at the beginning when it did. He left about a year ago and co-founded a company called Cloverleaf Infrastructure, which is focused on developing sites that are going to be well suited to tomorrow's large energy consumers, including but not exclusively data centers. Anyway, Brian and I chat about this stuff all the time, but this is the first time we're chatting about it together in front of a mic. Here's my conversation with Brian. Brian, welcome.

Brian Janous: Excited to be here, Shayle.

Shayle Kann: So you spent, how long were you at Microsoft in the end? Like 12 years or something?

Brian Janous: Almost 12 years, yes.

Shayle Kann: Okay. Almost 12 years. So you were leading energy at Microsoft for a long time. I don't want to talk through the whole history of that time. I want to talk a little bit, though, about the tail end of it. So you left, what, a year ago or so, right?

Brian Janous: Yes.

Shayle Kann: So you were there for the emergence of ChatGPT and the first, I don't know, 18 months or so after that. So what was happening? Your job was to procure energy for Microsoft broadly, but including for data centers. What was it like, that last 18 to 24 months? How did things change, how quickly, and what was the dynamic?

Brian Janous: Yeah, the change was pretty rapid. I think we all knew that Microsoft was working with OpenAI and that there was this whole movement towards AI potentially coming at some point in the future. But, and I'm going back to probably the summer of 2022, before ChatGPT was released, I don't think there was really a realization of the magnitude of what the technology was going to do and how fast it was going to be adopted. And so we started hearing rumblings of it that summer, and I started getting some odd questions that I'd never gotten before about the scale of certain data centers and how big you could make a data center.

And then once ChatGPT was released in November, that's when it really started to sink in. And then I think the next level of realization was the next spring, when 3.5 came out and you saw this massive leap, what I thought of as a sort of half-click, not even the full click to GPT-4, but just a half-click. And that's the moment I realized that the problem we were going to have was whether we would have enough power to support this technology, which was moving at a pace that, as you know, is way faster than the electric utility industry.

Shayle Kann: But even before that, as we've talked about before, power was already a constraint for you in some locations, right? Because data centers are clustered in certain regions and you also want to be able to get low latency to highly populated areas. And so it was already, if you were in Singapore or London or something like that, power was already a constraint, right? And so what you were realizing was that it was going to be a broader constraint than that?

Brian Janous: Yeah, it was really an accelerant, because I would say in the maybe two years prior to that, we had really shifted our strategy towards a power-first approach to how we thought about siting. Over that decade we had gone from a baseline of a couple of hundred megawatts, and we were growing at a relatively rapid pace, but that's also kind of a small denominator. So the incremental tranches every year that we had to go procure were measured in the tens or maybe hundreds of megawatts. Towards the end of the last decade, that denominator kept growing, and now the denominator was in gigawatts.

And we started to realize that even if we were still growing at the same rate, those tranche sizes were getting quite large. So we were already concerned about the fact that we were out there looking for hundreds of megawatts, if not gigawatts, on a yearly basis, and then you add AI into the equation. And that's when we really started to realize that there was going to be a significant challenge, not just for Microsoft but the entire industry, and not just the AI cloud industry but the entire electric utility industry, to support the continued growth of native cloud applications on top of this new and somewhat uncertain trajectory of where AI was going to go.

Shayle Kann: And then you mentioned this, but we should talk about it for a minute. The odd questions that you started getting in the wake of ChatGPT and its ilk initially were around how big you could make individual data centers. So there's two components here. One is just how many megawatts in total of demand (I should be using gigawatts, maybe tens of gigawatts here), how many gigawatts in total of demand is there going to be from these data centers? There's a second question, which is actually probably the more pertinent question in the context of how this is affecting the electricity sector, which is: what is the scale of an individual data center? So could you talk a little bit more about what those numbers used to look like and what they're looking like now?

Brian Janous: Yeah, so I think if you go back even towards the end of the last decade, a big data center region was measured in a few hundred megawatts.

Shayle Kann: Can we talk about a region... I've come to learn from you mostly what a region means in the context of a single hyperscaler or a single customer. But why do you think in regions and what is a region?

Brian Janous: Sure. A region is just what the cloud applications look like to the outside world. So if you go onto AWS or Azure and you say, I want to stand up an application, you're going to have an option to select a region, and it'll be called US East or US West 2. What that really is is a designation of a cluster of data centers that are all within a certain latency envelope of one another. Northern Virginia is a great example: AWS has their largest region there, their US East region, which at this point is made up of probably dozens of data centers, but to the outside world it looks like one big machine. So that's what a region is. And the reason you have places like Northern Virginia and Amsterdam is that those were where the biggest network hubs were, so everyone clustered around them initially as they were launching some of their early cloud regions. And then over time, Microsoft, Amazon, and Google all built dozens of regions around the world, but those dozens of regions consist of hundreds of individual data centers.

Shayle Kann: Right, okay. So back to your point, a region might've been a couple of hundred megawatts in the olden days, in the olden days of five years ago.

Brian Janous: Right, exactly. And then the questions started coming in, as you were noting earlier, around gigawatt-scale regions, not just gigawatt growth globally year over year, but individual regions that could potentially achieve gigawatt scale. And that was all driven by the AI training needs, because those training models are in and of themselves a big machine. And this is not my forte or area of expertise, the actual architecture inside of a training model, but my understanding of the constraints is that you can't have a training model that exists across multiple regions, because there's a very low latency requirement for that model, which means it largely has to be in a single building or a single campus, or very close together. So that started forcing the need to find individual points on the grid that could by themselves support gigawatt scale.

Shayle Kann: Right. Okay. So stuff's getting bigger, regions are getting bigger, individual data centers are getting bigger. And then, I don't want to spend too much time on the history here, let's fast-forward to today. Obviously now this issue has broken wide open into the public consciousness. It's like the Wall Street Journal's doing an article on this every week. And the other thing that's happening is that utilities are starting to sound the alarm bells publicly and to their regulators about the number of load connection requests that they're getting, predominantly from data centers. And so you've seen these crazy numbers, like AEP saying that there's 80 gigawatts in the queue or something like that, and others as well. What do you make of all of that? Like, how much of that do you think might be speculative, how much is real, and just what's your broad perspective? We're clearly in a moment; how should we be thinking about the actual expected load if power were not a constraint? And then we'll talk about what constraints power does add.

Brian Janous: Yeah, certainly the forecasts you've seen that are based on those AEP numbers of requests they have in queue, if you add all those up, that's way above the actual demand, because there's a lot of zombie requests, just like in the generation queue. We're not going to build all the projects that are in the generation queue right now. A bunch of those are just developers that hope to be able to build a project, but there wouldn't be enough demand to support every project in the queue that we have right now, both on the supply side and the demand side. So despite the sky-is-falling things you've seen about AEP's 80 gigawatts of requests, there's no reason to believe that AEP is going to connect 80 gigawatts in the next 10 years. There is still a very large demand inside.

And I'll use AEP specifically, and there's a reason why we're talking about AEP; I'll get to that in just a second. But there is still an extraordinarily large amount of demand inside of that system, and it is in excess of 10 gigawatts almost certainly.

The reason AEP has gotten a lot of attention is they operate one of the highest-voltage systems in the country. They operate a 765-kV system, which is higher than what anyone else has; there's a little bit of 765 in New York as well. But if you track what's been happening in terms of announced new data centers for the big hyperscalers, you can see that they trace along that 765 line pretty cleanly, because when everyone started to realize a couple of years ago that they were going to need gigawatt scale for single sites, everyone sort of went to the one place in the country that could most easily support that. And that of course is AEP. Then you have a lot of fast followers coming in and saying, well, we want to develop data centers there too. Hence the significant numbers that you're seeing from AEP in terms of what they have in their queue.

Shayle Kann: I think the key point here, and then I want to move on from talking about the overall demand picture, but the key point to me is: there can be a lot of hyperbole, a lot of zombie requests and a lot of vaporware, and it's still an absolutely enormous amount of new demand from an electricity perspective. 10 gigawatts in AEP is still a huge amount. It's one-eighth of the number that they reported, but it's still really difficult.

Brian Janous: It's huge, and it's also changing very quickly because if you go even back to January of this year, PJM had roughly 4 gigawatts of anticipated data center demand for AEP by 2030, and by May, AEP revised that up to closer to 15 in just five months. So it's all changing so quickly, and these utilities are grappling with how real is this? And yes, you're right. That's an enormous amount of power for one utility to have to interconnect in a period of five to seven years.

Shayle Kann: There's also something like 20 gigawatts of total data center load in the United States today, or that figure might be a year or two old, but very recently it was 20 total. So that's 50% of the national total in one utility territory.

Brian Janous: Correct.

Shayle Kann: So, all right, we've made that point. Here's the thing I actually want to talk about. The problem, of course, is these data centers are getting bigger. They're requesting this enormous amount of power, and they have a certain profile, right? Historically, the assumption has been that a data center, particularly a cloud data center, requires 24/7 power. In fact, it requires that so much that it introduces multiple redundancies in order to ensure it; this is why data centers have backup power and backup backup power and all that kind of stuff. So first I want to talk about why that is the requirement, and then I want to talk about this new wave and whether there is anything we can do to make data centers more palatable from a grid perspective, to enable more of this growth and not totally swamp the electricity system. So first question is: in historic times, when you were just building cloud data centers, why do you need such insane resiliency? And then, is there any difference in the new world?

Brian Janous: Yeah, the resiliency is a function of what we were talking about previously in terms of these regions. I think people have this sense of the cloud as really being this sort of ethereal thing, where applications can just move around, and if one data center goes down, well, you can just move everything to another data center. That's not really how it works. When someone's writing an application in the cloud, it's going to a physical location on the grid inside of a particular region. That region itself may have some redundancy; there may be, as I said, multiple data centers inside that region, so if one data center goes down, you don't lose an application. But you still need very high availability for those data centers so that the customer experience is good, so that your searches show up really fast, so that when you go to retrieve a document, it comes up right away.

If someone else is editing it, you're seeing it in real time. All of that requires very, very high availability. There are a few exceptions. I mean, there are some applications that don't require as much availability. One example might be the web crawlers that do the indexing for search. If you drop your new podcast and put it on your website and it doesn't show up in a Google search for a few minutes, that's not a big deal. But if someone actually clicks on it, they want it to download immediately; that is a big deal. So there are different things going on inside those data centers where you do have different levels of availability, but on the whole, availability requirements are extraordinarily high, and that translates into the need for backup generators and UPSs, which are the batteries that control power quality and also bridge to generators in the event of an outage.

And so that's why data centers are generally built with diesel generator backup. Now, I will say that backup is probably a bit of a relic, because when we first started building these data centers 15 years ago, most of them were relatively small. They were 5 or 10 megawatts. They were connected at distribution voltage. So the standard was: I'll put in 48 hours of diesel fuel to manage downtime and outages. Most of these data centers today are connected at very high voltage, the highest voltages, and so the reliability is much higher. So there probably is some flex in there in terms of what data centers actually need. Now, that's the recent storms in Texas notwithstanding; you probably saw the pictures of very high voltage transmission lines that were completely decimated in those storms. So there are still risks for sure. And in California, of course, where you live, every summer there's worry about rolling blackouts because of fire issues. So there are certainly areas where we still have concerns even at the higher voltages, but on the whole, availability is pretty good at that level.

Shayle Kann: All right, so now in the new world, which by the way still has all those same cloud data centers and cloud requirements that are still growing, what we're adding onto it is this AI world. And here I think it's important, in the context of whether you can introduce any flexibility, to distinguish model training and inference, right?

Brian Janous: Yes. And one that comes up a lot is this idea of training models being curtailable, because they actually fit in that category of indexing that I was talking about previously; it's a batch process. Training models in and of themselves are batch processes, and so in theory they could actually shut down during certain periods. That would not be true for inferencing. Inferencing is a lot more akin to the search function.

So if you go into ChatGPT and you want it to make your picture or tell you a story, you want that to happen very quickly. Those applications are still going to require very high availability, similar to what you would expect in normal cloud applications. So I think it's a little bit overblown to say that these are highly curtailable loads, and I think this is also true of the conversation around crypto, that you could just attach a training model to a wind farm and only run it on average 35% of the time. Nobody's going to do that, because the cost of that infrastructure, that server, is extraordinarily high.

So you still want to get very high utilization out of those assets. So maybe you can avoid significant contributions to things like system peak for a few hours a year, but they're not curtailable loads in the sense of you're going to follow the sun with a training model.

Shayle Kann: Yeah, though, okay. I think what you're saying is, for example, historically data centers were not really participants in demand response programs, but if you're doing a training model, maybe demand response is a thing you could actually do. But ultimately for training models, where you could theoretically afford to turn them on and off and not ruin your customer experience, which is the thing you care about, it's sort of an economic question. Well, it's an economic and availability question. And so I wonder whether you could take a view that, look, if the power constraint becomes acute enough, if it truly becomes true that we just do not have enough capacity, the wait times are too long, et cetera, et cetera, and if it remains true, which I believe is true today, that it's just very lucrative for the hyperscalers to run these data centers, do you really think there's nothing there? Perhaps not to the extreme where you connect your training model to off-grid solar or whatever and operate at a 20% capacity factor, but could you avoid system peak daily, potentially? Can you shut down for a couple of hours a day?

Brian Janous: It's possible. I think it goes back to that economic motivation argument and utilization of the asset. And it's akin to the conversation I'm sure you've had on your show about things like hydrogen electrolyzers, right? Electrolyzers are really expensive, so you want to run them at a very high capacity factor. And it's similar to how you would think about a nuclear plant. You're not going to take a nuclear plant and turn it on and off every day, because it's very expensive from a CapEx standpoint, so you want high utilization. Anything that has very high upfront CapEx usually needs very high utilization to justify that CapEx.

And so that's why I think you're not going to see, at least from an economic standpoint, a significant push towards having a lot of curtailable workloads. Now, you also noted the capacity issue and the availability issue. So there's two different things at play. One is: could I curtail this workload and avoid really high power prices in the afternoon if it's a really hot day in Texas, for instance? But the second thing is what utilities are really grappling with, and we'll go back to AEP. If AEP has to connect 10 gigawatts of firm load that needs 24/7 power, then it needs 10 gigawatts of new peak capacity to support it, because they just increased their system peak by 10 gigawatts.

Shayle Kann: More, right? They need a reserve margin.

Brian Janous: And a reserve margin on top of that, yes, exactly. So that's where this gets really interesting, because if I'm sitting in AEP's shoes and I say, okay, well, it's going to take me seven years to get all of this new capacity online, now I'm a data center sitting there going, do I really want to wait seven years, or am I willing to think a little creatively about how I deploy this capacity such that I can minimize that contribution to AEP's system peak?

Shayle Kann: Right. So that's the thing I want to talk about. What does thinking creatively look like? If you're going to deploy a new data center, what is the suite of options available to you in that exact situation? A situation wherein, theoretically, a utility says to you: look, it's going to take me seven or eight years to get you the capacity that you need under normal circumstances, but maybe I'm willing to work with you if you could figure out a way not to contribute to system peak, or not require me to upgrade this transmission line, or whatever it's going to be depending on the situation. What are the things that data center operators could be doing, and in some cases are starting to think about doing, to offer a different product?

Brian Janous: Sure. So I'll talk about three things. The first one I think gets a lot of attention and frankly is not particularly interesting, and then I'll talk about the ones that are interesting. So the one thing that always comes up in this context is: well, data centers should just go off-grid. Why not just make all of your own power? You don't need the utility. But the problem with that is there's no such thing, truly, as going off-grid, because what you mean by that, at least if we're talking about this decade, is: I'm going to build a natural gas power plant, which, surprise, is actually connected to a grid. So you're just saying, I'm going to go from connecting to the power grid to connecting to the gas grid, which in and of itself also has constraints on it, and you have issues with the firmness of the gas supply.

And so there's no magic button to say, well, I'll just completely get rid of the utility. Certainly not at the scale we're talking about; you're not going to put in solar and storage to supply a 500-megawatt or gigawatt-scale data center. I mean, that's just not feasible. Nuclear comes up a lot, but again, we're trying to be realists about what we're going to do this decade. I'm very bullish on nuclear. You and I have talked about nuclear a lot, and I'm very excited about what's happening in that industry, but the focus that these companies have right now is getting data centers online in the next two, three, four years, and nuclear is not going to solve that problem. So taking the data center completely off-grid is not the solution.

So then the second thing is, one of the great things about data centers is they're already microgrids, right? You already build backup generation; you already have storage. I still remember my very first day at Microsoft. This was in November of 2011, and I was sitting down with one of our engineers and asking him to explain to me how a data center worked. He's walking me through the electrical topology, saying, well, we've got these generators, and then there's these UPS batteries, and so I kind of paused and said: so a data center is just a power plant with an energy storage plant that happens to have a big room of servers next to it? He's like, yeah, that's pretty much it. And I was like, that seems like a huge opportunity. We could do a lot with all those assets. And so we started working at that time on how we could use generators that could be more flexible and used more often, i.e., moving from diesel to natural gas.

And that led to the first project we did, which was a natural gas turbine project in Cheyenne, Wyoming in 2016. We also started looking at how we could use those UPS batteries more effectively. And that led to our first what we called grid-interactive UPS, which is a UPS that could actually respond to grid signals and provide ancillary services, and we deployed that in Dublin a few years ago. So there's a lot of untapped potential in those behind-the-meter assets that data centers already have. And I know one of EIP's portfolio companies, Enchanted Rock, is working on this. There are a lot of companies working on how you can use behind-the-meter batteries, whether that's an extension of the UPS or even just a customer-sited stationary storage project. All of those have a ton of potential for data centers.

But even though this has been done in the industry, it hasn't become normalized for data center behind-the-meter assets to really become grid assets. Probably one of the few places in the world where that is becoming the norm is Dublin. And the reason it's the norm in Dublin is that the grid operator was moving towards a moratorium on data centers, because Dublin has one of the highest concentrations of data centers in the world. Data centers account for almost 20% of the entire country of Ireland's electricity, and they're almost all in Dublin. So the grid operator was going to establish a moratorium, but we were actually able to convince them that what you should actually do is require data centers to be dispatchable in exchange for a grid connection. So now every new data center in Dublin is getting a natural gas generator behind the meter so it can be flexible and avoid that contribution to system peak.

So behind-the-meter opportunities are enormous, and they're certainly known, but I think they're largely untapped and underutilized. The third thing is the utility side. The utility also has the ability to bring flexibility and dispatch capabilities in ways they historically haven't, the biggest being, of course, grid-enhancing technologies. What can we do to squeeze more out of the existing system, rather than looking at the system as being very static and immovable, where the only thing I can do is build a new gas plant? There are a lot of things the utility can do on their side of the meter to better unlock the capacity that's there, and in essence minimize the flexibility requirements that a customer would have to bring. So if you bring both of those things to the table at the same time and optimize them together, there are ways to squeeze more out of this system rather than just waiting five to seven years for new transmission and new generation to be built.

Shayle Kann: On that third one, how do you think about fairness across the system? How do you ensure that if a utility is going to spend some money to do something that enables a data center to get connected, it doesn't, one, raise prices for other customers and, two, limit the amount of other stuff that might've gotten connected, or limit the productive use of clean generation that they might've been building? How much innovation is there to be had in the contractual structure, the tariffs between data centers and utilities?

Brian Janous: I think that's a really important piece of it. Well, one thing first, and it's not the question you asked, but I think it's important: historically, utilities have been motivated to spend as much money as possible on infrastructure, because they want to put that in their rate base and make money on it. So there's already a change when you start talking to a utility and saying, hey, I want you to not spend a lot of money; spend a little bit of money just to get more out of your existing system. That's a pivot from the way utilities have tended to think about the world. And I've had a lot of conversations with folks about this, and one of the questions that comes up is, well, why would a utility want to do that?

Because again, it's not a lot of rate base, say, deploying grid-enhancing technologies versus building a new power plant. The reason they want to do it is the reason we're having this conversation: there's massive demand that they want to go after. They want to attract those customers into their service territory. They want that economic development, job growth, and all the things that come with it. And so they are motivated to say yes, to move quickly, and to the extent that they can extract more out of the existing system, they then have an advantage relative to their peers in attracting that load and that infrastructure investment. So that's a value proposition to them. Now the question becomes, how do you manage that investment? How does the cost recovery work? Because a lot of this is new for utilities to be thinking about, say, offering different levels of service. I mean, not every utility has a dispatchable tariff, right?

So these are the things I'm spending a lot of time talking to utilities about, which is to say: hey, if you are going to attract investment of this scale, and we're talking a lot about data centers, but it's also the onshoring of manufacturing and a huge explosion in jobs and the development of factories and chip fabs. And so utilities want to be able to say yes to these things, but it does require some regulatory innovation as well: understanding what the art of the possible is, what existing constraints your regulators have put on you, and how we can evolve those so that you can be more nimble and position yourself to capture this load growth and the economic opportunity it presents.

Shayle Kann: The other component here that we haven't talked about, but we should, is decarbonization. And it's an interesting challenge for the hyperscalers, because the hyperscalers have been at the forefront of making commitments about decarbonization in general and the use of clean energy for their demand, right? And we saw this a couple of weeks ago: Microsoft announced that despite all their best efforts, their overall emissions have gone up 30% since making their initial commitment. I assume largely just because they have built a lot more data centers, and therefore have a lot more load, and it's hard to keep up on the clean energy side of that. You mentioned some of the options that data center operators have at their disposal, some of the most prominent being natural gas generators on site, or whatever it might be. How do you think about this tension between the pretty aggressive timelines of the decarbonization commitments that the hyperscalers, and some utilities, have made, versus the urgency to just build as much as possible, as quickly as possible, wherever you can build it?

Brian Janous: Yeah, I first want to say kudos to Microsoft for just coming out and saying, hey, this is the reality, our emissions are up. I was fortunate enough to be at the table in late 2019 with our executives establishing the path to carbon negative by 2030. And when I think back to that time, there's a lot of stuff that we didn't know. This was pre-COVID, it was pre the supply chain disruptions that we saw as a result of that, it was pre the interconnection queue issues that have developed, and it was pre-AI. So there are a lot of headwinds that have evolved, and I'm talking about this in the context of Microsoft just because I was at that table, but I'm sure it's the same with Google and Amazon and everyone else who've made enormously ambitious climate commitments. All of the tech companies have done an amazing job of really setting the bar extraordinarily high, and there's been a great amount of positive peer pressure, pushing each other to do more.

And so I think that industry has done a tremendous amount to advance the climate ambitions of corporations across the board. But these are all real headwinds. I mean, these are things that no one anticipated; the forecasts that we had at the time did not incorporate any of them. And so there is, I think, at this point a coming to grips with the fact that those targets are probably not achievable in the time horizons they had all set. I don't know that any of them have come out and explicitly said, we're not going to achieve our 2030 targets. I think what they're saying is, we are doubling and tripling down on those efforts, and we're going to do everything that we possibly can to achieve the targets we've set. But they're also setting a very realistic tone that there's a whole bunch of stuff that's come up that we were not planning for, so now we're having to adapt to this new reality.

And of course, what's happening with the utilities, and we've seen it over and over again, is they're revising IRPs. They're coming out and saying, hey, we're going to have to add more fossil fuels. We're going to have to extend the life of some of these assets, because that's the only way we can meet this continued contribution to peak system demand. And this goes back to what we were talking about earlier: if there's a way to reduce that contribution to peak demand, that's probably the best way to avoid further additions of fossil fuel plants being built by utilities.

Shayle Kann: You know, I've talked about this before, but one of the ways that I think about this is that historically, when power was not a constraint, or at least not as much of a constraint, if you were developing a data center, you had a bunch of other constraints that drove siting, right?

There are other things that you care about. You care about proximity to jobs and water and latency. You care about fiber, you care about a bunch of stuff, and power was somewhere down the rank-ordered list of things you filtered for when you were developing. And by the way, decarbonization, or access to clean power at that site, and then ideally access to 24/7 clean power at that site, which is the goal the hyperscalers set for themselves, or at least I think Google and Microsoft have, was on that list too. But it seems what has happened is that power has shot to the top of that list, and maybe is number one with a bullet, and everything else is a few levels below. And so it's not that it doesn't matter; it's that it's deprioritized relative to "we just have to build where we can build." Is that a reasonable way to think about it?

Brian Janous: I think so. And obviously I'm not on the inside at this point, but my perception is that if you go back a few years, zero-carbon energy on day one was really becoming a non-negotiable: we wanted the energy to be zero carbon, and we wanted it done and ready to go when we plugged in the data center. I think the perception today inside those companies is, we need power now, we need a path to get lots of it, and we also want line of sight to where that zero-carbon energy is going to come from at some point. So I think that's the trade-off that's probably happening at this point. There is such a land grab, or a power grab, to get as much capacity as possible to plug in as many GPUs as possible, because this is really a game about scale and building the biggest machines.

And there's a lot of discussion right now about efficiency: what if Nvidia comes out with a more efficient chip? Which they have; they continue to innovate and come out with more efficient chips. But the example that I've used with utilities is this: let's say Meta finds a piece of land, or a point on the grid, where they can pull three gigawatts of power in one location, and then the next day Nvidia comes out with a chip that's twice as efficient. Is Meta going to build a 1.5-gigawatt data center? No, they're going to build a three-gigawatt data center that is twice as powerful as the one they thought they were going to build, right? That's what this AI game is all about: building the most powerful models that you can. And so that's why efficiency isn't really going to save us in this particular context.
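Brian's efficiency point reduces to simple arithmetic: at a fixed grid connection, a chip that is twice as efficient doubles the compute delivered rather than halving the power drawn. A sketch with illustrative, made-up numbers:

```python
# Illustrative arithmetic for the point above: efficiency gains at a fixed
# power budget scale up compute; they don't scale down the load.
site_mw = 3000            # the hypothetical 3 GW point of interconnection
old_compute_per_mw = 1.0  # normalized compute per MW with the old chip
new_compute_per_mw = 2.0  # a chip twice as efficient

compute_old = site_mw * old_compute_per_mw  # what the planned site would deliver
compute_new = site_mw * new_compute_per_mw  # same 3 GW site, new chip

print(compute_new / compute_old)  # -> 2.0: same 3 GW of load, twice the compute
```

The grid sees the same three gigawatts either way; what changes is the size of the model that can be trained behind it.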

Shayle Kann: All right, Brian, I think we've covered a lot of what's going on today, but as you said, it is changing in real time, so we're going to have to do it again as soon as the next wave of craziness hits us. So we'll have you back on, but in the meantime, thanks so much.

Brian Janous: All right, thanks Shayle, really enjoyed it.

Shayle Kann: Brian Janous is the co-founder of Cloverleaf Infrastructure and the former VP of energy at Microsoft. This show is a production of Latitude Media. You can head over to latitudemedia.com for links to today's topics. Latitude is supported by Prelude Ventures. Prelude backs visionaries, accelerating climate innovation that will reshape the global economy for the betterment of people and planet. Learn more at preludeventures.com. This episode was produced by Daniel Waldorf, mixing by Roy Campanella and Sean Marquand, theme song by Sean Marquand. Steven Lacey is our executive editor. I'm Shayle Kann, and this is Catalyst.
