The Power Crisis Stalling AI Infrastructure: What Two Days Chairing an Industry Conference Revealed
By Darshan Honale | November 2025
I just spent two days chairing a data center and AI infrastructure conference in Houston, bringing together operators, hyperscalers, utilities, and investors to discuss the industry’s future. What kept me up afterward wasn’t the latest cooling technology or the newest chip architecture.
It was a simple, uncomfortable truth: We’re not sure we can actually deliver the power AI needs.
The Math That Doesn’t Work
Let me start with three numbers that explain everything:
AI demand: Growing exponentially. The jump from GPT-3 to GPT-4 reportedly brought a roughly 10x increase in parameters. Each generation of models requires dramatically more compute, which translates directly to power.
Data center build-out: Growing linearly. Even in the best-case scenario, you’re looking at 18-24 months from breaking ground to operations.
Grid capacity: Flat or declining in many markets. Old generation is retiring faster than new capacity comes online.
During the conference, one statistic kept coming up: Texas’s ERCOT grid currently has more than 205 gigawatts of large-load interconnection requests in its queue. Roughly 70% of those requests, about 144 gigawatts, are data centers.
Just one year ago, that number was 56 gigawatts.
That’s not a trend line. That’s a tidal wave. And even if only 20% of that queue materializes, we’re talking about 40+ gigawatts of new load landing in a single grid operator’s territory.
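To make the queue math concrete, here is the back-of-envelope arithmetic, using only the figures above; the 20% materialization rate is the conservative assumption from the previous paragraph, not a forecast:

```python
# Back-of-envelope math on the ERCOT large-load queue, using the figures
# cited above. Only the materialization rate is an assumption.
queue_gw = 205            # total large-load interconnection requests (GW)
dc_share = 0.70           # share of requests that are data centers
materialization = 0.20    # conservative: assume only 1 in 5 projects gets built
prior_year_gw = 56        # queue size one year earlier

print(f"Data center requests: ~{queue_gw * dc_share:.0f} GW")             # ~144 GW
print(f"New load if 20% materializes: ~{queue_gw * materialization:.0f} GW")  # ~41 GW
print(f"Year-over-year queue growth: {queue_gw / prior_year_gw:.1f}x")    # ~3.7x
```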
The 18-Month Problem Nobody Wants to Talk About
As chairman, I had a unique vantage point. I heard the same challenge articulated differently across every session, every panel, every side conversation:
Grid interconnection has become the existential bottleneck.
The typical timeline from application to energization? 18-36 months. And that’s the optimistic case.
Think about what this means: A developer can secure land, obtain permits, line up customers, and have capital ready to deploy. But if the local utility says “we can’t deliver 100 megawatts for 36 months,” the project either dies or relocates.
One data center operator put it bluntly during a panel discussion: “The four hurdles we face are permitting, power procurement, land acquisition, and community engagement. Power is THE bottleneck. If you’re not executing on the five-year deal, you’re supporting someone else’s five-year deal.”
Another panelist from the renewable energy sector added context: “The interconnection queue for generation projects is massive—larger than all the existing load in the system. The challenge isn’t finding generation. It’s connecting that generation to the grid.”
This isn’t just a U.S. problem, though Texas exemplifies it most dramatically. The International Energy Agency projects that data centers will consume approximately 945 terawatt-hours annually by 2030, more than double today’s consumption. The infrastructure to deliver that power simply doesn’t exist yet.
The Uncomfortable Truth About “Bridge Fuels”
Here’s where the conference conversations got particularly revealing—and uncomfortable.
Behind-the-meter natural gas generation is proliferating. Not as a long-term strategy, but because it’s the only way to hit 2025-26 deployment timelines.
The economics are brutal but clear: Paying 2x CapEx for onsite natural gas that delivers power in 6-12 months beats paying $0 for grid power you can’t access for 36 months.
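A minimal sketch of why that tradeoff pencils out. The timelines come from the paragraph above; the CapEx and revenue figures are illustrative assumptions I’ve chosen, not conference numbers:

```python
# Sketch of the behind-the-meter tradeoff. The timelines are from the text;
# every dollar figure is an assumption chosen to show the shape of the decision.
capacity_mw = 100
gas_capex_per_mw = 1.5e6        # assumed cost of onsite gas generation per MW
revenue_per_mw_month = 150e3    # assumed net revenue per energized MW-month

gas_online_month = 9            # "6-12 months" for onsite gas
grid_online_month = 36          # "36 months" for grid interconnection
months_earlier = grid_online_month - gas_online_month

extra_capex = capacity_mw * gas_capex_per_mw
early_revenue = capacity_mw * revenue_per_mw_month * months_earlier

print(f"Extra CapEx for onsite gas: ${extra_capex/1e6:,.0f}M")    # $150M
print(f"Revenue from going live {months_earlier} months earlier: "
      f"${early_revenue/1e6:,.0f}M")                              # $405M
# Even at a steep CapEx premium, 27 months of earlier revenue dominates.
```

Under almost any plausible set of inputs, the time value of energization swamps the generation premium. That is why the math keeps winning arguments it loses on paper emissions.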
During one panel, an industry expert shared that 30% of data centers are expected to be behind-the-meter by 2030. He referenced two visible megaprojects—one outside San Antonio at 17 gigawatts of unabated natural gas, another in the Permian Basin at 11-13 gigawatts.
The tension was palpable. These same companies have net-zero pledges—targets like carbon neutrality by 2040. Yet the near-term reality involves deploying fossil fuel generation at unprecedented scale.
One panelist tried to frame it optimistically: “Data centers are pushing the sustainability aspect at a much faster pace than utilities could, just due to regulatory and tariff aspects. Taking that behind the meter means you can not only accelerate power delivery but also expand and accelerate sustainability.”
Translation: We’ll go fossil now and clean it up later.
I’m not criticizing this approach—I’m acknowledging its necessity. But let’s be honest about the gap between public sustainability commitments and operational reality. The term “bridge fuel” is doing a lot of work to paper over this contradiction.
Three Workarounds Gaining Traction
Beyond natural gas, three strategies emerged as practical responses to the power crisis:
1. Battery Storage as Bridge Technology
Behind-the-meter battery storage allows facilities to begin operations while waiting for grid interconnection. Instead of letting a built facility sit idle for 18-36 months, burning capital with zero revenue, operators deploy 4-6 hours of battery storage to handle peak loads and short outages.
When grid power finally arrives, the batteries transition to providing grid services, backup power, and peak shaving.
Is this optimal? No. Batteries cost $300-400 per kilowatt-hour, have limited duration, and still require generation to recharge. Companies are spending hundreds of millions on temporary infrastructure as a workaround for grid failures.
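Those “hundreds of millions” follow directly from the cited figures. A quick sanity check, assuming a 100 MW facility (my number for illustration; the cost and duration ranges are from the text above):

```python
# Sanity check on bridge-battery cost, using the $/kWh and duration ranges
# cited above. The 100 MW facility size is an assumption for illustration.
facility_mw = 100
duration_hours = (4, 6)       # cited duration range
cost_per_kwh = (300, 400)     # cited installed cost range ($/kWh)

low = facility_mw * 1_000 * duration_hours[0] * cost_per_kwh[0]
high = facility_mw * 1_000 * duration_hours[1] * cost_per_kwh[1]

print(f"Battery system cost: ${low/1e6:,.0f}M to ${high/1e6:,.0f}M")
# -> $120M to $240M for a single facility: "hundreds of millions"
```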
But one transmission executive captured the pragmatism: “It’s a bridge. It lets you start operating instead of waiting.”
One promising development: A study by a major hyperscaler with a grid operator found that grid-enhancing technologies could facilitate interconnection of nearly 6.8 gigawatts of solar, wind, and storage generation by 2027 across five states. That’s meaningful—but still a fraction of total demand.
2. Alternative Markets
The traditional data center markets—Northern Virginia, Silicon Valley, major metros—are overwhelmed. Smart developers are looking elsewhere.
One presenter made a compelling case for Rust Belt markets: “Buffalo, upstate New York—power is available, fiber infrastructure exists, land is cheap, and local governments are desperate for investment. Why not?”
This represents a fundamental shift in site selection logic. Instead of starting with “where are the customers?” the new question is “where can I get X megawatts in Y months?” Then you work backward to justify the location.
The irony: Cities that lost manufacturing jobs decades ago suddenly have something valuable—excess grid capacity from shuttered industrial facilities. Economic development is driving creative thinking about how to convert that latent capacity into data center opportunities.
3. Modular Construction
Traditional data center development takes 24-36 months. Modular approaches promise 6-9 month delivery of 2-5 MW modules.
One modular provider explained: “Speed is the product.” For customers who need capacity quickly, the premium for modular makes sense—even if it means less customization and some quality tradeoffs.
The challenge: Modular works well at smaller scale, but scaling to 50-100MW facilities requires sophisticated integration. And as AI density requirements evolve rapidly (we’re already seeing movement toward 1-megawatt racks), modular designs can become obsolete quickly if they’re not built with flexibility in mind.
The Hidden Technical Revolution
Buried in one highly technical presentation was an insight most attendees probably missed: The transition from AC to DC power delivery within data centers could simultaneously address density, efficiency, reliability, and grid stability challenges.
Here’s the logic: Computers need DC power, typically at 48 volts. Today’s architecture converts AC to DC at the rack level—thousands of times across a facility. Each conversion loses 1-2% efficiency and requires massive copper cabling.
The proposed alternative: Convert AC to DC once at the facility level (at 1,200-1,400V DC), then use simple DC-DC converters at the rack.
Benefits:
- 30%+ reduction in cabling costs
- 1-2 percentage points of efficiency improvement (e.g., 98.5% → 99.5%)
- Elimination of harmonics and power quality issues
- Natural integration with batteries and renewables (which are DC-native)
- Ability to provide grid services
For a 100-megawatt facility, a one-percentage-point efficiency gain frees up roughly 1 megawatt otherwise lost to conversion, which is on the order of $1 million per year in energy savings. Over ten years, that adds up to roughly $10 million.
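Here is that arithmetic spelled out. The facility size and efficiency gain are from the text; the electricity price is my assumption, and the savings scale linearly with it:

```python
# Energy-savings math for facility-level DC distribution, using the 100 MW
# facility and one-point efficiency gain cited above. The electricity price
# is an assumption; at half the price, savings halve.
facility_mw = 100
efficiency_gain = 0.01          # 98.5% -> 99.5%
price_per_mwh = 115             # assumed blended $/MWh
hours_per_year = 8760

saved_mw = facility_mw * efficiency_gain          # ~1 MW of avoided losses
annual_savings = saved_mw * hours_per_year * price_per_mwh

print(f"Avoided losses: {saved_mw:.1f} MW continuous")
print(f"Annual savings: ${annual_savings/1e6:.2f}M")         # ~$1M/yr at $115/MWh
print(f"10-year savings (undiscounted): ${annual_savings*10/1e6:.0f}M")
```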
This doesn’t grab headlines like liquid cooling or new chip architectures. But it fundamentally changes the economics of high-density facilities.
The Utility Dynamic Nobody Expected
Perhaps the most profound shift isn’t happening in Silicon Valley boardrooms—it’s happening in utility operations centers and public utility commission hearings.
Utilities have become the unexpected kingmakers in AI infrastructure.
Consider this: A data center operator can have unlimited capital, world-class technology, and eager customers. But if the local utility says “we can’t deliver 100 megawatts for 36 months,” the project is dead.
The problem: Utilities spent the last 20 years managing 0.5% annual load growth with declining industrial demand. Now they’re being asked to approve 200-500 megawatt point loads with 12-18 month timelines. Institutionally, technically, and financially, they’re not equipped to respond at this pace.
One panelist shared a cautionary tale: A California municipality approved permits for two data centers, 48 megawatts each. They were built, ready to operate. But there was no power. The city had literally run out of grid capacity.
Local governments don’t want to lose the economic development. So cities are “being more flexible now to entertain ideas of how we can use onsite power generation” because they don’t want data centers choosing other locations.
The result: Utilities that figure out how to serve data center load at speed will capture billions in revenue. Those stuck in “business as usual” will watch their largest growth opportunity in decades move elsewhere.
For developers, this creates a stark reality: Utility relationships are now THE strategic asset—more important than technology partnerships or customer relationships. You need to engage utilities 2-3 years before construction, be flexible on timing and phasing, and offer grid services to reduce their burden.
What This Means for Investors
I’ve seen deals die. Not because the technology didn’t work. Not because the market wasn’t there. But because the power purchase agreement fell through, or grid interconnection got delayed by 18 months.
So when I evaluate AI infrastructure investments now, here’s my framework:
The First Question
“Where are you getting the power, and when can you actually get it online?”
Everything else is secondary. I don’t care how innovative your cooling system is if you can’t energize the facility.
What Makes a Project Fundable
1. Confirmed Power Timeline: Not just applications, but actual utility commitments. Bonus points for a behind-the-meter backup plan if grid delays occur.
2. Speed-to-Market Strategy: Evidence of a modular approach, an alternative market with available power, a behind-the-meter solution, or a creative utility partnership.
3. Economic Viability: Customer commitments for 60%+ of capacity before construction. Utilization targets of 80%+ (AI economics require higher utilization than traditional data centers).
4. Operational Resilience: No 100% dependence on a single power source. A battery backup or storage component. The ability to provide grid services (a potential revenue opportunity).
5. Sustainability Defensibility: If using fossil fuels, a credible transition plan. A renewable integration roadmap. Transparency about the “bridge fuel” approach rather than greenwashing.
Where I’m Looking
High-conviction opportunities:
- Companies solving interconnection bottlenecks (grid enhancement tech, storage solutions, microgrid infrastructure)
- Alternative market developers (secondary cities with power availability)
- Behind-the-meter solutions that balance speed with sustainability
- Long-term clean power technologies (SMRs, advanced geothermal, long-duration storage) for the 2030+ timeline
What I’m avoiding:
- Projects without confirmed power timelines
- Pure speculation on grid capacity appearing magically
- Single-point-of-failure power dependencies
- Operators who don’t understand utility dynamics
- Markets where public utility commissions are hostile to data centers
The Uncomfortable Question
Let me end with the question nobody wanted to ask directly during the conference, but that lingered in every conversation:
What if we can’t actually solve this?
What if the infrastructure buildout required for AI’s exponential growth simply can’t happen fast enough? What if the regulatory and physical constraints of grid development are fundamentally incompatible with AI deployment timelines?
I don’t have an answer yet. But I’m watching these signals closely:
- Utilization rates: If AI facilities can’t achieve 80%+ utilization, the economics break (a simple break-even sketch follows this list). That would suggest demand isn’t matching infrastructure deployment.
- Interconnection queue trends: Is the backlog growing or shrinking? Texas went from 56 to 205 gigawatts in one year. That’s not sustainable.
- Behind-the-meter proliferation: If 30% of facilities go behind-the-meter by 2030, that’s an admission that grid solutions failed. The question becomes: Can behind-the-meter scale sufficiently?
- Regulatory reform: Are public utility commissions adapting? Are states streamlining interconnection? Or are they protecting existing systems?
- SMR timelines: Will small modular reactors actually deliver in the 2028-2030 window, or will regulatory hurdles push them further?
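On the first signal, here is a stripped-down break-even model of why utilization is make-or-break. Every dollar figure is an assumption chosen for illustration, not a conference number:

```python
# Toy break-even model: fixed costs accrue whether or not the GPUs are busy;
# revenue scales with utilization. All dollar figures are assumptions.
capex_per_mw = 12e6           # assumed all-in build cost per IT MW
amortization_years = 10
opex_per_mw_year = 0.8e6      # assumed fixed annual opex per MW
revenue_per_mw_year = 3.0e6   # assumed revenue per MW at 100% utilization

fixed_cost = capex_per_mw / amortization_years + opex_per_mw_year  # $2.0M/MW-yr
breakeven = fixed_cost / revenue_per_mw_year

for util in (0.6, 0.7, 0.8, 0.9):
    margin = util * revenue_per_mw_year - fixed_cost
    print(f"Utilization {util:.0%}: margin ${margin/1e6:+.1f}M per MW-year")

print(f"Break-even utilization: {breakeven:.0%}")   # ~67% under these assumptions
```

Under these particular assumptions, break-even sits in the high 60s; tighter pricing, faster depreciation, or higher power costs push it toward the 80% threshold. The structural point holds either way: fixed costs don’t wait for demand.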
What I Told the Conference
In my closing remarks, I framed it this way:
“We’re at an inflection point where optimism meets pragmatism. Everyone came in talking about the AI revolution. But the conversations we’ve had over these two days were really about a more fundamental question: Who can actually deliver power, and how fast?
“The winners in 2025-26 won’t be those with the best AI models or the most GPUs. The winners will be those who can deliver power, fast, at scale—and do it in a way that’s at least defensible on sustainability, even if it’s not perfect.
“Everything else is secondary.”
That’s the uncomfortable truth I’m sitting with as an investor. The AI infrastructure opportunity is real. The market need is undeniable. The capital is available.
But power access—specifically, the speed at which we can bring new capacity online—may be the constraint that determines winners and losers more than any other factor.
What are you seeing in your market? Are grid interconnection timelines improving or getting worse? How are you thinking about speed vs. sustainability tradeoffs?
I’d welcome your perspective—especially if you’re solving this problem in creative ways.
