The Next Frontier: Grid Software
Technology Innovation Driving Grid Reform to Fill the Demand Gap
The Problem: Transmission & FERC
The U.S. power grid wasn’t designed for today’s energy and compute demands. Instead of a cohesive national system, we have a fragmented patchwork—built through decades of reactive, region-by-region decisions. At the heart of this fragmentation is a failure of coordinated transmission planning. While the Federal Energy Regulatory Commission (FERC) is tasked with setting national rules, the actual development of grid infrastructure is left to regional operators like PJM and ERCOT, who often act in their own silos.
Despite recognizing the challenges, FERC has so far delivered only piecemeal reforms. This regulatory limbo has created fertile ground for conflict and confusion—especially as data center operators (DCOs) and hyperscalers rush to secure access to limited grid capacity.
PJM, in particular, is emerging as a critical market for DCOs—not just because of its proximity to major population centers, but because of the collision between explosive load growth and aging transmission infrastructure. Like ERCOT, it represents both a bottleneck and a battleground for next-generation, grid-aware data center deployments.
This tension is now playing out in a high-profile dispute involving PJM, Constellation, and Calpine. At issue is a settlement PJM made with data center developers to allow preferred interconnection access. FERC initially rejected the deal in November 2024, arguing it violated open-access rules by offering a “fast lane” in an already-clogged interconnection queue. PJM pushed back, saying it needs flexibility to accommodate hyperscalers and other large-load customers. A 90-day negotiation is now underway to rewrite the rules.
For context, FERC Order 2023, issued in July 2023, was meant to reform the interconnection process. It introduced cluster-based study processing, stricter readiness requirements for developers, and penalties on transmission providers that miss study deadlines. But it didn’t address cost allocation or give FERC siting authority, two core issues preventing proactive grid buildout. Until those are resolved, transmission delays—and the innovation drag that comes with them—will persist.
Colocation Model and Market Implications
The co-lo model, in which large AI and cloud providers partner with or vertically integrate with energy developers, has become the de facto approach for accelerating deployment. The model was pioneered by Bitcoin miners, the original consumers of marginal energy. Why is this happening, and what are the benefits?
Siting advantages (near existing substations or brownfield grid assets)
Contractual certainty (long-term PPAs or direct ownership of energy assets)
Bypassing the queue (via acquisitions, bilateral deals, or behind-the-meter (BTM) builds)
However, this model creates friction:
It prioritizes capital-rich players, i.e., hyperscalers who can secure custom interconnection deals, often to the detriment of smaller developers (read our data center dynamics report here).
It’s opaque and not standardized, making it hard for regulators to maintain fairness or transparency.
It can exacerbate transmission congestion, especially when hyperscalers cluster around already constrained nodes.
In sum: Where FERC Orders 2023 and 1920 Fit In
Order 2023 (Interconnection Reform)
Issued July 2023
Intended Goal: Improve transparency and reduce interconnection delays by enforcing cluster-based studies and stricter penalties.
Reality: Didn’t address cost allocation, which still hampers proactive buildout.
Tension: The colocation model exploits loopholes—e.g., negotiating outside the queue or directly acquiring projects—potentially undermining the spirit of Order 2023.
Order 1920 (Transmission Planning & Cost Allocation)
Issued May 2024
Proactive Shift: Pushes planning horizon to 20 years and calls for better modeling.
But: Stops short of requiring interregional coordination or granting FERC real siting authority.
Effect on Co-lo: Encourages forward-looking planning—but lacks teeth to ensure that colocation hotspots (like Northern Virginia) don’t worsen regional grid fragility.
This legal battle is less about one PJM settlement and more about who gets priority access to scarce interconnection and transmission capacity in a world of exponential load growth from AI and data centers. Co-lo is efficient for speed and scale, but it risks making grid access a pay-to-play game, undercutting the original intent of competitive wholesale markets.
Solution: Addressing Four Technology Gaps
While regulatory conflict has created uncertainty, it also spotlights how urgently we need modernization. As the tension between FERC and regional grid operators escalates, it’s catalyzing broader recognition of the systemic gaps—and opening the door for innovation. For founders building at the intersection of energy, software, and infrastructure, this moment represents a rare opportunity to define how the grid evolves. Below are four critical technology areas where we’re seeing promising solutions emerge:
1) Data Center Performance Improvement
(e.g. Central Axis, Mercury Computing)
Given the extreme power needs of AI data centers, innovation focused on power usage effectiveness (PUE) is now a transmission issue, not just an ops metric.
Behind-the-meter load shaping (e.g., thermal storage, AI demand shifting, or real-time voltage optimization) can flatten peak curves, easing grid strain.
Mercury’s dynamic optimization or Central Axis’ DCIM stack can make “grid-aware data centers” a reality—prioritizing compute jobs based on marginal power availability or time-of-day grid signals (a minimal scheduling sketch follows below).
This supports utilities’ new reliability frameworks (many are revising NERC compliance in light of hyperscaler load unpredictability).
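To make the idea concrete, here is a minimal sketch of grid-aware job scheduling. It is not based on any vendor's actual product; the grid-signal fields, price and stress thresholds, and deferral logic are all assumptions chosen for illustration.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical grid signal a data center might poll from its utility or an ISO feed.
@dataclass
class GridSignal:
    lmp_dollars_per_mwh: float   # locational marginal price at the site's node
    stress_level: float          # 0.0 (slack) to 1.0 (emergency), assumed normalized

@dataclass
class ComputeJob:
    name: str
    power_mw: float
    deferrable: bool             # batch training vs. latency-sensitive inference

def schedule(jobs: List[ComputeJob], signal: GridSignal,
             price_ceiling: float = 150.0, stress_ceiling: float = 0.8):
    """Run latency-sensitive jobs unconditionally; defer flexible load when the
    grid is expensive or stressed. Thresholds here are illustrative only."""
    run_now, deferred = [], []
    grid_is_tight = (signal.lmp_dollars_per_mwh > price_ceiling
                     or signal.stress_level > stress_ceiling)
    for job in jobs:
        if job.deferrable and grid_is_tight:
            deferred.append(job)
        else:
            run_now.append(job)
    return run_now, deferred

# Example: during a high-LMP hour, training is shifted while inference keeps serving.
jobs = [ComputeJob("llm-training", 40.0, deferrable=True),
        ComputeJob("inference-serving", 12.0, deferrable=False)]
now, later = schedule(jobs, GridSignal(lmp_dollars_per_mwh=210.0, stress_level=0.6))
print([j.name for j in now], [j.name for j in later])
```

A real implementation would replace the static thresholds with utility tariff signals or marginal-carbon/price forecasts, but the core pattern of flagging flexible load and shifting it around grid stress stays the same.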
2) Interoperability & Data Integration Platforms
(e.g. Texture, Daylight, Sourceful)
Utility- or grid operator-maintained platforms should give developers transparent, real-time grid information, such as queue status and the location-based availability of injection/withdrawal capacity.
Platforms like Texture and Daylight can:
Normalize disparate queue and capacity data across utilities (many still rely on Excel + PDFs).
Provide real-time visibility into locational marginal pricing (LMP), congestion, curtailment risk, and capacity margins.
Support “grid readiness scoring” for siting and project sequencing.
This could enable a grid API economy, especially if paired with open interconnection maps (e.g., NYISO and CAISO have partial implementations, but still need major UI/UX and data standardization improvements).
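As an illustration of what one endpoint of that "grid API" layer might expose, here is a minimal sketch that normalizes a raw queue/capacity record into a shared schema and computes a toy grid-readiness score. The field names, node ID, weights, and scoring formula are assumptions for illustration, not how Texture, Daylight, or any ISO actually structures its data.

```python
from dataclasses import dataclass

# Hypothetical normalized record; real utility exports vary widely (CSV, PDF, Excel).
@dataclass
class NodeSnapshot:
    node_id: str
    queue_depth_mw: float        # MW of projects ahead in the interconnection queue
    headroom_mw: float           # available injection/withdrawal capacity
    avg_lmp: float               # trailing average locational marginal price, $/MWh
    congestion_hours_pct: float  # share of hours with binding congestion, 0..1

def normalize_row(row: dict) -> NodeSnapshot:
    """Map one hypothetical raw export row into the shared schema."""
    return NodeSnapshot(
        node_id=row["Bus Name"].strip().upper(),
        queue_depth_mw=float(row["Active Queue (MW)"]),
        headroom_mw=float(row["ATC (MW)"]),
        avg_lmp=float(row["Avg LMP"]),
        congestion_hours_pct=float(row["Congested Hrs %"]) / 100.0,
    )

def readiness_score(n: NodeSnapshot) -> float:
    """Toy 0-100 score: more headroom is good; deep queues and congestion are bad.
    Weights are arbitrary placeholders a real platform would calibrate."""
    headroom_term = min(n.headroom_mw / max(n.queue_depth_mw, 1.0), 2.0) / 2.0
    congestion_term = 1.0 - n.congestion_hours_pct
    price_term = max(0.0, 1.0 - n.avg_lmp / 200.0)
    return round(100 * (0.5 * headroom_term + 0.3 * congestion_term + 0.2 * price_term), 1)

snap = normalize_row({"Bus Name": "example-500kv", "Active Queue (MW)": "3200",
                      "ATC (MW)": "450", "Avg LMP": "62.4", "Congested Hrs %": "18"})
print(snap.node_id, readiness_score(snap))
```

The value here is less the scoring math than the shared schema: once queue, headroom, and congestion data live in one normalized structure, siting tools, developers, and regulators can all consume the same source of truth.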
3) Advanced Grid Modeling Tools / Forecasting
(e.g. Voltquant, others)
Interconnection delays often stem from slow, manual review of contingency, short-circuit, and dynamic stability studies — processes that haven’t meaningfully evolved in decades. Emerging platforms should aim to modernize this workflow with advanced modeling environments that simulate grid conditions more efficiently and scalably.
We believe there’s a major opportunity to build LLM-augmented tools that assist utility engineers with scenario generation, result interpretation, and multi-party coordination — unlocking faster, smarter grid planning.
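Here is a minimal sketch of the workflow pattern we have in mind: enumerate N-1 contingencies, screen for post-contingency overloads, and hand the raw results to an LLM for narrative summary and follow-up case suggestions. The network, the solver, and the loading numbers are fabricated placeholders so the sketch runs end to end; no real planning tool or vendor API is implied.

```python
import itertools
import json

# Toy network: branches that can be taken out of service for N-1 screening.
BRANCHES = ["line_A_B", "line_B_C", "line_A_C", "xfmr_C_1"]

def run_power_flow(outages):
    """Placeholder for a real contingency solver (e.g., the utility's existing
    planning tool). Here it returns fabricated loadings so the sketch executes."""
    base = {"line_A_B": 0.62, "line_B_C": 0.55, "line_A_C": 0.48, "xfmr_C_1": 0.82}
    return {b: (0.0 if b in outages else min(load + 0.25 * len(outages), 1.4))
            for b, load in base.items()}

def screen_n_minus_1(limit=1.0):
    """Enumerate single-element outages and flag post-contingency overloads."""
    violations = []
    for outage in itertools.combinations(BRANCHES, 1):
        flows = run_power_flow(set(outage))
        for branch, loading in flows.items():
            if loading > limit:
                violations.append({"outage": outage[0], "overloaded": branch,
                                   "loading_pct": round(100 * loading)})
    return violations

def draft_llm_prompt(violations):
    """Where an LLM could help: turning raw study output into an engineer-readable
    narrative and suggested follow-up cases. The model call itself is left abstract."""
    return ("Summarize these N-1 screening violations for a transmission planner, "
            "group them by outaged element, and propose follow-up N-1-1 cases:\n"
            + json.dumps(violations, indent=2))

print(draft_llm_prompt(screen_n_minus_1()))
```

The point of the sketch is the division of labor: deterministic solvers keep doing the physics, while the language model handles scenario generation, interpretation, and the multi-party write-ups that currently consume engineers' time.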
This is a high-friction area with real whitespace. If you’re building here or backing teams who are, we want to hear from you.
4) Project Coordination Infrastructure
(e.g. Euclid, Othersphere)
Tools that reduce friction in the planning, approval, and execution phases of infrastructure development. These platforms can:
Normalize fragmented data sources like queue filings, permitting updates, and utility notices.
Tie timelines and cost estimates to grid constraints and upgrade dependencies.
Enable shared visibility across developers, utilities, and investors — replacing ad hoc communication with structured workflows.
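As a concrete illustration, here is a minimal data-model sketch of that coordination layer: project milestones tied to the grid upgrades they depend on, so timelines and cost exposure fall out of the structure rather than ad hoc spreadsheets. Names, dates, and cost figures are hypothetical, and real cost-allocation rules vary by RTO.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

# Hypothetical shared schema tying a project's timeline to the network upgrades it
# depends on; a real platform would ingest queue filings, permits, and utility notices.
@dataclass
class GridUpgrade:
    name: str
    estimated_cost_musd: float
    expected_in_service: date

@dataclass
class Milestone:
    name: str
    target: date

@dataclass
class Project:
    name: str
    milestones: List[Milestone]
    required_upgrades: List[GridUpgrade] = field(default_factory=list)

    def earliest_energization(self) -> date:
        """A project can't energize before its own last milestone or before the
        latest dependent network upgrade enters service."""
        dates = [m.target for m in self.milestones]
        dates += [u.expected_in_service for u in self.required_upgrades]
        return max(dates)

    def upgrade_exposure_musd(self) -> float:
        """Upgrade cost the project is exposed to (allocation rules vary by RTO)."""
        return sum(u.estimated_cost_musd for u in self.required_upgrades)

campus = Project(
    name="hypothetical-campus-1",
    milestones=[Milestone("permit approved", date(2026, 3, 1)),
                Milestone("substation complete", date(2027, 6, 1))],
    required_upgrades=[GridUpgrade("500 kV reconductor", 85.0, date(2028, 1, 15))],
)
print(campus.earliest_energization(), campus.upgrade_exposure_musd())
```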
In conclusion …
These FERC vs. PJM disputes signal deep uncertainty in how infrastructure gets prioritized, approved, and compensated—making it harder for startups and investors to model risk or return. When foundational processes like interconnection and cost allocation are up for debate, it injects ambiguity into already slow, utility-driven sales cycles. For large IPPs like Constellation Energy, this is a time bottleneck. For early-stage gridtech startups, that ambiguity can be existential—delaying deployment, shifting focus, or stalling fundraising altogether.
But there’s a silver lining: regulatory friction often precedes reform. We’re encouraged to see large players like Constellation pushing the conversation forward, setting precedents that could make grid access more efficient and predictable. As national attention turns to the transmission bottleneck, founders positioned at the intersection of software, infrastructure, and energy have a rare opportunity to shape a next-generation grid architecture that is more interoperable, transparent, and innovation-ready.
At Crucible, we are actively hunting for founders building the next generation of grid software.
Disclosure: Crucible Capital is invested in some of the companies mentioned.