Gigawatt-scale AI data centers are entering interconnection queues across the country, but traditional grid planning tools weren't designed for these large loads. The questions from ISOs and utilities are getting harder. The timelines are getting longer. And the old approaches to load modeling and grid integration aren't working.
Watch our technical discussion featuring engineers from Harvard, the National Lab of the Rockies (NLR), and EPE to learn what's changing and what it means for your project.
Traditional composite load models (CMLD) assume motor-driven loads with predictable ramp rates and power-factor behavior. AI data centers, built on power electronics and GPU workloads, are fundamentally different.
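To make the contrast concrete, here is a minimal Python sketch (not from the webinar) that compares an assumed slow motor-load ramp with the synchronized step changes a large GPU training job can produce. The facility size, swing magnitude, and step period are illustrative assumptions only.

```python
# Illustrative sketch only: contrasts a smooth motor-driven load ramp with the
# fast, synchronized power swings typical of large GPU training jobs.
# All numbers (500 MW facility, 30% swing, 0.5 s step period) are assumptions
# chosen for illustration, not measurements.
import numpy as np

def motor_load_mw(t_s, base_mw=500.0, ramp_mw_per_min=5.0):
    """Aggregate motor-driven load: slow, predictable ramp."""
    return base_mw + ramp_mw_per_min * (t_s / 60.0)

def gpu_training_load_mw(t_s, base_mw=500.0, swing_fraction=0.3, period_s=0.5):
    """Synchronized GPU workload: the whole cluster alternates between compute
    and communication phases, producing large, fast power steps."""
    in_compute_phase = (t_s % period_s) < (period_s / 2)
    return np.where(in_compute_phase, base_mw, base_mw * (1.0 - swing_fraction))

t = np.arange(0.0, 10.0, 0.01)          # 10 s window at 10 ms resolution
motor = motor_load_mw(t)
gpu = gpu_training_load_mw(t)

# A CMLD-style model tuned to the motor profile would badly mispredict the
# ramp rates seen in the GPU profile.
print(f"Motor load max step over 10 ms: {np.max(np.abs(np.diff(motor))):.3f} MW")
print(f"GPU load max step over 10 ms:  {np.max(np.abs(np.diff(gpu))):.1f} MW")
```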
ISOs and utilities are starting to require detailed modeling and validation that existing approaches can't provide. If you're developing data center infrastructure or conducting interconnection studies, you need to understand where the requirements are headed. The discussion covers five topics:
1. Load Modeling for Power Electronics-Based Data Centers: Why traditional CMLDs fail for GPU workloads, and what's needed instead.
2. Model Validation Without Staged Testing: Generators get MOD-025/026/027 staged tests; data centers need a different approach (one illustrative sketch follows this list).
3. Data Center Flexibility and Grid Services: Bitcoin mining proved flexible loads work, but AI data centers are more complex.
4. Infrastructure Constraints and Development Strategy: 7% annual demand growth meets multi-year transmission timelines.
5. AI Tools in Grid Planning: Can AI help us plan for AI's power demand? Where does automation help and where does engineering judgment remain critical?
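As one purely hypothetical illustration of how a load model might be checked without a staged test, the sketch below compares a synthetic "recorded" disturbance response against a candidate model playback and reports the mismatch. All signals, parameter values, and the tolerance idea are assumptions for illustration, not the panelists' method.

```python
# Minimal sketch of measurement-based model validation, offered as one possible
# alternative to staged testing. All signals and parameters are synthetic
# assumptions; a real study would use recorded disturbance data (e.g., PMU or
# point-of-interconnection captures).
import numpy as np

def rms_error_mw(measured_mw: np.ndarray, simulated_mw: np.ndarray) -> float:
    """Root-mean-square mismatch between recorded and simulated active power."""
    return float(np.sqrt(np.mean((measured_mw - simulated_mw) ** 2)))

# Synthetic "recorded" data center response to a disturbance, plus a candidate
# load-model playback of the same event (assumed recovery shapes).
t = np.linspace(0.0, 5.0, 501)
measured = 400.0 - 120.0 * np.exp(-3.0 * t)
simulated = 400.0 - 110.0 * np.exp(-2.5 * t)

error = rms_error_mw(measured, simulated)
print(f"RMS mismatch over the event window: {error:.1f} MW")
# If the mismatch exceeds an agreed tolerance, model parameters are re-tuned
# against the recording rather than against a staged test.
```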
This webinar is relevant if you're: