Today’s utilities bear little resemblance to their counterparts of decades ago.
For most of the last century, electric utilities’ operations were built around a one-directional flow of power from generation to customers under fairly predictable conditions. Now, however, the environment in which utilities operate looks far different.
Climate change is contributing to wildfires and temperature extremes, forcing many utility companies to run in conditions they never planned for.1 At the same time, the integration of renewable generation, storage and other distributed energy resources (DERs) is driving a shift toward a distribution system operator (DSO) model, in which the grid is more distributed and electricity flows bidirectionally between utilities and consumers. Finally, through consolidations and mergers, utilities have grown more organizationally complex, adding yet another hurdle to responding dynamically to these new conditions.
Addressing the breadth of these challenges requires an unprecedented level of connectivity across distributed organizations. To harden systems, predict when equipment will fail and communicate thoughtfully with tens of thousands of customers during emergency events, utilities need an enterprise-wide, data-driven operating system that spans operators, analysts and leaders.
Bridging the gap between analytics and operations
Utilities today have access to a rich set of data, but many are not leveraging it to its fullest extent. With the right systems in place, utilities can gather real-time signals about grid operations from smart meters, use LiDAR to maintain highly accurate geographic information system (GIS) records and deploy drones to remotely inspect asset conditions. To harness this data, many utilities are investing in data science teams and off-the-shelf ML/AI toolkits. While these efforts frequently produce impressive data assets and promising models, many prototypes never make it to production or deliver the promised operational improvements.
For example, imagine a utility that has begun a pilot project to deploy new sensors to help manage DER integration. The data science team has used industry-standard data science tools to detect power quality faults and to recommend where maintenance crews should begin inspections to find the source of each problem.
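To make the setup concrete, a first pass at this kind of fault flagging might look like the minimal sketch below. Everything in it is illustrative: the column names, the 240 V nominal voltage and the ±5% band are assumptions for the example, not details of any actual pilot.

```python
# Illustrative sketch only; column names, nominal voltage and the ±5% band are assumptions.
import pandas as pd

def flag_power_quality_faults(readings: pd.DataFrame,
                              nominal_v: float = 240.0,
                              tolerance: float = 0.05) -> pd.DataFrame:
    """Flag readings whose voltage falls outside the assumed band, then rank
    feeders by how often and how badly they deviate as a rough inspection priority."""
    out_of_band = (readings["voltage"] - nominal_v).abs() > tolerance * nominal_v
    faults = readings.loc[out_of_band]
    return (faults.assign(deviation=(faults["voltage"] - nominal_v).abs())
                  .groupby("feeder_id")
                  .agg(fault_count=("deviation", "size"),
                       worst_deviation=("deviation", "max"))
                  .sort_values(["fault_count", "worst_deviation"], ascending=False)
                  .reset_index())

# Made-up readings for illustration
readings = pd.DataFrame({
    "feeder_id": ["F101", "F101", "F202", "F202", "F303"],
    "timestamp": pd.to_datetime(["2023-05-01 00:00"] * 5),
    "voltage":   [241.0, 212.5, 239.8, 255.3, 240.2],
})
print(flag_power_quality_faults(readings))
```

The real models are presumably richer than a threshold check, but even this toy version produces the same kind of output the engineers receive: a ranked list of feeders to inspect.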
To get this off the ground, the data science team’s business counterparts export spreadsheets of model outputs for maintenance engineers each day.
Engineers then review recommendations in conjunction with context from other applications — a time-consuming and error-prone process — before committing decisions to a legacy scheduling system. As part of the review, engineers cross-check the modeling outputs with various maintenance schedules, often performing ad-hoc analysis based on their real-world experience. Moreover, stale historical assumptions are often baked into these models, requiring the engineers to manually correct the same discrepancies every time the data science team delivers a fresh set of outputs.
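The cross-check itself is largely mechanical, which is part of the frustration: much of it amounts to joining model outputs against planned work. A hedged sketch of that logic, with hypothetical table and column names, might look like this:

```python
# Sketch of the cross-check engineers perform by hand: drop recommendations for
# feeders that already have an inspection on the books soon. Names are hypothetical.
import pandas as pd

def filter_against_schedule(recommendations: pd.DataFrame,
                            schedule: pd.DataFrame,
                            horizon_days: int = 30) -> pd.DataFrame:
    """recommendations: one row per model recommendation, with 'feeder_id' and a
    'recommended_on' timestamp. schedule: one row per feeder with a
    'next_inspection' timestamp (NaT if nothing is planned)."""
    merged = recommendations.merge(schedule, on="feeder_id", how="left")
    already_planned = (
        merged["next_inspection"].notna()
        & ((merged["next_inspection"] - merged["recommended_on"]).dt.days <= horizon_days)
    )
    # Only recommendations not covered by planned work move on to scheduling.
    return merged.loc[~already_planned, recommendations.columns]
```

In the scenario above, nothing like this runs automatically; engineers redo the equivalent of this join in spreadsheets, alongside judgment calls that no join can capture.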
For the data science team, developing models is also a painful, largely manual process of data cleaning and feature engineering. The data scientists struggle to run the model across all feeders and to reuse models from other analytics projects. Most crucially, they don’t get timely feedback from their consumers. Without frequent updates on which recommendations were accurate and which weren’t, it is difficult to iteratively improve the models and keep them accurate over time.
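The missing feedback loop is simple to describe, even if it is hard to stand up across organizational boundaries: each recommendation needs to be joined back to the eventual inspection outcome so the team can see precision by model version. A minimal sketch, again with assumed table and column names:

```python
# Sketch of the feedback the team lacks: join recommendations to the outcomes
# field crews eventually record and track precision per model version.
import pandas as pd

def recommendation_hit_rate(recommendations: pd.DataFrame,
                            outcomes: pd.DataFrame) -> pd.DataFrame:
    """recommendations: 'recommendation_id' and 'model_version' per row.
    outcomes: 'recommendation_id' plus a boolean 'fault_confirmed' from the field
    visit. Recommendations with no recorded outcome count as unconfirmed."""
    joined = recommendations.merge(outcomes, on="recommendation_id", how="left")
    joined["fault_confirmed"] = joined["fault_confirmed"].fillna(False).astype(bool)
    return (joined.groupby("model_version")["fault_confirmed"]
                  .agg(recommendations="size", confirmed="sum", precision="mean")
                  .reset_index())
```

The hard part is not the aggregation; it is getting outcome data back from the field in a form the data science team can actually join against.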
With so many distributed sources of information and the messy process of combining human insights with data, utilities may find it challenging to make the most of their investments in new data sources and analytical capabilities. Many utility companies may face technical and organizational barriers that prevent their data, analytics, engineering and operational teams from iterating closely together.
Why most utilities’ IT toolkits aren’t enough
For decades, utilities have relied on foundational software systems that are critical to the electric power infrastructure, but also highly constraining. Purpose-built on-premises systems like distribution management systems (DMS), SCADA and GIS are foundational for managing grid infrastructure data, but they’re specialized for a narrow set of data and operational processes, making them of limited use for utilities grappling with an ever-evolving set of challenges.
Newer iterations of these systems, like advanced distribution management systems (ADMS) and distributed energy resource management systems (DERMS), promise to help utilities manage a more distributed, DER-friendly grid. However, they remain too specialized to address the larger issues facing utilities. These systems don’t typically integrate external data, leverage custom models or help utility companies build new workflows. Utilities will need these capabilities, and more, to adapt their operations in the face of two-way power flow across distributed energy resources, more complex weather events and the other challenges of the twenty-first century.
Outside of these systems, utilities have followed the lead of other industries in investing in new cloud systems like data warehouses and analytics engines. These perform capably at collecting data from different sources, but they are removed from the core operations that make legacy systems such as DMS and energy management systems (EMS) so valuable. Cloud-based data warehouses may summarize a complex reality, but they don’t offer levers to act on it or respond to new threats. They don’t go the last mile of putting analytics into the hands of field crews and grid operators so that they can make better decisions day after day. For instance, even after years of investment in cloud modernization, many utilities facing an emergency operation or an urgent new work program still rely heavily on spreadsheets, emails and file shares, which quickly become outdated and leave no trail of where information came from. Too often, the newer cloud tools don’t improve the operational bottom line when they’re needed most.
Creating a common operating environment
In the face of massive disruption, analytics alone aren’t enough: utilities need a platform that can help them power complex operations by combining the best of data and human insight. This platform should break down data silos and ultimately enable a feedback loop between data teams, analysts, engineering and planning teams, and grid operators.
Pacific Gas and Electric (PG&E) uses Palantir Foundry to layer together different sets of information, such as real-time grid conditions and risk models. The utility can also conduct preventive maintenance by developing models to predict equipment health, analyzing smart meter data to understand where to prioritize repairs. PG&E credits Foundry with helping the company manage the risk of wildfires caused by electrical equipment, ultimately keeping Californians safer.
Palantir has partnered with many utilities to deploy Foundry across the value chain, from emergency operations to predictive maintenance to EV planning. To learn more about how Foundry can strengthen your utility’s decision-making, check out our website and meet us at DISTRIBUTECH in Dallas, TX from May 23-25.
1) Jones, Matthew W., Adam Smith, Richard Betts, Josep G. Canadell, I. Colin Prentice, and Corinne Le Quéré. "Climate change increases the risk of wildfires." ScienceBrief Review 116 (2020): 117. https://www.preventionweb.net/files/73797_wildfiresbriefingnote.pdf