Unlocking the value of distribution data models with visual, intuitive, and automated workflows
Today, many distribution utilities face significant challenges in building, maintaining, and validating a central grid model that can be used across their IT landscape for planning and operational needs. The data is spread across multiple sources and shared across systems via complex integrations. The process of managing the data and building grid models involves multiple users, stakeholders, and owners across various departments, making it time-consuming, error-prone, and not easily scalable.
Current methods of building and exchanging model data are no longer sufficient for a rapidly changing energy landscape (e.g., the growth of DERs and smart devices). The amount of data required to make accurate planning and operations decisions, as well as the size of the grid model itself, is steadily increasing. This continuous change creates additional manual work to maintain a single source of truth for grid model data, and the data feeding each workflow must be properly checked and validated to ensure the models are correct for every end system. An innovative approach is therefore needed, one that supports the integration of multiple data sources, enforces electrical consistency, and runs automated checks on the entire data model.
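To make the idea of automated, electrically consistent model checks concrete, the minimal Python sketch below shows what a few such validation rules might look like. The Bus/Line structures and the three rules are illustrative assumptions only, not the utility's actual data model or the checks performed by any Siemens product.

```python
# Illustrative sketch of automated grid-model validation checks.
# The model structure and rules here are assumptions for the example.

from dataclasses import dataclass


@dataclass
class Bus:
    bus_id: str
    nominal_kv: float


@dataclass
class Line:
    line_id: str
    from_bus: str
    to_bus: str
    nominal_kv: float


def validate_model(buses: list[Bus], lines: list[Line]) -> list[str]:
    """Run simple electrical-consistency checks; return a list of issues."""
    issues: list[str] = []
    bus_by_id = {b.bus_id: b for b in buses}

    for line in lines:
        for end in (line.from_bus, line.to_bus):
            # Check 1: every line must reference buses that exist in the model.
            if end not in bus_by_id:
                issues.append(f"{line.line_id}: unknown bus '{end}'")
                continue
            # Check 2: a line's voltage level should match the buses it connects.
            bus = bus_by_id[end]
            if bus.nominal_kv != line.nominal_kv:
                issues.append(
                    f"{line.line_id}: {line.nominal_kv} kV line connected "
                    f"to {bus.nominal_kv} kV bus '{bus.bus_id}'"
                )

    # Check 3: flag islanded buses that no line connects to.
    connected = {end for line in lines for end in (line.from_bus, line.to_bus)}
    for bus in buses:
        if bus.bus_id not in connected:
            issues.append(f"{bus.bus_id}: bus is not connected to any line")

    return issues


if __name__ == "__main__":
    buses = [Bus("B1", 12.47), Bus("B2", 12.47), Bus("B3", 4.16)]
    lines = [Line("L1", "B1", "B2", 12.47), Line("L2", "B2", "B9", 12.47)]
    for issue in validate_model(buses, lines):
        print(issue)
```

In practice, a production tool would run a much larger rule set of this kind automatically on every model change, so that each target system receives a model that has already passed the same consistency checks.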
During this session, you will learn how a major U.S. distribution utility has tackled these data modeling challenges head-on with an innovative, future-proof approach. The session covers the journey to implement a scalable tool that automates data quality checks and facilitates the creation of a consistent, validated model used across multiple target systems: one shared core model for all distribution departments.
Session sponsored by Siemens