Embedded Technologies for the Smart Grid
An Ounce of Prevention: Bringing Real-Time Monitoring to the Grid
After years—perhaps decades—of falling behind the technology curve, the power industry now has an opportunity to upgrade its infrastructure. Done correctly, this upgrade will lead to a secure grid that uses existing infrastructure to carry more electricity while reducing and preventing blackouts.
SUPREET OBEROI, REAL-TIME INNOVATIONS
The electrical grid is in transition. Currently, grids are controlled by aging SCADA systems with primitive or no communications between generation stations. Despite the obvious fact that these systems are critical to the health and growth of the national infrastructure, today’s electrical grid is largely running “open loop,” with poor monitoring, poor metrics, and no real concept of distributed control.
The Northeast blackout struck on August 14, 2003. Ten million people in Ontario and more than 45 million people in eight U.S. states were left without power. Many regions lost their water supply, regional airports closed, and cellular and cable services were disrupted. In Ottawa and New York, some people resorted to looting.
While the investigations found many causes for the blackout—from untrimmed trees to software bugs—one thing was clear: with a modern grid infrastructure, in which utilities share their information in real time in a fault-tolerant manner, this blackout could have been caught earlier. The investigation revealed that during the precious minutes following the first outages in Ohio, when action might have been taken to prevent the blackout from spreading, the local utility’s managers had to phone their operators to ask what was happening on their own wires. Meanwhile, the failures cascaded to neighboring regions. In other words, the grid operators were flying blind!
This incident delivered a sense of urgency to modernize our electric grid infrastructure.
Situational Awareness for the Next-Generation Grid
There is little ability to analyze events that threaten the performance or operation of the grid. Worse, without the support of increasingly rare human-expert operators, there is essentially no ability to detect, analyze and correct anomalies in real time. Systemic failures are a constant risk.
“Synchrophasor” technology will improve this situation. Collecting phase angle measurements (phasors) from disparate locations at the same instant allows control centers to directly measure the state of the grid and take corrective action. Used in real time, it offers the possibility of proactively stopping failures by alerting system operators where the grid is becoming unstable, and either isolating or correcting the problem.
The North American SynchroPhasor Initiative (NASPI) seeks to provide the sensors, communication capabilities and control centers required to implement this key functionality. The network connecting phase sensors will be called NASPInet. NASPI’s vision is to implement a wide-area network capable of monitoring and controlling the entire grid. By collecting data in real time with minimal latency from multiple points on the grid, operators or automated programs can quickly detect and respond to anomalies and threats.
NASPInet is an effort to develop an industrial-grade, secure, standardized data communications infrastructure for the electric grid. In particular, the NASPInet data bus aims to enable utilities to share their phasor data in real time.
At the heart of the NASPInet architecture is the Phasor Measurement Unit (PMU). This simple device receives a GPS clock signal along with voltages and currents from the electric power system. The measured values are time-stamped and are called synchrophasors, since they are time-synchronized phasor values. The word phasor indicates a measurement of both signal magnitude and phase angle (Figure 1).
Figure 1: A graphical representation of the data reported by the PMU device.
This PMU data creates wide-area visibility across the power system in ways that let grid operators understand real-time conditions, see early evidence of emerging grid problems, and better diagnose, implement and evaluate remedial actions to protect system reliability. The PMU data does not provide new ways to remedy a fault in the grid; rather, it provides high-fidelity information in the moments leading up to an event, giving the operator a chance to take corrective action (Figures 2 and 3).
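To make the idea concrete, a synchrophasor can be modeled as a magnitude and phase angle paired with a GPS-derived timestamp. The following is a minimal sketch; the class and field names are invented for illustration and are not NASPInet definitions:

```python
import cmath
from dataclasses import dataclass

@dataclass
class Synchrophasor:
    """A time-stamped phasor: magnitude, angle and GPS time (illustrative)."""
    magnitude: float   # e.g., bus voltage magnitude
    angle: float       # phase angle in radians, relative to the GPS time reference
    timestamp: float   # GPS-synchronized epoch seconds

    def as_complex(self) -> complex:
        # Rectangular form, convenient for computing angle differences
        return cmath.rect(self.magnitude, self.angle)

def angle_difference(a: "Synchrophasor", b: "Synchrophasor") -> float:
    """Angle separation between two buses measured at the same instant.
    A widening separation between regions is a classic indicator of grid stress."""
    assert a.timestamp == b.timestamp, "phasors must be time-aligned"
    return cmath.phase(a.as_complex() / b.as_complex())
```

Because both measurements share a GPS time base, the angle difference is meaningful even when the two PMUs sit hundreds of miles apart.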
As shown in Figure 4, a Phasor Data Concentrator (PDC) correlates the phasor data from a number of PMUs and PDCs by time and feeds it as a single stream to other applications. PDCs allow us to capture wide-area disturbances, improve system security and coordinate substation visualization.
Figure 4: Conceptual diagram of NASPInet elements.
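The PDC’s core job of time alignment can be sketched as: buffer incoming samples, group them by timestamp, and emit a single combined frame once every expected PMU has reported. The class below is an illustrative toy, not a real PDC implementation:

```python
from collections import defaultdict

class PhasorDataConcentrator:
    """Toy PDC: groups PMU samples by timestamp into one aligned frame."""
    def __init__(self, expected_pmus):
        self.expected = set(expected_pmus)
        self.buffer = defaultdict(dict)   # timestamp -> {pmu_id: phasor}

    def receive(self, pmu_id, timestamp, phasor):
        """Buffer one sample; return a time-aligned frame when complete."""
        frame = self.buffer[timestamp]
        frame[pmu_id] = phasor
        if set(frame) == self.expected:
            del self.buffer[timestamp]
            return timestamp, frame       # one entry of the single output stream
        return None                       # still waiting on other PMUs
```

A production PDC must also discard frames whose wait timeout has expired, since late phasor data is useless to downstream applications.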
The Phasor Gateway (PGW), in the NASPInet lexicon, controls access to all signals from its substations. Think of it as a router that lets the NASPI network access data from within the organization after verifying cyber security, access rights and data integrity, among other things. The Phasor Gateway is critical because it also manages traffic format and timing compatibility, and sets traffic priority according to classes of data.
NASPInet will enable the exchange of different classes of data with different priorities. For example, NASPInet will enable exchange of large-volume historical data with high reliability but without strict end-to-end latency needs. On the other hand, NASPInet can mandate strict latency needs on exchange of PMU data while allowing some samples to be lost. To do this, NASPI defines classes of data, which indicate the type of contract a publisher and a subscriber need to have when exchanging that class of data (Table 1).
Table 1: NASPInet classes of data.
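The contract idea behind these classes of data can be expressed directly in code: each class carries its own reliability and latency terms, and the network layer matches a publisher’s offer against a subscriber’s request. The names and numbers below are illustrative assumptions, not the NASPI definitions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DeliveryContract:
    """QoS terms a publisher and subscriber agree on for one class of data."""
    reliable: bool          # must every sample arrive?
    max_latency_ms: float   # end-to-end deadline; float("inf") means none

# Two very different illustrative contracts:
REALTIME_PMU = DeliveryContract(reliable=False, max_latency_ms=10.0)
HISTORICAL   = DeliveryContract(reliable=True,  max_latency_ms=float("inf"))

def compatible(offered: DeliveryContract, requested: DeliveryContract) -> bool:
    """An offer satisfies a request when it is at least as reliable
    and at least as fast (the matching style DDS middleware uses)."""
    return (offered.reliable or not requested.reliable) and \
           offered.max_latency_ms <= requested.max_latency_ms
```

Modeling the contract explicitly lets the infrastructure reject a mismatched publisher/subscriber pair at connection time rather than silently violating the subscriber’s expectations.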
Solving Complex Distributed System Challenges
As NASPInet evolves from a concept to a prototype to a production-ready deployment, it will face increasingly complex technical challenges that the networking layer will need to address.
Scalability: The number of PMUs currently deployed across North America is in the hundreds. However, as the need for situational awareness—and the means to provide it—grows, not only will the number of deployed PMUs grow exponentially, but other types of sensors will also begin providing valuable visibility into the state of the grid. This means there could be tens of thousands of sensors exchanging information in real time. The middleware, as the foundation of such a data grid, should be able to support that scale.
Low Latency: For phasor data to be useful for aligning the grid, it must not only be accurate, but also be available within a strict time window. As mentioned, PMU data is time-aligned; it makes no sense to receive the data after its time window has elapsed. Usually the window is in the tenths of a millisecond, with a trend toward even finer resolution. In addition, this data must be delivered over vast geographical distances, and it is most useful when electricity is being delivered from one grid to another. The middleware should be able to support real-time, low-latency transmission over a wide-area network.
Fault Tolerance: The grid will carry various categories of data. To support fault-tolerant data transmission, the grid cannot rely only on hardware redundancy or multiple paths—these options will either be prohibitively inefficient or simply unavailable. The middleware protocols should support reliability without sacrificing the ability to multicast for scalability.
Quality of Service: Put simply, different categories of data may have different transmission needs. Some classes of data, such as PMU readings, can afford lost samples but not high latency. Historical sensor data, requested after an “incident,” has no low-latency requirement but demands strict reliability. This implies that the network should be able to send different classes of data with different qualities of service (QoS). While a networking layer that sent all data with the strictest guarantees would suffice, it would be inefficient, wasting network resources and preventing scalability. What is required is middleware that can optimize the use of network resources according to the class of data.
Security: There are multiple reasons why PMU data transmissions need to be confidential and tamper-proof. Unauthorized access to PMU readings can expose utilities and transmission operators to legal liabilities, particularly when loads need to be dropped or, worse, when there is a blackout. In addition, by tampering with PMU readings, attackers could adversely affect the flow of power on our nation’s grid. The middleware should support protocols that protect data confidentiality and integrity, and access control schemes that give only authorized users access to the data.
Heterogeneity: Power utilities will not build NASPInet from scratch. These utilities have significant investments in legacy networking protocols at the substation level, protocols that cannot scale and perform to NASPInet’s needs. What is required is a middleware protocol that can not only meet the needs of NASPInet, but also interoperate with legacy protocols, some dating back decades.
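The integrity half of the security requirement above can be illustrated with a standard keyed-hash scheme: a shared key lets a receiver detect any tampering with a PMU reading in transit. The sketch below uses Python’s standard hmac module; the key handling and message layout are assumptions for illustration, not a NASPInet specification:

```python
import hashlib
import hmac
import json

SHARED_KEY = b"example-key"   # in practice, provisioned securely per gateway

def sign_reading(reading: dict) -> bytes:
    """Serialize a PMU reading and append a 32-byte HMAC-SHA256 tag."""
    payload = json.dumps(reading, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    return payload + tag

def verify_reading(message: bytes) -> dict:
    """Reject the reading if its tag does not match the payload."""
    payload, tag = message[:-32], message[-32:]
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("PMU reading failed integrity check")
    return json.loads(payload)
```

Integrity protection alone does not provide confidentiality; a deployment would layer encryption and access control on top, as the middleware requirements above describe.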
Many Industries Have Solved Similar Problems
Fortunately, many other industries have faced, and solved, similar network integration problems. High-performance middleware based on the “Data Distribution Service for Real-Time Systems” (DDS) standard offers publish-subscribe, peer-to-peer networking with highly configurable delivery parameters. These quality-of-service (QoS) parameters allow DDS to connect disparate systems with varying delivery needs into a single real-time networked system. DDS-compliant middleware is proven in hundreds of mission-critical applications and is rated at the Department of Defense’s highest Technology Readiness Level (TRL 9), indicating that it is field-proven in actual mission-critical deployments.
DDS is an adopted international standard. It is actively developed and maintained by the Object Management Group (OMG), the largest systems software standards body. First picked up by the Navy, it is now mandated by the U.S. military for high-performance networking. DDS adoption is also growing rapidly in many industries beyond military systems. The standard includes both API definitions (for source-code portability) and a wire specification that offers inter-vendor interoperation. Note that the wire specification is also an IEC standard, IEC 61158.
DDS supports intelligent partitioning, dynamic deployment, “plug and play” discovery and configurable reliability. It scales well; it is proven in systems that require over 11 million publish-subscribe pairs. It has also seen initial success in the electrical industry. For instance, the Grand Coulee Dam is retrofitting its control system to offer N-way redundant “never stop” operation based on DDS middleware. The Army Corps of Engineers is in full test now, and plans to go live with this design in Q4 2010. Thus, DDS will be online in a significant power generation application soon.
DDS provides a data-centric infrastructure that focuses on how data is moving and transforming in the system. It manages the flow of data, the data schema and all essential aspects of a distributed application. DDS can provide sub-millisecond deterministic delivery over dedicated transports such as the existing dark fiber in many transmission lines. DDS offers detailed QoS control of reliability, liveliness detection, redundancy and more. It implements reliable multicast for wide-area multi-point integration. DDS can also work over WAN networks and intelligently traverse firewalls. It integrates easily and securely with other protocols and standards, Web services, databases and enterprise architectures. Finally, DDS offers both standard APIs—for source portability between vendors—and an internationally accepted standard wire protocol for interoperability between implementations.
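DDS itself is a full standard with vendor implementations, but the data-centric pattern at its core can be sketched in a few lines: publishers and subscribers never address each other, only a named topic, and the middleware routes matching samples between them. The toy model below is not a DDS API; it only illustrates the decoupling that makes the pattern scale:

```python
from collections import defaultdict

class DataBus:
    """Toy data-centric bus: peers share only topic names, never addresses."""
    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        """Register interest in a topic; the publisher never sees this list."""
        self.subscribers[topic].append(callback)

    def publish(self, topic, sample):
        # The bus, not the publisher, decides who receives each sample.
        for deliver in self.subscribers[topic]:
            deliver(sample)

# Hypothetical topic name and sample layout, for illustration only:
bus = DataBus()
received = []
bus.subscribe("grid/pmu/ohio", received.append)
bus.publish("grid/pmu/ohio", {"v_mag": 1.02, "angle": 0.17, "t": 1.0})
```

In real DDS middleware this decoupling is combined with the per-topic QoS contracts, discovery and reliable multicast described above, which is what lets thousands of sensors join or leave the network without reconfiguring their peers.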