Building out the Smart Grid

The Industrial Internet Will Help Solar Arrays Actually Reduce Carbon Emissions

As renewable energy sources at the edge of the current power grid continue to grow across neighborhoods and businesses, microgrids with edge intelligence and peer-to-peer communication are necessary to deliver on the promise of green energy.



You’ve covered your roof with solar arrays, your monthly electric bill has plunged, and you’re doing your part to reduce carbon emissions. Actually, the first two are real, but the last is only partially true. What happens when a cloud suddenly shuts down your power generation while your air conditioner is still running? Somewhere on the other side of your local utility’s power grid, a fast-spinning generator of some kind picks up the load. Luckily, your local utility anticipated the sudden load and had that generator spinning extra fast, burning additional fossil fuel and adding extra wear and tear to the equipment in the process.

It is this challenge of renewable distributed energy resources that is driving so much research and development in microgrids. Somehow, the sudden changes in local power generation have to be managed. This requires edge intelligence for local control and peer-to-peer communication for low-latency and high reliability with no single point of failure. The edge communication and control framework needed to make microgrids work well is emerging, based on the latest architectures and technologies underlying the intelligent, distributed systems of the Industrial Internet of Things.

The Distributed Energy Resources Challenge

Renewable energy sources like solar and wind hold great promise as replacements for dirty fossil-fuel energy. Deploying these resources locally promises to diversify our energy generation and make it more resilient. They also help businesses and homeowners reduce carbon production and cut energy bills in exchange for an up-front capital investment. But because wind and solar are intermittent, we can’t depend on them fully. We need backup power sources for when the wind dies down and the skies cloud over.

One of the biggest issues, especially with solar, is how quickly the power output can change. Currently, in the US, a local power substation, where high-voltage power is converted to neighborhood distribution voltage levels, monitors power needs and reports those back to the utility. It can then take up to 15 minutes to spin up (or down) a centralized generation plant as necessary. A solar array, by contrast, can lose power in a matter of milliseconds when a fast-moving cloud passes over. An alternate source has to be online and ready to pick up the load in milliseconds. If there isn’t sufficient backup, the voltage on the grid can drop and the grid can fail.

The more solar energy resources grow in a utility’s service area, the more excess spinning reserve the utility has to maintain as backup. While the sun is shining, power may flow from these distributed solar arrays back to the grid, reducing the need for fossil fuel generators. But those generators still need to be running and spun up sufficiently to take over the load the moment the solar arrays stop producing. So for every solar array pushing power onto the grid, there is an equivalent fossil fuel generator spinning in the background, burning fuel and wearing out bearings.

What is needed is 15 to 30 minutes of lead time. If the utility has that window after a cloud bank moves over the neighborhood or the wind dies, it can ramp up a new generator instead of keeping spinning reserve online. Energy storage and load reduction are promising techniques for providing that time.

Microgrids Integrate Energy Storage and Load Reduction

The primary method of energy storage being deployed today is batteries—very large banks of batteries. Battery storage systems come in different sizes, from house mounted to grid level systems backing up an entire neighborhood or industrial park. They make a lot of sense in combination with solar as they can be charged up while the sun is out and quickly step in when the sun disappears. Utilities will often include battery storage systems in their large solar power plants for this reason.

Another very promising technique is a virtual power source: rather than generating backup power, load is reduced instead. A technique called Demand Response quickly turns off non-critical loads. To the utility, the effect is almost the same as turning on a backup power source. For example, in addition to quickly switching on a battery, a system in your house could turn off your air conditioner when the sun goes behind a cloud. In a factory, lighting could be dimmed, the temperature setting for the cooling system raised, and the electric vehicle chargers in the parking lot switched off for the duration.
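
Such a demand response controller can be sketched in a few lines. The following is a purely illustrative Python model, not part of any real microgrid product: load names, power figures and shed priorities are hypothetical, and a real controller would also handle measurement, safety interlocks and restoration.

```python
# Hypothetical demand-response sketch: when local solar output drops,
# shed non-critical loads in priority order until the remaining load
# fits within the available backup supply.

def shed_loads(loads, available_kw):
    """Return (names of loads to switch off, remaining load in kW).

    loads: list of (name, kw, priority) -- higher priority = shed first.
    """
    remaining = sum(kw for _, kw, _ in loads)
    shed = []
    # Shed the most expendable loads first (highest priority value).
    for name, kw, _ in sorted(loads, key=lambda l: -l[2]):
        if remaining <= available_kw:
            break
        shed.append(name)
        remaining -= kw
    return shed, remaining

loads = [
    ("air_conditioner", 3.5, 3),   # most expendable
    ("ev_charger",      7.2, 2),
    ("refrigerator",    0.6, 1),   # keep running if possible
]

# Cloud cover: solar drops out, only 2 kW of battery backup remains.
shed, remaining = shed_loads(loads, available_kw=2.0)
```

The air conditioner and EV charger are switched off, leaving only the refrigerator to run from the battery.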

The result of integrating energy storage and load reduction (demand response) is a much smoother load curve presented to the utility. That leaves time to ramp up large backup systems, reducing both costs and carbon production. To enable it, however, we need a distributed communication and control system at the edge, integrated with local sensors and storage. The solar array, or a nearby controller detecting the energy drop, must send commands to the batteries or to the load reduction system in a matter of milliseconds. And when the solar arrays power back up, the air conditioners can be turned back on and the batteries switched to charging mode. Fast response times, peer-to-peer communication and intelligent control at the edge are required.

A microgrid provides this intelligent communication and control framework and integrates distributed energy resources at a local level. It manages the interaction between the energy sources and loads in the local power grid as well as interacting with the higher-level utility control system. Beyond smoothing the intermittencies of renewable energy sources, microgrids also enable other valuable use cases. Loads in a neighborhood peak in the evening, just as the power from solar is waning. A microgrid can charge batteries from the solar arrays during the day and power the evening load from the batteries. If energy prices vary during the day, a microgrid can optimize the power it uses or sells to reduce costs or even make money. If the external power grid fails, the microgrid can continue to provide service to its local customers, perhaps by firing up an emergency backup generator in time to take over from the batteries. For an industrial site, a hospital, or a data center, uninterrupted power based on renewables at the core is imperative.

A Microgrid Architecture Based on the Industrial Internet

Many microgrid development projects are turning to the Industrial Internet to find modern protocols and edge intelligence architectures. The center of the Industrial Internet is now the Industrial Internet Consortium (IIC). The IIC was founded in April 2014 by GE, Cisco, Intel, AT&T and IBM to accelerate the creation of an interoperable and secure Industrial Internet. Towards this goal, the IIC supports three major initiatives: 1) to foster an ecosystem of companies, technologies and solutions for the Industrial Internet, 2) to develop an Industrial Internet Reference Architecture (IIRA) and recommend standards for the IIRA, and 3) to develop proof-of-concept testbeds that demonstrate solutions for Industrial Internet systems. The IIRA is being extended with more detailed guidance and specific technologies and standards. The purpose is to provide a common reference architecture that spans all industries represented by the more than 190 members (as of July 2015) of the IIC.

The first version of the IIRA was published in June 2015 with a high-level overview of the architectural elements needed to deliver an Industrial Internet system (IIS). The Connectivity section contains key elements for any framework that will deliver interoperable and secure communications. The Connectivity architecture (Figure 1) requires a central communication “databus” with gateways to integrate edge devices or sub-networks that use legacy communication protocols or existing interfaces. By normalizing all communications through a single standard, this architecture achieves interoperability between devices and applications in the system and simplifies the security implementation.

Figure 1
The Industrial Internet Reference Architecture requires a “core connectivity standard” to ensure interoperability and security in Industrial Internet Systems.

Three IIC members—RTI, National Instruments and Cisco—are applying the precepts of the IIRA to the microgrid challenge. The Communication and Control Testbed for Microgrid Applications provides a peer-to-peer communication framework to interconnect powerful edge intelligence controllers and analytics nodes. RTI, NI and Cisco have implemented an instance of the IIRA for this Microgrid Testbed program using RTI’s Connext platform based on the standard Data Distribution Service (DDS) protocol, NI’s CompactRIO intelligent controllers and Cisco’s Connected Grid Routers. Phase 1 of the testbed program is underway, developing a proof-of-concept demonstration in Austin, Texas. Phase 2 of the program will be a more complete implementation of a microgrid in the simulation labs at Southern California Edison. Once the security and safety of the framework are proven, Phase 3 will integrate a real microgrid in San Antonio, Texas with CPS Energy, the municipal utility.

Through this Microgrid Testbed program, RTI, NI and Cisco are showing that an interoperable and secure Industrial Internet solution can streamline the development of microgrids. Using an Industrial Internet architecture promises a more open, interoperable set of systems where solutions from a wide variety of vendors and system integrators can be applied. Adhering to the IIRA ensures microgrid systems can take advantage of rapid advances in standards, technologies and solutions driven by the huge momentum of the Industrial Internet.

Implementing the Microgrid Communications Framework with DDS

The Phase 1 Microgrid Testbed proof-of-concept demonstration is a greatly simplified “microgrid” system using stand-ins like lightbulbs and fans for power loads, a simple single-phase power circuit driven by a standard wall plug, and relays that can cut out particular loads as needed. NI CompactRIO controllers run logic to mimic a microgrid running under different modes. The normal optimization mode runs with the microgrid connected to the power grid (the wall socket in this case) and power measurements showing nominal power usage by the loads on the circuit. A simulated battery takes over as a power source when the power grid fails (someone pulls the plug from the wall) and a demand response controller immediately cuts most of the loads to ensure the battery is balanced with the remaining loads. Other simulated modes like storm mode and grid synchronization mode round out the current demonstration. Actual batteries, solar arrays and other microgrid equipment will be added to flesh out the proof-of-concept.

The control logic for the various modes in the microgrid demonstration runs on different controllers across the distributed system. With DDS as the core connectivity standard, there is a great deal of flexibility in placing the control logic (Figure 2). The databus allows the logic to be redeployed as needed across the distributed edge controllers.

Figure 2
The IIC Microgrid Testbed Phase 1 proof-of-concept demonstration communication and control architecture using the DDS protocol for the core connectivity standard. Native DDS controllers communicate peer-to-peer across the DDS databus while legacy devices connect via software gateways. Control logic can be deployed as needed across the controllers in the system.

Advantages of Using a DDS-Based Implementation

DDS implements a publish-subscribe model that connects information producers (publishers) with information consumers (subscribers). The overall distributed application is composed of processes called “participants,” each running in a separate address space, possibly on different computers. A participant may simultaneously publish and subscribe to strongly-typed data streams identified by names called “Topics.” The middleware presents type-safe interfaces to both publishing and subscribing applications.
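
The pattern can be illustrated with a minimal in-memory model. This sketch is conceptual only: it mimics the Topic-based decoupling described above but is not the real DDS API, and the `Databus` class and topic names are invented for illustration.

```python
# Minimal in-memory sketch of the publish-subscribe pattern: participants
# publish samples to named Topics, and every subscriber registered for a
# Topic receives each sample. Publishers never address subscribers
# directly -- they are matched only by Topic name.
from collections import defaultdict

class Databus:
    def __init__(self):
        self._subs = defaultdict(list)   # topic name -> subscriber callbacks

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, sample):
        for callback in self._subs[topic]:
            callback(sample)

bus = Databus()
received = []
bus.subscribe("GridVoltage", received.append)     # a monitoring node
bus.publish("GridVoltage", {"node": "feeder-1", "volts": 239.8})
```

In real DDS the matching is performed automatically by discovery across the network, and samples are strongly typed rather than plain dictionaries.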

DDS defines a communication relationship between publishers and subscribers. The communications are decoupled in space (nodes can be anywhere), time (delivery may be immediately after publication or later), and flow (level of reliability and bandwidth control). The DDS middleware automatically discovers publishers and subscribers and connects them based on the Topic they are providing or wish to receive. Quality of Service (QoS) parameters specify the timeliness, frequency, reliability and content delivered to each application (Figure 3). 

Figure 3
With DDS, automatic discovery matches publishers and subscribers of a data topic and Quality of Service parameters describing application data needs shape the resulting data flows. Specifying and relying on data-centric properties enables decoupled, resilient systems.

To increase scalability, topics may contain multiple independent data channels identified by “keys.” This allows nodes to subscribe to many, possibly thousands, of similar data streams with a single subscription. When the data arrives, the middleware can sort it by the key and deliver it for efficient processing. 
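
The keyed-topic idea can be sketched as follows. This is a conceptual model, not the DDS API; the field name `meter_id` and the sample contents are hypothetical.

```python
# Sketch of keyed Topics: a single subscription to one Topic receives
# samples for many instances (here, many meters); the middleware sorts
# arriving samples by the key field for efficient per-instance processing.
from collections import defaultdict

class KeyedSubscriber:
    def __init__(self, key_field):
        self.key_field = key_field
        self.instances = defaultdict(list)   # key value -> samples

    def on_data(self, sample):
        self.instances[sample[self.key_field]].append(sample)

sub = KeyedSubscriber(key_field="meter_id")
for sample in [
    {"meter_id": "m-001", "kw": 1.2},
    {"meter_id": "m-002", "kw": 4.7},
    {"meter_id": "m-001", "kw": 1.3},
]:
    sub.on_data(sample)
```

One subscription thus tracks two independent data channels, and the same mechanism scales to thousands of meters without additional subscriptions.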

DDS also provides a “state propagation” model. This model allows nodes to treat DDS-provided data structures like distributed shared-memory objects, with local caches efficiently updated only when the underlying data changes. There are facilities to ensure coherent and ordered state updates. 
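
A minimal sketch of that state-propagation behavior, with invented names, might look like this:

```python
# Sketch of the state-propagation model: the local cache behaves like a
# distributed shared-memory object and is updated (version bumped) only
# when the published state actually changes; duplicate updates are ignored.
class StateCache:
    def __init__(self):
        self.state = {}
        self.version = 0

    def on_update(self, new_state):
        if new_state != self.state:        # only real changes propagate
            self.state = dict(new_state)
            self.version += 1

cache = StateCache()
cache.on_update({"breaker": "closed"})
cache.on_update({"breaker": "closed"})     # duplicate: cache unchanged
cache.on_update({"breaker": "open"})
```

Real DDS additionally offers coherent-set and ordered-access facilities so that multi-field state changes are applied atomically and in order.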

DDS is fundamentally designed to work over unreliable transports, such as UDP or wireless networks. No facilities require central servers or special nodes. Efficient, direct, peer-to-peer communications, or even multicasting, can implement every part of the model.

DDS does not require a central server, so implementations can use direct peer-to-peer, event-driven transfer. This provides the shortest possible delivery latency, a significant advantage over client-server or broker-based designs. Central servers and brokers impact latency in several ways. At a minimum, they add an intermediate network “hop,” nominally doubling the minimum peer-to-peer latency. If the server is loaded, that hop can add substantial latency. Client-server designs also handle inter-client transfers poorly; latency is especially bad if clients must “poll” for changes on the server.

Fine control over real-time QoS is perhaps the most important feature of DDS. Each publisher-subscriber pair can establish independent QoS agreements. Thus, DDS designs can support extremely complex, flexible data flow requirements.

Periodic publishers can indicate the speed at which they can publish by offering guaranteed update deadlines. By setting a deadline, a compliant publisher promises to send a new update at a minimum rate. Subscribers may then request data at that or any slower rate. Publishers may offer levels of reliability, parameterized by the number of past issues they can store to retry transmissions. Subscribers may then request differing levels of reliable delivery, ranging from fast-but-unreliable “best effort” to highly reliable in-order delivery. This provides per-data-stream reliability control. 
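
The deadline matching rule described above can be captured in one comparison. This sketch is illustrative, not the DDS API; the rates chosen are hypothetical.

```python
# Sketch of DDS-style QoS matching for the deadline policy: a publisher
# offering an update at least every `offered_period_s` seconds is
# compatible with any subscriber requesting that rate or a slower one.
def deadline_compatible(offered_period_s, requested_period_s):
    # The offered (publisher) deadline must be at least as strict as
    # the requested (subscriber) deadline.
    return offered_period_s <= requested_period_s

ok  = deadline_compatible(0.1, 1.0)   # 10 Hz publisher, 1 Hz subscriber
bad = deadline_compatible(2.0, 1.0)   # publisher too slow for subscriber
```

In DDS, an incompatible request/offer pair is reported to both applications rather than silently failing, so misconfigured data flows are caught at match time.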

The DDS publish-subscribe model provides fast location transparency. That makes it well suited for systems with dynamic configuration changes. It quickly discovers new participants and data topics. The system cleanly flushes old or failed nodes and data flows as well.

Scalability Ensures Systems Can Be Extended and Federated

With data-centric publish-subscribe, there is no need to maintain N-squared network connections as there is with message- or connection-centric protocols. As a result, DDS-based systems can scale to over 10 million publish-subscribe pairs, and application code can be reduced by a factor of 10.
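
The N-squared point is easy to quantify. A quick back-of-the-envelope comparison:

```python
# A fully connected mesh of N endpoints needs N*(N-1)/2 point-to-point
# links, each of which must be configured and maintained. A data-centric
# databus instead requires each endpoint only to declare the Topics it
# reads or writes; discovery handles the matching.
def mesh_links(n):
    return n * (n - 1) // 2

small = mesh_links(10)     # 45 links for just 10 endpoints
large = mesh_links(100)    # 4,950 links for 100 endpoints
```

The connection-centric bookkeeping grows quadratically while the databus declaration grows linearly with the number of endpoints.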

Since DDS does not assume a reliable underlying transport, it can easily take advantage of multicasting. With multicasting, a single network packet can be sent simultaneously to many nodes, greatly increasing throughput and scale. Most client-server designs, by contrast, cannot handle a client sending to multiple potential servers simultaneously. In large networks, multicasting greatly increases throughput and reduces latency.

Much like the security in a database, where data is secured table by table, DDS allows data to be secured topic by topic. The system integrator specifies which DDS applications have read or write permission for which topics in the system. Only those data topics that need to be kept confidential are encrypted. Topics that do not need confidentiality can be left unencrypted for higher performance and simply signed with a message authentication code (MAC) to verify authenticity. This fine-grained security model helps mitigate malicious insider attacks by limiting the access of a compromised application or a malicious user.
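
Conceptually, the integrator's permissions amount to a per-application, per-topic access table. The sketch below is illustrative only; the application names, topic names and table format are hypothetical, not the DDS Security configuration format.

```python
# Sketch of topic-by-topic access control: each application is granted
# read or write permission per Topic. A compromised application can only
# touch the Topics it was explicitly granted.
PERMISSIONS = {
    "battery_controller": {"BatteryCommand": {"read"},
                           "BatteryStatus":  {"write"}},
    "hmi_dashboard":      {"BatteryStatus":  {"read"}},
}

def allowed(app, topic, action):
    return action in PERMISSIONS.get(app, {}).get(topic, set())

can_read  = allowed("hmi_dashboard", "BatteryStatus", "read")     # granted
can_write = allowed("hmi_dashboard", "BatteryCommand", "write")   # denied
```

Even if the dashboard is compromised, it cannot inject battery commands, which is exactly the insider-attack containment described above.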

The Industrial Internet Consortium’s Microgrid Testbed program applies the cross-industry Industrial Internet protocol DDS, intelligent edge controllers and industrial network equipment to implement a communication and control architecture for microgrid applications. Long term, the real power of the Industrial Internet of Things (IIoT) is to connect sensor to cloud, power to factory, and road to hospital. To do that, we must change core infrastructure to use generic, capable networking technology that can span industries, field systems and the cloud. Applying the IIoT to microgrids is a key step to enable large-scale efficient use of green energy.



Real-Time Innovations
Sunnyvale, CA
(408) 990-7400