How Do You Communicate to Your Peers?

In OpenVPX systems, Ethernet over PCIe is a growing trend.

BY DAVID HINKLE, ELMA


OpenVPX offers many choices for how to use the high-speed interconnects defined in the standard’s various backplane profiles as your data plane fabric. Generally speaking, when designing a solution, you must first choose the protocol you want to run over the data plane, and then find products whose hardware layer provides that capability.

So, if you want to use PCIe (PCI Express) for your data plane fabric, you would look for boards with hardware that provides PCIe ports to that fabric. Likewise, if you want to use Ethernet on your data plane fabric, you would look for boards with Ethernet chips. From a hardware perspective, the two protocols appear to be mutually exclusive, forcing you to choose one or the other for data plane traffic.

Have Your Cake and Eat It Too

Both PCIe and Ethernet, ubiquitous protocols in the embedded market, are well understood by application developers, and customers have many legacy applications built on them. Wouldn’t it be convenient to use both protocols over the same hardware layer, without any additional effort by the developer? Before we go there, however, let’s look at the possible uses of each protocol.

First, let’s look at PCIe. Most people accept it as a very high-speed way to communicate from a host root complex CPU board to end nodes performing a variety of I/O tasks. Developers simply assume there will be driver support to make this all happen, which is true in most operating systems available in the embedded computing space.

The application developer can use these resources without much additional effort; life is good in an orderly, hierarchical PCI domain where you have a single root complex and multiple end nodes providing I/O such as graphics boards or storage controller boards. But when your solution needs multiple processing boards and you start adding peers, you quickly run into issues, such as how to handle communication between the multiple root complexes.

Generally, this is solved by using non-transparent PCIe nodes, which isolate the two domains. These multi-domain, multi-root-complex designs are very common, but communicating from one domain to the other requires code to set up and configure the various bridges and switches in those domains so that applications can pass traffic through a non-transparent node and access resources on the other side.

This is fairly well understood by PCIe device driver developers, but not necessarily by application developers, who only want to use a resource on the other side of a non-transparent node. So a multi-root project that requires access through non-transparent bridges to other devices needs a device-driver-level developer who understands how to configure the devices in the system.
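To make that driver-level work concrete, here is a minimal sketch in C of what an application might touch once a driver has already configured a non-transparent bridge window: it maps the window into user space and writes into the peer’s memory. The device node name, window size, and doorbell comment are purely illustrative assumptions, not any vendor’s actual API.

```c
/*
 * Hypothetical sketch: pushing data through a non-transparent bridge
 * (NTB) window that a device driver has already configured. The device
 * node name and window size are illustrative, not a real vendor API.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define NTB_DEV     "/dev/ntb0"   /* hypothetical char device exposing the NTB window */
#define WINDOW_SIZE (1 << 20)     /* 1 MB window translated into the peer's domain */

int main(void)
{
    int fd = open(NTB_DEV, O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Map the translated window: writes here land in the peer root
     * complex's memory, because the NTB rewrites the addresses. */
    void *win = mmap(NULL, WINDOW_SIZE, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
    if (win == MAP_FAILED) {
        perror("mmap");
        close(fd);
        return 1;
    }

    /* Copy a message into the peer's memory; a real design would also
     * ring a doorbell register so the peer knows data has arrived. */
    memcpy(win, "hello, peer", 12);

    munmap(win, WINDOW_SIZE);
    close(fd);
    return 0;
}
```

Every project without middleware ends up writing some variation of this plumbing, plus the kernel driver beneath it.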

The need for this additional configuration and device driver work is nothing new; it has been done this way for years in the PCI space. What makes someone consider adding this type of driver support to a solution is that PCIe’s performance has continued to double, from GEN 1 to GEN 2 and now GEN 3. That delivers up to 8 Gbit/s per lane, and because PCIe scales easily from x1 (one lane) to x16, it’s understandable that application developers want to exploit that performance to communicate with their peers over PCIe.
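As a rough check on those numbers, the short C program below computes per-lane and per-link throughput from the published signaling rates and line encodings (8b/10b for GEN 1 and GEN 2, 128b/130b for GEN 3). The figures are standard PCIe arithmetic, not vendor benchmarks.

```c
/* Back-of-the-envelope PCIe throughput from the published signaling
 * rates and line encodings of each generation. */
#include <stdio.h>

int main(void)
{
    struct { const char *gen; double gts; double encoding; } pcie[] = {
        { "GEN 1", 2.5, 8.0 / 10.0 },    /* 8b/10b encoding    */
        { "GEN 2", 5.0, 8.0 / 10.0 },    /* 8b/10b encoding    */
        { "GEN 3", 8.0, 128.0 / 130.0 }, /* 128b/130b encoding */
    };

    for (int i = 0; i < 3; i++) {
        double lane_gbps = pcie[i].gts * pcie[i].encoding; /* effective Gbit/s per lane */
        printf("%s: %.2f Gbit/s per lane, %.1f Gbit/s at x8, %.1f Gbit/s at x16\n",
               pcie[i].gen, lane_gbps, lane_gbps * 8.0, lane_gbps * 16.0);
    }
    return 0;
}
/* Output: GEN 1 gives 2.00 Gbit/s per lane, GEN 2 gives 4.00, and
 * GEN 3 gives 7.88, or about 63 Gbit/s across an x8 port. */
```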

Ethernet, on the other hand, is the de facto protocol many application developers reach for when a solution requires sending and receiving data among peers in a multi-SBC system. The software model for communicating over Ethernet to another SBC is well understood by application developers; the usual challenge is meeting the performance requirements.

While Ethernet performance has continued to increase, with many 10G Ethernet ports now available in embedded solutions, it has not grown at nearly the rate of PCIe. Higher-speed Ethernet such as 40G is available in the general market, but not readily available to the embedded market. The hardware costs of these higher-performance Ethernet chips are also a concern: boards built with 10G-and-up chips are not nearly as attractive as the much lower-cost, higher-performing PCIe chips used everywhere. Another issue is the overall latency introduced by the standard Ethernet stack’s interaction with the CPU over these chips. Many of the newer chips have “offload” engines to help, but at added cost.

You Really Can Have It Both Ways

Now back to the original thought of having your cake and eating it too. What if you could keep the application developer’s experience of writing software to Ethernet, but provide a “middleware” layer that replaces the Ethernet hardware layer while preserving the standard APIs everyone is used to? Figure 1 shows typical stack breakdowns of middleware from two vendors that provide this capability for their products.

Figure 1
Typical layer stack (diagrams courtesy of Concurrent Technologies and Interface Concept).

As noted, several companies already offer this “middleware,” with some going beyond Ethernet over PCIe to provide a host of options for communicating with a peer. But for Ethernet over PCIe, they all use the low-cost, low-latency PCIe hardware at the physical layer and present the upper-layer Ethernet APIs to the application developer, as if the developer were working with hardware-enabled Ethernet ports.
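In practice, the “standard API” is the BSD sockets interface. The minimal UDP sender below uses nothing but standard socket calls; under an Ethernet-over-PCIe middleware of the kind described here, the same code would run unchanged because the middleware exposes a virtual Ethernet interface below the socket layer. The port number and peer address are illustrative assumptions.

```c
/* Minimal UDP sender using only the standard BSD sockets API. Under an
 * Ethernet-over-PCIe middleware, the same code runs unchanged; the
 * peer address on the PCIe-backed virtual interface is illustrative. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in peer = { 0 };
    peer.sin_family = AF_INET;
    peer.sin_port   = htons(5000);                      /* illustrative port */
    inet_pton(AF_INET, "192.168.10.2", &peer.sin_addr); /* peer SBC's address */

    const char *msg = "hello over the data plane";
    if (sendto(sock, msg, strlen(msg), 0,
               (struct sockaddr *)&peer, sizeof(peer)) < 0)
        perror("sendto");

    close(sock);
    return 0;
}
```

Whether the bytes cross an Ethernet PHY or a PCIe lane is decided below the socket layer, which is why the application needs no changes when the middleware is introduced.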

Companies providing products that support their own SBCs and switches include Concurrent Technologies, with its FIN-S (Fabric Interconnect Networking Software), and Interface Concept, with its Multiware offering. Several other SBC and switch vendors serving the VPX market also provide this capability.

Although each company has its own middleware version, the common thread is that, from the perspective of the application developer, they are using a standard, well-known API to communicate, just as if they were using an Ethernet hardware-enabled board. This provides portability of applications, which is key to many projects as they move forward to new architectures in pursuit of better performance.

The Proof Is in the Pudding

A couple of examples of Ethernet over PCIe performance support why this is a growing trend. First, PLX indicates in a performance presentation that it has seen Ethernet over PCIe 2.x on an x8 port reach around 40 million 64-byte random writes per second and 21 Gbit/s of throughput (2x 10 Gb Ethernet).

Concurrent Technologies also shared performance information in a white paper highlighting its product. There the company reported benchmarks showing Ethernet (packet sizes over 2K) over PCIe GEN 2 x4 ports outperforming a 10 GbE PCIe GEN 2 x8 Ethernet card by almost 2 to 1. The paper also includes a CPU utilization comparison clearly demonstrating that, at the larger packet sizes, Ethernet over PCIe can drop CPU utilization to around 5%, versus around 10% for 10 GbE.

Most of these performance numbers reflect what can be done with GEN 1 and GEN 2 chips. With GEN 3 now more readily available, providing yet another doubling of bandwidth at a cost point well below a 10 GbE chip solution, the performance customers are looking for is now within reach.

So Who Needs It?

Applications that need streaming I/O would benefit greatly from Ethernet over PCIe, an area that is becoming increasingly prominent in the VPX space.

As a solution provider, Elma Electronic integrates payload boards from multiple vendors to deliver the solutions our customers demand, and OpenVPX provides a rich selection of boards from multiple vendors (Figure 2). Mixing vendors is generally a non-issue. The issue that often arises when defining a solution with peer-to-peer communication is delivering something application developers can start working with immediately: an application-ready system. Previously, we had to propose one of two choices: either the customer developed the device-driver-level code, or Elma created it for them at additional cost. Now we can solve the problem by including middleware that eases their development. This gives the customer the comfort of knowing they do not need to do “custom” driver work to make a system with multiple SBC boards operate; it works out of the box with the added middleware.

Figure 2
SBCs from both Concurrent Technologies and Interface Concept. They use the latest (fourth-generation) Intel chips along with their respective middleware packages.

For problems that need multiple SBC boards passing data between them at high speed, while still retaining the traditional host-peripheral structure that PCIe provides, it is time to explore this new capability and enjoy the ease of use and resulting performance.

Elma Electronic

Fremont, CA

(510) 656-3400

www.elma.com