TECHNOLOGY CONNECTED

Optical Connectivity in System Design

New System Architectures to Interface High Data Rate Sensors

New technology offers engineers of EW or EO/IR systems the possibility of using remote high-performance sensor modules with Gigabit/10 Gigabit serial interfaces. After processing, these systems need high-speed data transmission links to the wide area operational network. To benefit from this new technology, designers must define innovative new system architectures.

BY THIERRY WASTIAUX, INTERFACE CONCEPT

The development of the market for high-performance surveillance systems has led to significant growth in sophisticated and powerful sensors. These sensors can be high-definition daylight or infrared cameras as well as large antenna arrays for radars or multi-antenna systems for SIGINT, SFDR or direction finding systems. According to market and technology forecast consultants ElectroniCast, the global market for sensors is growing at a double-digit rate, extending beyond the defense market into new fields such as energy and smart infrastructure. UAVs have already been recognized as essential to many military operations, and their importance drives the market for surveillance, EO/IR, radar, detection, communication and intelligence-gathering UAV payloads. Inside UAVs, as well as UGVs and UUVs, these surveillance and electronic warfare systems run in harsh and tightly packed environments, with sensors often located remotely from the processing units.

Delivering high-frequency signals through long coaxial cables from the sensor to the processing units has a number of disadvantages. The latest generation of high-definition IR and daylight optical sensors and RF antenna systems generates a tremendous flow of sample data feeding the processing units, and at these high frequencies the signal loss in copper can be significant. In addition, copper cables carrying analog high-frequency signals present EMI radiation and susceptibility issues. Their impact on the weight of UAVs and on maintenance can also be a real burden.

The serial links from the sensors carry various protocols such as Serial FPDP, 10GbE, Aurora, Serial RapidIO and others. Data can be delivered over copper or optical cables. Optical cables are free from EMI radiation, avoiding interference inside tightly packed unmanned vehicle systems, and are much lighter than copper cables. In addition, single-mode and even multimode fiber cables can collect data from large antenna arrays or from widely spaced antennas at distances greatly exceeding the constraints met in embedded systems.

With this very high sample data throughput from the sensor, many algorithms such as beamforming, down conversion, image filtering and compression need massively parallel computing. FPGAs of the latest generation have become real “processing workhorses,” offering the necessary parallel computing resources, including thousands of dedicated multipliers, optimized memory interfaces and high-speed transceivers. Their computing power per watt is close to ten times that of the best CPU or GPU when computing on integers. With a ratio of computing power to power consumption that is second to none, FPGAs have become the best possible interface to sensors, allowing low-latency processing and communication.

In terms of communication capabilities, FPGA vendors offer many ways of communicating, with built-in PCIe ports as well as lightweight gigabit serial protocols like Aurora. These protocols, along with Serial FPDP, appear to be among the best for carrying high-speed sample data throughput. Processing these data flows also implies back-end computing behind the FPGA parallel processing, using high-end SBCs with their own native PCIe interfaces. These SBCs can perform applications such as detection, tracking or target recognition. They also run the software protocol stacks that connect the system to the wide area network. All these considerations lead to a high-level architecture that can be summarized in the block diagram of Figure 1.

Figure 1
High level sensor-to-processing units architecture.

Given this high-level architecture, what would be the best possible solutions to implement it? Interfacing the optical links coming from the sensors with FPGAs at a controlled cost implies the use of standard form factors, and VITA has defined excellent specifications that allow high-speed communication while reducing cost through standardization.

The OpenVPX (VITA 65) standard has become a well-proven solution, with backplanes able to sustain data rates of up to 10 Gbit/s per lane. Reaching even greater speeds remains a technical challenge, and the backplane may in the future become a data communication bottleneck in a system.

The VITA 57 standard defines a high pin count connector for FPGA mezzanine cards (FMCs) that sustains high data rates at a competitive cost and can be interfaced with the high-speed transceivers of the latest generation of FPGAs. Based on these two efficient and well-proven standards, we can go further.

The IC-QSFP-FMCa can be seen as a good example of an optical interface in a small form factor (Figure 2). This VITA 57 FMC features two QSFP cages, each of which can receive a four-lane optical QSFP+ transceiver. Each QSFP+ interface allows communication on four full-duplex lanes at a data rate of up to 10 Gbit/s per lane, depending on the maximum data rate achievable on the FPGA transceiver interface. They can reach a distance of up to 100 meters over multimode 850 nm fiber. A total of eight SERDES lanes run up to the FMC connector to interface with the signal processing module. An onboard microcontroller manages the QSFP interfaces through an I2C bus. A clock synthesizer can be configured by the microcontroller through an SPI bus.

Figure 2
The IC-QSFP-FMCa is an example of an FMC mezzanine card that can provide two quad optical interfaces.
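For readers unfamiliar with QSFP+ management, the sketch below shows how a module's status registers can be read over I2C using the standard SFF-8636 memory map (byte 0 holds the identifier, bytes 22-23 the module temperature). It is a minimal illustration only: the bus number and direct host access are assumptions, since on the IC-QSFP-FMCa this management bus is normally driven by the onboard microcontroller.

/* Minimal sketch, assuming the QSFP+ management interface is exposed to a
 * Linux host through i2c-dev (the bus number is hypothetical; on the
 * IC-QSFP-FMCa this bus is normally handled by the onboard microcontroller).
 * Reads the SFF-8636 identifier byte and module temperature from the lower
 * memory page at the standard two-wire address 0x50. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>

#define QSFP_I2C_ADDR 0x50

static int qsfp_read(int fd, uint8_t reg, uint8_t *buf, size_t len)
{
    /* Set the register pointer, then read back 'len' bytes. */
    if (write(fd, &reg, 1) != 1)
        return -1;
    return (read(fd, buf, len) == (ssize_t)len) ? 0 : -1;
}

int main(void)
{
    int fd = open("/dev/i2c-1", O_RDWR);              /* hypothetical bus */
    if (fd < 0 || ioctl(fd, I2C_SLAVE, QSFP_I2C_ADDR) < 0) {
        perror("i2c");
        return 1;
    }

    uint8_t id, temp[2];
    if (qsfp_read(fd, 0, &id, 1) == 0 &&              /* byte 0: identifier (0x0D = QSFP+)  */
        qsfp_read(fd, 22, temp, 2) == 0) {            /* bytes 22-23: temperature, 1/256 C  */
        int16_t t = (int16_t)((temp[0] << 8) | temp[1]);
        printf("identifier 0x%02x, temperature %.1f C\n", id, t / 256.0);
    }

    close(fd);
    return 0;
}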

Front End Processing FPGA Modules

The block diagram in Figure 3 shows the IC-FEP-VPX3c, a Xilinx Virtex-7 OpenVPX module. Two quad GTX transceiver banks interface with the eight lanes coming from the FMC connector, after optical-to-electrical conversion on the receive side and before electrical-to-optical conversion on the transmit side.

Figure 3
Block diagram of an OpenVPX module in which optical interfaces are implemented through the FMC card on the front end of the FPGA, while the back end of the FPGA connects to the backplane.

The Xilinx IBERT test confirms the very low bit error rate at the nominal data rate and proves the validity of this communication approach. Behind the FPGA transceivers, firmware is instantiated to implement the various protocols (sFPDP, 10 GbE, Aurora, PCIe, etc.).

The relevant IP blocks in the FPGA (FFT, beamforming, filtering and down conversion, among others) process the data samples received through the optical interfaces using the extensive resources of the Virtex-7, especially its DSP logic elements. The processing results may be stored in two high-bandwidth DDR3 banks, each of which has a 64-bit interface. These 2 Gbyte memory banks allow a large amount of data to be stored before transfer to other parts of the system.

The next step consists of moving this data from the DDR3 memory of the FPGA to another memory in the system through the data pipes connected to the backplane. Powerful DMA engines instantiated within the FPGA move the data over PCIe from the FPGA DDR3 memory banks to any other memory location in the system for further processing. This is done via non-transparent PCIe switches in order to avoid root complex conflicts between Intel processors. These switch matrices can sit on dedicated switch boards or be implemented directly on the CPU boards. The DMA engines can move effective useful data at a rate of 1.5 Gbytes/s on a PCIe x4 link, which comes close to the theoretical throughput limit once PCIe communication layer overhead and 8B/10B encoding are taken into account. The DMA engines are driven by CPUs connected to the FPGA module, or by CPUs on the FPGA module itself in the case of the 6U form factor. The reference design delivered with the FPGA modules contains the DMA engine IP that performs these high-speed memory transfers.
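To make the data path concrete, the sketch below shows the kind of descriptor a host CPU typically hands to a scatter-gather DMA engine for such a transfer. The structure layout, field names and addresses are hypothetical and are not the format used by the actual reference-design DMA engine IP.

/* Hypothetical illustration only: not the descriptor format of the actual
 * reference-design DMA engine. It shows the information a scatter-gather
 * DMA engine typically needs to move a block from FPGA DDR3 into host
 * memory over PCIe. */
#include <stdint.h>
#include <stdio.h>

struct dma_desc {
    uint64_t src_addr;   /* offset in the FPGA's DDR3 bank           */
    uint64_t dst_addr;   /* PCIe bus address of the host buffer      */
    uint32_t length;     /* transfer size in bytes                   */
    uint32_t flags;      /* e.g. interrupt on completion             */
    uint64_t next;       /* bus address of the next descriptor, or 0 */
};

#define DMA_FLAG_IRQ_ON_DONE (1u << 0)

int main(void)
{
    /* Describe a 4 Mbyte move from DDR3 offset 0 to a (hypothetical) host
     * buffer previously mapped for DMA at bus address 0x800000000. */
    struct dma_desc d = {
        .src_addr = 0x0,
        .dst_addr = 0x800000000ull,
        .length   = 4u * 1024 * 1024,
        .flags    = DMA_FLAG_IRQ_ON_DONE,
        .next     = 0,                    /* single descriptor in the chain */
    };

    /* A real driver would place this descriptor in DMA-coherent memory and
     * hand its bus address to the engine's control registers; printing it
     * keeps the sketch self-contained. */
    printf("move %u bytes: DDR3+0x%llx -> host 0x%llx\n",
           d.length, (unsigned long long)d.src_addr,
           (unsigned long long)d.dst_addr);
    return 0;
}

As a sanity check on the quoted figure: assuming a PCIe Gen2 x4 link (5 GT/s per lane, as the 8B/10B encoding implies), encoding leaves 16 Gbit/s, or 2 Gbytes/s, on the wire, so 1.5 Gbytes/s of payload is roughly 75 percent of that, which is in the range expected once TLP headers, DLLPs and flow control are accounted for.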

A software package called Multiware provides a high-level abstraction layer offering the designer services such as Virtual Ethernet over PCIe, shared memory and message synchronization, with DMA-powered transfers between FPGA modules and CPU modules or between different CPU modules. The designer can then focus on the application without the burden of writing this communication software.
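The sketch below is purely illustrative of that level of abstraction. The function name mw_send and the endpoint string are hypothetical and do not represent the actual Multiware API, and the "remote" memory is simulated locally so the example is self-contained.

/* Purely illustrative: the mw_* name below is hypothetical and is NOT the
 * actual Multiware API. It only shows the level of abstraction such
 * middleware provides: the application names a destination and hands over
 * a buffer, while PCIe windows, DMA programming and completion signaling
 * are handled underneath. The "remote" memory is simulated locally. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BLOCK_BYTES (64 * 1024)

static uint8_t remote_shared_mem[BLOCK_BYTES];   /* stands in for another module's memory */

/* Hypothetical middleware call: push a buffer to a named endpoint. A real
 * implementation would program a DMA engine and wait for completion. */
static int mw_send(const char *endpoint, const void *buf, size_t len)
{
    if (len > sizeof(remote_shared_mem))
        return -1;
    memcpy(remote_shared_mem, buf, len);         /* simulated DMA transfer */
    printf("sent %zu bytes to %s\n", len, endpoint);
    return 0;
}

int main(void)
{
    static uint8_t processed[BLOCK_BYTES];       /* results from the FPGA stage */
    memset(processed, 0xA5, sizeof(processed));

    /* "sbc0:tracker" is a made-up endpoint name for a back-end SBC. */
    return mw_send("sbc0:tracker", processed, sizeof(processed)) ? 1 : 0;
}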

Users will want to access the data processed by the surveillance system. Again, for the same reasons as above, optical communication appears to be the right solution, even though the data flow at this stage is smaller than the flow directly behind the sensors. The same QSFP+ FMC mezzanine may be used to connect the system to the outside world or to a radio transmission system, for example to send the processed data over an air-to-ground link. At this stage, GbE/10GbE or PCIe protocols will be preferred. Virtual Ethernet over PCIe allows the use of classical TCP/IP stacks. FPGA modules bearing the optical FMC mezzanine can also instantiate a TCP/IP stack as an IP core to offload this protocol processing and increase the per-slot processing power of the system. Alternatively, the fiber link can start from a Rear Transition Module featuring QSFP+ cages at the rear of an FPGA module. This demonstrates that the technologies exist to interface high-speed sensors to the signal processing FPGAs and to connect EW and radar systems to their operational networks.
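Because Virtual Ethernet over PCIe, like a native GbE/10GbE optical port, appears to the operating system as an ordinary network interface, the application side is plain sockets code. A minimal sketch follows; the peer address and port are hypothetical.

/* Minimal sketch: streaming a block of processed data over a standard
 * TCP/IP socket. Because Virtual Ethernet over PCIe (or a 10GbE optical
 * port) looks like a normal network interface to the OS, no special API
 * is needed. The peer address and port below are hypothetical. */
#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    if (sock < 0) { perror("socket"); return 1; }

    struct sockaddr_in peer = {0};
    peer.sin_family = AF_INET;
    peer.sin_port   = htons(5000);                        /* hypothetical port */
    inet_pton(AF_INET, "192.168.10.20", &peer.sin_addr);  /* hypothetical peer */

    if (connect(sock, (struct sockaddr *)&peer, sizeof(peer)) < 0) {
        perror("connect");
        close(sock);
        return 1;
    }

    /* A block of processed results to push toward the operational network. */
    static uint8_t block[64 * 1024];
    memset(block, 0, sizeof(block));

    ssize_t sent = 0;
    while (sent < (ssize_t)sizeof(block)) {
        ssize_t n = send(sock, block + sent, sizeof(block) - sent, 0);
        if (n <= 0) { perror("send"); break; }
        sent += n;
    }

    close(sock);
    return 0;
}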

Interface Concept
Quimper, France
+33 (0)2 98 573 030
www.interfaceconcept.com