June 2011

Head Transplant



Those of us old enough to remember when the word “control” always followed the word “embedded” can recall a day when even computer-controlled equipment was beset with electromechanical dials, gauges, lights and switches to provide input to, and display results from, embedded systems. Keypads and seven-segment displays were considered sophisticated “user I/O” in those days.

With the advent of the PC, the opportunity to move to keyboard-based input and display-based output opened up many new applications. But even with early graphics displays, the absence of graphical software packages and sufficient memory limited output for most applications to text only. So keyboard input and text output took off, supported by the then-ubiquitous DOS operating system. As we moved through the ’90s, graphical displays began to emerge, but early graphical systems were low resolution and slow. More importantly, the display consumed substantial CPU cycles and expensive RAM, and was pretty much out of the question when any level of real-time performance or determinism was required.

To deal with these limitations, embedded control system architectures evolved into distributed computing networks, where one subsystem was dedicated to the human-machine interface (HMI) while other systems on the network did the actual control, using DOS or an RTOS to drive motors, sample analog inputs and pass information over a network to and from the dedicated display system. In many applications, both the control system and the HMI system were in the same box, and in some cases the “network” was a simple RS-232 connection. As the Internet ramped up, it became possible to run the network over large distances, with the HMI system remote from the control systems. One could control a headless microwave tower on a mountaintop from an office desktop thousands of miles away.

As the sophistication of applications grew into the first decade of the 21st century, the HMI demands on the graphics systems grew as well. Graphics controllers became a more and more important part of embedded systems. Touch screens and high-resolution displays dropped in price. And high-performance graphics interfaces, such as AGP and PCI Express x16, emerged to funnel data ever faster between the application processor and the graphics processor.  

But the biggest impact by far was the relentless march of Moore’s Law—doubling the transistor density of integrated circuits every 18 months. Millions of transistors became available on processor chips, more than could be consumed by additional cores or cache alone. Processor chips began to integrate other CPU board functions into the processor chip—first the Northbridge (or memory controller hub), but by the early to mid-2000s, processor designers also began to include graphics controllers.  

Early integrated graphics controllers used smaller, older-generation graphics technology. While their performance was adequate for many embedded applications, those applications requiring high-resolution displays or HD-quality video still required a separate, discrete graphics controller. For many applications, however, it was now cost-effective to combine both HMI and control capabilities in the same system.

The pervasiveness of this strategy became clearer as processor companies first built relationships with graphics processing companies and then acquired them outright as demonstrated by VIA’s purchase of S3, AMD’s purchase of ATI Technologies, and Intel’s long-term technology licensing relationship with Nvidia.

But many multimedia applications, such as digital signage and gaming, required more performance than these integrated graphics controllers could provide, so discrete graphics controllers continued to thrive. At the same time, the emphasis on small form factor systems and low power consumption drove many applications to avoid a discrete graphics controller wherever possible.

This year, these two trends have collided with the introduction of AMD’s Fusion family of Accelerated Processing Units (APUs), which combine a low-power, gigahertz-plus multicore processor with a high-performance graphics processor in a single BGA package. More applications than ever before can combine the graphical HMI with the control application in a single small form factor system.

In addition, graphics processors now comprise hundreds of parallel calculating cores. While optimized for calculating textures, shading and 3D virtual reality, these general-purpose graphics processing units (GPGPUs) can be harnessed for a variety of next-gen embedded features ranging from data encryption to smaller, lighter UAVs to computed tomography.

While this won’t help the mountain-top microwave repeater, it does provide for a strong combination of user interface and real-time control in the same system, further distancing modern embedded systems from those old days of keypads, dials, gauges and switches.