Outside the Box


Start with the external connectivity, and end with the peripherals and network interfaces: full circle. In between, select one or more processors and chipsets to crunch the data before passing it back through the OS to internal storage, or out across cables or through radios.

As embedded engineers, we have to admit that we often fall for processor vendors’ marketing pitches. We’ve got to have the latest and greatest high-speed machine. We want to build in enough processing headroom for future features and software upgrades without having to go back later and change the hardware. We want to brag about the cutting-edge design on our resumes. And of course our sales team tells us that customers like the sizzle of certain processor brand names inside the box. Clearly the hype would have us approaching our projects in the reverse direction: inside-out.

Your mission, should you choose to accept it, is to design an optimal solution while filtering out the noise and spin of people with an agenda. To do this, resist the temptation to choose the latest whiz-bang processor first. Tell the pointy-haired boss who has attended too many seminars to take a vacation instead. If he tells you to “think outside the box,” tell him that you already are, both figuratively and literally. That ought to get him off your back, at least long enough for him to finally understand your point. Embedded system designs must start and end with the external operating environment. After all, it’s the I/O, dummy. (No, we haven’t forgotten that it’s also the software, dummy.)

This year is another banner year for processor launches. In the embedded x86 world, performance seems to grow faster than operating systems and applications can consume it. Detractors of Moore’s Law watch yet again as aggressive die shrinks and bleeding-edge transistors and dielectrics deliver a glut of processing power and bus bandwidth. The single-core 2 GHz processors of several generations back, with only generation 1 of PCI Express, are now easily outperformed by dual- and quad-core processors with gen 2 lanes from the affordable embedded ultra-mobile roadmap. Larger caches and faster RAM interfaces for the cache misses complete the picture. New low-power microserver processors deserve the applause they are getting. If processing can be distributed or offloaded from the main CPU, the lower power consumption means a smaller, lighter device; depending on the co-processing alternatives, the cost may come down too.
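The gen 1 to gen 2 jump mentioned above is easy to quantify. Both generations use 8b/10b line coding, so each lane carries 80% of its raw signaling rate as payload; the sketch below works out the familiar per-lane numbers (it ignores packet and protocol overhead, so real throughput is somewhat lower):

```python
# Effective per-lane, per-direction bandwidth for PCI Express gen 1 and gen 2.
# Both generations use 8b/10b encoding, so 20% of the raw bit rate is
# line-coding overhead. Packet/protocol overhead is ignored in this sketch.

def pcie_lane_mb_per_s(gigatransfers_per_s, encoding_efficiency=8 / 10):
    """Usable payload bandwidth of one PCIe lane in MB/s (one direction)."""
    bits_per_s = gigatransfers_per_s * 1e9           # 1 transfer = 1 bit on the wire
    payload_bits = bits_per_s * encoding_efficiency  # strip 8b/10b overhead
    return payload_bits / 8 / 1e6                    # bits -> bytes -> MB

gen1 = pcie_lane_mb_per_s(2.5)   # PCIe 1.x: 2.5 GT/s per lane
gen2 = pcie_lane_mb_per_s(5.0)   # PCIe 2.0: 5.0 GT/s per lane

print(f"gen 1: {gen1:.0f} MB/s per lane")   # 250 MB/s
print(f"gen 2: {gen2:.0f} MB/s per lane")   # 500 MB/s
```

Doubling the per-lane rate, rather than the lane count, is what lets those newer roadmap parts move more data through the same connectors.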

After analyzing the I/O requirements and scaling down the main processor, examine the impressive array of standard-form-factor industrial computer boards and processor modules. If the Mini-ITX shoe fits, wear it. If the I/O circuitry is available on standard slot cards and the shock and vibration requirements are modest, an inexpensive industrial ITX solution can be cobbled together. If that proves too bulky or flimsy, a custom carrier board can implement the I/O with the exact layout and connectorization desired. This design path yields a carrier that can be reused for generations with a simple, compatible CPU module swap. The overall process leads to good results, and there is simply too much at stake—including your reputation as a designer—to jump the gun.

The amount of money that can be saved by downsizing the processor next time is substantial. It could reduce the size, weight and cost of the thermal solution as well. Before gulping down the processor vendor’s excess transistor Kool-Aid, take a close look at what’s just enough to keep the I/O happy. Start outside the system, trace the bandwidth inside the box, through the CPU and back out again. Your buyer and accounting team will become your new best friends. And your boss should come around too, once the Kool-Aid buzz wears off.