EDITORIAL

The “ASIC Quandary”—How Much Can We Integrate, When and Why?

TOM WILLIAMS, EDITOR-IN-CHIEF


We’ve just gotten news of a market survey on graphics shipments that reports a decline in overall graphics shipments for the year. We will not embarrass the company by naming it, because while its survey included graphics shipped in desktops, notebooks, netbooks and PC-based commercial and embedded equipment, it did not include ARM-based tablets or, apparently, ARM-based embedded devices. It did somewhat lamely admit that tablets have changed the PC market. Duh.

If we’re talking about shipments of graphics processors, both discrete and integrated onto the same die with a CPU, it makes no sense at all to ignore the vast and growing number of devices based on ARM processors with high-performance integrated graphics. The report claims that shipments from graphics leader Nvidia have dropped, but it does not acknowledge shipments of the multicore graphics processors integrated with ARM cores on that same company’s Tegra CPU products. These processors appear in a wide variety of Android-based tablets. And, of course, there is no mention of the graphics integrated on the Apple-proprietary ARM-based CPUs built into the millions of iPads being sold. So what exactly is the point of all this? Go figure.

It was easier when graphics companies sold graphics processors that could be discrete or integrated into graphics add-in modules. All this integration has gotten very messy. Now, whether you think you need high-end graphics capability or not, it comes with your Intel Core i5 or i7; it comes with your low-power Atom CPU; and it is a featured part of your AMD G- or R-Series APU. One result is that developers really are making use of the powerful graphics that come with the chips they need for their designs, and more and more possibilities are opening up for the use of such graphics in embedded consumer and industrial applications.

It turns out, however, that this development in graphics points to a wider phenomenon. It is now possible to integrate practically anything we want onto a single silicon die. Of course, just because we can does not automatically make it a good idea. And that is the core of what we might call “the ASIC quandary.” What, exactly, are the conditions that must be met to put together a given combination of general vs. specialized functionality as hard-wired silicon? What amount of configurability and programmability? What mix of peripherals and on-chip memory, and of what type, can be brought together that makes technical, market and economic sense? And how much of it can we really take advantage of? Of course, if you are able to write a large enough check, you can now get an ASIC that integrates practically anything, but that is not the path of the majority of the industry.

Interestingly, graphics is something we now understand pretty well, while multicore CPUs are something we are still struggling to fully master (see the Editor’s Report in this issue). Back in the ’80s, a company named Silicon Graphics occupied a huge campus in Silicon Valley. They produced a very high-end graphics chip called the Geometry Engine that was used in their line of advanced graphics workstations. They also developed the graphics language that eventually became OpenGL, the most portable and widely used graphics software today and one that can be used with all manner of high-end GPUs.

Thus, of the mix of considerations mentioned above, the market’s volume acceptance of high-end graphics has made it a natural for integration onto all the latest CPUs. Multicore processors (which also integrate advanced graphics) are certainly another case, and there is enough motivation to exploit their full potential that they will long continue to gain in market acceptance. But what else can we expect to eventually be massively integrated onto everyday CPUs? Could it be DSP? Well, that appears to be already covered by some of the massively parallel general-purpose graphics processors that are now integrated into CPUs by companies like AMD and Nvidia. A language is emerging to take advantage of their power for numerically intensive processing, and its name, perhaps not accidentally, echoes OpenGL: it is called OpenCL.
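To make that concrete, here is a minimal sketch of what such numerically intensive work looks like in OpenCL C: a small FIR filter of the kind a dedicated DSP would traditionally handle, written to run across the parallel compute units of an integrated GPU. The kernel name, the four-tap coefficients and the one-work-item-per-sample mapping are illustrative assumptions, not drawn from any particular product.

/* A hypothetical 4-tap FIR filter: each work-item computes one output
   sample. Coefficients are fixed here only to keep the sketch short; a
   real filter would pass them in as a kernel argument. */
__kernel void fir4(__global const float *in,
                   __global float *out,
                   const int n)
{
    const float taps[4] = {0.1f, 0.4f, 0.4f, 0.1f};
    int i = get_global_id(0);        /* index of this work-item */

    if (i >= 3 && i < n) {           /* skip samples without full history */
        float acc = 0.0f;
        for (int k = 0; k < 4; k++)
            acc += taps[k] * in[i - k];
        out[i] = acc;
    }
}

The host program compiles this source at run time and launches it with clEnqueueNDRangeKernel, and the same kernel source runs unchanged on the integrated GPUs of AMD APUs, Nvidia parts and other OpenCL-capable devices, which is exactly the portability argument being made here.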

For the present, we are also seeing the increasing integration not of hard-wired silicon functionality but of silicon configurability and programmability. The initial Programmable System on Chip (PSoC) from Cypress Semiconductor started as an alternative to the dauntingly huge selection of 8-bit processors that offered a dizzying assortment of peripheral combinations. PSoC has since grown to include products based on 32-bit ARM devices. More recently, devices have come out of Xilinx and Altera that combine a 32-bit multicore ARM processor and its normal peripherals on the same die with an FPGA fabric, so that custom logic in the fabric appears to software as just another set of peripherals (see the sketch below). While the market has yet to issue a final verdict here, this indicates the direction of the ASIC quandary: add the ability to make choices now and, if a large enough trend is later identified, some enterprising company may offer the same functionality hard-wired. The ability for truly massive integration is here. The direction it takes will depend on forces both technical and economic.
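As a rough illustration of what that combination buys the programmer, here is a minimal C sketch, assuming a Linux-capable ARM-plus-FPGA device of the Xilinx/Altera class, in which a block of custom logic in the fabric is exposed to the processor as memory-mapped registers. The base address and register offsets below are hypothetical placeholders; the real values come from the address map of the specific FPGA design.

/* Minimal sketch: write a control register in the FPGA fabric and read a
   status register back, from Linux user space via /dev/mem. All addresses
   are hypothetical and depend on the FPGA design's address map. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define FABRIC_BASE 0x43C00000u   /* hypothetical bus address of the logic */
#define REG_CONTROL (0x00 / 4)    /* hypothetical register offsets (words) */
#define REG_STATUS  (0x04 / 4)

int main(void)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) { perror("open /dev/mem"); return 1; }

    /* Map one page of the fabric's register space into user space. */
    volatile uint32_t *regs = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, FABRIC_BASE);
    if (regs == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    regs[REG_CONTROL] = 1;                                /* start the block */
    printf("status: 0x%08x\n", (unsigned)regs[REG_STATUS]); /* read it back */

    munmap((void *)regs, 4096);
    close(fd);
    return 0;
}

The same trick, a region of the processor’s address space standing in for arbitrary custom hardware, is what lets a designer prototype a function in the fabric today and, if the volumes justify it, migrate the proven logic into hard-wired silicon later.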