Advances in Interfaces for SSD Storage

Confused by Embedded SSDs? Don’t Be

Next time someone tells you they’re confused by SSDs – the multitude of interfaces, form factors, standards, and acronyms – tell them to sit back, relax, and just marvel at what SSDs are bringing to embedded systems.



As a vast array of solid-state storage products steadily penetrates the embedded systems space, it is not uncommon for designers to get a little overwhelmed by flash-storage options. SLC vs. MLC SSDs; form factors such as 2.5”, 1.8”, M.2, Slim SATA, mSATA, CompactFlash, CFast, and eUSB; interfaces like SATA, PCIe and PATA; and AHCI, NVMe and other aspiring standards – these all make for quite an acronym stew and a dizzying array of selections designers must make.

Making the most-appropriate selections is critical to achieving an optimal balance of an embedded system’s functionality, reliability, environmental considerations, and budget.  For such designs, SSDs developed specifically for embedded and industrial applications are emerging as key difference-makers, yet the form, function and standards they adopt vary widely.

If you’re confused by the seemingly countless SSD options out there and the alphabet soup of acronyms surrounding them, don’t be. 

Let’s look at how today’s embedded-focused SSDs are adopting new protocols, reliability techniques and form factors.  Of course, we also must acknowledge their heritage in the older, tried-and-true storage technologies that remain important to the embedded market – a space in which designs lean more toward stability and reliability than trendiness and cutting-edge speed.

To SLC or MLC: That is The (Bit) Question

The fundamental distinction between single-level cell (SLC) and multi-level cell (MLC) flash is that SLC, storing one bit per cell, is widely considered more reliable than MLC, which stores two bits per cell.  However, SLC is more expensive, on a per-bit basis, than MLC.  For the embedded-system designer, therefore, the decision between SLC- and MLC-flash SSDs is not about density, reliability, or cost alone, but rather a combination of all three.
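The density/cost trade-off is simple arithmetic: the same die holds twice as many bits in MLC mode, roughly halving the cost per bit. The sketch below uses made-up round numbers purely for illustration, not actual vendor pricing.

```python
# Hypothetical illustration of the SLC vs. MLC density/cost trade-off.
# The die cost and cell count below are made-up round numbers, not vendor data.

CELLS_PER_DIE = 8 * 2**30   # assume 8 Gcells per die (hypothetical)
DIE_COST_USD = 4.00         # assume identical die cost for both (hypothetical)

def capacity_bits(cells, bits_per_cell):
    """Raw capacity in bits for a given cell count and bits stored per cell."""
    return cells * bits_per_cell

slc_bits = capacity_bits(CELLS_PER_DIE, 1)  # SLC: one bit per cell
mlc_bits = capacity_bits(CELLS_PER_DIE, 2)  # MLC: two bits per cell

# Same die, twice the bits: MLC's cost per bit is half of SLC's.
slc_cost_per_gib = DIE_COST_USD / (slc_bits / 8 / 2**30)
mlc_cost_per_gib = DIE_COST_USD / (mlc_bits / 8 / 2**30)

print(f"SLC: {slc_bits / 8 / 2**30:.0f} GiB at ${slc_cost_per_gib:.2f}/GiB")
print(f"MLC: {mlc_bits / 8 / 2**30:.0f} GiB at ${mlc_cost_per_gib:.2f}/GiB")
```

Of course, the cheaper bits come with the reliability and endurance caveats discussed above, which is why the decision is never about cost alone.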

Power-fail protection is essential to retain critical data in the event of a power failure, and that protection challenge is more significant in MLC designs than with SLC because of a phenomenon known as paired page writes.  When an SSD writes to MLC NAND flash, each cell’s two bits belong to a pair of pages – a “lower” and an “upper” page – and data already committed to the lower page can be corrupted if power is lost while its paired upper page is being programmed. SLC doesn’t use paired pages and, therefore, won’t face this challenge.
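A rough way to see the risk: because two logical pages share one set of physical cells, an interrupted upper-page program can destroy lower-page data that was already “written.” The toy model below is a deliberate simplification for illustration, not real NAND behavior.

```python
# Toy model of MLC paired pages: two logical pages share one row of cells,
# so an interrupted upper-page program corrupts the committed lower page.
# This is a deliberate simplification for illustration, not real NAND behavior.

class MLCCellRow:
    def __init__(self, n_cells):
        self.lower = [None] * n_cells  # bits committed by the lower-page write
        self.upper = [None] * n_cells  # bits committed by the upper-page write

    def program_lower(self, bits):
        self.lower = list(bits)

    def program_upper(self, bits, power_fails_at=None):
        for i, b in enumerate(bits):
            if power_fails_at is not None and i == power_fails_at:
                # Power loss mid-program: cells from this point on are left in
                # an indeterminate state, taking the *lower* page's data along.
                for j in range(i, len(bits)):
                    self.lower[j] = None
                    self.upper[j] = None
                return False
            self.upper[i] = b
        return True

row = MLCCellRow(8)
row.program_lower([1, 0, 1, 1, 0, 0, 1, 0])   # lower page committed
ok = row.program_upper([0, 1, 1, 0, 1, 0, 0, 1], power_fails_at=3)
print(ok)          # False: the upper-page write never completed
print(row.lower)   # lower-page data from cell 3 onward is gone
```

In an SLC model each cell holds a single page’s bit, so an interrupted program can only affect the page being written – never data that was already committed.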

Some SSD manufacturers seem to believe that a hardware-only power-fail solution is enough, but this is often not the case.  In many industrial system designs, power failure is a complex problem due to varying voltages, power supplies, and communication between host and storage.  This multi-faceted challenge requires a multi-faceted approach to power-fail protection that includes both hardware- and firmware-mitigation techniques.

Additionally, some performance-enhancement techniques, including “early acknowledgement,” actually increase the risk of data loss: they take a shortcut by indicating to the host system that they “have the data” when in fact that data has not yet been committed to the flash.  So, if power is lost, so too is the data.  Embedded-system developers are therefore strongly advised to design with SSDs whose power-fail protection is integrated into both hardware and firmware, whether the system uses SLC or MLC.  In terms of both data security and total cost of ownership, this level of integration is a smart investment.
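The early-acknowledgement risk has a familiar host-side analogue: an acknowledged write is not necessarily a durable write. The sketch below (plain POSIX-style file I/O in Python; the filename is arbitrary) shows the extra step needed before data is actually committed to stable media.

```python
# An "acknowledged" write is not a durable write: os.write() returning
# successfully only means the data reached a buffer, much like an SSD that
# early-acknowledges before committing to flash. Durability takes an
# explicit flush (and, at the drive level, power-fail protection).
import os

fd = os.open("sensor_log.bin", os.O_WRONLY | os.O_CREAT, 0o644)  # arbitrary name
try:
    n = os.write(fd, b"critical-sample\n")  # "acknowledged": n bytes accepted...
    # ...but possibly still sitting in volatile caches. If power fails here,
    # the acknowledged data can vanish.
    os.fsync(fd)  # ask the OS (and the drive) to commit the data to stable media
finally:
    os.close(fd)

print(n)  # number of bytes the write call acknowledged
```

Even os.fsync() only goes as far as the drive’s own caches – which is exactly why integrated hardware/firmware power-fail protection in the SSD itself matters.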

Form Factors Galore

The SLC/MLC selection appears substantially less complex in comparison to the choices designers must make from among the myriad SSD form factors.  Embedded designs typically favor smaller SSDs able to fit into dense, “set it and forget it” spaces; the systems where the drives reside are usually deployed in tightly packed and/or often inaccessible environments.  And while 2.5-inch and even 1.8-inch SSDs may seem minuscule enough – indeed, for many embedded designs they are – there’s a substantial portion of systems that require even smaller drives.  It is for this very segment that vendors such as Virtium provide SSDs with form factors such as M.2, Slim SATA, mSATA, CompactFlash, CFast, and eUSB (Figure 1).

Figure 1
Embedded, industrial-grade SSDs come in a variety of form factors.

The good news is that each of these form factors is based on and has evolved significantly from tried-and-true industry standards – some dating back several decades. 

Solid-State Driving in the Express Lane

Originally developed to replace racks of hard drives, enterprise-class SSDs over time adopted a number of high-speed interfaces to eliminate the throughput and data-integrity limitations in systems’ storage.  SAS, for example, became SSDs’ interface of choice for storing mission-critical enterprise data.  With its dual-port modes, error-correcting features and other data-integrity enhancements, SAS showed its muscle, delivering greater performance and higher reliability than SATA.

However, the SAS-SSD “marriage” quickly brought designers to the realization that traditional hard drive interfaces, while perhaps cost-effective, still posed a performance bottleneck, sparking a search for even greater interface speeds.  All roads on that quest led to PCIe, now the interface of choice for today’s most demanding applications and deployed throughout the ecosystem – computing, communications, networking, and storage.

PCIe didn’t just unleash the performance potential of SSDs; it enabled a slew of new form factors ideal for embedded and industrial uses.  It’s given rise to form factors such as M.2 and Mini Card, as well as to NVMe-based SSDs.  And speaking of NVMe…

Let’s Not Forget the Protocols

Some PCIe-based SSDs, including selected models from Virtium, will support the Advanced Host Controller Interface (AHCI), the protocol supported by the peripheral controller hubs that connect to chipsets by Intel, AMD and others.  Since AHCI is based on ATA, the software commands are the same as for SATA and, to an extent, even PATA.  So using AHCI instead of NVMe on a particular PCIe interface is a trade-off of performance vs. software familiarity.  Of course, NVMe’s advantages matter only when capacity and performance demands are high enough – and such demands are rare in industrial embedded designs.  Furthermore, although NVMe is supported in Windows 8.1, Mac OS and some versions of Linux, designers building embedded systems with custom OSes may have a big challenge implementing NVMe.
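On Linux, one quick way to tell which protocol a block device speaks is the kernel’s device-naming convention (e.g., nvme0n1 versus sda). The sketch below is a rough heuristic with a hypothetical function name; production code would also consult sysfs rather than rely on names alone.

```python
# Rough heuristic for telling NVMe block devices from SATA/AHCI-attached ones
# on Linux, based on the kernel's device-naming conventions. The function name
# is hypothetical; real code would also consult /sys/block/<dev>/ for details.
import re

def guess_protocol(device_name: str) -> str:
    """Guess the storage protocol from a Linux block-device name."""
    if re.fullmatch(r"nvme\d+n\d+(p\d+)?", device_name):
        return "NVMe"            # e.g. nvme0n1 - PCIe SSD using the NVMe protocol
    if re.fullmatch(r"sd[a-z]+\d*", device_name):
        return "SATA/SAS/USB"    # e.g. sda - SCSI-disk layer (AHCI SATA, SAS, USB)
    if re.fullmatch(r"mmcblk\d+(p\d+)?", device_name):
        return "eMMC/SD"         # e.g. mmcblk0 - embedded MMC or SD card
    return "unknown"

print(guess_protocol("nvme0n1"))  # NVMe
print(guess_protocol("sda"))      # SATA/SAS/USB
```

Note that an AHCI-attached PCIe SSD still surfaces through the SCSI-disk layer (sdX), which is precisely the software-familiarity benefit described above.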

Expect that these challenges will smooth out as NVMe takes hold in the industrial/embedded market.  Virtium, for example, is currently developing its own approach to NVMe so it can fully support this protocol once it’s ready for embedded-system primetime.

One more point about the vast array of SSD options for embedded systems: They give designers the opportunity to optimize with SSDs featuring the most-appropriate form factor, interface, protocols, data protection, and capacity.  That last item is significant because many, if not the majority of, embedded designs don’t require the same high capacities that, say, data centers demand; a 128MB embedded boot drive will be far more reasonably priced than a 4TB enterprise drive.  So, lower-capacity SSDs give embedded systems a distinct budget-friendliness.

Rancho Santa Margarita, CA
(949) 888-2444