TECHNOLOGY IN CONTEXT

On-Board Storage

To Defrag or Not to Defrag–That Is the Question for SSD

Solid-state disks are becoming a popular storage medium in embedded applications. In real use, SSD suffers from file system fragmentation just like HDD. While defragmentation can restore performance, care must be taken not to incur too many write cycles in clearing file and free space fragments.

YU HSUAN LEE, APACER TECHNOLOGY

Although the solid-state disk (SSD) is not new and has a long history of its own, only recently has it received the kind of media attention that lets it outshine other NAND flash products such as flash-based media players and memory cards. As big computer brand names like Apple, Acer and Asus introduced portable computers with built-in solid-state drives, people began to take real notice of this technology and its importance.

The SSD is a true replacement technology for the traditional hard disk drive (HDD). Yes. It is fast. Really fast. It has no mechanical moving parts to slow down reads and writes. It withstands shock and vibration far better. It operates silently. It tolerates a wider temperature range. And it is tiny and lovely.

But those who are more skeptical consider SSD to be merely a technology that competes with HDD in certain niche applications. The so-called netbook market, for example, is such a niche: it requires a disk drive as small and lightweight as possible, so SSD serves as a better solution than HDD. The trend is there, obviously, but the price per gigabyte is still too high for SSD to become mainstream. For the time being we see SSD and HDD versions coexist and compete with each other in the netbook market. With SSD prices dropping and capacities increasing rapidly, it won't be long before the SSD finds its place in office and home desktops. But the SSD isn't perfect. There is still room for improvement, and costs will continue to drop.

Fragmentation Not Just for Hard Drives

Anyone with experience of computers knows that a computer gets slower and slower from the day it is first turned on. Depending on how you use it, your computer can become sluggish within weeks or months. The main reason for this aging phenomenon lies not in hardware but in software. Or, more precisely, it is file system fragmentation that slows your computer down.

One might say that this is true for the HDD because the read/write head needs to position itself over the right place during each read/write operation, and that takes time. As a file is cut into pieces and stored in different areas of the HDD, the read/write head travels to the location of each piece and spends one seek time reading or writing it. The read/write time thus multiplies by a factor proportional to how fragmented the file is. One might then conclude that file fragmentation cannot slow down an SSD, because there is no read/write head and therefore the mechanical seek time is zero. In an SSD, every file tends to be broken up into pieces and stored in different physical locations among the flash memory cells anyway, because the logical-to-physical (L2P) mapping used to manage the flash effectively scrambles the relationship between the logical and physical addresses of data. Since everything is done electronically, not mechanically, one might conclude that file fragmentation is not a problem for the SSD at all.

The statements above are right about the HDD, but only half correct about the SSD. It is a misconception that the SSD does not suffer from file system fragmentation. A large part of system slowdown can indeed be attributed to the read/write head spending too much mechanical seek time on highly fragmented files in an HDD, and the SSD does reduce that to zero. While this is a significant improvement, mechanical seek time makes up only part of the total access time, or I/O time, of any single input/output request made to the disk. I/O time is the time the computer system takes to complete a request cycle all the way from application, OS and driver down to disk hardware and memory cells, and back again.

Zero mechanical seek time certainly does not mean zero I/O time. No matter how fast an SSD may be, its I/O time can never be zero, and file system fragmentation affects I/O time even when the mechanical seek time is zero. To put it another way, the misconception that the SSD does not suffer from file fragmentation comes from treating performance degradation as a problem of the storage device alone, of whether there are moving parts or not, rather than as a problem of the system as a whole. The question at issue here is I/O time, not seek time.
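
A crude back-of-the-envelope model makes the point. Assume, purely for illustration, that every request to the disk carries a fixed software and controller overhead on top of any seek and transfer time; the numbers below are assumptions, not measurements from any particular drive.

# Toy model of the I/O time for reading one file. Each fragment costs one full
# request: OS/driver/controller overhead (plus mechanical seek on an HDD).
OVERHEAD_PER_REQUEST_MS = 0.20    # assumed fixed per-request cost (OS, driver, controller)
SEEK_TIME_MS_HDD = 8.0            # assumed average mechanical seek time on an HDD
SEEK_TIME_MS_SSD = 0.0            # no moving parts, so no mechanical seek
TRANSFER_MS_PER_MB = 1.0          # assumed transfer time per megabyte

def io_time_ms(file_mb, fragments, seek_ms):
    """Total I/O time: every fragment pays the per-request cost once."""
    return fragments * (OVERHEAD_PER_REQUEST_MS + seek_ms) + file_mb * TRANSFER_MS_PER_MB

for fragments in (1, 64, 512):
    print(fragments, "fragments:",
          "HDD", io_time_ms(10, fragments, SEEK_TIME_MS_HDD), "ms,",
          "SSD", io_time_ms(10, fragments, SEEK_TIME_MS_SSD), "ms")

Even with the seek term zeroed out, the SSD's time still grows with the number of fragments; it simply grows far more slowly than the HDD's.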

Here we must distinguish between two different types of fragmentation under the name "file system fragmentation": file fragmentation and free space fragmentation. File fragmentation is the cause of slow reads, because the file is stored as a collection of non-contiguous smaller pieces scattered around the disk. Free space fragmentation, on the other hand, is the cause of slow writes. It happens when there is no contiguous free space available to store a file in a single write, so the file system allocates a number of non-contiguous smaller slices of free space and stores the file in several writes. Free space fragmentation leads to file fragmentation during write operations, and file fragmentation leads back to free space fragmentation when fragmented files are deleted. The two types of fragmentation clearly form a vicious circle.
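
The vicious circle can be seen with a toy first-fit block allocator, a deliberately simplified stand-in for a real file system's allocation logic (nothing here reflects how any actual file system, or Apacer's software, is implemented):

# Toy block allocator: fragmented free space forces fragmented files, and
# deleting fragmented files fragments the free space in turn.
disk = [None] * 16                     # None marks a free block; otherwise a file name

def allocate(name, nblocks):
    """First-fit allocation; a file split across non-adjacent runs is fragmented."""
    extents, prev = 0, None
    for i in range(len(disk)):
        if nblocks == 0:
            break
        if disk[i] is None:
            disk[i] = name
            if prev is None or i != prev + 1:
                extents += 1           # had to start a new, non-contiguous extent
            prev = i
            nblocks -= 1
    return extents

def delete(name):
    for i, block in enumerate(disk):
        if block == name:
            disk[i] = None

allocate("A", 4); allocate("B", 4); allocate("C", 4)
delete("A"); delete("C")                            # free space is now split into holes
print("D stored in", allocate("D", 8), "extents")   # no hole holds 8 blocks -> 2 extents
delete("D")                                         # deleting the fragmented file leaves
print(disk)                                         # the free space fragmented once more

Deleting the file that was forced into two extents leaves the free space split into separate holes again, ready to fragment the next file written.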

With seek time out of the equation, one can see that for an SSD the main factor behind a performance drop is file system fragmentation. This is a problem at the level of the file system and its metadata, such as NTFS's Master File Table (MFT), where files become so fragmented that a single access request for a file turns into several or more requests for the fragments that make it up. This "I/O multiplication" effect in a fragmented file system is particularly noticeable during write cycles. The reason is the erase-before-write characteristic of NAND flash: data can be written into a memory block only after the existing data has been erased. Since erase/write speed is slow compared to read speed, write multiplication due to free space fragmentation can lengthen I/O time severely.
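
A rough cost sketch shows why write multiplication hurts more than read multiplication. The NAND timings below are generic illustrative assumptions, not figures for any specific device:

# Illustrative NAND timings (assumed): reading a page is fast, programming is
# slower, and erasing a whole block before it can be reprogrammed is slowest.
READ_PAGE_US = 25
PROGRAM_PAGE_US = 250
ERASE_BLOCK_US = 2000

def read_cost_us(pages):
    return pages * READ_PAGE_US

def write_cost_us(pages, erases):
    # Each page must be programmed; each block the controller has to reclaim
    # first costs a full erase on top of that.
    return pages * PROGRAM_PAGE_US + erases * ERASE_BLOCK_US

# The same 256-page file written into one contiguous free extent versus 32
# scattered free-space fragments, assuming (for illustration) that each extra
# fragment forces the controller to reclaim one extra block.
print("read             :", read_cost_us(256), "us")
print("write, contiguous:", write_cost_us(256, 1), "us")
print("write, fragmented:", write_cost_us(256, 32), "us")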

Defragmentation Comes to the Rescue

In September, Apacer introduced its first SSD bundled with optimization software to address, among other things, the problem of file system fragmentation. By applying defragmentation algorithms specially tailored for SSD, the Optimizer software restores performance, making read operations 5.9x faster, write operations 19.5x faster, random reads 3.9x faster and random writes 9.0x faster. Notice how far file system fragmentation can degrade system performance in this test case. In a severely fragmented file system, sequential read can degrade to the level of random write, while sequential write and random write become extremely sluggish. Here we see to what extent an SSD can suffer from file system fragmentation, and how defragmentation can bring the performance back (Figure 1).

Now comes a problem. Since the defragmentation routine moves file fragments around to consolidate files and free space, it causes additional write operations to the SSD's NAND flash. But NAND flash allows only a limited number of erase/write cycles for each memory unit, which makes this a lifetime issue: write too often and the flash will soon wear out. It appears that, by shuffling files, defragmentation will increase the erase counts and shorten the SSD's life span.

Indeed, the limited number of erase/write cycles is the SSD's weak spot. Minimizing the erase count in flash has been one of the key research areas for flash vendors. It concerns how efficiently the memory controller, working at the level of the L2P table, manages data among the memory cells. But no matter how good a memory controller may be at reducing its "write amplification" factor with whatever wear-leveling algorithms it uses, memory controllers in their present architecture seem incapable of coping with the increase in erase/write cycles caused by I/O multiplication in a fragmented file system.
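
Write amplification is commonly expressed as the ratio of data physically written to the NAND to data logically written by the host. The controller's wear leveling works on its own contribution to that ratio, while file-system-level I/O multiplication inflates the host writes themselves; the two effects compound. A minimal calculation with assumed numbers:

# Write amplification factor (WAF) = bytes written to NAND / bytes written by host.
# The file system's I/O multiplication and the controller's own WAF compound:
# every extra host write caused by fragmentation is amplified again below it.

def nand_writes_mb(user_data_mb, fs_multiplier, controller_waf):
    return user_data_mb * fs_multiplier * controller_waf

# Assumed for illustration: 100 MB of user data, a controller WAF of 1.5, and a
# file-system multiplier of 1.0 on a clean volume vs. 3.0 on a fragmented one.
print(nand_writes_mb(100, 1.0, 1.5))   # 150 MB actually reaches the NAND
print(nand_writes_mb(100, 3.0, 1.5))   # 450 MB actually reaches the NAND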

Thus we face a dilemma. On the one hand, we need to defrag our SSD to improve its performance. On the other hand, we cannot defrag too much, lest too many writes shorten the SSD's longevity. Traditional defragmentation aims to clear every fragment found on a target disk, but that strategy cannot work for an SSD because of the lifetime issue. The solution to this dilemma is certainly one of balance and compromise: how do we devise a defrag strategy that improves throughput without incurring too many erase counts?

It appears that the better defrag strategy is to prevent free space fragmentation from occurring without being too aggressive in consolidating files. Since free space fragmentation leads to file fragmentation, minimizing free space fragments also minimizes the chances of creating fragmented files. If such a defragmentation strategy is applied from the beginning of an SSD's use, the file system can be kept in a much less fragmented state, turning the vicious circle of fragmentation into a virtuous one. At the same time, by consolidating files only when needed, the erase counts spent on file consolidation are kept to a minimum.
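
One way to express such a balance, offered purely as a sketch and not as Apacer's actual Optimizer algorithm, is a threshold policy that consolidates free space eagerly but moves file data only when a file is fragmented badly enough to justify the extra erase/write cycles. The threshold values and names below are hypothetical:

# Hypothetical balanced-defrag policy: keep free space tidy, touch files rarely.
FREE_EXTENT_LIMIT = 8     # assumed: consolidate free space once it splits this far
FILE_EXTENT_LIMIT = 16    # assumed: only consolidate files more fragmented than this

def plan_defrag(free_extents, file_extents_by_name):
    """Decide which defrag actions are worth their erase counts."""
    actions = []
    if free_extents > FREE_EXTENT_LIMIT:
        # Cheap relative to its payoff: contiguous free space prevents the next
        # written file from being fragmented in the first place.
        actions.append("consolidate_free_space")
    for name, extents in file_extents_by_name.items():
        if extents > FILE_EXTENT_LIMIT:
            # Moving file data costs erase/write cycles, so do it only when the
            # file is fragmented enough to noticeably slow reads.
            actions.append("consolidate_file:" + name)
    return actions

print(plan_defrag(12, {"large.db": 40, "small.log": 3}))
# -> ['consolidate_free_space', 'consolidate_file:large.db']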

Figure 2 shows a test case done by Apacer that documents the cumulative erase counts of an SSD test sample as it goes through a series of procedures. In this case fragmented free space is artificially created at stage 2 and stage 5, and the optimization algorithm is applied at stage 6. HDBENCH is first run on the fragmented free space at stage 3, and then on the optimized, defragmented space at stage 7. Defragmentation itself incurs an increase of 4 erase counts. Running HDBENCH incurs 4 counts on the fragmented space, but only 1 on the defragmented space; in other words, the optimization algorithm reduces the benchmark's erase count from 4 to 1.

Based on the test, we can argue that although optimization uses some erase counts in consolidating fragments, it reduces the erase counts incurred by the write activities that follow, because the fragments have already been cleared. The total erase counts incurred in the case with optimization are only a fraction more than in the case without it. It is as if optimization spends, in advance, erase counts it would otherwise lose to later write activities. Since erase counts accumulate differently in different user scenarios, it is conceivable that the total erase counts incurred with optimization can be the same as, or even less than, the total incurred without it. This means a well-designed defrag algorithm can extend an SSD's life span.
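
Taking the counts quoted above from the Figure 2 test at face value (4 erase counts for the optimization pass, 4 per HDBENCH run on the fragmented volume, 1 per run afterward), a quick tally shows where the break-even point lies; extrapolating over repeated runs is our own assumption, not part of the published test:

# Erase-count bookkeeping with the figures quoted from the Figure 2 test.
OPTIMIZE_COST = 4          # erase counts spent by the optimization pass itself
RUN_COST_FRAGMENTED = 4    # erase counts per benchmark run on a fragmented volume
RUN_COST_OPTIMIZED = 1     # erase counts per benchmark run after optimization

for runs in (1, 2, 5, 10):
    without = runs * RUN_COST_FRAGMENTED
    with_opt = OPTIMIZE_COST + runs * RUN_COST_OPTIMIZED
    print(runs, "runs: without optimization", without, ", with optimization", with_opt)
# After two comparable write workloads the optimized case is already ahead
# (8 vs. 6), matching the claim that a well-placed defrag can pay for itself.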

Apacer Technology
Milpitas, CA.
(408) 586-1291.
[www.apacer.com].