
The expanding appeal of SSDs

By eeNews Europe



SSD vs. HDD comparison

SSDs are faster than traditional hard disk drives (HDDs) because data is read directly from Flash memory: with no seek time, access is potentially far quicker, and unlike HDDs, read performance does not depend on where data is physically stored on the drive. SSDs therefore offer designers a route to a better user experience, with faster data transfer, shorter boot times and quicker application loading.

Having no moving parts, SSDs are silent and virtually immune to mechanical breakdown; being lighter, they are also significantly less susceptible to vibration, shock, extreme altitude and temperature extremes; and magnetic fields have no effect on Flash memory, whereas magnets can corrupt the data on traditional HDD media. These advantages give designers of embedded or low-power systems extra ruggedness, and make SSDs suitable for applications where the mechanical construction of a conventional drive imposes design constraints.

The most important disadvantage is, of course, price. A typical consumer HDD from a reputable manufacturer costs around 10p per gigabyte, whilst the highest-density solid-state equivalent is nearly £3/GB. And although NAND Flash memory prices have been falling faster than HDD prices, it will be many years before the price-per-gigabyte figures are level. Market research firm PriceG2 predicts that even by 2012 there will still be a 12-fold price disparity: a consumer with $49 to spend will have the choice of a 1TB 2.5-inch hard drive or an 80GB SSD.

When it comes to technical challenges, a key consideration is that SSDs have asymmetric read and write performance, and write performance in particular is significantly affected by the availability of free programmable blocks. Furthermore, although SSDs do not suffer from data fragmentation in the way HDDs do, their performance can degrade over time, and Flash memory cells can endure only a limited number of write cycles over the life of the drive.

Optimising SSD performance

Today’s SSD manufacturers are addressing these issues by improving memory cell technology and the associated read/write control. Thanks to recent improvements in MLC device technology, SSDs are increasingly able to use lower-cost multi-level cell (MLC) Flash instead of single-level cell (SLC) Flash. So-called enterprise MLC (eMLC) devices have a longer lifespan than retail-oriented MLC technology and allow many more writes to the memory cells: previous generations of eMLC circuits produced in 34nm fabs can handle up to six times more writes than regular MLC, and Intel is planning to use 25nm processes for its next generation of business-class SSDs.

Apart from enhancements to the underlying memory technology, the most significant remedy for limited drive lifetime and the degradation of SSD performance over time is to flag the relevant cells as “available” when a file is deleted, rather than physically erasing the data. To keep the number of overwrites to a minimum, the operating system or drive controller writes data to a new location wherever possible instead of overwriting the old file, avoiding unnecessary wear. A further challenge arises when the SSD is so full that there are no vacant blocks available for new files. Data are typically read from and written to NAND Flash in 4KB pages, but can only be erased as an entire block, typically 512KB. So, to overwrite a 7KB previously deleted file with another 7KB file, the drive has to start by reading the entire 512KB block; it then rewrites the block contents, replacing the deleted file with the new one, and finally writes the entire block back to the Flash. Performance is impacted severely.
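As a rough illustration of this overhead, the Python sketch below models a single erase block using the typical page and block sizes quoted above; the figures and names are illustrative, not those of any particular drive.

```python
# Illustrative model of the read-modify-write cycle on one NAND erase block.
# Page and block sizes are the typical values quoted above, not a specific drive's.
PAGE_SIZE = 4 * 1024                         # data is read/written in 4 KB pages
BLOCK_SIZE = 512 * 1024                      # but erased only in 512 KB blocks
PAGES_PER_BLOCK = BLOCK_SIZE // PAGE_SIZE    # 128 pages per block

def pages_needed(file_bytes):
    """Number of 4 KB pages a file occupies (rounded up)."""
    return -(-file_bytes // PAGE_SIZE)

# Overwriting a 7 KB file when the block holds no free pages:
new_file = 7 * 1024
pages = pages_needed(new_file)               # 2 pages of new data

# The controller must read the whole block, merge the change, erase it and
# write the whole block back - far more traffic than the 8 KB actually changed.
bytes_moved = BLOCK_SIZE + BLOCK_SIZE        # read 512 KB + write 512 KB
write_amplification = BLOCK_SIZE / (pages * PAGE_SIZE)

print(f"{pages} pages of new data, {bytes_moved // 1024} KB moved, "
      f"write amplification ~{write_amplification:.0f}x")
```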

The good news is that this can be alleviated using the TRIM command supported by operating systems including Windows 7 and Linux (kernel 2.6.33 or later). When the OS recognises a drive as an SSD, it tells the device which blocks a deleted file occupied, so the drive can clear them in advance and have them empty when a new file is to be written. In this way there is only a small reduction in performance when deleting a file, and a massive gain when writing files to a mid-life SSD.
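A minimal sketch of why TRIM helps: the controller keeps a per-page validity map, and when the OS issues TRIM for a deleted file the affected pages are marked stale immediately, so the block can be reclaimed in the background rather than during a later write. The class and method names here are illustrative, not any vendor’s firmware interface.

```python
# Minimal sketch of a controller's page-validity map with TRIM support.
# Names are illustrative only - not any vendor's actual firmware interface.
class EraseBlock:
    def __init__(self, pages=128):
        self.valid = [False] * pages       # which 4 KB pages hold live data

    def write_page(self, page):
        self.valid[page] = True

    def trim_pages(self, pages):
        """OS tells the drive these pages no longer hold a live file."""
        for p in pages:
            self.valid[p] = False

    def can_erase_in_background(self):
        # With TRIM the block can be reclaimed as soon as no page is live,
        # instead of waiting for a costly read-modify-write at the next write.
        return not any(self.valid)

block = EraseBlock()
block.write_page(0); block.write_page(1)    # a small file occupies two pages
block.trim_pages([0, 1])                    # file deleted; OS sends TRIM
print(block.can_erase_in_background())      # True - block is reclaimable now
```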

For example, the latest 2.5-inch SATA SSDs from Transcend fully support the TRIM command, enabling them to maintain optimum write speeds, whilst reducing long-term SSD wear. Featuring storage capacity up to 512GB, currently the highest in the industry, Transcend’s upgraded SSDs are large enough to store operating systems, applications and more.

TRIM support varies among manufacturers, and there are also proprietary software solutions that provide similar functionality in older operating systems such as XP and Vista. In addition, many manufacturers build the algorithms into the drive controller rather than the OS. Such controllers further optimise write operations by spreading them evenly across the entire Flash array; these so-called wear-levelling techniques not only optimise write speed but also enhance the longevity of the Flash cells. Another technology contributing to endurance and reliability is Error Correction Code (ECC) capability, employed by manufacturers such as WD, Transcend and Intel. Advanced ECC algorithms add redundancy to the data stream transferred by the host system to the SSD, so that when noise or some other form of interference causes data distortion, the errors can be detected and corrected.
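A hedged sketch of the wear-levelling idea mentioned above: new data is steered to the free erase block with the lowest erase count, so no group of cells wears out prematurely. This is an illustration of the principle only, not the algorithm used by any particular controller.

```python
# Simplified wear-levelling: place new data in the least-worn free block.
# Illustration of the principle, not a specific controller's algorithm.
erase_counts = {0: 120, 1: 95, 2: 300, 3: 95}   # erase cycles per block (example data)
free_blocks = {0, 1, 3}                          # blocks currently holding no live data

def pick_block_for_write():
    """Choose the free block that has been erased the fewest times."""
    return min(free_blocks, key=lambda b: erase_counts[b])

target = pick_block_for_write()
erase_counts[target] += 1            # the block is erased before being rewritten
free_blocks.discard(target)
print(f"writing to block {target}")  # block 1 or 3 (both at 95 cycles), never block 2
```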

Sometimes ECC alone is not enough, especially when smaller process geometries are combined with increasingly complex chip designs, making devices more prone to transient bit-flip errors, in which a cell’s state is unintentionally flipped from 1 to 0 or from 0 to 1. If such flipped bits escape detection and correction by the ECC, the result is unpredictable behaviour and potential data corruption, with consequent fatal data errors and host-system downtime.
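To make the bit-flip discussion concrete, here is a minimal single-error-correcting Hamming(7,4) sketch. Real SSD controllers use far stronger codes (such as BCH or LDPC), so this only illustrates how added redundancy lets one flipped bit be located and repaired; if more bits flip than the code is designed for, correction fails, which is the failure mode described above.

```python
# Minimal Hamming(7,4) illustration: 4 data bits + 3 parity bits can locate
# and correct any single flipped bit. Real SSDs use much stronger codes
# (BCH, LDPC); this only illustrates the principle of redundancy.
def encode(d):                          # d = [d1, d2, d3, d4]
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    # codeword positions 1..7: p1 p2 d1 p3 d2 d3 d4
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def correct(c):
    """Return (corrected codeword, error position or 0 if none)."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]      # checks positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]      # checks positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]      # checks positions 4,5,6,7
    pos = s1 + 2 * s2 + 4 * s3          # syndrome = 1-based position of the error
    if pos:
        c[pos - 1] ^= 1                 # flip the offending bit back
    return c, pos

word = encode([1, 0, 1, 1])
word[4] ^= 1                            # simulate a transient bit-flip
fixed, where = correct(word)
print(where, fixed == encode([1, 0, 1, 1]))   # 5 True - single error corrected
```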

New applications

Inherent advantages of Flash SSDs, along with these enhancements to the basic technology, are driving adoption in a host of new applications. Solid-state technology is no longer simply a substitute for rotating disks. For example, consistent performance and availability are major advantages for networked data storage, where cost-effectiveness and upgradeability are key factors.

The efficiency of a database depends directly on the speed at which files can be stored, sorted, indexed and retrieved. Solid-state disks can significantly improve the performance, efficiency and effectiveness of an external storage library, and the short access time of SSDs significantly improves network efficiency. For demanding applications such as real-time systems and audio-visual processing, SSDs can deliver order-of-magnitude speed improvements along with more efficient access to files and significantly lower latency. To demonstrate these benefits, Intel IT tested an electronic design automation server and found that, with large silicon design workloads, substituting lower-cost Flash SSDs for part of a server’s physical memory resulted in a 1.74x cost-performance improvement.

SSDs are ideal for high-performance and mission-critical applications in the industrial sector, where their inherent physical robustness provides high tolerance of extreme environmental conditions such as magnetic fields, air-pressure variation, temperature extremes and vibration. Data storage in embedded systems faces many constraints in connectivity, space and durability; SSDs can support continuous 24/7 operation where space is limited and power budgets may be tight.

Aerospace applications benefit from the small form factor of SSDs and their inherent ruggedness, reliability and performance, along with the ability to withstand extreme temperature, vibration and pressure. Solid-state Flash disks are currently the only viable product available for these applications, particularly data recording, acquisition and logging in aircraft, weather balloons, ships and high-speed sea craft, as well as space missions. Similarly, the increasing use of stored telemetry and driver-behaviour data in private vehicles is made more affordable with SSDs, and may become an insurance expectation before long.

Figure 1: MLC Flash is inherently slower to write than SLC, but SSD manufacturers like Western Digital are closing the performance gap.

SSDs come to the fore in processing magnetic resonance imaging (MRI), digital X-ray, positron emission tomography, ultrasound, digital cardiology and computed tomography, where SSD technology offers multiple benefits for these demanding applications. NAND Flash storage enables simultaneous read/write access to the drive without any compromise in reliability.

The data integrity and security capabilities of SSDs make them particularly effective in data storage for casinos and video gaming systems. SSD technology allows slot machines and other gaming stations to safely store their code, meeting the crucial regulatory and legislative requirements by preventing any type of violation or data intrusion.

In high-end computing applications, too, the SSD has created new possibilities for increasing overall system performance. Flash memory working with a suitable interface offers extremely low seek time, less than 1ms, and a faster, more stable transfer rate, with the additional benefit of enabling simultaneous read/write access to the drive. These advantages will be particularly valuable to cloud computing infrastructures.

More computationally extreme applications, however, need ultra-fast data access, often under 10 microseconds. Here DRAM-based SSDs are used, primarily to accelerate applications that would otherwise be held back by the latency of Flash SSDs or traditional hard disks.

Figure 2: In tests with large silicon design workloads, substituting lower-cost SSDs for part of a server’s physical memory resulted in a 1.74x performance-normalised cost advantage. (Source: Intel)

DRAM-based SSDs

To overcome the drawback of volatility, DRAM-based SSDs usually incorporate either an internal battery or an external AC/DC adapter, together with an associated backup storage system, to ensure data persistence when they are powered down. If power is lost, the battery takes over, allowing all data to be copied from RAM to the backup storage. When power is restored, the information is copied back to RAM and the SSD resumes normal operation.
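The sequence can be pictured schematically as below; the function names and the backup medium are purely illustrative assumptions, not a description of any vendor’s design.

```python
# Schematic of a DRAM SSD's power-loss data-persistence sequence.
# Function names and the backup medium are illustrative assumptions.
def on_power_loss(ram_image, backup_store):
    """Battery keeps the drive alive while RAM contents are copied out."""
    backup_store["snapshot"] = bytes(ram_image)       # copy all data to backup

def on_power_restore(backup_store):
    """Reload the saved image into RAM before resuming normal operation."""
    return bytearray(backup_store.get("snapshot", b""))

backup = {}
ram = bytearray(b"live data")
on_power_loss(ram, backup)
print(on_power_restore(backup) == ram)                # True - contents survive the outage
```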

In practice, most Flash-based SSDs typically incorporate a small amount of DRAM as a cache, similar to the cache in hard disk drives. A directory of block placement and wear levelling data is also kept in the cache while the drive is operating.
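As a rough illustration of what that cached directory does, the sketch below keeps a logical-to-physical page map in memory so that each rewrite of a logical page can land on a fresh physical page. It is a simplification of a real flash translation layer, not a vendor design.

```python
# Simplified picture of the mapping directory an SSD keeps in its DRAM cache:
# each logical page number maps to whichever physical page currently holds
# its data, so rewrites can go to a fresh page instead of overwriting in place.
# A simplification of a real flash translation layer, not a vendor design.
mapping = {}              # logical page -> physical page (held in DRAM while running)
next_free_physical = 0

def write_logical_page(lpn):
    """Redirect a logical write to the next free physical page."""
    global next_free_physical
    mapping[lpn] = next_free_physical
    next_free_physical += 1

write_logical_page(42)    # first write of logical page 42
write_logical_page(42)    # rewrite goes to a new physical page; the old one becomes stale
print(mapping[42])        # 1 - the directory now points at the newer copy
```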

Implementation

OEM Flash drives are available in standard 1.8-inch and 2.5-inch form factors and interfaces (IDE, SATA…), providing essentially plug-and-play implementation in consumer computing applications. The limited capacity of current SSDs makes them especially suitable for the growing range of netbooks, tablet PCs and ultra-small-format PCs. Manufacturers are also making it easier for desktop users to upgrade to SSDs without replacing their existing hardware: Transcend, for example, offers an upgrade kit featuring a 32GB 2.5-inch SATA SSD, a 2.5-inch to 3.5-inch mounting bracket, SATA data and power cables, and hard-drive cloning software.

In addition, Flash-based solid-state drives can be used to create network appliances from general-purpose PC hardware. Security and long lifetime are ensured by write-protecting the drive containing the operating system and application software. Used in this way, essentially as a replacement for larger, less reliable HDDs or CD-ROMs, SSDs provide an inexpensive alternative to costly dedicated router and firewall hardware.

Implementation of more sophisticated network hardware, for example network or database accelerators, frequently combines Flash/DRAM SSD hybrids alongside conventional HDD arrays. Targeted acceleration requires detailed knowledge of the application; careful tuning allows an optimum partition between RAM-based SSDs, used for random reads and writes, and Flash, used for sequential transfers such as reads during boot-up and writes when the system is hibernated or backed up (see the sketch below). As well as helping to drive the massive anticipated growth of tablet PCs and cloud computing, easier, cheaper and faster implementation of solid-state drive solutions will create a whole world of new opportunities for OEMs.
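A minimal sketch of that partitioning idea: route small random requests to the DRAM-based tier and long sequential transfers to Flash. The threshold value and tier names are assumptions made for the illustration only.

```python
# Illustrative routing policy for a hybrid DRAM-SSD / Flash-SSD appliance:
# small random requests go to the RAM tier, long sequential streams to Flash.
# The 128 KB threshold and tier names are assumptions for this sketch only.
SEQUENTIAL_THRESHOLD = 128 * 1024

def choose_tier(request_bytes, is_sequential):
    if is_sequential and request_bytes >= SEQUENTIAL_THRESHOLD:
        return "flash_ssd"        # boot images, hibernation files, backups
    return "dram_ssd"             # latency-sensitive random reads and writes

print(choose_tier(4 * 1024, is_sequential=False))        # dram_ssd
print(choose_tier(8 * 1024 * 1024, is_sequential=True))  # flash_ssd
```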

About the author:

Mike Caddy is central product manager, semiconductors, at RS Components.
