There are many convincing arguments in favour of SSDs and their eventual widespread adoption in data centres. Together, these arguments build a strong total-cost-of-ownership case for SSD adoption in caching and Tier 0 storage environments:
1. Up to 20 times higher bandwidth (and rising) and roughly 1,000 times lower latency compared to HDD
2. 50% lower power consumption than HDD
3. Predictable reliability with no mechanical parts, versus the unpredictable and high failure rates of HDDs
4. Comparable price per GB for low-density enterprise-class storage devices
But there are still myths and barriers that slow down the adoption of SSDs in the data centre industry:
1. Using SSDs as drop-in replacements for HDDs, without knowledge of the various types of SSD
2. Lack of education in dealing with the SSD as a flash-based device
3. Use of consumer SSDs in server environments, driven by their easy availability and high adoption rate in the PC world, has led to many bad experiences and failures, spreading the idea that SSDs have low reliability
4. Many vendors covering different parts of the value chain, numerous brands, and varying levels of quality and pricing do not make the choice easier for the user; rather, they increase the level of hesitation
5. Limited knowledge and hesitation among traditional server OEM sales forces about using the SSD as a non-commodity, application-dependent device
This article aims to tackle these adoption barriers and provide an accessible, broad overview of topics related to SSDs.
Hitting the Memory wall - HDD is not dead, it is part of the solution
Tiered storage is today the best solution for dealing with large amounts of data. When it comes to data access, the only economically reasonable approach is storage tiering, i.e. using various storage media in a cascade. The faster the medium, the closer it should sit to the central processing unit (CPU). The typical metric used to measure this is latency – the time needed to fetch data from a storage medium when the data is requested. DRAM latencies are in the range of tens of nanoseconds, SSD latencies in the range of microseconds (a factor of 1,000 slower), and HDD latencies in the range of milliseconds (again a factor of 1,000).
In this sense, each medium caches the data stored in the next level down: the most wanted data sits in DRAM, the next priority of data in SSD, and the least used data in HDD.
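To make the cascade concrete, here is a minimal Python sketch of a tiered read path. The tier names, latency figures, and data placement are illustrative assumptions drawn from the order-of-magnitude ranges above, not measurements of any particular system.

```python
# Minimal sketch of a tiered read path: each tier caches data for the slower
# tier below it. Latency figures are rough orders of magnitude, not benchmarks.
TIERS = [
    ("DRAM", 100e-9),   # ~100 nanoseconds
    ("SSD", 100e-6),    # ~100 microseconds, roughly 1,000x slower than DRAM
    ("HDD", 10e-3),     # ~10 milliseconds, roughly 1,000x slower than SSD
]

def read(key, tier_contents):
    """Walk the tiers from fastest to slowest and report where the data was found."""
    total_latency = 0.0
    for name, latency in TIERS:
        total_latency += latency
        if key in tier_contents.get(name, set()):
            return name, total_latency
    raise KeyError(key)

# Example placement: hot data in DRAM, warm data on SSD, cold data on HDD.
contents = {"DRAM": {"hot"}, "SSD": {"warm"}, "HDD": {"cold"}}
for k in ("hot", "warm", "cold"):
    tier, latency = read(k, contents)
    print(f"{k!r} served from {tier} after ~{latency * 1e6:.1f} microseconds")
```

The further down the cascade a read has to fall, the more the slowest tier dominates the total access time, which is why the hottest data belongs closest to the CPU.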
SSD is not a drop-in HDD replacement but a different animal, requiring a different approach
SSDs are based on NAND flash memory. NAND flash cells have a limited lifespan, defined by the number of times data can be written into the cell. When this number is reached, the cell is worn out and the data it holds is potentially compromised. In this sense, SSDs behave completely differently from HDDs.
The different features and behaviour of SSDs require a different approach compared with hard disks. Applications can get more out of this device if they adapt their software patterns to the special characteristics of NAND flash, instead of treating the SSD as just another storage device and a drop-in HDD replacement.
Here are two aspects that need to be considered to make the right choice of SSD for your environment.
• The write behaviour of your environment is key to the choice of SSD. You need to know how much data you expect to write to the SSD over the life cycle of the product, how much data your application writes to the storage device every day, and whether these writes are random or sequential.
• SSD density: the higher the density, the more total bytes can be written to the SSD and therefore the longer its life cycle. It is worth calculating whether buying a higher-density drive within a given SSD category (gaining endurance and life cycle) is cheaper than moving up to the next category with higher endurance; a worked comparison follows this list. This optimises the performance per dollar of your investment.
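As a hedged illustration of that calculation, the sketch below compares two hypothetical drives by working out their Total Bytes Written and cost per TB written over a five-year life. All capacities, DWPD (drive writes per day) ratings, prices, and the daily write load are invented for the example; substitute your own figures.

```python
# Rough endurance comparison of two hypothetical SSDs. Capacities, DWPD
# (drive writes per day) ratings and prices are invented for illustration.
YEARS = 5
DAYS = 365 * YEARS

def tbw_tb(capacity_tb, dwpd):
    """Total bytes written over the drive's life, expressed in TB."""
    return capacity_tb * dwpd * DAYS

drives = [
    # (name, capacity in TB, rated DWPD, price in USD)
    ("Read-intensive 3.84 TB", 3.84, 1.0, 800),
    ("Mixed-use 1.92 TB", 1.92, 3.0, 750),
]

daily_writes_tb = 2.0  # assumed application write load per day, in TB

for name, cap, dwpd, price in drives:
    endurance = tbw_tb(cap, dwpd)
    lifetime_writes = daily_writes_tb * DAYS
    verdict = "sufficient" if endurance >= lifetime_writes else "insufficient"
    print(f"{name}: {endurance:,.0f} TBW, "
          f"${price / endurance:.3f} per TB written, "
          f"{verdict} for {daily_writes_tb} TB/day over {YEARS} years")
```

Comparing cost per TB written, rather than cost per GB of capacity, is what reveals whether the denser drive or the higher-endurance category is the better buy for a given write load.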
Right choice of Solid State Drive
Quality is a criterion that must be built up over time and is a matter of trust. It is also about the behaviour of the product over its lifetime, something that is not obvious at the start or from comparing products side by side. An SSD is built from flash components, and the main criterion in choosing one is its Total Bytes Written (TBW) capability: in other words, how often a flash cell can be rewritten before it fails, which comes down to the quality of the flash cells.
In the production process, not all components have the same quality. A flash manufacturer typically tests and selects the best components for SSD production and uses the rest for less critical devices, or even sells them on the open market for other purposes.
Consumer SSD in Enterprise Environment
A consumer SSD has firmware that optimises its interaction with a client machine, assuming it operates in a PC or notebook environment. An enterprise server, however, expects 24/7 availability from its SSDs, and enterprise firmware is designed with this in mind.
You will notice the difference if you decide to use a client SSD behind an HBA or RAID card.
A consumer SSD may disconnect from the outside world for a few seconds to clean itself up and restructure its data internally: its so-called “garbage collection” process. For a RAID card, a lack of responsiveness beyond a few milliseconds is considered a failure of the device. An enterprise SSD will therefore typically throttle its performance while performing garbage collection, but it will never fail to respond to the RAID card.
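As a minimal sketch of this difference, assuming a hypothetical controller timeout of a few milliseconds and invented pause durations for the two drive classes:

```python
# Toy model of how a RAID controller reacts when an SSD stops responding
# during garbage collection. Timeout and pause values are illustrative only.
RAID_TIMEOUT_S = 0.008  # a few milliseconds before the controller drops the drive

def controller_view(drive_name, gc_pause_s):
    """Return how the RAID controller would interpret the drive's GC pause."""
    if gc_pause_s > RAID_TIMEOUT_S:
        return f"{drive_name}: unresponsive for {gc_pause_s * 1000:.0f} ms -> marked as failed"
    return f"{drive_name}: paused {gc_pause_s * 1000:.1f} ms -> still considered healthy"

# Consumer firmware may stall for seconds; enterprise firmware throttles instead.
print(controller_view("Consumer SSD", 2.0))      # multi-second internal cleanup
print(controller_view("Enterprise SSD", 0.002))  # brief, bounded pause
```

The point is not the exact numbers but the design choice: enterprise firmware trades peak performance for bounded response times so the drive is never mistaken for a failed device.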