I've read a couple of articles recently predicting an abrupt end to the flash market. The reasoning is that as flash NAND gets more affordable, it also becomes less reliable: shrinking lithography and increasing bit density (the number of bits stored per cell) both reduce the number of program/erase cycles a cell can endure, lowering the life expectancy of the technology. The theory is that, as this trend continues, flash will become unsustainable and the industry will move on to something else.
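To put rough numbers on that trade-off, here is a back-of-the-envelope sketch in Python. The program/erase (P/E) ratings, workload size, and write-amplification factor are illustrative assumptions; published figures vary widely by vendor and lithography generation.

```python
# Back-of-the-envelope drive-lifetime estimate. All figures are
# illustrative assumptions, not measured or vendor-rated values.

PE_CYCLES = {"SLC": 100_000, "MLC": 10_000, "TLC": 3_000}  # assumed ratings

def lifetime_years(capacity_gb, cell_type, daily_writes_gb, write_amp=3.0):
    """Years until the rated P/E budget is exhausted, assuming perfect
    wear leveling and a fixed write-amplification factor."""
    total_write_budget_gb = capacity_gb * PE_CYCLES[cell_type]
    effective_daily_writes_gb = daily_writes_gb * write_amp
    return total_write_budget_gb / effective_daily_writes_gb / 365

# A hypothetical 400 GB drive absorbing 500 GB of host writes per day:
for cell in ("SLC", "MLC", "TLC"):
    print(f"{cell}: {lifetime_years(400, cell, 500):.0f} years")
```

Under these assumptions the same drive lasts decades as SLC but only a few years as TLC, which is exactly the endurance gap the doomsayers point to.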
While I agree that flash will eventually be replaced by another technology, I don't think that's going to happen anytime soon. With the exception of NVDIMM, most competing memory technologies are five to six years away from being ready for the enterprise, and NVDIMM will be used only in small quantities because of its cost.
A lot of supporting technology surrounds the flash NAND market, most notably the flash controller, which governs how and where data is written, corrects bit errors, and maps out failing blocks. The problem with the "flash will die" theory is that it assumes that, as flash NAND advances, this surrounding market will stand still. To the contrary, we have seen at least as much innovation in flash controllers as in flash NAND itself.
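As a concrete illustration of one thing a controller does, here is a minimal dynamic wear-leveling sketch in Python: it always programs the least-worn free block so erase counts stay roughly even across the device. This is a toy model of the general idea, not any vendor's implementation; real controllers layer static wear leveling, ECC, and bad-block management on top.

```python
import heapq

class WearLeveler:
    """Toy dynamic wear leveler: allocate the least-worn free block so
    no single block burns through its P/E budget ahead of the rest."""

    def __init__(self, num_blocks):
        # Min-heap of (erase_count, block_id) for free blocks.
        self.free = [(0, b) for b in range(num_blocks)]
        heapq.heapify(self.free)

    def allocate(self):
        """Return the free block with the fewest erases so far."""
        erases, block = heapq.heappop(self.free)
        return block, erases

    def erase(self, block, erases):
        """Reclaiming a block costs one erase; requeue with the new count."""
        heapq.heappush(self.free, (erases + 1, block))

wl = WearLeveler(num_blocks=4)
block, erases = wl.allocate()   # always the least-worn block
wl.erase(block, erases)         # wear is tracked as blocks are reclaimed
```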
[Which flash is right for you? See our Quick Guide To Flash Storage Latency Wars.]
Another error in this theory is that it assumes storage system builders won't adapt and improve their technology. Again to the contrary, system designers have created technologies that minimize flash writes through deduplication and compression: data that matches an existing segment is never written to flash at all, and data that must be written occupies fewer cells. We expect vendors to keep improving how effectively they compress data and identify redundant segments.
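A minimal sketch of how inline data reduction cuts flash writes, assuming a toy content-addressed store: duplicate blocks are never programmed at all, and new blocks are compressed before they hit NAND. Real arrays do this at fixed block granularity with far more robust metadata handling.

```python
import hashlib
import zlib

class ReducingWriteCache:
    """Toy inline data reduction: dedupe by content hash, then compress
    whatever is genuinely new before it reaches flash."""

    def __init__(self):
        self.seen = {}         # content hash -> compressed blob
        self.flash_bytes = 0   # bytes actually programmed to NAND

    def write(self, data: bytes):
        digest = hashlib.sha256(data).digest()
        if digest in self.seen:
            return             # duplicate segment: no flash write at all
        compressed = zlib.compress(data)
        self.seen[digest] = compressed
        self.flash_bytes += len(compressed)

cache = ReducingWriteCache()
for block in (b"A" * 4096, b"A" * 4096, b"B" * 4096):
    cache.write(block)
print(cache.flash_bytes)  # far less than the 12 KB the host sent
```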
We are also beginning to see initial implementations of flash tiering, where data is first written to a small but more durable single-level cell (SLC) tier, then destaged to multi-level cell (MLC) as it stabilizes and becomes read-mostly. Going forward, vendors could easily use a small SLC front end as a shock absorber for incoming writes, then move data down to increasingly low-cost, high-density flash NAND.
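Here is a toy sketch of that write path, with the two tiers modeled as dictionaries and a made-up "stable after N quiet reads" destaging policy; the threshold and the policy itself are illustrative assumptions, not how any shipping system decides.

```python
class TieredWritePath:
    """Toy two-tier write path: every write lands in a small SLC buffer;
    blocks that stop changing are destaged to denser, cheaper MLC."""

    def __init__(self, stable_after=3):
        self.slc = {}                    # addr -> (data, reads since write)
        self.mlc = {}                    # addr -> data
        self.stable_after = stable_after # assumed stability threshold

    def write(self, addr, data):
        self.mlc.pop(addr, None)         # invalidate any destaged copy
        self.slc[addr] = (data, 0)       # SLC absorbs the write

    def read(self, addr):
        if addr in self.slc:
            data, reads = self.slc[addr]
            self.slc[addr] = (data, reads + 1)
            if reads + 1 >= self.stable_after:
                self.mlc[addr] = data    # read-mostly now: destage to MLC
                del self.slc[addr]
            return data
        return self.mlc.get(addr)

tier = TieredWritePath()
tier.write(0, b"hot data")
for _ in range(3):
    tier.read(0)                         # enough quiet reads -> destaged
assert 0 in tier.mlc and 0 not in tier.slc
```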
The final problem with the "flash will die" theory is that it assumes flash-based storage systems have a long way to go to be cost-competitive with traditional disk storage systems. That is simply not the case. Most hybrid and all-flash systems already claim price parity with high-performance disk arrays, a claim that typically rests on effective (post-reduction) capacity, and in our research at Storage Switzerland we find it to be pretty accurate. Given an effective archiving strategy, we are at the point where using flash for the majority (if not all) of your active data is a reality.
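The arithmetic behind such parity claims is simple: data reduction divides the raw cost per gigabyte. The prices and the 5:1 reduction ratio below are illustrative assumptions, not quotes from any vendor.

```python
def effective_cost_per_gb(raw_cost_per_gb, reduction_ratio):
    """Effective $/GB once inline dedup + compression multiply capacity."""
    return raw_cost_per_gb / reduction_ratio

# Assumed, illustrative prices: raw flash at $2.50/GB with 5:1 reduction
# versus high-performance disk at $0.80/GB, which reduces poorly (1:1).
flash = effective_cost_per_gb(2.50, 5.0)   # -> $0.50/GB effective
disk = effective_cost_per_gb(0.80, 1.0)    # -> $0.80/GB effective
print(f"flash ${flash:.2f}/GB vs disk ${disk:.2f}/GB")
```

Under these assumptions the flash system is already the cheaper option per usable gigabyte, which is why the parity claims hold up for active data sets that reduce well.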