Over the past 20 years RAID technology has become so ingrained in how we build storage for anything more important than the family photos that most storage practitioners have forgotten it was once a revolutionary idea. In 1988, when Patterson, Gibson and Katz published the seminal paper that both coined the term RAID and defined levels 1-5, mainframes and minicomputers used 14" drives that resembled your basic Maytag washer. Patterson et al. dubbed these SLEDs (Single Large Expensive Drives). Meanwhile, PCs had 3.5" disks that were pitiful by comparison. The idea that a bunch of pitiful drives could out-perform and out-last the expensive enterprise disk was revolutionary at the time, and it led to the ultimate demise of the SLED.
In truth, the SLED makers were running out of options to make their drives ever bigger, stronger and faster. The IBM 3380 used as an example in Patterson's paper had four independent head positioners and could deliver 200 IOPS, but that complexity drove the price up to $15/MB and power consumption to over 6 kW for a single 7.5GB drive. While the 14" diameter of the platters made room for four head combs, it also made spinning the disk faster impractical. The technology had reached its zenith.
Today's enterprise SSD market reminds me of those RAID vs. SLED days. Most array vendors, from EMC and IBM to HP and Compellent, have added STEC's Zeus IOPS SSDs (the SLED equivalent) to their Fibre Channel arrays, and it's easy to see why. Not only do Zeus IOPS drives deliver a whopping 45,000 read and 16,000 write IOPS, they also come with a Fibre Channel interface, so vendors can simply plug them into the JBOD slots where spinning FC disks went and re-tune their controller firmware to accommodate the new devices.
The only problem with STEC's flagship drives is the price. Street price for a 146GB unit is around $16,000, which works out to about $110/GB, or $1 per write IOPS. Since enterprise users will almost always deploy them as mirrored pairs, the cost of entry is $32K. Compared to the 20-200 short-stroked 15K RPM drives it would take to deliver the same IOPS, that's cheap, but it does require segregating 146GB of hot data onto its own LUN.
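If you want to check my math, here's a quick back-of-the-envelope sketch. The STEC price and spec figures come from above; the per-drive IOPS for a short-stroked 15K RPM drive is my own rough assumption, not a vendor number:

```python
# Cost arithmetic for the STEC Zeus IOPS figures quoted above.
STEC_PRICE = 16_000        # street price, USD, for the 146GB unit
STEC_CAPACITY_GB = 146
STEC_WRITE_IOPS = 16_000

print(f"$/GB:        {STEC_PRICE / STEC_CAPACITY_GB:.0f}")   # ~ $110/GB
print(f"$/write IOPS: {STEC_PRICE / STEC_WRITE_IOPS:.2f}")   # ~ $1.00

# How many short-stroked 15K RPM drives does it take to match the
# write IOPS? 200-300 IOPS per short-stroked drive is an assumption.
for per_drive_iops in (200, 300):
    drives = STEC_WRITE_IOPS / per_drive_iops
    print(f"at {per_drive_iops} IOPS/drive: {drives:.0f} drives to match")
```

At 200 IOPS per drive you'd need 80 spindles to equal one Zeus IOPS drive's write performance, which is why the comparison lands squarely in that 20-200 drive range.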
Echoing the original RAID argument, Pillar and EqualLogic are instead using SATA-interface SLC devices from Intel and Samsung that are less than a quarter as fast as the STECs for write I/O but also less than 1/20th the cost. By using a whole shelf of the 50 or 64GB beasts, they can give their users twice the space and IOPS of a pair of STECs for what should be about the same price, once the slot costs are factored in.
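The shelf-level arithmetic looks something like the sketch below. The per-drive price and write IOPS here are my assumptions for an Intel X25-E-class SLC drive (consistent with the "less than a quarter as fast, less than 1/20th the cost" framing above), and the 16-drive shelf size is assumed too:

```python
# Rough shelf-vs-pair comparison implied by the paragraph above.
SATA_PRICE = 800           # assumed: < 1/20th of the $16K STEC
SATA_WRITE_IOPS = 3_300    # assumed: < 1/4 of the STEC's 16,000
SATA_CAPACITY_GB = 64
SHELF_DRIVES = 16          # assumed shelf size

# Baseline: a mirrored pair of STECs, per the enterprise-array practice above
stec_pair_cost = 2 * 16_000
stec_usable_gb = 146
stec_write_iops = 16_000

# Shelf of SATA SSDs mirrored (RAID 10): half the raw capacity is usable,
# and each logical write hits two drives, roughly halving aggregate write IOPS
shelf_cost = SHELF_DRIVES * SATA_PRICE
shelf_usable_gb = SHELF_DRIVES // 2 * SATA_CAPACITY_GB
shelf_write_iops = SHELF_DRIVES // 2 * SATA_WRITE_IOPS

print(f"STEC pair:  ${stec_pair_cost:,}  {stec_usable_gb}GB  {stec_write_iops:,} write IOPS")
print(f"SATA shelf: ${shelf_cost:,}  {shelf_usable_gb}GB  {shelf_write_iops:,} write IOPS")
```

On drive cost alone the shelf comes out well ahead ($12,800 vs. $32,000 under these assumptions); add the cost of 16 array slots versus two and the totals converge, while the shelf still delivers a multiple of the space and write IOPS.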
This approach makes sense for these vendors, just as adding STEC made sense for EMC. EqualLogic and Pillar were figuring out how best to add SSDs to array architectures that already used point-to-point SATA RAID controllers in each shelf, while the enterprise guys had systems that supported SATA but were performance-optimized for FC drives on FC loops.