The Storage Networking World I attended in Dallas last week was a substantially different event than SNWs of old. In the past, SNW was the event where storage vendors socialized with each other, mixing business and pleasure over drinks and the pre-conference golf outing. This SNW seemed to have a lot more end users there for the education sessions and hands-on labs.
Since the Dallas Omni was a smaller venue than those used for past SNWs, it was hard to judge the size of the crowd, but I found it telling that while there was a good crowd at lunch and between sessions, the halls were generally empty once sessions were under way. So the crowd was definitely getting educated. Unfortunately, with my busy schedule of briefings, I only made it to a couple of sessions. They were well attended and, even better, drew lots of good questions from the audience.
Unfortunately, just as the vendor-to-user ratio has shifted toward the users, the vendors have started drifting away. HP, Oracle and HDS had the only 20x20-foot booths in the hall, and the entire exhibit area would have fit inside a single Microsoft or IBM booth at a big show like Comdex in the old days.
Over the last few years we’ve seen a shift from big shows run by independent outfits, of which Interop, run by our corporate overlords here at UBM, seems to be the last, to vendor-driven 'worlds' from EMC, VMware, HP, IBM and the rest. While it might make sense for an EMC customer to send folks to EMCworld for education, it means they’re never going to be exposed to cutting-edge storage arrays from the likes of Nimbus, Tintri, Nimble, Starboard and Tegile, since those startups can’t buy a booth at EMCworld or HP Discover.
I did see some interesting tech, starting with Symform’s peer-to-peer cloud storage, which uses encryption and Reed-Solomon-style dispersal coding to turn subscribers’ disk space into a repository for other users’ data. Users get to store as much data as the disk space they donate. Since the load is spread across all the users, the bandwidth demands are low, and you don’t need to donate anything more than SATA disk space on a PC to the cloud.
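Symform hasn’t published the exact parameters of its coding scheme, but the shape of the pipeline is easy to sketch. The toy Python below is my own illustration, not Symform’s code: it encrypts a blob locally, splits it into four fragments, and adds a single XOR parity piece as a stand-in for real Reed-Solomon dispersal. Each of the five pieces would then go to a different peer, and any one of them can be lost and rebuilt from the rest.

    # Toy illustration of encrypt-then-disperse. K=4 fragments plus one XOR
    # parity piece stands in for real Reed-Solomon coding, which can tolerate
    # multiple lost fragments; this toy survives the loss of any one piece.
    from functools import reduce
    from cryptography.fernet import Fernet   # pip install cryptography

    K = 4  # number of data fragments (illustrative, not Symform's number)

    def disperse(plaintext: bytes, key: bytes) -> list:
        """Encrypt locally, split into K fragments, append an XOR parity piece."""
        ciphertext = Fernet(key).encrypt(plaintext)
        frag_len = -(-len(ciphertext) // K)              # ceiling division
        padded = ciphertext.ljust(frag_len * K, b"\0")   # pad to a multiple of K
        frags = [padded[i * frag_len:(i + 1) * frag_len] for i in range(K)]
        parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*frags))
        return frags + [parity]   # ship each piece to a different peer

    def rebuild(pieces: list) -> list:
        """Recover a single missing piece by XOR-ing the surviving ones."""
        missing = pieces.index(None)
        survivors = [p for p in pieces if p is not None]
        pieces[missing] = bytes(reduce(lambda a, b: a ^ b, col)
                                for col in zip(*survivors))
        return pieces

Getting the data back means stripping the zero padding (so the original length has to be recorded somewhere) and decrypting with a key that never left the owner’s machine, which is the point of the design: peers hold fragments they can’t read.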
Startup SavageIO brought a high-density storage system that packs four 2.5-inch SSDs and 48 hot-swappable 3.5-inch SATA drives into a 4U cabinet. The disks are driven by a Xeon motherboard and LSI MegaRAID controllers through a SAS expander and SavageIO’s own backplane. While running four SATA drives through each SAS channel will limit the system's performance (a rough back-of-envelope check below shows why), it should be a good way to build low-cost storage systems with NexentaStor, Gluster or your choice of software. I’m a little concerned about the drive mounting, though, as the drives stand upright supported by Plexiglas dividers; I’d rather have better vibration isolation.
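To put a number on that bottleneck, assume 6 Gb/s SAS lanes and roughly 150 MB/s of sustained streaming from a 7,200 RPM SATA drive; both figures are my assumptions, not SavageIO’s published specs.

    # Rough SAS oversubscription check (assumed figures, not vendor specs).
    lane_gbps = 6.0                         # one 6 Gb/s SAS lane
    lane_mb_s = lane_gbps * 1000 / 10       # 8b/10b encoding -> ~600 MB/s payload
    drives_per_lane = 4
    per_drive = lane_mb_s / drives_per_lane
    print(f"{lane_mb_s:.0f} MB/s per lane / {drives_per_lane} drives "
          f"= {per_drive:.0f} MB/s per drive")
    # ~150 MB/s each -- right about what one 7,200 RPM drive can stream, so on
    # big sequential jobs the lane, not the disks, becomes the limit; random
    # I/O, which rarely pushes past a few MB/s per spindle, won't notice.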