There's a debate brewing among network systems management gurus: How far must administrators go to get effective, timely information from the unmanageable mountain of super-granular performance data that their over-instrumented, overly chatty equipment keeps trying to provide?
On one side is VMware, which responded so vigorously to customer complaints about the lack of tools to manage virtual servers that, just a couple of years later, it felt compelled to buy big-data analytics companies to help sort through the resulting flood of status updates.
With tools such as Log Insight, which it bought from developer Pattern Insight in August, VMware evidently plans to add big-data analytics and data mining to its systems. It's also adding a network management suite to make it easier for administrators to take in all the real-time machine-to-machine data supplied by thousands of networked devices while still surfacing only the answers they need.
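To make that concrete: the core trick behind log-analytics tools of this kind is collapsing millions of near-identical machine messages into a handful of templates, then flagging the templates whose frequency suddenly spikes. What follows is a minimal Python sketch of that idea; the log lines, regexes and spike threshold are invented for illustration and say nothing about VMware's actual implementation.

import re
from collections import Counter

def template(line):
    """Collapse variable fields (hex IDs, numbers) so similar lines group together."""
    line = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", line)
    return re.sub(r"\d+", "<NUM>", line)

def spikes(baseline_logs, current_logs, factor=5.0):
    """Return message templates at least `factor` times more frequent than baseline."""
    base = Counter(template(l) for l in baseline_logs)
    cur = Counter(template(l) for l in current_logs)
    return [t for t, n in cur.items() if n > factor * max(base.get(t, 0), 1)]

# Invented sample data: one message type suddenly appears 40 times.
baseline = ["disk 3 latency 12 ms"] * 100 + ["link 0x1f up"] * 5
current = ["disk 3 latency 14 ms"] * 100 + ["vm 42 migration failed"] * 40
print(spikes(baseline, current))  # ['vm <NUM> migration failed']

A tool doing this at data center scale answers "what changed?" without anyone reading the raw stream, which is the efficiency such products promise.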
On the other side are those who appreciate using data efficiently rather than profligately. For example, there's Shmuel Kliger, founder and chief technology officer of VMware partner VMTurbo, who says any network systems management setup that dumps so much raw data on administrators that they need big-data analytics to sift through it for answers is fundamentally flawed.
"Data center operations are getting more complicated, so approaching it in the same way as older systems vendors who keep adding point tools to handle every new demand makes trying to manage it all even more complicated," Kliger says.
"There may be thousands of data points in the configuration of servers, workload placement, capacity planning, CPU and memory balancing, power management, but if you're doing each of those things with a different tool, it's going to take you a long time," he adds. "VMware had a green field to make up its own more coherent approach to management; instead they put themselves in the same boat as the rest of the systems vendors, delivering a basket of tools, some with a common UI, that are not necessarily integrated, that have no semantics function that integrates the data into a coherent picture for systems management."
The Big Four framework vendors delivered a "single pane of glass" view of the network, but they also dumped a "management nightmare" on users, according to Kliger's blog on the topic.
A large company might use dozens of point products to manage its hardware, but most are designed simply to collect problem reports and deliver them to network admins, who might or might not have the time to sift through them and flag the points that matter.
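That sifting is easy to underestimate. Purely as illustration, here is a hypothetical admin-side Python script that merges alerts from several point tools, drops duplicates and ranks what's left by severity; the tool names, fields and severity scale are all invented, and real point products rarely even share a common alert format, which is exactly the problem.

from collections import defaultdict

# Invented alert feed standing in for the output of several point tools.
alerts = [
    {"tool": "NetMon",    "host": "esx-07", "msg": "packet loss 3%",     "sev": 2},
    {"tool": "DiskWatch", "host": "esx-07", "msg": "datastore 91% full", "sev": 1},
    {"tool": "NetMon",    "host": "esx-07", "msg": "packet loss 3%",     "sev": 2},  # duplicate
    {"tool": "VMHealth",  "host": "esx-12", "msg": "ballooning active",  "sev": 3},
]

def triage(alerts):
    """Collapse duplicate (host, msg) pairs, then rank most urgent first (sev 1 = worst)."""
    seen = defaultdict(int)
    unique = []
    for a in alerts:
        key = (a["host"], a["msg"])
        if seen[key] == 0:
            unique.append(a)
        seen[key] += 1
    return sorted(unique, key=lambda a: a["sev"])

for a in triage(alerts):
    print(a["sev"], a["host"], a["msg"])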
As data centers get more complicated, virtualization disperses responsibility for specific parts of the infrastructure along functional or departmental lines rather than by the physical location of the servers, making the management nightmare worse.
It would be easy to brush Kliger off as a former executive who doesn't like the direction being taken by the company that bought his brainchild. That might even be true to some extent. But he's not the kind of troll or yahoo who goes to a parade just to fling mud at people on the floats.
He is a former VP of architecture and applied research at EMC and founder/CTO of System Management ARTS--an innovative startup whose Smarts InCharge suite was designed to automatically discover and inventory network devices and identify developing problems in them, saving admins the effort of extensive troubleshooting.
In 2002, when it won a Network Computing Editor's Choice award, SMARTS was one of only two systems management vendors offering Layer 2 network discovery; it was acquired by VMware parent company EMC in 2004.
Before SMARTS, Kliger was a senior researcher at IBM and at the Weizmann Institute of Science, where he earned his master's degree and Ph.D. in computer science.
Kliger may be partisan on some systems management issues, but he's not an idiot. That doesn't mean he's right. It does mean that if he's not right, he at least isn't completely wrong.
IT infrastructures really are getting more complex. Cloud and virtualization keep reducing the importance of the physical characteristics of individual parts of that infrastructure, making traditional ways of measuring its capability--by the location of specific clusters or data centers--irrelevant.
There's an argument to be made that low-level networking gear shouldn't need the intelligence to solve its own problems. It needs to be fast and communicative--shipping lots of data on traffic flow, application performance and other variables up the line to be analyzed by hardware with more intelligence.
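Assuming that division of labor, the shape of the code is simple: the device exports nothing but raw counters and timestamps, and all the judgment lives upstream. The sketch below is a toy Python version under those assumptions; the device name, counter fields and bandwidth budget are invented, not drawn from any vendor's product.

class Collector:
    """The 'smart' upstream side: keeps history, turns raw counters into rates, flags spikes."""

    def __init__(self, max_bps=1e9):
        self.last = {}          # most recent sample per device
        self.max_bps = max_bps  # invented per-device bandwidth budget

    def ingest(self, sample):
        """Take one raw sample ({'device', 'ts', 'rx_bytes'}) as a dumb device would ship it."""
        prev = self.last.get(sample["device"])
        self.last[sample["device"]] = sample
        if prev is None:
            return None  # need two samples to compute a rate
        dt = (sample["ts"] - prev["ts"]) or 1e-9
        bps = 8 * (sample["rx_bytes"] - prev["rx_bytes"]) / dt
        if bps > self.max_bps:
            return "%s: rx %.1f Gb/s exceeds budget" % (sample["device"], bps / 1e9)
        return None

# Two hand-built samples one second apart: 200 MB received = 1.6 Gb/s, over a 1 Gb/s budget.
c = Collector(max_bps=1e9)
c.ingest({"device": "tor-1", "ts": 0.0, "rx_bytes": 0})
print(c.ingest({"device": "tor-1", "ts": 1.0, "rx_bytes": 2 * 10**8}))

The point of the design is that the switch never computes a rate or holds history; it just ships counters, and the intelligence--and the upgrade path--stays in one place upstream.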