In a recent blog post, I discussed how the WAN is changing for some organizations. This post considers the broader question of what else might be changing in networks, affecting staffing and skills. In fact, this should perhaps become an annual post addressing what’s shifted in the last year.
Let’s treat this as a catch-up blog on trends I’m seeing, which may or may not catch on.
Wired and WLAN
WLAN speeds have been increasing, and the quality of WLAN services and operational support has been improving. Cisco’s WLAN offerings are strong in this regard, particularly the mobile device QoS “Fast Lane” and other cooperative efforts with Apple, and integration with enterprise mobile device management (MDM) products.
WLAN done well has the potential to replace most wired networking, particularly since most users use primarily WLAN at home. Of course, enterprise APs currently require a wired POE infrastructure. WLAN done poorly -- well, that’s for masochists. There are apparently a lot of people who qualify -- enterprise WLAN is different than home WLAN.
One neat Cisco product can be added to APs to let them also operate as cellular small cells, in conjunction with a cell provider. It uses IPsec tunnels for secure backhaul of the cellular traffic. The visual description that comes to mind is “Mickey Mouse ears for your AP,” which probably manages to offend both Cisco and Disney in one sentence. So let’s pretend I didn’t write that and go with “outboard antenna ears with mounting behind your AP,” as I’m in no way intending to disparage or poke fun at the product. Other modules are coming that can be added to certain Cisco APs for different market niches, e.g. retail beacons.
If you hadn’t noticed, leaky coax and other DAS systems are generally poor for WLAN. Think MIMO, frequency limitations, etc. The Cisco product does traditional WLAN supplemented with small cell and backhaul, which avoids the DAS trap of doing one of the two well and the other sub-optimally.
Potential problem areas for most WLAN networks:
- Legacy site surveys. Have YOU re-surveyed recently? With AP transmit power turned down? Maybe even planning for dense cells? What kind of bandwidth per user is your requirement (see the back-of-the-envelope sketch after this list)? By the way, all of this means coordinating with your site survey person before they do the survey, so they measure the right things.
- Cheap printers (wake up, HP!) and laptops (many vendors) that are still, inexcusably, 2.4 GHz only. 802.11ac is where they should be by now.
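To make “bandwidth per user” concrete, here is a minimal capacity sketch of the sort your survey person should be working from. All of the numbers (user count, per-user requirement, usable per-AP throughput) are hypothetical assumptions, not measurements:

```python
# Rough WLAN capacity estimate -- hypothetical numbers, adjust to your environment.

users_per_area = 120                # users in one open-plan floor area (assumption)
bandwidth_per_user_mbps = 5         # your per-user requirement (assumption)
effective_ap_throughput_mbps = 200  # real-world usable throughput per AP/cell (assumption,
                                    # well below the marketing data rate)

offered_load_mbps = users_per_area * bandwidth_per_user_mbps
aps_needed = -(-offered_load_mbps // effective_ap_throughput_mbps)  # ceiling division

print(f"Offered load: {offered_load_mbps} Mbps")
print(f"APs (cells) needed for capacity: {aps_needed}")
# Coverage, co-channel interference, and 5 GHz channel reuse still have to be
# validated by the actual site survey; this only sizes for capacity.
```

The point is that dense-cell designs fall out of a capacity requirement, not just a coverage map.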
Reasons to keep wired networks:
- Wired phone handsets.
- Other wired devices, especially POE devices.
I can’t really say wired will go away. It may shrink.
Bear in mind that Cisco’s facilities switches for smart lighting and the like may shift this. POE devices in various forms keep wiring valuable (or perhaps power delivery is where the wiring’s ROI really lies), and low-power facilities cabling could prove useful. Maybe we’ll end up talking about wire-powered IoT versus wireless IoT devices?
Data center
Cloud is the obvious technical impact here. My claim is that applications really ought to be re-architected properly for the cloud; otherwise, colo is more what you’re achieving, at cloud prices.
SaaS does have the impact of shrinking whatever datacenter you have: fewer apps. Support costs also decline. Office 365 is a leading example of that. That frees up server admin cycles to deal with the specialized apps, VDI, etc.
UCS and converged infrastructure do the same. You can run a medium-sized business out of one UCS chassis. And if you’re doing that, why not stick it in a colo? This fits into the one-to-a-few racks (cabinets) size range.
In conjunction with this, I’ll note Ivan Pepelnjak’s blog about only needing two top-of-rack switches in a datacenter. I did similar math a while ago and keep updating it. You can run a lot of VMs in a UCS chassis, and three to four times as many in a rack; two switches can interconnect all of that, even two racks of such chassis. Even years ago, the math came out to 1000-2000 VMs, depending on how much CPU and RAM they use. You’ve got to be a pretty big company to need more than that. Heavy virtualization = “Honey, I Shrank the Datacenter.”
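For the curious, here is a minimal sketch of that math. The density figures are illustrative assumptions; plug in your own server specs and VM sizing:

```python
# "Honey, I Shrank the Datacenter" math -- illustrative assumptions only.

vms_per_blade = 30       # assumption: modestly sized VMs on a dense two-socket blade
blades_per_chassis = 8   # a UCS 5108 chassis holds eight half-width blades
chassis_per_rack = 4     # assumption, matching the "three to four times" figure above
                         # (power and cooling often limit this before rack space does)

vms_per_chassis = vms_per_blade * blades_per_chassis
vms_per_rack = vms_per_chassis * chassis_per_rack

print(f"VMs per chassis: {vms_per_chassis}")   # ~240
print(f"VMs per rack:    {vms_per_rack}")      # ~960
# A pair of racks like this, interconnected by two switches, lands in the
# 1000-2000 VM range -- more than most organizations need.
```

Change the per-blade assumption and the answer moves, but the conclusion usually doesn’t: the whole thing fits in very few racks.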
So once you’ve done a good job of virtualizing, this could be another incentive to move things to a colo. It costs less to put a shrunken datacenter into a colo. Dare we refer to a non-virtualized datacenter as “bloated”?
Combining this with a colo-centric WAN works well. We are seeing more and more customers shifting to this. It provides better datacenter redundancy and security than a datacenter in costly office space.
In one case, it helped anticipate moving the organization to a new building: The customer moved the datacenter to colo space, then could focus solely on standing up the new building user space, WAN link to the datacenter, and moving users. (Pro tip, thanks to John and Mark!)
Other data center-related thoughts:
- SSD (solid-state) storage greatly increases IOPS. That moves the bottleneck elsewhere, probably back to CPU and network. Do you know where your bottlenecks are?
- VMware VSAN and similar “hyperconverged” server technologies definitely impose heavier burdens on the network, both in bandwidth and in the stability and reliability they demand. You do monitor up/down, error%, and discard% on all your infrastructure links, don’t you? (A sketch of the error%/discard% math follows this list.)
- I suspect VMware VSAN (or similar technology) scales well up to a point, then hits diminishing returns. I’d like to know where that point is. If you know of any good research on this, please share a link via Twitter and/or a blog comment!
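On the monitoring point above, error and discard percentages are just deltas of standard IF-MIB counters between polls. A minimal sketch of the calculation, assuming you already have two counter samples from whatever poller you use (the values below are made up):

```python
# Compute interface error% and discard% from two successive IF-MIB counter samples.

def rate_percent(bad_delta, packets_delta):
    """Errors (or discards) as a percentage of packets seen in the polling interval."""
    if packets_delta == 0:
        return 0.0
    return 100.0 * bad_delta / packets_delta

# Two polls of one interface, e.g. ifHCInUcastPkts, ifInErrors, ifInDiscards via SNMP.
prev = {"in_pkts": 10_000_000, "in_errors": 120, "in_discards": 4_500}
curr = {"in_pkts": 10_600_000, "in_errors": 138, "in_discards": 7_200}

pkts = curr["in_pkts"] - prev["in_pkts"]
err_pct = rate_percent(curr["in_errors"] - prev["in_errors"], pkts)
disc_pct = rate_percent(curr["in_discards"] - prev["in_discards"], pkts)

print(f"error%: {err_pct:.4f}  discard%: {disc_pct:.4f}")
# Sustained discards on storage/VSAN-carrying links are an early warning that
# the network is becoming the bottleneck.
```

Most NMS tools will do this for you; the point is to make sure something is actually watching those links.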
Summing up so far
So you’ve moved your datacenter, WAN, and firewalling/network edge to the colo. What’s left is some user access networking at each site, some mix of wired ports and heavy WLAN.
For what it’s worth, NetCraftsmen runs Cisco Jabber out of a colo. It works, though I can’t quite call it great: Jabber seems not to reliably auto-detect changes of location/address/web proxy or security device. I’ve learned to restart Jabber whenever I change sites. That’s an application issue, not a network issue!
NEXT: File shares, security, and more