In the 500 or so years it feels like I've been working with computers of one sort or another, I've noticed that shiny new technologies follow similar trajectories. With FCoE, we've reached what Gartner calls the "Peak of Inflated Expectations" and what I call the "ATM Can Fix Everything" moment, named for the timely combination of IBM introducing desktop ATM cards and a Visa commercial in the mid-1990s. As we head toward the "Trough of Disillusionment," we need to take a good hard look at where FCoE is a good fit and where it could be overkill or resume dressing.
Conceptually, FCoE promises an end to the back-of-server spaghetti factory through converged networking, delivering Fibre Channel performance and reliability at Ethernet prices. Having spent more than my share of time standing in the blast of hot air from the back of a server rack trying to figure out which of the seven Gigabit Ethernet cables is plugged into vNIC5, I'm sure most of you can see the benefit of, at the very least, converging data and storage traffic onto a pair of 10Gbps Ethernet connections.
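To put a rough number on the spaghetti, consider the per-host cable count. The port counts below are illustrative assumptions, not a survey of anyone's actual data center, but the shape of the savings holds:

```python
# Back-of-the-envelope cable count for one virtualized host, before and
# after convergence. Port counts are assumed for illustration: six GbE
# links for data/management/migration plus a dual-fabric pair of FC HBA
# ports, all collapsing onto a pair of converged 10GbE (CNA) ports.

legacy_cables = {
    "1GbE data/management/migration": 6,
    "4Gbps FC storage (dual fabric)": 2,
}

converged_cables = {
    "10GbE converged (data + FCoE)": 2,
}

def total(cables: dict) -> int:
    """Sum the cable runs for one host."""
    return sum(cables.values())

saved_per_host = total(legacy_cables) - total(converged_cables)
print(f"Legacy cabling:    {total(legacy_cables)} cables per host")
print(f"Converged cabling: {total(converged_cables)} cables per host")
print(f"In a rack of 40 hosts, that's {saved_per_host * 40} fewer cables to trace.")
```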
When I ran the numbers in this blog entry back in February, we'd already reached the point where 10Gbps was cheaper than multiple 1Gbps connections for virtual server hosts. I can only imagine the difference is even bigger now.
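For those who want to fiddle with the math themselves, here's a minimal sketch of that crossover calculation. The per-port prices are placeholder assumptions, not the street prices from the February post, so plug in your own quotes:

```python
# Sketch of the N x 1GbE vs. 2 x 10GbE cost comparison. All prices are
# placeholder assumptions for illustration; substitute current quotes
# for NIC, switch port, and cabling costs in your own environment.

PORT_COST_1GBE = 150.0   # assumed all-in cost per 1GbE port
PORT_COST_10GBE = 500.0  # assumed all-in cost per 10GbE port

def host_connectivity_cost(ports: int, per_port_cost: float) -> float:
    """Total network connectivity cost for one host."""
    return ports * per_port_cost

legacy = host_connectivity_cost(8, PORT_COST_1GBE)      # eight GbE links
converged = host_connectivity_cost(2, PORT_COST_10GBE)  # a 10GbE pair

print(f"8 x 1GbE:  ${legacy:,.0f} per host, 8Gbps aggregate")
print(f"2 x 10GbE: ${converged:,.0f} per host, 20Gbps aggregate")
print("10GbE wins" if converged < legacy else "1GbE still wins")
```

At these assumed prices the 10GbE pair already comes in cheaper while offering more than twice the aggregate bandwidth.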
Granting that everyone should use 10GbE for new deployments, and that 10GbE solves the cable-mess problem, the question remains: who should plan for and start piloting FCoE deployments? The easy answers come at the ends of the spectrum.
First, let me state that FCoE is a technology for organizations already running Fibre Channel SANs. If you've been using DAS and are adding shared storage to support server virtualization or disaster recovery planning initiatives, you don't need FCoE; iSCSI or NAS over the same 10Gbps Ethernet will serve you well without the overhead of learning Fibre Channel.