Last month a group of vendors, including Brocade, Cisco, Emulex, Intel and QLogic, announced Fibre Channel over Ethernet (FCoE), a protocol that encapsulates Fibre Channel frames in Ethernet frames so Fibre Channel data can be carried across 10 Gigabit Ethernet connections. So FCoE joins iSCSI, iFCP and FCIP as yet another way to carry storage data across an Ethernet network. As professor Andrew S. Tanenbaum once said, "The nice thing about standards is that you have so many to choose from."
The big difference between Fibre Channel over Ethernet and the others is that FCoE eschews IP and sends FC frames directly over Ethernet. Depending on who you talk to, this is either the long-awaited common infrastructure that can run standard network and storage applications--yielding FC behavior and management at Ethernet prices while reducing both world hunger and global warming--or the last gasp of an FC industry about to drown in the tsunami that is iSCSI. I think it's mostly the latter; here's how I see the arguments shaking out.
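To make that layering difference concrete, here's a minimal sketch in Python comparing how each approach stacks up on the wire. The layer lists are my own simplification for illustration, not frame formats from any spec.

# Toy illustration of the layering difference -- not real frame builders.
# The layer lists below are a simplified view of each protocol stack.
STACKS = {
    # FCoE: an FC frame dropped straight into an Ethernet frame. No IP header,
    # so routers can't forward it; traffic stays on the local segment or VLAN.
    "FCoE":  ["Ethernet", "FCoE encapsulation", "FC frame"],
    # iSCSI: SCSI commands and data ride inside TCP/IP, so any IP network
    # (and any router) can carry them.
    "iSCSI": ["Ethernet", "IP", "TCP", "iSCSI PDU"],
    # FCIP: whole FC frames tunneled over TCP/IP between SAN islands.
    "FCIP":  ["Ethernet", "IP", "TCP", "FCIP", "FC frame"],
}

for name, layers in STACKS.items():
    routable = "yes" if "IP" in layers else "no"
    print(f"{name:6s} routable={routable:3s}  " + " / ".join(layers))

Run it and the no-IP line for FCoE is the whole story: everything else in the list can cross a router, FCoE can't.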
FCoE's proponents claim that avoiding the computing cost of calculating all those pesky TCP windows and checksums is an advantage. That makes me wonder why storage guys are so afraid of TCP. Today's servers are crammed full of multicore, multigigahertz processors and use Gigabit Ethernet chips from Broadcom and Intel that offload much of the heavy lifting of TCP, so even several gigabits per second of TCP traffic consumes just a small percentage of available CPU. If you throw enough cheap computing cycles and bandwidth at a problem, you don't need to tweak your protocols to be especially efficient. Meanwhile, giving up on IP makes FCoE unroutable, limiting its use to links--or at least VLANs--dedicated to storage traffic. Why bother with a new protocol?
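For a sense of scale on that "pesky TCP overhead" argument, here's a back-of-envelope sketch. The old 1-GHz-per-Gbps rule of thumb and the offload factor are rough assumptions on my part, not benchmark results.

# Rough estimate of TCP's CPU cost on a modern server. The rule of thumb
# (~1 Hz of CPU per bit/s of TCP) and the offload factor are assumptions.
HZ_PER_BIT = 1.0        # classic rule of thumb for software TCP processing
OFFLOAD_FACTOR = 0.25   # assume checksum/segmentation offload leaves ~25% on the CPU

def cpu_fraction(gbps, cores=4, ghz_per_core=2.5):
    """Fraction of the server's cycles consumed by TCP at a given data rate."""
    cycles_needed = gbps * 1e9 * HZ_PER_BIT * OFFLOAD_FACTOR
    cycles_available = cores * ghz_per_core * 1e9
    return cycles_needed / cycles_available

for rate in (1, 2, 4):  # Gb/s of storage traffic carried over TCP
    print(f"{rate} Gb/s of TCP -> ~{cpu_fraction(rate):.0%} of a 4-core, 2.5-GHz server")

Even at 4 Gb/s the sketch lands around a tenth of one commodity server's CPU, which is roughly the point: cycles are cheap.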
So, what would FCoE buy a SAN admin? It allows the use of 10-Gbps Ethernet links, boosting available SAN bandwidth, but very few servers generate more traffic than a 4-Gbps FC link can handle. And of course, important servers that generate that kind of traffic should have two FC HBAs and a multipath driver for reliability. That boosts their available bandwidth to 8 Gbps, and even fewer servers will fill that pipe. Faster storage-to-switch and inter-switch links could be more attractive, but QLogic already has 10-Gbps FC ISLs.
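To put hypothetical numbers behind that claim, here's a quick sketch; the IOPS rates and block sizes are invented workloads for illustration, not measured servers.

# Why a single 4-Gbps FC link is hard to saturate. The workloads below are
# hypothetical figures chosen for illustration.
LINK_GBPS = 4  # one 4-Gbps FC link; dual HBAs with multipathing double it to 8

def throughput_gbps(iops, block_kb):
    """Sustained throughput for a given I/O rate and block size."""
    return iops * block_kb * 1024 * 8 / 1e9

workloads = {
    "busy OLTP database, 8 KB blocks": (20_000, 8),
    "heavy backup stream, 64 KB blocks": (5_000, 64),
}

for name, (iops, kb) in workloads.items():
    gbps = throughput_gbps(iops, kb)
    print(f"{name}: ~{gbps:.1f} Gb/s ({gbps / LINK_GBPS:.0%} of one 4-Gbps link)")

Under those assumptions, even a hard-working server leaves a 4-Gbps pipe with room to spare, never mind a dual-pathed 8 Gbps.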
One of the reasons most large enterprise shops haven't adopted iSCSI is the political squabbling between the storage group, which owns the FC SAN, and the networking group, which owns the Ethernet infrastructure on which iSCSI runs. The storage group doesn't want the network group managing switches on the SAN.