Discussing all things virtualization and storage in the data center.

FCoE: Do Most People Need It?

I've been at SNW all week trying to find out why an end user would want to implement FCoE.  Every time I write about it I get a number of vendor responses saying that I'm wrong and that it's coming and that FCoE will rule the world.  Everyone will adopt it and I might as well fall in line or I'll be left behind.

To sum up what I've learned over this week of talking to end users: the big shops will look at it, but the other 90% of them like things the way they are, and there is no compelling reason to switch to FCoE.  Sounds a lot like what happened to iSCSI to me.  Speaking with the vendors, they seem to think that everyone is going to want this new protocol stack in their data center and will just rip out everything they have to get it.  So I think there might be a tad bit of a gap between vendor thinking and end user needs.  We've never seen that before, have we?  ;)

So here are some reasons that users are saying that they won't be going FCoE anytime soon:

1.  "We don't have the money to replace our cabling infrastructure."

       When it comes down to it, if you want to implement FCoE over a 10Gb infrastructure, you have to use OM3 class cables.  Most users I know aren't running OM3, and that's not an upgrade that can really be justified to management at this time.  They don't want to suggest replacing everything when their current infrastructure is meeting the majority of demand.  Cables and the labor to put in a proper infrastructure aren't cheap.

2.  "Am I only solving a 'cable sprawl' problem?"

      Most of the vendors and analysts are saying that a CNA/FCoE implementation will cut down the number of cables in the data center.  But of the shops I've spoken with, only the largest actually have cable sprawl issues.  So if the vendors are pushing this as a problem and solution, are they just inventing a problem for the rest of us so management will tell us we have one?  Again, the end users aren't saying this is a problem.  After all, won't we have to add more cables to the infrastructure during an FCoE implementation before we can remove the old ones?  Do we really want to go tugging on our old cables after implementation?  Isn't there a risk that someone pulls a cable and disconnects something by accident?

3.  "What about internal server bandwidth?"

     Removing my NICs and HBAs from individual PCI slots is going to decrease the overall IO bandwidth available to my applications.  There is no denying it, so don't try.  In today's storage and network implementations we traditionally have a couple of HBAs for storage traffic and 3 - 4 NICs for IP-based stuff.  The argument for the CNA is to replace all of this with a couple of adapters and merge everything onto those cards.  So I'm going from 6 slots of bandwidth to only 2 slots of bandwidth, with those 2 CNA slots running at 10Gb.  A lot of users don't trust this at all.  After all, one reason for having all those separate adapters is to guarantee performance and to know that one process can't run away and take all my resources.

     Then there is the question of protocol convergence and priorities.  FCoE is going to use the per-priority PAUSE feature of Ethernet to prioritize packets through the network.  Fibre Channel will get the highest priority (0 and 1) and the other protocols will go from there.  A lot of users just don't trust this.  That is a lot of stuff happening on a single card, and until it's fully baked I don't see users changing their minds.
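The slot-count arithmetic above can be sketched as a back-of-the-envelope comparison.  The adapter speeds here (4Gb FC HBAs, 1Gb NICs, 10Gb CNAs) are my own illustrative assumptions, not numbers from any particular server build:

```python
# Rough comparison of dedicated adapters vs. a converged CNA pair.
# Counts and speeds are assumptions for illustration only.

DEDICATED = {
    "fc_hba": {"count": 2, "gbps": 4},   # storage-only links
    "nic":    {"count": 4, "gbps": 1},   # IP-only links
}
CONVERGED = {
    "cna":    {"count": 2, "gbps": 10},  # all traffic shares these
}

def total_gbps(adapters):
    """Sum raw link bandwidth across all adapters."""
    return sum(a["count"] * a["gbps"] for a in adapters.values())

dedicated_total = total_gbps(DEDICATED)   # 12 Gb/s aggregate
converged_total = total_gbps(CONVERGED)   # 20 Gb/s aggregate

# Raw aggregate bandwidth can actually go up with CNAs, but the
# isolation goes away: with dedicated HBAs, storage traffic has
# 8 Gb/s physically walled off from the NICs.  On shared CNAs,
# a runaway IP flow can eat into storage headroom unless the
# priority mechanism actually enforces the split.
storage_guaranteed_dedicated = 2 * 4      # 8 Gb/s, physically isolated
storage_guaranteed_converged = 0          # nothing guaranteed by hardware alone

print(dedicated_total, converged_total)
```

Under these assumptions the converged pair has more raw bandwidth on paper; the users' objection is about the second half of the sketch, the loss of guaranteed, physically isolated capacity per protocol.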

So I think that is enough for now.  Hopefully we'll get some good discussion out of these items that the users are bringing up.  What are other people hearing out there from end users?  Does it really make sense to change to a new protocol just to appease some vendors?
