Tag Archives: Ethernet

Why Fibre-Channel has to improve

Many of you have used and managed Fibre-Channel based storage networks over the years. It comes as no surprise that a network protocol primarily developed to handle extremely time-sensitive operations is built with extreme demands regarding hardware and software quality and with clear guidelines on how these communications should proceed. It is because of this that Fibre-Channel has become the dominant storage protocol in datacenters.

ipSpace.net: FCoE between data centers? Forget it!

Cisco guru and long-time networking expert Ivan Pepelnjak hits the nail on the head with the post below. It is one more addition to my series on why you should stay away from FCoE. FCoE is only good for ONE thing: staying confined within the Cisco UCS platform, where Cisco seems to obtain some benefit.

Today I got a question about the options for long-distance replication between enterprise arrays over an FCoE link. If there is ONE thing I would credit the HDS arrays with, it is that they are virtually bulletproof with regard to data replication; 6x6x6 replication scenarios across 3 data-centres in cascaded and multi-target topologies are not uncommon. (Yes, read that again and absorb the massive scalability of such environments.)

If, however, you then cripple such environments with a Greek trolley from 1000 BC (i.e. FCoE) for getting traffic back and forth, you’re very much out of luck.

Read the piece below from Ivan and you’ll see why.

ipSpace.net: FCoE between data centers? Forget it!: Was anyone trying to sell you the “wonderful” idea of running FCoE between Data Centers instead of FC-over-DWDM or FCIP? Sounds great … un…

He also refers to a Cisco whitepaper (a must-read if you REALLY need to deploy FCoE in your company) which outlines the technical restrictions from a protocol-architecture point of view.

The most important points are that the limitation is there and that Cisco has no plans to solve it. Although I referred to Cisco in this article, remember that all the other vendors, like Brocade and Juniper, have the same problem. It’s an Ethernet PFC restriction inherent to the PAUSE methodology.

So, taking all this into consideration, you have a couple of options.

  • Accept that business continuity will be less than zero unless the hurricane strikes with a 50-metre diameter. (Small chance. :-))
  • Use Fibre-Channel over dark-fibre or DWDM/CWDM infrastructure.
  • Use FCIP to overcome distances beyond 100 km.

So ends another chapter in the FCoE saga, which stacks up one argument after another for why NOT to adopt it, and which is gaining more and more support across the industry from very respected and experienced networking people.

Why convergence still doesn’t work and how you put your business at risk

I browsed through some of the great Tech Field Day videos and came across the discussion “What is an Ethernet Fabric?”, which covered Brocade’s version of a flat layer-2 Ethernet network based on their proprietary “ether-fabric” protocol. At a certain point the discussion led to the usual “Storage vs. Network” debate, and it still seems there is a lot of mistrust between the two camps. (As rightfully there should be. :-))

For the video of the “EtherFabric” discussion you can have a look >>here<<

Convergence between storage and networking has been wishful thinking ever since parallel SCSI entered its third phase, in which the command set was separated from the physical infrastructure and serialised over a network protocol called Fibre-Channel.

The biggest problem is not the technical side of the convergence. Numerous options have already been provided which allow one protocol to be transmitted over another. The SCSI protocol can be transported over Fibre-Channel (FCP) and over TCP/IP (iSCSI), and even the less advanced ATA protocol can be transferred directly over Ethernet (AoE).

One thing that is always forgotten is the intent for which these different networks were created. Ethernet was developed in the 1970s by Robert Metcalfe at Xerox (yes, the same company that also invented the GUI as we know it today) to let two computers “talk” to each other and exchange information. Along that path the DARPA-developed TCP/IP protocol was bolted on top of it to provide some reliability, and a broader spectrum of services, including routing, became possible. Still, the intention has always been to have two computer systems exchange information over a serialised signal.

The storage side of the story is that it has always been developed to talk to peripheral devices, and these days the dominant two protocols are SCSI and Ficon (SBCCS over Fibre-Channel). So let’s take SCSI. The acronym already tells you its intent: Small Computer Systems Interface. It was designed for a parallel bus, 8 bits wide, had a 6-metre distance limitation and could shove data back and forth at 5MB/s. By the nature of the interface it was a half-duplex protocol, so a fair chunk of time was spent on arbitration, selection, attention and other phases. At some point (parallel) SCSI just ran into a brick wall w.r.t. speed, flexibility, performance and distance. So the industry came up with the idea of serialising the SCSI dataflow. To do this, the protocol standards had to be unlinked from the physical requirements SCSI had always had. This was achieved with SCSI-3. In itself that was nothing new; however, from that moment on it was possible to bolt SCSI onto a serialised protocol. The only protocols available at the time were Ethernet, token ring, FDDI and some other niche ones. These were all considered inferior and unfit for the purpose of transporting a channel protocol like SCSI. A reliable, high-speed interface was needed, and so Fibre-Channel was born. Some folks at IBM were working on this new serial transport protocol, which had all the characteristics anyone would want in a datacentre: high speed (1Gbit/s; remember, Ethernet at that time was stuck at 10Mb/s and token ring at 16Mb/s), both optical and copper interfaces, long distance, reliability (i.e. no frame drop) and great flexibility towards other protocols. This meant Fibre-Channel was able to carry other protocols, both channel and network, including IP, HIPPI, IPI, SCSI and ATM.
The FC4 layer was made so flexible that almost any other protocol could easily be mapped onto it and gain the same functionality and characteristics that made FC the rock-solid solution for storage.

So instead of using FC for IP transport in the datacentre, some very influential vendors went the other way around and started to bolt FC on top of Ethernet, which resulted in the FCoE standard. So we now have a three-decade-old protocol (SCSI) bolted on top of a two-decade-old protocol (FC) bolted on top of a four-decade-old protocol (Ethernet).

All this increases the complexity of datacentre design, operations and troubleshooting time when something goes wrong. You can argue that costs will be reduced because you only need single CNAs and switchports instead of a combination of HBAs and NICs, but think about what happens when you lose that single link. It means you lose both storage and network at the same time. It also means that manageability is reduced to zero and you will need to be physically at the system in order to resuscitate it. (Don’t argue that you should have a separate management interface and network, because that totally negates the argument of any financial saving.)

From a topology perspective, and in the famous “Visio” drawings, the design may seem simpler; however, once you start drawing the logical connections in addition to the configurable steps possible on a converged platform, you will notice a significant increase in connectivity.

I’m a support engineer with one of the major storage vendors and I see on a day-to-day basis the enormous amount of information that comes out of a Fibre-Channel fabric, whether related to configuration errors, design issues causing congestion and over-subscription, bugs, network errors on FCIP links or problems with the physical infrastructure. View this vertically, where applications, operating systems, volume managers, file-systems, drivers and everything down to the individual array spindle can influence the behaviour of an entire storage network, and you’ll see why you do not want to duplicate that by introducing Ethernet networks in the same path as the storage traffic.

I’m also extremely surprised that during the RFE/RFP phase for a new converged infrastructure almost no emphasis is placed on troubleshooting capabilities and knowledge. Companies hardly ask themselves whether they have enough expertise to manage and troubleshoot such infrastructures. Storage networks have been around for over 15 years now, and I still get a huge number of questions that touch on the most basic knowledge of these networks. Some call themselves SAN engineers even though they’ve dealt with this kind of equipment for less than 6 months, and the only thing that is “engineered” is the day-to-day operation of provisioning LUNs and zones. As soon as a zone commit doesn’t work, for whatever reason, many of them are absolutely clueless and immediately open support cases.

Now extrapolate this to include Ethernet networks and converged infrastructures, with numerous teams each managing their piece of the pie in a different manner, and you will, sooner or later, come to the conclusion that convergence might look great on paper, but an enormous amount of effort goes into a multitude of things spanning many technologies, groups, operational procedures and others I haven’t even touched on. (Security is one of them. Who determines which security policies will be applied to which part of the infrastructure? How will this work on shared and converged networks?)

Does this mean I’m against convergence? No, I think it’s the way to go, as was virtualisation of storage and operating systems. The problem is that convergence is still in its infancy, and many companies, which often have a CAPEX-driven purchase policy, are blind to the operational issues and risks. Many things need to be fleshed out before this becomes really “production ready” and before the employees who keep your business data on a knife’s edge are knowledgeable and confident they master it to the full extent.

My advice for now:

1. Keep networks and storage isolated. This spreads risk, isolates problems and improves recoverability in case of disasters.
2. Familiarise yourself with these new technologies. Obtain knowledge through training and provide your employees with a lab where they can experiment. Books and webinars have never been a good replacement for instructor-led training.
3. Grow towards an organisational model where operations are standardised and each team follows the same principles.
4. Do NOT expect your suppliers to adopt or know these operational procedures. Vendors have thousands of customers, and a hospital requires far different methods of operation than an oil company. You are responsible for your infrastructure and nobody else. The support organisation of your supplier deals with technical problems; it cannot fix your work methods.
5. Keep in touch with where the market is going. What looks set to become mainstream might be obsolete next week. Don’t put all your eggs in one basket.

Once more: I’m geek enough to adopt new technologies, but some should be avoided. FCoE, at this stage, is one of them.

Hope this helps a bit in making your decisions.

Comments are welcome.

Erwin van Londen

Brocade just got Bigger and Better

A couple of months ago Brocade invited me to come to San Jose for the launch of their “next-gen/new/great/what-have-ya” big box. Unfortunately I had something else on the agenda (yes, my family still comes first) and I had to decline. (They didn’t want to shift the launch of the product because I couldn’t make it. Duhhhh.)

So what is new? Well, it’s not really a surprise that at some point they had to come out with a director-class piece of iron to extend the VDX portfolio towards the high-end. I’m not going to bore you with feeds and speeds and other spec-sheet material, since you can download that from their website yourself.

What is interesting is that the VDX 8770 looks, smells and feels like a Fibre-Channel DCX 8510-8 box. I can’t yet prove it physically, but many restrictions on the L2 side seem to bear a fair resemblance to the Fibre-Channel specs. As I mentioned in one of my previous posts, flat fabrics are not new to Brocade. They have been building them since the beginning of the Fibre-Channel era, so they have quite some experience with scalable flat networks and distribution models. One of the biggest benefits is that you can have multiple distributed locations and provide the same distributed network without having to worry about broadcast domains, Ethernet segments, spanning-tree configurations and other nasty legacy Ethernet problems.

Now I’m not tempted to go deep into the Ethernet and IP side of the fence. People like Ivan Pepelnjak and Greg Ferro are far better at this. (Check here and here.)

With the launch of the VDX 8770, Brocade once again proves that when they set out to get stuff done, they really come out with a bang. The specifications far outreach any competing product currently on the market. Again they run on the bleeding edge of what standards bodies like the IEEE, IETF and INCITS have published lately, and the fact that Brocade has contributed in that space makes them frontrunners once again.

So what are the drawbacks? As with all new products, you can expect some issues. If I recall correctly, a high-end car manufacturer recently had to recall an entire model world-wide to fix something in the brake system, so this is not new or isolated to the IT space. With the introduction of the VDX 8770 a fair chunk of new functionality has also gone into the software. It’s funny to see that something we’ve taken for granted in the FC space, like layer-1 trunking, is new in the networking space.

Nevertheless, NOS 3.0 is likely to see some updates and patch releases in the near future. Although I don’t deny significant QA has gone into this release, it’s a fact that new equipment with new ASICs and new functionality always brings some sort of headache.

Interoperability is certified with the MLX series as well as the majority of the somewhat newer Fibre-Channel kit. Still, bear in mind the required code levels, since this is always a problem on support calls. 🙂

I can’t wait to get my hands on one of these systems and am eager to find out more. If I do, I’ll let you know and write some more here.

Till next time.


DISCLAIMER : Brocade had no influence in my view depicted above.

SoE, SCSI over Ethernet.

It may come as no surprise that I’m not a fan of FCoE. Although I have nothing against the underlying idea of converged networking, I do feel that the method of encapsulating multiple protocols in yet another frame is overkill; it adds complexity, requires additional skills, training and operating methods, and introduces risk, so as far as I’m concerned it shouldn’t be needed. The main reason FCoE was invented is the ability to carry traffic from Fibre-Channel environments through gateways (called FCFs) to an Ethernet-connected Converged Network Adapter in order to save on some cabling. Yeah, yeah, I know many say you’ll save a lot more, but I’m not convinced.
After staring at some ads from numerous vendors I still wonder why they never came up with the ability to map the SCSI protocol directly onto Ethernet in the same way they do with IP. After all, with the introduction of 10G Ethernet all issues of reliability appear to have gone (have they??), so it shouldn’t be such a problem to address this directly. Reliability was the main reason Fibre-Channel was invented in the first place. From a development perspective it should take a comparable amount of effort to transport SCSI directly on Ethernet as on Fibre-Channel. From an interface perspective it shouldn’t be a problem either: I think storage vendors would be just as happy to put in an Ethernet port in addition to FC, and they wouldn’t need any difficult FCoE or iSCSI mechanisms.

Since all, or at least a lot of, development effort these days seems to have shifted to Ethernet, why still invest in Fibre-Channel? Ethernet still has a 7-layer OSI stack, but you should be able to use just three: the physical, datalink and network layers. This should be enough to shove frames back and forth in a flat Ethernet network (or “Ethernet Fabric”, as Brocade calls it). For other protocols like TCP/IP this is no problem, since they already use the same stack but just travel a bit higher up. This would then allow you to have a routable iSCSI environment (over IP) as well as a native SCSI protocol running on the same network. The biggest problem then is security. If SCSI runs on a flat Ethernet network, there is no way (yet) to prevent SCSI packets from arriving at all ports in that particular network segment. This would be the same as having no zoning active and disabling all LUN masking on the arrays. The only way to circumvent this is to invent some sort of “Ethernet firewall” mechanism. (I’m not aware of a product or vendor that provides this, but then I’ve never heard of one either.) It’s pretty easy to spoof a MAC address, so that’s no good as a security precaution.
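To make the idea concrete, here is a minimal sketch of what such a direct SCSI-on-Ethernet mapping could look like, in the same spirit as ATA over Ethernet (which uses EtherType 0x88A2). Everything here is hypothetical: the EtherType 0x88FF is made up for illustration, and the tiny length-prefixed layout stands in for the sequencing, acknowledgement and security fields a real protocol would need.

```python
import struct

SOE_ETHERTYPE = 0x88FF  # hypothetical, NOT an IEEE-assigned value


def build_soe_frame(dst_mac: bytes, src_mac: bytes,
                    scsi_cdb: bytes, payload: bytes = b"") -> bytes:
    """Pack a SCSI CDB straight into an Ethernet II frame (no IP, no TCP)."""
    if len(dst_mac) != 6 or len(src_mac) != 6:
        raise ValueError("MAC addresses must be 6 bytes")
    # Standard 14-byte Ethernet II header: dst MAC + src MAC + EtherType.
    header = dst_mac + src_mac + struct.pack("!H", SOE_ETHERTYPE)
    # Illustrative body: a 1-byte CDB length so the receiver can split
    # the command from any data; a real SoE would define far more.
    body = struct.pack("!B", len(scsi_cdb)) + scsi_cdb + payload
    return header + body


# Example: a 6-byte SCSI READ(6) CDB addressed to a target's MAC
# (locally administered example addresses).
frame = build_soe_frame(
    dst_mac=bytes.fromhex("02aabbccddee"),
    src_mac=bytes.fromhex("02aabbccddef"),
    scsi_cdb=bytes([0x08, 0x00, 0x00, 0x00, 0x01, 0x00]),  # READ(6), 1 block
)
```

Note that nothing in such a frame authenticates the sender, which is exactly the zoning/LUN-masking gap described above.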

As usual, this should then also have all the other security features like authentication and authorisation. Fibre-Channel already provides authentication based on DH-CHAP, which is specified in the FC-SP standard. Although DH-CHAP exists in the Ethernet world, it is strictly tied to higher layers like TCP. It would be good to see this functionality at the lower layers as well.

I’m not an expert on Ethernet, so I would welcome comments providing more insight into the options and possibilities.

Food for thought.


Will FCoE bring you more headaches?

Yes it will!

Bit of a blunt statement but here’s why.

When you look at the presentations all the connectivity vendors (Brocade, Cisco, Emulex, etc.) will give you, they pitch FCoE as the best thing since sliced bread. Reductions in cost, cooling, cabling and complexity will solve all of today’s problems! But is this really true?

Let’s start with costs. Are the savings really as big as they promise? These days a server’s 1G Ethernet port sits on the motherboard and is more or less a freebie. Expect the additional cost of 10GbE to be added to a server’s COG, but as usual it will decline over time. Most servers come with multiple of these ports. On average a CNA is twice as expensive as 2 GbE ports + 2 HBAs, so that’s not a reason to jump to FCoE. Each vendor has different price lists, so that’s something you need to figure out yourself. The CAPEX is the easy part.
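To make the adapter arithmetic explicit, here is a tiny sketch. Every price in it is a placeholder, not a real list price; plug in your own vendor’s numbers.

```python
# Hypothetical per-server adapter cost comparison. All prices are
# placeholders -- substitute your own vendor's list prices.
def converged_is_cheaper(price_cna: float,
                         price_ge_port: float,
                         price_hba: float) -> bool:
    classic = 2 * price_ge_port + 2 * price_hba  # redundant NICs + HBAs
    converged = 2 * price_cna                    # redundant CNAs
    return converged < classic


# With a single CNA costing twice the whole 2xGE + 2xHBA combination,
# as suggested above, convergence loses on adapter CAPEX alone:
print(converged_is_cheaper(price_cna=3200, price_ge_port=50, price_hba=750))  # False
```

The function makes it easy to see at which CNA price point the comparison would actually flip in favour of convergence.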

An FCoE-capable switch (CEE or FCF) is significantly more expensive than an Ethernet switch plus an FC switch. Be aware that these are data center switches, and the current port count on an FCoE switch is not sufficient to deploy large-scale infrastructures.

Then there is the so-called power and cooling benefit. (?!?!?) I searched my butt off to find the power requirements of HBAs and CNAs, but no vendor publishes these. I can’t imagine an FC HBA chip eats more than 5 watts; a CNA will probably use more, given that it runs at a higher clock speed, and for redundancy reasons you need two of them anyway. So in general I think these will equate to the same power requirements, or an Ethernet+HBA combination is even more efficient than CNAs. Now let’s compare a Brocade 5000 (32-port FC switch) with a Brocade 8000 FCoE switch from a BTU and power-rating perspective. I used their own specs according to their data sheets, so if I made a mistake, don’t blame me.

A Brocade 5000 uses a maximum of 56 watts and has a BTU rating of 239 at 80% efficiency. An 8000 FCoE switch uses 206 watts when idle and 306 watts when in use; its heat dissipation is 1044.11 BTU per hour. I struggled to find any benefit here. You can argue that you also need an Ethernet switch, but even if that has the same ratings as a 5000, you still save a hell of a lot of power and cooling with separate switches. I haven’t checked the Cisco, Emulex and QLogic equipment, but I assume I’m not far off there either.
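Those BTU figures can be cross-checked from the wattage alone, since one watt of continuous draw dissipates roughly 3.412 BTU per hour. A quick sketch using the data-sheet numbers quoted above:

```python
# Cross-check of the data-sheet figures: 1 watt of continuous draw
# dissipates roughly 3.412 BTU per hour.
WATT_TO_BTU_HR = 3.412

brocade_5000_max_watts = 56    # from the Brocade 5000 data sheet
brocade_8000_busy_watts = 306  # from the Brocade 8000 data sheet

# At the stated 80% power-supply efficiency the 5000 pulls
# 56 / 0.8 = 70 W at the wall.
btu_5000 = (brocade_5000_max_watts / 0.8) * WATT_TO_BTU_HR
btu_8000 = brocade_8000_busy_watts * WATT_TO_BTU_HR

print(round(btu_5000))  # 239, matching the quoted rating
print(round(btu_8000))  # 1044, matching the quoted 1044.11 BTU/hr
```

So the two data sheets are internally consistent; the 8000 simply dissipates over four times the heat of a 5000.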

Now, hang on, all vendors say there is a “huge benefit” in FCoE-based infrastructures. Yes, there is: you can reduce your cabling plant. But even there is a snag: you need very high-quality cables, so an OM1 or OM2 cabling plant will not do. As a minimum you need OM3, but OM4 is preferred. Do you have this already? If so, good, you need less cabling; if not, buy a completely new plant.
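As a rough guide, the nominal 10GBASE-SR reach per multimode fibre grade can be captured in a small lookup. These are the commonly quoted IEEE 802.3 figures; verify the exact supported distances against your optics’ data sheets.

```python
# Nominal 10GBASE-SR reach per multimode fibre grade, in metres
# (commonly quoted IEEE 802.3 figures; check your optics' data sheet).
REACH_10G_M = {"OM1": 33, "OM2": 82, "OM3": 300, "OM4": 400}


def plant_supports_10g(fibre_grade: str, link_length_m: float) -> bool:
    """True if a link of the given length can run 10GBASE-SR on this fibre."""
    return link_length_m <= REACH_10G_M.get(fibre_grade, 0)


# A 150 m data-centre run is fine on OM3 but hopeless on an old OM1 plant.
print(plant_supports_10g("OM3", 150), plant_supports_10g("OM1", 150))  # True False
```

Run your existing link lengths through a check like this before assuming the “less cabling” benefit applies to your plant.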

Then there is complexity, also an FCoE sales pitch: “Everything is much easier and simpler to configure if you go with FCoE.” Is it??? Where is the reduction in complexity when the only benefit is that you can get rid of cabling? Once a cabling plant is in place you only need to administer the changes, and there is some extremely good and free software to do that. Even if you consider this a huge benefit, what do you get in return? A famous Dutch football player once said “Elk voordeel heb z’n nadeel” (that’s Dutch with an Amsterdam-dialect spelling :-)), which more or less means that every benefit has its disadvantage, i.e. there is a snag with each benefit.

The snag here is that you get all the nice features like CEE, DCBX, LLDP, ETS, PFC, FIP, FPMA and a lot more new terminology introduced into your storage and network environment. (Say what???) This more or less means that each of these abbreviations needs to be learned by your storage administrators as well as your network administrators, which means additional training requirements (and associated costs). This is not a replacement for your current training and knowledge; it comes on top of it.
Also, these settings are not a one-time setup that can be configured centrally on a switch; they need to be configured and managed per interface.

In my previous article I also mentioned the complete organisational overhaul you need between the storage and networking departments. From a technology standpoint these two “cultures” have different mindsets. Storage people need to know exactly what is going to hit their arrays, from the application perspective down to operating systems, firmware, drivers, etc. Network people don’t care. They have a horizontal view: they transport IP packets from A to B irrespective of the content of the packet. If the pipe from A to B is not big enough, they create a bigger pipe, and off we go. In the storage world it doesn’t work like this, as described before.

Then there is the support side of the fence. Let’s assume you’ve adopted FCoE in your environment. Do you have everything in place to solve a problem when it occurs? (Mind the term “when”, not “if”.) Do you know exactly what it takes to troubleshoot a problem? Do you know how to collect logs the correct way? Have you ever seen a Fibre-Channel trace captured by an analyzer? If so, were you able to make sense of it, actually pinpoint an issue if there was one and, more importantly, work out how to solve it? Did you ever look at fabric/switch/port statistics on a switch to verify whether something is wrong? For SNIA I wrote a tutorial (over here) in which I describe the overall issues support organisations face when a customer calls in for support, and also what to do about them. The thing is that network and storage environments are very complex. By combining them and adding all the three- and four-letter acronyms mentioned above, the complexity increases five-fold, if not more. It therefore takes much, much longer to pinpoint an issue and advise on how to solve it.

I work in one of those support centers of a particular vendor, and I see FC problems every day. Very often they are due to administrator errors, but far more often they are caused by a problem with software or hardware. These can be very obvious, like a cable problem, but in most cases the issue is not so clear, and it takes a lot of skill, knowledge, technical information AND TIME to sort it out. Adding complexity just means it takes more time to collect and analyze the information and advise on resolution paths. I’m not saying it becomes undoable; it just takes more time. Are you prepared, and are you willing, to give your vendor this time to sort out issues?

Now, you probably think I must hold a major grudge against FCoE. On the contrary: I think FCoE is a great technology, but it’s been created for technology’s sake and not to help you as a customer and administrator really solve a problem. The entire storage industry is stacking protocols upon protocols to circumvent a very hard issue they screwed up a long time ago. (Huhhhh, why’s that?)

Be reminded that today’s storage infrastructure still runs on a three-decade-old protocol called SCSI (or SBCCS for z/OS, which is even older). Nothing wrong with that, but it implies that the shortcomings of this protocol need to be circumvented. SCSI originally ran on a parallel bus which was 8 bits wide and hit performance limitations pretty quickly. So they created “wide SCSI”, which ran on a 16-bit bus. By increasing the clock frequencies they pumped up the speed, but the problem of distance limitations became more imminent, and so Fibre-Channel was invented. By disassociating the SCSI command set from the physical layer, the T10 committee came up with SCSI-3, which allowed the SCSI protocol to be transported over a serialised interface like FC, with a multitude of benefits in speed, distance and connectivity. The same thing happened with ESCON in the mainframe world. Both the ESCON command set (SBCCS, now known as Ficon) and SCSI (on FC known as FCP) are now able to run on the FC-4 layer. Since Ethernet back then was extremely lossy, it was no option for a strictly lossless channel protocol with low latency requirements. Now that they have fixed up Ethernet a bit to allow for lossless transport over a relatively fast interface, they map the entire stack into a mini-jumbo frame: the FCP SCSI command and data sit in an FC-encapsulated frame, which in turn sits in an Ethernet frame. (I still can’t find the reduction in complexity; if you can, please let me know.)
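A back-of-the-envelope sketch shows why this stacking forces mini-jumbo frames in the first place. The Ethernet and FC header sizes below are the standard values; treat the trailer/padding figure as approximate.

```python
# Approximate size of an FCoE frame carrying a maximum-size FC frame.
ETH_HEADER = 14        # destination MAC + source MAC + EtherType
FCOE_HEADER = 14       # FCoE version/reserved bytes + encapsulated SOF
FC_HEADER = 24         # standard Fibre-Channel frame header
FC_MAX_PAYLOAD = 2112  # largest Fibre-Channel data field
FC_CRC = 4
FCOE_TRAILER = 4       # encapsulated EOF + padding (approximate)
ETH_FCS = 4

fcoe_frame = (ETH_HEADER + FCOE_HEADER + FC_HEADER +
              FC_MAX_PAYLOAD + FC_CRC + FCOE_TRAILER + ETH_FCS)

print(fcoe_frame)          # 2176 bytes, give or take the padding
print(fcoe_frame > 1500)   # True: a standard Ethernet MTU cannot carry it
```

This is why FCoE switches and CNAs must support “baby jumbo” frames of roughly 2.2KB end to end; a single standard-MTU hop in the path breaks it.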

What should have been done, instead of introducing a fixer-upper like FCoE, is that the industry should have come up with an entirely new concept of managing, transporting and storing data. This should have been based on today’s requirements, which include security (like authentication and authorization), retention, (de-)duplication, and the removal of locality awareness. Your data should reside in a container which is a unique entity at all levels, from the application to the storage and every mechanism in between. This container should be treated per the policy requirements encapsulated in it, with those policies based on the content residing there. This then allows a multitude of properties to be applied to the container, as described above, and allows for far more effective transport.

Now this may sound like trying to boil the ocean, but try to think 10 years ahead. What will be beyond FCoE? Are we creating FCoEoXYZ? Five years ago I wrote a little piece called “The Future of Storage” which more or less introduced this concept. Since then nothing has happened in the industry to really solve the data-growth issue. Instead the industry is stacking patch upon patch to circumvent current limitations (if any), or trying to generate a new revenue stream with something like the introduction of FCoE.

Again, I don’t hold anything against FCoE from a technology perspective, and I respect and admire what Silvano Gai and the others at T11 have accomplished in little over three years, but I think it’s a major step in the wrong direction. It had the wrong starting point, and it tries to answer a question nobody asked.

For all the above reasons I still do not advise adopting FCoE, and I urge you to push your vendors and their engineering teams to come up with something that will really help you run your business, rather than patching up “issues” you might not even have.

Constructive comments are welcome.

Kind regards,
Erwin van Londen

Why FCoE will die a silent death

I’ve said it before: storage is not simple. There are numerous things you have to take into account when designing and managing a storage network. The collaboration between applications, IO stacks and storage networks has to be very stable in order to get something useful out of it, both in stability and in performance. If something goes wrong, it’s not just annoying; it might be disastrous for companies and people.

Now, I’ve been involved in numerous positions in the storage business, from storage administrator to SAN architect and from pre-sales to customer support, and I know what administrators and users need to know in order to get things working and keep them that way. The complexity that faces administrators increases every year, as does the workload. A decade ago I used to manage just a little over a terabyte of data, and that was pretty impressive in those days. Today some admins have to manage a petabyte of data (yes, a thousand-fold more). Going from a 32GB disk drive to a 1TB disk drive might look like a simplification of their lives, but nothing is further from the truth. The impact when something goes wrong is immense. The complexity of applications, host/storage-based virtualisation, etc. has all added to the skills required to operate these environments.

So what does this have to do with FCoE? Think of it like this: you have two very complex environments (TCP/IP networking and Fibre-Channel storage) which by definition have no clue what the other is about. Now try to merge the two so you can transport packets through the same cable. How do we do that? We rip away the lower levels of the OSI and FC layer stacks, replace them with a new 10GbE CEE interface, create a new wrapper with new frame headers, addressing and protocol definitions on those layers, and away we go.

Now this might look very simple, but believe me, it was the same with Fibre-Channel 10 years ago. Look how the protocol evolved, not only in speeds and feeds but also tremendously in functionality. Examples are VSANs, Virtual Fabrics and Fibre-Channel Routing, to name a few. Next to that, the density of FC fabrics has increased, as has the functionality of storage arrays. I already wrote in a previous article that networking people in general are not interested in application behaviour. They don’t care about IO profiles, response times or some packet loss, since TCP/IP will solve that anyway. They just transport packets through a pipe, and if the pipe isn’t big enough they replace it with a bigger pipe or re-route some of the flow to another pipe. That is what they have done for years, and they are extremely good at it. Storage people, on the other hand, need to know exactly what is hitting their arrays and disks. They have a much more vertical approach, because each application behaves differently towards storage. If you mix a large sequential load with a very random one hitting the same array ports and spindles, you know you are in a bad position.
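That last point can be illustrated with a deliberately crude toy model: every random IO interleaved into a sequential stream pays a full seek and then forces the stream to seek back to where it was. The latency figures are illustrative assumptions, not measurements.

```python
# Toy model of mixing random IO into a sequential disk stream.
# Latency numbers are illustrative only.
SEQ_IO_MS = 0.1     # next sequential block: almost no head movement
RANDOM_IO_MS = 8.0  # average seek + rotational latency for a random IO


def total_time_ms(sequential_ios: int, random_ios: int) -> float:
    # Each interleaved random IO pays its own seek AND forces the
    # sequential stream to seek back afterwards, so it costs two seeks.
    return sequential_ios * SEQ_IO_MS + random_ios * 2 * RANDOM_IO_MS


pure_sequential = total_time_ms(1000, 0)  # 1000 sequential IOs alone
mixed = total_time_ms(1000, 50)           # same stream with 5% random IO
print(round(pure_sequential), round(mixed))  # 100 900
```

Even in this simplistic model, interleaving just 5% random IO makes the run nine times slower, which is exactly why storage people obsess over what workloads share a spindle.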

So here is where politics will collide. Who will manage the FCoE network? Will it be the networking people? (Hey, it’s Ethernet, right? So it belongs to us!) Normally I would have no problem with that, but they have to prove that they know how Fibre-Channel behaves and what a Ficon SBCCS code set looks like, as well as an FCP SCSI CDB. (I see some question marks coming already.)
FCoE doesn’t work on your day-to-day Ethernet or Fibre-Channel switch. You have to have specialised equipment like CEE and FCF switches to get things going. Most of them are not backwards compatible, so they act more as bridging devices between a CEE and an FC network. This in turn adds significantly to the cost you were trying to save by knocking off a couple of HBAs and network cards.

FCoE looks great, but the added complexity, in addition to an entire mind-shift in networking and storage management plus the need for extremely well-trained personnel, will make this technology sit in a closet for at least 5 years. There it will mature over time, so that true storage and networking convergence might become possible as a real business value-add. At the time of this writing the standard is just a year old and will need some fixing up.

Businesses are looking for ways to save cost, reduce risk and simplify environments. FCoE currently delivers none of these.