QLogic 10000 Series Adapters

QLogic has introduced a new product that combines their already reliable Fibre Channel host bus adapters with solid-state storage for caching.  Think Fusion-IO cards with a Fibre Channel HBA attached.  (Yes, I know that's an oversimplification.)


The new QLogic cards come in two flavors: a 200GB SSD option and a 400GB SSD option, both of which are 8Gb Fibre Channel.  I've been told that 8Gb was used to launch this concept because it was already proven and solid, whereas 16Gb Fibre Channel is much newer.  I'm sure these cards will be a hit, and 16Gb versions with even larger capacities are in the works.

The adapters work much like you would expect: split writes are created, one to the storage device and one to the SSD housed on the daughter card.  Read operations whose data is still in the SSD cache then don't have to be sent to the SAN.  I've been told that these adapters can also be set up as targets so that they can share their cache between adapters, and in the future they may be able to mirror their cache, which would be beneficial for virtualized environments.
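The split-write and cached-read behavior described above amounts to a write-through cache in front of the SAN. Here is a minimal sketch of that idea in Python; all class and method names are hypothetical illustrations, not QLogic's actual firmware logic:

```python
# Write-through read-cache sketch, loosely illustrating the split-write
# behavior described above. Dicts stand in for the SSD daughter card and
# the back-end SAN; this is an illustration, not QLogic's implementation.

class WriteThroughCache:
    def __init__(self):
        self.ssd_cache = {}   # stands in for the SSD on the daughter card
        self.san = {}         # stands in for the back-end SAN storage

    def write(self, block, data):
        # Split write: one copy goes to the SAN, one to the SSD cache,
        # so a card failure never loses data.
        self.san[block] = data
        self.ssd_cache[block] = data

    def read(self, block):
        # Reads that hit the SSD cache never touch the SAN.
        if block in self.ssd_cache:
            return self.ssd_cache[block]
        # Cache miss: fetch from the SAN and populate the cache for next time.
        data = self.san.get(block)
        if data is not None:
            self.ssd_cache[block] = data
        return data

cache = WriteThroughCache()
cache.write(0, b"hello")
print(cache.read(0))  # served from the SSD cache, no SAN round trip
```

A real adapter would also need an eviction policy (the SSD is far smaller than the SAN), but the write path and the read-hit path follow this shape.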

Traditionally, caching has been done on the SAN, which still requires the controllers to do some work to fetch the data.  By using the HBA to do the caching, you eliminate that work from the controllers, leaving more headroom on the SAN for other things.  Also, the cache on the SAN might be shared by 100 servers.  The data stored in that cache may not be useful to some of the servers in your environment, so they get no performance benefit from it.  Caching on the HBA guarantees that a specific server gets the cache benefits.

If caching isn't done on the SAN and you need extremely high IOPS, Fusion-IO boards have been used in the past with great success.  Unfortunately, this is like adding direct-attached storage, so it can only benefit one machine at a time unless you're using additional replication software.

These QLogic cards give you the benefits of both direct-attached storage and traditional SAN caching at the same time, which is clearly a nice advantage.

Cons

Even though QLogic is advertising over 300,000 IOPS with these cards, they do have a few downsides right now, probably because they are so new.

They require a rack-mount server to be installed in; if you have blades, these cards won't work for you.  In addition, these cards require side-by-side PCI Express slots for the HBA and the daughter card, which is attached by a ribbon cable.  This will limit the number of systems that are good candidates to try these out.  Lastly, even though these cards are dual-port, they become a single point of failure because both ports are on a single card.  You could get two of them, but then you're looking at needing four available PCI Express slots, and I'm guessing that will be pretty unlikely for most people.  The good news is that since these cards are just used for caching, the data is still available on the SAN, so you haven't lost anything because of a card failure.

Thoughts

I think this is a great idea for the future, but it probably isn't going to be mainstream for a while, until some of these limitations can be overcome.  Look for Fusion-IO to do something similar and add Fibre Channel functionality to their cards as well.

If this is something that interests you, check out QLogic's site and request a demo: www.qlogic.com


One Response to QLogic 10000 Series Adapters

  1. Just to throw a few things out there:

    They went with 8G because their 16G cards are not very good. 300k IOPS is at the peak range of any 8GFC adapter regardless of the manufacturer, and that's going to be reads at 512k block sizes. Look at real-world workloads in the 4-16k range and those IOPS numbers drop significantly. The bigger issue with a two-card solution is board placement: PCIe version and lane support aren't universal across the motherboard. You're going to see a mix of x8/x16 slots, and of course their 8GFC adapters are all PCIe 2.0, so what about the SSD card? Ultimately, solid-state solutions of this nature are only as good as the software stack behind them and the real benefits that software provides. Fusion-IO isn't successful only because they were first to market; it's because the stack behind their hardware works with a host of solutions and provides benefit and value. It remains to be seen whether the QLogic stuff does that, or whether anyone will use it. They have been pushing this solution for a while now, but I've yet to run into anyone who has actually deployed it or is using it.

    It's an interesting concept, but I'm not sure how much market demand there is for this. It's expensive, untested, and unproven for the most part. There are significant competitors in the SSD adapter space, and frankly the perceived performance gains might not be worth the price as the law of diminishing returns sets in. I know the other card manufacturers looked at doing this a few years ago and opted out because the upside was so limited. I'd hazard a guess that this entire product line disappears in 12 months due to lack of sales.
