vSphere Designs For Blades: IBM BladeCenter H and iSCSI


VMware employee Hany Michael has started an incredibly useful series of design posts and diagrams titled VMware vSphere On Blade Servers. His first post, VMware vSphere on IBM BladeCenter H, delivers a multi-layer PDF that provides template designs for 4 common vSphere deployments with the HS22 and HS22V blades.


I’m only highlighting the iSCSI design in this post, but be sure to get a copy of Hany’s full PDF for diagrams of the FC and 10GE designs as well.


Here’s an image of the BladeCenter H iSCSI design taken from the Configuration 2 layer. Click the image for a larger view.


[Image: BladeCenter H iSCSI design, Configuration 2 layer]


A few reasons I think this diagram is so useful:





  • It illustrates a combination of hardware and vSphere design to provide complete redundancy.

  • Although the physical network connections are shown by the red, yellow, and green clouds, the actual redundant cabling is impossible to illustrate without making this diagram too busy to understand. For example, each switch module has 4 or more external ports, which creates numerous cabling possibilities to 2 or more physical switches.

  • It shows a vSwitch design that uses both a vSphere Standard Switch (vSS) and a vSphere Distributed Switch (vDS). Note that vSw0 and vSw2 are standard switches.

  • Multiple ESX clusters are in this design, spanning 2 chassis. I’ll quote Hany’s explanation of the detail behind the multi-cluster design:


“You will see two types of clusters:



  • Management Cluster: it is typically a two node cluster running the management and infrastructure services. For example, if you want to virtualize the vCenter Server, the VM should be running on this cluster rather than the actual production clusters. Same thing holds true for other vCenter products like: AppSpeed, CapacityIQ, SRM and so forth. There are two reasons for doing that: the first, we don’t want to run into the problem where vCenter Server is not accessible (there are some examples published in the community but my favorites are Jason Boche’s Catch22s!). The second reason, we don’t want to either affect our workloads’ performance with our management virtual appliances or vice versa.


  • Production Clusters: You can see here two production clusters (Cluster A and Cluster B). The take away from that is the following:



    • You don’t have to stick with that number of hosts per cluster; it depends on what you want to achieve, and also on some configuration maximums that may or may not limit you.

    • The nodes have to be spanned across the two chassis as numbered and illustrated in (Config 1). There are two reasons for that: Firstly, you don’t want your whole cluster to fail in the unlucky event that a whole chassis fails. Secondly, you have to keep in mind that VMware HA selects the first 5 hosts in the cluster and promotes them as "Primary" nodes; if they all fail, your HA cluster fails.”
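That last point is worth checking explicitly in any placement plan. As a rough sketch (host and chassis names here are illustrative assumptions, not taken from Hany's PDF), you can verify that alternating cluster nodes across the two chassis means a whole-chassis failure never takes out every host, and never takes out every HA primary:

```python
# Hypothetical host-to-chassis placement for one production cluster.
# Hosts are listed in the order they were added to the cluster, which
# is the order VMware HA (pre-5.0) uses to pick its 5 primary nodes.
placement = {
    "esx-a1": "chassis-1", "esx-a2": "chassis-2",
    "esx-a3": "chassis-1", "esx-a4": "chassis-2",
}

def survivors(placement, failed_chassis):
    """Hosts left in the cluster if one whole chassis goes down."""
    return [h for h, c in placement.items() if c != failed_chassis]

def primaries(placement):
    """First 5 hosts added to the cluster become HA primary nodes."""
    return list(placement)[:5]

# With hosts alternated across chassis, losing either chassis still
# leaves surviving hosts, including at least one HA primary.
for chassis in ("chassis-1", "chassis-2"):
    alive = survivors(placement, chassis)
    assert alive, "whole cluster lost with " + chassis
    assert set(alive) & set(primaries(placement)), "all HA primaries lost"
```

If instead all four hosts sat in one chassis, the first assertion would fail for that chassis, which is exactly the failure mode Hany's design avoids.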




Adding a BladeCenter to your data center introduces new complexity in the hardware by itself. Once you’ve set up your blades and chassis correctly, Hany has now made it easier to build a vSphere environment on IBM BladeCenter H. Be sure to follow the links above and read the many more design points in the posts on Hypervizor.com.


I look forward to more posts from Hany about designing vSphere for the other blade server manufacturers as well.

