
VMware and Customer Confidential
VMware vSphere™ Plan and Design Services
Architecture Design
for
Customer

Prepared by

Jane Q. Consultant, Sr. Consultant
VMware Professional Services
[email protected]
Revision History
Date              Rev   Author                                        Comments                                     Reviewers
08 December 2010  v2.2  Pat Carri                                     Pubs Edits/Formatting                        Jeff Friedman
07 December 2010  v2.1  Andrew Hald                                   Introduced vCAF, reorganized document flow   John Arrasjid, Michael Mannarino, Wade Holmes, Kaushik Banerjee
04 November 2010  v2.0  Jeff Friedman                                 vSphere 4.1                                  Andrew Hald
16 June 2009      v1.0  Mark Ewert, Kingsley Turner, Ken Polakowski                                                Pang Chen

DELETE THE FOLLOWING HIGHLIGHTED TEXT AFTER YOU READ IT
This is a representative sample deliverable for a VMware vSphere Plan and Design engagement,
for use when building, rebuilding, or expanding a specific VMware vSphere design. Your actual
deliverable for your customer will vary depending on the engagement scope, situation,
environment, and requirements. Update this document for your specific customer.


© 2010 VMware, Inc. All rights reserved. This product is protected by U.S. and international
copyright and intellectual property laws. This product is covered by one or more patents listed at
http://www.vmware.com/download/patents.html.
VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other
jurisdictions. All other marks and names mentioned herein may be trademarks of their respective
companies.



VMware, Inc.
3401 Hillview Ave
Palo Alto, CA 94304
www.vmware.com

Design Subject Matter Experts
The people listed in the following table provided key input into this design.
Name Email Address Role/Comments




Contents
1. Purpose and Overview
   1.1 Executive Summary
   1.2 Business Background
   1.3 The VMware Consulting and Architecture Framework
2. Conceptual Design Overview
   2.1 Customer Requirements
   2.2 Design Assumptions
   2.3 Design Constraints
   2.4 Use Cases
3. vSphere Datacenter Design
   3.1 vSphere Datacenter Logical Design
   3.2 vSphere Clusters
   3.3 Microsoft Cluster Service in an HA/DRS Environment
   3.4 VMware FT
4. VMware ESX/ESXi Host Design
   4.1 Compute Layer Logical Design
   4.2 Host Platform
   4.3 Host Physical Design Specifications
5. vSphere Network Architecture
   5.1 Network Layer Logical Design
   5.2 Network vSwitch Design
   5.3 Network Physical Design Specifications
   5.4 Network I/O Control
6. vSphere Shared Storage Architecture
   6.1 Storage Layer Logical Design
   6.2 Shared Storage Platform
   6.3 Shared Storage Design
   6.4 Shared Storage Physical Design Specifications
   6.5 Storage I/O Control
7. VMware vCenter Server System Design
   7.1 Management Layer Logical Design
   7.2 vCenter Server Platform
   7.3 vCenter Server Physical Design Specifications
   7.4 vCenter Server and Update Manager Databases
   7.5 Licenses
8. vSphere Infrastructure Security
   8.1 Overview
   8.2 Host Security
   8.3 vCenter and Virtual Machine Security
   8.4 vSphere Port Requirements
   8.5 Lockdown Mode and Troubleshooting Services
9. vSphere Infrastructure Monitoring
   9.1 Overview
   9.2 Server, Network, and SAN Infrastructure Monitoring
   9.3 vSphere Monitoring
   9.4 Virtual Machine Monitoring
10. vSphere Infrastructure Patch/Version Management
   10.1 Overview
   10.2 vCenter Update Manager
   10.3 vCenter Server and vSphere Client Updates
11. Backup/Restore Considerations
   11.1 Hosts
   11.2 Virtual Machines
12. Design Assumptions
   12.1 Hardware
   12.2 External Dependencies
13. Reference Documents
   13.1 Supplemental White Papers and Presentations
   13.2 Supplemental KB Articles
   13.3 Supplemental KB Articles
Appendix A – ESX/ESXi Host Estimation
Appendix B – ESX/ESXi Host PCI Configuration
Appendix C – Hardware BIOS Settings
Appendix D – Network Specifications
Appendix E – Storage Volume Specifications
Appendix F – LUN Sizing Recommendations
Appendix G – Security Configuration
Appendix H – Port Requirements
Appendix I – Monitoring Configuration
Appendix J – Naming Conventions
Appendix K – Design Log


List of Figures
Figure 1. Datacenter Logical Design
Figure 2. Network Switch Design
Figure 3. SAN Diagram

List of Tables
Table 1. Infrastructure Design Qualities
Table 2. Infrastructure Design Quality Ratings
Table 3. Customer Requirements
Table 4. Design Assumptions
Table 5. Design Constraints
Table 6. Continuous Availability or High Availability
Table 7. Total Number of Hosts and Clusters Required
Table 8. VMware HA Cluster Configuration
Table 9. Option 1 Name or Option 2 Name
Table 10. VMware ESX/ESXi Specifications
Table 11. VMware ESX/ESXi Host Hardware Physical Design Specifications
Table 12. Option 1 Name or Option 2 Name
Table 13. Proposed Virtual Switches Per Host
Table 14. vDS Configuration Settings
Table 15. vDS Security Settings
Table 16. vSwitches by Physical/Virtual NIC, Port and Function
Table 17. Virtual Switch Port Groups and VLANs
Table 18. Network I/O Control Resource Pool Settings
Table 19. Option 1 Name or Option 2 Name
Table 20. Shared Storage Logical Design Specifications
Table 21. Shared Storage Physical Design Specifications
Table 22. Storage I/O Enabled
Table 23. Disk Shares and Limits
Table 24. Option 1 Name or Option 2 Name
Table 25. vCenter Server Logical Design Specifications
Table 26. vCenter Server System Hardware Physical Design Specifications
Table 27. vCenter Server and Update Manager Databases Design
Table 28. vCenter Server and Update Manager Database Names
Table 29. SQL Server Database Accounts
Table 30. ODBC System DSN
Table 31. Lockdown Mode Configurations
Table 32. Estimated Update Manager Storage Requirements
Table 33. vCenter Update Manager Specifications
Table 34. Sources of Technical Assumptions for this Design
Table 35. VMware Infrastructure External Dependencies
Table 36. CPU Resource Requirements
Table 37. RAM Resource Requirements
Table 38. Proposed ESX/ESXi Host CPU Logical Design Specifications
Table 39. Proposed ESX/ESXi Host RAM Logical Design Specifications
Table 40. VMware vSphere Consolidation Ratios
Table 41. ESX/ESXi Host PCIe Slot Assignments
Table 42. ESX/ESXi Hostnames and IP Addresses
Table 43. VMFS Volumes
Table 44. NFS Volumes
Table 45. vSphere Roles and Permissions
Table 46. vCenter Virtual Machine and Template Inventory Folders to be used to Secure VMs
Table 47. ESX/ESXi Port Requirements
Table 48. vCenter Server Port Requirements
Table 49. vCenter Converter Standalone Port Requirements
Table 50. vCenter Update Manager Port Requirements
Table 51. SNMP Receiver Configuration
Table 52. vCenter SMTP Settings
Table 53. Physical to Virtual Windows Performance Monitor (Perfmon) Counters
Table 54. Modifications to Default Alarm Trigger Types
Table 55. Design Log


1. Purpose and Overview
1.1 Executive Summary
This VMware vSphere™ architecture design was developed to support a virtualization project to
consolidate 1,000 existing physical servers. The required infrastructure being defined here will be
used not only for the first attempt at virtualization, but also as a foundation for follow-on projects
to completely virtualize the enterprise and to prepare it for the journey to Cloud Computing.
Virtualization is being adopted to slash power and cooling costs, reduce the need for expensive
datacenter expansion, increase operational efficiency, and capitalize on the higher availability and
increased flexibility that comes with running virtual workloads. The goal is for IT to be well-
positioned to respond rapidly to ever-changing business needs.
This document details the recommended vSphere foundation architecture to implement based on
a combination of VMware best practices and specific business requirements and goals. The
document provides both logical and physical design considerations encompassing all VMware
vSphere-related infrastructure components, including requirements and specifications for virtual
machines and hosts, networking and storage, and management. After this initial, foundation
architecture is successfully implemented, the architecture can be rolled out to other locations to
support a virtualization-first initiative, meaning that all future x86 workloads will be provisioned on
virtual machines by default.
1.2 Business Background
Company and project background:
 Multinational manufacturing corporation with large retail sales and finance divisions
 Vision for the future of IT is to use virtualization as a key enabling technology
 This first "foundation infrastructure" is to be located at a primary U.S. datacenter in Burlington,
Massachusetts
 Initial consolidation project targets 1,000 mission-critical x86 servers

1.3 The VMware Consulting and Architecture Framework
The VMware Consulting and Architecture Framework (vCAF) is a set of tools for delivering all
VMware consulting engagements in a standardized way. The framework guides the design
process and creates the architecture design. vCAF is executed in the following phases:
 Discovery
o Understand the customer’s business requirements and objectives for the project.
o Capture the business and technical requirements, assumptions, and constraints for the
project.
o Perform or secure a current state analysis of customer’s existing VMware vSphere
environment.
 Development
o Conduct design workshops and interviews with the following subject matter experts:
application, business continuity (BC) and DR, environment, storage, networking, security,
server administration, and operations. The goal of these workshops is to transform the
business and technical requirements into a logical design that is scalable and flexible.
o Discuss design options with tradeoffs and benefits. Compare the design choices with
their impact against key infrastructure qualities as defined by vCAF.
 Execution
o Build the vSphere infrastructure according to the physical design specifications.
o Execute verification procedures to confirm operational and technical success.
 Review
o Identify next steps.
o Re-calibrate enterprise goals and identify any new or modified objectives.
o Match enterprise goals to future engagement of professional services.

The infrastructure qualities shown in the following table are used to categorize requirements and
design decisions as well as assess infrastructure maturity.
Table 1. Infrastructure Design Qualities

Design Quality Description
Availability Indicates the effect of a design choice on the ability of a technology
and the related infrastructure to achieve highly available operation.
Key Metrics: % uptime
Manageability Indicates the effect of a design choice on the flexibility of an
environment and the ease of operations in its management. Sub-
qualities may include scalability and flexibility. Higher ratios are
considered better indicators.
Key Metrics:
 Servers per administrator
 Clients per IT personnel
 Time to deploy new technology
Performance Indicates the effect of a design choice on the performance of the
environment. This does not necessarily reflect the impact on other
technologies within the infrastructure.
Key Metrics:
 Response time
 Throughput
Recoverability Indicates the effect of a design choice on the ability to recover from
an unexpected incident which affects the availability of an
environment.
Key Metrics:
 RTO – Recovery Time Objective
 RPO – Recovery Point Objective
Security Indicates whether a design choice has a positive or negative impact on
overall infrastructure security. Can also indicate whether a quality affects the
ability of a business to demonstrate or achieve compliance with certain
regulatory policies.
Key Metrics:
 Unauthorized access prevention
 Data integrity and confidentiality
 Forensic capabilities in case of a compromise


All qualities are rated as shown in the following table.
Table 2. Infrastructure Design Quality Ratings
Symbol Definition
↑ Positive effect on the design quality
o No effect on the design quality or there is no comparison basis
↓ Negative effect on the design quality

This document captures the design decisions made for the solution to meet customer
requirements. In some cases, customer-specific requirements and existing infrastructure
limitations or constraints might result in a valid but sub-optimal design choice.
The primary goal of this design is to provide a service that corresponds with the business
objectives of the organization. With financial constraints factored into the decision process, the
key qualities to take into consideration are:
 Availability
 Recoverability
 Performance
 Manageability
 Security

2. Conceptual Design Overview
DELETE THE FOLLOWING HIGHLIGHTED TEXT AFTER YOU READ IT
At the top of this section, insert text describing the key motivations for this design effort. What are
customer pain points? What are the factors driving business decisions in IT?
Fill in the requirements, assumptions, and constraints below with information gathered during
your engagement.
The key customer drivers and requirements guide all design activities throughout an engagement.
Requirements, assumptions, and constraints are carefully logged so that all logical and physical
design elements can be easily traced back to their source and justification.
2.1 Customer Requirements
Requirements are the key demands on the design. Sources include both business and technical
representatives.
Table 3. Customer Requirements
ID    Requirement                                                    Source          Date Approved
r101  Tier 1 services must meet a one hour RTO.                      Fred Jones
r102  PCI-compliant services require isolation from other services.  David Johnson

2.2 Design Assumptions
Assumptions are introduced to reduce design complexity and represent design decisions that are
already factored into the environment.
Table 4. Design Assumptions
ID    Assumption                                                                                   Source          Date Approved
a101  All services belonging to the Billing Department can be considered as Tier 2 for the
      purposes of DR planning.                                                                     James Hamilton
a102  Performance is considered acceptable if the end user does not notice a difference
      between the original platform and the new design.                                            David Johnson


2.3 Design Constraints
Constraints limit the logical design decisions and physical specifications. They are decisions
made independently of this engagement that may or may not align with stated objectives.
Table 5. Design Constraints
ID    Constraint                                   Source      Date Approved
c101  IBM will provide the server hardware.        Fred Jones
c102  QLogic HBAs will be used in the ESX hosts.   Fred Jones

2.4 Use Cases
This design is targeted at the following use cases:
 Server consolidation (power and cooling savings, green computing)
 Server infrastructure resource optimization (load balancing, high availability)
 Rapid provisioning
 Server standardization
The following use cases are deferred to a future project:
 Server containment (new workloads)

3. vSphere Datacenter Design
3.1 vSphere Datacenter Logical Design
In VMware vSphere, a datacenter is the highest level logical boundary. The datacenter may be
used to delineate separate physical sites/locations or vSphere infrastructures with completely
independent purposes.
Within vSphere datacenters, VMware ESX™/ESXi hosts are typically organized into clusters.
Clusters group similar hosts into a logical unit of virtual resources, enabling such technologies as
VMware vMotion™, High Availability (HA), Distributed Resource Scheduler (DRS), and VMware
Fault Tolerance (FT).
To address customer requirements, the following design options were proposed during the design
workshops. For each design decision, the impact on each infrastructure quality is noted. The
selected design option is then explained with the appropriate justification.
DELETE THE FOLLOWING HIGHLIGHTED GUIDANCE TEXT AFTER YOU READ IT AND
REMOVE THE HIGHLIGHTING FROM THE DESIGN DECISION TEMPLATE.
The following design decision is an example. Follow the model below to communicate the design
decisions appropriate to your customer and their requirements.
3.1.1 Tier 2 Service Availability
Customer requirements have not explicitly defined the service level availability for Tier 2 services.
The following two options are available.
3.1.1.1. Option 1: Continuous Availability
Continuously available services:
 Are redundant at the application level
 Have no single points of failure
The drawbacks of continuous availability are:
 More expensive infrastructure
 Not all applications are compatible with continuous availability methods
3.1.1.2. Option 2: High Availability
Highly available services:
 Limit single points of failure
 Are less expensive to support
The drawbacks of high availability are:
 Some service downtime is possible
 Application awareness requires additional effort
Tier 2 services are not defined as mission critical, but they are still important to the customer's
daily business operations. Providing no availability protection is not an acceptable option.
Budgetary constraints must be balanced with availability requirements so that if a problem occurs,
the service is restored within one hour, as stated in the service level agreement for Tier 2
services (r007).

Table 6. Continuous Availability or High Availability
Design Quality Option 1 Option 2 Comments
Availability ↑ ↑ Both options improve availability, though Option 1
would guarantee a higher level.
Manageability ↓ o Option 1 would be harder to maintain due to increased
complexity.
Performance o o Both design options have no impact on performance
Recoverability ↑ ↑ Both options improve recoverability
Security o o Both design options have no impact on security
Legend: ↑ = positive impact on quality; ↓ = negative impact on quality; o = no impact on quality
Due to the lower cost and better manageability, the customer has selected Option 2, High Availability.
This design decision is reflected in the physical design specifications outlined below.
3.2 vSphere Clusters
As part of this logical design, vSphere clusters are created to aggregate hosts. The number of
hosts per cluster is shown in the following table.
Table 7. Total Number of Hosts and Clusters Required
Attribute Specification
Number of hosts required to support 1,000 VMs 17
Approximate number of VMs per host 58.82
Maximum vSphere cluster size if hosts support more than 40 VMs each 16 hosts
Capacity for host failures per cluster 1 host
Dedicated hosts for maintenance capacity per cluster 1 host
Number of "usable" hosts per cluster 6 hosts
Number of clusters created 3
Total usable capacity in hosts 18 hosts
Total usable capacity in VMs
(total usable hosts * true consolidation ratio)
1,084


Assumptions and Caveats
 The total usable capacity in VMs does not account for shadow instances of VMs needed to
support VMware FT. Each FT-protected VM must therefore be counted twice.
 Host failures for VMware HA are expressed in the number of allowable host failures, meaning
that the expected load should be able to run on surviving hosts. HA policies can also be
applied on a percentage spare capacity basis.
 Dedicated hosts for maintenance assumes that a host is reserved strictly to offload running
VMs from other hosts that must undergo maintenance. When not being used for
maintenance, such hosts can also provide additional spare capacity to support a second host
failure, or for unusually high demands on resources. Having dedicated maintenance hosts
can be considered somewhat conservative, as spare capacity is being allocated strictly for
maintenance activities. Such spare capacity is earmarked here to ensure that there is
sufficient capacity to run VMs with minimal disruption.
 Hosts are evenly divided across the clusters.
 Clusters can be created and organized to enforce resource allocation policies such as:
o Load balancing
o Power management
o Affinity rules
o Protected workloads
o Limited number of licenses available for specific applications
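The sizing arithmetic behind Table 7 can be summarized in a short sketch. This is illustrative only: the eight-host cluster size is taken from Figure 1, and Table 7's "true consolidation ratio" is implied by its figures rather than stated.

```python
import math

# Inputs taken from Table 7 and Figure 1; treat them as assumptions.
vms = 1000
hosts_for_load = 17    # hosts required for the workload (Appendix A estimate)
cluster_size = 8       # hosts per cluster (Figure 1: hosts 1-8, 9-16, 17-24)
ha_spare = 1           # host failures tolerated per cluster
maint_spare = 1        # dedicated maintenance host per cluster

vms_per_host = vms / hosts_for_load                         # ~58.82 VMs per host
usable_per_cluster = cluster_size - ha_spare - maint_spare  # 6 usable hosts
clusters = math.ceil(hosts_for_load / usable_per_cluster)   # 3 clusters
total_hosts = clusters * cluster_size                       # 24 hosts deployed
usable_hosts = clusters * usable_per_cluster                # 18 usable hosts
# Table 7's total usable capacity (1,084 VMs) = usable_hosts * the "true"
# consolidation ratio (~60.2 VMs/host), slightly above vms_per_host.
```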
Figure 1. Datacenter Logical Design
[Figure shows Datacenter A containing three clusters: a Tier One cluster (hosts 1–8) with host and
VM affinity rules and Fault Tolerance; a Tier Two cluster (hosts 9–16) with Distributed Power
Management; and a Tier Three cluster (hosts 17–24) serving as an application cluster for limited
licenses. Supporting components include the vCenter Server VM (with VUM and Converter
plug-ins), an MS SQL Server database (vCenter DB, VUM DB), and Active Directory. Datacenter B
is empty.]


3.2.1 VMware HA
Each cluster is configured for VMware High Availability (HA) to automatically recover VMs if an
ESX/ESXi host fails, or if there is an individual VM failure. A host is declared failed if the
other hosts in the cluster cannot communicate with it. A VM is declared failed if a heartbeat inside
the guest OS can no longer be received.
VMs are tiered in relative order of priority for restarts:
 High (for example, Windows Active Directory domain controller VMs).
 Medium (default).
 Low.
 Disabled. Non-critical VMs (for example, QA and test VMs) that are not restarted, freeing
resources for higher-priority VMs.
The configuration settings for VMware HA are shown in the following table.
Table 8. VMware HA Cluster Configuration
Attribute Specification
Enable host monitoring Enable
Admission control Prevent VMs from being powered on if they violate availability
constraints
Admission control policy Cluster tolerates one host failure
Default VM restart priority High (critical VMs)
Medium (majority of VMs)
Disabled (non-critical VMs)
Host isolation response Power off VM
Enable VM monitoring Enable
VM monitoring sensitivity Medium


Setting Explanations
 Enable host monitoring. When HA is enabled, hosts in the cluster are monitored. If there is
a host failure, the virtual machines on a failed host are restarted on alternate running hosts in
the cluster.
 Admission control. Enforces availability constraints and preserves host failover capacity.
Any operation on a virtual machine that decreases the unreserved resources in the cluster
and violates availability constraints is not permitted.
 Admission control policy. Each HA cluster can support as many host failures as specified.
 Default VM restart priority. The priority level specified here is relative. VMs must be
assigned a relative restart priority level for HA. VMs are organized into four categories: high,
medium, low, and disabled. It is presumed that the majority of systems will be satisfied by the
medium setting and are therefore left at default. VMs identified as high priority, such as the
Active Directory VMs, are started before the medium priority VMs, which in turn are restarted
before the VMs configured with low priority. If insufficient cluster resources are available, it is
possible that VMs configured with low priority will not be restarted. To help prevent this
situation, non-critical systems, such as QA and test VMs, are set to disabled. If there is a host
failure, these VMs are not restarted, saving critical cluster resources for higher priority VMs.
 Host isolation response. Host isolation response determines what happens when a host in
a VMware HA cluster loses its service console/management network connection, but
continues running. A host is deemed isolated when it stops receiving heartbeats from all
other hosts in the cluster and it is unable to ping its isolation addresses. When this occurs,
the host executes its isolation response. To prevent the potential for multiple instances of
each virtual machine to be running if a host becomes isolated from the network (causing
other hosts to believe it has failed and automatically restart the host’s VMs), the VMs are
automatically powered off upon host isolation.
 Enable VM monitoring. In addition to determining if a host has failed, HA can also monitor
for virtual machine failure. When set to enabled, the VM monitoring service (using VMware
Tools) evaluates whether each virtual machine in the cluster is running by checking for
regular heartbeats from the VMware Tools process running in each guest OS. If no
heartbeats are received, HA assumes that the guest operating system has failed, and HA
reboots the VM.
 VM monitoring sensitivity. This setting determines how quickly HA concludes that a VM has
failed. Highly sensitive monitoring results in a more rapid conclusion that a failure occurred.
While unlikely, highly sensitive monitoring may falsely identify a failure when the virtual
machine is actually still working but heartbeats have not been received due to factors such as
resource constraints or network issues. Low-sensitivity monitoring allows more time before HA
deems a VM to have failed. At the medium setting, HA restarts the VM if the heartbeat between
the host and the VM is not received within a 60-second interval, and restarts a VM after no more
than the first three failures in a 24-hour period, preventing repeated restarts of VMs that need
manual intervention to recover.
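For reference, the Table 8 settings map onto the vSphere API roughly as shown below. This is a minimal pyVmomi sketch, assuming a `cluster` object (a `vim.ClusterComputeResource`) already retrieved from the inventory; it is illustrative against a reasonably modern vCenter, not part of the original deliverable.

```python
from pyVmomi import vim

# 'cluster' is a vim.ClusterComputeResource obtained elsewhere
# (for example, via a container view on the root folder).
spec = vim.cluster.ConfigSpecEx(
    dasConfig=vim.cluster.DasConfigInfo(
        enabled=True,                      # turn VMware HA on
        hostMonitoring="enabled",          # Table 8: enable host monitoring
        admissionControlEnabled=True,      # block power-ons that violate constraints
        admissionControlPolicy=vim.cluster.FailoverLevelAdmissionControlPolicy(
            failoverLevel=1),              # cluster tolerates one host failure
        vmMonitoring="vmMonitoringOnly",   # enable VM (heartbeat) monitoring
        defaultVmSettings=vim.cluster.DasVmSettings(
            restartPriority="medium",      # default restart priority
            isolationResponse="powerOff",  # host isolation response: power off VM
            vmToolsMonitoringSettings=vim.cluster.VmToolsMonitoringSettings(
                failureInterval=60,        # medium sensitivity: 60-second window
                maxFailures=3,             # at most three restarts...
                maxFailureWindow=86400)))) # ...per 24-hour period

task = cluster.ReconfigureComputeResource_Task(spec, modify=True)
```

High-priority VMs (such as the Active Directory domain controllers) would additionally receive per-VM `DasVmConfigSpec` overrides rather than relying on the default settings above.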


3.3 Microsoft Cluster Service in an HA/DRS Environment
Microsoft Cluster Service (MSCS) applies to Microsoft Cluster Service with Windows Server 2003
and Failover Clustering with Windows Server 2008.
All hosts that run MSCS virtual machines are managed by a VMware vCenter™ Server system
with VMware HA and DRS enabled. For a cluster of virtual machines on one physical host, affinity
rules are used. For a cluster of virtual machines across physical hosts, anti-affinity rules are used.
The advanced option for VMware DRS, ForceAffinePoweron, is set to 1, which enables strict
enforcement of the affinity and anti-affinity rules that are created. The automation level of all
virtual machines in an MSCS cluster is set to Partially Automated.
Note Migration of MSCS clustered virtual machines is not recommended.
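For illustration, the anti-affinity rule and the ForceAffinePoweron advanced option described above could be applied via pyVmomi as in the following sketch; `cluster`, `node_a`, and `node_b` are hypothetical handles to the cluster and the two MSCS node VMs.

```python
from pyVmomi import vim

# Cluster-across-boxes: keep the two MSCS node VMs on different hosts.
rule = vim.cluster.AntiAffinityRuleSpec(
    name="mscs-nodes-anti-affinity",   # illustrative rule name
    enabled=True,
    vm=[node_a, node_b])
# For a cluster-in-a-box, vim.cluster.AffinityRuleSpec keeps them together.

spec = vim.cluster.ConfigSpecEx(
    rulesSpec=[vim.cluster.RuleSpec(operation="add", info=rule)],
    drsConfig=vim.cluster.DrsConfigInfo(
        option=[vim.option.OptionValue(            # DRS advanced option
            key="ForceAffinePoweron", value="1")]))

task = cluster.ReconfigureComputeResource_Task(spec, modify=True)
```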
3.4 VMware FT
Each cluster also supports VMware Fault Tolerance (FT) to protect select critical VMs. The
systems to protect are:
 2 Blackberry Enterprise servers
 2 Microsoft Exchange front-end servers
 2 Reporting servers
These six systems are distributed evenly among the three clusters, resulting initially in two
FT-protected virtual machines per eight-host cluster.
All VMs to be protected by VMware FT have only one vCPU and disks provisioned as eager-zeroed
thick (not thin-provisioned). An eager-zeroed thick disk has all space allocated and zeroed out at
creation time; this increases creation time but provides optimal performance and better security.
VMware Fault Tolerance is enabled together with VMware Distributed Resource Scheduler (DRS),
which allows fault-tolerant virtual machines to benefit from better initial placement and to be
included in the cluster's load-balancing calculations.
Note Enable the Enhanced vMotion Compatibility (EVC) feature.
On-Demand Fault Tolerance is scheduled for the two reporting servers during the quarter-end
reporting period; outside that window the VMs return to standard HA protection.
FT traffic is supported with a pair of Gigabit Ethernet ports (see Section 5, vSphere Network
Architecture). Because a pair of Gigabit Ethernet ports can support on average 4 to 5 FT-
protected VMs per host, there is capacity for additional VMs to be protected by VMware FT.
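The FT prerequisites above (single vCPU, eager-zeroed thick disks) can be spot-checked programmatically. A minimal pyVmomi sketch; the function name is illustrative and `vm` is assumed to be a `vim.VirtualMachine` retrieved from the inventory.

```python
from pyVmomi import vim

def ft_prerequisites_met(vm):
    """Check the Section 3.4 FT prerequisites for one VM (illustrative)."""
    if vm.config.hardware.numCPU != 1:        # FT supports only single-vCPU VMs
        return False
    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualDisk):
            backing = dev.backing             # flat-file backing for VMFS disks
            if getattr(backing, "thinProvisioned", False):
                return False                  # must not be thin-provisioned
            if not getattr(backing, "eagerlyScrub", False):
                return False                  # must be eager-zeroed thick
    return True
```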

4. VMware ESX/ESXi Host Design
4.1 Compute Layer Logical Design
The compute layer of the architecture design encompasses the CPU, memory, and hypervisor
technology components. Logical design at this level centers on performance and security.
To address customer requirements, the following design options were proposed during the design
workshops. For each design decision, the impact on each infrastructure quality is noted. The
selected design option is then explained with the appropriate justification.
DELETE THE FOLLOWING HIGHLIGHTED GUIDANCE TEXT AFTER YOU READ IT AND
REMOVE THE HIGHLIGHTING FROM THE DESIGN DECISION TEMPLATE.
The following Design Decision is an example. Please follow the model below to communicate the
design decisions appropriate to your customer and their requirements. See Section 3.1 for an
example.
4.1.1 Design Decision 1
Description of the design decision
4.1.1.1. Option 1: Name
Advantages:
 Advantage 1
 Advantage 2
Drawbacks:
 Drawback 1
 Drawback 2
4.1.1.2. Option 2: Name
Advantages:
 Advantage 1
 Advantage 2
Drawbacks:
 Drawback 1
 Drawback 2
Further details should be included here. Also highlight any relevant requirements, assumptions
and/or constraints that will impact this decision.
Table 9. Option 1 Name or Option 2 Name
Design Quality Option 1 Option 2 Comments
Availability ↑ ↑ Both options improve availability, though Option
1 would guarantee a higher level.
Manageability ↓ o Option 1 would be harder to maintain due to
increased complexity.
Performance o o Both design options have no impact on
performance
Recoverability ↑ ↑ Both options improve recoverability
Security o o Both design options have no impact on security
Legend: ↑ = positive impact on quality; ↓ = negative impact on quality; o = no impact on quality
Which option was selected and why?
4.2 Host Platform
This section details the VMware ESX/ESXi hosts proposed for the vSphere infrastructure design.
The logical components specified are required by the vSphere architecture to meet calculated
consolidation ratios, protect against failure through component redundancy, and support all
necessary vSphere features.
Table 10. VMware ESX/ESXi Specifications
Attribute              Specification
Host type and version  ESXi 4.1 Installable
Number of CPUs         4
Cores per CPU          4
Total number of cores  16
Processor speed        2.4 GHz (2400 MHz)
Memory                 32GB
Number of NIC ports    10
Number of HBA ports    4

VMware ESXi was selected over VMware ESX because of its smaller running footprint, reduced
management complexity, and significantly fewer anticipated software patches.
The exact ESXi installable build version to be deployed will be selected closer to implementation
and will be chosen based on the available stable and supported released versions at that time.

4.3 Host Physical Design Specifications
This section details the physical design specifications of the host and attachments corresponding
to the previous section that describes the logical design specifications.
Table 11. VMware ESX/ESXi Host Hardware Physical Design Specifications
Attribute                        Specification
Vendor and model                 x64 vendor and model
Processor type                   Quad core x64-vendor CPU
Total number of cores            16
Onboard NIC vendor and model     NIC vendor and model
Onboard NIC ports x speed        2 x Gigabit Ethernet
Number of attached NICs          4 (excluding onboard)
NIC vendor and model             NIC vendor and model
Number of ports/NIC x speed      2 x Gigabit Ethernet
Total number of NIC ports        10
Storage HBA vendor and model     HBA vendor and model
Storage HBA type                 Fibre Channel
Number of HBAs                   2
Number of ports/HBA x speed      2 x 4Gb
Total number of HBA ports        4
Number and type of local drives  2 x Serial Attached SCSI (SAS)
RAID level                       RAID 1 (Mirror)
Total storage                    72GB
System monitoring                IPMI-based BMC

The configuration and assembly process for each system is standardized, with all components
installed the same on each host. Standardizing not only the model, but also the physical
configuration of the ESX/ESXi hosts, is critical to providing a manageable and supportable
infrastructure—it eliminates variability. Consistent PCI card slot location, especially for network
controllers, is essential for accurate alignment of physical to virtual I/O resources. Appendix B
contains further information on the host PCI placement.
All ESX/ESXi host hardware, including CPUs, was selected from the VMware Hardware
Compatibility List, and the CPUs were confirmed to be compatible with Fault Tolerance.

5. vSphere Network Architecture
5.1 Network Layer Logical Design
The network layer encompasses all network communications between virtual machines, vSphere
management layer, and the physical network. Key infrastructure qualities often associated with
networking include availability, security, and performance.
To address customer requirements, the following design options were proposed during the design
workshops. For each design decision, the impact on each infrastructure quality is noted. The
selected design option is then explained with the appropriate justification.
DELETE THE FOLLOWING HIGHLIGHTED GUIDANCE TEXT AFTER YOU READ IT AND
REMOVE THE HIGHLIGHTING FROM THE DESIGN DECISION TEMPLATE.
The following Design Decision is an example. Please follow the model below to communicate the
design decisions appropriate to your customer and their requirements. See Section 3.1 for an
example.
5.1.1 Design Decision 1
Description of the design decision
5.1.1.1. Option 1: Name
Advantages:
 Advantage 1
 Advantage 2
Drawbacks:
 Drawback 1
 Drawback 2
5.1.1.2. Option 2: Name
Advantages:
 Advantage 1
 Advantage 2
Drawbacks:
 Drawback 1
 Drawback 2
Further details should be included here. Also highlight any relevant requirements, assumptions
and/or constraints that will impact this decision.
Table 12. Option 1 Name or Option 2 Name
Design Quality Option 1 Option 2 Comments
Availability ↑ ↑ Both options improve availability, though Option
1 would guarantee a higher level.
Manageability ↓ o Option 1 would be harder to maintain due to
increased complexity.
Performance o o Both design options have no impact on
performance
Recoverability ↑ ↑ Both options improve recoverability
Security o o Both design options have no impact on security
Legend: ↑ = positive impact on quality; ↓ = negative impact on quality; o = no impact on quality
Which option was selected and why?
5.2 Network vSwitch Design
Following best practices, the network architecture complies with these design decisions:
 Separate networks for vSphere management, VM connectivity, vMotion traffic, and Fault
Tolerance logging (VM record/replay) traffic
 Separate network for NFS or iSCSI (storage over IP) used to store VM templates and guest
OS installation ISO files
 Network I/O control implemented to allow flexible partitioning of network traffic for virtual
machine, vMotion, FT, and IP storage traffic across the physical NIC bandwidth
 Redundant virtual Distributed Switches (vDS) with at least three active physical adapter ports
 Redundancy across different physical adapters to protect against NIC or PCI slot failure
 Redundancy at the physical switch level
Table 13. Proposed Virtual Switches Per Host
Switch (vSS or vDS)  Function                         Number of Physical NIC Ports
vSS0                 Management Console and vMotion   2
vDS1                 VM, Storage over IP, and FT      6


vSS0 is dedicated to the management network and vMotion. Service console and VMkernel
ports (used for vMotion and IP storage) do not migrate from host to host, so they can remain on
a Virtual Standard Switch (vSS).
vDS1 is allocated to virtual machine, IP storage, and Fault Tolerance network traffic. It is
configured as a Distributed Switch to take advantage of Network I/O Control, Load-Based
Teaming, and Network vMotion. To support the network demands of up to 60 VMs per host, this
vDS is configured with six active Gigabit Ethernet adapters. All physical network switch ports
connected to these adapters are configured as trunk ports with spanning tree disabled; the trunk
ports pass traffic for all VLANs used by the virtual switch. The physical NIC ports are connected
to redundant physical switches.
No traffic shaping policies are in place. Load-based teaming is configured for improved network
traffic distribution between the pNICs, and Network I/O Control is enabled.
VM network connectivity uses virtual switch port groups and 802.1Q VLAN tagging to segment
traffic into four VLANs.
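As a sketch of how the per-host standard switch side of this layout could be applied via the vSphere API: the switch name, VLAN IDs, and vmnic assignments follow Tables 13, 16, and 17, and `host` is assumed to be a `vim.HostSystem` obtained from the inventory. The distributed switch (vDS1) and its port groups are created once at the vCenter level rather than per host; that flow is omitted here for brevity.

```python
from pyVmomi import vim

net_sys = host.configManager.networkSystem  # per-host networking manager

# vSS0: two uplinks for the management console and vMotion (Tables 13/16)
net_sys.AddVirtualSwitch(
    vswitchName="vSwitch0",
    spec=vim.host.VirtualSwitch.Specification(
        numPorts=128,
        bridge=vim.host.VirtualSwitch.BondBridge(
            nicDevice=["vmnic0", "vmnic2"])))

# Tagged port groups per Table 17
for name, vlan in [("MGMT-100", 100), ("VMOTION-500", 500)]:
    net_sys.AddPortGroup(
        portgrp=vim.host.PortGroup.Specification(
            name=name,
            vlanId=vlan,
            vswitchName="vSwitch0",
            policy=vim.host.NetworkPolicy()))
```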


Figure 2. Network Switch Design
[Figure shows one ESX/ESXi host. Virtual Standard Switch0 carries the Management Console and
VMkernel vMotion port groups (VLANs 100 and 500) on an active/standby pair of uplinks (vmnic0
onboard, vmnic2 Slot 7). Virtual Distributed Switch1 carries the Virtual Machine Network,
VMkernel Storage over IP, and VMkernel Fault Tolerance port groups (VLANs 200, 300, 600, and
650) on six active uplinks (vmnic1, vmnic3, vmnic4, vmnic5, vmnic6, vmnic7), alternating between
onboard and Slot 7 ports. Uplinks connect to redundant Physical Switch1 and Physical Switch2.]


Table 14. vDS Configuration Settings
Parameter Setting
Load balancing Route based on physical NIC load
Failover detection Beacon probing
Notify switches Enabled
Rolling failover No
Failover order All active

vSwitch Configuration Setting Explanations
 Load Balancing. Route based on physical NIC load ensures that vDS dvUplink capacity is
used optimally. Load-Based Teaming (LBT) avoids the situation, seen with statically
determined teaming policies, in which some dvUplinks in a DV Port Group's team sit idle
while others are completely saturated. LBT reshuffles port bindings dynamically, based on
load and dvUplink usage, to make efficient use of the available bandwidth. LBT only moves
ports to dvUplinks configured for the corresponding DV Port Group's team, and it does not
use shares or limits in its determination when rebinding ports from one dvUplink to another.
 Failover Detection. In addition to link status, the VMkernel sends out and listens for periodic
beacon probes on all network adapters in the team. This enhances link status, which relies
exclusively on link integrity of the physical network adapter to determine when a failure
occurs. Link status enhanced by beacon probing detects failures that are due to cable
disconnects or physical switch power failures, as well as configuration errors or network
interruptions beyond the local NIC termination point.
 Notify Switches. When enabled, this option sends out a gratuitous ARP whenever a new
NIC is added to the team or when a virtual NIC begins using a different physical uplink on the
ESX/ESXi host. This option helps to lower latency issues when a failover occurs or when
virtual machines are migrated to another host using vMotion.
 Rolling Failover. Determines how a physical adapter is returned to active duty after
recovering from a failure. When set to No, the adapter is returned to active duty immediately
upon recovery. Setting it to Yes keeps the adapter inactive even after it recovers, requiring
manual intervention to return it to service.
 Failover Order. All physical adapters assigned to each vSwitch and port group are
configured as Active adapters. No adapters are configured as standby or unused.

Table 15. vDS Security Settings
Parameter Setting
Promiscuous mode Reject (default)
MAC address changes Reject
Forged transmits Reject

vDS Security Setting Explanations
 Promiscuous Mode. Setting Reject at the vSwitch level prevents virtual machine network
adapters from operating in promiscuous mode: placing a VM virtual network adapter in
promiscuous mode has no effect on which frames the adapter receives.
 MAC Address Changes. Setting Reject at the vSwitch level protects against MAC
address spoofing. If the guest OS changes the MAC address of the adapter to anything other
than what is in the .vmx configuration file, all inbound frames are dropped. If the guest OS
changes the MAC address back to match the MAC address in the .vmx configuration file,
inbound frames are sent again.
 Forged Transmits. Setting to Reject at the vSwitch level protects against MAC address
spoofing. Outbound frames with a source MAC address that is different from the one set on
the adapter are dropped.
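The Table 14 teaming settings and Table 15 security settings correspond to a dvPortgroup's default port policy. A hedged pyVmomi sketch, assuming `pg` is an existing `vim.dvs.DistributedVirtualPortgroup` on vDS1; beacon-probing failure detection is set through the teaming policy's failure criteria, omitted here for brevity.

```python
from pyVmomi import vim

policy = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
    securityPolicy=vim.dvs.VmwareDistributedVirtualSwitch.SecurityPolicy(
        allowPromiscuous=vim.BoolPolicy(value=False),  # Table 15: Reject
        macChanges=vim.BoolPolicy(value=False),        # Table 15: Reject
        forgedTransmits=vim.BoolPolicy(value=False)),  # Table 15: Reject
    uplinkTeamingPolicy=vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy(
        policy=vim.StringPolicy(value="loadbalance_loadbased"),  # NIC-load based
        notifySwitches=vim.BoolPolicy(value=True),     # gratuitous ARP on failover
        rollingOrder=vim.BoolPolicy(value=False)))     # Table 14: rolling failover "No"

spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
    configVersion=pg.config.configVersion,  # required optimistic-locking token
    defaultPortConfig=policy)
task = pg.ReconfigureDVPortgroup_Task(spec)
```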
5.3 Network Physical Design Specifications
This section expands on the logical network design in the corresponding previous section by
providing details on the physical NIC layout and physical network attributes.
Table 16. vSwitches by Physical/Virtual NIC, Port and Function
vSwitch  vmnic  NIC / Slot        Port  Function
0        0      Onboard           0     Management Console and vMotion
0        2      Quad PCIe Slot 7  1     Management Console and vMotion
1        1      Onboard           0     VM, FT, and Storage over IP traffic
1        3      Quad PCIe Slot 7  1     VM, FT, and Storage over IP traffic
1        5      Quad PCIe Slot 7  0     VM, FT, and Storage over IP traffic
1        7      Onboard           1     VM, FT, and Storage over IP traffic
1        4      Quad PCIe Slot 7  0     VM, FT, and Storage over IP traffic
1        6      Onboard           1     VM, FT, and Storage over IP traffic

Table 17. Virtual Switch Port Groups and VLANs
vSwitch Port Group Name VLAN ID
0        MGMT-100     100
0        VMOTION-500  500
1        PROD-200     200
1        DEV-300      300
1        FT-600       600
1        NFS-650      650
0        N/A          1

See Appendix D for more information on the physical network design specifications.
5.4 Network I/O Control
Virtual Distributed Switch1 (vDS1) is configured with Network I/O Control enabled. After
Network I/O Control is enabled, traffic through that virtual distributed switch is divided into the
following network resource pools: FT traffic, iSCSI traffic, vMotion traffic, management traffic,
NFS traffic, and virtual machine traffic. This design specifies that virtual machine, iSCSI, and FT
network traffic are dedicated to Virtual Distributed Switch1.
The priority of the traffic from each of these network resource pools is set by the physical adapter
shares and host limits for each network resource pool. Virtual machine traffic is set to High, the
FT resource pool to Normal, and the iSCSI traffic to Low. These shares apply only
when a physical adapter is saturated.
Note The iSCSI traffic resource pool shares do not apply to iSCSI traffic on a dependent
hardware iSCSI adapter.
Table 18. Network I/O Control Resource Pool Settings
Network Resource Pool Physical Adapter Shares Host Limit
Fault Tolerance Normal Unlimited
iSCSI Low Unlimited
Management N/A N/A
NFS N/A N/A
Virtual Machine High Unlimited
vMotion N/A N/A

Network I/O Settings Explanation
 Host Limits. Host limits are the upper limit of bandwidth that the network resource pool can
use.
 Physical Adapter Shares. Shares assigned to a network resource pool determine the total
available bandwidth guaranteed to the traffic associated with that network resource pool.
o High. Sets the shares for this resource pool to 100.
o Normal. Sets the shares for this resource pool to 50.
o Low. Sets the shares for this resource pool to 25.
o Custom. A specific number of shares, from 1 to 100, for this network resource pool.
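To make the shares concrete: under contention, bandwidth on a saturated uplink is divided in proportion to shares. A small illustrative calculation follows; the 1 Gbit/s link speed is an assumption based on the Gigabit NICs in Section 4.3.

```python
# Physical adapter shares from Table 18: High=100, Normal=50, Low=25
shares = {"Virtual Machine": 100, "Fault Tolerance": 50, "iSCSI": 25}
link_mbps = 1000  # one saturated Gigabit uplink (assumption)

total = sum(shares.values())  # 175
for pool, s in shares.items():
    # guaranteed minimum applies only while the adapter is saturated
    print(f"{pool}: {s / total * link_mbps:.0f} Mbit/s")
# Virtual Machine: 571 Mbit/s, Fault Tolerance: 286 Mbit/s, iSCSI: 143 Mbit/s
```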


6. vSphere Shared Storage Architecture
6.1 Storage Layer Logical Design
To address customer requirements, the following design options were proposed during the design
workshops. For each design decision, the impact on each infrastructure quality is noted. The
selected design option is then explained with the appropriate justification.
DELETE THE FOLLOWING HIGHLIGHTED GUIDANCE TEXT AFTER YOU READ IT AND
REMOVE THE HIGHLIGHTING FROM THE DESIGN DECISION TEMPLATE.
The following Design Decision is an example. Please follow the model below to communicate the
design decisions appropriate to your customer and their requirements. See Section 3.1 for an
example.
6.1.1 Design Decision 1
Description of the design decision
6.1.1.1. Option 1: Name
Advantages:
 Advantage 1
 Advantage 2
Drawbacks:
 Drawback 1
 Drawback 2
6.1.1.2. Option 2: Name
Advantages:
 Advantage 1
 Advantage 2
Drawbacks:
 Drawback 1
 Drawback 2
Further details should be included here. Also highlight any relevant requirements, assumptions
and/or constraints that will impact this decision.
Table 19. Option 1 Name or Option 2 Name
Design Quality Option 1 Option 2 Comments
Availability ↑ ↑ Both options improve availability, though Option
1 would guarantee a higher level.
Manageability ↓ o Option 1 would be harder to maintain due to
increased complexity.
Performance o o Both design options have no impact on
performance
Recoverability ↑ ↑ Both options improve recoverability
Security o o Both design options have no impact on security
Legend: ↑ = positive impact on quality; ↓ = negative impact on quality; o = no impact on quality
Which option was selected and why?
6.2 Shared Storage Platform
This section details the shared storage proposed for the vSphere infrastructure design.
Table 20. Shared Storage Logical Design Specifications
Attribute Specification
Storage type Fibre Channel SAN
Number of storage processors 2 (redundant)
Number of switches 2 (redundant)
Number of ports per host per switch 2
LUN size 1TB
Total LUNs 50
VMFS datastores per LUN 1
VMFS version 3.33
6.3 Shared Storage Design
The following figure illustrates the design.
Figure 3. SAN Diagram
[Figure: each ESX/ESXi host connects through four host bus adapter ports (vmhba0 through vmhba3) to redundant fibre switches A and B, which connect to SAN storage processors A and B; behind the storage processors sit the 50 x 1TB LUNs, each formatted as a single VMFS datastore (VMFS 1 through VMFS 50).]
Based on the results of the physical system assessment, it was determined that, on average, each
system has a 36GB system volume with 9GB used and a 143GB data volume with 22GB used. After
projecting for volume growth and providing a minimum of 33% free space per volume, it was
determined, for the purpose of estimating overall storage requirements, that each virtual machine
would be configured with a 12GB system volume and a 40GB data volume. Unless constrained by
specific application or workload requirements, or by special circumstances (such as being
protected by VMware Fault Tolerance), all data volumes are provisioned as thin disks, with the
system volumes deployed as thick. This strategic over-provisioning saves an estimated 8.25TB of
storage, assuming that on average 50% of the currently available storage on each data volume
remains unused. The consumption of each storage volume is monitored in production, with alarms
configured to alert if any volume approaches capacity, providing sufficient time to source and
provision additional disk.
With the intent to maintain at least 15% free capacity on each VMFS volume for VM swap files,
snapshots, logs, and thin volume growth, it was determined that 48.7TB of available storage is
required to support 1,000 virtual machines after accounting for the long-term benefit of thin
provisioning. This will be provided as 50 x 1TB LUNs (the additional 1.23TB is for growth and test
capacity). These LUNs will be zoned to all hosts and formatted as 50 VMFS datastores.
A 1TB LUN size was selected because it provides the best balance between performance and
manageability, with approximately 20 VMs and 40 virtual disks per volume. Although larger LUNs
of up to 2TB (without extents) and 64TB (with extents) are possible, this size was chosen for
several reasons. For manageability, it groups an adequately large number of virtual disks per
volume, making good use of resources while limiting storage sprawl. A smaller size maintains a
reasonable RTO and reduces the risk associated with losing a single LUN. In addition, the size
limits the number of VMs that reside on a single LUN.
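As a sanity check on these figures, the following back-of-the-envelope Python sketch (illustrative only) derives the per-datastore density from the values above.

# Back-of-the-envelope check of the datastore sizing, using this design's figures.
vm_count     = 1000     # virtual machines in scope
datastores   = 50       # 1TB LUNs, one VMFS datastore each
disks_per_vm = 2        # one 12GB system volume plus one 40GB data volume

vms_per_datastore    = vm_count / datastores               # -> 20 VMs
vdisks_per_datastore = vms_per_datastore * disks_per_vm    # -> 40 virtual disks
provisioned_gb_total = vm_count * (12 + 40)                # -> 52,000 GB before thin savings

print(f"{vms_per_datastore:.0f} VMs and {vdisks_per_datastore:.0f} virtual disks per datastore")
print(f"{provisioned_gb_total:,} GB provisioned in total before thin-provisioning savings")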
Additionally, three NFS volumes are presented to each ESX/ESXi host for the storage of virtual
machine templates and guest operating system installation CD images (ISOs), and to provide
administrators with second-tier storage for log and VM archival and for infrastructure testing.
These non-VM files are kept separate from VM files because they can often have higher I/O
characteristics.
Each ESX/ESXi host is provisioned with a vNetwork Standard Switch (vSS) backed by two physical
Gigabit Ethernet adapters dedicated to IP storage connectivity.
6.4 Shared Storage Physical Design Specifications
This section details the physical design specifications of the shared storage corresponding to the
previous section that describes the logical design specifications.
Table 21. Shared Storage Physical Design Specifications
Attribute Specification
Vendor and model Storage vendor and model.
Type Active/passive.
ESX/ESXi host multipathing policy Most Recently Used (MRU). Set because the
SAN is an active/passive array to avoid path
thrashing. With MRU, a single path to the SAN is
used until it becomes inactive, at which point it
switches to another path and continues to use
this new path until it fails; the preferred path
setting is disregarded.
Minimum/maximum speed rating of switch ports 2Gb/4Gb
See Appendix E for an inventory of VMFS and NFS volumes.
6.5 Storage I/O Control
Storage I/O Control is enabled. This allows cluster-wide storage I/O prioritization, providing the
ability to control the amount of storage I/O that is allocated to virtual machines during periods of
I/O congestion. The shares are set per virtual machine and can be adjusted for each VM based
on need.
Table 22. Storage I/O Enabled
Datastore | Path | Storage I/O Enabled
Prod_san01_02 | vmhba1:0:0:3 /dev/sda3 48f85575-5ec4c587-b856-001a6465c102 | yes
Prod_san01_07 | vmhba2:0:4:1 /dev/sdc1 48fbd8e5-c04f6d90-1edb-001cc46b7a18 | yes
Prod_san01_37 | vmhba32:0:1:1 /dev/sde1 48fe2807-7172dad8-f88b-0013725ddc92 | yes
Prod_san01_44 | vmhba32:0:0:1 /dev/sdd1 48fe2a3d-52c8d458-e60e-001cc46b7a18 | yes
Table 23. Disk Shares and Limits
VM Disk Shares Limit – IOPS
ds007 Disk 1 Low 500
kf002 Disk 1 and 2 Normal 1000
jf001 Disk 1 High 2000
rs003 Disk 1 and 2 Custom 350
Shared I/O Control Settings Explanation
 Storage I/O Enabled. Storage I/O Control is enabled per datastore. Navigate to the
datastore's Configuration tab > Properties to verify that the feature is enabled.
 Storage I/O Shares. Storage I/O shares are similar to VMware CPU and memory shares.
Shares define the hierarchy of the virtual machines for the distribution of storage I/O resources
during periods of I/O congestion. Virtual machines with higher shares have higher throughput
and lower latency.
 Limit IOPS. By default, the IOPS allowed for a virtual machine are unlimited. By allocating
storage I/O resources, you can limit the IOPS allowed to a virtual machine. If a virtual machine
has multiple disks, you must set the same IOPS limit on all of its disks.
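These settings are normally applied through the vSphere Client, but they can also be scripted. The following sketch uses the open-source pyVmomi Python SDK, which is not part of this engagement's toolset; the vCenter hostname, the credentials, and the choice of VM jf001 from Table 23 are placeholders, and production use would add certificate verification and task handling.

# Sketch only: apply the Table 23 values for VM jf001 (High shares, 2000 IOPS limit)
# to every virtual disk of that VM. Assumes pyVmomi and a reachable vCenter Server.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vc01.domain.local", user="vc01", pwd="********")  # placeholders
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "jf001")

    changes = []
    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualDisk):
            # Shares and limit live in the disk's storage I/O allocation info.
            dev.storageIOAllocation = vim.StorageResourceManager.IOAllocationInfo(
                shares=vim.SharesInfo(level=vim.SharesInfo.Level.high, shares=2000),
                limit=2000)  # IOPS limit; set the same value on all of the VM's disks
            changes.append(vim.vm.device.VirtualDeviceSpec(
                operation=vim.vm.device.VirtualDeviceSpec.Operation.edit, device=dev))
    vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=changes))  # wait on the task in practice
finally:
    Disconnect(si)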
7. VMware vCenter Server System Design
7.1 Management Layer Logical Design
The Management Layer
To address customer requirements, the following design options were proposed during the design
workshops. For each design decision the impact on each infrastructure quality is noted. The
selected design option is then explained with the appropriate justification.
DELETE THE FOLLOWING HIGHLIGHTED GUIDANCE TEXT AFTER YOU READ IT AND
REMOVE THE HIGHLIGHTING FROM THE DESIGN DECISION TEMPLATE.
The following Design Decision is an example. Please follow the model below to communicate the
design decisions appropriate to your customer and their requirements. See Section 3.1 for an
example.
7.1.1 Design Decision 1
Description of the design decision
7.1.1.1. Option 1: Name
Advantages:
 Advantage 1
 Advantage 2
Drawbacks:
 Drawback 1
 Drawback 2
7.1.1.2. Option 2: Name
Advantages:
 Advantage 1
 Advantage 2
Drawbacks:
 Drawback 1
 Drawback 2
Further details should be included here. Also highlight any relevant requirements, assumptions
and/or constraints that will impact this decision.
Table 24. Option 1 Name or Option 2 Name
Design Quality Option 1 Option 2 Comments
Availability ↑ ↑ Both options improve availability, though Option
1 would guarantee a higher level.
Manageability ↓ o Option 1 would be harder to maintain due to
increased complexity.
Performance o o Both design options have no impact on
performance
Recoverability ↑ ↑ Both options improve recoverability
Security o o Both design options have no impact on security
Legend: ↑ = positive impact on quality; ↓ = negative impact on quality; o = no impact on quality
Which option was selected and why?
7.2 vCenter Server Platform
This section details the VMware vCenter Server proposed for the vSphere infrastructure design.
Table 25. vCenter Server Logical Design Specifications
Attribute Specification
vCenter Server version 4.1
Physical or virtual system Virtual
Number of CPUs 2
Processor type VMware vCPU
Processor speed N/A
Memory 4GB
Number of NIC and ports 1/1
Number of disks and disk sizes 2: 12GB (C) and 40GB (D)
Operating system and SP level Windows Server 2003 Enterprise R2 64-bit SP2 (vCenter Server 4.1 requires a 64-bit operating system)
VMware vCenter Server, the heart of the vSphere infrastructure, is implemented on a virtual
machine, as opposed to a standalone physical server. Virtualizing vCenter Server enables it to
benefit from advanced features of vSphere, including VMware HA and vMotion.
The exact vCenter Server build to be deployed will be selected closer to implementation and will
be chosen based on the available stable and supported released versions at that time.
7.3 vCenter Server Physical Design Specifications
This section details the physical design specifications of the vCenter Server.
Table 26. vCenter Server System Hardware Physical Design Specifications
Attribute Specification
Vendor and model VMware VM virtual hardware 7
Processor type VMware vCPU
NIC vendor and model VMware Enhanced VMXNET
Number of ports/NIC x speed 1 x Gigabit Ethernet
Network Management network
Local disk RAID level N/A
7.4 vCenter Server and Update Manager Databases
This section details the specifications for the vCenter Server and Update Manager databases.
Table 27. vCenter Server and Update Manager Databases Design
Attribute Specification
Vendor and version Microsoft SQL Server 2005
Authentication method SQL Server Authentication
Recovery method Full
Database autogrowth Enabled in 1MB increments
Transaction log autogrowth In 10% increments; restricted to 2GB maximum
size
vCenter statistics level 3
Estimated database size 41.74GB
Setting Explanations
 Authentication method. Per the current database security policies, SQL Server
Authentication, and not Windows Authentication, will be used to secure the vCenter Server
databases. A SQL Server account with strong password will be created to support vCenter
Server and vCenter Update Manager access to their respective databases.
 Recovery method. The full recovery method helps to ensure that no data is lost if there is a
database failure between backups. Because this method maintains complete records of all
changes to the database in the transaction logs, it is critical that the database is backed up
regularly, which truncates (grooms) the logs. The DBA team schedules incremental nightly and
full weekend backups of the vCenter and vCenter Update Manager databases.
 Database autogrowth. The vCenter Server database can expand on demand in 1MB
increments with no restriction on its growth.
 Transaction log autogrowth. The transaction log is restricted to a maximum of 2GB to
prevent filling up the log volume. Because the vCenter Server and vCenter Update Manager
databases are backed up nightly (truncating the logs), the 2GB maximum should be more
than sufficient.
 vCenter statistics level. Level 3 gives more comprehensive vCenter statistics than the
default setting.
 Estimated database size. 41.74GB was calculated using the vCenter Advanced Settings
tool, assuming 24 hosts and 1,024 VMs at the above statistics level.
The corporate database, Microsoft SQL Server 2005, will be used, as there is currently a trained
team of DBAs supporting several physical database servers on this platform. For this initial
vSphere infrastructure, this resource is leveraged and databases for both the vCenter Server and
vCenter Update Manager are hosted on a separate, production physical database server system.
Table 28. vCenter Server and Update Manager Database Names
Attribute Specification
vCenter database name VC01DB01
Update Manager database name VUM01DB01
The following table details the account to be created and its corresponding rights to each
database. A strong password is assigned to this account and documented following established
password storage procedures.
Table 29. SQL Server Database Accounts
Account Name Database Rights
vc01 VC01DB01 dbowner
vc01 VUM01DB01 dbowner
vc01 msdb dbowner*
* dbowner rights to the msdb database can and will be revoked following vCenter installation and
creation of the vCenter Server and vCenter Update Manager databases.
Table 30. ODBC System DSN
Database | ODBC System DSN | ODBC Client | Database Server | SQL Account
vCenter Server | VC01DB01 | SQL Native Client | SQLPROD08 | vc01
Update Manager | VUM01DB01 | SQL Native Client | SQLPROD08 | vc01
DELETE THE FOLLOWING HIGHLIGHTED TEXT AFTER YOU READ IT
This sample is based on Enterprise Plus licensing. Be sure to educate your customer to the
capabilities of each feature. The actual licensing that your customer has purchased will be based
on their business requirements. You will need to update this section accordingly.
7.5 Licenses
For this initial vSphere infrastructure, the Enterprise Plus vSphere license edition is used. This
edition provides the following licensed features:
 Up to 12 physical cores per CPU and no physical memory limit per ESX/ESXi host
 Update Manager
 vStorage APIs for Data Protection
 High Availability (HA)
 Thin provisioning
 Data Recovery
 vMotion
 Hot add virtual hardware support
 Fault Tolerance (FT)
 vShield Zones
 Storage vMotion
 DRS
 DPM
 vNetwork Distributed Switches
 Host Profiles
 vStorage APIs for Multipathing
 Virtual Serial Port Concentrator
 vStorage APIs for Array Integration
 Network I/O Control
 Storage I/O Control
The vCenter Server will be configured with the issued 25-character license keys and will
automatically install the appropriate license on the ESX/ESXi hosts as they are added to
inventory.
The License Reporting Manager is used to view and generate reports on a quarterly basis for all
license keys for vSphere 4.1 products in the virtual IT infrastructure.
8. vSphere Infrastructure Security
8.1 Overview
Security is absolutely critical in this environment, and any security vulnerability or risk exposed by
the new vSphere infrastructure would have a negative impact on future adoption of virtualization
technology. To protect the business, existing security policies and procedures were considered
and leveraged. Microsoft Active Directory users and groups were used to govern access. To
reduce confusion that could undermine security, wherever possible the existing administration
teams are granted the same access to the vSphere environment as they currently have to the
physical infrastructure. For example, the physical network team is granted access to facilitate
their responsibility of the virtual network infrastructure. However, end users will continue to
access the virtual machines through the guest OS or application mechanisms and will not have
access through VMware vSphere components or the vSphere Client directly. No access is
granted that is not required to perform a specific, authorized job function.
8.2 Host Security
Chosen in part for its limited management console functionality, ESXi is configured with a strong
root password stored following the corporate password guidelines. ESXi lockdown mode is also
enabled to prevent root access to the hosts over the network, and appropriate security policies
and procedures are created and enforced to govern the systems. Because lockdown mode
prevents ESXi from being accessed over the network as root, sophisticated host-based firewall
configurations are not required.
A new team, "ESX Admins," will be created with a supporting Active Directory group. This team,
and only this team, has overall responsibility for, and access to, the ESX/ESXi hosts.
8.3 vCenter and Virtual Machine Security
Access to perform systems administration of the virtual machines is divided among the same
teams currently responsible for the physical instances of the systems, leveraging the existing
Active Directory implementation. Folders within vCenter Virtual Machine and Templates inventory
are created to simplify assignment of these permissions to the appropriate virtual machines.
Folders are created for each different system classification (such as HR, Finance, IT) and used to
contain their respective virtual machines. Permissions are applied to these folders and mapped to
the appropriate Active Directory group responsible for the management of the virtual machines
they contain.
A new Enterprise vSphere Administration team will also be created, with a supporting Active
Directory group. This team, and only this team, has overall and overarching responsibility for,
and access to, the entire vSphere infrastructure.
The existing physical network and storage administration teams are also granted access, with
their job responsibilities extended into the virtual infrastructure. Leveraging security capabilities of
vSphere, the network and storage teams are granted access only to virtual networking and
storage, respectively; that is, only what is required to perform their specific job responsibilities.
Neither team has access to virtual machines, the datacenter, clusters, or any other aspect of the
vSphere infrastructure. Similarly, the virtual machine admin teams do not have access to the
configuration of network, storage, or any other aspects of the vSphere infrastructure, only the
virtual machines they are responsible for. Only the new Enterprise vSphere Administration team
has access to the entire infrastructure.
As stated previously, no end users are granted access to the vSphere Infrastructure directly.
They continue to leverage guest OS and application interfaces for connectivity. Appendix G
contains detailed information regarding the assignment of VMware vCenter Roles to Microsoft
Active Directory Groups used to secure the infrastructure.
8.4 vSphere Port Requirements
Appendix H contains detailed information regarding which ports to open for communication
between vSphere-related services.
8.5 Lockdown Mode and Troubleshooting Services
To remain compliant with the organization's security regulations, lockdown mode is enabled. All
configuration changes to the vSphere environment are made through vCenter Server while
lockdown mode is configured. Lockdown mode restricts access to host services on the ESXi
server, but does not affect the availability of these services.
Note If the ESXi host loses access to vCenter Server while running in Total Lockdown Mode,
ESXi must be reinstalled to regain access to the host.
Table 31. Lockdown Mode Configurations
Service Lockdown Mode Total Lockdown Mode
Lockdown On On
Local Tech Support Mode Off Off
Remote Tech Support Mode Off Off
Direct Console User Interface On Off
ESXi features three types of troubleshooting services: Local Tech Support Mode (TSM), Remote
Tech Support Mode (SSH), and the Direct Console User Interface (DCUI). In accordance with the
organization's security policy, TSM and SSH are enabled only when required. The DCUI remains
enabled to conform to design specifications.
Lockdown and Troubleshooting Settings Explanation
 Local Tech Support Mode (TSM). Usable with physical access to the server console and
root privileges. TSM provides a command-line interface to diagnose and repair VMware ESXi
hosts. Tech Support Mode should only be used at the request of VMware technical support.
 Remote Tech Support Mode (SSH). Similar to TSM, but uses the secure SSH protocol for
remote access to the server. Requires root privileges and provides a command-line interface
to diagnose and repair VMware ESXi hosts. Tech Support Mode should only be used at the
request of VMware technical support.
 Direct Console User Interface (DCUI). Usable with physical access to the server and root
privileges. The DCUI is used for basic configuration of the ESXi host. When running in
lockdown mode, you can log in locally to the DCUI as the root user and disable lockdown
mode. You can then troubleshoot the issue using a direct connection with the vSphere Client
or by enabling Tech Support Mode.
9. vSphere Infrastructure Monitoring
9.1 Overview
Because the uptime and health of the entire technology infrastructure is paramount, the current
environment already has an implementation of an enterprise monitoring system. Capable of both
SNMP-based and managed system-specific monitoring, such as leveraging Windows Events and
Performance Counters, the system monitors all network, server, and storage systems, processing
any events through a sophisticated event correlation engine to determine the appropriate
response and notifications to send. The new vSphere infrastructure integrates into the monitoring
system through the appropriately configured events and alerts.
9.2 Server, Network, and SAN Infrastructure Monitoring
All of the physical systems, including the network and SAN, will continue to be monitored directly
by the enterprise monitoring system which is configured to incorporate additional infrastructure
required to support vSphere. The new physical servers purchased to run VMware ESX/ESXi are
outfitted with IPMI Baseboard Management Controllers (BMC) used by the enterprise monitoring
system to monitor system hardware status (such as processor temperature, fan speed, and the
like). In the future, vSphere Distributed Power Management (DPM) will be considered, and can
use these IPMI BMCs to automatically power ESX/ESXi Hosts on and off based on demand to
help further power and cooling savings.
9.3 vSphere Monitoring
Leveraging the event monitoring and alarm system in vSphere, vCenter Server is configured to
monitor the health and performance of all critical virtual infrastructure components, including the
ESX/ESXi hosts, the clusters, VMware HA, Fault Tolerance, virtual machine operations such as
vMotion, and the health of the vCenter Server itself. The events and conditions to be monitored
and configured to alert are detailed in Appendix I.
Upon the triggering of an alert, vCenter Server is configured to send SNMP traps to the enterprise
management system’s SNMP receiver. Although the same system is primarily responsible for
event correlation and email alerting across the enterprise, vCenter Server is also configured to
send email alerts for all triggered events to the vSphere Enterprise Administration group.
The vSphere Enterprise Administrators group is also responsible for routinely reviewing and
managing the health and system logs generated by the ESX/ESXi hosts, vCenter Server, and the
virtual machines. These logs will be groomed and archived following corporate log retention
policies and procedures. The following are monitored on a consistent basis to verify the health of
the virtual infrastructure:
 VMware ESX/ESXi host hardware status
 vCenter Server services status
 VMware HA Healthcheck and operational status
 Cluster operational status
 Storage performance statistics
 Network performance statistics
 Network I/O control status
 Storage I/O control status
 Memory utilization report
9.4 Virtual Machine Monitoring
The current enterprise management system provides monitoring of the 1,000 systems to be
virtualized and continues performing this important task after the systems are converted to
vSphere VMs. After reviewing the current monitoring configuration, it was determined that only
minor changes are necessary to facilitate monitoring the systems after they are virtualized. The
monitoring system primarily requires network connectivity to the virtual machines which is not
impacted by the conversion, as their IP addresses and host names are not changing. However,
the mechanism that monitors the performance of Windows virtual machines utilizes Windows
Performance Monitor (Perfmon) counters and must be reconfigured to use new, virtualization-
specific Windows Perfmon counters provided by VMware Tools. These counters, unlike their
counterparts for physical components, are tuned for accurate assessment of virtualized Windows
performance.
Appendix I provides detailed monitoring system configuration information, including SNMP and
SMTP settings, the list of alarms and events to leverage, and the Windows Performance Monitor
counters to use by the enterprise monitoring system to monitor virtual machines.
10. vSphere Infrastructure Patch/Version Management
10.1 Overview
Maintaining an up-to-date IT infrastructure is critical. The health, performance, and security of the
entire business depends on the health, performance, and security of its supporting technology.
Maintaining an up-to-date infrastructure can be a daunting task for IT administrators, but if not
performed dependably and routinely, the infrastructure is at risk.
The VMware vCenter Update Manager enterprise patch automation tool is implemented as part of
the new vSphere infrastructure to keep the vSphere ESX/ESXi hosts and virtual machines’
VMware Tools up-to-date. Update Manager can provision, patch, and upgrade third-party
modules such as EMC's PowerPath multipathing software. Administrators can also evaluate
patches/updates to VMware vCenter Server and the vSphere Client and update those manually
as required.
10.2 vCenter Update Manager
VMware vCenter Update Manager is an automated patch management solution that applies
patches and updates to VMware ESX/ESXi hosts, Microsoft Windows virtual machines, and
select Linux virtual machines. vCenter Update Manager can also update VMware Tools and
VMware virtual hardware in virtual machines. In addition to securing the datacenter against
vulnerabilities and reducing or eliminating downtime related to host patching, automated
ESX/ESXi host updates provide a common installed version across hosts. Although VMware Fault
Tolerance includes a versioning-control mechanism that allows the Primary and Secondary VMs
to run on FT-compatible hosts at different but compatible patch levels, maintaining common
versions among the hosts is the best practice. This is vital for the health of VMware Fault
Tolerance, which requires the same build to be installed on all hosts supporting FT-protected
VMs.
vCenter Update Manager will be installed on the same system as vCenter Server and configured
to patch/upgrade the ESX/ESXi hosts and VMware Tools installed within the virtual machines. It
will, however, not be used to automatically update virtual machine hardware. Virtual machine
hardware updates are evaluated and performed manually as needed.
For this initial deployment, Update Manager is not leveraged to update virtual machines guest
operating systems or applications, as there is already a robust mechanism in place for such
patching. Because Update Manager is not used to patch VMs, the Update Manager Server can
reside on the same system as the vCenter Server for simplicity. In the future, if Update Manager
is used to update VMs, the Update Manager Server should be run on a separate dedicated
system for performance reasons, so as not to overburden the vCenter Server.
As with vCenter Server, a dedicated database for Update Manager will be created on one of the
database servers that also houses the vCenter Server database. The database was sized using
the vCenter Update Manager Sizing Estimator from vmware.com with the following assumptions:
 No remediation of VMs
 Remediate ESXi 4.1+ hosts (no ESX 4.1+)
 1 concurrent ESXi host upgrade
 24 hosts
 Patch scan frequency for hosts: 4 per month
 Upgrade scan frequency for hosts: 1 per month
Based on these assumptions, the vCenter Update Manager database is estimated to require
870 MB of storage for the first year, with up to 2GB of patch storage and 2GB of temporary disk
space required on the Update Manager server system.
Table 32. Estimated Update Manager Storage Requirements
Estimated Update Manager
Database Size (first year)
Estimated Patch Storage
Required
Estimated Temp Storage
Required
870 MB 2GB 2GB
Table 33. vCenter Update Manager Specifications
Attribute Specification
Patch download sources Select Download ESX 4.1 patches
Unselect Download ESX 3 patches, Download Linux VM
patches, Download Windows VM patches
Shared repository D:\vCenterUpdateManager\vSpherePatches
Proxy settings None
Patch download schedule Every Sunday at 12:00AM EST
Email notification [email protected]
Update Manager baselines to
leverage
Critical and non-critical ESX/ESXi host patches
VMware Tools upgrade to match host
Virtual machine settings Select Snapshot virtual machines before remediation to
enable rollback
Select Don’t delete snapshots
ESX/ESXi host settings Host maintenance mode failure: Retry
Retry interval: 30 Minutes
Number of retries: 3
vApp settings Select Enable smart reboot after remediation
Setting Explanations
 Patch download sources. Which patches to download.
 Shared repository. The vCenter Update Manager Server was configured with a data disk to
use for storing patches at this location.
 Proxy settings. Settings for a proxy server if one is used to access the internet from the
datacenter.
 Patch download schedule. The time and frequency to download new patches.
 Email notification. Who Update Manager automatically notifies when new patches are
downloaded.
 Update Manager baselines to leverage. Update Manager baselines define a level of
patches and updates to monitor for and download.
 Virtual machine settings. Initially, Update Manager will not be used to update virtual
machines. However, this setting will be configured per best practices to prepare for the event
that when VM patching is activated, a snapshot will be taken of each virtual machine prior to
performing any remediation operations. This will enable rollback of patches that are applied
by vCenter Update Manager, if necessary. These snapshots will not be automatically deleted
by Update Manager. Members of the vSphere Administration group will delete the snapshots
after determining that the patches have been successfully applied and are functioning
correctly.
 ESX/ESXi host settings. Update Manager places a host into maintenance mode before
applying patches. Maintenance mode automatically triggers the migration of any VMs running
on the host to other hosts in the cluster to avoid VM downtime. If Update Manager and
vCenter encounter problems putting a host into maintenance mode, this setting specifies
what to do, how many times to retry, and the interval between attempts before abandoning
the patch application for that host.
 vApp settings. vApps are logical groups of VMs. This setting uses the start order of VMs as
defined with a vApp when powering on VMs. vApps often require powering on virtual
machines in a specific order due to dependencies, and this is configured within the vApps
properties.
10.3 vCenter Server and vSphere Client Updates
Administrators routinely check for and evaluate new vCenter Server and vSphere Client updates.
These are installed in a timely fashion following release and proper testing. VMware vSphere
Client updates should be installed manually whenever vCenter Server is updated, because using
conflicting versions of the vSphere Client and vCenter Server can cause unexpected results. The
vSphere Client automatically checks for, and downloads, an update if one exists when connecting
to an updated vCenter Server.
11. Backup/Restore Considerations
11.1 Hosts
Host Profiles are created and used to restore ESX/ESXi hosts.
Backing up hosts is not a necessary practice because a standard installation is relatively trivial
and takes only minutes from start to finish. Administrators maintain accurate documentation of the
ESX/ESXi networking and storage configurations to use for reference after a restore.
11.2 Virtual Machines
There currently is a robust enterprise backup system that is used to back up each physical
system. The plan is to continue using this method to back up virtual machines. Restoring virtual
machine guest operating systems, applications, and associated data follows the same method as
for physical machines.
The current production backup strategy calls for incremental backups nightly, full backups on
Sundays, and full backups at month-end and year-end. Test, development, and QA VMs are
backed up on an as-needed basis.
In the future, virtualization-specific methods for backing up and restoring VMs will be considered.
12. Design Assumptions
12.1 Hardware
Hardware deployment must meet the technical requirements for each product. If the hardware
used deviates from the recommended hardware (see Section 13, Reference Documents), it must
be re-qualified by the development team to make sure that it supports all VMware products used
in the deployment. The technical assumptions for this design are listed below.
Table 34. Sources of Technical Assumptions for this Design
Element Reference
ESX/ESXi and vCenter Server ESX/ESXi and vCenter Installation Guide
ESX/ESXi Configuration Guide
Basic System Administration Guide
vSphere 4.1 Configuration Maximums Guide
ESX/ESXi host hardware vSphere Hardware Compatibility Lists
ESX/ESXi I/O adapters vSphere Hardware Compatibility Lists
ESX/ESXi SAN compatibility Fibre Channel SAN Configuration Guide
iSCSI SAN Configuration Guide
vMotion, HA, fault tolerance vSphere 4.1 Availability Guide
12.2 External Dependencies
External dependencies address other systems or technologies that depend on, or could be
affected by, the vSphere infrastructure. External dependencies differ from assumptions in that
they clearly identify the dependent factors and their implications.
Table 35. VMware Infrastructure External Dependencies
Item Requirements
Active Directory Active Directory is required to implement and operate the vSphere
Infrastructure.
DNS DNS must be configured for connectivity between vCenter Server,
Active Directory, VMware ESX/ESXi, and virtual machines.
Network Network congestion or failure prevents vMotion from migrating
virtual machines and affects the ability of vCenter Server to
manage VMware ESX/ESXi hosts. It can also negatively impact
HA and FT.
Storage Area Network Stability and performance of the SAN affects the virtual machines.
Time synchronization Accurate timekeeping and time synchronization are critical for a
healthy vSphere infrastructure. All components, including
ESX/ESXi hosts, vCenter Server, the SAN, physical network
infrastructure, and virtual machine guest operating systems must
have accurate time keeping. This is especially critical for virtual
machines protected by FT.
Staff Properly trained IT staff is critical for the correct implementation,
operation, support, and enhancement of the vSphere
infrastructure.
Policies and procedures The policies and procedures governing the use of information
technology must be revised to properly incorporate the unique
properties and capabilities of virtualization as implemented
through this design.
13. Reference Documents
13.1 Supplemental White Papers and Presentations
 ESX and vCenter Server Installation Guide for ESX 4.1, vCenter Server 4.1
 ESXi Installable and vCenter Server Setup Guide for ESXi 4.1 Installable, vCenter Server 4.1
 ESXi Embedded and vCenter Server Setup Guide for ESXi 4.1 Embedded, vCenter Server
4.1
 vSphere Datacenter Administration Guide for ESX 4.1, ESXi 4.1, vCenter Server 4.1
http://www.vmware.com/pdf/vsphere4/r41/vsp_41_dc_admin_guide.pdf
 vSphere Resource Management Guide for ESX 4.1, ESXi 4.1, vCenter Server 4.1
http://www.vmware.com/pdf/vsphere4/r41/vsp_41_resource_mgmt.pdf
 VMware ESXi and ESX Info Center
http://www.vmware.com/products/vsphere/esxi-and-esx/upgrade.html
 VMware Enterprise Infrastructure Support Life Cycle Policy
http://www.vmware.com/support/policies/lifecycle/enterprise-infrastructure/index.html
 vSphere Management Assistant Guide for vSphere 4.1
http://www.vmware.com/support/developer/vima/vma41/doc/vma_41_guide.pdf
 VMware Data Recovery Administration Guide for Data Recovery 1.2
http://www.vmware.com/pdf/vdr_12_admin.pdf
 vSphere Compatibility Matrixes
http://www.vmware.com/pdf/vsphere4/r40/vsp_compatibility_matrix.pdf
 Configuration Maximums for VMware® vSphere 4.1
http://www.vmware.com/pdf/vsphere4/r41/vsp_41_config_max.pdf
 iSCSI SAN Configuration Guide for ESX 4.1, ESXi 4.1, vCenter Server 4.1
http://www.vmware.com/pdf/vsphere4/r41/vsp_41_iscsi_san_cfg.pdf
 Fibre Channel SAN Configuration Guide for ESX 4.1, ESXi 4.1, vCenter Server 4.1
http://www.vmware.com/pdf/vsphere4/r41/vsp_41_san_cfg.pdf
 Setup for Failover Clustering and Microsoft Cluster Service for ESX 4.1, ESXi 4.1, vCenter
Server 4.1
http://www.vmware.com/pdf/vsphere4/r41/vsp_41_mscs.pdf
 VMware Network I/O Control: Architecture, Performance and Best Practices for VMware
vSphere 4.1
http://www.vmware.com/files/pdf/techpaper/VMW_Netioc_BestPractices.pdf
 vSphere Command-Line Interface Installation and Scripting Guide for ESX 4.1, ESXi 4.1,
vCenter Server 4.1
http://www.vmware.com/pdf/vsphere4/r41/vsp4_41_vcli_inst_script.pdf
 VMware vCenter Converter Installation and Administration Guide for vCenter Converter 4.2
http://www.vmware.com/pdf/vsp_vcc_42_admin_guide.pdf
 VMware Compatibility Guides
http://www.vmware.com/resources/compatibility/search.php
 Introduction to VMware vSphere for ESX 4.0, ESXi 4.0, vCenter Server 4.0
http://www.vmware.com/pdf/vsphere4/r40/vsp_40_intro_vs.pdf
 vShield Zones Administration Guide for vShield Zones 1.0
http://www.vmware.com/pdf/vsz_10_admin.pdf
 What’s New in VMware vSphere 4: Performance Enhancements:
http://www.vmware.com/files/pdf/VMW_09Q1_WP_vSpherePerformance_P13_R1.pdf
 What’s New in VMware vSphere 4:Virtual Networking:
http://www.vmware.com/files/pdf/VMW_09Q1_WP_vSphereNetworking_P8_R1.pdf
 What Is New in VMware vSphere 4: Storage:
http://www.vmware.com/files/pdf/VMW_09Q1_WP_vSphereStorage_P10_R1.pdf
 Network Segmentation in Virtualized Environments
http://www.vmware.com/files/pdf/network_segmentation.pdf
 Protecting Mission-Critical Workloads with VMware Fault Tolerance:
http://www.vmware.com/files/pdf/resources/ft_virtualization_wp.pdf
 VMware ESX 3 802.1Q VLAN Solutions:
http://www.vmware.com/pdf/esx3_vlan_wp.pdf
 CLARiiON Integration with VMware ESX:
http://www.vmware.com/pdf/clariion_wp_eng.pdf
 Recommendations for Aligning VMFS Partitions:
http://www.vmware.com/pdf/esx3_partition_align.pdf
 Security Design of the VMware Infrastructure 3 Architecture:
http://www.vmware.com/pdf/vi3_security_architecture_wp.pdf
 Making Your Business Disaster Ready with VMware Infrastructure:
http://www.vmware.com/pdf/disaster_recovery.pdf
 Automating High Availability (HA) Services with VMware HA:
http://www.vmware.com/pdf/vmware_ha_wp.pdf
 ESX 4 Patch Management Guide:
http://www.vmware.com/pdf/vsphere4/r40/vsp_40_esxupdate.pdf
 Best Practices for Patching ESX:
http://www.vmware.com/resources/techresources/1075
 Microsoft Exchange Server 2007 Performance on VMware vSphere 4
http://www.vmware.com/files/pdf/perf_vsphere_exchange-per-scaling.pdf
 Improving Scalability for Citrix Presentation Server:
http://www.vmware.com/pdf/esx_citrix_scalability.pdf
 Managing VMware VirtualCenter Roles and Permissions:
http://www.vmware.com/resources/techresources/826
 VMware vCenter Update Manager Performance and Best Practices
http://www.vmware.com/pdf/Perf_UpdateManager40_Best-Practices.pdf
 VMware Infrastructure Architecture Overview:
http://www.vmware.com/pdf/vi_architecture_wp.pdf
 Virtualization Overview:
http://www.vmware.com/pdf/virtualization.pdf
 Network Throughput in a VMware Infrastructure:
http://www.vmware.com/pdf/esx_network_planning.pdf
 The vSphere Availability Guide:
http://www.vmware.com/pdf/vsphere4/r40/vsp_40_availability.pdf
13.2 Supplemental KB Articles
 vMotion CPU Compatibility Requirements for Intel Processors:
http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&externalId=1991
 vMotion CPU Compatibility Requirements for AMD Processors:
http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&externalId=1992
 vMotion CPU Compatibility - Migrations Prevented Due to CPU Mismatch - How to Override
Masks
http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&extern
alId=1993&sliceId=1&docTypeID=DT_KB_1_1&dialogID=23256056&stateId=0 0 2325069
 Installing ESX 4.1 and vCenter 4.1 Best Practices:
http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&externalId=1009080&sliceId=2&docTypeID=DT_KB_1_1&dialogID=23256161&stateId=0%200%2023250853
 VMware High Availability Slot Calculation:
http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&extern
alId=1010594&sliceId=1&docTypeID=DT_KB_1_1&dialogID=23256209&stateId=0%200%20
23250906
 VMware vSphere 4.1 Software Compatibility Matrix:
http://partnerweb.vmware.com/comp_guide/docs/vSphere_Comp_Matrix.pdf
 Processors and Guest Operating Systems that Support VMware Fault Tolerance:
http://kb.vmware.com/kb/1008027
Appendix A – ESX/ESXi Host Estimation
To determine the number of hosts required to consolidate the existing datacenter’s 1,100 physical
x86 servers, the performance and utilization of the existing servers was analyzed using VMware
Capacity Planner for 30 days. The analysis captured the resource utilization for each system,
including average and peak CPU and RAM utilization.
Out of the servers analyzed, 100 were disqualified from the consolidation project for the following
reasons:
 Servers are already planned for decommission and there is no need to virtualize
 Incomplete performance and utilization data; such servers were deferred for further analysis
 Servers use specialized hardware that cannot be virtualized
Note ESX 4.1 supports USB device passthrough from an ESX or ESXi host to a virtual machine.
A total of 1,000 candidates were selected for this first virtualization initiative. Over the sampling
period, the metrics in the following tables were observed.
Table 36. CPU Resource Requirements
Metric Amount
Average number of CPUs per physical system 2
Average CPU MHz 2800 MHz
Average normalized CPU per physical system 5663 MHz
Average CPU utilization per physical system 6.5% (368.01 MHz)
Average peak CPU utilization per physical system 9% (509.67 MHz)
Total CPU resources required for 1,000 VMs at peak 509,670 MHz

Table 37. RAM Resource Requirements
Metric Amount
Average amount of RAM per physical system 1024 MB
Average memory utilization 62% (634.88 MB)
Average peak memory utilization 70% (716.80 MB)
Total RAM required for 1000 VMs at peak before memory sharing 716,800 MB
Anticipated memory sharing benefit when virtualized 50%
Total RAM required for 1,000 VMs at peak with memory sharing 358,400 MB
For estimating capacity, the target host platform has the specifications proposed in the following
tables.
Table 38. Proposed ESX/ESXi Host CPU Logical Design Specifications
Attribute Specification
Number of CPUs (sockets) per host 4
Number of cores per CPU 4
MHz per CPU core 2,400
Total CPU MHz per CPU 9,600
Total CPU MHz per host 38,400
Proposed maximum host CPU utilization 80%
Available CPU MHz per host 30,720 MHz
Table 39. Proposed ESX/ESXi Host RAM Logical Design Specifications
Attribute Specification
Total RAM per host 32,768 MB (32 GB)
Proposed maximum host RAM utilization 80%
Available RAM per host 26,214 MB
Estimation Assumptions
 Hosts are sized for peak utilization levels rather than average utilization, to support all
systems running at their observed peak resource levels simultaneously.
 CPU and memory utilization for each host is capped at 80% (allowing 20% for overhead and
breathing room).
 Memory sharing: 50% (achieved by running the same guest OS, Windows Server 2003
Standard R2, across 90% of all VMs).
The following formula was used to estimate the host capacity required to support the peak CPU
utilization of the anticipated VM workloads:

# of ESX/ESXi hosts required = Total CPU required for all VMs at peak / Available CPU MHz per host

Applying this formula to the planned vSphere infrastructure:

509,670 MHz (total CPU) / 30,720 MHz (available CPU per host) = 16.59 ESX/ESXi hosts

The same formula was used to estimate the number of hosts required to support the anticipated
peak RAM utilization:

# of ESX/ESXi hosts required = Total RAM required for all VMs at peak / Available RAM per host

358,400 MB (total RAM) / 26,214 MB (available RAM per host) = 13.67 ESX/ESXi hosts
From a CPU workload perspective, 17 VMware ESX/ESXi hosts are needed, but from a memory
workload perspective, only 14 hosts are needed. The higher value is used because that is the
limiting factor.
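The following Python sketch (illustrative only) reproduces this estimate from the figures in Tables 36 through 39, so the numbers can be re-run if any of the inputs change.

import math

# Illustrative re-run of the host-count estimate from Tables 36 through 39.
total_cpu_mhz_at_peak = 509_670         # 1,000 VMs at observed peak CPU
total_ram_mb_at_peak  = 358_400         # after the anticipated 50% memory-sharing benefit
cpu_mhz_per_host      = 38_400 * 0.80   # 4 sockets x 4 cores x 2,400 MHz at the 80% cap
ram_mb_per_host       = 32_768 * 0.80   # 32 GB per host at the 80% cap

hosts_for_cpu = total_cpu_mhz_at_peak / cpu_mhz_per_host   # -> 16.59
hosts_for_ram = total_ram_mb_at_peak / ram_mb_per_host     # -> 13.67
hosts = math.ceil(max(hosts_for_cpu, hosts_for_ram))       # CPU is the limiting factor -> 17

print(f"CPU requires {hosts_for_cpu:.2f} hosts, RAM requires {hosts_for_ram:.2f} -> deploy {hosts}")
print(f"Consolidation: {1000 / hosts:.2f} VMs per host, {1000 / hosts / 16:.2f} VMs per core")
print(f"Capacity at the true ratio of {1000 / hosts_for_cpu:.2f} VMs per host: "
      f"{int(hosts * 1000 / hosts_for_cpu)} VMs")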
This provides substantial consolidation ratios as shown in the following table.
Table 40. VMware vSphere Consolidation Ratios
# of Virtualization Candidates | # of ESX/ESXi Hosts Required | Consolidation Ratio: VMs per Host | Consolidation Ratio: VMs per Core* | Max Host CPU/RAM Utilization
1,000 | 17 | 58.82 | 3.68 | 80%
* Each VM has one vCPU.
In actuality, because 1,000 VMs can be supported by 16.6 hosts, the true consolidation ratio is
60.27 VMs per host, which means that, by extrapolation, the 17-host infrastructure should be able
to support not just 1,000 VMs, but 1,024 VMs.
Appendix B – ESX/ESXi Host PCI Configuration
The selected host platform has 11 PCIe expansion slots. Card locations were staggered to
simplify system cabling, leaving five (5) available PCIe slots for future expansion.
Table 41. ESX/ESXi Host PCIe Slot Assignments
Slot Number Card
1 Dual port PCIe-4x Fibre Channel HBA
2 Empty
3 Dual port PCIe-4x Fibre Channel HBA
4 Empty
5 Dual port PCIe-4x Gigabit Ethernet adapter
6 Empty
7 Dual port PCIe-4x Gigabit Ethernet adapter
8 Empty
9 Dual port PCIe-4x Gigabit Ethernet adapter
10 Empty
11 Dual port PCIe-4x Gigabit Ethernet adapter
Appendix C – Hardware BIOS Settings
The default hardware BIOS settings on servers may not always be the best choice for optimal
performance. Review the BIOS settings with a VMware virtualized environment in mind. This
section lists some of the BIOS settings for consideration.
 The latest version of the BIOS available for the system is running.
 The BIOS is set to enable all populated sockets and all cores in each socket.
 "Turbo Mode" is enabled if the processors support it.
 Hyperthreading is enabled in the BIOS.
 Some NUMA-capable systems provide an option to disable NUMA by enabling node
interleaving. In most cases, the best performance is achieved by disabling node interleaving.
 Hardware-assisted virtualization features (VT-x, AMD-V, EPT, RVI) are enabled in the BIOS.
 The C1E halt state is disabled in the BIOS (see the note below regarding performance
versus power considerations).
 Any other power-saving modes are disabled in the BIOS (see the note below regarding
performance versus power considerations).
 Unneeded devices, such as serial and USB ports, are disabled in the BIOS.
Notes
 ESX 4.0 supports Enhanced Intel SpeedStep and Enhanced AMD PowerNow! CPU power
management technologies that can save power when a host is not fully utilized. However,
because these and other power-saving technologies can reduce performance in some
situations, consider disabling them when performance considerations outweigh power
considerations.
 Because of the large number of different server models and configurations, any list of BIOS
options is always likely to be incomplete.
 After updating the BIOS, make sure your BIOS settings are as you wanted them.
 After these changes are made, some systems may need a complete power down and restart
before the changes take effect.
Appendix D – Network Specifications
Table 42. ESX/ESXi Hostnames and IP Addresses
ESX/ESXi Host | Management Console IP/Mask | NFS Storage IP/Mask | vMotion IP/Mask | Fault Tolerance IP/Mask
esx01.domain.local | 10.1.100.101/24 | 172.20.100.101/24 | 172.24.100.101/24 | 172.28.100.101/24
esx02.domain.local | 10.1.100.102/24 | 172.20.100.102/24 | 172.24.100.102/24 | 172.28.100.102/24
esx03.domain.local | 10.1.100.103/24 | 172.20.100.103/24 | 172.24.100.103/24 | 172.28.100.103/24
esx04.domain.local | 10.1.100.104/24 | 172.20.100.104/24 | 172.24.100.104/24 | 172.28.100.104/24
esx05.domain.local | 10.1.100.105/24 | 172.20.100.105/24 | 172.24.100.105/24 | 172.28.100.105/24
esx06.domain.local | 10.1.100.106/24 | 172.20.100.106/24 | 172.24.100.106/24 | 172.28.100.106/24
esx07.domain.local | 10.1.100.107/24 | 172.20.100.107/24 | 172.24.100.107/24 | 172.28.100.107/24
~ | ~ | ~ | ~ | ~
esx21.domain.local | 10.1.100.121/24 | 172.20.100.121/24 | 172.24.100.121/24 | 172.28.100.121/24
esx22.domain.local | 10.1.100.122/24 | 172.20.100.122/24 | 172.24.100.122/24 | 172.28.100.122/24
esx23.domain.local | 10.1.100.123/24 | 172.20.100.123/24 | 172.24.100.123/24 | 172.28.100.123/24
esx24.domain.local | 10.1.100.124/24 | 172.20.100.124/24 | 172.24.100.124/24 | 172.28.100.124/24
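Because the addressing follows a strict pattern, the table can be regenerated, or extended for additional hosts, with a few lines of Python. The sketch below is illustrative only and simply assumes the pattern shown above: host NN takes the .1NN address on each of the four /24 networks.

# Throwaway sketch that regenerates the Table 42 addressing scheme.
NETWORKS = [("Management", "10.1.100"),
            ("NFS storage", "172.20.100"),
            ("vMotion", "172.24.100"),
            ("Fault Tolerance", "172.28.100")]

for n in range(1, 25):                       # esx01 through esx24
    addrs = "  ".join(f"{prefix}.{100 + n}/24" for _, prefix in NETWORKS)
    print(f"esx{n:02d}.domain.local  {addrs}")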
Appendix E – Storage Volume Specifications
Table 43. VMFS Volumes
Datastore Name LUN ID Purpose
prod_san01_00 0 Production
prod_san01_01 1 Production
prod_san01_02 2 Production
prod_san01_03 3 Production
~ ~ ~
prod_san01_46 46 Production
test_san01_47 47 Test
test_san01_48 48 Test
Table 44. NFS Volumes
NFS Volume Name Server Path
nfsvol01 filer01.domain.local /nfs/vsphere/nfsvol01
nfsvol02 filer01.domain.local /nfs/vsphere/nfsvol02
images filer03.domain.local /nfs/shared/images
Appendix F – LUN Sizing Recommendations
When sizing a LUN for a VMware datastore, first determine how many active virtual machines
(VMs) to allocate per LUN. For read/write performance it does not matter whether a VMDK resides
on a 100GB LUN or a 2TB LUN; it is the total I/O load on the LUN that matters.
Below is a general calculation you can use to determine the LUN size best suited to your environment.
Suppose, for example, you determine that you will place 10 VMDKs per LUN on average. Next, calculate the average disk size. For this example, 50GB is the average size of a VMDK file:

Drive C:\   20GB   OS and patches
Drive D:\   30GB   Application disk
TOTAL       50GB   VMDK file

Multiplying 50GB by 10 VMs gives 500GB per LUN to store 10 VMDKs:
10 * 50 = 500
Swap Space
Now factor in the spare room needed for VM swap space and snapshots. For this example, VMs are configured with 3GB of RAM on average, which requires a 3GB swap file for each VM (a VM swap file equals the configured RAM minus any memory reservation). Ten VMs therefore require a total of 30GB of swap space:
10 * 3 = 30
Snapshots
15% is a good figure to use for snapshot overhead. The retention policy for active snapshots determines the overall space required; best practice is to keep the retention period for active snapshots as short as possible. If the retention period exceeds two days, store the snapshots on a second-tier datastore.
15% of 500GB = 75
Free Space
Factor in another 15% for free space:
15% of 500GB = 75
To summarize and build the formula:

  10 VMs x 50GB (avg. disk size)  = 500.0 GB (total disk size)
+ 10 VMs x 3GB  (VM swap)         =  30.0 GB
+ 15% of 500GB  (snapshots)       =  75.0 GB
+ 15% of 500GB  (free space)      =  75.0 GB
                            Total = 680.0 GB
                         Round up = 700.0 GB
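The same arithmetic is easy to script when comparing several datastore profiles. The function below simply encodes the formula above; the 15% overhead figures and the 100GB round-up granularity are this example's assumptions, not fixed rules.

```python
import math

def lun_size_gb(vm_count: int, avg_vmdk_gb: float, avg_ram_gb: float,
                snapshot_pct: float = 0.15, free_pct: float = 0.15,
                round_to_gb: int = 100) -> int:
    """LUN size per the worked example: VMDK capacity plus VM swap,
    snapshot overhead, and free space, rounded up."""
    disk = vm_count * avg_vmdk_gb                # total VMDK capacity
    swap = vm_count * avg_ram_gb                 # one swap file per VM
    overhead = (snapshot_pct + free_pct) * disk  # snapshots + free space
    return math.ceil((disk + swap + overhead) / round_to_gb) * round_to_gb

# The worked example: 10 VMs, 50GB average VMDK, 3GB RAM each.
print(lun_size_gb(10, 50, 3))  # -> 700
```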
vSphere 4.1 offers enhanced visibility into the storage throughput and latency of hosts and virtual machines, which aids in troubleshooting storage performance issues. NFS statistics are now available in the vCenter Server performance charts as well as in esxtop. Storage I/O Control provides quality of service in the form of I/O shares and limits; using it, vSphere administrators can ensure that the most important virtual machines receive adequate I/O resources even in times of congestion.
Refer to the vSphere Resource Management Guide and the vSphere Datacenter Administration Guide for more detail.
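To picture how I/O shares behave under congestion: each virtual machine's entitlement is its share value divided by the total shares of all virtual machines contending on the datastore. The sketch below is illustrative arithmetic only, not the Storage I/O Control implementation; the VM names and the 7,000 IOPS figure are invented for the example (the default share values are low = 500, normal = 1000, high = 2000).

```python
def io_entitlements(shares: dict, datastore_iops: int) -> dict:
    """Split a congested datastore's IOPS in proportion to share values."""
    total = sum(shares.values())
    return {vm: round(datastore_iops * s / total) for vm, s in shares.items()}

# Example: one "high", one "normal", and one "low" share VM on one datastore.
print(io_entitlements({"db01": 2000, "web01": 1000, "test01": 500}, 7000))
# -> {'db01': 4000, 'web01': 2000, 'test01': 1000}
```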
Appendix G – Security Configuration
Table 45. vSphere Roles and Permissions
| vSphere Role Name | Corresponding AD Groups | Enabled vSphere Privileges | vCenter Inventory Level for Permissions | Description |
|---|---|---|---|---|
| Enterprise vSphere Administrators* | vSphere-Admins* | All | Datacenter and all child objects | Administrative rights to the entire vSphere infrastructure |
| vSphere Network Administrators* | NET-Admins | Network and all child privileges | Network and all network child objects ONLY | Administrative rights to all vSphere network components |
| vSphere Storage Administrators* | SAN-Admins | Datastore and all child privileges; Storage Views and all child privileges | Datastores and all datastore child objects ONLY | Administrative rights to all vSphere storage components |
| Virtual Machine Administrators* | FINSYS-Admins | Virtual Machine and all child privileges | FINANCE folder and the VMs it contains ONLY | Administrative rights to Finance Virtual Machines ONLY |
| Virtual Machine Administrators* | HRSYS-Admins | Virtual Machine and all child privileges | HR folder and the VMs it contains ONLY | Administrative rights to HR Virtual Machines ONLY |
| Virtual Machine Administrators* | DEVSYS-Admins | Virtual Machine and all child privileges | DEVELOPMENT folder and the VMs it contains ONLY | Administrative rights to Development Virtual Machines ONLY |
| Virtual Machine Administrators* | QASYS-Admins | Virtual Machine and all child privileges | QA folder and the VMs it contains ONLY | Administrative rights to QA Virtual Machines ONLY |
| Virtual Machine Administrators* | ITSYS-Admins | Virtual Machine and all child privileges | IT folder and the VMs it contains ONLY | Administrative rights to IT Virtual Machines ONLY |

* New. Must be created.
Table 46. vCenter Virtual Machine and Template Inventory Folders Used to Secure VMs

| Folder | VM Type | Associated Admin AD Group |
|---|---|---|
| FINANCE | Financial VMs | FINSYS-Admins |
| HR | HR VMs | HRSYS-Admins |
| DEVELOPMENT | Development VMs | DEVSYS-Admins |
| QA | QA VMs | QASYS-Admins |
| IT | IT VMs | ITSYS-Admins |
Appendix H – Port Requirements
Table 47. ESX/ESXi Port Requirements
| Description | Ports | Protocol | Direction |
|---|---|---|---|
| vSphere Client to ESX/ESXi host | 443, 902, 903 | TCP | Incoming |
| VM Console to ESX/ESXi host | 903 | TCP | Incoming |
| ESX/ESXi host and vCenter heartbeat | 902 | UDP | Incoming/Outgoing |
| ESX/ESXi host DNS client | 53 | UDP | Outgoing |
| ESX/ESXi host NTP client to NTP server | 123 | UDP | Outgoing |
| ESX/ESXi host NFS | 111, 2049 | TCP, UDP | Outgoing |
| vMotion between ESX/ESXi hosts | 8000 | TCP | Incoming/Outgoing |
| HA between ESX/ESXi hosts | 2050-2250, 8042-8045 | TCP, UDP | Incoming/Outgoing |
| ESX/ESXi host to Update Manager | 80, 443, 9034 | TCP | Outgoing |
| Update Manager to ESX/ESXi host | 902, 9000-9010 | TCP | Incoming |
| ESX/ESXi host CIM client to secure server | 5988, 5989 | TCP | Incoming |
| ESX/ESXi host CIM service location protocol | 427 | TCP, UDP | Incoming/Outgoing |
Table 48. vCenter Server Port Requirements
| Description | Ports | Protocol | Direction |
|---|---|---|---|
| vSphere Client to vCenter Server | 443 | TCP | Incoming |
| vSphere Web Access to vCenter Server | 443 | TCP | Incoming |
| VM Console to vCenter Server | 902, 903 | TCP | Incoming |
| ESX/ESXi host and vCenter heartbeat | 902 | UDP | Incoming/Outgoing |
| LDAP | 389 | TCP | Incoming |
| Linked Mode SSL | 636 | TCP | Incoming |
| ESX/ESXi 2.x/3.x host to legacy License Server | 27000, 27010 | TCP | Incoming/Outgoing |
| Web Services HTTP | 8080 | TCP | Incoming |
| Web Services HTTPS | 8443 | TCP | Incoming |
| vCenter SNMP server polling | 161 | UDP | Incoming |
| vCenter SNMP client trap send | 162 | UDP | Outgoing |
| vCenter DNS client | 53 | UDP | Outgoing |
| vSphere Active Directory integration | 88, 445 | UDP, TCP | Outgoing |
| ODBC to MS SQL Server database | 1433 | TCP | Outgoing |
| Oracle listener port to Oracle database | 1521 | TCP | Outgoing |
Table 49. vCenter Converter Standalone Port Requirements
| Description | Ports | Protocol | Direction |
|---|---|---|---|
| Converter Client (GUI) to Converter Server | 443 (configurable) | TCP | Incoming |
| Converter Server to remote Windows powered-on machine—remote agent deployment, Windows file sharing | 445 and 139 | TCP | Incoming |
| Converter Server to remote Windows powered-on machine—remote agent deployment, Windows file sharing | 137 and 138 | UDP | Incoming |
| Converter Server to remote Windows powered-on machine—agent connection | 9089 | TCP | Incoming |
| Converter Server/Linux agent to remote Linux powered-on machine | 22 | TCP | Incoming |
| Converter Server/Agent to managed destination—VM creation/management (includes VM Helper creation/management) | 443 | TCP | Incoming |
| Windows powered-on machine to managed destination—hot clone—access (vCenter/ESX/ESXi) | 443 | TCP | Incoming |
| Windows powered-on machine to managed destination—hot clone—copy (ESX/ESXi) | 902 | TCP | Incoming |
| Windows powered-on machine to hosted destination—hot clone—Windows file sharing | 445 and 139 | TCP | Incoming |
| Windows powered-on machine to hosted destination—hot clone—Windows file sharing | 137 and 138 | UDP | Incoming |
| Helper VM to Linux powered-on machine—hot clone | 22 | TCP | Outgoing |
| Converter Server/Agent to managed source/destination—VM import—access (vCenter/ESX/ESXi) | 443 | TCP | Incoming |
| Converter Server/Agent to managed source/destination—VM import—copy from/to ESX/ESXi (traffic from ESX/ESXi to ESX/ESXi is direct for disk-based cloning only) | 902 | TCP | Incoming |
| Converter Server/Agent to hosted source/destination—VM import—Windows file sharing | 445 and 139 | TCP | Incoming |
| Converter Server/Agent to hosted source/destination—VM import—Windows file sharing | 137 and 138 | UDP | Incoming |
Table 50. vCenter Update Manager Port Requirements
| Description | Ports | Protocol | Direction |
|---|---|---|---|
| Update Manager to vCenter Server | 80 | TCP | Incoming |
| Update Manager to external sources (to acquire metadata regarding patch updates from VMware) | 80, 443 | TCP | Outgoing |
| Update Manager client to Update Manager server | 8084 | TCP | Incoming |
| Listening ports for the Web server, providing access to the plug-in client installer and the patch depot | 9084, 9087 | TCP | Incoming |
| Update Manager to ESX/ESXi host (for pushing virtual machine and host updates/patches) | 902 | TCP | Incoming |
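During implementation it is worth confirming that the firewall paths for the TCP ports in the preceding tables are actually open between components. A simple connectivity probe can be scripted as below; the hostnames are placeholders to replace with real values, and note that a TCP connect test does not cover the UDP entries.

```python
import socket

def tcp_port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder checks drawn from the tables above; substitute real hosts.
checks = [
    ("vcenter01.domain.local", 443),  # vSphere Client to vCenter Server
    ("esx01.domain.local", 902),      # vSphere Client to ESX/ESXi host
    ("vum01.domain.local", 8084),     # Update Manager client to server
]
for host, port in checks:
    state = "open" if tcp_port_open(host, port) else "closed/filtered"
    print(f"{host}:{port} -> {state}")
```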
Appendix I – Monitoring Configuration
Table 51. SNMP Receiver Configuration
| SNMP Monitoring Server | Port | SNMP Community String |
|---|---|---|
| emon01.domain.local | 162 | PRIV-RW |

Table 52. vCenter SMTP Settings

| SMTP Server | Sending Account | Account to Receive Alerts |
|---|---|---|
| mail03.domain.local | Vsphere01.domain.local | [email protected] |

Table 53. Physical to Virtual Windows Performance Monitor (Perfmon) Counters

| Old Physical Hardware Counter | New Virtualization-Aware Counter |
|---|---|
| Processor - % Processor Time | VM Processor - % Processor Time; Effective VM Speed in MHz (new) |
| % Committed Bytes in Use | Memory Active in MB; Memory Ballooned in MB (new) |
| % Committed Bytes | Memory Used in MB |
Default vSphere Host Alarms to be Used
 Host hardware system board status
 Host power state
 Host memory status
 Host processor status
 Host disk status
 Host network status
 Host connection and power state
 Host memory usage
 Host CPU usage
 Host disk usage
 Host network usage
 Host storage status
 License error
 Cannot connect to network
 Cannot connect to storage
Default vSphere Cluster Alarms to be Used
 All HA hosts isolated
 Cluster deleted
 Cluster overcommitted
 HA admission control disabled
 HA agent unavailable
 HA disabled
 HA host failed
 HA host isolated
 Host resource overcommitted
 Insufficient failover resources
 No compatible host for secondary VM
 Virtual machine Fault Tolerance state changed
 Timed out starting secondary VM
 Cluster High Availability error
 Migration error
Default vSphere Datastore Alarms to be Used
 Datastore disk usage (%)
 Datastore state to all hosts
Table 54. Modifications to Default Alarm Trigger Types
| Trigger Type | Condition | Warning (%) | Warning Condition Length | Alert (%) | Alert Condition Length |
|---|---|---|---|---|---|
| Host Memory Usage | Is above | 80 | For 10 minutes | 90 | For 5 minutes |
| Host CPU Usage | Is above | 80 | For 10 minutes | 90 | For 5 minutes |
| Datastore Disk Usage | Is above | 85 | For 30 minutes | 90 | For 5 minutes |
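The trigger semantics in Table 54 are "metric sustained above a threshold for a given duration." The sketch below shows that evaluation logic over a series of samples; the 20-second sampling interval and the helper itself are illustrative assumptions, not vCenter internals.

```python
def breach_level(samples, warn=(80, 600), alert=(90, 300), interval_s=20):
    """Classify a metric series per the Table 54 trigger semantics.

    samples: percent values, one per interval_s seconds, most recent last.
    warn/alert: (threshold_pct, sustained_seconds) pairs.
    """
    def sustained(threshold, seconds):
        needed = max(1, seconds // interval_s)  # samples that must breach
        recent = samples[-needed:]
        return len(recent) == needed and all(v > threshold for v in recent)

    if sustained(*alert):
        return "alert"
    if sustained(*warn):
        return "warning"
    return "ok"

# Host CPU at 92% for the last 5 minutes (15 samples at 20s) -> "alert".
print(breach_level([70] * 10 + [92] * 15))
```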
Appendix J – Naming Conventions
ESX/ESXi Host Naming Convention
ESX and ESXi hostnames follow this naming convention (matching Table 42):
esx<nn>.<domain>.local
where:
<nn> is a sequential number for the host
<domain> is the Active Directory domain where the host resides
vCenter Server Naming Convention
VMware vCenter Servers follow this naming convention:
vcenter-<location>-<nn>.<domain>.local
where:
<location> is the datacenter location where the vCenter Server resides, using existing corporate naming conventions
<nn> is a sequential number for the vCenter Server
<domain> is the Active Directory domain where the vCenter Server resides
Datacenter Naming Convention
Datacenters follow this naming convention:
<location>-<function>
where:
<location> is the datacenter location, using existing corporate naming conventions
<function> is the role of the datacenter (for example, prod, dev, test, qa)
Cluster Naming Convention
Clusters follow this naming convention:
<function>-<nn>
where:
<function> is the role of the department, such as Finance
<nn> is a sequential number for the cluster
Virtual Switch Naming Convention
Virtual switches follow this naming convention:
<network-name>-<purpose>-<nn>
where:
<network-name> is the physical network associated with the vSwitch (for example, management, VM)
<purpose> is the type of network traffic (for example, production, test)
<nn> is a sequential number for the vSwitch
vSwitch and port group names are case-sensitive and must be applied consistently across all hosts; a mismatch breaks vMotion compatibility and other elements of the vSphere infrastructure that match networks by name. A small helper that builds names per this convention is sketched below.
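Because these names must match exactly, including case, generating them from a single helper rather than typing them per host removes a common source of vMotion incompatibility. The function below is a hypothetical illustration of the vSwitch convention above, not part of the design.

```python
def vswitch_name(network: str, purpose: str, nn: int) -> str:
    """Build a vSwitch name per the <network-name>-<purpose>-<nn> convention.

    Lowercasing both parts guards against case mismatches between hosts.
    """
    return f"{network.lower()}-{purpose.lower()}-{nn:02d}"

print(vswitch_name("VM", "Production", 1))    # -> vm-production-01
print(vswitch_name("Management", "Test", 2))  # -> management-test-02
```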
Appendix K – Design Log
DELETE THE FOLLOWING HIGHLIGHTED TEXT AFTER YOU READ IT
This is a sample design log for decisions made during your VMware vSphere engagement. The actual issues and resolutions for your customer will be specific to the issues that needed to be addressed and solved. You will need to update this table.
Table 55. Design Log
| ID | Open Issue/Question | Owner | Close Date | Answer/Resolution | Comments/Proposed Value |
|---|---|---|---|---|---|
| 101 | | | | | |
| 102 | | | | | |