Title: VEO Virtual Ecosystem Optimization
1. vSphere 4: Virtualizing Critical Apps
VMUG 2009
Presented by Matt Cavanagh, Principal Consultant, Flytrap Technologies
With support from Bluestripe Software and Stratus Technologies
2. Agenda
- Virtualization Adoption Trends
- Obstacles to Virtualizing Critical Apps
- Required Components for Critical Apps
3. Virtualization Adoption Trends
Slow and Challenging VM Conversion
Integrated Transaction Apps
- N-tiered / web applications
- Windows, Linux, Solaris, z Series
- Significant non-VM systems
Complex N-Tier / Web Apps
- First business transactions
- Windows/Linux front ends
- Tiers 1-2 virtualized
- 100s/1,000s of servers
- Middleware issues
Simple Heavy-Load Apps
- Exchange servers
- Windows 200x servers
- 100s/1,000s of servers
Development / Test
- Departmental servers
- 10s of servers
Simple Under-Utilized Apps
- Multiple platforms for dev/test
- 10s of servers
Recent Forrester discussions estimate that the lower-left quadrant makes up the majority of today's estimated 12-15% VM proliferation.
4. Adoption Trends (cont.)
- Organizations conducting virtualization projects are experiencing 18% reductions in infrastructure cost and 15% savings in utility cost
- However, improvements could be made:
- 69% of organizations surveyed do not have the ability to discover all connections that impact application performance in virtual environments
- 71% of organizations do not have the ability to assess interdependencies between virtual and physical systems
- 76% of organizations do not have tools in place for the automated discovery of applications in virtualized environments
5. Obstacles
- Performance
- Virtualization overhead was too costly prior to vSphere
- Availability
- Critical apps typically run either on RISC-based machines designed for high availability (e.g., mainframes) or on x86 running in clusters
- Management
- Lack of application transaction-flow visibility (55%)
- Inability to anticipate application performance changes in P2V (49%)
- Inability to measure end-user experience (46%)
- Inability to manage SLAs around application performance (43%)
- Bottom line: co-mingling critical apps inside a virtual environment makes SLAs and debugging difficult
7. Obstacles (cont.)
- P2V Conversion Checklist
- Inventory physical servers
- Resource utilization study (a sampling sketch follows this list)
- Scheduled conversion project
- Agreement from all departments
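The resource utilization study can be as simple as sampling CPU, memory, and disk I/O on each candidate physical server over a representative period. Below is a minimal Python sketch using the psutil library; the output file name, sample count, and interval are illustrative assumptions, not part of the original checklist.

```python
import csv
import time

import psutil  # third-party: pip install psutil

SAMPLES = 60           # illustrative: one hour of data
INTERVAL_SECONDS = 60  # one sample per minute

with open("utilization.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["timestamp", "cpu_pct", "mem_pct",
                     "disk_read_bytes", "disk_write_bytes"])
    for _ in range(SAMPLES):
        # cpu_percent blocks for the interval and averages over it
        cpu = psutil.cpu_percent(interval=INTERVAL_SECONDS)
        mem = psutil.virtual_memory().percent
        io = psutil.disk_io_counters()
        writer.writerow([time.strftime("%Y-%m-%d %H:%M:%S"),
                         cpu, mem, io.read_bytes, io.write_bytes])
        fh.flush()
```

Collecting peaks as well as averages from data like this is what lets the sizing step decide how many candidates can safely share an ESX host.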
8. Required Components
- Performance
- Bare-metal performance
- Availability
- VMware FT is good, but still lacks several components
- Management
- Need detailed insight into application performance across the network
- Need the ability to quickly go back to physical (even if only for political reasons)
9. The Solution
- Performance
- vSphere performance improvements now provide bare-metal-equivalent performance
- Availability
- Stratus HA servers provide greater than 99.999% availability through hardware fault tolerance (a quick downtime calculation follows this list)
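To put the availability figure in perspective, here is a quick back-of-the-envelope calculation (not from the original slides) converting an availability percentage into expected downtime per year:

```python
# Convert availability percentages into expected downtime per year.
HOURS_PER_YEAR = 365.25 * 24

for availability in (0.999, 0.9999, 0.99999):
    downtime_minutes = (1 - availability) * HOURS_PER_YEAR * 60
    print(f"{availability:.3%} availability -> "
          f"~{downtime_minutes:.1f} minutes of downtime per year")
```

At 99.999% that works out to roughly five minutes of unplanned downtime per year, which is the bar the hardware fault tolerance claim is aiming at.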
10. The Solution (cont.)
- Management
- Software companies such as Bluestripe provide detailed insight into application performance and interdependencies
- Closely analyze application performance pinch points prior to going virtual and compare this to the virtual environment (Bluestripe); a simple baseline sketch follows this list
- Continually monitor and root-cause problem areas quickly (Bluestripe)
- Quickly migrate back to physical if issues appear to be related to virtualization (Acronis BR10)
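Even without a dedicated tool, a rough before/after comparison can be made by recording end-user response times against the same application endpoint on the physical and then the virtual deployment. The sketch below is a minimal stand-in, not Bluestripe functionality; the URL, sample count, and pause are assumptions.

```python
import statistics
import time
import urllib.request

URL = "http://app.example.local/health"  # hypothetical application endpoint
SAMPLES = 100

latencies_ms = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as resp:
        resp.read()
    latencies_ms.append((time.perf_counter() - start) * 1000.0)
    time.sleep(1)  # spread the samples out a little

latencies_ms.sort()
print(f"median = {statistics.median(latencies_ms):.1f} ms, "
      f"p95 = {latencies_ms[int(0.95 * SAMPLES) - 1]:.1f} ms")
```

Run the same script against the physical server before conversion and against the VM afterwards; a clear shift in the median or 95th percentile is a signal to look at the virtual infrastructure before blaming the application.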
11. The Solution (cont.)
12. VMware FT Gotchas
- Ensure CPUs support FT
- Enable Hardware Virtualization (HV) in the BIOS
- Recommended: turn off power management (power capping) in the BIOS (performance implications)
- Recommended: disable hyper-threading in the BIOS (performance implications)
- Physical RDM is not supported (note that virtual RDM is supported)
- Storage VMotion is not supported
- N-Port ID Virtualization (NPIV) is not supported
- VMDKs must be thick eager-zeroed (thin disks will be converted)
- Gigabit NIC for FT logging (10 Gbit can be used)
- Ensure the environment does not have a single point of failure
- Primary and secondary hosts/VMs must be in an HA-enabled cluster
- DRS cannot be used for protected VMs (note that manual VMotion is OK)
- Primary and secondary hosts must be on the same build
- VMs cannot have more than 1 vCPU (SMP is not supported)
- Hot add of devices is not supported
- Snapshots are not supported (delete them before protecting)
- VM hardware must be version 7 (a scripted pre-check for several of these constraints follows this list)
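Several of the VM-level constraints (single vCPU, no snapshots, hardware version 7, thick eager-zeroed disks) can be screened for in advance. The sketch below uses pyVmomi, the Python bindings for the vSphere API; it is an assumed helper rather than VMware tooling, and the vCenter hostname and credentials are placeholders.

```python
import ssl

from pyVim.connect import SmartConnect, Disconnect  # pip install pyvmomi
from pyVmomi import vim


def ft_blockers(vm):
    """Return reasons why this VM could not be FT-protected."""
    issues = []
    if vm.config.hardware.numCPU > 1:
        issues.append("more than 1 vCPU (SMP is not supported)")
    if vm.snapshot is not None:
        issues.append("snapshots present (delete before protecting)")
    if vm.config.version != "vmx-07":
        issues.append(f"hardware version is {vm.config.version}, not v7")
    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualDisk):
            if getattr(dev.backing, "thinProvisioned", False):
                issues.append(f"{dev.deviceInfo.label} is thin provisioned")
            elif not getattr(dev.backing, "eagerlyScrub", False):
                issues.append(f"{dev.deviceInfo.label} is not eager-zeroed")
    return issues


ctx = ssl._create_unverified_context()  # lab use only; validate certs in production
si = SmartConnect(host="vcenter.example.local", user="administrator",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        problems = ft_blockers(vm)
        if problems:
            print(f"{vm.name}: {'; '.join(problems)}")
finally:
    Disconnect(si)
```

Host-level items (CPU/BIOS settings, matching builds, FT logging NIC) still have to be verified on each host.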
13. (No Transcript)
14. Report Card: Definitions
15. Report Card: Virtual Infrastructure
16. Report Card: Networking
17. Existing Networking
18. Proposed Network
19. Report Card: Storage
20. Report Card: VMs
21. Healthcheck Recommendations
- Storage
- Grow VMFS volumes to hold multiple VMs (OS drives)
- Rule of thumb: < 10 VMs per VMFS volume (see the datastore check sketched after this list)
- VMFS volumes should have enough free space to handle VM snapshots and vswap files
- ACLs on the EqualLogic array need to be modified to limit access to the IQNs of the ESX hosts only
- Provide redundant connections from each ESX host to the SAN fabric
- VCMS
- Move VirtualCenter to a Windows 2003 server on the ESX cluster
- Note: set restart priority to High in the HA cluster settings
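Both storage rules of thumb can be checked against vCenter. The sketch below reuses an assumed pyVmomi session `si` like the one in the FT pre-check; the 20% free-space margin is an assumption standing in for "enough room for snapshots and vswap".

```python
from pyVmomi import vim

content = si.RetrieveContent()  # `si` is an existing pyVmomi connection
ds_view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datastore], True)

for ds in ds_view.view:
    vm_count = len(ds.vm)
    free_pct = 100.0 * ds.summary.freeSpace / ds.summary.capacity
    if vm_count >= 10:
        print(f"{ds.name}: {vm_count} VMs (rule of thumb is fewer than 10 per volume)")
    if free_pct < 20.0:
        print(f"{ds.name}: only {free_pct:.0f}% free (headroom for snapshots/vswap)")
```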
22. Healthcheck Recommendations
- Networking
- Use NIC teaming to provide network redundancy for all networks (see the uplink check sketched after this list)
- Remove the existing heartbeat network to free up a NIC port
- Upgrade the SAN switch infrastructure
- Stackable switches with support for Jumbo Frames and Flow Control, with multicast storm control and STP disabled
- Virtual Machines
- Upgrade VMware Tools
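The NIC-teaming recommendation can be audited from vCenter as well. The sketch below (same assumed pyVmomi session `si`) flags standard vSwitches that have fewer than two physical uplinks.

```python
from pyVmomi import vim

content = si.RetrieveContent()  # `si` is an existing pyVmomi connection
host_view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)

for host in host_view.view:
    for vswitch in host.config.network.vswitch:
        uplinks = len(vswitch.pnic)
        if uplinks < 2:
            print(f"{host.name} / {vswitch.name}: {uplinks} uplink(s), "
                  "no NIC teaming redundancy")
```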
23. (No Transcript)
24. Methodology