1
VMware as implemented by the ITS department, QUT
  • Scott Brewster
  • 7 December 2006

2
Note
  • IT services at QUT are provided primarily by the
    central ITS department and additionally by the IT
    departments of various faculties and divisions.
  • This presentation focuses on the VMware implementation managed by the
    central ITS department. (There are other VMware implementations at QUT
    managed by faculty IT departments.)

3
Overview
  • Why VMware?
  • VMware software
  • Physical hardware
  • Host hardware
  • Network hardware
  • Storage hardware
  • Virtual machine configuration
  • Guest operating systems
  • Backup of virtual machines
  • VirtualCenter
  • Future directions

4
Why VMware?
  • Server consolidation through server
    virtualisation
  • Relocating operating-system instances from multiple under-utilised
    physical servers to virtual machines on a single physical server
  • Test and development environments are key targets
    for virtualisation

5
VMware software
  • Timeframe
  • Late 2005: Initial deployment of 6 hosts running ESX Server 2.5.2
  • Mid-2006:
  • Installed ESX Server 3.0 on 8 new hosts
  • Migrated virtual machines from the 6 original hosts
  • Manually shut down and migrated the existing virtual machines one at a
    time from the ESX Server 2.5.2 hosts to the new ESX Server 3.0 hosts,
    leaving all ESX Server 2.5.2 hosts empty of virtual machines.
    Unfortunately, this required virtual machine downtime!
  • Re-installed ESX Server 3.0 on the original 6 hosts
  • Late 2006: Upgraded all hosts to ESX Server 3.0.1
  • Used VMotion to migrate all virtual machines off a given host before
    upgrading it to ESX Server 3.0.1. No virtual machine downtime required!
  • Now: Another 8 new hosts awaiting installation of ESX Server 3.0.1

6
Physical hardware
  • The VMware implementation requires three key types of physical hardware:
    hosts, a network, and shared storage
  • Hosts
  • 22 Hewlett-Packard (HP) ProLiant-series servers
  • Network
  • 1000 Mb/s Ethernet
  • Cisco and Nortel network infrastructure
  • Storage
  • Local boot disks
  • Shared storage provided by SAN
  • SAN consists of HP storage arrays and fibre
    channel switches

7
Host hardware
  • 22 physical hosts dedicated to VMware
    implementation
  • 4 × HP ProLiant DL380 G4
  • 2 × 3.4 GHz Intel Xeon CPUs
  • 5 GiB memory
  • 2 × 200 MiB/s Fibre Channel (200-M5-SN-I) ports
  • 4 × 1000 Mb/s Ethernet (1000BASE-T) ports
  • 10 × HP ProLiant DL385 G1
  • 2 × 2.2 GHz AMD Opteron (dual core) CPUs
  • 9 GiB memory
  • 2 × 400 MiB/s Fibre Channel (400-M5-SN-I) ports
  • 4 × 1000 Mb/s Ethernet (1000BASE-T) ports
  • 8 × HP ProLiant BL465c G1
  • 2 × 2.6 GHz AMD Opteron (dual core) CPUs
  • 14 GiB memory
  • 2 × 400 MiB/s Fibre Channel (400-M5-SN-I) ports
  • 4 × 1000 Mb/s Ethernet (1000BASE-T) ports

8
Network hardware
  • Each host has 4 × 1000 Mb/s network connections (see the sketch after
    this list)
  1. IP subnet 131.181.117.128/25 for the service console
  2. IP subnet 10.0.0.0/8 on a dedicated VLAN for VMotion
  3. IP subnet 131.181.118.0/24 or 131.181.117.0/25 for use by virtual
     machines
  4. Additional connection identical to (3) above, for redundancy
  • Some hosts have an extra 2 × 1000 Mb/s network connections
  5. IP subnet 131.181.108.0/24 or 131.181.107.0/24 for use by virtual
     machines
  6. Additional connection identical to (5) above, for redundancy
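The Python sketch below is a rough illustration (not part of the original slides) of the layout just described: in the current mode each subnet is bound to its own physical connection, so a host can only place traffic on a subnet for which it has a dedicated NIC. The port-group names are hypothetical; only the subnets come from the slide.

import ipaddress

# Connections (1)-(3) and (5) from the list above; (4) and (6) are redundant
# duplicates of (3) and (5). Port-group names are hypothetical.
port_groups = {
    "Service Console": ipaddress.ip_network("131.181.117.128/25"),
    "VMotion":         ipaddress.ip_network("10.0.0.0/8"),
    "VM Network 118":  ipaddress.ip_network("131.181.118.0/24"),
    "VM Network 108":  ipaddress.ip_network("131.181.108.0/24"),
}

def connection_for(ip):
    """Return the port group whose subnet contains the given address."""
    addr = ipaddress.ip_address(ip)
    for name, net in port_groups.items():
        if addr in net:
            return name
    return "no dedicated connection on this host"

print(connection_for("131.181.118.42"))   # -> VM Network 118
print(connection_for("131.181.107.10"))   # -> no dedicated connection on this host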

9
Network hardware
  • Now: External switch tagging (EST) mode

[Diagram: EST mode. Separate vSwitches, each with its own physical network
connections, for the Service Console (131.181.117.128/25), VMotion
(10.0.0.0/8), and virtual machines (131.181.118.0/24 and 131.181.108.0/24).]
10
Network hardware
  • Virtual machines alone currently need access to four IP subnets, with
    access to even more subnets desired.
  • The intention is to use virtual switch tagging (VST) mode (a rough
    sketch follows this list)
  • Allows virtual machines to access any subnet
  • Provides redundancy for all connections (including Service Console and
    VMotion)
  • Allows VMotion between more ESX Server hosts
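A rough Python sketch (not from the original slides) of what the desired VST layout looks like by contrast: a single vSwitch with trunked uplinks, where each port group applies an 802.1Q VLAN tag. The VLAN IDs used here are hypothetical; only the subnets come from the presentation.

from dataclasses import dataclass

@dataclass
class PortGroup:
    name: str
    subnet: str
    vlan_id: int   # 802.1Q tag applied by the vSwitch in VST mode

# One vSwitch, trunked uplinks, one port group per subnet.
trunked_vswitch = [
    PortGroup("Service Console", "131.181.117.128/25", vlan_id=117),  # hypothetical VLAN
    PortGroup("VMotion",         "10.0.0.0/8",         vlan_id=10),   # hypothetical VLAN
    PortGroup("VM Network 118",  "131.181.118.0/24",   vlan_id=118),  # hypothetical VLAN
    PortGroup("VM Network 108",  "131.181.108.0/24",   vlan_id=108),  # hypothetical VLAN
]

# Giving virtual machines access to another subnet now only means adding a
# port group and allowing its VLAN on the physical trunk ports -- no extra
# host NICs are needed.
trunked_vswitch.append(PortGroup("VM Network 107", "131.181.107.0/24", vlan_id=107))

for pg in trunked_vswitch:
    print(f"{pg.name:16} VLAN {pg.vlan_id:4} {pg.subnet}")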

11
Network hardware
  • Desired: Virtual switch tagging (VST) mode

[Diagram: VST mode. A single vSwitch with physical trunk connections carrying
the Service Console (131.181.117.128/25), VMotion (10.0.0.0/8), and virtual
machine (131.181.118.0/24, 131.181.108.0/24) networks as tagged VLANs.]
12
Storage hardware
  • Hosts boot from local disks
  • Local disks (all SCSI) are configured into a
    RAID-1 logical disk.
  • Our non-blade servers use an extra local disk as
    a hot spare.
  • All other storage is shared and presented from a
    SAN
  • Hosts have dual 200 MiB/s (or 400 MiB/s for newer hosts) Fibre Channel
    connections to the SAN, one to each SAN fabric. (QUT has two identical
    SAN fabrics for redundancy.)
  • HP Storage arrays (EVA8000 in this case) provide
    shared SAN LUNs to the hosts.
  • SAN LUNs for use by VMware are 500 GiB RAID-5
    LUNs.

13
Storage
  • Each SAN LUN provides the backing for a single ESX datastore.
  • Datastores can span SAN LUNs, but we haven't tried this.
  • In turn, each datastore is formatted with the VMFS3 filesystem.
  • Virtual machines' virtual disks are backed by files in VMFS3 filesystems.
  • We keep all of a virtual machine's virtual disks on the same datastore
    (see the sketch after this list).
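As a rough Python illustration of this hierarchy (not from the original slides): one SAN LUN backs one datastore, the datastore holds a VMFS3 filesystem, and each virtual machine keeps its configuration file and all of its virtual disks in one directory on a single datastore. Datastore and virtual machine names below are hypothetical; only the 500 GiB RAID-5 LUN size and the one-datastore-per-VM rule come from the slides.

datastores = {
    # datastore name       backing SAN LUN (one LUN per datastore)
    "vmfs_datastore_01": {"lun": "EVA8000 LUN 01 (500 GiB, RAID-5)", "vms": {}},
    "vmfs_datastore_02": {"lun": "EVA8000 LUN 02 (500 GiB, RAID-5)", "vms": {}},
}

def add_vm(datastore, vm, disk_sizes_gib):
    """Register a VM: its config file and every virtual disk live in one
    directory on a single datastore."""
    path = f"/vmfs/volumes/{datastore}/{vm}"
    files = [f"{path}/{vm}.vmx"]                                   # VM configuration
    files += [f"{path}/{vm}_{i}.vmdk ({size} GiB)"                 # virtual disks
              for i, size in enumerate(disk_sizes_gib)]
    datastores[datastore]["vms"][vm] = files

add_vm("vmfs_datastore_01", "web01", [8, 100])       # small boot disk + data disk
add_vm("vmfs_datastore_02", "db01",  [8, 200, 200])  # boot disk + two data disks

for name, ds in datastores.items():
    print(name, "backed by", ds["lun"])
    for vm, files in ds["vms"].items():
        for f in files:
            print("   ", f)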

14
Virtual machine configuration
  • Currently hosting 64 virtual machines (a rough capacity comparison
    follows this list)
  • CPU
  • Majority of virtual machines configured with a
    single virtual CPU
  • Some are configured with dual virtual CPUs
  • Memory
  • Majority are configured with 512 MiB or less
  • Some use 1 GiB or more
  • Network
  • All currently use a single virtual network
    interface
  • Storage
  • Most have a relatively small boot virtual disk
    with one or more large data virtual disks
  • Some have a larger combined boot/data virtual disk
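A rough capacity comparison in Python (not from the original slides), using the host memory figures from the "Host hardware" slide and the virtual machine counts above. The 48/16 split between 512 MiB and 1 GiB guests is an assumption made purely for the arithmetic; the slides only say "majority 512 MiB or less, some 1 GiB or more".

# Host memory figures are from the "Host hardware" slide; the VM memory split
# below is an assumption for illustration only.
host_memory_gib = 4 * 5 + 10 * 9 + 8 * 14      # DL380 G4 + DL385 G1 + BL465c G1
vm_memory_gib   = 48 * 0.5 + 16 * 1.0          # assumed: 48 VMs at 512 MiB, 16 at 1 GiB

print(f"Total host memory:            {host_memory_gib} GiB")    # 222 GiB
print(f"Approx. configured VM memory: {vm_memory_gib:.0f} GiB")  # ~40 GiB
print(f"Average VMs per host:         {64 / 22:.1f}")            # ~2.9 across 22 hosts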

15
Guest operating systems
  • Red Hat Enterprise Linux 4
  • 29 virtual machines running this OS
  • Even physical host hardware cannot always keep up with the default
    system timer rate of 1000 clock interrupts/s, so a custom kernel is
    required to reduce this rate to 100 interrupts/s for virtual machines
    (see the rough arithmetic after this list).
  • The virtual machine is created manually by the system administrator.
  • The operating system is then installed using a network-based Kickstart
    process from the university's Red Hat Satellite. Custom scripts install
    additional QUT-specific software and customisation.
  • The new system is automatically registered for updates as part of the
    Kickstart process.
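A rough Python illustration (not from the original slides) of why the timer rate matters: with the 29 RHEL4 guests spread across the 22 hosts, the average per-host virtual timer load drops by a factor of ten under the custom 100 Hz kernel. The per-host average is a simplification for the example, not a measured figure.

rhel4_vms = 29          # RHEL4 guests (this slide)
hosts     = 22          # physical ESX Server hosts
default_hz, custom_hz = 1000, 100   # timer interrupts per second per guest

avg_vms_per_host = rhel4_vms / hosts
print(f"Virtual timer ticks/s per host at {default_hz} Hz: "
      f"{avg_vms_per_host * default_hz:.0f}")    # ~1318
print(f"Virtual timer ticks/s per host at {custom_hz} Hz:  "
      f"{avg_vms_per_host * custom_hz:.0f}")     # ~132, a tenfold reduction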

16
Installation of guest operating systems
  • Microsoft Windows 2003 Server
  • 35 virtual machines running this OS
  • Clock interrupts already occur at less than 100
    interrupts/s, so no customisation of the system
    timer is required.
  • Virtual machine is created by cloning a virtual
    machine template which has previously been
    manually installed from a Windows installation
    CD. The template is configured to both run
    Sysprep and add the instance to the WSUS server
    for updates.
  • The system administrator then modifies the newly created virtual machine
    if extra disks, memory, etc. are required.

17
Backup of virtual machines
  • No backup of ESX Server hosts is made
  • Virtual machines are stored on the shared SAN
    LUNs and can be restarted from a different ESX
    Server host if an ESX host is lost.
  • Each virtual machine is backed up traditionally using a network-backup
    agent
  • If a virtual machine is lost, it must be recreated and restored from
    tape.
  • The shared SAN LUNs are not backed up
  • If a shared SAN LUN is lost, all virtual machines it contained must be
    recreated and restored from tape.

18
VirtualCenter
  • VirtualCenter version
  • Late 2005: Initial deployment used VirtualCenter 1.3.1
  • Mid-2006: Fresh installation of VirtualCenter 2.0
  • Late 2006: Upgrade to VirtualCenter 2.0.1
  • Client: Only supported on Windows
  • Linux users have to use a Terminal Services client to first connect to a
    Windows host
  • Virtual consoles become unreliable when this is done: key-press and
    key-release events are delayed, causing unwanted repetition on virtual
    consoles
  • Server: Only supported on Windows
  • Installed on a physical host
  • License server
  • Dedicated license server running on the same
    physical host as the VirtualCenter server
  • VirtualCenter database
  • Oracle database running under Linux on a physical
    host
  • VMotion
  • Separately licensed at additional cost, but an essential tool in our
    experience
  • Allows on-line migration of virtual machines
    between physical hosts

19
Future directions
  • Review virtual machine backup
  • The current backup strategy does nothing to reduce the number of costly
    network-backup licenses required
  • Network backups generate a lot of extra network traffic, which is
    undesirable for virtual machines
  • Configuration of resource pools
  • Currently little consideration is being given to
    guaranteeing resources for virtual machines
  • Appropriately configured resource pools should
    help