Friday, 29 October 2010

Building a vLab Part 3: vCenter Server

Previously on "Building a vLab": Part 1: The Design and Part 2: Infrastructure Build.

For a production environment, many people run vCenter on a server that connects to a SQL Server database on another server (possibly as part of a cluster).  However, as part of this vLab, we're going for the default install of a single VM using a local SQL Server Express database.

The vCenter VM has 1 vCPU, 4GB RAM, a 40GB hard disk and 1 vNIC connecting to the vLab LAN with an IP address of 192.168.10.2/24. This specification is smaller than that recommended by VMware, but it's enough to get started with. The vCenter server is running Windows Server 2008 R2, as vCenter 4.1 requires a 64bit version of Windows.

Once built, the vCenter Server is configured with the Vyatta VM (192.168.10.33) as its default gateway and the domain controller as its DNS server. The vCenter Server is named "vcenter" and then joined to the vLab domain.
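
For reference, the IP configuration can also be applied from an elevated command prompt rather than through the GUI. This is only a sketch: the interface name "Local Area Connection" is an assumption (the default on Server 2008 R2), and the addresses are the ones from the design above.

rem Static address, netmask and default gateway (Vyatta), then the DNS server (domain controller)
netsh interface ipv4 set address name="Local Area Connection" static 192.168.10.2 255.255.255.0 192.168.10.33
netsh interface ipv4 set dnsservers name="Local Area Connection" static 192.168.10.1 primary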

As I do not have permanent VMware vSphere licences in my home lab, I wanted to create an environment where rebuilding from scratch would be a fairly painless experience.

I first created a new 64bit Windows Server 2008 R2 virtual machine. After it was assigned a static IP address and given the correct hostname, I created a small command script called install-vcenter.cmd, based on the VMware documentation "Performing a Command-Line Installation of vCenter Server", and copied it to the administrator's desktop.

Having got the base Windows install done, I then exported the VM as an OVF template for future use.
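
One way to do the export is through the vSphere Client's File > Export > Export OVF Template wizard; another is VMware's standalone ovftool. The following is only a sketch - the host address, VM name and output path are placeholders I've made up, so check the ovftool documentation for the exact locator syntax before relying on it:

rem Export the powered-off base VM from the pESXi host to a local OVF (host, VM name and path are examples)
ovftool vi://root@pesxi-host/vcenter-base C:\exports\vcenter-base.ovf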

The next step was to mount the vCenter ISO and run the install-vcenter.cmd script. This performed a silent default installation of vCenter, including the .NET runtime and SQL Server Express. There are many customisable options that can be passed to the setup, but these work well enough for my needs:

set EXE=D:\vpx\VMware-vcserver.exe
start /wait %EXE% /q /s /w /L1033 /v" /qr DB_SERVER_TYPE=Bundled FORMAT_DB=1 /L*v \"%TEMP%\vmvcsvr.txt\""

This means that when the vCenter licence expires, I can wipe it out, re-import the template VM, rejoin the domain and run the install-vcenter.cmd script to rebuild a new vCenter installation. It won't keep all my previous settings, and won't configure all the VMs, but it's a start.

UPDATE (22 Jan 2011): If the lab isn't used for a couple of months, the machine's trust relationship with the Active Directory domain breaks and the vCenter installation fails. The fix is to export the VM once it has been built, but before it is joined to the domain. I then wrote a short cmd file to automatically join the domain with the following command:

netdom join %computername% /Domain:VSPHERE /UserD:Administrator /PasswordD:mypassword /REBoot:20


In the next part of this series, we'll build our virtual ESXi servers.

Sunday, 24 October 2010

Building a vLab Part 2: Infrastructure Build

The journey begins! In order to build the vLab as detailed in part one, I'll be using my HP ML115 G5. This is a quad core, single CPU tower server in which I've installed 8GB RAM. It's also got 2 x 500GB SATA drives, of which I'll be using one for the vLab environment (the other will be used for other projects). The ML115 G5 has an internal USB socket, and ESXi can easily be installed on a USB stick plugged into it, leaving the disk space free for the VMs.

There is little point in recreating the same installation instructions over and over again when there is a perfectly good reference point. In this case, I'm using the excellent "Installing VMware ESXi 4.0 on a USB Memory Stick The Official Way" post from TechHead (the install for 4.1 is pretty much the same).

As the only physical VMware server, I'll be referring to it as the pESXi box.

At the end of the install process, I have an empty VM host connected to my physical network on which to build the VMs that represent the "physical" items in my virtual infrastructure: the router, the SAN/NAS and the Active Directory domain controller that we'll need when vCenter Server is installed.


Building the LAN


VMware best practice is to use multiple networks for different types of traffic. The vLab will require four separate networks (virtual machine traffic, vMotion traffic, IP storage and access to the physical network). To enable this, the pESXi server needs four virtual switches. All four of these switches contain "Virtual Machine" port groups; the switch carrying the management interface of the pESXi box also has a VMkernel port.

  • vSwitch0: Connects to the physical (non-vLab) network
  • vSwitch1: The vLab LAN for management access and connecting VMs
  • vSwitch2: The vLab storage network for iSCSI and/or NFS traffic
  • vSwitch3: The vLab vMotion network
On the pESXi box, the networks look as follows:

[Screenshot: the vSphere Client networking view showing vSwitch0 to vSwitch3 on the pESXi host]

You will notice that vSwitches 1-3 are not connected to any physical adapter. We will use the Vyatta router VM to provide connectivity between the vLab networks and the physical network.
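
The vSwitches and port groups can be created through the vSphere Client, or from the pESXi console (Tech Support Mode) or the vSphere CLI using esxcfg-vswitch. The sketch below assumes the port group names used elsewhere in this series; the vMotion port group name is simply my choice:

# Create the three internal vSwitches and add a Virtual Machine port group to each
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -A "vSphere Lab in a Box LAN" vSwitch1
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -A "vSphere Lab in a Box Storage LAN" vSwitch2
esxcfg-vswitch -a vSwitch3
esxcfg-vswitch -A "vLab vMotion" vSwitch3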


    For reference, I'll be using the following subnets:

    • 192.168.192.0/24 - main network connected to the physical network
    • 192.168.10.0/24 - vLab Virtual Machine network
    • 192.168.20.0/24 - vLab storage network
    • 192.168.30.0/24 - vLab vMotion network


    Add routes to these networks on your management PC, as shown below. Alternatively, it may be more useful to add them on your main router so that traffic to the vLab networks routes correctly.
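
    On a Windows management PC, that looks something like the following (192.168.192.33 is the Vyatta eth0 address shown in the configuration further down; the -p flag makes the routes persistent across reboots). There is no route for the vMotion network, as the Vyatta VM has no interface on it:

    rem Route the vLab VM and storage networks via the Vyatta router
    route -p add 192.168.10.0 mask 255.255.255.0 192.168.192.33
    route -p add 192.168.20.0 mask 255.255.255.0 192.168.192.33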

    The router is necessary because I want to give my vLab a completely separate IP range from the rest of my kit. Therefore, in order for my non-lab kit to communicate with the vLab, I need a layer 3 router. Vyatta have a free "Core" edition that can be installed. For this, I created a new VM with the following specification:
    • 1 vCPU
    • 256MB RAM
    • 8GB Hard Disk (thin provisioned)
    • 3 x Network Interfaces
      • 1 connecting to the default VM network (i.e., the physical LAN)
      • 1 connecting to the "vSphere Lab in a Box LAN"
      • 1 connecting to the "vSphere Lab in a Box Storage LAN"
    For information on installing Vyatta, see this guide. My Vyatta configuration (as displayed using the show -all command) is:

     interfaces {
         ethernet eth0 {
             address 192.168.192.33/24
             duplex auto
             hw-id 00:0c:29:50:0a:5b
             speed auto
         }
         ethernet eth1 {
             address 192.168.10.33/24
             duplex auto
             hw-id 00:0c:29:50:0a:65
             speed auto
         }
         ethernet eth2 {
             address 192.168.20.33/24
             duplex auto
             hw-id 00:0c:29:50:0a:6f
             speed auto
         }
         loopback lo {
         }
     }
     system {

         gateway-address 192.168.192.1
         host-name vyatta
         login {
             user root {
                 authentication {
                     encrypted-password $1$VBYqK71jAsu3bsoAznh22mx0pqp31nU/
                 }
                 level admin
             }
             user vyatta {
                 authentication {
                     encrypted-password $1$FdjsdebjGneXOIVw9exHrXRAcaN.
                 }
                 level admin
             }
         }
         ntp-server 69.59.150.135
         package {
             auto-sync 1
             repository community {
                 components main
                 distribution stable
                 password ""
                 url http://packages.vyatta.com/vyatta
                 username ""
             }
         }
         time-zone GMT
     }


    I mapped the Vyatta ethernet adapters (eth0, eth1 and eth2) to the correct networks by comparing their MAC addresses with those listed against each adapter in the vSphere client. The default route (pointing to my Internet router) will make things like downloading software from within the vLab easier.
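
    For anyone rebuilding this from scratch rather than restoring a saved configuration, the addressing above is entered from Vyatta's configuration mode. A minimal sketch covering just the pieces shown in the listing:

    configure
    set interfaces ethernet eth0 address 192.168.192.33/24
    set interfaces ethernet eth1 address 192.168.10.33/24
    set interfaces ethernet eth2 address 192.168.20.33/24
    set system gateway-address 192.168.192.1
    set system host-name vyatta
    commit
    save
    exit

    The commit makes the changes live, and save writes them to the boot configuration.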

    Building the SAN

    VMware vSphere shines when the hosts have access to shared storage. The vLab ESXi servers will connect to an IP-based (iSCSI) SAN. There are multiple ways to achieve this, and one of the most common for home lab users is the Linux-based OpenFiler distribution.

    Again, in an attempt to avoid reinventing the wheel, I'll point to the excellent TechHead post on configuring OpenFiler.

    The specifics for the vLab are that the OpenFiler VM is connected to the vLab storage LAN, not the vLab VM LAN, and has the IP address 192.168.20.1. In addition to the 8GB install disk, I've also created a 100GB thin provisioned disk on which to store VMs. The OpenFiler storage will be used for the VMs that I'll install on the virtual ESXi servers; the Active Directory Domain Controller, Vyatta VM and vCenter Server are installed directly onto the local 500GB SATA datastore.
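
    As an aside, the 100GB thin disk can be added through the vSphere Client when editing the OpenFiler VM, or created from the pESXi console with vmkfstools. The datastore and folder names below are assumptions, so substitute your own:

    # Create a 100GB thin-provisioned data disk for the OpenFiler VM (datastore/path are examples)
    vmkfstools -c 100G -d thin /vmfs/volumes/datastore1/openfiler/openfiler-data.vmdk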


    Installing the Active Directory Domain Controller

    VMware vCenter Server requires Active Directory, so we'll need a domain controller for the vLab. Best practice requires at least two domain controllers for resilience, but we'll make do with just the one (this is a VM lab, not a Windows lab). I sized the DC VM to be very small: 1 vCPU with 256MB RAM, 8GB hard disk and 1 vNIC connecting to the vLab LAN with an IP address of 192.168.10.1/24. Although I prefer Windows Server 2008, the DC will run Windows Server 2003 because of its lower footprint.

    The setup was a standard Windows Server 2003 install, followed by running dcpromo. I called the host "labdc", the domain "vsphere.lab" and we're off.
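
    Once dcpromo completes and the server reboots, it's worth confirming that the domain's DNS service records are resolvable before pointing anything else at it. A quick check from any machine that can reach 192.168.10.1 (the record name is the standard Active Directory one):

    nslookup -type=SRV _ldap._tcp.dc._msdcs.vsphere.lab 192.168.10.1

    If the SRV records come back, the new domain is advertising itself correctly in DNS.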

    Okay, so we have everything ready apart from our virtual ESXi hosts and the vCenter server. We'll continue the journey in part 3.

    Friday, 22 October 2010

    Building a vLab Part 1: The Design

    Like many in the VMware community, having a home lab on which to try things out is something I've been working on for some time. As I prepared to update my VCP3 to the VCP4, I thought it would be good to build myself a new "vLab" to test - and break - things without worrying about production systems and complaining users.

    This mini series is partly inspired by a posting on TechHead's blog. I think there was originally going to be a series there, but for whatever reason (I'm guessing time was the issue), it didn't really develop.

    In order to be able to run through the VCP4 syllabus, I needed the following:

    • 2 x ESX servers
    • 1 x VMware vCenter server
    • 1 x Active Directory Domain Controller
    • 1 x SAN/NAS shared storage array
    • 1 x router/firewall that isolates the vLab from the rest of my home network

    This is what I'm aiming for:

    [Diagram: the planned vLab design]

    Obviously this is going to take up a fair amount of space and will be noisy and hot. So the approach I'm going to take is to run the entire vLab on a single HP ML115 G5 with 1 x quad core CPU and 8GB RAM. I'll run VMware ESXi as the base hypervisor and then install the following VMs:

    • 1 x Windows Server 2008 R2 VM to run vCenter Server (4.1 requires a 64bit OS)
    • 1 x Windows Server 2003 VM to run as an Active Directory Domain Controller
    • 1 x OpenFiler VM to run as an iSCSI and NFS server
    • 1 x Vyatta VM to run as a router/firewall
    • 2 x VMware vSphere Hypervisor (ESXi) VMs to run the lab VMs

    For the Windows Server 2003/2008 licences I'll use my TechNet subscription; the OpenFiler and Vyatta installs are free. For the vCenter Server and enterprise features of vSphere, I'll have to use the evaluation licences.

    With the software components downloaded and ready to go, it's time to do the build: http://livingonthecloud.blogspot.com/2010/10/building-vlab-part-2-infrastructure.html

    Friday, 8 October 2010

    Sun X4100 M2 firmware upgrade

    This is a very short note about an issue that others might run into...

    I was trying to upgrade the firmware on a Sun X4100 M2 server to the latest release and the System BIOS upgrade was failing. I was picking the firmware image up from a network drive which may have been the problem, as copying the image to my C: drive and then installing the upgrade worked fine.

    Not sure why this should be the case, but the upgrade has now worked.

    Thursday, 7 October 2010

    Configure Solaris 10 for mail relaying

    We have a number of devices on our network that can send email alerts. It makes sense to have a central server that can act as a mail relay. We have a Solaris 10 server "sol10" that comes bundled with sendmail, but this is not configured to act as a mail relay.

    Making the Solaris server relay messages to another host involves editing the /etc/mail/sendmail.cf file and setting the value:

    # "Smart" relay host (may be null)
    DSmailserver.my.domain

    Obviously replace "mailserver.my.domain" with the FQDN of your real mail server that you want to send email through. Restart sendmail by running:

    svcadm restart /network/smtp

    This setting will allow mail that originates on "sol10" to be sent out, but does not help when you want other devices on your network to use sol10 as their relay. The answer was surprisingly easy:

    Create a new file /etc/mail/relay-domains. In this file, put the networks you want sol10 to accept email from. For example, if you have devices on the 10.0.0.0/8, 172.16.0.0/16 and 192.168.20.0/24 networks and want to use sol10 as the relay, enter the following lines in /etc/mail/relay-domains:

    10
    172.16
    192.168.20

    Once done, restart sendmail again (same command as above), configure your clients to use the Solaris server as their SMTP server and check the output in /var/log/syslog while you send a test message.
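
    If you want to test the relay by hand before touching any device configuration, a manual SMTP session over telnet from one of the permitted networks does the job. The sender and recipient addresses below are placeholders for your own:

    telnet sol10 25
    HELO client.my.domain
    MAIL FROM:<alerts@my.domain>
    RCPT TO:<me@my.domain>
    DATA
    Subject: sol10 relay test

    Test message relayed through sol10.
    .
    QUIT

    A 250 response after the final "." means sol10 accepted the message for relaying; a "Relaying denied" response means the client's network isn't covered by relay-domains.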

    Monday, 4 October 2010

    Upgrading to ARCserve r15

    We've been running ARCserve 11.5 on UNIX for a number of years, but CA have effectively stopped development on it. The Windows version was not suitable for our environment because it was (at the time) unable to perform incremental and differential backups of UNIX/Linux filesystems.

    The latest ARCserve release (r15) now supports incremental/differential backups of UNIX/Linux filesystems, so we made the jump.

    As most of our Windows and Linux servers are now VMs running in our VMware cluster, we have opted to perform block level VM backups. To do this, we installed the ARCserve Backup Agent for Virtual Machines on our vCenter server.

    ARCserve uses the VMware Virtual Disk Development Kit (VDDK) to provide integration with the VMware Data Protection APIs. I installed this on our vCenter server and configured the ARCserve server to back up all the VMs using the vCenter server as a proxy (note: this is not a VCB proxy as it does not copy the files; rather, the VDDK provides a direct way of accessing the underlying VMDKs).

    The problem we had was that the VM backups failed with the following error:

    VMDKInit() : Initialization of VMDKIoLib failed

    To cut a long story short, the problem was that the vCenter server is 64bit (required by vSphere 4.1). The fix, provided by CA support, was to extract the vddk64.zip file in C:\Program Files (x86)\VMware\VMware Virtual Disk Development Kit\bin.

    The problem here appears to be that the VMware installer for the VDDK does not create the 64bit files when installing on a 64bit version of Windows. By adding these and restarting the ARCserve processes, the backup worked successfully.

    Now to test the overnight backup of all our VMs...

    Edit: The backup appears to be working fine!

    Edit 2: Forgot to mention that the CA support representative also modified the system PATH variable to include the 64bit VDDK driver: C:\Program Files (x86)\VMware\VMware Virtual Disk Development Kit\bin\vddk64
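
    For reference, the same change can be made from an elevated command prompt rather than through the System Properties dialog. This is only a sketch; note that setx /M rewrites the machine-level PATH using the currently expanded value, so check the result in a new command prompt afterwards:

    rem Append the 64bit VDDK directory to the system PATH (run as administrator)
    setx PATH "%PATH%;C:\Program Files (x86)\VMware\VMware Virtual Disk Development Kit\bin\vddk64" /M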