Now that we’ve covered installing and getting started with a VMware setup, the next step is to manage it.
As shown in the previous blogs, basic management of a simple setup is pretty easy, and as I hope to show you, so is expanding on this new VMware infrastructure.
In this blog I’m going to cover the following:
As you can see from the above, we are now moving into more SMB-type setups: a couple of hosts running High Availability, so the infrastructure keeps running even through a complete host failure.
We will begin by adding a new ESXi host into our existing environment. For this part I will assume that you have installed ESXi onto the new server and that we are managing it via the vCenter Server (as installed in my last blog: Getting Started with ESXi 5.1 (Part 1)).
Firstly, before we continue, let’s make sure we’ve added the correct DNS entry for the new host.
Finally, make sure we can ping it by name.
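If you prefer to script these two checks, here’s a quick Python sketch. The hostname `mresxi02` is from my lab and is just an example – substitute your own (and note `ping -c` is the Unix flag; on Windows it’s `-n`):

```python
import socket
import subprocess

def host_resolves(hostname):
    """Return the resolved IP for hostname, or None if the DNS lookup fails."""
    try:
        return socket.gethostbyname(hostname)
    except socket.gaierror:
        return None

def host_pings(hostname, count=2):
    """Return True if hostname answers ping (shells out to the system ping)."""
    result = subprocess.run(["ping", "-c", str(count), hostname],
                            capture_output=True)
    return result.returncode == 0

# "mresxi02" is the new ESXi host in this lab -- substitute your own name.
ip = host_resolves("mresxi02")
if ip is None:
    print("DNS entry missing - add it before adding the host to vCenter")
elif not host_pings("mresxi02"):
    print(f"Resolves to {ip} but not answering ping - check the host is up")
else:
    print(f"mresxi02 is at {ip} and responding - ready to add")
```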
Now that we can communicate with our new host, let’s add it into our datacentre.
Select your datacentre and select “Add Host”.
Then follow the step-by-step wizard (this was covered in the previous blog, but I will recap here just to be sure).
Once again, when you are prompted for the thumbprint, make sure the ESXi host you are adding matches the ESXi host you THINK you are adding…
You will now see your new host presented, and once finished it should be active and manageable.
As you can see from the screenshot, we have a warning regarding host memory usage; this is because I’ve got all these VMs running on the one host and memory is starting to get a tad thin on the ground.
As we have this nice new ESXi host to use, I’m going to migrate MRVC01 (the vCenter Server) over to it.
If you just jump straight in and try to migrate the VM, it won’t work, as first we need to configure vMotion. As shown below, the only option you can pick when right-clicking a VM and selecting Migrate is to move it to a different datastore.
So what is vMotion?
Well, vMotion enables the live migration of running virtual machines from one physical server to another with zero downtime, continuous service availability, and complete transaction integrity.
vMotion lets you:
Seeing as High Availability builds upon the same shared-storage foundations as vMotion, it’s definitely something we need to enable!
Sounds complex, doesn’t it? But like most things in VMware, it’s fairly simple.
Select the ESXi host and go to Configuration > Networking. From here, click Properties.
Select Management Network, and on the right-hand side we can see vMotion is currently disabled.
Select Edit, then simply put a tick next to vMotion.
Click OK, then repeat the same steps on your new ESXi host.
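If you’d rather script this than click through each host, the same change can be made via the vSphere API. Below is a rough sketch using the pyVmomi library (not something covered in these blogs); the vCenter address, credentials and the vmk0 interface are assumptions from my lab, and it obviously needs a live vCenter to run against:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect

# Lab details -- substitute your own vCenter address and credentials.
ctx = ssl._create_unverified_context()  # lab only: skips certificate checks
si = SmartConnect(host="mrvc01", user="administrator",
                  pwd="password", sslContext=ctx)

try:
    content = si.RetrieveContent()
    for name in ("mresxi01", "mresxi02"):
        host = content.searchIndex.FindByDnsName(dnsName=name, vmSearch=False)
        if host is None:
            print(f"{name}: not found in the vCenter inventory")
            continue
        # Tag vmk0 (the Management Network VMkernel port) for vMotion --
        # the equivalent of ticking the vMotion box in the GUI.
        host.configManager.virtualNicManager.SelectVnicForNicType("vmotion", "vmk0")
        print(f"{name}: vMotion enabled on vmk0")
finally:
    Disconnect(si)
```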
Now if we right-click the VM and select Migrate, you can see we are given additional options; what we want is “Change host”.
Select your new ESXi host (in this case MRESXI02).
You may (depending on the environment) get a few alerts. For example, my home lab is running at a 100Mb connection instead of 1000Mb.
I also removed the CD drive from the VM, which it was simply telling me it couldn’t access (as the ISO was stored locally and not on MRSAN01).
Which brings me to my next point: please note you CANNOT move VMs between hosts if the VM resides on LOCAL storage. It HAS to be on shared storage (unless you have the Essentials Plus licensing, with ESXi 5.1 and vCenter 5.1 – or so I believe).
But in the vast majority of cases, environments which require HA will have shared storage in place.
Continue and select Next (choose High Priority – after all, we want it done now, don’t we?).
You will now see it’s migrating the VM between hosts.
The only change I noticed was in reply times (using the VM itself, I didn’t notice any difference). Seeing as my SAN (running Openfiler) is in one room and my ESXi hosts in another, connected over a 10/100 network, I think reply times going from 1–2ms to 80ms during the migration is pretty reasonable!
Once complete, you will now see the VM listed as running on the other ESXi Host.
In terms of moving VMs manually, you can see it works well. Now let’s take this one step further and “automate” the process. If MRESXI01 fails, it would be nice if the VMs automatically came up on MRESXI02, wouldn’t it?
For this we need to create a new cluster.
Please NOTE: if you have a two-host setup (like in this example), select “Percentage of cluster resources reserved as failover capacity”.
If you leave it at the default (“Host failures the cluster tolerates”), it doesn’t work with only two hosts. In a larger environment (4+ ESXi hosts) you can leave the default settings.
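The maths behind that percentage setting is straightforward: to survive one host failure you need to keep one host’s worth of resources spare, which as a percentage of the cluster is 100 divided by the number of hosts (assuming identically sized hosts). A little sketch of the reasoning, with example host counts:

```python
def failover_reserve_percent(num_hosts, host_failures_to_tolerate=1):
    """Percentage of total cluster resources to reserve so the cluster can
    absorb the given number of host failures. Assumes equally sized hosts."""
    if host_failures_to_tolerate >= num_hosts:
        raise ValueError("cannot tolerate losing every host in the cluster")
    return 100 * host_failures_to_tolerate / num_hosts

# Two-host cluster like this lab: half the cluster must be held back.
print(failover_reserve_percent(2))   # prints 50.0
# Five hosts tolerating one failure: only 20% held back.
print(failover_reserve_percent(5))   # prints 20.0
```

This is why a two-host cluster is the awkward case – reserving 50% of your resources is a big overhead, and it shrinks quickly as you add hosts.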
We can then drag and drop the hosts into the new cluster.
After a short period, once they have configured themselves, you will see one becomes the Master (above) and one becomes the Slave (below). Notice the “vSphere HA State” field.
Now we have HA enabled, we can see MRDC01 and MRVC01 are both running on MRESXI02 and MRSQL01 is on MRESXI01. (I’ve deleted some of the old VMs to make it a bit clearer.)
I’m now going to shut down MRESXI01, and *hopefully* the MRSQL01 VM will automatically be restarted on MRESXI02.
The server has detected there has been a power failure and HA is now active.
We can see an alert next to MRESXI01, and if we switch to the console view we can see the VM is currently booting up on the surviving host.
Watching a constant ping to the server, we lost 34 pings in total, which for the size of setup we have is not bad going at all. Obviously, for the more critical servers in your environment there are other HA methods we can put in place, but this guide was purely to show the basics; for a lot of SMBs, this short amount of downtime after an entire HOST failure is more than sufficient.
Finally, if we check the VMs and where they are running, we can see they are all located on MRESXI02.
There we have it – hopefully you’ve seen it’s not as daunting as you may have first thought, and this helps you become more confident with your VMware setup.
Thanks for reading