One of our hosts became inaccessible over the weekend while I was migrating VMs between hosts. After a number of attempts to gracefully reboot it I was left with no choice but to run reboot -f and wait patiently while it restarted. Sadly HA didn’t have an opportunity to migrate machines off, so I had to log in to the host via the C# client and power all the VMs on again. Thankfully none of the machines on there caused disruption and total downtime was about 10 minutes. Not ideal, though.
Anyway, as a result of the reboot I noticed a VM that had been renamed to Unknown VM. I had an idea of what it could be, but after checking the events on the host I was left a little confused, as the VM I thought it was had already migrated to the new datastore. Not a problem though, and if you find yourself in this situation the following should help:
1. SSH into your host as root.
2. Run cat /etc/vmware/hostd/vmInventory.xml – this outputs the list of VMs currently registered on your host. Compare this with the list of VMs actually running on your host.
3. Right-click the Unknown VM entry and click Remove From Inventory.
4. Browse to the appropriate datastore for the virtual machine and open its folder.
5. Right-click the *.vmx file and click Add to Inventory.
6. Power on the virtual machine.
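Rather than eyeballing the raw XML, you can pull out just the registered .vmx paths with sed. Here’s a minimal sketch against a sample file that mimics the usual vmInventory.xml layout (treat the exact element names as an assumption); on a real host you’d point the sed line at /etc/vmware/hostd/vmInventory.xml instead:

```shell
# Sample inventory file mimicking /etc/vmware/hostd/vmInventory.xml,
# so the extraction can be tried anywhere. Paths are made up.
cat > /tmp/vmInventory.sample.xml <<'EOF'
<ConfigRoot>
  <ConfigEntry id="0000">
    <objID>1</objID>
    <vmxCfgPath>/vmfs/volumes/datastore1/web01/web01.vmx</vmxCfgPath>
  </ConfigEntry>
  <ConfigEntry id="0001">
    <objID>2</objID>
    <vmxCfgPath>/vmfs/volumes/datastore2/db01/db01.vmx</vmxCfgPath>
  </ConfigEntry>
</ConfigRoot>
EOF

# Print one registered .vmx path per line.
sed -n 's|.*<vmxCfgPath>\(.*\)</vmxCfgPath>.*|\1|p' /tmp/vmInventory.sample.xml
```

Any path in that output that doesn’t match a VM you can see in the client is a candidate for the Unknown VM.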
Now, in my case the VM had actually completed its migration and appeared further down the list of running VMs, so I only needed the first three steps.
For reference here are the VMware KBs I used for troubleshooting:
Restarting the Management agents on an ESXi or ESX host (1003490)
Identifying Fibre Channel, iSCSI, and NFS storage issues on ESX/ESXi hosts (1003659)
Inaccessible virtual machines are named as Unknown VM (2172)
A virtual machine cannot be powered on and shows as unknown (1008752)
I’m in the process of merging datastores at the moment as we seem to be generating some mammoth-sized VMs and we’re fast approaching capacity. Amongst other servers, our Exchange box is approaching 1TB and various other platforms require large amounts of space. Bearing in mind that we need to keep some space free to ensure servers don’t actually stop running, plus room for things like snapshots and backups, it’s got to the point where something has to be done.
I know that it’s not just a case of right-clicking and unmounting datastores, as that runs a high risk of causing an All Paths Down (APD) condition. We experienced APD not so long ago and that’s not something I wanted to go through again, so I needed some instructions on how to do it cleanly and avoid a nightmare scenario.
Thankfully I came across this excellent guide on how to achieve just that, written by Jerry Wilkin. Be sure to bookmark his blog as there are some other handy tutorials and articles you might find useful (especially if you’re into guitars).
I’m a little late to the party but here’s an awesome new VMware fling you’ll want to take a look at. It’s essentially a Java front end for the esxtop and resxtop utilities used to, amongst other things, diagnose performance issues within your VMware environment.
Head to http://labs.vmware.com/flings/visualesxtop and download the zip file. Once downloaded, unzip it and rename the unzipped file to include the .tar extension. You’ll then need something like 7zip to untar that file, and eventually you’ll get to the goods. Double-click the visualEsxtop.bat file and you’ll be presented with the login box; enter the IP address of the host you want to investigate along with your credentials and you’ll be presented with the initial screen.
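The rename-then-untar dance is easy to fumble, so here’s the same sequence run against a stand-in archive you can try anywhere. All the file names below are placeholders, not the fling’s real ones – substitute whatever your download actually unpacks to:

```shell
# Build a stand-in for the fling download: a tar archive that has lost
# its .tar extension, just like the file found inside the zip.
mkdir -p /tmp/fling-demo && cd /tmp/fling-demo
mkdir -p payload && echo 'echo visualEsxtop started' > payload/visualEsxtop.sh
tar -cf visualEsxtop payload        # an extensionless tar, like the download

# The actual steps: give the file its .tar extension back, then untar.
mv visualEsxtop visualEsxtop.tar
tar -xf visualEsxtop.tar            # plain tar works; 7zip is only needed on Windows
sh payload/visualEsxtop.sh          # on Windows you’d launch the .bat instead
```

On Linux or a Mac, plain mv and tar do the whole job with no extra tooling.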
ESXtop is still something of an undiscovered entity to me; I’m able to diagnose the usual issues but I’m not the man to offer any sage advice. However, if you’re after some tips to get you going then check out Duncan Epping’s post on Visual ESXtop over at Yellow Bricks.
Oh and not to forget our Mac friends, here’s a guide to getting things going in OSX by William Lam.
If you’re testing and learning VMware products then building a lab is pretty much the single most important thing you need to do before anything else. For your VCP you’re going to need a domain controller, two ESXi hosts, a vCenter Server (appliance or otherwise) and storage. Not all of us have access to equipment of a suitable level to fully explore the range of VMware products, so the next best thing is a lab embedded in VMware Workstation. This is where AutoLab comes in, and it can save you hours of configuration.
Really, hours. And the best bit is that it’ll run in less than 12GB of RAM. Obviously you’re not going to get a full View or vCloud Director environment going, but you’ll definitely get a VCP lab running.
It’s a straightforward job, the instructions are written in plain English, and there’s plenty of help available on the forums and via Twitter. It took me a couple of attempts to get my lab ‘just right’; once you find that sweet spot I suggest shutting all your VMs down and backing them up. Then, if it all goes south, all you need to do is remove them from Workstation and re-add them.
I’m in the process of reinforcing my vDS (vSphere Distributed Switch) knowledge after struggling quite considerably when studying for my VCP, and I needed to simplify my lab configuration. I’m currently using the free Microsoft iSCSI Software Target for shared storage, and after reading various articles I decided to switch to NFS storage by way of Openfiler. The process is straightforward enough and I’ve outlined the steps I took below. Please bear in mind that this is my lab environment and I had no existing VMs residing on my datastores as it was a fresh build. If you’re going to do this and you do have existing VMs, install and configure Openfiler and add your NFS datastores to vCenter BEFORE you unmount your existing iSCSI datastores.
1. Download Openfiler. There’s a VMware appliance ready for you to download but I grabbed the ISO as I like to run through the installation for myself.
2. Install and configure Openfiler using this awesome guide by @jreypo: http://jreypo.wordpress.com/2010/11/30/configure-nfs-shares-in-openfiler-for-your-vsphere-homelab/ – you’re going to want to create two datastores to satisfy HA heartbeat requirements so follow the guide through twice and add both datastores to each of your hosts.
3. Verify in the vSphere Client that both new NFS datastores show as mounted on each host.
4. Move your VMs onto the new datastores by cold migration (power each VM off, then migrate it to the new datastore).
5. Disable HA on your cluster.
6. Unmount both of your existing iSCSI datastores from your hosts.
7. Delete the unmounted datastores.
8. Enable HA and if necessary reconfigure your hosts for HA again if they throw up warnings.
9. Once that’s done you’re probably going to want to remove the iSCSI software adapter from vCenter, and there’s a guide for that right here: http://jackstromberg.com/2013/01/how-do-i-remove-an-iscsi-software-adapter-in-vmware/ – note that this will involve rebooting your hosts.
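For reference, the datastore mounts in step 2 can also be scripted from each host’s shell with esxcli rather than clicked through in the client. A hedged sketch – the Openfiler IP, export paths and datastore names below are made up for illustration, and the guard means the mount commands only fire where esxcli actually exists (i.e. on an ESXi host):

```shell
# Mount two NFS datastores (two, for HA heartbeat redundancy) on an
# ESXi host. IP, export paths and volume names are hypothetical.
NFS_IP=192.168.1.50
for n in 1 2; do
  CMD="esxcli storage nfs add -H $NFS_IP -s /mnt/vg0/nfs/ds$n -v NFS-DS$n"
  if command -v esxcli >/dev/null 2>&1; then
    $CMD                        # on the host: actually mount the export
  else
    echo "$CMD"                 # elsewhere: just print what would run
  fi
done
esxcli storage nfs list 2>/dev/null || true   # verify the mounts (host only)
```

Run it once per host, or wrap it in a loop over your hosts with your remote CLI of choice.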
And I’m happy to say that after reconfiguring my storage I was able to migrate my standard switch to a distributed switch with zero hassles.