Unknown Virtual Machines After Host Reboot

One of our hosts became inaccessible over the weekend while I was migrating VMs between hosts. After a number of attempts to gracefully reboot the host I was left with no choice but to reboot -f and wait patiently while it came back up. Sadly HA didn't get a chance to restart the machines elsewhere, so I had to log in to the host via the C# client and power all the VMs on again. Thankfully none of the machines on there caused any disruption and total downtime was about 10 minutes. Not ideal, though.

Anyway, as a result of the reboot I noticed one VM had been renamed to Unknown VM. I had an idea of which VM it could be, but after checking the events on the host I was left a little confused, as the VM I suspected had already migrated to the new datastore. Not a problem, though, and if you find yourself in this situation the following should help out:

  1. SSH into your host as root
  2. Run the following: cat /etc/vmware/hostd/vmInventory.xml – this lists the VMs currently registered on the host. Compare it with the list of VMs actually running on the host.
  3. Right-click the Unknown VM entry and click Remove from Inventory.
  4. Browse to the appropriate datastore for the virtual machine and open the folder.
  5. Right-click the *.vmx file and click Add to Inventory.
  6. Power on the virtual machine.
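To make step 2 a bit more concrete, vmInventory.xml holds one ConfigEntry per registered VM, with the path to its .vmx file in a vmxCfgPath element. The sketch below builds a made-up sample of that file and strips the tags so you are left with just the .vmx paths to compare against what's running; on a real host you'd read /etc/vmware/hostd/vmInventory.xml instead. The datastore and VM names are invented for illustration.

```shell
# Made-up sample of /etc/vmware/hostd/vmInventory.xml (paths are placeholders)
cat > /tmp/vmInventory.xml <<'EOF'
<ConfigRoot>
  <ConfigEntry id="0000">
    <objID>1</objID>
    <vmxCfgPath>/vmfs/volumes/datastore1/web01/web01.vmx</vmxCfgPath>
  </ConfigEntry>
  <ConfigEntry id="0001">
    <objID>2</objID>
    <vmxCfgPath>/vmfs/volumes/datastore2/db01/db01.vmx</vmxCfgPath>
  </ConfigEntry>
</ConfigRoot>
EOF

# Strip the XML tags so only the registered .vmx paths remain, one per line
grep -o '<vmxCfgPath>[^<]*</vmxCfgPath>' /tmp/vmInventory.xml | sed 's/<[^>]*>//g'
```

Any registered path without a matching running VM (or vice versa) is a candidate for the Unknown VM entry.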

Now, in my case the VM had actually completed the migration and appeared further down the list of running VMs, so I only needed steps 1–3.
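If you'd rather stay in the SSH session than switch to the C# client, the same remove/re-add can be done with vim-cmd. This is a sketch of the equivalent commands (only runnable on the ESXi host itself); the VM ID 42 and the .vmx path are placeholders you'd substitute from the getallvms output:

```shell
# List registered VMs with their inventory IDs; the orphaned entry appears here too
vim-cmd vmsvc/getallvms

# Remove the stale "Unknown VM" entry from the inventory (42 is a placeholder ID)
vim-cmd vmsvc/unregister 42

# Re-register the VM from its .vmx on the correct datastore (placeholder path),
# which prints the new inventory ID...
vim-cmd solo/registervm /vmfs/volumes/datastore1/web01/web01.vmx

# ...then power it on using that new ID
vim-cmd vmsvc/power.on 43
```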

For reference here are the VMware KBs I used for troubleshooting:

Restarting the Management agents on an ESXi or ESX host (1003490)
Identifying Fibre Channel, iSCSI, and NFS storage issues on ESX/ESXi hosts (1003659)
Inaccessible virtual machines are named as Unknown VM (2172)
A virtual machine cannot be powered on and shows as unknown (1008752)