Reset ESXi trial license

Quoted directly from Aaron:

“This guide will give you the steps needed to reset the license file so that you can apply the evaluation license back to your ESXi host.

WARNING: This is for education/informational testing/development purposes only, and should not be used on a production server.

To reset your expired ESX 4.x, ESXi 4.x, ESXi 5.x or ESXi 6.x 60 day evaluation license:

  1. Login to the HOST via SSH or Shell
  2. Remove /etc/vmware/license.cfg
  3. Copy /etc/vmware/.#license.cfg to /etc/vmware/license.cfg
  4. Restart the vpxa service

Or simply copy the code below and paste it into your SSH session.

rm -r /etc/vmware/license.cfg 
cp /etc/vmware/.#license.cfg /etc/vmware/license.cfg 
/etc/init.d/vpxa restart

Then open the “Licensed Features” option in the configuration tab of the ESXi host through the vSphere Client.

Click on “Edit” in the top right of the “Licensed Features” page

Once the “Assign License” window opens you will see two options: a category for “Evaluation Mode” and one for “Assigned License”. Click on the “(No License Key)” option and then click “OK”. This will set the host back to “evaluation” mode and will give you access to all features for 60 days!”

*Update* This works if the current trial license hasn’t expired yet. If it has already expired, whether it was an existing trial or an alternative license type, the above trick doesn’t seem to work. When navigating to the license area of the host afterwards, it expects you to enter a key (I didn’t test actually putting in all 0’s, which might just be the field’s placeholder text rather than an actual value), and it wouldn’t let me proceed. In this case a host reboot might be required, using these commands instead:

rm /etc/vmware/vmware.lic
rm /etc/vmware/license.cfg
reboot

This worked and I was able to spin up my VMs on the host again.

Installing vCenter

Since vCenter will no longer be supported on Windows moving forward, all discussion of vCenter here will simply use its now well-known acronym: VCSA, the vCenter Server Appliance, which is based on Linux.

I just signed up for VMUG Advantage, so I get to play with vCenter at home, yay. Otherwise, grab the required ISO from VMware’s product portal using your own VMware login ID.

Although 6.7 is out, and well polished, it cannot manage ESXi 5.5 hosts. Since I still have a few of those I’d like to use in my cluster, I’m going to be using VCSA 6.5 for this guide.

Also, I technically only have 5.5-based hosts at this moment (I love the phat (C#) client).

The new version runs on VMware’s own PhotonOS.

VCSA CPU and RAM Requirements

VCSA Storage Requirements

Open/mount the ISO on your OS of choice. For me on Windows: simply mount the ISO and navigate to vcsa-ui-installer\Win32\installer.exe.

Run it!

Stage 1

*Drools* I’m not sure what to do… *Clicks Install*

Introduction; Next
EULA; Accept; Next
VCSA + PSC
Target host + port + username + password; Next
VM name + root password
Select datastore (I enabled Thin Disk)
Give a system name (which you’ll want to point at the IP address you define, in the DNS servers used by the VCSA and by any client systems needing management access)
IP address
IP mask
Gateway + DNS server


Finish.
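As an aside, the same ISO also ships a CLI installer under vcsa-cli-installer, if you’d rather script Stage 1 than click through it. A rough sketch from memory (the exact JSON keys vary by version, so start from the sample templates bundled on the ISO rather than trusting this):

vcsa-cli-installer\win32\vcsa-deploy.exe install --accept-eula my-vcsa.json

The my-vcsa.json here is your copy of one of the samples under vcsa-cli-installer\templates\install (e.g. embedded_vCSA_on_ESXi.json), filled in with the same host, network, and SSO values the UI wizard asks for.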

Now it states this will take a few minutes, as it depends on the hardware specs of the ESXi host it was deployed to, and maybe internet speed if the RPMs aren’t already in the OVF template that was deployed. The VM also has to boot.

Quick Break time!

Interesting default… until it finally completes…

Stage 2

NEXT!

NTP servers (0.ca.pool.ntp.org,1.ca.pool.ntp.org,2.ca.pool.ntp.org)

Next

New SSO domain; create a password for administrator@vsphere.local (I’ll create an SSO domain for zewwy.ca later, to give my local AD-based accounts logon rights, in this or another tutorial).

DEPLOY!

Mhmm, after 2 attempts I kept getting a pschealth service error. I googled it, but the VMware KB was rather useless.

On the third try, I set the system name to the IP address, and set the vCenter to simply use the host’s time instead of NTP (even though I’d used the same NTP server the host was using… so, shrug). I also waited a little longer before starting Stage 2, and on that third try the installation finally succeeded.

Then I added the license key, which was provided to me when I checked out the “purchase” on the VMUG Advantage partner site, and assigned it to vCenter.

Summary

Overall the process is very straightforward. In the next post I’ll cover adding hosts, assigning keys, and connecting the VCSA to an AD server for an alternative SSO domain. Stay tuned!

Veeam, SMB, and the Failed to get disk free space

The Story

I wanted to try Veeam B&R Free again, now that I’ve discovered a trick for re-issuing the 60-day trial key on ESXi hosts, so I should be able to get past the old issue I blogged about in “The VMware Screw“…

So I D/L’d the latest and greatest from Veeam, which is B&R 9.5 Update 4; grab the latest builds here (Veeam login required)…

Run the installer, nothing special here. Love the new UI, amazing how much nicer it is vs the old Free Edition.

Anyway, navigate to Backup Infrastructure to add a repo, in this case a simple USB HDD I was sharing via SMB from a FreeNAS server. I had created it with open access, so no authentication was required to access the share.

As shown here, I was accessing the file share without issue in Windows Explorer…

However, attempting to add it as a Repo…

Whomp, whomm, whmomomomomom.

Kind of annoying that anonymous SMB apparently isn’t supported as a repo type, or maybe just not with my particular setup; I’m not exactly sure of the reason this error gets hit, as I don’t have access to Veeam’s source code. Anyway, I started to google for a possible solution. Annoyingly, the first result was simply a thread in which a Veeam rep just pointed to the second most common solution post, which basically stated:

“add the registry setting:

Key: HKLM\SOFTWARE\Veeam\Veeam Backup and Replication\
DWORD: NetUseShareAccess = 1

As per KB1735”
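If you’d rather not click through regedit, the same value can be set from an elevated command prompt on the Veeam server; a quick sketch (the KB also suggests restarting the Veeam Backup Service afterwards):

reg add "HKLM\SOFTWARE\Veeam\Veeam Backup and Replication" /v NetUseShareAccess /t REG_DWORD /d 1 /f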

Did that and… Failed to get disk free space. Since a lot of people on these threads mentioned the use of a username and password, I decided to follow this guide on creating a user for an SMB share on FreeNAS.

Follow the Veeam wizard and… “Can’t get disk free space”

Ughhhh… I was about to give up and actually attempt a physical alternative, when I noticed something people were saying…

ender – “Depending on your SMB server, you may need to enter the username as DOMAIN\user or SERVERNAME\user before it’s accepted.”

StivoBerlin – “Have you tried to write “YOUDOMAINONYOURSYNOLOGY\youruser” instead of only “youruser” for the NAS login ?”

michaelbrandi – “I have found that it makes a difference if you enter “full” credentials so not admin but IP\admin, not sure it’s universal, but it helped me.”

That’s when I added a credential to Veeam as “FREENAS\TestUser” along with the password, used that credential after entering the path, and got past the error!
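For what it’s worth, a quick way to sanity-check that credential format outside of Veeam is plain old net use from the Veeam server (the share name here is made up; substitute your own, and the * prompts for the password):

net use \\FREENAS\usbshare /user:FREENAS\TestUser *

If that mapping succeeds, the same SERVERNAME\user format should get past Veeam’s repository wizard too.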

I was actually rather impressed at the speed of the backups considering it was a USB drive shared over the network via SMB from FreeNAS…

Look at that: backed up a 20 gig VM in 8 minutes, wooo! Not bad for free.

Maybe I’ll follow up on my old free VM backup series and bring it back with a proper tutorial on each step required to make it work.

For now I hope this information helps someone!

ESXi 6.5 on Proliant Gen9 Hardware Status Unknown

I’ll keep this post short.

If you have a ProLiant Gen9 server running ESXi 6.5u2 along with VCSA 6.5u2:

All hosts will stop displaying any hardware status. This should be fixed immediately, as you won’t get alerts on any hardware faults via IPMI. This includes status from hosts running ESXi 5.5 or 6.5.

The first fix is to upgrade the VCSA to 6.5u3.

After upgrading the VCSA to 6.5u3… hardware status will come back for each host… however, if you are running ESXi 6.5u2 on the Gen9 servers, you’ll see something like this:

As you can see, some sensors are a lil wonky…

The fix here is to upgrade the host to 6.5u3 via the HPE build.

After the hosts and the VCSA are on 6.5u3, all is good, and hardware faults will again trigger critical alarms in vSphere.

A Productive Nightmare

The Story

Lack of Space

It all begins with a new infrastructure design; it’s brilliant. All the technical stuff aside, the system is built and ready for use. One problem: the new datastore is slightly overused (there are many plans for service migrations and removal of old bloated servers, but they have not yet been completed). I had one datastore that was used for a test environment; with the whole test environment down and removed, this datastore would be the perfect temp location until the appropriate datastore could be acquired.

The Next Day

I was chatting with our in-house developer when a user walked in asking why they couldn’t complete a task on the system. I figured it was a workflow server issue (simply rebooting it often fixed any issues with it); however, this time I also received an email from the DBA stating there were reports of a DB issue due to bad blocks at the storage level.

At this point my heart sank. I quickly logged into the storage unit and was shocked to not see any notification of issues. Deciding right then and there to move back to reliable storage, I kicked off the svMotion; while it was in progress, the storage unit I was logged into finally showed errors of disk failure: one disk had failed while another had become degraded (in a RAID 1+0 this can be bad news bears). After the svMotion completed there was still a corrupted DB (we all have backups, right?). Luckily it was just a configuration DB for the workflow server and not any actual data, so I provided the DBA with a backup of the database files; it didn’t take long and everything was back to green.

That Weekend

I decided to play catch-up on the weekend due to the disruptive nature of the disk failure that week. To my dismay, and only by chance, I noticed the new host in the new cluster was showing disconnected from vCenter… What the…

Since I wasn’t sure what was going on here at first, I chatted with the usuals on IRC and was informed instantly: “RAMdisk is full”. After some lengthy recovery work (shutting down VMs and manually migrating them to an active host in vCenter), I discovered it was because the ESXi host had lost connectivity to its OS storage (in this case, it was installed on an SD card).

So I updated the firmware on the host server. So far (after a couple weeks now) this has resolved the issue.

Then, while I was working on the above host’s loss of connectivity, the other host lost connection to vCenter! This one had much different signs and symptoms. After doing the exact same process of moving VMs off this host, it was determined by VMware support that it was “possibly” due to the loss of the one datastore. Remember the datastore I discussed above? Although I had moved any VM usage of it off the hosts, I had not removed it as an active datastore. And although the storage unit had remained accessible while the disks were failing, for some reason the whole unit had since failed (the UI was now unresponsive). So I had to remove this datastore and all associated paths. After all this, everything was again green for this cluster.

So much for that weekend…

That Storage Unit

Yeah, alright, so that storage unit… it was a custom-built FreeNAS box spliced together from an HP DL385p Gen8 server. I got this thing for dirt cheap, and it was working as a datastore perfectly fine before the disk failure, so I don’t blame the hardware, or even FreeNAS, for all the crap that happened. It was just a perfect storm.

So I decided to try something different with this unit first. Since I had been using an LSI 9211-8i flashed in IT mode (JBOD) for the SAS expanders in the front (25-disk SFF), I decided I would try to build my first hyper-converged setup. That meant creating a FreeNAS VM, hardware passthrough of the storage controller (the LSI 9211-8i), and then creating datastores using the disks in the front.

Sooo

The Paradox

The first issue I had was the fact that you need a datastore to host the FreeNAS VM’s config and hard drive files… but if we are going to do hardware pass-through of the entire SAS expander backplane via the LSI card, that means it’s not accessible or usable by the host OS. Uggghhhh. Now, we could use NFS or iSCSI, but the goal for me was a fully self-contained system not relying on another host. I can easily install ESXi on a USB or SD card, but it won’t allow me to use those as datastores. At least not on their own…

Come here, USB datastore… I mostly followed this blog post on it by Virten; however, I personally love this old one by none other than my favorite VMware blogger, William Lam of VirtuallyGhetto.com.

*My Findings* Much like the comments there and on many other blog and forum posts about doing this, I could not get it to work on 5.1 or 5.5; those builds are too finicky, and I’d always get the same error about no logical partition defined or something. Yet it worked perfectly fine on 6.5 and 6.7 (I personally don’t use 6.0).

OK, so I decided to use ESXi 6.7, installed on an SD card, and set up an 8 gig USB-based datastore. The next issue is that you have to reserve the VM’s memory, or else you’d be limited to even less than 4 gigs, as ESXi will complain there is not enough room on the datastore for the swap file. Not a big deal here, as we have plenty of RAM to use (100 gigs of HP genuine ECC memory).
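For reference, the gist of that USB datastore procedure on 6.5/6.7, pieced together from the posts above; treat it as a sketch, since the device name is from my box and the end sector must come from your own getUsableSectors output:

/etc/init.d/usbarbitrator stop
chkconfig usbarbitrator off
ls /dev/disks/
# note your USB device name, e.g. mpx.vmhba32:C0:T0:L0
partedUtil mklabel /dev/disks/mpx.vmhba32:C0:T0:L0 gpt
partedUtil getUsableSectors /dev/disks/mpx.vmhba32:C0:T0:L0
# use the second number returned above as the end sector below
partedUtil setptbl /dev/disks/mpx.vmhba32:C0:T0:L0 gpt "1 2048 15466495 AA31E02A400F11DB9590000C2911D1B8 0"
vmkfstools -C vmfs6 -S USB-Datastore /dev/disks/mpx.vmhba32:C0:T0:L0:1

That long GUID is just the VMFS partition type; the datastore name is whatever you like.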

I did manage to get FreeNAS installed on said datastore, and as you’d expect it was slowwwwww. My mind started to run wild and thought about RAMDisk, and whether it was possible to use that as a datastore… in theory… it is! William is still around! 😀

A couple notes on this:

1) You need an actual datastore, as it seems ESXi just creates symlinks to the PMem datastore. (I noticed this by attempting to SSH into the host and simply copy the VM’s files over; it failed, stating out of space, even though there was enough space defined for the PMem datastore.)

2) You create the VM and define the HDD to be on the PMem datastore, and it will warn you about non-persistence.

Sure enough, I created a FreeNAS VM on the PMem datastore and it was a fast install; but as soon as the host needed a reboot, an attempt to power on that VM said the HDD was gonezo. So this was cool, but without persistence it sort of sucks.

Anyway, I didn’t need the FreeNAS OS itself to have fast I/O, so I stuck with the USB-based datastore. Then I went to pass through the controller; enabling pass-through on the controller worked fine, but the VM wouldn’t start.

Checking the logs and googling revealed only ONE finding!

No matter what I tried, the LSI card or the built-in HBA gave the same error as the post above:
“WARNING: AMDIOMMU: 309: Mapping for iopn 0x100 to mpn 0x134bb00 on domain 1 with attr 0x3 failed; iopn is already mapped to mpn 0x100 with attr 0x1
WARNING: VMKPCIPassthru: 4054: Failed to setup IOMMU mapping for 1 pages starting at BPN 0x100000100”

Yay, another idea gone to shit and time wasted. I learned some things, but I wanted to bring some use back to this system… ugh, fine! I’ll just put it back to normal, connecting the SAS expanders to the P420i HBA, use the 2GB battery-backed cache to define a speedy datastore, and just keep it simple…

The Terrible HBA

I seriously don’t wish this HBA on anyone. So, after I put it all back to normal, the first thing I found was:

  1. When I booted the server and let the system POST, when it got up to the storage controller part (past the bottom indication to press F9 for Setup, F10 for Smart Provision, and F11 for Boot Menu), it would list the storage controller and its running firmware, in this case v8.00.
    Half the time, if I pressed F5 with no previous error codes and no disks or logical units defined, I sometimes got into the ACU (Array Configuration Utility); the other half, the fans would kick up to 100% and stay there while the ACU booted (showing nothing but an HP logo and a slow progress bar), and when the ACU finally did load I’d be presented with “No Storage Controller found”.
    (Trust me, I got a 40 min video of me yelling at the server for being stupid, haha.)
  2. This issue became 100% apparent as soon as I plugged in a drive with a logical unit defined by another (updated) version of Smart Array.
    To get around this issue I ended up grabbing the “latest” HP SSA (Smart Storage Administrator) tool from HP’s site. I quote “latest” due to the fact that it’s from 2013… Now, this allowed me to finally build some arrays to use with the planned ESXi build.

At first I noticed I wasn’t seeing the new logical drive I’d defined in the HP SSA within ESXi itself; I had totally forgotten to grab the HPE custom build, which includes all the required drivers for these pieces of hardware.

The first thing I noticed after grabbing HPE’s custom ESXi build, in this case 6.7 (requires VMware login), was that the keyboard kept buggering out on me when attempting to configure the management NIC.

At first I thought maybe the USB stick was crapping out due to the many OS installs I’d been doing on it, so I decided to move to using the logical array I’d built. The custom installer saw the new array and away I went. Still buggy. So I thought, maybe it’s the storage controller firmware? Looking up the firmware for the P420i (or equivalent), it appears there are numerous posts about issues and firmware updates… turns out there’s even an 8.32(c) Nov 2017 update. Since I was too lazy to build a custom offline installer for this firmware flash, I used an install of Windows Server 2016 and ran the live updater; to my amazement it worked flawlessly… yet also to my amazement, Windows worked perfectly fine on the same logical array regardless of the firmware it was running (is this a VMware issue…??).

So after re-installing the custom ESXi 6.7 from HPE, the host was still being buggy… and now started to PSOD (Purple Screen of Death)… are you kidding me, after everything that’s already happened… ughhhhh…

Googling this, I found either:

A) Old posts of vendor finger-pointing (around ESXi 3-4)

B) Newer posts (ESXi 6.7~); these led me to the only guy who claimed to have fixed his PSOD, and how he did it, here

I found I wasn’t getting the same errors he showed, which, given the logs, led me back to my first link. Having updated all the firmware and running HPE’s builds, I could only think to try the ESXi 6.5 U2 build, as the firmware was supposedly supported for that build.

I’m now running ESXi 6.5 U2 without any issues, and no PSODs! Unfortunately, without warranty on this hardware, I have no way to get HP to investigate making the newer 6.7 build run on this particular hardware.

Icing on the Cake

Alright, so now I should finally be good to go to use this hypervisor for testing purposes, right? Well, I had a bunch of spare disks and slots to create a separate datastore for more VMs, yay…

…until I went to boot that latest HP SSA offline image I mentioned above, the one that fixed the fan speed and the “No Storage Controller found” ACU issue. Well, now this latest HP SSA was getting stuck at a white screen! AHHHHHHHHHHHHHHHHH. How do I create or manage logical units and build arrays if the offline software is stuck? Well, I could have installed and learned how to use hpssacli and its associated commands (see the sketch below), but since I was already kind of stressed and bummed out at this point, I installed Windows Server 2016 and ran the HP SSA for Windows, which looks exactly like the offline version.
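For completeness, the hpssacli route I skipped looks roughly like this; a sketch with example slot and drive IDs, so double-check everything against your own show config output first:

hpssacli ctrl all show config
hpssacli ctrl slot=0 create type=ld drives=1I:1:1,1I:1:2 raid=1
hpssacli ctrl slot=0 ld all show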

I finally created all my arrays, installed the only stable version of ESXi with its associated drivers, got all my datastores on the host showing green, created a dedicated restore proxy, and am finally getting some use back out of this thing…

Conclusion

What… a …. freaking… NIGHTMARE!

 

A general system error occurred: Launch failure

Failure to Launch

Sound the Alarm! Sound the Alarm!

*ARRRRRRREEEEEEEEERRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR
ARRRRRRREEEEEEEEERRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR

and one heck of a day sweating bullets, am-I-right?!

Also, sorry about the lack of updates; with spring and work, it’s been hard to find time to blog. Deepest apologies.

The Story

Well, it’s Tuesday, so it’s clearly a day for a story, and boy do I have a story for you. I was going about my usual way of working… endlessly, to meet my goal… that never-attainable goal of perfection… anyway… I had completed a couple of major tasks in going from vCenter 5.5 to 6.5u2, and while I have yet to blog about that fun bag (because I had used my own PKI and certificates, the migration scripts VMware provides pooped the bed until I replaced them with self-signed ones), I digress; this has nothing to do with that… well, sort of.

Where was I… oh right… I was vMotioning a couple of VMs to some new 6.5U2 hosts I had set up on new hardware (yeah buddy, this was a whole new world (why did I just say that in the voice of Princess Jasmine?)), but alas, the final VM was not meant to move, for it had stalled at 20 percent (I had this happen once before during my deployment and discovered some interesting things about multi-NIC vMotions; I should probably blog about that too, but alas, my time is running short, and I still have many other things to tackle and blog about). And then……..

“A general system error occurred: Launch failure”

Ugh…. I could have jumped right to the logs, but I jumped on Google instead…

CloudSpark apparently came across this issue recently and posted in all the many places he could think of… Reddit, and the VMware forums.

The VMware post got a tad long, but it was more the post on Reddit that got me wondering…

“Haven’t seen this personally, but there’s a few things via google on the “Failed to connect to peer process” error that suggest it could be due to running out of storage. Specifically in one scenario, the /tmp/vmware-root folder on the ESXi host fills up with logs.” – Astat1ne

When I SSH’d into the host and ran “df -h”, I was surprised to get an error back:

esxcli returned an error: 1

Something to that extent, anyway. I ended up moving all the VMs back off the host and swapping out the SD card the ESXi software was installed on (I had a copy of the SD card with a clone of the ESXi software and host configuration, created right after the host was installed and configured).

Sure enough, after powering it back on, “df -h” returned clean. I was then able to vMotion all the VMs back onto the set of new hosts.
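In hindsight, a less drastic first step would have been checking the ramdisks per Astat1ne’s hint; something along these lines, where vdf is the ESXi-side df for ramdisks and the cleanup line is a hypothetical example, not a blanket recommendation:

vdf -h
ls /tmp/vmware-root | wc -l
rm /tmp/vmware-root/*.log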

Definitely a generic error message I’d never seen before, and thinking about it now, it’s rather comical. (I know, when you’re in the middle of the problem it’s not so funny)…

BUT it sure is now! :D…. this is so going to come back to bite me….

(await updates here post April 30th)

VMware ESXi 5.5
D-Link DGE-530T RevC

The Story

Are you guys ready for a story? This one is actually not so bad. A couple of days ago I posted on Facebook asking if anyone happened to have a spare PCI/PCIe Network Interface Card (NIC). Since it was going to be used for internet access I was OK with it being 100 Mbps, but was aiming for 1000 (now that Shaw provides over 300 Mbps internet, clearly 100 doesn’t cut it).

After a day of no luck, and a bunch of funny remarks (as almost none of my friends had any idea what I was talking about), I decided to take another look through my old computer hardware to see what I could scrounge up…

PCI NIC Found!

Well, well, not even dusty: a PCI NIC, exactly what I needed in my hypervisor to play with OPNsense. I originally was going to try layer 2 trunking via VLANs; however, the main vSwitch already had VMkernel NICs bound to the physical adapter at layer 3, and the same interface on my firewall (Palo Alto) wouldn’t allow me to create a layer 2 sub-interface if the main interface was already bound to layer 3. Since I wanted my OPNsense VM to get an actual public IP address, this required a connection from my VM directly to my modem at layer 2… yeah, another NIC. So here we are, and it didn’t take long for me to shut down my VMs, install the card, and boot my hypervisor back up. (I hope to one day have multiple hypervisors so I don’t have to shut down my VMs, but even then, if you don’t pay, chances are you won’t get access to the APIs that migrate the memory states of the VMs for you, so it’s a hassle either way…. anyway, back to the story.)

PCI NIC Found … NOT

Oh Borat, who brought you in?!?! So, as you may have guessed, I went to add a new vSwitch for my new VM to get its direct public IP, and to my dismay there was no physical NIC to pick… what the…

So, to Google! And hopefully either VMware support or, usually always better, personal blogs! We all love these, right… ahem… anyway…

You can probably guess where the official answer went, but I’ll enlighten you, as I did follow along for… pain? OK, I don’t know why I did; I was really hopeful it wasn’t going to be the answer I knew it was going to be…

Hey! Some of the commands they provided helped. Or did they? All this was, was some BS data-chasing to tell you: It’s Not Supported, SOWWY!

Clearly, there must be some answers by the community forums right??

The community’s great! VMware’s…. :S

So what do we get… One: unanswered, crying about a badly referenced link to its source. Two: also unanswered, crying about the same stuff we already know… it’s officially not supported. Well, I’m running ESXi 5.5 Free and using GhettoVCB’s scripts, also unsupported, so that’s not really an issue… the issue is the lack of help right now.

But will that bring me down? I don’t think so. The internet has many sites, and many people sharing their knowledge. How?!?! BLOGS! Ahem…

Blogs to the Rescue!

Yes, believe it or not, it is the power of the real untethered, unfiltered beauty that is blogging that actually gets us some meat and potatoes. My first source showed signs of light! One problem: it’s literally 9 years old and uses ESXi 4. It also required a fair amount of direct file placement and special manipulation. Most of this works fairly differently in ESXi 5.x, where VIBs (precompiled binaries that work with esxcli) are the preferred method. I avoid saying “supported” here, because I use these methods to install unsupported packages :D.

Alright, so now what? Well, the Holy Grail! This king managed to not only blog about getting this working, but shared the drivers/VIB packages required to get it to work too! Epic! Let’s get this dang NIC working…

1) Grab the VIB files

2) Change your support level on ESXi5+:

~ # esxcli software acceptance set --level=CommunitySupported
Host acceptance level changed to 'CommunitySupported'.

3) Install the driver with: "esxcli software vib install -v /DLink-528T-1.x86_64.vib"

4) Reboot

Sounds simple enough, let’s give it a shot… and I hit some errors. Classic…

I won’t show the errors just yet, as I have them in one long snippet, but basically I had a bit of a problem because of the GhettoVCB scripts I had pushed onto my host, and the error messages weren’t exactly clear… I attempted a couple of things first, like copying the VIB to the path it kept complaining about and specifying the fully qualified path to the VIB… nothing, until I stumbled across this…

esxcli software vib install -v /full/path/to/.vib -f

which finally gave me a successful driver install!
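So, for anyone following along, the full sequence that finally worked for me was:

esxcli software acceptance set --level=CommunitySupported
esxcli software vib install -v /full/path/to/DLink-528T-1.x86_64.vib -f
reboot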

Alright, and after reboot…..


OMG! No way, there it is, with the proper name and everything. Considering the blog post I followed was for a different NIC model, I wasn’t sure if it would work, but there it is… so let’s not get too ahead of ourselves, and see if it comes up and is able to transmit packets…

I was having some issues initially, so I decided to give my lil netbook a simple /24 IP and my OPNsense a simple /24 IP, just to validate that the card wasn’t the issue, nor the drivers I had just installed.

Plug them together, lights come up, that’s good… checking ESXi vSphere…

That’s good, and finally can we transmit?!?!

Hey!!!! We have communication! Now it’ll be figuring out getting the public IP configured properly. But we’ll save that for another post. 😀 Cheers!

How to Shrink a VMDK

Hey all,

It’s not often you have to shrink a VMDK file; expanding one is super easy, even on a live virtual machine. Shrinking one, however, isn’t as straightforward.

This guy does a decent job giving a step-by-step tutorial, but you’ll soon realize you can do it even faster, and without cloning…

1) Use his math to get the disk size you need to edit inside the VMDK descriptor:

The number highlighted above, under the heading #Extent description, after the letters RW, defines the size of the VMware virtual disk (VMDK).

This number is 83886080, and it’s calculated as follows:

40 GB = 40 * 1024 * 1024 * 1024 / 512 = 83886080
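As a worked example, if you instead wanted the disk to become 20 GB, the new extent value would be:

20 GB = 20 * 1024 * 1024 * 1024 / 512 = 41943040

Swap the 83886080 in the descriptor for 41943040 (vi from the ESXi shell works fine), and the extent now describes a 20 GB disk.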

2) Only shrink VMDKs where you know the end of the disk contains no allocated data. Do this in a test environment only, and make sure you have backups.

Now, instead of cloning, simply remove the disk from the VM and re-attach it. Watch its reattached size come up smaller, and matching, much like in the source guy’s post.

Free Hypervisor Backup
Before Part 3

The Story

I’m currently on a quest to fulfill my need for a free VM backup solution. My hypervisor of choice right now is ESXi; while there are great alternative free hypervisors (Citrix’s Xen, Microsoft’s Hyper-V, Linux’s KVM), I personally have always felt comfortable with VMware’s user interface (I’m talking about the old Windows phat client). I totally understood the need for a web-based client; however, I felt it a shame to drop the phat client, as it provided multiple benefits that the Web UI just doesn’t. Although the latest 6.7 Web UI has been pretty decent, besides their placement of the hostname location…. :@

Anyway… if you’ve been following my posts, you’ll see I decided to give Veeam Free a shot. I love these guys, great software, so I figured it best to get better acquainted with their software. To my dismay, they rely solely on VMware’s APIs. Which means no SSH backdoor tricks for us ESXi Free users.

However, as I also mentioned in my previous post, an awesome dude who runs virtuallyGhetto.com, William Lam, wrote a script to complete the task we want via the host’s local CLI, which can be connected to via SSH. The script is called GhettoVCB. I took a quick look at the source code and did find a couple of instances of zombie code and other anomalies, but for the most part it looked decent enough to give it a shot. I will get to that stuff in my next post; here, using the same example VM, I’ll cover where I came across a couple of issues and some interesting facts I discovered during this adventure.

My Discoveries

The first thing I noticed about the script was the lack of certain dependency checks. In this case there are no actual source-code dependencies the script relies on; it’s all meant to run on ESXi, and sure enough there’s a line of code to check for that…

Nice. However, there doesn’t seem to be a check to validate that the datastore specified in the very first variable has enough space to complete its task.

In this case the script would simply error out once the destination ran out of space, stating the source VMDK was the issue; this led me briefly down the wrong rabbit hole. After validating I had no issues with my source VMDK (booted the VM and checked all services and filesystem integrity), I noticed a couple things.

1) Even though I specified a thin disk for my destination, which had enough space to store the VM data (60~ GB), the thin disk was attempting to create the full provisioned size of the disk, because…

2) I forgot the source disk was set to Thick Provisioned Eager Zeroed

So, there were a couple things about this VM…

1) It’s my ZoneMinder VM which holds my IP camera footage on motion detection

2) This data is mostly useless due to no events having taken place

Alright, so now my goal was the following:

1) Remove all the unneeded data

A) Open the ZoneMinder Web UI
B) Click on the camera
C) Delete All Events
D) Wait a while for all the MySQL queries to kick off and clear the data
E) Use “df -h” to watch usage drop

2) Once I had all the data removed, I had to reclaim the space. I decided to dig up my old blog post on the subject… Well, that was a bit underwhelming and simply provided links instead of any valid examples… so I hope to provide a bit better detail here:

A) I’m running Linux, not Windows, so sdelete is out.

B) For my first attempts I decided to follow this dd example: “dd if=/dev/zero of=/zeroes && rm -f /zeroes” (DO NOT DO THIS; from my tests it caused the MySQL service to not come up properly). I then found people suggesting secure-delete or zerofree; however, I had some issues with those against a live system, and I wanted a simple, general, live-system technique… if there are so many references stating dd can do it… how? Then I found this:
“dd if=/dev/zero of=/tmp/empty.dd bs=1048576; rm /tmp/empty.dd”
I’m assuming it worked because I used /tmp instead of /… that’s the only difference I can think of.

C) At this point I made a backup using ghettoVCB, for which I had to use an alternate datastore to save the VMDK that I would then finally hole punch. In this case the ghettoVCB script converts the VMDK from thick to thin; however, it will still take up the provisioned size of the disk.

D) Now this is where I started to get a bit… annoyed… However, I did learn a few things. One is that & to send a process to the background doesn’t disconnect that process from the current terminal session. So while I was SSH’d in and running the clone operation, it wasn’t fully completing when my SSH session timed out, even though I used & to run it in the background (see the sketch below). Read this for more info, but the key takeaway is this: “nohup and disown both can be said to suppress SIGHUP, but in different ways. nohup makes the program ignore the signal initially (the program may change this). nohup also tries to arrange for the program not to have a controlling terminal, so that it won’t be sent SIGHUP by the kernel when the terminal is closed. disown is purely internal to the shell; it causes the shell not to send SIGHUP when it terminates.” –Gilles
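So, the takeaway for long ghettoVCB runs over SSH: wrap the job in nohup so a dropped session can’t kill it. A sketch, where the VM list file name is just an example per the ghettoVCB docs:

nohup ./ghettoVCB.sh -f vms_to_backup > /tmp/ghettoVCB-run.log 2>&1 &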

E) OK, so after a few annoying hours of failed transfers due to my own ignorance, I finally had a legit copy of my VMDK in thin format, but still sucking up a lot of space (see pic above). *Note: I technically did the dd trick after making the backup, but the size shown on the datastore would still be the same.* Now for the final part… hole punching. Quick recap: we deleted the unused data, then zeroed out the unused blocks; we see a low size in the guest system (e.g. df -h), but we still see a large size used by the thin-provisioned disk. K, let’s hole punch:
“vmkfstools -K /vmfs/volumes/Datastore/VM/VM.vmdk”
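To confirm the punch actually reclaimed space, compare the flat file before and after; du reports the blocks actually used, while ls shows the provisioned size (the path is an example):

du -h /vmfs/volumes/Datastore/VM/VM-flat.vmdk
ls -lh /vmfs/volumes/Datastore/VM/VM-flat.vmdk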

Host Status During Hole Punch

CPU ESXi Host

CPU Storage Unit (FreeNAS)

Disk Usage Host

Disk Usage Storage Unit (FreeNAS)

Datastore Latency Host

Network Usage Host

As you can tell from the images, a few things stand out:

1) Hole punching a VMDK generates high read I/O on the datastore

2) If the datastore is iSCSI-based, it’ll saturate the iSCSI NIC (this can cause performance degradation for other VMs utilizing the same datastore)

3) Latency increases due to the high read I/O on the datastore; again, this directly affects the performance of VMs running on the same datastore

Once done, the VMDK looked like this:

Results / Summary

*NOTE* You can use the following command to manually convert a thick disk to thin, if you wish to hole punch a VMDK without the script:

vmkfstools -i Source-Thick.vmdk -d thin Destination-thin.vmdk
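A note on that one: the clone leaves the source VMDK untouched, so you can verify the thin copy boots before swapping it in and deleting the thick original. Just make sure the destination datastore has room for roughly the space actually in use on the disk, not necessarily the full provisioned size.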

So why did I go through all this pain? Well, I didn’t like the fact that I was backing up useless data, whether it be pointless old images or zeros. In the end, the same VM backup went from taking almost an hour down to 5 minutes!