How to vMotion a VM without vCenter WITHOUT Shared Storage

While I have covered this in the past here:
How to vMotion a VM without vCenter – Zewwy’s Info Tech Talks

This was using shared network storage between hosts… but what if you have no vCenter AND no shared storage? In my previous post I suggested checking out VMware Arena's post, but that just covers how to copy files from one host to another, and what I've noticed is that while it does work if you let it complete, the vmdk is no longer thin and takes up the full space specified by its defined size. This is also mentioned in this Server Fault thread: "Solutions like rsync or scp will be rate-limited and have no knowledge of the content (e.g. sparse VMDK files, thin-provisioned volumes, etc.)"
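For reference, the naive copy between hosts looks something like the sketch below (datastore paths and host name are made up for illustration), and it's exactly this kind of byte-for-byte copy that inflates a thin disk to its full provisioned size:

# From the source ESXi shell, with SSH enabled on both hosts (example paths)
scp -r /vmfs/volumes/SourceDatastore/MyVM root@target-host:/vmfs/volumes/TargetDatastore/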

So options provided there are:

  1. Export the VM as an OVF file, move it to a local system, then reimport the OVF to your ESXi destination. I attempted this, but on the host I could only export the vmdk, and while attempting to do so I hit network issues (the browser asked to download multiple files, but I must not have noticed in time and it timed out? Not sure). This also requires an intermediary device and a double download/upload over the network; I'm hoping for a way between hosts directly.
  2. Use vSphere and perform a host/storage migration. This post is how to do it without. Also note I attempted this, but in my case I'm using the abomination ESXi host I created in my previous blog post, and vCenter fails the task with errors. (Again, SCP succeeds but doesn't retain thin provisioning.) Not sure why SCP succeeds where vCenter fails; SCP seems to be more resilient to a poor connection and keeps going, which matters when the WiFi NICs are under load in those situations.
  3. Leverage one of Veeam’s free products to handle the ad hoc move.

I love Veeam, but in this case I'm limited in resources; let's see if we can do it via native ESXi here.

So that exhausts all those options. What else we got…

Move VMware ESXi VM to new datastore – preserve thin-provisioning – Server Fault

Oh, someone figured out what I did in my initial post all the way back in 2013… wonder how I missed that one… oh well, same answer as my initial post, though it required shared storage… moving on…

LOL no way… William Lam, all the way back from over 14 years ago, answering the question I had about compression of the files, and saying the OVF export is still the best option… mhmmm…

I don't want to stick to just scp. Man, did it suck getting to 97% done on a 60 gig provisioned VMDK that's only taking up roughly 20 gigs, only to have it fail because I put my machine to sleep, thinking it was just a remote connection (SSH) to the machine and the machine was doing the actual transfer… just to wake my machine the next morning to a "corrupt" vmdk that fails to boot or svMotion back to thin. I have machines with fast local storage but poor network; it's a holdover problem from back in the day of poor, slow internet speeds. So what do we have? We got gzip and tar, what's the diff?

In conclusion, GZIP is used to compress individual files, whereas TAR is used to combine numerous files and directories into a single archive. They are frequently used together to create compressed archive files, often with the “.tar.gz” extension.

Also answered here.

“If you come from a Windows background, you may be familiar with the zip and rar formats. These are archives of multiple files compressed together.

In Unix and Unix-like systems (like Ubuntu), archiving and compression are separate.

tar puts multiple files into a single (tar) file.
gzip compresses one file (only).
So, to get a compressed archive, you combine the two, first use tar or pax to get all files into a single file (archive.tar), then gzip it (archive.tar.gz).

If you have only one file, you need to compress (notes.txt): there’s no need for tar, so you just do gzip notes.txt which will result in notes.txt.gz. There are other types of compression, such as compress, bzip2 and xz which work in the same manner as gzip (apart from using different types of compression of course).”
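As a quick illustration of that separation (file names are just examples):

tar -cf archive.tar file1.txt file2.txt   # archive multiple files, no compression
gzip archive.tar                          # compress the archive, producing archive.tar.gz
gzip notes.txt                            # a single file needs no tar; yields notes.txt.gz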

OK, so from this it would seem like a lot of wasted I/O to create a tar file of the main VMDK flat file, but we could gain from compressing it. Let's just do a test of simple compression and monitor the host performance while doing so.

Another thing I noticed that I didn't cover in my previous post on this trick was the -ctk.vmdk files, which are Changed Block Tracking files, as noted here:

“Version 3 added support for persistent changed block tracking (CBT), and is set when CBT is enabled for a virtual disk. This version first appeared in ESX/ESXi 4.0 and continues unchanged in recent ESXi releases. When CBT is enabled, the version number is incremented, and decremented when CBT is disabled. If you look at the .vmdk descriptor file for a version 3 virtual disk, you can see a pointer to its *-ctk.vmdk ancillary file. For example: version=3

# Change Tracking File
changeTrackPath="Windows-2008R2x64-2-ctk.vmdk"
The changeTrackPath setting references a file that describes changed areas on the virtual disk.
If you want to back up the changed area information, then your software should copy the *-ctk.vmdk file and preserve the “Change Tracking File” line in the .vmdk descriptor file. If you do not want to back up the changed area information, then you can discard the ancillary file, remove the “Change Tracking File” line, read the VMDK file data as if it were version 1, and roll back the version number on restore.”
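So if I don't care about preserving the CBT data for this move, then per the quote above I could discard the ancillary file and strip the pointer out of the descriptor. A minimal sketch with made-up file names (ESXi's busybox sed should handle this, but test on a copy first):

rm /vmfs/volumes/SourceDatastore/VM/vm-ctk.vmdk                          # discard the ancillary CBT file
sed -i '/changeTrackPath/d' /vmfs/volumes/SourceDatastore/VM/vm.vmdk    # remove the Change Tracking File line
# per the doc above, the version number also gets rolled back (version=3 to version=1) on restore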

I'll have to consider this when running some of the commands coming up. Now, we still don't know how much space, if any, we'll save from compression alone, nor the time it'll take to create the compressed file… from my research I found this resource pretty helpful:

Which Linux/UNIX compression algorithm is best? (privex.io)

Since we want to keep it native, quick tests via the command line show ESXi has both gzip and xz, but not lz4 or lbzip2, which kind of sucks as those showed the best performance in terms of compression speeds… as quoted by the article: "As mentioned at the start of the article, every compression algorithm/tool has it's tradeoffs, and xz's high compression is paid for by very slow decompression, while lz4 decompresses even faster than it compressed." Speed is exactly what I want to see in the end result; if we save no space, the process just burns I/O and the expected life of the drive being used, for pretty much zero gain.

Highest overall compression ratio: XZ. If we're gonna do this, this is what we want, but how long it takes and how many resources (CPU cycles, and thus overall watts) it trades off will come into question (though I'm not actually taking measurements and doing calculations; I'm looking at it at points in time and making assumed guesses at overall returns).

Time to find out what we can get from this (I'm so glad I looked up xz examples, cause it def is not intuitive: no input-then-output parameters; read this to know what I mean):

xz -c /vmfs/volumes/SourceDatastore/VM/vm-flat.vmdk > /vmfs/volumes/TargetDatastore/whereever/vmvmdk.xz

Mhmmm, no progress… crap, I didn't read far enough along and should have specified the -v flag (not sure why that isn't the default; having no response on the console kind of sucks)… but checking the host resources via the web GUI shows CPU being used, and the write speed….. sad….
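For anyone following along, the variant with progress output would have been (same paths as above; -v reports progress on stderr, so it still shows even with stdout redirected):

xz -v -c /vmfs/volumes/SourceDatastore/VM/vm-flat.vmdk > /vmfs/volumes/TargetDatastore/whereever/vmvmdk.xz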

CPU usage:

and Disk I/O:

Yeah… maybe 4 MB/s, and this is against SSD storage on a SATA bus; there's no way the drive or the controller is at fault here… this is not going to be worth it…

Kill the command, check the compressed file: less than 300 MB in size. OI, that's def not going to pay off here…

I decided to try tarring everything into one file without compression, hoping to simply get it into one file roughly 20 gigs in size with max I/O. As mentioned here:

“When I try the same without compression, then I seem to get the full speed of my drive. ”
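For the record, my attempt amounted to something like this (example paths again; -C changes into the datastore dir so the archive holds just the VM folder):

tar -cvf /vmfs/volumes/TargetDatastore/vm.tar -C /vmfs/volumes/SourceDatastore VM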

However, to my dismay (maybe it ripped the SSD's cache too hard? I dunno) I'd get an I/O error, even though the charts showed insane throughput. I decided to switch to another datastore, a spindle drive on the ESXi host, and you can see the performance just sucks compared to the SSD itself.

So now I'm again stuck waiting, cause instead of amazing throughput it's going only 20 MB/s apparently… uggghhhh.

To add to this frustration, I figured I'd try the OVF export option again, but I guess cause the tar operation has a read open on the file (I assume a file lock), the OVF export just spits out a web response: "File Not Found". So I can't even have a race, knowing full well the SSD could read much faster than what it's currently operating at. I don't really know what the bottleneck is at this point…

Even at this rate it's feeling almost pointless, but man, just to keep a vmdk thin… why, oh WHY, SCP, can't you just copy the file at the size it is? Mhmmm, there has to be a way other than all this crap….

I don't think this guy had any idea he went from thin to thick on the VM….

I thought about SSHFS, but it's not available on the ESXi server….

Forgot about William Lam's project ghettoVCB. Great if I actually wanted more of a backup solution… considered for a future blog, but overkill to just move a VM.

The deeper I go here, the more the simple export-to-OVF-template-and-import is seeming reaaaaaalll appealing.

Awww man, this tar operation looks like it's taking more space than the source. Doing a du -h on the source shows 19.7 gigs… the tar file has now surpassed 19.8 gigs in size… with no sign of slowing down or stopping, lol. Fuck man, I think tar is also completely unaware of thin disks and will make the whole tar file whatever the provisioned size was (aka thick). Shiiiiiiiiiiiiit!
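Worth noting: full GNU tar has a -S (--sparse) flag that detects holes/runs of zeros so sparse files don't balloon, which is exactly what a thin vmdk needs; I can't vouch for ESXi's busybox tar supporting it, so treat this as a sketch of what you'd do on a proper Linux box:

tar -cSf vm.tar vm-flat.vmdk   # -S: store sparse files efficiently (GNU tar only)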

Trying the Export VM option looked so promising,

until the usual like always… ERROR!!

FFS man!!! Can't you just copy the files via SSH between hosts? Yeah, but only if you're willing to copy the whole disk and, if you're lucky, hole-punch it back to thin at the destination… can't you do it with the actual size on disk? NO!

Try the basic answer on almost all posts about this, just export as template and import… Browser download ERROR… like Fuck!!!

Firefox… nope, same problem… Fuck…. Google, what ya got for me? Well, seems like almost the same as my initial move of using SCP, but using WinSCP via my client machine and introducing a middleman in the process; but I guess using the web interface to download/upload was already a man-in-the-middle process anyway… fine, let's see if I can do that… my gawd is this ever getting ridiculous… what a joke… Export VM from ESXi embedded host client Failed – Network Error or network interruption – Server Fault

And of course when I connect via WinSCP it sees the hard drive as being 60 gigs, so even though transfer speeds are good, it's taking way more space than it needs and thus wasting data over the bus… FUCK MAN!!!!!

If only there was a way to change that, oh wait there is, I blogged about it before here: How to Shrink a VMDK – Zewwy’s Info Tech Talks

OK. Make a clone just to be safe (you should always have real backups, but this will do), and amazingly this operation on the SSD was fast and didn't fail.

Woo almost 300 MB/s and finished in under 4 minutes. Now let’s edit the size.

Well, I tried the size edit, but only after doing a vmkfstools conversion of the vmdk would it show the new size in WinSCP; even then, I transferred the files and it was still corrupted in the end…
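For reference, the vmkfstools clone/convert I was doing looks roughly like this (a sketch; paths are examples, and -d thin is what writes the destination as a thin disk):

vmkfstools -i /vmfs/volumes/SourceDatastore/VM/vm.vmdk -d thin /vmfs/volumes/SourceDatastore/VM-clone/vm-clone.vmdk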

ESXi 6.5 standalone host help export big VM ? | MangoLassi

Mhmmm, another link to William Lam's site, covering the exact same thing, but this time using a tool: ovftool….

And wait a second… he also said there's a way to use the ovftool on the ESXi server itself, in this post here….. mhmmmm, if I install the Linux OVF tool on the ESXi host, I should be able to transfer the VM while keeping the thin disk, all "native" on ESXi… close enough anyway…

Step 1) Download the OVF tool, Linux zip.

Step 2) Upload Zip file via Web GUI to Datastore. (Source ESXi)

Step 3) Unzip the tool (unzip ovftool.zip), then delete the zip.

Step 4) Open outbound 443 on the source ESXi server; otherwise you get an error from the tool.

Step 5) Run the command to clone the VM (see the sketch after these steps); get an error that it's managed by an ESXi host.

Step 6) Remove the hosts from ESXi and run the command again… fails cause of a network error (much like the OVF export error; it seems that happens over port 443/HTTPS).
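For the record, those last couple steps amounted to something like this (a sketch; the httpClient ruleset is my assumption for what opening outbound 443 looks like, and the user/host/VM names are placeholders):

esxcli network firewall ruleset set --ruleset-id httpClient --enabled true          # Step 4: allow outbound 443
./ovftool -ds=TargetDatastore vi://root@source-host/VMName vi://root@target-host/   # Step 5: host-to-host clone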

Man Fuck I can’t fucking win here!!!

I think I'm gonna have to do it the old fashioned way… via "seeding": plug a drive into the source ESXi host, and physically move it to the target.

Tooo beeeee continued……

I grabbed the OVF tool for Windows (the machine I was doing all the mgmt work on anyway), yet it too failed with network issues.

I decided to reboot the mgmt services on the host:
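For anyone wondering, restarting the management agents from the ESXi shell looks like this (hostd and vpxa being the usual suspects):

/etc/init.d/hostd restart
/etc/init.d/vpxa restart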

Then gave it one last shot…

Holy efff man, the first ever success yet… I don't know if this would have fixed all my other issues (the export failing over HTTPS, and all the others?). And the resulting OVA was only about 8 gigs. Time to see if I can deploy it now on the target host.

I deployed the OVA to the target via the WebGUI without issue.

I also tested the ESXi web GUI export VM option, and this time it also succeeded without failure. Checking the host resources, CPU is fairly high during both the ovftool export and the web GUI export option. Using esxtop showed the hostd process taking up most of the CPU during both, further making me believe restarting that service is what fixed my issues…

Migrating WordPress

The Story

In the beginning, there was a man! This man ran! This man ran a site, it was awww inspiring, and so unknown. It ran on Linux in many forms, then one day it realized!!!!!!!! Everything becomes out of date!

Ahem, anyway…

Reasons for migrating

Whatever the reason may be, mine happens to be this little gem right here:

I’m sorry… did that just say insecure… me… insecure…

OK, sometimes I can be a little insecure, but you didn't have to shove it in my face. Anyway, whatever your reason for migrating may be, I haven't done this before, and maybe you haven't either. So let's do this… together!

Yeah… I googled… This was my main source, so big thanks to Tom Ewer for his write-up; much like his, mine will be rather in-depth and manual. If you wish to avoid learning the nitty gritty and just want to get it done with a plugin to help, see the wpbeginner site here; they use a plugin called Duplicator and it seems to have solid reviews. Since I'm not a fan of paid magic, I'm gonna do this manually.

Migrating WordPress

Step 1 – Backup

Make sure you have a backup of your WordPress server; in my case, since they are VMs, I used Veeam to create a backup of my current WordPress server. Sadly, even after logging in to the hosting VM and updating the repos and host OS (Debian 8 Jessie), this version of Debian ran out of support, and thus its repos weren't able to supply the updated PHP libraries needed to clear the alert.

After that backup I followed along with Tom's blog, and instead of FTP (File Transfer Protocol) I used SCP (yeah… I'm NOT insecure! :P), WinSCP that is. What SCP actually stands for I wasn't sure; apparently it's either Secure Contain Protect or Special Containment Procedures, but under the hood it just uses SSH? Oh… "Secure Copy (SCP) is a method of transferring files between computers over a secure channel. It uses the SSH protocol to do so."

In my case since it was Turnkey Linux, the web files were located at /var/www
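So the grab of the web files boils down to something like this (run from the client machine; the host name is a placeholder), whether done through WinSCP or plain scp:

scp -r root@wordpress-host:/var/www ./wordpress-backup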

Step 2 – Export WP Database

Now, Tom ended up using phpMyAdmin in his example. I wasn't sure if the TurnKey image I deployed was using the same; turns out, after a quick Google search (and it's stated right on the VM's console), it uses Adminer on port 12322.

after login…

Export….

Left all the default selections…. then… what the….

I was expecting a file to save; instead I had to select all and save it to a file with Notepad++. All this was was an output of the DBs in raw SQL.
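Side note: the shell equivalent of that Adminer export, if you'd rather skip the copy-paste into Notepad++, would be mysqldump on the VM itself (a sketch; credentials are placeholders):

mysqldump -u root -p --all-databases > wordpress-export.sql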

Step 3 – Create DB Instances

Create your new DB instances; depending on how the SQL server handles DB imports, creation of the actual DBs and their tables may vary.

Step 4 – Verify DB Login Credentials

From the backup files in step 1, look in wp-config.php for the DB user connection strings. Verify this user has a login and proper access rights to the DBs and tables being imported.
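If the user or its rights turn out to be missing, creating and granting them looks something like this in raw SQL (a sketch; the user, database, and password are placeholders for whatever wp-config.php holds, and exact syntax varies by MySQL/MariaDB version):

CREATE USER IF NOT EXISTS 'wordpress'@'localhost' IDENTIFIED BY 'password-from-wp-config';
GRANT ALL PRIVILEGES ON wordpress.* TO 'wordpress'@'localhost';
FLUSH PRIVILEGES;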

Now….

What happened to me

Now in my case I simply spun up a new TurnKey WordPress server to see if they were maintaining their images, and sure enough this new one was running on Debian 9 Stretch, which is the recommended version. So although the Adminer version is the same at 4.2.5, it looks different. Another thing to note: on the old version I was able to log in to Adminer as 'root'; on the new Adminer I had to log in as 'adminer'.

Now Tom states to create the databases; I'm not sure if he means the instances here, cause in my case it turns out the creation of the databases happens at import. So what happened to me was this… first I tried to import…

Then he says to edit wp-config.php; in my case, since it was all default WordPress, I figured, meh, hopefully it'll all match… hahah, so I skipped his Step 4, and also skipped his Step 5 cause I'd already done that. As I said, the creation of the DBs happens at import, at least if they don't already exist like in the above error. In my case it only stated that for mysql, not wordpress, so I assumed the other 4 successful queries were for my WordPress data (hahah, so many assumed mistakes).

So I uploaded those unedited web files back to the same dir on the new server….

and….

DOH! Hahaha, that's why you don't skip Tom's Step 4. But in my case I had no interest in creating these, as they should already exist, at least unless the WordPress image has changed that (in this case, so lucky, they didn't). So all I had to do was figure out how to set the wordpress user's password on the new DB to match what's in wp-config.php…. so I opened wp-config.php, grabbed the wordpress user's password, and found out about this trick by James Goodwin… 😀
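I won't spoil his write-up, but the gist of such a password reset in SQL goes along these lines (hedged: exact syntax varies by MySQL/MariaDB version, and the password placeholder is whatever wp-config.php holds):

SET PASSWORD FOR 'wordpress'@'localhost' = PASSWORD('password-from-wp-config');
FLUSH PRIVILEGES;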

I was now able to load my new WordPress site, but it was still the default one, not mine with my posts, themes, and plugins… what the… which brought me back to my "failed" DB import….

When WordPress gives you attitude, you drop 'em like they're hot, drop 'em like they're hot…. but in this case WordPress is hot, so I won't drop it, but I will drop those useless databases. So, since this was all just a test server anyway, I went back into Adminer on the new server, simply selected both mysql and wordpress, and clicked the drop button.

Then back into import, selected the same export SQL file… and…

and sure enough my site loaded with all my content, all my plugins, all my posts, and no more PHP warning!

Summary

Now, this covers the manual work to move WordPress; however, much like what Tom covered, the final touches of how the site is accessed (direct NAT, behind load balancers, etc.) are on you in order to complete the migration for public access.

In my case, I'll probably prepare this new VM with the exact same private IP information, just on a separate vSwitch. Once it's verified working 100%, shut down the old VM, swap the vSwitch connection to production, and test; if good, delete the old VM; if not, change the vSwitch ports and bring the old server back up.

Worst case: delete both VMs and restore my original server from my Veeam backups.

*NOTE* I did experience one issue where I was unable to update WordPress or plugins after the migration. It turns out many posts point to file system permissions (which I suspected); I checked my old WordPress instance vs my new one and noticed all the files under the physical path had a different owner and group (root on new vs www-data on old), so…

chown -R www-data:www-data /path/to/wordpress

After that updates worked without issue.