vmkfstools -i Source-Thick.vmdk -d thin Destination-thin.vmdk
Don’t think this is worth its own blog post, which is probably why I deleted it initially… meh, I’ll leave it hanging around untagged.
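Since the post is just the one command: after the clone you can confirm the destination really is thin by comparing the apparent size against the allocated blocks (on ESXi you’d eyeball this with du vs ls on the VMFS volume). Here’s a rough Python sketch of the same idea using an ordinary sparse file — illustrative only, not ESXi-specific:

```python
import os

def allocated_vs_apparent(path):
    """Return (apparent_size, allocated_bytes) for a file.

    A thin/sparse file reports far fewer allocated bytes than its
    apparent size; st_blocks counts 512-byte units on POSIX systems.
    """
    st = os.stat(path)
    return st.st_size, st.st_blocks * 512

# Demonstrate with a sparse file created via truncate():
with open("sparse.img", "wb") as f:
    f.truncate(100 * 1024 * 1024)  # 100 MiB apparent, almost nothing allocated

apparent, allocated = allocated_vs_apparent("sparse.img")
print(apparent, allocated)
```

A thick disk would report allocated bytes at (or above) the apparent size; a freshly thin-cloned one won’t.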
Everything IT
Continuing on from my source blog post. In this case he goes on to install and configure the role as a subordinate enterprise CA. But what do you do if you already deployed an Enterprise Root CA? I’m going off a hunch that something gets applied into AD somewhere to present this information to domain clients. I found this nice article from MS directly with the directions to take; it was written for Server 2012, so I hope the procedure hasn’t changed much in 2016.
*NOTE* All steps that need to be done against AD objects are run as a Domain Admin or Enterprise Admin logged directly onto those servers. Most other commands or steps are done via a client-system MMC snap-in, or logged directly into the CA server.
Step 1: Revoke all active certificates that are issued by the enterprise CA
Simple enough…
Step 2: Increase the CRL publication interval
Note: The lifetime of the Certificate Revocation List (CRL) should be longer than the lifetime that remains for certificates that have been revoked.
Easy enough, done and done.
Step 3: Publish a new CRL
Again easy, done.
*DEFAULT behavior; generally not required.
Step 4: Deny any pending requests
By default, an enterprise CA does not store certificate requests. However, an administrator can change this default behavior. To deny any pending certificate requests, follow these steps:
Not the case for me.
Step 5: Uninstall Certificate Services from the server
(1)Microsoft Base Cryptographic Provider v1.0:
1a3b2f44-2540-408b-8867-51bd6b6ed413
MS IIS DCOM ClientSYSTEMS-1-5-18
MS IIS DCOM Server
Windows2000 Enterprise Root CA
MS IIS DCOM ClientAdministratorS-1-5-21-436374069-839522115-1060284298-500
certutil -delkey CertificateAuthorityName
Note If your CA name contains spaces, enclose the name in quotation marks.
In this example, the certificate authority name is “Windows2000 Enterprise Root CA.” Therefore, the command line in this example is as follows:
certutil -delkey “Windows2000 Enterprise Root CA”
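If you’re scripting this cleanup, you can sidestep the quoting concern entirely by passing the CA name as a single argument. A small Python illustration of the note above (the CA name here is just the article’s example):

```python
import subprocess

# The CA name contains spaces, so as a raw command line it must be quoted;
# list2cmdline shows the quoting Windows expects when args are joined.
ca_name = "Windows2000 Enterprise Root CA"
cmdline = subprocess.list2cmdline(["certutil", "-delkey", ca_name])
print(cmdline)
```

Passing the arguments as a list to subprocess.run would apply the same quoting automatically.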
* OK, this is where things got weird for me. For some reason I wasn’t getting back the same type of results as the guide; instead I got this:
C:\ProgramData\Microsoft\Crypto\RSA>certutil -key
Microsoft Strong Cryptographic Provider:
TSSecKeySet1
f686aace6942fb7f4566yh1212eef4a4_ae5889t-54c3-4b6f-8b60-f9f8471c0525
RSA
AT_KEYEXCHANGE
CertUtil: -key command completed successfully.
And any attempt to delete the key based on the known CA name just failed. I asked about this on TechNet under the security section and was told basically what I figured: the key either didn’t exist or was corrupted, so I should continue on with the steps.
It was later answered properly by Mark Cooper: include -csp ksp (the keys are located under %allusersprofile%\Microsoft\Crypto\Keys).
From all the research I’ve done, it seems people are adamant that you delete the key before you remove the certs; why exactly, I’m not sure. (From my testing, if you delete the certificate via certutil, it comes right back when restarting certsvc. It must be rebuilt from the registry?)
So: certutil -csp ksp -delkey <key>
Checking the keys directory shows it empty. Good stuff.
certutil -store my
This made me start to wonder where the actual certificate files were stored; a google away and it turns out to be in the registry? Lol (HKLM\SOFTWARE\Microsoft\SystemCertificates)
The values are nothing more than binary blobs (much like opening up a CSR), so the only way to interact with them is via the Microsoft CryptoAPI (certutil) or the snap-in.
certutil -delstore my <Serial>
Reopening regedit, and the cert is gone.
Delete Trusted Root CA Cert
certutil -store ca
certutil -delstore ca <serial>
So moving on…*
Uninstall-AdcsCertificationAuthority
If the remaining role services, such as the Online Responder service, were configured to use data from the uninstalled CA, you must reconfigure these services to support a different CA. After a CA is uninstalled, the following information is left on the server:
By default, this information is kept on the server in case you are uninstalling and then reinstalling the CA. For example, you might uninstall and reinstall the CA if you want to change a stand-alone CA to an enterprise CA.
Step 6: Remove CA objects from Active Directory
When Microsoft Certificate Services is installed on a server that is a member of a domain, several objects are created in the configuration container in Active Directory.
These objects are as follows:
When the CA is uninstalled, only the pKIEnrollmentService object is removed. This prevents clients from trying to enroll against the decommissioned CA. The other objects are retained because certificates that are issued by the CA are probably still outstanding. These certificates must be revoked by following the procedure in the “Step 1: Revoke all active certificates that are issued by the enterprise CA” section.
For Public Key Infrastructure (PKI) client computers to successfully process these outstanding certificates, the computers must locate the Authority Information Access (AIA) and CRL distribution point paths in Active Directory. It is a good idea to revoke all outstanding certificates, extend the lifetime of the CRL, and publish the CRL in Active Directory. If the outstanding certificates are processed by the various PKI clients, validation will fail, and those certificates will not be used.
If it is not a priority to maintain the CRL distribution point and AIA in Active Directory, you can remove these objects. Do not remove these objects if you expect to process one or more of the formerly active digital certificates.
To remove all Certification Services objects from Active Directory, follow these steps:
At this point I was having issues: the import command for the ldf file was failing. I posted these results in my TechNet post. After a bit more research I noticed other examples online didn’t have any information appended after the “changetype: delete” line. So I simply followed along and did the same, deleting all the lines after that one and leaving the base DN object in place, and sure enough it finally succeeded.
Generate base object LDF file:
After editing line as specified in MS article:
New altered LDF file:
Same command after altering file:
On the second run I simply deleted the object under the KRA folder, and it returned no values.
13) Delete the certificate templates if you are sure that all of the certificate authorities have been deleted. Repeat step 12 to determine whether any AD objects remain.
I did this via the Site and Service Snap-in, under the PKI section of the Services node.
Step 7: Delete certificates published to the NtAuthCertificates object
After you delete the CA objects, you have to delete the CA certificates that are published to the NtAuthCertificates object. Use either of the following commands to delete certificates from within the NTAuthCertificates store:
certutil -viewdelstore “ldap:///CN=NtAuthCertificates,CN=Public Key
Services,…,DC=ForestRoot,DC=com?cACertificate?base?objectclass=certificationAuthority”
certutil -viewdelstore “ldap:///CN=NtAuthCertificates,CN=Public Key
Services,…,DC=ForestRoot,DC=com?cACertificate?base?objectclass=pKIEnrollmentService”
Note You must have Enterprise Administrator permissions to perform this task.
The -viewdelstore action invokes the certificate selection UI on the set of certificates in the specified attribute. You can view the certificate details. You can cancel out of the selection dialog to make no changes. If you select a certificate, that certificate is deleted when the UI closes and the command is fully executed.
Use the following command to see the full LDAP path to the NtAuthCertificates object in your Active Directory:
certutil -store -? | findstr "CN=NTAuth"
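The “…” in those LDAP paths stands for the middle of your configuration naming context, which the KB leaves elided. Purely as an illustration of how the pieces fit together, here’s a hypothetical Python helper that assembles the URL from a forest DNS name (corp.com is made up; always verify the real path with the findstr command above):

```python
def ntauth_ldap_url(forest_dns, object_class):
    """Assemble the LDAP URL for the NTAuthCertificates object.

    forest_dns: forest root DNS name, e.g. "corp.com" (hypothetical).
    object_class: "certificationAuthority" or "pKIEnrollmentService".
    """
    dc_part = ",".join(f"DC={label}" for label in forest_dns.split("."))
    return (
        "ldap:///CN=NTAuthCertificates,CN=Public Key Services,"
        f"CN=Services,CN=Configuration,{dc_part}"
        f"?cACertificate?base?objectclass={object_class}"
    )

print(ntauth_ldap_url("corp.com", "certificationAuthority"))
```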
Nice and easy, finally.
Step 8: Delete the CA database
When Certification Services is uninstalled, the CA database is left intact so that the CA can be re-created on another server.
To remove the CA database, delete the %systemroot%\System32\Certlog folder.
Nice and easy, I like these steps.
Step 9: Clean up domain controllers
After the CA is uninstalled, the certificates that were issued to domain controllers must be removed.
Which states for 2003 and up:
certutil -dcinfo deleteBad
My results:
With the same list of garbage for the DC. Rerunning certutil -dcinfo still reported the same certs… so I had to manually remove these: again, open an MMC on a client system, add the Certificates snap-in, and point it to the machine store on the DCs. Then manually delete the certificates. Once this was done for both DCs, certutil -dcinfo finally reported clean…
Finally!!! What a gong show it is to remove an existing CA from an environment… even one that literally wasn’t used for anything outside its initial deployment as an enterprise root CA.
Nothing special here, run through the windows installer as usual.
Deploying Certificate Services on Windows Server 2016 is simple enough – open Server Manager, open the Add Roles and Features wizard and choose Active Directory Certificate Services under Server Roles. Ensure you choose only the Certificate Authority role for the Root CA.
To make installing Certificate Services simpler, do it via PowerShell instead, with Add-WindowsFeature:
Add-WindowsFeature -Name ADCS-Cert-Authority -IncludeManagementTools
Which will look like this, no reboot required:
After Certificate Services is installed, start the configuration wizard from Server Manager:
Set the credentials to be used while configuring Certificate Services. In this case, we’re configuring CA on a standalone machine and I’m logged on with the local Administrator account.
For the Root CA, we have only one role to configure.
This certificate authority is being configured on a stand-alone server not a member of Active Directory, so we’ll only be able to configure a Standalone CA.
This is the first CA in our environment, so be sure to configure this as a root CA.
As the first CA in the environment, we won’t have an existing private key, so we must choose to create a new one.
Windows will no longer accept certificates signed with SHA1 after 1st of January 2017, so be sure to choose at least SHA256. (Default for Windows Server 2016)
*Note: there are many cryptographic providers available, but generally most places should stick with RSA. I have seen certain cases where DSA was selected; only choose that option if you have a specific reason for it. Likewise, generally stick with a 2048-bit key length; you can go higher if you know your system resources can handle the additional computational load, or lower if you are running older hardware and don’t require as high a security posture.
Specify a name for the new certificate authority. I’d recommend keeping this simple: use the ANSI character set and a meaningful name.
Select the validity period – perhaps the default is the best to choose; however, this can be customized based on your requirements. This is a topic that is a whole security conversation in itself; however, renewing CA certificates isn’t something that you want to be doing too often. Considerations for setting the validity period should include business risk, the size and complexity of the environment you are installing the PKI into, and how mature the IT organization is.
*Note pretty well stated, and in our case I don’t want to renew certs every 5 years, so 10 years sounds about good to me, and I’m hoping 2048 Key length with a SHA256 Hash will still be pretty common 10 years from now, but at least this gives us a very nice time buffer should things change.
On the next page of the wizard, you can choose the location of the certificate services database and logs location (C:\Windows\System32\Certlog), which can be changed depending on your specific environment.
On the last page, you will see a summary of the configuration before committing it to the local certificate services.
Now that certificate services has been installed and the base configuration is complete, a set of specific configuration changes is required to ensure that an offline Root CA will work for us.
Start – Windows Administrative Tools -> Certificate Authority
If you open the Certificate Authority management console, you can view the properties of the certificate authority and the Root CA’s certificate:
Before we take any further steps, including deploying a subordinate CA for issuing certificates, we need to configure the Certificate Revocation List (CRL) Distribution Point. Because this CA will be offline and not a member of Active Directory, the default locations won’t work. For more granular information on configuring the CDP and AIA, see these sources: one and two.
In the properties of the CA, select the Extensions tab to view the CRL Distribution Points (CDP). By default, the ldap:// and file:// locations will be the default distribution points. These, of course, won’t work for the reasons I’ve just stated, and because these locations are embedded in the properties of certificates issued by this CA, we should change them.
Default values:
C:\windows\system32\CertSrv\CertEnroll\<CaName><CRLNameSuffix><DeltaCRLAllowed>.crl
ldap:///CN=<CATruncatedName><CRLNameSuffix>,CN=<ServerShortName>,CN=CDP,CN=Public Key Services,CN=Services,<ConfigurationContainer><CDPObjectClass>
http://<ServerDNSName>/CertEnroll/<CaName><CRLNameSuffix><DeltaCRLAllowed>.crl
file://<ServerDNSName>/CertEnroll/<CaName><CRLNameSuffix><DeltaCRLAllowed>.crl
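Those <Token> placeholders are expanded by the CA when it writes CDP/AIA URLs into the certificates it issues. As a rough illustration (the token values below are hypothetical, not from my actual setup), the substitution works like this:

```python
import re

def expand_cdp(template, tokens):
    """Substitute <Token> placeholders in a CDP/AIA URL template.

    A rough sketch of the substitution the CA performs; token names
    mirror those shown on the Extensions tab.
    """
    return re.sub(r"<(\w+)>", lambda m: tokens.get(m.group(1), m.group(0)), template)

tokens = {
    "ServerDNSName": "ca.corp.com",        # hypothetical host serving the CRL
    "CaName": "CORP-OFFLINE-ROOT-CA",      # hypothetical CA name
    "CRLNameSuffix": "",
    "DeltaCRLAllowed": "",
}
url = expand_cdp(
    "http://<ServerDNSName>/CertEnroll/<CaName><CRLNameSuffix><DeltaCRLAllowed>.crl",
    tokens,
)
print(url)
```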
To set up a CRL distribution point that will work with a location that’s online (so that clients can contact the CRL), we’ll add a new distribution point rather than modify an existing DP and use HTTP.
Before that we’ll want to do two things:
Now add a new CRL location, using the same HTTP location value included by default; however, change <ServerDNSName> for the FQDN for the host that will serve the CRL. In my example, I’ve changed:
http://<ServerDNSName>/CertEnroll/<CaName><CRLNameSuffix><DeltaCRLAllowed>.crl
to
http://ca.corp.com/CertEnroll/<CaName><CRLNameSuffix><DeltaCRLAllowed>.crl
This FQDN is an alias for the subordinate certificate authority that I’ll be deploying to actually issue certificates to clients. This CA will be online with IIS installed, so it will be available to serve the CRLs. (Again, he doesn’t provide a snippet of the completed entry, just a snippet of creating the entry. Since by default all check boxes are unticked, and there’s no mention of any changes to the added location, I’m going to assume they are simply left untouched.) Here is my example setup:
and then adding the custom http CDP location that will be the Sub-CA with IIS.
*NOTE* UNCHECK ALL CHECKBOXES on the LDAP and FILE entries; the above picture is wrong for those settings.
Repeat the same process for the Authority Information Access (AIA) locations:
Apply the changes, and you will be prompted to restart Active Directory Certificate Services. If you don’t, remember to manually restart the service later.
Before publishing the CRL, set the publication interval to something other than the default 1 week. Whatever you set the interval to, this will be the maximum amount of time before you’ll need to have the CA online again to publish the CRL and copy it to your CRL publishing point.
Open the properties of the Revoked Certificates node and set the CRL publication interval to something suitable for the environment you have installed the CA into. Remember that you’ll need to boot the Root CA and publish a new CRL before the end of this interval.
Ensure that the Certificate Revocation List is published to the file system: right-click Revoked Certificates and select All Tasks / Publish. We will then copy these to the subordinate CA.
Browse to C:\Windows\System32\CertSrv\CertEnroll to view the CRL and the root CA certificate.
The default validity period for certificates issued by this CA will be 1 year. Because this is a stand-alone certification authority, we don’t have templates available to use that we can use to define the validity period for issued certificates. So we need to set this in the registry.
As we’ll only be issuing subordinate CA certificates from this root CA, 1 year isn’t very long. If the subordinate CA certificate is only valid for 1 year, any certificates that it issues can only be valid for less than 1 year from the date of issue – not long at all. Therefore, we should set the validity period on the root CA before we issue any certificates.
To change the validity period, open Registry Editor and navigate to the following key:
HKLM\SYSTEM\CurrentControlSet\Services\CertSvc\Configuration\<certification authority name>
In my lab, this path is:
HKLM\SYSTEM\CurrentControlSet\Services\CertSvc\Configuration\CORP-OFFLINE-ROOT-CA
Here I can see two values that define how long issued certificates are valid for – ValidityPeriod (defaults to “Years”) and ValidityPeriodUnits (defaults to 1).
Viewing the Root CA certificate validity lifetime
Open ValidityPeriodUnits and change it to the desired value. My recommendation would be to make this half the lifetime of the Root CA’s certificate validity period; so if you’ve configured the Root CA for 10 years, set this to 5. You’ll need to restart the Certificate Authority service for this to take effect.
An alternative to editing the registry directly is to set this value with certutil.exe. To change the validity period to 5 years, run:
certutil -setreg ca\ValidityPeriodUnits "5"
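The reason the sub-CA’s validity matters so far down the chain is simple date math: a chain is only trusted until its earliest expiry. A small sketch with made-up dates (5-year sub-CA per the certutil command above, 1-year leaf certs):

```python
from datetime import date, timedelta

YEAR = timedelta(days=365)

sub_issued = date(2027, 1, 1)        # hypothetical sub-CA issue date
sub_expires = sub_issued + 5 * YEAR  # ValidityPeriodUnits = 5 on the root

def effective_expiry(cert_expires, issuer_expires):
    """A chain is only valid until its earliest expiry."""
    return min(cert_expires, issuer_expires)

# A 1-year leaf cert issued 30 days before the sub-CA expires is
# effectively truncated to the sub-CA's own expiry date:
leaf_expires = (sub_expires - timedelta(days=30)) + 1 * YEAR
print(effective_expiry(leaf_expires, sub_expires))
```

Which is exactly why a 1-year root default would have strangled everything issued beneath it.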
Yes, this is pretty much a copy-n-paste of the source; it was so well written and easy to follow. There are just a couple of additions I added where things got a little confusing, which I hope might help someone who comes across this.
Much like the source, in my next post I’ll cover setting up a subordinate CA; however, I will also cover removing an existing CA from an AD environment before replacing it with the new subordinate, as well as some errors and issues I faced along the way and how I managed to correct them. This part was pretty straight cut, so I didn’t have much reason to alter it from the source.
Thanks StealthPuppy.
One way:
[Version]
Signature="$Windows NT$"

[NewRequest]
Subject = "CN=SERVER.CONTOSO.COM" ; For a wildcard use "CN=*.CONTOSO.COM" for example
; For an empty subject use the following line instead or remove the Subject line entirely
; Subject =
Exportable = FALSE ; Private key is not exportable
KeyLength = 2048 ; Common key sizes: 512, 1024, 2048, 4096, 8192, 16384
KeySpec = 1 ; AT_KEYEXCHANGE
KeyUsage = 0xA0 ; Digital Signature, Key Encipherment
MachineKeySet = True ; The key belongs to the local computer account
ProviderName = "Microsoft RSA SChannel Cryptographic Provider"
ProviderType = 12
SMIME = FALSE
RequestType = CMC

; At least certreq.exe shipping with Windows Vista/Server 2008 is required
; to interpret the [Strings] and [Extensions] sections below
[Strings]
szOID_SUBJECT_ALT_NAME2 = "2.5.29.17"
szOID_ENHANCED_KEY_USAGE = "2.5.29.37"
szOID_PKIX_KP_SERVER_AUTH = "1.3.6.1.5.5.7.3.1"
szOID_PKIX_KP_CLIENT_AUTH = "1.3.6.1.5.5.7.3.2"

[Extensions]
%szOID_SUBJECT_ALT_NAME2% = "{text}dns=computer1.contoso.com&dns=computer2.contoso.com"
%szOID_ENHANCED_KEY_USAGE% = "{text}%szOID_PKIX_KP_SERVER_AUTH%,%szOID_PKIX_KP_CLIENT_AUTH%"

[RequestAttributes]
CertificateTemplate = WebServer
certreq -new ssl.inf ssl.req
Once the certificate request was created you can verify the request with the following command:
certutil ssl.req
certreq -submit ssl.req
You will get a selection dialog to select the CA from. If the CA is configured to issue certificates based on the template settings, the CA may issue the certificate immediately. If RPC traffic is not allowed between the computer where the certificate request was created and the CA, transfer the certificate request to the CA and perform the above command locally at the CA.
If the certificate template name was not specified in the certificate request above, you can specify it as part of the submission command:
certreq -attrib "CertificateTemplate:webserver" -submit ssl.req
certreq -accept ssl.cer
The installation actually puts the certificate into the computer’s personal store, links it with the key material created in step #1, and builds the certificate property. The certificate property stores information, such as the friendly name, that is not part of the certificate itself. After performing steps 1 to 4, the certificate will show up in the IIS or ISA management interface and can be bound to a web site or an SSL listener.
*UPDATE* PowerShell – I love PowerShell. If you’d like some more automated scripts to help with such a task, please see this blog post by Adam Bertram, in which he provides a link to his GitHub page with the required scripts.
Thanks, Adam, for this fine work. I might just make a pull request with some changes/tweaks to the scripts. Amazing the neat little tricks I learn from reading other people’s code. 🙂
I’ve started to manage Server Core installations more and more. I recently needed to manage one that was running IIS. While I’m fairly used to IIS Manager, I wasn’t exactly sure how remote management worked.
At first I thought it was part of RSAT. Nope, but fret not: it is a feature of Windows, just not enabled by default.
As I expected there to be a bunch of configuration BS required, I figured I’d google how to do it instead of googling errors. 😀 I found this really nice, right-to-the-point YouTube video. Luckily this made my life easy.
So on the Core server:
#Install the required service
Install-WindowsFeature -Name Web-Mgmt-Service
#enable IIS remote management
reg add HKLM\SOFTWARE\Microsoft\WebManagement\Server /v EnableRemoteManagement /t REG_DWORD /d 1
#Enable service at boot
Set-Service WMSVC -StartupType Automatic
#Enable Service
Start-Service WMSVC
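Before jumping to the client, it doesn’t hurt to confirm the management service is actually listening; WMSVC uses TCP 8172 by default. A quick sketch in Python (the host below is a placeholder for your Core server):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# WMSVC listens on TCP 8172 by default; substitute your Core server's name.
print(port_open("127.0.0.1", 8172))
```

The same check is possible from PowerShell with Test-NetConnection on newer clients.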
On the Client Machine (Windows 7-10)
#Enable IIS management tools
Programs and Features -> Turn Windows Features on or off -> IIS -> (check off all items under Web Management Tools, you may not need them all but to be safe doesn’t hurt to add them)
#Open IIS Manager
Either through Server Manager -> manage -> IIS
Or under the Start menu -> Admin Tools -> IIS Manager
*NOTE* Don’t bother adding the IIS manager Snap-in to an existing MMC session, I found it’s missing the top menu bar.
*NOTE 2* You also need to install IIS Manager for Remote Administration 1.2 (Cause you know this isn’t bundled with RSAT, cause… reasons)
Else you’ll be missing the connect to server option under the file menu.
*UPDATE* Grab it from here (good drive share) as MS for some reason has removed the source link in place of a 404.
*NOTE 3* You have to prepend the admin user name with the domain name, else the connection will fail stating unauthorized.
Thanks SSmith!
I plan on releasing a 3-part blog series on configuring a new CA infrastructure in an existing environment where an Enterprise Root CA has already been configured. In my series I decided to utilize Core servers; these provide an additional layer of issues, as managing them is a little more difficult: it usually requires more cmd-based knowledge, or better yet PowerShell, wherever such options are available. Turns out, in this case, even more so than ever.
I won’t go into too much detail here, as I’ll save that for my series. Basically, one step requires me to import the signed certificate into the subordinate Enterprise CA; being Core, I have to use the RSAT MMC CA snap-in (funny enough, even if you have Desktop Experience it’s the same tool and snap-in used).
What I discovered is that when I’d use the RSAT tool on a remote client system, loading the snap-in against my actual CA server, it would never actually load the import wizard.
I’d right click my CA, select the option to install a CA certificate:
Then it simply acts as if it’s reloading the snap-in…
Then nothing… So I asked about it on Technet. Lucky for me Mark Cooper the Master PKI guy came to my rescue.
The solution: On the Sub CA
certutil -installcert <your certificate file name here>
Get ’em while they’re hot. Fresh from the bunnums of the internet!
Now I love my ESXi, and I recently converted my old gaming rig into a hypervisor with none other than my favorite beast, ESXi! I first played with 6.5, and don’t get me wrong, the fact that it offers a direct login to the host right from a fresh install is such a thing of beauty, with a plugin available for a smoother console experience than the web-driven one. While the HTML5-based web interface is very slick, the console isn’t exactly 100% real time. With the plugins there’s a nice way around that; however, the host management tasks are all locked down to the host’s HTML5 web interface. So long goes any chance of using the old phat (.NET-based) client. I have to say that’s sad, because I LOVE the phat client; it is by far the smoothest of all management interfaces, in my experience.
Anyway, logging into my personal host… I see this
This of course doesn’t surprise me. However, believe it or not, you can continue to run ESXi completely free. It’s generally enough for most people’s needs; there are, however, some limitations.
I won’t go over the details too much but the basics steps are as follows:
In Short, it’s not supported. If you’re running Workstation 9 or above, there’s this trick.
Now this guy goes into the real nitty gritty, and I love that! I, however, was working with ESXi 5.5 U3b. Now, VMware did the same thing with the ESXi hypervisor and introduced USB 3.0 support via the xHCI controller. However, the exact same limitations apply.
1) Drivers of USB 3.0 Host Controller are not provided by VMware Tools.
2) The VMware USB 3.0 Host Controller will work only if your virtual machine OS has native USB 3.0 support. Examples of such OSes are Windows 8, Windows Server 2012, and Linux kernel 2.6.31 and above.
He goes on to say he’s screwed, but I’ve found the older EHCI+UHCI controller works for USB 1.1 and 2.0 devices; I haven’t fully tested all case scenarios, however. For a Windows Server 2016 VM on an HP Gen9 server with ESXi 5.5, my findings were as follows:
I wasn’t sure why the USB 2.0 device didn’t show up, so I simply removed the xHCI USB controller and instead installed the EHCI+UHCI one. I re-connected the USB 2.0 device and added it to the VM; this time the device did show up. I can’t remember the exact performance counters; I’ll update this post when I do some better analysis. My plan is to script some I/O tests using diskspd and PowerShell. Stay tuned. 😀
I’m also going to see if I can connect the same USB device via hardware pass-through instead of utilizing the USB controller and device VM settings options. I’ve mainly done this with RDMs and storage controllers for storage-type VMs (FreeNAS mostly).
As for the main point of this post… I figured from the main link I posted, and this one here as well from the VMware forums, that I’d be able to find a way to make the xHCI controller work on the Windows 7 VM guest. The answer is basically: grab the Intel xHCI drivers for Windows 7/2008 R2 from Intel and install them manually, not via the setup.exe.
To my dismay I couldn’t get it to work; the wizard simply couldn’t locate the device (since the hardware IDs didn’t match), and when forcing the install otherwise, the device wouldn’t start.
I even decided to try and use double driver (extracts drivers) against a newer guest OS. This also failed. I simply couldn’t get it to work.
There are multiple ways to do a V2V depending on your migration/conversion.
See here, here and here for some source examples and more in-depth reviews of alternative tools/products, or even V2P, as unlikely as that may be 😛
This one will be short n sweet.
V2V a VMDK to a VHDX
Get this.
DO this:
Import-Module 'C:\Program Files\Microsoft Virtual Machine Converter\MvmcCmdlet.psd1'
ConvertTo-MvmcVirtualHardDisk -SourceLiteralPath (Drive):\VM-disk1.vmdk -VhdType DynamicHardDisk -VhdFormat vhdx -destination (Drive):\vm-disk1
This was nice, but after a good amount of time, I realized I don’t like using Hyper-V much…. so how do you convert back from VHDX to a VMDK?
I used an open-source Linux tool; Ubuntu Linux is used in this example for running qemu-img.
First, install qemu-img from the online software repositories:
sudo apt-get install qemu-utils
*Note: if using an Ubuntu live image, you will need to enable the community (Universe) repository (outside the scope of this post)
Go to the directory where virtual disk images are stored (in this example VHD and VHDX virtual disk files are stored in /media/user1/data/):
cd /media/user1/data/
Get the root privileges (with the sudo su command) and view the VHD image information:
qemu-img info test-disk.vhd
Similarly, you can check the information about the VHDX virtual disk:
qemu-img info /media/user1/data/WinServer2016.vhdx
To convert the VHDX to VMDK with qemu-img in Linux, run the command as root:
qemu-img convert /media/user1/data/WinServer2016.vhdx -O vmdk /media/user1/data/WinServer2016qemu.vmdk -p
Where:
-O – define the output file format
-p – show the progress bar
Wait until the conversion process is finished.
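If you have a folder full of disks to convert, the per-file commands are easy to generate. Here’s a small Python sketch that just builds the qemu-img command lines (it doesn’t run them; /media/user1/data is the example directory from above):

```python
from pathlib import Path

def conversion_commands(src_dir):
    """Build one 'qemu-img convert' command per VHD/VHDX file in src_dir.

    Output VMDKs land next to the sources; qemu-img itself is assumed
    to be on PATH when the commands are eventually run.
    """
    cmds = []
    for src in sorted(Path(src_dir).iterdir()):
        if src.suffix.lower() in (".vhd", ".vhdx"):
            cmds.append(["qemu-img", "convert", "-p", str(src),
                         "-O", "vmdk", str(src.with_suffix(".vmdk"))])
    return cmds

# Example directory from the post; only printed if it exists:
demo_dir = Path("/media/user1/data")
if demo_dir.is_dir():
    for cmd in conversion_commands(demo_dir):
        print(" ".join(cmd))
```

Feed each list to subprocess.run (or join it into a shell one-liner) to actually do the conversions.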
Download qemu-img from the official web site (32-bit and 64-bit installers are available). In the current example, qemu-img-win-x64-2_3_0 is used. Extract the files from the downloaded archive, for example to C:\Programs\qemu-img\. Then launch the Windows command line (CMD): press Windows+R to open the “Run” box, type “cmd” into the box, and press Ctrl+Shift+Enter to run the command prompt as an administrator.
Go to the qemu-img directory:
cd C:\Programs\qemu-img
Commands of qemu-img in Linux and Windows are identical.
Run CMD as administrator and go to the directory where qemu-img is installed.
View the virtual disk information:
qemu-img.exe info c:\Virtual\Hyper-V\test-disk.vhd
Convert the VHD disk image to the VMDK format:
qemu-img.exe convert -p c:\Virtual\Hyper-V\test-disk.vhd -O vmdk c:\Virtual\Hyper-V\test-disk.vmdk
Where:
-p – show progress
-O – define the output file format
Wait until the conversion process is finished.
Now the main thing to note is this conversion will be of a “type” that will only work with VMware Workstation… so if you need to mount this VMDK to an ESXi VM, you’ll need to “import it”, basically converting it to the proper type… I’m usually a fan of VMware, but this one is kind of lame.
I found this interesting. I was checking out my DNS server to make some new static host records for my newly networked test environment/sandbox. To my surprise, I found these weird new records: DHCID (Dynamic Host Configuration Identifier). I wasn’t sure what was up with these, but I did notice them paired with an A host record (same name, different value). A quick google search revealed this nice old MS gem.
It states “Name squatting occurs when a non-Windows-based computer registers in Domain Name System (DNS) with a name that is already registered to a Windows-based computer,” and even Susie Long pretty much says the same thing in this TechNet post.
What I found in my case was that they were created for only a couple of users, and came from their iPhones after I had renewed them with new iPhones, using iTunes to make a backup and copy their contents to the new phones. I’m assuming it’s because the same name already existed in DNS from the old phone’s DHCP request, and the new phone had the same device name after the restore. In my case I knew they weren’t important records, since no one would ever need to access their phones via DNS name, lol. So I simply deleted them. We’ll see if they come back.
I already knew all about DHCP and DNS scavenging but this was a new one for me. 😀