Installing vCenter

Since vCenter will no longer be supported on Windows moving forward, all discussion of vCenter here will refer to it by its acronym, VCSA (vCenter Server Appliance), the Linux-based appliance.

I just signed up for VMUG Advantage, so I get to play with vCenter at home (yay); otherwise, grab the required ISO from VMware's product portal using your own VMware login ID.

Although 6.7 is out and well polished, it cannot manage ESXi 5.5 hosts. Since I still have a few of those I'd like to use in my cluster, I'm going to be using VCSA 6.5 for this guide.

Also, I technically only have 5.5-based hosts at this moment (I love the phat (C#) client).

The new version runs on Photon OS.

VCSA CPU and RAM Requirements

VCSA Storage Requirements

Open/Mount the ISO on your OS of choice. For me on Windows, simply mount the ISO and navigate to vcsa-ui-installer\Win32\installer.exe.
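
If you'd rather script the mount, it can be done straight from PowerShell; the ISO path and filename below are just examples:

# Mount the VCSA ISO and launch the UI installer (path/filename are examples)
Mount-DiskImage -ImagePath 'C:\ISOs\VMware-VCSA-all-6.5.0.iso'
$drive = (Get-DiskImage -ImagePath 'C:\ISOs\VMware-VCSA-all-6.5.0.iso' | Get-Volume).DriveLetter
Start-Process "$($drive):\vcsa-ui-installer\win32\installer.exe"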

Run it!

Stage 1

*Drools* I’m not sure what to do… *Clicks Install*

Introduction; Next
EULA; Accept; Next
VCSA + PSC
Target Host + port + username + Password; next
VM Name + Root Password
Select Datastore (I enabled Thin Disk)
Give a system name (which you'll want to point to the IP address you define, in the DNS servers used by the VCSA and any client systems needing management access; see the DNS note after this list)
IP Address
IP MASK
Gateway + DNS Server


Finish.
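
About that system name: if your DNS runs on Windows, creating the A record ahead of time looks something like this (the zone, hostname, and IP are examples for my lab; adjust for yours):

# On a Windows DNS server hosting the zone (requires the DnsServer RSAT module)
Add-DnsServerResourceRecordA -ZoneName 'zewwy.ca' -Name 'vcsa' -IPv4Address '192.168.1.60'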

Now it states this will take a few minutes, as it depends on the hardware specs of the ESXi host it was deployed to, and maybe internet speed if the RPMs are not in the OVF template that was deployed. The VM also has to boot.

Quick Break time!

Interesting default… until it finally completes…

Stage 2

NEXT!

NTP servers (0.ca.pool.ntp.org,1.ca.pool.ntp.org,2.ca.pool.ntp.org)

Next

New SSO domain; create a password for administrator@vsphere.local (I'll add an identity source for zewwy.ca later to allow my local AD-based accounts to have logon rights, in this or another tutorial).

DEPLOY!

Mhmm, after 2 attempts I kept getting a pschealth service error. I googled it but the VMware KB was rather useless.

On the third try, I set the system name to the IP address and set vCenter to simply use the host's time instead of NTP (even though I used the same NTP servers the host was using… so, shrug). I also waited a little bit longer before starting Stage 2, and on that third try the installation finally succeeded.

Then I added the license key, which was provided to me when I checked out the "purchase" on the VMUG Advantage partner site, and assigned it to vCenter.

Summary

Overall, the process is very straightforward. In the next post I'll cover adding hosts, assigning keys, and connecting the VCSA to an AD server for an alternative SSO identity source. Stay tuned!

8Bitdo Unresponsive with charge light stuck on

If you haven’t heard of 8Bitdo check them out. They have an amazing line of wireless controllers. I personally love my SN30 Pro.

The other day I wanted to play some Cuphead, and to my dismay the controller wasn't responding. When I disconnected the USB cable (as I had been charging it and using it at the same time just recently), the charge light remained lit.

Baffled, I tried looking in the manual and trying different button combinations and holds. After all this failed, I googled, which led me to this Reddit thread.

To my amazement the reply by “Tamoketh”

“The orange light stayed on?

Hold the Start button until the orange light turns off. Then turn on as normal.”

actually worked. I don't know why this isn't written in the user manual, considering the controller has no direct power button.

Migrating WordPress

The Story

In the beginning, there was a man! This man ran! This man ran a site, it was awww inspiring, and so unknown. It ran on Linux in many forms, then one day it realized!!!!!!!! Everything becomes out of date!

Ahem, anyway…

Reasons for migrating

Whatever the reason may be, mine happens to be this little gem right here:

I’m sorry… did that just say insecure… me… insecure…

OK, sometimes I can be a little insecure, but you didn't have to shove it in my face. Anyway, whatever your reason for migrating may be, I haven't done this before, and maybe you haven't either. So let's do this… together!

Yeah… I googled. This was my main source, so big thanks to Tom Ewer for the write-up; much like his, mine will be rather in-depth and manual. If you wish to avoid learning the nitty-gritty and just want to get it done, or want to use a plugin to help, see the wpbeginner guide here; they use a plugin called Duplicator, which seems to have solid reviews. Since I'm not a fan of paid magic, I'm gonna do this manually.

Migrating WordPress

Step 1 – Backup

Make sure you have a backup of your WordPress server; in my case, since they are VMs, I used Veeam to create a backup of my current WordPress server. Sadly, even after logging in to the hosting VM and updating the repos and host OS (Debian 8 Jessie), this version of Debian had run out of support and thus its repos weren't able to supply the updated PHP libraries needed to clear the alert.

After that backup I followed along with Tom's blog, but instead of FTP (File Transfer Protocol) I used SCP (yeah… I'm NOT insecure! :P), WinSCP that is. What SCP actually stands for I wasn't sure; apparently either Secure Contain Protect or Special Containment Procedures, but under the hood it just uses SSH? Oh: "Secure Copy (SCP) is a method of transferring files between computers over a secure channel. It uses the SSH protocol to do so."

In my case since it was Turnkey Linux, the web files were located at /var/www
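
For example, with the OpenSSH scp client that ships with recent Windows 10 builds (or WinSCP doing the same thing through its GUI), pulling the whole web root down looks roughly like this; the hostname and local path are examples:

# Copy the WordPress web root from the Turnkey VM to the local machine
scp -r root@wordpress.zewwy.ca:/var/www C:\wp-backup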

Step 2 – Export WP Database

Now, Tom ended up using phpMyAdmin in his example. I wasn't sure if the Turnkey image I deployed was using the same; it turns out, after a quick Google search (and right on the VM's console), it's Adminer, on port 12322.

after login…

Export….

Left all the default selections…. then… what the….

I was expecting a file to save; instead I had to select all and save it to a file with Notepad++. All this was was an output of the DBs in raw SQL.

Step 3 – Create DB Instances

Create your new DB instances. Depending on how the SQL server handles imports, creation of the actual DBs and their tables may vary.

Step 4 – Verify DB Login Credentials

From the backup files in step 1, look in config.php for the DB user connection strings. Verify this user has a login and proper access rights to the DBs and tables being imported.
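
Side note: in a standard WordPress install the file is named wp-config.php. A quick way to pull the connection strings out of the copy downloaded in Step 1 (the local path is an example):

# Show the DB connection settings from the downloaded config
Select-String -Path 'C:\wp-backup\www\wp-config.php' -Pattern 'DB_NAME|DB_USER|DB_PASSWORD|DB_HOST'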

Now….

What happened to me

Now, in my case I simply spun up a new TurnKey WordPress server to see if they were maintaining their images, and sure enough this new one was running on Debian 9 Stretch, which is the recommended version. So from here, although the Adminer version is the same at 4.2.5, it looks different. Another thing to note: on the old version I was able to log in to Adminer as 'root'; on the new Adminer I had to log in as 'adminer'.

Now Tom states to create the databases. I'm not sure if he means the instances here, because in my case it turned out the creation of the databases happens at import. So what happened to me was this… first I tried to import…

Then he says to edit the config.php. In my case, since it was all default WordPress, I figured, meh, hopefully it'll all match… hahah. So I skipped his Step 4, and also skipped his Step 5 because I had already done that; as I said, the creation of the DBs happens at import, at least if they don't already exist like in the above error. In my case it only stated that for mysql, not wordpress, so I assumed the other 4 successful queries were for my WordPress data (hahah, so many assumed mistakes).

So I uploaded those unedited web files back to the same dir on the new server….

and….

DOH! hahaha. That's why you don't skip Tom's Step 4. In my case I had no interest in creating these, as they should already exist, at least unless the WordPress image had changed that (luckily, in this case, it hadn't), so all I had to do was figure out how to set the wordpress user's password on the new DB to match the one in config.php. So I opened config.php, grabbed the wordpress user's password, and found out about this trick by James Goodwin… 😀

However, while I was able to load my new WordPress site, it was still the default one, not mine with my posts, themes, and plugins… what the… which brought me back to my "failed" DB import…

When WordPress gives you attitude, you drop 'em like they're hot, drop 'em like they're hot… but in this case WordPress is hot so I won't drop it; I will, however, drop those useless databases. Since this was all just a test server anyway, I went back into Adminer on the new server, simply selected both mysql and wordpress, and clicked the Drop button.

Then back into import, selected the same export SQL file… and…

and sure enough my site loaded with all my content, all my plugins, all my posts, and no more PHP warning!

Summary

Now, this covers the manual work to move WordPress; however, much like what Tom covered, the final touches of how the site is accessed (direct NAT, behind load balancers, etc.) are on you in order to complete the migration for public access.

In my case I'll probably prepare this new VM with the exact same private IP information, just on a separate vSwitch. Once it's verified working 100%, shut down the old VM, swap the vSwitch connection to production, and test; if good, delete the old VM, if not, change the vSwitch ports back and bring up the old server.

Worst case, delete both VMs and restore my original server from my Veeam backups.

*NOTE* I did experience one issue where I was unable to update WordPress or plugins after the migration. It turns out many posts point to file system permissions (which I suspected). I checked my old WordPress instance against my new one and noticed all the files under the physical path had a different owner and group (root on new vs. www-data on old), so…

chown -R www-data:www-data /path/to/wordpress

After that updates worked without issue.

Systems not showing up in WSUS console

When a system doesn’t show up in WSUS, do these steps on the system not showing up:

1) Verify this registry setting (usually set via a GPO):

reg query HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate

2) Verify the system can resolve the hostname of the WSUS server, if a hostname is specified. If an IP is specified, move on.

3) Use telnet against the specified port (8530 by default); this verifies layer 4 and that no firewall is in the way.

4) Ensure the Windows Update service is actually running via services.msc
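
A quick PowerShell equivalent of steps 2 through 4, assuming the WSUS server is wsus.zewwy.ca on the default port 8530 (both are examples):

Resolve-DnsName wsus.zewwy.ca                           # step 2: name resolution
Test-NetConnection wsus.zewwy.ca -Port 8530             # step 3: layer 4 / firewall check
Get-Service wuauserv | Select-Object Status, StartType  # step 4: Windows Update service state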

5) All else fails try this:

net stop wuauserv
reg delete HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\WindowsUpdate /v SusClientId /f
reg delete HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\WindowsUpdate /v SusClientIdValidation /f
net start wuauserv
wuauclt /resetauthorization /detectnow
wuauclt /reportnow

Wait a couple mins and hard refresh the WSUS MMC Snap-in. I noticed this trick also works for systems that are in WSUS but won’t report an install percentage of 100%.

I noticed one system was not showing a 100% install rate, with a yellow icon indicating updates were still needed; however, checking for updates on the system kept reporting it had all updates, even after running step 5 a couple of times.

So… to get updates WSUS doesn't have, on desktop versions of Windows there's usually a nice link that states "Check for updates from Windows Update Online", but this was a Core Hyper-V server with no GUI, so…

*Note* Bypassing WSUS on Core, without GPO changes.

reg add HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU /v UseWUServer /t REG_DWORD /d 0
net stop wuauserv
net start wuauserv
sconfig

run option 6 and check for all updates (this assumes the server/system has access to internet servers).

Don't forget to set the setting back afterwards:

reg add HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU /v UseWUServer /t REG_DWORD /d 1

Then run Step 5 again.

*Update* Feb 2020

I discovered there's a new "Expert" in WSUS named Adam. Following his suggestion on this TechNet post, I managed to resolve the issue and have the machine install the CU for 1909 and report its status to WSUS!

net stop bits
net stop wuauserv
reg delete "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\WindowsUpdate" /v AccountDomainSid /f
reg delete "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\WindowsUpdate" /v PingID /f
reg delete "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\WindowsUpdate" /v SusClientId /f
reg delete "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\WindowsUpdate" /v SusClientIDValidation /f
rd /s /q "C:\WINDOWS\SoftwareDistribution"
net start bits
net start wuauserv
wuauclt /resetauthorization /detectnow

Only a couple of systems may need manual intervention before all systems report successfully and are 100% updated.

I also first attempted this fix: "I have Declined all superseded updates and then run the clean up tool. Forced a client to /detectnow and it started working." That sadly did not work, but it was very nice to clear up the DB size and disk space used!

Setting Network Profile from Public to Domain on Core

Get Network interfaces profile

Get-NetConnectionProfile

Sometimes when you recover a VM from backup, the profile likes to change to Public, even though all the other network settings were recovered successfully.

NLA Reset via Disable/Enable NIC

The first thing to try is to simply disable and re-enable the NIC once connectivity to a DC is verified.

# Bounce the NICs so NLA re-detects the network (suppress the confirmation prompt)
Get-NetAdapter | Disable-NetAdapter -Confirm:$false
Get-NetAdapter | Enable-NetAdapter

All else fails, Set the DNS Suffix

If the NLA still refuses to show Domain status, try setting the DNS suffix…

Get-NetAdapter Ethernet | Set-DNSClient -ConnectionSpecificSuffix "Zewwy.ca" -UseSuffixWhenRegistering $true

Then disable and re-enable the interface again.

It should go back to DomainAuthenticated. If not, verify the computer is still actually authenticated with a DC using nltest.
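
For example (zewwy.ca being a placeholder for your own domain):

# Check the machine's secure channel to a DC in the specified domain
nltest /sc_verify:zewwy.ca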

Hope this helps someone.

HTTP to HTTPS redirect Sub-CA Core

The Story

One day I noticed I had configured my 2008 R2 CA server to automatically redirect to the certsrv site over HTTPS, even when navigating to the root site via HTTP. There was, however, no URL Rewrite module… and I didn't blog about it, so I had to figure out… how did I do it?! Why? … Because this…

Sucks, and why would you issue certificates over insecure HTTP? (Yeah, yeah, locked-down networks, it doesn't matter much, but still, if it's easy enough to secure, why not.)

The First Problem

The first problem should be pretty evident from the title alone… yeah, it's Core, which means no desktop, no GUI tools, not much of anything on the server itself. So we will have to manage the IIS settings remotely.

SubCA:
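
On the SubCA side, managing IIS remotely means the Web Management Service needs to be installed and allowed to accept remote connections; roughly the following (the standard steps, so treat this as a sketch rather than a transcript of the screenshot):

# On the SubCA (Core): install the IIS Management Service, allow remote connections, and start it
Install-WindowsFeature Web-Mgmt-Service
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\WebManagement\Server' -Name EnableRemoteManagement -Value 1
Set-Service WMSVC -StartupType Automatic
Start-Service WMSVC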

Nice, and…

Windows 10:

as well as install IIS Manager for Remote Administration 1.2 (Google Drive share). Why? See here.

and finally connect to the sub-CAs IIS…

and

Expand Sites, and highlight the default site…

Default Settings

By default you can notice a few things; first, there's no binding for port 443, the default port that HTTPS standardizes on.

Now, you could simply select the same computer certificate that was already issued to the machine for the actual Sub-CA role itself… and this will work…

However, navigating to the site gave certificate warnings, as I was accessing the site by a hostname different from the common name, and without any SANs specified you get certificate errors/warnings; not a great choice. So let's create a new certificate for IIS.

Alright, no worries I blogged about this as well

On the Windows 10 client machine, open MMC…

Certificates Snap in -> Comp -> SubCA

-> Personal -> Certificates -> Right Click open area -> All Tasks -> Advanced Operations -> Create Custom Request….

Next, Pick AD enrollment, Next, Template: Web Server; PKCS #10, Next,

Click Details, then Properties, populate the CN and SANS, Next

Save the request file, Open the CA Snap-in…. sign the cert…

provide the request file, and save the certificate…

import it back to the CA via the remote MMC cert snap-in…

Now back on IIS… let’s change the cert on the binding…

Mhmmmm not showing up in the list… let’s re-open IIS manager… nope cause…

I don’t have the key.

The Second Problem

I see. So even though I created the CSR on the server remotely, it doesn't have the key after importing. I didn't have this issue in my initial testing at work, so I'm not exactly sure what happened here, considering I followed all the steps I did before exactly… so, OK, weird. I thought this might be an LTSB bug (nope, tested on a 1903 client VM too) or something; it's the only difference I can think of at this moment.

In my initial tests of this, the SubCA did have the key with the cert, but attempting to bind it in IIS would always error out with an interesting error.

Which I'll now have to get a snippet of, as my home lab provided different results… which kind of annoys the shit out of me right now. So either you get the key with the "first method" and it still won't work (you get the above error), or you simply don't get the key with the request and import, and it never shows in the IIS bindings dropdown list.

Anyway, I only managed to resolve it by following the second method of creating a cert on IIS Core.

Enabling RDP on Core

Now, I'm lazy and didn't want to type out the whole INF file, and my first attempts to RDP in failed because of course you have to enable it first. I know how on the desktop version, but luckily MS finally documented this for Core…

so on the console of the SubCA:

cscript C:\Windows\System32\Scregedit.wsf /ar 0
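
If RDP still doesn't connect after that, the Windows Firewall rule group for it may also need enabling:

netsh advfirewall firewall set rule group="Remote Desktop" new enable=Yes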

open notepad and create CSR on SubCA directly…

save it, and convert it, and submit it!

Save!!!! the Cert!

Accept! The Cert!
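
In command form, that whole flow is roughly the following (the paths are placeholders for wherever you saved the INF):

certreq -new C:\temp\iis.inf C:\temp\iis.req      # turn the INF into a request file
certreq -submit C:\temp\iis.req C:\temp\iis.cer   # submit it to the CA and save the issued cert
certreq -accept C:\temp\iis.cer                   # accept/install the cert so it pairs with the private key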

Now in cert snap-in you can see the system has the key:

and it should now be selectable in IIS, and not give an error like the one shown above.

But first the default error messages section:

and add the new port binding:

Now we should be able to access the certsrv page securely or you know the welcome splash…

Now for the magic. I took the idea from this response:

"Make sure that under SSL Settings, Require SSL is not checked. Otherwise it will complain with 403.4 forbidden."

It's from the site I sourced in my original HTTP to HTTPS redirect post.

So…

Creating a custom Error Page

which gives this:

and finally, enable require SSL:

Now if you navigate to http://subca you get https://subca/certsrv
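
If you'd rather script those last two settings than click through IIS Manager remotely, the equivalent appcmd commands run on the SubCA look roughly like this (subca.zewwy.ca is a placeholder; I set mine through the GUI as shown above, so treat this as a sketch):

# Add a 403.4 custom error that redirects to the HTTPS certsrv URL
& "$env:windir\System32\inetsrv\appcmd.exe" set config 'Default Web Site' '-section:system.webServer/httpErrors' "/+[statusCode='403',subStatusCode='4',path='https://subca.zewwy.ca/certsrv',responseMode='Redirect']" /commit:apphost
# Require SSL on the site so plain HTTP requests trigger that 403.4 (and the redirect)
& "$env:windir\System32\inetsrv\appcmd.exe" set config 'Default Web Site' '-section:system.webServer/security/access' /sslFlags:Ssl /commit:apphost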

No URL rewrite module required:

Press enter.. and TADA:

Summary

There are always multiple ways to accomplish something. I like this method because I didn't have to install an additional module on my SubCA server, and it always enforces a secure connection when using the web portal to issue certificates. I also found no impact on any regular MMC-based requests. All good all around.

I hope someone enjoys this post! Cheers!

*UPDATE 2023* This trick caused my SubCA's CA service to not start, stating it failed to retrieve the CRL. This was because any attempt to retrieve the CRL over regular HTTP would redirect back to the certsrv site, while requests for the same CRL via HTTPS would work. So only implement this change if you have already edited your offline root and SubCA certificates to have CRL distribution points referencing HTTPS-based URLs.

Veeam, SMB, and the Failed to get disk free space

The Story

I wanted to try Veeam B&R Free again, now that I've discovered a trick for re-issuing the 60-day trial key on ESXi hosts, so I should be able to get past the old issue I blogged about in "The VMware Screw"…

So I downloaded the latest and greatest from Veeam, which is B&R 9.5 Update 4; grab the latest builds here (Veeam login required)…

Run the installer, nothing special here. Love the new UI, amazing how much nicer it is vs the old Free Edition.

Anyway, navigate to Backup Infrastructure to add a repo; in this case, a simple USB HDD I was sharing via SMB on a FreeNAS server. I had created it with open access, so no authentication was required to access the share.

As shown here, I was accessing the file share without issue in Windows Explorer…

However, attempting to add it as a Repo…

Whomp, whomm, whmomomomomom.

It's kind of annoying that anonymous SMB is, I guess, not supported as a repo type, or maybe just not with my particular setup; I'm not exactly sure of the exact reason this error gets hit, as I don't have access to the Veeam source code. Anyway, I started to google for a possible solution. Annoyingly, the first result was simply a post in which a Veeam rep pointed to the second most common solution post, which basically stated:

“add the registry setting:

Key: HKLM\SOFTWARE\Veeam\Veeam Backup and Replication\
DWORD: NetUseShareAccess = 1

As per KB1735”
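
In other words, from an elevated prompt:

reg add "HKLM\SOFTWARE\Veeam\Veeam Backup and Replication" /v NetUseShareAccess /t REG_DWORD /d 1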

Did that and… "Failed to get disk free space." Since a lot of people in these threads mentioned the use of a username and password, I decided to follow this guide on creating a user for an SMB share on FreeNAS.

Follow the Veeam wizard and… “Can’t get disk free space”

Ughhhh… I was about to give up and actually attempt a physical alternative, then I noticed something people were saying…

ender – “Depending on your SMB server, you may need to enter the username as DOMAIN\user or SERVERNAME\user before it’s accepted.”

StivoBerlin – “Have you tried to write “YOUDOMAINONYOURSYNOLOGY\youruser” instead of only “youruser” for the NAS login ?”

michaelbrandi – “I have found that it makes a difference if you enter “full” credentials so not admin but IP\admin, not sure it’s universal, but it helped me.”

That's when I added a credential to Veeam as "FREENAS\TestUser" along with the password, used that credential after entering the path, and got past the error!

I was actually rather impressed at the speed of the backups considering it was a USB drive shared over the network via SMB from FreeNAS…

Look at that, backed up a 20 gig vm in 8 minutes wooo! Not bad for Free.

Maybe I’ll re-follow up on my old free VM backup series and bring it back with a proper tut on each step required to make it work.

For now I hope this information helps someone!

ESXi 6.5 on Proliant Gen9 Hardware Status Unknown

I’ll keep this post short.

If you have a ProLiant Gen9 server running ESXi 6.5 U2 along with VCSA 6.5 U2:

You will see hosts not displaying any hardware status. This should be fixed immediately, as you won't get alerts on any hardware faults via IPMI. This includes status from hosts running ESXi 5.5 or 6.5.

The first fix is to upgrade the VCSA to 6.5u3.

After upgrading the VCSA to 6.5 U3, hardware status will come back for each host… however, if you are running ESXi 6.5 U2 on the Gen9 servers you'll see something like this:

as you can see some sensors are a lil wonky…

The fix here is to upgrade the host to 6.5u3 via the HPE build.

After the hosts and the VCSA are on 6.5 U3, all is good and hardware faults will once again trigger critical alarms in vSphere.

Migrating Users and Passwords
Same NetBIOS Name

Story

*NOTE* All information here is provided as is for educational purposes only, whatever you do in your own environment is on you.

I want to give a bit of background here. The goal was to migrate users to a new child domain, from what was previously a two-way forest-wide trust. In this scenario there was no need for a two-way trust, as both domains were owned and operated by the same corporation. A simplified AD structure was conceived: new workflow servers, new clean permissions throughout.

Everything was coming along swimmingly until I hit the question, after some extensive research, of how to migrate users and their passwords without service interruptions…

With Windows Authentication on the back end there was only one choice: ADMT v3.2 and its associated documentation… yeah, that's not a content-based website, it's a download link to a doc that was last updated in 2014, with support up to Windows Server 2012 R2.

It states the following:

Any DB works (I used SQL 2016 Express)
Other things, read it if you wish to be over/under whelmed
Needs a Trust

Simple Overview

Here's a simple idea of how the forests are laid out, and you can see where the users are planned to be migrated… just one major problem: you can't create a trust between forests where the NetBIOS names match. And yes, that is a thread from 2011, unanswered, which I will answer for you tonight. As you can see from my design, that is exactly the problem I was facing. I was initially hoping this could be done without a trust, and chasing that idea is what led me to the answer…

You can do this via trust but not how you might normally think about it.

Build out a new dc in your source domain and allow it to replicate properly, be sure that it is a dc/gc and dns server. Disconnect this dc from the current domain and expect to NEVER connect to this domain again.

Do a metadata cleanup of this dc….

Along with this: How to rename the NetBIOS name. Now mix everything into one huge sch-melting pot and what do we get… this blog post.

Setup

ADMT

*Note* Don't bother installing SQL + ADMT until the member server is domain-joined to the target domain. In this example Windows Server 2016 is used to host the ADMT services, so this server is domain-joined to Special.NewDomain.com.

First Download ADMT from here

Based on the size of the installer I'm assuming it's an online installer, so instead of dicking around trying to find an offline version… simply connect this server to the internet. However, before we begin, ADMT requires a SQL instance to utilize; to keep life easy we will install SQL Express on this server and run it locally for the migration.

Installing SQL Express 2016

Now, looking at where to find the specific version to download and use… I wasn't sure which version was best,
so I decided to start with Express Core.
Next, next, next, Mixed Auth (just in case), set the sa password.

Installing ADMT

Now with SQL Express on the target domain joined server, double click the ADMT v3.2 installer exe.

Accept the EULA. Accept/Decline the CEIP

Use local SQL and….

DB Import… NO, next

Now we should be good to run ADMT (as a domain admin on Special.NewDomain.com). IF you installed SQL + ADMT before joining the target domain, did not choose mixed auth, and have no sa account, follow my previous blog post to recover access to the SQL Express instance and grant your domain admin account access.

Preparing the Source Domain

Now, how you choose to accomplish this is entirely up to you. If you wish to go the route suggested by the TechNet post (create a new DC that's a GC, then rip it out of the forest/domain via a metadata cleanup), be my guest, but that's a lot of work.

Instead I chose to simply create a secondary copy of the Special.local DC from a backup, but I could just as easily have made a clone since it's all virtualized. So for me it started out like this…

Clearly at this point a trust still can't be established, as the NetBIOS names are still the same; however, we now have no fear of mucking up the source domain, as it's simply a clone with all users and their passwords still encrypted within AD. So this migration will require two things:

1 – A domain rename

2- Password Export Server (Covered later in this post)

Renaming the Source Domain

Changing the IP Address

Since this is a clone, and I was not interested in alternative firewalls outside the Windows firewall, I connected the cloned source DC to the same subnet as the target domain to ease life. This requires the IP address to change.

So open Network and Sharing Center and edit the adapter settings accordingly. This will, however, break the DC's DNS service, so let's fix that.

Deleting the DC A host Record

On the DC, open the DNS snap-in and navigate to the top where the SOA and NS records are; below that, find the A host record for the DC itself and delete that record (remember, this should be a clone or copy of the original source DC, so there should be no risk here).

Reset DNS settings

ipconfig /flushdns
net stop dns
net stop netlogon
net start dns
net start netlogon

Create a new DNS zone

Open DNS snap-in again on cloned source DC, and create a new DNS Zone for the new domain name.

Complete the wizard.

Configure Domain to Accept new DNS Suffix

– Open ADSI Edit
– Right Click ADSI Edit -> Connect to…
– Leave defaults -> ok
– Expand “Default Naming Context”
– Right Click Domain Parent Object -> Properties

– Enter the new domain name into msDS-AllowedDNSSuffixes (or see the PowerShell sketch below)
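
If you prefer PowerShell over ADSI Edit, the same attribute can be set with Set-ADDomain; the domain and suffix names here are just examples standing in for the cloned source domain and its new name:

# Add the new DNS suffix to msDS-AllowedDNSSuffixes on the (cloned) source domain
Set-ADDomain -Identity 'special.local' -AllowedDNSSuffixes @{Add='notspecial.com'}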

Enable update DNS Suffix option

The server will reboot after this step; again, since it is a clone and not actually hosting AD services for any production need, this is no problem, right?

Get the required XML to edit:

Everything I was reading online stated that you need to do all this from a member server, and that you need to copy rendom and another utility from System32 on a DC, etc., etc. All a bunch of rubbish… every time I attempted to follow such guides, the rendom command would spit out some lines that seemed like they were supposed to be parsed by something else to provide useful output. People stated that running it from System32 directly fixed the issue for them, but not for me. Instead I decided to run all the commands directly on the DC, since that was the lowest risk for me, as stated several times above. Sooo…

– Open CMD as admin on DC
– Run “rendom /list”

Edit the XML file (open via elevated cmd prompt)

save and check by running “rendom /showforest”

It should report the changes you made to the XML file.

Upload the XML to the DC

Prepare DC

Execute Domain Rename

Let DC reboot and then complete the rename.

End the Domain Rename process
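
For reference, the rendom commands behind the steps above are roughly as follows (run from an elevated prompt on the DC, as described):

rendom /list        # dump the current forest description to Domainlist.xml
# (edit Domainlist.xml, replacing the old DNS/NetBIOS names with the new ones)
rendom /showforest  # sanity-check the edited file
rendom /upload      # upload the rename instructions to the domain naming master
rendom /prepare     # verify the DC is ready for the rename
rendom /execute     # perform the rename (the DC reboots)
rendom /end         # unfreeze the forest configuration once the rename is verified
# gpfixup is normally run afterwards as well, to repair Group Policy references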

So now the setup should look like this…

Now, as you can see, we no longer have the same NetBIOS name, and thus we can create a trust here to migrate users using ADMT and PES. Yay!

The Trust

Conditional Forwarders

For the trust to work, each domain must be able to reach the other domain's DC via FQDN. This obviously requires conditional forwarders to be configured on each DC accordingly.

So, open MMC.exe from a member system with RSAT installed (or directly on Special.NewDomain.com if it has the Desktop Experience), then add the DNS snap-in. Add a conditional forwarder; in my case I added NotSpecial.com pointing to the IP address of the cloned and renamed DC.

Then do the same thing on the NotSpecial.com DC: open the DNS console (or connect remotely with RSAT) and create a conditional forwarder for special.newdomain.com pointed to the IP address of the actual child DC, in the same subnet as in the diagram.
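
With the DnsServer RSAT module, the same two forwarders can be created from PowerShell; the zone names match the diagram and the 10.0.0.x addresses are examples:

# On the Special.NewDomain.com DC (or remotely with -ComputerName):
Add-DnsServerConditionalForwarderZone -Name 'notspecial.com' -MasterServers 10.0.0.20
# On the NotSpecial.com DC:
Add-DnsServerConditionalForwarderZone -Name 'special.newdomain.com' -MasterServers 10.0.0.10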

At this point, ensure the target DC (Special.NewDomain.com) can ping NotSpecial.com, and the source DC (NotSpecial.com) can ping Special.NewDomain.com. If yes, we can now go ahead and build the trust.

Building the Trust

Open domain and trusts. Right click domain and properties:

Click the Trusts Tab, New Trust:

Complete the wizard for both sides of the trust. I had a domain admin account in each source and target domain.

With an admin account in each domain, and already logged in as a domain admin on the NotSpecial domain, the wizard completes successfully:

Now, with a trust in place, we could start migrating users, but we need those passwords migrated as well, or else we will have a bunch of angry users.

ADMT & PES

Permissions

Nest Special\DomainAdmin into the NotSpecial\Builtin\Administrators group, as well as into the Special\Builtin\Administrators group. You might be wondering why. Well, I hit this error when attempting to migrate users' passwords:

After reading this, I made the changes above and it finally got past this error when attempting to migrate users passwords.

Logged on to ADMT as Special\DomainAdmin

Password Export Server Setup

Step 1) Create the encryption key for the migration:

admt key /option:create /sourcedomain:notspecial.com /keyfile:"C:\path\to\file.pes" /keypassword:*

Step 2) Copy the Key to the NotSpecial.com DC (I used RDP)

Step 3) Grab PES installer from here, and get it on NotSpecial.com DC

You should now have this on NotSpecial.com DC:

Step 4) Install PES by running the MSI from an elevated cmd prompt.
If you're wondering why: I was about to smash a monitor when the installer kept telling me the password was wrong for the encryption file, when I knew for certain I wasn't putting it in wrong, and someone else blogged about it.

I installed it using the Local System account, because again, this DC will be shut down after the migration.

Step 5) Complete the Install and after reboot start the service

ADMT and Migrating Users

At this point we should be officially ready to migrate users. On the ADMT server, open ADMT:

Right click the folder and select the user migration wizard.
Populate the domain names; the source domain controllers should be picked up automatically.

Select your users, Pick a target OU, then select to migrate password:

Given you followed the permissions section, this should work:

Keep target state same as source and don’t copy SID as we have no intention of using SID filtering.

These settings worked great for me, change based on your needs.

again these settings worked for me.

After the process…

It worked! However, I was amazed that even in my first test run there was one noticeable message in the log:

Rename UPN name user@NewDomain.com to user@NewDomain.com. Cannot create accounts with the same UPN name as another UPN in the enterprise.

Well, because there already exists a user with that UPN at the parent. But why is it picking the parent suffix when setting the UPN? Who knows… but much like that reference, you can bulk-select users in the Active Directory Users and Computers snap-in and select the child domain suffix from the drop-down text box.

Summary

  1. Create a Copy of the Source Domain Controller
  2. Rename its Domain
  3. Connect to target domain subnet
  4. Create conditional forwarders
  5. Create two-way trust
  6. Setup ADMT
  7. Setup PES
  8. Migrate Users
  9. Remove Trust, and Shutdown NotSpecial.com DC
  10. Happy Dance

Hope everyone enjoyed this post, and hopefully someone finds it useful.

*Update* Here's a really good blog on SID Filtering between forest trusts!

Resetting Access to SQLEXPRESS 2016

So today I wanted to rerun a task using ADMT on a server I had configured. I was now connecting to this server with a different domain account than the one I used when I first configured the server for my first test run. Now, for this particular DB and server's purpose I could easily rebuild… but what if you're in a situation in which the data, and access to it, is much more important?

Also… I was lazy… so I researched. After a little while (my dev got involved too) it was simply a mission… a purpose… a GOAL to figure it out. In the end it was really easy, it was just SYNTAX. Oh gawd, the importance of syntax.

Now, the SQL team at Microsoft does a lot of wonky things and doesn't follow standards that most other divisions follow, so it's hats off to the walls; if the SQL guys are on mushrooms today, better expect some funky changes without notice or documentation. Wait… what… anyway…

My research began… Stack Overflow… to… a random blog archived on the Wayback Machine. Ahhh, I feel that's what will become of my blog…

Let me paraphrase everything:

  1. If you are a local admin you can fix this
  2. SYNTAX

My main issue was that even though I kept starting the service with sqlservr -m as the blog posted, it resulted in an error; it turns out you have to specify the instance name… so:

  1. change DIR to “C:\Program Files\Microsoft SQL Server\MSSQL13.SQLEXPRESS\MSSQL\Binn”
  2. run “sqlservr -s SQLEXPRESS -m”
    *Note* If you are on said mushroom you can skip the space and run “sqlservr -sSQLEXPRESS -m” and it will work too
    At this point there are two lines to watch out for:
    A) SQL service is running in single user mode (-m)
    B) SQL service is running under admin account that is logged in
  3. open another cmd prompt as admin and run “sqlcmd -S COMPUTERNAME\INSTANCENAME -E”
    E.G. sqlcmd -S SQLSERVER\SQLEXPRESS -E
    *NOTE* The computer name is super important, else you will get a logon failure, with the other console screen showing the computer account instead of the user account for some reason.

Because of this error I kept trying other things. My dev tried a couple of things and thought one of them was enabling the TCP/IP stack, but I pointed out these were all local commands and connections with sqlcmd, since this SQL Express doesn't come bundled with SSMS and there was no member server or client system with SSMS available to connect or use; hence resetting access locally.

Once you get the 1> prompt you can just follow the other guides, which is:

1> CREATE LOGIN [DOMAIN\USER] FROM WINDOWS
2> go
1> exec sp_addsrvrolemember @loginame='DOMAIN\USER', @rolename='sysadmin'
2> go
1> exit

Yes that is loginame and not loginname, cause mushrooms.