Publishing my own personal MX record in hopes of getting my own email going….
I decided to see why my outbound email wasn’t working (sigh, even following Paul Cunningham’s post it seems I’m missing something). All my outbound SMTP connections to external mail servers seem to be failing. According to my firewall (Palo Alto) the rule is allowing them out, but the application shows incomplete… like it’s never establishing a connection. So, as in my previous posts, I used telnet to attempt a connection on port 25 to known external SMTP mail servers, and sure enough no connections could be established. (I know I’ll eventually have to create a receive connector for outside sources and a security rule to allow email from outside in, but I wanted to tackle email going out first.)
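Side note: rather than telnetting to each external server one at a time, the same reachability check can be scripted. A minimal sketch (the hostnames here are placeholders, not my actual targets):

```powershell
# Probe outbound SMTP (TCP 25) against a list of known mail servers
"mail1.example.com", "mail2.example.com" | ForEach-Object {
    Test-NetConnection -ComputerName $_ -Port 25 |
        Select-Object ComputerName, TcpTestSucceeded
}
```

If `TcpTestSucceeded` comes back False across the board, it's the same "incomplete" story the firewall was showing.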
I decided to attempt the same port 25 connection to the new record I created (I have multiple internet connections to utilize, so I can actually test connections from “outside” instead of having to rely on a loopback NAT rule or anything). To my dismay it showed failed to connect (I already expected this, as I had created a NAT rule but never a security rule to allow the connection). I went to my Monitor tab to see if I could find the attempted connection, and indeed I did. What surprised me more, though, were the failed attempts from others in the short time since I created this record (considering I had the IP for a long time and pretty much all ports were blocked forever, I didn’t expect many attempts). These were either crawlers or something else…. but guess who the very first was….
University of Michigan (AS36375)
Not once, but twice from two sequential IP addresses…. Mhmmm what are those Michigans up to?
Not sure who they are, might have to check em out..
Couple hours later…
I guess it only makes sense that after the Americans and Anonymous it’d be nothing other than the Russians, right…. To be fair I don’t actually know wtf this is lol, Japanese mixed with Russian or some pile of who knows what.
They at least tried three times in a row from the same IP (good idea: if it doesn’t work once, heck, try again a couple times).
Then my attempt… pretty funny what you can hear if you just listen…
This isn’t actually “I Spike You!” like from the old school GoldenEye movie, but this is what you’d actually do if you wanted to “spike” someone online. This is my actual server I plan to use, of course, but if I really wanted to find out what people are up to I’d create a honeypot. Maybe now that I post this, they’ll think my MX record is a honeypot, but it’ll secretly come into use… sometime…. lol
Well, in my previous post I discussed the issue I faced resolving an email problem with one of our development applications, which was unable to send emails after a recent Exchange upgrade/migration. Initially we were going to simply rebuild our own workflow in-house using ASP.NET Core, until we noticed that even our own workflows were failing… in this case the answer from the old post was super vague: “reconfigure the receive connector”. Then I somehow stumbled upon my answer through one of my hundreds of Google searches… I found this gem!
OK, before I link the gem which will be the source of my answer, I also wanted to point something out real quick here, in hopes maybe someone can comment the answer to this one below:
When using Exchange 2016 as an SMTP relay with a no-reply from address and an external email address for the destination, how do you query whether the message has gone through or is stuck in the queue? Everywhere I looked in the ECP it required me to select a mailbox… there’s no mailbox associated with these relayed email messages, so how does one check this?
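For what it’s worth, the Management Shell may be a better avenue than the ECP here, since these cmdlets don’t require picking a mailbox. A hedged sketch (the sender address is a placeholder for your no-reply address):

```powershell
# Anything still sitting in a transport queue:
Get-Queue | Get-Message |
    Where-Object { $_.FromAddress -like "no-reply@*" }

# Anything that already went through (or bounced), from the tracking logs:
Get-MessageTrackingLog -Sender "no-reply@example.com" -Start (Get-Date).AddDays(-1)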
OK, now for the gem. This guy “Paul Cunningham” He’s… uhhhhhh… He’s uhhhhh… he’s uhhhh a good guy. So I always knew you could use telnet to check certain ports and services… but this was so concise… it nailed the problem…
From my K2 Server or my in house workflow server:
1) Ensure Telnet Client feature is enabled
2) Open cmd prompt or PowerShell:
telnet exchangeServer 25
mail from: email@example.com
rcpt to: ExternalUser@gmail.ca
220 EXSERVER.exchange2016demo.com Microsoft ESMTP MAIL Service ready at Thu, 22
Jun 2018 12:04:45 +1000
250 EXSERVER.exchange2016demo.com Hello [192.168.0.30]
mail from: firstname.lastname@example.org
250 2.1.0 Sender OK
rcpt to: email@example.com
550 5.7.54 SMTP; Unable to relay recipient in non-accepted domain
huh, just like the source blog. Now why would I be getting that error… I allowed anonymous users via the check box under the receive connector’s security tab… yet Paul does a lil extra step that doesn’t seem to be mentioned elsewhere; that check box I mentioned is his first line, but then look at the interesting second line….
Since I was using anonymous settings on the application server side (K2 in this case), I gave the second PowerShell cmdlet a run from my new Exchange server.
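For anyone following along, that second cmdlet from Paul’s post, the one that grants anonymous relay rights on the connector, is along these lines (the connector name here is a placeholder for yours):

```powershell
# Allow ANONYMOUS LOGON to relay to non-accepted (external) domains on this connector
Get-ReceiveConnector "EXSERVER\Relay" | Add-ADPermission `
    -User "NT AUTHORITY\ANONYMOUS LOGON" `
    -ExtendedRights "Ms-Exch-SMTP-Accept-Any-Recipient"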
Amazingly enough, just like the source blog, after running the second line (edited to fit my environment, obviously) the rcpt to succeeded!
220 EXSERVER.exchange2016demo.com Microsoft ESMTP MAIL Service ready at Thu, 22
Jun 2018 12:59:39 +1000
250 EXSERVER.exchange2016demo.com Hello [192.168.0.30]
mail from: firstname.lastname@example.org
250 2.1.0 Sender OK
rcpt to: email@example.com
250 2.1.5 Recipient OK
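The same manual telnet conversation can be driven from PowerShell via .NET sockets if you want to repeat the probe without typing it each time. A rough sketch (server name and addresses are placeholders):

```powershell
# Talk raw SMTP to the server and echo each reply line
$client = New-Object System.Net.Sockets.TcpClient("exchangeServer", 25)
$stream = $client.GetStream()
$reader = New-Object System.IO.StreamReader($stream)
$writer = New-Object System.IO.StreamWriter($stream)
$writer.NewLine = "`r`n"; $writer.AutoFlush = $true

$reader.ReadLine()                                   # 220 banner
$writer.WriteLine("HELO test"); $reader.ReadLine()   # 250 Hello
$writer.WriteLine("MAIL FROM: firstname.lastname@example.org"); $reader.ReadLine()
$writer.WriteLine("RCPT TO: email@example.com"); $reader.ReadLine()   # 250 OK or 550 relay denied
$writer.WriteLine("QUIT"); $client.Close()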
Part 2 – The Solution
If K2 is configured to use EWS, check that stuff out elsewhere. If you landed here from my previous post looking for the answer to “There is no connection string for the destination email address ‘Email Address'” and wanted to know how that person altered his receive connector, read on:
I’m going to keep this post short, just in hopes that I don’t go off the rails on this product: K2 Blackpearl 4.7. I have plenty of awesome SharePoint 2010 to 2016 migration content yet to post on my site. I’m sorry, I wish I could get all the awesome things I do on here; there are many awesome things I keep thinking about: the iOmega NAS conversion I did replacing the ix12 OS with FreeNAS, my test environment, iSCSI MPIO (VMware, Microsoft, Linux configs)… anyway… K2…. ugh
I’ve never posted about this product on my site before, cause to be frank…. I tried to stay away from it as much as possible, so all I did was update the base OS and pray that the developers or users didn’t complain about errors in any of the “K2 apps”. Trust me, there are lots; I can’t tell you how many times I had to hear about K2 issues… anyway, I digress. You’d figure it’s this simple, eh… well, for running the setup manager, for first time config, sure… but you have to run it again if you ever want to change this value… *ahem*… alright, so let’s say you did this… it’ll just work, right? You set the email server destination and port, so it’s gotta work for everything on the server (we’re talking standalone, not clustered), right?…. Nope… So you eventually find that this is the most common, and most generic, error you will see if you ever have any email issues with K2: “There is no connection string for the destination email address ‘Email Address'” and you will get it for a lot of different things. Oddly enough some of it gets covered here. But it’s a mess, and you have no clue which problem is the cause of the error. So, much like the shared link there:
Check 1 – Environment Variables
Check the Environment Variables (If you are not sure what Environment Variables are you can read this *Ahem* Awesome… WhitePaper)
Alright… so we re-ran the config manager, updated the email settings there, updated our environment variables; we gotta be good now!…..
“There is no connection string for the destination email address ‘Email Address'”
Check 2 – SMTP Config Strings
You’ve got to be…… ok, ok…. we got this… there’s got to be something else we must have missed… let’s see… mhmmm, as Mikhal says here…
“Email configuration is externalized from process and K2 server relies on connection strings in configuration file, processes look at Environment Library, but you should also keep in mind String Table. I saw cases when people did update of Mail Server field in environment library, but their workflow was deployed from other environment with old/incorrect email settings which were written into String Table during deployment time – so you should also make sure that you have correct settings there.”
soooo… From K2’s terrible support page, you can either dig in and manually edit “k2hostserver.exe.config” (really….. really….. .exe.config…..) or load the terrible Windows application that they use to edit this XML-type config file. Now you might find yourself wondering, “Do I have to create a new SMTP connection string for every from and destination address? That’s…. just…… unmanageable!” And yup, it sort of is. What my awesome colleague and I discovered (he’s a K2 Master, by the way) is that any internal “spoofed” mail (since we had decided we didn’t utilize any of K2’s EWS integration) would only work when that particular user had an SMTP string in this tool: ConnectionStringEditor.exe
It took my colleague a really long time before magically discovering what the syntax was for a wildcard in the SMTP connection string, e.g. *@Zewwy.ca …. Drum roll…….. NOTHING
That’s right: nothing. To make a wildcard SMTP connection string, simply leave the field blank in the first step of the wizard. Alright… so now we had validated that workflows could send SMTP-based email on behalf of users (even to Exchange, without any EWS integration)… However, we also utilized another workflow to send email to external addresses…
“There is no connection string for the destination email address ‘Email Address'”
Check 3 – SMTP Receive Connector
Are you Kidding ME?!?!?! Alright…. Jesus what else did I forget/miss…
At this point you may find yourself a lil bit stuck as every other post points you to the same solutions above, or like this jerk-off goes over everything, then laughs in your face and says “‘There is no connection string for the destination email address firstname.lastname@example.org’. That is a pity. I thought we just added it. Ok, I admit, I knew this was going to be the result, but I thought I’d keep you intrigued. Now you will have to wait for my next article to see how to fix it. You can find the answer in part 2 of this article, along with a few other tips on how to resolve some other issues when trying to send an email through a SMTP server.” only to find that there is no Part 2….
So I was kind of stuck with the initial share I gave…
“Thanks for your help. We found the root reason is that the Exchange Server config the receive policy so we changed the policy then resolved the problem.”
As if dealing with issues isn’t bad enough, the internet is littered with useless help. “Don’t worry guys, I figured out my problem; if you have this problem too, well, I figured out mine, good luck with yours.” Anyway, again I digress. I am not such a jerk, and I will tell you how I finally managed to resolve this issue for good.
So I double checked my receive connector on my Exchange server… maybe there’s just something I missed… well… I covered everything: no TLS…. using port 25, listening only for my K2 server, Anonymous Users checked off under security… What the heck, I have them all covered…
Check 4 – Part 2
I was at my wits end until this! (Part 2 LOLOLOLOL, for real though… it’s coming soon. in like 20-30 mins, maybe an hour shouldn’t take me long to write up)
This can however be expanded on, knowing PowerShell is object oriented: roomname is actually any mailbox, :\ is the delimiter, and Calendar is the sub-folder. For -User, pass $Group or $User (groups are usually best practice per IDGLA), and finally for -AccessRights you can specify any of the access rights you see in the pull-down option in Outlook.
That’s it. That’s all there is to managing mailbox sub-item permissions.
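Putting that together, a full example might look like this (the mailbox and group names are made up):

```powershell
# Grant the "Office Staff" group Editor rights on the Boardroom mailbox's Calendar sub-folder
Add-MailboxFolderPermission -Identity "Boardroom:\Calendar" -User "Office Staff" -AccessRights Editor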
In my quest to completely rebuild my company domain from the ground up (new Domain Controllers (Server 2016 Core), SQL 2016, SharePoint 2016, Exchange 2016…. you get the idea) I’ve had to face many interesting challenges (not all blogged about just yet, but if you follow my TechNet posts you’ll see I have plenty of content to write about moving forward, if I don’t come across interesting new things; at this rate that seems unlikely… anyway). This time it was another interesting one.
After spending the entire weekend combing over every little detail of the migration (mail relays, systems that use email, how they send email, the receive connectors they would need (auth types, TLS security, etc.)) I figured I had all my bases covered, and made the switch (all changes including expanding existing server configs to allow mail flow, not hinder any). The last part of my switch was ensuring most servers/services were using a Fully Qualified Domain Name (FQDN) in their settings/configs for SMTP. So my cut-over in this case was very simple: change the A host record, and clear all systems’ DNS caches. To my amazement everything was still working (even ActiveSync cut over without a hiccup)…. Until… the next day…
The Next Day
Out of all things, I didn’t anticipate internal email flow to break… I mean… there’s nothing different between Joe.email@example.com and Joe.firstname.lastname@example.org, right??!?! Wrong! lol, with Microsoft Exchange you are totally and utterly wrong! First… read this and read this to understand exactly what I mean. In short, internal email likes to use its own special address… (gives a dirty look)… called an X.500 address (AKA the LegacyExchangeDN), a bunch of garbage, muff cabbage BS…. So instead of everything resolving normally, cause all new linked mailboxes had the proper SMTP address (so all other outbound/inbound flowed without issue), users wanting to reply to old emails, or creating new ones and having the TO field auto-populated, would get a stupid NDR (AKA a bounce back telling them the email can’t be sent to the recipient), cause FFS it can’t just use the SMTP address, NOOOOOOOO, it uses the stupid legacy X.500 address… Gosh darn *mumbles* Exchange… in case you can’t tell, I despise email with a passion.
Anyway, I looked up what the possible solutions were; I wasn’t too happy. For now I was telling people to simply remove the old auto-populate cache Outlook was using. As for existing calendar events, turns out all resources fell under the exact same annoying problem: even though I created them all with the same aliases, it wasn’t good enough. Exchange was seeing them via their old X.500 addresses (since all old calendar items were imported from backup), while they had new X.500 addresses on the new Exchange server. So I would have to tell people to remove those resources, and simply re-add them.
There is however a problem with this, and that is if someone edits an existing calendar event and changes the time, the room may already have been booked (the new room), so when the user editing the old recurring event goes to re-add the room, it complains about a conflict: someone has already booked the “new” room, even though it should have been held by the initial booking. Alright, so how does one re-map this… well, it took me a while digging through Google like this guy, or this guy (seems everyone’s a blogger these days), but I found an excellent blog that covers the problem, and the solution, pretty clearly…
To keep it short for everyone, and as usual to paraphrase the solution, which so far is not even working for me :@, even after waiting 18 hours. *UPDATE!* Don’t put in X.500 like the stupid UI tells you to… just put in X500 without the ****ing dot… See my TechNet post for details!
1) Open User and Computers (From linked Domain/ Old Domain)
2) Find any User/Resource/Equipment object that were migrated
3) Right Click and select attribute editor tab (requires advanced view)
4) press "L" and lookup LegacyExchangeDN, double click and copy
5) Open Exchange ECP (New Server)
6) Under Recipients double click migrated mailbox, click email address
7) Add new email, Type X500, paste the address you copied in step 4
8) Wait for the OAB to synchronize across the farm and clients
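If you’d rather skip the ECP clicking in steps 5 through 7, the same thing can presumably be done from the shell (the mailbox name and the DN string are placeholders; paste the real LegacyExchangeDN you copied in step 4):

```powershell
# Add the old LegacyExchangeDN as an X500 proxy address (note: X500, no dot)
Set-Mailbox "jsmith" -EmailAddresses @{
    add = "X500:/o=OldOrg/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Recipients/cn=jsmith"
}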
Working on an Exchange migration this weekend, I was using our backup software to simply export users’ mailboxes from the most recent backup of the old Exchange server, then importing them into the new Exchange server after each mailbox was created.
I would have loved to simply select each user’s mailbox as a whole and import those PST files. However, testing showed it simply created a sub-item with the user’s name and all their folders, instead of properly placing them under the primary parent hierarchy. So I was forced to export each item individually (Inbox, Sent Items, Drafts, etc.) and import them. I initially didn’t script this, as there were only about 30-40 users I had to migrate; I figured it was easier to just go through the wizards… until I discovered some users created folders outside of their Inbox! Ohhh boy…. Anyway, turns out if you exceed 9 imports for a single mailbox without specifying a special name for each (even after they succeed) you will get an error as follows:
“The name must be unique per mailbox. There isn’t a default name available for a new request owned by mailbox xyz”
However, in my case I found I was still getting the error even though I had cleared all completed import requests (with default names, obviously). It turned out I was hitting a weird bug where imports were showing as Queued, yet if I piped them into Get-MailboxImportRequestStatistics | Select Status, they reported a status of Completed… (If you want all the details, pipe into Format-List instead of Select.)
Get-MailboxImportRequest -Status Queued | Get-MailboxImportRequestStatistics | Select Status
lol, I wasn’t sure what to make of this, but there were two solutions:
Clear the “Queued” imports that are really Completed.
Give your new import a unique name using the -name parameter
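And of course this was the point where scripting it finally made sense. A sketch of option 2 (the paths, mailbox, and folder list are placeholders for my environment):

```powershell
# One uniquely named import request per exported folder, so the
# default-name-per-mailbox limit never bites
$folders = "Inbox", "Sent Items", "Drafts"
foreach ($f in $folders) {
    New-MailboxImportRequest -Mailbox "jsmith" `
        -FilePath "\\backupserver\pst\jsmith - $f.pst" `
        -TargetRootFolder $f `
        -Name "jsmith-$f"
}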
I’ll admit though, Exchange 2016 is more intuitive to manage than old Exchange 2010.
I’m going to keep this post short, so there won’t be any use of the TOC plug I recently deployed. 😛
I recently used an image of a sysprepped machine to deploy new machines. To my dismay the image was created with an MBR partition table and was mostly used via BIOS boot options. This isn’t very secure, as it misses out on many of the security features of UEFI.
It’s been well known that moving from MBR to GPT was, back in the day, a painful process. I won’t go over the details, as this “Microsoft Mechanics” video does a decent job of that.
Install the subordinate certificate authority
Request and approve a CA certificate from the offline root CA
Configure the subordinate CA for the CRL to work correctly
You need to be a member of the Enterprise Admins group to complete these tasks.
Installing Certificate Services
Just as with the offline root CA, deploying Certificate Services on Windows Server 2012 R2 is simple. I stuck with PowerShell; view the source blog for a step-by-step GUI tutorial. Install-AdcsCertificationAuthority?
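For reference, the PowerShell route might look roughly like this (the CA common name and request file path are assumptions for this scenario; the CA type matches the enterprise subordinate setup described below):

```powershell
# Install the role, then configure an enterprise subordinate CA,
# emitting a request file for the offline root to sign
Install-WindowsFeature ADCS-Cert-Authority -IncludeManagementTools
Install-AdcsCertificationAuthority -CAType EnterpriseSubordinateCA `
    -CACommonName "Issuing-CA" `
    -KeyLength 2048 -HashAlgorithmName SHA256 `
    -CryptoProviderName "RSA#Microsoft Software Key Storage Provider" `
    -OutputCertRequestFile "C:\SubCA.req"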
In the source guide he talks about creating a CNAME record, since he set his offline root CA CRL to point to “crl.blah.domain”; in my case I specified the direct hostname of the CA. Maybe he did this for obfuscation/security reasons, I’m not sure. Either way I skipped this, since an A host record already exists for the path I entered in the CRL information for the offline root CA.
Configuring Certificate Services
After the Certificate Services roles are installed, start the configuration wizard from Server Manager – click the flag and yellow icon and click the Configure Active Directory Certificate Services… link.
My CA server is Core, thus no GUI, thus no direct Server Manager. Connect from a client system that has the required network access to run Server Manager and point it at the CA server. In this case I’ll be using a Windows 10 client machine. Run Server Manager, and add the CA server using a Domain/Enterprise Admin account.
Then, just like the source blog guide, you should notice a notification at the top right requiring post-deployment ADCS configuration.
Use a proper admin account:
Click Next, then select CA and CA Web Enrollment.
Click Next, Configure this subordinate certificate authority as an Enterprise CA. The server is a member of a domain and an Enterprise CA allows more flexibility in certificate management, including supporting certificate auto enrollment with domain authentication.
Click Next, Configure this CA as a subordinate CA. After configuration, we will submit a CA certificate request to the offline root CA.
Click Next, Create a new private key for this CA as this is the first time we’re configuring it. Now I’m curious to see what CertUtil reports after this wizard and what the RSA directories on the CA will contain. They should contain the keys, right?!
Click Next, leave the defaults, again simply going to use RSA @ 2048 Key Length, with a SHA256 hash checksum. Should remain the standard for hopefully the next 10 years.
Click Next, because this is a subordinate CA, we’ll need to send a CA certificate request to the offline root CA. Save the request locally which will be used later to manually request and approve the certificate. This is saved to the root of C: by default. Again oddly the initial Common name is auto generated into the request name with no option to alter it…
Moving along, click next, and specify the DB location. Generally leave the defaults.
Finally Summary and confirmation.
Click Configure and the wizard will configure the certificate services roles. Note the warning that the configuration for this CA is not complete, as we still need to request, approve and import the CA certificate.
Configuring the CRL Distribution Point
Before configuring the Certification Authority itself, we’ll first copy across the certificate and CRL from the root CA.
Ensure the root CA virtual machine is running and copy the contents of C:\Windows\System32\certsrv\CertEnroll from the root CA to the same folder on the subordinate CA. This is the default location to which certificates and CRLs are published. Keeping the default locations will require the minimum amount of configuration for the CRL and AIA distribution points.
The result on the subordinate certificate authority will look something like this – note that the CRL for the root CA is located here:
In my case it’s offline (non-domain-joined), and making a shareable UNC path is slightly painful in these cases; for the most part it is completely offline, and no NIC settings are even defined on the VM, heck, I could remove the vNIC completely :D. Anyway, to complete this task I did the usual vUSB trick (a VMDK I mount to different VMs as needed to transfer files): I copied the resulting files specified above into this VMDK, then attached it to the Sub CA VM, and moved the files to their appropriate path. Again, in this case the Sub CA is Core, so either use diskpart, or Server Manager from the client machine, to bring the disk online and mount it.
Issuing the Subordinate CA Certificate
Next, we will request and approve the certificate request for the subordinate CA. At this point the subordinate CA is unconfigured, because it does not yet have a valid CA certificate.
Copy the initial request created by the config wizard to the movable VMDK.
Now, remove the disk from the Sub CA and attach it to the Off-line root CA, then open up the CA tool to request a new certificate. (Don’t worry about taking the disk offline, it’ll unmount automagically).
Browse to where the certificate request for the subordinate certificate authority is located and open the file.
The certificate request will then be listed under Pending Requests on the root CA. Right-click the request, choose All Tasks and Issue.
The subordinate CA’s certificate will now be issued and we can copy it to that CA. View the certificate under Issued Certificates. Right-click the certificate, click Open and choose Copy to File… from the Details tab on the certificate properties.
Export the new certificate to a file in PKCS format. Copy the file back to the subordinate certificate authority, so that it can be imported and enable certificate services on that machine.
Configuring the Subordinate CA
With the certificate file stored locally to the subordinate CA, open the Certificate Authority console – note that the certificate service is stopped. Right-click the CA, select All Tasks and choose Install CA Certificate…
So from a client system open the CA snap-in, point to the new sub CA…
This is where I got stuck for a good while. All the guides I found online were using CAs with Desktop Experience enabled, allowing them to run the CA MMC snap-in locally, and from all my testing against a Server 2016 Core server running the CA role, the snap-in simply wouldn’t load the input wizard…
So I posted the bug on Technet, and Mark saved my bacon!
“You will need to use the commandline to do this. On the CA itself:
1) Open a command prompt
2) Navigate to where your certificate file is located
3) certutil -installcert <your certificate file name here>”
WOOOOOO! We have a working Enterprise Sub-CA… Now the question on if CRL works, and how to deploy the chain properly to servers and clients so things come up with a trusted chain and a green check mark!
“If the CRL is online correctly, the service should start without issues.”
Continuing on from my source blog post: in this case he goes on to install and configure the role as a subordinate enterprise CA. But what do you do if you already deployed an Enterprise Root CA? I’m going to go off a hunch that something gets applied into AD somewhere to present this information to domain clients. I found this nice article from MS directly with the directions to take; it stated it was for Server 2012, so I hope the procedure hasn’t changed much in 2016.
*NOTE* All steps that state they need to be done on AD objects: those commands are run as a Domain Admin or Enterprise Admin directly logged onto those servers. Most other commands or steps will be done via a client system MMC snap-in, or logged directly into the CA server.
Remove Existing Enterprise Root CA
Revoke Existing Certificates
Step 1: Revoke all active certificates that are issued by the enterprise CA
Click Start, point to Administrative Tools, and then click Certification Authority.
Expand your CA, and then click the Issued Certificates folder.
In the right pane, click one of the issued certificates, and then press CTRL+A to select all issued certificates.
Right-click the selected certificates, click All Tasks, and then click Revoke Certificate.
In the Certificate Revocation dialog box, click to select Cease of Operation as the reason for revocation, and then click OK.
Increase the CRL interval
Step 2: Increase the CRL publication interval
In the Certification Authority Microsoft Management Console (MMC) snap-in, right-click the Revoked Certificates folder, and then click Properties.
In the CRL Publication Interval box, type a suitably long value, and then click OK.
Note The lifetime of the Certificate Revocation List (CRL) should be longer than the lifetime that remains for certificates that have been revoked.
Easy enough, done and done.
Step 3: Publish a new CRL
In the Certification Authority MMC snap-in, right-click the Revoked Certificates folder.
Click All Tasks, and then click Publish.
In the Publish CRL dialog box, click New CRL, and then click OK.
Again easy, done.
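On a Core box with no local MMC, the shell equivalent of All Tasks > Publish should just be:

```powershell
# Publish a new CRL from the command line
certutil -crl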
Deny Pending Requests
*DEFAULT, generally Not required.
Step 4: Deny any pending requests
By default, an enterprise CA does not store certificate requests. However, an administrator can change this default behavior. To deny any pending certificate requests, follow these steps:
In the Certification Authority MMC snap-in, click the Pending Requests folder.
In the right pane, click one of the pending requests, and then press CTRL+A to select all pending certificates.
Right-click the selected requests, click All Tasks, and then click Deny Request.
Not the case for me.
Uninstall Certificate Services
Step 5: Uninstall Certificate Services from the server
To stop Certificate Services, click Start, click Run, type cmd, and then click OK.
At the command prompt, type certutil -shutdown, and then press Enter.
At the command prompt, type certutil -key, and then press Enter. This command will display the names of all the installed cryptographic service providers (CSPs) and the key stores that are associated with each provider. Among the listed key stores will be the name of your CA. The name will be listed several times, as shown in the following example:
(1) Microsoft Base Cryptographic Provider v1.0:
1a3b2f44-2540-408b-8867-51bd6b6ed413
MS IIS DCOM Client SYSTEM S-1-5-18
MS IIS DCOM Server
Windows2000 Enterprise Root CA
MS IIS DCOM Client Administrator S-1-5-21-436374069-839522115-1060284298-500
Delete the private key that is associated with the CA. To do this, at a command prompt, type the following command, and then press Enter:
certutil -delkey CertificateAuthorityName
Note If your CA name contains spaces, enclose the name in quotation marks.
In this example, the certificate authority name is “Windows2000 Enterprise Root CA.” Therefore, the command line in this example is as follows:
certutil -delkey “Windows2000 Enterprise Root CA”
* OK, this is where things got weird for me. For some reason I wasn’t getting back the same type of results as the guide, instead I got this:
Microsoft Strong Cryptographic Provider:
CertUtil: -key command completed successfully.
And any attempt to delete the key based on the known CA name just failed. I asked about this on TechNet under the security section, and was told basically what I figured: that the key either didn’t exist or was corrupted. So I basically continued on with the steps. It was later answered by Mark Cooper.
Locating the CA Master Key
This one again got answered by Mark Cooper: include -csp ksp (keys are located under %allusersprofile%\Microsoft\Crypto\Keys).
Deleting the CA Master Key
From all the research I’ve done, it seems people are adamant that you delete the key before you remove the certs, why exactly I’m not sure…(From my testing if you deleted the certificate via certutil, it comes right back when restarting certsvc. It must be rebuilt from the registry?)
So: certutil -csp ksp -delkey <key>
Checking the Keys directory shows it empty. Good stuff.
Viewing the Certificate store
certutil -store my
This made me start to wonder where the actual certificate files were stored; a Google away, and it turns out to be in the registry? Lol (HKLM\SOFTWARE\Microsoft\SystemCertificates)
You can see the key container name matches the certificate hash.
Nothing more than just a string of obfuscated code (much like opening up a CSR), so the only way to interact with them is using the Microsoft CryptoAPI (CertUtil), or the Snap-in.
Deleting the CA Certificate
certutil -delstore my <serial>
Reopening regedit, and the cert is gone.
Delete Trusted Root CA Cert
certutil -store ca
certutil -delstore ca <serial>
So moving on…*
List the key stores again to verify that the private key for your CA was deleted.
After you delete the private key for your CA, uninstall Certificate Services. To do this, follow these steps, depending on the version of Windows Server that you are running.
If the remaining role services, such as the Online Responder service, were configured to use data from the uninstalled CA, you must reconfigure these services to support a different CA. After a CA is uninstalled, the following information is left on the server:
The CA database (To be deleted see below)
The CA public and private keys (Deleted see above)
The CA’s certificates in the Personal store (Deleted See above)
The CA’s certificates in the shared folder, if a shared folder was specified during AD CS setup (N/A)
The CA chain’s root certificate in the Trusted Root Certification Authorities store (Deleted See Above)
The CA chain’s intermediate certificates in the Intermediate Certification Authorities store (none existed for me)
The CA’s CRL (yup)
By default, this information is kept on the server in case you are uninstalling and then reinstalling the CA. For example, you might uninstall and reinstall the CA if you want to change a stand-alone CA to an enterprise CA.
Known AD CA Objects
Step 6: Remove CA objects from Active Directory
When Microsoft Certificate Services is installed on a server that is a member of a domain, several objects are created in the configuration container in Active Directory.
These objects are as follows:
Located in CN=AIA,CN=Public Key Services,CN=Services,CN=Configuration,DC=ForestRootDomain.
Contains the CA certificate for the CA.
Published Authority Information Access (AIA) location.
Located in CN=ServerName,CN=CDP,CN=Public Key Services,CN=Services,CN=Configuration,DC=ForestRoot,DC=com.
Contains the CRL periodically published by the CA.
Published CRL Distribution Point (CDP) location
Located in CN=Certification Authorities,CN=Public Key Services,CN=Services,CN=Configuration,DC=ForestRoot,DC=com.
Contains the CA certificate for the CA.
Located in CN=Enrollment Services,CN=Public Key Services,CN=Services,CN=Configuration,DC=ForestRoot,DC=com.
Created by the enterprise CA.
Contains information about the types of certificates the CA has been configured to issue. Permissions on this object can control which security principals can enroll against this CA.
When the CA is uninstalled, only the pKIEnrollmentService object is removed. This prevents clients from trying to enroll against the decommissioned CA. The other objects are retained because certificates that are issued by the CA are probably still outstanding. These certificates must be revoked by following the procedure in the “Step 1: Revoke all active certificates that are issued by the enterprise CA” section.
For Public Key Infrastructure (PKI) client computers to successfully process these outstanding certificates, the computers must locate the Authority Information Access (AIA) and CRL distribution point paths in Active Directory. It is a good idea to revoke all outstanding certificates, extend the lifetime of the CRL, and publish the CRL in Active Directory. If these paths are removed while outstanding certificates are still being processed by the various PKI clients, validation will fail, and those certificates will not be used.
If it is not a priority to maintain the CRL distribution point and AIA in Active Directory, you can remove these objects. Do not remove these objects if you expect to process one or more of the formerly active digital certificates.
Remove all Certification Services objects from Active Directory
To remove all Certification Services objects from Active Directory, follow these steps:
Know the CA common name (use CertUtil)
Use the Sites and Services MMC snap-in from a client computer with a domain admin account (with proper permissions), highlight the parent snap-in node -> View (from the toolbar) -> Show Services Node.
Expand Services, expand Public Key Services, and then click the AIA folder.
In the right pane, right-click the CertificationAuthority object for your CA, click Delete, and then click “Yes”.
Left Nav, Click CDP folder.
In the right pane, right-click the CertificationAuthority object for your CA, click Delete, and then click “Yes”.
Left Nav, Click the Certification Authorities folder.
In the right pane, right-click the CertificationAuthority object for your CA, click Delete, and then click Yes.
Left Nav, Click Enrollment Services (This should have been auto removed, in my case it was)
If you did not locate all the objects, some objects may be left in the Active Directory after you perform these steps. To clean up after a CA that may have left objects in Active Directory, follow these steps to determine whether any AD objects remain:
Type the following command at a command line, and then press ENTER:
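The command itself isn’t captured here; per the Microsoft article it is along these lines — the CA common name and forest DN below are placeholders, substitute your own:

```powershell
# Export any remaining CA objects to an LDF file
# (substitute your CA's common name and your forest's DN)
ldifde -r "(cn=<CACommonName>*)" -d "CN=Public Key Services,CN=Services,CN=Configuration,DC=corp,DC=example,DC=com" -f remainingCAobjects.ldf
```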
Open the remainingCAobjects.ldf file in Notepad. Replace the term “changetype: add” with “changetype: delete.” Then, verify whether the Active Directory objects that you will delete are legitimate.
At a command prompt, type the following command, and then press ENTER to delete the remaining CA objects from Active Directory:
ldifde -i -f remainingCAobjects.ldf
At this point the import command for the ldf file was failing. I posted the results in my Technet post. After a bit more research I noticed other examples online didn’t have any additional information appended after the “changetype: delete” line, so I simply followed along and did the same: deleting all the lines after that one and leaving the base DN object in place, and sure enough it finally succeeded.
Generate base object LDF file:
After editing line as specified in MS article:
New altered LDF file:
Same command after altering file:
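For reference, after that edit each entry in the altered file is reduced to just the DN and the delete action; a sketch with a hypothetical CA name and forest DN:

```
dn: CN=MyRootCA,CN=AIA,CN=Public Key Services,CN=Services,CN=Configuration,DC=corp,DC=example,DC=com
changetype: delete
```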
Second run, I simply deleted the object under the KRA folder, and it returned no values.
Delete the certificate templates if you are sure that all of the certification authorities have been deleted. Then repeat the export from the previous step to determine whether any AD objects remain.
I did this via the Sites and Services snap-in, under the PKI section of the Services node.
Delete Certificates Published to the NTAuthCertificates Object
Step 7: Delete certificates published to the NtAuthCertificates object
After you delete the CA objects, you have to delete the CA certificates that are published to the NtAuthCertificates object. Use either of the following commands to delete certificates from within the NTAuthCertificates store:
Note You must have Enterprise Administrator permissions to perform this task.
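A sketch of the viewdelstore command from the Microsoft article — the forest DN here is hypothetical, so adjust it to your environment:

```powershell
# Invoke the certificate selection UI against the cACertificate
# attribute of the NtAuthCertificates object (hypothetical forest DN)
certutil -viewdelstore "ldap:///CN=NtAuthCertificates,CN=Public Key Services,CN=Services,CN=Configuration,DC=corp,DC=example,DC=com?cACertificate"
```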
The -viewdelstore action invokes the certificate selection UI on the set of certificates in the specified attribute. You can view the certificate details, or cancel out of the selection dialog to make no changes. If you select a certificate, that certificate is deleted when the UI closes and the command fully executes.
Use the following command to see the full LDAP path to the NtAuthCertificates object in your Active Directory:
certutil store -? | findstr "CN=NTAuth"
Nice and easy, finally.
Delete the CA Database
Step 8: Delete the CA database
When Certification Services is uninstalled, the CA database is left intact so that the CA can be re-created on another server.
To remove the CA database, delete the %systemroot%\System32\Certlog folder.
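A one-liner sketch of that deletion in PowerShell:

```powershell
# Delete the leftover CA database folder
Remove-Item -Recurse "$env:SystemRoot\System32\Certlog"
```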
Nice and easy, I like these steps.
Clean up the DC’s
Step 9: Clean up domain controllers
After the CA is uninstalled, the certificates that were issued to domain controllers must be removed.
Which states for 2003 and up:
certutil -dcinfo deleteBad
It returned the same list of garbage for the DCs, and rerunning certutil -dcinfo still reported the same certs… So I had to manually remove these: again, from a client system I opened an MMC, added the Certificates snap-in pointed at the machine store on the DCs, and manually deleted the certificates. Once this was done for both DCs, certutil -dcinfo finally reported clean…
Finally!!! What a gong show it is to remove an existing CA from an environment… even one that literally wasn’t used for anything beyond its initial deployment as an enterprise root CA.
Configure the VM with the hardware settings as specified: Boot option EFI, remove the floppy device, attach the Windows Server 2016 ISO, and connect at boot. Boot the VM.
Server 2016, US English, Next, Install Now, Server 2016 Datacenter (Desktop Experience), accept the EULA, Custom, select the blank disk, Next. Install completes; reboot. Install VMware Tools.
Change the hostname and workgroup (note: it complained that NetBIOS only supports 15 characters, so the NetBIOS name was shortened to CORP-OFFLINE-RO).
Installing Certificate Services
Deploying Certificate Services on Windows Server 2016 is simple enough – open Server Manager, open the Add Roles and Features wizard and choose Active Directory Certificate Services under Server Roles. Ensure you choose only the Certificate Authority role for the Root CA.
To make installing Certificate Services simpler, do it via PowerShell instead, using Add-WindowsFeature:
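A sketch of what that might look like for a stand-alone root, using the AD CS deployment cmdlets — the CA name here is a hypothetical placeholder, and the key length, hash, and validity values simply match the choices discussed below:

```powershell
# Install the CA role service, then configure a stand-alone root CA
Add-WindowsFeature ADCS-Cert-Authority -IncludeManagementTools
Install-AdcsCertificationAuthority -CAType StandaloneRootCA `
    -CACommonName "CORP-Offline-Root" `
    -CryptoProviderName "RSA#Microsoft Software Key Storage Provider" `
    -KeyLength 2048 -HashAlgorithmName SHA256 `
    -ValidityPeriod Years -ValidityPeriodUnits 10
```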
*Note there are many cryptographic providers available, but generally most places should stick with RSA; I have seen certain cases where DSA was selected, but only choose that option if you have a specific reason for it. Likewise, generally stick with a 2048-bit key length; you can go higher if you know your system resources can handle the additional computational load, or lower if you are running older hardware and don’t require as high a security posture.
Specify a name for the new certificate authority. I’d recommend keeping this simple: use the ANSI character set and a meaningful name.
Select the validity period – perhaps the default is the best to choose; however, this can be customized based on your requirements. This is a topic that is a whole security conversation in itself; however, renewing CA certificates isn’t something that you want to be doing too often. Considerations for setting the validity period should include business risk, the size and complexity of the environment you are installing the PKI into, and how mature the IT organization is.
*Note pretty well stated, and in our case I don’t want to renew certs every 5 years, so 10 years sounds about good to me, and I’m hoping 2048 Key length with a SHA256 Hash will still be pretty common 10 years from now, but at least this gives us a very nice time buffer should things change.
On the next page of the wizard, you can choose the location of the certificate services database and logs location (C:\Windows\System32\Certlog), which can be changed depending on your specific environment.
On the last page, you will see a summary of the configuration before committing it to the local certificate services.
Verifying the Root CA
Now that certificate services has been installed and the base configuration is complete, a set of specific configuration changes is required to ensure that an offline Root CA will work for us.
Start – Windows Administrative Tools -> Certificate Authority
If you open the Certificate Authority management console, you can view the properties of the certificate authority and the Root CA’s certificate:
Configure the CA Extensions
Before we take any further steps, including deploying a subordinate CA for issuing certificates, we need to configure the Certificate Revocation List (CRL) Distribution Point. Because this CA will be offline and not a member of Active Directory, the default locations won’t work. For more granular information on configuring the CDP and AIA, see these sources: one and two.
In the properties of the CA, select the Extensions tab to view the CRL Distribution Points (CDP). By default, the ldap:// and file:// locations will be the default distribution points. These, of course, won’t work for the reasons I’ve just stated, and because these locations are embedded in the properties of certificates issued by this CA, we should change them.
To set up a CRL distribution point that will work with a location that’s online (so that clients can contact the CRL), we’ll add a new distribution point rather than modify an existing DP and use HTTP.
Before that we’ll want to do two things:
Ensure that ‘Publish CRLs to this location’ and ‘Publish Delta CRLs to this location’ are selected on the default C:\Windows\System32\CertSrv\CertEnroll location. This should be the default setting.
For each existing DP, remove any check marks enabled for ‘Include in CRLs’. (He failed to mention anything about the “Include in the CDP extension of issued certs” checkbox, so I’m going to assume it’s left as is for all DPs.)
*Note* This was a mistake I had to fix manually.
Now add a new CRL location, using the same HTTP location value included by default; however, change <ServerDNSName> to the FQDN of the host that will serve the CRL. In my example, I’ve changed:
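As a sketch — assuming pki.corp.example.com is the alias (a hypothetical name) — the new entry keeps the default HTTP template’s filename variables and only swaps the host:

```
http://pki.corp.example.com/CertEnroll/<CaName><CRLNameSuffix><DeltaCRLAllowed>.crl
```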
This FQDN is an alias for the subordinate certificate authority that I’ll be deploying to actually issue certificates to clients. This CA will be online with IIS installed, so it will be available to serve the CRLs. (Again, he doesn’t provide a snippet of the completed entry, just a snippet of creating the entry; since by default all check boxes are unticked and there’s no mention of any changes to the added location, I’m again going to assume they are simply left untouched.) Here is my example setup:
and then adding the custom http CDP location that will be the Sub-CA with IIS.
*NOTE* UNCHECK ALL CHECKBOXES on the LDAP and FILE locations; the picture above is wrong for those settings.
Disable ‘Include in the AIA extensions of issued certificates’ for all existing locations. (In my case only one had it checked; file:// record)
Copy the existing http:// location
Add a new http:// location, changing <ServerDNSName> to the FQDN of the alias also used for the CRL distribution point. (Here I noticed he was nice enough to provide a snippet, and in it “Include in the AIA extension of issued certificates” is checked, while by default it is not. Following this, I will check that option on the AIA record, and I will also go back and check “Include in the CDP extension of issued certificates” on the CDP record as well.)
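A sketch of the new AIA entry, again assuming the hypothetical alias pki.corp.example.com and keeping the default HTTP template’s filename variables:

```
http://pki.corp.example.com/CertEnroll/<ServerDNSName>_<CaName><CertificateName>.crt
```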
Apply the changes, and you will be prompted to restart Active Directory Certificate Services. If you don’t, remember to manually restart the service later.
Configure CRL Publishing
Before publishing the CRL, set the publication interval to something other than the default 1 week. Whatever you set the interval to, that will be the maximum amount of time between occasions when you’ll need to have the CA online to publish the CRL and copy it to your CRL publishing point.
Open the properties of the Revoked Certificates node and set the CRL publication interval to something suitable for the environment you have installed the CA into. Remember that you’ll need to boot the Root CA and publish a new CRL before the end of this interval.
Ensure that the certificate revocation list is published to the file system – right-click Revoked Certificates, select All Tasks / Publish. We will then copy these to the subordinate CA.
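The same publish can also be sketched from an elevated prompt; certutil’s -crl verb publishes a new CRL to the CertEnroll folder:

```powershell
# Publish a new CRL (and delta CRL, if enabled) to CertEnroll
certutil -crl
```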
Browse to C:\Windows\System32\CertSrv\CertEnroll to view the CRL and the root CA certificate.
Setting the Issued Certificate Validity Period
The default validity period for certificates issued by this CA will be 1 year. Because this is a stand-alone certification authority, we don’t have templates available to use that we can use to define the validity period for issued certificates. So we need to set this in the registry.
As we’ll only be issuing subordinate CA certificates from this root CA, 1 year isn’t very long. If the subordinate CA certificate is only valid for 1 year, any certificates that it issues can only be valid for less than 1 year from the date of issue – not long indeed. Therefore, we should set the validity period on the root CA before we issue any certificates.
To change the validity period, open Registry Editor and navigate to the following key:
Here I can see two values that define how long issued certificates are valid for – ValidityPeriod (defaults to “Years”) and ValidityPeriodUnits (defaults to 1).
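For reference, these values live under the CA’s configuration key; a quick sketch of checking one of them from the command line (the CA common name is a placeholder):

```powershell
# Substitute your CA's common name for the placeholder
reg query "HKLM\SYSTEM\CurrentControlSet\Services\CertSvc\Configuration\<CACommonName>" /v ValidityPeriodUnits
```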
Viewing the Root CA certificate validity lifetime
Open ValidityPeriodUnits and change it to the desired value. My recommendation would be to make this half the lifetime of the Root CA certificate’s validity period, so if you’ve configured the Root CA for 10 years, set this to 5. You’ll need to restart the Certificate Authority service for this to take effect.
Setting the Root CA’s ValidityUnits
An alternative to editing the registry directly is to set this value with certutil.exe. To change the validity period to 5 years, run:
certutil -setreg ca\ValidityPeriodUnits "5"
Yes, this is pretty much a copy-n-paste of the source; it was that well written and nice to follow. There are just a couple of additions I worked in where things got a little confusing, which I hope might help someone who comes across this.
Much like the source, in my next post I’ll cover setting up a subordinate CA; however, I will also cover removing an existing CA from an AD environment before replacing it with the new subordinate, as well as some errors and issues I faced along the way and how I managed to correct them. This part was pretty straight-cut, so I didn’t have much reason to alter it from the source.
PS – If you plan on publishing new CRL to be hosted by AD for domain systems, don’t forget to set the DSConfigDN setting on this offline CA.
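A sketch of that setting, with a hypothetical forest DN — since the offline CA isn’t domain-joined, this tells it which configuration DN to embed when building LDAP URLs:

```powershell
# Hypothetical forest DN - substitute your own, then restart certsvc
certutil -setreg CA\DSConfigDN "CN=Configuration,DC=corp,DC=example,DC=com"
```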