Just another Mon…. Tuesday

The Start

Nothing new or exciting at the start of my day: cleaning house! (I was actually cleaning my office, not making system changes 😉) Then…. Monday happened. I mean, it's really Tuesday, but it was my first day back after the weekend (the first weekend I finally didn't have to make any system changes)… life's good, right?

I had my first talks of the day with our DBA, then followed up with our Developer. Our current Developer is one really hard-working and amazing dev, and he needed a couple of things from me to move his current project forward: a CNAME record, along with a gMSA. This is one of the many reasons I like this guy (he not only understands security posture, but also what is needed for it all to work!). So the first one took seconds, and the second a couple more (besides the fact I needed to reboot the server, because I chose to use IDGLA instead of granting the computer account direct permissions to retrieve the gMSA's password)… Yes… yes, I'm aware of the special klist purge command to clear Kerberos tickets, but I wanted to be 100% sure.
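For anyone curious, both requests are quick PowerShell one-liners. This is a minimal sketch with made-up names; the group on -PrincipalsAllowedToRetrieveManagedPassword is the IDGLA bit that forced the reboot (the app server's computer account has to pick up its new group membership):

# Requires the DnsServer and ActiveDirectory modules; all names below are hypothetical
Add-DnsServerResourceRecordCName -ZoneName "corp.ca" -Name "appdata" -HostNameAlias "sql01.corp.ca"

# gMSA whose password is retrievable by a group (IDGLA) instead of a single computer account
New-ADServiceAccount -Name "gmsaApp" -DNSHostName "gmsaApp.corp.ca" -PrincipalsAllowedToRetrieveManagedPassword "gMSA-App-Servers"

# Then, on the rebooted app server:
Install-ADServiceAccount -Identity "gmsaApp"
Test-ADServiceAccount -Identity "gmsaApp"

Then came the problem…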

The Problem

I don't want to get too into the nitty-gritty, but the gist of it was: we had an authoritative source of data that resided on an older SQL server, while our Dev's new project was in a whole new data center, utilizing a whole new database server.

Since we didn't want to alter any firewall rules between the datacenters (while they are all in-house and owned by the same company, the two data centers are still walled off from each other, with a two-way trust created for most authentication purposes), this would mean either:

A) Allow the old SQL server to do LDAP queries against my new datacenter's DCs (I wasn't in the mood for architecture changes, as I already stated, so this was a last resort), then grant the new datacenter's gMSA account permissions on the database.

B) Figure out a way to utilize two different accounts, making two different source-data calls, from the same app/code.

Now I liked the sound of B because, let's face it, it puts all the work on the Dev and not me. (If this sounds Dilbert-ish… that's because it is :P) At this point I was pretty confident this was possible… I mean… why not? Well, a couple of seconds later my Dev comes back and tells me it is in fact not possible… well, sort of not possible… not possible for our exact case… for reals… let me explain. First off, I'm talking about ASP.NET; second, I'm talking about two different connection strings to a database. We both found a fair number of references, like this and this, and this, and this, and even this… OK, that's a fair amount of reading (sadly I still couldn't find one of the original sources, haha), but in each case you are probably wondering: how do I specify the user name and password for an alternative connection string when using Windows auth instead of SQL auth?… Drum roll…………..

YOU CAN’T …… TADA
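In connection-string terms, the difference looks something like this (a sketch with made-up server and database names): SQL auth carries its credentials right in the string, while Windows integrated auth has no user/password fields at all; it always connects as the process identity (our gMSA):

# SQL auth: the account and password live in the string itself
$sqlAuth = "Server=newsql.corp.ca;Database=AppDb;User ID=appLogin;Password=..."  # placeholder credentials

# Windows auth: no credential fields exist, so whoever runs the process is who connects
$winAuth = "Server=oldsql.corp.ca;Database=LegacyDb;Integrated Security=SSPI"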

So…..

The Second Problem

This led us to our second problem: while the new SQL instance was already configured for mixed auth (meaning both Windows and local SQL Server authentication are permitted), our old SQL Server instance…. not so much. As much as I wanted to avoid infrastructure changes, it seems it was inevitable…. so I asked my DBA if this would be a problem. Since you can change the auth mode on any given instance, and not all instances on the server, I figured this was a quick and easy solution: enable mixed auth mode, restart the instance, and create a local SQL account to be hard-coded and used by the app until the source data can be properly relocated (thus removing any hard-coded garbage in the app).
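For reference, flipping an instance to mixed mode boils down to a single registry value plus a service restart. This is a hedged sketch, not what my DBA actually ran; the MSSQL13.OLDINSTANCE path varies by SQL version and instance name:

# LoginMode 2 = mixed (Windows + SQL auth)
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL13.OLDINSTANCE\MSSQLServer' -Name LoginMode -Value 2
Restart-Service 'MSSQL$OLDINSTANCE' -Force

Alright! Until….. Ughhhhhh…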

The Third Problem

When my DBA went to restart the instance, he decided to use SSMS remotely (now, there is nothing wrong with this; I didn't know it was even possible and was excited to learn something new… until…) the service failed to come back up successfully (oh boy, here we go), so sure enough we got into fix mode. My DBA jumped right into Event Viewer (good man) and discovered the first error, stating the service was unable to bind to the port as it was in use (the DBA opened SQL Server Configuration Manager and Services.msc, and both showed the service as not running). This instantly told me one thing… the service didn't stop properly. Even though Services.msc showed no signs of it running, Task Manager and tasklist showed otherwise. Here's the kicker: every attempt to force-stop the process (the instance's sqlservr.exe) reported back "Access Denied", and even running psexec (love you, Mark!!) as SYSTEM still reported "Access Denied".
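If you're wondering what that dance looked like, it was roughly this (a sketch, from memory):

tasklist /FI "IMAGENAME eq sqlservr.exe"    # still listed, despite Services.msc saying stopped
taskkill /F /IM sqlservr.exe                # "Access Denied"
psexec -s taskkill /F /IM sqlservr.exe      # "Access Denied", even as SYSTEM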

The Fix

At this point I basically figured we had to reboot the server (I also assumed it would get stuck at the "stopping service" stage of shutdown, but amazingly it did not!). Sure enough, after the reboot everything came up without a hitch, and the new mixed auth mode was enabled for our Dev's alternate ASP.NET connection string! OK, I know this sounds like a pretty crappy solution, but honestly it was the only thing we had left in our toolbox, and it fixed both problem one (mixed auth mode is now enabled on the old instance) and the stuck instance (it came up without a problem).

Until…..

While we (DBA, Dev, and myself (SysAdmin)) continued to test our other applications that were built via other means, it seemed a couple of things were broken. This one's a little funny, because we assumed there was an issue for everyone; it turned out only to be an issue for the DBA and Dev, not myself (though I wasn't on my local machine to do any front-end testing from my account). So let me explain. The Dev kept digging into the real nitty-gritty of the code, jumping all the way into the back end of SQL's stored procedures and views, and discovered empty values being returned (I have no clue if this was always an issue (based on the fix) or if it was actually due to something else… anyway). Turns out one of the built-in views, used as a source to create a temp table, was returning null, thus throwing the error when calling one of the stored procedures. When the Dev and I went to talk to the DBA in the lunch room, we discussed some of the permission changes we had just implemented on the security logins of the instance, and made some assumptions, so I went into the back-end AD groups to validate some things. Sure enough, it was a little funny: their standard accounts had direct logins on the instance (generally not a fan of this, as I love scalable design and prefer to utilize IDGLA). My DBA told me he had fixed this, and then he told me something I never would have expected, and it was a huge learning experience for me:

WHEN YOU GRANT AN ACCOUNT THE "SYSADMIN" ROLE/PERMISSION, THE OTHER ROLES IN WHICH THE ACCOUNT IS A MEMBER DO NOT APPLY PROPERLY (THEY ARE BYPASSED OR SOMETHING).

Literally. So what happened was: there's a group we defined to be granted sysadmin rights on the server (to manage it, not manipulate data); normally this contains admin-based accounts (we all use standard accounts plus admin accounts for least-privilege best practice, right? :). However, both their admin accounts and their standard accounts were in there. I removed the standard accounts, and once that was corrected, the proper nested groups their standard accounts were supposed to get (based on other roles) applied properly, and the issue was fixed.
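If you want to audit this yourself, server-level role membership lives in sys.server_role_members. A quick hedged query (via Invoke-Sqlcmd from the SqlServer module; the instance name is a placeholder):

Invoke-Sqlcmd -ServerInstance 'OLDSQL' -Query @"
SELECT r.name AS role_name, m.name AS member_name
FROM sys.server_role_members rm
JOIN sys.server_principals r ON rm.role_principal_id = r.principal_id
JOIN sys.server_principals m ON rm.member_principal_id = m.principal_id
WHERE r.name = 'sysadmin';
"@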

Party in the House… Until….

Yes… believe it or not, my day does not end here…. there was simply more information the great world of IT had to shove into my tiny brain, which is already overloaded and overwhelmed at the pure magnitude of knowledge you need to manage systems!! WHYYYYYY! GOD WHYYY!!!!

Anyway…. to end the day, we got a unique error message from one of our workflows, and sure enough, another email from an external user providing a snippet of an error (how nice of them). Is this a coincidence?…. not a chance, 100% related… So again, most of the heavy lifting was done by our Dev (this guy….. he's a superstar!). He managed to break it down to an assembly problem… but we were shocked as to how this could be (we checked that everything was working after all our above fixes)… until our DBA made a confession: he wasn't happy to have found out his own account was the DB owner of a fair share of DBs within the instance, so he secretly cleaned it up. Well, after some trial and error (reverting the change a couple of times), it turned out the error "the server may be running out of resources or the assembly may not be trusted with PERMISSION_SET EXTERNAL_ACCESS or UNSAFE" was simply due to a single missing permission that needed to be granted to the new DB owner account:

In SSMS -> Instance -> Logins -> Account -> Properties -> Securables -> Check Grant for Unsafe Assemblies
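If you'd rather script that grant than click through SSMS, the T-SQL equivalent should be a one-liner (the login name here is a placeholder):

Invoke-Sqlcmd -ServerInstance 'OLDSQL' -Database master -Query "GRANT UNSAFE ASSEMBLY TO [CORP\NewDbOwner];"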

Sometimes… you just gotta run unsafe code 😛

Alright! Home Time!

I Spike You!

Ahhh the internet….

I published my own personal MX record in hopes of getting my own email going….
I decided to see why my outbound email wasn't working (sigh, even following Paul Cunningham's post it seems I'm missing something); all my outbound SMTP connections to external mail servers seem to be failing. According to my firewall (Palo Alto), the rule is allowing it out, but the application shows incomplete… like it's never establishing a connection. So, as in my previous posts, I used telnet to attempt a connection to known external IPs of SMTP mail servers, and sure enough no connections could be established (I know I'll eventually have to create a receive connector for outside sources and a security rule to allow email from outside in, but I wanted to tackle email going out first).
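(Side note: if you don't feel like enabling the Telnet client, Test-NetConnection does the same reachability check; the host below is just one well-known external MX:)

Test-NetConnection -ComputerName "gmail-smtp-in.l.google.com" -Port 25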

I decided to attempt the same port 25 connection to the new record I created (I have multiple internet connections to test from "outside", instead of having to rely on a loopback NAT rule or anything). To my dismay, it showed failed to connect (I expected this, as I had created a NAT rule but never created a security rule to allow the connection). I went to my Monitor tab to see if I could see the attempted connection, and I indeed did. What surprised me more were the failed attempts from others in the short time since I created this record (considering I've had the IP for a long time and pretty much all ports were blocked forever, I didn't expect many attempts). These were either crawlers or something else…. but guess who the very first was….

141.212.122.227
University of Michigan (AS36375)

Not once, but twice from two sequential IP addresses…. Mhmmm what are those Michigans up to?

185.35.62.150… unknown, someone remaining anonymous, a Michigan hookup? Occurred 3 minutes after.

Then Hours Later….

107.170.227.216
Digital Ocean, Inc. (AS14061)

Not sure who they are, might have to check em out..

Couple hours later…

46.29.161.101…. Anonymous

I guess it only makes sense that after the Americans and Anonymous, it would be nothing other than the Russians, right…. To be fair, I don't actually know wtf this is, lol; Japanese mixed with Russian or something, a pile of who knows what.

95.181.178.182
FOP ILIUSHENKO VOLODYMYR OLEXANDROVUCH (AS57311)

They at least tried three times in a row from the same IP (good thinking: if it doesn't work once, heck, try again a couple of times).

Then my attempt… pretty funny what you can hear if you just listen…

This isn't actually "I Spike You!" like from the old-school GoldenEye movie, but this is what you'd actually do if you wanted to "spike" someone online. This is my actual server I plan to use, of course, but if I actually wanted to find out what people are up to, I'd create a honeypot. Maybe now that I post this, they'll think my MX record is a honeypot, but it'll secretly come into use… sometime…. lol

Configuring an Anonymous Receive Connector on Exchange 2016

The Story

Well, in my previous post I discussed the issue I faced resolving an email problem with one of our development applications, which was unable to send emails after a recent Exchange upgrade/migration. Initially we were going to simply rebuild our own workflow in-house using ASP.NET Core, until we noticed that even our own workflows were failing… in this case the answer from the old post was super vague: "reconfigure the receive connector". Then I somehow stumbled upon my answer through one of my hundreds of Google searches… I found this gem!

OK, before I link the gem that is the source of my answer, I also want to point something out real quick, in hopes that maybe someone can comment below with the answer to this one:

When using Exchange 2016 as an email SMTP relay, and you use a no-reply from address with an external email address for the destination, how do you query to find out if it has gone through, or is stuck in the queue? Everything I could see in the ECP required me to select a mailbox… there's no mailbox associated with these relayed email messages, so how does one check this?
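My best guess (unverified, so take it as an assumption) is the Exchange Management Shell queue cmdlets, since relayed mail never touches a mailbox; something like:

Get-Queue    # per-queue message counts on this transport server
Get-Message -Filter "FromAddress -eq 'no-reply@corp.ca'" | Format-List Subject,Status,Queue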

OK, now for the gem. This guy “Paul Cunningham” He’s… uhhhhhh… He’s uhhhhh… he’s uhhhh a good guy. So I always knew you could use telnet to check certain ports and services… but this was so concise… it nailed the problem…

From my K2 Server or my in house workflow server:

1) Ensure Telnet Client feature is enabled

2) Open cmd prompt or PowerShell:

telnet exchangeServer 25
helo
mail from: user@corp.ca
rcpt to: ExternalUser@gmail.ca

220 EXSERVER.exchange2016demo.com Microsoft ESMTP MAIL Service ready at Thu, 22
Jun 2018 12:04:45 +1000
helo
250 EXSERVER.exchange2016demo.com Hello [192.168.0.30]
mail from: adam.wally@exchange2016demo.com
250 2.1.0 Sender OK
rcpt to: exchangeserverpro@gmail.com
550 5.7.54 SMTP; Unable to relay recipient in non-accepted domain

Huh, just like the source blog. Now why would I be getting that error? I allowed anonymous users via the checkbox under the receive connector's security tab… yet Paul does a little extra step that doesn't seem to be mentioned elsewhere; the checkbox I mentioned is his first line, but then look at the interesting second line….

[PS] C:\>Set-ReceiveConnector "EXSERVER\Anon Relay EXSERVER" -PermissionGroups AnonymousUsers
[PS] C:\>Get-ReceiveConnector "EXSERVER\Anon Relay EXSERVER" | Add-ADPermission -User 'NT AUTHORITY\Anonymous Logon' -ExtendedRights MS-Exch-SMTP-Accept-Any-Recipient

Since I was using anonymous settings on the application server side (K2 in this case), I gave the second PowerShell cmdlet a run on my new Exchange server.

Amazingly enough, just like the source blog, after running the second line (edited to fit my environment, obviously) the rcpt to succeeded!

220 EXSERVER.exchange2016demo.com Microsoft ESMTP MAIL Service ready at Thu, 22
Jun 2018 12:59:39 +1000
helo
250 EXSERVER.exchange2016demo.com Hello [192.168.0.30]
mail from: test@test.com
250 2.1.0 Sender OK
rcpt to: exchangeserverpro@gmail.com
250 2.1.5 Recipient OK

Part 2 – The Solution

If K2 is configured to use EWS, check that stuff out elsewhere. If you landed here from my previous post looking for the answer to "There is no connection string for the destination email address 'Email Address'" and wanted to know how that person altered their receive connector:

[PS] C:\>Get-ReceiveConnector "EXSERVER\Anon Relay EXSERVER" | Add-ADPermission -User 'NT AUTHORITY\Anonymous Logon' -ExtendedRights MS-Exch-SMTP-Accept-Any-Recipient

K2 SMTP Configurations

Intro

I'm going to keep this post short, in hopes that I don't go off the rails on this product: K2 Blackpearl 4.7. I have plenty of awesome SharePoint 2010 to 2016 migration content yet to post on my site. I'm sorry, I wish I could get all the awesome things I do on here; there are many awesome things I keep thinking about: the Iomega NAS conversion I did replacing the ix12 OS with FreeNAS, my test environment, iSCSI MPIO (VMware, Microsoft, Linux configs)… anyway… K2…. ugh

The Problem

I've never posted about this product on my site before, because, to be frank…. I tried to stay away from it as much as possible. All I did was update the base OS and pray that the developers or users didn't complain about errors in any of the "K2 apps". Trust me, there are lots; I can't tell you how many times I had to hear about K2 issues… anyway, I digress. You'd figure it's simple, eh? Well, for running the setup manager for first-time config, sure… but you have to run it again if you ever want to change this value… *ahem*… alright, so let's say you did this… it'll just work, right? You set the email server destination and port; it's got to work for everything on the server (we're talking standalone, not clustered), right?…. Nope… You eventually find this is the most common and generic error you will see if you ever have any email issues with K2: "There is no connection string for the destination email address 'Email Address'", and you will get it for a lot of different things. Oddly enough, some of it gets covered here. But it's a mess, and you have no clue which problem is the cause of the error. So, much like the shared link there:

Check 1 – Environment Variables

Check the Environment Variables (if you are not sure what Environment Variables are, you can read this *ahem* awesome… whitepaper)

Alright… so we re-ran the config manager, updated the email settings there, and updated our environment variables; we've got to be good now!…..

“There is no connection string for the destination email address ‘Email Address'”

Check 2 – SMTP Config Strings

You've got to be…… OK, OK…. we got this… there's got to be something else we must have missed… let's see… mhmmm, as Mikhal says here…

“Email configuration is externalized from process and K2 server relies on connection strings in configuration file, processes look at Environment Library, but you should also keep in mind String Table. I saw cases when people did update of Mail Server field in environment library, but their workflow was deployed from other environment with old/incorrect email settings which were written into String Table during deployment time – so you should also make sure that you have correct settings there.”

Soooo… from K2's terrible support page, you can either dig in and manually edit "k2hostserver.exe.config" (really….. really….. .exe.config….. anyway) or load the terrible Windows application they use to edit this XML-type config file. Now you might find yourself wondering: "Do I have to create a new SMTP connection string for every from and destination address? That's…. just…… unmanageable!" And yup, it sort of is. What my awesome colleague and I discovered (he's a K2 master, by the way) is that any internal "spoofed" mail (since we had decided we didn't utilize any of K2's EWS integration) would only work when we had that particular user with an SMTP string in this tool: ConnectionStringEditor.exe

It took my colleague a really long time before magically discovering what the syntax was for a wildcard (e.g. *@Zewwy.ca) in the SMTP connection string…. Drum roll…….. NOTHING

That's right, nothing: to make a wildcard SMTP connection string, simply leave the field blank in the first step of the wizard. Alright… so now we had validated that workflows could send on-behalf-of over SMTP-based email (even to Exchange, without any EWS integration)… However, we also utilized another workflow to send emails to external email addresses…

“There is no connection string for the destination email address ‘Email Address'”

Check 3 – SMTP Receive Connector

Are you kidding me?!?!?! Alright…. Jesus, what else did I forget/miss…

At this point you may find yourself a little bit stuck, as every other post points you to the same solutions above, or, like this jerk-off, goes over everything, then laughs in your face and says: "'There is no connection string for the destination email address testsmtp4velocity@gmail.com'. That is a pity. I thought we just added it. Ok, I admit, I knew this was going to be the result, but I thought I'd keep you intrigued. Now you will have to wait for my next article to see how to fix it. You can find the answer in part 2 of this article, along with a few other tips on how to resolve some other issues when trying to send an email through a SMTP server."… only to find that there is no Part 2….

So I was kind of stuck with the initial share I gave…

“Thanks for your help. We found the root reason is that the Exchange Server config the receive policy so we changed the policy then resolved the problem.”

As if dealing with issues isn't bad enough, the internet is littered with useless help: "Don't worry guys, I figured out my problem; if you have this problem too, well, I figured out mine, good luck with yours." Anyway, again I digress; I am not such a jerk, and I will tell you how I finally managed to resolve this issue for good.

So I double-checked my receive connector on my Exchange server… maybe there's just something I missed… well… I covered everything: no TLS…. using port 25, listening only for my K2 server, Anonymous Users checked off under security… What the heck, I have them all covered…

Check 4 – Part 2

I was at my wits' end until this! (Part 2, LOLOLOLOL; for real though… it's coming soon, in like 20-30 minutes, maybe an hour; it shouldn't take me long to write up)

Setting Mailbox Subitem Permissions

I had to do this for resource items, to allow all staff to view the calendar of the resource so they could schedule items accordingly, as one of the comments mentions in this Spiceworks post:

Set-MailboxFolderPermission "roomname:\Calendar" -User Default -AccessRights LimitedDetails

This can be expanded on, knowing PowerShell is object-oriented: roomname is actually any mailbox, :\ is the delimiter, and Calendar is the subfolder. -User takes a group or a user (groups are usually best practice, per IDGLA), and for -AccessRights you can specify any of the access rights you see in the pull-down options of Outlook.
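For example (a sketch; the room and group names are placeholders):

Set-MailboxFolderPermission "boardroom:\Calendar" -User "All-Staff" -AccessRights Reviewer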

That's it. That's all there is to managing mailbox sub-item permissions.

LegacyExchange Annoyance
(Exchange Cross Forest Migration Woes)

The Beginning

In my quest to completely rebuild my company domain from the ground up (new Domain Controllers (Server 2016 Core), SQL 2016, SharePoint 2016, Exchange 2016…. you get the idea), I've had to face many interesting challenges (not all blogged about just yet, but if you follow my TechNet posts you'll see I have plenty of content to write about moving forward, if I don't come across interesting new things; at this rate that seems unlikely… anyway). This time it was another interesting one.

The Weekend

After spending the entire weekend combing over every little detail of the migration (mail relays, systems that use email, how they send email, the receive connectors they would need (auth types, TLS security, etc.)), I figured I had all my bases covered and made the switch (all changes were additive, expanding existing server configs to allow mail flow, not hinder any). The last part of my switch was ensuring most servers/services were using a Fully Qualified Domain Name (FQDN) in their settings/configs for SMTP, so my cut-over in this case was very simple: change the A host record and clear all systems' DNS caches. To my amazement, everything was still working (even ActiveSync cut over without a hiccup)…. until… the next day…

The Next Day

Of all things, I didn't anticipate internal email flow breaking… I mean… there's nothing different between Joe.blow@corp.ca and Joe.blow@corp.ca, right??!?! Wrong! lol, with Microsoft Exchange you are totally and utterly wrong! First… read this and this to understand exactly what I mean. In short, internal email likes to use its own special address… (gives a dirty look)… called an X.500 address (AKA the LegacyExchangeDN), a bunch of garbage, muff-cabbage BS…. So instead of everything resolving normally (all new linked mailboxes had the proper SMTP address, so all other outbound/inbound mail flowed without issue), users wanting to reply to old emails, or creating new ones and having the To field auto-populated, would get a stupid NDR (AKA a bounce-back telling them the email can't be sent to the recipient), because FFS it can't just use the SMTP address, NOOOOOOOO, it uses the stupid legacy X.500 address… Gosh darn *mumbles* Exchange… in case you can't tell, I despise email with a passion.

The Search

Anyway, I looked up the possible solutions, and I wasn't too happy. For now I was telling people to simply remove the old auto-populate cache Outlook was using. As for existing calendar events, turns out all resources fell under the exact same annoying problem: even though I created them all with the same aliases, it wasn't good enough; Exchange was seeing them via their old X.500 addresses (since all old calendar items were imported from backup), while they had new X.500 addresses on the new Exchange server. So I would have to tell people to remove those resources and simply re-add them.

The Problem

There is, however, a problem with this: if someone edits an existing calendar event and changes the time, the room may already have been booked (the new room), so when the user editing the old recurring event goes to re-add the room, it complains about a conflict. Someone has already booked the "new" room, even though it should have been held by the initial booking. Alright, so how does one re-map this? Well, it took me a while digging through Google, like this guy, or this guy (seems everyone's a blogger these days), but I found an excellent resource blog that covers the problem and the solution pretty clearly.

The Answer

To keep it short for everyone, and as usual to paraphrase the solution (which at first was not even working for me :@, even after waiting 18 hours). *UPDATE!* Don't put in "X.500" like the stupid UI tells you to… just put in "X500" without the ****ing dot… See my TechNet post for details!

1) Open Users and Computers (on the linked/old domain)
2) Find any user/resource/equipment object that was migrated
3) Right-click and select the Attribute Editor tab (requires Advanced view)
4) Press "L" to look up legacyExchangeDN, double-click and copy the value
5) Open the Exchange ECP (new server)
6) Under Recipients, double-click the migrated mailbox, then click email address
7) Add a new email address of type X500, and paste the address you copied in step 4
8) Wait for the OAB to synchronize across the farm and clients
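If you'd rather script steps 5-7 from the Exchange Management Shell, something like this should do it (the DN below is a made-up example; use the legacyExchangeDN value you copied in step 4):

Set-Mailbox -Identity "joe.blow" -EmailAddresses @{Add="X500:/o=OldOrg/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Recipients/cn=Joe Blow"}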

New-MailboxImportRequest Failed

This is going to be another short post.

Working on an Exchange migration this weekend, I was using our backup software to simply export users' mailboxes from the most recent backup of our old Exchange server, then importing them into the new Exchange server for each mailbox after creation.

I would have loved to simply select each user's mailbox as a whole and import those PST files. However, testing showed it simply created a sub-item with the user's name and all their folders, instead of properly placing them under the primary parent hierarchy. So I was forced to export each item individually (Inbox, Sent Items, Drafts, etc.) and import them. I initially didn't script this, as there were only about 30-40 users I had to migrate; I figured it was easier to just go through the wizards… until I discovered some users created folders outside of their Inbox! Oh boy…. Anyway, it turns out if you exceed 9 imports for a single mailbox without specifying a special name for each (even after they succeed), you will get an error as follows:

“The name must be unique per mailbox. There isn’t a default name available for a new request owned by mailbox xyz”

The solution was easy enough to find; a good band-aid indeed.

Get-MailboxImportRequest -status completed | Remove-MailboxImportRequest

However, in my case I found I was sometimes still getting the error even though I had cleared all completed import requests (with default names, obviously). I was hitting a weird bug where imports showed as Queued, yet if I piped them into Get-MailboxImportRequestStatistics | Select Status, they reported a status of Completed… (if you want all the details, pipe into Format-List instead of Select)

Get-MailboxImportRequest -Status Queued | Get-MailboxImportRequestStatistics | Select Status

lol, I wasn't sure what to make of this, but there were 2 solutions:

  1. Clear the “Queued” imports that are really Completed.
  2. Give your new import a unique name using the -name parameter
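For solution 2, the request would look something like this (mailbox name and PST path are placeholders):

New-MailboxImportRequest -Mailbox "joe.blow" -Name "JoeBlow-Inbox" -FilePath "\\backupserver\pst\joe.blow-inbox.pst"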

I'll admit, though, Exchange 2016 is more intuitive to manage than old Exchange 2010.

Upgrading a Windows Volume from MBR to GPT to support UEFI boot and features

I'm going to keep this post short, so there won't be any use of the TOC plugin I recently deployed. 😛

I recently used an image of a sysprepped machine to deploy new machines. To my dismay, the image was created with an MBR partition table and was mostly booted via BIOS boot options. This isn't very secure, as it misses out on many of the security features of UEFI.

It's well known that moving from MBR to GPT back in the day was a painful process. I won't go over the details, as this "Microsoft Mechanics" video does a decent job of covering them.

If you'd like a few more nitty-gritty details, you can view this TechNet blog.

In short:

  1. Boot into PE.
  2. Use the "mbr2gpt" command to validate and convert the partition.
  3. Boot into the mainboard config (BIOS/UEFI).
  4. Configure the boot option to UEFI.

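Step 2, spelled out (a sketch assuming the OS lives on disk 0; validate before you convert):

mbr2gpt /validate /disk:0
mbr2gpt /convert /disk:0
# add /allowFullOS to either command if running from full Windows instead of PE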
Now THAT was easy!

Setup Subordinate CA (Part 3)

Intro

In this part we are going to:

Install the subordinate certificate authority
Request and approve a CA certificate from the offline root CA
Configure the subordinate CA for the CRL to work correctly

Required Permissions

You need to be a member of the Enterprise Admins group to complete these tasks.

Procedure

Installing Certificate Services

Just as with the offline Root CA, deploying Certificate Services on Windows Server 2012 R2 is simple. I stuck with PowerShell; view the source blog for a step-by-step GUI tutorial. Install-AdcsCertificationAuthority?

Add-WindowsFeature -IncludeManagementTools -Name ADCS-Cert-Authority, `
ADCS-Web-Enrollment, Web-Default-Doc, Web-Dir-Browsing, Web-Http-Errors, `
Web-Static-Content, Web-Http-Redirect, Web-Http-Logging, Web-Log-Libraries, `
Web-Request-Monitor, Web-Http-Tracing, Web-Stat-Compression, Web-Filtering, `
Web-Windows-Auth, Web-ASP, Web-ISAPI-Ext

DNS

In the source guide he talks about creating a CNAME record, since he set his offline Root CA CRL to point to "crl.blah.domain"; in my case I specified the direct hostname of the CA. Maybe he did this for obfuscation/security reasons, I'm not sure; either way I skipped this, since an A host record already exists for the path I entered in the CRL information for the offline root CA.

Configuring Certificate Services

After the Certificate Services roles are installed, start the configuration wizard from Server Manager: click the flag with the yellow icon, then click the Configure Active Directory Certificate Services… link.

My CA server is Core, thus no GUI, thus no direct Server Manager. Connect from a client system that has the required network access to run Server Manager and point it at the CA server. In this case I'll be using a Windows 10 client machine. Run Server Manager and add the CA server, using a Domain/Enterprise Admin account.

Then, just like in the source blog guide, you should notice a notification at the top right requiring post-deployment ADCS configuration.

Use a proper admin account:

Click Next, then select CA and CA Web Enrollment.

Click Next, then configure this subordinate certificate authority as an Enterprise CA. The server is a member of a domain, and an Enterprise CA allows more flexibility in certificate management, including support for certificate auto-enrollment with domain authentication.

Click Next; configure this CA as a subordinate CA. After configuration, we will submit a CA certificate request to the offline root CA.

Click Next; create a new private key for this CA, as this is the first time we're configuring it. Now I'm curious to see what certutil reports after this wizard, and what the RSA directories on the CA will contain. They should contain the keys, right?!

Click Next and leave the defaults; again, simply going to use RSA with a 2048-bit key length and a SHA256 hash. That should remain the standard for, hopefully, the next 10 years.

Click Next,

Click Next. Because this is a subordinate CA, we'll need to send a CA certificate request to the offline root CA. Save the request locally; it will be used later to manually request and approve the certificate. It is saved to the root of C: by default. Again, oddly, the initial Common Name is auto-generated into the request name with no option to alter it…

Moving along, click Next and specify the DB location; generally, leave the defaults.

Finally Summary and confirmation.

Click Configure, and the wizard will configure the certificate services roles. Note the warning that the configuration for this CA is not complete, as we still need to request, approve, and import the CA certificate.

Configuring the CRL Distribution Point

Before configuring the Certification Authority itself, we’ll first copy across the certificate and CRL from the root CA.

Ensure the root CA virtual machine is running and copy the contents of C:\Windows\System32\certsrv\CertEnroll from the root CA to the same folder on the subordinate CA. This is the default location to which certificates and CRLs are published. Keeping the default locations will require the minimum amount of configuration for the CRL and AIA distribution points.

The result on the subordinate certificate authority will look something like this – note that the CRL for the root CA is located here:

In my case it's an offline (non-domain-joined) root, and making a shareable UNC path is slightly painful in these cases; for the most part it is completely offline, with no NIC settings even defined on the VM; heck, I could remove the vNIC completely :D. Anyway, to complete this task I did the usual vUSB trick (a VMDK I mount to different VMs as needed to transfer files): I copied the resulting files specified above onto this VMDK, attached it to the Sub CA VM, and moved the files to their appropriate path. Again, in this case the Sub CA is Core, so either use diskpart or Server Manager from the client machine to bring the disk online and mount it.

Issuing the Subordinate CA Certificate

Next, we will request and approve the certificate request for the subordinate CA. At this point, the subordinate CA is unconfigured, because it does not yet have a valid CA certificate.

Copy the initial request created by the config wizard to the movable VMDK.

Now remove the disk from the Sub CA and attach it to the offline root CA, then open up the CA tool to submit a new certificate request. (Don't worry about taking the disk offline; it'll unmount automagically.)

Browse to where the certificate request for the subordinate certificate authority is located and open the file.

The certificate request will then be listed under Pending Requests on the root CA. Right-click the request, choose All Tasks and Issue.

The subordinate CA’s certificate will now be issued and we can copy it to that CA. View the certificate under Issued Certificates. Right-click the certificate, click Open and choose Copy to File… from the Details tab on the certificate properties.

Export the new certificate to a file in PKCS #7 format. Copy the file back to the subordinate certificate authority, so that it can be imported to enable certificate services on that machine.

Configuring the Subordinate CA

With the certificate file stored locally to the subordinate CA, open the Certificate Authority console – note that the certificate service is stopped. Right-click the CA, select All Tasks and choose Install CA Certificate…

So from a client system open the CA snap-in, point to the new sub CA…

This is where I got stuck for a good while: all the guides I found online were using CAs with Desktop Experience enabled, allowing them to run the CA MMC snap-in locally, and from all my testing against a Server 2016 Core server running the CA role, the snap-in simply wouldn't load the install wizard…

So I posted the bug on Technet, and Mark saved my bacon!

“You will need to use the commandline to do this. On the CA itself:

1) Open a command prompt

2) Navigate to where your certificate file is located

3) certutil -installcert <your certificate file name here>”

WOOOOOO! We have a working Enterprise Sub-CA… Now the questions are whether the CRL works, and how to deploy the chain properly to servers and clients so things come up with a trusted chain and a green check mark!

"If the CRL is online correctly, the service should start without issues."

To be Continued…..

Remove Existing Enterprise Root CA (Part 2)

Intro

Continuing on from my source blog post: in it, he goes on to install and configure the role as a subordinate enterprise CA. But what do you do if you already deployed an Enterprise Root CA? I'm going off a hunch that something gets applied into AD somewhere to present this information to domain clients. I found this nice article from MS directly with the directions to take; it was stated for Server 2012, so I hope the procedure hasn't changed much in 2016.

*NOTE* For all steps that state they need to be done to AD objects, those commands are run as a Domain Admin or Enterprise Admin directly logged onto those servers. Most other commands or steps will be done via a client-system MMC snap-in, or logged directly into the CA server.

Remove Existing Enterprise Root CA

Revoke Existing Certificates

Step 1: Revoke all active certificates that are issued by the enterprise CA

  1. Click Start, point to Administrative Tools, and then click Certification Authority.
  2. Expand your CA, and then click the Issued Certificates folder.
  3. In the right pane, click one of the issued certificates, and then press CTRL+A to select all issued certificates.
  4. Right-click the selected certificates, click All Tasks, and then click Revoke Certificate.
  5. In the Certificate Revocation dialog box, select Cease of Operation as the reason for revocation, and then click OK.

Simple enough…


Increase the CRL interval

Step 2: Increase the CRL publication interval

  1. In the Certification Authority Microsoft Management Console (MMC) snap-in, right-click the Revoked Certificates folder, and then click Properties.
  2. In the CRL Publication Interval box, type a suitably long value, and then click OK.

Note The lifetime of the Certificate Revocation List (CRL) should be longer than the lifetime that remains for certificates that have been revoked.

Easy enough, done and done.

Step 3: Publish a new CRL

  1. In the Certification Authority MMC snap-in, right-click the Revoked Certificates folder.
  2. Click All Tasks, and then click Publish.
  3. In the Publish CRL dialog box, click New CRL, and then click OK.

Again easy, done.

Deny Pending Requests

*Default behavior; generally not required.

Step 4: Deny any pending requests

By default, an enterprise CA does not store certificate requests. However, an administrator can change this default behavior. To deny any pending certificate requests, follow these steps:

  1. In the Certification Authority MMC snap-in, click the Pending Requests folder.
  2. In the right pane, click one of the pending requests, and then press CTRL+A to select all pending certificates.
  3. Right-click the selected requests, click All Tasks, and then click Deny Request.

Not the case for me.

Uninstall Certificate Services

Step 5: Uninstall Certificate Services from the server

  1. To stop Certificate Services, click Start, click Run, type cmd, and then click OK.
  2. At the command prompt, type certutil -shutdown, and then press Enter.
  3. At the command prompt, type certutil -key, and then press Enter. This command will display the names of all the installed cryptographic service providers (CSPs) and the key stores that are associated with each provider. Among the listed key stores will be the name of your CA. The name will be listed several times, as shown in the following example:

(1)Microsoft Base Cryptographic Provider v1.0:
1a3b2f44-2540-408b-8867-51bd6b6ed413
MS IIS DCOM ClientSYSTEMS-1-5-18
MS IIS DCOM Server
Windows2000 Enterprise Root CA
MS IIS DCOM ClientAdministratorS-1-5-21-436374069-839522115-1060284298-500

  1. Delete the private key that is associated with the CA. To do this, at a command prompt, type the following command, and then press Enter:

certutil -delkey CertificateAuthorityName

Note If your CA name contains spaces, enclose the name in quotation marks.

In this example, the certificate authority name is “Windows2000 Enterprise Root CA.” Therefore, the command line in this example is as follows:

certutil -delkey "Windows2000 Enterprise Root CA"

* OK, this is where things got weird for me. For some reason I wasn't getting back the same type of results as the guide; instead I got this:

C:\ProgramData\Microsoft\Crypto\RSA>certutil -key
Microsoft Strong Cryptographic Provider:
TSSecKeySet1
f686aace6942fb7f4566yh1212eef4a4_ae5889t-54c3-4b6f-8b60-f9f8471c0525
RSA
AT_KEYEXCHANGE

CertUtil: -key command completed successfully.

And any attempt to delete the key based on the known CA name just failed. I asked about this on TechNet under the security section, and was told basically what I figured: that the key either didn't exist or was corrupted, so basically continue on with the steps. It was later answered by Mark Cooper.

Locating the CA Master Key

This one again got answered by Mark Cooper: include -csp ksp (the keys are located under %allusersprofile%\Microsoft\Crypto\Keys).

Deleting the CA Master Key

From all the research I've done, it seems people are adamant that you delete the key before you remove the certs; why exactly, I'm not sure… (From my testing, if you delete the certificate via certutil, it comes right back when restarting certsvc. It must be rebuilt from the registry?)

So: certutil -csp ksp -delkey <key>

Checking the keys directory shows it empty. Good stuff.

Viewing the Certificate store

certutil -store my

This made me start to wonder where the actual certificate files are stored; a Google away, and it turns out to be in the registry? Lol (HKLM\SOFTWARE\Microsoft\SystemCertificates)

You can see the key container name matches the certificate hash.

They're nothing more than strings of obfuscated code (much like opening up a CSR), so the only way to interact with them is using the Microsoft CryptoAPI (certutil) or the snap-in.

Deleting the CA Certificate

certutil -delstore my <serial>

Reopening regedit, and the cert is gone.

Delete Trusted Root CA Cert

certutil -store ca
certutil -delstore ca <serial>

So moving on…*

  1. List the key stores again to verify that the private key for your CA was deleted.
    Check
  2. After you delete the private key for your CA, uninstall Certificate Services. To do this, follow these steps, depending on the version of Windows Server that you are running.

    Uninstall-AdcsCertificationAuthority

    If the remaining role services, such as the Online Responder service, were configured to use data from the uninstalled CA, you must reconfigure these services to support a different CA. After a CA is uninstalled, the following information is left on the server:

    • The CA database (To be deleted see below)
    • The CA public and private keys (Deleted see above)
    • The CA’s certificates in the Personal store (Deleted See above)
    • The CA’s certificates in the shared folder, if a shared folder was specified during AD CS setup (N/A)
    • The CA chain’s root certificate in the Trusted Root Certification Authorities store (Deleted See Above)
    • The CA chain’s intermediate certificates in the Intermediate Certification Authorities store (none existed for me)
    • The CA’s CRL (yup)

By default, this information is kept on the server in case you are uninstalling and then reinstalling the CA. For example, you might uninstall and reinstall the CA if you want to change a stand-alone CA to an enterprise CA.

Known AD CA Objects

Step 6: Remove CA objects from Active Directory

When Microsoft Certificate Services is installed on a server that is a member of a domain, several objects are created in the configuration container in Active Directory.

These objects are as follows:

  • certificateAuthority object
    • Located in CN=AIA,CN=Public Key Services,CN=Services,CN=Configuration,DC=ForestRootDomain.
    • Contains the CA certificate for the CA.
    • Published Authority Information Access (AIA) location.
  • crlDistributionPoint object
    • Located in CN=ServerName,CN=CDP,CN=Public Key Services,CN=Services,CN=Configuration,DC=ForestRoot,DC=com.
    • Contains the CRL periodically published by the CA.
    • Published CRL Distribution Point (CDP) location
  • certificationAuthority object
    • Located in CN=Certification Authorities,CN=Public Key Services,CN=Services,CN=Configuration,DC=ForestRoot,DC=com.
    • Contains the CA certificate for the CA.
  • pKIEnrollmentService object
    • Located in CN=Enrollment Services,CN=Public Key Services,CN=Services,CN=Configuration,DC=ForestRoot,DC=com.
    • Created by the enterprise CA.
    • Contains information about the types of certificates the CA has been configured to issue. Permissions on this object can control which security principals can enroll against this CA.

When the CA is uninstalled, only the pKIEnrollmentService object is removed. This prevents clients from trying to enroll against the decommissioned CA. The other objects are retained because certificates that are issued by the CA are probably still outstanding. These certificates must be revoked by following the procedure in the “Step 1: Revoke all active certificates that are issued by the enterprise CA” section.

For Public Key Infrastructure (PKI) client computers to successfully process these outstanding certificates, the computers must locate the Authority Information Access (AIA) and CRL distribution point paths in Active Directory. It is a good idea to revoke all outstanding certificates, extend the lifetime of the CRL, and publish the CRL in Active Directory. If the outstanding certificates are processed by the various PKI clients, validation will fail, and those certificates will not be used.

If it is not a priority to maintain the CRL distribution point and AIA in Active Directory, you can remove these objects. Do not remove these objects if you expect to process one or more of the formerly active digital certificates.

Remove all Certification Services objects from Active Directory

To remove all Certification Services objects from Active Directory, follow these steps:

  1. Know the CA common name (use CertUtil)
  2. Use the Sites and Services MMC snap-in from a client computer, using a domain admin account with proper permissions; highlight the parent snap-in node -> View (from the toolbar) -> Show Services Node.
  3. Expand Services, expand Public Key Services, and then click the AIA folder.
  4. In the right pane, right-click the CertificationAuthority object for your CA, click Delete, and then click “Yes”.
  5. Left Nav, Click CDP folder.
  6. In the right pane, right-click the CertificationAuthority object for your CA, click Delete, and then click “Yes”.
  7. Left Nav, Click Certificate Authority.
  8. In the right pane, right-click the CertificationAuthority object for your CA, click Delete, and then click Yes.
  9. Left Nav, Click Enrollment Services (This should have been auto removed, in my case it was)
  10. If you did not locate all the objects, some objects may be left in Active Directory after you perform these steps. To clean up after a CA that may have left objects in Active Directory, follow these steps to determine whether any AD objects remain:
    1. Type the following command at a command line, and then press ENTER:
      1. ldifde -r "cn=CACommonName" -d "CN=Public Key Services,CN=Services,CN=Configuration,DC=ForestRoot,DC=com" -f output.ldf
    2. In this command, CACommonName represents the Name value that you determined in step 1. For example, if the Name value is "CA1 Contoso," type the following:
      1. ldifde -r "cn=CA1 Contoso" -d "cn=public key services,cn=services,cn=configuration,dc=contoso,dc=com" -f remainingCAobjects.ldf
    3. Open the remainingCAobjects.ldf file in Notepad. Replace the term "changetype: add" with "changetype: delete". Then verify whether the Active Directory objects that you will delete are legitimate.
    4. At a command prompt, type the following command, and then press ENTER to delete the remaining CA objects from Active Directory:
      1. ldifde -i -f remainingCAobjects.ldf

At this point I was having issues: the command to import the LDF file was failing. I posted these results in my TechNet post. After a bit more research, I noticed other examples online didn't have any other information appended after the "changetype: delete" line. So I simply followed along and did the same, deleting all the lines after that one, leaving the base DN object in place, and sure enough it finally succeeded.

(Screenshots omitted: the generated base-object LDF file, the line edited as specified in the MS article, the new altered LDF file, and the same command succeeding after the edit.)

On a second run, I simply deleted the object under the KRA folder, and it returned no values.

13) Delete the certificate templates if you are sure that all of the certificate authorities have been deleted. Repeat step 12 to determine whether any AD objects remain.

I did this via the Sites and Services snap-in, under the PKI section of the Services node.

Delete Certificates Published to the NTAuthCertificates Object

Step 7: Delete certificates published to the NtAuthCertificates object

After you delete the CA objects, you have to delete the CA certificates that are published to the NtAuthCertificates object. Use either of the following commands to delete certificates from within the NTAuthCertificates store:

certutil -viewdelstore "ldap:///CN=NtAuthCertificates,CN=Public Key Services,…,DC=ForestRoot,DC=com?cACertificate?base?objectclass=certificationAuthority"

certutil -viewdelstore "ldap:///CN=NtAuthCertificates,CN=Public Key Services,…,DC=ForestRoot,DC=com?cACertificate?base?objectclass=pKIEnrollmentService"

Note You must have Enterprise Administrator permissions to perform this task.

The -viewdelstore action invokes the certificate selection UI on the set of certificates in the specified attribute. You can view the certificate details, and you can cancel out of the selection dialog to make no changes. If you select a certificate, that certificate is deleted when the UI closes and the command fully executes.

Use the following command to see the full LDAP path to the NtAuthCertificates object in your Active Directory:

certutil store -? | findstr "CN=NTAuth"

Nice and easy, finally.

Delete the CA Database

Step 8: Delete the CA database

When Certification Services is uninstalled, the CA database is left intact so that the CA can be re-created on another server.

To remove the CA database, delete the %systemroot%\System32\Certlog folder.

Nice and easy, I like these steps.

Clean up the DC’s

Step 9: Clean up domain controllers

After the CA is uninstalled, the certificates that were issued to domain controllers must be removed.

Which states for 2003 and up:

certutil -dcinfo deleteBad

My results:

It returned the same list of garbage for the DC, and rerunning certutil -dcinfo still reported the same certs… So I had to manually remove these: again, opening an MMC snap-in via a client system, adding the Certificates snap-in, and pointing it to the machine store on the DCs. Then I manually deleted the certificates, and once this was done for both DCs, certutil -dcinfo finally reported clean…

Summary

Finally!!! What a gong show it is to remove an existing CA from an environment… even one that literally wasn't used for anything beyond its initial deployment as an enterprise root CA.