Just another Mon…. Tuesday

The Start

Nothing new or exciting to start my day: clean house! I was actually cleaning my office this time, not making system changes 😉 Then…. Monday happened. I mean, it’s really Tuesday, but it was my first day back after the weekend (the first weekend I finally didn’t have to do any system changes)… life’s good, right?

I had my first talks with our DBA, then followed up with our Developer. Our current Developer is one really hard-working and amazing dev, and he needed a couple things from me to move his current project forward: a CNAME record, along with a gMSA. This is one of the many reasons I like this guy (he not only understands security posture, but what is needed for it all to work!). The first one took seconds, and the second a couple more (besides the fact I needed to reboot the server, because I chose to use IDGLA instead of granting the computer account direct permissions to retrieve the gMSA’s password)… Yes… yes, I’m aware of the special klist purge command to clear Kerberos tickets, but I wanted to be 100% sure. Then came the problem…
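
To sketch what that setup looks like (a minimal example; the gMSA, group, and server names here are placeholders, not our real ones):

# Group-based (IDGLA) gMSA setup: grant the group, not the computer account
New-ADGroup -Name "SG-WebApp-Servers" -GroupScope DomainLocal
Add-ADGroupMember -Identity "SG-WebApp-Servers" -Members "WEBSRV01$"

# The gMSA's password can be retrieved by members of that group
New-ADServiceAccount -Name "gMSA-WebApp" -DNSHostName "gMSA-WebApp.corp.ca" `
    -PrincipalsAllowedToRetrieveManagedPassword "SG-WebApp-Servers"

# On the member server, after the reboot refreshes its group membership token
Install-ADServiceAccount -Identity "gMSA-WebApp"
Test-ADServiceAccount -Identity "gMSA-WebApp"   # should return True

# The klist route, if you want to gamble on skipping the reboot
# (purges the SYSTEM logon session's Kerberos tickets)
klist -li 0x3e7 purge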

The Problem

I don’t want to get too into the nitty gritty, but the gist of it was we had an authoritative source of data that resided on an older SQL server, while our Dev’s new project was in a whole new datacenter, utilizing a whole new database server.

Since we didn’t want to alter any firewall rules between the datacenters (while they are all in-house and owned by the same company, the two datacenters are still walled off from each other, with a two-way trust created for most authentication purposes), this meant either:

A) Allow the old SQL server to do LDAP queries against my new datacenter’s DCs, then grant the new datacenter’s gMSA account permissions on the database. (I wasn’t in the mood for architecture changes, as I already stated, so this was the last resort.)

B) Figure out a way to utilize two different accounts, making two different source-data calls, from the same app/code.

Now I like the sound of B because, let’s face it, it puts all the work on the Dev and not me. (If this sounds Dilbertish…. that’s cause it is :P) At this point I was pretty confident this was possible… I mean… why not? Well, a couple seconds later my Dev comes back and tells me it is in fact not possible…. well, sort of not possible… not possible for our exact case…. for reals… let me explain. First off, I’m talking about ASP.NET; second, I’m talking about two different connection strings to a database. We both found a number of references, like this and this, and this, and this and even this… OK, that’s a fair amount of reading (sadly I still couldn’t find one of the original sources haha), but in each case you are probably wondering: how do I specify the user name and password for an alternative connection string when using Windows Auth instead of SQL auth?…. Drum Rolll…………..

YOU CAN’T …… TADA
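
For anyone wondering what the dead end looks like, here are the two shapes of connection string (server and DB names are made up; the point is the Windows Auth one has nowhere to put credentials):

# SQL auth: the credentials live in the string itself, so any account works
$sqlAuth = "Server=NEWSQL01;Database=AppDB;User ID=svc_app;Password=SomeLocalSqlPassword;"

# Windows auth: Integrated Security=SSPI means "run as whatever the app pool runs as"
# There is no User ID/Password pair you can add to make it a *different* Windows account
$winAuth = "Server=OLDSQL01;Database=LegacyDB;Integrated Security=SSPI;"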

So…..

The Second Problem

This led us to our second problem. While the new SQL instance was already configured for mixed auth (meaning it permits both Windows and local SQL Server authentication), our old SQL Server instance…. not so much. As much as I wanted to avoid infrastructure changes, it seems it was inevitable…. so I asked my DBA if this would be a problem. Since you can change the auth mode on any given instance without touching the other instances on the server, I figured this was a quick and easy solution: enable Mixed Auth mode, restart the instance, create a local SQL account to be hard-coded and used by the app until the source data can be properly relocated (thus removing any hard-coded garbage from the app). Alright! Until….. Ughhhhhh…
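
For reference, the change itself is a single registry value per instance (a sketch; the instance key, MSSQL13.MSSQLSERVER here, will differ depending on your version and instance name):

# LoginMode: 1 = Windows auth only, 2 = Mixed mode
$key = "HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQLServer"
Set-ItemProperty -Path $key -Name LoginMode -Value 2

# Only takes effect once the instance restarts... which is where our story goes sideways
Restart-Service -Name "MSSQLSERVER" -Force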

The Third Problem

When my DBA went to restart the instance, he decided to use SSMS remotely (nothing wrong with this; I didn’t even know it was possible and was excited to learn something new… until…) the service failed to come back up successfully (Ohh boy, here we go), so sure enough we got into fix mode. My DBA jumped right into Event Viewer (good man) and discovered the first error, stating the service was unable to bind to the port as it was in use (the DBA opened SQL Server Configuration Manager and Services.msc, and both showed the service as not running). This instantly told me one thing… the service didn’t stop properly. Even though Services.msc showed no signs of it running, Task Manager and tasklist showed otherwise. Here’s the kicker: every attempt to force-stop the process (the instance’s sqlservr.exe) reported back “Access Denied”, and even running psexec (love you, Mark!!) as SYSTEM still reported “Access Denied”.
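
For the curious, this is roughly the sequence we were hammering at, all to no avail:

# Services.msc said stopped, but the process was still alive:
tasklist /FI "IMAGENAME eq sqlservr.exe"

# Every one of these came back "Access Denied":
taskkill /F /IM sqlservr.exe
Stop-Process -Name sqlservr -Force

# Even as SYSTEM via Sysinternals PsExec:
psexec -s taskkill /F /IM sqlservr.exe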

The Fix

At this point I basically figured we had to reboot the server (I also assumed it would get stuck at the “stopping service” stage on shutdown, but amazingly it did not!). Sure enough, after the reboot everything came up without a hitch, and the new Mixed Auth mode was enabled for our Dev’s alternate ASP.NET connection string! OK, I know this sounds like a pretty crappy solution, but honestly it was the only thing we had left in our toolbox, and it fixed both problem one (Mixed Auth mode is now enabled on the old instance) and the stuck instance (it came up without a problem).

Until…..

While we (DBA, Dev, and myself (SysAdmin)) continued to test our other applications that were built via other means, it seemed a couple things were broken. This one’s a little funny because we assumed it was an issue for everyone; it turned out to only be an issue for the DBA and Dev, not myself (though I wasn’t on my local machine to do any front-end testing from my own account). So let me explain. The Dev kept digging into the real nitty gritty of the code, jumping all the way into the back end of SQL’s stored procedures and views, and discovered empty values were being returned (I have no clue if this was always an issue (based on the fix) or if it was actually due to something else… anyway). It turned out one of the built-in views used as a source to create a temp table was returning null, thus throwing the error when one of the stored procedures was called. When the Dev and I went to talk to the DBA in the lunch room, we discussed some of the permission changes we had just implemented on the instance’s security logins and made some assumptions, so I went into the back-end AD groups to validate some things. Sure enough, it was a little funny: their standard accounts had direct logins on the instance (generally not a fan of this, as I love scalable design and prefer to utilize IDGLA). My DBA told me he had fixed this, and then he told me something I never would have expected, and it’s a huge learning experience for me:

WHEN YOU GRANT AN ACCOUNT THE “SYSADMIN” ROLE/PERMISSION, THE OTHER ROLES IN WHICH THE ACCOUNT IS A MEMBER DO NOT APPLY PROPERLY (THEY ARE BYPASSED OR SOMETHING).

Literally. So what happened was: we have a group defined to be granted sysadmin rights on the server (to manage it, not manipulate data), which normally contains admin-based accounts (we all use standard accounts plus separate admin accounts for least-privilege best practice, right? :). However, both their admin accounts and their standard accounts were in there. I removed the standard accounts, and once that was corrected, the proper nested grants their standard accounts were supposed to get from the other roles applied properly, and the issue was fixed.
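
If you want to see who’s sitting in that role on an instance, a quick sketch (assumes the SqlServer module’s Invoke-Sqlcmd; server and account names are placeholders):

# List everyone holding sysadmin on the instance
Invoke-Sqlcmd -ServerInstance "OLDSQL01" -Query "EXEC sp_helpsrvrolemember 'sysadmin';"

# Check a specific account; 1 means member. Remember: for sysadmin members,
# SQL Server skips permission checks entirely, so their other role grants never get evaluated
Invoke-Sqlcmd -ServerInstance "OLDSQL01" -Query "SELECT IS_SRVROLEMEMBER('sysadmin', 'CORP\jdoe') AS IsSysadmin;"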

Party in the House… Until….

Yes… believe it or not, my day does not end here…. there was simply more information the great world of IT had to shove into my tiny brain, which is already overloaded and overwhelmed at the pure magnitude of knowledge you need to manage systems!! WHYYYYYY! GOD WHYYY!!!!

Anyway…. so to end the day, we get a unique error message from one of our workflows, and sure enough, another email from an external user providing a snippet of an error (how nice of them). Is this a coincidence…. not a chance, 100% related… So again, most of the heavy lifting was done by our Dev (this guy….. he’s a superstar!). He managed to break it down to an assembly problem… but we were shocked as to how this could be (we checked everything was working after all our above fixes)… until our DBA made a confession: he wasn’t happy to have found out his own account was the DB owner of a fair share of DBs within the instance, so he had secretly cleaned it up… Well, after some trial and error (reverting the change a couple times), it turned out that the error “the server may be running out of resources, or the assembly may not be trusted with PERMISSION_SET = EXTERNAL_ACCESS or UNSAFE” was simply due to a single missing permission needing to be granted to the new DB owner account:

In SSMS -> Instance -> Logins -> Account -> Properties -> Securables -> Check Grant for Unsafe Assemblies

Sometimes… you just gotta run unsafe code 😛
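
If you’d rather script that checkbox than click it, the T-SQL equivalent should be a one-liner (server and login names are placeholders):

# Same as Securables -> Grant: Unsafe Assemblies in SSMS
Invoke-Sqlcmd -ServerInstance "NEWSQL01" -Query "GRANT UNSAFE ASSEMBLY TO [CORP\svc_dbowner];"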

Alright! Home Time!

I Spike You!

Ahhh the internet….

Published my own personal MX record in hopes of getting my own email going….
I decided to see why my outbound email wasn’t working (sigh, even following Paul Cunningham’s post it seems I’m missing something). It seems all my outbound SMTP connections to external mail servers are failing. According to my firewall (Palo Alto), the rule is allowing it out, but the application shows incomplete… like it’s never establishing a connection. So, as in my previous posts, I used telnet to attempt a connection to known external IPs of SMTP mail servers, and sure enough, no connections could be established. (I know I’ll eventually have to create a receive connector for inbound sources and a security rule to allow email from outside in, but I wanted to tackle outgoing email first.)
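
If you’d rather skip telnet, the same reachability check in PowerShell looks like this (any known external mail host works as a target; Gmail’s MX is just a convenient one):

# TcpTestSucceeded : False tells the same story as telnet hanging
Test-NetConnection -ComputerName "gmail-smtp-in.l.google.com" -Port 25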

I decided to attempt the same port 25 connection to the new record I created (I have multiple internet connections to utilize, so I can actually test connections from “outside” without having to rely on a loopback NAT rule or anything). To my dismay, it showed failed to connect (I already expected this, as I had created a NAT rule but never created a security rule to allow the connection). I decided to go to my Monitor tab to see if I could see the attempted connection, and I indeed did. However, what surprised me more was the failed attempts from others in the short time since I created this record (considering I’ve had the IP for a long time and pretty much all ports were blocked forever, I didn’t expect there to be many attempts). These were either crawlers or something else…. but guess who the very first was….

141.212.122.227
University of Michigan (AS36375)

Not once, but twice, from two sequential IP addresses…. Mhmmm, what are those Michiganders up to?

185.35.62.150… unknown, someone remaining anonymous, a Michigan hookup? Occurred 3 minutes after.

Then Hours Later….

107.170.227.216
Digital Ocean, Inc. (AS14061)

Not sure who they are; might have to check ’em out…

Couple hours later…

46.29.161.101…. Anonymous

I guess it only makes sense that after Americans and Anonymous it would be nothing other than the Russians, right…. To be fair, I don’t actually know wtf this is lol, Japanese mixed with Russian or some pile of who knows what.

95.181.178.182
FOP ILIUSHENKO VOLODYMYR OLEXANDROVUCH (AS57311)

They at least tried three times in a row from the same IP (good idea; if it doesn’t work once, heck, try again a couple times).

Then my attempt… pretty funny what you can hear if you just listen…

This isn’t actually I Spike You! like from the old-school GoldenEye movie, but this is what you’d actually do if you wanted to “spike” someone online. This is my actual server I plan to use, of course, but if I actually wanted to find out what people are up to, I’d create a honeypot. Maybe now that I post this, they’ll think my MX record is a honeypot, but it’ll secretly come into use… sometime…. lol

Multi BIN games on Lakka

I figured I may as well start a series on what I have to do to get Lakka set up (Lakka is pretty much a lightweight Linux distribution that boots straight into RetroArch, compiled for many different small-board architectures, including the Raspberry Pi). Today I got a game that happened to have multiple bins. Now, I hope to post soon on how I managed to get multi-disc games to play nice, which basically involved using a PSX2PSP conversion tool I read about from another source, but that is for another post. This time I had multiple bins, up to 12 separate ones, and that other tool, although supporting bin files, only supported up to 5, for multi-disc purposes, not multi-track purposes… so what to do… Google! And I found this. Ahh, blogs make this world go round. So I basically did the same thing, but in a VM to be safe: used DAEMON Tools to mount the .cue, then used ImgBurn to make a single bin/cue file. Then used that in the PSX2PSP tool for a bit of compression… and it worked 😀

Configuring an Anonymous Receive Connector on Exchange 2016

The Story

Well, in my previous post I discussed the issue I faced resolving an email problem with one of our development applications, which was unable to send emails after a recent Exchange upgrade/migration. So initially we were going to simply rebuild our own workflow in-house using ASP.NET Core, until we noticed that even our own workflows were failing… In this case, the answer from the old post was super vague: “reconfigure the receive connector”. Then I somehow stumbled upon my answer through one of my hundreds of Google searches… I found this gem!

OK, before I link the gem, which will be the source of my answer, I also wanted to point something out real quick here, in hopes maybe someone can comment below with the answer to this one:

When using Exchange 2016 as an SMTP relay, and you use a no-reply from address with an external email address for the destination, how do you query to find out if it’s gone through or is stuck in the queue? Everywhere I looked in the ECP always required me to select a mailbox… there’s no mailbox associated with these relayed email messages, so how does one check this?
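
My best guess so far, in case it helps anyone else (untested for relay traffic specifically, but these are stock Exchange Management Shell cmdlets): since relayed mail never touches a mailbox, you’d look at the transport queues and the message tracking log instead of the ECP:

# Anything stuck in transit shows up in the transport queues
Get-Queue | Where-Object { $_.MessageCount -gt 0 }
Get-Message -IncludeRecipientInfo

# And delivery history for a relayed no-reply sender
Get-MessageTrackingLog -Sender "no-reply@corp.ca" -EventId DELIVER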

OK, now for the gem. This guy, “Paul Cunningham”… He’s… uhhhhhh… He’s uhhhhh… he’s uhhhh a good guy. So I always knew you could use telnet to check certain ports and services… but this was so concise… it nailed the problem…

From my K2 Server or my in house workflow server:

1) Ensure Telnet Client feature is enabled

2) Open cmd prompt or PowerShell:

telnet exchangeServer 25
helo
mail from: user@corp.ca
rcpt to: ExternalUser@gmail.ca

220 EXSERVER.exchange2016demo.com Microsoft ESMTP MAIL Service ready at Thu, 22
Jun 2018 12:04:45 +1000
helo
250 EXSERVER.exchange2016demo.com Hello [192.168.0.30]
mail from: adam.wally@exchange2016demo.com
250 2.1.0 Sender OK
rcpt to: exchangeserverpro@gmail.com
550 5.7.54 SMTP; Unable to relay recipient in non-accepted domain

Huh, just like the source blog. Now why would I be getting that error… I allowed Anonymous users via the checkbox under the receive connector’s security tab… yet Paul does a lil extra step that doesn’t seem to be mentioned elsewhere. That checkbox I mentioned is his first line, but then look at the interesting second line….

[PS] C:\>Set-ReceiveConnector "EXSERVER\Anon Relay EXSERVER" -PermissionGroups AnonymousUsers
[PS] C:\>Get-ReceiveConnector "EXSERVER\Anon Relay EXSERVER" | Add-ADPermission -User 'NT AUTHORITY\Anonymous Logon' -ExtendedRights MS-Exch-SMTP-Accept-Any-Recipient

Since I was using anonymous settings on the application server side (K2 in this case), I gave the second PowerShell cmdlet a run from my new Exchange server.

Amazingly enough, just like the source blog, after running the second line (edited to fit my environment, obviously) the rcpt to succeeded!

220 EXSERVER.exchange2016demo.com Microsoft ESMTP MAIL Service ready at Thu, 22
Jun 2018 12:59:39 +1000
helo
250 EXSERVER.exchange2016demo.com Hello [192.168.0.30]
mail from: test@test.com
250 2.1.0 Sender OK
rcpt to: exchangeserverpro@gmail.com
250 2.1.5 Recipient OK
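
If you want to confirm the permission actually landed on the connector, a quick check (connector name as per Paul’s example):

# Should list MS-Exch-SMTP-Accept-Any-Recipient for ANONYMOUS LOGON
Get-ReceiveConnector "EXSERVER\Anon Relay EXSERVER" | Get-ADPermission |
    Where-Object { $_.User -like "*ANONYMOUS*" } |
    Format-Table User, ExtendedRights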

Part 2 – The Solution

If K2 is configured to use EWS, check that stuff out elsewhere. If you landed here from my previous post looking for the answer to the “There is no connection string for the destination email address ‘Email Address'” error and wanted to know how that person altered their receive connector:

[PS] C:\>Get-ReceiveConnector "EXSERVER\Anon Relay EXSERVER" | Add-ADPermission -User 'NT AUTHORITY\Anonymous Logon' -ExtendedRights MS-Exch-SMTP-Accept-Any-Recipient

K2 SMTP Configurations

Intro

I’m going to keep this post short, just in hopes that I don’t go off the rails on this product: K2 Blackpearl 4.7. I have plenty of awesome SharePoint 2010 to 2016 migration content yet to post on my site. I’m sorry, I wish I could get all the awesome things I do on here; there are many awesome things I keep thinking about: the Iomega NAS conversion I did replacing the ix12 OS with FreeNAS, my test environment, iSCSI MPIO (VMware, Microsoft, Linux configs)… anyway… K2…. ugh

The Problem

I’ve never posted about this product on my site before, cause to be frank…. I tried to stay away from it as much as possible. All I did was update the base OS and pray that the developers or users didn’t complain about errors in any of the “K2 apps”. Trust me, there are lots; I can’t tell you how many times I had to hear about K2 issues… anyway, I digress. Ya’d figure it’s this simple, eh… well, for running the setup manager for first-time config, sure… but you have to run it again if you ever want to change this value… *ahem*… Alright, so let’s say you did this… it’ll just work, right… you set the email server destination and port, so it’s gotta work for everything on the server (we’re talking standalone, not clustered), right?…. Nope… You eventually find the most common and generic error you will ever see if you have any email issues with K2: “There is no connection string for the destination email address ‘Email Address’”, and you will get it for a lot of different things. Oddly enough, some of this gets covered here. But it’s a mess, and you have no clue which problem is the cause of the error. So, much like the shared link there:

Check 1 – Environment Variables

Check the Environment Variables (if you are not sure what Environment Variables are, you can read this *ahem* awesome… whitepaper).

Alright… so we re-ran the config manager, updated the email settings there, updated our environment variables; we gotta be good now!…..

“There is no connection string for the destination email address ‘Email Address'”

Check 2 – SMTP Config Strings

You got to be…… OK, OK…. we got this… there’s got to be something else we must have missed… let’s see… mhmmm, as Mikhail says here…

“Email configuration is externalized from process and K2 server relies on connection strings in configuration file, processes look at Environment Library, but you should also keep in mind String Table. I saw cases when people did update of Mail Server field in environment library, but their workflow was deployed from other environment with old/incorrect email settings which were written into String Table during deployment time – so you should also make sure that you have correct settings there.”

Soooo… from K2’s terrible support page, you can either dig in and manually edit “k2hostserver.exe.config” (really….. really….. .exe.config…..) or load the terrible Windows application they provide for editing this XML-type config file. Now you might find yourself wondering: “Do I have to create a new SMTP connection string for every from and destination address? That’s…. just…… unmanageable!” And yup, it sort of is. What my awesome colleague and I discovered (he’s a K2 Master, by the way) is that any internal “spoofed” mail (since we had decided we weren’t utilizing any of K2’s EWS integration) would only work when that particular user had an SMTP string in this tool: ConnectionStringEditor.exe

It took my colleague a really long time before magically discovering what the syntax for a wildcard (e.g. *@Zewwy.ca) in the SMTP connection string was…. Drum Roll…….. NOTHING

That’s right, nothing: to make a wildcard SMTP connection string, simply leave the field blank in the first step of the wizard. Alright… so now we had validated that workflows could send on-behalf-of SMTP-based email (even to Exchange, without any EWS integration)… However, we also utilized another workflow to send emails to external email addresses…

“There is no connection string for the destination email address ‘Email Address'”

Check 3 – SMTP Receive Connector

Are you kidding ME?!?!?! Alright…. Jesus, what else did I forget/miss…

At this point you may find yourself a lil bit stuck, as every other post points you to the same solutions above; or, like this jerk-off, they go over everything, then laugh in your face and say: “‘There is no connection string for the destination email address testsmtp4velocity@gmail.com’. That is a pity. I thought we just added it. Ok, I admit, I knew this was going to be the result, but I thought I’d keep you intrigued. Now you will have to wait for my next article to see how to fix it. You can find the answer in part 2 of this article, along with a few other tips on how to resolve some other issues when trying to send an email through a SMTP server.” …only to find that there is no Part 2….

So I was kind of stuck with the initial share I gave…

“Thanks for your help. We found the root reason is that the Exchange Server config the receive policy so we changed the policy then resolved the problem.”

If dealing with issues isn’t bad enough, the internet is littered with useless help: “Don’t worry guys, I figured out my problem; if you have this problem too, well, I figured out mine, good luck with yours.” Anyway, again I digress. I am not such a jerk, and I will tell you how I finally managed to resolve this issue for good.

So I double-checked my receive connector on my Exchange server… maybe there’s just something I missed… well… I covered everything: no TLS…. using port 25, listening only to my K2 server, Anonymous Users checked off under security… What the heck, I have them all covered…

Check 4 – Part 2

I was at my wits’ end until this! (Part 2, LOLOLOLOL; for real though… it’s coming soon, in like 20-30 mins, maybe an hour; shouldn’t take me long to write up)

Setting Mailbox Subitem Permissions

I had to do this for resource items, to allow all staff to view the calendar of the resource so they could schedule items accordingly, as one of the comments mentions on this Spiceworks post:

Set-MailboxFolderPermission "roomname:\Calendar" -User Default -AccessRights LimitedDetails

This can, however, be expanded on, knowing PowerShell is object-oriented: roomname is actually any mailbox, :\ is the delimiter, and Calendar is the subfolder. -User takes a group or user (groups are usually best practice per IDGLA), and finally, for -AccessRights you can specify any of the access rights you see in the pull-down options of Outlook.
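
So, for example, granting a staff group rights on a boardroom’s calendar and then double-checking it (group and room names are made up):

# New entries need Add- rather than Set- ("Default" always exists, which is why Set- works above)
Add-MailboxFolderPermission "BoardRoom:\Calendar" -User "SG-AllStaff" -AccessRights LimitedDetails

# Verify what's now on the folder
Get-MailboxFolderPermission "BoardRoom:\Calendar"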

That’s it; that’s all there is to managing mailbox sub-item permissions.

Splitting WordPress Titles
Post Headers

I wanted to do this ever since I noticed one of my headers being really long and unsightly… I decided to google this, like I google everything…

The first one I found, I didn’t want to dink with code; I like coding, but mostly PowerShell (if you haven’t noticed based on my categories)… Sorry, you’re out!

The second one I found, I didn’t like cause it used a plugin… However, there’s always something great to learn from the comments, especially from “pessimists” *cough* realists, such as this great comment by “KRZYSIEK DRÓŻDŻ”:

“Wow, you really need a plugin for that?

Why don’t you just insert a <br> tag? Installing million plugins, that aren’t doing anything really isn’t a good idea… Especially, if such plugin is not popular, so very few people have looked at/controlled it’s code (this plugin had 30 active installs).”

Made me gooo, waaaaaa, that’s it? So sure enough I add <br> in my title, and Bam! The title is split onto 2 lines. Now that was easy. Thanks, Krzysiek!

Looks like my third source basically does the same thing.

LegacyExchange Annoyance
(Exchange Cross Forest Migration Woes)

The Beginning

In my quest to completely rebuild my company domain from the ground up (new Domain Controllers (Server 2016 Core), SQL 2016, SharePoint 2016, Exchange 2016…. you get the idea), I’ve had to face many interesting challenges (not all blogged about just yet, but if you follow my TechNet posts you’ll see I have plenty of content to write about moving forward, if I don’t come across interesting new things; at this rate that seems unlikely… anyway). This time it was another interesting one.

The Weekend

After spending the entire weekend combing over every little detail of the migration (mail relays, systems that use email, how they send email, the receive connectors they would need (auth types, TLS sec, etc.)), I figured I had all my bases covered and made the switch (all changes, including expanding existing server configs, made to allow mail flow, not hinder any). The last part of my switch was ensuring most servers/services were using a Fully Qualified Domain Name (FQDN) in their settings/configs for SMTP, so my cut-over in this case was very simple: change the A host record and clear all systems’ DNS caches. To my amazement, everything was still working (even ActiveSync cut over without a hiccup)…. Until… the next day…

The Next Day

Of all things, I didn’t anticipate internal email flow breaking… I mean… there’s nothing different between Joe.blow@corp.ca and Joe.blow@corp.ca, right??!?! Wrong! lol, with Microsoft Exchange you are totally and utterly wrong! First… read this and read this to understand exactly what I mean. In short, internal email likes to use its own special address… (gives a dirty look)… called an X.500 address (AKA the LegacyExchangeDN), a bunch of garbage, muff cabbage BS…. So instead of everything resolving normally, since all new linked mailboxes had the proper SMTP address (so all other outbound/inbound mail flowed without issue), users wanting to reply to old emails, or creating new ones and having the TO field auto-populated, would get a stupid NDR (AKA a bounce-back telling them the email can’t be sent to the recipient), cause FFS, it can’t just use the SMTP address, NOOOOOOOO, it uses the stupid legacy X.500 address… Gosh darn *mumbles* Exchange… In case you can’t tell, I despise email with a passion.

The Search

Anyway, I looked up the possible solutions, and I wasn’t too happy. For now, I was telling people to simply clear the old auto-populate cache Outlook was using. As for existing calendar events, turns out all resources fell under the exact same annoying problem: even though I created them all with the same aliases, it wasn’t good enough. Exchange was seeing them via their old X.500 addresses (since all old calendar items were imported from backup), while they had new X.500 addresses on the new Exchange server. So I would have to tell people to remove those resources and simply re-add them.

The Problem

There is, however, a problem with this, and that is if someone edits an existing calendar event and changes the time, the room may already have been booked (the new room), so when the user editing the old recurring event goes to re-add the room, it complains about a conflict: someone has already booked the “new” room, even though it should have been held by the initial booking. Alright, so how does one re-map this… well, it took me a while digging through Google, like this guy, or this guy (seems everyone’s a blogger these days), but I found an excellent blog that covers the problem, and the solution, pretty clearly.

The Answer

To keep it short for everyone, and as usual to paraphrase the solution (which at first was not even working for me :@, even after waiting 18 hours)… *UPDATE!* Don’t put in X.500 like the stupid UI tells you to… just put in X500, without the ****ing dot… See my TechNet post for details!

1) Open Active Directory Users and Computers (from the linked/old domain)
2) Find any User/Resource/Equipment object that was migrated
3) Right-click and select the Attribute Editor tab (requires Advanced Features view)
4) Press "L" to jump to legacyExchangeDN, double-click it and copy the value
5) Open the Exchange ECP (new server)
6) Under Recipients, double-click the migrated mailbox, then click email address
7) Add a new email address of type X500 and paste the address you copied in step 4 (or script it; see the sketch below)
8) Wait for the OAB to synchronize across the farm and clients
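
If you’d rather script steps 5 through 7, something like this should do it (the DN below is a placeholder for the value you copied in step 4):

# Note the address type is "X500", no dot, exactly as the update above warns
$legacyDN = "/o=OldOrg/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Recipients/cn=joe.blow"
Set-Mailbox -Identity "joe.blow" -EmailAddresses @{ Add = "X500:$legacyDN" }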

New-MailboxImportRequest Failed

This is going to be another short post.

Working on an Exchange migration this weekend, I was using our backup software to simply export users’ mailboxes from the most recent backup of our old Exchange server, then importing them into the new Exchange server after each mailbox’s creation.

I would have loved to simply select each user’s mailbox as a whole and import those PST files. However, testing showed this simply created a sub-item with the user’s name and all their folders, instead of properly placing them under the primary parent hierarchy. So I was forced to export each item individually (Inbox, Sent Items, Drafts, etc.) and import them. I initially didn’t script this, as there were only about 30-40 users I had to migrate; I figured it was easier to just go through the wizards… until I discovered some users had created folders outside of their Inbox! Ohhh boy…. Anyway, it turns out if you exceed 9 imports for a single mailbox without specifying a special name for each (even after they succeed), you will get an error as follows:

“The name must be unique per mailbox. There isn’t a default name available for a new request owned by mailbox xyz”

The solution was easy enough to find; a good band-aid, indeed:

Get-MailboxImportRequest -status completed | Remove-MailboxImportRequest

However, in my case I found I was sometimes still getting the error even though I had cleared all completed import requests (with default names, obviously). It turned out I was hitting a weird bug where imports were showing as Queued, yet if I piped them into Get-MailboxImportRequestStatistics | Select Status, they reported a status of Completed… (If you want all the details, pipe into Format-List instead of Select.)

Get-MailboxImportRequest -Status Queued | Get-MailboxImportRequestStatistics | Select Status

lol, I wasn’t sure what to make of this, but there were two solutions:

  1. Clear the “Queued” imports that are really Completed.
  2. Give your new import a unique name using the -Name parameter (both sketched below).
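
Both solutions in sketch form (mailbox, path, and request name are placeholders for my backup exports):

# 1. Clear the phantom "Queued" (really Completed) requests
Get-MailboxImportRequest -Status Queued | Remove-MailboxImportRequest -Confirm:$false

# 2. Give each per-folder import its own name so you never run out of default names
New-MailboxImportRequest -Mailbox "joe.blow" `
    -FilePath "\\BACKUP01\PSTExports\joeblow-SentItems.pst" `
    -Name "joeblow-SentItems"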

I’ll admit, though, Exchange 2016 is more intuitive to manage than old Exchange 2010.

Upgrading a Windows Volume from MBR to GPT to support UEFI boot and features

I’m going to keep this post short, so there won’t be any use of the TOC plugin I recently deployed. 😛

I recently used an image of a sysprepped machine to deploy new machines. To my dismay, the image was created with an MBR partition table and was mostly booted via legacy BIOS options. This isn’t very secure, as it misses out on many of the security features of UEFI.

It’s been well known that moving from MBR to GPT was, back in the day, a painful process. I won’t go over the details, as this “Microsoft Mechanics” video does a decent job of covering them.

If you’d like a few more of the nitty-gritty details, you can view this TechNet blog.

In short:

  1. Boot into PE.
  2. Use the “mbr2gpt” command to validate and convert the partition (see the sketch below).
  3. Boot into the mainboard config (BIOS/UEFI).
  4. Configure the boot option to UEFI.
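
The two commands in question, for reference (assuming disk 0 is the Windows disk; there’s also an /allowFullOS switch if you want to risk running it outside PE):

mbr2gpt /validate /disk:0
mbr2gpt /convert /disk:0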

Now THAT was easy!