OPNSense for Exchange Reverse Proxy

OPNsense and Exchange

Unlike the German blog I reference below, I use a Palo Alto as my main device to handle normal NAT for the OPNsense box's internet access, as well as the NAT rule to allow HTTP validation (which I covered in my last blog, as it was causing me some issues). Another notable difference is that I have a dedicated Datacenter zone which has its own dedicated NAT rules for internet access, but no direct NAT rules from the outside world (as it should be), which means no dirty double NAT (like it should be). Then once certs are set up, the OPNsense will reverse proxy the HTTPS requests for OWA, and hopefully ActiveSync.

First however, I'm going to add a new VMPG network, which in this case I called (DMZ), and assigned it a VLAN (70). Since this is ESXi running on an old desktop with only 1 NIC (initially), I have to utilize VLANs to make the most out of the lack of physical adapters. Then I'll need to create a sub-interface on my Palo Alto with the same VLAN tag of 70, and give it an IP address of 192.168.16.1/24. This will be the subnet of the DMZ. Now you may be wondering why I'm putting the sub-interface and IP on my Palo Alto and not on the OPNsense VM; the reason is I use the Palo Alto firewall to manage all the other networks in my environment, so all known routes will take place there.

The whole idea here is to get ActiveSync to work, and the PANs do not support reverse proxying. So the idea is to have a NAT rule allow port 443 (HTTPS) from the internet to the OPNsense VM. So after the redesign I have 1 OPNsense VM (192.168.16.10/24 – VLAN 70) and a new DMZ VR, with a new sub-interface on the PAN (192.168.16.1/24 – VLAN 70).

 

and the PAN…

So I added static routes between my Zewwy network and my new DMZ. As you can also tell based on the mgmt-interface profiles, I only allowed pinging the gateway, allowing the OPNsense ICMP request shown above to succeed.

I had to set the default gateway on the OPNsense VM via the CLI first in order to gain access to the OPNsense web UI:

route add default 192.168.16.1

Change the IP based on your gateway. Then once in the UI go to:

System : Gateways : Single : Add

This was required to keep the default route persistent after reboots.

 

Well, I was getting a bit stuck, so I decided to google a bit, and sure enough a blog came to the rescue; oddly enough, it's a German blog. I can speak a little German, but not very well. So I picked translate…

I thought… oooo he's on a VM on ESXi too, and installing VMtools, nice… go to plugins… don't see a list like him, and thought… Shiiiit, my OPNsense has no internet…

Sooo, I decided to give my OPN VM internet access to get updates and plugins (best move). I won't cover this in detail, but it basically required me to add a default route to the DMZ VR, create a NAT rule and a Security rule, then test pinging an internet IP from the OPN VM, and success.

OK so… now that the PAN is all set up, and we have tested our NAT rule for internet access for the OPNsense VM… let's just go over the OPNsense install…

OPNsense Install

On your Hypervisor or Hardware of choice, in my case ESXi New VM. 🙂

In this case I know I/O is not a big deal so the local ESXi datastore will suffice for this VM:

Pick VM V8 (cause I’m still on ESXi 5.5)

FreeBSD 64Bit (for some reason we won't be able to pick UEFI)

CPU: 2, Mem: 2GB, 1 E1000 Nic in the DMZ

LSI Logic Parallel SCSI, New 20 Gig Thin Prov Disk, Create VM.

Edit VM settings, remove floppy, Boot Options Force BIOS.

Open Console, and Boot VM. Disable Diskette A:

Advanced, I/O Device Config, Disable All (it's a VM, we don't need these)

Now, Select the disc part and mount the OPNsense ISO for booting:

Boot it! by Pressing F10 in the VM and save BIOS settings:

Mhmmmmm so delightful…. and now we let it load the live instance. While this live instance is good enough to start using, I don't exactly feel like losing my settings every time it boots and having to remount my ISO from my local machine… so we'll install OPNsense by logging in with the installer account:

As you can see it's assigned our one and only NIC the LAN settings; to ease our deployment and the above section I struck out, we'll be assigning the interface the WAN value. 😛 Anyway, logging in with the opnsense password.

Mhmmm just look at the old style look, makes me juicy…

*NOTE* if installing EFI based, the input here may freeze… googling it quickly I only found one reference to the issue, in a comment by eugine-chow:

  1. Press CTRL + C (this exits the installer)
  2. Re-logon as the installer account (this resumes the install with keyboard control)

OK, let's go! Accept, Guided installation! Pick Disk; for simplicity and low disk space, we'll just pick MBR… and look at that installation go… mhmmm humbling…

Set a root password:

Now reboot and unmount the ISO; now it boots quicker and our settings will be saved! First things first, assigning NICs… or should I say our one NIC. Log in as root via the console. Press 1 to assign interfaces. Even though I showed VLAN assigning above, that is handled by the ESXi hypervisor, and thus I select no to VLAN tagging here, and then specify em0 as my WAN NIC:

Now in my case it waits a long while at Configuring WAN interface, cause it's defaulting to DHCP, and there's no DHCP in the subnet… ugh, I don't know why they don't ask for IP assignment type in this part of the wizard…

Now select option 2 to set the IP, which should have been part of the wizard in part 1…

Now that is out of the way, we can access the OPNsense web UI from our Datacenter laptop/VM… you won't be able to ping it, but the anti-lockout rule will be created in the WAN rules, so…

Follow the config guide… only important part being the upstream gateway:

And of course in my case, since it's being NATed, the RFC1918 Networks option will be unblocked, as it's using one 😛 and NO LAN IP.

First order of business is going to be moving the web UI off of port 80, as that will be needed for Let's Encrypt validation (only cause my DNS provider doesn't have the API for DNS validation yet).

Finally time for OPNpackages

OPN packages

Bammmmm that was easy!

OK, Firewall: since my OPNsense only has WAN, and it's open, all security will be handled by the Palo Alto, so I don't want to open HTTPS from the internet to the OPNsense just yet, till we create the other requirements.

HAProxy

Create a Real Server; in this case this will be our Exchange server, as in the topology.

Now for a Backend Pool

He doesn't mention any other settings, so I just clicked save… I probably should have named the backend pool better, but meh.
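For reference, what the plugin builds boils down to something like this in a raw haproxy.cfg (just a sketch; the backend name and Exchange server IP are placeholders for whatever you configured):

backend exchange_backend
    mode http
    # Exchange listens on HTTPS, so connect over SSL; no cert verification in a lab
    server exchange01 192.168.16.20:443 ssl verify none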

Following the German guide I was a lil upset, cause I was running OPNsense 19.1 and it seems they changed the HAProxy options; however, I did manage to figure it out after a while…

ACLs now Conditions

Go to Services -> HAProxy -> Rules & Checks -> Conditions

Add a condition; for testing I kept it as simple as the blog I was following:

and then…

Actions are now Rules

Go to Services -> HAProxy -> Rules & Checks -> Rules

add a rule:

Frontends are now Public Services

Go to Services -> HAProxy -> Virtual Services -> Public Services

Add a public service:

Enable The HAProxy Service:

OPNsense Firewall Settings

Even though this VM wasn’t routing any traffic, I still had to create an allow rule under the firewall area before my PA firewall would see completed packets:

First attempts gave site unavailable, and my PA logs showed…

On OPNsense:

Firewall -> Rules -> WAN -> Add -> TCP (HTTPS) Allow + TCP (HTTP) Allow

 

Basically allowing all TCP packets; after applying I was able to get the OWA page from my Windows 10 VM in the datacenter:

So now it's going to basically be creating a NAT rule on the PA to see it from the internet… but before I get to that…

Certificates!

Now that I covered getting Let’s Encrypt to work behind a Palo Alto firewall I should be able to complete this part!

Let's Encrypt

Enable the service, and the HAProxy integration extension, hit apply.
Create an Account

I did select my Exchange frontend, even though I didn't show it here; then I created a Let's Encrypt frontend, as Exchange won't deal with HTTP:

Let's Encrypt Frontend

Well, let's test this out… Create a Certificate…

Click save changes, but just before we click Issue Certificates, let's tail the log (/var/log/acme.sh.log) to see the process… If you try to open it before you click issue, it will fail, cause the file only gets created on first run… so click issue and then quickly open the log file with the tail command… if it gets stuck at ACCOUNT_THUMBPRINT, something went wrong… and of course… something went wrong… ugh……
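From a second SSH session on the OPNsense box, following it live is just:

tail -f /var/log/acme.sh.log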

Mhmmm sure enough… Domain Key error on second try…

But if I alter my HTTP validation to…

and attempt to issue the certificate, then I see in my acme.sh.log it's a success…

but the UI will still show a validation error even though it was issued successfully…

Let me see if I can at least assign this cert even though it may not be automatic…

Seems like it… let's test…

Well, at least that's something… I'm not sure if the auto renewal will still work… if so, I'm not sure exactly what the point of the HA plugin really is… I mean, if you can specify the normal WAN and port 80 to validate the certs and select the cert to use on the public service… figured it'd work nonetheless, right?

Well, I guess we'll find out… now there's one last thing I want to cover… but I'll do that when I get it figured out again…

For now I'll post this blog post as is, cause it is getting rather long.

Cheers! OK NM, I did it quickly…

Blocking the ECP

Under OPNsense HAProxy go to Conditions:

Then Rules:

Then Edit your Public Service settings and add the rules:
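In raw HAProxy terms, the condition and rule pair amounts to something like this in the frontend (a sketch):

# Condition: the request path starts with /ecp (the Exchange admin panel)
acl is_ecp path_beg -i /ecp
# Rule: flat out deny those requests
http-request deny if is_ecp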

Finally test access to ECP via the Proxy…

Ahhhh much better… 😀 Something not mentioned by the German blogger, which makes me wonder if I can access his ECP… mhmmm

Alright that’s all for tonight. 😀

ZoneMinder on Debian 9

The Story

Alright… here we go again. So I wanted to install the latest ZoneMinder, choosing Debian as my OS for stability reasons… I don't like having to fix stuff; I do it enough for a living and it can be rather stressful hahah. Read more on each by visiting their respective websites.

However, like usual, and unlike Windows, installing isn't usually as easy as just double clicking an executable file… mhmm geez, who woulda thought. And sure enough I was getting a bit awestruck by the pain of SecureApt. This is one of those cases where security hinders productivity, but alas, it's there for a reason… even though the initial guide I was following did cover this part a lil bit, I wasn't happy using a third party repo and creating a hodge podge Debian setup, and others in the #Debian IRC channel agreed.

Luckily the very helpful people there educated me on the backport repo 😀
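For anyone wanting the short version, enabling backports on Debian 9 (stretch) and pulling ZoneMinder from it looks roughly like this, run as root (a sketch; the exact package version backports serves may differ):

# Add the stretch-backports repo (Debian 9 = stretch)
echo "deb http://deb.debian.org/debian stretch-backports main" > /etc/apt/sources.list.d/backports.list
apt-get update
# Install ZoneMinder from backports rather than a third party repo
apt-get -t stretch-backports install zoneminder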

With this info, I was able to set up ZoneMinder like a boss… but that's not good enough… I wanted others to have clean setups without me having to host a dedicated image (too much size). So what better than an open source script?

Yessss…. I spent my weekend polishing my BASH scripting so others can enjoy a clean ZoneMinder setup on Debian 9 too, using my simple script and following this guide!

Installing Debian 9

Grab the Debian 9 netinstall from here (note this is the CD netinst direct ISO D/L), so ensure you are installing this on a compatible system with a network connection (internet). In my case I'll be using VMs on ESXi.

Standard Install (no graphics), English, Canada, American English,

Hostname, domain (if you have one), set root password, create first user,

set user password, pick clock region, guided – use entire disk (unless you want to do more advanced disk partitions and configurations, not covered by this guide), All Files in one partition, yes write the partitions to disk, and install the base system.

All your bass belong to us; scan other CDs (no), pick package manager mirror, Canada, first mirror's fine for me, no HTTP proxy, survey no thx, unselect desktop environment and print server, and then select web server and SSH server.

Install Baby!!

*Double Tap Chest* Reboot!

Mhmmm a nice clean install of Debian 9…

My Script

Let me test this first…

What's this? It's checking to ensure permissions… wow 😀 OK, let's su to root…

OK, so far so good; this part takes a lil while on a fresh install… Coffee Break!

Ahh, DB security and setup time; enter a SQL root password (as in, enter one to be created, this can be anything), and follow the prompts…

change root (n) we just set it…

Remove Anon users: yes

disallow root remote login: yes

remove test DB: yes

reload priv tables: yes

Now enter the password you just created, three times as stated by each step.

Wooooo, checking the service statuses and loading the page! Bam!

There you have it, the easiest install of ZoneMinder 1.30.4 on Debian 9!

Alright, let me just create the repo for this puppy

K, that's done…. Now let me try one last run, but by grabbing the actual script from the internet… directly from a brand new Debian 9 install (again).

I’m going to publish this for now… and I’ll try something like this in a new VM.

Grab the source of the script, and save it to the server via SSH

I tried to grab the script with wget, or curl, and push it into the shell, but it would always fail on me… :@ :(

But if you save it locally and adjust the permissions, it works fine….
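So the working route is download, then run (a sketch; the raw GitHub URL, repo, and script name below are placeholders for wherever the script lives in my repo):

# Download the script, make it executable, then run it as root
wget https://raw.githubusercontent.com/Zewwy/<repo>/master/zminstall.sh
chmod +x zminstall.sh
./zminstall.sh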

Sigh, I really wanted to figure out a way to call it right from the source via my GitHub repo. But since I can't, this works too for now…

I hope this helps others. 😀

Let's Encrypt HTTP Validation
And the Palo Alto Firewall

The Story

This…… this one…. this one drove me NUTS! For almost a week…. it was a lil mix of a perfect storm I guess… but let's start from the beginning, shall we…

So a couple weeks ago I wanted to get ActiveSync set up for my Exchange server (checking OWA sucks)… so I sought out OPNsense as my open source firewall of choice.

I started following this German blog post, and I hope to have that blog post up very soon as well (sorry I don’t usually get hung up like this).

My setup was pretty much exactly the same; however, I was getting hung up on the plugin not validating my certs over HTTP. See the full pain details here on GitHub. Anyway, I did finally manage to get my OPNsense server behind the NAT rule to finally succeed behind my Palo Alto firewall (by basically opening up the rule way more than I ever wanted to), so I knew! I knew it was the Palo Alto still blocking somehow… but how, I couldn't make sense of, so I wasn't sure how to create my security rule.

First try

My first try was exactly like the GitHub issue describes; it was failing on domain key creation. This failed even on my OPNsense with a public IP and all rules exactly as the OPNsense basic guide states to set it up.

When Neilpang (the main script writer/contributor) said it was fixed, and no commit was applied, I tried again and it worked. I can only assume this was due to the fact DNS may not have replicated to the external DNS servers the Let's Encrypt servers are configured to use when I first made my attempts at cert validation.

That didn't explain why every attempt behind my Palo Alto with a NAT and security rule would fail…

The Palo Alto

I love these things, but they can also be very finicky. To verify my rule I had used my IIS Core VM (that I've used in previous posts on how to manage Windows Server Core) along with the HAProxy plugin on OPNsense, to basically take the requests from the NAT rule of the Palo Alto but really serve up the website of my IIS server. Not to my amazement, but sure enough, I was able to access the IIS website from the internet, so my security rules and NAT rules on the Palo Alto are working fine, as well as the security rules on the OPNsense server…. so what gives? Why are these HTTP validation requests failing??

Again, as stated above, I knew it was the Palo Alto from opening up the rule completely and it working, but I figured it was the issue even before I did that… but opening up the security rule completely is not the answer here… like, it works, but it's far too insecure…

So I managed to talk to a friend of mine who happens to be realllllly good at deploying Palo Altos, as he does it for a living. I basically described my issue to him, and asked him if there's anything he can think of that might be a problem. (I'll hopefully be having a couple more Palo Alto blog posts as soon as I can get my properly licensed VM.) To my actual amazement, he goes on about this one setting you can use inside security rules, and a story about when it caused him grief…. go figure, he's experienced it all!

What was it?!?!?!

Alright, so here's the rule I initially had, which was causing failures of the Let's Encrypt OPNsense plugin…

As you can see, nothing really special, until he told me about… PAN DSRI, or Palo Alto's Disable Server Response Inspection; you can check the link for more details. Now the funny part is that post covers better performance…. in my case, it was simply needed to work! And all it was, was a checkbox….

Once that checkbox was selected, the rule adds an icon to it.

I was able to click Issue Certificates on the OPNsense Let's Encrypt plugin, and I got some certs! I'm ready to now add the Let's Encrypt HAProxy plugin integration and set these certificates for backend services… like my ActiveSync… or OWA… Ohhh, exciting stuff!

Man that feels good to finally have that sorted! Wooooo!

WMI and the WBEMTEST


I’ll try and keep this post short, as I have many things to catch up on, and this just happened to be one of those things I haven’t done in a while and had to do today for some newer servers that have been configured.

Now, since I hadn't blogged about this myself, I went out to the internets to give me a good reminder on how to accomplish this. My first hit was Sysops… and I usually really like this site…. well, till I read this…

“Access denied should be self-explanatory. The credentials you use must have administrator rights.”

Ughhhhh, I'm sorry, what did you just say? No, I don't think so. WMI may be, by default, restricted, but it doesn't require such drastic permissions to utilize.

My second find was a lot nicer, in particular telling you how to manage those permissions, without, ahem, needing administrator access lol.

So let's follow along, shall we! So much for short…

First order of busy-nas is creating a user:

Of course, WMI being Windows Management Instrumentation means I'm obviously making a Windows domain user. Nothing special, especially no admin… 😛

Again, nothing special here. Alright, now I need two servers; well, I guess in this case the server being monitored is sort of like a client… ugh, anyway…

I guess for now I'll just log in to my Exchange server and WMI query another server to test out first off… mhmm, all I have besides that are core servers, oh boy, OK… I think I'm going to need to spin up a new testing server, one second…

OK all basic settings…

Remove floppy, boot into UEFI:

Boot system… attach disc from local host…

Let's find us some Windows Server 2016…. but CD-ROM stuck "connecting"…
Close vSphere, reopen console, try again…

Always loved this trick over uploading an ISO to a datastore….

Ahh modern Windows still giving off that great nostalgic feel.. 😀

Yada yada, setup, VMware tools, and join domain, you get the gist of it.

Ping and the Firewall

First order of business: Ping and the Firewall!

Ahh yes, connectivity verified (I knew it was good cause I joined the system to the domain, but I like ping… just nothing like a good ICMP). Good thing that m is not a u….

Anyway, time to run WBEMTEST; bet the first attempt fails cause of the firewall again…. hourglass… and (not responding), yeah…. sounds like a stupid firewall…

What?! No way, RPC error… lol, I totally saw this coming, cause again, a default server installation doesn't allow these connections through the firewall.

This is a bit old, but let's see if it still works…

Amazing, it worked… but yes, this was just to verify connectivity through the firewall… so…
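For the record, on newer systems the same firewall opening can be done with the advfirewall syntax instead of the old command (an assumption on my part that the built-in WMI rule group is what you want enabled):

netsh advfirewall firewall set rule group="Windows Management Instrumentation (WMI)" new enable=yes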

WBEMTEST Testing WMI with Least Privileges

OK, now that we've verified connectivity to the WMI stack with wbemtest using our admin account, let's do it again as a normal domain user. Just to validate these credentials were OK as a standard user, I logged into a normal workstation with it; if you want to protect this even further, you'd use GPOs to disallow this account local logon. Anyway…

What?! Access denied… lol, again expected… now instead of granting this account admin access, which is overkill, let's grant it the basic Enable Account and Remote Enable permissions on the WMI namespace… so back on the server we want to be monitored via WMI…

Hope that was easy enough to follow without even saying anything… anyway, let's try that connection again…
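Side note: if you'd rather not click through wbemtest every time, the same test from PowerShell would look something like this (the server name and account are placeholders):

# Query the remote box's WMI as the low-privilege account; prompts for its password
Get-WmiObject -Class Win32_OperatingSystem -ComputerName TestServer -Credential (Get-Credential DOMAIN\wmi-reader)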

Try 2, Scalable

Mhmmm, access still denied… let's see here…

This is how I normally do it for a monitoring account anyway, cause it usually needs more permissions when monitoring a server, so let's try it that way… revert the direct permissions… and grant the performance group access…

Now let's add the WMI reader account to the DCOM group and the performance monitor group, and reboot the server…
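From an elevated prompt on the monitored server, those group adds are just (the account name is a placeholder):

net localgroup "Distributed COM Users" DOMAIN\wmi-reader /add
net localgroup "Performance Monitor Users" DOMAIN\wmi-reader /add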

Server rebooting, back up, and let's test that connection again in wbemtest!

and….

Bazzaaaaaa! An account that's not an admin anywhere, with the permissions needed to monitor your server with WMI! Use these accounts on software such as PRTG, Splunk, Zenoss, etc etc.

Hope everyone enjoyed this tutorial on WMI configuration and testing. 😀

WinXP a Timeless Classic

Something about it that I loved; I rocked the Vista themed copy for so long. You know, back when the Vista fiasco was the Windows 8 fiasco of its day… that was the, uhhh, yeah, anyway…

Just look at that dark epic theme, a couple pieces of junk, but nothing modern live tiles brings… a Japanese checkers board to appease someone with the shortest attention span one could ever imagine. But not this classic beauty, just look at that recycle bin, made from fine glass.

Holy ball sacks, I was able to download Chrome via IE7 and it worked… in 2019!!

That's just amazing; Chrome supported XP till 2015. If there's a die-hard OS, XP was it. Holy crap… there's people still commenting about this… like, now…

Well, if you're lucky like me and my old netbook, the manufacturer and its third party hardware peeps made drivers up till Windows 7, so I managed to install Windows 7 with an SSD and my old laptop is great, regardless of how many people complain in those comments, haha. 😀 Which is crazy considering…

Yes, Windows 7 extended support is coming up… another solid beast I hope gets the extended life support it deserves. :D

This was pretty much just a blah post but whatever… xP

I just needed a VM to test my OPNsense VM lol; figured I had the old ISO, why not…

VMware ESXi 5.5
D-Link DGE-530T RevC

The Story

Are you guys ready for a story? This one is actually not so bad. A couple days ago I posted on Facebook asking if anyone happened to have a spare PCI/PCIe Network Interface Card (NIC); since it was going to be used for internet access I was OK with it being 100, but was aiming for 1000 (now that Shaw provides over 300mbps internet, clearly 100 doesn't cut it).

After a day of no luck, and a bunch of funny remarks (as almost none of my friends had any idea of what I was talking about), I decided to take another look through my old computer hardware to see what I could scrounge up…

PCI NIC Found!

Well, well, not even dusty, a PCI NIC, exactly what I needed in my hypervisor to play with OPNsense. I originally was going to try layer 2 trunking via VLANs; however, the main vSwitch already had VMkernel NICs bound to the physical adapter at layer 3, and the same interface on my firewall (Palo Alto) wouldn't allow me to create a layer 2 sub-interface if the main interface was already bound to layer 3. Since I wanted my OPNsense VM to get an actual public IP address, this required a connection from my VM directly to my modem at layer 2… yeah, another NIC. So here we are, and it didn't take long for me to shut down my VMs, install the card, and boot my hypervisor back up. (I hope to one day have multiple hypervisors so I don't have to shut down my VMs, but even then, if you don't pay, chances are you won't get access to the APIs that migrate the memory states of the VMs for you, so it's a hassle either way…. anyway, back to the story.)

PCI NIC Found … NOT

Oh Borat, who brought you in?!?! So as you may have guessed, I went to add a new vSwitch for my new VM to get its direct public IP, and to my dismay there was no physical NIC to pick… what the….

So, to Google! And hopefully either VMware support, or, usually always better, personal blogs! We all love these, right… ahem… anyway…

You can probably guess where the official answer went, but I'll enlighten you, as I did follow along for… pain? OK, I don't know why I did; I was really hopeful it wasn't going to be the answer I knew it was going to be….

Hey! Some of the commands they provided helped, or did they? All this was, was some BS data chasing to tell you, IT's Not supported, SOWWY!

Clearly, there must be some answers in the community forums, right??

Community's great! VMware's…. :S

So what do we get… Source one… unanswered, and crying about a badly referenced link; source two… also unanswered, crying about the same stuff we already know…. it's officially not supported. Well, I'm running ESXi 5.5 Free and using GhettoVCB's scripts, also unsupported, so not really an issue… the issue is the lack of help right now.

But bring me down? I don't think so; the internet has many sites, and many people sharing their knowledge. How?!?! BLOGS! Ahem…

Blogs to the Rescue!

Yes, believe it or not, it is through the power of the real untethered, unfiltered beauty that is blogging that we actually get some meat and potatoes. My first source showed signs of light! One problem: it's literally 9 years old and using ESXi 4. OK, well, it also wanted a fair amount of direct file placing and special manipulation. Most of this works fairly differently in ESXi 5.x, and VIBs or precompiled binaries that work with esxcli are the more preferred method. I avoid saying supported here, cause I use these methods to install unsupported packages :D.

Alright, so now what? Well, the Holy Grail! This king managed to not only blog about getting this working, but shared the drivers/VIB packages required to get it to work too! Epic! Let's get this dang NIC working…

1) Grab the VIB files

2) Change your support level on ESXi5+:

~ # esxcli software acceptance set --level=CommunitySupported
Host acceptance level changed to 'CommunitySupported'.

3) Install the driver with: "esxcli software vib install -v /DLink-528T-1.x86_64.vib"

4) Reboot

Sounds simple enough, let's give it a shot… and I hit some errors, classic…

I won't show the errors just yet, as I have it in one long snippet, but basically I had a bit of a problem cause of the GhettoVCB scripts I had pushed on to my host, and the error results weren't exactly clear… I attempted a couple things first, like copying the VIB to the path it kept complaining about and specifying the fully qualified path to the VIB… nothing, till I stumbled across this…

esxcli software vib install -v /full/path/to/.vib -f

which finally gave me a driver install successful!

Alright, and after reboot…..


OMG! No way, there it is with the proper name and everything. Considering the blog post I followed was for a different NIC model, I wasn't sure if it would work, but there it is… so let's not get too far ahead of ourselves, and see if it comes up and is able to transmit packets…

I was having some issues initially, so I decided to give my lil netbook a simple /24 IP and give my OPNsense a simple /24 IP, just to validate the card wasn't the issue, or the drivers I just installed.

Plug them together, lights come up, that's good… checking ESXi vSphere…

That’s good, and finally can we transmit?!?!

Hey!!!! We have communication! Now it'll be figuring out getting the public IP configured properly. But we'll save that for another post. 😀 Cheers!

Another BitLocker Problem

The Story

I’ll keep this one short as I have a lot of things to do and this was an interesting find.

So I had to deploy some new laptops; did my usual trick with multiple systems: grab the latest version of Windows, run the Spiceworks decrapifier, install all updates, install Office, install all updates, install a couple third party software packages, clean.

Then clean up the default profile. There have been issues with the "CopyProfile" option that MS supports with an XML file during sysprep; not only have there been known issues, but this is total rubbish when it used to be a button. I reallllllllly hate this move by MS; there are times you want to configure the default profile and not sysprep (family computer, anyone?).

Well, OK, enough of that MS rant (there are many). If you need help configuring the default profile, check out this guy's blog "scribbleghost", who sources the same one I originally followed by "Jose Espitia", which I think has a cleaner look and feel, IMHO.

This was so far the cleanest, smoothest deployment I've done, and I haven't hit a single snag. Thanks to the above blog posts, I also haven't had to deal with ForensiT's "DefProf" leaving lingering services, or other anomalies from their profile migration tool.

Instead I suggest admins look into Ehler's "User State Migration Tool GUI"; he basically took MS's new user migration "tool" *cough* cmd line based app *cough*, which normally would have someone digging through endless cmd parameters and syntax requirements (I only like doing this if I have to script; outside of that, give me a damn GUI, MS). Well, no worries, this guy did it. (It's worth the cost, buy it.)

OK, now that allllll that is out of the way, what the heck was the issue, man?!?!

So I go to BitLocker one of the deployed systems and BAM! Error in my face, in particular Error code: 0x8004259A.

So, go to Google, and my first attempts were not successful, as it seems no BitLocker reference to this error code has been published. After some more searching I hit this MS support page with a more understandable English definition of the code:

0x8004259A

VDS_E_SHRINK_DIRTY_VOLUME

The volume selected for shrink might be corrupted. Use a file system repair utility to fix the corruption problem and then try to shrink the volume again.

Alright well this is something…

The Solution

On my particular laptop, the first one I tested on (and I was only on my first other test deployment after mine), I had forgotten to enable BitLocker, as other systems leave the office more than mine ever does. I was able to reproduce the error.

Yet on my laptop, CHKDSK always returned clean. What gives? Yet shrinking the volume and re-extending it resolved the issue for me…

Until I went to do the same on the first deployed laptop, only to find it was telling me I was unable to shrink due to corruption (sure, this one picks up on something; remember, I shrink the data partitions before making my base image to make DDing it onto other systems much faster).

So this time, a CHKDSK /f and a reboot made chkdsk clean the disk, and without shrinking or expanding I was able to run BitLocker!
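For the record, the whole fix was just this (assuming C: is the volume BitLocker is choking on; chkdsk schedules the repair for the next boot when the volume is in use):

chkdsk C: /f
shutdown /r /t 0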

Another win for today!

Working on PowerShell scripts (ISE) w/ GitHub

GitHub

So as you all probably know, GitHub has been acquired by Microsoft. I had initially groaned at this acquisition, as a lot of the things Microsoft has done lately have really bothered me (locking down APIs to O365 and not providing them to on-prem, for example), but then they have also made some good moves… .NET Core 2.0 and all the open source incentives are a nice change of pace.

And to top that with some sugar, how about some private repositories for free members! Yeah, that's right; now that this is an option, I'm going to use GitHub more. Now, I've played with it before, however this time I wanted to write this up for my own memories. Hopefully it helps someone out there too.

Let’s have some fun saving our PowerShell scripts on GitHub!

PowerShell ISE and GIT

Dependencies

So for this demo you’ll need:

1) A GitHub Account (Free)
2) PowerShell ISE (Free with Windows)
3) Git for Windows

First, install and configure Git for Windows. Mike previously covered this topic in another blog article. In this scenario, I ran the Git installer elevated so I could install it in the program files folder and I took the option to add the path for Git to the system environment variable path:


Make sure that you’ve configured Git as the user who is running PowerShell (I ran these commands from within my elevated PowerShell session):
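If you haven't done it before, that's just the usual two Git one-liners (the name and email are placeholders for your own):

git config --global user.name "Your Name"
git config --global user.email "you@example.com"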

4) Install the Posh-Git PowerShell module from the PowerShell Gallery:
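From that same PowerShell session, it's just:

Install-Module -Name posh-git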

The Fun Stuff

So I originally followed this guy's blog post on how to accomplish this.

Now, I had already installed Git for Windows, so I was set there.

PowerShell Profiles

I liked the part where he had altered his console display depending on where he was located, to avoid confusion; however, I wasn't exactly sure what he meant by profiles. A lil searching and an education session later, I was able to verify my profile path:

$profile

Then simply edit that Microsoft.PowerShell_profile.ps1 with Mike's script:

Set-Location -Path $env:SystemDrive\
Clear-Host
$Error.Clear()
Import-Module -Name posh-git -ErrorAction SilentlyContinue
if (-not($Error[0])) {
    $DefaultTitle = $Host.UI.RawUI.WindowTitle
    $GitPromptSettings.BeforeText = '('
    $GitPromptSettings.BeforeForegroundColor = [ConsoleColor]::Cyan
    $GitPromptSettings.AfterText = ')'
    $GitPromptSettings.AfterForegroundColor = [ConsoleColor]::Cyan
    function prompt {
        if (-not(Get-GitDirectory)) {
            $Host.UI.RawUI.WindowTitle = $DefaultTitle
            "PS $($executionContext.SessionState.Path.CurrentLocation)$('>' * ($nestedPromptLevel + 1)) "
        }
        else {
            $realLASTEXITCODE = $LASTEXITCODE
            Write-Host 'PS ' -ForegroundColor Green -NoNewline
            Write-Host "$($executionContext.SessionState.Path.CurrentLocation) " -ForegroundColor Yellow -NoNewline
            Write-VcsStatus
            $LASTEXITCODE = $realLASTEXITCODE
            return "`n$('$' * ($nestedPromptLevel + 1)) "
        }
    }
}
else {
    Write-Warning -Message 'Unable to load the Posh-Git PowerShell Module'
}

Now that we'll have the same special console to avoid confusion, let's link a directory!

Linking GitHub Repo to Your local Directory

Then I cloned my new private Repo:

git clone https://github.com/Zewwy/Remove-SPFeature Remove-SPFeature -q

That felt awesome…

Nice, nice…

Opening scripts from the ISE

Alright. Well, now that we have a repo, and are in it, how do I open a file in the very ISE we are running to edit it? Now, Mike didn't exactly cover this, cause I suppose to him this was already common knowledge… well, not to me haha, so here it is; it's actually pretty simple once you know how.

psEdit .\Remove-SPFeature.ps1

Woah! Epic. It can be bothersome dealing with lengthy scripts, so ensure you utilize regions (w/ endregions) to allow for quick named areas to access, as you can use this command in ISE to collapse all regions once a script is loaded:

$psISE.CurrentFile.Editor.ToggleOutliningExpansion()

Let's start making some changes *changes made*

Committing and Pushing

Get your mind out of the gutter!

Now, I had originally done a git push, and instantly got an everything is up-to-date alert. So, awesome, I did not have to fight through his whole spiel about auth (I got a prompt the very first time I attempted to clone my repo, requesting me to log in to my GitHub account). So my tokens were good right from the start after that happened. However, I did make some updates to one file and was now instead presented with this after a commit:

Again, with a bit of searching I was able to find the answer; it seems to usually be some form of ignorance, and that is why I'm doing this… to learn 😛

Now again, I got a bit confused at how this worked, and when I did some searching I discovered:

Don't do a "git commit -a" from the ISE; it'll crash asking you for a line to provide for the description.

Do proper staged commits as described here. 🙂
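In practice, a staged commit from the ISE console looks something like this (the file is the one from this repo; the commit message is just an example):

git add .\Remove-SPFeature.ps1
git commit -m "Cleaned up the feature removal loop"
git push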

I hope this maybe gets some more people PowerShelling!

Next I should learn to use Visual Studio for more app building… but I'm more of a sysadmin than a dev…

I recently took a course in resiliency, and they basically said be a tree… ok.

Branching

What is branching? Well, pretty much: try stuff without changing the source code. Backups, anyone? It's a nice way to try stuff without breaking the original code, and once tested, it can be merged.

Unlike a tree, it's not often a branch just becomes the trunk, but whatever…

Following this guide:

To create a branch locally

You can create a branch locally as long as you have a cloned version of the repo.

From your terminal window, list the branches on your repository.

$ git branch 
* master

This output indicates there is a single branch, the master, and the asterisk indicates it is currently active.

Create a new feature branch in the repository

$ git branch <feature_branch>

Switch to the feature branch to work on it.

$ git checkout <feature_branch>

You can list the branches again with the git branch command.

Commit the change to the feature branch:

$ git add . 
$ git commit -m "adding a change from the feature branch"

Results:

Hopefully tomorrow I can cover merging. 🙂

Cheers for now!

SharePoint Orphaned
Content Types (ReportServer)

New Series! SharePoint Orphaned!

The only thing that should be orphaned is SharePoint itself…. ohhh, ouch.

The Story

Joking aside, our developer again came by reporting some issues with the newly developed SharePoint site I had migrated for him, to test creating some new SharePoint web part apps. He already had his own documentation available from when he first did this; good man. Even after we got past the "how to create a new template from a site with publishing features enabled", we were still receiving an error.

Slow SharePoint fixed… But…

During this whole process, this new site was intermittently responding slowly. It was baffling, and as we dug through the ULS logs we found the issue: apparently the service account configured to run the Web Application Pool did not get access to the ProfileDB for some reason. After granting the login SPDataAccess on the ProfileDB, it fixed the slow intermittent SharePoint loads… but sadly we were still receiving errors while attempting to deploy new sites from templates.

The signs were clear

Looking further in the ULS logs, the error itself complained that content types could not contain special characters…. A bit more searching pointed us towards the site's content types page….

Whooops how did I miss this… (ReportServer Feature…)

Guess those are the "special characters"….. ugh. Even though the "Test-SPContentDatabase cmdlet" returned clean throughout my migration (and all my scripts I have yet to publish), I guess this one isn't picked up by the checker? Dunno; anyway… what to do about this…

The search

Source one… too complicated, but interesting… he sure worked hard. I'd go this route, but I'm sure there are easier solutions… got to be, and… yup.

Source two, simple… let's try it…

The Solution

Install the feature, disable it on all web apps deployed on the farm, uninstall the feature. Nice and simple, and how I usually like it: letting the system do most of the heavy lifting to avoid human error.

So, Step 1: Grab the Reporting Services installers (in my case, SharePoint 2016).

Step 2: Install it;

Next, Accept the EULA, Install

Success.

This makes the content type names behave correctly.

Step 3: Enable it;

Install-SPFeature -Path "C:\Program Files\Common Files\microsoft shared\Web Server Extensions\16\TEMPLATE\FEATURES\ReportServer"

Now, the original post said to simply uninstall it after, but as you can see it will error; why? Cause, as it clearly states, it's still enabled, so…

Step 4: Disable the feature on all web applications

Disable-SPFeature -Identity ReportServer -Url http://spsite.domain.com
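Since it has to come off every web application in the farm, a quick loop saves re-typing the URL (a sketch using the standard SharePoint cmdlets):

# Disable the ReportServer feature on every web application in the farm
Get-SPWebApplication | ForEach-Object {
    Disable-SPFeature -Identity ReportServer -Url $_.Url -Confirm:$false
}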

Step 5: Uninstall the Feature:

Uninstall-SPFeature -Identity e8389ec7-70fd-4179-a1c4-6fcb4342d7a0

Step 6: Uninstall the package:

msiexec /uninstall rsSharePoint.msi

I recommend doing this after hours, and on a test environment first. It did seem to do an IISRESET, as all sites had to reload, and it took a lil bit for the .NET assemblies to recompile. 😀

Now go enjoy a coffee. Thx, Jussi Palo!

The Second Solution

OK, not gonna lie, I assumed it was all good, and that assumption came to bite me in the ass…. Again: never assume.

So I told my dev that I had completed the steps and he should have no issues creating a new site from his template, but as I'm walking down the hall a short time later, he gives me the snapped fingers (like, it worked) and says, "same error".

Ughhhhh… what…

So looking back at the site's content types, the Report Model Document content type still remained… OK, what the….

So, running through the procedure again, it complained stating the feature was not available for my web apps, so I re-enabled it, saw all three content types, disabled it… and Report Model is still there…. :@ C'mon! Let's just delete the content type!

Can never give me a break eh SharePoint…

Luckily my dev is super awesome and told me about another blog he had read (sorry, I don't have the source), and told me that the only reason the front end actually refuses to let you delete the content type isn't so much that it's tied to an actual feature (even though we all know that this one did come from the ReportServer feature), but rather that it simply has a flag set on it in the table…

Now, I normally never recommend making changes to any SharePoint database directly, and usually always recommend making all required changes via either Central Admin/psconfig, site settings, or PowerShell. However, in this case we clearly installed the proper dependencies and de-activated the feature that populates those content types, yet they were not being removed from the content databases…

Only do this if you have tried everything else, only do this in a test environment, actually never do this…. well I guess if you have tried everything else this is your only option…

This requires you to have sysadmin rights on the SQL Server instance hosting the SharePoint content databases. Open SSMS…

SELECT *
FROM WSS_CONTENTDB.[dbo].ContentTypes
WHERE Definition LIKE '%Report%'

Find the row which contains the ID for the Report Builder content type (or whichever other system based content type you have orphaned that needs removing); usually easily spotted, as it'll be the only one with 1 under IsFromFeature:

USE WSS_CONTENTDB
Go
UPDATE dbo.ContentTypes
SET IsFromFeature = 0
WHERE ContentTypeID = *ID From above Query*

Now you can go into the actual orphaned content type under Site Settings, watch the delete content type action not fail or error, and destroy that content type from your SharePoint life!

*Note* My dev came back saying same error again, lol, but this time we discovered we simply had to re-create the template; deploying the new template then worked (which it originally didn't before the above changes).

Happy SharePointing!

SharePoint Rest API call returns 500.50 URL rewrite error

The Story

Hey all another SharePoint Story here!

So my dev was working on another SharePoint site app. We did everything like before, and now he was getting a URL rewrite error. I wasn’t sure why this was happening, and since he generally had more experience troubleshooting these types of issues I sort of let him handle it for a while.

Well, after a while he still couldn't figure it out, and a funny thing happened: we learned some interesting things, and got bit by erroneous error messages in the end. So the first thing he tried was to give his rewrite rules some new variable names. Which didn't help, and the same error was returned.

After a little while, I realized I had forgotten to set the Service Principal Names (SPNs) for the new web applications we created for the new SharePoint sites. I was certain this was it, but we kept getting a URL rewrite error! (This turns out was actually the initial reason for the error; yeah, it really was, cause it turns out…)

I showed my dev this post by Scott on the same error. Now, the reason we were getting the same URL rewrite error was cause when he changed the variable names in his rewrite rule, he didn't change their associated server variables, as mentioned in Scott's blog.

The Answer

The only reason we got the error both times was simply a coincidence. So it turns out:

1) If you forget to set the SPN when your Web App is set for Kerberos, and your hosting app server is on another server, you will get a rewrite error even if you have everything else in place (see the SPN sketch below).

2) If you change variables in your rewrite rule and forget to set the associated server variables with it (see the web.config sketch below).

Both will result in a 500.50 URL rewrite error… who would have figured…
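For completeness, the two fixes look roughly like this. The SPN registration (the service account and host name are placeholders; spsite.domain.com as in the earlier post):

setspn -S HTTP/spsite.domain.com DOMAIN\svc-sp-apppool

And every server variable a rewrite rule touches has to be whitelisted for URL Rewrite in the site's web.config, something like this (the variable name is just an example; it must match whatever your rewrite rule uses):

<system.webServer>
  <rewrite>
    <allowedServerVariables>
      <!-- must match the variable name referenced in the rewrite rule -->
      <add name="HTTP_X_ORIGINAL_ACCEPT_ENCODING" />
    </allowedServerVariables>
  </rewrite>
</system.webServer>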