When easy becomes hard; Go MacGyver

Today I had to install a switch in a rack. Sounds easy, right? 🙂

The first issue was that there was only 1U of space available, and the slot did not have cage nuts installed. Even with the sides of the rack off, the ears were still in the way, and if you've ever installed cage nuts you know they go in from behind. Then I came across a nice trick on YouTube using a tool called a "cage nut puller".

If you watch the video at the 9-second mark, you can see him installing the nut, and his hands take up roughly 3U of rack space; even then, he mentions it's difficult and not recommended. In any case, I didn't have this tool.

I figured I would just unmount the old switch (it was being replaced anyway) and reuse its existing cage nuts. When I went to remove it, there was a PDU in the way and I didn't have a stubby Robertson handy, so I ended up using the bit with a wrench, but the screw head got stripped. After a while I ended up having to put the wrench right on the screw head to get it off. How did the head get stripped, you may ask? Well, I'm guessing when the previous tech installed it, the screws were cross-threaded into the nuts (not the proper thread for the screw). On the last screw, right before it came loose, the cage nut popped out of the rack ear… so I had to get a colleague to hold the old cage nut in place with pliers while I finished unscrewing it. Pretty crazy how long this took considering how quick it should have been.

So even the plan to use the existing cage nuts was a bust, since they had been cross-threaded. I still didn't have the tool I needed, so I went looking around the filing room and found a lanyard with a metal card clip; the metal "arms" that hold the card looked exactly like the end of the tool from the YouTube video. So I used the concrete in the stairwell to grind down the pivot arm holding the two parts together, then used pliers to flatten the ends into the base of a cage nut puller.

Here's a clip of me using this MacGyver'd tool to replace the cage nuts; well, the clip shows it installing the new cage nut after the old one had been removed. The snapping sound is soooo satisfying.

Edge, why so many instances?

Another short n' sweet one. Today I noticed there were over 10 instances of Edge running when I opened the browser:

So, I did a quick Google search and found someone with the same question. Luckily, outside the usual rubbish answers from officials, there was a really helpful comment by a Volunteer Moderator by the name of "¡Firedog". I'll give them double props for using an upside-down exclamation point in their name.

“That isn’t anything to be alarmed about. Which pages open when you launch the program are set under When Microsoft Edge starts at ⋯ (Settings and more) > Settings > Start, home and new tabs. Each tab will have at least one process associated with it, and the browser itself will have several more. You can see what all these processes are by pressing Shift-Escape in the browser (you can also select Browser task manager from the page’s window control – Alt-Space, or right-click on the title bar). ”

Sure enough, when I gave my Edge browser focus (clicked on it) and pressed "Shift-Escape":

I thought that was pretty neat, didn’t know about that one.

The Alt+Space shortcut brings up a neat little menu too; a good one in there is Customize toolbar.

3TB Drive Shows up as 750GB

There’s a lot of stuff on this, so I’ll keep it short.

On Windows, check the Intel RST drivers (assuming the storage controller the hard drive is connected to is Intel based).

In my case the drive was behind a USB enclosure. It showed up properly as 3TB, but Windows didn't recognize the file systems.

I figured I could at least see the files from Linux, and that's when the real problem presented itself.

Lucky for me I had another machine that was 64-bit and had SATA ports, so I plugged the drive into that and checked there (the storage controller was an old NVIDIA nForce4, if anyone remembers those, lol)

and it worked: it saw the drive. When I went to mount the partition though, it stated "unknown filesystem type 'linux_raid_member'".

So I did the same thing as before and mounted it using mdadm; I also had to run "mdadm --stop /dev/md0" first, or else it would always say /dev/sdb3 was busy. Strange.

This was because the drive was a RAID 1 member, so all the files were accessible.
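For anyone hitting the same thing, the sequence was along these lines (device names and mount point are from my setup; adjust to whatever lsblk shows on yours):

# stop the auto-assembled array that was keeping /dev/sdb3 busy
mdadm --stop /dev/md0

# assemble and run the lone RAID 1 member as a degraded array
mdadm --assemble --run /dev/md0 /dev/sdb3

# mount it read-only, since this is just file recovery
mount -o ro /dev/md0 /mnt/recovery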

Never seen this one before, and yes, I'm aware of the 2TB limit on 32-bit systems, so I knew that was not the issue. Good to know though, in case of future file-recovery attempts. 🙂

TPM security on an ESXi VM

A great part about vSphere 7 is that it introduced the ability to add TPM-based hardware (a vTPM) to a VM.

Let’s see if we can pull it off in our lab.

What I need is a Key Provider. Lucky for us, with 7.0.3 VMware provides a "Native Key Provider".

During my deployment of the NKP, one requirement is to take a backup of the key, which was failing for me. I found this VMware thread with someone having the same issue.

Sure enough, the comment by "acartwright" was pretty helpful, as I too opened the browser console and noticed the CORS errors. The only difference was that I wasn't using CNAMEs, per se, but I had done a pilot of vCenter renaming, and the fact that the names showing up didn't match the ones listed in the console reminded me of that. When I went to check the hostname and the local hosts file, sure enough, they had the incorrect name in there.

So, after following the steps in my old blog post to fix the hostname and the hosts file, I tried to back up the NKP again and it worked this time. 😀

Sure enough, right after this I went to add the TPM and couldn't find it. Oh right, it's a newer feature; I'll have to update the VM's compatibility (virtual hardware version).

Made a snapshot, updated to the latest hardware version, VM boots fine, let's add the TPM hardware... error: can't add a TPM while snapshots exist. Ugh, fine, delete the snapshot (tested that the VM boots fine before doing this), add the TPM: success.

Before changing the VM boot option to EFI, boot the VM into Windows RE and use the mbr2gpt command to convert the boot disk's partitions to the proper type supported by EFI.
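From the WinRE command prompt that's roughly the following (assuming the OS disk is disk 0; validate first and only convert if it passes):

mbr2gpt /validate /disk:0
mbr2gpt /convert /disk:0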

Once completed, change the VM boot options to EFI and check off Secure Boot.

Congrats, you just configured an ESXi VM with a vTPM module. 🙂

 

Updating PowerCLI 12

If you did an offline install, you may need to grab the package files from an online machine. Otherwise, you may come across a warning about an existing instance of PowerCLI when you go to run the main install cmdlet.

When I first went to run this, it told me the new version would be installed "side-by-side" with my old version. Oh yeah, I forgot I did that…

Alright, so I use the force switch, and it fails again… Oi…

Lucky for me the world is full of bloggers these days, and someone else had come across this problem for the exact same reason.

VMware.PowerCLI install update error – Install-Package: Authenticode issuer | vGeek – Tales from real IT system Administration environment (vcloud-lab.com)

If you want all the nitty-gritty details check out their post; the main part I needed was this one line: "This issue can be resolved deleting modules from the PowerShell modules folder inside Program Files. Once the modules folder for VMware are deleted try installing modules again, you can also mention the modules installation scope."

AKA, delete the old one, or point the install to another location. He states he needed the old version but doesn't specify for what. Anyway, I'll just delete the old files.
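Which, in practice, is something like this (the module path below is the default machine-wide one; yours may differ):

# remove the machine-wide VMware modules left over from the old installer
Get-ChildItem 'C:\Program Files\WindowsPowerShell\Modules' -Directory -Filter 'VMware.*' |
    Remove-Item -Recurse -Force

# then reinstall, optionally scoped to just the current user
Install-Module -Name VMware.PowerCLI -Scope CurrentUser -Force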

So, at this point I figured I'd have a snippet of a 100% clean install, but no, again something happened, and it is discussed here.

If I'm lucky I won't need any of the conflicting cmdlets, and if I do, I'll follow the suggestions in that thread.

OK, let's move on. Well, the commands were still not there, so it looks like this has to succeed, and there's no prefix option during install, only on import, which you can only do after install; the other option was to clobber the install. Not interested, so I went into Windows add/remove features and removed the PowerShell module for Hyper-V. No reboot required, and the install worked.
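I did it through the GUI, but something like this should be the PowerShell equivalent (feature names below are my best guess for server vs. client SKUs):

# Windows Server: remove just the Hyper-V PowerShell module feature
Uninstall-WindowsFeature -Name Hyper-V-PowerShell

# Windows 10/11 client: the equivalent optional feature
Disable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-Management-PowerShell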

The Hyper-V MMC snap-in still works for most of my needs. Now I finally have the two required prerequisites in place.

Step 2a) Connect to the server via PowerCLI

Why did this happen?

A: Because there's a self-signed certificate on vCenter, and the system accessing it doesn't have the vCenter's CA certificate in its own trusted CA store.

How can it be resolved?

A: Option 1) Have a proper PKI deployed, get a properly signed cert for this service from the CA admin, and assign the cert to the vCenter management services. This option is outside the scope of this post.

Option 2) Install the self-signed CA cert into the trusted CA folder of the machine store on the machine running PowerCLI.

Option 3) Set the PowerCLI configuration to prompt whether to accept untrusted certificates.

I chose option 3:
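Which is a single setting (a sketch; -Scope can also be Session or AllUsers):

Set-PowerCLIConfiguration -InvalidCertificateAction Prompt -Scope User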

Make sure when you set your variable you use single quotes and not double quotes (why this parameter takes System.String instead of SecureString is beyond me).
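For example, if the variable in question is the password you're handing to Connect-VIServer (server and account below are placeholders):

# single quotes keep the $ literal instead of being expanded as a variable
$pass = 'Sup3r$ecretPa55!'
Connect-VIServer -Server vcenter.lab.local -User 'administrator@vsphere.local' -Password $pass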

While I understand the importance of PowerShell for scripting, automation, and mass-deployment situations, requiring it just to apply a single toggle setting is a bit ridiculous. Take note, VMware; do better.

ACME HTTP Validation with HTTPS redirection

I had got this to work with a requirement for an external A host record, redirects, and negate rules. It was quite complex and, in the end, it did work. I was excited and got ready to write this long post; then I realized I had somehow missed the obvious. I found this post on the forums with someone having the exact same issue, and what amazed me the most was how simple their solution was.

So, I tested it…

The HTTP to HTTPS redirect condition:

and this will take any HTTP request and convert it to HTTPS. If you've configured HTTP validation though, this will be a problem when the request from ACME comes in to hit the backend created by the ACME plugin.

As stated by the poster, he simply made a clone of the condition and made it a negate.

Then apply it to the redirect rule…

Then apply this to the HTTP listener.

Test a cert renewal… it worked

That was way simpler than what I had come up with, lol.

Hope this helps someone.

Hypertext String Validation via PowerShell

So I had this running code:

function isURL($URL)
{
    $uri = $URL -as [System.URI]
    $uri.AbsoluteURI -ne $null -and $uri.Scheme -match "http|https"
}

isURL('http://www.powershell.com')
isURL('test')
isURL($null)
isURL('zzz://zumsel.zum')
isURL('hp:')
isURL('https:')
isURL('http')
isURL('http:/incomplete')
isURL('Maybenot.http://complete') # our function has an outlier here
isURL('http://complete.should.return.true')
isURL('https://also.complete.should.return.true')

Though there was one outlier; let's fix that…

I was having some issues playing around with different things, till I got my head out of my ass and followed the KISS principle…

Found this simple reference… and made a simple change in my code…

function isURL($URL)
{
    $uri = $URL -as [System.URI]
    $uri.AbsoluteURI -ne $null -and $uri.Scheme -like "http*"
}

isURL('http://www.powershell.com')
isURL('test')
isURL($null)
isURL('zzz://zumsel.zum')
isURL('hp:')
isURL('https:')
isURL('http')
isURL('http:/incomplete')
isURL('Maybenot.http://complete') #All Good now :)
isURL('http://complete.should.return.true')
isURL('https://also.complete.should.return.true')

Normally, if you're coding in other languages and not just writing scripts, you'd want to write actual test code blocks. In scripting you usually keep things simple by utilizing input validation. If you look online you'll see suggestions to use Invoke-WebRequest, but that depends on a working network stack and puts load on the server for something that can easily be validated client-side before any server requests are made.

Hope this helps someone.

Bonus (getting all sub paths from a URL string):

$Tet = "http://somesite.notorg/subsite/subite2/s3/doc/folder/no/matter/how/deep?"
$Array = ($Tet -split "/")
# drop the scheme, the empty element, and the hostname (indexes 0-2); keep only the path parts
$Array = $Array[3..($Array.Length - 1)]
$FullLine = ""   # initialize so repeated runs don't keep appending
foreach ($Item in $Array)
{
    $FullLine = $FullLine + "\" + $Item
}
$FullLine

Mailbox Offline Exception

Since I needed some email from an address I use, I figured I'd have some fun and spin up the ol' Exchange server.

To my surprise, when I attempted to log in to OWA (the front end was loading just fine), after authentication I was greeted with "Microsoft.Exchange.Data.Storage.MailboxOfflineException".

My initial googling didn't provide many good results.

I went to the server and did the usual check of services and such, and noticed the root cause: low disk space. I figured extending the logical volume and a reboot would suffice… nope. Problem persisted.

I decided to run the MS Exchange health checker: https://aka.ms/ExchangeHealthChecker

Even after getting everything green in the health checker, the problem persisted.

A bit more Google-fu and I was able to track down someone with a similar problem on TechNet, with some useful guidance to use eseutil.exe to check the database.

The database indeed returned "Dirty Shutdown".

Ran the repair commands. *Note* you should try /r (soft recovery) before /p; if it works you don't need /p, which is a hard repair that can cause data loss. I didn't care, as it's just lab data.
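For reference, the eseutil steps look roughly like this (paths and the E00 log prefix are examples from a generic install; check yours first):

# dump the header and check the State line (Dirty Shutdown vs Clean Shutdown)
eseutil /mh "D:\ExchangeDB\Mailbox Database.edb"

# soft recovery first: replay the transaction logs (E00 is the log base name)
eseutil /r E00 /l "D:\ExchangeDB\Logs" /d "D:\ExchangeDB"

# hard repair only as a last resort, since it can discard data
eseutil /p "D:\ExchangeDB\Mailbox Database.edb"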

OK, checking again it returned "Clean Shutdown". Everything I've read says the database should be mountable now. Failed to mount….

As a last-ditch effort, I tried to Google some more in case I'd missed something else. I found this nice post by Eric Simson.

Step 1: Backup the Database (my case don’t care)
Step 2: Check Storage
(Was the cause, extended volume to 190GB used out of 250GB)
Step 3: Restart Exchange Services (Yeap, ran health checker)
Step 4: Check Database State (Yeap fixed it)
Step 5: Repair Exchange Database (Yeap fixed it)

Yet even after a reboot, and using PowerShell AND accepting data loss…
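The mount attempt from the Exchange Management Shell was along these lines (database name is a placeholder):

Mount-Database -Identity "Mailbox Database 01" -AcceptDataLoss -Force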

I was about to give up when I had one final idea. I realized that since /p does a hard recovery of the DB even if the log files are lost, and the log files take up a lot of space…

At this point I had well over 50% free space on the server. I ran the repair DB command again just to be safe.

Wait… what… no error…. I guess being at only 24% free space before wouldn't cut it; I don't get why, considering -AcceptDataLoss was defined.

Go to log in to OWA…. Ehhhh!!! There’s my emails!

Hope this helps someone.

Log Searching with PowerShell

Context: you have a log directory with hundreds of log files, and you need to look for a specific string, but you don't know which file it resides in.

With PowerShell we can narrow things down in two ways.

  1. If we roughly know when the log entry was made, we can constrain on time.
  2. We can then use Select-String to filter further.
$daysToCheck = $(get-date).AddDays(-2)

-2 in this case indicates I want files that were modified at most 2 days ago; that is, from right now, go back a maximum of 2 days.

Get-ChildItem -Recurse | ?{$_.LastWriteTime -gt $daysToCheck} | Select-String "String to Search for" -list | Select Path

In this example it searches the current working directory, since no path was given to the first command. The -List switch is important so the file is only listed once when the string is found; otherwise the path is output for every instance of the string within the file.
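If you want to scope it to a specific directory (and only .log files), the same pipeline just gets a path and a filter (both are examples here):

# scope the search to a folder and only *.log files
Get-ChildItem -Path 'C:\Logs' -Filter *.log -Recurse |
    Where-Object { $_.LastWriteTime -gt $daysToCheck } |
    Select-String -Pattern "String to Search for" -List |
    Select-Object -ExpandProperty Path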

This will list all the files containing the string in question. What you do with this list is up to you, but at least you now know where to look for more information on whatever it is you're chasing.

Hope this helps someone.

WinRM on Server Core

Prerequisites

  • AD with an Enterprise CA
    Why? For easier certificate management. If you want step-by-step details using a self-signed cert, you can read this blog post by Tyler Muir. Thanks Tyler, your post was a real help to me.
  • Server Core (2016+)
  • A Certificate Template published and available to client machines

Now you *technically* don't need a template if you're using self-signed. However, there are some prerequisites for the certificate. The official Microsoft source states:

“WinRM HTTPS requires a local computer Server Authentication certificate with a CN matching the hostname to be installed. The certificate mustn’t be expired, revoked, or self-signed.”

If you have a cert, but not one valid for server authentication, you will get an error:

Which is super descriptive and to the point.

Implementation

Basic Implementation

If you don’t have a Server Authenticating certificate, consult your certificate administrator. If you have a Microsoft Certificate server, you may be able to request a certificate using the web certificate template from HTTPS://<MyDomainCertificateServer>/certsrv.

Once the certificate is installed, type the following to configure WinRM to listen on HTTPS:

winrm quickconfig -transport:https

If you don’t have an appropriate certificate, you can run the following command with the authentication methods configured for WinRM. However, the data won’t be encrypted.

winrm quickconfig

Example:

On my domain-joined Core Server, using a "Computer"/Machine template certificate:

powershell
cd Cert:\LocalMachine\My
Get-Certificate -Template Machine

Ensure you exit out of PowerShell to run the winrm commands:

winrm quickconfig -transport:HTTPS

Congrats you’re done.

Advanced Implementation

Now remember, above it stated: "If you don't have a Server Authenticating certificate, consult your certificate administrator. If you have a Microsoft Certificate server, you may be able to request a certificate using the web certificate template."

That’s what this section hopes to cover.

There’s only one other pre-req I can think of besides the primary ones mentioned at the start of this blog post.

Once these are met, request a certificate from the CA and ensure it’s installed on the client machine you wish to configure WinRM on. Once installed grab the certificate Thumbprint.
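A quick way to grab it (just listing the computer's personal store):

# list certs in the local machine store and copy the right thumbprint
Get-ChildItem Cert:\LocalMachine\My | Select-Object Subject, NotAfter, Thumbprint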

Creating the listener using the certificate ThumbPrint:

winrm create winrm/config/Listener?Address=*+Transport=HTTPS '@{Hostname="<YOUR_DNS_NAME>"; CertificateThumbprint="<COPIED_CERTIFICATE_THUMBPRINT>"}'

Manually configuring the Firewall:

netsh advfirewall firewall add rule name="Windows Remote Management (HTTPS-In)" dir=in action=allow protocol=TCP localport=5986

Start the service:

net start winrm

Issues

Failed to create listener

Error: "The function: "HttpSetServiceConfiguration" failed unexpectedly. Error=1312."

Resolution: Ensure the machine actually has the private key for the certificate. See Reference Three in this post for more details.

Not Supported Certificate

Error: “The requested certificate template is not supported by this CA”

Resolution: Ensure you typed the certificate template name correctly. If so, ensure it is published to the CA signing the certificate.

References

Zero

official Microsoft source

One

Straight to the point command references at site below:
ITOM Practitioner Portal (microfocus.com)

Two

Another great source that covers manual setup of WinRM:
Visual Studio Geeks | How to configure WinRM for HTTPS manually

Three

When using the MMC snap-in pointed at a Core Server certificate store, I generated the cert request and imported the certificate, all remotely via the MMC certificates snap-in. Whenever I would go to create the listener, it would error out with "The function: "HttpSetServiceConfiguration" failed unexpectedly. Error=1312."

I could only find this guy's blog post covering it, where he seems to indicate he wasn't importing the key for the cert.

Powershell WinRM HTTPs CA signed certificate configuration | vGeek – Tales from real IT system Administration environment (vcloud-lab.com)

This reminded me of a similar issue using the Microsoft User Migration Tool, where the cert store showed it had the cert key (the little key icon in the cert MMC snap-in) but it wasn't actually available. I felt this was the same case. Creating the request from the client machine directly, copying it to the CA, signing it, copying the signed cert back to the client machine, and installing it manually resolved the issue.

I might have been able to just use the cert I created via the MMC snap-in by running

certutil -repairstore my <serial number>

I did not test this and simply created the certificate (Option 2) from scratch.

Four

“The requested certificate template is not supported by this CA.

A valid certification authority (CA) configured to issue certificates based on this template cannot be located, or the CA does not support this operation, or the CA is not trusted.”

This one led me down a rabbit hole for a long time. Whenever I had everything in place and requested the certificate via PowerShell, I would get this error. If you Google it you will get endless posts saying all you need to do is "Publish it to your CA", such as this and this.

It wasn't until I attempted to manually create the certificate (Option 2) that it finally stated the proper reason, which was:

“A certificate issued by the certificate  authority cannot be installed. Contact your system administrator.
a certificate chain could not be built to a trusted root authority.”

I then checked, and sure enough (I have no clue how) my DC was missing the offline root certificate in its Trusted Root Certification Authorities store.

Again, all buggy: attempting to do it via the Certificates MMC snap-in remotely caused an error, so I had to manually copy the offline root cert file to the domain controller and install it there with certutil.
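The install itself is a one-liner once the cert file is on the box (path below is just an example):

certutil -addstore -f Root C:\Temp\OfflineRootCA.cer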

This error can also stem from specifying a certificate template that doesn't exist on the CA, hence all the blog posts saying to "publish it". HOWEVER, in my case I had assumed "Computer" (as seen in the MMC certificates snap-in) was the template name, when that is only the display name; the actual name of this template is "Machine".

Five

I just have to share this, because this trick saved my bacon. If you use RDP to manage a Core server, you can also use that same RDP session to copy files to it. Since, you know, Server Core doesn't have a "GUI".

On windows server core, how can I copy file located in my local computer to the windows server? – Server Fault

In short

  1. Enable your local drive under the Local Resources tab of the RDP client before connecting.
  2. Open Notepad in the RDP session on the Core server.
  3. Press CTRL+O (or File -> Open) and change the file type to All Files.
  4. Use Notepad's file dialog as a file explorer to copy the files over. 😀

Six

Another thing to note about Server Core 2016:

Unable to Change Security Settings / Log on as Batch Service on Server Core (microsoft.com)

Server Core 2016 does not have added capability via FOD (Features on Demand).

Thus it does not have secpol.msc or mmc.exe natively. To set these settings, either use Group Policy, or if testing on standalone instances of Server Core 2016, define the security policies on a system with a GUI installed, export them, and import them into Core using secedit.
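Roughly like this (file paths are examples):

# on a machine with a GUI, configure the policy in secpol.msc, then export it
secedit /export /cfg C:\Temp\secpol.inf

# copy secpol.inf over to the Core server (see Reference Five), then apply it
secedit /configure /db C:\Windows\security\local.sdb /cfg C:\Temp\secpol.inf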

¯\_(ツ)_/¯