WinXP: a Timeless Classic

There was something about it that I loved. I rocked the Vista-themed copy for so long, you know, back when the Vista fiasco was the Windows 8 fiasco of its day… that was the, uhhh, yeah, anyway…

Just look at that dark, epic theme. A couple pieces of junk on the desktop, but nothing like what modern live tiles bring… a Japanese checkers board to appease someone with the shortest attention span one could ever imagine. But not this classic beauty, just look at that recycle bin, made from fine glass.

Holy ball sacks, I was able to download Chrome via IE7 and it worked… in 2019!!

That’s just amazing; Chrome supported XP till 2015. If there’s a die-hard OS, XP was it. Holy crap… there are people still commenting about this… like, right now…

Well, if you’re lucky like me and my old netbook, the manufacturer and third-party hardware folks made drivers up till Windows 7, so I managed to install Windows 7 with an SSD, and my old laptop runs great, regardless of how many people complain in these comments, haha. 😀 Which is crazy considering…

Yes, the end of Windows 7 extended support is coming up… another solid beast I hope gets the extended life support it deserves. :D

This was pretty much just a blah post but whatever… xP

I just needed a VM to test my OPNsense VM, lol. I figured since I had the old ISO, why not…

VMware ESXi 5.5
D-Link DGE-530T RevC

The Story

Are you guys ready for a story? This one is actually not so bad. A couple days ago I posted on Facebook asking if anyone happened to have a spare PCI/PCIe Network Interface Card (NIC). Since it was going to be used for internet access I was OK with it being 100 Mbps, but I was aiming for 1000 (now that Shaw provides over 300 Mbps internet, clearly 100 doesn’t cut it).

After a day of no luck, and a bunch of funny remarks (as almost none of my friends had any idea of what I was talking about), I decided to take another look through my old computer hardware to see what I could scrounge up…

PCI NIC Found!

Well, well, not even dusty: a PCI NIC, exactly what I needed in my hypervisor to play with OPNsense. I originally was going to try layer 2 trunking via VLANs; however, the main vSwitch already had VMkernel NICs bound to the physical adapter at layer 3, and my firewall (Palo Alto) wouldn’t allow me to create a layer 2 sub-interface if the main interface was already bound to layer 3. Since I wanted my OPNsense VM to get an actual public IP address, this required a connection from my VM directly to my modem at layer 2… yeah, another NIC. So here we are, and it didn’t take long for me to shut down my VMs, install the card, and boot my hypervisor back up. (I hope to one day have multiple hypervisors so I don’t have to shut down my VMs, but even then, if you don’t pay, chances are you won’t get access to the APIs that migrate the memory states of the VMs for you, so it’s a hassle either way.) Anyway, back to the story.

PCI NIC Found … NOT

Oh Borat, who brought you in?!?! So, as you may have guessed, I went to add a new vSwitch for my new VM to get its direct public IP, and to my dismay there was no physical NIC to pick… what the….

So, to Google! And hopefully either VMware support or, usually always better, personal blogs! We all love these, right… ahem… anyway…

You can probably guess where the official answer went, but I’ll enlighten you, as I did follow along for… pain? OK, I don’t know why I did; I was really hopeful it wasn’t going to be the answer I knew it was going to be….

Hey! Some of the commands they provided helped, or did they? All this was, was some BS data chasing to tell you: IT’s Not Supported, SOWWY!

Clearly, there must be some answers on the community forums, right??

Communities are great! VMware’s…. :S

So what do we get… Source one: unanswered, and crying about a badly referenced link. Source two: also unanswered, crying about the same stuff we already know…. it’s officially not supported. Well, I’m running ESXi 5.5 Free and using GhettoVCB’s scripts, also unsupported, so that’s not really an issue… the issue is the lack of help right now.

But bring me down? I don’t think so. The internet has many sites, and many people sharing their knowledge. How?!?! BLOGS! Ahem…

Blogs to the Rescue!

Yes, believe it or not, it is the power of the real untethered, unfiltered beauty that is blogging that actually gets us some meat and potatoes. My first source showed signs of light! One problem: it’s literally 9 years old and using ESXi 4. It also wanted a fair amount of direct file placement and special manipulation. Most of this works fairly differently in ESXi 5.x, where VIBs or precompiled binaries that work with esxcli are the preferred method. I avoid saying “supported” here, cause I use these methods to install unsupported packages :D.

Alright, so now what? Well, the Holy Grail! This king managed to not only blog about getting this working, but shared the drivers/VIB packages required to get it to work too! Epic! Let’s get this dang NIC working…

1) Grab the VIB files

2) Change your support level on ESXi5+:

~ # esxcli software acceptance set --level=CommunitySupported
Host acceptance level changed to 'CommunitySupported'.

3) Install the driver with: "esxcli software vib install -v /DLink-528T-1.x86_64.vib"

4) Reboot

Sounds simple enough, let’s give it a shot… and I hit some errors, classic…

I won’t show the errors just yet, as I have it all in one long snippet, but basically I had a bit of trouble because of the GhettoVCB scripts I had pushed onto my host, and the error results weren’t exactly clear… I attempted a couple things first, like copying the VIB to the path it kept complaining about and specifying the fully qualified path to the VIB… nothing, till I stumbled across this…

esxcli software vib install -v /full/path/to/.vib -f

which finally gave me a driver install successful!
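
If you want to double-check the package actually landed before rebooting, listing the installed VIBs works (standard esxcli; the grep filter here is just an example):

esxcli software vib list | grep -i dlink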

Alright, and after reboot…..


OMG! No way, there it is, with the proper name and everything. Considering the blog post I followed was for a different NIC model, I wasn’t sure if it would work, but there it is… so let’s not get too ahead of ourselves, and see if it comes up and is able to transmit packets…
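
If you’d rather check from the ESXi shell than the vSphere client, the same thing shows up there too (the new adapter will appear alongside the existing vmnics; naming varies):

esxcli network nic list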

I was having some issues initially, so I decided to give my lil netbook a simple /24 IP and give my OPNsense a simple /24 IP, just to validate that the card wasn’t the issue, nor the drivers I had just installed.
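
For illustration, the sanity check was essentially this (addresses are made up, not my actual config):

Netbook: 192.168.1.10/24, plugged straight into the new NIC
OPNsense: 192.168.1.1/24, on the vSwitch backed by the new NIC

ping 192.168.1.1

Replies on the netbook prove the card, and the freshly installed driver, pass traffic.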

Plug them together, lights come up, that’s good… checking ESXi vSphere…

That’s good, and finally can we transmit?!?!

Hey!!!! We have communication! Now it’ll be figuring out getting the public IP configured properly, but we’ll save that for another post. 😀 Cheers!

Another BitLocker Problem

The Story

I’ll keep this one short as I have a lot of things to do and this was an interesting find.

So I had to deploy some new laptops. I did my usual trick with multiple systems: grab the latest version of Windows, run the Spiceworks decrapifier, install all updates, install Office, install all updates, install a couple third-party applications, clean.

Then clean up the default profile. There have been issues with the “CopyProfile” option that MS supports via an XML file during sysprep; not only are there known issues, but this is total rubbish when it used to be a button. I reallllllllly hate this move by MS; there are times you want to configure the default profile and not sysprep (family computer, anyone?).

Well, OK, enough of that MS rant (there are many). If you need help configuring the default profile, check out this guy’s blog, “scribbleghost”, who sources the same one I originally followed by “Jose Espitia”, which I think has a cleaner look and feel, IMHO.

This was the cleanest, smoothest deployment I’ve done so far, and I didn’t hit a single snag. With the above blog posts I also haven’t had to deal with ForensiT’s “DefProf” leaving lingering services, or other anomalies from their profile migration tool.

Instead, I suggest admins look into Ehler’s “User State Migration Tool GUI”. He basically took MS’s new user migration “tool” *cough* cmd-line-based app *cough*, which would normally have someone digging through endless cmd parameters and syntax requirements (I only like doing that if I have to script; outside of that, give me a damn GUI, MS). Well, no worries, this guy did it. (It’s worth the cost, buy it.)

OK now that allllll that is out of the way, what the heck was the issue man?!?!

So I go to BitLocker one of the deployed systems and BAM! An error in my face, in particular Error code: 0x8004259A.

So, off to Google, and my first attempts were not successful, as it seems no BitLocker reference to this error code exists. After some more searching I hit this MS support page with a more understandable, plain-English definition of the code:

0x8004259A

VDS_E_SHRINK_DIRTY_VOLUME

The volume selected for shrink might be corrupted. Use a file system repair utility to fix the corruption problem and then try to shrink the volume again.

Alright well this is something…

The Solution

I first tested on my own laptop, the one I had forgotten to enable BitLocker on (other systems leave the office far more than mine ever does; at that point I had only done one other test deployment besides mine), and I was able to reproduce the error.

Yet on my laptop CHKDSK always returned clean. What gives? Still, shrinking the volume and re-extending it resolved the issue for me…
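
If you’d rather do the shrink/re-extend dance from PowerShell than Disk Management, a minimal sketch looks like this (assumes the OS volume is C: and a modern Windows with the Storage module; the 500 MB figure is arbitrary, run it elevated, and have backups):

# Capture the current size, shrink by ~500 MB, then grow back to the original
$part = Get-Partition -DriveLetter C
Resize-Partition -DriveLetter C -Size ($part.Size - 500MB)
Resize-Partition -DriveLetter C -Size $part.Size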

That is, until I went to do the same on the first deployed laptop, only to find it telling me it was unable to shrink due to corruption (sure, NOW it picks up on something; remember, I shrink the data partitions before making my base image to make DDing it onto other systems much faster).

So this time a CHKDSK /f and a reboot let chkdsk clean the disk, and without any shrinking or expanding I was able to turn on BitLocker!

Another win for today!

Working on PowerShell scripts (ISE) w/ GitHub

GitHub

So, as you all probably know, GitHub has been acquired by Microsoft. I had initially groaned at this acquisition, as a lot of the things Microsoft has done lately have really bothered me (locking down APIs to O365 and not providing them on-prem, for example), but they have also made some good moves… .NET Core 2.0 and all the open source incentives are a nice change of pace.

And to top that with some sugar, how about private repositories for free members! Yeah, that’s right. Now that this is an option, I’m going to use GitHub more. I’ve played with it before, however this time I wanted to write it up for my own memory. Hopefully it helps someone out there too.

Let’s have some fun saving our PowerShell scripts on GitHub!

PowerShell ISE and GIT

Dependencies

So for this demo you’ll need:

1) A GitHub Account (Free)
2) PowerShell ISE (Free with Windows)
3) Git for Windows

First, install and configure Git for Windows. Mike previously covered this topic in another blog article. In this scenario, I ran the Git installer elevated so I could install it in the program files folder and I took the option to add the path for Git to the system environment variable path:


Make sure that you’ve configured Git as the user who is running PowerShell (I ran these commands from within my elevated PowerShell session):
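
For reference, that’s just the standard first-time Git identity setup (the name and email are placeholders):

git config --global user.name "Your Name"
git config --global user.email "you@example.com"
git config --list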

4) Install the Posh-Git PowerShell module from the PowerShell Gallery:
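
Something like the following should do it (PowerShell 5+ with PowerShellGet; -Scope CurrentUser avoids needing an elevated session):

Install-Module -Name posh-git -Scope CurrentUser
Import-Module -Name posh-git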

The Fun Stuff

So I originally followed this guy’s blog post on how to accomplish this.

Now, I had already installed Git for Windows, so I was set there.

PowerShell Profiles

I liked the part where he altered his console display depending on where he was located, to avoid confusion. However, I wasn’t exactly sure what he meant by “profiles”; a lil searching and an education session later, I was able to verify my profile path:

$profile
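
On a typical box that returns something like this (the username part will differ):

C:\Users\<you>\Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1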

Then simply edit that Microsoft.PowerShell_profile.ps1 with Mike’s script:

Set-Location -Path $env:SystemDrive\
Clear-Host
$Error.Clear()
Import-Module -Name posh-git -ErrorAction SilentlyContinue
if (-not($Error[0])) {
    $DefaultTitle = $Host.UI.RawUI.WindowTitle
    $GitPromptSettings.BeforeText = '('
    $GitPromptSettings.BeforeForegroundColor = [ConsoleColor]::Cyan
    $GitPromptSettings.AfterText = ')'
    $GitPromptSettings.AfterForegroundColor = [ConsoleColor]::Cyan
    function prompt {
        if (-not(Get-GitDirectory)) {
            $Host.UI.RawUI.WindowTitle = $DefaultTitle
            "PS $($executionContext.SessionState.Path.CurrentLocation)$('>' * ($nestedPromptLevel + 1)) "
        }
        else {
            $realLASTEXITCODE = $LASTEXITCODE
            Write-Host 'PS ' -ForegroundColor Green -NoNewline
            Write-Host "$($executionContext.SessionState.Path.CurrentLocation) " -ForegroundColor Yellow -NoNewline
            Write-VcsStatus
            $LASTEXITCODE = $realLASTEXITCODE
            return "`n$('$' * ($nestedPromptLevel + 1)) "
        }
    }
}
else {
    Write-Warning -Message 'Unable to load the Posh-Git PowerShell Module'
}

Now that we have the same special console to avoid confusion, let’s link a directory!

Linking a GitHub Repo to Your Local Directory

Then I cloned my new private Repo:

git clone https://github.com/Zewwy/Remove-SPFeature Remove-SPFeature -q

That felt awesome…

Nice, nice…

Opening scripts from the ISE

Alright. Well, now that we have a repo and are in it, how do I open a file in the very ISE we are running in order to edit it? Mike didn’t exactly cover this, cause I suppose to him it was already common knowledge… well, not to me, haha. It’s actually pretty simple once you know how:

psEdit .\Remove-SPFeature.ps1

Woah! Epic. It can be bothersome dealing with lengthy scripts, so ensure you utilize regions (w/ endregions) to allow quick access to named areas, as you can use this command in the ISE to collapse all regions once a script is loaded:

$psISE.CurrentFile.Editor.ToggleOutliningExpansion()

Let’s start making some changes… *changes made*

Committing and Pushing

Get your mind out of the gutter!

Now, I had originally done a git push and instantly got an “everything is up-to-date” alert. So, awesome, I did not have to fight through his whole spiel about auth (I got a prompt the very first time I attempted to clone my repo, requesting me to log in to my GitHub account, so my tokens were good right from the start). However, I then made some updates to one file and was instead presented with this after a commit:

Again, a bit of searching and I was able to find the answer. It usually seems to come down to some form of ignorance; that is why I’m doing this… to learn 😛

Now, again, I got a bit confused at how this worked, and when I did some searching I discovered:

Don’t do a “git commit -a” from the ISE; it’ll crash asking you for a line to provide for the description.

Do proper staged commits, as described here. 🙂
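
In practice that’s just a stage, a commit with an inline message (so the ISE never prompts for one), and a push; the commit message here is an example:

git add .\Remove-SPFeature.ps1
git commit -m "describe your change here"
git push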

I hope this maybe gets some more people PowerShelling!

Next I should learn to use Visual Studio for more app building… but I’m more of a sysadmin than a dev…

I recently took a course in resiliency, and they basically said be a tree… ok.

Branching

What is branching? Well, pretty much: trying stuff without changing the source code. Backups, anyone? It’s a nice way to try stuff without breaking the original code, and once tested, it can be merged.

Unlike a tree, it’s not often a branch just becomes the trunk, but whatever…

Following this guide:

To create a branch locally

You can create a branch locally as long as you have a cloned version of the repo.

From your terminal window, list the branches on your repository.

$ git branch 
* master

This output indicates there is a single branch, master, and the asterisk indicates it is the currently active one.

Create a new feature branch in the repository

$ git branch <feature_branch>

Switch to the feature branch to work on it.

$ git checkout <feature_branch>

You can list the branches again with the git branch command.

Commit the change to the feature branch:

$ git add . 
$ git commit -m "adding a change from the feature branch"

Results:

Hopefully tomorrow I can cover merging. 🙂
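
One thing the guide doesn’t get to: if you want the new branch up on GitHub before then, a single push with an upstream does it:

$ git push -u origin <feature_branch>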

Cheers for now!

SharePoint Orphaned Content Types (ReportServer)

New Series! SharePoint Orphaned!

The only thing that should be orphaned is SharePoint itself…. ohhh, ouch.

The Story

Joking aside, our developer came by again, reporting some issues with the newly developed SharePoint site I had migrated for him to test creating some new SharePoint web part apps. He already had his own documentation available from when he first did this; good man. Even after we got past the “how to create a new template from a site with publishing features enabled”, we were still receiving an error.

Slow SharePoint fixed… But…

During this whole process, the new site was intermittently responding slowly. It was baffling, and as we dug through the ULS logs we found the issue: apparently the service account configured to run the web application pool did not get access to the ProfileDB for some reason. After granting the login SPAccess on the ProfileDB, the slow, intermittent SharePoint loads were fixed… but sadly we were still receiving errors while attempting to deploy new sites from templates.

The signs were clear

Looking further in the ULS logs, the error itself complained that content types could not contain special characters…. A bit more searching pointed us towards the site’s content types page….

Whooops how did I miss this… (ReportServer Feature…)

Guess those are the “special characters”….. ugh. Even though the Test-SPContentDatabase cmdlet returned clean throughout my migration (and all my scripts I have yet to publish), I guess this one isn’t picked up by the checker? Dunno. Anyway… what to do about this…

The search

Source one… too complicated, but interesting… he sure worked hard. I’d go this route, but I was sure there were easier solutions… there had to be, and… yup.

Source two… simple… let’s try it…

The Solution

Install the feature, disable it on all web apps deployed on the farm, uninstall the feature. Nice and simple, and how I usually like it: letting the system do most of the heavy lifting to avoid human error.

So, Step 1: Grab the Reporting Services installers (in my case, for SharePoint 2016)

Step 2: Install it;

Next, Accept the EULA, Install

Success.

This makes the content type names behave correctly.

Step 3: Enable it;

Install-SPFeature -Path "C:\Program Files\Common Files\microsoft shared\Web Server Extensions\16\TEMPLATE\FEATURES\ReportServer"

Now, the original post said to simply uninstall it after, but as you can see it will error. Why? Cause, as it clearly states, it’s still enabled, so…

Step 4: Disable the feature on all web applications

Disable-SPFeature -Identity ReportServer -Url http://spsite.domain.com
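
If you have more than a couple of web applications, a quick loop saves the retyping; a minimal sketch, assuming you’re in the SharePoint Management Shell:

# Disable the ReportServer feature on every web application in the farm
Get-SPWebApplication | ForEach-Object {
    Disable-SPFeature -Identity ReportServer -Url $_.Url -Confirm:$false
}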

Step 5: Uninstall the Feature:

Uninstall-SPFeature -Identity e8389ec7-70fd-4179-a1c4-6fcb4342d7a0

Step 6: Uninstall the package:

msiexec /uninstall rsSharePoint.msi

I recommend doing this after hours, and on a test environment first. It did seem to do an IISRESET, as all sites had to reload, and it took a lil bit for the .NET assemblies to recompile. 😀

Now go enjoy a coffee. Thx, Jussi Palo!

The Second Solution

OK, not gonna lie, I assumed it was all good, and that assumption came back to bite me in the ass…. Again: never assume.

So I told my dev that I had completed the steps and he should have no issues creating a new site from his template, but as I’m walking down the hall a short time later, he gives me the finger snap (like it worked) and says, “same error”.

Ughhhhh… what…

So, looking back at the site’s content types, the Report Model Document content type still remained… OK, what the….

So, running through the procedure again, it complained that the feature was not available for my web apps, so I re-enabled it, saw all three content types, disabled it… and Report Model was still there…. :@ C’mon! Let’s just delete the content type!

Can never give me a break eh SharePoint…

Luckily my dev is super awesome and told me about another blog he had read (sorry, I don’t have the source). The only reason the front end refuses to let you delete the content type isn’t so much that it’s tied to an actual feature (even though we all know this one did come from the ReportServer feature), but rather that it simply has a flag set on its row in the content types table…

Now, I normally never recommend making changes to any SharePoint database directly, and always recommend making all required changes via the Central Admin/psconfig, site settings, or PowerShell. However, in this case we had clearly installed the proper dependencies and de-activated the feature that populates those content types, yet they were not being removed from the content databases…

Only do this if you have tried everything else, only do this in a test environment, actually never do this…. well I guess if you have tried everything else this is your only option…

This requires you to have sysadmin rights on the SQL Server instance hosting the SharePoint content databases. Open SSMS…

SELECT *
FROM WSS_CONTENTDB.[dbo].ContentTypes
WHERE Definition LIKE '%Report%'

Find the row which contains the ID for the Report Builder content type (or whichever other system-based content type you have orphaned that needs removing). It’s usually easily spotted, as it’ll be the only one with a 1 under IsFromFeature.

USE WSS_CONTENTDB
Go
UPDATE dbo.ContentTypes
SET IsFromFeature = 0
WHERE ContentTypeID = *ID From above Query*

Now you can go to the actual orphaned content type under Site Settings, watch the delete operation not fail or error, and destroy that content type from your SharePoint life!

*Note* My dev came back saying “same error” again, lol, but this time we discovered we simply had to re-create the template; deploying the freshly created template then worked (which it originally didn’t, before the above changes).

Happy SharePointing!

SharePoint REST API call returns 500.50 URL rewrite error

The Story

Hey all, another SharePoint story here!

So my dev was working on another SharePoint site app. We did everything like before, and now he was getting a URL rewrite error. I wasn’t sure why this was happening, and since he generally had more experience troubleshooting these types of issues, I sort of let him handle it for a while.

Well, after a while he still couldn’t figure it out, and a funny thing happened: we learned some interesting things and got bitten by erroneous error messages in the end. The first thing he tried was to give his rewrite rules some new variable names, which didn’t help; the same error was returned.

After a little while, I realized I had forgotten to set the Service Principal Names (SPNs) for the new web applications we created for the new SharePoint sites. I was certain this was it, but we kept getting a URL rewrite error! (This, it turns out, actually was the initial cause of the error.)
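
For reference, registering the missing SPNs looks roughly like this (the host name and app pool account are made up; run it as a domain admin):

setspn -S HTTP/spsite.domain.com DOMAIN\svc-SPAppPool
setspn -L DOMAIN\svc-SPAppPool

The -S flag checks for duplicates before adding, and -L lists what’s registered so you can verify.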

I showed my dev this post by Scott on the same error. The reason we were getting the same URL rewrite error was that when he changed the variable names in his rewrite rule, he didn’t change their associated server variables, as mentioned in Scott’s blog.

The Answer

The only reason we got the error both times was simply a coincidence. So it turns out:

1) If you forget to set the SPN when your web app is set for Kerberos, and your hosting app server is on another server, you will get a rewrite error even if you have everything else in place.

2) If you change variables in your rewrite rule and forget to set the associated server variables with them.

Both will result in a 500.50 URL rewrite error… who would have figured…

SharePoint – Invalid Field Name

The Story

Today was an interesting day, I was getting my morning coffee with dock and video cables in hand as I was about to help a colleague with a video issue when my developer walked in.

I could tell something was up when he walked in, as he had a bit of a “catch his breath” feel about him as he asked how my morning was going. Sensing the tension in the conversation, I asked him what was going on. Then he got right to the point, and it was SharePoint related. I’ve had my gooooood amount of SharePoint experience doing the majority of our SharePoint site migrations to 2016; however, this time the issue revolved around the old 2010 site and server that was set up and configured before my time here.

Long story short, I took the correlation ID and searched the good old ULS logs (C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\LOGS). Nothing really stood out: a couple of access-denied entries due to a secure store permission for a web part, which I knew about due to the account I was using to test, and I saw this web part error out before hitting the error page on a web part edit (a different web part that was actually working fine and only failing on edit). There are other blog posts on dealing with access denied on the secure store front, so I’ll leave those out, as they were of no value to me. Continuing through the log, the very last match on the correlation ID brought up an exception halt with the line reading “Invalid field name.” along with a bunch of inner system method calls down a usual stack trace. The stack trace wasn’t really of much relevance, so I did the best thing I could: Google the ULS message about the invalid field name. Sure enough, I stumbled upon a TechNet blog post by Brendan Griffen.

Now, you can go ahead and read more on his issue and story there. He seems to be really hooked on “FormURN”, but my case didn’t have anything to do with that. He does cover in more detail how certain tables used for content types were somehow missing certain fields (ahem, columns), and as we all know, you don’t mess with the DB directly when dealing with SharePoint. I won’t even get into the nitty-gritty of the commands he used to verify the missing fields (thinking about that now, it could have made for a more interesting blog post….. oh well), since he covers the solution to recover the missing fields with two SharePoint PowerShell commands. Running those two commands in my test environment (I was quickly able to duplicate the issue there) sure beat digging endlessly through DBs for columns I wasn’t even sure were missing; and as for the logs, this was the last line in them, so it was this or nothing…

The solution

Command 1:

Disable-SPFeature -Identity "Fields" -Url RootWebURL

You will be asked if you are sure, select yes.

Command 2:

Enable-SPFeature -Identity "Fields" -Url RootWebURL

That was it. Just like magic (it didn’t require a reboot or even an iisreset), my dev was able to edit the web part, and I was on my way to help a user with their monitor.

I’m soon going to have some SharePoint posts about orphaned items, stay tuned! As well as some awesome scripts to clean up old 3rd party plugins!

Make Sure your DFSR is working!

This one is kind of interesting. I use a replicated test environment to validate things, and it works great. I was using the domain’s SYSVOL to quickly copy some text between member servers; however, to my amazement, I was not seeing the same contents from two different member servers, even though both of them validated their secure channel with my domain (nltest /sc_verify:domain)…

It wasn’t until I checked both DCs that I noticed one member server was seeing the SYSVOL from DC1 and the other member server was seeing the SYSVOL contents from DC2.

Now, all DCs have the same SYSVOL contents, right?! So what gives?

You may have already guessed it: DFSR issues…. you know, if the title didn’t give anything away…

Which led me to this nice MS support page.

The most important line from it is this…

For /f %i IN ('dsquery server -o rdn') do @echo %i && @wmic /node:"%i" /namespace:\\root\microsoftdfs path dfsrreplicatedfolderinfo WHERE replicatedfoldername='SYSVOL share' get replicationgroupname,replicatedfoldername,state

With this, my DCs reported a state of 2, which is “Initial Sync” rather than the healthy 4, “Normal” (well, that could explain the diff I was seeing).

This led me to another nice MS support page. 🙂 These are usually better than most, I’ll admit. I followed the steps under “How to perform an authoritative synchronization of DFSR-replicated SYSVOL (like ‘D4’ for FRS)”…

…until I realized that Core doesn’t come with the DFSR management tools, even if you install the AD role… So for the most part I skipped the steps that say to run “DFSRDIAG POLLAD”, cause it’ll fail to run, as the tool simply doesn’t exist there.

Maybe someone out there is smart enough to know the answer…
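
My best guess, which I haven’t verified on my Core boxes: pulling in the DFS management tools feature should bring DFSRDIAG along with it:

Install-WindowsFeature -Name RSAT-DFS-Mgmt-Con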

STS Security Token Service on SharePoint 2013

Today I was bringing my stepping server back up. In this case I use it to upgrade content databases from 2010 -> 2016.

Since you can’t upgrade directly, and the config data had been wiped, I was going through the config wizard to get it rebuilt. Now, the wizard will complain if the old website still exists, so for some reason I decided to remove all the old sites and app pools, figuring they would get rebuilt.

Now the wizard completed without a hitch, and I was off creating a web app and some content databases to delete as I’d test and mount the 2010 content databases for staging.

Oddly, after I had mounted the database, I noticed the server was failing to successfully call “Get-SPSite”, saying that it was due to the security token store service. There are lots of links out there with similar issues… such as this, this, this, this and even this…. most of which are dead ends.

There’s an MS support page on this as well; however, I may have accidentally deleted that app pool…

Then I stumbled across this, an MS blog post, which I find a lil more useful, usually cause they are more hands-on… In this case, since I was already hooped, I gave the command a try, and it ran just like his…

I wasn’t sure if this was enough; then I found this and ran these commands as well…

$sts = Get-SPServiceApplication | ?{$_ -match "Security"}  # grab the Security Token Service application
$sts.Status       # check its current state
$sts.Provision()  # re-provision the service application

After a reboot, all of a sudden Get-SPSite was working again!

How to Shrink a VMDK

Hey all,

It’s not often you have to shrink a VMDK file; expanding one is super easy, even on a live virtual machine. Shrinking one, however, isn’t as straightforward.

This guy does a decent job giving a step-by-step tutorial, but you’ll soon realize you can do it even faster, and without cloning…

1) Use his math to get the disk size you need to edit inside the vmdk:

In the descriptor file, the number under the heading #Extent description, after the letters RW, defines the size of the VMware virtual disk (VMDK).

In his example this number is 83886080, and it’s calculated as follows:

40 GB = 40 * 1024 * 1024 * 1024 / 512 = 83886080
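
For reference, the line in question inside the .vmdk descriptor file looks roughly like this (the extent type and flat-file name are examples and will vary):

# Extent description
RW 83886080 VMFS "myvm-flat.vmdk"

So if you wanted the disk to become, say, 30 GB (purely as an example), the replacement figure would be 30 * 1024 * 1024 * 1024 / 512 = 62914560.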

2) Only shrink VMDKs where you know the end of the disk contains no allocated blocks. Do this in a test environment only, and make sure you have backups.

Now, instead of cloning, simply remove the disk from the VM and re-attach it. Watch its reattached size come up smaller, and it matches, much like the source guy’s post.