Renew Certificate with same Key via CMD

certutil -store my

The command above lists the machine store and can be used to get the serial number of the cert that needs to be renewed. If you need certificates from the user store instead, add the -user switch (certutil -user -store my).
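If you prefer PowerShell, a quick hedged alternative for finding the serial number, using the built-in Cert: drive:

# List machine certs with expiry dates and serial numbers
Get-ChildItem Cert:\LocalMachine\My |
    Sort-Object NotAfter |
    Format-Table Subject, NotAfter, SerialNumber -AutoSize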

certreq -enroll -machine -q -PolicyServer * -cert <serial#> renew reusekeys

If the request fails with a permissions error, ensure the machine account has enroll permission on the published certificate template. For step-by-step guidance, follow this blog post by itexperince. Afterwards it should succeed.

*NOTE* This option archives the old certificate and generates a new one with a new expiration date and a new serial number, but the same key. As for how services bound to the certificate pick up the change: in my test the web server using the cert did automatically bind to the new one, but I'd still recommend you verify where the certificate is in use and make sure those services are updated/restarted accordingly to apply the change.

Fixing SharePoint Search

Broken SharePoint Crawl

Issue: SharePoint Search

First Issue: Crawl ends in 1 min 20 seconds.

Solution: Check whether the page loads when queried from the front-end server itself. If you get a credential prompt three times, you have to disable loopback checking.

How to:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa

Add a new DWORD value named DisableLoopbackCheck and give it a value of 1. After setting the value, reboot your server.

reg add HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa /v DisableLoopbackCheck /t REG_DWORD /d 1
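The same change from PowerShell, if you prefer (a one-liner equivalent of the reg add above; reboot afterwards):

New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Lsa' -Name DisableLoopbackCheck -PropertyType DWord -Value 1 -Force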

Second Issue: Crawl appears to complete but Errors in Log

Log Details of “the content processing pipeline failed to process the item ExpandSegments”

You may find lots of sources saying to watch out for multi-valued attributes or property names, such as this nicely detailed blog post; however, pay close attention to the details of the log message, because this case is a bit different. I also feel this TechNet thread was answered incorrectly.

I tried some of the basics, such as clearing the index and adding the search account as a site admin, but that did not help. Along the way I learnt that separate content sources within a single search application do not get their own indexes: if you want each SharePoint site to have its own index ("database/tables"), you have to create a dedicated search application for each.

After a bit more scouring I found many instances of the same problem, with the solution always being to create a new search application, such as this TechNet post and this TechNet post.

Solution:

Remove the content source from the existing Search Service Application.

Create an entirely new Search Service Application.

Reboot the Front End.

Add the crawl content source on the new SSA. The crawl should now work.
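If you'd rather script the new Search Service Application, here's a minimal sketch from the SharePoint Management Shell; the pool, application, and database names below are examples, not required values:

# Start the search instance on this server, then create the SSA and its proxy
Start-SPEnterpriseSearchServiceInstance -Identity (Get-SPEnterpriseSearchServiceInstance -Local)
$pool = Get-SPServiceApplicationPool "SharePoint Web Services Default"
$ssa  = New-SPEnterpriseSearchServiceApplication -Name "Search Service App 2" -ApplicationPool $pool -DatabaseName "SP_Search2"
New-SPEnterpriseSearchServiceApplicationProxy -Name "Search Service App 2 Proxy" -SearchApplication $ssa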

Third Issue: Red X on Search Service Application Services

*NOTE* After creating the new search application, a front-end reboot is required to remove the red X's on some of the application service statuses.

Fourth Issue: Search results not working

This, from my testing, appeared to resolve the crawl issue at hand. However, I wasn't able to get results returned when entering data into search. I did a bit of searching and testing and found the solution: you have to associate the web application with the new search service application (in my testing I had created a new, uniquely defined search application so it would have a separate index from the other SharePoint sites).

Solution:

Navigate to Central Admin > Application Management > Manage web applications > highlight the web application > select Service Connections from the top ribbon > make sure your search service application is selected.
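A hedged PowerShell equivalent, which adds the new search proxy to the farm's default proxy group (the proxy name is the example from the sketch above):

$proxy = Get-SPEnterpriseSearchServiceApplicationProxy "Search Service App 2 Proxy"
Add-SPServiceApplicationProxyGroupMember -Identity (Get-SPServiceApplicationProxyGroup -Default) -Member $proxy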

This was it for me; however, if you still experience issues, I have also read that updating the front-end servers can resolve things as well, as blogged about by Stefan Goßner.

Fifth Issue: Runaway Crawl

For some unknown reason, when I checked on the SharePoint front end after the weekend, I noticed it was sitting near 100% CPU usage, much like a front end actively doing a crawl. I knew this wasn't a normal time for a crawl, and it generally never happens at that time.

To my amazement, three crawls had been ongoing for over 70 hours. Attempting to stop them with the front-end option "Stop all crawls" left them stuck in a stopping state.

Googling this, I found that stopping and starting the SharePoint Server Search service under services.msc did help bring them back to "Idle".
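For reference, the equivalent restart from PowerShell (the display-name wildcard should match the versioned service name, e.g. "SharePoint Server Search 15"):

Get-Service -DisplayName "SharePoint Server Search*" | Restart-Service -Verbose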

Now, I believe this next step is what caused my next issue, but I can't be certain.

I clicked Remove Index for the primary search service application. It sat at "This shouldn't take long" for a really long time, and I believe it was my impatience here that caused the next problem, as I restarted the search service again before it had finished.

Sixth Issue: Paused by System

All the solutions I found for this issue brought no success. Even after rebooting the server, the primary search service's content sources remained "Paused by system", regardless of:

    1. Pausing and Resuming the Search Services
    2. Restarting the Search Services
    3. Rebooting the server

I was only able to resolve this issue the same way I fixed the initial crawl issue: rebuilding the search service application.

I hope this blog helps anyone experiencing issues with SharePoint Search functionality. Cheers!

Verifying Web connections to SharePoint Sites

Simple as:

Get-Counter -Counter '\web service(_total)\current connections'

You can also use Performance Monitor with the same filter/"counter".

To view all filter/"counter" types (*note: a lot of output, you may want to pipe it to a file):

typeperf -qx
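For example, to dump the full list to a file for browsing (the path here is just an example):

typeperf -qx > C:\temp\counters.txt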

This is useful info if you plan on making site changes and need to know how many people might be affected. At the time of this writing I haven't been able to figure out WHO these connections belong to, but I believe with more knowledge this should be possible. I'll leave this open for the time being and come back to it when time permits.

Rename Lync/Skype Server

Overview

Short answer: don't. Start fresh.

If you're in a clustered topology with a dedicated CMS: remove the server from the topology, add a new server with the new name to the topology, and redeploy fresh.

My Experience:

Step 1) Restore the Lync VM created on the resource domain, change hostname.
Step 2) Stop all Lync services.
Step 3) Create SRV record for _sipinternaltls, and three A records (admin, meet, dialin).
Step 4) Remove old CMS, else topology deployment will fail. (Remove-CsConfigurationStoreLocation)
Step 4.1) Extend storage to 80 GB, else publishing will fail. (https://docs.microsoft.com/en-us/skypeforbusiness/troubleshoot/server-install-or-uninstall/error-when-install-lync-server#:~:text=This%20issue%20occurs%20because%20the,16%20GB%20of%20free%20space.)
Step 5) Open Topology Builder; if the hostname is the same, open the existing file, if it's a new host, create a new topology. Publish.
FAIL

Step 1) Cloned Server
Step 2) Removed from Domain (disconnected)
Step 3) changed Hostname, and joined Domain
Step 4) Run Deployment Wizard -> Prepare First Standard Edition Server
Step 5) Run Deployment Wizard -> Setup or Remove Lync Components -> Step 2
FAIL

Step 1) Cloned Server
Step 2) Removed from Domain (disconnected)
Step 3) Remove all local SQL instances
Step 4) Delete C:\CSData
Step 5) Rename, join Domain
Step 6) Run Deployment Wizard, Prepare First Standard Edition Server
Step 7) Adjust DNS (_sipinternaltls, admin, dialin, meet)
Step 8) Remove-CSConfigurationStoreLocation
Step 9) Create and Publish Topology
SUCCESS
However, the Install CMS step failed:
https://social.technet.microsoft.com/Forums/en-US/88fc3dc6-cf74-474e-baf7-08609211ac1b/cannot-open-database-quotxdsquot-requested-by-the-login-the-login-failed-login-failed-for-user?forum=lyncdeploy

Long Answer (Source): this list has been shared on many blogs; I'm not sure who originated it.

  1. Remove Skype for Business server from topology
  2. Publish topology.
  3. Run Skype for Business Server Deployment Wizard local setup on server to remove Lync components (or run the bootstrapper)
  4. Uninstall SQL Server. Front-ends have LyncLocal and RTCLocal instances. Remove both, rebooting between instance removal.  Edge only has RTCLocal instance.
  5. Remove SQL Server 2012 Management Objects (x64)
  6. Remove SQL Server 2012 Native Client (x64)
  7. Remove Microsoft System CLR Types for SQL Server 2012 (x64)
  8. Remove Microsoft Skype for Business Server 2015, Front End Server
  9. Remove Microsoft Skype for Business Server 2015, Core Components
  10. Delete leftover data:
    Delete C:\Program Files\Microsoft SQL Server
    Delete C:\Program Files\Microsoft Skype for Business Server 2015
    Delete C:\CSData
  11. Rename server and restart
  12. Wait until AD replication completes with new server name.
  13. Open Topology Builder, add a new server to the existing pool and publish. (If this is an SBA or Standard Edition server, the pool and server FQDNs are identical.)
  14. Reinstall Skype for Business Server 2015 components and all cumulative updates
  15. Generate new certificate with updated server name and assign to appropriate services using Skype for Business Server 2015 Deployment Wizard.
  16. Restart all servers in pool at same time (only relevant for front-end servers in an Enterprise pool).

Summary

Don't do it. It's so much easier to start fresh than to deal with this garbage.

Audit Client Side Outlook Archive Settings

Why…

What You Need to Know About Managing PST Files (ironmountain.com)

How?

[SOLVED] Powershell Script find pst files on network – Spiceworks

From this guy's script I wrote a simple script of my own.

As noted, the current issue is that it only works when run under the user's own context. Though I know the results from my testing, I can't source any material on the CIM_DataFile class explaining how or why this is the case.
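For anyone curious, here's a minimal sketch of the CIM_DataFile approach (this is an illustration, not my actual script; the drive letter is an example, and this kind of query is slow on large disks):

# Find PST files on C: and report their size in MB
Get-CimInstance -ClassName CIM_DataFile -Filter "Drive='C:' AND Extension='pst'" |
    Select-Object Name, @{ n='SizeMB'; e={ [math]::Round($_.FileSize / 1MB, 1) } }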

For the time being this post will remain short, as it's a work in progress: I haven't been able to resolve the interactive part of the script, and I'm not happy with requiring an open share and a logon script. The script in its current state isn't even written to support that design.

Will come back to this when time permits.

BitwardenRS Upgrade to Vaultwarden

The Story

A while ago I blogged about installing BitwardenRS, the on-prem version of Bitwarden, which is amazing by the way.

Recently they announced they are changing the name, out of respect for the original project and to avoid confusion.

You can follow this guy's great video if you happen to use UNRAID (which I haven't used myself, but it looks really neat).

If you followed my blog then you are running bitwardenrs via docker-compose.

In this case it was actually simpler than I thought.

Updating/Upgrading BitwardenRS to Vaultwarden

If you are simply updating to the latest build with the same old name:

Step 1) Bring down the container

cd \path\to\dockerimage
docker-compose down

Step 2) Pull the latest build

docker-compose pull

Step 3) Bring up the new container

docker-compose up -d

That's literally it, and it is a super fast process.

However, if you want to use the new image, you'll have to change the name of the source project in the docker-compose YAML file:

Change the image: line

image: vaultwarden/server

Then, just like before, bring down the container, pull new, bring up.

Important Change (broken Email)

After updating I wasn't at first aware of an issue (as I normally don't manage multiple users and orgs); however, attempting to add a user to an org I got an error: "SMTP improper Auth Mechanism selected."

No matter which one you picked, the error remained (against a standard port 25, anonymous connection), and no matter what you entered in the "admin portal" under the SMTP configuration area, the same error would persist. My colleague started to dig through the source code, and the logic seemed clean. The issue seemed to be that once you configure specific "environment variables", e.g.:

– SMTP_USERNAME=[username]
– SMTP_PASSWORD=[password]

these are for some reason not "overwritten" when redefined in the admin portal. Since a "[username]" field was defined, the code built a connection for auth and expected a proper auth mechanism. Since no auth mechanism was ever defined in the "environment variables", and the bug meant the SMTP settings in the "admin panel" were not overriding them, it would never hit the right method in the SMTP code to make a standard port 25 anonymous connection.

To fix the issue you have to remove those two lines from the docker-compose YAML file.

So ONLY define:

SMTP_HOST=[email relay DNS name]
SMTP_FROM=[from address]
SMTP_PORT=25
SMTP_SSL=false

Save the YAML file, then bring the container down and back up.

Watch as email works again. 😀

Super thanks to my buddy GB for the deep code analysis to help resolve this issue. 🙂

Check if Someone is Remoted into a Computer

Let's say you have a shared workstation, and you'd like to check whether someone is using it, without connecting first and getting the "someone is already using this workstation" prompt, or interrupting them in the first place.

I found this and I just have to make a super quick short post about it since it blew my mind.

Why it blew my mind.

  1. It's been around for a long time.
  2. It's native to Windows.

qwinsta /server:RemoteMachine

That's literally it from here. From my quick testing, admin isn't needed on the local or remote machine; you just need remote access to the remote machine.
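If you want to sweep several machines at once, a quick hypothetical PowerShell wrapper (the machine names are examples):

'PC01','PC02','PC03' | ForEach-Object { "== $_ =="; qwinsta /server:$_ }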

Cheers!

Check in all Files in a SharePoint Doc Library

Here’s a real quick and dirty way to check in all files using SharePoint Powershell.

Get the SharePoint web (site) where the doc library sits.

$YourSPSite = Get-SPWeb https://sharepointweb.domain

Get the Document Library from the root (or as many sub layers as needed)

$DocLibrary = ($YourSPSite.RootFolder.SubFolders["Name of Folder"]).SubFolders["Name of Nested Folder"]

After this, call the CheckIn method for each file that is not in a checked-in state:

$DocLibrary.Files | ?{$_.CheckOutStatus -ne "None"} | foreach {$_.CheckIn("File Checked in by Admin")}

If there are even more subfolders within this folder, they can be handled by simply adding a nested .SubFolders call, e.g.:

$DocLibrary.SubFolders.Files | ?{$_.CheckOutStatus -ne "None"} | foreach {$_.CheckIn("File Checked in by Admin")}

At this point it doesn't matter what the subfolders are named; it'll go through each of their files within that "layer" of nested folders.
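If the nesting is arbitrary, a small recursive sketch (same object model as above; the function name is my own, and this is untested beyond the pattern shown) saves stacking .SubFolders calls:

function Invoke-CheckInAll($folder) {
    # Check in every checked-out file at this level
    $folder.Files | Where-Object { $_.CheckOutStatus -ne "None" } |
        ForEach-Object { $_.CheckIn("File Checked in by Admin") }
    # Recurse into each subfolder
    foreach ($sub in $folder.SubFolders) { Invoke-CheckInAll $sub }
}
Invoke-CheckInAll $DocLibrary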

Hope this helps someone

How to remove a Datastore from a vSphere Cluster

How to Remove a Datastore

Intro

Hey everyone,

I figured I'd write up a quick little help guide on removing a datastore. Now, this isn't new and is likely buried on the internet because of it. However, in my searches I found the following sources to be great reads; I highly recommend you check them out.

1)  Official Source VMware KB2004605.

2) A Blog guide by Sam McGeown, here.

3) A post by Mike on cswitchzero.

Now let’s go through the checklist from the official source one by one.

Check List

  • If the LUN is being used as a VMFS datastore, all objects (for example, virtual machines, templates, and snapshots) stored on the VMFS datastore must be unregistered or moved to another datastore.
    This one is pretty easy: navigate to the datastore files and check. You may still find remnants from the items below, though.
  • All CD/DVD images located on the VMFS datastore must also be unmounted/unregistered from the virtual machines.
    This shouldn't even be the case if you did check one.
  • The datastore is not used for vSphere HA heartbeat.
    This setting will use a folder labeled ".vSphere-HA".
    For a quick overview of datastore heartbeating, see here.
    To "remove" (i.e. change) them, see here.
  • The datastore is not part of a datastore cluster.
    You can find (fairly useless) help on this process from VMware here. I'm assuming it's an easy task via the WebUI.
  • The datastore is not managed by Storage DRS.
    If you removed it from the datastore cluster, how could this be an issue?
  • The datastore is not configured as a diagnostic coredump partition/file or scratch partition.
  • Storage I/O Control is disabled for the datastore.
    See here on how to enable it (disabling is the exact reverse).
  • No third-party scripts or utilities running on the ESXi host can access the LUN that has the issue.
    Honestly, I'm not sure how you could check this, even after doing some quick research. You can have scripts that are not on the hosts but are run by other machines via PowerCLI, as described in this community post. I guess you'd have to know; either way, those scripts would just fail and shouldn't affect the vSphere cluster.
  • If the LUN is being used as an RDM, remove the RDM from the virtual machine. Click Edit Settings, highlight the RDM hard disk, and click Remove. Select "Delete from disk" if it is not selected and click OK. Note: this destroys the mapping file but not the LUN content.
    This is more about removing the backend physical device, which in my case is the final goal. If your goal was just to remove a datastore while keeping the physical storage in place, this can be ignored.
  • As noted by Sam, but not by the official source or Mike: watch for a .dvsData folder. As stated by Sam, "The .vdsData folder is created on any VMFS store that has a Virtual Machine on it that also participates in the VDS – so by migrating your VMs off the datastore you'll be ensuring the configuration data is elsewhere."
  • Check that there are no processes locking the VMFS with this command:

esxcli storage core device world list -d <NAA_ID>

Datastore Removal Steps

Step 1) Follow the Checklist above.

Make sure no files reside on the Datastore.

Step 2) Unmount Datastore from all ESXi hosts.

As noted in Sam's blog post, even in vSphere 5.x with the C# phat client this was possible to do via a wizard against all hosts that have the datastore mounted. Even in the newer HTML5 WebUI this is still possible (I think everyone wants to fully forget that VMware chose Flash for a short time).

At this point the datastore will show up as inaccessible in vSphere, as noted by both Mike and Sam. This will be the same anywhere from 5.x to 7.x (as noted by Mike, it might be slightly more important to follow the procedures closely with earlier versions of ESXi, 3 or 4). If the checklist was followed, there should be no issues unmounting the datastore.

If you need to do this via esxcli (Source):

# esxcli storage filesystem list

Unmount the datastore by running the command:

# esxcli storage filesystem unmount [-u UUID | -l label | -p path ]

For example, use one of these commands to unmount the LUN01 datastore:

# esxcli storage filesystem unmount -l LUN01

# esxcli storage filesystem unmount -u 4e414917-a8d75514-6bae-0019b9f1ecf4

# esxcli storage filesystem unmount -p /vmfs/volumes/4e414917-a8d75514-6bae-0019b9f1ecf4

Step 3) Detach the LUN from all hosts.

As noted by Sam, if you are on 5.x you might want to automate this via PowerCLI. As noted by Mike, newer 7.x can now do this in bulk via the management WebUI.

6/7 WebUI -> Hosts and Clusters -> Hosts -> Cluster -> Host -> Configure tab -> Storage Devices (left side tree) -> Highlight device -> Detach
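In the PowerCLI spirit, here's a hedged sketch (not Sam's script) that unmounts the VMFS volume and detaches the backing LUN on every host. It assumes the VMware.PowerCLI module and a connected vCenter; the datastore name is an example:

$ds  = Get-Datastore -Name 'LUN01'                                  # example name
$naa = ($ds.ExtensionData.Info.Vmfs.Extent | Select-Object -First 1).DiskName
foreach ($esx in (Get-VMHost -Datastore $ds)) {
    $ss = Get-View $esx.ExtensionData.ConfigManager.StorageSystem
    $ss.UnmountVmfsVolume($ds.ExtensionData.Info.Vmfs.Uuid)         # unmount VMFS
    $lun = $ss.StorageDeviceInfo.ScsiLun | Where-Object { $_.CanonicalName -eq $naa }
    $ss.DetachScsiLun($lun.Uuid)                                    # detach device
}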

For esxcli:

Obtain the NAA ID of the LUN to be removed:

esxcli storage vmfs extent list

To detach the device/LUN, run the command:

# esxcli storage core device set --state=off -d NAA_ID

To verify that the device is offline, run the command:

# esxcli storage core device list -d NAA_ID

The output will show that the status of the disk is off.

Step 4) Rescan HBAs

At this point, if you rescan all HBAs on all hosts the inaccessible datastore should be gone from the WebUI.
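If you prefer the command line, each host can rescan all of its adapters with:

esxcli storage core adapter rescan --all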

At this point you can stop the LUN from being presented at all (so the disk no longer shows up under devices). For iSCSI-based configurations (most likely for a shared VMFS datastore), remove the static and dynamic targets from the iSCSI initiator settings on each host.

It could also be a local disk on a local storage controller (such as a logical drive created in RAID behind a Pxxx storage controller).

Removing the source device will always be dependent on how it was configured in the first place.

Summary

So today we covered removing a datastore. The important thing to remember is that removing a datastore takes a lot more steps than creating one, because so many different VMs and services can be tied to a datastore once it has started being used.

In many cases the syslog and scratch partitions are the big hang-ups and should be looked at closely. However, as stated, if you are actually checking for files on the datastore this stuff will be pretty evident.

In most cases, ensure you follow the checklist and the process should be pretty smooth. Hope this helps someone.

*Note* I often provide screenshots to give some context; in this case I decided to keep things more generic to span multiple versions of vSphere.

WSUS Cleanup Unused Updates

How I got here

I needed to swap a disk for a storage array to rebuild the logical volume.

Check. "Disk is not authentic": **** off, HPE. Workaround: disable sensors (no thanks). Fix 1: get an authentic disk (not happening). Fix 2: move to alternative storage.

Alt storage was available, so I began the migration process (there are multiple ways to accomplish this; not in scope for this post). It was also a good time to clean up the source data, in this case WSUS update files. Let's clean them up…

Should be easy, eh? Open WSUS -> Options -> Server Cleanup Wizard -> check "Unused updates and update revisions".

Reality:

**** off Microsoft…. OK let’s see what Google has for me today….

Rabbit Hole Begins

Classic Adam with some suggestions, as mentioned here and here; the same suggestions follow in both:

"* Make the following "Advanced Settings" for the WSUS Application Pool in IIS:
– Queue Length: 25000 (from 1000)
– Limit Interval (minutes): 15 (from 5)
– "Service Unavailable" Response: TcpLevel (from HttpLevel)
* (Stop IIS first) Edit the web.config (C:\Program Files\Update Services\WebServices\ClientWebService\web.config) for WSUS:
– Replace <httpRuntime maxRequestLength="4096" /> with <httpRuntime maxRequestLength="204800" executionTimeout="7200"/>
* Adjust the private memory limit.
– If you have WSUS Automated Maintenance (WAM), from the WAM shell run:
.\Clean-WSUS.ps1 -SetApplicationPoolMemory 4096
– If you don't have WAM, edit the pool's configuration directly to change it to 4194304 (4 GB)"

To stop IIS: "iisreset /stop"

This seems to be his copy-and-paste answer to this problem. Well, I did all of the above, with the same results. Let's try a reboot, maybe that helps the settings apply (doubt it). Nope, same error; these changes did nothing to resolve the problem.

Same results. However, as noted by the OP in the second link, Adam tells the OP to follow his guide on validating something in the SUSDB. That link, however, simply points to his "Reinstall WSUS" guide, in which he states you need SSMS: "To tell if the WID carries more than the SUSDB database, you'll need to install SQL Server Management Studio (SSMS) and connect to the WID instance to browse the databases."

Installing MSsqlcmd

Nah, SSMS is heavy; you can also use "Microsoft® Command Line Utilities for SQL Server". For WSUS on 2016 I recommend version 14, along with (I believe this is needed) the ODBC driver (at the time of this writing version 17, which requires the Visual C++ 2017 redistributable).

*Correction: ODBC 17 did not work; the installer wanted ODBC Driver 11 for some reason… this one. (FFS)

and…

Are you shitting me… what gives… someone has already blogged about this…

Grab ODBC version 13.1!

OMG, it worked. The installer is somehow hardcoded to check for only this particular version of ODBC. Unreal… let's move on.

To help guide me in its use I followed this blog post. Thanks mavboss.

Install Visual C++ 2017 Redist.

Install the ODBC driver (AFAIK, enable "ODBC Driver for SQL Server SDK" during the install wizard, and MAKE SURE it's v13.1!!)

Install MSsqlcmd (v14 at the time of this writing; yes, even though the wizard picture states v13).

Holy Sheeeshh, k let’s see if we can connect to the WID…

Connecting to the WID with SQLCMD

cd "c:\Program Files\Microsoft SQL Server\Client SDK\ODBC\130\Tools\Binn"
SQLCMD -E -S np:\\.\pipe\MICROSOFT##WID\tsql\query
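Once connected, a quick sanity check (plain T-SQL) answers Adam's question of whether the WID hosts anything besides SUSDB, no SSMS needed:

SELECT name FROM sys.databases;
GO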

Ehhh, look at that. OK, next part: the queries mentioned in the second link shared earlier…

Ehhhh, well, it's going, but it's taking a long time. I can see why the timeouts were extended in the app pool section…

One thing I noticed: when you run the wizard, CPU goes up but does not max out, maybe a few spikes here and there. Running this stored procedure pins the CPU at 100%. I'll report how long this takes…

An hour and 30 minutes later the process was still going… Oi… publishing for now; I'll update this post when new info is discovered. For now there is no answer to the problem, just a hold-up near the end of the rabbit hole.

Over 3 and a half hours later it completed :O. I was just about to figure out how to cut it off when, right as I was thinking about it, the process dropped in CPU usage and disk usage went up :O

And amazingly, I got a result from the cmd prompt. Me being the lazy guy I am, I had no interest in counting the number of results, so I saved them to a text file in a shared folder, then opened it on my main workstation and pasted it into Excel.
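(In hindsight, sqlcmd could have written the results straight to a file; a hedged example, assuming the stored procedure name spGetObsoleteUpdatesToDelete and using -h -1 to suppress the column header:)

SQLCMD -S np:\\.\pipe\MICROSOFT##WID\tsql\query -d SUSDB -Q "exec spGetObsoleteUpdatesToDelete" -o C:\temp\list.txt -h -1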

Jeeeeeeeee le weeez, over 8000 results. No wonder WSUS kept crashing; the 5 to 15 minute timeout wouldn't help for shit when the query itself took nearly 4 hours to complete. OK, now… how am I going to clean this up? I have a feeling it'll be best to write an SP myself, or at least a generalized query to delete some of these in bulk: maybe start off with 10 items and work up to 100 items at a time. Even at 100, it'll take 80 runs to clear them all…

Nutty. I don't think removing one item will make the front end work like it did for the OP, but I'll try manually deleting some…

That took about a minute… times 8000… uhhhh.

That's going to take way too long. Researching the stored procedure in question, I found this blog post.

I added the indexes mentioned but found no improvement in running the SP.

A little more looking into sqlcmd and I was able to determine how to run the SP for one ID at a time…

SQLCMD -S np:\\.\pipe\MICROSOFT##WID\tsql\query -Q "use SUSDB; exec spDeleteUpdate @localUpdateID=69691;"

Time to write a PowerShell script to help bulk-run this task. The linked blog shows how to do it with a SQL script, but that script builds its table from the stored procedure spGetObsoleteUpdatesToDelete, which took 4 hours, and I don't want to run that again since I already saved the results in a txt file.

I should be able to use PowerShell to easily iterate over each line of the text file (adjusting the number of items within the source file) to do the bulk operation. 😀

Let’s do this…

PS C:\Program Files\Microsoft SQL Server\Client SDK\ODBC\130\Tools\Binn> foreach($line in Get-Content 'C:\temp\New folder\list.txt'){write-host "removing $line"; SQLCMD -S np:\\.\pipe\MICROSOFT##WID\tsql\query -Q "use SUSDB; exec spDeleteUpdate @localUpdateID=$line;"}

… This one-liner lets me run the cleanup on as many or as few updates as needed: simply add each update ID, one per line, to the list.txt file. Done. Simple!

It literally took days, almost a week, of slowly updating my list file and re-running the foreach command to remove all the records from the database. Then I finally opened up the WSUS console, ran the cleanup wizard and…

Ooo, no way, finally! What a pain that was. But got it done. No SSMS required.