Verifying Web connections to SharePoint Sites

Simple as:

Get-Counter -Counter '\web service(_total)\current connections'

You can also use Performance Monitor with the same filter/”counter”.

To view all filter/”counter” types (note: this produces a lot of output, so you may want to pipe it to a file):

typeperf -qx
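Since the output is long, you can redirect it straight to a file, e.g.:

typeperf -qx > C:\temp\counters.txt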

This is useful info if you plan on making site changes and need to know how many people might be affected. At the time of this writing I haven’t been able to figure out WHO these connections belong to, though I believe with more digging this might be possible. I’ll leave this open for the time being and come back to it when time permits.

Rename Lync/Skype Server

Overview

Short answer: Don’t. Start fresh.

If you’re in a clustered topology with a dedicated CMS: remove the server from the topology, add a new server with the new name to the topology, and redeploy fresh.

My Experience:

Step 1) Restore LyncVM created on the resource domain, change hostname.
Step 2) Stop all Lync services.
Step 3) Create SRV record for _sipinternaltls, and three A records (admin, meet, dialin).
Step 4) Remove the old CMS, else topology deployment will fail. (Remove-CsConfigurationStoreLocation)
Step 4.1) Extend storage to 80 GB, else publishing will fail. (https://docs.microsoft.com/en-us/skypeforbusiness/troubleshoot/server-install-or-uninstall/error-when-install-lync-server#:~:text=This%20issue%20occurs%20because%20the,16%20GB%20of%20free%20space.)
Step 5) Open Topology Builder; if the hostname is the same, open the existing file, if it’s a new host, create a new topology. Publish.
FAIL

Step 1) Cloned Server
Step 2) Removed from Domain (disconnected)
Step 3) changed Hostname, and joined Domain
Step 4) Run Deployment Wizard -> Prep First Standard Edition server
Step 5) Run Deployment Wizard -> Setup or Remove Lync Components -> Step 2
FAIL

Step 1) Cloned Server
Step 2) Removed from Domain (disconnected)
Step 3) Remove all local SQL instances
Step 4) Delete C:\CSData
Step 5) Rename, join Domain
Step 6) Run Deployment Wizard, Prepare First Standard Edition server
Step 7) Adjust DNS (_sipinternaltls, admin, dialin, meet) (see the DNS sketch below)
Step 8) Remove-CSConfigurationStoreLocation
Step 9) Create and Publish Topology
SUCCESS
However, the “Install CMS” step failed:
https://social.technet.microsoft.com/Forums/en-US/88fc3dc6-cf74-474e-baf7-08609211ac1b/cannot-open-database-quotxdsquot-requested-by-the-login-the-login-failed-login-failed-for-user?forum=lyncdeploy
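For the DNS adjustments (step 3 / step 7 above), the records can also be created with the DnsServer PowerShell module on the DNS server. A minimal sketch, assuming an AD-integrated zone; the zone name, pool FQDN, and IP are placeholders for your environment:

# Sketch: recreate the Lync/Skype DNS records (placeholder zone/pool/IP)
Add-DnsServerResourceRecord -Srv -ZoneName 'domain.local' -Name '_sipinternaltls._tcp' -DomainName 'lyncpool.domain.local' -Port 5061 -Priority 0 -Weight 0

# The three A records, all pointing at the front end
foreach ($name in 'admin','meet','dialin') {
    Add-DnsServerResourceRecordA -ZoneName 'domain.local' -Name $name -IPv4Address '192.0.2.10'
}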

Long Answer (Source): This info has been shared on many blogs; I’m not sure who originated the list.

  1. Remove Skype for Business server from topology
  2. Publish topology.
  3. Run Skype for Business Server Deployment Wizard local setup on server to remove Lync components (or run the bootstrapper)
  4. Uninstall SQL Server. Front-ends have LyncLocal and RTCLocal instances. Remove both, rebooting between instance removal.  Edge only has RTCLocal instance.
  5. Remove SQL Server 2012 Management Objects (x64)
  6. Remove SQL Server 2012 Native Client (x64)
  7. Remove Microsoft System CLR Types for SQL Server 2012 (x64)
  8. Remove Microsoft Skype for Business Server 2015, Front End Server
  9. Remove Microsoft Skype for Business Server 2015, Core Components
  10. Delete leftover data:
    Delete C:\Program Files\Microsoft SQL Server
    Delete C:\Program Files\Microsoft Skype for Business Server 2015
    Delete C:\CSData
  11. Rename server and restart
  12. Wait until AD replication completes with new server name.
  13. Open Topology Builder, add a new server to existing pool and publish. (If this is a SBA or Standard Edition Server, the pool and the server FQDN is identical.)
  14. Reinstall Skype for Business Server 2015 components and all cumulative updates
  15. Generate new certificate with updated server name and assign to appropriate services using Skype for Business Server 2015 Deployment Wizard.
  16. Restart all servers in pool at same time (only relevant for front-end servers in an Enterprise pool).

Summary

Don’t do it. It’s so much easier to start fresh than to deal with this garbage.

Audit Client Side Outlook Archive Settings

Why…

What You Need to Know About Managing PST Files (ironmountain.com)

How?

[SOLVED] Powershell Script find pst files on network – Spiceworks

From this guy’s script I wrote a simple script of my own.

As noted, the current issue is that it only works when running under the user’s own context. Though I know this from testing, I can’t source any material on the CIM_DataFile class that explains how or why this is the case.
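For reference, the core of the approach is a CIM query against CIM_DataFile. A minimal sketch, assuming a scan of the local C: drive (this class walks the entire filesystem, so it is slow):

# Find .pst files on drive C: via the CIM_DataFile class (slow!)
$psts = Get-CimInstance -Query "SELECT Name, FileSize, LastModified FROM CIM_DataFile WHERE Drive = 'C:' AND Extension = 'pst'"
$psts | Select-Object Name, FileSize, LastModified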

For the time being this post will remain short, as it’s a work in progress: I haven’t been able to resolve the interactive part of the script, and I’m not happy with requiring an open share and a logon script. The script in its current state isn’t even written to support that design.

Will come back to this when time permits.

BitwardenRS Upgrade to Vaultwarden

The Story

A while ago I blogged about installing BitwardenRS, the on-prem version of Bitwarden (which is amazing, by the way).

Recently they announced they are changing the name out of respect for the original project and to avoid confusion.

You can follow this guy’s great video if you happen to use UNRAID (which I haven’t used myself, but it looks really neat).

If you followed my blog then you are running bitwardenrs via docker-compose.

In this case it was actually simpler than I thought.

Updating/Upgrading BitwardenRS to Vaultwarden

If you are simply updating to the latest build under the same old name:

Step 1) Bring down the container

cd /path/to/dockerimage
docker-compose down

Step 2) Pull the latest build

docker-compose pull

Step 3) Bring up the new container

docker-compose up -d

That’s literally it, and it’s a super fast process.

However, if you want to use the new image, you’ll have to change the name of the source project in the docker-compose YAML file:

Change the image: line

image: vaultwarden/server

Then, just like before, bring down the container, pull new, bring up.

Important Change (broken Email)

After updating I wasn’t at first aware of an issue (as I don’t normally manage multiple users and orgs); however, attempting to add a user to an org I got an error: SMTP improper Auth Mechanism selected.

No matter which mechanism you picked, the error remained (against a standard port 25 anonymous connection), and no matter what you entered in the “admin portal” under the SMTP configuration area, the same error would persist. My colleague started to dig through the source code, and the logic seemed clean. The issue turned out to be that once you configure certain environment variables, e.g.:

– SMTP_USERNAME=[username]
– SMTP_PASSWORD=[password]

these are for some reason not overridden when redefined in the admin portal. Since a “[username]” was defined, the code was building an authenticated connection and expecting a proper auth mechanism. Since no auth mechanism was ever defined in the environment variables, and the buggy admin-panel SMTP settings were not overriding them, it would never hit the right method in the SMTP code to make a standard anonymous port 25 connection.

To fix the issue you have to remove those two lines from the docker-compose YAML file.

So ONLY DEFINE:
SMTP_HOST=[email relay DNS name]
SMTP_FROM=[from address]
SMTP_PORT=25
SMTP_SSL=false

Save the YAML file, then bring the container down and back up.

Watch as email works again. 😀

Super thanks to my buddy GB for the deep code analysis to help resolve this issue. 🙂

Check if Someone is Remoted into a Computer

Let’s say you have a shared workstation, and you’d like to check if someone is using it without connecting first and getting the “someone is already using the workstation” prompt, or interrupting them in the first place.

I found this, and I just have to make a super quick post about it since it blew my mind.

Why it blew my mind.

  1. It’s been around for a long time.
  2. It’s native to Windows.
qwinsta /server:RemoteMachine

That’s literally it. From my quick testing, admin rights are not needed on the local or remote machine; you just need remote access to the remote machine.
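If you have a handful of shared machines, a quick loop covers them all (a sketch; the computer names are placeholders):

# Check sessions on a list of shared workstations
$computers = 'PC01','PC02','PC03'
foreach ($pc in $computers) {
    Write-Host "Sessions on ${pc}:"
    qwinsta /server:$pc
}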

Cheers!

Check in all Files in a SharePoint Doc Library

Here’s a real quick and dirty way to check in all files using SharePoint PowerShell.

Get the SharePoint web (an SPWeb object) where the doc library sits.

$YourSPSite = Get-SPWeb https://sharepointweb.domain

Get the Document Library from the root (or as many sub layers as needed)

$DocLibrary = ($YourSPSite.RootFolder.SubFolders["Name of Folder"]).SubFolders["Name of Nested Folder"]

After this, call the CheckIn method for each file that is not in a checked-in state:

$DocLibrary.Files | ?{$_.CheckOutStatus -ne "None"} | foreach {$_.CheckIn("File Checked in by Admin")}

If there are even more subfolders within this folder, they can be handled by simply adding a nested .SubFolders call, e.g.:

$DocLibrary.SubFolders.Files | ?{$_.CheckOutStatus -ne "None"} | foreach {$_.CheckIn("File Checked in by Admin")}

At this point it doesn’t matter what the subfolders are named; it’ll go through each of their files within that “layer” of nested folders.
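If the folder depth is arbitrary, a small recursive function saves guessing at the number of nested .SubFolders calls. A quick sketch using the same objects as above:

# Recursively check in every checked-out file beneath a folder
function Invoke-CheckInAll($folder) {
    $folder.Files | ?{$_.CheckOutStatus -ne "None"} | foreach {$_.CheckIn("File Checked in by Admin")}
    foreach ($sub in $folder.SubFolders) { Invoke-CheckInAll $sub }
}

Invoke-CheckInAll $DocLibrary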

Hope this helps someone

How to remove a Datastore from a vSphere Cluster

How to Remove a Datastore

Intro

Hey everyone,

I figured I’d write up a quick little help guide on removing a datastore. Now, this isn’t new, and it’s likely buried on the internet because of it. However, in my searches I found the following sources to be great reads. I highly recommend you check them out.

1) Official source: VMware KB2004605.

2) A blog guide by Sam McGeown, here.

3) A post by Mike on cswitchzero.

Now let’s go through the checklist from the official source one by one.

Check List

  • If the LUN is being used as a VMFS datastore, all objects (for example, virtual machines, templates, and snapshots) stored on the VMFS datastore must be unregistered or moved to another datastore. This one is pretty easy: navigate to the datastore files and check. You may find some remnants from the items below, though.
  • All CD/DVD images located on the VMFS datastore must also be unmounted/unregistered from the virtual machines. This shouldn’t even be the case if you did check number one.
  • The datastore is not used for vSphere HA heartbeat. This setting will use a folder labeled “.vSphere-HA”
    For a Quick overview of Datastore Heart beating See here
    To “remove” aka change them See here
  • The datastore is not part of a datastore cluster. You can find (fairly useless) help on this process from VMware here. I’m assuming it’s an easy task via the WebUI.
  • The datastore is not managed by Storage DRS. If you removed it from the datastore cluster, how could this be an issue?
  • The datastore is not configured as a diagnostic coredump partition/file or scratch partition.
  • Storage I/O Control is disabled for the datastore. See here for how to enable it (disabling is the exact reverse).
  • No third-party scripts or utilities running on the ESXi host can access the LUN that has the issue. Honestly, I’m not sure how you could check this… from some quick research, you can apparently have scripts that are not on the hosts but are run from other machines via PowerCLI, as described in this community post. I guess you’d have to know; either way the scripts would just fail and shouldn’t affect the vSphere cluster.
  • If the LUN is being used as an RDM, remove the RDM from the virtual machine. Click Edit Settings, highlight the RDM hard disk, and click Remove. Select Delete from disk if it is not selected and click OK. Note: This destroys the mapping file but not the LUN content.

    – This is more about removing the backend physical device, which in my case is the final goal. If your goal was just to remove a datastore while keeping the physical storage in place, this can be ignored.

  • As noted by Sam (but not by the official source or Mike): watch for a .dvsData folder. As stated by Sam, “The .vdsData folder is created on any VMFS store that has a Virtual Machine on it that also participates in the VDS – so by migrating your VMs off the datastore you’ll be ensuring the configuration data is elsewhere.”
  • Check that there are no processes locking the VMFS with this command (NAA_ID is the device’s NAA ID, as used below):
esxcli storage core device world list -d NAA_ID

Datastore Removal Steps

Step 1) Follow the Checklist above.

Make sure no files reside on the Datastore.

Step 2) Unmount Datastore from all ESXi hosts.

As noted in Sam’s blog post, even in vSphere 5.x using the C# (phat) client this was possible via a wizard against all hosts that have the datastore mounted. Even in the newer HTML5 WebUI this is still possible (I think everyone wants to fully forget that VMware chose Flash for a short time).

At this point the datastore will show up as inaccessible in vSphere, as noted by both Mike and Sam. This will be the same anywhere from 5.x to 7.x (as noted by Mike, it may be slightly more important to follow the procedures with earlier versions like ESXi 3 or 4). If the checklist was followed, there should be no issues unmounting the datastore.

If you need to do this via esxcli (Source):

# esxcli storage filesystem list

Unmount the datastore by running the command:

# esxcli storage filesystem unmount [-u UUID | -l label | -p path ]

For example, use one of these commands to unmount the LUN01 datastore:

# esxcli storage filesystem unmount -l LUN01

# esxcli storage filesystem unmount -u 4e414917-a8d75514-6bae-0019b9f1ecf4

# esxcli storage filesystem unmount -p /vmfs/volumes/4e414917-a8d75514-6bae-0019b9f1ecf4

Step 3) Detach the LUN from all hosts.

As noted by Sam, if you are on 5.x you might want to automate this via PowerCLI. As noted by Mike, newer 7.x can now do this in bulk via the management WebUI.

6/7 WebUI -> Hosts n Clusters -> Hosts -> Cluster -> Host -> Configure Tab -> Storage Device (left side tree) -> Highlight Device -> Detach
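If you’d rather script the detach (the PowerCLI route Sam describes for 5.x), here’s a rough sketch, assuming PowerCLI is installed and you’re connected via Connect-VIServer; the cluster name and NAA ID are placeholders:

# Detach a LUN (by NAA ID) from every host in a cluster (placeholders below)
$naa = 'naa.xxxxxxxxxxxxxxxx'
foreach ($vmhost in Get-Cluster 'MyCluster' | Get-VMHost) {
    $lun = Get-ScsiLun -VmHost $vmhost -CanonicalName $naa -ErrorAction SilentlyContinue
    if ($lun) {
        # DetachScsiLun expects the LUN UUID, not the NAA name
        $storSys = Get-View $vmhost.ExtensionData.ConfigManager.StorageSystem
        $storSys.DetachScsiLun($lun.ExtensionData.Uuid)
    }
}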

For esxcli:

Obtain the NAA ID of the LUN to be removed:

esxcli storage vmfs extent list

To detach the device/LUN, run the command:

# esxcli storage core device set --state=off -d NAA_ID

To verify that the device is offline, run the command:

# esxcli storage core device list -d NAA_ID

The output will show that the status of the disk is off.

Step 4) Rescan HBAs

At this point, if you rescan all HBAs on all hosts the inaccessible datastore should be gone from the WebUI.

At this point you can stop the LUN from being seen at all (the disk showing up under devices). For iSCSI-based configurations (most likely for a shared VMFS datastore), remove the static and dynamic IPs from the iSCSI initiator settings on each host.

It could also be a local disk behind a local storage controller (such as a logical drive created in RAID behind a Pxxx storage controller).

Removing the source device will always be dependent on how it was configured in the first place.

Summary

So today we covered removing a datastore. The important thing to remember is that removing a datastore takes a lot more steps than creating one, because so many different VMs and services can be tied to a datastore once it has started being used.

In many cases, the syslog and scratch partitions are the big hang-ups and should be looked at closely. However, as stated, if you are actually checking for files on the datastore this stuff will be pretty evident.

In most cases, if you follow the checklist the process should be pretty smooth. Hope this helps someone.

*Note* I often provide screenshots to give some context; in this case I decided to leave things more generic to span multiple versions of vSphere.

WSUS Cleanup Unused Updates

How I got here

I needed to swap a disk so a storage array could rebuild the logical volume.

Check: “disk is not authentic”. **** off, HPE. Workaround: disable sensors (no thanks). Fix 1: get an authentic disk (not happening). Fix 2: move to alternative storage.

Alt storage available. Begin the migration process (there are multiple ways to accomplish this; not in scope for this post). It’s a good time to clean up source data, in this case WSUS update files. Let’s clean them up…

Should be easy, eh? Open WSUS -> Options -> Server Cleanup Wizard -> Check (Unused updates and update revisions)

Reality:

**** off Microsoft…. OK let’s see what Google has for me today….

Rabbit Hole Begins

Classic Adam with some suggestions, as mentioned here and here; the same suggested fixes are as follows:

“* Make the following “Advanced Settings” for WSUS Application Pool in IIS:
– Queue Length: 25000 from 1000
– Limit Interval (minutes): 15 from 5
– “Service Unavailable” Response: TcpLevel from HttpLevel
* (Stop IIS first) Edit the web.config ( C:\Program Files\Update Services\WebServices\ClientWebService\web.config ) for WSUS:
– Replace <httpRuntime maxRequestLength="4096" /> with <httpRuntime maxRequestLength="204800" executionTimeout="7200"/>
* Adjust the private memory limit.
– If you have WSUS Automated Maintenance (WAM), from the WAM Shell run:
.\Clean-WSUS.ps1 -SetApplicationPoolMemory 4096
– If you don’t have WAM, edit the pool’s configuration directly to change it to 4194304 (4GB)”

To stop IIS: “iisreset /stop”
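If you’d rather script those app pool changes than click through IIS Manager, appcmd can set at least the queue length and private memory limit (a sketch; “WsusPool” is the usual pool name, but verify yours in IIS):

%windir%\system32\inetsrv\appcmd.exe set apppool "WsusPool" /queueLength:25000
%windir%\system32\inetsrv\appcmd.exe set apppool "WsusPool" /recycling.periodicRestart.privateMemory:4194304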

This seems to be his copy-and-paste answer to this problem. Well, I did all of the above, and got the same results. Let’s try a reboot; maybe that helps the settings apply (doubt it). Nope, same error. These changes did nothing to resolve the problem.

Same results. However, as noted by the OP in the second link, Adam tells the OP to follow his guide on validating something in the SUSDB. This simply links to his “Reinstall WSUS” guide, in which he states you need SSMS: “To tell if the WID carries more than the SUSDB database, you’ll need to install SQL Server Management Studio (SSMS) and connect to the WID instance to browse the databases.”

Installing MSsqlcmd

Nah, SSMS is heavy. You can also use “Microsoft® Command Line Utilities for SQL Server”; for WSUS on Server 2016 I recommend version 14, along with (I believe it’s needed) the ODBC drivers (at the time of this writing version 17, which requires the Visual C++ 2017 redist).

*Correction: ODBC 17 did not work; the installer wanted ODBC Driver 11 for some reason… this one. (FFS)

and…

Are you shitting me… what gives… someone has already blogged about this…

Grab ODBC version 13.1!

OMG, it worked. The installer is somehow hardcoded to check for only this particular version of ODBC. Unreal… let’s move on.

To help guide me in its use I followed this blog post. Thanks mavboss.

Install Visual C++ 2017 Redist.

Install the ODBC drivers (AFAIK, enable “ODBC Driver for SQL Server SDK” during the install wizard, and MAKE SURE it’s v13.1!!)

Install MSsqlcmd (v14 at the time of this writing, yes, even though the wizard picture states v13)

Holy Sheeeshh, k let’s see if we can connect to the WID…

Connecting to the WID with SQLCMD

cd "c:\Program Files\Microsoft SQL Server\Client SDK\ODBC\130\Tools\Binn"
SQLCMD -E -S np:\\.\pipe\MICROSOFT##WID\tsql\query

Ehhh, look at that. OK, next up: the queries mentioned in the second link shared earlier…

Ehhhh, well, it’s going, but it’s taking a long time. I can see why the timeouts were extended in the app pool section…

One thing I noticed: when you run the wizard, CPU usage goes up but does not max out, maybe a few spikes here and there. Running this stored procedure pins the CPU at 100%. Will report how long this takes…

An hour and 30 minutes later the process is still going…. Oi… publishing for now; will update this post when new info is discovered. For now there is no answer to the problem, just a holdup at the end of the rabbit hole.

Over three and a half hours later it completed :O. I was just about to figure out how to cut it off when, right as I was thinking about it, the process dropped in CPU usage and disk usage went up :O

And I amazingly got a result at the cmd prompt. Me being the lazy guy I am, I had no interest in counting the number of results, so I saved them to a text file in a shared folder, then opened it on my main workstation and pasted it into Excel.
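Side note: sqlcmd can also write results straight to a file with -o, which would have saved the copy-and-paste step. A sketch, assuming the SUSDB obsolete-updates procedure and an example output path:

SQLCMD -S np:\\.\pipe\MICROSOFT##WID\tsql\query -Q "use SUSDB; exec spGetObsoleteUpdatesToCleanup" -o C:\temp\list.txt -h -1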

Jeeeeeeeee le weeez, over 8000 results. No wonder WSUS kept crashing; the 5 to 15 minute timeout wouldn’t help for shit when the query took nearly 4 hours to complete. OK, now… how am I going to clean this up? I have a feeling it’ll be best to write a stored procedure myself, or at least a generalized query to delete some of these in bulk; maybe start off with 10 items and work up to 100 items at a time. Even at 100, it’ll take 80 runs to clear them all…

Nutty. I don’t think removing one item will make the front end work like it did for the OP; however, I’ll try manually deleting some…

That took about a minute… that times 8000… uhhhh

That’s going to take way too long… researching the stored procedure in question, I found this blog post.

I created the indexes mentioned but found no improvement in running the SP.

A little more looking into sqlcmd, and I was able to determine how to run the SP for a single update ID…

SQLCMD -S np:\\.\pipe\MICROSOFT##WID\tsql\query -Q "use SUSDB; exec spDeleteUpdate @localUpdateID=69691;"

Time to write a PowerShell script to help bulk-run this task. The linked blog shows how with a SQL script, but that script itself builds its table from the stored procedure (spGetObsoleteUpdatesToCleanup), which took 4 hours, so I don’t want to run that again since I already saved the results in a txt file.

I should be able to use PowerShell to easily iterate each line of the text file (adjust the number of items within the source file) to do the bulk operation. 😀

Let’s do this…

PS C:\Program Files\Microsoft SQL Server\Client SDK\ODBC\130\Tools\Binn> foreach($line in Get-Content 'C:\temp\New folder\list.txt'){write-host "removing $line"; SQLCMD -S np:\\.\pipe\MICROSOFT##WID\tsql\query -Q "use SUSDB; exec spDeleteUpdate @localUpdateID=$line;"}

… This one-liner allows me to run the cleanup on as many or as few updates as needed; simply add one update ID per line within the list.txt file. Done. Simple!

It took literally days, almost a week, of slowly updating my list file and running the foreach command to remove all the records from the database. Then I finally opened up WSUS, ran the cleanup wizard, and…

Ooo, no way, finally! What a pain that was. But got it done. No SSMS required.

ESXi new install; failed to create new Datastore

Well, I booted up a new server, created a new logical drive, booted ESXi, and… failed to create a datastore. What is this?

Google, help? Yeah, the forums helped.

1. Show connected disks.

ls -lha /vmfs/devices/disks/

(Verify the disk is seen. You will probably see your disk ID and then :1. This is a partition on the disk. We only need to worry about the main disk ID.)

Neat. Next:

2. Show the error on disk.

partedUtil getptbl /vmfs/devices/disks/(disk ID)

(It will probably indicate that the GPT is located beyond the end of the disk.)

Ohhh yeah, huh… fix it

3. Wipe the disk and rewrite it with a basic MSDOS partition.

partedUtil setptbl /vmfs/devices/disks/(disk ID) msdos

(The output from this should be similar to msdos, and the next line will be 0 0 0 0)

Go create the datastore after this; yay, it worked. Please note to use your own values; the examples here are just for reference.

*UPDATE* I went to reuse some old drives from an old RAID controller. In this case I had removed the logical drive from the old RAID configuration and pulled the disks. Since they were in the same caddies as an alternative server’s, I moved them over and went on to create some new logical drives to use as an alternative datastore on this particular host.

In the examples above, it would fail at creation of the datastore. In this case it failed at the point in the wizard where you define the partition to create, with an error as follows:

“Either the selected disk already has a VMFS datastore or the host cannot perform a partition table conversion. Select another disk” in a nice red banner.

Now, attempting my usual fix as mentioned above resulted in…

… to be updated (I have such a headache right now from the endless issues).

I had to clear the drives to fix this problem: delete the logical drive, rip the drives out of the server, and use a USB enclosure with “diskpart” and the “clean” command on Windows to wipe the drives.
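If you want to skip the interactive diskpart prompt, the Windows Storage module has an equivalent (a sketch; the disk number is whatever Get-Disk shows for the drive in the enclosure):

# Identify the disk attached via the USB enclosure
Get-Disk

# Wipe the partition table, the equivalent of diskpart's "clean" (destroys data!)
Clear-Disk -Number 2 -RemoveData -Confirm:$false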

Then after that, the health light on the server went off, saying my one disk or caddy is “unauthentic” even though it was just working. Apparently terribly engineered caddies.

To track down that issue I had to get into iLO, for which the admin password was unknown, so I had to dig up my old blog post to get into that. And now, after all that… I have a headache.

Good job computers, you managed to make my day fantastic… again.

Veeam: NFC storage connection is unavailable.
Failed to create NFC download stream.

I’ll keep this post short as well.

Run the replication job… ERROR. Check the error… huh, haven’t seen this one before…

4/16/2021 11:22:16 AM :: Processing VMName Error: NFC storage connection is unavailable. Storage: [stg:datastore-3,nfchost:host-2,conn:vcenter]. Storage display name: [ESXi-Local-Datastore].
Failed to create NFC upload stream. NFC path: [nfc://conn:vCenter,nfchost:host-23982,stg:datastore-23983@VMName_replica/VMName.vmx].

To note on this: I had made some changes. I changed a route between sites (I needed to stop an entire network from being improperly routed, but some services still required access from the main location, so some dedicated /32 routes were put in their place).

I had also just patched the host in question and was testing jobs after the patch. Since I wasn’t exactly sure which change was the cause, I decided to do regular troubleshooting to get more details on the root cause.

I love Veeam; they’ve got a nice KB to help with this. So I followed along. The main Veeam server’s log areas didn’t have the log file in question, so I was pretty confident the job was still using the proxy at the alt site.

Checking the log as mentioned in the KB, sure enough the same error line showed up, which indicated it was DNS related. Checking the proxy’s DNS settings… DOH. It was using a server within the routes I had removed and hadn’t created a dedicated /32 for, as I wasn’t expecting any systems here to need to communicate with that subnet.

Now that I know what the issue is… this feels familiar… oh yeah, the Veeam Soap Fault issue I had to deal with.

The funny part about this is… 1) it’s the same server/proxy, 2) it’s again DNS related, and 3) I’m again going to stick with the hosts file to avoid dependencies on DNS servers.

In my case the error showed the ESXi server’s hostname WAS fully qualified, but access to a DNS server to resolve it was unavailable. As soon as I saw this, I had two options:

  1. Create a route to allow the Proxy to reach required DNS servers (which won’t be available in a DR case) OR
  2. Just add a static record in the proxy’s hosts file. (No DNS server required, but if the hostname or IP changes, it needs to be adjusted manually here.)

As you can see, I have exactly the same two options as in my first post; the difference is that this time it was fully dependent on DNS. Since this is a self-hosted instance of a Veeam proxy, there’s a good chance DNS access might not be available when the time comes, so the hosts-file option was chosen.

It’s important that when these types of choices are made, it is well documented WHY they were made.

So in this case… to resolve it, I added a record in the proxy’s local hosts file:

172.x.x.x     ESXi.domain.postfix
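If you’re scripting proxy builds, the same entry can be appended from an elevated PowerShell prompt (a sketch; keep the placeholder IP/FQDN from above):

# Append the static entry to the local hosts file (run elevated)
Add-Content -Path "$env:SystemRoot\System32\drivers\etc\hosts" -Value "172.x.x.x`tESXi.domain.postfix"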

You may notice that the ESXi hostname is not within the initial error; it only tells you the datastore. The Veeam logs will indicate which lookup failed, more than likely the hostname lookup for the ESXi host on which the replica VM will be created.

I really hope this post helps someone. Honestly, I just followed the Veeam KB, which was a great reference for troubleshooting the issue. Your case may be different; depending on the root cause, your resolution may differ from what was discussed here.

Cheers, stay safe everyone.