SharePoint Site Has Not Been Shared With You

SharePoint Site Not Accessible

Overview Story

Created a brand-new SharePoint team site.

Enabled the Publishing feature.

Created access groups and nested the user in a group.

 

Troubleshooting

First source: disable the Access Request feature. This feature was enabled; I disabled it. Result: no change.

The second source had far more things to try.

  1. Cache: not the case; the same issue occurred from every browser and client machine.
  2. *PROCEED WITH CAUTION* Stop and start the “Microsoft SharePoint Foundation Web Application” service from Central Admin >> Application Management >> Manage services on server. If you run into issues, use the STSADM command line:

    cd "c:\program files\common files\microsoft shared\Web Server Extensions\16\BIN"

    Stop:
    stsadm -o provisionservice -action stop -servicetype spwebservice
    [Time taken ~30 min; not sure exactly, I went for a break since it was taking so long]
    iisreset /noforce [~1 min]

    Start:
    stsadm -o provisionservice -action start -servicetype spwebservice
    [Roughly 15 min]
    iisreset /noforce

Result: SharePoint completely broken. Central Admin is accessible, but all sites are in a non-working state. I cannot recommend trying this fix, but if you must, make sure you do so in a test environment or have a backup plan.

I managed to resolve the issue in my test environment. As it turned out, the sites were all defined as HTTPS, but the bindings had been created manually in IIS against the certificates, and the AAMs were then updated in Central Admin. Sources one and two.

When you run these commands, SharePoint is not aware it needs to recreate the HTTPS bindings, so this has to be done manually.

Result:

Ughhhhhhhhhhhh!

3. If you migrated from SharePoint 2010, or used backup-restore/import-export: check whether your source site collection is in classic Windows authentication mode while the target web application is in claims authentication mode.
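If that mismatch is your cause, the documented direction is converting the web application to claims. A minimal sketch, assuming a placeholder URL (and note that on SharePoint 2016 and later the target mode is named Claims-Windows rather than Claims):

# Convert a classic-mode web application to claims authentication (placeholder URL)
Convert-SPWebApplication -Identity "https://sharepointweb.domain" -To Claims -RetainPermissions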

Not the case, but I tried it anyway, and the result: no change.

This is really starting to bug me…

4. If you have a custom master page, verify it's published! A checked-out master page can cause this issue.

We did make changes to the master page to resolve some UI issues, but those had to be published for the changes to even show. So yes, it was published; no change in result.

5. If the “Limited-access user permission lockdown mode” feature is enabled at the site collection level, deactivate it, because it prevents limited-access users from getting form pages!
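You can check and deactivate this from PowerShell as well; a quick sketch, assuming the feature's internal name ViewFormPagesLockDown and a placeholder site URL:

# Check whether the lockdown feature is active on the site collection
Get-SPFeature -Site "https://sharepointweb.domain" | Where-Object { $_.DisplayName -eq "ViewFormPagesLockDown" }

# Deactivate it
Disable-SPFeature -Identity "ViewFormPagesLockDown" -Url "https://sharepointweb.domain" -Confirm:$false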

Found this, deactivated it and….

Ughhhhh….

6. On publishing sites: make sure the cache accounts, Super User and Super Reader, are set to domain accounts with appropriate rights for the SharePoint 2013 web application.
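For reference, this is roughly how those object cache accounts are set via PowerShell; a sketch with placeholder accounts and URL, and the claims-style prefix assumes a claims-mode web application. The two accounts also need Full Control and Full Read user policies, respectively, on the web application.

# Set the object cache Super User / Super Reader accounts on the web application
$wa = Get-SPWebApplication "https://sharepointweb.domain"
$wa.Properties["portalsuperuseraccount"] = "i:0#.w|DOMAIN\spsuperuser"
$wa.Properties["portalsuperreaderaccount"] = "i:0#.w|DOMAIN\spsuperreader"
$wa.Update()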

I'd read this from a different source as well; however, none of my other team sites with publishing enabled have these extra properties defined, and they work without issue. I decided to try it anyway.

Result: no change.

7. If you didn't run the Products and Configuration Wizard after an installation or patch, you may get this error even as a site collection administrator. Run it once to get rid of the issue.

Let's give it a whirl… I pushed this one down from #2 because it's about as rough as the suggestion I listed at #2 (which I should probably also shift down for the same risk reasons). I actually tried this before the others, and like the stsadm commands it broke my SharePoint sites: it reported errors about features defined in the content databases of attached sites that were not installed on the front-end server.

In this case I tried to run the scripts I had written to fix these types of issues (publishing of the script pending), but they wouldn't accept the site URL, stating it was not a site collection. At this point, running Get-SPSite returned an error…

and sure enough…

AHHHHHHHHHHHH!!!!!!!!!

I asked my colleague for help, since he seems to be good at solving issues when I feel I've tried everything. He noted the master pages and layouts area had unique permissions; setting it to inherit from the parent made the pages finally load. But is this normal? I found someone asking about the permissions here; apparently it shouldn't be inherited, and they list the default permission sets:

Yes, the Master Page Gallery uses unique permissions by default.
Here are the default permissions for your information:
Approvers (SharePoint group): Read
Designers (SharePoint group): Design
Hierarchy Managers (SharePoint group): Read
Restricted Readers (SharePoint group): Restricted Read
Style Resource Readers (SharePoint group): Read
System Account (user, SHAREPOINT\system): Full Control

Setting these defaults, I still got the "not been shared with you" problem; for some reason it only works when I set the library to inherit permissions from the parent site.

When checking other sites, I was intrigued to find only the Style Resource Readers group defined. I found this blog on a similar issue, when subsite inheritance is broken, which interestingly enough directly mentions this group.

“It occurs when the built-in SharePoint group “Style Resource Readers” has been messed with in some way.”

Will check this out.

So I went through his entire validation process for that group, and it all checked out. However, as soon as I broke inheritance on the master page library, set the "known defaults", and verified all Style Resource Readers users were correct and that its permissions on all listed libraries and lists matched, I continued to receive the "site not shared with you" error.

I don't feel the fix I found is correct, but I can't find any other cause or solution at this time. I hope this blog post helps someone, but I'm sad to say I wasn't able to determine the root cause nor the proper solution.

 

Renew Certificate with same Key via CMD

certutil -store my

The command above can be used to get the serial number of the cert that needs to be renewed. It shows the machine store; if you need certs from the user store instead, add the -user flag (certutil -user -store my).

certreq -enroll -machine -q -PolicyServer * -cert <serial#> renew reusekeys

If you get the following error:

Ensure the machine account has Enroll permission on the published certificate template. For step-by-step guidance, follow this blog post by itexperince.

If you get the error "The certification authority denied the request. A required certificate is not within its validity period when verifying against the current system clock":

Ensure the certificate you are attempting to renew is not already expired.

If it is, follow my guide on creating new certs via CLI.

Afterwards, the renewal should succeed.

*NOTE* This option archives the old certificate and generates a new one with a new expiration date and a new serial number, but the same key. How services bound to the certificate update themselves, I'm not sure; in this test I did not have the certificate bound to any particular service. On later verification, the web server using the cert did automatically bind to the new one, but I'd still recommend you verify where the certificate is in use and ensure those services are updated/restarted accordingly to apply the change.
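A quick way to spot-check where a certificate is in use on a Windows box; a sketch, run from an elevated prompt:

# List machine-store certificates with thumbprint and expiry
Get-ChildItem Cert:\LocalMachine\My | Select-Object Subject, Thumbprint, NotAfter

# Show which certificates HTTP.SYS (e.g. IIS) currently has bound to ports
netsh http show sslcert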

Fixing SharePoint Search

Broken SharePoint Crawl

Issue: SharePoint Search

First Issue: Crawl ends in 1 min 20 seconds.

Solution: Check whether the page loads when queried from the front-end server itself. If you get a credential prompt three times, you have to disable loopback checking.

How to:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa

Add a new DWORD value named DisableLoopbackCheck and give it a value of 1. After setting the value, reboot the server.

reg add HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa /v DisableLoopbackCheck /t REG_DWORD /d 1
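If you prefer PowerShell, the equivalent (a reboot is still required afterwards):

# Create the DisableLoopbackCheck DWORD under the Lsa key
New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\Lsa" -Name "DisableLoopbackCheck" -PropertyType DWord -Value 1 -Force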

Second Issue: Crawl appears to complete but Errors in Log

Log Details of “the content processing pipeline failed to process the item ExpandSegments”

You may find lots of sources stating to watch out for multi-valued attributes or property names, such as this nicely detailed blog post; however, look at the details in the log message here, as it's a bit different. I also feel this TechNet thread was answered incorrectly.

I tried some of the basics, such as clearing the index and adding the search account as a site administrator; neither helped. On that note, I learned that separate content sources within a single search application do not have their own indexes. You have to create a dedicated search application for each SharePoint site if you want each to have its own index ("database/tables").

After a bit more scouring, I found many instances of the same problem, with the solution always being to create a new search application, such as this TechNet post and this TechNet post.

Solution:

Remove the content source from the existing Search Service Application.

Create an entirely new Search Service Application.

Reboot the Front End.

Add Crawl Content on new SSA. Crawl should work.
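If you'd rather script the new Search Service Application than click through Central Admin, the core cmdlets look roughly like this. A sketch only: the names and managed account are placeholders, and the search topology components still have to be defined afterwards (which the Central Admin wizard normally handles for you):

# Create an application pool, the new search service application, and its proxy
$pool = New-SPServiceApplicationPool -Name "NewSearchSA_AppPool" -Account (Get-SPManagedAccount "DOMAIN\spsearch")
$ssa = New-SPEnterpriseSearchServiceApplication -Name "New Search SA" -ApplicationPool $pool -DatabaseName "NewSearchSA_DB"
New-SPEnterpriseSearchServiceApplicationProxy -Name "New Search SA Proxy" -SearchApplication $ssa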

Third Issue: Red X on Search Service Application Services

*NOTE* After creating the new search application, a front-end reboot is required to clear the red X's on some of the application service statuses.

Fourth Issue: Search results not working

From my testing, this appeared to resolve the crawl issue at hand. However, I still wasn't getting results back when entering data into search. After a bit of searching and testing I found the solution: you have to associate the web application with the new search service application (in my testing I had created a new, uniquely defined search application so it would have a separate index from the other SharePoint sites).

Solution:

Navigate to Central Admin > Application Management > Manage web applications > highlight the web application > select Service Connections from the top ribbon > make sure your search service application is selected.

This was it for me; however, if you still experience issues, I have also read that updating the front-end servers can resolve them as well, as blogged about by Stefan Goßner.

Fifth Issue: Runaway Crawl

For some unknown reason, when I checked the SharePoint front end after the weekend, I noticed it was sitting near 100% CPU usage, very similar to a front end actively doing a crawl. I knew this wasn't a normal time for a crawl, and one generally never runs during this window.

To my amazement, three crawls had been ongoing for over 70 hours. Attempting to stop them with the front-end option "Stop all crawls" resulted in them being stuck in a stopping state.

Googling this, I found that stopping and starting the SharePoint Search Service in services.msc brought them back to "Idle".

Now, I believe this next step is what caused my next issue, but I'm not certain.

I clicked Remove Index for the primary search service application. It sat at "This shouldn't take long" for a really long time, and I believe my impatience here caused the next problem, as I restarted the search service again afterwards.

Sixth Issue: Paused by System

All the solutions I found for this issue brought no success. Even after rebooting the server, the primary search service's content crawl states remained "Paused by system", regardless of:

    1. Pausing and Resuming the Search Services
    2. Restarting the Search Services
    3. Rebooting the server
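Before rebuilding, you can at least confirm the pause state from PowerShell; a sketch, assuming a single SSA (IsPaused() returns 0 when the SSA is running, otherwise a bitmask of pause reasons):

# Check why the search service application is paused, then attempt a resume
$ssa = Get-SPEnterpriseSearchServiceApplication
$ssa.IsPaused()
Resume-SPEnterpriseSearchServiceApplication -Identity $ssa

In my case, resuming still didn't clear the state.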

I was only able to resolve this issue the same way I fixed the initial crawl problem: rebuilding the search service application.

I hope this blog helps anyone experiencing issues with SharePoint Search functionality. Cheers!

Verifying Web connections to SharePoint Sites

Simple as:

Get-Counter -Counter '\web service(_total)\current connections'

You can also use Performance Monitor with the same filter/"counter".

To view all filter/"counter" types (*note* this produces a lot of output; you may want to pipe it to a file):

typeperf -qx
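For example, redirecting the dump to a file, or enumerating the counter sets natively in PowerShell:

# Dump every counter path to a file
typeperf -qx > counters.txt

# Or list the available counter sets via PowerShell
Get-Counter -ListSet * | Sort-Object CounterSetName | Select-Object CounterSetName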

This is useful info if you plan on making site changes and need to know how many people might be affected. At the time of this writing I haven't been able to figure out WHO these connections belong to, but I believe with more knowledge this might be possible. I'll leave this open for the time being and come back to it when time permits.

Rename Lync/Skype Server

Overview

Short answer: don't rename; start fresh.

If you're in a clustered topology with a dedicated CMS: remove the server from the topology, add a new server with the new name to the topology, and redeploy fresh.

My Experience:

Step 1) Restore Lync VM created on resource domain, change hostname.
Step 2) Stop all Lync services.
Step 3) Create SRV record for _sipinternaltls, and 3 A records (admin, meet, dialin).
Step 4) Remove old CMS, else topology deployment will fail. (Remove-CsConfigurationStoreLocation)
Step 4.1) Extend storage to 80 GB, else publishing will fail. (https://docs.microsoft.com/en-us/skypeforbusiness/troubleshoot/server-install-or-uninstall/error-when-install-lync-server#:~:text=This%20issue%20occurs%20because%20the,16%20GB%20of%20free%20space.)
Step 5) Open Topology Builder; if the hostname is the same, open the existing file, if it's a new host, create a new topology. Publish.
FAIL

Step 1) Cloned server.
Step 2) Removed from domain (disconnected).
Step 3) Changed hostname and joined domain.
Step 4) Run Deployment Wizard -> Prepare first Standard Edition server.
Step 5) Run Deployment Wizard -> Setup or Remove Lync Components -> Step 2.
FAIL

Step 1) Cloned server.
Step 2) Removed from domain (disconnected).
Step 3) Removed all local SQL instances.
Step 4) Deleted C:\CSData.
Step 5) Renamed, joined domain.
Step 6) Run Deployment Wizard -> Prepare first Standard Edition server.
Step 7) Adjusted DNS (_sipinternaltls, admin, dialin, meet).
Step 8) Remove-CsConfigurationStoreLocation.
Step 9) Created and published topology.
SUCCESS
However, Install CMS failed:
https://social.technet.microsoft.com/Forums/en-US/88fc3dc6-cf74-474e-baf7-08609211ac1b/cannot-open-database-quotxdsquot-requested-by-the-login-the-login-failed-login-failed-for-user?forum=lyncdeploy

Long answer (source): this info has been shared on many blogs; I'm not sure who originally compiled the list.

  1. Remove Skype for Business server from topology
  2. Publish topology.
  3. Run Skype for Business Server Deployment Wizard local setup on server to remove Lync components (or run the bootstrapper)
  4. Uninstall SQL Server. Front-ends have LyncLocal and RTCLocal instances. Remove both, rebooting between instance removal.  Edge only has RTCLocal instance.
  5. Remove SQL Server 2012 Management Objects (x64)
  6. Remove SQL Server 2012 Native Client (x64)
  7. Remove Microsoft System CLR Types for SQL Server 2012 (x64)
  8. Remove Microsoft Skype for Business Server 2015, Front End Server
  9. Remove Microsoft Skype for Business Server 2015, Core Components
  10. Delete leftover data:
    Delete C:\Program Files\Microsoft SQL Server
    Delete C:\Program Files\Microsoft Skype for Business Server 2015
    Delete C:\CSData
  11. Rename server and restart
  12. Wait until AD replication completes with new server name.
  13. Open Topology Builder, add a new server to existing pool and publish. (If this is a SBA or Standard Edition Server, the pool and the server FQDN is identical.)
  14. Reinstall Skype for Business Server 2015 components and all cumulative updates
  15. Generate new certificate with updated server name and assign to appropriate services using Skype for Business Server 2015 Deployment Wizard.
  16. Restart all servers in pool at same time (only relevant for front-end servers in an Enterprise pool).

Summary

Don't do it. It's so much easier to start fresh than to deal with this garbage.

Audit Client Side Outlook Archive Settings

Why…

What You Need to Know About Managing PST Files (ironmountain.com)

How?

[SOLVED] Powershell Script find pst files on network – Spiceworks

From this guy's script, I wrote a simple script of my own.

As noted, the current issue is that it only works when run under the user's current context. Though I know the results from testing, I can't source any material on the CIM_DataFile class explaining how or why this is the case.
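For reference, a minimal sketch of the kind of CIM_DataFile query involved, scoped to a single drive since unscoped queries are extremely slow (the filter values are placeholders):

# Find PST files on the C: drive via the CIM_DataFile class
Get-CimInstance -ClassName CIM_DataFile -Filter "Drive='C:' AND Extension='pst'" | Select-Object Name, FileSize, LastModified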

For the time being this post will remain short, as it's a work in progress: I haven't been able to resolve the interactive part of the script, and I'm not happy with requiring an open share and a logon script. The script in its current state isn't even written to support that design.

Will come back to this when time permits.

BitwardenRS Upgrade to Vaultwarden

The Story

A while ago I blogged about installing BitwardenRS, the on-prem version of Bitwarden, which is amazing by the way.

Recently they announced they are changing the name, out of respect for the original project and to avoid confusion.

You can follow this guy's great video if you happen to use UNRAID (which I haven't used myself, but it looks really neat).

If you followed my blog then you are running bitwardenrs via docker-compose.

In this case it was actually simpler than I thought.

Updating/Upgrading BitwardenRS to Vaultwarden

If you are simply updating to the latest build under the same old name:

Step 1) bring down the Container

cd /path/to/dockerimage
docker-compose down

Step 2) pull the latest build

docker-compose pull

Step 3) Bring up the new container

docker-compose up -d

That's literally it, and it's a super fast process.

However, if you want to use the new image, you'll have to change the name of the source project in the docker-compose YAML file:

Change the image: line

image: vaultwarden/server

Then, just like before, bring down the container, pull new, bring up.

Important Change (broken Email)

After updating, I wasn't aware of an issue at first (as I don't normally manage multiple users and orgs); however, attempting to add a user to an org produced an error: improper SMTP auth mechanism selected.

No matter which mechanism you picked, the error remained (against a standard anonymous port 25 connection). No matter what you entered in the "admin portal" under the SMTP configuration area, the same error would persist. My colleague started to dig through the source code, and the logic seemed clean. The issue seemed to be that once you configure specific environment variables, e.g.
– SMTP_USERNAME=[username]
– SMTP_PASSWORD=[password]
these are for some reason not overridden when redefined in the admin portal. Since the [username] field was defined, the code built a connection expecting authentication and a proper auth mechanism. Since no auth mechanism was ever defined in the environment variables, and the admin panel's SMTP settings failed to override them, the code never hit the method that makes a standard anonymous port 25 connection.

To fix the issue you have to remove those two lines from the docker-compose YAML file.

So ONLY define:
SMTP_HOST=Email Relay DNS name
SMTP_FROM=from address
SMTP_PORT=25
SMTP_SSL=false

Save the YAML file, bring the container down, and bring it back up.

Watch as email works again. 😀

Super thanks to my buddy GB for the deep code analysis to help resolve this issue. 🙂

Check if Someone is Remoted into a Computer

Let's say you have a shared workstation, and you'd like to check whether someone is using it without connecting first and hitting the "someone is already using the workstation" prompt, or interrupting them in the first place.

I found this and I just have to make a super quick short post about it since it blew my mind.

Why it blew my mind.

  1. It's been around for a long time.
  2. It’s native to Windows.
qwinsta /server:RemoteMachine

That's literally it. From my quick testing, admin rights are not needed on the local or remote machine; you just need remote access to the remote machine.
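If you want the logged-on user names specifically, quser is a similar built-in that works the same way (RemoteMachine being a placeholder hostname):

quser /server:RemoteMachine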

Cheers!

Check in all Files in a SharePoint Doc Library

Here's a real quick-and-dirty way to check in all files using SharePoint PowerShell.

Get the SharePoint web where the doc library sits.

$YourSPSite = Get-SPWeb https://sharepointweb.domain

Get the Document Library from the root (or as many sub layers as needed)

$DocLibrary = ($YourSPSite.RootFolder.SubFolders["Name of Folder"]).SubFolders["Name of Nested Folder"]

After this, call the CheckIn method for each file that is not in a checked-in state:

$DocLibrary.Files | ?{$_.CheckOutStatus -ne "None"} | foreach {$_.CheckIn("File Checked in by Admin")}

If there are even more subfolders within this folder, they can be handled by simply adding a nested .SubFolders call, e.g.:

$DocLibrary.SubFolders.Files | ?{$_.CheckOutStatus -ne "None"} | foreach {$_.CheckIn("File Checked in by Admin")}

At this point it doesn't matter what the subfolders are named; it will go through each of their files within that "layer" of nested folders.
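If the nesting depth is unknown, a small recursive helper covers every layer; a sketch using the same server-side object model, with a hypothetical function name and placeholder folder names:

# Recursively check in every checked-out file beneath a folder
function Invoke-CheckInAll($folder) {
    $folder.Files | Where-Object { $_.CheckOutStatus -ne "None" } | ForEach-Object { $_.CheckIn("File Checked in by Admin") }
    foreach ($sub in $folder.SubFolders) { Invoke-CheckInAll $sub }
}

$YourSPSite = Get-SPWeb https://sharepointweb.domain
Invoke-CheckInAll $YourSPSite.RootFolder.SubFolders["Name of Folder"]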

Hope this helps someone

How to remove a Datastore from a vSphere Cluster

How to Remove a Datastore

Intro

Hey everyone,

I figured I'd write up a quick little help guide on removing a datastore. This isn't new and is likely buried on the internet because of it. However, in my searches I found the following sources to be great reads; I highly recommend you check them out.

1)  Official Source VMware KB2004605.

2) A Blog guide by Sam McGeown, here.

3) A post by Mike on cswitchzero.

Now let’s go through the checklist from the official source one by one.

Check List

  • If the LUN is being used as a VMFS datastore, all objects (for example, virtual machines, templates, and snapshots) stored on the VMFS datastore must be unregistered or moved to another datastore. This one is pretty easy: navigate to the datastore files and check. You may find some remnants from the items below, though.
  • All CD/DVD images located on the VMFS datastore must also be unmounted/unregistered from the virtual machines. This shouldn't even be the case if you completed the first check.
  • The datastore is not used for vSphere HA heartbeat. This setting uses a folder labeled ".vSphere-HA".
    For a Quick overview of Datastore Heart beating See here
    To “remove” aka change them See here
  • The datastore is not part of a datastore cluster. You can find useless help on this process from VMware here. I'm assuming it's an easy task via the WebUI.
  • The datastore is not managed by Storage DRS. If you removed it from the datastore cluster, how could this be an issue?
  • The datastore is not configured as a diagnostic coredump partition/file or scratch partition. For more information, see the following:
  • Storage I/O Control is disabled for the datastore. See here for how to enable it (disabling is the exact reverse).
  • No third-party scripts or utilities running on the ESXi host can access the LUN that has the issue. Honestly, I'm not sure how you could check this; even doing some quick research, you can have scripts that are not on the hosts but are run from other machines via PowerCLI, as described in this community post. I guess you'd have to know; either way, the scripts would just fail and shouldn't affect the vSphere cluster.
  • If the LUN is being used as an RDM, remove the RDM from the virtual machine: click Edit Settings, highlight the RDM hard disk, and click Remove. Select "Delete from disk" if it is not selected and click OK. Note: this destroys the mapping file but not the LUN content.

    – This one is more about removing the backend physical device, which in my case is the final goal. If your goal was just to remove the datastore while keeping the physical storage in place, it can be ignored.

  • As noted by Sam, but not by the official source or Mike: check whether you see a .dvsData folder. As stated by Sam, "The .vdsData folder is created on any VMFS store that has a Virtual Machine on it that also participates in the VDS – so by migrating your VMs off the datastore you'll be ensuring the configuration data is elsewhere."
  • Check that there are no processes locking the VMFS with this command:
esxcli storage core device world list -d NAA_ID

Datastore Removal Steps

Step 1) Follow the Checklist above.

Make sure no files reside on the Datastore.

Step 2) Unmount Datastore from all ESXi hosts.

As noted in Sam's blog post, even in vSphere 5.x using the C# fat client this was possible via a wizard against all hosts that have the datastore mounted. Even on the newer HTML5 WebUI this is still possible (I think everyone wants to fully forget that VMware chose Flash for a short time).

At this point the datastore will show up as inaccessible in vSphere, as noted by both Mike and Sam. This is the same anywhere from 5.x to 7.x (as noted by Mike, it may be slightly more important to follow the procedures on earlier versions like ESXi 3 or 4). If the checklist was followed, there should be no issues unmounting the datastore.

If you need to do this via esxcli (source), first list the current filesystems to get the UUID or label:

# esxcli storage filesystem list

Unmount the datastore by running the command:

# esxcli storage filesystem unmount [-u UUID | -l label | -p path ]

For example, use one of these commands to unmount the LUN01 datastore:

# esxcli storage filesystem unmount -l LUN01

# esxcli storage filesystem unmount -u 4e414917-a8d75514-6bae-0019b9f1ecf4

# esxcli storage filesystem unmount -p /vmfs/volumes/4e414917-a8d75514-6bae-0019b9f1ecf4

Step 3) Detach the LUN from all hosts.

As noted by Sam, if you are on 5.x you might want to automate this via PowerCLI; as noted by Mike, newer 7.x versions can do this in bulk via the management WebUI.
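If you do go the PowerCLI route, a hedged sketch that combines the unmount (Step 2) and the detach, assuming an active Connect-VIServer session and "LUN01" as a placeholder datastore name; it calls the same UnmountVmfsVolume/DetachScsiLun APIs the UI uses:

# Unmount the VMFS volume and detach its LUN on every host that sees the datastore
$ds = Get-Datastore -Name "LUN01"
foreach ($vmhost in Get-VMHost -Datastore $ds) {
    $storSys = Get-View $vmhost.ExtensionData.ConfigManager.StorageSystem
    $storSys.UnmountVmfsVolume($ds.ExtensionData.Info.Vmfs.Uuid)
    $lun = Get-ScsiLun -VmHost $vmhost -CanonicalName $ds.ExtensionData.Info.Vmfs.Extent[0].DiskName
    $storSys.DetachScsiLun($lun.ExtensionData.Uuid)
}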

6.x/7.x WebUI -> Hosts and Clusters -> Cluster -> Host -> Configure tab -> Storage Devices (left-side tree) -> highlight the device -> Detach

For esxcli:

Obtain the NAA ID of the LUN to be removed:

esxcli storage vmfs extent list

To detach the device/LUN, run the command:

# esxcli storage core device set --state=off -d NAA_ID

To verify that the device is offline, run the command:

# esxcli storage core device list -d NAA_ID

The output shows the status of the disk as off.

Step 4) Rescan HBAs

At this point, if you rescan all HBAs on all hosts the inaccessible datastore should be gone from the WebUI.

At this point you can remove the LUN from being presented, so the disk no longer shows up under devices. For iSCSI-based configurations, remove the static and dynamic targets from the iSCSI initiator settings on each host; this is the most likely case for a shared VMFS datastore.

Or it could be a local disk behind a local storage controller, such as a logical drive created in RAID behind a Pxxx storage controller.

Removing the source device will always depend on how it was configured in the first place.

Summary

So today we covered removing a datastore. The important thing to remember is that removing a datastore takes a lot more steps than creating one, because so many different VMs and services can come to depend on a datastore once it's in use.

In many cases, the syslog and scratch partitions are big hang-ups and should be looked at closely. As stated, though, if you are actually checking for files on the datastore this will be pretty evident.

In most cases, if you follow the checklist the process should be pretty smooth. Hope this helps someone.

*Note* I often provide screenshots for context; in this case I decided to keep it more generic to span multiple versions of vSphere.