FreeNAS Single SSD as ZIL and L2ARC

Quick story: I set this up on my FreeNAS server in hopes of getting better performance. In reality, I don’t think it helped anything, because of my FreeNAS server’s setup: an old desktop with 3 GB of memory and a couple of SATA drives, two spindle and one SSD.

It took me a while, but I finally found the original source I followed.

Main parts (assuming the SSD is ada0):

root@freenas1:~ % gpart create -s gpt ada0
ada0 created
root@freenas1:~ % gpart add -a 4k -b 128 -t freebsd-zfs -s 10G ada0
ada0p1 added
root@freenas1:~ % gpart add -a 4k -t freebsd-zfs ada0
ada0p2 added

List the disks to get the partition GUIDs:

root@freenas1:~ % gpart list

Add the partitions as ZIL (log) and L2ARC (cache) to the pool (volume0):

root@freenas1:~ % zpool add volume0 log gptid/94a4bd28-aeb7-11e5-99ac-bc5ff42c6cb2
root@freenas1:~ % zpool add volume0 cache gptid/9a79622f-aeb7-11e5-99ac-bc5ff42c6cb2

Nice; you can then use the zpool status command to verify they are in use as such:

If you are paying attention, you’ll notice the GUIDs are different. Anyway, you can use the GUI to see the results as well: click the main volume under Storage -> Volume, then click the Show Details button.

If you pick any of the partitions in this list, at the bottom you get a button labeled “Remove”, which undoes the previous additions made via the back-end SSH session.

After this, I removed the old volume completely, including the old file-based extent I was using on it.

I then created all-new volumes, one volume on each drive, created one zvol on each volume, and used those zvols as device-based extents for the iSCSI service… and I couldn’t believe the performance increase. I couldn’t saturate the 1 Gbps link before with storage vMotions; now every single datastore maxes out the NIC, I hit 100 MB/s plus on every storage vMotion, and I increased my storage capacity. W00t! (Of course, I never had storage redundancy to begin with, so nothing lost, all gains.)
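A quick sanity check on those numbers (my arithmetic, not from any source): a 1 Gbps link tops out at roughly 125 MB/s before protocol overhead, so sustained 100+ MB/s storage vMotions really do mean the NIC is the bottleneck:

```shell
# 1 Gbps = 1000 Mbit/s; divide by 8 bits per byte for the raw MB/s ceiling.
# Real-world TCP/iSCSI overhead drops usable throughput to roughly 100-118 MB/s.
echo $((1000 / 8))   # prints 125
```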

Summary

Don’t bother using an SSD to try to gain speed on a simple homelab FreeNAS server. It’s useless… “Some more specifics: as a rule of thumb L2ARC is only really useful if you have lots of RAM (64GB+) and a ZIL is only useful if you’re performing lots of synchronous writes.” – anodos

Upgrade and Migrate a vCenter Server

Intro

Hello everyone! Today I’ll be doing a test in my home lab where I will be upgrading, not to be confused with updating, a vCenter server. If you are interested in staying on the version your vCenter is currently on and just patching to the latest build, see my other blog post: VMware vCenter Updates using VAMI – Zewwy’s Info Tech Talks

Before I get into it, there are a couple of things expected of you:

  1. An existing instance of vCenter deployed (for me yup, 6.7)
  2. A backup of the config or whole server via a backup product
  3. A Copy of the latest vCenter ISO (either from VMware directly or for me from VMUG)

Side Story

*Interesting Side Note* The VM creation date property has only existed since vCenter 6.7. Before that, it was in the events table, which gets rotated out by retention policies. 🙂

*Side Note 2* I was doing some vMotions of VMs to prepare for rebooting a storage device hosting some datastores before the vCenter update, and oddly, even though the task didn’t complete, it would disappear from the Recent Tasks view. Clicking All Tasks showed the task in progress but at 0%, so there was no indication of progress. The only trick that worked for me was to log off and back in.

A quick little side story: it had been a while since I last logged into VMUG for anything, and I have to admit the site is unbelievably badly designed. It’s so unintuitive I had to Google, again, how to get the ISOs I need from VMUG.

Also, for some reason I don’t understand, when I went to log in it stated my username and password were wrong. Considering I use a password manager, I was very confident it was something wrong on their end. Attempting a password reset produced no email to my address.

Distraught, I decided to make another account with the same email, which, oddly enough, brought me right back to my old account on first login. Super weird. According to Reddit, I was not the only one to experience oddities with the VMUG site.

Also, on the note of VMware certification: I totally forgot that you have to take one of the mandatory classes before you can challenge, or take, any of the VMware exams.

“Without the mandatory training? Yes, they represent a reasonable value proposition. With mandatory training? No, they do not. Requiring someone who’s been using your products for a decade to attend a class which covers how to spell ESXi is patronizing if not downright condescending. I only carry VMware certifications because I was able to attain them without going through the nonsense mandatory training.”

“The exam might as well cost $3500 and “include” the class for “free”.”

I don’t fully agree with that last one, because (AFAIK) you can take any one class and then take all the exams. I get the annoyance of the barrier to entry, though; gotta keep the poor out. 😛

A simple summary of VMUG:

  1. Create account and Sign up for Advantage from the main site.
  2. Download Files from their dedicated Repo Site.

Final gripes about VMUG:

  1. You can’t get Offline Bundles to create custom ESXi images.
  2. You can’t seem to get older versions of the software from there.
  3. The community response is poor.
  4. The site is unintuitive and buggy.

So, now that we finally have the vCenter 7 ISO…

For a more technical coverage of updating vCenter see VMware’s guide.

For shits and giggles: moving ESXi hosts and vCenter to a new subnet.

1) Build the subnet, firewall rules, and VLANs.
2) Configure all hosts with a new VM port group for the new VLAN.
3) Move each host one at a time to the new subnet; ensure again that the network will be allowed to reach the vCenter server after migration.
4) You can’t change the management VMkernel’s VLAN from the vCenter GUI; it has to be done at the host level:
i) Place the host into maintenance mode and remove it from inventory (if hosts were added by IP; otherwise just disconnect).
ii) Update the host’s IP address via the host’s console, and update DNS records.
iii) Re-add the host to the cluster via its new DNS hostname.

Changing vCenter Server IP address

Source: How to change vCenter and vSphere IP Address ( embedded PSC ) – Virtualblog.nl

I changed the IP address in the VAMI, and it even changed the vpxa config’s serverIp to the new IP automatically. It worked. :O

Upgrading vCenter

Using the vCenter ISO

The ISO is not a bootable one, so I mount it on a Windows machine that has access to the vCenter server.

Run the installer exe file…

Click Upgrade

I didn’t enter the source ESXi host IP… let’s see.

Nope, it wants all the info; fill in all fields, including the source ESXi host info.

Yes.

Target ESXi Host for new VCSA deployment. Next

Target VCSA VM info. Next

Would you like large, or extra large?

Pick the VM’s datastore location. Next.

VM temporary network info; again, ensure network connections are open between subnets if working with segregated networks.

Ready to deploy.

Deploying the VM to the target ESXi host. Once this was done, I got a message to move on to Stage 2, which can be done later; I clicked Next.

Note right here: when you get the prompt to enter the root password, I found it to be the target root password, not the source one.

Second Note: Resolving the Certs Expired Pre-Check

While working on a client upgrade, this was more in my face: the source server pre-checks would not continue, stating the certificates had expired.

I was wondering how to check the existing certs, and while this KB states you can check them via the WebUI, there could be a couple of issues.

1) You might not even be able to log in to the WebUI, as mentioned in this blog; a bit of a catch-22. (Note: the same goes for SSO domains; they can’t be managed via the VAMI, so if there’s an AD issue with the source, you often get a 503 service error attempting to log on to the WebUI.)

2) It might not even show up in that area of the WebUI.

In these cases I managed to find this blog post… which, shockingly enough, is by the very guy who wrote the fixsts script used to fix my problem in this very blog post. :O

Checking Certs via the CLI

Grab Script from This VMware KB

Download the checksts.py script attached to the above KB article.
Upload the attached script to the VCSA or external PSC.

For example, /tmp

Once the script has been successfully uploaded to VCSA, change the directory to /tmp.

For example:

cd /tmp

Run python checksts.py.

Okie dokie then; I guess this script doesn’t check the required cert… so instead I followed along with this VMware KB (yes, another one).

In which case I ran the exact commands specified in the KB, saved the certificate to a text file, and opened it in Windows by double-clicking the .crt file.

openssl s_client -connect MGMT-IP:7444 | more
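Rather than eyeballing the full s_client dump, you can pipe the certificate through openssl x509 to print just the validity window. A small sketch (the throwaway self-signed cert and /tmp paths below are just so the command has something to chew on; against the real server you would feed it the s_client output, as in the commented line, with MGMT-IP being the placeholder from the KB):

```shell
# Generate a throwaway self-signed cert to demonstrate the date check.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
  -keyout /tmp/demo.key -out /tmp/demo.crt -days 1 2>/dev/null

# Print only the notBefore/notAfter validity dates.
openssl x509 -in /tmp/demo.crt -noout -dates

# Against the live STS endpoint, the equivalent would be:
#   echo | openssl s_client -connect MGMT-IP:7444 2>/dev/null | openssl x509 -noout -dates
```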

So now, instead of running the fixsts script, this KB states to run the following to reset this certificate to use the machine cert (self-signed with valid date stamps; at least that’s what this server showed when checking via the Certificate Management area in the vCenter WebUI).

For the appliance (I don’t deal with the Windows Server version, as it’s EOL):

/usr/lib/vmware-vmafd/bin/vecs-cli entry getcert --store MACHINE_SSL_CERT --alias __MACHINE_CERT > /var/tmp/MachineSSL.crt
/usr/lib/vmware-vmafd/bin/vecs-cli entry getkey --store MACHINE_SSL_CERT --alias __MACHINE_CERT > /var/tmp/MachineSSL.key
/usr/lib/vmware-vmafd/bin/vecs-cli entry getcert --store STS_INTERNAL_SSL_CERT --alias __MACHINE_CERT > /var/tmp/sts_internal_backup.crt
/usr/lib/vmware-vmafd/bin/vecs-cli entry getkey --store STS_INTERNAL_SSL_CERT --alias __MACHINE_CERT > /var/tmp/sts_internal_backup.key
/usr/lib/vmware-vmafd/bin/vecs-cli entry delete --store STS_INTERNAL_SSL_CERT --alias __MACHINE_CERT -y
/usr/lib/vmware-vmafd/bin/vecs-cli entry create --store STS_INTERNAL_SSL_CERT --alias __MACHINE_CERT --cert /var/tmp/MachineSSL.crt --key /var/tmp/MachineSSL.key

Then:

  • service-control --stop --all
  • service-control --start --all

In my case, for some odd reason, I saw a bunch of these when stopping and starting the services:

2021-09-20T18:35:47.049Z Service vmware-sts-idmd does not seem to be registered with vMon. If this is unexpected please make sure your service config is a valid json. Also check vmon logs for warnings.

I was nervous at first that I may have broken it; for some time the startup command sequence didn’t complete, but after a while the WebUI was fully accessible again. Let’s validate the cert with the same odd method we used above.

Which, sure enough, showed a date-valid cert that is the machine cert, self-signed.

Running the Update Wizard… Boooo Yeah!

 

Uhhh, ok….

 

Ok dokie?

I didn’t care too much about old metrics.

nope.

Let’s go!

After some time…

Nice! and it appears to have worked. 🙂

Another Side Trail

I was excited because I deployed this new VCSA off the FreeNAS datastore I wanted to bring down and reboot, but lo and behold, some new random VMs were on the datastore…

Doing some research, I found this simple explanation of them; however, it wasn’t till I found this VMware article that I got the info I was after.

Datastore selection for vCLS VMs

The datastore for vCLS VMs is automatically selected based on ranking all the datastores connected to the hosts inside the cluster. A datastore is more likely to be selected if there are hosts in the cluster with free reserved DRS slots connected to the datastore. The algorithm tries to place vCLS VMs in a shared datastore if possible before selecting a local datastore. A datastore with more free space is preferred and the algorithm tries not to place more than one vCLS VM on the same datastore. You can only change the datastore of vCLS VMs after they are deployed and powered on.

If you want to move the VMDKs for vCLS VMs to a different datastore or attach a different storage policy, you can reconfigure vCLS VMs. A warning message is displayed when you perform this operation.

You can perform a storage vMotion to migrate vCLS VMs to a different datastore. You can tag vCLS VMs or attach custom attributes if you want to group them separately from workload VMs, for instance if you have a specific meta-data strategy for all VMs that run in a datacenter.

In vSphere 7.0 U2, new anti-affinity rules are applied automatically. Every three minutes a check is performed, if multiple vCLS VMs are located on a single host they will be automatically redistributed to different hosts.

Note: When a datastore is placed in maintenance mode, if the datastore hosts vCLS VMs, you must manually apply Storage vMotion to the vCLS VMs to move them to a new location or put the cluster in retreat mode. A warning message is displayed.

The enter maintenance mode task will start but cannot finish because there is 1 virtual machine residing on the datastore. You can always cancel the task in your Recent Tasks if you decide to continue.
The selected datastore might be storing vSphere Cluster Services VMs which cannot be powered off. To ensure the health of vSphere Cluster Services, these VMs have to be manually vMotioned to a different datastore within the cluster prior to taking this datastore down for maintenance. Refer to this KB article: KB 79892.

Select the checkbox Let me migrate storage for all virtual machines and continue entering maintenance mode after migration. to proceed.

Huh, the checkbox is greyed out and I can’t click it.
I vMotioned the vCLS VMs manually, and the maintenance-mode task kept moving along.

How a Small Mistake Became a Big Problem

Background

This story is about the implementation of compliance requirements, and the technical changes made that caused some heartburn; in particular, Exchange Server and retention policies.

Very simple: compliance and regulatory practices.

A retention policy was becoming enforced, on Exchange no less; see here for more information on how to configure retention policies on Exchange.

You might notice you have to create tags for time frames. In this case there wasn’t one pre-populated with what I needed, so you have to create it. You may have also noticed that every named time frame is defined by a number of days.

Human Error

So, long story short: I wrongly defined the number of days for the retention period I wanted, simply due to bad arithmetic; I swear I was an ace at math in school. Anyway, this small mistake was made in the tag definition, and it was deployed to all mailboxes. (Yep, there were approval steps, and it wasn’t caught even by the pilot users.)
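To illustrate how easy the slip is (hypothetical numbers; I’m not saying which period was actually involved), converting years to days by hand invites off-by-leap-day and transposition errors:

```shell
# Hypothetical: a 7-year retention tag expressed in days.
naive=$((7 * 365))          # 2555 - ignores leap days entirely
with_leap=$((7 * 365 + 2))  # 2557 - a 7-year span typically contains ~2 leap days
echo "$naive $with_leap"    # prints: 2555 2557
```

Double-check the arithmetic before the tag goes out; once the policy runs, the damage is already done.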

Once it was discovered, there were two options: 1) wait till specific people noticed and recover as required, or 2) do it all in one swoop.

Recover deleted messages in a user’s mailbox in Exchange Server | Microsoft Docs

After following this, it was determined that we couldn’t find just the emails from the time frame we needed to restore. It turns out this was because every email’s “whenChanged” timestamp became the same time the retention policy came into effect, so filtering by date was completely useless.

Digging a Hole

At this point we figured we’d just restore all email and let the retention policy rerun with the proper time-frame tag applied. While this did work, there is a recently added technique/property that would have restored the emails into the subfolders they were removed from. Instead, all the emails were placed back into users’ Inboxes.

This was a rough burn. Overall it did work; it just wasn’t very clean, and there was some fallout from the whole ordeal.

Hope this story helps someone prevent the same mistakes.

SharePoint Site Has Not Been Shared With You

SharePoint Site Not Accessible

Overview Story

Created brand new SharePoint Teams site.

Enabled Publishing Feature.

Created Access groups, Nest User in Group.

 

Troubleshooting

First source: disable the Access Request feature. This feature was enabled; I disabled it. Result…

Second source: far more things to try.

  1. Cache: not the case; same issue from any browser and client machine.
  2. *PROCEED WITH CAUTION* Stop and start the “Microsoft SharePoint Foundation Web Application” service from Central Admin >> Application Management >> Manage services on server. If you face issues, use the STSADM command line:
    cd “c:\program files\common files\microsoft shared\Web Server Extensions\16\BIN”
    Stop:
    stsadm -o provisionservice -action stop -servicetype spwebservice
    [Time taken ~30min not sure went for a break since it was taking so long]
    iisreset /noforce [!1min]
    Start:
    stsadm -o provisionservice -action start -servicetype spwebservice
    [Roughly 15 min]
    iisreset /noforce

Result: SharePoint completely broken. Central Admin is accessible, but all sites are in a non-working state. I cannot recommend trying this fix, but if you have to, ensure you are doing so either in a test environment or with a backup plan.

I managed to resolve the issue in my test environment. As it turned out, the sites were all defined to be HTTPS, but the bindings had been done manually in IIS to the certs created, and then the AAMs updated in Central Admin. Sources one and two.

When running these commands, SharePoint is not aware it must recreate the HTTPS bindings, so this has to be done manually.

Result:

Ughhhhhhhhhhhh!

3. If you migrated from SharePoint 2010, or did a backup-restore/import-export: check whether your source site collection is in classic Windows authentication mode while the target is in claims authentication.

Not the case, but I tried it anyway, and the result…

This is really starting to bug me…

4. If you have a custom master page, verify it’s published! Checked-out master pages can cause this issue.

We did make changes to the master page to resolve some UI issues, but these had to be published just to have the changes show, so yes, it was published; no change in result.

5. If you have the “Limited-access user permission lockdown mode” feature enabled at the site collection level, deactivate it, because it prevents limited-access users from getting form pages!

Found this, deactivated it and….

Ughhhhh….

6. On publishing sites: make sure you set the cache accounts, Super User & Reader, to a domain account with appropriate rights for the SharePoint 2013 web application.

I’ve read this from a different source as well; however, none of my other team sites with publishing enabled have these extra properties defined, and they work without issue. I decided to try anyway.

Result:

7. If you didn’t run the Products Configuration Wizard after installation/patching, you may get this error even if you are a site collection administrator. Run it once to get rid of the issue.

Let’s give it a whirl… I pushed this down from #2 as it’s pretty rough, like the suggestion I listed as #2 (which I should probably shift down for the same risk reasons). I did try this before the other ones, and like the stsadm commands it broke my SharePoint sites: it reported errors about features defined in the content database of attached sites that were not installed on the front-end server.

In this case I tried to run the scripts I had written to fix these types of issues (publishing of the script pending), but it wouldn’t accept the site URL, stating it was not a site collection. At this point, running Get-SPSite returned an error…

and sure enough…

AHHHHHHHHHHHH!!!!!!!!!

I asked my colleague for help, since he seems to be good at solving issues when I feel I’ve tried everything. He noted the master pages and layouts library had unique permissions; setting it to inherit from the parent made the pages finally load. But is this normal? I found someone asking about the permissions here; apparently it shouldn’t be inherited, and they list the default permission sets:

Yes, Master Page Gallery use unique permissions by default.
Here is the default permissions for your information:
Approvers SharePoint Group Approvers Read
Designers SharePoint Group Designers Design
Hierarchy Managers SharePoint Group Hierarchy Managers Read
Restricted Readers SharePoint Group Restricted Readers Restricted Read
Style Resource Readers SharePoint Group Style Resource Readers Read
System Account User SHAREPOINT\system Full Control

Setting these defaults, I still got the “has not been shared” problem; for some reason it only works when I set the library to inherit permissions from the parent site.

When checking other sites, I was intrigued that only the Style Resource Readers group was defined. I found this blog on a similar issue, where sub-site inheritance is broken, which interestingly enough directly mentions this group.

“It occurs when the built-in SharePoint group “Style Resource Readers” has been messed with in some way.”

Will check this out.

So I went through his entire validation process for that group, and it all checked out. However, even after I broke inheritance on the master page library, set the “known defaults”, and verified all Style Resource Readers users were correct and that its permissions on all listed libraries and lists matched, I continued to receive the “site not shared with you” error.

I don’t feel the fix I found is correct, but I can’t find any other cause or solution at this time. I hope this blog post helps someone, but I’m sad to say I wasn’t able to determine the root cause nor the proper solution.

 

Renew Certificate with same Key via CMD

certutil -store my

The command above can be used to get the serial number of the cert that needs renewing. It shows the machine store; if you need certs from the user store, add the -user switch (certutil -user -store my).

certreq -enroll -machine -q -PolicyServer * -cert <serial#> renew reusekeys

If you get the following error:

Ensure the machine account has Enroll permission on the published certificate template. For step-by-step guidance, follow this blog post by itexperience.

If you get the error “The certificate authority denied the request. A required certificate is not within its validity period when verifying against the current system clock”:

Ensure the Certificate you are attempting to renew is not already expired.

If it is, follow my guide on creating new certs via the CLI; afterwards, the renewal should succeed.

*NOTE* This option archives the old certificate and generates a new one with a new expiration date, the same key, and a new serial number. How services bound to the certificate update themselves, I wasn’t sure; for this test I did not have the certificate bound to any particular service. On verifying, the web server using the cert did automatically bind to the new one, but I’d still recommend you verify where the certificate is being used and ensure those services are updated/restarted accordingly to apply the changes.

Fixing SharePoint Search

Broken SharePoint Crawl

Issue: SharePoint Search

First Issue: Crawl ends in 1 min 20 seconds.

Solution: check whether the page loads when queried from the front-end server. If you get a credential prompt three times, you have to disable loopback checking.

How to:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa

Add a new DWORD value named DisableLoopbackCheck and give it a value of 1. After setting the value reboot your server.

reg add HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa /v DisableLoopbackCheck /t REG_DWORD /d 1

Second Issue: Crawl appears to complete but Errors in Log

Log Details of “the content processing pipeline failed to process the item ExpandSegments”

You may find lots of sources saying to watch out for multi-valued attributes or property names, such as this nicely detailed blog post; however, watch the details in the log message, as they’re a bit different. I also feel this TechNet question was answered incorrectly.

I tried some of the basics, such as clearing the index and adding the search account as a site admin; these did not help. On that note, I learned that separate search content sources within one search application do not have their own indexes. You have to create a dedicated search application for each SharePoint site if you want them to have their own index (“database/tables”).

After a bit more scouring, I found many instances of the same problem, with the solution always being to create a new search application, such as this TechNet post and this TechNet post.

Solution:

Remove the content source from the existing Search Service Application.

Create an entirely new Search Service Application.

Reboot the Front End.

Add Crawl Content on new SSA. Crawl should work.

Third Issue: Red X on Search Service Application Services

*NOTE* After creating the new search application, a front-end reboot is required to clear the red Xs on some of the application service statuses.

Fourth Issue: Search results not working

This, from my testing, appeared to resolve the crawl issue at hand. However, I wasn’t able to get results when entering data into search. After a bit of searching and testing, I found the solution: you have to associate the web application with the new search service application (in my testing I created a new, uniquely defined search application to keep a separate index from the other SharePoint sites).

Solution:

Navigate to Central Admin > Application Management > Manage web applications >Highlight the web application > Select Service Connections from top ribbon > Make sure your Search service application is selected.

This was it for me; however, if you still experience issues, I have also read that updating the front-end servers can help, as blogged about by Stefan Goßner.

Fifth Issue: Runaway Crawl

For some unknown reason, when I checked the SharePoint front end after the weekend, I noticed it was using near 100% CPU, very similar to a front end actively doing a crawl. I knew this wasn’t a normal time for a crawl, and it generally never happens during this time.

To my amazement, three crawls had been ongoing for over 70 hours. Attempting to stop them via the front-end option “Stop all crawls” resulted in them being stuck in a stopping state.

Googling this, I found that stopping and starting the SharePoint Search service under services.msc did bring them back to “Idle”.

Now I believe this next step is what caused my next issue, but it is uncertain.

I clicked Remove Index for the primary search service application. It was stuck at “This shouldn’t take long” for a really long time, and I believe it was my impatience here that caused the next problem, as I restarted the search service again afterwards.

Sixth Issue: Paused by System

All solutions I found for this issue were unsuccessful. Even after rebooting the server, the primary search service’s content crawl states were “Paused by system”, regardless of:

    1. Pausing and Resuming the Search Services
    2. Restarting the Search Services
    3. Rebooting the server

I was only able to resolve this issue the same way I fixed the initial crawl issue: rebuild the search service application.

I hope this blog helps anyone experiencing issues with SharePoint Search functionality. Cheers!

Verifying Web connections to SharePoint Sites

Simple as:

Get-Counter -Counter '\web service(_total)\current connections'

You can also use Performance Monitor with the same filter/“counter”.

To view all filter/“counter” types (*note: a lot of output; you may want to pipe it to a file):

typeperf -qx

This is useful info if you plan on making site changes and need to know how many people might be affected. At the time of this writing I haven’t been able to figure out WHO these connections belong to, though I believe with more knowledge this might be possible. I’ll leave this open for the time being and come back to it when time permits.

Rename Lync/Skype Server

Overview

Short answer: don’t rename; start fresh.

If in a clustered topology with a dedicated CMS: remove the server from the topology, add a new server with the new name to the topology, and redeploy fresh.

My Experience:

Step 1) Restore the Lync VM created on the resource domain; change the hostname.
Step 2) Stop all Lync services.
Step 3) Create an SRV record for _sipinternaltls, and three A records (admin, meet, dialin).
Step 4) Remove the old CMS, else topology deployment will fail (Remove-CsConfigurationStoreLocation).
Step 4.1) Extend storage to 80 GB, else publishing will fail. (https://docs.microsoft.com/en-us/skypeforbusiness/troubleshoot/server-install-or-uninstall/error-when-install-lync-server#:~:text=This%20issue%20occurs%20because%20the,16%20GB%20of%20free%20space.)
Step 5) Open Topology Builder; if the hostname is the same, open the file, if it’s a new host, create a new topology. Publish.
FAIL

Step 1) Cloned Server
Step 2) Removed from Domain (disconnected)
Step 3) changed Hostname, and joined Domain
Step 4) Run Deployment Wizard -> Prepare First Standard Edition server
Step 5) Run Deployment Wizard -> Setup or Remove Lync Components -> Step 2
FAIL

Step 1) Cloned Server
Step 2) Removed from Domain (disconnected)
Step 3) remove all local SQL instances
Step 4) Delete C:\CSData
Step 5) Rename, Join Domain
Step 6) Run Deployment Wizard, Prepare First standard edition server
Step 7) Adjust DNS (_sipinternaltls, admin, dialin, meet)
Step 8) Remove-CSConfigurationStoreLocation
Step 9) Create and Publish Topology
SUCCESS
However Install CMS: failed:
https://social.technet.microsoft.com/Forums/en-US/88fc3dc6-cf74-474e-baf7-08609211ac1b/cannot-open-database-quotxdsquot-requested-by-the-login-the-login-failed-login-failed-for-user?forum=lyncdeploy

Long answer (source): this info has been shared on many blogs; I’m not sure who originated the list.

  1. Remove Skype for Business server from topology
  2. Publish topology.
  3. Run Skype for Business Server Deployment Wizard local setup on server to remove Lync components (or run the bootstrapper)
  4. Uninstall SQL Server. Front-ends have LyncLocal and RTCLocal instances. Remove both, rebooting between instance removal.  Edge only has RTCLocal instance.
  5. Remove SQL Server 2012 Management Objects (x64)
  6. Remove SQL Server 2012 Native Client (x64)
  7. Remove Microsoft System CLR Types for SQL Server 2012 (x64)
  8. Remove Microsoft Skype for Business Server 2015, Front End Server
  9. Remove Microsoft Skype for Business Server 2015, Core Components
  10. Delete leftover data:
    Delete C:\Program Files\Microsoft SQL Server
    Delete C:\Program Files\Microsoft Skype for Business Server 2015
    Delete C:\CSData
  11. Rename server and restart
  12. Wait until AD replication completes with new server name.
  13. Open Topology Builder, add a new server to existing pool and publish. (If this is a SBA or Standard Edition Server, the pool and the server FQDN is identical.)
  14. Reinstall Skype for Business Server 2015 components and all cumulative updates
  15. Generate new certificate with updated server name and assign to appropriate services using Skype for Business Server 2015 Deployment Wizard.
  16. Restart all servers in pool at same time (only relevant for front-end servers in an Enterprise pool).

Summary

Don’t do it. It’s so much easier to start fresh than to deal with this garbage.

Audit Client Side Outlook Archive Settings

Why…

What You Need to Know About Managing PST Files (ironmountain.com)

How?

[SOLVED] Powershell Script find pst files on network – Spiceworks

From this guy’s script I wrote a simple script of my own.

As noted, the current issue is that it only works when running under the user’s current context. Though I know the results from testing, I can’t source any material on the CIM_DataFile class denoting how or why this is the case.

For the time being this post will remain short, as it’s a work in progress: I haven’t been able to resolve the interactive part of the script, and I’m not happy with requiring an open share and a logon script. The script in its current state isn’t even written to support that design.

Will come back to this when time permits.

BitwardenRS Upgrade to Vaultwarden

The Story

A while ago I blogged about installing BitwardenRS, the on-prem version of Bitwarden, which is amazing by the way.

Recently they announced they are changing the name out of respect for the original project, to avoid confusion.

You can follow this guy’s great video if you happen to use UNRAID (which I haven’t used myself, but it looks really neat).

If you followed my blog then you are running bitwardenrs via docker-compose.

In this case it was actually simpler than I thought.

Updating/Upgrading BitwardenRS to Vaultwarden

If you are simply updating to the latest build with the same old name.

Step 1) bring down the Container

cd /path/to/dockerimage
docker-compose down

Step 2) pull the latest build

docker-compose pull

Step 3) Bring up the new container

docker-compose up -d

That’s literally it, and it’s a super fast process.

However, if you want to use the new image, you’ll have to change the name of the source project in the docker-compose YAML file:

Change the image: line

image: vaultwarden/server
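In context, the relevant part of the docker-compose.yml would look something like this (a sketch only; the volume path and port mapping are examples, not values from my original setup):

```yaml
version: "3"
services:
  vaultwarden:
    image: vaultwarden/server   # was: bitwardenrs/server
    restart: always
    volumes:
      - ./vw-data:/data         # example data path
    ports:
      - "8080:80"               # example host port
```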

Then, just like before: bring down the container, pull the new image, and bring it back up.

Important Change (broken Email)

After updating, I wasn’t at first aware of an issue (as I don’t normally manage multiple users and orgs); however, attempting to add a user to an org, I got an error: SMTP improper auth mechanism selected.

No matter which mechanism you pick, the error remains (against a standard port 25 anonymous connection), and no matter what you enter in the “admin portal” under the SMTP configuration area, the same error persists. My colleague started to dig through the source code, and the logic seemed clean. The issue is that once you configure specific environment variables (e.g. SMTP_USERNAME=[username] and SMTP_PASSWORD=[password]), they are for some reason not overwritten when redefined in the admin portal. Since the username field was defined, the code built an authenticated connection and expected a proper auth mechanism; and since no auth mechanism was defined in the environment variables, and the admin panel SMTP settings were not overriding them, it never hit the code path that makes a standard anonymous port 25 connection.

To fix the issue you have to remove those two lines from the docker-compose YAML file.

So ONLY define:
SMTP_HOST=mail relay DNS name
SMTP_FROM=from address
SMTP_PORT=25
SMTP_SSL=false

Save the YAML file, bring the container down, and then bring it back up.

Watch as email works again. 😀

Super thanks to my buddy GB for the deep code analysis to help resolve this issue. 🙂