Run BitWardenRS with Internal PKI

I recently covered installing BitWarden_RS; that setup used Let's Encrypt, which is great for public-facing services.

Private industry that likes to run on-prem sometimes doesn't want to have the front end exposed to the interwebs. Without any direct NAT and security rules allowing external entities to hit the Bitwarden server at all, HTTP validation (which these scripts use) will fail. Even if you configured them to use DNS validation instead, getting the certs onto the server still requires access of some kind if automation is wanted.

With an internal PKI, certificate lifetimes can be greatly extended and the whole chain kept entirely in-house, if one so pleases.

So this guide continues on from the last one, just before letsencrypt is installed but after the NginX setup has been configured to allow the challenges. I might simply pull that part out of the includes section of the NginX config as it won't be needed, but let's move on.

Now, letsencrypt uses the /etc/letsencrypt path to store certs and keys. Since I will be using this all just for nginx, I'll use /etc/nginx/certs:

mkdir /etc/nginx/certs
cd /etc/nginx/certs
openssl req -new -newkey rsa:2048 -nodes -keyout bwserver.key -out bwserver.csr

Use cat to open the CSR and copy and paste the contents:
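For example (the filename matches the CSR generated above):

cat /etc/nginx/certs/bwserver.csr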

Navigate to your internal CA server, request a certificate -> advanced certificate request, set the template to use to Web Server, and paste in your CSR.

Then save the file (for now I saved both the Base 64 and DER versions and used WinSCP to copy them to the server).

Now, I noticed that the config uses PEM files, so I found out how to convert the cert into what I need:

openssl x509 -inform der -in /home/zewwy/bwserverCert.der -out /etc/nginx/certs/bwserver.pem
$EDIT sites-available/bitwarden

Adjust the HTTPS section under the HTTP section accordingly:

#
# HTTPS
#
# This assumes you're using Let's Encrypt for your SSL certs (and why wouldn't
# you!?)... https://letsencrypt.org
server {
    # add [IP-Address:]443 ssl in the next line if you want to limit this to a single interface
    listen 0.0.0.0:443 ssl;
    ssl on;
    ssl_certificate /etc/letsencrypt/live/[your domain]/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/[your domain]/privkey.pem;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    # to create this, see https://raymii.org/s/tutorials/Strong_SSL_Security_On_nginx.html
    ssl_dhparam /etc/ssl/certs/dhparam.pem;
    keepalive_timeout 20s;

    server_name [your domain];
    root /home/data/[your domain];
    index index.php;

    # change the file name of these logs to include your server name
    # if hosting many services...
    access_log /var/log/nginx/[your domain]_access.log;
    error_log /var/log/nginx/[your domain]_error.log;

    location /notifications/hub/negotiate {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Forwarded-Proto https;
        proxy_connect_timeout 2400;
        proxy_read_timeout 2400;
        proxy_send_timeout 2400;
    }

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Forwarded-Proto https;
        proxy_connect_timeout 2400;
        proxy_read_timeout 2400;
        proxy_send_timeout 2400;
    }

    location /notifications/hub {
        proxy_pass http://127.0.0.1:3012;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
    #
    # These "harden" your security
    add_header 'Access-Control-Allow-Origin' "*";
}

In this case I adjusted the cert paths in that section to my now internally signed ones:
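For reference, the relevant directives in my setup ended up pointing at the internally signed cert and the key generated earlier (paths assume the /etc/nginx/certs layout used above):

ssl_certificate /etc/nginx/certs/bwserver.pem;
ssl_certificate_key /etc/nginx/certs/bwserver.key;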

After this, follow the remaining part of the Bitwarden install guide… When I did, I was able to get it up, but I got a cert error. At first it was because, in my environment, I didn't have my offline root cert installed on the client. After I got that, and verified my intermediate sub CA was good (I verified it by navigating to my CA certsrv site and it was all green), I was still getting an error even though my chain was green across the board…

Oh yeah… shit, Chrome requires a SAN even if no alternative name is ever planned to be used… Thanks, Google!

OK… let's back up a bit… stop the docker instance:

docker-compose stop

Now I should just need to reconfigure that nginx bitwarden file after creating new certificates with a SAN in them… but how to do that with OpenSSL? A little more googling and I found a great guide by Paul Kehrer from almost 10 years ago… the first thing I read is…

“SAN CSRs cannot be generated using the interactive prompt in OpenSSL” … Why?! it’s now literally standard… and the prompts don’t even ask for it.. what is this an IMSVA?! :@

Anyway… let's follow along:

cd /etc/nginx/certs
nano req.conf
[ req ]
default_bits        = 2048
default_keyfile     = bwserverSAN.key
distinguished_name  = req_distinguished_name
req_extensions     = req_ext # The extensions to add to the self signed cert

[ req_distinguished_name ]
countryName           = CA
countryName_default   = CA
stateOrProvinceName   = MB
stateOrProvinceName_default = MB
localityName          = WPG
localityName_default  = WPG
organizationName          = ZWY
organizationName_default  = ZWY
commonName            = bitwarden.zewwy.ca
commonName_max        = 64

[ req_ext ]
subjectAltName          = @alt_names

[alt_names]
DNS.1   = bitwarden.zewwy.ca
DNS.2   = www.bitwarden.zewwy.ca
DNS.3   = bw.zewwy.ca

openssl req -new -nodes -out myreq.csr -config req.conf

K… checking our files, we just need to re-sign our new CSR at the internal CA, copy the resulting cert back to the server with WinSCP, convert it with the openssl command, and check our files are as needed:
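The conversion is the same openssl command as before; something like this, assuming the SAN cert was saved as bwserverSANCert.der (filenames here just mirror the earlier step):

openssl x509 -inform der -in /home/zewwy/bwserverSANCert.der -out /etc/nginx/certs/bwserverSAN.pem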

Let's change our nginx config:

nano /etc/nginx/sites-available/bitwarden

test it, confirm it, apply it, and bring up our docker instance again:
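Roughly, that boils down to the following (assuming $DOMAIN is still set from the original install session; otherwise substitute the path):

nginx -t
service nginx restart
cd /home/docker/$DOMAIN && docker-compose up -d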

and test it from the client side…

I hope this helps someone, mainly future me.

Cheers.

*UPDATE 2023* Very helpful thank you.
Note 1) When generating a new certificate, the old private key is reused unless you generate a new private key.

Note 2) You only need to restart the nginx service; you don't need to drop the container and re-pull it unless you want the latest updates to the container software.

vCenter SSO


The other day I covered installing vCenter.

Today I’ll do a very quick overview on setting up SSO with a Windows based AD Auth.

DNS

Step 1) Validate vCenter can reach an AD server via the root domain name:
*Use the AD server for DNS; 3rd-party DNS leads to failure as it's missing the specialized records, e.g. SRV records*
*Ensure time is synced to within 5 minutes of the AD server*

I SSH'd into the VCSA as root, entered "shell", and used a regular old ping command to validate.
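Something along these lines, assuming the AD root domain is zewwy.ca (substitute your own):

ssh root@vcsa
shell
ping zewwy.ca
# optional: confirm the SRV records resolve too, since that's what 3rd-party DNS usually lacks
# (if nslookup isn't on the appliance, run this from a Windows box instead)
nslookup -type=SRV _ldap._tcp.zewwy.ca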

Step 2) Follow Virten’s Guide for doing the Flash way, or CLI way to join vCenter to the Windows Domain. Via the HTML5 Web Client: Menu -> Administration -> SSO -> Configuration -> Active Directory Domain -> Click Join AD (hidden behind the menu in the snippet)

Enter the domain to join, and an account that is allowed to join systems to the domain; in my case I used my Domain Admin account:

Populate the fields and click join, and sure enough you will join the domain without issue… if you have a properly working NTP/AD architecture, that is…

Thanks VMware… Ugghh. OK, and if I use the CLI, maybe I get a more verbose error?

What do you mean, "DC not found"? What kind of PCLoadLetter error is this? I just verified lookup via DNS, which is the primary pre-req besides firewalls, and I had already configured my actual firewalls… so what gives? Googling this error leads me to this.

and I quote “On ESXi 6.5, the command is executed from /usr/lib/likewise/bin. If you haven’t enabled the AD firewall rule mentioned earlier, you must temporarily unload the ESXi firewall – assuming it is enabled – for this to work. Failing this, you will get an Error: NERR_DCNotFound [code 0x00000995] error.”

Are you ****in’ with me…. for reals… man wtf VMware….

Shit, right, this is the VCSA, not an ESXi host… ugggh, quick research…

What… da… How did I not know about this?! There's a special VCSA management page; everything online just uses the "Web Client", which all of VMware's documentation assumes to be the Flash client, and that doesn't reference this page at all!

https://vcsa:5480

Alrighty then… logging in… mhmm

That’s awesome but I don’t see firewall, maybe if I navigate to networking…

Nope, NICs settings and that’s about it:

C’mon those firewall settings have to be here, I don’t want to have to be forced to use flash…. cmon…..

F*** it says it’s for 6.7 I’m clearly on 6.5 there has to be a way…

After some deeper digging (I found out VCSA uses Python scripts against specific files to build the firewall), talking the problem over with someone on the IRC channel #vmware, and digging a bit further, I found this VMware post…

At first I was simply using third-party DNS with JUST an A host record for the AD server, and none of the other service records for LDAP or anything else. After changing my DNS settings on the VCSA to point to the AD server itself, I got a different error at the CLI:

Bahhh what? oh wait… lol all my time is wrong, everywhere…

NTP – Fixing Time

Actual time 8:20 PM Winnipeg Central Time. Mon Oct 7, 2019

AD server time: 2:09 PM Mon Oct 7, 2019 (CST)

VCSA time: Tue Oct 8 01:15:08 UTC 2019

What a gong show… let's fix this! First, MS states to leave the PDC on system time, getting it from the host, as the host gets accurate time; well, not for me. I could point the host to an external source and wait for the PDC time to change automatically. But if you want to domain-join the hosts, they should follow the hierarchy and use the PDC for time, a catch-22. So instead, the PDC points to an external source, and the hosts point to the PDC for time and DNS (this allows for easily changing the external time provider with no time sync issues).

So fixing PDC time:

before:

after

Now the time has changed and my firewall shows the successful packets, but why is my offset still so off? And why is my time an hour off?

Here’s my local workstation:

Yet here’s my PDC:

OK, everything I checked online said I did it right, but the syntax in one of the guides I was following didn't seem right, so I tried again, and this time it worked, finally!
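For reference, the usual w32tm incantation for pointing the PDC at an external source looks roughly like this (pool.ntp.org is just a placeholder, not necessarily the exact source or syntax I used):

w32tm /config /manualpeerlist:"pool.ntp.org" /syncfromflags:manual /reliable:yes /update
net stop w32time && net start w32time
w32tm /resync
w32tm /query /status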

K, now I can update each host in my lab….

Before:

Configure:

After:

Finally VCSA itself, https://vcsa:5480 (login as root) -> Time

Before:

Configure:

After:

Yay, after fixing my time everywhere:

Joining VCSA to Windows Domain via CLI

/opt/likewise/bin/domainjoin-cli join $domain $user '$password'

YAY!

Quick Re-Cap:

So bad news is this isn’t as short a blog as I wanted, but good news is we are all learning something! Yay!

Now that we got our system domain joined (reboot required)

waiting… waiting….

Verifying the AD object on the AD server (core, via PowerShell):
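Something like this does the trick (assuming the appliance's computer account is named "vcsa"):

Get-ADComputer -Identity vcsa | Select-Object Name, DNSHostName, Enabled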

and on the HTML 5 Web Client:

Adding Identity Source

Now I can finally follow adding the Identity source A) AD Auth from here.

Click on Identity Sources -> Add Identity Source:

omg finally something that was dead simple…

Defining Permissions

Now click on global Permissions.

Click “+” icon, and if system join is all good it should be able to query the AD and find the users when typed into the Name field:

Lets test it….

Second attempt but pushing to children objects:

and yay this time I was able to get in successfully:

but I had to put in my UPN (user@domain.local); what if I just want to enter my username…

What a bunch of poop; that's because we didn't set the primary SSO domain… back in the VCSA settings (https://vcsa:5480), the summary shows

back on vCenter Web Client, Menu -> Administration -> SSO -> Configure -> Identity Sources -> select new source -> click Set as Default:

login again:

Success! And finally, as the source Virten post stated, the "Use Windows Authentication" option is greyed out unless the Enhanced Authentication Plugin is installed. You can find the download link at the bottom of the login screen.

Summary

That was a bit more painful than I wanted it to be, but it really was nice that it was this painful, because it reminded me of the moving parts that have to be set up correctly for this all to play nicely to begin with.

I hope this guide has helped someone. Please leave a comment, any comment will do!!!

 

BitWardenRS Install

The Story

I've been trying to find a decent password manager. I need team sharing abilities. I wanted to try Psono, but my lack of NginX skills kept me from getting the web client to work, since it was an "optional" install and they didn't give direct instructions. 🙁

I then came across BitWarden but I wasn’t too excited when I couldn’t even create a local “Corporation” to use any team sharing abilities without a license.

I’m not a fan of DRM, period. There’s a forked package trying to change the DLL so you can just generate your own license, meh. All not my thing.

Then I read this guy's blog post. He was in a very similar boat, so now I'm going to blog while following his post to see how easy or hard it really is.

BitWardenRS Install

Pre-Reqs

He talks about “Virtual Server” or IaaS (Infrastructure as a Service), so people who can’t run their own hardware. I’m not in this boat and I will instead spin up my own VM. However if you do not run your own hardware this is a great choice. This of course requires you to trust the owners of the datacenters in which you set these servers up on, and learn the UI’s they provide to create them.

I gave my VM 2 vCPUs, 2 GB of memory, and 20 GB of SSD storage.

I also, as you can tell from this site, run my own domain so I created a record to point to the internal load balancer that will listen on the headers and direct them to this new Ubuntu LTS server. (This required me to double check my firewall and router configuration, as well as my load balancer setup) This was the source blogs “Get your Domain lined up” part.

Time for the funnest part “Set up a Docker Server”

The VM

hahah how sad, see if it even survives with these pathetic specs.

Yeah my usual boot into UEFI menu, then mount the ISO on Console of VM…

As you can see the removable devices is greyed out, but after booting the VM…

and now you can boot an ISO from your client device.

Boot and install Ubuntu 18.04 LTS:

Coffee time!

Step 1) Unprivileged user: add the user: done (note I will change this in production; the default name from the source guide, "ubuntu", was used for ease of following along).
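That is, roughly (using the source guide's default name):

adduser ubuntu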

Step 2) grab basic packages:

apt-get update && apt-get install vim git etckeeper

Corrected git config issues by supplying email and name fields:

git config --global user.email "[your email]"
git config --global user.name "[your full name, e.g. Jane Doe]"

Step 3) Init etckeeper:

etckeeper init
etckeeper commit -m "initial commit of BitWarden host"

Step 4) Docker Deps:

apt-get install apt-transport-https ca-certificates curl software-properties-common pwgen

Install secure key needed to add the docker.com package repository to your system

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

Confirm the key is valid

apt-key fingerprint 0EBFCD88

Step 6) Add Repo

add-apt-repository    "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

Step 7) Apt Update!

apt-get update

Step 8) Docker CE:

apt-get install docker-ce

This is where the guide takes a bit of a dive. He states:

“Add your unprivileged user (“ubuntu” in this case – substitute the unprivileged user you created!) to a new “docker” group and add that user to other useful groups:”

groupadd docker
adduser ubuntu
adduser ubuntu sudoers
adduser ubuntu admin
adduser ubuntu docker

but this led to all of these groups "not existing" except for the last command, so I moved on.
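For what it's worth, on a stock Ubuntu install the admin group is called "sudo" (not "sudoers" or "admin"), and installing docker-ce normally creates the "docker" group already, so something like this is probably what was intended:

adduser ubuntu sudo
adduser ubuntu docker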

Step 9) Create an SSH key for your unprivileged user and allow logins for that user from external connection:

sudo -Hu ubuntu ssh-keygen -t rsa
cp /root/.ssh/authorized_keys /home/ubuntu/.ssh/
chown -R ubuntu:ubuntu /home/ubuntu/.ssh/
adduser ubuntu ssh

Step 10) More install stuff:

apt install python-pip

pip install -U pip

wtf…. nice anomaly…

pip install docker-compose

OK.. lovely… I managed to back track… remove pip:

python -m pip uninstall pip

Then remove python-pip and re-installed it:

apt remove python-pip
apt install python-pip

then do not update pip with pip install -U pip…

Seems that line breaks it. Without running that line, I could install docker-compose:

Not a good sign for Python or pip; not sure who to blame. Either way, this type of stuff blows hard.

Step 11) Fuck there are a lot of steps here…
Set a convenience variable for [your domain] here (note: it’ll only be recognized for this session, i.e. until you log out):

DOMAIN=[your domain]
USER=[unprivileged user, e.g. ubuntu]

 

Create directories to hold both the Docker Compose configurations and the persistent data you don’t want to lose if you remove your Docker containers (namely your password database and configuration information):

mkdir -p /home/docker/$DOMAIN && mkdir -p /home/data/$DOMAIN
chown -R ${USER}:${USER} /home/data /home/docker/

Install the NGINX (pronounced “Engine X”) webserver which will act as a reverse proxy for the BitWarden service and terminate the encryption via HTTPS:

apt-get install nginx-full

Configure the server's firewall and make an exception for the SSH and NGINX services:

ufw allow OpenSSH
ufw allow "Nginx Full"
ufw enable

Create a directory for including files for NGINX

cd /etc/nginx
mkdir includes

Choose your text editor for editing files. Here are options for Vim or Nano – you can install and select others. Setting the EDIT shell variable allows you to copy and paste these commands regardless of which editor you prefer, as it'll replace $EDIT with the full path to your preferred editor.

EDIT=`which nano` or EDIT=`which vim`

 

To support encrypted data transfer between external devices and your server using HTTPS, you need a valid SSL certificate. Until recently, these were costly and hard to get. With Let’s Encrypt, they’ve become a straightforward and essential part of any good (user-respecting) web site or service. To facilitate getting and periodically renewing your SSL certificate, you need to create the file letsencrypt.conf:

$EDIT includes/letsencrypt.conf

and enter the following content:

#############################################################################
# Configuration file for Let's Encrypt ACME Challenge location
# This file is already included in listen_xxx.conf files.
# Do NOT include it separately!
#############################################################################
#
# This config enables to access /.well-known/acme-challenge/xxxxxxxxxxx
# on all our sites (HTTP), including all subdomains.
# This is required by ACME Challenge (webroot authentication).
# You can check that this location is working by placing ping.txt here:
# /var/www/letsencrypt/.well-known/acme-challenge/ping.txt
# And pointing your browser to:
# http://xxx.domain.tld/.well-known/acme-challenge/ping.txt
#
# Sources:
# https://community.letsencrypt.org/t/howto-easy-cert-generation-and-renewal-with-nginx/3491
#
# Rule for legitimate ACME Challenge requests
location ^~ /.well-known/acme-challenge/ {
    default_type "text/plain";
    # this can be any directory, but this name keeps it clear
    root /var/www/letsencrypt;
}
# Hide /acme-challenge subdirectory and return 404 on all requests.
# It is somewhat more secure than letting Nginx return 403.
# Ending slash is important!
location = /.well-known/acme-challenge/ {
    return 404;
}

Now you need to create the directory described in the letsencrypt.conf file:

mkdir /var/www/letsencrypt

Create “forward secrecy & Diffie Hellman ephemeral parameters” to make your server more secure… The result will be a secure signing key stored in /etc/ssl/certs/dhparam.pem (note, getting enough “entropy” to generate sufficient randomness to calculate this will take a few minutes!):

openssl dhparam -out /etc/ssl/certs/dhparam.pem 4096

Start time 12:18, done by 12:22 – yup, a couple of minutes. Then you need to create the reverse proxy configuration file as follows:

cd ./sites-available
$EDIT bitwarden

Shit, the guy thought he had changed directories when you hadn't, and I figured when he said DOMAIN he meant just the domain, not the server FQDN – which you can tell in the next config file part I will get to – but first a quick fix:

and fill it with this content, replacing all [tokens] with your relevant values:

#
# HTTP does *soft* redirect to HTTPS
#
server {
    # add [IP-Address:]80 in the next line if you want to limit this to a single interface
    listen 0.0.0.0:80;
   server_name [your domain];
    root /home/data/[your domain];
    index index.php;

    # change the file name of these logs to include your server name
    # if hosting many services...
    access_log /var/log/nginx/[your domain]_access.log;
    error_log /var/log/nginx/[your domain]_error.log;  
    include includes/letsencrypt.conf;

    # redirect all HTTP traffic to HTTPS.
    location / {
        return  302 https://[your domain]$request_uri;
    }
}

and make the configuration available to NGINX by linking the file from sites-available into sites-enabled (you can disable the site by removing the link and reloading NGINX)

cd ..
ln -sf sites-available/bitwarden sites-enabled/bitwarden

Check to make sure NGINX is happy with the configuration (it was not):

nginx -t

As you can tell… it did not. Only if I copied the file would the config be accepted; linked, it would just fail… sigh… I didn't know why.

*Update: it failed due to either the -sf options or the link not being fully named, but what I found worked was:

ln sites-available/bitwarden sites-enabled/bitwarden.zewwy.ca

If you don’t get any errors, you can restart NGINX

service nginx restart

and it should be configured properly to respond to requests at http://[your domain]/.well-known/acme-challenge/ which is required for creating a Let’s Encrypt certificate.

Ughhhh, wat? There are no files in the dir that's now specified in the config file, and navigating to the URL sure enough gives me an NginX 404… OK, so anyway, I guess I'll just move on since he's not making a lot of sense at this point…

So now we can create the certificate. You'll need to install the letsencrypt scripts:

apt-get install letsencrypt

You will be asked to enter some information about yourself, including an email address – this is necessary so that the letsencrypt service can email you if any of your certificates are not successfully updated (they need to be renewed every few weeks – normally this happens automatically!) so that your site and users aren't affected by an expired SSL certificate (a bad look!). Trust me, these folks are the good guys.
You create a certificate for [your domain] with the following command (with relevant substitutions):

letsencrypt certonly --webroot -w /var/www/letsencrypt -d $DOMAIN

So at first I had forgotten to change the backend in my load balancer to this new server, as I was using my pihole to test access to the server URL externally from the internet, since that's required for HTTP-based validation (that's what I'm assuming these scripts/services are set up to authenticate with, looking at the invalid response). However, even after correcting that I was getting failures…

So frustrated right now; I can't seem to even get a simple HTML file to load… ugggh. Then again, this whole thing hasn't exactly gone as the guide said either.

publishing for now.

to be continued….

OK so, I had asked a buddy of mine I went to lunch with recently if he had experience with NginX, as I remembered him mentioning it. I went on to vent my frustrations, due to my own ignorance, and he offered to double-check my config if I could grant SSH access. This is of course no issue to me, so I made some quick firewall rules and granted him access. He soon mentioned he got it working locally, but still failed to see access externally, even though I was sure I had configured my load balancer correctly… and then it hit me in the face: the firewall I was doing everything else on. Doh! Soon I was able to see the basic HTML page I wanted to see:

Followed the acme "ping.txt" test:
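That test is just dropping a file into the webroot defined in letsencrypt.conf and fetching it over HTTP; roughly:

mkdir -p /var/www/letsencrypt/.well-known/acme-challenge
echo ping > /var/www/letsencrypt/.well-known/acme-challenge/ping.txt
curl http://$DOMAIN/.well-known/acme-challenge/ping.txt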

OK… so now that I can reach that (externally as well), let's try again…

Woooo yes finally… ok lets move on…

Edit the nginx configuration file for the BitWarden service again

$EDIT sites-available/bitwarden

and add the following to the bottom of file (starting the line below the final "}")

#
# HTTPS
#
# This assumes you're using Let's Encrypt for your SSL certs (and why wouldn't
# you!?)... https://letsencrypt.org
server {
    # add [IP-Address:]443 ssl in the next line if you want to limit this to a single interface
    listen 0.0.0.0:443 ssl;
    ssl on;
    ssl_certificate /etc/letsencrypt/live/[your domain]/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/[your domain]/privkey.pem;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    # to create this, see https://raymii.org/s/tutorials/Strong_SSL_Security_On_nginx.html
    ssl_dhparam /etc/ssl/certs/dhparam.pem;
    keepalive_timeout 20s;

    server_name [your domain];
    root /home/data/[your domain];
    index index.php;

    # change the file name of these logs to include your server name
    # if hosting many services...
    access_log /var/log/nginx/[your domain]_access.log;
    error_log /var/log/nginx/[your domain]_error.log;

    location /notifications/hub/negotiate {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Forwarded-Proto https;
        proxy_connect_timeout 2400;
        proxy_read_timeout 2400;
        proxy_send_timeout 2400;
    }

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Forwarded-Proto https;
        proxy_connect_timeout 2400;
        proxy_read_timeout 2400;
        proxy_send_timeout 2400;
    }

    location /notifications/hub {
        proxy_pass http://127.0.0.1:3012;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
    #
    # These "harden" your security
    add_header 'Access-Control-Allow-Origin' "*";
}

You should now be able to run

nginx -t

again, and if you haven't got any accidental errors in the files, it should return no errors. You can restart nginx to make sure it picks up your SSL certificates…

service nginx restart

Nice, it worked, but no verification steps provided in the blog, so I guess I just have to move on again…

Setup Bitwarden Service

Before we start this part, you’ll need a few bits of information. First, you’ll need a 64 character random string to be your “admin token”… you can create that like this:

pwgen -y 64 1

“copy the result (highlight the text and hit CTRL+SHIFT+C) and paste it somewhere so you can copy-and-paste it into the file below later.

Also, if you want your BitWarden server to be able to send out emails, like for password recovery, you’ll need to have an “authenticating SMTP email account”… I would recommend setting one up specifically for this purpose. You can use a random gmail account or any other email account that lets you send mail by logging into an SMTP (Simple Mail Transfer Protocol) server, i.e. most mail servers. You’ll need to know the SMTP [host name], the [port] (usually 465 or 587), the [login security] (usually “true” or “TLS”), and your authenticating [username] (possibly this is also the email address) and [password]. You’ll also need a “[from email] like bitwarden@[your domain] or similar, which will be the sender of email from your server.

You’re going to be setting up your configuration in the directory we created earlier, so run”

yada yada yada something about email…

cd /home/docker/$DOMAIN

and there

$EDIT docker-compose.yml

copy-and-pasting in the following, replacing the [tokens] appropriately:

version: "3"
services:
    app:
        image: bitwardenrs/server
        environment:
            - DOMAIN=https://[your domain]
            - WEBSOCKET_ENABLED=true
            - SIGNUPS_ALLOWED=false
            - LOG_FILE="/data/bitwarden.log"
            - INVITATIONS_ALLOWED=true
            - ADMIN_TOKEN=[admin token]
            - SMTP_HOST=[host name]
            - SMTP_FROM=[from email]
            - SMTP_PORT=[port]
            - SMTP_SSL=[login security]
            - SMTP_USERNAME=[username]
            - SMTP_PASSWORD=[password]
        volumes:
            - /home/data/[your domain]/data/:/data/
        ports:
            - "127.0.0.1:8080:80"
            - "127.0.0.1:3012:3012"
        restart:
            unless-stopped

In my case I tested email via telnet on port 25 and it worked, so I'm hoping this will work.
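The telnet test is just a manual SMTP conversation, roughly like this (mail.zewwy.ca is a placeholder for whatever SMTP host you point it at):

telnet mail.zewwy.ca 25
EHLO bitwarden.zewwy.ca
QUIT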

Note that the indentation has to be exact in this file – Docker Compose will complain otherwise.

With the docker-compose file completed, you’re ready to “pull” your package!

docker-compose up -d && docker-compose logs -f

the “up -d” option actually starts the container called “app” which is actually your BitWarden rust server in “daemon” mode, which means it’ll keep running unless you tell it to stop. If that’s successful, it automatically then shows you the logs of that container. You can exit at any time with CTRL-C which will put you back on the command prompt. If you do want the container to stop, just run.

docker-compose stop

“You should now be able to point your browser at http://[your domain] which, in turn, should automatically redirect you to https://[your domain] and you should see the BitWarden web front end similar to that shown in the attached screen shot!”

Which he didn’t have but to my utter amazement!

Soooo, then every time I went to register/create an account, it wouldn't let me…

It would simply state "Registration not allowed"… and there was only one issue reported about it, with a dull answer.

Dave ends off with: “To do your initial login, I believe (I’ll test this and update this howto!) you’ll be asked to provide your “admin token” to create a first user with administration privileges.”

Then I decided to hit the admin section:

http://bitwarden.zewwy.ca/admin

and I was asked for the admin token, once logged in I invited myself via an email account I have on my own Exchange server:

Yay a successful registration and login!

Summary

That was a lot of work, and in my next post I’ll cover creating an organization so I can finally share passwords securely.. to some degree…

*falls over on to couch*

HUGE shout out to my buddy, Troy Denton. Super awesome dude; check him out on GitHub. I hope this helps someone.

*UPDATE* If you hit this error (which you will, following this guide and default settings)

do this:

Add the following parameter to the nginx.conf file. The default location is /etc/nginx/nginx.conf:

client_max_body_size 105M;
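For context, it goes inside the http block (assuming the stock Ubuntu nginx.conf layout), e.g.:

http {
    # ...existing settings...
    client_max_body_size 105M;
}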

CU update not Showing in WSUS

The Story

Today was a bit annoying…

I did my usual update sync and approved the required updates; in the past this has included CUs without much fuss. However, today I did my usual check for updates on a member machine, which returned clean (as I was expecting per the results in the WSUS console), and then did the follow-up "Check for updates from Microsoft Update"; to my dismay, the server stated an update was available, a CU (KB4516061)… ughhh, OK…

Checking WSUS

I decided to double-check WSUS; to my dismay, re-syncing and checking unapproved updates yielded no new updates. But I know there's a new CU… what gives?

Doing some research, I find this is nothing really new and has been a problem for a while, due to what could be multiple issues, including apparently packaging certain updates into other updates… how lovely.

As the main answer from that one says you can Import them… Ughhh fine…

*Expectation* Download MSU, click Import update, update gets imported to WSUS, and approve.

*Reality* Well, reality is generally always worse than expectations…

Importing Update into WSUS

So I downloaded the 1.5 GB KB from the MS Catalog, and in my MMC snap-in clicked Import update… What do I get? A Windows Explorer popup asking me where the MSU file I want to import is? NO… a link to the MS Catalog website…

Ughh… I already downloaded it what gives…

After a bit more research (honestly, software should not be this non-intuitive, but that's how old software was… non-intuitive…), it turns out this "Import Updates" is not even designed for remote use (uhhh, isn't that the whole point of MMC snap-ins?!?!?). Anyway, OK, so people state you have to use it directly on the WSUS server…

FINE. Log directly into the server, open the WSUS console, click "Import Updates"; IE opens and the page can't be loaded. Strange, since checking the IE security settings, the site being navigated to should be trusted.

Even grabbing the direct catalog link and pasting it in this IE window only gave me the option to Download, not “add” and then “view basket”. It turns out the option to add only becomes available after an ActiveX install for something.

Originally I was not getting this; it wasn't until I read this, found a golden egg on Technet, and very carefully read the answer:

MS WSUS Product Team:

“Just to let you know, a statement from the WSUS Product Team has been published: WSUS Catalog import failures

“We are aware of the issue and presently working on a fix. In the meantime, the following workarounds can be used to unblock your deployment:

After clicking on the “Import Updates…” option in the WSUS console, an Internet Explorer window will open on the following URL: http://catalog.update.microsoft.com/… &Protocol=1.20
Before proceeding with importing the updates, change the “1.20” protocol value in the URL to the previous protocol value “1.8”. The URL should look like this when you’re done: http://catalog.update.microsoft.com/… &Protocol=1.8″

Uhhh ok… so it turns out on the initial pop-up where you get the Windows can’t display this page:

Change end number to 1.8

Yes, and Yes

Once this page loads, you can add the Active X control at the bottom:

Now you get the add, and view basket, and finally get the import option:

Well that was an annoying morning…

And there they finally are…

Another annoying WSUS morning… :S *Update Feb 2020* still valid procedure.

Update Computer Group Membership without Reboot

Source

Purge the computer account kerberos tickets

klist -lh 0 -li 0x3e7 purge

Force the gpo re-evaluation

gpupdate /force

Any previous attempt at access via newly added group membership should now work. For example, I created a new group, added this computer object into it, and created a gMSA granting the group permission to use it; however, the computer was not rebooted after being added to the group that was allowed access to install the gMSA.

PS C:\Windows\system32> New-ADGroup -Name "gMSANewGroup" -SamAccountName gMSANewGroup -GroupCategory Security -GroupScope Domain -DisplayName "gMSANewGroup" -Path "CN=Managed Service Accounts,DC=zewwy,DC=ca" -Description "Members of this group get Access to gMSATest2"
PS C:\Windows\system32> Add-ADGroupMember "gMSANewGroup" -Members "THISCOMP$"
PS C:\Windows\system32> New-ADServiceAccount -name gMSATest2 -DNSHostName gMSATest2.zewwy.ca -PrincipalsAllowedToRetrieveManagedPassword "gMSANewGroup"

Attempting to install the gMSA then fails, as the computer object hasn't updated its group memberships locally even though the change has replicated throughout the domain; but following the commands above to purge the computer's tickets worked:

Hope this helps someone who needs to do granular group control but doesn't have the ability to reboot the host machine due to service disruptions. 🙂

*NOTE* This does not apply to user group mappings. LSASS deals with a user's permissions within groups (use whoami /groups to see what I mean). A gpupdate /force and a klist purge will not cause LSASS to update a user's group membership; users will still need to log off and back on for LSASS to apply new group memberships. Sorry!

Quick Managed Service Account Audit

First get the list of gMSAs from AD:

$gMSAlist = Get-ADServiceAccount -filter {samAccountName -like "*"}

Second, determine the systems allowed to use them:

ForEach ($gMSA in $gMSAlist) {(Get-ADServiceAccount $gMSA -properties *).PrincipalsAllowedToRetrieveManagedPassword}

Yay, we know who can use these accounts… but ARE they currently using them? If this returns a group, look to see the systems in this group; otherwise just access the system in question.
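If the principal is a group, something like this lists its members (the group name here is just the one from the earlier example; substitute yours):

Get-ADGroupMember -Identity gMSANewGroup | Select-Object Name, objectClass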

Third, verify the account is in use by listing all the services on the system and the accounts used to run them:

Get-Service | Select -ExpandProperty Name | ForEach{(Get-WmiObject Win32_Service -Filter "Name='$_'") | Select Name, StartName}

The above command simply lists out all the services and the accounts they run under. It's not optimal as it is slow, but it gets it all; if you need a more readable version, pipe it into Export-Csv, or apply a more granular filter on the result for the gMSAs in question.
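A slightly quicker variant that makes one WMI query instead of one per service (the filter string is just an example; gMSAs show up in StartName as DOMAIN\accountname$):

Get-WmiObject Win32_Service | Where-Object { $_.StartName -like "*gMSA*" } | Select-Object Name, StartName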

That’s about it, if you don’t see the gMSA listed on any service on the target machine, it’s rather safe to assume that the gMSA is not in use and can be safely removed from AD.

Remove-ADServiceAccount gMSAToBeRemoved

Fixing Veeam (Veeam Service won’t Start)

Veeam Won’t Start

Yeap, the one thing you don’t want can happen at the worst time. For me I was testing a hypervisor upgrade scenario, and my host sure enough failed to come up successfully. Well…. shit.

While I was going crazy trying to bring my host back up (the stock ESXi image wasn't good enough because… RealTek; yeah, this mobo I picked was an overall bad choice, sad because it's ASUS)… anyway…

I went to go restore some VMs from backup onto other hosts till I could recover my main host (find that custom ESXi install image) and to my dismay… Veeam console failed to connect…

Failed to connect to the Veeam Backup & Replication server:
No connection could be made because the target machine actively refused it :9392

Ughhhh, what? This is a standalone server: not domain joined, no special service accounts or MSAs, no separate servers. Like, what gives?

Event viewer is literally useless… as nothing shows anywhere for any hints.

First Fix Attempt

OK so, the usual, google, and let’s see here

Like other cases with similar symptoms, not much help and a generic console error, so this fix was worth a shot. What I took away from it was how to do a manual DB backup (assuming that holds all the settings and configuration if a re-install is required), some registry keys used by Veeam, and that this was not the problem (not the droids you are after). I thought maybe I had updated and not tested, as I do tend to shut down instead of reboot, with my limited resources, and, well, Windows is heavy on resources.

HKEY_LOCAL_MACHINE\SOFTWARE\Veeam\Veeam Backup and Replication\SqlServerName (This is the server name where SQL is running)
HKEY_LOCAL_MACHINE\SOFTWARE\Veeam\Veeam Backup and Replication\SqlInstanceName (This is the instance name needed for the connection, which is in the format Servername\InstanceName)
HKEY_LOCAL_MACHINE\SOFTWARE\Veeam\Veeam Backup and Replication\SqlDatabaseName (This is the database name in the Databases folder once you connect)

But sadly no good, as I guess my issue is not related to any lock files on the SQL DB… ok so what else is there…

Second Fix

So I started reading this one, and at first I was thinking, yup, same problem. Reading along – I like Foggy, but them not sharing the answer was rather annoying… then some others reported the solution and my jaw literally dropped (probably why they tell you to call support, because this is some dirty laundry…).

as Tommy stated

“It is very likely to caused by the changing of the host name, do refer to the following link, i managed to my Veeam service started again.”

What….

Sure enough, running the reg query command and hostname showed I had indeed changed the hostname to something more suitable AFTER installation.
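For anyone checking their own install, the comparison is roughly (key name per the forum post quoted below):

hostname
reg query "HKLM\SOFTWARE\Veeam\Veeam Backup and Replication" /v SqlServerName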

Why they'd rely on a reg key vs. simply an environment variable is really beyond me, because the problem with using a reg key for this is pretty clear here…

So let's try to fix this, thanks to the second reply, by spacecrab:

“I know this is an old post, but thank you for replying with this information. I installed Veeam Backup and Replication before changing the default generated hostname, and it was really throwing me through a loop. The fix noted at that url worked perfectly after I rebooted to reset the services. I’ll relay the content here in case that sources goes away.

In my case I had renamed the computer from a default WIN234dfasd type name to a ‘much’ better alternative. Veeam refers to the local computer name in a couple of registry entries and promptly stopped working – which we didn’t notice until later.

The keys are:

HKLM\SOFTWARE\Veeam\Veeam Backup and Replication\SqlServerName
HKLM\SOFTWARE\Veeam\Veeam Backup Catalog\CatalogSharedFolderPath

Backup of the site’s Virtual Machines is now running again.”

Alright, let's update some keys for Veeam… I just used regedit to do this vs. figuring out the exact query (although I probably should figure out a query in case there are other keys, but meh…).
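If you'd rather script it than click through regedit, something like this should do it (NEW-HOSTNAME is a placeholder; also double-check the Catalog path value, since it can embed the old name):

reg add "HKLM\SOFTWARE\Veeam\Veeam Backup and Replication" /v SqlServerName /t REG_SZ /d NEW-HOSTNAME /f
reg query "HKLM\SOFTWARE\Veeam\Veeam Backup Catalog" /v CatalogSharedFolderPath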

and after a reboot… Woah! all the Veeam services are running, sure enough I can connect to my standalone Veeam Server! Wooo thanks Spacecrab!

 

 

Remove "inaccessible" datastore from VCSA

In my previous post I mentioned restoring my ESXi after a bad upgrade. Today when I attempted to add it back into vCenter, it complained stating a Datastore with the same name exists. I was a bit stumped when I saw it showing up under the datastore area as inaccessible, when there should be nothing referencing it. Googling led me to this gem where MikeOD states:

“I figured it out. I was double checking on VM’s on those datastores. Under “related objects”, there were no VM’s or hosts, but there were two old templates that were still referenced by the original VCenter. When I right clicked on the template and selected “remove from inventory”, the data stores disappeared.”

Mhmmm, looking at the associated VM, I checked one of its settings and sure enough, an old ISO was mounted on it:

Just as Mike said, as soon as I removed the association, by changing the VM's CD drive to Client Device, the inaccessible datastore went away.

You can also check for templates, snapshots, etc.

Using VMware Update Manager (VUM)

VUM

Overview

In this post I'm going to try and upgrade one of my ESXi 5.5 hosts to 6.5 using VCSA's now built-in-by-default VUM. I followed this video on YouTube for reference.

The first thing I noticed, which the video doesn't mention, is that (at least for 6.5) the VUM tab is only available using the Flash-based client. If you use the HTML5-based web client, the Update tab isn't shown:

HTML 5 Missing update Tab:

Flash based client with Update Tab:

As soon as I see a video, I can tell which version is being used as its sooo different. (Flash sucks)

Import Image

About a minute into the source video he gets to where the major images are, for major host upgrades. Since I had no existing images provided like in the video, I decided to try the import wizard, which popped up a useful Windows file selection dialog (I was testing this from the latest version of Windows 10 – 1903, with the latest Chrome and its built-in Flash after enabling it). On top of that, I uploaded the image from a UNC path (\\ip\some\folder\file.iso): after verifying access via Windows Explorer, I simply pasted the UNC path into the file selection dialog's address bar and selected the ISO image, and it worked.

Only once an image is uploaded and selected does the option to create a baseline show itself.

Baseline

Let the beat drop! *Bass beat drops* What the point of this is I can’t exactly tell yet, it seems to be a one to one mapping between a name and the ISO image being used?

Paraphrasing the video guide: "Now that we completed this useless step, we navigate to the cluster needing to be upgraded." In my case that's a single 5.5 test host, and then clicking on "Go to compliance view".

Attach Baseline

It would seem the Update tab is available at either the vCenter level, datacenter level, cluster level, or host level, depending on the scope at which you wish to deploy a "baseline". Once within the scope you choose (I'm at host level), click on "Attach Baseline" after ensuring you are still on the Update tab.

Much like the source, after attaching the baseline the compliance level was shown as unknown; let's follow along and "scan for updates".

Now I'm assuming that because I am at the host level I don't see the tabs with compliant and the others, since there is only one host. In this case it does change to "non-compliant", because as the speaker states, "The hosts listed as non-compliant do not match the version of ESXi associated with the attached baseline", AKA these hosts need to be upgraded.

Remediate

Click it to begin the upgrade process for the host/cluster, which will bring up a wizard! Ohhh mighty wizard, guide me to the light at the end of the tunnel!

When I clicked next, Flash gave me an error prompt telling me my session had expired and kicked me back out to the login page, even though I was still pretty actively working on it (snippets don't take that long). Stupid Flash. Logged back in and back to the wizard:

agree to the EULA

Schedule it or do it now by not checking a scheduled time.

Pick your additional options and remediation options. For my test I chose to suspend my VMs, as they can't be vMotioned live since the hosts aren't in an EVC-based cluster; they are all standalone at the time of this writing. So let's try that.

After clicking finish I didn't see anything much happening in the vCenter tasks, so I logged into the host being upgraded and saw it was suspending the VMs in question:

Now it has to copy all the memory from these VMs to disk so this could take a bit of time… then I’d assume I’ll be disconnected from the host once it reboots.

Monitoring my pings for these servers, the pings dropped once the suspend started (makes sense), but the host is still responding (makes sense).

I decided at this point to go get some food (I'm lazy and don't cook), and by the time I returned I was rather shocked to see the host had successfully been updated, showed compliant, and my systems were right back to operational…

Summary

Besides the flash rubbish, this was overall a rather good experience. :O

I think I may upgrade more hosts this way in the future. I didn’t even have to step into my basement at all. That was great!

Until…

I was going to update my second host at home and was hit with this…

Well, wtf… then it hit me in the face… oh yeah… I forgot about that. This is a nice real-world example of third-party, unsupported drivers. When I checked my own blog (luckily it has the reference to the driver and where I got it), it appears it still works for 6.x, so I can only guess I'll have to remove the VIB, run the VUM update procedure, then manually re-install the third-party driver… let's try this!

Remove VIB

Following this as a reference, I did the same thing:

esxcli software vib list

esxcli software vib remove --vibname DLink-528T

Ughhhh…

I remember that script/VIB. He was generally a really cool guy and I really loved his blog posts, but his VIB has been rather garbage… as others have mentioned.

Errors

As the picture shows, a reboot is required now… I let VUM do its thing with the standard ESXi 6.5u2 baseline I was using, and after the server rebooted I got a problem:

“There was a problem with the Network Device specified on the command line. Error: No NIC found with MAC address.”

Discussion :

The NIC to be used as the management NIC has no drivers installed for it.

Ohhh crap, I forgot that when I installed ESXi on this desktop I had to make a custom image, and that is a requirement for systems with custom builds. I had removed the driver for the one (D-Link) NIC, but that was not the ESXi management NIC; the built-in NIC on the mainboard is Realtek and… yeah… anyway, I'll make a post on creating a custom image. After a good while of failing to get what I needed (as this was my hypervisor running my internet-providing VM), I managed to find my initial build, re-installed it manually, re-registered the VMs, re-created the vSwitches, and brought everything back up.

In this case I could have used my Veeam server; however, none of my other hypervisors have multiple NICs, so using them wasn't an option. My lab is definitely not a redundant setup.