Getting A+ Qualys Report

As some of you may know, you can validate the security strength of your HTTPS-secured website using https://www.ssllabs.com/ssltest/index.html

A good read on Perfect Forward Secrecy.

I use HAProxy with Let's Encrypt for my site's security. While setting up those two plugins to work together, I noticed that by default it's not using the most secure suites. OK, the dev shows how you can adjust accordingly… but which ones? This is what I get by default:

Phhh, only a B. Let's get secure here.

A little more searching and I found the base SSL suites from the Mozilla config generator,

which gave me this for the string of suites:

ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384

But then the SSL Labs report still complained about weak DH (the DHE suites rely on the server's Diffie-Hellman parameters, which were apparently not strong enough)… so I had to remove the final two options in the list, leaving me with this:

ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305
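If you're editing haproxy.cfg by hand rather than through a GUI package, here's a minimal sketch of where that string lands (directive names are standard HAProxy; verify against your version's docs):

# in the global section of haproxy.cfg
ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305
# and drop the legacy protocols while you're at it
ssl-default-bind-options no-sslv3 no-tlsv10 no-tlsv11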

Now after applying the setting on the listener I get this!

Mhmmm yeah! A+ baby! But it looks like some poor saps may not be able to see my site:

Too bad, so sad for IE on older OSes, and the same for older Safari on iOS and macOS.

Now let's tackle DNS CAA. Well, I was going to discuss how to set this up, but the linked site covers it well. Since my external DNS provider was listed in the supported providers, I logged into my provider's portal to manage my DNS, and sure enough the wizard made it straightforward to grant Let's Encrypt authority to sign my certificates! Finally one that was actually really easy! Wooo!
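For reference, what such a wizard creates under the hood boils down to a record like this (BIND-style syntax; example.com is a placeholder for your domain):

example.com.  IN  CAA 0 issue "letsencrypt.org"

You can verify it after propagation with dig example.com CAA.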

Now I suppose I can eventually play with the experimental TLS 1.3, but I'll save that for another post! Cheers!


Mitigating from CVE-2018-3646 on ESXi 6.7

To keep this short: the new VCSA 6.7 has VUM built in. No more Flash needed. Yay, finally.

So I uploaded the latest 6.7u3 image, created my baseline, and test-remediated one of my simple laptop hosts. After the system reboots and comes back in the VCSA dashboard… uhhh, what's with this yellow warning icon…. Summary…

OK great; so after years of Intel being ahead of AMD, it looks like it came at the cost of some pretty shitty shortcuts, and those shortcuts have caused a huge problem for Intel, and pretty much everyone else. Since it affected everyone, everyone has some form of write-up on it. In this case VMware has coded the above warning, with a reference to this KB, so you know, read that if you want a dry overview.

As you can see I have what shows as 4 logical processors, but after applying the mitigation (setting VMkernel.Boot.hyperthreadingMitigation to true in the host's advanced settings) and rebooting…
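If you'd rather flip it over SSH than in the UI, this should be the equivalent per VMware's KB on the CVE (double-check the KB before running it on anything important):

# check the current value, then enable the mitigation and reboot
esxcli system settings kernel list -o hyperthreadingMitigation
esxcli system settings kernel set -s hyperthreadingMitigation -v TRUE
reboot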

Yay, the warning's gone, but apparently so are half my logical processors?

If you're wondering why they didn't enable this by default, it's due to system resource management, which of course is exactly what vSphere is. Since it affects the available resources of the host, the host may not be able to accommodate the workload it was originally designed for. In my case it's a lab and my workload is obviously very light, so this isn't an issue for me.

Was it worth the mitigation? I don't really know at this point, as I'm unaware of any easy, simple tactics an attacker could use against my footprint. At the same time, CPU is not my major resource constraint; it's usually memory.

For now, better safe than sorry. In my next post I hope to cover the vCenter upgrade path and an error that happened to me along the way; luckily it wasn't that hard to recover from. 🙂

Cheers!

ESXi 6.5 Stuck on Initializing Scheduler
PSOD PCPU1 could not start

I’m making this post short to note this odd experience with this host build.

The first weird thing was that when trying to install ESXi I couldn't get past vmkusb. Not sure what it was about, and I only found this decent Reddit post with the same problem.

In short, he noticed it would only get past this if a second USB stick was plugged into one of the USB 2.0 ports, and sure enough that worked for me too. Strange…

Then a couple days later, while doing some more test boots, I got a purple screen of death complaining about PCPU1 not starting or some shit. Ugh, again, let's see what others have to say… well, I found this VMware thread on it, where he basically stated that resetting the BIOS settings worked. After farting around with some BIOS settings, I had other failed boots, and my CPU and system were rather hot. I let everything cool down, added more powerful fans, reset the BIOS to factory, and tried again; much like the post, it worked.

After finalizing the build a little more, I switched to an old 2.5″ HDD… same problem, but I noticed it gets stuck on "initializing scheduler" before PSODing. While searching for what might be up with that, I found this.

It helped shit all: same problem next boot. Instead of just resetting the BIOS, I played around with a couple more settings: I enabled Intel AES-NI (which offloads AES computations to the CPU), and another one which I sadly forgot, and then my next boot was fine. Saving this here in case it comes back again.

Upgrading my ASUS RT-N16

The ASUS RT-N16

I love this thing. I remember when I read my first blog posts about the specs and what it could all do…

Wireless

Wireless Frequency Bands: 2.4 GHz
Number of Antennas: 3
WLAN Mode: 802.11n
Transmit Power: 15.5 to 19.5 dBm
Antenna Placement: External

Interface

Ports: 1 x Ethernet (RJ45) (Uplink)
4 x 10/100/1000 Mb/s Gigabit Ethernet (RJ45)
2 x 480 Mb/s USB Type-A

Performance

Throughput: 300 Mb/s
CPU: 480 MHz Broadcom SoC
RAM: 128 MB
Flash: 32 MB

Security

Wireless Security: WEP, WPA, WPA2

Those are some good specs for 2010, pretty much a decade ago. Most of the blogs touted DD-WRT, whose forum site I joined way back in 2012… looking back at my old posts, I didn't seem to get much of any help… but I sure had oddities. I was recently running a Kong mod of DD-WRT (build 22000M), circa 2014; looking it up, I found out he stopped making those and moved on to modded OpenWRT firmware. So I grabbed the latest DD-WRT for my router using the DD-WRT database, factory reset the settings, cleared NVRAM, and used IE on a Windows laptop with a static IP bound to port 1 on the router…

Soft Brick?

I gave the system enough time to boot, but noticed the pings were not coming back up. The power light would flicker during boot and then stay off, while the wireless LED stayed lit.

I thought I may have soft-bricked it, so I grabbed the stock firmware and flashing tool from the ASUS website. To my dismay, even though I could press the restore button and have the power LED blink slowly, indicating it was ready for a TFTP file, the flashing tool would fail, either saying it was not in flashing mode or that it failed to flash. I thought I was hooped, stuck in a soft-lock loop, and figured I would have to JTAG flash it…

Then for shits (since the WiFi LED was on) I wondered if it was broadcasting… and when I checked for available WiFi on my phone I was shocked to see it was. I connected, was shocked again, and could ping the router… wait, what??

Solution?

Sure enough, I could see the DD-WRT web interface… I was stumped and started to Google, but only found one post that was dead on… and as I figured, the solution provided did not work for me; well, the vlan1 check setting BS.

There was another bunch of posts stating to add commands like "swconfig dev eth0 set enable_vlan 1" or some crap; yeah, that didn't work either. Even though people said don't do it, I decided to use the DD-WRT web interface's Firmware Update section over WiFi (either way I would have to JTAG flash it if it failed). So at first I simply used the K2.6 Mini build instead of the Mega; after the flash, the exact same shit, though the power light at least stayed on, and again I could only connect via WiFi. The only other answer out there was "I flashed a newer firmware", which is a timeless statement lol; which exact version, who knows, and I sadly didn't have the old Kong build if I simply wanted to go back.

AdvancedTomato FTW

I was about to try OpenWRT when I decided to look at Tomato again… so I flashed it via the DD-WRT firmware update section (fuck you, DD-WRT) and to my amazement it came up perfectly: WiFi was fine, and I could ping it on a physical LAN port again. Woooo!

Since I wasn't used to the interface, I did need a bit of a hand getting it set up as a simple AP again. I guess it makes sense that DHCP is set at the bridge, so you can set up different NICs for different subnets and still have their own DHCP, but in my case I wanted none.

Then I read this nice post by How-To Geek on configuring traffic monitoring, something I never had on DD-WRT. So not only is the new UI a fresh change, so are some of the features. I really hope there are also a lot fewer bugs, cause DD-WRT with OTRW was buggy and a HUGE PITA.

Optware?

Well, googling did show there was the possibility… and installation seemed straightforward enough… of course, both guides being 8+ years old wasn't too compelling, so I checked if the referenced source script was still accessible… and it was! Nice. Checking the script out, I see another external reference source and check it out too; amazingly, it's still reachable as well.

So I followed along, starting by first creating a partition (512 MB, labeled "optware", as ext2). I did this by USB pass-through of my USB stick to a Linux Mint VM, then simply used gparted to create my partition. I also created a 1 GB and a 2.5 GB ext2 partition, labeled whatever, with the spare space. (I tried a 4 GB partition, but it failed to mount, so I stuck with the recommendations.)
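If you'd rather script it than click through gparted, here's a rough equivalent from the Mint VM's terminal (assuming the stick shows up as /dev/sdb; verify with lsblk before touching anything):

sudo parted /dev/sdb mklabel msdos
sudo parted /dev/sdb mkpart primary ext2 1MiB 513MiB
sudo mkfs.ext2 -L optware /dev/sdb1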

Ran the installation as suggested…

wget http://tomatousb.org/local--files/tut:optware-installation/optware-install.sh -O - | tr -d '\r' > /tmp/optware-install.sh
chmod +x /tmp/optware-install.sh
sh /tmp/optware-install.sh

I did this of course after verifying that my partition was indeed mounted as /opt, and the script ran without issue… amazing…

After that, I first installed htop, cause let's face it, normal top sucks…

ipkg install htop

I followed this up with the main packages I actually use, screen and irssi. This allows me to have a persistent IRC chat client (as long as the AP/router doesn't reboot).

ipkg install screen
ipkg install irssi

Add User?

Now I remember specifically having issues with DD-WRT when adding standard users with limited permissions, specifically with the name showing up weird.

So I searched quickly to see if it was possible, and if so, how people were doing it.

Much like the guy in the first link, I didn't quite follow what was going on, but after checking each line, it eventually made sense. (Basically: define specific environment variables and special files with embedded lines that are all saved to NVRAM; then a script (3 lines) is run to populate the Linux user list.)

I did add "adduser", but much like mentioned elsewhere, it would complain about not having "passwd", and there were no packages for "mkpasswd" or "makepasswd". I wasn't in the mood to change my root system password and run a single stupid line just to set one user's password… :S (

sed -n -e "s,^root:,$UNAM:,p" < /etc/shadow >> /etc/shadow.custom

)

Instead, much like the alternative suggestion on the page itself, "You can also cut & paste passwd and shadow entries from another linux box.", which is exactly what I did: using my Linux Mint VM, I used openssl passwd with a salt to generate an MD5-hashed password.
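Something along these lines, where the salt and password are obviously placeholders (the -1 flag produces the MD5-crypt format the router's shadow file was using):

openssl passwd -1 -salt xyzzy 'MyNewUserPassword'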

Now I was able to SSH in with my new non-root account, YAY!

Now according to the source “These commands need only be done once for each custom username. Thereafter, the user will always be created every time the router boots up. To delete a user, edit /etc/passwd.custom and /etc/group.custom and delete the line with that username, then save them to nvram.”
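For reference, I believe the "save them to nvram" step is the TomatoUSB setfile2nvram trick, something like this (check that your build's nvram tool supports it):

nvram setfile2nvram /etc/passwd.custom
nvram setfile2nvram /etc/group.custom
nvram setfile2nvram /etc/shadow.custom
nvram commit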

OK…. I’m going to reboot now…

Mhmmm, is it going to work…? Oooo… the account is there after reboot in the passwd file… the line exists for the account in the shadow file, and the home dir exists… let's log in… well shit… the password didn't save… still the same as root, even though they differ in the shadow file… k… let's make a new one… save it in the shadow file, re-login, yup, password changed. Now change it in shadow.custom and save, and reboot…. arrrrggggg C'MON.

Maybe you have to re-run those setfile commands when changing a file saved to NVRAM? Second try… there we go! Success.

Screen, Irssi and the fun Stuff

It's been a while since I had to reconfigure this stuff, so I needed someone's blog to help me along the way; it's rather old now… but still good stuff, along with this simple one.

and then run irssi 😀 (by typing irssi and hitting enter)

Silly Rabbit Trix are for… I mean Irssi I’m following a guide already…

/network list

To make adding Freenode easier, instead of having to type /connect irc.freenode.net, I'm going to set up a reference much like the existing ones from the above command:

The above shows names, but not their DNS lookups, which is the server list.

To add our reference:

/server add -auto -network Freenode irc.freenode.net

Now that we have an auto-connecting server, we'd like to specify the user and login details:

/network add -nick Zew -autosendcmd "/msg nickserv IDENTIFY *******" Freenode

Now that my usual helpful sources are added (you can always catch me in one of these places), let's test it all out… run /quit and then irssi again. Which worked! I was joined to my server, authed, and joined to my channels 😀

Use Ctrl + X to switch connected servers, and Esc + Left/Right arrow to move between channels.

And finally, "Ctrl + A then Ctrl + D. Doing this will detach you from the screen session, which you can later resume by doing screen -r."

See you on IRC!


Updates to the Zewwy PiCade

Zewwy PiCade Updates

It's been a while since I completed my PiCade, and a lot can happen in a year. Sadly I have not done more guides around the OS, Lakka; I really hope to provide more of these for anyone who is simply running Lakka on whatever hardware they choose. Since the OS is compiled for many SoCs as well as the plentiful x86/64 architecture, the hardware choices are fruitful.

Anyway, let me cover the first couple of things…

Hardware Updates

1) I had recently brought my arcade to a local Anti-Social hosted by none other than the amazing local Skullspace. (If you are in Winnipeg, check these guys out, they are amazing. And funnily enough, the three people in the main slides are all friends I know well.) While there weren't many places to put it, I decided to test the build by placing it right on top of a subwoofer/amplifier (yeah, test it hardcore). While it did last the majority of the night, by the end the screen had started to flicker on and off; a good whack would usually bring it back (some poor connection somewhere). I had initially thought it might be my self-built power extender for the screen board, but after bypassing it I was still experiencing the issue. Having everything on one board made pulling it out for diagnostics a breeze, and sure enough… simply checking all the connectors, a loose piece was found. I'm not sure exactly what this piece was (googling the numbers on it, it seems it may be an inductor); either way, it had one leg that was clearly broken from its mating surface. So I soldered it back, but wanted to ensure it would hold this time; I didn't have a shrink tube big enough, so I simply grabbed some plastic wrap (food wrap) and gave it a good wrap-up…

and sure enough it's been great since. Not sure I'll put it on a subwoofer again, but it did otherwise hold up great! 😀 That's a pretty good build, I'd say…

2) As you can see from the above picture, I also moved the screen input selector to the top, so it's clickable using a toothpick (I hope to 3D print a button). I'm still annoyed this board has a selector, considering the other input types are not even on the board (though I believe they use the same logic IC regardless, to save costs).

3) I had my buddy at Skullspace help me create an acrylic front protector, since I had to pull the LCD from the glass cover and digitizer that normally comes with the screen (this can be seen as a plus or minus; I think in this case it was a plus, cause the glass was cracked but the LCD panel was fine), which worked out for my build.

4) I wanted a way to keep the "lid" shut, since it was designed to lift up to take an iPad in and out. That's nice for my all-on-one-board design as well, but only when I need to do work or switch SD cards to try different OSes. Since there was no built-in latching mechanism, I decided to use a bit of trick magic I learned from watching a bunch of Chris Ramsay on YouTube, you know, the guy who opens all the puzzles. I first thought of something similar to the technique I used to hold the speakers in… but that would be ugly; it worked great for the speakers since it's from behind and you can't see it… so instead, puzzle magic: I drilled 4 tiny holes in the lid, carefully on each side, making sure not to go through the side vinyl.

Then, using a piece from a paper clip… and a magnet from a fish tank cleaning kit…

I place the clip in the lid hole till it's completely hidden, then shut the lid and use the magnet to pull the pin out into the side piece, thus locking the lid on both sides…

As you can see, I'm lifting it right up by the front edge, which normally would open and pivot on the rear pins, but with my secret hidden pins it's locked, and besides the tiny bit of play you can see, it works great. (I could have attempted a bit thicker of a paper clip, but then I'd risk it rubbing on the walls of the holes I drilled, and the magnet may not have the power to beat the friction.)

As you can also tell, the new acrylic is reflective but protects the LCD screen. I also attached speaker covers, and the pot handle for the volume control. I hope to get the edges 3D printed, as well as the front bezel, which is just grey cardboard right now.

Now, on the back is nothing more than the removable battery which powers the unit.

Software Updates

Now, Lakka's made some updates since my build, which is running RPi 2.1.x; the current release is 2.4.x, so hopefully we get some nice updates. The first thing I did was test the new build on a standalone MicroSD to avoid buggering up my current build. My new main SD card is 16 GB vs 4, which is ironically 4 times the size. Now I have to figure out how to install the newer version while migrating most of my settings (joystick bindings, game listings, and images) with the least amount of work… mhmmm. Neat!

I’m going to follow option B!

“Manual updates

  1. Download the latest img.gz file corresponding to your hardware from here.
  2. Place this file in your Storage partition, in the .update folder, either using SAMBA or by mounting it
  3. Boot or reboot your hardware.

You should see some messages about the upgrade process. After some seconds, your system will reboot.”

The first thing I thought was, I'll use my MicroSD-to-USB adapter and just use File Explorer. Hahah, you silly goose, Windows isn't that user friendly; what I mean by this is that Lakka uses Linux partitions (like ext2/3, maybe 4) and clearly not MS-based NTFS, and MS doesn't natively support such filesystems. This requires additional software, which I don't want to install. I also left my IODD device at work, where all my bootable images are, and I sadly don't have my own PXE (yet)…. great, OK, so there go almost all my direct mounting options, so I guess network-based transfer it will be…. I realllly wanted to DD the card as-is, just in case the upgrade shits the bed, so I could simply DD my backup image back onto the MicroSD… but since I can't do that on Windows… alright, fine….

Downloaded Linux Mint, created a VM, booted the VM, installed Linux Mint, attached a USB controller, passed through the USB device (MicroSD-to-USB), auto-mounted, and used a terminal to make my backup like a movie tech nerd…
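The backup itself is just plain dd piped through gzip, roughly like this (device name assumed to be /dev/sdb; check lsblk first):

sudo dd if=/dev/sdb bs=4M status=progress | gzip > ~/picade-lakka-backup.img.gz

Restoring is the reverse: gunzip -c ~/picade-lakka-backup.img.gz | sudo dd of=/dev/sdb bs=4M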

I can also use this VM to inject the Lakka img. I have it on my machine, but don't have file passthrough on the VMware console; the simplest thing would be another USB passthrough, but I'm out of USB devices right now. So I used a network share. I wasn't sure at first which partition was the "Storage partition", but made a fair assumption:

K, that was quick. Unmount from the VM, remove from the host USB port, place the SD card in the PiCade… and let's see what happens (at least I have a backup now) 😀

Now when I booted it, it did say unpacking, it did update, and it did retain my playlists, with nice new icons for each item. Very nice.

But I lost all my core key-map bindings and ROM remap bindings… that's annoying. My Arcade/MAME games did run with the normal playlist; core changes? Haven't exactly figured that out yet. Since I recently picked up the Genesis Mini, I put that whole playlist on here :D.

VMware Host Oddities

I'll keep this post short, as I find it a rather interesting turn of events…

I was ensuring config files were being backed up, as good sysadmin posture would suggest, when I noticed an issue on one of the hosts: running the configuration backup failed with a "general system error". I wish I would have saved the actual error message, but when I went to research it (I grabbed the most specific line from the callback in hopes of getting something), most results were people complaining about the same error when trying to vMotion, and simply pointed to restarting the ESXi host's management agents. I decided to see if I could vMotion, or if the host had any other symptoms, but everything showed green in vCenter and vMotions worked without an issue.
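For reference, restarting the management agents from an SSH session on the host is just:

/etc/init.d/hostd restart
/etc/init.d/vpxa restart

(services.sh restart is the heavier hammer that bounces all the agents at once.)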

I left it for the night and decided to update the host, since it was running 6.5 U2 and needed to be updated to 6.5 U3; I was hoping that would resolve the config backup issue (I had an older copy on hand but wanted the most up to date). So I decided to do the update via an ISO image and a host reboot. Moved VMs: no problem. Placed the host into maintenance mode: no issues. Sent the host for shutdown (I wasn't using iLO or iDRAC, so I needed to manually mount the USB hosting the installation media)… while I was at the console waiting for the host to show "shutting down", it simply… didn't. After 10 minutes, which is far more than generous, I pressed F2 to get into the console, only to have it tell me… uggghhh, I wish I would have taken a picture of the error, something along the lines of "you can't use the console cause you've been locked out"…. It was weird. After waiting another 10 minutes with nothing happening (I'm assuming there must have been some actual underlying issue with the datastore holding the scratch?), I decided to just do a hard shutdown (hold power for 5 seconds); I could rebuild this server faster if need be. Powering it back up was perfectly fine through all POST checks. I booted the ESXi 6.5 U3 installer, which came up fine, checked for logical drives, and found all of them, including the SD card holding the OS (scratch had been changed to a persistent location, a datastore on the local host built from multiple disks). Anyway, I selected it, it saw there was an existing installation, I chose to update it: successful, reboot. It reboots fine, but the host doesn't come back in vCenter…

Logging into the host directly, I noticed it stated it had 18 VMs, all with a status of "error". Now, I had moved them, and they were still running on another host (thankfully it didn't do anything stupid to try and hijack them). At this point I called VMware support and opened an SR. Once I had a tech on the line, I went over my symptoms and issues, and asked what the best course of action was, as I didn't want to rebuild the host (I could, but time). In this case I simply unregistered all the VMs that were in error (as they were no longer associated with the host), then attempted to re-connect the host. It first failed, complaining about a bad username and password (I'm not sure if this was the root account used to add it, or the VPX user it uses to manage the host), but it prompted the wizard like adding a new host, and since all other settings were still fine on the host (network and vSwitches with VM port groups and VMKs, and all datastores), after this wizard the host was re-added back into the cluster with the latest updates. The config backup now worked, and a quick sanity check of the datastores:

esxcli storage vmfs extent list

came back without error. So yay, all fixed! But one thing still bothered me… what was the root cause?

I checked my scratch settings real quick, and sure enough it still held good, on a redundant datastore on the local ESXi host. So I'm not sure what the root cause was, but it's fixed. I'm posting this for my own reference, just in case this issue re-arises.
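For future me: the usual way to grab a host config backup over SSH is the standard vim-cmd call, which prints a URL to download the bundle from:

vim-cmd hostsvc/firmware/backup_config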


DCs Show CPU Spikes by svchost

I'll try to keep this one short. The other night I was installing updates on some computers. I like to watch system resources when doing this, as Windows is a heavy, HEAVY bitch. As I was scrolling through my VMs, I noticed my DCs hovering around 18% CPU. While that may not sound like much, I know it was high, lol, for what they do. So I went to check the vSphere charts to see how long this had been going on… 1 day… mhmm. 1 week… mhmm. 1 month… could this be normal, and I'd just now caught it? I don't think so… and looking at the year view showed the increase started a couple months back… but what could it be…

I noticed it on my other DCs as well… all in the same time frame.

After a while I noticed it was the svchost process. Using Mark Russinovich's ProcExp.exe, I narrowed it down to the svchost instance hosting DHCP / nethost / eventlog… After many other failed searches, I decided to search for this particular process having CPU issues, and then I found this: exactly what I was experiencing…. funky CPU in Task Manager, that process, all of it. And his answer:

TL;DR: EventLog file was full. Overwriting entries is expensive and/or not implemented very well in Windows Server 2008.

Just as he mentions in detail in his answer, the security log was at max size and being overwritten. Now, I know there isn't much happening at these times of day, so how did the log get filled so fast, and overwritten hard enough to cause CPU spikes? Looking at the log: Palo Alto User Agent logon and logoff events, lots of them. I haven't yet blogged in my Palo Alto series about user mappings and the monitor area of the Palo Alto firewalls, but you can configure Palo Alto to do server monitoring directly instead of using a User-ID agent server, which you can install on a dedicated Windows server to use WMI to query client devices on behalf of the Palo Alto firewall and determine which IP address is being used by whom…

"In most cases, the majority of your network users will have logins to your monitored domain services. For these users, the Palo Alto Networks User-ID agent monitors the servers for login events and performs the IP address to username mapping."

Now, I can't find a good Palo Alto Networks source on it, but when you configure the monitored servers, which "enable the User-ID agent to map IP addresses to usernames by searching for logon events in the security event logs of servers, configure the settings described in the following table,"

which is all good and great; however, the default for this is:

Server Log Monitor Frequency (sec)
Specify the frequency in seconds at which the firewall will query Windows server security logs for user mapping information (range is 1-3600; default is 2)

And apparently this query process is not session-based itself, so every 2 seconds the firewalls were hitting the DCs to see who had what IP based on the logon events, and this in itself was creating a logon and logoff event every 2 seconds. That apparently not only filled the log, but generated enough garbage to keep the security log constantly overwriting, which caused the CPU "spikes".

The solution was to increase this lookup interval, i.e., poll less frequently. This obviously reduces the accuracy of the mapping, but when you have long lease times on your DHCP settings and users don't change networks (like, almost ever), this is low risk, while still retaining the user field information in the Palo Alto monitoring section. This, along with backing up and clearing the security event log, and the systems all went back to low CPU usage.
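If you want the backup-and-clear in one step, wevtutil can do it from an elevated prompt on the DC (the path here is a placeholder):

wevtutil cl Security /bu:D:\Backups\Security-backup.evtx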

Happy happy joy joy


Run BitWardenRS with Internal PKI

I recently covered installing BitWarden_RS; that install used Let's Encrypt, which is great for public-facing services.

Private industry that likes to run on-prem sometimes doesn't want the front end exposed to the interwebs at all, and without any direct NAT and security rules to let external entities reach the BitWarden server, HTTP validation (which these scripts use) will fail. Even if you configured them to use DNS validation, getting the certs onto the server still requires access of some kind if automation is wanted.

With an internal PKI, the life of certs can be greatly extended and kept entirely in-house, if one so pleases.

So this guide continues on from the last one, just before letsencrypt is installed but after the NginX setup has been configured to allow the challenges. I might simply pull that part out of the includes section of the NginX config, as it won't be needed, but let's move on.

Now, letsencrypt uses the /etc/letsencrypt path to store certs and keys. Since I will be using this all just for nginx, I'll use /etc/nginx/certs:

mkdir /etc/nginx/certs
cd /etc/nginx/certs
openssl req -new -newkey rsa:2048 -nodes -keyout bwserver.key -out bwserver.csr

Use cat to display the CSR, and copy-and-paste the contents:

Navigate to your internal CA server, Request a certificate -> advanced request; template to use: Web Server; paste your CSR.

Then save the file. (For now I saved both Base64 and DER, and used WinSCP to copy them to the server.)

Now, I noticed that the config uses PEM files, so I found out how to convert the cert into what I need:

openssl x509 -inform der -in /home/zewwy/bwserverCert.der -out /etc/nginx/certs/bwserver.pem
$EDIT sites-available/bitwarden

Adjust the HTTPS section under the HTTP section accordingly:

#
# HTTPS
#
# The cert paths below now point at the internally signed cert and key
# we created above, instead of the Let's Encrypt ones.
server {
    # add [IP-Address:]443 ssl in the next line if you want to limit this to a single interface
    listen 0.0.0.0:443 ssl;
    ssl on;
    ssl_certificate /etc/nginx/certs/bwserver.pem;
    ssl_certificate_key /etc/nginx/certs/bwserver.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    # to create this, see https://raymii.org/s/tutorials/Strong_SSL_Security_On_nginx.html
    ssl_dhparam /etc/ssl/certs/dhparam.pem;
    keepalive_timeout 20s;

    server_name [your domain];
    root /home/data/[your domain];
    index index.php;

    # change the file name of these logs to include your server name
    # if hosting many services...
    access_log /var/log/nginx/[your domain]_access.log;
    error_log /var/log/nginx/[your domain]_error.log;

    location /notifications/hub/negotiate {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Forwarded-Proto https;
        proxy_connect_timeout 2400;
        proxy_read_timeout 2400;
        proxy_send_timeout 2400;
    }

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Forwarded-Proto https;
        proxy_connect_timeout 2400;
        proxy_read_timeout 2400;
        proxy_send_timeout 2400;
    }

    location /notifications/hub {
        proxy_pass http://127.0.0.1:3012;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    #
    # These "harden" your security
    add_header 'Access-Control-Allow-Origin' "*";
}

As shown above, the key change in this case is the two ssl_certificate lines, which now point at my internally signed cert and key:

After this, follow the remaining part of the BitWarden install guide… when I did, I was able to get it up, but I got a cert error. At first it was cause, in my environment, I didn't have my offline root cert installed on the client; after I got that, and verified my intermediate sub-CA was good (I verified by navigating to my CA's certsrv site, and it was all green)… yet I was still getting an error, even though my chain was green across the board….

Oh yeah…. shit, Chrome requires a SAN, even if no alternative names are ever planned to be used…. Thanks, Google!

OK… let's back up a bit… stop the docker instance:

docker-compose stop

Now I should just need to reconfigure that nginx bitwarden file after creating new certificates with a SAN in them… but how to do that with OpenSSL…. A lil more googling and I found this great guide by Paul Kehrer from almost 10 years ago… the first thing I read is…

“SAN CSRs cannot be generated using the interactive prompt in OpenSSL” … Why?! it’s now literally standard… and the prompts don’t even ask for it.. what is this an IMSVA?! :@

Anyway… let's follow along:

cd /etc/nginx/certs
nano req.conf

[ req ]
default_bits        = 2048
default_keyfile     = bwserverSAN.key
distinguished_name  = req_distinguished_name
req_extensions      = req_ext # The extensions to add to the cert request

[ req_distinguished_name ]
countryName           = CA
countryName_default   = CA
stateOrProvinceName   = MB
stateOrProvinceName_default = MB
localityName          = WPG
localityName_default  = WPG
organizationName          = ZWY
organizationName_default  = ZWY
commonName            = bitwarden.zewwy.ca
commonName_max        = 64

[ req_ext ]
subjectAltName          = @alt_names

[alt_names]
DNS.1   = bitwarden.zewwy.ca
DNS.2   = www.bitwarden.zewwy.ca
DNS.3   = bw.zewwy.ca

Then generate the new CSR using the config file:

openssl req -new -nodes -out myreq.csr -config req.conf

K… checking our files, we just need to get our new CSR signed: submit it to the CA as before, copy the signed cert back to the server with WinSCP, convert it with the openssl command, and check our files are as needed:
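The conversion is the same openssl x509 dance as before, just against the newly signed SAN cert (the file names here are placeholders for whatever you saved from the CA):

openssl x509 -inform der -in /home/zewwy/bwserverSANCert.der -out /etc/nginx/certs/bwserverSAN.pem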

Let's change our nginx file:

nano /etc/nginx/sites-available/bitwarden

Test it, confirm it, apply it, and bring up our docker instance again:
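Concretely, that's something like this (the compose file path is the one from the original install guide):

nginx -t
service nginx restart
cd /home/docker/$DOMAIN && docker-compose up -d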

and test it from the client side…
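A quick way to confirm the served cert actually carries the SANs, from any box with openssl (hostname assumed):

openssl s_client -connect bitwarden.zewwy.ca:443 -servername bitwarden.zewwy.ca </dev/null | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"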

I hope this helps someone, mainly future me.

Cheers.

vCenter SSO

The other day I covered installing vCenter.

Today I'll do a very quick overview of setting up SSO with Windows-based AD auth.

DNS

Step 1) Validate that vCenter can reach AD via the root domain name.
*Use the AD server for DNS; 3rd-party DNS leads to failure, as it's missing the specialized records, e.g. SRV records.*
*Ensure time is synced to within 5 minutes of the AD server.*

I SSH'd into the VCSA as root, ran "shell", and then a regular old ping command to validate.
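Roughly this, with the domain as a placeholder (the SRV records are the usual missing piece with 3rd-party DNS; nslookup may or may not be present depending on your appliance version):

shell
ping -c 4 ad.example.local
nslookup -type=SRV _ldap._tcp.ad.example.local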

Step 2) Follow Virten's guide for doing it the Flash way or the CLI way to join vCenter to the Windows domain. Via the HTML5 Web Client: Menu -> Administration -> SSO -> Configuration -> Active Directory Domain -> click Join AD (hidden behind the menu in the snippet).

Enter the domain to join and an account that is allowed to join systems to the domain; in my case I used my Domain Admin account:

Populate the fields, click Join, and sure enough you will join the domain without issue… if you have a properly working NTP/AD architecture, that is…

Thanks, VMware… Ugghh, OK, and if I use the CLI, maybe I'll get a more verbose error?

What do you mean, "DC not found"? What kind of PCLoadLetter error is this? I just verified lookup via DNS, which is like the primary pre-req besides firewalls, and I have already configured my actual firewalls… so what gives? Googling this error leads me to this.

and I quote “On ESXi 6.5, the command is executed from /usr/lib/likewise/bin. If you haven’t enabled the AD firewall rule mentioned earlier, you must temporarily unload the ESXi firewall – assuming it is enabled – for this to work. Failing this, you will get an Error: NERR_DCNotFound [code 0x00000995] error.”

Are you ****in’ with me…. for reals… man wtf VMware….

Shit, right, this is the VCSA, not an ESXi host… ugggh, quick research…

What… da… how did I not know about this?! There's a special VCSA management page; everything online just uses the "Web Client", which all of VMware's documentation assumes to be the Flash client, and it doesn't reference this at all!

https://vcsa:5480

Alrighty then… logging in… mhmm

That's awesome, but I don't see a firewall section; maybe if I navigate to networking…

Nope, NIC settings and that's about it:

C'mon, those firewall settings have to be here; I don't want to be forced to use Flash…. c'mon…..

F*** it, it says it's for 6.7 and I'm clearly on 6.5; there has to be a way…

After some deeper digging (I found out the VCSA uses Python scripts with specific files to build its firewall), talking the problem over with someone on the IRC channel #vmware, and digging a bit further, I found this VMware post….

I was at first simply using a third-party DNS, with JUST an A record for the AD server and none of the other service records for LDAP or anything else. After changing my DNS settings on the VCSA to point to the AD server itself, I got a different error at the CLI:

Bahhh, what? Oh wait… lol, all my time is wrong, everywhere…

NTP – Fixing Time

Actual time 8:20 PM Winnipeg Central Time. Mon Oct 7, 2019

AD server time: 2:09 PM Mon Oct 7, 2019 (CST)

VCSA time: Tue Oct 8 01:15:08 UTC 2019

What a gong show… let's fix this! First, MS states to leave the PDC on system time, getting time from the host, as the host gets accurate time; well, not for me. I could point the host to an external source and wait, with the PDC time then changing automatically. But if you want to domain-join the hosts, they should follow the hierarchy and use the PDC for time: catch-22. So instead, the PDC points to an external source, and the hosts point to the PDC for time and DNS (this allows for easily changing the external time provider, with no issues with time sync).

So fixing PDC time:

Before:

After:

Now the time has changed and my firewall shows the successful packets, but why is my offset still so off? And why is my time an hour off?

Here’s my local workstation:

Yet here’s my PDC:

OK, everything I checked online I was sure I did right, but the syntax in one of the guides I was following didn't seem right, so I tried again, and this time it worked, finally!
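For reference, the standard w32tm setup for a PDC pointed at an external source looks roughly like this (peer list is a placeholder; run from an elevated prompt):

w32tm /config /manualpeerlist:"0.pool.ntp.org 1.pool.ntp.org" /syncfromflags:manual /reliable:yes /update
net stop w32time && net start w32time
w32tm /resync
w32tm /query /status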

K, now I can update each host in my lab….

Before:

Configure:

After:

Finally VCSA itself, https://vcsa:5480 (login as root) -> Time

Before:

Configure:

After:

Yay, after fixing my time everywhere:

Joining VSCA to Windows Domain via CLI

/opt/likewise/bin/domainjoin-cli join $domain $user '$password'

YAY!

Quick Re-Cap:

So the bad news is this isn't as short a blog as I wanted, but the good news is we're all learning something! Yay!

Now that we've got our system domain joined (reboot required)…

waiting… waiting….

Verifying the AD object on the AD server (Core, via PowerShell):
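A one-liner along these lines does the trick (the computer account name is assumed):

Get-ADComputer -Identity vcsa | Select-Object Name,Enabled,DistinguishedName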

and on the HTML 5 Web Client:

Adding Identity Source

Now I can finally follow adding the identity source (A: AD Auth) from here.

Click on Identity Sources -> Add Identity Source:

omg finally something that was dead simple…

Defining Permissions

Now click on global Permissions.

Click the "+" icon, and if the system join is all good, it should be able to query AD and find users typed into the Name field:

Let's test it….

Second attempt but pushing to children objects:

and yay this time I was able to get in successfully:

But I had to put in my UPN (user@domain.local); what if I just want to enter my username…

What a bunch of poop; that's cause we didn't set the primary SSO domain… back in the VCSA settings (https://vcsa:5480), the summary shows:

Back in the vCenter Web Client: Menu -> Administration -> SSO -> Configure -> Identity Sources -> select the new source -> click Set as Default:

login again:

Success! And finally, as the source Virten post stated, the "Use Windows Authentication" option is greyed out unless the Enhanced Authentication Plugin is installed. You can find the download link at the bottom of the login screen.

Summary

That was a bit more painful than I wanted it to be, but in a way it was good that it was this painful, cause it reminded me of all the moving parts that have to be set up correctly for this to play nicely in the first place.

I hope this guide has helped someone. Please leave a comment, any comment will do!!!


BitWardenRS Install

The Story

I've been trying to find a decent password manager. I need team-sharing abilities. I wanted to try Psono, but my lack of NginX skills kept me from getting the web client to work, cause it was an "optional" install, so they didn't give direct instructions. 🙁

I then came across BitWarden but I wasn’t too excited when I couldn’t even create a local “Corporation” to use any team sharing abilities without a license.

I’m not a fan of DRM, period. There’s a forked package trying to change the DLL so you can just generate your own license, meh. All not my thing.

Then I read this guy's blog post. He was in a very similar boat, so now I'm going to blog along, following his blog to see how easy or hard it really is.

BitWardenRS Install

Pre-Reqs

He talks about a "Virtual Server", or IaaS (Infrastructure as a Service), for people who can't run their own hardware. I'm not in this boat and will instead spin up my own VM. However, if you do not run your own hardware, this is a great choice. It of course requires you to trust the owners of the datacenters on which you set these servers up, and to learn the UIs they provide to create them.

I gave my VM 2 vCPUs, 2 GB of memory, and 20 GB of SSD storage.

Also, as you can tell from this site, I run my own domain, so I created a record pointing to the internal load balancer, which listens on the headers and directs them to this new Ubuntu LTS server. (This required me to double-check my firewall and router configuration, as well as my load balancer setup.) This was the source blog's "Get your Domain lined up" part.

Time for the funnest part “Set up a Docker Server”

The VM

Hahah, how sad; we'll see if it even survives with these pathetic specs.

Yeah, my usual: boot into the UEFI menu, then mount the ISO on the VM's console…

As you can see, the removable devices option is greyed out, but after booting the VM…

and now you can boot an ISO from your client device.

Boot and install Ubuntu 18.04 LTS:

Coffee time!

Step 1) Unprivileged user: add "user": done. (Note: I will change this in production; the default name was used for ease of following along with the source guide.)

Step 2) grab basic packages:

apt-get update && apt-get install vim git etckeeper

Corrected git config issues by supplying email and name fields:

git config --global user.email "[your email]"
git config --global user.name "[your full name, e.g. Jane Doe]"

Step 3) Init etckeeper:

etckeeper init
etckeeper commit -m "initial commit of BitWarden host"

Step 4) Docker Deps:

apt-get install apt-transport-https ca-certificates curl software-properties-common pwgen

Step 5) Install the secure key needed to add the docker.com package repository to your system:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

Confirm the key is valid

apt-key fingerprint 0EBFCD88

Step 6) Add Repo

add-apt-repository    "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

Step 7) Apt Update!

apt-get update

Step 8) Docker CE:

apt-get install docker-ce

This is where the guide takes a bit of a dive; he states:

“Add your unprivileged user (“ubuntu” in this case – substitute the unprivileged user you created!) to a new “docker” group and add that user to other useful groups:”

groupadd docker
adduser ubuntu
adduser ubuntu sudoers
adduser ubuntu admin
adduser ubuntu docker

But this led to "group does not exist" errors for all of these except the last command, so I moved on.

Step 9) Create an SSH key for your unprivileged user and allow logins for that user from external connections:

sudo -Hu ubuntu ssh-keygen -t rsa
cp /root/.ssh/authorized_keys /home/ubuntu/.ssh/
chown ubuntu:ubuntu /home/ubuntu/.ssh/
adduser ubuntu ssh

Step 10) More install stuff:

apt install python-pip

pip install -U pip

wtf…. nice anomaly…

pip install docker-compose

OK… lovely… I managed to backtrack… remove pip:

python -m pip uninstall pip

Then remove python-pip and re-install it:

apt remove python-pip
apt install python-pip

then do not update pip with pip install -U pip…

It seems that line breaks it. Without running that line, I could install docker-compose:

Not a good sign for Python or pip; not sure who to blame. Either way, this type of stuff blows hard.

Step 11) Fuck there are a lot of steps here…
Set a convenience variable for [your domain] here (note: it’ll only be recognized for this session, i.e. until you log out):

DOMAIN=[your domain]
USER=[unprivileged user, e.g. ubuntu]


Create directories to hold both the Docker Compose configurations and the persistent data you don’t want to lose if you remove your Docker containers (namely your password database and configuration information):

mkdir -p /home/docker/$DOMAIN && mkdir -p /home/data/$DOMAIN
chown -R ${USER}:${USER} /home/data /home/docker/

Install the NGINX (pronounced “Engine X”) webserver which will act as a reverse proxy for the BitWarden service and terminate the encryption via HTTPS:

apt-get install nginx-full

Configure the server's firewall, making exceptions for the SSH and NGINX services:

ufw allow OpenSSH
ufw allow "Nginx Full"
ufw enable

Create a directory for including files for NGINX

cd /etc/nginx
mkdir includes

Choose your text editor for editing files. Here are options for Vim or Nano; you can install and select others. Setting the EDIT shell variable allows you to copy and paste these commands regardless of which editor you prefer, as it'll replace $EDIT with the full path to your preferred editor.

EDIT=`which nano` or EDIT=`which vim`


To support encrypted data transfer between external devices and your server using HTTPS,  you need a valid SSL certificate. Until recently, these were costly and hard to get. With Let’s Encrypt, they’ve become a straightforward and essential part of any good (user-respecting) web site or service. To facilitate getting and periodically renewing your SSL certificate, you need to create the file letsencrypt.conf:

$EDIT includes/letsencrypt.conf

and enter the following content:

#############################################################################
# Configuration file for Let's Encrypt ACME Challenge location
# This file is already included in listen_xxx.conf files.
# Do NOT include it separately!
#############################################################################
#
# This config enables to access /.well-known/acme-challenge/xxxxxxxxxxx
# on all our sites (HTTP), including all subdomains.
# This is required by ACME Challenge (webroot authentication).
# You can check that this location is working by placing ping.txt here:
# /var/www/letsencrypt/.well-known/acme-challenge/ping.txt
# And pointing your browser to:
# http://xxx.domain.tld/.well-known/acme-challenge/ping.txt
#
# Sources:
# https://community.letsencrypt.org/t/howto-easy-cert-generation-and-renewal-with-nginx/3491
#
# Rule for legitimate ACME Challenge requests
location ^~ /.well-known/acme-challenge/ {
    default_type "text/plain";
    # this can be any directory, but this name keeps it clear
    root /var/www/letsencrypt;
}
# Hide /acme-challenge subdirectory and return 404 on all requests.
# It is somewhat more secure than letting Nginx return 403.
# Ending slash is important!
location = /.well-known/acme-challenge/ {
    return 404;
}

Now you need to create the directory described in the letsencrypt.conf file:

mkdir /var/www/letsencrypt

Create “forward secrecy & Diffie Hellman ephemeral parameters” to make your server more secure… The result will be a secure signing key stored in /etc/ssl/certs/dhparam.pem (note, getting enough “entropy” to generate sufficient randomness to calculate this will take a few minutes!):

openssl dhparam -out /etc/ssl/certs/dhparam.pem 4096

Start time 12:18, done at 12:22; yup, a couple of minutes. Then you need to create the reverse proxy configuration file as follows:

cd ./sites-available
$EDIT bitwarden

Shit, the guide thought you had changed directories when you hadn't, and I had figured "DOMAIN" meant just the domain, not the server FQDN, which you can tell in the next config file part I will get to. But first, a quick fix:

and fill it with this content, replacing all [tokens] with your relevant values:

#
# HTTP does *soft* redirect to HTTPS
#
server {
    # add [IP-Address:]80 in the next line if you want to limit this to a single interface
    listen 0.0.0.0:80;
    server_name [your domain];
    root /home/data/[your domain];
    index index.php;

    # change the file name of these logs to include your server name
    # if hosting many services...
    access_log /var/log/nginx/[your domain]_access.log;
    error_log /var/log/nginx/[your domain]_error.log;

    include includes/letsencrypt.conf;

    # redirect all HTTP traffic to HTTPS.
    location / {
        return  302 https://[your domain]$request_uri;
    }
}

and make the configuration available to NGINX by linking the file from sites-available into sites-enabled (you can disable the site by removing the link and reloading NGINX)

cd ..
ln -sf sites-available/bitwarden sites-enabled/bitwarden

Check to make sure NGINX is happy with the configuration (it did not)

nginx -t

As you can tell… it did not. Only if I copied the file would the config be accepted; linked, it would just fail… sigh… I didn't know why at first.

*Update: it failed due to either the -sf options or the not-fully-named link; what I found worked was:

ln sites-available/bitwarden sites-enabled/bitwarden.zewwy.ca

If you don’t get any errors, you can restart NGINX

service nginx restart

and it should be configured properly to respond to requests at http://[your domain]/.well-known/acme-challenge/ which is required for creating a Let’s Encrypt certificate.

Ughhh, wat? There are no files in the dir now specified in the config file, and navigating to the URL sure enough gives me an NginX 404… OK, so anyway, I guess I'll just move on, since he's not making a lot of sense at this point….

So now we can create the certificate. You’ll need to install the letscencrypt scripts:

apt-get install letsencrypt

You will be asked to enter some information about yourself, including an email address – this is necessary so that the letsencrypt service can email you if any of your certificates are not successfully updated (they need to be renewed every few weeks – normally this happens automatically!) so that your site and users aren't affected by an expired SSL certificate (a bad look!). Trust me, these folks are the good guys.
You create a certificate for [your domain] with the following command (with relevant substitutions):

letsencrypt certonly --webroot -w /var/www/letsencrypt -d $DOMAIN

So at first, I had forgotten to change the backend in my load balancer to this new server; I was using my pihole to test access to the server URL externally from the internet, since that's required for HTTP-based validation (that's what I'm assuming these scripts/services are set up to auth as, looking at the invalid response). However, even after correcting that, I was getting failures…

So frustrated right now; I can't seem to even get a simple HTML file to load… ugggh. Then again, this whole thing hasn't exactly gone as the guide says either.

publishing for now.

to be continued….

OK so, I had asked a buddy of mine I went to lunch with recently if he had experience with NginX, as I remembered him mentioning it. I went on to vent my frustrations (due to my own ignorance), and he offered to double-check my config if I could grant SSH access. This is of course no issue for me, so I made some quick firewall rules and granted him access. He soon mentioned he got it working locally, but failed to see access externally, even though I was sure I had configured my load balancer correctly… and then it hit me in the face: the firewall I was doing everything else on. Doh! Soon I was able to see the basic HTML page I wanted to see:

and followed up with the acme "ping.txt" test:
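If you want to reproduce that test, it's roughly this, using the [your domain] token from the configs:

mkdir -p /var/www/letsencrypt/.well-known/acme-challenge
echo pong > /var/www/letsencrypt/.well-known/acme-challenge/ping.txt
curl -i http://[your domain]/.well-known/acme-challenge/ping.txt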

OK… so now that I can reach that (externally as well), let's try again…

Woooo, yes, finally… OK, let's move on…

Edit the nginx configuration file for the BitWarden service again

$EDIT sites-available/bitwarden

and add the following to the bottom of the file (starting on the line below the final "}"):

#
# HTTPS
#
# This assumes you're using Let's Encrypt for your SSL certs (and why wouldn't
# you!?)... https://letsencrypt.org
server {
    # add [IP-Address:]443 ssl in the next line if you want to limit this to a single interface
    listen 0.0.0.0:443 ssl;
    ssl on;
    ssl_certificate /etc/letsencrypt/live/[your domain]/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/[your domain]/privkey.pem;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    # to create this, see https://raymii.org/s/tutorials/Strong_SSL_Security_On_nginx.html
    ssl_dhparam /etc/ssl/certs/dhparam.pem;
    keepalive_timeout 20s;

    server_name [your domain];
    root /home/data/[your domain];
    index index.php;

    # change the file name of these logs to include your server name
    # if hosting many services...
    access_log /var/log/nginx/[your domain]_access.log;
    error_log /var/log/nginx/[your domain]_error.log;

    location /notifications/hub/negotiate {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Forwarded-Proto https;
        proxy_connect_timeout 2400;
        proxy_read_timeout 2400;
        proxy_send_timeout 2400;
    }

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Forwarded-Proto https;
        proxy_connect_timeout 2400;
        proxy_read_timeout 2400;
        proxy_send_timeout 2400;
    }

    location /notifications/hub {
        proxy_pass http://127.0.0.1:3012;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    #
    # These "harden" your security
    add_header 'Access-Control-Allow-Origin' "*";
}

You should now be able to run

nginx -t

again, and if you haven't got any accidental errors in the files, it should return no errors. You can restart nginx to make sure it picks up your SSL certificates…

service nginx restart

Nice, it worked, but no verification steps were provided in the blog, so I guess I just have to move on again…

Setup Bitwarden Service

Before we start this part, you’ll need a few bits of information. First, you’ll need a 64 character random string to be your “admin token”… you can create that like this:

pwgen -y 64 1

“copy the result (highlight the text and hit CTRL+SHIFT+C) and paste it somewhere so you can copy-and-paste it into the file below later.

Also, if you want your BitWarden server to be able to send out emails, like for password recovery, you'll need to have an "authenticating SMTP email account"… I would recommend setting one up specifically for this purpose. You can use a random gmail account or any other email account that lets you send mail by logging into an SMTP (Simple Mail Transfer Protocol) server, i.e. most mail servers. You'll need to know the SMTP [host name], the [port] (usually 465 or 587), the [login security] (usually "true" or "TLS"), and your authenticating [username] (possibly this is also the email address) and [password]. You'll also need a "[from email]" like bitwarden@[your domain] or similar, which will be the sender of email from your server.

You’re going to be setting up your configuration in the directory we created earlier, so run”

yada yada yada something about email…

cd /home/docker/$DOMAIN

and there

$EDIT docker-compose.yml

copy-and-pasting in the following, replacing the [tokens] appropriately:

version: "3"
services:
    app:
        image: bitwardenrs/server
        environment:
            - DOMAIN=https://[your domain]
            - WEBSOCKET_ENABLED=true
            - SIGNUPS_ALLOWED=false
            - LOG_FILE="/data/bitwarden.log"
            - INVITATIONS_ALLOWED=true
            - ADMIN_TOKEN=[admin token]
            - SMTP_HOST=[host name]
            - SMTP_FROM=[from email]
            - SMTP_PORT=[port]
            - SMTP_SSL=[login security]
            - SMTP_USERNAME=[username]
            - SMTP_PASSWORD=[password]
        volumes:
            - /home/data/[your domain]/data/:/data/
        ports:
            - "127.0.0.1:8080:80"
            - "127.0.0.1:3012:3012"
        restart:
            unless-stopped

In my case, I tested email via telnet on port 25 and it worked, so I'm hoping this will work.
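That test is just the classic manual SMTP poke, something like:

telnet [host name] 25

then typing EHLO test.local, watching for 250 responses, and QUIT to get out.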

Note that the indentation has to be exact in this file – Docker Compose will complain otherwise.

With the docker-compose file completed, you’re ready to “pull” your package!

docker-compose up -d && docker-compose logs -f

the "up -d" option actually starts the container called "app", which is your BitWarden Rust server, in "daemon" mode, meaning it'll keep running unless you tell it to stop. If that's successful, it automatically shows you the logs of that container. You can exit at any time with CTRL-C, which will put you back at the command prompt. If you do want the container to stop, just run:

docker-compose stop

“You should now be able to point your browser at http://[your domain] which, in turn, should automatically redirect you to https://[your domain] and you should see the BitWarden web front end similar to that shown in the attached screen shot!”

Which he didn't have, but to my utter amazement, there it was!

Soooo, then every time I went to register/create an account, it wouldn't let me…

It would simply state "Registration not allowed", and there was only one issue reported on it, with a dull answer.

Dave ends off with: “To do your initial login, I believe (I’ll test this and update this howto!) you’ll be asked to provide your “admin token” to create a first user with administration privileges.”

Then I decided to hit the admin section:

http://bitwarden.zewwy.ca/admin

and I was asked for the admin token. Once logged in, I invited myself via an email account I have on my own Exchange server:

Yay a successful registration and login!

Summary

That was a lot of work, and in my next post I'll cover creating an organization so I can finally share passwords securely… to some degree…

*falls over on to couch*

HUGE shout out to my buddy, Troy Denton. Super awesome dude; check him out on GitHub. I hope this helps someone.

*UPDATE: if you hit this error (which you will, following this guide with default settings),

do this:

Add the following parameter to the nginx.conf file, inside the http { } block. The default location is /etc/nginx/nginx.conf:

client_max_body_size 105M;