Palo Alto VPN (GlobalProtect)
Part 2 – Certificates

In my previous post I covered recovering a downed CA, because it will be needed for this section of the GlobalProtect tutorial.

Adding the Sub CA cert:

Device -> Certs -> Import -> Base64 cer file

Generate a Sub CA key for the PA to handle the Gateway certs; afterwards, generate a user cert as well for 2FA.

Click Generate:

Export the CSR. For some reason the latest Chrome causes a constant page refresh, argggg, so I had to export the CSR via IE, gross….

Navigate to your CA’s certificate signing web page (the Sub CA in this case), open the CSR in Notepad, paste the contents, and select Sub CA for the template:

Then save it as a Base64 type cert and import it back into the PA firewall. If successful, it will look like this:

Also import the Offline Root CA cert to complete the chain.

Alright, time for Certificate Profiles.

Add all the Certs

Create an SSL/TLS Profile:

Name it whatever you like, pick TLS 1.2 as both the min and max version, and select the PA Sub CA cert we created earlier.

Create User Cert
Create Template on CA

Then under Certificate Templates, right-click the template and duplicate it.

5 years; I don’t like doing this often.

Purpose: signature and encryption. Check off “Include symmetric algorithms allowed by the subject”, set a minimum key size of 2048, and make the key exportable.

Along with the defaults, check off the Microsoft Enhanced RSA and AES provider and the Microsoft RSA SChannel provider.

Subject Name: “Supply in the request”. It will complain about the security risk; accept it. (Normally you’d create the certificates at the client machines, but in this case I am doing it the “wrong way” by having a global user certificate.)

Click Apply.

If you require additional permissions, apply them now; by default, Domain Admins have Full Control and Domain Users have Enroll rights.

With the template configured, let’s create the user cert for the VPN. In this case we generate the CSR on the PA, but since we made the key exportable, we can export the certificate with its key to be installed on the end device. (The right way would be to generate the CSR on the device itself, have it signed, and install the public key on the portal. Hopefully I can get there eventually, but the toughest part is generating certificates on phones; you have to learn how each device’s OS does it.)

On the PA: Device -> Certificates -> Generate.

*NOTE* I noticed that with the latest Chrome, when you attempt to export any certificate it just seems to refresh the page; sadly the only workaround I have is to use IE… Ugh….

Open the CSR in Notepad, navigate to your Sub CA’s certificate signing page, sign the certificate.

lol, I was wondering why I couldn’t see my template in the web interface, so I looked up my own very old blog post (the 3rd one, I believe) and realized I forgot to publish the template, like I did with the Authentication Session template. Durrrr. Then it kept complaining about requiring HTTPS for cert distribution (makes sense), but since I had a Core Sub CA, I couldn’t connect to IIS remotely. Then I found this, which saved my bacon, and followed this to enable HTTPS. Then finally…

Then import it onto the firewall; it should look like this:

In the next section I’ll cover configuring the Portal and Gateway settings. 😀

Resolving a Downed CA

Downed CA!

First off, I want to address that this wasn’t the intended post for today; this was supposed to be part 2 of the GlobalProtect post, covering the second dependency: certificates. However, I recently had to get a new cert at work and discovered this same issue, as I had deployed my CA following a decent blog tutorial online by StealthPuppy, and sure enough the same site provides a follow-up on how to fix the mistake. We all make them, and these are great opportunities to learn, so let’s get learning!

So you might happen to open up the Certification Authority snap-in, point it to your CA server, and find this… it’s shut down….

Fear not, StealthPuppy has given us some helpful tips to resolve this!

The Issue

You might find your certificate authority (in this case, a subordinate certificate authority) not started, perhaps after a server reboot. Attempting to start the CA results in this message:

The revocation function was unable to check revocation because the revocation server was offline.
0x80092013 (-2146885613 CRYPT_E_REVOCATION_OFFLINE)

Which looks like this:

In the Application log on the subordinate CA, I can see event id 100 from source CertificationAuthority:

Active Directory Certificate Services did not start: Could not load or verify the current CA certificate.  stealthpuppy Issuing CA The revocation function was unable to check revocation because the revocation server was offline. 0x80092013 (-2146885613 CRYPT_E_REVOCATION_OFFLINE).

As well as event ID 48 from the same source, CertificationAuthority:

Revocation status for a certificate in the chain for CA certificate 0 for stealthpuppy Issuing CA could not be verified because a server is currently unavailable.  The revocation function was unable to check revocation because the revocation server was offline. 0x80092013 (-2146885613 CRYPT_E_REVOCATION_OFFLINE).

Certificate 0 is the subordinate CA’s certificate, issued by the offline Root CA.

I skipped taking screenshots for this portion of the blog, as I didn’t want pictures from before the next required step, soooo….

The Workaround

Of course, you probably want to get the CA up and running as quickly as possible. The easy way to do that is to disable CRL checking with the following command on the CA server:

certutil -setreg ca\CRLFlags +CRLF_REVCHECK_IGNORE_OFFLINE

Run this from an elevated command prompt and you should now be able to start the CA and get on with the business of troubleshooting.

Perfect, now let’s fix this!

The Cause

My CRL was online as it is available in Active Directory (for domain joined machines) and via HTTP at crl.home.stealthpuppy.com, an alias of the subordinate CA. I’ve tested that I can retrieve the CRL by putting the HTTP path into a browser and I’m prompted to download a file.

Through having spent some time recently with setting up an Enterprise PKI in my lab and for a project, I’ve come to know the command line tool certutil.exe. This tool is available in all versions of Windows and should be the first tool to use to troubleshoot and manage certificates and certificate authorities on Windows.

Certutil can be used to perform many functions, one of which is to verify a CRL. I know the path to the CRL file because I can view the CRLs on the file system (in C:\Windows\System32\certsrv\CertEnroll) and I’ve previously configured CRLs for both CAs.

To verify the CRL, use the -URL switch with the HTTP (or LDAP) path to the CRL:

certutil -URL "http://subca.zewwy.ca/CertEnroll/OFFLINE-ROOT-CA.crl"

This will display the URL Retrieval Tool, which shows that the CRLs can be contacted and have a status of OK.

*NOTE* I discovered this works directly on Windows Server Core, if you happen to be running Core (I do, as I love optimization, especially when you have to work in lab environments). Except, as you might have noticed from the screenshot, the radio buttons are all messed up (I wonder what library handles that; it wasn’t in Core…).

However, if we load a target certificate, in this case, the subordinate CA’s cert, we can start to see why we have an issue with the CRL.

Select the certificate for the subordinate CA that has been previously exported to the file system (in C:\Windows\System32\certsrv\CertEnroll) – click Select, open the certificate and click Retrieve again. This time, we can see a new line showing that the base CRL for the subordinate CA’s certificate is Expired. (Unfortunately I had to simply use his source images, as in my case I also had to correct the CDP locations on my signed Sub CA certificate, as mentioned in my initial Setup Offline Root CA blog post; I guess I didn’t do it in my lab, so I sort of had to kill two birds with one stone: fix the CDP locations in my cert by reissuing it, and fix the offline Root CA CRL file. My additional steps went a bit beyond these exact steps, and I didn’t take snapshots as I wanted to get back on pace with my blog post.)

An expired base CRL on the Subordinate CA

The CRL for the subordinate CA’s certificate will come from the root CA, so we’ll need to check that CRL. Open the CRL file (C:\windows\system32\certsrv\CertEnroll\stealthpuppy Offline Root CA.crl) – double-click or right-click and Open. Here we can see the CRL information, including the next publishing time (Next CRL Publish).

Viewing the properties of the Root CA’s CRL

At the time of troubleshooting, this date was in the past and because the Root CA is offline and the CRL is hosted on a different server (the subordinate CA), this particular CRL will never receive an update. So, when the subordinate CA has rebooted, it has checked the Root CA’s CRL and found it expired. Hence the certification authority service won’t start.
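The failure really just boils down to a date comparison: the service checks the current time against the CRL’s “Next Update” field. A quick sketch of that check in Python (the dates below are made up for illustration, not parsed from a real CRL):

```python
from datetime import datetime, timedelta, timezone

def crl_is_expired(next_update, now=None):
    """A CRL is considered invalid once the current time passes its Next Update field."""
    now = now or datetime.now(timezone.utc)
    return now > next_update

# Made-up dates: the Root CA published a CRL valid for one week,
# then sat powered off for a month.
published = datetime(2019, 1, 1, tzinfo=timezone.utc)
next_update = published + timedelta(weeks=1)
now = published + timedelta(weeks=4)

print(crl_is_expired(next_update, now))  # True: the sub CA service refuses to start
```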

How To Fix It

Now we know why the certification authority service won’t start, and we have an understanding of why the CRL is “offline”, even if the wording doesn’t match the symptoms. If the error message had told me the CRL had expired instead of being offline, I might have saved some troubleshooting time. We now know that we need to re-publish the CRL from the Root CA.

Start the offline Root CA, log into it and open the Certification Authority console. We will first want to ensure that the CRL publication interval is extended so that we don’t run into the same problem in the near future. Open the properties of the Revoked Certificates node to view and set the publication interval. The default interval is 1 week, obviously too often for an offline Root CA.

Instead, set this value to something suitable for the environment you have installed the CA into. Remember that you’ll need to boot the Root CA and publish a new CRL before the end of this interval, otherwise, you’ll have exactly the same issue.
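The date math for that is trivial; here’s a sketch assuming a 26-week publication interval (pick whatever suits your environment), including a reminder date a week ahead of expiry:

```python
from datetime import date, timedelta

# Assumed values for illustration: CRL published on this date, valid for 26 weeks.
publish_interval = timedelta(weeks=26)
published_on = date(2019, 1, 1)

crl_expires = published_on + publish_interval
# Boot the Root CA and republish before this date; set a calendar
# reminder one week ahead of it so you don't hit the same outage.
reminder = crl_expires - timedelta(weeks=1)

print(crl_expires)  # 2019-07-02
print(reminder)     # 2019-06-25
```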

Setting the CRL Publication Interval on the Root CA

Now publish a new CRL – right-click the Revoked Certificates node and click All Tasks /Publish.

Publishing a new CRL from the Root CA

Copy the updated CRL (from C:\Windows\System32\certsrv\CertEnroll by default) from the Root CA to the CRL distribution point and overwrite the existing CRL file (C:\Windows\System32\certsrv\CertEnroll again on my subordinate CA).

Now if we again use certutil.exe to verify the CRL, it comes up roses:

Viewing the verified CRL with certutil.exe

To ensure that the subordinate CA’s certification authority service will start, re-enable CRL checking:

certutil -setreg ca\CRLFlags -CRLF_REVCHECK_IGNORE_OFFLINE

If you have re-published the CRL from the Root CA correctly, the service should start and you can then shut down the Root CA. Then open Outlook and put a reminder in the calendar for a week before the CRL expires again.

Conclusion

I’ve had this issue with an Offline CRL a few times now and not really understood what the issue is until I took the time to troubleshoot the issue properly. I don’t spend that much time with an enterprise PKI and it’s easy to underestimate the complexity of setting up AD Certificate Services correctly.

Palo Alto VPN (GlobalProtect)
Part 1 – Authentication

Hey all,

I figured I’d spin up a really quick guide on setting up a client-type VPN using Palo Alto firewalls. In this case it will use GlobalProtect, so it will require the appropriate licenses to work. I won’t exactly cover the licensing aspect in my blog, as I find that stuff a bore and can leave it for the good ol’ VARs to handle. Anyway, let’s get into this.

Authentication

Palo Alto firewalls support a wide range of authentication sources (which is awesome); however, to start, my lab utilizes LDAP. There are ways to secure LDAP with certs and TLS for LDAPS, but again that is beyond the scope of this blog post, so I will stick with plaintext LDAP connections on port 389. (In my case all my servers reside on one hypervisor, so the chances of a man-in-the-middle on the vSwitch are highly unlikely ;))

Step 1) Add a Server Profile

So to start, on the Palo Alto web interface (my examples utilize PAN-OS 7.1.x; however, 5-6 as well as 8 are very similar) go to Device -> Server Profiles -> LDAP, then click Add.

Give the profile a meaningful name, then click the Add button under the server list.

Couple Notes here:

1) The name doesn’t have to be the server name; just give it a meaningful name, or give it the server name, whatever you choose (it’s not a DNS lookup).

2) I was having some issues with testing the Auth Profile (which will be covered a bit further down in the blog post) and thought I may have had issues with my LDAP settings; however, “During LDAP server configuration, the device automatically pulls the Base DN if the connection is successful.”

Since I didn’t have LDAPS set up, I had no cert to use, so I had to uncheck “Require SSL/TLS secured connection”; only then would the Base DN auto-populate, verifying that my LDAP connection was indeed successful.

Also note that in my lab I had a separate NIC dedicated to my domain, separate from the usual home network, so I had to set up a “Service Route” for LDAP specifically.

I decided to also create a standard domain account and use it for the Bind DN. This may or may not be required.

Now this is where things get a little weird….

According to step 2 from this Palo Alto post, you set up the authentication profile.

Step 2) Group Mapping (Optional)

Which is technically the bare minimum; however, I was hoping to have some group filtering in the Auth Profile. In order to do this, one first has to set up a “Group Mapping”, and I followed this to do it:

Device -> User Identification -> Group Mapping

Click Add, give it a meaningful name, and optionally fill in the domain.

Helpful Commands

  • show user group-mapping state all
    Lists all group mappings on the firewall, showing the Server Profile used and all groups found from it
  • show user group list
    Also lists all groups, but without ordering by the group mapping info
  • show user group name <group name>
    Lists the users within the specified group
  • debug user-id refresh group-mapping all
    Refreshes the info on the PA if new groups, or users within groups, were added in AD

If you do not set up a group mapping, you can’t filter by groups in the Authentication Profile under the “Allow List”, which from testing might not be a bad thing… you’ll see…

Step 3) Create an Authentication Profile

This is where my results got weird, which I finally discovered was all due to the “allow list” filtering I was attempting to use. Let me show you what I mean, using my first authentication profile as an example:

So nothing special: select LDAP, select my LDAP Server Profile, then click on the next required tab, the “Advanced” tab.

Now, if you set up a group mapping as I specified under optional step 2, you will see all the groups in the domain in (CN=xxxx,OU=xxxx,DC=xxxx) format. If you didn’t, you will only get “all” (and that is probably a good thing; not exactly sure yet…).

So in my first case I decided to use “domain users” as my filter group, as it’s a pretty common group and I don’t have many users in it.

Simple; so now we have all the basics needed to test it.

Step 4) Testing the Authentication Profile

Test and fail, and test and fail, and smash your face in for using group mappings… (yet to be determined why)… but for now, log in via SSH to the Palo Alto firewall and test it:

What the heck?!?! I swear everything was correct; I knew I was entering my password correctly and everything. No matter how I changed the settings in the Auth Profile, nothing worked (I double- and triple-checked proper group membership in AD, and ensured the firewall could see it all via “show user group” and “show user group name "CN=domain users,OU=users,DC=zewwy,DC=ca"”), until I noticed from the link in this step’s title that his simple test was using the “all” group, which is used when there is no group mapping defined, or simply sits at the top of the list.

So I created a new Authentication Profile, not specifying my AD group…

and testing…

BAMMMMMMMM….

And that concludes the authentication part, or at least the dependency; in part 2 I’ll cover the certificate requirements. I currently have a ticket open with Palo Alto as to why the “Allow List” section of the Authentication Profile doesn’t pick up the users that should be in those groups, as shown by “show user group name <group name>”; if this command shows users based on a group, the allow list should be able to validate them as well.

Until Part 2, Adios!

IIS redirect HTTP to HTTPS

The Requirement

Recently I posted about creating and using a certificate on an IIS website. I did this recently for my dev to deploy his new web app, which required use of a device’s camera for QR scanning purposes; well, it turned out that the camera could not be used unless the app was secured with TLS (a certificate). So we created a cert and created a new secure binding.

However, it soon became pretty apparent that every time I pressed the down arrow key in my browser’s URL bar it would use regular HTTP (at first we removed the port 80 binding, which caused a site unavailable issue). So after re-adding the standard port 80 binding for regular HTTP, I decided to use a rewrite rule to handle the redirection.

It wasn’t as intuitive as I thought it would be, so a little Google search and I was on my way…

As the source states, this assumes you already have your certificate and a port binding for HTTPS, as well as one for regular HTTP.

In order to force a secure connection on your website, it is necessary to set up a certain HTTP/HTTPS redirection rule. This way, anyone who enters your site using a link like “yourdomain.com” will be redirected to “https://yourdomain.com” or “https://www.yourdomain.com” (depending on your choice) making the traffic encrypted between the server and the client side.

Below you can find the steps for setting up the required redirect:

The Source

    1. Download and install the “URL Rewrite” module.
    2. Open the “IIS Manager” console and select the website you would like to apply the redirection to in the left-side menu.
    3. Double-click on the “URL Rewrite” icon.
    4. Click “Add Rule(s)” in the right-side menu.
    5. Select “Blank Rule” in the “Inbound” section, then press “OK”.
    6. Enter any rule name you wish.
    7. In the “Match URL” section:
      – Select “Matches the Pattern” in the “Requested URL” drop-down menu
      – Select “Regular Expressions” in the “Using” drop-down menu
      – Enter the following pattern in the “Match URL” section: “(.*)”
      – Check the “Ignore case” box

    8. In the “Conditions” section, select “Match all” under the “Logical Grouping” drop-down menu and press “Add”.
    9. In the prompted window:
      – Enter “{HTTPS}” as the condition input
      – Select “Matches the Pattern” from the drop-down menu
      – Enter “^OFF$” as the pattern
      – Press “OK”

    10. In the “Action” section, select “Redirect” as the action type and specify the following for “Redirect URL”: https://{HTTP_HOST}/{R:1}
    11. Check the “Append query string” box.
    12. Select the redirection type of your choice. The whole “Action” section should look like this:
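To demystify what the rule actually does: it is just a regex capture plus a condition check. Here is a rough Python equivalent of steps 7–10 (the function and variable names are mine for illustration, not IIS server variables):

```python
import re

def rewrite(requested_url, http_host, https_flag):
    """If the {HTTPS} variable is OFF, redirect to https://{HTTP_HOST}/{R:1}."""
    # Condition: {HTTPS} matches ^OFF$, i.e. the request came in over plain HTTP.
    if not re.match(r"^OFF$", https_flag, re.IGNORECASE):
        return None  # already HTTPS; the rule doesn't fire
    # Match URL: "(.*)" captures the whole path, available as back-reference {R:1}.
    match = re.match(r"(.*)", requested_url, re.IGNORECASE)
    return "https://" + http_host + "/" + match.group(1)

print(rewrite("shop/cart", "yourdomain.com", "OFF"))  # https://yourdomain.com/shop/cart
print(rewrite("shop/cart", "yourdomain.com", "ON"))   # None
```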

The Note

NOTE: There are 4 redirect types of the redirect rule that can be selected in that menu:
– Permanent (301) – preferable type in this case, which tells clients that the content of the site is permanently moved to the HTTPS version. Good for SEO, as it brings all the traffic to your HTTPS website making a positive effect on its ranking in search engines.
– Found (302) – should be used only if you moved the content of certain pages to a new place *temporarily*. This way the SEO traffic goes in favour of the previous content’s location. This option is generally not recommended for a HTTP/HTTPS redirect.
– See Other (303) – specific redirect type for GET requests. Not recommended for HTTP/HTTPS.
– Temporary (307) – HTTP/1.1 successor of 302 redirect type. Not recommended for HTTP/HTTPS.

  13. Click on “Apply” on the right side of the “Actions” menu.

The redirect can be checked by accessing your site with http:// specified in the URL. To make sure your browser does not display a cached version of the site, you can use the browser’s anonymous mode.

The Helping Hand

The rule is created in IIS, but the site is still not redirected to https://

Normally, the redirection rule gets written into the web.config file located in the document root directory of your website. If the redirection does not work for some reason, make sure that web.config exists and check if it contains the appropriate rule.

To do this, follow these steps:

  1. In the sites list of IIS, right-click on your site and choose the “Explore” option.
  2. “Explore” will open the document root directory of the site. Check if the web.config file is there.
  3. The web.config file must have the following code block:
    <configuration>
    <system.webServer>
    <rewrite>
    <rules>
    <rule name="HTTPS force" enabled="true" stopProcessing="true">
    <match url="(.*)" />
    <conditions>
    <add input="{HTTPS}" pattern="^OFF$" />
    </conditions>
    <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Permanent" />
    </rule>
    </rules>
    </rewrite>
    </system.webServer>
    </configuration>
  4. If the web.config file is missing, you can create a new .txt file, put the aforementioned code there, save and then rename the file to web.config.

This one, much like my Offline Root CA post is a direct copy of the source. For the same reason, I felt I had no need to re-word it as it was very well written.

Thanks Namecheap, although it would have been nice to thank the actual author who took the time to do the beautiful write up with snippets. 🙂

The Zewwy PiCade

Building a PiCade

Hey all,

I figured I’d finally blog about the PiCade build I made. To start off, this is nothing new; I basically got the idea from this blog post by PJ.

As I don’t have all the pictures, I may borrow a couple from his site; we’ll see.

The Parts

Well, first and foremost you’re going to need an iCade cabinet. I found them going new on Amazon for over $200 (way too much), so I managed to get a like-new used one off eBay for about $50. At the time of this writing I found an eBay posting for a used one as low as $23!!

Next you’ll need a Raspberry Pi. I used a Pi 3B for my system; however, there is a Pi 3B+ at the time of this writing, and it should be considered instead. These are generally $40 to $50. (In my case I managed to pick up a Pi 3B gaming kit from MemEx which included a 16 GB microSD, a 16 GB USB stick, a heatsink set, the PSU, and an old SNES USB controller for $90.)

You’ll need an iPad screen; in my case, I again had a broken one from work, which I had my colleague rip the screen from (remember, on this version the touchscreen digitizer is separate from the screen itself).

Obviously iPad screens don’t have an HDMI in, so you’ll need to order an LVDS-to-HDMI converter board, something like this. I managed to get mine from Alibaba for about $20; it didn’t have a remote or VGA options.

In my case I first attempted to utilize the Bluetooth of the controller board and the Pi; however, even after talking to some pretty smart dudes that run and build Lakka (the OS we’ll be using for the build), it turns out I had to go the USB controller board route. I picked this route so I could use the joystick part of the cabinet with other devices if I ever really wanted to. So I bought a “Reyann Zero Delay Arcade USB Encoder PC to Joystick for MAME” for about $15 on Amazon. This isn’t technically required; I did it so I didn’t have to program the buttons via GPIO pins as PJ did.*

I temporarily used a MonoPrice thin HDMI cable, but I had to wrap it around my boards to handle all the excess length, so I ended up buying a 20 cm cable. Since I ended up making my build a little more sleek and organized than PJ’s, I decided to buy the more expensive slim-style 20 cm cable.

I got my speakers by disassembling some old USB speakers my work was throwing away, so I managed to get these for free; however, you can get amazingly cheap USB-powered speakers by Logitech for $10 (S-120).

iCade                $20-$250
Pi Kit               $40-$100
iPad Screen          Free-$90
LVDS-to-HDMI         $20-$40
USB Controller       $15 *
20cm HDMI cable      $1-$25
Plywood              Free-$10 (Huge shout-out to Thor)
USB Speakers         Free-$10
Velcro Tape          Free-$5
Total                $100-$460+

Now, I used the plywood to mount all the pieces and then slid it into the cabinet as if it were the iPad (that’s how sleek I managed to make my build ;)!). You could go the extra mile like this guy in the UK did, who drilled holes into the side of the cabinet to provide side buttons (I love this idea, I just didn’t have the time or the extra higher-grade buttons). This would add to the cost of the build, but it is a possibility for those who may be reading this.

Alright, let’s get started!

The Build

The Screen

This section assumes you already have your iPad screen ready to go. In my POC, I had initially placed everything on a cardboard cutout shaped exactly to the area allowed to be slipped into the cabinet where the iPad goes.

As you can tell, I followed the design of the UK guy and mounted my screen in landscape mode instead of portrait mode. (It would have been amazing if I could have designed a swivel-mount mechanism; unfortunately I didn’t have the ingenuity to pull that off, so I stuck with landscape.)

Using this cardboard as my template, I cut my plywood (2 pieces), then on one of them I cut out the size of the screen, glued the two pieces of plywood together, and mounted the screen to it.

Beauty. Alright, next I cut a small square hole in the backside to allow the LVDS-to-HDMI ribbon cable through the plywood, as you can see it hanging off the left side of the last picture. The backside of the plywood is where I mounted all the components.

As you can tell from my first template, it was a bit messy, and the long HDMI cable is very unsightly. This will be cleaner in the final build with the 20 cm flat HDMI cable. So, again using the cardboard as my template, I started with the video controller board (LVDS-to-HDMI), seen in the lower center.

As you can tell, I readjusted the board a lil more to the left and placed the screen’s LED power module right next to the Pi at the top. You may have also noticed that I pre-cut the holes for the speakers and attached slide anchors; this allows for easy speaker removal and replacement if needed.

The Sound

Now, in PJ’s build he doesn’t cover the sound much at all, other than stating that you need a USB speaker, and from looking at his final build it appears to be that simple little cheesy red dot in the upper-left corner of the back of his cabinet. Meh.

The UK guy’s is pretty impressive, as he installs speakers in the top part facing down, which also lights up. This is very impressive; however, it leaves the screen a little on the lower side, which I found is OK only on higher tables. At most table and desk heights I tried, I found it more comfortable with the screen positioned higher on the cabinet. Thus my design places the speakers at the bottom and the screen at the top.

As I mentioned, I ripped apart some old PC USB speakers from work. These obviously had lengthy cords, so after ripping their plastic enclosures wide open, I shortened the cables and re-soldered them to the main board.

I then proceeded to shorten the speaker wires, mount the speakers in the holes I cut out, anchor them, connect them to the main board, connect that to the Pi via the 3.5 mm jack, and power them via a USB slot on the Pi. (There are 4 slots, and 3 of them will be used: 1. power for the speakers, 2. the joystick, 3. the USB stick for games.)

I also mounted the potentiometer between the speakers to allow for really easy and rapid volume adjustments.

One final thing to note about sound: you have to SSH into the Lakka installation and configure the audio to go out the 3.5 mm jack. As Jonathon put it:

“amixer cset numid=3 1

will force the audio out the speaker jack.”

The Cable Diff

With the long thin HDMI cable wrapped all around, you can see why I was super pleased once I got my high-priced flat 20 cm HDMI cable… just check out the difference!

The Joystick

Sadly, I did not take any pictures of this while I was doing it, pretty much because, and I quote, the UK guy said it well…

“First up is to get the main case open, which involved about 4 thousand screws, two of which being those awful security types, so be aware that you’ll need a tiny screwdriver just for those if your iCade has them (some don’t apparently).

Removing the top panel, you can see the bluetooth board in the back, which we wont be needing so first off lets disconnect all of that. If you’re not intending on upgrading the controls, then the connections here will plug straight on to the USB board, saving you a load of time and effort, but as i’m looking to upgrade everything, out it all comes.”

Since I had purchased the USB controller board and didn’t swap the buttons, it was literally plug-and-play. The only thing I will mention is that, in order to get the unit closed back up, I had to cut out the spot where the batteries went and mounted the USB controller board in its place.

Then it just came down to using another USB keyboard to bind all the User 1 input controls under Lakka’s settings.

The Power

You might have noticed in the last couple of pictures that you can’t quite make sense of the cables that protrude from the bottom of the arcade system. This hole was initially designed to route the iPad power; however, since I decided to sneakily hide all the electronics between the iCade backboard and my plywood, I used this hole to route: 1. the Pi’s USB power (the cable coming out and dropping to the right), 2. the LVDS-to-HDMI controller board power (the cable going to the top right; you can see the open solder points to a female coaxial barrel connector), and 3. the joystick USB cord (from the top right, coiled, then shoved up into the cabinet).

This allowed me some flexibility: 1. I could use my LiPo car booster pack to run the system, since it has multiple voltage outs (19v, 12v, 5v) and even more conveniently provides 5v separately via a USB output, meaning I had a completely portable bar-top arcade system with a run-time of about 8 hours.

Or 2. replace the LiPo battery pack with a NEMA extension and wall warts, much how PJ had his mess set up, should I ever run out of battery power.

The Lakka

I installed Lakka directly to a 16 GB SD card following their installation instructions. Etcher IO is AMAZING, there’s also Rufus. 😉

Now, I might follow up this blog post with some Lakka-specific posts, because there are some interesting nitty-gritty details you have to understand about it, although they do a fairly decent job in their wiki or “docs” section.

The Result

An awesome little arcade unit that’s super fun to enjoy some old classics on. They even managed to make 8-button setups like this work with more complex controllers such as the N64’s. One of my favorite examples is showing off Mario 64 in full smoothness. :D… That’s nice, you can see me taking the picture of my arcade in the glare of the screen… lovely…

And a picture of the unit slipped into the cabinet….

Amazingly, even for how tight it is, with the heatsinks and the Pi mounted at the top, I have not once had an issue with heat. 😀

Hope you enjoyed this post, and maybe it inspired you to build your own arcade cabinet! Cheers

RegEx

It’s powerful and complex.

I was gonna write a blog post, but I got sucked into learning regex for hours… and I’m still baffled by the syntax…

One thing is for sure… lookbehinds are difficult without context awareness (variables)…

Ugh. Example 1, Example 2.

Then there’s learning about Anchors or “automatic zero-width assertions” … ooooeeeeeee that’s a mouthful.

Ugh man…  I’ll eventually get the results I want, just a matter of time…

It’s interesting how you match everything but a set string.

Don’t forget it’s sometimes OK to be Greedy and Lazy.
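To keep these concepts straight, here’s a scratchpad of toy examples (my own made-up strings, nothing from any particular site) using grep -P, which gives you PCRE on most Linux boxes:

```shell
# Toy regex examples via grep -P (PCRE); all input strings are made up.

# Anchors are zero-width assertions: they match a position, not characters.
echo 'error: disk full' | grep -P '^error:'        # prints the whole line

# Greedy vs lazy: .* grabs as much as it can, .*? as little as it can.
echo '<a><b>' | grep -oP '<.*>'     # greedy -> <a><b>
echo '<a><b>' | grep -oP '<.*?>'    # lazy   -> <a> then <b>

# Everything but a set string: a negative lookahead at each word boundary.
echo 'cat dog bird' | grep -oP '\b(?!dog)\w+'      # -> cat, bird

# Lookbehind: keep only what follows a fixed-width prefix.
echo 'user=zewwy' | grep -oP '(?<=user=)\w+'       # -> zewwy
```

The greedy/lazy pair is the one that bites most often: on HTML-ish input, `<.*>` happily swallows everything between the first `<` and the last `>`.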

It amazes me how much time I’ve spent reading up on regex… I knew it was powerful… but man…

MSI installer – Network Resource Unavailable

The Start

The other day my boss came in with a colleague’s laptop and told me that the VPN software had failed to update, and by “failed to update” I mean it completely removed the old version and didn’t install the latest one. Checking the laptop’s ‘Programs and Features’ sure enough showed no sign of the application.

I simply grabbed the latest version of the software installer and attempted to reinstall the application, yet to my dismay the installer complained with the following error: ‘The feature you are trying to use is on a network resource that is unavailable.’ How insightful…

I figured it was a registry issue, as the registry is known to hold old settings from applications that don’t get cleaned up properly. It can be a cesspool on old machines that have been constantly upgraded.

The Dig

There are plenty of references online when it comes to this error. The main one people mention is HKLM\SOFTWARE\Classes\Installer\Products.

Even though I cleaned everything under this key related to the application I was installing, it still failed with the same error.

The Answer

Luckily for me, the error kept specifying the network path it was expecting. I searched the registry for that path, and found a matching key.

After removing the parent GUID based key, the installer ran successfully.

If my memory serves me correctly I believe the problematic key was under:

HKEY_CLASSES_ROOT\Installer\Products
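Since I don’t remember the exact GUID, here’s a hedged sketch of how you could hunt it down again: export the hive with reg.exe, then search the export for the network path the installer keeps naming. The server name, product name, and GUID below are all made up for illustration:

```shell
# I can't run reg.exe here, so fake a tiny export of the hive; on the
# actual Windows machine you would produce it with:
#   reg export HKCR\Installer\Products products.reg
cat > products.reg <<'EOF'
[HKEY_CLASSES_ROOT\Installer\Products\{9A3C0000-1111-2222-3333-444455556666}]
"ProductName"="Example VPN Client"
"LastUsedSource"="n;1;\\\\oldserver\\installs\\vpn\\"
EOF

# Search for the unavailable path; -B2 shows the parent GUID key to delete.
grep -B2 'oldserver' products.reg
```

Once the search surfaces the offending entry, the parent GUID key is the one to remove (after backing it up, naturally).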

SSH Banners

I love SSH… like, I really love it. It’s pretty surprising that Windows only first gained the ability to natively SSH in the recent Windows 10 1803 build. That’s pretty sad. Just tested on my 1803 build… nope… well, I know I have done it before… anyway, the point I wanted to get to here was more about SSH servers.

Now, normally, allowing SSH into a system basically means enabling an SSH service on that system, making it the SSH server. Then you usually use a workstation, like the computer you normally use to browse websites such as this one, to connect to that server with a piece of software (on Windows that’s usually PuTTY)… if I can get that dang native SSH to work (shakes fist)… anyway…

When you enable this service it is pretty powerful, depending on how you configure it and what application is running the service. There are plenty of flavors to choose from (this is pretty common with Linux and open source), which is usually a good thing, since each one is scoped for a certain target audience. In my case I wanted to bring some life back to my old Asus router. I’ve been running DD-WRT on it for a long time, and utilizing the simple command-line interface (embedded Linux) on a decent lil system, otherwise working only as an AP, is a fun lil place to use IRC. 🙂 Find me on Freenode (#Windows-Server, VMware, Cisco, Skullspace, FreeNAS).

Now I figured if I was going to use this again, why not have some fun and redo my login banner. In this case there are two things to consider:

1) The Message of the Day (MOTD) – This displays as soon as a client connects, before it asks for a username. In most cases this is a great place to put your unauthorized-access message. (In my case the MOTD was tied to a file on a read-only filesystem, and I had no intention of compiling my own build, so I decided to use the option to not display it.)

2) The Login Banner – This message displays after you have specified a username.

Now there can be many ways to customize your login banner; you may need to google it based on the SSH server you are using. In my case my router was using Dropbear, and lucky for me they have decent documentation.

In my case I simply had to create a simple text file anywhere and point to it with the -b option: dropbear -b /somepath/banner.file
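Putting that together, a minimal sketch (the banner text and the router path are just examples, adjust them for your own device):

```shell
# Create the banner file; the wording here is just an example.
cat > banner.txt <<'EOF'
*** Unauthorized access prohibited ***
All sessions are logged.
EOF

# On the router, point dropbear at the file with the -b flag (not run here;
# /jffs/etc is an example path, use whatever writable storage your device has):
# dropbear -b /jffs/etc/banner.txt
cat banner.txt
```

Dropbear reads the file fresh on each connection, so you can tweak the banner without restarting the daemon.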

After I created my file I configured my startup script to point to my new banner file. Sure enough now when I log on I see this:

Boo yeah! Now that’s sweet.

Zewwy Joins GitHub!

Actually, I created my account apparently over 6 years ago :O…

Sadly, in those 6 years all I contributed was a couple of really lousy DD-WRT scripts… and that OS was painful, as getting help with it was near impossible.

I’ve updated my blog’s social links to include my GitHub link. I moved my PowerShell Center Write-Host function there. I hope to move the SharePoint script I wrote for the SharePoint migration at work, as well as some other contributions coming up soon. Maybe I’ll fork ghettoVCB and finally do part 3 of my Free Hypervisor Backup series, which I’m sure everyone’s been waiting for so eagerly. 🙂

This was a short one. I hope to provide some more meaningful content shortly. It’s just been a rough couple of days recently.