A certificate chain could not be built to a trusted root authority

Today I tried installing .NET 4.6.2 onto an offline Windows 7 machine to set up Playnite.

Ever since I built my PiCade I’ve been huge into loaders, and almost all of them seem to rely on RetroArch for old game emulation, which is fine by me. Check out those sites for their respective offerings. My PiCade uses Lakka, which is just a different loader that I find is rather well compiled for ARM-based devices such as the Pi. They do offer an x86-64 build, however I found that since it’s a Linux derivative you have to be pretty Linux savvy to do well with it. By that I mean supporting hardware can be a bit more difficult, as you have to be able to get the drivers for certain hardware yourself; in my case, using an old HP laptop with an ATI card inside, it was CPU rendering everything, so N64 and Dreamcast would CHUGGGG even though the system is way more than capable.

Anyway… Playnite only has one dependency for Windows, and that’s .NET 4.6.2, so when I downloaded the offline installer I didn’t expect issues, until…

Error! A certificate chain could not be built to a trusted root authority.

Ughhh ok… Google… wtf does that mean? MS says:

Grab their certificate

Install it using an elevated cmd with…

certutil -addstore root X:\Where\you\saved\the\Cert.crt
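To confirm the cert actually landed in the store before retrying the installer, certutil can verify it too. The friendly name below is just an example based on the usual culprit for this .NET error; match it to whatever certificate you grabbed:

certutil -verifystore root "Microsoft Root Certificate Authority 2011"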

Sure enough, after this I was able to install .NET 4.6.2 successfully.

Fixing WindowsRE

To make a long story short: in my previous post I covered some issues I had with MBR2GPT, in the process of which I fried the Recovery partition, thus ruining my Advanced Startup abilities.

So here’s what happens… As soon as you either A) move the Recovery partition (via GParted) or B) delete it, you’ll have a disabled WinRE.

Checking bcdedit will show Recovery: Yes, and a recovery sequence, but reagentc will state otherwise:

Heck you may as well even wipe the useless BCD settings at this point!

Sooo how do you fix it?

If you did A) and simply moved the Recovery Partition, the fix is pretty easy.

If you did B) and deleted it, then first you’ll need to shrink a bit of space on the disk that hosts the Windows OS files (or on a whole different disk, doesn’t really matter) either way…

I do recall someone recreating the files that live inside the actual recovery folder, but I sadly can’t find that now; this is a good post thread on it, though.

In my case I had another laptop with the same Windows version deployed that still had its recovery partition. So I simply booted a Linux live image and used dd to copy that partition onto a USB drive, then booted the same Linux live image on the laptop with the deleted partition, created a new partition with the exact same sector count, and dd’d the image back into place.
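For anyone wanting the gist of that dd dance, it’s roughly this (device and mount names are examples, not gospel; triple-check yours with lsblk first, since dd has no undo):

# on the donor machine: recovery partition -> image file on a USB stick
dd if=/dev/sda3 of=/mnt/usb/recovery.img bs=4M status=progress

# on the repaired machine: image file -> the freshly created, same-sized partition
dd if=/mnt/usb/recovery.img of=/dev/sda3 bs=4M status=progress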

This however still doesn’t fix WinRE, and simply running “reagentc /enable” fails, stating the WinRE location is null, which, as the picture above shows, is still the case.

I was stuck on this for a while until I stumbled upon this TechNet post (not my own… haha woah!)

After reading it, I followed along by mounting my Recovery partition:

Cleaned my ReAgent.xml from this…

to this…

Set the WinRE location with the /setreimage option on reagentc, and enabled that puppy for the win!
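For reference, those two commands look roughly like this, assuming the recovery partition is mounted as R: and the WinRE files live in the usual spot (adjust the path to match your layout):

reagentc /setreimage /path R:\Recovery\WindowsRE
reagentc /enable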

This was good enough for me! After this all the advanced recovery options were available again, so I could do things like MBR2GPT without using the /allowfullos switch. 😀

Sorry about the crappy pictures and no headers… I clearly was super lazy on this one.

My Delightful Challenges with MBR2GPT

The Story

I’m not gonna cover in this blog what MBR2GPT is… honestly there’s more than enough coverage on this. Instead, mine’s gonna cover a little rabbit hole I went down, then how I managed to move on. This might even be a multi-part series, since I did end up removing my recovery partition in the whole mix-up.

How did I get here?

In this case the main reason I got here was due to how I was migrating a particular machine. Normally I wouldn’t do it this way, but since the old machine was not being reused, I ended up dd’ing (making a direct copy of) the SSD from the old laptop onto the M.2 SSD in the new laptop. The new laptop has more storage capacity, and since the recovery partition was now smack dab in the center of the disk instead of at the end, I wasn’t able to simply extend the partition within the Windows Disk Management utility (diskmgmt).

Instead I opted to use GParted Live, to move the recovery partition to the end of the drive and extend the usual user data partition containing Windows and all that other fun stuff. Nothing really crazy or exciting here.

After the move and partition expansion I ran Windows Check Disk to ensure everything was good (chkdsk /F), and sure enough everything was. This newer laptop uses UEFI and all the fun Secure Boot jazz along with the Windows 10 OS that was on the SSD already, but the SSD partition scheme was MBR… whomp whomp. Of course I knew about MBR2GPT to make this final transition the easiest thing on earth!

Until….

Disk layout validation failed for disk 0

Ughhhh… ok, well I’m sure there are some simple validation requirements for this tool…

  • The disk is currently using MBR
  • There is enough space not occupied by partitions to store the primary and secondary GPTs:
    • 16KB + 2 sectors at the front of the disk
    • 16KB + 1 sector at the end of the disk
  • There are at most 3 primary partitions in the MBR partition table
  • One of the partitions is set as active and is the system partition
  • The disk does not have any extended/logical partition
  • The BCD store on the system partition contains a default OS entry pointing to an OS partition
  • The volume IDs can be retrieved for each volume which has a drive letter assigned
  • All partitions on the disk are of MBR types recognized by Windows or have a mapping specified using the /map command-line option

Ughhh, again, ok that’s a check across the board… wonder what happened…

Well, I quickly looked back at my open pages but couldn’t find it (I must have closed it); someone on a thread mentioned all they did was delete their recovery partition. I ended up doing this, but it still failed on me…

Checking the logs (%windir%\setupact.log)…
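A quick way to cut to the chase in there is to filter for just the errors; findstr is built into Windows:

findstr /i "error" %windir%\setupact.log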

I had errors, such as the partition being too close to the end of the disk, so I shrank and re-extended the partition. After working through most of the random errors in the log, I was stuck on one major error I could not find a good solution to…

Cannot find OS partition(s) for disk 0

So I started looking over all the helpful posts on the internet, like this, and this.

The first one I came across for general help, and it was a nice post, but it didn’t actually help me solve the problem. The second one I actually hit because it was literally dead-on with my issue…

“After checking logs (%windir%/setuperr.log), it was clear it had a problem with my recovery boot option — it was going through all GUIDs in my BCD (Boot Configuration Data), and failed on the one assigned to recovery entry. This entry was disabled (yet still had GUID assigned to it, which apparently led nowhere and that seems to have been the root cause). I searched some discussion forums, and most of them said that I would have somehow create recovery partition.

Fortunately, it turned out to be a lot easier 🙂 Windows has another command line tool, called REAgentC, which can be used to manage its recovery environment. I ran reagentc /info, which showed that recovery is disabled and its location is not set, but it had assigned the same GUID that was failing when running MBR2GPT. So, I run reagentc /enable, which set recovery location and voila, this time MBR2GPT finished its job successfully.”

That’s cool and all but for me…

  1. I removed my recovery partition, so any GUID pointing to it is irrelevant, and it’s gone
  2. reagentc was not showing as an available command, either in my main Windows installation, or in the Windows PE that comes with the 1809 installer ISO I was using.

After farting around for a while, and finding out I’d wasted my time with the ADK and other annoyances around Windows deployment methods and solutions when all I wanted was this simple problem solved, I moved on to another solution, a lil more hands-on…

If he fixed his with reagentc, and it was due to a problem in the BCD, let’s see what we can do with the BCD directly, without third-party tools. Bring on the….. bcdedit!


Microsoft Windows [Version 10.0.17763.437]
(c) 2018 Microsoft Corporation. All rights reserved.

C:\WINDOWS\system32>diskmgmt

C:\WINDOWS\system32>mbr2gpt /validate /allowfullos
MBR2GPT: Attempting to validate disk 0
MBR2GPT: Retrieving layout of disk
MBR2GPT: Validating layout, disk sector size is: 512 bytes
Cannot find OS partition(s) for disk 0

C:\WINDOWS\system32>c:\Windows\setuperr.log

C:\WINDOWS\system32>diskmgmt

C:\WINDOWS\system32>mbr2gpt /validate /allowfullos
MBR2GPT: Attempting to validate disk 0
MBR2GPT: Retrieving layout of disk
MBR2GPT: Validating layout, disk sector size is: 512 bytes
Cannot find OS partition(s) for disk 0

C:\WINDOWS\system32>bcdedit

Windows Boot Manager
--------------------
identifier              {bootmgr}
device                  partition=\Device\HarxxiskVolume1
description             Windows Boot Manager
locale                  en-US
inherit                 {globalsettings}
default                 {current}
resumeobject            {c982d23f-8xx8-11e8-b5xx-ed8385ce2xx2}
displayorder            {current}
toolsdisplayorder       {memdiag}
timeout                 30

Windows Boot Loader
-------------------
identifier              {current}
device                  partition=C:
path                    \WINDOWS\system32\winload.exe
description             Windows 10
locale                  en-US
inherit                 {bootloadersettings}
recoverysequence        {b09bfaxx-6xx4-11e9-98ce-ead29f9a0a02}
displaymessageoverride  Recovery
recoveryenabled         No
allowedinmemorysettings 0x15000075
osdevice                partition=C:
systemroot              \WINDOWS
resumeobject            {c982d23f-8xx8-11e8-b5xx-ed8385ce2xx2}
nx                      OptIn
bootmenupolicy          Standard

Even though recovery was disabled via the “recoveryenabled No” setting, the issue kept happening. My only remaining guess was the recoverysequence entry, but how can I delete it… ohhhhhh

C:\WINDOWS\system32>bcdedit /deletevalue {current} recoverysequence
The operation completed successfully.

C:\WINDOWS\system32>bcdedit

Windows Boot Manager
--------------------
identifier              {bootmgr}
device                  partition=\Device\HarxxiskVolume1
description             Windows Boot Manager
locale                  en-US
inherit                 {globalsettings}
default                 {current}
resumeobject            {c982d23f-8xx8-11e8-b5xx-ed8385ce2xx2}
displayorder            {current}
toolsdisplayorder       {memdiag}
timeout                 30

Windows Boot Loader
-------------------
identifier              {current}
device                  partition=C:
path                    \WINDOWS\system32\winload.exe
description             Windows 10
locale                  en-US
inherit                 {bootloadersettings}
displaymessageoverride  Recovery
recoveryenabled         No
allowedinmemorysettings 0x15000075
osdevice                partition=C:
systemroot              \WINDOWS
resumeobject            {c982d23f-8xx8-11e8-b5xx-ed8385ce2xx2}
nx                      OptIn
bootmenupolicy          Standard

C:\WINDOWS\system32>mbr2gpt /validate /allowfullos
MBR2GPT: Attempting to validate disk 0
MBR2GPT: Retrieving layout of disk
MBR2GPT: Validating layout, disk sector size is: 512 bytes
MBR2GPT: Validation completed successfully

No way! It worked! Secure Boot, here I come!

I hope this blog post helps someone else out there!

Summary

Check the logs: C:\Windows\setuperr.log

There might have been a way I could have moved forward without even deleting my recovery partition. I just happened to know it was not needed for this system, so it was worth a try. However, had I looked in the log before doing that, I may have avoided this rabbit hole.

But…. I learnt something, and that was cool.

Note: don’t delete your recovery partition. Bringing it back is an extremely painful process without a re-install. I will cover how to accomplish this in my next post. I was actually going to do this yesterday, however other work came up. I will however complete the blog post this week… I promise! (It’s like I’m talking to myself in these posts… like no one actually reads these… do they?)

A general system error occurred: Launch failure

Failure to Launch

Sound the Alarm! Sound the Alarm!

*ARRRRRRREEEEEEEEERRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR
ARRRRRRREEEEEEEEERRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR

and one heck of a day sweating bullets, am-I-right?!

Also sorry about the lack of updates, with spring and work, it’s been hard to find time to blog. Deepest apologies.

The Story

Well, it’s Tuesday, so it’s clearly a day for a story, and boy do I have a story for you. I was going about my usual way of working… endlessly, to meet my goal… that never-attainable goal of perfection…. anyway… I had completed a couple major tasks of going from vCenter 5.5 to 6.5U2, and while I have yet to blog about that fun bag (cause I had used my own PKI and certificates, so the migration scripts VMware provides pooped the bed till I replaced them with self-signed)… but I digress, this has nothing to do with that… well, sort of.

Where was I… oh right… I was vMotioning a couple VMs to some new 6.5U2 hosts I had set up on new hardware (yeah buddy, this was a whole new world (why did I just say that in the voice of Princess Jasmine?)), but alas, the final VM was not meant to move, for it had stalled at 20 percent. (I had this happen once before during my deployment and discovered some interesting things about multi-NIC vMotions; I probably should blog about that too, but alas my time is running short, and I still have many other things to tackle and blog about.) And then……..

“A general system error occurred: Launch failure”

Ugh…. I could have jumped right to the logs, but I jumped on Google instead…

CloudSpark apparently came across this issue recently and posted in all the many places he could think of… Reddit, and the VMware forums.

The VMware post got a tad long, but it was more the post on Reddit that got me wondering….

“Haven’t seen this personally, but there’s a few things via google on the “Failed to connect to peer process” error that suggest it could be due to running out of storage. Specifically in one scenario, the /tmp/vmware-root folder on the ESXi host fills up with logs.” – Astat1ne

When I SSHed into the host and ran “df -h” I was surprised to get back an error:

esxcli returned an error: 1

Something to that extent, anyway. I ended up moving all the VMs back off the host, and swapped out the SD card the ESXi software was installed on (I had a copy of the SD card with a clone of the ESXi software and host configuration that was created right after the host was installed and configured).

Sure enough, after powering it back on, “df -h” returned clean. I was then able to vMotion all the VMs back onto the set of new hosts.
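For reference, if you ever suspect the full-ramdisk scenario Astat1ne described, these are the quick checks I’d run from the ESXi shell (assuming shell/SSH access is enabled):

df -h                        # datastore / VMFS usage
vdf -h                       # ramdisk usage, including /tmp
ls /tmp/vmware-root | wc -l  # how many leftover logs are piling up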

Def a generic error message I’ve never seen before, and thinking about it now, it’s rather comical. (I know when you’re in the middle of the problem it’s not so funny)…

BUT it sure is now! :D…. this is so going to come back to bite me….

(await updates here post April 30th)

Exporting OPNsense HAProxy Let’s Encrypt Certificates

You know… in case you need it for the backend service… or a front end IDS inspection… whatever suits your needs for the export.

Step 1) Locate the key and certificate; use the ACME logs!

cat /var/log/acme.sh.log | grep "Your cert"

*No, “Your cert” is not a placeholder for your cert’s name; actually use the line as-is.

Step 2) Identify your Certificate and Key

Step 3) Run the openssl command to create your file:

openssl pkcs12 -export -out certificate.pfx -inkey privateKey.key -in certificate.crt
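(You’ll be prompted to set an export password on the .pfx; whatever imports it later, e.g. the Windows certificate import wizard, will ask for that same password.)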

Step 4) use WinSCP to copy your files to your workstation

*Note: use SFTP when connecting to OPNsense; for some reason SCP just no worky.

Testing Active Sync

I’ll keep this post short.. I swear this time haha

The Story

I recently blogged about using OPNsense as a reverse proxy for Exchange. This was really nice, and I was able to access Outlook Anywhere, literally from anywhere, with OWA. However…

The Problem

For some reason I could not connect with ActiveSync… even when I had my phone on pretty much the same network. I thought it might have had to do with the self-signed certificate (I should have known it was not, since selecting “Accept all certificates” under the connection options still didn’t work), so I wastefully exported the certificate and key from my OPNsense server, imported them into the Exchange ECP, and assigned them to the IIS and SMTP services. (I’m probably going to change these back to self-signed, as I don’t really have intentions of completing these steps every 60 days.)

Since I didn’t want to use my own account to test on the Microsoft test site (this thing is a life saver), I used an account and email I was setting up for my colleague… and it passed… I was shocked. (At first I thought it was cause of the certificate changes… I soon found out it was not.)

So I tested the same connection settings on my phone and it worked!

Woo Hoo!

The Real Problem and Solution

My happiness was short-lived once I attempted to add my own account…. which still failed…. what the heck? I had all the settings the same exc…ep…t… my account…. wait a minute…. Google… WTF! Admins can’t use ActiveSync?!?!? (From what I can gather, it’s the AdminSDHolder protection on admin accounts blocking the permission inheritance ActiveSync needs.) Why isn’t that specified more clearly in the documentation?! I was aggravated about this for days… cause of something that was sooo simple!

I’ll cover exporting and importing certificates for other uses in another blog post. I just wanted to get this one out, cause even though I had configured everything correctly in my previous blog post about using OPNsense as a reverse proxy, I wanted to follow up on why ActiveSync wasn’t working for me. Everything in the online guides made it sound so simple, and it rather is… until you find out a little secret GEM MS didn’t tell you about…

Zewwy has not one but two Epiphanies

The Story

Nothing goes better together than a couple moments of realization and a fine blog story. It was a fine brisk morning on the shallow tides of the Canadian West… as the sunlight gazed upon his glorious cheek… wait wait wait… wrong storytelling.

The First Epiphany

First, to get some reference, see my blog post here on setting up OPNsense as a reverse proxy; in that case I had no authentication, and my backend pool was a single server, so nothing oo-lala going on there. I did however re-design my network to swap my old dynamic IP for my static one. One itsy bitsy problem: I’m restricted on physical adapters, which isn’t a big deal with trunking and VLAN tagging and all that stuff… however, I am limited on public IP addresses, and on the number of services that can listen on the standard ports… which is, well, one for one… If it wasn’t for security, host headers would solve this issue with ease at the application layer (the web server or load balancer). With the requirement of HTTPS there’s just one more hurdle to overcome… but thanks to the Server Name Indication (SNI) TLS extension (around for well over ten years now, man time flies) we can provide individual certs for each host header being served. Mhmmm yeah.

This of course is not the epiphany… no no, it was simply how to get the HAProxy plugin on OPNsense configured to use SNI. All the research I did, which wasn’t too much, just some quick Googling… revealed that most configurations were done manually via a conf file. Not that I have anything against that *cough human error due to specialized syntax requirements*… it’s just that UIs are sort of good for these sorts of things….

The light bulb on what to do didn’t click (my epiphany) till I read this blog post… from stuff-things.net… how original, haha.

It was this line when the light-bulb went off…

“All you need to do to enable SNI is to be give HAProxy multiple SSL certificates.” Also note the following he states: “In pass-through mode SSL, HAProxy doesn’t have a certificate because it’s not going to decrypt the traffic and that means it’s never going to see the Host header. Instead it needs to be told to wait for the SSL hello so it can sniff the SNI request and switch on that.” This is a lil hint of the SSL-inspection can of worms I’ll be touching on later. Also, I was not able to figure out specifically how to configure pass-through SSL using SNI… might be another post, however at this time I don’t have a need for that type of configuration.

Sure enough, since I had multiple certificates already created via the Let’s Encrypt plugin… all I had to do was specify multiple certificates… then, based on my “Rules/Actions/Conditions” (I used host-based rules to trigger different backend pools): zewwy.ca -> WordPress and owa.zewwy.ca -> Exchange server.

And just like that I was getting proper certificates for each service, using unique certs… on OPNsense 19.1 and the HAProxy plugin, with alternative back-end services… now that’s some oo-lala.
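For the curious, under the hood this boils down to something like the following raw HAProxy config. This is only a sketch: the cert paths and backend names are made up, and the plugin generates its own naming scheme.

frontend https-in
    bind :443 ssl crt /usr/local/etc/haproxy/zewwy.ca.pem crt /usr/local/etc/haproxy/owa.zewwy.ca.pem
    use_backend wordpress if { ssl_fc_sni -i zewwy.ca }
    use_backend exchange if { ssl_fc_sni -i owa.zewwy.ca }

HAProxy picks the matching cert automatically based on the SNI name the client sends, which is exactly the “just give HAProxy multiple SSL certificates” trick from the quote above.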

My happiness was short-lived when a new issue presented itself as I went to check my site via HTTPS:

The Second Epiphany

I let this go the first night, as I accepted my SNI results as a victory. But by the next day this issue was already starting to bother me… and I wanted to know what the root of the issue was.

At first I started looking at the Chrome debug console… and noticed it complaining about some of the plugins I was using, and that they were seen as unsafe.

But those were not the droids I was actually after… it was the line (blocked:mixed-content) that set off the light bulb…

So: I was doing SNI on the SSL listener, but the “Rule/Action” I was specifying pointed to the backend that used the plain-HTTP real server. I wanted to keep regular HTTP access to my site open too, not just for an HTTP->HTTPS redirect, and I had another listener available for exactly that. At this point it was all just assumptions; from some posts I read, you can have an HTTPS load balancer hosting a web page over HTTPS while the back-end server is just HTTP, so I wasn’t sure on that one, but I figured I’d give it a shot.

So first I went back to my old blog post on getting HTTPS set up on my WordPress website, but without the load balancer… turns out it was still working just fine!

Then I simply created a new physical server in HAProxy plugin,

created a new back-end Pool for my secure WordPress connection

created a new “Rule/Action” using my existing host header based condition

and applied it to my listener instead of the standard HTTP rule (Rules on the SSL listener shown in the first snippet):

Now when we access our site via HTTPS this time…

Clean, baby, clean! Next up: some IDS rules and inspection to prevent brute force attempts, SQL injections… cross-site scripting… yada yada, all the other dirty stuff hackers do. Also, those 6 cookies, where did those come from? Maybe I’ll also be a cookie monster next post… who knows!

I hope you enjoyed my stories of “ah-ha moments”. Please share your stories in the comments. 😀

PAN ACC and WildFire

The ACC

For more in-depth detail, check the Palo Alto Networks page on the topic. The Palo Altos are very good Layer 7 firewalls which allow amazing granular control, as well as the use of objects and profiles to provide amazing scalability.

However, if you’ve been following along with this series, all I did was set up a basic test network with a single VM going to a couple simple websites. Yet when I checked my ACC section I had a risk rating of 3.5…. why would my rating be so high? Well, according to the charts it was the riskiest thing on all the Internets…. DNS. While there have been DNS tunneling techniques discussed, one would hope PAN has cataloged most DNS sources attempting to utilize this. Guess I can test that another time…

You may notice the user is undefined, and that’s because we have no User ID servers specified, or User ID agents created. Until that’s done, it’s one area of granular control we won’t be able to utilize, which will also be covered in yet another post.

I did some quick searching to see why DNS was marked so high, but the main thing I found was this reddit post.

“Drop the risk of applications like DNS ;)” – akrob · Partner · 5 months ago

Hardy har har. Well, I can’t find much on that, but I guess the stuff I was talking about above would be the main reason I can think of at this time.

The better answer came slightly further down which I will share cause I find it will be more of value…

So we’ve got the power; it just takes a lot of time to tweak and adjust for personal needs. For now I’ll simply monitor my active risk with normal use and see how it adjusts.

For now I just want to enable WildFire on the XP VM’s internet rule to enable the default protection.

The WildFire

Has such a nice ring to it… even though wildfires are destructive in nature… anyway… this feature requires yet another dedicated license, so ensure you have all your auth codes in place and enabled under Device -> Licenses before moving on.

Now, this is similar to the PAN URL categories I covered in my last post. Yes, these are coming out at a quicker-than-normal pace, as I wish to get to some more detailed stuff, but I need these baselines again for reference’s sake. 😀

Go under Objects -> Security Profiles -> WildFire Analysis

You will again see a default rule you can use:

The name’s self-explanatory; the location, I’m not sure what that’s exactly about; the apps and file types are covered in more detail here.

To use it, you again simply have to select which profile to use under whatever rules you choose in the security rules section: Policies -> Security

Now, you can see that lil shield under the profile column; that’s the PAN URL filter we applied. Now, after we apply the WildFire profile…

we get a new icon 😀

Don’t forget to commit…. and now we have the default protection of WildFire. Note this won’t help when users browse websites and download content from sites secured with HTTPS. The Palo Alto is unable to determine what content is being generated or passed over those connections; all the PAN FW knows are the URLs being used.

Testing

Following this site, which has links to download test files that are generated uniquely each time to provide a new signature, so as to trigger the submission. It’s the collaborative work through these submissions that makes this system good.
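(If you’d rather grab a sample from the command line, PAN hosts a test-file endpoint for exactly this purpose; as far as I know the public URL for the PE sample is:)

curl -O http://wildfire.paloaltonetworks.com/publicapi/test/pe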

Checking the WildFire Submissions section under the Monitor tab:

There they are; they have been submitted to Palo Alto WildFire for analysis. I’m sure they probably have some algo to ignore these test files in some way, or maybe they use them to analyze how many people test; who knows what can all be done with all that metadata…. mhmmm

Anyway, you may have noticed that the test VM is now Windows 7, and that the user is still not defined, as there’s no user agent or LDAP servers; since this machine is not domain joined, those wouldn’t help anyway, and an agent would be required, AFAIK, to get the user details. I may have a couple features to cover before I get to that fun stuff.

Summary

As you may have noticed, the file was still downloaded on the client machine, so even though it was submitted, there was nothing stopping the user from executing the downloaded file, well, at least trying to. It would all come down to the nature of the executable and what version of Windows was in use when it was clicked, etc, etc. At that point you’d have to rely on another layer of security, Anti-Virus software for example. Oh yeah, we all love A/V, right? 😛

You may have also noticed that there were 3 downloads but only 2 submissions. In this case, since there are no SSL decryption rules (another whole can of worms I will also eventually cover in this series… there’s a lot to cover haha), when the test file was downloaded via HTTPS the firewall could not see that traffic and inspect the downloaded contents against any signatures (cause privacy). Another reason you’d have to again rely on another layer of security here: again A/V, or updates if a certain vulnerability is attempted to be exploited.

So for now, no WildFire submissions will take place on secure traffic until I can snoop on it (and I think you can already see why there’s a controversy around this).

Till my next post! Stay Secure!

PAN URL Categories


Heyo! So today I’m gonna cover URL categories. Obviously Uniform Resource Locators are nothing new, and even more so categories, hahah. When you know existing ones and have classified them, you can do some amazing things. What’s the hardest part? Yes… proper classification of every possible URL: near impossible, but with collaboration, feasible. In this post I’m going to cover how to set this up on a Palo Alto Networks firewall, cover some benefits, a couple annoyances, and ways to resolve them when possible…. Let’s get started!

License Stuff

Now, when I first started with Palo Alto Networks firewalls, they were using BrightCloud… here’s a bit of detail from here:

Palo Alto Networks firewalls support two URL filtering vendors:
PAN-DB—A Palo Alto Networks developed URL filtering database that is tightly integrated into PAN-OS and the Palo Alto Networks threat intelligence cloud. PAN-DB provides high-performance local caching for maximum inline performance on URL lookups, and offers coverage against malicious URLs and IP addresses. As WildFire, which is a part of the Palo Alto Networks threat intelligence cloud, identifies unknown malware, zero-day exploits, and advanced persistent threats (APTs), the PAN-DB database is updated with information on malicious URLs so that you can block malware downloads, and disable Command and Control (C2) communications to protect your network from cyber threats.
BrightCloud—A third-party URL database that is owned by Webroot, Inc. that is integrated into PAN-OS firewalls. For information on the BrightCloud URL database, visit http://brightcloud.com.
I’m not exactly sure if BrightCloud is going to continue to be supported or not; PAN seems to have instead stuck with their own in-house URL DB, which of course requires a license, so under Device -> Licenses ensure you have an active PAN URL-DB license.

For a list of all the class types you can use, see here. (PAN login required)

Once you get this out of the way, let’s get into the good stuff.

Still under the Licenses area, click the Download Now link.

Considering I have nothing… Yes…

Not sure why they have a region selection… but alright…

Yay!

Now we are ready to start using them!

Objective Profiles… I mean Object Profiles

Yeah… click on the Objects tab… look under Security Profiles… URL Filtering.

There lies a default profile, which allows 57 categories while blocking only 9. For a simple test I’ll use this, the blocked categories are:

  1. abused-drugs (LOL, cause other poisons like Tobacco and alcohol are allowed, cause laws)
  2. adult (I’m assuming this is a business friendly term for porn)
  3. command-and-control (duh)
  4. gambling (duh)
  5. hacking (interesting class definition)
  6. malware (duh)
  7. phishing (duh)
  8. questionable (duh)
  9. weapons (awwwww)

Well, that seems like a fairly reasonable list. Creating your own allow and block listing is just as easy as creating a new profile and defining each class accordingly, and yes, you can easily clone an existing profile and change one or two categories as required.

The allow and block lists are specified under the overrides areas, if you happen to need to allow or block a URL before it can be officially re-classed by PAN-DB. As quoted by the wizard, “For the block list and allow list enter one entry per row, separating the rows with a newline. Each entry should be in the form of “www.example.com” and without quotes or an IP address (http:// or https:// should not be included). Use separators to specify match criteria – for example, “www.example.com/” will match “www.example.com/test” but not match “www.example.com.hk””. Which makes sense: the security rules area determines what is allowed as far as protocols go, while this simply states which addresses (DNS or IP based) to allow or block. In the case of DNS, till proper classification.

Checking a URL for a Category

To check an address’s class, check PAN’s site for it here. If you find a site is mis-classed, you can send an email to the Palo Alto Networks team; they will verify the re-class and update PAN-DB accordingly. As far as I can tell, this one doesn’t actually require a login.

Using IT!

Alright, alright, let’s actually get to some uses. Now, if you were following my series, see my last two posts here and here for reference material. Under the security rule Test Internet, on the final tab, Actions, we did not define any profile settings; this is where the rubber hits the road for the first time.

Pick Profiles. We’ll cover groups a bit later (it’s just a group of profiles, who’d have thought).

As you can see, this expands the window to show all the profiles you saw under the Objects -> Security Profiles area; in this case we are just going to play with the URL filtering.

Now, once I apply this on the internet rule… productivity for my test XP machine should go up… muahahah and…

HAHAHAHA you lazy mid-2000s virtual worker… you can’t go gambling, get back to work!

Summary

As you can see, URL categories can be quite useful. Unfortunately, I did want to cover more granular examples, such as only allowing a server to access its known update-server URLs. Hopefully I can update this post to cover that as well.

For now I hope you enjoyed this quick blog post. In my next post I hope to cover how this isn’t an IDS of any kind at this point, but a single layer of the multi-layer security onion. Stay tuned for more. 🙂


Basic Setup of a PAN VM 50

Quick Intro

Heyo! So in my last post we went through a basic install and update of a Palo Alto firewall VM. Now it’s time to set up a dataplane NIC, some zones, and some rules to allow some basic internet.

I decided to do a very basic setup of one NIC and was surprised to find I could not get any ping responses, either from the firewall, or from the firewall making any requests. I had a memory of talking to a smart fellow once about this, and sure enough…

A Caveat

You have to enable promiscuous mode on the VM port group (VMPG) the NIC is a member of…
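If you prefer the ESXi shell over the vSphere client for this, something like the following should do it; the vSwitch and port group names here are from my lab, so swap in your own:

esxcli network vswitch standard policy security set --vswitch-name=vSwitch1 --allow-promiscuous=true
esxcli network vswitch standard portgroup policy security set --portgroup-name=Test --allow-promiscuous=true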

I know it sounds ridiculous, and it is, but without it, nothing flows through the PA VM. Quick update on this: I didn’t like this idea one bit, so to ease the risk I did some digging and found something rather interesting. According to this (requires a PA login), this hasn’t been needed since PAN-OS 7, so I disabled it on my test network…

and the pings dropped… ugh… ok… According to the post, PAN-OS 7 and onward uses this setting by default, but it can be changed under:

Device > Setup > Management  > General Settings

Enabled by default, huh… doesn’t seem to be enabled for me…

Enable it, commit. Now MAC address changes will take place; in this case I did lose connection to my external IP, but pinging from my PA VM to my gateway fixed that quickly.

And now, sure enough, with promiscuous mode rejected in my vSwitch settings…

Oh thank goodness I can go to bed knowing I didn’t suggest a terrible practice!

Basic Setup

Look at this test network… it was using an OPNsense router/firewall, but all these guys are currently shut down. Let’s spin one up and make the PA VM 50 its new gateway…

Adding the required Virtual NICs

Then add a new NIC to the PA VM (since it only came with two by default: the first being the mgmt NIC, and the second I connected to my DC).

This should be the second Interface under the PA VM Network Tab.

K looks like we should be good, power on the PA VM again.

Configuring the Interface

Once in the PA Web interface, navigate to Network -> Interfaces.

Again this will be Ethernet 1/2, although it is the third NIC on the VM.

Once we click on Eth1/2 and configure it properly, it should show up green as well. I have already configured an interface mgmt profile under Network -> Network Profiles -> Interface Mgmt: Ping checked off, open subnet permitted.

Also a simple Zone, simply named Test.

The first thing we have to define is the type (Layer 3); we want a dedicated collision domain, please. 😀 In this case I’m simply interested in confirming PA-to-client connectivity in the dataplane. We will place the NIC in the default router as well as the Test zone.

Then we click on IPv4 to set an IP address up for this layer 3 NIC.

Specifying /24 is important here, else any IP address without a defined subnet is treated as a /32. Then, under the Advanced tab, select the interface mgmt profile to allow it to be pingable.
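Side note: the same interface config can also be done from the PAN-OS CLI in configure mode; a rough sketch, with an example IP (substitute your own):

set network interface ethernet ethernet1/2 layer3 ip 192.168.20.1/24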

Once committed it should come up green.

and should be reachable by VMs in the same subnet….

Yay, it is! But alas, this is not enough to give this VM an internet connection. Remember that default router we connected the NIC to? Well, it has no default route defined, or, well, any routes for that matter. However, because I connected both NICs (my ZewwyDC and Test) into the same router, even without any routes defined, the XP VM can ping the ZewwyDC IP of the PA VM.

Between the security rules and the fact that the server and VMs use a different gateway than what the PA VM has for its test IP in that subnet, the responses would never come back to the PA VM anyway, never mind that we didn’t define any security rules to allow it. It was simply because I had the “allow ping all” interface mgmt profiles on all the NICs connected to the same router that those ping requests worked.

Since I’m not interested at this very moment in moving the DC’s internet, I’ll give the PA VM a public IP address of its own and then create a NAT rule to allow the Test XP VM an internet connection.

The Internet Interface

Also, since I don’t want to keep having to shut down my PA to add NICs, I guess this time I’ll populate it with all the vNICs it will ever be able to use… (8)

I did this mainly cause I wanted the last interface on the Web UI to be used for this internet connection

So you might remember my blog post on getting another NIC in my hypervisor host; I was going to use it with OPNsense, but my physical PA has become more useless than an online-multiplayer-only game with all its servers shut down. So this NIC is to become a replacement as I re-purpose the PA’s chassis for another epic build I plan to blog about this summer :D!

Interface Mappings:

Well, now that I got my MS Paint fun out of the way, you can get an idea of which NIC I want this PA VM to have one of its internet connections on: Eth9.

I created a new Zone: Deadly Internet, and connected it to our default router:

Then I configured the public IP I had originally configured for my OPNsense VM by clicking on the IPv4 tab… and to help make sense of this, some more Paint fun 😀

I also applied my Allow Ping All interface mgmt profile so I can verify that the interface is not only up (green) but actually reachable. Sure enough, after a commit… the interface shows green (I also checked off Connected and Connect at power on under the VM settings).

Mhmmmm, not reachable…. ohhh right, the router’s default gateway….

Default Route

Since we are configuring this statically and not via DHCP from our ISP, this info is also provided to you.

Network -> Virtual Routers -> Default (in my case) -> Static Routes

So as you can see: anything it doesn’t know, the next hop is the IP my ISP gave me as my default gateway.
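The rough CLI equivalent, if you’re into that (router name “default” as in my setup; the gateway IP is a placeholder):

set network virtual-router default routing-table ip static-route default destination 0.0.0.0/0 nexthop ip-address 203.0.113.1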

Commit.

Alright, my attempts to ping it are not successful, which happened to me the last time I configured all this, and I had to reboot the modem. But just before I do that, I’m going to log into the PA VM via SSH and attempt to ping out via that interface:

Alright, well, last time I got to this point I had everything triple-checked; I contacted my ISP support and we ended up rebooting the modem, which is in bridge mode. Since I assume the MAC address table isn’t being updated accordingly, or, I dunno, it’s stuck with the old MAC… I suppose I could test this theory by spoofing that NIC with the other NIC’s MAC…. mhmmmmmm, I think I’m gonna humor my thought here, teehee…

Dang, it won’t let me change the MAC while it’s on. Power off PA VM… set MAC… spoofed from the old OPNsense VM… power on VM… and…. nope, I can’t manually assign it; it’s a MAC outside the range ESXi allows for manual assignment. So, set back to automatic, and boot; if no pings after this, rebooting the modem… sigh.

Alright, so pinging my IP still no work, even after the reboot. I created a firewall rule assuming it was that… nope, still no ping response even after committing that. Odd, cause I didn’t see anything under the traffic log on the firewall itself… so I logged into the firewall again via SSH, but this time I did manage to get a response from my gateway device, wooo yay… ok… so let me try pinging it again externally…. Yes! There it is! Had me worried a bit; I had all bases covered so it should have worked, and now it is, w00t!

This is all well and good; however, my test VM on the test switch still won’t be able to reach out. It should, however, be able to reach what will become its NATed IP address when it comes time to roam the interwebs.

Whoops that wasn’t possible till I expanded the scope of my security rule:

The firewall is very finicky about allowing packets through zones and subnets, so ensure you create rules accordingly. Normally I like to have a deny-all rule at the bottom of my list; these would sit just above the built-in rules:

However, there are some caveats around doing that which I hope to cover at some point in my Palo Alto series blog posts. For now we won’t go there yet; just be aware of these rules: any packets that reach them are not shown under the traffic tab (IIRC).

However, now that we have got all that out of the way, we can finally create the NAT rule (as well as a security rule) we need for getting internet access to our test subnet.

NATing

It’s time to get into the baby potatoes… mhmm, who doesn’t love some baby potatoes…. anyway, I won’t be covering all the possible NATs that can be accomplished (although I do plan on dedicating a whole post to those in this series as well); we will do a basic internet NAT here to get us started.

Policies -> NAT -> Add

Pretty straightforward configuration here: anyone from my test subnet in my test zone will be NATed out my internet connection on Eth9, using the IP address I have assigned it, which came from my ISP.
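The CLI flavor of the same rule looks roughly like this; the rule name, zones, subnet, and interface are from my lab, so treat it as a sketch:

set rulebase nat rules Test-Internet-NAT from Test to "Deadly Internet" source 192.168.20.0/24 destination any service any source-translation dynamic-ip-and-port interface-address interface ethernet1/9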

Security Rules!

I hope you liked my pun there; if not, all’s good, let’s set up some security rules…

Policies -> Security -> add

To make this more scalable, instead of adding the subnet’s IP range rule by rule every time, I added an object…

The User tab is skipped, as we won’t get into that meat today…

Application: Web Browsing, DNS, Ping, ICMP

Service: Application Default
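And the CLI sketch of the same security rule, same disclaimers as above (the object name Test-Subnet is whatever you called your address object; the application list mirrors the one above):

set rulebase security rules Test-Internet from Test to "Deadly Internet" source Test-Subnet destination any application [ web-browsing dns ping icmp ] service application-default action allow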

Now commit, and we should hopefully be able to ping out to an external DNS provider, like 8.8.8.8, from our test subject VMs… muhahahahaha

Boo yea! There we go.. and internet… whoops… forgot to allow DNS lol….

Mhmm, connection reset, eh? Well, I guess we need another application defined… oh right, SSL.

finally….

Update

For some reason, a couple days later I noticed I was unable to access Google, even though I had accessed it before, as the above screenshot shows.

Then I created an open rule and I was able to access Google, and found out that for Google to work it’s defined as its own App-ID (google-base). I like granular control, but I should be able to select web-browsing and have that group the sub-apps to make my web browsing experience work… On top of that, I noticed the same reset connection errors going to YouTube and Reddit… ok, this is getting a bit redic…

Here’s my new ridiculous rule, just to go to Palo Alto’s own site that referenced a YouTube video, Google itself, and one Reddit result I was interested in… Holy eff, man…

Setting the Host Name

Device -> Setup -> General Settings

Here you can enter the host name, domain name, login banner, timezone, and a couple other general settings:

Awesome; even though it appeared squished after pasting, it still applied 😀

DHCP

It’d be ridiculous to expect those systems in the Test network to configure themselves; let’s give them a hand with good ol’ DHCP.

Network -> DHCP -> add

Select the interface (in our case Eth1/2), enter a range in the IP Pools, and click OK.

Commit, it’s that easy, once created there’s a link to show the IP allocations. 😀

If you need to add custom DHCP options, just click the Options tab. Which you will, for things like the gateway and DNS servers 😛

Summary

Well, I hope you enjoyed this blog post. We got some basic things done: some zones, some policies, some new interfaces, objects. Yet we haven’t even gotten into the real meat and potatoes, like WildFire profiles, URL cat profiles, and all those other fun things we will get to soon.

The idea behind the first couple basic blog posts is to just get our baseline going, so when it comes to the more complex stuff I have some reference material already available for those that need it, as to exactly “how I got here“.

In my next post I’ll cover using some of the great features; some of these are provided with a standard license, others are licensed separately per your needs and requirements. Since I got a whole lab bundle for educational purposes, I’ll get to post about all the goodies soon. 😀

Stay Tuned!