Working on PowerShell scripts (ISE) w/ GitHub

GitHub

So as you all probably know, GitHub has been acquired by Microsoft. I had initially groaned at this acquisition, as a lot of what Microsoft has done lately has really bothered me (locking down APIs to O365 and not providing them on-prem, for example), but they have also made some good moves… .NET Core 2.0 and all the open source initiatives are a nice change of pace.

And to top that off with some sugar, how about private repositories for free members! Yeah, that's right; now that this is an option I'm going to use GitHub more. Now, I've played with it before, however this time I wanted to write this up for my own memory. Hopefully it helps someone out there too.

Let’s have some fun saving our PowerShell scripts on GitHub!

PowerShell ISE and GIT

Dependencies

So for this demo you’ll need:

1) A GitHub Account (Free)
2) PowerShell ISE (Free with Windows)
3) Git for Windows

First, install and configure Git for Windows. Mike previously covered this topic in another blog article. In this scenario, I ran the Git installer elevated so I could install it into the Program Files folder, and I took the option to add the Git path to the system PATH environment variable.


Make sure that you’ve configured Git as the user who is running PowerShell (I ran these commands from within my elevated PowerShell session):
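For reference, the identity setup is along these lines (swap in your own name and email):

git config --global user.name "Your Name"
git config --global user.email "you@example.com"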

4) Install the Posh-Git PowerShell module from the PowerShell Gallery:
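From an elevated session it's a one-liner (add -Scope CurrentUser if you're not elevated):

Install-Module -Name posh-git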

The Fun Stuff

So I originally followed this guy's blog post on how to accomplish this.

Now, I had already installed Git for Windows, so I was set there.

PowerShell Profiles

I liked the part where he had altered his console display depending on where he was located, to avoid confusion. However, I wasn't exactly sure what he meant by Profiles; a lil searching and an education session later, I was able to verify my profile path:

$profile
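If that file doesn't exist yet, a quick sketch to create it before editing:

if (-not (Test-Path -Path $profile)) {
    New-Item -Path $profile -ItemType File -Force
}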

Then simply edit that Microsoft.PowerShell_profile.ps1 with Mike's script:

Set-Location -Path $env:SystemDrive\
Clear-Host
$Error.Clear()
Import-Module -Name posh-git -ErrorAction SilentlyContinue
if (-not($Error[0])) {
    $DefaultTitle = $Host.UI.RawUI.WindowTitle
    $GitPromptSettings.BeforeText = '('
    $GitPromptSettings.BeforeForegroundColor = [ConsoleColor]::Cyan
    $GitPromptSettings.AfterText = ')'
    $GitPromptSettings.AfterForegroundColor = [ConsoleColor]::Cyan
    function prompt {
        if (-not(Get-GitDirectory)) {
            $Host.UI.RawUI.WindowTitle = $DefaultTitle
            "PS $($executionContext.SessionState.Path.CurrentLocation)$('>' * ($nestedPromptLevel + 1)) "
        }
        else {
            $realLASTEXITCODE = $LASTEXITCODE
            Write-Host 'PS ' -ForegroundColor Green -NoNewline
            Write-Host "$($executionContext.SessionState.Path.CurrentLocation) " -ForegroundColor Yellow -NoNewline
            Write-VcsStatus
            $LASTEXITCODE = $realLASTEXITCODE
            return "`n$('$' * ($nestedPromptLevel + 1)) "
        }
    }
}
else {
    Write-Warning -Message 'Unable to load the Posh-Git PowerShell Module'
}

Now that we’ll have the same special console to avoid confusion let’s link a directory!

Linking a GitHub Repo to Your Local Directory

Then I cloned my new private Repo:

git clone https://github.com/Zewwy/Remove-SPFeature Remove-SPFeature -q

That felt awesome…

Nice, nice…

Opening scripts from the ISE

Alright. Well now that we have a repo, and are in it, how do I open a file in the very ISE we are running, to edit it? Mike didn't exactly cover this, 'cause I suppose to him this was already common knowledge… well, not to me, haha. It's actually pretty simple once you know how:

psEdit .\Remove-SPFeature.ps1

Woah! Epic. Lengthy scripts can be bothersome to navigate, so make sure you utilize regions (with matching #endregion markers) to give yourself quick, named areas to jump to; then you can use this command in the ISE to collapse all regions once a script is loaded:

$psISE.CurrentFile.Editor.ToggleOutliningExpansion()
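For reference, a region is just a pair of comment markers around a named block, something like:

#region Load the SharePoint snap-in
Add-PSSnapin -Name Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
#endregion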

Let's start making some changes… *changes made*

Committing and Pushing

Get your mind out of the gutter!

Now, I had originally done a git push and instantly got an "Everything up-to-date" alert. Awesome, I did not have to fight through his whole spiel about auth (I had gotten a prompt the very first time I attempted to clone my repo, requesting me to log in to my GitHub account, so my tokens were good from that point on). However, I did make some updates to one file and was instead presented with this after a commit:

Again, a bit of searching and I was able to find the answer; it usually seems to come down to some form of ignorance, and that is why I'm doing this… to learn 😛

Now again, I got a bit confused at how this worked, and when I did some searching I discovered:

Don't do a "git commit -a" from the ISE; it'll crash when it tries to prompt you for the commit description.

Do proper staged commits as described here. 🙂
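In practice that flow is along these lines (using this repo's script as the example; the commit message is just whatever describes your change):

git add .\Remove-SPFeature.ps1
git commit -m "Describe what changed"
git push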

I hope this maybe gets some more people PowerShelling!

Next I should learn to use Visual Studio for more app building… but I'm more of a sysadmin than a dev…

I recently took a course in resiliency, and they basically said be a tree… ok.

Branching

What is branching? Well, pretty much: try stuff without changing the source code. Backups, anyone? It's a nice way to experiment without breaking the original code, and once tested, the changes can be merged.

Unlike a tree, it's not often a branch just becomes the trunk, but whatever…

Following this guide:

To create a branch locally

You can create a branch locally as long as you have a cloned version of the repo.

From your terminal window, list the branches on your repository.

$ git branch 
* master

This output indicates there is a single branch, master, and the asterisk indicates it is currently active.

Create a new feature branch in the repository:

$ git branch <feature_branch>

Switch to the feature branch to work on it.

$ git checkout <feature_branch>

You can list the branches again with the git branch command.
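As an aside, you can create and switch in one step with the -b flag:

$ git checkout -b <feature_branch>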

Commit the change to the feature branch:

$ git add . 
$ git commit -m "adding a change from the feature branch"

Results:

Hopefully tomorrow I can cover merging. 🙂
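Until then, a hedged note: to get the new branch up to GitHub, you'd push it with -u so the upstream gets set:

$ git push -u origin <feature_branch>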

Cheers for now!

SharePoint Orphaned Content Types (ReportServer)

New Series! SharePoint Orphaned!

The only thing that should be orphaned is SharePoint itself…. ohhh, ouch.

The Story

Joking aside, our developer again came by reporting some issues with the newly developed SharePoint site I had migrated for him, on which he was testing the creation of some new SharePoint web part apps. He already had his own documentation available from when he first did this; good man. Even after we got past the "how to create a new template from a site with publishing features enabled" hurdle, we were still receiving an error.

Slow SharePoint fixed… But…

During this whole process, this new site was intermittently responding slowly. It was baffling, but as we dug through the ULS logs we found the issue: apparently the service account configured to run the web application pool did not get access to the ProfileDB for some reason. After granting that login SPDataAccess on the ProfileDB, the slow intermittent SharePoint loads were fixed… but sadly we were still receiving errors while attempting to deploy new sites from templates.

The signs were clear

Looking further in the ULS logs, the error itself complained that content types could not contain special characters…. A bit more searching pointed us towards the site's content types page….

Whooops, how did I miss this… (the ReportServer feature…)

Guess those are the "special characters"….. ugh. Even though the Test-SPContentDatabase cmdlet returned clean throughout my migration (and all my scripts I have yet to publish), I guess this one isn't picked up by the checker? Dunno… anyway, what to do about this…

The search

Source one… too complicated, but interesting… he sure worked hard. I'd go this route, but I'm sure there are easier solutions… got to be, and… yup.

Source two, simple… let's try it…

The Solution

Install the feature, disable it on all web apps deployed on the farm, then uninstall the feature. Nice and simple, and how I usually like it: letting the system do most of the heavy lifting to avoid human error.

So Step 1: Grab the Reporting Services installers (in my case, for SharePoint 2016).

Step 2: Install it;

Next, Accept the EULA, Install

Success.

This makes the content type names behave correctly.

Step 3: Enable it;

Install-SPFeature -Path "C:\Program Files\Common Files\microsoft shared\Web Server Extensions\16\TEMPLATE\FEATURES\ReportServer"

Now, the original post said to simply uninstall it afterwards, but as you can see it will error; why? Because, as it clearly states, it's still enabled, so…

Step 4: Disable the feature on all web applications

Disable-SPFeature -Identity ReportServer -Url http://spsite.domain.com
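Since Step 4 says all web applications, a rough sketch to loop it rather than typing each URL (assuming the SharePoint snap-in is loaded):

Get-SPWebApplication | ForEach-Object {
    Disable-SPFeature -Identity ReportServer -Url $_.Url -Confirm:$false
}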

Step 5: Uninstall the Feature:

Uninstall-SPFeature -Identity e8389ec7-70fd-4179-a1c4-6fcb4342d7a0

Step 6: Uninstall the package:

msiexec /uninstall rsSharePoint.msi

I recommend doing this after hours, and in a test environment first. It did seem to do an IISRESET, as all sites had to reload, and it took a lil bit for the .NET assemblies to recompile. 😀

Now go enjoy a coffee. Thx, Jussi Palo!

The Second Solution

OK, not gonna lie, I assumed it was all good, and that assumption came back to bite me in the ass…. again: never assume.

So I told my dev that I had completed the steps and that he should have no issues creating a new site from his template, but as I'm walking down the hall a short time later, he gives me the finger snap (like, it worked!) and says, "same error".

Ughhhhh… what…

So, looking back at the site's content types, the Report Model Document content type still remained… ok, what the….

So, running through the procedure again, it complained that the feature was not available for my web apps, so I re-enabled it, saw all three content types, disabled it… and Report Model was still there…. :@ C'mon! Let's just delete the content type!

Can never give me a break eh SharePoint…

Luckily my dev is super awesome and told me about another blog he had read (sorry, I don't have the source). The reason the front end refuses to let you delete the content type isn't so much that it's tied to an actual feature (even though we all know this one did come from the ReportServer feature), but rather that it simply has a flag set on it in a table…

Now, I normally never recommend making changes to any SharePoint database directly, and usually recommend making all required changes via Central Admin/psconfig, site settings, or PowerShell. However, in this case we had clearly installed the proper dependencies and deactivated the feature that populates those content types, yet they were not being removed from the content databases…

Only do this if you have tried everything else; only do this in a test environment; actually, never do this…. well, I guess if you have tried everything else, this is your only option…

This requires you to have sysadmin rights on the SQL Server instance hosting the SharePoint content databases. Open SSMS…

SELECT *
FROM WSS_CONTENTDB.[dbo].ContentTypes
WHERE Definition LIKE '%Report%'

Find the row which contains the ID for the Report Builder content type (or whichever other system-based content type you have orphaned that needs removing). It's usually easily spotted, as it'll be the only one with 1 under the IsFromFeature column.

USE WSS_CONTENTDB
GO
UPDATE dbo.ContentTypes
SET IsFromFeature = 0
WHERE ContentTypeID = *ID from above query*

Now you can go into the actual orphaned content type under Site Settings, watch the delete content type action not fail or error, and destroy that content type from your SharePoint life!

*Note* My dev came back saying "same error" again, lol, but this time we discovered we simply had to re-create the template; deploying from the freshly created template then worked (which it originally didn't before the above changes).

Happy SharePointing!

SharePoint Rest API call returns 500.50 URL rewrite error

The Story

Hey all another SharePoint Story here!

So my dev was working on another SharePoint site app. We did everything like before, and now he was getting a URL rewrite error. I wasn’t sure why this was happening, and since he generally had more experience troubleshooting these types of issues I sort of let him handle it for a while.

Well, after a while he still couldn't figure it out, and a funny thing happened: we learned some interesting things and got bit by erroneous error messages in the end. The first thing he tried was to give his rewrite rules some new variable names, which didn't help; the same error was returned.

A little while later I realized I had forgotten to set the Service Principal Names (SPNs) for the new web applications we had created for the new SharePoint sites. I was certain this was it, but we kept getting a URL rewrite error! (This, it turns out, actually was the initial reason for the error.)

I showed my dev this post by Scott on the same error. The reason we were still getting the URL rewrite error was that when he changed the variable names in his rewrite rule, he didn't change their associated server variables, as mentioned in Scott's blog.

The Answer

The only reason we got the error both times was simply a coincidence. So it turns out:

1) If you forget to set the SPN when your Web App is set for Kerberos, and your hosting app server is on another server, you will get a rewrite error even if you have everything else in place.

2) If you change variables in your rewrite rule and forget to set the associated server variables with them.

Both will result in a 500.50 URL rewrite error… who would have figured…
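For reference, registering the missing SPN looks something like this (the host name and service account here are placeholders for your own):

setspn -S HTTP/spsite.domain.com DOMAIN\svc-AppPool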

SharePoint – Invalid Field Name

The Story

Today was an interesting day. I was getting my morning coffee, dock and video cables in hand, about to help a colleague with a video issue, when my developer walked in.

I could tell something was up, as he had a bit of a "catch his breath" feel to his persona while he went about asking me how my morning was going. Sensing the tension in the conversation, I asked him what was going on. He got right to the point, and it was SharePoint related. I've had a goooood amount of SharePoint experience, having done the majority of our SharePoint site migrations to 2016; however, this time the issue revolved around the old 2010 site and server that was set up and configured before my time here.

Long story short, I took the correlation ID and searched the good old ULS logs (C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\LOGS). Nothing really stood out: a couple of access denied entries due to a secure store permission for a web part, which I knew about due to the account I was using to test, and I saw that web part error out before hitting the error page on a web part edit (a different web part that was actually working fine and failing only on edit). There are other blog posts on dealing with access denied on the secure store front, so I'll leave those out of this, as they were not of value to me. Continuing through the log, the very last match on the correlation ID brought up an exception halt with the line reading "Invalid field name." along with a bunch of inner system method calls down a usual stack trace. The stack trace wasn't really of much relevance, so I did the best thing I could: google the ULS issue about the invalid field name, and sure enough I stumbled upon a TechNet blog post by a Brendan Griffin.

Now, you can go ahead and read more on his issue and story there. He seems really hooked on "FormURN", but my case didn't have anything to do with that. He does cover more of the details as to how certain tables used for content types can somehow end up missing certain fields (ahem, columns); as we all know, you don't mess with the DB directly when dealing with SharePoint. I won't even get into the nitty gritty of the commands he used to verify the missing fields (thinking about that now, it could have made for a more interesting blog post….. oh well), since he covers the solution to recover the missing fields with two SharePoint PowerShell commands. Running those two commands in my test environment (I was quickly able to duplicate the issue there) sure beat digging endlessly through DBs for columns I'm not even sure I'm missing; and as for the logs, this was the last line in them, so it was this or nothing…

The solution

Command 1:

Disable-SPFeature -Identity "Fields" -Url RootWebURL

You will be asked if you are sure; select yes.

Command 2:

Enable-SPFeature -Identity "Fields" -Url RootWebURL

That was it; just like magic (it didn't require a reboot or even an iisreset), my dev was able to edit the web part, and I was on my way to help a user with their monitor.

I'm soon going to have some SharePoint posts about orphaned items, so stay tuned! As well as some awesome scripts to clean up old 3rd-party plugins!

Make Sure your DFSR is working!

This one is kind of interesting. I use a replicated test environment to validate things; it works great. I was using the domain's SYSVOL to quickly copy some text between member servers; however, to my amazement, I was not seeing the same contents from two different member servers, even though both of them validated their secure channel with my domain (nltest /sc_verify:domain)…

It wasn't until I checked both DCs that I noticed one member server was seeing the SYSVOL from DC1 and the other member server was seeing the SYSVOL contents from DC2.

Now, all DCs have the same SYSVOL contents, right?! So what gives?

You may have already guessed it: DFSR issues…. you know, if the title didn't give anything away…

Which led me to this nice MS support page.

The most important line from it is this…

For /f %i IN ('dsquery server -o rdn') do @echo %i && @wmic /node:"%i" /namespace:\\root\microsoftdfs path dfsrreplicatedfolderinfo WHERE replicatedfoldername='SYSVOL share' get replicationgroupname,replicatedfoldername,state

With this, my DCs reported a state of 2, Initial Sync (a healthy member reports 4, Normal), which could explain the diff I was seeing.

Which led me to this nice MS Support page. 🙂 These are usually better than most, I'll admit. I followed the steps under "How to perform an authoritative synchronization of DFSR-replicated SYSVOL (like 'D4' for FRS)"…

…until I realized that Server Core doesn't come with the DFSR management tools, even if you install the AD role… So for the most part I skipped the steps stating to run "DFSRDIAG POLLAD", 'cause it'll fail to run, as it does not exist.

Maybe someone out there is smart enough to know the answer…
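For what it's worth, a couple of hedged options for the DFSRDIAG POLLAD gap on Core (assuming Server 2012 R2 or newer, where the DFSR PowerShell module ships):

# Pull down the DFS management tools (includes dfsrdiag.exe)
Install-WindowsFeature -Name RSAT-DFS-Mgmt-Con

# Or skip dfsrdiag entirely and poll AD with the DFSR module
Update-DfsrConfigurationFromAD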

STS Security Token Service on SharePoint 2013

Today I was bringing my stepping server back up. In this case I use it to upgrade content databases from 2010 -> 2016.

Since you can't directly upgrade, and since the config data had been wiped, I was going through the config wizard to get it rebuilt. Now, the wizard will complain if the old website still exists, so for some reason I decided to remove all the old sites and app pools; I figured they would get rebuilt.

Now the wizard completed without a hitch, and I was off creating a web app and some content databases to delete, as I'd test and mount the 2010 content databases for staging.

Oddly, after I had mounted the database, I noticed the server was failing to successfully call Get-SPSite, saying that it was due to the Security Token Service. There are lots of links out there with similar issues… such as this, this, this, this and even this…. most of which are dead ends.

There's an MS support page on this as well; however, I may have accidentally deleted that app pool…

Then I stumbled across this, an MS blog post, which I find a lil more useful, usually 'cause they are more hands-on… In this case, since I was already hooped, I gave the command a try, and it ran just like his…

I wasn't sure if this was enough; then I found this and ran these commands as well…

# Grab the Security Token Service application, check it, and re-provision it
$sts = Get-SPServiceApplication | ?{$_ -match "Security"}
$sts.Status
$sts.Provision()

After a reboot, all of a sudden Get-SPSite was working again!

How to Shrink a VMDK

Hey all,

It's not often you have to shrink a VMDK file; expanding one is super easy, even on a live virtual machine. Shrinking one, however, isn't as straightforward.

This guy does a decent job giving a step-by-step tutorial, but you soon realize you can do it even faster, and without cloning…

1) Use his math to get the disk size you need to edit inside the VMDK descriptor:

The number in the descriptor file, under the heading #Extent description, after the letters RW, defines the size of the VMware virtual disk (VMDK).

In this example that number is 83886080, and it's calculated as follows:

40 GB = 40 * 1024 * 1024 * 1024 / 512 = 83886080
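So if, say, you wanted the disk to become 30 GB, the same formula gives the new RW value; a quick sketch to check the math:

# sectors = GB * 1024^3 bytes, divided by 512 bytes per sector
$targetGB = 30
$targetGB * 1024 * 1024 * 1024 / 512    # 62914560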

2) Only shrink VMDKs where you know the tail end of the disk contains no allocated blocks (shrink the guest partition down first); do this in a test environment only, and make sure you have backups.

Now, instead of cloning, simply remove the disk from the VM and re-attach it; watch its reattached size be smaller, and it matches, much like the source guy's post.

Sysprep Fatal Error

The Story

The error is fatal alright…

I was helping my buddy set up a bunch of laptops for a classroom deployment. Since all these drives were spinning disks, it was a lot faster to get Windows installed on one machine, get all the drivers, install all the updates, install all the software… then shrink the partition to the size used (using GParted in a Linux live environment), then sysprep it, then DD the drive, up to the used-space partition end point, to an NTFS-formatted drive.

After that, simply DD the .img file to each machine's /dev/sda via the Linux live environment. Since DD writes the image onto the disk sequentially, this is far faster than doing the above steps for every machine. When you boot them up they are in OOBE mode: enter a user, machine name, license key, and away you go…
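For reference, the capture and deploy commands look roughly like this (device names and sector count are placeholders; pull the real end sector from your partition table first):

# capture: image the disk from sector 0 up to the end of the shrunken partition
dd if=/dev/sda of=/mnt/usb/master.img bs=512 count=<end_sector_plus_one>

# deploy: write the image back onto a target machine's disk
dd if=/mnt/usb/master.img of=/dev/sda bs=4M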

However, I got an error…

“A fatal error occurred while trying to sysprep the machine”

How insightful. So, to Google I go… Most of the usual answers…

Issues with Generalize (I’m not generalizing this image)…

The Solution

Checking the registry for sysprep state keys (this was a brand new install, so all was clean)… at this point I remembered I had injected "SP2" into this Windows 7 install and thought maybe that might be something… but then I noticed something really simple and odd on one Microsoft Answers page…

OldMX “Load a command prompt with admin rights and type “Net Stop WMPNetworkSvc”

I didn't think it would work, but figured it was worth a shot; to my amazement, it worked!

Thanks OldMX!

*UPDATE* I had followed this guide on using a precanned Unattend.xml file.

Which resulted in the same error, but syslogerr.txt showed "Unable to deserialize unattend.xml". Turns out there was an error in my XML file from copying the precanned source content: the curly "smart quote" characters had come along in place of straight quotes.

After correcting the quotes used in the XML file, sysprep took the unattend.xml file without error.

Maybe this corrected snippet helps someone:

<?xml version="1.0" encoding="utf-8"?>
<unattend xmlns="urn:schemas-microsoft-com:unattend">
    <settings pass="specialize">
        <component name="Microsoft-Windows-Shell-Setup" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
            <CopyProfile>true</CopyProfile>
        </component>
    </settings>
</unattend>

Secure a WordPress Site with HTTPS

Intro

Well, it is slowly becoming a requirement, even for a site that simply shares content and has no portal or user information… such as my site… but I may as well do it now, since we can get trusted certificates for free!!! Wooohoooo!

So doing a bit of research….

Research

Securing WordPress

TurnKey SSL Certs

Let’s Encrypt!

Cert-Bot

TurnKey WordPress uses Debian… what version?

The Tasks

Alright, so we are running Debian 8; let's follow that Certbot tutorial….

Let's start by creating a snapshot; at this point I don't exactly have backups running yet… I know, I know… I was supposed to do Free Hypervisor Backup Part 3, where I redesign ghettoVCB's script…. unfortunately I can only do so much, and I have many projects on the go. I will get to it though, I promise!


Now with that out of the way, running Cert-Bot…
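For reference, the run itself is a one-liner, assuming the appliance's Apache stack (Certbot's Apache plugin handles the vhost edits):

certbot --apache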
Then I ran into some errors… oopsies….


What happened?!?! Well, I was working through a lot of network redesign, and my public website, the very WordPress I was trying to get a certificate for, had a NAT rule to get out to the internet, which is why the grabbing and running of Certbot succeeded up until this point. I hadn't created the NAT rule to allow inbound HTTP traffic just yet, as I was wanting to create this certificate first. Little did I know it was going to be a prerequisite. Anyway, I had to update my website's DNS record to point to my new public IP, as well as create a NAT and security rule to allow my website to be accessible from the outside world…


I had to wait a while for DNS to propagate to other servers outside, specifically whichever ones Let's Encrypt's servers use to locate and validate my requests from Certbot. So… after making the changes and waiting a while, I attempted to access my website from the internet again. It was failing, and then I realized my mistake was in the security rule I had defined; after correcting the security rule, I could access my website. Running Certbot again…

Yay! And it listed all the virtual hosts hosted by my TurnKey WordPress…

Then I created another NAT rule to handle HTTPS traffic… and then the security rule…

That was literally it! Certbot made it so easy! Yay, that's a first! 😀

Palo Alto VPN (GlobalProtect) Part 5 – Rules, Testing, Troubleshooting

Intro

In this 5-part series I covered all the requirements to configure Palo Alto Networks' GlobalProtect VPN:

1) Authentication, Auth Profiles and testing them.

2) Certificates, Cert Profiles, SSL/TLS Profiles and creating them.

3) Portals, what they do and how to configure them.

4) Gateways, what they do and how to configure them.

This part will cover the security rule required, and a few troubleshooting steps along the way.

Things not Covered

I didn’t cover creating DNS records, as again, these come down to your own DNS provider and whatever tools and portals they offer to manage those.

I don't cover configuring the interfaces (public facing or internal), nor the virtual router and routes. All of these are assumed to be handled by the administrator reading these guides.

I don't cover installing the client software; if you have the certificates installed on the client devices (required), it's simply a matter of navigating to the portal address with a supported browser and downloading the installation packages (.exe for Windows).

For giggles, I tested navigating to my portal from my phone. It did prompt me for my certificate (the VPN was working well), yet after selecting my certificate I got a connection reset error in my browser. Checking the Palo Alto firewall logs (Monitor tab -> Traffic), I indeed saw the denied traffic with a reset-both action. Why this is, I'm not sure: even though the application was identified correctly as web-browsing, and that was enabled in the rule, it wasn't being allowed by my rule and instead was being denied by my deny-all rule. However, I don't have intentions of accessing my portal web page anytime soon, so for now I'll ignore this, as I use IPSec XAuth RSA on my Android device.

I have also noticed that for some reason, with Samsung Android devices, I can't seem to get this VPN setup to work; from quick Google searches, people seem to say it's due to packet fragmentation somehow. I haven't yet had the chance to look into the nitty gritty of this issue, but when I do, it will be its own blog post!

I also don’t cover installing the completed certificates onto end devices as again this comes down to the end devices being supported by the administrator configuring Global Protect and is outside the scope of this guide.

The Security Rule

As you can tell, it's pretty simple: anyone from the internet (I could be connecting from anywhere, and my IP address changes on my phone all the time, random access points, etc.) to my public IP address which hosts my portal and gateway, with the required applications (IKE, ipsec-esp-udp, and the SSL and web-browsing applications); again, I haven't exactly figured out the portal web page loading issue just yet.

*UPDATE* Ensure you add the panos-global-protect application type, else only the IPSec XAuth RSA connection will succeed, as that one does not rely on the GlobalProtect portals.

Failure to add panos-global-protect results in the end client getting a "No Network Available" error in the GlobalProtect app.

My Phone Config

In my case I do run an Android phone, running 8.0.0, kernel 4.4.78.

The OS build is some H93320g; I couldn't find much but this about it.

For the most part, I installed both my Offline Root CA and my Sub CA certificates on my phone. These can be found under General -> Lock Screen & Security -> Encryption & Credentials -> Trusted Credentials (instead of "CAs", who knows?) -> User (both should be listed here).

Then I installed the user certificate with the private key, which then shows up under General -> Lock Screen & Security -> Encryption & Credentials -> User Credentials (instead of "User Certificates"?). The other annoying part is that once you have the certificate installed, this area doesn't let you see the certificate details; you can see them under the area mentioned above, but this area…. nope.. :@

Once the certificates are installed, it simply comes down to configuring the VPN settings (Settings -> Network -> VPN -> Basic VPN -> click the plus in the upper right-hand corner), then:

Name: Give it a meaningful name

Type: IPSec XAuth RSA

Server Address: The Address defined in Part 3 -> Agents -> External Gateways

IPSec User Cert: The User Certificate you installed and verified above

IPSec CA Certificate: Don’t verify server (Which is probably why I didn’t need the above server address in the gateway certs as a SAN)

IPSec Server Certificate: Receive from server

Then enter a username and password for a user you defined to be allowed per your Authentication Profile you created in Part 1.

You shouldn't have to define the advanced settings, as those should be delivered to the client from the gateway config we created in Part 4.

Summary

If done correctly, you should have a successful connection, and you should be able to see all the parts play out in both the traffic logs and the system logs…

System:

Traffic:

That is pretty much it. If you have a failed connection, do the usual step-by-step troubleshooting, starting with connectivity; you should be able to see the access attempt from the device in the traffic logs, and if it is being blocked by rules, adjust them accordingly.

If you've verified all other things, it may be your chain; if you are enabling extra security, like verifying the server certificate, then your chain would have to be different than presented here: probably all certificates, including the portal and gateway certs, signed entirely by the Sub CA, so that all certs are trusted by all devices. I'll admit this isn't the cleanest setup, but it's the closest to a bare-minimum install of GlobalProtect using your own internal PKI.

I hope this guide helps someone. 😀