Upgrade From PA-220 To PA-440

Step 1) Get a PA-440 from your reseller.

Step 2) Power on the PA-440.

Step 3) Connect a Micro-USB cable to the console port, and the USB-A end into the workstation of your choice, running the OS of your choice. I will be using an HP laptop with Windows 11. Serial settings:

  • Baud Rate: 9600
  • Data Bits: 8
  • Parity: None
  • Stop Bits: 1
  • Flow Control: None

Log in as admin:admin and change the password.

Step 4) Disable ZTP. Unless you are working with a consultant or an advanced VAR, you probably won’t be using ZTP (Zero Touch Provisioning), and leaving it enabled prevents us from configuring a static IP address on the MGMT port.

> set system ztp disable

Now wait for the firewall to reboot.

Step 5) Configure a static IP for the PA-440 MGMT port:

> configure
# set deviceconfig system type static ip-address <IP_ADDRESS> netmask <NETMASK> default-gateway <DEFAULT_GATEWAY>
# commit

At this point you can plug a network cable into the MGMT port and into a switch in your network stack that allows it to reach the internet and whatever devices are on the same subnet.

Step 6) Adjust any existing firewall rules to allow the MGMT port to access the internet: primarily the “paloalto-updates” application type, if you are already using a PA-series firewall and want to be really strict with the rules.
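
If you're curious what that rule looks like in PAN-OS CLI terms, here's a rough sketch; the rule name and the "MGMT" / "Untrust" zone names are assumptions for your topology, so adjust to match yours:

> configure
# set rulebase security rules Allow-PAN-Updates from MGMT to Untrust source any destination any application paloalto-updates service application-default action allow
# commit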

Step 7) Register the device with your account on the Palo Alto Networks support portal. This is required when using the “grab licenses from online servers” option in the firewall. If you are using the device in an offline fashion, you will need to use activation codes instead, which is outside the scope of this blog.

Step 8) Activate the PA-440 by checking online for licenses.

Congrats, we’ve got the first basic deployment steps done for the PA-440, and we can now manage it via the web interface on the MGMT port. Next we’ll export the config from the PA-220 and import it into the PA-440.

Step 9) Export the existing config from the PA-220.

Device -> Setup -> Operations -> Save named configuration snapshot -> name it

Device -> Setup -> Operations -> Export named configuration snapshot -> the one named above

Step 10) On the PA-440, import the config.

Device -> Setup -> Operations -> Import named configuration snapshot -> the one exported above

Device -> Setup -> Operations -> Load named configuration snapshot -> the one imported above

In my case I had a URL security definition that was causing a validation fault, so I had to check for new Apps and Threats packages and apply the latest one.

This most likely happened because my exported config referenced a newer Apps and Threats definition than what the new firewall had available.

After this the commit validated without issue.
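
Side note: you can also pull the latest content package from the CLI instead of the GUI. Something along these lines should do it (verify against your PAN-OS version):

> request content upgrade check
> request content upgrade download latest
> request content upgrade install version latest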

Step 11) Use auth codes to activate all features.

Step 12) Commit.

Step 13) Power off the PA-220 and replace it with the PA-440. Plug the network cables in one-for-one; since both models have 8 ports, it’s a direct in-place swap.

Now that I’ve got a PA-440 with all the bells and whistles, stay tuned for more Palo Alto Networks tutorials. I’ll review what I’ve covered in the past on my website and attempt to avoid duplicates; if I find any, I’ll update those posts, otherwise I’ll create new ones for new deployments.

Hope this helps someone.

VMware Changes Update URLs

If you run a home lab, or manage systems for companies, you may have noticed updates not working in VAMI… something like…. ohhh I dunno.. this:

Check the URL and try again.

Unable to patch the vCenter via VAMI as it fails to download the updates from Broadcom public repositories

Cause

Public-facing repository URLs and authentication mechanisms are changing. Download URLs are no longer common to everyone but unique for each customer, and will therefore need to be re-configured.

Well… wow thank you Broadcom for being so… amazing.

If you want to be overly confused about the whole thing, you can read this KB: Authenticated Download Configuration Update Script

As the original link I shared above says, all you have to do is log in to the Broadcom support portal, get a token, and edit the URL…. but….

Notes:

    • The custom URL is not preserved post migration upgrade, FBBR restore, and VCHA failover
    • If there is a proxy device configured between vCenter and the internet, ensure it is configured to allow communications to the new URL
    • Further patches automatically update this URL. For example, if 8.0.3.00400 is patched to 8.0.3.00500, the default URL will change to end in 8.0.3.00500.

Looks like this was enforced just a couple days ago … Sooooo, happy patching?   ¯\_(ツ)_/¯

Permission to perform this operation was denied. NoPermission.message.format

For anyone who may use my site as a source of informational references, I do apologize for the following:

  1. My site cert expiring. ACME is great, I’m just a bit upset they refuse to announce their HTTP auth sources, so I can’t create a security rule for them. Right now the rule would be restricted to app type, which, while not bad, is not good enough, so I manually have to allow the traffic for the cert to be renewed.

    No… I have no interest in allowing ACME access to my DNS for DNS auth.

  2. Site was down for 24 hours. If anyone noticed at all: yes, my site was down for over 24 hours. This was due to a power outage that lasted over 12 hours after a storm hit. No UPS could have saved me from this, though one is in the works, even after project “STFU” has completed.

    No, I have no interest in clouding my site.

I have a couple blog post ideas roaming around, I’m just having a hard time finding the motivation.

Anyway, if you get “Permission to perform this operation was denied. NoPermission.message.format” while attempting to move an ESXi host into a vCenter cluster, chances are you have an orphaned vCLS VM.

If so, log into VAMI and restart the ESX Agent Manager (EAM) service.
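
If you prefer SSH over the VAMI UI, the same restart can be done from the vCenter appliance shell (vmware-eam is the EAM service name on recent VCSA builds):

service-control --stop vmware-eam
service-control --start vmware-eam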

After restarting that service, everything should be hunky-dory…

Cheers.

Update Veeam 12.3

Grab Update file from Veeam.

Step 1) Sign in to Veeam portal

I didn’t have a paid product license, so my download section was full of free trial links. Since I’m using CE (Community Edition), I grabbed it from here: Free Backup Software For Windows, VMware, & More – Veeam

Step 2) Download the ISO; it’s a doozy at 13 GB.

Step 3) Read the update notes for any expected issues/outcomes.

For all the FAQs go here: Veeam Upgrade FAQs

For basic System Requirements and release notes see here: Veeam Backup & Replication 12.3 Release Notes

The main thing will be the change of the database service, moving from MS SQL Express to PostgreSQL, though it’s not directly mentioned from what I can see, other than in step 8 of the upgrade path: Upgrading to Veeam Backup & Replication 12.3 – User Guide for VMware vSphere

Step 4) Attach the ISO to the server being upgraded or installed on

In my case it’s a 12.1-based server, and it’s a VM, so I just attach the ISO via VMRC.

Step 5) Run the Installer

Make sure you stop any “continuous” jobs, and close the B&R Console.

Double-click Setup.exe in the mounted ISO’s root directory.

If you haven’t guessed it, click Upgrade. It’s nice to see coding done right: the installer does a check, knows it’s an existing Veeam server, and so the only option is to upgrade.

In my case I again only have one option to choose from.

How long we wait is based on the Matrix. Looking at the VM resource usage on my machine, it looks like the installer reads installation files from the ISO and writes them somewhere to disk; my setup only yielded about 40 MB/s and took roughly 8 minutes.

Agree to the EULA.

Upgrade the server. Here you have a checkbox to update remote components automatically (such as Veeam proxies). In my lab the setup is very simple, so I have none; I just click Next.

License upgrade: (I’ll try not selecting this since it’s CE… nope, the wizard wouldn’t let me for CE, shucks hahah)

Service account, Local System (recommended). I left this default, next.

Here’s the OG MS SQL instance:

… yes?

For the Veeam Hunter service… ignore (Shrug)

Free space… it needs more than 40 gigs… holy moly….

43.1 GB required, 41 GB available. Unreal, guess I’ll extend the drive; that’s the great part of running VMs. 🙂

Finally! Let’s gooooo! And sure enough, first step.. here comes the new SQL instance. This is probably why it requires over 40 gigs to do the install: to migrate the database from MS SQL to Postgres…. I wonder if the space will be reclaimed by removal of the MS SQL Express instance….

Roughly half an hour later…

Mhmmm, checking the services I see the original MS SQL instance is still there, running. I see a postgres service.. not running… uhhhh mhmmm…

All Veeam services are running, open the Veeam B&R console, connect, and yup it opens. The upgrade component wizard automatically opened, and it updated the only item.. itself.

*UPDATE* There’s a patch for the latest CVE (a 9.9) if you have a domain-joined Veeam server:

KB4724: CVE-2025-23120

*thumbs up* It’s another 8 gig btw…

Installing Core Linux

Installing TC-Linux (Core Only)

Sources

Source: wiki:install_hd – Tiny Core Linux Wiki

On an ESXi VM: wiki:vmware_installation – Tiny Core Linux Wiki

FAQs: http://www.tinycorelinux.net/faq.html

Setting up VM

VM Type: Other Linux 32bit kernel 4.x
CPU: 1
Mem: 256 MB
HDD: 20 Gig
Network: DHCP + Internet Access

Change boot to BIOS (instead of EFI).

Booting and Installing Core Linux

Attach the ISO and boot. Core Linux boots automatically from the ISO:

For some reason the source doesn’t tell you what to do next. Typing tc-install just gets the console to say it doesn’t know what you are talking about:

AI Chat was kind enough to help me out here, and told me I had to run:

tce-load -wi tc-install

Which required an internet connection:

However, even after this, attempting to run it gave the same error.. mhmm. Using the find command I found it, but it needs to be run as root, so:

sudo su
/tmp/tcloop/tc-install/usr/local/bin/tc-install.sh

C for install from CD-ROM:

Let’s keep things frugal around here:

1 for the whole disk:

y, we want a bootloader (it’s extlinux btw, located at /mnt/sda1/boot/extlinux/extlinux.conf):

Press Enter again to bypass “Install Extensions from..”

3 for ext4:

Like the install source guide says, add the boot options for the HDD (opt=sda1 home=sda1 tce=sda1):

Last chance… (Dooo it!) y:

Congrats… you installed TC-Linux:

Once rebooted, the partition and disk-free output will look different. Before reboot, running from memory:

After reboot:

Installing OpenSSH?

tce-load -wi openssh

This is where things got a little weird. Installing an app… “not as root”, TC-Linux says…

This is when things got a bit annoying and weird. Even though the guide says using -wi installs it in the on-boot section, I found it wasn’t loading on boot. Well, at first I noticed it didn’t start at all after install, as I couldn’t SSH in; this was because of a missing config file…

Even once I got it running, it still wouldn’t run at boot, and that was apparently because the file disappeared after reboot. This is because the system mostly runs entirely in RAM. If you didn’t notice: even after install, the root filesystem was still only roughly 200 MB in size (enough to fit into the RAM we configured for this VM).

Notice the no password on the tc account? Set it, reboot… doesn’t stick…

Notice the auto-login on tty1? Attempt to disable it.. doesn’t stick…

Configuring Core Linux

Long story short, you apparently have to define which paths are to be considered persistent via a file (a sample is sketched after the checklist below):

/opt/.filetool.lst

These files are saved to mydata.gz via the command:

filetool.sh -b

So here’s what we have to do:

  1. Configure the system to ensure settings we configure stay persistent across reboots.
  2. Change the tc account password.
  3. Disable auto login on TTY1.
  4. Configure Static IP address.
  5. Install OpenSSH and have it run on boot.
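
For reference, here is roughly what my /opt/.filetool.lst ends up looking like by the end of this post. Note the paths are relative, with no leading slash (the stock file already lists opt and home):

opt
home
etc/passwd
etc/shadow
usr/local/etc/ssh/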

Changing TC Password

Step 1) Edit /opt/.filetool.lst (use vi as root)
– add etc/passwd and etc/shadow

Step 2) run:

filetool.sh -b

Step 3) run

passwd tc

Step 4) run

filetool.sh -b

Now reboot. You may not notice that it applied, due to the auto-login; however, if you type exit to get back to the actual login banner and then type tc, you will be prompted for the password you just set. Now we can move on to the next step, which is to disable the auto-login.

Disable Auto-Login

Step 1) Run

sudo su
echo 'echo "booting" > /etc/sysconfig/noautologin' >> /opt/bootsync.sh

Step 2) Run

filetool.sh -b
reboot

K, on to the next fun task… static IP…

Static IP Address

For some reason AI said I had to create a script that runs the manual steps… not sure if this is the proper way… I looked all over the Wiki (wiki:start – Tiny Core Linux Wiki) and couldn’t find anything.. I know this works, so we’ll just do it this way:

Step 1)  Run:

echo "ifconfig eth0 192.168.0.69 netmask 255.255.255.0 up" > /opt/eth0.sh
echo "route add default gw 192.168.0.1" >> /opt/eth0.sh
echo 'echo "nameserver 192.168.0.7" > /etc/resolv.conf' >> /opt/eth0.sh
chmod +x /opt/eth0.sh
echo "/opt/eth0.sh" >> /opt/bootlocal.sh
filetool.sh -b

Step 2) Reboot to apply and verify.

What about SSH?!

Oh right.. we got it installed but we never got it running did we?!

Step 1) Run:

cp /usr/local/etc/ssh/sshd_config.orig /usr/local/etc/ssh/sshd_config
vi /usr/local/etc/ssh/sshd_config

Edit it, uncommenting and setting:
Port 22
ListenAddress 0.0.0.0
PasswordAuthentication yes

Step 2) Run:

echo "usr/local/etc/ssh/" >> /opt/.filetool.lst
echo "/usr/local/etc/init.d/openssh start" >> /opt/bootlocal.sh
filetool.sh -b
reboot

Congrats, you got OpenSSH working on TC-Linux.

Hostname

On most systems you just run the hostname command… ooooeee, not so easy on TC-Linux.

Option 1 (Clean)

Edit the first line of /opt/bootsync.sh which sets the hostname.

Then just run filetool.sh -b, done.
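
For reference, the stock /opt/bootsync.sh starts something like this (“box” is the default hostname; swap in your own):

#!/bin/sh
# put other system startup commands here
/usr/bin/sethostname box
/opt/bootlocal.sh &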

Option 2 (Dirty)

To ensure the hostname persists across reboots, you need to modify the /etc/sysconfig/hostname file:

  1. Edit the hostname configuration file:
    sudo vi /etc/sysconfig/hostname
    
  2. Add or modify the line to include your desired hostname:
    your_new_hostname
    
  3. Save and close the file.
  4. Add /etc/sysconfig/hostname to the persistence list:
    echo "etc/sysconfig/hostname" >> /opt/.filetool.lst
    echo "hostname $(cat /etc/sysconfig/hostname)" >> /opt/bootlocal.sh
  5. Save the configuration and reboot:
    filetool.sh -b
    reboot

That’s it for now, next blog post we’ll get to installing other goodies!

Managing Apps

Installing Apps

As you can see, it’s mostly running:

tce-load -wi

For all the details, see their page on this, or run it with -h.

Source of app (x86): repo.tinycorelinux.net/15.x/x86/tcz/

For the most part it’s: install the app, edit files as needed, add the edited files to /opt/.filetool.lst, run the backup command, test the service, and edit /opt/bootlocal.sh with the commands needed to get the app/service running. Run filetool.sh -b again and Bob’s your uncle.
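
Put together, the generic flow looks something like this; the app name and its config/init paths are hypothetical here, so substitute whatever the extension actually ships:

tce-load -wi someapp                                 # install; -wi also adds it to onboot.lst
vi /usr/local/etc/someapp/someapp.conf               # hypothetical config path, edit as needed
echo "usr/local/etc/someapp/" >> /opt/.filetool.lst  # keep the edits persistent
echo "/usr/local/etc/init.d/someapp start" >> /opt/bootlocal.sh  # hypothetical init script
filetool.sh -b                                       # back it all up to mydata.gz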

Deleting Apps

To remove a package on Tiny Core Linux that was installed using tce-load, here’s what you can do (see the consolidated sketch after this list):

  1. For extensions in the onboot.lst file:
    • First, remove the package name from the /etc/sysconfig/tcedir/onboot.lst file to prevent it from being loaded at boot. You can edit the file with:
      sudo vi /etc/sysconfig/tcedir/onboot.lst
    • Delete the entry corresponding to the package you wish to remove, then save and exit.
  2. Delete the extension file:
    • Navigate to the directory where the extensions are stored:
      cd /etc/sysconfig/tcedir/optional
    • Remove the .tcz file associated with the package:
      sudo rm package-name.tcz
  3. Clean up dependency files (optional):
    • To clean up leftover dependency files related to the removed package, check for and delete them from the same directory (/etc/sysconfig/tcedir/optional).
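
Condensed into commands, with “someapp” as a hypothetical package name, the removal looks like:

sudo vi /etc/sysconfig/tcedir/onboot.lst                    # delete the someapp.tcz line
sudo rm /etc/sysconfig/tcedir/optional/someapp.tcz          # remove the extension itself
sudo rm /etc/sysconfig/tcedir/optional/someapp.tcz.dep      # leftover dependency list, if present
sudo rm /etc/sysconfig/tcedir/optional/someapp.tcz.md5.txt  # its checksum file, if present
reboot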

Retro PC, NO IDE based CDROM in Windows 98/ME

So, I decided to boot up my old retro PC. To my dismay, when I booted my Windows 98 or ME instances, I noticed that the CD-ROM was not showing up. It would show up in the BIOS, and if I booted MS-DOS 6.22, MSCDEX and all the config work I had done there still worked (so I knew it wasn’t a hardware issue).

The two OSes would boot just fine, and no matter how much I played with the BIOS configuration for the IDE channels, the result was the same.

I knew I had the proper drivers on those OSes because I had everything working previously, so I was a bit stumped at this point. I was about to give up, but I really wanted to play some Road Rash.. I checked many threads on the matter, and most of them simply stated to delete the IDE device and let Windows reinstall it at next boot; whether I used the standard Microsoft drivers or the NVIDIA nForce drivers, the result was the same: NO CDROM.

I then found this thread, and the final answer at the end actually worked…

“Troubleshooting MS-DOS Compatibility Mode on Hard Disks (Q130179)

It was under:
Resolution…..
4…..
d. Check for the NOIDE value in the registry under:
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\VxD\IOS
The NOIDE value is placed in the registry when the protected-mode driver for the IDE Controller is not properly initialized.

For additional information about how to troubleshoot NOIDE, click the article number below to view the article in the Microsoft Knowledge Base:

Q151911 MS-DOS Compatibility Mode Problems with PCI IDE Controllers
===============================

It had me delete the NOIDE value and reboot. ”
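
If you’d rather script the fix than click through REGEDIT, a .reg file like this should do the same on Win9x (a sketch; the "=-" syntax is what deletes the value):

REGEDIT4

[HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\VxD\IOS]
"NOIDE"=-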

Like…. what? I’ve never seen this before; what a weird problem and solution. I love how there are so many forums and threads alive today for even such old OSes. Gotta love retro gaming. 🙂

Veeam VM Restore failed: Cannot apply encryption policy. You must set the default key provider.

So, in my lab, vCenter went completely POOOOOF, and I installed it fresh.

After vCenter was installed, I updated my Veeam configuration to ensure my backup chains wouldn’t break (which still works great, by the way).

One VM was missing from my vSphere inventory, so I went to restore it, when all of a sudden:

I remembered my post about configuring a Native Key Provider, because one is required to have a vTPM. So I thought: is this a “PC Load Letter” problem, and it’s actually just complaining that I didn’t configure an NKP for it to “apply encryption policy”?

Follow the same old steps to configure an NKP:

  • Log in to the vSphere Client:
    • Open the vSphere Client and log in with your credentials.
  • Navigate to Key Providers:
    • Select the vCenter Server instance.
    • Click on the Configure tab.
    • Under Security, click on Key Providers.
  • Add a Native Key Provider:
    • Click on Add.
    • Select Add Native Key Provider.
    • Enter a name for the Native Key Provider.
    • If you want to use hosts with TPM 2.0, select the option Use key provider only with TPM protected ESXi hosts.
  • Complete the Setup:
    • Click Add Key Provider.
    • Wait for the process to complete. It might take a few minutes for the key provider to be available on all hosts.
  • Backup the Native Key Provider:
    • After adding the Native Key Provider, you must back it up.
    • Click on the Native Key Provider you just created.
    • Click Backup.
    • Save the backup file and password in a secure location.

Once I did all that…

No way, that actually worked. But will it boot? Well, it def “booted”, but it asked for the BitLocker key (which makes sense, since we created a new TPM and it doesn’t have the old keys). I checked my AD, and sadly enough, for some reason it didn’t have any BitLocker keys saved for this AD object/VM.

Guess this one is a loss, and a lesson in the importance of saving your encryption keys.

Careful Cloning ESXi Hosts

I’ll keep this post short. I was doing some ESXi host deployments in my home lab, and I noticed that when I would install onto a 120GB SSD, the install would go smoothly, but I wasn’t able to use any of the remaining storage as a datastore. However, if I took a fresh install of ESXi made onto an 8GB USB stick and DD’d it to the 120GB SSD, I got several advantages:

  1. When done via a USB3 pipe from a Linux live disc holding a copy of my base image, I could get speeds in excess of 100 MB/s, and with only 8GB of data to transfer, the “install” would complete in a mere 90 seconds.
  2. The IP address and root password come preconfigured to what I already know, and I can simply change the IP address from the DCUI and call it a day.

Using this method I could have a host up in less than 5 minutes (2 minutes to boot the Linux live disc, 90 seconds to install the base ESXi OS image, and 2 more to boot ESXi). This was of course on machines without ECC and all the server hardware firmware jazz… in those cases install times are always longer. Anyway…

This was an amazing option, until I noticed something after I connected a machine I had just deployed and changed its IP address. (I’m super anal about networking during these types of projects/operations.) My ping to another machine (a completely different IP address) started to drop when the new device came up… and after a while the ping responses would come back, but drop from the new host instead, and vice versa; flip and flop it goes. I’m used to this when there’s an IP conflict and two devices have the same IP address, but in this case they were different IP addresses… After enough symptom gathering and logical deduction, I had to assume the MAC addresses must be the same: the same problem in reverse (different IPs but the same MAC), and as such the same symptoms.

To validate this, I simply deployed my image to a new machine, then went on the hunt to figure out how to see the MAC address. Since I couldn’t plug in the NIC and get to the web-based MGMT interface, I had to figure out how to do that via the console CLI directly… mhmm. After enough googling on my phone, I found this Spiceworks thread with my answer:

vim-cmd hostsvc/net/info | grep "mac ="

I then checked this against the ESXi host that I saw the flip-flopping with, and sure enough they matched… After doing a fresh install, I noticed that the first 3 sections of the vmk MAC match the physical MAC, but my DD-deployed hosts retain the MAC of the system the image was originally installed on; so when I ran the command above, I could tell which ones were deployed via my method. This was further mentioned in this reddit thread by a commenter who goes by the name of sryan2K1:

“The physical NIC MACs are never used. vmk ports, along with VMs themselves will all use VMWare’s OUI as the first half of the address on the wire.”

OK, now maybe I can still salvage my deployment method by simply deleting and recreating the VMK after deployment, but I’d guess it’s best done via the DCUI or the direct console… I found one KB by VMware/Broadcom, but it gave a 404; luckily, there was a Wayback Machine link for it here.

Which states the following:

“During Initial Installation and DCUI, ESXi management interface (default vmk0) is created during installation.

The MAC address assigned will be the primary active physical NIC (pnic) associated.

If the associated vmnic is modified with the management interface vmkernel will once again assign MAC address of the associated physical NIC.

To create a VMkernel port and attach it to a portgroup on a Standard vSwitch, run these commands:

esxcli network ip interface add --interface-name=vmkX --portgroup-name=portgroup
esxcli network ip interface ipv4 set --interface-name=vmkX --ipv4=ipaddress --netmask=netmask --type=static"

Alternatively, you can also use esxcli to create the management interface vmkernel on the VDS.

Creation of the management interface with the ‘esxcli network’ will generate a VMware Universally Unique address instead of the pnic MAC address.

It is recommended to use the esxcli network IP interface method to create the management interface and not use DCUI.

Workarounds:               None

Additional Information:
Using DCUI to remove vmnic binding from management vmkernel or any modification will apply change at vSwitch level. Management interface is associated with propagating the change to any port groups within the vSwitch level.

Impact/Risks:                None.”

I’m assuming it means that if you use the DCUI to reconfigure the MGMT interface settings, the MAC will automatically be reconfigured to match what I found during the initial clean install, and what was mentioned in the reddit thread: the VMK deriving its MAC from the physical NIC’s first 3 sections.

But what if you don’t have any additional interfaces to use to make that change in the DCUI and have it actually happen? Because from what I’ve noticed, changing the IP address, disabling IPv6, and rebooting did not change the VMK’s MAC address. Oh, there’s an option in the DCUI, “Reset Network Settings”; within there, there are several options, and I simply picked reset to factory defaults. It said success, I checked the MAC via the first command stated above, and bam, the VMK NIC changed to what it should be! Sweet, my deployment method is still viable.
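
If you’d rather script it than use “Reset Network Settings”, here’s a hedged esxcli sketch of deleting and recreating vmk0, run from the host’s physical console (never over SSH on the vmk you’re removing); the portgroup name assumes a default standard-switch install, and the IP values are examples:

esxcli network ip interface remove --interface-name=vmk0
esxcli network ip interface add --interface-name=vmk0 --portgroup-name="Management Network"
esxcli network ip interface ipv4 set --interface-name=vmk0 --ipv4=192.168.0.50 --netmask=255.255.255.0 --type=static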

Hope this helps someone.

The virtual machine must be encrypted

Sooo, I lost a VM in my fray of re-organizing my server farm. Like a lost pup, I figured I’d just rely on my good old Veeam backup sets. Recover VM, alright here we goo….

What.. what does that mean…. Oh wait, is this because of when I blogged about adding vTPMs to VMs?

I re-checked the linked video from VMware… 2 minutes in… “Failure to save your key backup will result in unrecoverable data loss”…. mhmmm, OK. I thought all I did was add a TPM device to my VM and enable secure boot; that’s the deal here?

Somewhere I read that the VM config files get encrypted, but I don’t think that’s the case here either. Even checking the prerequisites from VMware, I can’t see anything noting this:

Prerequisites

  • Ensure that your vSphere environment is configured with a key provider. See the following for more information:
    • Configuring vSphere Trust Authority
    • Configuring and Managing a Standard Key Provider
    • Configuring and Managing vSphere Native Key Provider
  • Ensure that host encryption mode is enabled. See Enable Host Encryption Mode Explicitly.
  • The guest OS you use can be Windows Server 2008 and later, Windows 7 and later, or Linux.
  • The ESXi hosts running in your environment must be ESXi 6.7 or later (Windows guest OS), or 7.0 Update 2 (Linux guest OS).
  • The virtual machine must use EFI firmware.
  • Verify that you have the required privileges:
    • Cryptographic operations.Clone
    • Cryptographic operations.Encrypt
    • Cryptographic operations.Encrypt new
    • Cryptographic operations.Migrate
    • Cryptographic operations.Register VM

What I think is happening here is that my NKP, which IS a prerequisite, went poof (the vCenter server that was used to create it is shut down and not being used), and another temp vCenter is being used.

My first thought was maybe I could just add a new NKP and go, as I figured the TPM module that’s installed simply needs one, and I think it’s this hardware that’s faulting the boot.

I didn’t want to muck with the original I had just recovered, so I tried to clone it, but the clone failed too, complaining about encryption before adding a TPM, further validating my assumption. What I don’t understand is how the VM was allowed to be created from backup in the first place if I can’t even clone it…?

Anyway, since I know recovery is possible (I just did it), I guess maybe I can just remove it? Or I could also create a new VM and use vmkfstools to clone the HDD… let’s try that first…
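
For the vmkfstools route, the clone itself is a one-liner from the ESXi shell; the datastore and VM names here are examples, and the target directory must already exist:

vmkfstools -i /vmfs/volumes/datastore1/RestoredVM/RestoredVM.vmdk /vmfs/volumes/datastore1/NewVM/NewVM.vmdk -d thin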

Go to boot the VM… well, we got past that error, but the machine was BitLockered. I was hoping it wasn’t going to be.. go to the AD server, open ADUC… no BitLocker tab… ughhhh…

ADUC Missing BitLocker Recovery Tab in 1809 – Microsoft Community

Right, but where is that on a server? Oh, it’s in Server Manager; it moved…

Yay, there’s the BitLocker tab and… it’s empty.. man, give me a fucking break… so now I have a bunch of backups that are useless because I lost the BitLocker key… shiiiiiiit

Well, I don’t have anything to follow up on here, but a lesson learned: back up your BitLocker key (I don’t know why it wasn’t saved to the AD computer object).

Clearing up Space on an Exchange Server

I wanted to migrate an old Exchange server, but its size was way more than I ever expected… so I dusted off my old script…

GitHub – Zewwy/Manage-ExchangeLogs: Script to help ease in managing Exchange logs

The inetpub logs ended up being over 50 gigs, ETL logs nearly 5 gigs, and another 5 on the next. All together that cut out over 60 gigs of data, but lots remained.

I then found an OWA prem folder with lots of data, and found the usual blogger covering its removal here: Remove old Exchange OWA files to free up disk space – ALI TAJRAN

“To know which Exchange build number folders we can remove, we have to find which Exchange versions are running in the organization.

$ExchangeServers = Get-ExchangeServer | Sort-Object Name
ForEach ($Server in $ExchangeServers) {
    Invoke-Command -ComputerName $Server.Name -ScriptBlock { Get-Command Exsetup.exe | ForEach-Object { $_.FileVersionInfo } }
}

After running the above script in Exchange Management Shell, look at the Exchange build number.”

Pretty much run that command and delete all the folders except the one the system is currently on.

Just like this blogger, I saved another 10 gigs off that folder alone, but the server was still hefty. I checked the Exchange DB folder path and found endless log files (guess I have to extend my script). I ended up writing a one-liner to grab all the log files and clear them; that was almost another 50 gigs in space saved. We are well over 100 gigs in space saved/deleted, but there’s still some heft. Checking the Exchange client mailbox DB, it’s only 6 gigs (and I’ll see if I can save space there, but overall it’s peanuts compared to all the space being used).

Next I found the WinSxS folder taking up space. I followed the steps from this blog: What to Do If the Windows Folder Is Too Big on Windows 10? (diskpart.com)

I had already run Disk Cleanup, including system files. I ran the DISM commands specified as well, but that only brought the 15 gigs of space it was using down to roughly 7 gigs; half is not bad, but I was hoping it could do better.

Man, I wonder if I can just delete it? No! says Microsoft:

“Don’t delete the WinSxS folder, you can instead reduce the size of the WinSxS folder using tools built into Windows. For more information about the WinSxS folder, see Manage the Component Store.”

Following that MS KB pretty much runs the same commands as the blog.

I’ll live with that for now. I logged into the mailbox and deleted everything I could, and I should have no other mailboxes… wonder what’s got the Exchange DB up to 6 gigs of used space?

I followed another one of Ali’s Blogs: Get mailbox database size and white space – ALI TAJRAN

But that just told me what I had already found out looking at the base file system. He links to another one of his posts with a script he had written. I checked the script… clean use of a case statement… well done, bud 🙂

Anyway, I pipe the mailbox selection into a stats cmdlet and then format the whole list:

(Get-Mailbox -ResultSize Unlimited) | Get-MailboxStatistics | Select *Name,*Count,*Size | FL

This shows all mail item sizes, plus all message and attachment sizes. For me, all mailboxes were tiny, and I can get rid of several of them as well, but I don’t think that will change the size on disk…

I got rid of everything down to 2 mailboxes, and with 2-gig quotas each there’s no way it could be over 4 gigs, yet it’s still the 6-7 gigs noted earlier.. mhmmm…

So in his blog he basically creates a new mailbox DB via the EAC (or PowerShell), moves the mailboxes, and deletes the old DB. OK, I can do that…

So, the old DB as seen here:

Create a new folder path for the new DB file (or use a new disk, whatever ya want):

Create the new DB:

New-MailboxDatabase -Name "DB02" -Server "EX01" -EdbFilePath "C:\DBPath\DB02.edb" -LogFolderPath "C:\LogPath\DB02"

Restart the services.. wait, where are my new DB and files? Oh, I forgot to mount it (Mount-Database "DB02"):

Alright… time to move the mailboxes…

Huh, I was hoping for a progress display, but I guess it makes sense to throw the job into the background so it isn’t interrupted by signing out or closing the console. Checking Resource Monitor… starts chanting… Oh ya, Go Beanus, Go Beanus!!

Looks like I/O settled… and…

Get-MoveRequest

Completed, nice… size? Less than 300 megs, baby.

K, just need to unmount and delete the old DB. What a dink, he knew it would fail, but I followed his other blog post here:

Cannot delete mailbox database in Exchange Server – ALI TAJRAN

and after running the Remove-MailboxDatabase cmdlet it still told me I had to manually delete the file, so I did… I finally got the server down to roughly 30 gigs… not bad, but I really don’t like that 7-gig SxS folder…

I even cleaned out the SoftwareDistribution folder (the Windows Update cache).

Hope this helps someone, time to hole punch this vmdk and migrate this server.

*Update* The hole punch didn’t work. Why? Because I forgot to run sdelete.
*WARNING* I tried to run sdelete on the VM while it was thin-provisioned on a datastore that didn’t have enough storage to fill the whole drive; as such, the VM errored out with “there is no more free space on disk”.

It’s like the old adage goes: things gotta get dirty before they get clean. In this case the drive has to be completely filled (with zeros) before it can be hole punched. Make sure the VM resides on a datastore with enough actual space for the drive to be completely filled.

*Update #2* Seems I went down a further rabbit hole than I wanted this to go. Unlike my post about hole punching a Linux VM, which was pretty easy and straightforward, this one had a couple extra problems up its sleeve.

Problem #1 – Clearing Up Storage Space When Extending Is Not Possible.

This is literally what this whole blog post was about: read it and clear whatever space you can. If you have a physical Exchange server, you’re probably done, and all your backups would probably be agent-based at this point.

However, if you’re virtualized (specifically with VMware/ESXi), then you have some more steps to do: the “hole punching”.

Problem #2 – Hole Punched Size Doesn’t Match OS Disk Usage

This is where I want to extend this blog post, because while the solution seems simple and straightforward, each step has its own caveats and issues to overcome. Let’s discuss these now…

  1. You have to “zero the empty space” before VMware’s tools can properly complete the hole punch. This is only an issue if you happen to be over-provisioning the datastore. If so:
  2. At this point it’s assumed that you have cleared as much space as possible, that you have defragged the HDD using the Windows defrag tool, and that you have the VM over-provisioned. Simply shrink the partition down to a size that IS available on the datastore, or migrate to a datastore with enough storage. In my case I opted for the first choice, shrinking the partition, when I hit YET ANOTHER PROBLEM:
  3. Even though I knew I had cleared the used space down to roughly 30 GB, running the shrink wizard in the diskmgmt tool stated it could only shrink the disk to 200GB, since “There was a system file preventing further shrinkage”. WTF man, we ran Disk Cleanup, we cleared the SxS folder, we cleared old logs, we cleared the actual Exchange database files, we disabled and shrunk the pagefile then re-enabled it… What could possibly be preventing the shrinkage?

I found this post (windows 7 – Can’t shrink partition due to mystery file – Super User) after I looked in the event log for event 259, which shows the file in question preventing the shrinkage: “$LogFile::$DATA”… Da fuck does that mean…

In short.. It’s an NTFS journaling file using “Alternate Data Streams“, or as quoted by Andrew Lambert “The $LogFile file is a part of the NTFS filesystem metadata, it’s used as part of journaling. The ::$DATA part of the name indicates the default $DATA stream of the file. I don’t know why it’s causing a problem, though.”

There were a bunch of comments about System Restore points, but I checked and there were none. Many other comments mentioned the use of 3rd-party tools (no thanks). I can’t seem to locate it now, but I’m pretty sure I remember reading a comment somewhere that other NTFS-aware applications have the ability to move and correct such things. So here’s my plan of action:

  1. Create a snapshot so I don’t have to recover the whole VM if something goes wrong. (On a slower I/O datastore, but one with enough space for the whole disk, just to be safe.)
  2. Boot the Exchange VM with a GParted Live disc connected.
  3. Use GParted to shrink the partition.
  4. Clone the VM. (This is what I don’t get: the cloned VM still shows disk usage of 70 GB…. AHhhhhhhhh!!)

Here’s another interesting note. As I stated in point 1, I had this VM on a datastore backed by spindle disk, shown on the ESXi host as “Non-SSD”, and cloned it to an SSD datastore, where it now claims to use 70 gigs, while the OS that boots only has a partitioned disk of 46 gigs, with 12 gigs free. Opening the defrag application states defrag is not possible because it’s an SSD. Guess we’ll run sdelete and see what happens?!

sdelete -z c:
sdelete -c c:

The backend VMDK grew from 70 gigs to 80 gigs… man, wtf is going on… Hole punch it:

vmkfstools -K /vmfs/volumes/SSD/VM/VM.vmdk

You’re tellin’ me.. the SSD can handle ripping the drive at over 250 MB/s, but hole punching causes I/O errors?

Good ol’ technology, never ceasing to piss me off… fine, I’ll destroy this VM and move the main spindle drive into a new ESXi host, which will have an SSD datastore with more storage and hopefully one not on the way out (if it actually is an I/O error, i.e. storage/drive failure on the SSD). One sec…

So yeah… even with a larger SSD the copy worked, and the hole punch “succeeded”, but the drive was still 80 gigs. I made a clone and the VMDK came down to 60 gigs; I still can’t make sense of the roughly 30 gigs of discrepancy. Since the whole idea is to move this to my wireless ESXi host, I’ll see what exporting it as-is yields for the final OVA file size, and then update this blog post.