Fixing WordPress Security Recommendations

So.. I log into my WordPress server, and under Site Health on the Dashboard, the yellow warning badge is basically sayin’…. “Do better”

Anyway, I asked Copilot for help on this (specifically about the warning about a dated PHP version [7.4]), and it basically said:

🛠️ Update PHP for Your WordPress Site

1. Check Your Current PHP Version

  • Go to your WordPress dashboard
  • Navigate to Tools > Site Health > Info tab
  • Expand the Server section to see your PHP version

2. Back Up Your Site First

  • Use a plugin like UpdraftPlus or Duplicator
  • Save both your files and database—just in case something breaks

3. Update PHP via Hosting Control Panel

Depending on your host, the process varies slightly:

How to update PHP, by hosting provider:

  • Bluehost: Go to Websites > Settings > PHP Version, choose the latest version
  • SiteGround: Use Site Tools > Devs > PHP Manager, select manual update
  • Hostinger: Go to Websites > Manage > PHP Configuration, pick a new version
  • cPanel Hosts: Use MultiPHP Manager under the Software section

If you’re unsure where to find this, your host’s support team can guide you.

4. Test Compatibility

  • Use the PHP Compatibility Checker plugin to scan for issues
  • Or create a staging site to test the update safely

5. Update WordPress, Plugins, and Themes

  • Make sure everything is up to date before switching PHP versions
  • This reduces the chance of conflicts

Great, except that I’m self-hosting… I have no interest in having my access to a service paywalled and managed via poorly coded web sites. So I followed up with Copilot, which then wanted me to add a third-party repo, in this specific case the “ondrej” PPA. I had a couple issues with this.

  1. This is known as a Frankenstein build.
  2. Managing third-party repos can cause issues, and is usually frowned upon.

These are mentioned in this reddit post, where a sysadmin was confused and this type of setup was instantly caught by others with experience. Yet this is exactly what the AI was telling me to do; its reasoning was as follows:

“Ubuntu’s default repositories only include certain PHP versions—usually the ones that were current when that Ubuntu release came out. So if you’re on, say, Ubuntu 22.04 or 24.04, you might only get PHP 8.1 or 8.2 from the official repos. Anything newer (like PHP 8.3 or 8.4)? Not there yet.

That’s why developers lean on Ondřej Surý’s PPA (ppa:ondrej/php). He’s a trusted Debian developer who maintains up-to-date PHP packages for Ubuntu and Debian.”

When asked why someone would have to install the actual maintainer’s repo instead of those builds being pushed to the OS default repos, the AI stated this:

“Here’s the short version: distro maintainers often freeze package versions when they release a new OS version—for stability and compatibility across thousands of other packages. So even if the upstream maintainer (like Ondřej) has newer PHP versions ready to go, the official Ubuntu repo won’t touch them unless there’s a security patch or they release a new Ubuntu version that includes the updated language package.”
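You can see this for yourself by asking APT what it can actually supply; a quick check, assuming a stock Debian/Ubuntu box:

apt-cache policy php                             # candidate version the default repos offer
apt list 2>/dev/null | grep -E '^php8\.[0-9]/'   # versioned PHP packages actually available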

Anyway… instead of all this rubbish, it makes more sense to upgrade the base OS at this point. So let’s do that.

🛠️ Upgrading Base OS Debian 11 to 12

1. Back Up Your System

You’ve already got backups—perfect. Just make sure you’ve got /etc, /var/lib/dpkg, and /var/lib/apt/extended_states covered.

2. Update Current System

sudo apt update
sudo apt upgrade
sudo apt full-upgrade
sudo apt --purge autoremove

3. Edit Your APT Sources

Replace all instances of bullseye with bookworm in your sources list:

sudo sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list

If you use additional repos in /etc/apt/sources.list.d/, update those too:

sudo sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list.d/*

Optionally, add the new non-free-firmware section:

sudo sed -i 's/non-free/non-free non-free-firmware/g' /etc/apt/sources.list
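Before refreshing, it’s worth a quick sanity check that nothing still points at the old release:

grep -rn bullseye /etc/apt/sources.list /etc/apt/sources.list.d/   # should return nothing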

4. Refresh Package Index

sudo apt update

5. Minimal Upgrade First

sudo apt upgrade --without-new-pkgs

6. Full System Upgrade

sudo apt full-upgrade

Follow any prompts carefully—especially ones about restarting services or replacing config files.

7. Reboot

sudo reboot

8. Verify Upgrade

lsb_release -a
cat /etc/debian_version

After the reboot I was back into my WordPress admin portal, and the site warning was gone. Yay… but some still remain.. let’s see if we can clean these up, one at a time:

Right, inactive plugins, that should be easy enough. Deleted 2 unused plugins, and sure enough the warning is gone. Same with themes, let’s do that next.

Appearance -> Themes -> Theme Details -> Delete.

Older version of PHP… well, here we go again. We went from a site warning to site good, but it still wants PHP 8.3… which was discussed above. Frankenstein the build or wait for a new base OS? I’ll await Debian 13 before introducing a Frankenstein Debian build. We’ll ignore this warning.

Recommended Module: ZIP

This one is a bit annoying cause it can go two ways:
1) Sure, I can install this module, but if I don’t need it why would I? Just to remove this warning, I suppose.

Fair question—and the short answer is: probably not essential, but it’s handy to have. 👍

The zip module isn’t required for WordPress to function at a basic level. If your site is running smoothly, you’re likely not using any plugins or features that rely on it yet. But here’s why WordPress flags it:

📦 What the zip Module Is Used For

  • Plugin installation and updates (especially from ZIP files)
  • Theme import/export
  • Backup plugins (e.g., Duplicator, UpdraftPlus)
  • Migration tools

If you’re manually installing things via FTP or don’t use plugins that automate packaging and extraction, you’re fine without it. WordPress just lists it because a lot of plugins assume it’s present.

🧠 TL;DR

  • ✅ Your site can run without it
  • ⚠️ If you install or update ZIP-based plugins through the admin interface, you might hit errors
  • 🛠️ It’s a tiny module and safe to install if you’re future-proofing

How to?

🛠️ Enable the zip PHP Module on Debian 12

1. Install the Extension

Make sure you install the correct version for your PHP setup. If you’re using PHP 8.2 (the default in Debian 12):

sudo apt update
sudo apt install php8.2-zip

If you’ve upgraded to PHP 8.3 via SURY:

sudo apt install php8.3-zip

2. Restart Your Web Server

Depending on your setup:

sudo systemctl restart apache2

3. Verify Installation

Run:

php -m | grep zip

If it returns zip, you’re good to go.

2) Don’t install it, from a security standpoint:

Security Angle: Why Not Installing zip Might Be Safer

  • Less attack surface: Fewer PHP extensions mean fewer potential vulnerabilities, especially from bugs in parsing zip headers or malicious zip payloads.
  • Blocks auto-installs: Without ZipArchive, plugins/themes that use zipped packages via the WordPress admin can’t self-extract. That forces all installs to be manual—good for auditability.
  • Script-based exploits are blunted: Malware that attempts to download, unpack, and activate zip files through vulnerable plugins or upload endpoints will hit a wall.
  • Prevents sneaky plugin drops: No zip means plugins can’t unpack themselves unnoticed during malicious file uploads or improperly sanitized form inputs.

⚠️ Trade-Offs

  • Some legit backup or migration plugins won’t work—or will throw warnings
  • Manual updates and installs are more tedious
  • You may need to temporarily install it for specific maintenance tasks, then uninstall afterward

Object Cache

This one was a bit more annoying than I wanted it to be, simply because it’s got many options besides 1) Do it vs 2) Don’t do it.

Right now my site runs fine, but who doesn’t want a faster site… right… Right?

🔴 Redis

Pros

  • Very fast and widely adopted
  • Works across multiple servers (great for scaling)
  • Excellent support from plugins like Redis Object Cache
  • Stores complex data types (not just key-value pairs)
  • Can be configured for persistence (disk backup of cache)

⚠️ Cons

  • Uses more memory than simpler caches
  • Requires a background daemon (redis-server)
  • Overkill for tiny or low-traffic sites

🔵 Memcached

Pros

  • Lightweight and blazing fast
  • Great for simple key-value object caching
  • Minimal resource usage—ideal for single-server setups

⚠️ Cons

  • Doesn’t support complex data types
  • No persistence: cache is lost if the server reboots
  • Fewer modern plugin options compared to Redis

🟣 APCu

Pros

  • Fast, simple, and bundled with PHP
  • No external services required—runs in-process
  • Perfect for single-server, low-footprint setups

⚠️ Cons

  • Only works per process: no shared cache across servers
  • Not ideal for large or complex sites
  • Might get flushed more often depending on your PHP configuration

In my case I’m going to try Memcached. Why? I unno….

🧰 Install Memcached + WordPress Integration

1. Install Memcached Server + PHP Extension

sudo apt update
sudo apt install memcached php8.2-memcached
sudo systemctl enable memcached
sudo systemctl start memcached

Replace php8.2 with your actual PHP version if needed.
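If you’re not sure which version that is, a quick check (the CLI and the Apache module can differ, so check both; this assumes Apache with mod_php):

php -v                        # CLI version
apache2ctl -M | grep -i php   # loaded Apache PHP module, e.g. php_module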

2. Verify Memcached Is Running

echo "stats settings" | nc localhost 11211

If nc isn’t available, you can use Bash’s built-in TCP support:

exec 3<>/dev/tcp/127.0.0.1/11211
echo -e "stats\r\nquit\r\n" >&3
cat <&3

This opens a raw TCP connection and sends the stats command directly.

You should see a list of stats—if not, Memcached isn’t active.

3. Install a WordPress Plugin

The most common plugin for Memcached integration is W3 Total Cache:

  • Go to your WordPress dashboard
  • Navigate to Plugins > Add New
  • Search for W3 Total Cache
  • Click Install, then Activate

4. Configure W3 Total Cache for Memcached

  • Go to Performance > General Settings
  • Under Object Cache, enable it and select Memcached as the method
    *If Memcached is not selectable, restart the web service (apache2)
  • Do the same for Database Cache if desired
  • Save settings and purge cache

✅ Optional: Use a Drop-In Instead of a Plugin

If you want to skip plugins entirely:

  • Download object-cache.php from Memcachy
  • Place it in /wp-content/
  • Add this to wp-config.php:
    define('WP_CACHE', true);
    $memcached_servers = array(
      array('127.0.0.1', 11211)
    );
    

This gives you Memcached support without a full plugin UI.
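Either way, if you happen to have WP-CLI on the box, it can confirm which backend WordPress actually picked up:

wp cache type    # reports the active object cache implementation
wp cache flush   # clear the object cache after config changes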

After installing the backend modules and the plugin, and configuring Memcached as the object cache, Site Health finally removed that warning too.. now the final one.. HTTPS.

HTTPS

Now it’s not possible to access my site via HTTP, not externally anyway. So, why the warning?

Ah, that’s a classic WordPress quirk—your browser’s using HTTPS, but WordPress itself still thinks it’s living in the insecure past. 😅

Here’s how to fix it since you’re self-hosting:

🔐 Update WordPress URLs to Use HTTPS

  1. Log into your WordPress Admin Dashboard
  2. Go to Settings > General
  3. Update both:
    • WordPress Address (URL)
    • Site Address (URL) to use https://yourdomain.com instead of http://
  4. Scroll down and click Save Changes

But it’s greyed out…

🛠️ How to Fix It

  1. Open your wp-config.php file It’s located in the root of your WordPress installation (same folder as wp-load.php and wp-settings.php).
  2. Look for these lines:
    define('WP_HOME','http://yourdomain.com');
    define('WP_SITEURL','http://yourdomain.com');
    

    Or they might be using https:// if you’ve already set up SSL.

  3. Delete or comment them out:
    // define('WP_HOME','https://yourdomain.com');
    // define('WP_SITEURL','https://yourdomain.com');
    
  4. Save the file and reload your WordPress dashboard The fields under Settings > General should now be editable.

mhmm, uhhh, commenting them out caused my site to simply redirect to the local loopback IP, which of course the system I’m navigating my site on isn’t hosting a web server, thus the site times out.

yeah, that redirect to 127.0.0.1 or localhost is a classic symptom of WordPress falling back to default behavior when it can’t find proper site URLs. Here’s what likely happened:

🧠 Why It Redirected to Localhost

When you commented out WP_HOME and WP_SITEURL in wp-config.php, WordPress tried to pull the site URL from the database (wp_options table). If those values are still set to http://localhost or http://127.0.0.1, it’ll redirect there—even if you’re accessing the site from a public domain.

🛠️ How to Fix It

✅ Option 1: Restore the Correct URLs in wp-config.php

Uncomment and update the lines like this:

define('WP_HOME', 'https://yourdomain.com');
define('WP_SITEURL', 'https://yourdomain.com');

Replace yourdomain.com with your actual domain name.

✅ Option 2: Update URLs Directly in the Database

If you want to manage URLs from the dashboard again, you’ll need to fix the database values:

  1. Access phpMyAdmin or use the MySQL CLI
  2. Run:
    UPDATE wp_options SET option_value = 'https://yourdomain.com' WHERE option_name IN ('siteurl', 'home');
    
  3. Clear your browser cache and reload the site

You can then safely remove the hardcoded lines from wp-config.php and regain control via the WordPress admin panel.
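For what it’s worth, if WP-CLI is installed, the same database fix works without touching phpMyAdmin (substitute your real domain):

wp option update home 'https://yourdomain.com'
wp option update siteurl 'https://yourdomain.com'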

FINALLY What a PITA… Hope this helps someone…

Small Rabbit Hole

Allllllright, I wanted to check if I was even running phpMyAdmin (I vaguely remember that it was part of the TurnKey Linux setup, which I THINK is what this was originally spun up from, and which had such services baked in for ease of use)… ANYWAY, I went to try n figure this out by simply checking my server’s listening ports… when I found IPv6

🚫 IPv6

I dislike it, others disagree. If it be supported (by now it’s pretty widely adopted), or if it’s something you need.. ughhh, then give er… let the world be your oyster or some dumb shit. I personally don’t like the idea of everything having a fully publicly routeable IP address.. if it even works that way.. unno… I still stick to IPv4 where, yes, I use NAT… ooo nooooo…

Anyway long story short I wanted to disable IPv6 on my WordPress server…

🧱 Method 1: Disable via sysctl (Persistent)

Edit the system config file:

sudo nano /etc/sysctl.conf

Add these lines at the end:

net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1

Apply changes:

sudo sysctl -p
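To confirm the sysctl took (assuming iproute2 is present):

ip -6 addr show                                # no output means no IPv6 addresses remain
cat /proc/sys/net/ipv6/conf/all/disable_ipv6   # should print 1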

I did this but still found listening ports on IPv6 (specifically SSH and NTP). I could have reconfigured those services, but instead..
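(For reference, pinning sshd to IPv4 would have been a one-line change in /etc/ssh/sshd_config, assuming stock OpenSSH:)

AddressFamily inet   # inet = IPv4 only; restart sshd afterwards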

🧨 Method 2: Blacklist the IPv6 Kernel Module

Create a blacklist file:

sudo nano /etc/modprobe.d/blacklist-ipv6.conf

Add:

blacklist ipv6

Then update initramfs:

sudo update-initramfs -u
sudo reboot

This didn’t work for me.

🧪 Method 3: Disable via GRUB Boot Parameters

Edit GRUB config:

sudo nano /etc/default/grub

Find the line starting with GRUB_CMDLINE_LINUX_DEFAULT and add:

ipv6.disable=1

Example:

GRUB_CMDLINE_LINUX_DEFAULT="quiet ipv6.disable=1"

Update GRUB:

sudo update-grub
sudo reboot

This finally worked!
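Verified after the reboot:

cat /proc/cmdline           # should now include ipv6.disable=1
ss -plunt | grep '\[::\]'   # should return no IPv6 listeners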

Rabbit Hole #2: stunnel4

what is stunnel4

Stunnel4 is a proxy tool that wraps unencrypted traffic in SSL/TLS encryption, allowing legacy or non-secure services to communicate securely without modifying their original code. Think of it as a secure tunnel that sits between a client and a server, encrypting everything that passes through.

🔐 What Stunnel4 Does

  • Encrypts plaintext protocols like HTTP, SMTP, FTP, or Telnet
  • Acts as a TLS/SSL wrapper for services that don’t natively support encryption
  • Can run in client or server mode
  • Supports load balancing, certificate validation, and FIPS-compliant cryptography
  • Uses OpenSSL under the hood for encryption algorithms

🧠 How It Works

Imagine you have a mail server that only speaks plain SMTP on port 25. You can configure Stunnel to:

  • Accept encrypted connections on port 465
  • Decrypt the traffic
  • Forward it to port 25 locally

This way, clients connect securely—even though the mail server itself is oblivious to SSL.

🛠️ Typical Use Cases

  • Securing legacy services without rewriting them
  • Tunneling blocked protocols (e.g. SSH over HTTPS)
  • Adding SSL to VPNs (like OpenVPN)
  • Acting as a secure gateway for internal services

“Tunneling blocked protocols (e.g. SSH over HTTPS)”: this is exactly what I see my system doing on port 12320.. and it shows as service stunnel4. But I don’t know if I want to stop the service as a whole…

You’re spot-on in diagnosing that behavior—port 12320 with stunnel4 almost certainly means your system is tunneling SSH over HTTPS, or at least exposing a secure channel for remote access via SSL. This is commonly used in stacks like Webmin/Virtualmin/Usermin, where stunnel4 wraps non-HTTPS services to provide secure access through a browser.

🔍 What Port 12320 Usually Does with stunnel4

  • Acts as a gateway to Webmin or Usermin shell access
  • Allows remote file management, terminal, or SSH-like features over HTTPS
  • Wrapped securely so that firewalls see it as “harmless” SSL traffic

You can confirm this by checking:

sudo systemctl status stunnel4
cat /etc/stunnel/stunnel.conf

Or:

grep -r 12320 /etc/stunnel/

That config file should reveal exactly which service is being tunneled.

⚠️ Should You Stop It?

That depends on:

  • Do you use Webmin/Virtualmin remotely?
  • Are you managing your server through a web browser interface?
  • Is 12320 exposed to the public internet or locked behind a firewall?

If you don’t use it and it’s open to the world, you could:

  • Disable it safely:
    sudo systemctl disable --now stunnel4
    
  • Or edit the config to disable only port 12320, leaving other tunnels active

But if you rely on browser-based admin tools, killing stunnel4 might break access to file manager or terminal tools from the web UI.

🧠 Pro Tip

Rather than killing the whole service, you can:

  • Comment out the 12320 tunnel in stunnel.conf
  • Reload the config:
    sudo systemctl restart stunnel4
    

That gives you selective control—without losing secure access to services you still need.

stunnel4.service
  Loaded: masked (Reason: Unit stunnel4.service is masked.)
  Active: inactive (dead)

grep -r 12320 /etc/stunnel/
/etc/stunnel/shellinabox.conf:12320

So, there it is: the shell (SSH) service using stunnel, which is why I see a console when I access https://wordpress:12320. But why is the service saying dead when I clearly see it bound via the ss command?

Excellent deduction — you’re practically reverse-engineering the system just by reading its sockets. 🧠🔧 Let’s break down what’s happening:

🧩 Why stunnel4.service Says Dead but the Port Is Active

You’re seeing a listening socket on port 12320, likely from ShellInABox wrapped by stunnel, but stunnel4.service reports:

Loaded: masked
Active: inactive (dead)

Here’s the likely cause:

⚙️ Stunnel is running, but not via the stunnel4.service unit

  • The service is masked, which means it’s intentionally disabled from being started or stopped using systemctl.
  • This doesn’t stop you (or another app) from manually launching Stunnel as a background process, or from invoking it via a custom systemd unit, init script, or even Webmin/Virtualmin.

🧠 ss shows the port because:

  • The process was started outside the official service file, possibly via:
    • A different systemd unit (e.g., webmin-stunnel.service)
    • A direct call in /etc/rc.local or cron or Webmin’s internal startup sequence
    • A bundled wrapper from ShellInABox or another tool

🔍 How to Find the Real Parent Process

Let’s track down what’s owning port 12320:

sudo lsof -i :12320

Or:

sudo ss -tulpn | grep 12320

This will show you the actual PID and the command or executable responsible. Likely output will say something like:

LISTEN  ... 12320  ... /usr/bin/stunnel

Once you get the PID, trace it:

ps -fp <PID>

🛠️ How to Disable That Specific Tunnel (Without Killing Stunnel Globally)

You can comment out or delete the tunnel block in /etc/stunnel/shellinabox.conf. Try this:

  1. Edit the config:
    sudo nano /etc/stunnel/shellinabox.conf
    
  2. Comment out all lines by adding #:
    # accept = 12320
    # connect = 127.0.0.1:12319
    # TIMEOUTclose = 0
    
  3. Save and exit, then reboot.

Rabbit Hole #3: Master

Using ss -plunt, port 25 shows as service “master”. I can’t see that in Webmin.. what’s going on here?

Ah, the mysterious "master" process — that’s actually Postfix, your mail transfer agent (MTA). 📨

When you run:

sudo ss -plunt | grep :25

and see something like:

LISTEN 0 100 0.0.0.0:25 0.0.0.0:* users:(("master",pid=1234,fd=12))

the "master" refers to the Postfix master daemon, which coordinates all the individual Postfix services (like smtp, pickup, qmgr, etc.).

I disabled postfix and sure enough port 25 listener went away.

I also disabled the NTP service, to stop it listening on port 123.
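For the record, both were presumably just a systemctl away (the NTP unit name varies by distro: ntp, ntpsec, or chrony):

sudo systemctl disable --now postfix
sudo systemctl disable --now ntp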

OK, final one: I saw the system listening on UDP port 10000. I saw one that was TCP-based, but it was only bound to loopback so I didn’t care about that much, and research showed it’s used by Webmin. However, I didn’t like the UDP one cause it was listening on all interfaces on the system (0.0.0.0).

After enough searching I found it configured by Webmin itself.

Webmin -> Webmin Configuration -> Ports and Addresses -> “Listen for broadcasts on UDP port”. But what is this used for?

The “Listen for broadcasts on UDP port” option in Webmin is used for auto-discovery of other Webmin servers on the local network. Here’s what it’s doing:

📡 What It’s For

  • Webmin listens on UDP port 10000 to receive broadcast packets from other systems running Webmin.
  • These packets are part of the Webmin Servers Index module, which lets you:
    • Scan your LAN for other Webmin instances
    • Automatically add them to a central dashboard
    • Optionally control them via RPC (Remote Procedure Calls)

🧠 Why It’s Optional

  • If you’re not using clustered Webmin setups or don’t need to discover other servers, it’s safe to disable.
  • Disabling it stops Webmin from binding to UDP 10000, which:
    • Reduces your attack surface
    • Cleans up your ss output
    • Avoids confusion when auditing open ports

✅ What Happens When You Turn It Off

  • Webmin will no longer respond to broadcast discovery requests
  • You can still manually add other servers to the Webmin Servers Index if needed
  • The TCP listener on port 10000 (for HTTPS access) remains unaffected
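After toggling it off, a quick re-scan should show the UDP listener gone:

sudo ss -plunt | grep 10000   # expect only the TCP (HTTPS) listener to remain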

Hope all these tweaks help someone.

Upgrade From PA-220 To PA-440

Step 1) Get a PA-440 from your reseller.

Step 2) Power On PA-440

Step 3) Connect a Micro-USB cable into the console port, and then USB-A into the workstation of choice, with OS of choice. I will be using an HP laptop with Windows 11.

  • Baud Rate: 9600
  • Data Bits: 8
  • Parity: None
  • Stop Bits: 1
  • Flow Control: None

Login as admin:admin and change the password.

Step 4) Disable ZTP. Unless you are working with a consultant or advanced VAR you probably won’t be using ZTP (Zero Touch Provisioning), and leaving it enabled will prevent us from configuring a static IP address on the MGMT port.

> set system ztp disable

Now wait for the firewall to reboot.

Step 5) Configure a static IP for the PA-440 MGMT port:

> configure
# set deviceconfig system type static ip-address <IP_ADDRESS> netmask <NETMASK> default-gateway <DEFAULT_GATEWAY>
# commit

At this point you can plug a network cable into the MGMT port and into the switch in your network stack that will allow it to communicate to the internet and whatever devices are on the same subnet.

Step 6) Adjust any existing firewall rules to allow the MGMT port to access the internet, primarily the “paloalto-updates” app type, if you are already using a PA-series firewall and want to be really strict on the rules.

Step 7) Register the device with your account on the Palo Alto Support portal. This is required when using the “grab licenses from online servers” option in the firewall. If you are using the device in an offline fashion then you will need to use the activation codes, which is outside the scope of this blog.

Step 8) Activate the PA-440 by checking online for licenses.
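If you prefer the CLI, the same retrieval can presumably be done there once the MGMT port has internet reach:

> request license fetch
> request license info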

Congrats, we got the first basic deployment steps configured for the PA-440. We can now manage it via the web interface on the MGMT port. Now we’ll export the config from the PA-220, and import it into the PA-440.

Step 9) Export existing config from PA-220.

Device -> Setup -> Operational -> Save named snapshot -> name it

Device -> Setup -> Operational -> Export named snapshot -> the one named above

Step 10) On the PA-440, import the config.

Device -> Setup -> Operational -> Import named snapshot -> the one named above

Device -> Setup -> Operational -> Load named snapshot -> the one imported above

In my case I had a URL security definition that was causing a validation fault, so I had to check for new apps n threats packages and apply the latest one.

This most likely happened cause my exported config had a later apps n threats definition than what the new firewall had available.

After this the commit validated without issue.

Step 11) Use auth codes to activate all features.

Step 12) Commit

Step 13) Power off the PA-220, and replace it with the PA-440. Plug the network cables in 1 for 1; since they both have 8 ports it’s just a direct in-place swap.

Now that I’ve got a PA-440 with all the bells n whistles, stay tuned for more Palo Alto Networks tutorials. I’ll review what I’ve covered in the past on my website and attempt to avoid duplicates; if I do find any, I’ll update those posts, otherwise I’ll create new ones for new deployments.

Hope this helps someone.

VMware Changes Update URLs

If you run a home lab, or manage systems for companies you may have noticed updates not working in VAMI… something like…. Ohhh I dunno.. this:

Check the URL and try again.

Unable to patch the vCenter via VAMI as it fails to download the updates from Broadcom public repositories

Cause

Public-facing repository URLs and authentication mechanisms are changing. Download URLs are no longer common but unique to each customer, and will therefore need to be re-configured.

Well… wow thank you Broadcom for being so… amazing.

If you want to be overly confused about the whole thing you can read this KB: Authenticated Download Configuration Update Script

As per the original link I shared above, all you have to do is log in to the Broadcom support portal, get a token, and edit the URL…. but….

Notes:

    • The custom URL is not preserved post migration upgrade, FBBR restore and VCHA failover
    • If there is a proxy device configured between vCenter and the internet, ensure it is configured to allow communications to the new URL
    • Further patches automatically update this URL. For example, if 8.0.3.00400 is patched to 8.0.3.00500, the default URL will change to end in 8.0.3.00500.

Looks like this was enforced just a couple days ago … Sooooo, happy patching?   ¯\_(ツ)_/¯

Permission to perform this operation was denied. NoPermission.message.format

For anyone who may use my site as a source of informational references, I do apologize for the following:

  1. My site cert expiring. ACME is great, I’m just a bit upset they refuse to announce their HTTP auth sources, so I can’t create a security rule for it. Right now it would be restricted to App Type. While not bad.. not good enough, so I have to manually allow the traffic for the cert to be renewed.

    No… I have no interest in allowing ACME access to my DNS for DNS auth.

  2. Site was down for 24 hours. If anyone noticed at all, yes my site was down for over 24 hours. This was due to a power outage that lasted over 12 hours after a storm hit. No UPS could have saved me from this. Though one is in the works even after project “STFU” has completed.

    No, I have no interest in clouding my site.

I have a couple blog post ideas roaming around, I’m just having a hard time finding the motivation.

Anyway, if you get “Permission to perform this operation was denied. NoPermission.message.format” while attempting to move an ESXi host into a vCenter cluster, chances are you may have an orphaned vCLS VM.

If so, log into VAMI and restart the ESX Agent Manager (EAM) service.

After restarting that service everything should be hunky dory…

Cheers.

Update Veeam 12.3

Grab Update file from Veeam.

Step 1) Sign in to Veeam portal

I didn’t have a paid product license, so my download section was full of free trial links. Since I’m using CE (Community Edition), I grabbed it from here: Free Backup Software For Windows, VMware, & More – Veeam

Step 2) Download the ISO, it’s a doozy at 13 GB.

Step 3) Read the update notes for any expected issues/outcomes.

For all the FAQs go here: Veeam Upgrade FAQs

For basic System Requirements and release notes see here: Veeam Backup & Replication 12.3 Release Notes

The main thing will be the change of the server SQL service, moving from MS SQL Express to PostgreSQL, though it’s not directly mentioned from what I can see, other than step 8 in the upgrade path: Upgrading to Veeam Backup & Replication 12.3 – User Guide for VMware vSphere

Step 4) Attach the ISO to the server being upgraded or installed on

In my case, a 12.1-based server.

Since it’s a VM, I just attach the ISO via VMRC.

Step 5) Run the Installer

Make sure you stop any “continuous” jobs, and close the B&R Console.

Double Click Setup.exe on the mounted ISO’s main directory.

If you haven’t guessed it, click Upgrade. Yes, nice to see coding done where it just does a check and knows it’s a Veeam server, so the only option is to Upgrade.

In my case I again only have one option to choose from.

How long we wait is based on the Matrix. Looking at the VM resource usage on my machine, the setup looks like it’s reading installation files from the ISO and writing them somewhere to disk; my setup only yielded about 40 MB/s and took roughly 8 minutes.

Agree to the EULA.

Upgrade the server; here you have a checkbox to update remote components automatically (such as Veeam proxies). In my lab the setup is very simple so I have none. I just click next.

License upgrade: (I’ll try not selecting this since I’m on CE… nope, the wizard wouldn’t let me for CE, shucks hahah)

Service account, Local System (recommended). I left this default, next.

Here’s the OG MS SQL instance:

… yes?

For the Veeam Hunter service… ignore (Shrug)

free space… needs more than 40 gigs… holy moly….

43.1 GB required, 41 GB available. Unreal, guess I’ll extend the drive; great part of running VMs. 🙂

Finally! Let’s Gooooo! And sure enough, first step.. here comes the new SQL instance.. this is probably why it requires over 40 gigs to do the install, to migrate the SQL instance from MS SQL to Postgres…. Wonder if space will be reclaimed by removal of the MS SQL Express instance….

Roughly half hour later…

Mhmmm, checking the services I see the original MS SQL instance is still there running. I see a postgres service.. not running… uhhhh mhmmm…

All Veeam services are running, open the Veeam B&R console, connect, and yup it opens. The upgrade component wizard automatically opened, and it updated the only item.. itself.

*UPDATE* Patch for the latest CVE (CVSS 9.9), if you have a domain-joined Veeam server.

KB4724: CVE-2025-23120

*thumbs up* It’s another 8 gig btw…

Installing Core Linux

Installing TC-Linux (Core Only)

Sources

Source: wiki:install_hd – Tiny Core Linux Wiki

On, ESXi VM: wiki:vmware_installation – Tiny Core Linux Wiki

FAQs: http://www.tinycorelinux.net/faq.html

Setting up VM

VM Type: Other Linux 32-bit (kernel 4.x)
CPU: 1
Mem: 256 MB
HDD: 20 GB
Network: DHCP + Internet Access

Change boot to BIOS (instead of EFI)

Booting and Installing Core Linux

Attach the ISO and boot. Core Linux boots automatically from the ISO:

For some reason the source doesn’t tell you what to do next. Type tc-install and the console says it doesn’t know what you’re talking about:

AI Chat was kind enough to help me out here, and told me I had to run:

tce-load -wi tc-install

Which required an internet connection:

However, even after this, attempting to run it gave the same error.. mhmm. Using the find command I found it, but it needs to be run as root, so:

sudo su
/tmp/tcloop/tc-install/usr/local/bin/tc-install.sh

C for install from CDrom:

Let’s keep things frugal around here:

1 for the whole disk:

y, we want a bootloader (it’s extlinux btw, located at /mnt/sda1/boot/extlinux/extlinux.conf):

Press enter again to bypass “Install Extensions from..”

3 for ext4:

Like the install source guide says add boot options for HDD (opt=sda1 home=sda1 tce=sda1)

last chance… (Dooo it!) y:

Congrats… you installed TC-Linux:

Once rebooted, the partition and disk free output will look different. Before reboot, running from memory:

After reboot:

Installing OpenSSH?

tce-load -wi openssh

This is where things got a little weird. Installing an app… not as root, TC-Linux says…

This is when things got a bit annoying n weird. Even though the guide says using -wi installs it in the on-boot section, I found it wasn’t loading on boot. Well, at first I noticed it didn’t start at all after install, as I couldn’t SSH in; this was cause of a missing config file…

Even once I got it running, it still wouldn’t run at boot, and that apparently was cause the file disappeared after reboot. This is because the system mostly runs entirely in RAM. If you didn’t notice, even after install the root filesystem was still only roughly 200 MB in size (enough to fit into the RAM we configured for this VM).

Notice the no password on the tc account? Set it, reboot. doesn’t stick…

Notice the auto login on tty1? Attempt to disable.. doesn’t stick…

Configuring Core Linux

Long story short apparently you have to define what paths are to be considered persistent via a file:

/opt/.filetool.lst

These files are saved to mydata.gz via the command:

filetool.sh -b
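One gotcha: entries in /opt/.filetool.lst are relative paths (no leading slash). For example, to make /etc/hosts survive a reboot:

echo "etc/hosts" >> /opt/.filetool.lst   # note: no leading /
filetool.sh -b                           # packs the listed paths into mydata.gz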

So here’s what we have to do:

  1. Configure the system to ensure settings we configure stay persistent across reboots.
  2. Change the tc account password.
  3. Disable auto login on TTY1.
  4. Configure Static IP address.
  5. Install and run on boot OpenSSH.

Changing TC Password

Step 1) Edit /opt/.filetool.lst (use vi as root)
– add etc/passwd and etc/shadow

Step 2) run:

filetool.sh -b

Step 3) run

passwd tc

Step 4) run

filetool.sh -b

Now reboot. You may not notice that it applied due to the auto login; however, if you type exit to get back to the actual login banner, type in tc and you will be prompted for the password you just set. Now we can move on to the next step, which is to disable the auto login.

Disable Auto-Login

Step 1) Run

sudo su
echo 'echo "booting" > /etc/sysconfig/noautologin' >> /opt/bootsync.sh

Step 2) Run

filetool.sh -b
reboot

K on to the next fun task… static IP…

Static IP Address

For some reason the AI said I had to create a script that runs the manual steps… not sure if this is the proper way… I looked all over the Wiki: wiki:start – Tiny Core Linux Wiki, and couldn’t find anything.. I know this works, so we’ll just do it this way:

Step 1)  Run:

echo "ifconfig eth0 192.168.0.69 netmask 255.255.255.0 up" > /opt/eth0.sh
echo "route add default gw 192.168.0.1" >> /opt/eth0.sh
echo 'echo "nameserver 192.168.0.7" > /etc/resolv.conf' >> /opt/eth0.sh
chmod +x /opt/eth0.sh
echo "/opt/eth0.sh" >> /opt/bootlocal.sh
filetool.sh -b

Step 2) reboot to apply and verify.

What about SSH?!

Oh right.. we got it installed but we never got it running did we?!

Step 1) Run:

cp /usr/local/etc/ssh/sshd_config.orig /usr/local/etc/ssh/sshd_config
vi /usr/local/etc/ssh/sshd_config

Edit and uncomment (these are the actual directive names):

Port 22
ListenAddress 0.0.0.0
PasswordAuthentication yes

Step 2) Run:

echo "usr/local/etc/ssh/" >> /opt/.filetool.lst
echo "/usr/local/etc/init.d/openssh start" >> /opt/bootlocal.sh
filetool.sh -b
reboot

Congrats, you got OpenSSH working on TC-Linux.

Hostname

On most systems you just run the hostname command… ooooeee, not so easy on TC-Linux.

Option 1 (Clean)

Edit the first line of /opt/bootsync.sh which sets the hostname.

Then just run filetool.sh -b, done.

Option 2 (Dirty)

To ensure the hostname persists across reboots, you need to modify the /etc/sysconfig/hostname file:

  1. Edit the hostname configuration file:
    sudo vi /etc/sysconfig/hostname
    
  2. Add or modify the line to include your desired hostname:
    your_new_hostname
    
  3. Save and close the file.
  4. Add /etc/sysconfig/hostname to the persistence list:
    echo "etc/sysconfig/hostname" >> /opt/.filetool.lst
    echo "hostname $(cat /etc/sysconfig/hostname)" >> /opt/bootlocal.sh
  5. Save the configuration:
    filetool.sh -b
reboot

That’s it for now, next blog post we’ll get to installing other goodies!

Managing Apps

Installing Apps

As you can see, it’s mostly just running:

tce-load -wi

For all the details see their page on this, or run it with -h.

Source of app (x86): repo.tinycorelinux.net/15.x/x86/tcz/

For the most part it’s: install the app, edit files as needed, add the edited files to /opt/.filetool.lst, then run the backup command, test the service, edit /opt/bootlocal.sh with the commands needed to get the app/service running, run filetool.sh -b again, and Bob’s your uncle. A worked example follows below.
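As a hypothetical end-to-end example with the nginx extension (package name and init-script path assumed; check your repo and the .tcz contents):

tce-load -wi nginx                                              # fetch, install, mark for on-boot loading
echo "usr/local/etc/nginx/" >> /opt/.filetool.lst               # persist the config dir (relative path)
echo "/usr/local/etc/init.d/nginx start" >> /opt/bootlocal.sh   # assumed init script location
filetool.sh -b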

Deleting Apps

To remove a package on Tiny Core Linux that was installed using tce-load, here’s what you can do:

  1. For Extensions in the onboot.lst File:
    • First, remove the package name from the /etc/sysconfig/tcedir/onboot.lst file to prevent it from being loaded at boot. You can edit the file with:
      sudo nano /etc/sysconfig/tcedir/onboot.lst
      
    • Delete the entry corresponding to the package you wish to remove, then save and exit.
  2. Delete the Extension File:
    • Navigate to the directory where the extensions are stored:
      cd /etc/sysconfig/tcedir/optional
      
    • Remove the .tcz file associated with the package:
      sudo rm package-name.tcz
      
  3. Clean Up Dependency Files (Optional):
    • To clean up leftover dependency files related to the removed package, you can check and delete them from the same directory (/etc/sysconfig/tcedir/optional).

Retro PC, NO IDE based CDROM in Windows 98/ME

So, I decided to boot up my old Retro PC. To my dismay, when I booted my Windows 98 or ME instances, I noticed that the CDROM was not showing. It would show up in the BIOS, and if I booted MS-DOS 6.22, MSCDEX and all the config work I had done there was still working (so I knew it wasn’t a hardware issue).

The two OSs would boot just fine, and no matter how much I played with the BIOS configurations for the IDE channels the result was the same.

I knew I had the proper drivers on those OSs cause I had everything working previously, so I was a bit stumped at this point. I was about to give up, but I really wanted to play some Road Rash.. I checked many threads on the matter, and most of them simply stated to delete the IDE device and let Windows reinstall it at next boot. Whether I used the standard Microsoft drivers or the NVIDIA nForce drivers, the result was the same: NO CDROM.

I then found this thread, and the final answer at the end actually worked…

“Troubleshooting MS-DOS Compatibility Mode on Hard Disks (Q130179)

It was under:
Resolution…..
4…..
d. Check for the NOIDE value in the registry under:
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\VxD\IOS
The NOIDE value is placed in the registry when the protected-mode driver for the IDE Controller is not properly initialized.

For additional information about how to troubleshoot NOIDE, click the article number below to view the article in the Microsoft Knowledge Base:

Q151911 MS-DOS Compatibility Mode Problems with PCI IDE Controllers
===============================

It had me delete the NOIDE value and reboot. ”

Like…. what? I’ve never seen this before; what a weird problem and solution. I love how there are so many forums and threads alive today for even such old OSs. Gotta love retro gaming. 🙂

Veeam VM Restore failed: Cannot apply encryption policy. You must set the default key provider.

So in my Lab vCenter went completely POOOOOF. So, I installed it fresh.

After vCenter was installed, I updated my Veeam configuration to ensure my backup chains wouldn’t break which still works great by the way.

One VM was missing from my vSphere. So I went to restore it when all of a sudden:

I remembered my post about configuring a Native Key Provider, cause it was required in order to have a vTPM. So I thought, is this a “PC Load Letter” problem, and it’s actually just complaining that I didn’t configure an NKP for it to “apply encryption policy”?

Follow the same old steps to configure a NKP.

  • Log in to the vSphere Client:
    • Open the vSphere Client and log in with your credentials.
  • Navigate to Key Providers:
    • Select the vCenter Server instance.
    • Click on the Configure tab.
    • Under Security, click on Key Providers.
  • Add a Native Key Provider:
    • Click on Add.
    • Select Add Native Key Provider.
    • Enter a name for the Native Key Provider.
    • If you want to use hosts with TPM 2.0, select the option Use key provider only with TPM protected ESXi hosts.
  • Complete the Setup:
    • Click Add Key Provider.
    • Wait for the process to complete. It might take a few minutes for the key provider to be available on all hosts.
  • Backup the Native Key Provider:
    • After adding the Native Key Provider, you must back it up.
    • Click on the Native Key Provider you just created.
    • Click Backup.
    • Save the backup file and password in a secure location.

Once I did all that…

No way, that actually worked. But will it boot? Well, it def “booted”, but it asked for the BitLocker key (which makes sense, since we created a new TPM and it doesn’t have the old keys). I checked my AD, and sadly enough, for some reason it didn’t have any BitLocker keys saved for this AD object/VM.

Guess this one is a loss, and a lesson in the importance of saving your encryption keys.

Careful Cloning ESXi Hosts

I’ll keep this post short. I was doing some ESXi host deployments in my home lab, and I noticed that when I would install on a 120GB SSD, the install would go smoothly, but I wasn’t able to use any of the storage as a Datastore. However, if I took a fresh install copy of ESXi from installing onto an 8GB USB Stick and DD’d it to the 120GB SSD I got several advantages from this:

  1. When done via a USB3 pipe on a Linux live system holding a copy of my base image, I could get speeds in excess of 100 MB/s, and with only 8 GB of data to transfer, the “install” would complete in a mere 90 seconds.
  2. The IP address and root password are preconfigured to what I already know, and I can simply change the IP address from the DCUI and call it a day.

Using this method I could have a host up in less than 5 minutes (2 min to boot the Linux live system, 90 seconds to install the base ESXi OS image, and 2 more to boot ESXi). This was of course on machines without ECC and all the server hardware firmware jazz… in those cases install times are always longer. Anyway…

This was an amazing option, until I noticed something after connecting a machine I had just deployed and changing its IP address. Since I’m super anal about networking during these types of projects/operations, I noticed my ping to another machine (a completely different IP address) started to drop when the new device came up… and after a while the ping responses would come back but drop from the new host, and vice versa; flip and flop it goes. I’m used to seeing this when there’s an IP conflict and two devices have the same IP address, but in this case they were different IP addresses… After enough symptom gathering and logical deduction, I had to assume the MAC addresses must be the same, and that this was the same problem in reverse (different IPs but same MAC), producing the same symptoms.

To validate this I simply deployed my image to a new machine, then went on the hunt to figure out how to see the MAC address. Since I couldn’t plug in the NIC and get to the web-based MGMT interface, I had to figure out how to do it via the console CLI directly… mhmm, after enough googling on my phone I found this Spiceworks thread with my answer:

vim-cmd hostsvc/net/info | grep "mac ="

I then checked this against the ESXi host that I saw the flip-flopping with, and sure enough they matched… After doing a fresh install I noticed that the first 3 sections match the physical MAC, but my DD-deployed hosts retain the MAC of the system the image was originally installed on; so when I ran the command above, I could tell which ones were deployed via my method. This was further mentioned in this reddit thread by a commenter who goes by the name of sryan2K1:

“The physical NIC MACs are never used. vmk ports, along with VMs themselves will all use VMWare’s OUI as the first half of the address on the wire.”

OK, now maybe I can still salvage my deployment method by simply deleting and recreating the VMK after deployment, but I’d guess it best be done via the DCUI or direct console… I found one KB by VMware/Broadcom, but it gave a 404; luckily there was a Wayback Machine link for it here.

Which states the following:

“During Initial Installation and DCUI, ESXi management interface (default vmk0) is created during installation.

The MAC address assigned will be the primary active physical NIC (pnic) associated.

If the associated vmnic is modified with the management interface vmkernel will once again assign MAC address of the associated physical NIC.

To create a VMkernel port and attach it to a portgroup on a Standard vSwitch, run these commands:

esxcli network ip interface add --interface-name=vmkX --portgroup-name=portgroup
esxcli network ip interface ipv4 set --interface-name=vmkX --ipv4=ipaddress --netmask=netmask --type=static

Alternatively, you can also use esxcli to create the management interface vmkernel on the VDS.

Creation of the management interface with the ‘esxcli network’ will generate a VMware Universally Unique address instead of the pnic MAC address.

It is recommended to use the esxcli network IP interface method to create the management interface and not use DCUI.

Workarounds:               None

Additional Information:
Using DCUI to remove vmnic binding from management vmkernel or any modification will apply change at vSwitch level. Management interface is associated with propagating the change to any port groups within the vSwitch level.

Impact/Risks:                None.”

I’m assuming it means that if you use the DCUI to reconfigure the MGMT interface settings, the MAC will automatically be reconfigured to match what I found during the initial clean install, with the first 3 sections of the physical NIC’s MAC used to derive the MAC of the VMK, as mentioned in the reddit thread.

But what if you don’t have any additional interfaces to use to make the selection change in the DCUI so that actually happens? Cause from what I’ve noticed, changing the IP address, disabling IPv6, and rebooting did not change the VMK’s MAC address. Oh, there’s an option in the DCUI, “Reset Network Settings”; within it there are several options, and I simply picked reset to factory defaults. It said success; I checked the MAC via the first command stated above, and bam, the VMK NIC changed to what it should be! Sweet, my deployment method is still viable.
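Had the reset not worked, the fallback per that KB would presumably be to remove and recreate vmk0 from the ESXi shell (interface, portgroup, and addressing here are assumed):

esxcli network ip interface remove --interface-name=vmk0
esxcli network ip interface add --interface-name=vmk0 --portgroup-name="Management Network"
esxcli network ip interface ipv4 set --interface-name=vmk0 --ipv4=192.168.0.50 --netmask=255.255.255.0 --type=static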

Hope this helps someone.

The virtual machine must be encrypted

Sooo, I lost a VM in my fray of re-organizing my server farm. Like a lost pup, I figured I’d just rely on my good old Veeam backup sets. Recover VM, alright here we goo….

What.. what does that mean…. Oh wait is this cause of when I blogged about adding vTPMs to VMs?

Re-checked the linked video from VMware… 2 min in… “Failure to save your key backup will result in unrecoverable data loss”…. mhmmm, OK. I thought all I did was add a TPM device to my VM and enable secure boot; that’s the deal here?

Somewhere I read that the VM config files get encrypted, but I don’t think that’s the case here either. Even checking the prerequisites from VMware, I can’t see anything noting this:

Prerequisites

  • Ensure that your vSphere environment is configured with a key provider. See the following for more information:
    • Configuring vSphere Trust Authority
    • Configuring and Managing a Standard Key Provider
    • Configuring and Managing vSphere Native Key Provider
  • Ensure that host encryption mode is enabled. See Enable Host Encryption Mode Explicitly.
  • The guest OS you use can be Windows Server 2008 and later, Windows 7 and later, or Linux.
  • The ESXi hosts running in your environment must be ESXi 6.7 or later (Windows guest OS), or 7.0 Update 2 (Linux guest OS).
  • The virtual machine must use EFI firmware.
  • Verify that you have the required privileges:
    • Cryptographic operations.Clone
    • Cryptographic operations.Encrypt
    • Cryptographic operations.Encrypt new
    • Cryptographic operations.Migrate
    • Cryptographic operations.Register VM

What I think is happening here is that my NKP, which IS a prerequisite, went poof (the vCenter server that was used to create it is shut down and not being used), and another temp vCenter is being used.

My first thought was maybe I could just add a new NKP and be on my way, as I figured the vTPM module that’s installed simply needs this, and I think it’s this hardware that’s faulting the boot.

I didn’t want to muck with the original I had just recovered, so I tried to clone it, but the clone failed too, complaining about encryption before I could add a TPM, further validating my assumption. What I don’t understand is how the VM was allowed to be created from backup in the first place if I can’t even clone it…?

Anyway, since I know recovery is possible (since I just did it), I guess maybe I can just remove it? Or I could also create a new VM and use vmkfstools to clone the HDD… let’s try that first…

Go to boot the VM… well, got past that error, but the machine was BitLockered. I was hoping it wasn’t going to be.. go to the AD server, open ADUC… no BitLocker tab… ughhhh…

ADUC Missing BitLocker Recovery Tab in 1809 – Microsoft Community

Right, but where is that on a server? Oh, it moved; it’s in Server Manager now…

Yay, there’s the BitLocker tab and… it’s empty.. man, give me a fucking break… so now I have a bunch of backups that are useless cause I lost the BitLocker key… shiiiiiiit

Well, I don’t have anything to follow up on here but a lesson learnt: back up your BitLocker key (I don’t know why it wasn’t saved to the AD computer object).
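For anyone wanting to avoid the same fate: the recovery password can be pushed to AD manually from an elevated prompt inside the guest (drive letter assumed; the protector GUID comes from the first command’s output):

manage-bde -protectors -get C:
manage-bde -protectors -adbackup C: -id "{protector GUID from above}"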