New vCenter Same Veeam

The Story

The Niche Situation

Now I know the title might sound strange, but this is to cover a niche issue which may randomly arise out in the industry. vCenter died, there was no backup, a new vCenter was spun up in its place with the same hostname, IP address and everything, the hosts were re-added, and you happen to use Veeam as your backup solution. Now I have been down this rabbit hole in the past, and I have blogged about an unsupported method to fix the Veeam jobs in this situation. But it’s technically unsupported, so I asked what the “supported method” would be on the Veeam forums.

The short answer, “Oh just use the VM-Migrator tool”, as referenced here.

“Veeam Backup & Replication tracks VMs in jobs using Managed Object Reference IDs (MORef-IDs), which change after migration or recreation of vCenter, causing MORef-ID misalignment.

Veeam VM Migrator utility is integrated into Veeam Backup PowerShell module, and it allows you to resolve MORef-ID misalignment. As a result, your backup incremental chains will remain intact after an inventory change in vCenter.

The utility consists of the following cmdlets:

  • Set-VBRVmBiosUuid — this cmdlet updates the BIOS UUIDs of existing VM entries within the Veeam Backup & Replication configuration database based on information from the old vCenter.
  • Set-VBRVCenterName — this cmdlet modifies vCenter name by adding the _old suffix to its name.
  • Generate-VBRViMigrationSpecificationFile — this cmdlet generates a migration task file which contains the list of mapping tasks.
  • Start-VBRViVMMigration — this cmdlet starts MORef-IDs update.”

So, this tool is supposed to do what I did via the backend, but as a supported frontend tool. My case is a bit different than what the tool wants, though, in that my old and new vCenter are the same, and not two unique instances of vCenter with unique names both running live in parallel. Mine’s simply been rebuilt directly in place.

Step 1) Realize your vCenter is toast.

How you realize this will be random and situational; in my case my trial expired, and all ESXi hosts showed disconnected. I’m gonna treat this as a full loss by simply shutting down and nuking all the VM files… it’s simply dead and gone… and I have no configuration backup available.

This is why this is considered a niche situation, as I’d hope that you always have a configuration backup file of your critical infrastructure server. But… what if (and here we are, in that what if, again)…

Step 2)  Rebuild vCenter with same name.

Yay, extra 20 min cause of a typo, but an interesting lesson learnt.

Renaming vCenter SSO Domain – Zewwy’s Info Tech Talks

Let’s quickly rebuild our cheap cluster, configure retreat mode, and add our hosts back in…

OK, so now we’ve set our stage and we have a broken Veeam instance. If we try to rescan it, it will be no good because the certificate changed when vCenter was rebuilt. So David says: “So in your case, if you can restore Veeam’s configuration database to before you made these changes, instead of your step 4 there, you will begin the migration procedure and use the Set-VBRVCenterName cmdlet on the existing vCenter in Veeam, re-add your newly rebuilt vCenter to Veeam, and then perform the migration.”

Step 3) Run “Set-VBRVCenterName”.

So far, so good.. now..

Step 4) Add new vCenter to Veeam.

Step 5) Generate Migration File.

Now I’m back to assuming, because the instructions in Veeam’s provided guidance are unclear. I’m assuming I have to run the generate command before I run the start migration command…

Checking out the generated file, it’s a plain-text file with a really weird syntax choice, but the VM IDs are clearly the same ones I was mapping manually in my old blog post.

Step 6) Start the Migration.
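Putting steps 3 through 6 together, the whole sequence lives in the Veeam Backup PowerShell module. This is only a sketch: the cmdlet names come from Veeam’s guidance quoted above, but the parameter names, server name, and file path are my own guesses, so check Get-Help on each cmdlet before running anything.

```
# Sketch only: cmdlet names per Veeam's guide; parameters are assumptions.
Import-Module Veeam.Backup.PowerShell

# Step 3: tag the dead vCenter's database entry with the _old suffix
Set-VBRVCenterName -Server (Get-VBRServer -Name "vcenter.lab.local")

# Step 4: add the rebuilt vCenter back into Veeam (I did this in the console UI)

# Step 5: generate the migration task file containing the MORef-ID mapping tasks
Generate-VBRViMigrationSpecificationFile -Path "C:\Temp\migration.txt"

# Step 6: start the MORef-ID update using that file
Start-VBRViVMMigration -Specification "C:\Temp\migration.txt"
```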

I have no clue what that warning is about… I mean the new vCenter was added to Veeam, the VM IDs matched what I see in the URL when navigating them, like my old blog… I guess I’ll just check the VBR console…

I did a recalculate on the VM inside the backup job and it calculated, so looks like it worked. Let’s run a backup job and check the chain as well…

The job ran just fine… and the chain’s still intact. Looks like it worked; this was the supported way, and it did feel easier, especially if scaled out to hundreds of VMs.

Hope this helps someone.

Hyper-V Networking

Install Hyper-V, and let’s learn its networking

Hyper-V Networking

On VMware, when you install the OS (ESXi) on the hardware in the DCUI you simply set the VLAN tag right on the network interface you configure, and bind it to a physical NIC in the menu area, simple.

We leave the network connected to a non tagged, internet allowed connection. Now we can configure our networks.

1. Create one external switch

  • Bind it to your physical NIC (configured as a trunk on the upstream switch).
  • Check “Allow management OS to share this network adapter” once — this creates the first host vNIC.

2. Add more host vNICs via PowerShell:

# Add another vNIC for the host bound to the same external switch
Add-VMNetworkAdapter -ManagementOS -Name "HostVLAN20" -SwitchName "ExternalSwitch"
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "HostVLAN20" -Access -VlanId 20

3. Verify in Windows

  • You’ll now see multiple vEthernet adapters in the host OS (e.g., “vEthernet (HostVLAN20)”, “vEthernet (HostVLAN30)”).
  • Assign IP addresses or enable DHCP as needed.
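To sanity-check the result, the standard Hyper-V PowerShell module can list the host vNICs and their VLAN assignments (run elevated on the host):

```
# Show all management-OS vNICs and which switch they hang off
Get-VMNetworkAdapter -ManagementOS

# Show the access VLAN assigned to each of those vNICs
Get-VMNetworkAdapterVlan -ManagementOS
```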

*Note* This is for doing it on a single-NIC server (for my lab use); usually you’d have dedicated NICs designated for the other networks on those servers… anyway… where was I… right, now that we’ve got additional networks on the server we can work on shared storage, you know how I like my iSCSI.

So, when I setup Hyper-V, and connect a VM to the “default” virtual switch, the Hyper-V server acts as a DHCP provider and NATs the connections from that vSwitch to whatever other network the host is connected to?

Great question — this trips up a lot of people when they first use Hyper‑V! Let’s break it down clearly:

🖧 Hyper‑V Default Switch Behavior

  • Introduced in Windows 10 (1709): The “Default Switch” is automatically created by Hyper‑V and cannot be deleted or modified.
  • NAT (Network Address Translation): Yes — the host machine provides NAT services. This means your VM’s traffic is translated and routed through the host’s network connection, regardless of whether that’s Wi‑Fi, Ethernet, or even VPN.
  • DHCP Services: The host also runs a lightweight DHCP service for the Default Switch. This is what hands out IP addresses to your VMs automatically (usually in the 172.16.x.x range).
  • Connectivity: VMs connected to the Default Switch can reach the internet and other devices on the host’s network, but inbound connections from the LAN to the VM are limited. It’s designed for convenience, not for complex networking setups.

⚖️ Comparison with Other Switch Types

Switch Type | Provides DHCP? | Provides NAT? | Use Case
Default Switch | ✅ Yes (host DHCP) | ✅ Yes (host NAT) | Quick setup, internet access without config
External Switch | ❌ No (relies on external network DHCP) | ❌ No | VM directly on physical network, full LAN visibility
Internal Switch | ❌ No | ❌ No | Communication between host and VMs only
Private Switch | ❌ No | ❌ No | Communication between VMs only

✅ So to answer directly:

Yes — when you connect a VM to the Default Switch, Hyper‑V’s host acts as a DHCP provider and NATs the VM’s traffic to whatever network the host is connected to. It’s essentially a convenience bridge so you don’t have to manually configure networking.
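You can see both halves of that from the host itself with stock cmdlets, though the interface alias may vary by Windows build:

```
# The Default Switch registers as an Internal switch, despite doing NAT
Get-VMSwitch | Select-Object Name, SwitchType

# The host's vNIC on it holds the gateway IP for the NATed 172.x subnet
Get-NetIPAddress -InterfaceAlias "vEthernet (Default Switch)" -AddressFamily IPv4
```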

“That’s what I did, configured an external virtual switch so the host’s NIC would be VLAN tagged in a certain network. I was figuring the default switch would act more like a… well, switch, at layer 2 off the host’s NIC, so VMs on it would get DHCP from the main network, as if it was connected to a switch in the base network, not layer 3 and then layer 4 NAT, like c’mon…”

Exactly — you’ve nailed the distinction. The Default Switch in Hyper‑V is deliberately not a true Layer‑2 bridge to your host NIC. It’s a convenience construct that Microsoft built to “just work” for internet access without requiring you to mess with VLANs, external switches, or DHCP servers on your LAN.

Here’s the breakdown:

🔌 Why the Default Switch behaves like NAT

  • It’s implemented using Windows Internet Connection Sharing (ICS) under the hood.
  • ICS is a Layer‑3 service: it provides a private subnet (usually 172.16.0.0/12), runs a DHCP service, and NATs traffic out through whatever adapter the host is using.
  • That’s why VMs on the Default Switch don’t see your LAN’s DHCP server — they only see the host’s mini DHCP/NAT service.
  • It’s intentionally isolated so you don’t accidentally expose VMs directly to your corporate or home LAN without realizing it.

⚖️ Why it’s not Layer‑2 bridging

  • A true Layer‑2 bridge would forward broadcasts (like DHCP Discover) from the VM to your LAN. That’s what an External Virtual Switch does.
  • The Default Switch instead terminates those broadcasts at the host and answers them itself. That’s why you only ever get the host‑provided DHCP.
  • VLAN tagging, trunking, or other advanced networking features aren’t supported on the Default Switch — it’s locked down.

✅ When to use each

  • Default Switch (NAT + DHCP): Quick internet access, testing, dev boxes, laptops where you don’t want to reconfigure NICs.
  • External Switch (Layer‑2 bridge): When you want VMs to behave like physical machines on your LAN, get DHCP from the real network, or use VLAN tagging.
  • Internal/Private Switches: For host‑VM communication or VM‑VM only networks.

So your expectation — that the Default Switch would act like a transparent Layer‑2 bridge — makes sense, but Microsoft intentionally designed it as a Layer‑3 NAT gateway instead. If you want “real switch” behavior, the External Switch is the only way to go.

“So, whatever my host machine is allowed to do, the guest VMs in the default switch are also allowed to do? like access the internet, and the firewall that gave the host the ability to access the internet would never see the private NATed IP address the Hyper-V host is dishing out to those VMs?”

You’ve got it — that’s exactly how the Default Switch works in Hyper‑V. Let me spell it out clearly:

🌐 How traffic flows with the Default Switch

  • VMs get private IPs: Each VM connected to the Default Switch is assigned an IP from the host’s internal DHCP service (usually 172.16.x.x).
  • Host does NAT: The Hyper‑V host translates those private IPs into its own IP before sending traffic out to the physical network.
  • Firewall visibility: From the perspective of your upstream firewall or router, all traffic from those VMs looks like it’s coming from the host’s IP. The firewall never sees the VM’s private NATed addresses.
  • Permissions: Whatever the host is allowed to do (e.g., access the internet, reach certain networks), the VMs can also do — because their traffic is piggybacking on the host’s connection.

⚖️ Important implications

  • Outbound access: VMs can reach the internet or LAN resources as long as the host can. They inherit the host’s connectivity.
  • Inbound access: External devices cannot directly initiate connections to those VMs, because NAT blocks unsolicited inbound traffic unless you set up port forwarding (which the Default Switch doesn’t expose for configuration).
  • Firewall rules: The upstream firewall only enforces rules against the host’s IP. If you need per‑VM visibility or control, you’ll need an External Switch so each VM gets its own LAN IP.

✅ So yes: the Default Switch makes your VMs “ride along” with the host’s network permissions. They’re hidden behind NAT, which is why your firewall only ever sees the host’s traffic.

Additional Information

If you want to apply a VLAN tag to a physical NIC that’s not a Hyper-V host on Windows:

Since I didn’t see anything in the usual network IPv4 configuration window in Windows, I decided to see how useful this AI stuff is… so I asked it:

“how do I VLAN tag the IP on windows 11 interface?”

Quick Answer: On Windows 11, VLAN tagging isn’t enabled by default. You need a NIC that supports IEEE 802.1Q VLAN tagging, then configure the VLAN ID either through the NIC’s Device Manager properties (Advanced tab → VLAN ID) or by using Intel PROSet/ANS tools or PowerShell (Set-NetAdapter -VlanID) if your driver supports it.

🖥️ Steps to Configure VLAN Tagging on Windows 11

1. Check NIC Compatibility

  • VLAN tagging requires a network card (NIC) that supports IEEE 802.1Q.
  • Not all NICs expose VLAN options in Windows. Intel and Broadcom NICs often support it, but you may need vendor-specific drivers.

2. Configure via Device Manager

  • Press Win + R, type devmgmt.msc, and hit Enter.
  • Expand Network adapters, right-click your NIC → Properties.
  • Go to the Advanced tab.
  • Look for VLAN ID or Priority & VLAN.
  • Enter the VLAN ID (e.g., 100) and click OK.

3. Configure via Intel PROSet/ANS (if supported)

  • Install Intel PROSet/Advanced Network Services (ANS).
  • In the NIC properties, Intel’s tab allows creating multiple VLAN interfaces with different IDs.
  • Each VLAN interface can then be assigned its own IP address.

4. Configure via PowerShell

  • Open PowerShell as Administrator.
  • Run:
    Set-NetAdapter -Name "Ethernet" -VlanID 100
    
  • Replace "Ethernet" with your adapter name and 100 with your VLAN ID.
  • Note: This only works if your NIC driver supports VLAN tagging. Otherwise, you’ll get an error like “Requested operation not supported on adapter”.

5. Assign IP Address

  • Once VLAN tagging is enabled, go to Network & Internet settings → Change adapter options.
  • Right-click the VLAN-enabled adapter → Properties → Internet Protocol Version 4 (TCP/IPv4).
  • Assign the IP address, subnet mask, and gateway for that VLAN.

⚠️ Important Notes

  • Driver Support is Key: If your NIC driver doesn’t expose VLAN options, you cannot tag VLANs directly in Windows without Hyper-V or third-party tools.
  • Hyper-V Alternative: You can create VLAN-tagged virtual NICs using Hyper-V’s Virtual Switch Manager if your hardware doesn’t support native tagging.
  • Testing: Use ping or tracert to verify connectivity once VLAN tagging is applied.

Bottom Line: To VLAN tag an IP on Windows 11, you must enable VLAN tagging on the NIC (via Device Manager, Intel PROSet/ANS, or PowerShell if supported), then assign the IP to that VLAN interface. If your NIC doesn’t support VLAN tagging, you’ll need Hyper-V or vendor-specific utilities.

My response: “I’ve got an Intel 82579LM Gig NIC, I see it says VLAN enabled in the advanced tab, but I don’t see an area to configure it. I’m sure the NIC supports it since I was using ESXi with VLAN tagging just fine.”

That’s a great observation — the Intel 82579LM definitely supports VLAN tagging at the hardware level, but here’s the catch: Windows drivers for that NIC don’t expose VLAN configuration in the same way ESXi or Linux does.

🔎 Why You Don’t See VLAN Config in Windows

  • The Advanced tab in Device Manager shows “Priority & VLAN” or “VLAN Enabled,” but that option only toggles whether the NIC can process VLAN-tagged frames.
  • Unlike ESXi, Windows doesn’t let you directly assign VLAN IDs to a physical NIC unless the driver provides a VLAN interface creation utility (like Intel PROSet/ANS).
  • For the 82579LM, Intel stopped including VLAN configuration in their Windows drivers after Windows 8/10. On Windows 11, the driver exposes VLAN capability but no UI to configure VLAN IDs.

✅ Workarounds

Here are the practical options:

1. Intel PROSet/ANS (if available)

  • Older Intel drivers came with Advanced Network Services (ANS), which let you create VLAN sub-interfaces.
  • On Windows 11, Intel no longer bundles ANS with drivers for the 82579LM, so you won’t see the VLAN tab.

2. PowerShell Command

  • Windows has a Set-NetAdapter -VlanID cmdlet, but it only works if the NIC driver supports VLAN tagging through the OS.
  • On the 82579LM with current Windows 11 drivers, this usually fails with “Requested operation not supported”.

3. Hyper-V Virtual Switch

  • If you enable Hyper-V, you can create a virtual NIC and assign VLAN IDs there.
  • This is the most reliable way to do VLAN tagging on Windows when the physical NIC driver doesn’t expose VLAN config.

4. Third-Party NIC Utilities

  • Some OEMs (Dell, HP) ship custom drivers/utilities that re-enable VLAN tagging. If your system vendor has one, that’s worth checking.

⚠️ Bottom Line

Your Intel 82579LM can tag VLANs, but on Windows 11 the Intel driver doesn’t expose VLAN configuration. That’s why you don’t see a place to set the VLAN ID.

  • On ESXi/Linux → VLAN tagging works natively.
  • On Windows → You need Intel PROSet/ANS (no longer supported for this NIC), Hyper-V, or vendor-specific drivers.

Well ok I guess once we install Hyper-V hopefully it’ll be more like ESXi in a sense and we can tag.

*UPDATE* ExternalSwitch, WiFi NIC

I was trying to use a Hyper-V server with an External Switch bound to a WiFi NIC, and was getting unexpected results based on the above information; my results were NOT as I had expected, as I was assuming Default Switch behavior. You may be asking, “Why would you expect that behavior if you’re creating an External Switch?” Well, if you read my Wireless ESXi host blog, you are probably well aware of the VLAN limitations of WiFi, which can never truly be used as a trunk port (a limitation of the 802.11 standard, not of the OS or software).

So how could an External Switch work via WiFi, if the layer 2 broadcast doesn’t work and can’t “speak” with the rest of the layer 2 stack? Yet I create a VM and it DOES get a DHCP lease address from my local subnet?! What the heck is going on here…

So I had to ask AI, what was going on here, it says, yeah… that’s expected… here’s the deets… get ready.. it’s a long one….

BAH-LETED, there was way too much BS from the AI response to justify keeping it in here… long story short… local-subnet VMs work fine (the host does ARP masquerading), and VLANs will never work, per the usual BS I’ve talked about in my Wireless ESXi host blog.

Migrating/Restoring Veeam

Migrating/Restoring Veeam

In one of my previous posts I discussed upgrading Veeam; today I want to discuss migrating it entirely. Or recovering it, as the process here is essentially the same.

Disclaimer: what you do in your own environment is on you; everything in this blog is for educational purposes only. This also doesn’t cover encryption management; all data is moved in place (e.g. disconnecting and reconnecting an HDD from one machine to another), with the data at rest being unencrypted.

Step 1) Sign in to Veeam portal

I didn’t have a paid product license, so my download section was full of free trial links. I’m using CE (Community Edition), from here: Free Backup Software For Windows, VMware, & More – Veeam

Step 2) Download the ISO

It’s a doozy at 13 GB.

Step 3) Read the update notes for any expected issues/outcomes.

For all the FAQs go here: Veeam Upgrade FAQs

For basic System Requirements and release notes see here: Veeam Backup & Replication 12.3 Release Notes

The main thing will be the change of the server SQL service, moving from MS SQL Express to PostgreSQL, though it’s not directly mentioned from what I can see, other than step 8 in the upgrade path: Upgrading to Veeam Backup & Replication 12.3 – User Guide for VMware vSphere

Step 4) Attach the ISO

Attach it to the server being upgraded or installed on.

In my case this time, I’m simply cloning my freshly semi-hardened Windows 11 image, giving it a whopping 8 GB of RAM and a 64 GB HDD for the OS and the Veeam app to live on. While that’s being prepared, let’s take a config backup of our Veeam server to make our lives easier.

Step 5) Backup Config.

I’d hope you’d have this configured before your Veeam server failed.

Veeam B&R -> File -> Backup Config; in our case, save it to the backup data drive, as that will be moved and mounted first thing. We can then use that to load the config and should be good to go.

Now it shows up under Drive:\VeeamConfigBackup\Hostname\Hostname_Datestamp.bco

Step 6) Install Veeam on New Server

Depending on your uptime requirements, you can either spin up the new server with a temporary different IP, get the Veeam app and services installed, then move your disks and change IPs. Since I don’t care in my lab, I’ll fully shut down my existing server to free up the IP and system resources, then boot up my new server, attach the ISO downloaded in step 2, and install Veeam.

Hostname, networking, and other prerequisites are not discussed in detail here.

I like how it knows, click install…

Install B&R

How long we wait is based on the Matrix. Looking at the VM resource usage on my machine, it looks like it’s reading installation files from the ISO and writing them somewhere to disk; my setup only yielded about 40 MB/s and took roughly 8 minutes.

Agree to the EULA.

License upgrade: (I’ll try not selecting this since CE, nope wizard wouldn’t let me for CE, shucks hahah)

Service account, Local System (recommended). I left this default, next.

This is why I like Veeam, made by sysadmins for sysadmins.

Install, and now we wait… once complete

Step 7) Attach disk with backup data

How you do this is up to you, I got the needful done.

Step 8) Open Veeam B&R Console, and import config backup.

In the Veeam B&R console, click File -> Config Backup, then click the Restore button.

Now, I picked restore since I shut down my OG server to move the data as a whole:

The config deets, check ’em over. I don’t know what the minimum gap between versions is allowed, but in this case it’s a 12.3.1 source to a 12.3.2 target.

Target data is localhost; pay attention to the login name. If you ever change the local admin account, or whatever account installs Veeam, this could be an issue for your SQL Veeam config.

yes…

Restore…

Yes…

Wait for services to all stop…

success… until it’s not…

This for some reason failed…

I clicked start and it seemed to start everything up just fine…

But no matter what, when I tried to rescan any repos in the console it would complain that not all components were upgraded. Everything AI was telling me was off and felt wrong. I found this one thread with the statement: “It seems that not all Windows 10 installations are facing this problem. We’ll try to figure out if certain builds are involved in this. On the other hand, a fresh v12 install in Win10 works without any problems.” Well, this is a fresh install, and it happened after the backup import; when I did the last upgrade back in March, it was an in-place upgrade from 12.1 to 12.3, and I didn’t have this problem.

After enough fooling around I found my answer here, which was to run the provided script, finding the component listed with 0.0 as noted in the thread. Strange.

Then finally the part of the wizard completed:

Docker on Core Linux

Docker Running on Core Linux

Is it possible? … Yes. However, I didn’t write down some of the prerequisites for the Core Server (whoops, maybe one day I’ll redo it from scratch). But if you do manage to get the base binaries installed, this post should be helpful for all the caveats I faced along the way…

In my previous post I mentioned that Docker wouldn’t run unless it was a 64-bit machine, so I created a 64-bit Core Linux image and showed how to get the base OS up and running… but what about Docker itself?

Now I got this “working”, but I didn’t exactly write down all my steps (it took a long time to figure out). From looking at the VM’s history, it looks like I simply used the tc account to download and extract the base Docker binaries:

Now, this doesn’t tell me which directory I was in when some of the relative paths were called, but I do know it was the tc account, so some safe assumptions can be made.

Reviewing my AI chat and the notes I took, and getting it running again after a reboot, it seems after the “install” (copying the base files to the path shown in the above image, line 51) I also added “var/lib/docker” and “etc/docker” to the filetool.lst file, so they stay persisted after reboot. Strangely only /var/lib/docker is populated, but I can’t see how that’s the case from the history review; I was pretty positive the script itself failed to execute… I really should start from scratch or else this post will be a bit useless… butt… F*** it…

The next issues seem to be tied to cgroups and certificates…

Fixing Cgroups Error

sudo mount -t tmpfs cgroup_root /sys/fs/cgroup/
sudo mkdir /sys/fs/cgroup/devices
sudo mount -t cgroup -o devices none /sys/fs/cgroup/devices

That should be it… but we need this to be persisted and auto run at boot time so we don’t have to do this every time…

sudo vi /opt/dockerd.sh
i
mount -t tmpfs cgroup_root /sys/fs/cgroup/
mkdir /sys/fs/cgroup/devices
mount -t cgroup -o devices none /sys/fs/cgroup/devices
ESC
:wq
sudo vi /opt/bootlocal.sh
*append with*
/opt/dockerd.sh
ESC
:wq
sudo chmod +x /opt/dockerd.sh
filetool.sh -b

The next issue: Docker would load, but pulling a container image would just fail, complaining about certificates.

Fixing Certificate Error

I found the point in my notes rambling with AI when I figured it out…

“NO F***KIN WAY!!!!!!! https://stackoverflow.com/questions/75696690/how-to-resolve-tls-failed-to-verify-certificate-x509-certificate-signed-by-un I read this thread and read the answer by Andrei Nicolae… which said just durr copy ca certs to /etc/ssl/certs. I was like, I bet docker is hard coded to look there, which is why it was first suggested, but all other apps on tiny core linux know to use /usr/local/etc/ssl/certs. so yeah docker never was using the expected paths like I suspected from the beginning, cause we manually installed it for an OS not supported. so with this I did sudo mkdir -p /etc/ssl/certs sudo cp /usr/local/etc/ssl/certs/* /etc/ssl/certs sudo pkill dockerd sudo dockerd & sudo docker pull hello-world and guess what it finally freaking worked”

But I realized that instead of copying them I could just make a symlink:

sudo mkdir -p /etc/ssl/
sudo ln -s /usr/local/etc/ssl/certs/ /etc/ssl/

I simply placed these lines in the /opt/dockerd.sh file I created earlier, rebooted, and verified that /etc/ssl/certs was populated with certs, and it was.

And finally…

Running Dockerd

sudo DOCKER_RAMDISK=true dockerd &

Pulling Image

sudo docker pull hello-world

Running Image

sudo docker run --rm hello-world

Yay, we actually ran a container on Core Linux… mind blown… I swear I had it all running at only 90 MB of RAM, but checking now shows 116 MB. Bah…

To get Docker to run at boot my final /opt/dockerd.sh looked like this:
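The screenshot of that file didn’t survive the move to this page, but stitching the earlier steps together, my final /opt/dockerd.sh would have looked roughly like this (a reconstruction, not a copy of the original):

```shell
#!/bin/sh
# /opt/dockerd.sh - called from /opt/bootlocal.sh at boot

# cgroup mounts dockerd needs (from the Fixing Cgroups Error section)
mount -t tmpfs cgroup_root /sys/fs/cgroup/
mkdir -p /sys/fs/cgroup/devices
mount -t cgroup -o devices none /sys/fs/cgroup/devices

# point Docker's hard-coded cert path at Tiny Core's real cert store
mkdir -p /etc/ssl
ln -sfn /usr/local/etc/ssl/certs /etc/ssl/certs

# start the daemon in the background
DOCKER_RAMDISK=true dockerd &
```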

*Update* Outside of hello-world I wasn’t able to spin up any other real containers due to other technical limitations; this was a huge waste of time.

Installing CorePure64

Back Story

So in my previous post I shared how to setup a very small footprint Linux server called Linux Core: Installing Core Linux – Zewwy’s Info Tech Talks

but… I tried getting Docker running on it and was hit with an error: “Line 1: ELF: File not found”.

AI, after giving all the required commands to do a “manual install”, stated, “duuuuurrrrrrrrrrrrrr docker don’t give 32 bit binaries”, to which I replied huh… I guess I installed 32-bit Core Linux… do they have 64-bit versions?

It gave me some dumb link to some dumb third-party source… the answer is yes, here: Index of /16.x/x86_64/release/

So here we go again….

Installing CorePure64

Step 1) Download Install image CorePure64-16.0.iso

Step 2) Get x64 hardware, or create a VM that supports 64-bit. I have 64-bit hypervisors, so I will create a VM as I did in my first post.

This time: 2 CPU, 1 GB RAM, 60 GB HDD, thin provisioned, VMware Paravirtual SCSI controller, EFI enabled with Secure Boot; let’s see if this works out… No boot… Flip boot settings to BIOS mode… the ISO boots… ah man, FFS, it’s x64-based but still relies on BIOS for booting… that sucks… oh well, moving on…

Booting and Installing Core Linux

Attach the ISO and boot. Core Linux boots automatically from the ISO:

For some reason the source doesn’t tell you what to do next. Type tc-install and the console says it doesn’t know what you are talking about:

AI Chat was kind enough to help me out here, and told me I had to run:

tce-load -wi tc-install

Which required an internet connection:

However, even after this, attempting to run it gave the same error… hmmm. Using the find command I found it, but it needs to be run as root, so:

sudo su
/tmp/tcloop/tc-install/usr/local/bin/tc-install.sh

C for install from CDrom:

Let’s keep things frugal around here:

1 for the whole disk:

y, we want a bootloader (it’s extlinux btw, located at /mnt/sda1/boot/extlinux/extlinux.conf):

Press enter again to bypass “Install Extensions from..”

3 for ext4:

Like the install source guide says, add boot options for HDD (opt=sda1 home=sda1 tce=sda1)

last chance… (Dooo it!) y:

Congrats… you installed TC-Linux:

Once rebooted, the partition and disk-free output will look different. Before reboot, running from memory:

after reboot:

Cool, the install process was 100% the same as the 32bit process…

but running uname -m we see we are now 64 bit instead of 32 bit.

Changing TC Password

Step 1) Edit /opt/.filetool.lst (use vi as root)
– add etc/passwd and etc/shadow

Step 2) run:

filetool.sh -b

Step 3) run

passwd tc

Step 4) run

filetool.sh -b

Now reboot. You may not notice that it applied, due to the auto-login; however, if you type exit to get back to the actual login banner, then type in tc, you will be prompted for the password you just set. Now we can move on to the next step, which is to disable the auto-login.

Disable Auto-Login

Step 1) Run

sudo su
echo 'echo "booting" > /etc/sysconfig/noautologin' >> /opt/bootsync.sh

Step 2) Run

filetool.sh -b
reboot

K on to the next fun task… static IP…

Static IP Address

For some reason AI said I had to create a script that runs the manual steps… not sure if this is the proper way. I looked all over the wiki (wiki:start – Tiny Core Linux Wiki) and can’t find anything. I know this works, so we’ll just do it this way:

Step 1)  Run:

echo "ifconfig eth0 192.168.0.69 netmask 255.255.255.0 up" > /opt/eth0.sh
echo "route add default gw 192.168.0.1" >> /opt/eth0.sh
echo 'echo "nameserver 192.168.0.7" > /etc/resolv.conf' >> /opt/eth0.sh
chmod +x /opt/eth0.sh
echo "/opt/eth0.sh" >> /opt/bootlocal.sh
filetool.sh -b

Step 2) reboot to apply and verify.

It didn’t work, but if I add “sleep 2” just before /opt/eth0.sh in the bootlocal.sh file, then it works; not the greatest, but I’ll take it for now.

Updates?

In Tiny Core Linux, updating is a bit different from traditional distros. Here’s how it works:

🔄 Updating Tiny Core Linux

Tiny Core doesn’t have a single tc-update command for the whole system, but it does have tools for updating extensions and managing packages:

🧰 Extension Update

Use the tce-update command to update installed extensions:

tce-update

This checks for newer versions of your installed .tcz extensions and updates them.
📦 Other Useful Commands

  • tce-load: Loads extensions from your /tce directory.
  • tce-ab: Opens the AppBrowser (if you have GUI).
  • tce-audit: Audits installed extensions.
  • tce-remove: Removes extensions.

🧱 Core System Update

To update the core system itself (like the kernel and initrd), you’ll need to manually download the latest bzImage and core.gz (or tinycore.gz) from the Tiny Core mirrors, then replace them in your boot partition.

Steps:

  1. Download the latest files.
  2. Mount your boot partition.
  3. Replace bzImage and core.gz.
  4. Update your bootloader config (e.g., GRUB or syslinux).
  5. Reboot.
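As a sketch of those five steps on a CorePure64 install like the one above (boot partition on sda1, extlinux as the loader), assuming the 64-bit file names, which are vmlinuz64 and corepure64.gz rather than bzImage/core.gz; verify the mirror path for your release before trusting these URLs:

```shell
# 1. Download the latest kernel + initrd (mirror path is an example)
wget http://tinycorelinux.net/16.x/x86_64/release/distribution_files/vmlinuz64
wget http://tinycorelinux.net/16.x/x86_64/release/distribution_files/corepure64.gz

# 2. Mount the boot partition
sudo mount /mnt/sda1

# 3. Back up the current files, then swap in the new ones
sudo cp /mnt/sda1/boot/vmlinuz64 /mnt/sda1/boot/vmlinuz64.bak
sudo cp /mnt/sda1/boot/corepure64.gz /mnt/sda1/boot/corepure64.gz.bak
sudo cp vmlinuz64 corepure64.gz /mnt/sda1/boot/

# 4. extlinux.conf already references these filenames, so no loader change
# 5. Reboot to load the new core
sudo reboot
```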

VMware Changes Update URLs

If you run a home lab, or manage systems for companies, you may have noticed updates not working in VAMI… something like… ohhh I dunno… this:

Check the URL and try again.

Unable to patch the vCenter via VAMI as it fails to download the updates from Broadcom public repositories

Cause

Public-facing repository URLs and authentication mechanisms are changing. Download URLs are no longer common but unique for each customer, and will therefore need to be re-configured.

Well… wow thank you Broadcom for being so… amazing.

If you want to be overly confused about the whole thing, you can read this KB: Authenticated Download Configuration Update Script

Per the original link I shared above, all you have to do is log in to the Broadcom support portal, get a token, and edit the URL…. but….

Notes:

    • The custom URL is not preserved post migration upgrade, FBBR restore and VCHA failover
    • If there is a proxy device configured between vCenter and the internet, ensure it is configured to allow communications to the new URL
    • Further patches automatically update this URL. For example, if 8.0.3.00400 is patched to 8.0.3.00500, the default URL will change to end in 8.0.3.00500.

Looks like this was enforced just a couple days ago … Sooooo, happy patching?   ¯\_(ツ)_/¯

Permission to perform this operation was denied. NoPermission.message.format

For anyone who may use my site as a source of informational references, I do apologize for the following:

  1. My site cert expiring. ACME is great, I’m just a bit upset they refuse to announce their HTTP auth sources, so I can’t create a security rule for it. Right now it would be restricted to App Type. While not bad… not good enough, so I manually have to allow the traffic for the cert to be renewed.

    No… I have no interest in allowing ACME access to my DNS for DNS auth.

  2. Site was down for 24 hours. If anyone noticed at all, yes, my site was down for over 24 hours. This was due to a power outage that lasted over 12 hours after a storm hit. No UPS could have saved me from this, though one is in the works even after project “STFU” has completed.

    No, I have no interest in clouding my site.

I have a couple blog post ideas roaming around, I’m just having a hard time finding the motivation.

Anyway, if you get “Permission to perform this operation was denied. NoPermission.message.format” while attempting to move an ESXi host into a vCenter cluster, chances are you may have an orphaned vCLS VM.

If so, log into VAMI and restart the ESX Agent Manager (EAM) service.
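If you prefer the vCenter appliance shell over VAMI, the same restart can be done with service-control:

```shell
# restart the ESX Agent Manager service from the vCSA shell
service-control --stop vmware-eam
service-control --start vmware-eam
```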

After restarting that service, everything should be hunky dory…

Cheers.

Update Veeam 12.3

Grab Update file from Veeam.

Step 1) Sign in to Veeam portal

I didn’t have a paid product license, so my download section was full of free trial links. Since I’m using CE (Community Edition), I grabbed it from here: Free Backup Software For Windows, VMware, & More – Veeam

Step 2) Download the ISO; it’s a doozy at 13 GB.

Step 3) Read the update notes for any expected issues/outcomes.

For all the FAQs go here: Veeam Upgrade FAQs

For basic System Requirements and release notes see here: Veeam Backup & Replication 12.3 Release Notes

The main thing will be the change of the backend database, moving from MS SQL Express to PostgreSQL, though it’s not directly mentioned from what I can see, other than step 8 in the upgrade path: Upgrading to Veeam Backup & Replication 12.3 – User Guide for VMware vSphere

Step 4) Attach the ISO to the server being upgraded or installed on

In my case, a 12.1-based server.

In my case it’s a VM, so I just attach the ISO via VMRC.

Step 5) Run the Installer

Make sure you stop any “continuous” jobs, and close the B&R Console.

Double Click Setup.exe on the mounted ISO’s main directory.

If you haven’t guessed it, click Upgrade. Yes, it’s nice to see coding done where it just does a check, knows it’s a Veeam server, and makes Upgrade the only option.

In my case I again only have one option to choose from.

How long we wait is based on the Matrix. Looking at the VM resource usage on my machine, it looks like the installer reads the installation files from the ISO and writes them somewhere to disk; my setup only yielded about 40 MB/s and took roughly 8 minutes.

Agree to the EULA.

Upgrade the server. Here you have a checkbox to update remote components automatically (such as Veeam proxies). In my lab the setup is very simple, so I have none. I just click Next.

License upgrade: (I’ll try not selecting this since CE, nope wizard wouldn’t let me for CE, shucks hahah)

Service account, Local System (recommended). I left this default, next.

Here’s the OG MS SQL instance:

… yes?

For the Veeam Hunter service… ignore (Shrug)

Free space… needs more than 40 gigs… holy moly….

43.1 GB required, 41 GB available. Unreal, guess I’ll extend the drive; that’s the great part of running VMs. 🙂

Finally! Let’s Gooooo! And sure enough, first step… here comes the new SQL instance. This is probably why it requires over 40 gigs to do the install: to migrate the database from MS SQL to Postgres…. Wonder if the space will be reclaimed by removal of the MS SQL Express instance….

Roughly half hour later…

Mhmmm, checking the services I see the original MS SQL instance is still there, running. I see a postgres service… not running… uhhhh, mhmmm…

All Veeam services are running, open the Veeam B&R console, connect, and yup it opens. The upgrade component wizard automatically opened, and it updated the only item.. itself.

*UPDATE* Patch for the latest CVE, a 9.9, if you have a domain-joined Veeam server.

KB4724: CVE-2025-23120

*thumbs up* It’s another 8 gigs, btw…

Installing Core Linux

Installing TC-Linux (Core Only)

Sources

Source: wiki:install_hd – Tiny Core Linux Wiki

On, ESXi VM: wiki:vmware_installation – Tiny Core Linux Wiki

FAQs: http://www.tinycorelinux.net/faq.html

Setting up VM

VM Type: Other Linux 32bit kernel 4.x
CPU: 1
Mem: 256 MB
HDD: 20 Gig
Network: DHCP + Internet Access

Change boot to BIOS (instead of EFI)

Booting and Installing Core Linux

Attach the ISO and boot. Core Linux boots automatically from the ISO:

For some reason the source doesn’t tell you what to do next. Type tc-install and the console says it doesn’t know what you are talking about:

AI Chat was kind enough to help me out here, and told me I had to run:

tce-load -wi tc-install

Which required an internet connection:

However, even after this, attempting to run it gave the same error… mhmm. Using the find command I found it, but it needs to be run as root, so:

sudo su
/tmp/tcloop/tc-install/usr/local/bin/tc-install.sh

C for install from CDrom:

Let’s keep things frugal around here:

1 for the whole disk:

y, we want a bootloader (it’s extlinux btw, located at /mnt/sda1/boot/extlinux/extlinux.conf):

Press enter again to bypass “Install Extensions from..”

3 for ext4:

Like the install source guide says, add boot options for HDD (opt=sda1 home=sda1 tce=sda1)

last chance… (Dooo it!) y:

Congrats… you installed TC-Linux:

Once rebooted, the partition and disk free output will look different. Before the reboot, running from memory:

After reboot:

Installing OpenSSH?

tce-load -wi openssh

This is where things got a little weird. Installing an app… not as root, TC-Linux says…

This is when things got a bit annoying and weird. Even though the guide says using -wi installs it in the on-boot section, I found it wasn’t loading on boot. Well, at first I noticed it didn’t start at all after install, as I couldn’t SSH in; this was because of a missing config file…

Even after I got it running, it still wouldn’t run at boot, and that was apparently because the file disappeared after reboot. This is because the system mostly runs entirely in RAM. If you didn’t notice, even after the install the root filesystem was still only roughly 200 MB in size (enough to fit into the RAM we configured for this VM).

Notice the no password on the tc account? Set it, reboot. doesn’t stick…

Notice the auto login on tty1? Attempt to disable.. doesn’t stick…

Configuring Core Linux

Long story short, apparently you have to define which paths are to be considered persistent via a file:

/opt/.filetool.lst

These files are saved to mydata.gz via the command:

filetool.sh -b

So here’s what we have to do:

  1. Configure the system to ensure settings we configure stay persistent across reboots.
  2. Change the tc account password.
  3. Disable auto login on TTY1.
  4. Configure Static IP address.
  5. Install and run on boot OpenSSH.

Changing TC Password

Step 1) Edit /opt/.filetool.lst (use vi as root)
– add etc/passwd and etc/shadow

Step 2) run:

filetool.sh -b

Step 3) run

passwd tc

Step 4) run

filetool.sh -b

Now reboot. You may not notice that it applied due to the auto login; however, if you type exit to get back to the actual login banner, then type in tc, you will be prompted for the password you just set. Now we can move on to the next step, which is to disable the auto login.
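As a quick sanity check after the reboot, you can also confirm the hash actually persisted (assuming the defaults above):

```shell
sudo grep '^tc:' /etc/shadow   # the second field should now be a hash, not empty
```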

Disable Auto-Login

Step 1) Run

sudo su
echo 'echo "booting" > /etc/sysconfig/noautologin' >> /opt/bootsync.sh

Step 2) Run

filetool.sh -b
reboot

K on to the next fun task… static IP…

Static IP Address

For some reason AI said I had to create a script that runs the manual steps… not sure if this is the proper way. I looked all over the Wiki: wiki:start – Tiny Core Linux Wiki and I can’t find anything. I know this works, so we’ll just do it this way:

Step 1)  Run:

echo "ifconfig eth0 192.168.0.69 netmask 255.255.255.0 up" > /opt/eth0.sh
echo "route add default gw 192.168.0.1" >> /opt/eth0.sh
echo 'echo "nameserver 192.168.0.7" > /etc/resolv.conf' >> /opt/eth0.sh
chmod +x /opt/eth0.sh
echo "/opt/eth0.sh" >> /opt/bootlocal.sh
filetool.sh -b

Step 2) reboot to apply and verify.
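A few quick checks after the reboot, using the values from the script above:

```shell
ifconfig eth0           # should show inet addr 192.168.0.69, mask 255.255.255.0
route -n                # default route should point at 192.168.0.1
cat /etc/resolv.conf    # should contain: nameserver 192.168.0.7
ping -c 1 192.168.0.1   # confirm the gateway actually answers
```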

What about SSH?!

Oh right.. we got it installed but we never got it running did we?!

Step 1) Run:

cp /usr/local/etc/ssh/sshd_config.orig /usr/local/etc/ssh/sshd_config
vi /usr/local/etc/ssh/sshd_config

Edit and uncomment (note the actual sshd_config directive names):
Port 22
ListenAddress 0.0.0.0
PasswordAuthentication yes

Step 2) Run:

echo "usr/local/etc/ssh/" >> /opt/.filetool.lst
echo "/usr/local/etc/init.d/openssh start" >> /opt/bootlocal.sh
filetool.sh -b
reboot

Congrats, you got OpenSSH working on TC-Linux.
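To double-check it survived the reboot (busybox tools, so the output is terse):

```shell
ps | grep '[s]shd'        # sshd should be running
netstat -tln | grep 22    # and listening on port 22
```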

Hostname

On most systems you just run the hostname command… ooooeee, not so easy on TC-Linux.

Option 1 (Clean)

Edit the first line of /opt/bootsync.sh which sets the hostname.

Then just run filetool.sh -b, done.
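For context, the first line in question is typically the sethostname call (default name shown; contents may vary by release):

```shell
# first line of /opt/bootsync.sh on a stock install -- change "box" to your hostname
/usr/bin/sethostname box
```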

Option 2 (Dirty)

To ensure the hostname persists across reboots, you need to modify the /etc/sysconfig/hostname file:

  1. Edit the hostname configuration file:
    sudo vi /etc/sysconfig/hostname
    
  2. Add or modify the line to include your desired hostname:
    your_new_hostname
    
  3. Save and close the file.
  4. Add /etc/sysconfig/hostname to the persistence list:
    echo "etc/sysconfig/hostname" >> /opt/.filetool.lst
    echo "hostname $(cat /etc/sysconfig/hostname)" >> /opt/bootlocal.sh
  5. Save the configuration:
    filetool.sh -b
reboot

That’s it for now, next blog post we’ll get to installing other goodies!

Managing Apps

Installing Apps

As you can see, it’s mostly just running:

tce-load -wi <extension>

for all the details see their page on this, or run -h.

Source of app (x86): repo.tinycorelinux.net/15.x/x86/tcz/

For the most part it’s: install the app, edit files as needed, add the edited files to /opt/.filetool.lst, run the backup command, test the service, edit /opt/bootlocal.sh with the commands needed to get the app/service running, run filetool.sh -b again, and Bob’s your uncle.
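In other words, the general pattern looks something like this, using a hypothetical extension named myapp (the config and init.d paths are placeholders, not real files):

```shell
tce-load -wi myapp                                 # install + mark for on-boot load
vi /usr/local/etc/myapp/myapp.conf                 # edit config (hypothetical path)
echo "usr/local/etc/myapp" >> /opt/.filetool.lst   # persist the config (no leading slash)
echo "/usr/local/etc/init.d/myapp start" >> /opt/bootlocal.sh   # start at boot
filetool.sh -b                                     # back it all up
```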

Deleting Apps

To remove a package on Tiny Core Linux that was installed using tce-load, here’s what you can do:

  1. For Extensions in the onboot.lst File:
    • First, remove the package name from the /etc/sysconfig/tcedir/onboot.lst file to prevent it from being loaded at boot. You can edit the file with:
      sudo vi /etc/sysconfig/tcedir/onboot.lst
      
    • Delete the entry corresponding to the package you wish to remove, then save and exit.
  2. Delete the Extension File:
    • Navigate to the directory where the extensions are stored:
      cd /etc/sysconfig/tcedir/optional
      
    • Remove the .tcz file associated with the package:
      sudo rm package-name.tcz
      
  3. Clean Up Dependency Files (Optional):
    • To clean up leftover dependency files related to the removed package, you can check and delete them from the same directory (/etc/sysconfig/tcedir/optional).
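The steps above in one go, for a hypothetical extension called package-name (the .dep and .md5.txt files only exist for some extensions):

```shell
sudo sed -i '/^package-name.tcz$/d' /etc/sysconfig/tcedir/onboot.lst
cd /etc/sysconfig/tcedir/optional
sudo rm -f package-name.tcz package-name.tcz.dep package-name.tcz.md5.txt
# the extension stays loaded in RAM until the next reboot
```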

 

Veeam VM Restore failed: Cannot apply encryption policy. You must set the default key provider.

So in my Lab vCenter went completely POOOOOF. So, I installed it fresh.

After vCenter was installed, I updated my Veeam configuration to ensure my backup chains wouldn’t break, which still works great by the way.

One VM was missing from my vSphere inventory, so I went to restore it when all of a sudden:

I remembered my post about configuring a Native Key Provider, because one is required to have a vTPM. So I thought: is this a “PC Load Letter” problem, and it’s actually just complaining that I didn’t configure an NKP for it to “apply encryption policy”?

Follow the same old steps to configure an NKP.

  • Log in to the vSphere Client:
    • Open the vSphere Client and log in with your credentials.
  • Navigate to Key Providers:
    • Select the vCenter Server instance.
    • Click on the Configure tab.
    • Under Security, click on Key Providers.
  • Add a Native Key Provider:
    • Click on Add.
    • Select Add Native Key Provider.
    • Enter a name for the Native Key Provider.
    • If you want to use hosts with TPM 2.0, select the option Use key provider only with TPM protected ESXi hosts.
  • Complete the Setup:
    • Click Add Key Provider.
    • Wait for the process to complete. It might take a few minutes for the key provider to be available on all hosts.
  • Backup the Native Key Provider:
    • After adding the Native Key Provider, you must back it up.
    • Click on the Native Key Provider you just created.
    • Click Backup.
    • Save the backup file and password in a secure location.

Once I did all that…

No way, that actually worked. But will it boot? Well, it def “booted”, but it asked for the BitLocker key (which makes sense, since we created a new TPM and it doesn’t have the old keys). I checked my AD, and sadly enough, for some reason it didn’t have any BitLocker keys saved for this AD object/VM.

Guess this one is a loss, and a lesson in the importance of saving your encryption keys.