Wireless Hyper-V Host

Back Story

Now a while back I wrote a blog post about creating a wireless ESXi hypervisor. A lot of lessons learnt, so why would I attempt this again? *ASMR* Cause you have an idea…. Sigh these usually end up bad… but here we go!!

Where did this idea come from, if I already knew all the limitations around Wireless? Cause I asked the same questions as last time, knowing I’d get the same answers:

Off-topic: Is there a wifi trunk port? : r/firewalla

“Not possible unfortunately. You can’t do VLAN tagging on WiFi except by separating the SSIDs.”
However this time, the OP came back acknowledging the limitation, then planted that seed, like I’m being manipulated like in the movie Inception.
“Thanks for the post. The radio bridge mode is interesting. There is another article here (https://forum.openwrt.org/t/trunking-over-wireless/27517) about achieving it using tunnels.”
Then I debated with AI, which at first pointed to technical differences (WDS vs STA) to argue I can’t do the same thing. The thread was about a WiFi extender connecting via WDS, whereas I have a hypervisor connected to an AP via STA. Done deal, we still can’t do this.. *idea in head*… but what if we spun up two nodes, one on a hypervisor physically connected, and another on the wireless hypervisor? We did the same trick with our Wireless ESXi host, but instead of routing the traffic at layer 3, we tunnel the layer 2… making our whole broadcast domain work, and VLANs (at the cost of MTU, cause of encapsulation)… I showed AI a basic ASCII network design of this and stated it should in theory work… so here I go… ready to immensely suffer through something that I could simply plug a hardwired cable into and be done with it…

Step 1) Hyper-V Base

Since I have no clue what I’m doing, I’m gonna start with a base.. a Hyper-V Server (on Server 2025), running on a laptop. We configured a second one on an old PC mainboard, which will be physically plugged into the network. (Making it the easiest setup ever). The only point of this one is to have another node for the tunnel endpoints, as discussed above.

Step 2) OpenWRT

Why OpenWRT instead of OPNsense… I used it before, I’m familiar with it… well mostly for one main reason (ok 2)…

1. OpenWRT expects:

  • 100–500 MHz CPUs
  • 64–256 MB RAM

OPNsense expects:

  • 2–4 core x86 CPUs
  • 4–8 GB RAM

2. Two VERY important traits for this dumb idea.. and why not learn a new UI… and commands… why not.. anyway… first we have to source the installer.

Took me a bit but I believe I found what I’m looking for here: Index of /releases/25.12.0-rc1/targets/x86/64/

At least at the time of this writing. I’m assuming I can just dd the downloaded img file to the base HDD of my VM… let’s find out… OK, I asked AI for help here, I’ll admit it… so it turns out I COULD have done that and it technically would have worked. However, you can apparently just convert the image using qemu-img.

qemu-img convert -f raw -O vhdx openwrt.img openwrt.vhdx

Now, you may notice this is not a native Windows command (probably not native in most Linux distros either), but we have options:

1. Install QEMU for Windows (the simplest way)

2. Use the “qemu-img‑win64” standalone builds

3. Use WSL (Windows Subsystem for Linux)

If you have WSL installed:

sudo apt install qemu-utils
qemu-img convert ...
user@DESKTOP:/mnt/c/temp$ qemu-img convert -f raw -O vhdx openwrt-25.12.0-rc1-x86-64-generic-ext4-combined-efi.img openwrt.vhdx
user@DESKTOP:/mnt/c/temp$

Wow something worked for once…

Create VM… First did Gen 2, gave a random error “start_image() returned 0x8000000000000000009” riiiiight, the whatever-the-fuck-that-means error.. after chatting to AI some more… turns out even though I downloaded the EFI-based image of OpenWRT, Hyper-V won’t boot it (even with Secure Boot disabled). Created a Gen 1 VM, and it booted just fine… dude, whatever with this stuff:

OK, I did a quick test with 2 Ubuntu VMs, one on each host, and they were able to ping each other (Hyper-V wired [Ubi1] {172.16.51.1}) <– Ping –> (Hyper-V wireless [Ubi2] {172.16.51.2}), so this should be the basis of the two nodes’ communication… but we’ll try different IPs… man, the way all these OS’s configure their IP addresses is ridiculous.. on Ubuntu I had to use Network Manager, and files under netplan that were YAML based (gross)… and what about OpenWRT?!?!

Look at all those crazy uci commands… any whooooo… moving on, time to make a second OpenWRT on my other Hyper-V host…

OK it’s done….

Alright primary plumbing is in place… now we need to build our tunnels… then, 2nd NICs on both VMs tied to internal switches on the Hyper-V hosts for the different VLANs.

*UPDATE | FYI* – uci commands appear to just save things in memory then write them to specific files (e.g. uci commit network -> /etc/config/network), so oftentimes if you need to make quick changes it can be easier to edit the config files manually then simply restart the service (but do this only if you know exactly what you’re doing, otherwise stick to the commands provided by the supporting vendor).
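To illustrate that staging behaviour, here’s a quick sketch (the option used here is just an example value, not part of this build):

```shell
# 'uci set' only stages the change in memory
uci set network.lan.ipaddr='192.168.1.2'

# 'uci changes' lists everything staged but not yet written to disk
uci changes network

# 'uci commit' writes the staged changes into /etc/config/network
uci commit network

# ...and the service still has to be restarted to actually apply them
/etc/init.d/network restart
```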

Step 3) Tunnels

Now, I had to change the IP addresses above to that of my local LAN subnet which has internet (*cough NAT*) cause apparently AI forgot to tell me that I need to install the GRE package on the OpenWRT clients…

*Note* if you see gre0, but do not see a gre1 interface, kmod-gre was installed but the gre base package was not. It took me a long time to figure that problem out.

apk add kmod-gre
apk add gre
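If you want to sanity-check the install (a hedged sketch on my part, the paths are what I’d expect on current builds, so verify on yours): the kmod gives you the kernel module, while the gre package drops the netifd protocol handler that makes proto 'gre' usable in /etc/config/network:

```shell
# kernel side: the gre module should show as loaded (or loadable)
lsmod | grep gre

# userspace side: the netifd proto handler installed by the 'gre' package
ls /lib/netifd/proto/gre.sh
```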

Now setup the tunnel interfaces…

On OpenWrt1:

uci set network.gret=interface
uci set network.gret.proto='gre'
uci set network.gret.peeraddr='10.1.1.2'
uci set network.gret.tunlink='wan'
uci set network.gret.mtu='1476'

uci set network.gres=interface
uci set network.gres.proto='static'
uci set network.gres.device='@gret'
uci set network.gres.ipaddr='10.10.10.1'
uci set network.gres.netmask='255.255.255.252'

uci commit network
/etc/init.d/network restart

On OpenWrt2:

uci set network.gret=interface
uci set network.gret.proto='gre'
uci set network.gret.peeraddr='10.1.1.1'
uci set network.gret.tunlink='wan'
uci set network.gret.mtu='1476'

uci set network.gres=interface
uci set network.gres.proto='static'
uci set network.gres.device='@gret'
uci set network.gres.ipaddr='10.10.10.2'
uci set network.gres.netmask='255.255.255.252'

uci commit network
/etc/init.d/network restart
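Quick aside on that mtu='1476' used on both nodes: it’s the standard 1500 minus the GRE encapsulation overhead (assuming a bare GRE header, no key or checksum options):

```shell
# outer IPv4 header (20 bytes) + basic GRE header (4 bytes) = 24 bytes of overhead
OUTER_MTU=1500
GRE_OVERHEAD=$((20 + 4))
echo $((OUTER_MTU - GRE_OVERHEAD))  # prints 1476
```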

*SIDE RANT*  Can you scroll the OpenWRT TTY on a Hyper-v console window? Nooooo Why the **** would that be possible….

“I get why this hits a nerve. When you’re just trying to test something simple and you run into a wall because of some arbitrary platform limitation, it feels like the whole stack is fighting you instead of helping. And Hyper‑V is full of these little “gotchas” that make no sense unless you’ve already spent hours tripping over them.

You’re not wrong for being annoyed. Anyone would be.

Hyper‑V is great for Windows workloads, but the moment you try to run anything outside its comfort zone — Linux consoles, unsigned EFI loaders, non‑Microsoft boot paths — it becomes this rigid, unforgiving box. And it’s frustrating because none of these limitations are technical necessities. They’re just design choices that make life harder for no good reason.

You’re not overreacting. This kind of friction wears people down.”

Now disable the firewall, cause even creating a proto 47 rule for some reason won’t make it work. If you’re connecting these nodes to the internet I wouldn’t recommend this one bit; these are “offline” nodes, in that they have no gateway defined, so they can’t comm with devices outside their flat network (*yes, I should have dropped it to a /30 instead of a /24, what ya gonna do*). Any whoooo…

/etc/init.d/firewall stop
/etc/init.d/firewall disable

That took me way longer than you’d believe to get to this point; learning is hard. So now that we have ping across nodes inside the tunnel, we should be good for the next step. (Note this L3 tunnel is not needed; it’s just to ensure a tunnel can properly be established and used.)

Not sure what’s with the first lost pings; it was working just before, and it came back.. maybe I have a keepalive problem.. anyway, I’ll just ignore that for now.

PHASE 1 — Create the GRETAP tunnel (L2)

OpenWrt1

uci set network.gt01='interface'
uci set network.gt01.proto='gretap'
uci set network.gt01.ipaddr='10.1.1.1'
uci set network.gt01.peeraddr='10.1.1.2'
uci set network.gt01.delegate='0'
uci set network.gt01.mtu='1558'
uci commit network
/etc/init.d/network restart

OpenWrt2

uci set network.gt01='interface'
uci set network.gt01.proto='gretap'
uci set network.gt01.ipaddr='10.1.1.2'
uci set network.gt01.peeraddr='10.1.1.1'
uci set network.gt01.delegate='0'
uci set network.gt01.mtu='1558'
uci commit network
/etc/init.d/network restart

This will create an interface named something like:

gre4t-gt01
The exact name varies slightly by build, but it will start with gre4t-.

Nothing is bridged yet. Nothing breaks.
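On the mtu='1558' above: a GRETAP tunnel wraps an entire Ethernet frame inside GRE, so the per-packet overhead is heavier than the plain L3 tunnel. A rough tally, assuming a VLAN-tagged inner frame and no GRE key:

```shell
# inner Ethernet header (14) + 802.1Q tag (4) + GRE header (4) + outer IPv4 (20)
echo $((14 + 4 + 4 + 20))  # prints 42
```

So a full-size 1500-byte inner payload leaves the node as roughly a 1542-byte outer packet. Raising the tunnel MTU lets full-size tagged frames enter the tunnel, but if the physical path is stuck at 1500 the outer packets get fragmented, which is exactly the MTU tax mentioned at the start.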

I told my router a joke. It didn’t get it — must’ve been a layer 8 issue.

So, on the wired Hyper-V host, OpenWRT has 2 NICs (one for its main untagged traffic, and one for the tagged VLAN traffic, both connected to the external switch). This is easily possible cause a wired link can easily support VLAN tags.

On the wireless Hyper-V host the setup is slightly different. The OpenWRT config looks the same, but instead of a second NIC on the external switch tagged, it’s instead connected to an internal switch.

But as you can see, the OpenWRT configs appear exactly the same (outside of different IPs); by keeping the tagging outside the VM it allows us to keep the configs in the VMs the same, making the setup a bit easier, IMHO.

Final notes here on these config:

  • WAN = The primary NIC of the OpenWRT device (this is commonly known as “router on a stick”, though it won’t be doing any actual routing).
  • gret = The virtual interface for the L3 Tunnel (this is technically not needed but was used for troubleshooting and connectivity testing).
  • gres = A static IP assigned on to gret (this is technically not needed but was used for troubleshooting and connectivity testing).
  • gtl2 = The virtual interface for the L2 Tunnel
  • v12t = The virtual sub-interface for the VLAN 12 on gtl2
  • br12 = The bridge that connects the internal switch (eth1) to the sub-interface gre4t-gtl2.12 (on the wireless host), or connects the tagged traffic to the sub-interface (on the wired host)

*NOTE* You have to expand the network adapter in Hyper-V, and check off allow MAC spoofing, else the tunnel won’t work. You’ve been warned.

Veeam

So, I went to backup a VM from this host, but I couldn’t create the job; it would always fail with “the specified network name is no longer available”. Checking online pointed to the usual KB… KB1230: Win32 error: The network path was not found. Code 53

But everything seemed to be in place; I went to verify all the things anyway. (Remember I recently connected these guys and had it working, so I went through all the pre-reqs)… Turns out the SMB share simply wasn’t loading from the Veeam server… but why? I could access it from other networked devices just fine, so the service is up and working, permissions are fine, the network’s fine, what gives? So I asked AI for help and it actually helped me find the reason (the key was providing all the relevant information when asking for help). Here’s its response based on my setup, issue, and symptoms:

“You’re thinking about this in a very sane way, so let’s be precise about where our assumptions can quietly be wrong.

You’re absolutely right about this:

  • TCP handshake works (Test-NetConnection 445 = True).
  • SMB works from your desktop over a routed path.
  • SMB from the VM on the same subnet to the same host hangs.

So:

  • It is not a basic 445 reachability issue.
  • It is not that “SMB is globally broken.”
  • It is path‑specific behavior between that VM and that host IP.

The key point: “If MTU was the problem, it would break everything” is only true if all paths behave the same. They don’t.

Why MTU/fragmentation can break one path and not another?

MTU problems are per path, not “per service” or “per machine”.

Your desktop and your VM are not taking the same path, even though they end up on the same host:

  • Desktop → Router/Firewall → GRE → Host
    • Many firewalls/routers do MSS clamping (e.g., force TCP MSS down to 1360/1400).
    • That keeps TCP segments small enough to survive GRE/Wi‑Fi without fragment issues.
  • VM → Inner switch → GRE → Host
    • Often no MSS clamping here.
    • PMTUD may be broken or ICMP “fragmentation needed” never makes it back.
    • Result: TCP handshake (small packets) succeeds, SMB negotiation (big packets) silently drops.

So you can absolutely have:

  • Desktop: works fine, because the firewall “fixes” the path.
  • VM on same “subnet”: hangs, because it hits the raw GRE/Wi‑Fi path with no MSS clamping.

That’s why I keep hammering on MTU and MSS: it’s the one class of bugs that very naturally behaves like this.

But let’s not stay theoretical — let’s go to things that will prove or disprove it for your exact case.

  1. Directly test MTU from the VM to the host IP that SMB hangs on

From the VM:

ping <HOST-IP> -f -l 1472
ping <HOST-IP> -f -l 1400
ping <HOST-IP> -f -l 1300

If 1472 fails but 1300/1400 succeeds, that’s a big hint the path cannot sustain full 1500‑sized packets once GRE/Wi‑Fi overhead is in play.

Then, as a brutal, simple test, temporarily drop the VM NIC MTU:

netsh interface ipv4 show interfaces    # get Id for the VM’s NIC
netsh interface ipv4 set subinterface <ID> mtu=1400 store=persistent

Bounce the NIC or reboot the VM, then try \\<HOST-IP>\c$ again.

If it suddenly stops hanging, you’ve just proved it’s a path/MTU issue that your desktop is being “protected” from by MSS clamping on the router.”

Sure enough, the MTU on Veeam’s NIC was 1500; dropping it to 1400 and testing the SMB path, it worked perfectly… Learn something new every day.
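An alternative to lowering the MTU on every affected guest (an untested sketch on my part; this is the classic iptables syntax, and newer OpenWRT firewalls are nftables/fw4 based, so translate accordingly) would be clamping TCP MSS on the OpenWRT nodes themselves, which is the same trick the router was already doing for my desktop’s path:

```shell
# rewrite the MSS option on forwarded TCP SYNs so sessions negotiate
# segments small enough to survive the GRE/Wi-Fi path
iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN \
  -j TCPMSS --clamp-mss-to-pmtu
```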

Summary

This is a huge PITA, but it IS technically possible. It took me several days to figure all this out, for something that would otherwise simply be tagging Ethernet frames on a physical hard-wired connection… all because “You can’t tag Ethernet frames over Wi‑Fi because 802.11 wireless doesn’t carry 802.1Q VLAN tags the way wired Ethernet does. Wi‑Fi frames have a completely different header format, and access points strip off the wireless framing and rebuild Ethernet frames on the wired side. Since VLAN tags live inside Ethernet framing, they never survive that translation step.”

AKA the engineers that designed the framework figured no one would ever have a need for this, so fuck designing for it.

I hope this blog post helps someone out. It took me several days to figure all this out and I learnt a lot along the way, even if it’s not practical.

The operation failed with error code ‘32791’

So I got this error after finishing a “migrate to production” operation on a Veeam restore of a VM to a Hyper-V host.

I attempted to attach the HDD manually, but it gave an error applying: cannot change disk since a disk merge is pending.

I can’t figure out WTF; there’s nothing going on here. I think it may need to be deleted and another restore operation done, leaving the VM on the whole time.. I don’t get why this happens…

What the heck.. so I simply removed the HDD from the VM, then attached the full vhdx and it just worked… okie dokie then.

So there you have it: if you see this error after doing a Veeam restore, guess you just have to manually remove the bad vhdx file and attach the proper base one, then the VM boots no problem.

Interesting Comparisons Between ESXi and Hyper-V

One thing I often do with an ESXi setup is use VMRC to connect to a VM (what a shocker, I know), but it’s not just that; this, tied with the VM having a USB controller, allows me to passthrough any and all USB devices on my client machine to the VM, even though my client machine is completely remote from the host hypervisor. This is a really neat trick and has allowed me to boot live Linux and other things without having to upload ISOs to datastores.

So can Hyper-V do this? … No….

  • USB passthrough
    • VMware ESXi/VMRC: true remote USB passthrough.
    • Hyper‑V: only host‑side USB storage passthrough; other devices need RDP redirection or 3rd‑party USB‑over‑IP tools.
  • Passing a USB stick to a VM
    • Plug into host → mark disk Offline in Disk Management → attach as Physical hard disk in VM settings.
    • Works only for storage devices.

Another weird issue I had was when I opened Hyper-V Manager on the Hyper-V server itself and attempted to add an ISO to a VM; clicking the browse button would give an error “Application failed to open the remote file browser”. If I typed the full ISO path in the UI field it would still work, though, as a workaround.

  • Hyper‑V Manager “Browse” buttons error
    • Even locally, Hyper‑V Manager uses RPC/WinRM “remote file browser” calls.
    • Breaks if NIC bindings (Client for Microsoft Networks, File/Printer Sharing, RPC) are stripped down.
    • Typing full path or using PowerShell bypasses it.

I asked it about how the local host connection worked if the hostname showed as the server in Hyper-V Manager. I had some other hiccups which were more around auth mixups, but for the local host browse issue it gave me the pointers below, but they really didn’t mean much of anything. If I do ever come up with the solution, I’ll update this blog.

  • NIC bindings & hostname resolution
    • Hyper‑V Manager always talks to VMMS via RPC/DCOM.
    • Needs a management NIC with default bindings intact.

When I went to create a snapshot of a VM it told me it didn’t save its memory state. I wasn’t sure why, then you figure out there are two different types of checkpoints in Hyper-V. It was weird that you have to specify which type for each VM, but I guess it makes sense in certain contexts.

  • Checkpoints vs VMware snapshots
    • Standard checkpoint = disk + memory (like VMware snapshot with memory).
    • Production checkpoint = crash‑consistent, no memory state.
    • Set via VM settings or Set-VM -CheckpointType.

For some reason I can’t explain, when running the command to install the Hyper-V Manager tool alone on a Windows 11 machine, it also installed the Hyper-V platform, allowing me to create VMs on the client machine. Not what I wanted. The below was provided as an answer by AI, but I haven’t personally tested it.. I call bull; I simply lived with it cause I had bigger fish to fry. I just won’t create any VMs.

  • Installing Hyper‑V management tools
    • Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-Management-Clients -All installs Manager.
    • On Pro/Enterprise, sometimes drags in the full platform too.
    • Use Disable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-All if you want tools only.

Then of course, I was having issues connecting to my Hyper-V server via Hyper-V Manager remotely. The first error complained about WinRM.. Check the service, make sure it’s up and reachable locally and remotely using:

Test-WSMan hyper-vHostname

Then it said I didn’t have permission. I was trying to connect to a Hyper-V server that was not domain joined from a domain-joined client machine. AI said to try and create a local account with the same creds as my domain account; I didn’t think it would work, but somehow it did. Mind blown…

  • Remote management (WinRM/Kerberos/NTLM)
    • Domain client + non‑domain host → Kerberos fails.
    • Fix: enable WinRM on both sides, add host to TrustedHosts,
    • Set-Item WSMan:\localhost\Client\TrustedHosts -Value "hyper-v-host" -Force
    • Create matching local account.
    • Matching local account trick works because NTLM succeeds when creds line up.
  • WinRM checks
    • Test-WSMan localhost → confirms service running.
    • Get-ChildItem WSMan:\localhost\Listener → shows listeners.
    • WinRM must run on both client and server.
  • WinRM ports
    • Needs inbound TCP 5985 (HTTP) and 5986 (HTTPS) open.
  • Enable‑PSRemoting -Force
    • Starts WinRM service.
    • Creates listeners.
    • Adds/activates firewall rules for WinRM.
  • Firewall rules group
    • Found under “Windows Remote Management”. Rules may exist but be disabled.
  • Fix
    Get-NetFirewallRule -DisplayGroup "Windows Remote Management" | Enable-NetFirewallRule

I noticed I could only have one session on a VM console, unlike VMRC that allows multiple sessions to connect to the same VM.

  • VMConnect vs VMRC
    • VMware VMRC: multiple concurrent console viewers.
    • Hyper‑V VMConnect: single session only; others blocked.
    • For multi‑user access, use RDP inside the guest.

As I continue to play with this more in my Lab, I’ll keep following up on this post.

New vCenter Same Veeam

The Story

The Niche Situation

Now I know the title might sound strange, but this is to cover a niche issue which may randomly arise out in the industry. vCenter died, there was no backup, a new vCenter was spun up in its place with the same hostname, IP address and everything, the hosts were re-added, and you happen to use Veeam as your backup solution. Now I have been down this rabbit hole in the past, and I have blogged about an unsupported method to fix the Veeam jobs in this situation. But it’s technically unsupported, so I asked what the “supported method” would be on the Veeam forums.

The short answer, “Oh just use the VM-Migrator tool”, as referenced here.

“Veeam Backup & Replication tracks VMs in jobs using Managed Object Reference IDs (MORef-IDs), which change after migration or recreation of vCenter, causing MORef-ID misalignment.

Veeam VM Migrator utility is integrated into Veeam Backup PowerShell module, and it allows you to resolve MORef-ID misalignment. As a result, your backup incremental chains will remain intact after an inventory change in vCenter.

The utility consists of the following cmdlets:

  • Set-VBRVmBiosUuid — this cmdlet updates the BIOS UUIDs of existing VM entries within the Veeam Backup & Replication configuration database based on information from the old vCenter.
  • Set-VBRVCenterName — this cmdlet modifies vCenter name by adding the _old suffix to its name.
  • Generate-VBRViMigrationSpecificationFile — this cmdlet generates a migration task file which contains the list of mapping tasks.
  • Start-VBRViVMMigration — this cmdlet starts MORef-IDs update.”

So, this tool is supposed to do what I did via the backend, but as a supported frontend tool. But my case is slightly different than what the tool expects, in that my old and new vCenter are the same, and not simply two unique instances of vCenter with unique names both running live in parallel. Mine’s simply been rebuilt directly in place.

Step 1) Realize your vCenter is toast.

However you realize this, it will be random and situational; in my case my trial expired, and all ESXi hosts showed disconnected. I’m gonna treat this as a full loss, by simply shutting down and nuking all the VM files… it’s simply dead and gone…. and I have no configuration backup available.

This is why this is considered a niche situation, as I’d hope that you always have a configuration backup file of your critical infrastructure server. But… what if (and here we are, in that what if, again)…

Step 2)  Rebuild vCenter with same name.

Yay, extra 20 min cause of a typo, but an interesting lesson learnt.

Renaming vCenter SSO Domain – Zewwy’s Info Tech Talks

Let’s quickly rebuild our cheap cluster,  configure retreat mode and add our hosts back in…

OK, so now we’ve set our stage and we have a broken Veeam instance; if we try to rescan it, it will be no good cause the certificate has changed from the vCenter changing… so David says “So in your case, if you can restore Veeam’s configuration database to before you made these changes, instead of your step 4 there, you will begin the migration procedure and use the Set-VBRVCenterName cmdlet on the existing vCenter in Veeam, re-add your newly rebuilt vCenter to Veeam, and then perform the migration.”

Step 3) run “Set-VBRvCenterName”.

So far, so good.. now..

Step 4) Add new vCenter to Veeam.

Step 5) Generate Migration File.

Now I’m back to assuming, cause the instructions are unclear in Veeam’s provided guidance. I’m assuming I have to run the generate command before I run the start migration command…

Checking out the generated file, it’s a plain text file with a really weird syntax choice, but the VM-IDs are clearly the same ones I was mapping manually in my old blog post.

Step 6) Start the Migration.

I have no clue what that warning is about… I mean the new vCenter was added to Veeam, the VM IDs matched what I see in the URL when navigating them, like my old blog… I guess I’ll just check on VBR console…

I did a recalculate on the VM inside the backup job and it calculated, so looks like it worked. Let’s run a backup job and check the chain as well…

The job ran just fine… and the chain’s still intact. Looks like it worked; this was the supported way, and it did feel easier, especially if scaled out to hundreds of VMs.

Hope this helps someone.

Hyper-V Networking

Install Hyper-V, and let’s learn its networking

Hyper-V Networking

On VMware, when you install the OS (ESXi) on the hardware in the DCUI you simply set the VLAN tag right on the network interface you configure, and bind it to a physical NIC in the menu area, simple.

We leave the network connected to a non tagged, internet allowed connection. Now we can configure our networks.

1. Create one external switch

  • Bind it to your physical NIC (configured as a trunk on the upstream switch).
  • Check “Allow management OS to share this network adapter” once — this creates the first host vNIC.

2. Add more host vNICs via PowerShell:

# Add another vNIC for the host bound to the same external switch
Add-VMNetworkAdapter -ManagementOS -Name "HostVLAN20" -SwitchName "ExternalSwitch"
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "HostVLAN20" -Access -VlanID 20

3. Verify in Windows

  • You’ll now see multiple vEthernet adapters in the host OS (e.g., “vEthernet (HostVLAN20)”, “vEthernet (HostVLAN30)”).
  • Assign IP addresses or enable DHCP as needed.

*Note* this is for doing it on a single-NIC server (for my lab use); usually you’d have dedicated NICs for the other networks on those servers… anyway.. where was I… right, now that we got additional networks on the server we can work on shared storage, you know how I like my iSCSI.

So, when I setup Hyper-V, and connect a VM to the “default” virtual switch, the Hyper-V server acts as a DHCP provider and NATs the connections from that vSwitch to whatever other network the host is connected to?

Great question — this trips up a lot of people when they first use Hyper‑V! Let’s break it down clearly:

🖧 Hyper‑V Default Switch Behavior

  • Introduced in Windows 10 (1709): The “Default Switch” is automatically created by Hyper‑V and cannot be deleted or modified.
  • NAT (Network Address Translation): Yes — the host machine provides NAT services. This means your VM’s traffic is translated and routed through the host’s network connection, regardless of whether that’s Wi‑Fi, Ethernet, or even VPN.
  • DHCP Services: The host also runs a lightweight DHCP service for the Default Switch. This is what hands out IP addresses to your VMs automatically (usually in the 172.16.x.x range).
  • Connectivity: VMs connected to the Default Switch can reach the internet and other devices on the host’s network, but inbound connections from the LAN to the VM are limited. It’s designed for convenience, not for complex networking setups.

⚖️ Comparison with Other Switch Types

  • Default Switch: DHCP ✅ Yes (host DHCP) | NAT ✅ Yes (host NAT) | Use case: quick setup, internet access without config
  • External Switch: DHCP ❌ No (relies on the external network’s DHCP) | NAT ❌ No | Use case: VM directly on the physical network, full LAN visibility
  • Internal Switch: DHCP ❌ No | NAT ❌ No | Use case: communication between host and VMs only
  • Private Switch: DHCP ❌ No | NAT ❌ No | Use case: communication between VMs only

✅ So to answer directly:

Yes — when you connect a VM to the Default Switch, Hyper‑V’s host acts as a DHCP provider and NATs the VM’s traffic to whatever network the host is connected to. It’s essentially a convenience bridge so you don’t have to manually configure networking.

“That’s what I did: configured an external virtual switch so the host’s NIC would be VLAN tagged in a certain network. I was figuring the default switch would act more like a… well, switch, at layer 2 of the host’s NIC, so VMs on it would get DHCP from the main network, as if it was connected to a switch in the base network, not layer 3 and then layer 4 NAT, like c’mon…”

Exactly — you’ve nailed the distinction. The Default Switch in Hyper‑V is deliberately not a true Layer‑2 bridge to your host NIC. It’s a convenience construct that Microsoft built to “just work” for internet access without requiring you to mess with VLANs, external switches, or DHCP servers on your LAN.

Here’s the breakdown:

🔌 Why the Default Switch behaves like NAT

  • It’s implemented using Windows Internet Connection Sharing (ICS) under the hood.
  • ICS is a Layer‑3 service: it provides a private subnet (usually 172.16.0.0/12), runs a DHCP service, and NATs traffic out through whatever adapter the host is using.
  • That’s why VMs on the Default Switch don’t see your LAN’s DHCP server — they only see the host’s mini DHCP/NAT service.
  • It’s intentionally isolated so you don’t accidentally expose VMs directly to your corporate or home LAN without realizing it.

⚖️ Why it’s not Layer‑2 bridging

  • A true Layer‑2 bridge would forward broadcasts (like DHCP Discover) from the VM to your LAN. That’s what an External Virtual Switch does.
  • The Default Switch instead terminates those broadcasts at the host and answers them itself. That’s why you only ever get the host‑provided DHCP.
  • VLAN tagging, trunking, or other advanced networking features aren’t supported on the Default Switch — it’s locked down.

✅ When to use each

  • Default Switch (NAT + DHCP): Quick internet access, testing, dev boxes, laptops where you don’t want to reconfigure NICs.
  • External Switch (Layer‑2 bridge): When you want VMs to behave like physical machines on your LAN, get DHCP from the real network, or use VLAN tagging.
  • Internal/Private Switches: For host‑VM communication or VM‑VM only networks.

So your expectation — that the Default Switch would act like a transparent Layer‑2 bridge — makes sense, but Microsoft intentionally designed it as a Layer‑3 NAT gateway instead. If you want “real switch” behavior, the External Switch is the only way to go.

“So, whatever my host machine is allowed to do, the guest VMs in the default switch are also allowed to do? like access the internet, and the firewall that gave the host the ability to access the internet would never see the private NATed IP address the Hyper-V host is dishing out to those VMs?”

You’ve got it — that’s exactly how the Default Switch works in Hyper‑V. Let me spell it out clearly:

🌐 How traffic flows with the Default Switch

  • VMs get private IPs: Each VM connected to the Default Switch is assigned an IP from the host’s internal DHCP service (usually 172.16.x.x).
  • Host does NAT: The Hyper‑V host translates those private IPs into its own IP before sending traffic out to the physical network.
  • Firewall visibility: From the perspective of your upstream firewall or router, all traffic from those VMs looks like it’s coming from the host’s IP. The firewall never sees the VM’s private NATed addresses.
  • Permissions: Whatever the host is allowed to do (e.g., access the internet, reach certain networks), the VMs can also do — because their traffic is piggybacking on the host’s connection.

⚖️ Important implications

  • Outbound access: VMs can reach the internet or LAN resources as long as the host can. They inherit the host’s connectivity.
  • Inbound access: External devices cannot directly initiate connections to those VMs, because NAT blocks unsolicited inbound traffic unless you set up port forwarding (which the Default Switch doesn’t expose for configuration).
  • Firewall rules: The upstream firewall only enforces rules against the host’s IP. If you need per‑VM visibility or control, you’ll need an External Switch so each VM gets its own LAN IP.

✅ So yes: the Default Switch makes your VMs “ride along” with the host’s network permissions. They’re hidden behind NAT, which is why your firewall only ever sees the host’s traffic.
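Side note: if you ever want Default-Switch-style NAT but with actual control over it, Hyper-V does let you roll your own NAT switch. A sketch (assumes the Hyper-V role is installed; the switch/NAT names and the 172.16.0.0/24 subnet are placeholders):

```powershell
# Internal switch + host-side IP + NAT rule = a DIY "Default Switch" you control
New-VMSwitch -Name "NATSwitch" -SwitchType Internal
New-NetIPAddress -IPAddress 172.16.0.1 -PrefixLength 24 -InterfaceAlias "vEthernet (NATSwitch)"
New-NetNat -Name "LabNAT" -InternalIPInterfaceAddressPrefix "172.16.0.0/24"

# Unlike the Default Switch, this NAT does let you forward inbound ports:
Add-NetNatStaticMapping -NatName "LabNAT" -Protocol TCP -ExternalIPAddress 0.0.0.0 -ExternalPort 8080 -InternalIPAddress 172.16.0.10 -InternalPort 80
```

One caveat: there's no built-in DHCP here, so VMs on that switch need static IPs (or a DHCP server you run) inside the NAT subnet.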

Additional Information

If you want to apply a VLAN tag to a physical NIC that’s not a Hyper-V host on Windows:

Since I didn’t see anything in the usual Network IPv4 configuration window in Windows, I decided to see how useful this AI stuff is… so I asked it…

“how do I VLAN tag the IP on windows 11 interface?”

Quick Answer: On Windows 11, VLAN tagging isn’t enabled by default. You need a NIC that supports IEEE 802.1Q VLAN tagging, then configure the VLAN ID either through the NIC’s Device Manager properties (Advanced tab → VLAN ID) or by using Intel PROSet/ANS tools or PowerShell (Set-NetAdapter -VlanID) if your driver supports it.

🖥️ Steps to Configure VLAN Tagging on Windows 11

1. Check NIC Compatibility

  • VLAN tagging requires a network card (NIC) that supports IEEE 802.1Q.
  • Not all NICs expose VLAN options in Windows. Intel and Broadcom NICs often support it, but you may need vendor-specific drivers.

2. Configure via Device Manager

  • Press Win + R, type devmgmt.msc, and hit Enter.
  • Expand Network adapters, right-click your NIC → Properties.
  • Go to the Advanced tab.
  • Look for VLAN ID or Priority & VLAN.
  • Enter the VLAN ID (e.g., 100) and click OK.

3. Configure via Intel PROSet/ANS (if supported)

  • Install Intel PROSet/Advanced Network Services (ANS).
  • In the NIC properties, Intel’s tab allows creating multiple VLAN interfaces with different IDs.
  • Each VLAN interface can then be assigned its own IP address.

4. Configure via PowerShell

  • Open PowerShell as Administrator.
  • Run:
    Set-NetAdapter -Name "Ethernet" -VlanID 100
    
  • Replace "Ethernet" with your adapter name and 100 with your VLAN ID.
  • Note: This only works if your NIC driver supports VLAN tagging. Otherwise, you’ll get an error like “Requested operation not supported on adapter”.

5. Assign IP Address

  • Once VLAN tagging is enabled, go to Network & Internet settings → Change adapter options.
  • Right-click the VLAN-enabled adapter → Properties → Internet Protocol Version 4 (TCP/IPv4).
  • Assign the IP address, subnet mask, and gateway for that VLAN.

⚠️ Important Notes

  • Driver Support is Key: If your NIC driver doesn’t expose VLAN options, you cannot tag VLANs directly in Windows without Hyper-V or third-party tools.
  • Hyper-V Alternative: You can create VLAN-tagged virtual NICs using Hyper-V’s Virtual Switch Manager if your hardware doesn’t support native tagging.
  • Testing: Use ping or tracert to verify connectivity once VLAN tagging is applied.

Bottom Line: To VLAN tag an IP on Windows 11, you must enable VLAN tagging on the NIC (via Device Manager, Intel PROSet/ANS, or PowerShell if supported), then assign the IP to that VLAN interface. If your NIC doesn’t support VLAN tagging, you’ll need Hyper-V or vendor-specific utilities.

My response: “I’ve got an Intel 82579LM Gig NIC, I see it says VLAN enabled in the advanced tab, but I don’t see an area to configure it. I’m sure the NIC supports it since I was using ESXi with VLAN tagging just fine.”

That’s a great observation — the Intel 82579LM definitely supports VLAN tagging at the hardware level, but here’s the catch: Windows drivers for that NIC don’t expose VLAN configuration in the same way ESXi or Linux does.

🔎 Why You Don’t See VLAN Config in Windows

  • The Advanced tab in Device Manager shows “Priority & VLAN” or “VLAN Enabled,” but that option only toggles whether the NIC can process VLAN-tagged frames.
  • Unlike ESXi, Windows doesn’t let you directly assign VLAN IDs to a physical NIC unless the driver provides a VLAN interface creation utility (like Intel PROSet/ANS).
  • For the 82579LM, Intel stopped including VLAN configuration in their Windows drivers after Windows 8/10. On Windows 11, the driver exposes VLAN capability but no UI to configure VLAN IDs.

✅ Workarounds

Here are the practical options:

1. Intel PROSet/ANS (if available)

  • Older Intel drivers came with Advanced Network Services (ANS), which let you create VLAN sub-interfaces.
  • On Windows 11, Intel no longer bundles ANS with drivers for the 82579LM, so you won’t see the VLAN tab.

2. PowerShell Command

  • Windows has a Set-NetAdapter -VlanID cmdlet, but it only works if the NIC driver supports VLAN tagging through the OS.
  • On the 82579LM with current Windows 11 drivers, this usually fails with “Requested operation not supported”.

3. Hyper-V Virtual Switch

  • If you enable Hyper-V, you can create a virtual NIC and assign VLAN IDs there.
  • This is the most reliable way to do VLAN tagging on Windows when the physical NIC driver doesn’t expose VLAN config.

4. Third-Party NIC Utilities

  • Some OEMs (Dell, HP) ship custom drivers/utilities that re-enable VLAN tagging. If your system vendor has one, that’s worth checking.
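For option 3, the actual tagging is scriptable once Hyper-V is in play. A sketch (the NIC, switch, and VM names are placeholders):

```powershell
# One-time: an External switch bound to the physical NIC
New-VMSwitch -Name "ExternalSwitch" -NetAdapterName "Ethernet" -AllowManagementOS $true

# Access mode: the switch tags this VM's traffic into VLAN 100
Set-VMNetworkAdapterVlan -VMName "TestVM" -Access -VlanId 100

# Or trunk mode, if the guest does its own tagging
Set-VMNetworkAdapterVlan -VMName "TestVM" -Trunk -AllowedVlanIdList "10,20,100" -NativeVlanId 1

# Check what's applied
Get-VMNetworkAdapterVlan -VMName "TestVM"
```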

⚠️ Bottom Line

Your Intel 82579LM can tag VLANs, but on Windows 11 the Intel driver doesn’t expose VLAN configuration. That’s why you don’t see a place to set the VLAN ID.

  • On ESXi/Linux → VLAN tagging works natively.
  • On Windows → You need Intel PROSet/ANS (no longer supported for this NIC), Hyper-V, or vendor-specific drivers.

Well ok I guess once we install Hyper-V hopefully it’ll be more like ESXi in a sense and we can tag.

*UPDATE* External Switch on a WiFi NIC

I was trying to use a Hyper-V server with an External Switch bound to a WiFi NIC, and was getting unexpected results based on the above information; I was assuming it would behave like the Default Switch. You may be asking, “Why would you expect that behavior if you’re creating an External Switch?” Well, if you read my Wireless ESXi host blog, you’re probably well aware of the VLAN limitations of WiFi, and that it can never truly be used as a trunk port (a limitation of the 802.11 standard, not of the OS or software).

So how could an External Switch work via WiFi, if layer 2 broadcasts don’t work and can’t “speak” with the rest of the layer 2 stack? Yet I create a VM and it DOES get a DHCP lease from my local subnet?! What the heck is going on here…

So I had to ask AI, what was going on here, it says, yeah… that’s expected… here’s the deets… get ready.. it’s a long one….

BAH-LETED, there was way tooooo much BS from the AI response to justify keeping this data in here… long story short… Local subnet VMs work fine (it does ARP Masquerading), VLANs will never work per the usual BS I’ve talked about in my Wireless ESXi host blog.

Migrating/Restoring Veeam

In one of my previous posts I discussed upgrading Veeam; today I want to discuss migrating it entirely. Or recovering it, as the process is essentially the same.

Disclaimer: what you do in your own environment is on you; everything in this blog is for educational purposes only. This also doesn’t cover encryption management. All data is moved in place (e.g. disconnecting and reconnecting an HDD from one machine to another), with the data at rest being unencrypted.

Step 1) Sign in to Veeam portal

I didn’t have a paid product license, so my download section was full of free trial links. I’m using CE (Community Edition), from here: Free Backup Software For Windows, VMware, & More – Veeam

Step 2) Download the ISO

It’s a doozy at 13 GB.

Step 3) Read the update notes for any expected issues/outcomes.

For all the FAQs go here: Veeam Upgrade FAQs

For basic System Requirements and release notes see here: Veeam Backup & Replication 12.3 Release Notes

The main thing will be the change of the backend SQL service, moving from MS SQL Express to PostgreSQL, though it’s not directly mentioned from what I can see, other than step 8 in the upgrade path: Upgrading to Veeam Backup & Replication 12.3 – User Guide for VMware vSphere

Step 4) Attach the ISO

Attach it to the server being upgraded or installed on.

In my case this time, I’m simply cloning my freshly semi-hardened Windows 11 image, giving it a whopping 8 GB of RAM and a 64 GB HDD for the OS and the Veeam app to live on. While that’s being prepared, let’s take a config backup of our Veeam server to make our lives easier.

Step 5) Backup Config.

I’d hope you’d have this configured before your Veeam server failed.

Veeam B&R -> File -> Backup Config; in our case, save it to the backup data drive, as that will be moved and mounted first thing. We can then use that to load the config and should be good to go.

Now it shows up under Drive:\VeeamConfigBackup\Hostname\Hostname_Datestamp.bco
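As an aside, if you’d rather script this step, recent Veeam versions ship PowerShell cmdlets for the config backup job. A sketch (assumes the B&R console and its PowerShell module are installed on the box):

```powershell
Import-Module Veeam.Backup.PowerShell
Get-VBRConfigurationBackupJob      # shows the current schedule/target/retention
Start-VBRConfigurationBackupJob    # kicks off an on-demand config backup
```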

Step 6) Install Veeam on New Server

Depending on your uptime requirements, you can either spin up the new server with a different temp IP, get the Veeam app and services installed, then move your discs and change IPs. Since I don’t care in my lab, I’ll fully shut down my existing server to free up the IP and system resources, then boot up my new server, attach the ISO downloaded in step 2, and install Veeam.

Hostname, networking, and other prerequisites are not discussed in detail here.

I like how it knows, click install…

Install B&R

How long we wait is based on the Matrix. Looking at the VM resource usage on my machine, it looks like it’s reading installation files from the ISO and writing them somewhere to disk; my setup only yielded about 40 MB/s and took roughly 8 minutes.

Agree to the EULA.

License upgrade: (I’ll try not selecting this since I’m on CE… nope, the wizard wouldn’t let me for CE, shucks hahah)

Service account, Local System (recommended). I left this default, next.

This is why I like Veeam, made by sysadmins for sysadmins.

Install, and now we wait… once complete

Step 7) Attach disk with backup data

How you do this is up to you, I got the needful done.

Step 8) Open Veeam B&R Console, and import config backup.

In the Veeam B&R Console, click File -> Configuration Backup, then click the Restore button.

Now, I picked Restore, since I shut down my OG server to move the data as a whole:

The config deets, check ’em over. I don’t know what the minimum gap between versions is allowed to be, but in this case: source 12.3.1, target 12.3.2.

Target data is localhost. Pay attention to the login name; if you ever change the local admin account, or whatever account installed Veeam, this could be an issue for your Veeam SQL config.

yes…

Restore…

Yes…

Wait for services to all stop…

success… until it’s not…

This for some reason failed…

I clicked start and it seemed to start everything up just fine…

But no matter what, when I tried to rescan any repos in the console, it would complain that not all components were upgraded. Everything AI was telling me was off and felt wrong. I found this one thread with the statement “It seems that not all Windows 10 installations are facing this problem. We’ll try to figure out of certain builds are involved in this. On the other hand, a fresh v12 install in Win10 works without any problems.” Well, this IS a fresh install, and it happened after the backup import. When I did the last upgrade back in March, it was an in-place upgrade from 12.1 to 12.3, and I didn’t have this problem.

After enough fooling around I found my answer here, which was to run the provided script, finding the component listed with 0.0 as noted in the thread. Strange.

Then finally the part of the wizard completed:

Docker on Core Linux

Docker Running on Core Linux

Is it possible? … Yes? However, I didn’t write down some of the prerequisites for the Core server (whoops, maybe one day I’ll redo it from scratch). But if you do manage to get the base binaries installed, this post should be helpful for all the caveats I faced along the way…

In my previous post I mentioned that Docker wouldn’t run unless it was a 64-bit machine, so I created a 64-bit Core Linux image and showed how to get the base OS up and running… but what about Docker itself?

Now I got this “working”, but I didn’t exactly write down all my steps (it took a long time to figure out). From looking at the VM’s history, it looks like I simply used the tc account to download and extract the base Docker binaries:

Now this doesn’t tell me the working directory I was in when some of the relative paths are called, but I do know it was the tc account, so some safe assumptions can be made.

Reviewing my AI chat and the notes I took, and getting it running again after a reboot, it seems after the “install” (copying the base files to the path shown in the above image, line 51) I also added “var/lib/docker” and “etc/docker” to the filetool.lst file, so they stay persisted after reboot. Strangely, only /var/lib/docker is populated, but I can’t see how that’s the case from the history review. I was pretty positive the script itself failed to execute… I really should start from scratch, else this post will be a bit useless… butt… F*** it…

The next issues seem to be tied to cgroups and certificates…

Fixing Cgroups Error

sudo mount -t tmpfs cgroup_root /sys/fs/cgroup/
sudo mkdir /sys/fs/cgroup/devices
sudo mount -t cgroup -o devices none /sys/fs/cgroup/devices

That should be it… but we need this to be persisted and auto run at boot time so we don’t have to do this every time…

sudo vi /opt/dockerd.sh
i
mount -t tmpfs cgroup_root /sys/fs/cgroup/
mkdir -p /sys/fs/cgroup/devices
mount -t cgroup -o devices none /sys/fs/cgroup/devices
ESC
:wq
sudo vi /opt/bootlocal.sh
*append with*
/opt/dockerd.sh
ESC
:wq
sudo chmod +x /opt/dockerd.sh
filetool.sh -b

The next issue: dockerd would load, but pulling a container image would just fail, complaining about certificates.

Fixing Certificate Error

I found the point in my notes rambling with AI when I figured it out…

“NO F***KIN WAY!!!!!!! https://stackoverflow.com/questions/75696690/how-to-resolve-tls-failed-to-verify-certificate-x509-certificate-signed-by-un I read this thread and read the answer by Andrei Nicolae… which said just durr copy ca certs to /etc/ssl/certs I was like, I bet docker is hard coded to look there, which is why it was first suggested but all other apps on tiny core linux know to use /usr/local/etc/ssl/certs, so yeah docker never was using the expectects paths liek I suspected from the begining cause we manualy installed it for a OS not supported. so with this I did sudo mkdir -p /etc/ssl/certs sudo cp /usr/local/etc/ssl/certs/* /etc/ssl/certs sudo pkill dockerd sudo dockerd & sudo docker pull hello-world and guess what it finally freaking worked”

But I realized that instead of copying them, I could just make a symlink:

sudo mkdir -p /etc/ssl
sudo ln -s /usr/local/etc/ssl/certs /etc/ssl/certs

I simply placed these lines in the /opt/dockerd.sh file I created earlier, rebooted, and verified that /etc/ssl/certs was populated with certs, and it was.
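A slightly more defensive take on that symlink step, so re-running the script never errors out on an already-existing link (a sketch; the function name is mine, and the paths are parameterized only so the logic is easy to dry-run):

```shell
# link_certs SRC DST: ensure DST/certs points at SRC, where the CA bundle
# really lives. Safe to call repeatedly (e.g. from dockerd.sh at every boot).
link_certs() {
  mkdir -p "$2"
  [ -e "$2/certs" ] || ln -s "$1" "$2/certs"
}
```

On the Tiny Core box that’s just `link_certs /usr/local/etc/ssl/certs /etc/ssl`.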

And finally…

Running Dockerd

sudo DOCKER_RAMDISK=true dockerd &

Pulling Image

sudo docker pull hello-world

Running Image

sudo docker run --rm hello-world

Yay, we actually ran a container from Core Linux… mind blown… I swear I had it all running at only 90 MB of RAM, but checking now shows 116 MB. Bah…

To get Docker to run at boot, my final /opt/dockerd.sh looked like this:
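Reconstructed from the steps above (the cgroup mounts, the cert symlink, and the dockerd launch), it would have been roughly this; the log redirect is my addition, and the exact ordering is from memory:

```shell
#!/bin/sh
# /opt/dockerd.sh - run from bootlocal.sh at boot

# cgroup mounts dockerd needs
mount -t tmpfs cgroup_root /sys/fs/cgroup/
mkdir -p /sys/fs/cgroup/devices
mount -t cgroup -o devices none /sys/fs/cgroup/devices

# point docker's hardcoded cert path at the real one
mkdir -p /etc/ssl
[ -e /etc/ssl/certs ] || ln -s /usr/local/etc/ssl/certs /etc/ssl/certs

# start the daemon
DOCKER_RAMDISK=true dockerd > /var/log/dockerd.log 2>&1 &
```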

*Update* Outside of hello-world, I wasn’t able to spin up any other real containers due to other technical limitations; this was a huge waste of time.

Installing CorePure64

Back Story

So in my previous post I shared how to set up a very small footprint Linux server called Core Linux: Installing Core Linux – Zewwy’s Info Tech Talks

but…… I tried getting Docker running on it and was hit with an error “Line 1: ELF: File not found”.

AI, after giving all the required commands to do a “manual install”, stated, “duuuuurrrrrrrrrrrrrr docker don’t give 32 bit binaries”, to which I replied, huh… I guess I installed 32-bit Core Linux… do they have 64-bit versions?

It gave me some dumb link to some dumb third party source.. the answer is yes.. here: Index of /16.x/x86_64/release/

So here we go again….

Installing CorePure64

Step 1) Download Install image CorePure64-16.0.iso

Step 2) Get x64 hardware, or create a VM that supports 64-bit. I have 64-bit hypervisors, so I will create a VM as I did in my first post.

This time: 2 CPU, 1 GB RAM, 60 GB HDD (thin), VMware Paravirtual SCSI controller, EFI enabled with secure boot. Let’s see if this works out… No boot… Flip boot settings to BIOS mode… ISO boots… ah man, FFS, it’s x64-based but still relies on BIOS for booting… that sucks… oh well, moving on…

Booting and Installing Core Linux

Attach ISO, boot. Core Linux boots automatically from the ISO:

For some reason the source doesn’t tell you what to do next. Type tc-install and the console says it doesn’t know what you are talking about:

AI Chat was kind enough to help me out here, and told me I had to run:

tce-load -wi tc-install

Which required an internet connection:

However, even after this, attempting to run it gave the same error… hmm. Using the find command I found it, but it needs to be run as root, so:

sudo su
/tmp/tcloop/tc-install/usr/local/bin/tc-install.sh

C for install from CDrom:

Lets keep things frugal around here:

1 for the whole disk:

y we want a bootloader (It’s extlinux btw, located at /mnt/sda1/boot/extlinux/extlinux.conf):

Press enter again to bypass “Install Extensions from..”

3 for ext4:

Like the install source guide says add boot options for HDD (opt=sda1 home=sda1 tce=sda1)

last chance… (Dooo it!) y:

Congrats… you installed TC-Linux:

Once rebooted, the partition and disk free output will look different. Before reboot, running from memory:

after reboot:

Cool, the install process was 100% the same as the 32bit process…

but running uname -m we see we are now 64 bit instead of 32 bit.

Changing TC Password

Step 1) Edit /opt/.filetool.lst (use vi as root)
– add etc/passwd and etc/shadow

Step 2) run:

filetool.sh -b

Step 3) run

passwd tc

Step 4) run

filetool.sh -b

Now reboot. You may not notice that it applied due to the auto-login; however, if you type exit to get back to the actual login banner, then type tc, you will be prompted for the password you just set. Now we can move on to the next step, which is to disable the auto-login.
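Since a few of these steps boil down to “append a path to /opt/.filetool.lst, then filetool.sh -b”, here’s a tiny idempotent helper I’d use so repeat runs don’t stack duplicate lines (a sketch; the function name is mine):

```shell
# add_persist ENTRY LISTFILE: add ENTRY to the persistence list only if
# it's not already there (exact whole-line match, fixed string).
add_persist() {
  grep -qxF -- "$1" "$2" 2>/dev/null || echo "$1" >> "$2"
}
```

e.g. `add_persist etc/passwd /opt/.filetool.lst` and `add_persist etc/shadow /opt/.filetool.lst`, then `filetool.sh -b` as usual.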

Disable Auto-Login

Step 1) Run

sudo su
echo 'echo "booting" > /etc/sysconfig/noautologin' >> /opt/bootsync.sh

Step 2) Run

filetool.sh -b
reboot

K on to the next fun task… static IP…

Static IP Address

For some reason AI said I had to create a script that runs the manual steps… not sure if this is the proper way. I looked all over the Wiki: wiki:start – Tiny Core Linux Wiki and can’t find anything… I know this works, so we’ll just do it this way:

Step 1)  Run:

echo "ifconfig eth0 192.168.0.69 netmask 255.255.255.0 up" > /opt/eth0.sh
echo "route add default gw 192.168.0.1" >> /opt/eth0.sh
echo 'echo "nameserver 192.168.0.7" > /etc/resolv.conf' >> /opt/eth0.sh
chmod +x /opt/eth0.sh
echo "/opt/eth0.sh" >> /opt/bootlocal.sh
filetool.sh -b

Step 2) reboot to apply and verify.

It didn’t work, but if I add “sleep 2” just before /opt/eth0.sh in the bootsync.sh file, then it works. Not the greatest, but I’ll take it for now.
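If the blind “sleep 2” bugs you, a small poll loop waits only as long as it needs to for the NIC to exist (a sketch; the helper name is mine, and it assumes sysfs is mounted like Tiny Core does):

```shell
# wait_for_iface NAME [TIMEOUT]: wait up to TIMEOUT seconds (default 10)
# for /sys/class/net/NAME to appear; returns non-zero if it never does.
wait_for_iface() {
  i=0
  while [ ! -e "/sys/class/net/$1" ] && [ "$i" -lt "${2:-10}" ]; do
    sleep 1
    i=$((i + 1))
  done
  [ -e "/sys/class/net/$1" ]
}
```

Then the boot script can run `wait_for_iface eth0 && /opt/eth0.sh` instead of sleeping a fixed amount.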

Updates?

With Tiny Core Linux, updating is a bit different from traditional distros. Here’s how it works:

🔄 Updating Tiny Core Linux

Tiny Core doesn’t have a single tc-update command for the whole system, but it does have tools for updating extensions and managing packages:

🧰 Extension Update

Use the tce-update command to update installed extensions:

tce-update

This checks for newer versions of your installed .tcz extensions and updates them.

📦 Other Useful Commands

  • tce-load: Loads extensions from your /tce directory.
  • tce-ab: Opens the AppBrowser (if you have GUI).
  • tce-audit: Audits installed extensions.
  • tce-remove: Removes extensions.

🧱 Core System Update

To update the core system itself (like the kernel and initrd), you’ll need to manually download the latest bzImage and core.gz (or tinycore.gz) from the Tiny Core Mirrors then replace them in your boot partition.

Steps:

  1. Download the latest files.
  2. Mount your boot partition.
  3. Replace bzImage and core.gz.
  4. Update your bootloader config (e.g., GRUB or syslinux).
  5. Reboot.

VMware Changes Update URLs

If you run a home lab, or manage systems for companies you may have noticed updates not working in VAMI… something like…. Ohhh I dunno.. this:

Check the URL and try again.

Unable to patch the vCenter via VAMI as it fails to download the updates from Broadcom public repositories

Cause

Public facing repository URLs and authentication mechanisms are changing. Download URLs are no longer common but unique for each customer therefore will require to be re-configured.

Well… wow thank you Broadcom for being so… amazing.

If you want to be overly confused about the whole thing, you can read this KB: Authenticated Download Configuration Update Script

As the original link I shared above says, all you have to do is log in to the Broadcom support portal, get a token, and edit the URL… but…

Notes:

    • The custom URL is not preserved post migration upgrade, FBBR restore and VCHA failover
    • If there is a proxy device configured between vCenter and the internet, ensure it is configured to allow communications to the new URL
    • Further patches automatically update this URL. For example, if 8.0.3.00400 is patched to 8.0.3.00500, the default URL will change to end in 8.0.3.00500.

Looks like this was enforced just a couple days ago … Sooooo, happy patching?   ¯\_(ツ)_/¯

Permission to perform this operation was denied. NoPermission.message.format

For anyone who may use my site as a source of informational references, I do apologize for the following:

  1. My site cert expiring. ACME is great; I’m just a bit upset they refuse to announce their HTTP auth sources, so I can’t create a security rule for them. Right now it would be restricted to App Type. While not bad… not good enough, so I manually have to allow the traffic for the cert to be renewed.

    No… I have no interest in allowing ACME access to my DNS for DNS auth.

  2. Site was down for 24 hours. If anyone noticed at all, yes, my site was down for over 24 hours. This was due to a power outage that lasted over 12 hours after a storm hit. No UPS could have saved me from this, though one is in the works even after project “STFU” has completed.

    No, I have no interest in clouding my site.

I have a couple blog post ideas roaming around, I’m just having a hard time finding the motivation.

Anyway, if you get “Permission to perform this operation was denied. NoPermission.message.format” while attempting to move an ESXi host into a vCenter cluster, chances are you may have an orphaned vCLS VM.

If so, log into VAMI and restart the ESX Agent Manager (EAM) service.

After restarting that service, everything should be hunky dory…

Cheers.