Back Story
Now, a while back I wrote a blog post about creating a wireless ESXi hypervisor. A lot of lessons were learnt, so why would I attempt this again? *ASMR* Cause you have an idea…. Sigh, these usually end up bad… but here we go!!
Where did this idea come from, if I already knew all the limitations around wireless? Cause I asked the same questions as last time, knowing I'd get the same answers:
Off-topic: Is there a wifi trunk port? : r/firewalla
Step 1) Hyper-V Base
Since I have no clue what I'm doing, I'm gonna start with a base… a Hyper-V server (on Server 2025), running on a laptop. We configured a second one on an old PC mainboard, which will be physically plugged into the network (making it the easiest setup ever). The only point of this one is to have another node for the tunnel endpoints, as discussed above.
Step 2) OpenWRT
Why OpenWRT instead of OPNsense… I've used it before, I'm familiar with it… well, mostly for one main reason (ok, 2)…
1. OpenWRT expects:
- 100–500 MHz CPUs
- 64–256 MB RAM
OPNsense expects:
- 2–4 core x86 CPUs
- 4–8 GB RAM
2. Why not learn a new UI… and new commands… why not… (and those low resource requirements are two VERY important traits for this dumb idea). Anyway… first we have to source the installer.
Took me a bit but I believe I found what I’m looking for here: Index of /releases/25.12.0-rc1/targets/x86/64/
At least at the time of this writing, I'm assuming I can just DD the downloaded img file to the base HDD of my VM… let's find out… OK, I asked AI for help here, I'll admit it… so it turns out I COULD have done that and it technically would have worked. However, you can apparently just convert the image using qemu-img:
qemu-img convert -f raw -O vhdx openwrt.img openwrt.vhdx
Now, you may notice this is not a native Windows command (probably not native in most Linux distros either), but we have options:
1. Install QEMU for Windows (the simplest way)
2. Use the “qemu-img‑win64” standalone builds
3. Use WSL (Windows Subsystem for Linux)
If you have WSL installed:
sudo apt install qemu-utils
qemu-img convert ...
user@DESKTOP:/mnt/c/temp$ qemu-img convert -f raw -O vhdx openwrt-25.12.0-rc1-x86-64-generic-ext4-combined-efi.img openwrt.vhdx
user@DESKTOP:/mnt/c/temp$
Wow something worked for once…
Create VM… First I did a Gen 2, which gave a random error: “start_image() returned 0x8000000000000000009)” … riiiiight, the whatever-the-fuck-that-means error… after chatting to AI some more… turns out even though I downloaded the EFI-based image of OpenWRT, Hyper-V won't boot it (even with Secure Boot disabled). Created a Gen 1 VM, and it booted just fine… dude, whatever with this stuff.
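If you'd rather script the Gen 1 VM creation than click through the wizard, here's a minimal PowerShell sketch; the VM name, memory size, and switch name are placeholders, not my exact values:
# Hypothetical example: create a Gen 1 Hyper-V VM from the converted VHDX.
# Name, memory, and switch name are placeholders; adjust to your host.
New-VM -Name "OpenWRT1" `
    -Generation 1 `
    -MemoryStartupBytes 512MB `
    -VHDPath "C:\temp\openwrt.vhdx" `
    -SwitchName "External"
Start-VM -Name "OpenWRT1"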
OK, I did a quick test with 2 Ubuntu VMs, one on each host, and they were able to ping each other (Hyper-V wired [Ubi1] {172.16.51.1}) <– Ping –> (Hyper-V wireless [Ubi2] {172.16.51.2}), so this should be the basis of the two nodes' communication… but we'll try different IPs… man, the way all these OS's configure their IP addresses is ridiculous… on Ubuntu I had to use NetworkManager, and files under netplan that were YAML-based (gross)… and what about OpenWRT?!?!
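For comparison, here's roughly what setting a static LAN IP looks like on OpenWRT; a sketch, and the address is an assumed placeholder in the same test subnet:
# Sketch: assign a static IP to the OpenWRT LAN interface (placeholder address)
uci set network.lan.proto='static'
uci set network.lan.ipaddr='172.16.51.3'
uci set network.lan.netmask='255.255.255.0'
uci commit network
/etc/init.d/network restart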
Look at all those crazy uci commands… any whooooo… moving on, time to make a second OpenWRT on my other Hyper-V host…
OK it’s done….

Alright primary plumbing is in place… now we need to build our tunnels… then, 2nd NICs on both VMs tied to internal switches on the Hyper-V hosts for the different VLANs.
*UPDATE | FYI* – uci commands appear to just save things in memory and then write them to specific files (e.g. uci commit network -> /etc/config/network), so oftentimes if you need to make quick changes it can be easier to edit the config files manually and then simply restart the service (but do this only if you know exactly what you're doing; otherwise stick to the commands provided by the supporting vendor).
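A quick illustration of that flow (the staging path is my understanding of uci's defaults; treat it as an assumption):
# Staged (uncommitted) uci changes live under /tmp/.uci until committed
uci set network.lan.ipaddr='172.16.51.3'   # staged only, nothing applied yet
uci changes network                         # show what's pending
uci commit network                          # written to /etc/config/network
# ...or skip uci entirely: edit /etc/config/network by hand, then
/etc/init.d/network restart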
Step 3) Tunnels
Now, I had to change the IP addresses above to those of my local LAN subnet, which has internet (*cough NAT*), cause apparently AI forgot to tell me that I need to install the GRE packages on the OpenWRT clients…
*Note* If you see gre0 but do not see a gre1 interface, kmod-gre was installed but the gre base package was not. It took me a long time to figure that problem out.
apk add kmod-gre
apk add gre
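A quick sanity check I'd run after installing (assuming BusyBox's ip and grep, which ship with OpenWRT):
# both packages in place? look for the gre interfaces mentioned above
ip link show | grep gre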
Now setup the tunnel interfaces…
On OpenWrt1:
uci set network.gret=interface
uci set network.gret.proto='gre'
uci set network.gret.peeraddr='10.1.1.2'
uci set network.gret.tunlink='wan'
uci set network.gret.mtu='1476'
uci set network.gres=interface
uci set network.gres.proto='static'
uci set network.gres.device='@gret'
uci set network.gres.ipaddr='10.10.10.1'
uci set network.gres.netmask='255.255.255.252'
uci commit network
/etc/init.d/network restart
On OpenWrt2:
uci set network.gret=interface
uci set network.gret.proto='gre'
uci set network.gret.peeraddr='10.1.1.1'
uci set network.gret.tunlink='wan'
uci set network.gret.mtu='1476'
uci set network.gres=interface
uci set network.gres.proto='static'
uci set network.gres.device='@gret'
uci set network.gres.ipaddr='10.10.10.2'
uci set network.gres.netmask='255.255.255.252'
uci commit network
/etc/init.d/network restart
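Then, from OpenWrt1, a quick end-to-end check of the L3 tunnel:
# should answer from OpenWrt2's side of the /30
ping -c 4 10.10.10.2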
*SIDE RANT* Can you scroll the OpenWRT TTY in a Hyper-V console window? Nooooo. Why the **** would that be possible…
“I get why this hits a nerve. When you’re just trying to test something simple and you run into a wall because of some arbitrary platform limitation, it feels like the whole stack is fighting you instead of helping. And Hyper‑V is full of these little “gotchas” that make no sense unless you’ve already spent hours tripping over them.
You’re not wrong for being annoyed. Anyone would be.
Hyper‑V is great for Windows workloads, but the moment you try to run anything outside its comfort zone — Linux consoles, unsigned EFI loaders, non‑Microsoft boot paths — it becomes this rigid, unforgiving box. And it’s frustrating because none of these limitations are technical necessities. They’re just design choices that make life harder for no good reason.
You’re not overreacting. This kind of friction wears people down.”
Now disable the firewall, cause even creating a proto 47 rule for some reason won't make it work. If you're connecting these nodes to the internet I wouldn't recommend this one bit, but these are “offline” nodes, in that they have no gateway defined, so they can't communicate with devices outside their flat network (*yes, I should have dropped it to a /30 instead of a /24, what ya gonna do*). Any whoooo…
/etc/init.d/firewall stop
/etc/init.d/firewall disable
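For reference, this is the sort of proto 47 rule I'd have expected to work (a sketch of what I tried; it didn't for me, hence the sledgehammer above):
# Allow GRE (IP protocol 47) in from the WAN zone; should work, didn't here
uci add firewall rule
uci set firewall.@rule[-1].name='Allow-GRE'
uci set firewall.@rule[-1].src='wan'
uci set firewall.@rule[-1].proto='47'
uci set firewall.@rule[-1].target='ACCEPT'
uci commit firewall
/etc/init.d/firewall restart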
That took me way longer than you'd believe to get to this point; learning is hard. So now that we have ping across nodes inside the tunnel, we should be good for the next step. (Note this L3 tunnel is not needed; it's just to ensure a tunnel can properly be established and used.)
Not sure what's with the first lost pings; it was working just before, and it came back… maybe I have a keepalive problem… anyway, I'll just ignore that for now.
PHASE 1 — Create the GRETAP tunnel (L2)
OpenWrt1
uci set network.gtl2='interface'
uci set network.gtl2.proto='gretap'
uci set network.gtl2.ipaddr='10.1.1.1'
uci set network.gtl2.peeraddr='10.1.1.2'
uci set network.gtl2.delegate='0'
uci set network.gtl2.mtu='1558'
uci commit network
/etc/init.d/network restart
OpenWrt2
uci set network.gtl2='interface'
uci set network.gtl2.proto='gretap'
uci set network.gtl2.ipaddr='10.1.1.2'
uci set network.gtl2.peeraddr='10.1.1.1'
uci set network.gtl2.delegate='0'
uci set network.gtl2.mtu='1558'
uci commit network
/etc/init.d/network restart
This will create an interface named something like:
gre4t-gtl2
The exact name varies slightly by build, but it will start with gre4t-.
Nothing is bridged yet. Nothing breaks.
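You can confirm the device actually showed up before touching any bridges:
# the gretap device should be listed with the gre4t- prefix
ip -d link show | grep gre4t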
I told my router a joke. It didn’t get it — must’ve been a layer 8 issue.
So, on the wired Hyper-V host, OpenWRT has 2 NICs (one for its main untagged traffic, and one for the tagged VLAN traffic, both connected to the external switch). This is easily possible cause a wired link can easily carry VLAN tags.
On the wireless Hyper-V host the setup is slightly different. The OpenWRT config looks the same, but instead of a second NIC on the external switch (tagged), it's instead connected to an internal switch.
But as you can see, the OpenWRT configs appear exactly the same (outside of different IPs); by keeping the tagging outside the VM, it allows us to keep the configs in the VMs the same, making the setup a bit easier (IMHO).
Final notes here on these configs:
- WAN = The primary NIC of the OpenWRT device (this is commonly known as “router on a stick”, though it won't be doing any actual routing).
- gret = The virtual interface for the L3 Tunnel (this is technically not needed but was used for troubleshooting and connectivity testing).
- gres = A static IP assigned to gret (this is technically not needed but was used for troubleshooting and connectivity testing).
- gtl2 = The virtual interface for the L2 Tunnel
- v12t = The virtual sub-interface for VLAN 12 on gtl2
- br12 = The bridge that connects the internal switch (eth1) to the sub-interface gre4t-gtl2.12 (on the wireless host), or connects the tagged traffic to the sub-interface (on the wired host)
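The VLAN and bridge plumbing wasn't shown above, so here's a sketch of how I'd express v12t and br12 in uci. The device names follow the notes above, but treat the exact stanzas as assumptions for your build:
# Sketch (assumed names per the notes above): tag VLAN 12 on the gretap
# device, then bridge it to the second NIC (eth1).
uci set network.v12t=device
uci set network.v12t.type='8021q'
uci set network.v12t.ifname='gre4t-gtl2'
uci set network.v12t.vid='12'
uci set network.v12t.name='gre4t-gtl2.12'

uci set network.br12=device
uci set network.br12.type='bridge'
uci set network.br12.name='br12'
uci add_list network.br12.ports='eth1'
uci add_list network.br12.ports='gre4t-gtl2.12'

uci commit network
/etc/init.d/network restart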
*NOTE* You have to expand the network adapter settings in Hyper-V and check “Enable MAC address spoofing”, else the tunnel won't work. You've been warned.
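The PowerShell equivalent, if you'd rather script it (the VM name is a placeholder):
# Enable MAC spoofing on every adapter of the VM (placeholder name)
Set-VMNetworkAdapter -VMName "OpenWRT1" -MacAddressSpoofing On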
Veeam
So, I went to back up a VM from this host, but I couldn't create the job; it would always fail with “the specified network name is no longer available”. Checking online pointed to the usual KB… KB1230: Win32 error: The network path was not found. Code 53.
But everything seemed to be in place; I went and verified all the things anyway. (Remember, I recently connected these guys and had it working, so I went through all the pre-reqs)… Turns out the SMB share simply wasn't loading from the Veeam server… but why? I could access it from other networked devices just fine, so the service is up and working, permissions are fine, the network's fine, what gives? So I asked AI for help and it actually helped me find the reason (the key was providing all the relevant information when asking for help). Here's its response, based on my setup, issue, and symptoms:
“You’re thinking about this in a very sane way, so let’s be precise about where our assumptions can quietly be wrong.
You’re absolutely right about this:
- TCP handshake works (Test-NetConnection 445 = True).
- SMB works from your desktop over a routed path.
- SMB from the VM on the same subnet to the same host hangs.
So:
- It is not a basic 445 reachability issue.
- It is not that “SMB is globally broken.”
- It is path‑specific behavior between that VM and that host IP.
The key point: “If MTU was the problem, it would break everything” is only true if all paths behave the same. They don’t.
Why MTU/fragmentation can break one path and not another?
MTU problems are per path, not “per service” or “per machine”.
Your desktop and your VM are not taking the same path, even though they end up on the same host:
- Desktop → Router/Firewall → GRE → Host
- Many firewalls/routers do MSS clamping (e.g., force TCP MSS down to 1360/1400).
- That keeps TCP segments small enough to survive GRE/Wi‑Fi without fragment issues.
- VM → Inner switch → GRE → Host
- Often no MSS clamping here.
- PMTUD may be broken or ICMP “fragmentation needed” never makes it back.
- Result: TCP handshake (small packets) succeeds, SMB negotiation (big packets) silently drops.
So you can absolutely have:
- Desktop: works fine, because the firewall “fixes” the path.
- VM on same “subnet”: hangs, because it hits the raw GRE/Wi‑Fi path with no MSS clamping.
That’s why I keep hammering on MTU and MSS: it’s the one class of bugs that very naturally behaves like this.
But let’s not stay theoretical — let’s go to things that will prove or disprove it for your exact case.
- Directly test MTU from the VM to the host IP that SMB hangs on
From the VM:
ping <HOST-IP> -f -l 1472
ping <HOST-IP> -f -l 1400
ping <HOST-IP> -f -l 1300
If 1472 fails but 1300/1400 succeeds, that’s a big hint the path cannot sustain full 1500‑sized packets once GRE/Wi‑Fi overhead is in play.
Then, as a brutal, simple test, temporarily drop the VM NIC MTU:
netsh interface ipv4 show interfaces # get Id for the VM’s NIC
netsh interface ipv4 set subinterface <ID> mtu=1400 store=persistent
Bounce the NIC or reboot the VM, then try \\<HOST-IP>\c$ again.
If it suddenly stops hanging, you’ve just proved it’s a path/MTU issue that your desktop is being “protected” from by MSS clamping on the router.”
Sure enough, the MTU on Veeam's NIC was 1500; after dropping it to 1400 and retesting, the SMB path worked perfectly… Learn something new every day.
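As an aside, the more surgical fix would probably be MSS clamping on the OpenWRT nodes themselves, so every flow through the tunnel negotiates smaller segments instead of each VM needing its MTU lowered. A rough, untested sketch (OpenWRT 25.x is nftables-based; the table and chain names here are made up):
# Clamp TCP MSS to the path MTU for forwarded flows (hypothetical rule set)
nft add table inet mssfix
nft add chain inet mssfix forward '{ type filter hook forward priority mangle; }'
nft add rule inet mssfix forward tcp flags syn tcp option maxseg size set rt mtu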
Summary
This is a huge PITA, but it IS technically possible. It took me several days to figure all this out, all for something that would otherwise simply be tagging Ethernet frames on a physical hard-wired connection… all because “You can’t tag Ethernet frames over Wi‑Fi because 802.11 wireless doesn’t carry 802.1Q VLAN tags the way wired Ethernet does. Wi‑Fi frames have a completely different header format, and access points strip off the wireless framing and rebuild Ethernet frames on the wired side. Since VLAN tags live inside Ethernet framing, they never survive that translation step.”
AKA the engineers that designed the framework figured no one would ever have a need for this, so fuck designing for it.
I hope this blog post helps someone out. I learnt a lot along the way, even if it’s not practical.