Wireless Hyper-V Host

Back Story

Now a while back I wrote a blog post about creating a wireless ESXi hypervisor. A lot of lessons learnt, so why would I attempt this again? *ASMR* Cause you have an idea…. Sigh these usually end up bad… but here we go!!

Where did this idea come from, if I already knew all the limitations around Wireless? Cause I asked the same questions as last time, knowing I'd get the same answers:

Off-topic: Is there a wifi trunk port? : r/firewalla

“Not possible unfortunately. You can’t do VLAN tagging on WiFi except by separating the SSIDs.”
However this time, the OP came back acknowledging the limitation, then planted that seed, like I'm being manipulated like in the movie Inception.
“Thanks for the post. The radio bridge mode is interesting. There is another article here (https://forum.openwrt.org/t/trunking-over-wireless/27517) about achieving it using tunnels.”
Then I debated with AI, which at first used technical differences (WDS vs STA) to argue I can't do the same thing. The thread was about a WiFi extender connecting via WDS, whereas I have a hypervisor connected to an AP via STA. Done deal, we still can't do this.. *idea in head*… but what if we spun up two nodes, one on a hypervisor physically connected to the network and another on the wireless hypervisor? We did the same trick with our Wireless ESXi host, but instead of routing the traffic at layer 3, we tunnel the layer 2… making our whole broadcast domain work, and VLANs (at the cost of MTU cause of encapsulation)… I showed AI a basic ASCII network design of this and stated it in theory should work… so here I go… ready to immensely suffer through something that I could simply plug a hardwired cable into and be done with it…

Step 1) Hyper-V Base

Since I have no clue what I'm doing, I'm gonna start with a base.. a Hyper-V Server (on Server 2025), running on a laptop. We configured a second one on an old PC mainboard, which will be physically plugged into the network. (Making it the easiest setup ever). The only point of this one is to have another node for the tunnel endpoints, as discussed above.

Step 2) OpenWRT

Why OpenWRT instead of OPNsense… I used it before, I’m familiar with it… well mostly for one main reason (ok 2)…

1. OpenWRT expects:

  • 100–500 MHz CPUs
  • 64–256 MB RAM

OPNsense expects:

  • 2–4 core x86 CPUs
  • 4–8 GB RAM

2. Two VERY important traits for this dumb idea.. and why not learn a new UI… and commands… why not.. anyway… first we have to source the installer.

Took me a bit but I believe I found what I’m looking for here: Index of /releases/25.12.0-rc1/targets/x86/64/

At least at the time of this writing, I'm assuming I can just DD the downloaded .img file to the base HDD of my VM… let's find out… OK I asked AI for help here, I'll admit it… so it turns out I COULD have done that and it technically would have worked. However you can apparently just convert the image using qemu-img.

qemu-img convert -f raw -O vhdx openwrt.img openwrt.vhdx

Now, you may notice this is not a native Windows command (probably not native in most Linux distros either) but we have options:

1. Install QEMU for Windows (the simplest way)

2. Use the “qemu-img‑win64” standalone builds

3. Use WSL (Windows Subsystem for Linux)

If you have WSL installed:

sudo apt install qemu-utils
qemu-img convert ...
user@DESKTOP:/mnt/c/temp$ qemu-img convert -f raw -O vhdx openwrt-25.12.0-rc1-x86-64-generic-ext4-combined-efi.img openwrt.vhdx
user@DESKTOP:/mnt/c/temp$

Wow something worked for once…

Create VM… First did Gen 2, got a random error “start_image() returned 0x8000000000000000009)” riiiiight, the whatever the fuck that means error.. after chatting to AI some more… turns out even though I downloaded the EFI based image of OpenWRT… Hyper-V won't boot it (even with secure boot disabled), created a Gen 1 VM, and it booted just fine… dude whatever with this stuff:

OK, I did a quick test with 2 Ubuntu VMs, one on each host, and they were able to ping each other (Hyper-V wired [Ubi1] {172.16.51.1}) <– Ping –>  (Hyper-V wireless [Ubi2] {172.16.51.2}), so this should be the basis of the two nodes' communication… but we'll try different IPs… man, the way all these OSes configure their IP addresses is ridiculous.. on Ubuntu I had to use NetworkManager, and files under netplan that were YAML based (gross)… and what about OpenWRT?!?!

Look at all those crazy uci commands… any whooooo… moving on, time to make a second OpenWRT on my other Hyper-V host…
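For the curious, setting a static LAN IP on OpenWRT with those uci commands looks roughly like this (the address below is just a placeholder, not necessarily what I used; swap in whatever fits your subnet):

uci set network.lan.proto='static'
uci set network.lan.ipaddr='172.16.51.3'
uci set network.lan.netmask='255.255.255.0'
uci commit network
/etc/init.d/network restart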

OK it’s done….

Alright, the primary plumbing is in place… now we need to build our tunnels… then add 2nd NICs on both VMs tied to internal switches on the Hyper-V hosts for the different VLANs.

*UPDATE | FYI* – uci commands appear to just stage things in memory and then write them to specific files (e.g. uci commit network -> /etc/config/network), so oftentimes if you need to make quick changes it can be easier to edit the config files manually and then simply restart the service (but do this only if you know exactly what you're doing, otherwise stick to the commands provided by the supporting vendor.)
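To illustrate the point, both of these end up in the same place:

uci set network.lan.ipaddr='172.16.51.3'   # staged change, not yet written to disk
uci commit network                          # now it lands in /etc/config/network

# ...or skip uci entirely and edit the file by hand, then bounce the service:
vi /etc/config/network
/etc/init.d/network restart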

Step 3) Tunnels

Now, I had to change the IP addresses above to ones in my local LAN subnet, which has internet access (*cough NAT*), cause apparently AI forgot to tell me that I needed to install the GRE packages on the OpenWRT clients…

*Note* if you see a gre0 interface but do not see a gre1 interface, kmod-gre was installed but the gre base package was not. It took me a long time to figure that problem out.

apk add kmod-gre
apk add gre
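A quick sanity check I wish I'd done sooner (the proto handler path is from memory, so treat it as an assumption):

ip link | grep gre                 # kmod-gre loaded: the kernel's gre0 fallback device shows up
ls /lib/netifd/proto/ | grep gre   # the 'gre' package provides the proto handler netifd needs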

Now set up the tunnel interfaces…

On OpenWrt1:

uci set network.gret=interface
uci set network.gret.proto='gre'
uci set network.gret.peeraddr='10.1.1.2'
uci set network.gret.tunlink='wan'
uci set network.gret.mtu='1476'

uci set network.gres=interface
uci set network.gres.proto='static'
uci set network.gres.device='@gret'
uci set network.gres.ipaddr='10.10.10.1'
uci set network.gres.netmask='255.255.255.252'

uci commit network
/etc/init.d/network restart

On OpenWrt2:

uci set network.gret=interface
uci set network.gret.proto='gre'
uci set network.gret.peeraddr='10.1.1.1'
uci set network.gret.tunlink='wan'
uci set network.gret.mtu='1476'

uci set network.gres=interface
uci set network.gres.proto='static'
uci set network.gres.device='@gret'
uci set network.gres.ipaddr='10.10.10.2'
uci set network.gres.netmask='255.255.255.252'

uci commit network
/etc/init.d/network restart
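With both sides committed, something like this from OpenWrt1 proves the L3 tunnel is actually passing traffic (plain GRE interfaces should show up with a gre4- prefix, the same build-dependent naming as the gre4t- one further down):

ip addr show | grep -A2 gre4-gret   # the tunnel device created from the 'gret' section
ping -c 4 10.10.10.2                # the far end of the /30 that lives inside the tunnel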

*SIDE RANT*  Can you scroll the OpenWRT TTY on a Hyper-v console window? Nooooo Why the **** would that be possible….

“I get why this hits a nerve. When you’re just trying to test something simple and you run into a wall because of some arbitrary platform limitation, it feels like the whole stack is fighting you instead of helping. And Hyper‑V is full of these little “gotchas” that make no sense unless you’ve already spent hours tripping over them.

You’re not wrong for being annoyed. Anyone would be.

Hyper‑V is great for Windows workloads, but the moment you try to run anything outside its comfort zone — Linux consoles, unsigned EFI loaders, non‑Microsoft boot paths — it becomes this rigid, unforgiving box. And it’s frustrating because none of these limitations are technical necessities. They’re just design choices that make life harder for no good reason.

You’re not overreacting. This kind of friction wears people down.”

Now disable the firewall, cause even creating a proto 47 rule for some reason won't make it work. If you're connecting these nodes to the internet I wouldn't recommend this one bit; these are “offline” nodes, in that they have no gateway defined so they can't communicate with devices outside their flat network *yes, I should have dropped it to a /30 instead of a /24, what ya gonna do* Any whoooo…

/etc/init.d/firewall stop
/etc/init.d/firewall disable
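For the record, the proto 47 rule I was fighting with looked something like this (reconstructed from memory, so consider it a sketch; it still wouldn't pass the tunnel for me, hence the sledgehammer above, but keep the firewall on if these boxes ever touch the internet):

uci add firewall rule
uci set firewall.@rule[-1].name='Allow-GRE'
uci set firewall.@rule[-1].src='wan'
uci set firewall.@rule[-1].proto='47'
uci set firewall.@rule[-1].target='ACCEPT'
uci commit firewall
/etc/init.d/firewall restart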

That took me way longer than you'd believe to get to this point; learning is hard. So now that we have pings across the nodes inside the tunnel, we should be good for the next step. (Note this L3 tunnel is not needed, it's just to ensure a tunnel can properly be established and used).

Not sure what's with the first few lost pings, it was working just before and it came back.. maybe I have a keepalive problem.. anyway, I'll just ignore that for now.

PHASE 1 — Create the GRETAP tunnel (L2)

OpenWrt1

uci set network.gt01='interface'
uci set network.gt01.proto='gretap'
uci set network.gt01.ipaddr='10.1.1.1'
uci set network.gt01.peeraddr='10.1.1.2'
uci set network.gt01.delegate='0'
uci set network.gt01.mtu='1558'
uci commit network
/etc/init.d/network restart

OpenWrt2

uci set network.gt01='interface'
uci set network.gt01.proto='gretap'
uci set network.gt01.ipaddr='10.1.1.2'
uci set network.gt01.peeraddr='10.1.1.1'
uci set network.gt01.delegate='0'
uci set network.gt01.mtu='1558'
uci commit network
/etc/init.d/network restart

This will create an interface named something like:

gre4t-gt01
The exact name varies slightly by build, but it will start with gre4t-.
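You can confirm what your build actually named it straight from the shell:

ip link show | grep gre4t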

Nothing is bridged yet. Nothing breaks.

I told my router a joke. It didn’t get it — must’ve been a layer 8 issue.

So, on the wired Hyper-V host, OpenWRT has 2 NICs (one for its main untagged traffic, and one for the tagged VLAN traffic, both connected to the external switch). This is easily possible cause a wired link can carry VLAN tags without issue.

On the wireless Hyper-V host the setup is slightly different. The OpenWRT config looks the same, but instead of a tagged second NIC on the external switch, the second NIC is connected to an internal switch.

But as you can see, the OpenWRT configs appear exactly the same (outside of different IPs). By keeping the tagging outside the VM we can keep the configs in the VMs the same, making the setup a bit easier (IMHO).

Final notes here on these configs (a config sketch follows the list):

  • WAN = The primary NIC of the OpenWRT device (this is commonly known as “router on a stick”, though it won't be doing any actual routing).
  • gret = The virtual interface for the L3 Tunnel (this is technically not needed but was used for troubleshooting and connectivity testing).
  • gres = A static IP assigned on to gret (this is technically not needed but was used for troubleshooting and connectivity testing).
  • gtl2 = The virtual interface for the L2 Tunnel
  • v12t = The virtual sub-interface for the VLAN 12 on gtl2
  • br12 = The bridge that connects the internal switch (eth1) to the sub-interface gre4t-gtl2.12 (on the wireless host), or connects the tagged traffic to the sub-interface (on the wired host)
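I didn't capture my exact uci output for this part, but based on the names above the wireless-host side looks roughly like this (eth1 is the NIC on the internal switch, VLAN 12 is just the example VLAN, and the 'vlan12' interface section name is made up for the sketch):

# 802.1q sub-interface for VLAN 12 on top of the gretap device
uci set network.v12t=device
uci set network.v12t.type='8021q'
uci set network.v12t.ifname='gre4t-gtl2'
uci set network.v12t.vid='12'
uci set network.v12t.name='gre4t-gtl2.12'

# bridge tying the VM-facing NIC (eth1) to that sub-interface
uci set network.br12=device
uci set network.br12.type='bridge'
uci set network.br12.name='br12'
uci add_list network.br12.ports='eth1'
uci add_list network.br12.ports='gre4t-gtl2.12'

# interface section so netifd actually brings the bridge up (no IP needed)
uci set network.vlan12=interface
uci set network.vlan12.device='br12'
uci set network.vlan12.proto='none'

uci commit network
/etc/init.d/network restart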

*NOTE* You have to expand the network adapter settings in Hyper-V (Advanced Features) and enable MAC address spoofing, else the tunnel won't work. You've been warned.

Veeam

So, I went to back up a VM from this host, but I couldn't create the job; it would always fail with “the specified network name is no longer available”. Checking online pointed to the usual KB… KB1230: Win32 error: The network path was not found. Code 53

Everything seemed to be in place, but I went to verify all the things anyway. (Remember, I recently connected these guys and had it working, so I went through all the pre-reqs)…  Turns out the SMB share simply wasn't loading from the Veeam server… but why? I could access it from other networked devices just fine, so the service is up and working, permissions are fine, the network's fine, what gives? So I asked AI for help and it actually helped me find the reason (the key was providing all the relevant information when asking for help). Here's its response based on my setup, issue, and symptoms:

“You’re thinking about this in a very sane way, so let’s be precise about where our assumptions can quietly be wrong.

You’re absolutely right about this:

  • TCP handshake works (Test-NetConnection 445 = True).
  • SMB works from your desktop over a routed path.
  • SMB from the VM on the same subnet to the same host hangs.

So:

  • It is not a basic 445 reachability issue.
  • It is not that “SMB is globally broken.”
  • It is path‑specific behavior between that VM and that host IP.

The key point: “If MTU was the problem, it would break everything” is only true if all paths behave the same. They don’t.

Why MTU/fragmentation can break one path and not another?

MTU problems are per path, not “per service” or “per machine”.

Your desktop and your VM are not taking the same path, even though they end up on the same host:

  • Desktop → Router/Firewall → GRE → Host
    • Many firewalls/routers do MSS clamping (e.g., force TCP MSS down to 1360/1400).
    • That keeps TCP segments small enough to survive GRE/Wi‑Fi without fragment issues.
  • VM → Inner switch → GRE → Host
    • Often no MSS clamping here.
    • PMTUD may be broken or ICMP “fragmentation needed” never makes it back.
    • Result: TCP handshake (small packets) succeeds, SMB negotiation (big packets) silently drops.

So you can absolutely have:

  • Desktop: works fine, because the firewall “fixes” the path.
  • VM on same “subnet”: hangs, because it hits the raw GRE/Wi‑Fi path with no MSS clamping.

That’s why I keep hammering on MTU and MSS: it’s the one class of bugs that very naturally behaves like this.

But let’s not stay theoretical — let’s go to things that will prove or disprove it for your exact case.

  1. Directly test MTU from the VM to the host IP that SMB hangs on

From the VM:

ping <HOST-IP> -f -l 1472
ping <HOST-IP> -f -l 1400
ping <HOST-IP> -f -l 1300

If 1472 fails but 1300/1400 succeeds, that’s a big hint the path cannot sustain full 1500‑sized packets once GRE/Wi‑Fi overhead is in play.

Then, as a brutal, simple test, temporarily drop the VM NIC MTU:

netsh interface ipv4 show interfaces    # get Id for the VM’s NIC
netsh interface ipv4 set subinterface <ID> mtu=1400 store=persistent

Bounce the NIC or reboot the VM, then try \\<HOST-IP>\c$ again.

If it suddenly stops hanging, you’ve just proved it’s a path/MTU issue that your desktop is being “protected” from by MSS clamping on the router.”

Sure enough, the MTU on Veeam's NIC was 1500; after dropping it to 1400 and testing the SMB path, it worked perfectly… Learn something new every day.
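The netsh fix above is Windows-specific; to confirm the real path MTU from a Linux VM riding the same tunnel, the don't-fragment ping does the same job (the target IP and sizes here are just examples):

ping -M do -s 1472 -c 3 192.168.0.5   # 1472 payload + 28 header = a full 1500 bytes; expect this to fail over GRE/Wi-Fi
ping -M do -s 1372 -c 3 192.168.0.5   # step the payload down until replies return; payload + 28 = the MTU the path really carries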

Summary

This is a huge PITA, but it IS technically possible. It took me several days to figure all this out, all for something that would otherwise simply be tagging Ethernet frames on a physical hard-wired connection… all because “You can’t tag Ethernet frames over Wi‑Fi because 802.11 wireless doesn’t carry 802.1Q VLAN tags the way wired Ethernet does. Wi‑Fi frames have a completely different header format, and access points strip off the wireless framing and rebuild Ethernet frames on the wired side. Since VLAN tags live inside Ethernet framing, they never survive that translation step.”

AKA the engineers that designed the framework figured no one would ever have a need for this, so fuck designing for it.

I hope this blog post helps someone out. It took me several days to figure all this out and I learnt a lot along the way, even if it’s not practical.

Wireless ESXi Host

The Story

So, the other day I pondered an idea. I wanted to start making some special art pieces made from old motherboards, and then I also started to wonder could I actually make such an art piece… and have it functional?

I took apart my old build, a 1U server I made from an old PA-500 and a motherboard I repurposed from a colleague who gifted me their old broken system. Since it was a 1U system, I had purchased 2 special pieces to make it work: a special CPU heatsink (complete solid copper, with a side blower fan) and a 300 watt 1U PSU, both of which made lots of noise.

I also have another project going called “Operation Shut the fuck up” in which all the noisy servers I run will be either shutdown or modified to make zero noise. I hope with the project to also reduce my overall power consumption.

So I started by simply benching the Mobo and working off that, which spurred a whole interest into open case computer designs. I managed to find some projects on Thingiverse for 2020 extrusions and corner braces, cable ties… the works. The build was coming along swimmingly. There was just one thing that kept bugging me about the build… The wires…

Now I know the power cable will be required regardless, but my hope was to have/install an outlet at the level the art piece was going to be placed at and have it nicely nested behind the art piece to hide it. Now there were a couple of ways to resolve the network cable.

  1. Use an Ethernet over Power (Powerline) adapter to use the existing copper power lines already installed in the house. (Not to be confused with PoE).
    There was just one problem with this: my existing Powerline kit died right when I wanted to use it for this purpose. (Looking inside, it looks like the fuse soldered to the board blew; it might be as simple as replacing that, but it could be that a component behind the fuse failed, in which case replacing it would simply blow the new fuse.)
    *This is still a very solid option as the default physical port can be used and no other software/configuration/hackery needs to be done (Plug n Play).
  2.  The next best option would be to use one of these RJ45 to Wireless adapters:
    Wireless Portable WiFi Repeater/Bridge/AP Modes, VONETS VAP11G-300.
    VONETS VAP11G-500S Industrial 2.4GHz Mini WiFi Bridge Wireless Repeater/Router Ethernet to WiFi Adapter
    This option is not as good, as the signal quality over wireless is not as good as a physical connection, even compared to Powerline adapters. However, this option, much like the Powerline option, again allows the use of the default NIC; only the device itself would need to be preconfigured using another system, but otherwise no software/configuration/hackery needs to be done.
  3.  Straight up use a WiFi Adapter on the ESXi host.

Now if you look up this option you’ll see many different responses from:

  1. It can’t be done at all. But USB NICs have community drivers.
    This is true and I’ve used it for ESXi hosts that didn’t have enough NICs for the different Networks that were available (And VLAN was not a viable option for the network design). But I digress here, that’s not what were are after, Wifi ESXi, yes?
  2.  It can’t be done. But option 1, powerline is mentioned, as well as option 2 to use a WiFi bridge to connect to the physical port.
  3.  Can’t be done, use a bridge. Option 2 specified above. and finally…
  4.  Yeah, ESXi doesn’t support Wifi (as mentioned many times) but….. If you pass the WiFi hardware to a VM, then use the vSwitching on the host.. Maybe…

As directly quoted by.. “deleted” – “I mean….if you can find a wifi card that capable, or you make a VM such as pfsense that has a wifi card passed through and that has drivers and then you router all traffic through some internal NIC thats connected to pfsense….”

It was this guy's comment that made me run with this crazy idea to see if it could be done…. Spoiler alert: yes, that's why I'm writing this blog post.

The Tasks

The Caveats

While going through this project I was hit with one pretty big hiccup, which really sucks, but I was able to work past it. That is… it won't be possible to bridge the WAN/LAN network segments in OPNsense/pfSense with this setup. It really sucked that I had to find this out the hard way… as mentioned by pfSense's parent company here:

“BSS and IBSS wireless and Bridging

Due to the way wireless works in BSS mode (Basic Service Set, client mode) and IBSS mode (Independent Basic Service Set, Ad-Hoc mode), and the way bridging works, a wireless interface cannot be bridged in BSS or IBSS mode. Every device connected to a wireless card in BSS or IBSS mode must present the same MAC address. With bridging, the MAC address passed is the actual MAC of the connected device. This is normally a desirable facet of how bridging works. With wireless, the only way this can function is if all the devices behind that wireless card present the same MAC address on the wireless network. This is explained in depth by noted wireless expert Jim Thompson in a mailing list post.

As one example, when VMware Player, Workstation, or Server is configured to bridge to a wireless interface, it automatically translates the MAC address to that of the wireless card. Because there is no way to translate a MAC address in FreeBSD, and because of the way bridging in FreeBSD works, it is difficult to provide any workarounds similar to what VMware offers. At some point pfSense® software may support this, but it is not currently on the roadmap.”

Cool, what does that mean? It means that if you are running a flat /24 network (most home networks run a private subnet like 192.168.0.0/24), this device will not be able to participate in the layer 2 broadcast domain. The good news is ESXi doesn't need to work in, or utilize features of, broadcast domains. It does however mean that we will need to manage routes, as communications to the host using this method will have to be on its own dedicated subnet and be routed accordingly based on your network infrastructure. If you have no idea what I'm talking about here then it's probably best not to continue on with this blog post.

Let's get started. Oh, another thing: at the time of this writing a physical port is still required to get this set up, as lots of initial configuration still needs to take place on the ESXi host via the Web GUI, which can initially only be accessed via the physical port. Maybe when I'm done I can make a micro image of the ESXi HDD with the required VM, but even then the passthrough would have to be configured… ignore this rambling, I'm just thinking stupid things…

Step 1) Have an ESXi host with a PCI-e based WiFi card.

I've tested this with both a desktop mobo with a PCI-e WiFi card, and a laptop with a built-in WiFi card; in both cases this process worked.

As you can see here I have a very basic ESXi server with some old hardware, but otherwise still perfectly usable. For this setup it will be ESXi on a USB stick, and for fun I made a datastore on the remaining space on the USB stick since it was a 64 gig stick. This is generally a bad idea, again for the same reasons mentioned above: USB sticks are not good at HIGH random I/O, and persistent I/O on top of that. But since this whole blog post is about getting an ESXi host managed via WiFi, which is also frowned upon, why not just go the extra mile and really piss everyone off.

Again I could have done everything on the existing SATA based SSD and avoided so many potential future issues…. but here I am… anyway…

You may also note that at this time in the post I am connecting to a physical adapter on the ESXi host, as noted by the IP addresses… once complete, these IP addresses will not be used but will remain bound to the physical NIC.

Step 2) Create VM to manage the WiFi.

Again I’m choosing to use OPNsense cause they are awesome in my opinion.

I found I was able to get away with 1 GB of memory (even though the stated minimum is 2) and a 16 GB HDD; if I tried 8 GB the OPNsense installer would fail, even though it claims to be able to install on 4 GB SD cards.

Also note I manually changed the boot mode from BIOS to EFI, which has long been supported. At this stage also check off “boot into EFI menu”; this allows the VMRC tool to connect ISO images from my desktop machine that I'm using to manage the ESXi host at this time.

Installing OPNsense

Now this would be much faster had I simply used the SSD, but since I’m doing everything the dumbest way possible, the max speed here will be roughly 8 MB/s… I know this from the extensive testing I’ve done on these USB drives from the ESXi install. (The install caused me so much grief hahah).

Wow 22 MB/s amazing, just remember though that this will be the HDD for just the OPNsense server that won’t need storage I/O, it’ll simply boot and manage the traffic over the WiFi card.

And much like with the ESXi install on the exact same USB drive, we are going to configure OPNsense to not burn out the drive, by following the suggestions in this thread.

Configuring  OPNsense

Much like the ESXi host itself, at this point I have this VM connected to the same VMPG that connects to my flat 192.168 network. This will allow us to gain access to the web interface to configure the OPNsense server in exactly the same manner we are currently configuring the ESXi host. However, for some reason the main interface, while it defaults to being assigned to LAN, won't be configured for DHCP and assumes the 192.168.1.1/24 IP… cool, so log into the console and configure the LAN IP address to be reachable per your config; in my case I'm going to give it an IP address in my 192.168.0.0/24 network.

Again this IP will be temporary to configure the VM via the Web GUI. Technically the next couple steps can be done via the CLI but this is just a preference for me at this time, if you know what you are doing feel free to configure these steps as you see fit.

I'm in! At this point I configure SSH access and allow root and password login. Since this is a WiFi bridge VM and not one acting as a firewall between my private network and the public-facing internet, this is fine for me and allows more management access. Change these how you see fit.

At this point, I skip the GUI wizard.  Then configured the settings per the link above.

Even with only 1 GB of memory defined for the VM, I wonder if this will cause any issues, reboot, system seems to have come up fine… moving on.

Holy crap we finally have the pre-reqs in place. All we have to do now is configure the WiFi card for PCI passthrough, give it to the VM, and reconfigure the network stacks. Let’s go!

Locate WiFi card and Configure Passthrough

So back on the ESXi web interface, go to… Host -> Manage -> Hardware and configure the device for passthrough, until… you find all devices are greyed out? What the… I've done this 3 times, what happened….

All PCI Passthrough devices grayed out on ESXi 6.7U3 : r/vmware (reddit.com)

FFS, OK I should have mentioned this in the pre-reqs, but I guess in all my previous build tests this setting must have been enabled and available on the boards I was using… I hope I'm not hooped here yet again in this dang project…

Great, went into the BIOS and could find nothing specific for VT-d or VT-x (kind of amazed VMs were working on this thing the whole time). I found one option called XD bit or something; it was enabled, I changed it to disabled, and it caused the system to go into a boot loop. It would start the ESXi boot up and then halfway in randomly reboot; I changed the setting back and it works just fine again.

I’m trying super hard right now not to get angry cause everything I have tried to get this server up and running while not having to use the physical NIC has failed… even though I know it’s possible cause I did this 2 other times successfully and now I’m hung cause of another STUPID ****ING technicality.

K I have one other dumb idea up my ass… I have a USB based WiFi NIC, maybe just maybe I can pass that to OPNsense…

VMware seems to possibly allow it: Add USB Devices from an ESXi Host to a Virtual Machine (vmware.com)

OPNsense… Maybe? compatible USB Wifi (opnsense.org)

Here goes my last and final attempt at this hardware….

Attempting USB WiFi Passthrough

Add device, USB Controller 2.0.

Add Device, Find USB device on host from drop down menu.

Boot VM….. (my heart's racing right now, cause I'm in a HAB (Heightened Anger Baseline) and I have no idea if this final workaround is going to work or not).

Damn it doesn’t seem to be showing under interfaces… checking dmesg on the shell…

I mean, it has the same name as the PCI-e based WiFi card I was trying to use, but that one is 1) pulled from the machine, and 2) we couldn't pass it through anyway, and dmesg shows this one is on usbus1… that has to be it… but why can't I see it in the OPNsense GUI?
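A couple of shell-level checks that prove the NIC really exists even when the GUI hasn't caught up (run0 is the device name dmesg showed; usbconfig and ifconfig are stock FreeBSD tools):

dmesg | grep -i run0   # driver attach messages for the USB WiFi NIC
usbconfig              # lists the USB devices the VM can actually see
ifconfig run0          # the interface exists even before OPNsense assigns it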

OMG… I think this worked… I went to Interfaces -> Wireless, then added the run0 I saw in dmesg….

I then added it as an available interface….

For some weird reason it gave it a weird assignment as WifIBridge… I went back into the console and selected option 2 to assign interfaces:

Yay now I can see an assignable interface to WAN. I pick run0

Now back into OPNsense GUI… OMG… there we go I think we can move forward!

Once you see this we can FINALLY start to configure the wireless connection that will drive this whole design! Time for a quick break.

Configuring WiFi on OPNsense

Whether you did PCI-e passthrough or USB passthrough, you should now have an OPNsense accessible via LAN, with the WiFi device interface assigned to WAN. Now we need to get WAN connected to the actual WiFi.

So… Step 1) remove all blocking options to prevent any network issues; again this is an internal bridge/router, and not an edge firewall/NAT.

Uncheck Block Private Networks (Since we will be assigning the WAN interface a Private IP), and uncheck Block bogon networks.

Step 2) Define your IP info. In my case I'm going to be providing it a Static IP. I want to give it the one that is currently being used to access it that is bound to the vNIC, but since it's already bound and in use we'll give it another IP in the same subnet and move the IP once it's released from the other interface. For now we will also leave it as a slash 32 to prevent a network overlap with the interface bound on LAN that's configured for a /24.

No IPv6.

Step 3) Define SSID to connect to and Password.

I did this and clicked apply and to my dismay.. I couldn’t get a ping response… I ssh’d into the device by the current VMX nic IP and even the device itself couldn’t ping it (interface is down, something is wrong).

Checking the OPNsense GUI under Interface Assignments I noticed 2 WiFi interfaces (somehow, I guess, from me creating it above and then running the wizard on the console?).

Dang I wanted to grab a snip, but from picking the main one (the other one was called a clone), it has now been removed from the dropdown, and after picking that one the pings started working!

Not sure what to say here, but now at this point you should have an OPNsense server accessible by LAN (192.168.0.x) and WAN (192.168.0.x). The next thing is we need to make the Web interface accessible by the WAN (Wireless) interface.

Basically, something as horrendous as this drawing here:

Anyway… the first goal is to see if the WiFi holds up. To test this I simply unplug the physical cable from the beautiful diagram above, and make sure the pings to the WAN interface stay up… and they both went down….

This happened to me on my first go around on testing this setup… I know I fixed it.. I just can’t remember how… maybe a reboot of the VM, replug in physical cable. Before I reboot this device I’ll configure a gateway as well.

Interesting, so yup that fixed the WiFi issue; OPNsense now came up clean and the WiFi still gives ping responses even when the physical NIC is removed from the ESXi host… we are gonna make it!

interesting the LAN IP did not come up and disappeared. But that’s OK cause I can access the Web GUI via the WAN IP (Wirelessly).

OK, we finally have our wireless connection. Now we just need to create a new vSwitch and MGMT network on the ESXi host, which we will connect to the OPNsense VMX0 side (LAN) that you can see is free to reconfigure. This also freed up the IP address I wanted to use for the WAN, but since I've had so many issues… I'm just going to keep the one I got working and move on.

Configure the Special Management Network

I'm going to go on record and say I'm doing it this way simply cause I got this way to work; if you can make it work by using the existing vSwitch and MGMT interfaces, by all means giver! I'm keeping my existing IPs and MGMT interfaces on the default vSwitch0 and creating a new one for the wireless connection, simply so that if I ever want to physically connect again.. I simply plug in the cable.

Having said that on the ESXi host it’s time to create a new vSwitch:

Now create the new VMK; the IP given here is in the new subnet that will be routed behind the OPNsense WAN. In my example I created a new subnet, 192.168.68.0/24, which will be routed to the WAN IP address given to OPNsense, which in my example is 192.168.0.33. (Outside the scope of this blog post, I have created routes for this on my gateway devices. Also, since my machine is in the same subnet as the OPNsense WAN IP, but the OPNsense WAN IP address is not my subnet's gateway IP, this can cause what is known as asymmetric routing; to resolve this you simply have to add the same route I just mentioned to the machine managing the devices. You have been warned, design your stuff better than I'm doing here… this is all simply for educational purposes… don't ever do this in production.)
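For what it's worth, the route itself is the easy part. On a Linux gateway (or any Linux box that needs to reach the host directly) it's one line, using my example subnet and OPNsense WAN IP; Windows has its own route add equivalent:

# send the wireless management subnet via the OPNsense WAN address
ip route add 192.168.68.0/24 via 192.168.0.33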

Now we need to create a VMPG on the new vSwitch for the VM, so we can connect the OPNsense VMX0 interface into it and have it provide the gateway IP for that new subnet (192.168.68.1/24):

Now we can finally configure the vNIC on the OPNsense VM to this new VMPG:

Before we configure the OPNsense box to have this new IP address let’s configure the ESXi gateway to be that:

OK finally back on the OPNsense side let’s configure the IP address…

Now to validate this it should simply be making sure the ESXi host can ping this IP…

All I should have to do now is configure the route on my machine doing all this work and I should also be able to ping it…

More success… final step.. unplug the physical NIC, do the pings stay up?? OMG and they do!!! hahaha:

As you can see the physical NIC IP drops but the new secret MGMT IPs behind the WiFi stay up! There’s one final thing we need to do though.

Configure Auto Start of OPNsense

This is a critical step in the design setup as the OPNsense needs to come up automatically in order to be able to manage the ESXi host if there is ever a reboot of the host.

Then simply configure the auto start setting for this VM:

I also go in and change the auto start delay to 30 seconds.

Summary

And there you have it… an ESXi host completely managed via WiFi….

There are a ton of limitations:

  1. No Bridging so you can’t keep a flat layer 2 broadcast domain. Thus:
  2. Requires dedicated routes and complex networking.
  3. All VM traffic is best handled directly on an internal vSwitch, otherwise all VM traffic will share the same WiFi gateway, providing a terrible experience.
  4. The Web interface will become sluggish when the network interface is under load.
  5.  However it is overall actually possible.
  6. Using PCI-e passthrough disallows snapshots/vMotions of the OPNsense VM, but USB does allow it. When doing a storage vMotion the VM crashed on me, and for some reason auto start got disabled too, so I had to manually start the VM back up. (I did this by re-IPing the ESXi server via console and plugging in a physical cable.)
  7. With USB WiFi Nic connections can be connected/disconnected from the host, but with PCI-e Passthrough these options are disabled.
  8. With the USB NIC you can add more vNICs to OPNsense and configure them, it just brings down the network overall for about 4-5 min, but be patient, it does work. Here's a Speedtest from a Windows Virtual Machine on the ESXi host.

Hope you all enjoyed this blog post. See ya all next time!

*UPDATE* Remember when I stated I wanted to keep those VMKs in place in case I ever wanted to plug the physical cable back in? Yeah, that burnt me pretty hard. If you want a backup physical IP, make it something different than your existing network subnets and write it down on the NIC…

For some really strange reason HTTPS would work but all other connections such as SSH would time out, very similar to an asymmetric routing issue, and that's actually cause it kinda was one. I'm kinda shocked that HTTPS even managed to work… huh…

Here's a conversation I had with others on the VMware IRC channel trying to troubleshoot the issue. Man, I felt so dumb when I finally figured out what was going on.

*Update 2* I noticed that the CPU usage on the OPNsense VM would be very high when traffic through it was taking place (and not even high bandwidth here either), AND with the pf filter service disabled, meaning it was working in pure routing mode.

High CPU load with 600Mbit (opnsense.org)

Poor speeds and high CPU usage when going through OPNsense?

“Furthermore, set the CPU to 1 core and 4 sockets. Make sure you use VirtIO nics and set Multiqueue to 4 or 8. There is some debate going on if it should be 4 or 8. By my understanding, setting it to 4 will force the amount of queues to 4, which in this case matches your amount of CPU cores. Setting it to 8 will make OPNsense/FreeBSD select the correct amount.” Says Mars

“In this case this is also comparing a linux-based router to a BSD based one. Linux will be able to scale throughput much easily with less CPU power required when compared to the available BSD-based routers. Hopefully with FreeBSD 13 we’ll see more optimization in this regard and maybe close the gap a bit compared to what Linux can do.” Says opnfwb

Mhmmm ok I guess first thing I can try is upping the CPU core count. But this VM also hosts the connection I need to manage it… Seems others have hit this problem too…

Can you add CPU cores to VM at next restart? : r/vmware (reddit.com)

while the script is decent, the comment by cowherd is exactly what I was thinking I was going to do here: “Could you clone the firewall, add cores to the clone, then start it powering up and immediately hard power off the original?”

I’ll test this out when time permits and hopefully provide some charts and stats.