New vCenter Same Veeam

The Story

The Niche Situation

Now I know the title might sound strange, but this covers a niche issue that may randomly arise out in the industry. vCenter died, there was no backup, a new vCenter was spun up in its place with the same hostname, IP address and everything, the hosts were re-added, and you happen to use Veeam as your backup solution. Now, I have been down this rabbit hole in the past, and I have blogged about an unsupported method to fix the Veeam jobs in this situation. But it’s technically unsupported, so I asked what the “supported method” would be on the Veeam forums.

The short answer: “Oh, just use the VM-Migrator tool”, as referenced here.

“Veeam Backup & Replication tracks VMs in jobs using Managed Object Reference IDs (MORef-IDs), which change after migration or recreation of vCenter, causing MORef-ID misalignment.

Veeam VM Migrator utility is integrated into Veeam Backup PowerShell module, and it allows you to resolve MORef-ID misalignment. As a result, your backup incremental chains will remain intact after an inventory change in vCenter.

The utility consists of the following cmdlets:

  • Set-VBRVmBiosUuid — this cmdlet updates the BIOS UUIDs of existing VM entries within the Veeam Backup & Replication configuration database based on information from the old vCenter.
  • Set-VBRVCenterName — this cmdlet modifies vCenter name by adding the _old suffix to its name.
  • Generate-VBRViMigrationSpecificationFile — this cmdlet generates a migration task file which contains the list of mapping tasks.
  • Start-VBRViVMMigration — this cmdlet starts MORef-IDs update.”

So, this tool is supposed to do what I did via the backend, but as a supported frontend tool. My case is slightly different from what the tool expects, though, in that my old and new vCenter are the same, not two unique instances of vCenter with unique names both running live in parallel. Mine’s simply been rebuilt directly in place.

Step 1) Realize your vCenter is toast.

How you realize this will be random and situational; in my case my trial expired, and all ESXi hosts showed disconnected. I’m gonna treat this as a full loss by simply shutting down and nuking all the VM files… it’s simply dead and gone… and I have no configuration backup available.

This is why this is considered a niche situation, as I’d hope that you always have a configuration backup file of your critical infrastructure server. But… what if (and here we are, in that what if, again)…

Step 2)  Rebuild vCenter with same name.

Yay, extra 20 min cause of a typo, but an interesting lesson learnt.

Renaming vCenter SSO Domain – Zewwy’s Info Tech Talks

Let’s quickly rebuild our cheap cluster, configure retreat mode and add our hosts back in…

OK, so now we’ve set our stage and we have a broken Veeam instance. If we try to scan it, it will be no good because the certificate changed when vCenter was rebuilt… so David says “So in your case, if you can restore Veeam’s configuration database to before you made these changes, instead of your step 4 there, you will begin the migration procedure and use the Set-VBRVCenterName cmdlet on the existing vCenter in Veeam, re-add your newly rebuilt vCenter to Veeam, and then perform the migration.”

Step 3) Run “Set-VBRVCenterName”.

So far, so good.. now..

Step 4) Add new vCenter to Veeam.

Step 5) Generate Migration File.

Now I’m back to assuming, because the instructions in Veeam’s provided guidance are unclear. I’m assuming I have to run the generate command before I run the start migration command…
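For what it’s worth, here’s the order I’m assuming, sketched with the cmdlets from Veeam’s doc quoted above. The parameter names, the file path, and whether piping works are my guesses, not verified against Veeam’s reference:

```powershell
# Rename the dead vCenter's entry (adds the _old suffix per Veeam's doc)
# NOTE: parameter shapes below are assumptions; server name is an example
Get-VBRServer -Name "vcenter.zewwy.ca" | Set-VBRVCenterName

# After re-adding the rebuilt vCenter in the console, build the mapping file
Generate-VBRViMigrationSpecificationFile -Path "C:\Temp\vm-migration.txt"

# Review the file, then kick off the actual MORef-ID update
Start-VBRViVMMigration -Path "C:\Temp\vm-migration.txt"
```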

Checking out the generated file, it’s a plain text file with a really weird syntax choice, but the VM IDs are clearly the same ones I was mapping manually in my old blog post.

Step 6) Start the Migration.

I have no clue what that warning is about… I mean, the new vCenter was added to Veeam, and the VM IDs matched what I see in the URL when navigating them, like in my old blog… I guess I’ll just check the VBR console…

I did a recalculate on the VM inside the backup job and it calculated, so it looks like it worked. Let’s run a backup job and check the chain as well…

The job ran just fine… and the chain’s still intact. Looks like it worked; this was the supported way, and it did feel easier, especially if scaled out to hundreds of VMs.

Hope this helps someone.

Renaming vCenter SSO Domain

Whoopsie, I made boo boo, How me fix?

Source:  Repointing vCenter Server to another SSO Domain – VMware Cloud Foundation (VCF) Blog

In this example I will be repointing a single vCenter Server (version 7.0.3) in the SSO Domain “csphere.local” to an entirely new SSO Domain named “vsphere.local”. Since there are no other vCenter Servers in the vsphere.local SSO Domain, a pre-check is not required, thus we will not have any conflicts to resolve.

In my lab I am repointing vCenter.zewwy.ca which is my vCenter Server. Notice that my Single Sign-On domain is csphere.local currently.

From the appliance shell we can run cmsso-util to review our command syntax. Here we can also see the other functions of the cmsso-util command such as unregister, reconfigure, repoint (for repointing a vCenter Server to another SSO Site), and domain-repoint. We will be using the domain-repoint argument to point our vCenter Server to a new SSO Domain.

Since we are not migrating this vCenter Server into an existing SSO Domain, there is no need to do a pre-check to review any possible data conflicts between the Source and Destination domains. We begin repointing with the following command:

 cmsso-util domain-repoint -m execute --src-emb-admin Administrator --dest-domain-name vsphere.local

NOTE: The SSO Administrator (Administrator@csphere.local) credentials ARE REQUIRED. Also, the Destination domain name (--dest-domain-name) equals the name of the new SSO Domain you are pointing the Source vCenter Server to.
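For the merging case we skipped, the same tool has a pre-check mode; a sketch, run from the appliance shell (mode and flags per VMware’s cmsso-util docs, but double-check against your build):

```shell
# Dry-run that surfaces data conflicts between the Source and Destination
# domains before committing; only needed when the destination SSO domain
# already contains other vCenter Servers.
cmsso-util domain-repoint -m pre-check --src-emb-admin Administrator --dest-domain-name vsphere.local
```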

Yay, it worked, I can finally successfully sign into vSphere… dang, one typo cost about 20 min of time, crazy, but I guess it’s slightly better than having to do a whole rebuild, which takes a bit more time, but not far off, sheesh. Hope this post helps someone.

Adding a Hyper-V host to Veeam

Before You Begin – Veeam Backup & Replication User Guide for Microsoft Hyper-V

Before you add a Microsoft Hyper-V server to the backup infrastructure, check the following prerequisites:

  • Check permissions required to add the server. For more information, see Permissions.
    • Admin permissions based account got it…
  • [For SCVMM] SCVMM Admin UI must be installed on the backup server. Otherwise, you will not be able to add SCVMM servers to the backup infrastructure.
  • SCVMM console version must match the management server version.
  • Make sure that you do not add to the backup infrastructure Hyper-V hosts or clusters managed by an SCVMM server if this SCVMM server is already added to the backup infrastructure.
  • Nope, just a standalone host
  • File and printer sharing must be enabled in network connection settings of the added Microsoft Hyper-V host. Otherwise, Veeam Backup & Replication will fail to deploy required components.
    • Uhhh wut?
  • Make sure that the NETBIOS name of the Microsoft Hyper-V Server is successfully resolved.
    • Uhhh wut?
  • If you get the “Invalid Credentials” error when adding a Hyper-V host using a local account, see this Veeam KB article.

This is gonna suck..

Unable to add a single Hyper-V host to Veeam. : r/Veeam

i am unable to add Hyper V hosts to Veeam | Veeam Community Resource Hub

Why?…

When you add a Hyper‑V host to Veeam Backup & Replication, the product deploys its transport service and integration components remotely using Windows’ built‑in administrative shares (ADMIN$, C$). That’s why File and Printer Sharing must be enabled on the NIC: without those hidden shares, Veeam cannot copy files or install its agents. By default, only the built‑in Administrator or domain admin accounts can access these shares remotely, because User Account Control (UAC) strips remote admin rights from other local accounts. This often surprises people who harden their hosts by disabling the Administrator account or removing shares, since Veeam’s deployment model depends on them being present.

On standalone Hyper‑V hosts, this creates a security trade‑off. You can either leave the built‑in Administrator enabled (simpler, but harder to audit), or disable UAC remote restrictions so named local admin accounts can access the shares (more auditable, but technically weaker security posture because all local admins gain remote rights). In practice, many administrators prefer creating a dedicated service account for Veeam and a separate account for human administration, then disabling the built‑in Administrator. This way, activity is traceable and controlled, while still allowing Veeam to function. The nuance is that Veeam chose the “lowest common denominator” approach — SMB admin shares — which works everywhere but clashes with modern hardening practices, so standalone hosts require careful balancing of convenience, auditability, and exposure.

Step 1) Enable SMB

Install-WindowsFeature -Name FS-FileServer -IncludeManagementTools

Edit your firewall rules as required, as this will create 3 new ones and open them up (135 DCOM, 445 SMB, and the dynamic RPC ports one); in my case I disabled them and only enabled the restrictive SMB rule.
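That lockdown can be scripted too; a sketch, assuming the English-locale display-group and rule names (adjust for your locale):

```powershell
# See what the role install just opened up
Get-NetFirewallRule -DisplayGroup "File and Printer Sharing" |
    Select-Object DisplayName, Enabled, Direction

# Disable the whole group, then re-enable only the inbound SMB rule
Get-NetFirewallRule -DisplayGroup "File and Printer Sharing" | Disable-NetFirewallRule
Enable-NetFirewallRule -DisplayName "File and Printer Sharing (SMB-In)"
```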

Check off File and Printer Sharing for Microsoft Networks under the settings of the NIC that will be used to add Hyper-V to Veeam.

Maybe we can enable it only during deployment and then disable it; let’s find out. On Hyper-V, let’s create a dedicated Veeam admin account, then disable remote UAC while adding the host to Veeam. Done, adding host to Veeam…

Option 1) Specify the local administrator account. (Usually disabled on hardened servers)

OR

Option 2) Edit the registry to allow remote UAC, so the built-in admin shares can be accessed by a named admin account instead of the built-in Administrator account.

Why Veeam doesn’t allow for the ability to prepare a Hyper-V host via these install packages manually, without exposing the host to these additional attack surfaces, is honestly beyond me. I usually love Veeam, but this one is kind of dumb.

Step 2) Disable Remote UAC restrictions

I’ll stick with option 2: User Account Control and remote restrictions – Windows Server | Microsoft Learn

To disable UAC remote restrictions, follow these steps:

  1. Click Start, click Run, type regedit, and then press ENTER.
  2. Locate and then click the following registry subkey:
    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System
  3. If the LocalAccountTokenFilterPolicy registry entry doesn’t exist, follow these steps:
    1. On the Edit menu, point to New, and then select DWORD Value.
    2. Type LocalAccountTokenFilterPolicy, and then press ENTER.
  4. Right-click LocalAccountTokenFilterPolicy, and then select Modify.
  5. In the Value data box, type 1, and then select OK.
  6. Exit Registry Editor.
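The same registry change can be done in one line from an elevated PowerShell prompt:

```powershell
# Disable UAC remote restrictions (reverse by setting the value back to 0)
New-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System' `
    -Name 'LocalAccountTokenFilterPolicy' -Value 1 -PropertyType DWord -Force
```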

Now open a File Explorer window on the Veeam server and point it to \\IPofHyper-V\admin$. It should prompt you for creds; you should be able to provide the creds of the named admin account and it should connect.
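If you’d rather test from PowerShell than File Explorer, something like this should behave the same (the host address keeps the placeholder above, and the account name is a placeholder too):

```powershell
# Map the admin share with the named local admin's creds; success here means
# Veeam's component deployment should also be able to reach it.
$cred = Get-Credential   # e.g. HV-HOST\veeamadmin (placeholder)
New-PSDrive -Name HV -PSProvider FileSystem -Root '\\IPofHyper-V\admin$' -Credential $cred
Get-ChildItem HV:\ | Select-Object -First 5
Remove-PSDrive -Name HV
```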

Well I got past the error…

Sigh n’ groan… ughhh… to be continued, time to make a Server 2025 image…

Lets try again..

And this time success…

Restore Storage Theory

🖥️ Scenario

  • Source: Veeam is running inside a VM on ESXi.
  • Repository: Local storage attached to that VM (so Veeam sees it as a local NTFS/ReFS volume).
  • Target: A standalone Hyper‑V host with only local storage (no SMB shares, no clustered SOFS).

🔧 How Veeam Writes the VM HDD Files

  1. Restore job starts
    • You pick the Hyper‑V host as the restore target.
    • Veeam knows it must deliver VHDX files + VM configuration to that host’s storage path (e.g., D:\VMs\MyVM\).
  2. Transport service on Hyper‑V host
    • Veeam deploys or uses its Veeam Data Mover Service (part of the Veeam transport service) on the Hyper‑V host.
    • This service is responsible for receiving blocks of data and writing them to disk.
  3. Data transfer
    • The Veeam server (on ESXi) reads blocks from the backup file in its local repository.
    • Those blocks are sent over the network to the Hyper‑V host using Veeam’s own transport protocol (TCP/IP).
    • Important: This is not SMB — it’s Veeam’s proprietary data mover channel.
  4. File creation on Hyper‑V host
    • The transport service on the Hyper‑V host opens a file handle on the local filesystem (NTFS/ReFS).
    • It creates the target VHDX file and writes the incoming blocks directly using standard Windows file I/O APIs (CreateFile, WriteFile, etc.).
    • VM configuration files (.vmcx, .vmrs) are also written directly to the host’s local storage.
  5. Completion
    • Once all blocks are written, Hyper‑V sees the restored VM files in its local storage.
    • Veeam registers the VM with Hyper‑V Manager if you chose a full VM restore.

✅ Key Points

  • No SMB is used here.
  • Veeam uses its own transport service to push data over TCP/IP to the Hyper‑V host, which then writes the files directly to local disk.
  • SMB only comes into play if the repository or Hyper‑V storage is on a remote file server (like a NAS or SOFS cluster).

Restore to Hyper-V

Here’s a whole video on the process, because I wasn’t sure how to do it: when I selected restore entire VM to a new location, only my ESXi hosts were selectable. AI said it’s not possible, Googling said that Instant Restore was the only option… mhmm, that video showed the same thing…

I won’t lie, I felt so dumb at first because the restore prompt said “waiting on user input” and there was an open console link at the bottom of the instant restore wizard. I clicked that and it kept asking for creds (I thought the Hyper-V ones) and it kept failing… till I realized the VM is already running (or not, based on your selection) and already registered to the host; you have to finish an instant restore by clicking the migrate to production option.

I tell ya… that made me feel really…. reallllly dumb…..

Anyway, I hope this post helps someone.

 

Hyper-V Networking

Install Hyper-V, and let’s learn its networking


On VMware, when you install the OS (ESXi) on the hardware, in the DCUI you simply set the VLAN tag right on the network interface you configure and bind it to a physical NIC in the menu area. Simple.

We leave the network connected to a non-tagged, internet-allowed connection. Now we can configure our networks.

1. Create one external switch

  • Bind it to your physical NIC (configured as a trunk on the upstream switch).
  • Check “Allow management OS to share this network adapter” once — this creates the first host vNIC.

2. Add more host vNICs via PowerShell:

# Add another vNIC for the host bound to the same external switch
Add-VMNetworkAdapter -ManagementOS -Name "HostVLAN20" -SwitchName "ExternalSwitch"
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "HostVLAN20" -Access -VlanId 20

3. Verify in Windows

  • You’ll now see multiple vEthernet adapters in the host OS (e.g., “vEthernet (HostVLAN20)”, “vEthernet (HostVLAN30)”).
  • Assign IP addresses or enable DHCP as needed.
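Step 3 can be confirmed from PowerShell as well:

```powershell
# Host-side vNICs and which switch they hang off
Get-VMNetworkAdapter -ManagementOS | Select-Object Name, SwitchName
# And their VLAN assignments
Get-VMNetworkAdapterVlan -ManagementOS
```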

*Note* this is for doing it on a single-NIC server (for my lab use); usually you’d have dedicated NICs for the other networks on those servers… anyway… where was I… right, now that we’ve got an additional network on the server we can work on shared storage, you know how I like my iSCSI.

So, when I set up Hyper-V and connect a VM to the “Default” virtual switch, the Hyper-V server acts as a DHCP provider and NATs the connections from that vSwitch to whatever other network the host is connected to?

Great question — this trips up a lot of people when they first use Hyper‑V! Let’s break it down clearly:

🖧 Hyper‑V Default Switch Behavior

  • Introduced in Windows 10 (1709): The “Default Switch” is automatically created by Hyper‑V and cannot be deleted or modified.
  • NAT (Network Address Translation): Yes — the host machine provides NAT services. This means your VM’s traffic is translated and routed through the host’s network connection, regardless of whether that’s Wi‑Fi, Ethernet, or even VPN.
  • DHCP Services: The host also runs a lightweight DHCP service for the Default Switch. This is what hands out IP addresses to your VMs automatically (usually in the 172.16.x.x range).
  • Connectivity: VMs connected to the Default Switch can reach the internet and other devices on the host’s network, but inbound connections from the LAN to the VM are limited. It’s designed for convenience, not for complex networking setups.

⚖️ Comparison with Other Switch Types

  • Default Switch: DHCP ✅ Yes (host DHCP) · NAT ✅ Yes (host NAT) · Use case: quick setup, internet access without config
  • External Switch: DHCP ❌ No (relies on external network DHCP) · NAT ❌ No · Use case: VM directly on physical network, full LAN visibility
  • Internal Switch: DHCP ❌ No · NAT ❌ No · Use case: communication between host and VMs only
  • Private Switch: DHCP ❌ No · NAT ❌ No · Use case: communication between VMs only

✅ So to answer directly:

Yes — when you connect a VM to the Default Switch, Hyper‑V’s host acts as a DHCP provider and NATs the VM’s traffic to whatever network the host is connected to. It’s essentially a convenience bridge so you don’t have to manually configure networking.
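You can see this construct from the host side; a quick peek, assuming the default interface alias Windows creates:

```powershell
# The built-in switch itself
Get-VMSwitch -Name 'Default Switch'
# The host-side gateway address ICS hands out for it (typically 172.x.x.1)
Get-NetIPAddress -InterfaceAlias 'vEthernet (Default Switch)' -AddressFamily IPv4
```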

“That’s what I did, configured an external virtual switch so the host’s NIC would be VLAN tagged in a certain network. I was figuring the default switch would act more like a… well, switch, at layer 2 of the host’s NIC, so VMs on it would get DHCP from the main network, as if it was connected to a switch in the base network, not layer 3 and then layer 4 NAT, like c’mon…”

Exactly — you’ve nailed the distinction. The Default Switch in Hyper‑V is deliberately not a true Layer‑2 bridge to your host NIC. It’s a convenience construct that Microsoft built to “just work” for internet access without requiring you to mess with VLANs, external switches, or DHCP servers on your LAN.

Here’s the breakdown:

🔌 Why the Default Switch behaves like NAT

  • It’s implemented using Windows Internet Connection Sharing (ICS) under the hood.
  • ICS is a Layer‑3 service: it provides a private subnet (usually 172.16.0.0/12), runs a DHCP service, and NATs traffic out through whatever adapter the host is using.
  • That’s why VMs on the Default Switch don’t see your LAN’s DHCP server — they only see the host’s mini DHCP/NAT service.
  • It’s intentionally isolated so you don’t accidentally expose VMs directly to your corporate or home LAN without realizing it.

⚖️ Why it’s not Layer‑2 bridging

  • A true Layer‑2 bridge would forward broadcasts (like DHCP Discover) from the VM to your LAN. That’s what an External Virtual Switch does.
  • The Default Switch instead terminates those broadcasts at the host and answers them itself. That’s why you only ever get the host‑provided DHCP.
  • VLAN tagging, trunking, or other advanced networking features aren’t supported on the Default Switch — it’s locked down.

✅ When to use each

  • Default Switch (NAT + DHCP): Quick internet access, testing, dev boxes, laptops where you don’t want to reconfigure NICs.
  • External Switch (Layer‑2 bridge): When you want VMs to behave like physical machines on your LAN, get DHCP from the real network, or use VLAN tagging.
  • Internal/Private Switches: For host‑VM communication or VM‑VM only networks.

So your expectation — that the Default Switch would act like a transparent Layer‑2 bridge — makes sense, but Microsoft intentionally designed it as a Layer‑3 NAT gateway instead. If you want “real switch” behavior, the External Switch is the only way to go.
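In sketch form, the “real switch” route looks like this (the NIC name, VM name, and VLAN ID are examples):

```powershell
# Bridge a physical NIC into an external switch, keeping host connectivity
New-VMSwitch -Name "ExternalSwitch" -NetAdapterName "Ethernet" -AllowManagementOS $true
# Put a VM's traffic on VLAN 20 as an access port
Set-VMNetworkAdapterVlan -VMName "TestVM" -Access -VlanId 20
```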

“So, whatever my host machine is allowed to do, the guest VMs in the default switch are also allowed to do? like access the internet, and the firewall that gave the host the ability to access the internet would never see the private NATed IP address the Hyper-V host is dishing out to those VMs?”

You’ve got it — that’s exactly how the Default Switch works in Hyper‑V. Let me spell it out clearly:

🌐 How traffic flows with the Default Switch

  • VMs get private IPs: Each VM connected to the Default Switch is assigned an IP from the host’s internal DHCP service (usually 172.16.x.x).
  • Host does NAT: The Hyper‑V host translates those private IPs into its own IP before sending traffic out to the physical network.
  • Firewall visibility: From the perspective of your upstream firewall or router, all traffic from those VMs looks like it’s coming from the host’s IP. The firewall never sees the VM’s private NATed addresses.
  • Permissions: Whatever the host is allowed to do (e.g., access the internet, reach certain networks), the VMs can also do — because their traffic is piggybacking on the host’s connection.

⚖️ Important implications

  • Outbound access: VMs can reach the internet or LAN resources as long as the host can. They inherit the host’s connectivity.
  • Inbound access: External devices cannot directly initiate connections to those VMs, because NAT blocks unsolicited inbound traffic unless you set up port forwarding (which the Default Switch doesn’t expose for configuration).
  • Firewall rules: The upstream firewall only enforces rules against the host’s IP. If you need per‑VM visibility or control, you’ll need an External Switch so each VM gets its own LAN IP.

✅ So yes: the Default Switch makes your VMs “ride along” with the host’s network permissions. They’re hidden behind NAT, which is why your firewall only ever sees the host’s traffic.

Additional Information

If you want to apply a VLAN tag to a physical NIC on a Windows machine that’s not a Hyper-V host:

Since I didn’t see anything in the usual Network IPv4 configuration window in Windows, I decided to see how useful this AI stuff is… so I asked it…

“how do I VLAN tag the IP on windows 11 interface?”

Quick Answer: On Windows 11, VLAN tagging isn’t enabled by default. You need a NIC that supports IEEE 802.1Q VLAN tagging, then configure the VLAN ID either through the NIC’s Device Manager properties (Advanced tab → VLAN ID) or by using Intel PROSet/ANS tools or PowerShell (Set-NetAdapter -VlanID) if your driver supports it.

🖥️ Steps to Configure VLAN Tagging on Windows 11

1. Check NIC Compatibility

  • VLAN tagging requires a network card (NIC) that supports IEEE 802.1Q.
  • Not all NICs expose VLAN options in Windows. Intel and Broadcom NICs often support it, but you may need vendor-specific drivers.

2. Configure via Device Manager

  • Press Win + R, type devmgmt.msc, and hit Enter.
  • Expand Network adapters, right-click your NIC → Properties.
  • Go to the Advanced tab.
  • Look for VLAN ID or Priority & VLAN.
  • Enter the VLAN ID (e.g., 100) and click OK.

3. Configure via Intel PROSet/ANS (if supported)

  • Install Intel PROSet/Advanced Network Services (ANS).
  • In the NIC properties, Intel’s tab allows creating multiple VLAN interfaces with different IDs.
  • Each VLAN interface can then be assigned its own IP address.

4. Configure via PowerShell

  • Open PowerShell as Administrator.
  • Run:
    Set-NetAdapter -Name "Ethernet" -VlanID 100
    
  • Replace "Ethernet" with your adapter name and 100 with your VLAN ID.
  • Note: This only works if your NIC driver supports VLAN tagging. Otherwise, you’ll get an error like “Requested operation not supported on adapter”.

5. Assign IP Address

  • Once VLAN tagging is enabled, go to Network & Internet settings → Change adapter options.
  • Right-click the VLAN-enabled adapter → Properties → Internet Protocol Version 4 (TCP/IPv4).
  • Assign the IP address, subnet mask, and gateway for that VLAN.

⚠️ Important Notes

  • Driver Support is Key: If your NIC driver doesn’t expose VLAN options, you cannot tag VLANs directly in Windows without Hyper-V or third-party tools.
  • Hyper-V Alternative: You can create VLAN-tagged virtual NICs using Hyper-V’s Virtual Switch Manager if your hardware doesn’t support native tagging.
  • Testing: Use ping or tracert to verify connectivity once VLAN tagging is applied.

Bottom Line: To VLAN tag an IP on Windows 11, you must enable VLAN tagging on the NIC (via Device Manager, Intel PROSet/ANS, or PowerShell if supported), then assign the IP to that VLAN interface. If your NIC doesn’t support VLAN tagging, you’ll need Hyper-V or vendor-specific utilities.

My response: “I’ve got an Intel 82579LM Gig NIC, I see it says VLAN enabled in the advanced tab, but I don’t see an area to configure it. I’m sure the NIC supports it since I was using ESXi with VLAN tagging just fine.”

That’s a great observation — the Intel 82579LM definitely supports VLAN tagging at the hardware level, but here’s the catch: Windows drivers for that NIC don’t expose VLAN configuration in the same way ESXi or Linux does.

🔎 Why You Don’t See VLAN Config in Windows

  • The Advanced tab in Device Manager shows “Priority & VLAN” or “VLAN Enabled,” but that option only toggles whether the NIC can process VLAN-tagged frames.
  • Unlike ESXi, Windows doesn’t let you directly assign VLAN IDs to a physical NIC unless the driver provides a VLAN interface creation utility (like Intel PROSet/ANS).
  • For the 82579LM, Intel stopped including VLAN configuration in their Windows drivers after Windows 8/10. On Windows 11, the driver exposes VLAN capability but no UI to configure VLAN IDs.

✅ Workarounds

Here are the practical options:

1. Intel PROSet/ANS (if available)

  • Older Intel drivers came with Advanced Network Services (ANS), which let you create VLAN sub-interfaces.
  • On Windows 11, Intel no longer bundles ANS with drivers for the 82579LM, so you won’t see the VLAN tab.

2. PowerShell Command

  • Windows has a Set-NetAdapter -VlanID cmdlet, but it only works if the NIC driver supports VLAN tagging through the OS.
  • On the 82579LM with current Windows 11 drivers, this usually fails with “Requested operation not supported”.

3. Hyper-V Virtual Switch

  • If you enable Hyper-V, you can create a virtual NIC and assign VLAN IDs there.
  • This is the most reliable way to do VLAN tagging on Windows when the physical NIC driver doesn’t expose VLAN config.

4. Third-Party NIC Utilities

  • Some OEMs (Dell, HP) ship custom drivers/utilities that re-enable VLAN tagging. If your system vendor has one, that’s worth checking.

⚠️ Bottom Line

Your Intel 82579LM can tag VLANs, but on Windows 11 the Intel driver doesn’t expose VLAN configuration. That’s why you don’t see a place to set the VLAN ID.

  • On ESXi/Linux → VLAN tagging works natively.
  • On Windows → You need Intel PROSet/ANS (no longer supported for this NIC), Hyper-V, or vendor-specific drivers.

Well ok I guess once we install Hyper-V hopefully it’ll be more like ESXi in a sense and we can tag.

*UPDATE* ExternalSwitch, WiFi NIC

I was trying to use a Hyper-V server with an External Switch bound to a WiFi NIC, and my results were NOT as I had expected; I was assuming Default Switch behavior. You may be asking, “Why would you expect that behavior if you’re creating an External Switch?” Well, if you read my Wireless ESXi host blog, you’re probably well aware of the VLAN limitations of WiFi, and that it can never truly be used as a trunk port (a limitation of the 802.11 standard, not of the OS or software).

So how could an External Switch work via WiFi if the layer 2 broadcast doesn’t work and can’t “speak” with the rest of the layer 2 stack? Yet I create a VM and it DOES get a DHCP lease address from my local subnet?! What the heck is going on here…

So I had to ask AI what was going on here, and it says, yeah… that’s expected… here’s the deets… get ready… it’s a long one…

BAH-LETED; there was way tooooo much BS from the AI response to justify keeping it in here… long story short: local subnet VMs work fine (it does ARP masquerading), and VLANs will never work, per the usual BS I’ve talked about in my Wireless ESXi host blog.

Migrating/Restoring Veeam


In one of my previous posts I discussed upgrading Veeam; today I want to discuss migrating it entirely. Or recovering it, as the process here is essentially the same.

Disclaimer: what you do in your own environment is on you; everything in this blog is for educational purposes only. This also doesn’t cover encryption management; all data is moved in place (e.g., disconnecting an HDD from one machine and reconnecting it to another), with the data at rest being unencrypted.

Step 1) Sign in to Veeam portal

I didn’t have a paid product license, so my download section was full of free trial links. I’m using CE (Community Edition), from here: Free Backup Software For Windows, VMware, & More – Veeam

Step 2) Download the ISO

It’s a doozy at 13 GB.

Step 3) Read the update notes for any expected issues/outcomes.

For all the FAQs go here: Veeam Upgrade FAQs

For basic System Requirements and release notes see here: Veeam Backup & Replication 12.3 Release Notes

The main thing will be the change of the backend SQL service, moving from MS SQL Express to PostgreSQL, though it’s not directly mentioned from what I can see, other than in step 8 of the upgrade path: Upgrading to Veeam Backup & Replication 12.3 – User Guide for VMware vSphere

Step 4) Attach the ISO

Attach it to the server being upgraded or installed on.

In my case this time, I’m simply cloning my freshly semi-hardened Windows 11 image, giving it a whopping 8 GB of RAM and a 64 GB HDD for the OS and the Veeam app to live on. While that’s being prepared, let’s take a config backup of our Veeam server to make our lives easier.

Step 5) Backup Config.

I’d hope you’d have this configured before your Veeam server failed.

Veeam B&R -> File -> Backup Config; in our case save it to the backup data drive, as that will be moved and mounted first thing. We can then use that to load the config and should be good to go.

Now it shows up under Drive:\VeeamConfigBackup\Hostname\Hostname_Datestamp.bco
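The console clicks above also have equivalents in the Veeam PowerShell module; a sketch (I haven’t verified the parameters for retargeting the destination repository, so only the inspect/run pair is shown):

```powershell
# Inspect the configuration backup job settings, then run one on demand
Get-VBRConfigurationBackupJob
Start-VBRConfigurationBackupJob
```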

Step 6) Install Veeam on New Server

Depending on your uptime requirements, you can either spin up the new server with a temporary different IP, get the Veeam app and services installed, then move your disks and change IPs. Since I don’t care in my lab, I’ll fully shut down my existing server to free up the IP and system resources, then boot up my new server, attach the ISO downloaded in step 2, and install Veeam.

Hostname, networking, and other prerequisites are not discussed in detail here.

I like how it knows, click install…

Install B&R

How long we wait is based on the Matrix. Looking at the VM resource usage on my machine with this setup, it looks like it’s reading the installation files from the ISO and writing them somewhere to disk; my setup only yielded about 40 MB/s and took roughly 8 minutes.

Agree to the EULA.

License upgrade: (I’ll try not selecting this since I’m on CE… nope, the wizard wouldn’t let me for CE, shucks hahah)

Service account, Local System (recommended). I left this default, next.

This is why I like Veeam, made by sysadmins for sysadmins.

Install, and now we wait… once complete

Step 7) Attach disk with backup data

How you do this is up to you, I got the needful done.

Step 8) Open Veeam B&R Console, and import config backup.

In the Veeam B&R console, click what should be File -> Config Backup, then click the Restore button.

Now, since I shut down my OG server to move the data as a whole, I picked Restore:

The config deets, check ’em over. I don’t know what the minimum gap between versions is allowed to be, but in this case it’s 12.3.1 source to 12.3.2 target.

Target data is localhost; pay attention to the login name. If you ever change the local admin account, or whatever account installed Veeam, this could be an issue for your Veeam SQL config.

yes…

Restore…

Yes…

Wait for services to all stop…

success… until it’s not…

This for some reason failed…

I clicked start and it seemed to start everything up just fine…

But no matter what, when I tried to rescan any repos in the console it would complain that not all components were upgraded. Everything AI was telling me was off and felt wrong. I found this one thread with the statement “It seems that not all Windows 10 installations are facing this problem. We’ll try to figure out of certain builds are involved in this. On the other hand, a fresh v12 install in Win10 works without any problems.” Well, this is a fresh install, and it happened after the backup import; when I did the last upgrade back in March, it was an in-place upgrade from 12.1 to 12.3, and I didn’t have this problem.

After enough fooling around I found my answer here, which was to run the provided script, finding the component listed with 0.0 as noted in the thread. Strange.

Then finally the part of the wizard completed:

Stop Killing Games

This game-design habit sucks. So much so that a movement has started: Stop Killing Games – Wikipedia

I have one to add to this: one of my favorite games to play, “The Golf Club 2019 featuring PGA TOUR.” The Golf Club 2019 can no longer go online because its servers were permanently shut down on October 30, 2025, as part of a “server sunsetting” process to focus on newer games. This means that after this date, all online features, including multiplayer, online courses, and in-game store purchases, are no longer available.

Now, I’m on the best pace ever in a PGA Tour game, only to find out randomly that I can’t play my campaign because it says I have to be online (to play an offline portion of the game).

As H8 Gamers states “Why does the PGA tour element of the game require an online server, I’ve only played a little of the game and thought only multiplayer required an online server to make it work. Now it will feel like a pointless purchase if there is no story mode, unless I don’t understand the tour mode correctly as I only played a few holes.”

I get that discontinuing online servers to support legacy games becomes strenuous relative to the size of the player base, but all offline features should be recoded with one last update to release them from their coded dependencies on servers they know aren’t going to stay online.

TLDR, this sucks, stop making games like this…. watch, I bet they still have it listed for purchase on Steam too…  “Notice: The Golf Club™ 2019 featuring PGA TOUR is no longer available on the Steam store.” Alright, touché on that one… but still… ****

Docker on Core Linux

Docker Running on Core Linux

Is it possible? … Yes? However, I didn’t write down some of the prerequisites for the Core server (whoops, maybe one day I’ll redo it from scratch). But if you do manage to get the base binaries installed, this post should be helpful for all the caveats I faced along the way…

In my previous post I mentioned that Docker wouldn’t run unless it was a 64-bit machine, so I created a 64-bit Core Linux image and showed how to get the base OS up and running… but what about Docker itself?

Now, I got this “working” but I didn’t exactly write down all my steps (it took a long time to figure out). From looking at the VM’s history, it looks like I simply used the tc account to download and extract the base Docker binaries:

Now, this doesn’t tell me the working directory I was in when some of the relative paths were called, but I do know it was the tc account, so some safe assumptions can be made.

Reviewing my AI chat and the notes I took, and getting it running again after a reboot, it seems after the “install” (copying the base files to the path shown in the image above, line 51) I also added “var/lib/docker” and “etc/docker” to the filetool.lst file, so they persist after reboot. Strangely, only /var/lib/docker is populated, but I can’t see how that’s the case from the history review. I was pretty positive the script itself failed to execute… I really should start from scratch or else this post will be a bit useless… but… F*** it….
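For reference, the two additions to /opt/.filetool.lst would look like this (Tiny Core expects these paths relative to /, without a leading slash):

```
var/lib/docker
etc/docker
```

filetool.sh -b then rolls everything listed there into the backup that gets restored at boot.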

The next issues seem to be tied to cgroups and certificates…

Fixing Cgroups Error

sudo mount -t tmpfs cgroup_root /sys/fs/cgroup/
sudo mkdir /sys/fs/cgroup/devices
sudo mount -t cgroup -o devices none /sys/fs/cgroup/devices

That should be it… but we need this persisted and auto-run at boot time so we don’t have to do this every time…

sudo vi /opt/dockerd.sh
i
mount -t tmpfs cgroup_root /sys/fs/cgroup/
mkdir /sys/fs/cgroup/devices
mount -t cgroup -o devices none /sys/fs/cgroup/devices
ESC
:wq
sudo vi /opt/bootlocal.sh
*append with*
/opt/dockerd.sh
:wq
sudo chmod +x /opt/dockerd.sh
filetool.sh -b

The next issue: Docker would load, but when pulling a container it would just fail, complaining about certificates.

Fixing Certificate Error

I found the point in my notes rambling with AI when I figured it out…

“NO F***KIN WAY!!!!!!! https://stackoverflow.com/questions/75696690/how-to-resolve-tls-failed-to-verify-certificate-x509-certificate-signed-by-un I read this thread and read the answer by Andrei Nicolae… which said just durr copy ca certs to /etc/ssl/certs I was like, I bet docker is hard coded to look there, which is why it was first suggested but all other apps on tiny core linux know to use /usr/local/etc/ssl/certs, so yeah docker never was using the expectects paths liek I suspected from the begining cause we manualy installed it for a OS not supported. so with this I did sudo mkdir -p /etc/ssl/certs sudo cp /usr/local/etc/ssl/certs/* /etc/ssl/certs sudo pkill dockerd sudo dockerd & sudo docker pull hello-world and guess what it finally freaking worked”

But I realized that instead of copying them I could just make a symlink:

sudo mkdir /etc/ssl/
sudo ln -s /usr/local/etc/ssl/certs/ /etc/ssl/

I simply placed these lines in the /opt/dockerd.sh file I created earlier, rebooted, and verified that /etc/ssl/certs was populated with certs, and it was.
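The directory-symlink trick can be sanity-checked in a scratch directory before touching the real paths (everything under /tmp/ssl-demo below is made up for the demo):

```shell
# Scratch-dir demo of the symlink fix; the real fix links
# /usr/local/etc/ssl/certs into /etc/ssl as root.
rm -rf /tmp/ssl-demo
mkdir -p /tmp/ssl-demo/usr/local/etc/ssl/certs /tmp/ssl-demo/etc/ssl
echo "dummy-cert" > /tmp/ssl-demo/usr/local/etc/ssl/certs/ca-bundle.crt
# Link the directory itself, same shape as: sudo ln -s /usr/local/etc/ssl/certs/ /etc/ssl/
ln -s /tmp/ssl-demo/usr/local/etc/ssl/certs /tmp/ssl-demo/etc/ssl/certs
cat /tmp/ssl-demo/etc/ssl/certs/ca-bundle.crt   # prints dummy-cert
```

Reading the file through the link confirms anything hard-coded to look under /etc/ssl/certs will find the same bundle.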

And finally…

Running Dockerd

sudo DOCKER_RAMDISK=true dockerd &

Pulling Image

sudo docker pull hello-world

Running Image

sudo docker run --rm hello-world

Yay, we actually ran a container on Core Linux… Mind blown… I swear I had it all running at only 90MB of RAM, but checking now shows 116MB. Bah…

To get Docker to run at boot my final /opt/dockerd.sh looked like this:
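Assembled from the commands earlier in this post, it would look roughly like this (a sketch, not a capture of the actual file; whether dockerd was launched from the script or by hand afterward is my assumption):

```shell
#!/bin/sh
# /opt/dockerd.sh - called from /opt/bootlocal.sh at boot (runs as root)

# cgroups that dockerd needs
mount -t tmpfs cgroup_root /sys/fs/cgroup/
mkdir /sys/fs/cgroup/devices
mount -t cgroup -o devices none /sys/fs/cgroup/devices

# CA certs where Docker is hard-coded to look
mkdir /etc/ssl/
ln -s /usr/local/etc/ssl/certs/ /etc/ssl/

# start the daemon
DOCKER_RAMDISK=true dockerd &
```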

*Update* Outside of hello-world I wasn’t able to spin up any other real containers due to other technical limitations; this was a huge waste of time.

Installing CorePure64

Back Story

So in my previous post I shared how to setup a very small footprint Linux server called Linux Core: Installing Core Linux – Zewwy’s Info Tech Talks

but…… when I tried getting Docker running on it, I was hit with an error “Line 1: ELF: File not found”.

AI, after giving all the required commands to do a “manual install”, stated, “duuuuurrrrrrrrrrrrrr docker don’t give 32 bit binaries”, to which I replied huh… I guess I installed 32-bit Core Linux… do they have 64-bit versions?

It gave me some dumb link to some dumb third-party source… the answer is yes… here: Index of /16.x/x86_64/release/

So here we go again….

Installing CorePure64

Step 1) Download Install image CorePure64-16.0.iso

Step 2) Get x64 hardware, or create a VM that supports 64-bit. I have 64-bit hypervisors, so I will create a VM as I did in my first post.

This time 2 CPU, 1 GB RAM, 60GB HDD, thin provisioned, VMware Paravirtual SCSI controller, EFI enabled with Secure Boot; let’s see if this works out…. No boot… Flip boot settings to BIOS mode… ISO boots… ah man, FFS, it’s x64-based but still relies on BIOS for booting… that sucks… oh well, moving on….

Booting and Installing Core Linux

Attach ISO boot. Core Linux boots automatically from ISO:

For some reason the source doesn’t tell you what to do next. Type tc-install and the console says it doesn’t know what you’re talking about:

AI Chat was kind enough to help me out here, and told me I had to run:

tce-load -wi tc-install

Which required an internet connection:

However, even after this, attempting to run it gave the same error… hmmm, using the find command I found it, but it needs to be run as root, so:

sudo su
/tmp/tcloop/tc-install/usr/local/bin/tc-install.sh

C for install from CDrom:

Let’s keep things frugal around here:

1 for the whole disk:

y we want a bootloader (It’s extlinux btw, located at /mnt/sda1/boot/extlinux/extlinux.conf):

Press enter again to bypass “Install Extensions from..”

3 for ext4:

Like the install source guide says, add boot options for HDD (opt=sda1 home=sda1 tce=sda1):

last chance… (Dooo it!) y:

Congrats… you installed TC-Linux:

Once rebooted, the partition and disk-free output will look different. Before reboot, running from memory:

after reboot:

Cool, the install process was 100% the same as the 32bit process…

But running uname -m, we see we are now 64-bit instead of 32-bit.

Changing TC Password

Step 1) Edit /opt/.filetool.lst (use vi as root)
– add etc/passwd and etc/shadow

Step 2) run:

filetool.sh -b

Step 3) run

passwd tc

Step 4) run

filetool.sh -b

Now reboot, you may not notice that it applied due to the auto login, however, if you type exit to get back to the actual login banner, type in tc and you will be prompted for the password you just set. Now we can move on to the next step which is to disable the auto login.

Disable Auto-Login

Step 1) Run

sudo su
echo 'echo "booting" > /etc/sysconfig/noautologin' >> /opt/bootsync.sh

Step 2) Run

filetool.sh -b
reboot

K on to the next fun task… static IP…

Static IP Address

For some reason AI said I had to create a script that runs the manual steps… not sure if this is the proper way… I looked all over the Wiki (wiki:start – Tiny Core Linux Wiki) and couldn’t find anything… I know this works, so we’ll just do it this way:

Step 1)  Run:

echo "ifconfig eth0 192.168.0.69 netmask 255.255.255.0 up" > /opt/eth0.sh
echo "route add default gw 192.168.0.1" >> /opt/eth0.sh
echo 'echo "nameserver 192.168.0.7" > /etc/resolv.conf' >> /opt/eth0.sh
chmod +x /opt/eth0.sh
echo "/opt/eth0.sh" >> /opt/bootlocal.sh
filetool.sh -b

Step 2) reboot to apply and verify.

It didn’t work, but if I add “sleep 2” just before /opt/eth0.sh in the bootlocal.sh file, then it works; not the greatest, but I’ll take it for now.
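Instead of a fixed sleep, a slightly more robust /opt/eth0.sh could wait for eth0 to actually exist before configuring it. A sketch using the same addresses as above:

```shell
#!/bin/sh
# Wait up to 10 seconds for eth0 to appear before configuring it
n=0
while ! ifconfig eth0 >/dev/null 2>&1; do
  sleep 1
  n=$((n+1))
  [ "$n" -ge 10 ] && break
done
ifconfig eth0 192.168.0.69 netmask 255.255.255.0 up
route add default gw 192.168.0.1
echo "nameserver 192.168.0.7" > /etc/resolv.conf
```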

Updates?

With Tiny Core Linux, updating is a bit different from traditional distros. Here’s how it works:

🔄 Updating Tiny Core Linux

Tiny Core doesn’t have a single tc-update command for the whole system, but it does have tools for updating extensions and managing packages:

🧰 Extension Update

Use the tce-update command to update installed extensions:

tce-update

This checks for newer versions of your installed .tcz extensions and updates them.
📦 Other Useful Commands

  • tce-load: Loads extensions from your /tce directory.
  • tce-ab: Opens the AppBrowser (if you have GUI).
  • tce-audit: Audits installed extensions.
  • tce-remove: Removes extensions.

🧱 Core System Update

To update the core system itself (like the kernel and initrd), you’ll need to manually download the latest bzImage and core.gz (or tinycore.gz) from the Tiny Core Mirrors then replace them in your boot partition.

Steps:

  1. Download the latest files.
  2. Mount your boot partition.
  3. Replace bzImage and core.gz.
  4. Update your bootloader config (e.g., GRUB or syslinux).
  5. Reboot.
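As a sketch, on the CorePure64 install above (extlinux boot, boot partition at /mnt/sda1; note the 64-bit release uses vmlinuz64 and corepure64.gz rather than bzImage/core.gz, and the mirror layout here is an assumption):

```shell
# Grab the latest kernel and initrd from the Tiny Core mirror
wget http://tinycorelinux.net/16.x/x86_64/release/distribution_files/vmlinuz64
wget http://tinycorelinux.net/16.x/x86_64/release/distribution_files/corepure64.gz
# Back up the current files, then replace them on the boot partition
mount /mnt/sda1
cp /mnt/sda1/boot/vmlinuz64 /mnt/sda1/boot/vmlinuz64.bak
cp /mnt/sda1/boot/corepure64.gz /mnt/sda1/boot/corepure64.gz.bak
cp vmlinuz64 corepure64.gz /mnt/sda1/boot/
# extlinux.conf references these same filenames, so no bootloader change needed
reboot
```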

Upgrade vCenter from 7 to 8

Upgrading vCenter

Let me start by having a firm base: a working vCenter 7 with a properly connected Veeam server. Since my server is dead, I’m going to start from scratch.

Pre-req Step (Base vCenter 7 with Veeam)

Other Pre-reqs:
DNS server (or local host records)

Step 1) Install vCenter 7

20 minutes later, complete. Now, I made one mistake along the way (to replicate what I think happened the first go-around), and that was giving the VM the name vCenter.zewwy.ca, while when configuring the network FQDN I gave it vcenter.zewwy.ca. After install, I was able to replicate the findings from my first go-around, which I now know will cause a problem for the upgrade to vCenter 8. When this is done, the install wizard has no issue with it and the install will complete successfully. However, the hostname will now have an uppercase letter, while the PNID will have a lowercase one:

It’s this case mismatch that will cause the upgrade to have hiccups, which I’ll cover below. However, I need to add the hosts which were connected to my previous vCenter; let’s see if it’ll take my hosts…

Step 2) I followed my blog post to fix up my Veeam jobs again. 

Step 3) Upgrade to vCenter 8.

Stage 1 Error

Remember the mistake I showed above: entering the FQDN without a capital when configuring the network in the vCenter 7 installer. Putting lowercase here produced the above error; to get past it, I had to use the case-sensitive hostname with a capital.

Stage 2 Error

Now pick your host and you have another error to bypass at Stage 2.

I had another weird issue where, even though the VM deployed, it was not reachable over the network and the installer timed out. To resolve this, I simply changed the VM network port group (VMPG), saved, then changed it back to the proper one and saved, and it was pingable. To get back to Stage 2, simply navigate to the VAMI web page on port 5480. When you get to the stage to connect to the source, you enter the details and get this error at the pre-upgrade checks:

Changing the vCenter source entry to a capital, as we did before, will not work; the same error pops up, pointing to that blog post on how to change the FQDN. Since my FQDN already looked correct (with a capital) but the command showed the PNID with a lowercase, instead of changing the FQDN to lowercase and going through all the steps in that blog (there are a lot), I simply set the PNID to have a capital in it:

Get:

/usr/lib/vmware-vmafd/bin/vmafd-cli get-pnid --server-name localhost

Set:

/usr/lib/vmware-vmafd/bin/vmafd-cli set-pnid --server-name localhost --pnid vCenter.mydomain.com

Yay no more error:

Now pick your content to migrate, and at this point you should stop using vCenter for the duration of the migration.

At this point the pings dropped (roughly 12 with no response, about 1-2 minutes of downtime)… then they came back up as the new vCenter took over the IP, at which point Stage 2 began.

I went to play a bit of Party Animals; the session had timed out on the old VAMI IP, which the installer may have auto-switched over, but logging into VAMI on the new server showed everything was green.

Logging in, everything looks good. Checking Veeam: yup, a rescan of vCenter worked without issue; checked re-calculating a VM on a backup job, yup, it works.

Success. This post doesn’t cover additional steps (applying new license keys, checking user migrations (vsphere.local), remote logging services or connections (rsyslog)); all those you’d have to verify after completing these steps. Now to upgrade the ESXi hosts; both ESXi hosts remained on 7.0.3x versions for this post.

Canada 3G Discontinuation and Its Impacts

Story

Now this is a story all about how my life got flipped, turned upside down, and I’d like to take a minute, just sit right there, I’ll tell you how I got fucked by Canadian telcos.

Me sitting here writing this story, (Canadian Telcos) *SLAP to my face*.

Wow, dude! I didn’t tell a telco joke. What’s most annoying about all this is just how little time they have provided affected people to take action… the first news on this was only from Aug 12th of this year on MobileSyrup (SpeakOut wireless, other MVNOs struggle with Rogers amid 3G shutdown).

Backstory

If you’re not familiar with SpeakOut, it’s a cell service provider offered by 7-Eleven. They used to provide a good PayG option to Canadians.

While I was looking for alternatives (since they didn’t make it clear that SpeakOut was going away, and all they did was offer people the option to get a Lucky Mobile SIM for free [$5 value, Oooooeeee]), the only other option that AI managed to tell me about was Good2Go Mobile (previously Petro Mobility, offered by Petro-Canada). However, if you check out their site, they say the same thing! (Important Update: Network Changes and Free SIM Option with Lucky Mobile – Action will be needed by August 31, 2025 | Good2Go Mobile)

“As part of a national modernization of wireless networks, The retirement of the 3G (HSPA) infrastructure will be starting August 31, 2025. This will affect Good2Go customers who rely on that network.

To help you stay connected with your current phone, we’ve partnered with Lucky Mobile to offer affected customers a free SIM card and an easy option to move to a new, reliable 4G LTE network.”

Also, both SpeakOut and Good2GOMobile are backed by Ztar Mobile, and that connection explains a lot about their shared fate in Canada’s shifting wireless landscape.

Who Is Ztar Mobile?

Ztar Mobile is a Mobile Virtual Network Operator (MVNO) enabler. That means:

  1. They don’t own their own network infrastructure
  2. Instead, they lease access from major carriers like Rogers
  3. They create branded services like SpeakOut (via 7-Eleven) and Good2Go (formerly Petro-Canada Mobility)

This explains why both PayG providers are saying the same thing verbatim.

Who’s affected?

  1. People with old phones that are 3G-only.
    – While this number may be small, they are out there, and mostly people with lower incomes.
    – This is bad for e-waste, as old phones that were perfectly usable will now have reduced functionality.
  2.  People who are on PayG plans.
    – While SpeakOut has just announced that they will still offer plans and are in fact not being discontinued, they are still forcing people to get new SIM cards and thus [1] a change of available plans, and, you guessed it, the old PayG with rollover is not among the new plans [2]. Or is it? [3]

[1] 7-Eleven SpeakOut Wireless Confirms it’s Staying—Free 4G SIM Cards Coming | iPhone in Canada
    “A new option posted in stores and sent by email obtained by iPhone in Canada confirms customers no longer have to switch to Lucky Mobile if they want to stay. Instead, they can stay with SpeakOut Wireless. How? They can soon pick up a free migration SIM card at 7-Eleven stores beginning the week of September 8, 2025. Customers with a VoLTE-compatible phone can activate new SpeakOut plans, with more details coming by email soon. SpeakOut warns that if no action is taken, phone numbers will eventually be lost. “Important: SpeakOut is not going out of business—but you must choose one of these options to keep your number active,” the company told customers. The deadline to switch to a new SpeakOut SIM or move to Lucky Mobile is October 31, 2025.”

[2] 30 Day Cell Phone Plans | SpeakOut Wireless
[3] Talk & Text Cell Phone Rates | SpeakOut Wireless

“*Valid 08/27/25 – 10/28/25 at participating locations. $50 top up must be included with phone purchase to receive discount. While supplies last. Cannot be combined with any other promotional offer. Speakout reserves the right to terminate or change the rates of this plan at any time. ©2025 7-Eleven Canada, Inc. All Rights Reserved.”

OK, now I’m super confused… are they still offering PayG on the new network? It seems they still have this offer up for PayG, but the asterisk at the bottom seems to imply it’s only good till Oct, which seems to match the news article’s final comments. So I’m still assuming that they won’t offer PayG when they finish their “transition” to 4G/LTE. *Cough* Ztar basically getting a new agreement (buying a section of bandwidth and numbers) on the new network, hence the requirement to get a new SIM card.

Once I get this all figured out I will update this blog post…

What has the CRTC said about this?

Nothing. They have always been a bunch of useless turds, and in this case they are no different. Here’s a summary AI provided me:

CRTC’s stance has been notably hands-off, which has left many Canadians frustrated.

🏛️ What the CRTC Has Said

The CRTC (Canadian Radio-television and Telecommunications Commission) has acknowledged the 3G network phase-out on its official site, but its messaging is mostly informational, not regulatory:

  • They confirm that 3G networks are being retired across Canada, and that VoLTE (Voice over LTE) is now the standard for voice calls
  • They advise consumers to check with their provider to see if their phone and SIM are compatible with newer networks
  • They note that some affiliate brands (like SpeakOut, Good2Go, Chatr, Lucky Mobile, etc.) may be affected, but do not mandate any specific protections or timelines

⚠️ What They Haven’t Done

  • No requirement for minimum notice periods to customers
  • No mandate to preserve account balances during forced migrations
  • No enforcement of grace periods for porting numbers
  • No public comment on Ztar’s dispute with Rogers, which is central to the SpeakOut/Good2Go crisis

3G network phase-out: essential information for cellphone users | CRTC

Wow, useless as always, why do they even exist?

What can I do right now?

Option 1 – Lucky Mobile

Take their terrible offer to move to Lucky Mobile, which doesn’t have PayG options (not to be confused with prepaid, which is available).

Cost: $228 per year.

Considering that if you managed low usage on SpeakOut’s old PayG plan it was less than $30 a year (if you saved your balance and topped up in time), this is a $200/year increase. Now, if you actually use your phone a lot, this makes sense, but not for the emergency user or those using it simply for MFA purposes.

Option 2 – Wait till September 8th, and get a new SpeakOut SIM.

AFAIK, you won’t get your usual top up (since that service type will be phased out, I presume).

Cost:  $180 per year.

Again, considering the old PayG, this is again a $150 increase from before.

Are you starting to see a trend here? Seems a bit sus.

Option 3 – Freedom Mobile

I had a friend mention they moved someone to Freedom Mobile and somehow got a plan with data for $140 a year. I can’t seem to find this on their site and I’ve asked them for more information; plans for Freedom Mobile can be found here:

Cell Phone Plans – Unlimited 5G Mobile Plans | Freedom Mobile

The cheapest I saw was $20 a month, so again $240 a year.

So a $210 increase from my previous PayG plan I was on.

Option 4 – Fizz

Fizz, originally Quebec-based, has expanded to Manitoba and other provinces like Ontario, Alberta, and B.C. That means Winnipeg residents can now sign up, and yes, you can get a SIM or eSIM directly from their website. No need to visit a store; everything is done online.

🧾 Fizz Plans: Prepaid, Customizable, and Wallet-Friendly

Fizz offers prepaid plans that are:

  • Contract-free and fully customizable
  • Include data rollover, data gifting, and loyalty rewards
  • Can be tailored with or without minutes, texts, or data
  • Allow wallet top-ups for pay-per-use calling if you skip minutes in your plan

So while Fizz isn’t “classic PAYG” like SpeakOut used to be, it does let you:

  • Build a barebones plan with no minutes or texts
  • Add funds to your wallet for per-use calling (e.g., $0.25/min)
  • Skip monthly data if you want a minimalist setup

🛒 Where to Get Fizz SIMs

  • Online at Fizz.ca – Order a SIM or activate an eSIM instantly
  • Circle K stores – Available in some provinces, though not confirmed in Winnipeg yet (Spoiler they are)

Fizz is actually one of the few carriers still offering true customization and no surprise fees. If you’re looking for flexibility without being pushed into a full monthly bundle, it’s a solid contender.

So now I’ve managed to pick up a Fizz SIM card; I will blog about the process of number porting and getting the account set up. However, I want to publish this as-is right now and will update this blog with all the steps I took and headaches along the way, stay tuned!

I will blog about my migration to Fizz; I just recently had some other home issues crop up that have taken priority. I’m hoping I still have till the end of Oct to transfer my number. Sorry for the delay in updates to this post.

Step 1) Source SIM

I got mine from a Circle K store.

Step 2) Register Fizz Account

Requires email address.

Step 3) Activate Sim

Enter your address; it may complain it doesn’t match your IP geolocation, continue.

It blipped on me when I entered my code the first time; I entered it again and then it let me pick my service options.

Step 4) Pick Your Service Options

I thought they had pre-paid options, but it looks like not; the lowest I got was $11 a month. So much for the $75 a year, now it’s $132 a year, bahh.

Step 5) Pay. They only take credit card, so this is another barrier to entry for many. The invoice showed only one month of service, so it seems you’re charged monthly, but there was no indication that the card would be held on record? It appears it is, from checking my account details on the website.

Step 6) Transfer Number

I provided my cell number and the current provider information. It then asked for 1 of 3 things: one, the account number (I was logged into my SpeakOut account, saw nothing with that info, and didn’t have it written down); two, the PIN code (this I knew, so I provided the PIN code here); three, the IMEI number (which I could have eventually gotten from my current phone).

It then just said this…

Now… we wait…