Word document always opening with properties sidebar

SharePoint Story

The Problem

User: “I get this properties panel opening every time I open a word doc from this SharePoint site.”

The Burdensome Solution

So I Googled and found a fix.

Here are the steps:

  1. Go to Library Settings -> Advanced Settings -> Set “Allow management of Content Types?” as “Yes”.
  2. Go to Library Settings -> Click the Document content type under content types section -> Document Information panel Settings -> Uncheck the box “Always Show the Document Information Panel”.

I think we can do better than that

Based on that answer, the fix requires two steps per library: first enabling management of content types, then flipping the switch. As you may have guessed, that does not scale well and would be really time consuming across hundreds of lists. Front ends always rely on some backend, so how is the backend doing it? How do we fix this via the backend?

This didn’t work. Why? Because the first linked site, which has the real answer, does it per list’s Document content type, whereas the answer above it does it only at the site level. The difference is noted at the beginning of that long TechNet post.

What it did tell me is how the SchemaXml property is edited, which seems to be by editing the XmlDocuments array property.

So with these three references in mind, we should now be able to actually fix the problem via the backend.

The Superuser Solution

First we need to build some variables:

$plainSchema = "http://schemas.microsoft.com/office/2006/metadata/customXsn"

While this variable may not change (in this case it’s for SharePoint 2016), how was this derived? From this:

((((Get-SPWeb http://spsite.consoto.com).Lists) | ?{($_.ContentTypes["Document"]).SchemaXml -match "<openByDefault>True"})[0].ContentTypes["Document"]).XmlDocuments[1]

This takes a SharePoint web site and goes through all of its lists, but only grabs the lists that have a Document content type associated with them. All of these objects have a “SchemaXml” property; we keep only the ones whose “openByDefault” schema property is set to true. From this list of objects we grab the first one (“[0]”), grab its Document content type object, and spit out the second XmlDocument (“.XmlDocuments[1]”). From this XML string output we want the “customXsn” XML element:

<customXsn xmlns="http://schemas.microsoft.com/office/2006/metadata/customXsn"><xsnLocation></xsnLocation><cached>True</cached><openByDefault>True</openByDefault><xsnScope></xsnScope></customXsn>

Why? For some reason the content type’s SchemaXml property cannot be directly edited.

Why? Unsure, but it is this field property that gets changed when doing the fix via the front end.

$goodSchema = '<customXsn xmlns="http://schemas.microsoft.com/office/2006/metadata/customXsn"><xsnLocation></xsnLocation><cached>True</cached><openByDefault>False</openByDefault><xsnScope></xsnScope></customXsn>'

This can also be derived with a Replace operation, flipping True to False, as sketched below. (Note the single quotes above, so the inner double quotes don’t break the PowerShell string.)
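For example, a minimal sketch of that derivation ($badSchema is my own variable name, holding the customXsn string pulled by the one-liner above):

# grab the current customXsn fragment from the first affected list
$badSchema = ((((Get-SPWeb http://spsite.consoto.com).Lists) | ?{($_.ContentTypes["Document"]).SchemaXml -match "<openByDefault>True"})[0].ContentTypes["Document"]).XmlDocuments[1]
# flip the boolean inside the XML string
$goodSchema = $badSchema -replace "<openByDefault>True", "<openByDefault>False"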

After this is done we need to build an array that holds all affected lists (those with openByDefault set to true):

# collect every list whose Document content type still has openByDefault=True
$problemDroids = ((Get-SPWeb http://spsite.consoto.com).Lists) | ?{($_.ContentTypes["Document"]).SchemaXml -match "<openByDefault>True"}
# swap out the customXsn document and save the content type
$problemDroids | %{($_.ContentTypes["Document"]).XmlDocuments.Delete($plainSchema)}
$problemDroids | %{($_.ContentTypes["Document"]).XmlDocuments.Add($goodSchema)}
$problemDroids | %{($_.ContentTypes["Document"]).Update()}
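A quick way to verify the change took: re-run the original filter, which should now come back empty:

((Get-SPWeb http://spsite.consoto.com).Lists) | ?{($_.ContentTypes["Document"]).SchemaXml -match "<openByDefault>True"}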

Not Good Enough

User: “It’s not working on all sites”

Solution: the above code goes through all affected document libraries at the root site. If you have subsites, you simply add .Webs[0].Webs to the initial call that builds the “problemDroids” variable. How deep you need to go depends on how many levels of subsites your SharePoint implementation has.

$problemDroids = ((Get-SPWeb http://spsite.consoto.com).Webs[0].Webs.Lists) | ?{($_.ContentTypes["Document"]).SchemaXml -match "<openByDefault>True"}
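If you’d rather not guess the nesting depth, a hedged alternative is to walk every web in the site collection via SPSite’s AllWebs property (a sketch; untested against this exact farm):

# enumerate every web in the site collection, not just one subsite level
$problemDroids = (Get-SPSite http://spsite.consoto.com).AllWebs | %{$_.Lists} | ?{($_.ContentTypes["Document"]).SchemaXml -match "<openByDefault>True"}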

Summary

Something that should have been a boolean property on the object was really a boolean nested inside XML, stored as a string.

Standing Ovation. Fun had by all parties involved. Tune in next week when I post more SharePoint content.

Xbox One No Video Output
Replace Xbox One HDD

Expectation: an existing slot, an easy pull-out, plug in the new HDD, plug in a USB stick with the offline installer, power on, and done.

Reality:

First off, “existing slot, easy pull-out”. Hahahahah. Try a complicated clipped casing, and a caddy for a caddy, with 11+ screws just for mounting the HDD to the chassis. If you need a video of that process you can watch this one by Joe:

Microsoft Xbox One S Hard Drive HDD Replacement | Repair Tutorial – YouTube

Then there’s the expectation that the OS install will partition and format the disk… no, you have to preformat it. Does MS give you a tool? No, the community had to make one:

Xbox One Windows and Linux Internal Hard Drive Partitioning Script | GBAtemp.net – The Independent Video Game Community

Then you need an 8 GB USB stick, formatted as NTFS, to copy the offline OS installer onto.

Perform an offline system update | Xbox Support

For the most comprehensive step-by-step, watch this video by XFiX:

Xbox One Internal Hard Drive Repair or Replace Using Windows Series 7 – YouTube

Unfortunately for me, the system I was working on had no video output after booting, and no matter what I did, including installing a new HDD, I couldn’t get video to work.

If you have any thoughts or suggestions on how to fix a no video display issue (I already did the eject and power on hold for 10 seconds to default video output, didn’t work), please leave a comment. 🙂

*Update* the Video problem was related to the HDD.

I tried a couple more times and had the following results.

Using the old HDD, the console would boot well enough but fail on all update attempts, ending up in a 200 or 106 error state. If I got a boot into the maintenance window and hot swapped the HDD, an offline update would give me a 101 error; after a reboot, a 102 or 106 error.

I didn’t have any good 500 GB or bigger 2.5″ HDDs around, only smaller ones, so I ended up finding a video by XFiX using smaller drives. I followed along with it, and when the step to copy the data came up, the process ground to a halt, on the SYSTEM UPDATE partition no less. Since I knew it had completed up to that point, I hard-killed the script, which hung the Linux machine. After a reboot, I completed the last part, “stage 3”, defining the GUIDs.

I then popped the HDD into the Xbox and it showed the maintenance screen almost instantly, and the offline update actually succeeded without issue.

After a reboot, the box was fixed and fully working!

Manually Fix Veeam Backup Job after VM-ID change

The Story

There have been a couple of times where my VM-IDs changed:

  • A vSphere server has crashed beyond a recoverable state.
  • A server has been removed and added back into the inventory in vSphere.
  • Manually moving a VM to a new ESXi host.
    • The VM is removed from inventory and re-added.
  • Loss of the vCenter Server.
  • Full VM recovery via Veeam.

What sucks is when you go to run the job in Veeam after any of the above, the job simply fails to find the object. You can edit the job by removing the VM and re-adding it, but this builds a whole new chain, which you can see in the Veeam repo after such events occur:

As you can see, two chains. This has been an annoyance of mine for a long time, as there’s no way to manually set the VM-ID in vCenter; it’s all auto-managed.

I found this Veeam thread discussing the same issue, and someone mentioned “an old trick” which may apply, and linked to a blog post by someone named “Ideen Jahanshahi”.

I had no idea about this, let’s try…

Determine VM-ID on vCenter

The source uses PowerCLI, which I’ve covered installing, but it’s easier to just use the Web UI and grab the ID from the address bar, after the vms parameter.
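That said, if you already have PowerCLI handy, a quick sketch that should surface the same ID (the VM name is a placeholder):

# the MoRef value is the vm-xxxxx style ID that vCenter tracks
(Get-VM -Name "MyVM").ExtensionData.MoRef.Value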

Determine VM-ID in Veeam

The source installs SSMS, and much like my fixing WSUS post, I don’t like installing heavy stuff on my servers to do managerial tasks. Lucky for me, SQLCMD is already installed on the Veeam server so no extra software needed.

Pre-reqs for SQLCMD

You’ll need the hostname. (run command hostname).

You’ll need the Instance name. (Use services.msc to list SQL services)

Connect to Veeam DB

Open CMD as admin

sqlcmd -E -S Veeam\VEEAMSQL2012

use VeeamBackup
:setvar SQLCMDMAXVARTYPEWIDTH 30
:setvar SQLCMDMAXFIXEDTYPEWIDTH 30
SELECT bj.name, bo.object_id FROM bjob bj INNER JOIN ObjectsInJobs oij ON bj.id = oij.job_id INNER JOIN Bobjects bo ON bo.id = oij.object_id WHERE bj.type=0
go

For some reason the above query wouldn’t work on my latest build/install of Veeam, but this one did:

SELECT name, job_id, bo.object_id FROM bjobs bj INNER JOIN ObjectsInJobs oij ON bj.id = oij.job_id INNER JOIN BObjects bo ON bo.id = oij.object_id WHERE bj.type=0

In my case, after removing the VM from inventory and re-adding it:

As you can see, they do not match, and when I check the VM size in the job properties, the size can’t be calculated because the link is gone.

Fix the Broken Job

UPDATE bobjects SET object_id = 'vm-55633' WHERE object_id='vm-53657'
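To sanity-check the fix, the working SELECT from earlier can be re-run; the job’s object_id should now match the vm-id from vCenter:

SELECT name, job_id, bo.object_id FROM bjobs bj INNER JOIN ObjectsInJobs oij ON bj.id = oij.job_id INNER JOIN BObjects bo ON bo.id = oij.object_id WHERE bj.type=0
go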

After this I checked the VM size in the job properties and it was calculated. To my amazement it fully worked; it even retained the CBT points, and the backup job ran perfectly. Woo-hoo!

This info is for educational purposes only, what you do in your own environment is on you. Cheers, hope this helps someone.

vCLS High CPU usage

The Story

So I went to vMotion a VM to do some maintenance work on a host. The target machine was well over 50% CPU usage… what?! That can’t be right, it’s not running anything…

I tried hard powering the VM off, but it just came right back up, sucking CPU cycles with it…

The Hunt

Alright Google, what ya got for me… I found a blog post by “Tripp W Black” where he mentions stopping a vCenter service called “VMware ESX Agent Manager” and then deleting the offending VMs. Sounds like a plan. Let’s try it: log in to the VAMI (vcenter.consonto.com:5480).

K, let’s stop it… let me hard power off the VM now… ehh the VM is staying dead and host CPU:

K let’s go kill the other droid I have causing an issue…

OK, I got them all down now, but the odd part is I can’t delete them from disk like Sir Black mentioned in his blog post. The option is greyed out for me. Let’s start the service and see what happens…

The Pain

Well, that was extremely annoying. It seemed to have worked only for a moment and the CPU usage came right back, so I stopped the service again, but I still can’t delete the VMs…

There are similar issues in vSphere 8, and even suggestions to stay running in retreat mode, which I’ll get to in a moment. If you are unfamiliar, vCLS VMs are small VMs distributed to ESXi hosts to keep the HA and DRS features operational even if vCenter itself goes down. The thing is, I’m not even using HA or DRS; I created a cluster merely for EVC purposes, so I can move VMs between hosts live at my own leisure and without downtime. What’s annoying is that I shouldn’t have to spend half my weekend solving a bug in my HomeLab caused by poor design choices.

The Constructive Criticism

VMware…. do not assume a cluster alone requires vCLS. Instead, enable vCLS only when HA or DRS features are enabled.

Now that we have that very simple thing out of the way, on to the fix.

The Fix

So, as mentioned, we are able to stop the vCLS VMs by stopping the EAM service on vCenter, but that won’t be a solution if the server gets rebooted. I decided to Google how other people delete vCLS VMs when it doesn’t seem possible.

I found a Reddit thread discussing the same thing mentioned above, “Retreat Mode”. However, after setting the required settings (which are apparently tattoo’d once done), I still couldn’t delete the VMs, even after restarting the vpxd service. Much like ‘bananna_roboto’, I ended up deleting the vCLS VMs from the ESXi host UI directly; however, when checking the vCenter UI they still showed on all the hosts.
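For reference, the retreat mode setting itself is a vCenter advanced setting keyed to the cluster’s domain ID, something like the following as I understand it (domain-c1234 is a placeholder; grab your cluster’s real domain-cXXXX from the vSphere Client URL while the cluster is selected):

config.vcls.clusters.domain-c1234.enabled = False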

After rebooting the vCenter server, all the vCLS VMs were gone. At first I thought they’d come back, but since the retreat mode setting was applied, it seems they do not get recreated. Hence I will leave retreat mode enabled for now, as suggested in the Reddit thread, since I am not using HA or DRS.

So if you want to use EVC in a cluster, but not HA and DRS, and would like to skim even more memory from your hosts while saving on buggy CPU cycles, apparently “retreat mode” is what you need.

If you do need those features, and you are unable to delete the old vCLS VMs, and restarting the EAM service doesn’t resolve your issue (which it didn’t for me), you may have to open a support case with VMware.

Anyway, I hope this helped someone. Cheers.

USB NICs on ESXi hosts

Quick post here. I wanted to use a USB-based NIC to allow one of my hosts to host the firewall used for internet access; this would allow for host upgrades without downtime.

My first concern was the USB bus on the host; being a bit older, I double-checked and, sad days, it was only USB 2.0. Checking my internet speed, it turns out it’s 300 Mbps, and USB 2.0 is 480 Mbps, so while I may only be able to use less than half of the full speed of the gig NIC, it was still within spec of the backend and thus won’t be a bottleneck.

Now, when I plugged in the USB NIC, I sadly was not presented with a new NIC option on the host.

When I Googled this I found an awesome post by none other than one of my online heroes, William Lam, in which he states the following:

“With the release of ESXi 7.0, a USB CDCE (Communication Device Class Ethernet) driver was added to enable support for hardware platforms that now leverages a Virtual EEM (Ethernet Emulation Module) for their out-of-band (OOB) management interface, which was the primary motivation for this enhancement.

One interesting and beneficial side effect of this enhancement is that for any USB network adapters that conforms to the CDCE specification, they would automatically get claimed by ESXi and show up as an available network interface demonstrated in my homelab with the screenshot below.”

He then shows a snippet of running a command:

esxcfg-nics -l

Which for me listed the same results as the UI:

Considering I’m running the latest build of 7.x, I guess the device does not “conform to the CDCE specification”.

A bit further in the post he shows running:

lsusb

When run, it shows the device is seen by the host:

Let’s try installing the Fling’s USB driver and see if it works.

“This Fling supports the most popular USB network adapter chipsets found in the market. The ASIX USB 2.0 gigabit network ASIX88178a, ASIX USB 3.0 gigabit network ASIX88179, Realtek USB 3.0 gigabit network RTL8152/RTL8153 and Aquantia AQC111U.”

Step 1 – Download the ZIP file for the specific version of your ESXi host and upload to ESXi host using SCP or Datastore Browser. Done

Luckily the error message was clickable, and it provided a helpful hint to navigate to the host, as it may be due to the certificate not being trusted, and sure enough that was the case.

Step 2 – Place the ESXi host into Maintenance Mode using the vSphere UI or CLI (e.g. esxcli system maintenanceMode set -e true)

For some reason the command line wasn’t returning from the command above, and I had to enable maintenance mode via the UI. Done.

Step 3 – Install the ESXi Offline Bundle (6.5/6.7) or Component (7.0)

For (7.0+) – Run the following command on ESXi Shell to install ESXi Component:

esxcli software component apply -d /path/to/the component zip

For (6.5/6.7) – Run the following command on ESXi Shell to install ESXi Offline Bundle:

esxcli software vib install -d /path/to/the offline bundle zip

and my results:

Ohhh FFS… Google!!!!!! HELP!!!! Only one hit…

The only two responses close to an answer are… “Ok I can confirm that if you create a 7u1 ISO and upgrade to that first, you can then add the latest fling module to it. Key bit of info that is not in the installation instructions” and “Workaround: Update the ESXi host to 7.0 Update 1. Retry the driver installation.”

Uhhhh I thought I just updated my hosts to the latest patches… what am I running?

“7.0.3, 21686933”… checking the source Flings page, oh… it’s a dropdown menu… *facepalm*

I had downloaded the ESXi 8 version; let me try the 703 one…

ESXi703-VMKUSB-NIC-FLING-55634242-component-19849370.zip
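So, re-running the component install against the 7.0.3 build (the datastore path is a placeholder; use wherever you uploaded the zip):

esxcli software component apply -d /vmfs/volumes/datastore1/ESXi703-VMKUSB-NIC-FLING-55634242-component-19849370.zip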

Reboot! and?

Ehhh, it worked! I can now bind it to a vSwitch. I hope this helps someone. 🙂 I’m also wondering if this will burn me on future ESXi updates/upgrades. I’ll post any updates if it does.

Share NTFS USB HDD via SMB on FreeNAS

I’m boiling an entire night of knowledge down to be as short as possible:

Is it possible? Yes, reference (this post)

Does the internet say it’s possible? No and More, No

Jeff: “In the FreeNAS documentation it says using USB attached devices as shares is not allowed.”

Let’s do it anyway. A couple of point notes:
*I created a “veeam” account on FreeNAS, account ID 1001.

  1. Mounting the USB HDD to FreeNAS:
    Using the “Import Disk” option doesn’t work well:

    1. It requires an existing zpool (aka volume) to be configured.
    2. When completed, it doesn’t show files properly.
    3. It mounts the disk read-only.
    4. Much like the link shared above, we just mount it manually via the backend.
      1. ntfs-3g /dev/da6s1 /mnt/USBHDD/ -o rw,user_allow_other,uid=1001,gid=1000
      2. To make this stick after reboots, the fstab file has to be edited. *I haven’t done this yet; when I have and have tested it, I’ll update this area (a hedged sketch of what that entry might look like follows this list).
      3. The command mounts the NTFS volume using FUSE, and you can’t change ownership of files and folders after mounting, only during.
  2. Sharing the Drive via SMB:
    1. Attempting to create a share via the front-end UI will show the path in the path selector, but it will simply state “This field is required” when trying to create the SMB share, or you might get “The path None does not exist“.
    2. Symlinking or mounting directly to an existing zpool path that’s already shared via SMB results in failure accessing the drive, and FreeNAS logs “smbd: dnssd_clientstub write_all(36) failed -1/53 57 Socket is not connected”
    3. That line alone I went through hell trying to solve; it’s what led me to learning about FUSE and the chown issues and all that jazz. I went down so many rabbit holes I thought I was defeated, till I had one final idea: just like I manually mucked with the backend to get NTFS mounted read-write, maybe I could edit the backend Samba config to share the path, since the front-end Python scripts were coded to prevent it.
      1. Find the config file: Samba config file:
        /usr/local/etc/smb4.conf
      2. Add a shared path entry:
        [usbhddd] 
            path = "/mnt/USBHDD"
            printable = no
            veto files = /.snapshot/.windows/.mac/.zfs/
            writeable = yes
            browseable = yes
            access based share enum = no
            hide dot files = yes
            guest ok = no
      3. Save the file and restart the Samba Service:
        service samba_server restart
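For the fstab piece flagged in the list above, here’s an untested sketch of what the entry might look like on FreeBSD; mountprog hands the mount off to the ntfs-3g FUSE binary, and I’m assuming the extra options pass through. Treat it as a starting point, not gospel:

# hypothetical /etc/fstab entry - not yet tested on this box
/dev/da6s1  /mnt/USBHDD  ntfs  rw,mountprog=/usr/local/bin/ntfs-3g,uid=1001,gid=1000,late  0  0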
        

When I saw that share path available, and when I double clicked it and saw the files saved there show up, my jaw dropped!!! I couldn’t believe it worked.

Much like having to manually edit the fstab to get the drive to mount automatically at boot, I have a feeling the smb4.conf file may be overwritten at boot, which may require a cron job script to resolve. Again, I haven’t got to that point yet; I just finished this proof of concept that was, from my research, deemed impossible. Yet here I am blogging my success. See below for some info regarding Samba.
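If the config does get clobbered at boot, the kind of script I have in mind would be along these lines (untested; assumes the share block above was saved off to /root/usbhdd-share.conf, a file name I made up):

#!/bin/sh
# re-append the share block if it's missing, then restart Samba
grep -q "\[usbhddd\]" /usr/local/etc/smb4.conf || cat /root/usbhdd-share.conf >> /usr/local/etc/smb4.conf
service samba_server restart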

Samba options

Samba for FreeBSD

The key takeaway is that there’s a “link” between the Unix user and the “SMB” user: “FreeBSD user accounts must be mapped to the SambaSAMAccount database for Windows® clients to access the share. Map existing FreeBSD user accounts using pdbedit(8):”

pdbedit -a -u username

Final note: I did this so I could have Backup Copy Jobs run. The Veeam server is a VM, and this allows the VM to be migrated to other hosts while still being able to do both regular backup jobs and backup copy jobs. And now that the USB drive on FreeNAS is NTFS-based, I can just take the drive, plug it into a Windows machine, and start restore operations. Having said that, I’m doing this for my HomeLab and it is for educational purposes only.

Here’s a snip of the repo in use via Veeam.

IP Camera Recording Software

The Story

Who doesn’t love a good story? 😀 To start, I’ve been running Zoneminder for a while personally, and it’s done great. Then again, it’s been great with older PTZ cameras (think Wanscam). See this blog from 8 years ago (time sure flies).

This is an unpublished blog post that I never got around to writing up. So I’m just going to write up my experience with IP cameras real quick here and post this as is.

Zoneminder

OS: Linux

I started off with Zoneminder, and it’s still good for free, especially if you are using older IP cameras that just serve JPEG image streams over HTTP.

However modern cameras usually support RTSP (Real Time Streaming Protocol).

I haven’t used the product in a while, and maybe they support that now, but when I originally started to write this article they didn’t. Checking the home page, it looks pretty modern, so possibly the application is even better than when I first started using it.

Overall I liked Zoneminder, though the management UI was definitely lacking and dated.

Shinobi

OS: Linux

For a good while I ran Shinobi, which amazingly still seems to support the latest Ubuntu release. However, I found the application to be buggy and the community to be lacking.

Installation was a pain. Management was a pain. Data retention was a pain. Support was a pain.

But if you had it up and running properly, it did well for itself, at least in supporting RTSP, motion detection, and file recording quality.

XProtect

OS: Windows

My cousin told me about this product by Milestone Systems: XProtect. I dabbled with this one a bit and, overall, I liked it. The setup was OK, as most Windows setups are: just double-click and wizard away.

The design was for the most part intuitive; some aspects felt a bit clunky, and the software felt like it ran on an older framework (which I’m all for, given it remains stable).

Installation of Cameras I remember wasn’t too bad, but it’s been a while since I used it, and I unfortunately never blogged about its install and use. Sorry.

When my bike got stolen from my garage and I had to go back to view footage, it was nice: it kept all recordings up to a defined point, and motion-detected sections were highlighted. This made it really easy to find the section of interest and export the video for evidence submission.

Being Windows and all, it is of course more resource-intensive than the Linux counterparts, but if compute power is not a concern then I’d definitely say give this one a shot.

Surveillance Station by Synology

OS: Embedded Linux

If you happen to use a Synology NAS, did you know it comes with 2 licenses for Surveillance Station? If you have any modern IP cameras, this is by far the best option.

Installation is a breeze, just look for the app in DiskStation Manager (DSM), install, done.

Given you don’t have any complex networks with firewalls blocking traffic (a flat layer-2 switched network is best), installation of IP cameras is also unreal easy, as you are more than likely to find your camera in the provided device list.

For me this was the case with some more modern IP cameras; after simple auth and IP configuration you’re off to the races. Configuring motion detection was also fairly straightforward with this application.

The thing is, you only get licenses for 2 IP cameras; if you have more, you have to shell out for a license for each one. They are perpetual, so there’s at least that.

Surveillance Station by QNAP

OS: Embedded Linux

If you happen to run a QNAP, they have similar software available under the exact same name.

I tested this out with a QNAP and some older cameras. I did manage to get them connected, but still haven’t got motion recording to work; for some reason it seems to want to rely on an FTP service on the host NAS, even though it’s running on the NAS and could, in theory, easily save to a path the NAS itself can write to. This seems rather dumb. I’ll have to test further to find out, but this one is nowhere near as nice as Synology’s version.

Blue Iris

OS: Windows

There’s also one I’ve heard about and come across when searching for solutions: a product called Blue Iris. I personally have not had the chance to play with this software, so I can’t comment on its installation and ease of use.

If you’ve used this product, please leave a comment with your experiences.

That’s all for this old never-posted post, which will now be posted. Cheers.

Fixing Vaultwarden 502 Bad Gateway

So anyway, the other day I updated the base OS for the instance of Vaultwarden I’m running. If you are interested in setting up your own, you can follow this old guide; however, you’ll have to note the YML config differences as noted in my other post on upgrading to Vaultwarden, and in this post.

It’s running on Ubuntu which was easy enough to update to the latest release build.

apt update
apt upgrade
reboot
do-release-upgrade

Simple enough, and everything went to plan. I made backups along each step too. The service was up, and life was good after a full system upgrade. Yay… or so I thought… until I went to bring up an instance of Pi-hole via docker-compose and it errored out on me. When I looked up the error, it seemed to be related to Python.

So, I figured I’d install it, or try to?

Da faq? I remember this python/pip stuff being a pain initially too. What did I run again?

OK, maybe need to get the newer stuff? Old one no good?

Not now Kaa!!! let’s see…

apt install python3-pip

and…

pip install docker-compose

Looks like the command is working again. 🙂

But after pulling the latest build and bringing it back up, there were no errors returned by the command; yet when I tried to access the service all I got was a 502 Bad Gateway from the load balancer. Since the load balancer lived outside the container and was unaltered, it most likely was not the culprit. I asked on the #nginx channel and was told to check the container status with:

docker ps

OK, but why? Then another helpful hint from a user:

docker-compose logs -f

This is when things got a bit weird/funny. I found a post about the SMTP deprecation warning, which stated it had nothing to do with the service not coming up, and which then linked to an issue post more likely to be the cause.

So I kept trying; the log wouldn’t change from the snip above. I thought for sure it had to be this “Rocket Address”, surely. I just wasn’t sure what .env was in that issue’s context. Another helpful hint from IRC:

docker inspect bitwardenzewwyca_app_1

Looking at it, the address was already defined as 0.0.0.0.

Then I found another post, same exact problem, saying the exact same thing, but I didn’t know what env.sh was in their context either. Slowly losing hope, despair ensues.

I even tried changing the log location, creating a file with chmod 777 on it:

Dang it! But this is where things took a strange turn… I decided, even though SMTP_SSL wasn’t the root cause, to change it anyway as suggested:

changed SMTP_SSL=false to SMTP_SECURITY=false

Well, it finally shut up about the deprecated setting, but same dang issue. Can I just get rid of the log file option? (Since it’s just in the YML config file anyway…) I removed the log file entry from the YML file, and then…

Whoa, a different error message. Wait, the option is important, and I marked it wrong. OK, final change:

changed SMTP_SECURITY=false to SMTP_SECURITY=off.
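For clarity, the relevant environment entries in my YML ended up along these lines (just an excerpt sketch; your service layout may differ):

environment:
  - ROCKET_ADDRESS=0.0.0.0
  # SMTP_SSL=false is deprecated; SMTP_SECURITY replaces it
  - SMTP_SECURITY=off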

Bring the container down and back up, and…

Ehhhhhhhhh! My Vaultwarden’s back! Time to see if I can bring up ze Pi-hole!

I hope this post helps someone in the same boat.

Irssi on QNAP TS-212p

Back Story

As you may or may not know, my ASUS RT-N16 shit the bed after I tried to beef up its cooling; some unreal strong double-sided tape was used from the factory instead of regular thermal paste. Anyway, I loved that router since it was almost always running and kept my chat history persistent in the channels I was in. I recently tried Lubuntu, and while it was nice, I didn’t like having a VM to manage and sucking up resources for such a small requirement; it’s like using a sledgehammer to drive a frame-mounting nail. Anyway, I still have this really old NAS kicking around that does a great job: a QNAP TS-212p. Looking at the available apps in the App Center wasn’t exactly overwhelming. Googling, I found a nice thread of someone asking the exact same thing… over 10 years ago.

Not good, but this was like my original post on getting Optware onto the ASUS RT-N16, so I remained hopeful. The basic answer the poster came back with, after someone called him out for the usual useless “I fixed it myself” answer, was that they installed Optware, then Irssi.

Oh nice… OK… What is Optware, and how do you install it?

“OBSELETE: as of January 2019, Optware is no longer listed in the QNAP “App Center”. It appears that QNAP withdrew it sometime in 2015 or 2016. Entware is a non-QNAP QPKG which serves the same purpose of giving access to many command-line software tools used on a wide range of NAS systems.”

What kind of useless shit tits is this? FFS. OK, so “ipkg” is Optware (defunct) and “opkg” is Entware. The link in the wiki to Entware is a blank wiki page. Great. OK, jeez, how do I install Entware?

Well, I Googled, and I found one nice thread asking what I was initially going for, “Optware on TS-212“, which all comes down to what we already figured out: Optware is dead and Entware has taken over, and the only link in the thread to Entware is dead. Pricks. Here’s a Reddit thread I found kind of discussing the different versions of Entware, but again no help on installation.

Installing Entware

Searching some more, I eventually found this link, which is apparently the source. It seems most downloads are based on CPU architecture, while some are very specific to the QNAP model. So I searched for mine as well, and it seems it’s ARM-based.

So I downloaded the ARM-based image, clicked into App Center in the QNAP web management UI, clicked manual upload, and…

Holy crap it worked. Checking the console…

Nice! OK, but just before we move on, I have a couple questions…

Where are apps going to be installed to? When I did this on the router, it required /opt to be mounted to a USB drive to store the data, as the router has no storage of its own. A NAS does, but the firmware is one part and the OS the other. Doing a df -h shows me a bit of info I may need to take note of:

As you can see, / only has 16 MB of space; however, doing an ls -la on the root showed me that /opt is linked to the primary NAS storage:

Add User

Now, as mentioned in my post on the ASUS running Tomato, I’m not a fan of running things in admin space, so let’s see if I can do the same here and make a standard user for running Irssi. In that post there seemed to be a bug/logical design issue when attempting to use adduser; quickly Googling, I see a post all the way back from 2008 showing code using adduser (*note the difference between useradd and adduser) with no complaints, so let’s try the simple thing first.

adduser {username}

My concern was the home path.

As you can see, I checked, and since /home is linked to /mnt/ext/home and /mnt is mounted to root, there’d only be 16 MB available on this user’s home path. Switching to the user, changing to the home path, and checking shows it’s actually under /opt, so we’re fine.

Can I SSH in as this user? Appears that’s a no. Mhmmmm, what did I miss? Searching the interwebs, I came across this interesting thread.

The first suggestion is to use the Web UI to grant the permission, but I could only see the user that was created via the UI, and the user created in the above snippet was not showing to grant permission to… someone in the thread states:

“The ‘Edit Access Permission’ page only gives you a list of administrators, not all users.

To allow other than administrators access you need to install your own version of ssh server. If you really want to do this then search the forum as it has been done before.” -Don

Why? Who knows. What a PITA. The OP apparently went the extra distance to do it, just to have it fail on him anyway. Trying telnet, the PuTTY window just closes instantly. Now I can try a couple of things: do as Don suggested, or, like the OP, run the OpenSSH server via Entware, which we just got installed.

Mhmmm, when I click on users within the UI, it does see the account. *thinking…* No way, it worked…

So, since I did see the account in the UI, as Don mentioned, and per the final comment by chrisonnas, I added the account above to the administrators group, then went back into Edit Access Permission, and the zewwy account was there. I granted it access to SSH (my existing connection dropped, indicative of the change applying and the service restarting). SSH in as the newly created account, and success! Yes.

I then went back into the UI and removed administrative rights, and was still able to SSH in. Booyah.

OK, now Irssi finally!

Installing Irssi

opkg install irssi

Da faq?

What packages exist?

Uhhh, cause I didn’t update from source?

FFS man… No way… This worked?!?!

“OK, fixed by opkg install opkg …”

opkg install opkg

This isn’t going to work, cause it’s already poo…..

Sweet Jesus Murphey… ok.. now install Irssi?

opkg install irssi

Alright! Wo-wo-wo-wo! Now can the standard Zewwy user do a screen instance, and run Irssi??!?!?

SSH in: Check
Screen session: Check
Irssi:

Sometimes… man… I was about to give up. I spun up Lubuntu to hop on IRC (LOL) to ask for help; I wasn’t sure where to ask, the #qnap channel on libera.chat was dead, but I knew #linux was a busy place and the QNAP runs on Linux, so I asked for help there. There was a friendly guy by the name of DLange who was nice enough to tell me the “Entware ship had set sail 5 years ago”, which kind of matches the time frame. Then, out of nowhere, another friendly chap by the name of Nei shared a QNAP thread about someone having the same issue, but for nano instead of irssi.

Checking the version he mentioned, it was the exact one I linked and downloaded. The only diff was the update/upgrade commands…

so…

opkg update
opkg upgrade

Irssi:  CHECK! (finally dang nabbit)

Then do your basic Irssi setup, like auto network joining and the like. 🙂
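For example, the usual incantations look something like this (from memory, so treat it as a sketch; newer Irssi builds use -tls where older ones used -ssl, and the network/channel names are just examples):

/network add liberachat
/server add -auto -tls -network liberachat irc.libera.chat 6697
/channel add -auto #linux liberachat
/save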

See you on IRC. 😀

/ignore -channels #channel * JOINS PARTS QUITS NICKS

ASUS calling Microsoft

Back Story

I’ll try to keep this post short as I’m behind on many other posts I have to finish. hahah :S

Anyway, I was thinking it was time to update my Pi-hole, so I checked the admin web interface for clients to see who’d still be using it for DNS. The plan was to make a list and be prepared to change them as required (any outside of DHCP, of course, as I’d simply change the IP there). Now you might be wondering: why change the IP address? Which is a fair question. I could just update the one in question, but I had bigger plans to move it to another server, and I didn’t want to give that server multiple IPs, so I figured it would be easier to spin up the new service there and simply change the DNS on the DHCP server/service. Anyway… where was I? Oh right, checking the web admin, I noticed the top client was my new ASUS RT-AX88U. I was hoping to get a model that supported Tomato like the old RT-N16 I had for so many years, which I recently broke and replaced with this unit. It currently can’t run Tomato like I managed to do with the RT-N16, so I had just configured it for AP mode, figuring it doesn’t need to do much else for now besides serve unreal good WiFi.

Yet it’s calling home to “dns.msftncsi.com”. When I looked up this domain, it seems to be used mostly by Windows machines to check that they are online.

Fix This

Looking a bit further into it, I managed to find this magical Reddit post (I really love Reddit; I’ve found so many helpful posts there). Anyway, let’s see if we can follow the steps on this router.

Step 1 – Enable Access

The source uses telnet, but I’m not a fan of transferring creds in cleartext unless I know for certain it’s a completely isolated network. Since the router supports SSH, I enabled that instead and logged in. *Note: I had to remove the fingerprint from the old RT-N16 I used to SSH into.

Step 2 – Gain Shell Access to your Router

The login & password are the same as for the web interface.

K, with that done, let’s see if we can edit the NVRAM, but first let’s take a look, as the OP suggests.

Step 3 – Look deep into NVRAM

nvram show | sort | less

I used the less command instead; as my old Linux instructor once said, “less is more”. Using less you can scroll through the results with the up and down arrow keys, and look-e-here (press Q to exit less):

Step 4 – Finding the Droids

The droids I was after. Time to eliminate them.
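If you’d rather not scroll through the whole dump, filtering works too, assuming the keys are named like the ones below (BusyBox grep is available on these routers):

nvram show | grep dns_probe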

Step 5 – Kill the Probe Content Droid

nvram set dns_probe_content=127.0.0.1

Step 6 – Kill the Probe Host Droid

nvram set dns_probe_host=""

Step 7 – Prevent Droid Resurrection

nvram commit

Step 8 – Fully Enforce Your New Empire

reboot

Verify:

Noice!