SharePoint – Can’t Delete ContentType

The Story

You know every post has to start with a story, so it's story time. It all started with a site that needed to be templated and used to create new sites. When the user went to actually deploy the new template via the "create new site" link under the Site Contents area, it errored out.

Create SharePoint Site Template

I wouldn't have blog posts if everything worked via the happy path; there are other people to blog about that…

This of course required jumping through some hoops to even make the site savable as a template; in my case it was just a property that needed to be set using PowerShell:

# Allow the site to be saved as a template (this setting is not exposed in the UI)
$web = Get-SPWeb http://your-site
$web.AllProperties["SaveSiteAsTemplateEnabled"] = "true"
$web.Update()
$web.Dispose()

Otherwise you'll get the following error:

So once this is done, you can finally create a template.

However, now we have to actually deploy it.

*NOTE* When you create a template of a site, you are secretly creating and activating a "Solution" in the site collection's solution gallery. So if you need to manage or delete a template, you first have to deactivate the solution; then you can delete it.
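If you'd rather do the deactivate-then-delete dance from PowerShell, the sandboxed-solution cmdlets cover it. A minimal sketch, assuming the template was saved as template.wsp (the name is whatever you entered when saving the site as a template):

# List the solutions in the site collection's gallery, then deactivate and remove one
$site = "http://your-site"
Get-SPUserSolution -Site $site | Format-Table Name, Status
Uninstall-SPUserSolution -Identity "template.wsp" -Site $site -Confirm:$false
Remove-SPUserSolution -Identity "template.wsp" -Site $site -Confirm:$false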

Deploy New SharePoint Site Template!

Would you expect anything else from my blog post? 😛 OK, this should be easy enough; let's just delete this old content type, as it was a legacy one left behind from a migration.

So first, since this template is trash anyway, you'd figure these types of checks would take place at creation of the template, not at deployment.

Ahhh, SharePoint never ceases to piss me off… OK, let's google this…

The first source is dead-on with the solution… however, it requires a direct database change, which takes SharePoint out of a "supported" state, even though it's obviously already broken. The alternative is to find the original feature package and re-install it, either via the command line (stsadm.exe), PowerShell, or the front end. Of course, if this is a third-party feature and you only have an installer for an older version of SharePoint, then this would have to be cleaned up on the old environment before migration. If I find the link again (I didn't save it at the time) 🙁 there is apparently a way to map the content type to a "dummy" feature, delete the content type, then delete the dummy feature. That is the only alternative that stays "supported" while working via the front end.

In the meantime, you can also spin up the site in a test environment and do the needful on the content type in the database backend (connect to the SQL instance and the site's content database, WSS_Content by default):

-- Check the flag first (the content DB is WSS_Content by default)
SELECT ContentTypeID, IsFromFeature FROM dbo.ContentTypes WHERE ContentTypeID = 0xIDNum

UPDATE dbo.ContentTypes
SET IsFromFeature = 0
WHERE ContentTypeID = 0xIDNum

The content type ID can be extracted from the address bar in the front end, where it is known by the web parameter ctypeID:
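If you'd rather not fish the ID out of the address bar, PowerShell can read it straight off the web. A quick sketch, assuming the content type's display name (hypothetical here):

# Look up a content type's ID by its display name
$web = Get-SPWeb http://your-site
$web.ContentTypes["Your Content Type"].Id
$web.Dispose()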

Now you'd figure there'd be no problem deleting the content type, until another error shows up with a different reason. (OK, I remember it being different, but until I run through these tests again, maybe they were the same, as the second source explains…)

[Insert Picture of error after DB change]

Googling, I came across this guy's very nice blog post about the same issue!

Really short version… the content type is still used/referenced by another SharePoint object within the environment. He shows and references some really nice C# code to help track down the offending objects. However, I have no interest in building an app just to find these… there has to be another way!

Ohh Stack Exchange, how beautiful you are:

# Walk every web in the site collection and print where the content type is used
$site = Get-SPSite "your-site-url"
foreach ($web in $site.AllWebs) {
   $ctype = $web.ContentTypes["Your Content Type"]
   if ($ctype) {
      $usages = [Microsoft.SharePoint.SPContentTypeUsage]::GetUsages($ctype)
      foreach ($usage in $usages) {
         Write-Host $usage.Url
      }
   }
   $web.Dispose()
}
$site.Dispose()

This helped me track down the objects, which in my case were lists…

It turned out to be a list called "Tasks" in all subsites. Now, this is a SharePoint-created list object; however, they were created after this particular feature was enabled on the site, thus all subsites inherited the issue.

Now, there are some nice online references for deleting content types, lists, and other objects via PowerShell.

However, if you know the object model well enough, you can pull off one-liners that do wonders…

# Delete every list called "Tasks" in every web of the site collection
$spsite = Get-SPSite http://yoursite
$webs = $spsite.AllWebs
($webs.Lists | ?{$_.Title -eq "Tasks"}).Delete()
$webs | %{ $_.Dispose() }
$spsite.Dispose()

And just like that, hundreds of old SharePoint lists that were no longer used were gone. If your lists contain data that needs to be kept, you'll have to migrate the data to a new list, delete the offending list, and then migrate the data back.
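For the keep-the-data case, here's a minimal object-model sketch of that shuffle, assuming a tasks list where only the Title field matters (adjust the field copying to your data):

# Copy items from the old list into a fresh one before deleting the old list
$web = Get-SPWeb "http://yoursite/subsite"
$old = $web.Lists["Tasks"]
$newId = $web.Lists.Add("Tasks-New", "", [Microsoft.SharePoint.SPListTemplateType]::Tasks)
$new = $web.Lists[$newId]
foreach ($item in $old.Items) {
   $newItem = $new.Items.Add()
   $newItem["Title"] = $item["Title"]   # copy whichever fields your data needs
   $newItem.Update()
}
$web.Dispose()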

OK, NOW you can create a template from the site, deploy it, and it should succeed without issue. You can then navigate to the site content area where the solution packages are stored, copy the template out, upload it to your production environment, and create new, clean sites. Note, however, that this won't fix the issue on the production side.

So you'll have trade-offs to consider in deciding how to handle the issue.

Summary

SharePoint is a beast of a machine and can often include some unexpected bugs. I hope to extend this blog and provide more SharePoint-related content in the future. Cheers, I hope this helped someone out there.

UniFi Shows MAC address instead of Hostname

I noticed this recently: the UniFi management interface would show some clients as just their MAC addresses instead of the host names it shows for most other devices.

Searching around, I found this one, but that case was after an update, and I had not updated the software.

Then I found this thread, which was more what I was looking for; it explains how the name is retrieved… "DHCP snooping".

Alright, so taking a look at the DHCP server, I noticed the names were indeed empty on the IPs that were given out.

It didn't take me long to determine that it was Android devices. And when I wanted to configure a hostname on a device running the latest version… I found out I can't?

“Hostname is used to easily identify and remember hosts connected to a network. It’s set on boot, e.g. from /etc/hostname on Linux based systems. Hostname is also a part of DHCPREQUEST (standardized as code 12 by IETF) which a DHCP client (Android device in our case) makes to DHCP server (WiFi router) to get an IP address assigned. DHCP server stores the hostnames to offer services like DNS. See details in How to ping a local network host by hostname?.

Android – instead of using Linux kernel’s hostname service – used property net.hostname (since Android 2.2) to set a unique host name for every device which was based on android_id. This hostname property was used for DHCP handshake (as added in Android 2.2 and 4.0). In Android 6 net.hostname continued to be used (1, 2, 3, 4) in new Java DHCP client when native dhcpcd was abandoned and later service was removed in Android 7. Since Android 8 – when android_id became unique to apps – net.hostname is no more set, so a null is sent in DHCPREQUEST. See Android 8 Privacy Changes and Security Enhancements:

net.hostname is now empty and the dhcp client no longer sends a hostname

So the WiFi routers show no host names for Android 8+, neither we can set / unset / change it.

However on rooted devices you can set net.hostname manually using setprop command or add in some init’s .rc file to set on every boot. Or use a third party client like busybox udhcpc to send desired hostname and other options to router. See Connecting to WiFi via ADB Shell.”

Well then… now I have to manually set aliases and use DHCP reservations just to be able to track these devices… because "privacy".
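If your DHCP server happens to be a Windows one, the reservation-plus-readable-name part is at least quick from PowerShell. A sketch with made-up scope, IP, and MAC values (substitute your own):

# Reserve an IP for the nameless Android device and give it a readable name
# (hypothetical scope/IP/MAC -- substitute your own)
Add-DhcpServerv4Reservation -ScopeId 192.168.1.0 -IPAddress 192.168.1.50 -ClientId "AA-BB-CC-DD-EE-FF" -Name "Android-Pixel" -Description "No hostname in DHCPREQUEST"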

Summary…. Thumbs up… man!

Palo Alto Networks – Service Routes

The Story

You can read about Service routes from PAN directly here.

Basically … “The firewall uses the management (MGT) interface by default to access external services, such as DNS servers, external authentication servers, Palo Alto Networks services such as software, URL updates, licenses and AutoFocus. An alternative to using the MGT interface is to configure a data port (a regular interface) to access these services. The path from the interface to the service on a server is known as a service route. The service packets exit the firewall on the port assigned for the external service and the server sends its response to the configured source interface and source IP address.”

This is generally used if you configure the firewall but don't actually happen to physically plug anything into the MGMT port (MGMT on physical boxes, or vNIC0 on VMs), yet the device does have an internet connection, or has some interface on the dataplane with access to a specific service. Whatever the need may be, it's useful to know these exist and can be utilized in certain situations.

When I discussed this with a friend who deploys many of these devices, his preference was to use the MGMT interface for most things. I did note one case, email, where you could configure the service route to go out the interface acting as the gateway for the mail server, thus requiring only one IP in the ACLs of the mail relay/server.

He did note that you cannot test email from the passive firewall, as the interface won't be active, which could also be problematic for other monitoring services such as SNMP, if utilized. Luckily, many different services (SNMP/email/LDAP) can be configured independently, and all default to the MGMT interface.

Summary

The main reason I even noticed this was that email was not working on the alternate firewall after it took over in a failover, even though the dashboards on both firewalls stated the running configs were the same. Well, it turns out that service routes, I guess, are not checked for synchronization between peers.

So yeah… note that if you are using service routes with PAN firewalls.

HP Laptop – OS Boot Loop

I just wanted to make a short post today on how I fixed a laptop I thought was fully toast.

The Story

This story begins months ago, when a user's laptop wouldn't boot properly following a Windows update. Taking a look at it, after he mentioned it just going into a "looping cycle", it was acting really weird! Symptoms of the device:

  1. The system would boot into the UEFI/BIOS menu without any issues and could stay running there endlessly.
  2. The system could run all UEFI-based hardware tests, and all reported functional hardware with no faults.
  3. As soon as you would get into the boot loader of any OS, the system would hard shutdown and power back on; wash, rinse, repeat.

What had me so baffled was that any OS boot would cause the hard shutdown (power lights all go off, screen goes dead blank), then the power LED would come back on and the POST screen would show. If I interrupted it by going into the BIOS or running self-tests, it wouldn't hard shutdown at all.

I tried everything. I had a few of these laptops already taken apart, so I even tried swapping all the parts, including the battery (which on these particular laptops is the source of power for the CMOS; yup, the laptop battery is the BIOS config-saving power source). However, even that didn't fix it, and thus it sat on a shelf for months.

Till Today

I was working on another project when I got hit with a layer 2 segregation issue in the design plans, which had me really upset and my mind hurting. So I decided to step back from the problem. I just happened to have this particular laptop on my desk that day, as I needed some laptops for testing, and realized it was this machine, so there it sat.

I decided to take another shot at it. Since I was already on a path of failure, I figured, what's the worst that could happen? Just a bit more wasted time before going home.

So anyway, I thought I might as well see if there's new firmware, and maybe that might help fix it (it seemed almost firmware-related). So, lo and behold, I grab the latest firmware for this laptop and create a "recovery USB stick", then find out you simply plug that USB stick into the laptop, power off the machine, press and hold "Windows Key + B", then power the unit on while still holding that key combination.

Holy crap, the first time I follow instructions and it actually works; mind blown. So it completes the firmware update, everything seems fine, and I try to boot a Linux OS from a USB drive. Boot loop. Ahhh, FFS.

I decided to vent my frustrations on the local #SkullSpace IRC channel, and another IT tech from the States said something of the usual nature: "Open and reseat all the things?" Of course, as I stated above, I'd already had a couple of these open for repair and swapped all the goodies with no different result.

When I repeated back to them what I stated above ("I tried everything… including the battery, which is these particular laptops' source of power for the CMOS"), I realized I had done the whole firmware upgrade without the battery plugged in at all.

I decided to plug in the battery and try to boot (of course this had been done before, so I didn't think anything of it). When I booted, it stated the CMOS had been reset (well, yeah, the battery was unplugged the whole time); I pressed Enter to continue… and it didn't boot loop.

At this moment I was like, "WTF". I was blown away to see that after months of collecting dust, I had somehow magically managed to get this laptop to boot normally.

That’s what I call a good way to end the day…. now about that layer 2 segregation issue….

*Update* It went right back into the OS boot loop; it's effed. 😉 It would require a full mainboard replacement; not happening.

Bitwarden… Don’t do this

What Happened?!

I wanted to write up a quick blog post on something that I was rather upset about: a change that was very badly communicated and caused people to click things they shouldn't have, without verification. But because it's a "web app", they seem to be able to get away with these things.

And here is that issue: Extension disabled due to new permissions · Issue #1548 · bitwarden/browser · GitHub

and Bitwarden permission change warning on brave browser · Issue #1549 · bitwarden/browser · GitHub

Now, I shouldn't have to explain why this was bad on so many levels, those being: (1) the change was really unneeded, (2) it was not optional, and (3) it caused users' extension icon to disappear.

And yes, they made it easy, as it only required a click and no admin permissions, but guess what… that is exactly how getting compromised works. You attempt to educate end users not to do that, and then stuff like this implies there's nothing wrong with accepting an "accept permissions" prompt out of the blue!

Now I'm going to share some comments I 100% agree with from those issues, from a lad called clecap:

“Bitwarden is a highly sensitive security application managing 100 and more passwords. It is not a good idea to have this application require additional permissions to communicate with other applications. I rather take this as a worrying indication that the development of Bitwarden is turning into a bad and sad and wrong direction.

And, yes, Bitwarden should definitely make this additional request for permissions optional.

Where can I download the old version of the extension? I do not want this extension to operate with more permissions than is necessary for the most fundamental options.”

Now, there are a couple of dislikes on that, which could be due to the following comment by "github-account1111":

“@clecap I agree with the premise, but if security is important, then using older versions is counterproductive, as it leads to a potentially less secure environment than with an up-to-date version (even one that has more permissions).”

Now I'll put my two cents in right here… it's not the same to mix features in with security. Feature updates almost never bring additional security; it's usually the opposite, and in this case it is.

As clecap again explains:

“@github-account1111 absolutely yes – provided the updates move into the right direction. Here I have, sorry to say, some serious doubts. While I certainly understand the convenience of all kinds of additional UI features and while I am certainly grateful that they exist they (1) definitely should be optional, (2) trade convenience for security, (3) were not reasonably communicated to end users and (4) came as a “oops, my system has been hacked” surprise to me.

And therefore my trust that updates move into the right direction of more secure software is, here, shaken.

All I want from a password store is to keep my passwords safe – and communicating them to “cooperating programs” by means of some “click ok or have your password store disabled” is the textbook example of what I am not expecting from secure system design. Sorry.”

I again have to 100% agree with him here. Now, for the response from the "officials"?

cscharf commented yesterday

Hi All,

We’ve been discussing fervently today internally around this, and while we’ve figured out a way to make this permission optional in chromium based browsers, obviously we won’t be able to do so in Firefox.

After deliberation and discussion, and before our official product release announcement, we’ve decided that it would be better to exclude Firefox from browser biometric authentication, for now, until the upstream issue is resolved: https://bugzilla.mozilla.org/show_bug.cgi?id=1630415 rather than forcing all Firefox Bitwarden users to accept the new permission.

Extension update will be published soon as we’re working on appropriate PRs to make this change, along with supporting documentation.

Thank you for your feedback and continued support, patience and input, it’s extremely valuable and part of what makes open source amazing!

Sincerely,
The Bitwarden Team.

OK? So… because the permission couldn't be made optional on one platform, it was worth the reduction in security and the bigger attack surface, and the feature was introduced "without say" to end users. That makes no sense when security, not features, should be first and foremost for this product.

Final Words.

This feels like upper management making a poor judgment call due to peer pressure and stepping outside of the company's mission statement. What a sad day…


Repair a Corrupted Windows Boot… Again

The Story

This one begins with a support request that a system is non-responsive. The usual suggestion of a hard shutdown and reboot was made.
They responded that it was erroring with something else, then stated it would go into "attempting repair", restart, and the cycle would continue.
Once I got a hold of the laptop, I attempted a boot repair using the recovery apps from the Windows 10 boot options. After that failed, I resorted to my old blog post about a similar problem from years ago, which showed the same symptoms:
bootrec /FixMBR (didn’t work)
bootrec /FixBoot (access denied)
bootrec /ScanOS (Found 0 installed instances)
bootrec /RebuildBCD (Found 0 installed instances)
Quickly googling the access denied on FixBoot brought me to this Answers page on MS, where billy reminded me about assigning the boot partition a drive letter, as well as a newer command to run, which worked!
1) diskpart
2) list vol
3) select vol 3 (or 4; whichever volume is the ~100 MB EFI system partition)
4) assign letter=V
5) exit (bcdboot runs from the recovery command prompt, not inside diskpart)
6) bcdboot C:\Windows /s V: /f UEFI
I was pretty shocked to see Windows boot, and glad this was one system I didn't have to re-image and manually save files from. 😀

Palo Alto Networks – Email

Story

Well, back to work, so what better than another story of fun times troubleshooting what should be a super simple task, when I was hit with a delayed, greyed-out screen on the management UI and the subsequent error:

“Unable to send email via gateway (email server IP)”

The Hunt

Let’s see if others have hit this problem:

The first one's a dead end.

The second and third basically state to ensure legit email addresses are applied to both the "To" and additional recipient fields. In my case, I know the single To address is fine.

And finally, the how-to by Palo Alto Networks themselves.

Well, that's annoying; it basically tells you to ensure the email server is accessible, but they do so from other devices because the PA can't even do a telnet test… uhh, OK, useless; I know it's open.

Things to Know

I contacted my buddy who specializes in PA firewalls; there are some things to note:

  1. Service Routes
    By default, all traffic from the firewall itself goes out the MGMT interface unless otherwise specified. In my case, I was using a service route for email to use the interface acting as the gateway for the subnet in which the email server was residing.
  2. Intrazone and Interzone Rules
    By default, if traffic doesn't hit any rule, it will be dropped; watch the video by Joe Delio for a more in-depth understanding.

The Solution

Now, even though I had a "clean up" rule as described by Joe, I was still not seeing the traffic being blocked (and I knew it was being blocked).

Once my buddy told me to override the intrazone rule and enable logging on that rule, I was finally able to see the packets being dropped by the PAN firewall within the Traffic/Session logs.

Sure enough, it was my own mistake, as I had forgotten to extend an existing rule which should have had the PAN's gateway IP within it. After I noticed this and extended the rule to allow SMTP port 25 from the PA's IP (not the MGMT IP), I was able to send emails from the PAN firewall.

Hope this helps someone.

Also note that I set up a dedicated receive connector on the email server to ensure the email would be allowed to flow through.
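If the mail server is Exchange, that dedicated receive connector is a one-liner from the Exchange Management Shell. A sketch with a hypothetical connector name, server, and firewall IP:

# Dedicated relay connector scoped to only the firewall's dataplane IP
# (hypothetical values -- substitute your own)
New-ReceiveConnector -Name "PAN Firewall Relay" -Usage Custom -TransportRole FrontendTransport -Bindings "0.0.0.0:25" -RemoteIPRanges "10.0.0.1" -Server EXCH01 -PermissionGroups AnonymousUsers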

Resolving a 503 response from HAProxy

Story

A while ago I blogged about using OPNsense with HAProxy as a reverse proxy for Exchange services. You can serve many other applications, but HTTP(S) has become very commonplace. This has simplified network requirements at layer 4 and pushed most security up to layer 7 (either patch management (updates) or a next-generation firewall (NGFW)). Anyway, sometimes the best form of security is simply blocking access to areas that shouldn't need to be accessed, especially from the public-facing side. Imagine a dedicated room, such as a server room: you would keep the doors to this area locked and generally not directly accessible from the outside (a door facing an outside wall). The same concept applies here for services. Of course, you still want users to be able to access the receptionist area. In this analogy, the receptionist area is the OWA portal, and server room access is the ECP portal.

Now, in my previous post, I did attempt to remove public access to the ECP area; you'd have to be on the inside network to reach it. However, much like the comment on that post points out, if you knew about the redirect URL at the application layer (HTTP requests with URL parameters) and manually entered the redirect URL path, you could still reach the ECP login page from the public-facing side. (Whoops.)

Now, this isn't the point of this blog post, but it will make a nice follow-up once the actual concept of this post is… presented?

The issue

Anyway, when using HAProxy, one might notice that the logging is rather sparse (this is by design, to prevent flooding the server's local storage with, well, logs). Why don't they simply define limit-based logging and do FIFO (first in, first out) log rotation based on those limits? Not sure. Anyway, the first thing you'll notice is that you get 503 responses and nothing but "client connections" in the log area:

As you can tell, pretty ****in' useless. Nothing we didn't already know: connections on ports 80/443 are allowed and passed to the load balancer, yet the load balancer is still not serving content correctly. Let's move on.

Troubleshooting

At first, I was fairly confident all my real servers, conditions, and rules had been created successfully and that the order was good within the "Public Services" (interface listener).

Googling the generic issue provided, well, generic answers, which didn't help me. If I knew what the HAProxy service was actually doing, I'd stand a much better chance of solving it.

Enable Logging

First, we raise the logging level on the actual service from "Info" to "Debug".

*Note: remember to change it back to Info to avoid log flooding*

However, this still didn't provide me any insight when I went to check the log section.

It turns out there's a separate level of logging for each listener you have. So under your specific "Public Service", aka interface listener, enable advanced logging on it:

Once I had this level of logging enabled, I could finally see which backend server was being hit for each request.

Solution

In my case, it turned out requests were hitting a completely different backend than what the rules within the "Public Service"/listener defined. When I checked the rule for the wrong backend being hit, it turned out this rule was missing the very condition it was supposed to have on it; in fact, it had no conditions defined at all. As such, it matched any request passed to it, since it was higher up in the list of rules on the "Public Service"/listener.

I hope that made sense. Anyway, in this case I ensured the rule for that backend server had the actual condition attached to it that it was supposed to serve. In my setup it's all mostly hostname-based, nothing complicated like regex or path parameters.

Icing on the Cake

Now, remember my story at the beginning about trying to block ECP and failing at the redirect? I didn't like that, and I came up with a condition and rule set that works.

Now, as you can see from this, I created two conditions. The first matches if the path ends with ecp (this might be an issue if any other backend happens to have a path that ends in ecp; lucky for me, that's not the case, though it would be a concern if managing alternative domains on the same interface). The second condition is a bit more direct/specific: as you can see from the first image, it watches for any request whose URL query parameter redirects to ECP. The rule then specifies the OR of the two conditions, so if either one is met, the request is blocked.
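And since I didn't trust it until I saw it fail, here's a quick outside-in probe of both ECP entry points from PowerShell. A sketch assuming a hypothetical hostname (expect a 4xx/5xx status instead of a login page):

# Probe both ECP entry points from the outside; a 4xx/5xx means the block works
# (hypothetical hostname -- substitute your own)
$urls = "https://mail.example.com/ecp",
        "https://mail.example.com/owa/auth/logon.aspx?url=https://mail.example.com/ecp"
foreach ($u in $urls) {
   try   { $code = (Invoke-WebRequest -Uri $u -UseBasicParsing).StatusCode }
   catch { $code = $_.Exception.Response.StatusCode.value__ }
   Write-Host "$code  $u"
}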

Cheers!

Lync/Skype Enable User – Email is Invalid

I'll make this post really short. The other day I needed to enable some new users within a domain that has trusts: users in one domain, with some services in the trusted domain. The service in question is Exchange, and thus these were linked mailboxes.

First Symptom:

Opening Outlook for the first time and letting the auto-configure wizard run wouldn't auto-populate the user name and email in the second window of the wizard.

At this point I simply worked around the issue by filling in the name and email address, leaving the password field blank, and clicking Next; the rest of auto-configure worked without a hitch.

Second Symptom:

In the Lync/Skype Control Panel, enabling the user gave: "Email address is invalid".

At this point I sort of had an 'ah ha' moment and decided to check the user's object in AD (in the source domain with the active accounts, not the disabled accounts in the Exchange domain), and sure enough, their email fields were blank. Normally this would be populated if Exchange were in the same domain, but since these were linked mailboxes with disabled accounts in the trusted domain, this is something Exchange, I guess, just doesn't do in this situation.

Solution: Populate the email field on the user's AD object in the source domain.
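For more than a handful of users, that's quicker from PowerShell than clicking through ADUC. A minimal sketch, assuming the ActiveDirectory module and a made-up identity/address:

# Populate the mail attribute on the source-domain account
# (hypothetical identity/address -- substitute your own)
Import-Module ActiveDirectory
Set-ADUser -Identity "jdoe" -EmailAddress "jdoe@example.com"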

Sure enough, this resolved the first symptom as well. 😀

Removing “Network” from File Explorer

SOURCE: Winaero

*Update* I wouldn't recommend this method; a better one is below.

  1. Go to the following Registry key:
    HKEY_CLASSES_ROOT\CLSID\{F02C1A0D-BE21-4350-88B0-7367FC96EF3C}\ShellFolder
  2. Set the value data of the DWORD value Attributes to b0940064. If you are running a 64-bit operating system, repeat the step above for the following Registry key:
    HKEY_CLASSES_ROOT\Wow6432Node\CLSID\{F02C1A0D-BE21-4350-88B0-7367FC96EF3C}\ShellFolder

The issue with this method is that it requires you to take ownership of the key, usually by running regedit as SYSTEM using psexec. I thought maybe if I created a GPO to deploy these settings it would work, but instead I got Error Code: 0x80070005, which apparently means access denied.

After farting around a bit down a rabbit hole about HKCR and how it's apparently derived from HKLM\Software\Classes, I decided to simply ask Google how to remove that icon via a GPO, as much easier techniques usually exist. I found this Spiceworks thread where a user by the name of Adam Sneed provided an ADM file which, if you are unaware, creates configuration areas within GPMC for managing workstations. And if you know GPOs, you know that when pushed down to client machines they're generally nothing more than registry changes. Opening up the shared ADM file from Adam shows the following:

CLASS User

CATEGORY !!Custom

CATEGORY !!ExplorerExtras

 POLICY !!HideNetworkInExplorer
 KEYNAME "SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\NonEnum"
 EXPLAIN !!HideNetworkInExplorer_Help
 VALUENAME "{F02C1A0D-BE21-4350-88B0-7367FC96EF3C}"
 VALUEON NUMERIC 1
 VALUEOFF NUMERIC 0
 END POLICY

END CATEGORY

END CATEGORY

[strings]
 Custom="Custom Policies"
 ExplorerExtras="Windows Explorer Extra's"
 HideNetworkInExplorer="Hide the Network Icon in Explorer 2008/Vista/Windows 7"
 HideNetworkInExplorer_Help="Enable this to hide the netowrk icon, disable or unconfigure to show the network icon."

As you can see the key we are interested in is “SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\NonEnum”

Checking it out manually on the client machine, it's under HKLM, which I later found directly answered in this TechNet post.

Hive: HKEY_LOCAL_MACHINE
Key Path: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\NonEnum
Value name: {F02C1A0D-BE21-4350-88B0-7367FC96EF3C}
Value type: REG_DWORD
Value Data (hex): 00000001

Doh, requires reboot to work.
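If you'd rather script it than build the GPO, the same value is a few lines of PowerShell run elevated (the GUID being the Network folder's, same as above):

# Hide the Network icon in File Explorer via the NonEnum policy key
$key = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\NonEnum'
if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }
New-ItemProperty -Path $key -Name '{F02C1A0D-BE21-4350-88B0-7367FC96EF3C}' -PropertyType DWord -Value 1 -Force | Out-Null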

*UPDATE* Bonus: remove Quick Access.

Hive: HKEY_LOCAL_MACHINE 
Key Path: SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer
Value name: HubMode
Value type: REG_DWORD 
Value Data (hex): 00000001

No reboot required, just reopen file explorer.