Getting A’s for my Site

Secure HTTP(S)

Yes, securing the secure…

First up, SSL (the certificates serving this website): SSL Server Test

Old Score: B

Since I use the HAProxy plugin for OPNsense: OPNsense HTTP admin page -> Services (left-hand nav) -> HAProxy -> Settings -> Global Parameters -> SSL Default Settings (enable) -> Min Version TLS1.2 -> Cipher List

Old list:  ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256

Remove the last entry: ECDHE-RSA-AES128-SHA256.
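
For copy-paste convenience, the resulting list (the old one with that single entry dropped):

ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256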

Apply. The new scan now gives an A. Yay.

What about your short (security) game?
What are you talking about… oh… HTTP headers… oh man, here we go…

HTTP Header Security

Site to test: https://securityheaders.com/

Results:

Phhh, get real… an F… c'mon. So this took me WAY longer than I'd like to admit, and I went down several rabbit holes before finally coming to the answer. We'll cover them one at a time:

Referrer Policy

What is it?

Well, we've got two sources… W3C and Mozilla, which is a bit more readable. In short:

“The Referrer-Policy HTTP header controls how much referrer information (sent with the Referer header) should be included with requests. Aside from the HTTP header, you can set this policy in HTML.”

Bunch of tracking rubbish it seems like.

What types can be configured?

Referrer-Policy: no-referrer
Referrer-Policy: no-referrer-when-downgrade
Referrer-Policy: origin
Referrer-Policy: origin-when-cross-origin
Referrer-Policy: same-origin
Referrer-Policy: strict-origin
Referrer-Policy: strict-origin-when-cross-origin
Referrer-Policy: unsafe-url

Which is the safest?

(As in, the most browser-compatible?) From checking the Mozilla site, it seems like “strict-origin-when-cross-origin”, but that one still seemed to give you an F grade on the security scan. I'm assuming “no-referrer” is the most secure (our goal).

What’s the impact?

Compatibility: the site has to keep working across different web browsers, and old browsers that don't understand the newer policy values may not honor them.

When a user leaves your website from a link that points elsewhere, it may be useful for the destination server to know where the user came from (your website). It might also be more appropriate that you don't tell them any information about your website. The Referer header that is sent is typically a string that includes the URL of the page where the user clicked the link to the destination. There are multiple ways to configure if and what information is sent, but keep in mind that referrers may be necessary to properly configure web advertisements, analytics, and some authentication platforms. You can also ensure that an HTTPS URL is not leaked into HTTP headers (and consequently leaking website path information unencrypted across the internet).

How do you configure it?

This is a bit of a loaded question, and it took me a while to figure out. If you are behind a load balancer, do you configure it on the load balancer, or on the backend server that actually serves the web content? (Turns out you configure it on the backend server.)

In my case, HAProxy is the service that serves the website externally, while the backend server that hosts the web content is actually Apache (subject to change, but at the time of this writing that's the hosting backend). It's actually a TurnKey Linux appliance, maintained manually (OS patches and updates).

Now, I found many guides online stating how to apply it; however, some failed to mention the prerequisites. Whether you use apache2 or httpd depends on the Linux distro; in our case it's apache2. What is that dependency, you might be asking… well, it's the headers module, which you can verify is available by checking: /etc/apache2/mods-available/headers.load

To enable it:

a2enmod headers

Failure to do so will cause the service to not start when calling the service restart command; and if you decided to use the .htaccess method instead, the service will successfully restart, but when you try to navigate to the website you'll be greeted with an Internal Server Error page.
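
After enabling it and restarting, you can sanity-check that the module actually loaded (run on the backend web server):

service apache2 restart
apache2ctl -M | grep headers

If it's enabled, that last command should print headers_module.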

After that I finally added this to my Apache config file (which file that is, is another loaded question, as it's referenced as so many different things on the internet; in my case it was /etc/apache2/sites-enabled/wordpress.conf):

Header always set Referrer-Policy: "no-referrer"

I also added no-referrer-when-downgrade as a root option, while no-referrer was set under both VirtualHosts (the original snip showed just the port 80 host being configured); see the sketch below.
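
To illustrate the layout (a sketch, not my exact config; the ServerName and the commented lines are placeholders):

Header always set Referrer-Policy: "no-referrer-when-downgrade"

<VirtualHost *:80>
    ServerName zewwy.ca
    # ...rest of the vhost config...
    Header always set Referrer-Policy: "no-referrer"
</VirtualHost>

<VirtualHost *:443>
    ServerName zewwy.ca
    # ...rest of the vhost config (SSL directives, etc.)...
    Header always set Referrer-Policy: "no-referrer"
</VirtualHost>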

Though for some reason I can't explain, the root never obeys the policy change defined there:

Just ignore it… we got it completed on the scan 🙂 (Double-checking: if you ignore “General” and look down at “Response Headers”, you can actually see it take effect there.)

Finally one down.. and a D… a solid D if you know what I mean…

Content Security Policy

What is it?

Content-Security-Policy is a security header that can (and should) be included on communication from your website’s server to a client. When a user goes to your website, headers are used for the client and server to exchange information about the browsing session. This is typically all done in the background unbeknownst to the user. Some of those headers can change the user experience, and some, such as the Content-Security-Policy affect how the web-browser will handle loading certain resources (like CSS files, javascript, images, etc) on the web page.

Content-Security-Policy tells the web browser which resource locations are trusted by the web server and are okay to load. If a resource from an untrusted location is added to the webpage by a MiTM or in dynamic code, the browser will know that the resource isn't trusted and will refuse to process it.

Pretty much: you know what your site uses for external dependencies, and you strictly allow only what you know you should be serving. If someone tries to mimic your website and do drive-by downloads from an alternative domain, this should block it.

For more details: Content-Security-Policy (CSP) Header Quick Reference

How to enable?

Again, subjective, but in our case I just added this to our apache/httpd conf file:

Header always set Content-Security-Policy: "default-src 'self' zewwy.ca"
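
You can confirm the header is actually being served with a quick check from any machine:

curl -sI https://zewwy.ca | grep -i content-security-policy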

What’s the Impact?

This one is actually a bigger PITA than I originally thought; keep reading to see.

The only thing I noticed failing was calls to stripe.com. Who are they… mhmmm… the official source states:

“Stripe.js (and its iOS and Android SDK counterparts) is a JavaScript library that businesses use to integrate Stripe and accept online payments. Once Stripe.js is added to a site or mobile app, fraud signals are used to differentiate legitimate behavior from fraudulent behavior.”

In preparation for this I did see a callout to Facebook, and I know I use a plugin, “Ultimate Social Media PLUS”, that manages the social links and buttons on my site (I guess those might have to be added; I'm not sure how plain hyperlinks behave in this regard). Anyway, I simply disabled the button for that social link and it was gone from my home page. Here's a snip of what it looked like:

K, I was about to get into another rabbit hole to test and validate an assumption, as the only plugin I have that would connect to something like that is my donations button/plugin. However, while attempting to manage it I kept getting a pop-up about disabling a plugin on my WordPress admin page:

(Funny, cause as I was testing this externally, I also forgot about my image hosting provider Imgur, so my pictures weren't rendering. Will have to add them too.)

I temporarily reverted the config, as it was causing havoc on my website.

Now let's try to deal with all the things:

  1. Stripe… so after reading this guy's blog post, it seems to be true. Pretty invasive for something I'm not using. I'm using a PayPal donate button, but not Stripe. There's an option in the plugin I'm using, but turning it off still shows the JS call being made on my homepage with nothing on it. Only by deactivating the plugin entirely does the call go away. So be it, I guess, even though I just fixed my donations button…
    Meh, I took the button link and saved it on my homepage for now.
  2. When navigating to my blog directory, I noticed one network connection I was unaware of: “s.w.org”

Not knowing what this was, I started looking in the HTML for where it might be:

What is that? 9 years ago, and this is still the case?!?

Following a more modern guide, I simply added the following to my theme's functions.php file:

/**
 * Disable the emojis.
 */
function disable_emojis() {
    remove_action( 'wp_head', 'print_emoji_detection_script', 7 );
    remove_action( 'admin_print_scripts', 'print_emoji_detection_script' );
    remove_action( 'wp_print_styles', 'print_emoji_styles' );
    remove_action( 'admin_print_styles', 'print_emoji_styles' );
    remove_filter( 'the_content_feed', 'wp_staticize_emoji' );
    remove_filter( 'comment_text_rss', 'wp_staticize_emoji' );
    remove_filter( 'wp_mail', 'wp_staticize_emoji_for_email' );
    add_filter( 'tiny_mce_plugins', 'disable_emojis_tinymce' );
    add_filter( 'wp_resource_hints', 'disable_emojis_remove_dns_prefetch', 10, 2 );
}
add_action( 'init', 'disable_emojis' );

/**
 * Filter function used to remove the tinymce emoji plugin.
 *
 * @param array $plugins
 * @return array Difference between the two arrays.
 */
function disable_emojis_tinymce( $plugins ) {
    if ( is_array( $plugins ) ) {
        return array_diff( $plugins, array( 'wpemoji' ) );
    } else {
        return array();
    }
}

/**
 * Remove emoji CDN hostname from DNS prefetching hints.
 *
 * @param array $urls URLs to print for resource hints.
 * @param string $relation_type The relation type the URLs are printed for.
 * @return array Difference between the two arrays.
 */
function disable_emojis_remove_dns_prefetch( $urls, $relation_type ) {
    if ( 'dns-prefetch' == $relation_type ) {
        /** This filter is documented in wp-includes/formatting.php */
        $emoji_svg_url = apply_filters( 'emoji_svg_url', 'https://s.w.org/images/core/emoji/2/svg/' );

        $urls = array_diff( $urls, array( $emoji_svg_url ) );
    }

    return $urls;
}

Restarted Apache and all was good.

3. Imgur: legit, it hosts all the pictures on my site, so let's add it to the policy.

Adding just imgur.com didn't work, but since I see them all coming from i.imgur.com, I added that, and it seems to be working now. What I can't understand is how this policy is making the icons from this one plugin change size…:


Normal


With Policy enabled. Besides that, after all that freaking work, do I get a reward?!

Yes, but I had to turn it off again because of the plugin deactivation problem.

I've seen that unsafe-inline in a reference somewhere… but I didn't want to use it, as it felt like it made the Content Security Policy useless. This thread implies the same thing.

What I don't know is which plugin is having an issue. I did notice I forgot to include Gravatar for the user icons; I don't think that would be the one, though.

I fixed the images by defining a separate img-src section of the policy. Then, to resolve the icon size and the plugin pop-up alert, I also added a style-src. So now it looks like this:

Header always set Content-Security-Policy: "default-src 'self' zewwy.ca; img-src 'self' *.imgur.com secure.gravatar.com; script-src 'self'; style-src 'self' 'unsafe-inline'"

It still breaks my Classic WordPress editor though, and the charts on the dashboard don't work, but I guess I can enable it when I'm not managing my server or writing these blog posts, as a temporary workaround until I can figure out how to properly define the CSP.

Strict Transport Security

What is it?

A PITA is what it is. Have you ever SSH'd into a server and then had the server's keys change? When you next go to SSH in, it fails cause the fingerprint of the server changed, so you have to go and delete the old fingerprint in your .ssh path. This is that, but for web browsers/websites.

Back a decade you were vulnerable to MitM downgrade attacks, but with browsers defaulting to trying HTTPS first, this is not so much the case anymore. Possibly it still is, but from my understanding, HSTS only protects you if:

  1. You connect to the legit server the first time (trust on first use).
  2. Your browser keeps the cached policy, which says the site must only ever be loaded over HTTPS, in case that changes.

How to enable it?

Pretty much like my first example. But alright, let's configure it anyway (max-age=31536000 seconds is one year; includeSubDomains applies the policy to all subdomains):

Header always set Strict-Transport-Security: "max-age=31536000; includeSubDomains"

Rescan… got it:

Still a D, cause the content policy was turned off… we'll get there.

What is the impact?

Someone accesses the site for the first time, and the browser caches the HSTS policy, remembering that this site must only ever be loaded over HTTPS. Then, if a man-in-the-middle presents a certificate that doesn't check out, the user will get an error message and the website simply will not load; there won't be any button on the error page to load it anyway.

However, this can also bite when the visited site's certificate legitimately has problems, for reasons like the old one expiring before renewal or a botched change of certificate providers.

In this case you have to clear the browser's cache (including its HSTS state), or use an incognito window, which won't have the cached policy stored and will simply connect to the website.

Permissions Policy

What is it?

According to Mozilla:

“Permissions Policy provides mechanisms for web developers to explicitly declare what functionality can and cannot be used on a website. You define a set of “policies” that restrict what APIs the site’s code can access or modify the browser’s default behavior for certain features. This allows you to enforce best practices, even as the codebase evolves — as well as more safely compose third-party content.”

What’s the Impact?

I don't know. If your apps use geolocation or require access to the camera or microphone, it might affect that.

I just want a checkbox while having my site still work… so…

How to enable?

Well, I saw nothing in particular about iframes, so let's just block geolocation and see what happens:

Header always set Permissions-Policy: "geolocation=()"

alright… C baby!
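
If you later want to lock down more features, the value is just a comma-separated list (a sketch; only deny features your site genuinely never uses):

Header always set Permissions-Policy: "geolocation=(), camera=(), microphone=()"

The empty parentheses are an empty allowlist: no origin, not even the site itself, may use that feature.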

X-Frame-Options

What is it?

The X-Frame-Options header (RFC), or XFO header, protects your visitors against clickjacking attacks. An attacker can load up an iframe on their site and set your site as the source, it’s quite easy:

<iframe src="https://zewwy.ca"></iframe>

Using some crafty CSS they can hide your site in the background and create some genuine looking overlays. When your visitors click on what they think is a harmless link, they’re actually clicking on links on your website in the background. That might not seem so bad until we realize that the browser will execute those requests in the context of the user, which could include them being logged in and authenticated to your site! Troy Hunt has a great blog on Clickjack attack – the hidden threat right in front of you. Valid values include DENY meaning your site can’t be framed, SAMEORIGIN which allows you to frame your own site or ALLOW-FROM https://example.com/ which lets you specify sites that are permitted to frame your own site.

I get it: using HTML trickery to hide the actual link behind something else. In Troy's case, the assumption is that people don't log out of their banking website and still have active session cookies. Blah blah blah…

How do you enable it?

For all options go to Hardening your HTTP response headers (scotthelme.co.uk)

Since I’m using Apache:

Header always set X-Frame-Options "DENY"

What’s the Impact?

Since I don't have user logins, don't self-reference my site using frames, and have no plans for any collaboration in which I would allow someone to frame my site, DENY is perfectly fine, and there's been zero impact on my site, other than my now-B grade. 😀

X-Content-Type-Options

What is it?

Nice and easy to configure, this header only has one valid value, nosniff. It prevents Google Chrome and Internet Explorer from trying to mime-sniff the content-type of a response away from the one being declared by the server. It reduces exposure to drive-by downloads and the risks of user uploaded content that, with clever naming, could be treated as a different content-type, like an executable.

*Smiles n nods*  Yup, mhmmm, whatever you say Scotty.

How do you enable it?

Again I’m using Apache so:

Header always set X-Content-Type-Options "nosniff"

What’s the Impact?

Nothing I can tell so far. And even with the heaviest hitter disabled (due to its pain-in-the-ass impact), which I still have yet to get tuned properly… I finally got that dang A… Wooooooo!
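
For reference, here are all the headers from this post in one spot (exactly as configured above; the CSP line is the one I still toggle):

Header always set Referrer-Policy: "no-referrer"
Header always set Content-Security-Policy: "default-src 'self' zewwy.ca; img-src 'self' *.imgur.com secure.gravatar.com; script-src 'self'; style-src 'self' 'unsafe-inline'"
Header always set Strict-Transport-Security: "max-age=31536000; includeSubDomains"
Header always set Permissions-Policy: "geolocation=()"
Header always set X-Frame-Options "DENY"
Header always set X-Content-Type-Options "nosniff"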

Summary

I'll enable the CSP header when I'm not working on my site, a.k.a. writing these blog posts, and then hopefully tune it so I can leave it on all the time. For now, I'll re-enable it once this post is done and I'll check my score.

*Update* OK, I temporarily disabled the Content Security Policy again cause it was breaking my floating Table of Contents, and while I do love keeping a site as simple as humanly possible, I do like having some cool features. However, I got this nice snip of an A+ before I turned it back off 😉

Hope all this helps someone.

ACME HTTP Validation with HTTPS Redirection

I had got this to work with a requirement for an external A host record, redirects, and negate rules. It was quite complex and, in the end, it did work. I was excited and got ready to write this long post; then I realized I had somehow missed the obvious. I found a post on the forums from someone having the exact same issue, and what amazed me the most was how simple their solution was.

So, I tested it…

The HTTP to HTTPS redirect condition:

and this will take any HTTP request and convert it into HTTPS. If you configured HTTP validation, though, this will be a problem when the request from ACME comes in to hit the backend created by the ACME plugin.

As stated by the guy, he simply made a clone of the condition, and made it a negate.

then apply it to the redirect rule…

then apply this to the HTTP listener…

Test a cert renewal… it worked
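
For anyone running plain HAProxy instead of the OPNsense GUI, the equivalent logic would look something like this (a sketch, assuming the standard ACME HTTP-01 challenge path; the acl and backend names are made up):

frontend http_in
    bind :80
    # don't redirect the Let's Encrypt HTTP-01 validation requests
    acl is_acme path_beg /.well-known/acme-challenge/
    http-request redirect scheme https code 301 unless is_acme
    use_backend acme_backend if is_acme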

That was way simpler than what I'd thought up. lol

Hope this helps someone.

HTTP to HTTPS redirect Sub-CA Core

The Story

One day I noticed I had configured my 2008 R2 CA server to automatically redirect to the certsrv site over HTTPS, even when navigating to the root site via HTTP. There was, however, no URL Rewrite module… and I didn't blog about it, so I had to figure out… how did I do it?! Why? Cause this…

Sucks, and why would you issue certificates over unsecure HTTP? (Yeah yeah, locked-down networks, it doesn't matter much, but still: if it's easy enough to secure, why not.)

The First Problem

The first problem should be pretty evident from the title alone… yeah, it's Core, which means no desktop, no GUI tools, not much of anything on the server itself. So we will have to manage the IIS settings remotely.

SubCA:

Nice, and…

Windows 10:

as well, install IIS RM 1.2 (Google Drive share). Why… see here

and finally connect to the sub-CAs IIS…

and

Expand Sites, and highlight the default site…

Default Settings

By default you can notice a few things; first, there's no binding for port 443, the standard port HTTPS uses.

Now, you could simply select the same computer certificate that was issued to the Sub-CA machine itself… and this will work…

however, navigating to the site gave cert warnings, as I was accessing the site by a hostname different from the common name, and without any SANs specified for it you get certificate errors/warnings. Not a great choice, so let's create a new certificate for IIS.

Alright, no worries, I blogged about this as well.

On the Windows 10 client machine, open MMC…

Certificates Snap in -> Comp -> SubCA

-> Personal -> Certificates -> Right Click open area -> All Tasks -> Advanced Operations -> Create Custom Request….

Next, Pick AD enrollment, Next, Template: Web Server; PKCS #10, Next,

Click Details, then Properties, populate the CN and SANS, Next

Save the request file, Open the CA Snap-in…. sign the cert…

provide the request file, and save the certificate…

import it back to the CA via the remote MMC cert snap-in…

Now back on IIS… let’s change the cert on the binding…

Mhmmmm, not showing up in the list… let's re-open IIS Manager… nope, cause…

I don’t have the key.

The Second Problem

I see, so even though I created the CSR on the server remotely… it doesn't have the key after importing. I didn't have this issue in my initial testing at work, so I'm not exactly sure what happened here, considering I followed all the steps exactly as I did before… so, OK, weird. I thought this might be an LTSB bug (nope, tested on a 1903 client VM) or something; it's the only difference I can think of at this moment.

In my initial tests of this, the SubCA did have the key with the cert, but attempting to bind it in IIS would always error out with an interesting error.

Which I'll now have to get a snippet of, as my home lab provided different results… which kind of annoys the shit out of me right now. So either you get the key with the “first method” but it won't work (you get the error mentioned above), or you simply don't get the key with the request and import, and it never shows in the IIS bindings drop-down list.

Anyway, I only managed to resolve it by following the second method: creating a cert on IIS Core directly.

Enabling RDP on Core

Now, I'm lazy and didn't want to type out the whole INF file, and my first attempts to RDP in failed cause, of course, you have to configure that first. I know how on the desktop version, but luckily MS finally documented this for Core…

so on the console of the SubCA:

cscript C:\Windows\System32\Scregedit.wsf /ar 0
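
Depending on the build, you may also need to allow RDP through the firewall; the built-in rule group covers it (shown here in case the connection still fails):

netsh advfirewall firewall set rule group="remote desktop" new enable=yes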

open Notepad and create the CSR on the SubCA directly…

save it, and convert it, and submit it!

Save!!!! the Cert!

Accept! The Cert!
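
The command-line flow behind those exclamation marks is certreq (a sketch; request.inf is the file authored in Notepad, and WebServer is assumed to be the published template name):

certreq -new request.inf request.req
certreq -submit -attrib "CertificateTemplate:WebServer" request.req subca.cer
certreq -accept subca.cer

-new builds the CSR (and private key) from the INF, -submit sends it to the CA and saves the issued cert, and -accept installs the cert and marries it to the key, which is why the key finally shows up this time.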

Now in cert snap-in you can see the system has the key:

and it should now be selectable in IIS, and not give an error like the one shown above.

But first, the default error pages section:

and add the new port binding:

Now we should be able to access the certsrv page securely… or, you know, the welcome splash…

Now for the magic. I took the idea from this guy's response on the site I sourced in my original HTTP to HTTPS redirect post:

“Make sure that under SSL Settings, Require SSL is not checked. Otherwise it will complain with 403.4 Forbidden.”

So…

Creating a custom Error Page

which gives this:

and finally, enable require SSL:

Now if you navigate to http://subca you get https://subca/certsrv

No URL rewrite module required:

Press enter.. and TADA:
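
Under the hood, what the GUI writes amounts to an httpErrors entry like this, inside <system.webServer> in the site's web.config (a sketch; the hostname is the one from this post, adjust to yours):

<httpErrors>
    <remove statusCode="403" subStatusCode="4" />
    <error statusCode="403" subStatusCode="4" path="https://subca/certsrv" responseMode="Redirect" />
</httpErrors>

With Require SSL on, any plain-HTTP request trips a 403.4, and the custom error response for that substatus is a redirect to the HTTPS URL.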

Summary

There are always multiple ways to accomplish something. I like this method cause I didn't have to install an additional module on my SubCA server, and it always enforces a secure connection when using the web portal to issue certificates. I also found no impact on any regular MMC requests. All good all around.

I hope someone enjoys this post! Cheers!

*UPDATE 2023* This trick caused my SubCA's CA service to not start, stating it failed to retrieve the CRL. This was because any attempt to retrieve the CRL over regular HTTP would fail, as those requests would redirect back to the certsrv site, while requests to the same CRL via HTTPS would work. So only implement this change if you have already edited your Offline and SubCA certificates to have CRLs pointing to HTTPS-based URL references.

Secure a WordPress Site with HTTPS

Intro

Well, it is slowly becoming a requirement, even for a site that simply shares content and has no portal or user information… such as my site… but we may as well do it now, since we can get CA-signed certificates for free!!! Wooohoooo!

So doing a bit of research….

Research

Securing WordPress

TurnKey SSL Certs

Let’s Encrypt!

Cert-Bot

TurnKey WordPress uses Debian… what version?

The Tasks

Alright, so we are running Debian 8; let's follow that CertBot tutorial…

Let's start by creating a snapshot; at this point I don't exactly have backups running yet… I know, I know… I was supposed to do Free Hypervisor Backup Part 3, where I redesign ghettoVCB's script… unfortunately I can only do so much, and I have many projects on the go. I will get to it though, I promise!


Now with that out of the way, running CertBot…
Then I ran into some errors… oopsies…


What happened?!?!

Well, I was working through a lot of network redesign, and my public website (the very WordPress I was trying to get a certificate for) had a NAT rule to get out to the internet, which is why grabbing and running CertBot succeeded up until this point. I hadn't created the NAT rule to allow inbound HTTP traffic just yet, as I wanted to create this certificate first. Little did I know it was going to be a prerequisite. Anyway, I had to update my website's DNS record to point to my new public IP, as well as create NAT and security rules to allow my website to be accessible from the outside world…

I had to wait a while for DNS to propagate to the outside servers, specifically whichever ones Let's Encrypt's servers use to locate and validate my requests from CertBot. So… after making the changes and waiting a while, I attempted to access my website from the internet again. It was failing, and then I realized my mistake was in the security rule I defined. After correcting my security rule, I could access my website.
Running CertBot again…

Yay, and it listed all the virtual hosts hosted by my TurnKey WordPress…

Then I created another NAT rule to handle HTTPS traffic… and then the security rule…

That was literally it! CertBot made it so easy! Yay that’s a first! 😀
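
For anyone retracing this on a similar Debian 8 TurnKey box, the whole dance boils down to roughly this (a sketch; package names per the jessie-backports instructions of the era, and CertBot prompts for which virtual hosts to secure):

apt-get install -t jessie-backports certbot python-certbot-apache
certbot --apache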

IIS redirect HTTP to HTTPS

The Requirement

Recently I posted about creating and using a certificate on an IIS website. I did this for my dev to deploy his new web app, which required use of a device's camera for QR-scanning purposes; well, it turned out the camera could not be used unless the app was secured with TLS (a certificate). So we created a cert and added a new secure binding.

However, it soon became pretty apparent that every time I pressed the down arrow key in my browser's URL bar it would suggest the regular HTTP URL (at first we had removed the port 80 binding, which caused a site-unavailable issue). So, after re-adding the standard port 80 binding for regular HTTP, I decided to use a rewrite rule to handle the redirection.

It wasn't as intuitive as I thought it would be, so a little Google search and I was on my way…

As the source states: after you have your certificate and a port binding for HTTPS, as well as one for regular HTTP…

In order to force a secure connection on your website, it is necessary to set up a certain HTTP/HTTPS redirection rule. This way, anyone who enters your site using a link like “yourdomain.com” will be redirected to “https://yourdomain.com” or “https://www.yourdomain.com” (depending on your choice) making the traffic encrypted between the server and the client side.

Below you can find the steps for setting up the required redirect:

The Source

    1. Download and install the “URL Rewrite” module.
    2. Open the “IIS Manager” console and select the website you would like to apply the redirection to in the left-side menu.
    3. Double-click on the “URL Rewrite” icon.
    4. Click “Add Rule(s)” in the right-side menu.
    5. Select “Blank Rule” in the “Inbound” section, then press “OK”.
    6. Enter any rule name you wish.
    7. In the “Match URL” section:
      – Select “Matches the Pattern” in the “Requested URL” drop-down menu
      – Select “Regular Expressions” in the “Using” drop-down menu
      – Enter the following pattern in the “Match URL” section: “(.*)”
      – Check the “Ignore case” box

    8. In the “Conditions” section, select “Match all” under the “Logical Grouping” drop-down menu and press “Add”.
    9. In the prompted window:
      – Enter “{HTTPS}” as a condition input
      – Select “Matches the Pattern” from the drop-down menu
      – Enter “^OFF$” as a pattern
      – Press “OK”

    10. In the “Action” section, select “Redirect” as the action type and specify the following for “Redirect URL”: https://{HTTP_HOST}/{R:1}
    11. Check the “Append query string” box.
    12. Select the Redirection Type of your choice (see “The Note” below).

The Note

NOTE: There are 4 redirect types of the redirect rule that can be selected in that menu:
– Permanent (301) – preferable type in this case, which tells clients that the content of the site is permanently moved to the HTTPS version. Good for SEO, as it brings all the traffic to your HTTPS website making a positive effect on its ranking in search engines.
– Found (302) – should be used only if you moved the content of certain pages to a new place *temporarily*. This way the SEO traffic goes in favour of the previous content’s location. This option is generally not recommended for a HTTP/HTTPS redirect.
– See Other (303) – specific redirect type for GET requests. Not recommended for HTTP/HTTPS.
– Temporary (307) – HTTP/1.1 successor of 302 redirect type. Not recommended for HTTP/HTTPS.

  13. Click on “Apply” on the right side of the “Actions” menu.

The redirect can be checked by accessing your site with http:// specified in the URL. To make sure that your browser displays a fresh rather than cached version of your site, you can use the browser's anonymous (incognito) mode.
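
A quick non-browser check works too (substitute your own domain; a 301 with a Location: header pointing at the https:// URL means the rule is live):

curl -I http://yourdomain.com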

The Helping Hand

The rule is created in IIS, but the site is still not redirected to https://

Normally, the redirection rule gets written into the web.config file located in the document root directory of your website. If the redirection does not work for some reason, make sure that web.config exists and check if it contains the appropriate rule.

To do this, follow these steps:

  1. In the sites list of IIS, right-click on your site. Choose the “Explore” option.
  2. “Explore” will open the document root directory of the site. Check if the web.config file is there.
  3. The web.config file must have the following code block:

    <configuration>
      <system.webServer>
        <rewrite>
          <rules>
            <rule name="HTTPS force" enabled="true" stopProcessing="true">
              <match url="(.*)" />
              <conditions>
                <add input="{HTTPS}" pattern="^OFF$" />
              </conditions>
              <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Permanent" />
            </rule>
          </rules>
        </rewrite>
      </system.webServer>
    </configuration>
  4. If the web.config file is missing, you can create a new .txt file, put the aforementioned code there, save, and then rename the file to web.config.

This one, much like my Offline Root CA post, is a direct copy of the source. For the same reason, I felt I had no need to re-word it, as it was very well written.

Thanks, Namecheap, although it would have been nice to credit the actual author who took the time to do the beautiful write-up with snippets. 🙂