Bitwarden… Don’t do this

What Happened?!

I wanted to write up a quick blog post on something that rather upset me: a change that was very badly communicated and caused people to click things they shouldn’t have, without verification. But because it’s a “web app”, the developers seem to be able to get away with these things.

And here is that issue: Extension disabled due to new permissions · Issue #1548 · bitwarden/browser · GitHub

and Bitwarden permission change warning on brave browser · Issue #1549 · bitwarden/browser · GitHub

Now, I shouldn’t have to explain why this was bad on so many levels, those being that the change (1) was really unneeded, (2) was not optional, and (3) caused users’ extension icon to disappear.

It’s also not that they made it hard; yes, it only required a click and did not require admin permissions, but guess what… this is exactly how getting compromised works. You attempt to educate end users not to do that, and then stuff like this implies there’s nothing wrong with an “accept permissions” prompt out of the blue!

Now I’m going to share some comments I 100% agree with from those issues, from a lad called clecap:

“Bitwarden is a highly sensitive security application managing 100 and more passwords. It is not a good idea to have this application require additional permissions to communicate with other applications. I rather take this as a worrying indication that the development of Bitwarden is turning into a bad and sad and wrong direction.

And, yes, Bitwarden should definitely make this additional request for permissions optional.

Where can I download the old version of the extension? I do not want this extension to operate with more permissions than is necessary for the most fundamental options.”

Now there are a couple of dislikes on that comment, which could be due to the follow-up comment by “github-account1111”:

“@clecap I agree with the premise, but if security is important, then using older versions is counterproductive, as it leads to a potentially less secure environment than with an up-to-date version (even one that has more permissions).”

Now I’ll put my two cents in right here… it’s a mistake to mix features in with security. Feature updates almost never bring additional security; it’s usually the opposite, and in this case it is.

As clecap explains:

“@github-account1111 absolutely yes – provided the updates move into the right direction. Here I have, sorry to say, some serious doubts. While I certainly understand the convenience of all kinds of additional UI features and while I am certainly grateful that they exist they (1) definitely should be optional, (2) trade convenience for security, (3) were not reasonably communicated to end users and (4) came as a “oops, my system has been hacked” surprise to me.

And therefore my trust that updates move into the right direction of more secure software is, here, shaken.

All I want from a password store is to keep my passwords safe – and communicating them to “cooperating programs” by means of some “click ok or have your password store disabled” is the textbook example of what I am not expecting from secure system design. Sorry.”

I again have to 100% agree with him here. Now for the response from the “officials”?

cscharf commented yesterday

Hi All,

We’ve been discussing fervently today internally around this, and while we’ve figured out a way to make this permission optional in chromium based browsers, obviously we won’t be able to do so in Firefox.

After deliberation and discussion, and before our official product release announcement, we’ve decided that it would be better to exclude Firefox from browser biometric authentication, for now, until the upstream issue is resolved: https://bugzilla.mozilla.org/show_bug.cgi?id=1630415 rather than forcing all Firefox Bitwarden users to accept the new permission.

Extension update will be published soon as we’re working on appropriate PRs to make this change, along with supporting documentation.

Thank you for your feedback and continued support, patience and input, it’s extremely valuable and part of what makes open source amazing!

Sincerely,
The Bitwarden Team.

OK? So… because it couldn’t be optional on one platform, the reduced security and bigger attack surface were deemed worth it, and the feature was introduced “without say” to end users. That makes no sense when security, not features, should be first and foremost in this product.

Final Words.

This feels like upper management making a poor judgment call due to peer pressure and stepping outside the company’s mission statement. What a sad day…

 

Repair a Corrupted Windows Boot… Again

The Story

This one begins with a support request that a system was unresponsive. The usual fix of a hard shutdown and reboot was suggested.
They responded that it errored with something else, then said it would go into “attempting repair”, restart, and the cycle would continue.
Once I got a hold of the laptop, I attempted a boot repair using the recovery apps from the Windows 10 boot options. After that failed, I resorted to my old blog post covering a similar problem from years ago, which showed the same symptoms:
bootrec /FixMBR (didn’t work)
bootrec /FixBoot (access denied)
bootrec /ScanOS (Found 0 installed instances)
bootrec /RebuildBCD (Found 0 installed instances)
Quickly Googling the “access denied” from FixBoot brought me to this Microsoft Answers page, where billy reminded me about assigning the boot partition a drive letter, as well as a newer command to run, which worked!
1) diskpart
2) list vol
3) select vol 3 (or 4, whichever is the ~100 MB EFI system partition)
4) assign letter=V
5) exit diskpart, then run: bcdboot C:\windows /s V: /f UEFI
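For clarity, here’s the full session as it would be typed (a sketch; the volume number varies per system, and the Windows install may map to a letter other than C: when viewed from the recovery environment):

diskpart
DISKPART> list vol
DISKPART> select vol 3
DISKPART> assign letter=V
DISKPART> exit
bcdboot C:\windows /s V: /f UEFI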
I was pretty shocked to see Windows boot, and glad this was one system I didn’t have to re-image and manually save files from. 😀

Palo Alto Networks – Email

Story

Well, back to work, so what better than another story of fun times troubleshooting what should have been a super simple task? I was hit with a delayed, greyed-out screen on the management UI and the subsequent error:

“Unable to send email via gateway (email server IP)”

The Hunt

Let’s see if others have hit this problem:

The first one’s a dead end.

The second and third basically state to ensure legitimate email addresses are applied to both the “To” and the additional “To” fields. In my case, I know the single “To” address is fine.

And finally, the how-to by Palo Alto Networks themselves.

Well, that’s annoying; they basically tell you to ensure the email server is accessible, but from other devices, because the PA can’t even do a telnet test… uhh, OK, useless. I know the port is open.

Things to Know

I contacted my buddy who specializes in PA firewalls. There are some things to note:

  1. Service Routing
    By default, all traffic sourced from the firewall itself goes out the MGMT interface unless otherwise specified. In my case, I used a Service Route for Email so it would use the interface acting as the gateway for the subnet where the email server resides (see the CLI sketch after this list).
  2. Intrazone and Interzone Rules
    By default, traffic that doesn’t hit any rule will be dropped; watch the video by Joe Delio for a more in-depth understanding.
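For reference, a service route like the one I used looks roughly like this from the PAN-OS CLI (a hedged sketch from memory; the interface name and source IP are placeholders, so verify the exact syntax against your PAN-OS version):

set deviceconfig system route service email source interface ethernet1/4
set deviceconfig system route service email source address 192.168.1.1
commit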

The Solution

Now, even though I had a “clean up” rule as described by Joe, I was still not seeing the traffic being blocked (and I knew it was being blocked).

Once my buddy told me to override the intrazone rule and enable logging on that rule, I was finally able to see the packets being dropped by the PAN firewall within the Traffic/Session logs.
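If you’re hunting for the same thing, a Traffic log filter along these lines narrows it right down (the address being your firewall’s data-plane interface IP, a placeholder here):

( addr.src in 192.168.1.1 ) and ( port.dst eq 25 )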

Sure enough, it was my own mistake, as I had forgotten to extend an existing rule which should have had the PAN’s gateway IP within it. After I noticed this and extended the rule to allow SMTP port 25 from the PA’s interface IP (not the MGMT IP), I was able to send emails from the PAN firewall.

Hope this helps someone.

Also note that I created a dedicated receive connector on the email server to ensure the email would be allowed to flow through.

Resolving a 503 response from HAProxy

Story

A while ago I blogged about using OPNsense with HAProxy as a reverse proxy for Exchange services. You can serve many other applications, but HTTP(S) has become very commonplace. This has simplified network requirements at layer 4 and pushed most security up to layer 7 (either patch management (updates) or a next-generation firewall (NGFW)). Anyway, sometimes the best form of security is simply blocking access to areas that shouldn’t need to be accessed, especially from public-facing sides. Imagine a dedicated room, such as a server room: you would keep the doors to this area locked, and generally not directly accessible from the outside (a door facing an outside wall). The same concept applies here for services. Of course, you still want users to be able to access the receptionist area. In this analogy, the receptionist area is like the OWA portal, and server room access is like the ECP portal.

Now in my previous post, I did attempt to block public access to the ECP area; you’d have to be on the inside network to reach it. However, much like the comment on that post points out, if you knew about the redirect URL at the application layer (HTTP requests with URL parameters) and manually entered the redirect URL path, you could still get the ECP login page from the public-facing side (whoops).

Now this isn’t the point of this blog post, but it will make a nice follow-up once the actual concept of this post is… presented?

The issue

Anyway, when using HAProxy, one might notice that the default logging is rather sparse (this is by design, to prevent flooding the server’s local storage with, well, logs). Why don’t they simply define limit-based logging and do FIFO (first in, first out) log rotation based on those limits? Not sure. Anyway, the first thing you’ll notice is that you get 503 responses and nothing but “client connections” in the log area:

As you can tell, pretty ****in’ useless. Nothing we didn’t already know: connections on ports 80/443 are allowed and passed to the load balancer. However, the load balancer is still not serving content correctly. Let’s move on.

Troubleshooting

At first I was fairly confident that all my real servers, conditions, and rules were created successfully and that the order was good within the “Public Services” (interface listener).

Googling the generic error provided, well, generic answers, which didn’t help me. If I knew what the HAProxy service was actually doing, I’d stand a much better chance of solving it.

Enable Logging

First, we raise the logging level on the actual service from “Info” to “Debug”.

*Note remember to change it back to info to avoid log flooding*

However, this still didn’t provide me any insight when I went to check the log section.

Turns out there’s a separate logging level for each listener you have. So under your specific “Public Service”, aka interface listener, enable advanced logging:

Once I had this level of logging enabled, I could finally see which backend server was being hit by each request.
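Under the hood, the OPNsense plugin is just writing a haproxy.cfg, so for anyone running HAProxy directly, the equivalent of the above looks roughly like this (a sketch; the frontend name and bind line are placeholders):

global
    log /dev/log local0 debug

frontend public_services
    bind :443
    log global
    option httplog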

Solution

In my case, it turned out requests were hitting a completely different backend than the one the rules within the “Public Service”/listener defined. When I checked the rule for the wrong backend being hit, it turned out this rule was missing the very condition it was supposed to have; it actually had no conditions defined at all. As such, it matched any request passed to it, since it was higher up in the list of rules on the “Public Service”/listener.

I hope that made sense. Anyway, in this case I ensured the rule for that backend server had the actual condition attached that it was supposed to serve. Here it’s all mostly hostname-based, and not even complicated with things like regex or path parameters.

Icing on the Cake

Now, remember my story at the beginning about trying to block ECP and failing because of the redirect? I didn’t like that, so I came up with a condition and rule set that works.

As you can see from this, I created two conditions. The first matches if the path ends with ecp (this might be an issue if any other backend happened to have a path ending in ecp; lucky for me, that’s not the case). That alone could be a problem if managing alternative domains on the same interface, but the second condition is a bit more direct/specific: as you can see from the first image, it looks for any request with a “url” parameter whose value is the redirect to ECP. The rule then specifies the OR operator, so if either condition is met, the request is blocked.
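For anyone on raw HAProxy rather than the OPNsense GUI, the condition/rule pair translates to roughly these ACLs (a sketch; it assumes the redirect target shows up in a “url” query parameter, as it does on the OWA logon page):

acl path_ecp path_end -i /ecp
acl urlparam_ecp urlp(url) -m sub -i /ecp
http-request deny if path_ecp or urlparam_ecp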

Cheers!

Lync/Skype Enable User – Email is Invalid

I’ll make this post really short. The other day I needed to enable some new users within a domain that has trusts: users in one domain, with some services in the trusted domain. The service in question is Exchange, and thus these were linked mailboxes.

First Symptom:

Opening Outlook for the first time and letting the auto-configure wizard run wouldn’t auto-populate the user name and email in the second window of the wizard.

At this point I simply worked around the issue by filling in the name and email address, leaving the password field blank, and clicking Next; the rest of auto-configure worked without a hitch.

Second Symptom:

Lync/Skype control panel, enable user; Email address is invalid.

At this point I sort of had an ‘ah-ha’ moment and decided to check the user’s object in AD (on the source domain with the active accounts, not the disabled accounts in the Exchange domain). Sure enough, their email fields were blank. Normally this would be populated if Exchange were on the same domain, but since these were linked mailboxes with disabled accounts in the trusted domain, this is something Exchange apparently just doesn’t do in this situation.

Solution: populate the email field on the user’s AD object in the source domain.
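If you have a pile of these to fix, something like this one-liner from the ActiveDirectory PowerShell module does the same thing (the user, address, and DC names are hypothetical placeholders):

Set-ADUser -Identity jdoe -EmailAddress "jdoe@example.com" -Server dc01.source.example.com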

This sure enough resolved the first symptom as well 😀

Removing “Network” from File Explorer

SOURCE: Winaero

Update: I wouldn’t recommend this method.

  1. Go to the following Registry key:
    HKEY_CLASSES_ROOT\CLSID\{F02C1A0D-BE21-4350-88B0-7367FC96EF3C}\ShellFolder
  2. Set the value data of the DWORD value Attributes to b0940064. If you are running a 64-bit operating system, repeat the steps above for the following Registry key:
    HKEY_CLASSES_ROOT\Wow6432Node\CLSID\{F02C1A0D-BE21-4350-88B0-7367FC96EF3C}\ShellFolder

The issue with this method is that it requires you to take ownership of the key, usually by running regedit as SYSTEM using PsExec. I thought maybe if I created a GPO to deploy these settings it would work, but instead I got Error Code: 0x80070005, which apparently means access denied.

After farting around a bit down a rabbit hole about HKCR and how it’s apparently derived from HKLM\Software\Classes, I decided to simply ask Google how to remove that icon via a GPO, as much easier techniques usually exist. I found this Spiceworks thread where a user by the name of Adam Sneed provided an ADM file (which, if you are unaware, creates configuration areas within GPMC to manage workstations). As you may also know, GPOs pushed down to client machines are generally nothing more than registry changes. Opening up Adam’s shared ADM file shows the following:

CLASS User

CATEGORY !!Custom

CATEGORY !!ExplorerExtras

 POLICY !!HideNetworkInExplorer
 KEYNAME "SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\NonEnum"
 EXPLAIN !!HideNetworkInExplorer_Help
 VALUENAME "{F02C1A0D-BE21-4350-88B0-7367FC96EF3C}"
 VALUEON NUMERIC 1
 VALUEOFF NUMERIC 0
 END POLICY

END CATEGORY

END CATEGORY

[strings]
 Custom="Custom Policies"
 ExplorerExtras="Windows Explorer Extra's"
 HideNetworkInExplorer="Hide the Network Icon in Explorer 2008/Vista/Windows 7"
 HideNetworkInExplorer_Help="Enable this to hide the netowrk icon, disable or unconfigure to show the network icon."

As you can see the key we are interested in is “SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\NonEnum”

Checking it out manually on the client machine, it lives under HKLM, which I later found directly answered in this TechNet post:

Hive: HKEY_LOCAL_MACHINE
Key Path: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\NonEnum
Value name: {F02C1A0D-BE21-4350-88B0-7367FC96EF3C}
Value type: REG_DWORD
Value Data (hex): 00000001
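If you’d rather test on a single machine before building out the GPO, the same value can be set from an elevated command prompt (this just writes the exact key/value above):

reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\NonEnum" /v "{F02C1A0D-BE21-4350-88B0-7367FC96EF3C}" /t REG_DWORD /d 1 /f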

Doh

3D Printing

Overview

I’ve wanted a 3D printer for a while, since being introduced to one at our city’s hackerspace, Skullspace. Check ’em out; many of the awesome guys I know are right on the homepage. Amazing people.

Anyway, the first couple of printers they had suffered many issues, which had me worried about getting into it. However, it turns out the Ender 3 has amazing reviews and a solid following, with support in some good software we will cover a bit later in this blog.

Buy a 3D printer

So I finally pulled the trigger and bought a 3D printer… since I got an Amazon gift card, I searched for “3D printer” on Amazon, and the main thing that showed up was the Ender 3 as their choice pick.

Set up Printer

It was a really good price, especially with the gift card I had. I got it really quickly too, and on top of that it came really well packaged, with minimal assembly required. I was confused at only a couple of steps during the setup, so I watched this guy’s YouTube video to get it assembled, which was an amazing help. He even provides upgrade parts you can print once you get the printer running, which I’m currently printing as I write this.

However there was one part I was a bit confused about and that was how to level the bed.

Leveling the Bed

As mentioned by Vlad in his setup video, he was surprised to find there was no auto-leveling. This doesn’t surprise me given the price of this amazing machine; beggars can’t be choosers, so let’s level the bed.

For this I watched this great video by 3dprintingcanada on YouTube.

After following these two videos I was ready for printing, and my first couple of prints came out amazing. Of course, this requires other basic knowledge I haven’t covered yet.

How 3D Printing Works

Now, in order to understand what’s going on in the next bit, it’s important to understand how 3D printing works, and that’s basically this: “A 3D printer essentially works by extruding molten plastic through a tiny nozzle that it moves around precisely under computer control. It prints one layer, waits for it to dry, and then prints the next layer on top.”

To do this, you normally model your object (with FreeCAD, Fusion 360, or whatever), and when you export your model it usually comes out as an STL file, a standard object file you can use for CNCing or other things… like “slicing”, which takes the object and, well, “slices” it into layers. The layer height determines the resolution of the final print (that, and the diameter of the extrusion nozzle being used by the 3D printer).

Slicing

Now, normally you have to do some math to calculate all these things and enter the values into the slicer of your choice. Two top ones right now in the FOSS area are Cura and Slic3r. Most of my experience is with Slic3r, and then I read about this gem: PrusaSlicer!

“PrusaSlicer now comes with Ender3 profile”

Sure enough, running it out of the box I could select a profile for my Ender 3 (a profile being a specific set of the variables I mentioned above, preconfigured for specific printers). Since my Ender 3 was stock, without any modifications (to the extruder nozzle mainly), I was good to go.

With the setup done and all parts tested good (as covered in the two YouTube videos linked above), it was time to grab an object and slice it.

The First Print

The first thing I printed was a ghost from Pac-Man.

Then I quickly moved on to upgrades for the Ender 3, starting with the main one: the upgraded blower nozzle.

Final Thoughts

Overall, the first couple of prints, without changing any infill or support settings, were super easy going. I’d say the industry has finally got it down pretty well, and cheap enough that I’d say go for it. This is a great starter printer.

Watch this video for more about how to customize supports in PrusaSlicer.

If you run a 3D printer let me know what you run in the comments, or if you have suggestions.

Thanks for reading!

Manage iPhone4 Music Library in 2020

What a pain… I tried Linux, which could easily see the photos, but then again I could also see those in Windows, so I stuck with Windows. Since it’s an older phone, I figured an older OS would work just fine. I tried an older copy I had of CopyTrans Manager, but it would constantly fail to show the phone in the app when connected.

I wanted to see if iTunes could see it. The latest download says to go to the Microsoft Store (gross), so you have to find an alternative download of 12.5 or so to grab an actual executable installer.

I was getting this weird error installing it about a service not starting, and I read all these posts, here and here, full of advice that was useless: you check the service, it shows up; you do all the things, it shows up fine at boot, but iTunes still complains. And I’d always get this annoying pop-up:

“This iPod cannot be used because the required software is not installed.”

Well, what kind of rubbish lie is this? Everything was installed just fine and for the proper version (x64); like, what the heck gives? Then I stumbled upon this random post from over 11 years ago…

“I got it. I just first uninstalled iTunes then this:
1.Open up the Command Prompt as an Administrator (Go to All Programs > Accessories and Right Click on Command Prompt and then choose Run as administrator)
2.Type cd C:\Windows\SysWOW64
3.Type regsvr32 vbscript.dll (This registers VB Script with your computer.)
4.Now install iTunes as you normally would by double clicking on the install program and wait for iTunes to finish installing.
5.Type regsvr32 /u vbscript.dll (This unregisters VB Script with your computer.)”

I followed the same steps and, lo and behold, I saw my iPhone 4 in iTunes. I was like, whoa… but also… I don’t want this; I wanted simple drag-and-drop with CopyTrans Manager.

If you grab the latest copy of it… it’s now shareware with limited use. Running it, I couldn’t even get a delete context menu (this version even wanted to install drivers, even though iTunes was installed, and wanted to uninstall iTunes in the process… well, don’t let me stop you). The phone now showed up in the app, but I couldn’t delete anything. I managed to find an older copy, I think version 1.2 or something (I’ll have to double-check), which I ran standalone after the most recent version’s install, and I was able to get the add and delete buttons to show up. Then I updated my playlist, finally synced the phone, and it worked!

I’ve now created a backup of this VM for future use as-is.

FreeNAS Volume Down.

Quick note: this is NOT a deep-dive post into troubleshooting a downed volume. In this case I knew the drive had been unavailable since boot, and my goal was to re-add the logical drive after correcting the physical connection issue.

This happened to me due to a hardware issue. A power surge killed my UPS, fully, in that it wouldn’t turn on. So I had to rip it out and rebuild my datacentre, since I’m a poor man without proper servers or server mounts; it’s a ghetto man’s datacentre… anyway. The single USB enclosure housing a 2 TB HDD, which was mounted and shared via SMB on the FreeNAS server, didn’t power on. I opened the case to see if I could find the issue (the PSU was fine, as I was reading 12 V from the standard barrel connector). After I removed the case, I was shocked to find it was powering on… OK, what gives? Put the case back on and nothing; it’s like the power barrel suddenly wasn’t reaching the internal pins. I’m not sure if this was because I swapped it with another 12 V unit within the rack; either way, I found an adapter to fit the same female and male ends, and amazingly it worked, lol. A seemingly useless part that randomly came in handy.

So now back to FreeNAS with the USB drive powered on and connected.

The first thing on the UI was the critical alert of the volume being down. I wasn’t sure how to bring it back online, with commands like lsusb being useless.

I found this FreeNAS forum post from someone with a similar issue, where the logs stated the simplest solution:

Recovery can be attempted by executing ‘zpool import -F vol1′

I SSH’d in and ran that command against the known downed volume, and lo and behold, it appeared to have fixed my mounted USB drive… but my SMB share just wasn’t available…

So, restart the SMB service… nothing… OK, what gives? I don’t remember documenting exactly how I set this up, and it’s an older FreeNAS 11.1-U1… so now I check the server via SSH…

“zpool status” now shows the volume is there. Checking “df -h” shows it’s mounted at /SMB… yet going to Sharing -> Windows Shares and checking the shared volume states it should be /mnt/SMB. It’s not mounted as such, hence why it’s not showing up…
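For reference, here’s roughly what I was looking at from the shell (the pool name “SMB” is assumed from the mountpoint):

zpool status SMB   (pool reports ONLINE after the manual import)
df -h | grep SMB   (mounted at /SMB rather than /mnt/SMB)

In hindsight, I suspect “zpool import -R /mnt -F SMB” would have landed it under /mnt, since the GUI imports pools with an altroot of /mnt; that’s an assumption on my part.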

Now two questions popped into my head: 1) did I misconfigure something, or 2) is the mount process different during boot, such that it mounts the volume under /mnt instead of the root? Not sure what happened here, and also not sure exactly how I should fix it. I want to avoid a reboot, as the server hosts iSCSI-based VMFS volumes for my ESXi hosts… what a pain…

OK… sigh… I could either symlink or mount the volume accordingly at this point, but I’m not sure how that would affect the server at boot…

So after talking to the “experts”, apparently I did something wrong (how classic) due to a mix of my ignorance and… ahem… a system design in which the backend shouldn’t be touched outside the frontend… like lame SharePoint… anyway, for the details see this snippet:

Though I have to give credit where it’s due; it’s nice to get clarification on things that piss me off so much they actually trigger the “fight or flight” response in my brain and I get enraged.

So I took a few minutes to cool down, hoping to resolve this; what should have been, as usual, a rather easy process became a royal pain in the fucking ass, but a “learning” experience nonetheless. I’ve said that shit more than enough times in this stupid field… ughhhh.

OK, now not pissed… I went to Storage -> Volumes via the front end, and even though the volume showed green and healthy after the backend import command, I clicked it and selected “detach” from the bottom. I chose not to destroy my data (the default, good stuff) and not to remove the share configuration (the SMB service was stopped anyway).

Then I clicked Import Volume (no encryption), and lucky for me the volume in question was the only one available in the dropdown list. The wizard successfully imported the volume; sure enough, running “df -h” on the backend showed it mounted at /mnt/SMB, restarting the SMB service worked, and navigating the share also worked.

Yay, well, this sure was a learning experience… don’t mess with the backend too much with FreeNAS (soon to be TrueNAS CORE).

Cheers

 

Windows MPIO to FreeNAS iSCSI Target

Intro

Well, I made some mistakes; the system worked, but wasn’t utilizing its maximum capabilities…

I had been successfully using FreeNAS as an iSCSI target for a disk mounted in Windows Server, but with only one path being used at all times…

Windows Side

Source

I first needed the MPIO feature installed:

  1. Click Manage > Add Roles And Features.
  2. Click Next to get to the Features screen.
  3. Check the box for Multipath I/O (MPIO).
  4. Complete the wizard and wait for the installation to complete.

Noice.

Then we need to configure MPIO to use iSCSI:

  1. Click Start and run MPIO.
  2. Navigate to the Discover Multi-Paths tab.
  3. Check the box to Add Support For iSCSI Devices.
  4. Click OK and reboot the server when prompted.

For me, I didn’t get prompted for a reboot, and reopening MPIO showed the checkbox unchecked; I had to click the Add button, and then I got the prompt to reboot:
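For what it’s worth, the install and claim can also be done in PowerShell (a sketch; the reboot is still required either way):

Install-WindowsFeature -Name Multipath-IO
Enable-MSDSMAutomaticClaim -BusType iSCSI
Restart-Computer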

Now, before I continue getting MPIO working on the source side, I need to fix some mistakes I made on the target side. To ensure I was safe to make the required changes on the target, I first did the following:

  1. Completed any tasks that were using the disk for I/O
  2. Validated there was no I/O for the disk via Resource Monitor
  3. Stopped any services that might use the disk for I/O
  4. Took the disk offline in Disk Management
  5. Disconnected the disk in iSCSI Initiator

We are now safe to make the changes on the target before reconnecting the disk to this server. Now, on to FreeNAS.

FreeNAS Side

Source

Much like the source specified, I had added a second IP to the existing portal… which I apparently shouldn’t have done.

Stop the iSCSI service for changes to be made.

Now delete the secondary IP from the one portal:

Now click add portal to create the secondary portal with the alternative IP.

There we go; now we just have to edit the target:

Now that you have multiple portals/Group IDs configured with different IP addresses, these can be added to the targets.


Once you have a target defined, you can click the Add extra iSCSI Group link to add the multiple Port Group ID backings.


Make sure you have the iSCSI service running. It doesn’t hurt at this point to bounce the service to ensure everything is reading the latest configuration, though with FreeNAS the configuration should take effect immediately.


Now we can go back to Windows to get the final configurations done. 🙂

Back on Windows

Configuring iSCSI

Launch the iSCSI Initiator on the application server and set the iSCSI service to start automatically. Browse to the Discovery tab. Do the following for each iSCSI interface on the storage appliance (a PowerShell sketch follows the list):

  1. Click Discover Portal.
  2. Enter the IP address of the iSCSI appliance.
  3. Click OK.
  4. Repeat the above for each IP address on the iSCSI storage appliance.
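As an alternative to clicking through the GUI, the discovery step looks like this in PowerShell (the IPs are placeholders for the two portal addresses on the FreeNAS box):

New-IscsiTargetPortal -TargetPortalAddress 10.0.0.20
New-IscsiTargetPortal -TargetPortalAddress 10.0.1.20
Get-IscsiTarget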

Browse to Targets. An entry will appear for each available volume/LUN that the server can see on the storage appliance.

Configure Each Volume

For each volume, do the following:

  1. Click Connect to open the Connect To Target dialogue.
  2. Check the box to Enable Multi-Path.
  3. Click Advanced. This allows us to connect the first iSCSI session from the first NIC on the server to the first interface on the iSCSI appliance.
  4. In the Advanced Settings box, select Microsoft iSCSI Initiator in Local Adapter, the first NIC of the server in Initiator IP, and the first NIC of the storage appliance in Target Portal IP.
  5. Click OK to close Advanced Settings.
  6. Click OK to close Connect To Target.

The volume is now connected. However, we only have one session, between the first NIC of the server and the first NIC of the storage appliance; we do not yet have a fault-tolerant connection (see the PowerShell sketch after these steps):

  1. Click Properties in the Targets dialogue to edit the properties of the volume connection.
  2. Click Add Session.
  3. Check the box to Enable Multi-Path.
  4. Click Advanced.
  5. Select Microsoft iSCSI Initiator in Local Adapter. Select the second iSCSI NIC of the server in Initiator IP and the second NIC of the storage appliance in Target Portal IP.
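The PowerShell equivalent of both sessions would be roughly this (a sketch; the IPs are placeholders, the first pair being server NIC 1 to portal 1 and the second pair server NIC 2 to portal 2):

$iqn = (Get-IscsiTarget).NodeAddress
Connect-IscsiTarget -NodeAddress $iqn -IsMultipathEnabled $true -IsPersistent $true -InitiatorPortalAddress 10.0.0.10 -TargetPortalAddress 10.0.0.20
Connect-IscsiTarget -NodeAddress $iqn -IsMultipathEnabled $true -IsPersistent $true -InitiatorPortalAddress 10.0.1.10 -TargetPortalAddress 10.0.1.20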

Click OK a bunch of times.

If you open Disk Management, your new volume(s) should appear. You can right-click a disk or volume that you connected, select properties, and browse to MPIO. From there, you should see the paths and the MPIO customizable policies that are being used by this disk.

I left the load-balancing algorithm at Round Robin (see the one-liner after the list), as noted from here:

MCS

Fail Over Only – This policy utilizes one path as the active path and designates all other paths as standby. Upon failure of the active path, the standby paths are enumerated in a round-robin fashion until a suitable path is found.
Round Robin – This policy will attempt to balance incoming requests evenly against all paths.
Round Robin With Subset – This policy applies the round-robin technique to the designated active paths. Upon failure, standby paths are enumerated round-robin style until a suitable path is found.
Least Queue Depth – This policy determines the load on each path and attempts to redirect I/O to paths that are lighter in load.
Weighted Paths – This policy allows the user to specify the path order by using weights. The larger the number assigned to the path, the lower the priority.

MPIO

As above, plus:

Least Blocks – This policy sends requests to the path with the least number of pending I/O blocks.
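And if you want Round Robin as the global default for everything MSDSM claims, rather than setting it per disk, there’s a one-liner for that too:

Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR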

Now did it actually work?

Seems like it… performance is still not as good as I expected. Must keep optimizing!

Hope this helps someone…