{"id":1551,"date":"2024-05-20T10:23:34","date_gmt":"2024-05-20T15:23:34","guid":{"rendered":"https:\/\/zewwy.ca\/?p=1551"},"modified":"2025-06-06T14:18:02","modified_gmt":"2025-06-06T19:18:02","slug":"configuring-shared-lvm-over-iscsi-on-proxmox","status":"publish","type":"post","link":"https:\/\/zewwy.ca\/index.php\/2024\/05\/20\/configuring-shared-lvm-over-iscsi-on-proxmox\/","title":{"rendered":"Configuring shared LVM over iSCSI on Proxmox"},"content":{"rendered":"<p>So, I&#8217;ve been recently playing with Proxmox for virtualization. It&#8217;s pretty nice, but in my cluster (which consisted of two old laptops) whenever I migrated VMs or containers, the storage had to be migrated over the network as well. Since they are just old laptops, everything connects at 1 Gbps to switches with matching ports.<\/p>\n<p>I&#8217;m used to iSCSI, so I checked the <a href=\"https:\/\/pve.proxmox.com\/wiki\/Storage\">Proxmox storage guidance<\/a> to see what I could use.<\/p>\n<p><a href=\"https:\/\/i.imgur.com\/EkcZN2Q.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter\" src=\"https:\/\/i.imgur.com\/EkcZN2Q.png\" alt=\"\" width=\"2336\" height=\"763\" \/><\/a><\/p>\n<p>I was interested in <a href=\"https:\/\/pve.proxmox.com\/wiki\/Storage:_ZFS_over_ISCSI\">ZFS over iSCSI<\/a>. 
However, I temporarily gave up on this because, for some reason, you have to allow root access to the FreeNAS box over SSH, on the same network the iSCSI traffic uses&#8230;<\/p>\n<p>&#8220;<a href=\"https:\/\/forum.proxmox.com\/threads\/guide-setup-zfs-over-iscsi-with-pve-5x-and-freenas-11.54611\/\">First of all we need to setup SSH keys to the freenas box, the SSH connection needs to be on the same subnet as the iSCSI Portal, so if you are like me and have a separate VLAN and subnet for iSCSI the SSH connection needs to be established to the iSCSI Portal IP and not to the LAN\/Management IP on the FreeNAS box.<\/a><br \/>\nThe SSH connection is only used to list the ZFS pools&#8221;<\/p>\n<p>Also <a href=\"https:\/\/forum.proxmox.com\/threads\/proxmox-ve-and-zfs-over-iscsi-on-truenas-scale-my-steps-to-make-it-work.125387\/\">mentioned in this guide<\/a>.<\/p>\n<p>This was further confirmed when I attempted to set up ZFS on an iSCSI disk and got this error message:<\/p>\n<p><a href=\"https:\/\/i.imgur.com\/NLnFbLp.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter\" src=\"https:\/\/i.imgur.com\/NLnFbLp.png\" alt=\"\" width=\"1218\" height=\"120\" \/><\/a><\/p>\n<p>Since I didn&#8217;t want to configure my NAS with root access over SSH on the iSCSI network, I was still curious what the point of iSCSI in PVE is if you can&#8217;t use a shared drive&#8230; Reviewing the chart above, and this comment: &#8220;<a href=\"https:\/\/forum.proxmox.com\/threads\/iscsi-setup.35589\/\">i guess the best way to do it, is to create a iscsi storage via the gui and then an lvm storage also via the gui (if you want to use lvm to manage the disks) or directly use the luns (they have to be managed on the storage server side)<\/a>&#8221;<\/p>\n<p>I ended up using LVM on the disk: &#8220;3: It is possible to use LVM on top of an iSCSI or FC-based storage. 
That way you get a shared LVM storage&#8221;<\/p>\n<p>However, using this model you can&#8217;t use snapshots. \ud83d\ude41<br \/>\nYou can use LVM-Thin but that&#8217;s not shared.<\/p>\n<h2 style=\"text-align: center;\"><span class=\"ez-toc-section\" id=\"Step_1_Setup_Storage_Server\"><\/span>Step 1) Setup Storage Server<span class=\"ez-toc-section-end\"><\/span><\/h2><div id=\"ez-toc-container\" class=\"ez-toc-v2_0_77 ez-toc-wrap-right counter-hierarchy ez-toc-counter ez-toc-black ez-toc-container-direction\">\n<div class=\"ez-toc-title-container\"><p class=\"ez-toc-title\" style=\"cursor:inherit\">Table of Contents<\/p>\n<\/div><nav><ul class='ez-toc-list ez-toc-list-level-1 ' ><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-1\" href=\"https:\/\/zewwy.ca\/index.php\/2024\/05\/20\/configuring-shared-lvm-over-iscsi-on-proxmox\/#Step_1_Setup_Storage_Server\" >Step 1) Setup Storage Server<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-2\" href=\"https:\/\/zewwy.ca\/index.php\/2024\/05\/20\/configuring-shared-lvm-over-iscsi-on-proxmox\/#Step_2_Configure_PVE_hosts\" >Step 2) Configure PVE hosts<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-3\" href=\"https:\/\/zewwy.ca\/index.php\/2024\/05\/20\/configuring-shared-lvm-over-iscsi-on-proxmox\/#Step_3_Setup_LVM\" >Step 3) Setup LVM<\/a><\/li><\/ul><\/nav><\/div>\n\n<p>In my case I&#8217;m using a FreeNAS server, with spare drive ports, so for this test I took a 2TB drive (3.5&#8243;), plugged it in and wiped it from the web UI.<\/p>\n<p>After this I configured a new extent as a raw device share.<\/p>\n<p><a href=\"https:\/\/i.imgur.com\/M6sh2eF.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter\" src=\"https:\/\/i.imgur.com\/M6sh2eF.png\" alt=\"\" width=\"1071\" height=\"695\" \/><\/a><\/p>\n<p>Created the associated targets and portals. 
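<\/p>\n<p>Before moving on, it&#8217;s worth sanity-checking that the portal is actually exporting the target. A quick discovery from any Linux box with open-iscsi installed does the trick (just a sketch; the portal IP and target here are the ones used later in this post, so substitute your own):<\/p>

```shell
# Ask the FreeNAS portal which targets it exports (requires the open-iscsi package)
iscsiadm -m discovery -t sendtargets -p 172.16.69.2
```

<p>Each output line has the form portal:port,tpgt followed by the target IQN. If the target doesn&#8217;t show up here, recheck the extent, target, and portal mappings on the NAS side.<\/p>\n<p>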
Once this was done (since I had dynamic discovery enabled on my ESXi hosts) they discovered the disk as well. I left them be, though it would probably be best to have separate networks&#8230; but I&#8217;ll admit&#8230; I was lazy.<\/p>\n<h2 style=\"text-align: center;\"><span class=\"ez-toc-section\" id=\"Step_2_Configure_PVE_hosts\"><\/span>Step 2) Configure PVE hosts<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>In my case I had to add the iSCSI network (VLAN tagged) onto my hosts. This is easy enough: Host -&gt; System -&gt; Network -&gt; Create Linux VLAN.<\/p>\n<p><a href=\"https:\/\/i.imgur.com\/jlmceMM.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter\" src=\"https:\/\/i.imgur.com\/jlmceMM.png\" alt=\"\" width=\"1424\" height=\"730\" \/><\/a><\/p>\n<p>OK, so where in ESXi you simply add an iSCSI adapter, in PVE you have to install it first? Sure, OK, let&#8217;s do that&#8230; Turns out it was already installed.<br \/>\nAfter reading that and seeing what my ESXi did, I edited my \/etc\/pve\/storage.cfg and added:<\/p>\n<p>iscsi: freenas<br \/>\nportal 172.16.69.2<br \/>\ntarget iqn.2005-10.org.freenass.ctl:proxhdd<br \/>\ncontent none<\/p>\n<p>To my surprise&#8230; it showed as a storage unit on both my PVE hosts. :O<\/p>\n<p>Hmm, doing a df -h I don&#8217;t see anything&#8230; but with fdisk -l, sure enough, I see the drive&#8230; 
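<\/p>\n<p>To double-check from the shell, you can list the active iSCSI session and spot the new SCSI disk (a sketch using standard open-iscsi and util-linux commands, nothing PVE-specific):<\/p>

```shell
# Show established iSCSI sessions (should list the freenas target)
iscsiadm -m session
# List SCSI block devices; the new drive appears with a transport of "iscsi"
lsblk --scsi
```

<p>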
So cool \ud83d\ude42<br \/>\nNow that I got both hosts to see the same disk, I guess it simply comes down to creating a file system on the raw disk.<br \/>\nOr not&#8230; when I try to create ZFS using the WebUI it just says no disks are available.<\/p>\n<h2 style=\"text-align: center;\"><span class=\"ez-toc-section\" id=\"Step_3_Setup_LVM\"><\/span>Step 3) Setup LVM<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>However, adding an LVM works:<\/p>\n<p><a href=\"https:\/\/i.imgur.com\/PO7dzfQ.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter\" src=\"https:\/\/i.imgur.com\/PO7dzfQ.png\" alt=\"\" width=\"1464\" height=\"773\" \/><\/a><\/p>\n<p>After setting up LVM, the storage should show up on all nodes in the cluster that have access to the disk. On one of my nodes it wasn&#8217;t showing as accessible until I rebooted the node that had no problems accessing it. \u00af\\_(\u30c4)_\/\u00af<\/p>\n<p>So, there&#8217;s no option to pick storage when migrating a VM; you have to go into the VM&#8217;s hardware settings and &#8220;move the disk&#8221;.<\/p>\n<p><a href=\"https:\/\/i.imgur.com\/x2liQco.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter\" src=\"https:\/\/i.imgur.com\/x2liQco.png\" alt=\"\" width=\"1226\" height=\"707\" \/><\/a><\/p>\n<p>When I went to do my first live VM migration, I got an error:<\/p>\n<p><a href=\"https:\/\/i.imgur.com\/wiBNNuh.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter\" src=\"https:\/\/i.imgur.com\/wiBNNuh.png\" alt=\"\" width=\"1371\" height=\"827\" \/><\/a><\/p>\n<p>I soon realized this was just my own mistake: I hadn&#8217;t selected &#8220;delete source&#8221;, and &#8220;moving the disk&#8221; actually converts it from qcow2 to raw without deleting the old qcow2 file. So I simply deleted it. 
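<\/p>\n<p>Incidentally, the same move can be done from the shell with qm; a sketch, where the VMID (100), disk (scsi0), and storage name (freenas-lvm) are all placeholders for your own values, and the delete flag is the part I had missed in the GUI:<\/p>

```shell
# Move the VM's disk to the shared LVM storage and delete the source image.
# VMID 100, disk scsi0, and storage "freenas-lvm" are hypothetical values.
qm move_disk 100 scsi0 freenas-lvm --delete 1
```

<p>With the delete option set, the old qcow2 file is removed once the move succeeds, avoiding leftover source images.<\/p>\n<p>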
Then I tried again&#8230;<\/p>\n<p><a href=\"https:\/\/i.imgur.com\/aoebejU.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter\" src=\"https:\/\/i.imgur.com\/aoebejU.png\" alt=\"\" width=\"1204\" height=\"537\" \/><\/a><\/p>\n<p><a href=\"https:\/\/i.imgur.com\/EwTOLS0.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter\" src=\"https:\/\/i.imgur.com\/EwTOLS0.png\" alt=\"\" width=\"1839\" height=\"1082\" \/><\/a><\/p>\n<p>And it worked! Now the only problem is no snapshots. I attempted to create an LVM-Thin on top of the LVM, and it did create it, but as noted in the chart, both my hosts could not access it at the same time, so it&#8217;s not shared.<\/p>\n<p><a href=\"https:\/\/i.imgur.com\/HbyBu6e.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter\" src=\"https:\/\/i.imgur.com\/HbyBu6e.png\" alt=\"\" width=\"306\" height=\"435\" \/><\/a><\/p>\n<p>Guess I&#8217;ll have to see how Ceph works. That&#8217;ll be a post for another day. Cheers.<\/p>\n<p>*Update* I&#8217;ll have to implement a filter on FreeNAS, because Proxmox, I guess, won&#8217;t implement a fix that was given to them for free.<\/p>\n<p><a href=\"https:\/\/forum.proxmox.com\/threads\/iscsi-reconnecting-every-10-seconds-to-freenas-solution.21205\/#post-163412\">https:\/\/forum.proxmox.com\/threads\/iscsi-reconnecting-every-10-seconds-to-freenas-solution.21205\/#post-163412<\/a><\/p>\n<p><a href=\"https:\/\/bugzilla.proxmox.com\/show_bug.cgi?id=957\">https:\/\/bugzilla.proxmox.com\/show_bug.cgi?id=957<\/a><\/p>\n<p>*UPDATE May 2025*<\/p>\n<p>Ohhh, looks like they may have finally got off their butts and implemented a fix&#8230;<\/p>\n<p>Fedrich: &#8220;As Victor notes, their patch is applied and available in libpve-storage-perl &gt;= 8.3.4, which is part of Proxmox VE 8.4. 
Thanks for your contribution!<\/p>\n<p>One thing I want to point out is that, even with this patch, the Proxmox VE node will still perform a connection check (via TCP ping) when there is no active session (yet) on some occasions, e.g., when first logging in or after boot. However, with this patch it will not do TCP pings against a portal if there is an active session to the portal, and this should get rid of the large majority of (recurring) TCP pings against portals.&#8221;<\/p>\n<p>I have not personally had a chance to test or verify this, however.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>So, I&#8217;ve been recently playing with Proxmox for virtualization. It&#8217;s pretty nice, but in my cluster (which consisted of two old laptops) whenever I migrated VMs or containers, the storage had to be migrated over the network as well. Since they are just old laptops, everything connects at 1 Gbps to switches &hellip; <\/p>\n<p class=\"link-more\"><a href=\"https:\/\/zewwy.ca\/index.php\/2024\/05\/20\/configuring-shared-lvm-over-iscsi-on-proxmox\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Configuring shared LVM over iSCSI on 
Proxmox&#8221;<\/span><\/a><\/p>\n","protected":false},"author":3,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"sfsi_plus_gutenberg_text_before_share":"","sfsi_plus_gutenberg_show_text_before_share":"","sfsi_plus_gutenberg_icon_type":"","sfsi_plus_gutenberg_icon_alignemt":"","sfsi_plus_gutenburg_max_per_row":"","footnotes":""},"categories":[5,8],"tags":[460,461,462],"class_list":["post-1551","post","type-post","status-publish","format-standard","hentry","category-hypervisors","category-server-administration","tag-lvm","tag-pve","tag-shared-storage"],"_links":{"self":[{"href":"https:\/\/zewwy.ca\/index.php\/wp-json\/wp\/v2\/posts\/1551","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/zewwy.ca\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/zewwy.ca\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/zewwy.ca\/index.php\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/zewwy.ca\/index.php\/wp-json\/wp\/v2\/comments?post=1551"}],"version-history":[{"count":4,"href":"https:\/\/zewwy.ca\/index.php\/wp-json\/wp\/v2\/posts\/1551\/revisions"}],"predecessor-version":[{"id":1671,"href":"https:\/\/zewwy.ca\/index.php\/wp-json\/wp\/v2\/posts\/1551\/revisions\/1671"}],"wp:attachment":[{"href":"https:\/\/zewwy.ca\/index.php\/wp-json\/wp\/v2\/media?parent=1551"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/zewwy.ca\/index.php\/wp-json\/wp\/v2\/categories?post=1551"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/zewwy.ca\/index.php\/wp-json\/wp\/v2\/tags?post=1551"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}