{"id":1576,"date":"2024-06-16T23:39:47","date_gmt":"2024-06-17T04:39:47","guid":{"rendered":"https:\/\/zewwy.ca\/?p=1576"},"modified":"2024-06-18T23:05:51","modified_gmt":"2024-06-19T04:05:51","slug":"remove-orphaned-datastore-in-vcenter-again","status":"publish","type":"post","link":"https:\/\/zewwy.ca\/index.php\/2024\/06\/16\/remove-orphaned-datastore-in-vcenter-again\/","title":{"rendered":"Remove Orphaned Datastore in vCenter Again"},"content":{"rendered":"<h1 style=\"text-align: center;\"><span class=\"ez-toc-section\" id=\"Story\"><\/span><strong>Story<\/strong><span class=\"ez-toc-section-end\"><\/span><\/h1>\n<p>I did this once before, but that time was due to rebuilding <a href=\"https:\/\/zewwy.ca\/index.php\/2022\/05\/07\/fix-orphaned-datastore-in-vcenter\/\">an ESXi host and not removing the old<\/a> datastore. This time, however, it&#8217;s due to the storage server failing.<\/p>\n<h2 style=\"text-align: center;\"><span class=\"ez-toc-section\" id=\"The_Precursor\"><\/span>The Precursor<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>I did NOT want to write this blog post. This post comes from the fact that VMware is not perfect, and I\u2019m here to air some dirty laundry\u2026 Let\u2019s get started.<\/p>\n<h2 style=\"text-align: center;\"><span class=\"ez-toc-section\" id=\"What_Happened\"><\/span>What Happened?<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>The short answer is I don&#8217;t fully know; all I know is that the backend storage server (FreeNAS 11.1-U7) running iSCSI started showing weird signs of problems (reporting graphs not rendering). Since I wanted to possibly do some Frankenstein surgery on the unit (an Iomega px12-350r), 
I started to vMotion the primary VMs I needed onto local ESXi storage.<\/p>\n<p>Even though I checked the logs, I couldn&#8217;t determine what was causing all the services to fail to start. Trying to start them manually just showed gibberish in the system log.<\/p>\n<p><a href=\"https:\/\/i.imgur.com\/wCUofCY.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter\" src=\"https:\/\/i.imgur.com\/wCUofCY.png\" alt=\"\" width=\"805\" height=\"392\" \/><\/a><\/p>\n<h2 style=\"text-align: center;\"><span class=\"ez-toc-section\" id=\"The_Problem\"><\/span>The Problem<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>Since I couldn&#8217;t get it back up, the datastores show as inaccessible in vCenter:<\/p>\n<p><a href=\"https:\/\/i.imgur.com\/hVJ72Wz.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter\" src=\"https:\/\/i.imgur.com\/hVJ72Wz.png\" alt=\"\" width=\"843\" height=\"259\" \/><\/a><\/p>\n<p>Attempting to unmount them results in an error:<\/p>\n<p><a href=\"https:\/\/i.imgur.com\/TTOcNNf.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter\" src=\"https:\/\/i.imgur.com\/TTOcNNf.png\" alt=\"\" width=\"419\" height=\"151\" \/><\/a><\/p>\n<p>Not sure what that means; even putting the host in maintenance mode gives the same error. 
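At this point, it can help to confirm over SSH which iSCSI sessions the host still believes are active; a minimal check (the adapter name vmhba64 below is just an example and may differ on your host) is:<\/p>\n<pre>esxcli iscsi adapter list\r\nesxcli iscsi session list -A vmhba64<\/pre>\n<p>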
Attempting to remove the iSCSI configuration which hosts those datastores also errors out, with:<\/p>\n<p><a href=\"https:\/\/i.imgur.com\/27sGn20.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter\" src=\"https:\/\/i.imgur.com\/27sGn20.png\" alt=\"\" width=\"418\" height=\"191\" \/><\/a><\/p>\n<p>Strange&#8230; how can there be active sessions when it&#8217;s literally dead?<\/p>\n<p>I tried following my old blog post on a similar case, but while I was able to unmount the datastores via esxcli, the web GUI would still show them&#8230;<\/p>\n<pre>esxcli storage filesystem list\r\nesxcli storage filesystem unmount -u UID<\/pre>\n<p><a href=\"https:\/\/i.imgur.com\/lYkJOmp.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter\" src=\"https:\/\/i.imgur.com\/lYkJOmp.png\" alt=\"\" width=\"1264\" height=\"303\" \/><\/a><\/p>\n<p>Any attempt to set them offline failed, as their status was dead anyway&#8230;<\/p>\n<p><a href=\"https:\/\/i.imgur.com\/bpl21iE.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter\" src=\"https:\/\/i.imgur.com\/bpl21iE.png\" alt=\"\" width=\"535\" height=\"181\" \/><\/a><\/p>\n<p>As you can see, no difference:<\/p>\n<p><a href=\"https:\/\/i.imgur.com\/HyFP30W.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter\" src=\"https:\/\/i.imgur.com\/HyFP30W.png\" alt=\"\" width=\"708\" height=\"198\" \/><\/a><\/p>\n<h2 style=\"text-align: center;\"><span class=\"ez-toc-section\" id=\"Solutions\"><\/span>Solutions?<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>I looked up solutions and found one post of a similar nature here:<\/p>\n<p><a href=\"https:\/\/tomaskalabis.com\/wordpress\/how-remove-unmountedinaccessible-datastore-from-esxi-host\/\">How to remove unmounted\/inaccessible datastore from ESXi Host (tomaskalabis.com)<\/a><\/p>\n<p>When I attempted to run the command,<\/p>\n<pre class=\"lang:default decode:true\">esxcli storage core device detached remove -d 
naa.ID<\/pre>\n<p>it sadly failed for me:<\/p>\n<p><a href=\"https:\/\/i.imgur.com\/39pDHMz.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter\" src=\"https:\/\/i.imgur.com\/39pDHMz.png\" alt=\"\" width=\"1191\" height=\"30\" \/><\/a><\/p>\n<p>I was at a dead end&#8230; I could see the dead devices with no files or I\/O bound to them, but I couldn&#8217;t seem to remove them; they show as detached&#8230;<\/p>\n<pre>esxcli storage core device detached list<\/pre>\n<p><a href=\"https:\/\/i.imgur.com\/8ps9lN3.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter\" src=\"https:\/\/i.imgur.com\/8ps9lN3.png\" alt=\"\" width=\"434\" height=\"64\" \/><\/a><\/p>\n<p>As a last-ditch effort, I rescanned one last time and then ran the command to check for devices.<\/p>\n<pre>esxcli storage core adapter rescan --all\r\nesxcli storage core device list<\/pre>\n<p><a href=\"https:\/\/i.imgur.com\/wAoQR4B.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter\" src=\"https:\/\/i.imgur.com\/wAoQR4B.png\" alt=\"\" width=\"781\" height=\"133\" \/><\/a><\/p>\n<p>Checking the web GUI, I could see the datastores were gone, but the iSCSI config was still there; attempting to remove it resulted in the same error as above. Then I realized there were still static records defined; once I deleted them, everything was finally clean on the host.<\/p>\n<p><a href=\"https:\/\/i.imgur.com\/VLYzTKX.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter\" src=\"https:\/\/i.imgur.com\/VLYzTKX.png\" alt=\"\" width=\"864\" height=\"254\" \/><\/a><\/p>\n<h2 style=\"text-align: center;\"><span class=\"ez-toc-section\" id=\"Do_It_Again\"><\/span>Do It Again!<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>Since this seems to be a per-host thing, let&#8217;s see if we can fix it without maintenance mode or moving VMs. On a test host, 
check the broken datastores:<\/p>\n<p><a href=\"https:\/\/i.imgur.com\/A0rEXpS.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter\" src=\"https:\/\/i.imgur.com\/A0rEXpS.png\" alt=\"\" width=\"904\" height=\"293\" \/><\/a><\/p>\n<p>Turns out it&#8217;s even easier&#8230; just remove the static iSCSI targets, remove the dynamic target, then rescan storage and adapters:<\/p>\n<p><a href=\"https:\/\/i.imgur.com\/6Ypocgc.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter\" src=\"https:\/\/i.imgur.com\/6Ypocgc.png\" alt=\"\" width=\"861\" height=\"257\" \/><\/a><\/p>\n<p>I guess sometimes you just overthink things and get led down rabbit holes when a simple solution already exists. I followed these simple steps on the final host, and oddly one datastore lingered:<\/p>\n<p><a href=\"https:\/\/i.imgur.com\/NM4ucvM.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter\" src=\"https:\/\/i.imgur.com\/NM4ucvM.png\" alt=\"\" width=\"871\" height=\"334\" \/><\/a>Well, let&#8217;s enable SSH and see what&#8217;s going on here&#8230;<\/p>\n<pre>esxcli storage filesystem list\r\nesxcli storage filesystem unmount -u 643e34da-56b15cb2-0373-288023d8f36f<\/pre>\n<p><a href=\"https:\/\/i.imgur.com\/9NbYhSX.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter\" src=\"https:\/\/i.imgur.com\/9NbYhSX.png\" alt=\"\" width=\"1260\" height=\"281\" \/><\/a><\/p>\n<pre>esxcli storage core device list\r\nesxcli storage core device set -d naa.6589cfc0000005e95e5e4104f101a307 --state=off<\/pre>\n<p>&#8220;Unable to set device&#8217;s status. 
Error was: Unable to change device state, the device is marked as &#8216;busy&#8217; by the VMkernel.: Busy&#8221;<\/p>\n<p>Mhmmm, different than last time, which might explain why it wasn&#8217;t auto-removed.<\/p>\n<pre>esxcli storage core device world list -d naa.6589cfc0000005e95e5e4104f101a307<\/pre>\n<p><a href=\"https:\/\/i.imgur.com\/YPffeb2.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter\" src=\"https:\/\/i.imgur.com\/YPffeb2.png\" alt=\"\" width=\"690\" height=\"54\" \/><\/a><\/p>\n<p>A hostd-worker; and since running the command to list VM processes doesn&#8217;t show anything, it makes me think of the old scratch\/core dump&#8230;<\/p>\n<p><a href=\"https:\/\/i.imgur.com\/7mtkHWl.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter\" src=\"https:\/\/i.imgur.com\/7mtkHWl.png\" alt=\"\" width=\"1393\" height=\"41\" \/><\/a><\/p>\n<p>I&#8217;m not sure what restarting hostd does, so I&#8217;ll move critical VMs off just to be safe and then test restarting that service to see if it releases its stranglehold&#8230;<\/p>\n<pre>\/etc\/init.d\/hostd restart<\/pre>\n<p>After this, the host showed as disconnected from vCenter for a short while, then came back, and the old datastore was gone.<\/p>\n<p><a href=\"https:\/\/i.imgur.com\/q02pz2g.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter\" src=\"https:\/\/i.imgur.com\/q02pz2g.png\" alt=\"\" width=\"895\" height=\"311\" \/><\/a><\/p>\n<p>Although the datastore was gone, 
the disk remained, and I couldn&#8217;t get rid of it.<\/p>\n<p><a href=\"https:\/\/i.imgur.com\/dwEckf8.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter\" src=\"https:\/\/i.imgur.com\/dwEckf8.png\" alt=\"\" width=\"926\" height=\"209\" \/><\/a><\/p>\n<p>I don&#8217;t get it&#8230; do I have to reboot this host&#8230;?<\/p>\n<p>Ughh, a reboot worked&#8230; what a pain, though.<\/p>\n<p>If you want to know what datastore\/UUID is linked to what disk, run:<\/p>\n<pre>esxcli storage vmfs extent list<\/pre>\n<p>Now for G9-SSD2: I tried to remove it, since it showed signs of being on the way out, and I couldn&#8217;t&#8230; seems like an ongoing story here. I could only unmount it from the CLI.<\/p>\n<p><a href=\"https:\/\/i.imgur.com\/SZiOFg5.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter\" src=\"https:\/\/i.imgur.com\/SZiOFg5.png\" alt=\"\" width=\"1258\" height=\"255\" \/><\/a><\/p>\n<p>Weird: I deleted G9-SSD3 normally, then detached the disk containing G9-SSD2. When I recreated G9-SSD3, G9-SSD2 just disappeared. The drive still shows as unconsumed and detached.<\/p>\n<p>Now I have to go rebuild my shared storage server&#8230;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Story I did this once before, but that time was due to rebuilding an ESXi host and not removing the old datastore. This time, however, it&#8217;s due to the storage server failing. The Precursor I did NOT want to write this blog post. 
This post comes from the fact that VMware is not perfect and &hellip; <\/p>\n<p class=\"link-more\"><a href=\"https:\/\/zewwy.ca\/index.php\/2024\/06\/16\/remove-orphaned-datastore-in-vcenter-again\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Remove Orphaned Datastore in vCenter Again&#8221;<\/span><\/a><\/p>\n","protected":false},"author":3,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"sfsi_plus_gutenberg_text_before_share":"","sfsi_plus_gutenberg_show_text_before_share":"","sfsi_plus_gutenberg_icon_type":"","sfsi_plus_gutenberg_icon_alignemt":"","sfsi_plus_gutenburg_max_per_row":"","footnotes":""},"categories":[5,8],"tags":[234,233],"class_list":["post-1576","post","type-post","status-publish","format-standard","hentry","category-hypervisors","category-server-administration","tag-datastore","tag-remove"],"_links":{"self":[{"href":"https:\/\/zewwy.ca\/index.php\/wp-json\/wp\/v2\/posts\/1576","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/zewwy.ca\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/zewwy.ca\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/zewwy.ca\/index.php\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/zewwy.ca\/index.php\/wp-json\/wp\/v2\/comments?post=1576"}],"version-history":[{"count":4,"href":"https:\/\/zewwy.ca\/index.php\/wp-json\/wp\/v2\/posts\/1576\/revisions"}],"predecessor-version":[{"id":1580,"href":"https:\/\/zewwy.ca\/index.php\/wp-json\/wp\/v2\/posts\/1576\/revisions\/1580"}],"wp:attachment":[{"href":"https:\/\/zewwy.ca\/index.php\/wp-json\/wp\/v2\/media?parent=1576"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/zewwy.ca\/index.php\/wp-json\/wp\/v2\/categories?post=1576"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/zewwy.ca\/index.php\/wp-json\/wp\/v2\/tags?post=1576"}],"cu
ries":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}