All times are UTC - 5 hours [ DST ]




 Post subject: BAD - Removing Disk While Active!
PostPosted: June 27th, 2019, 7:22 

Joined: December 29th, 2015, 4:37
Posts: 4
Location: United States
Sigh. I did a very bad, stupid thing. Of course it was an accident, but that doesn't make it any better for a $500, 14 TB drive! I was attempting to hot-swap a different drive, which was...of course...not active, but I pulled the wrong one. What do I actually need to be worried about, other than the data I was writing to the drive at the time? I know I need to check that.

I am pretty sure it is unrelated, but just to freak me out, a short time after this occurred, all the drives in my enclosure "reloaded" (if that's the right word) in Windows for some reason. That is, they dropped out of detection and then reappeared. And I was copying data when that happened too! It will be a miracle if this file is not corrupt by the time I get it copied! LOL

Thanks much in advance.


 Post subject: Re: BAD - Removing Disk While Active!
PostPosted: June 27th, 2019, 14:20 

Joined: November 22nd, 2017, 21:47
Posts: 309
Location: France
Perhaps you should be more specific about the exact setup those drives were in (RAID or otherwise), and what type of enclosure they were in.

Regarding the damage assessment, as far as I know (no pro), it could range from nothing at all, to logical bad sector(s), to physical bad sector(s), to... a lot worse. In my (limited) experience, shutting down an HDD during a write operation is likely to cause “logical bad sectors”, i.e. sectors which are still physically operational but are in such an inconsistent state that they can no longer be read, and that can confuse the system to the point of making programs freeze when they try to access those particular sectors. It's normally solved by overwriting the problematic sector(s), but of course their contents are lost in the process. See this for instance (the HGST drive).

I would first check the SMART status of that drive, plugged in alone (outside of the enclosure) on a Linux system (not Windows, as it will mount it automatically and might write to it, which you do not want).
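To illustrate what I would look for: a rough Python sketch that pulls the sector-health counters out of the text produced by `smartctl -A` (the attribute IDs and names are the standard ATA ones; the parsing and the sample values in the usage below are just for illustration, not from your drive).

```python
import re

# SMART attributes most relevant after an unclean removal:
#   5   Reallocated_Sector_Ct   - physical bad sectors already remapped
#   197 Current_Pending_Sector  - sectors the drive could not read yet
#   198 Offline_Uncorrectable   - sectors that failed offline scanning
WATCH = {5, 197, 198}

def parse_smart_attributes(smartctl_output):
    """Return {attribute_name: raw_value} for the watched attributes,
    given the text output of `smartctl -A /dev/sdX`."""
    counts = {}
    row = re.compile(
        r"\s*(\d+)\s+(\S+)\s+0x[0-9a-fA-F]+"   # ID, name, flags
        r"\s+\d+\s+\d+\s+\d+"                  # value, worst, threshold
        r"\s+\S+\s+\S+\s+\S+\s+(\d+)"          # type, updated, when-failed, raw
    )
    for line in smartctl_output.splitlines():
        m = row.match(line)
        if m and int(m.group(1)) in WATCH:
            counts[m.group(2)] = int(m.group(3))
    return counts
```

If `Current_Pending_Sector` comes back non-zero right after the incident, that is exactly the “logical bad sector” situation above: the drive knows those sectors are unreadable but hasn't remapped them yet.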
Then, if there are either physical or logical bad sectors in limited number (I would say up to 10 to stay on the safe side; beyond that it would be very risky to attempt anything on your own with such a large volume of data and limited tools, knowledge and experience {*}), the wise course of action would be to clone that drive entirely. Of course, that means having a spare $500, 14TB drive, but if you can afford to hoard that much data you should be able to afford to back up that much data, or at least what's really important (and a side question would be: is it worth hoarding data which is not really important? A question I ask myself regularly regarding my own hoarding habits). Because, as you may be about to learn the hard way, RAID is no backup solution.
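The principle of such a clone, stripped to the bone: copy block by block, and when a block won't read, pad it and note where it was instead of aborting. Real recovery tools (GNU ddrescue in particular) do this far better, with adaptive block sizes, retry passes and a map file; this little Python sketch is only to show the idea.

```python
BLOCK = 1024 * 1024  # 1 MiB per read; real tools adapt this dynamically

def clone_with_skips(src, dst, size):
    """Copy `size` bytes from src to dst in fixed-size blocks.
    On a read error, write zeros for that block and record its offset
    so the damage can be assessed afterwards.
    src and dst are seekable binary file objects."""
    bad_offsets = []
    offset = 0
    while offset < size:
        n = min(BLOCK, size - offset)
        src.seek(offset)
        try:
            data = src.read(n)
        except OSError:              # unreadable (pending/bad) sectors
            data = b"\x00" * n
            bad_offsets.append(offset)
        dst.seek(offset)
        dst.write(data)
        offset += n
    return bad_offsets
```

The point of recording `bad_offsets` is that you can later map those positions back to files on the cloned copy, instead of hammering the damaged original over and over.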
Then, whether or not there is a problem with that particular drive, if all the drives in that enclosure were in a RAID setting, the whole array may be in an inconsistent state. Since one other drive was about to be swapped, and you removed that one instead, with a little help from Murphy's Law there may not be enough redundancy left on the remaining drives to rebuild what's missing (I'm not sure about this, I only ever used RAID 1 myself, but I think that when a drive in a RAID 5 setting or similar is detected as “bad”, even if just a few clusters are missing or inconsistent, the controller will want to rebuild it entirely).
So, in a nutshell, you may be in for a lot more trouble than just that one file which was being written when that fateful pull happened. :?


{*} Recently I recovered data from a 4TB HDD, which is a lot less than 14TB but already a lot of data (it takes about 8 hours to copy the whole contents of a 4TB HDD in perfect condition), and one head failed midway through; as I don't have the training or equipment to perform a head stack assembly replacement, all I could do was continue the cloning to the end with 7 working heads out of 8 and then assess the damage. Luckily there was only about 550GB of data on it, and it wasn't in a RAID setting, so the final recovery rate was quite good. But with a 14TB drive having even small media damage and/or head damage from the sudden shutdown, and which is part of a RAID array, that would be akin to playing Russian roulette with a machine gun.
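For a sense of scale, the timing above works out roughly like this (back-of-the-envelope, assuming a steady sequential rate and no retries on bad areas, which in practice can multiply the time many times over):

```python
# 4 TB cloned in ~8 hours implies a sustained sequential rate of:
rate = 4e12 / (8 * 3600)      # bytes per second, ~139 MB/s

# A single full pass over a 14 TB drive at that same rate:
hours = 14e12 / rate / 3600   # ~28 hours

print(round(rate / 1e6), "MB/s,", round(hours), "hours")
```

So even a best-case clone of that 14 TB drive is more than a full day of continuous reading, before any recovery work starts.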


 Post subject: Re: BAD - Removing Disk While Active!
PostPosted: June 27th, 2019, 17:15 

Joined: December 29th, 2015, 4:37
Posts: 4
Location: United States
Lucky for me (well, assuming nothing else is wrong anyway), it was not in a RAID at the time. I just bought a Synology NAS I am going to use in SHR (my first NAS, though I have used enclosures and RAID for two decades...they have just always been direct connect), and I am working to get drives added into the NAS, move stuff off my current setup, put more drives in the NAS, etc. I was intending to eject an 8TB drive that I was using to store data, but I pulled the wrong one. I am about to find out about the tests and all. Wish me luck! :)

