@labtech
Thanks for these concise yet precise explanations.
“...threshold overflow” => That's when there are too many detected bad sectors, right?
Here is a screenshot I took early on:
Attachment: 2019-05-23-190019_1330x1024_scrot.png
It shows a relatively wide area of contiguous bad or weak reads. Eventually most of those blocks turned green, so most of those sectors must have been recovered (although the grid view may not be fine-grained enough to ascertain this). Is it more likely that there is a damaged area at that particular spot on the corresponding platter, which weakened the corresponding head – the one that eventually failed – each time it tried to “hammer” those sectors, or that this was an early sign of head malfunction? Or both?
Incidentally, the owner told me that he used to shut down his computer equipment, including the WD MyCloud device containing that HDD, by turning off the power bar powering it all instead of doing a proper shutdown, and he is himself hypothesising that this may have been the primary cause of the failure. The fact that the seemingly damaged area appears to be located at the edge of a platter would seem to support that scenario. Is that plausible?
Interestingly, starting from ~79%, the defective head, which seemingly hadn't been able to read a single byte for a long while, seems to be reading again, albeit very slowly: between 10 KB/s and 3 MB/s, more like 100-500 KB/s on average. There are also fewer skipped areas according to the preview in HDDSCViewer, and even none in one particular spot where there should be grey blocks judging by the general pattern. How is that even possible?
Attachment: 2019-05-29-182607_1920x1080_scrot - cut + modif.png
Problem is, it was taking way too long to copy what appears to be (almost?) nothing but 00s, with read times of up to 7000 ms, so I stopped it and set a 1000 ms threshold (instead of the default 300000 ms, which seems far too high given that phase 1 is supposed to grab the easily readable areas as fast as possible); now it's proceeding at a steady 50-60 MB/s with only brief slowdowns (and a few spikes above 100 MB/s).
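(For comparison, if I were doing this first pass with plain ddrescue, I believe the closest knobs for “grab the easy areas as fast as possible” would be something like the following; the device and file names are placeholders and the values are just examples:)
Code:
# -n skips the scraping phase on this run,
# -a skips ahead when the read rate drops below 1 MiB/s,
# -T gives up if 30 minutes pass without a single successful read:
ddrescue -n -a 1Mi -T 30m /dev/sdX image.dd rescue.map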
And I'm finally seeing non-zero data in the preview just beyond 3 TB...
(That's what Christopher Columbus must have felt like when he finally sighted land after months of sailing an empty sea, half starved.)
Attachment: 2019-05-29-200951_1920x1080_scrot - cut.png
When I stopped it this morning to GTFTS, it looked like this:
Attachment: 2019-05-30-021854_1920x1080_scrot - cut.png
It's a bit of a mess because the last 500 GB or so had already been scanned (when I skipped a large chunk to verify whether the “stripes” pattern would be the same everywhere); at some point I forced the copy to restart from about 47%, where it had been stopped the first time (and to go back to phase 1, as it was then in phase 2), and once it caught up with that last portion there were only grey and black blocks left (the “current” yellow block is in the lower-right area of the grid). The drive was still reading something every once in a long while, at the low to very low speeds mentioned above.
I suppose that now the most efficient course of action would be to complete the clone in reverse from the very end, to at least get the most out of the black (non-tried) blocks, instead of waiting for HDDSuperClone to reach that step on its own. Beyond that, there's probably little point in hammering the drive further for days or weeks, blindly hoping to get a few more useful bytes out of an overwhelming (and underwhelming) stream of emptiness.
I remember reading in articles about the ddrescue algorithm that it attempts to read until an unreadable sector is encountered, then leaps forward until a good area is found, then immediately reads backward until it hits an unreadable sector, then resumes forward from right after the point where it leaped. Isn't that more efficient than HDDSuperClone (if I understand it correctly) trying to determine the most adequate skip size based on the previously observed patterns of readability, and only doing a backward pass much later? (I may have misconceptions about both tools' operation.)
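To make sure I'm describing that correctly, here is a toy sketch of that first pass as I picture it; it is only my simplified model (made-up sector numbers, fixed leap size, hard-coded bad sectors), not how either tool is actually implemented:
Code:
#!/bin/bash
# Simulated drive: sectors 0..15, of which 4, 5, 6 and 11 are unreadable.
BAD=" 4 5 6 11 "
TOTAL=16
SKIP=4                                   # the real tools adapt this value
read_ok() { [[ "$BAD" != *" $1 "* ]]; }

pos=0
while (( pos < TOTAL )); do
  if read_ok "$pos"; then
    echo "copy sector $pos (forward)"
    (( ++pos ))
  else
    (( land = pos + SKIP ))              # error: leap forward...
    while (( land < TOTAL )) && ! read_ok "$land"; do
      (( land += SKIP ))                 # ...until a good area is found
    done
    (( land >= TOTAL )) && break
    back=$land                           # read backward from the landing
    while (( back > pos )) && read_ok "$back"; do
      echo "copy sector $back (backward)"
      (( --back ))                       # point toward the unreadable area
    done
    (( pos = land + 1 ))                 # resume forward past the leap point
  fi
done
# Whatever remains between the error and the backward stop is left
# non-tried here, to be retried in later passes.
If this model is right, the difference would mainly lie in how the leap distance is chosen (fixed here, adaptive in HDDSuperClone) and in when the skipped stretches get revisited.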
@maximus
Quote:
No, those tools will not be implemented in hddsuperclone, for a couple reasons. First, there are things that I still don’t understand about the NTFS filesystem, so those tools are not as robust as they could be, and I don’t plan on digging any deeper. Second, processing a file system can be very complicated, and there is not always good sources of information on how they work (can you say “proprietary?”), which is why I chose to implement the virtual driver mode. I can leave the complicated part of processing the file systems up to other tools, and focus on working with the drive itself.
But ddru_findbad, if I understood correctly, already relies on Sleuthkit to perform the actual analysis.
In what circumstances could ddru_ntfsbitmap or ddru_ntfsfindbad (which is said to be faster and more reliable than the general-purpose ddru_findbad for NTFS partitions) produce unreliable results?
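For context, the workflow I had in mind is roughly the following; device and file names are placeholders, and I'm quoting the option spellings from my reading of the ddrutility and ddrescue documentation, so corrections are welcome:
Code:
# 1) Read the NTFS $Bitmap and build a domain mapfile covering
#    only the allocated clusters:
ddru_ntfsbitmap /dev/sdb1 ntfs_domain.map

# 2) Clone only that domain (-m = --domain-mapfile in recent ddrescue):
ddrescue -m ntfs_domain.map /dev/sdb1 image.dd rescue.map

# 3) List the files touched by unrecovered sectors (presumably this
#    could also be pointed at the rescued image to spare the drive):
ddru_ntfsfindbad /dev/sdb1 rescue.map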
Quote:
As for a demo version, probably not. I have given out a few short term licenses for testing purposes, but with the intention of someone actually spending time to test and report back results. I set the short term price fairly low so that if someone thought they wanted to try it out, they could without breaking the bank. In your situation with so few cases, it probably doesn’t make sense to purchase it. And your current case will likely be solved as best possible without the need. I wish the best for the recovery.
In this particular case, I'll admit that the actual recovery might have been done in a fraction of the time with the “Pro” version (and possibly with a higher final success rate, since it might have been possible to get all the still-accessible useful sectors while all heads were still operational), but just getting up to speed with the complete setup would have required about as much time as it would have saved!
(Of course, time spent learning something new is never lost.)