 Post subject: Re: WD40EFRX, issues when cloning with HDDSuperClone
PostPosted: May 27th, 2019, 22:53 
Offline

Joined: November 22nd, 2017, 21:47
Posts: 309
Location: France
Quote:
The GParted error message in your original post I suspect is showing the output of dumpe2fs -h ExtPartitionName. And it shows the meta data of a damaged Ext superblock with 553.46GB of space used out of 3.63TB, or about 15%. And viewing the HDDSuperClone log file you posted shows mostly a sea of green for the first 43% of the drive. So certainly at this point most user data has been imaged. The task now is to extract the data from the 43% partial image.

Thanks for the feedback.
That figure is deduced from these two values, I suppose? ((975575808 - 830488873) * 4096 / 1024^3 = 553.46, with a 4 KiB block size)
Code:
Block count:              975575808
Free blocks:              830488873
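For reference, a quick way to sanity-check that figure from a shell, assuming the 4 KiB block size that dumpe2fs -h would report for this filesystem ("Block size: 4096"):
Code:
# (975575808 - 830488873) blocks of 4096 bytes each, expressed in GiB:
echo 'scale=2; (975575808 - 830488873) * 4096 / 1024^3' | bc
# -> 553.46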

How can the free space be correctly calculated if parts of the “block bitmaps” are missing?
Is there any way I could determine what exactly is damaged in the filesystem and try to repair it somehow? (As I already wrote, I first made a partial image with ddrescue, which I saved, and when I compared that image with the clone I noticed many differences; I haven't yet tried to merge them, which may or may not fix something. I'm wondering whether those differences are due to skipped sectors, or whether something was altered when I plugged the recovery drive into Windows... Normally Windows shouldn't alter Linux partitions, but I have a driver installed called “Ext2 Volume Manager”, and I got a BSOD when I accidentally double-clicked on the letter of the main partition – if I remember correctly, I may have BSODed myself.)
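On the merging point, here is a rough sketch of how the two partial copies could be combined without trusting areas that were never actually read, assuming the HDDSuperClone log is first exported in ddrescue map format (all file names are placeholders):
Code:
# Copy into the clone only what the first rescue actually recovered
# (--domain-mapfile restricts reads to the areas marked finished in first.map),
# and only where the clone's own map says data is still missing:
ddrescue --domain-mapfile=first.map first.img clone.img clone.map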
As I explained, I observed that starting from about 12% only 00s were copied, and when I examined the recovery drive after stopping the cloning at 47% / 1.7TB, it was empty beyond 438GB (I didn't examine it thoroughly; there might be some small islands of data in that sea of emptiness, but nothing significant enough in size to be visible when scrolling up and down in WinHex). So where are the remaining 115GB? Could it be because “sparse” writing was enabled globally? (See previous post.) Or could there be a large chunk beyond 1.7TB, with that much totally empty space before it?
And the issue with extracting data from those 438GB is that key metadata components are currently missing, because they are located beyond what has been recovered. And even if I finish the cloning process, with one defective head there are almost certainly going to be missing inodes or whatnot, in which case the original directory structure and the original file names will be lost, and fragmented files won't be recovered correctly. Right?

Quote:
In my remote data recovery practice I use a technique I call drive hybridization which takes a partial image and joins it to the unimaged remainder of the source drive to create a hybrid drive that is both part image and part hard drive which can then be used for data recovery operations. In today's world of multi TB drives, performing full drive imaging can be very inefficient and places unneeded stress on the source drive. If we say that copying one sector one time equals one unit of drive stress then full drive imaging would earn the max stress score of 100% while the stress score for partial imaging can be considerably less. And really what's the point of imaging TB's of nothing but zeros?

Well, I'm not sure... I'm still learning in that highly complex and convoluted field, where the information is scarce and sparse, but it would seem less stressful to read every single sector once (provided that the heads can “go the distance”) than to repeatedly read the same already weakened area of the source (the one containing the metadata structures) in an attempt to improve time and effort efficiency. Right after that head failed, I tried to mount the main partition on the source drive and it worked (albeit with difficulty – several seconds to open a subfolder); the key metadata structures were still intact and could have been added to the clone at that point. Now it no longer mounts, so some important sectors are no longer accessible, and even with this “drive hybridization” technique (which might indeed be helpful in other situations) it would probably be too late to get the whole original directory structure.

Quote:
Anyway if you would like help with your case let me know.

Well, I'll get something for this service, but not enough (by a long shot – let's just say it's a two-digit figure) to request remote assistance from a seasoned bona fide data recovery professional... :)
(And not even enough to pay the veterinarian fees just this week... é_è Cancer sucks.)


 Post subject: Re: WD40EFRX, issues when cloning with HDDSuperClone
PostPosted: May 28th, 2019, 11:29 
Offline

Joined: November 22nd, 2017, 21:47
Posts: 309
Location: France
Again, I'm quite surprised by the lack of feedback in this thread. It seems to me that it's a textbook case when it comes to software-only recovery in general, and recovery from Ext4-partitioned drives in particular (if it had been an NTFS partition it wouldn't be so much of an issue; all the metadata would already be there, plus I could have targeted the allocated data from the beginning), or recovery from large-capacity drives with only a small portion of allocated data. It could certainly interest people having a similar issue in the future, and they might be frustrated by the lack of replies to what I think are serious and relevant questions. So is there a particular reason for this?


 Post subject: Re: WD40EFRX, issues when cloning with HDDSuperClone
PostPosted: May 28th, 2019, 11:53 
Offline

Joined: February 16th, 2016, 21:07
Posts: 43
Location: Boston, USA
Space used = (block count - free blocks) * block size

There are several alternate super blocks spread out across the filesystem. It is likely one of them is good and can be used to mount the filesystem.

So I think a hybrid drive would work well in your case to mount with one of the alternate super blocks if you have what looks like a 43% image that is 99% good.
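For example, something along these lines could locate and test a backup superblock against the clone once it is complete enough, assuming the filesystem was created with default mke2fs options (the device name is a placeholder; mke2fs -n and e2fsck -n are dry runs, and the mount is read-only):
Code:
# Where the backup superblocks would live for this geometry (-n = print only, writes nothing):
mke2fs -n /dev/sdX1

# Check / mount read-only against one of the reported backups, e.g. block 32768
# on a 4 KiB-block ext4 (mount's sb= option counts in 1 KiB units, so 32768 * 4 = 131072):
e2fsck -n -b 32768 -B 4096 /dev/sdX1
mount -o ro,sb=131072 /dev/sdX1 /mnt/recovery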

I offer help for free / donation. Currently there is a 9 day waiting period for the free pricing tier.

Good luck with the Vet.

_________________
On-Line Data Recovery Consultant. RAID / NAS / Linux Specialist.
Serving clients worldwide since 2011
FreeDataRecovery.us


 Post subject: Re: WD40EFRX, issues when cloning with HDDSuperClone
PostPosted: May 28th, 2019, 17:13 
Offline

Joined: November 22nd, 2017, 21:47
Posts: 309
Location: France
Quote:
Space used = (block count - free blocks) * block size

Yes, that's what I calculated, but how can that information be retrieved if parts of the bitmap are missing?

Quote:
There are several alternate super blocks spread out across the filesystem. It is likely one of them is good and can be used to mount the filesystem.

But then why would the partition on the original drive no longer mount? Because it would have lost access to other filesystem components, which might already have been recovered on the clone?

Quote:
So I think a hybrid drive would work well in your case to mount with one of the alternate super blocks if you have what looks like a 43% image that is 99% good.

Right now it's at 58%. Surprisingly, today I'm getting much higher rates on some of the good portions, up to 125MB/s (others being around 60MB/s as before).
And the “pending sector count” doesn't seem to have increased since yesterday (1521). (I got GSmartControl working again; it works when installed through the terminal, but not through the software library. Also, twice already the system became unresponsive: only the mouse pointer could be moved, slowly, but had no effect; all I could see was 100% CPU usage, but I don't know what caused it, between HDDSuperClone, HDDSCViewer, GSmartControl, and not much else.)
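Incidentally, the same SMART counters can be checked from the terminal with smartmontools, which GSmartControl is a front end for (the device name is a placeholder):
Code:
sudo apt-get install smartmontools gsmartcontrol   # on a Debian/Ubuntu-based system
# Print the raw SMART attribute table and pick out the pending / reallocated counters:
sudo smartctl -A /dev/sdX | grep -Ei 'pending|reallocat'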

Quote:
I offer help for free / donation. Currently there is a 9 day waiting period for the free pricing tier.

Then I'll definitely think about it. Even if it doesn't improve this recovery significantly it's a great learning opportunity.

Quote:
Good luck with the Vet.

Thanks ! :) She still got the eye of the tiger and the thrill of the fight...


~~~

Another general question (with little hope that it gets answered at this point): so one head failed completely, yet the drive is still working and is not clicking; in what circumstances does a head failure result in an HDD becoming completely unresponsive? Is there one particular head which has to access the System Area, so that the failure of that one head causes the drive to click, but not the others? If that is the case, why aren't drives designed with a redundant copy of the System Area on all surfaces of all platters, since the amount of information must be relatively small, to increase the odds of recovery if only one head becomes defective?


 Post subject: Re: WD40EFRX, issues when cloning with HDDSuperClone
PostPosted: May 28th, 2019, 18:37 
Offline

Joined: January 29th, 2012, 1:43
Posts: 982
Location: United States
Quote:
In this particular case, what's very inefficient is spending hours copying hundreds of GB of 00s with 7 heads out of 8 when what I want is the missing blocks of metadata and (possibly) 115GB worth of actual data. But with the way those metadata files are organized it seems impossible at this point to do it another way. Had I known all that from the beginning I may have been able to devise a neat trick (what I wrote above seems to be on the right track), but now it most likely wouldn't be possible to dump all the required files from the source.

One can never know what will happen with DIY recovery. That is why a pro will open the drive in a clean room and examine the heads before even powering on the drive, so they can make an assessment of what to expect. Even then, a head (or the drive) could die at any time during the recovery from unseen issues.

Quote:
Well, I'll get something for this service, but not enough (by a long shot – let's just say it's a two-digit figure)

Since this is DIY without pro recovery, and you are on a very low / zero budget, you have to deal with the inefficiency of the cheap / free tools available. But think about this. The short term 60 day HDDSuperClone license can be bought for $25 USD, which should in most cases be long enough for a single recovery case. This gives the virtual disk option for the purpose of data extraction, to be able to target files without thrashing the drive, not to mention the ability to control the timeouts to potentially speed things up. Targeting files does require another 3rd party tool such as R-Studio or DMDE. A Linux license for R-Studio is about $80 USD last time I checked (recommended). DMDE is about $20 USD for a 1 year license (although it has more quirks and doesn’t work as well, but still can work, although I likely won’t offer any support using it). And if you play the free version game with DMDE I think you can do up to 4000 files at a time, but only one directory level at a time. So with some work, it may be possible to do the needed recovery process with the free version of DMDE.

So a 60 day temp HDDSuperClone license is $25, R-Studio is $80, that is $105. That is cheap in the data recovery world. With DMDE the total would be $45. And if you try to go with the free version of DMDE and deal with the complications from that, now you are at $25.

The next step up from all of this is the hardware imagers that cost a few thousand dollars. I just felt the need to put things in perspective when looking at a low dollar DIY recovery.

_________________
http://www.hddsuperclone.com
Home of HDDSuperClone


 Post subject: Re: WD40EFRX, issues when cloning with HDDSuperClone
PostPosted: May 29th, 2019, 1:11 
Offline

Joined: November 22nd, 2017, 21:47
Posts: 309
Location: France
Quote:
One can never know what will happen with DIY recovery. That is why a pro will open the drive in a clean room and examine the heads before even powering on the drive, so they can make an assessment of what to expect. Even then, a head (or the drive) could die at any time during the recovery from unseen issues.

In this case the head which failed was operational at the beginning, and for quite a long while before giving up; would that have been visible upon close examination? (I may be wrong but it seems unlikely.) Or would it have shown early signs of malfunction through advanced software like PC3000 or MRT?
I offered my help when the owner mentioned that 1) the drive was not clicking, and 2) he could briefly see the contents, so I figured that the physical components were in good-ish working order, that the drive most likely had a bunch of defective sectors, some of which were in filesystem areas, and that software cloning had a good shot at a high success rate. And I explained beforehand, to the best of my knowledge, the likeliest causes, the possible remedies, the risks involved, and what was pretty much guaranteed to fail (the guy talked about this video, to put things in perspective! :) Even if I don't get everything that could possibly have been recovered with more advanced methods, at least I prevented him from opening the damn thing and ruining his data forever...)

Quote:
Since this is DIY without pro recovery, and you are on a very low / zero budget, you have to deal with the inefficiency of the cheap / free tools available. But think about this. The short term 60 day HDDSuperClone license can be bought for $25 USD, which should in most cases be long enough for a single recovery case. This gives the virtual disk option for the purpose of data extraction, to be able to target files without thrashing the drive, not to mention the ability to control the timeouts to potentially speed things up. Targeting files does require another 3rd party tool such as R-Studio or DMDE. A Linux license for R-Studio is about $80 USD last time I checked (recommended). DMDE is about $20 USD for a 1 year license (although it has more quirks and doesn’t work as well, but still can work, although I likely won’t offer any support using it). And if you play the free version game with DMDE I think you can do up to 4000 files at a time, but only one directory level at a time. So with some work, it may be possible to do the needed recovery process with the free version of DMDE.

So a 60 day temp HDDSuperClone license is $25, R-Studio is $80, that is $105. That is cheap in the data recovery world. With DMDE the total would be $45. And if you try to go with the free version of DMDE and deal with the complications from that, now you are at $25.

The next step up from all of this is the hardware imagers that cost a few thousand dollars. I just felt the need to put things in perspective when looking at a low dollar DIY recovery.

I'll definitely think about it (the time-limited license at least), but it would be nice to offer some kind of demo version, allowing one to experiment with the extra features without waiting for an actual case which may require them. Right now it is a very occasional service for me; I get an opportunity maybe 3 or 4 times a year (and some of those times I get nothing).
And while I understand that some special features are reserved for the commercial version, the very fact that you yourself released ddrutility goes to show that small tools with quite advanced functionality exist in the freeware world and can go a long way. While ddru_ntfsbitmap is probably nowhere near as sophisticated as this virtual drive module, or as convenient since it doesn't allow targeting specific files, it's already a significant improvement over blindly cloning a whole large drive containing only a small portion of allocated data. And the two ...findbad tools are also very useful for dealing with the damage done. Is there a chance that they could be integrated into HDDSuperClone at some point, and if not, why?


Meanwhile, cloning is at about 70%, and lo and behold, the main partition can now be mounted on the recovery drive ! (Although the contents aren't complete... yet ?)


 Post subject: Re: WD40EFRX, issues when cloning with HDDSuperClone
PostPosted: May 29th, 2019, 5:37 
Offline

Joined: August 18th, 2010, 17:35
Posts: 3636
Location: Massachusetts, USA
Another general question (with little hope that it gets answered at this point) :
so one head failed completely, yet the drive is still working, is not clicking ; in what circumstances does a head failure result in a HDD becoming completely unresponsive?

It is possible to have a failed head in a drive with a multi-head stack, yet have the drive operating without clicking. Excluding physical-shock scenarios, a drive typically degrades with bad sectors, the firmware eventually starts becoming problematic with threshold overflow, and then the more the drive is pushed to operate in a failing state, the more prone the heads become to failure, due to increased heat and further potential surface contact.

Is there one particular head which has to access the System Area,
Typically yes, with the general concept that with very mild SA issues a drive is able to compensate "on its own" by referring to the other SA copy. At least, this is how it was in older drives. With newer drives, it's not so easy to tell.

and then the failure of that one head does cause the drive to click, but not the others?
Upon running its own internal tests at power-on, the drive clicks because it determines that at least one head isn't working correctly, and therefore the head stack resets (one click). Every reset correlates to a clicking sound.

If this is the case, why aren't the drives designed with a redundancy of the System Area on all sides of all platters, since the amount of information must be relatively small, to increase the odds of recovery if only one head becomes defective?
There are exceptions, but most drives have at least 2 copies of the SA. The problem typically is being able to access the additional "good" SA copies when the main copy is in "bad shape". Obviously, it is a huge problem conceptually to expect the drive to handle on its own which SA copy to read and to continue operating reliably. It is not "that smart".

_________________
Hard Disk Drive, SSD, USB Drive and RAID Data Recovery Specialist in Massachusetts


 Post subject: Re: WD40EFRX, issues when cloning with HDDSuperClone
PostPosted: May 29th, 2019, 18:11 
Offline

Joined: January 29th, 2012, 1:43
Posts: 982
Location: United States
Quote:
I'll definitely think about it (the time-limited license at least), but it would be nice to offer some kind of demo version, allowing one to experiment with the extra features without waiting for an actual case which may require them. Right now it is a very occasional service for me; I get an opportunity maybe 3 or 4 times a year (and some of those times I get nothing).
And while I understand that some special features are reserved for the commercial version, the very fact that you yourself released ddrutility goes to show that small tools with quite advanced functionality exist in the freeware world and can go a long way. While ddru_ntfsbitmap is probably nowhere near as sophisticated as this virtual drive module, or as convenient since it doesn't allow targeting specific files, it's already a significant improvement over blindly cloning a whole large drive containing only a small portion of allocated data. And the two ...findbad tools are also very useful for dealing with the damage done. Is there a chance that they could be integrated into HDDSuperClone at some point, and if not, why?

No, those tools will not be implemented in hddsuperclone, for a couple of reasons. First, there are things that I still don’t understand about the NTFS filesystem, so those tools are not as robust as they could be, and I don’t plan on digging any deeper. Second, processing a file system can be very complicated, and there are not always good sources of information on how they work (can you say “proprietary?”), which is why I chose to implement the virtual driver mode. I can leave the complicated part of processing the file systems up to other tools, and focus on working with the drive itself.

As for a demo version, probably not. I have given out a few short term licenses for testing purposes, but with the intention of someone actually spending time to test and report back results. I set the short term price fairly low so that if someone thought they wanted to try it out, they could without breaking the bank. In your situation with so few cases, it probably doesn’t make sense to purchase it. And your current case will likely be solved as best possible without the need. I wish the best for the recovery.

_________________
http://www.hddsuperclone.com
Home of HDDSuperClone


 Post subject: Re: WD40EFRX, issues when cloning with HDDSuperClone
PostPosted: May 30th, 2019, 15:38 
Offline

Joined: November 22nd, 2017, 21:47
Posts: 309
Location: France
@labtech
Thanks for these concise yet precise explanations.
“...threshold overflow” => That's when there are too many detected bad sectors, right ?

Here is a screenshot I took early on :
Attachment: 2019-05-23-190019_1330x1024_scrot.png [ 38.46 KiB ]

It shows a relatively wide area of contiguous bad or weak reads. Eventually most of those blocks became green, so most of those sectors must have been recovered (although the grid view might not be accurate enough to ascertain this). Is it more likely that there is a damaged area at that particular spot on the corresponding platter, which weakened the corresponding head – the one which eventually failed – each time it tried to “hammer” on those sectors, or is this an early sign of head malfunction? Or both?
Incidentally, the owner told me that he used to shut down his computer equipment, including the WD MyCloud device which contained this HDD, by turning off the power strip powering it all instead of doing a proper shutdown, and he himself is hypothesising that this may have been the leading cause of the failure. The fact that the seemingly damaged area appears to be located at the edge of a platter would seem to support that scenario. Is that so?


Interestingly, starting from ~79%, the defective head, which seemingly hadn't been able to read a single byte for a long while, seems to be reading again, albeit very slowly – between 10KB/s and 3MB/s, more like 100-500KB/s on average – and there are fewer skipped areas according to the preview in HDDSCViewer, even none in one particular spot where there should be grey blocks based on the general pattern. How is that even possible?
Attachment: 2019-05-29-182607_1920x1080_scrot - cut + modif.png [ 152.63 KiB ]

Problem is, it's taking way too long to copy almost (?) only 00s, with read times of up to 7000ms, so I stopped and set a 1000ms threshold (instead of the default 300000ms, which seems way too high, since phase 1 is supposed to get the easily readable areas as fast as possible); now it's proceeding at a steady rate of 50-60MB/s with only brief slowdowns (and a few spikes above 100MB/s).
And I'm finally seeing non-zero data in the preview just beyond 3TB... :shock: (That's what Christopher Columbus must have felt like when he finally saw land after months of navigating an empty sea while starving.)
Attachment: 2019-05-29-200951_1920x1080_scrot - cut.png [ 147.47 KiB ]



When I stopped it this morning to GTFTS, it looked like this :
Attachment: 2019-05-30-021854_1920x1080_scrot - cut.png [ 25.86 KiB ]

It's a bit of a mess because the last 500GB or so had already been scanned (when I skipped a large chunk to verify whether the “stripes” pattern would be the same everywhere); at some point I forced the copy to restart from about 47%, where it had been stopped the first time (and to go back to phase 1, as it was then in phase 2), and once it caught up with that last portion there were only grey and black blocks left (the “current” yellow block is in the lower-right area of the grid). But it was still reading something once in a quite long while, at the low to very low speed mentioned above. But I suppose that now the most efficient course of action would be to complete the clone in reverse from the very end, to at least get the most out of the black (non-tried) blocks, instead of waiting for HDDSuperClone to reach that step on its own. Then there's probably little point in trying to hammer the drive further for days or weeks, blindly hoping to get some more useful bytes out of an overwhelming (and underwhelming) stream of emptiness.
I remember reading in articles about the ddrescue algorithm that it attempts to read until an unreadable sector is encountered, then leaps forward until a good area is found, then immediately proceeds backward until an unreadable sector is encountered, then proceeds forward again starting right after the point where it previously leaped. Isn't that more efficient than HDDSuperClone (if I understand it correctly) trying to determine the most adequate skip size based on the previous patterns of readability, and only much later doing a backward pass? (I may have misconceptions about both tools' operation.)


@maximus
Quote:
No, those tools will not be implemented in hddsuperclone, for a couple of reasons. First, there are things that I still don’t understand about the NTFS filesystem, so those tools are not as robust as they could be, and I don’t plan on digging any deeper. Second, processing a file system can be very complicated, and there are not always good sources of information on how they work (can you say “proprietary?”), which is why I chose to implement the virtual driver mode. I can leave the complicated part of processing the file systems up to other tools, and focus on working with the drive itself.

But ddru_findbad, if I understood correctly, is already relying on Sleuthkit to perform the actual analysis.
In what circumstances could ddru_ntfsbitmap or ddru_ntfsfindbad (which is said to be more reliable and faster than the general purpose ddru_findbad for NTFS partitions) produce unreliable results ?

Quote:
As for a demo version, probably not. I have given out a few short term licenses for testing purposes, but with the intention of someone actually spending time to test and report back results. I set the short term price fairly low so that if someone thought they wanted to try it out, they could without breaking the bank. In your situation with so few cases, it probably doesn’t make sense to purchase it. And your current case will likely be solved as best possible without the need. I wish the best for the recovery.

In this particular case, I'll admit that the actual recovery may have been done in a fraction of the time with the “Pro” version (and possibly with a higher final success rate since it might have been possible to get all the still accessible useful sectors while all heads were still operational), but just to get up to speed in using the complete setup proficiently would have required about the same amount of time that it would have saved ! :) (Of course, time spent learning something new is never lost.)


 Post subject: Re: WD40EFRX, issues when cloning with HDDSuperClone
PostPosted: May 30th, 2019, 18:10 
Offline

Joined: January 29th, 2012, 1:43
Posts: 982
Location: United States
Quote:
But I suppose that now the most efficient course of action would be to complete the clone in reverse from the very end, to at least get the most out of the black (non-tried) blocks, instead of waiting for HDDSuperClone to reach that step on its own.

Ummm, THAT is what phase 2 does: it is a reverse pass to get the data that was skipped over by phase 1. Phase 2 is a complement to phase 1; together they are designed to get the most good data in the shortest time possible. Phase 3 and beyond can start hammering the drive by going back for the bad spots. But do whatever you want, even though the algorithm is designed to be as efficient as possible, because you know best. (Sorry, nothing personal, but I get frustrated when people play with the options to the point of affecting the recovery process, just because they think they know how it works.)

Quote:
In this particular case, I'll admit that the actual recovery may have been done in a fraction of the time with the “Pro” version (and possibly with a higher final success rate since it might have been possible to get all the still accessible useful sectors while all heads were still operational), but just to get up to speed in using the complete setup proficiently would have required about the same amount of time that it would have saved ! :) (Of course, time spent learning something new is never lost.)
I won't deny that the time to get familiar with the pro version may very well not have been worth it in your case.

Quote:
But ddru_findbad, if I understood correctly, is already relying on Sleuthkit to perform the actual analysis.
In what circumstances could ddru_ntfsbitmap or ddru_ntfsfindbad (which is said to be more reliable and faster than the general purpose ddru_findbad for NTFS partitions) produce unreliable results ?
You might be able to find it by reading through the discussions, but the flaw shows up when the fragment location data is too big to fit in the actual MFT record and is itself put into another record. I never got around to dealing with that, and it was also part of why I decided not to pursue file-system-level recovery, because it is just too difficult to figure out and keep up with. With the virtual driver, I can leave that to much better 3rd party software (R-Studio is awesome).

_________________
http://www.hddsuperclone.com
Home of HDDSuperClone


 Post subject: Re: WD40EFRX, issues when cloning with HDDSuperClone
PostPosted: May 30th, 2019, 20:41 
Offline

Joined: November 22nd, 2017, 21:47
Posts: 309
Location: France
Quote:
Ummm, THAT is what phase 2 does: it is a reverse pass to get the data that was skipped over by phase 1. Phase 2 is a complement to phase 1; together they are designed to get the most good data in the shortest time possible. Phase 3 and beyond can start hammering the drive by going back for the bad spots. But do whatever you want, even though the algorithm is designed to be as efficient as possible, because you know best. (Sorry, nothing personal, but I get frustrated when people play with the options to the point of affecting the recovery process, just because they think they know how it works.)

Sorry for this foolish failure to follow the flowchart on my part, I was just trying to adapt. The recovery process is certainly well designed for most cases, but this is a very specific situation in which 1) one head failed midway through the recovery, and 2) the drive only has about an eighth of its capacity allocated with data (and had been reading seemingly nothing but 00s for hours when that head failed). The main purpose of creating this thread was to try to find a method which could have improved the recovery efficiency in that context {*}. Failing that, I proceeded by intuition and trial and error, and the final outcome shouldn't be too bad, considering. And if I have made one mistake, it's pursuing the recovery from the point where I made a short test run (near sector 7,000,000,000), instead of going back to the point where I stopped the first time around (near sector 3,600,000,000); but I tried to (in the log file I changed the position in the header but not the actual “current position” value in hexadecimal, as I had done before), and it may have been the right move if the missing chunk of data had been at the very end (which seemed the most plausible then – had I done that test at 6,000,000,000 I would have found it much earlier).
I found out later on, re-reading the manual (sorry, I didn't remember all the specifics about what each phase does), that phase 2 is indeed automatically set to read backwards; since the current position was closer to the end than I thought, I let it finish phase 1 (but it was still a matter of several hours just to get a few more MB of mostly 00s – arguably not the best use of time and cumulative stress on the drive in that specific situation), and then switch to phase 2 (there's a bit more data being read in reverse, but strangely some black blocks already scanned in both directions stay black, whereas they should be considered “tried” by now).


{*} I also asked about that issue on SuperUser, if anyone's interested, now or in the future... and was wisely advised to put the defective drive in the freezer overnight... :?


 Post subject: Re: WD40EFRX, issues when cloning with HDDSuperClone
PostPosted: May 30th, 2019, 21:17 
Offline

Joined: August 18th, 2010, 17:35
Posts: 3636
Location: Massachusetts, USA
abolibibelot wrote:
and was wisely advised to put the defective drive in the freezer overnight... :?

Will this suggestion ever go away?!? Goodness.

_________________
Hard Disk Drive, SSD, USB Drive and RAID Data Recovery Specialist in Massachusetts


 Post subject: Re: WD40EFRX, issues when cloning with HDDSuperClone
PostPosted: May 30th, 2019, 21:29 
Offline

Joined: August 18th, 2010, 17:35
Posts: 3636
Location: Massachusetts, USA
abolibibelot wrote:
“...threshold overflow” => That's when there are too many detected bad sectors, right ?

Right.
You are welcome.

_________________
Hard Disk Drive, SSD, USB Drive and RAID Data Recovery Specialist in Massachusetts


 Post subject: Re: WD40EFRX, issues when cloning with HDDSuperClone
PostPosted: June 24th, 2019, 17:35 
Offline

Joined: November 22nd, 2017, 21:47
Posts: 309
Location: France
An update about the end of this recovery, and a few final questions.

1) I stopped the recovery when there were no more black blocks visible in HDDSCViewer (a few sectors could still be copied, but at an excruciatingly slow rate). Yet at that point, according to HDDSuperClone, there were still about 510,000,000 sectors, or 260GB, considered “non-tried”. How come? When a problematic area gets tried forward (pass 1) and backward (pass 2), shouldn't the sectors in between the skipping points be marked as “non-trimmed”?
Attachment: 2019-06-03-113753_1354x865_scrot.png [ 44.51 KiB ]

Attachment: 2019-06-03-120236_802x730_scrot.png [ 157.12 KiB ]


2) At that point the main partition could finally be mounted, but ddru_findbad still couldn't perform its analysis, as it couldn't detect the partition type or file system type. The command-line tools it relies on, used independently, reported the same result:
Code:
mmstat /dev/sda
Cannot determine partition type (GPT or DOS at 0)
fsstat /dev/sda
Cannot determine file system type

Is this to be expected? Is it due to the GPT partitioning scheme, or to the size of the volume, or something else? If it is due to missing file system structures, how come the partition can be mounted and explored, while supposedly low-level analysis algorithms can't retrieve the most basic information?

3) In a case like this, is there another way of getting a thorough list of files affected by unreadable sectors?
I used R-Studio and its “Show files in HexEditor” feature to at least get a broad idea of which kinds of files or which folders were to be found in the damaged area, but it's neither convenient nor accurate. From what I could see, luckily, most of the damaged files were downloaded movies / episodes from TV shows, but at least one folder containing personal picture and video files was in that area. I could then identify corrupted files in that folder and repair a few of them which had duplicates elsewhere, identified with AllDup in size-only mode (in one instance both duplicates of an MP4 video file were corrupted, but one at the beginning and one at the end, so I could regenerate 100% of the original which would otherwise have been lost), or partially repair a few MP4 video files with Grau Video Repair (when the beginning was valid and a segment was missing at the end, making the file otherwise unreadable). Then I applied NTFS compression to the remaining corrupted files which I could identify – both to save space (as the owner chose a 500GB external HDD as the storage drive for the recovery and I wasn't sure that everything would fit) and to visually mark them (as compressed files normally appear in blue in Windows Explorer).
Also, many large files (downloaded movies) were extracted completely empty, probably because the metadata was incomplete (no cluster allocation information in R-Studio's hexadecimal viewer). Some of them could be found by the “raw file carving” method (in “Extra found files”), valid and seemingly complete, but most were either truncated or mixed with foreign data, most likely because they were fragmented; in those cases the compression saved a lot of space, reducing a multi-GB file to just 4KB. In the end, with the combined use of NTFS compression and hard links (R-Studio extracted some files as hard links, and AllDup has a convenient option to automatically replace a duplicate with a hard link), I got a total allocated size of about 368GB (465GB minus 97GB of free space), for a total of 555GB as calculated in the folder's properties.

4) Again, how can it be explained that such a large chunk of data (more than 100GB out of ~550GB) got written beyond the 3TB mark, with more than 2TB of empty space between that chunk and the larger chunk at the beginning?

5) Based on the log file, what is the exact size of each head's “stroke”, and hence the expected size of each error in contiguous files? Is it supposed to be perfectly constant, or can it vary slightly? As I mentioned earlier, the bad head seemed to be reading something in some areas, albeit very slowly (whereas in most areas corresponding to this head nothing could be read at all): does this mean that the head itself was randomly getting (barely and briefly) functional again (then why?), or that, for some reason, the magnetic signal was slightly stronger in those areas, enough to pass the “readability threshold” of the bad head, if that makes sense, and allow it to get a successful read instead of noise? {*}


{*} Hypothesis based on a comment by “fzabkar” posted a few years back :
“To get an idea of the progression in technology, it might be worth following the changes in the Hardware ECC Recovered SMART attribute in Seagate's models. AFAICT, the normalised value for this attribute has steadily declined with newer models, to the point that Seagate no longer appears to report it. ISTM that HDDs are nowadays digging data out of noise.”


 Post subject: Re: WD40EFRX, issues when cloning with HDDSuperClone
PostPosted: June 24th, 2019, 19:53 
Offline

Joined: January 29th, 2012, 1:43
Posts: 982
Location: United States
1) Phase 1 and 2 perform skipping, and anything skipped is still considered non-tried because it has not been tried yet. Phase 3 and 4 are for digging into the skipped parts.

2) If the file system is too corrupt or damaged, then those tools will not be able to process it. And I no longer support ddrutility. But what I do see is that you are running the file system commands against the whole disk, and not a partition. Not sure if that is the issue or not.
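For example, something like this would aim the Sleuth Kit tools at the partition rather than the whole disk (the device and offset are placeholders; mmls prints the real start sector):
Code:
mmls /dev/sda                # list the GPT layout and note the ext4 partition's start sector
fsstat -o 264192 /dev/sda    # -o = sector offset of that partition (264192 is just an example)
# or, if the kernel exposes the partition device directly:
fsstat /dev/sda1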

3) You can choose to mark fill the unrecovered areas. Then any file recovered will contain the marking data, and can be considered corrupt. In Linux you can use grep to search, there should be examples you can find for ddrescue, which will work the same.
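A rough sketch of that approach with GNU ddrescue (the marker string and file names are just examples):
Code:
# Tag every block still marked bad in the map with a recognizable string
# (other status characters, e.g. '?' for non-tried, can be added to --fill-mode):
printf 'BAD_SECTOR_MARK ' > marker.bin
ddrescue --fill-mode=- marker.bin clone.img rescue.map

# After extracting files from the filled clone, list the ones containing the marker,
# i.e. the files hit by unreadable areas:
grep -rl 'BAD_SECTOR_MARK' /mnt/extracted > corrupted_files.txt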

4) That can only be explained by someone that truly knows how that file system works as it was designed. Ask Microsoft.

5) You did not attach the logfile, but from the screenshot I would suspect 191MB (from last run size). From the screenshot it also appears that the head was reading until half way through the recovery, and then developed an issue and possibly died. I can’t tell more without the actual log file.

_________________
http://www.hddsuperclone.com
Home of HDDSuperClone


 Post subject: Re: WD40EFRX, issues when cloning with HDDSuperClone
PostPosted: June 26th, 2019, 12:34 
Offline

Joined: November 22nd, 2017, 21:47
Posts: 309
Location: France
Quote:
1) Phase 1 and 2 perform skipping, and anything skipped is still considered non-tried because it has not been tried yet. Phase 3 and 4 are for digging into the skipped parts.

Alright. But in a case like this, isn't it kinda overkill to let all the phases finish, considering that (based on the general pattern) one head is known to read (almost) nothing at all? (It already took several extra days, once the bulk of the recovery was done, to grind through those black blocks and go from 3696.56GB / 92.395927% to 3736.55GB / 93.395248%, and again most of the extra recovered data consisted of empty sectors.)
There should be a warning saying that, based on the collected recovery data, if it becomes clear that one head is no longer functional at all, there is little point in attempting to read further inside the areas corresponding to that head (and perhaps also strongly advising having the drive serviced by a bona fide data recovery company, as the odds of a successful recovery without replacing the head stack assembly are low, especially if that head is already dysfunctional at the beginning of the software-only recovery attempt).

Quote:
2) If the file system is too corrupt or damaged, then those tools will not be able to process it. And I no longer support ddrutility. But what I do see is that you are running the file system commands against the whole disk, and not a partition. Not sure if that is the issue or not.

I tried both. I wasn't sure if it was supposed to work against a full drive or a partition, as the manual is quite sketchy. But since it is no longer supported... é_è
If I remember correctly, running the [ddru_findbad] command against the main partition, I got a long list of “Illegal number” warnings (so many that I could only copy the end of the terminal output for future reference, and now I'm not sure which exact command produced this output), and the output file came out empty.
I may have made the mistake (at least for some attempts) of running it directly against the HDDSuperClone log file, instead of exporting a ddrescue-formatted log file and using that. I haven't yet overwritten the recovery drive, so before I do I will try it again and double-check everything.

Quote:
3) You can choose to mark fill the unrecovered areas. Then any file recovered will contain the marking data, and can be considered corrupt. In Linux you can use grep to search, there should be examples you can find for ddrescue, which will work the same.

Thanks, I'll run some tests then. Too late for this particular case (I already gave the owner the finished recovery), but it could come in handy in a similar situation.

Quote:
4) That can only be explained by someone that truly knows how that file system works as it was designed. Ask Microsoft.

The original drive was formatted with Linux partitions, the main one was in EXT4.

Quote:
5) You did not attach the logfile, but from the screenshot I would suspect 191MB (from last run size). From the screenshot it also appears that the head was reading until half way through the recovery, and then developed an issue and possibly died. I can’t tell more without the actual log file.

Sorry, I thought that the one I posted earlier (page 1 of this thread, 12th post) would be enough to answer that question, but it was very partial indeed.
So here it is in its final form :
Attachment: WD40EFRX 2.log [3.68 MiB]


 Post subject: Re: WD40EFRX, issues when cloning with HDDSuperClone
PostPosted: June 26th, 2019, 17:50 
Offline

Joined: January 29th, 2012, 1:43
Posts: 982
Location: United States
Quote:
Alright. But in a case like this, isn't it kinda overkill to let all the phases finish, considering that (based on the general pattern) one head is known to read (almost) nothing at all? (It already took several extra days, once the bulk of the recovery was done, to grind through those black blocks and go from 3696.56GB / 92.395927% to 3736.55GB / 93.395248%, and again most of the extra recovered data consisted of empty sectors.)
There should be a warning saying that, based on the collected recovery data, if it becomes clear that one head is no longer functional at all, there is little point in attempting to read further inside the areas corresponding to that head (and perhaps also strongly advising having the drive serviced by a bona fide data recovery company, as the odds of a successful recovery without replacing the head stack assembly are low, especially if that head is already dysfunctional at the beginning of the software-only recovery attempt).

It is up to you to assess the condition of the drive as the recovery proceeds. I can’t make the program smart enough to think like a human. All I can do is provide a tool with a user manual. It is up to the user to read and understand the theory of operation section, and make decisions based on that and how the drive is reacting.

Quote:
5) Based on the log file, what is the exact size of each head's “stroke”, and hence the expected size of each error in contiguous files? Is it supposed to be perfectly constant, or can it vary slightly? As I mentioned earlier, the bad head seemed to be reading something in some areas, albeit very slowly (whereas in most areas corresponding to this head nothing could be read at all): does this mean that the head itself was randomly getting (barely and briefly) functional again (then why?), or that, for some reason, the magnetic signal was slightly stronger in those areas, enough to pass the “readability threshold” of the bad head, if that makes sense, and allow it to get a successful read instead of noise? {*}

To follow up further on this, I now also see from the previous screenshot that the read size per head stroke is about 191-192MB. But this does change; it will be bigger at the start and get smaller towards the end. This is due to density changes, because the physical diameter is smaller as it reads farther in. As for why it can read some data and not other data, I don’t have the answer. That is how a weak head works sometimes. I looked at the log and there are large sections where it did read some and others where it didn’t read any data (play with the settings in hddscviewer, making sure to enable highlight good data). Sometimes density changes can affect the reading, but that is all I know.

_________________
http://www.hddsuperclone.com
Home of HDDSuperClone


 Post subject: Re: WD40EFRX, issues when cloning with HDDSuperClone
PostPosted: June 27th, 2019, 16:56 
Offline

Joined: January 29th, 2012, 1:43
Posts: 982
Location: United States
Since my post yesterday went into a black hole after I edited it, here it is again (I have learned to write anything more than a couple of sentences in a word processor before posting, for cases just like this where the post just disappears).
Quote:
Alright. But in a case like this, isn't it kinda overkill to let all the phases finish, considering that (based on the general pattern) one head is known to read (almost) nothing at all? (It already took several extra days, once the bulk of the recovery was done, to grind through those black blocks and go from 3696.56GB / 92.395927% to 3736.55GB / 93.395248%, and again most of the extra recovered data consisted of empty sectors.)
There should be a warning saying that, based on the collected recovery data, if it becomes clear that one head is no longer functional at all, there is little point in attempting to read further inside the areas corresponding to that head (and perhaps also strongly advising having the drive serviced by a bona fide data recovery company, as the odds of a successful recovery without replacing the head stack assembly are low, especially if that head is already dysfunctional at the beginning of the software-only recovery attempt).

It is up to you to assess the condition of the drive as the recovery proceeds. I can’t make the program smart enough to think like a human. All I can do is provide a tool with a user manual. It is up to the user to read and understand the theory of operation section, and make decisions based on that and how the drive is reacting.

Quote:
5) Based on the log file, what is the exact size of each head's “stroke”, and hence the expected size of each error in contiguous files? Is it supposed to be perfectly constant, or can it vary slightly? As I mentioned earlier, the bad head seemed to be reading something in some areas, albeit very slowly (whereas in most areas corresponding to this head nothing could be read at all): does this mean that the head itself was randomly getting (barely and briefly) functional again (then why?), or that, for some reason, the magnetic signal was slightly stronger in those areas, enough to pass the “readability threshold” of the bad head, if that makes sense, and allow it to get a successful read instead of noise? {*}

To follow up further on this, I now also see from the previous screenshot that the read size per head stroke is about 191-192MB. But this does change; it will be bigger at the start and get smaller towards the end. This is due to density changes, because the physical diameter is smaller as it reads farther in. As for why it can read some data and not other data, I don’t have the answer. That is how a weak head works sometimes. I looked at the log and there are large sections where it did read some and others where it didn’t read any data (play with the options in hddscviewer, making sure to enable highlight good data). Sometimes density changes can affect the reading, but that is all I know.

_________________
http://www.hddsuperclone.com
Home of HDDSuperClone


 Post subject: Re: WD40EFRX, issues when cloning with HDDSuperClone
PostPosted: June 28th, 2019, 9:51 
Offline

Joined: November 22nd, 2017, 21:47
Posts: 309
Location: France
Quote:
Since my post yesterday went into a black hole after I edited it, here it is again (I have learned to write anything more than a couple of sentences in a word processor before posting, for cases just like this where the post just disappears).

Yep, I hate that too... :x Sometimes I find myself digging through the whole system partition or through a complete RAM dump with WinHex, looking for remnants of a relatively long and/or elaborate chunk of text I just typed, or which I sent somewhere only to find that it has vanished. I use a clipboard manager to try to avoid that kind of situation (less hassle than having to open a full-fledged word processor), but don't always think about doing a copy in time before some SNAFU makes it all disappear. I read about a Firefox extension called “Lazarus”, made for that purpose, but haven't tried it, and don't know if it has been updated since the Quantum debacle.
In this case it was weird because on the main page this thread appeared to have been updated with a new post by “maximus”, yet it wasn't there – and now both the original post and the copy have appeared, go figure.

Quote:
It is up to you to assess the condition of the drive as the recovery proceeds. I can’t make the program smart enough to think like a human. All I can do is provide a tool with a user manual. It is up to the user to read and understand the theory of operation section, and make decisions based on that and how the drive is reacting.

I get that, but you also said that an end user who does not have a deep understanding of how the program's algorithm is designed and of the intricacies of data recovery on a failing storage device (i.e. the vast majority of end users) shouldn't try to mess with it or even attempt to make case-by-case decisions... :) The fact that HDDSC (at least the commercial version) can indicate if one head is failing is already a step in that direction. What I don't know (because I haven't tried – as I said, it was already getting too long with too little reward without going that far in this particular case) is how it deals with a bad-head situation beyond phases 1-2. Does it try to read every single sector (which might be a suitable approach when dealing with actual bad sectors on the platters), which must take extremely long when nothing is readable over a large area (because of that head malfunction), or is there a point where it gives up and marks the area as “tried” anyway?

Quote:
To follow up further on this, I now also see from the previous screenshot that the read size per head stroke is about 191-192MB. But this does change; it will be bigger at the start and get smaller towards the end. This is due to density changes, because the physical diameter is smaller as it reads farther in. As for why it can read some data and not other data, I don’t have the answer. That is how a weak head works sometimes. I looked at the log and there are large sections where it did read some and others where it didn’t read any data (play with the options in hddscviewer, making sure to enable highlight good data). Sometimes density changes can affect the reading, but that is all I know.

It seems roughly consistent with the maximum length of missing data I observed in the files which I examined thoroughly. And it's in the ballpark of the typical read sizes per head stroke shown in this thread (except for Samsung drives, apparently):
http://www.hddoracle.com/viewtopic.php?f=59&t=650


 Post subject: Re: WD40EFRX, issues when cloning with HDDSuperClone
PostPosted: June 28th, 2019, 20:07 
Offline

Joined: January 29th, 2012, 1:43
Posts: 982
Location: United States
Quote:
Quote:
It is up to you to assess the condition of the drive as the recovery proceeds. I can’t make the program smart enough to think like a human. All I can do is provide a tool with a user manual. It is up to the user to read and understand the theory of operation section, and make decisions based on that and how the drive is reacting.

I get that, but you also said that an end user who does not have a deep understanding of how the program's algorithm is designed and of the intricacies of data recovery on a failing storage device (i.e. the vast majority of end users) shouldn't try to mess with it or even attempt to make case-by-case decisions... :) The fact that HDDSC (at least the commercial version) can indicate if one head is failing is already a step in that direction. What I don't know (because I haven't tried – as I said, it was already getting too long with too little reward without going that far in this particular case) is how it deals with a bad-head situation beyond phases 1-2. Does it try to read every single sector (which might be a suitable approach when dealing with actual bad sectors on the platters), which must take extremely long when nothing is readable over a large area (because of that head malfunction), or is there a point where it gives up and marks the area as “tried” anyway?

Excerpts from the user manual:
Quote:
Phase 1 is a copy pass forward with skipping. Phase 2 is a copy pass backward with skipping. Together they offer the best attempt to get the most good data first and fast from the good areas of the drive…… These two passes are the money passes. If after these two passes you don’t have a percentage complete in the upper 90’s, then you likely have a weak/damaged head, and the next phases could take a long time to complete.

Quote:
Skips is the total number of times the program has skipped since the program was started. Skip runs is how many skip runs have happened since the program was started. If you see the run count growing, it likely means there is a weak/damaged head.

So there is the indication of a bad head, no pro version needed.
As for how the further phases work:
Quote:
Phase 3 is a copy pass forward with skipping. But the skipping size for this pass does not self adjust, and skipping is based on the read rate instead of read errors. If the read rate is below the value of --rate-skip for two consecutive reads, it skips ahead the amount of --skip-size.

Phase 4 is a copy pass forward without skipping. This gets all the remaining non-tried areas.

All failed blocks larger than one sector (LBA) in size are marked as non-trimmed by the first 4 phases. Failed blocks that are one sector in size may be marked as bad if timeouts are not used.

Trimming reads each non-trimmed block one sector at a time forward until a read error, and then backward until a read error. Any trimmed blocks larger than one sector are marked as non-scraped.

Dividing is only performed if trimming is turned off. If trimming is turned off, then 1 or 2 dividing copy passes are done instead. The default is for only one dividing pass, to activate the second pass use the --do-divide2 option. If there is only one dividing pass, then it reads the non-trimmed blocks with the cluster size / 8, and marks the failed blocks as non-scraped. If the --do-divide2 option is used, then the first dividing pass reads non-trimmed blocks with the cluster size / 4. The second dividing pass reads non-divided blocks with the cluster size / 16. The first dividing pass marks failed blocks larger than one sector as non-divided, the second pass marks them as non-scraped. Trimming has been found to be more efficient than dividing with scsi-passthrough mode. Dividing can be more efficient with ata-passthrough mode (not in the free version as the marking of reported bad sectors is disabled) and the direct modes (more so with direct mode when using timers), but how efficient depends on how the drive is reacting.

Scraping reads the non-scraped blocks one sector at a time forwards. Failed sectors are marked as bad.


If you really want to try to understand how it works, then test with it on some drives that are not important, NOT with drives that you are currently trying to recover for a client.

_________________
http://www.hddsuperclone.com
Home of HDDSuperClone

