May 24th, 2019, 20:32
lubuntu@lubuntu:~$ sudo ddru_findbad /dev/sda /media/lubuntu/Q_SD/WD40EFRX_ddrescue_export.log -o /media/lubuntu/Q_SD/WD40EFRX_findbad.log
Command line input was processed succesfully
ddru_findbad 1.11 20141015
Target = /dev/sda
Logfile = /media/lubuntu/Q_SD/WD40EFRX_ddrescue_export.log
Output base name = /media/lubuntu/Q_SD/WD40EFRX_findbad.log
Sector size = 512
Loop wait time = 2
More info = false
Extra output = false
Quick = false
Quick ntfs = false
Target /dev/sda is detected to be a block device
Neither mmstat or fsstat gave results
The partition table or file system could be corrupt
Cannot determine if target is whole drive or single partition
See file /media/lubuntu/Q_SD/WD40EFRX_findbad2.log_debug.txt for more info
Filesystem volume name: <none>
Last mounted on: /media/sdb4
Filesystem UUID: 1b5a9115-a28c-4f16-8e7d-c4c3d9d4cb09
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags: unsigned_directory_hash
Default mount options: user_xattr acl
Filesystem state: clean with errors
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 243900416
Block count: 975575808
Reserved block count: 19511516
Free blocks: 830488873
Free inodes: 243819640
First block: 0
Block size: 4096
Fragment size: 4096
Reserved GDT blocks: 791
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 8192
Inode blocks per group: 512
Flex block group size: 16
Filesystem created: Thu Dec 10 08:54:28 2015
Last mount time: Mon Apr 15 13:26:30 2019
Last write time: Mon Apr 15 13:59:43 2019
Mount count: 172
Maximum mount count: -1
Last checked: Thu Dec 10 08:54:28 2015
Check interval: 0 (<none>)
Lifetime writes: 26 GB
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 28
Desired extra isize: 28
Journal inode: 8
Default directory hash: half_md4
Directory Hash Seed: e1c45096-875e-4d9c-a701-f8dcb039618e
Journal backup: inode blocks
FS Error count: 2751
First error time: Sun Apr 14 07:06:01 2019
First error function: ext4_find_entry
First error line #: 935
First error inode #: 18219009
First error block #: 0
Last error time: Mon Apr 15 13:59:43 2019
Last error function: __ext4_get_inode_loc
Last error line #: 4199
Last error inode #: 18219197
Last error block #: 72876075
dumpe2fs 1.42.9 (4-Feb-2014)
Journal superblock magic number invalid!
Unable to read the contents of the file system.
For this reason, some operations may be unavailable.
The cause may be a missing software package.
Here is the list of software packages required to support the ext4 file system: e2fsprogs v1.41+.
May 24th, 2019, 23:38
May 25th, 2019, 13:18
May 25th, 2019, 14:00
0xa3f014 0x22abef80 0x7f 0x0 0x0
May 25th, 2019, 14:20
You are correct that the pattern indicates an issue with a head. I can’t tell if the head is dead or just weak from the image, maybe I could tell from the log file. It was fine up until a certain point, it is possible that is just how it is reacting to a weak head, or the head is suddenly getting worse. One way to tell is to start a new recovery with the destination of NULL (no destination) and let it run for a few minutes or so to see if it still reads fine at the beginning like it already has, or if you now see the same pattern starting from the beginning.
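A rough equivalent of that NULL-destination check with plain GNU ddrescue, for anyone not using hddsuperclone (a sketch only, not the procedure described above; the mapfile path and the size limit are placeholders):

[code]
# Sketch only: read the start of the source into /dev/null with a throwaway
# mapfile and watch whether the beginning of the drive still reads at a
# normal rate. -f is required because the output is an existing device file.
# Stop the run after a few minutes once the behaviour is clear.
sudo ddrescue -f -n -b 512 -s 10000000000 /dev/sda /dev/null /tmp/headtest.map
[/code]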
But I wouldn’t mess with it too much since you are trying to get the data out of it.
As for targeting just the wanted data with free tools, good luck. If it were NTFS I would point you towards the discussion section of ddrutility, where someone figured out how to use the ntfs utilities along with ddrescue to manually target files. But I have nothing for the EXT filesystem. It is for reasons like that that I made the virtual driver mode in the paid pro version, but even then you also need to purchase a Linux license for R-Studio (optionally you can try DMDE, but R-Studio definitely worked best in testing). Plus you need space for the clone along with space for the recovered files at the same time. The next option up would be the expensive hardware imagers with data extraction. The next option down would be attempting to copy files directly with a file utility, likely without the ability to handle bad sectors or resume. And the last alternative option: write your own program to do it.
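One hedged possibility, entirely my own sketch and not something offered above: if the ext4 metadata can still be read, whether from the source or from what is already on the clone, debugfs from e2fsprogs can list the filesystem blocks belonging to one wanted file, and those blocks can be turned into byte ranges on the disk to target with an imager. Every device name, offset and path below is a placeholder.

[code]
#!/bin/bash
# Sketch only: list the blocks of one file with debugfs, merge consecutive
# blocks into ranges, and print byte offsets/sizes relative to the whole disk.
PART=/dev/sda4                  # source (or clone) partition - placeholder
PART_START_BYTES=$((2048*512))  # partition start on the disk, in bytes - placeholder
BLKSZ=4096                      # filesystem block size (matches the dumpe2fs output above)
FILE='/path/to/wanted/file'     # wanted file, relative to the filesystem root - placeholder

sudo debugfs -R "blocks $FILE" "$PART" 2>/dev/null |
  tr -s ' ' '\n' | grep -E '^[0-9]+$' |
  awk -v bs="$BLKSZ" -v off="$PART_START_BYTES" '
    NR==1        { start=$1; prev=$1; next }
    $1==prev+1   { prev=$1; next }
                 { printf "offset=%.0f size=%.0f\n", off+start*bs, (prev-start+1)*bs
                   start=$1; prev=$1 }
    END { if (NR) printf "offset=%.0f size=%.0f\n", off+start*bs, (prev-start+1)*bs }'
[/code]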
I guess my suggestion would be that unless you are willing to fork out some money, then just upgrade to the latest version of HDDSuperClone and resume the cloning with the free version as you were. It will skip around the bad/weak head and get all the good sectors first. If you think you know where all the desired data is within reason, use the input offset for the start point, and the size to limit how far it will go (sorry, no stop offset, must do some math and use size). I don’t know why you had an issue with setting the input offset before (except for being an old beta version:)).
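To make that start/size arithmetic concrete (made-up numbers, not values from this recovery): since there is no stop-offset setting, the size to enter is simply the desired end position minus the input/start offset, in whatever unit the tool expects.

[code]
# Made-up example of the "input offset + size" math mentioned above
START=2000000000000    # hypothetical start of the area of interest, in bytes
END=2600000000000      # hypothetical end of the area of interest, in bytes
SIZE=$((END - START))  # enter this as the size so the run stops at END
echo "input offset: $START  size: $SIZE"
[/code]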
May 25th, 2019, 14:48
Maybe the data is not all recovered, you just think it is. Obviously something is still different between the source and destination. You could try examining the destination with DMDE, although I can't help with that, as I don't use it.

But the thing is, as I explained, the relevant data should already be recovered if there's only about 550 GB of it. Before I proceed any further, I'm trying to understand why that doesn't seem to be the case, and why the partition isn't already operational on the recovery drive, considering that it had been copying zeroes for hours by the time that WD40EFRX drive started having serious issues.
I do not know of a tool for that. On a healthy drive, I use Clonezilla which can copy only the used sectors, but it does not handle bad sectors very well at all. I don't know of a tool that will list the used space, especially without hammering on the drive.

Alright, I'll try that. But since the partition can still be accessed, is there a tool which could analyse it and return the intervals of occupied sectors, with as little stress as possible, just like ddru_ntfsbitmap does by analysing the $Bitmap file, even if I then have to manually set the copy intervals based on that information?
May 25th, 2019, 16:52
Maybe the data is not all recovered, you just think it is. Obviously something is still different between the source and destination. You could try examining the destination with DMDE, although I can't help with that, as I don't use it.
I do not know of a tool for that. On a healthy drive, I use Clonezilla which can copy only the used sectors, but it does not handle bad sectors very well at all. I don't know of a tool that will list the used space, especially without hammering on the drive.
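One avenue (an assumption on my part, not a tool referenced in this thread): dumpe2fs, run against the clone of the partition rather than the sick source, prints the free-block ranges of every block group; whatever is not listed there is occupied. A rough parse of that output follows, leaving the inversion into used intervals and the block-to-sector conversion as further manual steps. The device name is a placeholder.

[code]
#!/bin/bash
# Sketch only: extract per-group free-block ranges from dumpe2fs output.
# /dev/sdb4 stands in for the ext4 partition on the clone.
# Heavily fragmented groups whose free-block list wraps onto continuation
# lines are not handled by this quick parse.
sudo dumpe2fs /dev/sdb4 2>/dev/null |
  awk '/^Group /{g=1} g && /Free blocks:/' |   # skip the summary "Free blocks:" count in the header
  sed 's/^ *Free blocks: *//' |
  tr ',' '\n' | tr -d ' ' | grep . |
  awk -F- '{ if (NF == 2) print $1, $2; else print $1, $1 }'   # "first last" block per range
[/code]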
May 25th, 2019, 18:09
Good call on making sure the owner understands that this requires professional recovery for the best results, and that you may not be able to recover it on your own, at least not with good results.

I told the owner that, at this point, to maximize the recovery rate, it would be necessary to replace the heads, which I can't do. If he understands the risk and is absolutely certain that he will not pay the fee of a full-blown recovery service, then I'll proceed, but I'd like to do it in the wisest and most efficient way possible with the available means. There's still a lot that could be recovered out of what's seemingly missing, even with 1 bad head out of 8 on that model, but another one could start failing at any time if I try to scan everything that has not been scanned yet.
May 25th, 2019, 19:12
You are attempting to recover data from a drive that needs pro data recovery, and you will only get so far on your own.
If you do get permission to continue, I would finish the cloning with hddsuperclone through phase 2; that will get the most good data from the good heads. After that, I can't say what to do, as every case is different. You want to target data, but you don't know how, and targeting data is not easy. If you get enough data from the initial clone and the file table information is all there, you may be able to use the clone to figure out how to target the rest of the needed data.
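For the "using the clone" part, a minimal sketch (device name and mount point are assumptions): the partition on the clone can be mounted read-only to browse what has been recovered so far, without touching the source at all. The noload option skips ext4 journal replay, which matters here given the invalid journal superblock reported by dumpe2fs earlier.

[code]
# Sketch only: inspect the partial clone without touching the source drive.
# /dev/sdb4 stands in for the ext4 partition on the clone.
sudo mkdir -p /mnt/clone
sudo mount -o ro,noload /dev/sdb4 /mnt/clone
[/code]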
May 25th, 2019, 19:38
The Linux equivalent of robocopy is rsync. I can’t help with the options to say if there is an equivalent command to what you want.

Regarding another question I asked, do you happen to know a Linux command equivalent to Robocopy /CREATE, which would copy the entire file tree as 0-byte files? Or is this also more complicated on Linux partitions without “hammering” the drive?
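For what it's worth, one possible approximation of Robocopy /CREATE (my own sketch, not an rsync option): walk the mounted tree and recreate the directories plus zero-byte placeholder files at a destination. It reads directory entries and inode metadata only, not file contents, though on a damaged drive even that is not free. SRC and DST are placeholders.

[code]
#!/bin/bash
# Sketch only: recreate the directory tree of SRC under DST with empty files,
# roughly what Robocopy /CREATE does.
SRC=/mnt/clone       # mounted (read-only) source or clone - placeholder
DST=/mnt/tree_only   # where the empty skeleton is created - placeholder

cd "$SRC" || exit 1
# First pass: recreate every directory
find . -type d -print0 | while IFS= read -r -d '' d; do
  mkdir -p "$DST/$d"
done
# Second pass: create a zero-byte placeholder for every regular file
find . -type f -print0 | while IFS= read -r -d '' f; do
  : > "$DST/$f"
done
[/code]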
With the weak/bad head, I would stop the cloning after phase 2 and assess the results. Phases 1 and 2 get the most good data from the good heads with the least effort, after that it can start to thrash the drive. It is all about understanding what you are dealing with. I offer to analyze hddsuperclone logs, as long as they are not messed up by multiple runs and changed settings.

Indeed, if I finish the cloning process I should see where the rest of the data is, but at this point I'm afraid that the drive will be too worn out to then focus on a particular area of interest... That's why I'm asking for some specific advice, to preemptively avoid such a scenario, if at all possible.
May 25th, 2019, 19:43
And I have an older 2 head drive that has a weak head, and has survived many many testing hours with little change. And also a 6 head drive with a dying head that survived my recovery attempts to get the most out of it. But a head (or even heads) can die at any time if the drive is failing. That is the nature of DIY.

I've recovered two drives of mine with upward of 200 bad sectors each: one was a 100% success with only an NTFS system file corrupted, and the other had 6 corrupted files out of 3 TB, and those were from the dreaded Seagate STx000DM001 series! But now I realize that 4 TB is a lot to handle for a drive with even minor issues, and also that what seems like minor issues at first can quickly evolve into serious trouble.
May 25th, 2019, 21:52
With the weak/bad head, I would stop the cloning after phase 2 and assess the results. Phases 1 and 2 get the most good data from the good heads with the least effort, after that it can start to thrash the drive. It is all about understanding what you are dealing with. I offer to analyze hddsuperclone logs, as long as they are not messed up by multiple runs and changed settings.
May 26th, 2019, 7:57
There is something wrong with the skipping in the version you are using, and the skip size is not growing, possibly due to a bug with the skip resets. I don't remember that specific issue (too long ago), but that doesn't mean it didn't exist. It is supposed to adjust so that it only performs about 7 reads in each section of the bad head during phases 1 and 2.

It seems to me that even in phase 1 it spends too much time on problematic areas, but then again it's an outdated version I was using; I'll definitely update it next time I use it. (I'll run some tests until I get a reply from the owner on how he wants to proceed.)
May 26th, 2019, 9:11
May 26th, 2019, 9:25
The bad news is that it looks like the head died and is not reading any data. Make sure you don’t do a zero fill again, so as not to possibly wipe out any data that was recovered while the head was still working, just to be safe. Also, there is always the chance that the head could cause platter damage, making professional recovery of that surface more difficult, so any further attempts should be made with that knowledge. FYI, according to search results that model has 4 platters and 8 heads.
May 26th, 2019, 9:59
May 27th, 2019, 0:20
May 27th, 2019, 8:19
You are not missing anything, the option is not there. And the reason is that manually jumping around can mess with the algorithm, and make for a less efficient recovery.

As for changing the starting position before a copy run (which, I realize now, is different from the “input position” which affects the whole recovery), I can't find another way than editing the value in the log file. Is that how it's supposed to be done (I tried to RTFM but found nothing relevant) or am I missing something?
May 27th, 2019, 13:02
May 27th, 2019, 21:38
File system: Ext4
Total capacity: 3 995 958 509 568 bytes = 3,6 TB
Sector count: 7 804 606 464
Bytes per sector: 512
Bytes per cluster: 4 096
Free clusters: 830 488 873 = 85% free
Total clusters: 975 575 808
No. of Inodes: 243 900 416
No. of free Inodes: 243 819 640
No. of block groups: 29 773
Blocks per group: 32 768
Inodes per group: 8 192
Inode size: 256
[b]Uses sparse superblocks: Yes[/b]
Last mount time: 15/04/2019, 13:26:30
Last write time: 15/04/2019, 13:59:43
You are not missing anything, the option is not there. And the reason is that manually jumping around can mess with the algorithm, and make for a less efficient recovery.