July 8th, 2018, 11:00
July 8th, 2018, 16:05
How can I identify which file(s) was/were corrupted as a result of this small unreadable area? Is there a tool on Linux or Windows which can check the consistency of the recovered HFS+ partition?
July 8th, 2018, 16:23
– If so, how should I interpret those strange SMART values?
July 8th, 2018, 22:20
It has been a while since I wrote it, I no longer support it, and I never had an HFS+ partition to test against, but you could try ddru_findbad from ddrutility. It uses the ddrescue log and some other Linux utilities to find which file a bad sector belongs to. For a small number of sectors like you have, it should work... in theory.
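Even if ddru_findbad won't run, the bad ranges themselves can be read straight out of the ddrescue mapfile: its data lines are `pos size status` in hex, with `-` marking unreadable areas. A minimal sketch (the mapfile name `rescue.log` is just an example), assuming 512-byte sectors:

```shell
#!/bin/bash
# Print each bad area from a ddrescue mapfile as a sector range.
# Data lines look like:  0x12A63000  0x0000D000  -   (pos, size, status)
while read -r pos size status; do
    case $pos in
        \#*|'') continue ;;          # skip comment lines and blank lines
    esac
    [ "$status" = "-" ] || continue  # keep only unreadable (bad) areas
    echo "bad area: sector $(( pos / 512 )), $(( size / 512 )) sectors"
done < rescue.log
```

The resulting sector numbers can then be fed to whatever filesystem tool maps sectors to files.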
You need to use a tool that shows the raw value as hex so that it can be better understood. The way some tools interpret the value sometimes does not make sense. For Linux I know that HDDSuperTool will do that. I think that CrystalDiskInfo for Windows has an option to show the value in hex. For any other tools that can show the value in hex you are on your own.
The reason it helps to see it in hex is that vendors sometimes do weird things with the value that do not always conform to what you would consider normal.
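For example, the 7-byte raw values in the table below often pack several independent counters. Splitting attribute 190's raw value (0x0000001d1b001d) into bytes shows why the hex view helps; reading byte 0 as the current temperature is an assumption, but it lines up with the normalized value 71 = 100 - 29:

```shell
#!/bin/bash
# Split a packed SMART raw value into its individual bytes.
raw=0x0000001d1b001d   # attribute 190 from the table below
for i in 0 1 2 3 4 5 6; do
    b=$(( (raw >> (8 * i)) & 0xFF ))
    printf 'byte %d: 0x%02x (%d)\n' "$i" "$b" "$b"
done
# byte 0 is 0x1d = 29, which matches the normalized value 71 (100 - 29),
# so it is plausibly the current temperature in degrees C; the other
# non-zero bytes are likely min/max fields, but that is vendor-specific
# guesswork.
```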
Smart structure version= 10
ID#  FLAG    VALUE  WORST  THRESH  RAW DATA          ATTRIBUTE NAME
  1  0x000f    113     99       6  0x000000036f9f38  Read Error Rate
  3  0x0003     99     99       0  0x00000000000000  Spin-Up Time
  4  0x0032     99     99      20  0x000000000007e2  Start/Stop Count
  5  0x0033    100    100      36  0x00000000000000  Reallocated Sectors Count
  7  0x000f    100    253      30  0x000006000a9d85  Seek Error Rate
  9  0x0032     99     99       0  0x284197000005b2  Power-On Hours Count
 10  0x0013    100    100      97  0x00000000000000  Spin Retry Count
 12  0x0032     99     99      20  0x00000000000788  Power Cycle Count
184  0x0032    100    100      99  0x00000000000000  End-to-End error
187  0x0032    100    100       0  0x00000000000000  Reported Uncorrectable Errors
188  0x0032    100     99       0  0x00000000000012  Command Timeout
189  0x003a    100    100       0  0x00000000000000  High Fly Writes
190  0x0022     71     49      45  0x0000001d1b001d  Temperature
191  0x0032    100    100       0  0x00000000000023  G-Sense Errors
192  0x0032    100    100       0  0x000000000001ad  Power-Off Retract Cycles
193  0x0032     96     96       0  0x00000000002116  Load/Unload Cycles
194  0x0022     29     51       0  0x0000100000001d  Temperature
196  0x000f    100    100      30  0x29549b00000329  Reallocation Events
197  0x0012    100    100       0  0x00000000000000  Current Pending Sectors
198  0x0010    100    100       0  0x00000000000000  Off-line Uncorrectable
199  0x003e    200    200       0  0x00000000000000  UDMA CRC Error Rate
240  0x0000    100    100       0  0x29549b00000329  Head Flying Hours
241  0x0000    100    253       0  0x0000000b9ed79b  Total LBAs Written
242  0x0000    100    253       0  0x0000004218bc34  Total LBAs Read
254  0x0032    100    100       0  0x00000000000000  Free Fall Protection
July 9th, 2018, 19:20
July 9th, 2018, 19:22
I already used ddru_ntfsbitmap and ddru_ntfsfindbad in the past, and they worked very well (on a Knoppix live system), but this time I couldn't use either of those tools (on your own Lubuntu-based HDDLiveCD): it said “command not found”.
July 9th, 2018, 19:28
No one has any clue?
Another strange thing here is that the “reallocation events” and “head flying hours” values are exactly the same in hexadecimal...
July 10th, 2018, 14:19
lubuntu@lubuntu:~$ sudo ddrescue -o 312881152 -s 53248 -f /dev/zero /dev/sdb /media/lubuntu/354E48E260FCFD84/dev_zero_dev_sdb.log
GNU ddrescue 1.22
ipos: 0 B, non-trimmed: 0 B, current rate: 53248 B/s
opos: 312881 kB, non-scraped: 0 B, average rate: 53248 B/s
non-tried: 0 B, bad-sector: 0 B, error rate: 0 B/s
rescued: 53248 B, bad areas: 0, run time: 0s
pct rescued: 100.00%, read errors: 0, remaining time: n/a
time since last successful read: n/a
Finished
lubuntu@lubuntu:~$ sudo ddrescue -S -P -v -s 1073741824 /dev/sdb /media/lubuntu/354E48E260FCFD84/HGST_HTS541010A9E680_JD1092DP1M6WAU_1G_2.dd /media/lubuntu/354E48E260FCFD84/HGST_HTS541010A9E680_JD1092DP1M6WAU_1G_2.log
GNU ddrescue 1.22
About to copy 1073 MBytes from '/dev/sdb' to '/media/lubuntu/354E48E260FCFD84/HGST_HTS541010A9E680_JD1092DP1M6WAU_1G_2.dd'
Starting positions: infile = 0 B, outfile = 0 B
Copy block size: 128 sectors Initial skip size: 19584 sectors
Sector size: 512 Bytes
Press Ctrl-C to interrupt
Data preview:
003FFF0000 1C 00 27 A7 EB DA 6D F3 EE D3 4E 12 FC 5C 57 B5 ..'...m...N..\W.
003FFF0010 CF 44 FA 31 7E 7F A3 8F 80 5F B1 AA 7D A1 9F 4F .D.1~...._..}..O
003FFF0020 BE DA D6 AD 9C 7E FB E7 CE 0B 69 55 5F 0F 03 06 .....~....iU_...
ipos: 1073 MB, non-trimmed: 0 B, current rate: 11730 kB/s
opos: 1073 MB, non-scraped: 0 B, average rate: 59652 kB/s
non-tried: 0 B, bad-sector: 0 B, error rate: 0 B/s
rescued: 1073 MB, bad areas: 0, run time: 17s
pct rescued: 100.00%, read errors: 0, remaining time: n/a
time since last successful read: n/a
Finished
Partition /dev/loop1p2 Type HFS DeviceSector 611096 PartitionSector 201456 Block 25182 Allocated yes Inode 16 File /.journal
...
Partition /dev/loop1p2 Type HFS DeviceSector 611199 PartitionSector 201559 Block 25194 Allocated yes Inode 16 File /.journal
########## ddru_findbad 1.11 20141015 summary output file ##########
There are 104 bad sectors total from the log file
104 sectors were in partitions that were able to be processed
104 sectors are listed as allocated
104 of those have a file or data listing related to them
leaving 0 that do not have a file or data listing related to them
0 sectors are listed as not allocated
...................................................................
Below is the list of files related to the bad sectors
with the number of bad sectors in each file
...................................................................
BadSectors=104 Inode 16 File /.journal
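The byte offset and size for a targeted overwrite follow directly from this output: the bad run starts at DeviceSector 611096 (611096 through 611199 is 104 sectors). A sketch assuming 512-byte sectors:

```shell
#!/bin/bash
# Convert the bad sector range reported above into a byte offset/size
# suitable for ddrescue's -o and -s options.
first_sector=611096   # first bad DeviceSector from ddru_findbad
sectors=104           # number of bad sectors
sector_size=512
echo "offset: $(( first_sector * sector_size ))"   # 312881152
echo "size:   $(( sectors * sector_size ))"        # 53248
```

These are exactly the `-o 312881152 -s 53248` values used in the zero-fill command earlier in the thread.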
Maybe you did a read/verify and some bad blocks were added as "pending". Then you did a zero fill or write to those sectors, the sectors were found to be good and removed from the pending list, but they still count as a Reallocation Event. "Maybe" those sectors that are now considered good were marked/counted on attribute 240 as Write Head.
Normally attributes are "standard" even if not implemented exactly the same way. Standards are not always implemented as "expected", so different brands might have different responses even to standard commands... Examples are some drives/brands that will add sectors to the G-List on READ/Verify attempts, etc.
July 10th, 2018, 17:31
Spildit wrote:
maximus wrote: Good catch. Maybe that should be your biggest clue. Maybe the reallocation events value is not really that. Any values from SMART are totally vendor-specific, meaning they can do anything they want. The fact that there is other higher-order data in the value is an indication to me that it is something other than a normal reallocation event count.
196=Reallocation Event Count.
240=Head Flying Hours / Write Head.
Maybe you did a read/verify and some bad blocks were added as "pending". Then you did a zero fill or write to those sectors, the sectors were found to be good and removed from the pending list, but they still count as a Reallocation Event. "Maybe" those sectors that are now considered good were marked/counted on attribute 240 as Write Head.
Normally attributes are "standard" even if not implemented exactly the same way. Standards are not always implemented as "expected", so different brands might have different responses even to standard commands... Examples are some drives/brands that will add sectors to the G-List on READ/Verify attempts, etc.
July 10th, 2018, 17:48
maximus wrote: Both attributes are reporting the exact same values. But if you compare the HDDSuperTool and CrystalDiskInfo results, the values changed (yet are still identical), and he obtained those results from both tools just now. That is an indication of a very dynamic value that has nothing to do with the reallocation event count. Maybe it is a firmware bug...
July 10th, 2018, 18:09
maximus wrote: Both attributes are reporting the exact same values. But if you compare the HDDSuperTool and CrystalDiskInfo results, the values changed (yet are still identical), and he obtained those results from both tools just now. That is an indication of a very dynamic value that has nothing to do with the reallocation event count. Maybe it is a firmware bug...
+1
July 10th, 2018, 18:24
Yes, I do agree.
July 10th, 2018, 19:54
abolibibelot wrote: Alright, so everybody agrees, but what is the operational relevance of that information, if any? :)
July 10th, 2018, 20:46
[Regarding the HGST 1TB drive]
So, with no further insight, I did what my intuition told me: I attempted to overwrite just the tiny unreadable area with this ddrescue command (it could be done with the more basic dd command, but I'm less familiar with it):
So, am I correct that this was a case of “logical” bad sectors, and that the drive is physically fine and safe to use again? What is the likely cause of this, perhaps a write operation interrupted by an improper shutdown? Is this a common issue, and does it commonly render the drive inoperative when it affects a system file?
July 10th, 2018, 21:12
If after the writing of the bad sectors there are no reallocated or pending sectors, then logically one would consider the drive ok for further use. I would like to add that personally, I don't trust any drive that has had a failure in any way. So while I say that logically the drive should be ok for use, I personally would never trust it. But that may be just me being paranoid.
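One way to check that condition is smartctl from smartmontools: attribute 5 is the reallocated count and 197 the pending count, and the awk filter just picks those two rows out of the attribute table (the device path is an example, adjust to your drive):

```shell
#!/bin/bash
# Show reallocated (ID 5) and pending (ID 197) sector counts.
# In smartctl -A output, field 1 is the attribute ID and the last
# field is the raw value.
sudo smartctl -A /dev/sdb | awk '$1 == 5 || $1 == 197 { print $2 ": " $NF }'
```

Both raw values should read 0 if no sectors were actually remapped or are still pending.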