 Post subject: Weird SMART data on two different external HDDs
PostPosted: July 8th, 2018, 11:00 

Joined: November 22nd, 2017, 21:47
Posts: 309
Location: France
Hi,

I got an inquiry from two people who each had an external HDD that was no longer recognized.
In a nutshell: I could image both successfully, but the SMART data seem inconsistent.

The first one is a HGST HTS541010A9E680, 2.5", 1TB capacity; the owner brought it to me with just a USB 2.0 bridge (no enclosure), with the pins between the SATA connector and the circuit board bent at a ~30° angle. He said that the drive was no longer correctly accessible on his MacBook computer: sometimes the partition's name and/or root folder would briefly appear and then vanish.
When I looked at the SMART data in GSmartControl, everything looked about normal except for a value of 5 in “UDMA CRC Error Count”, and I have drives with a higher value there which otherwise seem fine.
Then I imaged it with ddrescue: it copied everything except a 53248-byte portion (104 sectors) near the 300MB mark. The copy rate was decent, about 30MB/s on average, though lower than I'd have expected for that kind of drive, considering that it was connected directly over SATA and the destination file was written to a 2TB HDD, also on a direct SATA connection. There were some brief slowdowns down to about 5MB/s, and some bursts around 100MB/s (most likely empty areas, as I used the -S switch to write the image in “sparse” mode). At that point there was still no “pending sector” reported in SMART. Then I ran the short self-test: it reported “Completed with read failure” – yet there is still no warning whatsoever in SMART.
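For reference, the imaging command was of this general form (the device name and destination paths here are placeholders, not the exact ones I used):
Code:
# Whole-drive image, sparse output (-S), with a mapfile so the run can be resumed
sudo ddrescue -S /dev/sdX /media/backup/drive_image.dd /media/backup/drive_image.log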
– Could this bad area be a benign case of “logical” bad sectors, i.e. sectors which are still physically operational but are in an inconsistent state? And if so, could they be repaired simply by overwriting them?
– Is the drive safe to use again?

Attachment: 2018-07-06-122347_1061x748_scrot.png

Attachment: 2018-07-07-093510_1061x748_scrot.png

Attachment: 2018-07-07-093413_675x549_scrot.png

Attachment: 2018-07-07-094649_1061x748_scrot.png

Attachment: 2018-07-07-095125_1061x748_scrot.png


The second one is a Seagate ST500LT012, 2.5", 500GB capacity, in a Seagate USB 3.0 enclosure. The owner said that when she tried to connect it she could hear a motor noise, but the LED would no longer flicker as before, and it couldn't be accessed anymore, either on her computer or on a standalone device. I first tried plugging it in with its original cable and got the same result as described. Noticing that this cable was quite loose, I then tried connecting the enclosure as-is with two other cables of mine: same result (sometimes it wouldn't start at all, and just slightly pulling on the connector would turn it on, but it wasn't recognized by Lubuntu). Then I opened the enclosure (which really was a pain in the... fingertips! I finally had to use a knife to get it done – really stupid design...) and plugged the drive in directly over SATA: it was then properly recognized. But the SMART status showed some weirdness again: it reported 786 “reallocation events”, yet not a single “reallocated sector”. How is that possible? Also, the “Head Flying Hours” value seems abnormally high, and the “Seek Error Rate” doesn't look good either.
I also imaged that drive: 100% was copied (not a single error), with a better copy rate than the HGST's, about 50MB/s on average. Both the short and the long self-tests “completed without error”. But now the “Reallocation Event Count” has risen to 803. Then I looked more closely at the USB connector: the left connector seems damaged.
– Is this consistent with the behaviour described above, when the drive is connected through that enclosure?
– Is the drive itself safe to use again? (Obviously the enclosure is not, and it wouldn't be worth the trouble trying to fix it, since a new one costs about 10€.)
– If so, how should I interpret those strange SMART values?

Attachment: 2018-07-07-100458_1084x857_scrot.png

Attachment: 2018-07-08-112503_1094x817_scrot.png

Attachment: SAM_0691 détail 960x720.png

Attachment: SAM_0692 détail 960x720.png


Side question:
I have three spare external drives which could be used for the extraction or final cloning of each user's data: a 320GB (USB 2.0), a 500GB and a 1TB (both USB 3.0). The 1TB HGST is HFS+ formatted and only filled to about 270GB. This is the first time I've dealt with an Apple-formatted drive. What would be the simplest way, on Lubuntu, to either clone or extract the user's data to a smaller-capacity drive so that it is readily readable on the owner's Apple computer? Is there a way to directly write the image file in such a way that it fits on a smaller-capacity drive while preserving the consistency of the partition table? Or is it safer to create an HFS+ partition and just copy the data? Would that natively preserve the timestamps and other relevant attributes, or do I have to use a specific method to keep the metadata? (The user will probably not care about that, but I would, and I'd prefer not to mess these up if it's not too much trouble.)
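If the copy-the-data route is the way to go, I imagine something along these lines would do it, assuming both HFS+ volumes are mounted read-write on Lubuntu (paths are placeholders):
Code:
# -a keeps timestamps/permissions/symlinks; -X tries to carry extended attributes
# (how much HFS+ metadata survives depends on the Linux hfsplus driver)
sudo rsync -aX --info=progress2 /media/lubuntu/SOURCE_HFS/ /media/lubuntu/DEST_HFS/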

Another side question:
How can I identify which file(s) were corrupted as a result of this small unreadable area? Is there a tool on Linux or Windows which can check the consistency of the recovered HFS+ partition?

Thanks.


 Post subject: Re: Weird SMART data on two different external HDDs
PostPosted: July 8th, 2018, 16:05 

Joined: January 29th, 2012, 1:43
Posts: 982
Location: United States
If this is a duplicate, I apologize – I thought I hit submit, but my original reply is not showing up...
Quote:
How can I identify which file(s) were corrupted as a result of this small unreadable area? Is there a tool on Linux or Windows which can check the consistency of the recovered HFS+ partition?

It has been a while since I wrote it, I no longer support it, and I never had an HFS+ partition to test with, but you could try ddru_findbad from ddrutility. It uses the ddrescue log and some other Linux utilities to find which file a bad sector belongs to. For a small number of sectors like you have, it should work... in theory :)
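From memory, the invocation is along these lines – check ddru_findbad --help for the exact syntax, and treat the names below as placeholders:
Code:
# Point it at the source (drive or image) and the ddrescue log/mapfile;
# it writes its findings to output files in the current directory
sudo ddru_findbad /dev/sdX rescue.log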

_________________
http://www.hddsuperclone.com
Home of HDDSuperClone


 Post subject: Re: Weird SMART data on two different external HDDs
PostPosted: July 8th, 2018, 16:23 

Joined: January 29th, 2012, 1:43
Posts: 982
Location: United States
Quote:
– If so, how should I interpret those strange SMART values?

You need to use a tool that shows the raw value as hex so that it can be better understood. The way some tools interpret the value sometimes does not make sense. On Linux, I know that HDDSuperTool will do that. I think CrystalDiskInfo for Windows has an option to show the value in hex. For any other tools that can show the value in hex, you are on your own.

The reason it helps to see it in hex is that vendors sometimes do weird things with the value that do not always conform to what you would consider normal.
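smartctl can also do this if your smartmontools build is recent enough (I believe the -f/--format option accepts a hex mode, but double-check your man page):
Code:
# Print the attribute table with the raw values shown in hexadecimal
sudo smartctl -A -f hex,val /dev/sdX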

_________________
http://www.hddsuperclone.com
Home of HDDSuperClone


 Post subject: Re: Weird SMART data on two different external HDDs
PostPosted: July 8th, 2018, 22:20 

Joined: November 22nd, 2017, 21:47
Posts: 309
Location: France
Quote:
It has been a while since I wrote it, I no longer support it, and I never had an HFS+ partition to test with, but you could try ddru_findbad from ddrutility. It uses the ddrescue log and some other Linux utilities to find which file a bad sector belongs to. For a small number of sectors like you have, it should work... in theory :)

Alright, so you're also the author of ddrutility – I feel doubly humbled! :)
I already used ddru_ntfsbitmap and ddru_ntfsfindbad in the past and they worked very well (on a Knoppix live system), but this time I couldn't use any of those tools (on your own Lubuntu-based HDDLiveCD): it said “command not found”. I did some brief research and tried a few tricks, to no avail (I don't have a lot of experience with Linux systems, which I've mostly used for data recovery purposes so far, so I'm not very familiar with the general issues and their workarounds). Maybe it was only a glitch, a temporary setback; I'll try again later on.
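For reference, the kind of basic checks I tried (nothing conclusive):
Code:
# See whether the tools are present and reachable through the PATH
which ddru_findbad ddru_ntfsfindbad
echo $PATH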

Any idea whether those sectors are actually bad, as in physically bad, or whether they are somehow “fixable”?
Any idea how to write the image to a smaller-capacity drive?


Quote:
You need to use a tool that shows the raw value as hex so that it can be better understood. The way some tools interpret the value sometimes does not make sense. On Linux, I know that HDDSuperTool will do that. I think CrystalDiskInfo for Windows has an option to show the value in hex. For any other tools that can show the value in hex, you are on your own.
The reason it helps to see it in hex is that vendors sometimes do weird things with the value that do not always conform to what you would consider normal.

So here are the SMART values from HDDSuperTool for the Seagate drive:
Code:
Smart structure version= 10
ID#   FLAG  VALUE WORST THRESH   RAW DATA          ATTRIBUTE NAME
  1  0x000f  113    99     6   0x000000036f9f38   Read Error Rate
  3  0x0003   99    99     0   0x00000000000000   Spin-Up Time
  4  0x0032   99    99    20   0x000000000007e2   Start/Stop Count
  5  0x0033  100   100    36   0x00000000000000   Reallocated Sectors Count
  7  0x000f  100   253    30   0x000006000a9d85   Seek Error Rate
  9  0x0032   99    99     0   0x284197000005b2   Power-On Hours Count
 10  0x0013  100   100    97   0x00000000000000   Spin Retry Count
 12  0x0032   99    99    20   0x00000000000788   Power Cycle Count
184  0x0032  100   100    99   0x00000000000000   End-to-End error
187  0x0032  100   100     0   0x00000000000000   Reported Uncorrectable Errors
188  0x0032  100    99     0   0x00000000000012   Command Timeout
189  0x003a  100   100     0   0x00000000000000   High Fly Writes
190  0x0022   71    49    45   0x0000001d1b001d   Temperature
191  0x0032  100   100     0   0x00000000000023   G-Sense Errors
192  0x0032  100   100     0   0x000000000001ad   Power-Off Retract Cycles
193  0x0032   96    96     0   0x00000000002116   Load/Unload Cycles
194  0x0022   29    51     0   0x0000100000001d   Temperature
196  0x000f  100   100    30   0x29549b00000329   Reallocation Events
197  0x0012  100   100     0   0x00000000000000   Current Pending Sectors
198  0x0010  100   100     0   0x00000000000000   Off-line Uncorrectable
199  0x003e  200   200     0   0x00000000000000   UDMA CRC Error Rate
240  0x0000  100   100     0   0x29549b00000329   Head Flying Hours
241  0x0000  100   253     0   0x0000000b9ed79b   Total LBAs Written
242  0x0000  100   253     0   0x0000004218bc34   Total LBAs Read
254  0x0032  100   100     0   0x00000000000000   Free Fall Protection

Does this tell you (or anybody else) a more complete or more consistent story?
I haven't checked the HGST that way, but the “pending sector count” and “reallocated sector count” would most likely be 0 in hex as well.


 Post subject: Re: Weird SMART data on two different external HDDs
PostPosted: July 9th, 2018, 19:20 

Joined: November 22nd, 2017, 21:47
Posts: 309
Location: France
No one has any clue?
Another strange thing here is that the “reallocation events” and “head flying hours” raw values are exactly the same in hexadecimal...
Hard Disk Sentinel, which also displays the raw hex values, issues no warning about the drive's status:
Attachment: ST500LT012 HDSentinel SMART “reallocation event count”.png


 Post subject: Re: Weird SMART data on two different external HDDs
PostPosted: July 9th, 2018, 19:22 

Joined: January 29th, 2012, 1:43
Posts: 982
Location: United States
Quote:
I already used ddru_ntfsbitmap and ddru_ntfsfindbad in the past and they worked very well (on a Knoppix live system), but this time I couldn't use any of those tools (on your own Lubuntu-based HDDLiveCD): it said “command not found”

While I don't support ddrutility any more, I do maintain that it should work as originally intended on my own live CD. If you are using an older version of the live CD, please try the latest version. Maybe there were versions where it was not included properly, but I can't find that in the changelog; it has been included since the second release of the live CD. If you still have issues running anything from ddrutility on the latest HDDLiveCD, please contact me directly at sdcomputingservice@gmail.com with as much information as possible.

_________________
http://www.hddsuperclone.com
Home of HDDSuperClone


 Post subject: Re: Weird SMART data on two different external HDDs
PostPosted: July 9th, 2018, 19:28 

Joined: January 29th, 2012, 1:43
Posts: 982
Location: United States
Quote:
No one has any clue?
Another strange thing here is that the “reallocation events” and “head flying hours” raw values are exactly the same in hexadecimal...

Good catch. Maybe that should be your biggest clue: maybe the reallocation events attribute is not really that. SMART values are totally vendor specific, meaning vendors can do anything they want. The fact that there is other higher-order data in the value is an indication to me that it is something other than a normal reallocation event count.
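To illustrate what I mean by higher-order data, take the temperature attribute (190) from your table, raw value 0x0000001d1b001d. A plausible reading – my guess, not vendor documentation – is several temperatures packed into separate bytes:
Code:
#!/bin/bash
raw=0x0000001d1b001d
echo "current: $(( raw         & 0xFF )) C"   # 0x1d = 29, matches the normalized value (100-29=71)
echo "min?:    $(( (raw >> 16) & 0xFF )) C"   # 0x1b = 27
echo "max?:    $(( (raw >> 24) & 0xFF )) C"   # 0x1d = 29
A value like that is clearly not one plain counter, and attribute 196 here may be packed in some similar vendor-specific way.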

_________________
http://www.hddsuperclone.com
Home of HDDSuperClone


 Post subject: Re: Weird SMART data on two different external HDDs
PostPosted: July 10th, 2018, 14:19 

Joined: November 22nd, 2017, 21:47
Posts: 309
Location: France
[Regarding the HGST 1TB drive]

So, with no further insight, I did what my intuition told me: I attempted to overwrite just the tiny unreadable area with this ddrescue command (it could be done with the more basic dd command, but I'm less familiar with it):
Code:
lubuntu@lubuntu:~$ sudo ddrescue -o 312881152 -s 53248 -f /dev/zero /dev/sdb /media/lubuntu/354E48E260FCFD84/dev_zero_dev_sdb.log
GNU ddrescue 1.22
     ipos:        0 B, non-trimmed:        0 B,  current rate:   53248 B/s
     opos:  312881 kB, non-scraped:        0 B,  average rate:   53248 B/s
non-tried:        0 B,  bad-sector:        0 B,    error rate:       0 B/s
  rescued:    53248 B,   bad areas:        0,        run time:          0s
pct rescued:  100.00%, read errors:        0,  remaining time:         n/a
                              time since last successful read:         n/a
Finished

[Note: the -f switch is necessary here, since ddrescue natively has a protection preventing it from writing directly to a physical device.]
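For the record, I believe the dd equivalent would be something like this (untested; same offsets as above converted to 512-byte sectors, 312881152/512 = 611096 and 53248/512 = 104):
Code:
# Zero out 104 sectors starting at sector 611096, writing directly to the device
sudo dd if=/dev/zero of=/dev/sdb bs=512 seek=611096 count=104 oflag=direct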
And the ddrescue command worked: to verify, I re-imaged the first GB, and this time there was no error. (I had tried this partial imaging before running the above command, and the error area was still there then, with the exact same location and size. I also noticed that it was skipped right away, with no slowdown, contrary to what usually happens with an actual “physical” bad sector, where the drive slows down or hangs for a few seconds before skipping.) The short self-test now completes with no error as well.
Code:
lubuntu@lubuntu:~$ sudo ddrescue -S -P -v -s 1073741824 /dev/sdb /media/lubuntu/354E48E260FCFD84/HGST_HTS541010A9E680_JD1092DP1M6WAU_1G_2.dd /media/lubuntu/354E48E260FCFD84/HGST_HTS541010A9E680_JD1092DP1M6WAU_1G_2.log
GNU ddrescue 1.22
About to copy 1073 MBytes from '/dev/sdb' to '/media/lubuntu/354E48E260FCFD84/HGST_HTS541010A9E680_JD1092DP1M6WAU_1G_2.dd'
    Starting positions: infile = 0 B,  outfile = 0 B
    Copy block size: 128 sectors       Initial skip size: 19584 sectors
Sector size: 512 Bytes

Press Ctrl-C to interrupt
Data preview:
003FFF0000  1C 00 27 A7 EB DA 6D F3  EE D3 4E 12 FC 5C 57 B5  ..'...m...N..\W.
003FFF0010  CF 44 FA 31 7E 7F A3 8F  80 5F B1 AA 7D A1 9F 4F  .D.1~...._..}..O
003FFF0020  BE DA D6 AD 9C 7E FB E7  CE 0B 69 55 5F 0F 03 06  .....~....iU_...

     ipos:    1073 MB, non-trimmed:        0 B,  current rate:  11730 kB/s
     opos:    1073 MB, non-scraped:        0 B,  average rate:  59652 kB/s
non-tried:        0 B,  bad-sector:        0 B,    error rate:       0 B/s
  rescued:    1073 MB,   bad areas:        0,        run time:         17s
pct rescued:  100.00%, read errors:        0,  remaining time:         n/a
                              time since last successful read:         n/a
Finished

Attachment: 2018-07-10-040019_750x803_scrot.png

Before that, I tried some Windows tools: a read scan with Hard Disk Sentinel made it freeze indefinitely and I had to shut down the drive; likewise, trying to access the problematic area with WinHex made it freeze until the drive was shut down.
So, am I correct that this was a case of “logical” bad sectors, and that the drive is physically fine and safe to use again? What is the likely cause of this – perhaps a write operation interrupted by an improper shutdown? Is this a common issue, and does it commonly render the drive inoperative when it affects a system file?

Regarding ddrutility, I tried again after starting fresh from the same live USB drive (actually an SD card connected through a USB reader) and this time it worked. Previously I had downloaded ddrutility-2.8.tar.gz, extracted it and installed it following the included manual, not knowing that it was pre-installed on this ISO; maybe there was a version conflict or something. And according to the analysis from ddru_findbad, all the formerly unreadable sectors belonged to the same “/.journal” file:
Code:
Partition /dev/loop1p2 Type HFS DeviceSector 611096 PartitionSector 201456 Block 25182 Allocated yes Inode 16 File /.journal
...
Partition /dev/loop1p2 Type HFS DeviceSector 611199 PartitionSector 201559 Block 25194 Allocated yes Inode 16 File /.journal

Code:
########## ddru_findbad 1.11 20141015 summary output file ##########
There are 104 bad sectors total from the log file
104 sectors were in partitions that were able to be processed
104 sectors are listed as allocated
104 of those have a file or data listing related to them
leaving 0 that do not have a file or data listing related to them
0 sectors are listed as not allocated
...................................................................
Below is the list of files related to the bad sectors
with the number of bad sectors in each file
...................................................................
BadSectors=104 Inode 16 File /.journal

I guess that this is roughly equivalent to the $LogFile and $UsnJrnl files in NTFS. Could someone please elaborate? Does the owner have to run some integrity-check procedure in order to use the drive safely, and what is that called in macOS? (Again, I have very little experience with anything from Apple computers.)
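To be concrete, here is what I would tentatively try, either on Lubuntu with the hfsprogs package installed or on the owner's Mac (commands from memory, device/volume names are placeholders – corrections welcome):
Code:
# On Linux (hfsprogs): force a consistency check of the HFS+ partition
sudo fsck.hfsplus -f /dev/sdX2

# On macOS: verify the mounted volume (Disk Utility's “First Aid” does the same)
diskutil verifyVolume /Volumes/VOLUME_NAME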


[Regarding the Seagate drive]
Quote:
Maybe you did a read/verify and some bad blocks were added as “pending”. Then you did a zero fill or write to those sectors, and the sectors were found to be good and removed from the pending list, but they still count as a Reallocation Event. “Maybe” those sectors that are now considered good were marked/counted in attribute 240 as Write Head.
Normally attributes are “standard” even if not implemented exactly the same way. Standards are not always implemented as “expected”, so different brands might have different responses even to standard commands... An example is some drives/brands that will add sectors to the G-list on read/verify attempts, etc.

No, I imaged the drive right away, didn't write anything to it, and at no point was there anything but “0” in the “Pending Sector Count” field.
From what I understand, a reallocation event is just that, as the name implies: when a sector actually gets reallocated, i.e. moved to the G-list – not when it's counted as “pending” and then cleared again. Right?
What exactly is “Write Head”, and how come the same SMART attribute code can refer to two distinct notions? As “maximus” said, these values are vendor specific, so does anyone have specific experience with that model or that range of models, and definite knowledge of what those values mean here? Again, can I confidently tell the owner that the drive is safe to use? (Well, except for the fact that it's a Seagate, alright...)


 Post subject: Re: Weird SMART data on two different external HDDs
PostPosted: July 10th, 2018, 17:31 

Joined: January 29th, 2012, 1:43
Posts: 982
Location: United States
Spildit wrote:
maximus wrote:
Good catch. Maybe that should be your biggest clue: maybe the reallocation events attribute is not really that. SMART values are totally vendor specific, meaning vendors can do anything they want. The fact that there is other higher-order data in the value is an indication to me that it is something other than a normal reallocation event count.


196=Reallocation Event Count.
240=Head Flying Hours / Write Head.

Maybe you did a read/verify and some bad blocks were added as “pending”. Then you did a zero fill or write to those sectors, and the sectors were found to be good and removed from the pending list, but they still count as a Reallocation Event. “Maybe” those sectors that are now considered good were marked/counted in attribute 240 as Write Head.

Normally attributes are “standard” even if not implemented exactly the same way. Standards are not always implemented as “expected”, so different brands might have different responses even to standard commands... An example is some drives/brands that will add sectors to the G-list on read/verify attempts, etc.

Both attributes are reporting the exact same values. But if you compare the HDDSuperTool and CrystalDiskInfo results, the values changed between the two readings (but are still identical to each other). And he obtained those results from both tools just recently. That is an indication of a very dynamic value that has nothing to do with a reallocation event count. Maybe it is a firmware bug that both attributes produce the same value, or just something the vendor did. But in this case I can't see how attribute 196 could actually be a reallocation count of any sort; that does not make sense in the context presented.

_________________
http://www.hddsuperclone.com
Home of HDDSuperClone


 Post subject: Re: Weird SMART data on two different external HDDs
PostPosted: July 10th, 2018, 17:48 

Joined: September 8th, 2009, 18:21
Posts: 15440
Location: Australia
maximus wrote:
Both attributes are reporting the exact same values. But if you compare the HDDSuperTool and CrystalDiskInfo results, the values changed between the two readings (but are still identical to each other). And he obtained those results from both tools just recently. That is an indication of a very dynamic value that has nothing to do with a reallocation event count. Maybe it is a firmware bug ...

+1

_________________
A backup a day keeps DR away.


 Post subject: Re: Weird SMART data on two different external HDDs
PostPosted: July 10th, 2018, 18:09 

Joined: November 22nd, 2017, 21:47
Posts: 309
Location: France
Quote:
maximus wrote:
Quote:
Both attributes are reporting the exact same values. But if you compare the HDDSuperTool and CrystalDiskInfo results, the values changed between the two readings (but are still identical to each other). And he obtained those results from both tools just recently. That is an indication of a very dynamic value that has nothing to do with a reallocation event count. Maybe it is a firmware bug ...

+1

Would that bug affect this drive model in general, or just that particular unit? And is it something to worry about from the end user's standpoint, or not at all? Have you seen something similar before?
(Correction: the screenshot above is from Hard Disk Sentinel, not CrystalDiskInfo, although CDI would most likely have reported the same values at the same point in time.)

Any insight regarding those 104 sectors formerly known as bad? (On the HGST drive.)


 Post subject: Re: Weird SMART data on two different external HDDs
PostPosted: July 10th, 2018, 18:24 

Joined: November 22nd, 2017, 21:47
Posts: 309
Location: France
Quote:
Yes, I do agree.

Alright, so everybody agrees – but what is the operational relevance of that information, if any? :)


 Post subject: Re: Weird SMART data on two different external HDDs
PostPosted: July 10th, 2018, 19:54 

Joined: September 8th, 2009, 18:21
Posts: 15440
Location: Australia
abolibibelot wrote:
Alright, so everybody agrees – but what is the operational relevance of that information, if any? :)

If you read the following documents, then you will know as much as anyone else. :-|

Seagate SMART Attribute Specification:
http://t1.daumcdn.net/brunch/service/user/axm/file/zRYOdwPu3OMoKYmBOby1fEEQEbU.pdf

Normal SATA SMART Attribute Behavior (Seagate):
http://t1.daumcdn.net/brunch/service/user/axm/file/Vw3RJSZllYbDc86ssL6bofiL4r0.pdf

_________________
A backup a day keeps DR away.


 Post subject: Re: Weird SMART data on two different external HDDs
PostPosted: July 10th, 2018, 20:46 

Joined: January 29th, 2012, 1:43
Posts: 982
Location: United States
Quote:
[Regarding the HGST 1TB drive]
So, with no further insight, I did what my intuition told me: I attempted to overwrite just the tiny unreadable area with this ddrescue command (it could be done with the more basic dd command, but I'm less familiar with it):
Quote:
So, am I correct that this was a case of “logical” bad sectors, and that the drive is physically fine and safe to use again? What is the likely cause of this – perhaps a write operation interrupted by an improper shutdown? Is this a common issue, and does it commonly render the drive inoperative when it affects a system file?

If there are no reallocated sectors in SMART, then it is very possible it was a logical issue. But you did not post the SMART output from afterwards, so we do not know. As to why it could happen: you did say the drive came to you with a USB bridge attached with bent pins. A bad power connection to the drive could always cause some issues. It is unclear whether that was the cause, but it is a possibility. If, after writing over the bad sectors, there are no reallocated or pending sectors, then logically one would consider the drive OK for further use.
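For example, after the write you could quickly re-check the two counters that matter here (any SMART tool will do; the device name is a placeholder):
Code:
sudo smartctl -A /dev/sdb | grep -Ei 'realloc|pending'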

_________________
http://www.hddsuperclone.com
Home of HDDSuperClone


 Post subject: Re: Weird SMART data on two different external HDDs
PostPosted: July 10th, 2018, 21:12 

Joined: January 29th, 2012, 1:43
Posts: 982
Location: United States
Quote:
If, after writing over the bad sectors, there are no reallocated or pending sectors, then logically one would consider the drive OK for further use.
I would like to add that, personally, I don't trust any drive that has had a failure of any kind. So while I say that logically the drive should be OK for use, I personally would never trust it. But that may just be me being paranoid :wink:

_________________
http://www.hddsuperclone.com
Home of HDDSuperClone

