Quote:
Very old WD drives had a WDC MCU. The latest WD drives with a WDC MCU were Arch-VI Cyl 32, and they were then replaced with Marvell MCUs. The first WD drives on the market with a Marvell MCU had slightly different firmware than "modern" ROYL. There are just a bunch of families on the market that were Marvell-based and NOT ROYL. You can identify ROYL drives by checking the modules of the firmware. Here:
http://yura.puslapiai.lt/files/wd/mhdd/wd_royl_rom.html but any recent WD drive that you will encounter will be ROYL.
I would guess that the O.P. (if that person is still stealthily lurking around here) doesn't know what an "MCU" is, and neither do I... Maybe "micro chip unit"? (Just a guess, didn't search.)
Quote:
There is NO FIX for the slow issue. Once the drive starts to have problems, it MUST be replaced. Even ARCO / SELF-SCAN can't be considered a very reliable way to fix the drive. There is a "patch" that simply disables the media scan for bad sectors and clears the re-lo list. This is to make the drive fast again in order to RECOVER THE DATA. It is NOT for drive re-use.
Alright, the term "fixed" wasn't the most appropriate, but that was fairly evident in context: using a hard disk drive with even a few bad sectors for important data is always a bad idea, regardless of the brand, even without that specific issue on that range of WD drives. So I meant "fixed" in the sense that "that particular issue can be dealt with if push comes to shove".
Quote:
No hardware tool required and it's NOT that advanced ... It's quite simple honestly.
Most people (heck, even most general-purpose computer professionals) don't know that hard drives have firmware which can receive special commands, so even if it's relatively simple to do once you know how, I would still say that this is an advanced procedure.
Quote:
Again, if you have "pro" tools like PC-3000 DE, when you clone a drive you can see which blocks are bad and which files belong to those blocks.
Yeah, I expect that pro tools (why do you use quotation marks here? isn't this "professional" stuff in the strictest sense?) can do everything that can be done in the most efficient and reliable way possible... but I currently don't have access to that kind of very expensive tool, so I do what I can with what I have!
Quote:
Regarding cloning of bad drives: if heads are dying, it might be worth going after the files you need first, and only then attempting to clone the rest of the drive....
So my instincts were right (for once!) when I dealt with the 3TB one. But if everything is pretty much equally important, or it would be too tedious and time-consuming (and time is precious when dealing with a failing HDD) to sort out what is absolutely crucial / important / useful / trivial, then it's probably best to do a full clone right away and hope for the best, like I did with the 2TB one.
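For what it's worth, ddrescue itself can't target individual files, but the same triage idea applies at the block level: grab everything that reads easily first, then go back for the difficult areas. A minimal sketch (`/dev/sdX`, `drive.img` and `drive.mapfile` are placeholder names):

```shell
# Pass 1: copy all the easy data fast; -n skips the slow "scraping"
# phase so weak heads spend as little time as possible over bad areas.
ddrescue -n /dev/sdX drive.img drive.mapfile

# Pass 2: return to the remaining areas, retrying bad sectors up to
# 3 times; the mapfile lets ddrescue resume exactly where it left off.
ddrescue -r3 /dev/sdX drive.img drive.mapfile
```

Because all progress is tracked in the mapfile, the drive can be powered down between passes and the copy resumed later without re-reading anything already rescued.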
What are the obvious symptoms of heads dying, as opposed to surface defects?
For instance, here's a ddrescueview screenshot of a 1TB Hitachi HDD which was handed to me last summer. I tried to recover as much data as possible from it with software methods (ddrescue + R-Studio), and was quite successful: luckily, it was only filled to about a quarter, so I got almost all the user data. Only about 130 files were corrupted according to ddru_ntfsfindbad, and since there were many duplicated files, which I identified with DoubleKiller, I managed to further repair more than 100 of them; the guy was very happy. Does such a pattern, with alternating stripes of good and bad areas, mean that one head was kaputt at that point?
At first I tried to copy everything; there were slowdowns, but it was copying steadily. Then, after about 200GB, it started disconnecting and making frightening clicks. After a few power cycles it was copying again but very slowly, at which point I wondered if writing the image to an NTFS partition could be the cause, as I had read in a ddrescue tutorial (I exposed the issue here). Then I started all over again; strangely, this time there were no slowdowns at the beginning, but it started to have major hiccups earlier, around 160GB. At some point I tried ddru_ntfsbitmap, which proved tremendously useful in a case like this (I should have done that at the beginning); that's why large areas are non-tried beyond 250GB (those areas were marked as empty in the bitmap). I still had to give up after a while, as it had slowed down to a crawl and it would have taken hours to get 1MB.
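Incidentally, the picture ddrescueview draws can also be summarized numerically from the ddrescue mapfile itself: after the comment and current-position lines, each entry is `position size status` in hex, with `+` rescued, `-` bad sector, `?` non-tried, `*` non-trimmed and `/` non-scraped. A minimal sketch (the sample mapfile below is made up for illustration):

```python
from collections import defaultdict

def summarize_mapfile(text):
    """Sum bytes per status character in a GNU ddrescue mapfile."""
    totals = defaultdict(int)
    data_lines = [line for line in text.splitlines()
                  if line.strip() and not line.lstrip().startswith("#")]
    # The first non-comment line is the current-position status line;
    # the actual block entries ("pos size status") follow it.
    for line in data_lines[1:]:
        pos, size, status = line.split()[:3]
        totals[status] += int(size, 16)
    return dict(totals)

# Made-up miniature mapfile: 1MiB rescued, 64KiB bad, 2MiB non-tried.
sample = """\
# Mapfile. Created by GNU ddrescue
# current_pos  current_status
0x00310000     +
#      pos        size  status
0x00000000  0x00100000  +
0x00100000  0x00010000  -
0x00110000  0x00200000  ?
"""

print(summarize_mapfile(sample))  # {'+': 1048576, '-': 65536, '?': 2097152}
```

Summing the `-` runs per region like this is one crude way to quantify those alternating good/bad stripes instead of eyeballing the ddrescueview map.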
Quote:
One VERY IMPORTANT consideration about the "slow issue" on WD drives and the "pending BUG" on Seagate ones is that modern drives perform a BACKGROUND MEDIA SCAN. Your drive might be idle, doing nothing; your OS might not be requesting any data from the drive, might not be sending any command; in fact, you might even have the SATA cable disconnected from the host, and the drive itself, while powered up, might be scanning the surface for sectors that can't be properly read and adding those sectors to the pending list. You don't even need to request to "read" or "verify" a problematic LBA; the drive might do that itself at firmware level, unless you disable the corresponding "feature" by patching the firmware.....
That answers a question I asked about the reallocated sector count of my ST2000DL003, which kept increasing even when I let it run idle for a few days – and it contradicts the common notion that bad sectors can only be reallocated after a failed write attempt.
Is this background scan at least reliable? Or can it wrongly mark sectors as bad when in fact they're not? (Meaning that the data they contained would be lost.) The strange thing in this case is that surface scans with HD Sentinel still come out 100% clean...
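One way to watch this behaviour is to poll the relevant SMART counters while the drive sits idle; if the pending count grows with no host I/O, that's the drive's own background scan at work. A sketch using smartctl (`/dev/sdX` is a placeholder, and the attribute names are the usual ones – vendors sometimes label them differently):

```shell
# Attribute 5 = Reallocated_Sector_Ct, 197 = Current_Pending_Sector,
# 198 = Offline_Uncorrectable. Re-read them every 60 seconds while
# the drive is otherwise idle.
watch -n 60 "smartctl -A /dev/sdX | grep -E 'Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable'"
```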