Hey guys, bit of a weird one:
I have a total of five Samsung HDDs as follows:
1TB - single drive config
2x1.5TB - RAID0
2x2TB - RAID0
And a Crucial M4 128GB SSD for my OS. Other potentially relevant system specs:
Asus P5Q P45 / ICH10R
Windows 7 x64
Latest Intel storage software/drivers (also tried on a clean install using MS' AHCI drivers).
Recently I noticed that copying files to either of my stripes was intermittently slow. When copying large numbers of large files, after a while the speed would drop from the usual ~150MB/s to a fairly consistent 30MB/s, sometimes speeding up again later.

So I fired up resmon to take a look. The first weird thing I noticed was that the per-disk activity graphs that usually sit on the right side of the Disks tab in Windows 7's resmon weren't there; only the top 'Overall' disk I/O graph was shown. I figured it was a driver issue / OS screw-up and probably related to my slow RAID I/O, so I reinstalled Windows 7. That fixed the resmon issue, but the performance problem is still there. In resmon it shows up as a maxed-out queue length and >98% busy time on the affected drive(s).
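In case it helps anyone diagnose this, something like the quick Python script below is what I'm planning to use to put hard numbers on the slowdown instead of eyeballing the copy dialog. The drive letter and sizes are just placeholders for wherever the stripe is mounted.

```
# Rough throughput logger: write a big test file in fixed-size chunks and print
# the speed of each chunk. Path and sizes are placeholders -- point it at the stripe.
import os
import time

CHUNK = 64 * 1024 * 1024          # 64 MiB per write
TOTAL = 20 * 1024 ** 3            # 20 GiB test file
PATH = r"D:\throughput_test.bin"  # assumed drive letter for the affected stripe

def measure():
    data = os.urandom(CHUNK)      # random data so nothing clever in the stack can shortcut it
    written = 0
    with open(PATH, "wb", buffering=0) as f:
        while written < TOTAL:
            start = time.perf_counter()
            f.write(data)
            os.fsync(f.fileno())  # push it to the disks, not just the write cache
            rate = CHUNK / (time.perf_counter() - start) / 1024 ** 2
            written += CHUNK
            print(f"{written // 1024 ** 2:>6} MiB written  {rate:7.1f} MB/s")
    os.remove(PATH)               # clean up the test file afterwards

if __name__ == "__main__":
    measure()
```

If the per-chunk rate falls off a cliff after a few GB and settles around 30MB/s, that would at least confirm it's the drives/controller and not just Explorer being weird.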
Next thing I figured was that one or more of my HDDs had errors, so I began scanning them with MHDD. I used to work for a system builder where we used MHDD on an industrial scale to check for faulty drives, so I know what a failing drive looks like as well as what a healthy drive should look like. In my opinion, my results are kinda strange:
All of the drives that have been in RAID show roughly the same spread of read times: a very large number of blocks in the <10ms and <50ms bands (far more than a new/healthy drive should have), quite a few <150ms, and a small number of <500ms (though not as many as I'd expect from a mechanically failing drive). If I'd seen these results on one drive, or even one from each stripe, I could accept that I had an impending faulty drive or two on my hands and replace them. But all four RAID drives look like this, while the 1TB drive that has never been in RAID tests almost like a brand-new drive: almost entirely <3ms with just a handful of <10ms and <50ms hits. Considering it's by far the oldest of the five drives, I find this very strange.
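For anyone who hasn't used MHDD: it basically reads the drive block by block and buckets each read by how long it took, which is what I mean by the <3ms / <10ms / <50ms bands above. The rough Python equivalent below shows the idea, if run as root from a Linux live USB against the raw device; the /dev/sdX path is a placeholder, and OS readahead/caching means it's only a crude stand-in for MHDD's direct access.

```
# MHDD-style read scan: read the raw device sequentially in 1 MiB chunks and bucket
# each read by latency, mirroring MHDD's <3/<10/<50/<150/<500ms bands.
# Run as root from a Linux live USB; /dev/sdX is a placeholder -- check it twice.
import os
import sys
import time

BLOCK = 1024 * 1024
BANDS = [3, 10, 50, 150, 500]  # ms thresholds

def scan(device):
    counts = {band: 0 for band in BANDS}
    slow = 0
    fd = os.open(device, os.O_RDONLY)
    try:
        # Ask the kernel not to read ahead, otherwise most latencies get smoothed away.
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_RANDOM)
        while True:
            start = time.perf_counter()
            chunk = os.read(fd, BLOCK)
            elapsed_ms = (time.perf_counter() - start) * 1000
            if not chunk:
                break
            for band in BANDS:
                if elapsed_ms < band:
                    counts[band] += 1
                    break
            else:
                slow += 1
    finally:
        os.close(fd)
    for band in BANDS:
        print(f"<{band}ms: {counts[band]}")
    print(f">=500ms: {slow}")

if __name__ == "__main__":
    scan(sys.argv[1])  # e.g. python read_scan.py /dev/sdb
```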
I have to say I think the odds of all four drives failing pretty much simultaneously are astronomical. I'm therefore wondering if something the RAID controller has done could have screwed up the drives at the block level, causing slow access to certain areas of the drive?
I've bought an additional 2TB drive and backed up everything from the 2x1.5TB stripe. Is there anything you guys can suggest I could run on those drives to zero every sector and restore them to an unused state (a low-level format of sorts??). Bear in mind the alternative is replacing another three HDDs (expensive), and since I have a sure-fire way of gauging success (running MHDD again), I might as well try anything that might work. I've got a bootable USB with Hiren's Boot CD, so I have a wide range of tools at my disposal. Fire away with suggestions!
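To be clear about what I mean by zeroing: something along the lines of the Python sketch below, booted from a Linux live USB and pointed at the right /dev/sdX (the path is a placeholder, and obviously this wipes the drive). I'm aware plain dd or the drive's built-in ATA Secure Erase would do the same job; this is just to illustrate the idea of writing every sector so the firmware gets a chance to remap anything weak.

```
# Zero-fill sketch: write zeros over every sector of a drive, which should let the
# firmware remap any pending/weak sectors it hits along the way.
# DESTROYS ALL DATA on the target. Run as root; /dev/sdX is a placeholder.
import os
import sys

BLOCK = 1024 * 1024  # 1 MiB writes

def zero_fill(device):
    zeros = bytes(BLOCK)
    written = 0
    with open(device, "wb", buffering=0) as dev:
        try:
            while True:
                dev.write(zeros)
                written += BLOCK
                if written % (1024 * BLOCK) == 0:  # progress update every GiB
                    print(f"{written // 1024 ** 2} MiB written", end="\r", flush=True)
        except OSError:
            pass  # writing past the end of the device errors out; that's our stop signal
        os.fsync(dev.fileno())  # make sure the kernel has flushed everything to the platters
    print(f"\nFinished after ~{written // 1024 ** 2} MiB")

if __name__ == "__main__":
    zero_fill(sys.argv[1])  # e.g. python zero_fill.py /dev/sdb
```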

Cheers!