wuming79 wrote:
I didn't know I had to indicate the reasons here.
Of course you don't have to explain the reasons - but the context of a question can be important (as in this case) to avoid wasting time giving irrelevant answers or the wrong level of detail.
This is especially true here, since the answers to your specific original questions probably won't help you - which is why I asked for the reasoning behind them. Here are the answers to your initial questions (leaving out some of the fine details involved in things like read-ahead and NCQ re-ordering), and you'll see why I didn't expect these answers to be what you were looking for:
wuming79 wrote:
when does a HDD do random read/write
When it is told to do that, by receiving commands from the host system (e.g. PC).
wuming79 wrote:
when does it do sequential read/write?
When it is told to do that, by receiving commands from the host system (e.g. PC).
In summary: The disk drive does what it is told to do, by the OS on the host system. So you see that the background/context to your questions is needed, to try to understand your real issue a little better...
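To make that concrete, here is a minimal sketch (in Python, with illustrative file sizes and a temporary scratch file - none of this comes from your setup) showing that "sequential" vs "random" is purely a property of the request stream the host issues: the same blocks are read both times, only the order of the requests changes. Note that on a small file like this the OS page cache may hide most of the on-disk difference; the point is the access pattern, not the timings.

```python
import os
import random
import tempfile
import time

BLOCK = 4096      # one 4 KiB read per request
BLOCKS = 2048     # 8 MiB scratch file, illustrative size only

# Create a temporary scratch file to read from.
fd, path = tempfile.mkstemp()
os.write(fd, os.urandom(BLOCK * BLOCKS))
os.close(fd)

def read_pattern(offsets):
    """Issue one read per offset; the drive only ever sees this sequence."""
    t0 = time.perf_counter()
    with open(path, "rb") as f:
        for off in offsets:
            f.seek(off)
            f.read(BLOCK)
    return time.perf_counter() - t0

sequential = [i * BLOCK for i in range(BLOCKS)]   # ascending offsets
shuffled = sequential[:]
random.shuffle(shuffled)                          # same blocks, random order

t_seq = read_pattern(sequential)
t_rnd = read_pattern(shuffled)
print(f"sequential: {t_seq:.3f}s  random: {t_rnd:.3f}s")
os.remove(path)
```

On a mechanical disk with a file much larger than RAM, the shuffled pass is typically far slower, because each out-of-order request costs a seek plus rotational latency - but the drive itself did nothing differently; it simply served the commands it was given.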
wuming79 wrote:
I was copying some files to my PC and it seems to be very slow. So I did an experiment using HD Tune to get the throughput, and measured my own throughput from the file copy time of the 40GB of files, and it doesn't really match up.
You haven't given exact figures, but in short: HD Tune runs "synthetic benchmarks" which show the best performance the disk can give for those specific access patterns. That does not mean you will see the same performance when reading/writing through a filesystem, nor in most other real-life situations (although reading a large file from an unfragmented filesystem can come close to a sequential read, for example).
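If you want a like-for-like number to compare against the benchmark, you can time a file copy yourself and divide size by elapsed time. Here is a small sketch of that calculation (Python; the 64 MiB size and temp-file paths are stand-ins for your 40 GB copy - with a file this small the OS cache inflates the figure, which goes away once the data is much larger than RAM):

```python
import os
import shutil
import tempfile
import time

SIZE = 64 * 1024 * 1024   # 64 MiB stand-in; the real copy was ~40 GB

# Create a source file to copy.
fd, src = tempfile.mkstemp()
os.write(fd, os.urandom(SIZE))
os.close(fd)
dst = src + ".copy"

t0 = time.perf_counter()
shutil.copyfile(src, dst)   # goes through the filesystem, like a normal file copy
elapsed = time.perf_counter() - t0

# Throughput = bytes moved / wall-clock time, in MiB/s
mib_per_s = SIZE / (1024 * 1024) / elapsed
print(f"copied {SIZE // (1024 * 1024)} MiB in {elapsed:.3f}s -> {mib_per_s:.1f} MiB/s")

os.remove(src)
os.remove(dst)
```

The number this produces is the "real life" figure that includes filesystem and OS overhead - which is exactly why it won't match a raw synthetic benchmark.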
wuming79 wrote:
I can't really find what is going on with the HD tune Software and what is the difference between using the software and manual calculating my throughput. So this question pops into my mind when I read about sequential and random read/write.
In general terms, there is an expected difference between raw benchmark performance figures and real life throughput - some of the reasons for the difference include filesystem & OS overhead, sub-optimal I/O size, and filesystem fragmentation which prevents sequential I/O.
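One of those factors, sub-optimal I/O size, is easy to see for yourself: the same amount of data costs more when it is moved in many small requests than in a few large ones, because each request carries per-call overhead (and, on a disk, can become a separate command). A minimal sketch of that effect (Python, unbuffered reads of a temporary scratch file - sizes are illustrative, not from your system):

```python
import os
import tempfile
import time

SIZE = 32 * 1024 * 1024   # 32 MiB scratch file, illustrative size only

fd, path = tempfile.mkstemp()
os.write(fd, os.urandom(SIZE))
os.close(fd)

def read_all(chunk):
    """Read the whole file using the given request size."""
    t0 = time.perf_counter()
    # buffering=0 -> each read() is its own system call, no Python-side coalescing
    with open(path, "rb", buffering=0) as f:
        while f.read(chunk):
            pass
    return time.perf_counter() - t0

t_small = read_all(4 * 1024)       # many small requests
t_large = read_all(1024 * 1024)    # few large requests
print(f"4 KiB reads: {t_small:.3f}s   1 MiB reads: {t_large:.3f}s")
os.remove(path)
```

Benchmarks typically pick a large, aligned I/O size to show the best case; a file copy (or an application) may not, and fragmentation adds seeks on top of that - hence the gap you measured.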
You might also have a real underlying problem, on top of that inevitable difference between file-copy throughput and benchmark performance, but investigating that would require much more detail.