Short version:
AFAICT, between the two SMART reports, about 400GB of data have been written to the drive, and about 350GB of data have been read from the drive.
Long version:
AFAICT, the Current and Worst values of the Accumulated Runtime Bad Blocks attribute are raw counts rather than normalised health scores. Comparison with other SMART reports suggests that a drive begins life with zeros for both values.
Here are 3 examples:
http://www.oczforum.com/forum/showthrea ... (unplanned)
http://www.ocztechnologyforum.com/forum ... 13473.html
http://forum.notebookreview.com/solid-s ... -data.html
Also, a normalised value of 80 for the Available Over-Provisioned Block Count appears to be how the drive begins life, even though one might expect a perfectly healthy drive to show 100 for this attribute. The raw numbers appear to reflect the number of over-provisioned sectors (or LBAs) rather than NAND blocks. Moreover, the over-provisioning seems to be achieved simply by virtue of the difference between decimal gigabytes and binary gigabytes.
For example, the OP's 256GB Vector would have 16 x 16GiB NAND chips for a total capacity of 256GiB. The actual number of reported LBAs is not given in either screenshot. However, the 512.1 GB example below reports 1000215216 sectors.
The total number of sectors in a 512GiB array would be ...
512 x 1024 x 1024 x 1024 / 512 = 1073741824
The difference in the decimal and binary capacities is ...
1073741824 - 1000215216 = 73526608 sectors
... which roughly corresponds to the raw value of attribute 171 (73518416) in the example below.
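The same arithmetic as a few lines of Python, for anyone who wants to plug in other capacities (the LBA count is the one from the 512.1 GB report below):

nand_sectors = 512 * 1024**3 // 512    # sectors in a 512GiB NAND array = 1073741824
reported_lbas = 1000215216             # user-visible LBAs per the 512.1 GB report
print(nand_sectors - reported_lbas)    # 73526608, vs 73518416 (0x461CD50) for attribute 171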
OCZ-VECTOR 256.0 GB
AB Available Over-Provisioned Block Count 80 80 0 22C2D50 (= 36449616)
OCZ-VECTOR 512.1 GB 1000215216 sectors
AB _80 _80 __0 00000461CD50 (= 73518416) Available Over-Provisioned Block Count
OCZ-VERTEX450 128,035,676,160 bytes
171 Unknown_Attribute 080 080 000 - 17907024
OCZ-VECTOR 128GB
171 Available OP block count 0x0000 080 080 18365776
OCZ-VECTOR 128GB
171 Available OP block count 0x0000 080 080 18185552
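In fact all of these examples line up reasonably well with the decimal/binary gap. Assuming the drives follow the usual IDEMA convention for LBA counts (97696368 + 1953504 sectors per decimal GB above 50GB; the Vertex450's 128,035,676,160 bytes / 512 = 250069680 sectors matches it exactly), the expected raw values can be checked in Python:

def idema_lbas(gb):
    # IDEMA-standard LBA count for an advertised capacity of <gb> decimal GB
    return 97696368 + 1953504 * (gb - 50)

# (advertised GB, raw value of attribute 171) from the examples above
examples = [(128, 18365776), (128, 18185552), (128, 17907024),
            (256, 36449616), (512, 73518416)]
for gb, raw in examples:
    nand_sectors = gb * 1024**3 // 512        # sectors in the (binary) NAND array
    expected = nand_sectors - idema_lbas(gb)  # decimal/binary difference
    print(gb, 'GB: expected', expected, '- reported', raw)

The 128GB expectation comes out at exactly 18365776, matching one of the two Vector 128GB reports to the sector; the other raw values fall slightly short of expectations, which might reflect OP blocks that have already been consumed, though that's just a guess.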
As for the amount of data written and read, ISTM that attributes F1 and F2 count the amount of data transferred over the SATA interface, whereas attribute F9 records the number of actual NAND writes, including wear levelling, etc. Therefore F1 and F2 should accurately reflect the user's usage pattern.
In fact we could compare the CrystalDiskInfo and HD Tune reports and see whether the differences in the attribute values correspond to the actual usage over the intervening 10 days.
The Power On Hours Count differs by 166 hours (= 328 - 162).
The Average Erase Count has increased from 31 to 37.
The increase in Host Writes = 0x160F - 5250 = 5647 - 5250 = 397GB
The increase in Host Reads = 0xA06 - 2215 = 2566 - 2215 = 351GB
The increase in NAND Programming Count = 0x1A1AAEED - 388567443 = 437956333 - 388567443 = 49388890
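Or, in Python, with the hex raw values quoted above converted to decimal:

power_on_hours = 328 - 162              # 166 hours
avg_erase_count = 37 - 31               # 6
host_writes_gb = 0x160F - 5250          # 5647 - 5250 = 397
host_reads_gb = 0xA06 - 2215            # 2566 - 2215 = 351
nand_programs = 0x1A1AAEED - 388567443  # 437956333 - 388567443 = 49388890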
CrystalDiskInfo believes that the Total Host Writes/Reads are reported in GB, and it appears to assume that each NAND programming cycle addresses 16KB.
According to CrystalDiskInfo, 0x1A1AAEED NAND writes equates to 6682 GB.
So ...
6682 GiB / 0x1A1AAEED = (6682 x 1073741824) / 437956333 = 16382.3 bytes, which is very nearly 16KiB
Therefore the additional NAND writes between the two SMART reports amount to ...
49388890 x 16KiB = 753.6 GiB
That's a write amplification of approximately 2:1.
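Putting those steps together (assuming 16KiB per programming cycle, and reading CrystalDiskInfo's GB as binary gigabytes, which is what the 16382.3 figure implies):

nand_programs = 49388890                    # increase in NAND Programming Count
nand_gib = nand_programs * 16384 / 1024**3  # ~753.6 GiB of NAND writes
host_gib = 397                              # increase in Host Writes
print(nand_gib / host_gib)                  # ~1.9, i.e. roughly 2:1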
AFAICS, the difference in the Average Erase Count would suggest that the entire NAND array has been erased 6 times, which amounts to 1.5TB (= 6 x 256GB). That's twice the increase in NAND writes. Does that make sense, or should the two results be roughly equal, in which case the write amplification would be 4:1, and each NAND programming cycle would address 32KB?
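FWIW, here are both interpretations side by side:

erased_gib = (37 - 31) * 256                   # 1536 GiB, going by Average Erase Count
print(erased_gib / 753.6)                      # ~2.0x the 16KiB-based NAND write estimate
print(erased_gib / 397)                        # ~3.9, i.e. write amplification of ~4:1
print(erased_gib * 1024**3 / 49388890 / 1024)  # ~32.6KiB per programming cycle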