All times are UTC - 5 hours [ DST ]




Post new topic Reply to topic  [ 12 posts ] 
 Post subject: Badblocks prematurely stopped. 250GB Drive now 232GB.
PostPosted: April 17th, 2019, 23:46 

Joined: December 23rd, 2009, 17:16
Posts: 23
Location: Middle East, Iran
A 250 GB external HDD I own has a few bad sectors, so I ran badblocks on it with the -o option so that I could use the output file when formatting it, to mark the bad sectors as unusable in the file-system*.

But there was a power outage mid-process and now GParted detects the HDD as being only 232.89 GiB.

I'm hoping there's a Linux or Windows utility you guys can point me to, one that can revert it back to being 250 GB again.

*Not sure if file-system is the right term here. Hope you know what I mean.

PS: I'm running Windows 7 SP1 with the latest updates, and the HDD is a Seagate FreeAgent Go from 2009-10. Also, the output file is empty.


 Post subject: Re: Badblocks prematurely stopped. 250GB Drive now 232GB.
PostPosted: April 18th, 2019, 0:08 

Joined: September 8th, 2009, 18:21
Posts: 15528
Location: Australia
http://www.google.com/search?q=232.89+GiB+in+GB

    232.89 gibibytes = 250.064 gigabytes

_________________
A backup a day keeps DR away.


 Post subject: Re: Badblocks prematurely stopped. 250GB Drive now 232GB.
PostPosted: April 18th, 2019, 1:03 

Joined: December 23rd, 2009, 17:16
Posts: 23
Location: Middle East, Iran
Oh, no! That's embarrassing.

But I'm sure GParted showed the number 250 before I ran Badblocks on it...

Also, I'd like to know whether such a situation is recoverable, since I will be doing the same on a few other drives and I fear the power might go out again, or my system might hang.


 Post subject: Re: Badblocks prematurely stopped. 250GB Drive now 232GB.
PostPosted: April 18th, 2019, 4:37 

Joined: December 23rd, 2009, 17:16
Posts: 23
Location: Middle East, Iran
To add to my previous post:

I'm trying to figure out the device's block size so I can run badblocks on it again, and I've tried a few commands. Problem is, they're returning contradictory results.

Code:
stat -fc %s /dev/sde

returns: 4096
While
Code:
sudo blockdev --getbsz /dev/sde

returns: 1024

Before this whole thing happened, they both returned 4096.

PS: I tried the following commands too, but they both return "Bad magic number in super-block while trying to open /dev/sde"
Code:
sudo dumpe2fs -h /dev/sde

Code:
sudo tune2fs -l /dev/sde | grep -i 'block size'


 Post subject: Re: Badblocks prematurely stopped. 250GB Drive now 232GB.
PostPosted: April 18th, 2019, 18:22 

Joined: January 29th, 2012, 1:43
Posts: 982
Location: United States
Quote:
I'm trying to figure out the device's blocksize to run Badblocks on it again and I've tried a few commands. Problem is, they're returning contradictory results.

Here is a command that will show both the logical and physical block size of a device in Linux:
Code:
lsblk -d -b -o NAME,SIZE,LOG-SEC,PHY-SEC,MODEL

_________________
http://www.hddsuperclone.com
Home of HDDSuperClone


 Post subject: Re: Badblocks prematurely stopped. 250GB Drive now 232GB.
PostPosted: April 18th, 2019, 23:47 

Joined: December 23rd, 2009, 17:16
Posts: 23
Location: Middle East, Iran
maximus wrote:
Here is a command that will show both the logical and physical block size of a device in Linux:
Code:
lsblk -d -b -o NAME,SIZE,LOG-SEC,PHY-SEC,MODEL

It returns yet another block size:
Code:
NAME         SIZE LOG-SEC PHY-SEC MODEL
sde  250059348992     512     512 FreeAgent Go

I'm stumped. I don't know which block size to run badblocks with.


 Post subject: Re: Badblocks prematurely stopped. 250GB Drive now 232GB.
PostPosted: April 19th, 2019, 3:22 

Joined: September 8th, 2009, 18:21
Posts: 15528
Location: Australia
lsblk reports the block size of the physical device.

All other commands report the block size of the partition, file system, etc.
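For reference, here's a rough map of which command reports what. This is a sketch: /dev/sdX and /dev/sdX1 are placeholders, and the device-level queries need root.

```shell
# Whole-device values:
sudo blockdev --getss   /dev/sdX    # logical sector size (e.g. 512)
sudo blockdev --getpbsz /dev/sdX    # physical sector size
sudo blockdev --getbsz  /dev/sdX    # kernel's current soft block size (can change)

# Filesystem values (these need an existing filesystem to exist):
sudo dumpe2fs -h /dev/sdX1 | grep -i 'block size'
stat -fc %s /path/to/mountpoint     # block size of the mounted filesystem
```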

Which file system do you intend to install on the drive?

_________________
A backup a day keeps DR away.


 Post subject: Re: Badblocks prematurely stopped. 250GB Drive now 232GB.
PostPosted: April 19th, 2019, 12:46 

Joined: December 23rd, 2009, 17:16
Posts: 23
Location: Middle East, Iran
I see.

I'll be installing a Swap partition and an Ext4 partition.


 Post subject: Re: Badblocks prematurely stopped. 250GB Drive now 232GB.
PostPosted: April 19th, 2019, 13:08 

Joined: February 2nd, 2019, 1:21
Posts: 192
Location: Sri Lanka
Hi.
Check your hard drive's label carefully: the vendor prints the capacity and the LBA count on it. A 250 GB hard drive normally has 488397168 LBAs, which means the HDD has 488397168 sectors, each 512 bytes.

So 488397168 x 512 = 250059350016 bytes.

Drive vendors use decimal units, where 1 kB = 1000 bytes:

250059350016 / 1000 = 250059350.016 kB
250059350 / 1000 = 250059.35 MB
250059 / 1000 ≈ 250.05 GB

But in binary units, where 1 KiB = 1024 bytes (which is what GParted reports):

250059350016 / 1024 = 244198584 KiB
244198584 / 1024 ≈ 238475.18 MiB
238475.18 / 1024 ≈ 232.88 GiB


 Post subject: Re: Badblocks prematurely stopped. 250GB Drive now 232GB.
PostPosted: April 19th, 2019, 13:52 

Joined: January 29th, 2012, 1:43
Posts: 982
Location: United States
From the man page for badblocks:
Quote:
Important note: If the output of badblocks is going to be fed to the e2fsck or mke2fs programs, it is important that the block size is properly specified, since the block numbers which are generated are very dependent on the block size in use by the filesystem. For this reason, it is strongly recommended that users not run badblocks directly, but rather use the -c option of the e2fsck and mke2fs programs.

From the man page for e2fsck:
Quote:
-c
This option causes e2fsck to use badblocks(8) program to do a read-only scan of the device in order to find any bad blocks. If any bad blocks are found, they are added to the bad block inode to prevent them from being allocated to a file or directory. If this option is specified twice, then the bad block scan will be done using a non-destructive read-write test.

From the man page for mke2fs:
Quote:
-c
Check the device for bad blocks before creating the file system. If this option is specified twice, then a slower read-write test is used instead of a fast read-only test.

So why not use the -c option of mke2fs to make the filesystem, or e2fsck if you create the filesystem by some other means (such as GParted)? Either way, you don't have to worry about the block size, as it will be handled for you.
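In practice that would look something like the following sketch, where /dev/sdX1 is a placeholder for the actual partition (double-check the device name before running anything destructive):

```shell
# Create an ext4 filesystem, scanning for bad blocks first.
# One -c = fast read-only scan; -cc = slower read-write test.
sudo mkfs.ext4 -cc /dev/sdX1

# Or, if the filesystem was already created (e.g. with GParted):
# -c = read-only badblocks scan; -cc = non-destructive read-write test.
sudo e2fsck -cc /dev/sdX1
```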

_________________
http://www.hddsuperclone.com
Home of HDDSuperClone


 Post subject: Re: Badblocks prematurely stopped. 250GB Drive now 232GB.
PostPosted: April 20th, 2019, 3:52 

Joined: December 23rd, 2009, 17:16
Posts: 23
Location: Middle East, Iran
Hmm, I didn't mention this, but when creating the filesystems I'm interested in modifying badblocks' output file to also include 1 or 2 blocks before and after each bad block found, just to be safe. Besides, it doesn't look like doing it that way gives me the option of running badblocks in destructive read-write mode (badblocks' -w option), which is how I'd like to run it, as it seems more thorough.
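For the padding part: badblocks' output file is just one block number per line, so it can be expanded with a little awk. A sketch, using three made-up block numbers:

```shell
# Hypothetical badblocks output: one bad block number per line
printf '%s\n' 100 101 5000 > badblocks.txt

# Emit each block plus 2 neighbours on either side, deduplicated and sorted
awk '{ for (i = $1 - 2; i <= $1 + 2; i++) if (i >= 0) print i }' badblocks.txt \
    | sort -n -u > badblocks-padded.txt
```

badblocks-padded.txt then contains 98 through 103 and 4998 through 5002, one number per line.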


 Post subject: Re: Badblocks prematurely stopped. 250GB Drive now 232GB.
PostPosted: April 20th, 2019, 9:40 

Joined: January 29th, 2012, 1:43
Posts: 982
Location: United States
You are trying to get the block size from the whole disk (/dev/sde). The block size of the whole device is technically the sector size, in your case 512. You want the block size of the file system of a partition (something like /dev/sde1), which, if you ran a destructive badblocks command against the whole disk, no longer exists. That would explain why you are having trouble getting the file system block size.

Assuming that the output from badblocks is based on the input device, using the badblocks output from /dev/sde on a command that is partition based (/dev/sde1) would put the bad blocks in the wrong location. So I hope you know what you are doing, and are planning on doing some math to adjust the bad block list as needed if this is your method.

Since you are running the write test, the disk itself will relocate the bad sectors, and you won’t know where they were, because now they are good. If you wanted to avoid extra areas away from the bad sectors, you should have run badblocks in read only mode. You also should have checked the SMART values before and after running badblocks for pending and reallocated sector counts. It is possible for the disk to run out of spare sectors, but if it does run out, the disk is probably in bad shape and will only get worse, and this could be a lot of effort to try to squeeze the last bit of life from it.

So how to move forward. First, get the SMART values (make sure you use something like CrystalDiskInfo that shows the raw values so you can get an exact count of what is happening). Then create the empty partitions on the disk. You can check the file system block size then (or set it during partition creation, 4096 should be good). Then run badblocks on each partition (not the whole disk). This will give an output of the bad blocks in relation to each partition, so that you can use the values in e2fsck.

If you want to make sure you keep the partition start points away from any bad areas, you could run badblocks on the whole disk, and use that data to decide partition sizes to keep the boundaries in safer areas.
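Put together, the whole procedure might look roughly like this sketch; /dev/sdX, the partition numbers, and the 4096 block size are assumptions to adapt:

```shell
# 1. Note the raw SMART counters before (and again after) the scan:
sudo smartctl -A /dev/sdX | grep -Ei 'reallocated|pending'

# 2. After partitioning, scan each partition (not the whole disk).
#    -w is the destructive write test; -b must match the filesystem block size.
sudo badblocks -w -b 4096 -o sdX2-badblocks.txt /dev/sdX2

# 3. Create the filesystem, feeding it the bad block list:
sudo mkfs.ext4 -b 4096 -l sdX2-badblocks.txt /dev/sdX2
```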

_________________
http://www.hddsuperclone.com
Home of HDDSuperClone

