HowTo: increase badblocks speed for 4k disks
Posted: Thu Dec 08, 2016 2:45 pm
by peter_b
When running badblocks on a newer disk with a 4k blocksize, badblocks' default blocksize of 1k has a severe negative impact on speed.
On my setup with fresh Seagate 3TB disks, running a badblocks write test with default settings gave me a lousy 6.5 MB/s.
I knew this couldn't be the maximum with new SATA disks on a fresh server board...
Adding the option "-b" to set the blocksize to 4096 bytes (= 4k) increased the speed to 23 MB/s.
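A minimal sketch of what such an invocation could look like, assuming a hypothetical device /dev/sdX ("-w" runs the destructive write test, "-s" shows progress, "-v" is verbose):

# destructive write test with 4k blocksize (example device, adjust to your setup)
badblocks -wsv -b 4096 /dev/sdX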
Increasing the number of blocks tested at once (parameter "-c") from the default of 64 to e.g. 65535 increased the speed for the same 4 disks from 23 MB/s to 180 MB/s.
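Combining both options, a sketch of the full command (again with the hypothetical /dev/sdX; adjust the device and log path to your setup):

# 4k blocksize, 65535 blocks per pass, log any bad blocks to a file
badblocks -wsv -b 4096 -c 65535 -o /root/badblocks-sdX.log /dev/sdX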
Awesome...
Have fun!
Re: HowTo: increase badblocks speed for 4k disks
Posted: Mon Dec 19, 2016 5:11 pm
by gilthanaz
Running badblocks on newer disks is pretty much obsolete: modern disks have a controller and a "spare blocks/sectors" area, and they remap any bad blocks they encounter during normal operation without the user (or the OS!) even noticing... So you will most likely get an OK result from badblocks even though some block errors have already been silently remapped.
The "modern" way to find out whether a disk is starting to develop bad blocks is to read out the SMART values with smartctl and check the "Reallocated Sector Count" attribute. That is where you see whether the disk has actually had any bad blocks that made the controller remap data to prevent data loss. This is completely invisible to the operating system (with the exception of SMART-aware monitoring tools)!
If you monitor the disks and then create e.g. a ZFS pool, ZFS will "format" the disks in the pool in the background (while the pool is already functioning as ONLINE). This takes a while, and the SMART monitoring will show you after a few hours whether there is anything to worry about.
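A rough sketch of that workflow, with made-up pool and device names:

# create a raidz pool from three example disks
zpool create tank raidz /dev/sda /dev/sdb /dev/sdc
# keep an eye on pool health
zpool status tank
# after a few hours, check SMART again for reallocated sectors
smartctl -A /dev/sda | grep -i reallocated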
Re: HowTo: increase badblocks speed for 4k disks
Posted: Mon Jan 09, 2017 4:15 pm
by peter_b
You're right, but I found it very helpful (and very effective) to use badblocks to run initial "burn-in" tests on new hardware/disks.
Even if the badblocks results themselves are invalid/useless (as you say), there are a number of things one can detect and verify using this method (see the sketch after the list):
- All components in the signal chain from HDD to CPU (chipset, HDD controller, etc.)
- Temperature behavior
- Abnormalities/trends in SMART values
- ...
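A minimal sketch of such a burn-in run across several disks in parallel (device names and log paths are made up; adjust for your hardware):

# run destructive badblocks tests on three example disks in parallel
for d in sdb sdc sdd; do
    badblocks -wsv -b 4096 -c 65535 -o /root/badblocks-$d.log /dev/$d &
done
wait
# afterwards, compare SMART attributes and temperatures per disk
for d in sdb sdc sdd; do
    smartctl -A /dev/$d | grep -Ei 'reallocated|temperature'
done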
If the tests run without any errors, you can be pretty sure that all the involved hardware components are functioning reasonably well.
I've used this test setup for Backblaze pods with 45 drives in each machine, and I was able to detect the following errors before we went into production:
- 4 bad hard disks (died during the test)
- 2 bad SATA multiplexer backplanes (produced massive I/O errors during the tests)
- 1 broken SATA controller (I/O errors during tests)
Re: HowTo: increase badblocks speed for 4k disks
Posted: Mon Jan 09, 2017 10:54 pm
by gilthanaz
You're perfectly right! That badblocks used as a burn-in test also exercises the whole signal chain and the surrounding hardware is indeed a good argument. So badblocks might not be needed for most modern filesystems, but it still has its value when testing how everything works together.