When running badblocks on a newer disk with a 4 KiB sector size, badblocks' default block size of 1 KiB has a severe negative impact on speed.
On my setup with fresh Seagate 3TB disks, running a badblocks write test with default settings gave me a lousy 6.5 MB/s.
I knew this couldn't be the maximum with new SATA disks on a fresh server board...
Adding the option "-b 4096" to set the block size to 4096 bytes (= 4 KiB) increased the speed to 23 MB/s.
Increasing the number of blocks tested at once (parameter "-c") from the default of 64 to e.g. 65535 increased the speed for the same 4 disks from 23 MB/s to 180 MB/s.
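Putting both options together, a destructive write test might look like this (a sketch only: /dev/sdX is a placeholder for your actual device, and -w erases all data on it):

```shell
# Destructive write-mode test (-w) — wipes the disk! /dev/sdX is a placeholder.
# -b 4096  : match the drive's 4 KiB sector size
# -c 65535 : test 65535 blocks at once instead of the default 64
# -s -v    : show progress and be verbose
badblocks -wsv -b 4096 -c 65535 /dev/sdX
```

For a non-destructive check, -n (non-destructive read-write) or the default read-only mode can be used instead of -w, at the cost of speed and test coverage.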
Running badblocks on newer disks is pretty obsolete: modern disks have a controller and a "spare blocks/sectors" area, and will remap any bad blocks they encounter during normal operation without the user (or the OS!) even noticing... So you'll most likely get an OK result from badblocks even though there were some block errors that have been silently remapped.
The "modern" way to find out whether a disk is starting to develop bad blocks is to read out the SMART values with smartctl and check the "Reallocated Sector Count". That is where you see if the disk actually had any bad blocks that made the controller remap data to prevent data loss. This is completely invisible to the operating system (with the exception of SMART-aware monitoring tools)!
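A quick way to check (device name is a placeholder; the attribute names below are how smartctl typically prints them for ATA disks):

```shell
# Read the SMART attribute table and filter the remapping-related counters.
# A Reallocated_Sector_Ct above 0 means the drive has already remapped sectors;
# Current_Pending_Sector shows sectors waiting to be remapped.
smartctl -A /dev/sdX | grep -E 'Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable'
```

Rising values over time, rather than a single nonzero reading, are the real warning sign worth acting on.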
If you monitor the disks and then create e.g. a ZFS pool, ZFS will "format" the disks in the pool in the background (while the pool is already functioning as ONLINE). This takes a while, and the SMART monitoring would show you after a few hours whether there's anything to worry about.
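For completeness, creating and watching such a pool might look like this (pool and device names are made up for illustration):

```shell
# Create a raidz1 pool from four disks (all names are placeholders).
# The pool is usable (ONLINE) immediately; writes exercise the disks over time.
zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd

# Check pool health, then keep an eye on SMART over the following hours.
zpool status tank
smartctl -A /dev/sda | grep Reallocated_Sector_Ct
```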
You're perfectly right! That badblocks as a burn-in test also exercises the whole signal chain and surrounding hardware is indeed a good argument. So badblocks might not be needed for most modern filesystems, but it still has its value when testing how everything works together.