If any ZFS fans here fancy doing some testing: this pool of mine was semi-reliably triggering the metaslab corruption bug on 2.4.1 with Linux 6.18.13, just by repeatedly running KDiskMark against the empty "1M" dataset.
zpool create -o ashift=12 -o failmode=continue \
    -o feature@blake3=enabled -o feature@large_dnode=enabled \
    -o feature@block_cloning=enabled -o feature@dynamic_gang_header=enabled \
    -o feature@edonr=enabled -o feature@embedded_data=enabled \
    -o feature@large_blocks=enabled -o feature@livelist=enabled \
    -o feature@log_spacemap=enabled -o feature@lz4_compress=enabled \
    -o feature@spacemap_histogram=enabled -o feature@spacemap_v2=enabled \
    -o feature@zilsaxattr=enabled -o feature@zstd_compress=enabled \
    -O atime=off -O compression=off -O relatime=off -O checksum=blake3 \
    -O utf8only=on -O devices=off -O direct=standard -O dnodesize=auto \
    -O setuid=off -O logbias=throughput \
    -f -m /mnt/bad bad mirror sdb1 sdc1
zfs create -v -o recordsize=1M -o compression=off bad/1M
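For anyone who wants to hammer on this headlessly: KDiskMark is a front end for fio, so a loop like the one below should approximate the same load. The fio job parameters are my guess at KDiskMark's default sequential 1 MiB pass (SEQ1M Q8T1), not copied from its config, and the iteration count is arbitrary.

```shell
#!/bin/sh
# Hedged repro sketch: approximate KDiskMark's sequential 1 MiB benchmark
# pass with fio against the bad/1M dataset. bs/iodepth/numjobs are
# assumptions modelled on KDiskMark's SEQ1M Q8T1 preset.
for i in $(seq 1 20); do
    fio --name=seq1m-q8t1 \
        --directory=/mnt/bad/1M \
        --rw=write --bs=1M --iodepth=8 --numjobs=1 \
        --ioengine=libaio --direct=1 \
        --size=1G --loops=5 --group_reporting
    # Delete the benchmark files so each iteration starts from an
    # empty dataset, matching the repro conditions above.
    rm -f /mnt/bad/1M/seq1m-q8t1.*
done
```

No test harness here since it needs root, fio, and the throwaway mirror pool; adjust the iteration count and file size to taste.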