
ZFS Borked

I have a Sun x4500 at work with 48 500 GB disks in it, 46 of which are configured as one gigantic ZFS pool. A couple of weeks ago, when I had to restart it, it wasn't able to mount the ZFS filesystems. After bashing on it a bit to get it to boot without mounting ZFS, I was able to use Solaris's format command to determine that I have two disks with bad blocks, but format was not able to repair them. Fortunately the two disks are in two different raidz groups, so the data is all still there.

I have been trying to disable the problem disks so that I can mount the ZFS filesystems in degraded mode and at least get at my data. I use cfgadm -c unconfigure device to turn off the SATA ports of the two problem disks, and then zpool import pool to import the pool. The import takes forever, but while it runs I can manually mount some of the ZFS filesystems and access the data for a while, until the server locks up.
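Roughly, the sequence looks like this (the port and pool names below are placeholders; cfgadm -al and the x4500 hd utility will tell you which SATA ports the failed drives actually sit on):

    cfgadm -al                      # list the SATA attachment points and their state
    cfgadm -c unconfigure sata1/3   # take the first bad disk's port offline
    cfgadm -c unconfigure sata4/2   # same for the second bad disk
    zpool import tank               # import the pool with those two disks missing
    zpool status -v tank            # both raidz groups should show up DEGRADED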

This is a real nuisance. I don’t know why disabling the bad disks doesn’t allow the system to work normally until my replacement disks arrive. Ok Internets, any ideas?

Categories: Sun Solaris, zfs
  1. Ivan
    2009-04-29 at 03:28

    Can you locate and physically pull the bad disks, vs. disabling the controller they're attached to, and then see if the Thumper will boot on its own again?

    Normally, you should be able to do a zpool offline or zpool detach of the bad disks, but that would assume you could boot properly into ZFS – hence resorting to physically pulling the disks… (see the zpool offline sketch after the comments below).

    • 2009-04-29 at 08:02

      Thanks for the suggestion. I know exactly which of the disks are bad, thanks to the hd utility for the x4500. I'll give that a try.

      • Ivan
        2009-05-01 at 03:14

        Scott, any luck?

  2. Joe
    2009-05-03 at 22:45

    Curious myself. I have two Thumpers I haven't had an issue with yet…

  3. 2009-06-17 at 20:59

    Comment in passing: format for Solaris will not be able to repair bad blocks on ATA or SATA disks. ATA/SATA do not have a "grown defect list" that is user (software) manageable; there is no ATA command which says "internally mark LBA [x] bad".

    SCSI, on the other hand, has this capability. You might have seen it under the “defect” menu, when showing “grown” defects.

    The bottom line with ATA/SATA disks and bad blocks is: replace the disk immediately. There is a large number of internal "spare" blocks to which the drive can silently remap data (assuming the internal re-read or re-write was successful), and the kernel/userland has no knowledge of this (though you might see a READ/WRITE timeout when it happens). Once all those spare blocks are used up, the drive will begin reporting back to the controller (and thus the kernel) which LBAs had read/write failures.

    The best way to determine the state of ATA/SATA disks is to use smartmontools, specifically smartctl -a, e.g. smartctl -a /dev/rdsk/c0t0d0s0. The SMART statistics shown will probably confuse you, but if you can provide some of them here, I'll help decipher them and teach you how to read them (a short smartctl sketch appears after the comments below).

    • 2009-06-18 at 08:40

      koitsu, thanks for the comment. This has fallen off the top of my priority list for a bit, but when I get back to it I will post some comments here and maybe email you for some help, if the offer still stands. Then, if I learn something useful, I'll post it here for others to see too.

  4. 2009-09-25 at 10:04

    I have posted an update to this saga that was unexpected. The ZFS data and the drives associated with the apparently b0rked ZFS volume were not the problem. There was something screwed up with the operating system.
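
A minimal sketch of the zpool offline route Ivan suggests above, assuming the pool can be imported at all and using placeholder names for the pool and the bad devices (zpool detach only applies to mirrors and hot spares, so offline is the relevant command for raidz groups):

    zpool offline tank c1t3d0   # take the first bad disk offline within its raidz group
    zpool offline tank c4t2d0   # and the second, in the other raidz group
    zpool status tank           # both groups should now report DEGRADED
    # once the replacement disks arrive and are swapped into the same slots:
    zpool replace tank c1t3d0
    zpool replace tank c4t2d0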
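
And a quick sketch of the smartctl check koitsu describes, again with a placeholder device name; the raw values of the reallocated and pending sector attributes are the first things to look at:

    smartctl -a /dev/rdsk/c1t3d0s0   # full SMART identity, health and attribute report
    # the attributes that most directly indicate failing media:
    smartctl -a /dev/rdsk/c1t3d0s0 | egrep -i 'Reallocated_Sector|Current_Pending|Offline_Uncorrectable'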
