Friday, May 9, 2008

Destroying Storage Pools And File Systems On Solaris 10 Unix ZFS

Hey once more,

Today we're finally ready to wrap up our series of posts on using ZFS and storage pools on Solaris 10 Unix. We started out with a post on ZFS and storage pool creation commands, followed that up with a two-parter on maintenance and usage commands and even more helpful usage commands, and have now arrived at the end of the line.

This post marks the end of this little series and, fittingly, it deals with destroying ZFS file systems and storage pools. It's wonderful how that worked out so appropriately ;)

I hope this series has been as much a help to you as it has been in crippling my fingers ;) Seriously, here's hoping we all got something out of it :)

Let the end begin:

1. If you decide you want to get rid of your storage pool (none of my business... ;), you can do so like this:

host # zpool destroy zvdevs
host # zpool list zvdevs
cannot open 'zvdevs': no such pool
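
By the way, if you destroy the pool and then want to keep playing along with the rest of these examples, it's easy enough to rebuild. This is only a rough sketch, assuming you're using plain file-backed vdevs like the ones we've been working with all series (the 1g size is just a guess - use whatever size you started out with). We'll make it a three-way mirror, since we'll be pulling /vdev3 out of it shortly:

host # mkfile 1g /vdev1 /vdev2 /vdev3
host # zpool create zvdevs mirror /vdev1 /vdev2 /vdev3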


2. You can also easily detach a device from a mirror in your storage pool by doing the following:

host # zpool detach zvdevs /vdev3
host # zpool status -v zvdevs
pool: zvdevs
state: ONLINE
scrub: none requested
config:

NAME        STATE     READ WRITE CKSUM
zvdevs      ONLINE       0     0     0
  mirror    ONLINE       0     0     0
    /vdev1  ONLINE       0     0     0
    /vdev2  ONLINE       0     0     0

errors: No known data errors
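
If you detach a device and then change your mind, "zpool attach" should put it right back into the mirror for you (this is just a sketch; /vdev1 is simply the existing mirror member we're attaching alongside). ZFS will resilver the newcomer on its own, and a "zpool status -v" afterward should show /vdev3 back in the mirror - which also happens to be handy for the next example:

host # zpool attach zvdevs /vdev1 /vdev3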


3. In a Solaris 10 storage pool, you can't remove any active disks using "zpool remove." If you attempt to, it's not a big deal, because you'll just get an error, like this:

host # zpool remove zvdevs /vdev3
cannot remove /vdev3: only inactive hot spares can be removed


Since "zpool remove" only handles inactive hot spares, the way to take an active disk out of a mirror (as long as doing so isn't detrimental...) is to "detach" the virtual device, exactly the same way we did in the last step, like so:

host # zpool detach zvdevs /vdev3
host # zpool status -v zvdevs
pool: zvdevs
state: ONLINE
scrub: resilver completed with 0 errors on Mon May 5 10:05:33 2008
config:

NAME        STATE     READ WRITE CKSUM
zvdevs      ONLINE       0     0     0
  mirror    ONLINE       0     0     0
    /vdev1  ONLINE       0     0     0
    /vdev2  ONLINE       0     0     0

errors: No known data errors
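
One more aside before we move on: if you only need to take a disk out of service temporarily (a cable swap, a flaky controller), you probably want "zpool offline" and "zpool online" rather than a detach. A quick sketch, assuming /vdev2 is the disk you're fiddling with:

host # zpool offline zvdevs /vdev2
host # zpool online zvdevs /vdev2

There's also a "-t" flag to "zpool offline" if you only want the device to stay offline until the next reboot.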


4. Of course, you can easily remove vdevs from a storage pool if they're "hot spares" and they're "inactive." Here, we'll add a spare and then remove it:

host # zpool add zvdevs spare /vdev3
host # zpool status -v zvdevs
pool: zvdevs
state: ONLINE
scrub: resilver completed with 0 errors on Mon May 5 10:05:33 2008
config:

NAME          STATE     READ WRITE CKSUM
zvdevs        ONLINE       0     0     0
  mirror      ONLINE       0     0     0
    /vdev1    ONLINE       0     0     0
    /vdev2    ONLINE       0     0     0
spares
  /vdev3      AVAIL

errors: No known data errors
host # zpool remove zvdevs /vdev3
host # zpool status -v zvdevs
pool: zvdevs
state: ONLINE
scrub: resilver completed with 0 errors on Mon May 5 10:05:33 2008
config:

NAME        STATE     READ WRITE CKSUM
zvdevs      ONLINE       0     0     0
  mirror    ONLINE       0     0     0
    /vdev1  ONLINE       0     0     0
    /vdev2  ONLINE       0     0     0

errors: No known data errors
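
While we're on the subject of spares, if one half of the mirror ever starts acting up while a spare is still sitting in the pool, you can swap the spare in by hand with "zpool replace." We're not actually running this here (we just removed our spare, after all), so treat it as a sketch, with /vdev1 standing in for the sick disk:

host # zpool replace zvdevs /vdev1 /vdev3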


5. If you want to remove ZFS file systems, just use the "destroy" option, shown below:

host # zfs destroy zvdevs/vusers4
host # zfs list|grep zvdevs
zvdevs 200M 784M 28.5K /zvdevs
zvdevs/vusers 24.5K 200M 24.5K /zvdevs/vusers
zvdevs/vusers@backup 0 - 24.5K -
zvdevs/vusers2 24.5K 100M 24.5K /zvdevs/vusers2
zvdevs/vusers3 24.5K 100M 24.5K /zvdevs/vusers3
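
Destroying an individual snapshot works exactly the same way; you just hand "zfs destroy" the full snapshot name. We won't actually run this one, since we still need that snapshot for the next step (and it would complain anyway if a clone still depended on it), but the form would be:

host # zfs destroy zvdevs/vusers@backup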


6. Of course, there's a catch to destroying ZFS file systems. It won't work straight off if the file system has "children" (Unix has always been a family-oriented OS ;)

host # zfs destroy zvdevs/vusers
cannot destroy 'zvdevs/vusers': filesystem has children
use '-r' to destroy the following datasets:
zvdevs/vusers@backup


In this instance, we either need to recursively destroy the file system (with the -r flag), like so:

host # zfs destroy -r zvdevs/vusers
host # zfs list|grep zvdevs
zvdevs 100M 884M 27.5K /zvdevs
zvdevs/vusers2 24.5K 100M 24.5K /zvdevs/vusers2
zvdevs/vusers3 24.5K 100M 24.5K /zvdevs/vusers3


or promote the clone so that it's no longer dependent on its origin snapshot, at which point we can destroy the original like this (if you recall, many moons ago - this gets confusing - we took a snapshot of zvdevs/vusers, then cloned that snapshot to a new file system, zvdevs/vusers4, in order to do a rollback... that's why we have to promote zvdevs/vusers4 before we can remove zvdevs/vusers):

host # zfs promote zvdevs/vusers4
host # zfs list|grep zvdevs
zvdevs 100M 884M 29.5K /zvdevs
zvdevs/vusers 0 884M 24.5K /zvdevs/vusers
zvdevs/vusers2 24.5K 100M 24.5K /zvdevs/vusers2
zvdevs/vusers3 24.5K 100M 24.5K /zvdevs/vusers3
zvdevs/vusers4 24.5K 884M 24.5K /zvdevs/vusers4
zvdevs/vusers4@backup 0 - 24.5K -
host # zfs destroy zvdevs/vusers
host # zfs list|grep zvdevs
zvdevs 100M 884M 28.5K /zvdevs
zvdevs/vusers2 24.5K 100M 24.5K /zvdevs/vusers2
zvdevs/vusers3 24.5K 100M 24.5K /zvdevs/vusers3
zvdevs/vusers4 24.5K 884M 24.5K /zvdevs/vusers4
zvdevs/vusers4@backup 0 - 24.5K -
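
One last note on the promote dance: if you ever lose track of who's a clone of whom, the "origin" property will sort you out. A quick check (just a sketch) to confirm that zvdevs/vusers4 is standing on its own now:

host # zfs get origin zvdevs/vusers4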


7. If you've had enough and you want to destroy your storage pool, that's your call. Just remember the hierarchical nature of ZFS and don't get discouraged ;)

host # zpool destroy zvdevs

The only time this won't work straight off is if your pool isn't empty (that is, it still contains ZFS file systems, snapshots or other datasets) or if any of those file systems are still mounted and busy. The "zfs mount" and "zfs unmount" commands (which work a lot like their regular UFS mount/umount counterparts) can take care of the mounting/unmounting issue for you, and destroying the individual ZFS file systems in your pool takes care of the other one (so will the "-f" flag, if you just want to use force and be done with it):

host # zpool destroy zvdevs
cannot destroy 'zvdevs': pool is not empty
use '-f' to force destruction anyway
host # zfs unmount zvdevs/vusers2
host # zfs mount zvdevs/vusers2


Note that you may also have to unmount any file systems overlay-mounted on top of the main file system first. No problem; again, the "-f" flag can save you the hassle.
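
Putting all of that together, the impatient version looks something like this (purely a sketch; zvdevs/vusers2 is only standing in for whatever file system won't let go, and the "-f" on the destroy makes the unmount step optional anyway):

host # zfs unmount -f zvdevs/vusers2
host # zpool destroy -f zvdevs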

And that's all she wrote about this subject... for now...

Have a great weekend :)

, Mike