Wednesday, May 7, 2008

Maintenance And Usage Commands For ZFS On Solaris 10 Unix

Hey again,

Today, we're back with part two of what was going to be a three-part series on working with ZFS and storage pools. Actually, this was originally going to be one post, but (luckily?) it's grown into four gigantic ones ;) This one, and tomorrow's, are going to be the "big daddies" of the bunch.

Yesterday we looked at ZFS storage pool and file system creation and today we're going to move on to commands that we can use to manipulate those storage pools and file systems that we've made.

Please note, again, that for most commands where I specifically name a virtual device or storage pool, you can get output for every pool on the system by simply not naming one at all.

And, without a moment to spare, here come those maintenance/usage commands (please enjoy responsibly ;)

1. If you need to know as much as possible about your storage pools, you can use this command:

host # zpool status -v zvdevs
  pool: zvdevs
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zvdevs      ONLINE       0     0     0
          /vdev1    ONLINE       0     0     0
          /vdev2    ONLINE       0     0     0
          /vdev3    ONLINE       0     0     0

errors: No known data errors
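
And if all you want is a quick "is anything broken?" answer, the -x flag limits the report to pools that actually have problems. On a clean system you just get a one-liner:

host # zpool status -x
all pools are healthy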


2. In Solaris 10 storage pool management, you can also "online" and "offline" virtual devices. You might need to do this from time to time if you have to replace an "actual" device that's gone faulty. Here's an example of offlining and then onlining a vdev. Note that if you use the "-t" flag when offlining, the device will only be temporarily disabled (there's a quick sketch of that right after the output below). Normal offlining is persistent, and the storage pool will keep the vdev in an offline state even after a reboot:

host # zpool offline zvdevs /vdev2
Bringing device /vdev2 offline
host # zpool status -v zvdevs
  pool: zvdevs
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
 scrub: resilver completed with 0 errors on Mon May 5 10:05:33 2008
config:

        NAME         STATE     READ WRITE CKSUM
        zvdevs       DEGRADED     0     0     0
          mirror     DEGRADED     0     0     0
            /vdev1   ONLINE       0     0     0
            /vdev2   OFFLINE      0     0     0

errors: No known data errors
host # zpool online zvdevs /vdev2
Bringing device /vdev2 online
host # zpool status -v zvdevs
  pool: zvdevs
 state: ONLINE
 scrub: resilver completed with 0 errors on Mon May 5 10:18:34 2008
config:

        NAME         STATE     READ WRITE CKSUM
        zvdevs       ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            /vdev1   ONLINE       0     0     0
            /vdev2   ONLINE       0     0     0

errors: No known data errors
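
And here's that temporary offlining I mentioned above, as a quick sketch using the same pool and vdev. The command is identical except for the flag; the difference is that this vdev will come back online all by itself after the next reboot:

host # zpool offline -t zvdevs /vdev2 <--- Temporary: this offline state won't survive a reboot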


3. If you want to attach another disk to your storage pool mirror, it's just as simple. Attaching a device to a single-device pool turns that pool into a two-way mirror, and attaching one to an existing mirror gives you a three-way, four-way, etc, mirror:

host # zpool attach zvdevs /vdev1 /vdev3 <--- Note that we're specifically saying we want to mirror /vdev1 and not /vdev2. It doesn't really matter which one we name, since they're mirrors of each other, but you can't attach a device without naming an existing device to mirror!
host # zpool status -v zvdevs
  pool: zvdevs
 state: ONLINE
 scrub: resilver completed with 0 errors on Mon May 5 10:05:33 2008
config:

        NAME         STATE     READ WRITE CKSUM
        zvdevs       ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            /vdev1   ONLINE       0     0     0
            /vdev2   ONLINE       0     0     0
            /vdev3   ONLINE       0     0     0

errors: No known data errors
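
The flip side of attach is detach. If you decide that three-way mirror was overkill, you can pull a device back out just as easily (a quick sketch, using our same pool; any of the three devices could be named here):

host # zpool detach zvdevs /vdev3 <--- The pool goes back to a plain two-way mirror of /vdev1 and /vdev2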


4. For a little shell-game, if you ever need to replace a vdev in your storage pool (say, with a hot spare), you can do it easily, like this:

host # zpool add zvdevs spare /vdev3 <--- You may not need this step; I had removed /vdev3 from the pool earlier, so I'm re-adding it as a spare here.
host # zpool replace zvdevs /vdev1 /vdev3
host # zpool status -v zvdevs
  pool: zvdevs
 state: ONLINE
 scrub: resilver completed with 0 errors on Mon May 5 10:20:58 2008
config:

        NAME           STATE     READ WRITE CKSUM
        zvdevs         ONLINE       0     0     0
          mirror       ONLINE       0     0     0
            spare      ONLINE       0     0     0
              /vdev1   ONLINE       0     0     0
              /vdev3   ONLINE       0     0     0
            /vdev2     ONLINE       0     0     0
        spares
          /vdev3       INUSE     currently in use

errors: No known data errors
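
Once the resilver onto /vdev3 finishes, you'd typically finish the job by detaching the device you just replaced; the spare then gets promoted to a full member of the mirror. A sketch, continuing from the output above:

host # zpool detach zvdevs /vdev1 <--- /vdev3 stops showing up as an INUSE spare and becomes a regular mirror member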


5. If you suspect file system damage, you can "scrub" your storage pool. zpool will verify the checksum of every block and confirm that everything is okay (if it is ;) and will auto-repair any problems it finds on mirrored or RAID-Z pools :)

host # zpool scrub zvdevs <--- Depending on how much disk you have and how full it is, this can take a while and chew up I/O cycles like nuts.
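
You can keep an eye on a running scrub with zpool status, and, if the I/O hit is hurting too much, stop it with the -s flag. The progress line below is just illustrative of what Solaris 10 prints; your percentages and times will obviously differ:

host # zpool status zvdevs | grep scrub
 scrub: scrub in progress, 34.52% done, 0h2m to go
host # zpool scrub -s zvdevs <--- Stops the scrub dead in its tracks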

6. If you want to move storage pools between systems (real or virtual), you can "export" your storage pool like so (note that the pool will no longer show up as being "owned" by your system, although you can re-import it later):

host # zpool list
NAME     SIZE   USED   AVAIL   CAP  HEALTH   ALTROOT
datadg   33.8G  191M   33.6G   0%   ONLINE   -
rootdg   9.75G  5.11G  4.64G   52%  ONLINE   -
zvdevs   1016M  127K   1016M   0%   ONLINE   -
host # zpool export zvdevs
host # zpool list
NAME     SIZE   USED   AVAIL   CAP  HEALTH   ALTROOT
datadg   33.8G  191M   33.6G   0%   ONLINE   -
rootdg   9.75G  5.11G  4.64G   52%  ONLINE   -
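
If the export complains that the pool is busy, the -f flag will forcibly unmount any of its file systems that are still in use. A sketch; just make sure nothing important is actually using the pool before you reach for this:

host # zpool export -f zvdevs <--- Forces the export even if file systems in the pool are busy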


7. In order to "import" an exported storage pool, you just need sufficient permission on the receiving system (being root in the global zone is the perfect place to be when you try this. Just be careful ;)

host # zpool list
NAME     SIZE   USED   AVAIL   CAP  HEALTH   ONLINE
datadg   33.8G  191M   33.6G   0%   ONLINE   -
rootdg   9.75G  5.11G  4.64G   52%  ONLINE   -
host # zpool import -d / zvdevs
host # zpool list
NAME     SIZE   USED   AVAIL   CAP  HEALTH   ALTROOT
datadg   33.8G  191M   33.6G   0%   ONLINE   -
rootdg   9.75G  5.11G  4.64G   52%  ONLINE   -
zvdevs   1016M  92K    1016M   0%   ONLINE   -


You can run this command without -d (which tells zpool which directory to search for the pool's devices) and zpool will search /dev/dsk instead. Since our vdevs are plain files living under /, in our case it won't find the pool, like this:

host # zpool import zvdevs
cannot import 'zvdevs': no such pool available
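
Handily, if you run import with -d but without naming a pool, zpool will scan that directory and list any pools it could import, which is great when you've forgotten where a pool's devices live:

host # zpool import -d / <--- No pool name: just show me whatever is importable under /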


8. If you need to know what version of ZFS your system is running, you can use zpool's "upgrade" option. Don't worry: without the proper flags, it just lets you know what version is running and some other information, like this:

host # zpool upgrade
This system is currently running ZFS version 4.

All pools are formatted using this version.
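
If you actually want to perform an upgrade, the same command takes a pool name, or -a for every pool on the system. Fair warning, and this is the part to remember: it's a one-way trip, and older software won't be able to read an upgraded pool:

host # zpool upgrade zvdevs <--- Upgrade just this one pool to the current on-disk version
host # zpool upgrade -a <--- Or upgrade every pool on the system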


9. If you want to see what features your version of ZFS has, you can use the "upgrade" option with the -v flag. Versions are cumulative, so (in our case), since we're running version 4, we have all the capabilities of versions 3, 2 and 1, but none from versions 5 and higher (at the time of this post, ZFS is up to version 10):

host # zpool upgrade -v
This system is currently running ZFS version 4.

The following versions are supported:

VER  DESCRIPTION
---  --------------------------------------------------------
 1   Initial ZFS version
 2   Ditto blocks (replicated metadata)
 3   Hot spares and double parity RAID-Z
 4   zpool history
...


Now, go and have fun with those commands :)

Until we meet again,

, Mike