Hey there,
Today we're here with part three of our four-part mini-series on ZFS and storage pool commands and tips. We started out with ZFS and storage pool creation commands, and followed that up with yesterday's post on maintenance and usage commands. Today's post is actually a continuation of yesterday's post, which was originally going to be part two of three. The inmates must be running the asylum ;)
For any notes pertaining to this entire four-post series, please refer to the first ZFS and storage pool post. I'm getting tired of hearing my own voice in my head ;)
Today, we'll keep on going with usage and maintenance tips. Enjoy and don't be afraid to come back for seconds :)
1. You can run statistics specifically against your ZFS storage pools (which can help in identifying a bottleneck on a system with a lot of virtual devices, zones and pools). These stats aren't too exciting since I just created the pool for this little cheat-sheet:
host # zpool iostat zvdevs 1 5
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
zvdevs        92K  1016M      0      0     83    633   <--- As with regular iostat, this first line is a summary
zvdevs        92K  1016M      0      0      0      0
zvdevs        92K  1016M      0      0      0      0
zvdevs        92K  1016M      0      0      0      0
zvdevs        92K  1016M      0      0      0      0
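Quick aside: if you're actually chasing a bottleneck, adding the -v flag breaks the numbers down per virtual device instead of just per pool (I'll spare you the output here, since this brand-new pool is all zeros anyway):
host # zpool iostat -v zvdevs 1 5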
2. If you ever need to list out your ZFS file systems, just use the "list" option, like this:
host # zfs list |grep zvdevs
zvdevs 120K 984M 26.5K /zvdevs
zvdevs/vusers 24.5K 984M 24.5K /zvdevs/vusers
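If you'd rather not pipe through grep, zfs list will also take a starting point along with the -r flag, and show just that file system and everything beneath it:
host # zfs list -r zvdevs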
3. One problem you might note with new ZFS file systems when you list them out is that they all have access to the entire storage pool. So, theoretically, any one of them could fill up all of the storage pool space and leave the others out of disk space. This can be fixed by setting "reservations" to ensure that each ZFS file system gets to keep a certain amount of space for itself no matter what (here, vusers gets at least 100 Megabytes, and vusers2 and vusers3 will get at least 50 Megabytes each):
host # zfs set reservation=100m zvdevs/vusers
host # zfs set reservation=50m zvdevs/vusers2
host # zfs set reservation=50m zvdevs/vusers3
host # zfs list -o name,reservation|grep zvdevs
zvdevs none
zvdevs/vusers 100M
zvdevs/vusers2 50M
zvdevs/vusers3 50M
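And if you change your mind later, you can clear a reservation by setting it back to "none", like this:
host # zfs set reservation=none zvdevs/vusers2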
4. On the other hand, you may want to restrict space-hogs from taking up too much space. You can do this with quotas, almost just like on older versions of Solaris Unix:
host # zfs set quota=200m zvdevs/vusers
host # zfs set quota=100m zvdevs/vusers2
host # zfs set quota=100m zvdevs/vusers3
host # zfs list -o name,quota|grep zvdevs
zvdevs none
zvdevs/vusers 200M
zvdevs/vusers2 100M
zvdevs/vusers3 100M
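You can also check the quota and reservation on one or more file systems in a single shot with "get", like this (and, just like with reservations, setting quota=none should take the cap back off):
host # zfs get quota,reservation zvdevs/vusers zvdevs/vusers2 zvdevs/vusers3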
5. You can also set up file systems to use compression, selectively, like so (whether or not this is worthwhile is debatable):
host # zfs set compression=on zvdevs/vusers2
host # zfs list -o name,compression|grep zvdevs
zvdevs off
zvdevs/vusers off
zvdevs/vusers2 on
zvdevs/vusers3 off
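If you're curious whether compression is actually buying you anything, check the compressratio property (keeping in mind that only data written after you turned compression on gets compressed):
host # zfs get compressratio zvdevs/vusers2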
6. ZFS also has built-in snapshot capabilities, so you can prep yourself (like, before you're about to try something crazy ;) and be able to roll back to an earlier point in time (we'll call our snapshot "backup"), like this:
host # zfs snapshot zvdevs/vusers@backup
host # zfs list|grep zvdevs
zvdevs 200M 784M 28.5K /zvdevs
zvdevs/vusers 24.5K 200M 24.5K /zvdevs/vusers
zvdevs/vusers@backup 0 - 24.5K -
zvdevs/vusers2 24.5K 100M 24.5K /zvdevs/vusers2
zvdevs/vusers3 24.5K 100M 24.5K /zvdevs/vusers3
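If you end up with a lot of snapshots and only want to see those, you should be able to filter the listing by type (on older releases a plain "zfs list" shows snapshots anyway, as you can see above):
host # zfs list -t snapshot | grep zvdevs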
7. Now, assuming that you screwed everything up so badly that you need to back out your changes (but not so badly that you can't access Solaris ;), you can use ZFS's built-in rollback capability. Here we'll do a rollback, based on our snapshot, and clone that to a separate ZFS file system, so we can move back whatever files we need to their original locations (unfortunately, the extra cloning steps are necessary, as snapshots are not directly accessible and you also can't clone to an existing dataset!):
host # zfs rollback zvdevs/vusers@backup
host # zfs clone zvdevs/vusers@backup zvdevs/vusers
cannot create 'zvdevs/vusers': dataset already exists <--- Didn't I just remind myself that I can't do this ;)
host # zfs clone zvdevs/vusers@backup zvdevs/vusers4
host # zfs list|grep zvdevs
zvdevs 200M 784M 29.5K /zvdevs
zvdevs/vusers 24.5K 200M 24.5K /zvdevs/vusers
zvdevs/vusers@backup 0 - 24.5K -
zvdevs/vusers2 24.5K 100M 24.5K /zvdevs/vusers2
zvdevs/vusers3 24.5K 100M 24.5K /zvdevs/vusers3
zvdevs/vusers4 0 784M 24.5K /zvdevs/vusers4
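As an aside, if you decide the clone is the copy you actually want to keep, recent ZFS versions let you "promote" it, which flips the dependency around so the clone no longer hangs off the original snapshot:
host # zfs promote zvdevs/vusers4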
8. Another cool thing you can do is to rename your file systems, like this (note that any attached snapshots will be renamed as well):
host # zfs rename zvdevs/vusers4 zvdevs/garbage
host # zfs list|grep zvdevs
zvdevs 100M 884M 28.5K /zvdevs
zvdevs/garbage 24.5K 884M 24.5K /zvdevs/garbage
zvdevs/garbage@backup 0 - 24.5K -
zvdevs/vusers2 24.5K 100M 24.5K /zvdevs/vusers2
zvdevs/vusers3 24.5K 100M 24.5K /zvdevs/vusers3
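Note that the rename also moved the default mountpoint from /zvdevs/vusers4 to /zvdevs/garbage. If you'd rather have it mounted somewhere else entirely, just set the mountpoint property (the /export/garbage path here is only an example):
host # zfs set mountpoint=/export/garbage zvdevs/garbage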
9. You can also rename snapshots, if you don't want anybody (including yourself in a few months ;) to be able to easily tell what file system is a backup of what other file system, like this:
host # zfs rename zvdevs/garbage@backup zvdevs/garbage@rollbacksys
host # zfs list|grep zvdevs
zvdevs 100M 884M 28.5K /zvdevs
zvdevs/garbage 24.5K 884M 24.5K /zvdevs/garbage
zvdevs/garbage@rollbacksys 0 - 24.5K -
zvdevs/vusers2 24.5K 100M 24.5K /zvdevs/vusers2
zvdevs/vusers3 24.5K 100M 24.5K /zvdevs/vusers3
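When you're completely done with a snapshot, you can get rid of it with destroy (ZFS will refuse if a clone still depends on that snapshot, which is a nice safety net):
host # zfs destroy zvdevs/garbage@rollbacksys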
10. And, just in case you need to have all the information you could ever want about your ZFS file systems, you can "get" it "all":
host # zfs get all zvdevs
NAME PROPERTY VALUE SOURCE
zvdevs type filesystem -
zvdevs creation Mon May 5 10:00 2008 -
zvdevs used 100M -
zvdevs available 884M -
zvdevs referenced 28.5K -
zvdevs compressratio 1.00x -
zvdevs mounted yes -
zvdevs quota none default
zvdevs reservation none default
zvdevs recordsize 128K default
zvdevs mountpoint /zvdevs default
zvdevs sharenfs off default
zvdevs checksum on default
zvdevs compression off default
zvdevs atime on default
zvdevs devices on default
zvdevs exec on default
zvdevs setuid on default
zvdevs readonly off default
zvdevs zoned off default
zvdevs snapdir hidden default
zvdevs aclmode groupmask default
zvdevs aclinherit secure default
zvdevs canmount on default
zvdevs shareiscsi off default
zvdevs xattr on default
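Of course, "all" can be a bit much. You can ask for just the properties you care about, and add -r to walk down through all the children, like this:
host # zfs get -r compression,quota,reservation zvdevs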
11. It's better to send than to receive :) Maybe not... In any event, you can use the ZFS send and receive commands to send snapshots to other file systems, locally or remotely (much in the same way you can use a subshell to move content around with tar), like so:
host # zfs send zvdevs/garbage@rollbacksys | ssh localhost zfs receive zvdevs/newbackup
host # zfs list|grep zvdevs
zvdevs 100M 884M 28.5K /zvdevs
zvdevs/garbage 24.5K 884M 24.5K /zvdevs/garbage
zvdevs/garbage@rollbacksys 0 - 24.5K -
zvdevs/vusers2 24.5K 100M 24.5K /zvdevs/vusers2
zvdevs/vusers3 24.5K 100M 24.5K /zvdevs/vusers3
zvdevs/newbackup 24.5K 100M 24.5K /zvdevs/newbackup
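If you plan on doing this regularly, you can send just the differences between two snapshots with -i, instead of shipping the whole thing every time (the @rollbacksys2 snapshot here is just a made-up example name, and you may need "zfs receive -F" if the target has been touched in the meantime):
host # zfs snapshot zvdevs/garbage@rollbacksys2
host # zfs send -i zvdevs/garbage@rollbacksys zvdevs/garbage@rollbacksys2 | ssh localhost zfs receive zvdevs/newbackup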
12. If you get nostalgic from time to time, or you need to figure out why something bad happened, you can use the ZFS storage pool history command, like so (note that no history will be available after you destroy a pool -- I did that already to zvdevs, so this output is from another storage pool on the same system):
# zpool history
History for 'datadg':
2008-02-27.20:25:32 zpool create -f -m legacy datadg c0t1d0
2008-02-27.20:27:57 zfs set mountpoint=legacy datadg
2008-02-27.20:27:58 zfs create -o mountpoint -o quota=5m datadg/data
2008-02-27.20:27:59 zfs create -V 20gb datadg/swap
2008-02-27.20:31:47 zfs create -o mountpoint -o quota=200m datadg/oracleclient
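Depending on how new your ZFS bits are, zpool history may also take a -l flag to show who ran each command (and from which host and zone), which can be handy when you're trying to figure out which admin to blame ;)
host # zpool history -l datadg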
Cheers,
Mike