
Friday, October 24, 2008

Creating And Deleting Local Zones On Solaris 10 Unix

Hey There,

Today's post is the final entry in our quick series on dealing with local zones on Solaris 10. If you want to check out the previous entries, finish this paragraph. If you don't (or already have), just skip past it :) The previous posts in this series have all dealt with already-created local zones: how to create new file systems in a local zone on Solaris 10, how to modify file systems in existing local zones and how to remove file systems from a local zone.

Today's "how to" is going to cover the bookends of this mini-series; creating local zones in Solaris 10 and destroying them.

NOTE: This point is fairly obvious in this post (it was more appropriate in the other three), but I'll include it for completeness' sake: you have to be in the global zone to create or destroy a local zone. See what I mean? ;) Also, resource/storage pools, etc., are outside the scope of this series of posts. If you're curious to learn a bit more about that aspect, check out our older series of posts dealing with using storage pools in Solaris 10. The link is to the final post, but all the preceding posts are linked to on that page, much like they are on this one.
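A quick sanity check, in case you're ever unsure which zone you're logged into: the zonename command will tell you, and it should come back with "global" before you do any of the below:

host # zonename
global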

Now, we'll go, step by step, through creating a local zone (up through the final "installation" phase) and, subsequently, undoing all of our hard work by destroying it ;)

1. First of all, fire up your old friend zonecfg and we'll get the party started. For our examples, we'll call our new local zone "DING" to maintain consistency with our other posts and we'll call our pool "CLOCK" (ding dong, yeah, it's sappy ;) You'll notice that you define the zone name when you invoke zonecfg, although just doing that doesn't actually "create" it, like so:

host # zonecfg -z DING <-- This is going to give you an error, but you can safely ignore it. Basically, the error just indicates that you're trying to configure a zone that hasn't been created yet. That's o.k., because we're just about to :)
zonecfg:DING> create

2. Next, we'll set up all of the minimally necessary parts of this local zone (note that we've already checked that enough resources (disk, IP, etc) exist in order for us to be able to install our new zone):

zonecfg:DING> set zonepath=/zones/DING/
zonecfg:DING> set autoboot=true
<-- I would recommend leaving this as "false" until you know you're good, but the worst that can happen isn't really all that bad, since a local zone caught in an auto-boot-loop won't cause you the same headache a straight-up box with the same issue would.
zonecfg:DING> add net <-- Notice how this puts you in a sub-menu, which shows up in the prompt. This happens for most device configuration and is pretty helpful if you have to walk away from your build for a while and come back to a screen where someone's hit enter 500 times ;)
zonecfg:DING:net> set address=99.99.99.1
zonecfg:DING:net> set physical=bge0
zonecfg:DING:net> end
<-- and this, universally, takes us back down (or up, depending on how you look at it) a menu, so we end up back at
zonecfg:DING>

3. Then, with the process just about completed, we'll assign the zone to the "CLOCK" storage pool (outside the scope of these posts, but you can check out our older 4-part post on working with storage pools). Once that's complete, we'll verify our zone and commit the configuration, like this:

zonecfg:DING> set pool=CLOCK
zonecfg:DING> verify
zonecfg:DING> commit
<-- This writes the configuration which, up until this point, has been held only in memory.
zonecfg:DING> exit (or ^D [Ctrl d])

4. The last step in creating a zone isn't entirely obvious. The first time I did this, I thought I was done when I completed step three. The zone was created, I'd assigned it all of the resources it required (and tweaked all of those), and then verified and committed the zone. Alas, I was wrong; there was still one last thing left to do.

The final act in creating and enabling your new local zone is to "install" the zone with the zoneadm command. This not only re-verifies all of your zone's resources and checks whether your new zone will actually run on your system (or any other, for that matter), but also installs all the necessary files in the local zone's root file system and creates any mount points that are needed. It's very simple to run and very easy to deal with (although it may take a little bit more time, even if your setup is good :) - Just run it, simply, like this:

host # zoneadm -z DING install
Preparing to install zone DING
...
Zone DING is initialized.


And that's, basically, all you need to do to set up a local zone :) Of course, you should run "zoneadm -z DING boot," zlogin, etc., to boot the zone up, log in and run your own tests, just to be sure everything is the way you like it. If you need to make any adjustments, you can still use zonecfg to modify your zone's configuration (theoretically ad infinitum).
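For instance (just a minimal sketch; the console login assumes you're ready to answer the zone's initial system identification questions, or have a sysidcfg file in place), checking on and logging into the freshly installed zone might look something like this:

host # zoneadm -z DING boot <-- boot the new local zone
host # zoneadm list -cv <-- the new zone should show up with a status of "running"
host # zlogin -C DING <-- attach to the zone's console for the first boot and sysid questions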

Now let's get to work ruining it all, by destroying our local zone ;) It's actually very simple and only includes one "real" step. We'll do the first part, just to be polite, and shut down the zone before we destroy it, like so:

host # zoneadm -z DING halt <-- Totally unnecessary if you're going to utterly destroy your zone anyway

Then, all we need to do is run one command to end it all:

host # zonecfg -z DING delete <-- You can also use the -F flag if it won't go away. Note, also, that no commit is necessary, as it was when we created the local zone.
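One small hedge on the "one real step" claim: if the zone has actually been installed (and not just configured), the tidier teardown is to uninstall it first, so its files get cleaned out of the zonepath, and then delete the configuration. That fuller sequence would look something like this:

host # zoneadm -z DING halt <-- stop the zone if it's still running
host # zoneadm -z DING uninstall <-- remove the zone's files from its zonepath
host # zonecfg -z DING delete -F <-- remove the zone's configuration without being prompted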

And there we have it; the poignant story of a local zone, from birth to death ;)

Cheers,

, Mike





Tuesday, May 6, 2008

ZFS Command Sheet For Solaris Unix 10 - Pool And File System Creation

Hey There,

Today, we're going back to the Solaris 10 Unix well and slapping together a few useful commands (or, at least, a few commands that you'll probably use a lot ;). We've already covered ZFS and Solaris 10 zones in our previous posts on creating storage pools for ZFS and patching Solaris 10 Unix zones, but those were more specific, while this post is meant to be a little quick-stop command repository (and only part one, today). This series is also going to focus more on ZFS and less on the "zone" aspect of the Solaris 10 OS.

Apologies if the explanations aren't as long as my normal posts are. ...Then again, some of you may be thanking me for the very same thing ;)

So, without further ado, some Solaris 10-specific commands that will hopefully help you in a pinch :) Note that for all commands where I specify a virtual device or storage pool, you can get a full listing of all available devices/pools by "not specifying" any storage pool. I'm just trying to keep the output to the point so this doesn't get out of hand.

Today we're going to take storage pools and ZFS file systems and look at creation-based commands, tomorrow we'll look at maintenance/usage commands, and then we'll dig into destructive commands and cleaning up the mess :)

1. To create virtual devices (vdevs), which can, technically, be virtual (files carved out of a part, or parts, of real disk, as in the example below) or "real" disks if you have them available to you, you can do this:

host # mkfile 1g /vdev1 /vdev2 /vdev3
host # ls -l /vdev[123]
-rw------T 1 root root 1073741824 May 5 09:47 /vdev1
-rw------T 1 root root 1073741824 May 5 09:47 /vdev2
-rw------T 1 root root 1073741824 May 5 09:48 /vdev3


2. To create a storage pool, and check it out, you can do the following:

host # zpool create zvdevs /vdev1 /vdev2 /vdev3
host # zpool list zvdevs
<--- Don't specify the name of the pool if you want to get a listing of all storage pools!
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
zvdevs 2.98G 90K 2.98G 0% ONLINE -


3. If you want to create a mirror of two vdevs of different sizes, this can be done, but you'll be stuck with the smallest possible mirror (as it would be physically impossible to put more information on one disk than it can contain. That seems like common sense ;)

host # zpool create -f zvdevs mirror /vdev1 /smaller_vdev <--- The mirrored storage pool will be the size of the "smaller_vdev"
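(The "/smaller_vdev" above is just a hypothetical, smaller file; if you wanted to try this out alongside the file-backed vdevs from step 1, you could whip one up with mkfile, something like this:)

host # mkfile 512m /smaller_vdev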

4. If you want to create a mirror with all the disks (or vdevs) the same size (like they should be :), you can do it like this:

host # zpool create zvdevs mirror /vdev1 /vdev2 /vdev3 /vdevn... <--- I haven't hit the max yet, but I know you can put a "lot" of vdevs in the same mirror. Of course, you'd be wasting a lot of disk and it would probably slow your writes down...

host # zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
zvdevs 1016M 112K 1016M 0% ONLINE -
host # zpool status -v zvdevs
  pool: zvdevs
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zvdevs      ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            /vdev1  ONLINE       0     0     0
            /vdev2  ONLINE       0     0     0
            /vdev3  ONLINE       0     0     0

errors: No known data errors


5. You can create a new directory, put a file system on it and have it mounted within your storage pool very easily. All you need to do is "create" it with the "zfs" command. Three tasks in one! (and just as easy as creating a pool with the zpool command):

host # zfs create zvdevs/vusers
host # df -h zvdevs/vusers
Filesystem size used avail capacity Mounted on
zvdevs/vusers 984M 24K 984M 1% /zvdevs/vusers


6. If you need to create additional ZFS file systems, the command is the same, just lather rinse and repeat ;)

host # zfs create zvdevs/vusers2
host # zfs create zvdevs/vusers3
host # zfs list |grep zvdevs
zvdevs 182K 984M 27.5K /zvdevs
zvdevs/vusers 24.5K 984M 24.5K /zvdevs/vusers
zvdevs/vusers2 24.5K 984M 24.5K /zvdevs/vusers2
zvdevs/vusers3 24.5K 984M 24.5K /zvdevs/vusers3


See you tomorrow, for more fun with Solaris 10 ZFS/Storage Pool maintenance/usage commands :)

Cheers,

, Mike

Thursday, April 24, 2008

Creating Storage Pools For Solaris ZFS

Hey there,

As a way of getting on to addressing Solaris 10 Unix issues beyond patching zones and migrating zones, today we're going to put together a slam-bang setup of a Solaris ZFS storage pool. For those of you unfamiliar with ZFS (and I only mention this because I still barely ever use it -- Lots of folks are stuck on either Solaris 8 or 9 and are having a hard time letting go ;), it simply stands for "Zettabyte File System."

A Zettabyte = 1024 Exabytes.
An Exabyte = 1024 Petabytes.
A Petabyte = 1024 Terabytes.
A Terabyte = 1024 Gigabytes.
A Gigabyte = 1024 Megabytes.


And, at about this point the number scale makes sense to most of us. And, even though a Zettabyte seems like it's an unbelievably large amount of space, the next generation of operating systems will require twice as much RAM and disk space in order to respond to your key presses in under a minute ;)

Anyway, now that we have that out of the way, let's set up a ZFS pool.

First we'll create the pool, in a pretty straightforward fashion, using the zpool command and adding the disks (which can be given in cXtXdX notation, or specified by slice with cXtXdXsX notation). You can also use full logical path notation if you want (e.g. /dev/dsk/cXtXdXsX). We'll assume two entire disks (unmirrored), create the pool on them and have it mounted for use:

host # zpool create MyPoolName c0t0d0 c0t1d0

Note that you can run "zpool create -n" to do a dry-run. This will allow you to find any mistakes you may be making in your command line as zpool doesn't "really" create the pool when you run it with the "-n" flag.
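Just as a sketch (the exact output formatting may vary a bit between releases), the dry run simply prints the layout it would create and then exits without touching your disks:

host # zpool create -n MyPoolName c0t0d0 c0t1d0
would create 'MyPoolName' with the following layout:

        MyPoolName
          c0t0d0
          c0t1d0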

We also won't have to do anything extra to create the mount point, as it defaults to the name of the pool. So, in this case, we would have a storage pool with a default mount point of /MyPoolName. This directory has to either not exist (in which case it will be automatically created) or exist and be empty (in which case the root dataset will be mounted over the existing directory). If you want to specify a different mount point, you can use the "-m" flag for zpool, like so:

host # zpool create -m /pools/MyPool MyPoolName c0t0d0 c0t1d0

And your ZFS storage pool is created! You can destroy it just as easily by running:

host # zpool destroy MyPoolName

or, if the pool is in a faulted state (or you experience any other error, but know you want to get rid of it), you can always force the issue ;)

host # zpool destroy -f MyPoolName

And you're just about ready to use your new "device." All you need to do is (just like you would when mounting a regular disk device) create a file system on your storage pool. You can do this pretty quickly with the zfs command, like so:

host # zfs create MyPoolName/MyFileSystem

Note that, on the command line, zfs takes the dataset name (MyPoolName/MyFileSystem, with no leading slash) rather than a path. The pool's root dataset is mounted at /MyPoolName, and your new file system lives underneath it, at /MyPoolName/MyFileSystem by default. And you can manage this file system, in much the same way you manage a regular ufs file system, using the zfs command.
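If you want to convince yourself that the new file system is really there and mounted (just a quick check; your sizes will obviously differ), you can list it with zfs and df:

host # zfs list MyPoolName/MyFileSystem
host # df -h /MyPoolName/MyFileSystem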

Again, you can always opt out and destroy it if you don't like it (here, as well, you have the option to "force" if you want):

host # zfs destroy MyPoolName/MyFileSystem

Aside from "-f" (to force execution), the most common flags to use with "zfs destroy" are "-r" to do a recursive destroy or "-R" to recursively destroy all the regular descendants in the file system tree, plus all cloned file systems outside of your target directory (Be careful with that switch!)

We've only just brushed on getting started with zfs, but, hopefully, it's been somewhat helpful and, at the very least, readable ;)

Enjoy,

, Mike