
Wednesday, February 4, 2009

The Linux/Unix SysAdmin Covert File Storage Method Number 57

Hey there,

Today's post is the 57th installment of a two or three part series of posts that refuses to play by the rules. Look for Volume 3 and Issue 437 in the near future ;)

This post's trick (actually it's more of a gimmick - or a way any one of us has probably screwed up at some point in time ;) is fairly simple and, as is generally the case, inversely proportional in complexity to the work I'm currently getting paid to do so my wife, kids and 5 animals don't go hungry. I'm somewhat obsessive-compulsive and tend to forget to eat more often than I remember to. Metabolism, of course, is just another one of life's cruel jokes. I'm not gigantic, but my waistline implies a lavish and sedentary lifestyle I don't enjoy. Actually, my fitness-oriented friends tell me that I'd lose the spare tire if I just ate more regularly. While this makes perfect sense, I generally don't ;) ...and then, every once in a while, I digress...

This little goof is pretty simple to pull off (assuming you're a sysadmin and/or have the access, privilege and opportunity to do it) and can be a life-saver. Technically, you shouldn't ever need to do this, but sometimes convenience trumps sanity...

You may recall a post we did a long, long time ago, in a galaxy just down the block, regarding finding space hogs on multiple overlay-mounted filesystems. This little way to hide bits of information works along much the same lines. The one serious limitation is that, while you'll be secretly storing your information, you won't be hiding the actual disk space it consumes, so this method of packing away all the stuff you're not supposed to have on the company's production web server only goes so far.

For today, we'll use a /usr/local mount point that we have on a Solaris machine (independent of the /usr mount point) to demonstrate.

Step 1: Get the lay of the land. In order for this to work, you need to have enough space to stow away what you need to and, hopefully, enough space that your addition is barely noticeable. Our setup isn't bad, especially since the "actual" filesystem that's going to be impacted is the /usr filesystem underneath /usr/local (if it were /usr/local itself, the change might be noticed, since that filesystem is so "empty").

host # ls /usr/local
PKG-Get etc lib lost+found pkgs share
bin info libexec man sbin
host # df -k /usr/local
Filesystem kbytes used avail capacity Mounted on
/dev/dsk/c0t0d0s5 51650439 167184 50966751 1% /usr/local
host # df -k /usr
Filesystem kbytes used avail capacity Mounted on
/dev/dsk/c0t0d0s3 5233086 3550979 1629777 69% /usr


Step 2: Peel back the carpet. This is where you have to be quick, and is the essence of our little shell-game. First, we'll unmount the /usr/local filesystem, leaving us with this (df -k for /usr/local now shows that it's just a simple directory on /usr):

host # umount /usr/local
host # df -k /usr/local
Filesystem kbytes used avail capacity Mounted on
/dev/dsk/c0t0d0s3 5233086 3550979 1629777 69% /usr
host # ls /usr/local


Under normal circumstances, this directory should now be empty (unless someone else is doing the same thing as you, or just forgot to clean up before they created the separate /usr/local overlay mount).
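If you want to be sure before you start stashing things away, a quick peek at whatever got left behind only takes a second (a minimal check, using the same /usr/local example):

host # ls -A /usr/local
host # du -sk /usr/local

If ls shows any files, or du reports more than a token few kilobytes, somebody else beat you to it ;)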

Step 3: Sweep your non-work-related-stuff under the rug, or into the /usr/local directory, as it were:

host # mv non-work-related-stuff /usr/local/
host # ls /usr/local
non-work-related-stuff


Step 4: Make sure things look normal. See if your addition makes a noticeable difference in your df output (it probably won't, unless you're trying to sock away your mp3 collection ;)

host # df -k /usr
Filesystem kbytes used avail capacity Mounted on
/dev/dsk/c0t0d0s3 5233086 3550981 1629775 69% /usr


Step 5: Pretend nothing happened. Once you're satisfied (which should be as soon as possible), remount /usr/local and verify that everything looks the same (excepting your modification of the /usr filesystem):

host # mount /usr/local
host # ls /usr/local
PKG-Get etc lib lost+found pkgs share
bin info libexec man sbin
host # df -k /usr/local
Filesystem kbytes used avail capacity Mounted on
/dev/dsk/c0t0d0s5 51650439 167184 50966751 1% /usr/local
host # df -k /usr
Filesystem kbytes used avail capacity Mounted on
/dev/dsk/c0t0d0s3 5233086 3550981 1629775 69% /usr


And you're all set. You can get your stuff back eventually (even sooner if you don't care what people think ;)
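When you do want your stuff back, the same shell-game just runs in reverse (a rough sketch; the destination directory is whatever works for you, and it assumes nothing is holding /usr/local open when you unmount it):

host # umount /usr/local
host # mv /usr/local/non-work-related-stuff /some/other/place/ <--- Or wherever you want it
host # mount /usr/local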

Of course, that's an old trick, but one we've never covered here. As this blog gets larger, we're going to try to devote a little less time to being original. Of course, we mean that in a good way :) Since this is essentially a knowledge-dumping-ground, meant for users and admins of all skill levels, not every post can be about some crazy way to do something you'd have to be insane to want to do in the first place!

Come on in; the mediocrity's fine ;)

Cheers,

, Mike








Tuesday, May 6, 2008

ZFS Command Sheet For Solaris 10 Unix - Pool And File System Creation

Hey There,

Today, we're going back to the Solaris 10 Unix well and slapping together a few useful commands (or, at least, a few commands that you'll probably use a lot ;). We've already covered ZFS and Solaris 10 zones in our previous posts on creating storage pools for ZFS and patching Solaris 10 Unix zones, but those were more specific, while this post is meant to be a little quick-stop command repository (and only part one, today). This series is also going to focus more on ZFS and less on the "zone" aspect of the Solaris 10 OS.

Apologies if the explanations aren't as long as my normal posts are. ...Then again, some of you may be thanking me for the very same thing ;)

So, without further ado, some Solaris 10-specific commands that will hopefully help you in a pinch :) Note that for all commands where I specify a virtual device or storage pool, you can get a full listing of all available devices/pools by "not specifying" any storage pool. I'm just trying to keep the output to the point so this doesn't get out of hand.

Today we're going to look at creation-based commands for storage pools and ZFS file systems, tomorrow we'll look at maintenance/usage commands, and then we'll dig into destructive commands and cleaning up the mess :)

1. To create virtual devices (vdevs), which can either be files carved out of existing disk space or "real" disks if you have them available to you, you can do this:

host # mkfile 1g /vdev1 /vdev2 /vdev3
host # ls -l /vdev[123]
-rw------T 1 root root 1073741824 May 5 09:47 /vdev1
-rw------T 1 root root 1073741824 May 5 09:47 /vdev2
-rw------T 1 root root 1073741824 May 5 09:48 /vdev3


2. To create a storage pool, and check it out, you can do the following:

host # zpool create zvdevs /vdev1 /vdev2 /vdev3
host # zpool list zvdevs <--- Don't specify the name of the pool if you want to get a listing of all storage pools!
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
zvdevs 2.98G 90K 2.98G 0% ONLINE -


3. If you want to create a mirror of two vdevs of different sizes, this can be done, but you'll be stuck with the smallest possible mirror (as it would be physically impossible to put more information on one disk than it can contain. That seems like common sense ;)
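If you're playing along at home with file-backed vdevs, the smaller vdev can be made the same way we made the 1 gig files in step 1 (a quick sketch; the 512m size is just an example):

host # mkfile 512m /smaller_vdev <--- Any size smaller than the other vdev will do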

host # zpool create -f zvdevs mirror /vdev1 /smaller_vdev <--- The mirrored storage pool will be the size of the "smaller_vdev"

4. If you want to create a mirror, with all the disks (or vdevs) the same size (like they should be :), you can do it like this:

host # zpool create zvdevs mirror /vdev1 /vdev2 /vdev3 /vdevn... <--- I haven't hit the max yet, but I know you can attach a "lot" of devices to the same mirror set. Of course, you'd be wasting a lot of disk, and all those extra copies would probably slow your writes down...

host # zpool list zvdevs
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
zvdevs 984M 112K 984M 0% ONLINE -
host # zpool status -v zvdevs
pool: zvdevs
state: ONLINE
scrub: none requested
config:

NAME STATE READ WRITE CKSUM
zvdevs ONLINE 0 0 0
mirror ONLINE 0 0 0
/vdev1 ONLINE 0 0 0
/vdev2 ONLINE 0 0 0
/vdev3 ONLINE 0 0 0

errors: No known data errors


5. You can create new ZFS file systems in your storage pool, and have their directories created and mounted for you, very easily. All you need to do is "create" them with the "zfs" command. Three tasks in one! (and just as easy as creating a pool with the zpool command):

host # zfs create zvdevs/vusers
host # df -h zvdevs/vusers
Filesystem size used avail capacity Mounted on
zvdevs/vusers 984M 24K 984M 1% /zvdevs/vusers


6. If you need to create additional ZFS file systems, the command is the same, just lather rinse and repeat ;)

host # zfs create zvdevs/vusers2
host # zfs create zvdevs/vusers3
host # zfs list |grep zvdevs
zvdevs 182K 984M 27.5K /zvdevs
zvdevs/vusers 24.5K 984M 24.5K /zvdevs/vusers
zvdevs/vusers2 24.5K 984M 24.5K /zvdevs/vusers2
zvdevs/vusers3 24.5K 984M 24.5K /zvdevs/vusers3
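As a side note, each file system's mount point is just a ZFS property, so you can check it, or move it, with the same zfs command (a small sketch; the /export/vusers path is made up purely for illustration):

host # zfs get mountpoint zvdevs/vusers
host # zfs set mountpoint=/export/vusers zvdevs/vusers <--- zfs unmounts and remounts the file system for you when you change this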


See you tomorrow, for more fun with Solaris 10 ZFS/Storage Pool maintenance/usage commands :)

Cheers,

, Mike

Thursday, April 24, 2008

Creating Storage Pools For Solaris ZFS

Hey there,

As a way of moving on to Solaris 10 Unix issues beyond patching zones and migrating zones, today we're going to put together a slam-bang setup of a Solaris ZFS storage pool. For those of you unfamiliar with ZFS (and I only mention this because I still barely ever use it -- lots of folks are stuck on either Solaris 8 or 9 and are having a hard time letting go ;), it simply stands for "Zettabyte File System."

A Zettabyte = 1024 Exabytes.
An Exabyte = 1024 Petabytes.
A Petabyte = 1024 Terabytes.
A Terabyte = 1024 Gigabytes.
A Gigabyte = 1024 Megabytes.


And, at about this point the number scale makes sense to most of us. And, even though a Zettabyte seems like it's an unbelievably large amount of space, the next generation of operating systems will require twice as much RAM and disk space in order to respond to your key presses in under a minute ;)

Anyway, now that we have that out of the way, let's set up a ZFS pool.

First we'll create the pool, in a pretty straightforward fashion, using the zpool command and adding the disks (which can be given in either whole-disk cXtXdX notation or by slice with cXtXdXsX notation; you can also use full logical path notation if you want, e.g. /dev/dsk/cXtXdXsX). We'll assume two entire disks (unmirrored), create the pool and mount it for use:

host # zpool create MyPoolName c0t0d0 c0t1d0

Note that you can run "zpool create -n" to do a dry-run. This will allow you to find any mistakes you may be making in your command line as zpool doesn't "really" create the pool when you run it with the "-n" flag.
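For instance, something like this (using the same example pool and disks as above) just reports what it would have done and exits without touching anything:

host # zpool create -n MyPoolName c0t0d0 c0t1d0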

We also won't have to do anything extra to create the mount point, as it defaults to the name of the pool. So, in this case, we would have a storage pool with a default mount point of /MyPoolName. This directory has to either not exist (in which case it will be created automatically) or exist and be empty (in which case the root dataset will be mounted over the existing directory). If you want to specify a different mount point, you can use the "-m" flag for zpool, like so:

host # zpool create -m /pools/MyPool MyPoolName c0t0d0 c0t1d0

And your ZFS storage pool is created! You can destroy it just as easily by running:

host # zpool destroy MyPoolName

or, if the pool is in a faulted state (or you experience any other error, but know you want to get rid of it), you can always force the issue ;)

host # zpool destroy -f MyPoolName

And you're just about ready to use your new "device." All you need to do is (just like you would with a regular disk device) create a filesystem on your storage pool. You can do this pretty quickly with the zfs command, like so:

host # zfs create MyPoolName/MyFileSystem

Note that the pool name is the base of the storage pool and your file system is actually created one level below it. Basically, MyPoolName is the pool's root dataset, while MyPoolName/MyFileSystem is your file system (mounted at /MyPoolName/MyFileSystem by default). And you can manage this file system, in much the same way you manage a regular ufs file system, using the zfs command.
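Just to give you a feel for it, here are a few quick examples of the sort of day-to-day management I mean (the 10G quota and the compression property are picked purely for illustration):

host # zfs list MyPoolName/MyFileSystem
host # zfs set quota=10G MyPoolName/MyFileSystem
host # zfs get quota,compression MyPoolName/MyFileSystem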

Again, you can always opt-out and destroy it if you don't like it (here, as well, you have the option to "force" if you want):

host # zfs destroy MyPoolName/MyFileSystem

Aside from "-f" (to force execution), the most common flags to use with "zfs destroy" are "-r" to do a recursive destroy or "-R" to recursively destroy all the regular descendants in the file system tree, plus all cloned file systems outside of your target directory (Be careful with that switch!)

We've only just brushed on getting started with zfs, but, hopefully, it's been somewhat helpful and, at the very least, readable ;)

Enjoy,

, Mike

Sunday, November 25, 2007

TSM Backup's Most Misleading Error Message

As much as I'd love to, I won't give you my opinion of Tivoli Storage Manager (TSM) in this post. It's not really about what I think, but will hopefully help out a few folks. TSM, in my experience, is not too difficult to install and set up on a host, but troubleshooting it can be another issue. A lot of times, the error messages it gives you lead you away from the problem rather than to it.

Which brings us to today's post topic: TSM's most misleading error message. This error specifically has to do with Oracle backups.

Assuming your installation has gone without incident, you've already logged into the TSM server from your host and verified all's well, like so:

host.xyz.com# dsmc q sess

And, after logging in (you may be prompted for a password if you haven't set that up to happen automatically), you'll see some marginal information about the TSM version, when the last backup was run, the schedule of backups, etc.
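If you also want to see the backup schedule from the client side, dsmc can usually show you that, too (assuming a schedule has actually been associated with your node):

host.xyz.com# dsmc q sched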

Now, you're ready to turn the product over to the client. It's been installed, but never run. And, sure enough, the next day (at the latest ;) you receive a call that the Oracle backups didn't work. Some minimal investigation into the log files (since logging into the TSM interface from the host using "dsmc" doesn't seem to show any issues, except that no Oracle backups have run) shows that something's very, very wrong! You find yourself faced with this scary error:

ORA-19554: error allocating device, device type: SBT_TAPE, device name:
ORA-27000: skgfqsbi: failed to initialize storage subsystem (SBT) layer Linux Error: 106: Transport endpoint is already connected
Additional information: 7011
ORA-19511: Error received from media manager layer, error text:
SBT error = 7011, errno = 106, sbtopen: system error


So, your first inclination might be to go talk to the folks that manage site backups, because there's obviously a problem with the tape loader or a tape drive. But, this couldn't be farther from the truth.

Here's the kicker. The real issue has nothing at all to do with the error message! The issue is with the "dsmc" process not being able to write to its logs as the oracle user id! Yes, that's true :)

The good news is that, once you've gotten past the mystery, the solution is relatively simple. You can fix this problem very quickly by doing the following:

1. View the dsm.sys file that you've set up (it should be linked to /usr/bin/dsm.sys) and find out where the error logs and activity logs are being written. If you haven't specified this, your logs should be under the default installation directory (under logs/tivoli/tsm -- depending on your version, the location may vary).
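If you'd rather not eyeball the whole file, a quick egrep for the usual log options will get you there (this assumes the locations were actually set with the errorlogname/schedlogname options; if nothing comes back, the defaults apply):

host.xyz.com# egrep -i 'errorlogname|schedlogname' /usr/bin/dsm.sys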

2. Now simply fix the permissions on the log so that the oracle user id can write to them:

chgrp dba errlog.log <--- Assuming dba is oracle's primary group.
chmod g+w errlog.log

3. Verify that all directories above this are accessible as well. The easiest way to figure out if you've gotten it right is to use the oracle user id to do the check:

su - oracle
cd /log/location/
echo hi >>errlog.log
vi errlog.log <--- To remove the line with "hi" on it if you want.

Once that's all set, the "bad tape" error goes away and Oracle backups start working. How about that? :)

, Mike