Friday, July 11, 2008

Using Screen, Script, Mkfifo And Redirection To Watch Or Log User Sessions

Hey there,

Today we're going to continue, in a sense, where we left off on yesterday's post regarding using mkfifo for user monitoring. In this post, we're going to look at that a bit more specifically, and also touch on some other ways you can do the same thing (if not better). Everything we'll be going over today should be either freeware or come with your OS (or both :)

Again, please check our older posts on executing new file descriptors and shell redirection if you need that part of the process fleshed out more. We're going to take it as a given that you basically get how that works. (This is the biggest hazard of writing a tech blog for everyone: sometimes it's hard to explain the basics without insulting the experienced user, or to explain a more advanced concept without seeming to talk down to the new user. Whatever I write here, no offense meant to any reader. Except a few of you. You know who you are ;)

Now, we'll get to the meat and go after logging and/or monitoring user sessions. You have various options, depending on what you want to do. Personally, I think it's cool to be able to watch someone's process in real time, but sometimes (especially if you want to make sure you always keep what you're seeing) it's better to dump output to files and just cat or tail those in almost-real-time. If being a few microseconds behind the user you're monitoring has the power to ruin your entire day, please stop reading now. This will end badly no matter what ;)

1. Using mkfifo and enhanced profile security: This can be fun, and it also has the potential to be miserable, depending on the OS you're running. For instance, Solaris' standard "script" command only has one flag (-a, to append to a previous script file), is notorious for rarely flushing the buffer (so you'll get 50 pages of user messages at once and then nothing for 20 minutes) and will often clip the information sent through a FIFO if the session ends before it flushes for the first time, leaving you with incomplete output!

The first thing you'll want to do is modify your users' .profile (or appropriate shell settings file) to be secured as if the account were su-only. If your user can modify the file after you've made your changes, you can bet they'll remove the script line as soon as possible. If you want to meet your users half-way, insist on a root-owned .profile that the user can read but not write (644, or 640 with the user's group) and have the last line source in another file that they can use to make their own customizations, as deemed appropriate or possible (there's a lockdown sketch just after the snippet below). Here's what I'd put in somebody's .profile to create a FIFO and start script running when they log in, piping to that FIFO (Note that, although script normally makes it so you have to exit twice to log out - because it opens up in a subshell - using it in the .profile makes it so that when the user exits once, the connection is terminated!):

# Name the FIFO after the user and the PID of their login shell
tempvar=$$
user="`logname`.$tempvar"
myfifo=/tmp/$user
# Clear out any leftover pipe and create a fresh one
/usr/bin/rm -f $myfifo
/usr/bin/mkfifo $myfifo
# Kick off script in an interactive subshell, writing to the FIFO
/usr/bin/bash -isc "/usr/bin/script $myfifo"
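
If you want the lockdown piece described above, a minimal sketch, run as root, might look like this (the /export/home/username path and the .profile.local name are just placeholders for whatever your setup actually uses):

host # chown root:root /export/home/username/.profile
host # chmod 644 /export/home/username/.profile

and the last line of that root-owned .profile can hand personal customizations back to the user:

[ -r "$HOME/.profile.local" ] && . "$HOME/.profile.local"

That keeps the .profile readable (so the login shell can still source it) but only modifiable by root.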


Then, as soon as the user logs in, you can have an automated process begin reading from the named pipe (/tmp/username.112327, for instance) and either begin writing it to the screen or dumping it to a file, or both (there's a sketch of one such watcher after the examples below). If you want to read the output to the screen you're on, just do:

host # cat /tmp/username.112327

or

host # tail -f /tmp/username.112327

to dump off the output into a log, just do:

host # cat /tmp/username.112327 >/tmp/log

to do both, just do something like:

host # cat /tmp/username.112327 2>&1 | tee /var/logdir/log
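
If you'd rather automate the "catching logins" part, here's a minimal sketch of a watcher that picks up any new named pipe in /tmp and starts tee'ing it to a per-session log (the /var/logdir directory and the /tmp/*.* naming are assumptions that just match the .profile snippet above):

#!/bin/bash
# Loop forever, looking for named pipes dropped in /tmp by the
# .profile snippet above, and start a reader for each new one
while true; do
    for fifo in /tmp/*.*; do
        log="/var/logdir/`basename $fifo`.log"
        # Only named pipes, and only ones we haven't already picked up
        if [ -p "$fifo" ] && [ ! -e "$log" ]; then
            cat "$fifo" 2>&1 | tee -a "$log" &
        fi
    done
    sleep 5
done

Each reader exits on its own when the user logs out and script closes its end of the pipe.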

For my money, like I said, you're better off just dumping to log. You won't have to worry about catching logins with another process and you can always read the log later (or watch it in real time). If you decide that's the best option, change that section of your users' .profile to read like this instead:

tempvar=$$
user="`logname`.$tempvar"
myfifo=/tmp/$user
/usr/bin/rm -f $myfifo
/usr/bin/mkfifo $myfifo
# Start the background reader first, so script's writes to the FIFO
# don't block waiting for someone to open the other end
cat $myfifo >/var/logdir/$user.log 2>&1 &
/usr/bin/bash -isc "/usr/bin/script $myfifo"


2. Using screen and enhanced config file security: GNU Screen is another alternative for root (or users of similar privilege) to keep an eye on users. The rules for securing screen's configuration files are the same as those we went over regarding su-only accounts, just applied to different files. The theme is the same: make the files usable, but not modifiable, by the user.

Screen is a bit easier to set up for ethical hijacking. All you need to do is modify the users' ".screenrc" file, lock it down so they can't change it (there's a setup sketch at the end of this section) and add the lines:

multiuser on
acladd root


and, of course, set up their profiles to launch screen at login. With these conditions in place, all you need to do, as root, to watch a user session is to type the following at the command line:

host # screen -x user

If you're not sure who's on, or one user has multiple screens open:

host # screen -x

with no specific argument should let you know :) You may then, as with script, decide that you're better off dumping to a logfile and not constantly monitoring (or using resources to monitor). With screen, you don't need to jump through any gigantic hoops (making FIFOs, etc., unless you want to ;) If your original entry in the users' .profile was:

screen

all you would have to do to change it to log the output to a file, as well as the screen, would be to change that to:

screen -L <-- logs a copy of the session to screenlog.NUMBER in the current directory by default (you can pick a different name with the "logfile" command in the screenrc)
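
To tie the screen pieces together, here's a minimal setup sketch, run as root, assuming a hypothetical user "bob" with a home directory of /export/home/bob:

host # chown root:root /export/home/bob/.screenrc
host # chmod 644 /export/home/bob/.screenrc

with the "multiuser on" and "acladd root" lines from above already in that .screenrc, and the last line of bob's equally locked-down .profile reading simply:

screen -L

One caveat: on many builds, the screen binary has to be installed setuid root before multiuser mode will work at all, so don't be surprised if "screen -x" complains until that's taken care of.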

3. Wrapping up - Big differences between "script" implementations:

I just thought I'd end on this note, since it's crazy, and I can't tell you for sure what to expect from "script" between distros. The implementation of this command varies wildly. For instance, as I mentioned above, Solaris' "script" only has a "-a" option to append to an existing script output file. BSD's "script" command (and most Linux distros', it seems) has a lot of other useful options that you can use to make your life (and doing this sort of thing) easier. Here are the additional flags that I know of:

-c COMMAND <-- This runs a command under script, which means you could launch bash from script directly rather than having to re-invoke bash and call script from there (there's a combined example after this list).

-f <-- This flushes the buffer after every write. So, unlike the Solaris "script," you can actually get your results back in real time. Very helpful if you need to watch a user's activity.

-q <-- Be "extremely" verbose... Just kidding ;) This tells script to not announce itself starting and stopping. This might prevent certain users from fighting you over this from day-one!

-t <-- This tells script to output timing data. The timing goes to STDERR as the session is being recorded, alongside the regular scriptfile. So, if you wanted to capture the timing in a file, something like this would do:

host # script -t 2>MyNewTimeLog MyOriginalOutput

and then you can use that file to read your original "script" output back at relatively the same speed at which it was captured, using the "scriptreplay" command, like this:

host # scriptreplay MyNewTimeLog MyOriginalOutput

If you've got an hour's worth of output and don't want to go through the hassle of removing all the control characters, this is a great help. If you just cat the original scriptfile, it'll run through itself in under a minute and may cause seizures ;)
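
And, to tie this back to section 1: if your "script" supports these flags (most Linux distros' versions seem to), the .profile snippet gets a little simpler, since -c can launch the user's shell directly, -f flushes as it goes and -q keeps quiet about it. A rough sketch, with the same assumed paths as before:

tempvar=$$
user="`logname`.$tempvar"
myfifo=/tmp/$user
/usr/bin/rm -f $myfifo
/usr/bin/mkfifo $myfifo
# -f flushes on every write, -q suppresses the start/stop messages and
# -c runs the user's shell under script without re-invoking bash first
/usr/bin/script -f -q -c /bin/bash $myfifo

The reading end (cat, tail -f or the watcher sketch from earlier) works exactly the same way.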

Now you should be all set to start monitoring users' activities and output. If your company wants to purchase a commercial product instead, you can always put that on your resume!

Hope this has been a helpful, if not enjoyable, diversion :)

Cheers,

Mike