
Wednesday, July 9, 2008

Using Traps Outside Of Shell Scripts On Unix Or Linux

Hey There,

Generally, when most folks think of the trap command, it's in the context of shell scripting. This is for good reason since it's always good practice to trap any signals that you need to handle. For instance, if you don't want your script to be crashed by someone hitting control-c (or sending a SIGINT), you can trap signal number 2 (SIGINT's corresponding numeric value) and execute some other command (or nothing) if a user tries to crash out of your script that way. It's not iron-clad/bullet-proof security, but it's good enough to protect against accidental damage or interruption. If you hit the "previous post" link a hundred times (or just check this link to our old post on trapping signals in shell and Perl ;) that should give you some idea of the advantages of trapping signals in your scripts.

Another venue that often gets overlooked, but is just as useful (if not more so), is the use of traps in the command line interface. These can easily be introduced into a user's environment using whatever startup file gets sourced by default when the shell is invoked (.profile for sh/ksh or .profile/.bashrc for bash, etc). Combining this functionality with security measures of a slightly lesser degree than those proposed in our post on making accounts su-only can make controlling the average user's ability to muck with the shell that much easier to manage for the working sysadmin.
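For instance, here's a bare-bones sketch of the sort of thing you might drop into such a user's .profile (the signal choices and the message are purely illustrative - tailor them to your environment):

trap '' 1 3 15 <--- Ignore HUP, QUIT and TERM outright
trap 'echo No Interrupting, Please...' 2 <--- Answer control-c with a gentle reminder instead

Once that gets sourced at login, a stray control-c just prints the message, and a hangup won't take out the session, either.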

In simplistic fashion, you can test this yourself at the command line by typing the following (this should work in sh, ksh and bash on Linux or Unix - pick a flavour):

host # trap 'echo Quit Interrupting Me!' 2

and then type a control-c character at your next shell prompt to get the following response:

host # ^C <-- This should be the control key and the "c" key pressed simultaneously. If you ever want to actually print control characters and retain their effects, check out our post on how to really write control characters.

host # Quit Interrupting Me!

host #


You can then list out the trap you set (for kicks) and simply reset it by doing one of a few things (this is different in some instances, and the same in others, across sh, ksh and bash):

1. In sh: To list out the traps you've set, do:

host # trap
2: echo Quit Interrupting Me!
<-- Note that sh uses the signal "number"

and to reset them, just type:

host # trap 2
host # trap
<-- To make sure it's gone. In all shells you have to do this for each one separately. Check below for a simple way to reset them all.
host #

2. In ksh: To list out the traps you've set, do:

host # trap
trap -- 'echo Quit Interrupting Me!' INT
<-- Note that ksh uses the signal's 3 letter abbreviation

and to reset them, just type:

host # trap - 2 <-- Note the additional - argument ksh requires
host # trap <-- To make sure it's gone.
host #

3. In bash: To list out the traps you've set, do:

host # trap -p
trap -- 'echo Quit Interrupting Me!' SIGINT
<-- Note that bash uses the full signal name, as defined in your system's signal header file.

and to reset them just type:

host # trap - 2 <-- bash also requires the additional - argument.
host # trap
host #


On to resetting traps more efficiently!

As you've noted above, we've gone through the simple steps of setting a trap, executing it and then removing it. While this is relatively painless to do for a trap set on one signal, this can become a huge PITA if you're dealing with more than one signal (and, possibly, multiple signals from a finite pool, the size of which you're not sure, which are all possibly set with traps). Luckily, this is fairly easy to take care of.

Doing the following will wipe all of the traps you have set on all available signals in one fell swoop. Then you can walk away and get on with your life ;) Note that it's slightly different in sh/ksh and in bash.

For sh and ksh, the trap built-in is bare-bones, so you're restricted to its basic functionality. On Solaris 8, which I'm using right now, there are no Gnu niceties. But, this is still okay. We already covered listing out all of your traps, and we can reset all of those traps for all three of the shells we're looking at today, each in one step:

1. In sh, to reset all of your traps, you'll need to do some educated guessing to figure out how many basic signals you have to work with, so that you can be sure you reset them all and don't reset way more than you need to. You can usually get this information from an include file, like /usr/include/sys/iso/signal_iso.h. Just start your range at zero and end with the highest number signal in that file. 37, for Solaris 8, is a safe number and covers more bases than most people ever cross. So, to reset the traps you have set on all 38 signals, just type:

host # trap $(seq 0 37)

The rub here is that this actually won't work on the operating system version I'm using, since it doesn't support the "seq" command or the ".." range operator. To preserve the sanctity of this noble old shell, just use a simple while loop to iterate through, like so:

host # x=0;while [ $x -lt 38 ];do trap $x;x=`expr $x + 1`;done
host # trap
<-- To double check that they're all gone.

2. In ksh, you still can't use seq on older versions of Solaris, but it will work on Linux and more recent Solaris releases, so you can cut down your "trap reset time" by doing:

host # trap - $(seq 0 37)
host # trap


3. In bash, just to sidetrack you one last time, you'll be using a shell built-in version of "trap," which has a handy flag that you can use to list out all the available signals (rather than doing it the old-fashioned way, like above):

host # trap -l
1) SIGHUP 2) SIGINT 3) SIGQUIT 4) SIGILL
5) SIGTRAP 6) SIGABRT 7) SIGEMT 8) SIGFPE
9) SIGKILL 10) SIGBUS 11) SIGSEGV 12) SIGSYS
13) SIGPIPE 14) SIGALRM 15) SIGTERM 16) SIGUSR1
17) SIGUSR2 18) SIGCHLD 19) SIGPWR 20) SIGWINCH
21) SIGURG 22) SIGIO 23) SIGSTOP 24) SIGTSTP
25) SIGCONT 26) SIGTTIN 27) SIGTTOU 28) SIGVTALRM
29) SIGPROF 30) SIGXCPU 31) SIGXFSZ 32) SIGWAITING
33) SIGLWP 34) SIGFREEZE 35) SIGTHAW 36) SIGCANCEL
37) SIGLOST


Sure, it's a bit more than you needed to know, but it tells you how many signals you have and comes in pretty handy as a quick reference :)

Now to reset all of your signal traps in bash, you can use either seq (on Linux or newer Solaris) or, even better, bash's built-in range operator (which can be used to easily copy files, as shown in our previous post regarding simplified file renaming), like this:

host # trap - {0..37}
host # trap


And you're back to square one ;)

, Mike

Friday, July 4, 2008

Basic I/O Redirection Differences In Sh/Ksh, Bash and Zsh On Linux And Unix

Hey there,

Today we're going to look at four different shells (all tests were run on a Solaris 8 box and an old RedHat Linux 9 server) and some very basic I/O redirection (which can come in handy if you're ever stuck trying to read and write files in a blind shell), to show you how differently, and how commonly, they all handle the same information. The only reason I find this sort of stuff remarkable is that it's so common; sometimes it's hard to wrap my head around the fact that any distinctions exist (Okay, they're not HUGE, but the differences, where there are any, are enough to make you wonder ;) For instance, in our two experiments, we're only going to find one difference each. But, since the material we're working with is so basic (not meaning "simple"), it's a bit disconcerting (...oh, the drama ;)

First of all, we'll narrow down our list of shells and consider sh and ksh to be the same, since all of my testing, with regards to these two shells, yielded exactly the same results. Note that, for all of our tests, we only tried to create two conditions in a variety of ways; the first was to create an empty file, and the second was to redirect standard error (STDERR) to a file. See what I mean? This is very familiar ground for almost all Unix or Linux admins.

Now let's take a look at how sh/ksh, bash and zsh are the same, when it comes to basic I/O redirection.

First test: Create an empty file. The following methods worked in all 4 shells (producing the expected output from the file and "wc -l" commands of "empty file" and 0, respectively):

host # : > FILE

This creates an empty file by using the colon (:) placeholder (which, technically, is a simple notation for "true") as output to be redirected into the blank FILE.

host # /bin/true >FILE

Pretty much the same as the previous example, except we're running /bin/true explicitly.

host # /bin/false >FILE

Again, the same as the previous example, except we're using /bin/false for a goof. It's another program that doesn't produce output and is mainly used to generate a desired return status.

host # cat /dev/null >FILE

Just catting nothing to the FILE, and it ends up empty, as expected.

host # <>FILE

and this works as well (specifying empty redirection of STDIN and STDOUT) on all 4 shells.

host # echo <> FILE

While this, finally, spit a line of output to the screen, it still produced an empty file in all 4 shells, even with extra arguments to echo, since they all spill out to the screen instead of going into the file. Kind of a waste of resources, but somewhat interesting ;)

host # echo how are ya <> FILE
how are ya


Now here's the only basic thing I could find where they differed, in this respect:

host # >FILE

This produces an empty file on Solaris sh, ksh and bash (generally, it's shorthand for "cat /dev/null >FILE"), but results in a perpetual hang with zsh (rather than inferring /dev/null (or true) from the missing command, zsh runs whatever its NULLCMD variable names - "cat" by default - which just sits there waiting on STDIN).
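As an aside, if you want zsh to fall in line with the other shells here, you can just point that NULLCMD variable at ":" instead. A quick sketch:

host # NULLCMD=:
host # >FILE <--- zsh now runs ": >FILE" for you and produces the empty file like everyone else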

Second test: Redirecting STDERR to a file. The similarities.

host # truss echo hi >FILE

This produces a non-empty file (with the word "hi" in it) in all 4 shells, since truss sends its output to STDERR and we're only redirecting STDOUT. This doesn't really prove any part of the experiment, but I wanted to put it out as a baseline, so we know that the truss output is going where we expect it to before we start goofing around :)

host # truss echo hi >FILE 2>&1

This, tying STDOUT and STDERR together to put all output into the file, works on all 4 shells as well :)

But, the following command only "really" works as expected in bash, and produces remarkably different results for the other shells:

host # truss echo hi &>FILE

In bash, the &> notation is shorthand for ">FILE 2>&1" - but this doesn't work for our other shells. In Solaris sh and ksh, the command spits output to the screen, since the & activates the shells' backgrounding feature, the job completes and the output file is completely empty. On zsh, "hi" gets spit to the output terminal and only STDERR goes into the output file (???)

Remember, these have just been really basic examples. Thinking about this sort of stuff makes you realize how truly "different" different shells are. Of course, among these very basic examples, we could only find two major differences, but that's still a good number considering how rudimentary the shell functions are. In other words, these are I/O redirection issues which you can't change, except by script-wrapping (which probably isn't worth it, since, for each difference we found at least 2 similarities between the 4 shells and both experiments only tested one property).

If you're not sleeping yet, I'll assume this article was, at least somewhat, entertaining and/or helpful ;)

Have a Happy 4th of July!

, Mike

Friday, June 6, 2008

Piped Variable Scoping In The Linux Or Unix Shell

Hey There,

Today we're going to look at variable scoping within a piped-while-loop in a few Linux and Unix shells. We're actually almost at this point in our series of ongoing posts regarding bash, Perl and awk porting.

Probably the most interesting thing about zsh (and shells that share its characteristic, in this sense) is that the scope of variables passed through a pipe is slightly different than in other shells, like bash and ksh (note that not all vendors' versions are equal, even if they have the same name! For instance, HP-UX's sh is the Posix shell, while Solaris' is not). I'm taking care to separate the piping construct from the "is the while loop running in a subshell?" argument, as I don't want to get too far off course. And, given this material, that can happen pretty fast.

For a very simple demonstration of whether the scoping issue is a "problem" (defining problem as either a bug or a feature ;) with the while-loop or pipes, we'll look at a very simple "scriptlet" that sticks to using a while-loop, without any piping, like this:

while true
do
bob=joe
echo " $bob inside the while"
break
done
echo " $bob outside the while"


And we can see, easily, that the value of the "bob" variable stays the same, even after the while loop breaks, for all 3 of our test shells. If the while loop, alone, was the issue, bob shouldn't be defined when the while loop breaks:

host # zsh ./test.sh
joe inside the while
joe outside the while
host # ksh ./test.sh
joe inside the while
joe outside the while
host # bash ./test.sh
joe inside the while
joe outside the while



If we change this scriptlet slightly to make it "pipe" an echo to the while-loop, the behaviour changes dramatically:

echo a|while read a
do
bob=joe
echo " $bob inside the while"
break
done
echo " $bob outside the while"


Now, if we use zsh, the value assigned to the "bob" variable inside our while loop (which has been created on the other side of the pipe) actually maintains its state when coming out of the loop, like this:

host # zsh ./test.sh
joe inside the while
joe outside the while


On most other shells, because of variable scope issues with the pipe, an empty value of the "bob" variable is printed after they break out of the while loop, even though it does get correctly defined within the while loop. This is because (and here's where the technicality, and subtle differences between myriad shells, usually becomes a hotbed of raging debate ;) after the pipe, the read command (as opposed to the while loop) runs in a subshell, like so:

host # bash ./test.sh
joe inside the while
outside the while
host # ksh ./test.sh
joe inside the while
outside the while


Notice, again, that the "echo $bob outside the while" statement in these two executions prints an empty variable once we're back outside the while loop, even though bob does get set correctly within it.
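If you'd like to see the subshell rule in action without any pipes muddying the water, a variable assigned inside explicit subshell parentheses evaporates in exactly the same way:

host # bob=moe
host # (bob=joe; echo " $bob inside the subshell")
joe inside the subshell
host # echo " $bob outside the subshell"
moe outside the subshell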

For most shells, this is easy to get around in one aspect. The main problem stems from the fact that the value is being piped to the while loop, and not a direct fault of the while loop itself. Therefore, a fix like the following should work, and does. Unfortunately, with the command-pipe (such as an echo statement), you won't be able to use a while-loop in many cases, and would have to substitute a for-loop, like so (In most shells, redirecting at the end of a while loop with << will either result in an error or clip the script at that line):

for x in 1
do
bob=joe
echo " $bob inside the while"
done
echo " $bob outside the while"


host # zsh ./test.sh
joe inside the while
joe outside the while
host # ksh ./test.sh
joe inside the while
joe outside the while
host # bash ./test.sh
joe inside the while
joe outside the while


This gets worse (usually hangs) if you try to get around the pipe by doing some inline subshelling with backticks, like:

while read `echo 1`

However, the following solution (awkward though it may be) does actually do the trick (substitute any other fancy i/o redirection you want, as long as you "avoid the pipe"):

exec 7<>/tmp/bob
echo -n "a" >&7
while read -r line <&7
do
bob=joe
echo " $bob inside the while"
done
echo " $bob outside the while"
exec 7<&-
exec 7>&-
rm /tmp/bob

host # zsh ./test.sh
joe inside the while
joe outside the while
host # ksh ./test.sh
joe inside the while
joe outside the while
host # bash ./test.sh
joe inside the while
joe outside the while


For more examples of input/output redirection, check out our older post on bash networking using file descriptors.

Now, when it comes to reading in files, the case is a bit easier to remedy. If you're in the habit of doing:

cat SOMEFILE |while read x
...


You'll run into the same scoping problem. This is also easily fixed by using i/o redirection, which would change our script to this:

while read x
do
bob=joe
echo " $bob inside the while"
done < SOMEFILE
echo " $bob outside the while"


Assuming the file SOMEFILE had one line of content, you'd get the same results as we got above with the for loop.

And that's about all there is to that (minus the highly-probable ensuing arguments ;). There are, I'm sure, a couple more ways to do this, but using the methods that fixed the "problem" of variable scope in bash and ksh is probably better practice, since zsh (and shells that share its distinction in this case) is a rare exception to the rule (even though zsh may very well be doing things the "proper" way) and the bash/ksh fix works in zsh, while the opposite is not true.
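One forward-looking footnote: newer releases of bash (4.2 and up) added a "lastpipe" shell option that runs the final command of a pipeline in the current shell, zsh-style. A sketch, assuming a new enough bash and that job control is off (which it is, by default, in scripts):

#!/bin/bash
shopt -s lastpipe
echo a|while read a
do
bob=joe
done
echo " $bob outside the while" <--- prints "joe outside the while" once lastpipe is set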

At long last, good evening :)

, Mike


Thanks for this comment from Douglas Huff, which helps to clarify the underbelly of the process:

A friend of mine pointed me to this article and the
previous one in the series that you wrote [on variable scoping]...

I had two comments on these articles but you seem to have
comments disabled, so I figured I'd email them to you.

First, calling it a "scoping" issue is a bit misleading.
While technically true, understanding the underlying
reasons why this doesn't work as "expected" is key to
understanding how you can work around it in POSIX sh or in
ksh without the zsh/bash syntactical sugar for doing so.

What's going on is that a process cannot modify the
environment of its parent.

When you do:

something | while read blah; do blah; done

What the shell is doing is first executing a subshell
(separate process) that runs the while with stdin
redirected to read from the unnamed pipe. Then in another
subshell it runs "something" with standard out redirected
to the unnamed pipe.

Knowing this it's quite easy to replicate the behaviour
from bash 2/3 and zsh in POSIX sh and ksh with a bit of
understanding of the underlying mechanics. The trick is to
keep the while inside of the original process (since it is
run by the interpreter and does not require a separate
process) and execute the other command in a subshell.
Which is exactly what the syntactical sugar does for you
behind the scenes in bash2&3/zsh.

Thursday, May 29, 2008

Simple Shell One-Liner To Enumerate File Types In Linux and Unix

Hey there,

Lately, we've been focusing a lot on Perl "one liners," from mass file time syncing to name and IP resolution and I thought it was only fair that we should write a post about a shell "one liner" for once. After all, most standard Unix and Linux shells are perfect for that purpose :)

Here's a very quick way to take an inventory of all the different file types (including directories, sockets, named pipes, etc) that exist in a given directory tree and provide a tally of each file type. I'm not entirely sure that I care if I have 15 ASCII text files and 2 Perl scripts in my current working directory, but this little piece of code might be able to help someone somewhere accomplish something, even if it's only used as a smaller part of a larger organism ;)

This works in pretty much any standard shell on every flavour of Linux and/or Unix I've tested (ash, sh, bash, jsh, ksh, zsh, even csh and tcsh - which is huge for me, since I never use those shells). One day, soon, I will redouble my efforts and just learn how to use them well, which, hopefully, won't result in my writing a whole bunch of posts about "cool" stuff that everyone has known about for the past few decades ;)

This one-liner could actually be written as a script, to make it more readable, like this:

#!/bin/sh
find . -print | xargs -I var file "var"|
awk '{
$1="";
x[$0]++;
}
END {
for (y in x) printf("%d\t%s\n", x[y], y);
}' | sort -nr


But, since I'm a big fan of brevity (which is about as far away from obvious as possible if you consider my writing style ;), I would run it like this:

host # find . -print|xargs file|awk '{$1="";x[$0]++;}END{for(y in x)printf("%d\t%s\n",x[y],y);}'|sort -nr

In my terminal, that all comes out on one line :) And here's the sort of output you can expect:

host # find . -print|xargs file|awk '{$1="";x[$0]++;}END{for(y in x)printf("%d\t%s\n",x[y],y);}'|sort -nr
23 Bourne-Again shell script text
11 ASCII text
10 perl script text
5 pkg Datastream (SVR4)
4 directory
4 ASCII English text
2 UTF-8 Unicode English text, with overstriking
2 Bourne shell script text
1 RPM v3 bin i386 m4-1.4.10-1


and, just to confirm the file count, we'll change the command slightly (to total up everything, instead of being particular -- another "one-liner" that's, admittedly, completely unnecessary unless you're obsessive/compulsive like me ;) and compare that with a straight-up find:

host # find . -print|xargs file|awk 'BEGIN{x=0}{x++}END{print x}'
62
host # find .|wc -l
62


Good news; everything appears to be in order! Hope this helps you out in some way, shape or form :)

Cheers,

, Mike

Friday, April 25, 2008

Determining Signal Definitions In Linux and Unix

Greetings,

Today, we're going to take a quick look at simple return codes (exit status values) that get returned by the bash and ksh shells in Linux (and Solaris Unix). This complements a post we did a while back on trapping signals in Perl and the shell, and adds a bit more to our understanding of the fork-and-exec process introduced in our post on running a login shell on a network port.

While these signals are subject to change, the POSIX standard is fairly consistent between most Unix and Linux systems. If you ever need to find out what a signal means, you have a number of options at your disposal to determine that information.
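Before you go digging through header files at all, it's worth remembering that the kill built-in in bash (and most modern ksh builds) can do the translation for you in both directions. A quick sketch (the exact output formatting varies a little from shell to shell):

host # kill -l 2 <--- Which signal is number 2?
INT
host # kill -l TERM <--- And which number is SIGTERM?
15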

1. At the process level on Solaris, you can figure out what signals are associated with what processes by using the "psig" command. This command will list out all signals associated with a given process (assuming you know its process ID). A sample of this process-finding process would be something like:

host # ps -ef|grep "[l]pd"
daemon 2073 1 0 Mar 21 ? 0:00 /usr/local/sbin/lpd
host # psig 2073
2073: /usr/local/sbin/lpd
HUP caught sigacthandler
...
ILL default
...
PIPE ignored
...


basically, you'll get a ton of output. And none of it may make any sense to you. That's okay for now :)

2. At the process level on Linux, you can get approximately the same functionality from the "crash" command ( used interactively, if available ) or, even better, this publicly available script written specifically to emulate Solaris' psig functionality, for use on Linux. It's called psig.sh for Linux and is an excellent tool for replicating the psig experience for Linux users who are used to Solaris' proc commands.

3. On either Linux or Unix (any flavor), you can determine what signals are handled by your machine, what their names are and what their signal numbers are, by checking your system include files. On Solaris, the header file that contains this information is almost guaranteed to be named /usr/include/sys/iso/signal_iso.h (at least for Solaris 9 and 10). In Linux, it will generally be very specific to the architecture of your build system (something like: /usr/include/asm-x86_64/signal.h)

In any event, you can follow a simple process to figure out where your system's signal definitions are loaded, no matter what flavor of Unix or Linux you're running. It all comes down to derivation and, luckily, you'll always have the same starting point (I may be wrong on this, but I've never experienced it ;).

Here's a quick command line walkthrough of two different ways to figure out where your signal definitions are listed (that is, the include/header file that defines the signal numbers, their abbreviated signal names and definitions) that can be used on almost any *nix system. As a starting point, /usr/include/signal.h is probably the best, since it exists on most systems. We'll also use SIGHUP (the standard HangUp signal and signal number 1) to guide our search:

The painful way:

host # grep SIGHUP /usr/include/signal.h|grep -w 1
host # grep include /usr/include/signal.h
#include <sys/feature_tests.h>
#include <sys/types.h>
...
host # grep include /usr/include/signal.h|sed 's/^.*<\([^>]*\)>/\1/'|xargs -ivar grep SIGHUP /usr/include/var
host # grep include /usr/include/signal.h|sed 's/^.*<\([^>]*\)>/\1/'|xargs -ivar grep "^#include" /usr/include/var|grep sig
#include <sys/iso/signal_iso.h>
#include <sys/siginfo.h>
...


... and so on, and so on, until you find the right file. Basically, just follow the includes until you reach the end. This can be tedious and time consuming.

The incredibly easy way:

Solaris_9_or_10_host # find /usr/include|xargs grep SIGHUP /dev/null|grep -w 1
/usr/include/sys/iso/signal_iso.h:#define SIGHUP 1 /* hangup */
<--- This is our file :)

or

Linux_host # find /usr/include|xargs grep SIGHUP /dev/null|grep -w 1
/usr/include/bits/signum.h:#define SIGHUP 1 /* Hangup (POSIX). */
<--- Note that all 3 of these entries are valid, with this header file being the most generic. Our next command can narrow down the most "specific" correct result.
/usr/include/asm-x86_64/signal.h:#define SIGHUP 1
/usr/include/asm-i386/signal.h:#define SIGHUP 1
Linux_host # uname -pr
2.6.5-7.286-smp x86_64
<--- Now we know that, if we want to be extra sure, the "/usr/include/asm-x86_64/signal.h" is the signal definition file we should be looking at to determine our signal information.

And that's all there is to it :) As an interesting bit of trivia, if you receive an exit status from a process that's higher than 128, you can easily deduce what signal killed that process. All you need to do is deduct 128 from the return code, like this:

host # sleep 2
host # echo $?
0
host # sleep 200
^C
host # echo $?
130


In this case, when run normally, our sleep command exited with a return code of 0 (success). When we did a control-C during the sleep command, it returned a code of 130. Of course, you won't find this signal number defined in any of your include files, but, utilizing the shell's native reporting of signal interrupts, all you need to do is subtract 128 from 130 and you now know that the sleep command was killed by a signal 2, which is defined as:

SIGINT 2 /* interrupt (rubout) */ <--- on Solaris

and

SIGINT 2 /* Interrupt (ANSI). */ <--- on the Linux box I'm using.

Both answers, although formatted differently, indicate the same signal. Nice :)
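If you'd rather have the shell do the subtraction for you, here's a little sketch that works in bash (and should in any reasonably recent ksh):

host # sleep 200
^C
host # status=$?
host # [ $status -gt 128 ] && echo "Killed by signal $(($status - 128)) ($(kill -l $(($status - 128))))"
Killed by signal 2 (INT)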

Below, I've listed the standard signals for your convenience (from a fairly generic file - signum.h) and take no credit for writing them myself ;)

Enjoy,

Some Basic signals from /usr/include/bits/signum.h on Linux (Note that I'm only listing the basic signals and not I/O polling signals, etc)

#define SIGHUP 1 /* Hangup (POSIX). */
#define SIGINT 2 /* Interrupt (ANSI). */
#define SIGQUIT 3 /* Quit (POSIX). */
#define SIGILL 4 /* Illegal instruction (ANSI). */
#define SIGTRAP 5 /* Trace trap (POSIX). */
#define SIGABRT 6 /* Abort (ANSI). */
#define SIGIOT 6 /* IOT trap (4.2 BSD). */
#define SIGBUS 7 /* BUS error (4.2 BSD). */
#define SIGFPE 8 /* Floating-point exception (ANSI). */
#define SIGKILL 9 /* Kill, unblockable (POSIX). */
#define SIGUSR1 10 /* User-defined signal 1 (POSIX). */
#define SIGSEGV 11 /* Segmentation violation (ANSI). */
#define SIGUSR2 12 /* User-defined signal 2 (POSIX). */
#define SIGPIPE 13 /* Broken pipe (POSIX). */
#define SIGALRM 14 /* Alarm clock (POSIX). */
#define SIGTERM 15 /* Termination (ANSI). */
#define SIGSTKFLT 16 /* Stack fault. */
#define SIGCLD SIGCHLD /* Same as SIGCHLD (System V). */
#define SIGCHLD 17 /* Child status has changed (POSIX). */
#define SIGCONT 18 /* Continue (POSIX). */
#define SIGSTOP 19 /* Stop, unblockable (POSIX). */
#define SIGTSTP 20 /* Keyboard stop (POSIX). */
#define SIGTTIN 21 /* Background read from tty (POSIX). */
#define SIGTTOU 22 /* Background write to tty (POSIX). */
#define SIGURG 23 /* Urgent condition on socket (4.2 BSD). */
#define SIGXCPU 24 /* CPU limit exceeded (4.2 BSD). */
#define SIGXFSZ 25 /* File size limit exceeded (4.2 BSD). */
#define SIGVTALRM 26 /* Virtual alarm clock (4.2 BSD). */
#define SIGPROF 27 /* Profiling alarm clock (4.2 BSD). */
#define SIGWINCH 28 /* Window size change (4.3 BSD, Sun). */
#define SIGPOLL SIGIO /* Pollable event occurred (System V). */
#define SIGIO 29 /* I/O now possible (4.2 BSD). */
#define SIGPWR 30 /* Power failure restart (System V). */
#define SIGSYS 31 /* Bad system call. */
#define SIGUNUSED 31
#define _NSIG 65 /* Biggest signal number + 1 */


, Mike

Thursday, April 3, 2008

Setting Static And Dynamic Window Titles From The Shell

Hey There,

Following in the tradition of our "terminal beautification" posts on using color in the shell and making fancy looking menus with tput, today we're going to take a look at setting the title of your terminal window from within the shell.

This is actually a slight variation from what we've looked at before, but it can be just as helpful, since you'll be using your shell to modify the "content" of your terminal's title in Windows (or whatever OS you use). This can be a great help in keeping organized. Most terminal clients provide functionality like this as a standard, but don't allow you to modify the defaults (like every window will have the title of the host you're connected to, etc) without jumping through hoops on the Windows (or Mac) end of things.

The simplest thing to do is to set static window titles in "xterm." In fact, if you're running X-Windows, you don't even have to do anything fancy, since most widely-distributed versions of it contain command line arguments specifically for doing this. For instance, the following command line (invoking xterm) would bring it up with the title "This XTerm Is Mine" (and, for goofs since it's an available option, an icon titled "This Icon Is Mine, Too!"):

host # xterm -T "This XTerm Is Mine" -n "This Icon Is Mine, Too!" &

Simple enough. It's also fairly simple to set static window titles for almost any terminal emulator (like PuTTY or SecureCRT) by typing at the command prompt or including a line in your .profile, .bashrc or .zshrc (Or whatever initialization file you use if you use an alternate shell).

Here again, we come back to some concepts that we first went over when we looked at using color in the terminal. Your window title is created by using very similar "echo -e" syntax (include the initial "escape") and can be done at the command line by typing the following:

host # echo -e "\033]0;This Is My Window Title And My Icon Title\007" <--- This pretty much works all the time
host # echo -e "\033]1;This Is Just My Icon Title\007" <--- This works sometimes depending on your terminal
host # echo -e "\033]2;This Is Just My Window Title\007" <--- This pretty much works all the time

In the above three lines, we set the window and icon title, icon title and window title, respectively. Some of these statements may not work on your terminal, so you'll have to test, but (in my experience so far) at least one of them should. Usually the first one (setting both the icon and title) is your best bet.
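If you find yourself retitling windows a lot, it's less typing to wrap that first form up in a little function in your .profile or .bashrc (a sketch - the "wintitle" name is just my own invention):

wintitle () {
echo -e "\033]0;$*\007"
}

host # wintitle Production Box - Tread Carefully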

Now, if we want to have our window and icon title change dynamically (we're going to forget about the icon-only and window-only settings, since they can be derived by just substituting the "0" in the following lines with either "1" or "2," per the examples above) all we have to do is integrate the escape sequences into our command prompt. In some shells this functionality has been accounted for. In others, we can force it.

1. Bash - To set this in bash, we'll just set the value of the PROMPT_COMMAND variable, either at the command prompt or in our .bashrc (or .profile) for permanence:

host # PROMPT_COMMAND='echo -ne "\033]0;${HOSTNAME}:${PWD} # \007"';export PROMPT_COMMAND

and, you'll notice that your window and icon titles are now something like:

host1:/users/user1 #


and that the window and icon titles will change whenever you change directories :) We'll do the same command line string for the other shells, so this post maintains some consistency.

2. Zsh - To set this in zsh, all we need to do is set the value of the precmd function at the command prompt or in our .zshrc, like so:

host # precmd () { print -Pn "\e]0;%n:%~\a" }

3. Ksh - To set this in ksh, all we need to do to get the same result is type the following at the command line, or include it in our .profile (Admittedly, this can be a little tricky since you have to insert the actual control characters into the prompt - For more info on how to do that, check out our previous post on how to really represent control characters within files). Note, also, that setting PS1 will change your main command prompt, so, after the ctl-G, I've added "${PWD}# '" which will be our actual command prompt, as opposed to our new window title. You should replace this with whatever your command prompt already is if you want to keep it the same:

host # PS1='^[]0;${USER}@${HOST}: ${PWD}^G${PWD}# '
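If embedding literal control characters in a dotfile makes you nervous, you can also have printf generate them for you at assignment time (a sketch - this assumes a printf that honors octal escapes, which the stock Solaris and Linux versions both do):

host # esc=$(printf '\033') bel=$(printf '\007')
host # PS1=$esc']0;${USER}@${HOST}: ${PWD}'$bel'${PWD}# '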

And you can use that for most other shells, except...

4. Csh/Tcsh - To set this in csh/tcsh, you can use the "settitle" alias (Again, you'll need to manually put in the real control characters and the title is slightly different since my tcsh/csh knowledge is fair-to-middlin' in this respect ;)

host # alias settitle 'set t=$cwd:h;echo -n "^[]2;${HOST}: "!*"^G"'

And that should be all there is to getting yourself all set up to have dynamic and/or static window and icon titles for whenever you want or need. Here's hoping you can find a window and icon naming scheme you can live with and enjoy :)

Best wishes,

, Mike




Wednesday, April 2, 2008

Using Bash To Access The Network Via File Descriptors

Greetings,

It's been a while since we looked at down-and-dirty shell tricks, and this one comes as a tangential follow-up to our previous post on finding and reading files in the shell when your system is on the verge of a crash.

The first thing I'd like to clear up before we proceed is that, in re-reading that post, I notice that I was being a bit of a ksh-snob in the section regarding reading the contents of a regular file through a file descriptor to save on resource usage. My example line in that post, after exec'ing file descriptor 7, was:

host # while read -u7 line; do echo $line; done

And, as most bash fans probably noted, that syntax doesn't work in bash. That's what I get for only verifying the output in ksh ;) What that line should have read (since this will work in sh, ksh and bash) was:

host # while read line;do echo $line;done <&7

Apologies all around. In any event, that "mistake" serves as a fairly decent segue into what we're going to be looking at today. That same basic process of exec'ing file descriptors to read files directly is how we're going to use bash to read from the network directly using /dev/tcp.

And, just to be clear, this "does not" work in Solaris ksh or sh. It will work on Solaris and every Linux I've tested, but only, to my knowledge, in bash. Every version of ksh I've tested (even Solaris 10) can't create the virtual file system required by this operation. I've attached a small bash script, at the end of the post, to check a mail server and http server, so you have a working example of the process.

This method of accessing the network is very useful to know how to do and, going through it step by step, is actually very simple to accomplish. We'll use a mail server as our example since it involves both reading and writing to the network file descriptor.

The first thing you'll want to do is "exec" the file descriptor you'll be using to communicate on the network via tcp. You can do this like so (Just be sure not to exec file descriptors 0, 1 or 2, as these should already be assigned by your operating system as "Standard Input (STDIN)," "Standard Output (STDOUT)," and "Standard Error (STDERR)"):

host # exec 9<>/dev/tcp/mail_server.xyz.com/25 <-- Here we create file descriptor 9 via exec, open the /dev/tcp/mail_server.xyz.com/25 "virtual file" and assign file descriptor 9 to it.

Note that this line is where the script, or process, will either make or break. ksh (Definitely for Solaris 9 and 10) does not allow the creation of the virtual filesystem below /dev/tcp that this operation requires. If you attempt to do this in ksh you will most likely get the following error:

host # /dev/tcp/mail_server.xyz.com/25: cannot create

Assuming we're using bash, and that worked okay, we can now write to, and read from, our mailserver over port 25. This is analogous to doing something like Telnetting to port 25, but we're using a built in feature of the bash shell to access the network via a pseudo file system created under /dev/tcp (It acts much like a /proc filesystem, although you can't see what you've created by looking at the file that /dev/tcp links to, since it's a character device file and the directory path /dev/tcp/mail_server.xyz.com/25 doesn't "really" exist :)

Now we can send input, and receive output, using standard file descriptor redirection commands. A few "for instances" below:

host # echo "HELO xyz.com" >&9 <--- This sends the HELO string to our mailserver via file descriptor 9, which we exec'ed above.
host # read -r RESPONSE <&9 <--- This reads the data coming in on file descriptor 9 with as little interpretation as possible (for instance, it treats the backslash (\) as a backslash and doesn't imbue it with its usual "special" shell powers).
host # echo -n "The mail server responded to our greeting with this load of sass: "
host # echo $RESPONSE


And, now, we can write to, and read from, file descriptor 9 until we have nothing left to say ;) When we're all finished, I find it's good practice to close the input and output channels for our file descriptor (although closing the input should be sufficient in most cases), and, if you want to or need to, you can also dump the remainder of the file descriptor's contents before you close it down.

host # cat <&9 2>&1 <--- Dump all of the remaining input/output left on file descriptor 9 to our screen.
host # exec 9>&- <--- Close the output file descriptor (the exec is needed so the close happens in your current shell, rather than applying to an empty command)
host # exec 9<&- <--- Close the input file descriptor

Now, you're back to the way things were and you shouldn't be able to read or write from file descriptor 9 anymore (or, if you're a glass-half-full person, it's now available for use again :) I've tacked on a small script to demonstrate some basic functionality, but, hopefully, you'll have some fun playing around with this and figure out more, and better, ways to manipulate your server's interaction with the network through the file system.

host # ./check_net.sh <--- As simple as that to run it :)

Cheers,


This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License.

#!/bin/bash
#
# check_net.sh
#
# 2008 - Mike Golvach - eggi@comcast.net
# Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License
#

mailserver="mailserver.xyz.com"
httpserver="www.xyz.com"
domain="xyz.com"

echo "Testing mail server functionality"
exec 9<>/dev/tcp/$mailserver/25
read -r server_version <&9
echo "Server reports it is: $server_version"
echo "HELO $domain" >&9
read -r greeting <&9
echo "Server responded to our hello with: $greeting"
echo "VRFY username" >&9
read -r vrfy_ok <&9
echo "Server indicates that this is how it feels about the VRFY command: $vrfy_ok"
echo "quit" >&9
read -r salutation <&9
echo "Server signed off with: $salutation"
echo "Dumping any remaining data in the file descriptor"
cat <&9 2>&1
echo "Closing input and output channels for the file descriptor"
exec 9>&-
exec 9<&-
echo "--------------------------------------------------"
echo "Testing web server functionality - Here it comes..."
exec 9<>/dev/tcp/$httpserver/80
echo "GET / HTTP/1.0" >&9
echo "" >&9
while read line
do
echo "$line"
done <&9
echo "Dumping any remaining data in the file descriptor"
cat <&9 2>&1
echo "Closing input and output channels for the file descriptor"
exec 9>&-
exec 9<&-
echo "done"


, Mike




Thursday, March 20, 2008

Generating Only Unique Content From Two Somewhat Similar Files

Hey There,

I see this, in one variation or another, on the message boards from time to time. So, touching back on our earlier post regarding making our own diff to deal with directory permissions, today we're going to look at a script that performs another function not built in to the standard "diff" command on Linux or Unix, and fairly simply implemented in ksh or bash.

Our script today takes as input two files (generally text files) of different sizes; however, they can be of equal size. It doesn't seem to make a difference ;) The only restriction on the usage of the script we're presenting today is that, assuming both files are of unequal size, the smaller of the two files should be listed as the primary argument to the script (we'll call it "rdiff") and the larger file should be the secondary argument, like so:

host # ./rdiff smallfile largefile

and, of course, if they're of equal size:

host # ./rdiff file file

The output of the script will be a file named "Unique.out.smallfile.largefile" with "smallfile" and "largefile" being the values of the file names passed to the script on the command line.

Basically, our script uses sed and grep to determine if lines in the smaller file are duplicated in the larger file. If those lines are duplicated, they are removed from a working copy of the larger file, so that your "unique" output file only includes the larger file's lines that never appeared in the smaller one. All duplicate information is removed.
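As an aside, if you don't mind your output coming back sorted, the standard toolbox can approximate the same result in one line, since any line that shows up exactly once across both files is, by definition, unique to one of them (the caveat being that a line repeated within a single file gets tossed, too):

host # sort smallfile largefile|uniq -u >Unique.out.smallfile.largefile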

Enjoy and cheers,


This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License.

#!/bin/ksh

#
# rdiff - find unique content in two files
#
# 2008 - Mike Golvach - eggi@comcast.net
#
# Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License
#

if [ $# -ne 2 ]
then
echo "Usage: $0 SmallFile LargeFile"
exit
fi

FileA=$1
FileB=$2

cp $FileB ${FileB}.old

for x in $(<$FileA)
do
grep ^${x}$ $FileB >/dev/null 2>&1
if [ $? -eq 0 ]
then
echo $x >>tmpfile
fi
done

for x in $(<tmpfile)
do
sed "/^$x$/d" $FileB >newtmpfile
mv newtmpfile $FileB
done

mv $FileB Unique.out.${FileB}.${FileA}
mv ${FileB}.old $FileB

rm tmpfile


, Mike




Thursday, March 13, 2008

Creating Solaris Pkg Files From Already Installed Packages

[Screenshot: pkgremk CSWtop package install output]

Hey again,

For today, I put together a shell script, in ksh, to create new Solaris pkg files from already installed packages. The motivation for this sometimes seems hard to explain since you generally don't really "need" to have packages of any software that's installed on your system. It's there already; plus you probably have the install CD or a JumpStart server sitting around somewhere on your network with the original installation packages on them, right?

Other options we've looked at in previous posts include using pkg-get to help you out in a pinch and, if you have the means, creating your own packages from source quickly and making them the long, hard old-fashioned way. For today, we're going to assume none of these avenues are available to you.

I've found that, on more than several occasions, I've been stuck on a machine which needed a package installed and been out-of-luck on both counts (No software CD's and no Network Install Server). And some packages are hard to come by, like some of the obscure, but almost absolutely necessary, Solaris system packages like SUNWexplo. Well, okay, you don't really "need" to have Sun Explorer installed on your machine either, but it's usually the first output Sun's phone support requests you send them, so let's just pretend it's essential for argument's sake ;)

With today's script (called, simply, pkgremk <--- Slight homage to pkgmk since we're remaking the packages ;) you can take any pkg that's installed on your Solaris system and create a perfectly valid datastream package for use on any other machine of suitably similar or compatible architecture and/or Solaris Unix version.

So, for example, if you found yourself in a situation where you were stuck without Sun's Explorer to gather evidence on your Solaris 9 box, you needed to run it to send to Sun support and you had it on a second Solaris 9 box, you could run pkgremk on the second box, transfer the pkg from the second box to the first, install Explorer and run it right then and there! Kind of like this (Glossing over tons of little things here ;)

host b # ./pkgremk SUNWexplo
host b # scp SUNWexplo.pkg hosta:/home/user1/.

host a # pkgadd -d /home/user1/SUNWexplo.pkg
host a # /opt/SUNWexplo/bin/explorer
<--- And the data collection begins.

It's really pretty much that simple. The only things the pkgremk script requires are that you already have the software installed on the server that you want to "remake" the package on, and that you pass it the exact package name as the only argument on the command line (you can find this information out easily by either doing an ls of the /var/sadm/pkg directory or just running pkginfo with no arguments).
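For instance, if you only remember a fragment of the package name you're after, either of these will track down the exact name to feed the script:

host # ls /var/sadm/pkg|grep -i top
CSWtop
host # pkginfo|grep -i top <--- same idea, with the package descriptions included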

I tried to take into account all parts of the pkg making process: getting all the package file names, types and ownership/permission information from /var/sadm/install/contents, as well as all the pkginfo, dependency and installation files in /var/sadm/pkg. The script massages all of those so that you can, literally, type one quick command line and create any package you need to replicate on your machine, for whatever reason, and be able to use it just about anywhere else (On Solaris, of course - I'll be doing this for Linux soon, as well :) The finished product should be easy to move around since we use pkgtrans to make it a single file datastream format package, rather than a directory structure that you'd have to tar up and mess around with. Basically, you're going to be getting a Sun standards-compliant pkg software distribution file!

This script could be nice for insurance purposes, as well. If, for instance, you weren't sure you wanted to remove the CSWtop package and didn't have the pkg file you got originally, you could just create it on the fly and then re-install from that new package if you found you really did need it, like so:

host # ls -d /var/sadm/pkg/CSWtop
/var/sadm/pkg/CSWtop
host # ./pkgremk CSWtop
<--- See the picture above for the full command output
...
Done!
host # ls
CSWtop.pkg CSWtop_mk.12473.out
host # pkgrm CSWtop
...
Removal of <CSWtop> was successful.
host # top
bash: top: command not found
<--- For our example's sake, you've just realized how desperately you need top ;)
host # pkgadd -d CSWtop.pkg <--- or "pkgadd -d /full/path/to/CSWtop.pkg" if you're not in the same directory as the package.

A few keystrokes, and you're back in business :) I hope this script helps you out and you can find even more good uses for it!

Best wishes,


This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License.

#!/bin/ksh

#
# pkgremk - Create pkgs from existing installations on Solaris
#
# 2008 - Mike Golvach - eggi@comcast.net
#
# Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License
#

trap 'cd $orig_pwd;rm -rf $tmp_dir;rm -rf ${pkgname}_mk.$$.out;exit' 1 2 3 15

if [ $# -ne 1 ]
then
echo "Usage: $0 PKGname"
echo "Package must be installed on system"
exit
fi

pkgname=$1
orig_pwd=`pwd`

if [ ! -d /var/sadm/pkg/$pkgname ]
then
echo "$pkgname does not exist in the pkg repository. Exiting..."
exit
fi

tmp_dir=pkgremk.$$

if [ -d $tmp_dir ]
then
echo "Removing existing temp dir"
rm -r $tmp_dir
fi

echo "Creating temp work directory..."
mkdir $tmp_dir

cd $tmp_dir

echo "Copying over \"install\" directory"
(cd /var/sadm/pkg/$pkgname;tar cpf - install)|tar xpf -

echo "Copying over pkginfo file"
cp /var/sadm/pkg/$pkgname/pkginfo .

if [ -f prototype ]
then
rm prototype
fi

echo "Deriving new prototype file"
echo "Generating includes"
if [ -f pkginfo ]
then
echo "i pkginfo" >>prototype
else
echo "pkginfo disappeared. Package will not build. Exiting..."
exit
fi

cd install
ls -1d * 2>/dev/null|while read line
do
echo "i $line"
echo "i $line" >>../prototype
mv $line ../
done

cd ..

if [ -d install ]
then
echo "Removing install directory"
rm -r install
fi

grep $pkgname /var/sadm/install/contents |cut -d' ' -f 1,2,3,4,5,6|awk '{print $2 " " $3 " " $1 " " $4 " " $5 " " $6}'>>prototype

orig_basedir=`grep BASEDIR pkginfo|awk -F"=" '{print $2 "/"}'`
old_basedir=`grep BASEDIR pkginfo|awk -F"=" '{print $2 "/"}'|sed 's/\//\\\\\//g'`
if [ $orig_basedir = "//" ]
then
orig_basedir="/"
fi
if [ $old_basedir = "\/\/" ]
then
old_basedir="\/"
fi

echo "Stripping pathnames to compensate for BASEDIR=$orig_basedir"
echo "directive in pkginfo file"
this_temp=$$
sed "s/$old_basedir//" prototype >>$this_temp
mv $this_temp prototype

echo "Removing non-pkginfo/pkgmap lines from prototype"
that_temp=$$
awk '{ if ( $1 ~ /^[ifcsd]/ ) print $0}' prototype >>$that_temp
mv $that_temp prototype

echo "Creating distributable pkg from existing files as ${pkgname}.pkg"
echo "Output of pkgmk command dumping to ${pkgname}_mk.out"
/usr/bin/pkgmk -b $orig_basedir -d /tmp >${orig_pwd}/${pkgname}_mk.$$.out 2>&1
cd ..
echo "Transferring package to datastream format"
if [ -f `pwd`/${pkgname}.pkg ]
then
tmp_pid=$$
echo "Found existing `pwd`/${pkgname}.pkg file"
echo "Copying to `pwd`/${pkgname}.pkg.$tmp_pid"
mv `pwd`/${pkgname}.pkg `pwd`/${pkgname}.pkg.$tmp_pid
fi
/usr/bin/pkgtrans -s /tmp `pwd`/${pkgname}.pkg $pkgname
rm -r /tmp/$pkgname

echo "Done!"
cd $orig_pwd
rm -rf $tmp_dir



, Mike




Thursday, February 28, 2008

Finding and Reading Files In The Shell When Your System Is Hung

Howdy,

Today we're going to look at a situation that occurs probably more often than it should ;) If you've done your fair share of system administration on Linux or Unix, you've probably run into a predicament where you got paged, called, etc, to look at a machine that was hanging on the brink, only to find that (assuming you could log in at all) it was so completely trashed that it couldn't even muster up the strength to run the simple system commands you needed to diagnose the problem before just giving up and rebooting.

Typical errors you'd get at the command line, in this sort of situation, would be similar to:

host # ls /tmp
Insufficient Memory!


or

host # cat file
Cannot fork new process!


Basically, you're stuck in a situation where you can't do anything that relies on executing anything other than the login shell that you're lucky you got in the first place ;)

The good news is that you can, quite possibly, get as much information as you need before rebooting by simply using your shell and its built-ins. All of these examples should work for sh, ksh, bash, etc (Possibly not in csh, but, hopefully, that's not your system default root shell). If you have a good idea what's wrong before your machine goes down entirely, they can even help you decide what you want to do before you boot the system back up (Check out this post for some simple tricks to figure out what commands are available to you in Solaris' PROM).

Here are the things I try to do when I find myself in that sort of situation (in no order of importance ;)

1. Move around the filesystem.

Luckily, the "cd" and "pwd" commands are built into the shell, so you can always move around your filesystem (even if you are, figuratively, in the dark) and get to the hot spots you want to check. For instance:

host # cd /var/log

will work just fine. This is the most obvious thing you can do, but cd (on its own) depends on you to know where you're going. You can't cd to a directory that doesn't exist even when things are up and running perfectly ;)

If you happen to get lost, you can figure out where you are, using the built-in "pwd," like so:

host # pwd
/var/log


2. Take a look at the contents of your filesystem.

This is actually pretty obvious, as well, once you realize how to do it. You won't be able to use "ls" any more, since that is a command that the shell invokes outside of itself, but you can always use the built-in "echo" command, like so:

host # echo *
bin opt sbin usr tmp var


Note that this output won't usually be so pretty. If there are 50 files and/or directories in the directory you're in, you'll just get 50 filenames in a row on one line. But, it's better than nothing :)
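Since the expansion is being done by the shell itself, ordinary globs work here, too, which helps when a directory is crowded or when you need the dot files that a bare * skips:

host # echo /var/log/syslog*
/var/log/syslog /var/log/syslog.1 /var/log/syslog.2
host # echo .* <--- the hidden files a plain * won't show
. .. .profile .sh_history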

3. Capture the contents of critical files that can help you troubleshoot and/or find your root cause so this won't ever happen again (hopefully).

This last one is slightly less obvious, but can be done in a variety of ways. The first two ways are messy, but they work. Simply read your file as though you were sourcing it, using either the "source" or dot "." built-ins. The reason I don't prefer these two methods is that your screen fills up with a lot of garbage and you may hang your system by sourcing the contents of a file that contains executable statements or commands. For our example here, we'll assume a file named BOB with one line in it that says "hi":

host # source ./BOB
-bash: hi: command not found


or

host # . ./BOB
-bash: hi: command not found


You've gotten your output, but you can see where the potential problem would lie. What if "hi" was a command and your "source" directive tried to run it on your already half-dead machine? It might put it all the way down right then and there.

In these instances, I think output redirection is your best bet. You can read the contents of a file by simply exec'ing a new file descriptor and reading from it with the shell's built-in "read." You can use pretty much any number that works (although, try to stay away from 0, 1 and 2, as these are your system's STDIN, STDOUT and STDERR file descriptors), you won't have to read your file through the clutter of a bunch of error messages and you will be insulating yourself from accidentally executing any commands. For instance, the following would get you much better results:

host # exec 7<BOB <--- Open new file descriptor 7 via exec and point it at your BOB file for reading.
host # while read -u7 line; do echo $line; done <--- Then, just read from the file descriptor.
hi

And that's about it. If you use all 3 of these methods in whatever combinations are necessary, you should be able to collect most of the information you need to assess your situation and/or provide root cause.

For one last example that uses all 3, this is how I would go about getting the contents of my /var/log/syslog file (I'll shorten the output to only include the relevant stuff) - Note that I'm also doing the syslog reads in a command line "for loop" because I want to get all the information I can with as little typing as possible:

host # pwd
/home/mydirectory
host # cd /var/log
host # echo *
syslog syslog.1 syslog.2
host # for x in syslog.2 syslog.1 syslog; do exec 7<$x;while read -u7 line;do echo $line;done;done
blah....
<--- All of the sylog files' output. Notice that I did them in reverse, so that the output would be from oldest to newest. It's also a good idea (if possible) to either log your terminal session or set your terminal client's buffer to a very large number so that you can cut and paste this output into your desktop editor.

Hope this helps you out :)

Best wishes,


, Mike




Friday, February 15, 2008

C Code To Add User Accounts And Introduce C/Shell/Perl Porting.

Ahoy (I've always wanted to use that greeting ;)

Today, we're putting out a little (?) c code we wrote to standardize user account creations across any sized environment on Unix or Linux (slight modifications may need to be made depending on your environment).

C code isn't generally what we concentrate on in this blog, but we thought it would be interesting to put this out today, and follow it up with the exact same script in ksh/bash and Perl on the next successive days. Kind of a slam-bang intro to porting (Which, you may recall, we began a long time - and many scripts - ago - in this post on the shebang line).

We will, eventually, come full circle with that. It's the blessing and the curse of a blog like this: There's so much to write about and share that maintaining a really specific thread (especially a long one) can sometimes take a while and be presented in a scatter shot manner. Thank goodness for HTML hyperlinks ;)

Note that, in the code below, everything is set up to be interactive and only a few assumptions are made (which you can, of course, change to your liking). We'll keep them consistent between ports of this code, so it's easier to follow, but none of this stuff is set in stone.

Things to look for in this c code that you might want to change and/or may be confusing:

1. The "userdir" variable is the main user directory (like "/users" or something), except we ask that no slashes be used in the input. We actually do ask that in the code itself. We tried to keep it polite ;)

2. The "username" variable is used to both name the user and his/her account. So the user "bobby" would have a home directory of "/users/bobby" in this case.

3. After the fopen of /etc/passwd, we've hard coded the group number to "1" and the shell to "/bin/ksh"

4. You can change anything about this c code that you want to. This code is actually useful on its own, but it's also being used as a massive example of porting that we'll explore in the following few posts.

5. You can compile this program easily with gcc, like so:

host # gcc -o whateverNameYouWantToCallTheBinary adduser.c
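
And, just as a quick for-instance (assuming you called the binary "adduser"; the directory name, login and comment here are all made up for illustration), a run might kick off like this:

host # ./adduser users bobby "Bobby B. User"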

For now, whether it makes sense to you or not, enjoy!

Cheers,


Creative Commons License


This work is licensed under a
Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License


#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <pwd.h>
#include <signal.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>

/* adduser.c
Add Users, Set Up Profiles,
Set The Password And Email
An Admin
2008 - Mike Golvach - eggi@comcast.net
Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License
*/

int main(int argc, char **argv)
{

struct passwd *userlist;
int count, usernumber;
FILE *tmp, *stmp, *mailer, *profile;
char *commentfield, *username, *userdir, *home;
char *mailcomment, *mailemail, reply;

commentfield = (char *)malloc(1024*sizeof(char));
username = (char *)malloc(9*sizeof(char)); /* room for 8 characters plus the trailing NUL */
userdir = (char *)malloc(256*sizeof(char));
home = (char *)malloc(256*sizeof(char));
mailcomment = (char *)malloc(1024*sizeof(char));
mailemail = (char *)malloc(512*sizeof(char));

if (argc != 4) {
printf("Usage: %s [dirname - no slashes ] [ logname ] [ comment - in quotes ]\n", argv[0]);
exit(1);
}

if (strlen(argv[2]) > 8 || strlen(argv[2]) < 5) {
printf("Please choose a logname between 5 and 8 characters!\n");
exit(1);
}

signal(SIGHUP, SIG_IGN);
signal(SIGINT, SIG_IGN);

setpwent();

count = 0;
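
/* Walk the entire passwd database looking for the highest uid currently in use; the new account will get that uid plus one */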

while ((userlist = getpwent()) != NULL) {
if (count < userlist->pw_uid) {
count = userlist->pw_uid;
usernumber = count+1;
}
}

endpwent();

sprintf(commentfield, "%s", argv[3]);
sprintf(userdir, "%s", argv[1]);
sprintf(username, "%s", argv[2]);
sprintf(home, "/%s/%s", argv[1], argv[2]);
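
/* Note: these sprintf()s trust the command line arguments to fit the buffers malloc'd above (1024 bytes for the comment, 256 for the directory), so keep your input within those limits */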

printf("\n");
printf("Check this out before proceeding!!!\n");
printf("-----------------------------------\n");
printf("Logname:\t%s\n", username);
printf("Homedir:\t/%s/%s\n", userdir, username);
printf("Comment:\t%s\n", commentfield);
printf("-----------------------------------\n");
printf("\n");

printf("\n");
printf("All of this ok?\n");
printf("\n");
printf("y or n [n is the default]\n");
printf("\n");

scanf("%c", &reply);

if ( reply != 'y') {
printf("\n");
printf("All right, give it another shot if you want!\n");
exit(0);
}
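
/* We append straight to /etc/passwd and /etc/shadow below, with no locking, so don't run two copies of this at once (and keep backups of both files, just in case) */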

tmp = fopen("/etc/passwd", "a");
fprintf(tmp, "%s:x:%d:1:%s:/%s/%s:/bin/ksh\n", username, usernumber, commentfield, userdir, username);
fclose(tmp);

stmp = fopen("/etc/shadow", "a");
fprintf(stmp, "%s:*LK*:::::::\n", username);
fclose(stmp);

mkdir(home, 0755);
chdir(home);

profile = fopen(".profile", "a");
fprintf(profile, "stty istrip\n");
fprintf(profile, "PATH=/bin:/usr/bin:/usr/local/bin:/usr/share/bin:.\n");
fprintf(profile, "export PATH\n");
fprintf(profile, "\n");
fprintf(profile, "\n");
fclose(profile);

chown(home, usernumber, 1);
chown(".profile", usernumber, 1);
chmod(".profile", 0644);

if ((mailer = popen("/usr/lib/sendmail -t", "w")) == NULL) {
perror("Mailer");
exit(1);
}

sprintf(mailcomment, "%s\n", commentfield);
sprintf(mailemail, "The email address is %s@host.com!\n", username);

fputs("To: bob@host.com\n", mailer);
fputs("Subject: New User Added!!!\n", mailer);
fputs("\n", mailer);
fputs(mailcomment, mailer);
fputs("has a new account set up!\n", mailer);
fputs(mailemail, mailer);
fputs("\n", mailer);
fputs("Thank you,\n", mailer);
fputs("\t Mike Golvach\n", mailer);

pclose(mailer);

printf("\n");
printf("All Done!!!\n");
printf("\n");
printf("Now set the Password!\n");
printf("\n");
execl("/usr/bin/passwd", "passwd", username, NULL);
printf("\n");
printf("Password set!!! Take a break...\n");

}


, Mike




Friday, December 28, 2007

Using Your Shell To Generate RANDOMness For Security Software

Random number generation, or entropy gathering, is something I've always found interesting, because it's so rarely given much thought, even as the basic underlying principle becomes so heavily depended upon. As we enter an age where encryption is becoming not only the standard for network security but, at times, the only option, this seems insane. I've literally walked into dozens of security software "situations" where it had been agreed upon, all around, that an impasse had been reached because prngd, egd or the /dev/random and/or /dev/urandom character special devices were no longer functioning correctly. There was simply nothing left to do but wait for vendor support to figure out the problem or, at the very best, wait while we re-installed such and such an OS or software package.

This is a huge problem when your security department has decided that, not only is Telnet no longer a safe connection method, but it's so unsafe that it shouldn't even be on the OS as a backup method for connection. You're left with three options: A secure connection, direct console connection or nothing!

Of course, in a situation like the one I'm describing above, when the security software (Let's say SSH) stops functioning (and no one has a lingering session), you'll need to do your work via the system console. But you should, theoretically, be able to get SSH back up on its feet and running in just slightly more time than it would take you to restart it, once you've logged into the console (hopefully from home :)

The main reason that I see SSH go down is a problem with whatever random number generator it was compiled to use. Of the few I mentioned above, the /dev/random and /dev/urandom character special devices are the most commonly built-in "randomness" generators for SSH. On some older setups (or for the traditionalists), this is left open and/or set to read random information from a straight file (like /etc/random, or whatever you decide to call it).

On the older versions (where the program reads from a file to "generate randomness"), you almost never see this problem, because there's nothing convoluted about the process. We're assuming SSH itself is fine for all of these examples and, unless you don't have read access to the file, or the filesystem it's on is corrupted, there's almost no way straight-file reads can go wrong as a source of randomness.

When you're reading from a domain socket or a character special device (like /dev/urandom) is when you may end up having an issue. However, you can get around this in your shell using a common variable called RANDOM (not very cleverly named, but a good indicator of its actual function - available in ksh, bash and almost all modern shells). This variable produces a random number in the range of 0 to 32767 (actual results may vary depending on OS, bit strength of OS, etc) and its value changes every time it's invoked. For instance:

host # a=0;while [ $a -lt 5 ];do echo $RANDOM;let a=$a+1;done
17417
6453
11016
3054
8647
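
As a side note, if you're worried that every shell starts RANDOM off from a similar seed, both ksh and bash let you re-seed the generator by assigning a value to RANDOM, so you can mix in something unique to your session first (using $$ - the shell's process id - is just one handy example):

host # RANDOM=$$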


Now, to proceed. We're going to assume that the default file your SSH uses to generate randomness and transfer it to /dev/urandom is missing. Otherwise, you'd just have to fix that and this would be no fun at all ;) In order to incorporate this functionality into your downed SSH server, you'll need to, in effect, create your own /dev/urandom and move the other one out of the way (It's cooked anyway, right? ;)

host # mv /dev/random /dev/norandom
host # mv /dev/urandom /dev/nourandom


Then proceed to create your simple script (we're just going to script on the command line, here) which, while it won't be nearly as secure as the "real" /dev/random was, will keep you afloat until you get the fix in. Theoretically, you could make it as complicated and as secure as you like, just as long as you follow the general outline like this:

First recreate the two necessary random files:

host # mknod /dev/random c 1 8
host # mknod /dev/urandom c 1 9
host # chown root:root /dev/random /dev/urandom
host # chmod 0644 /dev/random /dev/urandom


Then create the random text file, like so (strictly speaking, only about 512 bytes of this will ever get used, but more can't hurt):

host # a=0;while [ $a -lt 92160 ];do echo $RANDOM >>BING;let a=$a+1;done
host # cat BING >/dev/urandom


Note that the above command line (which dumps its output into the file named BING) could be a full-fledged shell script. It could be highly convoluted and as hard to comprehend as you prefer. The script doesn't even need to work, as long as it dumps out enough error data and you're redirecting STDERR to STDOUT (e.g. 2>&1).

If you want, you can save this output for later, but, hopefully, you won't be needing it. Just dd a block of it back out of the device into your file, like this:

host # dd if=/dev/urandom of=BING count=1 >/dev/null 2>&1
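
Before you bounce the daemon, it can't hurt to make sure your stand-in device actually coughs up data when something reads from it. A quick sanity check (od is just being used as a convenient hex viewer here):

host # dd if=/dev/urandom bs=16 count=1 2>/dev/null | od -x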

Now you should be able to startup your SSH Daemon and finally get back to work ;)

Cheers,

, Mike




Thursday, December 27, 2007

Securing SUID Programs Using A Simple C Wrapper

This is an issue that comes up almost constantly, given the very nature of the Linux and Unix security model and the environments in which most of us admins work. More often than not, application users on machines will "need" to run scripts that require root privilege to function correctly. One such example would be the "ping" command. Although this seems like a harmless, and openly available, network troubleshooting tool, the only reason regular users can run it is because it's SUID-root. This, as simply as possible, means that the command runs as the user "root" no matter who actually invokes it. The setuid bit (the "s" in -r-sr-xr-x) is a permission that indicates that the given command will run as the userid that owns it. root owns the ping command; therefore, when users run ping, they're running it as the user root.

Now, the ping command has been around for quite a while and, as with almost all setuid programs, it's been the subject of many security compromises and exploits over the years. In general, because "ping" is such an integral part of the OS, you don't need to worry about wrapping it (or other programs like it) in order to protect yourself against harm (your vendor - or the development community - should be trying their hardest to do that for you :)

Instances do exist where regular users require that an uncommon command be run on a frequent basis, in order for them to do their jobs. That program (we'll just call it "PROGRAM" for no particular reason ;) needs to be run as another user for it to function correctly and it has to be run frequently enough that it becomes an inconvenience to "not" allow the users to run the command themselves. SUID (or setuid) wrapper scripts can be most effectively used in these sorts of situations.

A wrapper script/program is (for all intents and purposes) just another layer of protection for the admins, users and operating system. If created properly and used judiciously, it can help minimize the risk associated with allowing regular users to run commands as userid's other than their own.

Optimally, you would want to limit SUID script/program execution to another generic user (if possible). So, for instance, if an application user needs a program to be run as the user oracle, setting them up with a shell wrapper to run that command as the oracle user shouldn't be cause for too much concern. The greatest security risk (no matter the relative security-weight of different accounts on your system) is when you need to wrap a script or program to be run as root.

Below, I've put together a simple wrapper written in c (check out Wietse Zweitze Venema's website, and work, for a really really really secure wrapper). I write the majority of my scripts in bash, ksh and Perl, but the SUID wrapper really requires that it be compiled in order to serve its purpose most effectively. If people can easily read your code, it'll be easier for them to figure out ways around whatever steps you're taking to secure your servers. I'm not saying that, just because they could read your code, they could break it; but it would certainly make it easier for them. In the other extreme circumstance, if anyone got write access to a SUID shell script (assuming root privilege, since almost every OS now resets the setuid bits if a setuid script is modified by a regular user), they could (easily) change it a little and stand a good chance that no one would notice they'd created a backdoor for themselves. If you modify a compiled c binary, it probably won't run anymore (which is the best security there is ;)

We'll dive into the sea of "c" in a future post, since it can be complicated and is rarely necessary to know in order to administrate, or use, a system.

For the script below, just substitute the PROGRAM you want to wrap, the arguments, if any (this script assumes only one, "-v" - if you have more, add them as comma separated entries just like the first, and before the NULL entry specification), and the groupid check (comment this out if you don't want to use it as an extra level of access checking security). We also make sure to change the real and effective uid and gid to "root" (make this any id you want) only after performing the access checks! Extra care is taken, right at the top, to set the effective uid and gid to the invoking user's real uid and gid before any of those checks happen.

Note also that we use strncmp instead of strcmp (for string comparison) to check the command line arguments. The reason we use this is that strncmp requires you to give it a length as its final argument and will not read past that many characters ("start" and "stop" are my only two options, and you can see those lengths - 5 and 4 - in the strncmp arguments accordingly). This helps prevent a malicious user from trying to crack your wrapper with an over-long argument (a la a string buffer overflow) from the command line!

This c code can be compiled on gcc at least all the way back to version 2.8.1 - it can be compiled very simply, like so:

gcc -o PROGRAM wrapper.c (with "-o PROGRAM" being the output option ("-o") plus whatever you want to call the compiled binary, and wrapper.c being the c code below)
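
One thing the compile line doesn't cover is actually making the binary setuid root, without which none of this matters. A minimal way to do it (assuming the gid 999 group checked for in the code below is named, say, "apps" - substitute your own group) would be:

host # chown root:apps PROGRAM
host # chmod 4750 PROGRAM <--- The leading 4 is the setuid bit; the 750 keeps "other" users from executing the wrapper at all.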

Enjoy!


Creative Commons License


This work is licensed under a
Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>
#include <signal.h>
#include <string.h>

/********************************************
* Wrapper - Secure Yourself *
* *
* 2007 - Mike Golvach - eggi@comcast.net *
* *
* Usage: COMMAND [start|stop] *
* *
********************************************/

/* Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License */

/* Define global variables */

int gid;

/* main(int argc, char **argv) - main process loop */

int main(int argc, char **argv)
{

/* Set euid and egid to actual user */

gid = getgid();
setegid(getgid());
seteuid(getuid());

/* Confirm user is in GROUP(999) group */

if ( gid != 999 ) {
printf("User Not Authorized! Exiting...\n");
exit(1);
}

/* Check argc count only at this point */

if ( argc != 2 ) {
printf("Usage: COMMAND [start|stop]\n");
exit(1);
}

/* Set uid, gid, euid and egid to root */

setegid(0);
seteuid(0);
setgid(0);
setuid(0);

/* Check argv for proper arguments and run
* the corresponding script, if invoked.
*/

if ( strncmp(argv[1], "start", 5) == 0 ) {
if (execl("/usr/local/bin/COMMAND", "COMMAND", "-v", NULL) < 0) {
perror("Execl:");
}
} else if ( strncmp(argv[1], "stop", 4) == 0 ) {
if (execl("/usr/local/bin/COMMAND", "COMMAND", "-v", NULL) < 0) {
perror("Execl:");
}
} else {
printf("Usage: COMMAND [start|stop]\n");
exit(1);
}
exit(0);
}


, Mike