Wednesday, April 30, 2008

Creating Solaris Pkg's From RPM's - Take Two

Hey there,

Special thanks to K.Jans for pointing out the problem with our original script's specificity, and for his help in putting together this new version, which will handle some other RPMs' CPIO signatures!

I don't know if we can chalk it up to my enthusiasm or transient thought process, but, of course, the rpm2pkg rpm-to-pkg conversion script that we put out a while ago doesn't perfectly translate every Linux RPM into a usable Solaris Unix datastream pkg file (Mostly because of Solaris being fussy with the pkg names). Unfortunately, there are too many vendors creating too many different versions of the same file type, and using Perl to scrape around inside their CPIO files requires a little bit of modification for every single variation. Not that I'm complaining. I love to do this stuff :)

Still, I will probably write a post on the "process" one of these days (very soon), so that anyone can do it without having to try and read what might be some cryptic scripting ;)

So, today, we have another version of our original RPM to PKG conversion script for you (this one will work with more RPMs, but not the ones the original worked for, and probably not a lot of others ;) If there's any interest in a patch, I can post one (or just email it to you if you request), but it's probably easier to just use one or the other script at this point.
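
As a quick sanity check before converting, it's easy to peek at any RPM's payload right from the shell. Here's a minimal sketch (substitute your own file name for the hypothetical "yourpackage.rpm"):

host # rpm2cpio yourpackage.rpm > payload.cpio <--- extract the embedded cpio archive
host # file payload.cpio <--- see what kind of archive you actually got
host # cpio -t < payload.cpio | head <--- list the first few files in the payload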

And, here we go again... Enjoy :)


This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License.

#!/usr/bin/perl

#
# rpm2pkg - creating Solaris pkg files from Linux rpm's
#
# 2008 - Mike Golvach - eggi@comcast.net
#
# Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License
#

$rpm=$ARGV[0];
$tmp_dir="dir.$$";
$orig_dir=`pwd`;

mkdir("$tmp_dir");
chdir("$tmp_dir");
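# unpack the rpm's cpio payload in two steps, so we can scan the
# extracted cpio file for package information further down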
system("rpm2cpio ../$rpm >rpm.cpio.file");
system("cpio -d -i -m <rpm.cpio.file");
$proto_list=`find . -print|pkgproto|grep -v prototype`;
open(TMP_FILE, ">>prototype");
print TMP_FILE "i pkginfo\n";
print TMP_FILE $proto_list;
close(TMP_FILE);
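# scan the cpio payload for the rpm's embedded description block
# (the lines between the Begin4 and Copying-policy markers)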
@pkg_info=`strings rpm.cpio.file`;
$count = 0;
foreach (@pkg_info) {
    if ( $_ =~ /^Begin4/ ) {
        push(@rpm_info, $_);
        $count++;
    } elsif ( $_ =~ /(^Copying-policy:|GPL$)/ ) {
        last;
    } elsif ( $count > 0 ) {
        push(@rpm_info, $_);
    }
}
$uname_s = `uname -s`;
$uname_r = `uname -r`;
$uname_p = `uname -p`;
$rpm_version = $rpm;
$rpm_version =~ s/\.rpm//;
chomp $uname_s;
chomp $uname_r;
chomp $uname_p;
chomp $rpm_info[1];
chomp $rpm_info[2];
chomp $rpm_info[4];
$rpm_info[1] =~ s/ * *//g;
$rpm_info[2] =~ s/ * *//g;
$rpm_info[4] =~ s/ * *//g;
$rpm_info[1] =~ s/Title://g;
$rpm_info[2] =~ s/Version://g;
open(PKGINFO, ">>pkginfo");
print PKGINFO "SUNW_PRODNAME=\"$uname_s\"\n";
print PKGINFO "SUNW_PRODVERS=\"$uname_r\"\n";
print PKGINFO "SUNW_PKGTYPE=\"usr\"\n";
print PKGINFO "PKG=\"$rpm_info[1]\"\n";
print PKGINFO "NAME=\"$rpm_info[1]\"\n";
print PKGINFO "VERSION=\"$rpm_info[2]\"\n";
print PKGINFO "VSTOCK=\"$rpm_version\"\n";
print PKGINFO "VENDOR=\"GENERIC\"\n";
print PKGINFO "ARCH=\"$uname_p\"\n";
print PKGINFO "EMAIL=\"me@xyz.com\"\n";
print PKGINFO "CATEGORY=\"application\"\n";
print PKGINFO "BASEDIR=/\n";
print PKGINFO "DESC=\"$rpm_info[4]\"\n";
print PKGINFO "PSTAMP=\"Your Name Here\"\n";
print PKGINFO "CLASSES=\"none\"\n";
close(PKGINFO);
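# build the package under /tmp, translate it into a datastream pkg file
# and move the finished file up to the directory we started in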
system("pkgmk -o -b `pwd` -d /tmp");
system("pkgtrans -o -s /tmp `pwd`/${rpm_info[1]}.pkg $rpm_info[1]");
system("mv ${rpm_info[1]}.pkg ../");
system("cd ../;pwd;rm -r $tmp_dir");


, Mike

Tuesday, April 29, 2008

Grabbing Telnet Information On Linux Using TcpDump

Hello again,

Today, I thought we'd continue along our "snooping" path and clear the way for a solution that would satisfy more than one Operating System. Snoop, although a fine program in my estimation, isn't available anywhere but on Solaris Unix (If I'm wrong, please let me know :) After yesterday's post on snooping the ftp port and our earlier post on capturing logins and passwords over Telnet with snoop, it seemed like it was about time to get back to Linux.

Linux may not have the "snoop" command, but it hardly matters, since freeware programs like tcpdump, ethereal, snort, etc., are all out there and can be compiled on pretty much any system (if your Linux distro doesn't already have a package readily available; which is doubtful ;)

For a more detailed look at the "process" involved in our tcpdump'ing, check out the old post on capturing logins and passwords even though it was for Solaris. The method of achieving the end results is so similar, and it took up so much HTML real-estate to convey, that it would be annoying (to put it mildly) to slap it right in the middle of this post :)

In any event, today we've written a quick bash shell script called "tcptelnet.sh." It reads from standard input (STDIN - which can be fairly easily modified) and was made to run in a pipe-chain after a tcpdump command of the format below:

host # tcpdump -l -vvv -x -X port 23 |./tcptelnet.sh <--- Just so you have all the info in one place, the command line arguments used here are: -l (make standard output (STDOUT) line-buffered, so it's easier to read in a pipe), -vvv (make the output very, very verbose), -x (print each packet's data in HEX) and -X (print each packet's data in both HEX and ASCII).

The reason the format of the tcpdump command is important is that changing it would change the way tcpdump spews its output, and the "tcptelnet.sh" script (much like our other shell scripts on this topic) is a simple parser that makes that output more readable. Also, you must be the "root" user (or have equivalent privilege on the host you're running the script from) for this to work. This is not a way to "hack" or "get around" Linux security. This is just a demonstration of how readily available all unsecured information is, even on a secured network.

Following is a sample of the output you should expect. Note that, given tcpdump's output format, and my desire to not repeat the unnecessary complexity of our snoop script, I sacrificed the capital U for the sake of all-around convenience. This is only interesting in that, if you note a dot (.) where you think there should be a character, it's probably a capital U. All of the strings that our command generates show these on the echo-backs and I chose to only parse those, since they get sent with all regular, as well as password, information. You are, of course, welcome to make this shell script better. The nature of network traffic makes it a bit difficult to pin things down to absolutes, I've found ;)

And here's that sample session:

host # tcpdump -l -vvv -x -X port 23 |./tcptelnet.sh
tcpdump: listening on qfe0, link-type EN10MB (Ethernet), capture size 68 bytes

Possible session info to follow:
u.s.e.r.0.0.1..
Possible password to follow:
.listpy..
Possible login ID to follow:
.u.s.e.r.0.0.1..
Possible password to follow:
.sm0k3BomB..
Possible login ID to follow:
.h...m.m.e.r..
Possible password to follow:
.H.MM.S........
Possible login ID to follow:
.u.s.e.r.0.0.1..
Possible password to follow:
.MyS3cr3tP@ssw0rd.

Possible session info to follow:
....l.s...pw.d...e.x.i.t...^C254 packets captured
891 packets received by filter
0 packets dropped by kernel


As I noted above, the capital U gets whacked on some lines due to the output format and the way we process it in the bash script. For instance, the line that reads:

.h...m.m.e.r

was actually typed into the Telnet session as:

hUmmer

Once you note the pattern of input (and have the knowledge that this is the only letter - and only in capital form - that is excluded from our matches), it becomes fairly simple to fill in that small blank.

Hope you enjoy this script and do "good things" with it. Use it for security, not insecurity :)

Cheers,


This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License.

#!/bin/bash

#
# tcptelnet.sh - extract password and session information tcpdump
#
# 2008 - Mike Golvach - eggi@comcast.net
#
# Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License
#

login_or_pwd=0
sess_info=0
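
# read tcpdump's output line by line, flagging the lines that follow
# "login:" and "Password:" prompts so the echoed-back keystrokes can
# be picked out of the hex/ASCII dump that follows them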

while read line
do
echo $line| grep "Last" >/dev/null 2>&1
ll_yes_or_no=$?
if [ $ll_yes_or_no -eq 0 ]
then
echo
login_or_pwd=0
continue
fi
echo $line| grep "gin:" >/dev/null 2>&1
l_yes_or_no=$?
if [ $l_yes_or_no -eq 0 ]
then
echo
echo "Possible login ID to follow: "
login_or_pwd=1
sess_info=0
continue
fi
echo $line| grep "sword" >/dev/null 2>&1
p_yes_or_no=$?
if [ $p_yes_or_no -eq 0 ]
then
echo
echo "Possible password to follow: "
login_or_pwd=1
sess_info=0
continue
fi
if [ $login_or_pwd -eq 1 ]
then
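# the echoed-back characters arrive one per packet, padded with U's;
# grab the last field, strip the padding and print what's left
# without a trailing newline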
echo $line|awk '{ if ( $NF ~ /UUUUU$/ ) print $NF }'|sed 's/^.*\(.\)UUUUU$/\1/'|sed 's/^U/\./' |xargs -ivar echo "var\c"
continue
fi
if [ $login_or_pwd -eq 0 -a $sess_info -eq 0 ]
then
echo
echo "Possible session info to follow:"
sess_info=1
fi
if [ $sess_info -eq 1 ]
then
echo $line|awk '{ if ( $NF ~ /UUUUU$/ ) print $NF }'|sed 's/^.*\(.\)UUUUU$/\1/'|sed 's/^U/\./' |xargs -ivar echo "var\c"
continue
fi
done
echo
exit 0


, Mike

Monday, April 28, 2008

Snooping The FTP Command Port On Solaris





Check out the video above to see this simple script in action.

Hey There,

Today, we're going to take another look at parsing the output from "snoop" (good to go on Solaris Unix 8 through 10), as we did in our previous post on finding and printing out logins, passwords and other session information over Telnet. Today, we're going to see what we can find by monitoring FTP port 21 (as opposed to FTP data port 20).

Again, a quick note: Today's simple script and demonstration are provided simply to shine a light on a vulnerability that has existed for quite some time, and not as an invitation to illegal activity. I'd like to think that posts, and articles everywhere, on this subject serve as a consistent reminder that, if you really want to try and keep your information secure, you shouldn't use insecure protocols. For instance, as an alternative to straight-up FTP, programs like SCP and SFTP (technically a subsystem of SSH, running on port 22) are freely available and would make this method of gaining information impossible.

Hopefully this bash shell script will help out a few sys admins out there. Getting information from FTP port 21 is so simple that the script could actually be written as one succinct command line.

Rather than go into a long convoluted dissection of how the process works (which we beat to death in our post on grabbing passwords with snoop over Telnet), I've attached a small video to this post (see above). If you can, download it and play it in slow motion. The player above should freeze on the final frame, which is really the shot that shows you how much information you can get by just "listening."

Note, also, that the one big difference between this script and our last script for grabbing passwords using snoop (other than that we're doing it on FTP port 21) is that this script has been written to take standard input (STDIN) rather than read a binary snoop file. So, you'll need to run it like this:

host # snoop -v port 21|./ftppass.sh

You can change the original Telnet script also. All you need to do is comment out this part:

if [ $# -ne 1 ]
then
echo "Usage: $0 snoop_file"
exit 1
fi

snoop_file=$1

if [ ! -f $snoop_file ]
then
echo "Snoop Output $file does not exist. Exiting!"
exit 1
fi


and change this line by removing the "<$snoop_file" reference, so it just says "done":

done <$snoop_file

Just in case you have problems viewing the video above (codecs, no plug-in for your browser, etc), even though I put it up on youtube in hopes that would make it most accessible, I've included another run of output below (same thing, only slightly different and shorter ;):

host # snoop -v port 21|./ftppass.sh
Using device /dev/qfe (promiscuous mode)
220 host FTP server ready.
USER test
331 Password required for test.
PASS binger
530 Login incorrect.
SYST
530 Please login with USER and PASS.
USER test
331 Password required for test.
PASS testing123
230 User test logged in.
PWD
257 "/home/test" is current directory.
QUIT
221-You have transferred 0 bytes in 0 files.
221-Total traffic for this session was 360 bytes in 0 transf


Hopefully this will help you help others see the benefit of using secure FTP whenever possible (even on a "secure" network).

Cheers,


This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License.

#!/bin/bash

#
# ftppass.sh
#
# 2008 - Mike Golvach - eggi@comcast.net
#
# Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License
#

while read line
do
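# keep only snoop's FTP payload lines (labelled "FTP:"), print the
# quoted command/response text and strip the quotes and trailing \r\n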
echo $line|awk '{ if ( $1 ~ /FTP:/ && $2 && $2 !~ /^""$/ && $3 !~ /FTP:/ ) print substr($0,index($0,$2)) }'|sed -e 's/^\"\(.*\)\"$/\1/' -e 's/rn$//'
done


, Mike

Sunday, April 27, 2008

Perl Script To Translate Signal Names And Numbers

Hey There,

For this "Lazy Sunday" post I threw together a quick little Perl script to help out further with what we went over more thoroughly in our previous post on signal definitions in Linux and Unix.

This script takes at least one argument (or it throws a fit and refuses to cooperate with you ;) of either a signal name or signal number, which it will then translate to the other (does the way I wrote that make sense? ;)

So, if you want to know what signals 4 and 5 do, you could run:

host # ./siggy.pl 4 5
Signal Number 4 = ILL
Signal Number 5 = TRAP


And see the (somewhat more verbose) translations from the system includes. Conversely, you could want to know what numbers the TERM and KILL signals are, which you could get by running:

host # ./siggy.pl TERM KILL
Signal Name TERM = 15
Signal Name KILL = 9


Or you can mix and match. Do whatever you want. If you throw the script an argument it doesn't understand, it will react accordingly ;)

host # ./siggy.pl hup 8 ... 9
Signal Name HUP = 1
Signal Number 8 = FPE
...: What is that supposed to mean?
Signal Number 9 = KILL


One interesting thing in the script is the use of a fully-qualified subroutine call. Even though I put the "use Sys::SigAction;" line in the script, Perl was interpreting unqualified function calls (like sig_number()) as belonging to the main:: package, since Sys::SigAction doesn't export those functions by default (if memory serves, you can also pull them in explicitly with "use Sys::SigAction qw(sig_number sig_name);"). Writing "Sys::SigAction::sig_number()" (in this instance) is one way to make sure that Perl knows exactly where to look for that subroutine :)

Cheers,


This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License.

#!/usr/bin/perl

#
# siggy.pl
#
# 2008 - Mike Golvach - eggi@comcast.net
#
# Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License
#

if ( $#ARGV < 0 ) {
    print "Usage: $0 signalNameOrNumber [signalNameOrNumber ...]\n";
    exit(1);
}

use Sys::SigAction;

@signos = @ARGV;

foreach $sig (@signos) {
    if ( $sig =~ /^[A-Za-z]+$/ ) {
        $sig = uc($sig);
        $signum = Sys::SigAction::sig_number( $sig );
        print "Signal Name $sig = $signum\n";
    } elsif ( $sig =~ /^[0-9]+$/ ) {
        $signame = Sys::SigAction::sig_name( $sig );
        print "Signal Number $sig = $signame\n";
    } else {
        print "$sig: What is that supposed to mean?\n";
    }
}


, Mike

Saturday, April 26, 2008

Shell Special Variables In Bash

Hello again,

Today, we're going to take a look at the Bash shell (for Linux or Unix) and go over some of what I find to be its most useful aspects. I could be talking about quite a lot of things, actually. It's hard to say what I really like "best" about anything, but the built-in special variables in Bash come in handy an awful lot (especially if you get stuck in the shell on a crashing system or have to access the network under equally hopeless conditions).

Today, we're going to run down these very basic variables, one by one. And, the reason they're called "special" isn't an indication of their relative value. These variables are "special" because they can only be referenced; you can never directly assign values to them :)

1. * : The * variable expands to all the positional parameters on a command line. Since we're talking about built-in variables today, I don't mean *, like in "ls *", but * as in "echo $*", which produces nothing if no positional parameters are set. However, if there are parameters on the command line, expanding this variable equals all of the command line parameters, like $1, $2, $3, etc. If $* is surrounded by double quotes ("$*"), it equals all of the parameters as one value, separated by the first character of the field separator (IFS - usually a space), like "$1 $2 $3"

2. @ : The @ variable expands the same as the * variable when called without quotes as $@. When called between double quotes, as "$@", it expands into all the command line parameters, but each parameter is separate (rather than all together in one giant double quoted string, separated by spaces, as with "$*"), like "$1", "$2", "$3", etc.
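
A quick, entirely hypothetical script (call it quotes.sh - the name is arbitrary) makes the difference easier to see:

#!/bin/bash

# demonstrate "$*" vs "$@" - each loop iterates once per word produced
for arg in "$*"
do
    echo "\"\$*\" gave me: $arg"
done
for arg in "$@"
do
    echo "\"\$@\" gave me: $arg"
done

Called with a quoted argument, the "$*" loop runs once with everything mashed together, while the "$@" loop runs once per original argument:

host # ./quotes.sh one "two three"
"$*" gave me: one two three
"$@" gave me: one
"$@" gave me: two three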

3. # : The # variable expands to the number of parameters on a line. It's most often used to check to see if the proper amount of arguments have been passed to a script. For example, this would show how the $# variable could be used to test that a script is being called with only 2 arguments:

if [ $# -ne 2 ]
then
echo "Usage: $0 arg1 arg2. Quitting"
exit
fi


4. ? : The ? variable expands ( as $? ) to the return code (exit status) of the last executed command. In a pipe-chain, it equals the return code of the last executed command in the chain.

5. - : The - variable expands to the current shell's options. For instance, if you were logged into a shell and executed "echo $-", you'd probably see something similar to this:

host # echo $-
himBH


Which (of course ;) would mean that the shell had been invoked with the -i (forces the shell to interactive mode, so it reads the .bashrc when it starts), -h (remembers where commands are when they get looked up), -m (enables job control, so you can run background processes), -B (enables brace expansion in the shell whereby, for instance, "file{a,b}" would equal "filea fileb") and -H (enables ! character history substitution) flags.

6. $ : The $ variable expands to the process ID of the shell, or subshell (as happens when a script is executed, for instance), in which it's invoked (as $$). It's generally used to determine the process ID of a shell in programming and it should be noted that, if it's used within a subshell that is generated with parentheses (e.g. ($$)), it will actually retain the process ID of the parent shell, rather than the subshell.

7. ! : The ! variable (which you might remember from your options list when checking $-) expands to the process ID of the last run background command. This is different than $?, which reports on the return code of the last run command. For instance, this is one way to demonstrate using $!:

host # echo $! <--- No value because we have no jobs running in the background
host # sleep 200000 &
[1] 23902
host # echo $!
23902


8. 0 : $0 expands to the name of the shell you're in, or the shell script that it's being called from. It's generally found in usage messages, like the one in example 3's test against $#'s value. From within a script called blackhat.sh,

"Usage: $0 arg1 arg2. Exiting."

would print something like:

"Usage: ./blackhat.sh arg1 arg2. Exiting."

In certain circumstances it can resolve, or expand, to the first argument after the string set to execute when a shell is invoked with the "-c" option, or it can be set to the file name used to invoke Bash, if Bash is called by another name (like "rbash").

9. _ : The _ variable is set to the absolute (not relative) file name of your shell when you start it up (e.g. $_ = /bin/bash), or of the script being executed if it's passed in an argument list when the shell is invoked. After that, it always expands to the final argument of the previously executed command. For instance:

host # vmstat 1 1
kthr memory page disk faults cpu
r b w swap free re mf pi po fr de sr m1 m1 m1 m1 in sy cs us sy id
0 0 0 37857184 2413576 16 152 0 0 0 0 0 0 1 1 0 1204 996 483 1 2 97
host # echo $_
1


As a side note, when you're using /bin/mail, the $_ variable is equal to the mail file you're checking.

And that's it, as far as I know (The basic ones, anyway ;)

Hopefully being aware of these Bash built in variables, and knowing what they all do and mean, will help you out or, at the very least, make your shell scripting easier and/or more productive in the future :)

Best wishes,

, Mike

Friday, April 25, 2008

Determining Signal Definitions In Linux and Unix

Greetings,

Today, we're going to take a quick look at simple return codes (exit statuses) that get returned to the bash and ksh shells in Linux (and Solaris Unix). This complements a post we did a while back on trapping signals in Perl and the shell, and adds a bit more to our understanding of the fork-and-exec process introduced in our post on running a login shell on a network port.

While these signals are subject to change, the POSIX standard is fairly consistent between most Unix and Linux systems. If you ever need to find out what a signal means, you have a number of options at your disposal to determine that information.

1. At the process level on Solaris, you can figure out what signals are associated with what processes by using the "psig" command. This command will list out all signals associated with a given process (assuming you know its process ID). A sample run would look something like:

host # ps -ef|grep "[l]pd"
daemon 2073 1 0 Mar 21 ? 0:00 /usr/local/sbin/lpd
host # psig 2073
2073: /usr/local/sbin/lpd
HUP caught sigacthandler
...
ILL default
...
PIPE ignored
...


Basically, you'll get a ton of output. And none of it may make any sense to you. That's okay for now :)

2. At the process level on Linux, you can get approximately the same functionality from the "crash" command ( used interactively, if available ) or, even better, this publicly available script written specifically to emulate Solaris' psig functionality, for use on Linux. It's called psig.sh for Linux and is an excellent tool for replicating the psig experience for Linux users who are used to Solaris' proc commands.

3. On either Linux or Unix (any flavor), you can determine what signals are handled by your machine, what their names are and what their signal numbers are, by checking your system include files. On Solaris, the header file that contains this information is almost guaranteed to be named /usr/include/sys/iso/signal_iso.h (at least for Solaris 9 and 10). In Linux, it will generally be very specific to the architecture of your build system (something like: /usr/include/asm-x86_64/signal.h)

In any event, you can follow a simple process to figure out where your system's signal definitions are loaded, no matter what flavor of Unix or Linux you're running. It all comes down to derivation and, luckily, you'll always have the same starting point (I may be wrong on this, but I've never seen an exception ;)

Here's a quick command line walkthrough of two different ways to figure out where your signal definitions are listed (that is, the include/header file that defines the signal numbers, their abbreviated signal names and definitions) that can be used on almost any *nix system. As a starting point, /usr/include/signal.h is probably the best, since it exists on most systems. We'll also use SIGHUP (the standard HangUp signal and signal number 1) to guide our search:

The painful way:

host # grep SIGHUP /usr/include/signal.h|grep -w 1
host # grep include /usr/include/signal.h
#include <sys/feature_tests.h>
#include <sys/types.h>
...
host # grep include /usr/include/signal.h|sed 's/^.*<\([^>]*\)>/\1/'|xargs -ivar grep SIGHUP /usr/include/var
host # grep include /usr/include/signal.h|sed 's/^.*<\([^>]*\)>/\1/'|xargs -ivar grep "^#include" /usr/include/var|grep sig
#include <sys/iso/signal_iso.h>
#include <sys/siginfo.h>
...


... and so on, and so on, until you find the right file. Basically, just follow the includes until you reach the end. This can be tedious and time consuming.

The incredibly easy way:

Solaris_9_or_10_host # find /usr/include|xargs grep SIGHUP /dev/null|grep -w 1
/usr/include/sys/iso/signal_iso.h:#define SIGHUP 1 /* hangup */
<--- This is our file :)

or

Linux_host # find /usr/include|xargs grep SIGHUP /dev/null|grep -w 1
/usr/include/bits/signum.h:#define SIGHUP 1 /* Hangup (POSIX). */
/usr/include/asm-x86_64/signal.h:#define SIGHUP 1
/usr/include/asm-i386/signal.h:#define SIGHUP 1
<--- All 3 of these entries are valid, with the first header file being the most generic. Our next command can narrow down the most "specific" correct result.
Linux_host # uname -pr
2.6.5-7.286-smp x86_64
<--- Now we know that, if we want to be extra sure, "/usr/include/asm-x86_64/signal.h" is the signal definition file we should be looking at to determine our signal information.

And that's all there is to it :) As an interesting bit of trivia, if a process exits with a return code higher than 128, you can easily deduce what signal stopped it. All you need to do is subtract 128 from the return code, like this:

host # sleep 2
host # echo $?
0
host # sleep 200
^C
host # echo $?
130


In this case, when run normally, our sleep command exited with a return code of 0 (success). When we did a control-C during the sleep command, it returned a code of 130. Of course, you won't find this signal number defined in any of your include files, but, utilizing the shell's native reporting of signal interrupts, all you need to do is subtract 128 from 130 and you now know that the sleep command was killed by a signal 2, which is defined as:

SIGINT 2 /* interrupt (rubout) */ <--- on Solaris

and

SIGINT 2 /* Interrupt (ANSI). */ <--- on the Linux box I'm using.

Both answers, although formatted differently, indicate the same signal. Nice :)
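
If you'd rather not do that arithmetic in your head, it's easy enough to script. Here's a minimal sketch (call it decode.sh - the name is arbitrary) that leans on bash's "kill -l" built-in, which, handed a number greater than 128, treats it as the exit status of a signal-terminated process:

#!/bin/bash

# translate an exit status into a signal name when it's greater than 128
rc=$1
if [ $rc -gt 128 ]
then
    echo "status $rc = killed by signal $(($rc - 128)) (SIG$(kill -l $rc))"
else
    echo "status $rc = normal exit"
fi

host # ./decode.sh 130
status 130 = killed by signal 2 (SIGINT)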

Below, I've listed the standard signals for your convenience (from a fairly generic file - signum.h) and take no credit for writing them myself ;)

Enjoy,

Some Basic signals from /usr/include/bits/signum.h on Linux (Note that I'm only listing the basic signals and not I/O polling signals, etc)

#define SIGHUP 1 /* Hangup (POSIX). */
#define SIGINT 2 /* Interrupt (ANSI). */
#define SIGQUIT 3 /* Quit (POSIX). */
#define SIGILL 4 /* Illegal instruction (ANSI). */
#define SIGTRAP 5 /* Trace trap (POSIX). */
#define SIGABRT 6 /* Abort (ANSI). */
#define SIGIOT 6 /* IOT trap (4.2 BSD). */
#define SIGBUS 7 /* BUS error (4.2 BSD). */
#define SIGFPE 8 /* Floating-point exception (ANSI). */
#define SIGKILL 9 /* Kill, unblockable (POSIX). */
#define SIGUSR1 10 /* User-defined signal 1 (POSIX). */
#define SIGSEGV 11 /* Segmentation violation (ANSI). */
#define SIGUSR2 12 /* User-defined signal 2 (POSIX). */
#define SIGPIPE 13 /* Broken pipe (POSIX). */
#define SIGALRM 14 /* Alarm clock (POSIX). */
#define SIGTERM 15 /* Termination (ANSI). */
#define SIGSTKFLT 16 /* Stack fault. */
#define SIGCLD SIGCHLD /* Same as SIGCHLD (System V). */
#define SIGCHLD 17 /* Child status has changed (POSIX). */
#define SIGCONT 18 /* Continue (POSIX). */
#define SIGSTOP 19 /* Stop, unblockable (POSIX). */
#define SIGTSTP 20 /* Keyboard stop (POSIX). */
#define SIGTTIN 21 /* Background read from tty (POSIX). */
#define SIGTTOU 22 /* Background write to tty (POSIX). */
#define SIGURG 23 /* Urgent condition on socket (4.2 BSD). */
#define SIGXCPU 24 /* CPU limit exceeded (4.2 BSD). */
#define SIGXFSZ 25 /* File size limit exceeded (4.2 BSD). */
#define SIGVTALRM 26 /* Virtual alarm clock (4.2 BSD). */
#define SIGPROF 27 /* Profiling alarm clock (4.2 BSD). */
#define SIGWINCH 28 /* Window size change (4.3 BSD, Sun). */
#define SIGPOLL SIGIO /* Pollable event occurred (System V). */
#define SIGIO 29 /* I/O now possible (4.2 BSD). */
#define SIGPWR 30 /* Power failure restart (System V). */
#define SIGSYS 31 /* Bad system call. */
#define SIGUNUSED 31
#define _NSIG 65 /* Biggest signal number + 1. */


, Mike

Thursday, April 24, 2008

Creating Storage Pools For Solaris ZFS

Hey there,

As a way of getting on to addressing Solaris 10 Unix issues beyond patching zones and migrating zones, today we're going to put together a slam-bang setup of a Solaris ZFS storage pool. For those of you unfamiliar with ZFS (and I only mention this because I still barely ever use it -- Lots of folks are stuck on either Solaris 8 or 9 and are having a hard time letting go ;), it simply stands for "Zettabyte File System."

A Zettabyte = 1024 Exabytes.
An Exabyte = 1024 Petabytes.
A Petabyte = 1024 Terabytes.
A Terabyte = 1024 Gigabytes.
A Gigabyte = 1024 Megabytes.


And, at about this point the number scale makes sense to most of us. And, even though a Zettabyte seems like it's an unbelievably large amount of space, the next generation of operating systems will require twice as much RAM and disk space in order to respond to your key presses in under a minute ;)

Anyway, now that we have that out of the way, let's set up a ZFS pool.

First we'll create the pool, in a pretty straightforward fashion, using the zpool command and adding the disks (which can be given in either cXtXdX notation or, to specify a slice, in cXtXdXsX notation; you can also use full logical path notation if you want, e.g. /dev/dsk/cXtXdXsX). We'll assume two entire disks (unmirrored), create the pool and mount it for use:

host # zpool create MyPoolName c0t0d0 c0t1d0

Note that you can run "zpool create -n" to do a dry-run. This will allow you to find any mistakes you may be making in your command line as zpool doesn't "really" create the pool when you run it with the "-n" flag.

We also won't have to do anything extra to create the mount point, as it defaults to the name of the disk group. So, in this case, we would have a storage pool with a default mount point of /MyPoolName. This directory has to either not exist (in which case it will be automatically created) or exist and be empty (in which case the root dataset will be mounted over the existing directory). If you want to specify a different mount point, you can use the "-m" flag for zpool, like so:

host # zpool create -m /pools/MyPool MyPoolName c0t0d0 c0t1d0

And your ZFS storage pool is created! You can destroy it just as easily by running:

host # zpool destroy MyPoolName

or, if the pool is in a faulted state (or you experience any other error, but know you want to get rid of it), you can always force the issue ;)

host # zpool destroy -f MyPoolName

And you're just about ready to use your new "device." All you need to do is (just like you would when mounting a regular disk device) create a file system on your storage pool. You can do this pretty quickly with the zfs command, like so (note that zfs takes the dataset name, with no leading slash):

host # zfs create MyPoolName/MyFileSystem

Note that the pool name is the root of the dataset hierarchy, and your new file system is actually created one level below it. Basically, /MyPoolName is the mount point of the pool's root dataset, while MyPoolName/MyFileSystem is your file system (mounted, by default, at /MyPoolName/MyFileSystem). And you can manage this file system, in much the same way you manage a regular ufs file system, using the zfs command.
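
Once the file system exists, day-to-day management is mostly "zfs" one-liners. A few examples (the 10 Gigabyte quota is just an arbitrary stand-in value):

host # zfs list <--- show all of your datasets, their space usage and mount points
host # zfs set quota=10G MyPoolName/MyFileSystem <--- cap the file system at 10 Gigabytes
host # zfs get quota MyPoolName/MyFileSystem <--- double-check that the property took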

Again, you can always opt-out and destroy it if you don't like it (here, as well, you have the option to "force" if you want):

host # zfs destroy MyPoolName/MyFileSystem

Aside from "-f" (to force execution), the most common flags to use with "zfs destroy" are "-r" to do a recursive destroy or "-R" to recursively destroy all the regular descendants in the file system tree, plus all cloned file systems outside of your target directory (Be careful with that switch!)

We've only just brushed on getting started with zfs, but, hopefully, it's been somewhat helpful and, at the very least, readable ;)

Enjoy,

, Mike

Wednesday, April 23, 2008

Setting Locked, No-Login and Login Accounts On Solaris 10

Hello again,

Today we're going to look at a feature of Solaris' passwd command that has been around since Solaris 9. (If you read this and like the functionality, but are still using Solaris 8, check out this page regarding Sun Bug ID 6227256, which basically states that this functionality will never be back-ported to Solaris 8.) I always thought it was strange that they considered this a Solaris 8 "bug," since the extra passwd functionality wasn't introduced until Solaris 9. I would have filed it under "feature-set enhancement," like the /dev/random patch, but I guess that's neither here nor there.

In any event, our post today is going to cover the difference between using the passwd command to lock user accounts, make them no-login (which can also be done if you set the user shell to a "nologin" shell) and, ultimately, reverse the process if necessary. Of course, all of this can be done manually if you like to edit the /etc/shadow file by hand. I'd advise against it. It has the potential to turn out very badly ;) This "added" functionality can also be used to make our script to remove old users network wide (and its Linux counterpart) a little more flexible. Sometimes it's good to have options other than just "locking" an account (or manually setting NP) if you need to keep an account around, but don't want the user logging in anymore.

Generally, if you look in a standard Solaris box's /etc/shadow file, in the password column, you'll either see a regular password entry (encrypted), like "557LIW.RqHi92," a locked password entry, like "*LK*" or a "No Password" password entry, like: NP (The fair equivalent of a no-login password on Solaris 9 and 10).

You can always check what kind of password you have by running:

host # passwd -s
user1 PS


This shows that our logged-in account "user1" has a password set (PS). You can look at all users' password settings by running "passwd -sa," but you need to be root to do so. "LK" stands for "locked" and "NL" means "no-login."

Now, moving along, there may be plenty of reasons you need to have an account locked and not removed from your system (That's your business. I'll stay out of it ;) The one thing to remember about a "locked" account (*LK* in /etc/shadow) is that it is truly barred from any activity. Not only can the account not login to the machine, it also can't make use of other services like cron, etc. A user account that has "no password" (NP in /etc/shadow) can't login to the machine either, but "can" use system facilities like cron. Changing a user account from "locked" to "no password" can be tremendously advantageous if you can't let the user log in any more, but still need to keep the account around to run nightly jobs, or what have you.

So, let's say we have a user (his login will be "badseed1") that we needed to restrict access to on our Solaris Box. Let's also say that we did it when the box was running Solaris 8, and we did the simplest thing possible and locked up his account by running:

host # passwd -l badseed1

So, now that we've upgraded to Solaris 9 (or 10), we get the following when we check on his account (let's say it turns out we really need his account to be performing some action that it hasn't been performing since we stopped letting him log in. We didn't notice for a really long time. We were busy!!! ;)

host # passwd -s badseed1
badseed1 LK


And we have confirmation that his password is locked. This is good (in that it makes sense) because we know that "locked" accounts can't use cron. The first thing we're going to do is get his shadow entry up to speed. That is, we're going to leave his account locked, but change the way Solaris handles his entry. In Solaris 8, his password entry in /etc/shadow simply consisted of "*LK*", but we're going to change that to make it like everyone else's (This also makes our previous post on generating encrypted password strings somewhat obsolete, but still entertaining :). It will only take two passwd commands to do this and then we can see the difference:

host # grep badseed1 /etc/shadow
badseed1:*LK*:::::::
host # passwd badseed1
New Password:
Re-enter new Password:
passwd: password successfully changed for badseed1
host # grep badseed1 /etc/shadow
badseed1:Xe0.pVr4gvvh6:::::::
host # passwd -l badseed1
host # grep badseed1 /etc/shadow
badseed1:*LK*Xe0.pVr4gvvh6:::::::


Now, as you see, the user account is still locked but, instead of having just "*LK*" as his or her shadow password entry, the locked password is now simply the regular password with the "*LK*" string prepended to it. This allows us to do the following if we want to "unlock" the password and restore the user to the password they originally had without having to do anything extra!

host # passwd -u badseed1
host # grep badseed1 /etc/shadow
badseed1:Xe0.pVr4gvvh6:::::::


Now, we just have one last thing to do before we can get on with our lives. We need to set up badseed1's account so that it's "no-login." We want badseed1's account to be able to run cron jobs, but we don't want the user to be able to login, so providing the "no-login" shell is a happy medium between the two extremes of absolute availability and complete lockdown. And, again, this can now be done with a single command:

host # passwd -N badseed1
host # grep badseed1 /etc/shadow
badseed1:NP:::::::


And we're all set. badseed1 can no longer log into our systems, but his or her cron jobs will run!
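
To recap, the whole dance boils down to four passwd invocations (reusing our hypothetical "badseed1" account):

host # passwd -l badseed1 <--- lock: prepends *LK* to the existing password hash
host # passwd -u badseed1 <--- unlock: strips *LK*, restoring the old password
host # passwd -N badseed1 <--- no-login: sets NP, so cron jobs run but logins fail
host # passwd -s badseed1 <--- show the account's current password status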

Hopefully you'll find some good use for these two very helpful new options to passwd ("-u" and "-N"), as well as the newer locking system used by the standard option "-l" ...ok, the options, as of the time of this writing, aren't technically "new," but they were at one point, and, hopefully, if they aren't new to you, this post has served as an enjoyable refresher :)

Best wishes,

, Mike




Tuesday, April 22, 2008

Functions Vs. Subroutines In Perl And Bash - Palindromes Revisited

Hey There,

As the clever title of today's post suggests, this is the follow up to our post on scripting out a way to determine if a string is a palindrome using Perl.

One of the main things to note (which isn't the only difference, of course) is the way in which the Bash shell and Perl deal differently with "routines." In our Perl script, we did the meat of our recursive work inside a "subroutine." In today's Bash script, that same work is handled inside a "function."

Functionally speaking, both a Perl "subroutine" (identified by the "sub" declaration) and a Bash "function" (identified by the "function" declaration) do the same things. They allow you to create a named block of code for reuse within your script. Naturally, functions and/or subroutines are a great help if you find that your script consists of typing the same set of instructions more than once (if you have to type the same block of code more than twice, they're even better ;)

Technically speaking, there are some major differences between the two that should be noted. These should be platform independent and true for both Bash and Perl on Linux or Unix.

In Perl, since all of the information in the script is processed before the script is run, you can include your "subroutine" anywhere in the script, even if it's after the line on which you call it. It's common practice to put subroutines at the bottom of the script, but you can put them anywhere within the script that you like, if you're so inclined.

In Bash, since the script is parsed from top-to-bottom in a left-to-right fashion, "functions" absolutely need to appear in the script before the line on which they are called. If you put a function definition at the bottom of your Bash shell script and call that function 15 lines prior, you'll receive something along the order of a "command not found" error.

In Perl, when you pass arguments to a subroutine, you have a few different ways you can do it (even some deprecated methods still work), but the most elegant way to pass simple variables to your Perl subroutine is to include them within parentheses, separated by commas, like so:

MySub( $var1, $var2, $var3);

The subroutine that would accept, and process, all of those arguments may look something like this (the arguments arrive packed in Perl's special @_ array; unlike C, you can't just name them in the sub declaration itself):

sub MySub {
    my ( $var1, $var2, $var3 ) = @_;
    print "$var1 $var2 $var3\n";
}


In Bash, when you pass arguments to a function, you can just pass them as if they were arguments to a regular command, like this:

MyFunction $var1 $var2 $var3

The function that would accept, and process, these arguments may look something like this (inside a Bash function, the arguments show up as the positional parameters $1, $2, $3 and so on, not under their original variable names):

function MyFunction {
    echo "$1 $2 $3"
}


Interesting side note: In Bash, if you declare a function, you can leave out the reserved word "function" if you want to, but this requires that you use parentheses, like with Perl. So you could write your function like this:

MyFunction () {
    echo "$1 $2 $3"
}


However, if you opt to use the reserved word "function" when declaring your function (like we do), the parentheses following the function name are optional :)

You may have noted that we haven't mentioned anything about "scope" with regards to variables and functions or subroutines. This is on purpose. We'll inevitably tackle that in a later post, but it's beyond the "scope" of this one (that was a truly painful pun for all of us... but utterly unavoidable ;)

On that note (for the palindromes), in our Bash script we've taken advantage of that "lack of scope" and simply set the "$status" variable globally within the palindrome function. In our Perl script, we used "return" to pass back either a 0 or a 1 to the calling procedure. The major difference here is that our Perl script was returning a code back to a variable which was defining its value based on the outcome of the subroutine process, so:

$status = palindrome( @string, $chars, $count );

was giving the "$status" variable a value of 0 or 1, depending on what the "palindrome" subroutine returned. In Bash, we jumped right over the middle man and just set the global "$status" variable from within the function (This isn't generally recommended, but okay for our purposes here). Assuming there are no bugs in your version of Bash that mangle the scope of variables within functions, the only risk you're taking, by defining a global variable from within a function, is that you'll forget about it and redefine it outside the function, thereby overwriting the value. But, again, that's for another day, as I can feel an essay coming on ;)

Hope you enjoy the Bash version of our simple palindrome script and that it helps you see the correlation between not only functions and subroutines, but some other aspects of coding that are either common, or unique, to both Bash and Perl.

Best wishes,

Cheers,


This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License.

#!/bin/bash

#
# bashpal.sh
#
# 2008 - Mike Golvach - eggi@comcast.net
#
# Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License
#

function palindrome {
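    # note: $string, $chars and $count are globals here - the arguments
    # passed on the recursive call below are really just along for the ride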
if [ "${string:${count}:1}" == "${string:${chars}-${count}-1:1}" ]
then
let count=$count+1;
palindrome "$string" "$chars" "$count"
fi
if [ $count -eq $chars ]
then
status=1;
else
status=0;
fi
}

printf "Enter A String: "
read string

chars=${#string}
count=0
palindrome "$string" "$chars" "$count"

if [ $status -eq 1 ]
then
printf "\n That String Is A Palindrome.\n\n"
else
printf "\n That String Is Not A Palindrome.\n\n"
fi


, Mike




Monday, April 21, 2008

Sar, Vmstat, Free Memory And Swap On Linux or Unix

Hey There,

Today's post will, hopefully, help out folks in the future, since I see questions about this a lot on the boards (and am happy to keep reposting my original answers :) Actually, if you have the time and any of this seems nutty, you might check out an earlier post we did on dealing with simple arithmetic in the shell or our post on floating point arithmetic and percentages if you're feeling frisky ;)

Linux is actually quite a bit better with this diagnostic issue, at least when it comes to reporting free memory, than Unix is. My heaviest bias on Unix is toward Solaris, since I use it the most. I'm not saying that the way they handle it is bad, incorrect or in any other way improper. I'm just saying that, on its face, it can be a bit confusing. So we'll tackle both free memory and free swap reporting, in vmstat and sar, one by one:

1. Free memory: Generally, like I said, on most Linux distro's, "sar -r"'s "kbmemfree" column and vmstat's "memory/free" sections use the same standard of measurement. You can easily compare the two and see that they match, since they're both reported in kilobytes.

Solaris' implementation is somewhat different, however. While vmstat's "memory/free" output is displayed in kilobytes, the output of "sar -r" is displayed as a number of memory pages. Okay, so right off the bat you can be sure that the numbers won't agree. For instance, if vmstat's output showed 4840496 on Solaris, the output from "sar -r" would be 605367 (from a semi-concurrent run of both commands). Those numbers aren't even close, which is good since you'll at least notice that something is off and not necessarily think you have a huge problem ;)

So, now you'll need to calculate how many pages it takes to equal the amount of kilobytes reported by vmstat. First you'll want to run a simple command to find out the default size of memory pages on your Solaris Unix system:

host # pagesize
8192


In this case, the page size is 8192 (which is reported in bytes), which gives us an 8 kilobyte page. Then, we'll take that number and multiply it by the number of pages being reported by "sar -r":

host # let a=605367*8 <--- Since vmstat reports in kilobytes, we want to multiply the number of pages shown in "sar -r" by the number of kilobytes in our default page (8 kilobytes, composed of 8192 bytes)

host # echo $a
4842936


Now the numbers are pretty much the same. A slight skew is probable, since we didn't run the commands at the exact same time, memory gets thrown around between processes a lot, and a kilobyte (especially these days) is a very small unit of measurement. Still, 4840496 and 4842936 are too close for this to have been an accident ;) In short; all is well. The numbers agree.

2. Free Swap is a little bit different on most Linux distro's. In vmstat, it shows up under "memory/swpd" which really means the memory that is currently swapped out, not what's free. But that's okay, because it's reported in the same unit of measurement (kilobytes), and in the same way, that it is in "sar -r" under the "kbswpused" column.

Again, you'll need to make allowances for Solaris Unix, but (before you go cursing it up and down the figurative block) remember that the techniques used to derive the free memory above (and the almost-inverse that we're going to do to figure out if the swap numbers are okay) sometimes show up elsewhere, on other Linux and Unix distributions, so this is actually a good thing :) Really...

Anyway. When you check for how much swap you have free on Solaris, you'll notice that the numbers don't match, again! For example, if "vmstat" is showing 8993040 (in kilobytes, again) for "memory/swap," you'll note that "sar -r" is showing the much larger number, 17993811, in its "freeswap" column. Now we have a situation approximately the opposite of when we tried to make the free physical memory numbers agree.

This time the answer's a little simpler. On Solaris, "sar -r" reports free swap in blocks (which are 512 bytes, of course ;) Two of these equal 1024 bytes which is exactly the size of a kilobyte. Simple. Just divide "sar -r"'s number by 2, and you have the size in kilobytes instead of blocks, so 17993811 (divided by 2) becomes 8996905 (kilobytes), which is pretty close to the 8993040 kilobytes reported by vmstat.
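
If you do this a lot, the conversions are easy to script. Here's a minimal sketch (it assumes Solaris' pagesize command is in your PATH, and the two sar numbers are just placeholders from the runs above - substitute whatever your own "sar -r" reports):

#!/bin/bash

# convert Solaris "sar -r" units into kilobytes for comparison with vmstat
sar_freemem_pages=605367 # placeholder: the "freemem" column from sar -r
sar_freeswap_blocks=17993811 # placeholder: the "freeswap" column from sar -r

page_kb=$(( $(pagesize) / 1024 )) # memory page size, in kilobytes
echo "free memory: $(( sar_freemem_pages * page_kb )) KB"
echo "free swap: $(( sar_freeswap_blocks / 2 )) KB" # two 512-byte blocks per KB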

Hopefully this has been more helpful than confusing :)

Best wishes,

, Mike




Sunday, April 20, 2008

Porting Perl To Shell Again - Palindromes

Hey There,

For this "Lazy Sunday" post, we're going to take a look at a Perl script, for Linux or Unix, that checks whether or not a string you input is a palindrome. As you probably know (but I feel I should explain anyway, just to be thorough ;), a palindrome is any set of letters, numbers, spaces, etc, that reads exactly the same forward or backward.

Check out this little script and notice the use of recursion to maintain a pseudo "state," as well as the use of a single subroutine to perform what could be a complex operation.

We'll port this to shell in a few days (like we've done in our post on porting a web element reporting script and its improved log checking follow up), with more explanation of what's going on, and the why and how of what gets changed when we port from Perl to shell.

Enjoy your Sunday and relax :)

Cheers,


This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License.

#!/usr/bin/perl

#
# perlpal.sh
#
# 2008 - Mike Golvach - eggi@comcast.net
#
# Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License
#

print "Enter A String: ";
$string=<STDIN>;
chomp($string);

$chars = length($string);
@string = split(//, $string);
$count=0;
$status = palindrome( @string, $chars, $count );


if ( $status == 1 ) {
print "\n That String Is A Palindrome.\n\n";
} else {
print "\n That String Is Not A Palindrome.\n\n";
}

sub palindrome {
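    # note: this operates on the global @string, $chars and $count -
    # the arguments passed in are effectively ignored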
    if ( $string[$count] eq $string[$chars-$count-1] ) {
        $count++;
        palindrome(@string, $chars, $count);
    }
    if ( $count == $chars ) {
        return 1;
    } else {
        return 0;
    }
}


, Mike




Saturday, April 19, 2008

Snooping Through Email On Solaris

Good morning/afternoon/evening,

For this Saturday's post, I thought I'd put together another bash script on Solaris Unix using snoop. The last time we did this it was to grab cleartext logins and passwords. This time, I thought we'd look at email. SMTP, port 25, in particular.

Snooping through someone's physical mail (delivered to their home) would be a federal offense, but (to my knowledge) if you're a Unix or Linux administrator and have to analyze network traffic and interrogate packets at your place of work, there's almost no way you can avoid getting into other people's business. Most companies have an HR policy stating that your email is considered private, and that gaining access to another user's email without their consent is blah, blah, blah, up to and including termination. Of course, five minutes after said employee leaves the company, you're probably going to be called upon to provide just that sort of access (or the information gained by that access) to the same department that demanded you never ever do that sort of thing in the first place.

That being said, I'm pretty sure the law at this point is that a company can do whatever it wants with any data you create or use on their systems. If they need to, or want to, they can look at the email you send out and receive, where you go on the web, etc. I'm not sure why corporate America insists on assuring the average employee that their company-owned data is "personal and confidential" when all that email and web traffic has to be scanned by 15 security appliances before it's allowed to enter or leave the company network. You can't block access to a website (even to, say, everyone in the company) without examining where every employee is going when they surf the web, etc. It probably makes for some yawn-inspiring legal battles ;)

Anyway, for the sake of today's argument, you're the root user (or someone with sufficient privilege to run snoop on Solaris Unix) and you're inspecting network traffic on an interface on a machine for a semi-legitimate reason.

If you need to check mail traffic, the simplest thing to do is snoop on port 25. This is the SMTP port and is used for sending and receiving email (you can also look at the POP and IMAP ports, but we'll wrap that into the everything's-pretty-much-all-the-same-when-you-get-right-down-to-it closing). You can get a good deal of information just running a straight-up snoop, like so:

host # snoop -o output_file port 25 <--- This snoop will use the default network interface, only capture traffic on port 25 and write the output to a file called "output_file."

The only problem you have is that, although it's readily clear who sent mail to whom, you can't see "what" they wrote. That's the stuff you want to see if you're going to be intercepting that information in real-time.

Note: In order to see the full contents of a packet, you don't need to use the "-v" flag. In fact, I'd strongly discourage it, since it pumps out about 50+ lines of IP stack layer information that you don't need for each and every packet!

If you want to see the full contents of the packets in snoop, just use the "-x" option and pass it the argument of an offset. "-x" gives you the entire packet, in both HEX and ASCII formats. You don't really need the HEX, since most humans read ASCII encoded text (like this) a lot more easily ;) A quick way to skip past the packet headers is to set the offset to 54 for TCP traffic (14 bytes of Ethernet header, plus 20 of IP, plus 20 of TCP, assuming no options) or 42 for UDP traffic (14, plus 20, plus an 8 byte UDP header). So, if you wanted to grab each packet and look at the ASCII contents only, you would type:

host # snoop -x 54 -o output_file port 25 <--- We're assuming TCP for the email transmission, although we'd capture any UDP packets that went to that port, also.

If you're ever snooping a protocol that doesn't fit the standards, or you forget these offsets, you can almost always get the exact same effect (for TCP, UDP and any other protocol) by piping your output_file back through snoop and on to awk when you're ready to read it, and just printing out the last field of every record, like so:

host # snoop -o output_file port 25
host # snoop -i output_file 2>&1 | awk '{ print $NF }'


The script we wrote for today takes a single argument: the name of a snoop capture file, saved per the suggested settings in its usage message (if you need to capture on a specific NIC, add the "-d" option to that snoop command). It can be run simply, like this:

host # ./snoopmail.sh output_file

And you'll get somewhat-ugly, but ultimately satisfying, results like this (yet another reason to never send a password via email):

..............To
...some.poor.guy
@xyz12345.com..S
ubject
Update 4....Hi
again....How are
ya,....Your new
password is bU
ggl3s....Please
do not share thi
s information wi
th any one!....T
hanks,.....Secur
ity..


Enjoy your Saturday, everyone :) Best wishes,


This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License.

#!/bin/bash

#
# snoopmail.sh
#
# 2008 - Mike Golvach - eggi@comcast.net
#
# Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License
#

if [ $# -ne 1 ]
then
echo "Usage: $0 SnoopOutputFile"
echo "Please capture packets with the suggested"
echo "settings: snoop -o output port 25"
exit 1
fi

snoop_file=$1

if [ ! -f $snoop_file ]
then
echo "Cannot find snoop output file $snoop_file. Exiting..."
exit 1
fi
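
# print only the SMTP payload: keep everything between the DATA and
# QUIT commands, drop snoop's SMTP summary lines, take the hex/ASCII
# dump after its offset label and cut down to the ASCII column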

snoop -i $snoop_file -x 54|sed -n '/DATA/,/QUIT/p'|grep -v SMTP|awk -F":" '{print $2}'|cut -c41- -


, Mike




Friday, April 18, 2008

Shell Script To Send Mail Using Bash File Descriptors

Hey There,

It's the weekend again, and I thought I'd wrap up this week's postings with a quick script that, again, demonstrates a great way to take advantage of networking with bash using file descriptors on Linux or Unix. If you find this sort of thing fascinating (which I seem to ;), be sure to check out our original post on networking with bash file descriptors and the follow up regarding more things you can do with bash networking.

Note: Interestingly enough, apart from file descriptors 0, 1 and 2, you should also stay away from file descriptor 5. It seems bash uses this as its default file descriptor when a child process is created. Thankfully, I chose the number 9 (No Beatles pun intended ;)

Today's script sends email from the shell, directly out to port 25, and is simple to run. It takes five arguments: your From address, your To address, your domain, your mail server or relay, and your message text (which you can put in any sort of file, as long as it's readable).

Ex:

host # ./mail.sh me@xyz.com you@xyz.com xyz.com localhost fileName


Hope you enjoy it and find some good use for it. If anything, it might make shooting emails out from the shell when you have a good idea much easier (before you have to fuss with Windows and lose your train of thought - Does that happen to everyone else, or just me ;).

Have a great weekend :)


This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License.

#!/bin/bash

#
# mail.sh
#
# 2008 - Mike Golvach - eggi@comcast.net
#
# Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License
#

if [ $# -ne 5 ]
then
echo "Usage: $0 FromAdress ToAdress Domain MailServer MailText"
exit 1
fi

from=$1
to=$2
domain=$3
mailserver=$4
mailtext=$5

if [ ! -f "$mailtext" ]
then
echo "Cannot find your mail text file. Exiting..."
exit 1
fi

exec 9<>/dev/tcp/$mailserver/25
# The server speaks first - read its 220 greeting banner so every
# later read lines up with the response to the command we just sent
read -r temp <&9
echo "$temp"
echo "HELO $domain" >&9
read -r temp <&9
echo "$temp"
echo "Mail From: $from" >&9
read -r temp <&9
echo "$temp"
echo "Rcpt To: $to" >&9
read -r temp <&9
echo "$temp"
echo "Data" >&9
read -r temp <&9
echo "$temp"
cat "$mailtext" >&9
# A line containing only a dot ends the SMTP DATA section
echo "." >&9
read -r temp <&9
echo "$temp"
echo "quit" >&9
read -r temp <&9
echo "$temp"
# Note that exec is required to close the descriptor in the current
# shell; a bare "9>&-" only applies to an empty command and leaves
# the descriptor open
exec 9>&-
exec 9<&-
echo "All Done Sending Email. See above for any errors"
exit 0


, Mike




Thursday, April 17, 2008

More Fun With Bash Networking

Greetings,

You may recall a post we did about a week or so ago on accessing the network directly with bash. Today's post is a bit of a follow up to that introduction, with some tips and clarifications added for your edification and/or enjoyment ;)

1. The basic setup (this will be the only part revisited from our original post on bash networking). In order to get any of this started, you'll need to exec a file descriptor in bash. The file descriptor can, theoretically, be any number, but you should avoid 0, 1 and 2, as these are generally reserved by most Unix or Linux shells for standard input, standard output and standard error I/O. To exploit bash's "virtual" /dev/tcp filesystem (note that you can use /dev/udp as well; it just depends what you're planning on doing - if you wanted to interact with an old NIS server, for instance, you might "need" to use the udp protocol), the setup is simple enough. Assuming we wanted to use file descriptor 9 to hit a google web server on port 80, this is what we'd type on the command line:

host # exec 9<>/dev/tcp/www.google.com/80 <--- We've now connected to www.google.com.
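The same syntax works for UDP; you just swap in /dev/udp and whatever port you're after. As a quick hypothetical example (the loghost name here is made up), you could toss a message at a syslog server on UDP port 514 like this:

host # exec 9<>/dev/udp/loghost/514
host # echo "<14>bash: hello from /dev/udp" >&9 <--- <14> is the syslog priority for facility user, severity info
host # exec 9>&-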

2. The rest of it depends on what you want to do. The most basic operation would be to send information to the new file descriptor, read information from it and close it. Those three things can be done like this:

host # echo -e "GET /search?q=linux+unix+menagerie HTTP/1.0\n\n" >&9 <--- And now, we've requested a search page
host # while read line <&9
do
echo -n $line >&1
done <--- Now, we read the HTML page that gets returned.
host # exec 9<&- <--- This closes our file descriptor for reading
host # exec 9>&- <--- and this closes it for writing

3. Using HTTP 1.1 instead of 1.0. This is actually quite easily done (and you can do it through Telnet to port 80 on any given host, as well. Telnet will allow you to connect to any specified port on a host and interact with it. You just need to know what to type ;). It just requires a bit more typing on our part. Assuming we have the initial file descriptor set up, per step 1, we would do this to initiate an HTTP 1.1 connection and read from it.

host # echo -e "GET /search?q=linux+unix+menagerie HTTP/1.1" >&9 <--- Note the lack of the double carriage return. This isn't absolutely necessary, but if those returns are typed, they obviate the next line.
host # echo -e "Host: www.google.com\nConnection: close\n" >&9 <--- This line tells HTTP/1.1 that we want to close the connection once our query, or page request, completes. HTTP 1.1 will keep the connection open until it times out unless you do this, while HTTP 1.0 will close it immediately after you send one request.

And then you can go ahead and read the response and close the file descriptors in the same manner as above.

4. Implementing host headers. You may find that more and more web servers insist on "host headers" when you send them an HTTP 1.1 request. These are formally required by HTTP 1.1 itself, although servers mainly put them to work for virtual hosting, and it's becoming more and more common for out-of-the-box webservers to run the main server as a virtual host. It's easy to send a request to our file descriptor in this manner, as well. We just need to modify it slightly.

host # echo -e "GET /search?q=linux+unix+menagerie HTTP/1.1\nhost: http://www.google.com\n\n" >&9

5. Sending a POST request rather than the standard GET. This, again, is just a modification of the main method. All you need to tell the server is that you're performing a POST action and send it the POST data (this is basically how the folks in security check to see if they can mess with your cgi forms):

host # echo -e "POST /search?q=linux+unix+menagerie HTTP/1.1\n\nHere Are All My POST Variable=Value Pairs" >&9

6. Ignoring the header information when requesting an HTML page using bash networking through file descriptors. This is kind of "voodoo," but is almost 100% guaranteed to work. The first blank line you encounter when you read your response to the query you send to a web server is (almost) always the end of the header section. The header section includes varying information on web server type, version, age, etc, but isn't worth reading if you just want the web page back. You can skip past it by running this after you make the initial GET request (the parameter expansion inside the loop strips the carriage return that HTTP tacks onto the end of every header line, so the blank-line test actually matches):

host # while read -r response <&9
do
line=${response//$'\r'/}
if [ -z "$line" ]
then
break
fi
done


And then you can move on and continue reading from the file descriptor, and closing it, just like in step 2:

host # while read response <&9
do
echo -n $response >&1
done
exec 9<&-
exec 9>&-
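Putting steps 1, 3 and 6 together, here's a minimal sketch of a reusable function (the function name is my own invention, and it sends HTTP/1.0 plus a Host header so the response comes back in one unchunked piece):

http_get() {
# usage: http_get hostname /path
local host=$1 path=$2 response line
exec 9<>/dev/tcp/$host/80 || return 1
# Send the request - echo's own trailing newline, after the final \n,
# makes the blank line that ends the headers
echo -e "GET $path HTTP/1.0\nHost: $host\n" >&9
# Skip everything up to and including the first blank line (the
# headers), stripping carriage returns so the blank-line test matches
while read -r response <&9
do
line=${response//$'\r'/}
if [ -z "$line" ]
then
break
fi
done
# Print the body, then close the descriptor in both directions
while read -r response <&9
do
echo "$response"
done
exec 9>&-
exec 9<&-
}

host # http_get www.google.com "/search?q=linux+unix+menagerie"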


7. What you get if you settle. Interestingly enough, bash's "virtual" tcp (or udp) filesystem doesn't need any setup at all. There are no actual device files behind /dev/tcp and /dev/udp; bash itself intercepts any redirection to a path of that form and opens the connection for you. In that sense, the virtual file system "always" exists - it's built right into the shell, which automatically opens file descriptors 0, 1 and 2 when it initializes. So, basically, when you login, you've already got a path to the net :) How to use it meaningfully is another thing entirely, but you can do this right out of the gate, just after logging in:

host # echo -n "GET / HTTP/1.0\n\n" >/dev/tcp/www.google.com/80

although it doesn't buy you much all by itself - the connection closes the instant echo finishes, so you never get a chance to read the response - and working without a dedicated descriptor is riskier than it looks.

Exec'ing an additional file descriptor is a better practice when doing this sort of thing, because you don't want to muck with the 3 basic ones. If you make a mistake on file descriptor 9, for example, it's no big deal if it gets disconnected or something else weird happens to it. If file descriptor 0 gets whacked, you won't be able to type to the terminal, and if file descriptors 1 or 2 get destroyed, you won't be able to see some, or all, of the output the shell is sending to your terminal. In any of these cases, you risk losing your connection to your terminal session entirely.

Again, you can use bash's "virtual" networking to connect to any port that already has a service listening on it. Unfortunately, as far as I know, at this time, there is no way to "initiate" (or bind to) a network port using this method and the bash shell (or any Unix or Linux shell, for that matter). If it ever becomes possible, I'll be sure to post about it, because it will open up a whole new can of worms for all of us ;)

Again, if you would like further basic explanation (and a sample script) regarding setting up bash's "virtual" network connections, see our previous post on bash file descriptor networking.

Cheers, and enjoy!

, Mike