Wednesday, October 31, 2007

Figuring out your NIC's speed and duplex on Solaris

Here's another one admins and users have to do all the time. If there's ever an issue with networking, you need to be able to confidently say that your NIC is up at 1Gb full duplex (or whatever your network admin insists it should be).

The way to check this has changed somewhat in Solaris 10, but the old way to check is still available, although it's not totally reliable.

For instance, you can use "ndd" in all flavors of Solaris (at least from 2.6 up) to get information from /dev/hme (or whatever your NIC's device driver is). Generally, you would look at the speed and duplex settings using the following commands (with slight variations depending on the NIC - e.g. 100Mb hme interfaces don't have values for the 1000Mb queries).

The following commands are pretty useful, and non-destructive, for any device driver, even though you'll get errors for all the stuff that isn't supported:

/usr/sbin/ndd -set /dev/ce instance 0
/usr/sbin/ndd -get /dev/ce adv_1000fdx_cap
/usr/sbin/ndd -get /dev/ce adv_1000hdx_cap
/usr/sbin/ndd -get /dev/ce adv_100fdx_cap
/usr/sbin/ndd -get /dev/ce adv_100hdx_cap
/usr/sbin/ndd -get /dev/ce adv_10fdx_cap
/usr/sbin/ndd -get /dev/ce adv_10hdx_cap
/usr/sbin/ndd -get /dev/ce adv_autoneg_cap


Of course, replace the "/dev/ce" with your particular driver. The only downside to this hack-and-slash method is that you may see 1's (indicating that the parameter is set) rather than 0's (indicating that the parameter is not set) in more than one place (like in adv_1000fdx_cap, adv_100hdx_cap and adv_autoneg_cap all at once - remember, these are the capabilities the NIC advertises, not necessarily what was negotiated).

The best way to do it, in my experience, is to use either "netstat -k" (in Solaris up to, and including, version 9) or "kstat -p".

In Solaris 9, assuming the same NIC driver and instance "ce0," you can do the following to find out the status of your NIC:

netstat -k ce|grep 0|egrep 'link_speed|link_dupl'

On Solaris 10, you'd do this:

kstat -p|grep ce0|egrep 'speed|dupl'

Basically, the speed should be 1000 for Gb, 100 for 100Mb, etc. Your duplex is represented numerically as either 0, 1 or 2.

0 = No Duplex (or Down)
1 = Half Duplex
2 = Full Duplex
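If you check this often, the numeric-to-words mapping is worth wrapping in a tiny helper. This is just a sketch (not from the original commands above), and the kstat pipeline in the comment assumes a ce0 interface:

```shell
# Translate the numeric link_duplex value into words:
duplex_name() {
    case $1 in
        0) echo "no duplex (down)" ;;
        1) echo "half duplex" ;;
        2) echo "full duplex" ;;
        *) echo "unknown ($1)" ;;
    esac
}

# Hypothetical Solaris 10 usage, feeding it the kstat value:
# duplex_name `kstat -p | grep ce0 | grep link_duplex | awk '{print $2}'`
duplex_name 2   # full duplex
```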


This output is great to show anyone who doubts your conviction ;)

Take care,

, Mike





Tuesday, October 30, 2007

An Easy To Understand Shell Script To Save You Some Hassle

Hey again,

Just to round off the day, I thought this might be of interest to someone other than me.

Personally, I hate editing a shell script and then writing it to disk, chmod'ing it and then running it. I know it's petty and not that big of a deal, but that's what shell scripting's for, right? Taking care of all those boring things you have to do over and over and over again.

This little script I call "vix", and I call it instead of "vi" or "vim" (your preference; just modify the script - use emacs if you want ;) when I'm going to be writing a new executable script.


Creative Commons License


This work is licensed under a
Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License

#!/bin/ksh

#
# 2007 - Mike Golvach - eggi@comcast.net
#
# Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License
#

/usr/bin/touch "$@"
/usr/bin/chmod 755 "$@"
/usr/bin/vi "$@"


Simple, right? $@ is just your command line arguments (the names of the scripts you're going to edit/write), so you can call:

vix scripta.sh scriptb.sh ...

instead of

vi scripta.sh scriptb.sh ...

and your script will be executable when you're done editing it. Silly and simplistic? Yes. Useful, too, if you're sick and tired of typing chmod 755 (or whatever) after every script you write :)

, Mike





Keeping Grep Out of Your Grep Output

Hello, again,

Most folks who've worked on Linux or Unix before know that when you run a command like:

ps -ef|grep process

You end up, sometimes, with two lines of output. One with the process you're grepping for (if it exists) and one that shows your grep command executing.

Generally, most people I've known do this:

ps -ef|grep process|grep -v grep

While this is perfectly legitimate, you can actually use grep's own regular expressions to achieve the same thing with fewer keystrokes. It looks cooler, too :) Do this instead:

ps -ef|grep "[p]rocess"

The big difference here is the thing we're grepping for. Note that it's in quotes and the first letter (although it could be the second, third or whichever) is in square brackets ([]). This will only produce one line of output: The line with the process you're grepping for.

The reason this works is that grep interprets the characters within the square brackets as a "range" (albeit of one character), so the pattern "[p]rocess" still matches the string "process". And the reason it will never match itself is that, in the "ps -ef" output, your grep command's literal brackets will still be there and, of course, "[p]rocess" doesn't contain the string "process".
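You can see the trick in action without a live process table; here two fake "ps -ef" lines are simulated with printf (the daemon name is made up):

```shell
# One real process line, one line showing the grep command itself:
printf '%s\n' \
    "root   101  1  0 10:00:00 ?      0:01 /usr/sbin/mydaemon" \
    "mike   202  9  0 10:00:05 pts/1  0:00 grep [m]ydaemon" |
grep "[m]ydaemon"
# Only the /usr/sbin/mydaemon line prints; "[m]ydaemon" never matches itself.
```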

Enjoy :)

, Mike





Monday, October 29, 2007

Sorting "Human Readable" Files By Size Numerically

Hey There,

This one used to drive me nuts! Not more than a few revisions ago, Solaris' implementation of df and du only had the "-k" option, so you could look at file sizes, etc., in terms of kilobytes rather than blocks (which could be variably sized).

Then everything started getting fancy with Solaris 9+ (Linux, I think, sported this option first). They began including the "-h" (Human Readable) option so that you could see file sizes, etc., in a very easy to read format (e.g. 1M rather than 1024 kilobytes).

The main problem I had with this was that you could no longer run the command stream "du -sk *|sort -n" and have the files list out in numerical order from smallest to largest (or vice versa). If you ran "du -h|sort -n" you'd almost get the same thing, but 2Mb might be listed right after 1Gb and before 3Gb. The numbers were still in order, but the sizes really weren't being listed out from smallest to largest correctly.

Here's a quick little scriptlet that I put together to avoid this headache. If they haven't fixed it yet, hopefully they will soon (assuming it's an "error," which, technically, it isn't).

Just put the following in a script and run it as "./myscript" (or add command line options, like you would to du, if you want to be more selective):



#!/bin/ksh

#
# 2007 - Mike Golvach - eggi@comcast.net
#
# Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License
#

if [ ! -z "$1" ]
then
ARGS="$@"
else
ARGS="*"
fi

/usr/bin/du -sk $ARGS|/usr/bin/sort -n|/usr/bin/awk '{print $2}'|/usr/bin/xargs -ia /usr/bin/du -sh "a"


Voila! The sizes are listed in Human Readable format and they're also correctly listed in size from smallest to largest :)
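As an aside: if you're on Linux with a newer GNU coreutils, sort eventually grew a matching "-h" (human-numeric) flag, which collapses the whole scriptlet into a one-liner. This assumes GNU sort; Solaris' sort doesn't have it:

```shell
# GNU sort's -h understands the K/M/G suffixes that du -h emits
# (directories here are just examples):
du -sh /etc /usr/bin /usr/lib 2>/dev/null | sort -h
```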

, Mike





Sunday, October 28, 2007

How to Really Represent Control Characters

This short one just in,

A lot of times, you might need to specify a control character (like ctl-c, for instance) in a file or on the command line.

A common instance of this is when you need to set your backspace key as the erase key using stty in most shells.

Some folks, rightfully, assume that you can put that in your .profile as a literal ^H - that's the way it looks when you hit the backspace key.

Unfortunately, the shell reads this as caret (^) H, and not the actual backspace key. The trick here is to print the actual control key sequence. This can be done very easily by doing the following:

1. Type ctl-v
2. Hit your backspace key

And there's your ^H - except it's real this time :) Your line, in vi, will look like: stty erase ^H

One easy way to tell if you've accurately captured a control key sequence in a file is to run "cat" on the file. Doing this, if you've actually captured the control key sequence, won't show you anything in the place of the ^H. If you still want to see it, just to be sure, use "cat -v" instead.
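If you want proof at the byte level, od works too. The sketch below uses printf's \b escape, which emits the same 0x08 byte the ctl-v trick inserts:

```shell
# od -c renders a real control character as \b,
# not as the two separate characters "^" and "H":
printf 'stty erase \b\n' | od -c
```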

Best wishes,

, Mike





Floating Point Arithmetic and Percentage Comparison in Ksh!

Hello again,

Today, we've got a somewhat heavy script. If you're an administrator, or just a concerned user, you've probably run into a situation where you needed to be notified if a file got too large. This is a fairly common concern, since you're going to have to do lots of extra work if you don't get some fair warning before the file fills up the partition it resides on!

Below is a little something I whipped up to keep tabs on pretty much any file. It involves several concepts, which we'll go over in the future. These include mailing out to users from within the script, watching a file's size and comparing it with an "expected" size, and using the shell's limited integer arithmetic to evaluate size and comparative percentage with simulated floating-point decimals.

Below: The script. Over the next few days or weeks, I'll revisit some of the finer points involved here, as they require way too much space when dealt with all at once!



#!/bin/ksh

#
# 2007 - Mike Golvach - eggi@comcast.net
#
# Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License
#

concerned_users="user1@xyz.com,user2@xyz.com"
db_file=/path/to/db.file

if [ ! -f "$db_file" ]
then
(echo "Subject: Database File Missing!";echo;echo;echo "The file $db_file does not seem to exist!?";echo "From: root@xyz.com";for x in $concerned_users;do echo "To: $x";done;echo "Please check this out a.s.a.p. unless it is a known issue!")|/usr/lib/sendmail -t
exit 1
fi

db_dusk=`/usr/bin/du -sk $db_file`
db_size=`echo $db_dusk|/usr/bin/awk '{print $1}'`
db_limit=2097152.20
threshold=`echo 4 k ${db_size} $db_limit / p |/usr/bin/dc|/usr/bin/sed 's/\.//'`
threshold_pct=`echo $threshold|/usr/bin/sed 's/^\(.*\)\(..\)$/\1\.\2/'`

# Assuming 2Gb Crash Point - 2097152.2 Kb
# -- Start Warning At 85%

if [ $db_size -gt 1782579 ]
then
(echo "Subject: Database Approaching Maximum Capacity";echo "From: root@xyz.com";for x in $concerned_users;do echo "To: $x";done;echo "";echo "";echo "Database $db_file is currently at";echo "$db_size Kb";echo "This exceeds the 2Gb 85% threshold at ${threshold_pct}%.";echo "Please reduce the size immediately, if possible")|/usr/lib/sendmail -t
fi


Hopefully, this script will be helpful to you. I look forward to digging deeper into the specifics in future posts :)

, Mike





Saturday, October 27, 2007

Getting SSH to Run in a Shell Loop

This is something I hear about a lot, because it's still the default behavior for SSH.

A lot of times, you'll want to run a command remotely and, to do so expediently, you'll want to run that command in a shell loop. This works perfectly well for rsh (we're assuming that, for rsh, you've set up your .rhosts files and, for SSH, you have passwordless key exchange set up, so these loops don't require user interaction):

for x in hosta hostb hostc
do
rsh $x "hostname" <----------- Or whatever command you want to run
done

This produces: hosta hostb hostc

That will run the command on every machine in your "for x" line and spit out the results. Interestingly enough, if you do this with SSH, only the first host will get processed, and the script will exit, producing only this: hosta

The issue here is with the way SSH handles its input. Unlike rsh, ssh reads from standard input, so the first invocation can swallow the input the rest of the script was expecting, and the loop terminates after the first host.

The solution is simple enough; just set up your SSH command so that its standard input is redirected from the null device in each invocation of the loop, like so (the "-n" option is a standard, but may vary depending on what release of SSH you're using):

for x in hosta hostb hostc
do
ssh -n $x "hostname"
done


Problem solved :)
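If you want to see roughly what "-n" protects you against without three spare hosts, here's a local simulation. It's only a sketch: "cat > /dev/null" plays the part of a stdin-hungry ssh, and the host list comes from a temp file instead of the "for" line:

```shell
printf 'hosta\nhostb\nhostc\n' > /tmp/hostlist.$$

# Greedy version: "cat" swallows the rest of the host list,
# so the body only runs once - the same early exit you see with ssh.
greedy=0
while read x
do
    cat > /dev/null
    greedy=`expr $greedy + 1`
done < /tmp/hostlist.$$

# Protected version: stdin comes from /dev/null (what ssh -n does),
# so all three hosts get processed.
polite=0
while read x
do
    cat < /dev/null > /dev/null
    polite=`expr $polite + 1`
done < /tmp/hostlist.$$

echo "greedy=$greedy polite=$polite"   # greedy=1 polite=3
rm -f /tmp/hostlist.$$
```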

I usually add a little extra to the end of the SSH line: ssh -n $x "hostname" 2>/dev/null

The "2>/dev/null" eliminates STDERR output so you don't have to see all the connect and disconnect information. That information is useful for simple debugging but, once your script is working okay, it's just more clutter.

Hopefully this will help you automate processes more securely and more easily in the future!

, Mike





Friday, October 26, 2007

Using Perl To Figure Out How Old Your Files Really Are!

Here's a little something I cooked up out of job-ennui ;) It's a complicated looking, but pretty basic, script to let you know how old your files are at the instant you run it.

The outer part of the script is left out - basically just processing file names, which you can do however you wish, and dumping them into an array (@files).

This is a pretty good example of what you can do with Perl's built-in "stat" function (you can get a whole lot more out of it, as evidenced by the left hand side of the assignment on line 5 of this script).

This script would actually look a whole lot nicer if we also used Perl's built in "localtime" function (would save us all the hassle of doing division), but what fun would that be?

This is safe to try at work, unless you're supposed to be doing something else and your boss is a micro-manager ;)



$now = time;
$|=1;
foreach $file (@files) {
chomp($file);
($dev,$ino,$mode,$nlink,$uid,$gid,$rdev,$size,$atime,$mtime,$ctime,$blksize,$blocks)=stat($file);
$fileage_s = $now - $mtime;
print "$file :";
if ( $fileage_s < 60 ) {
printf (" %02d second(s) old", $fileage_s);
} elsif ( $fileage_s < 3600 ) {
$fileage_m = sprintf("%d", $fileage_s/60);
$fileage_s = $fileage_s-($fileage_m*60);
printf (" %02d minute(s) %02d second(s) old", $fileage_m, $fileage_s);
} elsif ( $fileage_s < 86400 ) {
$fileage_h = sprintf("%d", $fileage_s/3600);
$fileage_m = sprintf("%d", ($fileage_s-($fileage_h*3600))/60);
$fileage_s = $fileage_s-(($fileage_m*60)+($fileage_h*3600));
printf (" %02d hour(s) %02d minute(s) %02d second(s) old", $fileage_h, $fileage_m, $fileage_s);
} elsif ( $fileage_s >= 86400 ) {
$fileage_d = sprintf("%d", $fileage_s/86400);
$fileage_h = sprintf("%d", ($fileage_s-($fileage_d*86400))/3600);
$fileage_m = sprintf("%d", ($fileage_s-(($fileage_d*86400)+($fileage_h*3600)))/60);
$fileage_s = $fileage_s-(($fileage_m*60)+($fileage_h*3600)+($fileage_d*86400));
printf (" %d day(s) %02d hour(s) %02d minute(s) %02d second(s) old", $fileage_d, $fileage_h, $fileage_m, $fileage_s);
}
print "\n";
}
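For comparison, the same divide-and-subtract breakdown can be sketched in shell with expr (the 93784-second age is just an example value):

```shell
age=93784   # example: seconds since the file's mtime
days=`expr $age / 86400`
rem=`expr $age % 86400`
hours=`expr $rem / 3600`
rem=`expr $rem % 3600`
mins=`expr $rem / 60`
secs=`expr $rem % 60`
echo "$days day(s) $hours hour(s) $mins minute(s) $secs second(s) old"
# 1 day(s) 2 hour(s) 3 minute(s) 4 second(s) old
```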


Until tomorrow,

, Mike





Thursday, October 25, 2007

Cherry Picking - Listing Out File Date/Size Information with Ksh

Here we go,

This is a quick way to list out file date/size info from a list of files.

For instance, if you have the following in a file (your list of files you want to watch - for our purposes, they'll be in the same directory as the script)

100 HE01 file1.txt check_it
200 LTXS file2.txt out


The first two fields don't matter and neither does the last. We just put them in there so that this example would be able to illustrate picking out fields that we care about (just the filenames - file1.txt, etc)

This script right here will read that file (called input.txt) and print out the following information:

file1.txt 3 Oct_25_22:27
file2.txt 12 Oct_25_22:27


And here it is:



#!/bin/ksh

#
# 2007 - Mike Golvach - eggi@comcast.net
#
# Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License
#

cat input.txt|while read v w x y
do
if [ -e "$x" ]
then
num_lines=`wc -l $x|sed -e 's/ *//' -e 's/ .*$//'`
time_stamp=`ls -l $x|awk '{print $6 "_" $7 "_" $8}'`
echo "$x $num_lines $time_stamp"
fi
done


Simple enough. Notice on the first line that we use "while read" and then list out 4 variables. Basically, "while read" variables are consumptive. That is, if we just said "while read x", it would assign the entire line of the file to the variable x. Here, we only wanted the third field on the line.

The variables v and w are important because they set the stage for letting x become the file name in the script. y soaks up the whole rest of the line, ensuring that x is just the 3rd field (the file name that we want to work on).
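A quick illustration of that consumptive behavior, using the first line from input.txt plus some extra trailing fields to show y soaking up the remainder:

```shell
echo "100 HE01 file1.txt check_it trailing junk" | while read v w x y
do
    echo "x=[$x] y=[$y]"
done
# x=[file1.txt] y=[check_it trailing junk]
```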

We'll save the sed and awk for later. Hopefully this little bit of script will prove useful to you at some point in your trials :)

, Mike





Wednesday, October 24, 2007

Simple But Useful: Finding the Highest Number

Hey There,

This is something you do every day, probably. Compare things. And then, based on whatever your objective is, you ferret out the one object that meets your requirements.

This is a really simple script, assuming input on the command line of any number of numbers, separated by spaces. When it's done, it will print out the largest number of the bunch.

Not super exciting, but fundamental and applicable to lots of things, depending on what you want to compare. Later on, I'll post the equivalent alpha version of this numeric scriptlet.



#!/bin/ksh

#
# 2007 - Mike Golvach - eggi@comcast.net
#
# Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License
#

highest_number=0
for x in "$@"
do
if [ $x -gt $highest_number ]
then
highest_number=$x
fi
done
print "Highest Number is $highest_number"


Yes, today is my day of rest ;)

,Mike





Tuesday, October 23, 2007

Only Printing Out Matches With Perl Regular Expressions

Hey there,

This will deal with something every shell/perl scripter has to deal with from time to time. How do I extract "whatever" from this file and just output my match?

We'll assume you have a logfile that consists of lines like this:

<td class="prompt" height="15" width="30%"><span><td class="prompt" height="15" width="30%">Text-Text_Text</td>
<td class="tablecontent" width="70%">09/23/08 </td></span>

and you just want to get the date.

It's relatively easy to do (although this case is far from challenging since the date format is so different than the rest of the lines).

First you need to figure out what you're not going to print out. So, after you read the file into an array and are iterating through it (or however you prefer to process your files with Perl), you'll first want to put this line in your loop.

This will make sure that you don't bother printing out any lines that don't contain your match:

next if $match !~ /\d\d\/\d\d\/\d\d/;

This says: don't do anything and continue to the next line if the pattern - \d (any decimal digit), twice, then \/ (a forward slash - backslashed, since it's a special character), then \d\d\/\d\d - isn't on the line.

Then you'll want to print out only the date for all the lines that contain it, with this following line:

$match =~ s/^.*?(\d\d\/\d\d\/\d\d).*$/$1/; print "$match\n";

Note that the $1 on the right hand side of the expression represents everything we've captured in between parentheses on the left hand side of the expression. We've basically substituted the entire line with the date :) And just so we see some output, we print the contents of $match (the date).
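If you'd rather stay in the shell, the same extraction works with sed on the sample line from above (note the ! delimiters, which save us from backslashing every forward slash in the date):

```shell
line='<td class="tablecontent" width="70%">09/23/08 </td></span>'
echo "$line" | sed -n 's!^.*\([0-9][0-9]/[0-9][0-9]/[0-9][0-9]\).*$!\1!p'
# 09/23/08
```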

Simple enough :)

, Mike





What A Night - Changing Solaris Machine Names

Hey there,

Some more possibly boring, but definitely useful, trivia about working on Solaris. Just got off an emergency shift re-purposing a whole bunch of boxes.

It's actually really simple work to change a Solaris host's name. You just need to make sure you hit all the bases.

The files you have to edit are the same for Solaris 8/9 and 10, except for one.

All you need to edit, and run, to change your hostname is:

Edit: /etc/hosts - make your main hostname change here. If possible include an entry for your defaultrouter!
/etc/hostname.NIC (e.g. hostname.ce0) - may be more than one of these depending.
/etc/nodename - the main hostname of your server
/etc/net/*/hosts (Solaris 8/9) - just change the hostname in the hosts files.
/etc/inet/ipnodes (Solaris 10) - change the hostname here
/etc/defaultrouter - you can enter an IP here, but I prefer to enter a hostname and list that in /etc/hosts (above)

Then execute:
hostname NewHostname (as root)
uname -S NewHostname (as root)

And reboot if you like. There's always sys-unconfig, but running that takes out every modification you've ever made. It'll even zero out your resolv.conf file (why???)

Anyway - Enjoy your rest - I have to be back at work in a few minutes ;)

, Mike





Sunday, October 21, 2007

Crontab And Your Environment

Hey There,

This is a question that gets asked by everyone, at some point or another. My script works fine when I run it at the command line, but it never executes correctly when I run it in cron. Why not?

This usually boils down to a few reasons:

1. The cron entry is vague. Cron (at least in the old days) used to be very strict. That is, it ran in a totally different environment than the environment the user had while logged into the shell. It's always good practice to type

* * * * * /usr/bin/ls
rather than
* * * * * ls

2. That same rule applies for the inside of your scripts. Absolute paths to executables should be used as opposed to just the names of the executables (excluding shell built-ins, of course). So, again:

/bin/grep
rather than
grep

3. The easiest way you can make sure that you have the same environment in cron as you have when running any script as the regular user is to "source" the environment into the script by adding lines like:

. /etc/profile
. /home/user/.profile


to the top of your script (below the #! line). The literal dot, space, filename pattern tells your shell to read in all the variables set in that named file, so you can run your cron job with the same environment as when you test it manually, which might avoid issues caused by points 1 and 2 above.

4. Finally, some newer versions of cron have configuration files (like /etc/default/cron or /etc/sysconfig/cron) where you can add additional environment variables that you always want set when you run your cron jobs. You can also do this in some newer implementations of cron by adding environment variable lines as the first lines of your crontab. Like:

PATH=/usr/bin:/usr/sbin
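Putting points 1 through 4 together, a crontab for one of those newer crons might look like this (the paths, MAILTO address and script name are all made up for the example):

```shell
# Environment lines first (supported by newer crons, e.g. Vixie cron):
SHELL=/bin/sh
PATH=/usr/bin:/usr/sbin:/usr/local/bin
MAILTO=admin@xyz.com

# Then the jobs themselves, with full paths to everything:
0 2 * * * /usr/local/bin/nightly_report.sh > /tmp/nightly_report.log 2>&1
```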

If all else fails, points 1 and 2 should almost always fix your problem :)

Have a great night :)

, Mike





Saturday, October 20, 2007

Using VCS Failover to Upgrade Group Components!

Hey There,

This, like many of my posts, may be old hat to some folks, but here's a little something for those of you who might find it of interest.

We'll assume we have two systems (sysa and sysb) each with the ability to run one or more of 4 service groups (grpa, grpb, grpc and grpd) and their related services. For this example's sake, sysa is running grpa and grpb and sysb is running grpc and grpd.

So now, both machines (sysa and sysb) need to have a patch installed, or something like that, and you can't afford for your product to go down. With Veritas Cluster HA failover (VCS), it's a pretty simple thing to do (now, we're assuming everything goes well, of course ;)

Since we're going to upgrade sysa first, we need to move all the services over to sysb. You have, basically, two options here (at minimum) since you want to move everything off. Either (from sysa):

hagrp -switch grpa -to sysb
hagrp -switch grpb -to sysb

or
hagrp -offline grpa -sys sysa
hagrp -offline grpb -sys sysa


The second option (using "-offline") means that you'll need to online the service groups on sysb yourself, since they won't fail over automatically:

hagrp -online grpa -sys sysb
hagrp -online grpb -sys sysb


To make sure it sticks during your work (i.e. in case Veritas gets smart, decides that sysa is okay and tries to move the service groups back!), freeze the groups.

From sysa:
hagrp -freeze grpa
hagrp -freeze grpb


You can use the "-persistent" flag on the freeze if you need it to stay in effect after a reboot (e.g. hagrp -freeze grpa -persistent)

You basically do the same thing in reverse to get stuff back to normal, and then switch all the service groups to sysa so you can upgrade sysb. There are other options to make things go faster, but it all gets a little complicated and long-winded at some point ;)
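Spelled out, the reverse trip might look like this. It's only a sketch using this example's group and system names, and it obviously only runs on a live VCS cluster, so treat it as a transcript rather than a tested script:

```shell
# Let VCS manage the groups again, then pull them home to sysa
# so sysb can be patched:
hagrp -unfreeze grpa
hagrp -unfreeze grpb
hagrp -switch grpa -to sysa
hagrp -switch grpb -to sysa
```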

Just a few quick tips. Someone could write a book about the little things in VCS. I think a few thousand people already have :)

, Mike





Friday, October 19, 2007

Skipping Blank Lines in Input Files Using Perl

Hey There,

This is all based on the presumption that you've read in a file, examined the comma-delimited fields of the lines and put them back together (you've performed whatever actions you needed to on the existing lines and now want to move or print the output, but don't want to print any lines that don't have any content left after your parsing).

After you've executed the line of code that puts your line back together (adds the values back into a comma-delimited line), you can do a chomp on the scalar value (the $ string).

So, rather than just adding an eol character (\n, \r, etc), make it conditional to remove those blank lines (the below example might not be exactly what you're working with, but the spirit's the same).

I've used SPACE to denote an actual space key type and TAB to denote an actual tab key type. You can use \s and \t for this also, but it's my habit. Had to use the SPACE and TAB words here because the space and tab key strikes won't show in this post)

Run this in a loop on your input (inside your while loop):

chomp($line);
if ( $line !~ /^SPACE*TAB*$/ ) {
print "$line\n";
}

And you've got your output, parsed per your requirements and no empty lines mucking it up :)
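The same filter is available from the shell if you're post-processing the output anyway; the [[:space:]] class covers both the SPACE and TAB cases:

```shell
# The middle line is spaces and a tab only, so it gets dropped:
printf 'line one\n   \t\nline two\n' | grep -v '^[[:space:]]*$'
# line one
# line two
```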

, Mike





Thursday, October 18, 2007

Dealing with Getopts OPTIND variable and the dreaded double-hyphen

Hey There,

As some scripters out there know, a really easy way to parse options to your shell script is to use the getopts function, especially if you have a lot of options to process.

OPTIND - the automatic variable set to represent the index of options (sometimes shifted and subtracted by 1) is generally used to determine whether you've run out of options to process once getopts is finished.

Of course, ksh has the double-hyphen (--) which, if you enter it as an argument to a script, means "stop processing arguments" and still populates OPTIND, making a check on whether OPTIND is empty a non-bullet-proof way of telling whether someone has passed no arguments to your script.

Fortunately, it's pretty easy to get around by just setting a counter variable (this is necessary because getopts never sees the -- and, therefore, can't process it; it's handled by the shell first... bummer).

Here's an example using opts_selected as the name of my tracking variable (usage, in this example is a reference to function in the script that just prints out a usage message and exits):

opts_selected=0

while getopts c:hkmo: option
do
case $option in
c)
opts_selected=1
;;
h)
usage
;;
k)
opts_selected=1
;;
m)
opts_selected=1
;;
o)
opts_selected=1
;;
\?)
usage
;;
*)
usage
;;
esac
done
if [ $opts_selected -eq 0 ]
then
WhateverCodeYouWantToPutHere
fi
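Wrapped in a function for demonstration, you can watch the counter do its job. The option letters are from the example above; passing the arguments straight to getopts is just a convenience for testing outside a real script:

```shell
parse() {
    OPTIND=1        # reset between calls, since getopts keeps state here
    opts_selected=0
    while getopts c:hkmo: option "$@"
    do
        case $option in
            c|k|m|o) opts_selected=1 ;;
            *) : ;;  # h and ? would call usage in the real script
        esac
    done
    echo $opts_selected
}

parse -m       # prints 1 - a real option was seen
parse --       # prints 0 - the double-hyphen just ends option processing
parse -c foo   # prints 1
```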


Now you can be sure your script is definitely not being passed any arguments, including the double-hyphen, before you proceed to write the remainder!

, Mike





Wednesday, October 17, 2007

Some Simple Perl Concatenation Advice

Hey There,

I tripped across this one on the forums today (a good place to get ideas for content, for me, is going out and seeing who I can help or who can help me :)

Thought it might be interesting, since I've been writing perl and shell script for so long that I hadn't really considered this since I can't remember when. Hopefully, it's still interesting to others, as well :)

Q. I need to concatenate three values in the date format like,
$fullDate = $ARGV[3] . "/" . $ARGV[4] . "/" . $ARGV[5];
But while concatenating, it is performing division on values. I have tried '.',':','-' . But its not working.

A. You can do the same thing without concatenating in the "conventional" sense, like this:
$fullDate="${ARGV[3]}/${ARGV[4]}/${ARGV[5]}";
The {} around the variable names may not even be necessary. It's a habit I'm in because it makes sure that Perl knows $ARGV[3] is my variable and the following "/" is a literal character, so it can't possibly make the mistake of thinking $ARGV[3]/ is the name of my variable.

Hope this helps :)

, Mike





Tuesday, October 16, 2007

Hosting Tutorial Articles At A New Location

Well,

The place I was housing my articles at seems to have a strange policy of disabling your account if you don't use it enough. They don't take away your space or delete your stuff; they just make it so that you, and everyone else, can't get to it unless you ask for it back.

Since I get free webspace from my internet provider, I decided to move those articles here instead.

Nothing spectacular really ;) I will be rewriting those first two articles to chop them up into at least 4, as I received a number of comments that they were too long.

No problem. I'm here to please :)

I've updated the old posts to point to the correct locations now, but here they are again, for your convenience:

Linux Shell for Web Hosting - Getting Started - Ezine Version
Linux Shell for Web Hosting - Getting Started - GoArticles Version
Linux Shell for Web Hosting - Exploring The Shell - Ezine Version
Linux Shell for Web Hosting - Exploring The Shell - GoArticles Version

Happy Reading!

, Mike





Getting Google Search Results On Your Site Without A Google SearchBar

How do?

This is actually fairly simple. All you need to do is create an html form with a text box where users can enter their search terms.

Assuming you've captured the search term(s) in your form, you just need to add them to a static redirect (show that however you want) to the google site.

The only issue that you'd really have to deal with is using "join" (I'm assuming Perl, but adjust to your form-processing language of preference) to take your array of terms, if more than one was entered into your form, and turn them into one string with the terms connected by plus (+) symbols.

So your form entry of "mustangs and other cars" would get massaged into:

http://www.google.com/search?hl=en&q=mustangs+and+other+cars

"http://www.google.com/search?hl=en&q=" would be static, and you'd just need to append your search term string to complete the URL.
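The shell's tr can stand in for Perl's join if your terms arrive as one space-separated string (the search-term value is the post's own example):

```shell
terms="mustangs and other cars"
q=`echo "$terms" | tr ' ' '+'`
echo "http://www.google.com/search?hl=en&q=$q"
# http://www.google.com/search?hl=en&q=mustangs+and+other+cars
```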

Simple enough - format it to your liking. As far as I know, there's nothing illegal or unethical about it. You just lose out on Adsense revenue, but you save a lot of hassle and don't have to keep your site in line with Google's standards.

Enjoy :)

, Mike





Monday, October 15, 2007

Capturing Errors With Apache's Internal Handler

Howdy, Late nighters :)

Here's a tip if you're looking to capture any old error message with apache and want to do something creative with it.

The default server action is usually good enough but, sometimes, you might want to do something extra (like redirect to a script that does a little something extra on your backend, redirect to internal or external links that will show up in reports, or just make your error messages fun... is that possible?)

The "ErrorDocument" keyword is used for exactly this purpose. Here are a few examples below (all scripts, sites, paths, etc., are purely made up and probably don't reflect your server configuration).

The syntax is: ErrorDocument ErrorNumber Action. In the examples below, the first action is plain text, the next two are internal redirects, and the last is an external redirect. Note that ErrorDocument only applies to error codes (the 4xx and 5xx ranges):
ErrorDocument 404 "You got an error 404. Awesome!"
ErrorDocument 404 /public_html/errors/error_404.html
ErrorDocument 404 /cgi-bin/someErrorHandlingProgram.pl
ErrorDocument 404 http://www.errrors.com/error404.html


If there's interest, there might be something to doing some real custom whiz-work in a perl/cgi script to really "handle" errors :P

, Mike





Sunday, October 14, 2007

Using Apache's mod_rewrite for multiple URL character substitutions with only one client redirect

If you need to replace a number of characters, and don't know how many times they're going to show up in a URL, here's how to make sure they all get replaced using Apache's mod_rewrite.

The key here isn't really to do the substitution, but to do it any number of times (undetermined) and only redirect the client once!

RewriteRule ^/(.*)A(.*)$ /$1a$2 [N,E=redir:yes]
RewriteRule ^/(.*)B(.*)$ /$1b$2 [N,E=redir:yes]
RewriteCond %{ENV:redir} ^yes$
RewriteRule ^/(.*)$ /$1 [R,L]


The substitution RewriteRule lines can be as many as you like. The important parts are the flags: N instructs mod_rewrite to rewrite internally (re-running the rule set without notifying the user's browser), and E sets the environment variable redir to yes so we know a substitution happened.

Then the RewriteCond line, together with the final RewriteRule, sends a single redirect to the user's browser once the URL is completely rewritten. (A RewriteCond by itself does nothing; it only qualifies the RewriteRule that follows it.)
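To see what the repeated internal rewrites actually do, here's a little shell simulation of the rules above: each pass downcases one A and one B, and the loop keeps going until nothing is left to rewrite, mirroring how the N flag re-runs the ruleset (the URL is made up):

```shell
# Simulate the repeated internal rewrites for a request like /ABBA.
# Each pass through the loop mimics one run of the two RewriteRule lines:
# sed replaces the first A and the first B it finds.
url="/ABBA"
while printf '%s' "$url" | grep -q '[AB]'; do
    url=$(printf '%s' "$url" | sed 's/A/a/; s/B/b/')
done
echo "$url"    # prints /abba - the one external redirect would point here
```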

Enjoy :)

, Mike





Using Cron To Send Out Raw Email

Hey there,

It's been a while since work hasn't gotten in the way of my blog-baby here ;) Unfortunately, I've been on-call and it's taken every moment of my time. I'll be handing off the pager here in just about an hour. Yay!!

I got a question about how to use "cron" to send out emails without calling a separate script, just using a Linux/Unix built-in mailer. It's actually very simple to do; it just looks complicated.

For instance, if you wanted to send out an email every 30 minutes on the hour and half hour, with a Subject of "Your Regular Email" and a body with just a few lines in it, like:

"Howdy,
Hope you like this email!

, Mike"

You could just add this entry to your crontab:

0,30 * * * * (echo "Subject: Your Regular Email";echo;echo "Howdy,";echo " Hope you like this email!";echo;echo " , Mike")|/usr/lib/sendmail you@email.com

It's a simple concept: you create the email on the left side of the pipe (|) and use sendmail on the right side to mail it out to the address (you@email.com).

The most important thing to remember is to write out the entire contents of your email in a subshell (in parentheses). If you wrote all those echo statements outside of a subshell, only the last echo would get passed to Sendmail, and you'd get an email with no subject and a body that said:

" , Mike"

Generally, you only send mail this way for really simple stuff, but it can be as complicated as you like :)
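If you want to preview the message before putting it in cron, you can run the same subshell and capture its output instead of piping it to sendmail (the message here is just the example above):

```shell
# Build the message exactly like the crontab entry does, but just print it.
# The subshell's parentheses group all the echoes into one output stream.
msg=$( (echo "Subject: Your Regular Email"; echo; echo "Howdy,"; \
        echo " Hope you like this email!"; echo; echo " , Mike") )
printf '%s\n' "$msg"
```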

, Mike






Thursday, October 4, 2007

Our First Question - Moving Content With "tar" Without Hogging Space

Evening,

Finally got our first email response to this blog. Not bad, since it's still pretty much in its early stages. I'll paraphrase the question below.

Q: I've got to move some huge .dbf files off of a 400GB partition (larger than my other free slices) to make more space for the DBAs. The only problem is I've got 70% utilization on the partition. Tarring all of them at once doubles my disk-space usage (which is impossible), and doing them one by one might not even be possible. How can I archive and move content under these sorts of conditions?

A: Since you're used to using tar already, here's a trick a lot of folks don't know or have just never used. It'll work so you don't have to take up any extra space on your partition, or any other.
What you'll want to do is use the "-" argument to tar; that lets you move the files to another partition or another machine without taking up any additional space.
To move files from one location to another on the same machine, do the following. (I'm assuming you're in the directory where the files are located; just switch the absolute path and the current-directory path (.) if you prefer to do it the other way. Also, of course, you could use * to tar up everything.)

From partition to partition on a single host (if you like some output you can add v to the arguments for tar on either side of the pipe - if you do it on both you'll get a lot of output you probably don't want - up to you):

tar cpf - filea fileb filec|(cd /that/other/partition;tar xpf -)

From a partition on the local host to a remote host (again, you can reverse the path - in this case host:/directory/name - if you want or need to do it the other way):

tar cpf - filea fileb filec|ssh remotehost "cd /remote/partition;tar xpf -"

Basically, you're creating your tar archive on standard output and passing it through a pipe (|), which turns the first command's standard output into the standard input of the next command. In this case, you're untarring from standard input. Now, hopefully, your files are somewhere more manageable and you didn't need any extra space to do the move.

Note that (on localhost to localhost) the parentheses around the second command are important, since they run both commands in a subshell. If you removed the parentheses, your standard output would be fed to the "cd" command and tar would have nothing to extract. The same principle applies to using double quotes with ssh (or rsh).
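If you want to convince yourself before pointing this at real .dbf files, here's a harmless dry run using throwaway scratch directories in place of the two partitions:

```shell
# Make a scratch "source partition" and "destination partition",
# then do the same tar-to-tar pipe on a test file.
work=$(mktemp -d)
mkdir "$work/src" "$work/dest"
echo "some data" > "$work/src/filea"

(cd "$work/src" && tar cpf - filea) | (cd "$work/dest" && tar xpf -)

ls -l "$work/dest/filea"    # the copy, with permissions preserved by -p
```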

Hope this helped :)

, Mike





Online Articles

In order to keep this blog searchable by family filters, I've just gotta say oh golly, oh gosh, it's hard to get an article on EzineArticles.com :)

If anyone out there is like me, when you write a page for the web you include certain tags and special characters, since HTML really has no concept of the "space" or the "hard return," etc.

Maybe the reason I have such a hard time dealing with online submissions these days is because things are so much easier than they used to be. Anyway, apparently, I'm turning into a fossil, putting in my own tags and such...

In any event, there's no link yet to my second article on Linux shell basics for the hosted website administrator/user, but I do have them both stored locally and you can get them at the links below. As I noted, the goarticles.com version is usually a little shorter than the ezinearticles.com one due to character-count restrictions. Generally that would be all right, except that sometimes I have to hack out whole tips just to squeeze in under the limit.

Hope you're doing well :) As always, if you have any questions or comments, I'd love to hear from you!

, Mike

Basic Linux for Ease of Use and Management of a Hosted Website: Exploring the Shell - GoArticles.com version

Basic Linux for Ease of Use and Management of a Hosted Website: Exploring the Shell - EzineArticles.com version (If they cut me a break for finally sending in my submission in totally plain text)





Tuesday, October 2, 2007

What a day! Second Linux Article Live!

I'm getting to this a little later than usual today. Had a Sun 15k server go limp on me at work and it took a little while to get that bad boy back up and running. Of course, it didn't help that we had to replace a few system boards and wait for the parts to be delivered.

Other than that, it's been fairly uneventful.

By the way, the next segment of my series on beginning Linux showed up on goarticles.com today. I've included the link below and hope you'll read it and enjoy it.

Exploring The Shell - GoArticles version

As soon as it shows up on ezinearticles.com, I'll post that link, as well as the 2 links to the articles hosted on my own server.

Have a great night (by which I mean I hope you, unlike me, are busy sleeping at 3am) :)

Mike

BTW, if you're new to the blog, feel free to email me from this page, or post comments to my posts. In the future, I'm hoping this will become an exchange of ideas; a place where you can get better, and more practical, answers than might turn up on any search engine query.