Saturday, January 31, 2009

Linux and Unix Open Source Small Business Humor

Howdy Once again and we hope you're having a wonderful weekend.

To start it off, we're going back to the well for some of the simple stuff. Although this blog has built a sterling reputation as an online sheltered-community where Linux and Unix aficionados of all skill levels can come and read hundreds of words on a single web page (and not interact with each other at all), we haven't lost touch with our inner children. We're pretty sure they're still at summer camp ;)

And, hey, as much as we hate to stick it to MS (since, let's face it, it's been done to death), every once in a while the well comes up wet (??) That didn't sound right ...or read well ...or ...something. Basically, we found some funny MS bashing humor over at OpenSource Solutions For Small Business in a post they did on a few very good reasons you should get out there RIGHT NOW and buy MS Office 2007 ;) Stop by there, check the site out and enjoy it as much as possible. They actually provide a service that one-ups MS, but that's why the humor works.

Happy Saturday, again. And, if you've got your calendars handy, be sure to make note that "Don't Forget To Not Be Unkind To Animals Week" (or some other equally confusing memorial date range ;) is just around the corner!

Cheers,



Top 10 reasons why you should buy Office 2007


Monday, June 16th, 2008


  1. You want to make sure nobody will be able to read your documents in 10 years

  2. You want to help your buddy who works for Microsoft have enough income to buy a private island in the Caribbean, because maybe he would invite you to come for a weekend

  3. You feel sorry for the PC on the Mac commercials

  4. Your buddy is buying it for you from the Microsoft company store, so you’re actually saving hundreds of dollars! You can’t get those types of deals on free software.

  5. You hope that the extra emails it takes between you and your customers, partners, and vendors to get formats that they can open will improve your relationship with them

  6. Having the newest software from Microsoft makes you cool

  7. You want to extend Microsoft’s monopoly on the desktop, it’s just easier that way

  8. You have a big technology budget, and can’t think of any better way to spend those dollars

  9. You already spent the money on it, may as well force others to pay their Microsoft tax, too

  10. You’re a big fan of Survivor, and like being dropped into an unfamiliar environment and having to figure out all over again how to do the things you need to survive

  11. (Bonus!) You don’t know a better solution exists





, Mike







Please note that this blog accepts comments via email only. See our Mission And Policy Statement for further details.

Friday, January 30, 2009

Unix and Linux Humor - Inspirational Posters

Hey there and a fine Friday to you!

After this week, we probably all need a laugh (I'm being presumptuous in assuming your week was at least 51% miserable, too ;)

This quick and easy-to-digest post is only peripherally related to Linux and/or Unix, but I think we've all been subjected to those ridiculous (and sometimes framed) posters designed to inspire us to, apparently, never stop and wonder "What the Hell am I doing here?" ;)

The following pictures are just a few of the ones I like. They're not the real-deal schmaltzy pseudo-inspirational kitsch that you see in the office (inevitably located somewhere in between your cube and the bathroom ;). These demotivational posters, and more, can be found on Despair.com. Definitely give that site a look if a good laugh is all that's standing between you and an act of data center vandalism that you'll, ultimately, regret ;)

Enjoy :)

, Mike







Please note that this blog accepts comments via email only. See our Mission And Policy Statement for further details.

Thursday, January 29, 2009

Waldo's Been Found In The PI

Hey There,

Last Friday, we posted a little puzzle we made up entitled "Where's Waldo?" where the objective was to find Waldo amongst the first million digits to the right of the decimal point in the irrational number PI. Apologies for the picture, as my editor couldn't deal with the length of the line and insisted on chopping it up into bits. The digits were in order, though, so getting the entire string without spaces was just a matter of deleting all the space characters.

Of course, there was only supposed to be one answer, but, if you're among the many who found Waldo all over the place, I guess we can count that, too. Although I wasn't specific, I assumed a knowledge of Waldo that wasn't shared by everyone (He's a goofy bastard that likes to hide in large crowds of people wearing a silly hat ;) No matter. For instance, many folks posited that any number could represent Waldo (just as a picture - a singular thing - of a cat could represent the three letters C, A and T). This is logically valid, and those of you who took that position are probably still laid out by Waldo-overload ;)

Also (closer to our solution) it was suggested that any 5 consecutive-and-unique numbers could represent Waldo. This is also true.

However, and this is why we put the puzzle up (because we figured this out by accident and it seemed kind of cool), there is a way to determine the "ONE" Waldo that's hiding in those million digits.

WARNING: If any of the above put you off and you want to go find the "ONE" Waldo on your own, please stop reading now, because this post will ruin it for you ;)

Here's how we found the "ONE" Waldo:

a. First, we tried the really simple stuff that was guaranteed to fail, but tested just to make sure that it did, anyway. And, we're happy to report that it all did :)

For instance(s):
i. Waldo, in binary, with a capital W (0101011101100001011011000110010001101111) and a lowercase w (0111011101100001011011000110010001101111) doesn't show up anywhere in the sequence.

ii. Waldo, in decimal converted from binary, with a capital W (8797108100111) and a lowercase w (11997108100111) doesn't show up either.

iii. Waldo, in simple a=1, b=2, etc, code (where case doesn't matter and both Waldos are 23112415 - o being the 15th letter) doesn't exist within the million digits either. I actually thought that one might be there.

b. Then we went for the answer that ended up exposing where the "ONE" Waldo was. To tell you the truth, we didn't expect to find him, so this spur-of-the-moment puzzle was probably just as much fun (or just as irritating) for us as it was for you ;)

i. As above, the assumption was that Waldo had to belong to a group of 5 digits that included no duplicates, since there are no duplicate letters in Waldo's name.

ii. In order to find all of these possible combinations within the million digits, we employed an expanded routine based on a previous post we did on finding overlapping matches using Perl lookahead assertions. It runs through all the 5-digit windows in the million numbers, loops through those to find the ones that appear exactly once in the million-digit chain and then, from that subset, keeps only the ones with no adjacent duplicate digits. (That last squeeze - the tr///s below - only catches side-by-side repeats, which is why entries like 04504 survive; the ordering test in the next step enforces full uniqueness anyway.) I put all the digits into a one-line file called "pi", resulting in the following suspects (the command line's ugly... I know ;)

host # perl -e 'open(PI, "<pi");$variable = <PI>; @variable = $variable =~ /\d(?=(\d\d\d\d\d))/g;for $var (@variable) {$matches{$var}++;};while (($key,$value) = each %matches) { if ( $value == 1 ) { $key =~ tr/0-9/0-9/s;if ( length($key) == 5 ) {print "$key => $value\n"} } };'
27315 => 1
04937 => 1
75138 => 1
92758 => 1
87329 => 1
76923 => 1
04504 => 1
69768 => 1
67482 => 1
40212 => 1
10686 => 1
35052 => 1
15320 => 1
95485 => 1
50134 => 1
74839 => 1
91867 => 1
72382 => 1
31640 => 1
75205 => 1
97823 => 1
63502 => 1
29090 => 1
23291 => 1
85476 => 1
26323 => 1
32013 => 1
01456 => 1
94357 => 1
96874 => 1
63160 => 1
91493 => 1
53268 => 1
91926 => 1
57521 => 1
36748 => 1
09074 => 1
81764 => 1


iii. Then, to find the "ONE" Waldo, we just augmented the previous Perl command line program to ensure that it met one more simple specification. Based on the simple case-insensitive alphabet-numeral code (a=1 .. z=26), the digits had to appear in an order that would satisfy the name "Waldo" (e.g. W had to be greater than a, l, d and o, etc.), which broke down to the simple table below. (The straight encoding was actually listed above as one of the numeral sequences that couldn't be found; no contradiction, since here we're only applying the relative weight of each letter, rather than assigning each a specific numeric value):

W a l d o = 23 1 12 4 15


using this template, the "ONE" number from the list above would have to satisfy the relative-weight differences between all 5 letters/numerals

Name: W a l d o
Pos: 0 1 2 3 4
Weight: 23 1 12 4 15


in short, in the "ONE" correct 5-digit string, taken in order, the first digit has to be greater than the fifth, which has to be greater than the third, which has to be greater than the fourth, which has to be greater than the second.

If you'll pardon the monstrous code, the command line Perl used to find "Waldo" follows:

host # perl -e 'open(PI, "<pi");$variable = <PI>; @variable = $variable =~ /\d(?=(\d\d\d\d\d))/g;for $var (@variable) {$matches{$var}++;};while (($key,$value) = each %matches) { if ( $value == 1 ) { $key =~ tr/0-9/0-9/s; @key = split(//, $key);if ( length($key) == 5 && ( $key[0] > $key[4] ) && ( $key[4] > $key[2] ) && ( $key[2] > $key[3] ) && ( $key[3] > $key[1] )) {print "$key => $value\n"} } };'
92758 => 1


leaving us with our answer: 92758. Below are links to the original list of a million digits and the same list with "Waldo" in bold red font (the pictures, which show parts of the lists, are hyperlinked to an offsite location carrying the actual million-digit lists).
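
As a cross-check (and for anyone allergic to Perl), the same two-step hunt - all 5-digit windows that appear exactly once, then the "Waldo" ordering - can be roughed out with standard text tools. This is just a sketch, assuming the same one-line "pi" file; note that awk's FS="" character-splitting trick isn't strictly POSIX (GNU awk and friends handle it fine), and that this version scans every window, where the Perl lookahead above happens to skip the very first one:

host # awk '{ for (i = 1; i <= length($0) - 4; i++) print substr($0, i, 5) }' pi | sort | uniq -u | awk 'BEGIN { FS = "" } $1 > $5 && $5 > $3 && $3 > $4 && $4 > $2'

The first awk prints every overlapping 5-digit window, "sort | uniq -u" keeps the ones that occur exactly once, and the second awk applies the first > fifth > third > fourth > second test, which should leave the same lone 92758 standing.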

Hope you enjoyed the challenge. Everyone who gave this some thought won, even if they didn't reach the same conclusion as we did :) And if you did find another way to reach "one" conclusion, please email us and let us know. We'd love to post the alternative answer!

Cheers,


Where's Waldo?


Where Is Waldo?


There's Waldo!


There Is Waldo!

, Mike







Please note that this blog accepts comments via email only. See our Mission And Policy Statement for further details.

Wednesday, January 28, 2009

Old School Keyboard Control and HTML Character Entities On Linux And Unix

Hey There,

I think the sickness is finally leaving my body (I'm referring to the virus, or whatever I have, and not the mental condition that I fear I'm going to be stuck with until they tag my toe ;), so, in between bouts of feeling like I'm not getting better, I'm cranking out this disjointed post. It's going to be a 2 point post, but there's absolutely no natural flow between the first and second topics. Basically, it goes against everything I was taught in school, but I'll be damned if "the man" is gonna keep me down any longer ;)

1. Stopping people from rebooting your Linux box with control-alt-delete and giving them something to think about.

This is pretty old-school (and, probably, old-hat) but I've never written about it before. Basically, what we want to do in this part of today's post is prevent folks with physical access to our Linux machines from rebooting them by hitting the control-alt-delete keys. Most people just disable this behaviour in /etc/inittab and use echo to issue a stern warning. I like to take it a bit too far ;)

a. Find the line in /etc/inittab that handles control-alt-delete functionality. It probably looks something like this:

ca::ctrlaltdel:/sbin/shutdown -r -t 4 now

Basically, we only need to worry about the last section (with sections being delimited by colons) that states the machine should execute "/sbin/shutdown -r -t 4 now" whenever someone presses the control-alt-delete key combination (ctrlaltdel in the file).
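
Just so the colon-delimited fields are clear (this is standard sysvinit inittab layout, nothing specific to our trick):

# /etc/inittab entries are laid out as id:runlevels:action:process
# For "ca::ctrlaltdel:/sbin/shutdown -r -t 4 now":
#   id        = ca           (just a unique label for the entry)
#   runlevels = (empty)      (applies to all runlevels)
#   action    = ctrlaltdel   (fire on the control-alt-delete keypress)
#   process   = /sbin/shutdown -r -t 4 now   (what init actually runs)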

b. Now, edit this line so that no one can reboot the box using this key combo. Generally, this is how it's done:

ca::ctrlaltdel:/bin/echo "Control-Alt-Delete Key Combination Has Been Disabled"

and then run:

init q

or

init Q

to cause init to reread the /etc/inittab. This works very well, and is recommended for machines that need to meet certain availability requirements. If this is your personal machine, or a machine your group works on that isn't "important," try this change instead (followed by "init q" or "init Q")

ca::ctrlaltdel:/bin/crashland

And then make a little executable file that you can create by grabbing the worst errors out of your /var/log/messages log, or any other file, and adding a zinger at the end :)

host # chmod 755 /bin/crashland
host # cat /bin/crashland

#!/bin/sh

/bin/echo `date "+%b %d %H:%M:%S"` `hostname` kernel: rpcd[26560]: segfault at 00000000bafa1b44 rip 00000000555c3be9 rsp 0000000056e3ea90 error 4
/bin/echo `date "+%b %d %H:%M:%S"` `hostname` kernel: vcagentd[4968]: segfault at 0000000000000001 rip 000000005558d5d2 rsp 0000000056818858 error 4
/bin/echo `date "+%b %d %H:%M:%S"` `hostname` kernel: rpcd[16785]: segfault at 00000000bafa2b44 rip 00000000555c4be9 rsp 0000000056c3fa90 error 4
/bin/echo "This message has been repeated 537 times"
/bin/echo "Unrecoverable disk error on root disk. This partition may require"
/bin/echo "Emergency maintenance - please gracefully reboot if possible! - ctrl-alt-delete on your keyboard"


Oh, the fun ;)

2. Here's a little something goofy dealing with HTML character entities in general. It's a little exercise I started and then decided not to finish since it has no end that a human can reach without growing substantially older, slower and more bitter ;)

Enjoy the descent into meta-meta-meta-... Hell...



Testing Character Entities




printing < in HTML is actually typing &lt;


printing &lt; in HTML is actually typing &amp;lt;


printing &amp;lt; in HTML is actually typing &amp;amp;lt;


printing &amp;amp;lt; in HTML is actually typing &amp;amp;amp;lt;


printing &amp;amp;amp;lt; is becoming a much more apparent, never-ending, pain in the arse...
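
Of course, if you ever need to do this escaping for real (rather than by hand, forever), a tiny filter takes care of it. A minimal sketch - the script name and path are made up - and note that the ampersand has to be escaped first, or you end up re-escaping your own output... which is exactly the descent we're joking about:

host # cat /usr/local/bin/htmlescape
#!/bin/sh

# Escape &, < and > for HTML. The & substitution must run first!
sed -e 's/&/\&amp;/g' -e 's/</\&lt;/g' -e 's/>/\&gt;/g'

host # echo 'printing < in HTML is actually typing &lt;' | /usr/local/bin/htmlescape
printing &lt; in HTML is actually typing &amp;lt;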




Hope everyone out there is staying healthy or, at least, not being sick ;)

Cheers,

, Mike







Please note that this blog accepts comments via email only. See our Mission And Policy Statement for further details.

Tuesday, January 27, 2009

Adding NFS Management To A VCS Cluster With No Downtime

Hey there,

Today we're following up on yesterday's post regarding adding NFS management to an existing VCS cluster. In that post, we took care of adding the resource quickly, although the method did require us to take down VCS on all of the nodes in the cluster. (The distinction to be made is that we brought down VCS only, using "hastop -force", and not VCS plus every resource it managed, which is what a straight-up "hastop" does. That way, although we temporarily lost HA/fault tolerance/failover, none of our managed applications ever stopped, so, theoretically, no one else was the wiser.)

Today we're going to take a look at the very same subject (before my aging mind wanders and I forget that I promised to follow up and/or change my Depends. I still remember that I owe you a few "Waldos," so I'm not completely gone yet ;), although, this time, we won't be bringing down VCS "and" we won't be bringing down the services and resources that it manages. That's assuming we don't screw up :)

To reiterate the basic assumptions set in yesterday's post: for today's purposes, we're going to assume (per the title) that we're adding an NFS resource to an existing VCS cluster on Linux or Unix (if you need extra help with SMF on Solaris 10, check yesterday's NFS-VCS post). We're also going to assume that we're dealing with a two-node cluster, since that's easier to write about and cuts down on my over-explanation. It also means that I basically just had to slightly rewrite this exact same paragraph today ;) Also, we've disabled all NFS services on the host OS (meaning that we've stopped them and removed them from init scripts so that the OS won't try to manage them itself! A simple "ps -ef|grep nfs" check before you proceed should show you any remaining running NFS programs that you should kill).

Yesterday, we took care of phase 1 (The simple way according to me). Today, we'll forge boldly ahead and turn the technical-writing world on its ear by starting a numbered list with 2 ;)

2. The somewhat-less-simple way (according to me :)

a. We won't be bringing down VCS on any nodes today, but, as with yesterday, it's good practice to do all of your modifications on whatever node you choose to be your primary for the exercise and to make backups of the main.cf file (even though, using today's method, VCS should do this for you). Opening up the configuration file read/write on one node and running the VCS commands to modify its contents on another is highly discouraged. Luckily, doing that sort of thing contradicts common sense, so it's an easy mistake to avoid ;) It's also best to do this work on your primary node (the node that all the others have built their main.cf's from), although it isn't absolutely necessary (Read: could cause issues that don't need to possibly exist ;)

So, to add NFS management to our running VCS cluster, on the command line, we'd do the following on the primary node (as a special side-note, I like to run "hastatus" in another window while I do this. Since that command, with no arguments, gives out real-time broad diagnostic information, it can be a great help if you make a mistake, since you'll clearly see the ONLINE/OFFLINE messages, etc). First we'll freeze the service group (SG1) to ensure that no faults occur while we mess with it, and open up the configuration file:

host # hagrp -freeze SG1 -persistent <-- the -persistent option is only necessary if you want the freeze to survive a reboot, which we do, just in case.
host # haconf -makerw
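
If you're the paranoid type (an asset in this line of work ;), you can confirm the freeze actually took before touching anything else. A quick sketch - "Frozen" is the group attribute a persistent freeze sets, "TFrozen" its temporary cousin:

host # hagrp -display SG1 -attribute Frozen
host # hagrp -display SG1 -attribute TFrozen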

b. Now, we'll run all the commands necessary to add the NFS, NFSRestart and Share resources to our configuration, and set them up in the exact same fashion as we did yesterday. Note that we're not actually going to run commands to create the "types" since these are all already available and included in types.cf, but (perhaps in a future post) we can do this quite simply if we ever need to using "hatype" and "haattr." For now, it's getting in the way of me finishing this post before you fall asleep ;) For each step, I'll also preface the command line with the specific part of the main.cf file that we're building, for back-reference.

c. First, we'll create the NFS resource:

NFS NFS_server_host1 (
Critical = 0
)


from the CLI:

host # hares -add NFS_server_host1 NFS SG1 <-- SG1 is the service group we froze above. hares needs to know what service group it's adding a resource to.
host # hares -modify NFS_server_host1 Critical 0
host # hares -modify NFS_server_host1 Enabled 1


d. Then, we'll create the Share resource:

Share SHR_sharename_host1 (
Critical = 0
PathName = "/your/nfs/shared/directory"
Options = "rw,anon=0"
)


from the CLI:

host # hares -add SHR_sharename_host1 Share SG1
host # hares -modify SHR_sharename_host1 Critical 0
host # hares -modify SHR_sharename_host1 PathName "/your/nfs/shared/directory"
host # hares -modify SHR_sharename_host1 Options "rw,anon=0"
host # hares -modify SHR_sharename_host1 Enabled 1


e. And then we'll add the NFSRestart resource:

NFSRestart NFS_nfsrestart_host1 (
Critical = 0
NFSRes = NFS_server_host1
LocksPathName = "/opt/VRTSvcs/lock"
NFSLockFailover = 1
)


from the CLI:

host # hares -add NFS_nfsrestart_host1 NFSRestart SG1
host # hares -modify NFS_nfsrestart_host1 Critical 0
host # hares -modify NFS_nfsrestart_host1 NFSRes NFS_server_host1
host # hares -modify NFS_nfsrestart_host1 LocksPathName "/opt/VRTSvcs/lock"
host # hares -modify NFS_nfsrestart_host1 NFSLockFailover 1
host # hares -modify NFS_nfsrestart_host1 Enabled 1


f. Almost last, but possibly least, we'll setup the dependencies for the new resources:

NFS_nfsrestart_host1 requires hostip1
SHR_sharename_host1 requires MNT_mount1_host1
SHR_sharename_host1 requires NFS_server_host1


from the CLI:

host # hares -link NFS_nfsrestart_host1 hostip1
host # hares -link SHR_sharename_host1 MNT_mount1_host1
host # hares -link SHR_sharename_host1 NFS_server_host1
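
And, to double-check that those links landed the way we intended, hares will read the dependency tree back to you:

host # hares -dep NFS_nfsrestart_host1
host # hares -dep SHR_sharename_host1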


You'll note that I set all of the new entries to non-critical (Critical = 0), since VCS's default is to make a resource critical, which would mean that, if we made a mistake, the NFS resource could cause a failover that we don't want if it doesn't work as expected. We'll reverse this condition once we know everything is good. One could argue that having a critical resource in a frozen service group is no big deal. Let the polemic begin ;)

g. Then we'll dump the configuration file (without locking it) so that the other nodes in the cluster will update as well, like so:

host # haconf -dump

Syntax checking isn't necessary using this method, since (if you mistype) you'll get your error messages right away, which makes running "hacf -verify" unnecessary. Once we're done testing basic failover to our satisfaction, by forced onlining and offlining using "hares -online/-offline" and the group-level "hagrp -switch" (we can't do normal failover since the service group is frozen, but it's better this way for the reasons mentioned above), we're ready to put everything back to normal, set the resources as "Critical" (if we want to) and thaw VCS back out ;) Again, if you get an error about the "lock" file, just touch (create yourself) the file you named as your lockfile when you ran the "hares -modify NFS_nfsrestart_host1 LocksPathName" command:

host # touch /opt/VRTSvcs/lock <-- (optional)
host # hares -modify NFS_server_host1 Critical 1
host # hares -modify NFS_nfsrestart_host1 Critical 1
host # hares -modify SHR_sharename_host1 Critical 1
host # haconf -dump -makero
host # hagrp -unfreeze SG1 -persistent
<-- note that we need to use the "-persistent" option to unfreeze the service group here, since it was frozen that way. Failure to do this can result in interesting calls in the middle of the night ;)
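
In case a concrete example helps, the forced onlining and offlining mentioned above (done before the unfreeze and re-criticalizing, of course) might look something like this, using this post's names:

host # hares -online NFS_server_host1 -sys host1
host # hares -offline NFS_server_host1 -sys host1
host # hagrp -switch SG1 -to host2 <-- the group-level switch, once the individual resources behave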

And, as infomercial-superstar Billy Mays might bellow like a wild African jungle beast jacked to the ceiling on Meth, "KABOOM!!! YOU'RE ALL SET!!!" Seriously, though, that guy really needs to calm down. The cleaning solution cleans; I get it. Please quit yelling at me ;) ...click on Billy's hyperlink if you need a good laugh and check out the "Gangsta Remix." :)

Believe it or not, there's an even simpler way to do this. I'll give it a rest for a while before I touch on this subject again.

You're welcome ;)

Cheers,

, Mike







Please note that this blog accepts comments via email only. See our Mission And Policy Statement for further details.

Monday, January 26, 2009

Adding NFS Management To An Existing VCS Cluster - Part One

Hey there,

Today we're going to take a look at a subject we've covered a bit in the past. If, after reading today's post, you want to do things the quick and dirty way, check out our previous post on manually editing the main.cf file.

Today's task, although it sounds grim (since it involves VCS, mostly ;), is actually fairly simple to do and can be done in at least two different ways.

For today's purposes, we're going to show one of those ways and we're going to assume (per the title) that we're going to add an NFS resource to an existing VCS cluster on Linux or Unix (with special notes for any OS-specific steps). We're also going to assume that we're dealing with a two-node cluster since that's easier to write about and cuts down on my over-explanation ;) Also, we've disabled all NFS services on the host OS (meaning that we've stopped them and removed them from init scripts so that the OS won't try to manage them itself! A simple "ps -ef|grep nfs" check before you proceed should show you any remaining running nfs programs which you should kill)

1. The simple way (according to me :)

a. Bring down your entire VCS cluster (all nodes) from the node that you'll be doing the modification of the main.cf file on. It's VERY IMPORTANT that you use the -force flag when running the "hastop -all" command, as this will cause VCS to go down, but won't affect any of the applications running. To put it another way, if you just run "hastop -all" without the -force flag, VCS will go down on all of your cluster's nodes cleanly and bring down all the resources it manages with it. We want to do this as covertly as possible.

host # hastop -all -force

b. Now, proceed to edit your main.cf file. To totally cover our arses, we're going to make a quick backup of main.cf and copy the existing version into a new directory to do our edits:

host # cp main.cf main.cf.`date +%m%d%y`
host # pwd
/etc/VRTSvcs/conf/config
host # mkdir tmp
host # cp main.cf tmp/.
host # cd /etc/VRTSvcs/conf/config/tmp
host # vi main.cf


c. Make the changes and save (Note that you don't have to add any additional "includes" - like types.cf - to the file, as the NFS resource is already supported in the existing types.cf include file). All additions here should be made within the service group that the NFS resource will be a member of. Our service group is going to be called SG (there I go, getting all creative again ;). Add the following lines within the service group definition. These should be added after the SystemList definition, which I've kept in the example (but you don't need to modify), here:

group SG (
SystemList = { host1 = 0, host2 = 1 }
AutoStartList = { host1 }
)

NFS NFS_server_host1 (
Critical = 0
)

NFSRestart NFS_nfsrestart_host1 (
Critical = 0
NFSRes = NFS_server_host1
LocksPathName = "/opt/VRTSvcs/lock"
NFSLockFailover = 1
)

Share SHR_sharename_host1 (
Critical = 0
PathName = "/your/nfs/shared/directory"
Options = "rw,anon=0"
)

NFS_nfsrestart_host1 requires hostip1
SHR_sharename_host1 requires MNT_mount1_host1
SHR_sharename_host1 requires NFS_server_host1


You'll note that I set all of the new entries to non-critical (Critical = 0), since VCS's default is to make a resource critical, which would mean that, if we made a mistake, the NFS resource bombing out would cause a failover that we don't want. You can, and should, remove this line once you know everything is good. Also, check out the 3 new dependency-tree lines I've added. You don't need to modify the pretty commented version that VCS creates automatically; just add the 3 basic dependencies (or requirements) that your NFS resource should rely on.

d. Now, we just need to check our new configuration to make sure it's syntactically correct:

host # hacf -verify /etc/VRTSvcs/conf/config/tmp <-- hacf wants the directory that holds main.cf, not the file itself

if this shows any errors, fix them and run this command again until it just returns you to a blank prompt. Although it can be aggravating, VCS' hacf command doesn't tell you anything if it thinks the main.cf is fine ;)
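
Since silence is the only success indicator, the exit status is your friend if you'd rather see (or script around) an explicit confirmation. A quick sketch:

host # hacf -verify /etc/VRTSvcs/conf/config/tmp && echo "main.cf parses cleanly"
main.cf parses cleanly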

e. Since everything went well, we just need to move our new main.cf file into its proper location (/etc/VRTSvcs/conf/config/). We've already backed up the original, earlier, so - if things get nuts, or the addition causes problems - we can switch back to the working configuration fairly quickly.

host # pwd
/etc/VRTSvcs/conf/config/tmp
host # mv main.cf ../.


f. Now, we just need to start up VCS again.

IMPORTANT NOTE: Only start VCS on the node on which you did your edits. Do not start it on any other nodes until the node you're working on is up and running VCS without errors. You can check the status of your host's VCS processes using either "hastatus -sum" over and over again, or just "hastatus" which provides information in real time, but is slightly more confusing to read than the output of "hastatus -sum." If you get an error about the "lock" file, just touch (create it yourself) the file you named as your lockfile in the new main.cf. You can touch it even if it exists, just to be sure, without causing any issues:

host # touch /opt/VRTSvcs/lock
host # hastart


g. Now that we've established that VCS is running fine on our main node, we can start up the other node(s). "hastatus -sum" and "hastatus" will show them (at an early point) in a REMOTE_BUILD state. This indicates that they're updating their main.cf's to match the one that we just changed.

Voila! All set. Once you're happy, be sure to set the NFS resource to "critical," if you want that. It's as easy as typing these commands on the command line (again, on your main node, if possible):

host # haconf -makerw
host # hares -modify NFS_server_host1 Critical 1
host # hares -modify NFS_nfsrestart_host1 Critical 1
host # hares -modify SHR_sharename_host1 Critical 1
host # haconf -dump -makero


A special note for Solaris 10 users: If you want to use VCS to manage your NFS resources, you'll need to delete certain resources from the SMF framework. The following commands should do the trick for you:

host # svccfg delete -f svc:/network/nfs/server:default
host # svccfg delete -f svc:/network/nfs/status:default
host # svccfg delete -f svc:/network/nfs/nlockmgr:default
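
To be sure SMF really let go, a quick listing afterward shouldn't show those three instances any more (other nfs services, like the client-side ones, may still be listed, which is fine):

host # svcs -a | grep nfs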


Tomorrow, we'll take a look at another way to do this without taking VCS down :)

Cheers,

, Mike







Please note that this blog accepts comments via email only. See our Mission And Policy Statement for further details.

Sunday, January 25, 2009

The Ubuntu Linux, Possibly Humorous, Religious Marathon Continues

Hey there,

Following up on yesterday's post regarding Ubuntu's Christian Edition, I found this much more comprehensive page at The Baheyeldin Dynasty.

If you get a chance, go check the site out. Not only do they have links to Ubuntu Christian Edition, but they've also got links to Jewbuntu (A joke site, but you'll still feel guilty for a long, long time after you read it ;), Ubuntu Muslim Edition (not a joke, but includes a "No Fatwa" guarantee) and a whole lot of other Linux/Unix/Religion-related links (mostly jokes).

If you can take your Faith with an ounce of humour, hop on over there. Half of the site is hilarious and the other half is somewhat-hilarious-yet-somewhat-disturbing-because-it's-true ;)

Also (late breaking news) thanks to Lester McGrath for pointing us in the direction of this entirely separate distribution of Christian Linux!

Enjoy,

, Mike







Please note that this blog accepts comments via email only. See our Mission And Policy Statement for further details.

Saturday, January 24, 2009

Ubuntu Christian Edition: Truth Remains Stranger Than Fiction

Hey There,

If you haven't had a chance yet, and you're the type who likes to mix your religion in with your work (oh yeah, and you're Christian ;), this Unique Ubuntu Distribution may be for you!

Ubuntu CE (or Christian Edition) offers the user many unique features, including (but not limited to) a Built-in Bible Memorizer (which will allow you to edit the Bible... like Wikipedia, except with the possibility of eternal damnation if you put something incorrect in there on purpose ;) and a Virtual Rosary, which (for some reason I have yet to understand) requires WINE. It would be too easy to go for the Holy Communion connection. Perhaps I'll have an epiphany after I'm finished running this distro through its paces ;)

For now, enjoy the screenshots below and check out Ubuntu CE for yourself. Just like religion, it's free of charge but will accept donations ;)

May your deity be with you,

The Desktop That Makes You Feel Self-Conscious 24 Hours A Day
Jesus On The Desktop


Think You Can Make The Bible A Better Read? Now's Your Chance!
Edit The Bible


, Mike







Please note that this blog accepts comments via email only. See our Mission And Policy Statement for further details.

Friday, January 23, 2009

A Unix/Linux "Where's Waldo?" Or "Who Wants Some PI?"

Hey there,

I thought I'd finish off this miserable week (lots of work; little sleep) with a puzzle for everyone, like we do from time to time. If you want more of the same, check out our older posts on seemingly random numbers and the M I U puzzle. There are even more that you can find by just searching the index of the blog.

Today, I'm leeching off a somewhat-familiar challenge: Where's Waldo? Except, in this version, he's hiding somewhere in the first million digits to the right of the decimal point of PI. Finding him will require some really simple conversion, but I have checked to verify that he is, in fact, in there (in more ways than one, depending on how you decide to do your translation of integers to... what have you? :)

Hope you guys have fun with this. I'll post my answers (and their respective rationales) in a near-future post. I've left off the leading 3, since it throws off the nice pretty columns.

CLUE/SPOILER-ALERT: In this version, Waldo is not wearing his trademark hat!!! ;)

Enjoy, and have some PI - Click the picture below to be taken to my offsite page that contains the million digits. It won't work in Blogspot and it would probably destroy the RSS feed, anyway ;)

Cheers,

who wants more pi?

, Mike







Please note that this blog accepts comments via email only. See our Mission And Policy Statement for further details.

Thursday, January 22, 2009

Setting Up Dual-Dual NIC Bonding On Ubuntu In 30 Seconds

Howdy,

Today we're going to cruise at maximum velocity, assuming that everything's fine (which it probably isn't), and plow through setting up bonding on a quad-port NIC on Ubuntu in 30 seconds or less. Of course, it will probably take you longer than 30 seconds to read this post, but the people who publish books like "Linux in 5 Minutes" and "3 Minute Abs" are pushing guys like me into a corner. The expectation of today's society is that everything in life should follow theory (if it "can" be done, it can be done ;) and, also, that it's very important to get things done as quickly as possible, especially if it involves dealing with people halfway across the globe from you. Of course, these expectations become more unrealistic by the minute (tick, tock) and most folks' innate need to feel like they "belong" (to what, I won't presuppose ;) keeps this bullet-train hurtling forward at ever-faster speeds.

I predict that, in the future, everyone will need to see a psychiatrist daily and swallow approximately 50 mg of Valium, minimum, every 4 hours, in order to keep the heart palpitations and nervous sweats from killing them, either figuratively (as the "New World Office" eats the weak alive) or literally. Children will eventually be born with the irrational fear that they're not accomplishing as much as they should be, and most 12-year-olds will be working on their 7th or 8th nervous breakdown if they're lucky (?) enough to still be alive, in which case they can look forward to their early 20's, by which time they'll probably have worked themselves up to a nice quadruple bypass.

Now, crank the clock back about 60 years. Just 60 years. I have no point here. Just wondering: What's the rush?

The modern workplace always reminds me of a line from the film of Stephen King's "The Shawshank Redemption" - For a point of reference, the narration is that of a prison lifer who's just gotten released on parole in his golden years:

I saw an automobile once when I was young. Now they're everywhere. The world went and got itself in a big damn hurry.


Enough said. Here's a quick rundown of how to setup dual-dual bonding (two bonds of two interfaces each) on Ubuntu as quickly as possible. I wish I could spend more time with you today, but I've got s### to do ;)

The specific instructions which may or may not work for you (If they don't, don't worry, because you can't worry enough ;)

1. Add two lines to /etc/modules

bonding bond0 -o bond0 mode=1 miimon=100
bonding bond1 -o bond1 mode=1 miimon=100


If you're very good at managing your time, just remember that the miimon option sets how often (in milliseconds) the bond checks its links for failure, and that mode can be one of:

0 - balance-rr: round robin balancing
1 - active-backup: one slave active at a time; the rest stand by for failover
2 - balance-xor: transmit based on a MAC-address hash, for load balancing/fault tolerance
3 - broadcast: provides fault tolerance by transmitting on all slave interfaces
4 - 802.3ad: aggregates links, assuming all NICs support the same speed and duplex (and your switch speaks LACP)
5 - balance-tlb: transmit load balancing, handled by the bond based on load
6 - balance-alb: same as 5, but also uses ARP to balance incoming load "better"

2. Install the ifenslave package if you haven't already. You can use apt-get to grab it if you don't:

host # apt-get install ifenslave-x.x

3. Ensure that the package actually installed:

# dpkg --get-selections | grep enslave
ifenslave-x.x install


4. Set up your interface files:

host # cat /etc/network/interfaces (only including the parts you probably need - substitute IP addresses, netmasks, etc):

auto lo
iface lo inet loopback

auto bond0
iface bond0 inet static
address 10.10.125.88
netmask 255.255.255.0
network 10.10.125.0
gateway 10.10.125.1
post-up ifenslave bond0 eth0 eth2
pre-down ifenslave -d bond0 eth0 eth2

auto bond1
iface bond1 inet static
address 10.10.127.88
netmask 255.255.255.0
network 10.10.127.0
gateway 10.10.127.1
post-up ifenslave bond1 eth1 eth3
pre-down ifenslave -d bond1 eth1 eth3


5. Add lines to the bottom of your architecture's modprobe files, reboot and pray:

host # cat /etc/modprobe.d/arch/i386

alias bond0 bonding
options bond0 mode=1 miimon=5000 max_bonds=2

alias bond1 bonding
options bond1 mode=1 miimon=5000 max_bonds=2
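
Assuming the reboot and the prayer both pan out, the bonding driver reports its view of the world under /proc, which is the quickest way to confirm the mode, the monitoring interval and which slaves are up:

host # cat /proc/net/bonding/bond0
host # cat /proc/net/bonding/bond1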


Congratulations, you're all set. Now get back to work ;)

Cheers,

, Mike







Please note that this blog accepts comments via email only. See our Mission And Policy Statement for further details.

Wednesday, January 21, 2009

Patching Solaris Unix - The Rules Change Again!

Hey There,

I was going to do a bit today that carried on from yesterday's post on Solaris 10 boot archive patching issues, and walk through using smpatch to keep your machine up to date (especially if you prefer to stay away from the GUI bits ;)

I reserve the right to do that, maybe tomorrow/maybe not (I've found that posting on a streak - or too many similar posts in a row - doesn't play out very well). In any event, when I went to update the patches on my SunBlade, I got an interesting message from "smpatch." It's going to be the topic of today's post; if you can even really call it a topic. It seems like many of the "old ways" Sun used to let you patch your machine are going the way of the Dodo :)

And, now, the Announcement Which Services The Public (Sounds much worse than Public Service Announcement ;)

If you manage, or own, any Sun boxes (and use any of their built-in-or-free utilities to keep your machine(s) current) this EOL (End Of Life) notice regarding certain patch updating/management utilities may be of interest to you.

NOTE: I included the entire message queue for completeness, but Message "2 of 2" (regarding smpatch) is the one that might interest you, unless you've been hassling with trying to get SunConnection working since March of 2008 ;)

Enjoy (in moderation :)

host # smpatch messages -a

Messages Of The Day

Message 1 of 2
Date: 2008/02/29
Title: Attention all Sun Connection Hosted users
Description: Users of Sun Update Manager, smpatch, and Sun Connection Enterprise (UCE) are not affected
by this EOL announcement, but should continue to read on to determine if their systems are
enabled for hosted management.

Sun Connection Hosted Customers: On March 1, 2008, https://sunconnection.sun.com/ will
reach End of Service Life. You will still be able to receive updates via Sun Update Manager,
smpatch and UCE. We suggest that you check your systems to ensure the Sun Connection Hosted
transport mechanism is disabled.

To see if your system has been enabled for hosted management, check the value of your system
by using the following command:
/usr/lib/cc-ccr/bin/ccr -g cns.service.swupPortalMgmt.status
This command will return a value of "disabled", "enabled", or "". If the value is "disabled"
or "", thank you for checking, no further action is required. If "enabled", please take
action by performing one of the following steps:

You can disable the transport mechanism from Update Manager to Sun Connection hosted by
applying the patch 121081-08 (sparc) or 121082-08 (x86) which will be available on Feb.
29, 2008. If you want more control, you can also disconnect your system from Sun
Connection Hosted and shutdown the associated daemons on your system(s) via either one of
these two options.

1) From the Hosted Management Portal
[Note: This option will not be available once Sun Connection Hosted has been shutdown.]
a) Go to https://updates.sun.com/ and login to the Hosted management site with your Sun
Online Account.
b) Select the Systems tab
c) For each system listed, click the Edit System Settings icon (second icon to the right
of the system name)
d) Scroll to the bottom of the Edit Your System Settings page and click the Delete System
button
e) When asked for confirmation click Continue

This option will unregister your systems from the Portal and send a job down to your system
telling it to disable the portal management functionality.

2) Disable Hosted Management functionality from the client
a) From a terminal window on the target client system, su to root
b) Execute the following command to shutdown the local Hosted daemons:
# /usr/lib/cc-cfw/framework/lib/cc-client-adm stop
c) Execute the following command to prevent the daemons from being restarted at system boot
# /usr/lib/cc-cfw/framework/lib/cc-client-adm disable

Once complete, either option will disassociate your client system with Sun Connection Hosted.

Message 2 of 2
Date: 2009/01/13
Title: Attention all Sun Connection users
Description: The following patches for Sun Update Connection clients will soon be necessary to validate patch downloads.

Solaris 10 (Sparc) 121118-15
Solaris 10 (x86) 121119-15

Solaris 9 (Sparc) 140476-01
Solaris 9 (x86) 140477-01

Solaris 8 (Sparc) 140475-01

These patches deliver an updated public certificate into the smpatch keystore which will be required to validate patches signed after the expiry of the current patch signing certificate. Run

smpatch update

to ensure you are running with the latest versions of all required patches. After applying the patch, the default patchset name will become current2. The current2 patchset contains all the patches available today, and will also deliver the newly signed patches once they become available.

Sun Update Connection Proxy (Local Patch Server) users:

A patch will be available shortly to make use of the new certificates. If you wish, you may alter the patchSigningCertAlias property in the file /var/patchsvr/lps/WEB-INF/applicationContext-lps.xml to

patchsigning:patchsigning2:patchsigning3

After installing the above patch on your system to make use of this new certificate. A restart of patchsvr will be required to pick up the changes.


, Mike







Please note that this blog accepts comments via email only. See our Mission And Policy Statement for further details.

Tuesday, January 20, 2009

Solaris 10 Unix Patch Update Boot Archive Woes

Ahoy there,

Today we're going to take a look at some "Solaris 10" specific stuff (at least, I hope it's only the OS ;) that's been making me nuts lately. My problem may have to do with the new patch update/management setup, but I've run down enough dead-ends on that hunt that I'm fairly sure it has to do with the implementation of the Solaris 10 "boot archive" and may also be contained within release "10/08," although I've read complaints from users running earlier versions (actually, strangely enough *** heavy sarcasm *** most of the complaints seem to stem from users of the more recent releases ;)

From a fresh install of 10/08, I thought (for once in my life) I'd do the convenient thing on my SunBlade and setup patch notifications. Usually, for my personal boxes, I'll leave everything alone and never fix anything until I notice that it's causing me a problem (which is generally never -- Not that I've ever run any Solaris versions that were bullet-proof, just that I didn't notice any problems I couldn't live with. Like they say; if it ain't broke... ;)

Now that I've gone through my 4th or 5th update using Sun's update manager (which basically just downloads all the patches I need and then runs patchadd in the correct order), I've tried doing the same thing manually - thinking that might be the issue - but ended up in the same quandary. The problem is starting to irritate me. I'm not so much worried about the fact that this issue occurs at all, just that it occurs on my workstation which doesn't have any sort of console connection to it. Ergo, if I run patch updates from home, I have to wait until I get back in the office to get past the single-user-mode hang-up.

The basic issue plays out like this (assuming a simple one disk system with no mirroring, etc):

1. Patches are added in the correct order, patch installs are validated and the system reboots.

2. After cruising past the ok> prompt and starting to boot back up, the system inevitably fails and stops at the dreaded "control-D-or-enter-root-password-for-maintenance" prompt.

3. The error message is always the same, with slight variations denoted by asterisks:

Warning: The following files in / differ from the boot archive:
***
***
...


Fixing the issue immediately (assuming you're at the console) is very simple and, to their credit, Sun does include the exact steps you need to go through to take care of it, right after the error message.

Those steps would be:

1. Bring your system down to the PROM level after entering the root password:

host # init 0

or

host # halt (the stop+a keys for those of you who like to get as much bang for the buck from your keystrokes as possible ;)

2. Bring it up in failsafe mode and fix the boot archive problem (which it will, basically, fix for you):

ok> boot -F failsafe <-- with "-Z zpool_dataset" if you're booting a ZFS Root Pool ("boot -L" will list the pools out for you at the ok> prompt)
...
blah, blah, blah


and then:

Do you wish to automatically update this boot archive? [y,n,?] y

and, more often than not, you'll then have to run fsck against your root partition and either exit from single user mode to continue the boot process or do another "init 0" followed by a straight up "ok> boot"

Another option, after your system has failed to boot up properly following patching, is to just clear the boot archive. This works well also (even when your system is live), but is frowned upon in some academic circles. Just enter the root password to get into single user mode and run:

host # svcadm clear system/boot-archive
host # exit


and your machine will come up fine. I've tried applying some basic logic to the problem by executing that command pre-and-post-patching before rebooting, but I still end up in the same boat (??? Why, God? Why!!!!??? ;)
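
One more arrow for the quiver, worth firing before the post-patch reboot (no promises it fares any better than the svcadm approach, but it's cheap to try): rebuilding the archive by hand so it matches the freshly-patched files:

host # bootadm update-archive
host # bootadm list-archive <-- optional sanity check; the same command whose output is tacked on below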

In any event, I'm sorry that we still don't have this site on new hosting. I would "love" for this to be a post that had comments enabled. Someone out there must know the answer (and not any of the regular ones about "known bugs that may never get resolved" ;)

Just as a "maybe/possibly" in closing, I noticed this on my Sparc workstation. At this point I have a sneaking suspicion it's a kernel patch revision issue (based on bootadm's output, tacked on to the end of the post ;), so (if what I've been reading on the message boards is any indication), I'll have this problem for anywhere from "a while" to forever, unless I decide to put my head on the chopping block and try to patchrm my kernel back to a state it's never been in ;)

Hope this post helps you out if you get stuck in the same situation.

Cheers,

host # bootadm list-archive
platform/FJSV,GPUU/kernel
platform/FJSV,GPUZC-L/kernel
platform/FJSV,GPUZC-M/kernel
platform/SUNW,A70/kernel
platform/SUNW,Netra-210/kernel
platform/SUNW,Netra-240/kernel
platform/SUNW,Netra-440/kernel
platform/SUNW,Netra-CP2300/kernel
platform/SUNW,Netra-CP3010/kernel
platform/SUNW,Netra-T12/kernel
platform/SUNW,Netra-T4/kernel
platform/SUNW,SPARC-Enterprise/kernel
platform/SUNW,Serverblade1/kernel
platform/SUNW,Sun-Blade-100/kernel
platform/SUNW,Sun-Blade-1000/kernel
platform/SUNW,Sun-Blade-1500/kernel
platform/SUNW,Sun-Blade-2500/kernel
platform/SUNW,Sun-Fire-15000/kernel
platform/SUNW,Sun-Fire-280R/kernel
platform/SUNW,Sun-Fire-480R/kernel
platform/SUNW,Sun-Fire-880/kernel
platform/SUNW,Sun-Fire-V210/kernel
platform/SUNW,Sun-Fire-V215/kernel
platform/SUNW,Sun-Fire-V240/kernel
platform/SUNW,Sun-Fire-V245/kernel
platform/SUNW,Sun-Fire-V250/kernel
platform/SUNW,Sun-Fire-V440/kernel
platform/SUNW,Sun-Fire-V445/kernel
platform/SUNW,Sun-Fire-V490/kernel
platform/SUNW,Sun-Fire-V890/kernel
platform/SUNW,Sun-Fire/kernel
platform/SUNW,Ultra-1-Engine/kernel
platform/SUNW,Ultra-250/kernel
platform/SUNW,Ultra-4/kernel
platform/SUNW,Ultra-5_10/kernel
platform/SUNW,Ultra-80/kernel
platform/SUNW,Ultra-Enterprise-10000/kernel
platform/SUNW,Ultra-Enterprise/kernel
platform/SUNW,UltraAX-MP/kernel
platform/SUNW,UltraAX-e/kernel
platform/SUNW,UltraAX-e2/kernel
platform/SUNW,UltraAX-i2/kernel
platform/SUNW,UltraSPARC-IIe-NetraCT-40/kernel
platform/SUNW,UltraSPARC-IIe-NetraCT-60/kernel
platform/SUNW,UltraSPARC-IIi-Engine/kernel
platform/SUNW,UltraSPARC-IIi-Netract/kernel
platform/SUNW,UltraSPARC-IIi-cEngine/kernel
platform/SUNW,UltraSPARCengine_CP-20/kernel
platform/SUNW,UltraSPARCengine_CP-40/kernel
platform/SUNW,UltraSPARCengine_CP-60/kernel
platform/SUNW,UltraSPARCengine_CP-80/kernel
platform/TSBW,8000/kernel
platform/TSBW,Ultra-2e/kernel
platform/TSBW,Ultra-2i/kernel
platform/TSBW,Ultra-3i/kernel
platform/sun4u-us3/kernel
platform/sun4u/kernel
platform/sun4us/kernel
etc/cluster/nodeid
etc/dacf.conf
etc/mach
kernel


, Mike







Please note that this blog accepts comments via email only. See our Mission And Policy Statement for further details.

Monday, January 19, 2009

Extracting Different File Types On Linux And Unix - Guest Post

Hey there, everyone, hope you had a pleasant weekend :)

Today's Unix/Linux post is courtesy of TuxHelper and deals with the extraction of various types of compressed files in the bash shell. More specifically, it deals with setting up a very clever function that you can include in your .bashrc.

This little function is brilliant, in that it's so simple, obvious, and convenient that I can't believe I never thought to do it myself (I must be a glutton for punishment ;)

Hope you enjoy it and can "extract" some value from it (I really had to "push" that one ;)

Cheers, (and if anyone else out there would like to submit content and help me stave off that impending nervous collapse, please feel free to send me a comment :)





Thanks to "plb" on the debian forum for providing the tip/trick!

Open your .bashrc file, located in your /home/$USER/ directory. It's a "hidden file," as some like to call it in the GNU/Linux world. Assuming you have the file open, find a spot to paste in the following text:


function extract()
{
if [ -f "$1" ] ; then
case "$1" in
*.tar.bz2) tar xjf "$1" ;;
*.tar.gz) tar xzf "$1" ;;
*.tar.Z) tar xzf "$1" ;;
*.bz2) bunzip2 "$1" ;;
*.rar) unrar x "$1" ;;
*.gz) gunzip "$1" ;;
*.jar) unzip "$1" ;;
*.tar) tar xf "$1" ;;
*.tbz2) tar xjf "$1" ;;
*.tgz) tar xzf "$1" ;;
*.zip) unzip "$1" ;;
*.Z) uncompress "$1" ;;
*) echo "'$1' cannot be extracted." ;;
esac
else
echo "'$1' is not a file."
fi
}


Now go to your editor's menu and save. The next time you have an archived file that needs to be opened, open a terminal/console and type, for example: extract myfile.zip





, Mike



Yannis Tsopokis had this to add - Turns out this is all a lot easier than I ever thought. I'm officially a dinosaur ;)


For Linux there is unp which chooses which utility will extract the file and calls it.

Yannis

A huge supporter of the Rox Desktop had this to add - perhaps now, the correct source will get proper attribution!


This is actually from http://roscidus.com/desktop/Archive

tgz = Extract('tgz', "gunzip -c - | tar xf -")
tbz = Extract('tar.bz2', "bunzip2 -c - | tar xf -")
tarz = Extract('tar.Z', "uncompress -c - | tar xf -")
rar = Extract('rar', "unrar x '%s'")
ace = Extract('ace', "unace x '%s'")
tar = Extract('tar', "tar xf -")
rpm = Extract('rpm', "rpm2cpio - | cpio -id --quiet")
cpio = Extract('cpio', "cpio -id --quiet")
deb = Extract('deb', "ar x '%s'")
zip = Extract('zip', "unzip -q '%s'")
jar = Extract('jar', "unzip -q '%s'")

Justin Li noted this very important point! Thanks for contributing, Justin!


Hi Mike,

I have been a follower of the Menagerie for a while; your posts have taught me quite useful things. For the extract script though, wouldn't it be better if the script used file to find the filetype instead of matching against the file name? Especially since in Linux the file extension is only a convention for humans, the output of file would be much more accurate.

Keep writing about Linux!

Justin
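
For what it's worth, here's a rough sketch of Justin's idea - untested here, and the MIME-type names vary a bit between versions of file(1), so treat it as a starting point rather than gospel. Note that file only sees the outer layer, so a .tar.gz still needs a second pass after the gunzip... one point in favor of the much-maligned extension ;)

function extract2()
{
# Branch on what the file actually is, per file(1), instead of its name.
if [ ! -f "$1" ] ; then
echo "'$1' is not a file."
return 1
fi
case "$(file -b --mime-type "$1")" in
application/x-tar) tar xf "$1" ;;
application/gzip|application/x-gzip) gunzip "$1" ;;
application/x-bzip2) bunzip2 "$1" ;;
application/zip) unzip "$1" ;;
application/x-rar|application/vnd.rar) unrar x "$1" ;;
*) echo "'$1' cannot be extracted." ;;
esac
}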





Please note that this blog accepts comments via email only. See our Mission And Policy Statement for further details.

Sunday, January 18, 2009

Unix and Linux SysAdmin Humor - Keep It To Yourself!

Happy Sunday :)

This week-end's humorous Linux-and-Unix laughs tidbit was found on PacketStorm.com in their Unix Humor section. We've pulled some other good jokes from their site previously, and you can check out our favorite packetstorm picks so far if you're up for some more humour but don't want to go through all the bad stuff to find the few gems. Of course, this assumes that I have a good sense of humour, which I don't. Seriously. Our editor spins every scathing and vindictive paragraph I write into fluffy family-friendly schmaltz ;)

I remember this joke from back in the day and can't believe I don't have it up here already, but I'm making up for it now and, perhaps, I'll find redemption in that, somehow ;)

Enjoy the disturbing remarks and have a peaceful day (If that dichotomy is even possible to pull off ;)

Cheers,

BTW: Try not to notice that the list is a few short of 101



101 Things you do NOT want your System Administrator to say.




  1. Uh-oh.....
  2. Shit!!
  3. What the hell!?
  4. Go get your backup tape. (You do have a backup tape?)
  5. That's SOOOOO bizarre.
  6. Wow!! Look at this.....
  7. Hey!! The suns don't do this.
  8. Terminated??!
  9. What software license?
  10. Well, it's doing something.....
  11. Wow....that seemed fast.....
  12. I got a better job at Lockheed...
  13. Management says...
  14. Sorry, the new equipment didn't get budgetted.
  15. What do you mean that wasn't a copy?
  16. It didn't do that a minute ago...
  17. Where's the GUI on this thing?
  18. Damn, and I just bought that pop...
  19. Where's the DIR command?
  20. The drive ate the tape but that's OK, I brought my screwdriver.
  21. I cleaned up the root partition and now there's lots of free space.
  22. What's this "any" key I'm supposed to press?
  23. Do you smell something?
  24. What's that grinding sound?
  25. I have never seen it do *that* before...
  26. I think it should not be doing that...
  27. I remember the last time I saw it do that...
  28. You might as well all go home early today ...
  29. My leave starts tomorrow.
  30. Ooops.
  31. Hmm, maybe if I do this...
  32. "Why is my "rm *.o" taking so long?"
  33. Hmmm, curious...
  34. Well, my files were backed up.
  35. What do you mean you needed that directory?
  36. What do you mean /home was on that disk? I umounted it!
  37. Do you really need your home directory to do any work?
  38. Oracle will be down until 8pm, but you can come back in and finish your work when it comes up tonight.
  39. I didn't think anybody would be doing any work at 2am, so I killed your job.
  40. Yes, I chowned all the files to belong to pvcs. Is that a problem to you?
  41. We're standardizing on AIX.
  42. Wonder what this command does?
  43. What did you say your (l)user name was...? ;-)
  44. You did what to the floppy???
  45. Sorry, we deleted that package last week...
  46. NO! Not that button!
  47. Uh huh......"nu -k $USER".. no problem....sure thing...
  48. Sorry, we deleted that package last week...
  49. [looks at workstation] "Say, what version of DOS is this running?"
  50. Oops! (said in a quiet, almost surprised voice)
  51. YEEEHA!!! What a CRASH!!!
  52. What do you mean that could take down the whole network?
  53. What's this switch for anyways...?
  54. Tell me again what that '-r' option to rm does
  55. Say, What does "Superblock Error" mean, anyhow?
  56. If I knew it wasn't going to work, I would have tested it sooner.
  57. Was that your directory?
  58. System coming down in 0 min....
  59. The backup procedure works fine, but the restore is tricky!
  60. Hey Fred, did you save that posting about restoring filesystems with vi and a toothpick? More importantly, did you print it out?
  61. OH, SH*T! (as they scrabble at the keyboard for ^c).
  62. The sprinkler system isn't supposed to leak is it?
  63. It is only a minor upgrade, the system should be back up in a few hours. (This is said on a Monday afternoon.)
  64. I think we can plug just one more thing into this outlet strip without tripping the breaker.
  65. What is all this I hear about static charges destroying computers?
  66. I found this rabbit program that is supposed to test system performance and I have it running now.
  67. Ummm... Didn't you say you turned it off?
  68. The network's down, but we're working on it. Come back after dinner. (Usually said at 2200 the night before thesis deadline...)
  69. Ooops. Save your work, everyone. FAST!
  70. Boy, it's a lot easier when you know what you're doing.
  71. I hate it when that happens.
  72. And what does it mean 'rm: .o: No such file or directory'?
  73. Why did it say '/bin/rm: not found'?
  74. Nobody was using that file /vmunix, were they?
  75. You can do this patch with the system up...
  76. What happens to a Hard Disk when you drop it?
  77. The only copy of Norton Utilities was on THAT disk???
  78. Well, I've got a backup, but the only copy of the restore program was on THAT disk....
  79. What do you mean by "fired"?
  80. hey, what does mkfs do?
  81. where did you say those backup tapes were kept?
  82. ...and if we just swap these two disc controllers like this...
  83. don't do that, it'll crash the sys........ SHIT
  84. what's this hash prompt on my terminal mean?
  85. dd if=/dev/null of=/vmunix
  86. find /usr2 -name nethack -exec rm -f {} \;
  87. now it's funny you should ask that, because I don't know either
  88. Any more trouble from you and your account gets moved to the 750
  89. Ooohh, lovely, it runs SVR4
  90. SMIT makes it all so much easier......
  91. Can you get VMS for this Sparc thingy?
  92. I don't care what he says, I'm not having it on my network
  93. We don't support that. We won't support that.
  94. ...and after I patched the microcode...
  95. You've got TECO. What more do you want?
  96. We prefer not to change the root password, it's a nice easy one
  97. Just add yourself to the password file and make a directory...


, Mike







Please note that this blog accepts comments via email only. See our Mission And Policy Statement for further details.

Saturday, January 17, 2009

Photos From The Golden Age Of Computing

Happy Saturday, everyone :)

Today's post is going to be less humorous than most of our weekend posts, but still kind of cool and interesting. We might even get a laugh out of you. ...from deep down inside ;)

So, kick back, relax and enjoy these pictures from the golden age of computing. All pictures are linked back to their original source (where required by the military -- no kidding) or to the extensive archives at OldComputers.net. There's lots of fun stuff to reminisce about over there, along with links to tons of other sites with old computer information and pictures!

Enjoy :)

Note: Click on the image below to see it in its original context (I forced the picture down to 800x600, since the actual picture is 1200-something by another huge number ;) and enjoy your stay on the Army's webservers ;)



The very first laptop :)



And DEFINITELY check out this link to read up on the bleeding-edge technology from 1955 :)



, Mike







Please note that this blog accepts comments via email only. See our Mission And Policy Statement for further details.

Friday, January 16, 2009

Spinning Unix And Linux Blog Content

Hey There,

Hope you're having a good Friday and the boss lets you go home early with a free ham ;)

Today's post isn't so much Linux and Unix-related as it is post-and-article related. Having been blogging for a good year or so, I've done my fair share of writing posts and putting together articles to try and redirect traffic to this minuscule speck on the web right here.

Now, I'm just as lazy as the next guy and (if it were possible) I'd have all my posts written by an artificial intelligence program that could take a few keywords, some sloppy thoughts and a general outline, and spit out a nice beefy post, replete with bizarre references to pop culture, interesting asides and a good dose of my own supposed sense of humor. Of course, I realize this is asking way too much (I think), and I don't expect anything like it to ever be offered up for sale or show. At least, not any time soon.

One thing I find interesting, when checking out Markov- (and other algorithm-) based article rewriters (or spinners), is that the only ones that work are the ones that require you to basically do the rewrite yourself. Essentially, you're being sold a text editor with a built-in thesaurus.

I don't want to blast anybody who's out there giving this whole notion of unique content generation a shot (I keep thinking I might ;) and falling short. To me, the idiomatic and subtle nature of any language is such that merely replacing words and phrases (without fully understanding the context in which they reside) is never going to get the job done. I think this would make for an interesting project (although, I must admit, the fact that two decades of professional language-specific word processing software still can't fix grammatical errors consistently or correctly has me at a loss. ...for now).
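In fact, the guts of that context-blind approach fit in a single shell command. Here's a minimal, tongue-in-cheek sketch (the three hard-coded "synonyms" below were picked to match the rewrite you're about to read, and the \b word-boundary syntax assumes GNU sed):

# a one-command "spinner": swap in thesaurus entries with zero regard for context
sed -e 's/\blot\b/aggregation/g' \
-e 's/\bprobably\b/belike/g' \
-e 's/\bterrible\b/intense/g' original.txt > spun.txt

Nothing in there knows that "a aggregation" isn't English. The articles and agreement never get fixed up, which is precisely the tell you'll spot in the output below.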

In that vein, I won't reveal what free online article spinner I used to re-generate just one portion of a previous post we did on dealing with "argument list too long" errors. I kept it to the preamble, since, once we get into Unix/Linux-speak, most words are never replaced anyway. I think you'll agree that the outcome is pretty funny and/or reminds you of the many-too-many Internet-pharmacy/Viagra/Spanish-prisoner-scam emails you get every day :)

Here's the original version of that post's intro:

Here's a question that gets asked a lot (and, consequently, answered a lot ;) on the boards. How do you go about dealing with a situation in which you're trying to take care of some business on your Linux or Unix box and you get stopped with the "Argument list too long" error message? It's probably happened to all of us at some point, but it's fairly simple to avoid, and in more than one way. I'm almost positive, even though I know this stuff, that I'll get that error message again at some point in the future. My mind is a terrible thing ;)


The way I try to keep on top of avoiding that error (which, admittedly, isn't all that terrible, assuming it's not a predicating factor in a script - like, you don't have any action in your script that might do something destructive if you don't check for errors) is a basic two-prong approach. First, I'll try to figure out what kind of limitations my system can stand. Secondly, I try to stick to some basic methods that guarantee I won't run into the issue no matter what my system's maximum argument length is. Of course, the second part obviates the first part, but I'm a curious guy (take that whatever way you want ;)


and here's the rewrite (set at 50% - midway between most unique and most readable). Enjoy at your own peril. I don't think some of the substitute words really exist ;)

Hey There,

Here's a question that gets asked a aggregation (and, consequently, answered a aggregation ;) on the boards. How do you go about handling with a situation in which you're trying to verify care of some business on your Linux or Unix box and you get stopped with the "Argument list too long" error message? It's belike happened to all of us at some point, but it's fairly simple to avoid, and in more than one way. I'm almost positive, even though I know this stuff, that I'll get that error message again at some point in the future. My mind is a intense thing ;)


The way I try to ready on top of avoiding that error (which, admittedly, isn't all that terrible, assuming its not a predicating factor in a script - like, you don't have some action is your script that might do something destructive if you don't analyse for errors) is a basic two-prong approach. First, I'll try to figure out what kind of limitations my system can stand. Secondly, I try to stick to some basic methods that guarantee I won't run into the issue no matter what my system's maximum argument length is. Of course, the second part obviates the prototypal part, but I'm a curious man (take that whatever way you want ;)


Cheers, and a relaxing weekend to you all,

, Mike







Please note that this blog accepts comments via email only. See our Mission And Policy Statement for further details.

Thursday, January 15, 2009

Bash script to find all of your indexed web pages on Google

Hey there,

Today's Linux and/or Unix bash script is a kind-of add-on for our original bash script to find your Google search index rank from last August (2008 - for those of you who might be reading this in some barren wasteland in a post-apocalyptic future where, for some inexplicable reason, my blog posts are still legible and being transmitted over the airwaves ...or delivered in paper format by renegade-postman Kevin Costner ;)

While the original script would search Google for any link to the URL you specified, based on a search of any number of keywords, or keyword phrases, you supplied (up to 1000 results, when Google cuts off), this one checks to see if your site is indexed by Google, and how heavily. I know that sounds like almost the exact same thing, but it's slightly different, I promise :)

This version (named "gxrank" since the original script was named "grank" and I needed to differentiate between the two in my bin directory - I'll admit I wasted not one ounce of imagination on the title ;) will accept a URL you supply and check Google's index for your site. It will then let you know how many pages, and which ones, are in Google's index.

While this might not seem like a valuable script to have, I use it a lot to test how fast I can put sites up for people and get them indexed. For instance, I might put up a site today and do all the standard SEO falderal. Of course, I won't run this script that night, but, usually by the next day, I'll be able to run this script and, at least, get back 1 result for the base URL. Then, over time, I can run this script (usually once per day, so I can feel impressed ;) and see how many pages on my site have been indexed. This script was, most specifically, made to track high-activity blogs (at least a post a day), but I use it for smaller sites, as well. If I put a site up that has 10 pages, it's nice to know when (or should I say if? ;) those 10 pages get fully indexed.

The usage for the script is fairly simple. You can run it from the Linux or Unix command line like:

host # ./gxrank yourUrl.com

You don't need to include the http:// or any other stuff. It's basically a regular expression match, so you can just include enough of a semi-URL to make sure you get back relevant results. You'll notice that I also only have it printing out the first 100 results, maximum (you can modify this so it doesn't show you anything, if you want - I just like to see it). Google gives strange results when the number of returns is less than the maximum number of results on any given page. For example, it might list 38 index entries and then say that it got that many out of approximately 57 results. They're probably removing similar results but, ultimately, the exact number of index entries isn't all that important, since it fluctuates often.

As a "for instance," below is the output of the script when run to check the indexed pages of gotmilk.com (Not for any particular reason; they were just the first site I ran across that didn't return more results than would fit on my screen ;)

Note: Click on the image below to "virtually experience" the bends ;)

gotmilk.com indexed pages on Google

Hope you enjoy this script and get some good use out of it. Sometimes just seeing your numbers grow can pick you up when you're ready to throw in the towel on your website :)

Cheers,

IMPORTANT NOTE: Although this warning is on the original Google search rank index page, it bears repeating here and now. If you use wget (as we do in this script), or any CLI web-browsing/webpage-grabbing software, and want to fake the User-Agent, please be careful. Please check this online article regarding the likelihood that you may be sued if you masquerade as Mozilla.
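If you'd rather not impersonate a browser at all, wget's --user-agent option will take any string you like, so a more honest alternative is to identify your script and leave a contact address. Something along these lines (the version string and address are placeholders, obviously):

host # wget -q --user-agent="gxrank/1.0 (yourname@example.com)" -O - "http://www.google.com/search?q=site:yourUrl.com"

Whether Google chooses to serve results to an agent it doesn't recognize is, of course, entirely up to Google -- which is the whole reason the masquerade is tempting in the first place.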


Creative Commons License


This work is licensed under a
Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License

#!/bin/bash

#
# gxrank - how many pages does Google have in its index for you?
#
# 2009 - Mike Golvach - eggi@comcast.net
#
# Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License
#

if [ $# -ne 1 ]
then
echo "Usage: $0 URL"
echo "URL with or with http(s)://, ftp://, etc"
exit 1
fi

url=$1
shift

base=0
# num seeds the per-page result counter used in the awk numbering below
num=1
start=0
not_found=0
search_string="site:$url"

echo "Searching For Google Indexed Pages For $url..."
echo

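# scrape the "of about <b>N</b>" results count off the first page of results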
num_results=`wget -q --user-agent=Firefox -O - http://www.google.com/search?q=$search_string\&hl=en\&safe=off\&pwst=1\&start=$start\&sa=N|awk '{ if ( $0 ~ /of about <b>.*<\/b> from/ ) print $0 }'|awk -F"of about" '{print $2}'|awk -F"<b>" '{print $2}'|awk -F"</b>" '{print $1}'`

while :;
do
if [ $not_found -eq 1 ]
then
break
fi
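# pull back up to 100 results, tease each link onto its own line and number them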
wget -q --user-agent=Firefox -O - http://www.google.com/search?q=$search_string\&num=100\&hl=en\&safe=off\&pwst=1\&start=$start\&sa=N|sed 's/<a href=\"\([^\"]*\)\" class=l>/\n\1\n/g'|awk -v num=$num -v base=$base '{ if ( $1 ~ /^http/ ) print base,num++,$NF }'|awk '{ if ( $2 < 10 ) print "Google Index Number " $1 "0" $2 " For Page: " $3; else if ( $2 == 100 ) print "Google Index Number " $1+1 "00 For Page: " $3;else print "Google Index Number " $1 $2 " For Page: " $3 }'|grep -i $url
if [ $? -ne 0 ]
then
not_found=1
fi
# google only hands back one page of up to 100 results here, so one pass is all we need
break

done

if [ $not_found -eq 1 ]
then
echo "Finished Searching Google Index"
echo
fi

echo "Out Of Approximately $num_results Results"
echo
exit 0


, Mike







Please note that this blog accepts comments via email only. See our Mission And Policy Statement for further details.