Saturday, February 28, 2009

Discord At Last! Unix and Linux-y Humor

Hey there,

Although this post I found on the web is a bit dated, it's new to me and, hopefully, to you :)

If you're a fan of Robert Anton Wilson and/or Robert Shea, you may be familiar with what I still believe is a cult classic: The Illuminatus Trilogy.

If you aren't familiar with them, I'd suggest virtually anything either of them has ever written, but especially the aforementioned Illuminatus Trilogy. Every time someone mentions "The DaVinci Code," which I grudgingly read, I'm always reminded of this book. It's rooted in the same amount of fictitious fact, but is a superbly well-crafted, self-aware and hilarious read. SPECIAL NOTE FOR THE PURITANICALLY SENSITIVE: While the writing in this book is excellent and the story extremely entertaining, it does contain descriptions of fairly commonplace sexual acts in a level of detail that may make some readers uncomfortable. Fortunately, the humor, violence, ten-mile-deep plot and prose (in general) are equally as engaging :)

If you are familiar with The Illuminatus Trilogy, the following post on "ddate," a command that converts regular Gregorian dates into Discordian time, should be entertaining and give you a good chuckle. Even if you haven't read the books, this stuff is funny on its own. I found this particular post on CyberCiti, where you can find lots of other funny and interesting posts.

As an interesting sidenote, the Principia Discordia to which the post refers actually inspired The Illuminatus Trilogy. Or was the one the inspiration for the other (The first edition having only been found in 2006)? The more of this stuff you read, the more confusing it gets ;) If you want to read more about all of the modern reading, writing, movies and television inspired by Discordianism, check out the Discordianism Page on Wikipedia.

Enjoy and Happy Saturday!

p.s. On a sadder note, both of the authors of The Illuminatus Trilogy have since passed away. You can visit their respective websites (continuing where they left off, their legacies being carried on by their children) at the Robert Anton Wilson Home Page (he passed on January 11th, 2007, peacefully in his sleep; a number of false reports of his death made the papers from 2006 onward) and BobShea.net (he passed March 10th, 1994 of colon cancer, although (yet another oddity) many false reports marked him as having passed from AIDS-related illness. Whoever his lifestyle offended ended up being more offensive in the end).

, Mike



ddate: Converts Gregorian dates to Discordian dates


by Vivek Gite






Most of you are probably aware of the date command. There is also a ddate command. It converts Gregorian dates to the Discordian (or Erisian) calendar, which is an alternative calendar used by some adherents of Discordianism. It is specified on page 00034 of the Principia Discordia:


The Discordian calendar has five 73-day seasons: Chaos, Discord, Confusion, Bureaucracy, and The Aftermath. The Discordian year is aligned with the Gregorian calendar and begins on January 1, thus Chaos 1, 3173 YOLD is January 1, 2007 Gregorian.


Just type ddate:

$ ddate

Output:


Today is Setting Orange, the 71st day of Bureaucracy in the YOLD 3173

For fun, put the following alias in /etc/profile (try it out on the 1st of April ;) ):

alias date=ddate
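
Once that alias is in place, the next unsuspecting soul to ask for the time gets the Discordian answer instead. A quick sketch of the effect (assuming, for the sake of the joke, that the date happens to be April 1st, 2009):

$ date
Today is Sweetmorn, the 18th day of Discord in the YOLD 3175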


If called with no arguments, ddate will get the current system date, convert it to the Discordian date format and print the result on the standard output. Alternatively, a Gregorian date may be specified on the command line, in the form of a numerical day, month and year.
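
For instance, you can ask for the Discordian equivalent of a specific Gregorian date by passing the day, month and year (a quick sketch - the exact wording of the output can vary a little between ddate versions):

$ ddate 1 1 2009
Sweetmorn, Chaos 1, 3175 YOLD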











Please note that this blog accepts comments via email only. See our Mission And Policy Statement for further details.

Friday, February 27, 2009

Subdomain Redirection Using htaccess And mod_rewrite On Apache For Linux Or Unix

Hey there,

Following up on yesterday's post on 301 redirects, which we realize is a tired subject, today we're going to take a look at simple ways you can do regular/subdomain redirection using htaccess and "mod_rewrite" for Apache on Linux or Unix. And, while yesterday's subject matter (and perhaps even today's) may be somewhat generic, we're trying to even the balance here. Now that we're in the 500's on post count, we feel it would be a shame if somebody came here to learn something really cool and then couldn't find some other really basic information. As written previously, if this blog's subject matter were more limited this would be much easier to do. On the bright side, we'll probably never run out of things to write about :)

You can reference the basics of mod_rewrite syntax and rules on the link provided that points back to apache.org. While this page (or pages it links to within the same site), obviously, covers all the aspects of using Apache's "mod_rewrite," the style may not be suited to everyone. It's great information, but assumes some level of comfort that you may or may not already have regarding the subject matter.

In our few examples today (which won't cover much but should be explained with a good amount of detail), we're going to use a file named ".htaccess" (the dot at the beginning of the name is important!) placed in our web server's root folder (or base folder, etc) which contains the "mod_rewrite" rules. Our first example will implement the equivalent of a 3xx redirect using "mod_rewrite" instead of the more conventional methods. In order to implement this, just place the following content in your .htaccess file (assuming, again, that we want visitors to www.myxyz.com to be redirected to www.myotherxyz.com):

Options +FollowSymLinks
RewriteEngine On
RewriteBase /
Redirect 301 / http://www.myotherxyz.com


Looks familiar, yeah? We actually covered that yesterday, as it's one of the most basic rewrites you can do. We just added the "standard recommended" starting lines for your .htaccess file. Those top three lines aren't "always" necessary, which is why we didn't include them in yesterday's example.

Here's another example that shows you how to use "mod_rewrite"'s regular expression functionality to redirect from an htm(l) page on one web server to a page on another web server:

Options +FollowSymLinks
RewriteEngine On
RewriteBase /
RewriteRule ^(.+)\.htm$ http://www.myotherxyz.com/$1.html [R,NC]


This example doesn't do all that much more than the previous, but it "is" a lot more flexible. We'll walk through the RewriteRule, to see what it's doing (and, remember, all of the nitty-gritty can be found on Apache's mod_rewrite page). The line itself, if you're familiar with basic regular-expression syntax, isn't too hard to decipher. The characters that are special in the line are the following:

^ <-- The caret indicates that the match is anchoring to the start of the request.
. <-- The dot character matches any single character (letters, numbers, spaces, tabs, goofy characters, etc). It matches "anything"
\ <-- The backslash character "escapes" special match characters. In this case, we're using it to represent a "real" dot, rather than the regular-expression dot described above.
+ <-- The plus symbol indicates that the character preceding it must exist one or more times if the match is to be considered a success
$ <-- The dollar sign is the opposite of the caret. It indicates that the match is anchored at the end of the request
() <-- Anything within parentheses (on the left-hand-side, or LHS, of the expression) is considered to be a "captured" part of the match. It can be referenced on the right hand side (or RHS) of the rewrite rule using the dollar sign. This sign, as shown above, means something completely different on the RHS of the equation.
$1, $2, $3, etc <-- the dollar-sign-plus-number on the RHS of the expression represents the content of an LHS match (within the parentheses, as written above). The first parentheses match on the LHS is represented by $1, the second by $2, etc.
[] <-- Within the match rule itself, the brackets indicate a range, or "class" (a term that doesn't seem correct, but is used in the mod_rewrite terminology) of characters. For instance [abcd] would match the letter a, b, c or d. At the end of the mod_rewrite rule (as its final component), the characters within the brackets are flags and "do not" represent a range. You can use more than one flag by separating them with commas. In the example above they say that:
R <-- We're doing a "R"edirect. You can add an equals sign and the proper number if you want to be specific. So, a 301 redirect would look like [R=301] and make our expression's end look like [R=301,NC]
NC <-- Even though N and C both have special meanings of their own (in the final part of the rewrite rule), when used together as a single entity they mean that the matching-rule should be "case insensitive." NC equals "No Case," if that helps ;)

And that's that one line. Explained with only 10 or 12 more ;) Basically, it matches any web server request (for an html page - a bit more than our other rule was looking for) and redirects it, like so:

ORIGINAL REQUEST: http://www.myxyz.com/iNdex.htm (Sent to the web server as "/iNdex.htm")
MATCHING RULE APPLIED: ^(.+)\.htm$ captures "/iNdex" ==> http://www.myotherxyz.com//iNdex.html (note that the NC flag affects the matching only; the captured text keeps whatever case it came in with)

The double slashes are squashed into one by default, but you could remove them from the LHS of the match rule by making "^(.+)\.htm$" into "^/(.+)\.htm$" - effectively removing the leading forward slash from the () parentheses match and its captured value ($1) on the RHS.

The standard final argument indicators are listed on Apache's mod_rewrite page and below:

[R] Redirect. You can add an =301 or =302, etc, to change the type.
[F] Forces the URL to be forbidden. 403 header
[G] Forces the URL to be gone. 410 header
[L] Last rule. (You should use this on all of your rules that don't link together)
[N] Next round. Rerun the rules again from the start
[C] Chains a rewrite rule together with the next rule.
[T] Use T=MIME-type to force the file to be a particular MIME type
[NS] Skips the rule when the request is an internal sub-request
[NC] Makes the rule case insensitive
[QSA] Query String Append. Use to add to an existing query string
[NE] Turns off the normal escaping that is the default in the rewrite rule
[PT] Pass through to the next handler (useful together with mod_alias)
[S] Skip the next rule. S=2 skips the next 2 rules, etc
[E] E=var sets an environment variable that can be called by other rules
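
As a quick illustration of how a couple of these flags work together (a hypothetical example - the URL paths are made up), here's a rule that maps a "pretty" search URL onto a real script while preserving any query string that was already on the request:

RewriteRule ^search/(.+)$ /search.php?q=$1 [QSA,L]

With QSA in play, a request for /search/horses?page=2 becomes /search.php?q=horses&page=2. Without it, the ?page=2 part would be thrown away, because the substitution carries a query string of its own.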


In our final example, if you want to redirect all sub-domains of a domain to another site (e.g. redirect home.myxyz.com to www.myxyz.com), you could do it like this:

Options +FollowSymLinks
RewriteEngine On
RewriteBase /
RewriteCond %{HTTP_HOST} !^www\.myxyz\.com$ [NC]
RewriteCond %{HTTP_HOST} ^([^.]+)\.myxyz\.com$ [NC]
RewriteRule ^(.*)$ http://www.myxyz.com/$1 [R=301,L]


The %{HTTP_HOST} variable is filled in from the Host: header the client sends with its request, and the two RewriteCond lines make sure the rule only fires for sub-domains other than www (which also keeps us out of a redirect loop). The L in the final argument means that, if this match is made, it will be considered the last match and no more rules in your .htaccess file will be processed. In our small example this doesn't really make a difference, but it can help with flow control if you have multiple rules and want to quit matching if you match this condition.
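
If you want to sanity-check a rule like this without firing up a browser, something along these lines does the trick (hypothetical domains, of course, and assuming you have curl installed):

$ curl -sI http://home.myxyz.com/index.htm | egrep -i '^(HTTP|Location)'
HTTP/1.1 301 Moved Permanently
Location: http://www.myxyz.com/index.htm

The -I asks the web server for the headers only, and the Location: line shows you exactly where the redirect is pointing.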

Hopefully today's quick run-down has been helpful. Let us know - too much information, not enough, not-enough-of-some-but-too-much-of-another? We're always interested in hearing your thoughts :)

Cheers,

, Mike





Thursday, February 26, 2009

Simple Site Redirection On Apache For Linux Or Unix

Hey there,

Today we're going to look at simple ways you can do site redirection using a few basic methods for Apache on Linux or Unix. I'd presume that the functionality works exactly the same for Apache on Windows, but that would (to misquote a cliche) make a "pres" out of "u" and "me" - yeah, the "assume" version of that plays much nicer ;) Tomorrow, we're going to follow up with more advanced ways to do site and subdomain redirects using "mod rewrite".

Site redirection, in and of itself, is fairly easy to do with just straight-up HTML or PHP, but it has its limitations. For instance, if you wanted everyone who visited www.myxyz.com to be redirected to www.myotherxyz.com, all you'd have to do would be to add some simple code to your index.htm(l) or index.php file. In HTML, you could just add this line in the <head> section:

<meta http-equiv="refresh" content="30;url=http://myotherxyz.com/">

This would allow thirty seconds to pass before the redirect was called. You might want to make that shorter or longer, depending upon how much interactivity you want your visitor to have before being redirected. You may want to allow them to use the old site if they want and redirect them if they don't click on a link in 30 seconds, or something like that. If you really don't want to give them any choice, try setting the time (content) to 0. Although this, technically, redirects immediately, depending on someone's connection, the server hosting the website, etc, a slow page load could cause the user to see the original page before it refreshes to the new site. If you're only using a page for this purpose, you should leave the body blank or give it just a background image similar to your new site's.
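Just to put that line in context, a bare-bones redirect-only page (a quick sketch, assuming www.myotherxyz.com is the new home and that you want the redirect to fire immediately) might look like this:

<html>
<head>
<meta http-equiv="refresh" content="0;url=http://www.myotherxyz.com/">
<title>We've moved!</title>
</head>
<body></body>
</html>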

PHP is a bit faster, although still subject to the limitations of the interacting components that affect the HTML redirect (though not to the same degree). In PHP, you could do the same thing with this little chunk of code:

<?php
// header() calls have to go out before the page produces any other output
header( "HTTP/1.1 301 Moved Permanently" );
header( "Location: http://www.myotherxyz.com/" );
exit;
?>


The most efficient method (of the three we're going run through here today) is to use htaccess. This point will be expanded upon to a good degree in tomorrow's post, since the combination of htaccess and "mod rewrite" can allow you to do a lot more than just standard redirection. Using htaccess, you could create a file called, strangely enough, .htaccess in your site's main directory (or root or home directory, or base directory or whatever you prefer to call it) that contained the same directions as above, although with yet another syntax. A simple 301 site redirect in .htaccess can be written this simply (the entire file from top to bottom):

redirect 301 / http://www.myotherxyz.com/

It just doesn't get any simpler than that :) For the htaccess method, arguments are separated by whitespace (old-location new-location), so you need to double quote any addresses that include spaces. Simple enough:

redirect 301 "/my old index.html" http://myotherxyz.com/

And that's all there is to it! For a little bit of trivia, you may have noted that we ended all of our redirect destinations with a closing / ( www.myotherxyz.com/ as opposed to www.myotherxyz.com ). The reason we do this is that it's actually still standard protocol. You can do the test yourself right now to prove it out. When you send a standard web server a URI request for, say, http://www.google.com (for instance), that actually generates a 301 site redirect and plops you at http://www.google.com/, which is the properly formatted address. It happens very quickly and, of course, only counts when you're requesting a directory. If you specify a file, like http://www.google.com/index.html, you'll either get the page you were expecting or some other error (perhaps even a 301), although it should be noted that some web servers don't do this anymore in all instances (http://www.google.com/news, for example). Still, being a dinosaur, I prefer to stick to the way that should always work.

Give it a shot. It might be more fun than pick-up-sticks ;) If you find a site that does it, you can see the redirect in action using simple Telnet. Take this example using http://abc.go.com/daytime (although this page actually does a 302 redirect - for temporary redirection - right now, and takes you to a specific index page). I grew tired of trying to find sites that use 301 redirection on sub-directories, and you can't telnet to any server and not request a page, unless you want that 404 ("GET /" works, but proving the redirect by doing "GET " doesn't ;)

$ telnet abc.go.com 80
Trying 198.105.194.116...
Connected to abc.go.com.
Escape character is '^]'.
GET /daytime HTTP/1.0

HTTP/1.1 302 Found
Connection: close
Date: Thu, 26 Feb 2009 00:19:27 GMT
Location: /daytime/index?pn=index
Server: Microsoft-IIS/6.0
P3P: CP="CAO DSP COR CURa ADMa DEVa TAIa PSAa PSDa IVAi IVDi CONi
OUR SAMo OTRo BUS PHY ONL UNI PUR COM NAV INT DEM CNT STA PRE"
From: abc07
Set-Cookie: SWID=54AE3DC5-FB42-4F6C-8B10-82E506FFD442; path=/;
expires=Thu, 26-Feb-2029 00:19:27 GMT; domain=.go.com;
Content-Length: 163
X-UA-Compatible: IE=EmulateIE7

<HTML><HEAD><TITLE>Moved Temporarily</TITLE></HEAD><BODY>This document
has moved to <A HREF="/daytime/index?pn=index
">/daytime/index?pn=index
</A>.<BODY></HTML>Connection closed by foreign host.


See you tomorrow for some more-convoluted stuff.

Cheers,

, Mike





Wednesday, February 25, 2009

Back To Basics: Getting File Information Using Perl's Stat Function

Hey there,

Today we're going to take a (somewhat) beginner's look at extracting lots of useful information from files on Linux or Unix using Perl. More specifically, we'll be using Perl's stat() function. If you're a regular reader, you've probably noticed that we've strayed quite a bit from our original motto (which is screaming in the meta-description tag of every page ;) which is to keep up a blog that appeals to Linux and Unix users and/or admins of any skill level. Lately, we kind of feel like we've neglected the folks newer to the Linux/Unix world. Perhaps we're wrong. We're obviously confused. Perhaps, the next time we start a blog, we'll be a bit more specific about the subject matter. That being said, we've made our beds and have resigned ourselves to lie in them, since the floor is getting uncomfortable again ;)

Today we're going to take a brief introductory look at Perl's stat() function and how you can use it to extract lots of useful information from files, directories, etc (actually, everything in Linux/Unix is a "file" of a certain type). It's incredibly easy to use and, even if you've never used Perl before, very easy to implement. If you're still stuck at the Shebang line in point of reference to Perl/shell scripting, check out that post, as we hope it provides a good base for one of the things you're most likely to find in a Perl or shell script. It also goes beyond just scripting and attacks some other basic concepts.

Here's a very simple Perl script (which, as demonstrated, can be written on the command line as well) that will let you know the user id and group id associated with a particular file. We won't be exploring stat() completely in this post; just enough to get you started and comfortable (you could get the same information by running "ls -l" instead of just "ls," but that would defeat the purpose, right? ;):

host # ls
file.txt
host # perl -e '$file="file.txt";($dev,$ino,$mode,$nlink,$uid,$gid,$rdev,$size,$atime,$mtime,$ctime,$blksize,$blocks)=stat($file);print "$file : User: $uid - Group: $gid\n"'
file.txt : User: 1000 - Group: 513


Your output may vary depending on what flavour of Linux or Unix you're using. In script format, this command line would look this way (not really all that much different but, perhaps, easier to read):

#!/usr/bin/perl

$file="file.txt";
($dev,$ino,$mode,$nlink,$uid,$gid,$rdev,$size,$atime,$mtime,$ctime,$blksize,$blocks)=stat($file);
print "$file : User: $uid - Group: $gid\n"';


When you run this (after giving it execute permission for the user - you :) - ) you get the same result (assuming you called it file.pl):

host# ./file.pl
file.txt : User: 1000 - Group: 513
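
If you'd rather see names than numbers, Perl's getpwuid() and getgrgid() functions will do the lookup for you. Here's a little sketch along those lines (same hypothetical file.txt as above):

#!/usr/bin/perl

$file = "file.txt";
# slice out just the uid and gid (fields 4 and 5) from stat()
($uid, $gid) = (stat($file))[4,5];
# in scalar context, getpwuid()/getgrgid() return just the name
$user = getpwuid($uid);
$group = getgrgid($gid);
print "$file : User: $user ($uid) - Group: $group ($gid)\n";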


Granted, today's examples were rudimentary for anyone who's had experience with Perl's functionality and the Unix/Linux shell, but, in this on and off series, we're going to take it a bit slow (we'll probably build on this post every two or three other posts). This is in no way an attempt to dumb-down the blog, just to get back to basics. Think of it as a bracing slap in the face we're giving ourselves to remind us that this blog isn't all about higher concepts.

All of us here, at one point, didn't know what Linux or Unix was, much less how to make any use of it. This interweaving series is for those of you who are just getting started (just like everyone in the business did at some point :)

We look forward to getting more into it. Perhaps the learning curve will be surprisingly fast. At the rate our children are picking up on computers, perhaps the transfer of knowledge won't be quite as difficult as it was in our day. Who can say? We don't know, but we're positive that, as soon as they can, they will ;)

In our next post in this series, we'll look at the components of the Perl stat() function and the rest of the information you can gather from it. Hopefully this post has been easily digestible enough to act as an introduction. If not, please contact us (via the link at the right hand top of every page) and let us know. We write a lot for ourselves (especially Mike, who thinks everything that makes him laugh is funny to everyone else in the entire universe ;), but are committed to providing useful information for you :)

Cheers,

, Mike





Tuesday, February 24, 2009

M4/5000 XSCF Unintentional Denial Of Service

Hey There,

I'm not sure if this is "standard" yet, since I think I found a "bug/feature" in the Sun M4000 this weekend and had a pretty constant relationship with their support department during the investigation. Except for the fact that I had to work, it was a pretty decent experience ;)

We had a very strange situation happen that, in fact, completely crippled two of our M4000 servers. The XSCF cards on them experienced an "issue" and that meant that both servers were, for all intents and purposes, out of commission. And, I don't know if the phrase "for all intents and purposes" really does the situation justice. Those machines were actually completely inoperable. At least, in any useful sense. On the M4000 server, if the XSCF card isn't working correctly (or - while this didn't happen with us - removed), you have no way to boot the server up properly, much less get power to any of the domains the server runs. Yow! So, even though this wasn't our issue, always be extra careful not to jiggle that card loose, since it's directly accessible on the back and can be jostled around using the squeeze-grip attached to the outside.

The trouble all started when we had to Re-IP the XSCF cards on two servers. We'd never done this before and, although it's a practically painless experience, it does lead to one of the ways you can cause this denial of service (purely by accident, of course - please don't do this on purpose, even though it would be very easy to. In fact - don't do it "because" it's so easy - no challenge - where's the reward? ;) Since we had been issued bad IP's in our network build sheet, we had to Re-IP the XSCF's. Part of the process involved issuing the "rebootxscf" command which (with the configuration above) causes everything (or, at least, all platadm and most basic commands) to begin failing strangely (errors on the command which show up as okay in the audit logs, inability to power on domains, etc).

And this is where it got interesting. Through testing of various methods of power-cycling with this plugged into that and that plugged into this with the keyswitch in various positions, we could only find two ways to reproduce the error that made it seem like our M4000's were completely toast and would have to have the XSCF restored to defaults in order to access our domains that (hopefully) wouldn't get destroyed in the process (which they didn't).

Here are the two ways to accomplish the meltdown (followed by an explanation of the "why" - made, I might add, possible by the helpful folks at Sun tech support - I'm not selling anything, just thinking how a lot of times people who do the right thing don't get the recognition they deserve. Murderers, psychopaths and hate groups couldn't buy the free publicity they get in this country ;):

1. Set up either one, or both, of the Ethernet controllers that come standard on the single XSCF card. Then, after you've booted up and configured a domain (basically, made your M4000 usable and useful), remove the serial cable from the pseudo-serial port and replace it with straight Ethernet (connected to the same subnet). This will not cause you any problems. Things will continue to work as normal, except for a few seconds after you insert the Ethernet in the pseudo-serial port, when the XSCF readjusts to having an additional "path" to/from itself and the network. Now, ssh or telnet to the XSCF card and type:

XSCF> rebootxscf

You will be prompted that this will reset your system, which is (presumably) what you want. You'll note, within seconds, that your command prompt comes right back up and you never get disconnected. Now, even if you type something like "showdate," you'll get a "permission denied" error. Not to mention the fact that you can't run any other helpful commands like "showlog" or "rebootxscf."

2. Do the same thing you did before and power cycle your M4000. The M4000 doesn't really have an off switch, so the proper way to do this is to put the keyswitch into maintenance (or service) mode (the picture of the wrench ;) and pull the plugs to both power supplies. This will work even if you just pull the power with no regard for the state of your keyswitch. The machine will never fully come up. The amber "check" LED will go blank, but the flashing green "XSCF" LED will never stop flashing (This would usually mean that it was busy initializing). You should be able to ssh or telnet to your XSCF card, anyway, but the limitations on your usage will be as bizarre as in example 1 (As another for instance, you can create any user you want and give them all the administrative privileges you need to - assuming you have them already - but you can't do nslookup, power on/off/connect-to any domains and, as above, will not be able to run "rebootxscf.") All the restrictions, in both instances, are the same. There are so many goofy things to list, it would take a few pages.

HELPFUL HINT: If you get stuck in this situation and don't have physical access to your box, you won't be able to run "showlogs," but if you have auditadm privileges, you can run viewaudit, which can be helpful, but sometimes says that your commands are completing successfully when you get permission denied on the console and nothing happens ;)

Now, for the why as I understand it. The M8000/9000 series servers actually use the pseudo serial ports to connect primary and secondary (MASTER/SLAVE - Active/Standby) XSCF's for a redundant setup. Apparently, this functionality is going to be introduced into the M4000's. Actually, the possibility is already there (slots available, etc). So, in effect, what we did when we created this "crazy" situation was take advantage of a feature that isn't street-ready yet. Basically, once we attached the Ethernet cable to the pseudo serial port, we made the M4000 think it was the SLAVE XSCF in a redundant setup that doesn't (or maybe, at this point, CAN'T) exist. This knowledge pretty much explains all the insane error messages ;)

I hope this rundown, although drier than usual, is helpful to someone out there. Sunsolve, Sun support and Google couldn't find an answer for 3 days (Actually, I'm not sure if Sunsolve or Google have this yet - There are a few Sun FE's that know it now, though ;)

All in all, a bad experience that resulted in something good.

ONE LAST HELPFUL HINT: If you want to prevent this from ever accidentally happening (since it won't if the pseudo serial port isn't being used) one thing you can do is to plug a serial cable or dongle in the pseudo serial port. It doesn't have to be connected to anything and won't cause you any problems, but may discourage other folks from plugging anything else in there :)

Cheers,

, Mike





Monday, February 23, 2009

Sun M-Series Enterprise Unix Servers And DSCP - Why?

SPECIAL NOTE: Friday's post on the absorption of knowledge was written on Blogspot in IE7 and suffered from some horrible display problems on Firefox and other browsers. It has been completely retooled. Apologies for any inconvenience that post caused. Even though I write some goofy rants, missing text wasn't supposed to be part of that post's intentionally cryptic style. Thanks :) - Mike

Hey there,

Today, we're going to look at DSCP (The Domain to Service processor Communication Protocol) and why it was introduced with the newer M series Enterprise servers from Sun. The first time I ran into it, I thought approximately the following: All right, here's something else that can make my life more complicated ;) Now, not only do I have to hop onto a private network (added security at work) to connect to the service processor (SP), but then I have to be sure that an entirely separate network is functioning correctly in order for me to connect to a domain on my machine if I want to get there directly from the SP. I'm naturally negative, but I keep it inside until I've given myself the time to realize that things are either not as bad or better than I initially assumed, which goes a long way toward maintaining good relationships with other human beings ;)

DSCP, although it "is" another layer of complication, is actually a pretty good idea. In essence, it's an attempt at damage control and resource sharing (which almost never works entirely), on par with earlier "HUGE" Sun computers, which does a pretty decent job at a relatively low cost. And, I use the word "cost" in reference to the amount of work you have to put into getting it all set up and working correctly. It's practically free, in that respect, so kudos (The plural, pronounced kyu-doze) to Sun for adding a little extra protection and making it an additon to an existing model that doesn't encumber the admin or user to a noticeable degree. Once it's set up, it may as well be non-existent. ...unless you have problems with it, but that's a general rule when it comes to anything.

Inter-process communication on the 6800/6900, etc, boxes was handled pretty decently as well, but didn't really lay the groundwork for the possibility of scaling easily in the future. On those boxes, shared memory (for example) was "stuck" to a single SP, so if you had multiple SP's, you essentially had multiple machines, each alienated from the other with regards to system resources. The 6800/6900's were, as one would expect, worse in this regard than the 15k-and-up models (check out the price tags - sometimes they can be an indicator of quality. Is the difference in ease of sharing worth it? Depends on your situation).

The 15k-and-up servers ran physical Ethernet from the service processor(s) to the domain controllers (whereas, if I forgot to mention above, the older mailbox-architecture was embedded and much more difficult to "change"). This setup made it slightly easier to re-allocate resources since you basically had a MAN setup on your Sun system (Maintenance Area Network, for those of us who still care ;). Since the MAN operated at the application layer of the stack, all you had to do to scale up when you added a new product or application was assign a new TCP port. And that was that: sharing made much simpler. There's a lot of nitty-gritty behind it all, but who wants to read about that? If you do, check out docs.sun.com and go nuts ;)

So, now, we've made it all the way up to the almost-present (we'll just consider it the "now" for now ;) and the M-Series servers. Since the MAN configuration of the 15k-and-up servers worked out so beneficially, that concept was guaranteed to be built in to the next generation of Enterprise computers. The one big drawback, from an architectural perspective, was the fact that the 15k-and-up servers had a whole bunch of externally exposed cables patched all over the place to maintain that separate MAN network (lots of ways to goof that up). DSCP is Sun's way of taking care of that issue while maintaining the extended reliability (and security) introduced with the MAN concept.

DSCP reproduces the 15k-and-up MAN using shared RAM, a pseudo-serial driver and PPP. To the user, this means all the benefits of a MAN, without all of the cords ;) Actually, it's a very nice implementation of physical-to-virtual transformation. And, as we all know, in about 10 to 15 years, we'll all be virtual and the only people left alive on the planet earth will be maintenance technicians ;)

The best thing about DSCP is how incredibly complicated it is to set up. Just kidding ;) It's actually incredibly simple. So simple that, if you're like me, you're wondering when a monkey is going to be able to start doing your job ;) Assuming any number of domains attached to a service processor, setup of DSCP is as simple as this:

XSCF> setdscp -i 10.0.0.1 -m 255.255.255.0
Commit these changes to the database? [y|n] : y


or, even simpler:

XSCF> setdscp
DSCP network [10.0.0.1 ] >
DSCP netmask [255.255.255.0 ] >
XSCF address [10.0.0.2 ] >
Domain #00 address [10.0.0.3 ] >
Domain #01 address [10.0.0.4 ] >


These examples are from an M4000 with only 2 domains, but the flavour stays the same on the larger boxes in the series. The only thing you really have to worry about (just like your MAN networks) is that you pick a network that doesn't get used. Generally, as shown above, using a non-routable network is best practice. Although I typed a class C-style 24-bit netmask for what's really a class A private address (and used 1 for the network instead of 0), it doesn't really make a difference.

Of course, displaying the information is just as simple (should you forget):

XSCF> showdscp

DSCP Configuration:

Network: 10.0.0.1
Netmask: 255.255.255.0

Location Address
---------- ---------
XSCF 10.0.0.2
Domain #00 10.0.0.3
Domain #01 10.0.0.4


The key benefit, aside from the easier resource sharing that carries over from the MAN days, is the extra protection each domain is provided. Assuming an exploit is committed against one domain, whoever's gotten onto your box and screwed up that configuration will have to work harder to get to the other connected domains, since they'll have to go through the XSCF to get there. There is no direct connection between domains. Although, since I know somebody out there is thinking this, it "is" still possible to attack all the domains on an M-Series machine at once; just not in an overt fashion. For instance (and I'm not encouraging this behaviour in any way whatsoever), if you can create a situation whereby the administrator needs to power-cycle his/her M-Series server to restore functionality to the exploited domain, you've just brought all of the domains down in one fell swoop.

And, in closing, if you can't get to the XSCF to check out the DSCP configuration and you have the proper privilege and access to a domain hosted on an M-Series server, you can obtain that same information using the prtdscp command. Even more convenient, you can SSH (assuming you've set it up) directly from your domain to the DSCP IP using a command similar to:

host # ssh `prtdscp -s`

If you work somewhere that can afford it, enjoy the convenience :)

Cheers,

, Mike





Sunday, February 22, 2009

Unix And Linux Cartoon Roundup

Once again, a pleasant Sunday to you all,

SPECIAL NOTE: Friday's post on the absorption of knowledge was written on Blogspot in IE7 and suffered from some horrible display problems on FireFox and other browsers. It has been completely retooled. Apologies for any inconvenience that post caused. Even though I write some goofy rants, missing text wasn't supposed to be part of that post's intentionally cryptic style. I will add this apology/comment to Monday's post as well. Thanks :) - Mike

I found a nice little collection of even more comics online at Linux Toons. Given the site's name, I'm not sure what that says about my powers of observation ;)

If you want to check out the whole collection, please drop by Linux Toons and give 'em a good going over. There are plenty more laughs to be had over on their side of the fence :)

Cheers,





[Dilbert, 2007-01-25. I thought Linus was from Finland? Or is that Eric Raymond?]

[M again, 2007-12-04. Mads bashing Windows BSOD, and also L'Oréal. The text says "Because you're worth it, Peasant!"]

On a completely but not quite unrelated note, here are some cool Unix products:

[Unix nappies. (source)]

[Unix fire extinguisher. Picture by Tormod.]



, Mike





Saturday, February 21, 2009

More Linux and Unix Comics: Adam@home

Happy Saturday Everyone,

After an exhausting week that isn't over until Sunday Night/Monday Morning, I always like to unwind with some funnies. I found this one over at gocomics.com. It's from a little strip called Adam@home and may be a lot funnier than I think it is ;) I have yet to finish all of them. Lots and lots of words to fuss through ;)

Cheers, and enjoy your weekend :)

, Mike





Friday, February 20, 2009

The Follow-Up: Absorption Of Knowledge In The Computer Age

Hello again, --> GENERAL WARNING: If you don't like to read a lot of text online, blow off this post! It's a tech opinion piece and won't solve any problems that can't be cured by slowing your life down a bit and paying more attention to the moment. That being said, even though this text is meant to be right above the "F" I'm going to put this disclaimer here anyway, just to see how many complaints I get. This post is full of misleading, and often completely incorrect, information and a few typos. For instance, I'm well aware that the square of 25 is not 3. Sometimes, I use the wrong case (we instead of I) on purpose. I don't address some points at all. Depending upon how some folks read
online (based on my own stockpile of comments and responses to posts) you may very
well have either missed what I just wrote or will have forgotten it by the time you
get to the point where I make that false assertion. Hopefully we're below the Adsense block now, so we can begin with the top of the "F."

Today I'm going to follow up on post we did at the beginning of this week regarding absorption of knowledge in the computer age. And I'm going to get a few things out of the way right off the bat, just in case you're the type of "F" reader they refer to in that previous post (more on that farther down the page ;) I was originally going to post some ridiculously bad code, explain that I was doing it on purpose, and why, with text in the "F pattern" and see how many people still complained about the shoddy quality of the code. Instead, I looked back at some older posts where the code was intentionally bad (in order to illustrate concepts related to the porting of shell to Perl or other programming languages more easily - since if two different languages are more structurally and visually similar, it's generally easier for anyone to process their own translation - my opinion, of course ;) and decided that, since I've already done that, it would be cheap to do it again. Easy for me. Lame for you. Believe it or not, that original series of posts on porting continued to generate complaint mails, even though the follow-up (linked to above) and rationale was spelled out within the top section of the post and is just a small part of a pretty sparse read; especially when compared to the endless lunatic yammering I slap down on the virtual page nowadays ;) If that link generates more complaints about improper or inefficient code we'll be numb with indifference... There's only so much caring one man can do ;) This has been the (possibly slightly goofed up by Adsense) top of the "F."

instead, i'm going to attempt to explain (and have fun with)
one basic concept in the aRticle from our prEvious post on computer reADing habiTs.
tHE writeR begIns the article (quite cleverly, actually, since he's assuming
you'll be the type of reader he describes and packs that information in the
relevant places) exploring the theory that most children (and a lot of teenaGers
and adults) are "f" readers when it comes to reading text on tHe inTernet. the
"f" is Somewhat of a double entendre (I'm going with the strict french translation
of "DoublE understanding" since the term is mOstly used to describe a "phrase"
rather than a single letter ;) in this case, since it's used as both a mnemonic
(the less to remember you by, my dear ;) for "F"ast and as a visual descripTion
of the patH most onlIne reader'S eyes follow when reading books or text on the
comPuter screen. if you've reAd this faR, the remAinder of the post should be
completely uninteresting. we tried to do a few thinGs on purpose to make this
post enteRtAining on more than just a Plain level. wHatever doesn't make sense,
probably will when you see the forest :)

Just for the record, I prefer to read books on paper. Not because reading online
makes me lose control like Patty Duke in the presence of a hot dog (see The Patty Duke Show Theme Song
for the full lyrics to that bizarre reference ;), but I find that reading a computer
screen is generally either uncomfortable (sitting in a chair or trying to balance a
laptop on my knees) or outright painful (bad refresh rates and certain color schemes
make my eyes dry up like juicy oranges and twitch uncontrollably). Bless all of you
who enjoy reading lots of text online. If I can, I go the library and get the book
instead. Unprovoked question: If I can't stand reading lots of text on a computer,
why do I write such long blog posts? Am I sadist, a masochist, both or just oblivious? ;)

Generally, I will read every word on an Internet page if, and only if, I have a
particular reason to. This is parallel to my reading habits offline. If I like
an author's work or I find a book on a subject I find interesting, you can bet I'll
read every single word in that book. Speed reading is fine for some (I guess), but
it seems to contradict any sense of enjoyment. Soaking up lyrical prose at 50 pages
a minute would be, probably, a really wonderful way to have an anxiety attack that
would make Evelynn Wood proud ;)

In much the same way (both online and offline), if I'm just looking for a particular
nugget of information, I'm not sure what letter my eye-scan pattern creates, but I
definitely skim. This is one point where I felt the article fell a bit short of
exploring both sides of the issue. The assumption being that people read "everything"
on the Internet the same way they read their Facebook pages doesn't necessarily hold water.
It's certainly true in some situations.

As a for instance: If, for some strange reason, I was surfing the web (nobody seems to do that any more. They stopped driving down the information superhighway, too. Another example of Howard Johnson's Syndrome, except sometimes under water ;), we didn't know the square root of 25 (that's actually supposed to be the strange part ;) and discovered that the only place it was listed was on an online encyclopedia entry on square roots in general, I'd go there and check it out. One humongous old-school web page that covered everything from origin of the square, the root, why
the square and/or the root may have nothing to do with numbers, the history of the square root, detailed biographies of people involved in the development, application
and sustained nutrition of the Lego-like plant, etc. Now, keeping in mind that I only want to know that the square of 25 is 3, I would, of course, skim that page like crazy; probably looking for a table or list of some sort that listed out common
numbers and their square roots. This activity would be considered "F" type reading. More like "spots on a Dalmatian" reading, but who's counting? ;) I would be reading fast, not because I have the attention span of some dumb animal with a very short attention span (that's another situation of an entirely different colour ;), but because there was only one piece of information on that page that I was actually interested in reading. Taken out of context, this might reflect poorly on me and my reading habits, but, taken within the proper context, it would make me seem reasonably intelligent and somewhat efficient.

Another reason kids (there was a big deal made about how having computers in schools
didn't' help improve a child's education, in and of itself) might read more quickly online (And I am doing my very best to defend all of you complete idiots out there -
I'm just kidding, of course, but I'm ready for the hate mail ;) is that a statistical majority of content available for consumption online is worthless crap.
Phenomena like "Ad blindness" and "F" reading are not convenient fictions designed to make us bookish-folk feel superior. Think about how often you go to the library and just browse a few pages from a couple hundred books. Have you ever done that? Really? And, supposing you did, how many of those books would be plastered with advertising (like the advertisements on this page that, if you read regularly enough, basically cease to exist after a certain amount of time)? Or how many would
spontaneously open other books (in the "Adult" section ;)? Or how many would have titles like "The History and Culture of Ancient Sumeria" and actually be about 500 different ways you can eat Haggis and manage not to vomit?

Also, the 10 to 40 dollars that the average person might pay for
unlimited Internet access per month makes the ability to consume
volumes and volumes of useless information incredibly easy. If you
had to pay 40 dollars for a real book, would you buy the same trash
you spend hours mooning over online? My guess is probably not. In
the offline world, that 40 dollar expenditure on an 800 page book
that promised you (no matter how little marketing experience you had)
that it could teach you a revolutionary new method of selling antique
dildoes to geriatric women (or something else less offensive to geriatric
women ;) would seem like "COST." For your 40 dollar
monthly Internet access fee, that 800 page PDF would seem like
"BENEFIT." No matter how I feel about geriatric women
(for some reason, I just can't stop writing that now - I feel like Norm
McDonald talking about "hookers"
;), the way in which you read that
book in the offline world would, most likely, be much different than you way
you would read it online. Offline you'd probably pay more attention. After
all you just spent your 40 dollars and this is all you have to show for it
(those geriatric women won't be on life-support forever ;). Online, you could
skim the book to find all the relevant information you needed about dildoes and
move along, having, theoretically, spent only a few cents of your 40 dollar
investment. You, and whatever other kinks you're carrying around, would still
have 39 dollars and 98 cents worth of perusing to do ;)

DOG'S IS GOOD FOOD! if you've even scanned down THis far, you
probably rEad that line. why? because it was pRintEd In bold in an article that iS
mostly devoid of bolditude (it's NOt a real word, Look it up in the phonE book ;) did you read that and consider that it didn't make a whole lot of sense. are you wondering, right now, if I having just been jerking you around in a solipsistic pseudo-intellectual diatribe with the sole intention of wasting your time? you're probably right. as a matter of FacT, you are almost definitely parked on the left SIDE-street ;) thIs has beeN an experimenT, after all. tHe thIng that'S really most questionable about this Piece is the question of "why would I go to such lengths just to see whAt happens?" think about it. admitting you've just wRitten An opinion piece that doesn't necessarily Gel on puRpose, provides no reAl value and
may not have even been worth scanning is a risky Proposition for a blog autHor. okay, more directly, it's a risky proposition for a blog author who wants you to continue to read his blog and has a reputation to uphold as a semi-competent working
professional in the field. i'm one of those kind of blog writers. but, now that you've read this post and its half-hearted apology are you more or less likely to trust that the next post you read will contain useful and/or relevant information?

Believe it or not, I'm actually interested in your opinion. Let me know what you think. Was this post as fun for you as it was for me? Did you enjoy finding all the incorrect and goofed-up stuff inside it? Do you think you found it all? And, most importantly, does this little math trick at the end redeem this post in any way
whatsoever? I put it on the bottom of the "F" so it would be easy to find, so here it is:

If you want to multiply any number by 5, divide it by 2 and then move the decimal place one over to the right. If you want to multiply any number by 25, divide it by
4 and move the decimal place to the right two spaces. And, yep, it even works for multiplying any number by 125. To do that divide it by 8 and move the decimal place
over 3 spaces. If you have trouble dividing by any number greater than, but a multiple of 2, just divide by 2 twice (to divide by 4) and divide by 2 three times (to divide by 8). If you can't divide by two, I know a great Elementary school. I can probably get you in on my kid's recommendation ;)

Ex:

133 x 5 = (133/2) = 66.5 = 665
133 x 25 = (133/4) = 33.25 = 3325
133 x 125 = (133/8) = 16.625 = 16625

Cool, yeah? "F" me - It's an "E" ;)

Cheers,

, Mike





Thursday, February 19, 2009

Cluster Server Failover Testing On Linux And Unix

A fine how do you do :)

WARNING/GUARANTEE: Today's post is the product of a tired mind that just finished working and didn't have much time to think beyond the automatic. If you feel you may be entertained, please continue reading. If you want to learn some really useful tricks to test a two-node cluster's robustness, this article may be for you, too. If you're looking for information you can apply in the workplace without being escorted from the building by armed guards, proceed no further ;)

As today's post title obliquely suggests, I'm going to take another day to properly formulate my response to our "F" reading experiment (not to be confused with the anti-literacy initiative ;) that we began on Monday. I've received a number of very interesting comments on the subject of the article that got the whole idea rolling around in that butter churn in between my ears. Although none of the responses have radically changed my feelings on the subject, they have augmented them and provided some fresh perspective. Although I still intend to throw a little signature "meta" style into the post (because if we all read in the F formation, my post is going to have to play off of that to whatever degree I can manage :), I'm now reconsidering my original rough-draft and, possibly, working some additional angles into it. I've got some emails out there (as I always request permission to use other folks' opinions when they're kind enough to share) and hope to hear back soon. Worst case, I'll post the original tomorrow and add the comments as their own entities (attached, of course) at a later date.

Also, as this post's title overtly suggests, I spent most of my day testing cluster failover scenarios at work. I won't mention any proprietary or freeware brand names, as this post isn't specific enough to warrant the reference, but, after today's exercise (which, of course, I've had to do more than a couple different ways at a couple of different places of employment), I decided to put together a small comprehensive list of two-node cluster disaster/failure/failover scenarios that one should never push a cluster into production without performing.

It goes without saying that the following is a joke. Which is, of course, why I "wrote" it with my lips sealed ;)

Comprehensive Two-Node Cluster Failover Testing Procedure - v0.00001alpha

Main assumption: You have a two-node cluster all set up in a single rack, all service groups and resources are set to critical, no service groups or resources are frozen and pretty much everything should cause flip-flop (technical term ;)

1. Take down one service within each cluster service group (SG), one at a time. Expected result: Each cluster service group should fault and failover to the secondary node. The SG's should show as faulted in your cluster status output on the primary node, and online on the secondary.

2. Turn all the services, for each SG, back on, one by one, on the primary node. Expected result: All of the SG's should offline on the secondary node and come back up on the primary.

3. Do the same thing, but on the secondary. Expected result for the first test: Nothing happens, except the SG's show as faulted on the secondary node. Expected result for the second test: Nothing happens, except the SG's show as back offline on the secondary node.

4. Switch SG's from the primary to secondary node cleanly. Expected result: What did I just write?

5. Switch SG's from the secondary node back to the primary node cleanly. Expected result: Please don't make me repeat myself ;)

6. Unplug all heartbeat cables (serial, high priority ethernet, low priority, disk, etc) except one on the primary node. Expected result: Nothing happens except, if you're on the system console, you can't type anything anymore because the cluster is going freakin' nuts with all of its diagnostic messages!

7. Plug all those cables back in. Expected result: Everything calms down, nothing happens (no cluster failover) except you realize that you accidentally typed a few really harmful commands and may have hit enter while your screen was draped with garbage characters. The primary node may be making strange noises now ;)

8. Do the same thing on the secondary node. Expected result: No cluster failover, but the secondary node may now be making strange low beeping sounds and visibly shaking ;)

9. Pull the power cords out of the primary node. Expected result: Complete cluster failover to the secondary node.

10. Put the plugs back in. Expected result: Complete cluster failback to the primary node.

11. Do the same thing to the secondary node. Expected results for both actions: Absolutely nothing. But you knew this already. Are you just trying to waste the company's time? ;)

12. Kick the rack, containing the primary and secondary node, lightly. Expected results: Hopefully, the noises will stop now...

13. Grab a screwdriver and repeatedly stab the primary node. Expected Result: If you're careful you won't miss and cut yourself on the razor sharp rack mounts. Otherwise, everything should be okay.

14. Pull the fire alarm and run. Expected result: The guy you blame it on may have to spend the night in lock-up ;)

15. Tell everyone everything's fine and the cluster is working as expected. Expected result: General contentment in the ranks of the PMO.

16. Tell everyone something's gone horribly wrong and you have no idea what. Use the console terminal window on your desktop and export it via WebVNC so that everyone can see the output from it. Before exporting your display, start up a program you wrote (possibly using script and running it with the "-t" option to more accurately reflect realistic timing, although a bit faster). Ensure that this program runs in a continuous loop. Expected Result: General pandemonium. Emergency conference calls, 17 or 18 chat sessions asking for status every 5 seconds and dubious reactions to your carefully pitched voice, which should speak in reassuring terms, but tremble just slightly like you're a hair's breadth away from a complete nervous breakdown.

17. Go out to lunch. Expected Result: What do you care? Hopefully, you'll feel full afterward ;)
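As promised above, a couple of quick sketches. First, for tests 4 and 5, the clean-switch commands might look something like this (the service group name "appsg" and the node names "nodeA" and "nodeB" are made up for illustration; substitute your own and check the hagrp/hastatus man pages for your VCS version):

host # hastatus -sum                      <-- check where everything is online before you start
host # hagrp -switch appsg -to nodeB      <-- test 4: cleanly switch the SG to the secondary
host # hagrp -switch appsg -to nodeA      <-- test 5: and cleanly switch it back
host # hagrp -clear appsg -sys nodeA      <-- if a fault from tests 1-3 is still hanging around, clear it first

And, for test 16, one way (of many) to fake a continuously looping "console" session is with script and scriptreplay. A rough sketch, assuming a Linux box with util-linux's scriptreplay available (the file names are made up, and the trailing 2 is the speed-up divisor):

host # script -t 2>/tmp/panic.tm /tmp/panic.out     <-- record a suitably alarming session, then exit
host # while :; do scriptreplay /tmp/panic.tm /tmp/panic.out 2; done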

Cheers,

, Mike








Wednesday, February 18, 2009

Simple Unix And Linux Shell Tricks To Save You A Few Gray Hairs

What it is?

Tomorrow, we're going to complete the experiment we started in Monday's post on absorption of knowledge in the computer age, so, for today, we're just going to focus on a few little tricks that can save you grief, heartache, strife, worry and all those bad feelings people have to take prescription medication to deal with nowadays ;) Not to belittle chronic anxiety/depression (both symptoms treated with the same drugs) but, as fun as they may be, pharmaceuticals rarely actually solve a "mood disorder." And we use that term lightly. When we were kids, we were either happy, sad, pissed off or excited; any number of emotions that required no medication to correct. None of the kids in the neighborhood had ED, ADD, ADHD, LD's, or OCD's - They were all fussy, uninterested, spastic, stupid and had OCD's ;) Nowadays, too many kids have disorders and the meaning of the word has been devalued. We, here, all have CD's. We listen to them when we want to hear music. We will, of course, be asking our respective physicians about ways in which chemically altering ourselves can help us lose our CD's, and (hopefully) not feel compelled to buy new ones... ;)

Sorry - no offense. We realize it's too late, but, if you have ADD, ADHD or any disorder of that nature, you'll get over this soon enough. Now, what the Hell was this post about, again? ;)

Oh yeah. A few little shell tricks to make your life easier so you can quit popping pills and jump-start the U.S. economy by drinking more :)

1. How to save yourself from having to retype a huge line when, right about at the end of it, you realize it can't be run until you've entered another line that has to precede it - and you can't tack that missing line onto the beginning of the one you're already typing - so you end up hitting ctl-C and typing the whole thing over again:

host # for x in a b c d e f g h i j k l m n o p q r s t u v w x y z ;do ps -ef|grep "[h]orse"|awk '{print $2}'|xargs ptree;don ^C


Rather than stand for that, do yourself a favor whenever you log into your shell: always make sure that you have line editing enabled. In bash, ksh, etc, if you want to enable vi line editing, all you need to do is type:

host # set -o vi

on the command line or, better yet, add it to your .bash_profile, .profile or .bashrc, so line editing will get set every time you log in and you won't have to always remember to do it. If you like emacs, just replace vi in the example above. This way, once you get to the end of that long line, you can type (literally)

[esc]0i#[enter]

That's the escape key (to get into vi command mode), the number 0 (to whisk you back to the beginning of the line), the letter i (to get you out of vi command mode and into insert mode) and the pound (#) symbol (to make the whole line a comment) and then the enter key. This will cause your line to become a comment, just like in a shell script and the shell won't execute it. Then you can type your preceding line and (assuming vi again) type:

[esc]kkx[enter]

Which is the escape key again (to get you into vi command mode), the "k" key twice to move you up two lines in your history (which goes from newest to oldest, bottom to top), the x key (to delete the # (pound) character) and then the enter key to have the shell execute your command line :) Yay! One down; one to go. Unfortunately, this won't work for shells that don't support line editing (like the bourne shell, as opposed to most Linux sh's, which are usually just bash or dash)
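To make that concrete, here's a quick, made-up transcript (the file names and directory are hypothetical). Suppose you're at the end of a loop that copies logs into a directory you suddenly remember you haven't created yet:

host # #for f in *.log ;do cp $f /tmp/backup ;done    <-- after [esc]0i#[enter]: a harmless comment
host # mkdir /tmp/backup                              <-- the line you forgot
host # for f in *.log ;do cp $f /tmp/backup ;done     <-- after [esc]kkx[enter]: the original line, run for real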

2. How to clean up a huge mess when you untar a file that doesn't contain its own subdirectory. For instance, if you have this in your directory:

host # ls
file1.tar a b c d


and you untar file1.tar (which meets the above conditions), you might end up with this:

host # tar xpf file1.tar
host # ls
file1.tar ab a bb b cb c db d x y z


and some of those new files might be directories with files in them, etc. This situation shouldn't be too bad, since you have very little in your directory. But sometimes, if you do it in /usr/local/bin (and, to add some more stress, can't rely on the file dates) this can create a very confusing situation. What do you get rid of without destroying everything?

There are a number of ways to get around this issue, but this one seems the fastest, with the least amount of hassle (feel free to combine the steps when you do this yourself; we're just keeping them apart for illustrative purposes):

To see what you've untarred (probably not necessary, but worth it if you happen to eyeball an important file you accidentally overwrote - another issue entirely ;)

host # tar tpf file1.tar
ab
bb
bb/a
cb/c/d
cb/c
db
x/y/z.txt
y/x
z


Now, you know what was in your original tar file and, therefore, what you should be deleting :) It's very simple to do, but we'd recommend you run this command once, like this:

host # echo $(ls -1d `tar tpf file1.tar`)

just to be sure, and (if that looks good) remove the tar file contents and then do whatever you want with the tar file (like extract it to another directory). One caution: if the tarball created any directories, plain rm will refuse to remove them; only add the -r flag if you're certain those directories didn't exist before you untarred:

host # rm $(ls -1d `tar tpf file1.tar`)

Also, if your shell doesn't support the $() expansion operators, you can always backtick your internal backticks like so: (or do something even more clever - there are probably more than a few ways to skin this cat ;)

host # rm `ls -1d \`tar tpf file1.tar\``
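Of course, if you haven't extracted yet, the cheapest insurance is to never make the mess in the first place: list the tarball (as above) or unpack it into a scratch directory and look it over before moving anything into place. A minimal sketch - /tmp/scratch is just an example location, and the -C flag is a GNU tar convenience (with older tars, cd into the scratch directory in a subshell and hand tar the tarball's full path instead):

host # mkdir /tmp/scratch
host # tar xpf file1.tar -C /tmp/scratch    <-- extract somewhere harmless
host # ls /tmp/scratch                      <-- look before you leap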

We'll see you tomorrow for the reading experiment. SPOILER ALERT: It's not what you think it is, unless you think it's what it is ;)

Cheers,

, Mike








Tuesday, February 17, 2009

Adding Slightly Different Types In VCS On Linux And Unix

Hey there,

Today we're going to take a look at creating new "types" for use with Veritas Cluster Server (VCS). In a broad sense of the term, almost everything you'll ever define in your main.cf (the main configuration file for VCS) is based on a specific "type," which is actually described in the only standard include file in that configuration file: types.cf - Note that both main.cf and types.cf are located in /etc/VRTSvcs/conf/config. You could move the types.cf to an alternate location fairly easily and only have to modify the "include" line in main.cf, but it's not recommended. VCS has the potential to make your life miserable in many built-in ways ;)

For instance, if you had an entry like this in your main.cf:

Apache httpd_server (
httpdDir = "/usr/local/apache"
HostName = www
User = nobody
ConfigFile = "/usr/local/apache/conf/httpd.conf"
)


that Apache instance you described (its name "httpd_server" is arbitrary and/or up to you, but is how you would reference that instance of that type later on in the config file) would actually be based on the "Apache" type (all types in VCS are "cAse SensiTIVe ;) in types.cf, which is described thusly (as you can see, it has many attributes that, mostly, are left at their defaults. We've italicized the points that we've specifically defined above):

type Apache (
static str ArgList[] = { ResLogLevel, State, IState, httpdDir, SharedObjDir, EnvFile, HostName, Port, User, SecondLevelMonitor, SecondLevelTimeout, ConfigFile, EnableSSL, DirectiveAfter, DirectiveBefore }
str ResLogLevel = INFO
str httpdDir
str SharedObjDir
str EnvFile
str HostName
int Port = 80
str User
boolean SecondLevelMonitor = 0
int SecondLevelTimeout = 30
str ConfigFile
boolean EnableSSL = 0
str DirectiveAfter{}
str DirectiveBefore{}
)


As you may have noted, a lot of defaults are set in the standard type definition. For instance, the Port is set to 80 by default. You could override that in your main.cf by simply including a line in your "Apache httpd_server (" definition that read:

Port = 8080


or whatever you preferred.

Assuming that you will only be running Apache web servers on either port 80 or 8080 (we're going to skip 443, since it "silently" gets defined if you set "EnableSSL = 1" which includes that port automatically - although we may be putting that in a way that seems slightly off ;) you can either do things the easy way and just describe two differently-named Apache instances in your main.cf, like so (Be sure to check out our older posts on modifying the main.cf file properly if you're uncertain as to whether or not you're updating and/or distributing config files appropriately):

Apache regular_server (
httpdDir = "/usr/local/apache"
HostName = www
User = nobody
Port = 80
ConfigFile = "/usr/local/apache/conf/httpd.conf"
)

Apache alternate_server (
httpdDir = "/usr/local/apache"
HostName = www
User = nobody
Port = 8080
ConfigFile = "/usr/local/apache/conf/httpd.conf"
)


Or do things the hard way. The hard way can be dangerous (especially if you make typos and/or any other serious errors editing the types.cf file - back it up before you muck with it ;) and is generally not necessary. We just made it the topic of today's post to show you how to do it in the event that you want to customize to that degree and/or need to. Keep in mind that (if you change, or add to, types.cf) you should always keep a backup of both the original and your modified version of types.cf handy. If you ever apply a patch or service/maintenance pack from Veritas, it may very well overwrite your custom types.cf file.

Assuming you've read the preceding paragraph and are willing to take the risk, you might modify your types.cf file to include two different versions of the Apache type: one for servers running on port 80 (the default) and one for servers running on port 8080. As we mentioned, types in types.cf are "case sensitive," which makes it easy to create new types with names that are nearly identical, yet still unique. This works out well in Unix and Linux, since most types are associated with an actual binary directory in /opt/VRTSvcs/bin (which gets the OS involved, and the OS is case sensitive).

So, assuming we wanted to add an "ApachE8080" type to types.cf, our first move would be to duplicate/modify the binary directory in /opt/VRTSvcs. In our example, this is very simplistic, since we're not creating a new type from scratch and doing everything the hack'n'slash way (If you prefer order over speedy-chaos, check out the "hatype," "haattr" and "hares" commands, although not necessarily in that order ;)
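For the record, that more orderly route might look roughly like the sketch below. We're reciting the hatype/haattr syntax from memory here, so treat it as an approximation and verify it against your VCS documentation before running any of it:

host # haconf -makerw                         <-- open the cluster config read-write
host # hatype -add ApachE8080                 <-- register the new resource type
host # haattr -add ApachE8080 Port -integer   <-- give it a Port attribute
host # haattr -default ApachE8080 Port 8080   <-- ...with a default of 8080
host # haconf -dump -makero                   <-- save and re-protect the config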

host # ls /opt/VRTSvcs/bin/Apache
Apache.pm ApacheAgent monitor online
Apache.xml clean offline


The /opt/VRTSvcs/bin/Apache directory contains a number of programs and generic "methods" required by VCS for most resource types (As a counter-example, persistent types like NIC don't require the "online" or "offline" scripts/methods listed under Apache, since they're "always on" in theory ;). In order for us to create our ApachE8080 type, we'll need to do this one simple step first:

host # cp -pr /opt/VRTSvcs/bin/Apache /opt/VRTSvcs/bin/ApachE8080
host # ls /opt/VRTSvcs/bin/ApachE8080
Apache.pm ApacheAgent monitor online
Apache.xml clean offline
host # diff -r /opt/VRTSvcs/bin/Apache /opt/VRTSvcs/bin/ApachE8080
host #
<-- In this case, no news is good news :)


Now, all we have to do is modify our types.cf file. We're not sure if it's required, but we always duplicate the empty lines between type definitions just in case (It doesn't hurt). This is what our new type will look like (Note that the only line changed - aside from the name of the type - is the "int Port" line which we've italicized again):

type ApachE8080 (
static str ArgList[] = { ResLogLevel, State, IState, httpdDir, SharedObjDir, EnvFile, HostName, Port, User, SecondLevelMonitor, SecondLevelTimeout, ConfigFile, EnableSSL, DirectiveAfter, DirectiveBefore }
str ResLogLevel = INFO
str httpdDir
str SharedObjDir
str EnvFile
str HostName
int Port = 8080
str User
boolean SecondLevelMonitor = 0
int SecondLevelTimeout = 30
str ConfigFile
boolean EnableSSL = 0
str DirectiveAfter{}
str DirectiveBefore{}
)


And, you're all set. Now you can change your main.cf file to look like the following and everything should work just as if you had done it the easy way (Be sure to check out our older posts on modifying the main.cf file properly if you're uncertain as to whether or not you're updating and/or distributing config files appropriately):

Apache regular_server (
httpdDir = "/usr/local/apache"
HostName = www
User = nobody
ConfigFile = "/usr/local/apache/conf/httpd.conf"
)

ApachE8080 alternate_server (
httpdDir = "/usr/local/apache2"
HostName = www
User = nobody
ConfigFile = "/usr/local/apache2/conf/httpd.conf"
)


and, now, you no longer have to go through all the trouble of adding that "Port = 80" (or "Port = 8080") line to your main.cf type specification...

Six of one, half dozen of another? Whatever works for you, depending on your situation, is our basic rule. Or, in other words, 13 of one, baker's dozen of another ;)

Cheers,

, Mike








Monday, February 16, 2009

The Absorption Of Knowledge In the Computer Age: The Setup

Hey There,

Today's post is going to be somewhat of a departure from the material you usually read on this blog, although it will fit perfectly when it gets book-ended in the middle of the week by its follow-up. If you actually read the entire text below, you'll have already disproved the theory it proposes, however, based on an online experiment we want to do on this blog ( which "will" contain useful and relevant information ;) it must be posted before the actual implementation. And, to the detriment of the massive amount of money we make typing this rag every day ( insert appropriate emoticon here ;) we need to post it on a day that's generally among the more heavily read. In fact ( for this post, and its follow-up ) we're going to specifically request that you do not click on any of the ads on this page ( Google, affiliate, etc - which we must point out is in "no way" an effort to direct you to click on any ad on this page. Hopefully that will keep Google from closing our account. It's very important that this not happen, since we need that payday to buy snacks ;)

For a bit of a preview of what we have coming up this week, we somewhat disagree with the opinion put forth in the attached article below. In some respects, though, we think it's dead on. We'll see. And so will you... maybe ;)

We'll revisit this before the week is out. If you want to read more about online reading habits, check out chronicle.com.

Let the experiment begin and enjoy the dry prose! :)


Online Literacy Is a Lesser Kind



Slow reading counterbalances Web skimming








When Jakob Nielsen, a Web researcher, tested 232 people for how they read pages on screens, a curious disposition emerged. Dubbed by The New York Times "the guru of Web page 'usability,'" Nielsen has gauged user habits and screen experiences for years, charting people's online navigations and aims, using eye-tracking tools to map how vision moves and rests. In this study, he found that people took in hundreds of pages "in a pattern that's very different from what you learned in school." It looks like a capital letter F. At the top, users read all the way across, but as they proceed their descent quickens and horizontal sight contracts, with a slowdown around the middle of the page. Near the bottom, eyes move almost vertically, the lower-right corner of the page largely ignored. It happens quickly, too. "F for fast," Nielsen wrote in a column. "That's how users read your precious content."



The F-pattern isn't the only odd feature of online reading that Nielsen has uncovered in studies conducted through the consulting business Nielsen Norman Group (Donald A. Norman is a cognitive scientist who came from Apple; Nielsen was at Sun Microsystems). A decade ago, he issued an "alert" entitled "How Users Read on the Web." It opened bluntly: "They don't."



In the eye-tracking test, only one in six subjects read Web pages linearly, sentence by sentence. The rest jumped around chasing keywords, bullet points, visuals, and color and typeface variations. In another experiment on how people read e-newsletters, informational e-mail messages, and news feeds, Nielsen exclaimed, "'Reading' is not even the right word." The subjects usually read only the first two words in headlines, and they ignored the introductory sections. They wanted the "nut" and nothing else. A 2003 Nielsen warning asserted that a PDF file strikes users as a "content blob," and they won't read it unless they print it out. A "booklike" page on screen, it seems, turns them off and sends them away. Another Nielsen test found that teenagers skip through the Web even faster than adults do, but with a lower success rate for completing tasks online (55 percent compared to 66 percent). Nielsen writes: "Teens have a short attention span and want to be stimulated. That's also why they leave sites that are difficult to figure out." For them, the Web isn't a place for reading and study and knowledge. It spells the opposite. "Teenagers don't like to read a lot on the Web. They get enough of that at school."



Those and other trials by Nielsen amount to an important research project that helps explain one of the great disappointments of education in our time. I mean the huge investment schools have made in technology, and the meager returns such funds have earned. Ever since the Telecommunications Act of 1996, money has poured into public-school classrooms. At the same time, colleges have raced to out-technologize one another. But while enthusiasm swells, e-bills are passed, smart classrooms multiply, and students cheer — the results keep coming back negative. When the Texas Education Agency evaluated its Technology Immersion Pilot, a $14-million program to install wireless tools in middle schools, the conclusion was unequivocal: "There were no statistically significant effects of immersion in the first year on either reading or mathematics achievement." When University of Chicago economists evaluated California schools before and after federal technology subsidies (the E-Rate program) had granted 30 percent more schools in the state Internet access, they determined that "the additional investments in technology generated by E-Rate had no immediate impact on measured student outcomes." In March 2007, the National Center for Education Evaluation and Regional Assistance evaluated 16 award-winning education technologies and found that "test scores were not significantly higher in classrooms using selected reading and mathematics software products." Last spring a New York State school district decided to drop its laptop program after years of offering it. The school-board president announced why: "After seven years, there was literally no evidence it had any impact on student achievement — none."



Those conclusions apply to middle-school and high-school programs, not to higher education (which has yet to produce any similarly large-scale evaluations). Nevertheless, the results bear consideration by those pushing for more e-learning on campuses.



Backers, providers, and fans of new technology explain the disappointing measures as a matter of circumstance. Teachers didn't get enough training, they say, or schoolwide coordination was spotty, parents not sufficiently involved. Maybe so, to some extent, but Nielsen's studies indicate another source. Digitized classrooms don't come through for an off-campus reason, a factor largely overlooked by educators. When they add laptops to classes and equip kids with on-campus digital tools, they add something else, too: the reading habits kids have developed after thousands of hours with those same tools in leisure time.



To teachers and professors, a row of glistening new laptops in their classroom after a dozen years with nothing but chalk and blackboard, or a podium that has been transformed from a wooden stand into a multimedia console, can appear a stunning conversion. But to the average freshman walking through the door and finding a seat, it's nothing new. Our students have worked and played with computers for years. The Horatio Alger Association found that students in high school use the Internet four and a half hours per week for help with homework (The State of Our Nation's Youth, 2008-2009), while the National School Boards Association measures social networking at nine hours per week, much of it spent on homework help. The gap between viewpoints is huge. Educators envision a whole new pedagogy with the tools, but students see only the chance to extend long-established postures toward the screen. If digitized classrooms did pose strong, novel intellectual challenges to students, we should see some pushback on their part, but few of them complain about having to learn in new ways.



Once again, this is not so much about the content students prefer — Facebook, YouTube, etc. — or whether they use the Web for homework or not. It is about the reading styles they employ. They race across the surface, dicing language and ideas into bullets and graphics, seeking what they already want and shunning the rest. They convert history, philosophy, literature, civics, and fine art into information, material to retrieve and pass along.



That's the drift of screen reading. Yes, it's a kind of literacy, but it breaks down in the face of a dense argument, a Modernist poem, a long political tract, and other texts that require steady focus and linear attention — in a word, slow reading. Fast scanning doesn't foster flexible minds that can adapt to all kinds of texts, and it doesn't translate into academic reading. If it did, then in a 2006 Chronicle survey of college professors, fully 41 percent wouldn't have labeled students "not well prepared" in reading (48 percent rated them "somewhat well prepared"). We would not find that the percentage of college graduates who reached "proficiency" literacy in 1992 was 40 percent, while in 2003 only 31 percent scored "proficient." We would see reading scores inching upward, instead of seeing, for instance, that the percentage of high-school students who reached proficiency dropped from 40 percent to 35 percent from 1992 to 2005.



And we wouldn't see even the better students struggling with "slow reading" tasks. In an "Introduction to Poetry" class awhile back, when I asked students to memorize 20 lines of verse and recite them to the others at the next meeting, a voice blurted, "Why?" The student wasn't being impudent or sullen. She just didn't see any purpose or value in the task. Canny and quick, she judged the plodding process of recording others' words a primitive exercise. Besides, if you can call up the verse any time with a click, why remember it? Last year when I required students in a literature survey course to obtain obituaries of famous writers without using the Internet, they stared in confusion. Checking a reference book, asking a librarian, and finding a microfiche didn't occur to them. So many free deliveries through the screen had sapped that initiative.



This is to say that advocates of e-learning in higher education pursue a risky policy, striving to unite liberal-arts learning with the very devices of acceleration that hinder it. Professors think they can help students adjust to using tools in a more sophisticated way than scattershot e-reading, but it's a lopsided battle. To repeat, college students have spent thousands of hours online acquiring faster and faster eyes and fingers before they even enter college, and they like the pace. It is unrealistic to expect 19-year-olds to perch before a screen and brake the headlong flight, even if it is the Declaration of Independence in hypertext coming through, not a buddy's message.



Some educators spot the momentum and shrug their shoulders, elevating screen scanning to equal status with slow reading. A notable instance occurred last year, when in an essay in The New York Times, Leah Price, a professor of English at Harvard University, criticized a report from the National Endowment for the Arts — "To Read or Not to Read" (to which I contributed) — precisely for downgrading digital scanning. Her article contained some errors of fact, such as that the 2004 NEA report "Reading at Risk" excluded nonfiction, but correctly singled out the NEA distinction between screen reading and print reading. To Price, it's a false one: "Bafflingly, the NEA's time-use charts classify 'e-mailing' and 'surfing Web sites' as competitors to reading, not subsets of it." Indeed, she said, to do so smacks of guile: "It takes some gerrymandering to make a generation logging ever more years in school, and ever more hours on the BlackBerry, look like nonreaders." (In truth, high-school students do no more in-class reading today than they did 20 years ago, according to a 2004 Department of Education report.)



What we are seeing is a strange flattening of the act of reading. It equates handheld screens with Madame Bovary, as if they made the same cognitive demands and inculcated the same habits of attention. It casts peeking at a text message and plowing through Middlemarch as subsets of one general activity. And it treats those quick bursts of words and icons as fully sufficient to sustain the reading culture. The long book may go, Price concluded, but reading will carry on just as it did before: "The file, the list, the label, the memo: These are the genres that will keep reading alive."



The step not taken here is a crucial one, namely to determine the relative effects of reading different "genres." We need an approach that doesn't let teachers and professors so cavalierly violate their charge as stewards of literacy. We must recognize that screen scanning is but one kind of reading, a lesser one, and that it conspires against certain intellectual habits requisite to liberal-arts learning. The inclination to read a huge Victorian novel, the capacity to untangle a metaphor in a line of verse, the desire to study and emulate a distant historical figure, the urge to ponder a concept such as Heidegger's ontic-ontological difference over and over and around and around until it breaks through as a transformative insight — those dispositions melt away with every 100 hours of browsing, blogging, IMing, Twittering, and Facebooking. The shape and tempo of online texts differ so much from academic texts that e-learning initiatives in college classrooms can't bridge them. Screen reading is a mind-set, and we should accept its variance from academic thinking. Nielsen concisely outlines the difference: "I continue to believe in the linear, author-driven narrative for educational purposes. I just don't believe the Web is optimal for delivering this experience. Instead, let's praise old narrative forms like books and sitting around a flickering campfire — or its modern-day counterpart, the PowerPoint projector," he says. "We should accept that the Web is too fast-paced for big-picture learning. No problem; we have other media, and each has its strengths. At the same time, the Web is perfect for narrow, just-in-time learning of information nuggets — so long as the learner already has the conceptual framework in place to make sense of the facts."



So let's restrain the digitizing of all liberal-arts classrooms. More than that, given the tidal wave of technology in young people's lives, let's frame a number of classrooms and courses as slow-reading (and slow-writing) spaces. Digital technology has become an imperial force, and it should meet more antagonists. Educators must keep a portion of the undergraduate experience disconnected, unplugged, and logged off. Pencils, blackboards, and books are no longer the primary instruments of learning, true, but they still play a critical role in the formation of intelligence, as countermeasures to information-age mores. That is a new mission for educators parallel to the mad rush to digitize learning, one that may seem reactionary and retrograde, but in fact strives to keep students' minds open and literacy broad. Students need to decelerate, and they can't do it by themselves, especially if every inch of the campus is on the grid.



Mark Bauerlein is a professor of English at Emory University. His latest book, The Dumbest Generation: How the Digital Age Stupefies Young Americans and Jeopardizes Our Future (Or, Don't Trust Anyone Under 30), was published by Jeremy P. Tarcher/Penguin this year.



, Mike






