Showing posts with label tcp. Show all posts

Wednesday, December 3, 2008

Analyzing TCP Disconnects On Linux Or Unix

Hey there,

To most users of Linux and/or Unix (or pretty much any other OS with networking capabilities), the most basic concepts of the TCP and UDP protocols are, at least, somewhat familiar. At the very worst, you've heard of them and, at the very best, you have an insanely detailed fingernail grip on everything TCP/UDP-protocol related and have probably written your own protocol by now so you won't have to deal with all this convention ;)

For the most part, everyone falls somewhere in between. One question that I get asked a lot (and used to ask a lot ;) has to do with the TCP protocol. More specifically, with how an established connection goes about gracefully ending. And, if you haven't guessed, the reason the question gets asked so often is that, with all the different states a graceful TCP disconnect goes through, lots of folks (involved in troubleshooting) are curious as to whether the output they're poring over is "good" or "bad." We'll pretend, for the remainder of this post, that the world is black-and-white, cut-and-dried and everything makes sense (and works the way it should) all the time ;)

Since we've never actually covered this topic on this blog before, it makes the most sense (to me, anyway) to lay down the basics of a graceful TCP disconnect. If there's interest (On your end or mine ;), we may follow up with further posts that delve into more detail on the subject. This, of course, means that the typical sequence of events laid out below isn't necessarily how things are always going to go (there are slight differences between Active and Passive disconnects, for one of more than a few instances). For now, we'll stick to the nitty-gritty.

Below is the graceful TCP disconnect, in as much order as I could make of it; the way it's "supposed" to work. This information is only as reliable as your circumstances :) Note that all examples for this post are from Solaris 10, and your exact command names (like netstat) may vary or have slightly different arguments you need to pass them. Also, we'll note some ways that the TCP disconnect can occur that are technically correct, but "unlike" the step-by-step process listed below and generally big pains in the arse.

Also, and this point is so important I'm giving it its own line ;), it helps to remember that, although a proper TCP "connection" can only be established in one way (the infamous "Three Way Handshake"), the same is not true of a TCP "disconnect." TCP is full-duplex, which means that it consists of two flows of data, one flowing in each direction simultaneously, and each direction has to be shut down separately. Since all TCP requests have to be acknowledged (unless you're just pulling the plug ;) a graceful TCP disconnect is a 4-Way, or, more correctly, a 4-"step" process: each side sends a FIN, and each FIN gets an ACK.

Since we're only dealing with the TCP disconnect, we're going to start in the middle of a TCP session. For a given established TCP connection, you'd see something like the following in the output of "netstat -an" (For this entire post, the only really important column in your netstat output will be the one to the far right - from this point on, we'll just deal with that column, to save on space - this post is going over 1200 words as it is) - This status would be the same for both machines, assuming they were engaged in a mutually-agreed-upon TCP network transaction.

TCP4 0 48 192.99.99.99.22 10.99.99.99.34406 ESTABLISHED


There are basically only two kinds of TCP disconnects: The Active disconnect and the Passive disconnect.

Each one is fairly simple to draw up (see Wikipedia's TCP Page for some actual pictures ...or graphs), and, for each, we'll show the netstat status you'll note during your troubleshooting work, so that you can be reasonably sure what point your process is at in the TCP disconnect sequence (we're assuming, as usual, that the disconnect you're looking at is bad and someone - responsible in some way for signing your paycheck - is demanding answers ;)

1. FIN_WAIT_1: At this point the serving host has decided to terminate the connection (textbook Active Disconnection) and sends a FIN packet to the client host. Its side of the socket sits in FIN_WAIT_1 until that FIN gets acknowledged.

2. FIN_WAIT_2: Then the client sends back an ACK (acknowledgement) packet to confirm that it has received the server's FIN packet. Once that ACK arrives, the server's side moves to FIN_WAIT_2 and waits for the client to send a FIN of its own.

3. TIME_WAIT: This occurs when the client responds with its own FIN packet and the server acknowledges it (Sometimes this state will show up as FIN_ACK). The socket lingers here, traditionally for twice the Maximum Segment Lifetime, to be sure that final ACK got through and that no stray packets from the old connection get mistaken for part of a new one.

4. CLOSED: This is a two-parter: In an Active disconnect scenario, this state means the server's TIME_WAIT timer has expired and the session is completely torn down; its FIN was ACK'ed, the client's FIN was ACK'ed, and everybody's done. In a Passive Disconnect scenario, the session is over for approximately the same reasons, except almost in reverse, since the classic Passive Disconnect has the client initiating the disconnection rather than the server; in that case the server's side reaches CLOSED out of LAST_ACK, the moment the client's final ACK of the server's FIN arrives.

5. CLOSE_WAIT: To summarize right through the first 2 (reversed) points above: in a Passive Disconnect, this state is the result of the client sending a FIN packet to the server (initiating the disconnect), to which the server's TCP stack responds with an ACK. The socket then sits in CLOSE_WAIT until the server-side application actually closes its end of the connection.

6. LAST_ACK: Here, the server-side application finally cleans up and closes its end, sending its own FIN packet to the client. The socket waits in LAST_ACK for the client's acknowledgement of that FIN, after which it goes to CLOSED.
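
Before we get to the tips below, it's worth noting that, when you're troubleshooting, the individual states usually matter less than how many sockets are piling up in each one. Here's a quick sketch of a tally pipeline; the netstat sample is inlined and made up so the whole thing is self-contained, but in real life you'd pipe `netstat -an` straight into the awk:

```shell
# Tally sockets by TCP state; the state is always the last
# field ($NF) of each netstat connection line.
netstat_sample='TCP4 0 48 192.99.99.99.22 10.99.99.99.34406 ESTABLISHED
TCP4 0 0 192.99.99.99.22 10.99.99.98.41022 FIN_WAIT_2
TCP4 0 0 192.99.99.99.22 10.99.99.97.55310 FIN_WAIT_2
TCP4 0 0 192.99.99.99.80 10.99.99.96.60122 CLOSE_WAIT'

echo "$netstat_sample" |
    awk '{ states[$NF]++ } END { for (s in states) print states[s], s }' |
    sort -rn
```

A sudden spike in any one state (especially FIN_WAIT_2 or CLOSE_WAIT) is your cue to start digging.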

Hopefully this didn't all come out wrong. I have to troubleshoot this stuff a lot, but writing about it has me now doubting whether I'm awake or asleep ;)

Here are a few helpful tips to take home with you:

1. Generally, if you see a lot of sockets stuck in FIN_WAIT/FIN_WAIT_1 and (especially) FIN_WAIT_2, you should start your stopwatch, or note the time on the clock, and give them a reasonable interval (on the order of 5 to 10 minutes, on many default configurations) before you declare them stuck. You can usually figure the maximum time you should be waiting in this state from your kernel's keepalive and FIN_WAIT_2 timer variables. On a lot of systems these values are tunable, and (if you're running a high volume website, for instance) you might want to look into reducing one, or both, of them, if necessary. If your kernel has a dedicated FIN_WAIT_2 timer variable (Solaris 10's tcp_fin_wait_2_flush_interval, for example), just tune that. Otherwise, if these sockets hang around longer, you can fix the problem by doing everything from figuring out what process ID's are associated with what connected/hung network ports and terminating those, to stopping and starting your network application, all the way up to a full reboot of your physical system.

2. If you see a lot of CLOSE_WAIT sockets languishing around, you can be almost fully assured that the issue is on the application end. This state means the remote side has already sent its FIN and your local TCP stack has ACK'ed it; if the local application never closes its end of the socket (and, therefore, never triggers that final, required, FIN packet), the socket sits in CLOSE_WAIT indefinitely. Your options are relatively the same as above.
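
As a companion to that, here's one rough way to see which remote peers own the lingering CLOSE_WAIT sockets. Again, the netstat lines are canned and made up so this runs standalone; field 5 is the remote address in this layout, but check your own netstat's column order before trusting it:

```shell
# For every CLOSE_WAIT line, strip the port off the remote
# address (field 5) and count stuck sockets per peer.
netstat_sample='TCP4 0 0 192.99.99.99.80 10.99.99.96.60122 CLOSE_WAIT
TCP4 0 0 192.99.99.99.80 10.99.99.96.60123 CLOSE_WAIT
TCP4 0 48 192.99.99.99.22 10.99.99.99.34406 ESTABLISHED'

echo "$netstat_sample" |
    awk '$NF == "CLOSE_WAIT" { sub(/\.[0-9]+$/, "", $5); print $5 }' |
    sort | uniq -c | sort -rn
```

If one peer (or one local port) owns most of them, you've usually found the misbehaving application.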

I'm going to go untwist my brain into the pretzel it was before I started writing this ;)

Cheers,

Mike




Please note that this blog accepts comments via email only. See our Mission And Policy Statement for further details.

Tuesday, December 2, 2008

Predicting Solaris 10 TCP Sequence Numbers Part 1: Initial Discovery

Hey there,

Note: All examples today were produced using Solaris 10. As such, commands like snoop may need to be replaced with ethereal or tcpdump, etc, depending on your OS and/or preference.

Today, we're going to look at a subject that gets plenty of attention (but, maybe not as much publicity) in the world of Unix and Linux network computing: TCP packet sequence number prediction, how it's used to protect network transmissions and whether or not, with the advent of packet checksumming, etc, it's even a factor in basic network security anymore. See here, or at the very bottom of the post, for something potentially interesting about the latest Solaris' network protocol enhancements including SDP.

Not all that long ago (which might mean a long time ago to you, since I'm growin' old ;) TCP number sequencing was pretty simple to predict. In essence, the sequence numbering of TCP packets is required since TCP does its own error-correction and will attempt to resend packets that it doesn't "know" reached their specified endpoint. Numbering, at the most basic level, was/is a great way to ensure that all packets sent actually get received. And, believe it or not, at one point in time that numbering was simply incremental. That means that, given a point-to-point TCP transaction, a simple transmission (from the sender's point-of-view, for instance) might consist of 15 packets. If the first packet was numbered 1 (a gross oversimplification, to be sure, but not too far from being exact ;), the next would be numbered 2, then 3 and so forth until the 15th packet (numbered 15) was successfully sent.

Given the above scenario, and with the aid of hindsight, TCP connection hijacking and other spoofing-based attacks were made quite a bit simpler than they might have been. Of course, once all this "bad" activity began happening, the specifications for TCP packet sequence numbering began to change and become more complicated. Many proprietary OS manufacturers also began to implement their own proprietary sequence numbering algorithms (in accordance with standards, so they could communicate with "unlike" systems). Unfortunately, most of these next-gen specifications were based on simplistic rules like (another gross oversimplification) simple mathematical derivations which didn't use any sort of "outside influence" to affect (or make more unpredictable) the sequence numbering. If you could figure out the equation, you could easily guess what packet number would be next, given one or two sequential packets, and then the whole mess would start all over again. Wikipedia actually has a very nice page on TCP with links to all the relevant RFC's.

This is actually still true today, although OS manufacturers, the Open Source community and white/black-hat hackers are stepping up their games to greater degrees and at a much more condensed pace (by which I mean things now, generally, change more often within a shorter period of time - another gross oversimplification ;)

So, now that it's the present, let's take a look at Solaris 10 and check out what they're doing with regard to this matter (From release 10/08 - the most recent to this date, as far as I know). Assuming we have a connection established with a Solaris 10 server (simple SSH), we can use snoop to gather the information we need to even begin attempting to figure out the TCP packet sequence numbering algorithm they're using (assuming - thinking positive thoughts :) that we can. A simple chunk of output is shown below:

host # snoop -r -d eri0 from 10.99.99.99 to 10.98.98.98
Using device /dev/eri (promiscuous mode)
10.99.99.99 -> 10.98.98.98 TCP D=2782 S=22 Push Ack=2865625835 Seq=673983919 Len=52 Win=50400
10.99.99.99 -> 10.98.98.98 TCP D=2782 S=22 Push Ack=2865625887 Seq=673983971 Len=52 Win=50400
10.99.99.99 -> 10.98.98.98 TCP D=2782 S=22 Push Ack=2865625939 Seq=673984023 Len=52 Win=50400
10.99.99.99 -> 10.98.98.98 TCP D=2782 S=22 Push Ack=2865625939 Seq=673984075 Len=340 Win=50400
10.99.99.99 -> 10.98.98.98 TCP D=2782 S=22 Push Ack=2865625939 Seq=673984415 Len=148 Win=50400
10.99.99.99 -> 10.98.98.98 TCP D=2782 S=22 Push Ack=2865625939 Seq=673984563 Len=148 Win=50400
10.99.99.99 -> 10.98.98.98 TCP D=2782 S=22 Push Ack=2865625939 Seq=673984711 Len=148 Win=50400
10.99.99.99 -> 10.98.98.98 TCP D=2782 S=22 Push Ack=2865625939 Seq=673984859 Len=148 Win=50400
10.99.99.99 -> 10.98.98.98 TCP D=2782 S=22 Push Ack=2865625939 Seq=673985007 Len=148 Win=50400
10.99.99.99 -> 10.98.98.98 TCP D=2782 S=22 Push Ack=2865625939 Seq=673985155 Len=148 Win=50400


Of course, the column we're most interested in (at first, anyway) is the 9th column from the left (assuming spaces and tabs as the default field separator). The column beginning with "Seq=" is what we'll look at first. For the sake of this exercise, we've saved all the packets in a file called "packets." We didn't use -o on the command line, but this is a pretty simple cut-and-paste ;). To isolate those sequence numbers, we could just do the following:

host # awk '{print $9}' packets|sed 's/Seq=//'
673983919
673983971
673984023
673984075
673984415
673984563
673984711
673984859
673985007
673985155


And, here's where it becomes difficult. The first test of a reasonably stable (by which I mean, unpredictable) sequence numbering algorithm is to subject it to two basic tests. That is to say: is it really secure, or is it just a simple numeric or mathematical progression in sheep's clothing?

As we bring part 1 of this endeavour to an end, we'll do the quick and dirty testing (If Solaris fails these, then I may have made a big mistake when I named this post before I finished writing it ;)

1. Are the numbers in simple sequence (e.g. +1 or +2) over and over again?
Answer: No. The numeric differences between consecutive packet pairs (1-2, 2-3, etc) are not constant.
Proof: The 9 differences in order:
673983971-673983919 = 52
673984023-673983971 = 52
673984075-673984023 = 52
673984415-673984075 = 340
673984563-673984415 = 148
673984711-673984563 = 148
673984859-673984711 = 148
673985007-673984859 = 148
673985155-673985007 = 148
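
Rather than doing all of that subtraction by hand, awk will happily compute the deltas from the sequence numbers we isolated earlier:

```shell
# Print the difference between each sequence number and the one
# before it (the first line has no predecessor, so it's skipped).
seqs='673983919
673983971
673984023
673984075
673984415
673984563
673984711
673984859
673985007
673985155'

echo "$seqs" | awk 'NR > 1 { print $1 - prev } { prev = $1 }'
```

That spits out the same 9 differences: 52, 52, 52, 340 and five 148's.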


2. Is there any direct correlation between the differences between packet sequence numbers and any other property of the packets?
Answer: Yes. If we scroll up the page and look at the basic snoop output, we can see that the difference between consecutive sequence numbers correlates exactly with the length of the preceding packet: each packet's Seq is the previous packet's Seq plus the previous packet's Len.
Proof: The packet lengths in order:
host # awk '{print $10}' packets|sed 's/Len=//'
52
52
52
340
148
148
148
148
148
148
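
We can also check that correlation programmatically, right from the raw snoop lines. A small sketch (only the first four packets from the capture above are inlined, to keep it self-contained):

```shell
# For each packet after the first, verify that its Seq minus the
# previous packet's Seq equals the previous packet's Len.
packets='10.99.99.99 -> 10.98.98.98 TCP D=2782 S=22 Push Ack=2865625835 Seq=673983919 Len=52 Win=50400
10.99.99.99 -> 10.98.98.98 TCP D=2782 S=22 Push Ack=2865625887 Seq=673983971 Len=52 Win=50400
10.99.99.99 -> 10.98.98.98 TCP D=2782 S=22 Push Ack=2865625939 Seq=673984023 Len=52 Win=50400
10.99.99.99 -> 10.98.98.98 TCP D=2782 S=22 Push Ack=2865625939 Seq=673984075 Len=340 Win=50400'

echo "$packets" | awk '{
    gsub(/Seq=|Len=/, "")        # strip labels; $9 and $10 are now plain numbers
    if (NR > 1) print (($9 - prevseq == prevlen) ? "match" : "MISMATCH")
    prevseq = $9; prevlen = $10
}'
```

Every line comes back "match", which is exactly the (predictable) pattern we were afraid of.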


It seems as though we have something going here, but it must be harder than this to guess the sequence numbers. It's possible, also, that additional measures are being used that obviate the somewhat-flimsy dependence on convoluted number sequences to safeguard a transmission from "packet injection" or hijacking. For instance, Solaris 10 now makes use of SDP as part of its data transmission security (see, also, the Sockets Direct Protocol Manpage). It may also be that the Devil is in the details and we only looked at a surface scan of our packets. We'll look at all of these possibilities in a near-future post. This isn't over yet :)

Cheers,

Mike





Saturday, April 19, 2008

Snooping Through Email On Solaris

Good morning/afternoon/evening,

For this Saturday's post, I thought I'd put together another bash script on Solaris Unix using snoop. The last time we did this it was to grab cleartext logins and passwords. This time, I thought we'd look at email. SMTP, port 25, in particular.

Snooping through someone's physical mail (delivered to their home) would be a federal offense, but (to my knowledge) if you're a Unix or Linux administrator and have to analyze network traffic and interrogate packets at your place of work, there's almost no way you can avoid getting into other people's business. Most companies have an HR policy that your email is considered private and gaining access to another user's email, without their consent, is blah, blah, blah leading up to, and including, termination. Of course, five minutes after said employee leaves the company, you're probably going to be called upon to provide just that sort of access (or the information gained by that access) to the same department that demanded you never ever do that sort of thing in the first place.

That being said, I'm pretty sure the law at this point is that a company can do whatever it wants with any data you create or use on their systems. If they need to, or want to, they can look at the email you send out and receive and where you go on the web, etc. I'm not sure why corporate America insists on assuring the average employee that their company-owned data is "personal and confidential" when all that email and web traffic has to be scanned by 15 security appliances before it can be allowed to enter or leave the company network. You can't block access to a website (even to, say, everyone in the company) without having to examine where every employee is going when they surf the web, etc. It probably makes for some yawn-inspiring legal battles ;)

Anyway, for the sake of today's argument, you're the root user (or someone with sufficient privilege to run snoop on Solaris Unix) and you're inspecting network traffic on an interface on a machine for a semi-legitimate reason.

If you need to check mail traffic, the simplest thing to do is snoop on port 25. This is the SMTP port and is used for sending and receiving email (you can also look at the POP and IMAP ports, but we'll wrap that into the everything's-pretty-much-all-the-same-when-you-get-right-down-to-it closing). You can get a good deal of information just running a straight-up snoop, like so:

host # snoop -o output_file port 25 <--- This snoop will use the default network interface, only capture traffic on port 25 and write the output to a file called "output_file."

The only problem you have is that, although it's readily clear who sent mail to whom, you can't see "what" they wrote. That's the stuff you want to see if you're going to be intercepting that information in real-time.

Note: In order to see the full contents of a packet, you don't need to use the "-v" flag. In fact, I'd strongly discourage it, since it pumps out about 50+ lines of IP stack layer information that you don't need for each and every packet!

If you want to see the full contents of the packets in snoop, just use the "-x" option and pass it the argument of an offset. "-x" gives you the packet contents, in both HEX and ASCII formats, starting at that offset. You don't really need the HEX, since most humans read ASCII encoded text (like this) a lot more easily ;) A quick way to skip past the header portion of each packet is to set the offset to 54 for TCP traffic (14 bytes of Ethernet header, plus 20 of IP, plus 20 of TCP) and 42 for UDP traffic (same Ethernet and IP headers, but the UDP header is only 8 bytes). So, if you wanted to grab each packet and look at the ASCII contents only, you would type:

host # snoop -x 54 -o output_file port 25 <--- We're assuming TCP for the email transmission, although we'd capture any UDP packets that went to that port, also.

If you're ever snooping a protocol that doesn't fit the standards, or you forget these, you can almost always get the exact same effect (for TCP, UDP and any other protocol) by piping your output_file to awk when you're ready to read it, and just printing out the last field of every record, like so:

host # snoop -o output_file port 25
host # snoop -i output_file 2>&1 | awk '{ print $NF }'
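
If you want to sanity-check that last-field trick without a live capture, the same awk works fine on a couple of canned, made-up snoop-style summary lines:

```shell
# NF is awk's field count for the current record, so $NF is always
# the last field, no matter how many columns a given protocol's
# summary line happens to have.
sample='10.99.99.99 -> 10.98.98.98 SMTP C port=34406 HELO
10.98.98.98 -> 10.99.99.99 SMTP R port=34406 250-hello'

echo "$sample" | awk '{ print $NF }'
```

Which hands you back just the payload-ish tail of each line (here, "HELO" and "250-hello"), regardless of the protocol.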


The script we wrote for today takes one argument: the name of a snoop output file, captured as shown above (you should modify the snoop line inside the script if you want to specify a NIC with the "-d" option). It can be run simply, like this:

host # ./snoopmail.sh output_file

And you'll get somewhat-ugly, but ultimately satisfying, results like this (yet another reason to never send a password via email):

..............To
...some.poor.guy
@xyz12345.com..S
ubject
Update 4....Hi
again....How are
ya,....Your new
password is bU
ggl3s....Please
do not share thi
s information wi
th any one!....T
hanks,.....Secur
ity..


Enjoy your Saturday, everyone :) Best wishes,


Creative Commons License


This work is licensed under a
Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License

#!/bin/bash

#
# snoopmail.sh
#
# 2008 - Mike Golvach - eggi@comcast.net
#
# Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License
#

if [ $# -ne 1 ]
then
echo "Usage: $0 SnoopOutputFile"
echo "Please capture packets with the suggested"
echo "settings: snoop -o output port 25"
exit 1
fi

snoop_file=$1

if [ ! -f "$snoop_file" ]
then
echo "Cannot find snoop output file $snoop_file. Exiting..."
exit 1
fi

snoop -i "$snoop_file" -x 54|sed -n '/DATA/,/QUIT/p'|grep -v SMTP|awk -F":" '{print $2}'|cut -c41- -


Mike




Thursday, April 17, 2008

More Fun With Bash Networking

Greetings,

You may recall a post we did about a week or so ago on accessing the network directly with bash. Today's post is a bit of a follow up to that introduction, with some tips and clarification added for clarity and/or enjoyment ;)

1. The basic setup (this will be the only part revisited from our original post on bash networking). In order to get any of this started, you'll need to exec a file descriptor in bash. The file descriptor can, theoretically, be any number, but you should avoid 0, 1 and 2, as these are generally reserved by most Unix or Linux shells for standard input, standard output and standard error I/O. To exploit bash's "virtual" /dev/tcp filesystem (note that you can use /dev/udp as well; it just depends what you're planning on doing - if you wanted to interact with an old NIS server, you might "need" to use the udp protocol), the setup is simple enough. Assuming we wanted to use file descriptor 9 to hit a google web server on port 80, this is what we'd type on the command line:

host # exec 9<>/dev/tcp/www.google.com/80 <--- We've now connected to www.google.com.

2. The rest of it depends on what you want to do. The most basic operation would be to send information to the new file descriptor, read information from it and close it. Those three things can be done like this:

host # echo -e "GET /search?q=linux+unix+menagerie HTTP/1.0\n\n" >&9 <--- And now, we've requested a search page
host # while read line <&9
do
echo -n $line >&1
done
<--- Now, we read the HTML page that gets returned.
host # exec 9<&- <--- This closes the input side of our new file descriptor
host # exec 9>&- <--- and this closes the output side

3. Using HTTP 1.1 instead of 1.0. This is actually quite easily done (and you can do it through Telnet to port 80 on any given host, as well. Telnet will allow you to connect to any specified port on a host and interact with it. You just need to know what to type ;). It just requires a bit more typing on our part. Assuming we have the initial file descriptor set up, per step 1, we would do this to initiate an HTTP 1.1 connection and read from it.

host # echo -e "GET /search?q=linux+unix+menagerie HTTP/1.1" >&9 <--- Note the lack of the double carriage return. That's deliberate: the blank line is what ends the request's header section, so we leave it off here to make room for the headers on the next line.
host # echo -e "Host: www.google.com\nConnection: close\n" >&9 <--- This line tells HTTP/1.1 that we want to close the connection once our query, or page request, completes. HTTP 1.1 will keep the connection open until it times out unless you do this, while HTTP 1.0 will close it immediately after you send one request.

And then you can go ahead and read the response and close the file descriptors in the same manner as above.

4. Implementing host headers. You may find that more and more web servers require "host headers" when you send them an HTTP 1.1 request. These, technically, are only required for virtual hosting, but it's becoming more and more common for out-of-the-box webservers to run the main server as a virtual host. It's easy to send a request to our file descriptor in this manner, as well. We just need to modify it slightly.

host # echo -e "GET /search?q=linux+unix+menagerie HTTP/1.1\nHost: www.google.com\n\n" >&9 <--- Note that the Host header takes just the hostname, with no http:// in front of it.

5. Sending a POST request rather than the standard GET. This, again, is just a modification on the main method. All you need to tell the server is that you're performing a POST action and send it the POST data (This is basically how the folks in security check to see if they can mess with your cgi forms):

host # echo -e "POST /search?q=linux+unix+menagerie HTTP/1.1\nHost: www.google.com\n\nHere Are All My POST Variable=Value Pairs" >&9
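
For the record, a POST that a strict server will actually accept generally needs a Host header, plus Content-Type and Content-Length headers describing the body. Here's a hedged sketch of building one (the endpoint and form data are made up; in real use you'd follow it with `echo "$request" >&9`):

```shell
# Build a well-formed HTTP/1.1 POST request. ${#body} is the length
# bash reports for the body string, which is what Content-Length
# needs (for plain ASCII, characters == bytes).
body='q=linux+unix+menagerie'
request=$(printf 'POST /search HTTP/1.1\r\nHost: www.google.com\r\nContent-Type: application/x-www-form-urlencoded\r\nContent-Length: %s\r\n\r\n%s' \
    "${#body}" "$body")

echo "$request"
```

The \r\n pairs are there because HTTP officially wants CRLF line endings, even though plenty of servers will forgive a bare \n.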

6. Ignoring the header information when requesting an HTML page using bash networking through file descriptors. This is kind of "voodoo," but is almost 100% guaranteed to work. The first blank line you encounter when you read your response to the query you send to a web server is (almost) always the end of the header section. The header section includes varying information on web server type, version, age, etc, but isn't worth reading if you just want the web page back. You can avoid looking at it by running this after you make the initial GET request:

host # while read -r response <&9
do
line=${response//$'\r'/}
if [ -z "$line" ]
then
break
fi
done


And then you can move on and continue reading from the file descriptor, and closing it, just like in step 2:

host # while read response <&9
do
echo -n $response >&1
done
exec 9<&-
exec 9>&-
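
If you'd like to watch that header-skipping trick work without touching the network at all, you can stand in for the web server with a here-document on file descriptor 9 (the response text below is obviously fake):

```shell
# Open fd 9 for reading from a canned, HTTP-ish response.
exec 9<<'EOF'
HTTP/1.0 200 OK
Content-Type: text/html

<html>hello there</html>
EOF

# Throw away everything up to (and including) the first blank line...
while read -r response <&9
do
    line=${response//$'\r'/}   # strip carriage returns, if any
    if [ -z "$line" ]
    then
        break
    fi
done

# ...and whatever is left on the descriptor is just the body.
cat <&9        # prints: <html>hello there</html>
exec 9<&-
```

The same loop, pointed at a real exec'ed /dev/tcp descriptor, behaves identically; the headers vanish and the page remains.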


7. What you get if you settle. Interestingly enough, you don't actually have to exec a numbered file descriptor at all to use bash's "virtual" tcp (or udp) filesystem. The /dev/tcp and /dev/udp paths are interpreted by the shell itself whenever they show up in a redirection, so, basically, when you login, you've already got a path to the net :) How to use it meaningfully is another thing entirely, but you can do this right out of the gate, just after logging in:

host # echo -e "GET / HTTP/1.0\n\n" >/dev/tcp/www.google.com/80

although the connection only lasts as long as the echo command itself, so you never get to read the response back; which is one good reason the exec'ed file descriptor approach is worth the extra typing.

Exec'ing an additional file descriptor is a better practice when doing this sort of thing, because you don't want to muck with the 3 basic ones. If you make a mistake on file descriptor 9, for example, it's no big deal if it gets disconnected or something else weird happens to it. If file descriptor 0 gets whacked, you won't be able to type to the terminal, and if file descriptors 1 or 2 get destroyed, you won't be able to see some, or all, of the output the shell is sending to your terminal. In any of these cases, you risk losing your connection to your terminal session entirely.

Again, you can use bash's "virtual" networking to connect to any port that already has a service listening on it. Unfortunately, as far as I know, at this time, there is no way to "initiate" (or bind to) a network port using this method and the bash shell (or any Unix or Linux shell, for that matter). If it ever becomes possible, I'll be sure to post about it, because it will open up a whole new can of worms for all of us ;)

Again, if you would like further basic explanation (and a sample script) regarding setting up bash's "virtual" network connections, see our previous post on bash file descriptor networking.

Cheers, and enjoy!

, Mike




Wednesday, April 2, 2008

Using Bash To Access The Network Via File Descriptors

Greetings,

It's been a while since we looked at down-and-dirty shell tricks, and this one comes as a tangential follow-up to our previous post on finding and reading files in the shell when your system is on the verge of a crash.

The first thing I'd like to clear up before we proceed is that, in re-reading that post, I notice that I was being a bit of a ksh-snob in the section regarding reading the contents of a regular file through a file descriptor to save on resource usage. My example line in that post, after exec'ing file descriptor 7, was:

host # while read -u7 line; do echo $line; done

And, as most bash fans probably noted, that syntax doesn't work in bash. That's what I get for only verifying the output in ksh ;) What that line should have read (since this will work in sh, ksh and bash) was:

host # while read line;do echo $line;done <&7
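
And, in case you want to kick the tires on that corrected loop, here it is in a complete, runnable form, reading an ordinary file through file descriptor 7 (the scratch file path is just for the demo):

```shell
# Create a small scratch file, open it on fd 7, read it line by
# line through the descriptor, then close the descriptor and tidy up.
demo_file=/tmp/fd_demo.$$
printf 'alpha\nbeta\ngamma\n' > "$demo_file"

exec 7< "$demo_file"
while read line
do
    echo "got: $line"
done <&7
exec 7<&-

rm -f "$demo_file"
```

Run it and you'll get "got: alpha", "got: beta" and "got: gamma", one per line; the exact same shape of loop we'll use on the network descriptors below.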

Apologies all around. In any event, that "mistake" serves as a fairly decent segue into what we're going to be looking at today. That same basic process of exec'ing file descriptors to read files directly is how we're going to use bash to read from the network directly using /dev/tcp.

And, just to be clear, this "does not" work in Solaris ksh or sh. It will work on Solaris and every Linux I've tested, but only, to my knowledge, in bash. Every version of ksh I've tested (even Solaris 10) can't create the virtual file system required by this operation. I've attached a small bash script, at the end of the post, to check a mail server and http server, so you have a working example of the process.

This method of accessing the network is very useful to know how to do and, going through it step by step, is actually very simple to accomplish. We'll use a mail server as our example since it involves both reading and writing to the network file descriptor.

The first thing you'll want to do is "exec" the file descriptor you'll be using to communicate on the network via tcp. You can do this like so (Just be sure not to exec file descriptors 0, 1 or 2, as these should already be assigned by your operating system as "Standard Input (STDIN)," "Standard Output (STDOUT)," and "Standard Error (STDERR)" :

host # exec 9<>/dev/tcp/mail_server.xyz.com/25 <-- Here we create file descriptor 9 via exec, open the /dev/tcp/mail_server.xyz.com/25 "virtual file" and assign file descriptor 9 to it.

Note that this line is where the script, or process, will either make or break. ksh (Definitely for Solaris 9 and 10) does not allow the creation of the virtual filesystem below /dev/tcp that this operation requires. If you attempt to do this in ksh you will most likely get the following error:

host # /dev/tcp/mail_server.xyz.com/25: cannot create

Assuming we're using bash, and that worked okay, we can now write to, and read from, our mailserver over port 25. This is analogous to doing something like Telnetting to port 25, but we're using a built in feature of the bash shell to access the network via a pseudo file system created under /dev/tcp (It acts much like a /proc filesystem, although you can't see what you've created by looking at the file that /dev/tcp links to, since it's a character device file and the directory path /dev/tcp/mail_server.xyz.com/25 doesn't "really" exist :)

Now we can send input, and receive output, using standard file descriptor redirection commands. A few "for instances" below:

host # echo "HELO xyz.com" >&9 <--- This sends the HELO string to our mailserver via file descriptor 9, which we exec'ed above.
host # read -r RESPONSE <&9 <--- This reads the data coming in on file descriptor 9 with as little interpretation as possible (for instance, it treats the backslash (\) as a backslash and doesn't imbue it with its usual "special" shell powers).
host # echo -n "The mail server responded to our greeting with this load of sass: "
host # echo $RESPONSE


And, now, we can write to, and read from, file descriptor 9 until we have nothing left to say ;) When we're all finished, I find it's good practice to close the input and output channels for our file descriptor (although closing the input should be sufficient in most cases), and, if you want to or need to, you can also dump the remainder of the file descriptor's contents before you close it down.

host # cat <&9 2>&1 <--- Dump all of the remaining input/output left on file descriptor 9 to our screen.
host # exec 9>&- <--- Close the output file descriptor
host # exec 9<&- <--- Close the input file descriptor

Now, you're back to the way things were and you shouldn't be able to read or write from file descriptor 9 anymore (or, if you're a glass-half-full person, it's now available for use again :) I've tacked on a small script to demonstrate some basic functionality, but, hopefully, you'll have some fun playing around with this and figure out more, and better, ways to manipulate your server's interaction with the network through the file system.

host # ./check_net.sh <--- As simple as that to run it :)

Cheers,


This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License.

#!/bin/bash
#
# check_net.sh
#
# 2008 - Mike Golvach - eggi@comcast.net
# Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License
#

mailserver="mailserver.xyz.com"
httpserver="www.xyz.com"
domain="xyz.com"

echo "Testing mail server functionality"
exec 9<>/dev/tcp/$mailserver/25
read -r server_version <&9
echo "Server reports it is: $server_version"
echo "HELO $domain" >&9
read -r greeting <&9
echo "Server responded to our hello with: $greeting"
echo "VRFY username" >&9
read -r vrfy_ok <&9
echo "Server indicates that this is how it feels about the VRFY command: $vrfy_ok"
echo "quit" >&9
read -r salutation <&9
echo "Server signed off with: $salutation"
echo "Dumping any remaining data in the file descriptor"
cat <&9 2>&1
echo "Closing input and output channels for the file descriptor"
exec 9>&-
exec 9<&-
echo "--------------------------------------------------"
echo "Testing web server functionality - Here it comes..."
exec 9<>/dev/tcp/$httpserver/80
echo "GET / HTTP/1.0" >&9
echo "" >&9
while read -r line
do
echo "$line"
done <&9
echo "Dumping any remaining data in the file descriptor"
cat <&9 2>&1
echo "Closing input and output channels for the file descriptor"
exec 9>&-
exec 9<&-
echo "done"


, Mike




Saturday, December 29, 2007

Securing All Of A Host's Network Programs At Once Using Extended ACL's

At most shops where I've worked, a lot of time and effort is put into securing this or that network service from certain users. Mostly, it's done on a per-process basis, like segregating Telnet or RSH, since those are considered insecure. But, for the most part, I find that what security folks really want to do, when locking down a box, is to prevent regular users from running any network commands, and only allow certain people in certain groups to avail themselves of those services. This can be likened to most firewall setups: block everything and allow what you need, as opposed to allowing everything and blocking what you find out is dangerous (which can be disastrous).

This is actually very simple to accomplish on both Linux and Solaris by using simple group permissions and File ACL's (or facl's). In this post we'll walk, step by step, through setting up a box with network services totally locked down for any users that we don't explicitly want to use them. Here we go:

First, you'll want to work on the group angle. All users are already members of a group (or groups) and these groups are used to limit (or augment) their access to certain areas of the OS and certain programs. For our purposes here, we'll set up an additional group. We'll call it noudptcp for kicks (since we're not going to let anyone who has this group as their primary group use any udp or tcp-based programs). We'll set it up just like any other group, like so:

bash@host # groupadd -g 999 noudptcp

Next, we'll visit the two critical files that we'll need to change in order to pull this all together in the next few steps: /dev/udp and /dev/tcp. They should look like this by default:

bash@host # ls -lL /dev/udp /dev/tcp
crw-rw-rw- 1 root sys 41, 0 Oct 2 2006 /dev/udp
crw-rw-rw- 1 root sys 42, 0 Oct 2 2006 /dev/tcp


Note that both of these character special files (device drivers) are readable and writable by everyone. This is one of the reasons any user can use Telnet, RSH, SSH and many other network-based programs on your system. Here, we'll run the getfacl command to view the ACL for the /dev/tcp device driver file and take a look at it, like so:

bash@host # getfacl /dev/tcp

# file: /dev/tcp
# owner: root
# group: sys
user::rw-
group::rw- #effective:rw-
mask:rw-
other:rw-


Then, we'll augment the ACL on both files (they're essentially the same) in my favorite way. I like to just take the output of getfacl, modify it in a file, and then feed that file to setfacl (Note that I'm doing this on Solaris and that I'll point out the differences for Linux along the way - for instance, the Linux getfacl output will look slightly different than this, but you can still dump it to a file and modify it), like so:

bash@host # cat FACL_FILE <--- We'll assume I already edited it, because that's so hard to emulate using a simple keyboard ;) -- Note also that I've removed the comments, as they don't get applied by setfacl and may be confusing.

user::rw-
group::rw- #effective:rw-
group:noudptcp:--- #effective:---
mask:rw-
other:rw-


Now, we feed this file to setfacl (Note that we only added the one extra line to set permissions for the noudptcp group) and the ACL will be updated so that members of the noudptcp group won't have any access to either /dev/tcp or /dev/udp (In Linux, use -M instead of -f):

bash@host # setfacl -f FACL_FILE /dev/tcp
bash@host # setfacl -f FACL_FILE /dev/udp
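On the Linux side, you don't even need the intermediate file if you don't want it: setfacl's -m flag modifies the ACL in place (and -x removes an entry again). Here's a rough sketch using a throwaway file and the stock daemon group purely as stand-ins; on the real box you'd point it at /dev/tcp and /dev/udp, as root, with your noudptcp group:

```shell
#!/bin/bash
# Linux sketch: deny a group all access with a one-shot setfacl -m,
# instead of the dump/edit/reload dance. The scratch file and the
# "daemon" group are just for illustration; substitute /dev/tcp,
# /dev/udp and noudptcp on the real system.

scratch=$(mktemp)
setfacl -m g:daemon:--- "$scratch"              # add the deny entry
getfacl --omit-header "$scratch" | grep '^group:daemon'
setfacl -x g:daemon "$scratch"                  # take it back out again
rm -f "$scratch"
```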


And now we can use getfacl to show the new permissions. In the "ls -lL" output, you'll notice the "+" at the end of the permissions string, indicating that the file now carries an ACL beyond the standard permission bits, and the output of the getfacl command will be the same as our FACL_FILE, with the comments added back by default:

bash@host # ls -lL /dev/udp /dev/tcp
crw-rw-rw-+ 1 root sys 41, 0 Oct 2 2006 /dev/udp
crw-rw-rw-+ 1 root sys 42, 0 Oct 2 2006 /dev/tcp
bash@host # getfacl /dev/tcp

# file: /dev/tcp
# owner: root
# group: sys
user::rw-
group::rw- #effective:rw-
group:noudptcp:--- #effective:---
mask:rw-
other:rw-


Finally, all we have to do is add any users we deem necessary to the noudptcp group and they will not be able to access any network services! Note that this will not prevent them from connecting "to" the box; it will just prevent them from connecting to another box from your box, or using ping, or any program that needs to access /dev/tcp or /dev/udp.

For instance, a few select lines of truss output show what happens when a user in the noudptcp group attempts to SSH off of the box:

open("/dev/udp", O_RDONLY) Err#13 EACCES
so_socket(2, 2, 0, "", 1) Err#13 EACCES
write(2, 0xFFBEF058, 27) = 27
s o c k e t : P e r m i s s i o n d e n i e d\r\n


Congratulations! Now everyone you don't want to be able to use network services on your box is no longer a threat. At least, not in that way ;) And, even better, they can still use everything else that they would normally be allowed to!

Cheers,

, Mike