AINT Misbehaving: A Taxonomy of Anti-Intrusion Techniques
Lawrence R. Halme firstname.lastname@example.org
R. Kenneth Bauer email@example.com
Arca Systems, Inc.
2540 North First St., Suite 301
San Jose, CA 95131-1016
Abstract: This paper examines the basic underlying principles of intrusion control and
distills the universe of anti-intrusion techniques into six high-level, mutually
supportive approaches. System and network intrusions may be prevented, preempted,
deflected, deterred, detected, and/or autonomously countered. This Anti-Intrusion Taxonomy
(AINT) of anti-intrusion techniques considers less explored approaches on the periphery of
"intrusion detection" which are independent of the availability of a rich audit
trail, as well as better known intrusion detection techniques. Much like the Open Systems
Reference Model supports understanding of communications protocols by identifying their
layer and purpose, the authors believe this anti-intrusion taxonomy and associated methods
and techniques help clarify the relationship between anti-intrusion techniques described
in the literature and those implemented by commercially available products. The taxonomy
may be used to assess computing environments which perhaps already support Intrusion
Detection System (IDS) implementations to help identify useful complementary anti-intrusion
approaches.
Keywords: Intrusion, detection, misuse, anomaly, countermeasure, taxonomy.
1.0 Introduction
Efforts to combat computer system intrusions have historically included preventive design,
configuration, and operation techniques to make intrusion difficult. Acknowledging that,
bowing to functionality concerns and budgetary constraints, these efforts will be imperfect,
it was suggested that intrusions could be detected by analyzing collected audit data. The
study of anomaly detection was prefaced by the postulate that it would be possible to
distinguish between a masquerader and a legitimate user by identifying deviation from
historical system usage [AND80]. It was hoped that an audit analysis approach would be
useful to identify not only crackers who had acquired identification and authentication
information to permit masquerading as legitimate users, but also legitimate users who were
performing unauthorized activity (misfeasors). Clandestine users able to bypass the
security mechanisms were another identified problem, but considered more difficult to
detect since they could influence system auditing.
Early hands-on experimentation confirmed that user work patterns could be distinguished
using existing audit trails [HAL86]. Techniques were debated to make auditing, which was
originally designed primarily for accounting purposes, more useful to security analysis. A
model was developed which theorized much of the framework for a general-purpose intrusion
detection system [DEN87]. Intrusion detection researchers split into two camps -- those
seeking attack signatures in the audit data which announce known misuse (e.g., MIDAS
[SEB88]), and those seeking evidence of usage which is anomalous from historical norms
(e.g., IDES [LUN88a]). The complementary combination of these approaches into an
investigative tool with autonomous response to particularly threatening deviance was
suggested [HAL88]. Survey papers attest to the dramatic growth in the number of research
efforts investigating different anomaly and misuse detection approaches (e.g., [LUN88b]).
The early Nineties saw test and commercial installation and operation of a number of
IDSs including SRI's IDES and NIDES, Haystack Laboratory Inc.'s Haystack and
Stalker, and the Air Force's Distributed Intrusion Detection System (DIDS). Emphasis
broadened to include integration of audit sources from multiple heterogeneous platforms,
and platform portability. Distributed intrusion detection is the focus of work at the
University of California at Davis [HEBE92] and at the Air Force [DIDS91]. Intrusion
detection continues to be an active field of research.
Although much has been learned from these research-driven efforts, their focus has been on
developing optimized techniques to detect intrusions. Less thought has been given to
creating an operational view of complementary anti-intrusion approaches. Computer and
Internet misuse has become a frequent topic of today's mainstream media, and the
demand for anti-intrusion technology is exploding. However, intrusion detection products
are as yet esoteric and not well integrated to work together with complementary approaches
such as intrusion-preventing firewalls. The taxonomy we present in this paper seeks to
provide perspective and aid understanding. It provides the basis for the formulation of a
systematic and comprehensive categorization of anti-intrusion approaches, and promotes
multiple complementary defenses.
2.0 Anti-Intrusion Approaches
Over the past fifteen years a great deal of emphasis has been placed on detection as the
most fruitful area for research and development to combat intrusive activity (both from
external crackers as well as insiders abusing their privileges). Less considered have been
other complementary anti-intrusion techniques which can play valuable roles. As work
environments become more interconnected and exposed, service providers will need
increasingly to rely on a wide range of anti-intrusion techniques, not just IDSs. This
paper organizes these techniques (illustrated in Figure 1) into the Anti-Intrusion
Taxonomy (AINT). The "filtering" of successful intrusions is graphically
depicted by the narrowing of the successful intrusion attempt band.
Figure 1: Anti-Intrusion Approaches
The following text describes the six anti-intrusion approaches. We also provide an
analogous real-world illustration of each approach as applied to combating the possibility
of having your wallet stolen while walking down an urban street. Sections follow which elaborate
how these approaches apply to computer systems under the AINT.
- Prevention precludes or severely handicaps the likelihood of a
particular intrusion's success.
- Hire hulking bodyguards and avoid bad neighborhoods. A definitive approach when
it works, but expensive and troublesome and unlikely to be operationally 100% foolproof.
Still leaves opportunity for successful attack if bodyguards can be distracted or bribed.
- Preemption strikes offensively against likely threat agents prior to an
intrusion attempt to lessen the likelihood of a particular intrusion occurring later.
- Support vigilante patrols. Non-specific and may affect innocents.
- Deterrence deters the initiation or continuation of an intrusion
attempt by increasing the necessary effort for an attack to succeed, increasing the risk
associated with the attack, and/or devaluing the perceived gain that would come with success.
- Dress down and walk with excitable Chihuahua dog. Many attackers will move on
to richer-looking easier prey, but if it has been a lean night, a little annoying yapping
dog isn't going to stop a determined mugger.
- Deflection leads an intruder to believe that he has succeeded in an
intrusion attempt, whereas instead he has been attracted or shunted off to where harm is
minimized.
- Carry two wallets so that when attacked, a decoy wallet with canceled credit cards
can be handed over. Can learn more about how attackers operate, but probably only
works for newbie muggers and it is inconvenient having to carry two wallets.
- Detection discriminates intrusion attempts and intrusion preparation
from normal activity and alerts the authorities.
- Carry a whistle and blow it to attract attention from a beat cop if attacked.
Limited usefulness if attack is too far from a donut shop for whistle to be heard, or if
car-alarm-syndrome causes authorities to ignore as a false alarm. Also you may not detect
in time that your wallet was stolen if it is surreptitiously pickpocketed.
- Countermeasures actively and autonomously counter an intrusion as it is occurring.
- Carry a can of mace, attach a mouse trap to your wallet, and know karate to counterattack.
Run the risk of being sued for accidentally breaking the arm of a Hare Krishna solicitor
offering flowers. With a booby-trapped wallet, a pickpocket can be autonomously countered
with necessary speed without conscious detection. However you, as an authorized user,
might mistakenly get your fingers snapped if you forget about the mousetrap.
3.0 Intrusion Prevention
Intrusion Prevention techniques (enforced internally or externally to the system) seek to
preclude or at least severely handicap the likelihood of success of a particular
intrusion. These techniques help ensure that a system is so well conceived, designed,
implemented, configured, and operated that the opportunity for intrusions is minimal.
Because built-in prevention seeks to make it impossible for an intrusion to occur on the
target system, it may be considered the strongest anti-intrusion technique. Ideally, this
approach would prevent all intrusions, negating the need for detection and consequent
reaction techniques. Nevertheless, in
a real-world system this technique alone proves untenable and unlikely to be implemented
without some remaining exploitable faults and dependence on configuration/maintenance.
Add-on prevention measures augmenting the defenses of an existing system include
vulnerability scanning tools and network firewalls.
- Correct Design / Implementation techniques represent classic INFOSEC
mechanisms (e.g., identification and authentication, mandatory and discretionary access
control, physical security), and are appropriate to be developed into the target system
itself. These techniques are well explored, but may be cumbersome and expensive, and care
must be taken that they are not poorly configured.
- Vulnerability Scanning Tools examine system and network configurations
for oversights and vulnerabilities. Static configuration scanners are programs and scripts
periodically run manually by the System Security Officer (SSO) to detect system
vulnerabilities. Dynamic configuration scanning tools perform much the same function but
run constantly as a low priority task in the background. Configuration scanning tools can
monitor for a wide range of system irregularities including: unauthorized software,
unauthorized accounts, unprotected logins, inappropriate resource ownership, inappropriate
access permissions, weak passwords, and ghost nodes on a network. Other vulnerability
scanning tools can check for evidence of previous intruder activity, susceptibility to
known attacks, and dormant viruses. Representative UNIX configuration scanning tools
include: Security Profile Inspector (SPI), Internet Security Scanner (ISS), Security
Analysis Tool for Auditing Networks (SATAN), COPS, and Tripwire [FIS94].
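The listed tools are real products; the fragment below is only a minimal, hypothetical sketch of what a static configuration scanner does -- walk a filesystem tree and flag permission oversights such as world-writable or setuid files:

```python
import os
import stat

def scan_permissions(root):
    """Flag world-writable files and setuid executables under `root`,
    in the spirit of static configuration scanners such as COPS."""
    findings = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mode = os.lstat(path).st_mode
            except OSError:
                continue  # file vanished or is unreadable; skip it
            if mode & stat.S_IWOTH:
                findings.append((path, "world-writable"))
            if mode & stat.S_ISUID:
                findings.append((path, "setuid"))
    return findings
```

A dynamic scanning tool would wrap the same checks in a low-priority loop that re-walks the tree periodically, rather than running once at the SSO's request.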
- Firewalls examine and control the flow of information and services
between a protected subnetwork and/or hosts and the outside world. They protect one
network from another by blocking specific traffic while allowing other traffic. The most
common use today is connecting corporate and academic networks to the Internet. Firewall
designs have proven effective in thwarting many intruder efforts. The decision as to which
traffic to allow is based upon the content of the traffic itself. Typical decision
criteria include traffic direction, network address, port, protocol type, and service
type. The goal of the firewall is to provide efficient and authorized access for users
"inside" the firewall to the outside world while controlling the access of
"outside" users to protected resources by exporting limited and precisely
controlled services. Firewalls are best implemented on separate hardware for performance
and security reasons, and thus there is expense of acquisition and maintenance.
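The decision criteria named above can be sketched as a first-match packet filter; the rule fields and default-deny policy here are illustrative assumptions, not a description of any particular firewall product:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    action: str          # "allow" or "deny"
    direction: str       # "in", "out", or "*" for either
    src: str             # source address prefix, "*" for any
    port: Optional[int]  # destination port, None for any
    proto: str           # "tcp", "udp", or "*"

def decide(rules, direction, src, port, proto, default="deny"):
    """First-match filter over the criteria named in the text:
    traffic direction, network address, port, and protocol type."""
    for r in rules:
        if r.direction not in ("*", direction):
            continue
        if r.src != "*" and not src.startswith(r.src):
            continue
        if r.port is not None and r.port != port:
            continue
        if r.proto not in ("*", proto):
            continue
        return r.action
    return default  # default-deny stance for unmatched traffic
```

Exporting "limited and precisely controlled services" then amounts to ordering the rule list so that narrowly scoped allows precede a broad deny.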
4.0 Intrusion Preemption
Intrusion Preemption techniques strike offensively prior to an intrusion attempt to lessen
the likelihood of a particular intrusion occurring later. This approach includes such
techniques as education of users, promoting legislation to help eliminate an environment
conducive to intrusion, taking early action against a user who appears increasingly to be
straying from the straight-and-narrow, and infiltrating the cracker community to learn
more about techniques and motivation. Rather than the reactive defenses offered by
detection and countermeasures, preemption refers to proactive action against the source of
as yet unlaunched intrusions. Unchecked use of these techniques can pose civil liberty
concerns.
- Banishment Vigilance seeks to preempt later intrusions by noticing
preliminary danger signs of impending undesired activity. Examples of this technique
include attempting to discern malicious intent and initial exploratory stages of intrusive
activity, taking strong and early action against users demonstrating a leaning toward
violating system policy, and offering rewards to users who spot and report vulnerabilities.
- Infiltration refers to proactive efforts on the part of the SSO to
acquire attack information from underground sources to supplement vendor bug reports and
Computer Emergency Response Team (CERT) warnings. A more insidious infiltration would
inundate hacker bulletin board systems with false information to confuse and discourage.
5.0 Intrusion Deterrence
Intrusion Deterrence seeks to make any likely reward from an intrusion attempt appear more
troublesome than it is worth. Deterrents encourage an attacker to move on to another
system with a more promising cost-benefit outlook. This approach includes devaluating the
apparent system worth through camouflage, and raising the perceived risk of being caught
by displaying warnings, heightening paranoia of active monitoring, and establishing
obstacles against undesired usage. Intrusion deterrents differ from intrusion prevention
mechanisms in that they are weaker reminder/discomfort mechanisms rather than serious
attempts to preclude an intrusion.
- Camouflage seeks to hide and/or devalue system targets, and encompasses
such straightforward policy as minimizing advertising a system and its contents.
Configuring a dial-in line not to pick up until after more rings than most cracker
demon-dialing software will wait, and presenting only generic login banners, are other examples of
camouflage. A faceless, boring system is not a prize trophy for a cracker. A disk entitled
"Thermonuclear War" intrigues more than one deglamourized to "tnw".
Camouflage may make a system less usable and intuitive. It also may conflict with the
following deterrent techniques which seek to emphasize active defenses. However, a system
that reveals efforts to secure it may beg an attacker to investigate why such effort was
expended. Simple and weak camouflage techniques may nonetheless prove useful as deterrents.
- Warnings inform users that the security of a system is taken seriously
and emphasize the penalties should unauthorized activity be detected. Sensitive
systems are often configured to display warnings as part of their standard login banners.
Users not contemplating an intrusion should be little inconvenienced. Warnings are easily
implemented and may also be useful from a legal standpoint (especially in the case of
keystroke monitoring), but if the intruder perceives all-bark-no-bite, this is a weak
defense. Warnings may even be counterproductive by piquing the curious, and laying down a
provocative gauntlet to intruders out to prove their mettle. Particular user warnings may
also be implemented to trigger when specific undesirable activity is detected. A concern
for activity-based user warnings is that the potential intruder is alerted to what
thresholds/signatures fire the warnings.
- Paranoia refers to increasing the impression (whether true,
exaggerated, or fallacious) that user activity is being closely monitored by a vigilant
SSO. Where having nonstop watchful system administration in place is not practical, it may
be simulated. If the intruder is led to believe the risks of detection and prosecution
from an apparently attentive and motivated SSO are greater than the possible reward, he
may instead move on to "easier pickings". Emulating the "fake car alarm
blinking light" mechanism is the simplest technique to give the misleading impression
of constant live monitoring. A "Scarecrow" process performing semi-random
standard system administrator activities may be sufficient to ward off casual intruders
who have not seriously cased the system. The deterrent value of this technique is lost,
however, as soon as potential intruders learn that a Scarecrow is present and learn ways
to distinguish between the Scarecrow and a real SSO. An enhancement to this is to
implement a "security camera" technique which admittedly only randomly offers
live-monitoring, but gives no indication when the SSO is actually watching. A potential
intruder in this case can never be sure when he is being live-monitored, but is aware that
it may be at any time.
- Obstacles seek to increase the ante of time and effort an attacker must
expend to succeed beyond what the perceived reward warrants. Obstacles, especially on
gateway machines, seek to try the patience of an intruder thereby "ruining his
fun" and providing incentive to move on. Delaying command executions, displaying
false system warnings, apparent exhaustion of resources, and similar obstacles serve to
exasperate, but not advertise detection. Annoying tactics may include showing interesting
but dead-end lures -- dummy accounts or files on which the intruder wastes valuable time
and reveals attack skills, but which award him nothing. Use of this technique risks
inconveniencing authorized users.
6.0 Intrusion Deflection
Intrusion Deflection dupes an intruder into believing that he has succeeded in accessing
system resources, whereas instead he has been attracted or shunted to a specially
prepared, controlled environment for observation (i.e., a "playpen" or
"jail"). Controlled monitoring of an unaware intruder spreading out his bag of
tricks is an excellent source of attack information without undue risk to the
"real" system [STO89]. Some system enforced deflection techniques may be
considered a special type of countermeasure, but the concept also includes techniques
which do not require the protected system to have ever been accessed by the intruder
(e.g., "lightening-rod systems").
- Quarantined Faux Systems are designed to lead intruders (primarily the
unfamiliar "outsider") to believe that they are logged into the target system,
when they are actually locked into a separate "fishbowl" system. This deflection
is accomplished by a network front end system such as a router or firewall. An effective
quarantined faux system encourages an intruder to remain long enough for a response team
to determine the intruder's identity and motive. However, dedicating a separate machine
and the resources to maintain this charade is expensive, and with distributed environments
and the powerful status-reporting tools available, this technique may be untenable.
- Controlled Faux Accounts are designed to lead intruders to believe that
they are executing within a compromised standard account, when instead they are locked
into a special limited access account. In this case, the deflection controls are built
right into the target environment operating system or application. This technique
eliminates the need for the separate hardware resources required by a faux system, but
must rely on the target operating system security to ensure isolation from protected
system resources. The constructed environment could contain various inducements to engage
and stall the intruder, and divulge his intent. However, constructing and maintaining a
believable and unbreakable controlled faux account is difficult.
- Lightning Rod Systems / Accounts are similar to the preceding faux
techniques, but rather than the intruder being unknowingly shunted to them, the intruder
is instead lured into pursuing a decoy controlled environment directly of his own
volition. Lightning rod systems are placed "near" assets requiring protection,
are made attractive, and are fully instrumented for intrusion detection and back tracking
(the term "honey pot" has also been used to describe this technique). They are
from the primary resources being protected, and do not need to be concerned about
performance and functionality handicaps to authorized users. A practical and convincing
implementation of nontrivial lightning rods is problematic: they are likely expensive to
install and maintain, and rely upon their true reason for existence remaining secret.
7.0 Intrusion Detection
Intrusion Detection encompasses those techniques that seek to discriminate intrusion
attempts from normal system usage and alert the SSO. Typically, system audit data is
processed for signatures of known attacks, anomalous behavior, and/or specific outcomes of
interest. Intrusion detection, and particularly profiling, is generally predicated upon
the ability to access and analyze audit data of sufficient quality and quantity. If
detection is accomplished in near real-time, and the SSO is available, he could act to
interrupt the intrusion. Because of this necessity for a human to be available to
intervene, Intrusion Detection is not as strong an approach as Intrusion Countermeasures,
since it is more likely that intrusion efforts will complete before manual efforts can
interrupt the attack. Intrusion Detection may be accomplished after the fact (as in
postmortem audit analysis), in near-real time (supporting SSO intervention or interaction
with the intruder, such as network trace-back to point of origin), or in real time (in
support of automated countermeasures).
7.1 Anomaly Detection
Anomaly Detection compares observed activity against expected normal usage profiles which
may be developed for users, groups of users, applications, or system resource usage. Audit
event records which fall outside the definition of normal behavior are considered anomalous.
- Threshold Monitoring sets values for metrics defining acceptable
behavior (e.g., fewer than some number of failed logins per time period). Thresholds
provide a clear, understandable definition of unacceptable behavior and can utilize other
facilities besides system audit logs. Unfortunately it is often difficult to characterize
intrusive behavior solely in terms of thresholds corresponding to available audit records.
It is difficult to establish proper threshold values and time intervals over which to
check. Approximation can result in a high rate of false positives, or high rate of false
negatives across a non-uniform user population.
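The failed-login example can be sketched as a sliding-window counter; the limit and window values below are arbitrary illustrations:

```python
from collections import deque

class FailedLoginMonitor:
    """Threshold monitor: alert when more than `limit` failed logins
    for one account occur within `window` seconds."""
    def __init__(self, limit=3, window=60.0):
        self.limit = limit
        self.window = window
        self.events = {}  # account -> deque of failure timestamps

    def record_failure(self, account, now):
        q = self.events.setdefault(account, deque())
        q.append(now)
        # drop failures older than the window
        while q and now - q[0] > self.window:
            q.popleft()
        return len(q) > self.limit  # True => threshold exceeded, alert the SSO
```

The false positive/negative trade-off discussed above is exactly the choice of `limit` and `window` for a non-uniform user population.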
- User Work Profiling maintains individual work profiles to which the
user is expected to adhere in the future. As the user changes his activities his expected
work profile is updated. Some systems attempt the interaction of short-term versus
long-term profiles; the former to capture recent changing work patterns, the latter to
provide perspective over longer periods of usage. However, it remains difficult to profile
an irregular and/or dynamic user base. Too broadly defined profiles allow any activity to
pass as normal.
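The short-term versus long-term interaction mentioned above can be sketched with two exponentially weighted means over a single usage metric; the decay constants are illustrative assumptions:

```python
class WorkProfile:
    """Per-user profile keeping a short-term and a long-term
    exponentially weighted mean of one usage metric (e.g. CPU seconds
    per session). A large gap between the two suggests a recent
    change in work pattern worth the SSO's attention."""
    def __init__(self, short_alpha=0.3, long_alpha=0.02):
        self.short_alpha = short_alpha
        self.long_alpha = long_alpha
        self.short = None
        self.long = None

    def observe(self, value):
        if self.short is None:
            self.short = self.long = float(value)
        else:
            self.short += self.short_alpha * (value - self.short)
            self.long += self.long_alpha * (value - self.long)

    def divergence(self):
        # relative gap between recent and historical behavior
        if not self.long:
            return 0.0
        return abs(self.short - self.long) / abs(self.long)
```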
- Group Work Profiling assigns users to specific work groups that
demonstrate a common work pattern and hence a common profile. A group profile is
calculated based upon the historic activities of the entire group. Individual users in the
group are expected to adhere to the group profile. This method can greatly reduce the
number of profiles needing to be maintained. Also a single user is less able to
"broaden" the profile to which they are to conform. There is little operational
experience with choosing appropriate groups (i.e., users with similar job titles may have
quite different work habits). Individual user profiles mimicked by creating groups of one
may be a necessary complication to address users who do not cleanly fit into the defined
groups.
- Resource Profiling monitors system-wide use of such resources as
accounts, applications, storage media, protocols, communications ports, etc., and develops
a historic usage profile. Continued system-wide resource usage -- illustrating the user
community's use of system resources as a whole -- is expected to adhere to the system
resources profile. However, it may be difficult to interpret the meaning of changes in
overall system usage. Resource profiling is user-independent, potentially allowing
detection of collaborating intruders.
- Executable Profiling seeks to monitor executables' use of system
resources, especially those whose activity cannot always be traced to a particular
originating user. Viruses, Trojan horses, worms, trapdoors, logic bombs and other such
software attacks are addressed by profiling how system objects such as files and printers
are normally used, not only by users, but also by other system subjects on the part of
users. In most conventional systems, for example, a virus inherits all of the privileges
of the user executing the infected software. The software is not limited by the principle
of least privilege to only those privileges needed to properly execute. This openness in
the architecture permits viruses to surreptitiously change and infect totally unrelated
parts of the system. User-independent executable profiling may also be able to detect
such attacks.
- Static Work Profiling updates usage profiles only periodically at the
behest of the SSO. This prevents users from slowly broadening their profile by phasing in
abnormal or deviant activities which are then considered normal and included in the user's
adaptive profile calculation. Performing profile updates may be at the granularity of the
whole profile base or, preferably, configurable to address individual subjects. SSO
controlled updates allow the comparison of discrete user profiles to note differences
between user behavior or changes in user behavior. Unfortunately these profiles must
either be wide and insensitive or frequently updated. Otherwise if user work patterns
change significantly, many false positives will result -- and we all recall the story of
Peter and the Wolf. This approach also requires diligence on the part of the SSO who must
update profiles in response to false positives, and ensure changes represent legitimate
work habit changes.
- Adaptive Work Profiling automatically manages work profiles to reflect
current (acceptable) activity. The work profile is continuously updated to reflect recent
system usage. Profiling may be on user, group, or application. Adaptive work profiling may
allow the SSO to specify whether flagged activity is: 1) intrusive, to be acted upon; 2)
not intrusive, and appropriate as a profile update to reflect this new work pattern, or 3)
not intrusive, but to be ignored as an aberration whose next occurrence will again be of
interest. Activity which is not flagged as intrusive is normally automatically fed into a
profile updating mechanism. If this mechanism is automated, the SSO will not be bothered,
but work profiles may change, and continue to change, without the SSO's knowledge or
oversight.
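A toy illustration of adaptive profiling and the three SSO dispositions above (the mean/standard-deviation flagging rule is one simple choice among many, not the paper's prescription):

```python
import statistics

class AdaptiveProfile:
    """Activity within `k` standard deviations of the profile mean is
    folded in automatically; flagged activity awaits an SSO verdict."""
    def __init__(self, seed, k=3.0):
        self.values = list(seed)   # recent accepted observations
        self.k = k

    def is_anomalous(self, value):
        mean = statistics.mean(self.values)
        stdev = statistics.pstdev(self.values) or 1.0
        return abs(value - mean) > self.k * stdev

    def observe(self, value):
        if not self.is_anomalous(value):
            self.values.append(value)  # silent automatic profile update
            return "absorbed"
        return "flagged"               # needs an SSO disposition

    def disposition(self, value, verdict):
        # "intrusive"   -> act on it, profile untouched
        # "new-pattern" -> fold into the profile as legitimate change
        # "aberration"  -> ignore; next occurrence flags again
        if verdict == "new-pattern":
            self.values.append(value)
        return verdict
```

Note how the "absorbed" path is exactly the silent drift the text warns about: the profile changes without the SSO being bothered.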
- Adaptive Rule Based Profiling differs from other profiling techniques
by capturing the historical usage patterns of a user, group, or application in the form of
rules. Transactions describing current behavior are checked against the set of developed
rules, and changes from rule-predicted behavior flagged. As opposed to misuse rule-based
systems, no prior expert knowledge of security vulnerabilities of the monitored system is
required. "Normal usage" rules are generated by the tool in its training period.
However, training may be sluggish compared to straight statistical profiling methods.
Also, to be effective, a vast number of rules must be maintained, with inherent performance
issues. Managing tools that adopt this technique requires extensive training, especially
if site-specific rules are to be developed.
7.2 Misuse Detection
Misuse detection essentially checks for "activity that's bad" by comparison to
abstracted descriptions of undesired activity. This approach attempts to draft rules
describing known undesired usage (based on past penetrations or activity which is
theorized would exploit known weaknesses) rather than describing historical
"normal" usage. Rules may be written to recognize a single auditable event that
in and of itself represents a threat to system security, or a sequence of events that
represent a prolonged penetration scenario. The effectiveness of provided misuse detection
rules is dependent upon how knowledgeable the developers (or subsequently SSOs) are
about vulnerabilities. Misuse detection may be implemented by developing expert system
rules, model based reasoning or state transition analysis systems, or neural nets.
- Expert Systems may be used to code misuse signatures as if-then
implication rules. Signature analysis focuses on defining specific descriptions and
instances of attack-type behavior to flag. Signatures describe an attribute of an attack
or class of attacks, and may require the recognition of sequences of events. A misuse
information database provides a quick-and-dirty capability to address newly identified
attacks prior to overcoming the vulnerability on the target system. Typically, misuse
rules tend to be specific to the target machine, and thus not very portable.
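The if-then signature idea can be sketched as an ordered-subsequence match over an audit event stream; the rule names and event labels below are invented for illustration:

```python
def match_signature(events, signature):
    """True if the steps of `signature` appear in order (not necessarily
    contiguously) in `events` -- the core of an if-then signature rule."""
    it = iter(events)
    return all(step in it for step in signature)  # `in` consumes the iterator

RULES = [
    ("repeated-su-failure", ["su_fail", "su_fail", "su_fail"]),
    ("tmp-setuid-shell", ["copy_shell_to_tmp", "chmod_setuid"]),
]

def check_events(events):
    """Return the names of all misuse rules whose signature fires."""
    return [name for name, sig in RULES if match_signature(events, sig)]
```

Adding a rule for a newly reported attack is a one-line database edit, which is the "quick-and-dirty" responsiveness the text describes; the rule's event labels remain tied to the target machine's audit vocabulary, hence the portability problem.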
- Model Based Reasoning attempts to combine models of misuse with
evidential reasoning to support conclusions about the occurrence of a misuse. This
technique seeks to model intrusions at a higher level of abstraction than the audit
records. In this technique, SSOs develop intrusion descriptions at a high, intuitive level
of abstraction in terms of sequences of events that define the intrusion. This technique
may be useful for identifying intrusions which are closely related, but whose audit trails
patterns are different. It permits the selective narrowing of the focus of the relevant
data, so a smaller part of the collected data needs to be examined. As a rule-based
approach it is still based on being able to define and monitor known intrusions, whereas
new and unknown vulnerabilities and attacks are the greatest threats.
- State Transition Analysis creates a state transition model of known
penetrations. In the Initial State the intruder has some prerequisite access to the
system. The intruder executes a series of actions which take the target system through
intermediate states and may eventually result in a Compromised State. The model specifies
state variables, intruder actions, and defines the meaning of a compromised state.
Evidence is preselected from the audit trail to assess the possibility that current system
activity matches a modeled sequence of intruder penetration activity (i.e., described
state transitions lead to a compromised state). Based upon an ongoing set of partial
matches, specific audit data may be sought for confirmation. The higher level
representation of intrusions allows this technique to recognize variations of scenarios
missed by lower level approaches.
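A minimal sketch of state transition analysis; the states and actions model a hypothetical setuid-shell penetration and are not drawn from a documented scenario:

```python
# Hypothetical model: initial -> staged -> armed -> compromised.
TRANSITIONS = {
    ("initial", "copy_shell"):  "staged",
    ("staged",  "make_setuid"): "armed",
    ("armed",   "exec_shell"):  "compromised",
}

def analyze(actions):
    """Replay observed actions against the transition model and report
    the deepest state reached. Unmodeled actions leave the state
    unchanged, so the analysis still recognizes penetration steps
    interleaved with innocuous activity -- the scenario variations
    that lower-level signature matching can miss."""
    state = "initial"
    for action in actions:
        state = TRANSITIONS.get((state, action), state)
    return state
```

A partial match (a run ending in "staged" or "armed") is the cue for seeking confirming audit data, as described above.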
- Neural Networks offer an alternative means of maintaining a model of
expected normal user behavior. They may offer a more efficient, less complex, and better
performing model than mean and standard deviation, time decayed models of system and user
behavior. Neural network techniques are still in the research stage and their utility has
yet to be proven. They may be found to be more efficient and less computationally
intensive than conventional rule-based systems. However, a lengthy, careful training phase
is required with skilled monitoring.
7.3 Hybrid Misuse / Anomaly Detection
Hybrid Detectors adopt some complementary combination of the misuse and anomaly detection
approaches run in parallel or serially. Activity which is flagged as anomalous may not be
noticed by a misuse detector monitoring against descriptions of known undesirable
activity. For example, simple browsing for files that include the string
"nuclear" may not threaten the security or integrity of the system but it would
be useful information for an SSO to review if it was anomalous activity for a particular
account. Likewise, an administrator account may often demonstrate access to sensitive
files and have a profile to permit this, but it would be useful for this access still to be
checked against known misuse signatures. There has been a fairly strong consensus in the
anti-intrusion community that effective and mature intrusion detection tools need to
combine both misuse and anomaly detection. There is increasing operational field evidence
that anomaly detection is useful, but requires well briefed SSOs at each site to configure
and tune the detector against a high rate of false positives. Anomaly detection systems
are not turnkey and require sophisticated support at least until profiles have stabilized.
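The parallel combination described above can be sketched as follows. The signature strings and per-account activity profiles are hypothetical stand-ins for a real signature base and learned profiles.

```python
# Hybrid detector sketch: misuse signatures and an anomaly profile run
# in parallel over the same event stream; an event is flagged if either
# detector fires, and the reason is recorded for the SSO.
MISUSE_SIGNATURES = {"cat /etc/shadow"}          # known-bad activity

USUAL_ACTIVITY = {                                # learned per-account profiles
    "clerk": {"open report.txt", "print report.txt"},
    "admin": {"cat /etc/shadow", "edit /etc/passwd"},
}

def hybrid_flags(account, events):
    flags = []
    for e in events:
        misuse = e in MISUSE_SIGNATURES
        anomalous = e not in USUAL_ACTIVITY.get(account, set())
        if misuse or anomalous:
            flags.append((e, "misuse" if misuse else "anomaly"))
    return flags

# Browsing for "nuclear" matches no misuse signature but is anomalous for
# a clerk; the admin's shadow-file access fits the admin profile yet is
# still checked, and caught, against the misuse signatures.
print(hybrid_flags("clerk", ["open report.txt", "grep nuclear -r /"]))
print(hybrid_flags("admin", ["cat /etc/shadow"]))
```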
7.4 Continuous System Health Monitoring
Intrusions may be detected by the continuous active monitoring of key "system
health" factors such as performance and an account's use of key system resources.
This technique is more flexible and sophisticated than Static Configuration Checkers, as
such a tool would be run continuously as a background process. It concentrates on
identifying suspicious changes in system-wide activity measures and system resource usage.
An example is to monitor network protocol usage over time, looking for ports experiencing
unexpected traffic increases. Work needs to be done to develop and tune system-wide
measures, and to understand the significance of identified variations.
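The port-monitoring example above can be sketched as a simple comparison of current traffic against a running baseline. The growth factor, floor, and packet counts are illustrative assumptions.

```python
# Continuous health monitoring sketch: report ports whose traffic in the
# current interval unexpectedly exceeds the baseline. A floor keeps
# near-idle ports from alarming on trivial absolute increases.
def unexpected_ports(baseline, current, factor=3.0, floor=100):
    """Ports whose packet count exceeds factor * baseline (min floor)."""
    suspicious = []
    for port, count in current.items():
        expected = baseline.get(port, 0)   # unseen ports have no history
        if count > max(factor * expected, floor):
            suspicious.append(port)
    return sorted(suspicious)

baseline = {23: 200, 25: 1500, 79: 40}        # telnet, smtp, finger
current  = {23: 220, 25: 1600, 79: 900, 31337: 500}
print(unexpected_ports(baseline, current))    # finger surge + unknown port
```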
8.0 Intrusion Countermeasures
Intrusion Countermeasures empower a system with the ability to take autonomous action to
react to a perceived intrusion attempt. This approach seeks to address the limitation of
intrusion detection mechanisms which must rely on the constant attention of an SSO. Most
computing environments do not have the resources to devote an SSO to full-time intrusion
detection monitoring, and certainly not for 24 hours a day, seven days a week. Further, a
human SSO will not be able to react at machine processing speeds if an attack is automated
-- the recent IP spoofing attack attributed to Kevin Mitnick was largely automated and
completed in less than eight minutes [SHI95]. Entrusted with proper authorization, a
system will have much greater likelihood of interrupting an intrusion in progress, but
runs the risk of falsely reacting against valid usage. What must be prevented is the case
where a user is doing something unusual or suspicious, but for honest reasons, and is
wrongfully burdened by a misfiring countermeasure. The concern that a General
Brassknuckles will be enraged at being rudely locked out of the system because he runs
over the allowed page count for printouts merely reflects an avoidable, overly aggressive
configuration.
Two primary intrusion countermeasure techniques are autonomously acting IDSs and alarmed
system resources. Although the former may be considered simply giving intrusion detection
techniques teeth, the latter will react to suspicious actions on the system without ever
processing audit data to perform "detection".
- Intrusion Countermeasure Equipment (ICE) refers to mechanisms which not
only detect but also autonomously react to intrusions in close to real-time. Such a tool
would be entrusted with the ability to take increasingly severe autonomous action if
damaging system activity is recognized, especially if no security operator is available.
The following ICE autonomous actions, in ascending order of severity, may be envisioned:
- Alert, Increase Support to SSO (Transparent):
- Note the variance in ICE console window.
- Increase the amount of audit data collected on the irregular user, perhaps down to the
keystroke level
- Alert SSO at the ICE console with a local alarm
- Notify SSOs remotely (e.g., by beeper)
- Seek to Confirm, Increase Available Information on User:
- Reauthenticate user or remote system (i.e., to address attacks originating from
intruders capitalizing on an unattended session, or spoofing packets on an authenticated
connection)
- Notify security personnel to get voice/visual confirmation of the user's identity
- Minimize Potential Damage:
- Slow system response or add obstacles
- Only pretend to execute commands (e.g., buffer rather than truly delete)
- Arrest Continued Access:
- Lock local host account / Swallow offending packets
- Trace back network ID and lock out all associated accounts back to entering host,
perform housekeeping at intermediary systems.
- Lock entire host system / Disconnect from network
- Disconnect network from all outside access
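The escalation ladder above can be sketched as a dispatcher that ratchets up the autonomous response as suspicious events accumulate from a source. The action names and the one-event-per-level policy are illustrative, not from any implemented ICE.

```python
# ICE escalation sketch: repeated suspicious events from the same source
# trigger increasingly severe autonomous responses, mirroring the
# ordering in the text, and cap at the most severe action.
ESCALATION = [
    "log_variance",        # transparent: note in ICE console window
    "increase_auditing",   # transparent: collect more audit data
    "alert_sso",           # local alarm / remote beeper
    "reauthenticate",      # seek to confirm the user's identity
    "slow_response",       # minimize potential damage
    "lock_account",        # arrest continued access
]

class Ice:
    def __init__(self):
        self.level = {}    # suspicion level per source

    def suspicious_event(self, source):
        lvl = min(self.level.get(source, 0), len(ESCALATION) - 1)
        self.level[source] = lvl + 1
        return ESCALATION[lvl]

ice = Ice()
actions = [ice.suspicious_event("acct-47") for _ in range(7)]
print(actions)    # severity ratchets up, capping at account lockout
```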
ICE offers a number of advantages over manually reviewed IDSs. A system can be
protected without requiring an SSO to be constantly present, able, and willing to make
instant, on-the-spot complex decisions. ICE offers non-distracted, unbiased,
around-the-clock response to even automated attacks. Because ICE suffers from the same
discrimination and profile management issues as intrusion detection mechanisms, but with
potentially no human intervention, care must be taken that service is not disrupted at a
critical time by engineered denial of service attacks.
- Alarmed Files / Accounts refer to seductively named and strategically
located "booby trap" resources which lure an intruder into revealing his
activities. Accessing an alarmed file or account unleashes immediate action. Alarms can be
silent (only notifying the SSO, even remotely) or can prompt immediate retaliatory action
against the intruder. An ideal candidate for an alarmed
account is a default administrator account with default password intact. This technique is
low cost and low tech, but care must be taken that authorized users will not trip the
alarm, especially through accidental stumbling across it by some automatic means (e.g.,
running a nonmalicious find).
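A minimal sketch of such an alarmed resource follows. The decoy paths and the exemption list for known automatic scans are hypothetical; the exemption illustrates the precaution against authorized users tripping the alarm by accident.

```python
# Alarmed-file sketch: any access to a seductively named decoy triggers a
# silent SSO notification, while known automatic scans (e.g., a
# nonmalicious nightly find) are exempted to avoid false alarms.
ALARMED_FILES = {"/home/admin/passwords.txt", "/etc/.hidden_keys"}
EXEMPT_PROCESSES = {"nightly_find", "backup"}   # authorized automatic scans

def check_access(path, process, notify):
    """Silently notify the SSO when a decoy is touched; return True if tripped."""
    if path in ALARMED_FILES and process not in EXEMPT_PROCESSES:
        notify(f"silent alarm: {process} accessed decoy {path}")
        return True
    return False

alarms = []
check_access("/home/admin/passwords.txt", "intruder_shell", alarms.append)
check_access("/home/admin/passwords.txt", "nightly_find", alarms.append)
print(alarms)    # only the intruder's access trips the alarm
```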
9.0 Conclusions
This paper has established a comprehensive anti-intrusion taxonomy by working top-down at
a theoretical level, and bottom-up by surveying implemented approaches and those discussed
in the referenced literature. Exercising the taxonomy against real-life analogies firmed
our intuitive grasp of the concepts. New anti-intrusion techniques will continue to be
developed in this rapidly evolving field of research and may expand our taxonomy.
This taxonomy will serve as a useful tool to catalog and assess the anti-intrusion
techniques used by a particular anti-intrusion system implementation. It is hoped that our
technique organization will provide new insight to the anti-intrusion research community.
The authors are active workers in the field and would be pleased to correspond regarding
additions or modifications.
10.0 References
- [AND80] J. Anderson. Computer Security Threat Monitoring and Surveillance. James P.
Anderson Co., Fort Washington, PA, 15 April 1980.
- [HAL86] L. Halme and J. Van Horne. "Automated Analysis of Computer System Audit
Trails for Security Purposes," Proceedings of the 9th National Computer Security
Conference. Washington DC. September 1986.
- [DEN87] D. Denning. "An Intrusion Detection Model," IEEE Transactions on
Software Engineering, Vol. SE-13, No. 2. February 1987. pp. 222-232.
- [SEB88] M. Sebring, E. Shellhouse, M. Hanna, and R. Whitehurst. "Expert Systems in
Intrusion Detection: A Case Study," Proceedings of the 11th National Computer
Security Conference. Washington DC. October 1988.
- [LUN88a] T. Lunt and R. Jagannathan. "A Prototype Real-Time Intrusion Detection
Expert System," Proceedings of the 1988 IEEE Symposium on Security and Privacy.
Oakland CA. April 1988.
- [HAL88] L. Halme and B. Kahn. "Building a Security Monitor with Adaptive User Work
Profiles," Proceedings of the 11th National Computer Security Conference. Washington
DC. October 1988.
- [LUN88b] T. Lunt. "Automated Audit Analysis and Intrusion Detection: A
Survey," Proceedings of the 11th National Computer Security Conference. Washington
DC. October 1988.
- [TIS90] N. McAuliffe, D. Wolcott, L. Schaefer, N. Kelem, B. Hubbard, T. Haley. "Is
Your Computer Being Misused? A Survey of Current Intrusion Detection System
Technology," Proceedings of the 6th Annual Computer Security Applications Conference
. Tucson, AZ. December 1990.
- [HEBE92] L. Heberlein, B. Mukherjee, K. Levitt. "Internetwork Security Monitor: An
Intrusion-Detection System for Large-Scale Networks," Proceedings of the 15th
National Computer Security Conference. Washington DC. October 1992.
- [DIDS91] S. Snapp, J. Brentano, G. Dias, T. Goan, L. Heberlein, C. Ho, K. Levitt, B.
Mukherjee, S. Smaha, T. Grance, D. Teal, and D. Mansur. "DIDS (Distributed Intrusion
Detection System) - Motivation, Architecture, and an Early Prototype," Proceedings
of the 14th National Computer Security Conference. October 1991.
- [FIS94] W. Cheswick and S. Bellovin. Firewalls and Internet Security: Repelling the Wily
Hacker, Addison-Wesley, 1994.
- [STO89] C. Stoll. The Cuckoo's Egg: Tracking a Spy Through the Maze of Computer
Espionage, Doubleday, New York, 1989.
- [SHI95] T. Shimomura. The IP Spoofing Attack, in Proceedings of the Third Workshop on
Future Directions in Computer Misuse and Anomaly Detection, eds. M. Bishop, K. Levitt, and
B. Mukherjee. January 1995, appendix A-15.