Foreword
The purpose of locks is not to deter criminals; it is to keep honest people honest.
—Anonymous reformed thief
Cyberspace Is the Wild West
Making deception the major theme of this book is provocative: it makes explicit and unusual something that is inherent and commonplace. As readers of books such as this, we all know that we live in a world surrounded by deceptions, ranging from the trivial deceptions of sports competition to those of the commercial marketplace and the terrorist bomb maker.
What is different or unique about the deceptions involved in the defense of computer networks that makes them worthy of special study? Ubiquity and technology characterize cyberspace. Time and space hardly exist in the cyber world. Actions take place at nearly light speed. Data theft can occur very rapidly and leave no trace—that which was stolen may appear to have been undisturbed. That rapidity of communication virtually negates space. If the electronic means exist, connections can be made from virtually any point on the earth to any other with equal ease and speed. Unlike gold bullion, data copied is as good as the original data. Physical proximity is not required for theft.
Paradoxically, the highly structured and complex technology of computers gives the technically sophisticated thief unique advantages over the inexpert majority who are mere users of networks. It is the highly structured nature of the computer and its programs that makes them all at once so useful, predictable, reliable, and vulnerable to abuse and theft. The user and abuser alike are vulnerable to deceit precisely because the systems are so useful, predictable, reliable, and vulnerable. Only the humans in the system are vulnerable to deception. Yet the advantages of connection to the global network are so great that total isolation from it is possible only in the event of specific and urgent need. The advantages of connection trump the risk of compromise.
The instructions to computers must be unambiguous and specific. If computers are to communicate with each other, they must do so according to protocols understood by attacker and defender. There are, of course, many protocols and systems of instructions, each consistent within itself, intelligible, and unambiguous. The possibility of secret instructions exists, but someone must know them if secrets are to be useful. These necessities impose themselves on the technologies and hardware of networks.
A protected network is one that represents itself to users as protected by requiring users to show evidence of authorization to access it—typically by means of a password. Gaining unauthorized access to information or data from a protected network, however accomplished, is theft. We refer to the intruder who gains this access as the “adversary.”
Most often, attacks on networks have consisted of adversaries taking advantage of well-known, tried-and-true human failings:
• Failures to follow best practices
• Failures to heed warnings
• Failures of management to provide adequately for personnel security issues
• Failures of individuals to control their appetites
People have been, and almost certainly will continue to be, the primary points of entry to computer-related deception.
Adversaries attack, hack, and intrude on computer networks largely by using their technical skills to exploit human fallibilities. The higher the value of the data they seek and the more organized the effort, the more likely it is that the technical skills are leveraged from conventional manipulative criminal skills.
Each network is designed as an orderly world, which nevertheless is connected to a chaotic world. Is it possible to be connected and not be infected by the chaos? A few years ago, at a conference on network security, one participant complained that operating a network in the face of constant hacking attempts was like being naked in a hail storm. Was there nothing that could be done to protect oneself? Another participant replied, “No.” Legal and pragmatic constraints made it difficult, if not impossible. Has there been much change? Not if what we read in the newspapers is true.
Even without attackers, as networks expand and the data in them grows, apparently simple questions may lead to unexpected destinations, often by convoluted routes. On the Web at large, simple questions become complex. Settled truths lose their solidity. There is so much information. And it is so hard to keep the true sorted from the false. As the King of Siam said, “Some things nearly so, others nearly not!” (www.lyricsbay.com/a_puzzlement_lyrics-the_king_and_i.html).
As the Internet and cyber world grow in technical complexity and substantive variety, when will the possible permutations of connection with and between networks become infinite? Do any of us truly understand when we drop in a request exactly why we receive a particular answer? I think fondly of the Boston Metropolitan Transit Authority. It inspired the 1950 short story “A Subway Named Moebius,” by A. J. Deutsch, which told the tragic tale of what happens when a network inadvertently goes infinite.1
Even short of such drama, there is no certainty, no matter the perfection of the technology, that the seekers and users of information will ask the right questions, find the right information, or reach correct conclusions from the information they find.
Paradoxically, the search for the perfect vessel—a container for information impervious to unauthorized use—motivates others to go to great lengths to penetrate it. Therefore, the hider/finder perplexity is always with us, and so are deception games.
Deception is most often thought of in terms of fooling or misleading. It adds to the uncertainty that characterizes real-world situations. Not true!
Properly considered, the purpose of deception is not to fool or mislead. Whether deployed by friend or foe, its purpose is to achieve some advantage unlikely to be conceded if the target or object of the deception understood the deceiver’s intent. The purpose of deception is, in fact, to increase predictability, though for only one side of a transaction. It increases the confidence one side may feel in the outcome to the disadvantage of the other side.
Having an advantage also gives one side the initiative. Seizing the initiative, exercising and benefiting from it, is the ultimate object of deception.
This view raises several questions that cannot be answered, but which must be considered and whose implications must be taken into account if deception is to be either deployed or defended against on behalf of computer networks:
• What exactly is deception?
• Why is deception necessary?
• Given the necessity of deception, what general issues are, or ought to be, considered before one takes it up?
Definition of Deception
Deception in computer networks is our subject. We live in a sea of deception. Virtually all living things recognize that they are the prey of some other, and survival depends on some combination of physical attributes and wit. Four rules apply:
• Do not be seen—hide.
• If seen, run away.
• Counterattack if there is no alternative.
• When none of the preceding three are possible, use wits and resort to subterfuge.
Hog-nosed snakes and possums feign death.2 Puffer fish make themselves too big and unpleasant to swallow, and skunks, well… you get the idea. The substance of this book explores the human, rational, and computer network analogs.
Deception’s distinguishing characteristic is that its purpose is to affect behavior. (You can’t deceive an inanimate object, after all.) So the purpose of the deception is to manipulate someone into acting as he would not if he understood what the deceiver was up to. However, inducing that desired action probably will not be sufficient. Tricking the bank manager into giving up the combination to the vault still leaves the job of gathering up the money and hauling it away, not to mention avoiding the police long enough to enjoy it.
So deception has three parts:
• Define the end state (after the deception succeeds, what is the state of things?).
• Perform the action(s) that cause the adversary to cooperate with, or at least not interfere with, the deceiver’s action.
• Execute the action required to secure the intended advantageous state.
We give these parts names: the objective, the deception, and the exploitation. Without all three, there can be no deception plan. It is possible to fool, mislead, or confuse. But to do so may cause the adversary to take some unforeseen or unfavorable action. And unless one has the intent and capability to exploit that action induced in the adversary to achieve a goal, what was the purpose of the whole exercise? Of what benefit was it?
Merely hiding something is not deception. Camouflage is an example. Camouflage hides or distorts the appearance of an object, but it does not alter the hunter’s behavior. A newborn deer is speckled and has no scent—essentially invisible to predators—so that it can be left alone while its mother browses. But deer make no effort to defend a fawn by distracting or attacking predators should the fawn be discovered. In contrast, some ground-nesting birds lay speckled eggs, make nests of local materials, and have chicks of a fuzzy form and indeterminate color to discourage predators. But they also will feign a broken wing in efforts to distract predators and lead them away from their nests. They are deceiving their enemies in a way deer do not. On the other hand, some birds will attack predators near their nest, attempting to drive those predators away, but they don’t try to lead the predators away from the nest.
Deception, then, is about behavior both induced in the adversary and undertaken by the deceiver to exploit it. To deceive, it is not sufficient to induce belief in the adversary; it is necessary also to prepare and execute the exploitation of resultant behavior.
As long as the target or object of our deception does what we want him to do, that should be sufficient for deceptive purposes. The adversary may have doubts. He may take precautions.3 The deceiver’s response is not to embroil himself in attempting to discern the quality of his adversary’s beliefs—a fraught task in the best of times—but to make contingency plans of his own to maintain the initiative and achieve his aims whatever the adversary may do. The adversary’s actions are sufficient warranty for his beliefs.
Purely as a practical matter, how likely is it that the deceiver will be sufficiently certain of his knowledge of the state of mind of an adversary partially known and far away? As deceivers, we may know what the adversary knows because we told him or because we know what someone tells us he was told. But can we know what the adversary believes? What he intends? How today’s environment impacts this morning’s beliefs?
The only thing in the world anyone controls with certainty is his own behavior. From within an organization where action must take place through agents and intermediaries, there is little enough control. As deceivers, we may know only what we intended by acting in a certain way and what we intended if the adversary responded in the anticipated way. The purpose of the deception, after all, is to make the adversary’s actions predictable!
You will say that not knowing his state of mind or beliefs, we cannot know with assurance whether the adversary acted as he did in accord with our intentions or in a deception of his own in which he is using secret knowledge of our intentions. You are right to say so. That is why the deceiver, as well as—and perhaps more than—the deceived must have doubts and contingency plans. It is the deceiver who accepts the added risk of committing to exploiting activity he has initiated.
Card workers (magicians) use the theory of the “Out” as insurance that their tricks will amaze the audience even if they fail. An Out is a piece of business prepared in anticipation of something going wrong in front of live audiences (see “Outs”: Precautions and Challenges for Ambitious Card Workers by Charles H. Hopkins and illustrated by Walter S. Fogg, 1940). Failure, to one extent or another, is highly likely in any effort to manipulate another. By anticipating when and how failure may occur, it is possible to plan actions to not merely cover the failure, but to transition to an alternate path to a successful conclusion.
Does this differ from old-fashioned contingency planning? Perhaps radically. In a contingency plan, typically the rationale is: “I’ll do A. If the adversary does something unanticipated or uncooperative, then I’ll do C, or I’ll cope.” The theory of Outs would have it: “I’ll do A, but at some point the adversary may do something else, B or B’. If so, I am prepared to do C or C’ to enable me, nonetheless, to achieve A.” The emphasis is on having anticipated those points in the operation where circumstances may dictate change and, having prepared alternatives, enabling achievement of the original objective nonetheless. “It’s the end state, stupid!” to paraphrase.
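The structural difference is that every Out still aims at the original end state, rather than falling back to a lesser goal. As a purely illustrative toy (all names here are made up, not anything from the book), the logic of Outs can be sketched as a small loop:

```python
def run_with_outs(goal, primary, outs, observe):
    """Execute the primary action; whenever the adversary's observed
    response deviates from the desired end state, switch to the Out
    prepared for that response -- still aiming at the original goal."""
    primary()                      # "I'll do A"
    response = observe()
    while response != goal:
        if response not in outs:
            return False           # no Out prepared: the plan fails
        outs[response]()           # prepared alternative C or C'
        response = observe()
    return True                    # original objective achieved

# Example: the adversary first does the unanticipated "B";
# the prepared Out "do C" brings things back on the path to "A".
events = iter(["B", "A"])
actions = []
succeeded = run_with_outs(
    goal="A",
    primary=lambda: actions.append("do A"),
    outs={"B": lambda: actions.append("do C")},
    observe=lambda: next(events),
)
```

The point the sketch makes is simply that the branches are prepared in advance and all converge on the same objective; "I'll cope" corresponds to the unprepared `return False` branch.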
Deception consists of all those things we must do to manipulate the behavior of the target or object of our operations. It follows that deception is not necessarily or even primarily a matter of technical mastery. In the context of this book, it is a state of mind that recognizes it is the value of the information in the network that attracts hostile interest. In order to penetrate protected networks, certain specific items of intelligence are needed. And, therefore, it is the adversary’s interest in these items of information and his need for the data on the network that make it possible to induce him to act against his own interest.
This insight was emphasized by Geoffrey Barkas, a British camouflage expert in North Africa. (Before and after the war, Barkas was a successful movie producer.) After the Germans had captured one of his more elaborate schemes, Barkas thought the Germans, now aware of the extent of British capabilities, could not be fooled again. They were though, and Barkas realized that as long as the enemy had a good intelligence service to which enemy commanders paid attention, it was possible to fool them again and again (as described in The Camouflage Story (From Aintree to Alamein) by Geoffrey and Natalie Barkas, London, Cassell & Company Ltd, 1952).
Barkas realized that it is the need for information and willingness to act on the information acquired that creates the vulnerability to deception. It is no more possible to avoid being deceived than it is to engage in competitive activity without seeking and using information. One can try to do so, and one might succeed for a time. Without intelligence, one could blunder, and in blundering, confuse an opponent into blundering also, but one could not deceive. Deception presupposes a conscious choice. Deception is in the very nature of competitive activity.
The interplay among competitive, conflicting interests must inform the extent, expense, and means used in defending the integrity of the information/data stored in networks. Both attack and defense are preeminently human, not technical.
An excellent, if trivial, example is this football ploy: A quarterback gets down to begin a play, as does the opposing line. He then stands up and calmly walks across the line between the opposing defenders, and then sprints for the end zone. By the time the defenders recover, it is too late. (You can see this in action at http://www.koreus.com/video/football-americain-culot.html.) This is perfectly legal. It’s done right in plain sight. Success depends entirely on surprise (and a speedy quarterback). It won’t work often, but when conditions are right, it meets all the requirements of a deception plan well executed.
Deception is about manipulating the behavior of another to the benefit of oneself without the permission of the other. We undertake to do it on the assumption that the other—who we call the adversary—would not cooperate if he knew what we intended and the consequences. One does not deceive out of idle curiosity, because deception always has consequences. (It’s the consequences, after all, that the deception exists to bring about.) By definition, the consequences of deception are intended to injure the adversary.
Of course, self-deception is a different thing altogether, and is a subject for psychologists, although an approach to a deception might be to encourage an adversary’s self-deception. How easily we slip into the hall of mirrors. Let that be the first caution to would-be deceivers.
There is a seemingly infinite number of ways in which one human can manipulate another’s behavior.
The Real Purpose of Deception
What deception is and what its purpose is are separate questions, just as intelligence is not simply information—it is gathered, processed, and distributed for a purpose. Deception is about manipulating behavior of an adversary for a reason—that is, to advance some defined end.
Barring accident or a deliberate violation of rules, one can gain unauthorized access to protected information only by appearing to be an authorized recipient—to deceive. Deception is the unauthorized agent’s method of access.4
Deception is fundamental to hacking. But what was the purpose of the deception? “To get access to the network,” you say. Yes, but there is a deeper, preexisting, and subsequent purpose: to increase order and predictability in competitive situations, but to the benefit of only one side—the deceiver’s. Implied is the deceiver’s intent and ability to exploit this relationship. What point is there to manipulating a competitor if there is no intent or ability to benefit by doing so?
Some discussions of deception deal with the adversary’s beliefs. They assert that the objective of deception is to make the adversary certain of his understanding of a situation, but wrong. We would argue that the adversary’s belief is nearly irrelevant. What matters is that he does what the deceiver desires him to do. It is then merely a question of whether the deceiver has been competent enough to ensure his ability to take advantage of the situation he has brought about.
Why not simply take what one wanted—steal it, seize it, or grab it? In the military context, why not conquer, occupy, or destroy? Why bother going to the trouble of dreaming up schemes that may fail or be subverted?
There are many reasons. Here are a few:
• It is not always possible to obtain information by directly asking for it without compromising the intent behind the request.
• One may not know how to get at what one wants.
• One may not be strong or smart enough.
• One may be deterred by political, legal, ethical, or social considerations.
• The object of deception is to control, to the deceiver’s advantage, the behavior of an adversary.
• The threat of deception adds an element of deterrence to other defenses.
• Deception may facilitate intelligence gathering, which may then be used to improve defenses and as input to future deception plans.
• Deflecting an adversary causes him to spend his time and resources harmlessly.
But, if the matter is sufficiently important and the goal is sufficiently desirable, an attacker may choose to be undeterred. What then? This is not the place to discuss tactics. Yet, as the manipulation of others’ behavior is the core of this book, we will make a suggestion: Go to an Internet search engine and search for “reflexive control.” You will find much to think about, especially in a report by Vladimir and Victorina Lefebvre, titled “Reflexive Control: The Soviet Concept of Influencing an Adversary’s Decision Making Process” (SAI-84-024-FSRC-E, Science Applications, Inc., Englewood, CO, 1984). You could also do an Internet search for “Vladimir Lefebvre.” Following the links is an interesting and educational journey.
Reflexive control is a concept under which one controls events by sequencing one’s own behavior to induce responses and to create incentives for the adversary to behave as one wishes. This indirect approach proceeds from that one thing over which the deceiver has sure control: his own behavior.
Costs and Risks?
Successful deception may make it possible to achieve one’s goals at a lower cost, however that cost may be calculated. Deception, however, implies consequences. It is well to be aware that even the slickest deceptions will incur costs commensurate with the value of the goal achieved. If deception was necessary to achieve it, someone else was prepared to invest resources in denying it.
Designing and executing deception requires people, time, resources, and effort. Resources are never sufficient to do all the things one might want to do. If a hostile attack can be anticipated—as, indeed, experience shows it must be—and successful defenses are not certain—as experience shows they are not—then deception is only one more sensible defensive option. One does not deceive out of idle curiosity, because deception always has consequences which, by definition, incur some risk.
The obvious way to estimate the costs of deception would be to estimate man-hours spent or requisitions submitted in its planning and execution. Opportunity costs should also be considered—for example, what else were your resources not doing while they were deceiving? Also, what was the exchange ratio between benefits received from successful deception versus the direct costs and losses due to risks accepted? How certain are we that the adversary makes a similar calculation? Assuming the adversary behaves as we wish, will he value our success as we do, or will he accept the loss as “the cost of doing business” with us? In short, what value do we place on successfully deceiving the adversary relative to the costs and risks we have run?
Although cost and risk are central to deception, they are not our subject here.
Who Should Deceive?
The question of who should deceive is implicit in the cost question. And this raises two related questions:
• What is the necessary skill set?
• How do cyber deceivers get trained?
Deception is about manipulating behavior. If the manipulation is not conceived, designed, and executed competently, the adversary may be tipped off and withhold his cooperation, or worse, run a counter deception.
In the late 1980s, the Army analyzed tactical deception exercises at its National Training Center in California. The analysis reached one firm conclusion: competent commanders deceive. Not only did they attempt deception more often than others, but their deceptions were more competently executed, and their battles had better outcomes in terms of losses incurred and inflicted, and missions accomplished.5 Military deception is only a special case of the survival value of deception displayed by all living things.
Sun Tzu, the Chinese philosopher of war, was very sensitive to the element of competency. He said, “All warfare is based on deception.” But master Sun looked beyond the value of deception in combat. He praised the general who is able to accomplish missions at low cost in lives and treasure: “One hundred victories in one hundred battles is not the most skillful. Subduing the other’s military without battle is the most skillful” (from The Art of War: A New Translation by the Denma Translation Group, Shambhala Publications, Inc., Boston, 2001).
Competence at what? As deception is about behavior, this question immediately arises: What does the deceiver want the adversary to do? And what must his behavior be in order to induce that which he desires in the target or object? And beyond that behavior, what, if anything, must the deceiver do to ensure the adversary’s cooperation?
We maintain that a competent competitor is a deceptive one. Involvement in any competitive activity assumes sensitivity to the intelligence and competence of the adversary. One’s own plans must allow for surprise or unexpected adversary action, and to do so, must assume that preparations will be made for that unanticipated occurrence. Otherwise, one is left to rely on overwhelming strength and resources for success. Some leaders, generals, and coaches do try to win with overwhelming force, but the competent leaders, generals, and coaches know enough to prepare for the eventuality that the advantage of overwhelming strength may not be theirs. What then? Already competent, the competitor must resort to guile. If he is more than merely competent, he may not reserve guile for the last resort.
If the defender is morally and ethically justified in using deception to defend the network because the adversary is using deception in pursuit of his ends, who should be responsible for planning and executing defensive deception? Answer: the most competent and most creative defenders.
Any deception plan must balance potential gains against the costs and risks of failure or blowback.6 The basic assumption of any deception operation must be that the adversary is also a competent operator, and thus has made as close a study as he could of the defenders’ behavior to support his attacks. That being so, the defenders must be aware of their own behavior to avoid tipping off the adversary. One might think of a poker player trying very hard not to fall out of his chair when he draws a fifth spade for his flush.
The potential defensive deceiver must be at least technically competent to ensure that the desired message is delivered to the adversary in a credible manner. For that, the deceiver needs to be familiar enough with the adversary to know to what the adversary is likely to react. And, ideally, the defensive deceiver is able to observe the adversary closely enough to know if the message has been received and if the adversary is believing it. The confidence with which a deception can continue is tied to how well the deceiver is able to know whether the ploy has been seen and accepted.
Clearly then, the defensive deceiver must be very knowledgeable of his own system, cleverer than the attacker, and a manager of a complex task. He must both use and generate intelligence. He needs to know and be able to call on and coordinate the efforts of organizations outside his own for information and support as his operation progresses through its life.
Life is the appropriate word. Deceptions end with success, failure, or ambiguity. Something happened, but we can’t say if the deception was responsible. With success, the operation must be closed and lessons learned. With failure, the operation must be closed, the damage limited, and lessons learned. With ambiguity, only the lessons must be learned.
But with all of them, there’s a key question at the end: Is there a way to weave a new effort out of the remnants of the old? Even with a failure, is it possible that the adversary, now that he knows we might try this and that, could be less alert to, or less sensitive to, a variation?
Active vs. Passive Deception
Deception, like intelligence, may have both passive and active aspects. Purely passive deceptions may only cause an attacker to expose his methods for our study. Active deceptions may involve setting up an attacker for an exploitation of our own.
Passivity characterizes most network defenses in that the defender waits for the attacker. Passwords are an example. They merely prevent an attacker from gaining easy access to network content, but by that point, the attacker has already learned something. For the defender, passwords are easy to administer and control. Used well and conscientiously administered in concert with other defenses, passwords can be very effective.
But holding an attacker at bay will not be enough. With sufficient incentive and enough time and resources, a determined attacker may gain access somehow. In the end, passive measures leave the initiative in the attacker’s hands. He calculates how much of his time and resources your data is worth.
As a fondly remembered counterintelligence instructor once said, “The purpose of a lock is not to deter criminals. It is to keep honest people honest.”
A honeynet—a vulnerable net set out to entice attackers so that their methods may be studied—is passive but also active in the sense that it can be placed or designed to attract a certain kind of attacker. It is true that the honeynet itself induces behavior in an attacker, but, if deception were part of the plan at all, the exploitation may be indirect or deferred.
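The passive-but-active idea behind a honeynet can be sketched in miniature. The following Python fragment (an illustrative sketch only; the fake FTP banner and all names are invented here, not taken from the book) opens a listening socket that presents an enticing service banner and logs whatever each visitor sends, for later study:

```python
import socket
import threading

def run_honeypot(host="127.0.0.1", port=0, max_conns=1):
    """Minimal low-interaction honeypot sketch: listen on a TCP port,
    present a fake service banner, and log each visitor's first message.
    Returns (bound_port, log, server_thread); the log list fills in as
    connections arrive."""
    log = []
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))           # port 0 lets the OS pick a free port
    srv.listen()
    bound_port = srv.getsockname()[1]

    def serve():
        for _ in range(max_conns):
            conn, addr = srv.accept()
            with conn:
                conn.sendall(b"220 FTP server ready\r\n")  # fake banner
                data = conn.recv(1024)                     # attacker's move
                log.append((addr[0], data.decode(errors="replace").strip()))
        srv.close()

    t = threading.Thread(target=serve, daemon=True)
    t.start()
    return bound_port, log, t
```

A real honeynet is far richer, of course, but the shape is the same: the defender's only action is to place the lure; everything recorded in `log` is behavior the attacker volunteered, which is exactly the intelligence the paragraph above describes.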
Counterintelligence seeks not only to frustrate hostile attempts to penetrate friendly secrets. At its highest level, counterintelligence seeks ultimately to control the hostile intelligence service.7 Active deception seeks to attract specific attackers so that they may be studied and their networks identified, but exploitation of the attacker and his net is the main aim. It seeks to manipulate the behavior of the attacker, the better to cause him to behave in ways advantageous to the defense. Exploitation is the culminating purpose of counterintelligence. That one intelligence service only rarely achieves control over another testifies to the difficulty of doing so, but the goal persists.
Intelligence may be gathered in the course of a deception operation and then studied and integrated into a deception, but those are incidental spin-off benefits. At minimum, the active deception seeks to disadvantage the hostile attacker by causing him to accept unwise risks, gather erroneous information, or behave in ways embarrassing or damaging to his sponsor. At maximum, active deception seeks to destroy the attacker, at least figuratively, by causing him to behave not merely ineffectively, but also to become a source of disruption or loss to others of his ilk.
Clearly, there is a continuum of risk associated with deception, as there is with any competitive endeavor. The actions taken to beat a competitor are bound to elicit responses from the competitor. And the responses will be commensurate with perceptions of risk or gain on both sides. Risk of failure or blowback is always part of the calculation of how and to what extent deception can be used as an element of network defense.
When to Deceive
The following is a simple diagram that attackers or defenders of networks may use to organize their thinking about deception. Clearly quite simple-minded, it is meant only to provoke. Also, it illustrates the need to think about attacking and attackers along a continuum—a fairly long one. This is also the case with deceptive defenses and defenders.
[Diagram: a simple matrix relating an attacker’s level of skill to the threat the attack poses]
Deception originating in the lower-right corner of this diagram may be least dangerous to the defenders. At first glance, attacks originating in the upper-right corner might be considered most threatening. Defenders probably would be looking here. But the most dangerous attacks might come from the lower-right corner. It depends on how cleverly the adversary can conceal himself and the quality of the defender’s intelligence and analysis.
What is one to make of other parts of the matrix? Have attacks showing high levels of skill occurred, yet posed little danger? Why would a skillful attacker bother? Perhaps this is the exercise ground for high-competence organizations that want to train or experiment without rousing alarm?
From the deceiver’s point of view, the table may look very different. The deceiver’s aim is to induce behavior, not to reap immediate results. One objective might be not to alarm the defense, so that attacks would be characterized by persistence at lower levels of threat. The hope would be to find points where exploitable behavior might be induced, or where indications that the adversary was reacting as desired could be gathered. Perhaps the center row is where deceivers are most populous. Here, the important distinction is between hackers intent only on scoring status points or committing outright crimes on the one hand, and those attempting to manipulate the behavior of networks and their managers for ultimate ends on the other.
Contemplating the upper-right side of the table from the deceiver's standpoint calls to mind the Stuxnet attack on the Iranian uranium enrichment program. A virus was inserted into the Iranian network, which caused centrifuges to malfunction or self-destruct. But the object of the operation was not merely to interfere with the ongoing program, but also to influence Iranian decision making, as well as American and Russian efforts to limit Iranian nuclear ambitions. This last is evident from the limited duration and destruction of the attack. Those planting the virus could have extended the attack and caused much more damage. Not doing so may have limited Iranian reaction and allowed the attack to function as a warning rather than a declaration of war. The technical, political, and operational sophistication of the operation makes it a model of how high-level network-based deception may work. And as such, it indicates the extensive skill set required for success: not merely technical, but also bureaucratic, political, and operational (see Holger Stark's article, "Stuxnet Virus Opens New Era of Cyber War" at www.spiegel.de/international/world/0,1518,778912,00.html). Stuxnet also suggests the extent of training and coordination underpinning the attack, as well as why it has been so difficult for the United States to field a coherent cyber defense strategy.
Just as deception is an essential element of all attacks on networks, so should deception be a constant element in the defense of networks.
Deception: Strategy and Mind-Set
Deception can be used tactically to achieve local ends for transient advantage. It is the magician’s or confidence man’s approach. The advantage sought is immediate and limited in scope. This is the style of deception that might be used in defense of a network to waste a hacker’s time, to discourage a less competent hacker, or, at most, to gather intelligence on the methods of serious hacking. Such limited deceptions have deterrent value. Their frequent exposure, whether due to failure or success, reminds attackers that they can take little for granted. Deterrence is perhaps their primary goal.
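The tactical, time-wasting style of deception can be made concrete with a small sketch. The "tarpit" below drip-feeds a plausible-looking service banner one byte at a time, stretching a one-line exchange into minutes of an attacker's time. Every name, banner, and timing here is an assumption for illustration, not a design the author prescribes.

```python
import random

# Illustrative sketch only: fake banners an unsophisticated scanner
# might expect to see. Real honeypot design is far more involved.
FAKE_BANNERS = [
    b"SSH-2.0-OpenSSH_7.4\r\n",
    b"220 mail.example.net ESMTP ready\r\n",
]

def tarpit_chunks(banner: bytes, min_delay=0.5, max_delay=2.0):
    """Yield (byte, delay) pairs; a caller would sleep for `delay`
    before sending each byte, wasting the attacker's time."""
    for i in range(len(banner)):
        yield banner[i:i + 1], random.uniform(min_delay, max_delay)

# Plan the drip-feed for one banner (without actually sleeping).
plan = list(tarpit_chunks(FAKE_BANNERS[0]))
total_wasted = sum(delay for _, delay in plan)
reassembled = b"".join(chunk for chunk, _ in plan)
```

A 21-byte banner sent this way costs the attacker between roughly 10 and 42 seconds per connection attempt, for almost no cost to the defender.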
Deception, however, may be used to gain more lasting ends. At the highest level, deception may be a metastrategy—that is, a way of unifying action across lines of activity and across time. Here, the objective is to alter the behavior of a serious attacker—individual or organization—to the defender’s advantage.
In a strategic deception, the objective is to control the adversary's response even after he finally realizes he has been had, turning a completed operation, or even a failure, toward further success. Most deceptions eventually are discovered or suspected. Rather than cutting the victim off and leaving him to plot his revenge, it would be better to wean him onto another deception, to keep him on the hook or to let him down softly. The deception plan must have a concluding out.
A strategic approach to deception would demand not only more thought, but the highest quality people. The payoff, however, might be orders of magnitude greater in terms of intelligence gained and in defeating not only the proximate attack, but also the attackers and their future attacks.
Deception is too often conceived of as a matter of tricks or fooling the adversary. One could attempt to mystify the intruder/hacker, keeping him in doubt about friendly security plans and intentions. It could even cause him to doubt the effectiveness of his activity.
One also could behave in ways that would leave an adversary mystified without a deliberate attempt to do so. For example, one could constantly change a network’s operating routines, randomize passwords, and change passwords at odd intervals. Mystifying the adversary, however, does nothing to limit or channel his behavior. As an intelligent and motivated adversary, his creativity may lead him to respond in ways the friendly side may not have imagined. Consequently, network defenders may find themselves dealing with situations for which they were unprepared.8
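The "odd intervals" idea can be sketched in a few lines: draw the next change date at random from inside a policy window, so an observer cannot infer the cadence. The 20-to-45-day window below is an invented example, not a recommendation from the text.

```python
import datetime
import random

def next_rotation(last: datetime.date, lo_days: int = 20,
                  hi_days: int = 45, rng=random) -> datetime.date:
    """Schedule the next password change at a random point inside
    the [lo_days, hi_days] policy window (assumed values)."""
    return last + datetime.timedelta(days=rng.randint(lo_days, hi_days))

today = datetime.date(2024, 1, 1)
nxt = next_rotation(today)
# The interval varies from cycle to cycle, so the cadence is
# unpredictable even to an observer who sees several changes.
assert datetime.timedelta(days=20) <= nxt - today <= datetime.timedelta(days=45)
```

As the text notes, such randomization mystifies an adversary but does not channel his behavior; it is a hygiene measure, not a deception plan.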
We do want the adversary to be clear, confident, and wrong when he tries to access protected networks. The intent of deception is to get the adversary to act confidently and predictably. But are we able to provide him the information and/or the incentive to make him so? Seizing and maintaining the initiative is perhaps the most important effect of successful deception. Having the initiative means that the adversary is forced to respond to friendly actions. This can be done by structuring situations such that the outcomes of specific operations create the conditions for subsequent operations. As noted earlier, Vladimir Lefebvre developed a technique called reflexive control. Applied to deception, its essence is structuring situations such that the victim is led by his own preferences to behave in certain ways. What does this mean in network defense terms? That is a matter of technical competence and imagination.
Another key to getting others to behave as one wishes them to, against their interest, was provided by Colonel Dudley Clarke. Clarke was instrumental in establishing the British deception organization early in World War II, and then went on to control deception in the Mediterranean. He said that in designing deception, it was necessary to consider what you would tell the enemy commander to do. If you had his ear, what would you tell him? Presumably, you would have in mind what you would do to take advantage of the situation if the adversary complied. The information you would give the adversary is the deception. What you would then do is the exploitation.9
Successful deception, then, is not the result of applying some set of techniques. It results from paying close attention to the enemy and oneself to divine which of one’s own actions will result in a desired response by an adversary. And, most important, planning how one will take advantage. Without taking advantage, what point is there in deceiving?
Intelligence and Deception
Intelligence as a noun describes information that has been gathered and processed into a form of practical use to a decision maker. Not all information is intelligence, and information of intelligence value to one decision maker may be irrelevant to another. To gather useful intelligence, the gatherers need ideas about what information they require to do whatever it is they want to do, or some authority must direct them, so that they can sort useful information from everything else. The intelligence collectors must rely on decision makers to tell them what is wanted or, at least, interact with them enough to allow the collectors to inform themselves.
As a process, intelligence requires prioritization. Resources are never sufficient to gather everything that might be useful or interesting. Prioritization exacerbates the uncertainty of intelligence gathered to support competitive activity, and defending networks is, by definition, a competitive activity. That which makes items of information attractive to adversary collectors also makes them attractive to defenders.
Intelligence is of two kinds:
• Positive intelligence (PI) is information gathered to facilitate one's own side achieving its ends.
• Counterintelligence (CI) is information gathered to prevent adversaries from compromising network defenses or defenders. A subset of CI called offensive CI seeks out and tries to penetrate hostile elements for the purpose of compromising or destroying at least the effectiveness of the adversary himself.
The effort required to gather, process, distribute, and use intelligence must be related to the degree that hostile activity does or may interfere with the operation of protected networks.
To do so requires that defenders gather a good deal of information about those who are trying to penetrate or disable their networks: who they are, how they work, what they have to work with, how well they do their work, where they get their information, and so on. The list of questions CI is concerned with is long and detailed, as is the list for PI. When these lists are formalized for the purpose of managing intelligence gatherers, they are called essential elements of information (EEI).
Only the most primitive of threats to networks have no explicit EEI. Whether explicit or implied, adversary EEIs are targets of intelligence interest because such lists can be analyzed to divine what adversaries know or are still looking for, and, by implication, what they are trying to protect or intend to do.
The discussion thus far has brought us to a major conundrum at the center of the subject of this book. From whom are we defending networks, and from what? On the one hand, defenders know pretty well what technical techniques are used to penetrate networks; technique is constrained by the nature of the technology, which others may know as well as we do. On the other hand, we have only very general ideas about who is behind attacks, because the potential cast of characters is huge, ranging from precocious schoolboys to major foreign governments and organized crime. The ephemeral nature of networks, even protected networks, and of their content does not help focus the effort.
And yet, computer networks are created and operated by human beings for human purposes. At the center of it all are human beings with all their foibles and vulnerabilities. The better those are understood on both the PI and CI sides, the more effective defenses and defensive deceptions may be.
Defenders are hungry for data. The more data they have about the nature of networks, their contents, and the humans who operate and maintain them, the better the networks can be defended. But where do potential deceivers get the needed information? The technology is a large part of the answer. Networks must be stable and predictable. To get information out of a network, an adversary must be able to use the protocols designed into the network, or he must gain the cooperation of someone who can provide, so to speak, the keys to it.
Intelligence and deception are like the chicken and egg.
Gathering intelligence involves defining which elements of information are required and how the means to collect them are to be allocated. This is always done under conditions of uncertainty, because that which makes items of information attractive to collectors makes them attractive to defenders.
Deception inevitably becomes involved as collector and defender vie. That which must exist yet must remain proprietary begs to be disguised, covered up, or surrounded with distractions. So, in addition to the primary information being sought, the intelligence collector needs to gather information about the opposing intelligence. What does he know about what you want to know? How does he conceal, disguise, and distract? The hall of mirrors is an apt metaphor for the endless process of intelligence gathering, denying, and deceiving.
Yet the intelligence process must have limits. There are practical constraints on time and budget. If the process is to have any value at all, conclusions must be reached and actions taken before surprises are sprung or disaster befalls. A product needs to be produced summarizing what is known as a basis for deciding what to do—whether to attempt to deceive, compromise, or destroy a hostile network.
Clearly, transactions in the chaotic, highly technical, and valuable world of computer networks require a high degree of intelligence on the part of the people involved in designing and executing them. But for effective defense, this executive or administrative intelligence is not sufficient. Another kind of intelligence is needed for effective defense.
A manipulative intelligence is needed. This is an understanding of how the various interests that created the network relate to each other. How are those who seek unauthorized access motivated? What advantage do they seek by their access? What means are available to them to determine their technical approaches or their persistence? How could this knowledge be deployed to manipulate the behavior of those seeking harmful access to our protected networks?
The main and obvious means of defending databases and networks are analogs to physical means: walls, guarded gates, passwords, and trusted people. These are as effective in the cyber world as in the physical one—which is to say, rather effective against people inclined to respect them. The old saying that “locks only keep honest people honest” applies here.
Unfortunately, the laws of supply and demand also apply. The more valuable the protected good, the more effort a competent thief is likely to mobilize to acquire it. A highly motivated thief has been able to penetrate very high walls, get through multiple guarded gates, acquire passwords, and suborn even highly trusted people.
Passive protections do not suffice. Active measures are required. Intelligence is required whatever the answer, and deception requires intelligence—both the product and the trait.
What Constraints Apply
The first constraint is the nature of the computer and the network. To work, they must have clear and consistent rules. In order to work across space and time, the rules must be stable and be widely known and reliably implemented.
At the same time, networks have many interconnections and many rules. It is hard to know what the permutations of all of them may be in terms of allowing access to unauthorized persons. More to the point, networks involve many people who use the content in their work, as well as network administrative and support personnel. All these people—whether accidentally or deliberately—are potential sources of substantive and technical leaks.
If the unauthorized seeker of protected information is necessarily also a deceiver, and if efforts to defend that information necessarily involve deception, an ethical question moves to the center of our concern. How can we actively deceive unknown adversaries in cyberspace without damaging innocent third parties? Deceptive information offered to bait an intruder might be found and used by an innocent party. Such a person might be duped into committing unauthorized acts. Or acts induced by defense-intended deceptive information might cause unanticipated damage to networks or persons far from an intended adversary.
So we must discriminate among adversaries: between the serious criminal and the experimenting teenager, between the hobby hacker curious to see what she can do and the hacker for hire, and so on. To do so requires an intelligence program—a serious, continuing effort to gather information on those attempting to intrude on protected networks. The object of the effort is to allocate defensive resources and responses proportionate to the potential damage or expense of compromise.
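One way to make "responses proportionate to the potential damage" operational is a simple scoring scheme fed by the intelligence program the text describes. The indicators, weights, and response tiers below are invented for illustration; a real program would derive and recalibrate them from continuing collection and analysis.

```python
# Hypothetical indicator weights: none of these names or numbers come
# from the text; they only illustrate the triage idea.
INDICATOR_WEIGHTS = {
    "custom_tooling": 3,       # bespoke malware suggests resources
    "known_crime_infra": 3,    # infrastructure tied to organized crime
    "persistence_days": 1,     # sustained effort over time
    "targets_crown_jewels": 4, # interest in the most damaging data
}

def threat_score(indicators: dict) -> int:
    """Weight the observed indicators into a single rough score."""
    return sum(INDICATOR_WEIGHTS.get(k, 0) * int(v)
               for k, v in indicators.items())

def response_tier(score: int) -> str:
    """Map a score to a proportionate (illustrative) defensive posture."""
    if score >= 7:
        return "active deception + CI referral"
    if score >= 3:
        return "monitor and gather intelligence"
    return "routine blocking"

# An experimenting teenager versus a serious, well-resourced intruder.
casual = {"persistence_days": 1}
serious = {"custom_tooling": True, "targets_crown_jewels": True}
```

The point is not the arithmetic but the discipline: defenders record what they observe about each intruder and let that record, rather than alarm, drive the scale of the response.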
If there is anything we can know about deception it is that unintended consequences will follow. How deception is done is not a matter of choosing a technique or a technology. These are limited only by the creativity of the adversary and defender, and the skill with which they deploy their technologies. It is a matter of judgment and design, and the extent to which they each accept responsibility for the consequences of their actions. There is almost certain to be an asymmetrical relationship between the sense of responsibility on each side. One side will be constrained by legal and ethical responsibilities; the other side will not.
In the largest frame, the purpose of deception is to reduce the level of uncertainty that accompanies any transaction between a computer security system and an intruder to the advantage of the defender.
When Failure Happens
Efforts to deceive are likely to fail eventually. By definition, deceptions culminate in an exploitation that typically will be to the disadvantage of the adversary. It would be a dim opponent who did not realize that something had happened. But carnival knock-over-the-bottles games thrive on keeping people paying for the next sure-to-win throw.
The D-Day deceptions of World War II were highly successful but were in the process of failing by the time of the actual invasion, because the object of the deceptions was to cover the date of an event that was certain to occur sooner or later. When the German observers looked out at dawn on June 6, 1944, and saw a sea covered by ships headed in their direction, the deception was over.10
Deception may fail for many reasons along the chain of events that must occur from original idea to conclusion. Every operation may fail due to the quality of the effort, the forces of nature, the laws of probability, or other reasons. Here are just a few possible reasons for failure:
• We may fail to get our story to the adversary, thereby failing to influence him.11
• The adversary may misinterpret the information we provide him, thereby behaving in ways we may be unprepared to exploit.
• We may fail to anticipate all the adversary's potential responses.
• We may fail to execute the deceptive operation competently. Competence is always desirable, but never more so than when executing a deception.
• We may fail to prepare the exploitation adequately.
British General Wavell and his deception chief, Brigadier Dudley Clarke, learned a valuable and highly relevant lesson early in World War II while fighting the Italian forces in Abyssinia (Ethiopia). Wavell was planning an attack on the north flank, so he feigned an attack on the south to draw Italian reserves away from the north. Unfortunately, the Italians were not privy to his plans: they withdrew from the south and reinforced the north. Evidently, the Italians had their own ideas about the relative importance of the two flanks. From this experience, Clarke drew a lesson still relevant. Deception plans start with the question "What do you want the enemy to do?" They never start with "What do you want the enemy to think?" (from David Mure's Master of Deception: Tangled Webs in London and the Middle East, William Kimber, London, 1980).
Even a great deception success can have ambiguous causes. The D-Day landings at Normandy in June 1944 are a case in point. With apparent great success, thousands of people labored over literally half the globe to plan and execute deceptions to prevent the Germans from learning the exact day and place for the main landings, and to divert German forces from the invasion area. But it is not clear that the success of those deceptions deserves the whole credit for the success of the landings themselves.
It can be argued that the single most critical decision leading to the success on June 6, 1944, was the decision to go ahead with the landings despite the prediction of unfavorable weather in the June window. Allied commanders feared that by the time of the next favorable window of tides, daylight, and moonlight a month later, the Germans would have penetrated the secret and strengthened or altered their defenses sufficiently to defeat the invasion. Because of their weather prediction and their very careful and accurate intelligence assessment, the Germans concluded that the Allies would not land during the early June window. Their analysis of previous Allied amphibious operations had given the Germans high confidence that they understood the Allies' criteria, and their best weather prediction was that those criteria would not be met during the early June window.
But their weather prediction was wrong. It missed a two-day break in the weather that was coming from the northeast and was due to arrive in the Normandy area early on June 6. Because of that gap in their weather intelligence, the Germans were not at the highest state of readiness that good weather would have dictated. The German commander, Rommel, was at home in Stuttgart for his wife’s birthday. Many of the senior German staff members were away from their headquarters for a war game.
The German weather prediction was off because the US Coast Guard and Navy had uprooted German meteorological stations in Greenland, Iceland, and the North Atlantic early in the war. At the time of the landings in June 1944, German weather reporting was confined to two or three U-boats in the North Atlantic, which was inadequate for accurate prediction of weather in western Europe (from British Intelligence in the Second World War, vol. 2, "Annex 7: German Meteorological Operations in the Arctic, 1940-1941," and vol. 3, by F. H. Hinsley et al., Cambridge University Press, New York, 1981).
Does that mean the effort to deceive the Germans was wasted or unnecessary? Not at all. The war required every effort and resource. If resources were available, no possible effort to deceive the enemy was refused. As Churchill said, “In wartime, truth is so precious that she should always be attended by a bodyguard of lies.”
How precious is your network?
1 For more on this story, visit http://kasmana.people.cofc.edu/MATHFICT/mfview.php?callnumber=mf102. It was further popularized by the song “Charlie on the MTA,” recorded by the Kingston Trio in 1959.
2 How is it that hog-nosed snakes and possums know that stillness turns predators off, and other prey animals do not?
3 Discerning the beliefs and intentions of adversaries is always a goal, but often an adversary does not himself know what his beliefs and intentions may be at critical junctures. Before both the 1967 and 1973 Arab-Israeli wars, intelligence agencies sought to know when the war would start (in both cases, war was anticipated). Later research showed that there was no date certain when it would start. Until the moment the “go” command was given, delay or postponement was always possible. For plainspoken insight into this issue see, “Indications, Warning, and Crisis Operations” by Thomas G. Belden, in International Studies Quarterly, Vol. 21, No. 1 (March, 1977).
4 What does this say about the internal thief—the one who has authorized access, but uses that access to steal or cause or allow to be stolen the protected data? As we’ve said, network defense is neither solely nor even primarily a technical matter.
5 For various methodological reasons, the study was never formally published. Fred Feer, however, would be glad to discuss the findings and debate them with anyone interested.
6 Blowback is a term of art referring to the unintended consequences of failed covert operations. Is there any surprise that bested competitors strike back?
7 There is no better introduction to this concept than John LeCarre’s Tinker, Tailor, Soldier, Spy; The Honourable Schoolboy; and Smiley’s People. Although these novels are fictional, they capture the essence of how one must think about deception.
8 A classic military case was the one where the British deceived the Italian forces in Ethiopia early in World War II. They feigned strength in the south hoping to lure Italian forces away from their intended attack in the north. The Italians evidently had a different assessment of the situation. They reinforced the north. For more information, see David Mure’s Master of Deception: Tangled Webs in London and the Middle East (William Kimber, 1980).
9 As an appendix to his book Master of Deception: Tangled Webs in London and the Middle East, David Mure reproduces Colonel Clarke’s reflections on the practice of deception in the Mediterranean in 1941 through 1945.
10 Or was it? David Eisenhower’s Eisenhower at War 1943-1945 (Random House, 1986) suggests that the Allies retained sufficient troops and shipping in England to make “secondary” landings at Calais or Brittany if the Normandy landings failed. Was that also part of the deception plan to force the Germans to disperse their defenses in the critical early days?
11 In his paper, "The Intelligence Process and the Verification Problem" (The RAND Corp., 1985), F. S. Feer illustrates the difficulty of acquiring good intelligence from an uncooperative target. The analogous problem is conveying convincing information to an uncooperative target/adversary.