In general, deception refers to the creation of a system component that looks real but is in fact a trap. It serves several security objectives: diverting an attacker's attention toward bogus assets, wasting the attacker's time and energy, creating uncertainty, and enabling real-time security analysis (Amoroso, 2012). A well-developed deceptive system therefore presents a uniform interface that prevents the intruder from distinguishing real assets from bogus ones.
The most common form of deception is the creation of fake attack entry points in the form of honeypots (Cohen, 2006). Engagement with a deceptive system proceeds through several stages: scanning (the adversary searches for exploitable entry points), discovery (the adversary finds entry points, which may be real or fake), exploitation (the adversary uses a discovered vulnerability), and exposing (the defender observes the adversary's behavior) (Amoroso, 2012). Each of these stages may raise serious legal and social issues that require the attention of the national legal community.
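The fake-entry-point idea can be sketched as a minimal decoy listener: a service that binds a port, presents a realistic-looking banner, and logs every connection without offering any real functionality. This is an illustrative sketch, not a production honeypot; the banner string, log format, and function names are assumptions for the example.

```python
import datetime
import socket
import threading

def log_attempt(log, peer, banner_sent):
    """Record who probed the decoy and when (the 'exposing' stage)."""
    log.append({
        "peer": peer,
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "banner": banner_sent,
    })

def run_decoy(host="127.0.0.1", port=0, log=None, max_conns=1):
    """Listen on a fake service port and log each connection.

    The decoy presents a plausible banner so that a scanner cannot
    tell it apart from a real service (a fake entry point).
    Returns the bound port, the shared log list, and the server thread.
    """
    log = log if log is not None else []
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))          # port=0 lets the OS pick a free port
    srv.listen(1)
    bound_port = srv.getsockname()[1]
    banner = b"SSH-2.0-OpenSSH_8.9\r\n"  # bogus banner; looks real to a scanner

    def serve():
        for _ in range(max_conns):
            conn, addr = srv.accept()
            conn.sendall(banner)     # play along just enough to be believable
            log_attempt(log, addr[0], banner.decode().strip())
            conn.close()
        srv.close()

    t = threading.Thread(target=serve, daemon=True)
    t.start()
    return bound_port, log, t
```

A scanner connecting to this port sees only the banner, while the defender gains a timestamped record of the probe.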
Consequently, the major goal of a deceptive system is usually to observe adversaries in action. Based on an analysis of these actions, the administrator takes corrective measures, such as restricting access after repeated password guessing or after access to specially placed bogus documents. Despite this, deception remains a poorly understood security approach because of its complexity and because it demands an in-depth understanding of the infrastructure. Constructing an effective deceptive system therefore rests on rationales such as selective infrastructure use, sharing of results and insights, and reuse of tools and methods.
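The two administrative triggers mentioned above, repeated password guessing and access to specially placed bogus documents, can be expressed as simple queries over the honeypot's event log. The event schema, the threshold of five failed guesses, and the function names below are assumptions made for this sketch.

```python
from collections import Counter

FAILED_GUESS_LIMIT = 5  # assumed policy threshold, not taken from the source

def sources_to_restrict(events, limit=FAILED_GUESS_LIMIT):
    """Flag sources whose failed password guesses reach the limit.

    Each event is assumed to be a dict with 'source', 'action',
    and 'success' keys, as a honeypot logger might emit.
    """
    failures = Counter(
        e["source"]
        for e in events
        if e["action"] == "password_guess" and not e["success"]
    )
    return sorted(src for src, n in failures.items() if n >= limit)

def sources_touching_decoys(events, decoy_paths):
    """Flag sources that read a specially placed bogus document.

    Any access to a decoy path is suspicious by design, since no
    legitimate workflow references these documents.
    """
    return sorted({
        e["source"]
        for e in events
        if e["action"] == "file_read" and e.get("path") in decoy_paths
    })
```

The administrator would feed the flagged sources into an access-control or alerting mechanism; the sketch stops at detection.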
Question: What are the major inefficiencies in deception technology today, and how can they be mitigated?
Amoroso, E. (2012). Cyber attacks: protecting national infrastructure. Elsevier.
Cohen, F. (2006). The use of deception techniques: Honeypots and decoys. Handbook of Information Security, 3(1), 646-655.