An Application of Automated Theorem Provers to Computer System Security
The Schematic Protection Model (SPM) is specified in SAL, and theorems about Take-Grant and New Technology File System schemes are proven. Arbitrary systems can be specified in SPM and analyzed. This is the first known automated analysis of SPM specifications in a theorem prover. The SPM specification was created in such a way that new specifications share the underlying framework and are configurable within the specification file alone. This allows new specifications to be created with ease, as demonstrated by the four unique models included within this document, and allows future users to specify models without recreating the framework. The built-in modules of SAL provided the support needed to make the model flexible and the entities asynchronous. This flexibility allows the number of entities to be dynamic and to meet the needs of different specifications. The models analyzed in this research demonstrate the validity of the specification and its application to real-world systems.
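As a rough illustration of the kind of rewriting rules such a scheme encodes, the sketch below implements the Take-Grant "take" and "grant" rules over a simple protection graph in Python. It is only a conceptual aid under assumed rule semantics, not the SAL specification or its proof framework; the class and function names are hypothetical.

    # Minimal sketch (not the thesis's SAL specification): a protection graph
    # with the Take-Grant "take" and "grant" rewriting rules, for illustration only.
    from collections import defaultdict

    class ProtectionGraph:
        def __init__(self):
            # rights[(source, target)] = set of rights, e.g. {"take", "grant", "read"}
            self.rights = defaultdict(set)

        def add_right(self, src, dst, right):
            self.rights[(src, dst)].add(right)

        def take(self, s, x, y, right):
            """s takes 'right' over y from x: requires s --take--> x and x --right--> y."""
            if "take" in self.rights[(s, x)] and right in self.rights[(x, y)]:
                self.rights[(s, y)].add(right)
                return True
            return False

        def grant(self, x, s, y, right):
            """x grants 'right' over y to s: requires x --grant--> s and x --right--> y."""
            if "grant" in self.rights[(x, s)] and right in self.rights[(x, y)]:
                self.rights[(s, y)].add(right)
                return True
            return False

    g = ProtectionGraph()
    g.add_right("alice", "bob", "take")
    g.add_right("bob", "file1", "read")
    print(g.take("alice", "bob", "file1", "read"))  # True: alice now holds read on file1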
Using Sequence Analysis to Perform Application-Based Anomaly Detection Within an Artificial Immune System Framework
The Air Force and other Department of Defense (DoD) computer systems typically rely on traditional signature-based network IDSs to detect various types of attempted or successful attacks. Signature-based methods are limited to detecting known attacks or similar variants; anomaly-based systems, by contrast, alert on behaviors previously unseen. The development of an effective anomaly-detecting, application-based IDS would increase the Air Force's ability to ward off attacks that are not detected by signature-based network IDSs, thus strengthening the layered defenses necessary to acquire and maintain safe, secure communication capability. This system follows the Artificial Immune System (AIS) framework, which relies on a sense of "self," or normal system states, to determine potentially dangerous abnormalities ("non-self").
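To make the sequence-analysis idea concrete, the sketch below builds a "self" profile from n-grams of normal application event traces and scores new traces by the fraction of previously unseen n-grams. It is a minimal illustration in the spirit of system-call sequence methods, not the thesis's detector; the function names, window size, and example traces are assumptions.

    # Hypothetical sketch of "self"/"non-self" detection over event sequences.
    def ngrams(seq, n=4):
        return {tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)}

    def build_self_profile(normal_traces, n=4):
        """Union of all n-grams seen during normal application runs."""
        profile = set()
        for trace in normal_traces:
            profile |= ngrams(trace, n)
        return profile

    def anomaly_score(trace, profile, n=4):
        """Fraction of n-grams in a new trace that were never seen in 'self'."""
        grams = ngrams(trace, n)
        if not grams:
            return 0.0
        return len(grams - profile) / len(grams)

    # Example with made-up system-call traces:
    normal = [["open", "read", "write", "close", "open", "read", "close"]]
    profile = build_self_profile(normal)
    print(anomaly_score(["open", "read", "exec", "socket", "close"], profile))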
Performance Analysis and Comparison of Multiple Routing Protocols in a Large-Area, High-Speed Mobile Node Ad Hoc Network
The U.S. Air Force is interested in developing a standard ad hoc framework using "heavy" aircraft to route data across large regions. The Zone Routing Protocol (ZRP) has the potential to provide seamless large-scale routing for DoD under the Joint Tactical Radio System (JTRS) program. The goal of this study is to determine if there is a difference in routing protocol performance when operating in a large-area MANET with high-speed mobile nodes. This study analyzes MANET performance when using reactive, proactive, and hybrid routing protocols, specifically AODV, DYMO, Fisheye, and ZRP. This analysis compares the performance of the four routing protocols under the same MANET conditions. Average end-to-end (ETE) delay, number of packets received, and throughput are the performance metrics used. Results indicate that routing protocol selection impacts MANET performance. Reactive protocol performance is better than hybrid and proactive protocol performance in each metric. Average ETE delays are lower using AODV (1.17 secs) and DYMO (2.14 secs) than ZRP (201.9 secs) or Fisheye (169.7 secs). Number of packets received is higher using AODV (531.6) and DYMO (670.2) than ZRP (267.3) or Fisheye (186.3). Throughput is higher using AODV (66,500 bps) and DYMO (87,577 bps) than ZRP (33,659 bps) or Fisheye (23,630 bps). The benefits of ZRP and Fisheye cannot be exploited in the MANET configurations modeled in this research using a "heavy" aircraft ad hoc framework.
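For readers unfamiliar with the three metrics, the sketch below shows how average ETE delay, packets received, and throughput can be computed from per-packet send/receive records. The study's figures come from its simulation tool, not from this script; the record format and function name are assumptions for illustration.

    # Illustrative only: computing the three reported metrics from packet records.
    def manet_metrics(packets, duration_s):
        """packets: list of dicts with 'sent', 'received' (seconds or None), 'bits'."""
        delivered = [p for p in packets if p["received"] is not None]
        avg_ete_delay = (sum(p["received"] - p["sent"] for p in delivered) / len(delivered)
                         if delivered else float("nan"))
        packets_received = len(delivered)
        throughput_bps = sum(p["bits"] for p in delivered) / duration_s
        return avg_ete_delay, packets_received, throughput_bps

    pkts = [{"sent": 0.0, "received": 1.2, "bits": 8000},
            {"sent": 0.5, "received": None, "bits": 8000},   # dropped packet
            {"sent": 1.0, "received": 2.1, "bits": 8000}]
    print(manet_metrics(pkts, duration_s=10.0))  # (1.15, 2, 1600.0)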
Software Obfuscation With Symmetric Cryptography
Software protection is of great interest to commercial industry. Millions of dollars and years of research are invested in the development of proprietary algorithms used in software programs. A reverse engineer who successfully reverses another company's proprietary algorithms can develop a competing product to market in less time and with less money. The threat is even greater in military applications, where adversarial reverse engineers can analyze unprotected military software to compromise capabilities in the field or develop their own capabilities with significantly fewer resources. Thus, it is vital to protect software, especially the software's sensitive internal algorithms, from adversarial analysis. Software protection through obfuscation is a relatively new research initiative. The mathematical and security communities have yet to agree upon a model to describe the problem, let alone the metrics used to evaluate the practical solutions proposed by computer scientists. We propose evaluating solutions to obfuscation under the intent protection model, a combination of white-box and black-box protection that reflects how reverse engineers analyze programs using a combination of white-box and black-box attacks. In addition, we explore the use of experimental methods and metrics from analogous and more mature fields of study such as hardware circuits and cryptography. Finally, we implement a solution under the intent protection model that demonstrates application of the methods and evaluation using the metrics adapted from the aforementioned fields of study to reflect the unique challenges of a software-only software protection technique.
Metamorphic Program Fragmentation as a Software Protection
Unauthorized reverse-engineering of programs and algorithms is a major problem for the software industry. Every program released to the public can be analyzed by any number of malicious reverse-engineers. These reversers search for security holes in the program to exploit or try to steal a competitor's vital algorithms. While it can take years and millions of dollars worth of research to develop new software, a determined reverser can reverse-engineer the program in a fraction of the time. To discourage reverse-engineering attempts, developers use a variety of software protections to obfuscate their programs. However, these protections are generally static, allowing reverse-engineers to eventually adapt to the protections, defeat them, and sometimes build automated tools to defeat them in the future. Metamorphic software protections add another layer of protection to traditional static obfuscation techniques. Metamorphic protections force a reverser to adjust their attacks as the protection changes. Program fragmentation combines two obfuscation techniques, outlining and obfuscated jump tables, into a new, metamorphic protection. Sections of code are removed from the main program flow and randomly placed throughout memory, reducing the program's locality. These fragments move while the program is running and are called using obfuscated jump tables, making program execution difficult to follow. This research assesses the performance overhead of a program fragmentation metamorphic engine and provides a qualitative analysis of its effectiveness against reverse-engineering techniques. Program fragmentation has very little associated overhead, with execution times for individual fragments of less than one microsecond. This low overhead allows a large number of fragments to be inserted into programs for protection. In addition, program fragmentation is an effective technique for complicating analysis of programs using two common disassembler/debugger tools.
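The sketch below gives a language-level caricature of the two combined techniques: outlined fragments reached through an indirection table whose keys change while the program runs. The actual protection operates on machine code and memory layout, so this Python illustration conveys only the control-flow idea; all names and the re-keying scheme are assumptions.

    # Conceptual sketch only (the thesis works at the machine-code level, not Python):
    # fragments of a routine are pulled out of line, reached through an indirection
    # table, and periodically re-keyed (a stand-in for relocating them in memory).
    import random

    def _frag_a(x):      # outlined fragment 1
        return x + 1

    def _frag_b(x):      # outlined fragment 2
        return x * 2

    jump_table = {}      # obfuscated key -> fragment

    def _rekey():
        """Assign fragments fresh random keys so the call pattern keeps changing."""
        global jump_table
        keys = random.sample(range(1_000_000), 2)
        jump_table = {keys[0]: _frag_a, keys[1]: _frag_b}
        return keys

    def protected_routine(x):
        k_a, k_b = _rekey()                            # table changes on every call
        return jump_table[k_b](jump_table[k_a](x))     # (x + 1) * 2 via indirection

    print(protected_routine(3))  # 8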
Offensive Cyber Capability
The subject of cyberterrorism has become a topic of increasing importance to both the U.S. government and military. Offensive cyber capabilities provide a means to mitigate risk to U.S. systems that depend on the Internet to conduct business. In combination with passive security measures, offensive cyber capabilities seem to add to the level of Internet security, thereby securing cyberspace for all Americans. The intent of this monograph is to identify the strengths and weaknesses of an offensive cyber capability in order to visualize the various options and tradeoffs necessary to achieve an acceptable level of security. The idea of convergence continues to bring together separate technologies using the Internet in order to interact and become more efficient. The effect of this phenomenon has increased the speed with which information is shared, helped businesses become more competitive, and provided different means to distribute information. This same convergence has made the Internet a prime target, as it has the potential to affect the economy and critical infrastructure and to limit the freedoms of others in the cyberspace arena. Due to the increasing complexity of technology, vulnerabilities will continue to surface that can be taken advantage of. Technology is also becoming cheaper and easier to operate, granting any motivated individual with access to the Internet the ability to identify network vulnerabilities and exploit them. These themes are important as they identify that the U.S. is highly dependent on the Internet, making it imperative that feasible security options be identified in order to secure cyberspace. A cyberterrorist act has not occurred; therefore, there is no empirical evidence upon which to develop case studies and generate learning. An agent-based model using basic parameters drawn from the literature review and logical deductions reveals several key relationships. First, there is a balance between an offensive cyber capability and passive defensive measures.
Developing a Qualia-Based Multi-Agent Architecture for Use in Malware Detection
Detecting network intruders and malicious software is a significant problem for network administrators and security experts. New threats are emerging at an increasing rate, and current signature and statistics-based techniques are not keeping pace. Intelligent systems that can adapt to new threats are needed to mitigate these new strains of malware as they are released. This research detects malware based on its qualia, or essence, rather than its low-level implementation details. By looking for the underlying concepts that make a piece of software malicious, this research avoids the pitfalls of static solutions that focus on predefined bit sequence signatures or anomaly thresholds. This research develops a novel, hierarchical modeling method to represent a computing system and demonstrates the representation's effectiveness by modeling the Blaster worm. Using Latent Dirichlet Allocation and Support Vector Machines, abstract concepts are automatically generated that can be used in the hierarchical model for malware detection. Finally, the research outlines a novel system that uses multiple levels of individual software agents that share contextual relationships and information across different levels of abstraction to make decisions. This qualia-based system provides a framework for developing intelligent classification and decision-making systems for a number of application areas.
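One plausible realization of the LDA-plus-SVM step is sketched below: topic mixtures over behavioral event tokens serve as concept features for a support vector classifier. The libraries, token vocabulary, labels, and pipeline structure are assumptions of this illustration, not the thesis's implementation.

    # Hedged sketch: combining LDA topic features with an SVM classifier.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline

    # Hypothetical "documents": behavioral event tokens per observed process.
    traces = ["open_port135 rpc_bind send_exploit spawn_shell tftp_get",
              "open_file read_file write_file close_file",
              "open_port135 rpc_bind send_exploit copy_self",
              "http_get render_page write_cache"]
    labels = [1, 0, 1, 0]   # 1 = malicious (Blaster-like behavior), 0 = benign

    clf = make_pipeline(
        CountVectorizer(),                                          # token counts
        LatentDirichletAllocation(n_components=2, random_state=0),  # topic mixtures as concepts
        SVC(kernel="rbf"),                                          # classify in concept space
    )
    clf.fit(traces, labels)
    print(clf.predict(["rpc_bind send_exploit spawn_shell"]))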
To Click or Not to Click
Today's Air Force networks are under frequent attack. One of the most pernicious threats is a sophisticated phishing attack that can lead to complete network penetration. Once an adversary has gained network entry, they are in a position to exfiltrate sensitive data or pursue even more active forms of sabotage. However, promising technical advances proposed in current research can help mitigate the threat, and user education will continue to play an important role in increasing the effectiveness of AF defenses. This paper reviews and recommends the most promising suggestions for adaptation and application in today's AF networks.
Detecting Man-in-the-Middle Attacks Against Transport Layer Security Connections With Timing Analysis
The Transport Layer Security (TLS) protocol is a vital component of the protection of data as it traverses networks. From e-commerce websites to Virtual Private Networks (VPNs), TLS protects massive amounts of private information, and protecting this data from Man-in-the-Middle (MitM) attacks is imperative to keeping the information secure. This thesis illustrates how an attacker can successfully perform a MitM attack against a TLS connection without alerting the user to his activities. By deceiving the client machine into using a false certificate, an attacker takes away the only active defense mechanism a user has against a MitM. The goal of this research is to determine if a time threshold exists that can indicate the presence of a MitM in this scenario. An analysis of the completion times of TLS handshakes without a MitM, with a passive MitM, and with an active MitM is used to determine if this threshold is calculable. Any conclusive findings supporting the existence of a timing baseline can be considered the first steps toward finding the value of the threshold and creating a second-layer defense to actively protect against a MitM.
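A minimal sketch of the kind of measurement such an analysis relies on is shown below: it times TLS handshake completion and compares one handshake against a baseline. The threshold heuristic, factor, host, and function names are assumptions for illustration; the thesis's experimental setup and statistics are more involved.

    # Illustrative sketch (not the thesis's measurement harness): timing TLS
    # handshake completion and flagging handshakes that exceed an assumed threshold.
    import socket, ssl, time, statistics

    def handshake_time(host, port=443, timeout=5.0):
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=timeout) as sock:
            start = time.perf_counter()
            with ctx.wrap_socket(sock, server_hostname=host):   # handshake completes here
                return time.perf_counter() - start

    def looks_like_mitm(host, baseline_samples=10, factor=3.0):
        """Very rough heuristic: compare one handshake against a median baseline."""
        baseline = [handshake_time(host) for _ in range(baseline_samples)]
        threshold = factor * statistics.median(baseline)   # 'factor' is an assumed parameter
        return handshake_time(host) > threshold

    print(looks_like_mitm("example.com"))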
Routing of Time-Sensitive Data in Mobile Ad Hoc Networks
Mobile networks take the communication concept one step further than wireless networks. In these networks, all nodes are assumed to be mobile. These networks are also called mobile ad hoc networks, due to their mobility and random configurations. Ad hoc networking is a relatively new concept; consequently, much research is in progress focusing on each level of the network stack of ad hoc networks. This research focuses on the routing of time-sensitive data in ad hoc networks. A routing protocol named Ad hoc On-demand Distance Vector (AODV), which has been developed by the Internet Engineering Task Force (IETF) for ad hoc networks, has been studied. Taking this protocol as a point of departure, a new routing protocol, the Real Time Routing Protocol (RTRP), was developed with the characteristics of time-sensitive data in mind. These two routing protocols have been modeled using OPNET, a discrete-event network simulation tool, and simulations were run to compare their performance.
Defining Our National Cyberspace Boundaries
In February 2009, the Obama Administration commissioned a 60-day review of the United States' cyber security. A near-term action recommended by the 60-day review was to prepare an updated national strategy to secure information and communications infrastructure.
Toward Cyber Omniscience
It is widely accepted that cyberspace is a vulnerable and highly contested environment. The United States has faced, and will continue to face, threats to its national security in this realm. As a result, the Office of the Secretary of Defense (OSD) has decided to consider new and evolving theories of deterrence to address the cyber domain. This OSD-sponsored paper examines a new cyberspace deterrence option known as cyber omniscience. Set in the year 2035, this paper begins the process of developing the theory of cyber omniscience as a DoD deterrent. At the heart of cyber deterrence lies this question: "As technology rapidly advances in the contested cyber domain, can hostile individuals be deterred from employing highly advanced technologies through cyberspace that threaten national survival?" To answer this question, this paper investigates a number of issues with regard to cyberspace deterrence: anticipated life (societal norms) and technology in 2035, hostile individual threats, what cyber omniscience entails, privacy issues, and policy recommendations. This multi-pronged approach will serve as the catalyst to a better understanding of the future of cyberspace, the threats, and deterrence.
Digital Warfare
Digital Data Warfare (DDW) is an emerging field that has great potential as a means to meet military, political, economic, and personal objectives. Distinguished from the "hacker" variety of malicious computer code by its predictable nature and its ability to target specific systems, DDW provides the hacker with the means to deny, degrade, deceive, and/or exploit a targeted system. The five phases of a DDW attack (penetration, propagation, dormancy, execution, and termination) are presented for the first time by the author in this paper. Its nature allows it to be used in strategic, operational, and tactical warfare roles. Three questions should be considered when developing a strategy for employing DDW: (1) Who should control the employment of DDW? (2) What types of systems should be targeted? (3) Under what circumstances should DDW be used? Finally, a brief overview of possible countermeasures against DDW is provided, as well as an outline of an effective information system security program that would provide a defense against DDW.
Cybermad
"Cyberspace has grown in importance to the United States (US), as well as the rest of the word. As such, the impact of cyberspace attacks have increased with time. Threats can be categorized as state or non-state actors. This research paper looks at state actors. It asks the question, should the US adopt a mutually assured destruction (MAD) doctrine for cyberspace? In order to answer this question, this research used a parallel historical case study. The case study was the US's nuclear MAD doctrine of the 1960s.This work has been selected by scholars as being culturally important, and is part of the knowledge base of civilization as we know it. This work was reproduced from the original artifact, and remains as true to the original work as possible. Therefore, you will see the original copyright references, library stamps (as most of these works have been housed in our most important libraries around the world), and other notations in the work.This work is in the public domain in the United States of America, and possibly other nations. Within the United States, you may freely copy and distribute this work, as no entity (individual or corporate) has a copyright on the body of the work.As a reproduction of a historical artifact, this work may contain missing or blurred pages, poor pictures, errant marks, etc. Scholars believe, and we concur, that this work is important enough to be preserved, reproduced, and made generally available to the public. We appreciate your support of the preservation process, and thank you for being an important part of keeping this knowledge alive and relevant.
Patching the Wetware
In the practice of information security, it is increasingly observed that the weakest link in the security chain is the human operator. A reason often cited for this observation is that the human factor is simpler and cheaper to manipulate than the complex technological protections of digital information systems. Current anecdotes where the human was targeted to undermine military information protection systems include the 2008 breach of USCENTCOM computer systems with a USB device, and the more recent 2010 compromise of classified documents published on the WikiLeaks website. These infamous cases, among others, highlight the need for more robust human-centric information security measures to mitigate the risks of social engineering. To address this need, this research effort reviewed seminal works on social engineering and from the social psychology literature in order to conduct a qualitative analysis that establishes a link between the psychological principles underlying social engineering techniques and recognized principles of persuasion and influence. After this connection is established, several theories from the social psychology domain on how to develop resistance to persuasion are discussed as they could be applied to protecting personnel from social engineering attempts. Specifically, the theories of inoculation, forewarning, metacognition, and dispelling the illusion of invulnerability are presented as potential defenses.
Defensive Cyber Battle Damage Assessment Through Attack Methodology Modeling
Due to the growing sophistication of advanced persistent cyber threats, it is necessary to understand and accurately assess cyber attack damage to digital assets. This thesis proposes a Defensive Cyber Battle Damage Assessment (DCBDA) process which utilizes the comprehensive understanding of all possible cyber attack methodologies captured in a Cyber Attack Methodology Exhaustive List (CAMEL). This research proposes CAMEL to provide detailed knowledge of cyber attack actions, methods, capabilities, forensic evidence, and evidence collection methods. This product is modeled as an attack tree called the Cyber Attack Methodology Attack Tree (CAMAT). The proposed DCBDA process uses CAMAT to analyze potential attack scenarios used by an attacker. These scenarios are used to identify the associated digital forensic methods in CAMEL to correctly collect and analyze the damage from a cyber attack. The experimental results show the proposed DCBDA process can be successfully applied to cyber attack scenarios to correctly assess the extent, method, and damage caused by a cyber attack.
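To illustrate the attack-tree idea in general terms, the sketch below models a tiny tree whose leaves carry the forensic evidence each attacker action would leave behind, so walking a scenario yields the artifacts to collect. The node structure, evidence labels, and simplified semantics are assumptions of this illustration, not CAMAT itself.

    # Hypothetical sketch of an attack tree whose leaves carry forensic evidence.
    class Node:
        def __init__(self, name, kind="LEAF", children=None, evidence=None):
            self.name, self.kind = name, kind          # kind: "AND", "OR", or "LEAF"
            self.children = children or []
            self.evidence = evidence or []             # artifacts to look for if this action ran

        def evidence_for_scenario(self, chosen):
            """Gather the forensic artifacts expected for the chosen leaf actions."""
            if self.kind == "LEAF":
                return list(self.evidence) if self.name in chosen else []
            return [e for c in self.children for e in c.evidence_for_scenario(chosen)]

    root = Node("compromise host", "OR", [
        Node("phish then escalate", "AND", [
            Node("spearphish user", evidence=["mail logs", "attachment hash"]),
            Node("local privilege escalation", evidence=["new admin account", "patch level"]),
        ]),
        Node("exploit exposed service", evidence=["IDS alert", "crash dump"]),
    ])
    print(root.evidence_for_scenario({"spearphish user", "local privilege escalation"}))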
Geolocation of a Node on a Local Area Network
Geolocation is the process of identifying a node's location using only its Internet Protocol (IP) address. Locating a node on a LAN poses particular challenges due to the small scale of the problem and the increased significance of queuing delay. This study builds upon existing research in the area of geolocation and develops a heuristic tailored to the difficulties inherent in LANs, called the LAN Time to Location Heuristic (LTTLH). LTTLH uses several polling nodes to measure latencies to end nodes at known locations within the LAN. The Euclidean distance algorithm is used to compare the results with the latency of a target in order to determine the target's approximate location. Using only these latency measurements, LTTLH is able to determine which switch a target is connected to 95% of the time. Within certain constraints, this method is able to identify the target location 78% of the time. However, LANs are not always configured within the constraints necessary to geolocate a node. In order for LTTLH to be effective, a network must be configured consistently, with similar-length cable runs available to nodes located in the same area. For best results, the network should also be partitioned, grouping nodes of similar proximity behind one switch.
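The Euclidean-distance matching step can be sketched as follows: each known location is represented by a vector of latencies measured from the polling nodes, and the target is assigned to the closest vector. The latency values, node names, and function are assumptions used only to illustrate the comparison, not LTTLH's actual implementation.

    # Illustrative sketch of the Euclidean-distance matching step.
    import math

    def nearest_location(target_latencies, reference_nodes):
        """reference_nodes: {location_name: [latency_from_poller1, poller2, ...]}.
        Returns the known location whose latency vector is closest to the target's."""
        def dist(a, b):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        return min(reference_nodes, key=lambda loc: dist(target_latencies, reference_nodes[loc]))

    references = {                      # ms latencies measured from three polling nodes
        "switch-A": [0.21, 0.35, 0.52],
        "switch-B": [0.33, 0.22, 0.47],
        "switch-C": [0.49, 0.45, 0.20],
    }
    print(nearest_location([0.34, 0.24, 0.50], references))   # -> "switch-B"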
USCYBERCOM
Even though the Department of Defense has named cyberspace as the newest domain of warfare, the United States is not adequately organized to conduct cyber war. United States Strategic Command (USSTRATCOM) is the functional combatant command responsible for cyberspace but suffers from numerous problems that prevent it from properly planning, coordinating, and conducting cyberspace operations. Among the problems facing USSTRATCOM are insufficient manning, an overly diverse mission set, and the recent failures within America's nuclear enterprise. To overcome USSTRATCOM's problems and to provide the cyber domain the prominence needed to properly protect the United States, a new functional combatant command for cyberspace must be established. This command, United States Cyberspace Command (USCYBERCOM), should be given responsibility for conducting worldwide cyber attack, defense, and intelligence. USCYBERCOM should also serve as a supporting command to the geographic combatant commanders and must establish an in-theater headquarters presence similar to the land, air, maritime, and special operations forces.
Stochastic Estimation and Control of Queues Within a Computer Network
An extended Kalman filter is used to estimate the size and packet arrival rate of network queues. These estimates are used by an LQG steady-state linear perturbation PI controller to regulate queue size within a computer network. This paper presents the derivation of the transient queue behavior for a system with Poisson traffic and exponential service times. This result is then validated for ideal traffic using a network simulated in OPNET. A more complex OPNET model is then used to test the adequacy of the transient queue size model when non-Poisson traffic is included. The extended Kalman filter theory is presented, and a network state estimator is designed using the transient queue behavior model. The equations needed for the LQG synthesis of a steady-state linear perturbation PI controller are presented. These equations are used to develop a network queue controller based on the transient queue model. The performance of the network state estimator and network queue controller was investigated and shown to provide improved control when compared to other, simpler control algorithms.
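As a rough illustration of the estimation step, the sketch below runs a two-state extended Kalman filter (queue length and arrival rate) under a simple fluid-flow queue approximation with noisy queue-size observations. The dynamics, noise covariances, and rates are assumptions of this illustration; the thesis derives and uses a more detailed transient queue model.

    # Minimal EKF sketch under an assumed fluid-flow queue approximation.
    import numpy as np

    mu, dt = 100.0, 0.1                 # service rate (pkts/s), sample period (s)
    Q = np.diag([1.0, 25.0])            # process noise: queue size, arrival rate
    R = np.array([[4.0]])               # measurement noise on observed queue size
    H = np.array([[1.0, 0.0]])          # we observe the queue length only

    def predict(x, P):
        q, lam = x
        x_pred = np.array([max(q + (lam - mu) * dt, 0.0), lam])   # fluid dynamics
        F = np.array([[1.0, dt], [0.0, 1.0]])                     # Jacobian (ignoring the clip)
        return x_pred, F @ P @ F.T + Q

    def update(x_pred, P_pred, z):
        y = z - H @ x_pred                                        # innovation
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)                       # Kalman gain
        x_new = x_pred + (K @ y).ravel()
        P_new = (np.eye(2) - K @ H) @ P_pred
        return x_new, P_new

    x, P = np.array([0.0, 80.0]), np.eye(2) * 10.0
    for z in [3.0, 5.0, 9.0, 14.0]:                               # observed queue sizes
        x, P = predict(x, P)
        x, P = update(x, P, np.array([z]))
    print(x)   # estimated [queue length, arrival rate]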
Simple Public Key Infrastructure Protocol Analysis and Design
Secure electronic communication is based on secrecy, authentication and authorization. One means of assuring a communication has these properties is to use Public Key Cryptography (PKC). The framework consisting of standards, protocols and instructions that make PKC usable in communication applications is called a Public Key Infrastructure (PKI). This thesis aims to prove the applicability of the Simple Public Key Infrastructure (SPKI) as a means of PKC. The strand space approach of Guttman and Thayer is used to provide an appropriate model for analysis. A Diffie-Hellman strand space model is combined with mixed strand space proof methods for proving the correctness of multiple protocols operating in the same context. The result is the public key mixed strand space model. This model is ideal for the analysis of SPKI applications operating as sub-protocols of an implementing application. This thesis then models the popular Internet Transport Layer Security (TLS) protocol as a public key mixed strand space model. The model includes the integration of SPKI certificates. To accommodate the functionality of SPKI, a new protocol is designed for certificate validation, the Certificate Chain Validation Protocol (CCV). The CCV protocol operates as a sub-protocol to TLS and provides online certificate validation. The security of the TLS protocol integrated with SPKI certificates and sub-protocols is then analyzed to prove its security properties. The results show that the modified TLS protocol exhibits the same security guarantees in isolation as it does when executing its own sub-protocols and the SPKI Certificate Chain Validation protocol.
Course Curriculum Development for the Future Cyberwarrior
Cyberspace is one of the latest buzzwords to gain widespread fame and acceptance throughout the world. One can hear the term used by everyone from presidents of states to elementary school children delving into computers for the first time. Cyberspace has generated great enthusiasm over the opportunities and possibilities for furthering mankind's knowledge and communication, as well as creating more convenient methods for accomplishing mundane or tedious tasks.
An Analysis of the Performance and Security of J2SDK 1.4 JSSE Implementation of SSL/TLS
The Java SSL/TLS package distributed with the J2SE 1.4.2 runtime is a Java implementation of the SSLv3 and TLSv1 protocols. Java-based web services and other systems deployed by the DoD will depend on this implementation to provide confidentiality, integrity, and authentication. Security and performance assessment of this implementation is critical given the proliferation of web services within DoD channels. This research assessed the performance of the J2SE 1.4.2 SSL and TLS implementations, paying particular attention to identifying performance limitations given a very secure configuration. The performance metrics of this research were CPU utilization, network bandwidth, memory, and the maximum number of secure sockets that could be created under various factors. This research determined an integral performance relationship between the memory heap size and the encryption algorithm used. By changing the default heap size setting of the Java Virtual Machine from 64 MB to 256 MB and using the AES-256 symmetric encryption algorithm, a high-performance, highly secure SSL configuration is achievable. This configuration can support over 2000 simultaneous secure sockets with various encrypted data sizes. This yields a 200 percent increase in performance over the default configuration, while providing the additional security of 256-bit symmetric key encryption for the application data.
Leveraging Traditional Battle Damage Assessment Procedures to Measure Effects From a Computer Network Attack
The art of warfare in cyberspace is evolving. Cyberspace, as the newest warfighting domain, requires the tools to synchronize effects from the cyber domain with those of the traditional land, maritime, space, and air domains. Cyberspace can complement a commander's theater strategy, supporting strategic, operational, and tactical objectives. To be effective, or to provide an effect, commanders must have a mechanism that allows them to understand whether a desired cyber effect was achieved, which requires a comprehensive cyber battle damage assessment capability. The purpose of this research is to analyze how traditional kinetic battle damage assessment is conducted and to apply those concepts in cyberspace. This requires in-depth nodal analysis of the cyberspace target as well as of what second- and third-order effects can be measured to determine if the cyber-attack was successful. This is necessary to measure the impact of the cyber-attack, which can be used to increase or decrease the risk level to personnel operating in traditional domains.
Accelerating Malware Detection via a Graphics Processing Unit
Real-time malware analysis requires scanning large amounts of stored data to look for suspicious files. This is a time-consuming process that requires a large amount of processing power, often affecting other applications running on a personal computer. This research investigates the viability of using Graphics Processing Units (GPUs), present in many personal computers, to distribute the workload normally processed by the standard Central Processing Unit (CPU). Three experiments are conducted using an industry-standard GPU, the NVIDIA GeForce 9500 GT card. Experimental results show that a GPU can calculate an MD5 signature hash and scan a database of malicious signatures 82% faster than a CPU for files between 0 and 96 kB. If the file size is increased to 97-192 kB, the GPU is 85% faster than the CPU. This demonstrates that the GPU can provide a significant performance increase over a CPU. These results could help achieve faster anti-malware products, faster network intrusion detection system response times, and faster firewall applications.
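The CPU-side workflow being accelerated can be sketched as follows: hash each file and check the digest against a set of known-malicious MD5 signatures. This is only a baseline illustration of the scan itself (the GPU offload is not shown), and the paths, signature set, and function names are assumptions.

    # CPU-side sketch of the MD5 signature-scan workflow.
    import hashlib
    from pathlib import Path

    def md5_of_file(path, chunk_size=65536):
        h = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    def scan(root, malicious_md5s):
        """Return files under 'root' whose MD5 matches a known-malicious signature."""
        hits = []
        for p in Path(root).rglob("*"):
            if p.is_file() and md5_of_file(p) in malicious_md5s:
                hits.append(p)
        return hits

    signatures = {"44d88612fea8a8f36de82e1278abb02f"}   # e.g., the EICAR test file's MD5
    print(scan(".", signatures))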
Active Computer Network Defense
A Presidential Commission, several writers, and numerous network security incidents have called attention to the potential vulnerability of the Defense Information Infrastructure (DII) to attack. Transmission Control Protocol/Internet Protocol (TCP/IP) networks are inherently resistant to physical attack because of their decentralized structure, but are vulnerable to computer network attack (CNA). Passive defenses can be very effective in forestalling CNA, but their effectiveness relies on the capabilities and attentiveness of system administrators and users. Many measures can still be taken to strengthen network defense beyond passive means, and one of these is active defense. It can be divided into three categories: preemptive attacks, counterattacks, and active deception. Preemptive attacks show little potential for affecting an adversary's CNA capabilities, since these are likely to remain isolated from the Internet until the adversary actually begins an attack. Counterattacks show more promise, but only if begun early enough to permit all preparatory activities to be completed before the adversary's CNA is completed. Active deception also shows promise, but only as long as intrusions can be detected quickly and accurately, and adversaries redirected into "dummy" networks. Active and passive defense measures can work synergistically to strengthen one another.
Developing a Corpus Specific Stop-List Using Quantitative Comparison
We have become overwhelmed with electronic information, and it seems our situation is not going to improve. When computers were first thought of as instruments to assist us and make our lives easier, we imagined a manageable future. We envisioned a day when documents, no matter when they were produced, would be as close as a click of the mouse and the typing of a few words. Locating information of interest was not going to take all day. What we have found is that technology changes faster than we can keep up with it. This thesis looks at how we can provide faster access to the information we are looking for. Previous research in the area of document/information retrieval has mainly focused on the automated creation of abstracts and indexes, but today's requirements are more closely related to searching for information through the use of queries. At the heart of the query process is the removal of search terms with little or no significance to the search being performed. More often than not, stop-lists are constructed from the most commonly occurring words in the English language. This approach may be fine for systems which handle information from very broad categories.
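One simple illustration of deriving a corpus-specific stop-list, offered only as a sketch and not necessarily the quantitative comparison used in the thesis, is to flag terms whose document frequency within the corpus is so high that they no longer discriminate between documents:

from collections import Counter
import re

def tokenize(text):
    """Lowercase alphabetic tokens; a stand-in for whatever tokenizer is used."""
    return re.findall(r"[a-z]+", text.lower())

def corpus_stoplist(documents, df_threshold=0.8, top_n=50):
    """Flag terms appearing in at least df_threshold of the documents;
    such terms discriminate poorly between documents in *this* corpus."""
    doc_freq = Counter()
    for doc in documents:
        doc_freq.update(set(tokenize(doc)))
    n_docs = len(documents)
    candidates = [(term, df / n_docs)
                  for term, df in doc_freq.items() if df / n_docs >= df_threshold]
    candidates.sort(key=lambda item: item[1], reverse=True)
    return [term for term, _ in candidates[:top_n]]

A corpus-specific list built this way would typically be compared against a general English stop-list to see which domain terms (uninformative within the corpus but rare in general English) it adds.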
Geographic Location of a Computer Node Examining a Time-to-Location Algorithm and Multiple Autonomous System Networks
To determine the location of a computer on the Internet without resorting to outside information or databases would greatly increase the security abilities of the US Air Force and the Department of Defense. The geographic location of a computer node has been demonstrated on an autonomous system (AS) network, or a network with one system administration focal point. This work shows that a similar technique will work on networks comprised of multiple autonomous systems. A time-to-location algorithm can successfully resolve the geographic location of a computer node using only latency information from known sites, mathematically calculating the Euclidean distance to those sites from an unknown location on a single-AS network. The time-to-location algorithm on a multiple-AS network successfully resolves a geographic location 71.4% of the time. Packets are subject to arbitrary delays in the network, and inconsistencies in latency measurements are discovered when attempting to use a time-to-location algorithm on a multiple-AS network. To improve accuracy, a time-to-location algorithm needs to calculate the link bandwidth when attempting to geographically locate a computer node on a multiple-AS network.
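As a rough sketch of the latency-only idea, and not the thesis's time-to-location algorithm itself, the following code converts round-trip latency from known landmark sites into distance estimates under an assumed propagation factor and then grid-searches for the best-fitting position. The coordinates, latencies, and the kilometers-per-millisecond factor are all illustrative assumptions.

import math

# Hypothetical landmark sites with known (x, y) coordinates in km and the
# measured round-trip latency (ms) from each landmark to the unknown node.
LANDMARKS = [((0.0, 0.0), 9.5), ((800.0, 0.0), 6.0), ((400.0, 600.0), 7.2)]

# Assumed conversion factor from one-way delay to distance; real paths vary
# widely, which is one source of the inconsistencies noted in the abstract.
KM_PER_MS = 100.0

def estimate_position(landmarks, step=10.0, extent=1000.0):
    """Grid search for the point whose distances best match the
    latency-derived distance estimates (least-squares criterion)."""
    best, best_err = None, float("inf")
    for ix in range(int(extent / step) + 1):
        for iy in range(int(extent / step) + 1):
            x, y = ix * step, iy * step
            err = 0.0
            for (lx, ly), rtt_ms in landmarks:
                predicted = math.hypot(x - lx, y - ly)
                measured = (rtt_ms / 2.0) * KM_PER_MS
                err += (predicted - measured) ** 2
            if err < best_err:
                best, best_err = (x, y), err
    return best

print(estimate_position(LANDMARKS))

Queuing and routing delays across multiple autonomous systems inflate the measured latency unpredictably, which is why the abstract argues that link bandwidth must also be factored in.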
Cyber Capabilities for Global Strike in 2035
This paper examines global strike, a core Air Force capacity to quickly and precisely attack any target anywhere, anytime, from a cyber perspective. Properly used, cyberspace capabilities can significantly enhance Air Force (AF) capabilities to provide the nation the capacity to influence the strategic behavior of existing and potential adversaries. This paper argues that the AF must improve both the quantity and quality of its cyberspace operations force by treating cyber warfare capabilities in the same manner as it treats its other weapon systems. It argues that, despite preconceptions of future automation capabilities, cyberspace will be a highly dynamic and fluid environment characterized by interactions with a thinking adversary. As such, while automation is required, cyber warfare will be much more manpower-intensive than is currently understood and will require a very highly trained force. The rapid evolution of this man-made domain will also demand a robust developmental science and research investment to keep cyber warfare capabilities in step with the technologies of the environment. This paper reaches these conclusions by first providing a glimpse into the world of cyberspace in 2035. The paper then assesses how cyber warfare mechanisms could disrupt, disable, or destroy potential adversary targets. It describes how these capabilities might work in two alternate scenarios, and then describes the steps the AF needs to take in the future to be confident in its ability to fly, fight, and win in cyberspace.
U.S. Policy Recommendation for Responding to Cyber Attacks Against the United States
The United States has traditionally looked to its military to defend against all foreign enemies. International telecommunications and computer networks, together with globalization, have now overcome the military's absolute ability to provide for that common defense. More than capable of responding to attacks in the traditional war-fighting domains of land, sea, air, and even space, the military will not be able to prevent all cyber attacks against U.S. interests. As a result, the U.S. should establish and announce the nature of its strategic responses to cyber attacks, including legal prosecution, diplomacy, or military action. Such a policy pronouncement will serve as a deterrent to potential attackers and will likely become established as a normative international standard. The outline for a response policy begins by addressing attacks based upon the prevailing security environment: peacetime or conflict. The U.S. should respond to peacetime attacks based on the target, reasonably expected damage, attack type, and source. Attacks likely to cause significant injuries and damage warrant a full spectrum of response options, while state-sponsored attacks would justify a forcible response when their type and target indicate destructive effects including widespread injury and damage.
Software Obfuscation With Symmetric Cryptography
Software protection is of great interest to commercial industry. Millions of dollars and years of research are invested in the development of proprietary algorithms used in software programs. A reverse engineer who successfully reverses another company's proprietary algorithms can develop a competing product and bring it to market in less time and with less money. The threat is even greater in military applications, where adversarial reverse engineers can apply reverse engineering to unprotected military software to compromise capabilities in the field or develop their own capabilities with significantly fewer resources. Thus, it is vital to protect software, especially the software's sensitive internal algorithms, from adversarial analysis. Software protection through obfuscation is a relatively new research initiative. The mathematical and security communities have yet to agree upon a model to describe the problem, let alone the metrics used to evaluate the practical solutions proposed by computer scientists. We propose evaluating solutions to obfuscation under the intent protection model, a combination of white-box and black-box protection that reflects how reverse engineers analyze programs using a combination of white-box and black-box attacks. In addition, we explore the use of experimental methods and metrics from analogous and more mature fields of study such as hardware circuits and cryptography. Finally, we implement a solution under the intent protection model that demonstrates application of the methods and evaluation using the metrics adapted from those fields, reflecting the unique challenges of a software-only software protection technique.
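The black-box half of an intent protection evaluation can be illustrated with a small, hypothetical check: sample random inputs and confirm that the obfuscated variant still produces the same outputs as the original. This is only a sketch of the evaluation idea, not the metrics or implementation proposed in the work, and the example functions are toy stand-ins.

import random

def blackbox_equivalent(original, obfuscated, input_gen, trials=1000):
    """Sample random inputs and confirm the protected program produces the
    same outputs as the original; a necessary (not sufficient) check that
    the obfuscation preserved black-box behavior."""
    for _ in range(trials):
        x = input_gen()
        if original(x) != obfuscated(x):
            return False
    return True

# Toy example: a 'proprietary' function and a variant rewritten to hide intent.
original = lambda x: 3 * x + 7
obfuscated = lambda x: ((x << 1) + x) + 7   # same arithmetic, different shape
print(blackbox_equivalent(original, obfuscated, lambda: random.randint(-10**6, 10**6)))

White-box protection, by contrast, is about how much an analyst can learn from the transformed code itself, which requires the structural and information-theoretic metrics discussed in the abstract rather than input/output testing.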
Multicast Algorithms for Mobile Satellite Communication Networks
With the rise of mobile computing and an increasing need for ubiquitous high-speed data connections, Internet-in-the-sky solutions are becoming increasingly viable. To reduce the network overhead of one-to-many transmissions, the multicast protocol has been devised. The implementation of multicast in these Low Earth Orbit (LEO) constellations is a critical component of achieving an omnipresent network environment. This research examines the system performance associated with two terrestrial-based multicast mobility solutions: Distance Vector Multicast Routing Protocol (DVMRP) with mobile IP, and On Demand Multicast Routing Protocol (ODMRP). These protocols are implemented and simulated in a six-plane, 66-satellite LEO constellation. Each protocol was subjected to various workloads, including changes in the number of source nodes and the amount of traffic generated by those nodes. Results from the simulation trials show the ODMRP protocol provided greater than 99% reliability in packet delivery, at the cost of more than 8 bits of overhead for every 1 bit of data for multicast groups with multiple sources. In contrast, DVMRP proved robust and scalable, with data-to-overhead ratios increasing logarithmically with membership levels. DVMRP also had less than 70 ms of average end-to-end delay, providing stable transmissions at high loading and membership levels. Because system performance varied as a function of protocol, system design objectives must be considered when choosing a protocol for implementation.
Performance Analysis and Comparison of Multiple Routing Protocols in a Large-Area, High-Speed Mobile Node Ad Hoc Network
The U.S. Air Force is interested in developing a standard ad hoc framework using "heavy" aircraft to route data across large regions. The Zone Routing Protocol (ZRP) has the potential to provide seamless large-scale routing for DoD under the Joint Tactical Radio System (JTRS) program. The goal of this study is to determine whether routing protocol performance differs when operating in a large-area mobile ad hoc network (MANET) with high-speed mobile nodes. This study analyzes MANET performance when using reactive, proactive, and hybrid routing protocols, specifically AODV, DYMO, Fisheye, and ZRP. This analysis compares the performance of the four routing protocols under the same MANET conditions. Average end-to-end (ETE) delay, number of packets received, and throughput are the performance metrics used. Results indicate that routing protocol selection impacts MANET performance. Reactive protocol performance is better than hybrid and proactive protocol performance in each metric. Average ETE delays are lower using AODV (1.17 s) and DYMO (2.14 s) than ZRP (201.9 s) or Fisheye (169.7 s). The number of packets received is higher using AODV (531.6) and DYMO (670.2) than ZRP (267.3) or Fisheye (186.3). Throughput is higher using AODV (66,500 bps) and DYMO (87,577 bps) than ZRP (33,659 bps) or Fisheye (23,630 bps). The benefits of ZRP and Fisheye cannot be exploited in the MANET configurations modeled in this research using a "heavy" aircraft ad hoc framework.
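For clarity, the three performance metrics reported above can be computed from per-packet records as shown below; the record format is an assumption for illustration, not the output format of the simulator used in the study.

def summarize(records, duration_s):
    """records: iterable of (send_time_s, recv_time_s or None, size_bits),
    one entry per transmitted packet. Returns the three metrics used in the
    study: average end-to-end delay, packets received, and throughput."""
    delivered = [(s, r, b) for (s, r, b) in records if r is not None]
    packets_received = len(delivered)
    avg_delay_s = (sum(r - s for s, r, _ in delivered) / packets_received
                   if delivered else float("nan"))
    throughput_bps = sum(b for _, _, b in delivered) / duration_s
    return avg_delay_s, packets_received, throughput_bps

# Example: two delivered packets and one drop over a 10-second run.
print(summarize([(0.0, 1.2, 8000), (1.0, 2.1, 8000), (2.0, None, 8000)], 10.0))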
Graph Theoretical Analysis of Network-centric Operations Using Multi-layer Models
As the Department of Defense continues its transformation to a network-centric force, evaluating DoD's progression toward net-centricity remains a challenge. This research proposes to extend the Network Centric Operation Common Framework Version 2.0 (draft) with metrics grounded in graph theory and specifically addresses, among other metrics, the measurement of a net-centric force's mission effectiveness. The research emphasizes the importance of understanding network topology when evaluating an environment for net-centricity, and of using network characteristics to help commanders assess the effects of network changes on mission effectiveness.
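A small, hypothetical example of the kind of graph-theoretic measurement such a framework calls for: represent two layers of a force's connectivity as graphs and compute simple structural metrics with the networkx library. The layers, node names, and chosen metrics below are illustrative assumptions, not the framework's actual metric set.

import networkx as nx

# Hypothetical multi-layer model: a physical communications layer and a
# logical information-exchange layer over the same set of force elements.
physical = nx.Graph([("HQ", "AWACS"), ("AWACS", "F-16 flight"),
                     ("HQ", "CRC"), ("CRC", "F-16 flight")])
logical = nx.Graph([("HQ", "F-16 flight"), ("AWACS", "F-16 flight")])

for name, layer in (("physical", physical), ("logical", logical)):
    # Density and average shortest path length are two common proxies for
    # how richly connected and how "close" the force elements are.
    print(name,
          "density =", round(nx.density(layer), 2),
          "avg path length =", round(nx.average_shortest_path_length(layer), 2))

Tracking how such metrics change when links are added, degraded, or lost is one way to connect topology to the mission-effectiveness assessments the abstract describes.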
Software Protection Against Reverse Engineering Tools
Advances in technology have led to easy-to-use automated debugging tools that can be extremely helpful in troubleshooting problems in code. However, a malicious attacker can use these same tools. Designing software securely and keeping it secure has become extremely difficult. These same easy-to-use debuggers can be used to bypass security built into software. While the detection of an altered executable file is possible, it is not as easy to prevent alteration in the first place. One way to prevent alteration is through code obfuscation, or hiding the true function of software so as to make alteration difficult. This research executes blocks of code in parallel from within a hidden function to obscure functionality.
An Analysis of Biometric Technology as an Enabler to Information Assurance
The use of and dependence on information technology (IT) has grown tremendously in the last two decades. Still, some believe we are only in the infancy of this growth. This explosive growth has opened the door to capabilities that were only dreamed of in the past. As easy as it is to see how advantageous technology is, it is also clear that with those advantages come distinct responsibilities and new problems that must be addressed. For instance, the minute we began using information processing systems, the world of information assurance (IA) became far more complex as well. As a result, the push for better IA is necessary.
Cybersecurity Essentials for Small Businesses
Cybersecurity Essentials for SMBs: The Complete Guide to Protecting Your Small Business in the Digital Age. Cyber threats aren't just targeting large corporations; they're coming for small and medium-sized businesses (SMBs) now more than ever. In fact, over 43% of cyberattacks target small businesses. Why? Because hackers know most SMBs lack the resources, knowledge, or personnel to implement strong cybersecurity defenses. Cybersecurity Essentials for SMBs was written to change that. This practical, jargon-free guide empowers business owners, managers, and team leaders with the knowledge and tools they need to secure their operations, data, and customer trust, without needing a degree in computer science or a full-time IT team.
What you'll learn inside:
- Cybersecurity fundamentals for non-techies: clear explanations of key concepts such as malware, phishing, ransomware, firewalls, and multi-factor authentication, giving you a foundational understanding of the threats your business faces every day.
- Real-world security strategies: step-by-step guidance on defending your business against cyberattacks, protecting sensitive customer data, and preventing business disruptions; learn how to secure your network, devices, cloud platforms, and employee endpoints, whether you're in-office, remote, or hybrid.
- Low-cost, high-impact solutions: affordable tools, services, and best practices for small businesses on a budget, from password managers and antivirus software to secure cloud storage and encrypted backups.
- Risk reduction without the headache: how to conduct a simple cybersecurity risk assessment, identify your most vulnerable areas, and prioritize what to fix first; no tech team required.
- Train your team to be human firewalls: employees are your biggest risk and your greatest defense; learn how to build a cybersecurity culture, spot red flags like phishing emails, and train your staff to prevent social engineering attacks.
- Compliance and data privacy made simple: from GDPR to HIPAA to PCI DSS, many small businesses don't realize they're subject to compliance regulations; this book breaks down what applies to you and how to stay compliant with minimal stress.
- Secure remote work and BYOD policies: with more employees working remotely or on personal devices, SMBs face new vulnerabilities; learn safe remote work practices, VPN use, mobile device management (MDM), and remote incident response plans.
- Incident response for small teams: how to create a simple incident response plan so that if something goes wrong you're ready; know when to call professionals, how to document an attack, and how to recover fast.
Who is this book for? Small business owners and startups with no IT background but a desire to protect what they've built; freelancers and solopreneurs managing client data, payment systems, or personal business devices; and managers and team leads looking to educate their staff and build a security-aware company culture.
Enemy at the Gateways
Every day, hackers use the Internet to "virtually" invade the borders of the United States and its critical infrastructure. National leadership must determine whether these intrusions constitute an attack or merit the declaration of a national emergency. In times of war, cyber attackers may attempt to monitor communications or disrupt information systems and other systems critical to national infrastructure. Formed in 2002, the Department of Homeland Security (DHS) holds lead agency status for many initiatives of the National Strategy to Secure Cyberspace (NSSC). The NSSC identifies critical infrastructures and key resources (CI/KR) that must be protected from physical or virtual attack. Current national strategy calls for the Department of Defense (DoD) to protect the defense industrial base (DIB), one of seven identified sectors of CI/KR. DoD components include the Office of the Secretary of Defense, the Joint Staff, the Military Services, Unified and Specified Commands, Defense Agencies, and field activities. DoD can contribute significantly to the protection of the nation from attacks directed against the United States via cyberspace by leveraging current resources and capabilities to augment ongoing initiatives and working to develop more effective homeland defense solutions. Along the way, DoD must continue working to protect the DIB from the information collection efforts of foreign intelligence services and organized crime, as well as from potential terrorist efforts to destroy or hold hostage critical information. Sensitive but unclassified (SBU) information seems to be more at risk than classified program information at this time, so current DoD efforts aim to secure the unclassified networks and databases of defense contractors. DoD can and should exceed the expectations laid out by the President of the United States in national strategy. Cooperation and information sharing will be the key.
Metamorphism as a Software Protection for Non-Malicious Code
The software protection community is always seeking new methods for defending its products from unwanted reverse engineering, tampering, and piracy. Most current protections are static: once integrated, the program never modifies them. Being static makes them stationary rather than moving targets. This observation raises a question: why not incorporate self-modification as a defensive measure? Metamorphism is a defensive mechanism used in modern, advanced malware programs. Although the main impetus for this protection in malware is to avoid detection by anti-virus signature scanners by changing the program's form, certain metamorphism techniques also serve as anti-disassembler and anti-debugger protections. For example, opcode shifting is a metamorphic technique that confuses the program's disassembly, but malware modifies these shifts dynamically, unlike current static approaches. This research assessed the performance overhead of a simple opcode-shifting metamorphic engine and evaluated the instruction reach of this particular metamorphic transform. In addition, dynamic subroutine reordering was examined. Simple opcode shifts take only a few nanoseconds to execute on modern processors, and a few shift bytes can mangle several instructions in a program's disassembly. A program can reorder subroutines in a short span of time (microseconds). The combined effects of these metamorphic transforms thwarted advanced debuggers, which are key tools in the attacker's arsenal.
Mitigating Reversing Vulnerabilities in .NET Applications Using Virtualized Software Protection
Protecting intellectual property contained in application source code and preventing tampering with application binaries are both major concerns for software developers. Simply by possessing an application binary, any user is able to attempt to reverse engineer valuable information or produce unanticipated execution results through tampering. As reverse engineering tools become more prevalent, and as the knowledge required to effectively use those tools decreases, applications come under increased attack from malicious users. Emerging development tools such as Microsoft's .NET Application Framework allow diverse source code composed of multiple programming languages to be integrated into a single application binary, but the potential for theft of intellectual property increases due to the metadata-rich construction of compiled .NET binaries. Microsoft's new Software Licensing and Protection Services (SLPS) application is designed to mitigate trivial reversing of .NET applications through the use of virtualization. This research investigates the viability of the SLPS software protection utility Code Protector as a means of mitigating the inherent vulnerabilities of .NET applications. The results of the research show that Code Protector does indeed protect compiled .NET applications from reversing attempts using commonly available tools. While the performance of protected applications can suffer if the protections are applied to sections of the code that are used repeatedly, it is clear that low-use .NET application code can be protected by Code Protector with little performance impact.
Course Curriculum Development for the Future Cyberwarrior
Cyberspace is one of the latest buzzwords to gain widespread fame and acceptance throughout the world. One can hear the term used by everyone from heads of state to elementary school children delving into computers for the first time. Cyberspace has generated great enthusiasm over the opportunities and possibilities for furthering mankind's knowledge and communication, as well as for creating more convenient methods of accomplishing mundane or tedious tasks.
Using a Distributed Object-Oriented Database Management System in Support of a High-Speed Network Intrusion Detection System Data Repository
The Air Force has multiple initiatives to develop data repositories for high-speed network intrusion detection systems (IDS). All of the developed systems utilize a relational database management system (RDBMS) as the primary data storage mechanism. The purpose of this thesis is to replace the RDBMS in one such system developed by AFRL, the Automated Intrusion Detection Environment (AIDE), with a distributed object-oriented database management system (DOODBMS) and observe a number of areas: its performance against the RDBMS in terms of IDS event insertion and retrieval, the distributed aspects of the new system, and the resulting object-oriented architecture. The resulting system, the Object-Oriented Automated Intrusion Detection Environment (OOAIDE), is designed, built, and tested using the DOODBMS Objectivity/DB. Initial tests indicate that the new system is remarkably faster than the original system in terms of event insertion. Object retrievals are also faster when more than one association is used in the query. The database is then replicated and distributed across a simple heterogeneous network with preliminary tests indicating no loss of performance. A standardized object model is also presented that can accommodate any IDS data repository built around a DOODBMS architecture.
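One plausible shape for the kind of standardized object model mentioned above, sketched as Python dataclasses; the class and attribute names are assumptions for illustration, since the thesis's actual schema is not reproduced here. The point is that associations are stored as direct object references, which is what makes multi-association retrievals cheap in a DOODBMS compared with relational joins.

from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class Sensor:
    name: str
    location: str

@dataclass
class Host:
    address: str

@dataclass
class Event:
    """One IDS alert; references to Sensor and Host objects stand in for the
    object associations a DOODBMS would persist and traverse directly."""
    timestamp: datetime
    signature: str
    sensor: Sensor
    source: Host
    target: Host

@dataclass
class Incident:
    """Grouping object so a query can traverse Incident -> Events -> Hosts
    without the join tables a relational schema would require."""
    label: str
    events: List[Event] = field(default_factory=list)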
Dynamic Polymorphic Reconfiguration to Effectively Cloak a Circuit's Function
Today's society has become more dependent on the integrity and protection of digital information used in daily transactions, resulting in an ever increasing need for information security. Additionally, the need for faster and more secure cryptographic algorithms to provide this information security has become paramount. Hardware implementations of cryptographic algorithms provide the necessary increase in throughput, but at the cost of leaking critical information. Side Channel Analysis (SCA) attacks allow an attacker to exploit the regular and predictable power signatures leaked by cryptographic functions used in algorithms such as RSA. This research focuses on a means to counteract this vulnerability by creating a Critically Low Observable Anti-Tamper Keeping Circuit (CLOAK) capable of continuously changing the way it functions in both power and timing. This research has determined that a polymorphic circuit design capable of varying circuit power consumption and timing can protect a cryptographic device from Electromagnetic Analysis (EMA) attacks. In essence, we are effectively CLOAKing the circuit's functions from an attacker.
An Analysis of Botnet Vulnerabilities
Botnets are a significant threat to computer networks and data stored on networked computers. The ability to inhibit communication between servers controlling the botnet and individual hosts would be an effective countermeasure. The objective of this research was to find vulnerabilities in Unreal IRCd that could be used to shut down the server. Analysis revealed that Unreal IRCd is a very mature and stable IRC server and no significant vulnerabilities were found. While this research does not eliminate the possibility that a critical vulnerability is present in the Unreal IRCd software, none were identified during this effort.
Using Relational Schemata in a Computer Immune System to Detect Multiple-Packet Network Intrusions
Given the increasingly prominent cyber-based threat, there are substantial research and development efforts underway in network and host-based intrusion detection using single-packet traffic analysis. However, there is a noticeable lack of research and development in the intrusion detection realm with regard to attacks that span multiple packets. This leaves a conspicuous gap in intrusion detection capability because not all attacks can be found by examining single packets alone. Some attacks may only be detected by examining multiple network packets collectively, considering how they relate to the "big picture," not how they are represented as individual packets. This research demonstrates a multiple-packet relational sensor in the context of a Computer Immune System (CIS) model to search for attacks that might otherwise go unnoticed via single-packet detection methods. Using relational schemata, multiple-packet CIS sensors define "self" based on equal, less-than, and greater-than relationships between fields of routine network packet headers. Attacks are then detected by examining how the relationships among attack packets may lie outside the previously defined "self." Furthermore, this research presents a graphical, user-interactive means of network packet inspection to assist in traffic analysis of suspected intrusions. The visualization techniques demonstrated here provide a valuable tool to assist the network analyst in discriminating between true network attacks and false positives, often a time-intensive and laborious process.
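The relational-schemata idea can be sketched as follows: for each header field, record which of the relations less-than, equal, or greater-than between consecutive packets appears in attack-free ("self") traffic, then flag packet pairs that exhibit a relation never seen in self. The field names and packet representation below are assumptions for illustration, not the sensor's actual implementation.

def relation(a, b):
    """Classify the ordering relationship between two field values."""
    return "<" if a < b else ("=" if a == b else ">")

def learn_self(packet_pairs, fields):
    """Record, per header field, which relations between consecutive
    packets were observed in attack-free ('self') traffic."""
    self_model = {f: set() for f in fields}
    for first, second in packet_pairs:
        for f in fields:
            self_model[f].add(relation(first[f], second[f]))
    return self_model

def detect(packet_pairs, fields, self_model):
    """Flag packet pairs exhibiting a field relationship never seen in self."""
    alerts = []
    for first, second in packet_pairs:
        for f in fields:
            if relation(first[f], second[f]) not in self_model[f]:
                alerts.append((first, second, f))
    return alerts

# Hypothetical usage: packets as dicts of header fields.
fields = ["ip_id", "tcp_seq", "ttl"]
normal = [({"ip_id": 1, "tcp_seq": 100, "ttl": 64}, {"ip_id": 2, "tcp_seq": 200, "ttl": 64})]
model = learn_self(normal, fields)
suspect = [({"ip_id": 9, "tcp_seq": 500, "ttl": 64}, {"ip_id": 3, "tcp_seq": 400, "ttl": 64})]
print(detect(suspect, fields, model))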
Stochastic Estimation and Control of Queues Within a Computer Network
An extended Kalman filter is used to estimate the size and packet arrival rate of network queues. These estimates are used by an LQG steady-state linear perturbation PI controller to regulate queue size within a computer network. This paper presents the derivation of the transient queue behavior for a system with Poisson traffic and exponential service times. This result is then validated for ideal traffic using a network simulated in OPNET. A more complex OPNET model is then used to test the adequacy of the transient queue size model when non-Poisson traffic is included. The extended Kalman filter theory is presented and a network state estimator is designed using the transient queue behavior model. The equations needed for the LQG synthesis of a steady-state linear perturbation PI controller are presented. These equations are used to develop a network queue controller based on the transient queue model. The performance of the network state estimator and network queue controller was investigated and shown to provide improved control when compared to other, simpler control algorithms.
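A heavily simplified sketch of the estimation half of this design, assuming a linear two-state model (queue length and arrival rate) rather than the extended filter built on the transient queue model; it exists only to show the predict/update structure, and the noise parameters and service rate are arbitrary illustrative values.

import numpy as np

def kalman_queue_estimator(measurements, mu, dt, q_var=1.0, r_var=4.0):
    """Two-state filter: x = [queue length, arrival rate].
    Dynamics: q_{k+1} = q_k + (lambda_k - mu) * dt, with lambda roughly constant.
    A full treatment would linearize the transient queue model at each step
    (an extended Kalman filter); this sketch keeps the dynamics linear."""
    F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition
    B = np.array([-mu * dt, 0.0])              # known service-rate drift
    H = np.array([[1.0, 0.0]])                 # only queue length is measured
    Q = q_var * np.eye(2)                      # process noise covariance
    R = np.array([[r_var]])                    # measurement noise covariance

    x = np.array([measurements[0], mu])        # initial state guess
    P = np.eye(2) * 10.0                       # initial state covariance
    estimates = []
    for z in measurements:
        # Predict step
        x = F @ x + B
        P = F @ P @ F.T + Q
        # Update step
        y = np.array([z]) - H @ x              # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x.copy())
    return estimates

# Example: noisy queue-length samples with service rate mu = 10 pkt/s, dt = 0.1 s.
print(kalman_queue_estimator([5, 6, 8, 7, 9, 11], mu=10.0, dt=0.1)[-1])

The controller side would then feed the estimated queue length and arrival rate into the PI gains produced by the LQG synthesis to adjust service or admission rates toward the desired queue size.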