Cloud Computing
This book, LNICST 617, constitutes the refereed proceedings of the 12th EAI International Conference on Cloud Computing, CloudComp 2024, held in Luton, UK, during September 9-10, 2024. The 16 full papers were carefully reviewed and selected from 42 submissions. The proceedings focus on topics such as cloud-edge computing and wireless networks; network security and emerging applications; and cloud-edge integration applications.
The Cybernetic Society
"An optimistic, shimmering image of a world where AI operates in service to humankind" (Kirkus) argues that both the major risk and opportunity of AI is that humans and computers have fused, giving AI the ability to shape the future of human affairs. Artificial intelligence is inescapable: at home, at work, in politics, and on the battlefield. In The Cybernetic Society, technologist Amir Husain argues that AI hasn't simply encroached on everything we do. It has become part of us, and we, it. Humans and intelligent machines, he argues, are enmeshed in a symbiotic hybrid that he calls a "cybernetic society." Husain describes a present and future where AI isn't a tool of humans but our equal partner, one that can realize its own visions of the world. There is great potential and danger: Saudi Arabia's Neom--a "cognitive city" being built in inhospitable desert--shows how this symbiosis can make life possible where otherwise, it is not. But the profusion of intelligent military drones is making mass destruction possible where otherwise, it is not. As engrossing as it is urgent, The Cybernetic Society offers a new understanding of this revolutionary fusion of machine and mankind, and its profound implications for all our futures. The path ahead is challenging. But Husain shows why we can live harmoniously with our creations.
Information Security
The BiblioGov Project is an effort to expand awareness of the public documents and records of the U.S. Government via print publications. In broadening the public understanding of government and its work, an enlightened democracy can grow and prosper. Ranging from historic Congressional Bills to the most recent Budget of the United States Government, the BiblioGov Project spans a wealth of government information. These works are now made available through an environmentally friendly, print-on-demand basis, using only what is necessary to meet the required demands of an interested public. We invite you to learn of the records of the U.S. Government, heightening the knowledge and debate that can lead from such publications. This work has been selected by scholars as being culturally important, and is part of the knowledge base of civilization as we know it. This work was reproduced from the original artifact, and remains as true to the original work as possible. Therefore, you will see the original copyright references, library stamps (as most of these works have been housed in our most important libraries around the world), and other notations in the work. This work is in the public domain in the United States of America, and possibly other nations. Within the United States, you may freely copy and distribute this work, as no entity (individual or corporate) has a copyright on the body of the work. As a reproduction of a historical artifact, this work may contain missing or blurred pages, poor pictures, errant marks, etc. Scholars believe, and we concur, that this work is important enough to be preserved, reproduced, and made generally available to the public. We appreciate your support of the preservation process, and thank you for being an important part of keeping this knowledge alive and relevant.
Digital Degrowth
We are fast approaching the point of "peak digital", with the continued mass production and excessive consumption of digital technologies set to become a key driver of climate crisis, ecological breakdown and ongoing societal instability. Digital Degrowth is a call to completely rethink our digital futures in these fast-changing times. It explores how degrowth thinking and alternate forms of "radically sustainable computing" might support ambitions of sustainable, scaled-down and equitable ways of living with digital technologies. Neil Selwyn proposes a rebalancing of digital technology use: digital degrowth is not a call for simply making reduced use of the digital technologies that we already have - rather it is an argument to reimagine digital practices that maximise societal benefits with minimal environmental and social impact. Drawing on illustrative examples from across computer science, hacker and environmental activist communities, this book examines how core degrowth principles of conviviality, autonomy and care are already being used to reimagine alternate forms of digital technology. Original and stimulating, this is essential reading for students and scholars of media and communication, sustainability studies, political ecology, computer/data sciences, and across the social sciences.
CompTIA(R) SecurityX(R) CAS-005 Certification Guide - Second Edition
Become a cybersecurity expert with comprehensive CAS-005 preparation using this detailed guide packed with practical insights, mock exams, diagrams, and actionable strategies that align with modern enterprise security demands.
Key Features:
- Strengthen your grasp of key concepts and real-world security practices across updated exam objectives
- Gauge your preparedness with over 300 practice questions, flashcards, and mock exams
- Visualize complex topics with diagrams of AI-driven threats, Zero Trust, cloud security, cryptography, and incident response
Book Description: As cyber threats evolve at unprecedented speed and enterprises demand resilient, scalable security architectures, the CompTIA SecurityX CAS-005 Certification Guide stands as the definitive preparation resource for today's security leaders. This expert-led study guide enables senior security professionals to master the full breadth and depth of the new CAS-005 exam objectives. Written by veteran instructor Mark Birch, this guide draws from over 30 years of experience in teaching, consulting, and implementing cybersecurity controls to deliver clear, actionable content across the four core domains: governance, risk, and compliance; security architecture; security engineering; and security operations. It addresses the most pressing security challenges, from AI-driven threats and Zero Trust design to hybrid cloud environments, post-quantum cryptography, and automation.
While exploring cutting-edge developments, it reinforces essential practices such as threat modeling, secure SDLC, advanced incident response, and risk management. Beyond comprehensive content coverage, this guide ensures you are fully prepared to pass the exam through exam tips, review questions, and detailed mock exams, helping you build the confidence and situational readiness needed to succeed in the CAS-005 exam and real-world cybersecurity leadership.
What You Will Learn:
- Build skills in compliance, governance, and risk management
- Understand key standards such as CSA, ISO27000, GDPR, PCI DSS, CCPA, and COPPA
- Hunt advanced persistent threats (APTs) with AI, threat detection, and cyber kill frameworks
- Apply Kill Chain, MITRE ATT&CK, and Diamond threat models for proactive defense
- Design secure hybrid cloud environments with Zero Trust architecture
- Secure IoT, ICS, and SCADA systems across enterprise environments
- Modernize SecOps workflows with IaC, GenAI, and automation
- Use PQC, AEAD, FIPS, and advanced cryptographic tools
Who this book is for: This CompTIA book is for candidates preparing for the SecurityX certification exam who want to advance their career in cybersecurity. It's especially valuable for security architects, senior security engineers, SOC managers, security analysts, IT cybersecurity specialists/INFOSEC specialists, and cyber risk analysts.
A background in a technical IT role or a CompTIA Security+ certification or equivalent experience is recommended.
Table of Contents:
- Given a Set of Organizational Security Requirements, Implement the Appropriate Governance Components
- Given a Set of Organizational Security Requirements, Perform Risk Management Activities
- Explain How Compliance Affects Information Security Strategies
- Given a Scenario, Perform Threat Modeling Activities
- Summarize the Information Security Challenges Associated with AI Adoption
- Given a Scenario, Analyze Requirements to Design Resilient Systems
- Given a Scenario, Implement Security in the Early Stages of the Systems Life Cycle and Throughout Subsequent Stages
(N.B. Please use the Read Sample option to see further chapters)
Information Systems Security and Privacy
This book constitutes the refereed post-proceedings of the 9th and 10th International Conference on Information Systems Security and Privacy, ICISSP 2023 and 2024, held in Lisbon, Portugal, and in Rome, Italy during February 22-24, 2023 and February 26-28, 2024, respectively. The 15 full papers included in this book were carefully reviewed and selected from 285 submissions. These papers have been organized under the following topical sections: Management and operations; Applications and services; and Technologies and foundations.
Cognitive Computation and Systems
This book constitutes the refereed proceedings of the Third International Conference on Cognitive Computation and Systems, ICCCS 2024, held in Linyi, China, December 20-22, 2024. The 54 revised full papers presented in these proceedings were carefully reviewed and selected from 155 submissions. The papers are organized in the following topical sections: Part I: Cognitive computing and information processing; Intelligent cooperative control; and Learning and systems. Part II: Cognitive computing and information processing; Intelligent cooperative control; and Learning and systems.
Advanced Intelligent Computing Technology and Applications
The 12-volume set CCIS 2564-2575, together with the 28-volume set LNCS/LNAI/LNBI 15842-15869, constitutes the refereed proceedings of the 21st International Conference on Intelligent Computing, ICIC 2025, held in Ningbo, China, during July 26-29, 2025. The 523 papers presented in these proceedings books were carefully reviewed and selected from 4032 submissions. This year, the conference concentrated mainly on the theories and methodologies as well as the emerging applications of intelligent computing. Its aim was to unify the picture of contemporary intelligent computing techniques as an integral concept that highlights the trends in advanced computational intelligence and bridges theoretical research with applications. Therefore, the theme for this conference was "Advanced Intelligent Computing Technology and Applications".
Mitigating Distributed Denial of Service Attacks in an Anonymous Routing Environment
Network-centric intelligence collection operations use computers and the Internet to identify threats against Department of Defense (DoD) operations and personnel, to assess the strengths and weaknesses of enemy capabilities, and to attribute network events to sponsoring organizations. The security of these operations is paramount, and attention must be paid to countering enemy attribution efforts. One way for U.S. information operators to avoid being linked to the DoD is to use anonymous communication systems. One such system, Tor, provides a distributed overlay network that anonymizes interactive TCP services such as web browsing, secure shell, and chat. Tor uses the Transport Layer Security (TLS) protocol and is thus vulnerable to a distributed denial-of-service (DDoS) attack that can significantly delay data traversing the Tor network. This research is the first to explore DDoS mitigation in the anonymous routing environment. Defending against DDoS attacks in this environment is challenging, as mitigation strategies must account for the distributed characteristics of anonymous communication systems and for anonymity vulnerabilities. In this research, the TLS DDoS attack is mitigated by forcing all clients (malicious or legitimate) to solve a puzzle before a connection is completed.
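The client-puzzle defense described in this abstract can be sketched in a few lines: the server issues a random nonce and a difficulty target, and a client must brute-force a counter whose hash meets the target before its connection is accepted, making mass connection floods expensive. A minimal hashcash-style sketch in Python (all names are illustrative; the thesis's actual puzzle construction may differ):

```python
import hashlib
import os

def make_puzzle(difficulty_bits: int) -> tuple[bytes, int]:
    """Server side: issue a random nonce and a difficulty level."""
    return os.urandom(16), difficulty_bits

def leading_zero_bits(digest: bytes) -> int:
    """Count leading zero bits of a digest."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        bits += 8 - byte.bit_length()  # zeros at the top of this byte
        break
    return bits

def solve_puzzle(nonce: bytes, difficulty_bits: int) -> int:
    """Client side: brute-force a counter until the hash meets the target."""
    counter = 0
    while True:
        digest = hashlib.sha256(nonce + counter.to_bytes(8, "big")).digest()
        if leading_zero_bits(digest) >= difficulty_bits:
            return counter
        counter += 1

def verify(nonce: bytes, counter: int, difficulty_bits: int) -> bool:
    """Server side: a single hash checks the claimed solution."""
    digest = hashlib.sha256(nonce + counter.to_bytes(8, "big")).digest()
    return leading_zero_bits(digest) >= difficulty_bits

nonce, bits = make_puzzle(12)       # 12 bits: ~4096 hashes per connection
solution = solve_puzzle(nonce, bits)
assert verify(nonce, solution, bits)
```

The asymmetry is the point: the client pays thousands of hashes per connection attempt, while the server verifies each claim with one hash.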
Spectral Domain RF Fingerprinting for 802.11 Wireless Devices
The increase in availability and reduction in cost of commercial communication devices (IEEE-compliant standards such as 802.11, 802.16, etc.) have increased wireless users' exposure and the need for techniques to properly identify/classify signals for increased security measures. A communication device's emission includes intentional modulation that enables correct device operation. Hardware and environmental factors alter the ideal response and induce unintentional modulation effects. If these effects (features) are sufficiently unique, it becomes possible to identify a device using its fingerprint, with potential discrimination of not only manufacturers but possibly serial numbers for a given manufacturer.
Virtualization Technology Applied to Rootkit Defense
This research effort examines the idea of applying virtualization hardware to enhance operating system security against rootkits. Rootkits are sets of tools used to hide code and/or functionality from the user and operating system. Rootkits can accomplish this feat by using access to one part of an operating system to change another part that resides at the same privilege level. Hardware-assisted virtualization (HAV) provides an opportunity to defeat this tactic through the introduction of a new operating mode. Created to aid operating system virtualization, HAV provides hardware support for managing and saving multiple states of the processor. This hardware support overcomes a problem in pure software virtualization: the need to modify guest software to run at a less privileged level. Using HAV, guest software can operate at the pre-HAV most privileged level. This thesis provides a plan to protect data structures targeted by rootkits through unconventional use of HAV technology to secure system resources such as memory. This method of protection will provide true real-time security through OS attack prevention, rather than reaction.
Supplementing an Ad Hoc Wireless Network Routing Protocol With Radio Frequency Identification Tags
Wireless sensor networks (WSNs) have a broad and varied range of applications, yet all of these are limited by the resources available to the sensor nodes that make up the WSN. The most significant resource is energy; a WSN may be deployed to an inhospitable or unreachable area, leaving it with a non-replenishable power source. This research examines a technique of reducing energy consumption by augmenting the nodes with radio frequency identification (RFID) tags that contain routing information. It was expected that RFID tags would reduce the network throughput, AODV routing traffic sent, and the amount of energy consumed. However, RFID tags have little effect on the network throughput or the AODV routing traffic sent. They also increase end-to-end (ETE) delays in sparse networks, as well as the amount of energy consumed in both sparse and dense networks. Furthermore, there was no statistical difference in the amount of user data throughput received. The density of the network is shown to have an effect on the variation of the data, but the trends are the same for both sparse and dense networks. This counter-intuitive result is explained, and conditions for such a scheme to be effective are discussed.
Statistical Machine Translation of Japanese
Statistical machine translation (SMT) uses large amounts of language training data to statistically build a knowledge base for translating from one language to another. Before introducing this language data, usually in the form of a parallel set of sentences from both languages, the SMT system has no other linguistic information available to it. With supervised SMT, however, additional linguistic knowledge is supplied alongside the training data. When translating between languages with little or no common linguistic background, like English and Japanese, supervised SMT is extremely useful. By giving the system linguistic rules before training on the parallel corpus, the SMT system can build better alignments between words in both languages.
Policy Changes for Acquisition of Offensive Cyberspace Weapon Systems
Because the cyberspace environment is changing so quickly, the slow, methodical Department of Defense (DoD) acquisition process may not suffice. By following the evolutionary acquisition method and incorporating five policy caveats, the DoD acquisition process can acquire effective systems quickly. The purpose of this research is to provide recommended policy changes in the acquisition of offensive cyberspace weapon systems for the Air Force and DoD in general. This paper describes the current DoD acquisition process, explains how cyberspace is different from the other domains, discusses a few innovative acquisition and development approaches, and concludes with the recommended policy changes. A literature search on the cyberspace community along with DoD and Air Force doctrine provided the bulk of the research. The recommended acquisition policy changes fall into the following categories: expanding the network of development activities, building payloads for specific target sets, security classification, sustainment of cyberspace capabilities, and testing throughout the acquisition process.
An Analysis of Information Asset Valuation Quantification Methodology for Application With Cyber Information Mission Impact Assessment
The purpose of this research is to develop a standardized Information Asset Valuation (IAV) methodology. The IAV methodology proposes that accurate valuation for an Information Asset (InfoA) is the convergence of information tangible, intangible, and flow attributes to form a functional entity that enhances mission capability. The IAV model attempts to quantify an InfoA to a single value through the summation of weighted criteria. Standardizing the InfoA value criteria will enable decision makers to comparatively analyze dissimilar InfoAs across the tactical, operational, and strategic domains. This research develops the IAV methodology through a review of existing military and non-military valuation methodologies. IAV provides the Air Force (AF) and Department of Defense (DoD) with a standardized methodology that may be utilized enterprise wide when conducting risk and damage assessment and risk management. The IAV methodology is one of the key functions necessary for the Cyber Incident Mission Impact Assessment (CIMIA) program to operationalize a scalable, semi-automated Decision Support System (DSS) tool. The CIMIA DSS intends to provide decision makers with near real-time cyber awareness prior to, during, and post cyber incident situations through documentation of relationships, interdependencies, and criticalities among information assets, the communications infrastructure, and the operations mission impact.
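The "summation of weighted criteria" at the core of the IAV model is a weighted-sum score: each criterion is rated, multiplied by its weight, and the products are added into a single asset value. A minimal sketch in Python; the criteria names, weights, and scores here are hypothetical placeholders, not the thesis's actual IAV criteria:

```python
# Hypothetical criteria and weights -- illustrative only.
criteria_weights = {
    "mission_criticality": 0.4,
    "sensitivity": 0.3,
    "timeliness": 0.2,
    "replaceability": 0.1,
}

def asset_value(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Collapse per-criterion scores (0-10 scale) into one weighted value."""
    if abs(sum(weights.values()) - 1.0) > 1e-9:
        raise ValueError("weights must sum to 1")
    return sum(weights[c] * scores[c] for c in weights)

# Hypothetical asset: a targeting database rated against each criterion.
targeting_db = {
    "mission_criticality": 9,
    "sensitivity": 8,
    "timeliness": 7,
    "replaceability": 4,
}
print(asset_value(targeting_db, criteria_weights))  # 7.8
```

Because every asset collapses to one number on a shared scale, dissimilar assets can be ranked against each other, which is exactly the comparative analysis the abstract describes.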
Toward Cyber Omniscience
It is widely accepted that cyberspace is a vulnerable and highly contested environment. The United States has faced and will face threats to its national security in this realm. As a result, the Office of the Secretary of Defense (OSD) has decided to consider new and evolving theories of deterrence to address the cyber domain. This OSD-sponsored paper examines a new cyberspace deterrence option known as cyber omniscience. Set in the year 2035, this paper will begin the process of developing the theory of cyber omniscience as a DoD deterrent. At the heart of cyber deterrence lies this question: "As technology rapidly advances in the contested cyber domain, can hostile individuals be deterred from employing highly advanced technologies through cyberspace that threaten national survival?" To answer this question, this paper will investigate a number of issues with regard to cyberspace deterrence: anticipated life (societal norms) and technology in 2035, hostile individual threats, what cyber omniscience entails, privacy issues, and policy recommendations. This multi-pronged approach will serve as the catalyst to a better understanding of the future of cyberspace, the threats, and deterrence.
To Click or Not to Click
Today's Air Force networks are under frequent attack. One of the most pernicious threats is a sophisticated phishing attack that can lead to complete network penetration. Once an adversary has gained network entry, they are in a position to exfiltrate sensitive data or pursue even more active forms of sabotage. However, promising technical advances proposed in current research can help mitigate the threat. User education will also continue to play an important role in increasing the effectiveness of AF defenses. This paper reviews and recommends the most promising suggestions for adaptation and application in today's AF networks.
Geographic Location of a Computer Node Examining a Time-to-Location Algorithm and Multiple Autonomous System Networks
The ability to determine the location of a computer on the Internet without resorting to outside information or databases would greatly increase the security capabilities of the US Air Force and the Department of Defense. The geographic location of a computer node has been demonstrated on an autonomous system (AS) network, or a network with one system administration focal point. This work shows that a similar technique will work on networks comprised of multiple ASes. A time-to-location algorithm can successfully resolve the geographic location of a computer node using only latency information from known sites, mathematically calculating the Euclidean distance to those sites from an unknown location on a single AS network. On a multiple AS network, the time-to-location algorithm successfully resolves a geographic location 71.4% of the time. Packets are subject to arbitrary delays in the network, and inconsistencies in latency measurements are discovered when attempting to use a time-to-location algorithm on a multiple AS network. To improve accuracy in a multiple AS network, a time-to-location algorithm needs to calculate the link bandwidth.
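The latency-based approach described above can be sketched in a few lines: landmark hosts with known positions measure round-trip times to the target, latency is converted to distance, and the resulting circles are intersected. Everything below (the flat km grid, the fiber-propagation constant, the function names) is an illustrative assumption, not the thesis's actual algorithm; real measurements carry the queuing delays the abstract warns about.

```python
import math

# Light in fiber travels roughly 200 km per millisecond (illustrative constant).
FIBER_KM_PER_MS = 200.0

def latency_to_distance(rtt_ms):
    """Estimate one-way geographic distance (km) from a round-trip time."""
    return (rtt_ms / 2.0) * FIBER_KM_PER_MS

def trilaterate(landmarks, distances):
    """Solve for (x, y) from three landmark positions and distances to them.

    Subtracting the first circle equation from the other two linearizes the
    system, leaving a 2x2 linear system solved here with Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = landmarks
    d1, d2, d3 = distances
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Three hypothetical landmark servers on a flat km grid.
landmarks = [(0.0, 0.0), (400.0, 0.0), (0.0, 300.0)]
target = (120.0, 90.0)
# Simulate ideal RTTs from the true distances (no queuing delay).
rtts = [2 * math.dist(lm, target) / FIBER_KM_PER_MS for lm in landmarks]
est = trilaterate(landmarks, [latency_to_distance(r) for r in rtts])
print(est)
```

With ideal latencies the true position is recovered exactly; the 71.4% multi-AS figure above reflects how badly real cross-AS delay variation perturbs the distance estimates.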
Developing a Corpus-Specific Stop-List Using Quantitative Comparison
We have become overwhelmed with electronic information, and it seems our situation is not going to improve. When computers first came to be thought of as instruments to assist us and make our lives easier, we envisioned a manageable future: a day when documents, no matter when they were produced, would be as close as a click of the mouse and the typing of a few words. Locating information of interest was not going to take all day. What we have found is that technology changes faster than we can keep up with it. This thesis looks at how we can provide faster access to the information we are looking for. Previous research in the area of document/information retrieval has mainly focused on the automated creation of abstracts and indexes, but today's requirements are more closely related to searching for information through the use of queries. At the heart of the query process is the removal of search terms with little or no significance to the search being performed. More often than not, stop-lists are constructed from the most commonly occurring words in the English language. This approach may be fine for systems which handle information from very broad categories.
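A corpus-specific stop-list of the kind argued for above can be derived quantitatively rather than taken from a fixed English word list, for instance by flagging terms whose document frequency within the corpus is high and therefore carry little discriminating power for queries. The function name, threshold, and toy corpus below are illustrative assumptions, not the thesis's method:

```python
from collections import Counter

def corpus_stoplist(documents, df_threshold=0.8):
    """Build a corpus-specific stop-list: terms appearing in at least
    df_threshold of the documents discriminate poorly between them.
    (Illustrative threshold; a real study would tune it empirically.)"""
    doc_freq = Counter()
    for doc in documents:
        doc_freq.update(set(doc.lower().split()))  # count each doc once
    n = len(documents)
    return {term for term, df in doc_freq.items() if df / n >= df_threshold}

docs = [
    "the aircraft engine report covers the maintenance schedule",
    "the engine maintenance log for the aircraft is attached",
    "the report on aircraft engine wear and the repair backlog",
]
stops = corpus_stoplist(docs, df_threshold=1.0)
print(sorted(stops))  # ['aircraft', 'engine', 'the']
```

Note that in this aviation-flavored toy corpus, "aircraft" and "engine" join "the" on the stop-list: words useless for queries within this corpus that a generic English stop-list would never catch.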
Using a Distributed Object-Oriented Database Management System in Support of a High-Speed Network Intrusion Detection System Data Repository
The Air Force has multiple initiatives to develop data repositories for high-speed network intrusion detection systems (IDS). All of the developed systems utilize a relational database management system (RDBMS) as the primary data storage mechanism. The purpose of this thesis is to replace the RDBMS in one such system developed by AFRL, the Automated Intrusion Detection Environment (AIDE), with a distributed object-oriented database management system (DOODBMS) and observe a number of areas: its performance against the RDBMS in terms of IDS event insertion and retrieval, the distributed aspects of the new system, and the resulting object-oriented architecture. The resulting system, the Object-Oriented Automated Intrusion Detection Environment (OOAIDE), is designed, built, and tested using the DOODBMS Objectivity/DB. Initial tests indicate that the new system is remarkably faster than the original system in terms of event insertion. Object retrievals are also faster when more than one association is used in the query. The database is then replicated and distributed across a simple heterogeneous network with preliminary tests indicating no loss of performance. A standardized object model is also presented that can accommodate any IDS data repository built around a DOODBMS architecture.
A Distributed Agent Architecture for a Computer Virus Immune System
Information superiority is identified as an Air Force core competency and is recognized as a key enabler for the success of future missions. Information protection and information assurance are vital components required for achieving superiority in the Infosphere, but these goals are threatened by the exponential birth rate of new computer viruses. The increased global interconnectivity that is empowering advanced information systems is also increasing the spread of malicious code, and current anti-virus solutions are quickly becoming overwhelmed by the burden of capturing and classifying new viral strains. To overcome this problem, a distributed computer virus immune system (CVIS) based on biological strategies is developed. The biological immune system (BIS) offers a highly parallel defense-in-depth solution for detecting and eliminating foreign invaders. Each component of the BIS can be viewed as an autonomous agent. Only through the collective actions of this multi-agent system can non-self entities be detected and removed from the body. This research develops a model of the BIS and utilizes software agents to implement a CVIS. The system design validates that agents are an effective methodology for the construction of an artificial immune system, largely because the biological basis for the architecture can be described as a system of collaborating agents. The distributed agent architecture provides support for detection and management capabilities that are unavailable in current anti-virus solutions. However, the slow performance of the Java and Java Shared Data Toolkit implementation indicates the need for a compiled-language solution and the importance of understanding the performance issues in agent system design. The detector agents are able to distinguish self from non-self within a probabilistic error rate that is tunable through the proper selection of system parameters.
This research also shows that by fighting viruses using an immune system model, …
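The self/non-self discrimination the detector agents perform can be illustrated with a classic negative-selection sketch: random candidate detectors are censored against a set of "self" bit strings, and the survivors flag anything they match as non-self. The r-contiguous-bits matching rule and every parameter below are illustrative assumptions, not the thesis's implementation; the match length r is one knob behind the tunable error rate mentioned above.

```python
import random

def r_contiguous_match(a, b, r):
    """True if bit strings a and b agree on r contiguous positions."""
    run = 0
    for x, y in zip(a, b):
        run = run + 1 if x == y else 0
        if run >= r:
            return True
    return False

def generate_detectors(self_set, n_detectors, length, r, seed=0):
    """Negative selection: keep only random detectors matching no self string."""
    rng = random.Random(seed)
    detectors = []
    while len(detectors) < n_detectors:
        cand = "".join(rng.choice("01") for _ in range(length))
        if not any(r_contiguous_match(cand, s, r) for s in self_set):
            detectors.append(cand)  # survived censoring against self
    return detectors

def is_nonself(sample, detectors, r):
    """A sample is flagged non-self if any surviving detector matches it."""
    return any(r_contiguous_match(sample, d, r) for d in detectors)

self_set = ["00000000", "00001111"]
dets = generate_detectors(self_set, n_detectors=20, length=8, r=4)
print(is_nonself("00000000", dets, r=4))  # False: self is never flagged
```

By construction the false-positive rate on self is zero, while coverage of non-self grows with the number of detectors; lowering r makes detectors more general but harder to generate, which is the probabilistic trade-off the abstract describes.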
An Application of Automated Theorem Provers to Computer System Security
The Schematic Protection Model is specified in SAL and theorems about Take-Grant and New Technology File System schemes are proven. Arbitrary systems can be specified in SPM and analyzed. This is the first known automated analysis of SPM specifications in a theorem prover. The SPM specification was created in such a way that new specifications share the underlying framework and are configurable within the specifications file alone. This allows new specifications to be created with ease as demonstrated by the four unique models included within this document. This also allows future users to more easily specify models without recreating the framework. The built-in modules of SAL provided the needed support to make the model flexible and entities asynchronous. This flexibility allows for the number of entities to be dynamic and to meet the needs of different specifications. The models analyzed in this research demonstrate the validity of the specification and its application to real-world systems.
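The kind of safety question such an analysis answers can be sketched as a fixpoint computation over an access matrix: repeatedly apply take and grant rules until no new rights appear, then ask whether a subject has reached a sensitive right. This toy model (rule set and names heavily simplified, all hypothetical) only gestures at the formal SPM/Take-Grant analysis performed in SAL:

```python
def safety_fixpoint(rights):
    """Iterate Take-Grant-style rules over an access matrix to a fixpoint.

    rights maps (subject, target) -> set of rights such as {"take",
    "grant", "read"}. A hedged toy model, not the full SPM formalism.
    """
    changed = True
    while changed:
        changed = False
        snapshot = [(s, t, set(r)) for (s, t), r in rights.items()]
        for (a, b), r_ab in list(rights.items()):
            if "take" in r_ab:
                # a may take any right b holds over any object x
                for s, x, r_sx in snapshot:
                    if s == b:
                        cur = rights.setdefault((a, x), set())
                        if not r_sx <= cur:
                            cur |= r_sx
                            changed = True
            if "grant" in r_ab:
                # a may grant to b any right a holds over any object x
                for s, x, r_sx in snapshot:
                    if s == a:
                        cur = rights.setdefault((b, x), set())
                        if not r_sx <= cur:
                            cur |= r_sx
                            changed = True
    return rights

# Safety query: can 'intruder' obtain read access to 'secret'?
m = {("intruder", "clerk"): {"take"}, ("clerk", "secret"): {"read"}}
m = safety_fixpoint(m)
print("read" in m.get(("intruder", "secret"), set()))  # True: unsafe policy
```

A theorem prover generalizes this idea from one concrete matrix to all reachable states of a parameterized specification, which is what makes the SAL analysis above strictly stronger than enumeration.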
Passwords
The purpose of this research was to see how individuals use and remember passwords. Specifically, this thesis sought to answer research questions addressing whether organizational parameters influence behaviors associated with password choice, and to what effect. Volunteers answered the research questions via a web survey. The research identified the need for an evaluation of how organizations limit password choice by setting parameters for individuals.
Developing a Qualia-Based Multi-Agent Architecture for Use in Malware Detection
Detecting network intruders and malicious software is a significant problem for network administrators and security experts. New threats are emerging at an increasing rate, and current signature- and statistics-based techniques are not keeping pace. Intelligent systems that can adapt to new threats are needed to mitigate these new strains of malware as they are released. This research detects malware based on its qualia, or essence, rather than its low-level implementation details. By looking for the underlying concepts that make a piece of software malicious, this research avoids the pitfalls of static solutions that focus on predefined bit-sequence signatures or anomaly thresholds. This research develops a novel, hierarchical modeling method to represent a computing system and demonstrates the representation's effectiveness by modeling the Blaster worm. Using Latent Dirichlet Allocation and Support Vector Machines, abstract concepts are automatically generated that can be used in the hierarchical model for malware detection. Finally, the research outlines a novel system that uses multiple levels of individual software agents that share contextual relationships and information across different levels of abstraction to make decisions. This qualia-based system provides a framework for developing intelligent classification and decision-making systems for a number of application areas.
Formal Mitigation Strategies for the Insider Threat
The advancement of technology and reliance on information systems have fostered an environment of sharing and trust. The rapid growth and dependence on these systems, however, creates an increased risk associated with the insider threat. The insider threat is one of the most challenging problems facing the security of information systems because the insider already has capabilities within the system. Despite research efforts to prevent and detect insiders, organizations remain susceptible to this threat because of inadequate security policies and a willingness of some individuals to betray their organization. To investigate these issues, a formal security model and risk analysis framework are used to systematically analyze this threat and develop effective mitigation strategies. This research extends the Schematic Protection Model to produce the first comprehensive security model capable of analyzing the safety of a system against the insider threat. The model is used to determine vulnerabilities in security policies and system implementation. Through analysis, mitigation strategies that effectively reduce the threat are identified. Furthermore, an action-based taxonomy that expresses the insider threat through measurable and definable actions is presented.
Bubble World: A Novel Visual Information Retrieval Technique
With the tremendous growth of published electronic information sources in the last decade, and the unprecedented reliance on this information to succeed in day-to-day operations, comes the expectation of finding the right information at the right time. Sentential interfaces are currently the only viable solution for searching through large infospheres of unstructured information; however, the simplistic nature of their interaction model and the limited cognitive amplification they can provide severely limit the performance of the interface. Visual information retrieval systems are emerging as possible candidate replacements for the more traditional interfaces, but many lack the cognitive framework to support the knowledge crystallization process found to be essential in information retrieval. This work introduces a novel visual information retrieval technique crafted from two distinct design genres: (1) the cognitive strategies the human mind uses to solve problems and (2) observed interaction patterns with existing information retrieval systems. Based on the cognitive and interaction framework developed in this research, a functional prototype information retrieval system, called Bubble World, has been created to demonstrate that significant performance gains can be achieved using this technique when compared to more traditional text-based interfaces.
Scalable and Fault Tolerant Group Key Management
To address the group key management problem for modern networks, this research proposes a lightweight group key management protocol with a gossip-based dissemination routine. Experiments show that by slightly increasing the workload for the key update mechanism, this protocol is superior to currently available tree-based protocols with respect to reliability and fault tolerance, while remaining scalable to large groups. In addition, it eliminates the need for a logical key hierarchy while preserving an overall reduction in the messages needed to rekey a group. The protocol provides a simple "pull" mechanism to ensure perfect rekeys in spite of the primary rekey mechanism's probabilistic guarantees, without burdening key distribution facilities. Benefits of this protocol are quantified versus tree-based dissemination in Java simulations on networks exhibiting various node failure rates.
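The gossip-based dissemination routine can be sketched as a push protocol: each node that has received the rekey message forwards it to a few random peers per round, failed nodes never relay, and a separate "pull" pass would then cover any stragglers left by the probabilistic guarantee. The fanout, failure rate, and fully connected topology below are illustrative assumptions, not the protocol's actual parameters:

```python
import random

def gossip_rekey(n_nodes, fanout=3, failure_rate=0.1, seed=1):
    """Push-gossip simulation; returns (rounds, live nodes reached, live nodes)."""
    rng = random.Random(seed)
    # Node 0 is the key server's first contact and is assumed alive.
    alive = [True] + [rng.random() > failure_rate for _ in range(n_nodes - 1)]
    informed = {0}
    rounds = 0
    while True:
        new = set()
        for node in informed:
            if not alive[node]:
                continue  # failed nodes never relay the rekey message
            for peer in rng.sample(range(n_nodes), fanout):
                if peer not in informed:
                    new.add(peer)
        if not new:  # probabilistic push has converged; "pull" covers the rest
            break
        informed |= new
        rounds += 1
    reached = sum(1 for i in informed if alive[i])
    return rounds, reached, sum(alive)

rounds, reached, live = gossip_rekey(200)
print(f"{reached}/{live} live nodes rekeyed in {rounds} rounds")
```

The epidemic spread typically reaches nearly all live nodes in O(log n) rounds regardless of which individual nodes fail, which is the fault-tolerance advantage over a tree, where one dead interior node cuts off a whole subtree.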
Enemy at the Gateways
Every day, hackers use the Internet to "virtually" invade the borders of the United States and its critical infrastructure. National leadership must determine whether these intrusions constitute an attack or merit the declaration of a national emergency. In times of war, cyber attackers may attempt to monitor communications or disrupt information systems and other systems critical to national infrastructure. Formed in 2002, the Department of Homeland Security (DHS) holds lead agency status for many initiatives of the National Strategy to Secure Cyberspace (NSSC). The NSSC identifies critical infrastructures and key resources (CI/KR) that must be protected from physical or virtual attack. Current national strategy calls for the Department of Defense (DoD) to protect the defense industrial base (DIB), one of seven identified sectors of CI/KR. DoD components include the Office of the Secretary of Defense, the Joint Staff, the Military Services, Unified and Specified Commands, Defense Agencies, and field activities. DoD can contribute significantly to the protection of the nation from attacks directed against the United States via cyberspace by leveraging current resources and capabilities to augment ongoing initiatives and working to develop more effective homeland defense solutions. Along the way, DoD must continue working to protect the DIB from the information collection efforts of foreign intelligence services and organized crime, as well as from potential terrorist efforts to destroy or hold hostage critical information. Sensitive but unclassified (SBU) information seems to be more at risk than classified program information at this time, so current DoD efforts aim to secure the unclassified networks and databases of defense contractors. DoD can and should exceed the expectations laid out by the President of the United States in national strategy.
Cooperation and information sharing will be the key.
Historical Analysis of the Awareness and Key Issues of the Insider Threat to Information Systems
Since information systems have become smaller, faster, cheaper, and more interconnected, many organizations have become more dependent on them for daily operations and to maintain critical data. This reliance on information systems is not without risk of attack. Because these systems are relied upon so heavily, the impact of such an attack also increases, making the protection of these systems essential. Information system security often focuses on the risk of attack and damage from the outsider. High-profile issues such as hackers, viruses, and denial-of-service are generally emphasized in literature and other media outlets. A neglected area of computer security that is just as prevalent and potentially more damaging is the threat from a trusted insider. An organizational insider who misuses a system, whether intentionally or unintentionally, is often in a position to know where and how to access important information. How do we become aware of such activities and protect against this threat? This research was a historical analysis of the insider threat to information systems, undertaken to develop an understanding and framework of the topic.
Leveraging Traditional Battle Damage Assessment Procedures to Measure Effects from a Computer Network Attack
The art of warfare in cyberspace is evolving. Cyberspace, as the newest warfighting domain, requires the tools to synchronize effects from the cyber domain with those of the traditional land, maritime, space, and air domains. Cyberspace can complement a commander's theater strategy, supporting strategic, operational, and tactical objectives. To be effective, commanders must have a mechanism that allows them to understand whether a desired cyber effect was successful, which requires a comprehensive cyber battle damage assessment capability. The purpose of this research is to analyze how traditional kinetic battle damage assessment is conducted and apply those concepts in cyberspace. This requires in-depth nodal analysis of the cyberspace target, as well as of what second- and third-order effects can be measured to determine whether the cyber-attack was successful. This is necessary to measure the impact of the cyber-attack, which can be used to increase or decrease the risk level to personnel operating in traditional domains.
Dynamic Polymorphic Reconfiguration to Effectively Cloak a Circuit's Function
Today's society has become more dependent on the integrity and protection of digital information used in daily transactions, resulting in an ever-increasing need for information security. Additionally, the need for faster and more secure cryptographic algorithms to provide this information security has become paramount. Hardware implementations of cryptographic algorithms provide the necessary increase in throughput, but at the cost of leaking critical information. Side Channel Analysis (SCA) attacks allow an attacker to exploit the regular and predictable power signatures leaked by cryptographic functions used in algorithms such as RSA. In this research, the focus is on a means to counteract this vulnerability by creating a Critically Low Observable Anti-Tamper Keeping Circuit (CLOAK) capable of continuously changing the way it functions in both power and timing. This research has determined that a polymorphic circuit design capable of varying circuit power consumption and timing can protect a cryptographic device from Electromagnetic Analysis (EMA) attacks. In essence, we are effectively CLOAKing the circuit's functions from an attacker.
Spear Phishing Attack Detection
This thesis addresses the problem of identifying email spear phishing attacks, which are indicative of cyber espionage. Spear phishing consists of targeted emails sent to entice a victim to open a malicious file attachment or click on a malicious link that leads to a compromise of their computer. Current detection methods fail to detect emails of this kind consistently. The SPEar phishing Attack Detection system (SPEAD) is developed to analyze all incoming emails on a network for the presence of spear phishing attacks. SPEAD analyzes the following file types: Windows Portable Executable and Common Object File Format (PE/COFF), Adobe Reader, and Microsoft Excel, Word, and PowerPoint. SPEAD's malware detection accuracy is compared against five commercially-available email anti-virus solutions. Finally, this research quantifies the time required to perform this detection with email traffic loads emulating an Air Force base network. Results show that SPEAD outperforms the anti-virus products in PE/COFF malware detection with an overall accuracy of 99.68% and an accuracy of 98.2% where new malware is involved. Additionally, SPEAD is comparable to the anti-virus products when it comes to the detection of new Adobe Reader malware with a rate of 88.79%. Ultimately, SPEAD demonstrates a strong tendency to focus its detection on new malware, which is a rare and desirable trait. Finally, after less than 4 minutes of sustained maximum email throughput, SPEAD's non-optimized configuration exhibits one-hour delays in processing files and links.
Therefore, you will see the original copyright references, library stamps (as most of these works have been housed in our most important libraries around the world), and other notations in the work.This work is in the public domain in the United States of America, and possibly other nations. Within the United States, you may freely copy and distribute this work, as no entity (individual or corporate) has a copyright on the body of the work.As a reproduction of a historical artifact, this work may contain missing or blurred pages, poor pictures, errant marks, etc. Scholars believe, and we concur, that this work is important enough to be preserved, reproduced, and made generally available to the public. We appreciate your support of the preservation process, and thank you for being an important part of keeping this knowledge alive and relevant.
Cryptanalysis of Pseudorandom Number Generators in Wireless Sensor Networks
This work presents a brute-force attack on an elliptic curve cryptosystem implemented on UC Berkeley's TinyOS operating system for wireless sensor networks. The attack exploits the short period of the pseudorandom number generator (PRNG) used by the cryptosystem to generate private keys. The attack assumes a laptop is listening promiscuously to network traffic for key messages and requires only the sensor node's public key and network address to discover the private key. Experimental results show that roughly 50% of the address space leads to a private key compromise in 25 minutes on average.
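The weakness described above can be sketched in miniature: if the PRNG behind key generation has a tiny state space, an attacker who sees only the public key can simply enumerate every seed. This is an illustrative toy, not the thesis's actual attack: the 16-bit LCG is a hypothetical stand-in for the TinyOS PRNG, and modular exponentiation stands in for elliptic curve point multiplication.

```python
# Toy model of a short-period PRNG compromising key generation.
# All parameters are illustrative, not from the attacked system.
P, G = 2**31 - 1, 7  # small "public" group parameters

def weak_prng(seed, n=8):
    """16-bit LCG: the tiny state means a tiny effective key space."""
    out = []
    for _ in range(n):
        seed = (25173 * seed + 13849) % 2**16
        out.append(seed)
    return out

def private_key_from_seed(seed):
    # Victim derives its private key from a few PRNG outputs.
    key = 0
    for w in weak_prng(seed):
        key = (key << 16) | w
    return key % (P - 1)

def brute_force(public_key):
    # Attacker enumerates the entire 2^16 seed space offline.
    for seed in range(2**16):
        priv = private_key_from_seed(seed)
        if pow(G, priv, P) == public_key:
            return priv
    return None

victim_priv = private_key_from_seed(4242)
victim_pub = pow(G, victim_priv, P)
recovered = brute_force(victim_pub)
assert recovered is not None and pow(G, recovered, P) == victim_pub
```

The recovered key is functionally equivalent to the victim's: it reproduces the same public key, which is all an impersonating attacker needs.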
Cyber Capabilities for Global Strike in 2035
This paper examines global strike, a core Air Force capacity to quickly and precisely attack any target anywhere, anytime, from a cyber perspective. Properly used, cyberspace capabilities can significantly enhance Air Force (AF) capabilities to provide the nation the capacity to influence the strategic behavior of existing and potential adversaries. This paper argues that the AF must improve both the quantity and quality of its cyberspace operations force by treating cyber warfare capabilities in the same manner as it treats its other weapon systems. It argues that, despite preconceptions of future automation capabilities, cyberspace will be a highly dynamic and fluid environment characterized by interactions with a thinking adversary. As such, while automation is required, cyber warfare will be much more manpower intensive than is currently understood and will require a force that is very highly trained. The rapid evolution of this man-made domain will also demand a robust developmental science and research investment to keep cyber warfare capabilities apace with the technologies of the environment. This paper reaches these conclusions by first providing a glimpse into the world of cyberspace in 2035. The paper then assesses how cyber warfare mechanisms could disrupt, disable, or destroy potential adversary targets. It describes how these capabilities might work in two alternate scenarios, and then describes the steps the AF needs to take in the future to be confident in its ability to fly, fight, and win in cyberspace.
Digital Warfare
Digital Data Warfare (DDW) is an emerging field that has great potential as a means to meet military, political, economic, and personal objectives. Distinguished from the "hacker" variety of malicious computer code by its predictable nature and its ability to target specific systems, DDW provides the means to deny, degrade, deceive, and/or exploit a targeted system. The five phases of a DDW attack (penetration, propagation, dormancy, execution, and termination) are presented for the first time by the author in this paper. The nature of DDW allows it to be used in strategic, operational, and tactical warfare roles. Three questions should be considered when developing a strategy for employing DDW: (1) Who should control the employment of DDW? (2) What types of systems should be targeted? (3) Under what circumstances should DDW be used? Finally, a brief overview of possible countermeasures against DDW is provided, as well as an outline of an effective information system security program that would provide a defense against DDW.
A Study to Determine Damage Assessment Methods or Models on Air Force Networks
Damage assessment for computer networks is a new area of interest for the Air Force. Previously, there has not been a concerted effort to codify damage assessment or to develop a model that can be applied in assessing damage done by criminals, natural disasters, or other causes of damage to a computer network. This research attempts to identify whether the Air Force MAJCOM Network Operations Support Centers (NOSCs) use damage assessment models or methods. If the Air Force does use a model or method, an additional question of how the model was attained or decided upon is asked. All information comes from interviews, via e-mail or telephone, with managers in charge of computer security incidents at the Major Command level. The research is qualitative, so there are many biases and opportunities for additional research. Currently, there is some evidence that several Network Operations Support Centers use some form of damage assessment; however, each organization has highly individualized damage assessment methods that were developed internally rather than from a reproducible method or model.
Software Obfuscation With Symmetric Cryptography
Software protection is of great interest to commercial industry. Millions of dollars and years of research are invested in the development of proprietary algorithms used in software programs. A reverse engineer who successfully reverses another company's proprietary algorithms can develop a competing product to market in less time and with less money. The threat is even greater in military applications, where adversarial reversers can use reverse engineering on unprotected military software to compromise capabilities in the field or develop their own capabilities with significantly fewer resources. Thus, it is vital to protect software, especially the software's sensitive internal algorithms, from adversarial analysis. Software protection through obfuscation is a relatively new research initiative. The mathematical and security communities have yet to agree upon a model to describe the problem, let alone the metrics used to evaluate the practical solutions proposed by computer scientists. We propose evaluating solutions to obfuscation under the intent protection model, a combination of white-box and black-box protection that reflects how reverse engineers analyze programs using a combination of white-box and black-box attacks. In addition, we explore the use of experimental methods and metrics from analogous and more mature fields of study such as hardware circuits and cryptography. Finally, we implement a solution under the intent protection model that demonstrates application of the methods, evaluated using metrics adapted from the aforementioned fields of study to reflect the unique challenges of a software-only software protection technique.
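The general idea of combining symmetric cryptography with software protection can be sketched as follows: the sensitive routine is stored only in encrypted form and decrypted just before execution. This is a minimal illustration of the concept, not the thesis's actual construction; the XOR "cipher," the embedded key, and the sample routine are all placeholders.

```python
# Toy sketch: keep a proprietary routine encrypted at rest and
# decrypt it only at call time. Illustrative only; a real scheme
# would use a proper cipher and key management.

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """Placeholder symmetric cipher (XOR with a repeating key)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

SECRET_SOURCE = b"def proprietary(x):\n    return 3 * x * x + 7\n"
KEY = b"demo-key"
ENCRYPTED = xor_bytes(SECRET_SOURCE, KEY)  # what ships with the program

def run_protected(x):
    # Decrypt the routine into a throwaway namespace at call time,
    # so the plaintext algorithm is never stored on disk.
    ns = {}
    exec(xor_bytes(ENCRYPTED, KEY).decode(), ns)
    return ns["proprietary"](x)

assert run_protected(2) == 19  # 3*2*2 + 7
```

Note that a static analyst who recovers the embedded key defeats this toy outright, which is exactly why the intent protection model evaluates schemes against combined white-box and black-box attacks.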
Using Relational Schemata in a Computer Immune System to Detect Multiple-Packet Network Intrusions
Given the increasingly prominent cyber-based threat, there are substantial research and development efforts underway in network- and host-based intrusion detection using single-packet traffic analysis. However, there is a noticeable lack of research and development in intrusion detection with regard to attacks that span multiple packets. This leaves a conspicuous gap in intrusion detection capability, because not all attacks can be found by examining single packets alone. Some attacks may only be detected by examining multiple network packets collectively, considering how they relate to the "big picture," not how they are represented as individual packets. This research demonstrates a multiple-packet relational sensor in the context of a Computer Immune System (CIS) model to search for attacks that might otherwise go unnoticed via single-packet detection methods. Using relational schemata, multiple-packet CIS sensors define "self" based on equal, less-than, and greater-than relationships between fields of routine network packet headers. Attacks are then detected by examining how the relationships among attack packets may lie outside of the previously defined "self." Furthermore, this research presents a graphical, user-interactive means of network packet inspection to assist in traffic analysis of suspected intrusions. The visualization techniques demonstrated here provide a valuable tool to help the network analyst discriminate between true network attacks and false positives, often a time-intensive and laborious process.
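The relational-schemata idea above can be sketched directly: "self" is the set of (field, field, relation) triples observed in routine traffic, and a packet whose field relationships fall outside that set is flagged. The header field names and training data below are hypothetical, and real sensors would of course operate over far richer traffic.

```python
# Sketch of relational schemata: learn equal / less-than /
# greater-than relations between header fields from normal traffic,
# then flag packets exhibiting unseen relations.
from itertools import combinations

def relation(a, b):
    return "eq" if a == b else ("lt" if a < b else "gt")

def learn_self(packets):
    """Collect every observed (field_a, field_b, relation) triple."""
    self_set = set()
    for pkt in packets:
        for fa, fb in combinations(sorted(pkt), 2):
            self_set.add((fa, fb, relation(pkt[fa], pkt[fb])))
    return self_set

def is_anomalous(pkt, self_set):
    return any((fa, fb, relation(pkt[fa], pkt[fb])) not in self_set
               for fa, fb in combinations(sorted(pkt), 2))

normal = [  # hypothetical routine traffic
    {"ip_len": 60, "tcp_window": 8192, "ttl": 64},
    {"ip_len": 1500, "tcp_window": 65535, "ttl": 64},
]
self_set = learn_self(normal)

# Routine packet: its field relationships match those seen in training.
assert not is_anomalous({"ip_len": 100, "tcp_window": 4096, "ttl": 64}, self_set)
# Crafted packet: ip_len > tcp_window was never observed, so it is flagged.
assert is_anomalous({"ip_len": 40, "tcp_window": 10, "ttl": 255}, self_set)
```

Because "self" is defined over relationships rather than literal values, the same schema can cover packets never seen individually, which is what lets the sensor generalize across multiple-packet attacks.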
Defensive Cyber Battle Damage Assessment Through Attack Methodology Modeling
Due to the growing sophistication of advanced persistent cyber threats, it is necessary to understand and accurately assess cyber attack damage to digital assets. This thesis proposes a Defensive Cyber Battle Damage Assessment (DCBDA) process which utilizes the comprehensive understanding of all possible cyber attack methodologies captured in a Cyber Attack Methodology Exhaustive List (CAMEL). This research proposes CAMEL to provide detailed knowledge of cyber attack actions, methods, capabilities, forensic evidence, and evidence collection methods. This product is modeled as an attack tree called the Cyber Attack Methodology Attack Tree (CAMAT). The proposed DCBDA process uses CAMAT to analyze potential attack scenarios used by an attacker. These scenarios are used to identify the associated digital forensic methods in CAMEL to correctly collect and analyze the damage from a cyber attack. Experimental results show the proposed DCBDA process can be successfully applied to cyber attack scenarios to correctly assess the extent, method, and damage of a cyber attack.
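An attack tree of the kind CAMAT describes can be modeled minimally: OR nodes are alternative attacker choices, AND nodes are required steps, and each complete root-to-leaf combination is one attack scenario to assess. The tree contents below are hypothetical examples, not entries from CAMEL.

```python
# Minimal attack-tree sketch: enumerate every complete attack
# scenario (a list of leaf actions) from an AND/OR tree.

def paths(node):
    """Return each full scenario as an ordered list of leaf names."""
    if "children" not in node:
        return [[node["name"]]]
    child_paths = [paths(c) for c in node["children"]]
    if node["type"] == "OR":
        # Any one child suffices: scenarios are the union.
        return [p for cp in child_paths for p in cp]
    # AND: every child is required, so take one path per combination.
    combos = [[]]
    for cp in child_paths:
        combos = [a + b for a in combos for b in cp]
    return combos

tree = {  # hypothetical fragment of an attack methodology tree
    "name": "exfiltrate data", "type": "AND", "children": [
        {"name": "gain access", "type": "OR", "children": [
            {"name": "spear phishing"},
            {"name": "exploit VPN flaw"},
        ]},
        {"name": "copy files out"},
    ],
}

assert paths(tree) == [
    ["spear phishing", "copy files out"],
    ["exploit VPN flaw", "copy files out"],
]
```

In a DCBDA-style workflow, each enumerated scenario would then be mapped back to the forensic evidence and collection methods associated with its leaf actions.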
USCYBERCOM
Even though the Department of Defense has named cyberspace as the newest domain of warfare, the United States is not adequately organized to conduct cyber war. United States Strategic Command (USSTRATCOM) is the functional combatant command responsible for cyberspace but suffers from numerous problems that prevent it from properly planning, coordinating, and conducting cyberspace operations. Among the problems facing USSTRATCOM are insufficient manning, an overly diverse mission set, and the recent failures within America's nuclear enterprise. To overcome USSTRATCOM's problems and to give the cyber domain the prominence needed to properly protect the United States, a new functional combatant command for cyberspace must be established. This command, United States Cyberspace Command (USCYBERCOM), should be given responsibility for conducting worldwide cyber attack, defense, and intelligence. USCYBERCOM should also serve as a supporting command to the geographic combatant commanders and must establish an in-theater headquarters presence similar to that of the land, air, maritime, and special operations forces.
Establishing the Human Firewall
Hackers frequently use social engineering attacks to gain a foothold into a target network. This type of attack is a tremendous challenge to defend against, as the weakness lies in the human users, not in the technology. Thus far, methods for dealing with this threat have included establishing better security policies and educating users on the threat that exists. Existing techniques are not working, as evidenced by the fact that auditing agencies consider it a given that they will be able to gain access via social engineering. The purpose of this research is to propose a better method of reducing an individual's vulnerability to social engineering attacks.
Course Curriculum Development for the Future Cyberwarrior
Cyberspace is one of the latest buzzwords to gain widespread fame and acceptance throughout the world. One can hear the term used by everyone from heads of state to elementary school children delving into computers for the first time. Cyberspace has generated great enthusiasm over the opportunities and possibilities for furthering mankind's knowledge and communication, as well as for creating more convenient methods of accomplishing mundane or tedious tasks.
Routing of Time-Sensitive Data in Mobile Ad Hoc Networks
Mobile networks take the communication concept one step further than wireless networks: all nodes in the network are assumed to be mobile. These networks are also called mobile ad hoc networks, due to their mobility and random configurations. Ad hoc networking is a relatively new concept; consequently, much research is in progress at each level of the network stack of ad hoc networks. This research focuses on the routing of time-sensitive data in ad hoc networks. A routing protocol named Ad hoc On-Demand Distance Vector (AODV), which was developed by the Internet Engineering Task Force (IETF) for ad hoc networks, has been studied. Taking this protocol as a point of departure, a new routing protocol named the Real-Time Routing Protocol (RTRP) was developed with the characteristics of time-sensitive data in mind. These two routing protocols were modeled using OPNET, a discrete-event network simulation tool, and simulations were run to compare their performance.
Accelerating Malware Detection via a Graphics Processing Unit
Real-time malware analysis requires scanning large amounts of stored data for suspicious files. This is a time-consuming process that requires a large amount of processing power, often affecting other applications running on a personal computer. This research investigates the viability of using Graphics Processing Units (GPUs), present in many personal computers, to take on the workload normally processed by the standard Central Processing Unit (CPU). Three experiments are conducted using an industry-standard GPU, the NVIDIA GeForce 9500 GT card. Experimental results show that a GPU can calculate an MD5 signature hash and scan a database of malicious signatures 82% faster than a CPU for files between 0 and 96 kB. If the file size is increased to 97-192 kB, the GPU is 85% faster than the CPU. This demonstrates that the GPU can provide a significant performance increase over a CPU. These results could help achieve faster anti-malware products, faster network intrusion detection system response times, and faster firewall applications.
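The signature-scanning step being accelerated can be stated compactly: hash each file and look its digest up in a database of known-bad MD5 signatures. The sketch below is a plain CPU reference for that step (the thesis offloads the hashing to the GPU); file names, contents, and signatures are invented.

```python
# CPU reference for MD5 signature scanning: flag any file whose
# digest appears in a set of known-malicious signatures.
import hashlib

def md5_digest(data: bytes) -> str:
    return hashlib.md5(data).hexdigest()

def scan(files, bad_digests):
    """Return the names of files whose MD5 matches a known signature."""
    return [name for name, data in files.items()
            if md5_digest(data) in bad_digests]

malware_body = b"stand-in malicious payload bytes"
files = {
    "report.doc": b"quarterly numbers",   # benign
    "invoice.exe": malware_body,          # known bad
}
bad_digests = {md5_digest(malware_body)}  # the signature database

assert scan(files, bad_digests) == ["invoice.exe"]
```

Because each file's hash-and-lookup is independent of every other file's, the workload parallelizes naturally across the hundreds of GPU cores, which is the property the thesis exploits.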
Geolocation of a Node on a Local Area Network
Geolocation is the process of identifying a node's physical location using only its Internet Protocol (IP) address. Locating a node on a LAN poses particular challenges due to the small scale of the problem and the increased significance of queuing delay. This study builds upon existing research in the area of geolocation and develops a heuristic tailored to the difficulties inherent in LANs, called the LAN Time to Location Heuristic (LTTLH). LTTLH uses several polling nodes to measure latencies to end nodes at known locations within the LAN. The Euclidean distance algorithm is used to compare these results with the latency of a target in order to determine the target's approximate location. Using only these latency measurements, LTTLH is able to determine which switch a target is connected to 95% of the time. Within certain constraints, this method is able to identify the target location 78% of the time. However, LANs are not always configured within the constraints necessary to geolocate a node. In order for LTTLH to be effective, a network must be configured consistently, with similar-length cable runs available to nodes located in the same area. For best results, the network should also be partitioned, grouping nodes of similar proximity behind one switch.
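The matching step described above can be sketched simply: each known location is represented by a vector of latencies measured from the polling nodes, and the target is assigned the location whose vector is nearest in Euclidean distance. The switch names and latency values (in milliseconds) below are invented for illustration.

```python
# Sketch of LTTLH's nearest-vector matching: compare the target's
# latency vector against vectors measured to known locations.
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def locate(target_latencies, known_locations):
    """Return the known location with the nearest latency vector."""
    return min(known_locations,
               key=lambda loc: euclidean(known_locations[loc],
                                         target_latencies))

known_locations = {            # latencies from three polling nodes (ms)
    "switch-A": [0.21, 0.95, 1.40],
    "switch-B": [0.90, 0.24, 1.10],
    "switch-C": [1.45, 1.12, 0.30],
}

# The target's measured latencies most resemble those behind switch-B.
assert locate([0.85, 0.30, 1.05], known_locations) == "switch-B"
```

This also makes the stated constraints concrete: if cable runs behind different switches produce indistinguishable latency vectors, the nearest-vector lookup cannot separate them.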