Conceptual Design and Analysis of Service Oriented Architecture for Command And Control of Space Assets
The mission-unique model that has dominated the DoD satellite Command and Control (C2) community is costly and inefficient. It requires repeatedly reinventing established common C2 components for each program, unnecessarily inflating budgets and delivery schedules. Effective use of standards is rare, and proprietary, non-open solutions are commonplace. IT professionals have trumpeted Service Oriented Architectures (SOAs) as the solution for large enterprises in which multiple functionally redundant but incompatible information systems create large recurring development, test, maintenance, and technology-refresh costs. This thesis describes the current state of Service Oriented Architectures as they relate to satellite operations and presents a functional analysis used to classify a set of generic C2 services. By assessing the candidate services' suitability through a SWOT (Strengths, Weaknesses, Opportunities, and Threats) analysis, several C2 functionalities are shown to be more ready than others to be offered as services in the near term. Lastly, key enablers are identified, pinpointing the steps necessary for a full transition from the paradigm of costly mission-unique implementations to the common, interoperable, and reusable space C2 SOA called for by DoD senior leaders.
Cryptanalysis of Pseudorandom Number Generators in Wireless Sensor Networks
This work presents a brute-force attack on an elliptic curve cryptosystem implemented on UC Berkeley's TinyOS operating system for wireless sensor networks. The attack exploits the short period of the pseudorandom number generator (PRNG) used by the cryptosystem to generate private keys. The attack assumes a laptop is listening promiscuously to network traffic for key messages and requires only the sensor node's public key and network address to discover the private key. Experimental results show that roughly 50% of the address space leads to a private key compromise in 25 minutes on average.
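To make the attack pattern concrete, the sketch below brute-forces a deliberately toy cryptosystem whose private keys are chained outputs of a short-period PRNG. The 16-bit LCG, the group parameters, and all names are illustrative assumptions standing in for the TinyOS PRNG and the actual elliptic curve arithmetic; only the structure of the search mirrors the attack described above.

# Toy stand-in: discrete log in a multiplicative group replaces real ECC,
# and a hypothetical 16-bit LCG replaces the sensor's PRNG. Because the seed
# space is only 2^16, exhaustively deriving every possible key is cheap.
P = 2**61 - 1          # toy prime modulus (not a real ECC curve)
G = 5                  # toy generator

def lcg16(seed):
    """Hypothetical 16-bit linear congruential PRNG with a short period."""
    return (25173 * seed + 13849) % 65536

def private_key_from(seed, rounds=3):
    """Derive a key the way a naive node might: chain a few PRNG outputs."""
    key = 0
    for _ in range(rounds):
        seed = lcg16(seed)
        key = (key << 16) | seed
    return key

def brute_force(public_key):
    """Walk the whole 16-bit seed space until a derived key reproduces the
    observed public key. Feasible precisely because the PRNG period is short."""
    for seed in range(65536):
        priv = private_key_from(seed)
        if pow(G, priv, P) == public_key:
            return seed, priv
    return None

victim_seed = 4242
victim_pub = pow(G, private_key_from(victim_seed), P)
print(brute_force(victim_pub))   # recovers seed 4242 and the private key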
Speech Recognition Using the Mellin Transform
The purpose of this research was to improve performance in speech recognition. Specifically, a new approach was investigated: applying an integral transform known as the Mellin transform (MT) to the output of an auditory model to improve the recognition rate of phonemes through the scale-invariance property of the Mellin transform. Scale invariance means that as a time-domain signal is subjected to dilations, the distribution of the signal in the MT domain remains unaffected. An auditory model was used to transform speech waveforms into images representing how the brain "sees" a sound. The MT was applied and features were extracted. The features were used in a speech recognizer based on Hidden Markov Models. The results from speech recognition experiments showed an increase in recognition rates for some phonemes compared to traditional methods.
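The scale-invariance property is easy to verify numerically. The sketch below, a minimal illustration rather than the thesis's feature pipeline, evaluates the Mellin transform magnitude of a signal and of a dilated copy along the line s = 1/2 + jw and shows the magnitudes agree up to the constant factor a^(-1/2); the test signal and parameters are arbitrary choices.

import numpy as np

# Evaluate M{f}(s) = integral_0^inf f(t) t^(s-1) dt on the line s = 1/2 + jw
# by substituting u = ln t, which turns it into integral f(e^u) e^(su) du.
u = np.linspace(-12.0, 6.0, 40000)
du = u[1] - u[0]

def mellin_mag(f, omega):
    s = 0.5 + 1j * omega
    return abs(np.sum(f(np.exp(u)) * np.exp(s * u)) * du)

f = lambda t: np.exp(-t)       # test signal
a = 2.0                         # dilation factor
fa = lambda t: f(a * t)        # time-dilated copy of the same signal

for omega in (0.5, 1.0, 2.0):
    ratio = mellin_mag(fa, omega) * np.sqrt(a) / mellin_mag(f, omega)
    print(omega, ratio)         # ~1.0: magnitudes differ only by a^(-1/2)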
A Study of Quality of Service Communication for High-Speed Packet-Switching Computer Sub-Networks
In this thesis, we analyze various factors that affect quality of service (QoS) communication in high-speed, packet-switching sub-networks. We hypothesize that sub-network-wide bandwidth reservation and guaranteed CPU processing power at endpoint systems for handling data traffic are indispensable to achieving hard end-to-end quality of service. Different bandwidth reservation strategies, traffic characterization schemes, and scheduling algorithms affect network resources and CPU usage, as well as the extent to which QoS can be achieved. To analyze those factors, we design and implement a communication layer. Our experimental analysis supports our research hypothesis. The Resource ReSerVation Protocol (RSVP) is designed to realize resource reservation. Our analysis of RSVP shows that RSVP alone is insufficient to provide hard end-to-end quality of service in a high-speed sub-network. Analysis of the IEEE 802.1p protocol also supports the research hypothesis.
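One standard traffic-characterization scheme of the kind weighed above is the token bucket, which RSVP-style reservations use to bound a flow to at most rate*T + burst bytes over any interval T. The sketch below is a minimal illustration with invented parameters, not the thesis's communication layer.

import time  # not used below; a real shaper would consult a clock

class TokenBucket:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps          # sustained byte rate the flow declared
        self.capacity = burst_bytes   # largest instantaneous burst allowed
        self.tokens = burst_bytes
        self.last = 0.0

    def allow(self, now, packet_bytes):
        """Refill tokens for elapsed time, then admit the packet only if
        enough tokens remain; a shaper would queue instead of rejecting."""
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True
        return False

tb = TokenBucket(rate_bps=125_000, burst_bytes=3_000)  # 1 Mbit/s, 3 KB burst
for t in (0.000, 0.001, 0.002, 0.003):
    print(t, tb.allow(t, 1_500))  # third and fourth back-to-back packets exceed the envelope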
Implementation and Optimization of the Advanced Encryption Standard Algorithm on an 8-Bit Field Programmable Gate Array Hardware Platform
The contribution of this research is threefold. The first is a method of converting the area occupied by a circuit implemented on a Field Programmable Gate Array (FPGA) to an equivalent gate count (memory included) as a measure of total area. This allows direct comparison between two FPGA implementations independent of the manufacturer or chip family. The second contribution improves the performance of the Advanced Encryption Standard (AES) on an 8-bit computing platform. This research develops an AES design that occupies less than three quarters of the area reported by the smallest design in the current literature and significantly increases area efficiency. The third contribution is an examination of how various designs for the critical AES SubBytes and MixColumns transformations interact and affect the overall performance of AES. The transformations responsible for the largest variance in performance are identified, and the effect is measured in terms of throughput, area efficiency, and area occupied.
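The first contribution amounts to a normalization: fold logic and on-chip memory into one equivalent-gate figure so designs on different FPGA families become directly comparable. The per-resource gate weights below are illustrative placeholders, not the thesis's calibrated values.

# Assumed conversion weights -- stand-ins, not measured equivalences.
GATES_PER_LUT = 6        # a 4-input LUT modeled as ~6 NAND-gate equivalents
GATES_PER_FF = 8         # assumed flip-flop cost
GATES_PER_MEM_BIT = 1.5  # assumed cost of one block-RAM bit

def equivalent_gates(luts, flipflops, bram_bits):
    """Total gate equivalents, memory included, for cross-family comparison."""
    return (luts * GATES_PER_LUT
            + flipflops * GATES_PER_FF
            + bram_bits * GATES_PER_MEM_BIT)

# Two hypothetical AES designs on different chips, now on one scale:
print(equivalent_gates(luts=522, flipflops=131, bram_bits=2048))
print(equivalent_gates(luts=310, flipflops=90, bram_bits=8192))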
In Pursuit of an Aptitude Test for Potential Cyberspace Warriors
The Air Force has officially assumed the cyberspace mission. In order to perform the mission to the best extent possible, it is important to employ personnel with the necessary skill sets and motivation to work in this type of environment. The first step in employing the right people is to screen all possible candidates and select those with an aptitude for acquiring the skill sets and the motivation to perform this work. This thesis is an attempt to determine the necessary skills and motivations to perform this work and recommend a screening process to select the candidates with the highest probability for success. Since this mission is new, determining what skills and motivations are necessary is difficult. To assist in determining the skills and motivations for cyber warriors, this thesis considers the skills and motivations of computer hackers. If the skills and motivations of successful hackers can be identified, those skills and motivations can be used as a tool for developing an aptitude test to be used as a screening device. Aptitude tests have proven to be a valuable resource to the military and academia. A blueprint for an aptitude test is provided based on the findings of the hacker skills and motivations.
Executable Model Development From Architectural Description With Application to the Time Sensitive Target Problem
As the Department of Defense (DoD) moves to a capability-based approach for requirements definition and system development, it has become necessary to conceptualize and evaluate our needs at the System of Systems (SoS) level. Desired capabilities are often achievable only through seamless integration of many different systems. Because classical systems engineering approaches are not suited to effectively handle the complexity of SoS-level concepts, an architecture-driven approach has emerged as a way of defining and evaluating these new concepts. While the use of architectures for documenting and tracking interfaces and interoperability concerns is generally understood, architectural analysis and the use of executable models for evaluating architectures remain an open area of research. With this purpose in mind, this thesis applies architecture-based analysis to the proposed Time Sensitive Effect Operation (TSEO2012) scenario. This scenario becomes the baseline for architectural analysis, and an excursion to this baseline adds a Weapon Borne Battle Damage Assessment (WBBDA) capability. By creating an executable model, the two architectural concepts can be compared against each other. The addition of a WBBDA capability to the TSEO architecture improves the efficiency of time-sensitive target operations by shortening the decision cycle for target re-strike. While this effort was successful in obtaining an executable model directly from the architecture description, it highlighted the importance of having sufficient and correct information contained in the architecture products.
The Need for Censorship on the Internet Exists at the Air Command and Staff College
The researcher will survey the utilization characteristics of both faculty and students to support or refute the need for Internet censorship at ACSC. Internet censorship will be approached from the perspective of the Air University commander's intent, not from a pornography or indecency perspective.
Scaling Ant Colony Optimization With Hierarchical Reinforcement Learning Partitioning
This research merges the hierarchical reinforcement learning (HRL) domain and the ant colony optimization (ACO) domain. The merger produces a HRL ACO algorithm capable of generating solutions for both domains. This research also provides two specific implementations of the new algorithm: the first a modification to Dietterich's MAXQ-Q HRL algorithm, the second a hierarchical ACO algorithm. These implementations generate faster results, with little to no significant change in the quality of solutions for the tested problem domains. The application of ACO to the MAXQ-Q algorithm replaces the reinforcement learning, Q-learning and SARSA, with the modified ant colony optimization method, Ant-Q. This algorithm, MAXQ-AntQ, converges to solutions not significantly different from MAXQ-Q in 88% of the time. This research then transfers HRL techniques to the ACO domain and traveling salesman problem (TSP). To apply HRL to ACO, a hierarchy must be created for the TSP. A data clustering algorithm creates these subtasks, with an ACO algorithm to solve the individual and complete problems. This research tests two clustering algorithms, k-means and G-means.
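The hierarchical decomposition is easy to illustrate on the TSP: cluster the cities (k-means here, as in the thesis's first clustering variant), solve each cluster, then stitch clusters together. For brevity a greedy nearest-neighbor heuristic stands in for the per-cluster ACO solver; everything else in the sketch is an illustrative assumption.

import random, math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def kmeans(points, k, iters=20):
    """Plain k-means; each resulting cluster becomes a TSP subtask."""
    centers = random.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda i: dist(p, centers[i]))].append(p)
        centers = [tuple(sum(c) / len(g) for c in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return [g for g in groups if g]

def nearest_neighbor_tour(points, start):
    """Stand-in for the per-cluster ACO solver."""
    tour, rest = [start], set(points) - {start}
    while rest:
        nxt = min(rest, key=lambda p: dist(tour[-1], p))
        tour.append(nxt)
        rest.remove(nxt)
    return tour

random.seed(1)
cities = [(random.random() * 100, random.random() * 100) for _ in range(60)]
tour = []
for cluster in kmeans(cities, k=4):                 # subtasks from clustering
    start = cluster[0] if not tour else min(cluster, key=lambda p: dist(tour[-1], p))
    tour += nearest_neighbor_tour(cluster, start)   # solve each subtask
print(sum(dist(tour[i], tour[i + 1]) for i in range(len(tour) - 1)))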
Forensic Analysis of Digital Image Tampering
The use of digital photography has increased over the past few years, a trend which opens the door for new and creative ways to forge images. The manipulation of images through forgery influences the perception an observer has of the depicted scene, potentially resulting in ill consequences if created with malicious intentions. This poses a need to verify the authenticity of images originating from unknown sources in absence of any prior digital watermarking or authentication technique. This research explores the holes left by existing research; specifically, the ability to detect image forgeries created using multiple image sources and specialized methods tailored to the popular JPEG image format. In an effort to meet these goals, this thesis presents four methods to detect image tampering based on fundamental image attributes common to any forgery. These include discrepancies in 1) lighting and 2) brightness levels, 3) underlying edge inconsistencies, and 4) anomalies in JPEG compression blocks. Overall, these methods proved encouraging in detecting image forgeries with an observed accuracy of 60% in a completely blind experiment containing a mixture of 15 authentic and forged images.
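The fourth cue rests on a simple observation: JPEG compression lays down an 8x8 blocking grid, so pixel differences across block boundaries run larger than differences inside blocks, and a spliced region re-saved on a shifted grid shows a weaker or displaced signature. The toy below measures a boundary-to-interior ratio on a synthetic image; the thresholds, windowing, and block-mimicking transform are illustrative assumptions, not the thesis's method.

import numpy as np

def blockiness(gray):
    """Mean |column difference| across 8-pixel grid boundaries divided by the
    mean |column difference| inside blocks; near 1.0 for untouched images."""
    d = np.abs(np.diff(gray, axis=1))
    cols = np.arange(d.shape[1])
    return d[:, cols % 8 == 7].mean() / d[:, cols % 8 != 7].mean()

rng = np.random.default_rng(0)
ramp = np.tile(np.arange(64, dtype=float) * 2.0, (64, 1))  # smooth scene
img = ramp + rng.normal(0, 1.0, (64, 64))                  # plus sensor noise

# Mimic coarse JPEG quantization: keep each 8x8 block's mean, damp the rest.
jpeg_like = np.empty_like(img)
for r in range(0, 64, 8):
    for c in range(0, 64, 8):
        block = img[r:r+8, c:c+8]
        jpeg_like[r:r+8, c:c+8] = block.mean() + 0.2 * (block - block.mean())

print(blockiness(img))        # ~1: no blocking grid in the clean image
print(blockiness(jpeg_like))  # >>1: the grid betrays compression history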
Coalition Formation Under Uncertainty
Many multiagent systems require allocation of agents to tasks in order to ensure successful task execution. Most systems that perform this allocation assume that the quantity of agents needed for a task is known beforehand. Coalition formation approaches relax this assumption, allowing multiple agents to be dynamically assigned. Unfortunately, many current approaches to coalition formation lack provisions for uncertainty. This prevents application of coalition formation techniques to complex domains, such as real-world robotic systems and agent domains where full state knowledge is not available. Those that do handle uncertainty have no ability to handle dynamic addition or removal of agents from the collective and they constrain the environment to limit the sources of uncertainty.
Performance Analysis of Protocol Independent Multicasting-Dense Mode in Low Earth Orbit Satellite Networks
This research explored the implementation of Protocol Independent Multicasting - Dense Mode (PIM-DM) in a LEO satellite constellation. PIM-DM is a terrestrial protocol for distributing traffic efficiently between subscriber nodes by combining data streams into a tree-based structure, spreading from the root of the tree to the branches. Using this structure, a minimum number of connections is required to transfer data, decreasing the load on intermediate satellite routers. The PIM-DM protocol was developed for terrestrial systems, and this research implemented an adaptation of it in a satellite system. This research examined PIM-DM performance characteristics, which were compared to earlier work on the On-Demand Multicast Routing Protocol (ODMRP) and the Distance Vector Multicast Routing Protocol (DVMRP), all in a LEO satellite network environment. Experimental results show that PIM-DM is extremely scalable and has equivalent performance across diverse workloads.
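The core PIM-DM behavior being ported is flood-and-prune: traffic is flooded down a source-rooted tree, and routers with no interested receivers downstream prune themselves off, leaving a minimal forwarding tree. The sketch below illustrates one pruning pass; the topology and subscriber sets are invented for illustration.

def flood_and_prune(tree, node, subscribers):
    """Return the pruned forwarding tree rooted at `node`.
    `tree` maps each node to its children (loop-free, as after an RPF check)."""
    kept_children = {}
    for child in tree.get(node, []):
        sub = flood_and_prune(tree, child, subscribers)
        if sub is not None:              # child kept: subscribers downstream
            kept_children[child] = sub
    if kept_children or node in subscribers:
        return kept_children
    return None                          # no interest below: send a prune

# Source-rooted distribution tree over five satellite routers:
tree = {"src": ["s1", "s2"], "s1": ["s3", "s4"], "s2": ["s5"]}
print(flood_and_prune(tree, "src", subscribers={"s3", "s5"}))
# {'s1': {'s3': {}}, 's2': {'s5': {}}}  -- the s4 branch was pruned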
Barriers to Electronic Records Management
Corporate and government organizations can use electronic records as an important strategic resource, if the records are managed properly. In addition to meeting legal requirements, electronic records can play a vital role in the management and operation of an organization's activities. Corporate America is facing challenges in managing electronic records, and so too is the U.S. Air Force (USAF). The deployed environment is particularly problematic for electronic records management (ERM). This research, thus, investigates ERM in the deployed environment to identify and characterize the barriers faced by USAF personnel who deployed to locations supporting Operations Enduring Freedom and Iraqi Freedom. This investigation was conducted through a qualitative approach, drawing much of its rich data from in-depth interviews. An exploratory case study was designed using a socio-technical framework and inductive analysis was used to proceed from particular facts to general conclusions. The analysis revealed 15 barriers to ERM. All 15 barriers were determined to exist throughout the entire records lifecycle and were categorized based on common overarching themes. This research reveals some unique barriers contained within the context of a deployed location, while also showing that the barriers are similar to known ERM challenges.
Historical Analysis of the Awareness and Key Issues of the Insider Threat to Information Systems
Since information systems have become smaller, faster, cheaper, and more interconnected, many organizations have become more dependent on them for daily operations and for maintaining critical data. This reliance on information systems is not without risk of attack. Because these systems are relied upon so heavily, the impact of such an attack also increases, making the protection of these systems essential. Information system security often focuses on the risk of attack and damage from the outsider. High-profile issues such as hackers, viruses, and denial-of-service attacks are generally emphasized in the literature and other media outlets. A neglected area of computer security that is just as prevalent and potentially more damaging is the threat from a trusted insider. An organizational insider who misuses a system, whether intentionally or unintentionally, is often in a position to know where and how to access important information. How do we become aware of such activities and protect against this threat? This research was a historical analysis of the insider threat to information systems, undertaken to develop an understanding and framework of the topic.
Evaluation of the Effects of Predicted Associativity on the Reliability and Performance of Mobile Ad Hoc Networks
Routing in Mobile Ad Hoc Networks (MANETs) presents unique challenges not encountered in conventional networks. Limitations in bandwidth and power as well as a dynamic network topology must all be addressed in MANET routing protocols. Predicted Associativity Routing (PAR) is a custom routing protocol designed to address reliability in MANETs. By collecting associativity information on links, PAR calculates the expected lifetime of neighboring links. During route discovery, nodes use this expected lifetime and their neighbors' connectivity to determine a residual lifetime. Routes are selected from those with the longest remaining lifetimes. Thus, PAR attempts to extend the duration routes are active, thereby improving their reliability. PAR is compared to Ad Hoc On-Demand Distance Vector Routing (AODV) using a variety of reliability and performance metrics. Despite its focus on reliability, PAR does not provide more reliable routes. Rather, AODV produces routes which last as much as three times longer than PAR. However, PAR, even with shorter-lasting routes, delivers more data and has greater throughput. Both protocols are affected most by the node density of the networks. Node density accounts for 48.62% of the variation in route lifetime in AODV, and 70.66% of the variation in PAR. As node density increases from 25 to 75 nodes, route lifetimes are halved, while throughput increases drastically along with the increased routing overhead. Furthermore, PAR increases end-to-end delay, while AODV displays better efficiency.
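A hedged sketch of the route-selection idea attributed to PAR: estimate each link's remaining lifetime from how long it has already been stable (its associativity), score a route by its weakest link, and pick the route likely to survive longest. The lifetime model and all numbers below are assumptions for illustration; the thesis derives its own estimator.

EXPECTED_LINK_LIFETIME = 120.0   # assumed mean total link lifetime, seconds

def residual_lifetime(associativity_age):
    """Naive estimate: expected total lifetime minus observed age, floored."""
    return max(EXPECTED_LINK_LIFETIME - associativity_age, 0.0)

def route_lifetime(link_ages):
    """A route lives only as long as its shortest-lived link."""
    return min(residual_lifetime(age) for age in link_ages)

# Candidate routes to a destination, each a list of per-link ages (seconds):
routes = {"A": [10, 95, 30], "B": [40, 50, 45], "C": [5, 5, 110]}
best = max(routes, key=lambda r: route_lifetime(routes[r]))
print(best, route_lifetime(routes[best]))   # route B: weakest link lasts ~70 s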
Knowledge Based Operations
With the right knowledge at the right place at the right time, an effects-based approach to the employment of airpower can truly be implemented. Sensor fusion, increased bandwidth, reach-back capability, and a common operating picture have allowed the Combined Joint Force Air Component Commander (CJFACC) and the Combined Air Operations Center (CAOC) to be more connected, with more real-time information, than ever. Emerging today are concepts for managing not only information but its interrelated meaning, in methods to fuel action. Knowledge Management concepts seek to act on information by creating knowledge where they can and by interconnecting those who possess knowledge where they cannot. Knowledge Based Operations (KBO) comprises the procedures, tools, and organization used to implement Knowledge Management concepts. What are the implications of these concepts for the processes of a CAOC?
Load Balancing Using Time Series Analysis for Soft Real Time Systems With Statistically Periodic Loads
This thesis provides design and analysis of techniques for global load balancing on ensemble architectures running soft-real-time object-oriented applications with statistically periodic loads. It focuses on estimating the instantaneous average load over all the processing elements. The major contribution is the use of explicit stochastic process models for both the loading and the averaging itself. These models are exploited via statistical time-series analysis and Bayesian inference to provide improved average load estimates, and thus to facilitate global load balancing. This thesis explains the distributed algorithms used and provides some optimality results. It also describes the algorithms' implementation and gives performance results from simulation. These results show that our techniques allow more accurate estimation of the global system loading, resulting in fewer object migrations than local methods. Our method is shown to provide superior performance, relative not only to static load-balancing schemes but also to many adaptive methods.
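As a minimal stand-in for the stochastic models above, suppose the true average load follows a random walk observed through noisy per-node samples; a scalar Kalman filter then gives a Bayesian estimate of the instantaneous global mean load. The process and noise variances below are invented for illustration.

import random

def kalman_track(samples, q=0.05, r=4.0):
    """q: assumed process variance of load drift; r: sample noise variance."""
    est, var = samples[0], r
    history = []
    for z in samples[1:]:
        var += q                    # predict: the load may have drifted
        k = var / (var + r)         # Kalman gain
        est += k * (z - est)        # correct toward the new sample
        var *= (1 - k)
        history.append(est)
    return history

random.seed(7)
true_load, samples = 10.0, []
for step in range(200):
    true_load += random.gauss(0, 0.2)                 # slowly drifting load
    samples.append(true_load + random.gauss(0, 2.0))  # noisy node report
print(kalman_track(samples)[-1], true_load)           # estimate tracks truth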
An Analysis of Information Asset Valuation Quantification Methodology for Application With Cyber Information Mission Impact Assessment
The purpose of this research is to develop a standardized Information Asset Valuation (IAV) methodology. The IAV methodology proposes that accurate valuation for an Information Asset (InfoA) is the convergence of information tangible, intangible, and flow attributes to form a functional entity that enhances mission capability. The IAV model attempts to quantify an InfoA to a single value through the summation of weighted criteria. Standardizing the InfoA value criteria will enable decision makers to comparatively analyze dissimilar InfoAs across the tactical, operational, and strategic domains. This research develops the IAV methodology through a review of existing military and non-military valuation methodologies. IAV provides the Air Force (AF) and Department of Defense (DoD) with a standardized methodology that may be utilized enterprise wide when conducting risk and damage assessment and risk management. The IAV methodology is one of the key functions necessary for the Cyber Incident Mission Impact Assessment (CIMIA) program to operationalize a scalable, semi-automated Decision Support System (DSS) tool. The CIMIA DSS intends to provide decision makers with near real-time cyber awareness prior to, during, and post cyber incident situations through documentation of relationships, interdependencies, and criticalities among information assets, the communications infrastructure, and the operations mission impact.
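A toy version of the scoring the IAV model describes: weight standardized tangible, intangible, and flow criteria and sum them into a single value so dissimilar information assets can be ranked. The criteria names, weights, and assets below are invented for illustration; the thesis develops the actual criteria set.

WEIGHTS = {"tangible": 0.4, "intangible": 0.25, "flow": 0.35}  # sum to 1.0

def iav_score(asset):
    """Weighted sum of 0-10 criteria scores -> one comparable asset value."""
    return sum(WEIGHTS[c] * asset[c] for c in WEIGHTS)

assets = {
    "ATO feed":        {"tangible": 9, "intangible": 7, "flow": 10},
    "weather archive": {"tangible": 6, "intangible": 3, "flow": 2},
}
for name, criteria in sorted(assets.items(), key=lambda kv: -iav_score(kv[1])):
    print(name, round(iav_score(criteria), 2))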
Tool-Based Integration and Code Generation of Object Models
Today many organizations are faced with multiple large legacy data stores in different formats and the need to use data from each data store with tools based on the other data stores' formats. This thesis presents a tool-based methodology for integrating object-oriented data models with automatic generation of code. The generated code defines a global data format, generates views of global data in the individual integrated data formats, and translates data from the individual formats to the global format and from the global format to the individual formats. This allows legacy data to be translated into the global format, and all future data to be entered in the global format. Once in the global format, the data may be exported to any of the integrated formats for use with the appropriate tools. The methodology is based on using formal methods and knowledge-based engineering techniques with a transformation system and object-oriented views. The methodology is demonstrated by a sample implementation of the integration tool being used to integrate data formats used by three different sensor-based, engagement-level simulation systems.
Enabling Intrusion Detection in IPSec Protected IPv6 Networks Through Secret-Key Sharing
As Internet Protocol version 6 (IPv6) implementation becomes more widespread, the IP Security (IPSec) features embedded in the next-generation protocol will become more accessible than ever. Though the network-layer encryption provided by IPSec is a boon to data security, its use renders standard network intrusion detection systems (NIDS) useless. The problem of performing intrusion detection on encrypted traffic has been addressed by differing means, with each technique requiring one or more static secret keys to be shared with the NIDS beforehand.
Realtime Color Stereovision Processing
Recent developments in aviation have made micro air vehicles (MAVs) a reality. These featherweight, palm-sized, radio-controlled flying saucers embody the future of air-to-ground combat. No one has ever successfully implemented an autonomous control system for MAVs. Because MAVs are physically small with limited energy supplies, video signals offer superiority over radar for navigational applications. This research takes a step forward in realtime machine vision processing. It investigates techniques for implementing a realtime stereovision processing system using two miniature color cameras. The effects of poor-quality optics are overcome by a robust algorithm, which operates in realtime and achieves frame rates up to 10 fps in ideal conditions. The vision system implements innovative work in the following five areas of vision processing: fast image registration preprocessing, object detection, feature correspondence, distortion-compensated ranging, and multiscale nominal frequency-based object recognition. Results indicate that the system can provide adequate obstacle-avoidance feedback for autonomous vehicle control. However, typical relative position errors are about 10%, too high for surveillance applications. The range of operation is also limited to between 6 and 30 m. The root of this limitation is imprecise feature correspondence: with perfect feature correspondence the range would extend to between 0.5 and 30 m. Stereo camera separation limits the near range, while optical resolution limits the far range. Image frame sizes are 160x120 pixels. Increasing this size will improve far-range characteristics but will also decrease the frame rate.
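The ranging geometry behind those limits is worth a sketch: for a rectified stereo pair, range Z = f*B/d (focal length f in pixels, baseline B, disparity d). Because Z is inversely proportional to d, a one-pixel correspondence error hurts far more at long range, consistent with the finding that correspondence precision bounds the usable span. The focal length and baseline below are assumed values, not the thesis's calibration.

F_PIXELS = 200.0   # assumed focal length, in pixels, for a 160x120 imager
BASELINE = 0.15    # assumed stereo camera separation in meters

def depth_m(disparity_px):
    """Rectified stereo: range is inversely proportional to disparity."""
    return F_PIXELS * BASELINE / disparity_px

for d in (6.0, 3.0, 1.5):
    z = depth_m(d)
    z_off = depth_m(d - 1.0)   # the same match, one pixel of error
    print(f"{d:4.1f} px -> {z:5.1f} m; a 1-px error shifts it to {z_off:5.1f} m")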
Network Visualization Design Using Prefuse Visualization Toolkit
Visualization of network simulation events, or network visualization, is an effective and low-cost method to evaluate the health and status of a network and to analyze network designs, protocols, and network algorithms. This research designed and developed a network event visualization framework using an open source general visualization toolkit. This research achieved three major milestones during the development of this framework: a robust network-simulator trace file parser; multiple network visualization layouts, including user-defined layouts; and precise visualization timing controls with integrated display of network statistics. The toolkit design is readily extensible, allowing developers to easily expand the framework to meet research-specific visualization goals.
An Evaluation of GeoBEST Contingency Beddown Planning Software Using the Technology Acceptance Model
GeoBEST (Base Engineer Survey Toolkit) is a software program built under contract with the USAF. It is designed to simplify the contingency beddown planning process through application of geographic information technology. The purpose of this thesis was to thoroughly evaluate GeoBEST using prospective GeoBEST users in a realistic beddown planning scenario. The Technology Acceptance Model (TAM) was applied, which measures prospective users' perceptions of the technology's usefulness and ease of use and predicts their intentions to use the software in the future. The evaluation also included a qualitative evaluation of specific software features. The test group for this thesis was seventy-one Civil Engineering students attending contingency skills training at the Silver Flag training site, Tyndall AFB, FL. The students were given a one-hour interactive demonstration of GeoBEST, after which they completed a survey. The students were given the option of using the program for preparation of their assigned beddown plan. Some Silver Flag instructors also completed a separate survey. The results from the TAM predict that the students were only slightly likely to use GeoBEST for beddown planning in the future. Throughout the course of the research, several features of GeoBEST were identified that limit the program's effectiveness. Some of these were minor irritants, while others were serious design flaws. Recommendations are made for implementation of GeoBEST and creation of training programs for prospective users.
An Artificial Immune System-Inspired Multiobjective Evolutionary Algorithm With Application to the Detection of Distributed Computer Network Intrusions
Today's predominantly employed signature-based intrusion detection systems are reactive in nature and storage-limited. Their operation depends upon catching an instance of an intrusion or virus after a potentially successful attack, performing post-mortem analysis on that instance, and encoding it into a signature that is stored in its anomaly database. The time required to perform these tasks provides a window of vulnerability to DoD computer systems. Further, because of the current maximum size of an Internet Protocol-based message, the database would have to be able to maintain 256^65535 possible signature combinations. In order to tighten this response cycle within storage constraints, this thesis presents an Artificial Immune System-inspired Multiobjective Evolutionary Algorithm intended to measure the vector of tradeoff solutions among detectors with regard to two independent objectives: best classification fitness and optimal hypervolume size. Modeled in the spirit of the human biological immune system and intended to augment DoD network defense systems, our algorithm generates network traffic detectors that are dispersed throughout the network.
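The immune-system analogy is usually grounded in negative selection: generate random traffic detectors, discard any that match known-benign ("self") traffic, and let the survivors patrol for anomalies. The sketch below uses the textbook r-contiguous-bits matching rule on 12-bit "packets" as a simplification; it illustrates the detector-generation idea, not the thesis's multiobjective algorithm.

import random

R = 4   # a detector matches a string if r contiguous aligned bits agree

def matches(detector, string, r=R):
    return any(detector[i:i+r] == string[i:i+r]
               for i in range(len(string) - r + 1))

def generate_detectors(self_set, n_bits=12, wanted=20, seed=3):
    rng = random.Random(seed)
    detectors = []
    while len(detectors) < wanted:
        cand = "".join(rng.choice("01") for _ in range(n_bits))
        if not any(matches(cand, s) for s in self_set):  # negative selection
            detectors.append(cand)
    return detectors

self_traffic = {"000011110000", "000011111111", "000000001111"}
detectors = generate_detectors(self_traffic)
probe = "101010101010"                              # nonself traffic pattern
print(any(matches(d, probe) for d in detectors))    # likely True: flagged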
U.S. Cyber Strategy Deterrence and Strategic Response
A great deal of thought has been applied to focus government and industrial resources on the important problem of preventing cyber attacks against high-profile infrastructure and economic targets. The cyber attack prevention problem is actually one of risk management and mitigation: it aims to reduce the number, severity, and impact of attacks rather than dreaming of preventing all cyber attacks. As prevention efforts continue, cyber attacks are ongoing and unlikely to stop completely. The pragmatic problem shifts toward appropriate responses. We contend that not enough attention has been devoted to studying how the nation should respond to cyber attacks. Clearly, such a policy rests heavily on knowing the source of an attack, the nature of the attacked infrastructure, and the destructive effect of the attack. Policy makers must also consider how the international community would view such a policy in light of existing international criminal law and the laws of armed conflict. Attack attribution is problematic but can be helped by international cooperation. Thus, the key recommendation is that international norms for cyber crime and war fighting in the cyber domain be established through broadening of existing laws and conventions.
Cyberspace
In the last century, the United States was protected from a direct physical attack by its adversaries due to its geographic isolation. However, today any adversary with sufficient capability can exploit vulnerabilities in the United States' critical network infrastructures using cyber warfare and leverage physical attacks to significantly impact the lives of its citizens and erode their confidence in its ability to protect their way of life. This AY-10 student research paper provides information to assist senior leaders working to prevent or to minimize the effects of future cyber attacks by a nation state or non-state actor against the United States' critical network infrastructures.
An Analysis of Computer Aided Design (CAD) Packages Used at MSFC for the Recent Initiative to Integrate Engineering Activities
This paper analyzes the use of Computer Aided Design (CAD) packages at NASA's Marshall Space Flight Center (MSFC). It examines the effectiveness of recent efforts to standardize CAD practices across MSFC engineering activities. The roles played by management, designers, analysts, and manufacturers in this initiative are assessed. Finally, solutions are presented for better integration of CAD across MSFC in the future.
Defining Our National Cyberspace Boundaries
In February 2009, the Obama Administration commissioned a 60-day review of the United States' cyber security. A near-term action recommended by the 60-day review was to prepare an updated national strategy to secure information and communications infrastructure.
What Senior Leaders Need to Know About Cyberspace
What must senior security leaders know about cyberspace to transform their organizations and make wise decisions? How does the enduring cyberspace process interact with and transform organizations, technology, and people, and, in turn, how do they transform cyberspace itself? To evaluate these questions, this essay establishes the enduring nature of the cyberspace process and compares this relative constant to transformation of organizations and people. Each section discussing these areas provides an assessment of their status as well as identifies key issues for senior security leaders to comprehend now and work to resolve in the future. Specific issues include viewing cyberspace as a new strategic common akin to the sea, comparing effectiveness of existing hierarchies in achieving cybersecurity against networked adversaries, and balancing efficiency and effectiveness of security against the universal laws of privacy and human rights.
Evaluation of Personnel Parameters in Software Cost Estimating Models
Software capabilities have steadily increased over the last half century. The Department of Defense has seized this increased capability and used it to advance the warfighter's weapon systems. However, this dependence on software capabilities has come with enormous cost. The risks of software development must be understood to develop an accurate cost estimate. Department of Defense cost estimators traditionally depend on parametric models to develop an estimate for a software development project. Many commercial parametric software cost estimating models exist, such as COCOMO II, SEER-SEM, SLIM, and PRICE S. COCOMO II is the only model with an open architecture. The open architecture allows the estimator to fully understand the impact each parameter has on the effort estimate, in contrast with the closed-architecture models, which mask the quantitative value behind a qualitative input used to characterize the impact of the parameter.
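That openness is concrete: the COCOMO II post-architecture effort equation is published, so the sketch below can evaluate it directly. The scale-factor ratings shown are the nominal COCOMO II.2000 values, while the project size and effort multipliers are arbitrary illustrative inputs:

```python
# COCOMO II post-architecture effort model (COCOMO II.2000 calibration):
#   PM = A * Size^E * product(EM_i),  with  E = B + 0.01 * sum(SF_j)
# A = 2.94, B = 0.91; Size in KSLOC; 17 effort multipliers (EM) and
# 5 scale factors (SF). The EM values below are illustrative only.
A, B = 2.94, 0.91
size_ksloc = 100
scale_factors = [3.72, 3.04, 4.24, 3.29, 4.68]   # nominal PREC/FLEX/RESL/TEAM/PMAT
effort_multipliers = [1.0, 1.17, 0.87, 1.10]      # e.g. a few personnel-driven EMs

E = B + 0.01 * sum(scale_factors)
pm = A * size_ksloc ** E
for em in effort_multipliers:
    pm *= em
print(f"E = {E:.3f}, estimated effort = {pm:.0f} person-months")
```

Because every parameter enters the formula explicitly, an estimator can see exactly how, say, a personnel-capability multiplier scales the final person-month figure.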
Load Balancing Using Time Series Analysis for Soft Real Time Systems With Statistically Periodic Loads
This thesis provides design and analysis of techniques for global load balancing on ensemble architectures running soft-real-time object-oriented applications with statistically periodic loads. It focuses on estimating the instantaneous average load over all the processing elements. The major contribution is the use of explicit stochastic process models for both the loading and the averaging itself. These models are exploited via statistical time-series analysis and Bayesian inference to provide improved average load estimates, and thus to facilitate global load balancing. This thesis explains the distributed algorithms used and provides some optimality results. It also describes the algorithms' implementation and gives performance results from simulation. These results show that our techniques allow more accurate estimation of the global system loading, resulting in fewer object migrations than local methods. Our method is shown to provide superior performance, relative not only to static load-balancing schemes but also to many adaptive methods.
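The benefit of an explicit stochastic model can be shown with a scalar example. The sketch below (not the thesis's algorithm) models the average load as a random walk observed through noise and filters it with a one-dimensional Kalman filter, which typically yields lower error than using raw samples directly:

```python
import random

random.seed(0)

# Assumed model: the global average load drifts as a random walk and each
# sample of it is corrupted by measurement noise.
q, r = 0.01, 1.0          # process and measurement noise variances (assumed)
true_load, est, var = 5.0, 5.0, 10.0
err_filtered, err_raw = [], []

for t in range(500):
    true_load += random.gauss(0, q ** 0.5)       # load drifts
    z = true_load + random.gauss(0, r ** 0.5)    # noisy observation
    # Kalman predict/update for a scalar random-walk state.
    var += q
    k = var / (var + r)
    est += k * (z - est)
    var *= (1 - k)
    err_filtered.append((est - true_load) ** 2)
    err_raw.append((z - true_load) ** 2)

print("mean squared error, filtered:", sum(err_filtered) / 500)
print("mean squared error, raw     :", sum(err_raw) / 500)
```

A better load estimate translates directly into fewer unnecessary object migrations, which is the effect the simulation results above report.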
Evolving Compact Decision Rule Sets
With the increased proliferation of computing equipment, there has been a corresponding explosion in the number and size of databases. Although a great deal of time and effort is spent building and maintaining these databases, it is nonetheless rare that this valuable resource is exploited to its fullest. The principal reason for this paradox is that many organizations lack the insight and/or expertise to effectively translate this information into usable knowledge. While data mining technology holds the promise of automatically extracting useful patterns (such as decision rules) from data, this potential has yet to be realized. One of the major technical impediments is that the current generation of data mining tools produces decision rule sets that are very accurate but extremely complex and difficult to interpret. As a result, there is a clear need for methods that yield decision rule sets that are both accurate and compact. The development of the Genetic Rule and Classifier Construction Environment (GRaCCE) is proposed as an alternative to existing decision rule induction (DRI) algorithms. GRaCCE is a multi-phase algorithm which harnesses the power of evolutionary search to mine classification rules from data. These rules are based on piecewise linear estimates of the Bayes decision boundary within a winnowed subset of the data.
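To make the accuracy/compactness tradeoff concrete, here is a toy scoring of a piecewise-linear rule, a conjunction of half-plane tests like those GRaCCE evolves. The data, rule encoding, and scoring function are invented for this sketch:

```python
# A rule is a conjunction of half-plane tests w0*x0 + w1*x1 >= b; shorter
# rules are more compact, so a rule is scored on both accuracy and length.
samples = [((0.2, 0.8), 1), ((0.9, 0.1), 0), ((0.3, 0.7), 1), ((0.8, 0.3), 0)]

def satisfies(rule, x):
    return all(w0 * x[0] + w1 * x[1] >= b for (w0, w1, b) in rule)

def score(rule):
    acc = sum(satisfies(rule, x) == bool(y) for x, y in samples) / len(samples)
    return acc, len(rule)    # maximize accuracy, minimize rule length

rule = [(-1.0, 1.0, 0.2)]    # single half-plane: x1 - x0 >= 0.2
print(score(rule))           # -> (1.0, 1) on this toy data
```

An evolutionary search over such rule lists would then trade small losses in accuracy for large gains in interpretability.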
Interagency Organization for Cyberwar
Many people take for granted things they cannot see, smell, or touch. For most people, security in cyberspace is one of these things. Aside from securing their home personal computers with the latest anti-virus software, the majority of Americans take government and corporate cyber security for granted, assuming the professionals have security of the nation's military networks, sensitive government data, and consumers' personal data and financial information under control. Outside of an occasional news story about a denial-of-service internet attack or an "I Love You" virus, what goes on behind the closed compact disc drive doors does not concern most of the nation. The chilling fact is that the nation should be concerned about what is going on in cyberspace. Since the terrorist attacks on 9/11, the nation has taken a renewed interest in securing the homeland, including efforts to protect the country's critical infrastructure such as electrical plants, dams, and water supplies. It is no secret that terrorists are interested in striking these targets with the intent of inflicting catastrophic physical and economic damage on western civilization. What many people do not realize is that the computer network systems which monitor and manage these systems, and many others, are also under attack by what some are calling cyber terrorists. Although the government and industry have undertaken a significant amount of effort to protect the nation's military, non-military government, financial, and industrial networks, more work is necessary.
U.S. Policy Recommendation for Responding to Cyber Attacks Against the United States
The United States has traditionally looked to its military to defend against all foreign enemies. International telecommunications and computer networks and globalization have now overcome the military's absolute ability to provide for that common defense. Though more than capable of responding to attacks in the traditional war-fighting domains of land, sea, air, and even space, the military will not be able to prevent all cyber attacks against U.S. interests. As a result, the U.S. should establish and announce the nature of its strategic responses to cyber attacks, including legal prosecution, diplomacy, or military action. Such a policy pronouncement will serve as a deterrent to potential attackers and will likely be established as a normative international standard. The outline for a response policy begins by addressing attacks based upon the prevailing security environment: peacetime or conflict. The U.S. should respond to peacetime attacks based on the target, reasonably expected damage, attack type, and source. Attacks likely to cause significant injuries and damage warrant a full spectrum of response options, while state-sponsored attacks would justify a forcible response when their type and target indicate destructive effects including widespread injury and damage.
On Graph Isomorphism and the Pagerank Algorithm
Graphs express relationships among objects, such as the radio connectivity among nodes in unmanned vehicle swarms. Some applications may rank a swarm's nodes by their relative importance, for example, using the PageRank algorithm applied in certain search engines to order query responses. The PageRank values of the nodes correspond to a unique eigenvector that can be computed using the power method, an iterative technique based on matrix multiplication. The first result is a practical lower bound on the PageRank algorithm's execution time that is derived by applying assumptions to the PageRank perturbation's scaling value and the PageRank vector's required numerical precision. The second result establishes nodes contained in the same block of the graph's coarsest equitable partition must have equal PageRank values. The third result, the AverageRank algorithm, ensures such nodes are assigned equal PageRank values. The fourth result, the ProductRank algorithm, reduces the time needed to find the PageRank vector by eliminating certain dot products in the power method if the graph's coarsest equitable partition contains blocks composed of multiple vertices.
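A minimal power-method implementation follows, with an illustrative damping factor and a three-node graph in which two structurally interchangeable nodes receive equal PageRank, consistent with the equitable-partition result above:

```python
# Power-method PageRank for a small directed graph. The damping factor
# and tolerance are illustrative choices.
def pagerank(adj, d=0.85, tol=1e-10):
    n = len(adj)
    r = [1.0 / n] * n
    while True:
        nxt = [(1 - d) / n] * n
        for u, outs in enumerate(adj):
            if outs:                      # distribute rank along out-links
                share = d * r[u] / len(outs)
                for v in outs:
                    nxt[v] += share
            else:                         # dangling node: spread uniformly
                for v in range(n):
                    nxt[v] += d * r[u] / n
        if sum(abs(a - b) for a, b in zip(nxt, r)) < tol:
            return nxt
        r = nxt

# Nodes 1 and 2 are interchangeable (both link only to node 0 and are
# linked only from node 0), so they end up with equal PageRank values.
adj = [[1, 2], [0], [0]]
print(pagerank(adj))
```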
Smart Companies, Digital Society: How AI & Co. Could Shape the Life and Work of the Future
Digital health, flexicurity, shared mobility, and smart cities are just a few selected buzzwords from the multitude of current developments in business and society. In challenging times, new technologies and their deliberate use can decisively shape how we live and work in the future. This volume therefore pursues practice-oriented questions about entrepreneurial and societal future scenarios around AI & Co. Short- and long-term technology-driven trends are examined with regard to their opportunities and risks. Alongside societal developments, changes in the world of work, and applications in the mobility sector, aspects of security are also considered.
Barriers to Electronic Records Management
Corporate and government organizations can use electronic records as an important strategic resource, if the records are managed properly. In addition to meeting legal requirements, electronic records can play a vital role in the management and operation of an organization's activities. Corporate America is facing challenges in managing electronic records, and so too is the U.S. Air Force (USAF). The deployed environment is particularly problematic for electronic records management (ERM). This research, thus, investigates ERM in the deployed environment to identify and characterize the barriers faced by USAF personnel who deployed to locations supporting Operations Enduring Freedom and Iraqi Freedom. This investigation was conducted through a qualitative approach, drawing much of its rich data from in-depth interviews. An exploratory case study was designed using a socio-technical framework and inductive analysis was used to proceed from particular facts to general conclusions. The analysis revealed 15 barriers to ERM. All 15 barriers were determined to exist throughout the entire records lifecycle and were categorized based on common overarching themes. This research reveals some unique barriers contained within the context of a deployed location, while also showing that the barriers are similar to known ERM challenges.
Automating Security Protocol Analysis
When Roger Needham and Michael Schroeder first introduced a seemingly secure protocol [24], it took over 18 years to discover that even with the most secure encryption, the conversations using this protocol were still subject to penetration. To date, there is still no one protocol that is accepted for universal use. Because of this, analysis of the protocol outside the encryption is becoming more important. Recent work by Joshua Guttman and others [9] has identified several properties that good protocols often exhibit. Termed "authentication tests," these properties have been very useful in examining protocols. The purpose of this research is to automate these tests and thus help expedite the analysis of both existing and future protocols. The success of this research is shown through rapid analysis of numerous protocols for the existence of authentication tests. The result is that an analyst is now able to ascertain in near real-time whether a proposed protocol is of a sound design or whether an existing protocol may contain previously unknown weaknesses. The other achievement of this research is the generality of the input process involved. Although other protocol analyzers exist, their use is limited primarily due to their complexity of use. With the tool generated here, an analyst needs only to enter their protocol into a standard text file; almost immediately, the analyzer determines the existence of the authentication tests.
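As a flavor of what automated inspection of a protocol text file might involve, the toy below encodes the Needham-Schroeder public-key exchange and checks one simple precondition related to authentication tests: whether each nonce first travels under encryption. The message syntax and the check are invented for this sketch and are far simpler than the actual analyzer:

```python
# Each message is (sender, receiver, term); enc(k, ...) denotes encryption.
protocol = [
    ("A", "B", "enc(pkB, Na, A)"),      # 1. A -> B
    ("B", "A", "enc(pkA, Na, Nb)"),     # 2. B -> A
    ("A", "B", "enc(pkB, Nb)"),         # 3. A -> B  (Needham-Schroeder PK)
]

def originates_encrypted(nonce):
    """Is the first message carrying this nonce an encrypted term?"""
    for sender, receiver, term in protocol:
        if nonce in term:
            return term.startswith("enc(")
    return False

for nonce in ("Na", "Nb"):
    print(nonce, "first sent under encryption:", originates_encrypted(nonce))
```

The real authentication tests reason about which principals could have transformed such encrypted terms; this sketch only shows how mechanically checkable such structural properties are once the protocol is in machine-readable form.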
Design and Implementation of Replicated Object Layer
One of the widely used techniques for construction of fault tolerant applications is the replication of resources, so that if one copy fails, sufficient copies may still remain operational to allow the application to continue to function. This thesis involves the design and implementation of an object-oriented framework for replicating data on multiple sites and across different platforms. Our approach, called the Replicated Object Layer (ROL), provides a mechanism for consistent replication of data over dynamic networks. ROL uses the Reliable Multicast Protocol (RMP) as a communication protocol that provides reliable delivery, serialization, and fault tolerance. Besides providing type registration, this layer facilitates distributed atomic transactions on replicated data. A novel algorithm called the RMP Commit Protocol, which commits transactions efficiently in a reliable multicast environment, is presented. ROL provides recovery procedures to ensure that site and communication failures do not corrupt persistent data, and makes the system fault tolerant to network partitions. ROL will facilitate building distributed fault tolerant applications by handling the burdensome details of replica consistency operations and making them completely transparent to the application. Replicated databases are a major class of applications which could be built on top of ROL.
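The commit idea can be sketched in miniature: assuming a reliable, totally ordered multicast (the role RMP plays here), a coordinator can prepare and then commit an update across replicas. This toy in-process simulation is illustrative only, not the RMP Commit Protocol itself:

```python
# Three replicas receive messages through a stand-in for a reliable,
# totally ordered multicast; an update is applied only after COMMIT.
class Replica:
    def __init__(self, name):
        self.name, self.store, self.pending = name, {}, {}

    def deliver(self, msg):
        kind, txid, payload = msg
        if kind == "PREPARE":
            self.pending[txid] = payload          # buffer until commit
            return ("ACK", txid, self.name)
        if kind == "COMMIT":
            self.store.update(self.pending.pop(txid))
        return None

replicas = [Replica(f"r{i}") for i in range(3)]

def multicast(msg):
    # Reliable delivery to every replica, in the same order (assumed).
    return [r.deliver(msg) for r in replicas]

acks = multicast(("PREPARE", 1, {"x": 42}))
if len([a for a in acks if a]) == len(replicas):  # all replicas acknowledged
    multicast(("COMMIT", 1, None))
print([r.store for r in replicas])                # -> [{'x': 42}] * 3
```

Because the multicast layer already serializes deliveries, the commit decision needs far less coordination than a classical two-phase commit over point-to-point links, which is the efficiency the abstract claims for the RMP environment.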
Performance Analysis of Protocol Independent Multicasting-Dense Mode in Low Earth Orbit Satellite Networks
This research explored the implementation of Protocol Independent Multicasting - Dense Mode (PIM-DM) in a LEO satellite constellation. PIM-DM is a terrestrial protocol for distributing traffic efficiently between subscriber nodes by combining data streams into a tree-based structure, spreading from the root of the tree to the branches. Using this structure, a minimum number of connections are required to transfer data, decreasing the load on intermediate satellite routers. The PIM-DM protocol was developed for terrestrial systems and this research implemented an adaptation of this protocol in a satellite system. This research examined the PIM-DM performance characteristics which were compared to earlier work for On-Demand Multicast Routing Protocol (ODMRP) and Distance Vector Multicasting Routing Protocol (DVMRP) - all in a LEO satellite network environment. Experimental results show that PIM-DM is extremely scalable and has equivalent performance across diverse workloads.
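Dense-mode multicast reduces to flood-and-prune, which a few lines can illustrate. The topology and subscriber set below are invented:

```python
# Flood from the source over the whole graph, then prune every branch
# that reaches no subscriber, leaving the minimal forwarding tree.
links = {0: [1, 2], 1: [0, 3], 2: [0, 4], 3: [1], 4: [2]}
source, subscribers = 0, {3}

# Build the flood tree (BFS from the source).
parent, frontier = {source: None}, [source]
while frontier:
    nxt = []
    for u in frontier:
        for v in links[u]:
            if v not in parent:
                parent[v] = u
                nxt.append(v)
    frontier = nxt

# Prune: keep only nodes on a path from the source to some subscriber.
keep = set()
for s in subscribers:
    while s is not None:
        keep.add(s)
        s = parent[s]
print("forwarding tree nodes after prune:", sorted(keep))  # -> [0, 1, 3]
```

In the satellite adaptation, every pruned branch is one fewer inter-satellite link carrying duplicate traffic, which is where the reduced router load comes from.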
Evaluation of the Effects of Predicted Associativity on the Reliability and Performance of Mobile Ad Hoc Networks
Routing in Mobile Ad Hoc Networks (MANETs) presents unique challenges not encountered in conventional networks. Limitations in bandwidth and power as well as a dynamic network topology must all be addressed in MANET routing protocols. Predicted Associativity Routing (PAR) is a custom routing protocol designed to address reliability in MANETs. By collecting associativity information on links, PAR calculates the expected lifetime of neighboring links. During route discovery, nodes use this expected lifetime and their neighbors' connectivity to determine a residual lifetime. Routes are selected from those with the longest remaining lifetimes; PAR thus attempts to extend the duration routes remain active, thereby improving their reliability. PAR is compared to Ad Hoc On-Demand Distance Vector Routing (AODV) using a variety of reliability and performance metrics. Despite its focus on reliability, PAR does not provide more reliable routes. Rather, AODV produces routes which last as much as three times longer than PAR's. However, PAR, even with shorter-lasting routes, delivers more data and has greater throughput. Both protocols are affected most by the node density of the networks. Node density accounts for 48.62% of the variation in route lifetime in AODV, and 70.66% of the variation in PAR. As node density increases from 25 to 75 nodes, route lifetimes are halved, while throughput increases drastically with the increased routing overhead. Furthermore, PAR increases end-to-end delay, while AODV displays better efficiency.
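The route-selection idea can be sketched as follows, with invented expected-lifetime and age values; a route's residual lifetime is taken as that of its weakest link:

```python
# Each link maps to (expected lifetime from associativity history, observed
# age); the residual lifetime is what remains of the expectation.
links = {("a", "b"): (30.0, 12.0), ("b", "c"): (25.0, 5.0),
         ("a", "d"): (40.0, 35.0), ("d", "c"): (50.0, 10.0)}

def residual(link):
    expected, age = links[link]
    return max(expected - age, 0.0)

routes = [[("a", "b"), ("b", "c")], [("a", "d"), ("d", "c")]]
# A route survives only as long as its shortest-lived link.
best = max(routes, key=lambda r: min(residual(l) for l in r))
print(best, min(residual(l) for l in best))   # route a-b-c, residual 18.0
```

The study's finding is that this selection rule, however intuitive, did not outperform AODV's routes in observed lifetime, a useful caution about lifetime prediction in practice.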
Multicast Algorithms for Mobile Satellite Communication Networks
With the rise of mobile computing and an increasing need for ubiquitous high-speed data connections, Internet-in-the-sky solutions are becoming increasingly viable. To reduce the network overhead of one-to-many transmissions, the multicast protocol has been devised. The implementation of multicast in these Low Earth Orbit (LEO) constellations is a critical component to achieving an omnipresent network environment. This research examines the system performance associated with two terrestrial-based multicast mobility solutions: Distance Vector Multicast Routing Protocol (DVMRP) with mobile IP, and On Demand Multicast Routing Protocol (ODMRP). These protocols are implemented and simulated in a six-plane, 66-satellite LEO constellation. Each protocol was subjected to various workloads, including changes in the number of source nodes and the amount of traffic generated by these nodes. Results from the simulation trials show the ODMRP protocol provided greater than 99% reliability in packet deliverability, at the cost of more than 8 bits of overhead for every 1 bit of data for multicast groups with multiple sources. In contrast, DVMRP proved robust and scalable, with data-to-overhead ratios increasing logarithmically with membership levels. DVMRP also had less than 70 ms of average end-to-end delay, providing stable transmissions at high loading and membership levels. Because system performance metric values varied as a function of protocol, system design objectives must be considered when choosing a protocol for implementation.
The Need for Censorship on the Internet Exists at the Air Command and Staff College
The researcher surveys the utilization characteristics of both faculty and students to support or refute the need for Internet censorship at ACSC. Internet censorship is approached from the perspective of the Air University commander's intent, not from a pornography or indecency perspective.
UNIX-Based Operating Systems Robustness Evaluation
Robust operating systems are required for reliable computing. Techniques for robustness evaluation of operating systems not only enhance the understanding of the reliability of computer systems, but also provide valuable feedback to system designers. This thesis presents results from robustness evaluation experiments on five UNIX-based operating systems: Digital Equipment's OSF/1, Hewlett-Packard's HP-UX, Sun Microsystems' Solaris and SunOS, and Silicon Graphics' IRIX. Three sets of experiments were performed. The methodology for evaluation tested (1) the exception handling mechanism, (2) system resource management, and (3) system capacity under high workload stress. An exception generator was used to evaluate the exception handling mechanism of the operating systems. Results included the exit status of the exception generator and the system state. Resource management techniques used by individual operating systems were tested using programs designed to usurp system resources such as physical memory and process slots. Finally, the workload stress testing evaluated the effect of the workload on system performance by running a synthetic workload and recording the response time of local and remote user requests. Moderate to severe performance degradations were observed on the systems under stress.
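In the spirit of the exception-generator experiments, the sketch below (UNIX-only; the specific library call and bad argument are illustrative choices) injects an invalid pointer into a libc call in a child process and records how the operating system reports the failure:

```python
import ctypes
import os

# Load the process's own libc (works on common UNIX platforms).
libc = ctypes.CDLL(None)

pid = os.fork()
if pid == 0:
    # Child: pass an invalid address to strlen and see how the OS reacts.
    libc.strlen(ctypes.c_void_p(1))
    os._exit(0)                       # reached only if the call "succeeds"

_, status = os.waitpid(pid, 0)
if os.WIFSIGNALED(status):
    print("child killed by signal", os.WTERMSIG(status))  # typically SIGSEGV
else:
    print("child exited with status", os.WEXITSTATUS(status))
```

A full exception generator sweeps many calls and argument patterns, tabulating exit statuses and signals per operating system, which is the kind of data the experiments above compare.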
Anti-Tamper Method for Field Programmable Gate Arrays Through Dynamic Reconfiguration and Decoy Circuits
As Field Programmable Gate Arrays (FPGAs) become more widely used, security concerns have been raised regarding FPGA use for cryptographic, sensitive, or proprietary data. Storing or implementing proprietary code and designs on FPGAs could result in compromise of sensitive information if the FPGA device were physically relinquished or remotely accessible to adversaries seeking to obtain the information. Although multiple defensive measures have been implemented (and overcome), the possibility exists to create a secure design through the implementation of polymorphic Dynamically Reconfigurable FPGA (DRFPGA) circuits. Using polymorphic DRFPGAs removes the static attributes from their design, substantially increasing the difficulty of successful adversarial reverse-engineering attacks. A variety of dynamically reconfigurable methodologies exist for implementations that challenge designers in the reconfigurable technology field. A Hardware Description Language (HDL) DRFPGA model is presented for use in security applications. The VHSIC Hardware Description Language (VHDL) was chosen to take advantage of its capabilities, which are well suited to the current research. Additionally, algorithms that explicitly support granular autonomous reconfiguration have been developed and implemented on the DRFPGA as a means of protecting its designs. Documented testing validated the reconfiguration results and compared the original FPGA and DRFPGA designs with respect to security, power usage, and area estimates.