Integrating Oil Debris and Vibration Measurements for Intelligent Machine Health Monitoring
A diagnostic tool for detecting damage to gears was developed. Two measurement technologies, oil debris analysis and vibration analysis, were integrated into a health monitoring system for detecting surface fatigue pitting damage on gears. This integrated system showed improved detection and decision-making capabilities compared to either measurement technology alone. The diagnostic tool was developed and evaluated experimentally by collecting vibration and oil debris data from fatigue tests performed in the NASA Glenn Spur Gear Fatigue Rig. An inductance-type oil debris sensor was selected for the oil analysis measurement technology. Gear damage data for this type of sensor was limited to data collected in the NASA Glenn test rigs; for this reason, the analysis included development of a parameter for detecting gear pitting damage using this type of sensor. The vibration data was used to calculate two previously published gear vibration diagnostic algorithms, selected based on their maturity and published success in detecting damage to gears. Oil debris and vibration features were then developed using fuzzy logic analysis techniques and input into a multi-sensor data fusion process. Results show that combining the vibration and oil debris measurement technologies improves the detection of pitting damage on spur gears. As a result of this research, this new diagnostic tool has significantly improved detection of gear damage in the NASA Glenn Spur Gear Fatigue Rigs. This research also produced several other findings that will inform the development of future health monitoring systems.
Oil debris analysis was found to be more reliable than vibration analysis for detecting pitting fatigue failure of gears and is capable of indicating damage progression. This work has been selected by scholars as being culturally important, and is part of the knowledge base of civilization as we know it. This work was reproduced from the original artifact, and remains as true to the original work as possible. Therefore, you will see the original copyright references, library stamps (as most of these works have been housed in our most important libraries around the world), and other notations in the work. This work is in the public domain in the United States of America, and possibly other nations. Within the United States, you may freely copy and distribute this work, as no entity (individual or corporate) has a copyright on the body of the work. As a reproduction of a historical artifact, this work may contain missing or blurred pages, poor pictures, errant marks, etc. Scholars believe, and we concur, that this work is important enough to be preserved, reproduced, and made generally available to the public. We appreciate your support of the preservation process, and thank you for being an important part of keeping this knowledge alive and relevant.
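The abstract above describes mapping each measurement to fuzzy features before multi-sensor data fusion. A minimal Python sketch of that idea follows; the membership thresholds and fusion weights are illustrative placeholders, not the values used in the NASA study:

```python
def ramp(x, lo, hi):
    """Piecewise-linear fuzzy membership: 0 below lo, 1 above hi."""
    return min(max((x - lo) / (hi - lo), 0.0), 1.0)

def damage_level(debris_mass_mg, vib_metric):
    """Fuse an oil debris mass and a vibration metric into one [0, 1]
    damage indicator. Thresholds and weights are placeholders."""
    mu_debris = ramp(debris_mass_mg, 10.0, 40.0)  # assumed debris thresholds (mg)
    mu_vib = ramp(vib_metric, 3.0, 7.0)           # assumed vibration thresholds
    # Weight oil debris more heavily, reflecting the reported finding
    # that it is the more reliable indicator of pitting fatigue.
    return 0.7 * mu_debris + 0.3 * mu_vib
```

A weighted average is only one of many fusion rules; the thesis's actual fuzzy sets and fusion process are more elaborate.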
UFGS 32 31 13.53
Unified Facilities Guide Specifications (UFGS) are a joint effort of the U.S. Army Corps of Engineers (USACE), the Naval Facilities Engineering Command (NAVFAC), the Air Force Civil Engineer Support Agency (HQ AFCESA), the Air Force Center for Engineering and the Environment (HQ AFCEE), and the National Aeronautics and Space Administration (NASA). UFGS are for use in specifying construction for the military services. This is one of those documents.
A Radial Basis Function Neural Network Approach to Two-Color Infrared Missile Detection
Multi-color infrared imaging missile-warning systems require real-time detection techniques that can process the wide instantaneous field of regard of focal plane array sensors with a low false alarm rate. Current technology applies classical statistical methods to this problem and ignores neural network techniques. Thus the research reported here is novel in that it investigates the use of radial basis function (RBF) neural networks to detect sub-pixel missile signatures. An RBF neural network is designed and trained to detect targets in two-color infrared imagery using a recently developed regression tree algorithm. Features are calculated for 3 by 3 pixel sub-images in each color band and concatenated into a vector as input to the network.
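As a rough illustration of the RBF structure the abstract describes (Gaussian hidden units over a concatenated feature vector, feeding a trained output layer), here is a minimal NumPy sketch. The regression-tree training algorithm and the actual two-band 3 by 3 features are not reproduced; centers and data below are arbitrary:

```python
import numpy as np

def rbf_features(x, centers, gamma):
    """Gaussian radial basis activations of input x at each center."""
    return np.exp(-gamma * np.sum((centers - x) ** 2, axis=1))

def train_output_weights(X, y, centers, gamma):
    """Fit the linear output layer by least squares on the RBF activations."""
    Phi = np.array([rbf_features(x, centers, gamma) for x in X])
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def predict(x, centers, gamma, w):
    """Network output: weighted sum of the RBF activations."""
    return rbf_features(x, centers, gamma) @ w
```

With centers placed at the training points, the least-squares fit interpolates the training targets exactly; a real detector would instead select far fewer centers (here, via the regression tree the abstract mentions).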
Reflectivity and Transmissivity Through Layered, Lossy Media
The theory behind the use of layers of radar absorbing materials or other dielectric materials is identical to the theory of optical reflection and transmission through layered media. This report is intended to be of use to students studying the application of layered media to a radar cross-section reduction problem. In this report, we survey several established optics and electromagnetics texts. We critique them and attempt to reconcile differences. We arrive at a single consistent theory which fully considers lossy materials. Layers are depicted as matrices which can be multiplied to combine the effects of several adjacent layers. We can then find the transmissivity and reflectivity of the entire multiple-layer structure. This theory is implemented in the MATLAB language in a user-friendly format.
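The layer-matrix formulation summarized above (each layer as a 2x2 characteristic matrix, multiplied to combine adjacent layers) can be sketched as follows for normal incidence; a lossy layer simply takes a complex refractive index. The report's MATLAB implementation also handles oblique incidence and polarization, which this Python sketch omits:

```python
import numpy as np

def layer_matrix(n, d, wavelength):
    """Characteristic matrix of one layer at normal incidence.
    n may be complex (lossy medium); d and wavelength share units."""
    delta = 2 * np.pi * n * d / wavelength  # phase thickness
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                     [1j * n * np.sin(delta), np.cos(delta)]])

def reflectivity(n0, layers, ns, wavelength):
    """Reflectivity of a stack. layers: list of (complex_index, thickness)
    ordered from the incident side; n0, ns: incident and substrate indices."""
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        M = M @ layer_matrix(n, d, wavelength)  # combine adjacent layers
    B, C = M @ np.array([1.0, ns])
    r = (n0 * B - C) / (n0 * B + C)
    return abs(r) ** 2
```

Sanity checks: with no layers this reduces to the Fresnel result ((n0-ns)/(n0+ns))^2, and a quarter-wave layer with n1 = sqrt(n0*ns) drives the reflectivity to zero.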
Non-GPS Navigation Using Vision-Aiding and Active Radio Range Measurements
The military depends on the Global Positioning System (GPS) for a wide array of advanced weaponry guidance and precision navigation systems. Lack of GPS access makes precision navigation very difficult. Inclusion of inertial sensors in existing navigation systems provides short-term precision navigation, but drifts significantly over long-term navigation. This thesis is motivated by the need for inertial sensor drift constraint in degraded and denied GPS environments. The navigation system developed consists of inertial sensors, a simulated barometer, three Raytheon DH500 radios, and a stereo-camera image-aiding system. The Raytheon DH500 is a combat communications radio that also provides range measurements between radios. The measurements from each sensor are fused together with an extended Kalman filter to estimate the navigation trajectory. Residual monitoring and the Sage-Husa adaptive algorithm are individually tested in the Kalman filter range update algorithm to help improve the radio range positioning performance. The navigation system is shown to provide long-term inertial sensor drift constraint with position errors as low as 3 meters.
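As a hedged illustration of how a radio range measurement enters an extended Kalman filter, here is a minimal sketch. The thesis fuses full inertial states; this version keeps only a 2-D position for brevity, and the noise values below are placeholders:

```python
import numpy as np

def range_update(x, P, radio_pos, z, R):
    """EKF measurement update for one radio range measurement.
    x: state [px, py] (position-only for illustration), P: covariance,
    radio_pos: known radio location, z: measured range, R: range variance."""
    diff = x[:2] - radio_pos
    pred = np.linalg.norm(diff)       # predicted range
    H = np.zeros((1, len(x)))
    H[0, :2] = diff / pred            # Jacobian of range w.r.t. position
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T / S                   # Kalman gain
    x_new = x + (K * (z - pred)).ravel()
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```

Residual monitoring, as tested in the thesis, would gate this update by checking the innovation (z - pred) against S before applying it.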
Comparative Energy and Cost Analysis Between Conventional HVAC Systems and Geothermal Heat Pump Systems
To sustain the United States' current affluence and strength, the U.S. Government has encouraged energy conservation through executive orders, federal and local laws, and consumer education. A substantial reduction in U.S. energy consumption could be realized by using geothermal heat pumps to heat and cool buildings throughout the U.S., though initial installation costs are a deterrent. This thesis uses Monte Carlo simulation to predict energy consumption, life cycle cost, and payback period for the vertical closed-loop ground source heat pump (GSHP) relative to conventional heating, ventilation, and air conditioning (HVAC) systems: air-source heat pumps (ASHP) and air-cooled air conditioning with either natural gas, fuel oil, or liquid petroleum gas furnaces, or with electrical resistance heating. The Monte Carlo simulation is performed for a standard commercial office building within each of the 48 continental states. Regardless of the conventional HVAC system chosen, the simulation shows that for each state the GSHP has the highest probability of using less energy and having lower operating and life cycle costs than conventional HVAC systems; however, initial installation costs are typically twice those of conventional HVAC systems, and payback periods vary greatly depending on site conditions.
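The Monte Carlo approach described above can be sketched as follows; the cost and savings distributions here are arbitrary placeholders, not the thesis's state-by-state inputs:

```python
import random

def simulate_payback(n_trials=10_000, seed=1):
    """Monte Carlo estimate of the median simple payback period (years)
    for a GSHP relative to a conventional HVAC system. All dollar
    figures are illustrative placeholders, not the thesis's data."""
    random.seed(seed)
    paybacks = []
    for _ in range(n_trials):
        extra_install = random.uniform(40_000, 80_000)  # GSHP installation premium ($)
        annual_savings = random.uniform(3_000, 9_000)   # energy + operating savings ($/yr)
        paybacks.append(extra_install / annual_savings)
    paybacks.sort()
    return paybacks[len(paybacks) // 2]  # median payback in years
```

A fuller model would discount cash flows for life cycle cost and draw the inputs from site-specific distributions, which is what drives the wide variation in payback the abstract reports.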
Unearthing Hawaii's Energy Goldmine
This research paper assesses the feasibility of installing a photovoltaic system at Hickam Air Force Base, Hawaii. The study begins with an analysis of the rooftop photovoltaic system installed in 2005 at Pearl Harbor Naval Station, Hawaii, which borders Hickam Air Force Base. The analysis identifies the feasibility criteria that Pearl Harbor's Energy Manager considered during project development, and reviews the performance of the array since its installation. Using the criteria and performance data from the Pearl Harbor project, the study then assesses the feasibility of implementing a similar system at Hickam. Hawaii's high electricity prices, sunny climate, and tax incentives for corporate investment in solar power combine to create one of the most favorable solar energy markets in the country.
Two Dimensional Positioning and Heading Solution for Flying Vehicles Using a Line-Scanning Laser Radar (LADAR)
Emerging technology in small autonomous flying vehicles requires the systems to have a precise navigation solution in order to perform tasks. In many critical environments, such as indoors, GPS is unavailable, necessitating the development of supplemental aiding sensors to determine precise position. This research investigates the use of a line-scanning laser radar (LADAR) as a standalone two-dimensional position and heading navigation solution and sets up the device for augmentation into existing navigation systems. A fast histogram correlation method is developed to operate in real time on board the vehicle, providing position and heading updates at a rate of 10 Hz. LADAR navigation methods are adapted to three dimensions, with a simulation built to analyze performance loss due to attitude changes during flight. These simulations are then compared to experimental results collected using a SICK LD-OEM 1000 mounted on a traversing cart. The histogram correlation algorithm applied in this work was shown to successfully navigate a realistic environment for a quadrotor in short flights of less than 5 min in larger rooms. Application in hallways shows great promise, providing a stable heading along with tracking movement perpendicular to the hallway.
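A minimal 1-D sketch of histogram correlation for translation estimation follows; the thesis estimates 2-D position and heading from full laser scans, and the binning and search range here are illustrative:

```python
import numpy as np

def histogram_shift(scan_a, scan_b, bin_size=0.5, max_shift=20):
    """Estimate the translation of scan_b relative to scan_a (1-D sketch).

    Both scans are 1-D arrays of point coordinates. Occupancy histograms
    are built on a shared set of bins, and the integer bin shift that
    maximizes their correlation gives the translation estimate."""
    lo = min(scan_a.min(), scan_b.min()) - max_shift * bin_size
    hi = max(scan_a.max(), scan_b.max()) + max_shift * bin_size
    edges = np.arange(lo, hi, bin_size)
    ha, _ = np.histogram(scan_a, bins=edges)
    hb, _ = np.histogram(scan_b, bins=edges)
    shifts = range(-max_shift, max_shift + 1)
    scores = [np.dot(ha, np.roll(hb, s)) for s in shifts]
    best = shifts[int(np.argmax(scores))]
    # Rolling hb by -k aligns it with ha when scan_b = scan_a + k*bin_size,
    # so the estimated translation is -best * bin_size.
    return -best * bin_size
```

The correlation search over a small integer range is what keeps the method fast enough for the 10 Hz update rate quoted above; a 2-D version correlates angle histograms for heading before position.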
Desktop Computer Programs for Preliminary Design of Transonic Compressor Rotors
A need exists in the field of turbomachinery for correlation-based desktop computer programs that predict the flow through transonic compressor rotors with nominal computational time and cost. In this research, modified versions of two desktop computer programs intended for preliminary transonic compressor rotor design, BOWSHOCK and TRANSROTOR, were used to perform a parametric study on a modern compressor rotor. BOWSHOCK uses a method-of-characteristics approach to calculate exit flow properties of a supersonic streamtube through a user-defined compressor rotor. TRANSROTOR calculates flow properties at three stations in a user-defined compressor stage. Modifications to TRANSROTOR included the incorporation of a recently published rotor loss model, advertised as suitable for analyzing modern blading concepts. The baseline and modified TRANSROTOR versions were run with two modern transonic compressor blades. Results were compared with results from a Navier-Stokes-based computational fluid dynamics (CFD) code, APNASA. A parametric study using BOWSHOCK examined the sensitivity of rotor efficiency and pressure ratio to variations in six blade parameters. Both TRANSROTOR versions predicted rotor efficiency and pressure ratios within ten percent of the CFD results; the baseline version predicted total pressure ratio more accurately. Computational times were under six minutes on a single 450 MHz processor. The blade geometry parametric study showed that isentropic efficiency was most sensitive to stagger angle and least sensitive to blade spacing, while total pressure ratio was most sensitive to blade maximum thickness location and least sensitive to blade maximum thickness.
Assessment of the Undiscovered Oil and Gas of the Senegal Province, Mauritania, Senegal, the Gambia, and Guinea-Bissau, Northwest Africa
Undiscovered, conventional oil and gas resources were assessed in the Senegal Province as part of the U.S. Geological Survey World Petroleum Assessment 2000 (U.S. Geological Survey World Energy Assessment Team, 2000). Although several total petroleum systems may exist in the province, only one composite total petroleum system, the Cretaceous-Tertiary Composite Total Petroleum System, was defined with one assessment unit, the Coastal Plain and Offshore Assessment Unit, having sufficient data to allow quantitative assessment. The primary source rocks for the Cretaceous-Tertiary Composite Total Petroleum System are the Cenomanian-Turonian marine shales. The Turonian shales can be as much as 150 meters thick and contain Type II organic carbon ranging from 3 to 10 weight percent. In the Senegal Province, source rocks are mature even when situated at depths relatively shallow for continental passive margin basins. Reservoir rocks consist of Upper Cretaceous sandstones and lower Tertiary clastic and carbonate rocks. The Lower Cretaceous platform carbonate rocks (sealed by Cenomanian shales) have porosities ranging from 10 to 23 percent. Oligocene carbonate rock reservoirs exist, such as the Dome Flore field, which contains as much as 1 billion barrels of heavy oil (10° API, 1.6 percent sulfur) in place. The traps are a combination of structural closures and stratigraphic pinch-outs. Hydrocarbon production in the Senegal Province to date has been limited to several small oil and gas fields around Cape Verde (also known as the Dakar Peninsula) from Upper Cretaceous sandstone reservoirs bounded by normal faults, of which three fields (two gas and one oil) exceed the minimum size assessed in this study (1 MMBO; 6 BCFG). Discovered known oil resources in the Senegal Province are 10 MMBO, with known gas resources of 49 BCFG (Petroconsultants, 1996).
This study estimates that 10 percent of the total number of potential oil and gas fields (both discovered and undiscovered) of at least the minimum size have been discovered. The estimated mean size and number of assessed, undiscovered oil fields are 13 MMBO and 13 fields, respectively, whereas the mean size and number of undiscovered gas fields are estimated to be 50 BCFG and 11 fields. The mean estimates for undiscovered conventional petroleum resources are 157 MMBO, 856 BCFG, and 43 MMBNGL (table 2). The mean sizes of the largest anticipated undiscovered oil and gas fields are 66 MMBO and 208 BCFG, respectively. The Senegal Province is underexplored considering its large size. The province has hydrocarbon potential in both the offshore and onshore, and undiscovered gas resources may be significant and accessible in areas where the zone of oil generation is relatively shallow.
Multi-Dimensional Classification Algorithm for Automatic Modulation Recognition
This thesis proposes an approach for modulation classification using existing features in a more efficient way. The Multi-Dimensional Classification Algorithm (MDCA) treats features extracted from signals of interest as elements with irrelevant identities, hence eliminating any dependence of the classifier on any particular feature. This design enables the use of any number of features, and the MDCA provides the capability to classify modulations in higher dimensions. The use of multiple features requires an equal number of data dimensions, and thus classification in as high a dimensional space as possible can improve final classification results. Finally, the MDCA uses a relatively small number of simple operations, which leads to a fast processing time. Simulation results for the MDCA demonstrate good potential; in particular, the MDCA consistently performed well at low SNR levels (down to -10 dB in some cases) while identifying more modulation types.
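The abstract treats extracted features as coordinates in an N-dimensional space, where adding a feature simply adds a dimension. A generic nearest-centroid stand-in (not the MDCA's actual decision rule, which is described in the thesis itself) illustrates that property:

```python
import numpy as np

def classify(feature_vec, centroids):
    """Assign a feature vector to the nearest class centroid.

    centroids: dict mapping modulation name -> reference feature vector.
    Distances are Euclidean; using more features just lengthens the
    vectors, with no change to the classifier itself."""
    return min(centroids,
               key=lambda m: np.linalg.norm(np.asarray(feature_vec)
                                            - np.asarray(centroids[m])))
```

Because the decision depends only on geometric distance, no individual feature has a privileged identity, which is the feature-independence property the abstract emphasizes.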
Analysis of the Application of a Triggered Isomer Heat Exchanger as a Replacement for the Combustion Chamber in an Off-the-Shelf Turbojet
The objective of this research was to determine the feasibility of using a nuclear reaction heat source, such as the electromagnetically triggered decay of an isomer, in a solid-state heat exchanger to power an off-the-shelf gas turbine engine. Two primary performance measures examined were the total pressure decrement across the heat exchanger and the total temperature capability leaving the heat exchanger. The analysis included the use of a commercial software package, ANSYS 5.6.1, running on a 700 MHz Pentium III PC. This package includes the FLOTRAN computational fluid dynamics program, a finite element program based on unstructured meshes, with multiple discretization schemes, turbulence models, and advection options. Boundary conditions on velocity, pressure, temperature, heat flux, and heat generation are available and were used in this research. Three basic geometries of heat exchanger were explored in this research: concentric annular tubes, radial trapezoidal fins, and a dual, concentric annulus of rectangular fins. These were selected due to the simplicity of geometry and potential ease of manufacture. In addition, because the flow through all of these geometries could be reasonably approximated by a series of two-dimensional flow fields, run times were on the order of 1 day, a significant reduction from 3-D flow calculations. All three configurations produced sufficient heat transfer. Pressure ratios across the heat exchangers varied from 94.5% to 97.5%. Turbine inlet temperatures varied from 986 K to 1150 K (1775 R to 2070 R). In the J-57 engine, these conditions will produce a static, sea-level thrust of approximately 37,000 N (8,300 lb) to 47,000 N (10,600 lb), compared to 46,000 N (10,300 lb) for the conventional engine.
Conceptual MEMS Devices for a Redeployable Antenna
Micro-Electro-Mechanical Systems (MEMS) are becoming an integral part of our lives through a wide range of applications, including MEMS accelerometers for air bag deployment in vehicles, micromirrors in projection devices, and various sensors for chemical/biological applications. MEMS are a key aspect of ever-increasing significance in a myriad of commercial and military applications. Because of this importance, this thesis develops MEMS devices that can deploy and retract an antenna suitably sized for placement on an insect or microrobot for communication purposes. A target monopole antenna with a length of 1 mm was used as a test metric. From this requirement, several MEMS designs using scratch drives and thermal actuators as the basis for powering the motor were developed. Some of the fabricated and tested designs included a gear with side flaps that flip up perpendicular to the substrate; gears that push an antenna beam off the edge of the substrate; and an antenna beam that is moved upwards such that it stands perpendicular to the substrate. These designs had the highest likelihood of success. Other designs included an array of micro gears and guiding beams, a large wheel powered by scratch drives, and a gear with a pawl requiring assembly; for these designs to be successful, several basic modifications would be necessary. The antenna beam that moves into a position perpendicular to the substrate was successfully self-assembled.
Design, Build and Validation of a Small Scale Combustion Chamber Testing Facility
This study investigated the design parameters necessary for the construction and use of a testing facility built to evaluate advanced combustor designs for future gas turbine engines. User inputs were acquired by interview and by evaluating facilities at other organizations, and informed decisions about the accuracy, capability, safety, and flexibility of each piece of machinery and how the different systems would interact. All systems and measurements are designed to be compliant with the guidance set forth in SAE ARP 1256. Safeguard systems were also designed into the facility to maintain a safe work environment for the user. These safeguards include automatic fuel shut-offs, heater shut-offs, and general system power-downs. While the system is designed to evaluate the testing of a planar 2-D section of the UCC, the labs now have the capability to analyze many systems. The facility, now built, has the ability to supply up to 260 SCFM of air in two legs with 200 SCFM and 60 SCFM splits. These air lines can be independently heated up to 500 °F. The testing area can flow both liquid and gaseous fuels, with a maximum flow rate of 340 mL/min for liquid fuels and 200 SLPM for gaseous fuels. The air flow and fuel flows combine to allow equivalence ratios up to 4 for JP-8 fuels. The facility is also capable of testing systems requiring combustion analysis following SAE ARP 1256 for testing of emissions, a system that requires heated air or fuel, a system that requires an exhaust system to pull gases out of the testing area, or a system that needs open flame. These additional capabilities allow further research to be conducted on site with an ability to report standardized results.
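The equivalence-ratio capability quoted above follows directly from the fuel and air mass flows. A minimal sketch, assuming textbook values for JP-8 density and the stoichiometric fuel/air mass ratio (neither is taken from the facility):

```python
# Hedged sketch: equivalence ratio from facility flow settings.
# The JP-8 density, air density, and stoichiometric fuel/air mass
# ratio below are illustrative textbook values, not facility data.

JP8_DENSITY_G_PER_ML = 0.80      # approximate liquid JP-8 density
AIR_DENSITY_KG_PER_M3 = 1.204    # air at standard conditions
FA_STOICH_JP8 = 0.068            # approximate stoichiometric mass ratio
M3_PER_FT3 = 0.0283168

def equivalence_ratio(fuel_ml_per_min: float, air_scfm: float) -> float:
    """phi = (actual fuel/air mass ratio) / (stoichiometric ratio)."""
    fuel_kg_per_min = fuel_ml_per_min * JP8_DENSITY_G_PER_ML / 1000.0
    air_kg_per_min = air_scfm * M3_PER_FT3 * AIR_DENSITY_KG_PER_M3
    return (fuel_kg_per_min / air_kg_per_min) / FA_STOICH_JP8
```

At the maximum liquid-fuel flow of 340 mL/min on the 60 SCFM leg alone, this sketch gives an equivalence ratio near 2; richer mixtures follow at lower air flows.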
Planning and Design of Hydroelectric Power Plant Structures
This manual provides guidance and assistance to design engineers in the development of different types of equipment used by the United States Army Corps of Engineers (USACE). The manual should be used when preparing electrical designs for civil works facilities built, owned, or operated by the Corps of Engineers.
Computational Model of One-Dimensional Dielectric Barrier Discharges
A one-dimensional fluid model of a surface-type dielectric barrier discharge (DBD) is created using He as the background gas. This simple model, which considers only ionizing collisions and recombination in the electropositive gas, creates an important framework for future studies into the origin of experimentally observed flow-control effects of the DBD. The two methods employed in this study are a semi-implicit sequential algorithm and a fully implicit simultaneous algorithm. The first involves consecutive solutions of Poisson's equation, the electron and ion continuity equations, and the electron energy equation. This method combines a successive over-relaxation algorithm as a Poisson solver with the Thomas tridiagonal algorithm to solve each of the continuity equations. The second algorithm solves an Ax = b system of linearized equations simultaneously and implicitly. The coefficient matrix for the simultaneous method is constructed using a Crank-Nicolson scheme for additional stability, combined with the Newton-Raphson approach to address the nonlinearity and solve the system of equations. Various boundary conditions, flux representations, and voltage schemes are modeled. Test cases include modeling a transient sheath, ambipolar decay, and a radio-frequency discharge. Results are compared to validated computational solutions and/or analytic results when obtainable. Finally, the semi-implicit method is used to model a DBD streamer.
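The Thomas algorithm used for the continuity equations is a standard O(n) tridiagonal solve (forward elimination followed by back substitution). A generic sketch, not the thesis code:

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system Ax = d.

    a: sub-diagonal (a[0] unused), b: main diagonal,
    c: super-diagonal (c[-1] unused), d: right-hand side.
    One forward-elimination sweep and one back-substitution sweep,
    as applied to each discretized continuity equation per time step."""
    n = len(b)
    cp = np.empty(n)
    dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]          # eliminate sub-diagonal
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):           # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

Because the sweep touches each grid point once, the cost per equation per time step is linear in the number of grid points, which is what makes the sequential semi-implicit scheme cheap.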
Structural Response of the Slotted Waveguide Antenna Stiffened Structure Components Under Compression
The Slotted Waveguide Antenna Stiffened Structure (SWASS) is an aircraft system that can provide the capabilities of a stiffened panel skin structure and a slotted waveguide radar antenna simultaneously. The system, made from carbon fiber reinforced polymers, is designed around a 10 GHz radar frequency in the X-band range and uses a WR-90 waveguide as a baseline for design. The system is designed for integration into fuselage or wing sections of intelligence, surveillance, and reconnaissance (ISR) aircraft and would increase system performance through the availability of increased area and decreased system weight. Elemental parts of the SWASS structure were tested in compression after preliminary testing was completed for material characterization of a resin-reinforced plain woven carbon fiber fabric made from Grafil 34-700 fibers and a Tencate RS-36 resin with a resin mass ratio of 30%. Testing included finite element stress and strain field characterization of seven single-slot configurations, and results showed the longitudinal 90° slot was the best structural slot by about 30% in terms of maximum von Mises stress. Single waveguides were tested in the non-slotted configuration and a configuration including a five-longitudinal-slot array in one waveguide wall. Finite element results were compared with experimental results and showed good agreement in all areas. The slot array was determined to have a decrease in nonlinear limit load of 8% from the finite element simulations and 12% from the experimental results. All waveguides showed the characteristics of local wall buckling as the initial failure mechanism and had significant buckling features before ultimate material failure occurred.
Cost Effectiveness of the Civil Engineering Self-Help Program
Self-help began as a method for base organizations to perform minor tasks, such as painting, to upgrade their facility environment. Today self-help's role has expanded to include major projects which are completed during duty time. This project studied the cost effectiveness of the present-day self-help program. The development of self-help is explained to establish the program's background. Senior Civil Engineering leadership was interviewed for their viewpoints on the program. Self-help centers were visited or contacted to determine existing operational practices. This information is analyzed to help determine if self-help has outgrown its cost-effective use.
Finite Element Analysis of Lamb Waves Acting Within a Thin Aluminum Plate
Structural health monitoring (SHM) is an emerging technology that can be used to identify, locate, and quantify structural damage before failure. Among SHM techniques, Lamb waves have become widely used since they can cover large areas from a single location. Due to the development of various structural simulation programs, there is increasing interest in whether SHM data obtained from simulation can be verified by experimentation. The objective of this thesis is to determine Lamb wave responses using SHM models in ABAQUS CAE (a Finite Element Analysis (FEA) program). These results are then compared to experimental results and theoretical predictions under isothermal and thermal gradient conditions in order to assess the sensitivity of piezo-generated Lamb wave propagation. Simulations of isothermal tests are conducted over a temperature range of 0 to 190 °F with 100 kHz and 300 kHz excitation signal frequencies. The changes in temperature-dependent material properties are correlated to measurable differences in the response signal's waveform and propagation speed.
An Investigation Into the Feasibility of Using a Modern Gravity Gradient Instrument for Passive Aircraft Navigation and Terrain Avoidance
Recently, Gravity Gradient Instruments (GGIs), devices that measure the spatial derivatives of gravity, have improved remarkably due to the development of accelerometer technologies. Specialized GGIs are currently flown on aircraft for geological purposes in the mining industries. As such, gravity gradient data is recorded in flight and detailed gradient maps are created after post-mission processing. These maps, if stored in a database onboard an aircraft and combined with a GGI, form the basis for a covert navigation system using a map-matching process. This system is completely passive and essentially unjammable. To determine the feasibility of this method, a GGI sensor model was developed to investigate signal levels at representative flight conditions. Aircraft trajectories were simulated over modeled gravity gradient maps to determine the utility of flying modern GGIs in the roles of navigation and terrain avoidance.
Evaluation of a Method for Kinematic GPS Carrier-Phase Ambiguity Resolution Using a Network of Reference Receivers
New applications for GPS have driven a demand for increased positioning accuracy. The emerging GPS technology particularly affects the test community. The testing equipment and method must provide a solution that is an order of magnitude more precise than the tested equipment to achieve the desired accuracy. Carrier-phase differential GPS methods using a network of reference receivers can provide the centimeter-level accuracy required over a large geographical area. This thesis evaluates the performance of a 5-receiver network over a 50 km × 120 km area of New Mexico, using a GPS network algorithm called Net Adjust. The percentage of time a fixed integer solution was available for a kinematic baseline was investigated for three types of measurements. Results showed that the virtual reference receiver method using Net Adjust-corrected measurements outperformed the raw and Net Adjust-corrected file results. However, these results were only obtained for the shortest-baseline receivers. The receivers with longer baselines did not experience the same degree of success, but did lead to several important insights gained from the research. Most importantly, the accuracy of the reference receiver coordinates is critical to the performance of a reference receiver network. Further testing must be accomplished before a full implementation is recommended.
Three-Dimensional Analysis of a Composite Repair and the Effect of Overply Shape Variation on Structural Efficiency
This research characterizes, in the elastic range, a scarf joint with overply using digital image correlation photogrammetry and finite element modeling. Additionally, the effect of varying the overply's geometric profile is examined. Specimens are constructed from AS4/3501-6 prepreg with a [0/±45/90]2S layup. A fixture is used to achieve a consistent scarfed hole in each panel. The patch and adhesive (FM 300) are co-cured to the panels using positive pressure, which minimizes repair porosity. Three variations in the overply geometry are used: circular, rooftop-end, and tooth-end. The full strain field in each uniaxially loaded specimen is captured using digital image correlation photogrammetry (ARAMIS). These results validate an ABAQUS 3-D finite element model of a scarf patch with circular overply. Good correlation is evident in the longitudinal strain; strain sensitivity limits correlation in the transverse and shear directions. The finite element model is used to identify peak out-of-plane stresses in the repair joint. Significant normal stresses occur at the edge of the overply and at the inner scarf diameter. Finally, the experimentally measured strains of the three overply variations are examined. Variation in strain magnitude is insignificant; the strain gradient at the overply edge, however, is significantly lower on the tooth-end profile.
Analysis of Cloud-Free Line-of-Sight Probability Calculations
Cloud-free line-of-sight (CFLOS) probabilities were calculated using two separate methods. The first was a variation of a method developed by the Rand Corporation in 1972. In it, CFLOS probabilities were calculated using empirical data based on five years of photographs taken over Columbia, Missouri and forecasted cloud amounts rather than climatological values. The second was a new approach using the Cloud Scene Simulation Model (CSSM) developed by Phillips Laboratory. Cloud scenes were generated using forecasted cloud fields, meteorological inputs, and thirty random numbers. Water content files were produced and processed through a follow-on program to determine the extinction coefficients at each grid point in the working domain. An iterative routine was written to integrate the extinction coefficients along a view angle from the top of the domain down to the surface at separate points within the horizontal domain. The values of each point were summed and averaged over the working domain to determine the CFLOS probability for the target area. The nadir look angle was then examined for both methods. Stratus, stratocumulus, cumulus, and altocumulus cloud types were independently examined with the CSSM-generated cloud scenes. Each method and cloud type were compared against the known CFLOS probability for nadir.
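The integration of extinction coefficients into a CFLOS decision can be sketched as follows; the optical-depth threshold `tau_clear` is an assumed illustration, not a value from the study:

```python
import numpy as np

def nadir_cflos_probability(ext_profiles, dz, tau_clear=0.1):
    """Estimate nadir CFLOS probability from simulated cloud columns.

    ext_profiles: list of 1-D vertical extinction-coefficient profiles
    (per km) sampled from CSSM-style cloud scenes; dz: vertical grid
    spacing (km). A column counts as cloud-free when its integrated
    optical depth from the top of the domain to the surface stays
    below tau_clear (an assumed threshold, for illustration only)."""
    clear = sum(
        1 for k in ext_profiles if float(np.sum(k)) * dz < tau_clear
    )
    return clear / len(ext_profiles)
```

Averaging this clear/cloudy decision over many columns in the horizontal domain, and over the thirty random-number realizations, yields the scene-level CFLOS probability described above.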
Characterizing the Impact of Precision Time and Range Measurements From Two-Way Time Transfer Systems on Network Differential GPS Position Solutions
Precise positioning plays an important role for both military and civilian users, from cell phones and OnStar to precision munitions and swarms of UAVs. Many applications require precise relative positioning of a network of vehicles (aircraft, tanks, troops, etc.). Currently, the primary means for performing precise positioning is the Global Positioning System (GPS), and although GPS has become commonplace in today's society, there are still limitations affecting the system. Recent advances in dynamic Two-Way Time Transfer (TWTT) have potentially provided a means to improve precise relative positioning accuracy over differential GPS (DGPS)-only approaches. TWTT is a technique in which signals are simultaneously exchanged between users.
Creating a Network Model for the Integration of a Dynamic and Static Supervisory Control and Data Acquisition (SCADA) Test Environment
Since 9/11, protecting our critical infrastructure has become a national priority. Presidential Decision Directive 63 mandates and lays a foundation for ensuring all aspects of our nation's critical infrastructure remain secure. Key in this debate is the fact that much of our electrical power grid fails to meet the spirit of this requirement. My research leverages the power afforded by the Electric Power and Communication Synchronizing Simulator (EPOCHS) developed with the assistance of Dr. Hopkinson, et al. The power environment is modeled in an electrical simulation environment called PowerWorld. The network is modeled in OPNET and populated with self-similar network traffic and Supervisory Control and Data Acquisition (SCADA) traffic. The two are merged into one working tool that can realistically model and provide a dynamic network environment coupled with a robust communication methodology. This new suite of tools will enhance the way we model and test hybrid SCADA networks. By combining the best of both worlds we get an effective and robust methodology that correctly predicts the impact of SCADA traffic on a LAN and vice versa. This ability to properly assess data flows will allow professionals in the power industry to develop tools that effectively model future concepts for our critical infrastructure.
A Platform for Antenna Optimization With Numerical Electromagnetics Code Incorporated With Genetic Algorithms
This thesis investigation presents a unique incorporation of the Method of Moments with a Genetic Algorithm. The use of this tool can improve antennas whose designs are based on the Yagi-Uda antenna and the Log-Periodic Dipole Array (LPDA). The applications for these two antennas are of particular use in Passive Remote Sensing (PRS) and Over-the-Horizon Radar (OTHR). The designs are reached in a low-cost and effective manner, the implementation of which is simple and expandable. A Genetic Algorithm (GA) is used in concert with the Numerical Electromagnetics Code, Version 4 (NEC4) to create and optimize typical wire antenna designs including single elements and arrays, the result being antennas with impressive characteristics.
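The GA side of such a coupling can be sketched with a generic real-coded genetic algorithm; here a toy quadratic stands in for the NEC4 gain evaluation, which in the actual tool would score each candidate wire geometry:

```python
import random

def ga_optimize(fitness, bounds, pop=30, gens=60, mut=0.1, seed=1):
    """Minimal real-coded GA sketch: elitism, uniform crossover,
    Gaussian mutation. 'fitness' is any callable to maximize; in an
    NEC4-coupled tool it would run the electromagnetic simulation."""
    rng = random.Random(seed)
    P = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        elite = sorted(P, key=fitness, reverse=True)[: pop // 5]
        children = list(elite)                       # keep best unchanged
        while len(children) < pop:
            p1, p2 = rng.sample(elite, 2)
            child = [a if rng.random() < 0.5 else b  # uniform crossover
                     for a, b in zip(p1, p2)]
            child = [min(max(g + rng.gauss(0, mut * (hi - lo)), lo), hi)
                     for g, (lo, hi) in zip(child, bounds)]
            children.append(child)
        P = children
    return max(P, key=fitness)

# toy stand-in for an NEC4 gain evaluation: best element length at 0.25 m
best = ga_optimize(lambda x: -(x[0] - 0.25) ** 2, [(0.0, 1.0)])
```

In the real tool each gene would encode a geometric parameter (element length, spacing, diameter) and the fitness would reflect simulated gain, impedance match, or pattern quality.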
Nonlinear Suppression of Range Ambiguity in Pulse Doppler Radar
Coherent pulse train processing is most commonly used in airborne pulse Doppler radar, achieving adequate transmitter/receiver isolation and excellent resolution properties while inherently inducing ambiguities in both Doppler and range. As first introduced by Palermo in 1962 using two conjugate LFM pulses, the primary nonlinear suppression (NLS) objective involves reducing range ambiguity, given the waveform is nominally unambiguous in Doppler, by using interpulse and intrapulse coding (pulse compression) to discriminate the received ambiguous pulse responses. By introducing a nonlinear operation on compressed (undesired) pulse responses within individual channels, ambiguous energy levels are reduced in channel outputs. The proliferation of high-speed digital signal processing capability and discrete code development since 1962 greatly improves the feasibility of implementing NLS using code sets of multiple codes.
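The pulse compression this discrimination relies on can be illustrated with a matched filter on a binary phase code; the 13-element Barker code here is a generic example, not one of the study's code sets:

```python
import numpy as np

# Hedged sketch: pulse compression via matched filtering. Correlating
# the received samples against the transmit code compresses a coded
# pulse into a narrow peak with low sidelobes, which is what lets the
# receiver discriminate between responses carrying different codes.
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], float)

def compress(rx, code):
    """Correlate received samples against the transmit code."""
    return np.correlate(rx, code, mode="full")

# an echo of the coded pulse, padded with quiet samples on either side
echo = np.concatenate([np.zeros(5), barker13, np.zeros(5)])
out = compress(echo, barker13)
```

At alignment the output peaks at the code length (13), while sidelobes stay at magnitude 1 or below, a roughly 22 dB peak-to-sidelobe ratio for this code.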
Dynamic Response of a Collidant Impacting a Low Pressure Airbag
There are many uses of low pressure airbags, both military and commercial. Many of these applications have been hampered by inadequate and inaccurate modeling tools. This dissertation contains the derivation of a four degree-of-freedom system of differential equations from physical laws of mass and energy conservation, force equilibrium, and the Ideal Gas Law. Kinematic equations were derived to model a cylindrical airbag as a single control volume impacted by a parallelepiped collidant. An efficient numerical procedure was devised to solve the simplified system of equations in a manner amenable to discovering design trends. The largest public airbag experiment, both in scale and scope, was designed and built to collect data on low-pressure airbag responses, otherwise unavailable in the literature. The experimental results were compared to computational simulations to validate the simplified numerical model. Experimental response trends are presented that will aid airbag designers. Both objectives of demonstrating low pressure airbag feasibility were met: 1) accelerating a munition from a bomb bay to a velocity of 15 feet per second, and 2) decelerating humans hitting trucks to below the human tolerance level of 50 G's.
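The flavor of such a simulation can be conveyed with a far simpler sketch than the dissertation's four degree-of-freedom model: a single mass compressing a sealed, adiabatically compressed gas volume, integrated explicitly. Real low-pressure airbags vent, and every number below (mass, bag volume, impact speed) is invented for illustration; this is only a demonstration of marching an Ideal-Gas-Law force balance in time.

```python
def airbag_peak_g(m=100.0, v0=15.0, A=1.0, V0=1.0, P_atm=101325.0,
                  gamma=1.4, dt=1e-4):
    """Toy 1-DOF impact model: a mass m (kg) hits a sealed bag of initial
    volume V0 (m^3) and face area A (m^2) at v0 (m/s). The trapped gas is
    compressed adiabatically (P * V^gamma = const); the net gas force
    decelerates the mass. Returns the peak deceleration in G's."""
    g = 9.81
    x, v, peak = 0.0, v0, 0.0          # x = compression depth, v > 0 while compressing
    while v > 0.0:
        V = max(V0 - A * x, 0.05 * V0)             # keep volume physical
        P = P_atm * (V0 / V) ** gamma              # adiabatic compression
        a = g - (P - P_atm) * A / m                # accel along direction of motion
        peak = max(peak, -a / g)                   # deceleration in G's
        v += a * dt
        x += v * dt
    return peak

peak = airbag_peak_g()
```

The dissertation's model additionally tracks bag venting, mass and energy flow across the control volume boundary, and three more degrees of freedom, which is what makes it useful for real design trends.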
Radar Orbit Analysis Tool Using Least Squares Estimator
Most objects tracked in space follow a regular Keplerian orbit; unfortunately, non-Keplerian objects such as maneuvering satellites, tethered systems, and thrusting ballistic missiles are becoming more common. It is important to be able to distinguish between Keplerian and non-Keplerian objects due to the potential risk of a tethered satellite being mistaken for an object on re-entry. This research focused on creating a computer model that can detect the non-gravitational acceleration present in non-Keplerian orbits. A 3rd order Taylor series expansion was used to model the dynamics and to produce simulated radar data. Linear least squares estimation was used to estimate the initial state of a space object with a state vector composed of position, velocity, acceleration, and its first derivative. Monte Carlo analysis was used to verify that the estimator was unbiased and representative of the uncertainty in the data. The Monte Carlo method detected non-gravitational acceleration as small as 1.12 cm/s²; however, a subsequent approach that analyzed the data sets individually only detected acceleration as small as 10.63 cm/s². At smaller magnitudes, the estimator was able to detect the presence of non-gravitational acceleration, but was ultimately unable to estimate the true value with statistical accuracy.
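The core estimator can be illustrated in one dimension: fit a 3rd-order Taylor expansion of the trajectory to noisy range data by linear least squares and read off the acceleration term. The trajectory values, noise level, and injected acceleration below are invented for illustration, not the thesis's simulated radar data.

```python
import numpy as np

def estimate_state(times, ranges):
    """Fit r(t) = r0 + v*t + (a/2)*t^2 + (j/6)*t^3 by linear least squares,
    mirroring the 3rd-order Taylor-series state vector
    (position, velocity, acceleration, and its first derivative)."""
    H = np.column_stack([np.ones_like(times), times,
                         times**2 / 2.0, times**3 / 6.0])
    x, *_ = np.linalg.lstsq(H, ranges, rcond=None)
    return x  # [r0, v, a, jerk]

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 200)
true_a = 0.05                         # 5 cm/s^2 non-gravitational acceleration (illustrative)
r = 7.0e6 + 100.0 * t + 0.5 * true_a * t**2 + rng.normal(0.0, 0.01, t.size)
r0, v, a, jerk = estimate_state(t, r)
```

A Monte Carlo wrapper would repeat this over many noise realizations and test whether the recovered `a` distribution is unbiased, which is the detection logic the abstract describes.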
Doppler Aliasing Reduction in Wide-Angle Synthetic Aperture Radar Using Phase Modulated Random Stepped-Frequency Waveforms
This research investigates the benefits of using several phase modulated Random Stepped Frequency (RSF) waveforms in a Wide-Angle Synthetic Aperture Radar (WA-SAR) scenario. RSF waveforms have been demonstrated to have desirable properties which allow for cancelling of Doppler aliased scatterers in WA-SAR images. Additional aliased energy reduction is realized by improving the uniformity of the frequency coverage across the waveform's bandwidth. Phase code modulations applied to the subpulses of a RSF waveform spread the subpulse frequency content and improve WA-SAR image quality. A length 13 Barker code applied to a RSF waveform produces an image with a 91.95% reduction in the aliased energy present relative to a WA-SAR image produced using uncoded RSF. Length 25 Frank and P4 coded RSF waveforms reduce aliased energy by 96.65% and 96.72% respectively. Additionally, phase coded RSF waveforms produce images with improved noise-free dynamic range capabilities. The Barker, Frank and P4 coded waveforms improve the noise-free dynamic range by 9.4 dB, 12.6 dB, and 12.4 dB, respectively.
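The phase codes named in the abstract are standard published sequences, and their compression behavior is easy to demonstrate in isolation. The sketch below generates the length-13 Barker code and a length-25 Frank code and checks their aperiodic autocorrelations; the aliased-energy percentages in the abstract depend on the full WA-SAR processing chain and are not reproduced here.

```python
import numpy as np

# Length-13 Barker code: peak sidelobe magnitude of 1 against a mainlobe of 13.
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

def frank_code(n):
    """Length n^2 Frank polyphase code: phases 2*pi*i*j/n for i, j in 0..n-1."""
    k = np.arange(n)
    return np.exp(2j * np.pi * np.outer(k, k).ravel() / n)

def autocorr(code):
    """Aperiodic autocorrelation (numpy conjugates the second argument)."""
    return np.correlate(code, code, mode="full")

acf = autocorr(barker13)
peak = acf.max()                                        # mainlobe: 13 at zero lag
sidelobe = np.abs(np.delete(acf, acf.argmax())).max()   # Barker property: 1

frank25 = frank_code(5)                                 # length-25 polyphase code
```

Applying one of these codes across the subpulses of an RSF waveform is what spreads the subpulse frequency content in the thesis's scheme.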
A New Flexible Global Positioning System (GPS) Constellation Sustainment Strategy
The Global Positioning System (GPS) is now a global utility. The United States Air Force is the steward responsible for sustaining and modernizing the constellation. The current launch-to-sustain strategy implemented by the Air Force is not flexible, does not effectively support GPS modernization, and does not lend itself to a future responsive launch paradigm.
Multiple Model Methods for Cost Function Based Multiple Hypothesis Trackers
To estimate the state of a maneuvering target in clutter, a tracking algorithm must be capable of addressing measurement noise, varying target dynamics, and clutter. Traditionally, Kalman filters have been used to reject measurement noise, and their multiple model form can accurately identify target dynamics. The Multiple Hypothesis Tracker (MHT), a Bayesian solution to the measurement association problem that retains the probability density function of the target state as a mixture of weighted Gaussians, offers the greatest potential for rejecting clutter, especially when based on an advanced mixture reduction algorithm (MRA) such as the Integral Square Error (ISE) cost function. This research seeks to incorporate multiple model filters into an ISE cost-function based MHT to increase the fidelity of target state estimation.
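The ISE cost has a closed form for Gaussian mixtures, which is what makes it practical as an MRA criterion. The scalar sketch below computes that closed form and the standard moment-preserving merge of two components, then shows that merging two nearby components costs far less ISE than merging distant ones; the example mixture is invented, and the thesis's tracker of course works with multivariate state densities.

```python
import math

def n_pdf(m1, m2, P):
    """Gaussian density value N(m1; m2, P), the building block of closed-form ISE."""
    return math.exp(-0.5 * (m1 - m2) ** 2 / P) / math.sqrt(2 * math.pi * P)

def ise(mix_a, mix_b):
    """Integral Square Error between two scalar Gaussian mixtures, each a
    list of (weight, mean, variance) triples."""
    def cross(f, g):
        return sum(wf * wg * n_pdf(mf, mg, Pf + Pg)
                   for wf, mf, Pf in f for wg, mg, Pg in g)
    return cross(mix_a, mix_a) + cross(mix_b, mix_b) - 2.0 * cross(mix_a, mix_b)

def merge(c1, c2):
    """Moment-preserving merge of two components: the basic MRA move."""
    (w1, m1, P1), (w2, m2, P2) = c1, c2
    w = w1 + w2
    m = (w1 * m1 + w2 * m2) / w
    P = (w1 * (P1 + (m1 - m) ** 2) + w2 * (P2 + (m2 - m) ** 2)) / w
    return (w, m, P)

mix = [(0.5, 0.0, 1.0), (0.3, 0.2, 1.0), (0.2, 4.0, 1.0)]
near = [merge(mix[0], mix[1]), mix[2]]   # merge the two nearby components
far = [merge(mix[0], mix[2]), mix[1]]    # merge two well-separated components
```

An ISE-based MRA repeatedly evaluates candidate merges like these and keeps the reduction with the smallest cost against the original mixture.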
Creating a Network Model for the Integration of a Dynamic and Static Supervisory Control and Data Acquisition (SCADA) Test Environment
Since 9/11, protecting our critical infrastructure has become a national priority. Presidential Decision Directive 63 mandates and lays a foundation for ensuring all aspects of our nation's critical infrastructure remain secure. Key in this debate is the fact that much of our electrical power grid fails to meet the spirit of this requirement. My research leverages the power afforded by the Electric Power and Communication Synchronizing Simulator (EPOCHS), developed with the assistance of Dr. Hopkinson, et al. The power environment is modeled in an electrical simulation environment called PowerWorld. The network is modeled in OPNET and populated with self-similar network traffic and Supervisory Control and Data Acquisition (SCADA) traffic. The two are merged into one working tool that can realistically model and provide a dynamic network environment coupled with a robust communication methodology. This new suite of tools will enhance the way we model and test hybrid SCADA networks. By combining the best of both worlds we get an effective and robust methodology that correctly predicts the impact of SCADA traffic on a LAN and vice versa. This ability to properly assess data flows will allow professionals in the power industry to develop tools that effectively model future concepts for our critical infrastructure.
Finite Element Analysis of Active and Sensory Thermopiezoelectric Composite Materials
Analytical formulations are developed to account for the coupled mechanical, electrical, and thermal response of piezoelectric composite materials. The coupled response is captured at the material level through the thermopiezoelectric constitutive equations and leads to the inherent capability to model both the sensory and active responses of piezoelectric materials. A layerwise laminate theory is incorporated to provide more accurate analysis of the displacements, strains, stresses, electric fields, and thermal fields through the thickness. Thermal effects which arise from coefficient of thermal expansion mismatch, pyroelectric effects, and temperature dependent material properties are explicitly accounted for in the formulation. Corresponding finite element formulations are developed for piezoelectric beam, plate, and shell elements to provide a more generalized capability for the analysis of arbitrary piezoelectric composite structures. The accuracy of the current formulation is verified with comparisons from published experimental data and other analytical models. Additional numerical studies are also conducted to demonstrate additional capabilities of the formulation to represent the sensory and active behaviors. A future plan of experimental studies is provided to characterize the high temperature dynamic response of piezoelectric composite materials.
Separation of the Heavier Rare Earths by Fractional Solvent Extraction
The Office of Scientific & Technical Information (OSTI) is a part of the U.S. Department of Energy (DOE) that houses research and development results from projects funded by the DOE. The information is generally an article, technical document, conference paper or dissertation. This is one of those publications.
A 3D Display System for Lightning Detection and Ranging (LDAR) Data
Lightning detection is an essential part of safety and resource protection at Cape Canaveral. In order to meet the unique needs of launching space vehicles in the thunderstorm prone Florida environment, Cape Canaveral has the only operational three-dimensional (3D) lightning detection network in the world, the Lightning Detection and Ranging (LDAR) system. Although lightning activity is detected in three dimensions, the current LDAR display, developed 20 years ago, is two-dimensional. This thesis uses modern three-dimensional graphics, object-oriented software design, and innovative visualization techniques to develop a 3D visualization application for LDAR data. The individual data points in an LDAR data file are compiled into a tree-like hierarchy using Java data structures. This hierarchy groups the points into a series of nested 3D cubes of varying sizes. The resulting data structures are used to construct a Java 3D scene graph containing the lightning information, using a visualization technique called Nested Cubes. Nested Cubes divides the Cape Canaveral area into a series of non-overlapping cubes 10 km on a side. If any stepped leaders are detected within one of these areas, they become visible in the scene as a transparent, red 10 km cube. If the user zooms in close enough, a 10 km cube will disappear and be replaced first by 1 km cubes, then 100 m cubes, bounding the areas where lightning was detected inside the larger cube.
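The Nested Cubes hierarchy amounts to bucketing points by integer cube coordinates at successively finer edge lengths. The thesis implements this in Java with a Java 3D scene graph; the sketch below shows only the spatial grouping step in Python, with hypothetical detection coordinates.

```python
from collections import defaultdict

def bucket_points(points, cube_size):
    """Group 3-D points (x, y, z in metres) into axis-aligned cubes of the
    given edge length; the key is the cube's integer lattice coordinate."""
    cubes = defaultdict(list)
    for p in points:
        key = tuple(int(c // cube_size) for c in p)
        cubes[key].append(p)
    return cubes

# Hypothetical stepped-leader detections (metres from the LDAR origin).
points = [(1200.0, 3400.0, 5000.0), (1800.0, 3900.0, 5200.0),
          (15000.0, 2000.0, 7000.0)]

coarse = bucket_points(points, 10_000.0)        # 10 km cubes shown when zoomed out
fine = {k: bucket_points(v, 1_000.0)            # 1 km cubes revealed on zoom-in
        for k, v in coarse.items()}
```

In the display, each non-empty coarse key becomes a transparent red 10 km cube, and zooming in swaps that cube for the finer cubes computed from its contents.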
Development of a Wireless Model Incorporating Large-Scale Fading in a Rural, Urban and Suburban Environment
The goal of this research is to develop a more realistic estimate of received signal strength level as calculated by OPNET. This goal is accomplished by replacing the existing free-space pathloss model used by OPNET with the Hata and COST-231 pathloss models. The calculated received signal strength using the new models behaves similarly to the measured values, with a 0.245 dB difference for 880 MHz and a 1.365 dB difference for 1922 MHz between the pathloss slopes. There is an 11.3 dBm difference between the initial starting signal strength from the calculated values and the measured values. An important aspect of a wireless communication system is the planning process. The planning phase of a wireless communication system will determine the number of necessary transmitting antennas, the frequency to be used for communications, and ultimately the cost of the entire project. Because of the possible expense of these factors it is important that the planning stage of any wireless communications project produce an accurate calculation of the coverage area.
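The two substituted models are standard published empirical formulas. The sketch below implements the urban Okumura-Hata form (valid roughly 150-1500 MHz, covering the 880 MHz case) and its COST-231 extension (1500-2000 MHz, covering 1922 MHz); the default antenna heights are illustrative, not the thesis's measurement configuration.

```python
import math

def hata_urban(f_mhz, d_km, h_b=30.0, h_m=1.5):
    """Okumura-Hata median path loss (dB), urban, small/medium city.
    f in MHz, d in km, base height h_b and mobile height h_m in metres."""
    a_hm = (1.1 * math.log10(f_mhz) - 0.7) * h_m - (1.56 * math.log10(f_mhz) - 0.8)
    return (69.55 + 26.16 * math.log10(f_mhz) - 13.82 * math.log10(h_b) - a_hm
            + (44.9 - 6.55 * math.log10(h_b)) * math.log10(d_km))

def cost231_urban(f_mhz, d_km, h_b=30.0, h_m=1.5, c=0.0):
    """COST-231 (Hata extension) median path loss (dB), 1500-2000 MHz.
    c = 0 dB for medium cities/suburbs, 3 dB for metropolitan centres."""
    a_hm = (1.1 * math.log10(f_mhz) - 0.7) * h_m - (1.56 * math.log10(f_mhz) - 0.8)
    return (46.3 + 33.9 * math.log10(f_mhz) - 13.82 * math.log10(h_b) - a_hm
            + (44.9 - 6.55 * math.log10(h_b)) * math.log10(d_km) + c)
```

The pathloss slope the abstract compares is the distance coefficient, 44.9 - 6.55*log10(h_b) dB per decade of distance, which both models share.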
UFGS 01 35 26
Unified Facilities Guide Specifications (UFGS) are a joint effort of the U.S. Army Corps of Engineers (USACE), the Naval Facilities Engineering Command (NAVFAC), the Air Force Civil Engineer Support Agency (HQ AFCESA), the Air Force Center for Engineering and the Environment (HQ AFCEE) and the National Aeronautics and Space Administration (NASA). UFGS are for use in specifying construction for the military services. This is one of those documents.
Development of a Robust Optical Image Registration Algorithm for Negating Speckle Noise Effects in Coherent Images Generated by a Laser Imaging System
The Air Force Research Laboratory (AFRL) Sensors Directorate has constructed and tested a coherent LIght Detection And Ranging (LIDAR) imaging system called Laservision. Registration of individual images remains a significant problem in the generation of useful images collected using coherent imaging systems. Coherent images typically contain significant speckle noise created by the coherency of the laser. Each image collected by the system must be properly registered to allow for averaging the images to produce a single image with adequate resolution to allow detection and identification algorithms to operate accurately or for system operators to perform target detection and identification within a scene. An investigation of the performance of a new image registration algorithm designed using laser speckle noise statistics is conducted on data collected from the Laservision system. This thesis documents the design and performance of the proposed technique compared to that of a standard cross-correlation algorithm. Based on using only speckle noise statistics, the simulated data test results indicate that there is a small range of low average signal-to-noise ratios (SNR) where there is the potential to improve the shift estimation error by 0.1 to 0.16 pixel.
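The baseline the thesis compares against, standard cross-correlation registration, can be sketched compactly with an FFT: the peak of the circular cross-correlation surface gives the integer shift between two frames. The synthetic imagery and additive noise below merely stand in for real speckled Laservision frames, and the speckle-statistics algorithm itself is not reproduced here.

```python
import numpy as np

def xcorr_shift(ref, img):
    """Estimate the integer (row, col) shift of `img` relative to `ref` from
    the peak of the FFT-based circular cross-correlation."""
    c = np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(ref))).real
    peak = np.unravel_index(np.argmax(c), c.shape)
    # Map wrapped peak indices into signed shifts.
    return tuple(int(p) if p <= s // 2 else int(p) - s for p, s in zip(peak, c.shape))

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
img = np.roll(ref, shift=(3, -5), axis=(0, 1))   # apply a known shift
img = img + 0.05 * rng.random(img.shape)         # mild additive noise stand-in
est = xcorr_shift(ref, img)
```

Once each frame's shift is estimated this way, the registered frames can be averaged to beat down speckle, which is the system-level goal described above.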
An Approach to Large Scale Radar-Based Modeling and Simulation
This research presents a method of aggregating, or reducing the resolution of, a commonly available DoD simulation. It addresses the differences between varying levels of resolution and scope used in the Department of Defense's hierarchy of models pyramid. A data representation that aggregates engagement-level simulation data to use at a lower resolution level, the mission-level, is presented and analyzed. Two formats of implementing this data representation are developed and compared: the rigid cylinder format and the expanding tables format. The rigid cylinder format provides an intuitive way to visualize the data representation and is used to develop the theory. The expanding tables format expands upon the capabilities of the rigid cylinder format and reduces the simulation time. Tests are run to show the effects of each format for various combinations of engagement-level simulation inputs. A final set of tests highlights the loss in accuracy incurred from reducing the number of samples used by the mission-level simulation.
An Efficient and Effective Implementation of the Trust System for Power Grid Compartmentalization
The goal of this research is to show in a simulated environment that the security of a network can be strengthened, first by fielding the trust system and second by dividing the network into smaller clusters, called "domains", in order to isolate any anomalies or intrusions detected. To show this, a mathematical model of the problem is built and translated into a software tool that receives real-life network data as input. The program uses representative real-world power grid data and outputs a network configuration that applies the concepts described above: network compartmentalization and strategic placement of trust nodes. This new network configuration ensures safe day-to-day operations by subdividing the network into domains, minimizing the effects of an attack or equipment malfunction. Each domain is protected by one or more trust nodes without violating timing constraints.
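One simple way to realize the compartmentalization idea, not necessarily the thesis's algorithm, is a greedy breadth-first partition of the grid graph into connected domains of bounded size, with a trust node placed per domain. The six-bus adjacency below is hypothetical.

```python
from collections import deque

def partition_domains(adj, max_size):
    """Greedily partition a graph (node -> neighbor list) into connected
    'domains' of at most max_size nodes via BFS growth from a seed. The
    seed of each domain is a candidate trust-node placement."""
    unassigned = set(adj)
    domains = []
    while unassigned:
        seed = min(unassigned)                  # deterministic seed choice
        dom, q = [], deque([seed])
        while q and len(dom) < max_size:
            n = q.popleft()
            if n in unassigned:
                unassigned.discard(n)
                dom.append(n)
                q.extend(v for v in adj[n] if v in unassigned)
        domains.append(dom)
    return domains

# Hypothetical 6-bus grid adjacency.
grid = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3, 5], 5: [4]}
domains = partition_domains(grid, max_size=3)
```

A real placement tool would additionally check each domain against the SCADA timing constraints the abstract mentions before accepting a partition.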
High Frequency Direction Finding Using Structurally Integrated Antennas on a Large Airborne Platform
Estimating the angle of arrival (AOA) of a high frequency (HF) signal is challenging, especially if the antenna array is installed on a platform with dimensions on the order of one wavelength. Accurate AOA estimates are necessary for search and rescue operations and geolocating RF emitters of interest. This research examines the performance of a direction finding (DF) system using structurally integrated (SI) antennas installed on an airborne platform, which allows the aircraft structure to become the receiving element. Two simulated DF systems are analyzed at 4 and 11 MHz. The relationship between the number of SI antennas used and the AOA accuracy is examined by simulating systems using 4, 8, and 16 antennas. Simulations are also performed using the SI array to synthesize the pattern of a 3-loop cube, or vector, antenna. The maximum likelihood algorithm is used to produce AOA estimates. An array of SI antennas, with a dedicated receiver channel for each antenna, produces more accurate AOA estimates at 11 MHz than at 4 MHz. The accuracy improves when more antennas are included, regardless of frequency. Synthesizing a pattern to perform AOA estimation is an unnecessary step resulting in a suboptimal array for HFDF purposes.
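For a single source in white noise, the maximum likelihood AOA estimate reduces to a grid search over steering vectors. The sketch below uses a hypothetical linear layout of eight antennas along a 30 m fuselage at roughly 11 MHz; the thesis's SI elements have platform-dependent responses that a real simulation would take from an electromagnetic model rather than ideal phase-only steering.

```python
import numpy as np

def ml_aoa(x, positions, wavelength, grid):
    """Single-source maximum-likelihood AOA for a 1-D array: return the
    grid angle whose steering vector best matches the snapshot x."""
    k = 2.0 * np.pi / wavelength
    best, best_p = grid[0], -np.inf
    for th in grid:
        a = np.exp(1j * k * positions * np.sin(th))      # ideal steering vector
        p = np.abs(np.vdot(a, x)) ** 2 / positions.size  # matched-beam power
        if p > best_p:
            best, best_p = th, p
    return best

rng = np.random.default_rng(2)
wavelength = 27.0                        # ~11 MHz (c/f in metres)
positions = np.linspace(0.0, 30.0, 8)    # 8 antennas along a 30 m span (hypothetical)
true_th = np.deg2rad(20.0)
a_true = np.exp(1j * 2.0 * np.pi / wavelength * positions * np.sin(true_th))
x = a_true + 0.1 * (rng.normal(size=8) + 1j * rng.normal(size=8))
grid = np.deg2rad(np.arange(-90.0, 90.5, 0.5))
est = ml_aoa(x, positions, wavelength, grid)
```

With one receiver channel per antenna, as the abstract recommends, the full complex snapshot is available to this search; synthesizing a single composite pattern first throws that information away.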
Graphitized Carbon Foam With Phase Change Material
The transient heating and cooling responses of graphitized carbon foam infiltrated with phase change material (PCM) are studied, including thermal cycling, analytical modeling, contact resistance, and the temperature gradient through the infiltrated foam. Infiltrating carbon foam with PCM creates an effective thermal energy storage device (TESD). The high thermal conductivity of the graphite ligaments in the foam allows rapid transfer of heat throughout the PCM volume. The PCM, chosen for its high heat capacity and high heat of fusion, stores the heat for later removal, and it can absorb a large amount of heat with little rise in temperature during phase change. Three different types of carbon foam were selected for this study, and a fully refined paraffin wax was chosen as the PCM. Experimental samples of foam and PCM were heated on a temperature-controlled heater block from room temperature through phase change and to steady state. Heat was then removed using a liquid-cooled cooling block. A data acquisition unit recorded temperatures throughout the experimental sample, the heater, and the cooler every four seconds. The heating and cooling responses were modeled using an exponential function. The results show a decrease in the temperature rate of change during melting and solidification of the PCM. Multiple cycles of heating and cooling the sample produced consistent responses.
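The exponential response model mentioned above can be sketched as a lumped-capacitance first-order step response. The time constant, start and final temperatures, and sample count below are illustrative assumptions, not values from the study; only the 4-second sample interval comes from the abstract.

```python
import math

def first_order_response(t, T_start, T_final, tau):
    """Lumped-capacitance exponential response: T(t) = T_f + (T_0 - T_f) * exp(-t/tau)."""
    return T_final + (T_start - T_final) * math.exp(-t / tau)

# Illustrative heating from 25 C toward a 90 C heater block with an assumed
# 300 s time constant, sampled every 4 s as in the experiment.
samples = [first_order_response(4 * i, 25.0, 90.0, 300.0) for i in range(200)]
print(round(samples[0], 1), round(samples[-1], 1))
```

During phase change the measured curve departs from this single-exponential form, which is exactly the slowdown in temperature rate of change the study reports.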
A New Flexible Global Positioning System (GPS) Constellation Sustainment Strategy
The Global Positioning System (GPS) is now a global utility. The United States Air Force is the steward responsible for sustaining and modernizing the constellation. The current launch-to-sustain strategy implemented by the Air Force is not flexible, does not effectively support GPS modernization, and does not lend itself to a future responsive-launch paradigm.
Evaluation of a Method for Kinematic GPS Carrier-Phase Ambiguity Resolution Using a Network of Reference Receivers
New applications for GPS have driven a demand for increased positioning accuracy. The emerging GPS technology particularly affects the test community: to achieve the desired accuracy, the test equipment and method must provide a solution an order of magnitude more precise than the equipment under test. Carrier-phase differential GPS methods using a network of reference receivers can provide the centimeter-level accuracy required over a large geographical area. This thesis evaluates the performance of a 5-receiver network over a 50 km x 120 km area of New Mexico, using a GPS network algorithm called Net Adjust. The percentage of time a fixed integer solution was available for a kinematic baseline was investigated for three types of measurements. Results showed that the virtual reference receiver method using Net Adjust-corrected measurements outperformed both the raw and the Net Adjust-corrected file results. However, these results were obtained only for the shortest-baseline receivers. The receivers with longer baselines did not experience the same degree of success, but they did lead to several important insights. Most importantly, the accuracy of the reference receiver coordinates is critical to the performance of a reference receiver network. Further testing must be accomplished before a full implementation is recommended.
Modification of Position and Attitude Determination of a Test Article Through Photogrammetry to Account for Structural Deformation
The Arnold Engineering Development Center (AEDC) at Arnold AFB, TN currently has a computer program which, through a process known as photogrammetry, combines multiple 2D images of a wind tunnel test article, affixed with numerous registration markers, with the known 3D coordinates of those markers. It can then accurately determine the unknown position and attitude of the test article relative to the wind tunnel. The current algorithm has a problem in that it assumes the test article is a rigid body, when, in fact, the test article experiences deformation under aerodynamic loads. Due to this deformation, the 3D coordinates of the markers are not precisely known. This research looks at modifying the current program to account for this deformation and to improve the accuracy of the position and attitude determination of the test article. The current program uses the Levenberg-Marquardt (L-M) method of multi-parameter optimization to solve for the unknown parameters of position and attitude. In this work, deformation is modeled in two modes, simple parabolic bending and linear twisting, and the L-M method is used to solve for these additional parameters. This work also determines the minimum number of targets and cameras required to obtain the maximum accuracy, varying the model targets from about 20 to 200 and considering 1, 2, 4, 6, and 8 cameras. The results show a great improvement in accuracy over the original program, with optimal accuracy obtained with approximately 50 targets and 2 cameras. Adding more than this produces an extremely small improvement in accuracy, with no real added benefit. It is clear that by adding simple bending and twisting parameters to the list of unknowns in the L-M solver, a much greater accuracy can be achieved in the determination of the position and attitude.
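The core idea of augmenting the unknowns with deformation parameters can be sketched in miniature (this is illustrative, not AEDC's program): add a parabolic bending coefficient to a rigid offset and let a Levenberg-Marquardt-style least-squares solver (here SciPy's `least_squares` with `method="lm"`) recover both from observed marker positions. The marker layout, true parameter values, and noise level are made-up assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

# Nominal marker x-positions along the model, in meters (illustrative).
x = np.linspace(0.0, 1.0, 50)

def deformed_z(params, x):
    """Rigid vertical offset plus simple parabolic bending: z = z0 + b * x**2."""
    z0, b = params
    return z0 + b * x**2

# Synthetic "observed" markers: true offset 0.01 m, bending coefficient
# 0.05 per meter, plus 0.1 mm of measurement noise.
rng = np.random.default_rng(1)
z_obs = deformed_z([0.01, 0.05], x) + 1e-4 * rng.standard_normal(x.size)

def residuals(params):
    return deformed_z(params, x) - z_obs

sol = least_squares(residuals, x0=[0.0, 0.0], method="lm")
print(sol.x)
```

Treating bending as just another unknown in the residual vector is why the modified solver can absorb the deformation instead of corrupting the position and attitude estimates.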
Simulation of a Diode Pumped Alkali Laser
This paper develops a three-level model for a continuous-wave diode pumped alkali laser by creating rate equations based on a three-level system. Differential equations for intra-gain pump attenuation and intra-gain laser growth are developed in the fashion of Rigrod. Using Mathematica 7.0, these differential equations are solved numerically and a diode pumped alkali laser system is simulated. The results of the simulation are compared to previous experimental results and to previous computational results for similar systems. The absorption profile of the three-level numerical model shows excellent agreement with previous absorption models. The lineshapes of the three-level numerical model are found to be nearly identical to those of previous developments, apart from those models' simplifying assumptions. The three-level numerical model produces results closer to experiment than previous systems and captures effects not previously modeled, such as the effect of lasing on pump attenuation.
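As an illustration of the kind of three-level rate-equation system described above (with made-up, normalized rate constants, not the paper's parameters or its Rigrod-style intensity equations), the level populations can be integrated numerically:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative three-level rate equations: pumping 1 -> 3, fast collisional
# relaxation 3 -> 2, and decay 2 -> 1. All rates are made-up, normalized values.
PUMP, RELAX, DECAY = 1.0, 50.0, 5.0

def rates(t, n):
    n1, n2, n3 = n
    return [
        -PUMP * n1 + DECAY * n2,   # ground level
        RELAX * n3 - DECAY * n2,   # upper laser level
        PUMP * n1 - RELAX * n3,    # pump level
    ]

sol = solve_ivp(rates, (0.0, 10.0), [1.0, 0.0, 0.0])
n1, n2, n3 = sol.y[:, -1]
print(round(n1, 3), round(n2, 3), round(n3, 3))
```

At steady state the fast 3 -> 2 relaxation keeps the pump level nearly empty, the qualitative behavior that makes alkali vapors attractive as three-level laser media.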
Unearthing Hawaii's Energy Goldmine
This research paper assesses the feasibility of installing a photovoltaic system at Hickam Air Force Base, Hawaii. The study begins with an analysis of the rooftop photovoltaic system installed in 2005 at Pearl Harbor Naval Station, Hawaii, which borders Hickam Air Force Base. The analysis identifies the feasibility criteria that Pearl Harbor's Energy Manager considered during project development, and reviews the performance of the array since its installation. Using the criteria and performance data from the Pearl Harbor project, the study then assesses the feasibility of implementing a similar system at Hickam. Hawaii's high electricity prices, sunny climate, and tax incentives for corporate investment in solar power combine to create one of the most favorable solar energy markets in the country.