Evolving Compact Decision Rule Sets
With the increased proliferation of computing equipment, there has been a corresponding explosion in the number and size of databases. Although a great deal of time and effort is spent building and maintaining these databases, it is nonetheless rare that this valuable resource is exploited to its fullest. The principal reason for this paradox is that many organizations lack the insight and/or expertise to effectively translate this information into usable knowledge. While data mining technology holds the promise of automatically extracting useful patterns (such as decision rules) from data, this potential has yet to be realized. One of the major technical impediments is that the current generation of data mining tools produces decision rule sets that are very accurate, but extremely complex and difficult to interpret. As a result, there is a clear need for methods that yield decision rule sets that are both accurate and compact. The development of the Genetic Rule and Classifier Construction Environment (GRaCCE) is proposed as an alternative to existing decision rule induction (DRI) algorithms. GRaCCE is a multi-phase algorithm which harnesses the power of evolutionary search to mine classification rules from data. These rules are based on piece-wise linear estimates of the Bayes decision boundary within a winnowed subset of the data. This work has been selected by scholars as being culturally important, and is part of the knowledge base of civilization as we know it. This work was reproduced from the original artifact, and remains as true to the original work as possible. Therefore, you will see the original copyright references, library stamps (as most of these works have been housed in our most important libraries around the world), and other notations in the work. This work is in the public domain in the United States of America, and possibly other nations.
Within the United States, you may freely copy and distribute this work, as no entity (individual or corporate) has a copyright on the body of the work. As a reproduction of a historical artifact, this work may contain missing or blurred pages, poor pictures, errant marks, etc. Scholars believe, and we concur, that this work is important enough to be preserved, reproduced, and made generally available to the public. We appreciate your support of the preservation process, and thank you for being an important part of keeping this knowledge alive and relevant.
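The evolutionary-search idea behind the abstract above can be sketched in miniature: a toy genetic search over a single linear (halfspace) rule on synthetic two-class data. This illustrates evolutionary rule search in general, not GRaCCE's actual multi-phase algorithm; the data, fitness function, population size, and mutation scheme are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(4)
# Two synthetic classes; a candidate rule is a halfspace test  w.x + b > 0.
X0 = rng.normal(loc=[-1, -1], size=(100, 2))
X1 = rng.normal(loc=[+1, +1], size=(100, 2))
X = np.vstack([X0, X1])
y = np.r_[np.zeros(100), np.ones(100)]

def accuracy(chrom):
    """Fitness: fraction of points the halfspace rule classifies correctly."""
    w, b = chrom[:2], chrom[2]
    return np.mean((X @ w + b > 0) == y)

# Tiny evolutionary loop: keep the fittest half, refill with mutated copies.
pop = rng.normal(size=(40, 3))            # each chromosome is (w1, w2, b)
for _ in range(60):
    fit = np.array([accuracy(c) for c in pop])
    elite = pop[np.argsort(fit)[-20:]]
    pop = np.vstack([elite, elite + 0.3 * rng.normal(size=elite.shape)])

best = pop[np.argmax([accuracy(c) for c in pop])]
```

A full rule-induction system would evolve a *set* of such piece-wise linear tests rather than a single one, but the select-and-mutate loop is the same engine.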
Artificial Media
A groundbreaking exploration of the evolving relationship between the fields of artificial intelligence and creativity studies, Artificial Media charts a transformative path toward hybrid methodologies involving computing and human-centric approaches. Scholars and practitioners from leading research centers in South America, Asia and Europe delve into theoretical and philosophical frameworks, practical deployments and data-based critical analyses of artificial-media initiatives that reconfigure authorship and collaboration. Co-creation, collective memory, and situated-knowledge practices are featured in multiple hands-on examples of technological design, music, visual arts, journalism and education projects that address the ethical and social implications of generative techniques. Through an interdisciplinary lens, this collection projects a nuanced panorama of both the remarkable results and the complex challenges of emerging artificial-media methods, offering practical insights for anyone seeking to engage with the future of creativity in the age of autonomous machines.
Artificial Intelligence Security and Safety
This book proposes the architecture of artificial intelligence (AI) security and safety, discusses the topics about AI for security, AI security and AI safety, and makes an in-depth study on the ethical code of AI security and safety. Meanwhile, this book makes a detailed analysis of "artificial intelligence actant" (AIA) concept and its possible security problems, proposes the solutions for the AIA safely hoop, and provides the assessment and detection methods for AIA. Finally, this book discusses the AI cutting-edge technologies, as well as the future development trend of AI security and safety. This book is suitable for researchers, practitioners, regulators and enthusiasts in the field of AI, cyberspace security, etc.
Tactile Robotics
Tactile Robotics structures and unifies the information processing of tactile data--not only for extracting object property but also for controller computation. This book systematically introduces tactile sensors, perception, and control, providing readers with no prior background with a better sense and knowledge of robotics and machine learning and helping users understand the concept of tactile robots and their various applications for use in real-world scenarios.
Pattern Recognition and Image Analysis
The two volume set LNCS 15937 + 15938 constitutes the proceedings of the 12th Iberian Conference on Pattern Recognition and Image Analysis, IbPRIA 2025, which took place in Coimbra, Portugal, during June 30-July 3, 2025. The 67 full papers included in the proceedings were carefully reviewed and selected from 115 submissions. They were organized in topical sections as follows: Part I: Computer vision; faces, body, fingerprints and biometrics; machine and deep learning; explainability, bias and fairness in DL; Part II: Natural language processing; biomedical applications; and other applications.
Computer Vision and Image Processing
The six-volume proceedings set CCIS 2473–2478 constitutes the refereed proceedings of the 9th International Conference on Computer Vision and Image Processing, CVIP 2024, held in Chennai, India, during December 19-21, 2024. The 178 full papers presented were carefully reviewed and selected from 647 submissions. The papers focus on various important and emerging topics in image processing, computer vision applications, deep learning, and machine learning techniques in the domain.
Intelligent and Fuzzy Systems
Artificial Intelligence in Human-Centric, Resilient & Sustainable Industries. This book focuses on applying artificial intelligence tools to our business and social life under emerging conditions. Human-centric, resilient, and sustainable industries are built on ideals like human-centricity, ecological advantages, and social benefits. The mission of human-centric artificial intelligence is to improve people's lives by offering solutions that boost productivity, accessibility to resources, security, well-being, and general quality of life. The latest intelligent methods and techniques for human-centric, resilient, and sustainable industries are introduced through theory and applications. This book includes chapters by world-renowned experts on machine learning, medical image processing, process intelligence, process mining, and other topics. The intended readers are intelligent systems researchers, lecturers, and M.Sc. and Ph.D. students seeking to develop approaches that give human needs, values, and viewpoints top priority through artificially intelligent systems.
Intelligent Business Analytics
This book explores the transformative role of soft computing methods in enhancing business analytics, providing a comprehensive look into how these advanced methods can be applied to complex business data for meaningful insights. Through the integration of neural networks, fuzzy logic, genetic algorithms, artificial intelligence, machine learning, deep learning, and other innovative approaches, Intelligent Business Analytics: Harnessing the Power of Soft Computing for Data-Driven Insights presents a roadmap for leveraging computational intelligence in diverse areas of business decision-making. Readers will venture from predictive analytics and customer segmentation to real-time decision support systems and many other applications. Soft computing's flexibility and applicability in handling uncertainty, ambiguity, and dynamic data environments shine throughout the book. Each chapter is designed to provide a base of theory alongside an applied example, so the book is appropriate for students, researchers, and professionals in the field. This book also discusses where the markets are heading and new applications in store for intelligent analytics to create a competitive advantage that also supports sustainable growth. Ultimately, this book is for those who want to learn more about using data-driven approaches and those who are ready to face the changes of the fast-evolving digital world.
The Definitive Game Narrative Guide
The Definitive Game Narrative Guide is the ultimate start and end point for storytelling in video games. Whether you're an aspiring writer or a seasoned game developer, this book offers an in-depth, comprehensive look at the entire narrative process. Written by two industry veterans with experience across some of the biggest AAA franchises, this guide covers the basics to the advanced, including the "why" for each topic as much as the "how." This book explores the nuances of world building, character development, interactive storytelling, and the technical challenges unique to game narrative. With real-world examples, practical insights, and expert advice, it provides a look into how game stories come together, from the smallest indie project to massive AAA teams. The Definitive Game Narrative Guide is more than a how-to book, as it also serves as an industry insider's perspective on what makes game storytelling truly great. It discusses techniques for navigating the creative workplace, working as a creative, and most importantly, collaborating with other creatives and disciplines, such as art and design. An essential tool for anyone looking to level up their understanding of game narrative, this book will help you bring unforgettable stories to life in an interactive form.
Leveraging Artificial Intelligence in Cloud, Edge, Fog and Mobile Computing
In an era defined by rapid technological advancements, the convergence of artificial intelligence (AI) with cloud, edge, fog, and mobile computing is transforming the landscape of computing and data processing. These emerging technologies are not only enhancing computational capabilities but also paving the way for innovative applications across diverse industries, from healthcare and finance to transportation and entertainment. Leveraging Artificial Intelligence in Cloud, Edge, Fog and Mobile Computing explores the symbiotic relationship between AI and these computing paradigms. As AI continues to evolve, its integration with cloud, edge, fog, and mobile computing platforms is unlocking new potential, driving efficiencies, and enabling real-time, intelligent decision-making processes. The book begins with an in-depth examination of the foundational principles of cloud, edge, fog, and mobile computing, followed by a detailed analysis of how AI technologies are being embedded within these frameworks. It then delves into the unique advantages and challenges of each paradigm, highlighting their roles in facilitating seamless, decentralized data processing and enhancing user experiences. The book is structured to provide a comprehensive understanding of the current state and future directions of AI in these computing environments, and is intended to serve as a resource and inspiration for those seeking to explore the vast potential of AI in these realms. Its goal is to spark new ideas, foster innovation, and contribute to the ongoing dialogue on the future of intelligent computing.
Digital Twins
This book centres on the topic of digital twins for superior healthcare decision support, enabled by access to large volumes of multi-dimensional data such as patients' electronic medical records, medical scans, and related clinical data.
Designing Virtual Worlds Volume I
Designing Virtual Worlds stands as the most comprehensive examination of virtual-world design ever written. This seminal work is a tour de force, remarkable for its intellectual breadth, encompassing the literary, economic, sociological, psychological, physical, technological, and ethical foundations of virtual worlds.
Information Technology and Systems
This book comprises papers written in English and accepted for presentation and discussion at the 2025 International Conference on Information Technology & Systems (ICITS'25), held at Instituto Politécnico Nacional (IPN), Mexico City, Mexico, from January 22 to 24, 2025. ICITS'25 serves as a global forum for researchers and practitioners to present and discuss recent findings, innovations, current trends, professional experiences, and challenges in modern information technology and systems research, along with their technological developments and applications. The main topics covered include: Information and Knowledge Management; Organizational Models and Information Systems; Software and Systems Modeling; Software Systems, Architectures, Applications, and Tools; Multimedia Systems and Applications; Computer Networks, Mobility, and Pervasive Systems; Intelligent and Decision Support Systems; Big Data Analytics and Applications; Human-Computer Interaction; Ethics, Computers, and Security; Health Informatics; Information Technologies in Education; Media, Applied Technology, and Communication. The primary audience for this book includes postgraduate students and researchers in the field of Information Systems and Technologies. The secondary audience consists of undergraduate students and professionals working in related domains.
Futureshock
This book provides an accessible introduction to leading-edge topics today, from AI ethics and cybersecurity through to augmented realities, virtual interfaces, and much more. This collection of writings by experts in their respective fields invites the reader to explore new worlds that race towards us.
Digital Strategy and Governance in Transformative Technologies
Digital Strategy and Governance in Transformative Technologies offers a comprehensive exploration of how emerging technologies are reshaping business operations, governance structures, and societal interactions.
The Video Game Writer's Guide to Surviving an Industry That Hates You
In an industry that can seem stacked against a game writer's professional survival, knowing how to navigate the choppy waters of building schedules, interfacing with other team members, getting actionable feedback, and putting yourself in a position to do your best work without killing yourself is vital information.
Kernelized Locality Sensitive Hashing for Fast Image Landmark Association
As the concept of war has evolved, navigation in urban environments where GPS may be degraded has become increasingly important. Two existing solutions are vision-aided navigation and vision-based Simultaneous Localization and Mapping (SLAM). The problem, however, is that vision-based navigation techniques can require excessive amounts of memory and increased computational complexity, resulting in a decrease in speed. This research focuses on techniques to improve such issues by speeding up and optimizing the data association process in vision-based SLAM. Specifically, this work studies the current methods that algorithms use to associate a current robot pose with one previously seen, and introduces another method to the image mapping arena for comparison. The current method, kd-trees, is efficient in lower dimensions, but does not narrow the search space enough in higher-dimensional datasets. In this research, Kernelized Locality-Sensitive Hashing (KLSH) is implemented to conduct the aforementioned pose associations. Results on KLSH show that fewer image comparisons are required for location identification than with other methods. This work can then be extended into a vision-SLAM implementation to subsequently produce a map.
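The hashing idea can be sketched as follows: a minimal Python illustration of kernelized LSH that uses random Fourier features to approximate an RBF kernel map, then hashes with random hyperplanes in that kernel space. The feature dimensions, kernel choice, and bucket scheme are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_hasher(dim, n_features=256, n_bits=16, gamma=0.5):
    """Random Fourier features (approximate RBF kernel map) + hyperplane LSH."""
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(dim, n_features))
    b = rng.uniform(0, 2 * np.pi, n_features)
    H = rng.normal(size=(n_features, n_bits))   # random hyperplanes in kernel space

    def hash_codes(X):
        Z = np.sqrt(2.0 / n_features) * np.cos(X @ W + b)  # kernel feature map
        return (Z @ H > 0).astype(np.uint8)                # sign bits -> hash code
    return hash_codes

# Index image descriptors into buckets keyed by hash code; a query is compared
# only against its own bucket, which is what cuts the number of comparisons.
descriptors = rng.normal(size=(1000, 64))       # stand-in for image descriptors
hasher = make_hasher(64)
codes = hasher(descriptors)
buckets = {}
for i, c in enumerate(codes):
    buckets.setdefault(c.tobytes(), []).append(i)

query = descriptors[42:43]
candidates = buckets.get(hasher(query)[0].tobytes(), [])
```

Descriptors similar under the kernel tend to share sign bits and land in the same bucket, so the exhaustive comparison step runs over a small candidate list instead of the whole map.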
Feature Extraction Using Principal and Independent Component Analysis for Hyperspectral Imagery
Hyperspectral imagery (HSI) analysis is frequently employed by the Department of Defense to classify objects within an image as a form of target detection. In this research, a robust Two-Phase Filtering Independent Component Analysis (ICA) Target Detection Method is proposed and validated. This new method resolves two main challenges encountered when implementing target detection methods using ICA, a high-order statistics feature extraction (FE) method. The first challenge is the high computational demand imposed by the large volume of data associated with HSI during the FE process. To alleviate the effort required for ICA data processing, principal component analysis (PCA), a classical second-order statistics method, is used for data reduction. Furthermore, the performance of PCA under classification is compared against recently developed supervised FE techniques. The second challenge arises during the feature selection (FS) phase, after the statistically independent components have been extracted. Current ICA target FS techniques have been shown to be either unreliable or to require significant user intervention. A reliable FS process is essential to automating the target detection process. The proposed method uses ICA to extract independent features from the retained principal components, followed by an unsupervised target FS with a two-phase filtering process using kurtosis and mean silhouette values. This method achieved promising results when tested against a wide range of benchmark images.
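The front end of this pipeline can be sketched with plain NumPy: PCA via SVD reduces the band dimension, and an excess-kurtosis score ranks the resulting components. For brevity the ICA step between the two is omitted; in the thesis the kurtosis filter is applied to the ICA outputs, not raw principal components, and the synthetic data, component count, and scoring here are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in for a hyperspectral cube flattened to (pixels, bands).
X = rng.normal(size=(500, 50))
X[:10, :] += 4.0            # a few "target" pixels with anomalous spectra

# PCA via SVD: project onto the top-k principal components to cut the
# data volume before the (costlier) ICA extraction would run.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 10
X_red = Xc @ Vt[:k].T       # (pixels, k) retained principal components

def excess_kurtosis(z):
    """Fourth standardized moment minus 3; heavy tails score high."""
    z = (z - z.mean()) / z.std()
    return (z ** 4).mean() - 3.0

# Selection sketch: rank components by kurtosis. Target-like components tend
# to be heavy-tailed (a few bright pixels against a broad background).
scores = np.array([excess_kurtosis(X_red[:, j]) for j in range(k)])
ranked = np.argsort(scores)[::-1]
```

The second filtering phase in the thesis refines this ranking with mean silhouette values, which reward components whose high-scoring pixels form a tight, well-separated cluster.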
Multi-Objective Mission Route Planning Using Particle Swarm Optimization
The Mission Routing Problem (MRP) is the selection of a vehicle path starting at a point, passing through enemy terrain defended by radar sites to reach the target(s), and returning to a safe destination (usually the starting point). The MRP is a three-dimensional, multi-objective path search with constraints such as fuel expenditure, time limits, multiple targets, and radar sites with different levels of risk. It can severely task all the resources (people, hardware, software) of the system trying to compute the possible routes. The nature of the problem can cause operational planning systems to take longer to generate a solution than the time available. Since time is critical in the MRP, it is important that a solution be reached within a relatively short time: a solution is not worth generating if it takes days to calculate, since the information may become invalid during that time. Particle Swarm Optimization (PSO) is an Evolutionary Algorithm (EA) technique that tries to find optimal solutions to complex problems using particles that interact with each other. Both PSO and the Ant System (AS) have been shown to provide good solutions to the Traveling Salesman Problem (TSP). PSO_AS, a synthesis of PSO and AS, is a new approach for solving the MRP, and it produces good solutions. This thesis presents this new algorithm, which finds solutions by exploring the MRP search space stochastically.
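The PSO half of the hybrid can be shown in a few lines: each particle keeps a velocity that is pulled toward its own best position and the swarm's best position found so far. The toy quadratic cost below is a stand-in for a real route cost (fuel plus radar exposure), and the swarm size, inertia, and attraction coefficients are illustrative defaults, not the thesis's tuning.

```python
import numpy as np

rng = np.random.default_rng(2)

def pso(cost, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer minimizing `cost` over R^dim."""
    x = rng.uniform(-5, 5, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()                                   # per-particle best
    pbest_f = np.apply_along_axis(cost, 1, x)
    g = pbest[pbest_f.argmin()].copy()                 # swarm (global) best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        f = np.apply_along_axis(cost, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Toy stand-in for a route cost; minimum is at the origin.
best, best_f = pso(lambda p: np.sum(p ** 2), dim=3)
```

In PSO_AS the continuous update above would be married to the Ant System's pheromone-guided discrete choices over waypoints, which is what adapts the swarm idea to a path-search problem like the MRP.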
They Tried to Warn Us
They warned us. Not in slogans. Not in press releases. Not in Twitter threads or TED Talks. They warned us in books, essays, interviews, and thought experiments. They warned us with metaphors and manifestos, parables and polemics. And for the most part, we ignored them.
This book is about those voices. They came from across centuries and continents: poets, scientists, philosophers, engineers, journalists, prophets. Some were household names. Others were dismissed, marginalized, or forgotten. What united them wasn't ideology or discipline, but vision. Each of them looked beyond the surface of their time - and saw the future rushing toward us. And then they tried to stop it.
The Premise
They Tried to Warn Us began as a podcast (which you can access at: https://open.spotify.com/show/3IlsQV7KXuaseN1d3n7Hix). A thought experiment. What if we could bring back the thinkers who saw what was coming - and ask them what they make of our world today?
Knowledge Base Support for Design and Synthesis of Multi-Agent Systems
agentTool is an AFIT-produced, AFOSR-sponsored multi-agent system (MAS) development tool intended for the production of MASs that meet military requirements. This research focuses on enabling MAS design and synthesis tools like agentTool to store, retrieve, and filter persistent, reusable, and reliable agent domain knowledge. This "enabling" is vital if such tools are expected to produce consistent, maintainable, and verifiable agent applications on short timetables. Enabling requires: 1) modeling the agent knowledge domain, 2) designing and employing a persistent knowledge base, and 3) bridging that domain model to the knowledge base with an extensible domain interchange grammar. The resulting interchange grammar, called Multi-Agent Markup Language (MAML), is presented and shown to be capable of representing MAS design knowledge in a concise and easily parsed form that is readily stored and retrieved in the knowledge base. The selected knowledge base, called the Agent Random-Access Meta-Structure (ARAMS), is shown to support MAML and to operate in a distributed environment that permits sharing of agent development knowledge between various tools and tool instances. Tests of MAML and ARAMS with agentTool are summarized, and related future work is suggested.
Using Upper Layer Weights to Efficiently Construct and Train Feedforward Neural Networks Executing Backpropagation
Feed-forward neural networks executing backpropagation are a common tool for regression and pattern recognition problems. These types of neural networks can adjust themselves to data without any prior knowledge of the input data. Feed-forward neural networks with a hidden layer can approximate any function with arbitrary accuracy. In this research, the upper-layer weights of the neural network structure are used to determine an effective middle-layer structure and to decide when to terminate training. By combining these two techniques with signal-to-noise-ratio feature selection, a process is created to construct an efficient neural network structure. The results of this research show that, for the data sets tested thus far, these methods yield efficient neural network structures in minimal training time. Data sets used include an XOR data set, Fisher's Iris problem, and a financial industry data set, among others.
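The core idea can be illustrated with a toy example: train a deliberately oversized hidden layer on XOR with plain backpropagation, then drop hidden units whose outgoing (upper-layer) weights stay small. The network sizes, learning rate, and pruning threshold below are invented for the illustration and are not the thesis's actual criteria.

```python
import numpy as np

rng = np.random.default_rng(3)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)        # XOR targets

H = 8                                            # oversized hidden layer
W1, W2 = rng.normal(size=(2, H)), rng.normal(size=(H, 1))
b1, b2 = np.zeros(H), np.zeros(1)
sig = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):                            # plain backpropagation
    h = sig(X @ W1 + b1)
    out = sig(h @ W2 + b2)
    d2 = (out - y) * out * (1 - out)             # output-layer delta
    d1 = (d2 @ W2.T) * h * (1 - h)               # hidden-layer delta
    W2 -= 0.5 * h.T @ d2;  b2 -= 0.5 * d2.sum(0)
    W1 -= 0.5 * X.T @ d1;  b1 -= 0.5 * d1.sum(0)

# Inspect the upper-layer weights: hidden units whose outgoing weight
# magnitude stays small contribute little and are candidates for removal.
mag = np.abs(W2).ravel()
keep = mag > 0.2 * mag.max()                     # illustrative threshold
W1, b1, W2 = W1[:, keep], b1[keep], W2[keep]
```

The same upper-layer magnitudes can also serve as a stopping signal: once they settle, further epochs reshuffle little, which is the flavor of the training-termination criterion the abstract describes.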
Taking the High Ground
Cloud computing offers tremendous opportunities for private industry, governments, and even individuals to access massive amounts of computational resources on demand at very low cost. Recent advancements in bandwidth availability, virtualization technologies, distributed programming paradigms, security services, and general public awareness have contributed to this new business model for employing information technology (IT) resources. IT managers face tough decisions as they attempt to balance the pros and cons of integrating commercial cloud computing into their existing IT architectures. On one hand, cloud computing provides on-demand scalability, reduces capital and operational expenses, decreases barriers to entry, and enables organizations to refocus on core competencies rather than on IT expertise.
Social Networking Website Users and Privacy Concerns
Social networking websites are the fastest growing entity on the Internet. Users of social networking websites post personal information and pictures on these websites. Privacy and social networking websites have been studied previously; however, since those studies were conducted, the rules for those websites have changed dramatically. A mixed-methods approach was used in this study to examine what privacy concerns users of social networking websites have, whether regarding the information on their accounts or the pictures they have posted. This study also considered whether there were common personality traits present in people with those concerns. A comparison of user preferences between MySpace and Facebook was also conducted. Quantitative data in the form of survey information was used in addition to qualitative data gathered from semi-structured interviews. This study found that Social Desirability Bias was correlated with a user being selective about what pictures were displayed on social networking website accounts. Few users expressed a preference for one social networking website over the other. Over half of the participants did express concern for their privacy on social networking website accounts, but no personality factors proved predictive of that concern.
Improved Multispectral Skin Detection and Its Application to Search Space Reduction for Dismount Detection Based on Histograms of Oriented Gradients
Due to the general shift from conventional warfare to terrorism and urban warfare by enemies of the United States in the late 20th Century, locating and tracking individuals of interest have become critically important. Dismount detection and tracking are vital to provide security and intelligence in both combat and homeland defense scenarios including base defense, combat search and rescue (CSAR), and border patrol. This thesis focuses on exploiting recent advances in skin detection research to reliably detect dismounts in a scene. To this end, a signal-plus-noise model is developed to map modeled skin spectra to the imaging response of an arbitrary sensor, enabling an in-depth exploration of multispectral features as they are encountered in the real world for improved skin detection. Knowledge of skin locations within an image is exploited to cue a robust dismount detection algorithm, significantly improving dismount detection performance and efficiency.
Multi-Objective Mission Route Planning Using Particle Swarm Optimization
The Mission Routing Problem (MRP) is the selection of a vehicle path starting at a point, passing through enemy terrain defended by radar sites to reach the target(s), and returning to a safe destination (usually the starting point). The MRP is a three-dimensional, multi-objective path search with constraints such as fuel expenditure, time limits, multiple targets, and radar sites with different levels of risk. It can severely tax all the resources (people, hardware, software) of the system trying to compute the possible routes. The nature of the problem can cause operational planning systems to take longer to generate a solution than the time available. Since time is critical in the MRP, it is important that a solution be reached within a relatively short time; a solution is not worth generating if it takes days to calculate, since the information may become invalid during that time. Particle Swarm Optimization (PSO) is an Evolutionary Algorithm (EA) technique that tries to find optimal solutions to complex problems using particles that interact with each other. Both PSO and the Ant System (AS) have been shown to provide good solutions to the Traveling Salesman Problem (TSP). This thesis presents PSO_AS, a new algorithm synthesizing PSO and AS, as a new approach to solving the MRP; it finds good solutions by exploring the MRP search space stochastically.
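To give a feel for the particle interaction the abstract describes, the following is a minimal, illustrative PSO sketch (not the thesis's PSO_AS algorithm): particles on a toy 2-D cost surface blend their current momentum with pulls toward their own best-seen position and the swarm's best-seen position. All parameter values here are conventional defaults, not values from the thesis.

```python
import random

def pso(cost, dim=2, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimize `cost` over R^dim with a basic particle swarm."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # each particle's best position
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]  # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + pull toward personal best + pull toward global best
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

# Minimize the sphere function; the optimum is at the origin.
best, best_cost = pso(lambda p: sum(x * x for x in p))
```

The MRP adds discrete route structure and multiple objectives on top of this basic scheme, which is where the Ant System hybridization comes in.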
Explicit Building Block Multiobjective Evolutionary Computation
This dissertation presents principles, techniques, and performance of evolutionary computation optimization methods. Evolutionary computation concepts examined are algorithm convergence, population diversity and sizing, genotype and phenotype partitioning, archiving, building block (BB) concepts, parallel evolutionary algorithm (EA) models, robustness, visualization of the evolutionary process, and performance in terms of effectiveness and efficiency. Additional contributions include the extension of explicit BB definitions to clarify the meanings of good single and multiobjective BBs, and a new visualization technique is developed for viewing genotype, phenotype, and the evolutionary process in finding Pareto front vectors. The culmination of this research is explicit BB state-of-the-art MOEA technology based on the MOEA design, BB classifier type assessment, solution evolution visualization, and insight into MOEA test metric validation and usage as applied to the following: test suite, deception, bioinformatics, unmanned vehicle flight pattern, and digital symbol set design MOPs.
Evaluating the Performance of Multiple Classifier System
This thesis develops a method based on the Receiver Operating Characteristic (ROC) curve, which graphs the trade-off between the conditional probabilities of a multiple classifier system (MCS) in which Boolean rules are used to combine individual decisions. The method requires performance data similar to the data available in the ROC curves for the entire system. A consequence of this result is that one can save time and money by effectively evaluating the performance of an MCS without performing experiments.
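The Boolean-rule idea can be illustrated with a small sketch (an illustration only, not the thesis's method): if two classifiers' errors are assumed conditionally independent, the AND and OR fusion rules yield fused true-positive and false-positive rates directly from the individual ROC operating points, with no new experiments. The operating-point numbers below are hypothetical.

```python
def and_fusion(tpr1, fpr1, tpr2, fpr2):
    """Declare 'positive' only when both classifiers do (independence assumed)."""
    return tpr1 * tpr2, fpr1 * fpr2

def or_fusion(tpr1, fpr1, tpr2, fpr2):
    """Declare 'positive' when either classifier does (independence assumed)."""
    tpr = 1 - (1 - tpr1) * (1 - tpr2)
    fpr = 1 - (1 - fpr1) * (1 - fpr2)
    return tpr, fpr

# Two hypothetical operating points read off individual ROC curves.
tpr_and, fpr_and = and_fusion(0.90, 0.10, 0.85, 0.05)
tpr_or, fpr_or = or_fusion(0.90, 0.10, 0.85, 0.05)
# AND trades detection rate for fewer false alarms;
# OR trades more false alarms for a higher detection rate.
```

Sweeping such fused points over the individual ROC curves traces the combined system's ROC curve, which is the kind of experiment-free evaluation the abstract refers to.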
The Generalizability of Private Sector Research on Software Project Management in Two USAF Organizations
Project managers typically set three success criteria for their projects: meet specifications, be on time, and be on budget. However, software projects frequently fail to meet these criteria. Software engineers, acquisition officers, and project managers have all studied this issue and made recommendations for achieving success, but most of this research in peer-reviewed journals has focused on the private sector. Researchers have also identified software acquisition as one of the major differences between private sector and public sector MIS. This indicates that the elements of a successful software project in the public sector may differ from those in the private sector. Private sector project success depends on many elements; three of them are user interaction with the project's development, critical success factors, and how the project manager prioritizes the traditional success criteria. High user interaction produces high customer satisfaction, even when the traditional success criteria are not completely met. Critical success factors are those factors a project manager must properly handle to avoid failure, and priorities influence which success criteria the project manager will most likely succeed in meeting.
The CURE
In The CURE, visionary entrepreneur Gwen Swan unveils a bold new paradigm at the intersection of blockchain, artificial intelligence, and multi-omics healthcare. Chronic and complex diseases (cancer, cardiovascular disorders, genetic and rare conditions, and neurodegenerative illnesses) have long evaded one-size-fits-all therapies. Rising costs, inequitable access, and reactive treatment models leave millions at risk. Swan's groundbreaking thesis is simple but revolutionary: leverage a purpose-built cryptocurrency, CURE, to fund and incentivize truly preventive, precision-tailored miRNA-based treatments, transforming how we pay for, develop, and deliver medicine. CURE is more than a digital token. It is the financial backbone of the CureLogic AI ecosystem, enabling patients, providers, insurers, and institutions to pre-purchase bespoke, microRNA-signature drug plans at a 50% discount, before disease even takes hold. Each token transaction triggers advanced AI diagnostics that analyze a patient's unique genomics, proteomics, blood chemistry, and environmental factors. Within this data framework, CureLogic's proprietary algorithms sift through 55 million miRNA variants and a 25 million-compound drug library to generate individualized treatment protocols down to precise molecular dosages. The result: therapies tailored to each person's biology, powered by transparent, auditable blockchain payments. As both a compelling manifesto and a practical playbook, The CURE charts a path toward a future where preventive precision medicine becomes the norm, not the exception. By uniting cutting-edge AI diagnostics, molecular therapeutics, and blockchain-based incentives, Gwen Swan lays the foundation for eliminating the world's deadliest diseases and for extending healthy, vibrant lives worldwide. Whether you are a clinician, biotech innovator, insurer, or patient advocate, The CURE offers the vision and the tools to participate in this healthcare revolution.
Paradata
To make sense of data and use it effectively, it is essential to know where it comes from and how it has been processed and used. This is the domain of paradata, an emerging interdisciplinary field with wide applications. As digital data rapidly accumulates in repositories worldwide, this comprehensive introductory book, the first of its kind, shows how to make that data accessible and reusable. In addition to covering basic concepts of paradata, the book supports practice with coverage of methods for generating, documenting, identifying and managing paradata, including formal metadata, narrative descriptions and qualitative and quantitative backtracking. The book also develops a unifying reference model to help readers contextualise the role of paradata within a wider system of knowledge, practices and processes, and provides a vision for the future of the field. This guide to general principles and practice is ideal for researchers, students and data managers. This title is also available as open access on Cambridge Core.
PRINCE2 Agile
Lead IT Projects with Agility and Precision. In today's fast-paced digital environment, IT project success requires more than speed: it demands structure, adaptability, and strategic control. PRINCE2 Agile brings together the governance strengths of PRINCE2 with the flexibility of Agile frameworks like Scrum, Kanban, and Lean. This concise yet comprehensive guide explores how to tailor PRINCE2 principles for Agile delivery, manage compliance in iterative workflows, and apply best practices across software, infrastructure, and DevOps. Ideal for academic study, certification preparation, and enterprise application, it offers the tools and insights needed to lead complex projects with confidence. Whether you're pursuing professional accreditation or managing real-world transformations, this book will help you deliver results faster, smarter, and with greater assurance.
AI For Beginners
AI for Beginners - Learn Artificial Intelligence the Easy Way. Discover the world of AI without coding, jargon, or confusion. This beginner-friendly guide is your shortcut to understanding artificial intelligence, even if you've never written a line of code. Whether you're curious about AI tools like ChatGPT, want to use automation in your business, or simply want to keep up with the future, this book makes AI simple and accessible.
What You'll Learn in AI for Beginners:
- What Is AI? - A plain-English breakdown of artificial intelligence, machine learning, and generative AI.
- How AI Works - Explore neural networks, large language models (LLMs), and how AI learns from data.
- Practical AI Tools - Discover beginner-friendly tools to boost productivity, creativity, and business.
- AI in Everyday Life - See how AI impacts jobs, education, health, and social media.
- Using ChatGPT & Gemini - Learn how to ask the right prompts and get useful results from popular AI tools.
- Ethical AI - Understand AI risks, bias, and how to use it responsibly.
- Future-Proof Your Skills - Stay ahead by mastering the basics of AI that everyone will need to know.
No Experience Needed. This book is designed specifically for non-technical readers, students, entrepreneurs, and anyone curious about AI. It's perfect for absolute beginners who want a fast, practical, and real-world introduction to AI.
Why This AI Guide Is Different. Unlike complex textbooks or hype-filled articles, AI for Beginners gives you simple language (not tech talk), hands-on examples and clear visuals, immediate real-world applications, and a roadmap to start using AI today.
Who Should Read AI for Beginners? If you're overwhelmed by headlines about AI but don't know where to start, this guide is for you. Whether you're a student, small business owner, freelancer, or lifelong learner, AI for Beginners breaks it all down with step-by-step guidance. You'll gain confidence using AI tools like ChatGPT, Google Gemini, Microsoft Copilot, and more, without any prior knowledge. No algorithms, no math: just real-world explanations anyone can follow.
Why Learn AI Now? Artificial intelligence is changing how we work, learn, and live. From smart assistants to content creation, AI is becoming essential. With AI for Beginners, you'll get ahead of the curve and avoid falling behind in a tech-driven world. Start your journey with the #1 beginner's guide to artificial intelligence, written in plain English and built for action. If you've been asking "Where do I start with AI?", AI For Beginners is your number 1 choice.
Azure for Developers - Third Edition
Advance your development career by mastering Microsoft Azure's latest tools and technologies to enhance existing applications and build powerful cloud-native solutions.
Key Features:
- Build and deploy Azure apps with web, serverless, and container-based architectures
- Create end-to-end cloud solutions on Azure by integrating AI services, monitoring tools, and DevOps
- Upskill confidently with practical insights and real-world development practices
- Purchase of the print or Kindle book includes a free PDF eBook
Book Description: Supercharge your development career by mastering Azure's evolving GenAI, container, and serverless capabilities to build scalable, secure applications with confidence. This third edition of Azure for Developers transforms complex cloud concepts into practical skills, guiding you through the design, deployment, and management of cloud-native solutions while eliminating infrastructure headaches. Fully updated with Azure's latest features, this hands-on guide helps you automate DevOps pipelines with GitHub Actions, deploy microservices using containers, and integrate generative AI via Azure OpenAI to modernize your development workflows. You will learn how to set up your environment, streamline app deployment, and implement robust service integrations using real-world best practices. The final section is a game-changer for developers who want to stay ahead of the curve. It shows you how to leverage Azure's AI and machine learning services to automate tasks, fine-tune models, and build intelligent assistants and next-generation workflows. By the end, you will have the confidence and capabilities to deliver production-grade cloud solutions that meet real-world demands and position yourself at the forefront of modern cloud development.
What You Will Learn:
- Integrate data solutions like Azure Storage and managed SQL databases into your applications
- Embed monitoring into your application using the Application Insights SDK
- Develop serverless solutions with Azure Functions and Durable Functions
- Automate CI/CD workflows with GitHub Actions and Azure integration
- Build and manage containers using Azure Container Apps, Azure Container Registry (ACR), and App Service
- Design powerful workflows with both low-code and full-code approaches
- Enhance applications with AI and machine learning components
Who this book is for: This book is for cloud developers and engineers building applications with Microsoft Azure, as well as those looking to begin a career in Azure development. While a basic understanding of programming concepts is recommended, the book covers both basic and advanced ideas and solutions, making it valuable for beginners and experienced developers looking to enhance their skills.
Table of Contents:
- Getting Started with an Azure Account and Selecting an IDE
- Choosing between Azure CLI and Azure PowerShell
- Hosting Applications with Azure App Service
- Developing Static Web Applications
- Going Serverless with Azure Functions
- Managing Secrets and Configuration in Azure
- Integrating Services with Azure Logic Apps
- Building Workflows Using Durable Functions
- Learning about Azure Container Registry
- Building Ad-Hoc Workloads Using Azure Container Instances
- Developing Microservices with Azure Container Apps
- Hosting Containers with Azure App Service
- Storing Data with Azure Storage
- Using Queues in Microsoft Azure
- Using Relational Databases in Microsoft Azure
- Adding Monitoring to Your Application
- Integrating an Application with the Azure OpenAI Service
(N.B. Please use the Read Sample option to see further chapters)
Pan-African Artificial Intelligence and Smart Systems
This two-volume set LNICST 631 & 632 constitutes the proceedings of the Third Pan-African Conference on Artificial Intelligence and Smart Systems, PAAISS 2024, which was held in Durban, South Africa, during December 4-6, 2024. The 39 full papers presented in these volumes were carefully reviewed and selected from 103 submissions. They are organized according to the following topics: Part I: Artificial Intelligence in Medicine; Smart Systems Enabling Technologies; Artificial Intelligence-Enabled Communication Systems. Part II: Artificial Intelligence Theory and Methods; Artificial Intelligence and Smart Systems; Remote Sensing and Artificial Intelligence.
AI on Trial
The structure of AI on Trial follows the same process as a High Court trial, taking a unique approach to the most innovative of technological areas. Addressing the current state of artificial intelligence and the law, the book identifies why the technology should be 'placed on trial' and presents relevant evidence, before passing 'judgment' and proposing a Manifesto for Responsible AI and a blueprint for an ethical, legal and regulatory framework. The Second Edition includes:
- Four new 'evidence chapters' on generative AI, data ownership, the digital divide, and AI in education
- Discussion concerning the threats posed by ever-increasing digital and tech poverty, the opportunities for a potential revolution in education, and the creative challenges from the rise of GenAI
- Contributions from leading US and other international thought leaders
Written from the viewpoint of practitioners, academics and journalists, this is an essential title for all information and technology law practitioners, in-house counsel, data protection officers, company directors, finance directors, academics and students. Technologists, regulators, legislators and journalists interested in getting to grips with the issues presented by AI will also benefit. This title is included in Bloomsbury Professional's Cyber Law online service.