Computational Intelligence for Wireless Sensor Networks
Computational Intelligence for Wireless Sensor Networks: Principles and Applications provides an integrative overview of computational intelligence (CI) in wireless sensor networks (WSNs) and sensor-enabled technologies. It demonstrates how the CI paradigm can help WSNs and sensor-enabled technologies overcome their existing issues. This book provides extensive coverage of the multiple design challenges of WSNs and associated technologies, such as clustering, routing, media access, security, mobility, and the design of energy-efficient network operations. It also describes various CI strategies, such as fuzzy computing, evolutionary computing, reinforcement learning, artificial intelligence, swarm intelligence, and teaching-learning-based optimization, and discusses how to apply these techniques in wireless sensor networks and sensor-enabled technologies to improve their design. The book offers comprehensive coverage of related topics, including: the emergence of intelligence in wireless sensor networks; a taxonomy of computational intelligence; detailed discussion of various metaheuristic techniques; development of intelligent MAC protocols; development of intelligent routing protocols; and security management in WSNs. This book mainly addresses the challenges pertaining to the development of intelligent network systems via computational intelligence. It provides insights into how intelligence has been pursued and can be further integrated into the development of sensor-enabled applications.
Deep Learning in Visual Computing
Deep learning refers to artificial neural networks that learn from data and can be used to make predictions. Loosely inspired by the human brain, deep learning provides learned solutions to many challenging problems in the area of visual computing. From object recognition to image classification for diagnostics, deep learning has shown the power of deep neural networks in solving real-world visual computing problems, in some cases with super-human accuracy. The introduction of deep learning into the field of visual computing has displaced many of the traditional image processing and computer vision techniques. Today, deep learning is considered to be the most powerful, accurate, efficient, and effective method available, with the potential to solve many of the most challenging problems in visual computing. This book provides insight into deep machine learning and the challenges in visual computing that this method of machine learning can tackle. It introduces readers to the world of deep neural network architectures with easy-to-understand explanations. From face recognition to image classification for the diagnosis of cancer, the book provides unique examples of solved problems in applied visual computing using deep learning. Interested and enthusiastic readers of modern machine learning methods will find this book easy to follow. They will find it a handy guide for designing and implementing their own projects in the field of visual computing.
Risk-Sensitive Reinforcement Learning via Policy Gradient Search
Reinforcement learning (RL) is one of the foundational pillars of artificial intelligence and machine learning. An important consideration in any optimization or control problem is the notion of risk, but its incorporation into RL has been a fairly recent development. This monograph surveys recent research on risk-sensitive RL, specifically work in which policy gradient search is the solution approach. In the first risk-sensitive RL setting, the authors cover popular risk measures based on variance, conditional value-at-risk, and chance constraints, and present a template for policy gradient-based risk-sensitive RL algorithms using a Lagrangian formulation. For the setting where risk is incorporated directly into the objective function, they consider an exponential utility formulation, cumulative prospect theory, and coherent risk measures. Written for novices and experts alike, the text is completely self-contained but organized in a manner that allows expert readers to skip the background chapters. This is a complete guide for students and researchers working on this aspect of machine learning.
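As a flavor of the mean-variance style of risk-sensitive policy gradient the monograph surveys, the sketch below (not code from the book) trains a softmax policy on a toy two-armed bandit, subtracting a variance-style penalty lam * (r - mean)^2 from each sampled reward before a REINFORCE-style update. All names, the bandit, and the penalty form are illustrative assumptions.

```python
import math
import random

def softmax(prefs):
    """Turn preference scores into action probabilities."""
    m = max(prefs)
    exps = [math.exp(p - m) for p in prefs]
    total = sum(exps)
    return [e / total for e in exps]

def train(lam, steps=20000, alpha=0.05, seed=0):
    """REINFORCE-style learning on a two-armed bandit.

    Arm 0 pays a deterministic 1.0 (zero variance); arm 1 pays 2.0
    with probability 0.5, else 0.0 (same mean, variance 1.0).  Each
    sampled reward is penalized by lam * (r - mean)^2, a simple
    sample-based surrogate for a mean-variance objective.
    """
    rng = random.Random(seed)
    prefs = [0.0, 0.0]
    mean_reward = 1.0  # true mean of both arms, kept fixed for simplicity
    for _ in range(steps):
        probs = softmax(prefs)
        a = 0 if rng.random() < probs[0] else 1
        r = 1.0 if a == 0 else (2.0 if rng.random() < 0.5 else 0.0)
        r_adj = r - lam * (r - mean_reward) ** 2  # risk-penalized reward
        # Softmax policy gradient: d log pi(a) / d prefs[i] = 1{i==a} - probs[i]
        for i in range(2):
            grad = (1.0 if i == a else 0.0) - probs[i]
            prefs[i] += alpha * r_adj * grad
    return softmax(prefs)
```

With lam = 0 the learner is indifferent between the equal-mean arms; with lam > 0 the penalized return of the risky arm drops below that of the safe arm, so probability shifts toward the deterministic choice.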
Phishing Detection Using Content-Based Image Classification
Phishing Detection Using Content-Based Image Classification is an invaluable resource for any deep learning and cybersecurity professional or scholar trying to solve cybersecurity tasks using new-age technologies like deep learning and computer vision. With various rule-based phishing detection techniques at play that can be bypassed by phishers, this book provides a step-by-step approach to solving this problem using computer vision and deep learning techniques with significant accuracy. The book offers comprehensive coverage of the most essential topics, including: programmatically reading and manipulating image data; extracting relevant features from images; building statistical models using image features; using state-of-the-art deep learning models for feature extraction; building a robust phishing detection tool even with limited data; dimensionality reduction techniques; class imbalance treatment; feature fusion techniques; and building performance metrics for multi-class classification tasks. Another unique aspect of this book is that it comes with a completely reproducible code base developed by the author and shared via Python notebooks for quick launch and running. The notebooks can be leveraged to further enhance the provided models using new advancements in the field of computer vision and more advanced algorithms.
Practical Digital Design
The VHSIC Hardware Description Language (VHDL) is one of the two most popular languages used to design digital logic circuits. This book provides a comprehensive introduction to the syntax and the most commonly used features of VHDL. It also presents a formal digital design process and the best design practices developed over more than twenty-five years of the author's VHDL design experience in military ground and satellite communication systems. Unlike other books on this subject, this real-world professional experience captures not only the what of VHDL, but also the how. Throughout the book, recommended methods for performing digital design are presented along with the common pitfalls and the techniques used to successfully avoid them. Written both for students learning VHDL for the first time and as professional development material for experienced engineers, this book's practices minimize design time while maximizing the probability of first-time design success.
Data Structures using C
A data structure is a set of specially organized data elements together with functions defined to store, retrieve, remove, and search for individual data elements. Data Structures using C: A Practical Approach for Beginners covers all issues related to the amount of storage needed, the amount of time required to process the data, the representation of data in primary memory, and the operations carried out on such data. The book will help students learn data structures and algorithms in a focused way. It solves linear and nonlinear data structure problems in the C language algorithmically and diagrammatically, with time and space complexity analysis; covers interview questions and MCQs on all topics for campus readiness; identifies possible solutions to each problem; and includes real-life and computational applications of linear and nonlinear data structures. This book is primarily aimed at undergraduates and graduates of computer science and information technology, though students of all engineering disciplines will also find it useful.
E-Systems for the 21st Century
E-based systems and computer networks are becoming standard practice across all sectors, including health, engineering, business, education, security, and citizen interaction with local and national government. With contributions from researchers and practitioners from around the world, this two-volume book discusses and reports on new and important developments in the field of e-systems, covering a wide range of current issues in the design, engineering, and adoption of e-systems.
Datacenter Design and Management
An era of big data demands datacenters, which house the computing infrastructure that translates raw data into valuable information. This book defines datacenters broadly, as large distributed systems that perform parallel computation for diverse users. These systems exist in multiple forms--private and public--and are built at multiple scales. Datacenter design and management is multifaceted, requiring the simultaneous pursuit of multiple objectives. Performance, efficiency, and fairness are first-order design and management objectives, which can each be viewed from several perspectives. This book surveys datacenter research from a computer architect's perspective, addressing challenges in applications, design, management, server simulation, and system simulation. This perspective complements the rich bodies of work in datacenters as a warehouse-scale system, which study the implications for infrastructure that encloses computing equipment, and in datacenters as distributed systems, which abstract away details of processor and memory subsystems. This book is written for first- or second-year graduate students in computer architecture and may be helpful for those in computer systems. The goal of this book is to prepare computer architects for datacenter-oriented research by describing prevalent perspectives and the state-of-the-art.
A Primer on Hardware Prefetching
Since the 1970s, microprocessor-based digital platforms have been riding Moore's law, allowing for doubling of density for the same area roughly every two years. However, whereas microprocessor fabrication has focused on increasing instruction execution rate, memory fabrication technologies have focused primarily on an increase in capacity with negligible increase in speed. This divergent trend in performance between the processors and memory has led to a phenomenon referred to as the "Memory Wall." To overcome the memory wall, designers have resorted to a hierarchy of cache memory levels, which rely on the principle of memory access locality to reduce the observed memory access time and the performance gap between processors and memory. Unfortunately, important workload classes exhibit adverse memory access patterns that baffle the simple policies built into modern cache hierarchies to move instructions and data across cache levels. As such, processors often spend much time idling upon a demand fetch of memory blocks that miss in higher cache levels. Prefetching--predicting future memory accesses and issuing requests for the corresponding memory blocks in advance of explicit accesses--is an effective approach to hide memory access latency. There have been a myriad of proposed prefetching techniques, and nearly every modern processor includes some hardware prefetching mechanisms targeting simple and regular memory access patterns. This primer offers an overview of the various classes of hardware prefetchers for instructions and data proposed in the research literature, and presents examples of techniques incorporated into modern microprocessors.
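To make the idea concrete, here is a minimal, illustrative model of a PC-indexed stride prefetcher, one of the simple regular-pattern detectors mentioned above. The table layout and two-strikes confidence rule are simplifying assumptions, not a description of any particular processor.

```python
class StridePrefetcher:
    """Toy sketch of a PC-indexed stride prefetcher.

    For each load instruction (identified by its program counter),
    the table records the last address seen and the last observed
    stride. Once the same non-zero stride is observed twice in a
    row, the prefetcher issues a request for the next address in
    the sequence, ahead of the demand access.
    """

    def __init__(self):
        self.table = {}  # pc -> (last_addr, last_stride, confident)

    def access(self, pc, addr):
        """Record a demand access; return a prefetch address or None."""
        last_addr, last_stride, confident = self.table.get(pc, (None, 0, False))
        prefetch = None
        if last_addr is not None:
            stride = addr - last_addr
            if confident and stride == last_stride and stride != 0:
                prefetch = addr + stride  # pattern confirmed: run ahead
            self.table[pc] = (addr, stride, stride == last_stride)
        else:
            self.table[pc] = (addr, 0, False)
        return prefetch

pf = StridePrefetcher()
# A load at (hypothetical) PC 0x400 streaming through an array,
# stride 64 bytes -- one cache block per iteration.
issued = [pf.access(0x400, a) for a in range(0, 64 * 6, 64)]
```

After the training accesses, the prefetcher stays one block ahead of the demand stream, which is exactly how access latency gets hidden.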
Customizable Computing
Since the end of Dennard scaling in the early 2000s, improving the energy efficiency of computation has been the main concern of the research community and industry. The large energy efficiency gap between general-purpose processors and application-specific integrated circuits (ASICs) motivates the exploration of customizable architectures, where one can adapt the architecture to the workload. In this Synthesis lecture, we present an overview and introduction of the recent developments on energy-efficient customizable architectures, including customizable cores and accelerators, on-chip memory customization, and interconnect optimization. In addition to a discussion of the general techniques and classification of different approaches used in each area, we also highlight and illustrate some of the most successful design examples in each category and discuss their impact on performance and energy efficiency. We hope that this work captures the state-of-the-art research and development on customizable architectures and serves as a useful reference basis for further research, design, and implementation for large-scale deployment in future computing systems.
Research Infrastructures for Hardware Accelerators
Hardware acceleration in the form of customized datapath and control circuitry tuned to specific applications has gained popularity for its promise to utilize transistors more efficiently. Historically, the computer architecture community has focused on general-purpose processors, and extensive research infrastructure has been developed to support research efforts in this domain. Envisioning future computing systems with a diverse set of general-purpose cores and accelerators, computer architects must add accelerator-related research infrastructures to their toolboxes to explore future heterogeneous systems. This book serves as a primer for the field, as an overview of the vast literature on accelerator architectures and their design flows, and as a resource guidebook for researchers working in related areas.
A Primer on Compression in the Memory Hierarchy
This synthesis lecture presents the current state-of-the-art in applying low-latency, lossless hardware compression algorithms to cache, memory, and the memory/cache link. There are many non-trivial challenges that must be addressed to make data compression work well in this context. First, since compressed data must be decompressed before it can be accessed, decompression latency ends up on the critical memory access path. This imposes a significant constraint on the choice of compression algorithms. Second, while conventional memory systems store fixed-size entities like data types, cache blocks, and memory pages, these entities will suddenly vary in size in a memory system that employs compression. Dealing with variable-size entities in a memory system using compression has a significant impact on the way caches are organized and on how resources in main memory are managed. We systematically discuss solutions in the open literature to these problems. Chapter 2 provides the foundations of data compression by first introducing the fundamental concept of value locality. We then introduce a taxonomy of compression algorithms and show how previously proposed algorithms fit within that logical framework. Chapter 3 discusses the different ways that cache memory systems can employ compression, focusing on the trade-offs between latency, capacity, and complexity of alternative ways to compact compressed cache blocks. Chapter 4 discusses issues in applying data compression to main memory and Chapter 5 covers techniques for compressing data on the cache-to-memory links. This book should help a skilled memory system designer understand the fundamental challenges in applying compression to the memory hierarchy and introduce them to the state-of-the-art techniques in addressing them.
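As a toy illustration of how value locality enables low-latency compression, the sketch below compresses a cache block of words as one base value plus narrow deltas, in the spirit of the base-delta family of schemes discussed in the literature. The exact format, field widths, and function names are assumptions made for illustration.

```python
def compress_block(words, delta_bits=8):
    """Base-delta compression sketch for one cache block of 32-bit words.

    If every word differs from the first word (the base) by a delta
    representable in `delta_bits` signed bits, store the base plus
    narrow deltas; otherwise keep the block raw (uncompressed).
    Returns (tag, payload) where tag is 'base_delta' or 'raw'.
    """
    base = words[0]
    lo, hi = -(1 << (delta_bits - 1)), (1 << (delta_bits - 1)) - 1
    deltas = [w - base for w in words]
    if all(lo <= d <= hi for d in deltas):
        return ('base_delta', (base, deltas))
    return ('raw', words)

def decompress_block(tag, payload):
    """Inverse of compress_block: recover the original words."""
    if tag == 'base_delta':
        base, deltas = payload
        return [base + d for d in deltas]
    return list(payload)

def compressed_bits(tag, payload, word_bits=32, delta_bits=8):
    """Storage cost of the encoded block, excluding the tag itself."""
    if tag == 'base_delta':
        base, deltas = payload
        return word_bits + delta_bits * len(deltas)
    return word_bits * len(payload)

# Pointers into one region exhibit value locality: numerically close values.
block = [0x1000_0000 + i * 16 for i in range(8)]
tag, payload = compress_block(block)
```

Here eight 32-bit words (256 bits raw) shrink to one base plus eight 8-bit deltas (96 bits), while decompression is a trivial add per word, which is why such schemes can sit on the memory access path.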
Power-Efficient Computer Architectures
As Moore's Law and Dennard scaling trends have slowed, the challenges of building high-performance computer architectures while maintaining acceptable power efficiency levels have heightened. Over the past ten years, architecture techniques for power efficiency have shifted from primarily focusing on module-level efficiencies, toward more holistic design styles based on parallelism and heterogeneity. This work highlights and synthesizes recent techniques and trends in power-efficient computer architecture. Table of Contents: Introduction / Voltage and Frequency Management / Heterogeneity and Specialization / Communication and Memory Systems / Conclusions / Bibliography / Authors' Biographies
Single-Instruction Multiple-Data Execution
Having hit power limitations to even more aggressive out-of-order execution in processor cores, many architects in the past decade have turned to single-instruction-multiple-data (SIMD) execution to increase single-threaded performance. SIMD execution, or having a single instruction drive execution of an identical operation on multiple data items, was already well established as a technique to efficiently exploit data parallelism. Furthermore, support for it was already included in many commodity processors. However, in the past decade, SIMD execution has seen a dramatic increase in the set of applications using it, which has motivated big improvements in hardware support in mainstream microprocessors. The easiest way to provide a big performance boost to SIMD hardware is to make it wider--i.e., increase the number of data items hardware operates on simultaneously. Indeed, microprocessor vendors have done this. However, as we exploit more data parallelism in applications, certain challenges can negatively impact performance. In particular, conditional execution, non-contiguous memory accesses, and the presence of some dependences across data items are key roadblocks to achieving peak performance with SIMD execution. This book first describes data parallelism, and why it is so common in popular applications. We then describe SIMD execution, and explain where its performance and energy benefits come from compared to other techniques to exploit parallelism. Finally, we describe SIMD hardware support in current commodity microprocessors. This includes both expected design tradeoffs, as well as unexpected ones, as we work to overcome challenges encountered when trying to map real software to SIMD execution.
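One SIMD roadblock named above, conditional execution, is usually handled with per-lane masking (predication). The sketch below models that behavior in plain Python for clarity; real SIMD hardware would perform the compare, both arithmetic paths, and the final select as wide vector instructions, and the lane width here is an arbitrary assumption.

```python
def simd_conditional(data, lanes=4):
    """Model of how a data-parallel loop with a branch maps to
    masked (predicated) SIMD execution, `lanes` elements at a time.

    Scalar code:   for x in data: out = x * 2 if x > 0 else x
    SIMD version:  compute both sides of the branch for every lane,
    then select per lane with a mask -- the software analog of
    hardware predication.
    """
    out = []
    for i in range(0, len(data), lanes):
        vec = data[i:i + lanes]
        mask = [x > 0 for x in vec]        # vector compare -> predicate
        taken = [x * 2 for x in vec]       # both sides execute...
        not_taken = vec                    # ...for every lane
        out.extend(t if m else n for m, t, n in zip(mask, taken, not_taken))
    return out
```

The cost is visible in the model: every lane pays for both branch paths, which is one reason divergent conditionals erode SIMD efficiency relative to straight-line data-parallel code.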
Security Basics for Computer Architects
Design for security is an essential aspect of the design of future computers. However, security is not well understood by the computer architecture community. Many important security aspects have evolved over the last several decades in the cryptography, operating systems, and networking communities. This book attempts to introduce the computer architecture student, researcher, or practitioner to the basic concepts of security and threat-based design. Past work in different security communities can inform our thinking and provide a rich set of technologies for building architectural support for security into all future computers and embedded computing devices and appliances. I have tried to keep the book short, which means that many interesting topics and applications could not be included. What the book focuses on are the fundamental security concepts, across different security communities, that should be understood by any computer architect trying to design or evaluate security-aware computer architectures.
Multithreading Architecture
Multithreaded architectures now appear across the entire range of computing devices, from the highest-performing general purpose devices to low-end embedded processors. Multithreading enables a processor core to more effectively utilize its computational resources, as a stall in one thread need not cause execution resources to be idle. This enables the computer architect to maximize performance within area constraints, power constraints, or energy constraints. However, the architectural options for the processor designer or architect looking to implement multithreading are quite extensive and varied, as evidenced not only by the research literature but also by the variety of commercial implementations. This book introduces the basic concepts of multithreading, describes a number of models of multithreading, and then develops the three classic models (coarse-grain, fine-grain, and simultaneous multithreading) in greater detail. It describes a wide variety of architectural and software design tradeoffs, as well as opportunities specific to multithreading architectures. Finally, it details a number of important commercial and academic hardware implementations of multithreading. Table of Contents: Introduction / Multithreaded Execution Models / Coarse-Grain Multithreading / Fine-Grain Multithreading / Simultaneous Multithreading / Managing Contention / New Opportunities for Multithreaded Processors / Experimentation and Metrics / Implementations of Multithreaded Processors / Conclusion
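The payoff described above, where a stall in one thread need not idle the core's execution resources, can be seen in a toy timeline model of fine-grain multithreading. The one-character instruction encoding, round-robin issue policy, and fixed miss latency are simplifying assumptions for illustration only.

```python
def simulate_fine_grain(threads, miss_latency=3):
    """Toy cycle-by-cycle timeline of fine-grain multithreading.

    Each thread is a list of instructions: 'c' (single-cycle compute)
    or 'm' (a load that stalls only the issuing thread for
    `miss_latency` extra cycles). Every cycle the core issues one
    instruction from the next ready thread, round-robin, so another
    thread's stall need not leave the pipeline idle.
    Returns (total_cycles, idle_cycles).
    """
    pcs = [0] * len(threads)        # next instruction per thread
    ready_at = [0] * len(threads)   # cycle at which a thread may issue again
    cycle = idle = 0
    nxt = 0                         # round-robin starting point
    while any(pc < len(t) for pc, t in zip(pcs, threads)):
        issued = False
        for k in range(len(threads)):
            t = (nxt + k) % len(threads)
            if pcs[t] < len(threads[t]) and ready_at[t] <= cycle:
                op = threads[t][pcs[t]]
                pcs[t] += 1
                if op == 'm':
                    ready_at[t] = cycle + 1 + miss_latency
                nxt = (t + 1) % len(threads)
                issued = True
                break
        if not issued:
            idle += 1               # no thread ready: execution resources idle
        cycle += 1
    return cycle, idle
```

One four-instruction thread alternating misses and computes takes 10 cycles, 6 of them idle; two such threads interleaved finish all 8 instructions in 12 cycles with only 4 idle, illustrating how multithreading converts stall cycles into useful work.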
Analyzing Analytics
This book aims to achieve the following goals: (1) to provide a high-level survey of key analytics models and algorithms without going into mathematical details; (2) to analyze the usage patterns of these models; and (3) to discuss opportunities for accelerating analytics workloads using software, hardware, and system approaches. The book first describes 14 key analytics models (exemplars) that span data mining, machine learning, and data management domains. For each analytics exemplar, we summarize its computational and runtime patterns and apply the information to evaluate parallelization and acceleration alternatives for that exemplar. Using case studies from important application domains such as deep learning, text analytics, and business intelligence (BI), we demonstrate how various software and hardware acceleration strategies are implemented in practice. This book is intended for both experienced professionals and students who are interested in understanding core algorithms behind analytics workloads. It is designed to serve as a guide for addressing various open problems in accelerating analytics workloads, e.g., new architectural features for supporting analytics workloads, impact on programming models and runtime systems, and designing analytics systems.
Resilient Architecture Design for Voltage Variation
Shrinking feature size and diminishing supply voltage are making circuits sensitive to supply voltage fluctuations within the microprocessor, caused by normal workload activity changes. If left unattended, voltage fluctuations can lead to timing violations or even transistor lifetime issues that degrade processor robustness. Mechanisms that learn to tolerate, avoid, and eliminate voltage fluctuations based on program and microarchitectural events can help steer the processor clear of danger, thus enabling tighter voltage margins that improve performance or lower power consumption. We describe the problem of voltage variation and the factors that influence this variation during processor design and operation. We also describe a variety of runtime hardware and software mitigation techniques that tolerate, avoid, and/or eliminate voltage violations. We hope processor architects will find the information useful since tolerance, avoidance, and elimination are generalizable constructs that can serve as a basis for addressing other reliability challenges as well. Table of Contents: Introduction / Modeling Voltage Variation / Understanding the Characteristics of Voltage Variation / Traditional Solutions and Emerging Solution Forecast / Allowing and Tolerating Voltage Emergencies / Predicting and Avoiding Voltage Emergencies / Eliminating Recurring Voltage Emergencies / Future Directions on Resiliency
Shared-Memory Synchronization
This book offers a comprehensive survey of shared-memory synchronization, with an emphasis on "systems-level" issues. It includes sufficient coverage of architectural details to understand correctness and performance on modern multicore machines, and sufficient coverage of higher-level issues to understand how synchronization is embedded in modern programming languages. The primary intended audience for this book is "systems programmers"--the authors of operating systems, library packages, language run-time systems, concurrent data structures, and server and utility programs. Much of the discussion should also be of interest to application programmers who want to make good use of the synchronization mechanisms available to them, and to computer architects who want to understand the ramifications of their design decisions on systems-level code.
Non-Volatile In-Memory Computing by Spintronics
Exascale computing requires re-examining the existing hardware platforms that support intensive data-oriented computing. Since the main bottleneck is memory, this book aims to develop an energy-efficient in-memory computing platform. First, models of the spin-transfer torque magnetic tunnel junction and racetrack memory are presented. Next, we show that spintronics could be a candidate for future data-oriented computing in storage, logic, and interconnect. Building on this, spintronics-based in-memory computing is applied to data encryption and machine learning. The implementations of in-memory AES, the Simon cipher, and interconnect are explained in detail. In addition, in-memory machine learning and face recognition are also illustrated in this book.
Architectural and Operating System Support for Virtual Memory
This book provides computer engineers, academic researchers, new graduate students, and seasoned practitioners an end-to-end overview of virtual memory. We begin with a recap of foundational concepts and discuss not only state-of-the-art virtual memory hardware and software support available today, but also emerging research trends in this space. The span of topics covers processor microarchitecture, memory systems, operating system design, and memory allocation. We show how efficient virtual memory implementations hinge on careful hardware and software cooperation, and we discuss new research directions aimed at addressing emerging problems in this space. Virtual memory is a classic computer science abstraction and one of the pillars of the computing revolution. It has long enabled hardware flexibility, software portability, and overall better security, to name just a few of its powerful benefits. Nearly all user-level programs today take for granted that they will have been freed from the burden of physical memory management by the hardware, the operating system, device drivers, and system libraries. However, despite its ubiquity in systems ranging from warehouse-scale datacenters to embedded Internet of Things (IoT) devices, the overheads of virtual memory are becoming a critical performance bottleneck today. Virtual memory architectures designed for individual CPUs or even individual cores are in many cases struggling to scale up and scale out to today's systems which now increasingly include exotic hardware accelerators (such as GPUs, FPGAs, or DSPs) and emerging memory technologies (such as non-volatile memory), and which run increasingly intensive workloads (such as virtualized and/or "big data" applications). As such, many of the fundamental abstractions and implementation approaches for virtual memory are being augmented, extended, or entirely rebuilt in order to ensure that virtual memory remains viable and performant in the years to come.
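As a reminder of the basic mechanics the book recaps, here is a toy two-level page-table walk with a TLB in front of it. The 10-10-12 address split mirrors classic 32-bit x86, but the dict-based tables, names, and page-fault handling are purely illustrative assumptions.

```python
PAGE_BITS = 12   # 4 KiB pages
LEVEL_BITS = 10  # 10 index bits per level, classic two-level 32-bit layout

def translate(root, vaddr, tlb):
    """Translate a virtual address via a TLB and a two-level page table.

    `root` maps a level-1 index to a second-level dict, which maps a
    level-2 index to a physical frame number. Returns (paddr, hit)
    where hit tells whether the TLB short-circuited the table walk.
    Raises MemoryError to stand in for a page fault.
    """
    vpn = vaddr >> PAGE_BITS
    offset = vaddr & ((1 << PAGE_BITS) - 1)
    if vpn in tlb:                   # TLB hit: no page-table memory accesses
        return (tlb[vpn] << PAGE_BITS) | offset, True
    l1 = (vpn >> LEVEL_BITS) & ((1 << LEVEL_BITS) - 1)
    l2 = vpn & ((1 << LEVEL_BITS) - 1)
    second = root.get(l1)
    if second is None or l2 not in second:
        raise MemoryError(f"page fault at {vaddr:#x}")
    frame = second[l2]
    tlb[vpn] = frame                 # fill the TLB for subsequent accesses
    return (frame << PAGE_BITS) | offset, False

# Map virtual page 0x401 (l1 index 0x1, l2 index 0x1) to physical frame 0x80.
root = {0x1: {0x1: 0x80}}
tlb = {}
```

The first access to a page pays for the full walk (two table lookups here; more levels on modern 64-bit systems); repeat accesses hit in the TLB, which is precisely why TLB reach has become a scaling concern for the workloads discussed above.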
Optimization and Mathematical Modeling in Computer Architecture
In this book we give an overview of modeling techniques used to describe computer systems to mathematical optimization tools. We give a brief introduction to various classes of mathematical optimization frameworks with special focus on mixed integer linear programming (MILP), which provides a good balance between solver time and expressiveness. We present four detailed case studies -- instruction set customization, data center resource management, spatial architecture scheduling, and resource allocation in tiled architectures -- showing how MILP can be used and quantifying by how much it outperforms traditional design exploration techniques. This book should help the skilled systems designer learn techniques for using MILP in their problems, and the skilled optimization expert understand the types of computer systems problems that MILP can be applied to.
On-Chip Photonic Interconnects
As the number of cores on a chip continues to climb, architects will need to address both bandwidth and power consumption issues related to the interconnection network. Electrical interconnects are not likely to scale well to a large number of processors for energy efficiency reasons, and the problem is compounded by the fact that there is a fixed total power budget for a die, dictated by the amount of heat that can be dissipated without special (and expensive) cooling and packaging techniques. Thus, there is a need to seek alternatives to electrical signaling for on-chip interconnection applications. Photonics, which has a fundamentally different mechanism of signal propagation, offers the potential to not only overcome the drawbacks of electrical signaling, but also enable the architect to build energy efficient, scalable systems. The purpose of this book is to introduce computer architects to the possibilities and challenges of working with photons and designing on-chip photonic interconnection networks.
Machine Learning on Kubernetes
Build a Kubernetes-based self-serving, agile data science and machine learning ecosystem for your organization using reliable and secure open source technologies. Key Features: build a complete machine learning platform on Kubernetes; improve the agility and velocity of your team by adopting the self-service capabilities of the platform; reduce time-to-market by automating data pipelines and model training and deployment. Book Description: MLOps is an emerging field that aims to bring the repeatability, automation, and standardization of the software engineering domain to data science and machine learning engineering. By implementing MLOps with Kubernetes, data scientists, IT professionals, and data engineers can collaborate and build machine learning solutions that deliver business value for their organization. You'll begin by understanding the different components of a machine learning project. Then, you'll design and build a practical end-to-end machine learning project using open source software. As you progress, you'll understand the basics of MLOps and the value it can bring to machine learning projects. You will also gain experience in building, configuring, and using an open source, containerized machine learning platform. In later chapters, you will prepare data, build and deploy machine learning models, and automate workflow tasks using the same platform. Finally, the exercises in this book will help you get hands-on experience with Kubernetes and open source tools such as JupyterHub, MLflow, and Airflow. By the end of this book, you'll have learned how to effectively build, train, and deploy a machine learning model using the machine learning platform you built. What You Will Learn: understand the different stages of a machine learning project; use open source software to build a machine learning platform on Kubernetes; implement a complete ML project using the machine learning platform presented in this book; improve your organization's collaborative journey toward machine learning; discover how to use the platform as a data engineer, ML engineer, or data scientist; and find out how to apply machine learning to solve real business problems. Who This Book Is For: This book is for data scientists, data engineers, IT platform owners, AI product owners, and data architects who want to build their own platform for ML development. Although this book starts with the basics, a solid understanding of Python and Kubernetes, along with knowledge of the basic concepts of data science and data engineering, will help you grasp the topics covered in this book.
Green Internet of Things
Green Internet of Things (IoT) envisions the concept of reducing the energy consumption of IoT devices and making the environment safe. With this in mind, this book focuses on both the theoretical and implementation aspects of green computing and the next-generation networks that can provide green systems through IoT-enabling technologies, that is, the technology behind its architecture and building components. It also covers design concepts and related advanced computing in detail.
- Highlights the elements and communication technologies in Green IoT
- Discusses the technologies, architecture, and components surrounding Green IoT
- Describes advanced computing technologies in terms of the smart world, data centres, and other related hardware for Green IoT
- Elaborates energy-efficient Green IoT design for real-time implementations
- Covers pertinent applications in building smart cities, healthcare devices, efficient energy harvesting, and so forth
This short-form book is aimed at students and researchers in IoT, clean technologies, and computer science and engineering, as well as industry R&D researchers.
Network Evolution and Applications
Network Evolution and Applications provides a comprehensive, integrative, and easy approach to understanding the technologies, concepts, and milestones in the history of networking. It provides an overview of different aspects involved in the networking arena, including the core technologies that are essential for communication and important in our day-to-day life. It sheds light on certain past networking concepts and technologies that have been revolutionary in the history of science and technology and have been highly impactful. It expands on various concepts like Artificial Intelligence, Software Defined Networking, Cloud Computing, and Internet of Things, which are very popular at present. This book focuses on the evolutions made in the world of networking. One can't imagine the world without the Internet today; with the Internet and present-day networking, distance doesn't matter at all. The COVID-19 pandemic has resulted in a tough time worldwide, with global lockdown, locked homes, empty streets, stores without consumers, and offices with no or fewer staff. Thanks to modern digital networks, the culture of work from home (WFH) or working remotely with a network/Internet connection has come to the fore, with even school and university classes going online. Although WFH is not new, the COVID-19 pandemic has given it a new look, and industries are now actively exploring WFH with a view to extending it in the future. The aim of this book is to present the timeline of networking to show the developments made and the milestones that were achieved due to these developments.
Metaheuristic Computation with MATLAB®
The main purpose of this book is to provide a unified view of the most popular metaheuristic methods. From this perspective, it presents the fundamental design principles as well as the operators of metaheuristic approaches that are considered essential.
Introduction to the Cyber Ranges
Introduction to the Cyber Ranges provides a comprehensive, integrative, easy-to-comprehend overview of different aspects involved in the cybersecurity arena. It expands on various concepts like cyber situational awareness, simulation and emulation environments, and cybersecurity exercises. It also focuses on detailed analysis and the comparison of various existing cyber ranges in military, academic, and commercial sectors. It highlights every crucial aspect necessary for developing a deeper insight about the working of the cyber ranges, their architectural design, and their need in the market. It conveys how cyber ranges are complex and effective tools in dealing with advanced cyber threats and attacks. Enhancing the network defenses, resilience, and efficiency of different components of critical infrastructures is the principal objective of cyber ranges. Cyber ranges provide simulations of possible cyberattacks and training on how to thwart such attacks. They are widely used in urban enterprise sectors because they present a sturdy and secure setting for hands-on cyber skills training, advanced cybersecurity education, security testing/training, and certification.
Features:
- A comprehensive guide to understanding the complexities involved with cyber ranges and other cybersecurity aspects
- Substantial theoretical know-how on cyber ranges and their architectural design, along with case studies of existing cyber ranges in leading urban sectors like military, academic, and commercial
- Elucidates the defensive technologies used by various cyber ranges in enhancing the security setups of private and government organizations
- Information organized in an accessible format for students (in engineering, computer science, and information management), professionals, researchers, and scientists working in the fields of IT, cybersecurity, distributed systems, and computer networks
Designing Secure IoT Devices with the Arm Platform Security Architecture and Cortex-M33
Designing Secure IoT Devices with the Arm Platform Security Architecture and Cortex-M33 explains how to design and deploy secure IoT devices based on the Cortex-M23/M33 processor. The book is split into three parts. First, it introduces the Cortex-M33 and its architectural design and major processor peripherals. Second, it shows how to design secure software and secure communications to minimize the threat of both hardware and software hacking. And finally, it examines common IoT cloud systems and how to design and deploy a fleet of IoT devices. Example projects are provided for the Keil MDK-ARM and NXP LPCXpresso tool chains. Since their inception, microcontrollers have been designed as functional devices with a CPU, memory, and peripherals that can be programmed to accomplish a huge range of tasks. With the growth of internet-connected devices and the Internet of Things (IoT), "plain old microcontrollers" are no longer suitable as they lack the features necessary to create both a secure and functional device. The recent development by Arm of the Cortex-M23 and M33 architecture is intended for today's IoT world.
Supervised Machine Learning
This book presents an AI framework intended to address the bias-variance tradeoff for supervised learning methods in real-life applications. The framework comprises bootstrapping to create multiple training and testing data sets with various characteristics, design and analysis of statistical experiments to identify optimal feature subsets and optimal hyper-parameters for ML methods, and data contamination to test the robustness of the classifiers.
Key Features:
- Using ML methods by themselves doesn't ensure building classifiers that generalize well to new data
- Identifying optimal feature subsets and hyper-parameters of ML methods can be resolved using design and analysis of statistical experiments
- A bootstrapping approach to massive sampling of training and test datasets with various data characteristics (e.g., contaminated training sets) allows dealing with bias
- A SAS-based table-driven environment manages all metadata related to the proposed AI framework and provides interoperability with R libraries to accomplish a variety of statistical and machine-learning tasks
- Computer programs in R and SAS that create the AI framework are available on GitHub
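To make the bootstrapping idea concrete, here is a minimal sketch of resampling training sets and measuring the spread of out-of-bag accuracy. This is not the book's SAS/R environment; the nearest-centroid classifier and the synthetic two-class data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data (illustrative only; the book uses its own datasets).
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

def nearest_centroid_fit(X, y):
    """Return a per-class centroid for a simple nearest-centroid classifier."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def nearest_centroid_predict(centroids, X):
    classes = sorted(centroids)
    d = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in classes])
    return np.array(classes)[d.argmin(axis=0)]

# Bootstrap: resample training sets with replacement, test on out-of-bag points.
accuracies = []
for _ in range(200):
    idx = rng.integers(0, len(X), len(X))        # sample with replacement
    oob = np.setdiff1d(np.arange(len(X)), idx)   # out-of-bag test set
    model = nearest_centroid_fit(X[idx], y[idx])
    accuracies.append((nearest_centroid_predict(model, X[oob]) == y[oob]).mean())

# The spread of out-of-bag accuracy probes the classifier's variance.
print(f"mean accuracy {np.mean(accuracies):.3f}, std {np.std(accuracies):.3f}")
```

The mean of the bootstrap accuracies estimates generalization performance, while the standard deviation across resamples is a direct, distribution-free probe of the classifier's variance.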
Mobile Microservices
In the 5G era, edge computing and new ecosystems of mobile microservices enable new business models, strategies, and competitive advantage. Focusing on microservices, this book introduces the essential concepts, technologies, and trade-offs in the edge computing architectural stack, providing for widespread adoption and dissemination. The book elucidates the concepts, architectures, well-defined building blocks, and prototypes for mobile microservice platforms and pervasive application development, as well as the implementation and configuration of service middleware and AI-based microservices. A goal-oriented service composition model is then proposed by the author, allowing for an economic assessment of connected, smart mobile services. Based on this model, costs can be minimized through statistical workload aggregation effects or backhaul data transport reduction, and customer experience and safety can be enhanced through reduced response times. This title will be a useful guide for students and IT professionals getting started with microservices and studying their use in pervasive applications. It will also appeal to researchers and students studying software architecture and service-oriented computing, and especially those interested in edge computing, pervasive computing, the Internet of Things, and mobile microservices.
Object Detection with Deep Learning Models
Object Detection with Deep Learning Models discusses recent advances in object detection and recognition using deep learning methods, which have achieved great success in the field of computer vision and image processing. It provides a systematic and methodical overview of the latest developments in deep learning theory and its applications to computer vision, illustrating them using key topics including object detection, face analysis, 3D object recognition, and image retrieval. The book offers a rich blend of theory and practice. It is suitable for students, researchers, and practitioners interested in deep learning, computer vision, and beyond, and can also be used as a reference book. The comprehensive comparison of various deep-learning applications helps readers with a basic understanding of machine learning and calculus grasp the theories and inspires applications in other computer vision tasks.
Features:
- A structured overview of deep learning in object detection
- A diversified collection of applications of object detection using deep neural networks
- An emphasis on the agriculture and remote sensing domains
- An exclusive discussion on moving object detection
Internet of Things
Today, the Internet of Things (IoT) is ubiquitous, applied in practice in everything from Industrial Control Systems (ICS) to e-Health, e-commerce, Cyber-Physical Systems (CPS), smart cities, smart parking, healthcare, supply chain management, and many more. Numerous industries, academics, alliances, and standardization organizations are working on IoT standardization, innovation, and development. But there is still a need for a comprehensive framework with integrated standards under one IoT vision. Furthermore, existing IoT systems are vulnerable to a huge range of malicious attacks owing to the massive number of deployed IoT systems, inadequate data security standards, and the resource-constrained nature of the devices. Existing security solutions are insufficient, and it is therefore necessary to enable IoT devices to dynamically counter threats and safeguard the system. Apart from illustrating diversified IoT applications, this book also addresses the issue of data safekeeping along with the development of new security-enhancing schemes such as blockchain, as well as a range of other advances in IoT. The reader will discover that the IoT facilitates a multidisciplinary approach dedicated to creating novel applications and developing integrated solutions to build a sustainable society. The innovative and fresh advances that demonstrate IoT and computational intelligence in practice are discussed in this book, which will be helpful and informative for scientists, research scholars, academicians, policymakers, industry professionals, government organizations, and others. This book is intended for a broad target audience, including scholars of various generations and disciplines, recognized scholars (lecturers and professors), and young researchers (postgraduates and undergraduates) who study the legal and socio-economic consequences of the emergence and dissemination of digital technologies such as IoT.
Furthermore, the book is intended for researchers, developers, and operators working in the field of IoT and eager to comprehend the vulnerability of the IoT paradigm. The book will serve as a comprehensive guide for advanced-level students in computer science who are interested in understanding the severity and implications of the accompanying security issues in IoT.
Dr. Bharat Bhushan is an Assistant Professor in the Department of Computer Science and Engineering (CSE) at the School of Engineering and Technology, Sharda University, Greater Noida, India.
Prof. (Dr.) Sudhir Kumar Sharma is currently a Professor and Head of the Department of Computer Science, Institute of Information Technology & Management affiliated to GGSIPU, New Delhi, India.
Prof. (Dr.) Bhuvan Unhelkar (BE, MDBA, MSc, PhD; FACS; PSM-I, CBAP®) is an accomplished IT professional and Professor of IT at the University of South Florida, Sarasota-Manatee (Lead Faculty).
Dr. Muhammad Fazal Ijaz is working as an Assistant Professor in the Department of Intelligent Mechatronics Engineering, Sejong University, Seoul, Korea.
Prof. (Dr.) Lamia Karim is a professor of computer science at the National School of Applied Sciences Berrechid (ENSAB), Hassan 1st University.
Deep Learning in Computer Vision
Deep learning algorithms have brought a revolution to the computer vision community by introducing non-traditional and efficient solutions to several image-related problems that had long remained unsolved or partially addressed. This book presents a collection of eleven chapters where each individual chapter explains the deep learning principles of a specific topic, introduces reviews of up-to-date techniques, and presents research findings to the computer vision community. The book covers a broad scope of topics in deep learning concepts and applications such as accelerating the convolutional neural network inference on field-programmable gate arrays, fire detection in surveillance applications, face recognition, action and activity recognition, semantic segmentation for autonomous driving, aerial imagery registration, robot vision, tumor detection, and skin lesion segmentation as well as skin melanoma classification. The content of this book has been organized such that each chapter can be read independently from the others. The book is a valuable companion for researchers, for postgraduate and possibly senior undergraduate students who are taking an advanced course in related topics, and for those who are interested in deep learning with applications in computer vision, image processing, and pattern recognition.
Artificial Intelligence in a Throughput Model
This book provides an overview of the existing biometric technologies, decision-making algorithms and the growth opportunity in biometrics. The book proposes a throughput model, which draws on computer science, economics and psychology to model perceptual, informational sources, judgmental processes and decision choice algorithms.
Machine Learning for Automated Theorem Proving
Automated theorem proving represents a significant and long-standing area of research in computer science, with numerous applications. A large proportion of the methods developed to date for the implementation of automated theorem provers (ATPs) have been algorithmic, sharing a great deal in common with the wider study of heuristic search algorithms. However, in recent years researchers have begun to incorporate machine learning (ML) methods into ATPs in an effort to extract better performance. Propositional satisfiability (SAT) solving and machine learning are both large and long-standing areas of research, and each has a correspondingly large literature. In this book, the author presents the results of his thorough and systematic review of the research at the intersection of these two apparently rather unrelated fields. It focuses on the research that has appeared to date on incorporating ML methods into solvers for SAT problems, and also solvers for its immediate variants such as quantified SAT (QSAT). The comprehensiveness of the coverage means that ML researchers gain an understanding of state-of-the-art SAT and QSAT solvers that is sufficient to make new opportunities for applying their own ML research to this domain clearly visible, while ATP researchers gain a clear appreciation of how state-of-the-art machine learning might help them to design better solvers. In presenting the material, the author concentrates on the learning methods used and the way in which they have been incorporated into solvers. This enables researchers and students in both automated theorem proving and machine learning to a) know what has been tried and b) understand the often complex interaction between ATP and ML that is needed for success in these undeniably challenging applications.
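As a concrete illustration of where a branching heuristic — learned or otherwise — plugs into a SAT solver, here is a minimal DPLL sketch. The occurrence-count heuristic below is a hypothetical stand-in for the learned policies the book surveys, not a method taken from it.

```python
from collections import Counter

def dpll(clauses, assignment=None):
    """Minimal DPLL SAT solver. Clauses are lists of signed ints
    (e.g. [1, -2] means x1 OR NOT x2). Returns a satisfying
    assignment dict, or None if unsatisfiable."""
    if assignment is None:
        assignment = {}
    # Simplify: drop satisfied clauses, strip falsified literals.
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue                      # clause already satisfied
        reduced = [l for l in clause if abs(l) not in assignment]
        if not reduced:
            return None                   # conflict: clause falsified
        simplified.append(reduced)
    if not simplified:
        return assignment                 # all clauses satisfied
    # Branching heuristic: pick the most frequently occurring variable.
    # In ML-augmented solvers, this choice is where a learned policy plugs in.
    counts = Counter(abs(l) for clause in simplified for l in clause)
    var = counts.most_common(1)[0][0]
    for value in (True, False):
        result = dpll(simplified, {**assignment, var: value})
        if result is not None:
            return result
    return None

# (x1 OR x2) AND (NOT x1 OR x3) AND (NOT x2 OR NOT x3)
sat = dpll([[1, 2], [-1, 3], [-2, -3]])
unsat = dpll([[1], [-1]])
print(sat, unsat)
```

Replacing the `Counter`-based branching rule with a model that predicts which variable to try first is, in miniature, the kind of ML-for-SAT integration the book reviews.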
Big Data Management in Sensing - Applications in AI and Iot
The book is centrally focused on human-computer interaction and how sensors within small and wide groups of nano-robots employ deep learning for applications in industry. It covers a wide array of topics that are useful for researchers and students to gain knowledge about AI and sensors in nanobots. Furthermore, the book explores deep learning approaches to enhance the accuracy of AI systems applied in medical robotics for surgical techniques. Finally, it explores bio-nano-robotics, a field within nano-robotics that deals with automatic intelligence handling, self-assembly and replication, information processing, and programmability.
What Every Engineer Should Know About the Internet of Things
This practical text provides an introduction to IoT that can be understood by every engineering discipline and discusses detailed applications of IoT.
Spectral Methods for Data Science
In contemporary science and engineering applications, the volume of available data is growing at an enormous rate. Spectral methods have emerged as a simple yet surprisingly effective approach for extracting information from massive, noisy, and incomplete data. A diverse array of applications has been found in machine learning, imaging science, financial and econometric modeling, and signal processing. This monograph presents a systematic yet accessible introduction to spectral methods from a modern statistical perspective, highlighting their algorithmic implications in diverse large-scale applications. The authors provide a unified and comprehensive treatment that establishes the theoretical underpinnings for spectral methods, particularly through a statistical lens. Building on years of research experience in the field, the authors present a powerful framework, called leave-one-out analysis, that proves effective and versatile for delivering fine-grained performance guarantees for a variety of problems. This book is essential reading for all students, researchers, and practitioners working in Data Science.
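As a toy illustration of the spectral approach the monograph studies, the sketch below recovers a planted low-rank subspace from a noisy symmetric matrix via its top eigenvectors. The synthetic data, rank, and noise level are illustrative assumptions, not examples from the book.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 200, 2

# Ground truth: a rank-r symmetric signal matrix M = U diag(s) U^T.
U, _ = np.linalg.qr(rng.normal(size=(n, r)))
M = U @ np.diag([10.0, 8.0]) @ U.T

# Observe M corrupted by symmetric Gaussian noise.
E = rng.normal(scale=0.05, size=(n, n))
A = M + (E + E.T) / 2

# Spectral method: take the top-r eigenvectors of the noisy matrix.
eigvals, eigvecs = np.linalg.eigh(A)   # eigenvalues sorted ascending
U_hat = eigvecs[:, -r:]

# Compare subspaces via the projection distance (0 means identical subspaces).
dist = np.linalg.norm(U @ U.T - U_hat @ U_hat.T, ord=2)
print(f"subspace distance: {dist:.4f}")
```

Because the noise operator norm is small relative to the eigengap of the signal, the top eigenvectors of the noisy matrix stay close to the planted subspace — the Davis-Kahan-type phenomenon that results like those in the monograph make quantitative.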
Computational Engineering
Computational engineering is a promising and emerging field that deals with the development of models for providing high-performance computing, to analyse designs and solve complex problems. Its framework includes data science for developing algorithms and mathematical foundations like Fourier analysis and discrete Fourier transforms. This book integrates physical and experimental approaches applied in the development of the discipline. It includes comprehensive techniques and applications to fabricate structures and networks. It focuses upon applied mathematics, computer modelling, and other related fields. This text is an asset for anyone who is interested in the field of computational engineering.
Fundamentals of Machine Learning
The scientific study of statistical models and algorithms that computer systems use in order to perform a specific task without any explicit instructions is referred to as machine learning. It relies on patterns and inference. Machine learning is a subset of artificial intelligence. The study of mathematical optimization contributes significantly to the methods, applications and theory of machine learning. Some of the different models, which are used within this field are artificial neural networks, decision trees and Bayesian networks. Machine learning is applied in various other fields such as in machine perception, agriculture, adaptive websites, bioinformatics, optimization, sentiment analysis, etc. The topics included in this book on machine learning are of utmost significance and bound to provide incredible insights to readers. It unfolds the innovative aspects of this field, which will be crucial for the progress of this field in the future. Those in search of information to further their knowledge will be greatly assisted by this book.
Tensor Regression
Regression analysis is a key area of interest in the field of data analysis and machine learning, devoted to exploring the dependencies between variables, often using vectors. The emergence of high-dimensional data in technologies such as neuroimaging, computer vision, climatology, and social networks has brought challenges to traditional data representation methods. Tensors, as high-dimensional extensions of vectors, are considered natural representations of high-dimensional data. In this book, the authors provide a systematic study and analysis of tensor-based regression models and their applications in recent years. It groups and illustrates the existing tensor-based regression methods and covers the basics, core ideas, and theoretical characteristics of most of them. In addition, readers can learn how to use existing tensor-based regression methods to solve specific regression tasks with multiway data, what datasets can be selected, and what software packages are available to start related work as soon as possible. Tensor Regression is the first thorough overview of the fundamentals, motivations, popular algorithms, strategies for efficient implementation, related applications, available datasets, and software resources for tensor-based regression analysis. It is essential reading for all students, researchers, and practitioners working on high-dimensional data.
Handbook of Automated Scoring
"Automated scoring engines [...] require a careful balancing of the contributions of technology, NLP, psychometrics, artificial intelligence, and the learning sciences. The present handbook is evidence that the theories, methodologies, and underlying technology that surround automated scoring have reached maturity, and that there is a growing acceptance of these technologies among experts and the public." From the Foreword by Alina von Davier, ACTNext Senior Vice President.
Handbook of Automated Scoring: Theory into Practice provides a scientifically grounded overview of the key research efforts required to move automated scoring systems into operational practice. It examines the field of automated scoring from the viewpoint of related scientific fields serving as its foundation, the latest developments of computational methodologies utilized in automated scoring, and several large-scale real-world applications of automated scoring for complex learning and assessment systems. The book is organized into three parts that cover (1) theoretical foundations, (2) operational methodologies, and (3) practical illustrations, each with a commentary. In addition, the handbook includes an introduction and synthesis chapter as well as a cross-chapter glossary.
Introduction to Wavelet Transforms
The textbook Introduction to Wavelet Transforms provides the basics of wavelet transforms in a self-contained manner. Applications of wavelet transform theory permeate our daily lives; therefore it is imperative to have a strong foundation in this subject.
Features:
- No prior knowledge of the subject is assumed. Sufficient mathematical background is provided to complete the discussion of different topics.
- Different topics have been properly segmented for easy learning. This makes the textbook pedagogical and unique.
- Notation is generally introduced in the definitions. Relatively easy consequences of the definitions are listed as observations, and important results are stated as theorems.
- Examples are provided for clarity and to enhance the reader's understanding of the subject.
- Each chapter also has a problem section. A majority of the problems are provided with sufficient hints.
The textbook can be used either in an upper-level undergraduate or first-year graduate class in electrical engineering, computer science, or applied mathematics. It can also be used by professionals and researchers in the field who would like a quick review of the basics of the subject.
About the Author: Nirdosh Bhatnagar works in both academia and industry in Silicon Valley, California. He is also the author of a comprehensive two-volume work, Mathematical Principles of the Internet, published by the CRC Press in 2019. Nirdosh earned an M.S. in Operations Research, and an M.S. and Ph.D. in electrical engineering, all from Stanford University, Stanford, California.
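As a taste of the subject, one level of the orthonormal Haar wavelet transform can be sketched in a few lines. This is a generic illustration, not an excerpt from the textbook.

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar wavelet transform.
    Returns (approximation, detail) coefficients; len(x) must be even."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # pairwise averages   (low-pass)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # pairwise differences (high-pass)
    return a, d

def haar_idwt(a, d):
    """Invert one level of the Haar transform exactly."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a, d = haar_dwt(x)
print("approx:", a)   # smooth trend at half the resolution
print("detail:", d)   # local fluctuations
print("perfect reconstruction:", np.allclose(haar_idwt(a, d), x))
```

Because the transform is orthonormal, it preserves the signal's energy and reconstruction is exact — the basic properties a first course on wavelet transforms establishes before moving to multi-level decompositions.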
Construct Theory
In earlier work I showed there is every reason to consider that biological life and AI are not only mathematical constructs, but that they are described in terms of one another. Here I introduce what I call the Delta-Phi function. When we say biological creation is natural, we don't say that about artificial intelligence; yet the naturally occurring elements we put together to give machines electronic logic were actually made in the interiors of stars, just like the elements of biological life.
Minimum-Distortion Embedding
Embeddings provide concrete numerical representations of otherwise abstract items, for use in downstream tasks. For example, a biologist might look for subfamilies of related cells by clustering embedding vectors associated with individual cells, while a machine learning practitioner might use vector representations of words as features for a classification task. In this monograph the authors present a general framework for faithful embedding called minimum-distortion embedding (MDE) that generalizes the common cases in which similarities between items are described by weights or distances. The MDE framework is simple but general. It includes a wide variety of specific embedding methods, including spectral embedding, principal component analysis, multidimensional scaling, Euclidean distance problems, etc. The authors provide a detailed description of the minimum-distortion embedding problem and describe the theory behind constructing its solutions. They also describe in detail algorithms for computing minimum-distortion embeddings. Finally, they provide examples of how to approximately solve many MDE problems involving real datasets, including images, co-authorship networks, United States county demographics, population genetics, and single-cell mRNA transcriptomes. An accompanying open-source software package, PyMDE, makes it easy for practitioners to experiment with different embeddings via different choices of distortion functions and constraint sets. The theory and techniques described and illustrated in this book will be of interest to researchers and practitioners working on modern-day systems that look to adopt cutting-edge artificial intelligence.
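Spectral embedding, one of the special cases of MDE named above, can be sketched without PyMDE. The small weighted graph below is an illustrative assumption; it embeds six items so that heavily connected pairs land close together.

```python
import numpy as np

# A small weighted graph: two clusters {0,1,2} and {3,4,5} joined by a weak bridge.
n = 6
W = np.zeros((n, n))
edges = [(0, 1, 1.0), (0, 2, 1.0), (1, 2, 1.0),
         (3, 4, 1.0), (3, 5, 1.0), (4, 5, 1.0),
         (2, 3, 0.1)]
for i, j, w in edges:
    W[i, j] = W[j, i] = w

# Spectral embedding minimizes sum_ij w_ij ||x_i - x_j||^2 subject to
# centering and normalization constraints; the solution is given by the
# eigenvectors of the graph Laplacian with the smallest nonzero eigenvalues.
L = np.diag(W.sum(axis=1)) - W
eigvals, eigvecs = np.linalg.eigh(L)   # eigenvalues sorted ascending
embedding = eigvecs[:, 1:3]            # skip the constant eigenvector; embed in R^2

# The first embedding coordinate separates the two clusters.
print(np.round(embedding[:, 0], 3))
```

In the MDE view, this is the quadratic-distortion special case; swapping in other distortion functions or constraint sets yields the other methods the monograph unifies.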