Chip Multiprocessor Architecture
Chip multiprocessors - also called multi-core microprocessors or CMPs for short - are now the only way to build high-performance microprocessors, for a variety of reasons. Large uniprocessors are no longer scaling in performance, because it is only possible to extract a limited amount of parallelism from a typical instruction stream using conventional superscalar instruction issue techniques. In addition, one cannot simply ratchet up the clock speed on today's processors, or the power dissipation will become prohibitive in all but water-cooled systems. Compounding these problems is the simple fact that with the immense numbers of transistors available on today's microprocessor chips, it is too costly to design and debug ever-larger processors every year or two. CMPs avoid these problems by filling up a processor die with multiple, relatively simpler processor cores instead of just one huge core. The exact size of a CMP's cores can vary from very simple pipelines to moderately complex superscalar processors, but once a core has been selected the CMP's performance can easily scale across silicon process generations simply by stamping down more copies of the hard-to-design, high-speed processor core in each successive chip generation. In addition, parallel code execution, obtained by spreading multiple threads of execution across the various cores, can achieve significantly higher performance than would be possible using only a single core. While parallel threads are already common in many useful workloads, there are still important workloads that are hard to divide into parallel threads. The low inter-processor communication latency between the cores in a CMP helps make a much wider range of applications viable candidates for parallel execution than was possible with conventional, multi-chip multiprocessors; nevertheless, limited parallelism in key applications is the main factor limiting acceptance of CMPs in some types of systems. 
After a discussion of the basic pros and cons of CMPs when they are compared with conventional uniprocessors, this book examines how CMPs can best be designed to handle two radically different kinds of workloads that are likely to be used with a CMP: highly parallel, throughput-sensitive applications at one end of the spectrum, and less parallel, latency-sensitive applications at the other. Throughput-sensitive applications, such as server workloads that handle many independent transactions at once, require careful balancing of all parts of a CMP that can limit throughput, such as the individual cores, on-chip cache memory, and off-chip memory interfaces. Several studies and example systems, such as the Sun Niagara, that examine the necessary tradeoffs are presented here. In contrast, latency-sensitive applications - many desktop applications fall into this category - require a focus on reducing inter-core communication latency and applying techniques to help programmers divide their programs into multiple threads as easily as possible. This book discusses many techniques that can be used in CMPs to simplify parallel programming, with an emphasis on research directions proposed at Stanford University. To illustrate the advantages possible with a CMP using a couple of solid examples, extra focus is given to thread-level speculation (TLS), a way to automatically break up nominally sequential applications into parallel threads on a CMP, and to transactional memory, a model that can greatly simplify manual parallel programming by using hardware - instead of conventional software locks - to enforce atomic execution of blocks of instructions, a technique that makes parallel coding much less error-prone. Contents: The Case for CMPs / Improving Throughput / Improving Latency Automatically / Improving Latency using Manual Parallel Programming / A Multicore World: The Future of CMPs
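The atomic blocks that transactional memory provides can be sketched in software, even though the book's focus is on hardware support. The following Python sketch models the idea with optimistic concurrency: a transaction records what it reads and writes, then commits only if nothing it read has changed, retrying otherwise. All names here (TVar, atomic, and the single commit lock standing in for hardware conflict arbitration) are hypothetical illustrations, not the book's API.

```python
import threading

class TVar:
    """A transactional variable: a value plus a version counter."""
    def __init__(self, value):
        self.value = value
        self.version = 0

_commit_lock = threading.Lock()  # stand-in for hardware conflict arbitration

def atomic(transaction):
    """Run transaction(read, write) atomically, retrying on conflict."""
    while True:
        read_set = {}    # TVar -> version observed
        write_set = {}   # TVar -> new value
        def read(tv):
            if tv in write_set:          # see our own pending writes
                return write_set[tv]
            read_set[tv] = tv.version
            return tv.value
        def write(tv, value):
            write_set[tv] = value
        result = transaction(read, write)
        with _commit_lock:
            # commit only if nothing we read was modified concurrently
            if all(tv.version == v for tv, v in read_set.items()):
                for tv, value in write_set.items():
                    tv.value = value
                    tv.version += 1
                return result
        # conflict detected: another transaction committed first; retry

# Usage: transfer between two accounts with no explicit locking in user code.
a, b = TVar(100), TVar(0)
def transfer(read, write, amount=30):
    write(a, read(a) - amount)
    write(b, read(b) + amount)
atomic(transfer)
print(a.value, b.value)  # prints "70 30"
```

The user code never names a lock; atomicity is a property of the block, which is exactly what makes the transactional style less error-prone than manual locking.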
Reading and Writing the Electronic Book
Developments over the last twenty years have fueled considerable speculation about the future of the book and of reading itself. This book begins with a brief history of electronic books, including the social and technical forces that have shaped their development. The focus then shifts to reading and how we interact with what we read: basic issues such as legibility, annotation, and navigation are examined as aspects of reading that ebooks inherit from their print legacy. Because reading is fundamentally communicative, I also take a closer look at the sociality of reading: how we read in a group and how we share what we read. Studies of reading and ebook use are integrated throughout the book, but Chapter 5 "goes meta" to explore how a researcher might go about designing his or her own reading-related studies. No book about ebooks is complete without an explicit discussion of content preparation, i.e., how the electronic book is written. Hence, Chapter 6 delves into the underlying representation of ebooks and efforts to create and apply markup standards to them. This chapter also examines how print genres have made the journey to digital and how some emerging digital genres might be realized as ebooks. Finally, Chapter 7 discusses some beyond-the-book functionality: how can ebook platforms be transformed into portable personal libraries? In the end, my hope is that by the time the reader reaches the end of this book, he or she will feel equipped to perform the next set of studies, write the next set of articles, invent new ebook functionality, or simply engage in a heated argument with the stranger in seat 17C about the future of reading. Table of Contents: Preface / Figure Credits / Introduction / Reading / Interaction / Reading as a Social Activity / Studying Reading / Beyond the Book / References / Author Biography
Multiresolution Frequency Domain Technique for Electromagnetics
In this book, a general frequency domain numerical method similar to the finite difference frequency domain (FDFD) technique is presented. The proposed method, called the multiresolution frequency domain (MRFD) technique, is based on orthogonal Battle-Lemarie and biorthogonal Cohen-Daubechies-Feauveau (CDF) wavelets. The objective of developing this new technique is to achieve a frequency domain scheme which exhibits improved computational efficiency compared to the traditional FDFD method: reduced memory and simulation time requirements while retaining numerical accuracy. The newly introduced MRFD scheme is successfully applied to the analysis of a number of electromagnetic problems, such as computation of resonance frequencies of one- and three-dimensional resonators, analysis of propagation characteristics of general guided wave structures, and electromagnetic scattering from two-dimensional dielectric objects. The efficiency characteristics of MRFD techniques based on different wavelets are compared to each other and to that of the FDFD method. Results indicate that the MRFD techniques provide substantial savings in terms of execution time and memory requirements, compared to the traditional FDFD method. Table of Contents: Introduction / Basics of the Finite Difference Method and Multiresolution Analysis / Formulation of the Multiresolution Frequency Domain Schemes / Application of MRFD Formulation to Closed Space Structures / Application of MRFD Formulation to Open Space Structures / A Multiresolution Frequency Domain Formulation for Inhomogeneous Media / Conclusion
DFT-Domain Based Single-Microphone Noise Reduction for Speech Enhancement
As speech processing devices like mobile phones, voice controlled devices, and hearing aids have increased in popularity, people expect them to work anywhere and at any time without user intervention. However, the presence of acoustical disturbances limits the use of these applications, degrades their performance, or causes the user difficulties in understanding the conversation or appreciating the device. A common way to reduce the effects of such disturbances is through the use of single-microphone noise reduction algorithms for speech enhancement. The field of single-microphone noise reduction for speech enhancement comprises a history of more than 30 years of research. In this survey, we wish to demonstrate the significant advances that have been made during the last decade in the field of discrete Fourier transform domain-based single-channel noise reduction for speech enhancement.Furthermore, our goal is to provide a concise description of a state-of-the-art speech enhancement system, and demonstrate the relative importance of the various building blocks of such a system. This allows the non-expert DSP practitioner to judge the relevance of each building block and to implement a close-to-optimal enhancement system for the particular application at hand. Table of Contents: Introduction / Single Channel Speech Enhancement: General Principles / DFT-Based Speech Enhancement Methods: Signal Model and Notation / Speech DFT Estimators / Speech Presence Probability Estimation / Noise PSD Estimation / Speech PSD Estimation / Performance Evaluation Methods / Simulation Experiments with Single-Channel Enhancement Systems / Future Directions
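The core of a DFT-domain enhancement system of the kind surveyed here is a per-bin gain applied to the noisy spectrum. The sketch below shows one of the simplest such gains, the Wiener rule G = SNR/(1+SNR), with the SNR crudely estimated by power subtraction from a given noise PSD estimate; function and variable names are illustrative assumptions, and real systems use more refined SNR and noise trackers (as the book's chapter list indicates).

```python
def wiener_gains(noisy_power, noise_psd, floor=0.05):
    """Per-bin Wiener gain G = SNR/(1+SNR). The SNR is estimated by
    power subtraction; a spectral floor limits musical-noise artifacts."""
    gains = []
    for py, pn in zip(noisy_power, noise_psd):
        snr = max(py - pn, 0.0) / pn          # crude a priori SNR estimate
        gains.append(max(snr / (1.0 + snr), floor))
    return gains

# Usage on four hypothetical DFT bins (unit noise PSD in each):
# bin 2 is dominated by speech, bin 3 contains noise alone.
noisy = [1.1, 2.0, 9.0, 1.0]
noise = [1.0, 1.0, 1.0, 1.0]
g = wiener_gains(noisy, noise)
enhanced = [gi * yi for gi, yi in zip(g, noisy)]
```

Speech-dominated bins pass nearly unattenuated while noise-only bins are suppressed to the floor, which is the essential behavior every estimator in this family refines.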
Real-Time Image and Video Processing
This book presents an overview of the guidelines and strategies for transitioning an image or video processing algorithm from a research environment into a real-time constrained environment. Such guidelines and strategies are scattered in the literature of various disciplines including image processing, computer engineering, and software engineering, and thus have not previously appeared in one place. By bringing these strategies into one place, the book is intended to serve the greater community of researchers, practicing engineers, and industrial professionals who are interested in taking an image or video processing algorithm from a research environment to an actual real-time implementation on a resource constrained hardware platform. These strategies consist of algorithm simplifications, hardware architectures, and software methods. Throughout the book, carefully selected representative examples from the literature are presented to illustrate the discussed concepts. After reading the book, readers will be familiar with a wide variety of techniques and tools, which they can then employ to design a real-time image or video processing system.
Joint Source Channel Coding Using Arithmetic Codes
Based on the encoding process, arithmetic codes can be viewed as tree codes, and current proposals for decoding arithmetic codes with forbidden symbols belong to sequential decoding algorithms and their variants. In this monograph, we propose a new way of looking at arithmetic codes with forbidden symbols. If a limit is imposed on the maximum value of a key parameter in the encoder, this modified arithmetic encoder can also be modeled as a finite state machine, and the code generated can be treated as a variable-length trellis code. The number of states used can be reduced, and techniques used for decoding convolutional codes, such as the list Viterbi decoding algorithm, can be applied directly on the trellis. The finite state machine interpretation can be easily migrated to the Markov source case. We can encode Markov sources without considering the conditional probabilities, while using the list Viterbi decoding algorithm, which utilizes the conditional probabilities. We can also use context-based arithmetic coding to exploit the conditional probabilities of the Markov source and apply a finite state machine interpretation to this problem. The finite state machine interpretation also allows us to understand arithmetic codes with forbidden symbols more systematically; in particular, it allows us to find their partial distance spectrum. We also propose arithmetic codes with memory, which combine a large input memory with low implementation precision. The low implementation precision results in a state machine of lower complexity, while the introduced input memory allows us to switch the probability functions used for arithmetic coding. Combining these two methods gives us a huge parameter space of arithmetic codes with forbidden symbols, so we can choose codes with better distance properties while maintaining the encoding efficiency and decoding complexity.
A construction and search method is proposed, and simulation results show that we can achieve performance similar to that of turbo codes when this approach is applied to rate-2/3 arithmetic codes. Table of Contents: Introduction / Arithmetic Codes / Arithmetic Codes with Forbidden Symbols / Distance Property and Code Construction / Conclusion
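The idea of a forbidden symbol can be shown in a few lines: the encoder shrinks the probability of the real symbols so that a gap of probability eps is never used, and a decoder that ever lands in that gap has detected a transmission error. The sketch below is a minimal floating-point illustration for a binary source (real coders use integer arithmetic and renormalization; all names here are hypothetical).

```python
def encode(symbols, probs, eps=0.1):
    """Float-precision arithmetic encoder for a binary source, reserving
    probability eps for a forbidden symbol that is never transmitted.
    Returns a number inside the final interval."""
    low, high = 0.0, 1.0
    p0 = probs[0] * (1.0 - eps)       # real symbols shrunk by eps
    p1 = probs[1] * (1.0 - eps)
    for s in symbols:
        width = high - low
        if s == 0:
            high = low + width * p0
        else:
            low = low + width * p0
            high = low + width * p1
    return (low + high) / 2.0

def decode(code, n, probs, eps=0.1):
    """Decode n symbols; return None if the code falls in the forbidden
    gap, which signals an error in the received code value."""
    low, high = 0.0, 1.0
    p0 = probs[0] * (1.0 - eps)
    p1 = probs[1] * (1.0 - eps)
    out = []
    for _ in range(n):
        width = high - low
        if code < low + width * p0:
            out.append(0)
            high = low + width * p0
        elif code < low + width * (p0 + p1):
            out.append(1)
            low = low + width * p0
            high = low + width * p1
        else:
            return None  # forbidden region entered: error detected
    return out

msg = [0, 1, 1, 0, 0]
code = encode(msg, [0.6, 0.4])
decoded = decode(code, 5, [0.6, 0.4])
```

The eps gap is exactly the redundancy that gives the code its error-detecting distance properties; a sequential or list decoder prunes any path that enters it.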
Antennas with Non-Foster Matching Networks
Most antenna engineers are likely to believe that antennas are one technology that is more or less impervious to the rapidly advancing semiconductor industry. However, as demonstrated in this lecture, there is a way to incorporate active components into an antenna and transform it into a new kind of radiating structure that can take advantage of the latest advances in analog circuit design. The approach for making this transformation is to make use of non-Foster circuit elements in the matching network of the antenna. By doing so, we are no longer constrained by the laws of physics that apply to passive antennas. However, we must now design and construct very touchy active circuits. This new antenna technology is now in its infancy. The contributions of this lecture are (1) to summarize the current state-of-the-art in this subject, and (2) to introduce some new theoretical and practical tools for helping us to continue the advancement of this technology.
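The key property of a non-Foster element can be seen with simple complex arithmetic. A passive network obeys Foster's reactance theorem (reactance rises with frequency), so it can cancel an electrically small antenna's capacitive reactance only at one frequency; an ideal negative capacitor cancels it at every frequency. The sketch below uses an idealized series R-C antenna model with hypothetical component values; it illustrates the principle only, not a realizable circuit.

```python
import math

def series_rc_impedance(r, c, f):
    """Idealized electrically small antenna: series resistance and capacitance."""
    w = 2 * math.pi * f
    return complex(r, -1.0 / (w * c))

def negative_cap_impedance(c, f):
    """Ideal non-Foster matching element: a negative capacitance -c."""
    w = 2 * math.pi * f
    return complex(0.0, 1.0 / (w * c))   # equals -1/(j*w*(-c))

r, c = 10.0, 5e-12   # hypothetical 10-ohm, 5-pF small antenna
for f in (30e6, 100e6, 300e6):
    z = series_rc_impedance(r, c, f) + negative_cap_impedance(c, f)
    # reactance cancels at every frequency: z stays purely resistive
    print(f / 1e6, z)
```

A passive inductor could achieve the same cancellation only at the single frequency where 1/(wC) = wL, which is exactly the bandwidth limitation that non-Foster matching escapes.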
Image Understanding using Sparse Representations
Image understanding has been playing an increasingly crucial role in several inverse problems and computer vision. Sparse models form an important component in image understanding, since they emulate the activity of neural receptors in the primary visual cortex of the human brain. Sparse methods have been utilized in several learning problems because of their ability to provide parsimonious, interpretable, and efficient models. Exploiting the sparsity of natural signals has led to advances in several application areas including image compression, denoising, inpainting, compressed sensing, blind source separation, super-resolution, and classification. The primary goal of this book is to present the theory and algorithmic considerations in using sparse models for image understanding and computer vision applications. To this end, algorithms for obtaining sparse representations and their performance guarantees are discussed in the initial chapters. Furthermore, approaches for designing overcomplete, data-adapted dictionaries to model natural images are described. The development of theory behind dictionary learning involves exploring its connection to unsupervised clustering and analyzing its generalization characteristics using principles from statistical learning theory. An exciting application area that has benefited extensively from the theory of sparse representations is compressed sensing of image and video data. Theory and algorithms pertinent to measurement design, recovery, and model-based compressed sensing are presented. The paradigm of sparse models, when suitably integrated with powerful machine learning frameworks, can lead to advances in computer vision applications such as object recognition, clustering, segmentation, and activity recognition. 
Frameworks that enhance the performance of sparse models in such applications by imposing constraints based on the prior discriminatory information and the underlying geometrical structure, and kernelizing the sparse coding and dictionary learning methods are presented. In addition to presenting theoretical fundamentals in sparse learning, this book provides a platform for interested readers to explore the vastly growing application domains of sparse representations.
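The sparse approximation problem at the heart of the material above can be illustrated with the simplest greedy pursuit. The sketch below implements matching pursuit (a simpler cousin of the orthogonal variants analyzed in the book) over a tiny hypothetical dictionary; it is a pedagogical stand-in, not one of the book's algorithms.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matching_pursuit(signal, dictionary, n_iter=10):
    """Greedy sparse approximation: at each step pick the unit-norm atom
    most correlated with the residual and subtract its contribution.
    `dictionary` is a list of unit-norm atoms (lists of floats)."""
    residual = list(signal)
    coeffs = [0.0] * len(dictionary)
    for _ in range(n_iter):
        k = max(range(len(dictionary)),
                key=lambda i: abs(dot(dictionary[i], residual)))
        a = dot(dictionary[k], residual)
        coeffs[k] += a
        residual = [r - a * d for r, d in zip(residual, dictionary[k])]
    return coeffs, residual

# Usage: a signal that is exactly 2*atom0 + 1*atom2 in an orthonormal
# dictionary, so the pursuit recovers the sparse code exactly.
atoms = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
x = [2.0, 0.0, 1.0]
coeffs, res = matching_pursuit(x, atoms, n_iter=3)
```

With overcomplete, data-adapted dictionaries the atoms are no longer orthogonal, and the recovery guarantees discussed in the opening chapters determine when greedy pursuit still finds the sparsest code.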
The Theory of Linear Prediction
Linear prediction theory has had a profound impact in the field of digital signal processing. Although the theory dates back to the early 1940s, its influence can still be seen in applications today. The theory is based on very elegant mathematics and leads to many beautiful insights into statistical signal processing. Although prediction is only a part of the more general topics of linear estimation, filtering, and smoothing, this book focuses on linear prediction. This has enabled detailed discussion of a number of issues that are normally not found in texts. For example, the theory of vector linear prediction is explained in considerable detail and so is the theory of line spectral processes. This focus and its small size make the book different from many excellent texts which cover the topic, including a few that are actually dedicated to linear prediction. There are several examples and computer-based demonstrations of the theory. Applications are mentioned wherever appropriate, but the focus is not on the detailed development of these applications. The writing style is meant to be suitable for self-study as well as for classroom use at the senior and first-year graduate levels. The text is self-contained for readers with introductory exposure to signal processing, random processes, and the theory of matrices, and a historical perspective and detailed outline are given in the first chapter. Table of Contents: Introduction / The Optimal Linear Prediction Problem / Levinson's Recursion / Lattice Structures for Linear Prediction / Autoregressive Modeling / Prediction Error Bound and Spectral Flatness / Line Spectral Processes / Linear Prediction Theory for Vector Processes / Appendix A: Linear Estimation of Random Variables / B: Proof of a Property of Autocorrelations / C: Stability of the Inverse Filter / Recursion Satisfied by AR Autocorrelations
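Levinson's recursion, one of the chapters listed above, is compact enough to sketch directly. The following Python version computes the optimal forward predictor from autocorrelation values; it is a generic textbook implementation offered for orientation, not code from the book.

```python
def levinson_durbin(r, order):
    """Levinson-Durbin recursion: from autocorrelations r[0..order],
    compute prediction-filter coefficients a (a[0] = 1) and the final
    prediction error power."""
    a = [0.0] * (order + 1)
    a[0] = 1.0
    error = r[0]
    for m in range(1, order + 1):
        # reflection (PARCOR) coefficient for order m
        acc = r[m] + sum(a[i] * r[m - i] for i in range(1, m))
        k = -acc / error
        a_prev = a[:]
        for i in range(1, m):          # update interior coefficients
            a[i] = a_prev[i] + k * a_prev[m - i]
        a[m] = k
        error *= (1.0 - k * k)         # error shrinks by (1 - k^2)
    return a, error

# Usage: an AR(1) process x[n] = 0.9 x[n-1] + w[n] has autocorrelation
# proportional to 0.9**|k|; the order-1 predictor recovers a[1] = -0.9.
r = [0.9 ** k for k in range(3)]
a, err = levinson_durbin(r, 1)
```

The reflection coefficients produced along the way are exactly the lattice-structure parameters treated in the chapter on lattice structures.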
Learning Programming Using Matlab
This book is intended for anyone trying to learn the fundamentals of computer programming. The chapters lead the reader through the various steps required for writing a program, introducing the MATLAB(R) constructs in the process. MATLAB(R) is used to teach programming because it has a simple programming environment. It has a low initial overhead which allows the novice programmer to begin programming immediately and allows the users to easily debug their programs. This is especially useful for people who have a "mental block" about computers. Although MATLAB(R) is a high-level language and interactive environment that enables the user to perform computationally intensive tasks faster than with traditional programming languages such as C, C++, and Fortran, the author shows that it can also be used as a programming learning tool for novices. There are a number of exercises at the end of each chapter which should help users become comfortable with the language.
Biomedical Image Analysis
In biological and medical imaging applications, tracking objects in motion is a critical task. This book describes the state-of-the-art in biomedical tracking techniques. We begin by detailing methods for tracking using active contours, which have been highly successful in biomedical applications. The book next covers the major probabilistic methods for tracking. Starting with the basic Bayesian model, we describe the Kalman filter and conventional tracking methods that use centroid and correlation measurements for target detection. Innovations such as the extended Kalman filter and the interacting multiple model open the door to capturing complex biological objects in motion. A salient highlight of the book is the introduction of the recently emerged particle filter, which promises to solve tracking problems that were previously intractable by conventional means. Another unique feature of Biomedical Image Analysis: Tracking is the explanation of shape-based methods for biomedical image analysis. Methods for both rigid and nonrigid objects are depicted. Each chapter in the book puts forth biomedical case studies that illustrate the methods in action.
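The Bayesian machinery the book builds up to can be previewed with the simplest possible case: a scalar Kalman filter tracking a stationary object from noisy position measurements. The sketch below is a generic illustration with hypothetical names and data, not one of the book's case studies; the extended Kalman filter and particle filter generalize the same predict/update cycle to nonlinear motion.

```python
def kalman_track(measurements, q=0.01, r=1.0):
    """Scalar constant-position Kalman filter. q is the process noise
    variance, r the measurement noise variance."""
    x, p = measurements[0], 1.0        # initialize from the first sighting
    estimates = [x]
    for z in measurements[1:]:
        p = p + q                      # predict: uncertainty grows
        k = p / (p + r)                # Kalman gain
        x = x + k * (z - x)            # update with the innovation
        p = (1.0 - k) * p              # posterior uncertainty shrinks
        estimates.append(x)
    return estimates

# Usage: noisy sightings of a cell whose true position is 5.0.
zs = [5.3, 4.8, 5.1, 4.9, 5.2, 5.0]
est = kalman_track(zs)
```

Each estimate blends prediction and measurement in proportion to their uncertainties, which is the principle that carries over unchanged to the multidimensional biomedical trackers described in the book.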
Circuit Analysis with Multisim
This book is concerned with circuit simulation using National Instruments Multisim. It focuses on the use and comprehension of the working techniques for electrical and electronic circuit simulation. The first chapters are devoted to basic circuit analysis. It starts by describing in detail how to perform a DC analysis using only resistors and independent and controlled sources. Then, it introduces capacitors and inductors to make a transient analysis. In the case of transient analysis, it is possible to have an initial condition either in the capacitor voltage or in the inductor current, or both. Fourier analysis is discussed in the context of transient analysis. Next, we make a treatment of AC analysis to simulate the frequency response of a circuit. Then, we introduce diodes, transistors, and circuits composed of them and perform DC, transient, and AC analyses. The book ends with simulation of digital circuits. A practical approach is followed through the chapters, using step-by-step examples to introduce new Multisim circuit elements, tools, analyses, and virtual instruments for measurement. The examples are clearly commented and illustrated. The different tools available in Multisim are used when appropriate so readers learn which analyses are available to them; this is part of the learning outcomes that should result after each set of end-of-chapter exercises is worked out. Table of Contents: Introduction to Circuit Simulation / Resistive Circuits / Time Domain Analysis -- Transient Analysis / Frequency Domain Analysis -- AC Analysis / Semiconductor Devices / Digital Circuits
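Under the hood, the DC analysis a simulator like Multisim performs is nodal analysis: assemble a conductance matrix G and source vector i, then solve G v = i for the node voltages. The sketch below works a hypothetical three-resistor circuit by hand; it illustrates the method the simulator automates, not Multisim's internals.

```python
def solve2(G, i):
    """Solve the 2x2 nodal system G v = i by Cramer's rule."""
    det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
    v1 = (i[0] * G[1][1] - G[0][1] * i[1]) / det
    v2 = (G[0][0] * i[1] - i[0] * G[1][0]) / det
    return v1, v2

# Hypothetical circuit: a 1 A source into node 1; R1 = 2 ohm from node 1
# to ground, R2 = 2 ohm between nodes 1 and 2, R3 = 2 ohm from node 2
# to ground. Diagonal entries sum the conductances at each node;
# off-diagonal entries are minus the conductance between nodes.
g1, g2, g3 = 0.5, 0.5, 0.5
G = [[g1 + g2, -g2],
     [-g2, g2 + g3]]
i = [1.0, 0.0]
v1, v2 = solve2(G, i)   # v1 = 4/3 V, v2 = 2/3 V
```

A quick check by series-parallel reduction confirms the answer: node 1 sees R1 in parallel with R2 + R3, i.e. 4/3 ohm, so 1 A produces v1 = 4/3 V, and the R2-R3 divider halves it to v2 = 2/3 V.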
Adaptive High-Resolution Sensor Waveform Design for Tracking
Recent innovations in modern radar for designing transmitted waveforms, coupled with new algorithms for adaptively selecting the waveform parameters at each time step, have resulted in improvements in tracking performance. Of particular interest are waveforms that can be mathematically designed to have reduced ambiguity function sidelobes, as their use can lead to an increase in the target state estimation accuracy. Moreover, adaptively positioning the sidelobes can reveal weak target returns by reducing interference from stronger targets. The manuscript provides an overview of recent advances in the design of multicarrier phase-coded waveforms based on Bjorck constant-amplitude zero-autocorrelation (CAZAC) sequences for use in an adaptive waveform selection scheme for multiple target tracking. The adaptive waveform design is formulated using sequential Monte Carlo techniques that need to be matched to the high resolution measurements. The work will be of interest to both practitioners and researchers in radar as well as to researchers in other applications where high resolution measurements can have significant benefits. Table of Contents: Introduction / Radar Waveform Design / Target Tracking with a Particle Filter / Single Target Tracking with LFM and CAZAC Sequences / Multiple Target Tracking / Conclusions
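The CAZAC property itself is easy to verify numerically. The sketch below generates a Zadoff-Chu sequence, a widely used CAZAC family (the book works with Bjorck sequences, a different family with the same defining properties), and checks constant amplitude and zero periodic autocorrelation at all nonzero lags.

```python
import cmath
import math

def zadoff_chu(n_len, root=1):
    """Zadoff-Chu CAZAC sequence of odd length n_len with root index
    coprime to n_len."""
    return [cmath.exp(-1j * math.pi * root * n * (n + 1) / n_len)
            for n in range(n_len)]

def periodic_autocorr(x, shift):
    """Normalized periodic autocorrelation at the given lag."""
    n = len(x)
    return sum(x[i] * x[(i + shift) % n].conjugate() for i in range(n)) / n

z = zadoff_chu(7)
amps = [abs(v) for v in z]                                  # all equal to 1
acs = [abs(periodic_autocorr(z, s)) for s in range(1, 7)]   # all near 0
```

Constant amplitude keeps the transmitter's power amplifier in its efficient regime, while the impulse-like autocorrelation is what yields the low ambiguity-function sidelobes exploited by the adaptive waveform selection scheme.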
Modeling Digital Switching Circuits with Linear Algebra
Modeling Digital Switching Circuits with Linear Algebra describes an approach for modeling digital information and circuitry that is an alternative to Boolean algebra. While the Boolean algebraic model has been wildly successful and is responsible for many advances in modern information technology, the approach described in this book offers new insight and different ways of solving problems. Modeling the bit as a vector instead of a scalar value in the set {0, 1} allows digital circuits to be characterized with transfer functions in the form of a linear transformation matrix. The use of transfer functions is ubiquitous in many areas of engineering and their rich background in linear systems theory and signal processing is easily applied to digital switching circuits with this model. The common tasks of circuit simulation and justification are specific examples of the application of the linear algebraic model and are described in detail. The advantages offered by the new model as compared to traditional methods are emphasized throughout the book. Furthermore, the new approach is easily generalized to other types of information processing circuits such as those based upon multiple-valued or quantum logic; thus providing a unifying mathematical framework common to each of these areas. Modeling Digital Switching Circuits with Linear Algebra provides a blend of theoretical concepts and practical issues involved in implementing the method for circuit design tasks. Data structures are described and are shown to not require any more resources for representing the underlying matrices and vectors than those currently used in modern electronic design automation (EDA) tools based on the Boolean model. Algorithms are described that perform simulation, justification, and other common EDA tasks in an efficient manner that are competitive with conventional design tools. 
The linear algebraic model can be used to implement common EDA tasks directly upon a structural netlist thus avoiding the intermediate step of transforming a circuit description into a representation of a set of switching functions as is commonly the case when conventional Boolean techniques are used. Implementation results are provided that empirically demonstrate the practicality of the linear algebraic model.
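The vector-bit idea described above fits in a few lines. In the sketch below (a generic illustration consistent with the model's description, not the book's data structures), 0 and 1 become the vectors [1, 0] and [0, 1], a gate becomes a transfer matrix, and simulating a circuit is just matrix-vector multiplication over the tensor product of the inputs.

```python
# Bits as vectors: 0 -> [1, 0], 1 -> [0, 1]. A gate is a transfer matrix
# acting on the (tensor) product of its input vectors.
ZERO, ONE = [1, 0], [0, 1]

NOT = [[0, 1],
       [1, 0]]
# AND maps the 4-dim joint input vector (inputs ordered 00, 01, 10, 11)
# to a 2-dim output vector: only input 11 produces output 1.
AND = [[1, 1, 1, 0],
       [0, 0, 0, 1]]

def matvec(m, v):
    """Multiply matrix m by column vector v."""
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in m]

def kron(u, v):
    """Tensor (Kronecker) product of two vectors."""
    return [ui * vj for ui in u for vj in v]

# Simulate y = NOT(a AND b) for a = 1, b = 1 by pure matrix arithmetic.
y = matvec(NOT, matvec(AND, kron(ONE, ONE)))   # NAND(1, 1) -> ZERO
```

Composing gates becomes matrix multiplication, so a whole netlist collapses to a single transfer function, which is what makes simulation and justification direct linear-algebra operations in this model.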
MATLAB(R) Software for the Code Excited Linear Prediction Algorithm
This book describes several modules of the Code Excited Linear Prediction (CELP) algorithm. The authors use the Federal Standard-1016 CELP MATLAB(R) software to describe in detail several functions and parameter computations associated with analysis-by-synthesis linear prediction. The book begins with a description of the basics of linear prediction followed by an overview of the FS-1016 CELP algorithm. Subsequent chapters describe the various modules of the CELP algorithm in detail. In each chapter, an overall functional description of CELP modules is provided along with detailed illustrations of their MATLAB(R) implementation. Several code examples and plots are provided to highlight some of the key CELP concepts. A link to the MATLAB(R) code is provided within the book. Table of Contents: Introduction to Linear Predictive Coding / Autocorrelation Analysis and Linear Prediction / Line Spectral Frequency Computation / Spectral Distortion / The Codebook Search / The FS-1016 Decoder
Fundamentals of Electromagnetics 2
This book is the second of two volumes created to provide an understanding of the basic principles and applications of electromagnetic fields for electrical engineering students. Fundamentals of Electromagnetics Vol 2: Quasistatics and Waves examines how the low-frequency models of lumped elements are modified to include parasitic elements. For even higher frequencies, wave behavior in space and on transmission lines is explained. Finally, the textbook concludes with details of transmission line properties and applications. Upon completion of this book and its companion, Fundamentals of Electromagnetics Vol 1: Internal Behavior of Lumped Elements, which focuses on the DC and low-frequency behavior of electromagnetic fields within lumped elements, students will have gained the necessary knowledge to progress to advanced studies of electromagnetics.
FPGA-Accelerated Simulation of Computer Systems
To date, the most common simulators of computer systems have been software-based, running on standard computers. One promising approach to improve simulation performance is to apply hardware, specifically reconfigurable hardware in the form of field programmable gate arrays (FPGAs). This manuscript describes various approaches to using FPGAs to accelerate software-implemented simulation of computer systems, along with selected simulators that incorporate those techniques. More precisely, we describe a simulation architecture taxonomy that incorporates a simulation architecture specifically designed for FPGA accelerated simulation, survey the state-of-the-art in FPGA-accelerated simulation, and describe in detail selected instances of the described techniques. Table of Contents: Preface / Acknowledgments / Introduction / Simulator Background / Accelerating Computer System Simulators with FPGAs / Simulation Virtualization / Categorizing FPGA-based Simulators / Conclusion / Bibliography / Authors' Biographies
Representations of Multiple-Valued Logic Functions
Compared to binary switching functions, multiple-valued (MV) functions offer more compact representations of the information content of signals modeled by logic functions and, therefore, their use fits very well in the general settings of data compression attempts and approaches. The first task in dealing with such signals is to provide mathematical methods for their representation in a way that will make their application in practice feasible. Representations of Multiple-Valued Logic Functions is aimed at providing an accessible introduction to these mathematical techniques that are necessary for application of related implementation methods and tools. This book presents in a uniform way different representations of multiple-valued logic functions, including functional expressions, spectral representations on finite Abelian groups, and their graphical counterparts (various related decision diagrams). Three-valued, or ternary, functions are traditionally used as the first extension from the binary case. A useful feature of ternary functions is that the ratio between the number of bits and the number of different values that can be encoded with that number of bits is favourable. Four-valued functions, also called quaternary functions, are particularly attractive, since in practical realization within today's prevalent binary circuit environment they can easily be coded with binary values and realized with two-stable-state circuits. At the same time, the design of four-valued logic circuits is considerably more advanced than that of circuits for other p-valued functions. Therefore, this book is written using a hands-on approach: after introducing the general and necessarily abstract background theory, the presentation is based on a large number of examples for ternary and quaternary functions that should provide an intuitive understanding of various representation methods and the interconnections among them.
Table of Contents: Multiple-Valued Logic Functions / Functional Expressions for Multiple-Valued Functions / Spectral Representations of Multiple-Valued Functions / Decision Diagrams for Multiple-Valued Functions / Fast Calculation Algorithms
Embedded System Design with the Atmel AVR Microcontroller II
This textbook provides practicing scientists and engineers an advanced treatment of the Atmel AVR microcontroller. This book is intended as a follow-on to a previously published book, titled Atmel AVR Microcontroller Primer: Programming and Interfacing. Some of the content from this earlier text is retained for completeness. This book will emphasize advanced programming and interfacing skills. We focus on system level design consisting of several interacting microcontroller subsystems. The first chapter discusses the system design process. Our approach is to provide the skills to quickly get up to speed to operate the internationally popular Atmel AVR microcontroller line by developing systems level design skills. We use the Atmel ATmega164 as a representative sample of the AVR line. The knowledge you gain on this microcontroller can be easily translated to every other microcontroller in the AVR line. In succeeding chapters, we cover the main subsystems aboard the microcontroller, providing a short theory section followed by a description of the related microcontroller subsystem with accompanying software for the subsystem. We then provide advanced examples exercising some of the features discussed. In all examples, we use the C programming language. The code provided can be readily adapted to the wide variety of compilers available for the Atmel AVR microcontroller line. We also include a chapter describing how to interface the microcontroller to a wide variety of input and output devices. The book concludes with several detailed system level design examples employing the Atmel AVR microcontroller. Table of Contents: Embedded Systems Design / Atmel AVR Architecture Overview / Serial Communication Subsystem / Analog to Digital Conversion (ADC) / Interrupt Subsystem / Timing Subsystem / Atmel AVR Operating Parameters and Interfacing / System Level Design
Basic Simulation Models of Phase Tracking Devices Using MATLAB
The Phase-Locked Loop (PLL), and many of the devices used for frequency and phase tracking, carrier and symbol synchronization, demodulation, and frequency synthesis, are fundamental building blocks in today's complex communications systems. It is therefore essential for both students and practicing communications engineers interested in the design and implementation of modern communication systems to understand, and have insight into, the behavior of these important and ubiquitous devices. Since the PLL behaves as a nonlinear device (at least during acquisition), computer simulation can be used to great advantage in gaining insight into the behavior of the PLL and the devices derived from it. The purpose of this Synthesis Lecture is to provide basic theoretical analyses of the PLL and devices derived from the PLL, together with simulation models suitable for supplementing undergraduate and graduate courses in communications. The Synthesis Lecture is also suitable for self-study by practicing engineers. A significant component of this book is a set of basic MATLAB-based simulations that illustrate the operating characteristics of PLL-based devices and enable the reader to investigate the impact of varying system parameters. Rather than provide a comprehensive treatment of the underlying theory of phase-locked loops, the book develops theoretical analyses in sufficient detail to explain how the simulations are constructed. The references point to sources currently available that treat this subject in considerable technical depth and are suitable for additional study. Table of Contents: Introduction / Basic PLL Theory / Structures Developed From The Basic PLL / Simulation Models / MATLAB Simulations / Noise Performance Analysis
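To give a flavor of the kind of simulation model the book develops (sketched here in Python rather than the book's MATLAB; the loop gains, sample rate, and frequency offset below are illustrative assumptions, not values from the text), a minimal second-order PLL tracking a fixed frequency offset can be written as:

```python
import math

def simulate_pll(f_offset=50.0, fs=10_000.0, n=5000, kp=0.2, ki=0.01):
    """Discrete-time PLL with a sinusoidal phase detector and a
    proportional-plus-integral loop filter, tracking a frequency offset."""
    phi_in = 0.0   # input signal phase
    phi_out = 0.0  # NCO (numerically controlled oscillator) phase
    integ = 0.0    # loop-filter integrator (converges to the phase step/sample)
    err = 0.0
    for _ in range(n):
        phi_in += 2 * math.pi * f_offset / fs   # input advances at the offset rate
        err = math.sin(phi_in - phi_out)        # sinusoidal phase detector
        integ += ki * err                       # integral path tracks the frequency
        phi_out += kp * err + integ             # NCO phase update
    return err, integ

err, integ = simulate_pll()
# After lock, the phase error is ~0 and the integrator holds the
# per-sample phase increment corresponding to the frequency offset.
```

With an integrator in the loop, the steady-state phase error for a frequency step is zero; varying `kp` and `ki` changes the acquisition behavior, which is exactly the kind of parameter study the book's simulations support.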
High Performance Networks
Datacenter networks provide the communication substrate for large parallel computer systems that form the ecosystem for high performance computing (HPC) systems and modern Internet applications. The design of new datacenter networks is motivated by an array of applications, ranging from communication-intensive climatology, complex material simulations, and molecular dynamics to such Internet applications as Web search, language translation, collaborative Internet applications, streaming video, and voice-over-IP. For both supercomputing and cloud computing, the network enables distributed applications to communicate and interoperate in an orchestrated and efficient way. This book describes the design and engineering tradeoffs of datacenter networks. It describes interconnection networks from topology and network architecture to routing algorithms, and presents opportunities for taking advantage of the emerging technology trends that are influencing router microarchitecture. With the emergence of "many-core" processor chips, it is evident that we will also need "many-port" routing chips to provide a bandwidth-rich network and avoid the performance-limiting effects of Amdahl's Law. We provide an overview of conventional topologies and their routing algorithms and show how technology, signaling rates, and cost-effective optics are motivating new network topologies that scale up to millions of hosts. The book also provides detailed case studies of two high performance parallel computer systems and their networks. Table of Contents: Introduction / Background / Topology Basics / High-Radix Topologies / Routing / Scalable Switch Microarchitecture / System Packaging / Case Studies / Closing Remarks
Block Transceivers
The demand for data traffic over mobile communication networks has increased substantially during the last decade. As a result, mobile broadband devices consume the available spectrum fiercely, driving the search for new technologies. In transmissions where the channel exhibits frequency-selective behavior, multicarrier modulation (MCM) schemes have proven to be more efficient, in terms of spectral usage, than conventional modulations and spread spectrum techniques. Orthogonal frequency-division multiplexing (OFDM) is the most popular MCM method, since it not only increases spectral efficiency but also yields simple transceivers. All OFDM-based systems, including single-carrier with frequency-domain equalization (SC-FD), transmit redundancy in order to cope with the problem of interference among symbols. This book presents OFDM-inspired systems that are able to, at most, halve the amount of redundancy used by OFDM systems while keeping the computational complexity comparable. Such systems, herein called memoryless linear time-invariant (LTI) transceivers with reduced redundancy, require low-complexity arithmetical operations and fast algorithms. In addition, whenever the block transmitter and receiver have memory and/or are linear time-varying (LTV), it is possible to reduce the redundancy in the transmission even further, as also discussed in this book. For transceivers with memory it is possible to eliminate the redundancy entirely, at the cost of making the channel equalization more difficult. Moreover, when time-varying block transceivers are employed, the amount of redundancy can be as low as a single symbol per block, regardless of the size of the channel memory. With the techniques presented in this book it is possible to address what lies beyond the use of OFDM-related solutions in broadband transmissions.
Table of Contents: The Big Picture / Transmultiplexers / OFDM / Memoryless LTI Transceivers with Reduced Redundancy / FIR LTV Transceivers with Reduced Redundancy
Sparse Representations for Radar with MATLAB Examples
Although the field of sparse representations is relatively new, research activities in academic and industrial research labs are already producing encouraging results. The sparse signal or parameter model has motivated several researchers and practitioners to explore high-complexity/wide-bandwidth applications such as digital TV, MRI processing, and certain defense applications. The potential signal processing advancements in this area may influence radar technologies. This book presents the basic mathematical concepts along with a number of useful MATLAB® examples to emphasize the practical implementations both inside and outside the radar field. Table of Contents: Radar Systems: A Signal Processing Perspective / Introduction to Sparse Representations / Dimensionality Reduction / Radar Signal Processing Fundamentals / Sparse Representations in Radar
Computer Architecture Techniques for Power-Efficiency
In the last few years, power dissipation has become an important design constraint, on par with performance, in the design of new computer systems. Whereas in the past the primary job of the computer architect was to translate improvements in operating frequency and transistor count into performance, now power efficiency must be taken into account at every step of the design process. While for some time architects were successful in delivering 40% to 50% annual improvements in processor performance, costs that were previously brushed aside eventually caught up. The most critical of these costs is the inexorable increase in power dissipation and power density in processors. Power dissipation issues have catalyzed new topic areas in computer architecture, resulting in a substantial body of work on more power-efficient architectures. Power dissipation, coupled with diminishing performance gains, was also the main cause of the switch from single-core to multi-core architectures and a slowdown in frequency increases. This book aims to document some of the most important architectural techniques that were invented, proposed, and applied to reduce both dynamic power and static power dissipation in processors and memory hierarchies. A significant number of techniques have been proposed for a wide range of situations, and this book synthesizes those techniques by focusing on their common characteristics. Table of Contents: Introduction / Modeling, Simulation, and Measurement / Using Voltage and Frequency Adjustments to Manage Dynamic Power / Optimizing Capacitance and Switching Activity to Reduce Dynamic Power / Managing Static (Leakage) Power / Conclusions
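The lever behind the "voltage and frequency adjustments" techniques mentioned above is the classic switching-power relation P_dyn = a·C·V²·f. A toy calculation (the activity factor, capacitance, voltage, and frequency values are invented for illustration) shows why scaling voltage together with frequency is so effective:

```python
def dynamic_power(activity, c_eff, v_dd, freq):
    """Dynamic (switching) power: activity factor * effective switched
    capacitance * supply voltage squared * clock frequency."""
    return activity * c_eff * v_dd ** 2 * freq

# Hypothetical core: 1 nF of switched capacitance at 1.0 V and 2 GHz
p_nominal = dynamic_power(1.0, 1e-9, 1.0, 2e9)   # 2.0 W
# DVFS operating point: half the voltage and half the frequency
p_scaled = dynamic_power(1.0, 1e-9, 0.5, 1e9)    # 0.25 W, an 8x reduction
```

Because voltage enters quadratically and usually must fall with frequency anyway, dynamic voltage and frequency scaling (DVFS) buys a cubic power reduction for a linear performance loss, which is the tradeoff the corresponding chapter explores.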
Transactional Memory, Second Edition
The advent of multicore processors has renewed interest in the idea of incorporating transactions into the programming model used to write parallel programs. This approach, known as transactional memory, offers an alternative, and hopefully better, way to coordinate concurrent threads. The ACI (atomicity, consistency, isolation) properties of transactions provide a foundation to ensure that concurrent reads and writes of shared data do not produce inconsistent or incorrect results. At a higher level, a computation wrapped in a transaction executes atomically - either it completes successfully and commits its result in its entirety or it aborts. In addition, isolation ensures the transaction produces the same result as if no other transactions were executing concurrently. Although transactions are not a parallel programming panacea, they shift much of the burden of synchronizing and coordinating parallel computations from a programmer to a compiler, to a language runtime system, or to hardware. The challenge for the system implementers is to build an efficient transactional memory infrastructure. This book presents an overview of the state of the art in the design and implementation of transactional memory systems, as of early spring 2010. Table of Contents: Introduction / Basic Transactions / Building on Basic Transactions / Software Transactional Memory / Hardware-Supported Transactional Memory / Conclusions
Analysis of Sub-synchronous Resonance (SSR) in Doubly-fed Induction Generator (DFIG)-Based Wind Farms
Wind power penetration is rapidly increasing in today's energy generation industry. In particular, the doubly-fed induction generator (DFIG) has become a very popular option in wind farms, due to its cost advantage compared with fully rated converter-based systems. Wind farms are frequently located in remote areas, far from the bulk of electric power users, and require long transmission lines to connect to the grid. Series capacitive compensation of a DFIG-based wind farm is an economical way to increase the power transfer capability of the transmission line connecting the wind farm to the grid. For example, a study performed by ABB reveals that increasing the power transfer capability of an existing transmission line from 1300 MW to 2000 MW using series compensation is 90% less expensive than building a new transmission line. However, a factor hindering the extensive use of series capacitive compensation is the potential risk of subsynchronous resonance (SSR). SSR is a condition in which the wind farm exchanges energy with the electric network to which it is connected at one or more natural frequencies of the electric or mechanical part of the combined system (comprising the wind farm and the network), where the frequency of the exchanged energy is below the fundamental frequency of the system. This oscillatory phenomenon may cause severe damage in the wind farm if not prevented. Therefore, this book studies the SSR phenomenon in a capacitive series compensated wind farm. A DFIG-based wind farm connected to a series compensated transmission line is considered as a case study. The book consists of two main parts: Small-signal modeling of DFIG for SSR analysis: This part presents a step-by-step tutorial on modal analysis of a DFIG-based series compensated wind farm using Matlab/Simulink.
The model of the system includes wind turbine aerodynamics, a 6th order induction generator, a 2nd order two-mass shaft system, a 4th order series compensated transmission line, a 4th order rotor-side converter (RSC) controller and a 4th order grid-side converter (GSC) controller, and a 1st order DC-link model. The relevant modes are identified using participation factor analysis. Definition of the SSR in DFIG-based wind farms: This part mainly focuses on the identification and definition of the main types of SSR that occur in DFIG wind farms, namely: (1) induction generator effect (SSIGE), (2) torsional interactions (SSTI), and (3) control interactions (SSCI).
Code Division Multiple Access (CDMA)
This book covers the basic aspects of Code Division Multiple Access, or CDMA. It begins with an introduction to the basic ideas behind fixed and random access systems in order to demonstrate the difference between CDMA and the more widely understood TDMA, FDMA, and CSMA. Next, a review of the basic spread spectrum techniques used in CDMA systems is presented, including direct sequence, frequency-hopping, and time-hopping approaches. The basic concept of CDMA is presented, followed by the four basic principles of CDMA systems that impact their performance: interference averaging, universal frequency reuse, soft handoff, and statistical multiplexing. The focus of the discussion then shifts to applications. The most common application of CDMA currently is cellular systems; a detailed discussion of cellular voice systems based on CDMA, specifically IS-95, is presented. The capacity of such systems is examined, as well as performance enhancement techniques such as coding and spatial filtering. Also discussed are Third Generation CDMA cellular systems and how they differ from Second Generation systems. A second application of CDMA that is covered is spread spectrum packet radio networks. Finally, there is an examination of multi-user detection and interference cancellation and how such techniques impact CDMA networks. This book should be of interest and value to engineers, advanced students, and researchers in communications.
Transient Electro-Thermal Modeling on Power Semiconductor Devices
This book presents physics-based electro-thermal models of bipolar power semiconductor devices including their packages, and describes their implementation in MATLAB and Simulink. It is a continuation of our first book Modeling of Bipolar Power Semiconductor Devices. The device electrical models are developed by subdividing the devices into different regions and the operations in each region, along with the interactions at the interfaces, are analyzed using the basic semiconductor physics equations that govern device behavior. The Fourier series solution is used to solve the ambipolar diffusion equation in the lightly doped drift region of the devices. In addition to the external electrical characteristics, internal physical and electrical information, such as junction voltages and carrier distribution in different regions of the device, can be obtained using the models. The instantaneous dissipated power, calculated using the electrical device models, serves as input to the thermal model (RC network with constant and nonconstant thermal resistance and thermal heat capacity, or Fourier thermal model) of the entire module or package, which computes the junction temperature of the device. Once an updated junction temperature is calculated, the temperature-dependent semiconductor material parameters are re-calculated and used with the device electrical model in the next time-step of the simulation. The physics-based electro-thermal models can be used for optimizing device and package design and also for validating extracted parameters of the devices. The thermal model can be used alone for monitoring the junction temperature of a power semiconductor device, and the resulting simulation results used as an indicator of the health and reliability of the semiconductor power device.
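The thermal side of an electro-thermal loop like the one described above is often modeled as a Foster RC ladder, whose junction temperature rise to a power step is a sum of exponentials, one per RC stage. A minimal Python sketch (the resistance and capacitance values below are invented placeholders, not data for any real module):

```python
import math

def foster_temperature_rise(power, stages, t):
    """Junction temperature rise (K) of a Foster RC thermal network in
    response to a power step of `power` watts applied at t = 0.
    `stages` is a list of (thermal resistance K/W, thermal capacitance J/K)."""
    return power * sum(r * (1.0 - math.exp(-t / (r * c))) for r, c in stages)

# Hypothetical two-stage network for a small module
stages = [(0.05, 0.002), (0.20, 0.05)]
rise = foster_temperature_rise(100.0, stages, t=10.0)
# Once fully settled, the rise is power * sum of resistances = 25 K here.
```

In a coupled simulation, the instantaneous power computed by the electrical device model would replace the constant step, and the resulting junction temperature would feed back into the temperature-dependent semiconductor parameters at the next time step, as the book describes.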
Game Theory for Wireless Engineers
The application of mathematical analysis to wireless networks has met with limited success, due to the complexity of mobility and traffic models, coupled with the dynamic topology and the unpredictability of link quality that characterize such networks. The ability to model individual, independent decision makers whose actions potentially affect all other decision makers makes game theory particularly attractive for analyzing the performance of ad hoc networks. Game theory is a field of applied mathematics that describes and analyzes interactive decision situations. It consists of a set of analytical tools that predict the outcome of complex interactions among rational entities, where rationality demands a strict adherence to a strategy based on perceived or measured results. In the early to mid-1990s, game theory was applied to networking problems including flow control, congestion control, routing, and pricing of Internet services. More recently, there has been growing interest in adopting game-theoretic methods to model today's leading communications and networking issues, including power control and resource sharing in wireless and peer-to-peer networks. This work presents fundamental results in game theory and their application to wireless communications and networking. We discuss normal-form, repeated, and Markov games with examples selected from the literature. We also describe ways in which learning can be modeled in game theory, with direct applications to the emerging field of cognitive radio. Finally, we discuss challenges and limitations in the application of game theory to the analysis of wireless systems. We do not assume familiarity with game theory. We introduce major game-theoretic models and discuss applications of game theory including medium access, routing, energy-efficient protocols, and others. We seek to provide the reader with a foundational understanding of the current research on game theory applied to wireless communications and networking.
Reconfigurable Antennas
This lecture explores the emerging area of reconfigurable antennas from basic concepts that provide insight into fundamental design approaches to advanced techniques and examples that offer important new capabilities for next-generation applications. Antennas are necessary and critical components of communication and radar systems, but sometimes their inability to adjust to new operating scenarios can limit system performance. Making antennas reconfigurable so that their behavior can adapt with changing system requirements or environmental conditions can ameliorate or eliminate these restrictions and provide additional levels of functionality for any system. For example, reconfigurable antennas on portable wireless devices can help to improve a noisy connection or redirect transmitted power to conserve battery life. In large phased arrays, reconfigurable antennas could be used to provide additional capabilities that may result in wider instantaneous frequency bandwidths, more extensive scan volumes, and radiation patterns with more desirable side lobe distributions. Written for individuals with a range of experience, from those with only limited prior knowledge of antennas to those working in the field today, this lecture provides both theoretical foundations and practical considerations for those who want to learn more about this exciting subject. Contents: Introduction / Definitions of Critical Parameters for Antenna Operation / Linkage Between Frequency Response and Radiation Characteristics: Implications for Reconfigurable Antennas / Methods for Achieving Frequency Response Reconfigurability / Methods for Achieving Polarization Reconfigurability / Methods for Achieving Radiation Pattern Reconfigurability / Methods for Achieving Compound Reconfigurable Antennas / Practical Issues for Implementing Reconfigurable Antennas / Conclusions and Directions for Future work
An Introduction to Kalman Filtering with MATLAB Examples
The Kalman filter is the Bayesian optimum solution to the problem of sequentially estimating the states of a dynamical system in which the state evolution and measurement processes are both linear and Gaussian. Given the ubiquity of such systems, the Kalman filter finds use in a variety of applications, e.g., target tracking, guidance and navigation, and communications systems. The purpose of this book is to present a brief introduction to Kalman filtering. The theoretical framework of the Kalman filter is first presented, followed by examples showing its use in practical applications. Extensions of the method to nonlinear problems and distributed applications are discussed. A software implementation of the algorithm in the MATLAB programming language is provided, as well as MATLAB code for several example applications discussed in the manuscript.
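As a taste of the algorithm (a scalar sketch in Python rather than the book's MATLAB; the truth value and noise levels are arbitrary assumptions), consider estimating a constant from noisy measurements. With a static state model the Kalman filter reduces to a simple gain-weighted recursive update:

```python
import random

def kalman_constant(measurements, meas_var, x0=0.0, p0=1e6):
    """Scalar Kalman filter for a static state (F = 1, H = 1, Q = 0):
    returns the final state estimate and its variance."""
    x, p = x0, p0
    for z in measurements:
        # The predict step is trivial here: the state model is x_k = x_{k-1}.
        k = p / (p + meas_var)    # Kalman gain
        x = x + k * (z - x)       # update the estimate with the innovation
        p = (1.0 - k) * p         # update the estimate variance
    return x, p

random.seed(1)
zs = [5.0 + random.gauss(0.0, 0.5) for _ in range(500)]
estimate, variance = kalman_constant(zs, meas_var=0.25)
# The estimate converges toward the true value 5.0 and the variance shrinks.
```

The full linear-Gaussian filter the book presents adds a nontrivial state transition and process noise, but the gain/update structure is exactly this.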
OFDM Systems for Wireless Communications
Orthogonal Frequency Division Multiplexing (OFDM) systems are widely used in the standards for digital audio/video broadcasting, WiFi and WiMax. Being a frequency-domain approach to communications, OFDM has important advantages in dealing with the frequency-selective nature of high data rate wireless communication channels. As the needs for operating with higher data rates become more pressing, OFDM systems have emerged as an effective physical-layer solution. This short monograph is intended as a tutorial which highlights the deleterious aspects of the wireless channel and presents why OFDM is a good choice as a modulation that can transmit at high data rates. The system-level approach we shall pursue will also point out the disadvantages of OFDM systems especially in the context of peak to average ratio, and carrier frequency synchronization. Finally, simulation of OFDM systems will be given due prominence. Simple MATLAB programs are provided for bit error rate simulation using a discrete-time OFDM representation. Software is also provided to simulate the effects of inter-block-interference, inter-carrier-interference and signal clipping on the error rate performance. Different components of the OFDM system are described, and detailed implementation notes are provided for the programs. Table of Contents: Introduction / Modeling Wireless Channels / Baseband OFDM System / Carrier Frequency Offset / Peak to Average Power Ratio / Simulation of the Performance of OFDM Systems / Conclusions
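To complement the description above (a Python sketch with a hand-rolled DFT rather than the book's MATLAB programs; the symbol values, channel taps, and block size are illustrative assumptions), one OFDM block through a frequency-selective channel shows how the cyclic prefix turns the channel into independent one-tap equalizers per subcarrier:

```python
import cmath

def dft(x, inverse=False):
    """Naive DFT/IDFT, adequate for a small illustrative block."""
    n = len(x)
    s = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(s * 2j * cmath.pi * i * k / n) for k in range(n))
           for i in range(n)]
    return [v / n for v in out] if inverse else out

def ofdm_block(symbols, channel, cp_len):
    """IFFT -> add cyclic prefix -> channel convolution -> drop CP -> FFT -> 1-tap EQ."""
    n = len(symbols)
    time = dft(symbols, inverse=True)              # subcarriers to time domain
    tx = time[-cp_len:] + time                     # prepend the cyclic prefix
    rx = [sum(channel[j] * tx[i - j]               # linear convolution with channel
              for j in range(len(channel)) if i - j >= 0)
          for i in range(len(tx))]
    freq = dft(rx[cp_len:cp_len + n])              # drop CP, back to frequency domain
    h = dft(channel + [0.0] * (n - len(channel)))  # per-subcarrier channel response
    return [f / hk for f, hk in zip(freq, h)]      # one-tap equalizer per subcarrier

qpsk = [1, -1, 1j, -1j, 1, 1j, -1, -1j]
recovered = ofdm_block(qpsk, channel=[1.0, 0.4], cp_len=2)
# With cp_len >= channel memory, each symbol is recovered exactly (no noise here).
```

Adding noise, carrier frequency offset, or clipping to this loop is precisely what the monograph's simulation software explores.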
Multipath Effects in GPS Receivers
Autonomous vehicles use global navigation satellite systems (GNSS) to provide a position within a few centimeters of truth. Centimeter positioning requires accurate measurement of each satellite's direct path propagation time. Multipath corrupts the propagation time estimate by creating a time-varying bias. A GNSS receiver model is developed and the effects of multipath are investigated. MATLAB™ code is provided to enable readers to run simple GNSS receiver simulations. More specifically, GNSS signal models are presented and multipath mitigation techniques are described for various multipath conditions. Appendices are included in the booklet to derive some of the basics on early minus late code synchronization methods. Details on the numerically controlled oscillator and its properties are also given in the appendix.
Deep Learning for Autonomous Vehicle Control
The next generation of autonomous vehicles will provide major improvements in traffic flow, fuel efficiency, and vehicle safety. Several challenges currently prevent the deployment of autonomous vehicles, one aspect of which is robust and adaptable vehicle control. Designing a controller for autonomous vehicles capable of providing adequate performance in all driving scenarios is challenging due to the highly complex environment and inability to test the system in the wide variety of scenarios which it may encounter after deployment. However, deep learning methods have shown great promise in not only providing excellent performance for complex and non-linear control problems, but also in generalizing previously learned rules to new scenarios. For these reasons, the use of deep neural networks for vehicle control has gained significant interest. In this book, we introduce relevant deep learning techniques, discuss recent algorithms applied to autonomous vehicle control, identify strengths and limitations of available methods, discuss research challenges in the field, and provide insights into the future trends in this rapidly evolving field.
New Prospects of Integrating Low Substrate Temperatures with Scaling-Sustained Device Architectural Innovation
In order to sustain Moore's Law-based device scaling, principal attention has been focused on device architectural innovations for improved device performance, as per ITRS projections for technology nodes down to 10 nm. Efficient integration of lower substrate temperatures (
Radiation Imaging Detectors Using SOI Technology
Silicon-on-Insulator (SOI) technology is widely used in high-performance and low-power semiconductor devices. SOI wafers have two layers of active silicon (Si), and normally the bottom Si layer serves merely as a mechanical support. The idea of making intelligent pixel detectors by using the bottom Si layer as a sensor for X-rays, infrared light, high-energy particles, neutrons, etc. emerged in the very early days of SOI technology. However, there have been several difficult issues with fabricating such detectors, and they did not become very popular until recently. This book offers a comprehensive overview of the basic concepts and research issues of SOI radiation image detectors. It introduces the basic issues in implementing an SOI detector and presents how to solve them. It also covers fundamental techniques, improvement of radiation tolerance, applications, and examples of the detectors. Since the SOI detector has both a thick sensing region and CMOS transistors in a monolithic die, many ideas have emerged to utilize this technology. This book is a good introduction for people who want to develop or use SOI detectors.
Low Substrate Temperature Modeling Outlook of Scaled n-MOSFET
Low substrate/lattice temperature (
The Transmission-Line Modeling (TLM) Method in Electromagnetics
This book presents the topic in electromagnetics known as the Transmission-Line Modeling or Matrix method (TLM). While it is written for engineering students at graduate and advanced undergraduate levels, it is also highly suitable for specialists in computational electromagnetics working in industry who wish to become familiar with the topic. The main implementation of TLM is via the time-domain differential equations; however, it can also be implemented via the frequency-domain differential equations. The emphasis in this book is on the time-domain TLM. Physical concepts are emphasized before embarking on mathematical development in order to provide simple, straightforward suggestions for the development of models that can then be readily programmed for further computations. Sections with a strong mathematical flavor have been included where there are clear methodological advantages, forming the basis for developing practical modeling tools. The book can be read at different depths depending on the background of the reader, and can be consulted as and when the need arises.
Circuit Analysis Laboratory Workbook
This workbook integrates theory with the concept of engineering design and teaches troubleshooting and analytical problem-solving skills. It is intended to either accompany or follow a first circuits course, and it assumes no previous experience with breadboarding or other lab equipment. This workbook uses only those components that are traditionally covered in a first circuits course (e.g., voltage sources, resistors, potentiometers, capacitors, and op amps) and gives students clear design goals, requirements, and constraints. Because we are using only components students have already learned how to analyze, they are able to tackle the design exercises, first working through the theory and math, then drawing and simulating their designs, and finally building and testing their designs on a breadboard.
Higher-Order FDTD Schemes for Waveguides and Antenna Structures
This publication provides a comprehensive and systematically organized coverage of higher-order finite-difference time-domain, or FDTD, schemes, demonstrating their potential role as a powerful modeling tool in computational electromagnetics. Special emphasis is placed on the analysis of contemporary waveguide and antenna structures. Acknowledged as a significant breakthrough in the evolution of the original Yee algorithm, higher-order FDTD operators remain the subject of ongoing scientific research. Among their indisputable merits, one can distinguish the enhanced levels of accuracy even for coarse grid resolutions, the fast convergence rates, and the adjustable stability. In fact, as the fabrication standards of modern systems get stricter, it is apparent that such properties become very appealing for the accomplishment of elaborate and credible designs.
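The accuracy claim can be illustrated with the spatial stencils themselves: the standard Yee scheme uses a second-order central difference, while a fourth-order operator shrinks the error much faster for the same coarse grid step. A quick Python check on a smooth field (the test function and step size are arbitrary choices for illustration):

```python
import math

def d_second(f, x, h):
    """Second-order central difference (the standard Yee-type stencil)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def d_fourth(f, x, h):
    """Fourth-order central difference (a typical higher-order FDTD stencil)."""
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12.0 * h)

h = 0.1  # deliberately coarse step
exact = math.cos(1.0)                               # d/dx sin(x) at x = 1
err2 = abs(d_second(math.sin, 1.0, h) - exact)      # O(h^2) error
err4 = abs(d_fourth(math.sin, 1.0, h) - exact)      # O(h^4) error, far smaller
```

The O(h^4) truncation error is what lets higher-order schemes keep accuracy on coarse grids, at the cost of a wider stencil and more delicate boundary treatment.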
The Digital Revolution
As technologists, we are constantly exploring and pushing the limits of our own disciplines, and we accept the notion that the efficiencies of new technologies are advancing at a very rapid rate. However, we rarely have time to contemplate the broader impact of these technologies as they reach into and amplify adjacent technology disciplines. This book therefore focuses on the potential impact of those technologies, but it is not intended as a technical manuscript. In this book, we consider our progress and current position toward popular concepts of future scenarios rather than the typical measurements of cycles per second or milliwatts. We compare our current human cultural situation to other past historic events as we anticipate the future social impact of rapidly accelerating technologies. We also rely on measurements based on specific events highlighting the breadth of the impact of accelerating semiconductor technologies rather than the specific rate of advance of any particular semiconductor technology. These measurements certainly lack the mathematical precision and repeatability to which technologists are accustomed, but the material that we are dealing with--the social objectives and future political structures of humanity--does not permit a high degree of mathematical accuracy. Our conclusion draws from the concept of the Singularity. It seems certain that, at the rate at which our technologies are advancing, we will exceed the ability of our post-Industrial Revolution structures to absorb these new challenges, and we cannot accurately anticipate what those future social structures will resemble.
MRTD (Multi Resolution Time Domain) Method in Electromagnetics
This book presents a method that allows the use of multiresolution principles in a time-domain electromagnetic modeling technique applicable to general structures. The multiresolution time-domain (MRTD) technique, as it is often called, is presented for general basis functions. Additional techniques presented here allow the modeling of complex structures using a subcell representation that permits the modeling of discrete electromagnetic effects at individual equivalent grid points. This is accomplished by transforming the application of the effects at individual points in the grid into the wavelet domain. In this work, the MRTD technique is derived for a general wavelet basis using a relatively compact vector notation that both makes the technique easier to understand and illustrates the differences between MRTD basis functions. In addition, techniques such as the uniaxial perfectly matched layer (UPML) for arbitrary wavelet resolution and non-uniform gridding are presented. Using these techniques, any structure that can be simulated in Yee-FDTD can be modeled in MRTD.
Sustaining Moore's Law
In 1965, Intel co-founder Gordon Moore, in "Cramming More Components onto Integrated Circuits" in Electronics Magazine (April 19, 1965), made the observation that, in the history of computing hardware, the number of transistors on integrated circuits doubles approximately every two years. Since its inception in 1965 until recent times, this law has been used in the semiconductor industry to guide investments for long-term planning as well as to set targets for research and development. These investments have helped in a productive utilization of wealth, which created more employment opportunities for semiconductor industry professionals. In this way, the development of Moore's Law has helped sustain the progress of today's knowledge-based economy. While Moore's Law has, on one hand, helped drive investments toward technological and economic growth, thereby benefiting consumers with more powerful electronic gadgets, Moore's Law has indirectly also helped to fuel other innovations in the global economy. However, the law of diminishing returns is now questioning the sustainability of further evolution of Moore's Law and its ability to sustain the progress of today's knowledge-based economy. The lack of liquidity in the global economy is truly bringing the entire industry to a standstill, and the dark clouds of an economic depression are hovering over the global economy. What factors have been ignored by the global semiconductor industry, leading to a demise of Moore's Law? Do the existing business models prevalent in the semiconductor industry pose any problems? Have supply chains made that progress unsustainable? In today's globalized world, have businesses been able to sustain national interests while driving the progress of Moore's Law? Could the semiconductor industry help the entire global economy move toward a radiance of the new crimson dawn, beyond the veil of the darkest night, by sustaining the progress of Moore's Law?
The entire semiconductor industry is now clamoring for a fresh approach to overcoming the barriers to the progress of Moore's Law, and this book delivers just that. Moore's Law can easily continue for the foreseeable future if chip manufacturing becomes sustainable through a balanced economy. Sustaining the progress of Moore's Law advocates the "heresy" of transforming the current economic orthodoxy of monopoly capitalism into free-market capitalism. The next big thing that everybody is looking forward to after the mobile revolution is the "Internet of Things" (IoT) revolution. While some analysts forecast that the IoT market will reach 5.4 billion connections worldwide by 2020, weak consumer purchasing power in the global economy makes this forecast questionable. Sustaining Moore's Law presents a blueprint for sustaining the progress of Moore's Law to bring about the IoT revolution in the global economy.
Layout Techniques in MOSFETs
This book describes in detail layout techniques for remarkably boosting the electrical performance and ionizing-radiation tolerance of planar Metal-Oxide-Semiconductor (MOS) Field Effect Transistors (MOSFETs) without adding any cost to current planar Complementary MOS (CMOS) integrated circuit (IC) manufacturing processes. These innovative layout styles are based on pn-junction engineering between the drain/source and channel regions, or simply on changing the MOSFET gate layout. These layout structures incorporate new effects into the MOSFET structure, such as the Longitudinal Corner Effect (LCE), the Parallel connection of MOSFETs with Different Channel Lengths Effect (PAMDLE), the Deactivation of the Parallel MOSFETs in the Bird's Beak Regions (DEPAMBBRE), and the Drain Leakage Current Reduction Effect (DLECRE), which are still seldom explored by the semiconductor and CMOS IC industries. Several three-dimensional (3D) numerical simulations and experimental works are referenced in this book to show how these layout techniques can help designers meet analog and digital CMOS IC specifications at no additional cost. Furthermore, the electrical performance and ionizing-radiation robustness of analog and digital CMOS ICs can be significantly increased by using this gate layout approach.
Double-Grid Finite-Difference Frequency-Domain (DG-FDFD) Method for Scattering from Chiral Objects
This book presents the application of the overlapping-grids approach to solving chiral-material problems with the FDFD method. Because two grids are used in the technique, we name this method the Double-Grid Finite-Difference Frequency-Domain (DG-FDFD) method. As a result of this new approach, the electric and magnetic field components are defined at every node in the computation space, so there is no need to perform averaging during the calculations as in the conventional FDFD technique. We formulate general 3D frequency-domain numerical methods based on the double-grid (DG-FDFD) approach for general bianisotropic materials. The validity of the derived formulations for different scattering problems is shown by comparing the obtained results to exact solutions and to solutions obtained using other numerical methods. Table of Contents: Introduction / Chiral Media / Basics of the Finite-Difference Frequency-Domain (FDFD) Method / The Double-Grid Finite-Difference Frequency-Domain (DG-FDFD) Method for Bianisotropic Medium / Scattering From Three-Dimensional Chiral Structures / Improving Time and Memory Efficiencies of FDFD Methods / Conclusions / Appendix A: Notations / Appendix B: Near-to-Far-Field Transformation
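The FDFD building block the book starts from can be sketched in its simplest setting: a 1D scalar Helmholtz problem on a single grid, not the book's 3D bianisotropic double-grid formulation. In this illustrative example the wavenumber k and the Dirichlet boundary data are assumed choices, picked so the exact solution is sin(kx); the frequency-domain difference operator is assembled into a matrix and solved directly.

```python
import numpy as np

k = 5.0                  # illustrative wavenumber (chosen away from resonance)
n = 401                  # grid points on [0, 1]
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]

# Interior Helmholtz stencil: (u[i-1] - 2 u[i] + u[i+1]) / h^2 + k^2 u[i] = 0
m = n - 2
A = np.zeros((m, m))
np.fill_diagonal(A, -2.0 / h**2 + k**2)
np.fill_diagonal(A[1:], 1.0 / h**2)      # sub-diagonal
np.fill_diagonal(A[:, 1:], 1.0 / h**2)   # super-diagonal

# Dirichlet data chosen so the exact solution is sin(k x)
uL, uR = 0.0, np.sin(k)
b = np.zeros(m)
b[0]  -= uL / h**2   # move known boundary values to the right-hand side
b[-1] -= uR / h**2

u = np.empty(n)
u[0], u[-1] = uL, uR
u[1:-1] = np.linalg.solve(A, b)   # one direct frequency-domain solve

print(np.max(np.abs(u - np.sin(k * x))))   # second-order discretization error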
Computational Electronics
Computational Electronics is devoted to state of the art numerical techniques and physical models used in the simulation of semiconductor devices from a semi-classical perspective. Computational electronics, as a part of the general Technology Computer Aided Design (TCAD) field, has become increasingly important as the cost of semiconductor manufacturing has grown exponentially, with a concurrent need to reduce the time from design to manufacture. The motivation for this volume is the need within the modeling and simulation community for a comprehensive text which spans basic drift-diffusion modeling, through energy balance and hydrodynamic models, and finally particle based simulation. One unique feature of this book is a specific focus on numerical examples, particularly the use of commercially available software in the TCAD community. The concept for this book originated from a first year graduate course on computational electronics, taught now for several years, in the Electrical Engineering Department at Arizona State University. Numerous exercises and projects were derived from this course and have been included. The prerequisite knowledge is a fundamental understanding of basic semiconductor physics, the physical models for various device technologies such as pndiodes, bipolar junction transistors, and field effect transistors.
Mapped Vector Basis Functions for Electromagnetic Integral Equations
The method-of-moments solution of the electric field and magnetic field integral equations (EFIE and MFIE) is extended to conducting objects modeled with curved cells. These techniques are important for electromagnetic scattering, antenna, radar signature, and wireless communication applications. Vector basis functions of the divergence-conforming and curl-conforming types are explained, and specific interpolatory and hierarchical basis functions are reviewed. Procedures for mapping these basis functions from a reference domain to a curved cell, while preserving the desired continuity properties on curved cells, are discussed in detail. For illustration, results are presented for examples that employ divergence-conforming basis functions with the EFIE and curl-conforming basis functions with the MFIE. The intended audience includes electromagnetic engineers with some previous familiarity with numerical techniques.
Introduction to the Finite Element Method in Electromagnetics
This series lecture is an introduction to the finite element method with applications in electromagnetics. The finite element method is a numerical method that is used to solve boundary-value problems characterized by a partial differential equation and a set of boundary conditions. The geometrical domain of a boundary-value problem is discretized using sub-domain elements, called the finite elements, and the differential equation is applied to a single element after it is brought to a "weak" integro-differential form. A set of shape functions is used to represent the primary unknown variable in the element domain. A set of linear equations is obtained for each element in the discretized domain. A global matrix system is formed after the assembly of all elements. This lecture is divided into two chapters. Chapter 1 describes one-dimensional boundary-value problems with applications to electrostatic problems described by the Poisson's equation. The accuracy of the finite element methodis evaluated for linear and higher order elements by computing the numerical error based on two different definitions. Chapter 2 describes two-dimensional boundary-value problems in the areas of electrostatics and electrodynamics (time-harmonic problems). For the second category, an absorbing boundary condition was imposed at the exterior boundary to simulate undisturbed wave propagation toward infinity. Computations of the numerical error were performed in order to evaluate the accuracy and effectiveness of the method in solving electromagnetic problems. Both chapters are accompanied by a number of Matlab codes which can be used by the reader to solve one- and two-dimensional boundary-value problems. 
These codes can be downloaded from the publisher's URL: www.morganclaypool.com/page/polycarpou This lecture is written primarily for the nonexpert engineer or the undergraduate or graduate student who wants to learn, for the first time, the finite element method with applications to electromagnetics. It is also targeted for research engineers who have knowledge of other numerical techniques and want to familiarize themselves with the finite element method. The lecture begins with the basics of the method, including formulating a boundary-value problem using a weighted-residual method and the Galerkin approach, and continues with imposing all three types of boundary conditions including absorbing boundary conditions. Another important topic of emphasis is the development of shape functions including those of higher order. In simple words, this series lecture provides the reader with all information necessary for someone to apply successfully the finite element method to one- and two-dimensional boundary-value problems in electromagnetics. It is suitable for newcomers in the field of finite elements in electromagnetics.