Mathematical Biology
Why are English Premier League football shirt patterns very similar to animal coat markings? And what do invasive species have in common with cancer cells in the body? Mathematical biology develops models that answer these questions, applying them to processes ranging from the spread of a gene in a population, to predator-prey dynamics in an ecosystem, to the growth of tumours. In this Very Short Introduction Philip K. Maini describes the art of modelling: what it is, why we do it, and how the abstract way of thinking that is the essence of mathematics enables us to transfer knowledge from one area of research to another. Using numerous examples, he explains how the same fundamental ideas have been used in different fields, and shows how mathematics is the language of science. The author also points to cases in science where the traditional scientific modelling approach - verbal reasoning - is incorrect, and shows how mathematics can uncover and correct such flawed reasoning while, at the same time, enhancing our intuition. This book provides a guide to the trajectory of mathematical biology from a niche subject in the 1970s to a well-established, popular subject that is truly interdisciplinary, and points to exciting future challenges. ABOUT THE SERIES: The Very Short Introductions series from Oxford University Press contains hundreds of titles in almost every subject area. These pocket-sized books are the perfect way to get ahead in a new subject quickly. Our expert authors combine facts, analysis, perspective, new ideas, and enthusiasm to make interesting and challenging topics highly readable.
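The predator-prey dynamics mentioned above are typically modelled with the Lotka-Volterra equations. As a purely illustrative sketch (not code from the book, and with invented rate constants), a few lines of Python show the characteristic population cycles:

```python
# Lotka-Volterra predator-prey model: dx/dt = a*x - b*x*y,
# dy/dt = -c*y + d*x*y, stepped with the forward Euler method.
a, b, c, d = 1.0, 0.1, 1.5, 0.075   # invented rate constants
x, y, dt = 10.0, 5.0, 0.001         # prey, predators, time step
for step in range(60001):
    if step % 15000 == 0:
        print(f"t={step * dt:5.1f}  prey={x:6.2f}  predators={y:6.2f}")
    x, y = x + dt * (a * x - b * x * y), y + dt * (-c * y + d * x * y)
```

The populations rise and fall out of phase with each other, the qualitative behaviour such models are built to capture.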
Probably Overthinking It
An essential guide to the ways data can improve decision making. Statistics are everywhere: in news reports, at the doctor's office, and in every sort of forecast, from the stock market to the weather. Blogger, teacher, and computer scientist Allen B. Downey knows well that people have an innate ability both to understand statistics and to be fooled by them. As he makes clear in this accessible introduction to statistical thinking, the stakes are big. Simple misunderstandings have led to incorrect medical prognoses, underestimated the likelihood of large earthquakes, hindered social justice efforts, and resulted in dubious policy decisions. There are right and wrong ways to look at numbers, and Downey will help you see which are which. Probably Overthinking It uses real data to delve into real examples with real consequences, drawing on cases from health campaigns, political movements, chess rankings, and more. He lays out common pitfalls--like the base rate fallacy, length-biased sampling, and Simpson's paradox--and shines a light on what we learn when we interpret data correctly, and what goes wrong when we don't. Using data visualizations instead of equations, he builds understanding from the basics to help you recognize errors, whether in your own thinking or in media reports. Even if you have never studied statistics--or if you have and forgot everything you learned--this book will offer new insight into the methods and measurements that help us understand the world.
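One of those pitfalls, the base rate fallacy, is easy to make concrete in a few lines of Python (an illustrative sketch with invented numbers, not an example from the book):

```python
# Base rate fallacy: a positive result from an accurate test can still
# mean the condition is unlikely, if the condition itself is rare.

def posterior(prevalence, sensitivity, false_positive_rate):
    """P(condition | positive test) via Bayes' rule."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * false_positive_rate
    return true_pos / (true_pos + false_pos)

# A 99%-sensitive test with a 5% false-positive rate, for a condition
# affecting 1 in 1000 people:
print(posterior(0.001, 0.99, 0.05))  # ~0.019 -- under 2%, not 99%
```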
The Indisputable Existence of Santa Claus
In The Indisputable Existence of Santa Claus, two distinguished mathematicians explain, with humor and clarity, mathematical concepts through one very merry motif: Christmas. Lighthearted and diverting with Christmasy diagrams, sketches and graphs, equations, Markov chains, and matrices, The Indisputable Existence of Santa Claus brightens up the bleak midwinter with stockingsful of mathematical marvels. How do you apply game theory to select who should be on your Christmas shopping list? What equations should you use to decorate the Christmas tree? Will calculations show Santa is getting steadily thinner--shimmying up and down chimneys for a whole night--or fatter--as he munches on cookies and milk in billions of houses across the world? In their quest to provide mathematical proof for the existence of Santa, the authors take readers on a festive journey through a traditional holiday season. Every activity, from wrapping presents to playing board games to cooking the perfect turkey, is analyzed through the lens of math. Because who hasn't always wondered how to set up a mathematically perfect Secret Santa? This book belongs under your Christmas tree if you enjoy a spice of math in your eggnog.
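For a flavour of the mathematics involved, the "mathematically perfect Secret Santa" question turns on derangements -- permutations in which nobody draws their own name. A short sketch (mine, not the authors'):

```python
from math import factorial

def derangements(n):
    """Count permutations of n items with no fixed point (inclusion-exclusion)."""
    return round(factorial(n) * sum((-1) ** k / factorial(k) for k in range(n + 1)))

# The chance that a random draw is a valid Secret Santa tends to 1/e ~ 0.3679:
for n in (4, 8, 12):
    print(n, derangements(n) / factorial(n))
```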
Foundations of Numerical Methods and Data Analysis
This book offers a comprehensive journey into the world of numerical methods, beginning with the essential mathematical preliminaries: calculus, vectors, matrices, and programming concepts before moving into deeper areas such as error analysis, curve fitting, interpolation, and the numerical solution of ordinary and partial differential equations. Advanced chapters extend into systems of equations, finite element methods, and spectral techniques, ensuring that readers not only understand the fundamentals but also gain exposure to methods at the frontier of computational practice. A distinguishing feature of the book is the integration of theory with practice. Each concept is accompanied by carefully chosen examples, figures, and end-of-chapter exercises designed to strengthen understanding and encourage hands-on application. Historical notes and bibliographical references enrich the discussion, situating modern numerical methods in their broader intellectual context. In addition, the book pays special attention to the use of contemporary computational tools such as MATLAB, Python, and other numerical libraries, thereby bridging traditional methods with current software-driven practice. In short, it is both a textbook and a reference, meant to serve readers across different stages of their academic or professional journey. For more details, please visit https://centralwestpublishing.com
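As a taste of the numerical-integration material such a book covers (a generic illustration, not an excerpt), the composite trapezoidal rule in Python shows the typical workflow of approximating an integral and checking the error:

```python
import numpy as np

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule on n equal subintervals."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    return (b - a) / n * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

# The error for a smooth integrand shrinks roughly like 1/n^2:
exact = 2.0                         # integral of sin(x) over [0, pi]
for n in (8, 16, 32):
    print(n, abs(trapezoid(np.sin, 0.0, np.pi, n) - exact))
```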
Advanced Analytical and Numerical Methods with their Application to Industrial Problems
This book covers the models of different real-world problems that include the models using ordinary and partial differential equations and dynamical systems, uncertainty quantifications using fuzzy systems, computational methods for differential equations, modern control theory and applications, neural networks and neural computing, computational heat and mass transfer and computational fluid dynamics. It also includes various research problems on developing advanced analytical and numerical methods for solving real-world situations, with its theoretical derivations and engineering and science applications. For more details, please visit https://centralwestpublishing.com
A Philosophical Essay On Probabilities
A philosophical essay on probabilities begins by establishing that much of what is considered human knowledge rests on the principles of probability rather than certainty. The essay presents a structured investigation into the nature and utility of probability, showing how it extends beyond games of chance into fields such as astronomy, jurisprudence, and social behavior. It emphasizes that probability emerges from ignorance of true causes and that rational decisions should be based on the best available data and inference. The early sections consider how prior beliefs influence our understanding and how probability serves as a bridge between ignorance and reasonable expectation. By tracing the evolution of the concept from superstition to science, the work affirms that what was once deemed chance can often be calculated, interpreted, and applied with mathematical rigor. The writing examines the philosophical implications of uncertainty and advocates for the application of probabilistic reasoning as a key method for approaching reality, whether through empirical study or moral judgment. Through this lens, the work outlines a vision where probability becomes a central tool in shaping knowledge, expectations, and action.
Statistics for Composite Indicators
This book provides a systematic and integrated approach to constructing measures of complex and multidimensional concepts called composite indicators. One of the most pressing needs of scientists and policy makers is to measure phenomena that are important to our lives in society using numbers, to observe their evolution over time, and to analyse the relationships between them in order to understand the complex reality and decide on the right actions to achieve specific goals. Many socio-economic phenomena, as well as phenomena in ecology, biology, and other sciences, are multidimensional and, to be measured, require the use of statistical-mathematical techniques that facilitate their reading and use for studies and analyses. This book is a guide to the knowledge and application of statistical tools suitable for the construction of "optimal" composite indicators, i.e. indicators that provide the most accurate measure of multidimensional reality. The book is aimed at all those - statisticians, sociologists, economists, and policy makers - who wish to construct composite indicators to measure and evaluate the complex reality that surrounds us.
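The basic construction the book systematizes can be sketched in a few lines (an illustrative toy with invented values and weights -- real composite indicators involve far more careful normalization and weighting choices):

```python
import numpy as np

# Toy composite indicator: min-max normalize each indicator, align
# polarity, then aggregate with weights (all values are invented).
X = np.array([[70.0, 0.8, 5.1],     # rows: units (e.g. regions)
              [55.0, 0.6, 7.4],     # columns: elementary indicators
              [90.0, 0.9, 3.2]])
norm = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
norm[:, 2] = 1.0 - norm[:, 2]       # third indicator: lower is better
weights = np.array([0.5, 0.3, 0.2]) # invented importance weights
print(norm @ weights)               # one composite score per unit
```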
Artificial Intelligence in Healthcare
The two-volume set constitutes the proceedings of the Second International Conference on Artificial Intelligence in Healthcare, AIiH 2025, which took place in Cambridge, UK, in September 2025. The 60 full papers included in this book were carefully reviewed and selected from 83 submissions. They were organized in topical sections as follows: Health informatics, Personalised Healthcare, Robotics, Assisted Living Technology, Computational Medicine, Long-term Health Conditions, Maternity and Women's Health and Wellbeing.
Unequal
An exciting "new perspective on equality and difference" (Stephon Alexander) that shows why the familiar equal sign isn't just a marker of sameness but a gateway into math's--and humanity's--most profound questions "Eugenia Cheng has opened up my mind to the wondrous world of pure mathematics in a way that I never thought was possible."―Willow Smith, singer and actress Math is famous for its equations: 1 + 1 = 2, a^2 + b^2 = c^2, or y = mx + b. Much of the time it can seem like that's all mathematics is: following steps to show that what's on one side of an equation is the same as what's on the other. In Unequal, Eugenia Cheng shows that's just part of the story, and the boring part to boot. Mathematics isn't only about showing how numbers and symbols are the same. It isn't even just about numbers and symbols at all, but a world of shapes, symmetries, logical ideas, and more. And in that world, the boundary between things being equal and unequal is a gray area, or perhaps a rainbow of beautiful, vibrant, subtly nuanced color. As Unequal shows, once you go over that rainbow, almost everything can be considered equal and unequal at the same time, whether it's shapes (seen from the right perspective, a circle is the same as an ellipse), words (synonyms), or people--even numbers! It all depends on what features we care about. And it's up to us what we do about it. That's because mathematics isn't a series of rules, facts, or answers. It's an invitation to a more powerful way of thinking.
Classical Mechanics
Classical Mechanics is a textbook for undergraduate students majoring in Physics (or Mathematics and Physics). The book introduces the main ideas and concepts of Newtonian, Lagrangian, and Hamiltonian mechanics, including the basics of rigid body motion and relativistic dynamics, at an intermediate to advanced level. The physical prerequisites are minimal, with a short primer included in the first chapter. As to the mathematical prerequisites, only a working knowledge of linear algebra, basic multivariate calculus, and the rudiments of ordinary differential equations is expected.

Features:
- Numerous exercises and examples
- A focus on mathematical rigor that will appeal to Physics students wanting to specialize in theoretical physics, or Mathematics students interested in mathematical physics
- Sufficient material to service either a one- or two-semester course
Near Vector Spaces and Related Topics
Near Vector Spaces and Related Topics provides a systematic treatment of the introductory theory of near vector spaces, as well as a range of associated areas. Since many topics in nonlinear analysis rely on the properties established in topological vector spaces, the concepts and topics presented in this book may stir up the interest of researchers working in mathematical analysis, especially nonlinear analysis, and thus may potentially open a new avenue of research. The main prerequisites for most of the material in this book are basic concepts of functional analysis, including the basic tools of topology. This book is accessible to senior undergraduate students in mathematics and may also be used as a graduate-level text, or as a reference for researchers who work on the applications of nonlinear analysis.

Features:
- A valuable resource for researchers and postgraduate students interested in the foundations of fuzzy sets and nonlinear analysis
- Presents new, previously unpublished material on near vector spaces
- A well-organized and comprehensive treatment of the subject
Visualization for Social Data Science
"This is an important book on an important topic. I particularly like the examples showing different visualizations of the same data and the parallel presentation of graphics and code. And I absolutely love the chapter on visual storytelling. I can't wait to use this book in my classes."- Andrew Gelman, Department of Statistics and Department of Political Science, Columbia University, New York"A book that gives learners the inspiration, knowledge and worked examples to create cutting edge visualisations of their own."- James Chesire, Professor of Geographic Information and Cartography, University College LondonVisualization for Social Data Science provides end-to-end skills in visual data analysis. The book demonstrates how data graphics and modern statistics can be used in tandem to process, explore, model and communicate data-driven social science. It is packed with detailed data analysis examples, pushing you to do visual data analysis. As well as introducing, and demonstrating with code, a wide range of data visualizations for exploring patterns in data, Visualization for Social Data Science shows how models can be integrated with graphics to emphasise important structure and de-emphasise spurious structure and the role of data graphics in scientific communication -- in building trust and integrity. Many of the book's influences are from data journalism, as well as information visualization and cartography. Each chapter introduces statistical and graphical ideas for analysis, underpinned by real social science datasets. Those ideas are then implemented via principled, step-by-step, workflows in the programming environment R. Key features include: - Extensive real-world data sets and data analysis scenarios in Geography, Public Health, Transportation, Political Science;- Code examples fully-integrated into main text, with code that builds in complexity and sophistication;- Quarto template files for each chapter to support literate programming practices;- Functional programming examples, using tidyverse, for generating empirical statistics (bootstrap resamples, permutation tests) and working programmatically over model outputs;- Unusual but important programming tricks for generating sophisticated data graphics such as network visualizations, dot-density maps, OD maps, glyphmaps, icon arrays, hypothetical outcome plots and graphical line-ups plots. Every data graphic in the book is implemented via ggplot2.- Chapters on uncertainty visualization and data storytelling that are uniquely accompanied with detailed, worked examples.
Hierarchical Modeling and Analysis for Spatial Data
Hierarchical Modeling and Analysis for Spatial Data, Third Edition is the latest edition of this popular and authoritative text on Bayesian modeling and inference for spatial and spatial-temporal data. The text presents a comprehensive and up-to-date treatment of hierarchical and multilevel modeling for spatial and spatio-temporal data within a Bayesian framework. Over the past decade since the second edition, spatial statistics has evolved significantly, driven by an explosion in data availability and advances in Bayesian computation. This edition reflects those changes, introducing new methods, expanded applications, and enhanced computational resources to support researchers and practitioners across disciplines, including environmental science, ecology, and public health.

Key features of the third edition:
- A dedicated chapter on state-of-the-art Bayesian modeling of large spatial and spatio-temporal datasets
- Two new chapters on spatial point pattern analysis, covering both foundational and Bayesian perspectives
- A new chapter on spatial data fusion, integrating diverse spatial data sources from different probabilistic mechanisms
- An accessible introduction to GPS mapping, geodesic distances, and mathematical cartography
- An expanded special topics chapter, including spatial challenges with finite population modeling and spatial directional data
- A thoroughly revised chapter on Bayesian inference, featuring an updated review of modern computational techniques
- A dedicated GitHub repository providing R programs and solutions to selected exercises, ensuring continued access to evolving software developments

With refreshed content throughout, this edition serves as an essential reference for statisticians, data scientists, and researchers working with spatial data. Graduate students and professionals seeking a deep understanding of Bayesian spatial modeling will find this volume an invaluable resource for both theory and practice.
Entropies and Fractionality
Entropies and Fractionality: Entropy Functionals, Small Deviations and Related Integral Equations starts with a systematization and calculation of various entropies (Shannon, Rényi and some others) of selected absolutely continuous probability distributions.
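As one example of the closed forms such a systematization collects (a standard result, not an excerpt from the book), the differential Shannon entropy of a Gaussian has a simple expression that can be cross-checked numerically:

```python
import numpy as np

# Differential Shannon entropy of N(mu, sigma^2), in nats:
# h = 0.5 * ln(2 * pi * e * sigma^2)
sigma = 2.0
closed_form = 0.5 * np.log(2 * np.pi * np.e * sigma ** 2)

# Cross-check by Monte Carlo: h = -E[log p(X)]
rng = np.random.default_rng(0)
x = rng.normal(0.0, sigma, 200_000)
log_p = -0.5 * np.log(2 * np.pi * sigma ** 2) - x ** 2 / (2 * sigma ** 2)
print(closed_form, -log_p.mean())  # the two estimates agree closely
```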
Advanced Statistical Analytics for Health Data Science with SAS and R
In recent years, there has been a growing emphasis on making statistical methods and analytics accessible to health data science researchers and students. Following the first book on "Statistical Analytics for Health Data Science with SAS and R" (2023, www.routledge.com/9781032325620), this book serves as a comprehensive reference for health data scientists, bridging fundamental statistical principles with advanced analytical techniques. By providing clear explanations of statistical theory and its application to real-world health data, we aim to equip researchers with the necessary tools to navigate the evolving landscape of health data science.

Designed for advanced-level data scientists, this book covers a wide range of statistical methodologies, including models for longitudinal data with time-dependent covariates, multi-membership mixed-effects models, statistical modeling of survival data, Bayesian statistics, joint modeling of longitudinal and survival data, nonlinear regression, statistical meta-analysis, spatial statistics, structural equation modeling, latent growth curve modeling, causal inference, and propensity score analysis.

A key feature of this book is its emphasis on real-world applications. We integrate publicly available health datasets and provide case studies from a variety of health applications. These practical examples demonstrate how statistical methods can be applied to solve critical problems in health science.

To support hands-on learning, we offer implementation guidance using SAS and R, ensuring that readers can replicate analyses and apply statistical techniques to their own research. Step-by-step computational examples facilitate reproducibility and deeper exploration of statistical models. By combining theoretical foundations with practical applications, this book empowers health data scientists to develop robust statistical solutions for complex health challenges. Whether working in academia, industry, or public health, readers will gain the expertise to advance data-driven decision-making and contribute to evidence-based health research.
On Range Space Techniques, Convex Cones, Polyhedra and Optimization in Infinite Dimensions
This book is a research monograph with specialized mathematical preliminaries. It presents an original range space and conic theory of infinite dimensional polyhedra (closed convex sets) and optimization over polyhedra in separable Hilbert spaces, providing, in infinite dimensions, a continuation of the author's book: A Conical Approach to Linear Programming, Scalar and Vector Optimization Problems, Gordon and Breach Science Publishers, Amsterdam, 1997. It expands and improves the author's new approach to the Maximum Principle for norm optimal control of PDE, based on the theory of convex cones, providing sharper results in various Hilbert space and Banach space settings. It provides a theory for convex hypersurfaces in lts and Hilbert spaces. For these purposes, it introduces new results and concepts, such as generalizations to the non-compact case of cone capping and of the Krein-Milman Theorem, an extended theory of closure of pointed cones, the notion of beacon points, and a necessary and sufficient condition of support for closed convex sets with void interior (complementing the Bishop-Phelps Theorem), based on a new decomposition of non-closed, non-pointed cones with non-closed lineality space.
Banach Contraction Principle
This book offers a comprehensive exploration of the Banach contraction principle and its many facets. A compilation of chapters authored by global experts, it is aimed at researchers and graduate students in mathematics. The content covers the Banach contraction principle, its generalizations, extensions, consequences and applications, focusing on both single-valued and multi-valued mappings across various spaces. While discussing theoretical foundations, this book uniquely emphasizes the practical applications of the Banach contraction principle in real-world problem-solving scenarios. Each chapter addresses specific topics, including fractals, fractional differentials, integral equations, elastic beam problems and mathematical modeling and analysis of electrical circuits. These diverse subjects showcase the principle's versatility in solving complex issues that go beyond theoretical mathematics. By highlighting Banach's contraction principle as a lasting legacy, the book not only honours past mathematical achievements but also anticipates future innovations in industrial and applied mathematics. It underscores the enduring relevance of the principle, ensuring its continued prominence in mathematical discourse and its pivotal role in driving advancements across the field. This comprehensive exploration serves as a catalyst for future developments in mathematical research.
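The principle itself is easy to see in action: for a contraction T, the Picard iterates x_{n+1} = T(x_n) converge to the unique fixed point from any starting guess. A minimal sketch (illustrative, not from the book):

```python
import math

def fixed_point(T, x0, tol=1e-12, max_iter=200):
    """Picard iteration: converges whenever T is a contraction."""
    x = x0
    for _ in range(max_iter):
        x_next = T(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("no convergence -- T may not be a contraction")

# T(x) = cos(x) is a contraction near its fixed point, since |T'(x)| = |sin(x)| < 1
# on the range of the iterates; the limit is the Dottie number ~0.7390851332.
print(fixed_point(math.cos, 1.0))
```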
A Theory of Traces and the Divergence Theorem
This book provides a new approach to traces, which are viewed as linear continuous functionals on some function space. A key role in the analysis is played by integrals related to finitely additive measures, which have not previously been considered in the literature. This leads to Gauss-Green formulas on arbitrary Borel sets for vector fields having divergence measure as well as for Sobolev and BV functions. The integrals used do not require trace functions or normal fields on the boundary and they can deal with inner boundaries. For the treatment of apparently intractable degenerate cases a second boundary integral is used. The calculus developed here also allows integral representations for the precise representative of an integrable function and for the usual boundary trace of Sobolev or BV functions. The theory presented gives a new perspective on traces for beginners as well as experts interested in partial differential equations. The integral calculus might also be a stimulating tool for geometric measure theory.
An Introduction to Web Mining
This book is devoted to the art and science of web mining -- showing how the world's largest information source can be turned into structured, research-ready data. Drawing on many years of teaching graduate courses on Web Mining and on numerous large-scale research projects in web mining contexts, the author provides clear explanations of key web technologies combined with hands-on R tutorials that work in the real world -- and keep working as the web evolves. Throughout the book, readers will learn how to:
- scrape static and dynamic/JavaScript-heavy websites
- use web APIs for structured data extraction from web sources
- build fault-tolerant crawlers and cloud-based scraping pipelines
- navigate CAPTCHAs, rate limits, and authentication hurdles
- integrate AI-driven tools to speed up every stage of the workflow
- apply ethical, legal, and scientific guidelines to their web mining activities

Part I explains why web data matters and leads the reader through a first "hello-scrape" in R while introducing HTML, HTTP, and CSS. Part II explores how the modern web works and shows, step by step, how to move from scraping static pages to collecting data from APIs and JavaScript-driven sites. Part III focuses on scaling up: building reliable crawlers, dealing with log-ins and CAPTCHAs, using cloud resources, and adding AI helpers. Part IV looks at ethical, legal, and research standards, offering checklists and case studies, enabling the reader to make responsible choices. Together, these parts give a clear path from small experiments to large-scale projects. This valuable guide is written for a wide readership -- from graduate students taking their first steps in data science to seasoned researchers and analysts in economics, social science, business, and public policy. It will be a lasting reference for anyone with an interest in extracting insight from the web -- whether working in academia, industry, or the public sector.
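The book's tutorials are in R; purely to illustrate the shape of a first "hello-scrape", here is the same idea sketched in Python (the URL and selector are placeholders, not examples from the book):

```python
import requests
from bs4 import BeautifulSoup  # pip install requests beautifulsoup4

# Fetch a page politely and extract text matching a CSS selector.
url = "https://example.org"    # placeholder URL
resp = requests.get(url, headers={"User-Agent": "hello-scrape demo"},
                    timeout=10)
resp.raise_for_status()

soup = BeautifulSoup(resp.text, "html.parser")
for h in soup.select("h1"):    # placeholder selector
    print(h.get_text(strip=True))
```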
Essential Statistics
Essential Statistics: Understanding and Using Data provides students with the tools they need to understand what statistics are, how they work, why they are so important, and how they function in the world. With a focus on step-by-step instruction, Essential Statistics begins each section with a sharp focus on simplified main concepts, followed by expansions into how variation impacts each concept. Readers find this easy-to-read textbook welcoming because of its friendly, patient voice and style and its reliance on real-world examples of where statistics fit in everyday life.

This book covers the basics of statistics and data, as well as more advanced topics, including:
- Descriptive statistics, data displays, central location, and deviations
- Discrete probability distributions
- Continuous probability distributions
- Confidence intervals
- Hypothesis testing
- Correlation and linear regression
- Analysis of variance (ANOVA)
- Nonparametric statistics

Written by an actual teacher, Essential Statistics recognizes the need for down-to-earth math instruction. It perfectly addresses this by giving students accessible, linear, and relevant context for why statistics are what its title suggests: essential.
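To give a flavour of one listed topic, a 95% confidence interval for a mean takes only a few lines (an illustrative sketch with simulated data, not an exercise from the book):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
sample = rng.normal(loc=100, scale=15, size=40)   # simulated measurements

mean = sample.mean()
sem = sample.std(ddof=1) / np.sqrt(len(sample))   # standard error of the mean
t_crit = stats.t.ppf(0.975, df=len(sample) - 1)   # two-sided 95% critical value
print(mean - t_crit * sem, mean + t_crit * sem)   # the 95% confidence interval
```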
The Doctrine Of Chances, Or, The Theory Of Gaming, Made Easy
The Doctrine of Chances, or, The Theory of Gaming, Made Easy, explores the mathematical principles underlying games of chance and gambling. Written by William Rouse, this treatise provides a detailed examination of probability as it applies to various games, offering insights into the odds and strategies involved. The book delves into the calculations necessary for understanding and predicting outcomes in gaming scenarios, making it accessible to both mathematicians and enthusiasts interested in the theory behind games of chance. Rouse's work is a valuable resource for anyone seeking to understand the mathematical foundations of gambling and probability, offering a historical perspective on the development of these concepts. This edition preserves the original text, providing readers with an authentic view of early mathematical approaches to gaming.
Nonlinear Regression Methods for Estimation
Regression techniques are developed for batch estimation and applied to three specific areas, namely, ballistic trajectory launch point estimation, adaptive flight control, and radio-frequency target triangulation. Specifically, linear regression with an intercept is considered in detail. An augmentation formulation is developed. Extensions of theory are applied to nonlinear regression as well. The intercept parameter estimate within the linear regression is used to identify the effects of trim change that are associated with the occurrence of a control surface failure. These estimates are used to adjust the inner loop control gains via a feed-forward command, hence providing an automatic reconfigurable retrim of an aircraft. The regression algorithms are used to consider reduced information applications, such as initial position target determination from bearings-only measurement data. In total, this dissertation develops algorithms for batch processes that broaden the envelope of successful estimation within the three aforementioned application areas. Additionally, the developed batch algorithms do not adversely impact the estimation ability in cases that are already estimated successfully by conventional approaches.
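The augmentation idea the abstract mentions -- estimating the intercept jointly by appending a column of ones to the design matrix -- looks like this in a generic least-squares sketch (illustrative, not the dissertation's code):

```python
import numpy as np

def fit_with_intercept(X, y):
    """Least squares after augmenting X with a column of ones,
    so the intercept is estimated jointly with the slopes."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[0], coef[1:]          # intercept, slopes

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = 3.0 + X @ np.array([1.5, -2.0]) + 0.1 * rng.normal(size=100)
b0, b = fit_with_intercept(X, y)
print(b0, b)  # close to the true values 3.0 and [1.5, -2.0]
```

A persistent shift in the estimated intercept is exactly the kind of trim-change signature the dissertation exploits for reconfigurable control.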
Trigonometric Transforms for Image Reconstruction
This dissertation demonstrates how the symmetric convolution-multiplication property of discrete trigonometric transforms can be applied to traditional problems in image reconstruction with slightly better performance than Fourier techniques and increased savings in computational complexity for symmetric point spread functions. The fact that the discrete Fourier transform diagonalizes a circulant matrix provides an alternate way to derive the symmetric convolution-multiplication property for discrete trigonometric transforms. Derived in this manner, the symmetric convolution-multiplication property extends easily to multiple dimensions and generalizes to multidimensional asymmetric sequences. The symmetric convolution-multiplication property allows for linear filtering of degraded images via point-by-point multiplication in the transform domain of trigonometric transforms. Specifically in the transform domain of a type-II discrete cosine transform, there is an asymptotically optimum energy compaction about the low-frequency indices of highly correlated images which has advantages in reconstructing images with high-frequency noise. The symmetric convolution-multiplication property allows for well-approximated scalar representations in the trigonometric transform domain for linear reconstruction filters such as the Wiener filter. An analysis of the scalar Wiener filter's improved mean-squared error performance in the trigonometric transform domain is given.
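The central computational move -- point-by-point multiplication in a trigonometric transform domain -- can be sketched with SciPy's DCT-II (a generic illustration with an invented gain profile, not the dissertation's filter):

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(1)
signal = np.cumsum(rng.normal(size=256))        # highly correlated signal
noisy = signal + rng.normal(scale=4.0, size=256)

# Filtering by scaling DCT-II coefficients pointwise, then inverting.
# Energy compaction concentrates the signal in low-frequency indices,
# so attenuating high indices suppresses high-frequency noise.
coeffs = dct(noisy, type=2, norm="ortho")
gain = np.exp(-np.arange(256) / 32.0)           # invented low-pass gain profile
filtered = idct(coeffs * gain, type=2, norm="ortho")
print(np.mean((noisy - signal) ** 2))           # error of the noisy input
print(np.mean((filtered - signal) ** 2))        # typically much smaller
```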
A Primer on Semiconvex Functions in General Potential Theories
This book examines the symbiotic interplay between fully nonlinear elliptic partial differential equations and general potential theories of second order. Starting with a self-contained presentation of the classical theory of first and second order differentiability properties of convex functions, it collects a wealth of results on how to treat second order differentiability in a pointwise manner for merely semicontinuous functions. The exposition features an analysis of upper contact jets for semiconvex functions, a proof of the equivalence of two crucial, independently developed lemmas of Jensen (on the viscosity theory of PDEs) and Slodkowski (on pluripotential theory), and a detailed description of the semiconvex approximation of upper semicontinuous functions. The foundations of general potential theories are covered, with a review of monotonicity and duality, and the basic tools in the viscosity theory of generalized subharmonics, culminating in an account of the monotonicity-duality method for proving comparison principles. The final section shows that the notion of semiconvexity extends naturally to manifolds. A complete treatment of important background results, such as Alexandrov's theorem and a Lipschitz version of Sard's lemma, is provided in two appendices. The book is aimed at a wide audience, including professional mathematicians working in fully nonlinear PDEs, as well as master's and doctoral students with an interest in mathematical analysis.
Recent Progress in Numerical Analysis of Nonlinear Dispersive Equations
This book presents an overview of recent advances in the numerical analysis of nonlinear dispersive partial differential equations (PDEs) - including the nonlinear Schrödinger equation, the Korteweg-de Vries (KdV) equation, and the nonlinear Klein-Gordon equation. These fundamental models are central to mathematical physics and computational PDE theory, and their analysis, both individually and through asymptotic relationships, has become an active and evolving area of research.

Recent progress includes the extension of harmonic analysis tools, such as Strichartz estimates and Bourgain spaces, into discrete settings. These innovations have improved the accuracy and flexibility of numerical methods, especially by relaxing regularity assumptions on initial data, potentials, and nonlinearities. Additionally, enhanced long-time numerical estimates now support simulations over substantially longer time intervals, expanding the practical reach of computational models.

The analytical breakthroughs that underpin these developments trace back to the seminal work by Jean Bourgain in the 1990s, which introduced powerful techniques for studying dispersive PDEs. Adapting these continuous tools to discrete frameworks has proven both challenging and rewarding, offering new insights into the interface between numerical computation and theoretical analysis.

Aimed at graduate students, researchers, and practitioners in numerical analysis, applied mathematics, and computational physics, this volume provides a clear entry point into cutting-edge research, supported by a rich bibliography for further exploration.
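A standard entry point to numerics for such equations is the split-step Fourier method for the nonlinear Schrödinger equation, which alternates exact solves of the linear and nonlinear parts. A compact sketch (illustrative only, and far simpler than the low-regularity schemes the book surveys):

```python
import numpy as np

# Split-step Fourier sketch for the focusing NLS  i u_t = -u_xx - 2|u|^2 u,
# whose soliton  u(x, t) = sech(x) e^{it}  keeps a constant modulus.
N, L, dt, steps = 256, 40.0, 1e-3, 2000
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)      # angular wavenumbers
u = 1 / np.cosh(x)                              # soliton initial data

for _ in range(steps):
    u = u * np.exp(1j * dt * np.abs(u) ** 2)    # half nonlinear step
    u = np.fft.ifft(np.exp(-1j * dt * k ** 2) * np.fft.fft(u))  # linear step
    u = u * np.exp(1j * dt * np.abs(u) ** 2)    # half nonlinear step

print(np.max(np.abs(np.abs(u) - 1 / np.cosh(x))))  # small: soliton preserved
```

Each half nonlinear step applies the exact phase rotation exp(2i|u|^2 * dt/2), which is valid because |u| is constant along that subflow.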
Communicating the Quantum Way
The book consists of scientific essays dedicated to A S Holevo on the occasion of his 80th birthday, written by prominent researchers in theoretical-mathematical physics. It is a snapshot of contemporary research at the frontier of quantum information science, work that builds on Holevo's pioneering contributions during the decades of his distinguished career. While a portion of the book comprises the 15 articles published in a special issue of IJQI, it also contains additional material, such as a summary of Holevo's scientific life, with comments on research directions not covered in the 15 articles, and the list of his publications.
Social Impact of AI: Research, Diversity and Inclusion Frameworks
This volume constitutes the refereed proceedings of the Fourth International Workshop on Social Impact of AI, SIAI 2025, which was held in Philadelphia, PA, USA, in March 2025. The 8 full papers, 4 short papers, and one poster presented in these proceedings were carefully reviewed and selected from 26 submissions. SIAI-ReDI 2025 centered on the development and assessment of inclusive AI frameworks that prioritize ethical design, equitable deployment, and cultural context. The workshop's call for papers invited submissions across a range of urgent topics, including:
- Algorithmic fairness and transparency
- Inclusive AI education and workforce strategies
- AI in marginalized and underrepresented communities
- Intersectionality, gender, and accessibility in AI
- Cross-cultural AI governance and regulation
- Public trust, participatory AI, and responsible design
Mathematical Methods for Accident Reconstruction
This book demonstrates the application of mathematics to modeling accident reconstructions involving a range of moving vehicles, including automobiles, small and large trucks, bicycles, motorcycles, all-terrain vehicles, and construction equipment, such as hoists and cranes. The book is anchored on the basic principles of physics that may be applied to any of the above-named vehicles or equipment. Topics covered include the foundations of measurement, the various energy methods used in reconstruction, momentum methods, vehicle specifications, failure analysis, geometrical characteristics of highways, and softer scientific issues, such as visibility, perception, and reaction. The authors examine the fundamental characteristics of different vehicles, discuss the retrieval of data from crash data recorders, and review low-speed impacts with an analysis of staged collisions. Finally, it details established standards and protocols for accident reconstruction for use in both investigations and in the courtroom. In addition, this new edition covers nonvehicle-related topics like slip and fall analysis, fall protection, and engine and mechanical failure, as well as federal rules and laws that have been established for the work environment. Many reconstructions of incidents require extensive analysis that may require a variety of methods in order to properly model the incident. Exploring a broad range of accident scenarios, the breadth and depth of this book's coverage makes Mathematical Methods for Accident Reconstruction, Second Edition, a critical reference for engineers and scientists who perform vehicular accident reconstructions.
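Among the momentum methods mentioned, the simplest is a one-dimensional balance for a two-vehicle impact. A toy sketch with invented figures (real reconstructions are considerably more involved):

```python
# One-dimensional conservation of momentum for a two-vehicle impact:
# m1*v1 + m2*v2 = m1*u1 + m2*u2. Given post-impact speeds (e.g. inferred
# from skid-mark energy analysis), solve for the unknown pre-impact speed.
m1, m2 = 1500.0, 2000.0        # vehicle masses, kg (invented)
u1, u2 = 5.0, 9.0              # post-impact speeds, m/s (invented)
v2 = 3.0                       # known pre-impact speed of vehicle 2, m/s
v1 = (m1 * u1 + m2 * u2 - m2 * v2) / m1
print(v1)                      # pre-impact speed of vehicle 1: 13.0 m/s
```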
Data Clustering with Python
Data clustering, an interdisciplinary field with diverse applications, has gained increasing popularity since its origins in the 1950s. Over the past six decades, researchers from various fields have proposed numerous clustering algorithms. In 2011, I wrote a book on implementing clustering algorithms in C++ using object-oriented programming. While C++ offers efficiency, its steep learning curve makes it less ideal for rapid prototyping. Since then, Python has surged in popularity, becoming the most widely used programming language since 2022. Its simplicity and extensive scientific libraries make it an excellent choice for implementing clustering algorithms.

Features:
- Introduction to Python programming fundamentals
- Overview of key concepts in data clustering
- Implementation of popular clustering algorithms in Python
- Practical examples of applying clustering algorithms to datasets
- Access to associated Python code on GitHub

This book extends my previous work by implementing clustering algorithms in Python. Unlike the object-oriented approach in C++, this book uses a procedural programming style, as Python allows many clustering algorithms to be implemented concisely. The book is divided into two parts: the first introduces Python and key libraries like NumPy, Pandas, and Matplotlib, while the second covers clustering algorithms, including hierarchical and partitional methods. Each chapter includes theoretical explanations, Python implementations, and practical examples, with comparisons to scikit-learn where applicable. This book is ideal for anyone interested in clustering algorithms, with no prior Python experience required. The complete source code is available at: https://github.com/ganml/dcpython.
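In the same concise procedural spirit (this sketch is mine, not an excerpt from the book), Lloyd's k-means algorithm fits in a dozen lines of NumPy:

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Lloyd's algorithm: alternate point assignment and centroid update."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(4, 0.5, (50, 2))])
labels, centers = kmeans(X, 2)
print(centers)  # near (0, 0) and (4, 4), the two planted cluster means
```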
Real Analysis - An Introduction
Designed for a broad spectrum of mathematics majors, not only those pursuing graduate school, this book provides a thorough explanation of undergraduate Real Analysis. Through a developmentally appropriate narrative that integrates informal discussion, motivation, and basic proof-writing approaches with mathematical rigor and clarity, it aims to help all students learn more about the real number system and the theory of calculus.
Extreme-Scale Computing
Scientific computing is essential for tackling complex problems across many domains--but how can scientists develop high-performance and high-quality software that scales efficiently? This book serves as an accessible introduction to extreme-scale computing, specifically designed for domain scientists who may not have formal computer science training but need to harness the power of C++ and parallel computing for large-scale applications. The book begins by covering the fundamentals of scientific computing software management, including essential tools like Linux, Git, and CMake, before diving into a detailed exploration of C++ for extreme-scale computing. Readers familiar with languages like Python will gain the necessary skills to transition to C++ and build scalable, efficient software. Beyond basic programming, this book delves into hardware-aware computing, teaching readers how to optimize software performance by understanding the underlying architecture of modern computational systems. It then introduces parallel computing techniques, covering MPI for distributed memory parallelism, shared memory parallelism, CUDA for GPU programming, and Kokkos for performance portability. Further chapters focus on efficient I/O, debugging, and profiling, which all address aspects of the critical challenge of performance optimization in extreme-scale computing. The book concludes with an overview of popular libraries for extreme-scale computing, equipping readers with the tools they need to solve real-world computational problems. With a balance of theory, practical applications, and illustrative case studies, this book provides domain scientists with a comprehensive roadmap to mastering extreme-scale computing and developing highly parallel and performant software.
Recent Advances in Fixed Point Theory in Abstract Spaces
Solved Exercises in Counting Principles and Probability
Quadrature, Interpolation and Observability
Methods of interpolation and quadrature have been used for over 300 years. Improvements in the techniques have been made by many, most notably by Gauss, whose technique applied to polynomials is referred to as Gaussian Quadrature. Stieltjes extended Gauss's method to certain non-polynomial functions as early as 1884. Conditions that guarantee the existence of quadrature formulas for certain collections of functions were studied by Tchebycheff, and his work was extended by others. Today, a class of functions which satisfies these conditions is called a Tchebycheff System. This thesis contains the definition of a Tchebycheff System, along with the theorems, proofs, and definitions necessary to guarantee the existence of quadrature formulas for such systems. Solutions of discretely observable linear control systems are of particular interest, and observability with respect to a given output function is defined. The output function is written as a linear combination of a collection of orthonormal functions. Orthonormal functions are defined, and their properties are discussed. The technique for evaluating the coefficients in the output function involves evaluating the definite integral of functions which can be shown to form a Tchebycheff system. Therefore, quadrature formulas for these integrals exist, and in many cases are known. The technique given is useful in cases where the method of direct calculation is unstable. The condition number of a matrix is defined and shown to be an indication of the degree to which perturbations in data affect the accuracy of the solution. In special cases, the number of data points required for direct calculation is the same as the number required by the method presented in this thesis. But the method is shown to require more data points in other cases. A lower bound for the number of data points required is given.
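For a concrete instance of the Gaussian quadrature the thesis builds on (a generic NumPy illustration, not the thesis's own code), five Gauss-Legendre nodes integrate any polynomial of degree at most nine exactly:

```python
import numpy as np

# Gauss-Legendre quadrature: n nodes integrate polynomials of degree
# up to 2n - 1 exactly on [-1, 1].
nodes, weights = np.polynomial.legendre.leggauss(5)

f = lambda x: x ** 8 - 3 * x ** 2 + 1     # degree 8 <= 2*5 - 1
approx = np.dot(weights, f(nodes))
exact = 2 / 9 - 2 + 2                     # antiderivative evaluated at +-1
print(approx, exact)                      # agree to machine precision
```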
Consistency Results for the ROC Curves of Fused Classifiers
The U.S. Air Force is researching the fusion of multiple classifiers. Given a finite collection of classifiers, one seeks a new fused classifier with improved performance. An established performance quantifier is the Receiver Operating Characteristic (ROC) curve, which allows one to view the probability of detection versus the probability of false alarm in one graph. Previous research shows that one does not have to perform tests to determine the ROC curve of this new fused classifier. If the ROC curve for each individual classifier has been determined, then formulas for the ROC curve of the fused classifier exist for certain fusion rules. This will be an enormous saving in time and money, since the performance of many fused classifiers can be determined analytically. In reality only finite data is available, so only an estimated ROC curve can be constructed. It has been proven that estimated ROC curves will converge to the true ROC curve in probability. This research examines whether convergence is preserved when these estimated ROC curves are fused. It provides a general result for fusion rules that are governed by a Lipschitz continuous ROC fusion function and establishes a metric that can be used to prove this convergence.
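The estimated ROC curves discussed here are built from finite samples of classifier scores; the estimation step itself is short (an illustrative sketch with simulated scores, not the thesis's code):

```python
import numpy as np

def empirical_roc(scores_pos, scores_neg, thresholds):
    """Estimated ROC: detection vs false-alarm rate across thresholds."""
    p_d = np.array([(scores_pos >= t).mean() for t in thresholds])
    p_fa = np.array([(scores_neg >= t).mean() for t in thresholds])
    return p_fa, p_d

rng = np.random.default_rng(0)
pos = rng.normal(1.0, 1.0, 500)    # scores when a target is present
neg = rng.normal(0.0, 1.0, 500)    # scores when no target is present
p_fa, p_d = empirical_roc(pos, neg, np.linspace(-3.0, 4.0, 8))
print(np.c_[p_fa, p_d])            # points on the estimated ROC curve
print((pos[:, None] > neg[None, :]).mean())  # estimated area under the curve
```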
Descartes's Theory of Light and Refraction
Contents: (1) Historical Overview; & Descartes's Perspectivist Sources; (2) Analysis of Refraction: Cartesian Light-Theory; & A Critical Evaluation; (3) The Foundations of Perspectivist Optics: Perspectivist Light-Theory; Quantization of Light; & Comparison with Descartes's Theory of Light; (4) The Perspectivist Analysis of Refraction: Physical Model; Physical "Explanation" & The Final Cause; (5) Perspectivist Grounds of the Cartesian Proof: Mathematical Implications; From Cosines to Sines; & Descartes Revisited; (6) Cartesian Light-Theory as a Culmination; Toward a Kinetic Theory of Light; & Epistemological Consequences. App.: The Sine-Law Before Descartes; The Fermat-Descartes Controversy; & Kepler, Descartes & the Anaclastic. Illustrations.
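The sine law at the heart of this study is easy to state numerically (a generic illustration with textbook refractive indices, not material from the book): n1 sin theta1 = n2 sin theta2.

```python
import math

# Snell's sine law of refraction: n1 * sin(theta1) = n2 * sin(theta2).
n1, n2 = 1.0, 1.33                 # air to water (textbook indices)
theta1 = math.radians(30.0)        # angle of incidence
theta2 = math.asin(n1 * math.sin(theta1) / n2)
print(math.degrees(theta2))        # ~22.1 degrees of refraction
```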
Mathematical Programming Model for Fighter Training Squadron Pilot Scheduling
United States Air Force fighter training squadrons build their weekly schedules through a long and tedious process. Very little of the process is automated, and optimality of any kind is nearly impossible to achieve. Schedules are built only to a feasible condition, then changed to accommodate Wing-level requirements. Weekly flying schedules are restricted by requirements for crew rest, days since a pilot's last sortie, sorties in the last 30 days, and sorties in the last 90 days. Providing a scheduling model to the pilot charged with creating the schedule would free valuable pilot hours for the cockpit, the simulator, or other required duties. This research effort presents a mathematical programming (MP) approach to the fighter squadron pilot training scheduling problem. The methodology is based on binary variables that yield integer solutions for every feasible set of inputs. A simulator heuristic developed specifically for this problem assigns pilots to simulator sorties based on the feasible solutions obtained from two different formulation and solution approaches: one assigns training mission sorties and duties for the entire week, while the other breaks the week into ten successive sub-problems. The model constructs two feasible schedules in approximately 2.5 minutes.
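A hedged sketch of what such a binary formulation might look like, written with the open-source PuLP modeling library; the pilots, the constraint limits, and the consecutive-day "crew rest" rule are invented stand-ins for the thesis's actual 30- and 90-day requirements, not its model.

```python
# Illustrative binary scheduling formulation in the spirit of the thesis
# (names, data, and constraint limits are invented assumptions).
# Requires the PuLP modeling library (pip install pulp).
import pulp

pilots = ["P1", "P2", "P3", "P4"]
days = list(range(5))            # one training week, Mon-Fri
sorties_per_day = 2              # required flying line each day
max_sorties_week = 3             # stand-in for the 30/90-day sortie limits

prob = pulp.LpProblem("squadron_schedule", pulp.LpMinimize)
x = pulp.LpVariable.dicts("fly", (pilots, days), cat="Binary")

# Null objective: any feasible schedule is acceptable, matching the
# feasibility-first setting described in the abstract.
prob += pulp.lpSum([])

for d in days:
    # Fill the daily flying schedule exactly.
    prob += pulp.lpSum(x[p][d] for p in pilots) == sorties_per_day
for p in pilots:
    # Rolling sortie limit (crude proxy for the 30/90-day rules).
    prob += pulp.lpSum(x[p][d] for d in days) <= max_sorties_week
    for d in days[:-1]:
        # Crude crew-rest proxy: no pilot flies on consecutive days.
        prob += x[p][d] + x[p][d + 1] <= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for p in pilots:
    print(p, [d for d in days if x[p][d].value() == 1])
```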
An Investigation of the Effects of Correlation, Autocorrelation, and Sample Size in Classifier Fusion
This thesis extends the research of Storm, Bauer, and Oxley (2003), investigating the effects of data correlation and sample size on three classifier fusion techniques and one data fusion technique. Identification System Operating Characteristic Fusion (Haspert, 2000), the Receiver Operating Characteristic "Within" Fusion method (Oxley and Bauer, 2002), and a Probabilistic Neural Network were the three classifier fusion techniques; a Generalized Regression Neural Network was the data fusion technique. Correlation was injected into the data set both within a feature set (autocorrelation) and across feature sets for a variety of classification problems, and sample size was varied throughout. Total Probability of Misclassification (TPM) was calculated for selected problems to show the effect of correlation on classification performance.
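As a rough illustration of this kind of experimental setup, the sketch below injects a chosen correlation into synthetic two-class data via a Cholesky factor and estimates TPM empirically; the data, the deliberately simple nearest-mean classifier, and the class shifts are invented and merely stand in for the fusion techniques actually studied.

```python
# Illustrative sketch (invented data and a deliberately simple classifier):
# inject a chosen correlation into synthetic features via a Cholesky factor,
# then estimate the Total Probability of Misclassification (TPM) empirically.
import numpy as np

rng = np.random.default_rng(0)

def correlated_features(n, rho, shift):
    """Two features with correlation rho, shifted by +/- shift per class."""
    cov = np.array([[1.0, rho], [rho, 1.0]])
    L = np.linalg.cholesky(cov)
    return rng.standard_normal((n, 2)) @ L.T + shift

def estimate_tpm(rho, n=5000):
    x0 = correlated_features(n, rho, shift=-0.5)   # class 0
    x1 = correlated_features(n, rho, shift=+0.5)   # class 1
    # Nearest-mean rule: assign to class 1 when the feature sum exceeds 0.
    err0 = np.mean(x0.sum(axis=1) > 0)
    err1 = np.mean(x1.sum(axis=1) <= 0)
    return 0.5 * (err0 + err1)                      # equal priors

for rho in (0.0, 0.5, 0.9):
    print(f"rho={rho:.1f}  TPM~{estimate_tpm(rho):.3f}")
```

With this toy geometry, raising rho inflates the variance along the decision direction, so the printed TPM grows with correlation, which is the qualitative effect the thesis quantifies.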
Anderson-Darling and Cramér-von Mises Based Goodness-of-Fit Tests for the Weibull Distribution With Known Shape Using Normalized Spacings
Two new goodness-of-fit tests are developed for the three-parameter Weibull distribution with known shape parameter. These procedures eliminate the need for estimating the unknown location and scale parameters prior to initiating the tests and are easily adapted for censored data. This is accomplished by employing the Anderson-Darling (A²ₛ) and Cramér-von Mises (W²ₛ) statistics based on the normalized spacings of the sample data. Critical values of A²ₛ and W²ₛ are obtained for common significance levels by large Monte Carlo simulations for shape parameters of 0.5(0.5)4.0 and sample sizes of 5(5)40 with up to 40% censoring (type II) from the left and/or right. An extensive Monte Carlo power study is also conducted to compare the two tests with each other and with their prominent competitors. The competitors include another spacings test, Z, and the modified Kolmogorov-Smirnov (KS), Cramér-von Mises (W²), and Anderson-Darling (A²) EDF tests. The power results indicate that no one test is superior in all situations. When the alternatives considered are tested against a skewed Weibull null distribution, A²ₛ and W²ₛ achieve considerably higher power than the other EDF tests, but do not perform as well as Z. On the other hand, when the null distribution is symmetric, Z loses all of its power, while A²ₛ and W²ₛ yield power comparable to the other EDF tests. Results also show that A²ₛ generally outperforms W²ₛ; for these reasons, A²ₛ is the preferred test for the three-parameter Weibull with known shape.
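A minimal sketch of the normalized-spacings idea, under simplifying assumptions (location fixed at zero, two-parameter Weibull, and an illustrative statistic rather than the thesis's exact A²ₛ): transform to exponentiality using the known shape, form normalized spacings, reduce them to uniform order statistics to remove the scale parameter, and calibrate the critical value by Monte Carlo.

```python
# Illustrative sketch (not the thesis's exact statistic): a scale-free
# Anderson-Darling-type test built from normalized spacings of a Weibull
# sample with known shape beta; location is assumed zero here, whereas the
# thesis also eliminates an unknown location parameter.
import numpy as np

rng = np.random.default_rng(1)

def ad_statistic(u):
    """Anderson-Darling statistic for a sample u that should be Uniform(0,1)."""
    u = np.sort(u)
    m = len(u)
    i = np.arange(1, m + 1)
    return -m - np.mean((2 * i - 1) * (np.log(u) + np.log(1 - u[::-1])))

def spacings_statistic(x, beta):
    """Transform to exponentiality, form normalized spacings, and test them."""
    y = np.sort(x) ** beta                      # exponential under H0 (scale unknown)
    d = np.diff(np.concatenate(([0.0], y)))     # raw spacings
    d *= np.arange(len(y), 0, -1)               # normalized: i.i.d. exponential under H0
    t = np.cumsum(d)[:-1] / d.sum()             # uniform order statistics; scale cancels
    return ad_statistic(t)

beta, n, reps = 2.0, 20, 5000
null_stats = [spacings_statistic(rng.weibull(beta, n), beta) for _ in range(reps)]
crit95 = np.quantile(null_stats, 0.95)          # Monte Carlo critical value
print("95% critical value:", round(crit95, 3))

# Empirical power against a lognormal alternative at the same sample size.
rejections = [spacings_statistic(rng.lognormal(0.0, 0.5, n), beta) > crit95
              for _ in range(reps)]
print("empirical power vs lognormal:", np.mean(rejections))
```

Dividing the cumulative spacings by their total is what removes the unknown scale, mirroring the abstract's claim that no parameter estimation is needed before the test is run.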