Interacting with Information
We live in an "information age," but information is only useful when it is interpreted by people and applied in the context of their goals and activities. The volume of information to which people have access is growing at an incredible rate, vastly outstripping people's ability to assimilate and manage it. In order to design technologies that better support information work, it is necessary to better understand the details of that work. In this lecture, we review the situations (physical, social and temporal) in which people interact with information. We also discuss how people interact with information in terms of an "information journey," in which people, iteratively, do the following: recognise a need for information, find information, interpret and evaluate that information in the context of their goals, and use the interpretation to support their broader activities. People's information needs may be explicit and clearly articulated but, conversely, may be tacit, exploratory and evolving. Widely used tools supporting information access, such as searching on the Web and in digital libraries, support clearly defined information requirements well, but they provide limited support for other information needs. Most other stages of the information journey are poorly supported at present. Novel design solutions are unlikely to be purely digital, but to exploit the rich variety of information resources, digital, physical and social, that are available. Theories of information interaction and sensemaking can highlight new design possibilities that augment human capabilities. We review relevant theories and findings for understanding information behaviours, and we review methods for evaluating information working tools, to both assess existing tools and identify requirements for the future. Table of Contents: Introduction: Pervasive Information Interactions / Background: Information Interaction at the Crossroads of Research Traditions / The Situations: Physical, Social and Temporal / The Behaviors: Understanding the "Information Journey" / The Technologies: Supporting the Information Journey / Studying User Behaviors and Needs for Information Interaction / Looking to the Future / Further Reading
The Design of Implicit Interactions
People rely on implicit interaction in their everyday interactions with one another to exchange queries, offers, responses, and feedback without explicit communication. A look with the eyes, a wave of the hand, the lift of the door handle--small moves can do a lot to enable joint action with elegance and economy. This work puts forward a theory that these implicit patterns of interaction with one another drive our expectations of how we should interact with devices. I introduce the Implicit Interaction Framework as a tool to map out interaction trajectories, and use these trajectories to better understand the interactions transpiring around us. By analyzing everyday implicit interactions for patterns and tactics, designers of interactive devices can better understand how to design interactions that work or to remedy interactions that fail. This book looks at the "smart," "automatic," and "interactive" devices that increasingly permeate our everyday lives--doors, switches, whiteboards--and provides a close reading of how we interact with them. These vignettes add to the growing body of research targeted at teasing out the factors at play in our interactions. I take a look at current research, which indicates that our reactions to interactions are social, even if the entities we are interacting with are not human. These research insights are applied to allow us to refine and improve interactive devices so that they work better in the context of our day-to-day lives. Finally, this book looks to the future, and outlines considerations that need to be taken into account in prototyping and validating devices that employ implicit interaction. Table of Contents: Acknowledgments / Introduction / The Theory and Framework for Implicit Interaction / Opening the Door to Interaction / Light and Dark: Patterns in Interaction / Action and Reaction: The Interaction Design Factory / Driving into the Future, Together / Bibliography / Author Biography
A Handbook for Analytical Writing
This handbook accelerates the development of analytical writing skills for high school students, students in higher education, and working professionals in a broad range of careers. This handbook builds on the idea that writing clarifies thought, and that through analytical writing comes improved insight and understanding for making decisions about innovation necessary for socioeconomic development. This short handbook is a simple, comprehensive guide that shows differences between descriptive writing and analytical writing, and how students and teachers work together during the process of discovery-based learning. This handbook provides nuts-and-bolts ideas for team projects, organizing writing, the process of writing, constructing tables, presenting figures, documenting reference lists, and avoiding the barriers to clear writing, and it outlines the importance of ethical issues and bias for writers. Finally, there are ideas for evaluating writing, and examples of classroom exercises for students and teachers.
Experience-Centered Design
Experience-centered design, experience-based design, experience design, designing for experience, user experience design. All of these terms have emerged and gained acceptance in the Human-Computer Interaction (HCI) and Interaction Design communities relatively recently. In this book, we set out our understanding of experience-centered design as a humanistic approach to designing digital technologies and media that enhance lived experience. The book is divided into three sections. In Section 1, we outline the historical origins and basic concepts that led into and flow out from our understanding of experience as the heart of people's interactions with digital technology. In Section 2, we describe three examples of experience-centered projects and use them to illustrate and explain our dialogical approach. In Section 3, we recapitulate some of the main ideas and themes of the book and discuss the potential of experience-centered design to continue the humanist agenda by giving a voice to those who might otherwise be excluded from design and by creating opportunities for people to enrich their lived experience with and through technology. Table of Contents: How Did We Get Here? / Some Key Ideas Behind Experience-Centered Design / Making Sense of Experience in Experience-Centered Design / Experience-Centered Design as Dialogue / What do We Mean by Dialogue? / Valuing Experience-Centered Design / Where Do We Go from Here?
Studies of Work and the Workplace in HCI
This book has two purposes. First, to introduce the study of work and the workplace as a method for informing the design of computer systems to be used at work. We primarily focus on the predominant way in which the organization of work has been approached within the field of human-computer interaction (HCI), which is from the perspective of ethnomethodology. We locate studies of work in HCI within its intellectual antecedents, and describe paradigmatic examples and case studies. Second, we hope to provide those who intend to conduct the type of fieldwork that studies of work and the workplace draw on with suggestions as to how they can go about their own work of developing observations about the settings they encounter. These suggestions take the form of a set of maxims that we have found useful while conducting the studies we have been involved in. We draw from our own fieldwork notes in order to illustrate these maxims. In addition, we offer some homilies about how to make observations; again, these are ones we have found useful in our own work. Table of Contents: Motivation / Overview: A Paradigmatic Case / Scientific Foundations / Detailed Description / Case Study / How to Conduct Ethnomethodological Studies of Work / Making Observations / Current Status
Context-Aware Mobile Computing
The integration of ubiquitous mobile computing resources into physical spaces can potentially affect the development, maintenance, and transformation of communities and social interactions and relations within a particular context or location. Ubiquitous mobile computing allows users to engage in activities in diverse physical locations, to access resources specific to the location, and to communicate directly or indirectly with others. Mobile technologies can potentially enhance social interactions and users' experiences, extend both social and informational resources available in context, and greatly alter the nature and quality of our interactions. Activities using mobile devices in context generate complex systems of interactions, and the benefits of ubiquity and mobility can be easily lost if that complexity is not appreciated and understood. This monograph attempts to address issues in the use and design of location-based computing systems and in the use of these tools to enhance social awareness, navigate spaces, extend interactions, and influence others. Table of Contents: Introduction / Space, Place, and Context / Creating a Sense of Presence and Awareness with Mobile Tools / Mobile Computing: A Tool for Social Influence to Change Behavior / Ethical Issues and Final Thoughts
Common Ground in Electronically Mediated Conversation
Technologies that electronically mediate conversation, such as text-based chat or desktop video conferencing, draw on theories of human-human interaction to make predictions about the effects of design decisions. This lecture reviews the theory that has been most influential in this area: Clark's theory of language use. The key concept in Clark's theory is that of common ground. Language is viewed as a collaborative activity that uses existing common ground to develop further common ground and, hence, to communicate efficiently. The theory (a) defines different kinds of common ground, (b) formalizes the notion of collaborative activity as a "joint action," and (c) describes the processes by which common ground is developed through joint action. Chapter 1 explains why a purely cognitive model of communication is not enough and what is meant by the phrase "collaborative activity." Chapter 2 introduces the idea of common ground and how it is used in language through an example of two people conversing over a video link. Chapter 3 indicates where the interested reader can find out about the antecedents to Clark's theory. Chapter 4 sets out the fundamental concepts in Clark's theory. Chapter 5 uses five published case studies of electronically mediated communication to illustrate the value of the theory. These include studies of a computer-supported meeting room (Cognoter), a video tunnel that supports gaze awareness, video conferencing in medical consultation, and text chat. Table of Contents: Motivation - Conversation as a Collaborative Activity / Overview - Developing Common Ground, An Example / Scientific Foundations / The Theory in More Detail / Case Studies - Applying the Theory to Electronically Mediated Communication / Current Status
Designing for User Engagement
This book explores the design process for user experience and engagement, which expands the traditional concept of usability and utility in design to include aesthetics, fun and excitement. User experience has evolved as a new area of Human-Computer Interaction research, motivated by non-work-oriented applications such as games, education and the emerging interactive Web 2.0. The book starts by examining the phenomena of user engagement and experience and setting them in the perspective of cognitive psychology, in particular motivation, emotion and mood. The perspective of aesthetics is expanded towards interaction and engagement to propose design treatments, metaphors, and interactive techniques which can promote user interest, excitement and satisfying experiences. This is followed by reviewing the design process and design treatments which can promote aesthetic perception and engaging interaction. The final part of the book provides design guidelines and principles drawn from the interaction and graphical design literature which are cross-referenced to issues in the design process. Examples of designs and design treatments are given to illustrate principles and advice, accompanied by critical reflection. Table of Contents: Introduction / Psychology of User Engagement / UE Design Process / Design Principles and Guidelines / Perspectives and Conclusions
Internet of Things and Secure Smart Environments
This book will provide a comprehensive overview of recent research and open problems in the area of the Internet of Things (IoT). It will cover state-of-the-art problems, present solutions and open research directions, and is targeted at researchers and scholars in both industry and academia.
Neural Network Methods for Natural Language Processing
Neural networks are a family of powerful machine learning models. This book focuses on the application of neural network models to natural language data. The first half of the book (Parts I and II) covers the basics of supervised machine learning and feed-forward neural networks, the basics of working with machine learning over language data, and the use of vector-based rather than symbolic representations for words. It also covers the computation-graph abstraction, which makes it possible to easily define and train arbitrary neural networks and is the basis behind the design of contemporary neural network software libraries. The second part of the book (Parts III and IV) introduces more specialized neural network architectures, including 1D convolutional neural networks, recurrent neural networks, conditioned-generation models, and attention-based models. These architectures and techniques are the driving force behind state-of-the-art algorithms for machine translation, syntactic parsing, and many other applications. Finally, we also discuss tree-shaped networks, structured prediction, and the prospects of multi-task learning.
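To make the computation-graph abstraction mentioned above more concrete, the following Python sketch builds a tiny graph of scalar operations and backpropagates through it. It is an editor's illustration rather than code from the book, and all class and variable names are invented; contemporary libraries implement the same idea over tensors rather than scalars.

# Minimal sketch of the computation-graph idea (illustrative only, not from the book).
# Each Node records its inputs and a local backward rule; calling backward() on the
# final node propagates gradients through the graph in reverse topological order.
import math

class Node:
    def __init__(self, value, parents=(), backward_rule=lambda grad: ()):
        self.value = value                   # forward value
        self.grad = 0.0                      # accumulated gradient of the output w.r.t. this node
        self.parents = parents               # nodes this one was computed from
        self.backward_rule = backward_rule   # maps incoming grad to grads for the parents

    def __add__(self, other):
        return Node(self.value + other.value, (self, other),
                    lambda grad: (grad, grad))

    def __mul__(self, other):
        return Node(self.value * other.value, (self, other),
                    lambda grad: (grad * other.value, grad * self.value))

    def tanh(self):
        t = math.tanh(self.value)
        return Node(t, (self,), lambda grad: (grad * (1.0 - t * t),))

    def backward(self):
        # Reverse topological order ensures a node's grad is complete before its parents'.
        order, seen = [], set()
        def visit(node):
            if node not in seen:
                seen.add(node)
                for p in node.parents:
                    visit(p)
                order.append(node)
        visit(self)
        self.grad = 1.0
        for node in reversed(order):
            for parent, g in zip(node.parents, node.backward_rule(node.grad)):
                parent.grad += g

# Usage: a one-neuron "network" y = tanh(w * x + b); gradients for w and b follow
# automatically from the graph, which is the point of the abstraction.
x, w, b = Node(0.5), Node(-1.2), Node(0.3)
y = (w * x + b).tanh()
y.backward()
print(y.value, w.grad, b.grad)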
A Primer on Physical-Layer Network Coding
The concept of physical-layer network coding (PNC) was proposed in 2006 for application in wireless networks. Since then it has developed into a subfield of communications and networking with a wide following. This book is a primer on PNC. It is the outcome of a set of lecture notes for a course for beginning graduate students at The Chinese University of Hong Kong. The target audience is expected to have some prior background knowledge in communication theory and wireless communications, but not working knowledge at the research level. Indeed, a goal of this book/course is to allow the reader to gain a deeper appreciation of the various nuances of wireless communications and networking by focusing on problems arising from the study of PNC. Specifically, we introduce the tools and techniques needed to solve problems in PNC, and many of these tools and techniques are drawn from the more general disciplines of signal processing, communications, and networking: PNC is used as a pivot to learn about the fundamentals of signal processing techniques and wireless communications in general. We feel that such a problem-centric approach will give the reader a more in-depth understanding of these disciplines and allow him/her to see first-hand how the techniques of these disciplines can be applied to solve real research problems. As a primer, this book does not cover many advanced materials related to PNC. PNC is an active research field and many new results will no doubt be forthcoming in the near future. We believe that this book will provide a good contextual framework for the interpretation of these advanced results should the reader decide to probe further into the field of PNC.
Performance Modeling, Stochastic Networks, and Statistical Multiplexing, Second Edition
This monograph presents a concise mathematical approach for modeling and analyzing the performance of communication networks with the aim of introducing an appropriate mathematical framework for modeling and analysis as well as understanding the phenomenon of statistical multiplexing. The models, techniques, and results presented form the core of traffic engineering methods used to design, control and allocate resources in communication networks. The novelty of the monograph is the fresh approach and insights provided by a sample-path methodology for queueing models that highlights the important ideas of Palm distributions associated with traffic models and their role in computing performance measures. The monograph also covers stochastic network theory including Markovian networks. Recent results on network utility optimization and connections to stochastic insensitivity are discussed. Also presented are the ideas of large-buffer and many-sources asymptotics, which play an important role in understanding statistical multiplexing. In particular, the important concept of effective bandwidths as mappings from queueing level phenomena to loss network models is clearly presented along with a detailed discussion of accurate approximations for large networks.
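For readers unfamiliar with the term, one widely used definition of the effective bandwidth of a traffic source (the form popularized by Kelly; the notation here is illustrative and may differ from the monograph's) is

\[
\alpha(s,t) \;=\; \frac{1}{st}\,\log \mathbb{E}\!\left[e^{\,s\,X[0,t]}\right], \qquad s, t > 0,
\]

where $X[0,t]$ denotes the amount of work arriving from the source during the interval $[0,t]$. As $s \to 0$ the expression tends to the source's mean rate, and as $s \to \infty$ it tends to its peak rate, so effective bandwidths interpolate between the two and provide the mapping from queueing-level behaviour to loss-network-style resource allocation referred to above.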
Understanding Interaction: The Relationships Between People, Technology, Culture, and the Environment
Understanding Interaction explores the interaction between people and technology in the broader context of the relations between the human-made and the natural environments. It is not just about digital technologies - our computers, smartphones, the Internet - but all our technologies, such as mechanical, electrical, and electronic. Our ancestors started creating mechanical tools and shaping their environments millions of years ago, developing cultures and languages, which in turn influenced our evolution. Volume 1 looks into this deep history, starting from the tool-creating period (the longest and most influential on our physical and mental capacities) to the settlement period (agriculture, domestication, villages and cities, written language), the industrial period (science, engineering, reformation, and renaissance), and finally the communication period (mass media, digital technologies, and global networks). Volume 2 looks into humans in interaction - our physiology, anatomy, neurology, psychology, how we experience and influence the world, and how we (think we) think. From this transdisciplinary understanding, design approaches and frameworks are presented to potentially guide future developments and innovations. The aim of the book is to be a guide and inspiration for designers, artists, engineers, psychologists, media producers, social scientists, etc., and, as such, be useful for both novices and more experienced practitioners. Image Credit: Still of interactive video pattern created with a range of motion sensors in the Facets kaleidoscopic algorithm (based on underwater footage of seaweed movement) by the author on 4 February 2010, for a lecture at Hyperbody at the Faculty of Architecture, TU Delft, NL.
Automated Grammatical Error Detection for Language Learners, Second Edition
It has been estimated that over a billion people are using or learning English as a second or foreign language, and the numbers are growing not only for English but for other languages as well. These language learners provide a burgeoning market for tools that help identify and correct learners' writing errors. Unfortunately, the errors targeted by typical commercial proofreading tools do not include those aspects of a second language that are hardest to learn. This volume describes the types of constructions English language learners find most difficult: constructions containing prepositions, articles, and collocations. It provides an overview of the automated approaches that have been developed to identify and correct these and other classes of learner errors in a number of languages. Error annotation and system evaluation are particularly important topics in grammatical error detection because there are no commonly accepted standards. Chapters in the book describe the options available to researchers, recommend best practices for reporting results, and present annotation and evaluation schemes. The final chapters explore recent innovative work that opens new directions for research. It is the authors' hope that this volume will continue to contribute to the growing interest in grammatical error detection by encouraging researchers to take a closer look at the field and its many challenging problems.
Network Games
Traditional network optimization focuses on a single control objective in a network populated by obedient users, with limited dispersion of information. However, most of today's networks are large-scale, lack access to centralized information, consist of users with diverse requirements, and are subject to dynamic changes. These factors naturally motivate a new distributed control paradigm, where the network infrastructure is kept simple and the network control functions are delegated to individual agents which make their decisions independently ("selfishly"). The interaction of multiple independent decision-makers necessitates the use of game theory, including economic notions related to markets and incentives. This monograph studies game theoretic models of resource allocation among selfish agents in networks. The first part of the monograph introduces fundamental game theoretic topics. Emphasis is given to the analysis of dynamics in game theoretic situations, which is crucial for design and control of networked systems. The second part of the monograph applies the game theoretic tools for the analysis of resource allocation in communication networks. We set up a general model of routing in wireline networks, emphasizing the congestion problems caused by delay and packet loss. In particular, we develop a systematic approach to characterizing the inefficiencies of network equilibria, and highlight the effect of autonomous service providers on network performance. We then turn to examining distributed power control in wireless networks. We show that the resulting Nash equilibria can be efficient if the degree of freedom given to end-users is properly designed. Table of Contents: Static Games and Solution Concepts / Game Theory Dynamics / Wireline Network Games / Wireless Network Games / Future Perspectives
Humanistic HCI
Although it has influenced the field of Human-Computer Interaction (HCI) since its origins, humanistic HCI has come into its own since the early 2000s. In that time, it has made substantial contributions to HCI theory and methodologies and also had major influence in user experience (UX) design, aesthetic interaction, and emancipatory/social change-oriented approaches to HCI. This book reintroduces the humanities to a general HCI readership; characterizes its major epistemological and methodological commitments as well as forms of rigor; compares the scientific report vs. the humanistic essay as research products, while offering some practical advice for peer review; and focuses on two major topics where humanistic HCI has had particular influence in the field--user experience and aesthetics and emancipatory approaches to computing. This book argues for a more inclusive and broad reach for humanistic thought within the interdisciplinary field of HCI, and its lively and engaging style will invite readers into that project.
Worth-Focused Design, Book 2
This book introduces the concept of worth for design teams, relates it to experiences and outcomes, and describes how to focus on worth when researching and expressing design opportunities for generous worth. Truly interdisciplinary teams also need an appropriate common language, which was developed in the companion book Worth-Focused Design, Book 1: Balance, Integration, and Generosity (Cockton, 2020a). Its new lexicon for design progressions enables a framework for design and evaluation that works well with a worth focus. Design now has different meanings based upon the approach of different disciplinary practices. For some, it is the creation of value. For others, it is the conception and creation of artefacts. For still others, it is fitting things to people (beneficiaries). While each of these design foci has merits, there are risks in not having an appropriate balance across professions that claim the centre of design for their discipline and marginalise others. Generosity is key to the best creative design--delivering unexpected worth beyond documented needs, wants, or pain points. Truly interdisciplinary design must also balance and integrate approaches across several communities of practice, which is made easier by common ground. Worth provides a productive focus for this common ground and is symbiotic with balanced, integrated, and generous (BIG) practices. Practices associated with balance and integration for worth-focused generosity are illustrated in several case studies that have used approaches in this book, complementing them with additional practices.
The Trouble With Sharing
Peer-to-peer exchange is a type of sharing that involves the transfer of valued resources, such as goods and services, among members of a local community and/or between parties who have not met before the exchange encounter. It involves online systems that allow strangers to exchange in ways that were previously confined to the realm of kinship and friendship. Through the examples in this book, we encounter attempts to foster the sharing of goods and services in local communities and consider the intricacies of sharing homes temporarily with strangers (also referred to as hospitality exchange or network hospitality). Some of the exchange arrangements discussed involve money while others explicitly ban participants from using it. All rely on digital technologies, but the trickiest challenges have more to do with social interaction than technical features. This book explores what makes peer-to-peer exchange challenging, with an emphasis on reciprocity, closeness, and participation: How should we reciprocate? How might we manage interactions with those we encounter to attain some closeness but not too much? What keeps people from getting involved or draws them into exchange activities that they would rather avoid? This book adds to the growing body of research on exchange platforms and the sharing economy. It provides empirical examples and conceptual grounding for thinking about interpersonal challenges in peer-to-peer exchange and the efforts that are required for exchange arrangements to flourish. It offers inspiration for how we might think and design differently to better understand and support the efforts of those involved in peer-to-peer exchange. While the issues cannot be simply "solved" by technology, it matters which digital tools an exchange arrangement relies on, and even seemingly small design decisions can have a significant impact on what it is like to participate in exchange processes. The technologies that support exchange arrangements--often platforms of some sort--can be driven by differing sets of values and commitments. This book invites students and scholars in the Human-Computer Interaction community, and beyond, to envision and design alternative exchange arrangements and future economies.
Bayesian Analysis in Natural Language Processing, Second Edition
Natural language processing (NLP) went through a profound transformation in the mid-1980s when it shifted to make heavy use of corpora and data-driven techniques to analyze language. Since then, the use of statistical techniques in NLP has evolved in several ways. One such example of evolution took place in the late 1990s or early 2000s, when full-fledged Bayesian machinery was introduced to NLP. This Bayesian approach to NLP has come to address various shortcomings of the frequentist approach and to enrich it, especially in the unsupervised setting, where statistical learning is done without target prediction examples. In this book, we cover the methods and algorithms that are needed to fluently read Bayesian learning papers in NLP and to do research in the area. These methods and algorithms are partially borrowed from both machine learning and statistics and are partially developed "in-house" in NLP. We cover inference techniques such as Markov chain Monte Carlo sampling and variational inference, Bayesian estimation, and nonparametric modeling. In response to rapid changes in the field, this second edition of the book includes a new chapter on representation learning and neural networks in the Bayesian context. We also cover fundamental concepts in Bayesian statistics such as prior distributions, conjugacy, and generative modeling. Finally, we review some of the fundamental modeling techniques in NLP, such as grammar modeling, neural networks and representation learning, and their use with Bayesian analysis.
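As a small illustration of the conjugacy concept in an NLP setting (an editor's toy example, not one taken from the book; the vocabulary and counts are invented), the following Python sketch performs a conjugate Dirichlet-multinomial update for a unigram model: because the Dirichlet prior is conjugate to the multinomial likelihood, the posterior is again a Dirichlet whose parameters are simply the prior pseudo-counts plus the observed word counts.

# Illustrative sketch of Dirichlet-multinomial conjugacy for a unigram model
# (toy example; the vocabulary and corpus are invented).
from collections import Counter

vocabulary = ["the", "cat", "sat", "mat"]
prior_alpha = {w: 0.5 for w in vocabulary}   # symmetric Dirichlet prior (pseudo-counts)

corpus = "the cat sat the cat the mat".split()
counts = Counter(corpus)

# Conjugacy: Dirichlet prior + multinomial likelihood => Dirichlet posterior,
# obtained by adding the observed counts to the prior pseudo-counts.
posterior_alpha = {w: prior_alpha[w] + counts[w] for w in vocabulary}

# Posterior mean estimate of each word probability.
total = sum(posterior_alpha.values())
posterior_mean = {w: a / total for w, a in posterior_alpha.items()}
print(posterior_mean)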
Participatory Design
This book introduces Participatory Design to researchers and students in Human-Computer Interaction (HCI). Grounded in four strong commitments, the book discusses why and how Participatory Design is important today. The book aims to provide readers with a practical resource, introducing them to the central practices of Participatory Design research as well as to key references. This is done from the perspective of Scandinavian Participatory Design. The book is meant for students, researchers, and practitioners who are interested in Participatory Design for research studies, assignments in HCI classes, or as part of an industry project. It is structured around 11 questions arranged in 3 main parts that provide the knowledge needed to get started with practicing Participatory Design. Each chapter responds to a question about defining, conducting, or the results of carrying out Participatory Design. The authors share their extensive experience of Participatory Design processes and thinking by combining historical accounts, cases, how-to process descriptions, and reading lists to guide further readings so as to grasp the many nuances of Participatory Design as it is practiced across sectors, countries, and industries.
Natural Language Processing for Social Media, Third Edition
In recent years, online social networking has revolutionized interpersonal communication. Newer research on language analysis in social media has increasingly focused on its impact on our daily lives, at both a personal and a professional level. Natural language processing (NLP) is one of the most promising avenues for social media data processing. It is a scientific challenge to develop powerful methods and algorithms that extract relevant information from a large volume of data coming from multiple sources and languages in various formats or in free form. This book will discuss the challenges in analyzing social media texts in contrast with traditional documents. Research methods in information extraction, automatic categorization and clustering, automatic summarization and indexing, and statistical machine translation need to be adapted to a new kind of data. This book reviews the current research on NLP tools and methods for processing the non-traditional information from social media data that is available in large amounts, and it shows how innovative NLP approaches can integrate appropriate linguistic information in various fields such as social media monitoring, health care, and business intelligence. The book further covers the existing evaluation metrics for NLP and social media applications and the new efforts in evaluation campaigns or shared tasks on new datasets collected from social media. Such tasks are organized by the Association for Computational Linguistics (such as SemEval tasks), the National Institute of Standards and Technology via the Text REtrieval Conference (TREC) and the Text Analysis Conference (TAC), or the Conference and Labs of the Evaluation Forum (CLEF). In this third edition of the book, the authors added information about recent progress in NLP for social media applications, including more about the modern techniques provided by deep neural networks (DNNs) for modeling language and analyzing social media data.
Core-Task Design
This book focuses on design of work from the human-factors (HF) perspective. In the approach referred to as Core-Task Design (CTD), work is considered practice, composed of human actors, the physical and social environment, and the tools used for reaching the actors' objectives. This book begins with consideration of an industrial case, the modernization of a nuclear power plant automation system, and the related human-system interfaces in the control room. This case illustrates generic design dilemmas that invite one to revisit human-factors research methodology: Human factors should adopt practice as a new unit of analysis and should accept intervention as an inherent feature of its methodology. These suggestions are put into practice in the CTD approach, according to which three general design functions are performed, those being understand-to-generalize (empirical analysis of the work at hand), foresee-the-promise (creation of concepts for future work), and intervene-to-develop (participatory development and design of work). For fulfillment of each of the design functions, several CTD methods are introduced. The methods are aimed at modeling the core task and analyzing how the actors actually take the core task features into account in order to achieve balance between potentially conflicting demands in action. Thereby, new understanding of the core task is acquired. Further methods focus on projecting the roles and functionality of technologies in the future work and on implementing changes to the work. Specific studies of the nuclear power plant's control-room renewal constitute an example demonstrating a core task and the associated methods. We argue that the CTD approach offers clear utility for the design of future technology, work, and everyday services and environments. CTD utilizes achievements of practice theory in the social sciences to generate a creative synthesis of Cognitive Work Analysis, semiotic analysis of practice, and the cultural-historical theory of activity. Core-Task Design facilitates dialogue among human-factors experts, design engineers, and end users in their joint development of work. The intended audience of this book is students, researchers, and practitioners of human factors, industrial art and design, and instrumentation and control-system design. Table of Contents: Acknowledgments / Preface / Introduction / Core-Task Design Methodology / Understandings: How to Generalize from Empirical Enquiry about Actual Work / Foreseeing: How to Uncover the Promise of Solutions for Future Work / Intervening: How to Develop the Work System / Core-Task Design in Broader Perspective / Bibliography / Author Biographies
HCI Design Knowledge
This is the first of two books concerned with engineering design principles for Human-Computer Interaction-Engineering Design Principles (HCI-EDPs). The book presents the background for the companion volume. The background is divided into three parts and comprises--"HCI for EDPs," "HCI Design Knowledge for EDPs," and "HCI-EDPs--A Way Forward for HCI Design Knowledge." The companion volume reports in full the acquisition of initial HCI-EDPs in the domains of domestic energy planning and control and business-to-consumer electronic commerce (Long, Cummaford, and Stork, 2022, in press). The background includes the disciplinary basis for HCI-EDPs, a critique of, and the challenge for, HCI design knowledge in general. The latter is categorised into three types for the purposes in hand. These are craft artefacts and design practice experience, models and methods, and principles, rules, and heuristics. HCI-EDPs attempt to meet the challenge for HCI design knowledge by increasing the reliability of its fitness-for-purpose to support HCI design practice. The book proposes "instance-first/class-first" approaches to the acquisition of HCI-EDPs. The approaches are instantiated in two case studies, summarised here and reported in full in the companion volume. The book is for undergraduate students trying to understand the different kinds of HCI design knowledge, their varied and associated claims, and their potential for application to design practice now and in the future. The book also provides grounding for young researchers seeking to develop further HCI-EDPs in their own work.
Poisson Line Cox Process
This book provides a comprehensive treatment of the Poisson line Cox process (PLCP) and its applications to vehicular networks. The PLCP is constructed by placing points on each line of a Poisson line process (PLP) as per an independent Poisson point process (PPP). For vehicular applications, one can imagine the layout of the road network as a PLP and the vehicles on the roads as the points of the PLCP. First, a brief historical account of the evolution of the theory of PLP is provided to familiarize readers with the seminal contributions in this area. In order to provide a self-contained treatment of this topic, the construction and key fundamental properties of both PLP and PLCP are discussed in detail. The rest of the book is devoted to the applications of these models to a variety of wireless networks, including vehicular communication networks and localization networks. Specifically, modeling the locations of vehicular nodes and roadside units (RSUs) using PLCP, the signal-to-interference-plus-noise ratio (SINR)-based coverage analysis is presented for both ad hoc and cellular network models. For a similar setting, the load on the cellular macro base stations (MBSs) and RSUs in a vehicular network is also characterized analytically. For the localization networks, PLP is used to model blockages, which is shown to facilitate the characterization of asymptotic blind spot probability in a localization application. Finally, the path distance characteristics for a special case of PLCP are analyzed, which can be leveraged to answer critical questions in the areas of transportation networks and urban planning. The book is concluded with concrete suggestions on future directions of research. Based largely on the original research of the authors, this is the first book that specifically focuses on the self-contained mathematical treatment of the PLCP. The ideal audience of this book is graduate students as well as researchers in academia and industry who are familiar with probability theory, have some exposure to point processes, and are interested in the field of stochastic geometry and vehicular networks. Given the diverse backgrounds of the potential readers, the focus has been on providing an accessible and pedagogical treatment of this topic by consciously avoiding the measure theoretic details without compromising mathematical rigor.
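The construction summarized above (a PLP for the roads, with an independent PPP of points on each line for the vehicles) can be sketched with a short simulation. The following Python snippet is an editor's illustration rather than code from the book; the intensity conventions and parameter names are assumptions, and each line is parameterized by its orientation and its signed distance from the origin.

# Illustrative simulation of a Poisson line Cox process (PLCP) in a disc of radius r.
# Not code from the book; parameter names and intensity conventions are the editor's own.
import numpy as np

def simulate_plcp(r=10.0, mu=0.1, lam=0.5, rng=None):
    """mu: intensity of the line process on its (angle, offset) parameter strip;
    lam: intensity (points per unit length) of the 1D PPP on each line."""
    rng = rng or np.random.default_rng()
    points = []
    # Lines hitting the disc correspond to parameter pairs (theta, rho) with |rho| <= r.
    n_lines = rng.poisson(mu * np.pi * 2 * r)
    for _ in range(n_lines):
        theta = rng.uniform(0.0, np.pi)      # line orientation
        rho = rng.uniform(-r, r)             # signed distance of the line from the origin
        half_chord = np.sqrt(r * r - rho * rho)
        # Independent 1D PPP along the chord of this line inside the disc.
        n_pts = rng.poisson(lam * 2 * half_chord)
        t = rng.uniform(-half_chord, half_chord, size=n_pts)
        x = rho * np.cos(theta) - t * np.sin(theta)
        y = rho * np.sin(theta) + t * np.cos(theta)
        points.extend(zip(x, y))
    return points

print(len(simulate_plcp()))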
Toward Engineering Design Principles for HCI
This is the second of two books by the authors about engineering design principles for human-computer interaction (HCI-EDPs). The books report research that takes an HCI engineering discipline approach to acquiring initial such principles. Together, they identify best-practice HCI design knowledge for acquiring HCI-EDPs. This book specifically reports two case studies of the acquisition of initial such principles in the domains of domestic energy planning and control and business-to-consumer electronic commerce. The book begins by summarising the earlier volume, sufficient for readers to understand the case studies reported in full here. The themes, concepts, and ideas developed in both books concern HCI design knowledge, a critique thereof, and the related challenge. The latter is expressed as the need for HCI design knowledge to increase its fitness-for-purpose to support HCI design practice more effectively. HCI-EDPs are proposed here as one response to that challenge, and the book presents case studies of the acquisition of initial HCI-EDPs, including an introduction; two development cycles; and presentation and assessment for each. Carry forward of the HCI-EDP progress is also identified. The book adopts a discipline approach framework for HCI and an HCI engineering discipline framework for HCI-EDPs. These approaches afford design knowledge that supports "specify then implement" design practices. Acquisition of the initial EDPs applies current best-practice design knowledge in the form of "specify, implement, test, and iterate" design practices. This can be used similarly to acquire new HCI-EDPs. Strategies for developing HCI-EDPs are proposed together with conceptions of human-computer systems, required for conceptualisation and operationalisation of their associated design problems and design solutions. This book is primarily for postgraduate students and young researchers wishing to develop further the idea of HCI-EDPs and other more reliable HCI design knowledge. It is structured to support both the understanding and the operationalisation of HCI-EDPs, as required for their acquisition, their long-term potential contribution to HCI design knowledge, and their ultimate application to design practice.
Constructing Knowledge Art
This book is about how people (we refer to them as practitioners) can help guide participants in creating representations of issues or ideas, such as collaborative diagrams, especially in the context of Participatory Design (PD). At their best, such representations can reach a very high level of expressiveness and usefulness, an ideal we refer to as Knowledge Art. Achieving that level requires effective engagement, often aided by facilitators or other practitioners. Most PD research focuses on tools and methods, or on participant experience. The next source of advantage is to better illuminate the role of practitioners--the people working with participants, tools, and methods in service of a project's larger goals. Just like participants, practitioners experience challenges, interactions, and setbacks, and come up with creative ways to address them while maintaining their stance of service to participants and stakeholders. Our research interest is in understanding what moves and choices practitioners make that either help or hinder participants' engagement with representations. We present a theoretical framework that looks at these choices from the experiential perspectives of narrative, aesthetics, ethics, sensemaking, and improvisation, and apply it to five diverse case studies of actual practice. Table of Contents: Acknowledgments / Introduction / Participatory Design and Representational Practice / Dimensions of Knowledge Art / Case Studies / Discussion and Conclusions / Appendix: Knowledge Art Analytics / Bibliography / Author Biographies
Human-Computer Interactions in Museums
Museums have been a domain of study and design intervention for Human-Computer Interaction (HCI) for several decades. However, while resources providing overviews on the key issues in the scholarship have been produced in the fields of museum and visitor studies, no such resource yet exists within HCI. This book fills this gap and covers key issues regarding the study and design of HCIs in museums. Through an on-site focus, the book examines how digital interactive technologies impact and shape galleries, exhibitions, and their visitors. It consolidates the body of work in HCI conducted in the heritage field and integrates it with insights from related fields and from digital heritage practice. Processes of HCI design and evaluation approaches for museums are also discussed. This book draws from the authors' extensive knowledge of case studies as well as from their own work to provide examples, reflections, and illustrations of relevant concepts and problems. This book is designed for students and early career researchers in HCI or Interaction Design, for more seasoned investigators who might approach the museum domain for the first time, and for researchers and practitioners in related fields such as heritage and museum studies or visitor studies. Designers who might wish to understand the HCI perspective on visitor-facing interactive technologies may also find this book useful.
Ultrasound Mid-Air Haptics for Touchless Interfaces
Over the last decade, ultrasound mid-air haptic technology has emerged and rapidly advanced to engage multidisciplinary scientific communities within and adjacent to the haptics and HCI fields. Additionally, this haptic technology has been adopted by a number of industry sectors (e.g., automotive, virtual reality, digital signage, neuroscience research) that appear keen to exploit its unique value proposition: the ability to deliver rich haptic sensations from a distance, without the need to touch, wear, or hold anything, in order to enhance touchless interfaces, novel applications, and experiences. This book is the first, and currently the only one, that provides a comprehensive description of the technology, encapsulating almost all aspects relating to electronic prototyping, acoustics, haptics, psychology and perception, user experience and end-user HCI applications. Through its 18 chapters written by 30 expert co-authors, this book is therefore an excellent introduction to the technology for anyone coming from any of those fields. Specifically, the reader will benefit by getting a unique and multi-dimensional perspective on the state-of-the-art of this enabling haptic technology while also understanding its history, relevant best research practices, and an overview of the various open challenges and opportunities.
BATS Codes
This book discusses an efficient random linear network coding scheme, called BATched Sparse code, or BATS code, which is proposed for communication through multi-hop networks with packet loss. Multi-hop wireless networks have applications in the Internet of Things (IoT), space, and under-water network communications, where the packet loss rate per network link is high and feedback is subject to long delays and unreliability. Traditional schemes like retransmission and fountain codes are not sufficient to resolve the packet loss, so the existing communication solutions for multi-hop wireless networks have either long delay or low throughput when the network is longer than a few hops. These issues can be resolved by employing network coding in the network, but the high computational and storage costs of such schemes prohibit their implementation in many devices, in particular, IoT devices that typically have low computational power and very limited storage. A BATS code consists of an outer code and an inner code. As a matrix generalization of a fountain code, the outer code generates a potentially unlimited number of batches, each of which consists of a certain number (called the batch size) of coded packets. The inner code comprises (random) linear network coding at the intermediate network nodes, which is applied on packets belonging to the same batch. When the batch size is 1, the outer code reduces to an LT code (or a Raptor code if a precode is applied), and network coding of the batches reduces to packet forwarding. BATS codes preserve the salient features of fountain codes, in particular, their rateless property and low encoding/decoding complexity. BATS codes also achieve the throughput gain of random linear network coding. This book focuses on the fundamental features and performance analysis of BATS codes, and includes some guidelines and examples on how to design a network protocol using BATS codes.
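The outer/inner structure described above can be sketched compactly. The following Python snippet is an editor's toy illustration, not code from the book: it works over GF(2) (plain XOR) instead of the larger finite fields used in practice, takes the batch degree as a fixed parameter rather than drawing it from a designed degree distribution, and uses invented function names. It generates one batch from a few input packets (outer code) and then recodes that batch at an intermediate node with random linear combinations restricted to the batch (inner code).

# Toy sketch of BATS encoding over GF(2) (XOR); real systems use larger fields
# and carefully designed degree distributions. Names and parameters are illustrative.
import random

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def outer_encode_batch(input_packets, batch_size, degree):
    """Outer code: pick `degree` input packets and mix them with a random
    degree x batch_size binary generator matrix to form one batch."""
    chosen = random.sample(input_packets, degree)
    packet_len = len(input_packets[0])
    batch = []
    for _ in range(batch_size):
        coded = bytes(packet_len)
        for pkt in chosen:
            if random.getrandbits(1):          # random GF(2) coefficient
                coded = xor(coded, pkt)
        batch.append(coded)
    return batch

def inner_recode(batch, n_out):
    """Inner code: an intermediate node emits random linear combinations of
    packets belonging to the same batch (random linear network coding)."""
    out = []
    for _ in range(n_out):
        coded = bytes(len(batch[0]))
        for pkt in batch:
            if random.getrandbits(1):
                coded = xor(coded, pkt)
        out.append(coded)
    return out

# Usage: 16 input packets of 8 bytes each, batch size 4, degree 3.
inputs = [random.randbytes(8) for _ in range(16)]
batch = outer_encode_batch(inputs, batch_size=4, degree=3)
forwarded = inner_recode(batch, n_out=4)
print(len(batch), len(forwarded))

The sketch covers only encoding and recoding at a relay; decoding, degree-distribution design, and performance analysis are the subject of the book.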
Semantic Relations Between Nominals, Second Edition
Opportunity and Curiosity find similar rocks on Mars. One can generally understand this statement if one knows that Opportunity and Curiosity are instances of the class of Mars rovers, and recognizes that, as signalled by the word on, rocks are located on Mars. Two mental operations contribute to understanding: recognize how entities/concepts mentioned in a text interact and recall already known facts (which often themselves consist of relations between entities/concepts). Concept interactions one identifies in the text can be added to the repository of known facts, and aid the processing of future texts. The amassed knowledge can assist many advanced language-processing tasks, including summarization, question answering and machine translation. Semantic relations are the connections we perceive between things which interact. The book explores two, now intertwined, threads in semantic relations: how they are expressed in texts and what role they play in knowledge repositories. A historical perspective takes us back more than 2000 years to their beginnings, and then to developments much closer to our time: various attempts at producing lists of semantic relations, necessary and sufficient to express the interaction between entities/concepts. A look at relations outside context, then in general texts, and then in texts in specialized domains, has gradually brought new insights, and led to essential adjustments in how the relations are seen. At the same time, datasets which encompass these phenomena have become available. They started small, then grew somewhat, then became truly large. The large resources are inevitably noisy because they are constructed automatically. The available corpora--to be analyzed, or used to gather relational evidence--have also grown, and some systems now operate at the Web scale. The learning of semantic relations has proceeded in parallel, in adherence to supervised, unsupervised or distantly supervised paradigms. Detailed analyses of annotated datasets in supervised learning have granted insights useful in developing unsupervised and distantly supervised methods. These in turn have contributed to the understanding of what relations are and how to find them, and that has led to methods scalable to Web-sized textual data. The size and redundancy of information in very large corpora, which at first seemed problematic, have been harnessed to improve the process of relation extraction/learning. The newest technology, deep learning, supplies innovative and surprising solutions to a variety of problems in relation learning. This book aims to paint a big picture and to offer interesting details.
From Net Neutrality to ICT Neutrality
This book discusses the pros and cons of information and communication technology (ICT) neutrality. It tries to be as objective as possible in presenting the arguments of proponents and opponents, enabling readers to form their own opinions. It presents the history of the ongoing network neutrality debate, the various concepts it encompasses, and also some mathematical developments illustrating optimal strategies and potential counter-intuitive results, then extends the discussion to connected ICT domains. The book thus touches on issues related to history, economics, law, networking, and mathematics. After an introductory chapter on the history of the topic, Chapter 2 surveys and compares the various laws in place worldwide and discusses some implications of heterogeneous rules in several regions. Next, Chapter 3 details the arguments put forward by the participants of the net neutrality debate. Chapter 4 then presents how the impact of neutral or non-neutral behaviors can be analyzed mathematically, with sometimes counter-intuitive results, and emphasizes the interest of modeling to avoid bad decisions. Chapter 5 illustrates that content providers may not always be on the pro-neutrality side, as there are situations where they may have an economic advantage in a non-neutral situation, e.g., when they are leaders in a market and create barriers to entry for competitors. Another related issue is covered in Chapter 6, which discusses existing ways for ISPs to circumvent the packet-based rules and behave non-neutrally without breaking the written law. Chapter 7 gives more insight on the role and possible non-neutral behavior of search engines, leading to another debate called the search neutrality debate. Chapter 8 focuses on e-commerce platforms and social networks, and investigates how they can influence users' actions and opinions. This issue is linked to the debate on the transparency of algorithms, which is especially active in Europe. Chapter 9 focuses on enforcing neutrality in practice through measurements: indeed, setting rules requires monitoring the activity of ICT actors in order to sanction inappropriate behaviors and be proactive against new conduct. The chapter explains why this is challenging and what tools are currently available. Finally, Chapter 10 briefly concludes the presentation and opens the debate.
Statistical Relational Artificial Intelligence
An intelligent agent interacting with the real world will encounter individual people, courses, test results, drug prescriptions, chairs, boxes, etc., and needs to reason about properties of these individuals and relations among them as well as cope with uncertainty. Uncertainty has been studied in probability theory and graphical models, and relations have been studied in logic, in particular in the predicate calculus and its extensions. This book examines the foundations of combining logic and probability into what are called relational probabilistic models. It introduces representations, inference, and learning techniques for probability, logic, and their combinations. The book focuses on two representations in detail: Markov logic networks, a relational extension of undirected graphical models and weighted first-order predicate calculus formulas, and ProbLog, a probabilistic extension of logic programs that can also be viewed as a Turing-complete relational extension of Bayesian networks.
Predicting Human Decision-Making
Human decision-making often transcends our formal models of "rationality." Designing intelligent agents that interact proficiently with people necessitates the modeling of human behavior and the prediction of their decisions. In this book, we explore the task of automatically predicting human decision-making and its use in designing intelligent human-aware automated computer systems of varying natures--from purely conflicting interaction settings (e.g., security and games) to fully cooperative interaction settings (e.g., autonomous driving and personal robotic assistants). We explore the techniques, algorithms, and empirical methodologies for meeting the challenges that arise from the above tasks and illustrate major benefits from the use of these computational solutions in real-world application domains such as security, negotiations, argumentative interactions, voting systems, autonomous driving, and games. The book presents both the traditional and classical methods as well as the most recent and cutting edge advances, providing the reader with a panorama of the challenges and solutions in predicting human decision-making.
Path Planning and Tracking for Vehicle Collision Avoidance in Lateral and Longitudinal Motion Directions
In recent years, the control of Connected and Automated Vehicles (CAVs) has attracted strong attention for various automotive applications. One of the important features demanded of CAVs is collision avoidance, whether the obstacle is stationary or moving. Due to complex traffic conditions and various vehicle dynamics, the collision avoidance system should ensure that the vehicle can avoid collision with other vehicles or obstacles in longitudinal and lateral directions simultaneously. The longitudinal collision avoidance controller can avoid or mitigate vehicle collision accidents effectively via Forward Collision Warning (FCW), Brake Assist System (BAS), and Autonomous Emergency Braking (AEB), which have been commercially applied in many new vehicles launched by automobile enterprises. But in the lateral motion direction, it is necessary to determine a flexible collision avoidance path in real time in case of detecting any obstacle. Then, a path-tracking algorithm is designed to ensure that the vehicle will follow the predetermined path precisely, while guaranteeing certain comfort and vehicle stability over a wide range of velocities. In recent years, the rapid development of sensor, control, and communication technology has brought both possibilities and challenges to the improvement of vehicle collision avoidance capability, so collision avoidance systems still need to be further studied based on the emerging technologies. In this book, we provide a comprehensive overview of the current collision avoidance strategies for traditional vehicles and CAVs. First, the book introduces some emergency path planning methods that can be applied in global route design and local path generation situations, which are the most common scenarios in driving. A comparison is made between the conventional algorithms and the emergency methods in terms of both timing and performance on the path-planning problem. In addition, this book introduces and designs an up-to-date path-planning method based on artificial potential field methods for collision avoidance, and verifies the effectiveness of this method in complex road environments. Next, in order to accurately track the predetermined path for collision avoidance, traditional control methods, humanlike control strategies, and intelligent approaches are discussed to solve the path-tracking problem and ensure the vehicle successfully avoids collisions. In addition, this book designs and applies robust control to solve the path-tracking problem and verify its tracking effect in different scenarios. Finally, this book introduces the basic principles and test methods of the AEB system for collision avoidance of a single vehicle. Meanwhile, by taking advantage of data sharing between vehicles based on V2X (vehicle-to-vehicle or vehicle-to-infrastructure) communication, pile-up accidents in the longitudinal direction are effectively avoided through cooperative motion control of multiple vehicles.
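As a rough illustration of the artificial-potential-field idea mentioned above for local path generation, the following Python sketch combines an attractive force toward the goal with repulsive forces from nearby obstacles and follows the resulting force direction step by step. It is a generic textbook-style formulation supplied by the editor, not the specific method designed in this book, and the gains, influence distance, and step size are arbitrary.

# Generic artificial potential field sketch for local path planning
# (illustrative only; gains, thresholds, and step size are arbitrary).
import numpy as np

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=5.0):
    """Return the next position after one step along the combined
    attractive + repulsive force field."""
    # Attractive force: pulls the vehicle straight toward the goal.
    force = k_att * (goal - pos)
    # Repulsive forces: act only within the influence distance d0 of each obstacle.
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if 0.0 < d < d0:
            force += k_rep * (1.0 / d - 1.0 / d0) / (d ** 3) * diff
    step = 0.1 * force / (np.linalg.norm(force) + 1e-9)   # fixed-length step
    return pos + step

# Usage: plan from (0, 0) to (20, 0) past an obstacle near the straight-line path.
pos, goal = np.array([0.0, 0.0]), np.array([20.0, 0.0])
obstacles = [np.array([10.0, 0.5])]
path = [pos]
for _ in range(300):
    pos = apf_step(pos, goal, obstacles)
    path.append(pos)
    if np.linalg.norm(goal - pos) < 0.2:
        break
print(len(path), path[-1])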
Lifelong Machine Learning, Second Edition
Lifelong Machine Learning, Second Edition is an introduction to an advanced machine learning paradigm that continuously learns by accumulating past knowledge that it then uses in future learning and problem solving. In contrast, the current dominant machine learning paradigm learns in isolation: given a training dataset, it runs a machine learning algorithm on the dataset to produce a model that is then used in its intended application. It makes no attempt to retain the learned knowledge and use it in subsequent learning. Unlike this isolated system, humans learn effectively with only a few examples precisely because our learning is very knowledge-driven: the knowledge learned in the past helps us learn new things with little data or effort. Lifelong learning aims to emulate this capability, because without it, an AI system cannot be considered truly intelligent. Research in lifelong learning has developed significantly in the relatively short time since the first edition of this book was published. The purpose of this second edition is to expand the definition of lifelong learning, update the content of several chapters, and add a new chapter about continual learning in deep neural networks--which has been actively researched over the past two or three years. A few chapters have also been reorganized to make each of them more coherent for the reader. Moreover, the authors want to propose a unified framework for the research area. Currently, there are several research topics in machine learning that are closely related to lifelong learning--most notably, multi-task learning, transfer learning, and meta-learning--because they also employ the idea of knowledge sharing and transfer. This book brings all these topics under one roof and discusses their similarities and differences. Its goal is to introduce this emerging machine learning paradigm and present a comprehensive survey and review of the important research results and latest ideas in the area. This book is thus suitable for students, researchers, and practitioners who are interested in machine learning, data mining, natural language processing, or pattern recognition. Lecturers can readily use the book for courses in any of these related fields.
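As a loose illustration of the knowledge-retention idea, the following Python sketch uses experience replay, one simple continual-learning mechanism (not necessarily the unified framework the book proposes): a linear classifier is trained on a second task while replaying a few stored examples from the first. The tasks, data, and hyperparameters are entirely synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_logreg(w, X, y, lr=0.1, epochs=50):
    """Full-batch gradient descent on the logistic loss; returns updated weights."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Two toy "tasks": clusters separated along different axes.
Xa = np.vstack([rng.normal([ 2, 0], 1.0, (100, 2)), rng.normal([-2, 0], 1.0, (100, 2))])
ya = np.concatenate([np.ones(100), np.zeros(100)])
Xb = np.vstack([rng.normal([0,  2], 1.0, (100, 2)), rng.normal([0, -2], 1.0, (100, 2))])
yb = np.concatenate([np.ones(100), np.zeros(100)])

w = train_logreg(np.zeros(2), Xa, ya)          # learn task A
keep = rng.choice(len(Xa), 20, False)          # retain a small memory of task A
w = train_logreg(w, np.vstack([Xb, Xa[keep]]),
                 np.concatenate([yb, ya[keep]]))   # learn task B with replay

acc_a = np.mean((Xa @ w > 0) == ya)            # knowledge of task A is partially preserved
print(f"accuracy on task A after learning task B with replay: {acc_a:.2f}")
```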
Answer Set Solving in Practice
Answer Set Programming (ASP) is a declarative problem solving approach, initially tailored to modeling problems in the area of Knowledge Representation and Reasoning (KRR). More recently, its attractive combination of a rich yet simple modeling language with high-performance solving capacities has sparked interest in many other areas even beyond KRR. This book presents a practical introduction to ASP, aiming at using ASP languages and systems for solving application problems. Starting from the essential formal foundations, it introduces ASP's solving technology, modeling language and methodology, while illustrating the overall solving process by practical examples. Table of Contents: List of Figures / List of Tables / Motivation / Introduction / Basic modeling / Grounding / Characterizations / Solving / Systems / Advanced modeling / Conclusions
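To give a flavor of the formal foundations, the self-contained Python sketch below enumerates the answer sets of a tiny normal logic program by brute force, using the Gelfond-Lifschitz reduct. Real ASP systems of the kind the book covers rely on far more sophisticated grounding and solving techniques; this is only a didactic illustration.

```python
from itertools import chain, combinations

# A normal logic program: each rule is (head, positive_body, negative_body).
# Encodes:  a :- not b.   b :- not a.   c :- a.
rules = [("a", [], ["b"]),
         ("b", [], ["a"]),
         ("c", ["a"], [])]
atoms = {a for h, pos, neg in rules for a in [h, *pos, *neg]}

def minimal_model(definite_rules):
    """Least model of a negation-free program via fixpoint iteration."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos, _ in definite_rules:
            if set(pos) <= model and head not in model:
                model.add(head)
                changed = True
    return model

def is_answer_set(candidate):
    """Gelfond-Lifschitz check: candidate equals the least model of its reduct."""
    reduct = [(h, pos, []) for h, pos, neg in rules if not (set(neg) & candidate)]
    return minimal_model(reduct) == candidate

subsets = chain.from_iterable(combinations(sorted(atoms), r) for r in range(len(atoms) + 1))
print([set(s) for s in subsets if is_answer_set(set(s))])   # the two answer sets: {b} and {a, c}
```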
Fieldwork for Healthcare
Conducting fieldwork for investigating technology use in healthcare is a challenging undertaking, and yet there is little in the way of community support and guidance for conducting these studies. There is a need for better knowledge sharing and resources to facilitate learning. This is the second of two volumes designed as a collective graduate guidebook for conducting fieldwork in healthcare. This volume brings together thematic chapters that draw out issues and lessons learned from practical experience. Researchers who have first-hand experience of conducting healthcare fieldwork collaborated to write these chapters. This volume contains insights, tips, and tricks from studies in clinical and non-clinical environments, from hospital to home. This volume starts with an introduction to the ethics and governance procedures a researcher might encounter when conducting fieldwork in this sensitive study area. Subsequent chapters address specific aspects of conducting situated healthcare research. Chapters on readying the researcher and relationships in the medical domain break down some of the complex social aspects of this type of research. They are followed by chapters on the practicalities of collecting data and implementing interventions, which focus on domain-specific issues that may arise. Finally, we close the volume by discussing the management of impact in healthcare fieldwork. The guidance contained in these chapters enables new researchers to form their project plans and also their contingency plans in this complex and challenging domain. For more experienced researchers, it offers advice and support through familiar stories and experiences. For supervisors and teachers, it offers a source of reference and debate. Together with the first volume, Fieldwork for Healthcare: Case Studies Investigating Human Factors in Computing Systems, these books provide a substantive resource on how to conduct fieldwork in healthcare. Table of Contents: Preface / Acknowledgments / Ethics, Governance, and Patient and Public Involvement in Healthcare / Readying the Researcher for Fieldwork in Healthcare / Establishing and Maintaining Relationships in Healthcare Fields / Practicalities of Data Collection in Healthcare Fieldwork / Healthcare Intervention Studies "In the Wild" / Impact of Fieldwork in Healthcare: Understanding Impact on Researchers, Research, Practice, and Beyond / References / Biographies
Designed Technologies for Healthy Aging
Designed Technologies for Healthy Aging identifies and presents a variety of contemporary technologies to support older adults' abilities to perform everyday activities. Efforts of industry, laboratories, and learning institutions are documented under four major categories: social connections, independent self-care, healthy home, and active lifestyle. The book contains well-documented and illustrative recent examples of designed technologies--ranging from wearable devices, to mobile applications, to assistive robots--drawn from the broad areas of design and computation, including industrial design, interaction design, graphic design, human-computer interaction, software engineering, and artificial intelligence.
Graph Representation Learning
Graph-structured data is ubiquitous throughout the natural and social sciences, from telecommunication networks to quantum chemistry. Building relational inductive biases into deep learning architectures is crucial for creating systems that can learn, reason, and generalize from this kind of data. Recent years have seen a surge in research on graph representation learning, including techniques for deep graph embeddings, generalizations of convolutional neural networks to graph-structured data, and neural message-passing approaches inspired by belief propagation. These advances in graph representation learning have led to new state-of-the-art results in numerous domains, including chemical synthesis, 3D vision, recommender systems, question answering, and social network analysis. This book provides a synthesis and overview of graph representation learning. It begins with a discussion of the goals of graph representation learning as well as key methodological foundations in graph theory and network analysis. Following this, the book introduces and reviews methods for learning node embeddings, including random-walk-based methods and applications to knowledge graphs. It then provides a technical synthesis and introduction to the highly successful graph neural network (GNN) formalism, which has become a dominant and fast-growing paradigm for deep learning with graph data. The book concludes with a synthesis of recent advancements in deep generative models for graphs--a nascent but quickly growing subset of graph representation learning.
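As a minimal sketch of the message-passing idea at the heart of this area, the Python fragment below has each node combine its own embedding with the mean of its neighbours' embeddings and apply a nonlinearity. The graph, feature dimensions, weights, and mean aggregator are illustrative assumptions rather than a specific model from the book.

```python
import numpy as np

def message_passing_layer(H, adj, W_self, W_neigh):
    """One round of neural message passing: aggregate the mean of neighbour
    embeddings, combine with the node's own embedding, and apply ReLU."""
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1.0)   # avoid division by zero
    neigh_mean = (adj @ H) / deg
    return np.maximum(0.0, H @ W_self + neigh_mean @ W_neigh)

# Toy graph: 4 nodes on a path, 3-dimensional inputs, 2-dimensional outputs.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 3))
H1 = message_passing_layer(H, adj, rng.normal(size=(3, 2)), rng.normal(size=(3, 2)))
print(H1.shape)   # (4, 2)
```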
Introduction to Symbolic Plan and Goal Recognition
Plan recognition, activity recognition, and goal recognition all involve making inferences about other actors based on observations of their interactions with the environment and other agents. This synergistic area of research combines, unites, and makes use of techniques and research from a wide range of areas including user modeling, machine vision, automated planning, intelligent user interfaces, human-computer interaction, autonomous and multi-agent systems, natural language understanding, and machine learning. It plays a crucial role in a wide variety of applications including assistive technology, software assistants, computer and network security, human-robot collaboration, natural language processing, video games, and many more. This wide range of applications and disciplines has produced a wealth of ideas, models, tools, and results in the recognition literature. However, it has also contributed to fragmentation in the field, with researchers publishing relevant results in a wide spectrum of journals and conferences. This book seeks to address this fragmentation by providing a high-level introduction and historical overview of the plan and goal recognition literature. It provides a description of the core elements that comprise these recognition problems and practical advice for modeling them. In particular, we define and distinguish the different recognition tasks. We formalize the major approaches to modeling these problems using a single motivating example. Finally, we describe a number of state-of-the-art systems and their extensions, future challenges, and some potential applications.
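One simple way to make the recognition task concrete is a naive Bayesian goal recognizer that updates a posterior over candidate goals as actions are observed. The goals, priors, and likelihoods below are entirely hypothetical, chosen only for illustration, and are not taken from the book's motivating example.

```python
import numpy as np

goals = ["make_coffee", "make_tea", "wash_cup"]
prior = np.array([0.4, 0.4, 0.2])
# likelihood[g][a] = assumed probability of observing action a when pursuing goal g
likelihood = {
    "make_coffee": {"pick_cup": 0.5, "boil_water": 0.4, "open_coffee_jar": 0.6},
    "make_tea":    {"pick_cup": 0.5, "boil_water": 0.5, "open_coffee_jar": 0.05},
    "wash_cup":    {"pick_cup": 0.6, "boil_water": 0.05, "open_coffee_jar": 0.01},
}

def recognize(observations):
    """Posterior over goals given a sequence of observed actions (naive Bayes)."""
    post = prior.copy()
    for act in observations:
        post *= np.array([likelihood[g].get(act, 1e-6) for g in goals])
    return dict(zip(goals, post / post.sum()))

print(recognize(["pick_cup", "boil_water", "open_coffee_jar"]))
```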
Probabilistic and Biologically Inspired Feature Representations
Under the title "Probabilistic and Biologically Inspired Feature Representations," this text collects a substantial amount of work on the topic of channel representations. Channel representations are a biologically motivated, wavelet-like approach to visual feature descriptors: they are local and compact, they form a computational framework, and the represented information can be reconstructed. The first property is shared with many histogram- and signature-based descriptors, the latter property with the related concept of population codes. In their unique combination of properties, channel representations become a visual Swiss army knife--they can be used for image enhancement, visual object tracking, as 2D and 3D descriptors, and for pose estimation. The chapters of this text introduce the framework of channel representations, elaborate its attributes, and give further insight into its probabilistic modeling and algorithmic implementation. Channel representations are a useful toolbox for representing visual information for machine learning, as they establish a generic way to compute popular descriptors such as HOG, SIFT, and SHOT. Even in an age of deep learning, they provide a good compromise between hand-designed descriptors and the a-priori structureless feature spaces seen in the layers of deep networks.
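A rough sketch of the channel-encoding idea follows. It uses cos² basis functions and a simple weighted-average decoding, which are common choices in the channel representation literature but should be read as assumptions rather than this text's exact formulation.

```python
import numpy as np

def channel_encode(x, centers, width):
    """Encode a scalar into overlapping cos^2 channels (zero outside the support)."""
    d = np.abs(x - centers)
    return np.where(d < width, np.cos(np.pi * d / (2 * width)) ** 2, 0.0)

def channel_decode(c, centers):
    """Simple decoding: channel-weighted average of the channel centers."""
    return float((c @ centers) / c.sum())

centers = np.linspace(0.0, 10.0, 11)            # channel centers at 0, 1, ..., 10
c = channel_encode(3.3, centers, width=1.5)
print(c.round(2), channel_decode(c, centers))   # reconstruction close to 3.3
```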
Introduction to Graph Neural Networks
Graphs are useful data structures in complex real-life applications such as modeling physical systems, learning molecular fingerprints, controlling traffic networks, and recommending friends in social networks. However, these tasks require dealing with non-Euclidean graph data that contains rich relational information between elements and cannot be well handled by traditional deep learning models (e.g., convolutional neural networks (CNNs) or recurrent neural networks (RNNs)). Nodes in graphs usually contain useful feature information that cannot be well addressed in most unsupervised representation learning methods (e.g., network embedding methods). Graph neural networks (GNNs) are proposed to combine the feature information and the graph structure to learn better representations on graphs via feature propagation and aggregation. Due to their convincing performance and high interpretability, GNNs have recently become a widely applied graph analysis tool. This book provides a comprehensive introduction to the basic concepts, models, and applications of graph neural networks. It starts with the introduction of the vanilla GNN model. Then several variants of the vanilla model are introduced, such as graph convolutional networks, graph recurrent networks, graph attention networks, graph residual networks, and several general frameworks. Variants for different graph types and advanced training methods are also included. As for the applications of GNNs, the book categorizes them into structural, non-structural, and other scenarios, and then introduces several typical models for solving these tasks. Finally, the closing chapters provide GNN open resources and an outlook on several future directions.
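As a small illustration of one such variant, the sketch below implements a single graph convolutional (GCN-style) propagation step with symmetric normalization. The graph, feature sizes, and weights are arbitrary placeholders, and this is only one of the many variants the book surveys.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
    return np.maximum(0.0, A_norm @ H @ W)

A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)             # tiny 3-node graph
rng = np.random.default_rng(1)
H = rng.normal(size=(3, 4))                        # 4-dimensional node features
print(gcn_layer(A, H, rng.normal(size=(4, 2))).shape)   # (3, 2)
```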
Body Tracking in Healthcare
Within the context of healthcare, there has been a long-standing interest in understanding the posture and movement of the human body. Gait analysis work over the years has looked to articulate the patterns and parameters of this movement, both for a normal healthy body and in a range of movement-based disorders. In recent years, these efforts to understand the moving body have been transformed by significant advances in sensing technologies and computational analysis techniques, all offering new ways for the moving body to be tracked, measured, and interpreted. While much of this work has been largely research focused, as the field matures, we are seeing more shifts into clinical practice. As a consequence, there is an increasing need to understand these sensing technologies over and above their specific capabilities to track, measure, and infer patterns of movement in themselves. Rather, there is an imperative to understand how the material form of these technologies enables them also to be situated in everyday healthcare contexts and practices. There are significant mutually interdependent ties between the fundamental characteristics and assumptions of these technologies and the configurations of everyday collaborative practice that they make possible. Our attention then must turn to the social, clinical, and technical relations pertaining to these various body technologies, which may play out in particular ways across a range of different healthcare contexts and stakeholders. Our aim in this book is to explore these issues with key examples illustrating how social contexts of use relate to the properties and assumptions bound up in particular choices of body-tracking technology. We do this through a focus on three core application areas in healthcare--assessment, rehabilitation, and surgical interaction--and recent efforts to apply body-tracking technologies to them.
Adversarial Machine Learning
The increasing abundance of large, high-quality datasets, combined with significant technical advances over the last several decades, has made machine learning a major tool employed across a broad array of tasks including vision, language, finance, and security. However, success has been accompanied by important new challenges: many applications of machine learning are adversarial in nature. Some are adversarial because they are safety critical, such as autonomous driving. An adversary in these applications can be a malicious party aiming to cause congestion or accidents, or may even model unusual situations that expose vulnerabilities in the prediction engine. Other applications are adversarial because the task itself, or the data it relies on, is adversarial. For example, an important class of problems in security involves detection, such as malware, spam, and intrusion detection. The use of machine learning for detecting malicious entities creates an incentive among adversaries to evade detection by changing their behavior or the content of the malicious objects they develop. The field of adversarial machine learning has emerged to study vulnerabilities of machine learning approaches in adversarial settings and to develop techniques that make learning robust to adversarial manipulation. This book provides a technical overview of this field. After reviewing machine learning concepts and approaches, as well as common use cases of these in adversarial settings, we present a general categorization of attacks on machine learning. We then address two major categories of attacks and associated defenses: decision-time attacks, in which an adversary changes the nature of instances seen by a learned model at the time of prediction in order to cause errors, and poisoning or training-time attacks, in which the actual training dataset is maliciously modified. In our final chapter devoted to technical content, we discuss recent techniques for attacks on deep learning, as well as approaches for improving the robustness of deep neural networks. We conclude with a discussion of several important issues in the area of adversarial learning that in our view warrant further research. Given the increasing interest in the area of adversarial machine learning, we hope this book provides readers with the tools necessary to successfully engage in research and practice of machine learning in adversarial settings.
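To ground the notion of a decision-time attack, here is a hedged sketch of a fast-gradient-sign perturbation against a simple logistic-regression "detector." The weights and the sample are made up, and the attacks and defenses discussed in the book are considerably more involved; this only illustrates the basic mechanism of nudging an input in the direction that increases the model's loss.

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """Decision-time (evasion) attack: move x by eps in the direction that
    increases the logistic loss for its true label y (Fast Gradient Sign Method)."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    grad_x = (p - y) * w                  # gradient of the logistic loss w.r.t. x
    return x + eps * np.sign(grad_x)

# Hypothetical linear "detector" (w, b) and a malicious sample x with label y = 1.
w, b = np.array([1.5, -2.0, 0.5]), -0.2
x, y = np.array([1.0, -0.5, 0.3]), 1
x_adv = fgsm_perturb(x, w, b, y, eps=0.8)
score = lambda v: 1.0 / (1.0 + np.exp(-(w @ v + b)))
print(f"detector score before: {score(x):.2f}, after: {score(x_adv):.2f}")   # score drops
```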
Fieldwork for Healthcare
Performing fieldwork in healthcare settings is significantly different from fieldwork in other domains and it presents unique challenges to researchers. Whilst results are reported in research papers, the details of how to actually perform these fieldwork studies are not. This is the first of two volumes designed as a collective graduate guidebook for conducting fieldwork in healthcare. This volume brings together the experiences of established researchers who do fieldwork in clinical and non-clinical settings, focusing on how people interact with healthcare technology, in the form of case studies. These case studies are all personal, reflective accounts of challenges faced and lessons learned, which future researchers might also learn from. We open with an account of studies in the Operating Room, focusing on the role of the researcher, and how participants engage and resist engaging with the research process. Subsequent case studies address themes in a variety of hospital settings, which highlight the variability that is experienced across study settings and the importance of context in shaping what is possible when conducting research in hospitals. Recognising and dealing with emotions, strategies for gaining access, and data gathering are themes that pervade the studies. Later case studies introduce research involving collaborative design and intervention studies, which seek to have an immediate impact on practice. Mental health is a theme of two intervention studies as we move out of the hospital to engage with vulnerable participants suffering from long-term conditions and people in the home. This volume closes with an intervention study in the developing world that ends with some tips for conducting studies in healthcare. Such tips are synthesised through the thematic chapters presented in the companion volume.
Computational Texture and Patterns
Visual pattern analysis is a fundamental tool in mining data for knowledge. Computational representations for patterns and texture allow us to summarize, store, compare, and label in order to learn about the physical world. Our ability to capture visual imagery with cameras and sensors has resulted in vast amounts of raw data, but using this information effectively in a task-specific manner requires sophisticated computational representations. We enumerate specific desirable traits for these representations: (1) intraclass invariance--to support recognition; (2) illumination and geometric invariance for robustness to imaging conditions; (3) support for prediction and synthesis to use the model to infer continuation of the pattern; (4) support for change detection to detect anomalies and perturbations; and (5) support for physics-based interpretation to infer system properties from appearance. In recent years, computer vision has undergone a metamorphosis, with classic algorithms adapting to new trends in deep learning. This text provides a tour of algorithm evolution including pattern recognition, segmentation, and synthesis. We consider the general relevance and prominence of visual pattern analysis and applications that rely on computational models.
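For a concrete, if classic, example of a hand-designed texture representation, the sketch below computes a Local Binary Pattern histogram; LBP is used here purely as an illustration of a computational texture descriptor and is not necessarily a method singled out by this text.

```python
import numpy as np

def lbp_histogram(img):
    """8-neighbour Local Binary Pattern histogram: each pixel's code records which
    neighbours are at least as bright as the pixel; the histogram summarizes texture."""
    center = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= (neigh >= center).astype(np.uint8) << bit
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()

rng = np.random.default_rng(0)
texture = rng.integers(0, 256, (64, 64))     # stand-in for a texture patch
print(lbp_histogram(texture)[:8])            # first few normalized bin counts
```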
Case-Based Reasoning
Case-based reasoning is a methodology with a long tradition in artificial intelligence that brings together reasoning and machine learning techniques to solve problems based on past experiences or cases. Given a problem to be solved, reasoning involves the use of methods to retrieve similar past cases in order to reuse their solution for the problem at hand. Once the problem has been solved, learning methods can be applied to improve the knowledge based on past experiences. In spite of being a broad methodology applied in industry and services, case-based reasoning has often been forgotten in both artificial intelligence and machine learning books. The aim of this book is to present a concise introduction to case-based reasoning providing the essential building blocks for the design of case-based reasoning systems, as well as to bring together the main research lines in this field to encourage students to solve current CBR challenges.
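A minimal retrieve-and-reuse loop can be sketched in a few lines of Python. The case base, features, and solutions below are hypothetical, the similarity measure is a plain Euclidean distance, and a full CBR system would add revision and retention steps beyond this sketch.

```python
import numpy as np

# Hypothetical case base: each case pairs a problem description (feature vector)
# with the solution that worked for it.
case_base = [
    {"problem": np.array([70.0, 1.2, 0.0]), "solution": "replace_filter"},
    {"problem": np.array([95.0, 3.5, 1.0]), "solution": "shut_down_and_inspect"},
    {"problem": np.array([60.0, 0.8, 0.0]), "solution": "no_action"},
]

def retrieve(query, k=1):
    """Retrieve the k most similar past cases (Euclidean distance as similarity)."""
    ranked = sorted(case_base, key=lambda c: np.linalg.norm(c["problem"] - query))
    return ranked[:k]

def reuse(query):
    """Reuse: adopt the solution of the nearest retrieved case."""
    return retrieve(query, k=1)[0]["solution"]

new_problem = np.array([92.0, 3.1, 1.0])
print(reuse(new_problem))    # -> "shut_down_and_inspect"
# Retain step (not shown): after verifying the outcome, the solved case would be
# added back to case_base so future retrievals can benefit from it.
```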