This book discusses subjective ratings of the quality of unknown voices and dialog partners, and listeners' preferences for them - their likability, for example. Natural human and artificial voices are studied in both passive listening and interactive scenarios. The book presents the background, the state of research, and contributions to the assessment and prediction of the talker quality that is constituted in voice perception and in dialog. Starting from theories and empirical findings on human interaction, major results and approaches are transferred to the domain of human-computer interaction (HCI). The main objective is to contribute to the evaluation of spoken interaction among humans and between humans and computers, and in particular to the quality subsequently attributed to the speaking system or person on the basis of the listening and interactive experience. The book provides a comprehensive overview of research on the evaluation of speakers and dialog partners, presents recent results on the relevance of a first passive and interactive impression, and includes human and HCI evaluation results from a communicative perspective.
This book studies the motivation of crowdworkers in order to find out how to attract more workers and achieve higher-quality outcomes. It first proposes a taxonomy for studying crowdworker motivation, covering potential influencing factors, different types of motivation, and the possible consequences and outcomes related to motivation. Next, the CWMS questionnaire, an instrument for measuring the underlying motivation of crowdworkers, is developed. It considers the dimensions of motivation suggested by Self-Determination Theory, a well-established and empirically validated psychological theory used in various domains. This instrument can be used to study the effect of platform and user characteristics on the general motivation of crowdworkers. The task-specific motivation of crowdworkers is then studied in detail: influencing factors are investigated, subjective methods for measuring them are evaluated, a model for predicting a worker's decision to take a task is proposed, the relative importance of different factors is compared for two populations of crowdworkers, and finally, a model is proposed that predicts the expected workload (one of the major influencing factors) from the task design.
This book presents a new approach to examining the perceived quality of audiovisual sequences. It uses electroencephalography (EEG) to explain in detail how user quality judgments are formed within a test participant, and what the physiological implications might be when subjects are exposed to lower-quality media. The book redefines the experimental paradigms of using EEG in the area of quality assessment so that they better suit the requirements of standard subjective quality testing, and presents experimental protocols and stimuli that have been adjusted accordingly.
This book investigates the susceptibility of intrinsic physically unclonable function (PUF) implementations on reconfigurable hardware to optical semi-invasive attacks from the chip backside. It explores different classes of optical attacks, particularly photonic emission analysis, laser fault injection, and optical contactless probing. By applying these techniques, the book demonstrates that the secrets generated by a PUF can be predicted, manipulated or directly probed without affecting the behavior of the PUF. It subsequently discusses the cost and feasibility of launching such attacks against the very latest hardware technologies in a real scenario. The author discusses why PUFs are not tamper-evident in their current configuration, and therefore, PUFs alone cannot raise the security level of key storage. The author then reviews the potential and already implemented countermeasures, which can remedy PUFs' security-related shortcomings and make them resistant to optical side-channel and optical fault attacks. Lastly, by making selected modifications to the functionality of an existing PUF architecture, the book presents a prototype tamper-evident sensor for detecting optical contactless probing attempts.
This book discusses the fusion of mobile and WiFi network data with semantic technologies and diverse context sources for offering semantically enriched context-aware services in the telecommunications domain. It presents the OpenMobileNetwork as a platform for providing estimated and semantically enriched mobile and WiFi network topology data using the principles of Linked Data. This platform is based on the OpenMobileNetwork Ontology, a set of network context ontology facets that describe mobile network cells as well as WiFi access points from a topological perspective and geographically relate their coverage areas to other context sources. The book also introduces Linked Crowdsourced Data and its corresponding Context Data Cloud Ontology, a crowdsourced dataset combining static location data with dynamic context information. Linked Crowdsourced Data supports the OpenMobileNetwork by providing the context data richness necessary for more sophisticated semantically enriched context-aware services. Various application scenarios and proof-of-concept services, as well as two separate evaluations, are part of the book. As the usability of the provided services closely depends on the quality of the approximated network topologies, the book compares the estimated positions of mobile network cells within the OpenMobileNetwork to a small set of real-world cell positions. The results confirm that the OpenMobileNetwork provides a solid and accurate network topology dataset for context-aware services to build on. The book also evaluates the performance of the exemplary Semantic Tracking and Semantic Geocoding services, verifying the applicability and added value of semantically enriched mobile and WiFi network data.
This book presents a new diagnostic methodology for assessing the quality of conversational telephone speech. A conversation is separated into three conversational phases (listening, speaking, and interaction), and corresponding perceptual dimensions are identified for each phase. A new analytic test method allows dimension ratings to be gathered from non-expert test subjects in a direct way. The identified perceptual dimensions and the new test method are validated in two sophisticated conversational experiments. The dimension scores gathered with the new test method are used to determine the quality of each conversational phase, and the qualities of the three phases are in turn combined into an overall model of conversational quality. This fundamental research forms the basis for the development of a preliminary new instrumental diagnostic model of conversational quality. The multidimensional analysis of conversational telephone speech is a major step towards the in-depth analysis of conversational speech quality for the diagnosis and optimization of telecommunication systems.
This book addresses machine learning (ML) attacks on integrated circuits that employ physically unclonable functions (PUFs). It provides mathematical proofs of the vulnerability of various PUF families, including Arbiter, XOR Arbiter, ring-oscillator, and bistable ring PUFs, to ML attacks. To this end, it develops a generic framework for the assessment of these PUFs based on two main approaches. First, with regard to their inherent physical characteristics, it establishes fit-for-purpose mathematical representations of the PUFs mentioned above, which adequately reflect the physical behavior of these primitives. Notions and formalizations already familiar in ML theory are reintroduced to give a better understanding of why, how, and to what extent ML attacks against PUFs can be feasible in practice. Second, the book explores polynomial-time ML algorithms that can learn the PUFs under the appropriate representation. More importantly, in contrast to previous ML approaches, the framework presented here ensures not only the accuracy of the model mimicking the behavior of the PUF, but also the delivery of such a model. Besides off-the-shelf ML algorithms, the book applies a set of algorithms from the field of property testing, which can help to evaluate the security of PUFs. They serve as a toolbox from which PUF designers and manufacturers can choose the indicators most relevant for their requirements. Last but not least, on the basis of learning-theory concepts, the book explicitly states that these PUF families cannot be considered an ultimate solution to the problem of insecure ICs. As such, it provides essential insights into academic research on PUFs as well as their design and manufacturing.
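The Arbiter PUF is the classic illustration of such an attack: under the standard additive delay model, an n-stage arbiter PUF is linear in a parity-transformed feature space, so plain logistic regression can learn it from eavesdropped challenge-response pairs. The following minimal sketch illustrates this general idea; the stage count, delay model, sample sizes, and learning parameters are illustrative, not taken from the book.

```python
import math, random

random.seed(0)
N_STAGES = 16

# Simulated arbiter PUF: additive delay model with random per-stage delays.
true_w = [random.gauss(0.0, 1.0) for _ in range(N_STAGES + 1)]

def features(challenge):
    # Parity transform: phi_i = prod_{j>=i} (1 - 2*c_j), plus a bias term.
    phi = []
    for i in range(N_STAGES):
        p = 1.0
        for c in challenge[i:]:
            p *= 1 - 2 * c
        phi.append(p)
    phi.append(1.0)  # bias
    return phi

def response(w, challenge):
    # The PUF response is the sign of the delay difference.
    s = sum(wi * xi for wi, xi in zip(w, features(challenge)))
    return 1 if s > 0 else 0

# Challenge-response pairs (CRPs), as an eavesdropping attacker would collect.
crps = []
for _ in range(2000):
    c = [random.randint(0, 1) for _ in range(N_STAGES)]
    crps.append((c, response(true_w, c)))

# Logistic-regression model of the PUF, trained by plain gradient descent.
w = [0.0] * (N_STAGES + 1)
for epoch in range(30):
    for c, r in crps:
        x = features(c)
        z = sum(wi * xi for wi, xi in zip(w, x))
        pred = 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, z))))
        g = pred - r
        for i in range(len(w)):
            w[i] -= 0.1 * g * x[i]

# Evaluate on fresh challenges the attacker never saw.
test = [[random.randint(0, 1) for _ in range(N_STAGES)] for _ in range(1000)]
acc = sum(response(true_w, c) == response(w, c) for c in test) / len(test)
print(f"model accuracy on unseen challenges: {acc:.3f}")
```

With a few thousand CRPs, the 17-parameter model matches the simulated PUF on the vast majority of unseen challenges, which is the essence of why such PUFs cannot be treated as unclonable against ML adversaries.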
This book develops new approaches to digital out-of-home media and digital signage in urban environments, offering solutions for communicating the interactive features of digital signage to passers-by. Digital out-of-home media and digital signage screens are becoming increasingly interactive thanks to touch input technology and gesture recognition. To optimize their conversion rate, interactive public displays must (1) attract attention, (2) communicate to passers-by that they are interactive, (3) explain the interaction, and (4) provide a motivation for passers-by to interact. This book highlights solutions to problems (2) and (3). The focus is on whole-body interaction, where the positions and orientations of users and their individual body parts are captured by specialized sensors (e.g., depth cameras). The book presents findings from a field study on communicating interactivity, a laboratory study analysing visual attention, a field study on mid-air gestures, and a field study on using mid-air gestures to select items on interactive public displays.
This book proposes combining cognitive modeling with model-based user interface development to tackle the problem of maintaining the usability of applications that target several device types at once (e.g., desktop PC, smartphone, smart TV). Model-based applications provide interesting meta-information about the elements of the user interface (UI) that is accessible through computational introspection. Cognitive user models can capitalize on this meta-information to provide improved predictions of the interaction behavior of future human users of applications under development. To achieve this, the cognitive processes that link UI properties to usability aspects such as effectiveness (user error) and efficiency (task completion time) are established empirically, explained through cognitive modeling, and validated in the course of this treatise. In the case of user error, the book develops an extended model of sequential action control based on the Memory for Goals theory, which is confirmed in different behavioral domains and experimental paradigms. This new model of user cognition and behavior is implemented using the MeMo workbench and integrated with the model-based application framework MASP in order to provide automated usability predictions from the early stages of software development onwards. Finally, the validity of the resulting integrated system is confirmed with empirical data from a new application, eliciting unexpected behavioral patterns.
This book presents an alternative approach to studying smartphone app notifications. It starts with insights into user acceptance of mobile notifications in order to provide tools that support users in managing them. It extends previous research by investigating factors that influence users' perception of notifications, and proposes tools that address the shortcomings of current systems. It presents a technical framework and testbed for evaluating the usage of mobile applications and notifications, and then discusses a series of studies based on this framework that investigate factors influencing users' perception of mobile notifications. Lastly, a set of design guidelines for the use of mobile notifications is derived that can be employed to support users in handling notifications on smartphones.
This book reviews research on the perceptual quality dimensions of synthetic speech, compares these findings with the state of the art, and derives a set of five universal perceptual quality dimensions for TTS signals: (i) naturalness of voice, (ii) prosodic quality, (iii) fluency and intelligibility, (iv) absence of disturbances, and (v) calmness. Moreover, a test protocol for the efficient identification of these dimensions in a listening test is introduced, and several factors influencing the dimensions are examined. In addition, different techniques for the instrumental quality assessment of TTS signals are introduced, reviewed, and tested. Finally, the requirements for integrating an instrumental quality measure into a concatenative TTS system are examined.
This book investigates processes for prototyping user interfaces for mobile apps, and describes the development of new concepts and tools that can improve prototype-driven app development in its early stages. It presents the development and evaluation of a new requirements catalogue for mobile-app prototyping tools that identifies the most important criteria such tools should meet at different prototype-development stages. This catalogue is not just a good point of orientation for designing new prototyping approaches; it also provides a set of metrics for comparing the performance of alternative prototyping tools. In addition, the book discusses the development of Blended Prototyping, a new approach to prototyping user interfaces for mobile applications in the early and middle development stages, and presents the results of a performance evaluation showing that it provides a tool for teamwork-oriented, creative prototyping of mobile apps in the early design stages.
This book describes an extension of the user behaviour simulation (UBS) of an existing tool for automatic usability evaluation (AUE). The extension is based on a user study with a smart-home system, which employed technical-sociological methods for the execution of the study and the analysis of the collected data. A comparison of the resulting UBS with former UBSs, as well as with the empirical data, shows that the new simulation approach outperforms the former simulation. The improvement affects the prediction of dialogue metrics related to dialogue efficiency and dialogue effectiveness. Furthermore, the book describes a parameter-based data model, as well as a related framework. Both are used to uniformly describe multimodal human-computer interactions and to provide such descriptions for usability evaluations. Finally, the book proposes a new two-stage method for the evaluation of UBSs, based on the computation of a distance measure between two dialogue corpora and the pair-wise comparison of distances among several dialogue corpora.
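As an illustration of the first of these two stages, a distance between a simulated and an empirical dialogue corpus can be computed over the distributions of an interaction parameter. The sketch below uses the Jensen-Shannon divergence over turn-count histograms; the choice of parameter, divergence, and data is hypothetical and not the book's actual measure.

```python
import math

# Hypothetical interaction parameter: number of system turns per dialogue.
corpus_real = [5, 6, 7, 5, 8, 6, 7, 6]  # empirical dialogues
corpus_sim  = [4, 5, 6, 6, 7, 5, 6, 5]  # simulated dialogues

def histogram(corpus, bins):
    # Normalized frequency of each parameter value.
    h = [0.0] * len(bins)
    for v in corpus:
        h[bins.index(v)] += 1
    total = sum(h)
    return [x / total for x in h]

def jensen_shannon(p, q):
    # Symmetric, bounded in [0, 1] when using log base 2.
    def kl(a, b):
        return sum(x * math.log2(x / y) for x, y in zip(a, b) if x > 0)
    m = [(x + y) / 2 for x, y in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

bins = sorted(set(corpus_real) | set(corpus_sim))
d = jensen_shannon(histogram(corpus_real, bins), histogram(corpus_sim, bins))
print(f"corpus distance: {d:.4f}")  # 0 would mean identical distributions
```

In the second stage, such distances would be computed pair-wise among several corpora, so that a simulation can be ranked by how close it comes to the empirical data relative to other simulations.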
This book proposes a data-driven methodology using multi-way data analysis for the design of video-quality metrics. It also enables video-quality metrics to be created using arbitrary features. This data-driven design approach not only requires no detailed knowledge of the human visual system, but also allows a proper consideration of the temporal nature of video using a three-way prediction model, corresponding to the three-way structure of video. Using two simple example metrics, the author demonstrates not only that this purely data-driven approach outperforms state-of-the-art video-quality metrics, which are often optimized for specific properties of the human visual system, but also that multi-way data analysis methods outperform the combination of two-way data analysis methods and temporal pooling.
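To see what temporal pooling means in the two-way baseline, consider per-frame quality scores pooled over time with a Minkowski norm on the per-frame degradation. This sketch (scores and exponent are illustrative, not from the book) shows how a larger exponent lets short, severe quality drops dominate the overall score:

```python
# Hypothetical per-frame quality scores on a 1-5 scale; frames 3-4 are degraded.
frame_scores = [4.2, 4.1, 2.0, 1.8, 4.0, 4.3]

def pooled_quality(scores, p, scale_max=5.0):
    # Pool per-frame degradation (scale_max - score) with a Minkowski norm;
    # p = 1 reduces to the plain mean, larger p emphasizes severe drops.
    d = (sum((scale_max - s) ** p for s in scores) / len(scores)) ** (1 / p)
    return scale_max - d

q_mean = pooled_quality(frame_scores, 1)
q_severe = pooled_quality(frame_scores, 4)
print(f"p=1 (mean): {q_mean:.2f}, p=4 (severity-weighted): {q_severe:.2f}")
```

A three-way model, by contrast, keeps the temporal axis inside the prediction model itself rather than collapsing it in a separate pooling step like the one above.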
This book presents two practical physical attacks. It shows how attackers can use them to reveal the secret keys of symmetric as well as asymmetric cryptographic algorithms, and presents countermeasures on the software and hardware levels that can help to prevent such attacks in the future. Though the theory behind both attacks has been known for several years, neither had been successfully implemented in practice, and so they have generally not been considered a serious threat. In short, their physical attack complexity has been overestimated and the implied security threat underestimated. First, the book introduces the photonic side channel, which offers not only temporal resolution but also the highest possible spatial resolution; owing to the high cost of its initial implementation, it had not been taken seriously. The work presents both simple and differential photonic side-channel analyses. It then presents a fault attack against pairing-based cryptography; because this attack requires at least two independent, precise faults in a single pairing computation, it had not been taken seriously either. Based on these two attacks, the book demonstrates that the assessment of physical attack complexity is error-prone, and that cryptography should therefore not rely on it. Cryptographic technologies have to be protected against all physical attacks, whether or not they have already been successfully implemented. The development of countermeasures does not require the successful execution of an attack; it can be carried out as soon as the principle of a side channel or fault attack is sufficiently understood.
This work addresses stealthy peripheral-based attacks on host computers and presents a new approach to detecting them. Peripherals can be regarded as separate systems with a dedicated processor and dedicated runtime memory for handling their tasks. The book addresses the problem that peripherals generally communicate with the host via the host's main memory, which stores cryptographic keys, passwords, opened files, and other sensitive data, an aspect attackers are quick to exploit. Here, stealthy malicious software based on isolated micro-controllers is implemented to conduct an attack analysis, the results of which provide the basis for developing a novel runtime detector. The detector reveals stealthy peripheral-based attacks on the host's main memory by exploiting certain hardware properties, while a permanent and resource-efficient measurement strategy ensures that the detector is also capable of detecting transient attacks. Such attacks can succeed against a strategy that measures only intermittently: attackers strike between two measurements and erase all traces of the attack before the system is measured again.
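The difference between intermittent and permanent measurement can be illustrated with a toy integrity checker: a transient attack modifies memory and restores it before the next scheduled measurement, so only a strategy that also measures during the attack window can catch it. The memory model and names below are purely illustrative, not the book's detector.

```python
import hashlib

# Toy model: host main memory as mutable bytes; baseline hash taken when clean.
memory = bytearray(b"kernel-code-region")
baseline = hashlib.sha256(memory).hexdigest()

def measure():
    # Integrity measurement: does memory still match the clean baseline?
    return hashlib.sha256(memory).hexdigest() == baseline

def transient_attack(do_work):
    # Modify memory, do malicious work, then restore the original bytes.
    original = bytes(memory)
    memory[:6] = b"evil!!"
    do_work()
    memory[:] = original

# Intermittent strategy: measure only before and after; the attack hides between.
checks = [measure()]                 # before: clean
transient_attack(lambda: None)
checks.append(measure())             # after: restored, looks clean again
intermittent_detected = not all(checks)

# Permanent strategy: measurements also fall inside the attack window.
during = []
transient_attack(lambda: during.append(measure()))
permanent_detected = not all(during)

print(intermittent_detected, permanent_detected)  # False True
```

The real detector must of course achieve this continuous coverage at acceptable cost, which is why the book pairs the permanent strategy with a resource-efficient measurement mechanism.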
This book presents (1) an exhaustive and empirically validated taxonomy of the quality aspects of multimodal interaction, together with respective measurement methods; (2) a validated questionnaire specifically tailored to the evaluation of multimodal systems that covers most of the taxonomy's quality aspects; (3) insights into how the quality perceptions of a multimodal system relate to the quality perceptions of its individual components; (4) a set of empirically tested factors that influence modality choice; and (5) models of the relationship between the perceived quality of a modality and its actual usage.
The work presented in this book focuses on modeling audiovisual quality as perceived by users of IP-based solutions for video communication, such as videotelephony. It also extends the current framework for the parametric prediction of audiovisual call quality. The book addresses several aspects of the quality perception of entire video calls, namely the quality estimation of the individual audio and video modalities in an interactive context, the audiovisual quality integration of these modalities, and the temporal pooling of short sample-based quality scores to account for the perceptual quality impact of time-varying degradations.
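A common functional form for audiovisual quality integration in the parametric literature combines the audio and video scores linearly plus a multiplicative cross-term. The sketch below uses hypothetical coefficients purely for illustration; in practice the coefficients are fitted to subjective test data for the target service.

```python
def audiovisual_mos(mos_a, mos_v, a=0.95, b=0.22, c=0.39, d=0.12):
    # Linear terms for audio and video plus an audio*video interaction term;
    # coefficients a, b, c, d are hypothetical, not fitted values.
    raw = a + b * mos_a + c * mos_v + d * mos_a * mos_v
    return min(5.0, max(1.0, raw))  # clamp to the 1-5 MOS scale

# Example: mediocre audio, decent video.
q = audiovisual_mos(2.0, 3.0)
print(f"predicted audiovisual MOS: {q:.2f}")
```

The cross-term captures the observation that the modalities are not judged independently: a degradation in one modality weighs more heavily when the other modality is good.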
This work addresses the evaluation of human and automatic speaker recognition performance under different channel distortions caused by bandwidth limitation, codecs, and electro-acoustic user interfaces, among other impairments. Its main contribution is a demonstration of the benefits of communication channels with extended bandwidth, together with insight into how speaker-specific characteristics of speech are preserved across different transmissions. It provides ample motivation for considering speaker recognition as a criterion in the migration from narrowband to enhanced bandwidths such as wideband and super-wideband.