LoveReading

Data analysis: general

See below for a selection of the latest books in the Data analysis: general category. Titles presented with a red border are those that have been lovingly read and reviewed by the experts at LoveReading. With expert reading recommendations made by people with a passion for books, and some unique features, LoveReading will help you find great Data analysis: general books, and books from many more genres, to keep you inspired and entertained. And it's all free!

Data Visualization in Society

Author: Helen Kennedy Format: Paperback / softback Release Date: 16/04/2020

data visualization, meaning-making, literacy, politics

A First Course in Network Science

Networks are everywhere: networks of friends, transportation networks and the Web. Neurons in our brains and proteins within our bodies form networks that determine our intelligence and survival. This modern, accessible textbook introduces the basics of network science for a wide range of job sectors from management to marketing, from biology to engineering, and from neuroscience to the social sciences. Students will develop important, practical skills and learn to write code for using networks in their areas of interest - even as they are just learning to program with Python. Extensive sets of tutorials and homework problems provide plenty of hands-on practice and longer programming tutorials online further enhance students' programming skills. This intuitive and direct approach makes the book ideal for a first course, aimed at a wide audience without a strong background in mathematics or computing but with a desire to learn the fundamentals and applications of network science.
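
Since the blurb highlights learning network science while learning to program with Python, here is a minimal sketch of the kind of exercise a first course might include. It assumes the widely used networkx library; the toy friendship network and the measures shown are illustrative and are not taken from the book's own tutorials.

```python
# Minimal network-science sketch using the networkx library (an assumption;
# the book's own tutorials may use different tools and datasets).
import networkx as nx

# Build a small undirected "friendship" network.
G = nx.Graph()
G.add_edges_from([
    ("Ana", "Ben"), ("Ana", "Caro"), ("Ben", "Caro"),
    ("Caro", "Dan"), ("Dan", "Eve"),
])

# Basic descriptive measures covered in most introductory courses.
print("Nodes:", G.number_of_nodes(), "Edges:", G.number_of_edges())
print("Degree of Caro:", G.degree("Caro"))
print("Shortest path Ana -> Eve:", nx.shortest_path(G, "Ana", "Eve"))
print("Average clustering:", nx.average_clustering(G))
```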

Text and Data Mining: The theory and practice of using TDM for scholarship in the humanities

Author: Paul Verhaar Format: Paperback / softback Release Date: 31/12/2019

This book offers a broad and accessible introduction to research based on text and data mining (TDM), focusing specifically on the ways in which TDM has been applied within the humanities. TDM is a collection of computational and algorithmic methods that enable researchers to extract information from large collections of machine-readable texts. As in many other academic disciplines, a growing number of scholars in the humanities are trying to harness the numerous innovative possibilities that TDM opens up. While there is a clear uptake of TDM within the humanities, it is relatively difficult for scholars who are new to the field to find books which explain in understandable terms what TDM actually entails. This book offers an accessible and comprehensive overview of the methodology and the theory of TDM, concentrating on applications within the humanities. The book first discusses TDM on a practical level. It defines central terms and concepts, and it characterises the tools and the algorithms which have been used most commonly. The purposes and the contexts of these techniques are clarified using a generic description of the workflow that is followed during research projects. The book additionally contains chapters about the various ways in which academic libraries are organising their support for TDM, and about some of the obstacles posed by legislation in the field of intellectual property rights. Based on a thorough scrutiny of existing critical debates about computer-assisted textual research, the book also characterises the possibilities and the limitations of TDM on a more conceptual level. Its main objective is to develop a theoretical framework which can help to clarify aspects of research based on TDM and to describe the general ways in which TDM may affect and transform traditional scholarship in the humanities. Supported by international case studies, coverage in the book includes: pre-processing operations; data analysis; obstacles posed by intellectual property rights; text and data mining on a conceptual level; visualisation; tools criticism; and library support for text and data mining. The book will be essential reading for humanities scholars interested in getting started in TDM and those who aim to develop their understanding of TDM on a more theoretical level. It will also be a must-read for academic librarians and information professionals who seek to develop services to support scholarship based on TDM, and for students interested in digital humanities.
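
To give a flavour of the pre-processing and analysis steps that a TDM workflow typically involves, here is a minimal, hypothetical sketch using only the Python standard library. The tiny corpus, stop-word list and tokenisation rule are illustrative assumptions and do not reproduce the book's own examples.

```python
# Hypothetical TDM pre-processing sketch: tokenise a small corpus and count terms.
# The corpus, stop-word list, and tokenisation rule are illustrative assumptions.
import re
from collections import Counter

corpus = [
    "The study of texts at scale requires machine-readable editions.",
    "Machine-readable texts allow researchers to count and compare terms.",
]
stop_words = {"the", "of", "at", "and", "to"}

def tokenise(text):
    # Lower-case and keep alphabetic tokens only (a simple pre-processing choice).
    return [t for t in re.findall(r"[a-z]+", text.lower()) if t not in stop_words]

# A basic analysis step: term frequencies across the whole collection.
term_counts = Counter(t for doc in corpus for t in tokenise(doc))
print(term_counts.most_common(5))
```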

Text and Data Mining: The theory and practice of using TDM for scholarship in the humanities

Author: Paul Verhaar Format: Hardback Release Date: 31/12/2019

Practical Data Science with R

Author: Nina Zumel, John Mount Format: Paperback / softback Release Date: 06/12/2019

This invaluable addition to any data scientist's library shows you how to apply the R programming language and useful statistical techniques to everyday business situations, as well as how to effectively present results to audiences of all levels. To answer the ever-increasing demand for machine learning and analysis, this new edition boasts additional R tools, modeling techniques, and more. Practical Data Science with R, Second Edition takes a practice-oriented approach to explaining basic principles in the ever-expanding field of data science. You'll jump right into real-world use cases as you apply the R programming language and statistical analysis techniques to carefully explained examples based in marketing, business intelligence, and decision support. Key features: data science and statistical analysis for the business professional; numerous instantly familiar real-world use cases; keys to effective data presentations; and modeling and analysis techniques like boosting, regularized regression, and quadratic discriminant analysis. Audience: while some familiarity with basic statistics and R is assumed, this book is accessible to readers with or without a background in data science. About the technology: business analysts and developers are increasingly collecting, curating, analyzing, and reporting on crucial business data, and the R language and its associated tools provide a straightforward way to tackle these day-to-day tasks. Nina Zumel and John Mount are co-founders of Win-Vector LLC, a San Francisco-based data science consulting firm. Both hold PhDs from Carnegie Mellon and blog on statistics, probability, and computer science at win-vector.com.
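
The book's examples are written in R. Purely as an illustration of one technique the blurb names, regularized regression, here is a hedged Python analogue using scikit-learn on synthetic data; it does not reproduce the book's code, datasets or workflow.

```python
# Illustrative regularized (ridge) regression on synthetic data using scikit-learn.
# The book itself works in R; this Python analogue only sketches the same idea.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
# Two of the five predictors carry no signal; the penalty shrinks their weights.
y = X @ np.array([1.5, 0.0, -2.0, 0.0, 0.5]) + rng.normal(scale=0.5, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = Ridge(alpha=1.0).fit(X_train, y_train)  # alpha controls the penalty strength
print("Test R^2:", round(model.score(X_test, y_test), 3))
```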

Present Sense: A Practical Guide to the Science of Measuring Performance and the Art of Communicating it, with the Brain in Mind

Author: Dr Steve Morlidge Format: Paperback / softback Release Date: 31/10/2019

In this provocative yet practical guidebook Steve Morlidge demonstrates why the approach to and methods of performance reporting that information professionals have traditionally been taught fail, and what we need to do differently to make sense of the dynamic, complex and data-rich world in which we now live and work. Reporting on performance should not be treated as worthy but dull, requiring no more than routine comparisons of actuals against targets. That traditional approach is based on the false premise that organisations can be managed as if they were simple mechanical systems operating in a predictable environment, and the methods associated with it, such as variance analyses and data tables used to measure and communicate performance, are completely inadequate. Instead, Morlidge argues that performance reporting should be reconceived as an act of perception conducted on behalf of the organisation, helping it make sense of the sensory inputs (data) at its disposal. To do so effectively, performance reporters need to learn from and exploit the strengths of our own brains, compensate for their weaknesses, and communicate in a way that makes it easy for their audience's brains to assimilate the message. Drawing on the latest insights from cognitive science, in this book you will learn: how to bring a dynamic perspective into performance reporting; how to deploy a set of simple tools that help separate the signal from the noise inherent in large data sets and support sound inferences; how to set goals intelligently; and the grammar of data visualization and how to use it to design powerful and simple reports. In this way information professionals are uniquely charged with the responsibility for creating the shared consciousness that is a prerequisite for organisations to respond and adapt effectively to their environments.
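
One common, generic way of separating signal from noise in a run of performance figures is an individuals/moving-range (XmR) style calculation. The sketch below illustrates that general idea in Python on invented monthly figures; it is not presented as the specific tool set Morlidge teaches in the book.

```python
# Generic signal-vs-noise check on a series of monthly performance figures.
# Individuals/moving-range (XmR) style limits; the data are invented and this is
# not claimed to be the book's own method.
values = [100, 101, 99, 100, 102, 98, 100, 101, 120, 99]  # hypothetical monthly results

mean = sum(values) / len(values)
moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
avg_mr = sum(moving_ranges) / len(moving_ranges)

# Conventional XmR limits: mean +/- 2.66 * average moving range.
upper, lower = mean + 2.66 * avg_mr, mean - 2.66 * avg_mr
signals = [(i, v) for i, v in enumerate(values) if v > upper or v < lower]
print(f"Limits: {lower:.1f} to {upper:.1f}; points signalling real change: {signals}")
```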

Disaster Evaluation Research: A field guide

A human disaster is defined as a hazardous event that overwhelms the capacity of the local community to respond to the needs of the affected population. Medical and public health responses aim to provide care efficiently and promptly, but all too often responses are hampered by recurring mistakes. Analysing the factors at play, such as the scale and frequency of disasters and the variety of challenges they present, is central to developing more effective response plans. However, the complexity of disasters often precludes reliable data collection, hampering the accuracy of the results, conclusions and recommendations required to improve responses. Disaster Evaluation Research: A field guide presents a new approach to the study of disasters by incorporating a mixed-methods research approach. This practical manual provides a range of reliable methods, robust approaches and proven techniques for gathering and analysing data. Written by leading evaluation scientists with a wealth of experience, the authors present their 'EIGHT Step Model' for disaster evaluation studies. This framework applies evaluation science to disaster responses, helping scientists to select key stakeholders effectively, write evaluation questions, use logic models and mixed-methods research designs, prepare sampling plans, collect and analyse data, and prepare a final report. The guide also features useful tools for carrying out evaluations, including evaluation questions, indicators and data sources, resources, and questionnaires used in past evaluation studies. Using a clear, accessible and step-by-step style, this practical manual is easy to use in the field and essential reading for medical and public health professionals involved in disaster preparedness and response, humanitarian relief workers, policy analysts, evaluation scientists and epidemiologists.

Statistical Modelling of Complex Correlated and Clustered Data: Household Surveys in Africa

Author: Ngianga-Bakwin Kandala, PhD Format: Hardback Release Date: 01/10/2019

In order to assist a hospital in managing its resources and patients, modelling the length of stay is highly important. Recent health scholarship and practice have largely remained empirical, dwelling on primary data. This is critically important: first, because health planners generally rely on data to establish trends and patterns of disease burden at national or regional level, and secondly because epidemiologists depend on data to investigate possible risk factors for disease. Yet the use of routine or secondary data has, in recent years, proved increasingly significant in such endeavours. Various units within health systems collect such data primarily as part of the process of surveillance, monitoring and evaluation, and such data are periodically supplemented by population-based sample survey datasets. Thirdly, armed with statistical tools, public health professionals are able to analyse health data and breathe life into what might otherwise remain meaningless numbers. The main focus of this book is to present and showcase advanced modelling of routine or secondary survey data. Studies demonstrate that statistical literacy and knowledge are needed to understand health research outputs, and the advent of user-friendly statistical packages, combined with computing power and the widespread availability of public health data, has resulted in more reported epidemiological studies in the literature. However, the analysis of secondary data poses some unique challenges, which much of the health literature has so far failed to recognise, resulting in inappropriate analyses and erroneous conclusions. This book presents the application of advanced statistical techniques to real examples emanating from routine or secondary survey data, essentially datasets in which the two editors have been involved, and demonstrates how to tackle these challenges. Some of these challenges are: the complex sampling design of the surveys; the hierarchical nature of the data; the dependence of data within sampled clusters; and missing data, among many others. Using data from the Health Management Information System (HMIS) and the Demographic and Health Survey (DHS), the book provides various approaches and techniques for dealing with data complexity and for handling correlated or clustered data. Each chapter presents example code, which can be used to analyse similar data in R, Stata or SPSS; to keep the book concise, the code is provided on the book's website. The book considers four main topics in the field of health sciences research: (i) structural equation modelling; (ii) spatial and spatio-temporal modelling; (iii) correlated or clustered copula modelling; and (iv) survival analysis. It will be of value to methodologists, including students undertaking Master's or Doctoral programmes, as well as other researchers seeking a reference on quantitative analysis in public health, the health sciences or other areas where data of a similar nature arise. Further, the book can be a resource for public health professionals interested in quantitative approaches to questions of an epidemiological nature. Each chapter starts with a motivating background and a review of statistical methods, presents the analysis and results, and ends with a discussion and possible recommendations.
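
The book's example code is in R, Stata or SPSS. As a hedged illustration of just one recurring issue the blurb mentions, within-cluster correlation in household survey data, here is a Python sketch using statsmodels to fit a regression with cluster-robust standard errors on synthetic data; the variable names and data-generating process are invented for the example.

```python
# Illustrative handling of clustered survey data: an OLS fit with cluster-robust
# standard errors via statsmodels. Synthetic data; the book's own examples are in
# R, Stata or SPSS and cover far richer models.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_clusters, per_cluster = 30, 20
cluster = np.repeat(np.arange(n_clusters), per_cluster)
cluster_effect = rng.normal(scale=1.0, size=n_clusters)[cluster]  # shared within each sampled cluster
x = rng.normal(size=n_clusters * per_cluster)
y = 0.5 * x + cluster_effect + rng.normal(size=n_clusters * per_cluster)

df = pd.DataFrame({"y": y, "x": x, "cluster": cluster})
fit = smf.ols("y ~ x", data=df).fit(cov_type="cluster", cov_kwds={"groups": df["cluster"]})
print(fit.summary().tables[1])  # cluster-robust standard errors for the slope
```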

Data Analysis

Author: Partha Sarathi Bishnu, Vandana Bhattacherjee Format: Paperback / softback Release Date: 30/06/2019

Data Analysis Using Statistics and Probability with R Language is a complete introduction to data analysis. It provides a sound understanding of the foundations of data analysis, in addition to covering many important advanced topics, and all the techniques have been implemented using the R language as well as Excel. The book is intended for undergraduate and postgraduate students of Management and Engineering disciplines, and it is also useful for research scholars. Key features:
1. Covers data analysis topics such as descriptive statistics (mean, median, mode, standard deviation, skewness, kurtosis, correlation and regression); probability and probability distributions; inferential statistics (estimation of parameters, hypothesis testing, ANOVA, chi-square and t-tests); statistical quality control, time series analysis and statistical decision theory; exploratory data analysis (clustering and classification); and advanced techniques such as conjoint analysis, panel data analysis and logistic regression analysis.
2. Comprises 12 chapters which include examples, solved problems, review questions and unsolved problems.
3. Requires no programming background; the theoretical concepts can be understood even if the programming material is skipped.
4. R and Excel implementations, and additional advanced topics, are available at https://phindia.com/partha_sarathi_bishnu_and_vandana_bhattacherjee
5. Useful wherever data analysis techniques are required, in any branch of study.
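
The book's implementations are in R and Excel. As a hedged illustration of the kind of descriptive statistics and hypothesis testing listed above, here is a short Python sketch using numpy and scipy on an invented sample; it mirrors the idea only and is not taken from the book.

```python
# Illustrative descriptive statistics and a one-sample t-test in Python.
# The book's implementations are in R and Excel; the sample values are invented.
import numpy as np
from scipy import stats

sample = np.array([12.1, 11.8, 12.4, 12.0, 11.6, 12.3, 12.2, 11.9])

print("Mean:", sample.mean(), "SD:", sample.std(ddof=1))
print("Skewness:", stats.skew(sample), "Kurtosis:", stats.kurtosis(sample))

# Test whether the sample mean differs from a hypothesised value of 12.0.
t_stat, p_value = stats.ttest_1samp(sample, popmean=12.0)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```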