Risk, Opportunity, Uncertainty and Other Random Models (Volume V in the Working Guides to Estimating and Forecasting series) goes part way to debunking the myth that research and development costs are somewhat random: under certain conditions they can be observed to follow a pattern of behaviour referred to as a Norden-Rayleigh Curve, which unfortunately has to be truncated to stop the myth from becoming a reality! A practical alternative is offered in relation to a particular form of PERT-Beta Curve. However, the major emphasis of this volume is the use of Monte Carlo Simulation as a general technique for narrowing down the potential outcomes of multiple interacting variables or cost drivers, perhaps the most common use of which is in the evaluation of Risk, Opportunity and Uncertainty. The trouble is that many Monte Carlo Simulation tools are 'black boxes', and too few estimators and forecasters really appreciate what is happening inside the 'black box'. This volume aims to resolve that, and offers tips on things that might need to be considered to remove some of the uninformed random input that often creates the misinformed misconception of 'it must be right!' Monte Carlo Simulation can also be used to model variable-determined Critical Paths in a schedule, and is key to modelling Waiting Times and queues with random arisings. Supported by a wealth of figures and tables, this is a valuable resource for estimators, engineers, accountants and project risk specialists, as well as students of cost engineering.
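To give a flavour of what happens inside such a 'black box' (this is a minimal illustrative sketch, not taken from the book: the cost drivers, their three-point values and the standard PERT shape parameters below are all assumptions), a Monte Carlo Simulation simply draws many random trials from each input distribution and looks at the spread of the combined result:

```python
import numpy as np

rng = np.random.default_rng(42)

def pert_beta_sample(low, mode, high, size, rng):
    """Sample from a PERT-Beta distribution defined by a 3-point estimate.

    Uses the common PERT shape parameters:
    alpha = 1 + 4*(mode - low)/(high - low)
    beta  = 1 + 4*(high - mode)/(high - low)
    then rescales the Beta(0, 1) draws onto [low, high].
    """
    alpha = 1 + 4 * (mode - low) / (high - low)
    beta = 1 + 4 * (high - mode) / (high - low)
    return low + (high - low) * rng.beta(alpha, beta, size)

# Hypothetical, independent cost drivers: (min, most likely, max) in £k
drivers = {
    "design":      (100, 120, 180),
    "manufacture": (250, 300, 420),
    "test":        (40,  60,  110),
}

n_trials = 100_000
total = np.zeros(n_trials)
for low, mode, high in drivers.values():
    total += pert_beta_sample(low, mode, high, n_trials, rng)

# Summarise the simulated outcome distribution
for p in (10, 50, 80, 90):
    print(f"P{p}: £{np.percentile(total, p):,.0f}k")
print(f"Mean: £{total.mean():,.0f}k")
```

Even this toy run shows why an uninformed reading can mislead: with right-skewed inputs, the mean of the total typically sits above the sum of the 'most likely' values, which is exactly the kind of behaviour the estimator needs to understand rather than take on trust.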
Best Fit Lines and Curves, and Some Mathe-Magical Transformations (Volume III of the Working Guides to Estimating & Forecasting series) concentrates on techniques for finding the Best Fit Line or Curve to some historical data, allowing us to interpolate or extrapolate the implied relationship that will underpin our prediction. A range of simple 'Moving Measures' is suggested to smooth the underlying trend and quantify the degree of noise or scatter around that trend. Their advantages and disadvantages are discussed, and a simple way to offset the latent disadvantage of most Moving Measure techniques is provided. The volume then turns to Simple Linear Regression Analysis, a more formal numerical technique that calculates the line of best fit subject to defined 'goodness of fit' criteria. Microsoft Excel is used to demonstrate how to decide whether the line of best fit is a good fit, or just a solution in search of some data. These principles are then extended to cover multiple cost drivers, and to show how we can use them to quantify 3-Point Estimates. With a deft sleight of hand, certain commonly occurring families of non-linear relationships can be transformed mathe-magically into linear formats, allowing us to exploit the powers of Regression Analysis to find Best Fit Curves. The volume concludes with an exploration of the ups and downs of seasonal data (Time Series Analysis). Supported by a wealth of figures and tables, this is a valuable resource for estimators, engineers, accountants and project risk specialists, as well as students of cost engineering.
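The 'mathe-magical transformation' idea can be sketched in a few lines (an illustration only, with invented data; the book itself works such procedures in Microsoft Excel): a power-type relationship y = a·x^b becomes a straight line in log-log space, so ordinary least squares can find the Best Fit Curve and its goodness of fit.

```python
import numpy as np

# Invented historical data: cost driver (e.g. weight) vs observed cost
x = np.array([10, 15, 22, 30, 45, 60, 80])
y = np.array([120, 150, 190, 230, 300, 355, 430])

# Transform: ln(y) = ln(a) + b*ln(x), a straight line in log space
ln_x, ln_y = np.log(x), np.log(y)
b, ln_a = np.polyfit(ln_x, ln_y, 1)      # slope and intercept of the best fit line
a = np.exp(ln_a)

# 'Goodness of fit' (R-squared) measured in the transformed, linear space
fitted = ln_a + b * ln_x
ss_res = np.sum((ln_y - fitted) ** 2)
ss_tot = np.sum((ln_y - ln_y.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print(f"Best fit curve: y = {a:.2f} * x^{b:.3f}   (R-squared = {r_squared:.4f})")
print(f"Extrapolated prediction at x = 100: {a * 100 ** b:.0f}")
```

A high R-squared on so few invented points is exactly the 'solution in search of some data' trap the blurb mentions, which is why the formal goodness-of-fit checks matter.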
Principles, Process and Practice of Professional Number Juggling (Volume I of the Working Guides to Estimating & Forecasting series) sets the scene for TRACEability and good estimate practice that is followed in the other volumes in this series of five working guides. It clarifies the difference between an Estimating Process, Procedure, Approach, Method and Technique. It expands on the definitions of Approach (Top-down, Bottom-up and 'Ethereal') and Method (Analogy, Parametric and 'Trusted Source') and discusses how these form the basis of all other means of establishing an estimate. This volume also underlines the importance of 'data normalisation' in any estimating procedure, and demonstrates that the Estimating by Analogy Method, in essence, is a simple extension of Data Normalisation. The author looks at simple measures of assessing the maturity or health of an estimate, and offers a means of assessing a spreadsheet for any inherent risks or errors that may be introduced by failing to follow good practice in spreadsheet design and build. This book provides a taster of the more numerical techniques covered in the remainder of the series by considering how an estimator can potentially exploit Benford's Law (traditionally used in Fraud Detection) to identify systematic bias from third party contributors. It will be a valuable resource for estimators, engineers, accountants, project risk specialists as well as students of cost engineering.
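By way of a taster of the Benford's Law idea mentioned above (a sketch only: the quoted figures and the flagging threshold are invented for illustration and are not the book's procedure), the check amounts to comparing the observed leading-digit frequencies of submitted values against the frequencies Benford's Law predicts, log10(1 + 1/d):

```python
import numpy as np

def leading_digit(values):
    """Return the leading (most significant) digit of each positive value."""
    values = np.asarray(values, dtype=float)
    exponents = np.floor(np.log10(values))
    return (values / 10 ** exponents).astype(int)

def benford_comparison(values):
    """Compare observed first-digit frequencies with Benford's Law expectations."""
    digits = leading_digit(values)
    observed = np.array([(digits == d).mean() for d in range(1, 10)])
    expected = np.log10(1 + 1 / np.arange(1, 10))
    return observed, expected

# Invented example: quoted costs from a third-party contributor
quotes = [1235, 1870, 1410, 1190, 2980, 1345, 1522, 1660, 1075, 1940,
          1260, 1830, 1115, 1480, 1725]

observed, expected = benford_comparison(quotes)
for d, (o, e) in enumerate(zip(observed, expected), start=1):
    flag = "  <-- worth a closer look" if abs(o - e) > 0.15 else ""
    print(f"digit {d}: observed {o:.2f}, Benford {e:.2f}{flag}")
```

A large, systematic gap between the observed and Benford frequencies does not prove bias, but it tells the estimator where to start asking questions of the contributor.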
Learning, Unlearning and Re-learning Curves (Volume IV of the Working Guides to Estimating & Forecasting series) focuses on Learning Curves, and the various tried and tested models of Wright, Crawford, DeJong, Towill-Bevis and others. It explores the differences and similarities between the various models and examines the key properties that Estimators and Forecasters can exploit. A discussion about Learning Curve Cost Drivers leads to the consideration of a little-used but very powerful technique of Learning Curve modelling called Segmentation, which looks at an organisation's complex learning curve as the product of multiple shallower learning curves. Perhaps the biggest benefit is that it simplifies the calculations in Microsoft Excel where there is a change in the rate of learning observed or expected. The same technique can be used to model and calibrate discontinuities in the learning process that result in setbacks and uplifts in time or cost. This technique is compared with other, better known techniques such as Anderlohr's. Equivalent Unit Learning is another, relatively new technique that can be used alongside traditional completed unit learning to give an early warning of changes in the rates of learning. Finally, a Learning Curve can be exploited to estimate the penalty of collaborative working across multiple partners. Supported by a wealth of figures and tables, this is a valuable resource for estimators, engineers, accountants, project risk specialists, as well as students of cost engineering.
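To show the basic shape that all of these models build on (a sketch of Wright's Cumulative Average model only, with an assumed first-unit cost and learning rate; the segmentation and other techniques in the volume go well beyond this), Wright's model takes the cumulative average cost of x units as T1·x^b, where b = log2(learning rate):

```python
import math

def wright_cumulative_average(t1, learning_rate, units):
    """Cumulative average cost of the first `units` units under Wright's model.

    Wright's Cumulative Average model: CumAvg(x) = T1 * x ** b, with
    b = log2(learning_rate). An 80% curve means the cumulative average
    falls to 80% of its previous value every time the quantity doubles.
    """
    b = math.log2(learning_rate)
    return t1 * units ** b

# Assumed inputs for illustration: first unit takes 1000 hours, 80% learning curve
t1, rate = 1000.0, 0.80

for x in (1, 2, 4, 8, 16):
    cum_avg = wright_cumulative_average(t1, rate, x)
    cum_total = cum_avg * x
    print(f"after {x:>2} units: cumulative average {cum_avg:7.1f} h, "
          f"cumulative total {cum_total:8.1f} h")
```

Running it confirms the doubling property (1000, 800, 640, 512, 409.6 hours), which is the key property estimators exploit when calibrating a curve from only a handful of data points.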
Probability, Statistics and Other Frightening Stuff (Volume II of the Working Guides to Estimating & Forecasting series) considers many of the commonly used Descriptive Statistics in the world of estimating and forecasting. It considers values that are representative of the 'middle ground' (Measures of Central Tendency), and the degree of data scatter (Measures of Dispersion and Shape) around those 'middle ground' values. A number of Probability Distributions and where they might be used are discussed, along with some fascinating and useful 'rules of thumb' or short-cut properties that estimators and forecasters can exploit in plying their trade. With the help of a 'Correlation Chicken', the concept of partial correlation is explained, including how the estimator or forecaster can exploit this in reflecting varying levels of independence and imperfect dependence between an output or predicted value (such as cost) and an input or predictor variable (such as size). Under the guise of 'Tails of the unexpected', the book concludes with two chapters devoted to Hypothesis Testing (or knowing when to accept or reject the validity of an assumed estimating relationship), and a number of statistically-based tests to help the estimator decide whether to include or exclude a data point as an 'outlier', one that appears not to be representative of that which the estimator is tasked to produce. This is a valuable resource for estimators, engineers, accountants, project risk specialists as well as students of cost engineering.
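As a flavour of the descriptive statistics and outlier screening described above (a minimal sketch with invented data; the book discusses several formal, statistically-based outlier tests, whereas this stand-in uses only a simple Tukey-style fence), the following computes a few Measures of Central Tendency and Dispersion and flags candidate outliers for closer inspection:

```python
import statistics as st

# Invented sample of normalised costs from analogous projects
costs = [102, 98, 110, 95, 105, 99, 187, 101, 97, 104]

mean = st.mean(costs)
median = st.median(costs)
stdev = st.stdev(costs)
q1, q2, q3 = st.quantiles(costs, n=4)   # quartile cut points
iqr = q3 - q1

print(f"Measures of Central Tendency: mean = {mean:.1f}, median = {median:.1f}")
print(f"Measures of Dispersion: st dev = {stdev:.1f}, IQR = {iqr:.1f}")

# Simple Tukey-style fence (1.5 * IQR beyond the quartiles) as a first outlier screen
low_fence, high_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = [c for c in costs if c < low_fence or c > high_fence]
print(f"Candidate outliers to investigate before excluding: {outliers}")
```

Note how one unrepresentative value drags the mean well away from the median; the point of the formal tests is to decide, rather than guess, whether such a point should be excluded from the estimating relationship.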