Praise for *How to Measure Anything: Finding the Value of Intangibles in Business*

"I love this book. Douglas Hubbard helps us create a path to know the answer to almost any question in business, in science, or in life . . . Hubbard helps us by showing us that when we seek metrics to solve problems, we are really trying to know something better than we know it now. How to Measure Anything provides just the tools most of us need to measure anything better, to gain that insight, to make progress, and to succeed." –Peter Tippett, PhD, M.D., Chief Technology Officer at CyberTrust and inventor of the first antivirus software

"Doug Hubbard has provided an easy-to-read, demystifying explanation of how managers can inform themselves to make less risky, more profitable business decisions. We encourage our clients to try his powerful, practical techniques." –Peter Schay, EVP and COO of The Advisory Council

"As a reader you soon realize that actually everything can be measured while learning how to measure only what matters. This book cuts through conventional clichés and business rhetoric and offers practical steps to using measurements as a tool for better decision making. Hubbard bridges the gaps to make college statistics relevant and valuable for business decisions." –Ray Gilbert, EVP, Lucent

"This book is remarkable in its range of measurement applications and its clarity of style. A must-read for every professional who has ever exclaimed, 'Sure, that concept is important, but can we measure it?'" –Dr. Jack Stenner, Cofounder and CEO of MetaMetrics, Inc.

# How to Measure Anything: Finding the Value of "Intangibles" in Business



## 30 reviews for How to Measure Anything: Finding the Value of "Intangibles" in Business


4 out of 5 – Takuro Ishikawa: The most important thing I learned from this book: "A measurement is a set of observations that reduce uncertainty where the result is expressed as a quantity." Finally! Someone has clearly explained that measurements are all approximations. Very often in social research, I have to spend a lot of time explaining that metrics don't need to be exact to be useful and reliable. Hopefully, this book will help me shorten those conversations.

4 out of 5 – Jurgen Appelo: 297 references to risk, and only 29 references to opportunity. No mention of unknown unknowns (or black swans), and no mention of the observer effect (Goodhart's law). A great book, teaching you all about metrics, as long as you ignore complexity.

5 out of 5 – Yevgeniy Brikman: As an engineer, this book makes me happy. A great discussion of how to break *any* problem down into quantifiable metrics, how to figure out which of those metrics is valuable, and how to measure them. The book is fairly actionable, there is a complementary website with lots of handy Excel tools, and there are plenty of examples to help you along. The only downside is that this is largely a stats book in disguise, so some parts are fairly dry and the difficulty level jumps around a little bit. If you make important decisions, especially in business, this book is for you. Some great quotes:

Anything can be measured. If a thing can be observed in any way at all, it lends itself to some type of measurement method. No matter how "fuzzy" the measurement is, it's still a measurement if it tells you more than you knew before. And those very things most likely to be seen as immeasurable are, virtually always, solved by relatively simple measurement methods.

Measurement: a quantitatively expressed reduction of uncertainty based on one or more observations. So a measurement doesn't have to eliminate uncertainty after all. A mere *reduction* in uncertainty counts as a measurement and possibly can be worth much more than the cost of the measurement.

"A problem well stated is a problem half solved." —Charles Kettering (1876–1958)

The clarification chain is just a short series of connections that should bring us from thinking of something as an intangible to thinking of it as a tangible. First, we recognize that if X is something that we care about, then X, by definition, must be detectable in some way. How could we care about things like "quality," "risk," "security," or "public image" if these things were totally undetectable, in any way, directly or indirectly? If we have reason to care about some unknown quantity, it is because we think it corresponds to desirable or undesirable results in some way. Second, if this thing is detectable, then it must be detectable in some amount. If you can observe a thing at all, you can observe more of it or less of it. Once we accept that much, the final step is perhaps the easiest. If we can observe it in some amount, then it must be measurable.

Rule of Five: There is a 93.75% chance that the median of a population is between the smallest and largest values in any random sample of five from that population.

An important lesson comes from the origin of the word "experiment." "Experiment" comes from the Latin ex-, meaning "of/from," and periri, meaning "try/attempt." It means, in other words, to get something by trying. The statistician David Moore, the 1998 president of the American Statistical Association, goes so far as to say: "If you don't know what to measure, measure anyway. You'll learn what to measure."

Four useful measurement assumptions:
1. Your problem is not as unique as you think.
2. You have more data than you think.
3. You need less data than you think.
4. An adequate amount of new data is more accessible than you think.

Don't assume that the only way to reduce your uncertainty is to use an impractically sophisticated method. Are you trying to get published in a peer-reviewed journal, or are you just trying to reduce your uncertainty about a real-life business decision? Think of measurement as iterative. Start measuring it. You can always adjust the method based on initial findings.

In business cases, most of the variables have an "information value" at or near zero. But usually at least some variables have an information value that is so high that some deliberate measurement is easily justified. While there are certainly variables that do not justify measurement, a persistent misconception is that unless a measurement meets an arbitrary standard (e.g., adequate for publication in an academic journal or meets generally accepted accounting standards), it has no value. This is a slight oversimplification, but what really makes a measurement of high value is a lot of uncertainty combined with a high cost of being wrong. Whether it meets some other standard is irrelevant.

When people say "You can prove anything with statistics," they probably don't really mean "statistics," they just mean broadly the use of numbers (especially, for some reason, percentages). And they really don't mean "anything" or "prove." What they really mean is that "numbers can be used to confuse people, especially the gullible ones lacking basic skills with numbers." With this, I completely agree, but it is an entirely different claim.

The fact is that the preference for ignorance over even marginal reductions in ignorance is never the moral high ground. If decisions are made under a self-imposed state of higher uncertainty, policy makers (or even businesses like, say, airplane manufacturers) are betting on our lives with a higher chance of erroneous allocation of limited resources. In measurement, as in many other human endeavors, ignorance is not only wasteful but can be dangerous. If we can't identify a decision that could be affected by a proposed measurement and how it could change those decisions, then the measurement simply has no value. The lack of having an exact number is not the same as knowing nothing.

The McNamara Fallacy: The first step is to measure whatever can be easily measured. This is OK as far as it goes. The second step is to disregard that which can't easily be measured or to give it an arbitrary quantitative value. This is artificial and misleading. The third step is to presume that what can't be measured easily isn't important. This is blindness. The fourth step is to say that what can't easily be measured really doesn't exist. This is suicide.

First, we know that the early part of any measurement usually is the high-value part. Don't attempt a massive study to measure something if you have a lot of uncertainty about it now. Measure a little bit, remove some uncertainty, and evaluate what you have learned. Were you surprised? Is further measurement still necessary? Did what you learned in the beginning of the measurement give you some ideas about how to change the method? Iterative measurement gives you the most flexibility and the best bang for the buck.

This point might be disconcerting to some who would like more certainty in their world, but everything we know from "experience" is just a sample. We didn't actually experience everything; we experienced some things and we extrapolated from there. That is all we get—fleeting glimpses of a mostly unobserved world from which we draw conclusions about all the stuff we didn't see. Yet people seem to feel confident in the conclusions they draw from limited samples. The reason they feel this way is because experience tells them sampling often works. (Of course, that experience, too, is based on a sample.)

"Anything you need to quantify can be measured in some way that is superior to not measuring it at all." —Gilb's Law
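The Rule of Five quoted in the review above sounds implausible, but it follows from simple probability: each independent draw has a 50% chance of landing below the population median, so the chance that all five land on the same side is 2 × (1/2)^5 = 6.25%, leaving 93.75%. A quick Monte Carlo check (the lognormal population here is an arbitrary, skewed choice, just to illustrate that the rule is distribution-free):

```python
import random
import statistics

def rule_of_five_hit_rate(population_size=2_000, trials=50_000, seed=42):
    """Estimate how often a population's median falls between the
    smallest and largest values of a random sample of five."""
    rng = random.Random(seed)
    # Arbitrary skewed population; the rule doesn't depend on the shape.
    population = [rng.lognormvariate(0, 1) for _ in range(population_size)]
    median = statistics.median(population)
    hits = 0
    for _ in range(trials):
        sample = rng.sample(population, 5)
        if min(sample) < median < max(sample):
            hits += 1
    return hits / trials

# Theory: 1 - 2 * (1/2)**5 = 0.9375
print(rule_of_five_hit_rate())  # close to 0.9375
```

The simulation converges on the theoretical 93.75% regardless of how skewed the population is, which is why the rule is such a cheap uncertainty reducer.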

5 out of 5 – Nils: An OK popularization of measurement techniques. But it downplays the key issue—which is data quality challenges, of which there are at least two types. The first is the "moneyball" type: a phenomenon where we know intuitively that there are important differences in measurable outcomes but we lack statistically significant explanations. The challenge here is to find things to measure that are consistently revealing of the phenomenon you are ultimately interested in measuring (say, team wins). Making it harder is that sometimes you need to build a supercollider in order to measure the phenomenon in question, and for many reasons that may not always be feasible. Data collection is expensive, in many ways, not least socially: new forms of measurement of social activities (including business activities) threaten those who benefit from the status quo. The second data quality challenge is more insidious, the "deviant globalization" type: we have the data, or some data, but it is hopelessly and often intentionally corrupted or compromised, since there are actors who have an active interest in obscuring measurement. This is true of almost all information related to morally questionable activities, for example, from sex to drugs to theft. But it's not just there: any sales manager trying to accurately gauge the size of his reps' pipeline is intimate with the problem of trying to extract accurate data. In sum, the book is fine on the technique side, but naive about what we may call the social epistemologies.

5 out of 5 – Martin Klubeck: I really like this book. Hubbard not only champions the belief that anything can be measured, he gives you the means (the understanding of how) to get it done. I have used his book on numerous occasions when tackling some difficult data collection efforts. Hubbard's taxonomy and mine don't fully jibe, but that's a minor point; I found much more to like than not. I like to highlight and make notes in good books...this book is full of both. I especially like one of his "useful measurement assumptions." I think it sums up the book nicely: "There is a useful measurement that is much simpler than you think." This book helps you find the simple answer to the daunting problem of "how to measure" something. Another section I like a lot is how to "calibrate estimates" - basically it gives really useful, hands-on techniques for getting better at guessing. This is a great tool, not only for measuring, but for any role that requires good estimating. Nothing is perfect, and Hubbard has at least one chapter where I think he failed to simplify life - his chapter on measuring risk was too complicated (unless you are a statistician). Bottom line? Great book - especially for those tasked with collecting the data necessary to measure stuff!

4 out of 5 – Marcelo Bahia: An excellent read. It could be summed up as a "basic statistics for business" book, although it definitely goes beyond that in many aspects. As the title suggests, throughout the whole book the author strongly defends the case that everything can be measured, even though the method may not be obvious at first glance. The book structure basically consists of the explanations of why this is so and various examples and methods that should help the reader to deal with many types of such problems. Along the way, the writing is very clear and reading is more pleasant than you would expect from a "statistics book." This is so because much of the value-added of the book comes not from the quantitative side (which is actually quite basic statistics, something that I see as positive in the context of the book), but from the qualitative analysis and differentiated viewpoint of the author under various circumstances. Actually, he seems knowledgeable and is pretty insightful most of the time, and I expect that the usefulness of each of these insights will depend on your current career and experience. Having worked as a financial analyst in the Brazilian financial markets for the past 8 years, for me the 2 most interesting insights were: 1) His definition of measurement as any number or figure that reduces risk compared to your previous state. I consider this REALLY important in the workplace, as most people consider valid measurements only those that can be precisely quantified, preferring ignorance over possible risk-reducing wide-range estimates in all other situations. 2) Due to the above misconception of the definition of measurement, people neglect measurements and estimates exactly in the situations in which they are most useful. When you don't know anything, any imprecise estimate will reduce risk and add value! Looking back, this non-obvious insight is precisely what we needed when facing some specific analytical and decision-making problems in my firm. Overall, this is one of the most interesting books I've read in the past few months, and it should be a great investment of time & money for any professional who even mildly deals with quantitative problems at work.

4 out of 5 – Steve Walker: There is a lot of good information here, but it is more of a textbook and very dry. I read this book because I have to make decisions every day. Some decisions are very easy because I have the intel and facts that make the decision for me. But other decisions aren't so easy. What are my "real" risks? How do I separate emotion from a decision? What about all the things involved that can't be measured? Ah, that is where this book was insightful and helpful. Hubbard asserts that there isn't anything that can't be measured. Metrics. That is the key to making better decisions. The group I manage has a lot of dynamic and organic tasks to perform each day. I have never been able to quantify a lot of the work we do. That is because I am entrenched in scientific measurements such as average time to handle a customer call. That measurement is meaningless for me. Each call is a different subject. I cannot measure their performance based on how quickly they resolve a call because some problems are simple and others are complex and require enlisting other personnel. But Hubbard teaches many techniques and alternate ways to look at things to get some way of quantifying; perhaps not precisely, but enough to help navigate the myriad pieces of information that can go into a business decision. You have to "want" to read this book. But if you "want" to improve ROI, if you "want" to provide better risk analysis, if you "want" to be more confident about providing management with your recommendations ... then you'll "want" to read this book.

5 out of 5 – Bibhu Ashish: Happened to read the book from the IIBA.org site, where I have been a member since last year. The best takeaway from the book is the structured thought process it brings to dealing with intangibles, which we are always demotivated to measure. To summarize my learning, I would just mention the below, which I have copied from the book:

1. If it's really that important, it's something you can define. If it's something you think exists at all, it's something you've already observed somehow.
2. If it's something important and something uncertain, you have a cost of being wrong and a chance of being wrong.
3. You can quantify your current uncertainty with calibrated estimates.
4. You can compute the value of additional information by knowing the "threshold" of the measurement where it begins to make a difference compared to your existing uncertainty.
5. Once you know what it's worth to measure something, you can put the measurement effort in context and decide on the effort it should take.
6. Knowing just a few methods for random sampling, controlled experiments, or even merely improving on the judgments of experts can lead to a significant reduction in uncertainty.

One caution though: people who are not that fond of mathematics and data may find it a bit too much, but this book is worth reading at least once.
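Point 4 in the summary above (computing the value of additional information) has a simple starting case in the book: for a binary decision, the most a measurement can be worth is the expected opportunity loss it could eliminate, roughly the chance of being wrong times the cost of being wrong. A minimal sketch; the ad-campaign numbers are invented for illustration:

```python
def evpi(chance_of_being_wrong: float, cost_of_being_wrong: float) -> float:
    """Expected Value of Perfect Information for a simple binary decision:
    the expected opportunity loss of deciding under current uncertainty,
    which is the ceiling on what any measurement is worth."""
    return chance_of_being_wrong * cost_of_being_wrong

# Hypothetical: you give an ad campaign a 40% chance of failing, and
# launching a failed campaign loses $500,000. No measurement of the
# campaign's prospects is worth more than:
print(evpi(0.4, 500_000))  # 200000.0
```

Comparing that ceiling against the cost of a proposed measurement is what tells you whether the measurement is justified at all.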

4 out of 5 – Jon: Simply put, the first half of this is just awesome. As I listened to this via audio, the second half is plagued by many formulas that don't translate well or aren't easily understood when listened to. The second half is also very heavy on statistics, which could be a somewhat laborious read for some. The first half is very much recommended, as it goes into what it means to "measure" something and suggests some very fundamental questions regarding measuring. E.g.: What is it you want to have measured? E.g., what does security mean for you? Why is this important for you? How much is this measurement worth to you? What do you know about the problem now? Hubbard gives tools for solving problems, e.g. the Fermi and Bayesian toolboxes, that allow a rough estimation of practically anything. Hubbard also gives some very good pointers as to how you calibrate yourself to counteract psychological biases. If you read it, make sure you dedicate a good amount of time to the first half as, imo, this is where most of the loot is located.

5 out of 5 – Rick Howard: Douglas Hubbard's "How to Measure Anything: Finding the Value of 'Intangibles' in Business" is an excellent candidate for the Cybersecurity Canon Hall of Fame. He describes how it is possible to collect data to support risk decisions for even the hardest kinds of questions. He says that network defenders do not have to have 100% accuracy in our models to help support these risk decisions. We can strive to simply reduce our uncertainty about ranges of possibilities. He writes that this particular view of probability is called Bayesian, and it had been out of favor within the statistical community until just recently, when it became obvious that it worked for a certain set of really hard problems. He describes a few simple math tricks that all network defenders can use to make predictions about risk decisions for our organizations. He even demonstrates how easy it is for network defenders to run our own Monte Carlo simulations using nothing more than a spreadsheet. Because of all of that, "How to Measure Anything: Finding the Value of 'Intangibles' in Business" is indeed a Cybersecurity Canon Hall of Fame candidate, and you should have read it by now.
Introduction. The Cybersecurity Canon project is a "curated list of must-read books for all cybersecurity practitioners – be they from industry, government or academia — where the content is timeless, genuinely represents an aspect of the community that is true and precise, reflects the highest quality and, if not read, will leave a hole in the cybersecurity professional's education that will make the practitioner incomplete." [1] This year, the Canon review committee inducted this book into the Canon Hall of Fame: "How to Measure Anything in Cybersecurity Risk," by Douglas W. Hubbard and Richard Seiersen. [2][3] According to Canon committee member reviewer Steve Winterfeld, "How to Measure Anything in Cybersecurity Risk" is an extension of Hubbard's successful first book, "How to Measure Anything: Finding the Value of 'Intangibles' in Business." It lays out why statistical models beat expertise every time. It is a book anyone who is responsible for measuring risk, developing metrics, or determining return on investment should read. It provides a strong foundation in qualitative analytics with practical application guidance. [4] I personally believe that precision risk assessment is a key and currently missing element in the CISO's bag of tricks. As a community, network defenders in general are not good at transforming technical risk into business risk for the senior leadership team. For my entire career, I have gotten away with listing the 100+ security weaknesses within my purview and giving them red, yellow, or green labels to mean bad, kind-of-bad, or not bad.
If any of my bosses had bothered to ask me why I gave one weakness a red label vs. a green label, I would have said something like: "25 years of experience, Blah, Blah, Blah, Trust Me, Blah, Blah, Blah, can I have the money please?" I believe the network defender's inability to translate technical risk into business risk with any precision is the reason that the CISO is not considered at the same level as other senior C-Suite executives like the CEO, the CFO, the CTO, and the CMO. Most of those leaders have no idea what the CISO is talking about. For years, network defenders have blamed these senior leaders for not being smart enough to understand the significance of the security weaknesses we bring to them. But I assert that it is the other way around. The network defenders have not been smart enough to convey the technical risks to business leaders in a way they might understand. This CISO inability is the reason that the Canon Committee inducted "How to Measure Anything in Cybersecurity Risk" and another precision risk book called "Measuring and Managing Information Risk: A FAIR Approach" into the Canon Hall of Fame. [5][4][3][6][7] These books are the places to start if you want to educate yourself on this new way of thinking about risk to the business. For me, though, this is not an easy subject. I slogged my way through both of these books because basic statistical models completely baffle me. I took stat courses in college and grad school but sneaked through them by the skin of my teeth. All I remember about stats was that it was hard. When I read these two books, I think I only understood about three-quarters of what I was reading, not because they were written badly but because I struggled with the material. I decided to get back to basics and read Hubbard's original book, the one Winterfeld referenced in his review, "How to Measure Anything: Finding the Value of 'Intangibles' in Business," to see if it was also Canon worthy.
The Network Defender's Misunderstanding of Metrics, Risk Reduction, and Probabilities. Throughout the book, Hubbard emphasizes that seemingly dense and complicated risk questions are not as hard to measure as you might think. He reasons from scholars like Edward Lee Thorndike and Paul Meehl from the early twentieth century about Clarification Chains: If it matters at all, it is detectable/observable. If it is detectable, it can be detected as an amount (or range of possible amounts). If it can be detected as a range of possible amounts, it can be measured. [8] As a network defender, whenever I think about capturing metrics that will inform how well my security program is doing, my head begins to hurt. Oh, there are many things that we could collect – like outside IP addresses hitting my infrastructure, security control logs, employee network behavior, time to detect malicious behavior, time to eradicate malicious behavior, how many people must react to new detections, etc. – but it is difficult to see how that collection of potential badness demonstrates that I am reducing material risk to my business with any precision. Most network defenders in the past, including me, have simply thrown our hands up in surrender. We seem to say to ourselves that if we can't know something with 100% accuracy, or if there are countless intangible variables with many veracity problems, then it is impossible to make any kind of accurate prediction about the success or failure of our programs. Hubbard makes the point that we are not looking for 100% accuracy. What we are really looking for is a reduction in uncertainty. He says that the concept of measurement is not the elimination of uncertainty but the abatement of it. If we can collect a metric that helps us reduce that uncertainty, even if it is just by a little bit, then we have improved our situation from not knowing anything to knowing something.
He says that you can learn something from measuring with very small random samples of a very large population. You can measure the size of a mostly unseen population. You can measure even when you have many, sometimes unknown, variables. You can measure the risk of rare events. Finally, Hubbard says that you can measure the value of subjective preferences like art or free time or life in general. According to Hubbard, "We quantify this initial uncertainty and the change in uncertainty from observations by using probabilities." [8] These probabilities refer to our uncertainty state about a specific question. The math trick that we all need to understand is allowing for ranges of possibilities that we are 90% sure the true value lies between. For example, we may be trying to reduce the number of humans that have to respond to a cyberattack. In this fictitious example, last year the Incident Response Team handled 100 incidents with three people each; a total of 300 people. We think that installing a next-generation firewall will reduce that number. We don't know exactly how many, but some. We start here to bracket the question. Do we think that installing the firewall will eliminate the need for all humans to respond? Absolutely not. What about reducing the number to three incidents with three people for a total of nine? Maybe. What about reducing the number to 10 incidents with three people for a total of 30? That might be possible. That is our lower limit. Let's go to the high side. Do you think that installing the firewall will have zero impact in reducing the number? No. What about 90 attacks with three people for a total of 270? Maybe. What about 85 attacks with three people for a total of 255? That seems reasonable. That is our upper limit. By doing this bracketing we can say that we are 90% sure that installing the next-generation firewall will reduce the number of humans that have to respond to cyber incidents from 300 to between 30 and 255.
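A calibrated 90% range like the 30-to-255 figure above is exactly the kind of input the spreadsheet Monte Carlo simulations mentioned in this review consume. A minimal sketch in Python instead of a spreadsheet, under the common simplifying assumption that the 90% interval comes from a normal distribution (which spans about 3.29 standard deviations); the 150-responder threshold is an invented illustration, not from the book:

```python
import random

def simulate_responders(low=30, high=255, threshold=150,
                        trials=50_000, seed=1):
    """Monte Carlo over a calibrated 90% confidence interval, treating
    the interval as a normal distribution (90% CI ~ 3.29 std devs).
    Returns the estimated chance the outcome lands below `threshold`."""
    rng = random.Random(seed)
    mean = (low + high) / 2       # midpoint of the expert's range: 142.5
    sd = (high - low) / 3.29      # implied standard deviation: ~68.4
    hits = sum(rng.gauss(mean, sd) < threshold for _ in range(trials))
    return hits / trials

# Estimated chance the firewall keeps total incident responders under 150:
print(simulate_responders())  # roughly 0.54
```

A normal model can produce impossible negative counts for a quantity like this; the book also discusses alternatives such as lognormal distributions for values that cannot go below zero.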
Astute network defenders will point out that this range is pretty wide. How is that helpful? Hubbard says that, first, you now know this where before you knew nothing. Second, this is only the start: you can now collect other metrics that might help you narrow the gap.

## The Evolution of Scientific Measurement

This particular view of probabilities, the idea that there is a range of outcomes that you can be 90% sure about, is the Bayesian interpretation of probabilities. Interestingly, this view of statistics has been out of favor almost since its inception, when Thomas Bayes penned the original formula back in the 1740s. The naysayers were the Frequentists. Their theory said that the probability of an event can only be determined by how many times it has happened in the past; to them, modern science requires both objectivity and precise answers. According to Hubbard, "The term 'statistics' was introduced by the philosopher, economist, and legal expert Gottfried Achenwall in 1749. He derived the word from the Latin statisticum, meaning 'pertaining to the state.' Statistics was literally the quantitative study of the state." [8] In the Frequentist view, the Bayesian philosophy requires a measure of "belief and approximations. It is subjectivity run amok, ignorance coined into science." [7] But the real world has problems where the data is scant. Leaders worry about potential events that have never happened before. Bayesians were able to provide real answers to these kinds of problems, like defeating the Enigma encryption machine in World War II and finding a lost, sunken nuclear submarine, an incident that was the basis for the movie "The Hunt for Red October." But it wasn't until the early 1990s that the theory became commonly accepted. [7] Hubbard walks the reader through this historical research about the current state of scientific measurement.
He explains how Paul Meehl, starting in the 1950s, demonstrated time and again that statistical models outperformed human experts. He describes the birth of information theory with Claude Shannon in the late 1940s, and credits Stanley Smith Stevens, around the same time, with crystallizing the different scales of measurement: nominal, ordinal, interval, and ratio. He reports how Amos Tversky and Daniel Kahneman, through their research in the 1960s and 1970s, demonstrated that we can improve our measurements around subjective probabilities. In the end, Hubbard defines measurement like this:

Measurement: A quantitatively expressed reduction of uncertainty based on one or more observations. [8]

## Simple Math Tricks

Hubbard explains two math tricks that, on first reading, seem too good to be true, but when used by Bayesian proponents, greatly simplify measurement-taking for difficult problems:

The Power of Small Samples (The Rule of Five): There is a 93.75% chance that the median of a population is between the smallest and largest values in any random sample of five from that population. [8]

The Single Sample Majority Rule (i.e., The Urn of Mystery Rule): Given maximum uncertainty about a population proportion—such that you believe the proportion could be anything between 0% and 100% with all values being equally likely—there is a 75% chance that a single randomly selected sample is from the majority of the population. [8]

I admit that the math behind these rules escapes me. But I don't have to understand the math to use the tools. It reminds me of a moving scene from one of my favorite movies, "Lincoln." President Lincoln, played brilliantly by Daniel Day-Lewis, explains his reasoning for keeping the Southern agents, who want to discuss peace before the 13th Amendment is passed, away from Washington: "Euclid's first common notion is this. Things that are equal to the same thing are equal to each other. That's a rule of mathematical reasoning. It's true because it works.
Has done and always will do." [9]

The bottom line is that "statistically significant" does not simply mean a large number of samples. Hubbard says that statistical significance has a precise mathematical meaning that most laypeople do not understand and that many scientists get wrong. For the purposes of risk reduction, stick to the idea of a 90% confidence interval regarding potential outcomes. The Power of Small Samples and the Single Sample Majority Rule are rules of mathematical reasoning that all network defenders should keep handy in their utility belts as they measure risk in their organizations.

## Simple Measurement Best Practices and Definitions

As I said before, most network defenders think that measuring risk in terms of cybersecurity is too hard. Hubbard explains four rules of thumb that every practitioner should consider before they give up:

1. It's been measured before.
2. You have far more data than you think.
3. You need far less data than you think.
4. Useful, new observations are more accessible than you think. [8]

He then defines "uncertainty" and "risk" through a possibility and probability lens:

- Uncertainty: The lack of complete certainty, that is, the existence of more than one possibility.
- Measurement of Uncertainty: A set of probabilities assigned to a set of possibilities.
- Risk: A state of uncertainty where some of the possibilities involve a loss, catastrophe, or other undesirable outcome.
- Measurement of Risk: A set of possibilities, each with quantified probabilities and quantified losses. [8]

In the network defender world, we tend to define risk in terms of threats, vulnerabilities, and consequences. [10] Hubbard's relatively new take gives us a much more precise way to think about these terms.

## Monte Carlo Simulations

According to Hubbard, the invention of the computer made it possible for scientists to run thousands of experimental trials based on probabilities for inputs. These trials are called Monte Carlo simulations.
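In fact, the two small-sample rules quoted earlier are easy to sanity-check with exactly this kind of simulation. The sketch below is my own, not Hubbard's; the population shape, size, and trial count are arbitrary choices, and the results hover near the quoted 93.75% and 75% figures:

```python
import random

TRIALS = 20_000

# Rule of Five: the chance that a population's median falls between
# the smallest and largest values in a random sample of five.
population = [random.gauss(100, 15) for _ in range(10_000)]
median = sorted(population)[len(population) // 2]
hits = 0
for _ in range(TRIALS):
    sample = random.sample(population, 5)
    if min(sample) <= median <= max(sample):
        hits += 1
print(f"Rule of Five:    {hits / TRIALS:.3f}")   # close to 0.9375

# Urn of Mystery: the proportion of green balls in each urn is drawn
# uniformly from 0-100%; a single random ball should match the urn's
# majority color about 75% of the time.
matches = 0
for _ in range(TRIALS):
    p_green = random.random()             # unknown population proportion
    ball_is_green = random.random() < p_green
    majority_is_green = p_green > 0.5
    if ball_is_green == majority_is_green:
        matches += 1
print(f"Urn of Mystery:  {matches / TRIALS:.3f}")  # close to 0.75
```

The Rule of Five result follows because a sample of five misses the median only when all five values land on the same side of it, which happens with probability 2 × (1/2)^5 = 6.25%.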
In the 1930s, Enrico Fermi used the method to calculate neutron diffusion by hand, with human mathematicians calculating the probabilities. In the 1940s, Stanislaw Ulam, John von Neumann, and Nicholas Metropolis realized that the computer could automate the Monte Carlo method and help them design the atomic and hydrogen bombs. Today, everyone with access to a spreadsheet can run their own Monte Carlo simulations.

Take my previous example of trying to reduce the number of humans who have to respond to a cyberattack. We said that during the previous year, 300 people responded to cyberattacks, and that we were 90% certain the installation of a next-generation firewall would reduce the number of humans who have to respond to between 30 and 255. We can refine that estimate by simulating hundreds or even thousands of scenarios inside a spreadsheet. I did this myself by setting up 100 scenarios in which I randomly picked a number between 0 and 300. I calculated the mean to be 131 and the standard deviation to be 64. Remember that the standard deviation is nothing more than a measure of spread around the mean. [11][12][13] The 68–95–99.7 rule says that 68% of the recorded values will fall within one standard deviation of the mean, 95% within two standard deviations, and 99.7% within three. [8] With our original estimate, we said there was a 90% chance that the number would be between 30 and 255. After running the Monte Carlo simulation, we can say that there is a 68% chance that the number is between 67 (131 − 64) and 195 (131 + 64). How about that? Even a statistical luddite can run his own Monte Carlo simulation.
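The spreadsheet exercise described above can be reproduced in a few lines of Python. This is a sketch of my own: like the original, it naively draws each scenario uniformly between 0 and 300, the seed is an arbitrary choice to make the run repeatable, and the exact mean and standard deviation will differ from run to run (and from the figures reported above):

```python
import random
import statistics

# 100 scenarios: in each, the number of human responses after
# installing the firewall is drawn at random between 0 and 300.
random.seed(42)  # arbitrary seed, only so the run is repeatable
scenarios = [random.uniform(0, 300) for _ in range(100)]

mean = statistics.mean(scenarios)
stdev = statistics.stdev(scenarios)  # spread of the scenarios around the mean

# Caveat: the 68-95-99.7 rule strictly describes normal distributions,
# so reading a uniform sample this way is only a rough approximation.
print(f"mean  = {mean:.0f}")
print(f"stdev = {stdev:.0f}")
print(f"~68% of scenarios fall between {mean - stdev:.0f} and {mean + stdev:.0f}")
```

A spreadsheet gets you the same thing with `RAND()*300` filled down 100 cells, plus `AVERAGE()` and `STDEV()`.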
## Conclusion

After reading Hubbard's second book in the series, "How to Measure Anything in Cybersecurity Risk," I decided to go back to the original to see if I could understand with a bit more clarity exactly how the statistical models worked, and to determine if the original was Canon-worthy too. I learned that there is probably a way to collect data to support risk decisions for even the hardest kinds of questions. I learned that network defenders do not need 100% accuracy in our models to support these risk decisions; we can strive simply to reduce our uncertainty about ranges of possibilities. I learned that this particular view of probability is called Bayesian, and that it was out of favor within the statistical community until relatively recently, when it became obvious that it worked for a certain set of really hard problems. I learned that there are a few simple math tricks that we can all use to make predictions about these really hard problems that will help us make risk decisions for our organizations. And I even learned how to build my own Monte Carlo simulations to support those efforts. Because of all of that, "How to Measure Anything: Finding the Value of 'Intangibles' in Business" is indeed Canon-worthy, and you should have read it by now.

## Sources

[1] "Cybersecurity Canon: Essential Reading for the Security Professional," by Palo Alto Networks, Last Visited 5 July 2017, https://www.paloaltonetworks.com/thre...

[2] "Cybersecurity Canon: 2017 Award Winners," by Palo Alto Networks, Last Visited 5 July 2017, https://cybercanon.paloaltonetworks.c...

[3] "'How To Measure Anything in Cybersecurity Risk' - Cybersecurity Canon 2017," Video Interview by Palo Alto Networks, Interviewer: Canon Committee Member Bob Clark, Interviewees: Douglas W. Hubbard and Richard Seiersen, 7 June 2017, Last Visited 5 July 2017, https://www.youtube.com/watch?v=2o_mA...
[4] "The Cybersecurity Canon: How to Measure Anything in Cybersecurity Risk," Book review by Canon Committee Member Steve Winterfeld, 2 December 2016, Last Visited 5 July 2017, https://cybercanon.paloaltonetworks.com/

[5] "How to Measure Anything in Cybersecurity Risk," by Douglas W. Hubbard and Richard Seiersen, Published by Wiley, 25 April 2016, Last Visited 5 July 2017, https://www.goodreads.com/book/show/2...

[6] "The Cybersecurity Canon: Measuring and Managing Information Risk: A FAIR Approach," Book review by Canon Committee Member Ben Rothke, 10 September 2015, Last Visited 5 July 2017, https://researchcenter.paloaltonetwor...

[7] "Sharon Bertsch McGrayne: 'The Theory That Would Not Die' | Talks at Google," by Sharon Bertsch McGrayne, Google, 23 August 2011, Last Visited 7 July 2017, https://www.youtube.com/watch?v=8oD6e...

[8] "How to Measure Anything: Finding the Value of 'Intangibles' in Business," by Douglas W. Hubbard, Published by John Wiley & Sons, 2007, Last Visited 10 July 2017, https://www.goodreads.com/book/show/4...

[9] "Lincoln talks about Euclid," by Alexandre Borovik, The De Morgan Forum, 20 December 2012, Last Visited 10 July 2017, http://education.lms.ac.uk/2012/12/li...

[10] "BitSight Security Ratings Blog," by Melissa Stevens, 10 January 2017, Last Visited 10 July 2017, https://www.bitsighttech.com/blog/cyb...

[11] "Standard Deviation - Explained and Visualized," by Jeremy Jones, YouTube, 5 April 2015, Last Visited 9 July 2017, https://www.youtube.c

4 out of 5 – Emil O. W. Kirkegaard: Kind of an introduction to applied decision theory, with some good stuff about how to quantify things.

5 out of 5 – Allison: Lots of great commentary on why using data is important... his processes for measurement are less... interesting? A good read for data people. :)

4 out of 5 – Stephen Rynkiewicz: Classical Greeks not only figured out that the planet is round, but had it measured. Eratosthenes calculated its circumference from a lunch-hour measurement at his library in Alexandria during the summer solstice, knowing only his distance from the Tropic of Cancer. Eratosthenes is a hero of Chicago statistician Doug Hubbard, who trains managers in "calibrated estimates," basically closely observed ballpark figures. Here he describes approaches to making more accurate guesses, including when it's worth spending money to take out some of the guesswork. If you didn't get past introductory statistics in college, this is a useful guide to Monte Carlo simulations, Bayesian inversion, crowdsourcing and other analytical concepts. Not only does Hubbard open up the black box of predictive modeling, but he also points to ways we can think about thinking: It's risky to rely on just gut instinct, but maybe we can trust our gut once we measure just how far to trust it.

4 out of 5 – June Ding: The title made me curious. The author did make the case that anything can be measured, including many things that we consider abstract or intangible. The stories it gave at the start of the book are fascinating and opened my mind about what we think measurement really is. There is no perfect measurement. There is no absolute truth. Measurement is a quantitatively expressed reduction of uncertainty based on one or more observations. I also found the methods to define the problem, and the notion that a measurement has to support a decision, helpful.

5 out of 5 – Jeff Yoak: This was a fantastic read. It helps with general numeracy as well as providing an overview on how to think about measurement and statistics practically. This is an area where I have some experience and I still learned a lot. This book, especially the first half, should be accessible to everyone. The second half is a bit more technical and I wished I had been reading on paper instead of in audio. I may do that eventually. The pacing is a little hard in audio and I could have benefited from notes, but still... a great read and actively beneficial.

4 out of 5 – Peter Mcloughlin: Fairly good business statistics book on measuring factors and how to apply measurement, with some good risk analysis. Definitely overhyped as revolutionary (I think this happens with business books a lot), but it is accessible and gives some good advice on how to measure things statistically and use statistical methods for practical applications. It isn't the second coming, though.

4 out of 5 – Kc: I purchased this book because I am in the middle of a project where I have to measure an "intangible." I liked the author's ideas on breaking down a measurement and figuring out the uncertainty factor on each variable. The information he provided helped me to find a solution for my project.

5 out of 5 – Pauli Kongas: Perhaps not the best read in audio because of some math and a lot of pictures, etc.

5 out of 5 – Albert: The book is a very interesting one that presents the premise that anything that needs to be assessed can be measured, in one form or another. Of course, there is a need to define/redefine what a measurement is. In this, the book is a fascinating look at the paradigm shift that needs to occur to perceive the world in a new way that allows it to be measurable. Many basic assumptions are challenged and revised in the process, which was actually neat. It brings a new perspective, which opens more possibilities and opportunities. Towards the end of the book, it starts getting very math- and statistics-heavy, and necessarily so, to present the complete content of his methodology. Even if you end up getting lost in the math section (and that's frustratingly easy to do when someone is reading a math equation to you), the principles set forward help to accept the assertion that anything CAN be measured. Wow. I finished this audiobook, but the voice acting is SO bad that I spent the first third of the book getting used to listening to him, detracting from the concentration I had to pay towards the content. The voice is pretty identical to the announcer's from when we had to call on the phone to get movie showing times, just like the one that Kramer mocks in the Seinfeld episode. Another annoyance that I could not get over: the voice actor read, literally, over and over again, "i e" and "e g" instead of converting them to "that is" and "for example"...
It made the book feel so stunted and the reading felt...dumb. There are books that are amenable to audiobook format, then there are books that just should not be made into audiobooks. This is such a one. Not only does it not work to have a mathematical equation read out to you, but there is additional information containing charts, graphs, and even test exams that are referenced and really should be consulted online while going through the book; this kind of defeats the purpose of an audiobook, it seems to me. But this is a problem with the book format, not the content. Because of the content, I am willing to give this 4 stars. But this should never have been made into an audiobook. The content of the book doesn't lend itself to it and the voice actor chosen should definitely find a different avenue of work. Again: Do NOT make the mistake I made and get an audio version of this book! Read it on paper instead!

4 out of 5 – Sundarraj Kaushik: A nice book. A must-read for sceptics like me who think there are many immeasurables in business. The key message the author gives is that, instead of taking or avoiding a path because one cannot find the right measurement, an attempt should be made to find out what can be measured to reduce the risk of taking or not taking the path. This will help make a more sensible decision than just saying there are immeasurables. In short, some information is better than no information. It is recommended that one of these tools be leveraged to carry out the measurement with whatever data is available:

1. Monte Carlo
2. Markov Chains
3. Bayesian Probability and Bayesian Inversion for Ranges
4. Rasch Model
5. Lens Model
6. Simple sampling (the key is that the samples must be truly random)
7. Brunswik's method
8. Dawes' Z Scale
9. Objective Model, if historical data is available

The myth that is dispelled in the book is that when you have a lot of uncertainty, you need a lot of data to reduce that uncertainty significantly. Even a very small amount of relevant data will go a long way. Some of the issues that must be avoided:

1. Bandwagon effect
2. Halo effect
3. Choice blindness
4. Over-measuring (the law of diminishing marginal returns starts applying, and further measurement only adds cost without reducing the risk significantly)

At a high level, the steps outlined are:

1. Define the decision and the variables that matter to it
2. Model the current state of uncertainty about those variables
3. Compute the value of additional measurements
4. Measure the high-value uncertainties in a way that is economically justified
5. Make the risk/return decision after the economically justified amount of uncertainty is reduced

A must-read for all decision makers, which is all of us.

4 out of 5 – Lukasz Nalepa: For a long time now, I've heard recommendations from various people that this book is really worth reading. It took me a while to grab it though, as it did not seem a very interesting topic for me, but finally I decided to give it a try - I needed to think about some measures, and I hoped to find some inspiration and guidelines. Cutting to the chase - I feel deeply disappointed. I feel like this book is more about decision making and statistics (or probabilities) rather than actual measurements. There are some tough cases described there, of course, like the measure of the value of information, the measure of risk, reasoning to measure the value of human life - but not measurement itself. For me it falls short compared to the ambitious title. I was inclined to give it two stars tops for almost the whole duration of reading, but the sum-up of the book reminded me that I actually had some takeaways from it. Most important for me personally was the reminder that measurement is a way of decreasing uncertainty - there are no absolute measures, and by that fact alone, everything can be measured at least a bit. The second nice takeaway was the idea to use Monte Carlo simulations to deal with ranges of unknowns (due to uncertainty). The third, and final for me, would be to employ some "statistical" tricks while having a limited amount of data. So overall: 2.5 (rounded up, I guess) stars from me. Maybe it would be better with more examples, a more interesting narrative, and far less statistics. Maybe the title should be: how to use statistical methods on non-statistical problems. Then it would be descriptive and I would definitely skip it without all that whining ;)

4 out of 5 – J Keefer: This is a good layman's introduction to reducing uncertainty, especially in business problems. Hubbard makes a strong case for prioritizing measurement of even seemingly nebulous intangibles. The book is centered around a useful framework for tailoring uncertainty reduction to specific problems (p. 41-42; p. 266-270). As a result, I can see this book being a pillar for an enlightened manager to reference frequently. I found the first half of the book excellent. It was dense with intuitive takeaways (a few of which are included at the end of this review). I especially enjoyed the "calibration exercises" of chapter 5, which helped me better understand uncertainty and confidence intervals. In the second half, Hubbard discussed from a high level some of the statistical methods he uses, as well as some applications. As mass-market books on technical topics often do, it tried to find a balance between providing a high-level survey of the subject matter and giving some nuts-and-bolts details, but I don't think it succeeded. Unfortunately it did not employ the same clear exposition as the first half of the book (for example, I thought Chapter 10's discussion of Bayesian statistics was muddled and confusing compared to other treatments I've seen). Some choice quotes from the hardcover 2nd edition: P. 23: a measurement is "a quantitatively expressed reduction of uncertainty based on one or more observations." P. 27: If a trait matters at all, it is detectable, and therefore measurable. P. 28: "All measurements of interest to a manager must support a specific decision." P. 41: "Ignorance is never the moral high ground." P. 76: "Once calibrated, you are a changed person. You have a keen sense of your level of uncertainty."

4 out of 5 – Stuart Bobb: This book is not your typical breezy business book that you skim/read on an airplane flight. It's closer to a textbook in terms of information density and complexity. Bayesian statistics, calibrated experts, Monte Carlo simulations and a pile of other data-heavy methodologies are brought to bear on some very challenging measurement questions. The basic premise of this book is that far more things can be measured - to some degree of confidence - than you might ever imagine. If you are careful about what you pick, that is, if the value of better information is high enough in this instance, you are likely to discover that spending even minor resources to measure something that others call "intangible" may drastically reduce your uncertainty about an outcome. You don't need a statistics background to understand and appreciate this book - though having one will accelerate the time it takes for you to fully comprehend what the author is doing. I'm anxious to apply some of this to my next measurement problem; I think there are some very powerful tools in this book. Next time somebody tells you that "There's no way to measure that" - toss this book their way and ask "Are you sure? This book says you're probably wrong". :-) The usual cautions apply. Not every uncertainty is worth the cost to measure and reduce.
Being 90% confident is still a far cry from certainty - but it is likely to be far better than the stab in the dark being made by the uninformed on all kinds of decisions right now.

4 out of 5 – Ioana: I liked this book well enough, but I think that it doesn't give an accurate description of what you should expect when you start it. The author makes a point of saying that it has been adopted as reading material for university classes but that it has not been written as a textbook. I beg to differ, especially given the fact that an accompanying workbook does exist. Overall, the information was great. I have learned to approach measurements and goals with more of a plan in mind and got some excellent pointers on how to distill the problems. Minus the mathematics involved, it was all very intuitive and easy to grasp. With the mathematics, I was disappointed to find that while I was reading the book I wasn't even able to follow them. There is plenty of material online on them, and I was fine doing the calculations on my own with pen and paper, but I would expect to at least understand where the author is going with everything. Finishing this book, I feel like I know of more methods to make measurements and more algorithms that should allow me to interpolate and extrapolate information from the data I have. I would personally use it more as a reference book than anything else, because there is no way to fully memorise everything described. It's more about giving you a path forward.

5 out of 5 – Ryan: A good book about the value of measuring things in business. My top takeaways were:

- To be valuable, a measurement doesn't have to remove uncertainty, it just needs to reduce it.
- If it matters at all, it is detectable/observable. (Even for "touchy-feely" things like employee engagement.)
- You often don't need to know something with absolute certainty. The amount where it starts to matter is the "threshold". If you can measure enough to meet the threshold, that is enough.
- If we can't identify a decision that is going to be made based on the measure, then the measure has no value.
- People can be trained to make "calibrated estimates", such as "there is a 90% chance the value will fall within this range". That is a way to get value from experts. This goes by the term "90% confidence interval". (There is a fun trivia quiz in the book to test this on yourself.)
- Be iterative. The first measures will reduce uncertainty the most. This might be all you need.

There are also some handy measuring instruments (Mathless 90% CI, Rule of 5).

5 out of 5 – Casey: A good book. It covers the author's measurement and analysis system, Applied Information Economics, plus offers a lot of encouraging advice on how to find data measurements in any field or endeavor. Though parts of the book can get fairly deep into mathematical examples, overall it was very readable and, if nothing else, offered a plethora of ways to find and use measurements. Personally I thought the author was a bit too dismissive of some decision techniques; his emphasis is on probabilistic models and he eschews deterministic methods, but he did at least provide a broad scope of processes and backed up his preferences with ample reasoning. Recommended if you want to take a plunge into being more data-driven in your day-to-day applications.

5 out of 5 – Serejka Keller: It's hard to read at times, but the greatest value of this book can be summed up in six questions:
1. What are you trying to measure?
2. Why are you trying to measure it?
3. What is the value of the measurement? What is the threshold for error, and is additional effort economically justified? (This defines whether I really NEED to measure something.)
4. What do we already know about the subject? How much uncertainty is there?
5. What observation methods can confirm or reject the hypothesis?
6. How do we measure errors, and how can we avoid them? (This may vary with the subjective perception of a group of people, but most of the time you just need to understand how to validate your hypothesis.)

4 out of 5 – Toni Tassani: If we understand "measuring" as "reducing uncertainty", we lower the defenses that protect us from metrics. Hubbard keeps the text right between informative and highly mathematical, something that can attract or cause discomfort; it worked for me. The author presents his approach to measurement and defends a statistical approach to decision making. Bayes, Paul Meehl, Kahneman, Tversky, and Fermi are some of the references used in the book. You can learn about estimates, calibration, risk, sampling, errors, forecasting, and bias. It's a thick book, sometimes dense, but it contains knowledge you can use immediately.

4 out of 5 – Sergey Shishkin: A very thorough, no-fluff treatment of a very important subject. The main takeaway for me was the focus on measuring and reducing uncertainty, as opposed to eliminating it. After reading this book, one could indeed enthusiastically start measuring everything, which is definitely not the author's intention. When the stakes are high, however, one can quantify how much uncertainty reduction is economically justified, and then know how to conduct the necessary measurements. I wish the book contained even more little anecdotes of practical measurements of seemingly intangible things. Nonetheless, the book is well written and wasn't too dry.
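The "how much uncertainty reduction is economically justified" idea this reviewer highlights is the value of information: the expected cost of the wrong choices you would make without a measurement, which caps what the measurement is worth. A hedged Monte Carlo sketch of the Expected Value of Perfect Information; the decision, payoffs, and return distribution below are invented for illustration, not taken from the book:

```python
import random

def evpi(payoff_fns, draw_state, trials=100_000):
    """Expected Value of Perfect Information for a one-shot
    decision among actions whose payoffs depend on an uncertain
    state: E[best payoff knowing the state] minus the best
    expected payoff of an action chosen in advance."""
    states = [draw_state() for _ in range(trials)]
    # Average payoff if you could pick the best action per state:
    with_info = sum(max(f(s) for f in payoff_fns) for s in states) / trials
    # Best single action committed to without any measurement:
    without_info = max(sum(f(s) for s in states) / trials for f in payoff_fns)
    return with_info - without_info

# Hypothetical decision: invest 1.0 ($M) for an uncertain
# return r, or do nothing (payoff 0 regardless of r).
invest = lambda r: r - 1.0
skip = lambda r: 0.0
draw_return = lambda: random.lognormvariate(0, 0.6)  # assumed uncertainty
print(evpi([invest, skip], draw_return))
```

If the printed value is small relative to the cost of measuring, further measurement isn't justified; real measurements only ever capture part of this ceiling.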

5 out of 5 – Ned: If you have ever wondered how to apply the statistics taught in school to a business, this book lays the groundwork for it. It's an interesting read that starts with several examples, such as how the circumference of the Earth was first measured, then looks at what we can learn from those examples before diving further into the statistics. The book has examples from the author's experience and tries to make the math easy to digest. Overall, I liked the deconstruction of measurement and the small but valuable lessons throughout the book.