Recently, I read the book “How to Measure Anything” by Douglas W. Hubbard. It is a really insightful book. Following are some excerpts from the book.
In ancient Greece, a man named Eratosthenes estimated the circumference of the Earth by looking at the different lengths of shadows in different cities at noon and by applying some simple geometry.
A Nobel Prize-winning physicist (Enrico Fermi) taught his students how to estimate by estimating the number of piano tuners in Chicago.
Three reasons why people think something cannot be measured:
- Concept of measurement (not understanding what measurement actually means)
- Object of measurement (the thing being measured is not well defined; sloppy and ambiguous language)
- Method of measurement (many procedures of empirical observation are not well known. If people were more familiar with measurement methods, it would become apparent that many things defined as immeasurable are not only measurable but may already have been measured.)
Definition of Measurement
Measurement => A quantitatively expressed reduction of uncertainty based on one or more observations.
Object of measurement.
A problem well stated is a problem half solved. - Charles Kettering (inventor of the electrical ignition for automobiles, holder of 300 patents)
Important questions to ask
- What do you mean by <….>?
- Why do you care?
- If it matters at all, it is detectable/observable
- If it is detectable, it can be detected as an amount (or a range of possible amounts)
- If it can be detected as a range of possible amounts, it can be measured.
Proven measurement methods:
- Measuring with very small random samples
- Measuring the population of things you will never see all of
- Measuring when many other, even unknown, variables are involved
- Measuring the risk of rare events
- Measuring subjective preferences and values
Rule of Five
There is a 93.75% chance that the median of a population is between the smallest and largest values in any random sample of five from that population.
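The figure follows from simple probability: the sample misses the median only if all five values fall above it or all five fall below it, each with probability (1/2)^5, so the miss chance is 2/32 = 6.25%. The sketch below checks this by simulation against an arbitrary, deliberately skewed population (the population itself is made up for illustration):

```python
import random
import statistics

random.seed(42)

# An arbitrary, skewed population of 10,000 values.
population = [random.lognormvariate(0, 1) for _ in range(10_000)]
true_median = statistics.median(population)

# Count how often the true median falls between the smallest and
# largest values of a random sample of five.
trials = 100_000
hits = 0
for _ in range(trials):
    sample = random.sample(population, 5)
    if min(sample) <= true_median <= max(sample):
        hits += 1

rate = hits / trials
print(f"Observed hit rate: {rate:.4f}")  # close to 1 - 2 * 0.5**5 = 0.9375
```

The shape of the population does not matter, which is what makes the Rule of Five so broadly useful.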
Four useful measurement assumptions
- Your problem is not as unique as you think
- You have more data than you think
- You need less data than you think
- An adequate amount of new data is more accessible than you think
Putting a measurement problem in context
- What is the decision this measurement is supposed to support?
- What is the definition of the thing being measured in terms of observable consequences?
- How exactly does this thing matter to the decision being made?
- How much do you know about it now (what is your current level of uncertainty)?
- What is the value of additional information?
Definition of uncertainty and risk and their measurements
- Uncertainty: The lack of complete certainty, that is, the existence of more than one possibility. The “true” outcome, state, or result is not known.
- Measurement of uncertainty: A set of probabilities assigned to a set of possibilities. For example: there is a 60% chance that this market will more than double in five years, a 30% chance that it will grow at a slower rate, and a 10% chance the market will shrink in the same period.
- Risk: A state of uncertainty where some of the possibilities involve a loss, catastrophe, or other undesirable outcome.
- Measurement of risk: A set of possibilities, each with quantified probabilities and quantified losses. “We believe there is a 40% chance that the proposed oil well will be dry, with a loss of $12 million in exploratory drilling costs.”
How much do you know now?
The most important questions of life are indeed, for the most part, really only problems of probability. - Pierre-Simon Laplace
Two extremes of subjective confidence
Overconfidence: When an individual routinely overstates knowledge and is correct less often than he or she expects.
Underconfidence: When an individual routinely understates knowledge and is correct more often than he or she expects.
Estimation of risk is a skill that can be improved by learning
Measuring risk through modeling
It is better to be approximately right than precisely wrong. - Warren Buffett
Risk analysis through the Monte Carlo method.
If an organization uses quantitative risk analysis at all, it is usually for routine operational decisions. The largest, most risky decisions get the least amount of proper risk analysis.
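As a sketch of what Monte Carlo risk analysis looks like in practice (all figures below are hypothetical, not from the book): model each uncertain input as a distribution, draw from those distributions many times, and count how often the combined outcome lands in the loss region.

```python
import random

random.seed(0)

def simulate_loss_probability(trials: int = 100_000) -> float:
    """Monte Carlo sketch: draw uncertain inputs repeatedly and report
    how often the combined result is a loss. All figures hypothetical."""
    losses = 0
    for _ in range(trials):
        # Hypothetical uncertain inputs, modeled as normal distributions
        # (mean, standard deviation).
        revenue = random.normalvariate(1_000_000, 200_000)
        cost = random.normalvariate(800_000, 150_000)
        if revenue - cost < 0:
            losses += 1
    return losses / trials

loss_prob = simulate_loss_probability()
print(f"Chance the project loses money: {loss_prob:.1%}")  # roughly 21% here
```

Note that the output is a probability of loss, not a single-point forecast, which is exactly the “set of possibilities with quantified probabilities and losses” that the risk definition above calls for.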
The McNamara Fallacy
The first step is to measure whatever can be easily measured. This is OK as far as it goes. The second step is to disregard that which can't easily be measured, or to give it an arbitrary quantitative value. This is artificial and misleading. The third step is to presume that what cannot be measured easily is not important. This is blindness. The fourth step is to say that what cannot be easily measured really does not exist. This is suicide.
Value of Information
Expected value of information (EVI) = reduction in expected opportunity loss (EOL), or EVI = EOL (before information) - EOL (after information)
where EOL = chance of being wrong * cost of being wrong
Expected value of perfect information (EVPI) = EOL (before information)
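The formulas are simple enough to work through with made-up numbers (the 40%/10% chances and $500,000 cost below are purely illustrative):

```python
def expected_opportunity_loss(chance_of_being_wrong, cost_of_being_wrong):
    """EOL = chance of being wrong * cost of being wrong."""
    return chance_of_being_wrong * cost_of_being_wrong

# Hypothetical decision: 40% chance of being wrong, $500,000 cost if wrong.
eol_before = expected_opportunity_loss(0.40, 500_000)

# Perfect information drives the chance of being wrong to zero,
# so EVPI equals the EOL before the information.
evpi = eol_before

# A (hypothetical) imperfect measurement cuts the chance of being wrong
# from 40% to 10%; its value is the reduction in EOL.
eol_after = expected_opportunity_loss(0.10, 500_000)
evi = eol_before - eol_after

print(f"EVPI = ${evpi:,.0f}")                     # $200,000
print(f"EVI of the measurement = ${evi:,.0f}")    # $150,000
```

The EVPI is an upper bound: no measurement, however good, is worth more than the expected opportunity loss it could eliminate.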
A common measurement myth
Myth: When you have a lot of uncertainty, you need a lot of data to tell you something useful.
Fact: If you have a lot of uncertainty now, you do not need much data to reduce uncertainty significantly. When you already have a lot of certainty, then you need a lot of data to reduce uncertainty significantly.
In a business case, the economic value of measuring a variable is usually inversely proportional to how much measurement attention it usually gets.
If your only tool is a hammer, then every problem looks like a nail. - Abraham Maslow
Example: It was discovered that function point analysis of the cost of an IT project was no more accurate than the initial estimates project managers made; therefore, function point analysis did not reduce any of the uncertainty of the cost estimate and hence added no value.
Decompose it. Many measurements start by decomposing an uncertain variable into constituent parts to identify directly observable things that are easier to measure.
Decomposition effect. The phenomenon that the decomposition itself often turns out to provide such a sufficient reduction in uncertainty that further observations are not required.
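Fermi's piano-tuner question mentioned at the top of these notes is the classic example of decomposition. The sketch below uses rough, hypothetical guesses for each constituent part; the point is that every piece is far easier to estimate than the original question:

```python
# Fermi-style decomposition of "How many piano tuners are in Chicago?"
# Every input is a rough, hypothetical guess.
population = 3_000_000            # people in Chicago (rough)
people_per_household = 2.5
households_with_piano = 1 / 20    # share of households owning a piano
tunings_per_piano_per_year = 1
tunings_per_tuner_per_day = 4
working_days_per_year = 250

pianos = population / people_per_household * households_with_piano
tunings_needed = pianos * tunings_per_piano_per_year
tunings_per_tuner_per_year = tunings_per_tuner_per_day * working_days_per_year
tuners = tunings_needed / tunings_per_tuner_per_year

print(f"Estimated piano tuners in Chicago: {tuners:.0f}")
```

This also illustrates the decomposition effect: even before observing anything, writing down the constituent estimates usually narrows the plausible range dramatically.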
Assume you are not the first to measure it. (Do an internet search.)
Internet search tips
1. If I am really new to a topic, I do not start with Google; I start with Wikipedia.
2. Use search terms that tend to be associated with research and quantitative data. (E.g., if you need to measure software quality or customer perception, do not just search those terms alone. Instead, include terms like “table”, “survey”, “control group”, “correlation”, “standard deviation”.)
3. Think of internet research in two levels: search engines and topic-specific repositories. For example, if you are looking for statistical data, use sites like the Census Bureau, the CIA World Factbook, etc.
4. Try multiple search engines.
5. If you find marginally related research that still does not directly address your topic, be sure to read the bibliography.
Basic methods of observation: if one does not work, try the next.
- Follow its trail like a clever detective. Do forensic analysis of data you already have.
- Use direct observation. Start looking, counting, and/or sampling if possible.
- If it has not left any trail so far, add a tracer to it so it starts leaving a trail.
- If you cannot follow a trail at all, create the conditions to observe it. (Experiment.)
Willingness to pay is a measure of how much a thing is valued.
Homo Absurdus: The weird reasons behind our decisions
Anchoring: Thinking of a number affects the value of a subsequent estimate, even on a completely unrelated issue. A scientist asked his subjects to write down the last four digits of their Social Security number and then estimate the number of physicians in New York City; a 0.4 correlation was found between the physician estimates and the last digits of the Social Security numbers.
Halo/horns effect: If people see one attribute that predisposes them to favor or disfavor one alternative, they are more likely to interpret additional, subsequent information in a way that supports their conclusion, regardless of what the additional information is.
For example, if you initially have a positive impression of a person, you are likely to interpret additional information about that person in a positive light (the halo effect). Likewise, an initially negative impression has the opposite effect (the horns effect). This happens even when the initially perceived positive or negative attributes should be unrelated to subsequent evaluations.
Bandwagon bias: Within a group, people are likely to follow what others seem to follow.
Emerging preferences: Once people begin to prefer an alternative, they will actually change their preferences about additional information in a way that supports the earlier decision.
Measuring text readability in Lexiles
New technology measures
Prediction markets (dynamic aggregation of opinion) can be used to measure people's opinions.
Summarizing the philosophy
If it is something really important, it is something you can define. If it is something that exists at all, it is something you have observed somehow.
If it is something important and something uncertain, you have a cost of being wrong and a chance of being wrong.
You can quantify your current uncertainty with calibrated estimates.
You can compute the value of additional information by knowing the “threshold” of the measurement where it begins to make a difference compared to your existing uncertainty.
Once you know what it's worth to measure something, you can put the measurement effort in context and decide on the effort it should take.
Knowing just a few methods of random sampling, controlled experiments, or even merely improving on the judgment of experts can lead to a significant reduction in uncertainty.