When Measurements Are Stupid

Eric Sandosham, Ph.D.
4 min read · Feb 2, 2025


It’s time to call bullshit.


Background

Last week, I got into an online “argument” with the CEO of a market research company when I called his survey “scammy”. I was reading an online article about how Singaporeans were responding positively to Trump’s re-entry into the White House, and found it hard to believe its conclusion: that Singaporeans likely resonated with Trump’s conservative stance because they were themselves conservative. The “finding” was part of a quarterly market survey on citizen opinions in the island state. Those of you who have read my earlier articles will know that I have a deep dislike for survey data, finding it highly unreliable, poorly constructed, and often non-actionable. The CEO of said market research company told me to climb down from my ivory tower (a jab at my Ph.D.). He told me the data was public, and I was free to analyse it. I did, and I still think it’s rubbish.

In that same week, I was conducting a client workshop on HR Analytics with my consulting team in Jakarta. A question was raised about why the client organisation’s measurements of culture and values looked weird: “awareness” measurements were consistently lower than “buy-in” measurements. My gut-instinct diagnosis was that it was an instrumentation issue. Measuring awareness is akin to testing rote memory (to this day, loyal customers still misattribute brand advertising imagery and messaging to the wrong brands), while measuring buy-in is experiential and therefore more reliable, even though the measurement itself may not be valid.

These 2 events may seem unrelated on the surface. But as I reflected on them, I realised that they shared one key commonality. They were both about stupid measurements.

And so I dedicate my 76th article to unpacking what makes a measurement stupid.

(I write a weekly series of articles where I call out bad thinking and bad practices in data analytics / data science which you can find here.)

Measure Like There’s No Tomorrow

There are industries built on measurements, or rather, over-measurements. They believe they are creating useful information; that every measurement means something; that everything that can be measured should be measured. (It’s reminiscent of naive data leaders who, driven by fear-of-missing-out, throw all their business, process and systems data into their data lake without any real understanding of what they are trying to achieve.)

Consider the global public opinion and election polling industry. It is worth just under USD 7 billion and growing at roughly 2% a year, i.e. no real growth. Either the market is saturated, or that reflects a lack of sufficient utility generated by the industry. The Singapore market research survey that I spoke of in my opening paragraph includes questions such as the following (measured quarterly):

  • “Do you feel that things in Singapore are heading in the right direction or would you say things are heading in the wrong direction?”

This question is problematic on so many levels:

  1. The survey does not use a fixed panel of responders; the panel changes each quarter, which naturally gives rise to higher response variation.
  2. The emotional and subjective nature of the question introduces another layer of variation: even if the environmental context remained status quo, there is natural variation when asking people about their “feelings”. (A simulation sketch of both effects follows this list.)
  3. The measurement is not calibrated: what should the expected baseline be? No economy benefits every citizen, so there will always be winners and losers. What, then, does heading in the “right direction” mean? People often do not have informed opinions.
  4. The measurement has no decisioning value. If either positive or negative sentiment were high, what decisions can or should be taken? And why would any corporation make decisions based on “feelings”?
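
To make points 1 and 2 concrete, here is a minimal simulation sketch in Python, with invented numbers: true sentiment is fixed at 60% and never changes, yet the quarterly headline still moves, purely from fresh-panel sampling (point 1) and “mood noise” in subjective answers (point 2).

```python
import random

random.seed(7)

POPULATION = 100_000
SAMPLE = 1_000
MOOD_FLIP = 0.10  # assumed chance a respondent answers against their own view

# True, unchanging sentiment: 60% think things are heading the right way.
base_opinion = [random.random() < 0.60 for _ in range(POPULATION)]

def survey(panel):
    """Share of the panel answering 'right direction', with mood noise."""
    yes = 0
    for i in panel:
        answer = base_opinion[i]
        if random.random() < MOOD_FLIP:  # subjective answers fluctuate
            answer = not answer
        yes += answer
    return 100 * yes / len(panel)

fixed_panel = random.sample(range(POPULATION), SAMPLE)

for q in range(1, 9):
    fresh_panel = random.sample(range(POPULATION), SAMPLE)  # new responders each quarter
    print(f"Q{q}: fixed panel = {survey(fixed_panel):.1f}%, "
          f"fresh panel = {survey(fresh_panel):.1f}%")
```

The exact numbers don’t matter; the point is that a two- or three-point quarterly swing is entirely consistent with nothing having changed at all.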

This question is ultimately symptomatic of the broader collective of stupid measurements.

Stupid Measurements

What makes a measurement stupid? I submit to you 3 simple smell-test criteria:

  1. A measurement is stupid if it has no decisioning value. This is also known as the “so what” test. Does the measurement play a meaningful role as an input to a decision, or as a reliable outcome measure of a decision that you have influence or control over? Alternatively, do you know what the measurement is correlated to?
  2. A measurement is stupid if it does not represent what it’s supposed to represent, and you know it. This is also known as the validity test. Consider the case of exit interviews: asking employees why they’ve decided to leave the organisation. We know the answers are often false (there is no incentive to tell the truth, and most HR professionals would advise you not to burn bridges), and yet HR still collects them.
  3. A measurement is stupid if it is not calibrated, if it has no baseline. I’ll call this the interpretation test. Without a calibration / baseline, we can’t interpret whether a measurement is good or bad. Consider the case of measuring culture / value awareness that I shared in my second paragraph above. What makes for an acceptable awareness measurement? Should it be 60%, 70% or 80%? One simple way to build such a baseline is sketched after this list.
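
As an illustration of the interpretation test, here is a minimal calibration sketch in Python. The benchmark figures are invented for illustration, not real data; the idea is simply that a raw score only becomes interpretable against some reference distribution, whether a peer benchmark, a historical trend, or a pre-agreed target.

```python
import statistics

# Hypothetical benchmark: culture-awareness scores (%) from comparable
# organisations. These figures are invented for illustration.
benchmark = [62, 68, 71, 74, 75, 78, 80, 83]

our_score = 72  # the raw score: meaningless on its own

mean = statistics.mean(benchmark)
stdev = statistics.stdev(benchmark)
z = (our_score - mean) / stdev
share_below = sum(s < our_score for s in benchmark) / len(benchmark)

print(f"Raw awareness score: {our_score}%")
print(f"Benchmark mean: {mean:.1f}%, z-score: {z:+.2f}")
print(f"Higher than {share_below:.0%} of benchmark organisations")
```

Only with a baseline like this does the question “is 72% awareness good?” become answerable.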

Conclusion

Just because you have the ability to create measurements doesn’t mean you should. Every measurement should pass the 3 smell tests; failing that, you are simply introducing noise into the world. And on the receiving side, we should not be afraid to actively call out bullshit each time we encounter such noise. It will help raise the bar in a world that’s already inundated with data.


Written by Eric Sandosham, Ph.D.

Founder & Partner of Red & White Consulting Partners LLP. A passionate and seasoned veteran of business analytics. Former CAO of Citibank APAC.
