Why the Obsession with Customer Lifetime Value?

Eric Sandosham, Ph.D.
4 min read · Mar 24, 2024


And other useless things.


Background

The topic of Customer Lifetime Value (CLV) came up in a recent conversation with a dear friend and co-author (we are working on a book together), and it got me thinking about my early days (1990s) in Citibank as a junior data analyst. CLV was all the rage back then. We would agonise over data, computation methodologies, validation, financial reconciliation, etc. 30 years on, I still see organisations discussing this topic. There are even articles that talk about CLV as the only customer metric that matters, and how successful organisations such as Netflix utilise it. (Where have we heard this before? Oh yes, the infamous and academically debunked Net Promoter Score; not a fan.) This obsession with CLV is unhealthy and largely misplaced. And so I dedicate my 31st weekly article to discussing the bad thinking around customer lifetime value.

(I write a weekly series of articles where I call out bad thinking and bad practices in data analytics / data science, which you can find here.)

What Matters

How do we begin to know what should matter in our pursuit of CLV? Most data analysts, when approached by the business to derive CLV, would ask the following questions:

1. What is the definition of CLV?

  • What defines a customer?
  • What counts as ‘value’? Is it just revenue earned?
  • What is the duration of ‘lifetime’?

2. Where will I get the data?

  • Is the data already available, or readily obtainable?
  • Are there privacy concerns with the data?
  • What is the extraction and refresh cycle of the data?

3. How should I set up the computation?

  • How much computational power would I need?
  • When should the computed output be made available?
  • How will I validate the computational output?

I submit to you that these are all the WRONG questions. The only question worth asking in this instance is:

  • “How will the computed CLV be used in the decision-making process: as an input, as feedback, or as an outcome measure?”

But if this question comes across as somewhat obscure or even abstract for the novice, perhaps a simpler question to ask is:

  • “If Customer A were to differ from Customer B in their CLV by 5%, why would it matter? What different actions would you take for each customer?”

If you can’t answer this question, or if you don’t have sufficiently differentiated actions at that level of relative CLV difference, then you really don’t need to waste your time tinkering with the computation inputs and logic to extract greater accuracy. The effort simply does not matter; it won’t translate into economic value!
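
To make this concrete, here is a minimal sketch in Python of what “sufficiently differentiated actions” means in practice. The tier thresholds and treatment names are hypothetical; the point is that when CLV only routes customers into a handful of treatments, a 5% difference between two customers rarely changes the action taken.

```python
# Minimal sketch: CLV only matters insofar as it changes the action taken.
# Tier boundaries and treatment names below are hypothetical illustrations.

def treatment_for(clv: float) -> str:
    """Map a CLV estimate to one of a few differentiated actions."""
    if clv >= 10_000:
        return "assign relationship manager"
    elif clv >= 2_000:
        return "targeted retention offers"
    else:
        return "standard service"

customer_a, customer_b = 5_000.0, 5_250.0  # B's CLV is 5% higher than A's

# Both customers land in the same tier, so the extra 5% of
# (hard-won) accuracy changes nothing about what you actually do.
print(treatment_for(customer_a))  # targeted retention offers
print(treatment_for(customer_b))  # targeted retention offers
```

Only when a CLV gap is large enough to cross one of these action boundaries does further accuracy work start to pay for itself.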

Knowing vs Doing

In December 2023, I published the 17th article of this weekly series, in which I wrote about the concept of ‘Knowing’ vs ‘Doing’, i.e. obsessing over information processing versus decision execution. This is one of the issues plaguing the data analytics / data science function in many organisations: fixating on knowing for the sake of knowing, instead of focusing on solving execution-oriented challenges.

The CLV instance can be generalised into a class of problem statements related to forecasting accuracy. There are actually two aspects here: the ability to achieve a reasonable forecast, and the effort to further improve its accuracy. With forecasting, you are either simulating the future effects of your present-day actions, or you are trying to manoeuvre yourself into a favourable position based on that forecasted future. In both scenarios, your decisions are either broad-stroke or fine-grained: either you just need directional guidance, such as a relative prioritisation of actions, or you need a very specific set of instructions and activities. In the case of CLV, the use is largely directional in nature: you want to know which set of customers to invest your time and resources in. If the CLV is directionally accurate (meaning the relative accuracy of one customer against another is reliable), then that is sufficient. Putting additional effort into accuracy won’t change your business decisions. Instead, it would distract you from the real economic extraction activities.
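
One way to check directional sufficiency is to compare how a simple CLV model ranks customers against a more elaborate one, for instance with a Spearman rank correlation. A sketch, assuming two hypothetical sets of CLV estimates for the same customers:

```python
# Sketch: directional sufficiency check. If a simple CLV model ranks
# customers almost identically to an elaborate one, the extra modelling
# effort will not change which customers you prioritise.
from scipy.stats import spearmanr

# Hypothetical CLV estimates for the same five customers from two models.
clv_simple = [1200, 850, 4300, 300, 2100]
clv_elaborate = [1350, 790, 4100, 410, 2250]

rho, _ = spearmanr(clv_simple, clv_elaborate)
print(f"Rank agreement (Spearman rho): {rho:.2f}")
# A rho near 1.0 means both models would prioritise customers in the
# same order; further accuracy work is unlikely to change decisions.
```

If the ranks already agree, refining the model improves ‘Knowing’ without changing ‘Doing’.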

The relevant questions to ask in any data analytics computation exercise are:

  1. What decisions will be taken based on the computation outputs?
  2. How will the variance in the computation output affect the range of decision choices?
  3. How durable should the computation output be at the time of decision-making?

Conclusion

Many organisations are wasting valuable time and resources working on ‘accurate’ CLV calculations. These can also take the form of pro-forma P&L simulations for product launches and customer segments. To boast that your forecast is fantastically accurate is to pat yourself on the back for ‘Knowing’. Such accuracy rarely translates into incremental economic benefit for the organisation, because it is ‘Knowing’ that is not connected to ‘Doing’. Your computation thoughtfulness and effort need to be intimately linked to execution activities, so that you know when you’ve achieved sufficiency.

CLV isn’t useless. But the complexity of the CLV computation must be matched against the decisions to be taken on its outputs. If the decision is directional (e.g. invest in one customer instead of another), then relative accuracy is all you need. The fixation on CLV can perhaps trace its roots to Net Present Value (NPV) calculations. In the latter, absolute accuracy matters, because you end up taking different financial decisions based on the variance of the NPV outputs.
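
A back-of-envelope NPV sketch (with hypothetical cash flows and discount rates) shows why absolute accuracy matters there in a way it does not for directional CLV: a modest error in an input can flip the accept/reject decision at the NPV = 0 hurdle.

```python
# Sketch: why absolute accuracy matters for NPV but not (usually) for CLV.
# Cash flows and discount rates below are hypothetical.

def npv(cash_flows, rate):
    """Net present value of cash flows, one per year starting at t = 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

investment = [-1000, 390, 390, 390]  # upfront outlay, then annual returns

base = npv(investment, rate=0.07)
shifted = npv(investment, rate=0.09)  # modest error in the discount rate

# The accept/reject decision flips around NPV = 0, so small absolute
# errors in inputs translate directly into different financial decisions.
print(f"NPV at 7%: {base:+.0f} -> {'accept' if base > 0 else 'reject'}")
print(f"NPV at 9%: {shifted:+.0f} -> {'accept' if shifted > 0 else 'reject'}")
```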

Can you think of similar examples to CLV that organisations are fixating on?


Eric Sandosham, Ph.D.

Founder & Partner of Red & White Consulting Partners LLP. A passionate and seasoned veteran of business analytics. Former CAO of Citibank APAC.