Harvesting Knowledge From Your Campaigns

Eric Sandosham, Ph.D.
6 min read · Jun 16, 2024

--

From doing to knowing.

Photo by Farrinni on Unsplash

Background

In my three decades of work in data analytics / data science, there are two “necessary evils” that analysts either love or hate: (i) campaign management, and (ii) business metrics reporting, sometimes colloquially known as MIS. I would label these as sub-disciplines under the parent discipline of data analytics. Over the decades, these sub-disciplines have evolved in both complexity and solutioning. For example, MIS has evolved into self-service interactive dashboards. While I will discuss MIS in a subsequent article, here I specifically want to talk about campaign management. I will reference the financial services industry, the domain I’m most familiar with, although the points I raise can be applied universally to any industry.

Now, take a typical retail bank in Asia Pacific: it would run hundreds of campaigns each year. Over the decades, banks have gotten pretty good at tracking the performance of their campaigns and evaluating their ROIs (returns on investment). A 2:1 ROI is generally what they aim for, i.e. $2 of revenue for every $1 of campaign cost, which includes campaign delivery and the cost of offers. The 2:1 ratio makes sense when you understand the cost structure of a retail bank and attribute a fully-loaded cost to campaigns.

But even as retail banks track and evaluate every single campaign, they don’t learn much from them. They’ve missed the forest for the trees, as the saying goes. And so I dedicate my 43rd weekly article to providing a fresh perspective on the role of campaign management in the broader discipline of data analytics / data science.

(I write a weekly series of articles where I call out bad thinking and bad practices in data analytics / data science which you can find here.)

Creating Value; Creating Knowledge

In my 7th weekly article (The Problem with Campaign Management), I wrote about the poor thinking and gaps in campaign design and campaign evaluation. The theme then was the value-creation aspect of campaigns: the minutiae of tracking and evaluation serve to optimise the creation and extraction of that value.

But campaigns are also ingress mechanisms for knowledge creation. This can sound surprising, even counter-intuitive. Having interacted with a fair number of retail banks since co-founding my consulting firm, I’ve come to realise that retail banks haven’t necessarily gotten smarter (campaign-wise) despite having more consistent and rigorous capabilities to track and evaluate campaigns. Why is that? A big reason is that retail banks, and organisations in general, view campaigns as an execution process, i.e. simply the last-mile expression of all the thinking that went before. But the outcomes of campaigns can reveal so much more … if we know how to ask the right questions.

So how do we go about “harvesting” knowledge from campaigns? Let me introduce three guiding principles:

  1. State and validate your campaign assumptions and estimates.
  2. Triangulate your assumptions and trace their inter-dependencies.
  3. Identify areas of wasted resources to improve sustainability.

Assumptions & Estimates

In my last article (How to Perform Diagnostic Analytics), I covered some points on validating operating assumptions. That same philosophy applies to campaigns. Every campaign is built on a set of assumptions and estimates. We believe a particular target group of customers will be responsive to this campaign because of a particular set of motivations or attributes. And we believe that this will be the magnitude of the response. Does the outcome of the campaign correspond to these assumptions and estimates? Does the campaign reinforce our understanding of the underlying behaviours of our customer base? Meeting a set of campaign ROIs isn’t the same thing as validating assumptions and reducing uncertainties.

We start by stating the assumptions and uncertainties in the design phase of any given campaign.

  1. Here is an example of an assumption in a campaign: “We believe customers aged 35 to 45 will have the highest perceived utility in our product and are more prepared to pay for it. This assumption is supported by focus group research, and further tempered by experience and observations.”
  2. Here is an example of an estimate in a campaign: “We estimate that 10% of customers aged 35 to 45 will respond to this campaign, which is 2x the average response rate from past campaigns, because we need this level of response for the campaign to break even.”

The more we are able to write down statements such as these in our campaign designs, and store them in a campaign knowledge inventory as part of the knowledge audit trail, the more we keep ourselves honest in the subsequent learning process. As part of the campaign evaluation, we must then seek answers that validate these assumptions and/or improve the certainty of our estimates. For example, to validate our assumption that customers aged 35 to 45 are the most “receptive” target group, we would need to include customers outside that age range in the campaign design; a simple rank ordering of responses by age group in the campaign outcome would then provide a reasonable validation, or invalidation. And this validation/invalidation learning would need to be entered back into the campaign knowledge inventory to close the loop.
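To make the loop concrete, here is a minimal sketch of what an inventory entry and its rank-ordering check might look like. The data class, field names, campaign label and response figures are all hypothetical, not a real bank's schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Assumption:
    """One entry in a campaign knowledge inventory (all names are illustrative)."""
    campaign: str
    statement: str
    validated: Optional[bool] = None  # None = not yet evaluated
    evidence: str = ""

# Hypothetical campaign outcomes: response rate by age band, deliberately
# including bands outside the target group so the claim can be tested.
response_by_age = {"25-34": 0.031, "35-45": 0.058, "46-55": 0.024}

a = Assumption(
    campaign="2024-Q2-upsell",
    statement="Customers aged 35 to 45 are the most receptive target group",
)

# A simple rank ordering of responses by age group: the assumption
# holds if the 35-45 band tops the table.
ranked = sorted(response_by_age, key=response_by_age.get, reverse=True)
a.validated = (ranked[0] == "35-45")
a.evidence = f"Response-rate rank order: {ranked}"

# Close the loop: the learning goes back into the inventory.
inventory = [a]
```

The point is less the code than the record: each assumption carries its own validation status and evidence, so the next campaign designer inherits it.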

Triangulation & Traceability

Stating and validating your assumptions and estimates on a per-campaign basis is just the foundational beginning. To build campaign intelligence, we need to reference these assumptions across campaigns. How does the learning from one campaign triangulate with a similar learning from another? As detailed in my last article, we can never truly validate any assumption because of the nature of historic and biased data; we can only make reasonable claims. Therefore, the more we can triangulate these reasonable claims, the stronger the validation.

We also need to see whether some of this knowledge contradicts itself. In a similar vein to triangulation, the more contradictions there are, the weaker the validation of the assumption. It could also mean that we don’t have the right perspective on the assumption; that it is being confounded by other effects. This can open up an area for deeper diagnostic investigation.

Beyond reinforcement and contradiction, there is also the need to identify inter-dependent assumptions, i.e. assumptions that are built on top of other assumptions. This is the concept of data lineage and traceability (in data governance), which we can similarly apply to the collection of assumptions. It is important because when we validate or invalidate an assumption, we can immediately assess its impact on other campaigns.
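Assuming each assumption in the inventory records which others it builds on, the traceability idea can be sketched as a small dependency walk. The assumption IDs and the graph below are entirely hypothetical:

```python
# Hypothetical lineage: each entry lists the assumptions it is built on.
depends_on = {
    "A2": ["A1"],        # A2 builds on A1
    "A3": ["A1"],
    "A4": ["A2", "A3"],  # A4 builds on both A2 and A3
}

def impacted(assumption_id, graph):
    """Return every assumption that directly or transitively builds on the given one."""
    hit = set()
    frontier = [assumption_id]
    while frontier:
        current = frontier.pop()
        for child, parents in graph.items():
            if current in parents and child not in hit:
                hit.add(child)
                frontier.append(child)
    return hit

# Invalidating A1 flags A2, A3 and A4 (and their campaigns) for re-examination.
print(impacted("A1", depends_on))  # → {'A2', 'A3', 'A4'}
```

In practice the graph would map assumptions to the campaigns that rely on them, so an invalidation immediately surfaces the affected campaigns.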

Waste Reduction & Sustainability

The final way to harvest knowledge from a campaign is to actively view it through the lens of waste reduction. The premise is simple: what did we learn was truly unnecessary in this campaign? For example:

  1. Which customer segments did not respond at all and could have been removed from the target group?
  2. Which offers need not have been made, thus saving us some expenses?
  3. Which channels could have been de-prioritised or under-weighted, thus saving us some expenses?

A useful way to approach this exercise is to simulate achieving the same campaign outcome with 10% fewer resources. Can it be done? If yes, what did we learn? And what can we file away as knowledge that might be useful for other campaigns?
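One way this simulation might look, assuming per-segment costs and response counts are available from campaign tracking. All segment names and figures below are invented for illustration:

```python
# A toy version of the "same outcome with 10% fewer resources" exercise.
segments = [
    # (segment, cost, responses achieved)
    ("35-45 / digital", 10_000, 520),
    ("35-45 / mail",    15_000, 310),
    ("25-34 / digital",  8_000, 150),
    ("46-55 / mail",    12_000,  40),
]

total_cost = sum(c for _, c, _ in segments)
total_resp = sum(r for _, _, r in segments)
budget = total_cost * 0.9  # 10% fewer resources

# Greedily keep the segments with the best responses per dollar.
kept, spend, resp = [], 0, 0
for name, cost, responses in sorted(segments, key=lambda s: s[2] / s[1], reverse=True):
    if spend + cost <= budget:
        kept.append(name)
        spend += cost
        resp += responses

print(f"Kept {kept}: {resp}/{total_resp} responses for {spend}/{total_cost} cost")
```

With these invented numbers, dropping the worst segment retains 980 of 1,020 responses for 33,000 of 45,000 in cost, and the learning filed away is that the "46-55 / mail" cell was largely waste.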

By constantly doing this mental and computational exercise, we strengthen our ability to incorporate sustainability into our campaigns, thus preparing us for the eventuality of resource scarcity.

Conclusion

In today’s ultra-competitive world, rigorously tracking and evaluating every campaign is not enough; having access to asymmetric knowledge gives us a competitive advantage. To get there, we need to build a knowledge harvesting process around our organisation’s campaigns: set up a campaign knowledge inventory, put in place the right knowledge audit trails, and enforce knowledge lineage and traceability. It sounds daunting, but there are now AI-supported tools and processes that can get us there much faster and more easily. Building up organisational intelligence via your campaign process could reap significant dividends.


Eric Sandosham, Ph.D.

Founder & Partner of Red & White Consulting Partners LLP. A passionate and seasoned veteran of business analytics. Former CAO of Citibank APAC.