The Problem with Campaign Management

Eric Sandosham, Ph.D.
6 min read · Oct 8, 2023


Background

In my last article, I wrote about the bad thinking and bad practices around the design of sales incentive programmes. In this 7th article, I switch my perspective to the customer. Allow me to expound on the sub-optimalities in the management of campaigns.

Campaigns represent the life-blood of any organisation, be it B2C or B2B. Campaign Management is defined as “the process of planning, executing, tracking, and analysing a marketing campaign from start to finish”. What I have observed is that many organisations suffer from the following issues:

  1. Incomplete campaign design framework
  2. Incomplete campaign evaluation framework

One of the key reasons why these issues exist stems from the critical question of who manages campaigns in an organisation. Many organisations treat campaigns as the responsibility of the Product / Marketing function, and thus logic dictates that Marketing should be given the budget and task of designing, executing and measuring campaigns. The truth is that many Product / Marketing teams lack the multi-disciplinary skills required to truly excel in campaign management. Instead of building up these needed skills, either through collaboration and partnership with other functions or through self-investment, the Product / Marketing teams chase buzz terms like omni-channel marketing, hyper-personalisation, and 1-to-1 marketing. While they may appear intuitive and make for great strategy talks, most of these concepts are really just that: concepts! They simply don’t go very far, and when you spend time unpacking them, they turn out to be quite empty.

A few days ago, I had dinner with friends who are in the Data Analytics domain, and one of them (Rakesh) summed this up brilliantly:

“Product / Marketing wants to target the right customer with the right product at the right time. That’s a lot of ‘right’ to predict and estimate; each is difficult and requires its own set of deep constructional thinking and data analytics. Combining all 3 dimensions makes it exponentially difficult!” Boom! 10 points! I would add to that trinity the right communication as well. How does one even define ‘right’? Right for whom? For the customer or for the organisation?

Campaign Design

In most people’s minds, ‘campaign’ is synonymous with ‘conversion’: either some additional product sale is involved or incremental usage of an existing product. In reality, campaigns can have multiple, nuanced objectives. In addition to creating sales, a campaign could be creating a habit, creating awareness, or creating trust. Each of these objectives has different execution mechanisms and success measures. We can codify the execution mechanisms into policies, such as channel protocols and contact protocols, that set operating boundaries and reduce decision complexity. A channel protocol is essentially an evidence-based decision rule on the ‘appropriate’ channel to leverage for the campaign objective, given the unique attributes of that channel (e.g. the ability to efficiently scale, or the ability to enact a closed-loop call-to-action). This implies the need to create a comprehensive and continuously updated knowledge inventory of channel attributes.
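To make the idea concrete, here is a minimal sketch of what a channel protocol could look like as a decision rule over a scored knowledge inventory. The channels, objectives and attribute scores are hypothetical placeholders, not a recommendation:

```python
# Hypothetical sketch of a channel protocol: an evidence-based decision rule
# that maps a campaign objective to the most suitable channel, using a
# knowledge inventory of channel attributes. All names and scores below are
# illustrative placeholders.

# Knowledge inventory: each channel scored (0-1) on the attributes that
# matter for channel selection.
CHANNEL_INVENTORY = {
    "email":   {"scale": 0.9, "closed_loop_cta": 0.6},
    "sms":     {"scale": 0.8, "closed_loop_cta": 0.7},
    "rm_call": {"scale": 0.2, "closed_loop_cta": 0.9},
}

# Decision rule: which channel attribute each campaign objective optimises for.
OBJECTIVE_TO_ATTRIBUTE = {
    "awareness":  "scale",            # reach as many customers as possible
    "conversion": "closed_loop_cta",  # need a measurable call-to-action loop
}

def select_channel(objective: str) -> str:
    """Return the channel whose attributes best fit the campaign objective."""
    attribute = OBJECTIVE_TO_ATTRIBUTE[objective]
    return max(CHANNEL_INVENTORY, key=lambda ch: CHANNEL_INVENTORY[ch][attribute])

print(select_channel("awareness"))   # email
print(select_channel("conversion"))  # rm_call
```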

A contact protocol is an evidence-based decision rule on the ‘appropriate’ communication timing (time period, frequency, rest period) for the campaign objective, so as to maximise attention availability and recall while minimising fatigue. The contact protocol also includes policies around minimum contact (e.g. at least once every 3 months). The evidence supporting the contact protocol needs to be continuously tested, assessed and validated.
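A contact protocol might be operationalised along these lines. This is a sketch under assumed policy values; the frequency cap, rest period and minimum-contact window are illustrative, and would in practice be set from tested evidence:

```python
# Hypothetical sketch of a contact protocol: a decision rule on communication
# timing. The thresholds below are illustrative placeholders.
from datetime import date, timedelta

FREQUENCY_CAP = 2                      # max campaign contacts per 30-day window
REST_PERIOD = timedelta(days=7)        # minimum gap after the last contact
MINIMUM_CONTACT = timedelta(days=90)   # contact at least once every 3 months

def contact_decision(contact_history: list[date], today: date) -> str:
    """Decide whether a customer may, or must, be contacted today."""
    recent = [d for d in contact_history if today - d <= timedelta(days=30)]
    last = max(contact_history, default=None)

    if last is None or today - last >= MINIMUM_CONTACT:
        return "must_contact"   # minimum-contact policy kicks in
    if len(recent) >= FREQUENCY_CAP or today - last < REST_PERIOD:
        return "suppress"       # avoid fatigue
    return "eligible"

history = [date(2023, 9, 20), date(2023, 10, 2)]
print(contact_decision(history, date(2023, 10, 8)))  # suppress (2 contacts in 30 days)
```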

Campaign design is both a thought-intensive and data-intensive activity. As this is not an exposition on campaign design, suffice it to say that determining a campaign’s objective(s), and understanding the need for it, drives the core of the design. That choice of objective then triggers the right channel and contact protocols to apply, followed by determining and sizing the target group, sizing the desired impact, deciding how the impact will be attributed, and deciding what lessons are to be learned. This naturally segues into campaign tracking and evaluation. It’s fair to say that the use of predictive models in campaign lead selection is pervasive across many organisations. That’s a good thing, but only in so far as the model aligns to the target objective. If models are built off data from historic campaigns, then we are constraining ourselves to a cost-reduction exercise: the predictive model removes the ‘waste’ but does not expand the target universe. I have seen organisations pursue campaign ROI religiously, leading to a significant reduction in leads generated (as all the ‘waste’ is taken out), but ultimately having no meaningful impact on the portfolio because of the reduced scale.
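The back-of-envelope arithmetic below illustrates the trap; all the numbers are hypothetical. A tighter model cutoff lifts the conversion rate, and hence the measured ROI, but the shrinking lead base means fewer total sales:

```python
# Back-of-envelope illustration of the ROI-versus-scale trap. All numbers
# are hypothetical: tightening the model cutoff lifts the conversion rate
# (campaign 'ROI') but shrinks the lead base, so total sales fall.

scenarios = [
    # (approach, leads targeted, conversion rate)
    ("no model, mass contact", 100_000, 0.010),
    ("model, loose cutoff",     40_000, 0.020),
    ("model, tight cutoff",      5_000, 0.045),
]

for name, leads, rate in scenarios:
    print(f"{name:<24} {leads:>7,} leads x {rate:.1%} = {leads * rate:,.0f} sales")

# no model, mass contact   100,000 leads x 1.0% = 1,000 sales
# model, loose cutoff       40,000 leads x 2.0% = 800 sales
# model, tight cutoff        5,000 leads x 4.5% = 225 sales  <- best ROI, least impact
```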

Campaign Evaluation

I remember a conversation I had with a client regarding their claim that their predictive model was not working because they were not getting the desired sales conversion. I asked them how they knew that the fault lay with the predictive model. They said it was obvious: the model produced leads that were distributed to sales touch-points, and those leads were not translating into sufficient sales conversion. I responded that they needed to be able to answer the following questions before jumping to that conclusion:

  1. What is the model predicting precisely? And what data was it built on?
  2. What % of the leads resulted in a contact with the customer?
  3. Of those contacted, what % were presented the offer?
  4. Of those presented the offer, what % indicated they had a need, but that need had already been fulfilled by a competitor’s offer?
  5. Of those presented the offer, what % indicated they had a need, but did not like the offer from the organisation or did not have a brand affinity to the organisation?
  6. Of those presented the offer, what % indicated they had a need, but the timing was not great and they had to reconsider?

You can see where I’m going with this set of questions. It’s a case of getting the attribution right. The predictive model could be working as intended, while the contacted clients ‘resist’ conversion for many other reasons. I call this “measuring the in-betweens”.
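A simple funnel decomposition, with hypothetical counts, shows what measuring the in-betweens looks like in practice; each stage’s drop-off is measured against the previous stage so the leakage can be located:

```python
# Hypothetical funnel for 'measuring the in-betweens': stage-by-stage counts
# from a campaign, so a poor final conversion rate can be attributed to the
# right stage rather than blamed on the predictive model by default.

funnel = [
    ("leads generated by model",     10_000),
    ("successfully contacted",        6_000),
    ("actually presented the offer",  3_000),
    ("acknowledged a need",           1_200),
    ("converted",                       300),
]

prev = funnel[0][1]
for stage, count in funnel:
    print(f"{stage:<30} {count:>6,}  ({count / prev:.0%} of previous stage)")
    prev = count

# In this illustration, only 60% of the model's leads are ever contacted, and
# only half of those contacted see the offer: the leakage sits in execution,
# not necessarily in the model.
```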

When we design a campaign, we make certain assumptions underpinning its success. The more we can validate these assumptions, the more we can re-use or build on them for future campaigns, making those campaigns even more effective. So, the objective of campaign evaluation is to validate assumptions through the right attribution. Controlling for the attribution is the trick, and that is in fact what test-and-control is constructed to achieve. Once you understand attribution, there are many clever ways to measure and account for it; I’ve even seen the use of synthetic control groups in campaign evaluation.
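As a sketch of the basic mechanics, here is the simplest form of test-and-control attribution, with hypothetical group sizes and conversion counts; a real design would add significance testing and careful matching of the control group:

```python
# Minimal sketch of test-and-control attribution. Group sizes and conversion
# counts are hypothetical. The lift (test rate minus control rate) is the
# portion of conversions the campaign can legitimately claim.

test_size, test_conversions = 50_000, 1_250       # contacted group
control_size, control_conversions = 10_000, 150   # held out, not contacted

test_rate = test_conversions / test_size           # 2.5%
control_rate = control_conversions / control_size  # 1.5%
lift = test_rate - control_rate                    # 1.0 percentage point

incremental = lift * test_size                     # conversions attributable
print(f"test {test_rate:.1%}, control {control_rate:.1%}, "
      f"lift {lift:.1%} -> ~{incremental:,.0f} incremental sales")
```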

At the heart of campaign evaluation is the creation of a knowledge inventory linked to useful design assumptions. The more we can reduce the uncertainty in these assumptions, the more certainty we get in our campaign performance, leading obviously to better decision-making. Creating and maintaining this knowledge inventory is a massive missed opportunity in many client-centric organisations.

Conclusion

We started this article by asking the question: whose job is it to manage campaigns? I would argue that it shouldn’t be the Product / Marketing function because the skillset needed to make high-quality decisions on campaign design and campaign evaluation simply doesn’t exist in this function. Campaign Management is, in fact, a data analytics practice — the ability to recognise assumptions in the design and validate them during the evaluation process is a deeply analytical one. Sadly, many organisations continue to give full ownership of campaign management to the Product / Marketing function, relegating the Data Analytics function to merely a supporting role. That needs to change.


Written by Eric Sandosham, Ph.D.

Founder & Partner of Red & White Consulting Partners LLP. A passionate and seasoned veteran of business analytics. Former CAO of Citibank APAC.
