The Problem with Decision Strategy

Eric Sandosham, Ph.D.
7 min read · Oct 29, 2023



Background

This is the last of my 3-part sub-series on how organisations should go about building their Data Analytics Capability. In the last 2 articles, I wrote about building Data Capability and Computation Capability. This is also my 10th article in the broader series calling out bad thinking and bad practices in the data analytics domain.

To recap, building data analytics capability is made up of Data Capability, Computation Capability and Translation Capability (see diagram below):

  1. Data capability — thoughtful data instrumentation strategy and methodology to create re-usable information assets with a priority on 1st party data to maximise competitive value.
  2. Computation capability — advanced algorithmic solutions integrated with customer touch-points that optimise customer value through the estimation of preferences, risks and price sensitivities.
  3. Translation capability — data sensemaking abilities to understand information and its context to improve decision-making intelligence across the enterprise.
(Diagram: 3 Pillars of Building Data Analytics Capability)

We will now unpack part 3 (of 3) — Translation Capability.

The term ‘translation’ is relatively new in the data analytics / data science vernacular. It was coined in 2018 by the ‘masters of organisation destruction’, the venerable management consulting firm McKinsey & Co. The Big McK has a gift for recognising the weak points of greed and fear in organisations and exploiting them to the hilt for its own financial benefit. It was quick to recognise that the exponential growth in, and focus on, the technical aspects of data science was creating a gulf between the organisation and value realisation; the C-suite needed to virtue-signal that their organisations were digitally progressive, but they simply couldn't extract sufficient profit from their data science investments. So the fault must lie in one of two places: (a) stakeholders are not identifying the right problems to solve with data science, or (b) solution adoption is meagre due to poor integration and execution. And obviously, an organisation would need new pairs of hands to manage these friction points. McKinsey defines an Analytics Translator as someone who identifies business challenges and works with data engineers and data scientists on solutions to overcome obstacles and streamline processes; this is sometimes referred to as ‘value engineering’. The description seems intuitive, but on closer inspection it says very little and provides no real illumination.

What evidence is there that stakeholders are not identifying the right problems to solve with data science? What evidence is there that data science solutions are poorly integrated into the organisation? I have not seen any comprehensive study of these phenomena, and therefore cannot substantiate whether McKinsey's claim has merit. However, my nearly 30 years of professional experience in the data analytics / data science practice has afforded some first-hand understanding of the possible root causes.

Data-Driven Problem-Solving

I look at translation as the act of removing friction in the end-to-end workflow of turning data into information, information into knowledge (insight), and knowledge into action and impact. To better understand how data ‘translates’ into impact, consider the diagram below, which articulates how data-driven problem-solving occurs and the various roles involved in the process. Note the 3 friction points highlighted in the diagram by the yellow arrows; these define the job role of the Analytics Translator. Reading from left to right, I refer to these ‘yellow-arrow’ friction points as first-mile, mid-mile and last-mile friction.

(Diagram: Turning Data Into Insights & Impact)

The friction that arises in the first mile is associated with the improper articulation of the problem statement and its translation into a data representation. Simply put, we incorrectly identify the pain-point(s) of a phenomenon or situation. Consider the simple example of employee attrition: the pain-point isn't the loss of the employee but the loss of employee value, which could take the form of high productivity, unique competencies or institutional knowledge (often undocumented). Solving for the loss of these forms of employee value is fundamentally different from solving for employee churn. I won't go into the details here because I've been asked to write a more expository article on the use of data science in solving for employee churn, which I will release at a later date. Finding an explicit (i.e. measurable) data representation for these forms of employee value is not obvious, and is constrained by your data capability.
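To make the reframing concrete, here is a minimal sketch in Python. All column names, proxy values and weights are invented for illustration; the point is simply that ranking employees by expected value-at-risk (attrition probability multiplied by a composite of the value dimensions above) prioritises differently than ranking by churn probability alone.

```python
import pandas as pd

# Hypothetical illustration of the reframing: rank employees by expected
# value-at-risk (attrition probability x a composite of employee value),
# not by attrition probability alone. Columns and weights are invented
# for the example, not taken from any real HR dataset.
employees = pd.DataFrame({
    "employee_id":     [101, 102, 103],
    "p_attrition":     [0.60, 0.15, 0.40],   # output of any churn model
    "productivity":    [0.50, 0.90, 0.70],   # normalised 0-1 proxies
    "unique_skills":   [0.20, 0.95, 0.40],
    "tacit_knowledge": [0.30, 0.80, 0.60],
})

# A crude composite of the forms of employee value discussed above;
# in practice each proxy needs its own instrumentation (data capability).
weights = {"productivity": 0.4, "unique_skills": 0.3, "tacit_knowledge": 0.3}
employees["employee_value"] = sum(employees[c] * w for c, w in weights.items())

# Expected value-at-risk: closer to what the organisation actually loses.
employees["expected_value_at_risk"] = (
    employees["p_attrition"] * employees["employee_value"]
)

# The two rankings disagree: the likeliest leaver is not the biggest loss.
print(employees.sort_values("p_attrition", ascending=False)[
    ["employee_id", "p_attrition"]])
print(employees.sort_values("expected_value_at_risk", ascending=False)[
    ["employee_id", "expected_value_at_risk"]])
```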

The friction that arises in the mid-mile is associated with the inappropriate translation of insights from the diagnostic phase into the solution-design phase. Firstly, a great number of organisations jump into solution design without going through the rigours of diagnostic analytics. They see some data patterns, make an inference, jump to a conclusion, and proceed to cobble together a solution. They don't validate their inferences! Secondly, even if the diagnostic phase is robust and the insights generated are valid, figuring out the right solution design requires sensitivity to both the organisation's operating environment and its delivery capability. Consider the following example: a bank discovered that it might have many upsell opportunities for remittance services within its customer base, so it built a model to predict which customers needed remittances and ran a marketing campaign to extract the economic value. But it failed to recognise that you can't generate more remittance demand; you can only steal share from competitors, and the bank was unprepared to compete on pricing. So the solution didn't make sense despite the analysis having been performed.
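A back-of-envelope sketch of the remittance example, with every figure invented for illustration, shows how the campaign economics collapse once you accept that demand is not created but only stolen from competitors on price:

```python
# All assumptions are hypothetical; the structure of the calculation is the point.
targeted_customers = 50_000
response_rate      = 0.04    # assumed campaign response
fee_per_remittance = 20.0    # current fee per transaction
txns_per_year      = 12

# Naive view: every responder is incremental demand at the current fee.
naive_revenue = (targeted_customers * response_rate
                 * txns_per_year * fee_per_remittance)

# Share-steal view: responders already remit elsewhere, so winning them
# requires a price discount, and only a fraction actually switch.
switch_rate       = 0.5      # assumed share of responders who switch
required_discount = 0.4      # assumed discount needed to beat incumbents
realistic_revenue = (targeted_customers * response_rate * switch_rate
                     * txns_per_year * fee_per_remittance
                     * (1 - required_discount))

print(f"Naive incremental revenue:    {naive_revenue:,.0f}")
print(f"Share-steal revenue estimate: {realistic_revenue:,.0f}")
```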

The friction that arises in the last mile is associated with the improper integration of the solution into the decision ecosystem of the organisation. Organisations rarely consider whether their solution significantly reduces stakeholders' (whether customers or employees) choice-making abilities, or whether it significantly increases stakeholders' cognitive load. These issues impair successful solution adoption; stakeholders will often sabotage the solution, either unknowingly or intentionally, in response. The example that comes to mind is from my Citibank days, where we had developed a model to predict which credit card customers were most likely to call the call centre to request a waiver of their annual fee; we wanted to reduce such inbound calls, given that the eventual waiver rate was extremely high. Selected customers would hear a pre-recorded message when they called in, announcing that the bank had already taken steps to waive their credit card annual fee. Interestingly, the number of such inbound calls didn't drop significantly. We found that these pre-waived customers would still request to speak with a customer service officer, simply to confirm that the pre-recorded message was real. The results of the same predictive model could instead have been used to send an outbound direct message rather than play an inbound pre-recorded message; the latter clearly increased the customer's cognitive load.
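As a hypothetical sketch of that integration choice (field names and threshold are assumptions, not Citibank's actual implementation), the same model scores could just as easily have driven a proactive outbound notification instead of an inbound IVR announcement:

```python
# Minimal sketch: route high-scoring customers to an outbound message
# that pre-approves the waiver, rather than intercepting their inbound
# call with a recording they feel compelled to verify with an agent.
def plan_waiver_outreach(scored_customers, threshold=0.8):
    """scored_customers: iterable of (customer_id, p_fee_waiver_call)."""
    outbound = [cid for cid, score in scored_customers if score >= threshold]
    return [{"customer_id": cid,
             "action": "pre_approve_fee_waiver",
             "channel": "outbound_message"} for cid in outbound]

print(plan_waiver_outreach([("C001", 0.92), ("C002", 0.35), ("C003", 0.81)]))
```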

Decision Management

A unifying theme that emerges from studying these friction points is that the role of the Analytics Translator is to understand how problems arise from flaws in an organisation's decision ecosystem. Pain-points erupt from sub-optimal decisions, and understanding the sub-optimalities of the choice-selection process helps us understand the nature of the problem and what to do to address them. Sometimes the sub-optimality stems from poor-quality data, sometimes from poor-quality interpretation of the data, and sometimes from poor solution design.

When I was at Citibank, we renamed the function from Database Marketing & Analytics (DBMA) and Business Planning & Analytics (BP&A) to Decision Management. It was a nod to the maturity of the work, which was focused on knowledge and impact rather than data. It was a recognition that the data analytics work we did was ultimately about improving the speed and quality of decision-making. To that end, we needed to be embedded in, and deeply familiar with, the bank's decision ecosystem. We were given decision rights so that we could be held accountable. The transformation of our function coincided with the rise of the competencies associated with the Analytics Translator role, although we didn't call the role by that name.

Conclusion

McKinsey was right to call out the need for the Analytics Translator role to accelerate the ROI on data science. But it was wrong to articulate the role as that of a ‘data-savvy’ project manager. The reality is that the Analytics Translator is a decision scientist: an advanced practitioner of data analytics and data science with strong business domain knowledge and intimate knowledge of the data inputs and stakeholder interactions in decision-making. The Analytics Translator is the natural leadership transition for data analytics and data science practitioners seeking to create more impact.


Written by Eric Sandosham, Ph.D.

Founder & Partner of Red & White Consulting Partners LLP. A passionate and seasoned veteran of business analytics. Former CAO of Citibank APAC.
