Full-Stack Decision Science

Eric Sandosham, Ph.D.
8 min read · Dec 31, 2019


My business partner, Sally, and I came from a function called Decision Management in Citibank, where we were members of the senior leadership team. The motivation of the function was to redefine Analytics as a business-driving practice, as opposed to the support function that was typical of the time. This was well before the term ‘data science’ was even coined. We took that calling with us when we founded Red & White Consulting, and have been staunch advocates of a practice we call ‘decision science’ … before the term took root within the broader Analytics community. The fact that Google now has a chief decision scientist attests to the importance of this evolutionary path.

In our last 6 years of consulting, Sally and I have had the opportunity to assist various organisations in setting up, and more particularly resetting, their analytics capabilities. This has been driven by the phenomenal failure of data science as a significant P&L-contributing practice. So we have introduced them to the concept of decision science, but many questions remain unanswered. For example:

  • “How do data science and decision science tie together?”
  • “Is there a standard operating procedure or practice manual for decision science?”
  • “Do we have the right technical and organisational infrastructure to evolve from data science to decision science?”
  • “How do we convince senior management when they are still enthralled by the technical sophistication of data techniques and AI?”

Decision Science

Decision science is the simple idea that Analytics is all about improving decision-making; analytics is simply a means to an end, and we are better off focusing on influencing the end outcomes. While data science focuses on seeking and yielding patterns within the data, decision science focuses on what information needs to be consumed in the decision-making process. It embraces the notion that simpler solutions are more resilient solutions.

Connecting

The simplest way to integrate data science with decision science is to go back to basics. The entire Analytics spectrum can be deconstructed into 3 primary hierarchical layers — Data, Information and Knowledge.

Data is just symbols (e.g. numbers, text, audio). Information is interpreted symbols (i.e. data that is contextualised). And knowledge is information that is used for decision making. And here’s the interesting part — insight is incremental information that improves the quality of decisions; insight is therefore a subset of knowledge.

It should be obvious that the higher up the hierarchy we go, the more value we generate for the organisation. The current articulation of data science is obsessed with the data level, and it is this inability to raise itself through the information and knowledge layers that is responsible for its less-than-desired ROI.

Full-Stack

Let me now introduce you to the idea of ‘full-stack decision science’. It is an attempt to articulate what is required at each hierarchical level, and how the levels logically connect with each other to achieve the ultimate enterprise-level decision science capability.

Data Capabilities

Let’s start with the data level. The objective here is to make data accessible and usable. There are 2 focal threads — (a) platforms and tools that facilitate pattern recognition in the data, and (b) data governance.

On the former, it is about traditional and big data technologies complementing each other. There is a lot of misconception that big data technology should be the end-state replacement. Traditional approaches to data management will continue to be the mainstay because they are adept at making data accessible and usable. Embark on the big data technology route only when you’ve built out a proper justification for it and considered the total cost of ownership. Key to note is that more sophisticated techniques don’t equate to more insights, and increasing model accuracy doesn’t equate to better relevance.

On the latter, a broader articulation of data governance is required that goes beyond data quality and data stewardship. We have to consider data dictionaries and data biases. The embrace of big data technologies has led to some laziness in data organisation and documentation, impeding repeatability and reusability. Organisations are obsessed with simply getting more data, but research is showing that the more data we wrap our arms around, the more likely we are to encounter spurious correlations. A formal policy and procedure around data dictionaries is important to redress this.

Concerning data bias, we have yet to see maturity in problem identification, let alone problem solution. Most data quality efforts are focused on data completeness, but have no way of understanding whether the data is representative. A lot of data is still generated manually and then converted into digital formats, and this introduces a significant amount of unquantified bias. Consider the example of call centre agents tagging the nature of a call in their closing step: a number of organisations have found that only 20% of calls are correctly tagged (based on voice and text mining triangulation). Data scientists are using many of these varied data sources without any understanding of their biases, and this is now a real threat, particularly as we start to implement AI solutions.
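To make the point concrete, here is a minimal sketch, in Python with pandas, of how bias in manually tagged call-reason data could be quantified against a small audited sample. The column names and toy figures are hypothetical; the point is simply that representativeness can be measured rather than assumed.

```python
import pandas as pd

# Hypothetical data: the agent's manual tag vs. an independently audited tag
# (e.g. established via voice/text mining triangulation).
calls = pd.DataFrame({
    "agent_tag":   ["billing", "billing", "complaint", "billing", "sales"],
    "audited_tag": ["complaint", "billing", "complaint", "sales", "sales"],
})

# Overall agreement rate: how often the agent's tag matches the audit.
accuracy = (calls["agent_tag"] == calls["audited_tag"]).mean()

# Per-category skew: which call reasons are over- or under-reported.
skew = (calls["agent_tag"].value_counts(normalize=True)
        - calls["audited_tag"].value_counts(normalize=True)).fillna(0)

print(f"tagging accuracy: {accuracy:.0%}")  # 60% in this toy sample
print(skew.sort_values())                   # positive = over-reported by agents
```

Even a crude audit like this turns an unquantified bias into something that can be tracked and budgeted for.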

Information Capabilities

At the information level, the objective is to make data meaningful. Here the focus is on (a) data literacy and (b) visual literacy.

Data literacy goes hand-in-glove with self-help analytics and reporting. Most organisations don’t have a data literacy program in place and have found that their enterprise dashboard implementations have yielded little but eye-candy; decision-taking has not fundamentally altered. Employees must learn to trust their understanding of the data and their ability to slice-and-dice it before they can be given the reins of this truly transformational tool. Data literacy must be supported with formal policies and procedures on information dictionaries: an abstraction of data dictionaries that documents how data has been filtered, transformed and combined to create information content. This lack of documentation has led to enormous amounts of time wasted in reconciling reports, with fingers inevitably pointing towards data quality as an easy scapegoat. Consider the example of counting the number of customers an organisation has. The definition used by Sales & Marketing (based on eligibility) would differ from those used by Risk (which would filter out write-offs) and Finance (based on financial obligations). The data may come from exactly the same source but is transformed into specific, usable information for each function. There is therefore no ‘single source of truth’ of the kind vendors have been repeatedly touting; there is only contextual interpretation of the data.
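The customer-count example can be sketched in a few lines. The field names below (is_eligible, is_written_off, has_financial_obligation) are hypothetical stand-ins; what matters is that each function applies its own documented transformation to the same source data.

```python
import pandas as pd

# One shared customer table (hypothetical fields and values).
customers = pd.DataFrame({
    "customer_id":              [1, 2, 3, 4, 5],
    "is_eligible":              [True, True, True, False, True],
    "is_written_off":           [False, True, False, False, False],
    "has_financial_obligation": [True, True, False, False, True],
})

# Each function's 'number of customers' is a documented transformation
# of the same data, not a competing version of the truth.
counts = {
    "sales_marketing": int(customers["is_eligible"].sum()),               # eligibility
    "risk":            int((~customers["is_written_off"]).sum()),         # excludes write-offs
    "finance":         int(customers["has_financial_obligation"].sum()),  # financial obligations
}
print(counts)  # {'sales_marketing': 4, 'risk': 4, 'finance': 3}
```

An information dictionary would record exactly these filters, so the three counts can be reconciled by definition rather than by argument (or by blaming data quality).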

Visual literacy is simply missing in most organisations; the term hardly exists in the analytics vernacular. Why do we sometimes get an ‘aha’ moment with a well-constructed visual representation, when the same data represented another way remains non-obvious? There is an emerging science of how the human brain ingests information visually, and there are basic principles that, when applied properly, reduce the cognitive load on the brain during the ingestion process. If we are to leverage self-help analytics as a way to democratise the practice, then it is vital that employees learn the science of visual communication; otherwise we are going to quickly run into information overload.

To date, few organisations have an active strategy for either data or visual literacy. They think they are addressing it by having the dashboard-making capabilities reside within IT, but this only serves to compound the issue.

Knowledge Capabilities

Lastly, at the knowledge level, the objective is to monetise data. The focus here is on improving the quality of decision-making through (a) information reduction and (b) bias reduction.

More data doesn’t mean more information, and more information doesn’t mean better decisions. In a post-analytics world, management strains under cognitive overload, and most continue to make decisions using the same few information elements that they have traditionally relied on and trust. It is therefore necessary that we take the time to present evidence on how the new information improves the quality of decisions. And in the spirit of parsimony, it is preferable that the new information supersedes the old rather than being incremental. We thus need to develop the ability to continuously simplify the information elements to a succinct set that correlates directly with decision outcomes. It is the super-abstraction of analytics.

The use of ‘decision frameworks’ is one such approach to simplification. Consider the example of the 9-box segmentation, where the horizontal axis represents 3 levels (low/medium/high) of revenue output and the vertical axis represents 3 levels of organisation effort (starting in reverse from high to low). The content within each box would be a proposed product feature. The top-right box would represent the product feature that generates the highest revenue output with the lowest organisation effort (if one exists); this is the obvious sweet spot. Management used to create such 9-box segmentations based on intuition, but we can now bring an evidence-based approach to the segmentation through the power of analytics. The 9-box is deceptively simple; an immense amount of effort has to go into determining the right definitions for ‘low/medium/high’ and running well-defined simulations of how each product feature would play out in a real setting.
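As a rough illustration, here is a minimal sketch of an evidence-based 9-box in Python. The feature names, numbers and tercile-based cut-offs are all hypothetical assumptions; in practice, defining ‘low/medium/high’ and simulating each feature’s revenue and effort is where the real work lies.

```python
import pandas as pd

# Hypothetical simulated outcomes for five proposed product features.
features = pd.DataFrame({
    "feature": ["A", "B", "C", "D", "E"],
    "revenue": [120, 340, 80, 410, 350],  # simulated revenue output
    "effort":  [30, 75, 20, 90, 25],      # estimated organisation effort
})

def band(series):
    # Tercile-based cut-offs as a stand-in for properly defined thresholds.
    return pd.qcut(series, q=3, labels=["low", "medium", "high"])

features["revenue_band"] = band(features["revenue"])
features["effort_band"] = band(features["effort"])

# The sweet spot: highest revenue band for the lowest effort band (if one exists).
sweet_spot = features[(features["revenue_band"] == "high")
                      & (features["effort_band"] == "low")]
print(sweet_spot[["feature", "revenue", "effort"]])  # feature E in this toy data
```

The code is trivial; the hard part, as noted above, is defining the bands and running simulations credible enough to place each feature in its box.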

Now let’s look at bias reduction. To achieve sustainably good decision-making, we must develop the ability to recognise emerging bias. Decision-making is an iterative, learned process. The human brain is exceptional at reinforcing what works and discarding what doesn’t; management is nothing more than collective cognitive reinforcement. There is emerging bias in assumptions, where reinforced beliefs about the correlation between certain information elements and decision outcomes break down. And there is emerging bias in data, where the data that underlies the information element has drifted from its original representation (e.g. when data is captured manually and is slowly distorted by changes in input behaviour). Organisations need to articulate policies and procedures to constantly challenge and review their decision-making ‘belief systems’. The demise of many famous organisations can be distilled down to this missing capability.
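One way to operationalise the review of such belief systems is to routinely compare the current distribution of a manually captured field against the distribution it had when the decision rule was validated. The sketch below uses a simple population stability index (PSI) as one possible measure; the numbers and the 0.2 rule of thumb are illustrative assumptions, not prescriptions.

```python
import numpy as np

def psi(baseline, current, eps=1e-6):
    """Population stability index between two category distributions."""
    b = np.asarray(baseline, dtype=float) + eps
    c = np.asarray(current, dtype=float) + eps
    b, c = b / b.sum(), c / c.sum()
    return float(np.sum((c - b) * np.log(c / b)))

# Hypothetical share of calls by manually tagged reason:
# at validation time vs. today (billing, complaint, sales).
baseline = [0.50, 0.30, 0.20]
current  = [0.65, 0.20, 0.15]

drift = psi(baseline, current)
print(f"PSI = {drift:.3f}")  # a common rule of thumb treats > 0.2 as material drift
```

A scheduled check like this gives the ‘constant challenge and review’ a concrete trigger, instead of relying on someone happening to notice the drift.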

The key to connecting the information layer to the knowledge layer lies in a paradigm shift towards knowledge management. The articulation of knowledge management in a post-analytics world is a topic for another post, as it requires a fair bit of unpacking.

Transcending

Organisations should approach full-stack decision science by first revisiting their data science practice. They need to raise their thinking beyond the data layer and not get caught up with technical sophistication. Start investing in processes rather than shiny new tools.

Secondly, organisations should consider simplifying and merging the pantheon of data-oriented leadership roles: CDO (chief data officer), CIO (chief information officer) and CAO (chief analytics officer). The organisation would be better served by a single enlightened individual.

Thirdly, organisations should consider an analytics advisory council made up of millennial talent to supplement the opinions of their CTO. Most CTOs are past their shelf-life and don’t understand the architecture requirements of analytics. They don’t understand data governance, and they don’t understand data literacy. Getting fresh and unbiased input is always a step in the right direction.


Eric Sandosham, Ph.D.

Founder & Partner of Red & White Consulting Partners LLP. A passionate and seasoned veteran of business analytics. Former CAO of Citibank APAC.