The Problem with Automated Decision-Making

Eric Sandosham, Ph.D.

AI isn’t about decision-making.


Background

This has been troubling me for a while: a thought at the periphery of my cognition, an itch I can’t quite scratch … yet. After ruminating on it for quite a while, I’m now ready to write about it. So, here’s the thought: “Is automated decision-making really decision-making at all?” It’s not just semantics. It’s a paradigm-shifting perspective with implications for organisation leaders on whether their deep embrace of AI will translate into better decision-making.

And so I dedicate my 78th article to the topic of AI-driven automated decision-making.

(I write a weekly series of articles where I call out bad thinking and bad practices in data analytics / data science which you can find here.)

What is Decision-Making?

Despite all the rhetoric, decision-making is simply choice-selection under uncertainty. But that simple statement reveals some interesting conditions. Firstly, “choice-selection” implies that there are viable alternatives for consideration, each leading to sufficiently different outcomes or consequences. Secondly, “uncertainty” means the outcomes or consequences are not fully known at the point of choice-selection. If these two conditions do not hold, then we are NOT in the realm of decision-making.

Obviously, to reduce the uncertainty in outcomes (and consequences), you can conduct simulations with good input data and robust predictive algorithms. What you get is a range of outcome probabilities. But it will still not be perfect. The more complex the situation, the more interaction effects are likely missed in the simulation (which is, after all, a simplified or low-fidelity facsimile of the situation), and the less representative the probabilities become. Consider the simple example of jaywalking across a road with busy oncoming traffic. Simulating that outcome is quite straightforward. Now, consider trying to jaywalk the same road with the same traffic conditions, but at night in heavy rain, while carrying an umbrella and wearing slippers. It’s not easy to simulate the interaction of water and light (both of which are in motion), or of running while balancing an umbrella upright, or of slippers on a wet road. The decision-making process in this latter, more complex scenario is of course harder, with a lower probability of successfully crossing the road and a higher likelihood of false positives or miscomputation.
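The jaywalking example can be sketched as a toy Monte Carlo simulation. Everything here is illustrative: the base success rate and the penalty attached to each unmodelled interaction effect (rain, umbrella, slippers) are invented numbers, and the multiplicative-penalty model is an assumption for the sketch, not a claim about real traffic.

```python
import random

def simulate_crossing(n_trials=100_000, base_success=0.95,
                      interaction_penalties=()):
    # Each unmodelled interaction effect degrades the odds of a
    # successful crossing multiplicatively. All numbers are
    # illustrative, not measured.
    p = base_success
    for penalty in interaction_penalties:
        p *= 1 - penalty
    # Estimate the success rate by repeated random trials.
    return sum(random.random() < p for _ in range(n_trials)) / n_trials

daytime = simulate_crossing()
# Night + rain + umbrella + slippers: three hypothetical penalties.
night_rain = simulate_crossing(interaction_penalties=(0.10, 0.05, 0.08))
```

The point of the sketch is that every interaction effect you fail to model shifts the true probability away from the simulated one, and the gap widens as the scenario gets more complex.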

Automated Decision-Making?

Enter the term “automated decision-making”. It’s a nice buzzword introduced alongside the value equation of AI. I’ve come to realise that it’s a nonsensical term. AI is NOT automating any decision-making. Rather, it is simply executing pre-made decisions. The automated decision-making buzzword is designed to give the illusion that AI is “thinking”, and that we are delegating that thinking to a machine much as we would delegate it to a human subordinate.

While AI deals with probabilities, the end-state of decision-making is still a rule-based binary: if a probability exceeds a threshold (say, more than 50% likelihood), the next execution step commences. We allow for “automated decision-making” because we have already set all the threshold and trigger parameters, and simulated the outcomes, however imperfectly, to our satisfaction. AI might have helped us reduce the uncertainties, but more importantly, we have intentionally removed choice-selection from the process. And in doing so, we have removed the act of decision-making. Calling it “automated decision-making” is disingenuous.
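The runtime logic of such “automated decision-making” reduces to a comparison against a pre-set threshold. A minimal sketch, where the 0.5 threshold and the action labels are arbitrary placeholders:

```python
def execute_step(probability: float, threshold: float = 0.5) -> str:
    # The real decision was made when humans chose the threshold;
    # at runtime the system only compares a number against it.
    return "proceed" if probability > threshold else "hold"
```

However sophisticated the model that produced the probability, the branch itself is a fixed rule: no alternatives are weighed at runtime.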

When organisation leaders think about AI, they often consider the opportunities to automate decision-making, thereby taking the man-in-the-loop out of … well, the loop. The reality is that you need a whole bunch of “men” to work through the thresholds, and uncertainties, and outcomes, and then get the AI to encode and execute it. The AI is still ultimately a dumb terminal point. This is still the case with agentic AI, the latest buzzword: the cobbling together of separate AI processes into a single composite workflow, where the probabilistic output of one AI agent becomes the input to another, kicking off a chain reaction to achieve a target output. Some useful articles have emerged on this new topic that highlight the significance of the compounded errors that naturally arise from the chaining effect.
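The compounding effect in agent chains is easy to see with arithmetic. Assuming independent per-step accuracies (a simplification; the 95% figure is illustrative), end-to-end reliability is their product:

```python
def chain_reliability(step_accuracies):
    # With independent errors, reliability compounds multiplicatively:
    # each agent's mistakes propagate into every downstream agent.
    reliability = 1.0
    for accuracy in step_accuracies:
        reliability *= accuracy
    return reliability

# Five chained agents, each 95% accurate, fall below 78% end-to-end.
five_agent_chain = chain_reliability([0.95] * 5)
```

Under these assumptions, adding agents to a chain always lowers end-to-end reliability unless each new step is perfectly accurate.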

There is a tendency to confuse automated decision-making with manual standard operating procedures (SOPs). With the latter, the man-in-the-loop can and will still make exceptions to the procedure based on their judgement of changes to the operating context. In contrast, AI-driven automated decision-making isn’t built for exception handling, because the very nature of an exception is hard to anticipate and pre-determine.

Powering Up Performance

The pre-eminent decision psychologist Gary Klein, in his book “Seeing What Others Don’t”, shared a simple formula: performance improvement = reducing errors + increasing insights. When we employ AI for automated decision-making to power up organisational performance, we are only solving for error reduction, specifically execution error. We are therefore only solving for a subset of one half of the equation. Automated insights remain elusive. Genuine insight generation is a man-in-the-loop thing, born of lived experiences that shape hypothesis formation. I’ve yet to be convinced that (non-sentient) AI can operate successfully in this space.

Conclusion

Automated decision-making isn’t some breakthrough approach to decision-making. Organisations should not fool themselves into thinking that they have become smarter because of it. In fact, automated decision-making is not decision-making at all. It’s automated execution of pre-made decisions. And organisations will ultimately be judged on the quality of those pre-determinations.


Written by Eric Sandosham, Ph.D.

Founder & Partner of Red & White Consulting Partners LLP. A passionate and seasoned veteran of business analytics. Former CAO of Citibank APAC.
