A First-Principles Approach to Thinking About AI
Organisational herd instinct is causing us to completely misunderstand the nature and utility of AI.
Background
Everyone is an armchair expert on AI these days. People wax lyrical about how this or that AI is just so awesome, and how they’ve been using it to make their lives more productive. Others talk about how work will be either transformed or disrupted, and how vast numbers of people are going to be put out to pasture. Many of these claims and assessments lack a real understanding of the fundamentals.
As the subtitle suggests, we are being herded like cattle in a pen into believing ludicrous claims about the transformational impact of AI. And so I dedicate my 83rd article to unpacking some first-principles thinking on AI and its utility.
(I write a weekly series of articles where I call out bad thinking and bad practices in data analytics / data science which you can find here.)
First Principles
I’m a first-principles kind of guy. That mindset is particularly helpful when engaging in data-driven problem-solving. So here are a handful of first principles about AI that can help you decide whether you should be pursuing it as an option for your organisation.
Reducing Uncertainties
I will use the collective term AI to include what we now call “traditional” AI, i.e. predictive analytics. As currently conceived, everything about AI today is underpinned by Machine Learning as a class of algorithmic techniques, all with the primary objective of reducing uncertainty in predicting an outcome. The same goes for Generative AI: all it does, however sophisticated the technique may be, is predict the next “token”. A “token” could be a part-word, a cluster of pixels, or a musical note, and the model is trying to predict it so as to reduce uncertainty against an objective the user articulates.

The million-dollar question, as they say, is whether reducing uncertainty is going to be impactful. For example, if we could reduce uncertainty in predicting the stock market, we could make a killing, financially speaking. This first principle is also important for understanding that there is no such thing as a “reasoning machine” (on current AI architecture), simply because uncertainty reduction is an entirely different concept from critical reasoning. Gen AI only mimics reasoning, through the act of predicting the next “line of thought”.
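To make the token-prediction point concrete, here is a minimal sketch of a toy hand-coded bigram model. The vocabulary and probabilities are invented purely for illustration, but the mechanism, sampling a likely next token from a probability distribution, is what Gen AI does at vastly greater scale:

```python
import random

# A toy "language model": for each context token, a probability
# distribution over possible next tokens. Real models learn these
# distributions from enormous corpora, but the underlying job is
# the same: predict the next token, nothing more.
NEXT_TOKEN_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "market": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"ran": 0.7, "sat": 0.3},
}

def predict_next(context: str) -> str:
    """Sample the next token from the model's distribution.
    This is uncertainty reduction, not reasoning: the model merely
    favours tokens that were more frequent in its observations."""
    dist = NEXT_TOKEN_PROBS[context]
    tokens, probs = zip(*dist.items())
    return random.choices(tokens, weights=probs, k=1)[0]

# Generate until we hit a token with no known continuation.
sentence = ["the"]
while sentence[-1] in NEXT_TOKEN_PROBS:
    sentence.append(predict_next(sentence[-1]))
print(" ".join(sentence))  # e.g. "the cat sat" or "the market"
```

Nothing in this loop understands cats or markets; it only narrows down which token is most likely to come next.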
Probabilistic is not Deterministic
All AI is based on probabilities. It reduces uncertainty by improving the odds of accurately predicting a target outcome. But we live in a world that is both probabilistic and deterministic (deterministic meaning the probability is essentially 100%). For example, 1+1=2 is deterministic. Using AI to solve deterministic problems is inappropriate, and yet people get excited when they use Gen AI to solve maths problems, actually trusting that a probabilistic machine can consistently generate deterministic outputs. The principle of probability is that it is not axiomatic but derived from repeated observations, which implies that AI cannot “know” anything that is not part of its input dataset (of observations). And yet there are those touting that AI can know things outside the collective human knowledge base.
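A minimal sketch of the difference, assuming (purely for illustration) a 5% error rate for the probabilistic machine: the rule-based computation is right every time, while the probabilistic one is merely right most of the time:

```python
import random

# Deterministic: a rule-based computation returns the same answer
# every single time. It is axiomatic; no observations are needed.
def add(a: int, b: int) -> int:
    return a + b

# Probabilistic: a stand-in for a machine that has only *observed*
# examples of addition and so can only estimate the answer. The 5%
# error rate is an invented assumption, purely for illustration.
def probabilistic_add(a: int, b: int, error_rate: float = 0.05) -> int:
    if random.random() < error_rate:
        return a + b + random.choice([-1, 1])  # an occasional confident mistake
    return a + b

print(all(add(1, 1) == 2 for _ in range(1_000)))                # True, always
print(all(probabilistic_add(1, 1) == 2 for _ in range(1_000)))  # almost certainly False
```

For deterministic problems, “right most of the time” is simply the wrong standard.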
Brute Force Computing is not Efficient
Many high-end AI solutions use a technique known as artificial neural networks (ANNs). ANNs have been touted as being able to reduce uncertainty in all kinds of complicated scenarios. While that is generally true, they do it through brute-force computation. An ANN tries to mimic human brain processes; it takes over what we can already do, only faster. Gen AI is built on ANNs. We want to make AI “think” like us. But our human brains are inherently inefficient at solving certain kinds of problems, like predicting the stock market. So what do we do? We throw ever more computing resources at an inefficient algorithmic architecture, and we call it progress. The pursuit of AI utility should be about using the machine to do things that are computationally inefficient for us humans to do. This is why “traditional” predictive AI that doesn’t use ANNs is going to be more useful: it supplements and complements our cognitive architectural shortcomings.
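To put rough numbers on the brute-force point, here is a back-of-envelope sketch comparing a direct rule with a small dense network approximating the same rule. The layer sizes are my own assumption, chosen only to illustrate the order-of-magnitude gap:

```python
# Back-of-envelope arithmetic: evaluating a simple deterministic rule
# directly vs. approximating the same rule with a small fully connected
# network. Layer sizes are assumptions chosen purely for illustration.

def rule_flops() -> int:
    # y = 2*x + 1 : one multiply plus one add
    return 2

def ann_flops(layer_sizes: list) -> int:
    # A dense layer mapping n inputs to m outputs costs roughly
    # n*m multiplies + n*m adds = 2*n*m floating point operations.
    return sum(2 * n * m for n, m in zip(layer_sizes, layer_sizes[1:]))

print(rule_flops())               # 2
print(ann_flops([1, 64, 64, 1]))  # 8448: over 4,000x the direct rule
```

Even this tiny hypothetical network burns thousands of operations to approximate what one rule computes exactly, and frontier models scale that waste up by many orders of magnitude.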
Utility of AI
So what do these first principles tell us? Firstly, the probability of (current) AI replacing humans is extremely slim. AI is an uncertainty-reduction tool, but the real world and real work are so much more than reducing uncertainty. In Aug 2024, I wrote an article entitled “Why AI Won’t Take Your Job”, because humans spend a great deal of time engaged in problem-defining, critical reasoning and knowledge creation, none of which can be replicated by the current architecture of AI. A recent article published in Vox supports this.
Secondly, the partially deterministic world in which we live benefits from a rule-based approach to computation. A recent article published in Futurism shows how AI search engines are spectacularly wrong compared with Google Search’s rule-based approach. For example, Elon Musk’s Grok 3 was wrong 94% of the time, while ChatGPT was wrong more than 60% of the time.
Conclusion
My first-principles thinking leads me to believe that work isn’t going to be radically transformed; it will slowly mutate and evolve, as it always has with the introduction of new technology, and that may take a generation or so. My first-principles thinking leads me to believe that adoption isn’t going to be anywhere near what the AI vendors project, simply because the expected utility is off the mark. And my first-principles thinking leads me to believe that the man+machine paradigm is the right one, as each complements the other’s competencies.