The Great AI Divide
The fear and the fear-not.
There are broadly three groups emerging around the topic of AI: those who believe in its power for good, those who believe in its power for evil, and those who don’t believe it has any powers at all.
Research into and implementation of AI have been around for more than 50 years, so arguably we’ve had time to understand the technology, its implications and the issues it raises. And yet, we find ourselves with amnesia whenever the topic of AI comes up in the current discourse.
And so I dedicate my 55th article to the question of why AI seems so controversial and divisive.
(I write a weekly series of articles calling out bad thinking and bad practices in data analytics and data science, which you can find here.)
Definitions
As a data analyst, I always like to get the definitions “out of the way” to set the context. While the grand objective of AI is to replicate human intelligence (and in so doing, perhaps surpass it), human intelligence itself is a broad collective of abilities, some of which are still relatively unexplored. The best of AI research thus far has only been able to mimic human intelligence. Alan Turing was spot on when he used the term “The Imitation Game” in his seminal paper on the possibility of machine intelligence. Predictive AI mimics aspects of our computational abilities, while Generative AI mimics our communication abilities. We have not yet cracked reasoning, perceptual or musical abilities, simply because we don’t yet understand how they work. I read that neuroscientists recently discovered that humans have vastly different, rich inner voices that fundamentally shape our perceptual abilities. They are also discovering that dreaming is essential for memory formation and for making sense of the physical world.
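For the code-inclined, here is a minimal sketch of that predictive-versus-generative distinction. The libraries (scikit-learn and Hugging Face transformers), the model and the toy data are purely illustrative assumptions on my part, not tools discussed in this article.

```python
# Toy illustration only: "predictive" AI computes an output it has learned to
# estimate from inputs, while "generative" AI produces plausible language.
# The libraries and sample data below are assumptions for this sketch.

from sklearn.linear_model import LinearRegression

# Predictive AI: mimic a computational ability (estimate a number from features).
X = [[1], [2], [3], [4]]              # e.g. years of experience
y = [40, 50, 60, 70]                  # e.g. salary in thousands
model = LinearRegression().fit(X, y)
print(model.predict([[5]]))           # computes a value it was never shown

# Generative AI: mimic a communication ability (continue a piece of text).
from transformers import pipeline
generator = pipeline("text-generation", model="gpt2")
print(generator("Artificial intelligence is", max_new_tokens=20)[0]["generated_text"])
```

The point is not the libraries; it is that the first block mimics computation while the second mimics communication, which is the boundary the rest of this article leans on.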
Now, let’s also unpack my opening sentence about AI-for-good, for-evil or for-nothing. AI-for-good means it makes us more human and more productive, and improves the economy and our lives. AI-for-evil means it takes away our economic power (e.g. massive job losses) and our civil liberties. AI-for-nothing means it’s not really going to matter one way or the other; it will just be business as usual.
Mirror on the Wall
AI has been a mirror on the wall. Remember when IBM unveiled Deep Blue, the chess-playing supercomputer? It was touted as a game changer (pun intended) in achieving human-like intelligence. Deep Blue didn’t make a dent in our daily lives, but it taught us that chess-playing was more about memory and raw computation than about critical or strategic thinking. So long as the computer could store and evaluate more moves than a chess grandmaster, there was no way a human could beat it.
And then there was IBM Watson, the Jeopardy!-playing supercomputer. It was touted as a reasoning machine. Watson has since been declared a commercial failure, but it taught us that reasoning is more than making accurate predictions, particularly in situations with no objective definition of right and wrong outcomes, such as financial advice. (Watson had a good start in medical assessments but failed miserably in the financial services industry.)
ChatGPT is all the rage now. It may turn out to be a commercial failure too, but regardless, it’s teaching us that creativity is much more than complete sentences with good grammar or stylish colour-coordinated artwork.
And so with each AI epoch, we learn something about the nature of human intelligence that we didn’t quite understand before. In a way, AI allows us to test and evaluate the theories of human intelligence in a very real and practical way.
The Great Divide
Prominent voices are trying to get people to choose one of two opposing ends: AI-for-good versus AI-for-evil. They are trying to get the AI-for-nothing crowd off the proverbial fence. I have friends and ex-colleagues on the extreme ends of this great AI divide; these are people who are digitally literate and data literate, to say the least. I am therefore struck by how each of them arrived at their position. What signals do they perceive that make them conclude that AI will destroy civilisation, or that it will be mankind’s greatest achievement? Or is it just pre-conditioned bias? For example, the generation that grew up with the movie hit Terminator is more likely to see AI as a threat, while the generation that grew up with Star Trek: The Next Generation may view AI more positively because of Commander Data. Research indicates that the younger the generation, the more comfortable they are using AI solutions. The dark side of this is that although Gen Zs are more pro-AI, they also hold a stronger belief that AI will displace their jobs.
Based on my readings and conversations, I hypothesise that the great AI divide is driven by two broad considerations:
1. How a person perceives AI intersecting with their work activities.
2. How comfortable a person is with AI-based decisioning.
Point (1) translates into the fear of job disruption, while point (2) translates into the loss of decisioning control. Job disruption can be rationally assessed and quantified, and hence managed. In fact, I’ve written here about how Gen AI isn’t necessarily going to replace you, despite what the superficial signals indicate.
But the discomfort with AI-based decisioning is a psychological one. Despite empirical evidence that AI-based decisioning generally carries less bias than human decision-making, there are those, particularly in the older generation, who simply won’t accept it. Consider the judicial system. Despite overwhelming evidence that justice is neither blind nor fair, and that AI-based decisioning could bring much-needed impartiality to the process, very few individuals would be comfortable having an algorithm pass judgment and sentence on them or their loved ones; everyone thinks their situation is unique and exceptional, and that mercy (a form of bias) should be shown to them.
Conclusion
Your stand on whether it’s AI-for-good or AI-for-evil reveals, in many ways, something about your personal biases and your familiarity with the technology. There will be more talk than action; the debate will rage on and, in time, fizzle out. As with past technologies, the impact will land somewhere in the middle, with a large dose of AI-for-nothing.