Why This Misguided Fascination with AGI?

Eric Sandosham, Ph.D.

--

Misunderstanding intelligence and consciousness.


AI is all the rage now, and everyone is an armchair expert on its implications for the future of the human race. I’m troubled by the various uninformed and misinformed arguments made by smart people about its pros and cons, and more so when it comes to the possibility of creating sentient artificial intelligence. Sentient = conscious, self-aware.

Elon Musk thinks that OpenAI will soon, if not already, stumble into Artificial General Intelligence (AGI) and awaken an artificial sentient being that will kill or enslave us all. Or something like that. Many seemingly intelligent people are jumping on this doomsday bandwagon, which baffles me. It may be that our understanding of AI has been so warped by sci-fi stories and movies that we can no longer tell reality from fiction. To most lay people, AI means Commander Data from Star Trek (cold and intelligent), C-3PO from Star Wars (full of personality, playing a supportive, non-threatening role) or Skynet from Terminator (authoritarian, with strong exterminating tendencies). Every single one of these movie personas is a thinking, conscious entity endowed with personhood.

But the reality of how we are developing AI is far from that movie hype. Our current (mathematical) pursuit of AI and Gen AI won’t get us anywhere near that manifestation, and all this fear-mongering by the so-called intelligentsia is misplaced and not helpful at all.

And so I dedicate my 45th weekly article to our misguided fascination with AGI.

(I write a weekly series of articles where I call out bad thinking and bad practices in data analytics / data science which you can find here.)

Artificial Intelligence

What AI researchers are currently doing is simply attempting to simulate human intelligence. But the grand objective of AI research is to develop a machine that can think and act like humans, and perhaps even surpass them. It has to be genuinely “intelligent”. Is simulated intelligence intelligence? And what exactly is intelligence? Based on my readings, I have distilled and unified the prevalent definitions of intelligence as follows:

  • Intelligence can be defined as having agency to adapt and survive in an environment based on the ability to learn, reason and problem-solve using integrated cognitive functions such as perception, attention, memory, language, and planning.

Based on the above definition, it is quite obvious that today’s AI is NOT intelligent. Period. Today’s AI lacks agency and, fundamentally, it lacks the ability to reason (numerous papers have been written about this). As the name suggests, artificial intelligence is not real intelligence; it’s not even a new kind of intelligence. Yet.

Artificial General Intelligence

The AI systems that we have today all fall under the category of Artificial Narrow Intelligence (ANI). They are narrow simply because they are purpose-built to do only a specific set of tasks. They are not adaptable, and they have no agency to evolve and survive.

Enter stage left: Artificial General Intelligence (AGI). McKinsey says “AGI is AI with capabilities that rival those of a human. While purely theoretical at this stage, some day AGI may replicate human-like cognitive abilities including reasoning, problem-solving, perception, learning, and language comprehension.” This is the AI that Elon Musk is worried about. He believes that the company that discovers AGI will have the ability to dominate the world, and may then find itself unable to rein it in at some future stage. Researchers are divided as to when AGI might be attained: years, decades, a century, or never. Clearly we are clueless and have no convergence in our approach.

Artificial Consciousness

Does intelligence require consciousness? Does consciousness require intelligence?

Researchers conflate AGI with artificial consciousness. This is probably the single biggest issue in AI research and implementation today: the conflation of intelligence and consciousness. A number of researchers believe that as something becomes more and more intelligent, it will ultimately become conscious. There is no supporting evidence for this claim. And at present, we are only simulating intelligence.

Our current attempts to define and understand natural consciousness have been largely driven by philosophers (e.g. Descartes, Hume, Dennett, Chalmers). Philosophers have long been fascinated by the subjective inner life we experience in our minds, trying to tease it apart and to build theories of its nature and existence. While useful at the outset in identifying and framing the problem, philosophy is NOT science, and cannot sufficiently move the needle towards a proper understanding of consciousness. Concepts like qualia do not pass scientific-reasoning muster. Information-theoretic and quantum theories of consciousness may sound scientific but are not; they lack a robust first-principles basis to reason from. Philosophers discussing and trying to unpack consciousness today is the equivalent of Generative AI hallucinating!

The Australian philosopher David Chalmers is famous for his statement of the “hard problem of consciousness”: the problem of explaining the relationship between physical brain processes and subjective experience (why an experience of red rather than green, for example). Personally, I think it’s a hard problem because we define it as such; that is circular reasoning. I am of the opinion that Jeff Hawkins got it right. In his book A Thousand Brains, he provides credible empirical evidence that the brain is an organic simulation engine. It simulates a facsimile of our environment (our eyes have a narrow field of vision, and things on the periphery are simulated and refreshed). It seems likely that, to adapt and survive, the simulating brain would have to simulate the self within the simulated environment. Simply put, we need the ability to imagine ourselves in various potential situations. I posit that when the “self” is simulated, we get consciousness and self-identity. (It’s interesting to note that we have no sense of self-identity before the age of four, maybe because the brain, like a computer, is still developing its ability to simulate the self.)
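To make that intuition concrete, here is a deliberately toy sketch of a simulator whose world model includes a model of the agent itself. This is my own illustration of the idea, not Hawkins’ actual theory, and every class and value in it is hypothetical:

```python
# Toy illustration of the "simulated self" intuition.
# Not Hawkins' model; all names and values here are hypothetical.

class SimulatingBrain:
    """Simulates an environment, and includes the agent itself in that simulation."""

    def __init__(self) -> None:
        # A crude facsimile of the environment...
        self.world_model = {"location": "room", "hazard_ahead": True}
        # ...and, crucially, a model of the agent as one more simulated object.
        self.self_model = {"location": "room", "energy": 1.0}

    def imagine(self, action: str) -> dict:
        """Run the self-model forward under a hypothetical action, without acting."""
        imagined = dict(self.self_model)
        if action == "walk_forward" and self.world_model["hazard_ahead"]:
            imagined["energy"] -= 0.5  # predicted cost, never actually incurred
        return imagined

brain = SimulatingBrain()
# The agent evaluates a hypothetical involving *itself* before committing to it.
print(brain.imagine("walk_forward"))  # {'location': 'room', 'energy': 0.5}
```

The point of the sketch is only that imagining oneself requires the “self” to exist as an object inside the simulation, which is where the argument above locates consciousness.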

Our current computational articulation of AI has no agency to adapt and survive. It has no need to create a digital twin (i.e. a simulation) of the physical environment (unless it is embodied, as in AI robotics), and thus no requirement to simulate a “self”. As such, it is unlikely that it will ever be conscious.

General Purpose AI

If the intent is merely to solve for a general purpose AI, then we already have many of the ingredients today. We can already construct an ensemble AI composed of different specialised narrow AIs, and then orchestrate them as a whole, as in the sketch below. To pick up a new ability, we would need to “insert” a new specialised AI into the mix; this approximate version of AGI would not be able to “evolve” a new specialised algorithm on its own. But perhaps that’s the right line of inquiry to pursue to see how that “self-evolution” might take place.
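As a rough illustration of that ensemble idea, here is a minimal sketch of an orchestrator that routes tasks to purpose-built narrow components. All names here are hypothetical stand-ins, not a real framework:

```python
# Minimal sketch of a "general purpose" AI assembled from narrow parts.
# All names are hypothetical illustrations, not a real library or API.
from typing import Callable, Dict

class Orchestrator:
    """Routes each task to a purpose-built narrow AI component."""

    def __init__(self) -> None:
        self._specialists: Dict[str, Callable[[str], str]] = {}

    def register(self, task: str, specialist: Callable[[str], str]) -> None:
        # "Inserting" a new specialised AI is a manual act of engineering;
        # the ensemble cannot evolve a new specialist on its own.
        self._specialists[task] = specialist

    def solve(self, task: str, payload: str) -> str:
        if task not in self._specialists:
            raise ValueError(f"no specialist registered for task: {task}")
        return self._specialists[task](payload)

# Usage: each new ability is wired in by hand, not learned by the system.
ai = Orchestrator()
ai.register("translate", lambda text: f"[translation of: {text}]")
ai.register("summarise", lambda text: text[:40] + "...")
print(ai.solve("summarise", "A long document about general purpose AI"))
```

Note what the design makes explicit: the breadth of this system is an engineering property of the registry, not an intelligence property of any component.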

The late Marvin Minsky, one of the godfathers of AI, developed a theory of natural intelligence as the sum of “mindless” parts known as agents; he called it the Society of Mind. We know that the brain is indeed such a construct, with different regions dedicated to different specialised activities. We also know that the human brain is composed of two independent brains (i.e. the two hemispheres), each with its own personality, even though, for the most part, we experience our “inner self” as a singular entity.

We can work towards solving for general purpose AI without necessarily solving for genuine general intelligence or consciousness.

Conclusion

Based on everything I’ve read so far, I’m not convinced that we will achieve AGI (as popularly defined) through the current algorithmic route we have taken. Notwithstanding the amazing breakthroughs achieved in Large Language Models like ChatGPT and Gemini, these are still narrow AI, built for a class of specialised activities. Even if we could achieve an approximate general purpose AI, how would we introduce the randomness and survival pressure needed to stimulate evolution (a bare-bones sketch of what that might mean follows below)? And would that then evolve towards real intelligence? Truth be told, this fascination with creating sentient AI will never stop. The urge to create runs strong in us: the desire to design and engineer something from the ground up.
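In software terms, randomness and survival pressure usually mean an evolutionary loop: random mutation plus selection against a fitness criterion. The sketch below is purely illustrative; the fitness function and all parameters are arbitrary stand-ins:

```python
# Bare-bones evolutionary loop: random variation plus survival pressure.
# Purely illustrative; the fitness target and parameters are arbitrary.
import random

def fitness(genome: float) -> float:
    # Stand-in survival criterion: closeness to an arbitrary target value.
    return -abs(genome - 42.0)

population = [random.uniform(0, 100) for _ in range(20)]
for generation in range(100):
    # Survival pressure: only the fittest half makes it to the next round.
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # Randomness: offspring are mutated copies of the survivors.
    population = survivors + [g + random.gauss(0, 1.0) for g in survivors]

print(round(max(population, key=fitness), 2))  # drifts towards 42.0
```

The open question still stands, though: selection like this optimises a fixed criterion; it does not, by itself, evolve genuinely new abilities.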

--


Eric Sandosham, Ph.D.

Founder & Partner of Red & White Consulting Partners LLP. A passionate and seasoned veteran of business analytics. Former CAO of Citibank APAC.