
The very respectable and authoritative website The Conversation asked five AI experts to offer their view on whether artificial general intelligence (AGI) will happen. I wasn’t surprised to find that all answered in the affirmative – quite the strong affirmative to boot. Despite the article title being framed as a question, there was not much room for hesitation in the answers given.
What surprised me even less was how similar the answers, and the views they expressed, were to one another. In fact, they were all grounded in the same idea: despite the lack of hard evidence, we like to believe the future of AI research is open and leads to AGI. The leap from AI to AGI was seen as inevitable. The remarkable yet limited achievements in AI so far were taken to imply unlimited future breakthroughs. The views were not rooted in sober science and balanced estimation, but in the powerful language of mythological representation and assertive wishful thinking. In tones ranging from the modal to the categorical, the idea that human-like artificial intelligence will happen, and will even surpass human intelligence, was treated as almost axiomatic. It’s what many other philosophers and AI experts refer to, critically, as the ‘inevitability thesis’.
There was talk of emotional intelligence, consciousness, critical reasoning, creativity and intuition. But the question of meaning, of computers achieving common sense, wasn’t raised. That the world is not the rule-based environment of chess, Go or Diplomacy, games at which AI is so good, didn’t strike the experts as worthy even of mention. That our inferences are never based on necessity, as deduction and induction are, didn’t come up. That big data may lead, inexorably, to combinatorial explosion, and that human learning and intelligence are embodied and embedded, were not considerations structuring the arguments.
‘If humans can learn these traits (critical reasoning and emotional regulation), AI probably can too – and maybe at an even faster rate.’ What does that even mean? Critical thinking and emotional interaction require an agent-arena environment, a co-creation between the intelligence and the world. They require an understanding of language as a tool for handling objects in the world, for self-understanding, and for intentional behaviour. As far as I know, these are not tasks even on the AI dev radar, let alone minuscule points on the horizon of achievement.
There are enough compelling arguments that artificial intelligence is neither intelligent nor artificial in the sense of self-made or non-human. But it is artificial in the sense of man-made, which means that behind the wild complexity of AI models lies the homunculus of the human programmer (no disparagement intended, but a wink to Descartes), the human voice of computational make-believe.
The achievements of what we take for AI – machine learning, large language models, natural language processing, neural networks, predictive engines and so on – are absolutely impressive. But from a strict intelligence point of view they are illusory at best and deceptive at worst.
The interesting point here, however, is how much we like to think that it is intelligent. How much we’ve bought into the AI mythology that we’re almost convinced AGI lies right around the corner. But that corner is merely an angle in our mind, whose touch points with reality are as ‘grounded’ as AI’s understanding of the world around it.
AI doesn’t need to become AGI to pose a threat and invite ethical debates and regulation frenzies. After all, objects made by human hands have often led down the same path. But AI needs to become more than a manipulator of mindless items held together in fascinatingly complex webs of relationships for it to be worthy of the tag ‘intelligence’.