An old Latin dictum often attributed to the philosopher Seneca says: Errare humanum est. To err is human. Don’t beat yourself up, the Romans would’ve said, but don’t make it a habit either. Perseverare diabolicum.
Error in, error out. This is not just a principle of logic, where false premises lead to false conclusions and faulty reasoning leads to deception. It is also a typically human fact of life: wherever people get involved, error finds a home. On a strictly philosophical level, this is a tautological statement, for error is a human judgement that cannot exist outside the human framework.
But the rise of machine learning and so-called artificial intelligence brings a novel nuance to the old Latin motto.
Machine inference, which underpins all machine learning, neural networks and large language models, is nothing but deductive and inductive reasoning. We deduce from a general rule its binding particulars, and we induce from instances a binding rule. The more data, the surer we are that the conclusions are right. Big data makes these two types of inference watertight in their application.
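The two inference modes described above can be sketched in a few lines of code. This is a minimal illustration, not anything drawn from an actual machine learning system; the function names and the swan example are my own hypothetical choices, and the classic swan case also hints at why induction is fallible in a way the next paragraph takes up.

```python
def deduce(rule, case):
    """Deduction: apply a general rule to a particular case.

    From 'all humans err' and 'Seneca is human', conclude that Seneca errs.
    """
    return rule(case)


def induce(instances):
    """Induction: generalise a rule from observed instances.

    If every swan observed so far is white, infer the general rule
    'all swans are white' -- a rule that holds only until a
    counterexample turns up.
    """
    colours = {colour for _, colour in instances}
    return colours == {"white"}


# Deduction: the general rule binds the particular case.
all_humans_err = lambda person: True  # the general rule, stated as a predicate
seneca_errs = deduce(all_humans_err, "Seneca")  # True

# Induction: many confirming instances yield a (provisional) general rule.
observed = [("swan1", "white"), ("swan2", "white"), ("swan3", "white")]
all_swans_white = induce(observed)  # True -- until a black swan appears
```

Note that more data only ever strengthens the inductive rule; nothing inside either function can question the rule itself, which is precisely the limitation the argument turns on next.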
But human intelligence is not exclusively inductive and deductive. We get it wrong all the time, not because we apply these two rules less well than an AI would, but because we do many other things which have not been formalised in computing language. One of them is working with hypotheses: guesses which might not be true, but which might turn out to be true only after being tested against a host of other hypotheses, while the system evolves, self-optimises and effectively tries to solve a problem with both inductive and deductive inference suspended. That process is often called abductive reasoning, or ‘detective thinking’. Computers can’t do it, and if many AI experts are to be believed, we are far from getting there. That’s why computers do extremely well at rule-based tasks, like chess or language processing, but are 0% effective at solving any kind of problem that lies outside the familiar (enclosed) space in which they were trained. Framed problems can be tackled; reframing them is at the moment impossible.
If computers could meaningfully tell us about the state of their intelligence, the first thing they’d tell us is that they hate error. The very error we lament as human. But a cognitive agent cannot circumvent error; it has to go right through it and come out on the other side, like Dante through the Inferno and out into the light of Purgatorio and Paradiso.
Some people worry that soon we might not be able to tell the difference between computer and human output. Wrong. Look for the error. Don’t look for the falsehood, for where garbage goes in, garbage always comes out. But where error is, there the human is also. And until we have taught AI to learn from and make errors, not just to avoid them, we haven’t moved beyond make-believe.