There are many objections raised against artificial intelligence and other emerging technologies – so many that they defy classification. Underlying many of them is fear. Not all fear is bad, just as not all fear is good.
When it comes to artificial intelligence, there is the fear of turning humanity over to the machine. Advocates of AI object to this kind of fear, pointing out that (1) humanity is a construct, and (2) the emerging post-AI, post-human forms may serve us (and the planet) better.
But there is also the fear of turning over whatever it is we do to ‘formal systems and their programmability, making everything we do a function of the rules we are obeying’. This is John Caputo in his book ‘Hermeneutics: Facts and Interpretation in the Age of Information’. The idea, however, has been ripening for decades.
We are unprogrammable. We may be hard-wired, soft-wired, coded all the way down to our DNA and chemical building blocks. But there is more to us than that. Our divine potential for making mistakes and persevering in them shows that what makes us human is not a formal system of rules, even though we use many rules to rule ourselves and others.
We leave behind Big Data, but also a Big Mess – a mess that is unprogrammable, sits beyond arithmetic and defies algorithms, however much we try to reduce our splendiferous chaos to syntax, structure and variable declarations.
If a program violates a rule, it crashes. If we violate a rule, we may create another and move on to the next instruction. Our ability to leap beyond the program, sometimes to our peril, shows that in the human world of unprogrammability, where 1 + 1 may not always be 2, the system doesn’t have to crash.
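The contrast can be made concrete in a minimal sketch (Python is used here purely for illustration; the function names are hypothetical): a program halts on a rule violation unless its author wrote a rule for that violation in advance, whereas we improvise new rules on the fly.

```python
# Illustrative sketch: a program halts when a rule is violated,
# unless its author anticipated the violation ahead of time.

def divide(a, b):
    # The rule: division by zero is undefined.
    return a / b

# Unhandled, the violation crashes the program:
#   divide(1, 0)  -> raises ZeroDivisionError and halts

# Recovery is possible only because a rule for breaking the rule
# was written in advance -- the machine cannot improvise one.
def divide_with_fallback(a, b):
    try:
        return a / b
    except ZeroDivisionError:
        return float('inf')  # a substitute rule, chosen beforehand

print(divide_with_fallback(1, 0))  # inf
```

Even the "recovery" here is just another pre-written rule: the program never steps outside its formal system, which is precisely the point of the paragraph above.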