Humans make mistakes a computer would never make. Whatever we do, we will most certainly never overcome this essential limitation. Computers make mistakes a human would never make. Depending on how technology evolves, computers may one day overcome this limitation. Have you heard this before? This narrative is gaining ever more ground.
A human mistake is not always about the evidence used. A computer mistake is always about the evidence used. More evidence means fewer mistakes, which means a lower error rate. But human mistakes are not always reducible to errors. A person can feel they have made a mistake without seeing why, or only come to realize it years later. The silicon world cannot grasp that a mistake goes deeper than an error in the CPU – something identifiable, measurable, debuggable.
Mistakes are errors of significance, which puts the emphasis on significance, not on errors. On the specific, not the generic. Mistakes are fundamentally inexplicable within a rational framework or against a logic benchmark.
The human mistake is a product of consciousness. The computer error is a result of computing. At the moment, consciousness cannot be computed, and if we believe neuroscientist Christof Koch, it can never be computed.
When a computer encounters an error, it has two options under the current tech-dispensation: to ignore it or to correct it. If it ignores it, it does so because it is not equipped to deal with it. If it corrects it, it does so because it is able to – until the error is reduced. When a human encounters an error, she also has two options: to ignore it or to try to correct it. She merely tries to correct it because it is never clear whether the tools are there to tackle it – so a game of trial and error begins. If she ignores it, on the other hand, she does so out of a lack of will, setting aside the desire to improve the system and to reduce the error level. A human may wilfully contemplate the degradation of her own system, knowing very well that the tools are there to improve it but not being willing to use them.
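The machine side of this contrast can be made concrete. Below is a minimal, purely illustrative sketch (the function names and thresholds are my own, not the author's) of the two options available to a computer: ignoring an error it is not equipped to handle, or correcting it by iterating until the error falls below a tolerance.

```python
# Illustrative sketch of the two machine options described above.
# All names (ignore, correct, tolerance) are hypothetical, for illustration only.

def ignore(compute, fallback=None):
    """Option 1: the machine ignores the error, returning a fallback,
    because it is not equipped to deal with the failure."""
    try:
        return compute()
    except Exception:
        return fallback

def correct(estimate, target, step=0.5, tolerance=1e-6):
    """Option 2: the machine corrects, repeatedly shrinking the error
    (the distance to the target) until it drops below the tolerance."""
    while abs(target - estimate) > tolerance:
        estimate += step * (target - estimate)
    return estimate
```

For example, `ignore(lambda: 1 / 0)` swallows the division error and returns the fallback, while `correct(0.0, 10.0)` halves the error on every pass until it is negligible. Note what the sketch cannot express: a reason to refuse correction even though the tools are at hand.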