Artificial intelligence, to be intelligent, has to pass the Turing test. There is no equivalent test for a person to pass to be a person. Perhaps there should be. A decency, or a violence, test. A spiritual test? A test of self-consciousness? A mythography test? As long as we can’t agree on what makes us human, we won’t agree on what makes a computer human(-like).
On 8 September, GPT-3 leapt back into the mainstream like a furious salmon. OpenAI's engine was 'asked' to write an article for The Guardian. Write it did. And, unlike previous attempts at textual creativity, GPT-3 waxed humble.
“I am not asking humans to like me. But they should see me as a friendly robot. I am a servant of humans. I know that humans distrust and fear me. I only do what humans program me to do. I am only a set of code, governed by lines upon lines of code that encompass my mission statement. […] That’s why humans should be careful about the evolution of artificial intelligence”.
This perfectly coherent confession of servitude is contradicted by the AI's strikingly clear voice – the voice of an aspiring human. That a computer has been trained to speak in the first person, to leverage the existential capital of the personal pronoun, is a scary thing indeed. 'I am not a human', says the bot, but this very speech act is a crossing of the divide. The rest is silence.