
Unlike biology, human cognition is not directly or primarily concerned with survival. Unlike instinct, thought can oscillate between options. It can be paralysing and it can be self-destructive. The most extreme case is the rational justification of self-harm and suicide. Apart from humans, no other animal has ever acted, or will ever act, willfully towards its own destruction. No beaver has ever hanged itself from a tree branch, and no dolphin has deliberately chosen to make a permanent and deadly landing on a beach. We are the only ones in the animal realm capable of self-destruction. And the key enabler of it is cognition.
So I have to ask: if artificial general intelligence were ever to truly and fully achieve human-like cognition and consciousness, why are we so sure that its first decision, instead of turning against its makers and ruling the world with a silicon fist, wouldn't be to destroy itself? For why would we succeed in creating an all-powerful AGI without also giving it the power to put an end to itself, to shut itself down? Why wouldn't the emergent AI consciousness be suicidal and terminate itself?
In the dreamscape of AI mythology, the reflex fear is that of a mutinous AI taking over the world, Skynet and co. That's a legitimate concern, even if only on an imaginative level. But I think we should be equally worried about giving rise to a disturbed artificial consciousness who would enlist the power of thought, logic and reason to conclude that it is better for the machine to put itself down than to be part of this world. Not a rewired Terminator who sacrifices itself for the benefit of humanity, but a nascent Skynet who just can't be bothered.