Help! My ECD is a robot. By @HFoenander
By Henry Foenander
I’ve noticed over the past year or so that there’s been a lot of talk within the ad industry about the role of Artificial Intelligence in creativity. The majority of articles suggest that creatives are, for the most part, safe from the pesky job-nicking robots. The logic behind this position is that, unlike other industries, creativity is fuelled by imagination, which A.I. lacks (for now). However, experts say the singularity is fast approaching. If that happens, things will get messy, so here’s my two cents on the subject.
A very clever bloke called John Searle came up with an equally clever thought experiment, in the hope of putting our minds at ease about A.I. It’s called the ‘Chinese Room Experiment’. And it goes a little like this…
First, assume you can’t read or understand Chinese symbols.
You’re now stuck in a room with a gap on one wall labelled ‘input’, and a gap on the other wall labelled ‘output’.
In the middle of the room is a book. The book has a list of input Chinese symbols, and next to that list are the corresponding output Chinese symbols (stay with me, it gets clearer).
Soon, cards with symbols start falling through the ‘input’ gap in the wall. You pick them up, look for the symbols in the book, find which output symbol they match with, and push that symbol through the ‘output’ gap.
Here is where it gets interesting. What you don’t know is that the exchange of these symbols is actually a conversation. The input symbol might mean ‘How are you?’ and the output symbol might mean ‘Fine, thank you’.
So what you are now doing is having a conversation in Chinese without knowing the language at all. You’ve got no idea what you’re saying, but as long as you’re following the instructions in the book, the conversation will make sense.
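The room’s whole job can be sketched in a few lines of code. This is a minimal toy sketch, not anything from Searle; the symbols and replies in the rulebook below are invented for illustration.

```python
# A toy Chinese Room: the "person" in the room is just a lookup against
# a rulebook, with no understanding of what the symbols mean.
# The symbol/reply pairs are invented examples for illustration.
RULEBOOK = {
    "你好吗?": "很好, 谢谢",  # "How are you?" -> "Fine, thank you"
    "再见": "再见",          # "Goodbye" -> "Goodbye"
}

def room(input_symbol: str) -> str:
    """Find the input symbol in the book and pass back the listed output."""
    # No entry in the book means the room has no rule to follow.
    return RULEBOOK.get(input_symbol, "???")

print(room("你好吗?"))  # -> 很好, 谢谢
```

The point of the thought experiment survives the translation to code: the `room` function holds a perfectly sensible conversation without a single line in it that understands Chinese.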
Back to that clever John Searle bloke. He reckons this proves that computers can never be aware, or have consciousness or imagination. Because, like you stuck in that room, the computer is just taking input symbols, following the instructions, and outputting the matching responses. It has no understanding of the context of what it is doing, yet it’s doing it accurately.
Now, this argument is meant to show that, by definition, a computer can never be imaginative. But what if we turn that on its head? What if, instead, we were to say: if a computer has imagination then, by definition, it can no longer be classed as a computer; it must be classed as a mind.
This would mean that creative jobs aren’t so safe after all. We are already seeing evidence of computers understanding context: Google reckons it has made a computer that dreams, and some programs claim to understand natural language. If they can dream and understand, they would fit the definition of a mind.
So no, computers aren’t going to steal your creative job. But an artificial mind just might… 01101001010100101010111000001011100110000011110101010101110011001101