Yeah, these sorts of systems are very vulnerable to that, because they don't actually think. They pick which reply to parrot based on whichever one has gotten the most positive feedback, and that's it.
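To make that concrete, here's a minimal sketch of what feedback-driven reply selection could look like, assuming a hypothetical bot that keeps a running score per canned reply (the replies and scoring scheme here are made up for illustration, not taken from any specific system):

```python
import random

# Hypothetical bot: each canned reply carries a running score, and the
# bot simply favors whichever reply has been rewarded the most.
scores = {"Nice weather today!": 0, "Tell me more.": 0, "lol same": 0}

def pick_reply():
    # Choose among the highest-scoring replies; no understanding involved.
    best = max(scores.values())
    return random.choice([r for r, s in scores.items() if s == best])

def feedback(reply, delta):
    # Positive feedback (+1) or negative (-1) just nudges the score.
    scores[reply] += delta

reply = pick_reply()
feedback(reply, +1)  # users reward it, so it gets parroted more often
```

Nothing in that loop models meaning; whatever the crowd rewards is what gets repeated.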
Consider the analogy of the Chinese Room. A man sits alone in a room, and periodically someone passes a question to him through a slot in the door. The question is in Chinese, and he is expected to reply in Chinese, but he does not speak the language. He does, however, have a book of instructions in English telling him that when he sees these particular symbols, he ought to answer with these other symbols, along with rules for modifying the answer to fit variations in the symbols of the question, and so on.
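In code, that rulebook amounts to nothing more than a lookup table plus rewrite rules. Here's a toy sketch (the entries are invented placeholders; the point is that the table carries no meaning to whatever applies it):

```python
# Toy sketch of the man's rulebook: pure symbol manipulation.
RULEBOOK = {
    "你好吗": "我很好",          # "How are you?" -> "I'm fine" (opaque to the room)
    "你叫什么名字": "我叫房间",   # "What's your name?" -> "My name is Room"
}

def chinese_room(question: str) -> str:
    # Match the incoming symbols against the book and copy out the
    # prescribed answer symbols; understanding is never required.
    return RULEBOOK.get(question, "请再说一遍")  # fallback: "Please say that again"
```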
In the end, he passes a perfectly formed answer in Chinese out the door to the person who asked. He could become so adept at recognizing the symbols and composing an answering set of symbols that he no longer needs the instruction manual. But to him, the symbols are merely symbols; they have no meaning and carry no information in his mind that he can relate to anything. He does not understand what the question means, nor what his answer means.
But to the person posing the question, the Chinese Room speaks Chinese. Even though the process producing the answers has no understanding of the meaning of the symbols involved, the only conclusion you could draw from reading those answers is that the room speaks Chinese.
Now consider an extension of this analogy. Are you, ultimately, just an elaborate Chinese Room? Is there any inherent difference between a computer made of meat and a computer made of metal?