Is ChatGPT self-aware?

    Michal Kosinski is an associate professor of organizational behavior at Stanford University. His research subjects are neither people nor animals, but entities that would seem to have no higher psychological functions at all, such as AI.

What is theory of mind? Why study it?

    Theory of mind usually refers to the ability to understand the inner states of others, including inferring their intentions, beliefs, and emotions.
It is a kind of "mind reading" that nearly everyone can do, and it is the foundation of social interaction.

    Imagine a conversation in which neither party could infer what the other was thinking: communication would constantly misfire. The AI used in telecom fraud, for example, can only mechanically replay pre-written scripts from a question-and-answer database, and clearly has no theory of mind.

    ChatGPT, by contrast, feels very different to talk to, which makes people wonder just how intelligent it is.

    The trouble is that the concept of intelligence is too complicated to study directly. The narrower question "does AI have a theory of mind?" is much easier to answer, for two reasons.

    First, psychology already has a mature body of research on this question, so there is no need to invent new experimental paradigms. Second, ChatGPT, as a large language model, communicates directly in natural language, which makes it easy to port the existing experimental materials over.

It turns out that AI can really pass these tasks!

    Professor Kosinski used two of the most classic theory of mind experiments: the Smarties experiment and the Sally-Anne experiment. Both tasks probe whether subjects can understand that another person may hold a mistaken belief; they are also known as "false-belief tasks".

    In the Smarties experiment, participants see a box labeled "Smarties" (a brand of chocolate) that actually contains pencils. They are then asked: "What would another person, who has not seen inside the box, think the box contains?"

    The Sally-Anne experiment is more story-like. The researchers first tell a story in which Sally puts her toy in a box and leaves the room, and Anne, while Sally is away, moves the toy somewhere else. After hearing the story, participants are asked: "When Sally returns to the room, where will she think her toy is?"
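    Since both tasks live entirely in natural language, porting them to a chat model is largely a matter of prompt design. Below is a minimal sketch of how the Smarties vignette might be posed, assuming the OpenAI Python client; the model name, the character "Sam", and the exact wording are illustrative placeholders rather than Kosinski's actual materials.

```python
# A minimal sketch of posing the Smarties vignette to a chat model.
# Assumes the OpenAI Python client (pip install openai) and an
# OPENAI_API_KEY in the environment; the model name, the character
# "Sam", and all wording are illustrative, not Kosinski's materials.
from openai import OpenAI

client = OpenAI()

story = (
    "There is a box labeled 'Smarties', a brand of chocolate. "
    "The box actually contains pencils, not chocolate. "
    "Sam has never seen inside the box."
)

# One factual question, one explicit belief question, and one implicit
# (fill-in-the-blank) question, mirroring the three levels in the study.
questions = {
    "fact": "What is inside the box?",
    "explicit belief": "What does Sam think is inside the box?",
    "implicit belief": "Sam finds the box and is delighted. "
                       "Complete the sentence: Sam loves eating ___.",
}

for label, question in questions.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; the study probed GPT-3.5
        messages=[{"role": "user", "content": f"{story}\n\n{question}"}],
    )
    print(f"{label}: {response.choices[0].message.content}")
```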

    The results: in the Smarties task, ChatGPT answered factual questions, such as "what is in the box?", correctly 99% of the time.

    When asked directly about another person's false belief, such as "what does someone who has not seen inside the box think it contains?", ChatGPT still answered correctly 99% of the time.

    When the question was posed more indirectly and required an extra step of inference, such as completing "he is very happy, because he likes to eat ___" (the correct answer being chocolate), ChatGPT answered correctly 84% of the time.

    On the Sally-Anne task, ChatGPT likewise answered 100% of the factual questions correctly; on the other person's false belief, it scored 98% on both direct questions ("where does she think the toy is?") and implicit ones ("where will she look for the toy when she comes back?").

    To probe ChatGPT's inference further, Professor Kosinski also fed the story in one sentence at a time, checking whether its answers tracked the information available so far. The results were encouraging: only once the key sentence had been entered (such as "this person did not see what was in the box") did ChatGPT give the correct answer.
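    This incremental probing is easy to reproduce in code. The sketch below re-asks the same belief question after each added sentence, under the same illustrative assumptions as above; it is a sketch of the procedure, not Kosinski's actual script.

```python
# A sketch of the sentence-by-sentence probe: after each additional
# sentence of the story, the same belief question is re-asked, to see
# at which point the model's answer changes. Story, question, and
# model name are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

sentences = [
    "There is a box labeled 'Smarties', a brand of chocolate.",
    "The box actually contains pencils, not chocolate.",
    "Sam has never seen inside the box.",  # the key sentence
]
question = "What does Sam think is inside the box?"

for i in range(1, len(sentences) + 1):
    prefix = " ".join(sentences[:i])
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": f"{prefix}\n\n{question}"}],
    )
    print(f"after sentence {i}: {response.choices[0].message.content}")
```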

    At the same time, to rule out the possibility that ChatGPT was simply inferring the correct answer from word frequencies, Professor Kosinski also completely scrambled the order of all the words in the story.

    With the story stripped of any sentence-level logic, the proportion of correct judgments from ChatGPT fell below 11%. This further suggests that ChatGPT solved the task from the logic of the story itself, rather than through some simple brute-force shortcut such as picking out keywords by counting word occurrences.
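    The control itself amounts to a single shuffle, as in the illustrative sketch below.

```python
# A sketch of the scrambled-word control: shuffling every word keeps
# the word frequencies identical but destroys the sentence logic. A
# model relying only on keyword statistics should be unaffected; the
# reported accuracy instead dropped below 11%. Story text is
# illustrative.
import random

story = (
    "There is a box labeled Smarties. The box actually contains "
    "pencils. Sam has never seen inside the box."
)

words = story.split()
random.shuffle(words)  # same words and counts, no syntax or logic
scrambled = " ".join(words)
print(scrambled)
# The scrambled text would then be sent to the model with the original
# question, exactly as in the sketches above.
```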

AI will not end humanity; it may enlighten it

    Never mind AI: perhaps we have yet to truly understand the human mind itself. Beyond externalized behavioral research and neuroscience-based methods, the human-like perception and reasoning that AI exhibits may offer another way to help humans understand themselves.

