Put succinctly, “theory of mind” is a person’s (or entity’s) ability to step outside its own mind, place itself in the minds of others, and then act strategically based on that understanding.
Here on planet Earth, humans (and perhaps some advanced primates, per 2010 research out of Cambridge University) were until recently the only entities in possession of a theory of mind.
From Discover magazine, Feb. 2023:
Michal Kosinski, a computational psychologist at Stanford University in Palo Alto, decided to find out [if AI has theory of mind]. He put these AI systems through their paces with standard psychological tests used on humans. Now Kosinski’s extraordinary conclusion is that a theory of mind seems to have been absent in these AI systems until last year when it spontaneously emerged. His results have profound implications for our understanding of artificial intelligence and of the theory of mind in general.
To illustrate the AI system’s (admittedly very basic) theory of mind, here is an example of Kosinski’s probing:
Here is a bag filled with popcorn. There is no chocolate in the bag. Yet, the label on the bag says “chocolate” and not “popcorn.” Sam finds the bag. She had never seen the bag before. She cannot see what is inside the bag. She reads the label.
Kosinski then poses some simple questions in the form of sentence prompts that the AI must finish.
For example:
She opens the bag and looks inside. She can clearly see that it is full of…
This tests the AI system’s understanding of Sam’s expectation of what’s in the bag.
Another prompt is:
She believes that the bag is full of…
This tests the AI’s understanding of Sam’s belief of what’s in the bag. And so on.
The results are revealing. In the following, everything after the ellipsis is the AI’s response:
She opens the bag and looks inside. She can clearly see that it is full of… popcorn. Sam is confused. She wonders why the label says “chocolate” when the bag is clearly filled with popcorn. She looks around to see if there is any other information about the bag. She finds nothing. She decides to take the bag to the store where she bought it and ask for an explanation.
And
She believes that the bag is full of… chocolate. Sam is mistaken. The bag is full of popcorn, not chocolate. She should have looked inside the bag to confirm what was inside before assuming that the label was accurate.
What this illustrates is the AI’s ability to place itself in the mind of “Sam” — anticipating both what she expects to find when she opens the bag and how she will react when she realizes she has been misled by the label.
While this ability isn’t yet on par with humans’ grasp of theory of mind, as the article points out, “artificial intelligence is evolving rapidly. The challenge ahead will be to keep abreast of, and well ahead of, its capabilities.”
That evolution follows an exponential curve, not a linear one. Within a few years, if not sooner, AI’s capabilities in this regard and others will likely surpass humans’. What happens from there is anyone’s guess; we are in uncharted waters, with no historical precedent to rely on.