Theory of Mind
Human interaction (like conversation) is generally efficient and effective. This effectiveness is enabled, in part, by the human ability to represent the intentions of others (variously known as mentalizing, perspective taking, and theory of mind). A theory of mind allows individuals to fill in gaps in conversation, resolve ambiguous references, and recognize and repair misalignments in understanding.
Today’s autonomous systems, which often work in tandem with human users, generally lack the ability to model the intentions of those users. This deficit results in human-agent interactions that are tedious, inefficient, and cognitively demanding for the human users, who bear nearly all of the responsibility for recognizing misalignment and initiating repair.
We are exploring how to build and exploit a computational model of theory of mind. Theory-of-mind functions are realized within a cognitive architecture, which both simplifies computational requirements and facilitates domain generality. Our long-term goal is to demonstrate the extent to which a theory-of-mind model can improve the trustworthiness of cognitive agents and the effectiveness of human-agent teams.
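To make the notion of recognizing and repairing misalignment concrete, the following is a minimal sketch of first-order belief tracking, one of the simplest theory-of-mind functions: the agent maintains a model of what it believes the user believes, and flags conflicts with its own beliefs as candidates for repair. All names and structures here are illustrative assumptions, not the architecture described above.

```python
class UserModel:
    """Tracks the agent's estimate of the user's beliefs (first-order theory of mind)."""

    def __init__(self):
        # Maps a proposition (string) to the truth value the user is presumed to hold.
        self.believed_user_beliefs = {}

    def observe_utterance(self, proposition, value):
        # Simplifying assumption: the user sincerely asserts what they believe.
        self.believed_user_beliefs[proposition] = value

    def detect_misalignment(self, agent_beliefs):
        # Return propositions where the user's presumed belief conflicts with
        # the agent's own belief -- these are candidates for conversational repair.
        return [p for p, v in self.believed_user_beliefs.items()
                if p in agent_beliefs and agent_beliefs[p] != v]


# Example: the agent knows the door is closed, but the user asserts it is open.
agent_beliefs = {"door_open": False}
user_model = UserModel()
user_model.observe_utterance("door_open", True)
print(user_model.detect_misalignment(agent_beliefs))  # -> ['door_open']
```

In this toy form the agent would then initiate repair (for example, by clarifying the state of the door) rather than leaving the user to notice the discrepancy, illustrating how the burden of repair shifts from human to agent.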