Cognitive Architecture and Language Models

Cognitive architectures support end-to-end integration of interaction, reasoning, language processing, and learning for autonomous systems. However, they depend on structured, curated knowledge, which limits their ability to acquire new knowledge autonomously.

Language models (LMs) such as GPT-3, PaLM, and OPT offer tantalizing promise as a source of knowledge for these systems. Language models provide associational, generative retrieval from massive stores of latent knowledge. While powerful, this retrieval is unreliable. Further, responses from LMs are not grounded in specific task-performance contexts: a response can be reasonable in some situations but not useful for an agent with a specific need in a specific context.

Given these characteristics of cognitive architectures and language models, we are researching approaches and methods that allow cognitive agents to use LMs as a source of knowledge for learning new tasks. Our agents use multiple knowledge sources, rather than depending solely on an LM, and we are seeking to identify when and how an LM should be used by an agent to obtain new knowledge.
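As a rough illustration of the multi-source retrieval strategy described above, the sketch below shows an agent that consults prioritized knowledge sources in order, falling back to an LM only when curated knowledge has no answer. All names here (KnowledgeSource, curated_lookup, lm_lookup) are hypothetical stand-ins, not part of the paper's system; a real agent would also verify an LM response before acting on it.

```python
from typing import Callable, Optional

class KnowledgeSource:
    """A prioritized source the agent can query for task knowledge."""
    def __init__(self, name: str, lookup: Callable[[str], Optional[str]]):
        self.name = name
        self.lookup = lookup

def curated_lookup(query: str) -> Optional[str]:
    # Stand-in for structured, curated knowledge (e.g., a semantic memory).
    facts = {"clear table": "pick up each object and store it"}
    return facts.get(query)

def lm_lookup(query: str) -> Optional[str]:
    # Stand-in for generative retrieval from a language model.
    # Responses are ungrounded and would need verification before use.
    return f"LM suggestion for '{query}' (unverified)"

def retrieve(query: str, sources: list) -> tuple:
    """Return (source name, response), preferring earlier sources."""
    for source in sources:
        response = source.lookup(query)
        if response is not None:
            return source.name, response
    return "none", "no knowledge found"

sources = [
    KnowledgeSource("curated", curated_lookup),
    KnowledgeSource("language-model", lm_lookup),  # consulted last
]

print(retrieve("clear table", sources))  # answered from curated knowledge
print(retrieve("tidy desk", sources))    # falls through to the LM
```

The ordering encodes one simple answer to "when should the LM be used": only when more reliable sources fail. The harder research questions, such as deciding how to prompt the LM and how to ground its response in the agent's context, are not captured by this fallback policy.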