General autonomous intelligent agents with ongoing existence face many challenges when it comes to learning. On the one hand, they must continually react to their environment, focusing their computational resources and using their available knowledge to make the best decision for the current situation. On the other hand, they need to learn everything they can from their experience, building up their knowledge so that they are prepared to make decisions in the future. We posit two distinct levels of learning in general autonomous intelligent agents. Level 1 (L1) consists of architectural learning mechanisms that are innate, automatic, effortless, and outside of the agent's control. Level 2 (L2) consists of knowledge-based learning strategies that are controlled by the agent's knowledge, whose purpose is to create experiences for L1 mechanisms to learn from.
We describe these levels and provide examples from our research in interactive task learning (ITL). In ITL, an agent learns novel tasks through natural interactions with an instructor. ITL is challenging because it requires a tight integration of many of the cognitive capabilities embodied in human-level intelligence: multiple types of reasoning, problem solving, and learning; multiple forms of knowledge representation; natural language interaction; dialog management; and interaction with an external environment – all in real time. Our agent is implemented in Soar and uses a combination of innate L1 mechanisms and L2 strategies to learn ~60 puzzles and games, as well as tasks for a mobile robot. Our agent is embodied in a tabletop robot, a small mobile robot, a Fetch robot, and Cozmo.
Recent Research on the Soar Cognitive Architecture and Interactive Task Learning