We consider the problem of learning generalized policies for classical planning domains, without supervision, using graph neural networks (GNNs) trained on small instances represented in lifted STRIPS. The problem has been considered before, but the proposed neural architectures are complex and the results are often mixed. In this work, we use a simple and general GNN architecture and aim at crisp experimental results and a clear understanding: either the policy that is greedy in the learned value function achieves close to 100% generalization on instances larger than those used in training, or the failure must be understood, and possibly fixed, logically. For this, we exploit the established relation between the expressive power of GNNs and the C2 fragment of first-order logic (namely, first-order logic with two variables and counting quantifiers). We find, for example, that domains whose general policies require more expressive features can be solved with GNNs once the states are extended with suitable "derived atoms" encoding role compositions and transitive closures that do not fit into C2. The work is most closely related to the GNN approach of Ståhlberg, Bonet and Geffner (2021) for learning optimal general policies in a supervised fashion; here, however, the learned policies are no longer required to be optimal (which expands the scope, as many planning domains do not have general optimal policies) and are learned without supervision. Interestingly, value-based reinforcement learning methods that aim to produce optimal policies do not yield policies that generalize, as the goals of optimality and generality are often in conflict.
Learning Generalized Policies Without Supervision Using GNNs
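To make two of the ideas above concrete, the following minimal Python sketch illustrates acting greedily with respect to a learned value function and extending states with derived atoms such as the transitive closure of a binary relation, which lies outside C2. The state encoding, the `on`/`above` predicates, and the stand-in value function are illustrative assumptions, not the authors' implementation (in the paper, the value function is a GNN trained without supervision).

```python
def transitive_closure(pairs):
    """Derived atoms via a fixpoint: transitive closure of a binary relation,
    e.g. above(x, y) derived from on(x, y) in Blocksworld."""
    closure = set(pairs)
    while True:
        new = {(a, d) for (a, b) in closure for (c, d) in closure
               if b == c and (a, d) not in closure}
        if not new:
            return closure
        closure |= new

def extend_state(atoms):
    """Extend a state (a set of ground atoms as tuples) with derived 'above' atoms."""
    on_pairs = {(x, y) for (pred, x, y) in atoms if pred == "on"}
    return atoms | {("above", x, y) for (x, y) in transitive_closure(on_pairs)}

def greedy_step(state, successors, value_fn):
    """Policy greedy in a learned value function: move to the successor state
    with the smallest predicted value (an estimate of distance to the goal)."""
    return min(successors(state), key=value_fn)

if __name__ == "__main__":
    # Toy usage of the derived-atom extension only; `successors` and `value_fn`
    # are hypothetical placeholders for a planner's successor generator and a
    # learned (GNN) value function.
    state = {("on", "a", "b"), ("on", "b", "c")}
    print(sorted(extend_state(state)))  # includes ('above', 'a', 'c')
```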