Title: Between min cost search and least action
Authors: Tim Fernando
Conference: EuroProofNet-WG5
Tags: fine-tuning and alignment, finite-state methods (Mona), least action principle (discretized), min cost search (surprisal)

Abstract: The contrast between the remarkable fluency of large language models and their flawed reasoning has been linked more than once to the distinction between pre-training powered by word prediction and fine-tuning associated with alignment, such as reinforcement learning [e.g., Mahowald et al., 2024]. The present work attempts to understand this distinction by exploring the view that a nascent notion of state arising from word prediction is refined by alignment to carry out a search. Open-ended as that refinement and search may be, some structure is imposed below by developing Kleene's representation of nerve nets into a logical system [Goguen and Burstall 1992] on which information-theoretic costs are introduced around a discretized least action principle [Marsden and West 2001]. While the discretization can be aligned with the continuous standard by carefully crafted refinements, deviations from the standard are significant inasmuch as they represent deformations that shape patterns from various levels of cognitive processing [Mumford 1994] and can be explored in a finite-state setting, supported by tools such as Mona.
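For orientation on the "discretized least action principle" the abstract invokes (a standard background sketch, not detail from the talk itself): in discrete mechanics in the style of Marsden and West (2001), a trajectory q_0, ..., q_N with time step h is selected by extremizing a discrete action built from a discrete Lagrangian L_d, in place of the continuous integral of L:

    S_d(q_0, \dots, q_N) \;=\; \sum_{k=0}^{N-1} L_d(q_k, q_{k+1})
    \;\approx\; \int_0^{Nh} L(q, \dot{q})\, dt,

and stationarity of S_d yields the discrete Euler-Lagrange equations

    D_2 L_d(q_{k-1}, q_k) + D_1 L_d(q_k, q_{k+1}) \;=\; 0,
    \qquad k = 1, \dots, N-1,

where D_1 and D_2 denote partial derivatives in the first and second slot. Deviations of S_d from the continuous action are exactly the "deformations from the standard" that the abstract proposes to take seriously rather than refine away.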
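As a minimal illustration of what "min cost search (surprisal)" can mean (an assumption-laden sketch, not the talk's construction: the toy graph, function name, and probabilities below are all hypothetical), one can run Dijkstra's algorithm over word-prediction states where an edge of probability p costs its surprisal, -log2(p) bits, so the cheapest path is the most probable continuation:

    import heapq
    import math

    def min_surprisal_path(probs, start, goal):
        """Dijkstra search where an edge with transition probability p
        costs -log2(p) bits (its surprisal); minimizing total cost
        maximizes path probability. probs maps each node to a list of
        (successor, probability) pairs."""
        frontier = [(0.0, start, [start])]   # (cost so far, node, path)
        best = {start: 0.0}
        while frontier:
            cost, node, path = heapq.heappop(frontier)
            if node == goal:
                return cost, path
            if cost > best.get(node, math.inf):
                continue                      # stale queue entry
            for succ, p in probs.get(node, []):
                new_cost = cost - math.log2(p)
                if new_cost < best.get(succ, math.inf):
                    best[succ] = new_cost
                    heapq.heappush(frontier, (new_cost, succ, path + [succ]))
        return math.inf, []

    # Hypothetical word-prediction states as partial strings:
    probs = {
        "":    [("the", 0.5), ("a", 0.5)],
        "the": [("the cat", 0.25), ("the dog", 0.75)],
        "a":   [("a cat", 0.9)],
    }
    print(min_surprisal_path(probs, "", "the dog"))
    # -> (1.415..., ['', 'the', 'the dog']): 1 bit + ~0.415 bits

Such a search over a finite state space is the kind of structure that can be specified and checked with finite-state tools like Mona.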