09:30 | A Coq Library for Mechanised First-Order Logic PRESENTER: Dominik Kirst ABSTRACT. We report on an ongoing collaborative effort to consolidate several Coq developments concerning metamathematical results in first-order logic into a single library. We first describe the framework's representation of syntax, deduction systems, and semantics, as well as its instantiation to concrete axiom systems and its tools for user-friendly interaction. Next, we summarise the included results, mostly connected to completeness, undecidability, and incompleteness. Finally, we report on challenges experienced and anticipated during the integration. The current status of the project can be tracked in a public fork of the Coq Library of Undecidability Proofs. |
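As a rough illustration of the kind of object such a library mechanises, here is a minimal, self-contained sketch of a de Bruijn-style first-order syntax over a fixed toy signature; the library itself is parameterised over arbitrary signatures, so the names below are illustrative rather than the library's actual definitions.

```coq
(* Illustrative only: a toy first-order syntax with de Bruijn binding.
   The library is parameterised over signatures; these constructors are
   stand-ins for one constant, one unary function, and one binary predicate. *)
Inductive term : Type :=
| var  : nat -> term               (* de Bruijn variable *)
| zero : term                      (* example constant symbol *)
| succ : term -> term.             (* example unary function symbol *)

Inductive form : Type :=
| Atom : term -> term -> form      (* example binary predicate symbol *)
| Bot  : form                      (* falsity *)
| Impl : form -> form -> form      (* implication *)
| All  : form -> form.             (* universal quantifier binding index 0 *)
```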
10:00 | QuantumLib: A Library for Quantum Computing in Coq PRESENTER: Jacob Zweifler ABSTRACT. The INQWIRE QuantumLib is a library for representing and reasoning about quantum computing in the Coq proof assistant. The library includes a substantial body of linear algebra definitions and proofs, emphasizing the mathematical facts most relevant to quantum computing. It provides broad support for different quantum objects and operations, allowing users to simulate many different kinds of quantum programs. QuantumLib now provides the mathematical foundation for a range of other quantum verification projects. It emphasizes both computation and proof (unlike libraries such as Mathematical Components or CoqEAL, which focus on one or the other) and is specialized to the use case of verified quantum computing in Coq. |
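The functional representation of matrices commonly used in this line of work can be sketched as follows; this is a simplified stand-in (over the reals rather than the complex numbers, with made-up names), not QuantumLib's actual code.

```coq
(* Simplified sketch, not QuantumLib's API: matrices as functions from
   indices to entries, here over R instead of the complex numbers. *)
Require Import Reals.
Open Scope R_scope.

Definition Matrix (rows cols : nat) := nat -> nat -> R.

(* Identity matrix: 1 on the diagonal, 0 elsewhere. *)
Definition Id (n : nat) : Matrix n n :=
  fun i j => if Nat.eqb i j then 1 else 0.

(* Matrix product: entry (i, j) is the sum over k of A i k * B k j. *)
Definition Mmult {m n p : nat} (A : Matrix m n) (B : Matrix n p) : Matrix m p :=
  fun i j =>
    (fix sum (k : nat) : R :=
       match k with
       | O => 0
       | S k' => sum k' + A i k' * B k' j
       end) n.
```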
11:00 | Trakt: a generic pre-processing tactic for theory-based proof automation PRESENTER: Enzo Crance ABSTRACT. We present Trakt, a Coq plugin to pre-process goals before handing them to theory-based proof automation tactics. It is designed to be user-friendly, generic and extensible. The plugin is available with documentation and examples on GitHub. |
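To give a feel for the kind of pre-processing such a plugin performs, the following sketch carries out one typical translation step by hand, turning a boolean hypothesis into its propositional counterpart so that lia applies; it does not use Trakt's actual commands, for which the GitHub documentation is the reference.

```coq
(* Hand-written illustration of the kind of goal translation a pre-processing
   tactic automates; Trakt's own commands and embeddings are not used here. *)
Require Import ZArith Lia.
Open Scope Z_scope.

Goal forall a b : Z, Z.leb a b = true -> a + 1 <= b + 1.
Proof.
  intros a b H.
  apply Z.leb_le in H.  (* boolean fact rephrased as a proposition *)
  lia.                  (* the arithmetic back end can now close the goal *)
Qed.
```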
11:30 | PRESENTER: Jason Gross ABSTRACT. In most programming languages, asymptotic performance issues can almost always be explained by reference to the algorithm being implemented. At most, the standard asymptotic performance of explicitly used operations on chosen data structures must be considered. Even the constant factors in performance bottlenecks can often be explained without reference to the implementation of the interpreter, compiler, or underlying machine. In 10+ years of working with Coq, we (Jason, Andres, and our colleagues) have found this pattern, which holds across multiple programming languages, to be the exception rather than the rule in Coq! This turns performant proof engineering, especially performant proof automation engineering, from straightforward science into unpredictable and fragile guesswork. By presenting in detail a sampling of examples, we propose a defense of the thesis: Performance bottlenecks in proof automation almost always result from inefficiencies in parts of the system which are conceptually distant from the theorem being proven. Said another way, *debugging, understanding, and fixing performance bottlenecks in automated proofs almost always requires extensive knowledge of the proof engine, and almost never requires any domain-specific knowledge of the theorem being proven*. Further, there is no clear direction of improvement: We know of no systematic proposal, nor even folklore among experts, of what primitives and performance characteristics are sufficient for a performant proof engine. We hope to start a discussion on the obvious corollary of this thesis: *This should not be!* Our presentation, we hope, will serve as a call for a POPLMark for Proof Engines, a call for designing and implementing an adequate proof engine for *scalable performant modular proof automation*. |
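As a loose illustration of how such scaling questions are probed (a generic micro-benchmark sketch, not one of the talk's examples), one can time the same trivial proof script at increasing goal sizes and observe how the cost grows:

```coq
(* Generic micro-benchmark sketch: the goals are mathematically trivial, so
   any growth in the Time output reflects the proof engine, not the theorem. *)
Fixpoint iter_conj (n : nat) : Prop :=
  match n with
  | O => True
  | S k => True /\ iter_conj k
  end.

Goal iter_conj 100.
Proof. Time (simpl; repeat split). Qed.

Goal iter_conj 200.
Proof. Time (simpl; repeat split). Qed.

Goal iter_conj 400.
Proof. Time (simpl; repeat split). Qed.
```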
12:00 | PRESENTER: Seth Poulsen ABSTRACT. Understanding mathematical proofs is critical for students learning the foundations of computing. Having students construct mathematical proofs with the help of a computer is appealing, as it makes it easier to autograde student work and to autogenerate practice problems. Most existing tools for constructing proofs with a computer are restricted systems that only permit simple logics, or they impose a large overhead on students learning how to use them. Proof Blocks (proofblocks.org), a tool that allows students to drag and drop prewritten lines of a proof into the correct order, is a nice compromise because it is easy for students to use to construct proofs on any topic [Poulsen et al., 2022]. However, a downside is that the process of writing questions can be time-consuming and error-prone: an instructor must write the proof, break it into lines, and then specify a directed acyclic graph giving the logical dependencies between lines of the proof. In this paper, we document the first step toward building a system that automatically generates Proof Blocks problems from Coq proofs: a Coq plugin which generates a natural language proof from a Coq proof. Our natural language generator differs from similar tools in that we deliberately restrict which definitions and tactics are allowed, in the interest of improving the readability of the output. |
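A hypothetical example (not taken from the paper) of the kind of input such a plugin consumes: a short Coq proof whose tactic steps each correspond to one line of a natural-language proof, and hence to one draggable Proof Blocks line.

```coq
(* Hypothetical input: each tactic step maps to one natural-language line. *)
Require Import PeanoNat Lia.

Theorem even_plus_even :
  forall n m : nat, Nat.Even n -> Nat.Even m -> Nat.Even (n + m).
Proof.
  intros n m [a Ha] [b Hb].  (* "Let n = 2a and m = 2b."              *)
  exists (a + b).            (* "We show that n + m = 2(a + b)."      *)
  subst.                     (* "Substitute for n and m."             *)
  lia.                       (* "The equation follows by arithmetic." *)
Qed.
```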
Lunches will be held in Taub hall.
14:00 | jsCoq: Literate Proof Documents Dressed for the Web PRESENTER: Emilio Jesús Gallego Arias |
15:00 | HenBlocks: Structured Editing for Coq PRESENTER: Bernard Boey ABSTRACT. There are a number of pain points in using the Coq Proof Assistant, and they affect beginners most. Structured editors, which allow the user to manipulate structured blocks corresponding to the abstract syntax of a program, have been used to make programming more accessible to beginners. However, they have not been applied to proving thus far. We present HenBlocks (available at https://henblocks.github.io), a fully-fledged web-based structured editor that allows users to write Coq proofs by manipulating blocks. We conclude that structured editing is a promising approach to proof writing that warrants further exploration, development, and testing. |
16:00 | Coq Community Survey 2022: Summary of Results PRESENTER: Ana de Almeida Borges ABSTRACT. The Coq Community Survey 2022 was an online public survey conducted during February 2022. Its main goal was to obtain an updated picture of the Coq user community and inform future decisions taken by the Coq team. In particular, the survey aimed to enable the Coq team to make effective decisions about the development of the Coq software, and also about matters that pertain to the ecosystem maintained by Coq users in academia and industry. In this presentation proposal, we outline how the survey was designed, its content, and some initial data analysis and directions. Because many questions received free-text answers that require careful analysis, we expect the full presentation to include additional data and conclusions. |
16:30 | Discussion with Coq Core Development Team PRESENTER: Emilio Jesús Gallego Arias |