FLOC 2022: FEDERATED LOGIC CONFERENCE 2022
VARDIFEST22 TALK ABSTRACTS: PAPERS WITH ABSTRACTS

Editors: Kuldeep S. Meel, Giuseppe De Giacomo, Kristin Yvonne Rozier, Priyanka Golia and Suwei Yang


ABSTRACT. Moshe Vardi is a highly inspiring computer scientist, who has made groundbreaking contributions over the last four decades to logic and automata, as well as to their diverse applications in computer science, including formal verification, databases, computational complexity, and knowledge reasoning, to name a few. Moshe's research has personally inspired me to conduct research at the border of different areas in computer science, glued together by logic and automata. I would like to dedicate my recent work, joint with Diego Figueira and Artur Jeż and to appear in PODS'22, to Moshe. The work combines a number of different areas --- finite model theory, graph databases, automata theory, and SMT --- to all of which Moshe has made significant contributions.

We initiate the study of data-path query languages (in particular, regular data path queries (RDPQ) and conjunctive RDPQ (CRDPQ)) in the classic setting of embedded finite model theory, wherein each graph is "embedded" into a background infinite structure (with a decidable FO theory or fragments thereof). Our goal is to address the current lack of support for typed attribute data (e.g., integer arithmetic) in existing data-path query languages, which is crucial in practice. We propose an extension of register automata that allows powerful constraints over the theory and the database as guards, and that has two types of registers: registers that can store values from the active domain, and read-only registers that can store arbitrary values. We prove NL data complexity for (C)RDPQ over various standard theories in SMT, namely Presburger arithmetic, the real-closed field, the existential theory of automatic structures, and word equations with regular constraints. All these results strictly extend the known NL data complexity of RDPQ with only equality comparisons, and provide an answer to a recent open problem posed by Libkin et al. Among other contributions, we introduce one crucial proof technique for obtaining NL data complexity for data-path queries over embedded graph databases, called "Restricted Register Collapse (RRC)", inspired by the notion of Restricted Quantifier Collapse (RQC) in embedded finite model theory.
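
As a rough illustration of the model (and not the paper's construction), the following Python sketch runs a single-register automaton with arithmetic guards over a graph whose nodes carry integer data; the graph, the guards, and the product-reachability evaluation are all invented for the example, but the reachability flavour mirrors how NL data-complexity arguments work.

    from collections import deque

    # Data graph: node -> integer attribute; edges are (src, label, dst).
    data = {0: 5, 1: 7, 2: 3, 3: 9}
    edges = [(0, "a", 1), (1, "a", 2), (1, "b", 3), (2, "b", 3)]

    # Transitions (state, label, guard, update, state): guard and update see
    # the destination node's value v and the register r.
    transitions = [
        ("q0", "a", lambda v, r: True,  lambda v, r: v, "q1"),  # store v
        ("q1", "a", lambda v, r: v < r, lambda v, r: r, "q1"),  # arithmetic guard
        ("q1", "b", lambda v, r: v > r, lambda v, r: r, "qf"),
    ]
    final = {"qf"}

    def eval_rdpq(start):
        """BFS over the (automaton x graph x register) product."""
        seen, queue = set(), deque([("q0", start, None)])
        while queue:
            q, node, reg = queue.popleft()
            if (q, node, reg) in seen:
                continue
            seen.add((q, node, reg))
            if q in final:
                return True
            for (src, lab, dst) in edges:
                if src != node:
                    continue
                for (s, l, guard, update, t) in transitions:
                    if s == q and l == lab and guard(data[dst], reg):
                        queue.append((t, dst, update(data[dst], reg)))
        return False

    print(eval_rdpq(0))  # True: 0 -a-> 1 -a-> 2 -b-> 3, since 3 < 7 and 9 > 7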

Jul 31 09:50

ABSTRACT. Synthesis techniques for temporal logic specifications typically exploit symbolic techniques, as done in model checking, and these symbolic techniques typically use backward fixpoint computation. Planning, which can be seen as a specific form of synthesis, is a witness to the success of forward search approaches. In this talk, we present a forward-search approach to full-fledged Linear Temporal Logic on finite traces (LTLf) synthesis. We show how to compute the Deterministic Finite Automaton (DFA) of an LTLf formula on the fly, while performing an adversarial forward search towards the final states, by considering the DFA as a sort of AND-OR graph. Our approach is characterized by branching on suitable propositional formulas, instead of individual evaluations, hence radically reducing the branching factor of the search space. Specifically, we take advantage of techniques developed for knowledge compilation, such as Sentential Decision Diagrams (SDDs), to implement the approach efficiently.
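
As a minimal sketch of the on-the-fly idea (assuming textbook LTLf formula progression; the approach above instead branches on propositional formulas via SDDs, and end-of-trace handling is elided here), each reachable state of the implicit DFA is just a progressed formula:

    # Formulas: ("ap", p), ("not", f), ("and", f, g), ("or", f, g),
    #           ("X", f) [strong next], ("U", f, g), plus "true"/"false".
    def prog(f, world):
        """Progress formula f through one position (a set of true atoms)."""
        if f in ("true", "false"):
            return f
        op = f[0]
        if op == "ap":
            return "true" if f[1] in world else "false"
        if op == "not":
            g = prog(f[1], world)
            return {"true": "false", "false": "true"}.get(g, ("not", g))
        if op in ("and", "or"):
            l, r = prog(f[1], world), prog(f[2], world)
            zero, one = ("false", "true") if op == "and" else ("true", "false")
            if zero in (l, r):
                return zero
            if l == one:
                return r
            if r == one:
                return l
            return (op, l, r)
        if op == "X":          # obligation simply moves to the next step
            return f[1]
        if op == "U":          # f1 U f2 == f2 or (f1 and X(f1 U f2))
            now, later = prog(f[2], world), prog(f[1], world)
            if now == "true":
                return "true"
            if later == "false":
                return now
            step = ("and", later, f) if later != "true" else f
            return ("or", now, step) if now != "false" else step
        raise ValueError(f)

    phi = ("U", ("ap", "a"), ("ap", "b"))    # a U b
    state = prog(phi, {"a"})                 # obligation persists: still a U b
    print(state == phi, prog(state, {"b"}))  # True true (accepting sink)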

Jul 31 10:00

ABSTRACT. The main Web page of FLoC 2022 includes a rephrasing of the opening sentence of Aaron Bradley and Zohar Manna's book: "Logic is the calculus of computation". This phrase naturally leads to the question of whether there is a language with expressive syntax and semantics that can combine the strengths of logic and calculus, and whether there are reasoning mechanisms for solving practical problems formulated in that language. It turns out that the answer is affirmative, and combinations of this kind have already been proposed. But I would like to argue that they were not ambitious enough, and not motivated enough by practical problems outside of CS. There is a lot more that can be done. In particular, M. Vardi and his colleagues studied aspects of how to effectively integrate Task and Motion Planning (TAMP), a research direction that is explored by several research groups. This research direction is related to planning and control of hybrid systems. The latter are called hybrid because they include both discrete parameterized actions and continuous parameterized processes that can be initiated and terminated by the agent's actions or by external events. It turns out that there are interesting classes of hybrid systems where the states have relational structure, as in databases. I would like to argue that the recently developed Hybrid Temporal Situation Calculus provides a language for a combination of logic and calculus that can be useful for representation and reasoning about relational hybrid systems. This language develops the situation calculus of John McCarthy and Ray Reiter, in particular Reiter's sequential temporal situation calculus, towards including ordinary differential equations to compute continuous change over time. Also, I would like to argue that constraint logic programming (CLP) leads naturally to reasoning algorithms that can find near-optimal solutions for the planning problem in relational hybrid systems. Computational problems of this kind are usually solved by a large community using the methods of Mixed-Integer Non-Linear Programming (MINLP). However, I formulate a hypothesis that logic and reasoning algorithms, combined with calculus and strengthened by external NLP solvers, can be better suited not only to the task of modelling relational hybrid systems, but also to the task of computing near-optimal plans more efficiently, and, moreover, for larger-scale systems than are currently feasible with MINLP alone. I conclude with an invitation to explore further a new direction that can demonstrate the "Not So Unusual Effectiveness of Logic" from an angle that was little explored before.

Jul 31 10:10

ABSTRACT. In this talk, I am going to present the results of SAT-based reasoning for LTL over finite and infinite traces, the crux of which is to utilize the SAT solver to compute the states of the automaton w.r.t. an LTL (or LTLf) formula on the fly. This series of work has been carried out with Moshe since 2014, and partially with Kristin as well. Up to now, we have several applications based on this framework, including LTL satisfiability checking [3, 4], LTLf satisfiability checking [1, 2], and LTLf synthesis [5]. I will present such applications at a high level together with the state-of-the-art results we have obtained. Finally, I will also discuss possible research directions that are worth exploring further, e.g., on-the-fly LTL model checking, LTL synthesis, etc.

Jul 31 10:20

ABSTRACT. In formal system design, model checking is a well-established method to automatically check for global correctness of systems. In such a framework, in order to verify whether a system is correct with respect to a desired property, we describe its structure with a mathematical model, specify the property with a temporal logic formula, and check formally that the model satisfies the specification. This method was first conceived for closed systems, whose behavior is completely determined by their internal states and transitions.

In the late 1990s, interest arose in analyzing the behavior of individual components (or sets of components) in systems with multiple entities. The interest began in the field of reactive systems, which are systems that interact continually with their environments and whose behaviour depends on this interaction. Subsequently, researchers have looked for logics to reason about, and verify, the strategic behavior of agents in multi-agent systems.

The aim of this contribution is to report on the main developments made in the last 20 years in formal verification of reactive and multi-agent systems. In particular, we will report on CTL* module checking, ATL* model checking, and Strategy Logic.

Jul 31 11:00

ABSTRACT. The codification of legal artefacts in formal logic has a long tradition, dating back, among others, to seminal works by G. W. Leibniz. More recent works include the use of epistemic and deontic logics in the formalization of laws and contracts. While many of the existing approaches focus on issues of expressiveness, adequateness and non-automated reasoning with respect to the considered logics, our interest lies in a Computer Science and System Analysis perspective. In particular, we aim at developing tools that build on mechanized logics and perform automated consistency analysis for legal contract documents. Inconsistencies in legal contracts can be static, which means that they include contradictory textual information, or they can be dynamic, leading to (potential) non-executability of the contract during its dynamic execution. We will present a method and tool called ContractCheck [1], which performs static and dynamic logical consistency analyses of Share Purchase Agreements (SPAs) based on the SMT solver Z3. We focus on its application to SPAs in the acquisition of corporations, since these are typically shielded from many of the implicit legal background facts implied by legal dogmatics. Starting from the definition of an ontology for SPAs, we provide structured text blocks in ContractCheck from which the contract can be composed. ContractCheck encodes the relevant legal facts from each of the blocks of an SPA in a logic encompassing decidable fragments of first-order logic (linear real arithmetic, uninterpreted functions). The logical model, together with the applicable logical consistency constraints, is handed over to the SMT solver, and the analysis results are then depicted by ContractCheck. Using an illustrative example we will exemplify the capabilities of ContractCheck. In addition to the computation of inconsistencies, the model construction capabilities of the SMT solver synthesize dynamic contract executions consistent with the contractual constraints. The MaxSMT capabilities even permit the computation of preferable orders of claim executions. We view our work as an illustration of how far mechanized logic can take us in understanding and improving even the non-technical facets of the world that we live in. [1] Alan Khoja, Martin Kölbl, Stefan Leue, and Rüdiger Wilhelmi. Automated consistency analysis for legal contracts. In Proc. of the SPIN Symposium on Model Checking Software, SPIN 2022, LNCS. Springer Verlag, 2022. To appear.
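
As a toy illustration of the kind of SMT encoding involved (not ContractCheck's actual encoding; the clause contents below are invented), Z3 can report a set of jointly inconsistent clauses via an unsat core:

    from z3 import Real, Solver, unsat

    price, escrow = Real("price"), Real("escrow")
    s = Solver()
    s.set(unsat_core=True)
    s.assert_and_track(price == 1000000, "clause_3_1")      # purchase price
    s.assert_and_track(escrow == price / 10, "clause_3_2")  # escrow is 10%
    s.assert_and_track(escrow >= 200000, "clause_7_4")      # escrow floor
    if s.check() == unsat:
        print("Inconsistent clauses:", s.unsat_core())
        # [clause_3_1, clause_3_2, clause_7_4]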

Jul 31 11:10

ABSTRACT. We survey our results on a formalism to model and reason about systems with reconfigurable communication interfaces. Processes in our formalism are independent state machines communicating by exchanging messages on broadcast and multicast channels. Messages can include additional data values allowing for further design flexibility. Broadcasts can be customized to target only recipients satisfying certain conditions on their states. Processes can connect to and disconnect from multicast channels and messages can go through only once a condition on the state of all recipients (those listening to the channel) is satisfied. Dependence on the local state of senders and receivers implies that interaction interfaces are reconfigured through execution. Reconfiguration occurs both at message level and channel level.
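
One such broadcast step can be sketched operationally as follows (the Python rendering and its data layout are invented for illustration): delivery is restricted to processes connected to the channel whose local state satisfies the sender's condition, and receipt may reconfigure who listens next.

    processes = [
        {"id": 1, "state": "idle", "channels": {"ch"}},
        {"id": 2, "state": "busy", "channels": {"ch"}},
        {"id": 3, "state": "idle", "channels": set()},  # not listening on ch
    ]

    def broadcast(channel, payload, condition, on_receive):
        """Deliver payload to every connected process satisfying condition."""
        for p in processes:
            if channel in p["channels"] and condition(p):
                on_receive(p, payload)

    def on_receive(p, payload):
        p["state"] = ("got", payload)
        p["channels"].discard("ch")   # channel-level reconfiguration

    # The sender targets only idle recipients (a message-level condition).
    broadcast("ch", 42, lambda p: p["state"] == "idle", on_receive)
    print(processes)  # only process 1 received, and it left channel "ch"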

We show that such systems can be represented and reasoned about symbolically by including existential quantification in the definition of the transition relation. The quantification allows conditions imposed by senders to be applied in a way that depends on the local state of recipients. While this symbolic representation supports a global semantics and efficient analysis, it is not compositional. We match the symbolic representation with a compositional semantics through Channeled Transition Systems. When considering the partial-order semantics of such reconfigurable systems, we identify that additional information about interleaving needs to be included in the partial order.

On the reasoning side, we consider an extension of LTL that allows reasoning about the contents of messages and the intentions of senders when setting conditions on receivers. We support modelling and analysis, through model checking, of a simple programming language tailored for the production of such processes.

Jul 31 11:20

ABSTRACT. I will talk about Strategy Logic: a powerful language for strategic reasoning that Moshe Vardi introduced a decade ago and that quickly became widely applied in Formal Methods and AI. I will do so with a special focus on its expressiveness in Multi-Agent Systems and Game Theory, as well as its use for solving Rational Synthesis problems.

Jul 31 11:35

ABSTRACT. Mechanism Design aims to design a game so that a desirable outcome is reached regardless of agents' self-interests. In this paper, we show how this problem can be rephrased as a synthesis problem, where mechanisms are automatically synthesized from a partial or complete specification in a high-level logical language. We show that Quantitative Strategy Logic is a perfect candidate for specifying mechanisms, as it can express complex strategic and quantitative properties. We solve automated mechanism design in two cases: when the number of actions is bounded, and when agents play in turn.

Jul 31 11:45

ABSTRACT. Logical, or formal, specifications have proved extremely useful for the automated synthesis of plans, strategies, and other objects, such as norms of behaviour, in software systems. In addition to purely software systems, synthesis of plans conforming to a logical specification has been shown to work for cyber-physical systems, in particular for manufacturing systems. Among the many synthesis problems Moshe Vardi has worked on is synthesis for manufacturing. The problem is as follows: given a production recipe, that is, a description of which operations need to be performed in order to manufacture different versions of a product, and a description of a manufacturing facility, synthesise a controller specific to this manufacturing facility, where each operation is assigned to a specific set of manufacturing robots [3]. In [2, 1], synthesis for manufacturing has been investigated in a setting where both the production recipes and the manufacturing facilities are modelled as transducers (Mealy machines). It was shown that this version of the synthesis-for-manufacturing problem is decidable in 2EXPTIME when the set of manufacturing resources is fixed, and is undecidable if the problem requires finding how many copies of each manufacturing resource are needed to manufacture the product. In this talk, I would like to outline possible extensions of the problem that keep it decidable, for example using first-order specifications of the production recipes as in [4] and specifying additional constraints on how the production facilities should be used.

Jul 31 11:55

ABSTRACT. We give a logical epistemic characterization of iterated admissibility (i.e., iterated deletion of weakly dominated strategies), based on the intuition that a strategy is admissible if it is one used by an agent who is rational, knows that the other agents satisfy the appropriate rationality requirements, and for whom that is "all the agent knows", in that she considers possible all strategies compatible with these assumptions.
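
For concreteness, the underlying procedure, iterated deletion of weakly dominated strategies, can be sketched on an invented two-player game (the epistemic characterization itself is not modelled here):

    def weakly_dominated(i, s, S, payoff):
        """Is player i's strategy s weakly dominated by some t, given S[1-i]?"""
        others = S[1 - i]
        for t in S[i]:
            if t == s:
                continue
            ge = all(payoff[i](t, o) >= payoff[i](s, o) for o in others)
            gt = any(payoff[i](t, o) > payoff[i](s, o) for o in others)
            if ge and gt:
                return True
        return False

    def iterated_admissibility(S, payoff):
        changed = True
        while changed:
            changed = False
            for i in (0, 1):
                left = [s for s in S[i] if not weakly_dominated(i, s, S, payoff)]
                if len(left) < len(S[i]):
                    S[i], changed = left, True
        return S

    # A 2x2 game: payoffs for (row, column) strategy profiles.
    table = {("T", "L"): (1, 1), ("T", "R"): (1, 0),
             ("B", "L"): (0, 0), ("B", "R"): (1, 0)}
    payoff = [lambda a, b: table[(a, b)][0],   # row player
              lambda b, a: table[(a, b)][1]]   # column player
    print(iterated_admissibility([["T", "B"], ["L", "R"]], payoff))
    # [['T'], ['L']]: B and then R are deleted as weakly dominated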

Jul 31 12:10

ABSTRACT. In automata-theoretic verification, fair simulation is a sufficient condition for trace inclusion. The two notions coincide if the specification automaton is history-deterministic (also called "good-for-games"). History-determinism is nondeterminism that can be resolved by remembering the past, without the need to guess the future. History-determinism is a natural automaton- and game-theoretic concept that can replace determinism for many verification and synthesis questions.

Jul 31 12:20

ABSTRACT. This is a talk proposal for VardiFest in which I want to recount my one and a half years of learning directly from Moshe and working with him on a paper. The talk will be a rumination on that wonderful experience (1995-1996) and a homage to the wondrous computer scientist who inspires me even today.

Jul 31 12:30

ABSTRACT. Quantitative reasoning is often treated as synonymous with numerical methods: the techniques that come to mind for solving problems involving quantitative properties are linear programming, optimization, and similar methods. In this talk, I will introduce a novel framework for quantitative reasoning, called comparators, which is based purely on automata methods as opposed to numerical methods, and discuss its pros and cons.

Jul 31 14:50

ABSTRACT. In synthesis, environment specifications are constraints on the environments that rule out certain environment behaviors. To solve synthesis of LTLf goals under environment assumptions, we could reduce the problem to LTL synthesis. Unfortunately, while synthesis in LTLf and in LTL have the same worst-case complexity (both are 2EXPTIME-complete), the algorithms available for LTL synthesis perform much worse in practice than those for LTLf synthesis. Recently, it has been shown that when the environment specifications take the form of fairness or stability conditions, or GR(1) formulas, we can avoid such a detour to LTL and keep the simplicity of LTLf synthesis. Furthermore, even when the environment specifications are given in full LTL, we can partially avoid this detour.

In this talk, I review how to effectively tackle the synthesis problem for reachability and safety tasks under reachability and safety environment specifications by using synthesis techniques based on finite-state automata, thus exploiting the intrinsic simplicity of LTLf.
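
The automata-game core can be made concrete: the sketch below computes the agent's winning region of a toy DFA game by a least fixpoint, where the agent moves first and the environment responds adversarially (the game and its transition function are invented; the talk's techniques go well beyond this).

    states, final = {0, 1, 2}, {2}
    agent_moves, env_moves = {"a0", "a1"}, {"e0", "e1"}

    def delta(q, a, e):  # toy transition function
        if q == 0:
            return 1 if a == "a1" else 0
        if q == 1:
            return 2 if (a, e) != ("a0", "e1") else 0
        return 2

    def winning_region():
        win = set(final)
        while True:
            new = {q for q in states
                   if any(all(delta(q, a, e) in win for e in env_moves)
                          for a in agent_moves)}
            if new <= win:
                return win
            win |= new

    print(winning_region())  # {0, 1, 2}: playing a1 wins from every state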

Jul 31 15:00

ABSTRACT. Boolean synthesis is the process of synthesizing a multiple-output Boolean function from a declarative specification describing a relation between its inputs and outputs. Motivated by applications in diverse areas such as digital-circuit design, QBF certification and reactive synthesis, a number of different approaches have been developed to tackle this problem. This talk intends to give a brief overview of the ongoing development of Boolean-synthesis algorithms based on decision diagrams. This approach to Boolean synthesis is intrinsically connected to its application as a component of reactive synthesis, since decision diagrams are common in a number of successful reactive-synthesis algorithms. We will contrast the original algorithm based on Binary Decision Diagrams (BDDs) with a newly-introduced version making use of a symbolic CNF representation using Zero-Suppressed Decision Diagrams (ZDDs), and show how this alternative representation can mitigate some of the scalability limitations of BDDs. In the course of the talk we will touch on some of the algorithmic insights that went into the design of such techniques.
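
As a baseline for what such algorithms compute, here is an enumerative sketch of Boolean synthesis from a relational specification (the algorithms discussed in the talk work symbolically on decision diagrams rather than on truth tables; the specification below is invented):

    from itertools import product

    def synthesize(R, n_in, n_out):
        """For each input x with some witness y, pick f(x) with R(x, f(x))."""
        table = {}
        for x in product([0, 1], repeat=n_in):
            ys = [y for y in product([0, 1], repeat=n_out) if R(x, y)]
            if ys:
                table[x] = ys[0]  # any witness will do
        return table

    # Spec: the 2-bit output y is the increment of the 2-bit input x, mod 4.
    R = lambda x, y: (2 * y[0] + y[1]) == (2 * x[0] + x[1] + 1) % 4
    f = synthesize(R, 2, 2)
    assert all(R(x, f[x]) for x in f)
    print(f[(0, 1)])  # (1, 0): the function realizes the specification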

Jul 31 15:10

ABSTRACT. Given a Markov decision process (MDP) $M$ and a formula $\Phi$, the strategy synthesis problem asks if there exists a strategy $\sigma$ s.t. the resulting Markov chain $M[\sigma]$ satisfies $\Phi$. This problem is known to be undecidable for the probabilistic temporal logic PCTL. We study a class of formulae that can be seen as a fragment of PCTL where a local, bounded horizon property is enforced all along an execution. Moreover, we allow for linear expressions in the probabilistic inequalities. This logic is at the frontier of decidability, depending on the type of strategies considered. In particular, strategy synthesis is decidable when strategies are deterministic while the general problem is undecidable.
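
A brute-force sketch of a tiny instance (all numbers invented): enumerate memoryless deterministic strategies and check the local bounded-horizon property "from every reachable state, the probability of hitting G within k steps is at least theta":

    from itertools import product

    # MDP: P[state][action] = list of (probability, successor).
    P = {0: {"a": [(0.5, 1), (0.5, 0)], "b": [(1.0, 2)]},
         1: {"a": [(1.0, 2)]},
         2: {"a": [(1.0, 2)]}}
    G, k, theta = {2}, 2, 0.6

    def hit_prob(chain, s, steps):
        if s in G:
            return 1.0
        if steps == 0:
            return 0.0
        return sum(p * hit_prob(chain, t, steps - 1) for p, t in chain[s])

    def satisfies(sigma):
        chain = {s: P[s][sigma[s]] for s in P}  # induced Markov chain
        reach, todo = set(), [0]
        while todo:  # states reachable from 0 under sigma
            s = todo.pop()
            if s not in reach:
                reach.add(s)
                todo += [t for _, t in chain[s]]
        return all(hit_prob(chain, s, k) >= theta for s in reach)

    for sigma in (dict(zip(P, acts)) for acts in product(*(P[s] for s in P))):
        if satisfies(sigma):
            print("strategy found:", sigma)  # {0: 'b', 1: 'a', 2: 'a'}
            break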

Jul 31 15:20

ABSTRACT. Moshe has made key contributions in the areas of logic and formal methods, and more recently, he has been one of the main voices publicly discussing the social impact of modern computer technology and AI. I propose to talk about two themes related to these. The first is recent work that highlights the role of formal languages and logic in representation learning. The second is my experience teaching an optional, upper-division course that I developed, called "Social and technological change", related to Moshe's concerns.

Jul 31 16:00

ABSTRACT. The proposed theory of survival of the fitted [1] explains the evolution of species in the biosphere based on the sustainability of multi-scale interaction networks: interaction patterns are what sustains the network and its constituents. Winning struggles and having reproductive advantages, as associated with Darwinian and Neo-Darwinian survival of the fittest and Natural Selection, are secondary. We define a species as the collective of organisms that share a set of fundamental, core, sustaining interactions, which we term the species interaction code. Here (see also [2]), we show that interaction codes are formed, and subsequently evolve, in response to genetic, environmental and other changes, in a process that in some ways relates to the well-studied concepts of autoencoding and dimensionality reduction in machine learning. In a unique autoencoding process, which we term natural autoencoding, every biological process or interaction in the entire biosphere takes part in multiple functions: encoding existing organisms and interactions; decoding such codes into networks of sustained ecosystems; and reacting to innovation and change by adjusting species interaction codes. DNA is only one element in this mechanism. One result of this process is that species interaction codes both map and construct the species' environment.

Natural autoencoding differs from typical artificial autoencoding; natural autoencoding is:

1. Sustainment-guided: Typical artificial autoencoders iteratively optimize a loss function: each input is encoded, its representation is then decoded, and the differences between inputs and corresponding reconstructed outputs are computed. In natural autoencoding, evolving species interaction codes stabilize around de-facto sustained interaction patterns. This is retrospective; there is no sustainability assessment of a present state. What works works.

2. Multi-species and multi-scale: Natural autoencoding concurrently generates many species interaction codes, at multiple scales/levels, including biochemical, cellular, and organism.

3. Recursive and reflective: The biosphere drives its own evolution: the input to the "natural autoencoder" is the entire biosphere, incorporating all available outputs of previous steps, and the autoencoder itself.
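
For contrast, the standard artificial autoencoding loop referred to above, in its simplest linear form (dimensions, data, and learning rate are arbitrary): encode, decode, measure the input/output difference, adjust.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 8))                  # inputs
    W_enc = rng.normal(scale=0.1, size=(8, 3))     # 8 -> 3 bottleneck
    W_dec = rng.normal(scale=0.1, size=(3, 8))

    for _ in range(2000):
        Z = X @ W_enc                              # encode
        X_hat = Z @ W_dec                          # decode
        err = X_hat - X                            # reconstruction difference
        g_dec = Z.T @ err / len(X)                 # gradients of the loss
        g_enc = X.T @ (err @ W_dec.T) / len(X)
        W_dec -= 0.1 * g_dec
        W_enc -= 0.1 * g_enc

    print(((X @ W_enc @ W_dec - X) ** 2).mean())   # reconstruction error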

Our team is now designing computer-based autoencoders to model natural autoencoding.

Principles associated with Natural Selection and survival of the fittest have significantly affected human political, social, economical, and educational norms. Survival-of-the-fitted by natural autoencoding explains why a habitable environment requires diverse networked interactions. This perspective can help drive much needed changes in human behavior, both environmental and social, including the kind of social responsibility advocated by Moshe Vardi [3].

Acknowledgement: We thank D. Harel, S. Szekely and G. Frankel for valuable contributions.

Jul 31 16:10

ABSTRACT. I first came across Moshe Vardi's papers in the 1990s, during my PhD studies at Imperial College, London. One paper of his in particular, on why modal logic is robustly decidable, clearly caught my attention, as he neatly explained not only the paper's main topic but also handed the reader an understanding of model checking and validity \cite{Vardi96}. This paper inspired our work on Connectionist Modal Logics years later \cite{DAVILAGARCEZ200734} and our approach to Neurosymbolic AI in general \cite{3rdwave}. Several years later, we invited him to be a keynote speaker at the Brazilian National Conference on Computer Science. Since then, we have become friends and met several times. Our recent collaboration aims at better understanding and explaining machine learning \cite{Tavares2020}, showing that neural learning and logic can lead to richer neurosymbolic AI models, and at integrating machine learning and reasoning toward more robust AI \cite{Lamb2020,PratesALLV19}. In this talk, I will highlight Vardi's contribution to this endeavour and the inspiration that I have drawn from his exquisite yet humongous contribution to computer science, logic, and artificial intelligence.

Jul 31 16:20

ABSTRACT. We intend to present our line of work based on epistemic logic, ongoing for several years, in which we cope with group dynamics in multi-agent settings. We have proposed, and extended over time, the epistemic logic \textit{L-DINF}, which allows one to formalize such dynamics, with special attention to explainability, modularity, and the ability to model cognitive aspects of agents' operation within the application at hand.

Jul 31 16:30

ABSTRACT. Logic and graph theory form the basis of a fascinating interplay between combinatorial structures and their algorithmic properties. Some of the most important examples of this are relations between logic, hypergraphs and constraint satisfaction problems, graph width parameters, and connections with monadic second-order logic.

The interaction between logic and graphs has had an important impact on applications in computer science, from database query optimization to temporal reasoning and verification, as well as in identifying tractable cases of the relevant constraint satisfaction problems.

In this brief talk, we will explore a number of themes in which logic and algorithmic graph theory have influenced each other.

Jul 31 16:45

ABSTRACT. I will discuss the highlights of two papers written with the late Ashok Chandra between 1979 and 1982. The subject is computability and complexity on finite unordered structures (e.g., graphs). Despite the original papers being couched in the framework of relational databases, the ideas and results constitute a rather elegant generalization of Turing computability and classical complexity theory. They have also triggered a large amount of research in finite model theory and descriptive complexity, including seminal work of Moshe Vardi.

Among other things, I will describe a tantalizing problem we posed in the second paper, which is still unsolved after over 40 years, concerning a fundamental issue around PTIME.

Jul 31 16:55

ABSTRACT. The Safety Fragment of LTL

Aug 01 09:50

ABSTRACT. A significant challenge to widespread adoption of reinforcement learning (RL) is the faithful translation of designer’s intent to the scalar reward signal required by RL algorithms. Logic-based specifications help in two ways: by precisely capturing the intended objective, and by allowing its automatic translation to a reward function. Omega-regular objectives, such as those expressed in LTL and by omega-automata, have recently been proposed to specify learning objectives in RL. In this talk, we will discuss the impact of Vardi's contributions to automata-theoretic reinforcement learning.

Aug 01 10:00

ABSTRACT. Fifteen years have passed since the introduction of the original turn-based variant of Strategy Logic, and twelve since its full-fledged concurrent extension. Several interesting results have been obtained and, for sure, many more are still to come. The ones that I consider most meaningful concern, in particular, the enlightening connections with other subfields of theoretical computer science, most notably algorithmic game theory and finite model theory, which have enriched and broadened the scientific literature. In this talk, I will overview some of these results, starting with anecdotes about the original work done in 2008, while I was a visiting Ph.D. student at Rice University under Moshe's supervision, and ending with a few open questions.

Aug 01 10:10

ABSTRACT. The paper discusses a collaboration with Moshe Vardi that started with a work presenting an algorithm that takes one regular expression E and n regular expressions S as input, and returns an automaton which is shown to be the maximal rewriting of E with respect to S.

Aug 01 10:20

ABSTRACT. The paper itself is an extended abstract.

Aug 01 11:00

ABSTRACT. I propose to present our recent results [LICS'19, LICS'21] on capturing various bisimilarity notions --- covering not only the conventional relational notion but also its quantitative extensions, such as probabilistic bisimulation and bisimulation metrics --- in the language of category theory. The theory combines abstract category theory and concrete games --- so-called codensity games, which are played in categories --- and I believe it will be of interest to Moshe and the audience.

Aug 01 11:10

ABSTRACT. In order to improve the effectiveness of law enforcement agencies, we address the problem of capturing absconders using graph theory, formal methods and game theory. We define a set of problems and we outline an approach to solve them.

Aug 01 11:20

ABSTRACT. We point out the surprising connections which have recently been observed between the famous Kochen-Specker theorem in the foundations of quantum mechanics, and algorithmic questions relating to constraint satisfaction and the celebrated Feder-Vardi Dichotomy Conjecture (recently proved by Bulatov and Zhuk), and to the Weisfeiler-Leman approximations to structure isomorphism.

Aug 01 11:35

ABSTRACT. In his 1982 landmark paper "The Complexity of Relational Query Languages" [7], Moshe Vardi defined and studied the concept of data complexity of relational query languages, which is the complexity of evaluating a fixed query in the language as a function of the size of the database. Vardi also defined the notion of expression complexity (fixed database, query as input), now often called "program complexity", and combined complexity (where both the database and the query constitute the input). Data complexity has become the standard method for assessing the complexity of query languages. Vardi [7] analysed various query languages and showed that fixed-point queries are complete for PTIME in data complexity, by a proof from which the same result for Datalog follows. He noticed that "the expression complexity of the investigated languages is usually one exponential higher than the data complexity" [7]. Vardi also noted that over ordered structures fixed-point queries (and implicitly Datalog queries) capture PTIME, which was shown independently by Immerman [6], and more explicitly for the second-order Horn fragment corresponding to Datalog by Grädel [5]. The first part of the talk will give a short overview of these results.
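
As a worked instance of the data-complexity viewpoint (the Python rendering is merely illustrative), a fixed Datalog query such as transitive closure is evaluated by least-fixpoint iteration in time polynomial in the database, whereas letting the query vary yields the exponentially higher expression complexity mentioned above:

    # tc(X,Y) :- edge(X,Y).
    # tc(X,Y) :- edge(X,Z), tc(Z,Y).
    def transitive_closure(edge):
        tc = set(edge)
        while True:
            new = {(x, y) for (x, z) in edge for (z2, y) in tc if z == z2} - tc
            if not new:
                return tc  # least fixpoint reached, after at most |dom|^2 rounds
            tc |= new

    edge = {(1, 2), (2, 3), (3, 4)}
    print(sorted(transitive_closure(edge)))
    # [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]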

In the second part, I briefly address the complexity and expressive power of ontological reasoning formalisms such as ontology-based data access via description logics [3] or via variants of Datalog [4,1,2]. I will essentially illustrate two points. First, rather than a typical single-exponential jump from data to expression (or combined) complexity for classical logical query languages addressed by Vardi, we now typically have a double-exponential jump. Second, to better understand the expressive power of ontological reasoning formalisms where, in addition to a database (or ABox) and a query, there is an ontology (or TBox or Datalog± program), it is convenient to consider a refined concept of expressive power studied in [1,2], which is defined by the set of Boolean queries that can be expressed when keeping ontologies fixed.

References

[1] Marcelo Arenas, Georg Gottlob, and Andreas Pieris. Expressive languages for querying the semantic web. ACM Trans. Database Syst., 43(3):13:1-13:45, 2018.
[2] Gerald Berger, Georg Gottlob, Andreas Pieris, and Emanuel Sallinger. The space-efficient core of Vadalog. ACM Trans. Database Syst., 47(1), April 2022.
[3] Diego Calvanese, Giuseppe De Giacomo, Domenico Lembo, Maurizio Lenzerini, Antonella Poggi, Mariano Rodriguez-Muro, Riccardo Rosati, Marco Ruzzi, and Domenico Fabio Savo. The MASTRO system for ontology-based data access. Semantic Web, 2(1):43-53, 2011.
[4] Georg Gottlob, Thomas Lukasiewicz, and Andreas Pieris. Datalog+/-: Questions and answers. In 14th Intl. Conf. on the Principles of Knowledge Representation and Reasoning (KR'14), 2014.
[5] Erich Grädel. Capturing complexity classes by fragments of second-order logic. Theoretical Computer Science, 101(1):35-57, 1992.
[6] Neil Immerman. Relational queries computable in polynomial time (extended abstract). In Proceedings of STOC'82, May 5-7, 1982, San Francisco, California, USA.
[7] Moshe Y. Vardi. The complexity of relational query languages. In Proc. of STOC'82, 1982.

Aug 01 11:45

ABSTRACT. In this talk we present some of the many examples where Prof. Moshe Vardi's research has enabled progress in solving some of the most challenging technical problems present in aviation. We touch upon a subset of use cases from knowledge acquisition, advanced reasoning, planning, environment, health and safety, and energy efficiency.

Aug 01 11:55

ABSTRACT. The determinization of a nondeterministic Büchi automaton (NBA) is a fundamental construction of automata theory, with applications to probabilistic verification and reactive synthesis. The standard determinization constructions, such as those based on the Safra-Piterman approach, work on the whole NBA. In this work we propose a divide-and-conquer determinization approach. To this end, we first classify the strongly connected components (SCCs) of the given NBA as inherently weak, deterministic accepting, and nondeterministic accepting. We then present how to determinize each type of SCC independently from the others; this results in an easier handling of the determinization algorithm that takes advantage of the structure of that SCC. Once all SCCs have been determinized, we show how to compose them so as to obtain the final equivalent deterministic Emerson-Lei automaton, which can be converted into a deterministic Rabin automaton without blow-up of states and transitions. We implement our algorithm in a prototype tool named ourDC and empirically evaluate it against the state-of-the-art tools on a large set of benchmarks from the literature. The experimental results show that our prototype outperforms Spot and Owl in the number of states and transitions.
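
The classification step can be sketched as follows, for state-based acceptance and with an intentionally rough weakness test (a sufficient condition only: the actual notion of inherent weakness, that no SCC contains both accepting and non-accepting cycles, needs a finer cycle analysis). The toy NBA is invented.

    def sccs(states, succ):
        def reach(v):
            seen, todo = set(), [v]
            while todo:
                u = todo.pop()
                if u not in seen:
                    seen.add(u)
                    todo += list(succ(u))
            return seen
        fwd = {v: reach(v) for v in states}
        return {frozenset(w for w in states if w in fwd[v] and v in fwd[w])
                for v in states}

    def classify(scc, delta, accepting):
        if not (scc & accepting):
            return "inherently weak (no accepting cycle)"
        if scc <= accepting:
            return "inherently weak (all cycles accepting)"
        internal = [{t for t in ts if t in scc}
                    for (q, a), ts in delta.items() if q in scc]
        if all(len(ts) <= 1 for ts in internal):
            return "deterministic accepting"
        return "nondeterministic accepting"

    # Toy NBA: delta maps (state, letter) to a set of successors; 2 accepts.
    delta = {(0, "a"): {0, 1}, (1, "a"): {2}, (2, "a"): {1, 2}, (2, "b"): {2}}
    states, accepting = {0, 1, 2}, {2}
    succ = lambda q: {t for (p, a), ts in delta.items() if p == q for t in ts}
    for c in sccs(states, succ):
        print(sorted(c), "->", classify(c, delta, accepting))
    # prints, in some order:
    # [0] -> inherently weak (no accepting cycle)
    # [1, 2] -> nondeterministic accepting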

Aug 01 12:10

ABSTRACT. We have been studying LTL misconceptions with multiple populations to determine *in what ways* LTL is tricky and to decide *what we can do* to address the issues. We propose an interactive remote talk that aims to demonstrate the LTL misconceptions and expert blind spots that we have found.

Aug 01 12:20

ABSTRACT. We present a novel hybrid algorithm for training Deep Neural Networks (DNNs) that combines the state-of-the-art Gradient Descent (GD) method with a Mixed Integer Linear Programming (MILP) solver, outperforming GD and variants in terms of accuracy, as well as resource and data efficiency, for both regression and classification tasks. Our GD+Solver hybrid algorithm, called \toolname, works as follows: given a DNN $D$ as input, \toolname invokes GD to partially train $D$ until it gets stuck in a local minimum, at which point \toolname invokes an MILP solver to exhaustively search a region of the loss landscape around the weight assignments of $D$'s final layer parameters, with the goal of tunnelling through and escaping the local minimum. The process is repeated until the desired accuracy is achieved. We report on an extensive evaluation of \toolname against Stochastic Gradient Descent (SGD), momentum-based algorithms (Adam) and common heuristics (e.g., Learning Rate Scheduling, LRS) on a suite of regression and classification tasks. In our experiments, we find that \toolname not only scales well to additional data and very large model sizes, but also outperforms all other competing methods in terms of rates of convergence and data efficiency. For regression tasks, \toolname produced models with, on average, 31.5% lower MSE in 48% less time, and for classification tasks on MNIST and CIFAR10, \toolname was able to achieve the highest accuracy over all competing methods, using only 50% of the training data that GD baselines required. By judiciously leveraging an MILP solver to fine-tune the final layer and thus tunnel through local minima, we avoid the scalability issues associated with previous solver-based training methods, while at the same time avoiding the propensity of GD methods to get stuck in local minima.
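
A toy rendering of this loop, with the MILP solver replaced by an exhaustive grid search over a small box around the final-layer weights (model, data, and hyperparameters are all invented stand-ins):

    import numpy as np
    from itertools import product

    rng = np.random.default_rng(1)
    X = rng.normal(size=(64, 4))
    y = (np.sin(X @ rng.normal(size=4)) > 0).astype(float)  # toy targets
    w = rng.normal(scale=0.1, size=4)                       # "final layer"

    def loss(w):
        p = 1 / (1 + np.exp(-(X @ w)))
        return float(((p - y) ** 2).mean())

    for _ in range(5):
        for _ in range(300):                                # GD phase
            p = 1 / (1 + np.exp(-(X @ w)))
            w -= 0.5 * X.T @ ((p - y) * p * (1 - p)) / len(X)
        # "solver" phase: search a box around w, keep any strict improvement
        deltas = np.linspace(-0.5, 0.5, 5)
        best = min(product(deltas, repeat=4),
                   key=lambda d: loss(w + np.array(d)))
        if loss(w + np.array(best)) < loss(w):
            w = w + np.array(best)                          # tunnel through
    print(loss(w))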

ABSTRACT. The Skolem Problem asks how to determine algorithmically whether a given linear recurrence sequence (such as the Fibonacci numbers) has a zero. It is a central question in dynamical systems and number theory, and has many connections to other branches of mathematics and computer science, such as program analysis and automated verification. Unfortunately, its decidability has been open for nearly a century! In this talk, I will present a brief survey of what is known on the Skolem Problem and related questions, including recent and ongoing developments.
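
Concretely, searching forward through the sequence gives a semi-decision procedure (a zero, once found, is a certificate; no known general algorithm bounds how far one must look). The recurrence below is invented for illustration:

    def lrs_terms(coeffs, init, n):
        """u(k+d) = coeffs[0]*u(k+d-1) + ... + coeffs[d-1]*u(k), exact ints."""
        u = list(init)
        while len(u) < n:
            u.append(sum(c * v for c, v in
                         zip(coeffs, reversed(u[-len(coeffs):]))))
        return u

    # u(k+2) = 2*u(k+1) - u(k), u(0) = -2, u(1) = -1, i.e. u(k) = k - 2.
    terms = lrs_terms([2, -1], [-2, -1], 20)
    print(terms[:6], "first zero at index:",
          next((i for i, t in enumerate(terms) if t == 0), None))
    # [-2, -1, 0, 1, 2, 3] first zero at index: 2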

Aug 01 14:25

ABSTRACT. I will explain how old papers by Moshe Vardi and Ray Reiter on query answering over incomplete databases led to recently discovered schemes for efficiently approximating certain answers, and present a previously unknown and exceptionally simple formulation of such schemes for first-order queries over relational databases.

Aug 01 14:35

ABSTRACT. I describe my work in connection with several papers by Moshe Vardi.

Aug 01 14:45

ABSTRACT. In this talk I will recall a fruitful collaboration between Serge Abiteboul and myself with Moshe, that resulted in the 1997 JACM article with the above title. Under Moshe's impetus, the article completed in a very elegant way previous results, providing a comprehensive and compelling picture, and remains one of my favorite papers.

The results establish a general connection between fixpoint logic and complexity. On one side, we have fixpoint logic, parameterized by the choices of 1st-order operators (inflationary or noninflationary) and iteration constructs (deterministic, nondeterministic, or alternating). On the other side, we have the complexity classes between P and EXPTIME. The parameterized fixpoint logics express the complexity classes P, NP, PSPACE, and EXPTIME, but only over ordered structures.

The order requirement highlights an inherent mismatch between complexity and logic -- while computational devices work on encodings of problems, logic is applied directly to the underlying mathematical structures. To overcome this mismatch, we used a theory of relational complexity based on the relational machine, a computational device that operates directly on structures. Relational complexity bridges the gap between standard complexity and fixpoint logic. On one hand, questions about containments among standard complexity classes can be translated to questions about containments among relational complexity classes. On the other hand, the expressive power of fixpoint logic can be precisely characterized in terms of relational complexity classes. This tight, three-way relationship among fixpoint logics, relational complexity and standard complexity yields in a uniform way logical analogs to all containments among the complexity classes P, NP, PSPACE, and EXPTIME. The logical formulation shows that some of the most tantalizing questions in complexity theory boil down to a single question: the relative power of inflationary vs. noninflationary 1st-order operators.

Aug 01 15:00

ABSTRACT. Proofs of unsatisfiability facilitate the validation of SAT solver results. Practically all top-tier solvers support proof logging and these proofs can efficiently be checked using formally-verified tools. However, the size of these proofs is typically large and sometimes gigantic, thereby making them impossible to understand. On the other hand, one can extract useful information out of proofs, such as unsatisfiable cores or interpolants.
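
The basic step behind such checking can be stated compactly: a proof clause is accepted if assuming its negation and running unit propagation on the formula yields a conflict (reverse unit propagation, the core of DRUP/DRAT-style checkers). The sketch and example formula below are illustrative only.

    def rup(formula, clause):
        """Check `clause` against `formula` by reverse unit propagation."""
        assigned = {-lit for lit in clause}  # assume the clause is false
        while True:
            unit = None
            for c in formula:
                if any(l in assigned for l in c):
                    continue                 # clause already satisfied
                pending = [l for l in c if -l not in assigned]
                if not pending:
                    return True              # conflict: clause is implied
                if len(pending) == 1:
                    unit = pending[0]
                    break
            if unit is None:
                return False                 # no conflict derivable
            assigned.add(unit)

    cnf = [[1, 2], [1, -2], [-1, 2], [-1, -2]]   # unsatisfiable
    print(rup(cnf, [1]))            # True: (x1) has the RUP property
    print(rup(cnf + [[1]], []))     # True: then the empty clause follows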

We present some results on extracting understanding from proofs of unsatisfiability. This work started after a question by Moshe Vardi about the effect of using a large interval for the Pythagorean Triples problem on the size of the proof. Increasing the size of the interval turned out to reduce the size of the proof substantially. It might even be possible to produce a humanly-understandable proof for this problem if the interval is large enough.

We also show some other results in this direction. For example, short proofs of unsatisfiability have been crucial to constructing small unit-distance graphs with chromatic number 5. These graphs are important building blocks to solving the Hadwiger-Nelson problem. Also, compact proofs of mutilated chessboard problems provided an alternative short, humanly-understandable argument of unsatisfiability.

Aug 01 16:00

ABSTRACT. In this short talk, we will give a retrospective of the collaboration Moshe had with Intel over the past 25 years! This long-lasting collaboration was key in bringing formal verification to industry. It is an excellent example of Vardi’s unique contribution and ability to bridge theory and practice.

Aug 01 15:20

ABSTRACT. This summer, we gather to celebrate Moshe Vardi's many pioneering contributions to the theory and practice of computer science, and his leadership qualities and activities. My toast will focus on some less-known ingenious traits he exhibited starting very early in his career. In particular, I will discuss how Moshe acted in the role of a grand translator who applied advanced archery strategies in database research, and will comment on his role as a member of the Gang-of-Four.

Aug 01 15:30

ABSTRACT. Although I have worked closely with Moshe on counting problems, it will be a tall order to count all the things I've learnt from him -- through his articles, lectures, one-on-one conversations and gems of advice over the years. So I'd rather not try to count but briefly talk of a few (among many) occasions at different stages of my career, when I kept running into Moshe's profound contributions and insights. Starting from my grad school days, Moshe's result with Pierre Wolper on automata theoretic LTL model checking was among my first few introductions to formal verification. Much later, while working with my Ph.D. student on logic and some aspects of finite model theory, we kept running into beautiful results due to Moshe and his collaborators. More recently, while working with Moshe on PAC counting, I recall some very interesting discussions on how 2-, 3- or even 2.5-universal hashing might just be the sweet spot to help achieve a balance between scalability and strong formal guarantees. Almost a decade later, we know how spot-on his prediction was.

Aug 01 15:10

ABSTRACT. Differential privacy is a mathematical framework for developing statistical computations with provable guarantees of privacy and accuracy. In contrast to the privacy component of differential privacy, which has a clear mathematical and intuitive meaning, the accuracy component does not have a generally accepted definition; accuracy claims of differential privacy algorithms vary from algorithm to algorithm and are not instantiations of a general definition. In a recent paper \cite{bcksv21}, we identify program discontinuity as a common cause for \emph{ad hoc} definitions and introduce an alternative notion of accuracy parametrized by what we call \emph{distance to disagreement} --- the distance to disagreement of an input $x$ w.r.t. a deterministic computation $f$ and a distance $d$ is the minimal distance $d(x,y)$ over all $y$ such that $f(y) \neq f(x)$. The talk will discuss what this definition entails and identify circumstances under which verifying claims of accuracy is decidable.
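
To make the definition concrete: for the (invented) threshold function f(x) = [x >= t] on the reals with d(x, y) = |x - y|, the distance to disagreement of x is exactly |x - t|, vanishing at the discontinuity; the sampled minimum below is a numerical stand-in for the infimum.

    t = 10.0
    f = lambda x: x >= t

    def dist_to_disagreement(x, samples):
        """min d(x, y) over sampled y with f(y) != f(x)."""
        return min(abs(x - y) for y in samples if f(y) != f(x))

    ys = [i / 10 for i in range(200)]
    print(dist_to_disagreement(7.0, ys))    # 3.0 = |7 - 10|
    print(dist_to_disagreement(10.0, ys))   # ~0.1: right at the discontinuity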

Aug 01 16:45

ABSTRACT. In this talk I will share the technical stepping stones that formed the path from my research world to Moshe's. My background is in hardware verification, and this led to my interest in developing practical SAT solvers that could handle hardware verification problems at scale. This led to the two key contributions of the Chaff SAT solver from my group: the two-literal watching algorithm for unit propagation and the VSIDS (Variable State Independent Decaying Sum) decision heuristic. These techniques built on the earlier success of what is now known as CDCL (Conflict Driven Clause Learning). Collectively, these techniques dramatically improved the capabilities of SAT solvers, enabling them to tackle problems at scale not just in hardware verification, but in system verification and even on instances from beyond verification.

The practical success of these and subsequent solvers seemed to fly in the face of the theoretical complexity of SAT. This piqued Moshe's interest and led to his taking a major leadership role in trying to develop the theoretical foundations for what makes these solvers effective on the practical instances of interest: the relationship between the search algorithm and the search-space characteristics of the practical instances. He was the driver of a series of workshops titled “Theory and Practice of Satisfiability Solving” held at BIRS Banff (2014), Dagstuhl (2015) and BIRS Oaxaca (2018). These workshops were remarkable in bringing together theoreticians and practitioners interested in SAT in an immersive setting to learn from each other and build bridges between theory and practice for this simultaneously simple and complex problem. Moshe was also instrumental in shepherding articles on the practical successes of SAT and SMT solvers in CACM, making sure these reached the broad CS community.

My chance to collaborate directly with Moshe came when Kuldeep Meel visited me at Princeton while he was Moshe's student. We started working on the problem of finding the minimum/minimal set of independent variables for a given CNF SAT formula. This could significantly reduce the cost of the uniform sampling and model counting algorithms that Kuldeep and Moshe were working on. The collaboration expanded to include Alex Ivrii and led to a nice algorithm for this problem, making my Moshe number 1!

Aug 01 16:55

ABSTRACT. LTLf? It's Easy! Precisely! (More details in the attached extended abstract)

ABSTRACT. In 2010, I approached Vardi for an undergraduate research project, and he suggested I look at the problem of uniform generation of SAT solutions. I had set myself the goal of delivering a practical sampler by the end of that academic year. Oh, the naivety of youth... Well, what a journey it has been.

ABSTRACT. We survey the issue of how an agent views its environment in synthesis: whether the environment generates traces or uses strategies. While historically the conventional wisdom has been that, in the setting of linear-time logic, the trace view is adequate, we argue that for modern, complex forms of synthesis, e.g., best-effort synthesis, the distinction becomes important and the trace-based view seems to be lacking. Indeed, the strategy view sheds light on the limitations of the trace view, which amount to the observation that environment traces correspond to oblivious strategies of the environment.

Based on joint work with Benjamin Aminof, Giuseppe De Giacomo, and Moshe Vardi.