FLOC 2022: FEDERATED LOGIC CONFERENCE 2022
WIL ON SUNDAY, JULY 31ST

08:30-09:00 Coffee & Refreshments
09:00-10:30 Session 1J
Location: Ullmann 307
09:00
Embedding Quantitative Properties of Call-by-Name and Call-by-Value into Call-by-Push-Value

ABSTRACT. This talk explores how the (untyped and typed) theories of Call-by-Name (CBN) and Call-by-Value (CBV) can be unified in a more general framework provided by Call-by-Push-Value (CBPV). Indeed, we first present an untyped CBPV-like calculus, called lambda-bang, which encodes untyped CBN and CBV, both from a static and a dynamic point of view. We then explore these encodings in a typed framework, specified by quantitative (aka non-idempotent intersection) types. Three different (quantitative) properties are discussed: the first is related to upper bounds for reduction lengths, the second concerns exact measures for reduction lengths and sizes of normal forms, and the last is about the inhabitation problem. In all these cases, explained and discussed in the talk, the (quantitative) property for CBN/CBV is inherited from the corresponding one in the lambda-bang calculus.
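
For orientation, here is a minimal sketch of the classical Levy-style type translations of CBN and CBV into CBPV; this is standard background, not material from the talk, and U and F denote CBPV's thunk and returner type constructors.

    CBN:  (A -> B)^n  =  U A^n -> B^n          (source types become computation types)
    CBV:  (A -> B)^v  =  U (A^v -> F B^v)      (source types become value types)

In both translations U marks the points where computations are suspended and may be duplicated, which is roughly where the difference between CBN and CBV resides; the lambda-bang calculus of the talk plays an analogous unifying role, with quantitative types layered on top.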

10:00
The essence of type-theoretic elaboration

ABSTRACT. When using type theories in proof assistants, the full syntax can quickly become too verbose to handle. One common solution to this problem is to design two type theories: a fully annotated type theory, which has good meta-theoretic properties, is suitable for algorithmic processing, and resides in the kernel of the proof assistant, and an economical one for the users’ input. The two theories are linked by elaboration, a reconstruction of missing information that happens during, or in parallel with, type-checking. We define a type-theoretic account of an elaboration map and relate its algorithmic properties to decidability of type-checking.
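
As a toy illustration of an elaboration map (a minimal sketch under my own assumptions, not the account developed in the talk): in the fragment below, surface lambdas may omit their argument type, and a bidirectional checker reconstructs the missing annotation while checking against an expected type. All names (Var, Lam, App, Arrow, Base) are illustrative.

    from dataclasses import dataclass

    # Types of a toy simply typed lambda calculus
    @dataclass(frozen=True)
    class Base:
        name: str

    @dataclass(frozen=True)
    class Arrow:
        dom: object
        cod: object

    # Terms; a surface lambda may leave its annotation empty (ann=None)
    @dataclass
    class Var:
        name: str

    @dataclass
    class Lam:
        var: str
        ann: object
        body: object

    @dataclass
    class App:
        fun: object
        arg: object

    def infer(ctx, t):
        """Elaborate t and synthesise its type; fails if an annotation is missing."""
        if isinstance(t, Var):
            return t, ctx[t.name]
        if isinstance(t, Lam):
            if t.ann is None:
                raise TypeError("unannotated lambda: only checkable, not inferable")
            body, cod = infer({**ctx, t.var: t.ann}, t.body)
            return Lam(t.var, t.ann, body), Arrow(t.ann, cod)
        if isinstance(t, App):
            fun, fty = infer(ctx, t.fun)
            assert isinstance(fty, Arrow), "applying a non-function"
            return App(fun, check(ctx, t.arg, fty.dom)), fty.cod
        raise TypeError(f"unknown term {t!r}")

    def check(ctx, t, expected):
        """Elaborate t against an expected type, filling in missing annotations."""
        if isinstance(t, Lam) and t.ann is None and isinstance(expected, Arrow):
            body = check({**ctx, t.var: expected.dom}, t.body, expected.cod)
            return Lam(t.var, expected.dom, body)   # the annotation is reconstructed here
        elab, ty = infer(ctx, t)
        assert ty == expected, f"type mismatch: {ty} vs {expected}"
        return elab

    # check({}, Lam("x", None, Var("x")), Arrow(Base("Nat"), Base("Nat")))
    # returns the fully annotated term Lam("x", Base("Nat"), Var("x"))

The fully annotated output is what a kernel would re-check; relating such a reconstruction to decidability of type-checking is what the talk makes precise.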

10:30-11:00 Coffee Break
11:00-12:30 Session 10N
Location: Ullmann 307
11:00
Monoidal Width -- Extended Abstract
PRESENTER: Elena Di Lavore

ABSTRACT. We introduce monoidal width as a measure of the difficulty of decomposing morphisms in monoidal categories. By instantiating monoidal width and two variations in a suitable category of cospans of graphs, we capture existing notions, namely branch width, tree width and path width. By changing the category of graphs, we are also able to capture rank width. Through these examples, we propose that monoidal width: (i) is a promising concept that, while capturing known measures, can similarly be instantiated in other settings, avoiding the need for ad-hoc domain-specific definitions and (ii) comes with a general, formal algebraic notion of decomposition using the language of monoidal categories.

11:20
Hypertrace Logic

ABSTRACT. Security policies are often specified as hyperproperties. The logical specification of hyperproperties requires quantification over both time and traces. The interaction between trace and time quantifiers makes it challenging to understand what is being specified by a hyperlogic formula. For example, starting from the high-level two-state requirement that information from x should not flow to z until information in y does not flow to z, eight different specification variants were identified in recent work. A two-sorted first-order logic, called Hypertrace logic, was introduced to provide mathematical definitions for the variants identified. In this talk, I introduce Hypertrace logic together with some of its fragments. My goal is to present hyperproperties through the lens of a first-order formalism and to discuss possible new research directions, for example how to use techniques from modal logic to identify decidable fragments of Hypertrace logic.
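
A purely illustrative two-sorted formula (not one of the variants from the talk) shows the quantifier interaction the abstract refers to, with trace variables pi, pi' and time variables t, t':

    forall pi. forall pi'. exists t. forall t' >= t.  x(pi, t') = x(pi', t')

It reads "any two traces eventually agree on x forever". Permuting the time quantifier with a trace quantifier (exists t. forall pi' ... instead of forall pi'. exists t ...) yields a genuinely different property, which is exactly the kind of ambiguity that turns one informal requirement into several inequivalent formalisations.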

11:40
A goal-oriented proof system for epistemic modal logic

ABSTRACT. Epistemic logic enables us to take a logical approach to information, particularly the type of information that is meaningful for an agent as a whole: their knowledge. It encompasses a family of well-developed and well-understood systems. However, its proof-theoretic presentation is usually axiomatic. This approach creates obstacles when we try to integrate it into rich theories of human reasoning and automated reasoning systems. This talk develops new proof systems for Epistemic Logic based on the goal-oriented systems of Gabbay & Olivetti (2000). In a goal-oriented proof system with an epistemic modality, a deductive process begins by asking whether an agent knows a particular objective in a particular state, depending on a particular database and the states accessible to that agent. The general approach of this system is based on databases and objectives. By integrating Epistemic Logic with goal-oriented proof, we open the way to new applications in human and automated reasoning.
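
A purely illustrative run in the goal-oriented style, with a database and clauses of my own choosing rather than the talk's:

    database:   K_a p,  K_a (p -> q)            query:   ?- K_a q   at state s
    K-step:     reduce to   ?- q   at every state t accessible for a from s
    backchain:  the known clause  p -> q  fires, leaving the subgoal  ?- p  at t
    lookup:     p holds at t, since  K_a p  holds at s
    success:    K_a q  holds at s   (mirroring  K_a(p -> q) -> (K_a p -> K_a q))

The point of the goal-oriented discipline is that deduction is driven by the query and consults the database only as needed, which is what makes it attractive for automated and human-oriented reasoning.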

12:00
A Mixed Linear and Graded Logic -- Extended Abstract
PRESENTER: Victoria Vollmer

ABSTRACT. Benton showed in his seminal paper on Linear/Non-Linear logic and models that the of-course modality of linear logic can be split into two modalities connecting intuitionistic logic with linear logic, forming a symmetric monoidal adjunction. In this paper, we give a similar result showing that graded modalities can be split into two modalities connecting graded logic with linear logic. We propose a sequent calculus, its proof theory and categorical model, and a natural deduction system in the form of a term assignment system, which is shown to be isomorphic to the sequent calculus.
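
To fix notation, Benton's decomposition mentioned above can be summarised as follows (standard Linear/Non-Linear material; the graded refinement is the contribution of the paper and is only gestured at here): there is a symmetric monoidal adjunction

    F : C -> L,   G : L -> C,   F left adjoint to G,   !  isomorphic to  F o G

with C cartesian (intuitionistic) and L symmetric monoidal closed (linear). The paper's graded analogue replaces the intuitionistic side by a graded one, so that a graded modality !_r likewise factors through a pair of adjoint modalities between graded and linear logic.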

12:30-14:00 Lunch Break

Lunches will be held in Taub hall and in The Grand Water Research Institute.

14:00-15:30 Session 14Q
Location: Ullmann 307
14:00
A normalized edit distance on finite and infinite words

ABSTRACT. Robustness in formal verification refers to the question: given a system S satisfying a specification Phi, how likely is it that S will still satisfy Phi, even in the presence of errors disrupting its normal behavior? The formal definition typically assumes some error model and some distance function measuring the distance between words (describing computations) and languages. To define robustness for reactive systems, which are modeled by infinite words, a notion of distance between infinite words is required. A common distance between finite words w1 and w2 is the edit (Levenshtein) distance, which gives the minimal number of edit operations (substitute, delete or insert) required to transform w1 into w2. But how can it be enhanced to account for infinite words? Can we find a definition that is both intuitive and satisfies the requirements of a metric? Can it be computed, at least for the case of ultimately periodic words? Efficiently? And what about computing the distance between languages of infinite words (say, represented by Büchi automata)? In this talk we will see that all the questions above can be answered positively. The distance function enabling this is termed ω-NED, and it is a natural extension of NED, the normalized edit distance between finite words. In the quest for answering these questions we stumbled upon the fact that, while it was known that NED does not satisfy the triangle inequality for arbitrary weights, the question of whether it satisfies the triangle inequality when weights are uniform was open. We closed this gap by proving that it does, and we build on this to prove the above results. The talk is based on joint work with Joshua Grogin, Oded Margalit and Gera Weiss.
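
For concreteness, here is a minimal sketch (mine, not the talk's) of the classical Levenshtein distance on finite words; NED then normalises an edit path by its length, and ω-NED lifts the definition to infinite words.

    def levenshtein(w1: str, w2: str) -> int:
        """Minimal number of substitutions, deletions and insertions turning w1 into w2."""
        m, n = len(w1), len(w2)
        # d[i][j] = edit distance between the prefixes w1[:i] and w2[:j]
        d = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(m + 1):
            d[i][0] = i                      # delete all of w1[:i]
        for j in range(n + 1):
            d[0][j] = j                      # insert all of w2[:j]
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                sub = 0 if w1[i - 1] == w2[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,         # delete
                              d[i][j - 1] + 1,         # insert
                              d[i - 1][j - 1] + sub)   # substitute or match
        return d[m][n]

    # levenshtein("kitten", "sitting") == 3

Note that NED minimises weight(path) / length(path) over all edit paths, which in general is not obtained by simply rescaling the value computed above; an exact computation needs a refined dynamic program in the style of Marzal and Vidal.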

15:00
Automated Logic-Based Reasoning for Analyzing Prime Video Code

ABSTRACT. Developers working in Prime Video are constantly innovating, introducing new features, and improving the performance of the Prime Video application. Each modification of the Prime Video application code is required to pass a testing and code review process before it is delivered to customer devices. As the application evolves rapidly, it cannot be expected that manual reasoning about the code keeps up with the software complexity curve. This setting presents an opportunity for the application of automated logic-based approaches. In this abstract, we present techniques and tools we have used and developed in order to automatically reason about the Prime Video application code. We discuss a code review bot, which runs multiple static code analysis tools, and introduce a novel static analyzer for TypeScript, which enables reasoning about deep properties in a trustworthy manner, while providing transparent, actionable feedback to the developers.

15:30-16:00 Coffee Break
16:00-17:00 Session 19N
Location: Ullmann 307
16:00
Fixed-Template Promise Model Checking Problems
PRESENTER: Silvia Butti

ABSTRACT. The fixed-template constraint satisfaction problem (CSP) can be seen as the problem of deciding whether a given primitive positive first-order sentence is true in a fixed structure (also called model). We study a class of problems that generalizes the CSP simultaneously in two directions: we fix a set L of quantifiers and Boolean connectives, and we specify two versions of each constraint, one strong and one weak. Given a sentence which only uses symbols from L, the task is to distinguish whether the sentence is true in the strong sense, or it is false even in the weak sense.

We classify the computational complexity of these problems for the existential positive equality-free fragment of first-order logic, and we prove some upper and lower bounds for the positive equality-free fragment. The partial results are sufficient, e.g., for all extensions of the latter fragment.
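
As a purely illustrative special case (not taken from the paper): over the signature of graphs, a sentence built from existential quantifiers and conjunctions of edge atoms, such as

    exists x exists y exists z ( E(x,y) AND E(y,z) AND E(z,x) )

is true in the clique K_k exactly when the graph it describes admits a homomorphism to K_k, i.e. is k-colourable. Taking K_3 as the strong template and K_5 as the weak one, the promise problem asks to distinguish sentences true in K_3 from sentences false even in K_5, which for such conjunctive sentences is the familiar approximate graph colouring problem: distinguish 3-colourable graphs from graphs that are not even 5-colourable.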

16:20
Two Dimensional Bounded Model Checking for Unbounded Client-Server Systems
PRESENTER: Tephilla Prince

ABSTRACT. Bounded model checking (BMC) is an efficient formal verification technique which allows desired properties of a software system to be checked on bounded runs of an abstract model of the system. The properties are frequently described in some temporal logic, and the system is modeled as a state transition system. In this paper we propose a novel counting logic, L_C, to describe the temporal properties of client-server systems with an unbounded number of clients. We also propose a two-dimensional bounded model checking (2D-BMC) strategy that uses two distinct parameters, one for execution steps and another for the number of tokens in the net representing a client-server system; these two evolve separately, which differs from standard BMC techniques in the Petri Nets formalism. This 2D-BMC strategy is implemented in a tool called DCModelChecker, which combines the 2D-BMC technique with the state-of-the-art satisfiability modulo theories (SMT) solver Z3. The system is given as a Petri net, and properties specified using L_C are encoded into formulas that are checked by the solver. Our tool can also work on industrial benchmarks from the Model Checking Contest (MCC). We report on these experiments to illustrate the applicability of the 2D-BMC strategy.
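
To make the flavour of SMT-based bounded model checking concrete, here is a minimal generic sketch in Python with Z3 (assuming the z3-solver package); it is not DCModelChecker and does not implement L_C. It unrolls a toy client-server net for k steps and asks Z3 whether a marking with too many busy clients is reachable; the token dimension of 2D-BMC is only hinted at here through the fixed parameter n_clients.

    from z3 import And, Int, Or, Solver, sat   # requires the z3-solver package

    def bmc_reachable(n_clients: int, max_busy: int, k: int) -> bool:
        """Can a marking with more than max_busy busy clients be reached within k steps?"""
        s = Solver()
        # Petri-net-style marking: token counts per place, one copy per time step
        idle = [Int(f"idle_{i}") for i in range(k + 1)]
        busy = [Int(f"busy_{i}") for i in range(k + 1)]
        s.add(idle[0] == n_clients, busy[0] == 0)          # initially every client is idle
        for i in range(k):
            s.add(idle[i] >= 0, busy[i] >= 0)
            connect = And(idle[i] >= 1,                    # fire "connect": idle -> busy
                          idle[i + 1] == idle[i] - 1,
                          busy[i + 1] == busy[i] + 1)
            disconnect = And(busy[i] >= 1,                 # fire "disconnect": busy -> idle
                             busy[i + 1] == busy[i] - 1,
                             idle[i + 1] == idle[i] + 1)
            s.add(Or(connect, disconnect))
        # bad property: at some step, strictly more than max_busy clients are busy
        s.add(Or([busy[i] > max_busy for i in range(k + 1)]))
        return s.check() == sat

    # bmc_reachable(n_clients=5, max_busy=3, k=6) is True: four "connect" firings suffice.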

16:40
A note on Turing 1936

ABSTRACT. Turing's historical article of 1936 is the result of a special endeavor focused on the factuality of a general process for algorithmic computation. The Turing machine, as a universal feasibility test for computing procedures, is applied up to a close examination of what are considered to be the limits of computation itself. In this regard, Turing claims to have defined a number which is not computable, arguing that there can be no machine computing the diagonal of the enumeration of the computable sequences. This article closely examines the original 1936 argument, showing how it cannot be considered a demonstration, and that there is indeed no evidence of such a defined number that is not computable.
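
For reference, the diagonal construction under discussion, stated in modern notation (this framing is standard and is not taken from the talk): if alpha_1, alpha_2, alpha_3, ... enumerates the computable binary sequences, the sequence beta defined by

    beta(n) = 1 - alpha_n(n)

differs from every alpha_n at position n. In the standard reading of the 1936 paper, beta fails to be computable because no machine can decide which machine descriptions are circle-free, so no machine can produce the enumeration; it is the status of this step as the definition of an uncomputable number that the talk calls into question.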