FLOC 2018: FEDERATED LOGIC CONFERENCE 2018
ICLP ON TUESDAY, JULY 17TH


09:00-10:30 Session 118D: Test of Time Awards and Best DC Paper
Location: Blavatnik LT2
09:00
Test of Time Award (20 years)

ABSTRACT. TBD

09:30
Test of Time Award (10 years)

ABSTRACT. TBD

10:00
Best DC Paper

ABSTRACT. TBD

10:30-11:00 Coffee Break
11:00-12:30 Session 120C: Applications
Location: Blavatnik LT2
11:00
Phylotastic: An Experiment in Creating, Manipulating, and Evolving Phylogenetic Biology Workflows Using Logic Programming
SPEAKER: Thanh Nguyen

ABSTRACT. Evolutionary biologists have long struggled with the challenge of developing analysis workflows in a flexible manner, thus facilitating the reuse of phylogenetic knowledge. An evolutionary biology workflow can be viewed as a plan which composes web services that can retrieve, manipulate, and produce phylogenetic trees.

The Phylotastic project was launched two years ago as a collaboration between evolutionary biologists and computer scientists, with the goal of developing an open architecture to facilitate the creation of such analysis workflows. While composition of web services is a problem that has been extensively explored in the literature, including within the logic programming domain, the incarnation of the problem in Phylotastic poses a number of additional challenges. Along with the need to integrate preferences and formal ontologies in the description of the desired workflow, evolutionary biologists tend to construct workflows incrementally, successively refining the workflow by indicating desired changes (e.g., exclusion of certain services, modifications of the desired output). This leads to the need for successive iterations of incremental replanning to develop a new workflow that integrates the requested changes while minimizing the changes to the original workflow. This paper illustrates how Phylotastic has addressed the challenges of creating and refining phylogenetic analysis workflows using logic programming technology, and how such solutions have been used within the general framework of the Phylotastic project.
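
To give a flavor of the kind of encoding involved, the following is a minimal ASP sketch of web-service composition as a planning problem, loosely in the spirit of the approach described above. All predicates, services, and data types (service/3, have/2, exec/2, resolve_names, etc.) are invented for illustration and are not Phylotastic's actual encoding.

% Toy web-service composition as ASP planning (illustrative only;
% the predicates and services are hypothetical, not Phylotastic's).
#const horizon = 3.
step(1..horizon).

% service(Name, Input, Output): a service consumes one data type
% and produces another.
service(resolve_names, name_list,  taxon_list).
service(get_tree,      taxon_list, phylo_tree).
service(annotate_tree, phylo_tree, annotated_tree).

have(name_list,0).                       % data available initially
goal(annotated_tree).

% choose at most one service invocation per step
{ exec(S,T) : service(S,_,_) } 1 :- step(T).

% a service can only run if its input is already available
:- exec(S,T), service(S,In,_), not have(In,T-1).

% executing a service makes its output available; data persists
have(Out,T) :- exec(S,T), service(S,_,Out).
have(D,T)   :- have(D,T-1), step(T).

% the goal must hold at the horizon; prefer short workflows
:- goal(D), not have(D,horizon).
#minimize { 1,S,T : exec(S,T) }.
#show exec/2.

Incremental replanning as described above could then be approximated by adding constraints that forbid excluded services and by minimizing the difference with respect to the previous workflow.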

11:30
Experimenting with robotic intra-logistics domains

ABSTRACT. We introduce the asprilo [1] framework to facilitate experimental studies of approaches addressing complex dynamic applications. For this purpose, we have chosen the domain of robotic intra-logistics. This domain is not only highly relevant in the context of today's fourth industrial revolution, but it moreover combines a multitude of challenging issues within a single uniform framework, including multi-agent planning and reasoning about action, change, resources, strategies, etc. In turn, asprilo allows users to study alternative solutions with regard to effectiveness and scalability. Although asprilo relies on Answer Set Programming and Python, it is readily usable by any system complying with its fact-oriented interface format. This makes it attractive for benchmarking and teaching well beyond logic programming. More precisely, asprilo consists of a versatile benchmark generator, a solution checker, and a visualizer, as well as a collection of reference encodings featuring various ASP techniques. Importantly, the visualizer's animation capabilities are indispensable for complex scenarios like intra-logistics, in order to inspect valid as well as invalid solution candidates. It also allows for graphically editing benchmark layouts that can be used as a basis for generating benchmark suites.

[1] asprilo stands for Answer Set Programming for robotic intra-logistics
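
As a rough illustration of the kind of ASP encoding such benchmarks call for, here is a toy single-robot grid-movement sketch; it deliberately ignores asprilo's actual fact format, object model, and reference encodings, which are considerably richer.

% Toy grid-robot planning sketch (illustrative only; asprilo's real
% instance format and reference encodings differ substantially).
#const horizon = 8.
time(1..horizon).
cell(X,Y) :- X = 1..4, Y = 1..4.

robot(r1).
at(r1,1,1,0).                            % initial position at time 0
target(r1,4,3).                          % delivery target

delta(0,1). delta(0,-1). delta(1,0). delta(-1,0). delta(0,0).

% each robot performs exactly one move (possibly a wait) per step
1 { move(R,DX,DY,T) : delta(DX,DY) } 1 :- robot(R), time(T).

at(R,X+DX,Y+DY,T) :- at(R,X,Y,T-1), move(R,DX,DY,T).
:- at(R,X,Y,T), not cell(X,Y).           % robots must stay on the grid

% the target must be reached by the horizon; prefer few real moves
:- target(R,X,Y), not at(R,X,Y,horizon).
#minimize { 1,R,T : move(R,DX,DY,T), (DX,DY) != (0,0) }.
#show move/4.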

12:00
Optimal Scheduling for Exposed Datapath Architectures with Buffered Processing Units by ASP
SPEAKER: Marc Dahlem

ABSTRACT. Conventional processor architectures are restricted in exploiting instruction level parallelism (ILP) due to the relatively low number of programmer-visible registers. Therefore, more recent processor architectures expose their datapaths so that the compiler (1) can schedule parallel instructions to different processing units and (2) can make effective use of local storage of the processing units. Among these architectures, the Synchronous Control Asynchronous Dataflow (SCAD) architecture is a new exposed datapath architecture whose processing units are equipped with first-in first-out (FIFO) buffers at their input and output ports.

In contrast to register-based machines, optimal code generation for SCAD is still a matter of research. In particular, SAT and SMT solvers have been used to generate optimal resource-constrained and optimal time-constrained schedules for SCAD, respectively. As Answer Set Programming (ASP) offers better flexibility in handling such scheduling problems, we focus in this paper on using an answer set solver for both resource- and time-constrained optimal SCAD code generation. As a major benefit of using ASP, we are able to generate all optimal schedules for a given program, which allows one to study their properties. Furthermore, the experimental results of this paper demonstrate that the answer set solver can compete with SAT solvers and outperforms SMT solvers.
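
The following is a minimal sketch of resource-constrained scheduling in ASP, just to illustrate the style of encoding; the instructions, dependencies, and unit count are invented, and the actual SCAD encoding with FIFO-buffered processing units is considerably more involved.

% Toy resource-constrained instruction scheduling (illustrative only;
% the real SCAD encoding with FIFO buffers is more involved).
#const maxtime = 6.
#const units   = 2.                      % number of processing units
time(1..maxtime).

instr(i1;i2;i3;i4).
dep(i1,i3). dep(i2,i3). dep(i3,i4).      % data dependencies

% every instruction is scheduled at exactly one time step
1 { sched(I,T) : time(T) } 1 :- instr(I).

% respect data dependencies
:- dep(I,J), sched(I,TI), sched(J,TJ), TI >= TJ.

% at most `units` instructions may be issued per time step
:- time(T), #count { I : sched(I,T) } > units.

% minimize the makespan
makespan(M) :- M = #max { T : sched(_,T) }.
#minimize { M : makespan(M) }.
#show sched/2.

With clingo, for instance, all optimal schedules can then be enumerated via --opt-mode=optN.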

12:30-14:00 Lunch Break
14:00-15:30 Session 122D: Probabilistic and Constraint LP
Location: Blavatnik LT2
14:00
Shape Neutral Analysis of Graph-based Data-structures
SPEAKER: Roland Yap

ABSTRACT. Malformed data-structures can lead to runtime errors such as arbitrary memory access or corruption. Despite this, reasoning over data-structure properties for low-level heap-manipulating programs remains challenging. In this paper we present a constraint-based program analysis that checks data-structure integrity, w.r.t. given target data-structure properties, as the heap is manipulated by the program. Our approach is to automatically generate a solver for the properties using the type definitions from the target program. The generated solver is implemented in Constraint Handling Rules (CHR), extending built-in heap, integer, and equality solvers. A key property of our program analysis is that the target data-structure properties are shape neutral, i.e. the analysis does not check for properties relating to a given data-structure graph shape, such as doubly-linked lists versus trees. Nevertheless, the analysis can detect errors in a wide range of data-structure manipulating programs, including those that use lists, trees, DAGs, graphs, etc. We present an implementation based on a specialized shape-neutral constraint solver implemented in the Satisfiability Modulo Constraint Handling Rules (SMCHR) system. Experimental results show that our approach works well for real-world C programs.

14:30
A Probabilistic Extension of Action Language BC+
SPEAKER: Yi Wang

ABSTRACT. We present a probabilistic extension of the action language BC+. Just as BC+ is defined as a high-level notation of answer set programs for describing transition systems, the proposed language, which we call pBC+, is defined as a high-level notation of LPMLN programs, a probabilistic extension of answer set programs. We show how probabilistic reasoning about transition systems, such as prediction, postdiction, and planning problems, as well as probabilistic diagnosis for dynamic domains, can be modeled in pBC+ and computed using an implementation of LPMLN.
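
As a rough idea of the underlying representation, here is a plain-ASP sketch of a transition system over a single invented fluent; pBC+ itself compiles into LPMLN, which additionally attaches weights to such rules, and the names below are illustrative only.

% Plain-ASP transition-system sketch (illustrative only; pBC+ compiles
% into LPMLN, which additionally attaches probabilistic weights).
#const horizon = 2.
step(0..horizon).
astep(1..horizon).

fluent(on).
action(toggle).

% exactly one truth value per fluent per step
1 { holds(F,T) ; -holds(F,T) } 1 :- fluent(F), step(T).

% actions may or may not occur at each action step
{ occurs(A,T) } :- action(A), astep(T).

% direct effects of toggle
holds(on,T)  :- occurs(toggle,T), -holds(on,T-1).
-holds(on,T) :- occurs(toggle,T),  holds(on,T-1).

% commonsense inertia: fluents keep their values by default
holds(F,T)  :- holds(F,T-1),  astep(T), not -holds(F,T).
-holds(F,T) :- -holds(F,T-1), astep(T), not holds(F,T).

% a simple prediction query: initially off, toggle at step 1
-holds(on,0).
occurs(toggle,1).
#show holds/2.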

15:00
Constraint-Based Inference in Probabilistic Logic Programs
SPEAKER: Arun Nampally

ABSTRACT. Probabilistic Logic Programs (PLPs) generalize traditional logic programs and allow the encoding of models combining logical structure and uncertainty. In PLP, inference is performed by summarizing the possible worlds which entail the query in a suitable data-structure, and using it to compute the answer probability. Systems such as ProbLog, PITA, etc., use propositional data-structures like explanation graphs, BDDs, SDDs, etc., to represent the possible worlds. While this approach saves inference time due to substructure sharing, there are a number of problems where a more compact data-structure is possible. We propose a data-structure called Ordered Symbolic Derivation Diagram (OSDD) which captures the possible worlds by means of constraint formulas. We describe a program transformation technique to construct OSDDs via query evaluation, and give procedures to perform exact and approximate inference over OSDDs. Our approach has two key properties. Firstly, the exact inference procedure is a generalization of traditional inference, and results in speedup over the latter in certain settings. Secondly, the approximate technique is a generalization of likelihood weighting in Bayesian Networks, and allows us to perform sampling-based inference with lower rejection rate and variance. We evaluate the effectiveness of the proposed techniques through experiments on several problems.

15:30-16:00 Coffee Break
16:00-16:30 Session 123D: Doctoral Consortium teaser talks

The session hosts 3-minute summaries by each Doctoral Consortium Ph.D. student; the full presentations will be given during the ICLP-DC on July 18th.

Location: Blavatnik LT2
16:30-18:00 Session 124: Technical Communications II
Location: Blavatnik LT2
16:30
Towards Static Performance Guarantees for Programs with Run-time Checks

ABSTRACT. Instrumenting programs for performing run-time checking of properties, such as regular shapes, is a common and useful technique that helps programmers detect incorrect program behaviors. This is especially true in dynamic languages such as Prolog. However, such run-time checks inevitably introduce run-time overhead (in execution time, memory, energy, etc.). Several approaches have been proposed for reducing such overhead, such as eliminating the checks that can statically be proved to always succeed, and/or optimizing the way in which the (remaining) checks are performed. However, there are cases in which it is not possible to remove all checks statically (e.g., open libraries which must check their interfaces, complex properties, unknown code, etc.) and in which, even after optimizations, these remaining checks may still introduce an unacceptable level of overhead. It is thus important for programmers to be able to determine the additional cost due to the run-time checks and compare it to some notion of admissible cost. The common practice used for estimating run-time checking overhead is profiling, which is not exhaustive by nature. Instead, we propose a method that uses static analysis to estimate such overhead, with the advantage that the estimations are functions parameterized by input data sizes. Unlike profiling, this approach can provide guarantees for all possible execution traces, and allows assessing how the overhead grows as the size of the input grows. Our method also extends an existing assertion verification framework to express “admissible” overheads, and statically and automatically checks whether the instrumented program conforms with such specifications. Finally, we present an experimental evaluation of our approach that suggests that our method is feasible and promising.

16:45
Towards Incremental and Modular Context-sensitive Analysis

ABSTRACT. Context-sensitive global analysis of large code bases can be expensive, which can be especially problematic in interactive uses of analyzers. However, in practice each development iteration implies small modifications which are often isolated within a few modules, and analysis cost can be reduced by reusing the results of previous analyses.

To date this has been achieved, on the one hand, through modular analysis, which can reduce memory consumption and often localize the computation during reanalysis mainly to the modules affected by the changes. On the other hand, context-sensitive incremental fixpoint algorithms have been proposed that achieve cost reductions at finer levels of granularity, such as changes in program lines. However, these fine-grained techniques are not directly applicable to modular programs.

This paper describes, implements, and evaluates a context-sensitive fixpoint analysis algorithm for (Constraint) Logic Programs aimed at achieving both inter-modular (coarse-grain) and intra-modular (fine-grain) incrementality, solving the problems related to propagating fine-grain change information and its effects across module boundaries, for additions and deletions in multiple modules.

The implementation and evaluation of our algorithm show encouraging results: the expected advantages of fine-grain incremental analysis carry over to the modular analysis context. Furthermore, the fine-grained propagation of analysis information in our algorithm improves performance with respect to traditional modular analysis even when analyzing from scratch.

17:00
Learning Commonsense Knowledge through Interactive Dialogue
SPEAKER: Benjamin Wu

ABSTRACT. One of the most difficult problems in Artificial Intelligence is acquiring commonsense knowledge: creating a collection of facts and information that an ordinary person should know. In this work, we present a system that, from limited background knowledge, is able to learn to form simple concepts through interactive dialogue with a user. We approach the problem using a syntactic parser, along with a mechanism to check for synonymy, to translate sentences into logical formulas represented in the Event Calculus using Answer Set Programming (ASP). Reasoning and learning tasks are then automatically generated for the translated text, with learning being initiated through question answering. The system is capable of learning with no contextual knowledge prior to the dialogue. The system has been evaluated on stories inspired by Facebook's bAbI question-answering tasks, and through appropriate question answering it is able to respond accurately to these dialogues.
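
For concreteness, the following is a minimal discrete Event Calculus fragment in ASP, roughly the kind of representation that translated sentences could map into; the events and fluents (pickup, holding, etc.) are invented here and are not the system's actual output.

% Minimal discrete Event Calculus fragment in ASP (illustrative only;
% the domain below is invented, not the system's actual translation).
time(0..3).

% inertia: a fluent keeps holding unless it is clipped
holdsAt(F,T+1) :- holdsAt(F,T), not clipped(F,T), time(T), time(T+1).
clipped(F,T)   :- happens(E,T), terminates(E,F).
holdsAt(F,T+1) :- happens(E,T), initiates(E,F), time(T+1).

% a tiny domain: picking up and dropping a ball
initiates(pickup(ball), holding(ball)).
terminates(drop(ball),  holding(ball)).

happens(pickup(ball), 0).
happens(drop(ball),   2).

#show holdsAt/2.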

17:15
A New Proof-theoretical Linear Semantics for CHR

ABSTRACT. Constraint Handling Rules (CHR) is a committed-choice language consisting of multi-headed guarded rules that rewrite constraints into simpler ones until they are solved. We propose a new proof-theoretical declarative linear semantics for Constraint Handling Rules. We demonstrate the soundness and completeness of our semantics w.r.t. the operational ω_t semantics. We also propose a translation from this semantics into linear logic.

17:30
CHRvis: Syntax and Semantics
SPEAKER: Nada Sharaf

ABSTRACT. This paper presents CHRvis, an animation extension to Constraint Handling Rules (CHR). Visualizations have always helped programmers understand data and debug programs. A picture is worth a thousand words: it can help identify where a problem is or show how something works. It can even illustrate a relation that was not clear otherwise.

17:45
Declarative Algorithms in Datalog with Aggregates: user-friendly formal semantics conducive to performance and scalability
SPEAKER: Carlo Zaniolo

ABSTRACT. Pre-mappable (PreM) extrema constraints in recursive Datalog programs enable concise declarative formulations of classical algorithms (Zaniolo et al. 2017). The programs expressing these algorithms have a formal non-monotonic semantics with efficient and scalable support on multiple platforms (Shkapsky et al. 2016; Yang et al. 2017). However, proving PreM for different programs can be challenging for programmers; thus, in this paper, we introduce simple templates that allow users to verify with ease that their programs have the PreM property, along with the rigorous semantics and the efficient and scalable implementation associated with it. We thus obtain simple declarative formulations of classical algorithms in two equivalent versions: one with perfect model semantics and the other with stable model semantics.
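
As a standard example of the problem class addressed, here is a shortest-distance program written naively in ASP: all reachable path costs are generated and a stratified #min is applied afterwards. PreM is precisely what justifies pushing the min into the recursion itself, which this naive sketch does not do; the graph and the bound constant are invented for illustration.

% Naive shortest-distance computation in ASP (illustrative only):
% generate all path costs, then take a stratified #min afterwards.
% PreM justifies pushing the min into the recursion itself.
#const bound = 20.                       % cut-off keeping reach finite

edge(a,b,3). edge(b,c,2). edge(a,c,7).
source(a).

reach(X,0)   :- source(X).
reach(Y,D+C) :- reach(X,D), edge(X,Y,C), D + C <= bound.

dist(Y,M) :- reach(Y,_), M = #min { D : reach(Y,D) }.
#show dist/2.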