LUXLOGAI 2018: LUXEMBOURG LOGIC FOR AI SUMMIT
PROGRAM FOR FRIDAY, SEPTEMBER 21ST, 2018

09:00-10:30 Session 38A: Short presentations 1 (Deduktionstreffen)
Location: MSA 3.120
09:00
Mathematical Interface Theories for System Integration

ABSTRACT. Formal systems such as interactive theorem provers and computer algebra systems come with their own libraries or ontologies of mathematical concepts they operate on. These libraries are largely unconnected and (when considered collectively) contain huge duplications of content.

While the TPTP language already provides a standard for formal logics, making the test problems in its library available to various formal systems, a similar language and associated ontology for higher mathematical knowledge is needed to effectively transfer more advanced mathematical knowledge.

We present the Math-in-the-Middle library, a semi-formal archive of formal mathematics in MMT, intended to serve as a collection of interface theories for mathematical concepts.

Its abstract representation in OMDoc/MMT makes it easily parsable and hence available to any formal system, allowing it to serve as a common ontology to exchange knowledge across systems and logical foundations. Its capability to handle content on various levels of formality makes it conveniently extensible even with highly complicated definitions and theorems not easily expressible in terms of simple logical expressions.

The Math-in-the-Middle library is currently seeded with ~200 theories from algebra, arithmetic, calculus, topology, geometry, graph theory, probability theory, functional analysis and measure theory, as well as various theory morphisms between them.

It is already used to facilitate knowledge exchange between computer algebra systems, databases of mathematical objects and modelling software in the OpenDreamKit and MaMoRed projects.

09:15
Proof Recovery for the IMPS system

ABSTRACT. In previous work, we translated the mathematical library of the IMPS system (in danger of being lost because the system is no longer actively developed) to OMDoc and partially verified the result within the MMT system against an implementation of the underlying logic of IMPS.

However, so far we have only included proof scripts as informal (not machine-checkable) information in our translation. Hence, our goal now is to extend our implementation of the translation to also include as much formal information as possible about the IMPS proof commands, user-defined macetes and their combination language, with the hope of being able to also formally verify (parts of) the translated proofs via MMT.

09:30
Automatic generation of Invariants for Concurrent Programs

ABSTRACT. see paper

09:45
On Inductive Verification and Synthesis

ABSTRACT. We study possibilities of using symbol elimination in program verification and synthesis. We consider programs in which some instructions or subprograms are not fully specified; the task is to derive conditions on the parameters (or subprograms) which imply inductivity of certain properties. We then propose a method for property-directed invariant generation and analyze its properties. We indicate how to use similar ideas for proving properties of recursively defined functions.

10:00
CoRg: Commonsense Reasoning Using a Theorem Prover and Machine Learning

ABSTRACT. Many problems humans face in their everyday life cannot be solved by relying solely on explicitly given facts; they also require so-called commonsense knowledge. Such knowledge is often not formalized in knowledge bases, but can be derived indirectly, e.g., from text passages; however, this results in incomplete and inconsistent knowledge. In recent research, many scientists address commonsense reasoning tasks primarily using information retrieval, machine learning or neural network techniques. While this works rather well on several benchmarks, the procedure remains a black box, unable to explain how the correct answer is derived. In our research we address this shortcoming and develop an explainable and responsible artificial intelligence by integrating a theorem prover into the reasoning process. This theorem prover provides a formal basis for the derivation process, makes it possible to properly explain it, and ultimately allows an attempt to explain human reasoning.

For this purpose, a reasoning chain was developed at the university in Koblenz which aims to solve commonsense reasoning problems, e.g., from the COPA (Choice of Plausible Alternatives) benchmark set, each consisting of a problem and two possible answers. The chain converts the input into a logical representation using Knews and adds information from WordNet, ConceptNet and other ontologies. This information is passed to the theorem prover Hyper, which constructs a logical model from it. In the end, this model and the gathered information are fed into a machine learning algorithm, which determines the probability of each answer candidate being the correct answer. Primarily we will use recurrent neural networks, as they have already proved their superiority at such tasks. More precisely, we want to employ predictive neural networks, as they enable us to re-engineer the derivation process.

Our research goal within the CoRg project (Cognitive Reasoning), conducted together with the university in Koblenz, is to evaluate whether a theorem prover can improve these reasoning tasks within the developed chain. Furthermore, we will evaluate different ontologies, theorem provers and machine learning algorithms. To address the incomplete and inconsistent knowledge, we will also consider the combination of defeasible and normative reasoning. In the end, we will analyze which inputs were most influential in deriving the results, e.g., with a principal component analysis on a predictive neural network. This will give us significant hints on how human reasoning might take place.

10:15
Deep Reasoning - Hardware Accelerated Artificial Intelligence

ABSTRACT. Most modern first-order theorem provers rely on methods like Superposition [1] and Resolution [5], where implications of a clause set are computed to eventually generate the empty clause. The chronological order of computing inferences is a key aspect of proof search and is crucial for the system's performance. It is typically controlled by a clause selection heuristic. For the task of clause selection, we present a novel heuristic based on artificial neural networks. Deep Learning has demonstrated significant success in several domains [3, 4]. In our approach, clauses are first embedded via a vocabulary that is learned during training and shares features among similar symbols. Compression of clauses to a fixed size is implemented by combining the concepts of dilated convolutions [8] and a dense block [2] to ensure a broad context. The features of a clause to be evaluated are combined with features of the negated conjecture and of the original axiom set, and are processed by a small classifier network. The training is optimized by extensive negative mining and a novel loss function. The heuristic is integrated and tested in the state-of-the-art theorem prover E [6, 7]. In experiments, the network correctly classifies over 90% of all clauses in the test data set with respect to whether they are useful for the proof. However, despite hardware acceleration, the Deep Learning based evaluation is currently several orders of magnitude slower than conventional heuristics, and hence only one from a sample of 25 new proof problems can be solved with the developed technique, highlighting the challenge for future work.

References

[1] Leo Bachmair and Harald Ganzinger. Rewrite-Based Equational Theorem Proving with Selection and Simplification. Journal of Logic and Computation, 4(3):217–247, 1994.
[2] Gao Huang, Zhuang Liu, Kilian Q. Weinberger, and Laurens van der Maaten. Densely Connected Convolutional Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2261–2269, 2017.
[3] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep Learning. Nature, 521(7553):436–444, 2015.
[4] Sarah Loos, Geoffrey Irving, Christian Szegedy, and Cezary Kaliszyk. Deep Network Guided Proof Search. In Thomas Eiter and David Sands, editors, Proc. 21st LPAR, Maun, Botswana, volume 46 of EPiC Series in Computing, pages 85–105, 2017.
[5] J. A. Robinson. A Machine-Oriented Logic Based on the Resolution Principle. Journal of the ACM, 12(1):23–41, 1965.
[6] Stephan Schulz. E – A Brainiac Theorem Prover. Journal of AI Communications, 15(2/3):111–126, 2002.
[7] Stephan Schulz. System Description: E 1.8. In Ken McMillan, Aart Middeldorp, and Andrei Voronkov, editors, Proc. of the 19th LPAR, Stellenbosch, volume 8312 of LNCS, pages 735–743. Springer, 2013.
[8] Fisher Yu and Vladlen Koltun. Multi-Scale Context Aggregation by Dilated Convolutions. Computing Research Repository (CoRR), abs/1511.07122, 2015.
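
A rough sketch may help make the described architecture concrete: token embedding, dilated 1-D convolutions whose outputs are densely concatenated, and a small classifier over clause and conjecture features. All layer sizes, the vocabulary size and the names below are illustrative assumptions, not the authors' actual network, which the abstract only summarizes.

```python
import torch
import torch.nn as nn

# Minimal sketch (not the authors' network): embed a tokenized clause,
# compress it with dilated 1-D convolutions whose outputs are densely
# concatenated, then score it together with conjecture features.
class ClauseEncoder(nn.Module):
    def __init__(self, vocab_size=512, emb_dim=32, channels=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # increasing dilation widens the receptive field ("broad context")
        self.convs = nn.ModuleList([
            nn.Conv1d(emb_dim + i * channels, channels,
                      kernel_size=3, dilation=2 ** i, padding=2 ** i)
            for i in range(3)
        ])

    def forward(self, tokens):                   # tokens: (batch, length)
        x = self.embed(tokens).transpose(1, 2)   # (batch, emb_dim, length)
        feats = x
        for conv in self.convs:                  # dense-block style concatenation
            feats = torch.cat([feats, torch.relu(conv(feats))], dim=1)
        return feats.mean(dim=2)                 # fixed-size clause vector

class ClauseSelector(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = ClauseEncoder()
        dim = 32 + 3 * 32                        # embedding + three conv outputs
        self.classifier = nn.Sequential(
            nn.Linear(2 * dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, clause_tokens, conjecture_tokens):
        c = self.encoder(clause_tokens)
        g = self.encoder(conjecture_tokens)
        return torch.sigmoid(self.classifier(torch.cat([c, g], dim=1)))

# usage: score a batch of (clause, negated-conjecture) token sequences
model = ClauseSelector()
scores = model(torch.randint(0, 512, (4, 20)), torch.randint(0, 512, (4, 20)))
```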

09:00-10:00 Session 38B: Invited Talk (RuleML+RR)
Location: MSA 3.520
09:00
Modal Rules: Extending Defeasible Logic with Modal Operators
10:00-10:30 Session 39: RuleML+RR and Doctoral Consortium: Joint Poster Teasers (RuleML+RR)
Location: MSA 3.520
10:00
Improving Probabilistic Rules Compilation using PRM (poster teaser)

ABSTRACT. Widely adopted for more than 20 years in industry, business rules offer non-IT users the opportunity to define decision-making policies in a simple and intuitive way. To facilitate their use, systems known as Business Rule Management Systems (BRMS) have been developed, separating the business logic from the application logic. While suitable for processing structured and complete data, BRMS face difficulties when data are incomplete or uncertain. This study proposes a new approach for the integration of probabilistic reasoning in IBM Operational Decision Manager (ODM), IBM's BRMS, in particular through the introduction of a notion of risk, making the compilation phase more complex but increasing the expressiveness of business rules.

10:04
Computational Hermeneutics: Using Automated Theorem Proving for the Logical Analysis of Natural-Language Arguments (poster teaser)

ABSTRACT. While there have been major advances in automated theorem proving (ATP) during the last years, its main field of application has mostly remained confined to mathematics and hardware/software verification. I argue that the use of ATP in philosophy can also be very fruitful, not only because of the obvious quantitative advantages of automated reasoning tools (e.g. reducing by several orders of magnitude the time needed to test an argument's validity), but also because it enables a novel approach to the logical analysis of arguments. This approach, which I have called computational hermeneutics, draws its inspiration from work in the philosophy of language such as Donald Davidson's theory of radical interpretation and contemporary so-called inferentialist theories of meaning, which do justice to the inherent circularity of linguistic understanding: the whole is understood (compositionally) on the basis of its parts, while each part is understood only in the (inferential) context of the whole. Computational hermeneutics is thus a holistic, iterative, trial-and-error enterprise, in which we evaluate the adequacy of some candidate formalization of a sentence by computing the logical validity of the whole argument. We start with formalizations of some simple statements (taking them as tentative) and use them as stepping stones on the way to the formalization of the argument's other sentences, repeating the procedure until arriving at a state of reflective equilibrium: a state where our beliefs have the highest degree of coherence and acceptability.

10:08
Towards knowledge-based integration and visualization of geospatial data using Semantic Web technologies (poster teaser)

ABSTRACT. Geospatial data are pervasive and indispensable for various real-world applications such as urban planning, traffic analysis and emergency response. Data integration and knowledge transfer are two prominent issues for augmenting the use of geospatial data and knowledge. In order to address these issues, Semantic Web technologies have been widely adopted in the geospatial domain, and there are currently still several initiatives investigating the benefits brought by the adoption of Semantic Web technologies. In this context, this paper showcases and discusses knowledge-based geospatial data integration and visualization leveraging ontologies and rules. Specifically, we use the Linked Data paradigm for modelling geospatial data, and then create a knowledge base for the visualization of such data in terms of scaling, data portrayal and geometry source. This approach facilitates the transfer, interpretation and reuse of visualization knowledge for geospatial data. We have also identified some challenges of modelling geospatial knowledge and of making such knowledge available to other domains, which we leave as future work.

10:12
A new approach to conceive ASP solvers (poster teaser)
SPEAKER: Tarek Khaled

ABSTRACT. Answer set programming (ASP) is a non-monotonic declarative programming paradigm that is widely used for the formulation of problems in artificial intelligence. The ASP paradigm also provides a general framework for the resolution of decision and optimization problems. The idea behind ASP is to represent a problem as a logic program and solve that problem by computing stable models. In our work, we propose a new method for searching for stable models of logic programs. This method is based on a relatively new semantics that has not been exploited yet. This semantics captures and extends that of stable models. The method performs a DPLL enumerative process only on a restricted set of literals called the strong back-door (STB). The method has the advantage of using a Horn clause representation of the same size as the input logic program and of having constant space complexity. It avoids the overhead induced by loop management, from which most ASP solvers based on the Clark completion suffer.
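
For orientation, the classical baseline that such a method refines can be sketched as follows: guess an interpretation (a real solver would branch only on a back-door set and propagate the rest) and check stability via the Gelfond-Lifschitz reduct. This is an illustrative sketch of standard stable-model semantics, not the authors' algorithm or their extended semantics; the toy program is made up.

```python
from itertools import product

# A rule is (head, positive_body, negative_body), e.g. "a :- b, not c."
program = [
    ("a", ["b"], ["c"]),
    ("b", [], []),
]
atoms = {"a", "b", "c"}

def least_model(horn_rules):
    """Least model of a negation-free program, computed by forward chaining."""
    model = set()
    changed = True
    while changed:
        changed = False
        for head, pos, _ in horn_rules:
            if set(pos) <= model and head not in model:
                model.add(head)
                changed = True
    return model

def is_stable(candidate):
    # Gelfond-Lifschitz reduct: drop rules whose negative body is contradicted,
    # delete the negative bodies of the remaining rules.
    reduct = [(h, pos, []) for h, pos, neg in program
              if not (set(neg) & candidate)]
    return least_model(reduct) == candidate

# Naive enumeration over all atom subsets (a real solver enumerates only over
# a strong back-door and derives the remaining truth values).
for bits in product([False, True], repeat=len(atoms)):
    cand = {a for a, b in zip(sorted(atoms), bits) if b}
    if is_stable(cand):
        print("stable model:", cand)
```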

10:16
Concepts as Modalities in Description Logics (poster teaser)

ABSTRACT. Language phrases like "stone lion", "fake gun" and "glass gummy bear" inspire us to invent a new type of concept composition for description logics (DL). A set-theoretic approach by Klarman is considered but not adopted, since his logic does not allow conflicting possibilities, as demonstrated by the "glass gummy bear". Constraints are presented that should hold for the extended logic. A new approach extends the FOL embedding of DL and introduces a new dimension as a Kripke structure. In the end, further examples are discussed (the penguin does not fly and the veggie burger does not contain any meat).

10:20
Inducing Schema.org markup from Natural Language Context (poster teaser)

ABSTRACT. Schema.org creates, supports and maintains schemas for structured data on web pages. For a non-technical author, it is difficult to publish content in a structured format. The current work presents an automated way of inducing Schema.org markup from the natural-language content of web pages. This paper proposes an approach to inducing Schema.org markup from content by applying knowledge-base creation techniques. As a dataset, Web Data Commons is used, and the scope of the experimental part is limited to RDFa. In the initial stage, the approach is tested using the knowledge-graph building techniques Knowledge Vault and KnowMore.

10:24
An Ontology for Transportation System (poster teaser)

ABSTRACT. In this work, we present a domain ontology for transportation systems. We have developed an ontology for a semantics-aware transportation system from the perspective of a traveler, capable of answering general competency questions such as the nearest bus stop to a particular place, the nearest parking slots available, etc. We have studied the transportation systems of some of the big cities of the world and have tried to come up with a vocabulary that can be applied to any city with few modifications. This vocabulary is further aligned with an upper-level ontology to have a common starting point.

10:30-11:00 Coffee Break and Poster Session
11:00-12:00 Session 40B: Knowledgebases and Reasoning (RuleML+RR)
Location: MSA 3.520
11:00
A Case-Based Inquiry into the Decision Model and Notation (DMN) and the Knowledge Base (KB) Paradigm

ABSTRACT. Modelling decisions in organisations is a challenging task. Deciding which modelling language to use for the problem at hand is a fundamental question. We investigate the Decision Model and Notation (DMN) standard and the IDP knowledge base system (KBS) in their effectiveness to model and solve specific real-life case problems. This paper presents two cases that are solved with DMN and IDP: (1) Income taxation for foreign artists temporarily working in Belgium; and (2) Registration duties when purchasing real estate in Belgium. The solutions in both DMN and IDP are examined, and restrictions and opportunities of both methods are elaborated upon for each type of problem presented in the case studies. Additionally, compatibilities and synergies between DMN and IDP are identified.

11:30
CHR.js: A CHR Implementation in JavaScript

ABSTRACT. Constraint Handling Rules (CHR) is usually compiled to logic programming languages. While there are implementations for imperative programming languages such as C and Java, its most popular host language remains Prolog. In this paper we present CHR.js, a CHR system implemented in JavaScript that is suitable for both server-side and interactive client-side web applications. CHR.js provides (i) an interpreter, which is based on the asynchronous execution model of JavaScript, and (ii) an ahead-of-time compiler, resulting in synchronous constraint solvers with better performance. Because of the great popularity of JavaScript, CHR.js is the first CHR system that runs on almost all devices, even mobile ones, without the need for an additional runtime environment. As an example application we present the CHR.js Playground, an offline-capable web interface which allows the interactive exploration of CHR in every modern browser.

12:00-12:30 Session 41A: Short presentations 2 (Deduktionstreffen)
Location: MSA 3.120
12:00
Herbrand's Revenge - SAT Solving for First-Order Theorem Proving

ABSTRACT. We describe a light-weight integration of the propositional SAT solver PicoSAT and the saturation-based superposition prover E. The proof search is driven by the saturation prover. Periodically, the saturation is interrupted, and all first-order clauses are grounded. The resulting ground problem is converted to a propositional format and handed to the SAT solver. If the SAT solver reports unsatisfiability, the proof is extracted and reported on the first-order level. First experiments demonstrate the viability of the approach and suggest future extensions. They also yield interesting information about the structure of the search space.
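
The grounding step described above can be illustrated with a minimal sketch: every distinct ground atom is mapped to a DIMACS variable and each ground clause becomes one CNF line, which a SAT solver such as PicoSAT can then refute. The example clauses and helper names are assumptions for illustration; the actual E/PicoSAT integration is more involved.

```python
# Minimal sketch of the propositional encoding step: each distinct ground
# atom becomes a DIMACS variable, each ground clause becomes a DIMACS line.
# The ground clauses below are made up; the real system obtains them by
# instantiating the prover's first-order clauses.

ground_clauses = [
    ["p(a)", "-q(a,b)"],          # p(a) | ~q(a,b)
    ["q(a,b)"],
    ["-p(a)"],
]

atom_ids = {}

def var_of(atom):
    """Map a ground atom to a positive DIMACS variable index."""
    return atom_ids.setdefault(atom, len(atom_ids) + 1)

def to_dimacs(clauses):
    lines = []
    for clause in clauses:
        lits = []
        for lit in clause:
            neg = lit.startswith("-")
            lits.append(("-" if neg else "") + str(var_of(lit.lstrip("-"))))
        lines.append(" ".join(lits) + " 0")
    header = "p cnf {} {}".format(len(atom_ids), len(clauses))
    return "\n".join([header] + lines)

print(to_dimacs(ground_clauses))
```

By Herbrand's theorem, unsatisfiability of a set of ground instances implies unsatisfiability of the original first-order clause set, which is why an UNSAT answer from the SAT solver can be lifted back to a first-order refutation.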

12:15
A Roadmap to Gradually Compare and Benchmark Description Logic Calculi

ABSTRACT. The most expressive layer of the Semantic Web is based on Description Logic (DL) [1]. This fact naturally brought about a larger interest in this family of formalisms and their languages. The Semantic Web's choice of DL was surely based on two main strengths of these formalisms: their rich expressiveness and their inherent reasoning possibilities. Concerning this last aspect, DL reasoners were developed with cutting-edge performance, implementing plenty of specific optimization techniques over tableaux-based methods, which have dominated the field for years. On the other hand, promising methods may have been neglected in such a scenario, in which the tough competition is often focused on gains through optimizations. Therefore, perhaps there is still room for "basic research" on DL reasoning, in the sense that other efficient calculi need to be adapted to DL, tuned and tested. The purpose of this work is to stimulate research on trying out DL calculi (other than tableaux) by making a careful, detailed comparison between tableaux and other inference methods in a systematic way: first starting with simpler languages (like ALC) without any optimizations; then gradually including optimizations and comparing them; and continuing these iterative steps of enhancing language expressivity, including optimizations, and testing, until reaching the latest advances in optimizations and the most expressive DL fragments such as SROIQ. The comparison can also be carried out in terms of two additional aspects: memory usage and asymptotic algorithm analysis, with worst and average cases, etc. The rationale is to identify whether there are fragments which are more suitable for certain inference methods, as well as which aspects or constructs (e.g., the costliest combinations, which usually involve inverses, nominals, equalities, etc.) are sensitive to which calculus. Concretely, the idea is to start out with our DL connection calculi [2,3] and reasoner, RACCOON (ReAsoner based on the Connection Calculus Over ONtologies) [4], which for now handles ALC, the basic DL fragment, and has displayed surprisingly effective performance in comparison with other well-established reasoners such as HermiT, FaCT++ and Konclude.

12:00-12:30 Session 41B: Benchmarking (RuleML+RR)
Location: MSA 3.520
12:00
A First Order Logic Benchmark for Defeasible Reasoning Tool Profiling

ABSTRACT. In this paper we are interested in the task of a data engineer choosing which tool to use to perform defeasible reasoning with a first-order logic knowledge base. To this end we propose the first benchmark in the literature that allows one to classify first-order defeasible reasoning tools based on their semantics, expressiveness and performance.

12:30-14:00 Lunch Break
14:00-15:30 Session 42A: Short presentations 3 (Deduktionstreffen)
Location: MSA 3.120
14:00
IsaSAT, a verified SAT Solver using Isabelle/HOL

ABSTRACT. We have formalized conflict-driven clause learning (CDCL) in Isabelle/HOL. We later refined the CDCL formalization to include a crucial optimisation: two watched literals with blocking literals. We formalized the data structure and the invariants. Through a chain of refinements carried out using the Isabelle Refinement Framework, we target Imperative HOL and extract imperative Standard ML code. Although our solver is not competitive with the state of the art, it offers acceptable performance for some applications.
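
As background for the optimisation mentioned above, here is a minimal, unverified Python sketch of two-watched-literal unit propagation: each clause keeps its two watched literals in its first two positions, and only clauses watching a newly falsified literal are visited. This is the textbook scheme, not the Isabelle/Imperative HOL formalization; the "clause already satisfied" shortcut only hints at the role of blocking literals, and all names are assumptions.

```python
from collections import defaultdict

# Literals are non-zero ints (DIMACS style); every clause has >= 2 literals and
# keeps its two watched literals in positions 0 and 1. `watches[lit]` lists the
# indices of the clauses currently watching `lit`.

def lit_value(assignment, lit):
    v = assignment.get(abs(lit))
    return None if v is None else v == (lit > 0)

def propagate(clauses, watches, assignment, queue):
    """Make the literals in `queue` true and propagate. Returns True on conflict."""
    while queue:
        lit = queue.pop()
        val = lit_value(assignment, lit)
        if val is False:
            return True                         # complementary literals were forced
        if val is True:
            continue
        assignment[abs(lit)] = lit > 0
        false_lit = -lit
        old = watches[false_lit]
        watches[false_lit] = []
        for idx, ci in enumerate(old):          # only clauses watching -lit are visited
            c = clauses[ci]
            if c[0] == false_lit:               # keep the falsified watch in slot 1
                c[0], c[1] = c[1], c[0]
            if lit_value(assignment, c[0]) is True:
                watches[false_lit].append(ci)   # clause already satisfied; cheap skip
                continue
            for k in range(2, len(c)):          # search for a replacement watch
                if lit_value(assignment, c[k]) is not False:
                    c[1], c[k] = c[k], c[1]
                    watches[c[1]].append(ci)
                    break
            else:                               # no replacement: clause is unit or false
                watches[false_lit].append(ci)
                if lit_value(assignment, c[0]) is False:
                    watches[false_lit].extend(old[idx + 1:])
                    return True                 # conflict (a real solver now backtracks)
                queue.append(c[0])              # unit clause: enqueue the forced literal
    return False

# usage: (x1 | x2 | x3) & (~x1 | x2) & (~x2 | x3); deciding x3 = False is conflicting
clauses = [[1, 2, 3], [-1, 2], [-2, 3]]
watches = defaultdict(list)
for i, c in enumerate(clauses):
    watches[c[0]].append(i)
    watches[c[1]].append(i)
print(propagate(clauses, watches, {}, [-3]))    # True: conflict detected
```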

14:15
Hilbert Meets Isabelle: Formalisation of the DPRM Theorem in Isabelle

ABSTRACT. In 1900, at the International Congress of Mathematicians in Paris and in a publication that shortly followed, David Hilbert presented the twenty-three most important problems for the coming century. The tenth problem concerns the solvability of Diophantine equations and was resolved in 1970 by Yuri Matiyasevich (DPRM theorem).

We conduct a formal verification of Matiyasevich's original proof using the proof assistant Isabelle. We formally state each lemma in Isabelle's formal language and prove each required step. The complete formalised proof includes even the simplest details, such as commutativity of matrix addition, as well as more abstract concepts like termination of register machines. At the time of writing, we have completely formalised the proof that exponentiation is Diophantine --- Matiyasevich's key breakthrough from 1970. We have also produced a model of a register machine and shown that, given equations describing a certain register machine, we can encode them as geometric sums (and hence, as exponential Diophantine equations). Additional work will produce proofs that a register machine can be simulated by such equations. This will then result in a full formalisation of the DPRM theorem.

Fourteen of the original Hilbert problems have been resolved, and none of the solutions have been formally verified yet. Our project will be the first such complete formal verification of a Hilbert problem. By formalising the DPRM theorem, we see our larger goal as bringing one of the most important mathematical results of the twentieth century into the methods of the twenty-first.
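
For context, the theorem being formalised can be stated as follows (a standard paraphrase, not the project's exact Isabelle formulation):

```latex
% A set A of natural numbers is Diophantine if membership in A is defined by the
% solvability of a fixed polynomial equation over the natural numbers:
\[
  A \text{ is Diophantine}
  \;:\Longleftrightarrow\;
  \exists p \in \mathbb{Z}[y, x_1, \dots, x_n].\;
  \forall a \in \mathbb{N}.\;
  \bigl( a \in A \iff \exists x_1, \dots, x_n \in \mathbb{N}.\; p(a, x_1, \dots, x_n) = 0 \bigr)
\]
\[
  \textbf{DPRM theorem:}\quad
  A \subseteq \mathbb{N} \text{ is recursively enumerable}
  \iff A \text{ is Diophantine.}
\]
% Matiyasevich's key step, the part already formalised in the project described
% above: the graph of exponentiation, {(a, b, c) : c = a^b}, is itself Diophantine.
```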

14:30
Competitive Proving for Fun

ABSTRACT. We propose a system for large-scale theorem proving contests. We hope that such contests could spark interest in the research field, attract a new generation of theorem proving savants, and foster competition among proof assistants. For the proof assistant Isabelle, we construct and evaluate a prototype implementation of our proposed system architecture.

14:45
Computational Hermeneutics: Using Automated Reasoning for the Logical Analysis of Natural-Language Arguments

ABSTRACT. In this project, we want to explore the computer-supported application of formal logic (using automated theorem provers and model finders) to issues in philosophy concerned with (i) the methodical evaluation and interpretation of arguments and, building upon this, (ii) the problem of formalization: how to search methodically for the most appropriate logical form(s) of a given natural-language argument by casting its individual statements into expressions of some sufficiently expressive logic (classical or non-classical). Computational hermeneutics is a holistic, iterative, trial-and-error enterprise, in which we evaluate the adequacy of some candidate formalization of a sentence by computing the logical validity of the whole argument. We start with formalizations of some simple statements (taking them as tentative) and use them as stepping stones on the way to the formalization of the argument's other sentences, repeating the procedure until arriving at a state of reflective equilibrium: a state where our beliefs have the highest degree of coherence and acceptability.

15:00
Leibniz on Reasoning about Impossible Concepts

ABSTRACT. Leibniz wrote down a proof for the existence of God accompanied by a note on impossible concepts in Latin (a reproduction of the manuscript is presented). A possibly true contradiction is quite unusual and not compatible with mainstream logic or his “Theodicy”. His example of such a contradiction about an impossible concept, the “square circle”, is still popular in contemporary metaphysics. Similar examples make the proposition of contradiction and possible truth controversial. An alternative formalization approach using a different kind of concept composition is proposed.

14:00-15:30 Session 42B: Rule Systems and Applications (RuleML+RR)
Location: MSA 3.520
14:00
Computational Regulation of Medical Devices in PSOA RuleML

ABSTRACT. The registration and marketability of medical devices in Europe is governed by Regulation (EU) 2017/745 and by guidelines published in terms thereof. This work focuses on rules for the risk-based classification of medical devices as well as for the declaration of conformity of each class. We formalized the core rules of the EU regulation for medical devices in Positional-Slotted Object-Applicative (PSOA) RuleML. This resulted in a refinement and subgrouping of the original legal rules. The formalization led to an object-relational PSOA RuleML rulebase, which was supplemented by object-relational facts (data) about medical devices to form a knowledge base (KB). The KB represents knowledge by facts and rules integrating RDF/F-logic-like graphs/frames ('objects') with Prolog-like relationships. We tested this open-source KB, Medical Devices Rules, by querying it in the open-source PSOATransRun system, which provided a feedback loop for refinement and extension. The aim of this formalization is to create a computational guideline that assists regulators, notified bodies (NBs), manufacturers, importers, distributors, and wholesalers of medical devices in the classification of medical devices. This can support the licensing process for these stakeholders and the registration of medical devices with a CE conformity mark and a "notified body number" affixed. Because the Medical Devices Rules KB is already in a computational rule format, it can become a pluggable component of smart contracts about medical devices.

14:22
Nuance Reasoning Framework: A Rule-Based System for Semantic Query Rewriting

ABSTRACT. We present the Nuance Reasoning Framework (NRF), a generic rule-based framework for semantic query rewriting and reasoning that is being utilized by Nuance Communications Inc. in speech-enabled conversational virtual assistant solutions for numerous automotive OEMs. We focus on the semantic rewriting task performed by NRF, which bridges the conceptual mismatch between the natural-language front end of automotive virtual assistants and their back-end databases, and personalizes the results to the driver. We also describe many of its powerful features, such as rewriter arbitration, query mediation and more.

14:44
A rule-based eCommerce methodology for the IoT using trustworthy Intelligent Agents and Microservices

ABSTRACT. The Internet of Things will dramatically transform not only people's everyday lives but also business and the economy. This huge network of intercommunicating heterogeneous Things is expected to revolutionize the commerce industry by driving innovation and new opportunities in the near future. Yet this open, distributed and heterogeneous environment raises important challenges. Old eCommerce practices cannot sufficiently address the demands of the Internet of Things, while trustworthiness issues arise. In this context, this study proposes a rule-based eCommerce methodology that allows Things, devices, services and humans to safely trade on the network. For this purpose, the proposed methodology represents Things as Intelligent Agents, since they form an alternative to traditional interactions among people and objects while, at the same time, being the subject of a rich research effort regarding trust management. The methodology combines Intelligent Agents with the microservice architecture in order to deal with the heterogeneity of Things, and it adopts a social agent-based trust model for the Internet of Things. Additionally, well-known semantic technologies such as RuleML and defeasible logic are adopted in order to maximize interoperability among Things. Furthermore, in order to deal with issues related to rule exchange where no common syntax is used, the methodology is integrated with an appropriate multi-agent knowledge-based framework. Finally, an eCommerce use case scenario is presented that illustrates the viability of the proposed approach.

15:06
Learning condition-action rules for personalised journey recommendations

ABSTRACT. We apply a learning classifier system (specifically, XCSI) to the task of learning individual passenger preferences, expressed as condition-action rules, in order to provide personalised suggestions for their onward journey. Learning classifier systems are adaptive systems that function in complex environments, altering their internal structure to achieve a goal through interactions with the environment. The proposed XCSI system is embedded within a simulated environment of passengers travelling around the London Underground network, subject to disruption. We show that XCSI is able to learn passenger preferences and use them to provide reasonably accurate recommendations on multi-modal transport options without substantial parameter modification. Further parameter adjustments yield higher performance. Structural adjustments may remain, suggesting opportunities for improvement in future work.

15:30-16:00 Coffee Break and Poster Session
16:00-17:00 Session 43A: Invited Talk (Deduktionstreffen)
Location: MSA 3.120
16:00
Wanda: a higher-order termination tool

ABSTRACT. Wanda is a fully automatic tool to analyse termination of higher-order term rewriting systems in various styles. In this talk I will present Wanda and discuss the underlying methodology, which includes both a discussion of the theory involved and the strategies for automation.