ABSTRACT. The Countdown game is one of the oldest TV shows in the world. It started broadcasting in 1972 on French television and in 1982 on British Channel 4, and it has been running ever since in both countries. The game, while extremely popular, has never received any serious scientific attention, probably because it seems too simple at first sight. We present in this article an in-depth analysis of the numbers round of the Countdown game. This includes a complexity analysis of the game, an analysis of existing algorithms, and the presentation of a new algorithm that increases resolution speed by a large factor. It also includes some leads on how to turn the game into a more difficult one, both for a human player and for a computer, and even on how to transform it into a possibly undecidable problem.
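
For context, the numbers round asks players to reach a target number from six given numbers using the four arithmetic operations, with every intermediate result required to be a positive integer. A minimal brute-force solver (an illustrative sketch of the naive baseline, not the faster algorithm the article presents) looks like this:

```python
def solve(numbers, target):
    """Brute-force Countdown numbers solver: return an expression
    reaching the target, or None. Intermediate results must be
    positive integers, as the game's rules require."""
    return _search([(n, str(n)) for n in numbers], target)

def _search(state, target):
    # State is a list of (value, expression) pairs still available.
    for value, expr in state:
        if value == target:
            return expr
    for i in range(len(state)):
        for j in range(len(state)):
            if i == j:
                continue
            (a, ea), (b, eb) = state[i], state[j]
            rest = [state[k] for k in range(len(state)) if k not in (i, j)]
            # Legal combinations of a and b (ordered pairs cover a-b, a/b).
            results = [(a + b, f"({ea}+{eb})"), (a * b, f"({ea}*{eb})")]
            if a > b:
                results.append((a - b, f"({ea}-{eb})"))
            if b > 1 and a % b == 0:
                results.append((a // b, f"({ea}/{eb})"))
            for value, expr in results:
                found = _search(rest + [(value, expr)], target)
                if found is not None:
                    return found
    return None
```

Each recursive step replaces two available values by the result of one operation, so the search space grows combinatorially with the number of source numbers, which is what makes the complexity analysis of the round interesting.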

ABSTRACT. In open multi-agent systems, agents often interact with each other to meet their objectives. Trust is therefore considered essential to make such interactions useful. However, trust is a complex, multifaceted concept and includes more than just evaluating others' honesty. Many trust evaluation models have been proposed and implemented in different areas; most of them focus on creating algorithms for trustors to model the honesty of trustees in order to make effective decisions about which trustees to select. However, little consideration has been paid to trust establishment. This work describes a Trust Establishment Model Using Indirect Feedback (TEIF) that goes beyond trust evaluation to outline actions to guide trustees (instead of trustors). The model uses the number of transactions performed by trustors. A trustee adjusts the utility gain it provides when interacting with each trustor, depending on the average number of transactions carried out by that trustor relative to the mean number of transactions performed by all trustors interacting with this trustee. The proposed model depends neither on direct feedback nor on the current reputation of trustees in the community. Simulation results indicate that trustees empowered with the proposed model are selected more often by trustors.
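
The adjustment described can be made concrete with a simple linear scaling rule (our own assumption for illustration; the abstract does not fix the exact utility function):

```python
def adjusted_utility(base_utility, trustor_avg, peer_avgs):
    """Hypothetical linear adjustment rule in the spirit of TEIF:
    scale the utility a trustee offers by the trustor's average
    transaction count relative to the mean over all trustors
    interacting with this trustee."""
    mean = sum(peer_avgs) / len(peer_avgs)
    return base_utility * (trustor_avg / mean)
```

Under this rule, trustors who transact more than average receive proportionally more utility, giving all trustors an incentive to keep interacting with the trustee without any direct feedback channel.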

Mats Carlsson (SICS)

Pierre Flener (Uppsala University)

Xavier Lorca (Ecole des Mines de Nantes)

Justin Pearson (Uppsala University)

Thierry Petit (Worcester Polytechnic Institute, USA / Mines de Nantes-LINA, France)

Charles Prud'homme (TASC (CNRS/INRIA), Mines Nantes)

ABSTRACT. Some constraint programming solvers and constraint modelling languages feature the SORT(L,P,S) constraint, which holds if S is a nondecreasing rearrangement of the list L, the permutation being made explicit by the optional list P. However, such sortedness constraints do not seem to be used much in practice. We argue that reasons for this neglect are that it is impossible to require the underlying sort to be stable, so that SORT cannot be guaranteed to be a total-function constraint, and that L cannot contain tuples of variables, some of which form the key for the sort. To overcome these limitations, we introduce the StableKeysort constraint, decompose it using existing constraints, and propose a propagator. This new constraint enables a powerful modelling idiom, which we illustrate by elegant and scalable models of two problems that are otherwise hard to encode as constraint programs.

Tamar Bliadze (Tbilisi State University)

Zurab Kochladze (Tbilisi State University)

ABSTRACT. E-mail is the most widespread communication method. The ever-growing volume of e-mail includes a permanently increasing amount of unwanted messages (spam). Statistics suggest that 80% of e-mail traffic is spam, which slows down internet traffic, consumes space on hard disks, and reduces the capacity of the network, not to mention the time spent classifying e-mails. Advanced e-mail service packages contain spam filtering programs. Nevertheless, classifying an e-mail as spam ultimately depends on the user, as information acceptable (or vital) to one user may be spam to another. Hence, it is crucial to have an automatic spam filtering system for every individual user. The paper discusses a solution: a learning system that filters e-mails according to the user's parameters and gradually learns to filter them accordingly. The system is based on a multi-layer neural network that uses a logistic activation function and is trained in a supervised manner via the error backpropagation algorithm.
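
To illustrate the kind of supervised training described (logistic activation, error backpropagation), here is the one-neuron special case on hypothetical toy features; the paper's actual multi-layer architecture and feature set are not given in the abstract:

```python
import math

def train_logistic(samples, labels, epochs=2000, lr=0.5):
    """Train a single logistic unit by gradient descent --
    the one-neuron special case of error backpropagation."""
    n = len(samples[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # logistic activation
            err = p - y                       # gradient of the log-loss
            for i in range(n):
                w[i] -= lr * err * x[i]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Classify x as spam (True) or not (False)."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z)) > 0.5
```

In a full multi-layer network, the same error signal is propagated backwards through the hidden layers; training per user on that user's own labeled messages is what personalizes the filter.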

ABSTRACT. Ontology-based query answering augments classical query answering in databases by adopting the open-world assumption and by including domain knowledge provided by an ontology. We investigate temporal query answering w.r.t. ontologies formulated in DL-Lite, a family of description logics that captures the conceptual features of relational databases and was tailored for efficient query answering. We consider a recently proposed temporal query language that combines conjunctive queries with the operators of propositional linear temporal logic (LTL). In particular, we consider negation in the ontology and query language, and study both data and combined complexity of query entailment.

ABSTRACT. We consider two classes of computations which admit taking linear combinations of execution runs: probabilistic sampling and generalized animation. We argue that the task of program learning should be more tractable for these architectures than for conventional deterministic programs. We look at the recent advances in the "sampling the samplers" paradigm in higher-order probabilistic programming. We also discuss connections between partial inconsistency, non-monotonic inference, and vector semantics.

Steffen Hölldobler (Technische Universitaet Dresden)

Luís Moniz Pereira (NOVA)

ABSTRACT. We discuss the evaluation of conditionals. Under classical logic a conditional of the form \textit{A implies B} is semantically equivalent to \textit{not A or B}. However, psychological experiments have repeatedly shown that this is not how humans understand and use conditionals. We introduce an innovative abstract reduction system under the three-valued {\L}ukasiewicz logic and the weak completion semantics, that allows us to reason abductively and by revision with respect to conditionals, in three values. We discuss the strategy of \textit{minimal revision followed by abduction} and discuss two notions of relevance. Psychological experiments will need to ascertain if these strategies and notions, or a variant of them, correspond to how humans reason with conditionals.

Michel Ludwig (Institut für Theoretische Informatik, TU Dresden)

Dirk Walther (Theoretical Computer Science, TU Dresden)

ABSTRACT. We investigate the logical difference problem between general EL-TBoxes. The logical difference is the set of concept subsumptions that are logically entailed by a first TBox but not by a second one. We first reduce the logical difference between two EL-TBoxes to fixpoint reasoning w.r.t. EL-TBoxes. We show how entailments of the first TBox can be represented by subsumptions of least fixpoint concepts by greatest fixpoint concepts which can then be checked w.r.t. the second TBox. The subsumption checks are based on checking for the existence of simulations between hypergraph representations of the fixpoint concept and the TBoxes, not relying on intricate automata-theoretic techniques.

Pierre Flener (Uppsala University)

Justin Pearson (Uppsala University)

ABSTRACT. Automata allow many constraints on sequences of variables to be specified in a high-level way for constraint programming solvers. An automaton with accumulators induces a decomposition of the specified constraint into a conjunction of constraints with existing inference algorithms, called propagators. Towards improving propagation, we design a fully automated tool that selects, in an off-line process, constraints that are implied by such a decomposition. We show that a suitable problem-specific choice among the tool-selected implied constraints can considerably improve solving time and propagation, both on a decomposition in isolation and on entire constraint problems containing the decomposition.

ABSTRACT. Metis is an automated theorem prover based on ordered paramodulation. It is widely employed in the interactive theorem provers Isabelle/HOL and HOL4 to automate proofs as well as to reconstruct proofs found by automated provers. For both these purposes, the tableaux-based MESON tactic is frequently used in HOL Light. However, paramodulation-based provers such as Metis perform better on many problems involving equality. We created a Metis-based tactic in HOL Light which translates HOL problems to Metis, runs an OCaml version of Metis, and reconstructs proofs in Metis' paramodulation calculus as HOL proofs. We evaluate the performance of Metis as a proof reconstruction method in HOL Light.

Konstantin Korovin (Manchester University)

Dmitry Tsarkov (The University of Manchester)

ABSTRACT. In recent years it has been proposed to encode bounded model checking (BMC) into the effectively propositional fragment of first-order logic (EPR). The EPR fragment can provide a succinct representation of the problem and facilitate reasoning at a higher level. In this paper we present an extension of EPR-based bounded model checking with k-induction, which can be used to prove safety properties of systems over unbounded runs. We present a novel abstraction-refinement approach based on unsatisfiable cores and models (UCM) for BMC and k-induction in the EPR setting. We have implemented UCM refinements for EPR-based BMC and k-induction in the first-order automated theorem prover iProver. We also extended iProver with support for the AIGER format and evaluated it over the HWMCC'14 competition benchmarks. The experimental results are encouraging: we show that a number of AIG problems can be verified to deeper bounds with EPR-based model checking.
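
The k-induction scheme the abstract builds on can be sketched in an explicit-state toy form (a stand-in for the paper's symbolic SAT/EPR encodings): the base case checks every state reachable within k steps, and the step case checks that any path of k transitions through property-satisfying states leads only to property-satisfying successors.

```python
def k_induction(init, succ, prop, k_max=10):
    """Explicit-state sketch of k-induction over a finite transition
    relation given as a dict state -> list of successors."""
    def good_paths(length):
        # All paths of `length` transitions through prop-satisfying
        # states (not necessarily reachable -- that is the point).
        frontier = [(s,) for s in succ if prop(s)]
        for _ in range(length):
            frontier = [p + (t,) for p in frontier
                        for t in succ[p[-1]] if prop(t)]
        return frontier

    for k in range(k_max + 1):
        # Base case: every state reachable within k steps is good.
        reach = set(init)
        for _ in range(k):
            reach |= {t for s in reach for t in succ[s]}
        if not all(prop(s) for s in reach):
            return (False, k)          # real counterexample found
        # Step case: good k-paths have only good successors.
        if all(prop(t) for p in good_paths(k) for t in succ[p[-1]]):
            return (True, k)
    return (None, k_max)               # inconclusive up to k_max
```

In the example below, 0-induction fails because the unreachable good state 1 has a bad successor, but 1-induction succeeds: no good path of one transition ends in state 1.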

Gia Sirbiladze (Tbilisi State University)

Gvanca Tsulaia (Tbilisi State University)

ABSTRACT. The article proposes a multi-attribute decision making (MADM) approach, which is applied to the problem of optimal selection of investment projects. This novel methodology comprises two stages. First, it ranks projects using the TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) method in a hesitant fuzzy environment. We consider the case when the information on the weights of the attributes is completely unknown. The identification of the attribute weights is made in the context of hesitant fuzzy sets and is based on the De Luca-Termini information entropy. The ranking of alternatives is made according to the proximity of their distances to the positive and negative ideal solutions. The second stage of the methodology allows making the most profitable investment in several projects simultaneously. The decision on an optimal distribution of allocated investments among the selected projects is made using a method developed by the authors for a possibilistic bicriteria optimization problem. An investment example is given to illustrate the application of the proposed approach.
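
A crisp (non-hesitant) simplification can illustrate the first stage's pipeline: entropy-based attribute weights followed by ranking by closeness to the ideal solutions. The hesitant fuzzy machinery of the paper is deliberately omitted here, so this is a sketch of plain entropy-weighted TOPSIS only:

```python
import math

def entropy_weights(matrix):
    """Attribute weights from Shannon-style entropy: attributes whose
    scores vary more across alternatives carry more weight (crisp
    simplification of the hesitant-fuzzy De Luca-Termini setting)."""
    m, n = len(matrix), len(matrix[0])
    raw = []
    for j in range(n):
        col = [row[j] for row in matrix]
        s = sum(col)
        probs = [v / s for v in col]
        e = -sum(p * math.log(p) for p in probs if p > 0) / math.log(m)
        raw.append(1 - e)              # divergence degree of attribute j
    total = sum(raw)
    return [w / total for w in raw]

def topsis_rank(matrix):
    """Relative closeness of each alternative to the ideal solution."""
    w = entropy_weights(matrix)
    n = len(matrix[0])
    weighted = [[w[j] * row[j] for j in range(n)] for row in matrix]
    best = [max(col) for col in zip(*weighted)]    # positive ideal
    worst = [min(col) for col in zip(*weighted)]   # negative ideal
    scores = []
    for row in weighted:
        d_pos = math.dist(row, best)
        d_neg = math.dist(row, worst)
        scores.append(d_neg / (d_pos + d_neg))
    return scores
```

Alternatives are then ranked by descending score; in the paper's setting each matrix entry would instead be a hesitant fuzzy element and the distances hesitant-fuzzy distances.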

ABSTRACT. In this paper the problem of recognizing a patient's intent to move a hand prosthesis is addressed. The proposed method is based on the recognition of electromyographic (EMG) and mechanomyographic (MMG) biosignals using a multiclassifier system (MCS) working with a dynamic ensemble selection (DES) scheme and an original concept of a competence function. The competence measure is based on relating the response of the classifier to the decision profile of a test object, which is evaluated using the K nearest objects from the validation set (static mode). Additionally, feedback information coming from bioprosthesis sensors on correct/incorrect classification is applied to adjust the combining mechanism during MCS operation, through adaptive tuning of the competences of base classifiers depending on their decisions (dynamic mode). Experimental investigations are performed using real data concerning the recognition of five types of grasping movements and a computer-simulated procedure for generating feedback signals. The performance of the MCS with the proposed competence measure is experimentally compared against four state-of-the-art MCSs in static mode, and furthermore the developed MCS is evaluated with respect to the effectiveness of the competence-tuning procedure. The results obtained indicate that modifying the competences of base classifiers during the working phase essentially improves the performance of the MCS. The system developed achieved the highest classification accuracy, demonstrating the potential of an MCS with feedback signals from prosthesis sensors for the control of a bioprosthetic hand.

Mikhail Soutchanski (Ryerson University, Toronto, Ontario)

Megan Antoniazzi (Ryerson University)

ABSTRACT. Organic synthesis is a computationally challenging practical problem concerned with constructing a target molecule from a set of initially available molecules via chemical reactions. This paper demonstrates how organic synthesis can be formulated as a planning problem in Artificial Intelligence, and how it can be explored using state-of-the-art planners. To this end, we develop a methodology to represent chemical molecules and generic reactions in PDDL 2.2, a version of the standardized Planning Domain Definition Language popular in AI. In our model, derived predicates define common functional groups and chemical classes, and actions correspond to generic chemical reactions. We develop a set of benchmark problems that can subsequently be used for empirical assessment of the performance of various state-of-the-art planners.

Benedikt Wolters (RWTH Aachen University)

Lukas Netz (RWTH Aachen University)

Sascha Geulen (RWTH Aachen University)

Erika Abraham (RWTH Aachen University)

ABSTRACT. Genetic algorithms have been applied to various optimization problems. Our library GeneiAL implements a framework for genetic algorithms especially targeted at the area of hybrid electric vehicles. In a parallel hybrid electric vehicle (PHEV), an internal combustion engine and an electrical motor are coupled on the same axis in parallel. In the area of PHEVs, genetic algorithms have been extensively used for the optimization of the sizing of powertrain components and for parameter tuning of control strategies. We use GeneiAL to control the torque distribution between the engines directly. The objective function of this control strategy minimizes the weighted sum of functions that evaluate the fuel consumption, the battery state of charge, and drivability aspects, over a prediction horizon of fixed finite length. We analyze the influence of these weights and different configurations for the genetic algorithm on the computation time, the convergence, and the quality of the optimization result. For promising configurations, we compare the results of our control strategy with common control strategies.
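
The objective function described (a weighted sum over a fixed prediction horizon) has this general shape; all component functions below are hypothetical placeholders, not the models used in the paper:

```python
def control_objective(torque_split, horizon, weights,
                      fuel, soc_dev, drivability):
    """Weighted-sum control objective over a fixed prediction horizon.

    torque_split[t] is the candidate torque distribution at step t;
    fuel, soc_dev and drivability are placeholder cost models for
    fuel consumption, battery state-of-charge deviation and
    drivability penalties."""
    w_fuel, w_soc, w_drive = weights
    return sum(
        w_fuel * fuel(torque_split[t])
        + w_soc * soc_dev(torque_split[t])
        + w_drive * drivability(torque_split[t])
        for t in range(horizon))
```

A genetic algorithm then searches over candidate torque-split sequences to minimize this value, and the weight analysis in the paper amounts to varying `weights` and observing convergence and solution quality.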

ABSTRACT. We study two modal logics of trust that interconnect the trust modality with the individual belief modalities of agents. One logic includes an axiom stating that if agent i trusts agent j with respect to a statement p then i believes that j does not disbelieve p. In the second logic, if i trusts j concerning p then i believes that j believes p. These extensions of the logic of trust help to solve some unintuitive consequences that arise when the semantics of trust and belief are independent. As a main technical result we prove the soundness and completeness of these logics.

Zeev Volkovich (Braude College)

Gerhard Weber (Middle East Technical University)

ABSTRACT. We introduce the notion of strongly distributed structures and present a uniform approach to incremental automated reasoning on them. The approach is based on the systematic use of two logical reduction techniques: Feferman-Vaught reductions and syntactically defined translation schemes. The distributed systems are presented as logical structures $\fA$. We propose a uniform template for methods that allow for certain cost evaluation of formulae of a logic $\cL$ over $\fA$ from values of formulae over its components and values of formulae over the index structure $\cI$. Given a logic $\cL$, a structure $\fA$ as a composition of structures $\fA_i, i \in I$, an index structure $\cI$, and a formula $\phi$ of the logic to be evaluated on $\fA$, the question is: what is the reduction sequence for $\phi$, if any exists? We show that if we can prove preservation theorems for $\cL$, and if $\fA$ is a strongly distributed composition of its components, then the corresponding reduction sequence for $\fA$ can be effectively computed. We show that the approach works for many extensions of $FOL$, but not all. The considered extensions of $FOL$ are suitable candidates for modelling languages for components and services, used in incremental automated reasoning, data mining, decision making, planning and scheduling. A short complexity analysis of the method is also provided.

Michael Sioutis (Artois University, CRIL/CNRS UMR 8188)

ABSTRACT. The aim of this work is to define a resolution method for the modal logic S5. We first propose a conjunctive normal form (S5-CNF) which is mainly based on using labels referring to semantic worlds. In a sense, S5-CNF can be seen as a generalization of conjunctive normal form in propositional logic, obtained by using the modal connective of necessity and labels in the clause structure. We show that every S5 formula can be transformed into an S5-CNF formula using a linear encoding. Then, in order to show the suitability of our normal form, we describe a modeling of the graph coloring problem. Finally, we introduce a simple resolution method for S5, composed of three deductive rules, and we show that it is sound and complete. Our deductive rules can be seen as adaptations of Robinson's resolution rule to the possible-worlds semantics.

ABSTRACT. First-order theorem provers have to search for proofs in an infinite space of possible derivations. Proof search heuristics play a vital role in the practical performance of these systems. In the current generation of saturation-based theorem provers like SPASS, E, Vampire or Prover 9, one of the most important decisions is the selection of the next clause to process in the given-clause algorithm. Provers offer a wide variety of basic clause evaluation functions, which can often be parameterized and combined in many different ways. Finding good strategies is usually left to the users or developers, often backed by large-scale experimental evaluations. We describe a way to automate this process using genetic algorithms: evaluating a population of different strategies on a test set, and applying mutation and crossover operators to good strategies to create the next generation. We describe the design and experimental set-up, and report on first promising results.
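
The evolutionary loop described can be sketched generically: evaluate a population of strategies, keep the fittest, and breed the next generation by crossover and mutation. Here a strategy is abstracted as a vector of parameters and the fitness function is a toy stand-in for running a prover over a test set:

```python
import random

def evolve(fitness, genome_len, pop_size=20, generations=30, seed=0):
    """Generic GA loop: elitist selection, one-point crossover,
    Gaussian point mutation on parameter vectors in [0, 1]."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # keep the best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, genome_len)  # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(genome_len)       # point mutation
            child[i] = min(1.0, max(0.0, child[i] + rng.gauss(0, 0.1)))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```

In the actual setting, decoding a genome yields a clause-selection strategy and evaluating fitness means counting the problems the prover solves with it, which makes each generation expensive and the choice of test set important.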

ABSTRACT. The paper describes a project aiming at developing formal foundations for combined multi-language constraint solving, in the form of an algebra of modular systems. The basis for the integration of different formalisms is classical model theory. Each atomic module is a class of structures. It can be given, e.g., by a set of constraints in a constraint formalism that has an associated solver. Atomic modules are combined using a small number of algebraic operations. The algebra simultaneously resembles Codd's relational algebra (but is defined on classes of structures instead of relational tables) and the process algebras originating in the work of Milner and Hoare.

The goal of this paper is to establish and justify the main notions and research directions, and to make the definitions precise. We explain several results, but do not give proofs; the proofs will appear in several forthcoming papers. We keep this paper as a project description, to discuss the overall project and to establish and bridge the individual directions.

Franciszek Seredynski (Cardinal Stefan Wyszynski University)

ABSTRACT. The energy optimization problem in Wireless Sensor Networks (WSNs) is the backbone of the efficient performance of a sensor network consisting of small devices with limited, non-rechargeable batteries. The WSN lifetime maximization problem, under the assumption that coverage is the main task of the network, is known as the Maximal Lifetime Coverage Problem (MLCP). This problem belongs to the class of NP-hard problems. In this paper we propose a novel simulated annealing (SA) algorithm to solve the MLCP. The proposed algorithm is studied on high-density WSN instances under different parameter setups and compared with a simplified algorithm applied to the disjoint set cover problem. Simulation results show that the SA algorithm outperforms the latter.
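
The SA skeleton behind such an approach is standard (the MLCP-specific neighbourhood moves and cooling schedule are not given in the abstract, so the sketch below uses a generic neighbour function and geometric cooling):

```python
import math
import random

def simulated_annealing(cost, neighbour, init, t0=1.0, cooling=0.995,
                        steps=5000, seed=0):
    """Standard SA skeleton: always accept improving moves, accept
    worsening moves with probability exp(-delta/T), and cool the
    temperature T geometrically."""
    rng = random.Random(seed)
    current, best = init, init
    t = t0
    for _ in range(steps):
        cand = neighbour(current, rng)
        delta = cost(cand) - cost(current)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            current = cand
            if cost(current) < cost(best):
                best = current
        t *= cooling
    return best
```

For the MLCP, a solution would encode a schedule of sensor activations, the cost would penalize uncovered targets and short network lifetime, and a neighbour move would toggle or reschedule a sensor; the toy usage below just minimizes a one-dimensional function.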

ABSTRACT. BliStr is a system that automatically develops strong targeted theorem-proving strategies for the E prover on a large set of diverse problems. The main idea is to interleave (i) iterated low-timelimit local search for new strategies on small sets of similar easy problems with (ii) higher-timelimit evaluation of the new strategies on all problems. The accumulated results of the global higher-timelimit runs are used to define and evolve the notion of "similar easy problems" and to control the evolution of the next strategy. The technique significantly strengthened the set of E strategies used by the MaLARea, E-MaLeS, and E systems in the CASC@Turing 2012 competition, particularly in the Mizar division. A similar improvement was obtained on problems created from the Flyspeck corpus.