11:00 | Clustering of Argument Graphs using Semantic Similarity Measures ABSTRACT. Research on argumentation in Artificial Intelligence has recently been investigating new methods that contribute to the vision of developing robust argumentation machines. One line of research explores ways of reasoning with natural language arguments coming from information sources on the web as a foundation for the deliberation and synthesis of arguments in specific domains. This paper builds upon arguments represented as argument graphs in the standardized Argument Interchange Format. While previous work focused on the development of semantic similarity measures used for the case-based retrieval of argument graphs, this paper addresses the problem of clustering argument graphs to explore structures that facilitate argument interpretation. We propose a k-medoid and an agglomerative clustering approach based on semantic similarity measures. We compare clustering results obtained with a graph-based semantic measure that takes the structure of the argument into account against those obtained with a semantic word2vec measure on the purely textual argument representation. Experiments on the Microtext corpus show that the graph-based similarity performs best on internal evaluation measures, while the purely textual measure performs very well for identifying topic-specific clusters.
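The clustering side of this approach can be illustrated independently of the graph-based measure. Below is a minimal sketch (an illustration, not the authors' code) of k-medoid clustering over a cosine-distance matrix computed from averaged word-embedding vectors; the embeddings, cluster count, and toy data are placeholders.

```python
import numpy as np

def cosine_distance_matrix(vectors):
    """Pairwise cosine distances between argument embeddings (e.g. averaged word2vec vectors)."""
    unit = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    return 1.0 - unit @ unit.T

def k_medoids(dist, k, iters=100, seed=0):
    """Plain PAM-style k-medoid clustering on a precomputed distance matrix."""
    rng = np.random.default_rng(seed)
    n = dist.shape[0]
    medoids = rng.choice(n, size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(dist[:, medoids], axis=1)   # assign each point to its nearest medoid
        new_medoids = medoids.copy()
        for c in range(k):
            members = np.where(labels == c)[0]
            if len(members) > 0:
                # the member with the smallest total distance to its cluster becomes the new medoid
                within = dist[np.ix_(members, members)].sum(axis=1)
                new_medoids[c] = members[np.argmin(within)]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return labels, medoids

# toy usage: 6 "arguments" embedded in a 4-dimensional space, clustered into 2 groups
emb = np.random.default_rng(1).random((6, 4))
labels, medoids = k_medoids(cosine_distance_matrix(emb), k=2)
print(labels, medoids)
```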
11:25 | Improving Implicit Stance Classification in Tweets Using Word and Sentence Embeddings ABSTRACT. Argumentation Mining aims at finding components of arguments, as well as relations between them, in text. One of the largely unsolved problems is implicitness, where the text invites the reader to infer a missing component, such as the claim or a supporting statement. Wojatzki and Zesch (2016) addressed an interesting implicitness problem on a Twitter dataset: they showed that implicit stances toward a claim can be found with some success using just token and character n-grams. Using the same dataset, we show that results for this task can be improved using word and sentence embeddings, but that not all embedding variants perform alike. Specifically, we compare fastText, GloVe, and the Universal Sentence Encoder (USE), and we find that USE yields the best results for this task.
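As a rough illustration of such a pipeline, the sketch below (not the authors' code) trains a simple linear classifier on precomputed sentence embeddings; the 512-dimensional random vectors, the binary labels, and the metric are placeholders standing in for USE/fastText/GloVe embeddings of tweets and the actual stance annotations.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# placeholder data: in practice these would be sentence embeddings of tweets
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 512))        # one 512-d sentence embedding per tweet
y = rng.integers(0, 2, size=200)       # implicit stance label: 0 = against, 1 = favor

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("macro-F1:", f1_score(y_te, clf.predict(X_te), average="macro"))
```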
11:40 | Strong Equivalence for Argumentation Frameworks with Collective Attacks PRESENTER: Anna Rapberger ABSTRACT. Argumentation frameworks with collective attacks are a prominent extension of Dung's abstract argumentation frameworks, where an attack can be drawn from a set of arguments to another argument. These frameworks are often abbreviated as SETAFs. Although SETAFs have received increasing interest recently, the notion of strong equivalence, which is fundamental in nonmonotonic formalisms to characterize equivalent replacements, has not yet been investigated. In this paper, we study how strong equivalence between SETAFs can be decided with respect to the most important semantics and also consider variants of strong equivalence. |
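For readers unfamiliar with SETAFs, the toy sketch below (an illustration only; the paper is about strong equivalence, not extension computation) enumerates the stable extensions of a small SETAF by brute force. The argument names and the example framework are made up.

```python
from itertools import combinations

def stable_extensions(arguments, attacks):
    """Brute-force stable extensions of a SETAF.
    attacks: iterable of (frozenset_of_attacking_arguments, attacked_argument)."""
    exts = []
    for r in range(len(arguments) + 1):
        for cand in combinations(arguments, r):
            s = set(cand)
            # conflict-free: no attack launched from within s hits a member of s
            conflict_free = not any(t <= s and a in s for t, a in attacks)
            # stable: every argument outside s is attacked by some subset of s
            attacks_rest = all(
                any(t <= s and a == b for t, a in attacks)
                for b in arguments - s
            )
            if conflict_free and attacks_rest:
                exts.append(frozenset(s))
    return exts

# toy SETAF: {a, b} jointly attack c, and c attacks a
args = {"a", "b", "c"}
atts = [(frozenset({"a", "b"}), "c"), (frozenset({"c"}), "a")]
print(stable_extensions(args, atts))   # the stable extensions here are {a, b} and {b, c}
```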
12:05 | Mixing Description Logics in Privacy-Preserving Ontology Publishing ABSTRACT. In previous work, we have investigated privacy-preserving publishing of Description Logic (DL) ontologies in a setting where the knowledge about individuals to be published is an EL instance store, and both the privacy policy and the possible background knowledge of an attacker are represented by concepts of the DL EL. We have introduced the notions of compliance of a concept with a policy and of safety of a concept for a policy, and have shown how, in the context mentioned above, optimal compliant (safe) generalizations of a given EL concept can be computed. In the present paper, we consider a modified setting in which the background knowledge of the attacker is given in a DL different from the one in which the knowledge to be published and the privacy policies are formulated. In particular, we investigate the situations where the attacker's knowledge is given by an FL0 or an FLE concept. In both cases, we show how optimal safe generalizations can be computed. Whereas the complexity of this computation is the same (ExpTime) as in our previous results for the case of FL0, it turns out to be actually lower (polynomial) for the more expressive DL FLE.
14:15 | Analogy-Based Preference Learning with Kernels ABSTRACT. Building on a specific formalization of analogical relationships of the form "A relates to B as C relates to D", we establish a connection between two important subfields of artificial intelligence, namely analogical reasoning and kernel-based learning. More specifically, we show that so-called "analogical proportions" are closely connected to kernel functions on pairs of objects. Based on this result, we introduce the "analogy kernel", which can be seen as a measure of how strongly four objects are in analogical relationship. As an application, we consider the problem of object ranking in the realm of preference learning, for which we develop a new method based on support vector machines trained with the analogy kernel. Our first experimental results for data sets from different domains (sports, education, tourism, etc.) are promising and suggest that our approach is competitive with state-of-the-art algorithms in terms of predictive accuracy.
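The exact kernel construction is given in the paper; as a loosely related illustration, the sketch below builds a valid kernel on object pairs from the difference feature map φ(a, b) = b − a (so it is large when b − a ≈ d − c, an arithmetic reading of "A is to B as C is to D") and plugs it into a support vector machine. The feature map, toy data, and labels are assumptions, not the authors' formulation.

```python
import numpy as np
from sklearn.svm import SVC

def analogy_kernel(X, Y, gamma=1.0):
    """RBF kernel on the difference feature map phi(a, b) = b - a.
    Each row of X/Y is a concatenated pair [a, b]; the kernel value is high when
    the two pairs stand in a similar (arithmetic) relation, i.e. b - a is close to d - c."""
    d = X.shape[1] // 2
    dx = X[:, d:] - X[:, :d]
    dy = Y[:, d:] - Y[:, :d]
    sq = ((dx[:, None, :] - dy[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

# toy preference data: label 1 means the first object is preferred to the second
rng = np.random.default_rng(0)
objs = rng.normal(size=(100, 5))
pairs = np.hstack([objs[:-1], objs[1:]])                    # consecutive objects form pairs
labels = (objs[:-1].sum(1) > objs[1:].sum(1)).astype(int)   # synthetic preference relation

clf = SVC(kernel=analogy_kernel).fit(pairs, labels)
print("train accuracy:", clf.score(pairs, labels))
```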
14:40 | A crow search-based genetic algorithm for solving two-dimensional bin packing problem ABSTRACT. The two-dimensional bin packing problem (2D-BPP) consists of packing, without overlapping, a set of rectangular items of different sizes into the smallest number of rectangular containers, called "bins", having identical dimensions. Depending on real-world requirements, the items may either have a fixed orientation or be rotatable by 90°; in addition, the packing may or may not be subject to the guillotine cut constraint. In this article, we consider the two-dimensional bin packing problem with fixed orientation and free cutting. We propose a hybrid approach that combines two bio-inspired algorithms, the crow search algorithm (CSA) and the genetic algorithm (GA), to solve this problem. The main idea behind this hybridization is to achieve a cooperative synergy between the operators of the two combined algorithms: the CSA is discretized and adapted to the 2D-BPP context, while genetic operators are used to improve the adaptation of individuals (i.e., crows). The average performance of the proposed hybrid approach is evaluated on standard benchmark instances of the problem and compared with two other nature-inspired algorithms, namely a standard genetic algorithm and a binary particle swarm optimization algorithm. The obtained results are very promising.
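Hybrid metaheuristics for the 2D-BPP typically evolve item orderings and decode each ordering into a packing to obtain its fitness. The sketch below (an illustration under that assumption, not the paper's algorithm) decodes one ordering with a simple first-fit shelf heuristic for the fixed-orientation case and returns the number of bins used.

```python
def shelf_first_fit(items, bin_w, bin_h):
    """Decode an item sequence into bins with a simple first-fit shelf heuristic
    (fixed orientation, no rotation). Each bin tracks only its open, topmost shelf:
    y = shelf bottom, h = shelf height so far, x = horizontal cursor on the shelf.
    Returns the number of bins used -- a natural fitness value for a GA/CSA individual."""
    bins = []
    for w, h in items:
        assert w <= bin_w and h <= bin_h, "item larger than the bin"
        for b in bins:
            # try the currently open (topmost) shelf of this bin
            if b["x"] + w <= bin_w and b["y"] + max(b["h"], h) <= bin_h:
                b["x"] += w
                b["h"] = max(b["h"], h)
                break
            # otherwise try to open a new shelf on top of the current one
            if b["y"] + b["h"] + h <= bin_h:
                b["y"] += b["h"]
                b["h"], b["x"] = h, w
                break
        else:
            bins.append({"y": 0, "h": h, "x": w})   # open a fresh bin with the item
    return len(bins)

# fitness of one candidate ordering (e.g. produced by crow search / genetic operators)
items = [(4, 3), (5, 2), (3, 3), (6, 4), (2, 2)]
print(shelf_first_fit(items, bin_w=8, bin_h=6))   # -> 2 bins for this ordering
```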
15:05 | An Empirical Study of the Usefulness of State-Dependent Action Costs in Planning PRESENTER: Sumitra Corraya ABSTRACT. The vast majority of work in planning to date has focused on state-independent action costs. However, if a planning task features state-dependent costs, using a cost model with state-independent costs means either introducing a modeling error or potentially sacrificing the compactness of the model. In this paper, we investigate this trade-off between modeling accuracy and compactness empirically, with a particular focus on the extent of the negative impact of reduced modeling accuracy on (a) the quality of the resulting plans and (b) the search guidance provided by heuristics that are fed with inaccurate cost models. Our empirical results show that the plan suboptimality introduced by ignoring state-dependent costs can range, depending on the domain, from nonexistent to several orders of magnitude. Furthermore, our results show that the impact on heuristic guidance additionally depends strongly on the heuristic that is used, on the specifics of how the costs are represented, and on whether one is interested in heuristic accuracy, node expansions, or overall runtime savings.
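To make the modeling trade-off concrete, here is a small made-up example (not from the paper): a drive action whose cost depends on the truck's current load, contrasted with the constant-cost approximation that a state-independent model would have to use.

```python
# Toy state-dependent cost model (illustration only): driving cost grows with the
# truck's load, so a state-independent model must pick one constant (e.g. the cost
# at an "average" load), introducing a modeling error in every plan that deviates.

def drive_cost(state):
    """Cost of the 'drive' action as a function of the current state."""
    return 1 + 2 * state["load"]   # e.g. fuel consumption depends on the load

plan_states = [{"load": 0}, {"load": 3}, {"load": 1}]     # states in which 'drive' is applied
exact = sum(drive_cost(s) for s in plan_states)            # 1 + 7 + 3 = 11
approx = len(plan_states) * drive_cost({"load": 1})        # constant-cost model: 3 * 3 = 9
print(exact, approx)   # the constant model underestimates this plan's cost by 2
```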
15:20 | An Introduction to AnyBURL ABSTRACT. This paper is an extended abstract of an IJCAI 2019 paper. In it, we introduce AnyBURL, an anytime bottom-up algorithm for efficiently learning logical rules from large knowledge graphs, and apply it to the use case of knowledge graph completion. AnyBURL outperforms other rule-based approaches, is competitive with the current state of the art based on latent representations, and requires significantly fewer computational resources.
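AnyBURL learns Horn rules over a knowledge graph and applies them to predict missing triples. The toy sketch below (not AnyBURL's code; the rule, relations, and entities are invented) applies one such rule to a tiny graph to produce candidate completions.

```python
# Toy illustration of rule-based knowledge graph completion (not AnyBURL itself):
# apply the learned rule  speaks(X, Y) <- bornIn(X, Z), officialLanguage(Z, Y)
# to predict new 'speaks' triples.

kg = {
    ("anna",    "bornIn",           "austria"),
    ("austria", "officialLanguage", "german"),
    ("marco",   "bornIn",           "italy"),
    ("italy",   "officialLanguage", "italian"),
}

def apply_rule(kg):
    predicted = set()
    for x, r1, z in kg:
        if r1 != "bornIn":
            continue
        for z2, r2, y in kg:
            if r2 == "officialLanguage" and z2 == z:
                predicted.add((x, "speaks", y))
    return predicted - kg   # keep only genuinely new triples

print(apply_rule(kg))   # {('anna', 'speaks', 'german'), ('marco', 'speaks', 'italian')}
```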
15:25 | The Higher-Order Prover Leo-III ABSTRACT. Leo-III is an automated theorem prover for extensional type theory with Henkin semantics. It also provides automation for various non-classical logics; in particular, reasoning in almost every normal higher-order modal logic is supported. In this paper, the features of Leo-III are surveyed. This is an abstract of the paper of the same name accepted at the 9th International Joint Conference on Automated Reasoning (IJCAR 2018).
Annual Meeting of Members of the AI Chapter of the German Society for Informatics (GI-FBKI)