GCAI 2017 / 3rd Global Conference on Artificial Intelligence
PROGRAM FOR SATURDAY, OCTOBER 21ST


09:30-10:30 Session 8: Invited Talk (Machine Learning): van den Broeck
09:30
Open-World Probabilistic Databases

ABSTRACT. Large-scale probabilistic knowledge bases are becoming increasingly important in academia and industry alike. They are constantly extended with new data, powered by modern information extraction tools that associate probabilities with database tuples. In this talk, we revisit the semantics underlying such systems. In particular, the closed-world assumption of probabilistic databases, that facts not in the database have probability zero, clearly conflicts with their everyday use. To fix this discrepancy, we propose an open-world probabilistic databases semantics, which relaxes the probability of open facts to intervals. While still assuming a finite domain, this semantics can provide meaningful answers when some probabilities are not precisely known. For this open-world setting, we propose an efficient evaluation algorithm for unions of conjunctive queries. Our open-world algorithm incurs no overhead compared to closed-world reasoning and runs in time linear in the database size for tractable queries. Finally, we discuss limitations and additional knowledge-representation layers that can further strengthen open-world reasoning about big uncertain data, as well as connections to lifted probabilistic inference algorithms and statistical relational learning.
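The interval semantics described above can be sketched minimally, assuming independent tuples and a disjunctive (union-style) query. This is an illustration only, not the paper's algorithm: the threshold parameter `lam` (the maximum probability an open fact may take) and both function names are invented here.

```python
def or_independent(ps):
    """Probability that at least one of several independent facts holds."""
    result = 1.0
    for p in ps:
        result *= (1.0 - p)
    return 1.0 - result

def query_interval(known, open_count, lam):
    """Lower/upper probability bounds for a disjunctive query over
    `known` in-database fact probabilities plus `open_count` open
    (absent) facts, each of which may take any probability in [0, lam]."""
    lower = or_independent(known)                       # open facts at 0
    upper = or_independent(known + [lam] * open_count)  # open facts at lam
    return lower, upper
```

Setting `lam = 0` recovers the closed-world answer: both bounds collapse to the probability computed from the database tuples alone.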

10:30-11:00 Break
11:00-12:30 Session 9: Machine Learning
11:00
Object-sensitive Deep Reinforcement Learning
SPEAKER: unknown

ABSTRACT. Deep reinforcement learning has become popular in recent years, showing superiority on different visual-input tasks such as playing Atari games and robot navigation. Although objects are important image elements, little work considers enhancing deep reinforcement learning with object characteristics. In this paper, we propose a novel method that incorporates object recognition processing into deep reinforcement learning models. This approach can be adapted to any existing deep reinforcement learning framework. State-of-the-art results are shown in experiments on Atari games. We also propose a new approach called “object saliency maps” to visually explain the actions taken by deep reinforcement learning agents.
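The masking idea behind such saliency probes can be sketched generically (the paper's exact "object saliency map" construction may differ): mask each detected object's region and measure how much the agent's value estimate changes. All names and the toy value function below are illustrative.

```python
import numpy as np

def object_saliency(frame, objects, value_fn, fill=0.0):
    """For each named object region (r0, r1, c0, c1), mask it out and
    record the absolute change in the agent's value estimate."""
    base = value_fn(frame)
    saliency = {}
    for name, (r0, r1, c0, c1) in objects.items():
        masked = frame.copy()
        masked[r0:r1, c0:c1] = fill   # blank out the object's pixels
        saliency[name] = abs(base - value_fn(masked))
    return saliency
```

A large saliency value indicates that removing the object substantially changes the agent's evaluation of the state, i.e., the object matters for the chosen action.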

11:30
Anemic Status Prediction using Multi Layer Perceptron Neural Network Model
SPEAKER: unknown

ABSTRACT. The Artificial Neural Network (ANN) has been well recognized as an effective tool in medical science. Medical data is complex, and data is not currently readily available or trusted when attempting to review and assess the effectiveness of blood management in clinical practice. Reliable evidence is needed to selectively define areas that can be improved and to establish standard protocols across healthcare service lines to substantiate best practice in blood product utilization. The ANN is able to provide this evidence by using automatic learning techniques to mine the information hidden in medical data and draw conclusions.

Blood transfusions can be life-saving and are commonly used in complex surgical cases. However, allogeneic blood transfusions come with associated risks. In addition, the storage and distribution of allogeneic blood is costly. Anemia and clinical symptoms are currently used to determine whether a packed red blood cell transfusion is necessary. In this paper, we apply a multilayer perceptron neural network to predict the degree of post-operative anemia. Successful prediction of post-operative anemia may help inform medical practitioners whether there is a need for further packed red blood cell transfusion.
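As a generic illustration only (not the authors' model, features, or data): a one-hidden-layer perceptron trained by plain gradient descent on synthetic, invented inputs.

```python
import numpy as np

# Illustration only: synthetic data and invented stand-in "features";
# the paper's actual model and clinical data are not reproduced here.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))              # stand-in pre-operative features
y = (X[:, 0] - X[:, 1] < 0).astype(float)  # synthetic "anemic" label

W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):                      # gradient descent on log-loss
    h = np.tanh(X @ W1 + b1)               # hidden layer
    p = sigmoid(h @ W2 + b2).ravel()       # predicted probability
    g_out = (p - y)[:, None] / len(X)      # d(log-loss)/d(output logit)
    g_h = (g_out @ W2.T) * (1.0 - h ** 2)  # backprop through tanh
    W2 -= 0.5 * (h.T @ g_out); b2 -= 0.5 * g_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ g_h);   b1 -= 0.5 * g_h.sum(axis=0)

p = sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2).ravel()
accuracy = float(((p > 0.5) == y).mean())  # training accuracy
```

In practice one would of course use a held-out test set and a standard library implementation; the loop above only shows the mechanics of an MLP.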

12:00
Implementation of Incremental Learning in Artificial Neural Networks

ABSTRACT. Nowadays, the use of artificial neural networks (ANN), in particular the Multilayer Perceptron (MLP), is very popular for executing different tasks such as pattern recognition, data mining, and process automation. However, there are still weaknesses in these models when compared with human capabilities. A characteristic of human memory is the ability to learn new concepts without forgetting what we learned in the past, which has been a disadvantage in the field of artificial neural networks. How can we add new knowledge to the network without forgetting what has already been learned, and without repeating the exhaustive ANN training process? Exhaustive training uses a complete training set, with all objects of all classes.
  
In this work, we present a novel incremental learning algorithm for the MLP. New knowledge is incorporated into the target network without executing an exhaustive retraining. This knowledge consists of objects of a new class that was not included in the training of the source network. The algorithm consists of taking the final weights from the source network, correcting them with support vector machine tools, and transferring the obtained weights to a target network. This target network is then trained on a training set that has been preprocessed beforehand. The resulting efficiency of the target network is comparable with that of an exhaustively trained network.
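The SVM-based weight correction is specific to the paper and is not reproduced here. As a hedged illustration of the weight-reuse ingredient alone, one might initialize a target output layer from the source network's final weights and append a freshly initialized column for the new class; all shapes and names below are invented.

```python
import numpy as np

# Hypothetical sketch: reuse a trained output layer and expand it for a
# new class. The paper's SVM-based correction step is NOT shown here.
rng = np.random.default_rng(1)
W_out_src = rng.normal(size=(8, 3))   # source net: 8 hidden units, 3 classes

# Keep the learned columns; add a small-random column for the new class.
new_col = rng.normal(scale=0.01, size=(8, 1))
W_out_tgt = np.hstack([W_out_src, new_col])
```

The target network would then be fine-tuned on the preprocessed training set rather than retrained from scratch.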

12:30-14:00 Lunch
14:00-16:00 Session 10: Intelligent Search and Heuristics
14:00
Improved Heuristic for Manipulation of Second-order Copeland Elections
SPEAKER: unknown

ABSTRACT. The second-order Copeland voting scheme is NP-complete to manipulate even if a manipulator has perfect information about the preferences of other voters in an election. A recent work proposes a branch-and-bound heuristic for the manipulation of second-order Copeland elections, and shows that there are instances of elections that may be manipulated using this heuristic. However, the performance of the heuristic degrades for a fairly large number of candidates. We show that this heuristic is exponential in the number of candidates in an election, and propose an improved heuristic that extends the previous work. Our improved heuristic is based on a randomization technique and is shown to be polynomial in the number of candidates. We also account for the number of samples required for a given accuracy, and for the probability of missing the accurate value of the number of manipulations in an election.
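The sample-complexity flavor of such a randomized heuristic can be illustrated with a standard Hoeffding bound. This is a generic sketch, not the authors' algorithm: the predicate and sampler passed in are stand-ins for "does this sampled manipulation attempt succeed".

```python
import math
import random

def hoeffding_samples(eps, delta):
    """Samples needed so a Monte Carlo estimate of a fraction is within
    eps of the true value with probability at least 1 - delta."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps * eps))

def estimate_fraction(predicate, sampler, eps=0.05, delta=0.05, seed=0):
    """Estimate the fraction of sampled outcomes satisfying `predicate`,
    instead of enumerating them exhaustively (exponential in general)."""
    rng = random.Random(seed)
    n = hoeffding_samples(eps, delta)
    hits = sum(predicate(sampler(rng)) for _ in range(n))
    return hits / n
```

The bound makes the accuracy/failure-probability trade-off explicit: halving `eps` quadruples the number of samples, while shrinking `delta` costs only logarithmically more.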

14:30
Enhanced Simplified Memory-bounded A Star (SMA*+)
SPEAKER: unknown

ABSTRACT. In 1992, Stuart Russell briefly introduced a series of memory-efficient optimal search algorithms, among which is the Simplified Memory-bounded A Star (SMA*) algorithm, unique for its explicit memory bound. Despite progress on memory-efficient A Star variants, search algorithms with explicit memory bounds have seen little development, and SMA* remains the premier memory-bounded optimal search algorithm. In this paper, we present an enhanced version of SMA* (SMA*+), providing a new open list, a simplified implementation, and a culling heuristic function that improves search performance through a priori knowledge of the search space. We present benchmark and comparison results with state-of-the-art optimal search algorithms, and examine the performance characteristics of SMA*+.
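SMA*'s defining mechanic is forgetting the worst open node when the memory bound is reached (backing its f-value up into its parent so it can be regenerated later; this toy omits the back-up). A minimal, hypothetical sketch of such a bounded open list:

```python
class BoundedOpenList:
    """Toy open list with an explicit capacity: when full, the node with
    the worst f-value is forgotten. Illustrative only; not SMA*+'s
    actual open-list data structure."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []      # (f, insertion counter, node), kept sorted
        self.counter = 0     # tie-breaker so nodes are never compared

    def push(self, f, node):
        self.counter += 1
        self.items.append((f, self.counter, node))
        self.items.sort()
        if len(self.items) > self.capacity:
            self.items.pop()         # forget the worst-f node

    def pop_best(self):
        return self.items.pop(0)[2]  # node with the smallest f-value
```

A production implementation would use a structure with cheaper insertion than repeated sorting; the point here is only the explicit memory bound.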

15:00
Automated Invention of Strategies and Term Orderings for Vampire
SPEAKER: unknown

ABSTRACT. In this work we significantly increase the performance of the Vampire and E automated theorem provers (ATPs) on a set of loop theory problems. This is done by developing EmpireTune, an AI system that automatically invents targeted search strategies for Vampire and E. EmpireTune extends previous strategy invention systems in several ways. We have developed support for the Vampire prover, further extended Vampire with new mechanisms for specifying term orderings, and EmpireTune can now automatically invent suitable term orderings for classes of problems. We describe the motivation behind these additions and their implementation in Vampire and EmpireTune, and evaluate the systems with very good results on the AIM (loop theory) ATP benchmark.

15:30
Improving SAT Solver Performance with Structure-based Preferential Bumping
SPEAKER: unknown

ABSTRACT. We introduce a method we call structure-based preferential VSIDS bumping as a low-cost way to exploit formula structure. We show that the Glucose SAT solver, when modified with preferential bumping of certain easily identified, structurally important variables, outperforms unmodified Glucose on the industrial formulas from recent SAT solver competitions.
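As a toy illustration of the general idea (not Glucose's actual code): VSIDS maintains an activity score per variable and "bumps" variables involved in conflicts; preferential bumping scales the increment for variables flagged as structurally important. The flagging criterion and the bonus factor below are placeholders.

```python
def bump(activity, conflict_vars, important, bonus=1.5, inc=1.0):
    """Increase activity of variables in a conflict clause; variables in
    the `important` set (structurally important in the formula) receive
    a larger increment. Placeholder sketch, not Glucose's heuristic."""
    for v in conflict_vars:
        step = inc * bonus if v in important else inc
        activity[v] = activity.get(v, 0.0) + step
    return activity
```

The solver then prefers high-activity variables when picking decision literals, so the flagged variables are branched on earlier.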