AAAI-13: THE TWENTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE

PROGRAM FOR THURSDAY, JULY 18TH, 2013



09:00-10:00 Session 31: Invited talk: Matarić (AAAI)
09:00
AAAI-13 Invited Talk: Socially Assistive Robotics: Human-Robot Interaction Methods for Creating Robots That Care
SPEAKER: Maja Matarić

ABSTRACT. Socially assistive robotics (SAR) is a new field of intelligent robotics that focuses on developing machines capable of assisting users through social rather than physical interaction. The robot's physical embodiment is at the heart of SAR's effectiveness, as it leverages the inherently human tendency to engage with lifelike (but not necessarily humanlike or otherwise biomimetic) social behavior. People readily ascribe intention, personality, and emotion to robots; SAR leverages this engagement stemming from non-contact social interaction involving speech, gesture, movement demonstration and imitation, and encouragement, to develop robots capable of monitoring, motivating, and sustaining user activities and improving human learning, training, performance and health outcomes. Human-robot interaction (HRI) for SAR is a growing multifaceted research area at the intersection of engineering, health sciences, neuroscience, social, and cognitive sciences. This talk will describe our research into embodiment, modeling and steering social dynamics, and long-term user adaptation for SAR. The research will be grounded in projects involving analysis of multi-modal activity data, modeling personality and engagement, formalizing social use of space and non-verbal communication, and personalizing the interaction with the user over a period of months, among others. The presented methods and algorithms will be validated on implemented SAR systems evaluated by human subject cohorts from a variety of user populations, including stroke patients, children with autism spectrum disorder, and elderly people with Alzheimer's and other forms of dementia.

10:00-10:20 Coffee Break
10:20-11:20 Session 32A: Privacy and Social Media (AAAI)
10:20
Social Rankings in Human-Computer Committees
SPEAKER: unknown

ABSTRACT. Despite committees and elections being widespread in the real world, the design of agents for operating in human-computer committees has received far less attention than the theoretical analysis of voting strategies. We address this gap by providing an agent design that outperforms other voters in groups comprising both people and computer agents. In our setting participants vote by simultaneously submitting a ranking over a set of candidates, and the election system uses a social welfare rule to select a ranking that minimizes disagreements with participants' votes. We ran an extensive study in which hundreds of people participated in repeated voting rounds with other people as well as computer agents that differed in how they employ strategic reasoning in their voting behavior. Our results show that over time, people learn to deviate from truthful voting strategies and use heuristics to guide their play, such as repeating their vote from the previous round. We show that a computer agent using a best-response voting strategy was able to outperform people in the game. Our study has implications for agent designers, highlighting the types of strategies that enable agents to succeed in committees comprising both human and computer participants. This is the first work to study the role of computer agents in voting settings involving both human and agent participants.

10:35
From Interest to Function: Location Estimation in Social Media
SPEAKER: unknown

ABSTRACT. Recent years have witnessed the tremendous development of social media, which attracts a vast number of Internet users. The high-dimensional content generated by these users provides a unique opportunity to understand their behavior deeply. As one of the most fundamental topics, location estimation attracts more and more research effort. Departing from the previous literature, we find that user location is strongly related to user interest. Based on this, we first build a detection model to mine user interest from short text; we then establish a mapping between location function and user interest, and finally present an efficient model to predict user location with convincing fidelity. Thorough evaluations and comparisons on an authentic data set show that our model greatly outperforms previous approaches, and its high efficiency also guarantees its applicability in real-world scenarios.

10:50
Automated Workflow Synthesis
SPEAKER: unknown

ABSTRACT. By coordinating efforts from humans and machines, human computation systems can solve problems that machines cannot tackle alone. A general challenge is to design efficient human computation algorithms or workflows with which to coordinate the work of the crowd. We explore automated workflow synthesis aimed at ideally harnessing human efforts by learning about the crowd's performance on tasks and synthesizing an optimal workflow for solving a problem. We present experimental results for human sorting tasks, which demonstrate both the benefit of understanding and optimizing the structure of workflows based on observations. Results also demonstrate the benefits of using value of information to guide experiments for identifying efficient workflows with fewer experiments.

11:05
Search More, Disclose Less
SPEAKER: unknown

ABSTRACT. Experienced shoppers know that the best way to make sure you are getting the best deal for your money is to comparison shop before making a purchase. In today's online world, comparison shopping can be substantially facilitated through the use of comparison shopping agents (CSAs). These web-based intelligent software applications allow checking many online stores' prices, saving buyers time and money. The blooming of CSAs in recent years enables buyers in today's markets to query more than a single CSA per search, thus substantially expanding the list of sellers whose prices they obtain. From the individual CSA's point of view, however, multi-CSA querying is decidedly unfavorable, as most of today's CSAs' revenue depends on payments they receive from sellers upon transferring buyers to their websites (and making a purchase). The most straightforward way for a CSA to improve its competitiveness is to spend more resources on obtaining more sellers' prices, potentially resulting in a more attractive best price. In this paper we suggest a complementary approach that improves the attractiveness of the best price returned to the buyer without having to extend the CSA's price database. The approach, which we term ``selective price disclosure'', relies on removing some of the prices known to the CSA from the list of results returned to the buyer. The advantage of this approach is its ability to affect the buyer's beliefs regarding the probability of obtaining more attractive prices by querying additional CSAs. The paper presents two methods for choosing the subset of prices to be presented to a fully rational buyer, attempting to overcome the computational complexity associated with evaluating all possible subsets. The effectiveness and efficiency of the methods are demonstrated using real data, collected from five CSAs for four products.
Furthermore, since people are known to be inherently boundedly rational, the two methods are also evaluated with human buyers, demonstrating that selective price disclosure can be highly effective with people; however, the subset of prices to be used should be extracted in a different (and simpler) manner.

10:20-11:20 Session 32B: Mechanism Design and Aggregation (AAAI)
10:20
Dynamic Social Choice with Evolving Preferences
SPEAKER: unknown

ABSTRACT. Social choice theory provides insights into a variety of collective decision making settings, but nowadays some of its tenets are challenged by Internet environments, which call for dynamic decision making under constantly changing preferences. In this paper we model the problem via Markov decision processes (MDP), where the states of the MDP coincide with preference profiles and a (deterministic, stationary) policy corresponds to a social choice function. We can therefore employ the axioms studied in the social choice literature as guidelines in the design of socially desirable policies. We present tractable algorithms that compute optimal policies under different prominent social choice constraints. Our machinery relies on techniques for exploiting symmetries and isomorphisms between MDPs.
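As an illustrative sketch (not the paper's construction), a tiny MDP whose states are simplified preference profiles — here, just each voter's top choice — can be solved by standard value iteration; the greedy policy it induces plays the role of a social choice function. All numbers and dynamics below are invented for illustration:

```python
import itertools

candidates = ["a", "b"]
# States are simplified preference profiles: each of 2 voters' top choice.
states = list(itertools.product(candidates, repeat=2))

def reward(state, action):
    # Agreement reward: number of voters whose top choice gets selected.
    return sum(1 for top in state if top == action)

def transition(state, action):
    # Invented dynamics: each voter drifts to the chosen candidate with
    # probability 0.7 and keeps her current preference with probability 0.3.
    probs = {}
    for flips in itertools.product([True, False], repeat=len(state)):
        nxt = tuple(action if f else top for f, top in zip(flips, state))
        p = 1.0
        for f in flips:
            p *= 0.7 if f else 0.3
        probs[nxt] = probs.get(nxt, 0.0) + p
    return probs

def value_iteration(gamma=0.9, iters=200):
    V = {s: 0.0 for s in states}
    q = lambda s, a, V: reward(s, a) + gamma * sum(
        p * V[n] for n, p in transition(s, a).items())
    for _ in range(iters):
        V = {s: max(q(s, a, V) for a in candidates) for s in states}
    # The greedy policy maps profiles to winners: a social choice function.
    policy = {s: max(candidates, key=lambda a: q(s, a, V)) for s in states}
    return V, policy
```

In this toy, the induced policy elects the unanimous favorite whenever one exists, which is the kind of axiomatic property the paper imposes as a constraint on the optimal policy.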

10:35
A Framework for Aggregating Influenced CP-nets and its Resistance to Bribery
SPEAKER: unknown

ABSTRACT. We consider multi-agent settings where a set of agents want to take a collective decision, based on their preferences over the possible candidate options. While agents have their initial inclination, they may interact and influence each other, and therefore modify their preferences, until hopefully they reach a stable state and declare their final inclination. At that point, a voting rule is used to aggregate the agents' preferences and generate the collective decision. Recent work has modeled the influence phenomenon in the case of voting over a single issue. Here we generalize this model to account for preferences over combinatorially structured domains including several issues. We propose a way to model influence when agents express their preferences as CP-nets. We define two procedures for aggregating preferences in this scenario, by interleaving voting and influence convergence, and we study their resistance to bribery.

10:50
Bundling Attacks in Judgment Aggregation
SPEAKER: unknown

ABSTRACT. We consider judgment aggregation over multiple independent issues, where the chairperson has her own opinion, and can try to bias the outcome by bundling several issues together. Since for each bundle judges must give a uniform answer on all issues, different partitions of the issues may result in an outcome that is significantly different than the ``true'', issue-wise, decision.

We prove that the bundling problem faced by the chairperson, i.e. trying to bias the outcome towards her own opinion, is computationally difficult in the worst case. Then we study the probability that an effective bundling attack exists as the disparity between the opinions of the judges and the chair varies. We show that if every judge initially agrees with the chair on every issue with probability at least 1/2, then there is almost always a bundling attack (i.e. a partition) under which the opinion of the chair on all issues is approved. Moreover, such a partition can be found efficiently. In contrast, when the probability is lower than 1/2, the chair cannot force her opinion even on a single issue.
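To make the bundling effect concrete, here is a toy sketch under one simple interpretation of the setting (an assumption, not necessarily the paper's exact model): each judge answers a bundle with her majority view on its issues, and bundle outcomes are decided by majority across judges. The judge profile is invented; it shows a single-bundle partition of three issues flipping the issue-wise outcome toward a chair who favors acceptance everywhere:

```python
def majority(bits):
    # 1 iff a strict majority of the bits are 1
    return 1 if 2 * sum(bits) > len(bits) else 0

def issue_wise(profiles):
    # issue-by-issue majority vote across judges
    n = len(profiles[0])
    return [majority([p[i] for p in profiles]) for i in range(n)]

def bundled(profiles, partition):
    # each judge answers a bundle with her majority view on its issues
    out = [None] * len(profiles[0])
    for bundle in partition:
        answer = majority([majority([p[i] for i in bundle]) for p in profiles])
        for i in bundle:
            out[i] = answer
    return out

judges = [(1, 1, 0), (1, 1, 0), (0, 0, 1)]
print(issue_wise(judges))            # issue-wise outcome: [1, 1, 0]
print(bundled(judges, [[0, 1, 2]]))  # one big bundle flips issue 2: [1, 1, 1]
```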

11:05
On the Social Welfare of Mechanisms for Repeated Batch Matching
SPEAKER: unknown

ABSTRACT. We study hybrid online-batch matching problems, where agents arrive continuously, but are only matched in periodic rounds, when many of them can be considered simultaneously. Agents not getting matched in a given round remain in the market for the next round. This setting models several scenarios of interest, including many job markets as well as kidney exchange mechanisms. We consider the social utility of two commonly used mechanisms for such markets: one that aims for stability in each round (greedy), and one that attempts to maximize social utility in each round (max-weight). Surprisingly, we find that in the long term, the social utility of the greedy mechanism can be higher than that of the max-weight mechanism. We hypothesize that this is because the greedy mechanism behaves similarly to a soft threshold mechanism, where all connections below a certain threshold are rejected by the participants in favor of waiting until the next round. Motivated by this observation, we propose a method to approximately calculate the optimal threshold for an individual agent to use based on characteristics of the other agents participating, and demonstrate experimentally that social utility is high when all agents use this strategy. Thresholding can also be applied by the mechanism itself to improve social welfare; we demonstrate this with an example on graphs that model pairwise kidney exchange.
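The soft-threshold intuition can be sketched as follows (a simplification, not the paper's mechanism or its optimal threshold computation): within a round, match greedily by weight, but reject any pair whose weight falls below a threshold, leaving those agents to wait for the next round:

```python
def greedy_threshold_matching(edges, threshold):
    """Match pairs greedily by weight, but reject pairs whose value is below
    the threshold; rejected agents stay in the market for the next round."""
    matched, result = set(), []
    for u, v, w in sorted(edges, key=lambda e: -e[2]):
        if w >= threshold and u not in matched and v not in matched:
            matched.update((u, v))
            result.append((u, v, w))
    return result

round_edges = [("a", "b", 5), ("a", "c", 2), ("b", "d", 1)]
print(greedy_threshold_matching(round_edges, 3))  # [('a', 'b', 5)]
```

With threshold 3, agents c and d remain unmatched and would carry over to the next round, where new arrivals may offer them better connections.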

10:20-11:20 Session 32C: Multi-* Machine Learning (AAAI)
10:20
Multi-Label Learning with PRO Loss
SPEAKER: unknown

ABSTRACT. Multi-label learning approaches assign multiple labels to one object. However, besides differentiating relevant labels from irrelevant ones, it is often required in real applications to rank the relevant labels for an object, while the ranking of irrelevant labels is not so valuable. Such a requirement, however, cannot be well satisfied by existing approaches. One crucial reason is that most approaches were designed to optimize existing criteria, yet there is no criterion which encodes the above requirement. In this paper, we design a new criterion, PRO Loss, which concerns the prediction on all labels as well as the ranking of relevant labels, while neglecting the ranking of irrelevant ones. We then propose the ProSVM approach, which optimizes PRO Loss efficiently using the alternating direction method of multipliers. We further improve efficiency with an upper approximation that reduces the number of constraints from O(T^2) to O(T), where T is the number of labels. Experiments show that our proposals are not only superior to state-of-the-art approaches on PRO Loss, but also highly competitive on existing evaluation criteria.

10:35
Supervised Coupled Dictionary Learning with Group Structures for Multi-modal Retrieval
SPEAKER: unknown

ABSTRACT. A better similarity mapping function across heterogeneous high-dimensional features is very desirable for many applications involving multi-modal data. In this paper, we introduce coupled dictionary learning (DL) into supervised sparse coding for multi-modal retrieval. We call this Supervised coupled-dictionary learning with group structures for Multi-Modal retrieval (SliM^2). SliM^2 formulates multi-modal metric learning as a constrained dictionary learning problem. By exploiting DL's intrinsic power to handle heterogeneous features, SliM^2 first extends uni-modal DL to multi-modal DL. Moreover, SliM^2 employs label information to discover the structure shared within each modality among samples of the same class, via a mixed norm (i.e., the l_1/l_2 norm). Finally, multi-modal retrieval is conducted via a jointly learned mapping function across the multi-modal data. The experimental results show the effectiveness of our proposed model when applied to multi-modal retrieval.

10:50
Convex Subspace Representation Learning from Multi-view Data
SPEAKER: Yuhong Guo

ABSTRACT. Learning from multi-view data is important in many applications. In this paper, we propose a novel convex subspace representation learning method for multi-view clustering. We first formulate the subspace learning in multiple views as one joint optimization problem with a common subspace representation matrix and a group sparsity inducing norm. By exploiting the properties of dual norms, we then show a convex min-max dual formulation with a sparsity inducing trace norm can be obtained. We develop a proximal bundle optimization algorithm to globally solve the min-max optimization problem. Our empirical study shows the proposed subspace representation learning can effectively facilitate the multi-view clustering task and outperform alternative multi-view clustering methods across different scenarios.

11:05
AAAI-13 Outstanding Paper Award: SMILe: Shuffled Multiple-Instance Learning
SPEAKER: unknown

ABSTRACT. Resampling techniques such as bagging are often used in supervised learning to produce more accurate classifiers. In this work, we show that multiple-instance learning admits a different form of resampling, which we call "shuffling." In shuffling, we resample instances in such a way that the resulting bags are likely to be correctly labeled. We show that resampling results in both a reduction of bag label noise and a propagation of additional informative constraints to a multiple-instance classifier. We empirically evaluate shuffling in the context of multiple-instance classification and multiple-instance active learning and show that the approach leads to significant improvements in accuracy.

10:20-11:20 Session 32D: Temporal Reasoning (AAAI)
10:20
Timelines with Temporal Uncertainty
SPEAKER: unknown

ABSTRACT. Timelines are a formalism to model planning domains where the temporal aspects are predominant, and have been used in many real-world applications. Despite their practical success, a major limitation is the inability to model temporal uncertainty, i.e. the fact that the plan executor cannot decide the actual duration of some activities.

In this paper we make two key contributions. First, we propose a comprehensive, semantically well founded framework that (conservatively) extends with temporal uncertainty the state of the art timeline approach.

Second, we focus on the problem of producing time-triggered plans that are robust with respect to temporal uncertainty, under a bounded horizon. In this setting, we present the first complete algorithm, and we show how it can be made practical by leveraging the power of Satisfiability Modulo Theories.

10:35
Decoupling the Multiagent Disjunctive Temporal Problem
SPEAKER: unknown

ABSTRACT. The Multiagent Disjunctive Temporal Problem (MaDTP) is a general constraint-based formulation for scheduling problems that involve interdependent agents. Decoupling agents' interdependent scheduling problems, so that each agent can manage its schedule independently, requires agents to adopt more restrictive local constraints that effectively subsume their interdependencies. In this paper, we present the first algorithms for decoupling MaDTPs. Our distributed algorithm is provably sound and complete. Our experiments demonstrate that the relative efficiency of finding a temporal decoupling improves with the interconnectedness between agents' schedules, leading to orders of magnitude speedup over algorithms that find complete solution spaces for MaDTPs. However, decoupling by its nature restricts agents' local scheduling flexibility; we define novel flexibility metrics for decoupled MaDTPs, and show empirically how the flexibility sacrificed depends on the degree of coupling between agents' schedules.
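A minimal example of temporal decoupling, far simpler than a full MaDTP: for a single inter-agent difference constraint, replace the shared constraint with a local deadline for one agent and a local release time for the other, so that any pair of locally feasible schedules is jointly feasible. The windows and the midpoint split rule below are illustrative assumptions:

```python
def temporal_decoupling(a_end, b_start, min_gap):
    """Replace the inter-agent constraint b_start - a_end >= min_gap with
    local bounds: a deadline for agent A and a release time for agent B.
    a_end and b_start are (lo, hi) windows; returns (a_deadline, b_release)
    or None if no decoupling exists."""
    a_lo, a_hi = a_end
    b_lo, b_hi = b_start
    lo = a_lo                        # earliest A could possibly end
    hi = min(a_hi, b_hi - min_gap)   # latest A may end with B still schedulable
    if lo > hi:
        return None
    a_deadline = (lo + hi) / 2       # midpoint split: balances the agents' slack
    b_release = max(b_lo, a_deadline + min_gap)
    return a_deadline, b_release

print(temporal_decoupling((0, 10), (5, 20), 2))  # (5.0, 7.0)
```

The flexibility trade-off the abstract mentions is visible even here: tightening A's deadline to decouple shrinks A's feasible window, which is exactly the sacrifice the proposed metrics quantify.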

10:50
Temporal Milestones in HTNs
SPEAKER: Brett Benyo

ABSTRACT. We present temporal milestones for hierarchical task networks to enable the complex synchronization of tasks. A temporal milestone of a task is an intermediate event that occurs during the execution of a complex task, e.g., the start time, the end time, or a milestone of any of its subtasks. Unlike landmark variables, introduced in existing work, temporal milestones respect the task abstraction boundaries and preserve structural properties, enabling much more efficient reasoning. Furthermore, temporal milestones are as expressive as landmark variables. We provide analytical and empirical evidence to support these claims.

11:05
Simple Temporal Problems with Taboo Regions
SPEAKER: unknown

ABSTRACT. In this paper, we define and study the general framework of Simple Temporal Problems with Taboo regions (STPTs), and we show how these problems capture metric temporal reasoning aspects which are common to many real-world applications. STPTs encode simple temporal constraints between events and user-defined taboo regions on the timeline during which no event is allowed to execute. We discuss two different variants of STPTs. The first one deals with instantaneous events, while the second one allows for processes with flexible durations. We also provide polynomial-time algorithms for solving them. When not all events or processes can be scheduled outside of the taboo regions, one needs to define and reason about "soft" STPTs. We show that even "soft" STPTs can be solved in polynomial time, using reductions to max-flow. This also allows for incremental computation, which is central to the successful application of our approach in real-time domains.
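For the instantaneous-event variant, the core feasibility check can be sketched with a simple interval sweep (an illustration only, not the paper's algorithms or its max-flow reduction for the soft case): find the earliest time in an event's window that avoids every taboo region:

```python
def earliest_feasible(lo, hi, taboos):
    """Earliest time in [lo, hi] for an instantaneous event that avoids every
    half-open taboo interval [ts, te); returns None if no such time exists."""
    t = lo
    for ts, te in sorted(taboos):
        if ts <= t < te:   # t lands inside a taboo region: jump past it
            t = te
    return t if t <= hi else None

print(earliest_feasible(2, 10, [(0, 5), (3, 8)]))  # 8
print(earliest_feasible(2, 7, [(0, 5), (3, 8)]))   # None
```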

10:20-11:20 Session 32E: MDPs and Sequential Processes (AAAI)
10:20
Agent Cooperatives for Effective Power Consumption Shifting
SPEAKER: unknown

ABSTRACT. In this paper, we present a directly applicable scheme for electricity consumption shifting and effective demand curve flattening. The scheme can employ the services of either individual or cooperating consumer agents alike. Agents participating in the scheme, however, are motivated to form cooperatives, in order to reduce their electricity bills via lower group prices granted for sizable consumption shifting from high to low demand time intervals. The scheme takes into account individual costs, and uses a strictly proper scoring rule to reward contributors according to efficiency. Cooperative members, in particular, can attain variable reduced electricity price rates, given their different load shifting capabilities. This allows even agents with initially forbidding shifting costs to participate in the scheme, and is achieved by a truthful and (weakly) budget-balanced reward sharing mechanism. We provide four variants of this approach, and evaluate it experimentally.

10:35
PAC Optimal Planning for Invasive Species Management: Improved Exploration for Reinforcement Learning from Simulator-Defined MDPs
SPEAKER: unknown

ABSTRACT. Often the most practical way to define a Markov Decision Process (MDP) in sustainability problems is in the form of a simulator that, given a state and an action, produces a resulting state and immediate reward sampled from the corresponding distributions. Simulators in natural resource management can be very expensive to execute, so that the time required to solve an MDP is dominated by the number of calls to the simulator. This paper presents an algorithm, DDV, that combines improved confidence intervals on the Q values (as in interval estimation) with a novel upper bound on the discounted state occupancy probabilities to intelligently choose state-action pairs to explore. We prove that this algorithm terminates with a policy whose value is within $\epsilon$ of the optimal policy (with probability $1-\delta$) after making only polynomially-many calls to the simulator. Experiments on benchmark MDPs and on an MDP for invasive species management show 3- to 5-fold reductions in the number of simulator calls required compared to the best existing algorithms.

10:50
Online Optimization with Dynamic Temporal Uncertainty: Incorporating Short Term Predictions for Renewable Integration in Intelligent Energy Systems
SPEAKER: unknown

ABSTRACT. Growing environmental awareness and government directives have set the stage for an increase in the fraction of electricity supplied using intermittent renewable sources such as solar and wind. To compensate for the increased variability in supply and demand, we need algorithms for online energy resource allocation under temporal uncertainty of future consumption and availability. Recent advances in prediction algorithms offer hope that a reduction in future uncertainty, through short term predictions, will increase the value of renewables. Predictive information is then revealed incrementally in an online manner, leading to what we call dynamic temporal uncertainty. We demonstrate the non-triviality of this problem and provide online algorithms, both randomized and deterministic, to handle time varying uncertainty in future rewards for non-stationary MDPs in general and for energy resource allocation in particular. We derive theoretical upper and lower bounds and establish that, in the deterministic case, discounting future rewards can be used as a strategy to maximize the total undiscounted reward. Finally, we demonstrate the effectiveness of the algorithm on wind and demand traces.

11:05
A Tractable Leader-Follower MDP Model for Animal Disease Management
SPEAKER: unknown

ABSTRACT. Sustainable animal disease management requires designing and implementing control policies at the regional scale. However, for diseases which are not regulated, individual farmers are responsible for the adoption and successful application of the control policies at the farm scale. Organisations (groups of farmers, health institutions...) may try to influence farmers' control actions through financial incentives, in order to ensure sustainable (from the health and economic points of view) disease management policies. Economics / Operations Research frameworks have been proposed for modelling the effect of incentives on agents. The Leader-Follower Markov Decision Processes (LF-MDP) framework is one such framework, combining the Markov Decision Process (MDP) and stochastic games frameworks. However, since finding equilibrium strategies in stochastic games is hard when the number of players is large, LF-MDP problems are intractable.

Our contribution, in this article, is to propose a tractable model of the animal disease management problem. The tractable model is obtained through a few simple modelling approximations which are acceptable when the problem is viewed from the organisation side. As a result, we design a polynomial-time algorithm for animal disease management, which we evaluate on a case study inspired by the problem of controlling the spread of the Porcine Reproductive and Respiratory Syndrome (PRRS).

10:20-11:20 Session 32F: Spotlights and Senior Members (AAAI)
10:20
Meeting the Responsibility to Explain AI
SPEAKER: unknown

ABSTRACT. Every scientist and every scientific society has a responsibility to explain goals and methods and convey research results to the lay public, with additional importance attached to educating young students in science, technology, engineering, and mathematics. AITopics was set up to meet this responsibility for the AAAI and has been in operation for over ten years. We will describe some of the technical problems we face and describe opportunities for new technological development.

10:50
ICAPS: Best paper
SPEAKER: Ron Petrick

ABSTRACT. Ronald P. A. Petrick, Mary Ellen Foster

Novel Applications track Best Paper: Planning for Social Interaction in a Robot Bartender Domain

A robot coexisting with humans must not only be able to perform physical tasks, but must also be able to interact with humans in a socially appropriate manner. In many social settings, this involves the use of social signals like gaze, facial expression, and language. We describe an application of planning to task-based social interaction using a robot that must interact with multiple human agents in a simple bartending domain. Social states are inferred from low-level sensors, using vision and speech as input modalities. The knowledge-level PKS planner is used to construct plans with task, dialogue, and social actions, as an alternative to current mainstream approaches of interaction management. The resulting system has been tested with human users in a series of drink ordering scenarios.

11:05
ICAPS: What's Hot
SPEAKER: unknown

11:20-12:20 Session 33A: NLP Generation and Translation (AAAI)
11:20
Story Generation with Crowdsourced Plot Graphs
SPEAKER: unknown

ABSTRACT. Story generation is the problem of automatically selecting a sequence of events that meet a set of criteria and can be told as a story. Story generation is knowledge-intensive; traditional story generators rely on a priori defined domain models about fictional worlds, including characters, places, and actions that can be performed. Manually authoring the domain models is costly and thus not scalable. We present a novel class of story generation system that can generate stories in an unknown domain. Our system (a) automatically learns a domain model by crowdsourcing a corpus of narrative examples and (b) generates stories by sampling from the space defined by the domain model. A large-scale evaluation shows that stories generated by our system for a previously unknown topic are comparable in quality to simple stories authored by untrained humans.
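Sampling a story from a plot graph can be sketched as repeatedly picking a random event whose predecessors have all occurred (a minimal illustration; the events and precedence pairs below are invented, whereas the actual system learns them from crowdsourced narrative examples):

```python
import random

def sample_story(events, precedes, seed=0):
    """Sample a story as a random topological order of a plot graph.
    precedes is a set of (a, b) pairs: event a must occur before event b."""
    rng = random.Random(seed)
    remaining, story = set(events), []
    while remaining:
        ready = sorted(e for e in remaining
                       if all(a not in remaining for a, b in precedes if b == e))
        ev = rng.choice(ready)
        story.append(ev)
        remaining.remove(ev)
    return story

# A linear plot graph has exactly one valid telling:
chain = {("enter", "order"), ("order", "pay"), ("pay", "leave")}
print(sample_story(["enter", "order", "pay", "leave"], chain))
```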

11:35
Enforcing Meter in Finite-Length Markov Sequences
SPEAKER: unknown

ABSTRACT. Markov processes are increasingly used to generate finite-length sequences for content generation applications (such as text or music). However, Markov processes are notoriously difficult to control. Recently, Markov Constraints have been introduced in order to give users some control over generated sequences. Markov Constraints reformulate finite-length Markov sequence generation in the framework of constraint satisfaction (CSP). However, in practice, this approach is limited to local user constraints and its performance is low for global user constraints, such as cardinality or arithmetic constraints. In this article, we introduce Markov Meter, a global constraint on sequences that ensures that 1) a sequence is Markovian with regard to a given corpus and 2) the sequence follows metrical rules expressed as cumulative cost functions. Additionally, Markov Meter constraints can simultaneously enforce cardinality constraints. We propose a filtering procedure for this constraint whose complexity is pseudopolynomial. This result is obtained by exploiting a theorem in Additive Number Theory by Nathanson. We illustrate our constraint on meter-constrained text and music generation problems that no other technique has so far addressed.
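The flavor of a meter constraint — a cumulative cost that must hit an exact total over a Markovian sequence — can be sketched with a pseudopolynomial dynamic program over (word, position, remaining budget). The bigram table and syllable costs below are invented, and this is a toy search, not the paper's filtering procedure:

```python
from functools import lru_cache

def metered_sequence(bigrams, cost, start, length, budget):
    """Find a `length`-word sequence that follows the bigram table and whose
    per-word costs (e.g. syllable counts) sum exactly to `budget`.
    DP over (word, position, remaining budget): pseudopolynomial in `budget`."""
    @lru_cache(maxsize=None)
    def extend(word, pos, remaining):
        if pos == length:
            return () if remaining == 0 else None
        for nxt in bigrams.get(word, ()):
            if cost[nxt] <= remaining:
                tail = extend(nxt, pos + 1, remaining - cost[nxt])
                if tail is not None:
                    return (nxt,) + tail
        return None

    tail = extend(start, 1, budget - cost[start])
    return None if tail is None else (start,) + tail

bigrams = {"the": ["cat", "elephant"], "cat": ["sat"], "elephant": ["sat"]}
cost = {"the": 1, "cat": 1, "elephant": 3, "sat": 1}
print(metered_sequence(bigrams, cost, "the", 3, 3))  # ('the', 'cat', 'sat')
print(metered_sequence(bigrams, cost, "the", 3, 5))  # ('the', 'elephant', 'sat')
```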

11:50
Generating Natural-Language Video Descriptions Using Text-Mined Knowledge
SPEAKER: unknown

ABSTRACT. We present a holistic data-driven technique that generates natural-language descriptions for videos. We combine the output of state-of-the-art object and activity detectors with “real-world” knowledge to select the most probable subject-verb-object triplet for describing a video. We show that this knowledge, automatically mined from web-scale text corpora, enhances the triplet selection algorithm by providing it contextual information and leads to a four-fold increase in activity identification. Unlike previous methods, our approach can annotate arbitrary videos without requiring the expensive collection and annotation of a similar training video corpus. We evaluate our technique against a baseline that does not use text-mined knowledge and show that humans prefer our descriptions to those generated using purely visual detections.

12:05
A Topic-Based Coherence Model for Statistical Machine Translation
SPEAKER: unknown

ABSTRACT. Coherence that ties sentences of a text into a meaningfully connected structure is of great importance to text generation and translation. In this paper, we propose a topic-based coherence model to produce coherence for document translation, in terms of the continuity of sentence topics in a text. We automatically extract a coherence chain for each source text to be translated. Based on the extracted source coherence chain, we adopt a maximum entropy classifier to predict the target coherence chain that defines a linear topic structure for the target document. The proposed topic-based coherence model then uses the predicted target coherence chain to help the decoder select coherent word/phrase translations. Our experiments show that incorporating the topic-based coherence model into machine translation achieves substantial improvement over both the baseline and previous methods that integrate document topics rather than coherence chains into machine translation.

11:20-12:20 Session 33B: Satisfiability (AAAI)
11:20
Resolution and Parallelizability: Barriers to the Efficient Parallelization of SAT Solvers
SPEAKER: unknown

ABSTRACT. Recent attempts to create versions of Satisfiability (SAT) solvers that exploit parallel hardware have met with limited success. In fact, the most successful parallel solvers in recent competitions were based on portfolio approaches with little to no exchange of information between processors. This experience contradicts the apparent parallelizability of exploring a combinatorial search space. We present evidence that this discrepancy can be explained by studying SAT solvers as resolution refutation engines. Starting with the observation that a recently studied measure of resolution proofs, namely depth, provides a (weak) upper bound to the best possible speedup achievable by such solvers, we empirically show the existence of bottlenecks to parallelizability that refutations typically generated by SAT solvers exhibit. Further, we propose a new measure that explicitly accounts for a bounded number of parallel processors and empirically correlates with parallel speedups observed in practice. Our findings suggest that efficient parallelization of such solvers is not simply a matter of finding the right clause sharing heuristics, it is also hindered by the structure of the refutations these solvers produce.

11:35
Partial MUS Enumeration

ABSTRACT. Minimal explanations of infeasibility find a wide range of uses. In the Boolean domain, these are referred to as Minimal Unsatisfiable Subsets (MUSes). In some settings, one needs to enumerate MUSes of a Boolean formula. Most often the goal is to enumerate all MUSes. In cases where this is computationally infeasible, an alternative is to enumerate some MUSes. This paper develops a novel approach for partial enumeration of MUSes that complements existing alternatives. If the enumeration of all MUSes is viable, existing alternatives remain the best option. However, for formulas where enumerating all MUSes is unrealistic, our approach provides a way to enumerate some MUSes within a given time bound. The experimental results focus on formulas for which existing solutions are unable to enumerate MUSes, and show that the new approach can in most cases enumerate a non-negligible number of MUSes within a given time bound.

11:50
Greedy or Not? Best Improving versus First Improving Stochastic Local Search for MAXSAT
SPEAKER: unknown

ABSTRACT. Stochastic local search (SLS) is the dominant algorithmic paradigm for incomplete SAT and MAXSAT solvers. Early studies on small 3SAT instances found that the use of greedy improving moves did not improve search compared to using next improving moves (Gent and Walsh 1992, 1993). Yet several recent algorithms commonly have two modes: a greedy search followed by diversification (Tompkins, Balint and Hoos 2011). We revisit the early results by empirically studying the impact of greediness on SLS performance on much larger instances, including both random 3SAT and industrial MAXSAT problems. Current implementations using greedy moves tend to be much less efficient than using next improving moves. We implement an efficient buffering algorithm that makes greedy moves just as efficient as next improving moves. We compare versions of GSAT and AdaptG2WSAT using next and greedy moves for both finding the first optima and guiding local search during diversification. We find that the first local optima found using greedy moves are statistically significantly better than the first local optima found using next improving moves. However, this advantage reverses after subsequent search; given sufficient search time, solutions are significantly better most of the time when using next improving moves. For larger random as well as industrial MAXSAT problems, current algorithm design should revisit the use of next improving moves.

12:05
Improving WalkSAT for Random k-Satisfiability Problem with k>3
SPEAKER: unknown

ABSTRACT. Stochastic local search (SLS) algorithms are well known for their ability to efficiently find models of randomly generated instances of the Boolean satisfiability (SAT) problem. One of the most famous SLS algorithms for SAT is WalkSAT, a seminal algorithm that has had wide influence on modern SLS algorithms. Recently, there has been increasing interest in WalkSAT, due to the discovery of its great power on large random 3-SAT instances. However, the performance of WalkSAT on random k-SAT instances with k>3 lags far behind, and there has been little work on improving SLS algorithms for such instances. This work takes a first large step in this direction. We propose a novel concept, the multilevel make. Based on this concept, we design a scoring function called the linear make, which is used to break ties in WalkSAT, leading to a new algorithm called WalkSATlm. Our experimental results on random 5-SAT and 7-SAT instances show that WalkSATlm improves on WalkSAT by orders of magnitude. Moreover, WalkSATlm significantly outperforms state-of-the-art SLS solvers on random 5-SAT instances, while competing well on random 7-SAT ones. Additionally, WalkSATlm performs very well on random instances from SAT Challenge 2012, indicating its robustness.
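To make the tie-breaking idea concrete, here is a minimal, illustrative sketch of WalkSAT-style variable picking with a make-based tie-breaker. The plain make count below is a stand-in assumption; the weighted "linear make" function is the paper's contribution and is not reproduced here.

```python
def is_sat(clause, assign):
    """True if some literal in the clause is satisfied under the assignment."""
    return any(assign[abs(l)] == (l > 0) for l in clause)

def break_make(var, cnf, assign):
    """Count clauses broken (sat -> unsat) and made (unsat -> sat) by flipping var."""
    flipped = dict(assign)
    flipped[var] = not assign[var]
    broken = made = 0
    for clause in cnf:
        before, after = is_sat(clause, assign), is_sat(clause, flipped)
        broken += before and not after
        made += after and not before
    return broken, made

def pick_var(unsat_clause, cnf, assign):
    """WalkSAT-style greedy pick: minimise break, tie-break by maximising make."""
    best = None
    for lit in unsat_clause:
        v = abs(lit)
        b, m = break_make(v, cnf, assign)
        key = (b, -m, v)  # lower break wins; among ties, higher make wins
        if best is None or key < best:
            best = key
    return best[2]
```

Clauses are lists of signed integers in the usual DIMACS style; a real solver would cache break/make values incrementally rather than rescanning the formula.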

11:20-12:20 Session 33C: Security and Network Games (AAAI)
11:20
AAAI-13 Outstanding Paper, Honorable Mention: Sensitivity of Diffusion Dynamics to Network Uncertainty
SPEAKER: unknown

ABSTRACT. Simple diffusion processes on networks have been used to model, analyze and predict diverse phenomena such as spread of diseases, information, memes, etc. More often than not, the underlying network data is noisy and sampled. This prompts the following natural question: how sensitive are the diffusion dynamics and subsequent conclusions to uncertainty in the network?

In this paper, we consider two popular diffusion models: the Independent Cascades (IC) model and the Linear Threshold (LT) model. We study how the expected number of vertices that are influenced/infected, given some initial conditions, is affected by network perturbation. By rigorous analysis under the assumption of a reasonable perturbation model we establish the following main results: (1) For the IC model, we characterize the susceptibility to network perturbation in terms of the critical probability for phase transition of the network. We find the expected number of infections is quite stable, unless the transmission probability is close to the critical probability. (2) We show that the standard LT model with uniform edge weights is relatively stable under network perturbations. (3) Empirically, the transient behavior, i.e., the time series of the number of infections, in both models appears to be more sensitive to network perturbations. We also study these questions using extensive simulations on diverse real world networks, and find that our theoretical predictions for both models match the empirical observations quite closely.
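For background, the Independent Cascade model studied above can be sketched as a simple Monte Carlo simulation. The uniform transmission probability and edge-list graph representation are illustrative choices, not the paper's perturbation model.

```python
import random

def ic_spread(edges, seeds, p, rng):
    """One Monte Carlo run of the Independent Cascade model on an undirected
    graph: each newly infected node gets one chance to infect each uninfected
    neighbour, succeeding independently with probability p."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    infected = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj.get(u, []):
                if v not in infected and rng.random() < p:
                    infected.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(infected)
```

Averaging `ic_spread` over many runs estimates the expected number of infections, the quantity whose stability under perturbation is analyzed in the paper.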

11:35
Solving Security Games on Graphs via Marginal Probabilities
SPEAKER: unknown

ABSTRACT. Security games involving the allocation of multiple security resources to defend multiple targets generally have an exponential number of pure strategies for the defender. One method that has been successful in addressing this computational issue is to instead directly compute the marginal probabilities with which the individual resources are assigned (first pursued by Kiekintveld et al. (2009)). However, in sufficiently general settings, there exist games where these marginal solutions are not implementable, that is, they do not correspond to any mixed strategy of the defender. In this paper, we examine security games on graphs and show how the type of graph, the type of schedules, and the type of defender resources affect the applicability of this approach. In some settings, we show the approach is applicable and give a polynomial-time algorithm for computing an optimal defender strategy; in other settings, we give counterexample games that demonstrate that the approach does not work, and prove NP-hardness results for computing an optimal defender strategy.

11:50
Bounding the Cost of Stability in Games over Interaction Networks
SPEAKER: unknown

ABSTRACT. We study stability of cooperative games played over an interaction network, in a model that was introduced by Myerson (1977). We show that the cost of stability of such games (i.e., the subsidy required to stabilize the game) can be bounded in terms of natural parameters of their underlying interaction networks.

Specifically, we prove that if the treewidth of the interaction network H is k, then the relative cost of stability is at most k+1, and if the pathwidth of H is k', then the relative cost of stability is at most k'. We show that these bounds are tight for all k >= 2 and all k' >= 1, respectively.

12:05
Analyzing the Effectiveness of Adversary Modeling in Security Games
SPEAKER: unknown

ABSTRACT. Recent deployments of Stackelberg security games (SSG) have led to two competing approaches to handle boundedly rational human adversaries: (1) integrating models of human (adversary) decision-making into the game-theoretic algorithms, and (2) applying robust optimization techniques that avoid adversary modeling. A recent algorithm (MATCH) based on the second approach was shown to outperform the leading modeling-based algorithm even in the presence of a significant amount of data. Is there then any value in using human behavior models in solving SSGs? Through extensive experiments with 547 human subjects playing 11102 games in total, we emphatically answer the question in the affirmative, while providing the following key contributions: (i) we show that our algorithm, SU-BRQR, based on a novel integration of a human behavior model with a subjective utility function, significantly outperforms both MATCH and its improvements; (ii) we are the first to present experimental results with security intelligence experts, and find that even though the experts are more rational than the Amazon Mechanical Turk workers, SU-BRQR still outperforms an approach assuming perfect rationality (and, to a more limited extent, MATCH); (iii) we show the advantage of SU-BRQR in a new, large game setting and demonstrate that sufficient data enables it to improve its performance over MATCH.

11:20-12:20 Session 33D: Data Mining (AAAI)
11:20
Computational Sustainability Award Winner: A Temporal Motif Mining Approach to Unsupervised Energy Disaggregation: Applications to Residential and Commercial Buildings
SPEAKER: unknown

ABSTRACT. Non-intrusive appliance load monitoring has emerged as a popular approach to study energy consumption patterns without instrumenting every device in an installation. The ensuing computational problem is to disaggregate total energy usage into usage by specific circuits and devices, to gain insight into consumption patterns. We exploit the temporal ordering implicit in on/off events of devices to uncover motifs (episodes) corresponding to the operation of individual devices. Extracted motifs are then subjected to a sequence of constraint checks to ensure that the resulting episodes are interpretable. Our preliminary results show that motif mining is adept at distinguishing devices with multiple power levels and at disentangling the combinatorial operation of devices. With suitably configured processing steps, we demonstrate the applicability of our method to both residential and commercial buildings.

11:35
Multiple Hypothesis Object Tracking for Unsupervised Self-Learning: An Ocean Eddy Tracking Application
SPEAKER: unknown

ABSTRACT. Mesoscale ocean eddies primarily transport heat, salt, energy, and nutrients across oceans. As a result, accurately identifying and tracking such phenomena are crucial for understanding future marine and terrestrial ecosystems and their sustainability. Traditionally, ocean eddies are monitored through two phases: identification and tracking. A limitation of this approach is that the tracking phase is dependent on the performance of the identification scheme, which can be susceptible to noise and sampling errors. In this paper we focus on tracking and introduce the concept of multiple hypothesis assignment (MHA) to track multiple features simultaneously across time frames. Under this scheme, features are assigned to multiple potential tracks, and the final assignment is deferred until more data are available to make a relatively unambiguous decision. Unlike the most widely used methods in the eddy tracking literature, MHA uses contextual spatio-temporal information to take corrective measures autonomously on the detection step a posteriori, is more robust to noise, and is generalizable to other multiple-object tracking problems.

11:50
Adaptive Spatio-Temporal Exploratory Models: Hemisphere-Wide Species Distributions from Massively Crowdsourced eBird Data
SPEAKER: unknown

ABSTRACT. Broad-scale spatiotemporal processes in conservation and sustainability science, such as continent-wide animal movement, occur across a range of spatial and temporal scales. Understanding these processes at multiple scales is crucial for developing and coordinating conservation strategies across national boundaries. In this paper we propose a general class of models, called AdaSTEM (Adaptive Spatio-Temporal Exploratory Models), that are able to exploit variation in the density of observations while adapting to multiple scales in space and time. We show that this framework can efficiently discover multiscale structure when it is present, while retaining predictive performance when it is absent. We provide an empirical comparison and analysis, offer theoretical insights from the ensemble loss decomposition, and deploy AdaSTEM to estimate the Barn Swallow's (Hirundo rustica) hemisphere-wide spatiotemporal distribution using massively crowdsourced eBird data.

12:05
Computational Sustainability Award Winner: Approximate Bayesian Inference for Reconstructing Velocities of Migrating Birds from Weather Radar
SPEAKER: unknown

ABSTRACT. Archived data from the WSR-88D network of weather radars in the US hold detailed information about the continent-scale migratory movements of birds over the last 20 years. However, significant technical challenges must be overcome to understand this information and harness its potential for science and conservation. We present an approximate Bayesian inference algorithm to reconstruct the velocity fields of birds migrating in the vicinity of a radar station. This is part of a larger project to quantify bird migration at large scales using weather radar data.

11:20-12:20 Session 33E: Spotlights and Senior Members (AAAI)
11:20
ECML/PKDD: What's Hot
SPEAKER: unknown

ABSTRACT. This talk will focus on structural changes to the way the conference is run.

11:35
ECML/PKDD: What's Hot
SPEAKER: unknown
11:50
ICML: What's Hot
SPEAKER: unknown

ABSTRACT. This talk will focus on structural changes to the way the conference is run.

12:05
NIPS: What's Hot
SPEAKER: unknown
11:20-12:00 Session 33F: Pattern Analysis and Modeling (IAAI)
11:20
Emerging: Interactive Information Extraction and Navigation to Enable Effective Link Analysis and Visualization of Unstructured Text
SPEAKER: unknown

ABSTRACT. This paper describes the Advanced Text Exploitation Assistant (ATEA), a system developed to enable intelligence analysts to perform link analysis and visualization (A&V) from information in large volumes of unstructured text. One of the key design challenges that had to be addressed was that of imperfect Information Extraction (IE) technology. While IE seems like a promising candidate for exploiting information in unstructured text, it makes mistakes. As a result, analysts do not trust its results. In this paper, we discuss how ATEA overcomes the obstacle of imperfect IE by incorporating a human-in-the-loop for review and correction of extraction results. We also discuss how coupling consolidated extraction results (corpus-level information objects) with an intuitive user interface facilitates interactive navigation of the resulting information. With these key features, ATEA enables effective link analysis and visualization of information in unstructured text.

11:40
Challenge: Scalable Models for Patterns of Life
SPEAKER: unknown

ABSTRACT. Patterns of life are observable regularities that emerge from the everyday goals, behaviors, and interactions of a community of individuals. Computational modeling for patterns of life would offer significant opportunities for practical application and theoretical research. We introduce three related scalability challenges for modeling patterns of life and outline how AI algorithms and representations are likely to be critical in meeting the challenges.

11:20-12:20 Session 33G: Ontologies and Reasoning (AAAI)
11:20
Introducing Nominals to the Combined Query Answering Approaches for EL
SPEAKER: unknown

ABSTRACT. So-called combined approaches answer a conjunctive query over a description logic ontology in three steps: first, they materialise certain consequences of the ontology and the data; second, they evaluate the query over the data; and third, they filter the result of the second phase to eliminate unsound answers. Such approaches were developed for various members of the DL-Lite and EL families of languages, but none of them can handle ontologies containing nominals. In our work, we bridge this gap and present a combined query answering approach for ELHO^r_⊥, a logic that contains all features of the OWL 2 EL standard apart from transitive roles and complex role inclusions; this is nontrivial because nominals require equality reasoning, which introduces complexity into the first and the third steps. Our empirical evaluation suggests that our technique is suitable for practical application, and so it provides a practical basis for conjunctive query answering in a large fragment of OWL 2 EL.

(with Boris Motik, Ian Horrocks, AIW-68)

11:35
Not Quite the Same: Identity Constraints for the Web of Linked Data

ABSTRACT. Linked Data is based on the idea that information from different sources can flexibly be connected to enable novel applications that individual datasets do not support on their own. This hinges upon the existence of links to connect datasets that would otherwise be isolated. The most notable form, the sameAs link, is intended to express that two identifiers are equivalent in all respects. Unfortunately, many existing links do not reflect such genuine identity. This study provides a novel method to analyse this phenomenon, based on a thorough theoretical analysis, as well as a novel graph-based method to resolve such issues to some extent. Our experiments on a representative Web-scale set of sameAs links from the Web of Linked Data show that our method can identify and remove hundreds of thousands of constraint violations.

12:20-12:30 Session 34A (AAAI)
12:20
Elo Ratings for Structural Credit Assignment in Multiagent Systems
SPEAKER: unknown

ABSTRACT. In this paper we investigate the application of Elo ratings (originally designed for two-player chess) to a heterogeneous nonlinear multiagent system to determine an agent's overall impact on its team's performance. Measuring this impact has been attempted in many different ways, including reward shaping; the generation of hierarchies, holarchies, and teams; mechanism design; and the creation of subgoals. We show that in a multiagent system, an Elo rating will accurately reflect an agent's ability to contribute positively to a team's success, with no feedback needed other than a repeated binary win/loss signal. The Elo rating measures not only "personal" success, but simultaneously success in assisting other agents to perform favorably.
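A minimal sketch of an Elo-style update driven by a binary win/loss signal is shown below. The K-factor and the comparison against the opposing team's mean rating are illustrative assumptions; the paper's exact formulation for heterogeneous teams may differ.

```python
def expected_score(r_a, r_b):
    """Standard Elo win probability for a player rated r_a against r_b."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update_team_elo(team_a, team_b, a_won, k=32.0):
    """Update each agent's rating from a single binary win/loss signal.

    team_a, team_b: lists of agent ratings. Each agent is compared against the
    opposing team's mean rating (an illustrative choice). Returns new lists."""
    mean_a = sum(team_a) / len(team_a)
    mean_b = sum(team_b) / len(team_b)
    s_a = 1.0 if a_won else 0.0
    new_a = [r + k * (s_a - expected_score(r, mean_b)) for r in team_a]
    new_b = [r + k * ((1.0 - s_a) - expected_score(r, mean_a)) for r in team_b]
    return new_a, new_b
```

Repeated over many games with shuffled team compositions, such updates let individual ratings drift toward each agent's marginal contribution.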

12:22
Automated Design of Search with Composability
SPEAKER: unknown

ABSTRACT. We propose a new perspective on the automated design of combinatorial search algorithms through an approach that operates at a much higher semantic level than previous algorithm configurators do. Instead of blindly tuning numerical or categorical parameters based on black-box optimization or resorting to a handful of predefined strategies, we propose to automatically search over compositions of search strategies using a light-weight language, while exploiting the semantic knowledge of the modeling language itself to guide the configuration process. Although somewhat reminiscent of the old AI vision that machines will be able to program themselves to solve novel tasks, we believe that the idea restricted to this simple but powerful search language has a chance of success in practice.

12:24
Movie Recommender System for Profit Maximization
SPEAKER: unknown

ABSTRACT. Recommendation systems are becoming increasingly popular, with recent studies showing they have a positive effect on revenue and engagement. The traditional way of designing a recommendation system is to predict, for every possible recommendation, the probability that the user will follow it, and then present the user with the recommendations that maximize this probability.

In this paper we show that by giving a different set of recommendations, the recommendation system can further increase the business' utility (e.g. revenue), without any significant drop in user satisfaction. Indeed, the recommendation system designer should have in mind both the user, whose taste we need to uncover, and the business, which wants to promote specific content.

To study these questions, we performed a large body of experiments on Amazon Mechanical Turk. In each of these experiments, we compared a commercial state-of-the-art recommendation engine with a modified recommendation list that takes into account the utility the business gains from each suggestion the user accepts. We find that the modified recommendation list is more desirable for the business, as it yields higher revenue (or utility). One may worry about the long-term effects of giving the user worse suggestions. To study this, we asked the users how they perceived the list of recommendations they received. Any difference in user satisfaction between the lists is negligible; any change is within the noise.

12:20-12:30 Session 34B (AAAI)
12:20
Virtual Structure Reduction for Distributed Constraint Problem Solving
SPEAKER: unknown

ABSTRACT. Distributed Constraint Problem solving represents a fundamental research area in distributed artificial intelligence and multi-agent systems. The constraint density, i.e., the ratio of the number of constraints to the number of variables, determines the difficulty of either finding a solution or minimizing the set of variable assignment conflicts. Reducing density typically reduces difficulty. We present a fully distributed technique for reducing the effective density of constraint graphs, called Virtual Structure Reduction (VSR). The VSR technique leverages the occurrence of variables that must be assigned the same value based on shared constraints, and can improve solver performance using existing algorithms. We discuss our Distributed Constraint Optimization Problem (DCOP) solver, integrated with the Distributed Stochastic Algorithm (DSA), called VSR-DSA. The VSR-DSA algorithm demonstrates performance gains over DSA in both solution quality and time on 3-coloring problems.

12:22
Identifying Important Nodes in Heterogeneous Networks
SPEAKER: unknown

ABSTRACT. We present a new approach to identifying important nodes or entities in a complex heterogeneous network. An instance of this general problem is the Most Valuable Player (MVP) problem: determine which member of a team/group contributes the most to the team's success. The novelty of our approach is that we introduce a concept of importance based on a statistical model: an individual is important to the extent that including that individual explicitly in the model improves the model's data fit more than it increases the model's complexity. We apply techniques from statistical-relational learning, a recent field that combines AI and machine learning, to identify statistically important individuals in a scalable manner. We evaluate the result on the OPTA soccer dataset for the English Premier League.

12:24
Synthetic Photographs for Learning Aesthetic Preferences
SPEAKER: unknown

ABSTRACT. This paper applies the features found in photo ranking algorithms for real photos to rate synthetic photos generated from user game play. It combines the high-level features used to rate high-quality photos, like rule of thirds and balance, with features of low-quality photos, like cropped objects and a tilted horizon line, to produce one feature set. This feature set was applied to the average overall pairwise preference as well as to individual users' preferences. The algorithm predicted the correct pairwise preference with 91% accuracy for the overall preferences and an average of 87% accuracy for the individual users. Moreover, while feature weights varied widely with individual preferences, the most predominant features were a tilted horizon line, the number of objects in a photo, and cropped objects. These findings suggest not only that synthetic photos are a good alternative means for studying real-photo aesthetic preferences, but also that future photo ranking algorithms should take individual preferences into account.

12:26
Machine Learning for Meeting Analysis
SPEAKER: unknown

ABSTRACT. Most people participate in meetings almost every day, multiple times a day. The study of meetings is important, but also challenging, as it requires an understanding of social signals and complex interpersonal dynamics. Our aim in this work is to take a data-driven approach to the science of meetings. We provide tentative evidence that: i) there are common patterns in the way social dialogue acts are interspersed throughout a meeting, and ii) it is often possible to predict whether a proposal made during a meeting will be accepted or rejected based entirely on the language (the set of persuasive words) used by the speaker.

12:20-12:30 Session 34C (AAAI)
12:20
Supersparse Linear Integer Models for Predictive Scoring Systems
SPEAKER: unknown

ABSTRACT. In this paper, we introduce Supersparse Linear Integer Models (SLIM) as a tool to create scoring systems for binary classification. We derive theoretical bounds on the true risk of SLIM scoring systems, and present experimental results to show that SLIM scoring systems are accurate, sparse, and interpretable classification models.
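As a minimal illustration, a scoring system of the kind described classifies a case by summing small integer coefficients over binary features and thresholding. The coefficients below are hypothetical, and the way SLIM actually learns sparse integer coefficients is not shown.

```python
def scoring_system_predict(x, coefs, intercept=0):
    """Predict +1/-1 from a linear score with small integer coefficients.

    x: binary feature vector (0/1 entries); coefs: integer coefficients.
    Integer coefficients keep the model readable as a points-based checklist."""
    score = intercept + sum(c * xi for c, xi in zip(coefs, x))
    return 1 if score > 0 else -1
```

The appeal is interpretability: a clinician can apply such a model by adding up a handful of integer "points" by hand.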

12:22
Detecting Patterns of Crime with Series Finder
SPEAKER: unknown

ABSTRACT. Many crimes can happen every day in a major city, and figuring out which ones are committed by the same individual or group is an important and difficult data mining challenge. To do this, we propose a pattern detection algorithm called Series Finder, that grows a pattern of discovered crimes from within a database, starting from a "seed" of a few crimes. Series Finder incorporates both the common characteristics of all patterns and the unique aspects of each specific pattern. We compared Series Finder with classic clustering and classification models applied to crime analysis. It has promising results on a decade's worth of crime pattern data from the Cambridge Police Department.

12:24
Predicting Power Failures with Reactive Point Processes
SPEAKER: unknown

ABSTRACT. We present a new statistical model for predicting discrete events continuously in time, called Reactive Point Processes (RPPs). RPPs are a natural fit for many domains where time-series data are available, and their development was motivated by the important problem of predicting serious events (fires, explosions, power failures) in the underground electrical grid of New York City (NYC). RPPs capture several important properties of this domain, such as the self-exciting, self-regulating, and diminishing-returns effects of many events and inspections in a row.

12:26
An Interpretable Stroke Prediction Model Using Rules and Bayesian Analysis
SPEAKER: unknown

ABSTRACT. We aim to produce predictive models that are not only accurate, but are also interpretable to human experts. We introduce a generative model called the Bayesian List Machine for fitting decision lists, a type of interpretable classifier, to data. We use the model to predict stroke in atrial fibrillation patients, and produce predictive models that are simple enough to be understood by patients yet significantly outperform the medical scoring systems currently in use.

12:20-12:30 Session 34D (AAAI)
12:20
Climate Prediction via Matrix Completion
SPEAKER: unknown

ABSTRACT. Recently, machine learning has been applied to the problem of predicting future climates, informed by the multi-model ensemble of physics-based climate models that inform the Intergovernmental Panel on Climate Change (IPCC). Past work [Monteleoni et al. 2011; McQuade and Monteleoni, 2012] demonstrated the promise of online learning algorithms applied to this problem. Here we propose a novel approach using sparse matrix completion. Consistent with previous work, our method takes the climate models' predictions into account, including their projections into the future, in addition to the past observation data; however, our approach to prediction is markedly different. We create a sparse (incomplete) matrix from climate model predictions and observed temperature data, and apply a matrix completion algorithm to recover it. In our experiments at annual and monthly frequencies, evaluated at a variety of prediction time-scales, matrix completion applied to climate prediction consistently outperforms the average prediction of the multi-model ensemble of climate models, the benchmark method in climate science.
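For intuition, matrix completion in its simplest form can be sketched as a rank-1 alternating-least-squares fit to the observed entries; the algorithm, rank, and regularization used in the paper are not specified here, so this is only a toy illustration.

```python
def rank1_complete(M, mask, n_iters=100):
    """Fill missing entries of M (mask[i][j] == False means missing) with a
    rank-1 model M ≈ u v^T fitted to the observed entries by alternating
    least squares: fix v and solve for each u[i], then fix u and solve v."""
    m, n = len(M), len(M[0])
    u = [1.0] * m
    v = [1.0] * n
    for _ in range(n_iters):
        for i in range(m):
            num = sum(M[i][j] * v[j] for j in range(n) if mask[i][j])
            den = sum(v[j] ** 2 for j in range(n) if mask[i][j])
            if den > 0:
                u[i] = num / den
        for j in range(n):
            num = sum(M[i][j] * u[i] for i in range(m) if mask[i][j])
            den = sum(u[i] ** 2 for i in range(m) if mask[i][j])
            if den > 0:
                v[j] = num / den
    return [[u[i] * v[j] for j in range(n)] for i in range(m)]
```

In the climate setting, rows and columns might correspond to models/observations and time steps, with future observations forming the missing entries to be recovered.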

12:22
Using Machine Learning to Improve Stochastic Optimization
SPEAKER: unknown

ABSTRACT. In many stochastic optimization algorithms there is a hyperparameter that controls how the next sampling distribution is determined from the current data set of samples of the objective function. This hyperparameter controls the exploration/exploitation trade-off of the next sample. Typically heuristic "rules of thumb" are used to set that hyperparameter, e.g., a pre-fixed annealing schedule. We show how machine learning provides more principled alternatives to (adaptively) set that hyperparameter, and demonstrate that these alternatives can substantially improve optimization performance.

12:24
Label Ranking by Directly Optimizing Performance Measures
SPEAKER: unknown

ABSTRACT. Label ranking aims to map instances to an order over a predefined set of labels. In many applications, label ranking results are evaluated by performance measures such as normalized discounted cumulative gain (NDCG). Ideally, a label ranking model would be trained by directly maximizing such performance measures on the training data. However, existing label ranking models are mainly based on minimizing classification errors or rank losses. To fill this gap, in this paper a novel label ranking model is learned by minimizing a loss function defined directly on the performance measures. Our algorithm, referred to as BoostLR, employs a boosting framework and uses rank aggregation to construct weak label rankers, and is proven able to enhance the performance measure it employs. Experimental results reveal the initial success of BoostLR.
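For reference, the NDCG measure mentioned above is a discounted gain ratio; this sketch shows a standard textbook definition, not BoostLR itself, and one of several common DCG variants.

```python
import math

def ndcg(relevances):
    """NDCG of a predicted ordering.

    relevances: graded relevance of each label, listed in predicted order.
    DCG discounts each gain by log2(position + 1); NDCG divides by the DCG
    of the ideal (descending-relevance) ordering, giving a score in [0, 1]."""
    def dcg(rels):
        return sum(r / math.log2(i + 2) for i, r in enumerate(rels))
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0
```

A perfect ranking scores 1.0; placing low-relevance labels early lowers the score, which is what makes NDCG position-sensitive.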

12:26
Utilizing Landmarks in Euclidean Heuristics for Optimal Planning
SPEAKER: unknown

ABSTRACT. An important problem in AI is the construction of high-quality heuristics for optimal search. Recently, a Euclidean heuristic (EH) has been proposed that leverages manifold learning, a subfield of machine learning, to construct an embedding of the state space graph, and derives an admissible heuristic distance between two states from the Euclidean distance between their respective embedded points. EH has shown good performance and memory efficiency in comparison to other existing heuristics. Our recent work has further improved the scalability and quality of EH. In this short paper, we present our recent progress on applying EH to problems in planning formalisms, which provide richer semantics than the simple state-space graph model. In particular, we improve EH by exploiting the landmark structure derived from the SAS+ planning formalism.

12:30-13:45 Lunch Break
13:45-14:45 Session 35A: Multiagent Systems (AAAI)
13:45
Model Predictive Control with Uncertainty in Human Driven Systems
SPEAKER: unknown

ABSTRACT. Human driven systems present a unique optimization challenge for robot control. Generally, operators of these systems behave rationally given environmental factors and desired goals. However, the information available to subsystem controllers is often incomplete, and the operator becomes more difficult to model without this input information. In this work we present a data-driven, nonparametric model to capture both the expectation and uncertainty of the upcoming duty for a subsystem controller. This model is a modified k-nearest-neighbor regressor used to generate weighted samples from a distribution of upcoming duty, which are then exploited to generate an optimal control. We test the model on a simulated heterogeneous energy pack manager in an electric vehicle operated by a human driver. For this domain, the upcoming load on the energy pack strongly affects the optimal use and charging strategy of the pack. Given incomplete information, there is natural uncertainty in upcoming duty due to traffic, destination, signage, and other factors. We test against a dataset of real driving data gathered from volunteers, and compare the results to other models and to an optimal upper bound.
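A minimal sketch of the kind of weighted nearest-neighbor sampling described: the inverse-distance weighting, the function name, and the data layout below are all illustrative assumptions, not the paper's exact regressor.

```python
import random

def knn_duty_samples(history, query, k=5, n_samples=100, rng=None):
    """Draw weighted samples of upcoming duty from the k nearest past situations.

    history: list of (feature_vector, upcoming_duty) pairs; query: feature
    vector for the current situation. Neighbours are weighted by inverse
    Euclidean distance (an illustrative choice), so the returned samples
    approximate a distribution over upcoming duty rather than a point estimate."""
    rng = rng or random.Random(0)
    scored = []
    for feats, duty in history:
        d = sum((a - b) ** 2 for a, b in zip(feats, query)) ** 0.5
        scored.append((d, duty))
    scored.sort(key=lambda t: t[0])
    neighbours = scored[:k]
    weights = [1.0 / (d + 1e-9) for d, _ in neighbours]
    duties = [duty for _, duty in neighbours]
    return rng.choices(duties, weights=weights, k=n_samples)
```

A controller can then optimize its action against these samples, e.g. by minimizing expected cost over the sampled duty values.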

14:00
Negotiated Learning for Smart Grid Agents: Entity Selection Based on Dynamic Partially Observable Features
SPEAKER: unknown

ABSTRACT. An attractive approach to managing electricity demand in the Smart Grid relies on real-time pricing (RTP) tariffs, where customers are incentivized to quickly adapt to changes in the cost of supply. However, choosing amongst competitive RTP tariffs is difficult when tariff prices change rapidly. The problem is further complicated when we assume that the price changes for a tariff are published in real-time only to those customers who are currently subscribed to that tariff, thus making the prices partially observable. We present models and learning algorithms for autonomous agents that can address the tariff selection problem on behalf of customers. We introduce 'Negotiated Learning', a general algorithm that enables a self-interested sequential decision-making agent to periodically select amongst a variable set of 'entities' (e.g., tariffs) by negotiating with other agents in the environment to gather information about dynamic partially observable entity 'features' (e.g., tariff prices) that affect the entity selection decision. We also contribute a formulation of the tariff selection problem as a 'Negotiable Entity Selection Process', a novel representation. We support our contributions with intuitive justification and simulation experiments based on real data on an open Smart Grid simulation platform.

14:15
Autonomous Agents in Future Energy Markets: The 2012 Power Trading Agent Competition
SPEAKER: unknown

ABSTRACT. Sustainable energy systems of the future will need more than efficient, clean, and low-cost energy sources. They will also need efficient price signals that motivate sustainable energy consumption behaviors and a tight real-time alignment of energy demand with supply from renewable and traditional sources. The Power Trading Agent Competition (Power TAC) is a rich, competitive, open-source simulation platform for future retail power markets built on real-world data and state-of-the-art customer models. Its purpose is to help researchers understand the dynamics of customer and retailer decision-making as well as the robustness of proposed market designs. Power TAC invites researchers to develop electricity broker agents and benchmark them against the best-performing strategies from leading energy market researchers worldwide. This provides compelling, actionable information for policy makers and industry leaders. We describe the competition scenario, demonstrate the realism of the Power TAC platform, and analyze key characteristics of successful brokers in one of our 2012 pilot competitions between seven research groups from five different countries.

14:30
Large Landscape Conservation—Synthetic and Real-world Datasets
SPEAKER: Carla Gomes

ABSTRACT. Biodiversity underpins ecosystem goods and services and hence protecting it is key to achieving sustainability. However, the persistence of many species is threatened by habitat loss and fragmentation due to human land use and climate change. Conservation efforts are implemented under very limited economic resources, and therefore designing scalable, cost-efficient and systematic approaches for conservation planning is an important and challenging computational task. In particular, preserving landscape connectivity between good habitat has become a key conservation priority in recent years. We give an overview of landscape connectivity conservation and some of the underlying graph-theoretic optimization problems. We present a synthetic generator capable of creating families of randomized structured problems, capturing the essential features of real-world instances but allowing for a thorough typical-case performance evaluation of different solution methods. We also present two large-scale real-world datasets, including economic data on land cost, and species data for grizzly bears, wolverines and lynx.

13:45-14:45 Session 35B: Vision (AAAI)
13:45
Supervised and Projected Sparse Coding for Image Classification
SPEAKER: unknown

ABSTRACT. The classic sparse representation for classification (SRC) method fails to incorporate the label information of training images, and meanwhile scales poorly due to the expensive computation of the L1 norm. In this paper, we propose a novel subspace sparse coding method that utilizes label information to effectively classify images in the subspace. Our new approach unifies the tasks of dimension reduction and supervised sparse vector learning by simultaneously preserving the sparse structure of the data and seeking the optimal projection direction in the training stage, thereby accelerating the classification process in the test stage. Our method achieves both flat and structured sparsity for the vector representations, making our framework more discriminative during subspace learning and the subsequent classification. The empirical results on 4 benchmark data sets demonstrate the effectiveness of our method.

14:00
Joint Object and Pose Recognition Using Homeomorphic Manifold Analysis
SPEAKER: Haopeng Zhang

ABSTRACT. Object recognition is a key precursor challenge in the fields of object manipulation and robotic/AI reasoning in general. Recognizing object categories, particular instances of objects, and viewpoints/poses of objects are three critical subproblems robots must solve in order to accurately grasp/manipulate objects and reason about their environments. Multi-view images of the same object lie on intrinsic low-dimensional manifolds in descriptor spaces (e.g. visual/depth descriptor spaces). These object manifolds share the same topology despite being geometrically different. Each object manifold can be represented as a deformed version of a unified manifold, and can thus be parametrized by its homeomorphic mapping/reconstruction from the unified manifold. In this work, we construct a manifold descriptor from this mapping between homeomorphic manifolds and use it to jointly solve the three challenging recognition subproblems. We experiment extensively on a challenging multi-modal (i.e. RGBD) dataset and other object pose datasets, and achieve state-of-the-art results.

14:15
Vector-Valued Multi-View Semi-Supervised Learning for Multi-Label Image Classification
SPEAKER: Chao Xu

ABSTRACT. Images are usually associated with multiple labels and comprised of multiple views, since each image contains several objects (e.g. a pedestrian, bicycle and tree) and multiple visual features (e.g. color, texture and shape). Currently available tools tend to use either labels or features for classification, but both are necessary to describe the image properly. There have been recent successes in using vector-valued functions, which construct matrix-valued kernels, to explore the multi-label structure in the output space. This has motivated us to develop multi-view vector-valued manifold regularization (MV^3MR) in order to integrate multiple features. MV^3MR exploits the complementary properties of different features, and discovers the intrinsic local geometry of the compact support shared by different features, under the theme of manifold regularization. We validate the effectiveness of the proposed MV^3MR methodology for image classification by conducting extensive experiments on two challenging datasets, PASCAL VOC '07 and MIR Flickr.

13:45-14:45 Session 35C: Bayesian Inference (AAAI)
13:45
GiSS: Combining Gibbs Sampling and SampleSearch for Inference in Mixed Probabilistic and Deterministic Graphical Models
SPEAKER: unknown

ABSTRACT. Mixed probabilistic and deterministic graphical models are ubiquitous in real-world applications. Unfortunately, Gibbs sampling, a popular MCMC technique, does not converge to the correct answers in the presence of determinism and therefore cannot be used for inference in such models. In this paper, we propose to remedy this problem by combining Gibbs sampling with SampleSearch, an advanced importance sampling technique which leverages complete SAT/CSP solvers to generate high-quality samples from hard deterministic spaces. We call the resulting algorithm GiSS. Unlike Gibbs sampling, which yields unweighted samples, GiSS yields weighted samples. Computing these weights exactly can be computationally expensive, and therefore we propose several approximations. We show that our new weighting schemes yield consistent estimates, and demonstrate experimentally that GiSS is competitive in terms of accuracy with state-of-the-art approximate inference algorithms such as SampleSearch, MC-SAT and belief propagation.
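How weighted samples enter estimation can be illustrated with a generic self-normalized importance-sampling estimator; this is textbook importance sampling, not GiSS's specific weighting schemes, and the binary toy target is an assumption of the sketch:

```python
import random

def weighted_estimate(samples, weights, f):
    # Self-normalized estimate of E_p[f(x)] from proposal samples with
    # weights proportional to p(x)/q(x).
    total = sum(weights)
    return sum(w * f(x) for x, w in zip(samples, weights)) / total

# Toy check: target p(X=1) = 0.8, uniform proposal q(X=1) = 0.5.
rng = random.Random(1)
xs = [rng.randint(0, 1) for _ in range(20000)]
ws = [(0.8 if x == 1 else 0.2) / 0.5 for x in xs]
```

The weighted average of f(x) = x then recovers the target probability 0.8 despite sampling from the uniform proposal.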

14:00
Complexity of Inferences in Polytree-shaped Semi-Qualitative Probabilistic Networks

ABSTRACT. Semi-qualitative probabilistic networks (SQPNs) merge two important graphical model formalisms: Bayesian networks and qualitative probabilistic networks. They provide a very general modeling framework by allowing the combination of numeric and qualitative assessments over a discrete domain, and can be compactly encoded by exploiting the same factorization of joint probability distributions that underlies Bayesian networks. This paper explores the computational complexity of semi-qualitative probabilistic networks, focusing on polytree-shaped networks. We show that the inference problem is coNP-complete for binary polytrees with multiple observed nodes. We also show that inferences can be performed in linear time if there is a single observed node, which is a relevant practical case. Because our proof is constructive, we obtain an efficient linear-time algorithm for SQPNs under such assumptions. To the best of our knowledge, this is the first exact polynomial-time algorithm for SQPNs. Together these results provide a clear picture of the inferential complexity in polytree-shaped SQPNs.

14:15
Joint Extraction and Labeling via Graph Propagation for Dictionary Construction
SPEAKER: unknown

ABSTRACT. In this paper, we present an approach that jointly infers the boundaries of tokens and their labels to construct dictionaries for Information Extraction (IE). Our approach to joint inference is based on a graph propagation framework, and extends it in two novel ways. First, we extend the graph representation to capture ambiguities that occur during the token extraction phase. Second, we modify the labeling phase (i.e. label propagation) to utilize this new representation, allowing evidence from labeling to be used for token extraction. Our evaluation shows that these extensions (and hence our approach) significantly improve the quality of the resulting dictionaries over pipeline-based approaches by preventing aggressive commitment. Our evaluation also shows that our extensions over a base graph-propagation framework were able to improve the precision without hurting the recall.

13:45-14:45 Session 35D: Game Theory (AAAI)
13:45
Automating Collusion Detection in Sequential Games
SPEAKER: unknown

ABSTRACT. Collusion is the practice of two parties deliberately cooperating to the detriment of others. While such behavior may be desirable in certain circumstances, in many it is considered dishonest and unfair. If agents otherwise hold strictly to the established rules, though, collusion can be challenging to police. In this paper, we introduce an automatic method for collusion detection in sequential games. We achieve this through a novel object, called a collusion table, that aims to capture the effects of collusive behavior, i.e., advantage to the colluding parties, without committing to any particular pattern of behavior. We demonstrate the effectiveness of this method in the domain of poker, a popular game where collusion is prohibited.

14:00
Fast Equilibrium Computation for Infinitely Repeated Games
SPEAKER: unknown

ABSTRACT. It is known that an equilibrium of an infinitely repeated two-player game (with limit average payoffs) can be computed in polynomial time, as follows: according to the folk theorem, we compute minimax strategies for both players to calculate the punishment values, and subsequently find a mixture over outcomes that exceeds these punishment values. However, for very large games, even computing minimax strategies can be prohibitive. In this paper, we propose an algorithmic framework for computing equilibria of repeated games that does not require linear programming and that does not necessarily need to inspect all payoffs of the game. Such an algorithm necessarily fails to compute an equilibrium in some cases, but we mathematically demonstrate that most of the time it succeeds quickly on uniformly random games, and experimentally demonstrate this for other classes of games. This also holds for games with more than two players, for which no efficient general algorithms are known.
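As a solver-free illustration of the punishment values mentioned above, here is the security level when both players are restricted to pure strategies; the folk-theorem construction proper uses mixed minimax strategies, which require linear programming, so this restriction is an assumption of the sketch:

```python
def punishment_value(payoff):
    # Pure-strategy security level of the row player: the column player
    # picks the column that minimizes the row player's best response.
    # payoff[i][j] is the row player's limit-average payoff.
    n_cols = len(payoff[0])
    return min(max(row[j] for row in payoff) for j in range(n_cols))
```

In the folk-theorem construction, any outcome giving each player strictly more than this value can be sustained by threatening to revert to the punishment strategy.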

14:15
Optimal Coalition Structure Generation in Cooperative Graph Games
SPEAKER: unknown

ABSTRACT. Representation languages for coalitional games are a key research area in algorithmic game theory. There is an inherent tradeoff between how general a language is, allowing it to capture more elaborate games, and how hard it is computationally to optimize and solve such games. One prominent such language is the simple yet expressive Weighted Graph Games (WGGs) representation (Deng and Papadimitriou, 1994), which maintains knowledge about synergies between agents in the form of an edge weighted graph.

We consider the problem of finding the optimal coalition structure in WGGs. The agents in such games are vertices in a graph, and the value of a coalition is the sum of the weights of the edges present between coalition members. The optimal coalition structure is a partition of the agents into coalitions that maximizes the sum of utilities obtained by the coalitions. We show that finding the optimal coalition structure is not only hard for general graphs, but is also intractable for restricted families such as planar graphs, which are amenable to many other combinatorial problems. We then provide algorithms with constant-factor approximations for planar, minor-free and bounded-degree graphs.

14:30
Interdependent Multi-Issue Negotiation for Energy Exchange in Remote Communities
SPEAKER: unknown

ABSTRACT. We present a novel negotiation protocol to facilitate energy exchange between off-grid homes that are equipped with renewable energy generation and electricity storage. Our protocol imposes restrictions over negotiation such that it reduces the complex interdependent multi-issue negotiation to one where agents have a strategy profile in subgame perfect Nash equilibrium. We show that our negotiation protocol is tractable, concurrent, scalable and leads to Pareto-optimal outcomes in a decentralised manner. We empirically evaluate our protocol and show that, in this instance, a society of agents can (i) improve the overall utilities by 14% and (ii) reduce their overall use of the batteries by 37%.

13:45-14:45 Session 35E: Task Learning (AAAI)
13:45
Active Task Selection for Lifelong Machine Learning
SPEAKER: unknown

ABSTRACT. In a lifelong learning framework, an agent acquires knowledge incrementally over consecutive learning tasks, continually building upon its experience. Recent lifelong learning algorithms have achieved nearly identical performance to batch multi-task learning methods while requiring over three orders of magnitude less time to learn. In this paper, we further improve the scalability of lifelong learning by developing curriculum selection methods that enable an agent to actively select the next task to learn in order to maximize performance on future learning tasks. We demonstrate that active task selection is highly reliable and effective, allowing an agent to learn high-performance models using up to 50% fewer tasks than when the agent has no control over the task order. We also explore a variant of transfer learning in the lifelong learning setting in which the agent can focus knowledge acquisition toward a particular target task, enabling the agent to learn the target task quickly.

14:00
Learning Integrated Symbolic and Continuous Action Models for Continuous Domains
SPEAKER: unknown

ABSTRACT. Long-living autonomous agents must be able to learn to perform competently in novel environments. One important aspect of competence is the ability to plan, which entails the ability to learn models of the agent’s own actions and their effects on the environment. In this paper we describe an approach to learn action models of environments with continuous-valued spatial states and realistic physics consisting of multiple interacting rigid objects. In such environments, we hypothesize that objects exhibit multiple qualitatively distinct behaviors we call modes, conditioned on their spatial relationships to each other. We argue that action models that explicitly represent these modes using a combination of symbolic spatial relationships and continuous metric information learn faster, generalize better, and make more accurate predictions than models that use only metric information. We present a method to learn action models with piecewise linear modes conditioned on a combination of first-order Horn clauses composed of symbolic spatial predicates and continuous classifiers. We empirically demonstrate with lesion studies in a physics simulation that our method learns more accurate and more general models than a method that learns a single smooth function (locally weighted regression) and a method that learns piecewise smooth functions not conditioned on spatial predicates.

14:15
Sparse Multi-task Learning for Detecting Influential Nodes in an Implicit Diffusion Network
SPEAKER: Yingze Wang

ABSTRACT. How to identify influential nodes is a central research topic in information diffusion analysis. Many existing methods rely on the assumption that the network structure is completely known to the model. However, in many applications, such a network is either unavailable or insufficient to explain the underlying information diffusion phenomena. To address this challenge, we develop a multi-task sparse linear influence model (MSLIM), which can simultaneously predict the volume for each contagion and automatically identify sets of the most influential nodes for different contagions. Our method is based on the linear influence model with two main advantages: 1) it does not require the network structure; 2) it can detect different sets of the most influential nodes for different contagions. To solve the corresponding convex optimization problem for learning the model, we adopt the accelerated gradient method (AGM) framework and show that there is an exact closed-form solution for the proximal mapping. The optimization procedure therefore achieves the optimal first-order convergence rate and can be scaled to very large datasets. The proposed model is validated on a set of 2.6 million tweets from 1,000 users on Twitter. We show that MSLIM can efficiently select the most influential users for specific contagions. We also present several interesting patterns among the selected influential users.
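A closed-form proximal mapping of the kind the abstract mentions can be illustrated with block soft-thresholding, the standard proximal operator of a group-sparsity (L2-norm) penalty; whether this matches MSLIM's exact regularizer is an assumption of the sketch:

```python
import math

def prox_group(v, lam):
    # Proximal map of lam * ||v||_2: shrink the group toward zero,
    # zeroing it entirely when its norm is at most lam.
    norm = math.sqrt(sum(x * x for x in v))
    if norm <= lam:
        return [0.0] * len(v)
    scale = 1.0 - lam / norm
    return [scale * x for x in v]
```

Because this map is exact and cheap, each iteration of an accelerated gradient method reduces to a gradient step followed by this shrinkage, which is what makes such solvers scale to large datasets.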

14:30
Multiagent Learning with a Noisy Global Reward Signal
SPEAKER: unknown

ABSTRACT. Scaling multiagent reinforcement learning to domains with many agents is a complex problem. In particular, multiagent credit assignment becomes a key issue as the system size increases. Some multiagent systems suffer from a global reward signal that is very noisy or difficult to analyze. This makes deriving a learnable local reward signal very difficult. Difference rewards (a particular instance of reward shaping) have been used to alleviate this concern, but they remain difficult to compute in many domains. In this paper we present an approach to modeling the global reward using function approximation that allows the quick computation of local rewards. We demonstrate how this model can result in significant improvements in behavior for three congestion problems: a multiagent "bar problem", a complex simulation of the United States airspace, and a generic air traffic domain. We show how the model of the global reward may be either learned on- or off-line using either linear functions or neural networks. For the bar problem, we show an increase in reward of nearly 200% over learning using the global reward directly. For the air traffic problem, we show a decrease in costs of 25% over learning using the global reward directly.
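The difference-reward idea can be sketched on a toy bar problem; the reward shape, the capacity of 6, and "stay home" as the default action are illustrative choices of the sketch, not the paper's exact setup:

```python
import math

def global_reward(total, capacity=6):
    # Toy congestion reward: utility of attendance peaks near the capacity.
    return total * math.exp(-total / capacity)

def difference_reward(actions, i, capacity=6):
    # D_i = G(z) - G(z_{-i}): the global reward minus the counterfactual
    # reward with agent i's action replaced by staying home (action 0).
    total = sum(actions)
    counterfactual = total - actions[i]
    return global_reward(total, capacity) - global_reward(counterfactual, capacity)
```

D_i is positive when an agent's attendance helps the system and negative when the bar is overcrowded, giving each agent a much less noisy learning signal than the raw global reward.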

13:45-14:45 Session 35F: Medical Prediction Problems (IAAI)
13:45
Emerging: Assessing the Predictability of Hospital Readmission Using Machine Learning
SPEAKER: unknown

ABSTRACT. Unplanned hospital readmissions raise health care costs and cause significant distress to patients. Hence, predicting which patients are at risk of being readmitted is of great interest. In this paper, we mine large amounts of administrative information from claim data, including patient demographics, dispensed drugs, medical or surgical procedures performed, and medical diagnoses, in order to predict readmission using supervised learning methods. Our objective is to gain knowledge about the predictive power of the available information. Our preliminary results on data from the provincial hospital system in Quebec illustrate the potential for this approach to reveal important information on factors that trigger hospital readmission. Our findings suggest that a substantial portion of readmissions is inherently hard to predict. Consequently, the use of the raw readmission rate as an indicator of the quality of provided care might not be appropriate.

14:05
Emerging: Early Prediction of Coronary Artery Calcification Levels Using Machine Learning
SPEAKER: unknown

ABSTRACT. Coronary heart disease (CHD) is a major cause of death worldwide. In the U.S., CHD is responsible for approximately 1 in every 6 deaths, with a coronary event occurring every 25 seconds and about 1 death every minute, based on data current to 2007. Although a multitude of cardiovascular risk factors have been identified, CHD actually reflects complex interactions of these factors over time. Today's datasets from longitudinal studies offer great promise to uncover these interactions, but also pose enormous analytical problems due to the typically large amounts of both discrete and continuous measurements and risk factors with potential long-range interactions over time. Our investigation demonstrates that a statistical relational analysis of longitudinal data can easily uncover complex interactions of risk factors and actually predict future coronary artery calcification (CAC) levels --- an indicator of the risk of CHD present subclinically in an individual --- significantly better than traditional non-relational approaches. The uncovered long-range interactions between risk factors conform to existing clinical knowledge and are successful in identifying risk factors at the early adult stage. This may contribute to monitoring young adults via smartphones and to designing patient-specific treatments to mitigate their risk later in life.

14:25
Emerging: Case-Based Meta-Prediction for Bioinformatics
SPEAKER: unknown

ABSTRACT. Before laboratory testing, bioinformatics problems often require a machine-learned predictor to identify the most likely choices among a wealth of possibilities. Researchers may advocate different predictors for the same problem, no one of which is best in all situations. This paper introduces a case-based meta-predictor that combines a set of elaborate, pre-existing predictors to improve their accuracy on a difficult and important problem: protein-ligand docking. The method focuses on the reliability of each of its component predictors, and has broad potential applications in biology and chemistry. Despite noisy and biased input, the method outperforms its individual components on benchmark data. It provides a promising solution for the performance improvement of compound virtual screening, which would thereby reduce the time and cost of drug discovery.

14:45-15:45 Session 36A: Optimization and Search (AAAI)
14:45
Enabling E-Mobility: Facility Location for Battery Loading Stations
SPEAKER: unknown

ABSTRACT. The short cruising range due to the limited battery capacity of current Electric Vehicles (EVs) is one of the main challenges to overcome in facilitating the transition to E-mobility. Until batteries of higher energy storage density have been developed, it is of utmost importance to deliberately plan the locations of new loading stations (LSs) for the best possible coverage. Ideally the network of loading stations should allow driving from anywhere to anywhere (and back) without running out of energy. More formally, given a street network with n nodes, we prove that achieving this goal with a minimal number of LSs is NP-hard to approximate within c ln n for c < 1, via reduction from the Dominating Set problem. On the positive side, we show with instance-based lower bounds that even heuristics with incomplete knowledge achieve good solutions in practice.
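Since the hardness result goes through Dominating Set, the matching greedy O(ln n) approximation is a natural baseline; this generic graph sketch ignores the energy-range constraints of the actual placement problem:

```python
def greedy_dominating_set(adj):
    # Greedy ln-n approximation for Dominating Set: repeatedly pick the
    # node that covers the most still-uncovered nodes (itself plus its
    # neighbors). adj maps each node to its set of neighbors.
    uncovered = set(adj)
    stations = []
    while uncovered:
        best = max(adj, key=lambda v: len(({v} | adj[v]) & uncovered))
        stations.append(best)
        uncovered -= {best} | adj[best]
    return stations
```

On a star-shaped street network, the greedy rule immediately selects the hub as the single station.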

15:00
Robust Network Design for Multispecies Conservation
SPEAKER: Carla Gomes

ABSTRACT. Our work is motivated by an important network design application in computational sustainability concerning wildlife conservation. In the face of human development and climate change, it is important that conservation plans for protecting landscape connectivity exhibit a certain level of robustness. While previous work has focused on conservation strategies that result in a connected network of habitat reserves, the robustness of the proposed solutions has not been taken into account. In order to address this important aspect, we formalize the problem as a node-weighted bi-criteria network design problem with connectivity requirements on the number of disjoint paths between pairs of nodes. While in most previous work on survivable network design the objective is to minimize the cost of the selected network, our goal is to optimize the quality of the selected paths within a specified budget, while meeting the connectivity requirements. We characterize the complexity of the problem under different restrictions. We provide a mixed-integer programming encoding that allows for finding solutions with optimality guarantees, as well as a hybrid local search method with better scaling behavior but no guarantees. We evaluate the typical-case performance of our approaches using a synthetic benchmark, and apply them to a large-scale real-world network design problem concerning the conservation of wolverine and lynx populations in the U.S. Rocky Mountains (Montana).

15:15
Resource Sharing for Control of Wildland Fires
SPEAKER: unknown

ABSTRACT. Wildland fires (or wildfires) occur on all continents except for Antarctica. These fires threaten communities, change ecosystems and destroy vast amounts of natural resources, and the cost estimates of the damage done annually is in the billions of dollars. Controlling wildland fires is resource-intensive and there are numerous examples where resource demand has outstripped resource availability. Trends in changing climates, fire occurrence and expansion of the wildland-urban interface all point to increased resource shortages in the future. One approach for coping with these shortages has been the sharing of resources across different wildland-fire agencies. This introduces new issues as agencies have to balance their own needs and risk-management with their desire to help fellow agencies in need. Using ideas from the field of multiagent systems, we conduct the first analysis of strategic issues arising in resource-sharing for wildland-fire control, and highlight the importance in having clear protocols when allocating the resources. We also argue that the wildland-fire domain has numerous features that make it attractive to researchers in artificial intelligence and computational sustainability.

15:30
Multiagent Coordination for Energy Consumption Scheduling in Consumer Cooperatives
SPEAKER: unknown

ABSTRACT. A key challenge to create a sustainable and energy-efficient society is in making consumer demand adaptive to energy supply, especially renewable supply. In this paper, we propose a partially-centralized organization of consumers, namely, a consumer cooperative for purchasing electricity from the market. We propose a novel multiagent coordination algorithm to shape the energy consumption of the cooperative. In the cooperative, a central coordinator buys the electricity for the whole group and consumers make their own consumption decisions based on their private consumption constraints and preferences. To coordinate individual consumers under incomplete information, we propose an iterative algorithm in which a virtual price signal is sent by the coordinator to induce consumers to shift demand. We prove that our algorithm converges to the central optimal solution. Additionally we analyze the convergence rate of the algorithm via simulations using random instances. The results indicate that our algorithm is scalable with respect to the number of agents and consumption slots.
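The iterative virtual-price scheme described above can be sketched as dual ascent; the quadratic consumer disutility, step size, and iteration count are my own illustrative assumptions, not the paper's algorithm:

```python
def coordinate(preferred, supply, caps, alpha=0.05, iters=2000):
    # Each consumer best-responds to the coordinator's virtual price;
    # the coordinator raises the price while aggregate demand exceeds
    # supply, and lowers it (toward zero) otherwise.
    price = 0.0
    demand = list(preferred)
    for _ in range(iters):
        # Best response to disutility (x - p)^2 plus price * x: x = p - price/2,
        # clipped to the consumer's feasible range [0, cap].
        demand = [min(cap, max(0.0, p - price / 2.0))
                  for p, cap in zip(preferred, caps)]
        total = sum(demand)
        price = max(0.0, price + alpha * (total - supply))
    return price, demand
```

For preferred consumptions summing to 12 against a supply of 6, the price converges to the market-clearing value 4 and the consumers split the supply in proportion to their preferences.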

14:45-15:45 Session 36B: Classification (AAAI)
14:45
Uncorrelated Lasso
SPEAKER: unknown

ABSTRACT. Lasso-type variable selection has found increasingly many applications in machine learning. In this paper, uncorrelated Lasso is proposed for variable selection, where variable de-correlation is considered simultaneously with variable selection, so that the selected variables are as uncorrelated as possible. An effective iterative algorithm, with a proof of convergence, is presented to solve the sparse optimization problem. Experiments on benchmark data sets show that the proposed method achieves better classification performance than many state-of-the-art variable selection methods.
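For background, a plain coordinate-descent Lasso update (without the de-correlation term of the proposed method) reduces to soft-thresholding; the standardized-feature assumption below is what makes each coordinate step this simple:

```python
def soft_threshold(z, lam):
    # Scalar proximal map of lam * |x|, the core of coordinate-descent Lasso.
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

def lasso_cd(X, y, lam, iters=200):
    # Plain coordinate-descent Lasso. Columns of X are assumed standardized
    # (mean 0, unit variance), so each update is a single soft-threshold.
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(iters):
        for j in range(d):
            # Correlation of feature j with the partial residual.
            rho = sum(X[i][j] * (y[i] - sum(X[i][k] * w[k]
                                            for k in range(d) if k != j))
                      for i in range(n)) / n
            w[j] = soft_threshold(rho, lam)
    return w
```

On orthogonal features the solver recovers the true coefficient shrunk by lam and zeroes out the irrelevant one, which is the sparsity behavior the de-correlation term is designed to preserve under correlated features.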

15:00
Large-Scale Hierarchical Classification via Stochastic Perceptron
SPEAKER: unknown

ABSTRACT. The hierarchical classification (HC) problem with structured outputs is an important issue in machine learning and data mining. There are many state-of-the-art algorithms solving this problem. However, most of them suffer from high computational costs. To solve this problem efficiently, we propose a large-margin method based on a stochastic perceptron (SP). Differing from the conventional perceptron algorithm, we give a stochastic choice procedure to decide the direction of the next iteration. This procedure leads to significant improvements in classification accuracy in comparison with the conventional perceptron algorithm. We prove that after finitely many iterations the SP algorithm yields a sub-optimal solution with high probability when the input instances are separable. For large-scale and high-dimensional datasets, we devise a kernel SP algorithm, which dramatically reduces the memory space needed. The kernel SP algorithm has the merit of low space complexity as well as low time complexity. We conduct an empirical analysis. The experimental results show that our approach achieves almost the same accuracy as the state-of-the-art algorithms on real-world datasets, but with much less CPU running time.
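For reference, the conventional perceptron baseline the paper modifies looks like the following flat binary sketch; the stochastic choice procedure and the hierarchical output structure themselves are not reproduced here:

```python
import random

def perceptron(data, epochs=20, rng=None):
    # Conventional perceptron: on each mistake, move the weight vector
    # toward the misclassified example. data is a list of (x, y) pairs
    # with labels y in {-1, +1}.
    rng = rng or random.Random(0)
    d = len(data[0][0])
    w = [0.0] * d
    for _ in range(epochs):
        rng.shuffle(data)
        for x, y in data:
            if y * sum(wi * xi for wi, xi in zip(w, x)) <= 0:
                w = [wi + y * xi for wi, xi in zip(w, x)]
    return w
```

On linearly separable data the classical convergence theorem guarantees a finite number of updates, which is the property the SP analysis extends to its randomized update rule.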

15:15
Teaching Classification Boundaries to Humans
SPEAKER: unknown

ABSTRACT. Given a classification task, what is the best way to teach the resulting boundary to a human? While machine learning techniques can provide excellent methods for finding the boundary, including the selection of examples in an online setting, they tell us little about how we would teach a human the same task. We propose to investigate the problem of example selection and presentation in the context of teaching humans, and explore a variety of mechanisms in the interests of finding what may work best. In particular, we begin with the baseline of random presentation and then examine combinations of several mechanisms: the indication of an example’s relative difficulty, the use of the shaping heuristic from the cognitive science literature (moving from easier examples to harder ones), and a novel kernel-based “coverage model” of the subject’s mastery of the task. From our experiments on 53 human subjects learning and performing a pair of synthetic classification tasks via our teaching system, we found that we can achieve the greatest gains with a combination of shaping and the coverage model.

15:30
A Maximum K-Min Approach for Classification
SPEAKER: unknown

ABSTRACT. Over the past decades, maximin/minimax classifiers have proven to perform excellently in numerous applications. In this paper, a novel robust Maximum K-Min criterion for classification is proposed, which focuses on maximizing the gain obtained on the K worst-classified instances while ignoring the remaining ones. The original combinatorial explosion is reformulated as a convex optimization problem with a linear number of inequality constraints, which can be solved efficiently via standard convex optimization methods and guarantees a globally optimal solution. To verify the performance of the Maximum K-Min approach, a naive linear Maximum K-Min classifier and a nonlinear Maximum K-Min classifier are proposed for two-class classification. Experimental results on 10 datasets show that the classification accuracy of Maximum K-Min classifiers is competitive with state-of-the-art classifiers such as Support Vector Machines and Logistic Regression.
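The reformulation hinted at above can be sketched numerically: the sum of the K smallest per-instance gains equals the maximum over a threshold t of K·t minus the total shortfall below t, which is what replaces the combinatorial choice of K instances with linear inequality constraints. A small illustrative check (function names are ours, not the paper's):

```python
import numpy as np

def k_min_gain(gains, k):
    """Maximum K-Min objective: the sum of the k smallest per-instance gains."""
    return np.sort(gains)[:k].sum()

def k_min_gain_variational(gains, k):
    """Equivalent variational form: max_t [k*t - sum_i max(0, t - g_i)].
    The maximum is attained at t equal to the k-th smallest gain, so the
    combinatorial objective becomes linear inequality constraints in (t, slacks)."""
    t = np.sort(gains)[k - 1]
    return k * t - np.maximum(0.0, t - gains).sum()
```

For gains [3.0, -1.0, 0.5, 2.0] and k = 2, both forms evaluate to -0.5, the sum of the two smallest gains.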

14:45-15:45 Session 36C: Game Playing / Plan Reuse (AAAI)
14:45
Optimizing Objective Function Parameters for Strength in Computer Game-Playing
SPEAKER: unknown

ABSTRACT. The learning of evaluation functions from game records has been widely studied in the field of computer game-playing. Conventional learning methods optimize the evaluation function parameters by using the game records of expert players in order to imitate their play. They utilize objective functions to increase the agreement between the moves selected by game-playing programs and the moves in the records of actual games. Such conventional methods, however, have a problem in that increasing this agreement does not always improve the strength of the program. Indeed, it is not clear how the agreement relates to the strength of the generated program. To address this problem, this paper presents a learning method that optimizes objective function parameters for playing strength. The proposed method employs an evolutionary learning algorithm that uses the programs' Elo ratings, which measure strength, as fitness scores. Experimental results show that the proposed method is effective and that programs that learn using the objective function produced by the proposed method are superior to those that learn using conventional objective functions.

15:00
Filtering with Logic Programs and its Application to General Game Playing

ABSTRACT. Motivated by the problem of building a basic reasoner for general game playing with imperfect information, we address the problem of filtering with logic programs, whereby an agent updates its incomplete knowledge of a program by observations. We develop a filtering method by adapting an existing backward-chaining and abduction method for so-called open logic programs. Experimental results show that this provides an effective and efficient basic "legal" player for general imperfect-information games.

15:15
Parameterized Complexity Results for Plan Reuse
SPEAKER: unknown

ABSTRACT. Planning is a notoriously difficult computational problem of high worst-case complexity. Researchers have been investing significant efforts to develop heuristics or restrictions to make planning practically feasible. Case-based planning is a heuristic approach where one tries to reuse previous experience when solving similar problems in order to avoid some of the planning effort. Plan reuse may offer an interesting alternative to plan generation in some settings.

We provide theoretical results that identify situations in which plan reuse is provably tractable. We perform our analysis in the framework of parameterized complexity, which supports a rigorous worst-case complexity analysis that takes structural properties of the input into account in terms of parameters. A central notion of parameterized complexity is fixed-parameter tractability which extends the classical notion of polynomial-time tractability by utilizing the effect of structural properties of the problem input.

We draw a detailed map of the parameterized complexity landscape of several variants of problems that arise in the context of case-based planning. In particular, we consider the problem of reusing an existing plan, imposing various restrictions in terms of parameters, such as the number of steps that can be added to the existing plan to turn it into a solution of the planning instance at hand.

14:45-15:45 Session 36D: Sample Complexity / Anomaly Detection (AAAI)
14:45
Towards Cohesive Anomaly Mining
SPEAKER: unknown

ABSTRACT. In some applications, such as bioinformatics, social network analysis, and computational criminology, it is desirable to find compact clusters formed by a (very) small portion of objects in a large data set. Although in general this is still a clustering problem, it is not served well by conventional clustering methods, since those methods generally try to assign most of the data objects to clusters. Because such clusters comprise a small number of objects, they are extraordinary and anomalous with respect to the entire data set. In this paper, we model this novel and application-inspired task as the problem of mining cohesive anomalies. We propose a general framework and a principled approach to tackle the problem. The experimental results on both synthetic and real data sets verify the effectiveness and efficiency of our approach.

15:00
A Generalized Student-t Based Approach to Mixed-Type Anomaly Detection
SPEAKER: unknown

ABSTRACT. Anomaly detection for mixed-type data is an important problem that has not been well addressed in the machine learning field. There are two challenging issues for mixed-type datasets, namely modeling mutual correlations between mixed-type attributes and capturing large variations due to anomalies. This paper presents UnREBAD, an Unsupervised Robust Error Buffering approach for Anomaly Detection in mixed-type datasets. A new variant of the generalized linear model is proposed to model the dependency between mixed-type attributes. The model incorporates an error buffering component based on Student-t distribution to absorb the variations caused by anomalies. However, because of the non-Gaussian design, the problem becomes analytically intractable. We propose a novel Bayesian inference approach, which integrates Laplace approximation and several computational optimizations, and is able to efficiently approximate the posterior of high dimensional latent variables by iteratively updating the latent variables in groups. Extensive experimental evaluations based on 13 benchmark datasets demonstrate the effectiveness and efficiency of UnREBAD.
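Why a Student-t error component "buffers" anomalies can be seen from the negative log-likelihoods: Gaussian errors penalize a residual quadratically, while Student-t errors penalize only logarithmically, so a single anomalous observation cannot dominate the fit. A generic sketch (parameters are illustrative, not UnREBAD's):

```python
import numpy as np

def gaussian_nll(r, sigma=1.0):
    """Gaussian negative log-likelihood (up to a constant): quadratic in the
    residual, so one large outlier dominates the total loss."""
    return 0.5 * (r / sigma) ** 2

def student_t_nll(r, nu=3.0, sigma=1.0):
    """Student-t negative log-likelihood (up to a constant): grows only
    logarithmically in the residual, so a large residual (an anomaly) is
    absorbed rather than dominating the fit."""
    return 0.5 * (nu + 1.0) * np.log1p((r / sigma) ** 2 / nu)
```

At a residual of 10, the Gaussian term is 50 while the Student-t term (nu = 3) is about 7, which is the buffering effect the abstract refers to.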

15:15
Sample Complexity and Performance Bounds for Non-parametric Approximate Linear Programming
SPEAKER: unknown

ABSTRACT. One of the most difficult tasks in value function approximation for Markov Decision Processes is finding an approximation architecture that is expressive enough to capture the important structure in the value function, while at the same time not overfitting the training samples. Recent results in non-parametric approximate linear programming (NP-ALP) have demonstrated that this can be done effectively using nothing more than a smoothness assumption on the value function. In this paper we extend these results to the case where samples come from real world transitions instead of the full Bellman equation, adding robustness to noise. In addition, we provide the first max-norm, finite sample performance guarantees for any form of ALP. NP-ALP is amenable to problems with large (multidimensional) or even infinite (continuous) action spaces, and does not require a model to select actions using the resulting approximate solution.

15:30
AAAI-13 Outstanding Paper, Honorable Mention: PAC Optimal Exploration in Continuous Space Markov Decision Processes
SPEAKER: unknown

ABSTRACT. Current exploration algorithms can be classified into two broad categories: heuristic and PAC optimal. While numerous researchers have successfully used heuristic approaches such as ε-greedy exploration, such approaches lack formal finite-sample guarantees and may need a significant amount of fine-tuning to produce good results. PAC optimal exploration algorithms, on the other hand, offer strong theoretical guarantees but are inapplicable in domains of realistic size. The goal of this paper is to bridge the gap between theory and practice by introducing C-PACE, an algorithm which offers strong theoretical guarantees and can be applied to interesting, continuous state problems.
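For contrast with C-PACE's formal guarantees, the ε-greedy heuristic mentioned above is a one-line rule (a generic sketch, not part of the paper):

```python
import numpy as np

def epsilon_greedy(q_values, epsilon, rng):
    """With probability epsilon pick a uniformly random action (explore);
    otherwise pick the action with the highest estimated value (exploit).
    Simple and widely used, but carries no finite-sample guarantee."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))
```

The need to tune epsilon (and often anneal it over time) is exactly the "fine-tuning" the abstract points to, and what PAC optimal methods avoid.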

14:45-15:45 Session 36E: Spotlights and Senior Members (AAAI)
14:45
How can AI help Synthetic Biology?
SPEAKER: Aaron Adler

ABSTRACT. Our primary goal in this talk is to draw the attention of the AI community to a novel and rich application domain, namely Synthetic Biology. Synthetic biology is the systematic design and engineering of biological systems. Synthetic organisms are currently designed at the DNA level, which limits the complexity of the systems. In our talk we will introduce the domain, describe the current workflow used by synthetic biologists, and demonstrate the feasibility of progress in this domain. Problems specific to each AI topic area will be highlighted.

15:15
ISWC: What's Hot, Challenges: Diagnosing Road Traffic Congestions in the Semantic Web
15:45-16:45 Session 37: LBP/Poster sessions (AAAI)