AAAI-13: THE TWENTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE

# AAAI ON TUESDAY, JULY 16TH, 2013

08:30-09:00 Session 9: AAAI Welcome and Awards
09:00-10:00 Session 10: Keynote Address: Mooney
 09:00 AAAI-13 Keynote Address: Grounded Language Learning
SPEAKER: Raymond Mooney
ABSTRACT. Most approaches to semantics in computational linguistics represent meaning in terms of words or abstract symbols. Grounded-language research bases the meaning of natural language on perception and/or action in the (real or virtual) world. Machine learning has become the most effective approach to constructing natural-language systems; however, current methods require a great deal of laboriously annotated training data. Ideally, a computer would be able to acquire language like a child, by being exposed to language in the context of a relevant but ambiguous environment, thereby grounding its learning in perception and action. We will review recent research in grounded language learning and discuss future directions.
10:00-10:20 Coffee Break
10:20-11:20 Session 11: IAAI Joint Invited talk: Birnbaum
 10:20 AAAI/IAAI Joint Invited Talk: Telling Stories at Internet Scale
SPEAKER: Larry Birnbaum
ABSTRACT. Taking full advantage of the massive scale of "big data" will require technologies for analyzing and communicating these data to people, in terms they can understand and act on, at an equally massive scale. The automatic generation of narratives from data offers the promise of meeting this critical need. Our technology, which leverages human editorial judgment at scale, is today generating millions of stories from data, including highly personalized stories, in domains varying from business operations to sports, education, medicine, and finance. The resulting narratives are often indistinguishable from those written by human analysts and writers.
11:20-12:20 Session 12A: Regression
11:20-12:20 Session 12B: Sentiment and Recommendation
 11:20 The Automated Acquisition of Suggestions from Tweets
SPEAKER: unknown
ABSTRACT. This paper targets automatically detecting and classifying users' suggestions from tweets. The short and informal nature of tweets, along with the imbalanced characteristics of suggestion tweets, makes the task extremely challenging. To this end, we develop a classification framework based on Factorization Machines, which are effective and efficient especially in classification tasks with sparse features. Moreover, we tackle the imbalance problem by introducing cost-sensitive learning techniques into Factorization Machines. Extensive experimental studies on a manually annotated real-life data set show that the proposed approach significantly improves on the baseline, yielding a precision of 71.06% and a recall of 67.86%. We also investigate why Factorization Machines perform better. Finally, we introduce the first manually annotated dataset for suggestion classification.

 11:35 A Hierarchical Aspect-Sentiment Model for Online Reviews
SPEAKER: unknown
ABSTRACT. To help users quickly understand the major opinions in massive online reviews, it is important to automatically reveal the latent structure of aspects, sentiment polarities, and the associations between them from the review texts. However, there is little prior work on how to do this effectively. In this paper, we propose a Bayesian nonparametric model, HASM, that discovers a hierarchical structure of aspect-based sentiments, an aspect-sentiment tree, from unlabeled online reviews. In HASM, the whole structure is a tree. Each node is itself a two-level tree, whose root represents an aspect and whose children represent the sentiment polarities associated with it. Each aspect or sentiment polarity is modeled as a distribution over words. To automatically extract both the structure and parameters of the tree, we use the recursive Chinese Restaurant Process (rCRP) as the prior and jointly infer the aspect-sentiment tree from the review texts. Experiments on two real datasets show that our model is comparable to two other hierarchical topic models in terms of quantitative measures of topic trees, and that it achieves better sentence-level classification accuracy than previously proposed aspect-sentiment joint models.

 11:50 A Pattern Matching Based Model for Implicit Opinion Question Identification
SPEAKER: unknown
ABSTRACT. This paper presents the results of developing subjectivity classifiers for Implicit Opinion Question (IOQ) identification. IOQs are defined as opinion questions with no opinion words; an example is "will the U.S. government pay more attention to the Pacific Rim?" Our analysis of community questions on Yahoo! Answers shows that a large proportion of opinion questions are IOQs, so it is important to develop techniques to identify such questions. In this research, we first propose an effective framework based on mutual information and sequential pattern mining to construct an opinion lexicon that contains not only opinion words but also patterns. The discovered words and patterns are then combined with a machine learning technique to identify opinion questions. Experimental results on two datasets demonstrate the effectiveness of our approach.

 12:05 From Semantic to Emotional Space in Probabilistic Sense Sentiment Analysis
SPEAKER: unknown
ABSTRACT. Sentiment similarity of word pairs reflects the distance between the words with regard to their underlying sentiments. This paper aims to infer the sentiment similarity between word pairs with respect to their senses. To achieve this, we propose a probabilistic emotion-based approach built on a hidden emotional model in which the basic human emotions are treated as hidden, which lets us predict a vector of basic emotions for each sense of a word. The emotional vectors are then employed to infer the sentiment similarity of word pairs. We apply the proposed approach to two natural language processing tasks, namely Indirect yes/no Question Answer Pairs (IQAPs) inference and Sentiment Orientation (SO) prediction. Extensive experiments demonstrate that the proposed approach effectively captures the sentiment similarity of word pairs and significantly outperforms semantic similarity measures on both tasks.
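The Factorization Machine classifier used in the first talk of this session scores a feature vector with a bias, linear terms, and factorized pairwise interactions. As a minimal sketch (the standard second-order FM scoring function only; the paper's cost-sensitive weighting is not reproduced, and all names here are illustrative), the well-known O(dk) identity avoids the explicit sum over feature pairs:

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Second-order Factorization Machine score for one feature vector x.

    w0: global bias; w: linear weights, shape (d,); V: factor matrix, shape (d, k).
    Uses the identity: sum_{i<j} <v_i, v_j> x_i x_j
      = 0.5 * sum_f [ (sum_i V[i,f] x_i)^2 - sum_i V[i,f]^2 x_i^2 ],
    which is O(d*k) instead of O(d^2) for the pairwise term.
    """
    linear = w0 + w @ x
    s = V.T @ x                 # shape (k,): per-factor weighted sums
    s2 = (V ** 2).T @ (x ** 2)  # shape (k,): per-factor sums of squares
    pairwise = 0.5 * np.sum(s ** 2 - s2)
    return linear + pairwise
```

For sparse tweet features, only the nonzero entries of x contribute to each sum, which is what makes FMs efficient in this setting.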
11:20-12:20 Session 12C: Markets and Preferences
11:20-12:20 Session 12D: Bayesian Inference and Causality
 11:20 Reduce and Re-Lift: Bootstrapped Lifted Likelihood Maximization for MAP
SPEAKER: Fabian Hadiji
ABSTRACT. By handling whole sets of indistinguishable objects together, lifted belief propagation approaches have rendered large, previously intractable probabilistic inference problems quickly solvable. In this paper, we show that Kumar and Zilberstein's likelihood maximization (LM) approach to MAP inference is liftable too, and actually provides additional structure for optimization. Specifically, it has been recognized that some pseudo marginals may converge quickly, intuitively turning into pseudo evidence. This additional evidence typically changes the structure of the lifted network: it may expand or reduce it. The current lifted network, however, can be viewed as an upper bound on the size of the lifted network required to finish likelihood maximization. Consequently, we re-lift the network only if the pseudo evidence yields a reduced network, which can be computed efficiently on the current lifted network. Our experimental results on Ising models, image segmentation, and relational entity resolution demonstrate that this bootstrapped LM via "reduce and re-lift" finds MAP assignments comparable to those found by the original LM approach, but in a fraction of the time.

 11:35 m-Transportability: Transportability of Causal Effect from Multiple Environments
SPEAKER: unknown
ABSTRACT. We introduce m-transportability, a generalization of transportability, which offers a license to transfer causal information obtained from experiments and observations in m >= 1 source environments to estimate a causal effect in a given target environment. We provide a novel characterization of m-transportability that directly exploits the completeness of do-calculus to obtain the necessary and sufficient conditions for m-transportability. We provide a sound and complete algorithm for deciding m-transportability that: (i) indicates non-m-transportability if a causal relation is not m-transportable from a given set of m source environments to a specified target environment; and (ii) otherwise produces a transport formula, that is, a recipe for combining experimental information from the m source environments with only observational information from the target environment to synthesize an estimate of the desired causal effect.

 11:50 RockIt: Exploiting Parallelism and Symmetry for MAP Inference in Statistical Relational Models
SPEAKER: Jan Noessner
ABSTRACT. In this paper we present RockIt, a maximum a-posteriori (MAP) query engine for statistical relational models. MAP inference in graphical models is an optimization problem and can naturally be mapped to integer linear programs (ILPs). We present several optimizations for the ILP translation of MAP queries, including the meta-algorithm Cutting Plane Aggregation (CPA). CPA exploits context-specific symmetries, that is, symmetries introduced by evidence, and bundles large numbers of linear constraints. The resulting counting constraints lead to more compact ILPs and, more importantly, make the symmetry of the ground model explicit to state-of-the-art ILP solvers. Moreover, we parallelize all parts of the MAP inference pipeline to take advantage of multi-core architectures. Numerous experiments on standard Markov logic network (MLN) benchmarks demonstrate that RockIt outperforms state-of-the-art systems such as Alchemy and Tuffy in both efficiency and quality of results.

 12:05 Causal Transportability with Limited Experiments
SPEAKER: Elias Bareinboim
ABSTRACT. We address the problem of transferring causal knowledge learned in one environment to another, potentially different environment, when only limited experiments may be conducted at the source. This generalizes the treatment of transportability introduced in [Pearl and Bareinboim, 2011; Bareinboim and Pearl, 2012b], which deals with transferring causal information when any experiment can be conducted at the source. Given that it is not always feasible to conduct certain controlled experiments, we consider the decision problem of whether experiments on a selected subset Z of variables, together with qualitative assumptions encoded in a diagram, may render causal effects in the target environment computable from the available data. This problem, which we call z-transportability, reduces to ordinary transportability when Z is all-inclusive and, like the latter, can be given a syntactic characterization using the do-calculus [Pearl, 1995; 2000]. This paper establishes a necessary and sufficient condition for causal effects in the target domain to be estimable from both the non-experimental information available and the limited experimental information transferred from the source. We further provide a complete algorithm for computing the transport formula, that is, a way of fusing experimental and observational information to synthesize an unbiased estimate of the desired causal relation.
11:20-12:20 Session 12E: Knowledge-Based Systems
 11:20 Graph Traversal Methods for Reasoning in Large Knowledge-Based Systems
SPEAKER: unknown
ABSTRACT. Commonsense reasoning at scale is a core problem for cognitive systems. In this paper, we discuss two ways in which heuristic graph traversal methods can be used to generate plausible inference chains. First, we discuss how Cyc's predicate-type hierarchy can be used to get reasonable answers to queries. Second, we explain how connection graph-based techniques can be used to identify script-like structures. Finally, we demonstrate through experiments that these methods lead to significant improvements in accuracy for both Q/A and script construction.

 11:35 Automatic Extraction of Efficient Axiom Sets from Large Knowledge Bases
SPEAKER: unknown
ABSTRACT. Efficient reasoning in large knowledge bases is an important problem for cognitive systems. Hand-optimization of reasoning becomes impractical as KBs grow, and impossible as knowledge is automatically added via knowledge capture or machine learning. This paper describes a method for automatically extracting axioms for efficient inference over large knowledge bases, given a set of query types and information about the types of facts currently in the KB as well as what might be learned. We use the highly right-skewed distribution of predicate connectivity in large knowledge bases to prune intractable regions of the search space. We show the efficacy of these techniques via experiments using queries from a cognitive system that learns by reading. Results show that these methods lead to an order-of-magnitude improvement in time with minimal loss in coverage.

 11:50 Preemptive Strategies for Overcoming the Forgetting of Goals
SPEAKER: unknown
ABSTRACT. Maintaining and pursuing multiple goals over varying time scales is an important ability for artificial agents in many cognitive architectures. Goals that remain suspended for long periods, however, are prone to being forgotten. This paper presents a class of preemptive strategies that allow agents to selectively retain goals in memory and to recover forgotten goals. Preemptive strategies work by retrieving and rehearsing goals at triggers, which are either periodic or predictive of the opportunity to act. Since cognitive architectures contain common hierarchies of memory systems and share similar forgetting mechanisms, these strategies work across multiple architectures. We evaluate their effectiveness in a simulated mobile robot controlled by Soar, and demonstrate how preemptive strategies can be adapted to different environments and agents.

 12:05 Learning to Efficiently Pursue Communication Goals on the Web with the GOSMR Architecture
SPEAKER: Kevin Gold
ABSTRACT. We present GOSMR ("goal oriented scenario modeling robots"), a cognitive architecture designed to show coordinated, goal-directed behavior when using networked computer applications, focusing on the web browser as a case study. The architecture combines a variety of artificial intelligence techniques, including planning, temporal difference learning, elementary reasoning over uncertainty, and communication, but is designed to be computationally lightweight. Its intended use is to drive applications in large-scale cyber security experiments in which users' adaptation in the face of resource denial should be intelligent but varied. The planning system performs temporal difference learning of action times, discounts goals according to hyperbolic discounting of time-to-completion and chance of success, takes into account the assertions of other agents, and separates abstract action from site-specific affordances. Our experiment, in which agents learn to prefer a social-networking-style site for sending and receiving messages, shows that utility-proportional goal selection is a reasonable alternative to Boltzmann goal selection for producing a rational mix of behavior.
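The GOSMR abstract above mentions two concrete mechanisms: hyperbolic discounting of goals by time-to-completion and chance of success, and utility-proportional goal selection as an alternative to Boltzmann selection. A rough sketch under the assumption that the discount takes the standard hyperbolic form 1/(1 + k*t); the exact GOSMR formula and parameter names are not given in the abstract:

```python
import random

def goal_utility(reward, p_success, time_to_completion, k=1.0):
    # Hyperbolic discounting: expected reward scaled by chance of success,
    # divided by (1 + k * time). The precise functional form is an assumption.
    return p_success * reward / (1.0 + k * time_to_completion)

def utility_proportional_choice(utilities, rng=random.random):
    # Pick index i with probability utilities[i] / sum(utilities),
    # as an alternative to Boltzmann (softmax) selection.
    total = sum(utilities)
    r = rng() * total
    acc = 0.0
    for i, u in enumerate(utilities):
        acc += u
        if r <= acc:
            return i
    return len(utilities) - 1
```

Unlike Boltzmann selection, utility-proportional choice needs no temperature parameter: an option with twice the discounted utility is simply chosen twice as often.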
11:20-12:20 Session 12F: Spotlights and Senior Members
 11:20 SOCS: What's Hot
SPEAKER: unknown
ABSTRACT. Grand ideas and visions for search.

 11:35 SOCS: Challenges
SPEAKER: unknown

 11:50 AAMAS: Challenges
SPEAKER: unknown
ABSTRACT. Curing Robot Autism: A Challenge.

 12:05 SDM: Best Paper: Triadic Measures on Graphs: The Power of Wedge Sampling
SPEAKER: unknown
12:20-12:30 Session 14A
 12:20 Predicting Professions through Probabilistic Model under Social Context
SPEAKER: unknown
ABSTRACT. In this paper, we investigate the problem of predicting people's professions under social context. Previous work considering clothing information as well as fore/background context preliminarily proves the feasibility of predicting professions. In this paper, we discuss this problem in a more general case: multiple people in one photo with arbitrary poses. We argue that with appropriately built partial body features, spatial relations, and background context, a more appealing result is achieved by a probabilistic model. We conduct experiments on 14 representative professions with over 7,000 images, and demonstrate the model's superiority with impressive results.

 12:22 Algorithm Selection in Bilateral Negotiation
SPEAKER: unknown
ABSTRACT. Despite the abundance of strategies in the literature on repeated negotiation under incomplete information, there is no single negotiation strategy that is optimal for all possible settings. Thus, agent designers face an "algorithm selection" problem: which negotiation strategy to choose when facing a new negotiation. Our approach to this problem is to predict the performance of different strategies based on structural features of the domain and to select the negotiation strategy that is predicted to be most successful using a "meta-agent". This agent was able to outperform all of the finalists of the recent Automated Negotiation Agent Competition (ANAC). Our results offer insights for agent designers, demonstrating that "a little learning goes a long way", despite the inherent uncertainty associated with negotiation under incomplete information.

 12:24 RAProp: Ranking Tweets by Exploiting the Tweet/User/Web Ecosystem and Inter-Tweet Agreement
SPEAKER: unknown
ABSTRACT. The increasing popularity of Twitter makes improved trustworthiness and relevance assessment of tweets much more important for search. However, given the limited size of tweets, it is hard to extract ranking measures from a tweet's content alone. We propose a method of ranking tweets by generating a reputation score for each tweet that is based not just on content, but also on additional information from the Twitter ecosystem of users, tweets, and the webpages that tweets link to. The reputation score powers a novel method of ranking tweets by propagating reputation over an agreement graph based on tweets' content similarity. An evaluation of our method on 16 million tweets from the TREC 2011 Microblog Dataset shows that it doubles the precision of the baseline method and performs better than the highest-performing method on that dataset.

 12:26 Learning CP-net Preferences Online from User Queries
SPEAKER: unknown
ABSTRACT. CP-nets (Boutilier et al., 1999) offer a potentially compact qualitative representation of human preferences operating under ceteris paribus ("with all else being equal") semantics. In this paper we present a novel algorithm through which an agent learns the preferences of a user. CP-nets are used to represent such preferences and are learned online through a series of queries generated by the algorithm. Our algorithm builds a CP-net for the user by creating nodes and initializing CPTs, then gradually adding edges and forming more complex CPTs consistent with responses to queries until a confidence threshold is reached.

 12:28 Take or Wait? Learning Turn-Taking from Multiparty Data
SPEAKER: unknown
ABSTRACT. We build turn-taking models for autonomous characters in language-based interactions with small groups of children. Two models explore the use of support vector machines given the same multimodal features but different methods for collecting turn-taking labels.
12:20-12:30 Session 14B
 12:20 Fast, Near-Optimal Computation for Multi-robot Path Planning on Graphs
SPEAKER: unknown
ABSTRACT. We report a new method for computing near-optimal makespan solutions to the multi-robot path planning problem on graphs. Our focus here is on hard instances: those with up to 85% of all graph nodes occupied by robots. Our method yields a 100-1000x speedup compared with existing methods. At the same time, our solutions have much smaller, and often optimal, makespans.

 12:22 Uncertainty Reduction For Active Image Clustering via a Hybrid Global-Local Uncertainty Model
SPEAKER: unknown
ABSTRACT. We propose a novel combined global/local model for active semi-supervised spectral clustering based on the principle of uncertainty reduction. We iteratively compute the derivative of the eigenvectors produced by spectral decomposition with respect to each item/image, and combine this with the local label entropy provided by the current clustering results to estimate the uncertainty-reduction potential of each item in the dataset. We then generate pairwise queries with respect to the best candidate item and retrieve the needed constraints from the user. We evaluate our method on three image datasets (faces, leaves, and dogs) and consistently demonstrate performance superior to the current state of the art.

 12:24 Towards Joint Inference for Complex Ontology Matching
SPEAKER: Jan Noessner
ABSTRACT. In this paper, we show how to model the ontology matching problem as a problem of joint inference. In contrast to existing approaches, we distinguish between the layer of labels and the layer of concepts and properties; entities from both layers appear as first-class citizens in our model. We present an example and explain the benefits of our approach. Moreover, we argue that our approach can be extended to generate correspondences involving complex concept descriptions.

 12:26 AMRec: An Intelligent System for Academic Method Recommendation
SPEAKER: unknown
ABSTRACT. Finding new methods for research problems is a key task in a researcher's career, and it is usually very difficult for new researchers to find good methods because they lack research experience. To help researchers carry out their work more conveniently, this paper describes a novel recommendation system, AMRec, that recommends new academic methods for research problems. Our proposed system first extracts academic concepts (Tasks and Methods) and their relations from the academic literature, and then leverages a regularized matrix factorization model for academic method recommendation. Preliminary evaluation results are also reported and discussed.

 12:28 An Ensemble of Linearly Combined Reinforcement-Learning Agents
SPEAKER: unknown
ABSTRACT. Reinforcement-learning (RL) algorithms are often tweaked and tuned to specific environments when applied, calling into question whether learning can truly be considered autonomous in these cases. In this work, we show how more robust learning across environments is possible by adopting an ensemble approach to reinforcement learning. Our approach learns a weighted linear combination of Q-values from multiple independent learning algorithms. In our evaluations in generalized RL environments, we find that the algorithm compares favorably to the best tuned algorithm. Our work provides a promising basis for further study into the use of ensemble methods in RL.
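The ensemble-RL abstract above is built on a simple combination step: the ensemble's Q-value is a weighted sum of each learner's Q-value, and the agent acts greedily with respect to that sum. A sketch with illustrative data structures (dicts mapping (state, action) pairs to Q estimates; the paper's method for learning the weights is not shown):

```python
def ensemble_q(q_tables, weights, state):
    """Weighted linear combination of Q-values from independent learners.

    q_tables: list of dicts mapping (state, action) -> Q estimate.
    weights:  one scalar weight per learner.
    Returns a dict mapping action -> combined Q-value for the given state.
    """
    combined = {}
    for w, q in zip(weights, q_tables):
        for (s, a), v in q.items():
            if s == state:
                combined[a] = combined.get(a, 0.0) + w * v
    return combined

def greedy_action(q_tables, weights, state):
    # Act greedily with respect to the combined Q-values.
    combined = ensemble_q(q_tables, weights, state)
    return max(combined, key=combined.get)
```

Because the combination is linear, a learner whose weight is driven to zero drops out of the ensemble entirely, which is one way such a scheme can adapt across environments.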
12:20-12:30 Session 14C
 12:20 Learning Tractable Graphical Models Using Mixture of Arithmetic Circuits
SPEAKER: unknown
ABSTRACT. In recent years, there has been growing interest in learning tractable graphical models in which exact inference is efficient. Two main approaches are to restrict the inference complexity directly, as done by low-treewidth graphical models and arithmetic circuits (ACs), or to introduce latent variables, as done by mixtures of trees, latent tree models, and sum-product networks (SPNs). In this paper, we combine these approaches to learn a mixture of ACs (MAC). A mixture can represent many distributions exponentially more compactly than a single AC. By using ACs as mixture components, MAC can represent complex distributions using many fewer components than other mixture models require. MAC generalizes ACs, mixtures of trees, latent class models, and thin junction trees, and can be seen as a special case of an SPN. Compared to state-of-the-art algorithms for learning SPNs and other tractable models, MAC is consistently more accurate while maintaining tractable inference.

 12:22 The Value of Ignorance about the Number of Players
SPEAKER: unknown
ABSTRACT. Our starting point is the model of Meir et al. [AAAI'12], where agents have uncertainty over the actual number of participants in a congestion game, formalized as a prior distribution over subsets of players. In this work we consider the idea of partial information revelation (signaling) for such games where the number of participants is unknown.

 12:24 Chance-Constrained Strong Controllability of Temporal Plan Networks with Uncertainty
SPEAKER: unknown
ABSTRACT. This work presents a novel approach for determining chance-constrained strong controllability of Temporal Plan Networks with Uncertainty (TPNU) by framing it as an Optimal Satisfiability Problem (OpSAT).

 12:26 A Novel Human Computation Game for Critique Aggregation
SPEAKER: unknown
ABSTRACT. We present a novel human computation game that elicits not only annotations but also direct critique of a system from the players. The game is based on the popular board game Dixit. We present the results of the initial run of the game, in which the answers of 15 players were used to profile the mistakes of an aspect-based opinion mining system. We show that, for this complex problem, the gameplay allowed us to identify the major faults and generate improvement suggestions.
12:20-12:30 Session 14D
12:30-13:45 Lunch Break
13:45-14:45 Session 15A: Invited talk: Kumar
 13:45 AAAI-13 Invited Talk: Aerial Robot Swarms
SPEAKER: Vijay Kumar
ABSTRACT. Autonomous microaerial robots can operate in three-dimensional unstructured environments, and offer many opportunities for environmental monitoring, search and rescue, and first response. I will describe the challenges in developing small, agile robots and our recent work in the areas of (1) control and planning, (2) state estimation and mapping, and (3) coordinating large teams of robots, with applications to cooperative manipulation and transport, construction, and exploration and mapping.
14:45-15:45 Session 16A: Clustering
 14:45 Smart Multi-task Bregman Clustering and Multi-task Kernel Clustering
SPEAKER: unknown
ABSTRACT. Multitask Bregman Clustering (MBC) alternately updates clusters and learns relationships between the clusters of different tasks, and the two phases boost each other. However, the boosting does not always have positive effects; it may also have negative ones. Another issue with MBC is that it cannot handle nonlinearly separable data. In this paper, we show that MBC's use of cluster relationships to boost the cluster-updating phase may cause negative effects, i.e., cluster centroids may be skewed under some conditions. We propose a smart multi-task Bregman clustering (S-MBC) algorithm that identifies negative effects of the boosting and avoids them when they occur. We then extend S-MBC to a smart multi-task kernel clustering (S-MKC) framework to handle nonlinearly separable data, and propose a specific implementation of the framework that can be applied with any Mercer kernel. Experimental results confirm our analysis and demonstrate the superiority of our proposed methods.

 15:00 Spectral Rotation versus K-means in Spectral Clustering
SPEAKER: unknown
ABSTRACT. Spectral clustering has been a popular data clustering algorithm. This category of approaches often resorts to other clustering methods, such as K-means, to obtain the final clusters. The potential flaw of this common practice is that the relaxed continuous spectral solution obtained may deviate severely from the true discrete solution. In this paper, we propose to impose an additional orthonormal constraint to better approximate the optimal continuous solution to the graph-cut objective functions. Such a method, called spectral rotation in the literature, optimizes the spectral clustering objective functions better than K-means and improves clustering accuracy. We provide an efficient algorithm to solve the new problem rigorously, which is not significantly more costly than K-means. We also establish the connection between our method and K-means to provide theoretical motivation. Experimental results show that our algorithm consistently reaches better cuts and outperforms classic spectral clustering methods on clustering metrics.

 15:15 Unsupervised Cluster Matching via Probabilistic Latent Variable Models
SPEAKER: unknown
ABSTRACT. We propose a probabilistic latent variable model for unsupervised cluster matching, the task of finding correspondences between clusters of objects in different domains. Existing object matching methods find one-to-one matchings between two domains; the proposed model finds many-to-many matchings and can handle multiple domains with different numbers of objects. The model assumes an infinite number of latent vectors shared by all domains, with each object generated from one of the latent vectors and a domain-specific linear projection. By inferring the latent vector used to generate each object, objects in different domains are clustered into shared groups, and matchings between clusters are thus found in an unsupervised manner. We present efficient inference procedures based on a stochastic EM algorithm. The effectiveness of the proposed model is demonstrated with experiments on synthetic and real data sets.

 15:30 Clustering with Complex Constraints - Algorithms and Applications
SPEAKER: unknown
ABSTRACT. Clustering with constraints is an important and developing area. However, most work is confined to conjunctions of simple together and apart constraints, which limits its usability. In this paper, we propose a new formulation of constrained clustering that can incorporate not only existing types of constraints but also more complex logical combinations beyond conjunctions. We first show how any statement in conjunctive normal form (CNF) can be represented as a linear inequality. Since existing clustering formulations such as spectral clustering cannot easily incorporate these linear inequalities, we propose a quadratic programming (QP) clustering formulation to accommodate them. This new formulation allows much more complex guidance in clustering. We demonstrate the effectiveness of our approach in two applications, on text and on personal information management, and compare our algorithm against an existing constrained spectral clustering algorithm to show its efficiency in computational time.
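The CNF-to-linear-inequality step mentioned in the last abstract of this session is the standard Boolean-to-ILP translation: a clause over 0/1 indicator variables is satisfied exactly when the sum of its positive literals plus (1 - x) for each negated literal is at least 1. A small sketch of that encoding (the paper's full QP clustering formulation is not reproduced, and variable names are illustrative):

```python
def clause_to_inequality(clause):
    """Encode one CNF clause over 0/1 variables as (coeffs, rhs), meaning
    sum(coeffs[v] * x[v]) >= rhs.

    A clause like (x0 or not x2) becomes x0 + (1 - x2) >= 1,
    i.e. x0 - x2 >= 0. clause: list of (var, is_positive) literals.
    """
    coeffs, rhs = {}, 1
    for var, positive in clause:
        if positive:
            coeffs[var] = coeffs.get(var, 0) + 1
        else:
            coeffs[var] = coeffs.get(var, 0) - 1
            rhs -= 1  # move the constant from (1 - x) to the right-hand side
    return coeffs, rhs

def satisfies(assignment, coeffs, rhs):
    # Check the inequality for a 0/1 assignment (dict var -> 0 or 1).
    return sum(c * assignment[v] for v, c in coeffs.items()) >= rhs
```

A conjunction of clauses then becomes one linear inequality per clause, which is what lets arbitrary CNF guidance enter a QP or ILP formulation.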
14:45-15:45 Session 16B: Reinforcement Learning
 14:45 Multi-Armed Bandit with Budget Constraint and Variable CostsSPEAKER: unknownABSTRACT. We study the multi-armed bandit problems with budget constraint and variable costs (MAB-BV). In this setting, pulling an arm will receive a random reward together with a random cost, and the objective of an algorithm is to pull a sequence of arms in order to maximize the expected total reward with the costs of pulling those arms complying with a budget constraint. This new setting models many Internet applications (e.g., ad exchange, sponsored search, and cloud computing) in a more accurate manner than previous settings where the pulling of arms is either costless or with a fixed cost. We propose two UCB based algorithms for the new setting. The first algorithm needs prior knowledge about the lower bound of the expected costs when computing the exploration term. The second algorithm eliminates this need by estimating the minimal expected costs from empirical observations, and therefore can be applied to more real-world applications where prior knowledge is not available. We prove that both algorithms have nice learning abilities, with regret bounds of $O(\ln B)$. Furthermore, we show that when applying our proposed algorithms to a previous setting with fixed costs (which can be regarded as our special case), one can improve the previously obtained regret bound. Our simulation results on real-time bidding in ad exchange verify the effectiveness of the algorithms and are consistent with our theoretical analysis. 15:00 Basis Adaptation for Sparse Nonlinear Reinforcement LearningSPEAKER: Sridhar MahadevanABSTRACT. This paper presents a new approach to basis adaptation in reinforcement learning (RL) using the recently proposed {\em mirror-descent} framework. Mirror descent can be viewed as an enhanced gradient method, particularly suited to minimization of convex functions in high-dimensional spaces. 
Unlike traditional gradient methods, mirror descent undertakes gradient updates of weights in both the dual space and primal space, which are linked together using a Legendre transform. Mirror descent can be viewed as a proximal algorithm where the distance generating function used is a Bregman divergence. We introduce a general framework for nonlinear separable value function approximation based on finding Frechet gradients of an error function based on variable projection functionals. We then show how to combine basis adaptation methods with proximal-gradient based temporal-difference (TD) methods and present a new class of regularized TD methods, which combine feature selection through sparse $L_1$ regularization and basis adaptation. Experimental results are provided to illustrate and validate the approach. 15:15 Pruning for Monte Carlo Distributed Reinforcement Learning in Decentralized POMDPsSPEAKER: Bikramjit BanerjeeABSTRACT. Decentralized partially observable Markov decision processes (Dec-POMDPs) offer a powerful modeling technique for realistic multi-agent coordination problems under uncertainty. Prevalent solution techniques are centralized and assume prior knowledge of the model. Recently a Monte Carlo based distributed reinforcement learning approach was proposed, where agents take turns to learn best responses to each other’s policies. This promotes decentralization of the policy computation problem, and relaxes reliance on the full knowledge of the problem parameters. However, this Monte Carlo approach has a large sample complexity, which we address in this paper. In particular, we propose and analyze a modified version of the previous algorithm that adaptively eliminates parts of the experience tree from further exploration, thus requiring fewer samples while ensuring unchanged confidence in the learned value function. 
Experiments demonstrate significant reduction in sample complexity – the maximum reductions ranging from 61% to 93% over different benchmark Dec-POMDP problems – with the final policies often being better due to more focused exploration. 15:30 Structured Kernel-Based Reinforcement LearningSPEAKER: unknownABSTRACT. Kernel-based reinforcement learning (KBRL) is a popular approach to learning non-parametric value function approximations. In this paper, we present structured KBRL, a paradigm for kernel-based RL that allows for modeling independencies in the transition and reward models of problems. Real-world problems often exhibit this structure and can be solved more efficiently when it is modeled. We make three contributions. First, we motivate our work, define a structured backup operator, and prove that it is a contraction. Second, we show how to evaluate our operator efficiently. Our analysis reveals that the fixed point of the operator is the optimal value function in a special factored MDP. Finally, we evaluate our method on a synthetic problem and compare it to two KBRL baselines. In most experiments, we learn better policies than the baselines from an order of magnitude less training data.
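The budgeted-bandit setting in the first abstract above can be illustrated with a toy simulation. This is a hedged sketch of a generic UCB-style policy for variable costs, not the authors' algorithms: the reward/cost ratio index, the Bernoulli reward and cost draws, and all parameter choices are assumptions made for illustration.

```python
import math
import random

def budgeted_ucb(reward_means, cost_means, budget, seed=0):
    """Toy simulation of a UCB-style policy for bandits with a budget
    and variable (here Bernoulli) costs: pull the arm with the best
    empirical reward/cost ratio plus an exploration bonus, until the
    budget runs out. A hedged sketch, not the paper's algorithms."""
    rng = random.Random(seed)
    k = len(reward_means)
    pulls = [0] * k
    reward_sum = [0.0] * k
    cost_sum = [0.0] * k
    total_reward, spent, t = 0.0, 0.0, 0

    def draw(mean):  # Bernoulli draws keep the simulation simple
        return 1.0 if rng.random() < mean else 0.0

    while spent < budget:
        t += 1
        if t <= k:  # initialization: pull each arm once
            arm = t - 1
        else:
            def index(i):
                ratio = reward_sum[i] / max(cost_sum[i], 1e-9)
                return ratio + math.sqrt(2.0 * math.log(t) / pulls[i])
            arm = max(range(k), key=index)
        r, c = draw(reward_means[arm]), draw(cost_means[arm])
        pulls[arm] += 1
        reward_sum[arm] += r
        cost_sum[arm] += c
        spent += c
        total_reward += r
    return total_reward, pulls
```

On two arms with equal expected costs, the policy concentrates its pulls on the arm with the better reward/cost ratio once the exploration bonus decays.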
14:45-15:45 Session 16C: Recognition and Detection
14:45-15:45 Session 16D: Situated Interaction
 14:45 Integrating Programming by Example and Natural Language ProgrammingSPEAKER: unknownABSTRACT. Mehdi Manshadi, Dan Gildea and James Allen. We motivate the integration of programming by example and natural language programming by developing a system for specifying programs for simple text editing operations. The programs are described with unconstrained natural language instructions together with a single example of input/output. We show that natural language allows the system to deduce the correct program much more often and much faster than is possible with the input/output example(s) alone, showing that natural language programming and programming by example can be combined in a way that overcomes the ambiguities that both methods suffer from individually, with a minimum of additional effort from the user. (This paper is from the main AAAI technical track, paper 251.) 15:00 A Hybrid Architectural Approach to Understanding and Appropriately Generating Indirect Speech ActsSPEAKER: unknownABSTRACT. Current approaches to handling indirect speech acts (ISAs) do not account for their sociolinguistic underpinnings (i.e., politeness strategies). Deeper understanding and appropriate generation of indirect acts will require mechanisms that integrate natural language (NL) understanding and generation with social information about agent roles and obligations, which we introduce in this paper. Additionally, we tackle the problem of understanding and handling indirect answers that take the form of either speech acts or physical actions, which requires an inferential, plan-reasoning approach. In order to enable artificial agents to handle an even wider variety of ISAs, we present a hybrid approach, utilizing both the idiomatic and inferential strategies. We then demonstrate our system successfully generating indirect requests and handling indirect answers, and discuss avenues of future research.
15:15 An Agent Model for the Appraisal of Normative Events Based in In-Group and Out-Group RelationsSPEAKER: unknownABSTRACT. Emotional synthetic characters are able to evaluate (appraise) events as positive or negative, taking into account several factors to trigger their emotional states. Currently, the vast majority of appraisal models for synthetic characters consider factors related to the goals and preferences of the characters. We argue that appraisals that only take these "personal" factors into consideration are incomplete, as other more social factors, such as norms and the social context, including in-group and out-group relations, should be considered as well. Without them, moral emotions such as shame cannot be appraised, limiting the believability of the characters in certain situations. We present a model for the appraisal of characters' actions that evaluates whether actions from in-group and out-group members conform to social norms, and generates different emotions according to the social relation between the characters. The model was implemented in an architecture for virtual agents and evaluated with humans. Results suggest that the emotions generated by our model are perceived as adequate by the participants given the social context, and that participants experienced emotions very similar, in both type and intensity, to those appraised and generated by the characters. 15:30 SALL-E: Situated Agent for Language LearningSPEAKER: unknownABSTRACT. We describe ongoing research towards building a cognitively plausible system for near one-shot learning of the meanings of attribute words and object names, by grounding them in a sensory model.
The system learns incrementally from human demonstrations recorded with the Microsoft Kinect, in which the demonstrator can use unrestricted natural language descriptions. We achieve near one-shot learning of simple objects and attributes by utilizing child language learning strategies and focusing solely on examples where the learning agent is confident, ignoring the rest of the data. We evaluate the system's learning ability by having it generate descriptions of presented objects, including objects it has never seen before, and comparing the system's responses against collected human descriptions of the same objects. We propose that retrieving object examples with k-nearest neighbor using Mahalanobis distance corresponds to a cognitively plausible representation of objects. Our results show great promise for achieving rapid, near one-shot, incremental learning of word meanings.
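The k-nearest-neighbor retrieval under Mahalanobis distance mentioned in the SALL-E abstract can be sketched as follows. The function name, the feature layout, and the use of a pseudo-inverse for the covariance are illustrative choices, not details from the paper.

```python
import numpy as np

def mahalanobis_knn(query, examples, k=3):
    """Retrieve the k stored examples closest to a query feature
    vector under the Mahalanobis distance, which whitens the feature
    space by the data covariance. A generic sketch; names and the
    pseudo-inverse are illustrative assumptions."""
    X = np.asarray(examples, dtype=float)
    cov = np.atleast_2d(np.cov(X, rowvar=False))
    cov_inv = np.linalg.pinv(cov)  # pseudo-inverse guards against singular covariance
    diffs = X - np.asarray(query, dtype=float)
    # squared Mahalanobis distance of every stored example to the query
    d2 = np.einsum('ij,jk,ik->i', diffs, cov_inv, diffs)
    return np.argsort(d2)[:k]
```

Unlike plain Euclidean k-NN, this scales each feature dimension by its observed variance, so a small deviation along a tightly clustered feature counts for more than the same deviation along a noisy one.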
14:45-15:45 Session 16E: Spotlights and Senior Members
 14:45 Modern Dynamic Organ Exchanges: Algorithms and Market DesignSPEAKER: Tuomas SandholmABSTRACT. In kidney exchanges, patients with kidney disease can obtain compatible donors by swapping their own willing but incompatible donors. The clearing problem involves finding a social welfare maximizing set of non-overlapping short cycles. We proved this is NP-hard. It was one of the main obstacles to a national kidney exchange. We developed the first (and still only) algorithm capable of clearing these exchanges optimally on a nationwide scale. The key was incremental problem formulation because the formulation that gives tight LP bounds is too large to even store. On top of the branch-and-price paradigm we developed techniques that dramatically improve runtime and memory usage. Furthermore, clearing is actually an online problem where patient-donor pairs and altruistic donors appear and expire over time. We first developed trajectory-based online stochastic optimization algorithms (that use our optimal offline solver as a subroutine) for this. Such algorithms tend to be too compute-intensive at run time, so recently we developed a general approach for giving batch algorithms guidance about the future without a run-time hit. It learns potentials of elements of the problem, and then uses them in each batch clearing. I will share experiences from using our algorithms to run the UNOS US-wide kidney exchange since its inception two and a half years ago, and two regional ones before that. We introduced several design enhancements to the exchanges. For one, we used our algorithms to launch the first never-ending altruistic donor chains. I will present results – both theoretical and experimental – on the role of long chains. I will present probabilistic planning algorithms that are robust against last-minute execution failures - currently the main challenge in kidney exchange. 
I will discuss how general-purpose integer program solvers do not scale to this task, and will present a brand new branch-and-price algorithm that does. I will also give an overview about other issues in kidney exchanges such as sniping in the context of competing exchanges, and transplant centers' incentives to hide some of their pairs. Finally, I will discuss simulation results about the possibility of liver lobe exchanges and cross-organ exchanges. The talk covers material from the following papers of ours. In addition, the talk will cover results by Alvin Roth, Utku Unver, Itai Ashlagi, and others. - Algorithms for Robust Kidney Exchange Clearing. (With Dickerson, J. and Procaccia, A.) - Liver and Multi-Organ Exchange. (With Dickerson, J.) Under review. - Dynamic Matching via Weighted Myopia with Application to Kidney Exchange. AAAI-12. (With Dickerson, J. and Procaccia, A.) - Optimizing Kidney Exchange with Transplant Chains: Theory and Reality. AAMAS-12. (With Dickerson, J. and Procaccia, A.) - Online Stochastic Optimization in the Large: Application to Kidney Exchange. IJCAI-09. (With Awasthi, P.) - A Nonsimultaneous, Extended, Altruistic-Donor Chain. New England Journal of Medicine 360(11), March 2009. (With Rees, M., Kopke, J., Pelletier, R., Segev, D., Rutter, M., Fabrega, A., Rogers, J., Pankewycz, O., Hiller, J., Roth, A., Ünver, U., and Montgomery, R.) - Clearing Algorithms for Barter Exchange Markets: Enabling Nationwide Kidney Exchanges. EC-07. (With Abraham, D. and Blum, A.) 15:15 Data Mining Social Media for Public Health ApplicationsSPEAKER: Henry KautzABSTRACT. There is much activity in data mining social media for marketing campaigns, financial prediction, and similar purposes. Recently, however, a smaller group of researchers have begun to leverage this sensor network for a singular public good: modeling public health at a population scale. 
Researchers have shown, for example, that Twitter postings can be used to track and predict influenza and to detect affective disorders such as depression. Such work provides strong evidence that there is a rich "health signal" in social media.
14:45-15:45 Session 16H: Crowdsourcing
 14:45 Better Human Computation Through Principled VotingSPEAKER: unknownABSTRACT. Designers of human computation systems often face the need to aggregate noisy information provided by multiple people. While voting is often used for this purpose, the choice of voting method is typically not principled. We conduct extensive experiments on Amazon Mechanical Turk to better understand how different voting rules perform in practice. We also demonstrate that our empirical conclusions differ from what popular theoretical models would predict. Our short-term goal is to inform the design of better human computation systems; our long-term goal is to spark an interaction between researchers in (computational) social choice and human computation. 15:00 Clustering CrowdsSPEAKER: unknownABSTRACT. Crowdsourcing is often used to create a data set for supervised learning. Because the quality of the data set heavily depends on the abilities of workers, many methods have been proposed to learn a classifier from such a data set. We observe that workers form clusters according to their abilities. To utilize this knowledge, we propose a method to jointly learn a classifier and clusters of workers from a crowd-generated data set. The proposed method has two advantages. One is that it realizes robust estimation of a classifier, because it utilizes clusters of workers whereas other methods do not. The other is that we obtain clusters of workers, which help us analyze their properties. Experimental results on synthetic and real data sets indicate that the proposed method can estimate a classifier robustly. In addition, clustering workers also works well: by applying the proposed method, an outlier worker was identified in the real data set. 15:15 The Effects of Performance-Contingent Financial Incentives in Online Labor MarketsSPEAKER: unknownABSTRACT.
Online labor markets such as Amazon Mechanical Turk (MTurk) have emerged as platforms that facilitate the allocation of productive effort across global economies. Many of these markets compensate workers with monetary payments. We study the effects of performance-contingent financial rewards on work quality and worker effort in MTurk via two experiments. We find that the magnitude of performance-contingent financial rewards alone affects neither quality nor effort. However, when workers work on two tasks of the same type in sequence, the change in the magnitude of the reward over the two tasks affects both. In particular, both work quality and worker effort increase (alternatively decrease) as the reward increases (alternatively decreases) for the second task. This suggests an anchoring effect in workers' perception of incentives on MTurk, and that this effect can be leveraged in workflow design to increase the effectiveness of financial incentives. 15:30 Hotspotting --- A Probabilistic Graphical Model For Image Object Localization through CrowdsourcingSPEAKER: unknownABSTRACT. Object localization is an image annotation task which consists of finding the location of a target object in an image. It is common to crowdsource annotation tasks and aggregate responses to estimate the true annotation. While for other kinds of annotations consensus is simple and powerful, it cannot be applied to object localization as effectively, due to the task's rich answer space and inherent noise in responses. We propose a probabilistic graphical model to localize objects in images based on responses from the crowd. We improve upon natural aggregation methods such as the mean and the median by simultaneously estimating the difficulty level of each question and the skill level of every participant.
We empirically evaluate our model on crowdsourced data and show that our method outperforms simple aggregators both in estimating the true locations and in ranking participants by their ability. We also propose a simple adaptive sourcing scheme that works well for very sparse datasets.
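Several of the abstracts above aggregate noisy crowd responses by jointly estimating answers and worker skill. A minimal sketch of that general idea, not of any specific model from these papers: alternate between skill-weighted averaging of each item's responses and re-scoring each worker by agreement with the current estimates.

```python
import statistics

def skill_weighted_aggregate(responses, iterations=5):
    """Jointly estimate per-item answers and per-worker skills by
    alternating (a) skill-weighted averaging per item and (b)
    re-scoring each worker by inverse mean squared error against
    the current estimates. `responses[i]` maps worker id -> scalar
    annotation for item i; this is a generic sketch, not the
    graphical model from the papers above."""
    workers = {w for item in responses for w in item}
    skill = {w: 1.0 for w in workers}
    estimates = [0.0] * len(responses)
    for _ in range(iterations):
        for i, item in enumerate(responses):  # (a) weighted aggregation
            total = sum(skill[w] for w in item)
            estimates[i] = sum(skill[w] * v for w, v in item.items()) / total
        for w in workers:  # (b) skill from agreement with estimates
            errs = [(item[w] - estimates[i]) ** 2
                    for i, item in enumerate(responses) if w in item]
            skill[w] = 1.0 / (statistics.mean(errs) + 1e-6)
    return estimates, skill
```

With two reliable workers and one outlier, the weighted estimates converge toward the reliable workers' answers, whereas a plain mean would be dragged toward the outlier.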
15:45-16:15Coffee Break
15:45-16:45 Session 17: LBP/Poster sessions
16:45-17:45 Session 19A: Matrices
 16:45 Salient Object Detection via Low-rank and Structured Sparse Matrix DecompositionSPEAKER: unknownABSTRACT. Salient object detection provides an alternative solution to various image semantic understanding tasks such as object recognition, adaptive compression and image retrieval. Recently, low-rank matrix recovery (LR) theory has been introduced into saliency detection and has achieved impressive results. However, the existing LR-based models neglect the underlying structure of images, which inevitably degrades the associated performance. In this paper, we propose a Low-rank and Structured sparse Matrix Decomposition (LSMD) model for salient object detection. In the model, a tree-structured sparsity-inducing norm regularization is first introduced to provide a hierarchical description of the image structure, ensuring the completeness of the extracted salient object. The similarity of saliency values within the salient object is then guaranteed by the $\ell_\infty$-norm. Finally, high-level priors are integrated to guide the matrix decomposition and enhance saliency detection. Experimental results on the largest public benchmark database show that our model outperforms existing LR-based approaches and other state-of-the-art methods, which verifies the effectiveness and robustness of the structure cues in our model. 17:00 Robust Discrete Matrix CompletionSPEAKER: unknownABSTRACT. Most existing matrix completion methods seek the matrix's global structure in the real number domain and produce predictions that are inappropriate for applications that retain discrete structure, where an additional step is required to post-process prediction results with either heuristic threshold parameters or complicated mappings. Such an ad hoc process is inefficient and impractical.
In this paper, we propose a novel robust discrete matrix completion algorithm that produces predictions from a collection of user-specified discrete values by introducing a new discrete constraint into the matrix completion model. Our method achieves high prediction accuracy, very close to the best values that competing methods reach with tuned thresholds. We solve the difficult integer programming problem by incorporating the augmented Lagrangian method, which greatly accelerates the convergence of our method and provides asymptotic convergence in theory. The proposed discrete matrix completion model is applied to three real-world applications, and all empirical results demonstrate the effectiveness of our method. 17:15 Rank Aggregation via Low-Rank and Structured-Sparse DecompositionSPEAKER: unknownABSTRACT. Rank aggregation, which combines multiple individual rank lists to obtain a better one, is a fundamental technique in various applications such as meta-search and recommendation systems. Most existing rank aggregation methods blindly combine multiple rank lists with possibly considerable noise, which often degrades their performance. In this paper, we propose a new model for \textit{robust rank aggregation (RRA)} via matrix learning, which recovers a latent rank list from possibly incomplete and noisy input rank lists. In our model, we construct a pairwise comparison matrix to encode the order information in each input rank list. Based on our observations, each comparison matrix can be naturally decomposed into a shared low-rank matrix plus a deviation error matrix, which is the sum of a column-sparse matrix and a row-sparse one. The latent rank list can be easily extracted from the learned low-rank matrix.
The optimization formulation of RRA has an element-wise multiplication operator to handle missing values, a symmetric constraint on the noise structure, and a factorization trick to restrict the maximum rank of the low-rank matrix. To solve this challenging optimization problem, we propose a novel procedure based on the Augmented Lagrangian Multiplier scheme. We conduct extensive experiments on meta-search and collaborative filtering benchmark datasets. The results show that the proposed RRA model outperforms several state-of-the-art rank aggregation algorithms.
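The pairwise comparison matrices described in the rank aggregation abstract can be illustrated with a simple encoding. The Borda-like row-sum extraction shown here is a deliberately simplified stand-in for the paper's low-rank recovery, and all function names are assumptions.

```python
import numpy as np

def comparison_matrix(rank_list, n_items):
    """Encode one (possibly partial) rank list as a pairwise
    comparison matrix C with C[i, j] = 1 if item i is ranked above
    item j, -1 if below, and 0 if either item is unranked."""
    C = np.zeros((n_items, n_items))
    pos = {item: r for r, item in enumerate(rank_list)}
    for i in range(n_items):
        for j in range(n_items):
            if i != j and i in pos and j in pos:
                C[i, j] = 1.0 if pos[i] < pos[j] else -1.0
    return C

def aggregate_ranks(rank_lists, n_items):
    """Average the comparison matrices and order items by row sum,
    a Borda-like simplification of extracting the latent rank list."""
    C = sum(comparison_matrix(r, n_items) for r in rank_lists) / len(rank_lists)
    return [int(i) for i in np.argsort(-C.sum(axis=1))]
```

Encoding each input list as such a matrix makes missing comparisons explicit (zeros), which is what lets a matrix-learning formulation handle incomplete rank lists uniformly.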
16:45-17:45 Session 19B: Options/Pricing