CSRS@UOM 18: COMPUTER SCIENCE RESEARCH SYMPOSIUM @ MANCHESTER 2018
PROGRAM FOR WEDNESDAY, APRIL 11TH


09:30-10:30 Session 4: Research Talks
Location: Kilburn 1.3
09:30
Reliable Slippage Detection on a Budget through Machine Learning

ABSTRACT. We address the robotics manipulation problem of detecting object slippage during task execution. We developed a wearable glove that integrates inexpensive tactile sensors in the fingers. Using this glove we undertook a series of empirical studies that measured sensor data both before and during slippage events, while manipulating a set of objects that required widely varying grasp types. We investigated the utility of several different representations of the data, and of common machine learning methods. For the subset of objects where the grip involved multiple fingers we achieved good accuracy in slippage detection. We conclude that judicious use of state-of-the-art machine learning methods may overcome the need for expensive sensors. The proposed approach will help to enhance the dexterity and adaptability of robotic hands.
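As an illustration of the kind of detector the abstract describes, the sketch below classifies a window of tactile-sensor readings as stable or slipping. The feature choice (signal variance) and the threshold value are assumptions for illustration only, not the authors' actual model.

```python
import statistics

def extract_features(window):
    """Mean and sample variance of one window of sensor readings."""
    return statistics.mean(window), statistics.variance(window)

def detect_slippage(window, var_threshold=0.5):
    """Slippage events tend to show high-frequency fluctuations, so a
    variance spike is used here as a stand-in feature (an assumption)."""
    _, var = extract_features(window)
    return var > var_threshold

stable = [1.0, 1.02, 0.98, 1.01, 0.99]   # steady grip force
slipping = [1.0, 0.2, 1.8, 0.1, 2.0]     # rapid fluctuation
print(detect_slippage(stable), detect_slippage(slipping))
```

A learned classifier would replace the fixed threshold, but the windowed feature-extraction step is the same shape.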

09:52
Learning evolving T–S fuzzy systems – A local online optimization approach
SPEAKER: Dongjiao Ge

ABSTRACT. Most real data streams are non-linear and non-stationary by nature, which makes developing effective learning techniques a challenging issue. With the advantage of updating the system structure and parameters on the fly, evolving fuzzy systems (EFSs) are effective paradigms for addressing this issue. However, existing identification methods and algorithms for EFSs are usually developed from a heuristic rather than an optimal approach, and mainly focus on tracking the most recent local model. As a result, these algorithms lose optimality of the consequent parameters whenever the structure of the fuzzy system is updated. To resolve this issue, this presentation discusses our proposed algorithm, the local error optimization approach (LEOA), for identifying evolving T–S fuzzy systems. LEOA derives its antecedent learning method from minimizing a set of local error functions, and guarantees the optimality of the consequent parameters through a new extended weighted recursive least squares (EWRLS) method. Furthermore, LEOA is mathematically proved to satisfy the optimality and ε-completeness properties. Numerical examples based on several benchmark data sets are presented, and the results demonstrate that LEOA achieves better prediction accuracy than many existing state-of-the-art methods.
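For context, the sketch below implements the classical weighted recursive least-squares (RLS) update, i.e. the textbook recursion that EWRLS extends; it is not the authors' exact algorithm. Here theta is the parameter vector, P the inverse-covariance matrix, and lam a forgetting factor.

```python
def rls_update(theta, P, x, y, lam=0.99):
    """One weighted RLS step for a linear model y ≈ theta·x."""
    n = len(x)
    Px = [sum(P[i][j] * x[j] for j in range(n)) for i in range(n)]
    denom = lam + sum(x[i] * Px[i] for i in range(n))
    gain = [v / denom for v in Px]                      # Kalman-style gain
    err = y - sum(theta[i] * x[i] for i in range(n))    # prediction error
    theta = [theta[i] + gain[i] * err for i in range(n)]
    P = [[(P[i][j] - gain[i] * Px[j]) / lam for j in range(n)]
         for i in range(n)]
    return theta, P

# Recover y = 2*x1 + 3*x2 from a short stream of samples.
theta = [0.0, 0.0]
P = [[1000.0, 0.0], [0.0, 1000.0]]   # large initial P: low prior confidence
stream = [([1.0, 0.0], 2.0), ([0.0, 1.0], 3.0),
          ([1.0, 1.0], 5.0), ([2.0, 1.0], 7.0)]
for x, y in stream:
    theta, P = rls_update(theta, P, x, y)
print([round(t, 2) for t in theta])   # close to the true [2, 3]
```

EWRLS additionally weights the recursion per local model so that consequent-parameter optimality survives structure updates.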

10:14
Learning to play video-games using objects
SPEAKER: William Woof

ABSTRACT. Video-games are a great way of testing new AI techniques, but many of these techniques also have applications within video-games. Deep Reinforcement Learning (DRL) has been shown to be an effective technique for enabling autonomous agents to learn to play a variety of games. These agents are able to learn directly from the video output of the game. However, for developers wanting to use DRL agents within their games, this ability is superfluous, and sometimes even providing this video output is difficult. In this work, we present a method that allows a DRL agent to learn directly from object information through the use of a novel 'Object Embedding Network'.
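One plausible shape for such a network, sketched below, is a shared embedding applied to each object's feature vector, pooled into a fixed-size state regardless of how many objects are on screen. The weights, sizes and max-pooling choice are illustrative assumptions, not the paper's design.

```python
import math

def embed(obj, W):
    """Shared linear embedding + tanh, applied to one object's features."""
    return [math.tanh(sum(w * f for w, f in zip(row, obj))) for row in W]

def object_state(objects, W):
    """Order-invariant state: element-wise max over per-object embeddings."""
    embeddings = [embed(o, W) for o in objects]
    return [max(col) for col in zip(*embeddings)]

# Each object: (x, y, type_flag); a toy 2-unit embedding matrix.
W = [[0.5, -0.2, 1.0],
     [0.1, 0.3, -1.0]]
objs = [(1.0, 2.0, 0.0), (0.5, 0.5, 1.0)]
state = object_state(objs, W)
print(len(state))   # fixed size, however many objects the game reports
```

The fixed-size, permutation-invariant state is what lets a standard DRL policy network consume raw object lists instead of video frames.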

10:30-11:00 Coffee Break
11:00-12:30 Session 5: Research Talks
Location: Kilburn 1.3
11:00
Retinal Image Segmentation Based On Texture
SPEAKER: Qinhao Wu

ABSTRACT. Retinal images are an important source of evidence in eye-disease diagnosis, as they capture the morphological changes that occur as a disease develops. From a computer-vision perspective, our contribution is to provide efficient feature extraction and semantic segmentation methods, inspired by the power of deep learning. We hope this will speed up the processing of retinal images and the extraction of target or symptom patterns, helping medical practitioners to give better diagnoses and treatment.

11:22
Classification of Tweets using Multiple Thresholds with Self-correction and Weighted Conditional Probabilities
SPEAKER: Tariq Ahmad

ABSTRACT. We present our classifier for multi-label emotion classification of Arabic and English tweets. Our method is based on preprocessing the tweets and creating word vectors, combined with a self-correction step to remove noise. We also make use of emotion-specific thresholds, selected from a range of candidate values according to the best performance achieved. Our system was evaluated on the Arabic and English datasets provided by the competition organisers, where it ranked 2nd for the Arabic dataset (out of 14 entries) and 12th for the English dataset (out of 35 entries).
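The emotion-specific thresholds can be pictured as follows: each emotion gets its own cut-off on the model's score, tuned separately for best performance. The scores and threshold values below are made up for illustration; the paper's scoring model is not reproduced here.

```python
def predict_labels(scores, thresholds):
    """Return the set of emotions whose score clears its own threshold."""
    return {e for e, s in scores.items() if s >= thresholds[e]}

# Hypothetical per-tweet scores and per-emotion cut-offs.
scores = {"joy": 0.72, "anger": 0.40, "fear": 0.55}
thresholds = {"joy": 0.60, "anger": 0.50, "fear": 0.50}
print(sorted(predict_labels(scores, thresholds)))   # ['fear', 'joy']
```

A single global threshold would treat rare and common emotions identically; per-label thresholds let each label trade precision against recall independently.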

11:44
Sensing arousal from pupillary response

ABSTRACT. Physiological arousal is a proxy for measuring stress, boredom, attention and cognitive load, which are of interest to eye-tracking and behavioural science researchers. Self-reported methods are often used to measure arousal even though they are susceptible to bias. We present an algorithm, built to sense arousal related to a participant's focal attention, through analysis of pupillary response from eye trackers. We also developed a tool to visualise the arousal level per area of interest as a heat map overlaid on the image stimulus. To evaluate the algorithm, we displayed twelve images of varying arousal levels, rated by the International Affective Picture System (IAPS), to 41 participants while they self-reported their arousal levels. There was a moderate correlation, r(47) = .46, p ≤ .01, between the self-reported arousal and the algorithm's arousal rating. This result shows that our algorithm has the potential to complement self-reported methods for arousal detection in usability, UX and visual behaviour research.
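The reported r statistic is a Pearson correlation between the two ratings. The sketch below shows the computation on fabricated toy numbers; it does not reproduce the study's data.

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy data: self-reported arousal vs. an algorithm's rating (invented).
self_report = [1, 2, 2, 3, 4, 4, 5, 5]
algorithm   = [2, 1, 3, 3, 3, 5, 4, 5]
r = pearson_r(self_report, algorithm)
print(round(r, 2))
```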

12:06
Amplifying Data Curation Efforts to Improve the Quality of Life Science Data

ABSTRACT. Quality is an important aspect that needs to be managed in databases, as the value of data is determined by its quality. This draws the attention of many database providers to curating their data in order to maintain data quality over time. It also leads database providers and researchers to investigate the area of data curation and propose ways to improve it, either by providing tools to automate the process or by supporting human curators in making changes to the data. However, among all available suggestions for improving data curation, to the best of our knowledge, no general description of the curation process has been given that also provides solutions to improve it and that can help database providers assess how mature their approach to data curation is. To fill this gap, this paper proposes a maturity model that describes the maturity levels of biomedical data curation. The proposed maturity model aims to help data providers identify limitations in their current curation methods and enhance their curation process.

14:00-15:30 Session 6: Research Talks
Location: Kilburn 1.3
14:00
Exploring Novel Memory Technologies to Overcome the Memory Wall

ABSTRACT. DRAMs have been the primary option for memory systems because they provide low cost per bit and high density. Technology scaling has been the main factor enabling memory systems to gain capacity, bandwidth and energy efficiency. Lately, however, the technology scaling of DRAMs has become difficult and begun to diminish due to physical and electrical limitations. Recent trends in computing systems, microprocessor designs and applications exacerbate the challenges current memory systems face, causing memory to become a bottleneck. In order to overcome this "memory wall", we aim to explore one of the novel and revolutionary 3D-stacked memory technologies, the Hybrid Memory Cube. We will exploit the features of this new memory architecture to decrease latency and analyse the trade-offs between cost and performance.

14:22
Solving Constraint Satisfaction Problems with Stochastic Spiking Neural Networks

ABSTRACT. Constraint satisfaction problems (CSPs) are at the core of numerous scientific and technological applications. However, CSPs belong to the NP-complete complexity class, for which the existence (or not) of efficient algorithms remains a major unsolved question in computational complexity theory. In the face of this fundamental difficulty, heuristics and approximation methods are used to approach instances of NP (e.g. decision and hard optimisation problems). The human brain efficiently handles CSPs both in perception and behaviour using spiking neural networks (SNNs), and recent studies have demonstrated that the noise embedded within an SNN can be used as a computational resource to solve CSPs. Here, we provide a software framework for the implementation of such noisy neural solvers on the SpiNNaker massively parallel neuromorphic hardware, further demonstrating their potential to implement a stochastic search that solves instances of P and NP problems expressed as CSPs. This facilitates the exploration of new optimization strategies and the understanding of the computational abilities of SNNs. We demonstrate the basic principles of the framework by solving difficult instances of the Sudoku puzzle and of the map colouring problem, and explore its application to spin glasses. The solver works as a stochastic dynamical system that is attracted to the configuration that solves the CSP. The noise allows an optimal exploration of the space of configurations, looking for the satisfiability of all the constraints; if applied discontinuously, it can also force the system to leap to a new random configuration, effectively causing a restart.
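The noise-driven search principle can be abstracted as below: a stochastic process over configurations that is attracted to satisfying assignments, with noise providing both exploration and restarts. This toy map-colouring solver illustrates the principle only; it is not the SpiNNaker SNN implementation.

```python
import random

def violations(colours, edges):
    """Number of edges whose endpoints share a colour."""
    return sum(1 for a, b in edges if colours[a] == colours[b])

def stochastic_solve(n, edges, k=3, noise=0.2, steps=10000, seed=1):
    """Noisy greedy search: mostly conflict-minimising flips, with
    occasional random leaps that play the role of the SNN's noise."""
    rng = random.Random(seed)
    colours = [rng.randrange(k) for _ in range(n)]
    for _ in range(steps):
        if violations(colours, edges) == 0:
            return colours                       # attractor reached
        v = rng.randrange(n)
        if rng.random() < noise:
            colours[v] = rng.randrange(k)        # random leap
        else:
            colours[v] = min(range(k), key=lambda c: violations(
                colours[:v] + [c] + colours[v + 1:], edges))
    return colours

# 3-colour a small wheel graph: hub 0 plus the cycle 1-2-3-4.
edges = [(0, 1), (0, 2), (0, 3), (0, 4), (1, 2), (2, 3), (3, 4), (4, 1)]
sol = stochastic_solve(5, edges)
print(violations(sol, edges))
```

In the SNN setting the "flips" emerge from spiking dynamics and the noise is injected by stochastic neurons, but the attractor picture is the same.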

14:44
Efficient scheduling techniques for low-power heterogeneous architectures

ABSTRACT. The explosive growth of computer systems has introduced a new era for computing. There is an increasing demand for greater computational performance and, at the same time, lower energy and power consumption. Since multi-core processors can no longer meet these requirements, there is a shift towards heterogeneous architectures. We focus on single-ISA heterogeneous architectures, and more specifically on ARM's big.LITTLE, where two or more core types are integrated into the same chip. All core types implement the same Instruction Set Architecture (ISA), but differ at the micro-architecture level and/or in operating frequency, thus delivering different performance and power/energy efficiency. The major challenge in single-ISA heterogeneous architectures is scheduling: how to optimally assign tasks to the appropriate core type. The purpose of this thesis is to propose a methodology on which schedulers for asymmetric multi-cores can be built. Currently, schedulers for these systems depend on the history of each workload in order to make decisions about the appropriate core type. However, this approach is not optimal for short-running or bursty workloads because of the time needed to build the history of each process. In our approach, we take advantage of architectural characteristics of ARM processors and propose a mechanism that lets the scheduler react in time. We modify the driver of the PMU (Performance Monitor Unit) as well as the Linux scheduler to implement our scheduling mechanism.

15:06
Efficient Scheduling of Data-Flow Task Parallel Programs on Heterogeneous Many-core Systems

ABSTRACT. Writing programs for heterogeneous platforms is challenging, as it requires programmers to deal with multiple programming models, to partition work for CPUs and accelerators with different compute capabilities, and to manage memory in multiple distinct address spaces. We show that, using a task-parallel data-flow programming model in which parallelism is specified in a platform-neutral description that abstracts in particular from the heterogeneity of the platform, efficient execution can be carried out by a run-time system using an appropriate task scheduling and memory allocation scheme. This is achieved by dynamically adapting work granularity, grouping tasks for offloading to accelerators, interleaving the execution of tasks with transfers between host and device memory, and load balancing across CPUs and GPUs.
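The granularity-adaptation idea can be sketched numerically: grouping fine-grained tasks into batches amortises the host-device transfer overhead of each offload. The batch size and cost figures below are illustrative assumptions, not measurements from the work.

```python
def group_tasks(tasks, batch_size):
    """Group tasks so each accelerator offload amortises its overhead."""
    return [tasks[i:i + batch_size] for i in range(0, len(tasks), batch_size)]

def offload_cost(batches, per_task=1.0, per_transfer=5.0):
    """Each batch pays one host-device transfer plus per-task compute."""
    return sum(per_transfer + per_task * len(b) for b in batches)

tasks = list(range(100))
fine = offload_cost(group_tasks(tasks, 1))     # one transfer per task
coarse = offload_cost(group_tasks(tasks, 25))  # transfers amortised
print(fine, coarse)
```

The run-time system's job is to pick the batch size dynamically, since the best trade-off depends on task sizes and transfer bandwidth observed at execution time.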

15:30-16:00 Coffee Break
16:00-17:00 Session 7: Research Talks
Location: Kilburn 1.3
16:00
Review the Near-Infrared (NIR) Spectral Data
SPEAKER: Shupeng Hu

ABSTRACT. Near-infrared spectroscopy (NIRS) is a rapid, chemical-free, environment-friendly and non-destructive analytical technology that has been widely applied to a diverse range of fields. NIR spectral data obtained from NIRS are complex and multivariate. This talk first introduces the key background of NIRS. It then demonstrates the contributions we have already made. These contributions are: 1) characterizing Near-Infrared (NIR) spectra as scientific big data; 2) presenting a software environment for real-time NIR spectra analysis and data management; 3) linking NIR spectral data to functional data; 4) describing our initial efforts to apply the FDA approach to NIR spectral data.

16:22
iSketch: Bridging the Gap between Model-Driven Requirements Elicitation (MDRE) and Model-Driven Development (MDD)

ABSTRACT. While Model-Driven Requirements Elicitation (MDRE) is clearly upstream of the Model-Driven Development (MDD) process, there is a lack of tool support for transforming requirements models into initial software models. Furthermore, while MDRE is regarded as a front-end activity in the software systems development process, there is a lack of tool support for end users to express their own requirements. These two interrelated problems create a gap between MDRE and MDD. For the MDD vision to become reality, the automation of model transformations from requirements to realization must be supported. Similarly, end users must be empowered with suitable tools that allow them to discover and express their own requirements. My PhD project is motivated by these two challenging problems and aims to bridge the gap between MDRE and MDD.

16:44
DX-MAN: An Algebraic Service Model for the Internet of Things

ABSTRACT. We are entering a new era in which not only people but also things interact through the Internet. The Internet of Things (IoT) promises the interconnection of billions of devices offering their functionality through services. Service-Oriented Architectures (SOA) will be a key enabler of the IoT goals. In particular, service composition will allow the combination of IoT services into more complex ones. Current service composition approaches, namely orchestration and choreography, only support partial compositionality. This research work proposes a novel model for total compositionality (so-called algebraic composition), in which services are composed hierarchically, as mathematical functions are, using exogenous connectors.
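The algebraic idea can be sketched as follows: services behave like functions, and composition is performed by connectors that live outside the services, so a composite is itself a service that can be composed again. The connector name and example services below are illustrative, not the DX-MAN operators.

```python
def sequencer(*services):
    """Exogenous connector: pipe each service's output into the next,
    like function composition f ∘ g, without the services knowing."""
    def composite(x):
        for s in services:
            x = s(x)
        return x
    return composite

# Two toy atomic services (hypothetical examples).
def celsius_to_kelvin(c):
    return c + 273.15

def kelvin_to_label(k):
    return "hot" if k > 300 else "cold"

# The composite is itself a service and can be composed further.
classify = sequencer(celsius_to_kelvin, kelvin_to_label)
print(classify(30))   # 30 °C → 303.15 K → "hot"
```

Because composition is total, nesting composites never leaves the service algebra, unlike orchestration scripts that cannot themselves be re-orchestrated as first-class services.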