
09:00-10:30 Session 11A: SS1 (3/3): Unmanned Vehicle-Aided Routing and Scheduling: Models, Algorithms, and Applications
Location: Nouméa
Energy-minimized Partial Computation Offloading in Cloud-assisted Vehicular Edge Computing Systems

ABSTRACT. The rapid advancement of Connected and Automated Vehicles (CAVs) has equipped them with capabilities spanning environmental sensing, decision-making, and multi-level assisted driving. However, computationally intensive applications such as navigation and autonomous driving challenge CAVs' limited computational resources while demanding timely completion. Vehicular Edge Computing (VEC) offers a solution by enabling CAVs to partially offload computation-intensive tasks to Roadside Units (RSUs) embedded with Roadside Edge Servers (RESs). Nonetheless, RSUs also have finite computational resources, so a Cloud-assisted Vehicular Edge Computing (CVEC) architecture is introduced to address this problem. In this paper, we first model a typical CVEC system and then formulate a constrained optimization problem over it that considers both communication latency and energy consumption. Finally, a novel optimization algorithm called Whale optimization embedded with Simulated-Annealing and Genetic-learning (WSAG) is proposed to solve this problem. WSAG simultaneously determines the resource allocation and optimizes the energy consumption of the system. Experimental results show that WSAG achieves significantly lower energy consumption and faster convergence than state-of-the-art peers.
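WSAG embeds a simulated-annealing acceptance test inside the whale-optimization search loop. As a minimal sketch of just that ingredient (the energy values and temperature are toy assumptions, not the paper's system model), the Metropolis rule that lets a minimization search escape local optima looks like:

```python
import math
import random

def accept(curr_energy, cand_energy, temperature, rng=random):
    """Metropolis acceptance rule for a minimization objective.

    Always accept improvements; accept a worse candidate with
    probability exp(-delta / T), which shrinks as T cools.
    """
    delta = cand_energy - curr_energy
    if delta <= 0:
        return True
    return rng.random() < math.exp(-delta / temperature)

if __name__ == "__main__":
    print(accept(10.0, 9.0, 1.0))    # improvement: always accepted
    print(accept(10.0, 11.0, 1e-12))  # worse move at ~zero temperature: rejected
```

At high temperature the search accepts many worse moves (exploration); as the temperature cools, it behaves like greedy descent (exploitation).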

Toyota Woven City: Hierarchical Integrated Traffic Management in a Fully Connected and Automated Environment

ABSTRACT. With the advancement of vehicular communication technologies and automated control systems, the number of connected and automated vehicles (CAVs) in urban networks is expected to grow rapidly in the near future. However, effectively managing a large number of vehicles simultaneously is a significant challenge to fully capitalize on the benefits of enhanced mobility and safety within a city. This paper showcases the implementation of a hierarchical integrated traffic management system in Toyota Woven City, a fully connected environment with 100% CAVs, to demonstrate its capacity for effectively managing a large number of vehicles. The system incorporates multi-layered vehicle control strategies, including dynamic routing, lane optimization, incident management, and proactive signal control, all aimed at enhancing mobility for all road users. Simulation results from Toyota Woven City reveal that by deploying the integrated system, vehicle delays can be reduced by up to 95% compared to the base case with no control, and by 90% compared to a general control strategy that only utilizes cooperative adaptive cruise control and dynamic routing strategies. Furthermore, the study highlights the system's positive impact on pedestrian mobility across varying congestion levels and its reliability in managing road incidents.

Integrating Object Detection and Advanced Analytics for Smart City Crowd Management
PRESENTER: Edoardo Prezioso

ABSTRACT. In the context of rapidly advancing smart cities, efficient crowd analysis plays a crucial role in ensuring public safety, urban planning, and resource management. This paper presents a novel framework that combines the popular You Only Look Once (YOLO) object detection algorithm with advanced crowd analysis techniques, aiming to improve the understanding and management of urban crowd dynamics. The proposed framework leverages YOLO's real-time object detection capabilities to detect various objects within video frames, with a particular focus on identifying individuals. To initiate the crowd analysis process, the detected persons are isolated and tracked over time, enabling the collection of valuable data for comprehensive crowd behavior analysis. By leveraging this rich dataset, the framework enables the extraction of key crowd characteristics, such as crowd density, crowd flow patterns, crowd distribution, and crowd congestion levels. Moreover, the framework incorporates techniques to analyze the extracted data, offering valuable insights into crowd dynamics.
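One step of the pipeline above, extracting crowd density from per-frame person detections, can be sketched as follows. The bounding-box format matches typical YOLO output, but the function names and the 2x2 grid are illustrative assumptions, not the authors' implementation:

```python
def crowd_density(boxes, frame_w, frame_h, grid=(2, 2)):
    """Count detected persons per grid cell using bounding-box centers.

    boxes: list of (x1, y1, x2, y2) person bounding boxes in pixels.
    Returns a rows x cols nested list of person counts.
    """
    rows, cols = grid
    counts = [[0] * cols for _ in range(rows)]
    for x1, y1, x2, y2 in boxes:
        cx = (x1 + x2) / 2.0  # box center
        cy = (y1 + y2) / 2.0
        col = min(int(cx / frame_w * cols), cols - 1)
        row = min(int(cy / frame_h * rows), rows - 1)
        counts[row][col] += 1
    return counts

if __name__ == "__main__":
    detections = [(10, 10, 30, 60), (15, 20, 35, 70), (400, 300, 420, 360)]
    print(crowd_density(detections, frame_w=640, frame_h=480))  # → [[2, 0], [0, 1]]
```

Tracking the same counts across frames would then yield the flow and congestion indicators the abstract mentions.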

09:00-10:30 Session 11B: RS4 (2/2): AI & Its Applications
Location: Cancun
A Review of Scalability Solutions in Blockchain-Based Electronic Health Record Systems
PRESENTER: Karim Sehimi

ABSTRACT. In the context of the rapid evolution of Electronic Health Record (EHR) systems, blockchain technology has emerged as a potential solution to critical challenges such as security and data integrity. However, one pressing concern remains: scalability. Blockchains are known to scale poorly, which is a critical limitation in a healthcare management system. This paper conducts a review of scalability solutions in blockchain-based EHR systems, shedding light on prevalent limitations and underexplored solutions. Our findings underscore the need for enhanced scalability solutions for more efficient and robust EHR systems built on blockchain technology.

Abnormal ECG Detection in Wearable Devices Using Compressed Learning

ABSTRACT. The electrocardiogram (ECG) is a broadly used diagnostic tool for assessing heart functionality. However, ECG data can be large and difficult to transmit or store. Compressed sensing refers to the reconstruction of sparse signals from a limited number of measurements, potentially reducing the size of ECG data while preserving diagnostic information. The purpose of this study is to investigate the use of Compressed Learning (CL) to detect abnormal ECGs. Data from three distinct groups of individuals, namely those with congestive heart failure (CHF), those with cardiac arrhythmia (ARR), and those with normal sinus rhythms (NSR), were obtained from a publicly available database. Three measurement matrices, namely Random Gaussian, Random Bernoulli, and Structured Fourier matrices, were used at different compression ratios. The compressed ECG data was then classified using Support Vector Machines (SVM), K-Nearest Neighbor (KNN), and Neural Networks. The results show that CL can significantly reduce the size of ECG data without sacrificing diagnostic accuracy; the SVM classifier achieved a training and testing accuracy of 82%. This study suggests that CL can be a useful tool for ECG data compression and abnormality detection, and could potentially be integrated into clinical practice for more efficient ECG analysis.
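The core of compressed learning is classifying directly in the compressed domain: project each signal through a random Gaussian measurement matrix (y = Φx) and never reconstruct. The sketch below illustrates the idea with a 1-nearest-neighbour classifier standing in for the SVM/KNN/NN models of the paper; the signal sizes, toy "beats", and labels are illustrative assumptions:

```python
import random

def gaussian_matrix(m, n, seed=0):
    """Random Gaussian measurement matrix Phi with m << n rows."""
    rng = random.Random(seed)
    return [[rng.gauss(0, 1) / m ** 0.5 for _ in range(n)] for _ in range(m)]

def compress(phi, x):
    """y = Phi @ x: keep only m linear measurements of the signal."""
    return [sum(p * xi for p, xi in zip(row, x)) for row in phi]

def nearest_label(train, query):
    """1-NN classification in the compressed domain."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(train, key=lambda t: dist(t[0], query))[1]

if __name__ == "__main__":
    n, m = 16, 4                                 # 4:1 compression ratio
    phi = gaussian_matrix(m, n)
    normal = [1.0] * n                           # toy "NSR" segment
    abnormal = [(-1.0) ** i for i in range(n)]   # toy "ARR" segment
    train = [(compress(phi, normal), "NSR"), (compress(phi, abnormal), "ARR")]
    query = [0.9] * n                            # noisy normal segment
    print(nearest_label(train, compress(phi, query)))
```

Because random projections approximately preserve distances between sparse signals, class structure survives compression, which is why classification accuracy can hold up at substantial compression ratios.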

Disease Forecasting and Patient Monitoring: The Great Role of Medical Data Analytics

ABSTRACT. The rapid advancement of new information technologies, such as mobile apps, cloud computing, and big data analytics, has significantly impacted various industries, with healthcare being a notable example. In healthcare, these technologies have become essential tools for delivering high-quality services. Specifically, the explosion of medical data sources, the integration of sensor technology, data analysis, and the application of data mining and machine learning techniques have the potential to bring about transformative changes in healthcare. In light of the exponential growth in medical data, it is evident that a transformative shift, often referred to as a revolution, is underway within the healthcare sector. This transformation underscores the imperative of harnessing emerging technologies to enhance the medical field. This paper aims to present a comprehensive architectural design that seamlessly amalgamates the power of big data analytics, machine learning, and mobile healthcare for self-monitoring. Through this exposition, we intend to underscore the significance of applying predictive analytics techniques within medical platforms. The envisioned system will harness healthcare data, employing intelligent process analysis and extensive big data processing to glean invaluable insights for informed decision-making, ultimately ensuring superior real-time medical monitoring.

Evaluate the Training Set: Is It Necessary? A Theoretical Presentation of Y_Measure: A New Metric for Evaluation of the Training Set for Supervised Classification

ABSTRACT. Previous work has aimed at evaluating classification algorithms or the classification process as a whole for the problem under study, but never the representativeness of the Training Set. Yet the Training Set strongly influences the accuracy of the classification process and the reliability of its results and evaluations. We do not question the reliability and utility of the various measures already proposed and applied; rather, this article reinforces them with a measure of the representativeness of the Training Set, allowing the expert to better interpret results, to improve the reliability and robustness of the evaluation, and to identify whether shortcomings originate from the algorithm or from the Training Set. In this article, we show the need for a new metric for evaluating Training Set quality, building on theoretical criteria for generating an optimal Training Set ("divide and conquer").

09:00-10:30 Session 11C: SS5 (3/3): Swarm and evolutionary algorithms for solving complex scheduling and optimization problems
Location: Delhi
Bi-objective Resolution of the Single-machine Scheduling Problem with Availability and Human Operators
PRESENTER: Meriem Touat

ABSTRACT. In this work, maintenance planning is tackled in a new way. Indeed, the human operators tasked with executing maintenance interventions are considered and expressed as temporal constraints that directly impact scheduling feasibility. Furthermore, each operator has a competence level that lets him execute a maintenance activity in a duration different from the nominal one. We aim to optimize two objective functions covering both production and maintenance criteria. First, we propose a Mixed-Integer Linear Programming (MILP) model of the problem; we then adapt the Multi-Objective Simulated Annealing algorithm (MOSA), adding a local search and a restart scheme, to obtain an approximate Pareto front. This front is compared to the exact Pareto front obtained on small-size instances by applying an enhanced Two-Phase Method (TPM) to the proposed MILP model. We evaluate the proposed method on a semi-randomly generated benchmark set and tune it with the Taguchi method.
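A building block of any MOSA-style method is the archive update that keeps only non-dominated solutions as the approximate Pareto front. The sketch below is an assumption about that step, not the authors' code; the two objectives (a production objective and a maintenance cost) are represented as plain number pairs to be minimized:

```python
def dominates(a, b):
    """a dominates b if a is no worse in both objectives and differs in one."""
    return a[0] <= b[0] and a[1] <= b[1] and a != b

def update_archive(archive, candidate):
    """Insert candidate into the non-dominated archive, pruning as needed."""
    if any(dominates(p, candidate) for p in archive):
        return archive                      # candidate is dominated: reject
    kept = [p for p in archive if not dominates(candidate, p)]
    return kept + [candidate]

if __name__ == "__main__":
    front = []
    for point in [(10, 4), (8, 6), (9, 5), (9, 3), (12, 2)]:
        front = update_archive(front, point)
    print(sorted(front))  # → [(8, 6), (9, 3), (12, 2)]
```

Each annealing iteration proposes a neighbour schedule, evaluates both objectives, and feeds the result through this update, so the archive converges toward the approximate Pareto front reported in the paper.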

Minimizing the total electricity cost and maximum lateness of the flow shop scheduling problem under time-of-use energy tariffs
PRESENTER: Xinyue Wang

ABSTRACT. Motivated by the scheduling challenges of high-performance computing in the information industry, this paper investigates a bi-objective energy-efficient flow-shop scheduling problem under time-of-use tariffs. To achieve a trade-off between energy consumption and customer satisfaction, the total electricity cost and the maximum lateness are simultaneously introduced as minimization objectives. To comprehensively account for practical factors, including release dates, due dates, task energy consumption, and processor maintenance, a mixed-integer programming model is established, and an ε-constraint approach is adopted to convert the bi-objective model into several single-objective ones and obtain Pareto-optimal fronts. The superiority of the developed approach is demonstrated through experiments.
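The ε-constraint conversion mentioned above keeps one objective (here, electricity cost) as the single objective and turns the other (maximum lateness) into a constraint f2 ≤ ε, sweeping ε to trace the Pareto front. A minimal sketch over enumerated candidate schedules, with toy (cost, lateness) pairs standing in for the paper's MIP model:

```python
def epsilon_constraint(points):
    """points: list of (cost, lateness) pairs, both to be minimized.

    For each epsilon, minimize cost subject to lateness <= epsilon,
    then discard any dominated results. Returns the sorted Pareto front.
    """
    bests = set()
    for eps in sorted({lat for _, lat in points}):
        feasible = [p for p in points if p[1] <= eps]
        bests.add(min(feasible))  # min cost; ties broken by lateness
    return sorted(p for p in bests
                  if not any(q != p and q[0] <= p[0] and q[1] <= p[1]
                             for q in bests))

if __name__ == "__main__":
    schedules = [(10, 5), (8, 7), (12, 3), (9, 6), (11, 4)]
    print(epsilon_constraint(schedules))  # → [(8, 7), (9, 6), (10, 5), (11, 4), (12, 3)]
```

In the paper's setting each `min(feasible)` call corresponds to solving one single-objective MIP with the lateness bound added as a constraint.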

A Relax-and-fix Heuristic for Multiple Team Formation Problem

ABSTRACT. The multi-functional team formation problem (TFP) aims to construct an appropriate team that balances the coverage gains of the skill set against costs such as communication and individual costs. Existing works optimize these objectives separately even though they jointly affect the team's gain. In this paper, we study a new problem that considers multiple projects and aims to minimize the total communication and individual member cost, which we name the Team Formation Problem for Multiple Projects (TFP-MP). The problem is first formulated as a constrained quadratic set-covering problem, which is then equivalently transformed into a mixed-integer programming (MIP) model. To solve it, we propose a relax-and-fix heuristic that exploits the model structure, and we evaluate its computational performance on problem instances built from randomly generated data. The experimental results show that the proposed relax-and-fix heuristic can solve large instances.

09:00-10:30 Session 11D: RS3: Multi-agent and Simulation-based approaches
Location: Shanghai
Encrypted Data-driven Control on Networked Multi-agent Systems

ABSTRACT. A data-driven design method is proposed for multi-agent systems that communicate with each other via a network. In such a networked control system, even if the controlled process itself is secure, information about the process can be leaked through eavesdropping because the control data is transferred over the network. In particular, in data-driven systems, process data transferred over a network drives the controller design as well as process control. Therefore, the security of data transferred over the network is critical to the safe operation of the system. To this end, this study examines a data-driven design using encrypted data. In the proposed method, both process control and controller design are performed with encrypted data instead of unencrypted data. Since the controller is designed based on encrypted data, both eavesdropping of control data and leakage of the designed controller are prevented.

Comparison of Classic and Recent Multi-Agent Path Finding Methods via MAPFame
PRESENTER: Jiaqi Huang

ABSTRACT. A Multi-Agent Path Finding (MAPF) problem aims to plan paths for multiple agents on a prescribed map so that they do not conflict with each other and travel the shortest distance or lowest cost in minimal time. MAPF is useful in many practical applications, e.g., automated warehouses and intelligent factories, and has been widely studied in the past decade. Existing MAPF algorithms have evolved from those solving single-agent path finding problems. When realized in different programming languages, they tend to deliver varying results in execution time and solution quality, and many of the simulated maps used are not directly related to actual application environments. In this paper, we experimentally compare existing MAPF algorithms on an open-source simulation platform called Multi-Agent Path Finding based on Advanced Methods and Evaluation (MAPFame). We analyze and test the effects of obstacle density, different maps, and agent counts on their performance indices. Our outcomes can thus help practitioners select the right method for their particular applications.
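As the abstract notes, MAPF solvers evolved from single-agent path finding; the low-level search inside most of the compared methods is single-agent A* on a grid. A self-contained sketch (the 4-connected map, start, and goal are toy assumptions):

```python
import heapq

def astar(grid, start, goal):
    """Single-agent A* on a 4-connected grid.

    grid: list of equal-length strings, '#' marks an obstacle.
    Returns the shortest path length in steps, or -1 if unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_heap = [(h(start), 0, start)]
    best_g = {start: 0}
    while open_heap:
        f, g, cur = heapq.heappop(open_heap)
        if cur == goal:
            return g
        if g > best_g.get(cur, float("inf")):
            continue  # stale heap entry
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != '#':
                if g + 1 < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = g + 1
                    heapq.heappush(open_heap,
                                   (g + 1 + h((nr, nc)), g + 1, (nr, nc)))
    return -1

if __name__ == "__main__":
    world = ["....",
             ".##.",
             "....",
             "...."]
    print(astar(world, (0, 0), (2, 3)))  # → 5
```

Multi-agent methods such as conflict-based search repeatedly run a search like this per agent while adding constraints that resolve vertex and edge conflicts between agents' paths.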

Is averaging always the best? Improving aggregation method for federated knowledge graph embedding

ABSTRACT. In recent years, along with the rapid development of big data and AI technologies, knowledge graphs have also experienced significant growth. Vectorized representations of the entities and relations in knowledge graphs have proven beneficial for various knowledge graph-related applications. However, traditional knowledge graph embedding methods are designed for centralized graphs and cannot effectively represent the distributed knowledge graphs found in the real world while ensuring data security. In this paper, we improve the aggregation method for federated knowledge graph embedding and propose a federated knowledge graph embedding model with CompGCN, called FedComp for short. FedComp is an innovative server-client framework that implements CompGCN for federated KGE, along with three novel aggregation methods. We conduct link prediction experiments on two datasets to demonstrate the performance of our model.

A Non-Isomorphic-Discriminated Inductive Graph-based Matrix Completion Model for High-Dimensional and Incomplete Data
PRESENTER: Renyu Zhang

ABSTRACT. High-dimensional and incomplete (HDI) data are widely used to analyze the potential links between real entities in the big data era. For this key task, the inductive graph-based matrix completion (IGMC) model predicts link ratings without using side information or pre-training. However, the isomorphic representation in the IGMC model only improves the representation accuracy of links with adjacent ratings and does not consider the representation accuracy of links with non-adjacent ratings. To address this issue, this paper proposes a local Non-Isomorphic-Discriminated IGMC (NID-IGMC) model to represent links with distinct rating values more exactly. The main idea of the NID-IGMC model is two-fold. First, it defines the non-isomorphic representation, constructs its representative functions, and divides it into three sub-categories. Second, it incorporates the non-isomorphic representation into the isomorphic representation and re-constructs the model. Two kinds of experiments on industrial HDI matrices indicate the high accuracy of the proposed NID-IGMC model.

Incorporating Event Information for High-Quality Video Interpolation
PRESENTER: Beibei Yang

ABSTRACT. Optical flow-based video interpolation is a commonly used method to enrich video details by generating new intermediate frames from existing ones. However, it cannot accurately reproduce the trajectories of irregular and fast-moving objects. To reconstruct high-speed scenes accurately, incorporating event information during the interpolation process is effective; however, existing optical flow-based methods are unsuitable for scenarios where events are sparse. To achieve high-quality video interpolation for these scenarios, this study incorporates event information into the interpolation process with three ideas: a) treating the moving target as the foreground and using events to delineate the target area, b) matching the foreground to the background based on event information to generate a temporal frame between keyframes, and c) generating intermediate frames between each keyframe and temporal frame with optical flow interpolation. Empirical studies indicate that, owing to its efficient incorporation of event information, the proposed framework outperforms state-of-the-art methods in generating high-quality frames for video interpolation.

10:30-11:00 Coffee Break
11:00-12:00 Session 12: Keynote 4 - Prof. Mengchu Zhou - Internet of Behaviors
Location: Nouméa
12:30-14:00 Lunch Break