CAINE 2019: Papers with Abstracts

Papers
Abstract. In recent years, graph data analysis has become very important for modeling data distribution or structure in many applications, such as social science, astronomy, computational biology, and social networks with massive numbers of nodes and edges. However, the high dimensionality of graph data remains a difficult challenge, mainly because analysis systems are not designed to deal with such large graphs. Therefore, graph-based dimensionality reduction approaches have been widely used in many machine learning and pattern recognition applications. This paper offers a novel graph-based dimensionality reduction approach. In particular, we focus on combining two linear methods: the Neighborhood Preserving Embedding (NPE) method, which aims to preserve the local neighborhood information of a given dataset, and the Principal Component Analysis (PCA) method, which aims to maximize the mutual information preserved from the original high-dimensional data sets. The combination of NPE and PCA yields a new hybrid dimensionality reduction technique (HDR). HDR creates a transformation matrix by formulating a generalized eigenvalue problem and solving it with the Rayleigh quotient. Consequently, a substantially greater reduction is achieved than with PCA or NPE used separately. We compared the results with conventional PCA, NPE, and other linear dimensionality reduction methods, and the proposed HDR method was found to perform better. Experimental results are based on two real datasets.
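As a rough sketch of the core computation (a generic generalized eigenproblem, not the paper's exact HDR formulation; the matrices below are placeholders), the projection can be obtained as follows:

```python
# A minimal sketch (not the paper's exact HDR formulation): solving a
# generalized eigenvalue problem A w = lambda * B w, the core step that
# hybrid methods combining PCA- and NPE-style objectives share.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))        # 100 samples, 10 features
X = X - X.mean(axis=0)                    # center the data

A = X.T @ X / len(X)                      # covariance-like matrix (PCA side)
B = A + 0.1 * np.eye(10)                  # placeholder constraint matrix (NPE side)

# eigh solves A w = lambda B w for symmetric A and positive-definite B;
# the eigenvectors with the largest eigenvalues form the projection matrix.
eigvals, eigvecs = eigh(A, B)
W = eigvecs[:, -3:]                       # keep 3 components
X_reduced = X @ W
print(X_reduced.shape)                    # (100, 3)
```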
Abstract. Most biomedical signals are considered non-stationary, since human physiology varies with time. The ECG signal is one of the most important signals in cardiogram analysis. Although it provides a valuable basis for the clinical diagnosis and treatment of several diseases, it can easily be affected by various interferences caused by power-line magnetic fields, patient respiratory motion, or muscle contraction. The overlapping interference degrades the quality of the ECG waveform, leading to false detection and recognition of wave groups. Therefore, eliminating interference from the ECG signal and subsequently identifying wave groups has been a hot research topic. Since the ECG signal is not a stationary signal, neither the regular power spectrum nor the bispectrum can handle this problem, because they do not reflect the time variation of the process characteristics. With the recent introduction of the evolutionary higher-order spectrum (EHOS) in digital signal processing, an approach for analyzing the ECG signal is proposed. The work in this paper focuses on reducing the noise interference of the ECG signal using the EHOS. This approach exploits the fact that the EHOS contains information regarding both the phase and the magnitude of the signal. We also show that if the ECG signal is corrupted by stationary or non-stationary noise with a symmetric distribution, the noise can be eliminated using the properties of the EHOS. Simulations are presented to show the effectiveness of the proposed method.
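The property the approach relies on is standard in higher-order statistics: third-order moments/cumulants of symmetrically distributed noise vanish, so such noise is suppressed in higher-order spectra. A minimal numerical check (our illustration, not the paper's EHOS estimator):

```python
# Minimal check of the property the abstract relies on: the third-order
# moment of zero-mean, symmetrically distributed noise is ~0, while an
# asymmetric process retains nonzero third-order statistics.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
gauss = rng.standard_normal(n)                 # symmetric noise
skewed = rng.exponential(1.0, n) - 1.0         # zero-mean but asymmetric

third_moment = lambda x: np.mean((x - x.mean()) ** 3)
print(f"symmetric noise : {third_moment(gauss):+.4f}")   # ~ 0
print(f"asymmetric sig. : {third_moment(skewed):+.4f}")  # ~ 2 (nonzero)
```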
Abstract. Solar energy, one of many types of renewable energy, is considered to be an excellent alternative to non-renewable energy sources. Its popularity is increasing rapidly, especially because fossil-fuel energy depletes finite natural resources and pollutes the environment, whereas solar energy is low-cost and clean. To provide a reliable supply of energy, however, solar energy must also be consistent. The energy we derive from a photovoltaic (PV) array depends on changeable factors such as sunlight, positioning of the array, covered area, and status of the solar cells. Each of these variables introduces potential for faults in the array. Therefore, thorough research and a protocol for fast, efficient location and correction of all kinds of faults must be an urgent priority for researchers.
For this project we used machine learning (ML) with voltage and current sensors to detect, localize, and classify common faults, including open-circuit, short-circuit, and hot-spot faults. Using the proposed algorithm, we improved the accuracy of fault detection, classification, and localization to 100%. Further, the proposed method can execute all three tasks (detection, classification, and localization) simultaneously.
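A generic sketch of this kind of sensor-based fault classification (not the authors' algorithm; the features, labels, and fault signatures below are synthetic placeholders):

```python
# A generic sketch (not the paper's algorithm): classifying PV faults
# from voltage/current sensor features with a standard ML classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
# Synthetic (voltage, current) readings per panel string: 4 classes.
X = rng.normal(loc=[[30.0, 8.0]], scale=2.0, size=(400, 2))
y = rng.integers(0, 4, size=400)  # 0=normal 1=open 2=short 3=hot-spot
X[y == 1, 1] *= 0.1               # open circuit: current collapses
X[y == 2, 0] *= 0.1               # short circuit: voltage collapses
X[y == 3] *= 0.8                  # hot-spot: both degrade mildly

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.2f}")
```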
Abstract. Blockchain technology is on the cusp of revolutionizing the way we handle healthcare data, in terms of both storage and utilization. The main goal is to empower patients to be the center of their own health record, so that a patient does not have to rely on the different institutions or hospitals they might visit. Blockchain technology and smart contracts provide an interesting and innovative way to keep track of Electronic Health Records (EHRs). This technology could help patients gain better control of their own data, while health professionals and institutions, such as hospitals, could gain access to patient data held by other institutions. In the present article, we discuss how blockchain technologies can be used to handle EHRs while improving the efficiency of operations through streamlined processes and transparency. We propose an architecture to manage and share healthcare data among different organizations. The proposed work could significantly reduce the time needed to share patient data among different health organizations and reduce the overall cost.
Abstract. In this paper, we focus on the online diagnosis of Automated Production Systems (APS) equipped with sensors and actuators emitting binary signals. These systems can be considered Discrete Event Systems (DES). The paper presents a Case-Based Reasoning approach for the Online Diagnosis of All types of Faults in APS (CBR-ODAF). It improves our previously presented approach in order to remedy its limitations. Firstly, it proposes a new case representation format that describes all the faults to diagnose, adapts to the dynamic aspect of APS, is quite expressive, and is easy for human operators to understand. Secondly, it classifies, in real time, each new observation as a 'normal case', 'faulty case', or 'unidentified case' based on a new dissimilarity index that is not restricted to numerical data. This index adapts to our proposed case representation format and describes the degree of difference between cases represented by data of different types (i.e., quantitative and qualitative).
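For illustration, a Gower-style mixed-type dissimilarity (our sketch, not the CBR-ODAF index itself) shows how quantitative and qualitative attributes can be combined in one measure:

```python
# An illustrative mixed-type dissimilarity (a Gower-style index, not the
# paper's CBR-ODAF index): quantitative fields use a normalized absolute
# difference, qualitative fields a simple 0/1 mismatch.
def dissimilarity(case_a, case_b, ranges):
    """case_a/case_b: dicts of attribute -> value.
    ranges: attribute -> (min, max) for quantitative attributes."""
    total = 0.0
    for attr in case_a:
        a, b = case_a[attr], case_b[attr]
        if attr in ranges:                     # quantitative attribute
            lo, hi = ranges[attr]
            total += abs(a - b) / (hi - lo)
        else:                                  # qualitative attribute
            total += 0.0 if a == b else 1.0
    return total / len(case_a)

new_obs = {"temperature": 81, "valve": "open", "motor": "off"}
normal = {"temperature": 75, "valve": "open", "motor": "on"}
print(dissimilarity(new_obs, normal, {"temperature": (0, 100)}))  # ~0.353
```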
Abstract. Enterprises today are technology driven and comprise a plethora of applications that may be categorized based on the technologies on which they are developed and deployed. For enterprises that have existed across years and multiple business cycles, these technologies may be classified as legacy, mature, or emerging. The challenge lies in interoperability within and outside the organization, especially with respect to the business objects that are required across business functions to realize the capabilities of the organization. This is also true in scenarios of M&As (Mergers & Acquisitions) and during the creation of JVs (Joint Ventures).
Enterprise Architecture (EA) defines the Business-Technology alignment in organizations, and is an established methodology for business transformation and establishing enterprise maturity in the keenly competitive business world. Business objects are defined as Data Architecture artifacts within the ambit of EA.
The challenges to business object interoperability arise due to the incompatibility of the technologies used by the applications. This leads to the well-explored n*(n-1) scenario, where n is the number of applications requiring interfaces. This has serious implications for the business health of the organization and poses a risk to its BAU (Business As Usual), because in a complex mesh like the n*(n-1) scenario, it becomes practically impossible to identify the impact on business capabilities of a change to an inconspicuous attribute of a business object in an application domain.
Thus, the impact analysis of business objects/data as traditionally defined is a challenge to the business sustainability of organizations. These challenges in data architecture impact analysis may be mitigated by the AI (Artificial Intelligence) paradigm, drawing on its very powerful features by defining predicate-calculus-based knowledge bases.
In our paper, we use the Banking domain to ground the discussion.
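As a toy illustration of how a predicate-calculus knowledge base can support such impact analysis (our construction with hypothetical banking facts, not the paper's knowledge base):

```python
# A toy illustration (our construction, not the paper's knowledge base):
# "uses" and "feeds" facts over business objects and applications, with
# impact derived by forward chaining over a transitive dependency rule.
facts = {
    ("uses", "PaymentsApp", "Customer.accountNumber"),
    ("uses", "LoanApp", "Customer.accountNumber"),
    ("feeds", "PaymentsApp", "ReportingApp"),
}

def impacted(attribute):
    """All applications transitively affected by a change to `attribute`."""
    hit = {app for rel, app, attr in facts
           if rel == "uses" and attr == attribute}
    changed = True
    while changed:          # rule: feeds(X, Y) & impacted(X) -> impacted(Y)
        changed = False
        for rel, src, dst in facts:
            if rel == "feeds" and src in hit and dst not in hit:
                hit.add(dst)
                changed = True
    return hit

print(impacted("Customer.accountNumber"))
# {'PaymentsApp', 'LoanApp', 'ReportingApp'}
```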
Abstract. Recent progress in animal biometrics has revolutionized wildlife research. Cutting-edge techniques allow researchers to track individuals through noninvasive methods of recognition that are not only more reliable, but also applicable to large, hard-to-find, and otherwise difficult-to-observe animals. In this research, we propose a metric for boundary descriptors, based on bipartite perfect matching, applied to shark dorsal fins. In order to identify a shark, we first take a fin contour and transform it to a normalized coordinate system, so that we can analyze images of sharks regardless of orientation and scale. We then propose a metric scheme that performs a minimum-weight perfect matching in a bipartite graph. The experimental results show that our metric is applicable to identifying and tracking individuals from visual data.
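A minimal sketch of the matching step (our illustration; the paper's metric and normalization details may differ), using the Hungarian solver for minimum-weight perfect matching:

```python
# A minimal sketch of the matching step (our illustration, not the paper's
# exact metric): compare two normalized fin contours by a minimum-weight
# perfect matching between their sampled boundary points.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(3)
fin_a = rng.random((50, 2))                   # 50 sampled contour points
fin_b = fin_a + rng.normal(0, 0.01, (50, 2))  # noisy view of the same fin

# Cost matrix: pairwise Euclidean distances between boundary points.
cost = np.linalg.norm(fin_a[:, None, :] - fin_b[None, :, :], axis=-1)
rows, cols = linear_sum_assignment(cost)      # minimum-weight perfect matching
distance = cost[rows, cols].sum()
print(f"matching distance: {distance:.3f}")   # small => likely same individual
```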
Abstract. Plant scientists around the world have expert-specified systems for dealing with sample breeding. The ability to track parents and their future progeny should be an intuitive and easy task for these systems. Surprisingly, breeders use a handful of disparate techniques with little to no centralized protocol. This paper focuses on a novel implementation by Kansas State University's Cyber Physical Systems lab: the Android application Intercross. Intercross is the newest addition to the PhenoApps organization's set of open-source applications. This new application is a generic cross-tracking system; however, because there are various methodologies for breeding, this task is not trivial. This paper will expand on the non-trivial nature of crossing samples and why such a system is needed. Previous systems used to track crosses are outdated, nonexistent, or inefficient.
Abstract. Nowadays, business companies, organizations, and enterprises are moving their processes to the cloud; at the same time, they do not want to depend on a single supplier and be tied to it. They want a variety of companies offering different technologies. But the fact that each supplier uses a different technology makes the selection a costly task that consumes a lot of time. This does not mean that the number of suppliers should be reduced, because this diversity is good for the market. For this reason, it is necessary to have an environment that allows the diversity and compatibility of technologies. The cloud should be conceived free of incompatibilities, focusing on interoperability among the different suppliers. In other words, each supplier should open the cloud to its competitors, giving them a set of standards and rules that allow interconnection among the products provided. In this way, customers would have the possibility of choosing the most adequate product for their needs without being limited to a particular technology. In this sense, we propose a strategy consisting of quality models, metrics, and indicators that complement the framework currently applied to cloud migration.
Abstract. The paradigm of purposeful systems is used to analyze and define cyberspace as a collection of functions that are to be provided and preserved if harm to various elements of the space is to be avoided. We consider harm to individuals, to groups of humans, and to humanity as a whole, and identify an overall systems approach to regulating cyberspace that can guide efforts currently undertaken by various governmental and industry organizations. The approach is based on clear universal ethical principles. The result will harness the full potential of cyberspace while eliminating opportunities for “bad actors” to reap undue benefits at the expense of individuals and the community.
Abstract. Predicting the stock market is one of the most difficult tasks in the field of computation. There are many factors involved in the prediction: physical vs. psychological factors, rational and irrational behavior, investor sentiment, market rumors, etc. All these aspects combine to make stock prices volatile and very difficult to predict with a high degree of accuracy. We investigate data analysis as a game changer in this domain. As per efficient market theory, when all information related to a company and stock market events is instantly available to all stakeholders and market investors, the effects of those events already embed themselves in the stock price. So, it is said that only the historical spot price carries the impact of all other market events and can be employed to predict its future movement. Hence, considering the past stock price as the final manifestation of all impacting factors, we employ Machine Learning (ML) techniques on historical stock price data to infer future trends. ML techniques have the potential to unearth patterns and insights we did not see before, and these can be used to make highly accurate predictions. We propose a framework using an LSTM (Long Short-Term Memory) model and a company net-growth calculation algorithm to analyze and predict the future growth of a company.
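A minimal sketch of the modeling step (our illustration on synthetic data, not the paper's exact framework or its growth algorithm):

```python
# A minimal sketch (our illustration, not the paper's exact framework):
# an LSTM that maps a window of past closing prices to the next price.
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(4)
prices = np.cumsum(rng.normal(0, 1, 500)) + 100   # synthetic price series
window = 30

# Build (samples, window, 1) input tensors and next-step targets.
X = np.stack([prices[i:i + window] for i in range(len(prices) - window)])
y = prices[window:]
X = X[..., None]

model = keras.Sequential([
    keras.layers.Input(shape=(window, 1)),
    keras.layers.LSTM(32),
    keras.layers.Dense(1),          # next-day price
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.predict(X[-1:], verbose=0))  # forecast for the next step
```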
Abstract. L(h,k) labeling in graphs came into existence as a solution to the frequency assignment problem. To reduce interference, a frequency in the form of a non-negative integer is assigned to each radio or TV transmitter located at various places. After L(h,k) labeling, L(h,k,j) labeling was introduced to further reduce noise in the communication network. We investigated the graphs obtained by the Cartesian product between a complete bipartite graph and a path or a cycle, i.e., Km,n × Pr and Km,n × Cr, by applying L(3,2,1) labeling. The span of an L(3,2,1) labeling of a graph G, denoted by λ3,2,1(G), is the difference between the highest and the lowest labels used. In this paper we have designed three suitable algorithms to label the graphs Km,n × Pr and Km,n × Cr. We have also analyzed the time complexity of each algorithm, with illustrations.
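In an L(3,2,1) labeling, labels of vertices at distance 1, 2, and 3 must differ by at least 3, 2, and 1, respectively. A small greedy illustration (not one of the paper's three algorithms, which target Km,n × Pr and Km,n × Cr specifically):

```python
# A small illustration (not one of the paper's algorithms): greedily
# assigning an L(3,2,1) labeling on an adjacency-dict graph.
from itertools import count

def bfs_distances(adj, src, limit=3):
    dist, frontier = {src: 0}, [src]
    for d in range(1, limit + 1):
        frontier = [w for v in frontier for w in adj[v] if w not in dist]
        for w in frontier:
            dist[w] = d
    return dist

def l321_label(adj):
    gap = {1: 3, 2: 2, 3: 1}        # required label separation by distance
    labels = {}
    for v in adj:                   # label vertices one by one
        near = bfs_distances(adj, v)
        for cand in count(0):       # smallest feasible label
            if all(abs(cand - labels[u]) >= gap[d]
                   for u, d in near.items() if u in labels and d > 0):
                labels[v] = cand
                break
    return labels

cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}   # the cycle C4
print(l321_label(cycle4))   # a valid (not necessarily optimal) labeling
```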
Abstract. Application-level multicast is independent of router infrastructure, unlike router-based IP multicast. The existing DHT-based application-level multicast protocols work efficiently as long as there is almost no churn; otherwise, their performance degrades drastically, because DHT-based architectures cannot handle churn effectively. Besides, most DHT-based multicast protocols consider a single data source and do not consider peer heterogeneity. In this work, we consider an existing non-DHT-based P2P architecture, viz., the Residue Class based (RC-based) architecture, which has already been shown to perform much better than some well-known DHT-based architectures from the viewpoints of speed of unicast communication and churn handling. We present a highly efficient, capacity-constrained, any-source multicast protocol suitable for this RC-based P2P architecture.
Abstract. The focus of this paper is to provide a prototype implementation of a temperature control system using the open-source real-time operating system FreeRTOS and communication via a Controller Area Network (CAN). By using low-cost components and open-source software on low-cost STM32F407 Discovery boards with ARM Cortex-M4 processors, this prototype is an ideal target for classroom use. The Discovery boards provide built-in support for two CAN buffers, but no transceivers. This paper presents the implementation of a Master-Slave architecture with CAN communication using LibOpenCM3 libraries with FreeRTOS, and a more general CAN implementation using Hardware Abstraction Libraries (HAL). The paper further discusses the concepts of CAN messaging and its parameters in enough detail to be applicable for anyone wanting to set up their own network. In addition to CAN communication between STM32F407 Discovery boards, this paper also presents a prototype implementation of a temperature control system on STM32F407 Discovery boards, showing the use of most standard interfaces on the boards, such as GPIO and I2C. It describes the hardware components of the system and the software implementation using a combination of HAL, LibOpenCM3, and FreeRTOS.
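The paper's implementation is in C with LibOpenCM3/HAL; purely to illustrate the CAN message parameters it discusses (arbitration ID, data bytes, standard vs. extended frames), here is a hedged python-can sketch, where the channel name and payload are placeholders:

```python
# Illustrating CAN message parameters with python-can (the paper's own
# code targets C with LibOpenCM3/HAL on STM32F407 boards).
import can  # pip install python-can

# 'socketcan'/'can0' assumes a Linux SocketCAN interface; adjust as needed.
bus = can.interface.Bus(channel="can0", bustype="socketcan")

msg = can.Message(
    arbitration_id=0x101,        # 11-bit ID: lower value = higher priority
    data=[0x17, 0x00],           # e.g., a temperature reading for the slave
    is_extended_id=False,        # standard frame, not 29-bit extended
)
bus.send(msg)

reply = bus.recv(timeout=1.0)    # blocking receive with a 1 s timeout
if reply is not None:
    print(reply.arbitration_id, list(reply.data))
```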
Abstract. The minimum spanning tree (MST) problem is a well-known optimization problem in graph theory that has been used to model many real-life problems, e.g., telecommunications, transportation networks, routing, and water supply networks. MST problems with deterministic edge costs have been studied intensively, and the MST of a connected weighted graph can be determined using many efficient algorithms. However, in real-life scenarios, several types of uncertainty are generally encountered, because of insufficient information, imperfect information, failure, or other reasons. In this paper, we concentrate on an MST problem on an undirected connected fuzzy graph in which an intuitionistic fuzzy number, instead of a crisp (real) number, is assigned to each edge as its weight. We define this problem as the intuitionistic minimum spanning tree (IMST) problem. We introduce an algorithmic approach for designing the IMST of a fuzzy graph. Borůvka's algorithm is a popular greedy algorithm for designing an MST of a graph. Here, we have modified the classical Borůvka's algorithm to generate the IMST of a fuzzy graph. The water distribution system is the lifeline of any city, and we also describe the utility of the IMST in a water distribution network. A numerical example is worked out to illustrate the proposed algorithm.
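A compact Borůvka sketch (our illustration, not the paper's modified algorithm), where intuitionistic fuzzy weights (μ, ν) are ranked by the common score function μ − ν; treating a lower score as a cheaper edge is an assumption for this demo:

```python
# A compact Boruvka sketch (our illustration, not the paper's algorithm):
# intuitionistic fuzzy edge weights (mu, nu) ranked by the score mu - nu.
def boruvka_mst(n, edges):
    """edges: list of (u, v, (mu, nu)). Returns the chosen edges."""
    score = lambda w: w[0] - w[1]               # assumed ranking of IFNs
    comp = list(range(n))                       # component id per vertex
    find = lambda x: x if comp[x] == x else find(comp[x])
    mst = []
    while len({find(v) for v in range(n)}) > 1:
        cheapest = {}
        for u, v, w in edges:                   # cheapest edge leaving each component
            cu, cv = find(u), find(v)
            if cu != cv:
                for c in (cu, cv):
                    if c not in cheapest or score(w) < score(cheapest[c][2]):
                        cheapest[c] = (u, v, w)
        for u, v, w in cheapest.values():       # merge components
            if find(u) != find(v):
                comp[find(u)] = find(v)
                mst.append((u, v, w))
    return mst

pipes = [(0, 1, (0.9, 0.05)), (1, 2, (0.4, 0.5)), (0, 2, (0.7, 0.2))]
print(boruvka_mst(3, pipes))   # a spanning tree of a tiny water network
```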
Abstract. Japanese flower markets employ a descending auction with a special rule called “mari”. In the descending auction, the seller offers an initial price that is sufficiently high. The price then decreases until a buyer stops it. That buyer is declared the winner, and the item is allocated to them. The mari is a transaction speed-up rule in which the buyers other than the winner may choose to join a coalition to buy items at the winner's price. Our study extends this model to broader markets, including consumer transactions. We evaluate our proposed model through simulations. The results show that our model reduces the number of auctions compared with the previous model.
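A toy simulation of the rule as described (our simplified reading; the valuations and parameters are hypothetical):

```python
# A toy descending auction with a mari-style coalition (our simplified
# reading of the rule): the first buyer whose valuation meets the falling
# price wins, and other buyers whose valuations also meet that price may
# each take a unit at the winner's price.
import random

def descending_auction_with_mari(valuations, start_price, step, units):
    price = start_price
    while price > 0:
        bidders = [b for b, v in valuations.items() if v >= price]
        if bidders:
            winner = random.choice(bidders)            # winner stops the clock
            coalition = [b for b in bidders if b != winner][:units - 1]
            return price, [winner] + coalition         # all buy at one price
        price -= step
    return None, []

random.seed(0)
vals = {"A": 125, "B": 122, "C": 95, "D": 60}
price, buyers = descending_auction_with_mari(vals, start_price=150,
                                             step=10, units=3)
print(price, buyers)   # one price clock clears several units
```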
Abstract. There are myriad ways in which people benefit from systems in cyberspace that support such things as positive social interactions, electronic commerce, and automated decision making. However, harm to people and organizations can also occur: through loss of privacy, fostering of crime and fraud, spreading of misinformation, and challenging or violating of many ethical standards. Broadly characterized, systems functioning in cyberspace involve people, data, devices, computational resources, controls, and communication infrastructure. As a concept, trust refers to the state of belief in the competence of an entity to act dependably, reliably, and securely within a specific situation or context. Trust is a social construct. An acceptable level of trust is essential to meaningful or satisfactory engagement and interaction among people and, by extension, among any and all cyberspace systems. Building on the ability of entities to monitor data and drive models within the contexts of how people engage when interacting with systems, we describe approaches to elevating beneficence and reducing harm in cyberspace. We include ways in which trust is characterized and measured, relate trust to predictive analytics, and describe the potential for recent technologies like blockchains and cloud systems to help develop a more beneficent cyberspace.
Abstract. The use of social media as an instrument for public institutions to provide information and digital services to their citizens and promote their participation has become a common strategy in the scope of e-government. In this study, an empirical investigation of the diffusion of Facebook, Twitter, and YouTube among the local governments of the 22 cantons of the province of Manabí (Ecuador) is presented. In addition to portraying the adoption of social media by the local governments in the province, the results show that the Diffusion of Innovation (DOI) Theory can be used to explain the adoption process among the studied governments, and that these governments can be classified according to the DOI Theory's five adoption categories: innovators; early adopters; early majority; late majority; and laggards.
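Rogers' classical DOI categorization uses the mean and standard deviation of adoption times; a sketch with hypothetical data (the study's own classification procedure may differ):

```python
# Rogers' classical DOI cutoffs (our illustration with hypothetical data;
# the study may classify differently): adopter categories derived from the
# mean and standard deviation of adoption times.
import statistics

def doi_category(t, mean, sd):
    if t < mean - 2 * sd: return "innovator"
    if t < mean - sd:     return "early adopter"
    if t < mean:          return "early majority"
    if t < mean + sd:     return "late majority"
    return "laggard"

adoption_year = {"Canton A": 2010, "Canton B": 2012, "Canton C": 2015,
                 "Canton D": 2016, "Canton E": 2018}   # hypothetical data
mean = statistics.mean(adoption_year.values())
sd = statistics.stdev(adoption_year.values())
for canton, year in adoption_year.items():
    print(canton, doi_category(year, mean, sd))
```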
Abstract. In this paper, we have applied modular arithmetic, specifically residue classes (RC), to design a non-DHT-based structured P2P network. It is an interest-based tree architecture known as a pyramid tree. A node i in this tree represents a group of peers interested in a particular resource of type i; it is not a conventional tree. This P2P architecture has been chosen because, in a complete pyramid tree, multiple paths exist between most of its nodes. Such a structural characteristic can be helpful for designing load-balanced as well as robust communication protocols. Besides, search latency for its inter-group data lookup algorithm is bounded by the tree diameter and is independent of the number of distinct resource types as well as the total number of peers present in the system. In addition, any intra-group data lookup communication needs only one overlay hop.
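A minimal demo of the residue-class idea (our illustration; the actual architecture's group assignment may differ):

```python
# A minimal demo of the residue-class idea (our illustration): peers are
# partitioned into interest groups by their id modulo the number of
# resource types, so group membership costs one arithmetic operation.
def rc_group(peer_id, num_resource_types):
    return peer_id % num_resource_types   # residue class = interest group

peers = [3, 7, 10, 14, 21, 25]
groups = {}
for p in peers:
    groups.setdefault(rc_group(p, 4), []).append(p)
print(groups)   # {3: [3, 7], 2: [10, 14], 1: [21, 25]}
```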
Abstract. With the recent rapid development of Augmented Reality (AR) headsets, new possibilities emerge for applications of AR technologies. Today, publicly available AR headsets provide novel storytelling platforms, expand the vision of doctors and engineers, remove boundaries of educational processes, and assist humanity in multiple endeavors.
Our research encompasses various levels of the learning process by examining design principles and developing reliable software for multiple educational applications. This paper focuses on the software development process for an AR program that teaches the medical procedure known as the lumbar puncture. Our team utilized the Meta 2 headset by Metavision to create a software application to enhance the student experience and extend training effectiveness. We discuss the requirements and specification for the future application, describe the development process and the issues encountered in the current version of the application, and present preliminary results of testing and evaluation.
Abstract. Nowadays, distributed systems, web-based applications, and Internet communications systems carry data between users, such as a terminal client and a computer/server, or between different devices on a computer network. Network security has therefore become a crucial requirement to ensure the authenticity of data received during transmission. Authentication and encryption are the basic procedures for ensuring secure communications over a public network, offering tamper resistance and convenience in dealing with a password file. Most commonly used Internet application protocols, including HTTP, FTP, and SMTP, use text streams that are increasingly vulnerable to attacks. Encryption provides the main security for most computer applications.
This work proposes enhanced security measures for transferring data over the FTP protocol by using a smart token. A smart token has the capabilities of a smart card, but is more secure and supports additional operations. A practical and secure user scheme, based on a smart token device, is proposed. A secure platform has been developed using implemented APIs and PKCS#11, the RSA standard interface. The proposed API is called SAFEST (Secure Actions for FTP Environment with Smart Token). The SAFEST API wraps a standard protocol for implementing the communication between a token and the application using it. This API is platform independent, scalable to support more functionality, optimizes token usage, and adds security for accessing token objects. The smart token can perform cryptographic key operations on its own, rather than on the host computer, which supports a high level of platform independence. In addition, through the proposed SAFEST API, standard interfacing to token devices from any vendor can be implemented using the PKCS#11 interfaces developed by RSA Labs.
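A hedged sketch of on-token signing through a PKCS#11 wrapper (here the PyKCS11 Python wrapper; SAFEST's own API differs, and the module path and PIN below are placeholders):

```python
# On-token signing via PKCS#11 (PyKCS11 wrapper; library path and PIN are
# placeholders). The private key never leaves the token, which is the
# platform-independence property the abstract highlights.
import PyKCS11

lib = PyKCS11.PyKCS11Lib()
lib.load("/usr/lib/your-token-pkcs11.so")        # vendor PKCS#11 module
slot = lib.getSlotList(tokenPresent=True)[0]
session = lib.openSession(slot,
                          PyKCS11.CKF_SERIAL_SESSION | PyKCS11.CKF_RW_SESSION)
session.login("1234")                            # user PIN (placeholder)

# Find a private key on the token and sign on-device.
priv = session.findObjects([(PyKCS11.CKA_CLASS, PyKCS11.CKO_PRIVATE_KEY)])[0]
mech = PyKCS11.Mechanism(PyKCS11.CKM_SHA256_RSA_PKCS, None)
signature = bytes(session.sign(priv, b"file-to-transfer contents", mech))
print(signature.hex()[:32], "...")

session.logout()
session.closeSession()
```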
Abstract. In the digital era, a large part of our daily activities revolves around social network software. Social network software, such as Facebook and Twitter, has brought new opportunities for users to create online communities and share content and opinions on online platforms. Today's colleges and universities have also employed the power of social media to increase students' interaction and engagement. This is particularly useful for online or distance learners, for whom face-to-face communication is limited. This paper aims to determine the requirements for an online community site that incorporates the use of social network software. The requirements were gathered from stakeholders at an open university in Thailand. The study proposes the desired characteristics and functional requirements of an online community site in a distance-learning context.
Abstract. The Internet's influence has been growing rapidly, and so has the demand for web-based applications. The migration from stand-alone to platform-independent applications offers benefits in terms of maintainability, scalability, and ease of deployment.
The Web Based Presentation System, also referred to as WBPS, is an attempt to minimize the dependence on stand-alone applications and, additionally, to provide a comprehensive browser-based solution that administers users, schedules presentations, and provides a platform-independent tool for presentations.
In the past, faculty have been required to manually schedule presentations for students using a calendar and to publish the schedule. Students gave their presentations using tools such as Microsoft PowerPoint, Dyknow, and brinkpad.com.
With WBPS, faculty can allow the system to auto-schedule presentations and publish the presentation schedule. Students can upload their presentations and use an online presentation tool to deliver them. Faculty can choose between manual and automatic scheduling. Besides being an online tool for giving presentations, WBPS has features like Member Management, a Past and Presentation Templates Repository, Automated Reminders, and taking Notes or making annotations while attending a presentation.
Abstract. The resurgence of interest in Artificial Intelligence and advances on several fronts of AI, machine learning with neural networks in particular, have made us think again about the nature of intelligence and about the existence of a generic model that may capture what human beings have in their minds about the world, empowering them to exhibit all kinds of intelligent behaviors. In this paper, we present Constrained Object Hierarchies (COHs) as such a generic model of the world and intelligence. COHs extend the well-known object-oriented paradigm by adding identity constraints, trigger constraints, goal constraints, and some primary methods that capable beings can use to accomplish various intelligent behaviors, such as deduction, induction, analogy, recognition, construction, learning, and many others.
In the paper we first argue the need for such a generic model of the world and intelligence, and then present the generic model in detail, including its important constructs, the primary methods capable beings can use, and how different intelligent behaviors can be implemented and achieved with this generic model.
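A speculative sketch of the COH idea in object-oriented terms (our reading, not the authors' formalism; the constraint kinds follow the abstract, the thermostat example is ours):

```python
# A speculative sketch of the COH idea (our reading, not the authors'
# formalism): a class extended with identity and trigger constraints
# attached as callables and checked on every state change.
class ConstrainedObject:
    def __init__(self):
        self.identity_constraints = []   # invariants that must always hold
        self.trigger_constraints = []    # (condition, action) pairs
        self.goal_constraints = []       # states the object should achieve

    def update(self, **changes):
        for k, v in changes.items():
            setattr(self, k, v)
        assert all(c(self) for c in self.identity_constraints), "identity violated"
        for condition, action in self.trigger_constraints:
            if condition(self):          # fire triggers whose condition holds
                action(self)

class Thermostat(ConstrainedObject):
    def __init__(self):
        super().__init__()
        self.temp, self.heater = 20, "off"
        self.identity_constraints.append(lambda s: -50 < s.temp < 100)
        self.trigger_constraints.append(
            (lambda s: s.temp < 18, lambda s: setattr(s, "heater", "on")))

t = Thermostat()
t.update(temp=15)          # trigger fires: heater turns on
print(t.heater)            # on
```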
Abstract. This paper elaborates on the advantages of migrating legacy IT systems for manufacturing operations to a microservice architecture, which is an important step towards a platform-based ecosystem. Architecture models for manufacturing operations from the literature are evaluated. The different models’ strengths are combined towards a common architecture for the factory of the future. Microservices are introduced as a new architectural style for manufacturing operations software.
Abstract. In a real-time embedded system that uses a primary and an alternate for each real-time task to achieve fault tolerance, both primaries and alternates may need critical sections/segments in which shared data structures are read and updated. It must be guaranteed that the execution of any part of one critical section will not be interleaved or overlap with the execution of any part of a critical section belonging to some other primary or alternate that reads and writes those shared data structures. In this paper, a software architecture is presented that effectively handles critical-section constraints where both primaries and alternates may have critical sections that can either overrun or underrun. The architecture still guarantees that all primaries or alternates that do not overrun will always meet their deadlines, while keeping the shared data in a consistent state on a multiprocessor in a fault-tolerant real-time embedded system.