IGSC18: INTERNATIONAL GREEN AND SUSTAINABLE COMPUTING CONFERENCE
PROGRAM FOR WEDNESDAY, OCTOBER 24TH

09:00-10:00 Session 11: Keynote: Technology trends, requirements and challenges for ubiquitous self-powered IoT systems deployment, By: Tanay Karnik

Always-on, always-sensing small-form-factor edge systems for the Internet of Things (IoT) are becoming ubiquitous. Many applications require these tiny devices to be self-powered and maintenance-free; hence they must be able to harvest energy from available ambient sources and have low manufacturing cost. Millimeter-scale form-factor systems have been developed in academia over the past few years, and small-form-factor edge systems are becoming commercially available. These systems are essential in today’s cyber-physical world. We will introduce the available market and the trends driving growth in IoT system deployments, followed by the system requirements for a typical self-powered IoT system and the challenges in realizing such a system. We will present two approaches to system design, namely bottom-up and top-down. An x86-based tiny microcontroller unit (MCU) was designed to enable multiple IoT usages. This MCU followed a bottom-up approach: an ultra-low-power, low-cost MCU was designed first and then applied to IoT systems such as a smart sensor tag for package tracking. The discussion will introduce another IoT system that followed a top-down, usage-driven approach. In this case, an agricultural usage was chosen that required energy harvesting, x86-class edge computing, visual recognition on the edge, secure storage, secure wireless communication, and ultra-low-power maintenance-free operation. An IoT system was architected for this usage and later demonstrated. We will conclude the presentation with a comparison of these two distinct approaches to IoT system design.

10:30-12:00 Session 12A: Machine Learning
10:30
Using Machine Learning to reduce the energy wasted in Volunteer Computing Environments

ABSTRACT. High Throughput Computing (HTC) provides a convenient mechanism for running thousands of tasks. Many HTC systems exploit computers which are provisioned for other purposes by utilising their idle time -- volunteer computing. This has great advantages as it gives access to vast quantities of computational power, often for little or no cost. The downside is that running tasks are sacrificed if the computer is needed for its primary use, normally by terminating the task, which must then be restarted on a different computer -- leading to wasted energy and an increase in the time to task completion. We demonstrate, through simulation, how we can reduce this wasted energy by targeting tasks at computers less likely to be needed for their primary use, predicting this idle time through machine learning. By combining two machine learning approaches, namely Random Forest and Multi-Layer Perceptron, we save 51.4% of the energy without significantly affecting the time to complete tasks.
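The scheduling idea above can be sketched in a few lines. This is an illustrative assumption, not the paper's implementation: the feature set, the synthetic data, and the simple averaging of the Random Forest and Multi-Layer Perceptron predictions are all hypothetical choices standing in for the authors' trained models.

```python
# Hypothetical sketch: combine a Random Forest and an MLP to predict how long
# a volunteer machine will stay idle, then target tasks at the machine
# predicted to stay idle longest. Features and data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Synthetic training data: [hour-of-day signal, day-of-week signal, recent load]
X = rng.random((200, 3))
y = 60.0 * X[:, 0] + 10.0 * X[:, 2] + rng.normal(0, 1, 200)  # idle minutes

rf = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                   random_state=0).fit(X, y)

def predict_idle_minutes(features):
    """Average the two models' predictions (one possible combination)."""
    f = np.asarray(features).reshape(1, -1)
    return float((rf.predict(f)[0] + mlp.predict(f)[0]) / 2.0)

def pick_host(hosts):
    """Send the task to the host expected to remain idle longest."""
    return max(hosts, key=lambda h: predict_idle_minutes(hosts[h]))

hosts = {"hostA": [0.9, 0.2, 0.1], "hostB": [0.1, 0.5, 0.9]}
print(pick_host(hosts))
```

Tasks placed this way are less likely to be evicted by the machine's primary user, which is where the reported energy saving comes from.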

11:00
Energy-aware Fault-tolerant Scheduling Scheme based on Intelligent Prediction Model for Cloud Data Center

ABSTRACT. As cloud computing becomes increasingly popular, more and more applications are migrated to clouds. Due to the multi-step computation of data streams and heterogeneous task dependencies, task failures occur frequently, resulting in poor user experience and additional energy consumption. To reduce task execution failures as well as energy consumption, we propose a novel energy-aware proactive fault-tolerant scheduling scheme for cloud data centers (CDCs) in this paper. First, a prediction model based on a machine learning approach is trained to classify arriving tasks into “failure-prone tasks” and “non-failure-prone tasks” according to the predicted failure rate. Then, two efficient scheduling mechanisms are proposed to allocate the two types of tasks to the most appropriate hosts in a CDC: a vector reconstruction method is developed to construct super tasks from failure-prone tasks, and these super tasks and the non-failure-prone tasks are scheduled to the most suitable physical hosts separately. All tasks are scheduled in an earliest-deadline-first manner. Our evaluation results show that the proposed scheme intelligently predicts task failures, achieves better fault tolerance, and consumes less total energy than existing schemes.
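A minimal sketch of the dispatch loop described above, under stated assumptions: the failure-rate threshold, the replication of failure-prone tasks onto two hosts as a stand-in for the paper's vector-reconstruction "super tasks", and the round-robin host choice are all illustrative, not the authors' algorithm.

```python
# Illustrative earliest-deadline-first dispatch with a failure-rate split.
# THRESHOLD and the duplicate-placement of "failure-prone" tasks are assumed
# simplifications of the paper's super-task construction.
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Task:
    deadline: float
    name: str = field(compare=False)
    failure_rate: float = field(compare=False)  # from a trained predictor

THRESHOLD = 0.5  # assumed cut-off between the two task classes

def schedule(tasks, hosts):
    """Dispatch earliest-deadline-first; failure-prone tasks get a replica."""
    plan = []
    heap = list(tasks)
    heapq.heapify(heap)                    # ordered by deadline
    i = 0
    while heap:
        t = heapq.heappop(heap)
        copies = 2 if t.failure_rate > THRESHOLD else 1  # "super task"
        for _ in range(copies):
            plan.append((t.name, hosts[i % len(hosts)]))
            i += 1
    return plan

tasks = [Task(5.0, "t1", 0.8), Task(2.0, "t2", 0.1)]
plan = schedule(tasks, ["h1", "h2"])
print(plan)
```

The point of the split is that only the tasks predicted to fail pay the extra placement cost, keeping energy overhead low for the common case.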

11:30
Generator Event Detection from Synchrophasor Data Using a Two-Step Time-Series Machine Learning Algorithm

ABSTRACT. We report our work on the development of an efficient algorithm to accurately identify the occurrence of generator events (GE) within an electrical grid. These events denote an electric fault at a specific generator node in the electrical network. Many causes can give rise to these faults, but at their most basic, they constitute an inability of a generator to match the grid usage requirements. These events can propagate to the rest of the grid in a cascading effect, leading to brownouts and prolonged outages. Beyond the disruption to the stability of the electrical supply caused by a GE, failure to identify a discrete event in a timely manner can give rise to costly damage to other generators. Conversely, setting the threshold for event detection too low also carries significant deleterious effects for the effective production and transmission of stable electrical power by the grid. In this paper, we seek to identify such events using only the monitoring data obtained from phasor measurement units, which collect data on grid state across a wide area. One of the main obstacles to correctly identifying these events is the volume of signals that must be monitored. The enormity of the dataset renders human-observer solutions inadequate for monitoring the electrical network and warning about the occurrence of GE. Machine learning methods are perfectly equipped to tackle this problem.

It is in this context that we endeavored to create a machine learning algorithm that would, automatically and without human operator input, review the data generated by phasor measurement units within the grid and flag instances where a GE had taken place. The algorithm should perform this task in near real-time with the help of a `standard' off-the-shelf processing unit. The classifiers we used are well described and easily understood. Furthermore, we set out to create electrical fault maps that demarcate the progression of the fault as it takes place. Our results show that our two-step classifier is able to accurately and efficiently identify the appearance of GE within an electrical grid. We are also able to produce a fault network map that should be a powerful tool for troubleshooting.
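One way to picture a two-step detector on synchrophasor-like data: a cheap screening pass flags candidate samples, and a second step confirms them. Both steps below are illustrative assumptions (a rolling z-score screen and a sustained-drop check), not the classifiers used in the paper.

```python
# Illustrative two-step detection on a synthetic frequency trace.
# Step 1 (cheap screen): flag samples deviating strongly from a rolling window.
# Step 2 (confirmation): require a sustained drop after the candidate sample.
import numpy as np

def step1_candidates(signal, window=20, z=4.0):
    """Screening pass: indices whose rolling z-score exceeds the threshold."""
    cands = []
    for i in range(window, signal.size):
        seg = signal[i - window:i]
        s = seg.std() or 1e-9
        if abs(signal[i] - seg.mean()) / s > z:
            cands.append(i)
    return cands

def step2_confirm(signal, i, post=5, drop=0.5):
    """Confirm a generator-event-like sustained drop after the candidate."""
    return signal[i:i + post].mean() < signal[:i].mean() - drop

rng = np.random.default_rng(4)
freq = 60.0 + rng.normal(0, 0.01, 200)   # nominal 60 Hz with sensor noise
freq[120:] -= 1.0                        # injected generator-trip-like drop
events = [i for i in step1_candidates(freq) if step2_confirm(freq, i)]
print(events[:3])
```

Splitting detection into a cheap screen plus a confirming classifier is what makes near-real-time processing of high-volume PMU streams feasible on a standard processing unit.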

10:30-12:00 Session 12B: Cyber-Physical Systems
10:30
Understanding the sources of power consumption in Mobile SoCs

ABSTRACT. Very deep scaling and the poor cooling methods in mobile devices are making leakage power a great concern. In this paper, unlike previous works in which both dynamic and leakage power analysis was carried out only at the cluster level (coarse grain), we analyze the leakage power problem of mobile SoCs at the core level, proposing for the first time a fine-grain leakage and dynamic power identification for each SoC unit. We take advantage of the Blind Power Identification (BPI) technique after introducing new improvements that increase its accuracy. We also introduce a new experimental methodology to apply the BPI technique to heterogeneous systems, including a novel initialization for the algorithm that enhances the output accuracy. Using several benchmarks on an octa-core Snapdragon 835 processor, we show that the total idle power is between 28 mW and 953 mW for the big cluster and between 42 mW and 865 mW for the LITTLE cores. We also show the trade-offs between power consumption and performance while running a workload single-threaded compared to multi-threaded at different frequencies. Finally, we shed light on the power usage of the "Angry Birds" mobile game as a real-life example, showing how power is divided in that case between the big and LITTLE cores and the GPU. The numbers show that in this case the online cores consume 61.5% of the total power, while the idle power of the offline cores accounts for 38.5%, showing the significance of idle power in mobile platforms. Besides the proposed technique, this work provides useful numbers that give insight into power consumption at a fine-grain level, which can be of great help in determining the available thermal and power headroom to further improve the overall performance of mobile devices.
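The general idea behind separating per-unit dynamic and leakage power can be illustrated with a simple regression: model total power as activity-weighted dynamic power plus a leakage constant and solve for the coefficients. This is a generic least-squares formulation for intuition only; it is not the BPI algorithm, and all coefficients below are made up.

```python
# Generic per-unit power decomposition sketch (NOT the BPI algorithm):
# total_power = sum_i(activity_i * dynamic_i) + leakage, solved by least squares.
import numpy as np

rng = np.random.default_rng(3)
units, samples = 3, 50
activity = rng.random((samples, units))        # measured per-unit utilization
true_dyn = np.array([0.8, 0.5, 0.3])           # assumed W per unit of activity
true_leak = 0.2                                # assumed W of total leakage
total = activity @ true_dyn + true_leak + rng.normal(0, 0.01, samples)

# Augment with a constant column so leakage is estimated jointly.
A = np.hstack([activity, np.ones((samples, 1))])
coef, *_ = np.linalg.lstsq(A, total, rcond=None)
dyn_est, leak_est = coef[:-1], coef[-1]
print(np.round(dyn_est, 2), round(float(leak_est), 2))
```

With only aggregate power measurements and per-unit activity counters, a fit of this shape recovers per-unit dynamic coefficients and the leakage term, which is the kind of fine-grain breakdown the abstract describes.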

11:00
Appliances identification for different electrical signatures using moving average as data preparation

ABSTRACT. Intelligent electronic equipment and automation networks are the brain of high-technology energy management systems, playing a critical role in the rise of smart homes. The smart home is a technology integration for greater comfort, autonomy, and reduced cost, as well as energy saving. In this paper, a system which can automatically recognize home appliances based on a dataset of electric consumption profiles is proposed. The dataset ACS-F1 (Appliance Consumption Signature Fribourg 1), available online and containing 100 appliance signatures in XML (Extensible Markup Language) format, is used for that purpose. A new format for this dataset is created, as it makes it easier to directly apply machine learning algorithms such as K-NN (K-Nearest Neighbors), Random Forest, and Multilayer Perceptron in the feature space between the test object and the training examples. In order to optimize classification accuracy, we propose to use a moving average function to reduce the random variations in the observations. Using this technique indeed allows the structure of the underlying causal processes to be better exposed. Moving averages are widely used in trading algorithms to predict future price movements based on identifying patterns in prices, volume, and other market statistics. Recognition results using K-NN-based machine learning are provided to show the impact of the number and the type of electrical signatures. In the best case, accuracy rates of 89.1% and 99.1% are obtained using K-NN without and with the moving average, respectively. Our approach is compared with another data preparation technique based on dynamical coefficients, also used to optimize the K-NN classifier. Finally, our approach based on the moving average is also evaluated with Random Forest (99%) and Multilayer Perceptron (98.8%) classification algorithms for the best electrical signature obtained with K-NN.
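The moving-average preparation step can be sketched as follows. The window size and the two toy "appliance" waveforms are illustrative assumptions, not values or signatures from the ACS-F1 dataset.

```python
# Sketch: smooth each consumption signature with a moving average before K-NN
# classification. Toy data only; window=5 is an assumed parameter.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def moving_average(signal, window=5):
    """Smooth random variations so the underlying pattern is better exposed."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="valid")

rng = np.random.default_rng(1)
t = np.linspace(0, 4 * np.pi, 104)
# Two toy appliance classes: noisy sine vs. noisy square-ish consumption wave.
X, y = [], []
for _ in range(20):
    X.append(moving_average(np.sin(t) + rng.normal(0, 0.3, t.size)))
    y.append("kettle")
    X.append(moving_average(np.sign(np.sin(t)) + rng.normal(0, 0.3, t.size)))
    y.append("fridge")

clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
probe = moving_average(np.sin(t) + rng.normal(0, 0.3, t.size))
pred = clf.predict([probe])[0]
print(pred)
```

Because K-NN distances are computed point-by-point in the feature space, suppressing high-frequency noise before the distance computation is exactly where the reported accuracy gain comes from.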

11:30
Secure Application Continuity in Intermittent Systems

ABSTRACT. Intermittent systems operate embedded devices without a source of constant, reliable power, relying instead on an unreliable source such as an energy harvester. They overcome the limitation of intermittent power by retaining and restoring system state as checkpoints across periods of power loss. Previous works have addressed a multitude of problems created by the intermittent paradigm, but do not consider securing intermittent systems. In this paper, we address the security concerns created by the introduction of checkpoints to an embedded device. When the non-volatile memory that holds checkpoints can be tampered with, the checkpoints can be replayed or duplicated. We propose secure application continuity as a defense against these attacks. Secure application continuity provides assurance that an application continues where it left off upon power loss. In our secure continuity solution, we define a protocol that adds integrity, authenticity, and freshness to create secure checkpoints. We develop two solutions for our secure checkpointing design. The first uses a hardware-accelerated implementation of AES, while the second is based on a software implementation of a lightweight cryptographic algorithm, Chaskey. We analyze the feasibility and overhead of these designs in terms of energy consumption, execution time, and code size across several application configurations. We then compare this overhead to a non-secure checkpointing system similar to QuickRecall. We conclude that securing application continuity does not come cheap: it increases the overhead of checkpoint restoration from 3.79 µJ to 42.96 µJ with the hardware-accelerated solution and 57.02 µJ with the software-based solution. To our knowledge, no one has yet considered the cost of providing security guarantees for intermittent operations. Our work provides future developers with an empirical evaluation of this cost, and with a problem statement for future research in this area.
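A conceptual sketch of "integrity, authenticity, and freshness" for a checkpoint: MAC the state together with a monotonic counter so a replayed (stale) checkpoint fails verification. HMAC-SHA256 stands in here for the paper's AES- and Chaskey-based MACs, and the key handling is deliberately simplified; this is an illustration of the protocol idea, not the authors' design.

```python
# Checkpoint sealing sketch: tag = MAC(counter || state); restore checks both
# the tag (integrity/authenticity) and the counter (freshness, anti-replay).
import hmac, hashlib, struct

KEY = b"device-secret-key"   # assumption: would live in protected storage

def seal(state: bytes, counter: int) -> bytes:
    blob = struct.pack(">Q", counter) + state
    tag = hmac.new(KEY, blob, hashlib.sha256).digest()
    return blob + tag

def restore(record: bytes, expected_counter: int) -> bytes:
    blob, tag = record[:-32], record[-32:]
    good = hmac.new(KEY, blob, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, good):
        raise ValueError("checkpoint tampered")
    (counter,) = struct.unpack(">Q", blob[:8])
    if counter != expected_counter:
        raise ValueError("stale checkpoint (replay)")
    return blob[8:]

cp1 = seal(b"state-v1", 1)
cp2 = seal(b"state-v2", 2)
print(restore(cp2, 2))       # the latest checkpoint verifies
try:
    restore(cp1, 2)          # replaying the old checkpoint is rejected
except ValueError as e:
    print(e)
```

Every MAC computation on the checkpoint path costs energy on each power cycle, which is why the restoration overheads quoted above grow severalfold once security is added.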

13:30-15:00 Session 13A: Sensing and Data Handling
13:30
Performance and Energy Evaluation of SAR Reconstruction on Intel Knights Landing

ABSTRACT. The reconstruction of n×n-pixel Synthetic Aperture Radar (SAR) imagery using a Back Projection algorithm incurs O(n^2 · m) cost, where n^2 is the number of pixels and m is the number of pulses. We have developed parallel algorithms and software for constructing multi-resolution SAR images on many-core architectures. We also develop load balancing algorithms for distributing the workload to the available cores, thereby optimizing performance and energy. We evaluate the performance of our algorithms and the resulting energy consumption on an Intel Knights Landing (KNL) processor. We also present a comparison of runtime and energy between KNL, Ivy Bridge, and Tesla K40m.
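A minimal back-projection sketch illustrating the O(n^2 · m) cost noted above: every one of the n×n pixels accumulates a contribution from each of the m pulses. The imaging geometry, sampling constants, and toy pulse data are illustrative assumptions, not the paper's parallel implementation.

```python
# Toy SAR back projection: for each pulse (m of them), compute the range from
# the antenna to every pixel (n*n of them) and accumulate the matching sample.
import numpy as np

def backproject(pulses, positions, grid, c=3e8, fs=1e9):
    """pulses: (m, k) range samples; positions: (m, 3) antenna positions."""
    n = grid.shape[0]
    image = np.zeros((n, n), dtype=complex)
    for p, pos in zip(pulses, positions):        # m pulses
        d = np.linalg.norm(grid - pos, axis=-1)  # O(n^2) work per pulse
        idx = np.clip((2 * d / c * fs).astype(int), 0, p.size - 1)
        image += p[idx]                          # accumulate contribution
    return np.abs(image)

m, k, n = 8, 64, 16
rng = np.random.default_rng(2)
pulses = rng.normal(size=(m, k)) + 1j * rng.normal(size=(m, k))
positions = np.stack([np.linspace(-10, 10, m),
                      np.full(m, -100.0), np.zeros(m)], axis=1)
xs = np.linspace(-5, 5, n)
grid = np.stack(np.meshgrid(xs, xs, indexing="ij") + [np.zeros((n, n))],
                axis=-1)
img = backproject(pulses, positions, grid)
print(img.shape)
```

Since each pulse's contribution to the image is independent, the outer loop parallelizes naturally across cores, which is what makes many-core platforms like KNL a good fit and makes load balancing the key tuning knob.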

14:00
Near Data Filtering for Distributed Database Systems

ABSTRACT. Over the past decade, data movement costs have come to dominate the execution time of data-intensive applications on distributed systems, and they are expected to become even more important in the future. Near data processing, which brings compute resources closer to the data source, is a straightforward way to reduce data movement. This paper explores near data processing in a generic distributed system to improve performance by reducing data movement. An efficient near data filtering solution is designed and implemented by introducing a filter layer which performs tuple-level near data filtering. In order to reduce the idle time of processing nodes and improve data transmission throughput, the proposed solution is extended to support block-level near data filtering by creating an index for each data block. Furthermore, to answer the question of when and how to perform near data filtering, this paper proposes an adaptive near data filtering solution that balances computation and data transmission throughput. Experimental results show that the proposed solutions are superior to the best existing method in most cases. The adaptive near data filtering solution achieves an average speedup factor of 4.59 for queries with low selectivity.
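The block-level idea can be pictured with a zone-map-style sketch: each data block keeps a small min/max index so whole blocks can be skipped before any tuple-level filtering runs. This is a generic illustration of per-block indexing, not the paper's system.

```python
# Zone-map-style near-data filtering sketch: a per-block (min, max) index
# lets the filter layer skip entire blocks before scanning tuples.
def build_index(blocks):
    """One small (min, max) entry per data block."""
    return [(min(b), max(b)) for b in blocks]

def filter_near_data(blocks, index, lo, hi):
    out = []
    for block, (bmin, bmax) in zip(blocks, index):
        if bmax < lo or bmin > hi:       # block-level skip: no tuples scanned
            continue
        out.extend(v for v in block if lo <= v <= hi)  # tuple-level filter
    return out

blocks = [[1, 3, 5], [10, 12, 14], [20, 22, 24]]
idx = build_index(blocks)
result = filter_near_data(blocks, idx, 11, 13)
print(result)                            # only the middle block is read
```

Only the surviving tuples cross the network, which is how near data filtering trades a little computation at the data source for a large reduction in data movement.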

14:30
A Self-Sustaining Micro-Watt Programmable Smart Audio Sensor for Always-On Sensing
SPEAKER: Michele Magno

ABSTRACT. Self-sustaining always-on sensors are crucial for the Internet of Things and its emerging applications. However, achieving perpetual operation with active sensors poses many challenges, especially in ultra-low-power design and in micro-power energy harvesting that can supply the sensors. This paper presents a smart sensor that combines energy harvesting and a micro-power event-driven sensor to achieve a self-sustaining programmable smart microphone for acoustic monitoring. The proposed solution achieves programmable pattern recognition with up to 128 simultaneous time-frequency features by exploiting mixed-signal low-power design. Experimental results show that the designed circuit consumes only 26.89 µW in always-on mode during time-frequency feature extraction, while the whole system consumes only 63 µW during pattern recognition, including the power for a commercial MEMS microphone and the energy harvesting subsystem. We demonstrate that the sensor can operate perpetually when powered by a small-form-factor flexible photovoltaic panel in indoor lighting conditions. Finally, in in-field experiments with two different audio streams, the smart sensor achieved a detection accuracy of 100%.

13:30-15:00 Session 13B: Servers
13:30
Practices of Energy Consumption for Sustainable Software Engineering

ABSTRACT. Sustainable Software Engineering, also known as “Green IN Software”, focuses on the production of sustainable software. The traditional software engineering process has negative influences on the environment, the economy, and society. For instance, the energy consumed during software execution is considered a first-order impact because it directly leads to high energy bills and consequently affects the environment. Moreover, the optimization of a process implementation and software development can lead to second-order impacts, also referred to as indirect impacts. Finally, the third-order impact considers users’ behavior and consciousness regarding the concept of sustainability. In order to mitigate these negative impacts, the purpose of this research is to identify, via a systematic literature review, the practices of sustainable software engineering reported by academia and applied in industry. Through the systematic literature review, it was possible to discover 170 practices, of which 70 were related to energy consumption practices that could be adopted during software development. Our results indicate that those practices emerged from grounded theory, are part of SWEBOK areas, and are applicable in industry.

14:00
GreenWeb: Hosting High-Load Websites Using Low-Power Servers

ABSTRACT. Today, there are millions of web servers hosting billions of websites. To ensure service quality and response time, it has become conventional wisdom that websites, especially high-load websites, must be deployed on high-end servers with powerful hardware. In fact, most of these costly servers are energy-hungry and vastly under-utilized, thereby wasting significant amounts of energy and money. This paper explores the viability of using low-power commodity servers to host high-load websites while still maintaining comparable Quality of Service (QoS). We demonstrate that, with certain software optimizations (e.g., caching and a content delivery network, CDN) enabled, low-power servers such as a Synology NAS or a Mac Mini can easily host high-load websites, and even a Raspberry Pi can host medium-load websites. Our work verifies a viable solution for hosting high-load websites on low-power servers, which can greatly reduce the energy usage and operational cost of web servers without degrading the quality of web services.

14:30
Data Center Cooling System Integrated with Low-Temperature Desalination and Intelligent Energy-Aware Control

ABSTRACT. Data centers consume enormous amounts of energy, presently reported at up to 2% of the world’s electricity, with most of this energy then rejected into the atmosphere as waste heat. Meanwhile, there is a global scarcity of safe drinking water: the UN states that 20% of the world’s population lives in regions affected by scarcity of drinking water. In this paper, we discuss an on-going research initiative that investigates the reuse of “free” waste heat energy from data centers in coastal cities and island countries, and in modular data centers deployed in coastal regions, through a low-pressure desalination process that converts sea water into safe drinking and irrigation water supplies, while significantly improving the Power Usage Effectiveness (PUE) of the data centers. We discuss a work-in-progress experimental setup intended to demonstrate that heat removed via common fluid-cooled rack heat exchangers in modern data centers can be re-used, via a controlled low-pressure desalination technique, to turn salt water into drinking water at zero added carbon cost for the desalination and significantly reduced carbon cost for the data center operations.

15:30-17:00 Session 14A: Panel 2: Computational Methods and Challenges for Enabling a Renewable Power Grid

Moderator: Adam Hahn; Panelists:

  • Shrirang Abhyankar (Argonne National Laboratory) - HELICS: An open-source transmission-distribution-communication co-simulation platform to assess impacts of large penetration of distributed energy resources
  • Martin Burns (National Institute of Standards and Technology) - Transactive energy challenges and simulation-based abstract component models
  • Srinivas Katipamula (Pacific Northwest National Laboratory) - Increasing building energy efficiency through market-based transactive control
  • Katrina Kelly (University of Pittsburgh) - IoT and Sustainability: Using data to quantify and define community resilience
15:30-17:00 Session 14B: Special Session: Power-efficient, Optimal and Robust Multi-core Compute Platforms, Moderated by: Prabhat Mishra & Sujay Deb

Many-core processing platforms are gaining significant interest for a wide range of applications, viz., the Internet of Things (IoT), consumer electronics, single-chip cloud computers, supercomputers, defense applications, etc. With billions of physical devices interconnected and communicating continuously, huge amounts of data are expected to be transferred, stored, analyzed, and computed. The data centers and servers involved are equipped with many-core processing units which analyze the data, perform arithmetic and logical operations on it, and make decisions based on the results for multiple applications. Since the applications are quite diverse, the demands on the compute platform vary significantly. As it is not feasible to have customized solutions for all the different applications, a platform that can be easily modified to suit particular application demands, with optimal power efficiency and robust hardware, is highly desirable. This special session presents a platform-based solution that is optimal, reliable, and power-aware, to sustain this trend and provide scalability.