PDP 2023: 31st Euromicro International Conference on Parallel, Distributed, and Network-Based Processing
Villa Doria d'Angri, Naples, Italy, March 1-3, 2023
Conference website | https://www.pdp2023.org |
Submission link | https://easychair.org/conferences/?conf=pdp2023 |
Submission deadline | December 24, 2022 |
Authors' registration | February 15, 2023 |
Early registration | February 17, 2023 |
Parallel, Distributed, and Network-Based Processing has undergone impressive change over recent years. New architectures and applications have rapidly become the central focus of the discipline. These changes are often a result of the cross-fertilization of parallel and distributed technologies with other rapidly evolving technologies. It is paramount to review and assess these new developments compared with recent research achievements in the well-established areas of parallel and distributed computing from industry and the scientific community. PDP 2023 will provide a forum for presenting these and other issues through original research presentations and will facilitate the exchange of knowledge and new ideas at the highest technical level.
Submission Guidelines
Prospective authors should submit a full paper not exceeding 8 pages in the IEEE Conference proceedings format (IEEEtran, double-column, 10pt) to the conference main track or to one of the Special Sessions through the EasyChair conference submission system (link below), indicating the Main Track or the chosen Special Session. The submission period will open on July 31st. https://easychair.org/conferences/?conf=pdp2023
- Double-blind review: the paper should not contain the authors' names and affiliations; in the reference list, references to the authors' own work should be replaced with the string "omitted for blind review".
- Publication: All accepted papers will be included in the same proceedings volume, which will be published by Conference Publishing Services (CPS). The final paper preparation and submission instructions will be announced after the notification of acceptance. Authors of accepted papers are expected to register and present their papers at the conference. The conference proceedings will be submitted for indexing to, among others, DBLP, Scopus, ScienceDirect, and ISI Web of Knowledge.
Important dates
- Deadline for paper submission: December 11, 2022 (extended from November 30)
- Acceptance notification: December 22, 2022
- Camera ready paper due: January 16, 2023
- Registration opens: January 31, 2023
- Early registration until: February 18, 2023
- Conference: March 1 – 3, 2023
List of Topics
Topics of interest include, but are not restricted to:
- Parallel Computing: massively parallel machines; embedded parallel and distributed systems; multi- and many-core systems; GPU and FPGA-based parallel systems; parallel I/O; memory organization.
- Distributed and Network-based Computing: Cluster, Grid, Web and Cloud computing; mobile computing; interconnection networks.
- Big Data: large-scale data processing; distributed databases and archives; large scale data management; metadata; data intensive applications.
- Programming models and Tools: programming languages and environments; runtime support systems; performance prediction and analysis; simulation of parallel and distributed systems.
- Systems and Architectures: novel system architectures; high data throughput architectures; service-oriented architectures; heterogeneous systems; shared-memory and message-passing systems; middleware and distributed operating systems; dependability and survivability; resource management.
- Advanced Algorithms and Applications: distributed algorithms; multi-disciplinary applications; computations over irregular domains; numerical applications with multi-level parallelism; real-time distributed applications.
Special sessions
HIGH PERFORMANCE COMPUTING IN MODELLING AND SIMULATION
The intent of HPCMS is to offer an opportunity to express and confront views on trends, challenges, and the state of the art in diverse application fields, such as engineering, physics, chemistry, biology, geology, medicine, ecology, sociology, traffic control, and economics.
As in previous editions, the organizers of the HPCMS session are planning a Special Issue of an important international ISI journal based on distinguished papers accepted for the session. For instance, selected papers from past workshop editions have been published in the ISI journals "Journal of Parallel and Distributed Computing", "International Journal of High Performance Computing Applications", and "Concurrency and Computation: Practice and Experience".
Chairs:
- William Spataro (University of Calabria), spataro@unical.it
- Giuseppe A. Trunfio (University of Sassari), trunfio@uniss.it
- Rocco Rongo (University of Calabria), rongo@unical.it
GPU COMPUTING AND MANY INTEGRATED CORE COMPUTING
For the next decade, Moore's Law is still going to bring higher transistor densities, allowing billions of transistors to be integrated on a single chip. However, it has become obvious that exploiting significant amounts of instruction-level parallelism with deeper pipelines and more aggressive wide-issue superscalar techniques, and spending most of the transistor budget on large on-chip caches, has come to a dead end. In particular, scaling performance through higher clock frequencies is getting more and more difficult because of heat dissipation problems and excessive energy consumption. The latter is not only a technical problem for mobile systems, but is also becoming a severe problem for computing centers, where high energy consumption translates into significant costs. For the moment, improving performance can only be achieved by exploiting parallelism at all system levels. Multicore architectures such as Graphics Processing Units (GPUs) offer a better performance-per-watt ratio than single-core architectures with similar performance. Combining multicore and coprocessor technology promises extreme computing power for highly compute-intensive applications such as image processing. The Special Session on GPU Computing and Hybrid Computing aims at providing a forum for scientific researchers and engineers on hot topics related to GPU computing and hybrid computing, with special emphasis on applications, performance analysis, programming models, and mechanisms for mapping codes.
Chair:
- Didier El Baz (LAAS-CNRS), elbaz@laas.fr
SCALABLE ALGORITHMS, LIBRARIES AND TOOLS FOR COMPUTATIONAL SCIENCE AND MACHINE LEARNING ON NEW HETEROGENEOUS HPC SYSTEMS
Heterogeneity is emerging as one of the main characteristics of today's and future HPC environments, where different node organizations, memory hierarchies, and kinds of exotic accelerators are increasingly present. It pervades the entire spectrum of the Computing Continuum, ranging from large Cloud infrastructures and data centers to Internet of Things and Edge Computing environments, which aim to make the multitude of low-power, heterogeneous HPC resources around us available in a transparent and friendly way. In this context, for Computational Science and Machine Learning, it is essential to leverage efficient and highly scalable libraries and tools capable of exploiting such modern heterogeneous computers. These systems are typically characterized by very different software environments, which require a new level of flexibility in the algorithms and methods used to achieve an adequate level of performance, with growing attention to energy consumption. This Special Session aims to provide a forum for researchers and practitioners to discuss recent advances in parallel methods and algorithms and their implementations on current and future heterogeneous HPC architectures. We solicit research works that address algorithmic design, implementation techniques, performance analysis, integration of parallel numerical methods in science and engineering applications, energy-aware techniques, and theoretical models that efficiently solve problems on heterogeneous platforms.
Chairs:
- Marco Lapegna (University of Naples “Federico II”), marco.lapegna@unina.it
- Salvatore Cuomo (University of Naples “Federico II”), salvatore.cuomo@unina.it
- Francesco Piccialli (University of Naples “Federico II”), francesco.piccialli@unina.it
CLOUD COMPUTING ON INFRASTRUCTURE AS A SERVICE AND ITS APPLICATIONS
Cloud Computing covers a broad range of distributed computing principles, from infrastructure (e.g., distributed storage, reconfigurable networks) to new programming platforms (e.g., MS Azure, Google App Engine) and internet-based applications. In particular, Infrastructure as a Service (IaaS) Cloud systems allow the dynamic creation, destruction, and management of virtual machines (VMs) as part of virtual computing infrastructures. IaaS Clouds provide a high level of abstraction to the end user, one that allows the creation of on-demand services through a pay-as-you-go infrastructure combined with elasticity. The increasingly large range of choices and the availability of IaaS toolkits have also allowed the creation of cloud solutions and frameworks suitable even for private deployment and practical IaaS use on smaller scales.
This special session on Cloud Computing is intended to be a forum for the exchange of ideas and experiences on the use of Cloud Computing technologies and applications with compute and data intensive workloads. The special session also aims at presenting the challenges and opportunities offered by the development of open-source Cloud Computing solutions, as well as case studies in applications of Cloud Computing.
Authors are invited to submit original and unpublished research in the areas of Cloud Computing, Fog/Edge, Serverless and Distributed Computing. With the rapid evolution of newly emerging technologies, this session also aims to provide a forum for novel methods and case studies on the integrated use of clouds, fogs, Internet of Things (IoT) and Blockchain systems. The general venue will be a good occasion to share, learn, and discuss the latest results in these research fields. The special session program will include presentations of peer-reviewed papers.
Chairs:
- Gabor Kecskemeti (Liverpool John Moores University), g.kecskemeti@ljmu.ac.uk
- Attila Kertesz (University of Szeged), keratt@inf.u-szeged.hu
COMPUTE CONTINUUM
Recently, the number of Internet-connected devices has grown at an incredible pace. These devices need to be "always on" to access data and services through the network. This massive set of devices puts a lot of pressure on the computing infrastructure that must serve their requests. This is particularly critical for the so-called next-generation (NextGen) applications, i.e., those applications characterized by stringent requirements in terms of latency, data, privacy, and network bandwidth. This pressure stimulates the evolution of classical Cloud computing platforms towards a large-scale distributed computing infrastructure of heterogeneous devices, forming a continuum from the Cloud to the Edge of the network.
This complex environment is driving a paradigm shift in the organization of computing infrastructures, moving from "mostly-centralized" to "mostly-decentralized" installations. Rather than relying on a traditional data-center compute model, the notion of a compute continuum is gaining momentum, exploiting the right computational resources at optimal processing points in the system.
In the traditional cloud model, enterprise data is directed straight to the cloud for processing, where most of the heavy compute intelligence is located. However, in the transformative data-driven era we live in, this is increasingly not a viable long-term economic model due to the volume of data and a new emphasis on security, safety, privacy, latency, and reliability.
Today, data insights drive near real-time decisions directly affecting the operation of factories, cities, transportation, buildings, and homes. To cope, computing must be fast, efficient, and secure, which generally means putting more compute power closer to the data source. This builds the case for more on-device endpoint computing, more localized computing with a new breed of network and private edge servers, and sensible choices over which workloads need to remain in cloud data centers.
This special session stems from the focus group on the compute continuum within the Italian National Laboratory on "High-Performance Computing: Key Technologies and Tools". Starting from there, the special session aims to bring together experts from academia and industry to identify new challenges in the management of resources in cloud-edge infrastructures and to promote this vision to academic and industry stakeholders.
Chairs:
- Raffaele Montella (University of Napoli “Parthenope”), raffaele.montella@uniparthenope.it
- Maria Fazio (University of Messina), maria.fazio@unime.it
- Patrizio Dazzi (University of Pisa), patrizio.dazzi@unipi.it
- Marco Danelutto (University of Pisa), marco.danelutto@unipi.it
WORKSHOP - BIG DATA CONVERGENCE: FROM SENSORS TO APPLICATIONS
The global information technology ecosystem is currently in transition to a new generation of applications, which require intensive systems for the acquisition, processing, and storage of data, both at the sensor and at the computer level. New, more complex scientific applications and the increasing availability of data generated by high-resolution scientific instruments in domains as diverse as climate, energy, and biomedicine require synergies between high-performance computing (HPC) and large-scale data analysis (Big Data). Today, the HPC world demands Big Data techniques, while intensive data analysis requires HPC solutions. However, the tools and cultures of HPC and Big Data have diverged, because HPC has traditionally focused on strongly coupled, compute-intensive problems, while Big Data has been geared towards data analysis in highly scalable applications. The overall goal of this workshop is to create a scientific discussion forum to exchange techniques and experiences that improve the integration of the HPC and Big Data paradigms, providing convenient ways to create software and to adapt existing compute- and data-intensive hardware and software to HPC platforms. Thus, this workshop aims at bringing together developers of IoT/edge/Fog/HPC applications with researchers in the field of distributed IT systems. It addresses researchers who are already employing distributed infrastructure techniques in IoT applications, as well as computer scientists working in the field of distributed systems who are interested in bringing new developments into the Big Data convergence area. The workshop will provide the opportunity to assess technology roadmaps to support IoT data collection, Data Analytics, and HPC at scale, and to share users' experiences. A sign of the interest in this topic is the existence in Europe of a working group for the convergence between HPC and Big Data, supported by ETP4HPC and BDVA, led by Prof. María S. Pérez and with the cooperation of several research groups involved in this proposal. In addition, Prof. Jesús Carretero collaborates in the preparation of the strategic research agenda of the European platform ETP4HPC in the line of data-intensive applications, and Dr. Rafael Mayo-García coordinates the European Energy Research Alliance (EERA) transversal Joint Programme "Digitalisation for Energy", where convergence research on HPC and Data Science is carried out.
The workshop addresses an audience with two profiles. On the one hand, it attracts researchers who are already employing distributed infrastructure techniques to implement IoT/edge/Fog/Cloud/HPC solutions, in particular scientists who are developing data- and compute-intensive Big Data applications that include IoT data, large-scale IoT networks, and deployments, or complex analysis and machine learning pipelines to exploit the data. On the other hand, it attracts computer scientists working in the field of distributed systems interested in bringing new developments into the convergence of Big Data and HPC solutions.
Chairs:
- Katzalin Olcoz (Universidad Complutense de Madrid), katzalin@ucm.es
- Jesus Carretero (University Carlos III of Madrid), jesus.carretero@uc3m.es
Committees
General co-chairs
- Raffaele Montella, University of Naples “Parthenope”, Italy, raffaele.montella@uniparthenope.it
- Angelo Ciaramella, University of Naples “Parthenope”, Italy, angelo.ciaramella@uniparthenope.it
- Marco Lapegna, University of Naples “Federico II”, Italy, marco.lapegna@unina.it
- Marco Danelutto, University of Pisa, Italy, marco.danelutto@unipi.it
- Dora Blanco Heras, Universidad de Santiago de Compostela, Spain, dora.blanco@usc.es
- Sokol Kosta, Aalborg University, Denmark, sok@es.aau.dk
- Jorge Ejarque, Barcelona Supercomputing Center, Spain, jorge.ejarque@bsc.es
- Alessandro Mei, Università degli Studi di Roma La Sapienza, Italy, alessandro.mei@uniroma1.it
Financial chair:
- Amund Skavhaug, MTP NTNU, Norway, amund.skavhaug@ntnu.no
Industrial chairs:
- Giuseppe Coviello, NEC Laboratories of America, USA, giuseppe.coviello@nec-labs.com
- Brendan Bouffler, AWS HPC Europe, UK, bouffler@amazon.com
Program co-chairs:
- Raffaele Montella, University of Naples “Parthenope”, Italy, raffaele.montella@uniparthenope.it
- Massimo Torquati, University of Pisa, Italy, torquati@di.unipi.it
- Diego Romano, ICAR-CNR, Italy, diego.romano@icar.cnr.it
Proceedings co-chairs:
- Raffaele Montella, University of Naples “Parthenope”, Italy, raffaele.montella@uniparthenope.it
- Javier Francisco Garcia Blas, Universidad Carlos III de Madrid, Spain, fjblas@inf.uc3m.es
- Daniele D’Agostino, University of Genova, Italy, daniele.dagostino@unige.it
Publicity chairs:
- Gloria Ortega Lopez, University of Almeria, Spain, gloriaortega@ual.es
- Mariacarla Staffa, University of Naples “Parthenope”, Italy, mariacarla.staffa@uniparthenope.it
- Federica Izzo, University of Naples “Suor Orsola Benincasa”, Italy, federica.izzo@studenti.unisob.na.it
Local arrangements chairs:
- Diana Di Luccio, University of Naples “Parthenope”, Italy, diana.diluccio@uniparthenope.it
- Giuseppe Salvi, University of Naples “Parthenope”, Italy, giuseppe.salvi@uniparthenope.it
- Ciro Giuseppe De Vita, University of Naples “Parthenope”, Italy, cirogiuseppe.devita001@studenti.uniparthenope.it
- Gennaro Mellone, University of Naples “Parthenope”, Italy, gennaro.mellone1@studenti.uniparthenope.it
- Alessio Ferone, University of Naples “Parthenope”, Italy, alessio.ferone@uniparthenope.it
Publication
Authors of selected papers will be invited to submit an extended version to a special issue of the JCR-indexed journal Microprocessors and Microsystems (Elsevier).
Venue
The conference is hosted at Villa Doria d'Angri (Via Francesco Petrarca 80, Naples, 80123, Italy), a monumental villa that is part of the University of Naples "Parthenope".
Contact
Local Chair: Raffaele Montella
Email: info@pdp2023.org