ICRC 2023: THE 8TH IEEE INTERNATIONAL CONFERENCE ON REBOOTING COMPUTING (ICRC)
PROGRAM FOR WEDNESDAY, DECEMBER 6TH

08:30-09:00 Session 7: Registration & Breakfast (Day 2)

Location: Amici (4th Floor) | Sponsored By: Vaire Computing (vaire.co)

09:00-10:00 Session 8: Keynote (Day 2)

Location: Amici (4th Floor)

09:00
Prospect for Practical Applications using Quantum Computers

ABSTRACT. Quantum computing provides an entirely new paradigm of computation compared to traditional digital computing, using fundamentally different laws of physics. In that sense, it is different from all other forms of alternative computing platforms that have been developed to date. Although still in its early days, quantum computing is expected to effectively tackle computational problems that are considered intractable using even the most powerful computers of our time. Trapped ions are the leading candidate for realizing practically useful quantum computers, as they feature the highest performance of quantum computational operations. The introduction of advanced integration technologies has converted complex atomic physics experiments into a stand-alone programmable quantum computer, which is commercially available today. In this presentation, I will discuss the operating principles and performance characterization of advanced trapped-ion quantum computers at the application level. I will also discuss several application areas and use cases where quantum computers can make a practical contribution to the computational frontier.

10:00-10:15 Coffee Break

Location: Amici (4th Floor)

10:15-11:55 Session 9: Quantum Computing

Location: Amici (4th Floor)

10:15
Quantum Array Multiplier

ABSTRACT. This paper proposes a new and improved implementation of a quantum integer multiplier. Performing arithmetic computations is sometimes a necessary step in the implementation of quantum algorithms. In this work, the Quantum Fourier Transform is used to perform scalable arithmetic in a generic bit-width quantum system. In the phase domain, addition can be implemented through accumulated controlled rotations on the qubits' state.

Leveraging this, and inspired by the classical implementation of an array multiplier, a new integer multiplier is fully designed and tested in a quantum environment.

The depth of a quantum circuit is the number of computational steps needed for it to complete, and it is a key parameter reflecting the performance of a design. The new design reduces the circuit depth from the exponential order of previously proposed designs to polynomial order.
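
As a rough illustration of the phase-domain addition the abstract describes (a NumPy statevector toy, not the paper's multiplier circuit), the sketch below maps |a> to the Fourier basis, accumulates phase rotations proportional to b, and applies the inverse QFT to obtain |(a + b) mod 2^n>.

    import numpy as np

    # Toy phase-domain (QFT-based) adder: illustrative only.
    def qft_add(a, b, n):
        N = 2 ** n
        state = np.zeros(N, dtype=complex)
        state[a] = 1.0                                                  # computational-basis state |a>
        F = np.exp(2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / np.sqrt(N)
        state = F @ state                                               # QFT: move to the phase domain
        state *= np.exp(2j * np.pi * b * np.arange(N) / N)              # phase rotations proportional to b
        state = F.conj().T @ state                                      # inverse QFT back to the computational basis
        return int(np.argmax(np.abs(state)))                            # measured result

    print(qft_add(3, 5, n=4))   # 8: the addition happens entirely through accumulated phases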

10:35
Using non-convex optimization in quantum process tomography: Factored gradient descent is tough to beat
PRESENTER: David A. Quiroga

ABSTRACT. We propose a non-convex optimization algorithm, based on the Burer-Monteiro (BM) factorization, for the quantum process tomography problem, in order to estimate a low-rank process matrix $\chi$ for near-unitary quantum gates. In this work, we compare our approach against state-of-the-art convex optimization approaches based on gradient descent. We use a reduced set of initial states and measurement operators that require $8^n$ circuit settings, as well as $\mathcal{O}(4^n)$ measurements, for an underdetermined setting. We find that our algorithm converges faster and achieves higher fidelities than state-of-the-art approaches, both in terms of measurement settings and in terms of noise tolerance, for depolarizing and Gaussian noise models.
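
The sketch below shows the Burer-Monteiro idea in its simplest form (an assumed toy problem, not the paper's tomography setting): the low-rank PSD unknown is parametrized as X = A A^T, so positivity and rank constraints hold by construction, and plain gradient descent on the factor A fits a target matrix in Frobenius norm.

    import numpy as np

    # Factored gradient descent on a toy low-rank recovery problem (illustrative only).
    rng = np.random.default_rng(0)
    d, r = 8, 2
    B = rng.normal(size=(d, r))
    X_target = B @ B.T                       # stand-in for a low-rank, PSD process matrix

    A = 0.1 * rng.normal(size=(d, r))        # factored variable: X = A A^T is PSD with rank <= r
    lr = 0.002
    for _ in range(20000):
        grad = 4 * (A @ A.T - X_target) @ A  # gradient of ||A A^T - X_target||_F^2
        A -= lr * grad

    print(np.linalg.norm(A @ A.T - X_target))   # residual approaches zero as the factorization is recovered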

10:55
Optimal Clifford Initial States for Ising Hamiltonians

ABSTRACT. Modern-day quantum devices are in the NISQ era, meaning that the effects of size restrictions and noise are essential considerations when developing practical quantum algorithms. Recent developments have demonstrated that Variational Quantum Algorithms (VQAs) are an appropriate choice for current-era quantum devices. VQAs make use of classical computation to iteratively optimize the parameters of a quantum circuit, referred to as the ansatz. These parameters are usually chosen such that a given objective function is minimized. Generally, the cost function to be minimized is computed on a quantum device, and then the parameters are updated using a classical optimizer. One of the most promising VQAs for practical hardware is the Variational Quantum Eigensolver (VQE). VQE uses the expectation value of a given Hamiltonian as the cost function, and the minimal value of this particular cost function corresponds to the minimal eigenvalue of the Hamiltonian.

Because evaluating quantum circuits is currently very noisy, developing classical bootstraps that help minimize the number of times a given circuit has to be evaluated is a powerful technique for improving the practicality of VQE. One such bootstrapping method is to create an ansatz which can be efficiently simulated on classical computers for restricted parameter values. Once the optimal set of restricted parameters is determined, they can be used as initial parameters for a VQE optimization which has access to the full parameter space. Stabilizer states are states which are generated by a particular group of operators called the Clifford group. Because of the underlying structure of these operators, circuits consisting of only Clifford operators can be simulated efficiently on classical computers. Clifford Ansatz For Quantum Algorithms (CAFQA) is a proposed classical bootstrap for VQAs that uses ansatzes which reduce to Clifford operators for restricted parameter values.

CAFQA has been shown to produce fairly accurate initializations for VQE applied to molecular Hamiltonians. Motivated by this result, in this paper we seek to analyze the Clifford states that optimize the cost function for a new type of Hamiltonian, namely transverse-field Ising Hamiltonians. Our primary result is contained in Theorem IV.1, which connects the problem of finding the optimal CAFQA initialization to a submodular minimization problem, which in turn can be solved in polynomial time.
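
The toy below conveys the flavor of a Clifford-initialization search (assumptions: a 3-qubit transverse-field Ising Hamiltonian and a search restricted to single-qubit stabilizer product states; it does not reproduce the paper's construction or Theorem IV.1): the expectation value of H is evaluated classically over a set of Clifford-reachable states and the minimum is kept as the initialization energy.

    import numpy as np
    from itertools import product

    # Brute-force search over single-qubit stabilizer product states (illustrative toy).
    I2 = np.eye(2)
    X = np.array([[0.0, 1.0], [1.0, 0.0]])
    Z = np.diag([1.0, -1.0])

    def kron_ops(ops):
        out = np.array([[1.0]])
        for op in ops:
            out = np.kron(out, op)
        return out

    def kron_vecs(vecs):
        out = np.array([1.0 + 0j])
        for v in vecs:
            out = np.kron(out, v)
        return out

    # Transverse-field Ising Hamiltonian H = -J sum Z_i Z_{i+1} - h sum X_i (open chain).
    n, J, h = 3, 1.0, 0.5
    H = sum(-J * kron_ops([Z if k in (i, i + 1) else I2 for k in range(n)]) for i in range(n - 1))
    H = H + sum(-h * kron_ops([X if k == i else I2 for k in range(n)]) for i in range(n))

    s = 1 / np.sqrt(2)
    stabilizer_1q = [np.array(v, dtype=complex) for v in
                     ([1, 0], [0, 1], [s, s], [s, -s], [s, 1j * s], [s, -1j * s])]

    def energy(combo):
        psi = kron_vecs(combo)
        return float(np.real(np.vdot(psi, H @ psi)))

    best = min(energy(combo) for combo in product(stabilizer_1q, repeat=n))
    print("lowest product-stabilizer-state energy:", best)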

11:15
Design of General Purpose Minimal-Auxiliary Ising Machines
PRESENTER: Andrew Moore

ABSTRACT. Ising machines are a form of quantum-inspired processing-in-memory computer which have shown great promise for overcoming the limitations of traditional computing paradigms while operating at a fraction of the energy cost. The process of designing Ising machines is known as the reverse Ising problem. Unfortunately, this problem is in general computationally intractable: it is a nonconvex mixed-integer linear programming problem which cannot be naively brute-forced except in the simplest cases due to the exponential scaling of runtime with the number of spins. We prove new theoretical results which allow us to reduce the search space to one with quadratic scaling. We utilize this theory to develop general-purpose algorithmic solutions to the reverse Ising problem. In particular, we demonstrate Ising formulations of 3-bit and 4-bit integer multiplication which use fewer total spins than previously known methods by a factor of more than three. Our results increase the practicality of implementing such circuits on modern Ising hardware, where spins are at a premium.
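
For readers unfamiliar with the forward direction of this problem, the toy below (a hypothetical example, not the paper's algorithm) checks that a small hand-built quadratic energy function has exactly the AND-gate truth table as its ground states; the reverse Ising problem is the hard task of finding such couplings for much larger circuits, e.g. multi-bit multipliers. The example uses 0/1 (QUBO) variables, which are equivalent to ±1 Ising spins up to an affine change of variables.

    from itertools import product

    # A standard QUBO penalty whose zero-energy states satisfy z = x AND y (illustrative only).
    def energy(x, y, z):
        return x * y - 2 * (x + y) * z + 3 * z

    ground_states = [(x, y, z) for x, y, z in product([0, 1], repeat=3) if energy(x, y, z) == 0]
    print(ground_states)  # [(0,0,0), (0,1,0), (1,0,0), (1,1,1)] -- exactly the AND truth table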

11:35
Accelerating VQE Algorithm via Parameters and Measurement Reuse
PRESENTER: Xinpeng Li

ABSTRACT. The Variational Quantum Eigensolver (VQE), a quantum-classical hybrid algorithm, holds significant potential for advancements in quantum chemistry and optimization tasks. Our study focuses on accelerating VQE in combinatorial optimization tasks such as Max-Cut, k-Clique, and k-SAT, while addressing the challenges of scalability and measurement overhead in quantum devices. Notably, solving one problem allows us to concurrently compute essential information, such as expectation values and gradients, for multiple other problems. This capability is feasible because combinatorial optimization problems often share the same measurement basis. Drawing inspiration from this insight, we introduce a method that efficiently solves multiple problems by appropriately selecting their initial points. Subsequently, we propose two selection strategies for the method. Employing these strategies can, to some extent, help avoid the issue of becoming trapped in local optima, thus leading to improved accuracy. Our tests on 5-node and 10-node graphs for Max-Cut and k-Clique show that, in some cases, our method outperforms random initialization in terms of both accuracy and the number of iterations.
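
The sketch below illustrates the shared-measurement-basis observation only (the paper's initial-point selection strategies are not reproduced): Max-Cut cost functions are diagonal in the computational (Z) basis, so a single batch of measured bitstrings can be reused to estimate the expected cut value of several different graphs.

    import numpy as np

    # Reusing one set of Z-basis samples for several diagonal cost functions (illustrative only).
    rng = np.random.default_rng(0)
    n, shots = 6, 4000
    bitstrings = rng.integers(0, 2, size=(shots, n))   # stand-in for measured Z-basis outcomes

    def expected_cut(bitstrings, edges):
        return float(np.mean([sum(1 for i, j in edges if s[i] != s[j]) for s in bitstrings]))

    graph_a = [(0, 1), (1, 2), (2, 3), (3, 0)]
    graph_b = [(0, 2), (1, 4), (4, 5), (2, 5)]
    # Same samples, two different problems:
    print(expected_cut(bitstrings, graph_a), expected_cut(bitstrings, graph_b))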

11:45
Cost Explosion for Efficient Reinforcement Learning Optimisation of Quantum Circuits
PRESENTER: Alexandru Paler

ABSTRACT. Large-scale optimisation of quantum circuits is a computationally challenging problem. Reinforcement Learning (RL) is a recent approach for learning strategies to optimise quantum circuits by increasing the reward of an optimisation agent. The reward is a function of the quantum circuit costs, such as gate and qubit counts, or circuit depth. Our goal is to improve the agent's optimisation strategy by including hints about how quantum circuits are optimised manually: there are situations when the cost of a circuit should be allowed to temporarily explode before applying optimisations which significantly reduce the circuit's cost. We provide numerical evidence, using Bernstein-Vazirani circuits, to support the advantage of this strategy. Our results are preliminary, and show that allowing cost explosions offers significant advantages for RL training, such as reaching optimal circuits. Cost explosion strategies have the potential to become an essential tool for RL-based optimisation of large-scale quantum circuits.

12:00-13:00 Lunch Break

Location: Above Ash (16th Floor)

13:00-14:40 Session 10: Quantum Computing, Thermodynamic Computing, Edge Computing, Stochastic Computing, and Approximate Computing

Location: Amici (4th Floor)

Chair:
13:00
Thermodynamic AI and the Fluctuation Frontier
PRESENTER: Patrick Coles

ABSTRACT. Many Artificial Intelligence (AI) algorithms are inspired by physics and employ stochastic fluctuations. We connect these physics-inspired AI algorithms, including (1) generative diffusion models, (2) Bayesian neural networks, (3) Monte Carlo sampling, and (4) simulated annealing, by unifying them under a single mathematical framework that we call Thermodynamic AI. Such Thermodynamic AI algorithms are currently run on digital hardware, ultimately limiting their scalability and overall potential. Stochastic fluctuations naturally occur in physical thermodynamic systems, and such fluctuations can be viewed as a computational resource. Hence, we propose a novel computing device, called Thermodynamic AI hardware, that could accelerate such algorithms. We contrast Thermodynamic AI hardware with quantum computing, where noise is a roadblock rather than a resource. Thermodynamic AI hardware can be viewed as a novel form of computing, since it uses a novel fundamental building block. We identify stochastic units (s-units) as the building blocks for Thermodynamic AI hardware. In addition to these s-units, Thermodynamic AI hardware employs a Maxwell's demon device that guides the system to produce non-trivial states. We provide a few simple physical architectures for building these devices.

13:20
A Crisis in Computing

ABSTRACT. Recent observations by the James Webb Space Telescope have focused attention on a “crisis in cosmology”. The crisis stems from the inability to reconcile the predictions of the standard model of cosmology with observations such as those from JWST. Perhaps the best-known difficulty is the “Hubble tension” - a difference in the expansion rate of the universe when measured from earlier and later stages of cosmic evolution. Such challenges call into question the conceptual foundations of the standard model and highlight the difficulties associated with unsubstantiated hypotheses like dark matter, dark energy, and inflation. This crisis, while highlighted in the context of cosmology, may have deep implications for our conception of reality more broadly [1][2]. A similar conceptual crisis has persisted in the foundations of quantum mechanics for nearly a century, where a mysterious, external observer is a required component of the theory. While admonitions to “shut up and compute” by quantum physicists may be sensible, pragmatic advice, they are also a tacit acknowledgement of a foundational, conceptual challenge [3]. I contend that a long-standing conceptual crisis also exists in computing and has been brought into focus recently by the quest for more powerful and general AI. The crisis is revealed, for example, in disagreements about whether current AI chatbots should be considered “stochastic parrots” [4] or “sentient” [5], but earlier indications include the difficulties posed by Turing's halting problem and Gödel's incompleteness theorems. I frame the crisis in computing by posing the question “what are algorithms and from where do they come?” But no matter how we choose to describe it, a conceptual crisis in computing seems all but inevitable given the acknowledged crises in physics.

13:30
Thermodynamic Linear Algebra
PRESENTER: Maxwell Aifer

ABSTRACT. Linear algebraic primitives are at the core of many modern algorithms in engineering, science, and machine learning. Hence, accelerating these primitives with novel computing hardware would have tremendous economic impact. Quantum computing has been proposed for this purpose, although the resource requirements are far beyond current technological capabilities, so this approach remains a long-term prospect. Here we consider an alternative physics-based computing paradigm based on classical thermodynamics, to provide a near-term approach to accelerating linear algebra. At first sight, thermodynamics and linear algebra seem to be unrelated fields. In this work, we connect solving linear algebra problems to sampling from the thermodynamic equilibrium distribution of a system of coupled harmonic oscillators. We present simple thermodynamic algorithms for (1) solving linear systems of equations, (2) computing matrix inverses, (3) computing matrix determinants, and (4) solving Lyapunov equations. Under reasonable assumptions, we rigorously establish asymptotic speedups for our algorithms, relative to digital methods, that scale linearly in matrix dimension. Our algorithms exploit thermodynamic principles like ergodicity, entropy, and equilibration, highlighting the deep connection between these two seemingly distinct fields and opening up algebraic applications for thermodynamic computing hardware.
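
A minimal numerical sketch of the core idea, under assumed toy dynamics rather than the paper's hardware protocol: overdamped Langevin dynamics with potential V(x) = 0.5 x^T A x - b^T x equilibrates to a Gaussian whose mean is A^{-1} b, so time-averaging a noisy trajectory estimates the solution of the linear system A x = b.

    import numpy as np

    # Solving A x = b by time-averaging an equilibrating stochastic trajectory (illustrative toy).
    rng = np.random.default_rng(1)
    d = 4
    M = rng.normal(size=(d, d))
    A = M @ M.T + d * np.eye(d)              # symmetric positive definite
    b = rng.normal(size=d)

    dt, burn_in, steps = 1e-3, 20_000, 200_000
    x = np.zeros(d)
    running_sum = np.zeros(d)
    for t in range(burn_in + steps):
        x += -(A @ x - b) * dt + np.sqrt(2 * dt) * rng.normal(size=d)   # Euler-Maruyama step
        if t >= burn_in:
            running_sum += x
    x_est = running_sum / steps
    print(np.linalg.norm(x_est - np.linalg.solve(A, b)))   # error shrinks as the sampling time grows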

13:40
Mitigating the Correlation Problem in Multi-Layer Stochastic Circuits
PRESENTER: Owen Hoffend

ABSTRACT. Stochastic computing (SC) is a low-cost non-standard computer architecture that processes pseudo-random bitstreams. Its effectiveness, and that of other probabilistic methods, requires maintaining desired levels of correlation among interacting input bitstreams, for example, SCC = 0 or SCC = +1, where SCC is the stochastic cross-correlation metric. Correlation errors are systematic (bias-causing) errors that cannot be corrected by increasing bitstream length. A typical stochastic design C1 only controls correlation at its primary input lines. This is a fairly straightforward task; however, it limits the scope of SC to “single-layer,” usually combinational, designs. In situations where a second processing layer C2 follows C1, the output correlation of C1 must satisfy the input correlation needs of C2. This can be done by inserting a (sequential) correlation control layer S12 between C1 and C2, which incurs high area and delay overhead. S12 transforms inter-layer bitstreams Z with unknown SCC values into numerically equivalent ones Z* with desired correlation. This paper addresses the fundamental problem, which apparently has not been considered before, of designing C1 to produce Z* directly, thereby dispensing with S12. We focus on two-layer designs C1C2 requiring SCC = +1 between layers, and present a method called COMAX for (re)designing C1 so that it outputs bitstreams with correlation that is as close as possible to +1. We demonstrate on a representative image processing application that, compared to alternative correlation control techniques, COMAX reduces area by a massive 2x without reducing output image quality.
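
The toy below illustrates why inter-stream correlation matters (it is not the COMAX method): an AND gate multiplies bitstream values only when its inputs are uncorrelated (SCC = 0); with maximally positive correlation (SCC = +1) the same gate computes min(pA, pB), a systematic error that longer bitstreams cannot fix.

    import numpy as np

    # Effect of bitstream correlation on an AND-gate multiplier (illustrative only).
    rng = np.random.default_rng(0)
    N, pA, pB = 100_000, 0.6, 0.3
    A_ind = rng.random(N) < pA
    B_ind = rng.random(N) < pB            # generated independently: SCC ~= 0
    u = rng.random(N)
    A_cor, B_cor = u < pA, u < pB         # generated from shared randomness: SCC = +1
    print((A_ind & B_ind).mean())         # ~= pA * pB = 0.18
    print((A_cor & B_cor).mean())         # ~= min(pA, pB) = 0.30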

14:00
High-Speed Compilation of Large-Scale Stochastic Circuits
PRESENTER: Alexandru Paler

ABSTRACT. Stochastic computing (SC) is executed on bitstreams which encode probabilities as the ratio of the number of one bits to the length of the stream. SC has been successfully applied, for example, to image processing and belief propagation of low-density parity-check codes. With increased interest in SC, there is a gap with respect to the available scalable design automation tools. Scalable methods for the design of SC are needed in order to enable fast prototyping of such circuits. We bridge this gap by starting from BitSAD.jl and implementing data structures and algorithms to achieve orders-of-magnitude speed-ups for the compilation of SC. We are able to compile circuits which were previously out of reach for the tools, achieving around three orders of magnitude faster compilation. Our methods are the first step towards scalable design automation of SC.

14:10
Clustering Vehicle Routing Problems on Specialized Hardware

ABSTRACT. The Vehicle Routing Problem (VRP) is an NP-hard optimization problem that has been extensively studied. In practice, clustering methods are used to divide problems into smaller instances in order to scale the solvers' capabilities. In this work, we propose a novel method for solving large-scale VRPs by leveraging the power of quantum and quantum-inspired devices. The focus is on clustering customers, which is a common strategy for solving large VRPs. The clustering is formulated as a variation of the Capacitated Clustering Problem (CCP), which is used to divide customers into groups with similar characteristics. We propose a new similarity metric between customers. We further demonstrate the suitability of specialized hardware, such as the Fujitsu Digital Annealer (DA), for solving CCPs and show how the new similarity measure can be incorporated into the CCP for different VRP variations. We carry out experiments on four different VRP variants and obtain near-optimal solutions, demonstrating that our model outperforms other clustering models such as K-Means, and that the DA outperforms other state-of-the-art solvers.

14:20
Testing the Accuracy of Surface Code Decoders
PRESENTER: Alexandru Paler

ABSTRACT. Large-scale, fault-tolerant quantum computations will be enabled by quantum error-correcting codes (QECC). This work presents the first systematic technique to test the accuracy and effectiveness of different QECC decoding schemes by comparing a look-up table decoder, where errors are systematically mapped to syndrome information, to solutions generated using algorithmic decoders. Specifically, we examine the results of minimum-weight perfect-matching and belief-propagation decoders against exhaustive look-up tables for surface codes up to distance seven and categorise where errors are accurately corrected in both decoding schemes. While our results are preliminary, we show that significant quantitative results can be generated, comparing how actual error channels are successfully or unsuccessfully decoded. We show that different decoding schemes perform very differently under the same QECC scheme and error model, and detail how decoders can be tested and classified with respect to errors that are successfully decodable. This work paves the way for the data-driven tuning of decoder ensembles and will enable the tailored design of hybrid decoding schemes that allow for real-time decoding, while maintaining the high theoretical thresholds allowed by specific quantum error correction codes.

14:30
Unconventional Computing's False Promises?

ABSTRACT. Unconventional or non-classical computing has risen dramatically in popularity over the last two decades. The reasons are diverse. In this opinion article, we take stock of what this research field has achieved so far and where future trends may, or should, be going.

14:40-14:55 Coffee Break

Location: Amici (4th Floor)

14:55-16:05 Session 11: Conventional Computing

Location: Amici (4th Floor)

14:55
Limits to the Energy Efficiency of CMOS Microprocessors
PRESENTER: Ege Erdil

ABSTRACT. CMOS microprocessors have experienced significant advancements in energy efficiency, but fundamental limits may soon be encountered. This paper presents a model that estimates the upper bounds on the maximum FLOPs per Joule (FLOP/J) for CMOS processors. We analyze the limits to energy efficiency in CMOS processors by considering three main sources of energy dissipation: transistor switching, interconnect, and leakage power. Using first-principles calculations and empirical data, we model each source to determine its minimum energy cost per FLOP. We establish a theoretical upper limit of approximately 1e17 FLOP/J for idealized processors, though more realistic assumptions suggest that the practical limit may be around 1e15 FLOP/J, with large uncertainty ranges. This paper represents an initial step towards rigorous and verifiable forecasting of the fundamental limits of CMOS processor energy efficiency.
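
A structural sketch of how such a bound can be assembled (the parameter values below are arbitrary placeholders chosen for illustration, not the paper's estimates): the energy per FLOP is modeled as the sum of switching, interconnect, and leakage contributions, and the FLOP/J limit is its reciprocal.

    # Structural model of a FLOP/J upper bound (illustrative only; all inputs are placeholders).
    def flop_per_joule(e_switch_per_transition, transitions_per_flop,
                       e_interconnect_per_flop, e_leakage_per_flop):
        e_flop = (e_switch_per_transition * transitions_per_flop
                  + e_interconnect_per_flop + e_leakage_per_flop)
        return 1.0 / e_flop

    # Hypothetical example values (joules), used only to show how the model is evaluated:
    print(flop_per_joule(e_switch_per_transition=1e-19,
                         transitions_per_flop=1e3,
                         e_interconnect_per_flop=5e-17,
                         e_leakage_per_flop=5e-17))   # ~5e15 FLOP/J for these particular inputs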

15:15
RefineHD: Accurate and Efficient Single-Pass Adaptive Learning Using Hyperdimensional Computing
PRESENTER: Pere Vergés

ABSTRACT. Hyperdimensional computing (HDC) is a computing framework that has gained significant attention due to its high efficiency and rapid training and inference of machine learning algorithms. With its fast learning and inference capabilities, HDC shows excellent potential for IoT/embedded systems. However, while HDC allows for fast single-pass learning, it suffers from weak classification accuracy, resulting from model saturation caused by excessive noise due to the addition of similar patterns. In this paper, we propose an adaptive learning method that surpasses the state-of-the-art adaptive HDC model in accuracy and robustness while maintaining the same efficiency during both the training and testing phases. Our method addresses the issue of saturation by adding correctly classified samples only when they differ sufficiently from the existing class pattern. Moreover, we achieve a model that is robust against noise and hardware failures thanks to the holographic properties of HDC. Through this approach, we achieve a remarkable average accuracy improvement of +2.8% across 126 datasets (with a maximum improvement of +26%). Furthermore, we observe a +6.6% improvement over the hyperdimensional computing baseline (with a maximum improvement of +67%), all while retaining the same inference efficiency.
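
A minimal sketch of a selective single-pass update in the spirit of the abstract (the threshold rule and class-prototype encoding below are assumptions, not the exact RefineHD algorithm): a correctly classified sample is bundled into its class hypervector only when it is sufficiently dissimilar from that prototype, limiting saturation.

    import numpy as np

    # Selective single-pass HDC update (illustrative only).
    def cosine(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

    def single_pass_update(prototypes, x, label, threshold=0.9):
        pred = max(prototypes, key=lambda c: cosine(prototypes[c], x))
        if pred != label:                                   # misclassified: always update
            prototypes[label] += x
            prototypes[pred] -= x
        elif cosine(prototypes[label], x) < threshold:      # correct but novel enough: bundle it
            prototypes[label] += x                          # otherwise skip, to avoid saturation

    # Tiny usage example with random bipolar hypervectors standing in for encoded samples.
    rng = np.random.default_rng(0)
    D, classes = 10_000, 3
    prototypes = {c: np.zeros(D) for c in range(classes)}
    for _ in range(100):
        label = int(rng.integers(classes))
        x = rng.choice([-1.0, 1.0], size=D)
        single_pass_update(prototypes, x, label)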

15:35
Low Temperature Hybrid Electronics for Next Generation Computing
PRESENTER: Stephen Whiteley

ABSTRACT. We discuss the system integration of cryogenically optimized CMOS, combined with superconducting electronics and optionally other technologies, as a framework for future supercomputing.

15:45
A Proof-of-Concept Prototyping of Reservoir Computing with Quantum Dots and an Image Sensor for Image Classification
PRESENTER: Takuto Matsumoto

ABSTRACT. In this paper, we set up an experimental system leveraging a commercially available image sensor and projector to demonstrate reservoir computing with quantum dots (QDs). Through this, we successfully executed an image classification task. Our findings highlight that QDs serve as a reservoir, especially when dealing with large input image sizes. Future challenges include optimizing QD density and extending evaluations to other tasks.

15:55
Reducing Read Amplification and Re-synthesis in DNA-based Archival Storage

ABSTRACT. DNA-based data storage has emerged as a candidate archival technology due to its density and longevity, but current molecular approaches suffer from destructive and cumulative reads that lead to periodic re-synthesis of the DNA, a costly and time-consuming process. Architectural approaches can help mitigate the frequency of these events and boost system durability. We propose approaches for placing and caching data that exploit the fact that only a small part of the data is frequently accessed. We show that when hints are available to identify relatively hot data, distributing or consolidating hot blocks can delay the time to first re-synthesis by up to 50% over the baseline model and reduce cumulative re-synthesis by 6.6×. We further show that combining a cache with these policies is even more effective. This work demonstrates the importance of considering system architecture to mitigate read amplification and re-synthesis of blocks.

16:05-16:20 Coffee Break

Location: Amici (4th Floor)

16:30-18:30 Session 13: Intel Quantum SDK Tutorial

Location: Amici (4th Floor)

16:30
Intel Quantum SDK: Hybrid Quantum-Classical Programming Tools
PRESENTER: Kevin Rasch

ABSTRACT. Variational quantum algorithms represent a highly promising category of quantum computing tasks that exhibit quantum benefits. In this interactive tutorial, we will introduce the Intel(R) Quantum SDK for executing variational algorithms efficiently and delve into the comprehensive platform design. Moreover, we will demonstrate how users can develop their own quantum variational algorithms utilizing the Intel Quantum SDK. Participants will have the opportunity to access the Intel Quantum SDK and gain practical experience by running sample applications. Furthermore, we will showcase the process of writing, compiling, and executing a hybrid quantum-classical program on the platform.