EWDTS-2023: 2023 IEEE EAST-WEST DESIGN & TEST SYMPOSIUM
PROGRAM FOR SATURDAY, SEPTEMBER 23RD

09:00-09:45 Session 4

Plenary Session 2A

09:00
Integrated Circuit Reliability: Current State, Challenges and Solutions

ABSTRACT. Vazgen Sh. Melikyan, Professor, Corresponding Member of National Academy of Science of Armenia, Doctor of Technical Sciences, Honorable Scientist of Armenia, Director of Educational Department of Synopsys Armenia CJSC, Head of “Microelectronic Circuits and Systems” (MCS) Chair of National Polytechnic University of Armenia (NPUA).

09:45-13:45 Session 5

Regular Papers

Session 2A

09:45
In-Memory Fault-Free Vector Simulation

ABSTRACT. A technology for modeling and in-memory simulation of digital devices based on smart data structures is proposed. A fault-free method for modeling and simulating digital devices based on vector logic, stored in memory and driven by read-write transactions, is considered. In-memory fault-free simulation on smart data structures is aimed at implementation in the memory of SoC, FPGA, and RISC-V VLSI. The proposed method of in-memory simulation of logic circuits does not require synthesis and implementation in a standard cell library. Moreover, the performance of the method grows with the size of the elements: the more inputs an element has, the higher the parallelism of processing input test sets as addresses. On top of the good-behavior simulation, a system for stuck-at fault simulation of logical functionality, as well as of logic circuits, is built. The software application is aimed at teaching university students methods for the verification and testing of digital products. A convenient visual interface allows the application to serve student projects comprising dozens of logic elements of any complexity.
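The core idea above, storing a gate's truth table as a logic vector in memory and using the input test set as a read address, can be sketched as follows. The gate encodings and function names here are illustrative, not the paper's actual smart data structures:

```python
# Sketch: in-memory fault-free simulation of a logic element.
# The gate's truth table is stored as a flat logic vector; an input
# combination is packed into a read address into that vector.
# (Illustrative encoding only; the paper's data structures are richer.)

AND2 = [0, 0, 0, 1]   # logic vector: output bits indexed by address (a<<1)|b
OR2  = [0, 1, 1, 1]
XOR2 = [0, 1, 1, 0]

def simulate(logic_vector, inputs):
    """Fault-free simulation as a single read transaction:
    pack the input bits into an address and read the output bit."""
    address = 0
    for bit in inputs:
        address = (address << 1) | bit
    return logic_vector[address]

# A gate with n inputs needs a vector of 2**n bits; wider gates mean
# more input combinations addressable per lookup.
print(simulate(AND2, [1, 1]))  # the only address where AND2 reads 1
```

This is why no synthesis to a standard cell library is needed: any n-input function is just a 2**n-bit vector and a read transaction.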

10:00
Modeling Faults as Addresses

ABSTRACT. A smart data architecture is proposed for simulating faults in digital circuits as in-memory computing. The purpose of such computing is to reduce energy consumption and latency when simulating logic circuits by replacing processor instructions with fast read-write transactions on logic vectors in memory. To do this, it is proposed to leverage the vector form of the truth table, which is used to construct deductive matrices of logic elements. The following axiom is used: the truth table of tests T, the logical functionality of the element L, and the faults F are identical in form and always interact through the convolution T⨁L⨁F=0. The deductive matrix is seen as the genome of logic for solving all design and test problems. To this end, smart data structures are built on the logical vector that minimize the complexity of fault simulation and of the good-behavior algorithm of the digital product. Deductive mechanisms for modeling faults as addresses are proposed, based on read-write transactions on smart and explicit data structures in the form of vectors, tables, and matrices. A superposition of smart and explicit data structures based on logical vectors and truth tables is proposed, which itself forms a solution: such data requires no simulation algorithms, only modeling algorithms for the superposition of explicit data structures, which leads to a solution without simulation. A software architecture is proposed for fault modeling, good-behavior analysis, and test generation based on smart data structures. Results of processing the same digital fragments, used to verify the data structures and the modeling and simulation mechanisms implemented in Python code, are presented.

10:15
In-memory Fault as Address Simulation

ABSTRACT. A technology for modeling and in-memory simulation of digital devices based on smart data structures is proposed. A method for modeling and simulating faults as addresses is proposed for analyzing the quality of logic circuit tests, based on the use of truth tables and deductive vectors. Simple formulas without direct analogues are proposed for fault modeling and simulation, for the synthesis of deductive matrices, and for the fault-free simulation of logic elements. Smart data structures and an algorithm based on read-write transactions are developed for in-memory fault-free simulation of digital functionality. Smart digital-logic structures (GL, RTL, SL) can be implemented in any memory without complex and expensive synthesis procedures. Vector-tabular in-memory simulation of single and multiple stuck-at faults (as addresses) of functional elements with an arbitrary number of inputs is proposed. A truth table is proposed to describe the tested combinations of input-line faults. Instead of a processor, read-write transactions on smart data structures are used. A vector-logical synthesis of deductive matrices for fault simulation is proposed, with quadratic computational complexity. The input data for synthesis are the input test set and the logical vector of the functionality. The essence of simulation is the superposition of three components: the input test vector, its derivative (the fault truth table), and the deductive matrix obtained from the logical vector. The superposition of explicit structures of single-format smart data means that no simulation algorithm is needed to obtain the list of faults detected by the test. An automaton for deductive fault simulation (as addresses) of the logic elements of a digital circuit is also synthesized. In-memory fault simulation on smart data structures is aimed at implementation in SoC, FPGA, RISC-V, and VLSI.
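One way to read "fault as address": a stuck-at fault on an input line forces one bit of the read address, and a test pattern detects the fault iff the faulty read differs from the good one. The sketch below illustrates only this idea; the helper names are hypothetical and the paper's deductive matrices are more general:

```python
# Sketch: modeling a stuck-at fault on an input line as an address change.
# (Illustrative only, not the paper's implementation.)

def pack(inputs):
    """Pack input bits (MSB first) into a read address."""
    address = 0
    for bit in inputs:
        address = (address << 1) | bit
    return address

def stuck_at(address, line, value, width):
    """Return the address with input line `line` stuck at `value`.
    Lines are numbered 0-based from the most significant input."""
    mask = 1 << (width - 1 - line)
    return (address | mask) if value else (address & ~mask)

def detects(logic_vector, inputs, line, value):
    """Does this test pattern detect the given stuck-at fault?"""
    good = pack(inputs)
    bad = stuck_at(good, line, value, len(inputs))
    return logic_vector[good] != logic_vector[bad]

AND2 = [0, 0, 0, 1]
# The pattern (1,1) detects stuck-at-0 on either input of an AND gate:
print(detects(AND2, [1, 1], 0, 0))  # True
```

Both the fault-free and the faulty value come from read transactions on the same logic vector, with no gate-level evaluation code.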

10:30
Advancements in Battery State of Charge Estimation Methods for Battery Management Systems: A Comprehensive Literature Review

ABSTRACT. This is a comprehensive literature review of battery State of Charge (SoC) estimation methods for Battery Management Systems (BMS). In the field of BMS, the SoC is a crucial parameter that indicates the remaining charge in a battery during its current cycle. Accurately estimating the SoC is vital to prevent the battery from operating in unfavorable conditions, such as low charge, and to ensure its safe and efficient operation, ultimately extending its service life. This motivates the exploration and comparison of various SoC estimation methods. Among the commonly used methods, the Ampere-Hour Integration (AHI) method is the simplest but, due to its open-loop nature, cannot correct estimation errors. The Open Circuit Voltage (OCV) method relies on a table-lookup estimation based on the relationship between open-circuit voltage and SoC; however, it is unsuitable for online estimation because it requires a long rest time for the voltage to stabilize. The Kalman Filter (KF) method combines aspects of the first two, leveraging system observation errors to make timely corrections to state estimates. It is suitable for online estimation when paired with an appropriate battery model, resulting in high estimation accuracy. This paper summarizes the advantages and disadvantages of each SoC estimation method and explores potential avenues for improvement. By analyzing the limitations and challenges of SoC estimation algorithms in practical engineering applications, it provides insights into the future development of online battery SoC estimation.
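The open-loop AHI (coulomb counting) method mentioned above fits in a few lines; the cell values below are illustrative, not from the paper:

```python
# Sketch: Ampere-Hour Integration (coulomb counting).
# SoC(t) = SoC(0) - (1/C) * integral of i(t) dt, with C in ampere-seconds.
# Being open-loop, any current-sensor bias accumulates without correction,
# which is the weakness the Kalman-filter methods address.

def soc_ahi(soc0, currents_a, dt_s, capacity_ah):
    """Discrete coulomb counting; positive current means discharge."""
    capacity_as = capacity_ah * 3600.0  # Ah -> ampere-seconds
    soc = soc0
    for i in currents_a:
        soc -= i * dt_s / capacity_as
    return soc

# A full 2 Ah cell discharged at a constant 2 A for 30 minutes loses 50% SoC:
print(soc_ahi(1.0, [2.0] * 1800, 1.0, 2.0))  # ≈ 0.5
```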

10:45
Performance and Comparative Analysis of Elliptic Curve Cryptography and RSA

ABSTRACT. Technology is developing very fast nowadays, so cryptographers constantly try to create improved cryptosystems that outperform their predecessors. Data security is one of the key issues, especially with the growth of transactions over the internet. RSA is a public-key cryptosystem that is often used as a standard in data security. However, Elliptic Curve Cryptography (ECC), created as an alternative to the RSA cryptosystem, has become increasingly relevant. For devices with limited computing resources, it is difficult to perform large-key encryption quickly; elliptic curve cryptography copes well with this problem. Every public-key cryptosystem is built on the complexity of one or more hard mathematical problems. This paper evaluates the efficiency of the elliptic curve cryptographic method and the asymmetric RSA algorithm. The comparative analysis showed that elliptic curve cryptography has an advantage in computation speed over the RSA algorithm. The main advantage of ECC over RSA is that it provides the same security with a smaller key size. For example, 80-bit security can be achieved with a 160-bit ECC key, whereas RSA needs a 1024-bit key for the same security level. This large difference makes ECC the better and more promising algorithm for current embedded systems.
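The abstract's 160-bit-ECC vs. 1024-bit-RSA example comes from the commonly cited NIST SP 800-57 comparable-strength table, sketched below; the table and helper are illustrative, not part of the paper:

```python
# Sketch: NIST-style comparable key strengths behind the abstract's example.
# Each symmetric security level maps to (RSA modulus bits, ECC key bits);
# the gap widens as the security level grows.

COMPARABLE_STRENGTHS = {
    # security bits: (RSA key bits, ECC key bits)
    80:  (1024, 160),
    112: (2048, 224),
    128: (3072, 256),
    192: (7680, 384),
    256: (15360, 512),
}

def key_size_ratio(security_bits):
    """How many times larger the RSA key is at a given security level."""
    rsa, ecc = COMPARABLE_STRENGTHS[security_bits]
    return rsa / ecc

print(key_size_ratio(80))   # 1024/160 = 6.4
print(key_size_ratio(256))  # 15360/512 = 30.0
```

The growing ratio is why ECC scales better for embedded targets: key storage and transmission costs stay small even at high security levels.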

11:00
Congestion-Aware Routing in Software Defined Vehicular Networks

ABSTRACT. Vehicular communications have been extensively studied due to their potential to provide safety and intelligence in transportation. Nevertheless, with increasing mobility and complex communication environments, it has become challenging to satisfy requirements such as lower latency and higher reliability when communicating safety-related information. Routing vehicular data from one vehicle to another over multiple hops using distributed algorithms has been addressed in the literature before; however, a distributed approach proves inefficient at controlling network congestion. To overcome this, the coordination capabilities of Software Defined Networking are leveraged to find ideal vehicle-to-vehicle multi-hop routes. To achieve this, Congestion-Aware Routing (CAR), a centralised routing algorithm that relies on graph theory to choose short and uncongested vehicle-to-vehicle paths, is used.

11:15
Tuning Genetic Algorithm Parameters for Placement of Integrated Circuit Cells

ABSTRACT. This paper considers the possibility of reducing the placement of integrated circuit (IC) elements to the quadratic assignment problem and applying genetic algorithms to its matrix solution. The impact of key genetic algorithm parameters on the efficiency of solving the placement problem for IC cells has been studied. Based on these studies, recommendations are provided for selecting values and mechanisms for crucial parameters such as population size, selection, crossover, and termination. The placement problem is reduced to the quadratic assignment problem, and a genetic algorithm adapted to its matrix solution has been implemented. The paper presents an analysis of the effectiveness of the proposed approach and compares it with the traditional method of sequential placement, using examples of placing elements from various test circuits.
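As a hedged illustration of the quadratic-assignment formulation, here is a compact genetic algorithm: cells are assigned to slots by a permutation, and cost sums connectivity times distance. All operators and parameter values (tournament size 3, mutation rate 0.2, elitism) are illustrative choices, not the parameters tuned in the paper:

```python
# Sketch: GA for the quadratic assignment form of cell placement.
# perm[i] is the slot assigned to cell i; flow = connectivity, dist = slot distance.

import random

def qap_cost(perm, flow, dist):
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def order_crossover(p1, p2, rng):
    """Copy a slice from one parent, fill the rest in the other's order."""
    n = len(p1)
    a, b = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[a:b] = p1[a:b]
    rest = [g for g in p2 if g not in child]
    for k in range(n):
        if child[k] is None:
            child[k] = rest.pop(0)
    return child

def ga_place(flow, dist, pop_size=30, generations=100, seed=1):
    rng = random.Random(seed)
    n = len(flow)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    cost = lambda p: qap_cost(p, flow, dist)
    for _ in range(generations):
        new_pop = [min(pop, key=cost)]              # elitism
        while len(new_pop) < pop_size:
            p1 = min(rng.sample(pop, 3), key=cost)  # tournament selection
            p2 = min(rng.sample(pop, 3), key=cost)
            child = order_crossover(p1, p2, rng)
            if rng.random() < 0.2:                  # swap mutation
                i, j = rng.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            new_pop.append(child)
        pop = new_pop
    return min(pop, key=cost)
```

Population size, selection pressure, crossover operator, and termination criterion are exactly the knobs whose tuning the paper studies.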

11:30
A Survey on Hardware Prefetching in Shared-Memory Multiprocessors

ABSTRACT. Memory latency presents a significant obstacle to computer performance. To hide it, a prefetching mechanism retrieves data from memory before the processor requests it, anticipating near-term data demands. An effective prefetching scheme can reduce cache miss rates and hide memory latency. Prefetching has yielded major advances in both industry and academia; modern high-performance processors almost universally implement hardware prefetchers. In this paper, we introduce the fundamental concepts underpinning hardware prefetching and survey state-of-the-art hardware prefetching schemes. We delineate common prefetching approaches, analyse their relative merits and limitations, and identify key challenges and open research questions. Overall, this paper aims to provide a comprehensive overview of hardware prefetching that highlights critical trends and technologies in this crucial performance-optimization domain.
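One of the classic schemes such surveys cover is the per-PC stride prefetcher, sketched below as a behavioral model; a real hardware prefetcher also tracks confidence counters and limited table capacity, which this sketch omits:

```python
# Sketch: a minimal per-PC stride prefetcher.
# Each load PC gets a (last_addr, stride) entry; when the observed
# stride repeats, the next address in the stream is prefetched.

class StridePrefetcher:
    def __init__(self):
        self.table = {}  # pc -> (last_addr, stride)

    def access(self, pc, addr):
        """Record a demand access; return a prefetch address or None."""
        prefetch = None
        if pc in self.table:
            last_addr, stride = self.table[pc]
            new_stride = addr - last_addr
            if new_stride == stride and stride != 0:
                prefetch = addr + stride   # stride confirmed: fetch ahead
            self.table[pc] = (addr, new_stride)
        else:
            self.table[pc] = (addr, 0)
        return prefetch

pf = StridePrefetcher()
for a in (100, 164, 228):          # one load streaming with stride 64
    hint = pf.access(pc=0x400, addr=a)
print(hint)  # 292 = 228 + 64, issued once the stride repeats
```

The two-miss training delay visible here (no prefetch until the stride is confirmed) is one of the accuracy-versus-timeliness trade-offs prefetching schemes differ on.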

11:45
Functional Verification of Multiport SRAM Memories Based on UVM
PRESENTER: Arman Manukyan

ABSTRACT. With the increasing complexity of electronic systems and the demand for higher reliability and performance, the importance of electronic verification has only grown in recent years. Verification is a crucial step in the design process of Application Specific Integrated Circuits (ASICs). Functional (simulation-based) verification is a widely used methodology that involves running simulation models of the ASIC design to verify its functionality and performance. Memory verification in particular is a critical aspect of the overall verification process in digital hardware design. It focuses on verifying the functionality and correctness of memory components within a design, such as RAM (Random Access Memory) and ROM (Read-Only Memory). The article proposes a verification method for multiport memories based on the standardized UVM methodology, which provides high functional coverage through randomization.

12:00
Physical Design of 6T Cell of SRAM Devices and Comparative Analysis of Layout

ABSTRACT. The article discusses the physical design of the most common 6T memory cell of modern SRAM devices and of 16-bit and 256-bit memory arrays based on it at the 5 nm technology node. Based on the schematic solution of the memory cell and information about the underlying transistors and interconnections, five layout solutions of the 6T cell were proposed using two metal layers. From these, the topologies of arrays with 16 and 256 bits of memory were developed. Physical and schematic engineering simulations were performed for the proposed topological solutions. Based on the obtained results, a comparative analysis of the proposed layout solutions was performed in terms of occupied area, current consumption, read/write delay times, temperature, supply voltage, and process variation over a wide range.

12:15
A Comprehensive Approach for Enhancing Deep Learning Datasets Quality Using Combined SSIM Algorithm and FSRCNN

ABSTRACT. The quality of the datasets used in deep learning problems is crucial in almost all application domains. A variety of common issues can degrade dataset quality, including data duplication, insufficient image resolution, and low data quality. Well-established techniques exist for mitigating these problems individually, such as using the SSIM algorithm to efficiently find similarities and duplicates in data, or leveraging neural networks like FSRCNN to increase image resolution and quality. However, while each of these approaches offers unique benefits, each also has inherent limitations when applied in isolation. To mitigate the disadvantages of the individual methods and provide a more comprehensive solution, a data preprocessing technique is proposed that combines the preceding algorithms into a single pipeline. The key insight is that SSIM's ability to identify duplicates and FSRCNN's power to super-resolve images offer complementary strengths. Experiments demonstrate that this hybrid approach achieves an improvement of approximately 3% in classification accuracy on the CIFAR-100 image dataset benchmark, with only a 1.5× increase in training time. The joint technique outperforms using SSIM or FSRCNN independently, highlighting the potential of hybrid solutions to enhance dataset quality. Further research can explore combinations of other complementary data-enhancement methods and study their performance across diverse datasets and machine learning models. These early findings show that using multiple techniques together can improve model accuracy by addressing various issues with low-quality datasets.

12:30
Research on the Throughput Capacity of LoRaWAN Communication Channel

ABSTRACT. This paper investigates the throughput capacity of the LoRaWAN communication channel between two end devices. Using a mathematical model, the airtime is calculated for message transmissions with different spreading factor (SF) values. The mathematical model takes into account regional limitations on Duty Cycle. As a result of the calculations, dependencies of the channel's throughput capacity over a day are obtained as functions of parameters such as SF, Payload, and Duty Cycle. Based on the calculations, conclusions are drawn regarding the optimization of LoRaWAN device parameters to achieve maximum throughput capacity without compromising the communication channel's range.
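The airtime model the abstract describes can be sketched with the standard Semtech LoRa time-on-air formula. The parameter defaults below (125 kHz bandwidth, CR 4/5, 8-symbol preamble, explicit header, CRC on, 1% duty cycle) are typical EU868 settings assumed for illustration, not values taken from the paper:

```python
# Sketch: LoRa airtime and daily message budget under a duty-cycle limit.

import math

def lora_airtime(payload_bytes, sf, bw_hz=125_000, cr=1,
                 preamble_syms=8, explicit_header=True, crc=True,
                 low_dr_optimize=False):
    """Time on air in seconds (Semtech formula); cr=1 means CR 4/5."""
    t_sym = (2 ** sf) / bw_hz
    ih = 0 if explicit_header else 1
    de = 1 if low_dr_optimize else 0
    num = 8 * payload_bytes - 4 * sf + 28 + 16 * (1 if crc else 0) - 20 * ih
    n_payload = 8 + max(math.ceil(num / (4 * (sf - 2 * de))) * (cr + 4), 0)
    t_preamble = (preamble_syms + 4.25) * t_sym
    return t_preamble + n_payload * t_sym

def max_messages_per_day(payload_bytes, sf, duty_cycle=0.01):
    """The duty cycle caps cumulative on-air time, not message count directly."""
    return int(86_400 * duty_cycle / lora_airtime(payload_bytes, sf))

print(round(lora_airtime(10, 7) * 1000, 3))  # ≈ 41.216 ms at SF7
print(max_messages_per_day(10, 7))           # vs. far fewer messages at SF12
```

Sweeping SF and payload through these two functions reproduces the kind of daily-throughput curves the paper derives: higher SF extends range but inflates airtime, so the duty-cycle budget allows far fewer messages per day.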

12:45
Smart Adjustment Of Transistor Parameters To Reduce Temperature Rise Due To Self-Heating Effect

ABSTRACT. The present study introduces a novel approach for the computation of temperature elevation arising from the self-heating phenomenon, coupled with an effective parameter tuning technique aimed at mitigating temperature variations. In contemporary integrated circuits, the self-heating effect emerges as a highly significant and intricate phenomenon, often challenging to quantify accurately. Addressing this challenge demands a comprehensive strategy involving supplementary measurements and meticulous optimizations. This research employs a power measurement methodology to assess the thermal perturbations within individual components of the circuit. Furthermore, the study advances a robust technique for calibrating the parameters of transistors to govern the thermal upswing. The experimental investigations are conducted employing a Multi-Gate MOSFET library within a 14nm technological framework. Comprehensive simulations are executed utilizing the HSPICE simulator, while the scripting aspects are implemented through Python 3.7.

13:00
Validation and Test Challenges for Multi-Memory Bus BIST Engines

ABSTRACT. The most recent advanced technology nodes (e.g., FinFET, GAA) offer more possibilities to optimize MBIST timing for pre-inserted groups of memories with scheduling. Multi-Memory Bus BIST Engines (MMBBEs) are developed to take advantage of these possibilities. The validation of a given MMBBE requires consideration of a diversity of multi-memory configurations connected to one bus, and the huge number of such configurations complicates the overall validation. Currently there are no open sources describing MMBBE validation methodology. The suggested approach makes it possible to flexibly increase validation coverage and to reduce exhaustive effort during validation. A design generation environment for MMBBE validation is developed, based on selecting the primary/main parameters of the overall MMBBE specification that characterize the presence or absence of a certain feature in the specification. A hierarchy of parameter sets is built, characterizing further groups of MMBBE parameters: secondary, tertiary, etc. All values for some groups of parameters may be considered exhaustively, while the values for the remaining parameters may be chosen randomly. The solution can be continuously improved by considering new SoC implementations and AI-based methods, including machine learning, which will lead to natural changes in the hierarchy of parameters.

13:15
Aging-Protected Two-Stage CTLE for High-Speed Data Receivers

ABSTRACT. The proposed architecture protects a two-stage CTLE from aging phenomena. The solution adds devices that set different nodes of the circuit to different intermediate voltage levels and keep them out of high-stress operating conditions. The added circuitry is estimated to cost a 13% increase in the CTLE layout area. Aging simulation results show that after 10 years of operation the performance degradation is reduced from 45% to less than 5% thanks to the added protection system.

13:30
A Test Rig for Thermal Analysis of Heat Sinks for Power Electronic Applications

ABSTRACT. This paper discusses the design and manufacture of a test rig for practical thermal analysis of the temperature distribution across forced air-cooled heatsinks. High temperature gradients across power electronic modules with a large area of semiconductor structure can result in premature failure of the components due to mechanical stress-related fatigue. Computer modelling and simulations predict the temperature distribution across the heatsink, but physical temperature measurements are required to validate these results. To acquire these temperature readings, a bespoke test rig is designed and manufactured. Temperature readings obtained using this test rig are compared with those obtained by computer simulation and hence validate the simulation results.