Plenary Session. Logic Vector is a Genome of Computing
Regular Papers
| 14:00 | A Low-Cost Secure Streaming Scan Network Architecture with Test Vector Encryption ABSTRACT. The Streaming Scan Network (SSN) architecture offers an efficient solution for high-speed scan data distribution in System-on-Chip (SoC) testing. However, its unrestricted access poses significant security risks, including unauthorized data manipulation and leakage of sensitive information, such as secret keys. In this work, we propose a low-cost, secure SSN architecture that combines a stream cipher for test vector encryption with a user authorization mechanism for test response obfuscation. Our approach significantly reduces area overhead while maintaining robust security, offering an efficient alternative to state-of-the-art solutions. Experimental results demonstrate that our proposed scheme achieves enhanced security with minimal resource impact, making it a practically viable solution. |
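The abstract above does not specify which stream cipher is used, but the general idea (scan-in test vectors XORed with a keystream derived from a shared secret) can be sketched as follows. SHAKE-256 here is only a stand-in keystream generator, and the key/nonce names are illustrative assumptions, not the paper's design:

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudorandom keystream from a shared key and a per-pattern nonce.
    SHAKE-256 stands in for the paper's (unspecified) stream cipher."""
    return hashlib.shake_256(key + nonce).digest(length)

def encrypt_test_vector(key: bytes, nonce: bytes, vector: bytes) -> bytes:
    """XOR the scan-in test vector with the keystream.
    Decryption is the identical operation, since XOR is self-inverse."""
    ks = keystream(key, nonce, len(vector))
    return bytes(v ^ k for v, k in zip(vector, ks))

key, nonce = b"shared-secret-key", b"pattern-0001"
vector = bytes([0b10110010, 0b01001110])          # two bytes of scan data
cipher = encrypt_test_vector(key, nonce, vector)
assert encrypt_test_vector(key, nonce, cipher) == vector
```

An authorized tester holding the same key regenerates the keystream on-chip, so only encrypted data ever crosses the SSN bus.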
| 14:15 | Predicting Anthropometric Dimensions Using Machine Learning for Biomedical Anthropometric Wearable Device ABSTRACT. Precise anthropometric data are required to tailor biomedical wearable devices. Conventional methods rely either on manual measurement or on image processing with computer vision, both of which involve extensive reference data, high-quality imaging, and pixel-level computation. Here, we introduce a new approach that requires no image-based inputs, using structured biological data and machine learning to predict major anthropometric dimensions. We systematically evaluated a variety of machine learning models, such as Decision Tree, Linear Regression, Support Vector Regression (SVR), Random Forest, Gradient Boosting, XGBoost, and K-Nearest Neighbors (KNN), as well as deep learning models such as Multilayer Perceptron (MLP), TabNet, and CatBoost. After thorough benchmarking, CatBoost was found to be the best model in terms of both precision in predicting anthropometric variables and generalizability. The model needs only common biological inputs such as gender, weight, height, and BMI to predict key anthropometric measures such as knee dimensions and leg length, with no reliance on image datasets or computer vision preprocessing. The approach inherently minimizes computational overhead and streamlines the design pipeline for biomedical devices, especially where conventional measurement is not feasible. It offers a viable, scalable solution for data-driven, contactless design of personalized wearable systems. |
| 14:30 | Intelligent Multimodal Cueing Wearable Device for Gait Rehabilitation in Parkinson’s Disease Patients ABSTRACT. Freezing of Gait (FOG) is one of the most debilitating symptoms of Parkinson’s Disease (PD), often resulting in falls, impaired mobility, and loss of independence. This paper presents a comprehensive wearable assistive system designed to detect, predict, and mitigate FOG episodes in real time. The proposed device integrates multiple sensing and feedback modalities, including inertial measurement units (IMUs), surface electromyography (sEMG), dynamic visual cueing via laser projection, and haptic feedback through vibratory actuators. The system leverages a hybrid Convolutional Neural Network-Long Short-Term Memory (CNN-LSTM) model to recognize gait phases and detect movement intentions, achieving a classification accuracy of 90%. Simultaneously, a Random Forest classifier is trained on real-time sEMG signals to monitor dorsiflexor and plantarflexor activity, providing biomechanical insight into muscular performance. Based on this muscular feedback, the system dynamically adapts its visual and haptic cues to guide patients toward optimal step initiation and foot orientation. Visual cues, projected via a wearable laser, indicate the ideal foot placement trajectory, while vibratory feedback enhances proprioceptive awareness of foot movement, particularly aiding dorsiflexion. The device supports both indoor and treadmill-based rehabilitation, offering flexibility for clinical deployment and home-based therapy. |
| 14:45 | Generative Diffusion Models for Test Pattern Synthesis in VLSI Enhancing Fault Coverage and Silicon Lifecycle Management ABSTRACT. The challenges of making semiconductor technologies work reliably, in terms of yield and fault tolerance, have escalated dramatically with the rapid scale-down into the nanoscale regime, and these challenges are even more acute in very-large-scale integration (VLSI) systems. Traditional Automatic Test Pattern Generation (ATPG) tools, well known as the backbone of digital verification, increasingly fail to keep up with the stochastic and dynamic fault behavior of advanced nodes. These techniques tend to overfit to a priori fault models, such as stuck-at or transition fault models, and are therefore inflexible toward newer defect patterns. Furthermore, their reliance on exhaustive backtracking and deterministic search injects excessive computational overhead and redundant test vectors, weakening efficiency in silicon lifecycle management (SLM). Recent developments in generative artificial intelligence (GenAI) offer the prospect of probabilistic models that learn complex patterns of defect incidence. Among these, diffusion models are notable for the variety and high quality of the data samples they produce by iteratively removing noise, making them effective at preserving distributional diversity while alleviating mode collapse, one of the notable weaknesses of generative adversarial networks (GANs). In this paper, we develop a Generative Diffusion Model (GDM) framework oriented toward automatic test pattern synthesis in nanoscale VLSI systems. Large-scale defect and failure trace data, combined with benchmark circuit data, are used to train fault-aware representations that generalize beyond classical fault models. Incorporating this approach into an SLM pipeline enables the model to be adaptively retrained on incremental defect data, so that its relevance is maintained throughout the device life cycle. Experimental results on the ITC-99 benchmark circuits show a 22% enhancement in fault coverage over traditional ATPG and GAN-based synthesis, a 40% decrease in run time, and statistically diverse test vectors without redundant additions. A detailed case study further demonstrates the scalability of the approach to medium-to-large circuits, showing greater diversity of fault activation and increased robustness. These findings point to a paradigm shift in electronic test strategy, with diffusion-driven synthesis standing as a viable and adaptive solution for the management of reliable microelectronic devices. |
| 15:00 | Overvoltage Elimination of Pull-Down Transistors in High Supply Voltage Output Drivers to Ensure Signal Integrity PRESENTER: Roman Ivanyan ABSTRACT. SerDes blocks implemented in modern System-on-Chip (SoC) platforms face challenges because transistor operating voltages do not scale proportionally with process geometry, increasing device susceptibility to overvoltage stress and accelerating aging. The transmitter output driver is a key block that must simultaneously achieve low power and high speed while delivering sufficient output swing to ensure signal integrity. Increasing the output-stage supply voltage can improve output swing but causes overvoltage on thin-oxide pull-down NMOS devices, thereby worsening aging-related degradation. This paper proposes a method to maintain large output swing while eliminating overvoltage-induced aging in the pull-down branch by adapting the predriver level conversion scheme; the approach retains signal integrity with only modest area and power overhead. |
| 15:15 | Net Prioritization for Routing Using Machine Learning Models ABSTRACT. The rapid increase in integrated circuit (IC) component density has led to a significant rise in design complexity. To address this challenge within constrained timelines, various methodologies have been developed, among which the digital design flow is prominent. During the physical implementation phase, routing is a critical stage wherein interconnections between circuit elements are established. Routing is inherently resource-limited, and suboptimal routing, particularly for critical nets, can severely degrade IC performance. To mitigate these issues, net prioritization algorithms have been introduced. Conventional net prioritization methods are effective in addressing prioritization challenges; however, they often suffer from long runtimes, adversely impacting time-to-market. In this article, a machine learning-based approach to net prioritization is presented. The proposed algorithm achieves a runtime reduction of ~14% and a decrease in design rule violations of ~30%. Nevertheless, compared with classical approaches, some crosstalk issues persist in the resulting designs. |
| 15:30 | Classification of Histological Lung Images Using Texture and Morphometric Feature Fusion ABSTRACT. The accurate classification of histological lung images plays a crucial role in early diagnosis and treatment planning for pulmonary diseases, particularly lung cancer. Manual analysis of such images requires expert knowledge and is often time-consuming. This study proposes a hybrid method that combines Histogram of Oriented Gradients (HOG) with morphometric features – such as cell area and perimeter – to enhance the performance of image classification algorithms. Multiple machine learning classifiers, including Support Vector Machine (SVM), k-Nearest Neighbors (kNN), Decision Tree (DT), Random Forest (RF), and Naive Bayes (NB), were evaluated on a dataset of annotated lung histological images. Experimental results demonstrate that the integration of texture and geometric features significantly improves classification accuracy. The Random Forest classifier achieved the highest accuracy of 81.90% when using the fused feature set, compared to 73.24% using only texture features. These findings confirm that feature fusion enables a more comprehensive representation of histological patterns and supports the development of robust diagnostic systems in digital pathology. |
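The fusion described in this abstract (concatenating a texture descriptor with cell-level morphometric measures into one feature vector) can be sketched in miniature. The functions below are illustrative simplifications: the orientation histogram is a crude stand-in for a full HOG pipeline (no cells or block normalization), and the perimeter is a simple boundary-pixel count:

```python
import math

def orientation_histogram(img, bins=8):
    """HOG-like texture descriptor: normalized histogram of gradient
    orientations, magnitude-weighted, over the interior pixels."""
    h = [0.0] * bins
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            gx = img[y][x + 1] - img[y][x - 1]
            gy = img[y + 1][x] - img[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.atan2(gy, gx) % math.pi        # unsigned orientation
            h[min(int(ang / math.pi * bins), bins - 1)] += mag
    s = sum(h) or 1.0
    return [v / s for v in h]

def morphometric_features(mask):
    """Cell area (pixel count) and perimeter (boundary-pixel count)
    from a binary segmentation mask."""
    area = sum(v for row in mask for v in row)
    perim = 0
    for y in range(len(mask)):
        for x in range(len(mask[0])):
            if mask[y][x]:
                nb = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
                if any(not (0 <= j < len(mask) and 0 <= i < len(mask[0])
                            and mask[j][i]) for j, i in nb):
                    perim += 1
    return [float(area), float(perim)]

def fused_features(img, mask):
    """Concatenate texture and morphometric features into one vector,
    ready to feed a classifier such as Random Forest."""
    return orientation_histogram(img) + morphometric_features(mask)
```

The fused vector is what a classifier such as Random Forest would consume in place of texture features alone.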
| 15:45 | Phase Estimation Algorithm for Interference-Protected GNSS-Based Direction Finder ABSTRACT. A signal phase estimation algorithm is considered. It is intended for use in direction finders based on the processing of Global Navigation Satellite System (GNSS) signals. The algorithm processes the output signals of two Adaptive Antenna Arrays (AAAs), which ensure direction finder operation even in the presence of interference. Details of the phase estimation algorithm are presented, along with the AAA architecture and the adaptive algorithm for calculating the AAA weights. The adaptive algorithm is based on the Linearly Constrained Recursive Least Squares approach: it uses the Matrix Inversion Lemma to invert the AAA input signal correlation matrix, from which the Kalman gain vector and, in turn, the AAA weights are calculated. Simulation results demonstrate 0.5°…4° phase estimation accuracy when 3-by-3 AAAs are used, the Signal-to-Noise Ratio is –20 dB at the outputs of the AAA channel receivers, and the AAAs operate in the presence of 0…8 spatially distributed interfering signals with an Interference-to-Signal Ratio of 90 dB each. The proposed AAA and phase estimation algorithm are suggested for use in modern GNSS direction finders. |
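The core recursion named in this abstract (Matrix Inversion Lemma update of the inverse correlation matrix, producing the Kalman gain that updates the weights) can be sketched for the real-valued, unconstrained case. This is only the standard RLS kernel; the paper's algorithm additionally applies linear constraints and operates on complex antenna signals:

```python
def rls_update(P, w, x, d, lam=0.99):
    """One Recursive Least Squares step.
    P : current inverse of the (forgetting-factor-weighted) input correlation matrix
    w : current weight vector, x : input vector, d : desired response
    The Matrix Inversion Lemma lets P be updated directly, avoiding re-inversion."""
    n = len(x)
    Px = [sum(P[i][j] * x[j] for j in range(n)) for i in range(n)]
    denom = lam + sum(x[i] * Px[i] for i in range(n))
    k = [v / denom for v in Px]                        # Kalman gain vector
    e = d - sum(w[i] * x[i] for i in range(n))         # a priori error
    w = [w[i] + k[i] * e for i in range(n)]            # weight update
    # Matrix Inversion Lemma update: P <- (P - k * x^T P) / lam
    P = [[(P[i][j] - k[i] * Px[j]) / lam for j in range(n)] for i in range(n)]
    return P, w
```

Each step costs O(n²) instead of the O(n³) a direct matrix inversion would require, which is the point of using the lemma in a real-time array processor.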
| 16:00 | Implementation and Evaluation of Quantum-Resistant Hybrid Cryptosystems ABSTRACT. With the development of quantum computing, classical encryption systems are at risk. Cryptographers are working to create enhanced algorithms and alternatives that are secure against quantum attacks. This is crucial for securing sensitive data, communications, and infrastructure in a post-quantum world. Among hybrid encryption schemes, Kyber+AES and ECC+AES are very popular. These systems combine public-key algorithms with symmetric encryption to provide both high security and performance. This paper compares these two hybrid systems on the following key performance indicators: encryption time, decryption time, and encrypted file size across different file sizes. For this research, a program implementation was developed in Python using suitable libraries for these algorithms. Experimental results show that ECC+AES is faster in classical settings. However, if the system must remain secure in a quantum future, Kyber+AES might be the better choice. |
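Both hybrids compared in this abstract follow the same pattern: a key-encapsulation step (Kyber or ECDH) establishes a shared secret, which then keys a symmetric cipher for the bulk data. The sketch below shows only that flow; the hash-based "KEM" and XOR "cipher" are toy stand-ins (real code would use an ML-KEM/Kyber binding such as liboqs and AES-GCM from the `cryptography` library):

```python
import hashlib, secrets

def kem_keygen():
    """Toy KEM key pair: the public key is just a hash of the secret key."""
    sk = secrets.token_bytes(32)
    pk = hashlib.sha3_256(sk).digest()
    return pk, sk

def kem_encapsulate(pk):
    """Sender side: produce a 'ciphertext' and the shared secret it encodes."""
    r = secrets.token_bytes(32)
    ss = hashlib.sha3_256(pk + r).digest()
    return r, ss                       # toy scheme: the ciphertext is just r

def kem_decapsulate(sk, ct):
    """Receiver side: recover the same shared secret from the ciphertext."""
    pk = hashlib.sha3_256(sk).digest()
    return hashlib.sha3_256(pk + ct).digest()

def sym_encrypt(key, data):
    """XOR-stream stand-in for AES; identical operation decrypts."""
    ks = hashlib.shake_256(key).digest(len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

# Hybrid flow: KEM establishes the symmetric key, which encrypts the file.
pk, sk = kem_keygen()
ct, ss_enc = kem_encapsulate(pk)
blob = sym_encrypt(ss_enc, b"sensitive file contents")
ss_dec = kem_decapsulate(sk, ct)
assert sym_encrypt(ss_dec, blob) == b"sensitive file contents"
```

Swapping Kyber for ECDH changes only the three `kem_*` functions, which is why the paper can benchmark the two hybrids on identical symmetric machinery.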
| 16:15 | Advanced Techniques For Gain Regulation In Ultra High-Speed Variable Gain Amplifier Design ABSTRACT. Gain control in modern Variable Gain Amplifier (VGA) designs is typically achieved through load resistor-based or current Digital to Analog Converter (DAC)-based techniques, each offering distinct advantages and trade-offs. While resistor-based methods provide simplicity, they often degrade bandwidth. Current DAC-based approaches offer finer control and faster switching but suffer from linearity issues at low current levels and require significant silicon area. These limitations pose challenges in advanced Serializer/Deserializer (SERDES) systems where area efficiency and high-speed performance are critical. To address these constraints, a new gain control technique is proposed that utilizes a controllable resistor circuit placed between the differential output nodes of the VGA. This output-stage configuration minimizes interference with the signal path and biasing circuitry, enabling effective gain reduction without compromising linearity. The design employs NMOS transistors as programmable resistive elements, controlled via a 3-bit thermometric scheme, offering discrete gain steps with minimal area and power overhead. Although the method supports only unidirectional gain control, it remains highly suitable for high-speed SERDES applications. Initial gain can be set using conventional techniques, with this approach providing fine-tuning capabilities. Despite a slight increase in output capacitance, the overall impact on performance is negligible, making this solution a compact, power-efficient alternative for advanced Analog Front-End (AFE) designs. |
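A first-order model of the scheme described above: thermometer-coded control bits enable NMOS legs in parallel across the differential outputs, and the resulting shunt resistance divides the output swing, giving monotonic, unidirectional gain steps. The resistance values below are hypothetical placeholders, not figures from the paper:

```python
import math

def thermometer_code(level: int, bits: int = 3) -> list[int]:
    """Thermometer encoding: level k enables the k lowest control bits,
    which guarantees monotonic steps as the code increases."""
    assert 0 <= level <= bits
    return [1] * level + [0] * (bits - level)

def shunt_resistance(code: list[int], r_leg: float = 6000.0) -> float:
    """Parallel combination of the enabled NMOS legs bridging the
    differential outputs (r_leg is an assumed per-device on-resistance)."""
    n_on = sum(code)
    return r_leg / n_on if n_on else math.inf

def gain_db(r_load: float, r_shunt: float) -> float:
    """First-order gain change: the shunt divides the swing developed
    across the nominal differential load."""
    eff = r_load if math.isinf(r_shunt) else (r_load * r_shunt) / (r_load + r_shunt)
    return 20.0 * math.log10(eff / r_load)
```

With all legs off the gain is unchanged (0 dB step), and each additional enabled leg only lowers the gain, matching the abstract's note that the control is unidirectional.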
| 16:30 | A Machine Learning Method to Determine the Optimal Number of Stages for a Ring Oscillator ABSTRACT. An effective machine learning (ML)-based method for automated selection of the number of stages of the ring oscillator (RO), one of the most important and widespread blocks in digital integrated circuits, is presented. In addition to automatically determining the number of stages, it provides the best tradeoff between the main parameters (power, area, and performance), thanks to its ability to take the peculiarities of the technological process into account. The ML method reduced the duration of the RO stage calculation by approximately 4-5 times, at the expense of a 3.57% loss of accuracy. |
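For context, the classical first-order relation the ML model replaces is f = 1 / (2·N·t_pd) for an N-stage (odd N) inverter ring with per-stage delay t_pd. The selection rule and delay value below are illustrative assumptions, not the paper's method, which additionally weighs power, area, and process peculiarities:

```python
def ro_frequency(n_stages: int, t_pd: float) -> float:
    """First-order ring oscillator frequency: f = 1 / (2 * N * t_pd)."""
    return 1.0 / (2.0 * n_stages * t_pd)

def min_stages_for_max_freq(f_max: float, t_pd: float) -> int:
    """Smallest odd stage count (>= 3, so the ring oscillates) whose
    frequency does not exceed the target maximum."""
    n = 3
    while ro_frequency(n, t_pd) > f_max:
        n += 2
    return n
```

Sweeping such analytic estimates across process corners is the slow step that the ML model is reported to shorten by roughly 4-5 times.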
| 16:45 | Optimization Method of Multi-Threshold Digital Standard Libraries ABSTRACT. It is known that power consumption in today's ICs has reached unprecedented values. For this reason, low-power design methods are used in integrated circuits, among which a well-known and effective one is the Multiple Threshold Voltage Design Method. Further increases in the method's efficiency are closely related to the number of digital standard cell library types used, which differ in their threshold voltages (Vth). Usually, this number is fixed in libraries, which reduces design efficiency. A method based on ML is proposed that allows the correct selection of the number of library Vth variants for each specific circuit and its design requirements. The method is tested on a Schmitt trigger, improving power consumption by ~25% at the expense of an area loss of ~1.25%. |