Title: SWD-HTM: A Novel Hierarchical Temporal Memory Model Integrating Optimal Transport and Sparse Autoencoder

Conference: PRICAI 2025

Tags: Deep Learning, Hierarchical Temporal Memory, Time Series Forecasting

Abstract: Hierarchical Temporal Memory (HTM) is a biologically inspired online learning algorithm that emulates neocortical computation for time series modeling. However, its reliance on hand-crafted encoders limits adaptability, and the independent encoding and concatenation of multivariate feature embeddings often cause dimension explosion. To overcome these limitations, we propose a novel HTM architecture that integrates deep representation learning via a Sparse Autoencoder (SAE) with optimal transport theory. The SAE replaces the original hand-crafted encoder and spatial pooler with a data-driven, end-to-end framework, enhancing generalization. The Sliced Wasserstein Distance (SWD) is introduced to align the SAE's hidden-layer activation distribution with the target Sparse Distributed Representation (SDR), ensuring sparsity, similarity, and distributivity simultaneously. This alignment minimizes distributional discrepancy while reducing computational complexity. Extensive experiments demonstrate that the proposed SWD-HTM model significantly improves prediction accuracy, achieving 14.3% and 22.1% gains on short-term and long-term forecasting tasks, respectively, outperforming traditional HTM and state-of-the-art baselines.
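The abstract's central mechanism is using the Sliced Wasserstein Distance to measure the discrepancy between the SAE's hidden activations and a target SDR distribution. The paper's actual loss formulation is not given here, but the SWD itself has a standard Monte Carlo form: project both sample sets onto random directions, sort the 1-D projections, and average the resulting one-dimensional Wasserstein-2 costs. A minimal NumPy sketch (function name, projection count, and seed are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def sliced_wasserstein_distance(x, y, n_projections=50, seed=0):
    """Monte Carlo estimate of the sliced Wasserstein-2 distance between
    two equally sized empirical distributions x, y of shape (n, d).

    This is the generic SWD estimator, not the paper's exact loss term.
    """
    rng = np.random.default_rng(seed)
    d = x.shape[1]
    # Draw random unit directions on the (d-1)-sphere.
    theta = rng.normal(size=(n_projections, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project both sample sets onto each direction: shape (n, n_projections).
    px = x @ theta.T
    py = y @ theta.T
    # In 1-D, optimal transport matches sorted samples to sorted samples.
    px.sort(axis=0)
    py.sort(axis=0)
    # Average squared cost over samples and directions, then take the root.
    return np.sqrt(np.mean((px - py) ** 2))
```

The appeal noted in the abstract — reduced computational complexity — comes from this reduction to 1-D problems: each projection costs only a sort, O(n log n), versus the cubic-scale solvers needed for the full multivariate Wasserstein distance.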
