
Q-Learning for Outbound Container Stacking at Container Terminals

EasyChair Preprint no. 13943

6 pages · Date: July 12, 2024

Abstract

The efficient stacking of outbound containers is a significant challenge in container terminal operations: minimizing the anticipated need for rehandling directly affects yard productivity and overall terminal efficiency. To address this challenge, we introduce a reinforcement learning approach. Our method employs Q-learning, incorporating Monte Carlo techniques to identify optimal storage locations by maximizing reward values, and derives effective storage placement strategies through extensive training iterations. In numerical experiments using real-world container terminal data, we compare our model with existing algorithms. The results highlight the robustness of our approach under uncertain operational conditions, its ability to support real-time decision-making, and its effectiveness in minimizing rehandling requirements.
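
The abstract only outlines the approach at a high level. As a rough illustration of the kind of tabular Q-learning loop it describes, the following Python sketch assigns each arriving outbound container to a stack in a toy yard bay. The state encoding, the reward that penalizes likely rehandles, the yard dimensions, and the hyperparameters are all illustrative assumptions rather than the authors' formulation, and the Monte Carlo component mentioned in the abstract is replaced here by a plain one-step Q-learning update.

# Illustrative tabular Q-learning for choosing a stacking slot.
# State encoding, reward shape, yard size, and hyperparameters are
# assumptions for this sketch, not the preprint's actual MDP design.
import random
from collections import defaultdict

N_STACKS, MAX_TIER = 6, 4          # assumed toy yard bay: 6 stacks, 4 tiers
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

Q = defaultdict(float)             # Q[(state, action)] -> estimated value

def legal_actions(heights):
    """Stacks that still have room for one more container."""
    return [s for s, h in enumerate(heights) if h < MAX_TIER]

def choose_action(state, heights):
    """Epsilon-greedy selection over the legal stacks."""
    actions = legal_actions(heights)
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def reward(action, container_priority, top_priorities):
    """Assumed reward: penalize stacking a later-loading container on top of
    an earlier-loading one (an expected rehandle); reward clean placements."""
    top = top_priorities[action]
    if top is not None and container_priority > top:
        return -1.0                # likely rehandle later
    return 1.0

def train(episodes, container_stream):
    for _ in range(episodes):
        heights = [0] * N_STACKS
        top_priorities = [None] * N_STACKS
        for prio in container_stream():
            state = (tuple(heights), prio)
            a = choose_action(state, heights)
            r = reward(a, prio, top_priorities)
            heights[a] += 1
            top_priorities[a] = prio
            # Simplification: next state reuses the current priority, since
            # the next container is not known in this toy stream.
            next_state = (tuple(heights), prio)
            best_next = max((Q[(next_state, na)] for na in legal_actions(heights)),
                            default=0.0)
            # Standard one-step Q-learning update.
            Q[(state, a)] += ALPHA * (r + GAMMA * best_next - Q[(state, a)])
            if not legal_actions(heights):
                break              # bay is full, end the episode

# Example run: random loading priorities (lower = loaded earlier).
train(2000, lambda: (random.randint(1, 5) for _ in range(N_STACKS * MAX_TIER)))

In this toy setup the learned policy tends to keep earlier-loading containers on top of later-loading ones, which is the rehandle-avoidance behavior the paper targets; the actual state features, reward values, and return estimation are described in the preprint itself.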

Keyphrases: Container stacking problem, container terminal, Q-learning, yard management

BibTeX entry
@booklet{EasyChair:13943,
  author = {Aaron Lim and Seokchan Lee and Jeongyoon Hong and Younghoo Noh and Sung Won Cho and Wonhee Lee},
  title = {Q-Learning for Outbound Container Stacking at Container Terminals},
  howpublished = {EasyChair Preprint no. 13943},
  publisher = {EasyChair},
  year = {2024}
}