Deep Learning-Based Plane Pose Regression Towards Training in Freehand Obstetric Ultrasound
Tags: Deep learning, Fetal ultrasound, Pose regression, Surgical training
Abstract:
In obstetric ultrasound (US), standard planes (SPs) retain significant clinical relevance, but their acquisition requires the ability to mentally build a 3D map of the fetus from 2D US images. Autonomous probe navigation towards SPs remains a challenging task due to the need to interpret variable, complex images and their spatial relationships. Our work focuses on developing a real-time training platform that guides inexperienced sonographers in acquiring proper obstetric US images and that could potentially be deployed on existing US machines. To this aim, we first developed a Unity-based environment for volume reconstruction and the acquisition of synthetic images. Second, we trained a regression CNN for the 6D pose estimation of arbitrarily oriented US planes on phantom data and fine-tuned it on real data. Our regression CNN reliably localises US planes within the fetal brain in phantom data and generalises pose regression to an unseen fetal brain without requiring real-time ground-truth data or a prior 3D volume scan of the patient. Future development will extend the prediction to volumes of the whole fetus and assess the method's potential for vision-based, automatic freehand US navigation when acquiring SPs.
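To make the pose-regression component described in the abstract concrete, the sketch below shows a minimal CNN that maps a single 2D US frame to a 6D plane pose (a 3D translation plus an orientation). This is an illustrative assumption, not the authors' architecture: the ResNet-18 backbone, the two linear heads, and the quaternion parameterisation of rotation are all placeholder choices, since the abstract does not specify these details.

```python
# Minimal sketch (assumed, not the paper's implementation): a CNN regressing
# the 6D pose of a US plane from a single-channel 2D US image.
import torch
import torch.nn as nn
import torchvision.models as models


class PlanePoseRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        # Illustrative backbone choice; first conv adapted to 1-channel US input.
        self.backbone = models.resnet18(weights=None)
        self.backbone.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2,
                                        padding=3, bias=False)
        self.backbone.fc = nn.Identity()            # expose 512-d features
        self.translation_head = nn.Linear(512, 3)   # plane position (x, y, z)
        self.rotation_head = nn.Linear(512, 4)      # plane orientation as a quaternion

    def forward(self, x):
        features = self.backbone(x)
        t = self.translation_head(features)
        q = self.rotation_head(features)
        q = q / q.norm(dim=1, keepdim=True)         # normalise to a unit quaternion
        return t, q


if __name__ == "__main__":
    model = PlanePoseRegressor()
    dummy_us_frame = torch.randn(1, 1, 224, 224)    # placeholder for a 2D US image
    translation, quaternion = model(dummy_us_frame)
    print(translation.shape, quaternion.shape)       # (1, 3) and (1, 4)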