A Mixed Reality System for Human Teleoperation in Tele-Ultrasound

Tags: Human Computer Interaction, Mixed Reality, Tele-robotics, Teleoperation, Ultrasound

Abstract:
Many fields, including medicine, maintenance, and manufacturing, are moving toward remote guidance. Current teleguidance methods comprise video conferencing and robotic teleoperation, forcing a choice between the precision and low latency of robotic teleoperation and the flexibility and low cost of video conferencing. We present a new concept of "human teleoperation" that closes the gap between these approaches. In our prototype tele-ultrasound system, an expert remotely teleoperates a person (the novice) wearing a mixed reality headset by controlling a virtual ultrasound probe projected into the novice's scene. The novice follows the position, orientation, and force of the virtual device with a real transducer. The pose, force, mixed reality video, ultrasound images, and spatial mesh of the scene are fed back to the expert for visual and haptic feedback. This control framework, in which both the input and the actuation are carried out by people, enables teleguidance that is faster and more precise than verbal guidance, yet more flexible and less expensive than robotic teleoperation. The communication is implemented in a network-agnostic, peer-to-peer architecture that enables intuitive teleoperation over the internet via Wi-Fi, LTE, or 5G. Several preliminary tests demonstrated the system's potential, including a mean teleoperation latency of 0.32 ± 0.05 s and mean tracking errors of 4.4 ± 2.8 mm in position and 5.4 ± 2.8° in orientation. In an initial test, an expert sonographer performed two ultrasound procedures on each of four patients; human teleoperation yielded lower measurement error and a mean completion time of 1:36 ± 0:23 minutes, compared to 4:13 ± 3:58 with verbal teleguidance.