Real-time tracking of guidewire robot tips using deep convolutional neural networks on successive localized frames

Ihsan Ullah, Philip Chikontwe, Sang Hyun Park

Research output: Contribution to journal › Article › peer-review

18 Scopus citations

Abstract

Studies are underway to stabilize cardiac surgery using thin micro-guidewires and catheter robots. Controlling such a robot to a desired position and pose requires accurately tracking the robot tip in real time, but tracking and accurately delineating the thin, small tip is challenging. To address this problem, a novel image analysis-based tracking method using deep convolutional neural networks (CNNs) is proposed in this paper. The proposed tracker consists of two parts: (1) a detection network for rough detection of the tip position and (2) a segmentation network for accurate tip delineation near the detected position. To learn a robust real-time tracker, we extract small image patches containing the tip in successive frames and then learn informative spatial and motion features for the segmentation network. During inference, the tip bounding box is first estimated in the initial frame via the detection network; thereafter, tip delineation is performed consecutively through the segmentation network in the following frames. The proposed method enables accurate delineation of the tip in real time and automatically restarts tracking via the detection network when tracking fails in challenging frames. Experimental results show that the proposed method achieves better tracking accuracy than existing methods, at a real-time speed of 19 ms.
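The detect-then-segment loop described in the abstract can be sketched as follows. This is an illustrative toy, not the authors' implementation: the real detection and segmentation networks are CNNs, whereas here `detect_tip` and `segment_tip` are stand-in heuristics, and the patch size is an assumption.

```python
import numpy as np

PATCH = 64  # assumed size of the localized patches


def detect_tip(frame):
    """Stand-in for the detection network: roughly locate the tip.
    Toy heuristic: return the brightest pixel as (row, col)."""
    return np.unravel_index(np.argmax(frame), frame.shape)


def segment_tip(patch_seq):
    """Stand-in for the segmentation network: given patches from
    successive frames, return a binary tip mask for the latest one.
    Toy heuristic: threshold the latest patch at its mean."""
    latest = patch_seq[-1]
    return latest > latest.mean()


def crop(frame, center, size=PATCH):
    """Extract a size x size patch around `center`, clamped to the frame."""
    r, c = center
    h, w = frame.shape
    r0 = min(max(r - size // 2, 0), h - size)
    c0 = min(max(c - size // 2, 0), w - size)
    return frame[r0:r0 + size, c0:c0 + size], (r0, c0)


def track(frames):
    """Two-stage loop: detect in the initial frame, then delineate the
    tip in localized patches of the following frames; fall back to
    re-detection when segmentation yields an empty mask (tracking failure)."""
    center = detect_tip(frames[0])
    prev_patch, _ = crop(frames[0], center)
    tips = []
    for frame in frames[1:]:
        patch, (r0, c0) = crop(frame, center)
        mask = segment_tip([prev_patch, patch])
        if not mask.any():                    # tracking failed: restart
            center = detect_tip(frame)
            patch, (r0, c0) = crop(frame, center)
            mask = segment_tip([patch])
        rows, cols = np.nonzero(mask)
        center = (r0 + int(rows.mean()), c0 + int(cols.mean()))
        tips.append(center)
        prev_patch = patch
    return tips
```

Feeding the segmentation step patches from successive frames (here `[prev_patch, patch]`) mirrors the paper's use of motion cues across frames, while the `mask.any()` fallback mirrors its automatic re-detection when tracking fails.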

Original language: English
Article number: 8886572
Pages (from-to): 159743-159753
Number of pages: 11
Journal: IEEE Access
Volume: 7
DOIs
State: Published - 2019

Bibliographical note

Publisher Copyright:
© 2013 IEEE.

Keywords

  • Convolutional neural networks
  • guidewire tracking
  • micro-robot tracking
  • patch-wise segmentation
