Automated operative phase identification in peroral endoscopic myotomy
Thomas M Ward1,2; Daniel A Hashimoto1,2; Yutong Ban1,3; Guy Rosman1,3; Ozanan R Meireles1,2.
1Surgical AI and Innovation Laboratory, Boston, MA; 2Department of Surgery, Massachusetts General Hospital, Boston, MA; 3Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA
Background: Artificial intelligence (AI) and computer vision (CV) have revolutionized image analysis. In surgery, CV applications have focused on surgical phase identification in laparoscopic videos. We proposed to apply CV techniques to identify operative phases in an endoscopic procedure, peroral endoscopic myotomy (POEM).
Methods: POEM videos were collected from Massachusetts General Hospital and Showa University Koto Toyosu Hospital. Surgeons labeled each video with the following ground-truth phases: 1) Submucosal injection, 2) Mucosotomy, 3) Submucosal tunnel, 4) Myotomy, and 5) Mucosotomy closure. A deep-learning CV model, a convolutional neural network (CNN) followed by a long short-term memory (LSTM) network, was trained on 30 videos to create POEMNet. POEMNet was then used to identify operative phases in the remaining 20 videos, and its predictions were compared to the surgeon-annotated ground truth.
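The CNN-plus-LSTM design described in the Methods can be sketched as follows. This is a minimal illustrative model, not the POEMNet implementation: the backbone, feature dimensions, and class names (`PhaseNet`, `feat_dim`, `hidden`) are assumptions, and the paper's actual architecture details are not specified in this abstract. The CNN extracts a feature vector per frame; the LSTM integrates those features over time so each frame's phase prediction can draw on temporal context.

```python
# Hypothetical sketch of a CNN+LSTM operative-phase classifier in PyTorch.
# All layer sizes are illustrative placeholders, not POEMNet's real values.
import torch
import torch.nn as nn

class PhaseNet(nn.Module):
    """Per-frame CNN features fed through an LSTM for temporal context."""
    def __init__(self, num_phases=5, feat_dim=64, hidden=128):
        super().__init__()
        # Tiny stand-in CNN; a real system would use a pretrained backbone.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_phases)

    def forward(self, frames):
        # frames: (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)   # (batch, time, hidden)
        return self.head(out)       # per-frame phase logits

# Usage: classify each frame of a short clip into one of the 5 phases.
model = PhaseNet()
clip = torch.randn(1, 8, 3, 64, 64)   # 8 random frames as a stand-in clip
logits = model(clip)                  # shape (1, 8, 5)
phases = logits.argmax(-1)            # predicted phase index per frame
```

The key design point is that the LSTM sees the whole frame sequence, which helps suppress spurious single-frame phase flips that a per-frame CNN alone would produce.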
Results: POEMNet's overall phase identification accuracy was 87.6% (95% CI 87.4% to 87.9%). On a per-phase basis, the model also performed well, with mean unweighted and prevalence-weighted F1 scores of 0.766 and 0.875, respectively. Accuracy was higher for longer phases: 70.6% for phases shorter than five minutes versus 88.3% for longer phases.
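The gap between the unweighted and prevalence-weighted F1 scores reflects how each averages the per-phase F1 values: the unweighted (macro) mean treats every phase equally, while the prevalence-weighted mean counts each phase in proportion to how many frames it occupies, so long, well-recognized phases pull it up. A minimal sketch with toy labels (the label names and counts are invented for illustration, not drawn from the study's data):

```python
# Toy illustration of unweighted (macro) vs. prevalence-weighted mean F1.
from collections import Counter

def f1_scores(y_true, y_pred, labels):
    """Per-class F1 plus macro and prevalence-weighted means."""
    per_class, support = {}, Counter(y_true)
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        denom = 2 * tp + fp + fn
        per_class[c] = 2 * tp / denom if denom else 0.0
    macro = sum(per_class.values()) / len(labels)
    weighted = sum(per_class[c] * support[c] for c in labels) / len(y_true)
    return per_class, macro, weighted

# A short phase dominates errors; a long phase is mostly correct.
y_true = ["injection"] * 2 + ["tunnel"] * 6
y_pred = ["injection", "tunnel", "tunnel", "tunnel",
          "tunnel", "tunnel", "tunnel", "tunnel"]
per_class, macro, weighted = f1_scores(y_true, y_pred,
                                       ["injection", "tunnel"])
# weighted > macro, because the prevalent "tunnel" phase has the higher F1
```

In this toy case the macro mean is about 0.795 while the weighted mean is about 0.859, mirroring the abstract's pattern of a higher prevalence-weighted score.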
Conclusion: A deep-learning-based approach to CV, previously successful in phase identification for laparoscopic video, translates well to endoscopic procedures. With continued refinement, AI could contribute to intraoperative decision-support systems and postoperative risk prediction.