Automated Virtual Coach for Surgical Training

Anand Malpani, Johns Hopkins University

Surgical educators have recommended individualized coaching for the acquisition, retention, and improvement of expertise in technical skills. Such one-on-one coaching is limited to institutions that can afford surgical coaches and is not feasible at national or global scales. We hypothesize that automated methods that model intra-operative video, the surgeon's hand and instrument motion, and sensor data can provide effective and efficient individualized coaching. With the advent of instrumented operating rooms and training laboratories, access to such large-scale intra-operative data has become feasible. Previous methods for automated skill assessment present surgeons with an overall evaluation at the task (global) level, without directed feedback or error analysis. Demonstration, when present at all, takes the form of fixed instructional videos, while deliberate practice is entirely absent from automated training platforms. We believe that an effective coach should: demonstrate expert behavior (how do I do it correctly?), evaluate trainee performance at the task and segment level (how did I do?), critique errors and deficits (where and why was I wrong?), recommend deliberate practice (what do I do to improve?), and monitor skill progress (when do I become proficient?).

In this thesis, we present new methods and solutions towards these coaching interventions in different training settings, viz. virtual reality simulation, bench-top simulation, and the operating room. First, we outline a summarization-based approach for surgical phase modeling using various sources of intra-operative procedural data, such as system events (sensors) and crowdsourced surgical activity context. Second, we develop a new scoring method to evaluate task segments using rankings derived from pairwise comparisons of performances obtained via crowdsourcing. Third, we implement a real-time feedback and teaching framework using virtual reality simulation to present teaching cues and deficit metrics targeted at the critical learning elements of a task. Finally, we integrate the above components (task progress detection, segment-level evaluation, and real-time feedback) into the first end-to-end automated virtual coach for surgical training.
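The second component above derives a ranking of performances from crowdsourced pairwise comparisons. The abstract does not specify the aggregation method used; as a minimal illustration only, a win-fraction aggregation over hypothetical crowd judgments (the function name `rank_from_pairwise` and the toy data are invented for this sketch) could look like:

```python
from collections import defaultdict

def rank_from_pairwise(comparisons):
    """Rank performances by win fraction across pairwise crowd judgments.

    comparisons: list of (winner_id, loser_id) tuples, one per judgment.
    Returns performance ids ordered from highest- to lowest-rated.
    """
    wins = defaultdict(int)      # number of comparisons each performance won
    totals = defaultdict(int)    # number of comparisons each performance appeared in
    for winner, loser in comparisons:
        wins[winner] += 1
        totals[winner] += 1
        totals[loser] += 1
    # Win fraction serves as a simple rating; ties broken by id for determinism.
    return sorted(totals, key=lambda p: (-wins[p] / totals[p], p))

# Toy example: crowd judgments over three recorded task segments A, B, C
judgments = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "B")]
print(rank_from_pairwise(judgments))  # → ['A', 'B', 'C']
```

More principled aggregators (e.g., Bradley-Terry-style models) fit a latent skill score per performance rather than a raw win fraction, which is more robust when each pair is judged only a few times.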

Speaker Biography

Anand Malpani was born in Mumbai, India. He received his B.Tech. in Electrical Engineering from the Indian Institute of Technology (IIT) Bombay in 2010. He undertook a summer research project in 2009 at the Institut de Recherche en Communications et Cybernétique de Nantes under the guidance of Vincent Ricordel (Image and Video-Communication research group), where he developed and compared various tracking methods for echocardiogram sequences. He joined the Ph.D. program in Computer Science at the Johns Hopkins University in 2010 and worked under the Language of Surgery project umbrella. His dissertation, under the guidance of Gregory D. Hager, focused on surgical education and simulation-based training. During this work, he developed data analytics for delivering automated surgical coaching in collaboration with multiple surgical faculty at the Johns Hopkins School of Medicine. He was awarded the Intuitive Surgical Student Fellowship in 2013. He received the Link Foundation's Modeling, Simulation and Training Fellowship in 2015 to advance surgical simulation-based training. He was a summer research intern on the Simulation team developing the da Vinci Skills Simulator at Intuitive Surgical Inc. (Sunnyvale, CA) in 2015.