Title: Automatic Detection and Segmentation of Robot-Assisted Surgical Motions
1. Automatic Detection and Segmentation of Robot-Assisted Surgical Motions
Presented by Henry C. Lin
Henry C. Lin, Dr. Izhak Shafran, Todd E. Murphy, Dr. David D. Yuh, Dr. Allison M. Okamura, Dr. Gregory D. Hager
2. Authors
- Henry Lin, PhD Student, Computer Science
- Izhak Shafran, Research Scientist, ECE
- Todd Murphy, MS 2004, Mechanical Engineering
- David Yuh, Surgeon, Cardiac Surgery
- Allison Okamura, Assistant Professor, Mechanical Engineering
- Gregory Hager, Professor, Computer Science
3. Motivation
Can we quantitatively and objectively determine which video shows an expert surgeon and which shows an intermediate surgeon?
Can we automatically detect and segment the surgical motions common to both videos?
4. Cartesian Position Plots, Left Manipulator
[Plots: Expert Surgeon, trial 4; Intermediate Surgeon, trial 22]
5. Previous Work
- Darzi et al.: The Imperial College Surgical Assessment Device (ICSAD) quantified motion information by tracking electromagnetic markers on a trainee's hands.
- Rosen et al.: Used force/torque data from laparoscopic trainers to create a hidden Markov model task decomposition specific to each surgeon.
6. Goals
- Train LDA-based statistical models with labeled motion data from an expert surgeon and an intermediate surgeon.
- Accurately parse unlabeled raw motion data into a labeled sequence of surgical gestures, automatically and efficiently.
- Ultimately, create evaluation metrics to benchmark surgical skill.
7. Corpus
- 78 motion variables acquired at 10 Hz (72 of them are used)
- 4-throw suturing task
- 15 expert trials, 12 intermediate trials
- Each trial roughly 60 seconds in length
8. Gesture Vocabulary
5. Move to middle with needle (right hand)
6. Pull suture with left hand
7. Pull suture with right hand
9. System Approach
10. Local Feature Extraction
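Local features can be built by augmenting each sample with its temporal neighbors, as in the "Raw temporal" row of the storage table later in the deck (m = 5 neighbors). The slides do not show the exact windowing, so the symmetric, edge-padded stacking below is an assumption for illustration:

```python
import numpy as np

def stack_temporal_neighbors(X, m=5):
    """Sketch of local feature extraction by temporal stacking.

    X: (n_samples, n_vars) motion data sampled at 10 Hz.
    Returns an (n_samples, n_vars * (2m + 1)) array in which each row is
    the sample concatenated with its m neighbors on each side. The
    sequence is edge-padded so every sample has a full window (an
    assumption; the talk's boundary handling is not shown on the slide).
    """
    Xp = np.pad(X, ((m, m), (0, 0)), mode="edge")
    # Window i holds the sample at temporal offset (i - m) for every row.
    windows = [Xp[i:i + len(X)] for i in range(2 * m + 1)]
    return np.hstack(windows)
```

With 72 variables and m = 5, each sample becomes a 72 × 11 = 792-dimensional local feature vector, which is what makes the LDA reduction that follows worthwhile.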
11. System Approach
12. Linear Discriminant Analysis
The objective of LDA is to perform dimensionality reduction while preserving as much of the class-discriminatory information as possible.
13. Linear Discriminant Analysis
The input data x is projected to a lower-dimensional y = W^T x, where the linear transformation matrix W is estimated by maximizing the Fisher discriminant: the ratio of the distance between the classes to the average variance of each class.
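A minimal NumPy sketch of this Fisher criterion: build within-class and between-class scatter matrices and take the top generalized eigenvectors as the columns of W. This is the standard multi-class LDA construction, not the authors' exact implementation:

```python
import numpy as np

def lda_fit(X, y, n_components):
    """Fisher LDA: find W maximizing between-class vs. within-class
    scatter, i.e. the generalized eigenproblem Sb w = lambda Sw w."""
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    n_features = X.shape[1]
    Sw = np.zeros((n_features, n_features))  # within-class scatter
    Sb = np.zeros((n_features, n_features))  # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        d = (mc - mean_all).reshape(-1, 1)
        Sb += len(Xc) * (d @ d.T)
    # Top eigenvectors of pinv(Sw) @ Sb span the discriminant subspace.
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(eigvals.real)[::-1]
    W = eigvecs[:, order[:n_components]].real
    return W  # project with X @ W
```

With 6 labeled gesture classes, Sb has rank at most 5, so the 3- to 5-dimensional projections reported in the results table are all within the meaningful discriminant subspace.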
14. LDA Reduction (6 Labeled Classes, 3 Dimensions): Expert Surgeon
15. LDA Reduction (6 Labeled Classes, 3 Dimensions): Intermediate Surgeon
16. Storage Savings of LDA
For a 10-minute procedure (6,000 input samples):

Method         Temporal neighbors (m)   Space required (values)
Raw            no                       432,000
Raw temporal   yes (m = 5)              4,752,000
LDA-based      yes (m = 5)              18,000
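The table's numbers follow directly from the corpus parameters (72 variables at 10 Hz, m = 5 neighbors, 3 LDA output dimensions), which this short check makes explicit:

```python
# Storage arithmetic behind the table above (values per 10-minute
# procedure); the per-sample counts come from the slides.
samples = 10 * 60 * 10   # 10 minutes at 10 Hz = 6,000 samples
features = 72            # motion variables used per sample
m = 5                    # temporal neighbors on each side
lda_dims = 3             # LDA output dimensionality

raw = samples * features                        # one vector per sample
raw_temporal = samples * features * (2 * m + 1) # sample + 10 neighbors
lda_based = samples * lda_dims                  # after LDA projection

print(raw, raw_temporal, lda_based)  # 432000 4752000 18000
```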
17. System Approach
18. Results
- A leave-two-out cross-validation paradigm was used: 15 expert trials, 15 rounds, each with a training set of 13 trials and a test set of 2.
- The output for the 2 test trials was compared against the manually labeled data.
- The average across the 15 rounds was used to measure performance.
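The slides specify 15 rounds with 13 training and 2 test trials each, but not how the held-out pairs were chosen; one scheme consistent with those counts rotates a window of two trials, as in this assumed-for-illustration sketch:

```python
def leave_two_out_rounds(n_trials=15):
    """One way to realize 15 rounds of leave-two-out over 15 trials:
    rotate a pair of consecutive trials out as the test set each round.
    (The pairing rule is an assumption; the slides do not state it.)"""
    rounds = []
    for i in range(n_trials):
        test = [i, (i + 1) % n_trials]                         # 2 held out
        train = [t for t in range(n_trials) if t not in test]  # 13 remain
        rounds.append((train, test))
    return rounds
```

Under this rotation every trial is tested exactly twice, and averaging segmentation accuracy over the 15 rounds gives the performance figures in the next slide.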
19. Results
20. Results

n   Labeled classes   LDA output dimensions   % correct
1   6                 3                       91.26
2   6                 4                       91.46
3   6                 5                       91.14
4   5                 3                       91.06
5   5                 4                       91.34
6   5                 3                       92.09
7   5                 4                       91.92
8   4                 3                       91.88
21. Contributions
- An automated and space-efficient method to accurately parse raw motion data into a labeled sequence of surgical motions.
- Linear discriminant analysis is a useful tool for separating surgical motions.
- Results support previous work showing that quantitative differences exist between surgeons of varying skill levels.
22. Future Work
- Currently acquiring synchronized stereo video and API data, which will allow vision-based segmentation methods to complement our statistical methods.
- Apply the approach to a larger set of expert surgeons and to other representative surgical tasks.
- Create performance metrics to be used as benchmarks for surgical skill evaluation.
23. Acknowledgements
- Minimally Invasive Surgical Training Center at the Johns Hopkins Medical School (MISTC-JHU): Dr. Randy Brown, Sue Eller
- Intuitive Surgical Inc.: Chris Hasser, Rajesh Kumar
- National Science Foundation
24. Automatic Detection and Segmentation of Robot-Assisted Surgical Motions
Thank you! Any questions?
Henry C. Lin, Dr. Izhak Shafran, Todd E. Murphy, Dr. David D. Yuh, Dr. Allison M. Okamura, Dr. Gregory D. Hager