Automatic Detection and Segmentation of Robot-Assisted Surgical Motions - PowerPoint PPT Presentation

About This Presentation
Title:

Automatic Detection and Segmentation of Robot-Assisted Surgical Motions

Description:

Can we automatically detect and segment the surgical motions common in both videos? ... Used force/torque data from laparoscopic trainers to create a hidden Markov ... – PowerPoint PPT presentation

Slides: 25
Provided by: henr6
Learn more at: https://www.cs.jhu.edu

Transcript and Presenter's Notes



1
Automatic Detection and Segmentation of
Robot-Assisted Surgical Motions
  • presented by
  • Henry C. Lin

Henry C. Lin, Dr. Izhak Shafran, Todd E. Murphy,
Dr. David D. Yuh, Dr. Allison M. Okamura,
Dr. Gregory D. Hager
2
Authors
Henry Lin, PhD Student, Computer Science
Izhak Shafran, Research Scientist, ECE
Todd Murphy, MS 2004, Mechanical Engineering
David Yuh, Surgeon, Cardiac Surgery
Allison Okamura, Assistant Professor, Mechanical Engineering
Gregory Hager, Professor, Computer Science
3
Motivation
Can we quantitatively and objectively determine
which video shows an expert surgeon and which an
intermediate surgeon?
Can we automatically detect and segment the
surgical motions common in both videos?
4
Cartesian Position Plots - Left Manipulator
Expert Surgeon - trial 4
Intermediate Surgeon - trial 22
5
Previous Work
  • Darzi, et al.
  • Imperial College Surgical Assessment Device
    (ICSAD) quantified motion information by
    tracking electromagnetic markers on a trainee's
    hands.
  • Rosen, et al.
  • Used force/torque data from laparoscopic
    trainers to create a hidden Markov model task
    decomposition specific to each surgeon.

6
Goals
  • Train LDA-based statistical models with labeled
    motion data of an expert surgeon and an
    intermediate surgeon.
  • Accurately parse unlabeled raw motion data into
    a labeled sequence of surgical gestures,
    automatically and efficiently.
  • Ultimately create evaluation metrics to benchmark
    surgical skill.

7
Corpus
  • 78 motion variables acquired at 10 Hz
  • (72 of them are used)
  • 4-throw suturing task
  • 15 expert trials
  • 12 intermediate trials
  • each trial roughly 60 seconds in length

8
Gesture Vocabulary
5. Move to middle with needle (right hand)
6. Pull suture with left hand
7. Pull suture with right hand
9
System Approach
10
Local Feature Extractions
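The slide's figure is not reproduced in this transcript. As a rough sketch of local feature extraction, each 72-variable sample can be augmented with its temporal neighbors (the storage slide later uses m = 5 neighbors). The function name and the edge-padding scheme below are illustrative assumptions, not the paper's exact method:

```python
import numpy as np

def stack_temporal_neighbors(X, m=5):
    """Augment each 72-variable sample with its m preceding and m
    following samples, giving 72 * (2m + 1) local features per time
    step. Edges are padded by repeating the first/last sample."""
    n, d = X.shape
    padded = np.vstack([np.repeat(X[:1], m, axis=0), X,
                        np.repeat(X[-1:], m, axis=0)])
    # Each shifted slice contributes one neighbor position per sample.
    return np.hstack([padded[i:i + n] for i in range(2 * m + 1)])

X = np.zeros((6000, 72))          # a 10-minute procedure at 10 Hz
F = stack_temporal_neighbors(X)   # m = 5 neighbors on each side
print(F.shape)                    # (6000, 792)
```

With m = 5 this yields 72 x 11 = 792 values per sample, matching the "Raw temporal" row of the later storage-savings slide.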
11
System Approach
12
Linear Discriminant Analysis
The objective of LDA is to perform dimensionality
reduction while preserving as much of the class
discriminatory information as possible.
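The reduction can be sketched in a few lines of NumPy. This is a generic Fisher LDA on toy data standing in for the 72 motion variables and 6 gesture classes, not the paper's implementation:

```python
import numpy as np

def fisher_lda(X, y, out_dim=3):
    """Project X (n_samples x n_features) to out_dim dimensions by
    maximizing between-class scatter relative to within-class scatter."""
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    n_feat = X.shape[1]
    Sw = np.zeros((n_feat, n_feat))  # within-class scatter
    Sb = np.zeros((n_feat, n_feat))  # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        d = (mc - mean_all)[:, None]
        Sb += len(Xc) * (d @ d.T)
    # Solve the generalized eigenproblem Sb w = lambda * Sw w and keep
    # the directions with the largest discriminant ratio.
    evals, evecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(evals.real)[::-1]
    W = evecs[:, order[:out_dim]].real
    return X @ W

# Toy stand-in: 6 gesture classes, 72 motion variables (as in the corpus)
rng = np.random.default_rng(0)
y = np.repeat(np.arange(6), 100)
X = rng.normal(size=(600, 72)) + y[:, None] * 0.5
Z = fisher_lda(X, y, out_dim=3)
print(Z.shape)  # (600, 3)
```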
13
Linear Discriminant Analysis
  • The linear transformation matrix W is estimated
    by maximizing the Fisher discriminant.

Fisher discriminant: the ratio of the distance
between the classes to the average variance within
each class
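The formula itself did not survive the transcript. The standard Fisher criterion matching the slide's description (between-class scatter over within-class scatter) is:

```latex
J(W) = \frac{\left| W^{T} S_B W \right|}{\left| W^{T} S_W W \right|}
```

where S_B is the between-class scatter matrix and S_W the within-class scatter matrix; W is chosen to maximize J(W).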
14
LDA Reduction (6 Labeled Classes, 3 Dimensions):
Expert Surgeon
15
LDA Reduction (6 Labeled Classes, 3 Dimensions):
Intermediate Surgeon
16
Storage Savings of LDA
For a 10-minute procedure (6,000 input samples):

Method        Temporal neighbors   m   Space required (values)
Raw           no                   -   432,000
Raw temporal  yes                  5   4,752,000
LDA-based     yes                  5   18,000
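The table's numbers follow from simple arithmetic, assuming the m = 5 temporal neighbors are taken on each side (11 frames per sample), which reproduces the figures exactly:

```python
samples = 10 * 60 * 10   # 10-minute procedure at 10 Hz = 6,000 samples
raw_vars = 72            # motion variables used
m = 5                    # temporal neighbors on each side
lda_dims = 3             # LDA output dimensions

raw = samples * raw_vars                     # 432,000 values
raw_temporal = samples * raw_vars * (2*m+1)  # 4,752,000 values
lda = samples * lda_dims                     # 18,000 values
print(raw, raw_temporal, lda)  # 432000 4752000 18000
```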
17
System Approach
18
Results
  • A leave-two-out cross-validation paradigm was used.
  • 15 expert trials, 15 rounds

Training set (13 trials)
Test set (2 trials)
  • The output of the 2 test trials was compared
    against the manually labeled data.
  • The average across the 15 rounds was used to
    measure performance.
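The splitting can be sketched as follows. The slides do not state how the 15 held-out pairs were chosen from the 15 trials; pairing each trial with its successor (wrapping around) is one assumption that yields exactly 15 rounds of 13 training and 2 test trials:

```python
def leave_two_out_rounds(n_trials=15):
    """One round per trial: hold out trial i and its successor
    (wrapping around), train on the remaining 13."""
    rounds = []
    for i in range(n_trials):
        test = {i, (i + 1) % n_trials}
        train = [t for t in range(n_trials) if t not in test]
        rounds.append((train, sorted(test)))
    return rounds

rounds = leave_two_out_rounds()
print(len(rounds), len(rounds[0][0]), len(rounds[0][1]))  # 15 13 2
```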

19
Results
20
Results
n   Labeled classes   LDA output dimensions   % correct
1   6                 3                       91.26
2   6                 4                       91.46
3   6                 5                       91.14
4   5                 3                       91.06
5   5                 4                       91.34
6   5                 3                       92.09
7   5                 4                       91.92
8   4                 3                       91.88
21
Contributions
  • An automated and space-efficient method to
    accurately parse raw motion data into a labeled
    sequence of surgical motions.
  • Linear discriminant analysis is a useful tool for
    separating surgical motions.
  • Results support previous findings that quantitative
    differences exist between surgeons of varying
    skill levels.

22
Future Work
  • We are currently acquiring synchronized stereo
    video and API data, which will allow vision-based
    segmentation methods to complement our
    statistical methods.
  • Apply the approach to a larger set of expert
    surgeons and to other representative surgical tasks.
  • Create performance metrics to be used as
    benchmarks for surgical skill evaluation.

23
Acknowledgements
  • Minimally Invasive Surgical Training Center at
    the Johns Hopkins Medical School (MISTC-JHU)
  • - Dr. Randy Brown, Sue Eller
  • Intuitive Surgical Inc.
  • - Chris Hasser, Rajesh Kumar
  • National Science Foundation

24
Automatic Detection and Segmentation of
Robot-Assisted Surgical Motions
Thank you! Any questions?
  • Henry C. Lin, Dr. Izhak Shafran, Todd E. Murphy,
  • Dr. David D. Yuh, Dr. Allison M. Okamura,
  • Dr. Gregory D. Hager