Title: Fusion of Multi-Modality Volumetric Medical Imagery
1. Fusion of Multi-Modality Volumetric Medical Imagery
- Mario Aguilar and Joshua R. New
- Knowledge Systems Laboratory
- MCIS Department
- Jacksonville State University
- Jacksonville, AL 36265
2. Outline
- What is the neurophysiological motivation of the fusion architecture?
- How is fusion extended to a three-dimensional operator?
- How is this operator applied to volumetric data sets?
- What are the results?
3. Neurophysiologically Motivated Architecture
4. Fusion Architecture
5. 2D Image Fusion Example
6. 2D Image Fusion Animation
7. Extensions to 3D
Shunting neural network equation (2D fusion):

  dx_ij/dt = -A·x_ij + (B - x_ij)·[G_c * I^C]_ij - (D + x_ij)·[G_s * I^S]_ij

where
  A   = decay rate
  B   = maximum activation level (set to 1)
  D   = minimum activation level (set to 1)
  I^C = excitatory (center) input
  I^S = lateral inhibitory (surround) input
  G_c, G_s = 2D Gaussian center and surround kernels ("*" denotes convolution)

3D fusion uses the same equation over voxels x_ijk, with G_c and G_s replaced by 3D Gaussian kernels so the center and surround extend across adjacent slices.
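The operator above can be sketched in code by solving the shunting equation at equilibrium (dx/dt = 0), which gives x = (B·C - D·S)/(A + C + S) with C and S the center and surround convolutions. This is a minimal sketch, not the authors' implementation: the Gaussian widths and parameter values are illustrative assumptions, and `scipy.ndimage.gaussian_filter` is used because it applies an N-dimensional Gaussian, so the same function realizes the 2D operator on a slice and the 3D operator on a volume.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def shunt(image, A=1.0, B=1.0, D=1.0, sigma_c=1.0, sigma_s=4.0):
    """Steady-state shunting (center-surround) operator.

    Solves dx/dt = -A*x + (B - x)*C - (D + x)*S for dx/dt = 0,
    where C and S are Gaussian center and surround convolutions.
    A 2D input gives the 2D operator; a 3D volume gives the 3D
    operator, since gaussian_filter is N-dimensional.
    sigma_c and sigma_s are illustrative kernel widths.
    """
    x = image.astype(float)
    C = gaussian_filter(x, sigma_c)  # excitatory center input I^C * G_c
    S = gaussian_filter(x, sigma_s)  # inhibitory surround input I^S * G_s
    # Equilibrium solution: x = (B*C - D*S) / (A + C + S)
    return (B * C - D * S) / (A + C + S)
```

With B = D = 1, a uniform region drives the output toward zero while intensity discontinuities (within a slice, or across slices in 3D) are enhanced, which is the contrast-enhancement behavior the slides describe.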
8. 3D Fusion Architecture
[Block diagram: input bands -> noise cleaning / registration (if needed) -> contrast enhancement -> between-band fusion and decorrelation -> R, G, B output channels]
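The pipeline in the block diagram can be sketched as a composition of the stages it names. This is a hedged illustration, not the authors' code: the median filter for noise cleaning, the rescaling between stages, the opponent-band pairing, and the channel assignment are all assumptions chosen to make the sketch runnable; registration is assumed to have been done beforehand.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

def shunt_opponent(center, surround, A=1.0, B=1.0, D=1.0,
                   sigma_c=1.0, sigma_s=4.0):
    # Steady-state center-surround shunting between two bands.
    C = gaussian_filter(center, sigma_c)
    S = gaussian_filter(surround, sigma_s)
    return (B * C - D * S) / (A + C + S)

def rescale(x):
    # Map to [0, 1] so each stage feeds the next with positive input.
    return (x - x.min()) / (x.max() - x.min() + 1e-9)

def fuse_to_rgb(mri, spect):
    # 1) Noise cleaning (registration assumed already done).
    mri = median_filter(mri.astype(float), size=3)
    spect = median_filter(spect.astype(float), size=3)
    # 2) Within-band contrast enhancement (band against itself).
    mri_e = rescale(shunt_opponent(mri, mri))
    spect_e = rescale(shunt_opponent(spect, spect))
    # 3) Between-band fusion and decorrelation via opponent pairings.
    r = rescale(shunt_opponent(spect_e, mri_e))
    g = rescale(shunt_opponent(mri_e, spect_e))
    b = spect_e  # channel assignment here is illustrative only
    return np.stack([r, g, b], axis=-1)
```

Passing 3D volumes through `fuse_to_rgb` exercises the 3D operator at every stage, since both `median_filter` and `gaussian_filter` are N-dimensional.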
9. 3D Shunt Results
[Figure panels: Original, 2D Shunt, 3D Shunt]
10. 3D Image Fusion Example
11. 3D Fusion Results
- 3D Explorer views
- 3D vs. 2D shunt
12. Conclusions
- The 3D shunting operator is a natural extension of the 2D operator.
- Applying the 3D shunting operator provides better definition of image details in volumetric data sets.
- The 3D fusion extensions were developed in the context of Med-LIFE, a visualization and pattern-recognition tool.
13. Abstract
Ongoing efforts at our laboratory have targeted
the development of techniques for fusing medical
imagery of various modalities (e.g., MRI, CT, PET,
SPECT) into single image products. Past
results have demonstrated the potential for user
performance improvements and workload reduction.
While these are positive results, a need exists
to address the three-dimensional nature of most
medical image data sets. In particular, image
fusion of three-dimensional imagery (e.g. MRI
slices) must account for information content not
only within the given slice but also across
adjacent slices. In this paper, we describe
extensions made to our 2D image fusion system
that utilize 3D convolution kernels to determine
locally relevant fusion parameters.
Representative examples are presented for fusion
of MRI and SPECT imagery. We also present these
examples in the context of a GUI platform under
development aimed at improving user-computer
interaction for exploration and mining of medical
data.
14. Neurophysiologically Motivated Architecture
Retinal Circuitry
Fusion Architecture
The system is based on the fusion of color
wavelengths in human and primate retinal circuits.
The neural interactions in the retina can be
modeled with a shunting neural network, governed
by the equation on the following slide.
15. Fusion System Extension
Shunting neural network equation (2D and 3D fusion):

  dx_ij/dt = -A·x_ij + (B - x_ij)·[G_c * I^C]_ij - (D + x_ij)·[G_s * I^S]_ij

where
  A   = decay rate
  B   = maximum activation level (set to 1)
  D   = minimum activation level (set to 1)
  I^C = excitatory (center) input
  I^S = lateral inhibitory (surround) input
  G_c, G_s = Gaussian center and surround kernels ("*" denotes convolution):
  2D kernels for 2D fusion; 3D kernels, spanning adjacent slices, for 3D fusion.
16. Results
[Figure panels: Original, 2D Shunt, 3D Shunt]
17. Acknowledgements
- This work was supported by a Faculty Research
Grant awarded to the first author by the faculty
research committee and Jacksonville State
University. Opinions, interpretations, and
conclusions are those of the authors and not
necessarily endorsed by the committee or
Jacksonville State University.
For additional information, please visit
http://ksl.jsu.edu.