Project Description
Given the current availability of high-resolution three-dimensional medical imaging, surgeons commonly have access to multimodal anatomic data prior to undertaking a surgical procedure. Imaging studies such as computed tomography (CT) and magnetic resonance imaging (MRI) offer accurate information on tissue composition and geometry, and are often used together given their complementary strengths. However, even after structures are identified on imaging, the surgeon must be able to synthesize these data into a conceptual model that predicts what will be encountered intraoperatively. We have developed a virtual surgical environment intended to facilitate this task and to maximize the benefit of the available imaging.
Our goal has been to create an environment that can incorporate routine clinical studies relatively quickly, enabling real-time, interactive preoperative assessment. Our approach thus far has focused on procedures involving the resection of cholesteatomas (skin cysts) from the middle ear and mastoid, collectively known as tympanomastoidectomy. Such procedures involve the removal of portions of the temporal bone to gain access to these cysts, which are commonly associated with chronic ear infections. The ability to experiment with varied surgical approaches may prove beneficial to the outcome of the procedure.
Imaging for a tympanomastoid surgery candidate consists of a clinical CT scan of the temporal bone and two MR images: a T2-weighted FIESTA sequence and a diffusion-weighted PROPELLER sequence. Conventional CT and MR imaging cannot easily identify cholesteatoma within the temporal bone, but diffusion-weighted MR imaging shows potential as the modality of choice for this purpose. These images contain complementary information and are used collectively to create a virtual model of the patient's anatomy.
Anatomic structures of interest were extracted from the CT and MR images using computer-assisted segmentation tools. We developed a multimodal volume rendering method, based on GPU-accelerated ray casting, that is capable of simultaneously displaying the different forms of data supplied to the virtual surgical environment. The renderer combines the CT/MR images, the label volume, and the segmented mesh geometry to produce a unified visualization of the virtual patient's anatomy.
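To make the rendering approach concrete, the sketch below shows the front-to-back compositing loop at the heart of a multimodal ray caster. It is a simplified CPU illustration rather than our renderer itself: the transfer functions, blending weights, and label highlighting are assumptions, and a production system would evaluate this loop per ray on the GPU against co-registered 3D volumes.

```cpp
// Front-to-back compositing sketch for a multimodal ray caster (CPU
// illustration). Transfer functions, blend weights, and the label
// highlight are assumptions chosen for clarity.
#include <algorithm>
#include <cstdint>
#include <vector>

struct RGBA { float r, g, b, a; };

// Hypothetical transfer functions mapping scalar samples to color/opacity.
RGBA ctTransfer(float v) { return {v, v * 0.9f, v * 0.8f, v > 0.3f ? v : 0.0f}; }
RGBA mrTransfer(float v) { return {v * 0.8f, 0.2f, 0.2f, v > 0.5f ? 0.6f : 0.0f}; }

// Composite n pre-sampled values along one ray from CT, MR, and label data.
RGBA castRay(const std::vector<float>& ct, const std::vector<float>& mr,
             const std::vector<std::uint8_t>& labels, int n) {
    RGBA out{0, 0, 0, 0};
    for (int i = 0; i < n && out.a < 0.99f; ++i) {  // early ray termination
        RGBA c = ctTransfer(ct[i]);
        RGBA m = mrTransfer(mr[i]);
        // Blend modalities: MR contributes in proportion to its opacity.
        float w = m.a / std::max(m.a + c.a, 1e-6f);
        RGBA s{c.r + w * (m.r - c.r), c.g + w * (m.g - c.g),
               c.b + w * (m.b - c.b), std::max(c.a, m.a)};
        if (labels[i] == 1) s.r = std::min(1.0f, s.r + 0.3f);  // tint segmented structure
        // Standard front-to-back "over" operator.
        float t = (1.0f - out.a) * s.a;
        out.r += t * s.r; out.g += t * s.g; out.b += t * s.b; out.a += t;
    }
    return out;
}

int main() {
    std::vector<float> ct(64, 0.4f), mr(64, 0.6f);
    std::vector<std::uint8_t> labels(64, 0);
    labels[10] = 1;                        // one labeled sample along the ray
    RGBA pixel = castRay(ct, mr, labels, 64);
    (void)pixel;                           // in practice, written to the framebuffer
}
```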
We developed a new method for haptic rendering of volume geometry so that the surgeon can touch and manipulate the virtual patient’s anatomy. The approach combines advantages of both point-sampling and proxy-based rendering techniques.
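As an illustration of the proxy idea, the sketch below constrains a proxy point to the surface of a point-sampled scalar field and derives the feedback force from a spring between the proxy and the device position. The density field, threshold, step size, and stiffness here are all illustrative assumptions; our actual method combines the two techniques in a more refined way than this minimal version.

```cpp
// Proxy-based haptic step over a point-sampled scalar field (sketch).
// The placeholder density field, threshold, step size, and stiffness are
// assumptions; a real system would sample the segmented CT volume with
// trilinear interpolation inside a high-rate servo loop.
#include <algorithm>
#include <cmath>

struct Vec3 {
    float x, y, z;
    Vec3 operator+(Vec3 o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator-(Vec3 o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
    float norm() const { return std::sqrt(x * x + y * y + z * z); }
};

// Placeholder density field: a solid unit sphere at the origin.
float sampleDensity(Vec3 p) { return p.norm() < 1.0f ? 1.0f : 0.0f; }

// Point sampling: density above a threshold counts as solid material.
bool insideSurface(Vec3 p) { return sampleDensity(p) > 0.5f; }

// One servo tick: walk the proxy toward the device position in small steps,
// stopping at the surface; the proxy-device spring gives the contact force.
Vec3 updateProxy(Vec3 proxy, Vec3 device, float step, float stiffness,
                 Vec3* forceOut) {
    Vec3 dir = device - proxy;
    float dist = dir.norm();
    while (dist > 1e-6f) {
        float s = std::min(step, dist);
        Vec3 next = proxy + dir * (s / dist);
        if (insideSurface(next)) break;   // proxy stays on the free side
        proxy = next;
        dir = device - proxy;
        dist = dir.norm();
    }
    if (forceOut) *forceOut = (proxy - device) * stiffness;
    return proxy;
}

int main() {
    Vec3 proxy{0, 0, 2}, device{0, 0, 0.5f}, force{0, 0, 0};
    proxy = updateProxy(proxy, device, 0.01f, 500.0f, &force);
    // force now pushes the device back toward the proxy on the surface.
}
```

Because the proxy remembers which side of the surface contact began on, this style of rendering avoids the pop-through artifacts that pure point-sampling exhibits on thin structures.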
The virtual surgical environment has modest hardware requirements and allows the quality of the visual rendering to be adjusted interactively, so it can run effectively on a variety of commodity desktop and laptop computers. The software is designed in a cross-platform manner so that the simulation environment can be used on computers running Microsoft Windows, Linux, or Mac OS X.
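One plausible way to realize such interactive quality adjustment is a simple frame-time controller that coarsens or refines the ray caster's sample spacing to stay within a frame budget; the target rate, gain, and bounds below are assumptions for illustration, not our system's actual policy.

```cpp
// Frame-time-driven quality scaling (sketch). The target rate, proportional
// gain, and spacing bounds are assumptions chosen for illustration.
#include <algorithm>

struct QualityController {
    float sampleSpacing = 1.0f;  // ray sample spacing in voxels; larger = faster, coarser
    static constexpr float kTargetSeconds = 1.0f / 30.0f;  // 30 fps budget

    void update(float lastFrameSeconds) {
        // Proportional adjustment: slow frames coarsen sampling, fast frames refine it.
        float ratio = lastFrameSeconds / kTargetSeconds;
        sampleSpacing = std::clamp(sampleSpacing * ratio, 0.25f, 4.0f);
    }
};

int main() {
    QualityController qc;
    qc.update(0.050f);  // a 50 ms frame exceeds the budget, so spacing grows
    qc.update(0.010f);  // a 10 ms frame is under budget, so spacing shrinks
}
```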
The virtual environment replicates salient anatomic detail when compared with video images taken during actual tympanomastoidectomy. The geometry derived from the CT dataset yields a subjectively accurate representation of the bony contours seen during surgery. Similarly, the cholesteatoma volume derived from PROPELLER MR imaging is accurately placed within the bone and presents a realistic representation of what the otologic surgeon will encounter in the patient.
Our system represents a step toward the use of a virtual environment to prepare for tympanomastoid surgery. It enables relatively rapid integration of multimodal imaging datasets, direct volume rendering, and manipulation of preoperative clinical data in a surgically relevant manner. We anticipate that the methods described can be generalized to a variety of surgical procedures.
Related Publications
Chan, S., Li, P., Lee, D. H., Salisbury, K. & Blevins, N. H. A virtual surgical environment for rehearsal of tympanomastoidectomy. Medicine Meets Virtual Reality (2011).
Project Staff
Status
Active since 2002.
Funding Sources
This project was funded in part by NIH Grant 5R01LM010673-02 and in part by the Veterans Administration.