{"id":205,"date":"2013-04-11T20:00:03","date_gmt":"2013-04-12T03:00:03","guid":{"rendered":"http:\/\/www.stanford.edu\/group\/sailsbury_robotx\/cgi-bin\/salisbury_lab\/?page_id=205"},"modified":"2021-04-28T21:45:34","modified_gmt":"2021-04-29T04:45:34","slug":"surgical-simulation","status":"publish","type":"page","link":"https:\/\/sr.stanford.edu\/?page_id=205","title":{"rendered":"Surgical Simulation"},"content":{"rendered":"<p style=\"text-align: center;\"><a href=\"http:\/\/www.stanford.edu\/group\/sailsbury_robotx\/cgi-bin\/salisbury_lab\/wp-content\/uploads\/2013\/04\/blevins-sim-small.jpg\"><img loading=\"lazy\" class=\" wp-image-1855 aligncenter\" alt=\"blevins-sim-small\" src=\"http:\/\/www.stanford.edu\/group\/sailsbury_robotx\/cgi-bin\/salisbury_lab\/wp-content\/uploads\/2013\/04\/blevins-sim-small.jpg\" width=\"640\" height=\"397\" srcset=\"https:\/\/sr.stanford.edu\/wp-content\/uploads\/2013\/04\/blevins-sim-small.jpg 800w, https:\/\/sr.stanford.edu\/wp-content\/uploads\/2013\/04\/blevins-sim-small-300x186.jpg 300w, https:\/\/sr.stanford.edu\/wp-content\/uploads\/2013\/04\/blevins-sim-small-483x300.jpg 483w\" sizes=\"(max-width: 640px) 100vw, 640px\" \/><\/a><\/p>\n<p>Surgical Simulation focuses on allowing surgeons to practice surgery on a virtual patient while experiencing realistic sights, sounds, and forces as if they were actually in the operating room.<\/p>\n<table>\n<tbody>\n<tr>\n<td width=\"200\"><img loading=\"lazy\" alt=\"\" src=\"http:\/\/www.stanford.edu\/group\/sailsbury_robotx\/images\/0255b1d7526c41b4fd937a0ecc8411c4.jpg\" width=\"200\" height=\"161\" \/><\/td>\n<td style=\"vertical-align: top;\">\n<h2><a title=\"Design and Implementation of a Maxillofacial Surgery Rehearsal Environment with Haptic Interaction for Bone Fragment and Plate Alignment\" href=\"http:\/\/www.stanford.edu\/group\/sailsbury_robotx\/cgi-bin\/salisbury_lab\/?page_id=337\">Design and Implementation of a Maxillofacial Surgery Rehearsal\u00a0Environment with 
Haptic Interaction for Bone Fragment and Plate Alignment<\/a><\/h2>\n<p>We are designing and implementing a haptics-enabled maxillofacial surgery\u00a0rehearsal environment that requires little training and provides a\u00a0direct high-fidelity immersive experience for the operator.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<table cellspacing=\"0\">\n<tbody>\n<tr>\n<td width=\"200\"><img alt=\"\" src=\"http:\/\/www.stanford.edu\/group\/sailsbury_robotx\/cgi-bin\/salisbury_lab\/wp-content\/uploads\/2013\/04\/thumbnail.jpg\" border=\"0\" \/><\/td>\n<td style=\"vertical-align: top;\">\n<h2><a title=\"Deformable Haptic Rendering For Volumetric Medical Image Data\" href=\"http:\/\/www.stanford.edu\/group\/sailsbury_robotx\/cgi-bin\/salisbury_lab\/?page_id=2213\">Deformable Haptic Rendering For Volumetric Medical Image Data<\/a><\/h2>\n<p>Virtual-reality-based surgical simulation is one of the most notable and practical applications of kinesthetic haptic rendering. The prospect of patient-specific simulation using pre-operative medical images drives the need for haptic rendering algorithms that allow direct manipulation of volumetric data. 
This project addresses haptic rendering of deformable tissue within medical images.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<table cellspacing=\"0\">\n<tbody>\n<tr>\n<td width=\"200\"><img alt=\"\" src=\"http:\/\/www.stanford.edu\/group\/sailsbury_robotx\/images\/fb3c676007d41f9b0547757992583902.jpg\" border=\"0\" \/><\/td>\n<td style=\"vertical-align: top;\">\n<h2><a title=\"Real-Time Finite Element Analysis (FEA) in Haptic Surgical Simulation\" href=\"http:\/\/www.stanford.edu\/group\/sailsbury_robotx\/cgi-bin\/salisbury_lab\/?page_id=343\">Real-Time Finite Element Analysis (FEA) in Haptic Surgical Simulation<\/a><\/h2>\n<p>We use nonlinear, real-time FEA to simulate maxillomandibular advancement surgery, a clinical procedure used to reduce the severity of obstructive sleep apnea (OSA) and mitigate the risks of the co-morbidities associated with severe OSA.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<table cellspacing=\"0\">\n<tbody>\n<tr>\n<td width=\"200\"><img alt=\"\" src=\"http:\/\/www.stanford.edu\/group\/sailsbury_robotx\/images\/5ed4cf0d32810a57272ce7a62e0f84c0.jpg\" border=\"0\" \/><\/td>\n<td style=\"vertical-align: top;\">\n<h2><a title=\"Six-DoF Haptic Rendering of Volumetric Data\" href=\"http:\/\/www.stanford.edu\/group\/sailsbury_robotx\/cgi-bin\/salisbury_lab\/?page_id=353\">Six-DoF Haptic Rendering of Volumetric Data<\/a><\/h2>\n<p>We developed a method for 6-DOF haptic rendering of isosurface geometry embedded within sampled volume data. The algorithm uses a quasi-static formulation of motion constrained by multiple contacts to simulate rigid-body interaction between a haptically controlled virtual instrument and volumetric isosurfaces. 
Unmodified volume data, such as computed tomography or magnetic resonance images, can be rendered directly with this approach, making it particularly suitable for applications in medical or surgical simulation.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<table cellspacing=\"0\">\n<tbody>\n<tr>\n<td width=\"200\"><img alt=\"\" src=\"http:\/\/www.stanford.edu\/group\/sailsbury_robotx\/images\/eccc329fc424077eb361bb59b412ac61.jpg\" border=\"0\" \/><\/td>\n<td style=\"vertical-align: top;\">\n<h2><a title=\"Surgical Rehearsal of Tympanomastoidectomy\" href=\"http:\/\/www.stanford.edu\/group\/sailsbury_robotx\/cgi-bin\/salisbury_lab\/?page_id=361\">Surgical Rehearsal of Tympanomastoidectomy<\/a><\/h2>\n<p>Our virtual surgical environment constructs interactive anatomical models from patient-specific, multi-modal preoperative image data, and incorporates new methods for visually and haptically rendering the volumetric data. Evaluation of the system\u2019s ability to replicate temporal bone dissections for tympanomastoidectomy showed strong correlations between virtual and intraoperative anatomy.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<table cellspacing=\"0\">\n<tbody>\n<tr>\n<td width=\"200\"><img alt=\"\" src=\"http:\/\/www.stanford.edu\/group\/sailsbury_robotx\/images\/1a6b68e3ee0881e2a8d8a5de92301e81.jpg\" border=\"0\" \/><\/td>\n<td style=\"vertical-align: top;\">\n<h2><a title=\"Morphometric Workstation for Middle Ear Micro-CT\" href=\"http:\/\/www.stanford.edu\/group\/sailsbury_robotx\/cgi-bin\/salisbury_lab\/?page_id=377\">Morphometric Workstation for Middle Ear Micro-CT<\/a><\/h2>\n<p>Middle-ear anatomy is integrally linked to both its normal function and its response to disease processes. Micro-CT imaging provides an opportunity to capture high-resolution anatomical data in a quick and non-destructive manner. 
We have designed and developed a software workstation that provides an intuitive means of interacting with these data.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<table cellspacing=\"0\">\n<tbody>\n<tr>\n<td width=\"200\"><img alt=\"\" src=\"http:\/\/www.stanford.edu\/group\/sailsbury_robotx\/images\/36bff4d32b9a2514b4ec2b8a0183a51b.jpg\" border=\"0\" \/><\/td>\n<td style=\"vertical-align: top;\">\n<h2><a name=\"TOC-Endoscopic-Sinus-Surgery-Simulation\"><\/a><a title=\"Endoscopic Sinus Surgery Simulation\" href=\"http:\/\/www.stanford.edu\/group\/sailsbury_robotx\/cgi-bin\/salisbury_lab\/?page_id=383\">Endoscopic Sinus Surgery Simulation<\/a><\/h2>\n<p>Endoscopic sinus surgery is a technically challenging procedure that could benefit from a virtual surgical rehearsal environment. We designed a simulator capable of taking a pre-operative clinical CT scan and constructing a virtual 3D model of the patient on the fly. Surgically relevant anatomy seen during simulation can predict what may be encountered in the operating room.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<table cellspacing=\"0\">\n<tbody>\n<tr>\n<td width=\"200\"><img alt=\"\" src=\"http:\/\/www.stanford.edu\/group\/sailsbury_robotx\/images\/e9a58bdcb8cadbce282a4c1825bcf848.jpg\" border=\"0\" \/><\/td>\n<td style=\"vertical-align: top;\">\n<h2><a title=\"A 6-DoF Haptic Device for Microsurgery\" href=\"http:\/\/www.stanford.edu\/group\/sailsbury_robotx\/cgi-bin\/salisbury_lab\/?page_id=399\">A 6-DoF Haptic Device for Microsurgery<\/a><\/h2>\n<p>The \u00b5Haptic device, a new six-degree-of-freedom haptic device, was designed with microsurgical telerobotics and simulation in mind. 
The passive mass properties of a master device should closely approximate the mass properties of real surgical instruments in order to maximize transfer of surgical motor skills, and we have designed this device with these objectives in mind.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h2>Past &amp; Completed Projects<\/h2>\n<table cellspacing=\"0\">\n<tbody>\n<tr>\n<td width=\"200\"><img alt=\"\" src=\"http:\/\/www.stanford.edu\/group\/sailsbury_robotx\/images\/f016fdb916dbc1d38db12e31419b96d2.jpg\" border=\"0\" \/><\/td>\n<td style=\"vertical-align: top;\">\n<h2><a title=\"VR Environment for Bone Surgery\" href=\"http:\/\/www.stanford.edu\/group\/sailsbury_robotx\/cgi-bin\/salisbury_lab\/?page_id=403\">VR Environment for Bone Surgery<\/a><\/h2>\n<p>The purpose of this project was to develop a training environment that can be used to teach both sensorimotor-level and task-level skills to surgical residents. The project focus was on the simulation of temporal bone surgery, with a particular emphasis on modeling the behavior of a virtual drill and its contact with bone tissue.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n","protected":false},"excerpt":{"rendered":"<p>Surgical Simulation focuses on allowing surgeons to practice surgery on a virtual patient while experiencing realistic sights, sounds, and forces as if they were actually in the operating room. 
Design and Implementation of a Maxillofacial Surgery Rehearsal\u00a0Environment with Haptic Interaction for Bone Fragment and Plate Alignment We are designing and implementing a haptics-enabled maxillofacial &hellip;<\/p>\n<p class=\"read-more\"> <a class=\"\" href=\"https:\/\/sr.stanford.edu\/?page_id=205\"> <span class=\"screen-reader-text\">Surgical Simulation<\/span> Read More &raquo;<\/a><\/p>\n","protected":false},"author":3,"featured_media":0,"parent":213,"menu_order":2,"comment_status":"open","ping_status":"closed","template":"","meta":[],"_links":{"self":[{"href":"https:\/\/sr.stanford.edu\/index.php?rest_route=\/wp\/v2\/pages\/205"}],"collection":[{"href":"https:\/\/sr.stanford.edu\/index.php?rest_route=\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/sr.stanford.edu\/index.php?rest_route=\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/sr.stanford.edu\/index.php?rest_route=\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/sr.stanford.edu\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=205"}],"version-history":[{"count":35,"href":"https:\/\/sr.stanford.edu\/index.php?rest_route=\/wp\/v2\/pages\/205\/revisions"}],"predecessor-version":[{"id":2641,"href":"https:\/\/sr.stanford.edu\/index.php?rest_route=\/wp\/v2\/pages\/205\/revisions\/2641"}],"up":[{"embeddable":true,"href":"https:\/\/sr.stanford.edu\/index.php?rest_route=\/wp\/v2\/pages\/213"}],"wp:attachment":[{"href":"https:\/\/sr.stanford.edu\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=205"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}