{"id":337,"date":"2013-04-17T15:02:48","date_gmt":"2013-04-17T22:02:48","guid":{"rendered":"http:\/\/www.stanford.edu\/group\/sailsbury_robotx\/cgi-bin\/salisbury_lab\/?page_id=337"},"modified":"2013-06-10T11:44:10","modified_gmt":"2013-06-10T18:44:10","slug":"337-2","status":"publish","type":"page","link":"https:\/\/sr.stanford.edu\/?page_id=337","title":{"rendered":"Design and Implementation of a Maxillofacial Surgery Rehearsal Environment with Haptic Interaction for Bone Fragment and Plate Alignment"},"content":{"rendered":"<h2>Project Description<\/h2>\n<p><img alt=\"\" src=\"http:\/\/www.stanford.edu\/group\/sailsbury_robotx\/images\/d6377e4929cfcc36d3d6d9e82ca583dc.jpg\" border=\"0\" \/><\/p>\n<h2>Purpose<\/h2>\n<p>The treatment of patients with complex facial and neck trauma is one<br \/>\nof the most challenging multidisciplinary tasks in surgery. Simulation<br \/>\ntechnology based on 3D data of an individual patient will have a<br \/>\ncritical impact on surgical planning and training. Repair of<br \/>\nmaxillofacial fractures involves aligning fragments of bone with<br \/>\naccuracy so that aesthetics and function are restored. Surgery is<br \/>\noften lengthy and hindered by inability to fully view the fractures<br \/>\nfrom all angles due to anatomic difficulties such as muscular<br \/>\nattachments, vascular supply and critical nerve supply.<\/p>\n<p>On-site and remotely accessible virtual environments capable of<br \/>\nsimulating interactions with patient-specific anatomy will allow<br \/>\nsurgeons to plan and rehearse operations and to retrain skills for<br \/>\ninfrequent procedures. Selection of type of plate, its length,<br \/>\nalignment and screw length is currently done intraoperatively and<br \/>\nseveral alternatives might need to be considered in the operating<br \/>\nroom. 
The ability to shift these decisions to a pre-operative planning<br \/>\nstage would decrease the length of surgery and improve confidence in<br \/>\nthe accuracy of repair.<\/p>\n<p>Commercially available maxillofacial surgery planning software allows<br \/>\nfor interactive realignment of fractures based on segmented,<br \/>\npre-operative CT images, but is limited to keyboard and mouse<br \/>\ninteraction. This limited ability to control orientation of the<br \/>\nbone fragments and the lack of force feedback on contact lead to low<br \/>\nperceived confidence in the resulting surgical plan.<\/p>\n<p>Our goal is to overcome the limitations of current software by<br \/>\ndesigning and implementing a haptics-enabled maxillofacial surgery<br \/>\nrehearsal environment that requires little training and provides a<br \/>\ndirect, high-fidelity immersive experience for the operator. The system<br \/>\nwould support six-degree-of-freedom (6-DOF) haptic interaction for<br \/>\nbone fragment and plate alignment for pre-surgical planning to treat<br \/>\nmandibular fractures. Our aim is to evolve a design of real utility to<br \/>\nsurgeons and, at the same time, ensure that it can be realized in<br \/>\nimplementation by identifying and addressing the technological and<br \/>\ninteraction design challenges through conceptual and technical<br \/>\nprototypes.<\/p>\n<h3>Methods<\/h3>\n<p>The design process has followed a user-centered design method<br \/>\nconcurrently with development of state-of-the-art collision detection<br \/>\nand haptic rendering algorithms. This dual process allows for<br \/>\nidentification of requirements that may be met with currently<br \/>\navailable technology as well as opportunities for improvements in<br \/>\ncertain essential technological areas of particular value to this<br \/>\nproject. 
The design is informed and iteratively improved by field<br \/>\nstudies, lo-fi and hi-fi prototypes, scenarios and co-operative<br \/>\nevaluation with oral\/maxillofacial surgeons.<\/p>\n<p>During the design process we are iteratively implementing two main<br \/>\nprototypes, following the concepts of a vertical (few features, but<br \/>\nhi-fi interaction) and a horizontal (conceptually all features of the<br \/>\nsystem, but lo-fi interaction) prototype. The horizontal prototype<br \/>\nallows the surgeon to execute a typical usage scenario from beginning<br \/>\nto end. While not all features are fully implemented, and some may<br \/>\nonly be mock-ups, the purpose of the horizontal prototype is to elicit<br \/>\nfeedback and modify the prototype and scenario to identify the most<br \/>\nimportant aspects of the system. The risk with a horizontal-only<br \/>\nprototype is that the designer might use materials and technologies<br \/>\nthat are infeasible or even impossible to implement. 
We balance this<br \/>\nrisk with vertical prototypes that fully implement critical components<br \/>\nrequiring new technologies; these act as proofs of concept and inform<br \/>\nthe overall design of what can be built.<\/p>\n<p>One essential property of the rehearsal environment is the possibility<br \/>\nof bi-manual six-degree-of-freedom positioning and orientation of<br \/>\nfractured mandibular bone in a way that looks and feels reassuring,<br \/>\nwithout requiring the user to learn a complex CAD-like system.<br \/>\nReal-time visualization of the dental occlusion and haptic feedback<br \/>\nconveying accurate contact forces while manipulating fractured bone<br \/>\nfragments are essential components of the system, and thus novel<br \/>\ntechnological development in these areas is required.<\/p>\n<h3>Results<\/h3>\n<p>The steps and interactions we identified include loading a segmented<br \/>\nCT scan of a trauma patient, viewing it in a stereoscopic display<br \/>\nco-located with a bi-manual haptic interface, manipulating the bone<br \/>\nfragments, and perceptualizing the occlusion between the maxilla and<br \/>\nmandible both visually and haptically through force feedback. In<br \/>\naddition, the surgeon may lock the fragments into key positions, view<br \/>\nthem from different directions, decide on plate placement and screw<br \/>\nsizes, and finally generate a report of the resulting surgical plan.<br \/>\nOur initial prototype was a pure mock-up (figure 1) and<br \/>\nwas used as a conversation artifact to improve the design and build<br \/>\ncommon ground between surgeons and engineers.<\/p>\n<p>Our studies indicate that the success of such a system is highly<br \/>\ndependent on the fidelity of the haptic rendering. For the rehearsal<br \/>\nenvironment to be intuitive, the interaction must follow the direct<br \/>\nintention of the operator with smooth and accurate force feedback. 
A<br \/>\nvertical prototype (figure 2) has been developed to focus on the<br \/>\nhaptic rendering aspect of the system. It extends our group\u2019s recent<br \/>\nwork to formulate a new algorithm for rendering 6-DOF haptic<br \/>\ninteraction between high-resolution volumetric representations of<br \/>\nobjects (e.g. bone fragments from CT images). In addition to the<br \/>\nprototypes, we will report the results of a formative evaluation<br \/>\nstudy, which will include the grading of perceived usefulness and<br \/>\nfidelity of the interaction.<\/p>\n<h3>Conclusion<\/h3>\n<p>We present a maxillofacial surgery rehearsal environment developed<br \/>\nusing an iterative design process with feedback from surgical specialists. The<br \/>\nsystem will permit surgeons to plan, simulate, and rehearse complex repair of<br \/>\nmaxillofacial fractures, including bone fragment and bone plate alignment in<br \/>\nthree-dimensional space. The benefits of a haptics-enabled system for obtaining<br \/>\naccurate surgical results, reducing operating time, and providing a<br \/>\nplatform to enhance surgical training in infrequent operations are considered.<\/p>\n<p><img alt=\"\" src=\"http:\/\/www.stanford.edu\/group\/sailsbury_robotx\/images\/b73be706451d41d8f286b24586f9d455.jpg\" border=\"0\" \/><\/p>\n<p><b>Figure 2.<\/b> Screenshot from the interactive prototype. The two<br \/>\nmandibular fracture segments can be moved with the left and right haptic<br \/>\ndevices, respectively.<\/p>\n<h2>Project Staff<\/h2>\n<ul>\n<li><a href=\"http:\/\/www.stanford.edu\/group\/sailsbury_robotx\/cgi-bin\/salisbury_lab\/?page_id=655\" title=\"Jonas Forsslund\">Jonas Forsslund<\/a><\/li>\n<li><a href=\"http:\/\/www.stanford.edu\/group\/sailsbury_robotx\/cgi-bin\/salisbury_lab\/?page_id=1921\" title=\"Sara C. Schvartzman, Ph.D.\">Sara C. 
Schvartzman<\/a><\/li>\n<li><a href=\"http:\/\/www.stanford.edu\/group\/sailsbury_robotx\/cgi-bin\/salisbury_lab\/?page_id=633\" title=\"Sonny Chan\">Sonny Chan<\/a><\/li>\n<li><a href=\"http:\/\/www.stanford.edu\/group\/sailsbury_robotx\/cgi-bin\/salisbury_lab\/?page_id=1251\" title=\"Rebeka G. Silva, D.M.D.\">Rebeka Silva<\/a><\/li>\n<li><a href=\"http:\/\/www.stanford.edu\/group\/sailsbury_robotx\/cgi-bin\/salisbury_lab\/?page_id=1259\" title=\"Sabine Girod, M.D., D.D.S., Ph.D.\">Sabine Girod<\/a><\/li>\n<li><a href=\"http:\/\/www.stanford.edu\/group\/sailsbury_robotx\/cgi-bin\/salisbury_lab\/?page_id=1217\" title=\"J. Kenneth Salisbury, Ph.D.\">J. Kenneth Salisbury<\/a><\/li>\n<\/ul>\n<h2>Status<\/h2>\n<p>Active since 2010.<\/p>\n<h2>Funding Sources<\/h2>\n<p>Funded through VA Grant Number.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Project Description Purpose The treatment of patients with complex facial and neck trauma is one of the most challenging multidisciplinary tasks in surgery. Simulation technology based on 3D data of an individual patient will have a critical impact on surgical planning and training. 
Repair of maxillofacial fractures involves aligning fragments of bone with accuracy so &hellip;<\/p>\n<p class=\"read-more\"> <a class=\"\" href=\"https:\/\/sr.stanford.edu\/?page_id=337\"> <span class=\"screen-reader-text\">Design and Implementation of a Maxillofacial Surgery Rehearsal Environment with Haptic Interaction for Bone Fragment and Plate Alignment<\/span> Read More &raquo;<\/a><\/p>\n","protected":false},"author":3,"featured_media":0,"parent":205,"menu_order":0,"comment_status":"open","ping_status":"open","template":"","meta":[],"_links":{"self":[{"href":"https:\/\/sr.stanford.edu\/index.php?rest_route=\/wp\/v2\/pages\/337"}],"collection":[{"href":"https:\/\/sr.stanford.edu\/index.php?rest_route=\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/sr.stanford.edu\/index.php?rest_route=\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/sr.stanford.edu\/index.php?rest_route=\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/sr.stanford.edu\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=337"}],"version-history":[{"count":8,"href":"https:\/\/sr.stanford.edu\/index.php?rest_route=\/wp\/v2\/pages\/337\/revisions"}],"predecessor-version":[{"id":1917,"href":"https:\/\/sr.stanford.edu\/index.php?rest_route=\/wp\/v2\/pages\/337\/revisions\/1917"}],"up":[{"embeddable":true,"href":"https:\/\/sr.stanford.edu\/index.php?rest_route=\/wp\/v2\/pages\/205"}],"wp:attachment":[{"href":"https:\/\/sr.stanford.edu\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=337"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}