
Telestration: How Helena Mentis Applies Design Thinking to Surgery

Helena Mentis is the director of the Bodies in Motion Lab at the University of Maryland, Baltimore County (UMBC), with research spanning human-computer interaction (HCI), computer supported cooperative work (CSCW), and medical informatics. During a recent visit to the Design Lab at UC San Diego, Mentis talked about her research on surgery in the operating room.

She examines the medical world through surgical instruments and the workflow inside the operating room. Mentis homes in on minimally invasive surgery and its reliance on imaging. She is particularly interested in how medical professionals see and share visual information collaboratively, a practice that has grown over the past several years. She asks, “What happens if surgeons were given greater control over the image? What would happen to the workflow? Would it change anything?”

In one study at St Thomas’ Hospital in London, Mentis observed surgeons relying heavily on pointing gestures to direct the operation. When confusion arose, the surgeon had to restate his exact intention to the rest of the team. This break in the workflow inspired Mentis’ team to ask: what if we were to build a touchless illustration system that responded to the surgeon’s gestures? Her team set out to build what she calls “telestration,” which enables surgeons to use gestures to illustrate their intentions on an interactive display.
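The article does not detail how the system was built, but the core interaction it describes (tracked gestures becoming annotations on a shared display) can be sketched in a few lines. The Python sketch below is purely illustrative: the HandTracker stub and the stroke-drawing loop are assumptions for the sake of the example, not Mentis’ actual telestration system.

```python
# Illustrative sketch only: maps tracked fingertip positions to on-screen
# annotation strokes, the basic loop behind a touchless "telestration" display.
# HandTracker is a hypothetical stand-in for a real depth-sensor SDK.
from dataclasses import dataclass, field
from typing import List, Tuple

Point = Tuple[int, int]


@dataclass
class HandTracker:
    """Hypothetical tracker; a real system would wrap a depth-camera SDK."""
    samples: List[Tuple[Point, bool]]  # (fingertip position, drawing gesture held?)

    def poll(self):
        yield from self.samples


@dataclass
class TelestrationCanvas:
    strokes: List[List[Point]] = field(default_factory=list)
    _current: List[Point] = field(default_factory=list)

    def update(self, position: Point, drawing: bool) -> None:
        # While the drawing gesture is held, extend the current stroke;
        # when it is released, commit the stroke as a persistent annotation.
        if drawing:
            self._current.append(position)
        elif self._current:
            self.strokes.append(self._current)
            self._current = []


if __name__ == "__main__":
    # Simulated gesture input: a drawing gesture held over three points, then released.
    tracker = HandTracker(samples=[((100, 120), True), ((110, 125), True),
                                   ((120, 131), True), ((120, 131), False)])
    canvas = TelestrationCanvas()
    for position, drawing in tracker.poll():
        canvas.update(position, drawing)
    print(f"annotations on display: {canvas.strokes}")
```

In an actual operating room, the tracker would wrap a sterile, touchless sensor and the strokes would be rendered over the live surgical video feed rather than printed to a console.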

During another operation, the surgeon encountered soft bone and had to stop the procedure. As a result, the surgeon had to take off their gloves to re-examine the tissue on the visual display. Mentis notes, “There is a tight coupling between images on display and feeling with the instrument in hand.” If the image on display could be more closely integrated with the workflow, would this save time in the operating room?

After publishing her findings, Mentis heard from readers who argued that voice narration, rather than gesture, was what aided imaging and collaboration in surgery. Consequently she asked, “If given the opportunity would doctors use voice or gesture?” The ensuing observations revealed that while doctors stated a preference for voice, they used gesture more frequently to shape telestration images. Voice narration and gestures gave surgeons greater interaction with the image, but they also added time to the overall operation. Mentis reasons, “There is more opportunity for collaborative discussion with the information.” The longer procedures, in other words, also yielded greater opportunities to uncover and discuss critical information.

About Helena Mentis, Ph.D.

Assistant Professor, Department of Information Systems
University of Maryland, Baltimore County

Helena Mentis, Ph.D., is an assistant professor in the Department of Information Systems at the University of Maryland, Baltimore County. Her research contributes to the areas of human-computer interaction (HCI), computer supported cooperative work (CSCW), and health informatics. She investigates how new interactive sensors can be integrated into the operating room to support medical collaboration and care. Before UMBC, she was a research fellow at Harvard Medical School, held a joint postdoctoral fellowship at Microsoft Research Cambridge and the University of Cambridge, and was an ERCIM postdoctoral scholar at Mobile Life in Sweden. She received her Ph.D. in Information Sciences and Technology from Pennsylvania State University.

Read Next

CommunityCrit Gives Community Members a Newfound Voice

Actively engaging the public in urban design planning is essential to both establishing a strong…

Design Lab Students Swarm CHI Conference in Denver

In May, many UC San Diego Design Lab members and students swarmed the largest human-computer interaction conference in the world, ACM CHI 2017. Affiliated with ACM SIGCHI, the premier international society for professionals, academics and students who are interested in human-technology and human-computer interaction (HCI), the conference brings together people from multiple disciplines and cultures to explore new ways to practice, develop and improve methods and systems in HCI.

“I love the mix of people at CHI—chatting with people making new sensor technologies, new theoretical approaches, new architectural construction techniques -- it has incredible diversity but is still brought together with a common set of ideas and expectations,” said former Design Lab Fellow Derek Lomas, who presented at the conference.

This year, the mega-HCI conference, which was sponsored by tech-industry giants such as Facebook, Google, IBM, Microsoft, and Yahoo!, was held in Denver near the foothills of the Rocky Mountains. Organizers selected the scenic site, surrounded by trees, mountains, and valleys, to reflect the conference theme of “Motivate, Innovate, Inspire.”

The Worst F&#%ing Words Ever

Triton Magazine

Benjamin Bergen is a professor of cognitive science at UC San Diego and director of the Language and Cognition Lab, where he studies how our minds compute meaning and how talking interferes with safe driving—among many other things that don’t need to be bleeped. His latest book is What the F: What Swearing Reveals About Our Language, Our Brains, and Ourselves. He calls it “a book-length love letter to profanity.” You’ve been warned.

Lab Focused on Human-Centered Design Moves to Put San Diego on Map

Xconomy Article
For Michèle Morris, the big question hanging over organizers as they laid the groundwork last year for the first Design Forward Summit was whether the innovation community in San Diego understood the value of design.

“We didn’t know who was going to show up—and 600 people showed up,” said Morris, who is associate director of the Design Lab at UC San Diego and a founder of the Design Forward Summit.

Now, with the second Design Forward Summit set to begin Wednesday on San Diego’s downtown waterfront (and Thursday in Liberty Station), Morris said the question to be answered this year is “What’s next?”

UX Design Tips from Experience Designer Emilia Pucci | Design Chats

Emilia Pucci, Design Lab Designer-in-Residence, shares some useful tips on User Experience Research and Prototyping.

Design Chats is a video series where we sit down with design practitioners to answer questions about how they utilize human-centered design.

View our Design Chats playlist on the Design Lab YouTube Channel
