Ubiquity, Volume 2025 Issue March, March 2025 | BY Bushra Anjum, Leila Barmaki



Ubiquity, Volume 2025, Number March (2025), Pages 1-5

Innovation Leaders: A conversation with Roghayeh Leila Barmaki: How immersive technologies are reshaping education and healthcare
Bushra Anjum
DOI: 10.1145/3722210

In this interview, Ubiquity senior editor Bushra Anjum chats with Roghayeh Leila Barmaki to explore how immersive technologies and AI enhance education and healthcare accessibility. The discussion highlights Dr. Barmaki's journey—overcoming social communication challenges—and how it shaped research on nonverbal cues, autism, and identity formation in STEM. The conversation also examines the impact of virtual companions in learning and opportunities for interdisciplinary collaboration.

Dr. Roghayeh Leila Barmaki is an assistant professor in the Department of Computer and Information Sciences at the University of Delaware. She directs the Human-Computer Interaction Lab. Her research interests include applied machine learning, multimodal human behavior analysis, virtual and augmented reality, and human-computer interaction. Barmaki holds a Ph.D. in computer science and an M.Sc. in artificial intelligence. Before joining the University of Delaware in 2018, she was a postdoctoral fellow at Johns Hopkins University. Barmaki's research has been supported by several federal and corporate agencies, including the National Science Foundation, the National Institutes of Health, the U.S. Department of Agriculture, Amazon Research Awards, and the Amazon Health Equity Initiative. Her work has been recognized with multiple international awards, including the ACM/IEEE CHASE'23 Best Paper Award, the IEEE AIxVR'24 Best Paper Honorable Mention Award, and the ACM ICMI'16 Best Challenges Award. She can be contacted via rlbATudelDOTedu.

Can you describe your research on leveraging emerging technologies to improve accessibility in education and healthcare?

I am interested in increasing the accessibility and inclusivity of education and healthcare resources through virtual reality and AI-driven technologies. My research is focused on ways to expand the reach and accessibility of virtual learning and therapy solutions for marginalized users, such as children of color, individuals with special needs, and patients.

To achieve this, I develop virtual environments and test them with these marginalized users to meet their needs by analyzing their socio-emotional data, including gaze patterns, body movements, and other emotional indicators. With the growing interest in the metaverse, social virtual reality, and breaking the barriers of real and virtual environments, it is critical to understand, study, and adapt to the characteristics of different users. I am also interested in exploring how human interaction and communication form and change in these new learning and therapeutic environments, particularly for marginalized groups.

My interdisciplinary approach to measuring human behavior integrates social communication, learning sciences, and behavioral health. I measure complex concepts such as emotions, mutual gaze, student engagement, motivation to learn, student collaboration, learning identity, cognitive workload, and synchronous bodily movements.

For example, I recently examined how children with autism use their bodies and gaze to communicate with others and objects in a group play therapy setting. We found that children with autism maintain less eye contact and demonstrate lower levels of praxis and motor coordination in social play therapy sessions. This method can be extended to aid clinicians by improving data annotation and label verification for other autism spectrum disorder (ASD) datasets, reducing the labor-intensive process of manually annotating video data.
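As a rough illustration of the kind of gaze measure involved, the sketch below computes an eye-contact rate from per-frame gaze annotations. The function, the angular threshold, and the sample values are hypothetical assumptions for illustration, not taken from the study.

```python
# Hypothetical sketch (not the study's actual pipeline): estimate an
# eye-contact rate from per-frame gaze annotations.

def eye_contact_rate(frame_angles, threshold_deg=10.0):
    """Fraction of frames whose gaze error (degrees between the child's
    gaze vector and the partner's face direction) is within the threshold."""
    if not frame_angles:
        return 0.0
    hits = sum(1 for angle in frame_angles if angle <= threshold_deg)
    return hits / len(frame_angles)

# Illustrative per-frame angular errors (degrees) for one play session.
session = [4.2, 8.9, 25.0, 3.1, 40.5, 7.7, 12.0, 5.5]
print(f"eye-contact rate: {eye_contact_rate(session):.2f}")
```

A real system would derive these angles from pose and gaze estimation on video; the point here is only that a simple summary statistic over frames can support the session-level comparisons described above.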

Building on these insights, the next step is to apply our findings and takeaways to create more adaptive and personalized interactive environments. Studies like these can create immersive and playful virtual environments that better meet the needs of children with autism.

In another project, I am investigating how middle-school students can develop their computing identities and sense of belonging in the field of computing via social interactions with virtual role models. For this project, we have been gathering stories from local university alumni who found their way into computing and stayed in the field. We plan to bring these inspirational stories to life by creating embodied, conversational virtual humans (digital twins of these professionals). These virtual role models will participate in an automated interview setup with middle school students. Over time, we will investigate the impact of these interviews on students' interests and career pursuits.

What inspired your research into nonverbal communication and computing identity?

I grew up in a relatively big family as the youngest of five siblings. When I was young, I lost the vision in my left eye in an accident, which has made maintaining eye contact and engaging in social communication a lifelong struggle. During my teenage years, I faced bullying and unpleasant feelings of isolation, so stepping into new groups, interacting, and mastering interpersonal social skills were beyond my comfort zone.

Despite these challenges, I was very blessed to have several supportive mentors and companions who helped me pursue my path in computing. These key people include my eldest brother, an electrical engineer; my life partner, whom I met in college; my first undergrad professor, Dr. Ehsan Malekian; and my Ph.D. advisor, Dr. Charles Hughes, under whose guidance I earned my Ph.D. from the University of Central Florida. I clearly recall that in my first meeting with him, he picked up on my lack of eye contact and respectfully asked about it. After hearing my story, he graciously helped me define a research topic around nonverbal communication and interpersonal skills (e.g., eye contact, gestures, and head nods) in virtual classrooms. This was back in 2012, when remote learning was in its infancy.

Since then, as an independent scholar, I have been increasingly passionate about studying the nonverbal cues of humans in various contexts. That's why, when my colleague invited me to study the nonverbal communication skills of children with autism, I took the opportunity without hesitation. Lack of eye contact is one of the earliest diagnostic markers in autism assessment.

Sometimes, feelings of not belonging are deeply rooted in your identity, especially if we are talking about the science, technology, engineering, math, and computing (STEM+C) fields. One may be very good at math, and yet social barriers can prevent them from seeing themselves as a "math person." My personal experience helped me to develop projects focused on computing identity formation, using virtual role modeling and simulation to support young students from marginalized groups.

How do you apply immersive technologies to enhance education and healthcare accessibility for marginalized groups?

I create and apply immersive technologies that make education and healthcare resources more accessible and adaptive. To achieve this, we integrate multiple disciplines, including communication, cognitive science, learning sciences, physical therapy, human-computer interaction, and machine learning. For example, in our recent study, we collected and analyzed multiple data modalities—such as brain signals, gaze patterns, and embodiment—to measure student engagement and cognitive workload in our in-house math educational game. For an analysis like this, we track student learning progress before, during, and after the educational game. We also apply the principles of responsible AI, like fairness and transparency, to mitigate any potential biases in measuring and predicting student engagement.
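As a loose illustration of this kind of multimodal measurement, the sketch below fuses normalized feature values into a single engagement score and checks the score gap across student groups, in the spirit of the fairness checks mentioned above. All names, weights, and numbers are illustrative assumptions, not the study's actual model.

```python
# Hypothetical sketch: fuse normalized multimodal features into one
# engagement score, plus a simple per-group disparity check.

def engagement_score(features, weights=None):
    """Weighted average of feature values assumed normalized to [0, 1]."""
    weights = weights or {name: 1.0 for name in features}
    total = sum(weights[name] for name in features)
    return sum(value * weights[name] for name, value in features.items()) / total

def group_disparity(scores_by_group):
    """Largest gap between any two groups' mean engagement scores."""
    means = [sum(scores) / len(scores) for scores in scores_by_group.values()]
    return max(means) - min(means)

# Illustrative values: gaze fixation ratio, movement score, EEG-based index.
student = {"gaze": 0.8, "motion": 0.4, "eeg": 0.6}
print(f"engagement: {engagement_score(student):.2f}")
print(f"disparity: {group_disparity({'A': [0.6, 0.8], 'B': [0.5, 0.7]}):.2f}")
```

A large disparity between groups would flag a potential measurement bias worth auditing before the score is used to adapt instruction.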

In another study funded by the National Science Foundation, I explore how role-playing in virtual environments can inspire young girls of color to become "math or computer science people." I am particularly interested in seeing how collaboration and essential life skills of students can be improved in these safe, interactive learning environments.

Yet another study explores the use of generative AI-based solutions, e.g., ChatGPT, to support students in learning medical concepts with the aid of an embodied virtual peer. This work received the Best Paper Honorable Mention Award at the 2024 IEEE International Conference on Artificial Intelligence and Extended and Virtual Reality (IEEE AIxVR'24).

In this work, we used principles from learning science and medical education, virtual reality, human-computer interaction, and artificial intelligence to develop and analyze student motivation and learning. This project has the potential to meaningfully impact students by providing virtual peers, companions, coaches, and role models who can help them work through challenging problems and uneasy moments in the classroom and beyond. Such virtual mentors can help students develop persistence and resilience toward their goals in STEM and computing fields. Enabling learning and identity formation through virtual companions and role models outside the classroom can enhance the accessibility and flexibility of learning, reaching more students in need and ultimately taking a step toward my goal of increasing the accessibility and reach of virtual education.

In closing, I invite fellow researchers, educators, and healthcare professionals to collaborate to explore leveraging these technologies to empower learners and practitioners. If you are interested in projects that bridge AI, simulation and training, or educational technology, I would love to connect and explore opportunities for collaboration.

[This interview has been edited for clarity.]

Author

Bushra Anjum, Ph.D., serves as the Head of Data Science and AI/LLM subject matter expert at the EdTech startup NoRedInk. In this role, she leads a team of analysts, scientists, and engineers to develop adaptive online curriculum tools designed to enhance writing and critical thinking skills for students in grades 3–12. Dr. Anjum's expertise lies in statistical analysis, predictive modeling, GenAI tooling, and distributed systems engineering. She holds a Ph.D. in computer science from North Carolina State University.

Copyright 2025 held by Owner/Author

The Digital Library is published by the Association for Computing Machinery. Copyright © 2025 ACM, Inc.
