Volume 2019, Number August (2019), Pages 1-6
In this series of interviews with innovation leaders, Ubiquity Associate Editor and software engineer Dr. Bushra Anjum sits down with Marianna Obrist, who is exploring augmented and virtual reality within the context of HCI. Obrist discusses multisensory interactions that go beyond sight and sound, as well as her work exploring the role of human senses in the design of future technologies.
Marianna Obrist is Professor of Multisensory Experiences and head of the Sussex Computer Human Interaction (SCHI 'sky') Lab at the School of Engineering and Informatics at the University of Sussex in the U.K. Her research ambition is to establish touch, taste, and smell as interaction modalities in human-computer interaction (HCI). Before joining Sussex, Marianna was a Marie Curie Fellow at Newcastle University and, before that, an Assistant Professor at the University of Salzburg, Austria. Marianna is an inaugural member of the ACM Future of Computing Academy (ACM-FCA) and was selected as a Young Scientist in 2017 and 2018 to attend the World Economic Forum (WEF) in China. As part of her research, her team developed a novel scent-delivery technology that was exhibited at the WEF 2019 in Davos and at the World Government Summit in Dubai in February. Most recently, she was appointed as a Visiting Professor in the Material Futures Research Group at the Royal College of Art, London. More details on her research can be found at http://www.multi-sensory.info. Marianna can be reached via email at [email protected] and on Twitter @obristmarianna.
What is your big concern about the future of computing to which you are dedicating yourself?
Interactive technologies, such as virtual and augmented reality, are transforming how people experience, interact with, and share information. Advances in technology have made it possible to generate real and virtual environments with breathtaking graphics and high-fidelity audio. However, without stimulating the other senses, such as touch and smell, and in some cases even taste, they may lack realism. I envision a future where all our major senses are considered to create more immersive and compelling interactions with technology. To realize this vision, I follow and apply a human experience-centered approach. In other words, I study human experiences, then try to describe and formalize those experiences to guide the design of future interactions with technology.
A key challenge and stumbling block for progress in this domain is the lack of a common language for talking about tactile, gustatory, and olfactory experiences. Unlike our other senses, for which we have a basic agreed-upon vocabulary to work with (e.g., color names, RGB, hue, saturation, sound pitch, and volume), we have no such common reference point for touch, taste, and smell in HCI. Moreover, we lack concrete tools, toolkits, and frameworks to support the design of novel sensory interaction experiences across a variety of application use cases. For example, touch is a powerful vehicle for communication between humans. The way we touch (the how) embraces and mediates certain emotions such as anger, joy, fear, or love. While this phenomenon is well explored for human interaction, HCI research is only starting to uncover the fine granularity of sensory stimulation and the responses associated with certain emotions. As part of my research, for example, I explored how technology could augment human-human communication through the use of tactile stimuli. I can imagine the design of entirely new human-computer interaction experiences that could disrupt existing inequalities in access to digital content for people with sensory impairments, e.g., people who are blind or deaf. All of this, taken together, requires a fundamental and evidence-based understanding of the experiences elicited through our entire human sensory system.
How did you become interested in the area of interactive technologies built around our human sensory system?
I have always been fascinated by the way people explore and experience the world, how we describe those experiences to each other, and how we could exploit those descriptions in the design of new interactive experiences for technology. That interest is grounded in my interdisciplinary background in communication science and computer science, and it has very much shaped my specialization and research in the field of HCI.
Throughout my Ph.D. at the computer science department at the University of Salzburg in Austria, I focused mainly on the question of how and why people modify and customize interactive systems in their homes. What are the experiences they are missing that are not provided by the market? I was intrigued by the do-it-yourself movement and by how lead users innovate far beyond what large companies imagine. During that time, my focus was on audio-visual interaction design and experiences. Only later, when I joined Newcastle University through a two-year Marie Curie fellowship, did I direct my efforts towards the opportunities around touch, taste, and smell as interaction modalities. The fellowship provided me with the freedom, time, and resources to dive into and explore this emerging multisensory technology space.
A significant challenge, right from the beginning of studying sensory experiences, was defining the methodological approach. I was driven by the question of how to capture the subjective qualities of users' sensory experiences alongside the perceptual effects and physical properties of sensory stimuli. The way I addressed, and still address, this challenge is by combining different methods, particularly through a cross-disciplinary approach that combines experimental psychology and HCI with user experience (UX) methods and techniques. One method I have explored and applied in my research on sensory experiences is the explicitation interview technique (also referred to as micro-phenomenology or psycho-phenomenology). I have used this approach in novel and emerging contexts (i.e., for mid-air haptic experiences, creating a first human-experiential vocabulary for mid-air haptics, and for taste experiences, establishing an experience-centered design framework for taste). My ambition is to establish a semantically rich experiential vocabulary for sensory and multisensory experiences in order to facilitate the design and replication of new interfaces, computing systems, and applications beyond existing interaction paradigms.
What projects and initiatives are you currently leading to better integrate touch, taste, and smell with HCI so as to enable advanced multisensory digital experiences?
At the Sussex Computer Human Interaction (SCHI 'sky') Lab, which I established about four years ago, I focus particularly on the investigation of touch, taste, and smell experiences that can make a difference in a variety of application scenarios (from entertainment to education, future automation to healthcare services). While I am making progress towards establishing a common reference point and language around touch, taste, and smell experiences in my research, several other challenges remain ahead of us, including how to translate the human vocabulary into meaningful machine representations and how to develop a sustainable ecosystem of tools, interfaces, widgets, and toolkits for multisensory experience design.
To tackle some of these challenges and make touch, taste, and smell more accessible for interaction and experience design, I was involved in the development of a novel design tool for smell (i.e., OWidgets). Together with two of my colleagues, I formalized and translated the knowledge we have established around smell experiences into specific toolkit features to enable smell-based experience design. We presented the first proof of concept at the Consumer Electronics Show (CES) 2018 in Las Vegas, which was a great opportunity to present research to the broader public, commercial entities, and potential investors in new startup ideas. Such events, along with other public engagement activities, are becoming an increasingly relevant part of our academic life, as they demonstrate the possible impact of our research and help create public engagement through new forms of science communication. For example, I teamed up with astrophysics experts from Imperial College London to create a multisensory journey through dark matter at the London Science Museum.
Beyond my primary research efforts, I am also promoting a much broader multisensory perspective in the discussion on the future of computing and food, an initiative I started as part of the ACM Future of Computing Academy. One of the first outcomes of this initiative was the formulation of a "Manifesto on the Future of Computing and Food," published in July 2018. The ambition behind this manifesto is to make people think about what technology could do to solve problems linked to food, from production to distribution to consumption. For example, research has shown that our taste perception diminishes with age. We could use novel multisensory interfaces to augment the eating experience, ensuring that we keep enjoying food as we grow older through flavor-enhancing interfaces and novel scent-delivery technologies.
While we may not know what the future of computing will look like, I believe the appetite for innovation in multisensory human-computer interaction is growing, and touch, taste, and smell interfaces will gain their rightful place next to visual and auditory interfaces. In summary, if you are excited and curious about the role of our human senses in the design of future technologies, please don't hesitate to get in touch with me. I am happy to bounce around ideas, provide further reading advice, and explore collaboration opportunities. If you are a student looking for new and exciting research challenges, get in touch to discuss internship or Ph.D. opportunities. If you are based in a company and looking for inspiration beyond the known, drop me an email and we can arrange a lab visit. If you are looking to invest in new ideas, send me an email and I will share our pitch on how smell will change the way we interact with technology in the future.
Obrist's research is mainly supported and funded by the European Research Council (ERC) under the EU's Horizon 2020 research and innovation program (Grant No. 638605).
Bushra Anjum is a software technical lead at Amazon in San Luis Obispo, CA. She has expertise in agile software development for large-scale distributed services, with a special emphasis on scalability and fault tolerance. Originally a Fulbright scholar from Pakistan, Dr. Anjum has international teaching and mentoring experience and served in academia for over five years before joining industry. In 2016, she was selected as an inaugural member of the ACM Future of Computing Academy, a new initiative created by ACM to support and foster the next generation of computing professionals. Dr. Anjum is a keen advocate of diversity in the STEM fields, serving as a mentor and a regular speaker on the topic. She received her Ph.D. in computer science from North Carolina State University (NCSU) in 2012 for her doctoral thesis on Bandwidth Allocation under End-to-End Percentile Delay Bounds. She can be found on Twitter @DrBushraAnjum.
©2019 ACM $15.00
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
The Digital Library is published by the Association for Computing Machinery. Copyright © 2019 ACM, Inc.