
Enabling Intentional Human Engagement with AI

Ubiquity, Volume 2024 Issue March, March 2024 | BY Bushra Anjum




Ubiquity

Volume 2024, Number March (2024), Pages 1-5

Innovation Leaders: Enabling Intentional Human Engagement with AI
Bushra Anjum
DOI: 10.1145/3654808

In this interview, Ubiquity's senior editor, Dr. Bushra Anjum, chats with Ming Yin, an assistant professor at Purdue University, about how intentional human interactions can fully realize the potential of AI in augmenting human intelligence. They also discuss Dr. Yin's research to empirically understand how humans engage with AI, computationally model this engagement, and design interventions to improve humans' engagement with AI by both influencing human behavior and adjusting AI designs.

Ming Yin is an assistant professor in the Department of Computer Science at Purdue University. Her current research interests include human-AI interaction, crowdsourcing and human computation, and computational social sciences. She completed her Ph.D. in computer science at Harvard University and received her bachelor's degree from Tsinghua University. Dr. Yin was the Conference Co-Chair of AAAI HCOMP 2022 and has served as an area chair, associate chair, or senior program committee member for many conferences, including AAAI, CHI, FAccT, and the Web Conference. Her work has been recognized with multiple best paper awards (CHI 2022, CSCW 2022, HCOMP 2020) and best paper honorable mention awards (CHI 2019, CHI 2016). Dr. Yin can be contacted via mingyinATpurdueDOTedu.

What is your big concern about the future of computing to which you are dedicating yourself?

In today's rapidly evolving world of artificial intelligence (AI), there is a growing need for humans to collaborate effectively with AI systems. However, this collaboration often falls short of its potential due to various challenges. AI technologies are developing rapidly and quickly penetrating numerous industries as well as people's daily lives, and people increasingly need to interact with AI technologies to complete their tasks, such as decision making. For example, home buyers are assisted by AI models to predict property prices and determine their bidding strategies, while credit analysts may consult an AI model's recommendation when assessing the default risks of loan applicants. The promise is that, by coupling human judgment and contextual awareness with the data-driven insights of AI, the human-AI team can make more informed, accurate, and ethically sound decisions. Unfortunately, in practice, it is quite rare for this desired scenario of "human and machine intelligence getting combined synergistically and outperforming either party alone" to materialize.

One of the key bottlenecks here is that people may utilize AI assistance in a very superficial way—for example, they may just blindly follow the AI recommendations in their decisions or always ignore them. In other words, people may not engage with the AI recommendations thoughtfully by carefully analyzing when they are reasonable and when they are not, and this results in people's inappropriate reliance on AI and unsuccessful collaboration with AI. Sometimes, this lack of engagement with AI reflects people's cognitive bias in information processing. For instance, we find that when people don't have access to any information about an AI model's performance, they tend to gauge the trustworthiness of the model by examining how frequently the model's recommendations agree with their own judgment on decision cases that they are highly confident in. To some extent, this reflects people's "confirmation bias" towards AI. At other times, people's limited engagement with AI may be due to their lack of AI literacy or lack of accurate mental models of AI. As an example, one of our studies reveals that laypeople often believe that the AI model can perform equally well under different scenarios. They are not aware that an AI model's performance can degrade substantially on a novel distribution of data that is different from its training data, so they set the wrong expectation for AI's capability and tend to over-rely on AI in these scenarios.

Insufficient and inappropriate engagement with AI may hinder the capability of AI in augmenting human intelligence. For example, people may utilize AI assistance without thoroughly analyzing its trustworthiness, adopting it when it is untrustworthy or dismissing it when it is trustworthy. This could also erode human autonomy and agency in the long run. So, we'd like to explore what can be done to enhance people's engagement with AI and eventually help them utilize AI assistance more appropriately.

How did your background and educational experiences shape your fascination with enhancing the collaboration between humans and artificial intelligence?

I began my Ph.D. program at Harvard University in 2011 under the supervision of Professor Yiling Chen. I was a part of the Harvard EconCS group, which pursues interdisciplinary research on the interface of computer science and economics. Since my Ph.D. days, I have always been fascinated by the question of how to enable "hybrid systems" that can effectively combine human and machine intelligence. But at that time, I was mostly thinking of combining the two types of intelligence through "human-in-the-loop systems," that is, humans help improve AI by providing annotated training data or feedback. A key challenge in this research is to explore how to make sure humans contribute their best wisdom to enhance machine intelligence. My exposure to various behavioral and cognitive science research in the EconCS group helped me address this challenge by thinking about how to better engage humans in the human-in-the-loop systems by understanding their needs, wants, and biases and accounting for human behavior in the designs of the systems (e.g., model humans' reactions to financial rewards and dynamically determine how to provide rewards to them to maximize their work quality in human-in-the-loop systems).

Towards the end of my Ph.D., with the large amount of human intelligence conveyed through annotated data and the rise of deep learning, AI technologies experienced rapid growth, and many user-facing AI applications were deployed in various domains to provide assistance to humans. This is when I started to realize that combining human and artificial intelligence is no longer just about "humans augment machine intelligence" but could also mean "machines augment human intelligence." As before, "engagement" remains a key challenge, because machine intelligence can only enhance human intelligence if people make the best use of the AI assistance.

Indeed, around that time, the research community identified several issues with modern AI technologies, including fairness, interpretability, and trustworthiness; it became clear that, to "achieve the best of the two worlds," people need to know how to interact and engage with AI appropriately. I was very fortunate to join Microsoft Research NYC (MSR NYC) as a postdoctoral researcher after I obtained my Ph.D. degree. MSR NYC had just created its FATE (Fairness, Accountability, Transparency, and Ethics in AI) group at that time, so I got the opportunity to interact with some of the pioneering FATE researchers, like Jennifer Wortman Vaughan and Hanna Wallach, and to explore the emerging area of human-AI interaction. Since then, examining how to help people better engage with AI and realize the full potential of AI in augmenting human intelligence has become one of my primary areas of research.

What projects are your research group leading with the goal of improvements in human engagement with AI, and what are the key findings or insights from your research in this area?

With my group at Purdue University, we are looking into a few fundamental aspects of better understanding and enhancing humans' engagement with AI, with the goal of enabling humans to rely on AI assistance appropriately and collaborate with AI effectively. Our research is often conducted in the setting of AI-assisted decision making, and we aim to improve human decision makers' engagement with AI-based decision aids so that they can identify the reliable data-driven insights of AI while maintaining their agency in decision making by actively and properly integrating their own knowledge with the AI insights.

To achieve this, we conduct large-scale human-subject experiments to empirically understand how humans interact with AI-based decision aids, such as identifying what factors may influence people's trust in and reliance on the AI model's decision recommendations. We also build computational models to characterize the cognitive mechanisms underlying humans' engagement with AI. For example, we build models to predict whether a human decision maker will rely on AI, given the AI model's decision recommendation and confidence as well as their own judgment and confidence on a decision case.
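To make the modeling idea concrete, here is a minimal sketch of what such a reliance model could look like, assuming a simple logistic regression over three illustrative features; the features, data, and model choice are placeholders of my own, not the group's actual setup.

```python
# A minimal, hypothetical sketch (illustrative only, not Dr. Yin's actual model
# or data): predict whether a decision maker will adopt an AI recommendation
# from the AI's confidence, the human's own confidence, and whether the two agree.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [AI confidence, human confidence, 1 if the AI agrees with the human's initial judgment]
X = np.array([
    [0.95, 0.40, 0],   # confident AI, unsure human, disagreement
    [0.60, 0.90, 0],   # unsure AI, confident human, disagreement
    [0.85, 0.80, 1],   # both confident, agreement
    [0.55, 0.50, 1],   # both unsure, agreement
])
# Labels (made up for illustration): 1 if the human ultimately adopted the AI recommendation
y = np.array([1, 0, 1, 1])

reliance_model = LogisticRegression().fit(X, y)

# Predicted probability of relying on the AI for a new decision case
new_case = np.array([[0.70, 0.65, 0]])
print(reliance_model.predict_proba(new_case)[0, 1])
```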

Building upon both our empirical and computational understanding of human-AI interaction, the final key component of our research focuses on designing interventions that facilitate more effective engagement with AI. Broadly, we consider two large families of interventions—interventions that aim to influence human behavior and interventions that adjust the AI model. For instance, we explore the use of large language models (LLMs) as a third-party tool to encourage critical reflection on decision rationale. LLMs can act as facilitators, conveying AI model insights in natural language, prompting discussions with humans, and even challenging AI recommendations to make sure that people do not blindly follow them. We also investigate adaptive interface designs that influence how humans engage with AI, guiding them to rely on AI assistance when it is beneficial, such as when the AI has a higher likelihood than the human of making correct decisions in specific scenarios.
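As a rough illustration of the LLM-facilitator idea, the sketch below assembles a prompt that restates an AI recommendation, surfaces a possible weakness, and poses a reflective question; the function and prompt wording are hypothetical placeholders rather than the actual study design.

```python
# Hypothetical sketch of an LLM "facilitator" prompt for AI-assisted decision making.
# The function name and prompt text are illustrative placeholders; the returned
# string would be sent to whatever chat-completion API is available.
def build_facilitator_prompt(ai_recommendation: str, ai_rationale: str) -> str:
    return (
        "You are a neutral facilitator in an AI-assisted decision.\n"
        f"The decision aid recommends: {ai_recommendation}\n"
        f"Its stated rationale: {ai_rationale}\n\n"
        "Restate the recommendation in plain language, point out one way the "
        "rationale could fail, and ask the decision maker one question that "
        "checks whether the rationale holds for their specific case."
    )

# Example usage with a loan-assessment scenario like the one in the interview
print(build_facilitator_prompt(
    ai_recommendation="Approve the loan application",
    ai_rationale="The applicant's income and repayment history resemble past low-risk cases.",
))
```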

In terms of designing interventions that adjust the AI model, the key intuition is that the AI model should be developed to take into account how humans will react to it (which can be predicted through our earlier computational models), rather than optimizing for its independent performance. In this way, the AI model can effectively "anticipate" humans' engagement behavior and adjust its recommendations to best support and complement humans.
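A minimal sketch of this intuition, under my own simplifying assumptions (binary adopt-or-ignore behavior and known accuracy estimates), might let the AI decide whether to surface a recommendation by comparing the anticipated team accuracy with and without it.

```python
# Hypothetical sketch (my own simplification, not the group's published method):
# the AI anticipates how likely the human is to adopt its recommendation and
# only surfaces it when the expected human-AI team accuracy improves.

def expected_team_accuracy(p_adopt: float, p_ai_correct: float, p_human_correct: float) -> float:
    """If the human adopts the recommendation the team inherits the AI's accuracy;
    otherwise the team falls back to the human's own accuracy."""
    return p_adopt * p_ai_correct + (1 - p_adopt) * p_human_correct

def choose_behavior(p_adopt: float, p_ai_correct: float, p_human_correct: float) -> str:
    """Surface the recommendation only when anticipated reliance helps the team."""
    with_rec = expected_team_accuracy(p_adopt, p_ai_correct, p_human_correct)
    without_rec = p_human_correct
    return "show recommendation" if with_rec > without_rec else "withhold recommendation"

# Example: a human predicted to over-rely (p_adopt=0.9) on a weak AI (55% correct)
# while their own judgment is 70% correct, so the recommendation is withheld.
print(choose_behavior(p_adopt=0.9, p_ai_correct=0.55, p_human_correct=0.7))
```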

If you would like to learn more about our research on understanding, modeling, and improving human-AI collaboration or explore research topics in this domain, please do not hesitate to reach out to me.

DOI: 10.1145/3654808

Author

Bushra Anjum, Ph.D., is a health IT data specialist currently working as the Senior Analytics Manager at the San Francisco-based health tech firm Doximity. She leads a team of analysts, scientists, and engineers working on product and client-facing analytics aimed at creating HIPAA-secure tools for clinicians. Formerly a Fulbright scholar from Pakistan, Dr. Anjum served in academia (both in Pakistan and the USA) for many years before joining the tech industry. A keen enthusiast of promoting diversity in the STEM fields, she volunteers as a senior editor for ACM Ubiquity and as the Standing Committee's Chair for ACM-W global leadership, among other activities. She can be contacted via the contact page bushraanjum.info/contact or via Twitter @DrBushraAnjum.

Copyright 2024 held by Owner/Author

The Digital Library is published by the Association for Computing Machinery. Copyright © 2024 ACM, Inc.
