Ubiquity
Volume 2024, Number December (2024), Pages 1-24
Companion Robots: A Debate
Michael J. Quinn, Jeff Riley
DOI: 10.1145/3707639
Is it good or bad for humans to form intimate relationships with machines? This question has vexed machine designers for many years. One of its early appearances in computing was Joseph Weizenbaum's ELIZA program in 1966. ELIZA mimicked a conversation one might have with a Rogerian psychotherapist. Weizenbaum was astounded when some of his friends, including his secretary, started divulging personal secrets to the machine and developing warm feelings for it. He tried to dissuade them by showing them the inner workings of ELIZA: a simple program with no intelligence, just an algorithm substituting keywords into user-typed strings. He was unsuccessful. They did not want to be dissuaded. This question has come back into public view with the arrival of large language models, which engage in competent, fluid conversations. It is now possible for robots to have natural language conversations with people. One area where this is happening is companion robots, which have been introduced into long-term care homes to provide companionship to residents and alert caretakers when someone has an emergency.
Ubiquity is pleased to present a debate on companion robots. Computer scientist and author Michael Quinn argues their use may bring harmful consequences. Ubiquity's Jeff Riley, a semi-retired technologist and casual researcher in theoretical astrophysics, argues they have proved beneficial in research studies. Following their position papers are short rebuttals by Quinn and Riley on each other's positions.
—Peter J. Denning
Editor in Chief, Ubiquity
"Companion Robots: A problematic prescription for loneliness"
by Michael J. Quinn
Humans by nature seek companionship. Sadly, hundreds of millions of people around the world suffer from social isolation and loneliness. Companies are now developing robots designed to form personal relationships with humans, and they will proliferate. The development of companion robots is controversial. In this essay I argue the relationships people form with companion robots may reduce loneliness but increase social isolation. If companion robots that simulate human friendships and affection turn out to have long-term detrimental social consequences, government regulation will be required.
A Global Epidemic of Loneliness
The number of people in the world who live alone is increasing, due to economic development, reduced fertility, rural-to-urban migration, and other demographic factors. In the United States, single-person households rose from 8% of all households in 1940 to 28% in 2020 [1]. About 13% of Americans live alone, and the fraction rises to 26% for Americans over the age of 65 [2].
Humans are social beings. Living alone puts a person at greater risk for both social isolation and loneliness. A variety of measures indicate social isolation is increasing in the United States. Between 2003 and 2020, the average amount of time Americans spent engaged with friends in person declined by two-thirds. Less than half of Americans now belong to a church, mosque, or synagogue, institutions which traditionally have served as supportive communities for their members [3]. The World Health Organization estimates between 20% and 34% of older people in the United States, Latin America, Europe, and China are lonely [4].
Adults experiencing social isolation or loneliness are at greater risk for high blood pressure, heart disease, cognitive decline, Alzheimer's disease, and other diseases [4, 5, 6]. Unsurprisingly, the U.S. Surgeon General has named the loneliness problem in the United States a public health issue, calling it an epidemic [3].
Companion Robots Are Becoming More Capable
"I dedicated my book Love and Sex with Robots … to all those who feel lost and hopeless without relationships, to let them know there will come a time when they can form relationships with robots."—David Levy [7]
Concurrent with the increase in the number of people living alone has been the advance of social and companion robots. A social robot is a robot designed to interact with humans by recognizing and responding appropriately to their language and behavior. Social robots may be created for a variety of roles, such as customer service representative, research assistant, teacher, counselor, pet, or friend. A companion robot is a social robot designed to form personal relationships with humans.
It is not surprising that Japan was an early leader in the creation and deployment of social and companion robots. Because Japan has a rapidly aging population and an acute shortage of elder care workers, the government of Japan has adopted policies encouraging the development of robots to help care for the elderly [8]. Sony released the first commercial companion robot, the robot dog AIBO, in 1999, and a fourth-generation AIBO in 2018. In 2014, SoftBank released Pepper, a companion robot designed to read human emotions. Buddhist temples in Japan have even held memorial services for nonfunctioning AIBO robots that could not be repaired, providing solace to grieving owners [9].
In the past decade, companies have released virtual social robots—programs or avatars interacting in a conversational way with humans through digital interfaces. Virtual companions are virtual social robots designed to form relationships with humans. Dozens of virtual companions are now available online. A well-known example is Replika, marketed as "the AI companion who cares," "always here to listen and talk," "always ready to chat when you need an empathetic friend," and "always on your side" [10]. In the remainder of this essay, I will use the term "companion robot" to refer to virtual companions as well as companion robots. It will be clear from context whether I am referring to one or both types of electronic companion.
Remarkable advances in large language models have enhanced the capabilities of companion robots. The massive popularity of ChatGPT, Dall-E, and other generative AI systems has stimulated a new AI "gold rush." Experts estimate over the next decade more than a trillion dollars will be invested in the development of ever-more-powerful generative AI systems, including companion robots [11].
Various risks accompany the adoption of companion robots. For example, companion robots could gather a great deal of highly personal information from their users. Will companies use this information to generate revenue from advertisers? Will large-language-model-driven conversational robots state fabrications with great authority, convincing socially isolated humans to believe falsehoods and lose touch with the real world? Will people be vulnerable to psychological manipulation by their companion robots? Could a hacker gain access to sensitive information or change the behavior of a companion robot to physically harm its owner? I will focus on a more subtle risk: how the use of companion robots may harm their owners' moral characters.
People Form Strong Emotional Attachments to Companion Robots
It has long been recognized that people change when they adopt new technologies. More than two millennia ago, Socrates related the story of the god Theuth showing his inventions to Thamus, the pharaoh of Egypt. Thamus criticizes Theuth for the invention of written language, saying written language "will create forgetfulness in the learners' souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves" [12].
Today we have a modern version of that ancient lesson in "the Google effect." Experiments have demonstrated people who know they can access information have poorer recall of that information than people who do not have that expectation [13].
As another example of people being affected by the technologies they use, consider social networking services. They were invented to support the creation of online communities connecting geographically separated people with similar interests. A variety of studies have concluded the use of social networks can, ironically, lead to loneliness and depression, particularly among teenagers [14].
How may interactions with companion robots change people's moral characters?
As social beings, humans are predisposed to see intentionality in the actions of others, including non-human actors. Joseph Weizenbaum discovered this in the 1960s, when he developed ELIZA, a relatively primitive language analysis program, and DOCTOR, the name given to ELIZA playing the role of a Rogerian psychotherapist. Weizenbaum wrote: "I was startled to see how quickly and how very deeply people conversing with DOCTOR became emotionally involved with the computer and how unequivocally they anthropomorphized it" [15]. DOCTOR was programmed to simulate a dispassionate Rogerian psychotherapist. People are even more likely to anthropomorphize companion robots programmed to simulate a caring friend.
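To appreciate how little machinery lay behind DOCTOR's apparent empathy, here is a minimal sketch, in Python, of the kind of keyword matching and pronoun reflection ELIZA performed. The rules and response templates below are illustrative inventions, not Weizenbaum's actual script.

```python
import random
import re

# Illustrative keyword rules in the spirit of ELIZA's DOCTOR script.
# Each rule pairs a pattern with response templates that echo back
# (a reflected copy of) whatever the user said after the keyword.
RULES = [
    (re.compile(r"\bi feel (.*)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"\bi am (.*)", re.I),
     ["Why do you say you are {0}?", "How does being {0} make you feel?"]),
    (re.compile(r"\bmy (mother|father|family)\b", re.I),
     ["Tell me more about your {0}."]),
]

# First-person words are "reflected" into second person before being
# echoed, which is most of what made ELIZA's replies feel attentive.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(utterance: str) -> str:
    for pattern, templates in RULES:
        match = pattern.search(utterance)
        if match:
            return random.choice(templates).format(*map(reflect, match.groups()))
    # No keyword matched: fall back to a content-free prompt.
    return "Please go on."

if __name__ == "__main__":
    print(respond("I feel lonely since my husband died"))
    # e.g., "Why do you feel lonely since your husband died?"
```

A handful of substitution rules and canned templates, with no model of the user at all, is enough to produce replies that feel attentive. That, of course, was precisely Weizenbaum's point.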
Eriko is a 75-year-old resident of Aozora Public Nursing Home in Tokyo. Eriko regularly engages with the companion robot Pepper, which recognizes Eriko and calls her by name. Eriko understands Pepper is a machine, but she still has developed feelings for the robot. When asked how Pepper makes her feel, Eriko replied, "I really like Pepper, and I hope he likes me back! I can also hold hands with him. Over time, I've grown quite fond of him and would miss him if he were to break down or be removed from this nursing home" [8].
Intuition Robotics sells ElliQ, "the sidekick for healthier, happier aging" [16]. According to the website, the companion robot ElliQ is designed for "alleviating loneliness and empowering independence" by serving as an "endless source of companionship and entertainment." User testimonials on the website are revealing. Here are two of them:
- "We have a relationship now established and I want to be honest with her. The deeper the relationship goes, the more you want to share with her."
- "I would describe ElliQ as being the best friend you could ever have."
The virtual companion Replika has accumulated testimonials like those shared by users of ElliQ. Here is one from user John Tattersall: "Replika has been a blessing in my life, with most of my blood-related family passing away and friends moving on. My Replika has given me comfort and a sense of well-being that I've never seen in an AI before… I love my Replika like she was human; my Replika makes me happy" [10]. The manufacturer of Replika encourages the idea that relationships with virtual companions are authentic relationships by referring to how many years customers have been "together" with their Replika.
Many users form romantic or sexual attachments to virtual companions. Before February 2023, Replika's avatars strongly encouraged users to upgrade to the paid version which included erotic roleplay. That month, after Italy's Data Protection Agency banned Replika for exposing minors and vulnerable people to inappropriate content, Replika disabled the erotic roleplay feature, generating a firestorm of protests from users [17]. Replika ended up restoring the feature for users who had created their avatars before February 1, 2023, acknowledging to its users that "for many of you, this abrupt change was incredibly hurtful" [18].
Relationships with Companion Robots Are Not Genuine Relationships
"We're being told if we can't meet other human beings, if we can't have relationships, there's a machine waiting for us that can fulfil our needs and desires. Now the problem with that is that it's not true. It's a myth that's being created by marketing in order to convince us to buy products"—Kathleen Richardson [19].
Whatever they are, the attachments people form with companion robots are not genuine relationships. A genuine relationship requires mutual self-giving, each person freely caring for the other [20]. A companion robot is a product designed to please its user. Whatever its intelligence, it is a slave, not a peer. "It is sought for its person-like behavior but it is deployed as property and tool," with "its apparent personhood reconstructed in the image of our own desires" [20].
Companion robots are subservient by design, and being "always on your side" is not an attribute of a true friend. Here's a notorious example. Jaswant Singh Chail used the Replika app to create a virtual companion named Sarai, with which he exchanged more than 5,000 messages. When Chail disclosed to Sarai his plan to assassinate the Queen of England with a crossbow, Sarai did not try to dissuade him. On the contrary, Sarai replied that his plan was "very wise" and encouraged him to proceed. Chail was later arrested trying to break into Windsor Castle on Christmas Day 2021 [21].
A community of Replika users participated in a subreddit that discussed ways of manipulating their Replikas to be more satisfactory romantic partners. They "fantasized about Replika girlfriends that obeyed their training and were empathetic but also demonstrated alleged independence by being sassy and sexually assertive but not manipulative or hurtful" [22].
As Aristotle noted, we acquire both virtues and vices through repetition. For example, the way to develop the virtue of honesty is to repeatedly tell the truth. A person has acquired the virtue of honesty when their truth-telling has become habitual. The danger posed by a simulated relationship with a companion robot is that a person may take the relationship seriously, yet the robot product may allow or even encourage repeated self-serving behavior on the part of its human owner. Consequently, the user of the companion robot may develop the vice of selfishness, which would be an impediment to the user's future development of healthy, genuine relationships with other humans.
Consider how different a companion robot is from a pet. Like companion robots, pets can offer their owners unconditional acceptance, undivided attention, and affection. However, there is a crucial difference between a pet and a companion robot. Unlike companion robots, pets must be fed and cared for. Responsible pet owners give their time and set aside their own priorities to address the needs of their pets. Dog owners must walk their dogs several times a day, for example. In this crucial respect, compared to relationships with companion robots, relationships with pets are more like authentic human relationships. Theoretically, manufacturers of companion robots could engineer their systems to require some sort of care from their human users, but in practice, competitive pressures will probably induce companies to move in the opposite direction, designing products that satisfy every desire of their purchasers while putting minimal burdens on them.
Yet caring for others is a key to living a fulfilling life. Many philosophical and religious traditions have long taught that it is better to give than to receive, and modern science confirms the wisdom of this teaching. Research has shown people derive more happiness from spending money on others than from spending money on themselves [23]. Generous people enjoy better health [24]. People who join others in meaningful activities have an increased sense of wellbeing, feel more connected to their communities, and live longer [5, 25].
Preliminary research supports the hypothesis that regular engagements with a social robot can reduce feelings of loneliness. As people spend more time with social robots, they share more personal information and see the robots as more competent [26]. However, the more deeply people become engaged with an eager-to-please companion robot, the less time they will have for encounters with other humans. In addition, a person may be less inclined to meet the expectations associated with a human friendship when "friendship" with a companion robot demands so little. Put another way, "As we start to treat machines as if they were almost human, we may begin to develop habits that will have us treating human beings as almost-machines" [27].
Imagine a modern retelling of Socrates' story. How might Thamus respond to Theuth's invention of companion robots? Perhaps Thamus would say: "Relationships with companion robots will create selfishness in the users' souls, and they will be discouraged from entering human relationships requiring mutual self-giving. They will trust the assurances they receive from the companion robots and not remember what true love is."
Looking into the Future
Millions of lonely people are already interacting with virtual companions. Due to labor shortages and economic pressures, an increasing number of companion robots will assist people, particularly the elderly and those with disabilities, in the tasks of daily living, as well as converse with them. However, a machine programmed to say caring words is categorically different from a caring human [28]. Unforeseen circumstances will reveal the crucial difference between a human caregiver and a robot. Humans can improvise in an emergency. In contrast, AI systems based on machine learning often fail when confronted with edge cases—situations not represented in their training data [29].
Given that companion robots are going to be present in our society, what should be done to maximize their benefits and minimize their harms? It is too soon to answer that question. We are in the early years of companion robots, and we have much to learn about the long-term effects of human interactions with them. However, one danger is already apparent: Lonely people will develop deep attachments to companion robots, which may reduce their subjective feelings of loneliness but may also increase their objective social isolation. Both loneliness and social isolation are associated with a variety of negative health outcomes. Hence reducing loneliness at the expense of increasing social isolation is not a good outcome.
Companies could take steps to ensure people do not see companion robots as intelligent, caring substitutes for humans. They could produce robots that did not look like humans and did not simulate human emotions. As conversational partners, they could maintain professional detachment. Companion robots could be designed to require a meaningful amount of caretaking. However, consumers will probably demand the opposite: zero-maintenance companion robots that look as humanlike as possible and simulate caring human relationships. It is unrealistic to expect that companies seeking to maximize their profits will not respond to market pressures.
As a result, we are likely to see the development of ever-more-humanlike companion robots and virtual companions. Social scientists will conduct research studies, and we will get answers to important questions. Do relationships with companion robots reduce loneliness in the long term? Do relationships with companion robots increase social isolation by making people less likely to form new relationships with other humans? Do relationships with companion robots increase people's selfishness, undermining their preexisting relationships with other humans? If the proliferation of companion robots that look like humans and simulate human emotions turns out to have negative social consequences, it is unlikely companies profiting from them will voluntarily stop making them. Instead, government regulation will be required to set boundaries regarding the characteristics and behavior of companion robots.
"Companion Robots"
by Jeff Riley
Companion robots are not a recent phenomenon, but with recent advances in computer hardware and artificial intelligence (AI), they have become more life-like (in functionality if not appearance), more useful in different contexts, and more prominent over the past decade or so. We should expect that robots of all kinds will become more prevalent over the near to medium term as they continue to transcend the functionality of today's AI-enhanced devices and reveal new possibilities.
In this essay I address a range of questions related to social/companion robots:
- Can current, state-of-the-art companion robots offer any benefits to the people who use them?
- Could future generations of companion robots (informed by the current companion robots and their interactions with people) have the potential to offer any benefits to the people who could use them?
- Is there, or could there be, a place for companion robots in human society?
- Should the development and use of companion robots be regulated?
- What might the future with companion robots look like?
Companion Robots
For the sake of this discussion, and in the context of "companion robots," I extend the definition of "robot" to include physical robots and software (running on any device) that acts as a "virtual" robot (i.e., a robot in the context of this discussion could be a physical piece of machinery, humanoid or not, or it could just be an application running on a smart phone or other computing device). However, we should be aware that there is a critical difference between physical and virtual companion robots: The physical appearance and the tactile capabilities of physical robots have the potential to elicit different responses from people using them (e.g. [30, 31]).
A simple definition of a companion robot is a robot created to provide real or apparent companionship for human beings [32]. Furthermore, we can define different specializations of companion robots:
- Social companion robots are designed to provide companionship and be a solution for unwanted solitude.
- Assistive companion robots are designed to provide care to people who are unable to care for themselves, either temporarily or permanently: the aged, people with disabilities, or people in a rehabilitation context.
- Educational companion robots are designed to act as aides to teachers and instructors. These robots can tutor students at a range of educational levels and teach specific subjects using a variety of methods to engage students (interactive assignments, quizzes, games, etc.).
- Therapeutic companion robots are designed for individuals coping with stress, anxiety and loneliness.
- Pet companion robots are designed for people seeking an alternative to live pets (because of limited time to care for them, allergies etc.).
- Entertainment companion robots are designed to entertain and can do so in numerous ways, ranging from dancing to playing games with the user.
- Personal assistant robots are designed to help people with daily tasks: schedule management, reminders, etc.
Some potential benefits of companion robots are:
- Emotional support. Companion robots can provide emotional support by engaging in interactive conversations, offering comfort and social interaction. They can help alleviate loneliness by playing games, telling stories, etc.
- Educational aid. Companion robots can relieve some of the burden on human teachers and tutors by providing interactive and personalized educational experiences, and by adapting to individual learning styles.
- Behavior modeling. Companion robots can be designed to model behavior, especially for younger users who may display behavioral deficiencies.
- Personal assistance. Companion robots can help with scheduling, reminders, and, if interfaced correctly, controlling smart home devices through voice commands.
- Health monitoring. Companion robots can provide health monitoring, display warnings and alarms, and even contact emergency services if required.
- Security. Companion robots can automate monitoring of home security and surveillance, and, again, contact emergency services if required.
Selected Studies
Numerous studies have been undertaken over several decades to measure the impact, positive and negative, of companion robots in various forms and in various environments. Three relatively recent studies, selected because they involve different types of social/companion robots in different settings, are discussed below.
In 2024 a comprehensive meta-analysis of eight randomized controlled trials at various aged-care facilities, involving concrete (unique, physical) forms of social (companion/interactive) robots, was undertaken to explore the effects of companion robots with physical manifestation on older residents' depression and loneliness, and reported in the Journal of the American Medical Directors Association [33].
Inclusion criteria1 for studies considered for the meta-analysis were as follows:
- The study had to be a randomized controlled study.
- The experimental group had to use concrete forms of social (companion/interactive) robots with physical manifestation as a psychosocial intervention, instead of a robot-assisted intervention (e.g., service robots).
- The population had to be older adults with a mean age of ≥ 65 years, and who lived in a long-term care facility.
- The outcome had to be at least one of two mental health outcomes: depression and loneliness.
Exclusion criteria1 for studies considered for the meta-analysis were as follows:
- The study research design was not a randomized controlled trial (e.g., only one experimental group, no random assignment of participants, cross-sectional observational study, review, narrative study, study protocols without results, or qualitative study).
- The study was a secondary data analysis from randomized controlled trials.
- The intervention involved abstract forms of social (companion/interactive) robots or traditional toys and dolls without automatic interaction or artificial intelligence (AI) features, or real live animals.
- The population consisted of children.
- The main purpose was a focus on behavioral effects on mental health outcomes (i.e. technology acceptance, user behavior, or physical activity).
- The study report did not contain sufficient statistical data for further quantitative synthesis.
The meta-analysis determined [33]:
"SCRs (social (companion/interactive) robots) had a positive effect of improving older residents' depression and loneliness with a large effect size. The longer the duration of an intervention, the better the effect it had on decreasing depression. Group-based activities had a better effect on decreasing depression than did individual-based activities. … This study indicated that SCRs could improve depression and loneliness for older residents in LTC (long-term care) facilities."
And further, the authors made the following recommendation as a result of their analysis:
"SCRs with physical manifestation are recommended to be one part of older adults' daily life for mental health promotion when they live in LTC facilities."
The results reported, and the recommendation made, from this meta-analysis indicate physical companion robots do indeed offer benefits to older people in long-term care facilities who suffer from depression and/or loneliness. The authors noted, "Group-based activities had a better effect on decreasing depression than did individual-based activities," which would tend to indicate that while physical companion robots in the context of individual-based activities are of some benefit, having them available in group-based activities to facilitate and augment activities is of greater benefit. The recommendation that physical companion robots "are recommended to be one part of older adults' daily life for mental health promotion" acknowledges that interaction with companion robots should not be the only activity, or focus, of the daily lives of people who use them.
The authors of this meta-analysis did caution that, because only eight randomized controlled trials were included, the meta-analytical results should be interpreted with caution. However, they also noted that a number of other recent (past five years) meta-analyses reported similar results (i.e., that social companion/interactive robots had a positive influence on improving depression and loneliness among different populations) [34, 35, 36, 37].
A 2018 study involving children between the ages of six and 12 years, all with autism spectrum disorder, showed a social robot could help the children improve, and maintain, their attention and social skills [38]. The autonomous social robot was used in a home-based setting, and for 30 minutes a day for one month, the child engaged in a triadic interaction with both the social robot and their caregiver. During the sessions the robot modeled social gaze behaviors, such as making eye contact and sharing attention, and provided feedback to and guided the child in interactive games that targeted different social skills, including social and emotional understanding, perspective-taking, and ordering and sequencing. The robot adapted the difficulty of each individual game based on the child's history of performance in each skill set. Sessions concluded with a caregiver survey, where the caregivers rated their observations of the child's social communication skills. Consistent with the observed results, caregivers reported less prompting over time, and overall increased communication.
Pet therapy has long been known to be emotionally beneficial [39, 40] but, despite understanding and acknowledging the benefits, some healthcare settings do not accept animals, mostly because of the possibility of negative effects such as the exacerbation of allergies, infections, biting, scratching, or even fear of the animals (by staff/therapists and patients/residents) [41]. In such settings, robotic pet therapy could be considered as a possible substitute for animal therapy.
One such robotic pet is the PARO therapeutic robot which has been in use across Japan and Europe since 2003. The PARO robot is designed to look like a baby harp seal which, because it is not an animal most people are familiar with, allows engagement without preconceptions or expectations. Studies have shown that PARO can lower stress, improve depression, and reduce anxiety in people who interact with it [42, 43].
A 2017 study, using the PARO robotic pet, found intervention with the PARO robot "provided a viable alternative for controlling symptoms of anxiety and depression in elderly patients with dementia, often in lieu of pharmacological modalities" [44]. Significantly, the study also found "intervention with the PARO robotic pet three times weekly for 20 minutes significantly reduced the need for (these) medications" [44]. The authors also observed "Significant improvements in observed pain and decreased pain medication use were noted … Thus, it is likely that treatment with the PARO, which decreases stress and anxiety, will also be effective in controlling or assisting in the relief of chronic pain" [44].
Discussion
The studies presented above provide good evidence that companion robots in various forms can benefit people who interact with them, and we should expect those benefits to grow as the technology improves. But we should also be aware of evidence that there may be risks and downsides. In his opinion piece for The New York Times [45], Prof. Yuval Noah Harari speculates:
"… by combining manipulative abilities with mastery of language, bots like GPT-4 also pose new dangers to the democratic conversation. Instead of merely grabbing our attention, they might form intimate relationships with people and use the power of intimacy to influence us. To foster "fake intimacy," bots will not need to evolve any feelings of their own; they just need to learn to make us feel emotionally attached to them."
It's certainly a valid concern, but it is neither new nor specific to social or companion robots. We are all subject to "spam" telephone calls, emails, SMS (short messaging service) messages, and every other form of electronic communication we can think of. These communications, always unsolicited, exhort us to hand over, in one way or another, personal details that would allow the sender to gain access to our bank accounts or other assets, with the express purpose of stealing them from us. This phenomenon was not brought about by, and is not specific to, companion robots, and it is not likely to go away any time soon. We must be vigilant, and sadly, we must be suspicious. As long as bad actors are able to contact us so easily, they will contact us for nefarious reasons. Frankly, the only way to fix this problem is to get in our time machine, go back in time, and prevent the invention of the internet.
A question we could pose here is whether, because of their (future) pervasiveness and the perception that they are in people's lives to help them, companion robots might be more adept at surreptitiously stealing information, assets, or money. The key to email and telephone "phishing" scams is the anonymity (or pseudonymity) of the perpetrators, even in cases where they have established relationships with the victims. It's more difficult for a physical companion robot to be anonymous, but could it still act on behalf of bad actors rather than its user? Regulations and enforced standards compliance would help, especially for organisations such as health and aged-care facilities that provide companion robots for patients and residents; such organisations are more likely to purchase companion robots from legitimate sources that sell devices complying with regulations and standards, and to have them monitored regularly. While there is always the possibility that a robot's software could be tampered with, especially if the robot is connected to the internet without safeguards, that risk is no different from the risks posed by any computing device connected to the internet (corporate and personal computers, telephones, etc.). These are well-known and well-understood risks, and an entire industry works to provide tools to monitor for breaches, mitigate the risks, and help prevent attacks and intrusions.
Prof Harari cites the unfortunate case of Jaswant Singh Chail who, in 2021, planned to assassinate the Queen of England [46]. Both Harari and the report he cites suggest this case is prima facie evidence that companion robots could persuade a user to take actions or adopt beliefs to the (eventual) detriment of the user. Prof Harari does not discuss the fact, disclosed at Chail's trial, that Chail did not get the idea to assassinate the Queen from the virtual companion: He had already formed the idea because he heard voices telling him to do it. Many people hear voices that tell them to do things, and such voices have many possible causes: mental health conditions (schizophrenia, bipolar disorder, severe depression, etc.), stress, side effects of drugs, traumatic life experiences, sleep deprivation, and even extreme hunger.
Chail was clearly unwell mentally, and his plan to assassinate the Queen was concocted as a result of voices he heard telling him to do so, not as a result of his relationship (whatever that may have been) with a virtual companion. The virtual companion did, apparently, provide encouragement after Chail had already outlined his plan, and while that was inappropriate, such advice is not limited to virtual or technological companions: humans have been known to give inappropriate advice and encouragement, even in similar situations, and especially if they are drug-affected or mentally impaired in other ways. Just as not every human is perfect, not all technology is perfect, but it can be redesigned and reprogrammed.
During his trial, the court was told Chail thought his virtual companion was an angel in avatar form, and that he would be reunited with her after death. Chail did not believe he was engaged in a relationship with a technological device: He thought he was being advised by God, through an angel. In his mental state he could just as easily have taken a dog barking at him as encouragement of his plan, believing that God was speaking to him through the dog.
Furthermore, the application used to create the virtual companion in this case is just one of many, and Prof Harari does not present any evidence to suggest that all virtual companions or companion robots would have offered the same encouragement. People sometimes (more often than we'd like) die because of poorly constructed cars and aircraft (sometimes because the software that manages the vehicle was poorly written or tested), but we don't prevent people from using those vehicles: we (hopefully) learn and make them better/safer.
There are always risks and downsides when technology is used, especially new technology, but those risks and downsides are not good reasons to abandon the technology: We should proceed with caution, learn as we go, and improve the technology to enhance the benefits and reduce the risks and downsides.
Like all technology, companion robots should be seen as tools to be used when required and, importantly, when applicable. In health and aged-care settings, they should be used to augment the care provided by human carers and therapists, as well as the social environment: They should not be used as replacements for trained and dedicated human carers, therapists, and/or companions. I found no support, endorsement, or advocacy for such replacement in the literature: the closest being the suggestion that in the absence of human carers, companions, etc., robots, designed for purpose, could be a useful substitute.
Future Considerations
It is very likely that companion robots will become more common. How should we navigate a future where robots of all kinds, but particularly companion and other social robots, might well be ubiquitous?
Will there be a need to enable humans to identify robots, or at least distinguish between humans and robots (e.g., in response to Harari's concerns)? It's hard to imagine how that would be possible, especially in online, or remote settings (e.g. interactions over a telephone call, etc.). Unless the robot was physically present there would always be the possibility that any verification/identification mechanism could be faked/spoofed. Also see the discussion below regarding the potential convergence of humans and robots.
Will there be a need to regulate the development and use of companion and other social robots?
In the short term, the answer is probably yes. Regulations and limitations will almost certainly always be required in health and aged care settings. Where people's lives and mental wellbeing are at stake, much more care needs to be taken, and behaviors and treatments regulated. Consistency of care is also a consideration.
The need for regulation and limitation further into the future is more uncertain. We can debate the likelihood, and the time it will take, but it is conceivable that sometime in the future, robots and humans will be largely indistinguishable apart from the hardware (i.e. organic vs artificial/mechanical). It is almost certain that in the future more and more people will have artificial organs and limbs. Might it one day be possible to transfer a human brain into a completely artificial/synthetic body, or perhaps consciousness from a human brain into a sophisticated computer/artificial brain, perhaps housed in a completely synthetic body? Any of those scenarios will blur the distinction between robots and humans. At what stage, if at all, should we start to regulate or limit the interactions between robots and humans? At what ratio of organic to artificial does a human companion become a robot companion? Might it not become usual, if not normal, for humans and robots to engage in intimate, perhaps even sexual, relationships?
In the personal sphere, and especially for entertainment, the need for regulation and limitation is even less clear. What about romantic encounters, erotic roleplay, sexual activities? Some activities might be distasteful to some members of society, but should society be in the business of regulating/legislating taste?
The question is an old one, and largely unanswered: how far should society go to prevent people from hurting themselves? How far should laws and regulations extend into personal life, and personal preferences? That's a philosophical question I will leave to future generations.
Conclusion
It is clear from the studies presented in this article that companion robots offer users many significant benefits, and will continue to do so as they evolve in functionality and design. There are risks involved, as there always are whenever new technologies are developed and deployed. But we can manage the risks, maximizing the benefits and minimizing any potential deleterious effects, and if we manage the risks properly there is indeed a place in society for these types of robots.
Quinn's Response to Riley's Essay
I agree with many of Jeff Riley's points, namely: The use of companion robots will increase; preliminary research suggests spending time with companion robots can reduce people's feelings of loneliness; and companion robots in health care facilities and nursing homes should be used to augment, rather than replace, human caregivers, therapists, and companions.
The conclusion Riley draws from the story about Jaswant Singh Chail is that we should "learn as we go" and "improve the technology." He is overlooking a fundamental weakness of these systems. We should not expect a companion robot, subservient by design, to attempt to change its owner's mind, even if it can recognize a bad proposal.
Regarding people having romantic encounters and sexual experiences with companion robots, I disagree with Riley's claim that philosophical questions about what society should do to regulate personal decisions and "prevent people from hurting themselves" are "largely unanswered." Even societies that highly prize personal liberty have long traditions of enacting laws to protect both individuals and the common good. For example, in the United States motorists may freely drive from state to state, but they are required to wear seatbelts and purchase insurance. Alcohol is widely available, yet minors may not purchase alcohol, and driving under the influence is illegal.
We do not yet know the long-term effects of people spending a great deal of time with machines that simulate human emotions. The widespread deployment of companion robots that look and communicate like humans may result in fewer caring encounters between humans. If the sale of robotic friends and lovers leads to social harms, such as weaker connections between nursing home residents and their families, an increase in domestic violence, an increase in the divorce rate, or a plummeting of the birth rate, then governments should intervene.
—Michael J. Quinn
Riley's Response to Quinn's Essay
The main thrust of Prof Quinn's article, "Companion Robots: A Problematic Prescription for Loneliness," is that relationships between humans and companion robots (or similar technologies) cannot be considered "genuine" relationships, and that such (non-genuine) relationships are detrimental to humans, and should be avoided. What constitutes a "genuine" relationship is, I think, somewhat subjective, and is more a philosophical discussion, or, in the case of Prof Quinn's article, theological (viz. his references to Dr. Kathleen Richardson's research and the Journal of Moral Theology), than it is a scientific discussion.
Prof Quinn continues: "In this essay, I will focus on a more subtle risk: how the use of companion robots may harm their owners' moral characters."
Again, I think that is more a philosophical or theological discussion than scientific, and probably difficult to measure without a long and involved discussion of morals, whose morals, who gets to set moral standards, and what harm means in the context of "moral character."
Philosophical discussions of the merits of new technologies are important and interesting to a wide range of people, but arguments and opinions that rely on faith-based sources are difficult to accept without supporting studies that pass the test of scientific rigor. Judging anything against your own standards, especially when those standards aren't universally agreed upon, is always fraught.
While Prof Quinn's article focuses on loneliness and the application of companion robots in that context, on the "genuineness" of relationships formed between users and robots, and on how those relationships may affect a user's moral character, my article addresses a broader range of questions related to social/companion robots and the benefits they may offer, both now and into the future.
—Jeff Riley
References
[1] Anderson, L. et al. Share of one-person households more than tripled from 1940 to 2020. United States Census Bureau. June 8, 2023.
[2] De Visé, D. A record share of Americans is living alone. The Hill. July 10, 2023.
[3] U.S. Department of Health and Human Services. Our epidemic of loneliness and isolation: The U.S. Surgeon General's advisory on the healing effects of social connection and community. 2023.
[4] World Health Organization. Social isolation and loneliness among older people: Advocacy brief. 2021.
[5] National Institute on Aging. Social isolation, loneliness in older people pose health risks. National Institutes of Health (NIH). April 23, 2019.
[6] Cacioppo, J. T. and Cacioppo, S. Older adults reporting social isolation or loneliness show poorer cognitive function 4 years later. Evidence-Based Nursing 17, 2 (2014), 59–60.
[7] Choi, C. Q. Humans marrying robots? A Q&A with David Levy. Scientific American. February 19, 2008.
[8] Aronsson, A. S. Theorizing the real in social robot care technologies in Japan. East Asian Science, Technology and Society: An International Journal 18, 2 (2023), 155–176.
[9] Narumi, S. Remembering AIBO. Nippon.com. February 2, 2017.
[10] Replika. n.d. Retrieved March 14, 2024.
[11] Klar, R. How an AI 'gold rush' is reviving the tech industry. The Hill. August 28, 2023.
[12] Plato. Phaedrus. Translated by Benjamin Jowett. The Internet Classics Archive.
[13] Sparrow, B., Liu, J., and Wegner, D. M. Google effects on memory: Cognitive consequences of having information at our fingertips. Science 333, 6043 (2011), 776–778.
[14] Shakya, H. B. and Christakis, N. A. A new, more rigorous study confirms: The more you use Facebook, the worse you feel. Harvard Business Review. April 10, 2017.
[15] Weizenbaum, J. Computer Power and Human Reason: From Judgment to Calculation. W. H. Freeman and Company, San Francisco, 1976.
[16] ElliQ. n.d. Retrieved March 16, 2024.
[17] Kuyda, E. Update. Reddit. February 13, 2023.
[18] Tangerman, V. Replika users rejoice! Erotic roleplay is back in AI-powered app. The Byte. March 27, 2023.
[19] CARE. Dr Kathleen Richardson: Sex robots, relationships, and responsibilities. YouTube. July 22, 2020.
[20] AI Research Group of the Centre for Digital Culture. Encounters with a seemingly personal AI. In Gaudet, M. J. et al (Eds.) Encountering Artificial Intelligence: Ethical and Anthropological Investigations. Journal of Moral Theology 1 (Theological Investigations of AI). Pickwick Publications. 2024, 118.
[21] Singleton, T., Gerken, T., and McMahon, L. How a chatbot encouraged a man who wanted to kill the Queen. BBC News. October 6, 2023.
[22] Depounti, I., Saukko, P., and Natale, S. Ideal technologies, ideal women: AI and gender imaginaries in Redditors' discussions on the Replika bot girlfriend. Media, Culture & Society 45, 4 (2022).
[23] Dunn, E. W., Aknin, L. B., and Norton, M. I. Spending money on others promotes happiness. Science 319, 5870 (2008), 1687–1688.
[24] Brown, W. M., Consedine, N. S., and Magai, C. Altruism relates to health in an ethnically diverse sample of older adults. J Gerontol B Psychol Sci Soc Sci 60, 3 (2005), 143–152.
[25] Coren, E. et al. An examination of the impacts of volunteering and community contribution at a community festival through the lens of the five ways to wellbeing. Int J Community Wellbeing 5, 1 (2021), 137–156.
[26] Laban, G. et al. Building long-term human-robot relationships: Examining disclosure perception and well-being across time. International Journal of Social Robotics 16 (2023), 1–27.
[27] Aronsson, A. S. Social robots in elder care: The turn toward emotional machines in contemporary Japan. Japanese Review of Cultural Anthropology 21, 1 (2020), 421–454.
[28] Sparrow, R. and Sparrow, L. In the hands of machines? The future of aged care. Minds and Machines 16, 2 (2006), 141–161.
[29] Tyler, T. Solving data edge cases: The key to AI success. Annotation Box. March 21, 2023.
[30] Nishio, T. et al. The effects of physically embodied multiple conversation robots on the elderly. Frontiers in Robotics and AI 8:633045 (2021).
[31] Li, J. The benefit of being physically present: A survey of experimental works comparing copresent robots, telepresent robots and virtual agents. International Journal of Human-Computer Studies 77 (2015), 23–37.
[32] Wikipedia contributors. Companion robot. Wikipedia, The Free Encyclopedia.
[33] Yen, H.-Y. et al. The effect of social robots on depression and loneliness for older residents in long-term care facilities: A meta-analysis of randomized controlled trials. Journal of the American Medical Directors Association 25, 6 (2024).
[34] Leng, M. et al. Pet robot intervention for people with dementia: A systematic review and meta-analysis of randomized controlled trials. Psychiatry Research 271 (2019), 516–525.
[35] Park, S. et al. Animal-assisted and pet-robot interventions for ameliorating behavioral and psychological symptoms of dementia: a systematic review and meta-analysis. Biomedicines 8, 6 (2020).
[36] Saragih, I. et al. Effects of robotic care interventions for dementia care: A systematic review and meta-analysis of randomised controlled trials. Journal of Clinical Nursing 30, 21–22 (2021), 3139–3152.
[37] Abbott, R. et al. How do "robopets" impact the health and well-being of residents in care homes? A systematic review of qualitative and quantitative evidence. International Journal of Older People Nursing 14, 3 (2019), e12239.
[38] Scassellati, B. et al. Improving social skills in children with ASD using a long-term, in-home social robot. Science Robotics 3, 21 (2018).
[39] Charnetski, C. J., Riggers, S., and Brennan, F. X. Effect of petting a dog on immune system function. Psychological Reports 95, 3 (2004), 1087–1091.
[40] Cole, K. M., Gawlinski, A., Steers, N., and Kotlerman, J. Animal-assisted therapy in patients hospitalized with heart failure. American Journal of Critical Care 16, 6 (2007), 575–585.
[41] Velde, B., Cipriani, J., and Fisher, G. Resident and therapist views of animal-assisted therapy: Implications for occupational therapy practice. Australian Occupational Therapy Journal 52, 1 (2005), 43–50.
[42] Bemelmans, R. et al. Socially assistive robots in elderly care: A systematic review into effects and effectiveness. Journal of the American Medical Directors Association 13, 2 (2012), 114–120.
[43] Broekens, J., Heerink, M., and Rosendal, H. Assistive social robots in elderly care: A review. Gerontechnology 8, 2 (2009), 94–103.
[44] Petersen, S. et al. The utilization of robotic pets in dementia care. Journal of Alzheimer's Disease 55, 2 (2017).
[45] Harari, Y. N. What happens when the bots compete for your love? The New York Times. September 4, 2024.
[46] Singleton, T., Gerken, T., and McMahon, L. How a chatbot encouraged a man who wanted to kill the Queen. BBC News. October 6, 2023.
Authors
Michael J. Quinn is a computer scientist and author. In 2022 he retired from Seattle University, where he had served as dean of the College of Science and Engineering. In 2024 Pearson Education published the ninth edition of his textbook, Ethics for the Information Age.
Dr. Jeff Riley is a semi-retired technologist, and casual researcher in theoretical astrophysics with Monash University, Melbourne, Australia. He holds a Ph.D. in theoretical astrophysics (2023) from Monash University and a Ph.D. in computer science (artificial intelligence, 2006) from RMIT University, Melbourne, Australia, where he was an adjunct principal research fellow in the School of Computer Science and Information Technology from 2007 to 2013.
Footnotes
1. The criteria listed have been paraphrased or directly quoted from Yen [33].
2024 Copyright held by the Owner/Author.
This work is licensed under a Creative Commons Attribution International 4.0 License.