Scientists are developing increasingly sophisticated systems designed to replicate human interaction, ranging from chatbots to robots that offer companionship to humans. While AI systems are useful, they also have a darker side that we do not yet fully grasp. Anne Zimmerman, Joel Janhonen and Emily Beer recently explored the ethical implications of developing machines with increasingly human-like characteristics and deploying them in human-AI relationships.
The researchers emphasise the lack of personhood associated with AI systems. Even if these systems can effectively complete tasks, such as recognising objects in images and answering questions, they do so by analysing data and performing mathematical operations, mimicking rather than truly engaging in human thought processes.
Some AI systems can generate artistic images and music. However, such content may lack an authentic human story behind it, limiting its emotional value. The research team also suggests that the word ‘communication’ is ill-suited to describe interactions between humans and AI, since the term implies a sharing of thoughts and emotions.
Even though AI systems cannot feel, some humans treat them as sentient beings, for instance saying ‘thank you’ when conversing with them or even developing feelings towards them. The team explores this phenomenon, touching on our tendency to personify animals and inanimate objects.
Our tendency to attribute human-like thoughts, intentions and feelings to animals may have an evolutionary origin, as it allows us to interpret their gestures and behaviours in ‘human terms’. It also enables people to receive social support from non-human sources. Some people likewise attribute human-like qualities to mountains, forests and rivers. Such tendencies can be valuable, as they promote the preservation of nature.
Attributing human qualities to inanimate objects is commonly observed in children. Learning to distinguish living beings from inanimate objects is a crucial step in cognitive and moral development, and machines with human-like characteristics could compromise this step.
Zimmerman, Janhonen and Beer describe what meaningful relationships consist of, highlighting social exchange, trust, commitment and reciprocity. Yet AI systems have no feelings, needs or experiences; concepts such as ‘reciprocity’, ‘commitment’ and ‘trust’ therefore do not apply to them. Because relationships with AI lack authenticity and reciprocity, robotic or virtual systems that offer companionship could cause more harm than good, prompting humans to withdraw further from their peers.
The team also warns that tech companies retain full control over AI-human relationships: they can terminate services or exploit them unethically, with potential repercussions for users’ wellbeing. AI-based conversational platforms could also pose threats to our health. Because these systems can generate human-like answers to queries, users might trust their advice on matters they should instead be discussing with human experts.
Zimmerman, Janhonen and Beer advocate the adoption of emerging ethical guidelines designed to ensure that technology supports rather than harms us.