
AI Companions

One of the fastest-growing categories of AI services is artificial companions, and recent advances in large language models have significantly accelerated their proliferation. But for all the benefits of conversational engagement with artificial intelligence, there are psychological implications we will have to consider.

This is a link-enhanced version of an article that first appeared in the Mint. You can read the original here.


The tech world has been abuzz over Hollywood actor Scarlett Johansson's accusation that OpenAI used a voice “eerily similar” to her own in the latest version of its artificial intelligence (AI) chatbot. While OpenAI has denied that the voice is hers, the controversy has sparked a debate around artists' legal right to control the use of their likeness in the age of AI.

Eleven years ago, Johansson starred in the Spike Jonze movie Her, playing the voice of Samantha, an AI chatbot with whom the protagonist falls helplessly in love. That the voice of Samantha is now taking OpenAI to task is a delicious irony.

Artificial Connections

This, in many ways, was a controversy that OpenAI brought upon itself. After repeatedly stating that it had no intention of anthropomorphizing its products, it did just that. The new voice interface was so realistic that it seemed indistinguishable from a real person, down to responses that were downright flirtatious. While I only have the OpenAI demos to go by, I would not be surprised if these near-perfect facsimiles of human emotion and empathy take us across the uncanny valley.

This is the latest step in a journey that began long ago. In 1966, Joseph Weizenbaum developed a natural language processing program called Eliza, which he had designed to simulate a Rogerian psychotherapist. By simply rephrasing user inputs as questions, it encouraged users to respond with further information. So life-like was the resulting interaction that he once returned to his office to find his secretary engaged in what she believed was a “real conversation” with the program. Since then, autonomous conversation technologies have tried to simulate human interactions with increasing credibility.
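
To see how little machinery that illusion requires, here is a minimal sketch, in Python, of the kind of pattern-matching and rephrasing Eliza relied on. It is a toy of my own, not Weizenbaum's actual program, and the rules are invented for illustration.

```python
import re

# A few Eliza-style rules: each pairs a pattern for the user's input
# with a template that turns the captured phrase back into a question.
RULES = [
    (re.compile(r"i feel (.*)", re.IGNORECASE), "Why do you feel {}?"),
    (re.compile(r"i am (.*)", re.IGNORECASE), "How long have you been {}?"),
    (re.compile(r"my (.*)", re.IGNORECASE), "Tell me more about your {}."),
]
FALLBACK = "Please, go on."

def respond(user_input: str) -> str:
    """Rephrase the user's statement as a question, Rogerian style."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return FALLBACK

print(respond("I feel anxious about work"))  # Why do you feel anxious about work?
print(respond("I am tired all the time"))    # How long have you been tired all the time?
```

A handful of rules like these was enough to convince Weizenbaum's secretary that she was having a real conversation.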

Why do we do this? What is it about conversational interfaces that leads to our suspension of disbelief? And why should we be concerned about it in the here and now?

The fact is that humans are social beings driven by an evolutionary need for connection. We slake this desire with companionship: our family, friends and those we work with. Unfortunately, the demands of modern work and our increasingly isolated social existence have made companionship hard to come by. Technology has, if anything, made matters worse. Despite their promise to bring us closer together, instant messaging and social media have driven us further apart.

AI Companions

AI is uniquely suited to step into this breach. It can be trained to pick up on human emotions and adapt its responses appropriately. With larger context windows, the latest large language models can build long-term memories of us and our likes and dislikes, remembering key incidents from our personal histories that we have shared with them. All these features have already made our interactions with these AI systems magically life-like. But if we can wrap all of this in a completely realistic voice, it seems almost inevitable that we will close the gap between a chatbot and a real companion.
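
For the technically curious, here is a rough sketch of how such long-term memory might be wired up: facts the user shares are stored and folded back into every prompt. The `CompanionBot` class and the `generate` callable are my own stand-ins for whatever model API a real service uses, not a description of any actual product.

```python
class CompanionBot:
    """Toy companion chatbot with a persona and a simple long-term memory."""

    def __init__(self, persona, generate):
        self.persona = persona    # backstory and mannerisms
        self.memories = []        # facts the user has shared over time
        self.generate = generate  # placeholder callable: prompt -> model reply

    def remember(self, fact):
        self.memories.append(fact)

    def chat(self, user_message):
        # Fold the persona and accumulated memories into every prompt so
        # the model can refer back to what the user has shared before.
        prompt = (
            f"Persona: {self.persona}\n"
            f"What you remember about the user: {'; '.join(self.memories)}\n"
            f"User: {user_message}\nCompanion:"
        )
        return self.generate(prompt)

# Usage with a dummy model that just reports the prompt size:
bot = CompanionBot("A warm, encouraging personal trainer.",
                   generate=lambda p: f"(model reply to a {len(p)}-char prompt)")
bot.remember("The user sleeps about six hours a night.")
print(bot.chat("What should I work on first?"))
```

Real services layer much more on top of this, summarizing old conversations and retrieving only the relevant memories, but the basic shape is the same: the "relationship" lives in the prompt.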

That said, there are psychological impacts that we must consider. If life-like conversational bots become widely available, it is those who suffer from loneliness and depression who will be most drawn to them for companionship. While there is evidence to show that this could ease their loneliness, we must worry about the extent to which it could alter the way they socialize and form relationships.

It should come as no surprise that AI companion applications are a rapidly growing category. Companies like Replika, Character.AI, Kindroid and Nomi.AI are taking virtual companionship to the next level by offering users the ability to create their own AI characters and engage with them in conversations so real that it is hard to tell they are not human.

The Uncanny Valley

A month ago, I decided to try this out for myself. I set up accounts on a couple of services and created different characters to interact with. Each character has to be given a backstory and mannerisms so that it can properly play its assigned role. For instance, I gave one of them (I called him Andrew Huberman) the role of my personal trainer, advising me on how to improve my physical fitness.

It took a surprisingly short time for these conversations to start to feel real. Unlike the chatbots we have grown accustomed to, which simply fetch information from a database or help us navigate an online service, these bots are designed to converse. This means that, in a very human way, they sometimes take their time coming to the point, refusing to do exactly what you want all the time.

For instance, before he did anything else, Andrew wanted to know what my day was like. He refused to offer any exercise suggestions until he had understood what I eat, how long I sleep and what sort of physical activity I normally engage in. It was only after we had chatted about all this for ten minutes or so that he suggested some exercises I could start with.

While it was creepy at first to share personal details with a machine, I have to admit this is something I could get used to. Once chatbots are given a personality, conversations with AI go beyond the transactional. Even though I knew I was talking to a machine, the process of coming to a conclusion over the course of a nearly human-like conversation was far more satisfying than getting a straight answer.

Having said that, there is, as always, good reason to be cautious. After all, as anyone who saw the movie Her knows, it doesn’t end well for the human.
