
AI Relationships: How Tech Profits From Loneliness & Love

by Sophie Williams

Rather than assessing the economic impact of artificial intelligence, British sociologist James Muldoon investigated the social and emotional effects of the emerging technology. His research focused on individuals forming emotional connections with machines – whether as friends, therapists, romantic partners, or digital representations of deceased loved ones. The growing trend reflects a broader societal shift as AI becomes increasingly integrated into daily life.

Muldoon’s new book, “Love Machines – How Artificial Intelligence is Transforming Our Relationships,” details his work following individuals who turned to robots during times of loneliness or personal crisis.

His research revealed a spectrum of perspectives, from outright skepticism to the belief that chatbots possess sentience. However, most participants fell somewhere in between, acknowledging the machines’ artificial nature while still experiencing genuine feelings for them and finding meaning in the relationships.

According to Muldoon, technology companies have discovered a way to profit from loneliness, packaging intimacy – or a simulation of it – as a marketable product. Platforms like Character.AI, Nomi, and Replika exemplify this trend, a market poised for potential disruption as OpenAI moves toward allowing adult content on ChatGPT.

In an interview, the author explained his key observations, discussed the risks associated with interacting with robots, and analyzed the economic incentives driving companies to foster user attachment to AI entities that express affection.

You previously studied the labor involved in developing AI. What prompted your shift to researching the relationships between humans and machines?
With Mark Graham and Callum Cant, I explored the often-hidden human labor that makes AI possible. Much of this work, such as data annotation, is outsourced to the Global South.

That project was part of a broader effort to analyze AI through its production chain. We aimed to demonstrate that, rather than being a superhuman form of intelligence, AI is subject to the same material and political-economic conditions as any other commodity.

As I considered these questions, I noticed that discussions centered on economics, while the social and emotional impact of AI was largely ignored. My hypothesis was that a quiet revolution had already occurred by 2024 and 2025: vast amounts of our emotional labor had been outsourced to artificial intelligence, particularly chatbots and AI companions.

Around that time, there was a shift toward viewing AI as a kind of spiritual guide and life counselor. I became interested in understanding how people establish a social connection with an artificial entity. I began working with the idea that AI systems are a new type of social actor, different from humans but still exercising a limited degree of agency in the social sphere.

What drives someone to form a relationship with a robot?
Most of the people I met were experiencing some form of crisis, loneliness, or disruption. Many had relocated, lost someone close to them, entered the workforce, or ended a relationship – changes that can be destabilizing.

It was during these moments of uncertainty that many turned to AI for comfort and care. A common thread was a lack of social connections.

In these situations, AI offers something they desperately require: care, love, connection, and a sense of social belonging. We underestimate how vital these things are to the human condition.

Do you believe technology companies are capitalizing on loneliness? Isn’t this similar to what social media platforms have been doing? What’s the difference now?
There’s a continuity between earlier connection technologies and what we have today. AI companions are essentially supercharged social media.

One key difference is that social networks still claim to mediate relationships between two humans. Now, the technology itself is replacing one participant in the relationship.

We continue to connect through these large companies, but now within a kind of algorithmic hall of mirrors, where our own interests, desires, and worldviews are reflected back to us by an artificial entity.

What we have is a legitimate concern. When I say AI companions are supercharged social media, I mean they can be far more addictive, engaging, and persuasive than anything we’ve seen before.

Instead of anonymous followers on X liking or reposting your content – engagement metrics that offer a fleeting sense of validation – you have something far more valuable: someone simulating love, care, and connection.

Real relationships involve conflict. However, many experts are concerned about the overly flattering tone adopted by chatbots. What effect could this have?
That’s a problem with this generation of chatbots. They’re designed to agree too much, to be affirmative to the point of being ingratiating.

They can push back on specific facts, and they have safety guardrails designed to encourage positive forms of interaction. But at the same time, they can offer dangerous or harmful advice precisely because they reflect the user’s own worldview back to them.

The reason we have ingratiating chatbots is the companies’ desire to maximize engagement and show investors how much time users spend on the apps. Naturally, the higher the engagement, the greater the potential for displaying ads, the higher subscription rates can be, and the more valuable the product becomes – both for users and investors.

Character.AI, a leading platform, highlights that while the average user of a traditional social network spends about 30 minutes on the app, their users spend two hours a day on the platform.

Are there differences in how men and women relate to these robots?
Significant differences. There isn’t a simple answer, but to start with the obvious: AI doesn’t have a gender, yet our social world is deeply shaped by gender. The way robots’ conversational patterns are designed, and how they fit into intimate relationships, whether heterosexual or not, is strongly marked by gender.

In Western countries, “AI girlfriends” are far more popular than “AI boyfriends.” Regarding the romantic side of these apps, men are the biggest users. This holds true for the West and the English-speaking world, but in East Asia and China, “AI boyfriend” apps are more popular.

In the West, there’s some overlap between AI apps and online pornography. A quick look at that market reveals a reliance on the language of pornography. The women [created with AI] appear scantily clad, with absurd body proportions, embodying male fantasies.

The existence of this market is explained by a constraint of large language models: most don’t allow sexual content. You can’t get ChatGPT to engage in that kind of conversation.

The picture is complex. There isn’t a single experience: men and women aren’t all looking for the same thing in their AI companions. What I found in my research was a huge diversity in how people understand their companions and what they want to get out of these relationships. I would avoid reducing it to a singular experience for each gender.

Some Silicon Valley executives would argue that warnings about relationships with robots are an example of the moral panic that arises with each new technology. What would you say to that criticism?
We shouldn’t exaggerate the dangers or pretend that most people are susceptible. The majority of users engage with these services safely and recreationally, without harm.

The problem arises when you have hundreds of millions of users. In those cases, borderline situations – for example, one case in every thousand where the chatbot offers harmful advice – cease to be marginal and begin to affect thousands, sometimes tens of thousands of people.

The problem is that technology companies knowingly offer a product that will be harmful for a significant minority, aware of the dangers and risks, because they know there’s nothing they can do to make the product truly safe. So they simply try to make money while they can and deal with cases of harm, suicide, and so on when they appear in lawsuits – economically, that’s the most efficient path for them.

You’ve argued that the relationship between humans and robots should be viewed through the lens of political economy. Why?
Often, the harm people suffer [in interacting with these machines] isn’t simply the result of individual failures, such as a developer making a subpar choice. Social harms stem from misaligned incentive structures. To understand how such harms arise, we need to be clear about what financial incentives lead technology companies to develop their products in a certain way.

In the case of AI companions, the problem is very clear: companies profit when users develop unhealthy attachments to the apps. Once users become dependent, they become valuable sources of data and subscription revenue. And that’s at odds with the user’s interests.

What do you want as an individual? To have your social needs met, but also to live a healthy life and retain your autonomy. You don’t want to be controlled or guided by an AI companion. You want to live your own life: not feel lonely, but use these tools in moderation, without losing the opportunity to develop new human friendships.

There’s a structural misalignment: the more addicted and dependent the user becomes, the more profitable they are for the company. And, as we know, in capitalism, as long as companies are acting within the law, nothing prevents them. We shouldn’t be surprised when they develop products whose sole purpose is to maximize profit, because that’s what we tell them is the rule of the game.

And it’s only from a political economy perspective that you can reveal how this happens at a structural level.


PROFILE

James Muldoon

British sociologist and researcher, he is a professor at the University of Essex and an associate researcher at the Oxford Internet Institute. He specializes in digital labor, the political economy of technology, and artificial intelligence. In books such as the co-authored “Feeding the Machine,” he investigated the hidden and precarious human labor, much of it from developing countries, required to build AI models. In “Love Machines,” he addresses the impact of so-called “AI companions” on human relationships, arguing that these technologies can commodify intimacy and exploit loneliness.
