How do people view their relationship with AI: partner, servant, or master? [Guest]
How we perceive our relationship with Artificial Intelligence.
Hey, it’s Devansh 👋👋
Our chocolate milk cult has a lot of experts and prominent figures doing cool things. In the series Guests, I will invite these experts to come in and share their insights on various topics that they have studied/worked on. If you or someone you know has interesting ideas in Tech, AI, or any other fields, I would love to have you come on here and share your knowledge.
I put a lot of effort into creating work that is informative, useful, and independent from undue influence. If you’d like to support my writing, please consider becoming a paid subscriber to this newsletter. Doing so helps me put more effort into writing/research, reach more people, and supports my crippling chocolate milk addiction. Help me democratize the most important ideas in AI Research and Engineering to over 100K readers weekly.
PS- We follow a “pay what you can” model, which allows you to support within your means, and support my mission of providing high-quality education to everyone for less than the price of a cup of coffee. Check out this post for more details and to find a plan that works for you.
Riccardo Vocca is a research assistant and author exploring the social/relational dynamics between humans and AI. I am very interested in what he had to say for two reasons-
The AI-human relationship dynamic is not something that I know much about.
Originally, I was very skeptical of the growing speculation that more advanced technology would cause mass withdrawal into the digital world. I based my skepticism on two things- 1) I am a Gen Z kid who grew up with technology, and 2) while I can’t imagine my life without tech, most of the activities I enjoy are physical ones that would be very hard to simulate adequately without a lot of human experimentation, many failures, and serious cash investment. This makes me doubt the viability of any such endeavor. However, recent conversations with people (including some who seemed to form a quasi-parasocial bond with me) have made me more open to this concept. As such, this is an idea I want to learn more about, given the long-term socio-economic implications of people engaging ever more deeply with technology.
And that is what Riccardo is here to do. He looks at various research publications at the intersection of psychology and Tech to create well-researched pieces on exploring our relationship with our technology. This article is a very interesting exploration of how different people form differing relationships with their voice-controlled smart devices. I’m sure you will love it. For more of Riccardo’s writing, check out his newsletter- The Intelligent Friend.
As you read, think about my question for you: How would you react if your friend, child, or someone else close to you said that their best friend or new lover was a virtual assistant, AI avatar, or robot? Let me know your response after reading this piece.
Hi AI Made Simple readers!
I'm Riccardo Vocca, author of the newsletter The Intelligent Friend. My publication covers the relational aspects of AI: how we build relationships with Artificial Intelligence (and, in some cases, with other technologies), how we interact with it, what the consequences are from a psychological and social point of view, and how we behave as consumers.
My newsletter is based only on scientific papers. All the insights it provides derive from research: the reading and scrutiny of various studies, retold engagingly. I have written several issues on the topic: for example, I talked about how we can 'fall in love' with Alexa, how we can become friends with it (or ‘her’), how we can make risky decisions with AI, or even feel less uncomfortable when we buy an embarrassing product.
I would like to sincerely thank Devansh for the wonderful opportunity to write this issue. Enjoy the reading!

Alexa, will you marry me?
If you have Alexa, consider the last time you ‘talked to her’. Don't just think about what you asked for, but above all how you did it. Were you kind? Were you tough? Or did you not pay the slightest attention to it? Even if we don't realize it, when we interact with devices like Alexa or Google Home, and increasingly with chatbots like ChatGPT-4o, Gemini, or Claude, we think about ourselves in particular ways and relate to these systems differently than we do to other people.
To give you a concrete idea, Amazon reported that half a million people told Alexa they loved her [1], and a good portion of those even said they would marry her [2]. Alexa, Google Home, and similar devices fall into the category of so-called 'voice-controlled smart assistants' (VCSAs), i.e. smart devices that enable humans to interact and request tasks or services through verbal communication. It is precisely on these devices, and how we interact with them, that the researchers of the paper I want to tell you about today focus. In their study, they tried to understand how people - therefore consumers - perceive their relationships with these 'housemates', in a study that opened many subsequent avenues of exploration for scholars.
In this regard, before delving into the study, two important things must be specified:
Although the study focuses specifically on VCSAs, I believe these insights are also very relevant to interactions with other forms of AI and GenAI, especially given the recent developments of ChatGPT-4o and voice-based interactions. It is no coincidence that several studies have applied taxonomies similar to this one, for example, to relationships with robots;
The second point I would like to underline is that, although it is interesting in itself to know how we relate to these tools, the results have concrete impacts on us as consumers, and are therefore valuable for anyone who deals with AI in general or wants to implement AI-based solutions in their company or organization.
With that said, let's dive a little deeper into the study.
The paper in a nutshell
Title: Servant, friend or master? The relationships users build with voice-controlled smart devices. Authors: Schweitzer et al. Year: 2019. Journal: Journal of Marketing Management.
‘You're not just a machine’
We have learned that we form relationships with different AI-based devices and tools. And the scholars, as mentioned, have tried to answer a question: what are these relationships? However, there is a missing piece. If it is true that by interacting we form relationships of different types, the first question to focus on is: why do we form relationships at all? What mechanisms underlie this dynamic?
There are different approaches and perspectives for answering this question. The authors of today's paper, however, focus on the broad concept of anthropomorphism: essentially, humans' tendency to perceive humanlike agents in nonhuman entities and events, such as seeing faces in clouds or attributing emotions to pets.
Anthropomorphism extends to many consumer products: individuals often attribute human characteristics to gadgets, like wall clocks. This dynamic has already produced several important results in terms of effects on consumers: evidence shows that anthropomorphized products can enhance consumer preference [3], make products appear more vivid [4], and increase their perceived value [5]. However, whenever we discuss this topic there is one concept that always comes back and must be taken into account: the Uncanny Valley, which captures how people perceive anthropomorphism, i.e. whether they react positively or negatively to how similar an AI is to a human.
To illustrate this, let's consider a practical example. Imagine interacting with a self-checkout machine at the supermarket to pay for your groceries. It has a square shape and a small screen; nothing special, right? Now imagine that it increasingly takes on the form of a human, like a cashier who interacts with you. However similar it is to a cashier, you still perceive that it is a machine. Despite the ‘human touch’, your perception turns negative, because you sense something human but not ‘really’ human. This feeling of unease grows until it brings you to a point of discomfort: the so-called 'Uncanny Valley', a concept coined in 1970 [6][7] to describe the effects of an increasing degree of anthropomorphism on user perceptions. Now think about other, similar technologies: your smartphone, your watch, your ‘smart’ car. Our perceptions of their functionality and efficacy are influenced by how humanlike they appear while remaining, at their core, technology. In short, the Uncanny Valley shows clearly how different degrees of anthropomorphism can change our feelings and attitudes toward technologies and AI assistants.
Anthropomorphism is therefore one of the first dynamics that comes into play when we talk about building a relationship.
An extension of yourself
A second fundamental paradigm that the scholars use as a basis for this study is the so-called 'self-extension theory', which is very important in marketing. Basically (and simplifying), the more influence particularly valuable products have on you, the more you come to consider them extensions of yourself.
Belk (1988) [8] posits precisely that consumers perceive close others, as well as certain tangible and intangible possessions, as extensions of their self. In particular, the research tells us that self-extension is particularly likely when consumers have control over objects with relatively little autonomy. If this is true, then consumers may relate to anthropomorphized products either as others or as extensions of their self [9].
This passage is very important because it underlines once again how relational perspectives - in this case, the basic mechanisms through which relationships are formed, according to the study's authors - influence our behavior as consumers and our interactions with technologies and brands. However, if you think about some of the products you have purchased in the last year, you will notice that not all of them are equally important. Maybe you are a Disney fan, and therefore the Mickey Mouse t-shirt is important to you, but the checked sweater you received from an aunt is perhaps not one of the 'indispensable' possessions you would carefully preserve. From this everyday intuition we derive that products contribute to the extended self to varying degrees, with some being crucial and others less so.
This contribution also varies among different consumer personalities [10]. For instance, if products resemble their users, consumers evaluate them more positively when their anthropomorphized representation matches the consumer's gender [11]. This particular result, which I consider particularly relevant, suggests a tendency to perceive anthropomorphized smart devices as part of the extended self.
Therefore, in the authors' perspective, the role our anthropomorphized VCSAs play as extensions of ourselves has a very strong impact on the formation of relationships with these devices. Now that we have established this, we can go deeper into the question: what type of relationships do we perceive with Alexa, Google Home, and the rest (among which I would also include ChatGPT or Gemini)?
Partner, Servant, or Master?
In the paper we are exploring, participants interacted with different VCSAs (including Google Home and Siri), and their experiences were documented over three weeks. They were also interviewed extensively by the researchers to derive different insights. Furthermore - and I would like to highlight this - the scholars report that 'during a post-experience interview, the participants were asked to describe the bond between themselves and their VCSA': the participants (students) therefore also reported emotional aspects of their interactions with the VCSAs.
From the results of the study, three different relationships emerge in consumer perceptions, which I find really interesting:
One group depicted the relationship as a servant-master dynamic, with the VCSA playing the role of a subordinate assistant;
Another group saw the VCSA as the dominant entity, effectively making the user a servant to its commands;
A third group described their relationship with the VCSA as one of equal partners.
Servant
To illustrate them, let's consider once again a concrete case. Imagine all the times during the day you have asked Siri to add a reminder, set an alarm, or send a message or an email. Think carefully about that interaction. How did you feel? You likely said 'thank you' to Siri in an almost friendly way, and smiled slightly at the completed task, as if you had a real personal assistant at your side (a bit like in the film 'The Devil Wears Prada'). This is exactly the first type of relationship that emerges from the study: young consumers frequently envisioned their VCSA as ‘a servant that helps consumers realize their tasks’.
From a more emotional point of view, they anthropomorphized the VCSA as a friendly, helpful, and reliable figure, akin to a professional secretary: polite, somewhat submissive, and always in the background (a bit like Anne Hathaway in the film I just mentioned). This perspective stemmed from the VCSA’s nature of responding to commands without initiating actions, making it seem dependent on the user's instructions. However, what I found most interesting, beyond the depiction of this AI as a real personal assistant, is how participants perceived these interactions.
In fact, the authors report that these users found interacting with the VCSA straightforward and beneficial, appreciating its role in enhancing their capabilities and information searches. They viewed the VCSA as an empowering tool, extending their abilities and acting as a digital extension of themselves (and here comes the theme of self-extension).
Master
However, if one subset of participants saw Siri, Alexa, and the others as subordinate assistants, some perceived exactly the opposite. This underlines once again the importance of digging deeply into consumers' perceptions: trying to understand what they fear, what they value, and what turns them away from an interaction, while taking into account the different traits and characteristics of the people you are observing or studying. A contrasting perspective was in fact held by those who viewed the VCSA as a master, with themselves as servants bound by its rules. These users described the relationship as a reversal of roles, with the VCSA being unpredictable and untrustworthy. In short, in this case we are the AI's secretaries: despite often initiating the interactions, we submit to it and stand ready to help it in many possible ways.
Above all, and I think this deserves particular attention, they struggled to anthropomorphize the VCSA positively, sometimes comparing it to a ‘mentally impaired old man’. Their interactions were often negative, marked by impatience and frustration, as they felt stuck in unproductive exchanges and failed to achieve their goals. According to the authors, these sensations can be traced back to the studies of Hoffman and Novak (2018) and to the broader concern of losing control to automated systems, suggesting a need for user-friendly overrides to prevent such disempowering dynamics.
If you have ever imagined robots or autonomous machines taking control on their own and acting independently of commands, and felt afraid of that, you now have a first clear reference for this phenomenon, illustrated by scientific evidence.
Partner
Finally, there is the group that I think is the most interesting of all: those who perceived the relationship with the VCSA as a real partnership. These participants felt neither subordinate nor superior to the AI assistant but, rather, exactly on the same level. This anthropomorphization attributed a distinct personality to the VCSA, making it an appealing and congenial entity with its own 'life'.
These users invested time in nurturing the VCSA, finding amusement and affection in its occasional missteps. They valued the relationship-building aspect, striving to develop a positive rapport. However - and this is important to specify - initial excitement often turned to disappointment when the VCSA’s repetitive and unsuitable responses frustrated their efforts, leading to emotional reactions and a sense of wasted time.
Practical insights
This truly engaging study showed us that we build relationships with AI-based devices, entities, and tools through repeated interactions; that these relationships can be of different types depending on the individual; and that they can have a concrete effect on our attitudes, our actions, and, ultimately, our choices as users and consumers.
It will therefore come as no surprise that the 'Managerial implications' section of this paper is particularly rich. The insights, even just from a first reading of what I have told you, are many and applicable.
Encourage partner-like interactions: Use speech acts and algorithms that promote the perception of VCSAs as partners. For instance, integrating proactive messages like “Hi there, you haven't talked to me for a while” can help VCSAs stay relevant and engage users more effectively (a minimal sketch of such a nudge follows this list);
Balance imperfection for relaxed interactions: Design VCSAs to exhibit some level of imperfection. Users tend to be more relaxed and forgiving when VCSAs make occasional mistakes, and will continue using the device for other tasks even if it fails at specific ones;
Provide control and feedback mechanisms: Ensure that VCSAs include features that give users a sense of control and the ability to communicate successfully with their devices. This could involve implementing verbal equivalents of "close door" buttons to reinforce user control and confidence;
Tailor the user experience with emotional intelligence: Develop VCSAs that not only deliver accurate responses but also create a better, individually tailored user experience, incorporating emotional intelligence to enhance user engagement and satisfaction.
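To make the first suggestion concrete, here is a minimal sketch in Python of the kind of idle-user nudge described above. This is an illustration only, not code from the paper: the names (should_check_in, IDLE_THRESHOLD) and the seven-day threshold are hypothetical choices of mine, and a real VCSA would hook a rule like this into its own session logs and notification pipeline.

```python
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical threshold: how long a user may stay silent before
# the assistant sends a partner-style check-in message.
IDLE_THRESHOLD = timedelta(days=7)

# Modeled on the paper's example message.
CHECK_IN_MESSAGE = "Hi there, you haven't talked to me for a while."

def should_check_in(last_interaction: datetime,
                    now: Optional[datetime] = None) -> bool:
    """Return True if the user has been idle longer than IDLE_THRESHOLD."""
    now = now or datetime.now()
    return now - last_interaction > IDLE_THRESHOLD

if __name__ == "__main__":
    # Simulate a user whose last request was ten days ago.
    last_seen = datetime.now() - timedelta(days=10)
    if should_check_in(last_seen):
        print(CHECK_IN_MESSAGE)  # the proactive, partner-like nudge
```

The detail worth noticing is that the device initiates the contact: as the study found, it is precisely this lack of initiative that keeps a VCSA locked in the servant role.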
A closing note
I would like to personally thank you for reading this issue. I hope you enjoyed it, that it was interesting, and that it gave you insights or ideas for your work or what you do. The video presentation of ChatGPT-4o gave us a powerful glimpse of how relational perspectives linked to AI, GenAI, and new technologies are crucial topics that will increasingly attract the attention of companies and scholars.
If you would like to discover more about all these psychological, social, and relational aspects of Artificial Intelligence, I would be really happy to have you as a reader of The Intelligent Friend, the only newsletter focused on these topics and based solely on scientific papers. For each paper, I summarize the main results and give practical insights like the ones you have seen here.
Finally, I would like to thank Devansh once again for the wonderful opportunity to write for this newsletter, one of the ones I admire most. If you have anything to tell me, want to comment on this issue, or propose something, don't hesitate to write to me in the chat or on LinkedIn: I'm always happy to meet new people and exchange opinions!
I hope we'll see you soon,
Thank you!
Riccardo
1. Risley, J. (2015, November 17). One year after Amazon introduced Echo, half a million people have told Alexa, ‘I love you’. GeekWire. https://www.geekwire.com/2015/one-year-after-amazon-introduced-echo-half-a-million-people-have-told-alexa-i-love-you/
2. Murdoch, C. (2016, October 28). Want to marry Amazon’s Alexa? You’re not alone. Vocativ. https://www.vocativ.com/371706/amazon-alexa-propose-marriage/index.html
3. Wan, E. W., Chen, R. P., & Jin, L. (2017). Judging a book by its cover? The effect of anthropomorphism on product attribute processing and consumer preference. Journal of Consumer Research, 43(6), 1008–1030.
4. Noble, C. H., Bing, M. N., & Bogoviyeva, E. (2013). The effects of brand metaphors as design innovation: A test of congruency hypotheses. Journal of Product Innovation Management, 30(S1), 126–141.
5. Hart, P. M., Jones, S. R., & Royne, M. B. (2013). The human lens: How anthropomorphic reasoning varies by product complexity and enhances personal value. Journal of Marketing Management, 29(1–2), 105–121.
6. Mori, M. (1970). Bukimi no tani [The uncanny valley]. Energy, 7(4), 33–35.
7. Mori, M., MacDorman, K. F., & Kageki, N. (2012). The uncanny valley [from the field]. IEEE Robotics & Automation Magazine, 19(2), 98–100.
8. Belk, R. W. (1988). Possessions and the extended self. Journal of Consumer Research, 15(2), 139–168.
9. Belk, R. W. (2014). Digital consumption and the extended self. Journal of Marketing Management, 30(11–12), 1101–1118.
10. Lee, M. S. W., & Ahn, C. S. Y. (2016). Anti-consumption, materialism, and consumer well-being. Journal of Consumer Affairs, 50(1), 18–47.
11. Van den Hende, E. A., & Mugge, R. (2014). Investigating gender-schema congruity effects on consumers’ evaluation of anthropomorphized products. Psychology and Marketing, 31(4), 264–277.
If you liked this article and wish to share it, please refer to the following guidelines.
That is it for this piece. I appreciate your time. As always, if you’re interested in working with me or checking out my other work, my links will be at the end of this email/post. And if you found value in this write-up, I would appreciate you sharing it with more people. It is word-of-mouth referrals like yours that help me grow.
Reach out to me
Use the links below to check out my other content, learn more about tutoring, reach out to me about projects, or just to say hi.
Small Snippets about Tech, AI and Machine Learning over here
AI Newsletter- https://artificialintelligencemadesimple.substack.com/
My grandma’s favorite Tech Newsletter- https://codinginterviewsmadesimple.substack.com/
Check out my other articles on Medium: https://rb.gy/zn1aiu
My YouTube: https://rb.gy/88iwdd
Reach out to me on LinkedIn. Let’s connect: https://rb.gy/m5ok2y
My Instagram: https://rb.gy/gmvuy9
My Twitter: https://twitter.com/Machine01776819
What an interesting article, thank you for writing it.
I've never considered the AI-as-dominant relationship, but from your observations my parents certainly fit into this category when interacting with systems. It's not so much a knowledge/skill level (Dad was a sysadmin) as how they see the device's errors as being enforced on them, rather than as mistakes from a voice-operated system.
I've noticed in my own house the servant relationship with simple interactions like timers and music requests; it moves into partner for more qualitative interactions like randomised music requests and weather (as this can only ever be a guess). Particularly amusing is the response to 'will it rain today' of 'it's raining right now'; those little variations improve the interaction.
And my children have taken it a step further; they treat it as an initial source of all human knowledge and will start asking questions before using any other research source. The way they interact is respectful of the knowledge, but I would label it as partner because they exercise their own judgement over the responses (which are often missing or wrong). What surprises me is that, with a 50% positive answer rate to their strange queries, the kids still find it a more natural starting point for queries than using search/youtube/adults.
I like to view it this way:
Think of AI as your trusty sidekick in the workplace. When OpenAI released its now-famous ChatGPT large language model in late 2022, users flocked to test out its abilities. While it was fine for basic tasks, the real power came with fine-tuning models for specific functions and behaviors. Need to crunch numbers, analyze data, or automate repetitive tasks? AI has got your back. It's like having an extra pair of hands, freeing you up to focus on what truly matters: creativity, critical thinking, and complex problem-solving.