A new study suggests that using AI systems like ChatGPT to compose messages to friends may lead to decreased satisfaction and increased relationship insecurity if the AI use is discovered.
Using AI applications like ChatGPT to compose messages for friends may be a bad idea, as a new study found that people perceive AI-generated messages as requiring less effort on the part of the sender.
This perception, in turn, makes recipients of AI-generated messages feel less satisfied with their relationship and more insecure about where they stand with their friend.
The study involved 208 participants who were presented with various scenarios, such as experiencing burnout and needing support, having a conflict with a colleague and needing advice, or having an upcoming birthday. They were asked to write a message to a fictional friend, Taylor, describing their feelings.
We need you, Taylor
Participants were then told that Taylor had sent them a reply. Some of the replies were AI-assisted, some were edited by a member of a writing community, and the rest were written by Taylor alone.
The results showed that participants who received the AI-enhanced responses rated them as less appropriate than did those who received messages written only by Taylor. Their judgment had nothing to do with the quality of the response itself, but only with how much effort Taylor had supposedly put in.
In addition, receiving AI-enhanced messages made participants less satisfied with their relationship and more insecure about their friendship with Taylor. Interestingly, receiving messages written with help from another human, rather than just AI, had the same negative impact on the relationship.
Lead author Bingjie Liu, an assistant professor of communication at Ohio State University, has an explanation for this phenomenon: She believes that effort is crucial in a relationship, and using AI or third-party help to craft messages can be perceived as taking shortcuts and lacking sincerity.
As ChatGPT and similar services become more popular, Liu suspects that people may develop an unconscious behavior of scanning messages from their friends for AI patterns, something she calls a "Turing test in the mind." If they find out the message was written by an AI, it could damage relationships.
Some personal musings on AI messaging
I've had similar thoughts before about AI-generated response suggestions in messengers and emails. I have to admit that I use them, sometimes even to reply to my partner or friends. It can feel like a lazy shortcut that doesn't give the other person the attention they deserve at that moment.
Interestingly, for me, this feeling increases with the discrepancy between the AI-generated response and the response I would have given if I'd typed it myself. These can be very subtle differences, like using a particular emoticon or saying "Hey my friend" instead of just "Hi".
But if the AI suggestion is 100 percent correct, like "thank you" when all I really want to say is "thank you," that feeling goes away.
So it may not be the act of typing that matters, but whether your response is truly authentic to you and not just a "good enough" approximation of what you meant to say.
Because if an approximation feels strange to me as the sender, it may feel even stranger to the receiver. So the effort we owe each other may simply be making sure we say what we actually mean.