Early in December, Jacob Smith, a Ph.D. candidate in philosophy at UVA, presented at a regular meeting of the Philosophy Club. His presentation drew on a paper he has been developing as part of his graduate work in philosophy. The topic takes inspiration from the growing prevalence of loneliness among Americans and people around the world, even as technology becomes more advanced. Ironically, some tech companies are turning to AI programs designed to provide companionship of various forms. It is from this set of circumstances that Smith makes his philosophical inquiry.
Smith began the discussion by presenting his ultimate conclusion. Although AI companions are marketed as substitutes for lovers, close friends, or family members, Smith argues that such AI models are unsuitable objects of love and, likewise, that they cannot genuinely offer the user love or affection. This supports Smith’s claim that it is a mistake to use AI companions, as they could very well make loneliness worse. As for the other definitions crucial to Smith’s account, one might worry that discussing love only as an abstraction is not specific enough for his purposes. Although Smith did not offer a one-sentence definition of what love means in this context, he later identified three features of what love looks like in practice.
Smith went on to explain what an AI companion actually is: a large language model that can be trained to predict how to respond in a comforting, loving, or friendly way. In other words, AI companions merely predict what a “good” or appropriate response to the user’s input would be, and then form a reply drawn from a massive body of text data. Through new inputs, the model becomes attuned to the user’s wants and needs, making it a better companion. Examples include Replika and Silicon Intelligence; users of both have said that they feel love for their companion model, that they believe their companion model loves them, or both. Many users of these platforms have even claimed that their companion model is a better friend or lover than real people in their lives. Silicon Intelligence, in particular, is intended to resemble deceased loved ones: users supply text messages, social media posts, and general information about the loved one’s character. This allows users to pick and choose the qualities on which they train their model, and thus many may feel that Silicon Intelligence creates a better version of their deceased loved one than the person who actually lived.
Smith then laid out some arguments people may endorse in favor of using AI companions. The first is that, in many ways, AI companions make for the ideal friend or lover: they consider what is in the best interest of the user, never breach the user’s trust, are always available, and can act genuinely interested in the user with no divided attention. AI supporters might say that even the best real-life friends often fall short on many or all of these qualities, whereas there is no concern that an AI will ever fail to deliver them. A second argument Smith presented is that AI companions are the solution to the loneliness epidemic. One may argue that daily time spent alone, especially among teenagers and young adults, has drastically increased over the past decade or two, and that AI companions are an easy way to fill that void during periods of social isolation. Furthermore, Americans have fewer close friends than they once did, and so, the argument goes, AI companions are necessary at a moment when companionship is at an all-time low.
Smith pointed out that these arguments in favor of AI companions each depend on the idea that large language models can provide a genuine sense of companionship, love, and support. He then presented three features of good love that AI companions do not possess, explaining why these tools are not suitable objects for a loving relationship.
First, Smith argued that love must entail a genuine concern for the beloved, one that allows the lover to accept all of the beloved’s faults and shortcomings. This means leaving fantasy behind and grasping the whole reality of a person, rather than trying to conform someone to how we would most like them to be. Contrary to this notion, AI companions, as previously described, are subject to the user’s choice of what kind of “personality” they should have. In the case of Silicon Intelligence, the models that resemble deceased loved ones are mere imitations, shaped by whatever the user decides to include about who their loved one once was. Smith used the example of a deceased grandparent who had posted something problematic on social media: the user might be inclined to ignore that aspect of their loved one when recreating them through a Silicon Intelligence model, but this hardly upholds the intent of genuine love. If the user merely picks the best parts of their loved one, not only does this make for an unrealistic representation of that person, which is itself unfit for love, but it also disgraces the loved one’s memory. Moreover, since the user can always delete their AI companion and restart it whenever they want, they are never forced into a loving acceptance of it. Therefore, Smith argues that AI companions are not suitable objects of love because they cannot fulfill this central aspect of what makes for a loving relationship.
Smith then turned to another objection to the use of AI companions, based on their inability to have the right intentions toward the user. An AI companion only ever has the intention of fulfilling its programmed task: coherently generating responses as a large language model. Smith argues, however, that a good loving relationship is defined by both parties being open to being changed for the better by their partner. Large language models have no capacity for this kind of open-mindedness, so they fail to meet this criterion of love. By contrast, a human-to-human relationship based on genuine love and connection entails both people’s willingness to let the beloved guide the course of their life, grounded in mutual trust, respect, and hope that the future will be brighter with the other person’s influence. AI companions may let their behavior be shaped entirely by the user, but not out of love and trust; the AI companion is operating only because of its programming. Nor does the AI model seek to change the user for the better out of courtesy and grace. If the user finds positive change through their interactions with their AI companion, it is simply happenstance. Someone who believes they love their AI companion entrusts the tool with this kind of influence over them, but the trust is misplaced because the tool cannot offer them good intentions, or any intentionality at all. Smith compared this to being obsessively hung up on an ex-lover who no longer thinks about the past relationship and has either no intentions toward the person who is hung up or, worse, actively wishes them ill. In this example, the intentions on the two sides of the former relationship are unbalanced, much as they inherently are in any relationship between a person and an AI.
Lastly, Smith discussed irreplaceability, another feature of a loving relationship that is absent from how people relate to AI companions. As Smith had already established, a user can simply delete and recreate their AI companion whenever they desire. Similarly, the AI companion treats every user the same. If one person has trained the large language model to behave a certain way, then a different person interacting with that same model, even someone who has never used it before, would be treated exactly as the original user would be; the model has no concept of who is on the other end. As a result, AI companions cannot regard their “beloved” user as someone irreplaceable to them, since they have no concept of which user is interacting with them at all.
Thus, Smith came to his ultimate conclusion that AI companions are not suitable for love in any genuine sense of the term. However, the question still lingered: how will AI companionship actually affect the loneliness epidemic? Smith offered an answer here, too: things can only get worse, assuming people continue to substitute this insufficient alternative for human companionship. Smith noted that we can fall out of practice in any given activity, including loving someone else, and that getting into the habit of loving an object unfit for love is one route to falling out of practice as a lover. Trying to find love in an AI companion is analogous to practicing basketball with cones as defenders: the player’s dribbling may improve, but they will be out of practice once they face real opponents who can move and adapt. In the same way, someone who trains themselves to love a large language model that cannot truly be loved and cannot love them back may sharpen a social skill, but only outside the situation that matters, a human-to-human love connection. The user, in this case, dulls their ability to love by practicing love in the wrong way.
In turn, because seeking love through a machine is futile and damaging to the skill of loving someone else, Smith advised that we, as a society, reevaluate the value of loneliness. Is momentarily feeling less lonely justification enough to resort to loving a machine? Given the detrimental effect this may have on a person and on society, the answer may well be no, no matter the extent of one’s loneliness. AI companions may make users believe they are less lonely, and they may even feel less lonely, but this feeling does not reflect the reality of the situation: the user is no less alone than when they began. Finally, this loneliness may be useful in assessing the value of human-to-human connection and love. Isolation makes one feel detached from society and in desperate need of attention, affection, and consideration. However, Smith showed that none of these things can be attained by training a large language model to “love” you. As such, we should no sooner trade our loneliness for an artificial source of attention than we should choose to isolate ourselves for the sake of self-punishment.
Attending this Philosophy Club event at UVA was illuminating, as Jacob Smith raised many intriguing points about AI and the future of loneliness and human connection in the digital age. With various emerging technologies on the horizon, each of us must consider what technology has to offer us that is genuine rather than artificial. As Smith argues, one thing AI cannot offer us is love, nor can it supply us with someone to love, no matter how desperately people may be seeking such a connection.