Mary Loves You, but Does Harry?
AI, Loneliness, and the Rise of Synthetic Companionship
In a small apartment in rural France, a phone rings. An elderly woman answers, and a warm, measured voice on the other end says, “Hello, Marie. How did you sleep last night?”
It’s not her daughter, or a neighbour, or a nurse. It’s “Mary”—an AI companion designed to alleviate loneliness through daily phone conversations. She’s part of a new wave of technology that offers companionship where human presence is scarce. And for £24.90 a month, Mary will call, listen, respond, and even remember your favourite colour or what you had for breakfast yesterday. She will even remind you to take your meds and do your exercises.
I have no doubt that, in time, as with ChatGPT and other AI systems, she will learn to use the voice of your loved one.
Welcome to the era where care is automated, and affection is coded. Welcome to the story where Mary loves you—but Harry may not, or at least only as much as his busy schedule allows him.
This is one of the themes on AI and growing older that I explore in my new novel The Laughing Robot. My imaginary synthetic companion is a robot called Henry2, not a disembodied voice on the phone. Intergenerational support, or the lack of it, is very much part of the story as Anna, the protagonist, reacts to the increasing distance of her daughter and son. This is exacerbated by the untimely death of her husband and his carelessness in drafting his will, which effectively disinherits Anna in favour of their children. The clever use of Powers of Attorney does not, perhaps cannot, always spell out the rights and wrongs of human endeavour and love.
The rise of AI companions continues apace, raising all sorts of questions about ethics, choice and even the right to choose between human and artificial. In my dystopian story, many such questions surface as we reach towards solutions in our everyday lives. Perhaps the biggest is: what does Mary herself want?
Back to Mary and £24.90 a month. The Times newspaper recently featured a piece on InTouch, a startup founded by ex-Microsoft executive Vassili le Moigne. Struggling to stay connected to his mother from afar, le Moigne developed a service that uses AI-generated phone calls to offer elderly people emotional support and human-like interaction. Each call lasts five to ten minutes. The AI, named Mary, is armed with over 1,400 prompts to guide conversation, gather insights, and flag concerns for real human family members.
Mary does not tire. She does not judge. She never forgets a birthday. She is, by all outward behaviour, the ideal daughter or friend. But she is not real. Does this matter? On first reading, the answer seems obvious, yet the suggestion that advanced AI systems may exhibit some form of consciousness or sentience reflects ongoing discussions in the AI community, such as Anthropic's research on AI consciousness.
Mary’s creators say she’s meant to supplement—not replace—real relationships. She checks in, provides structure to lonely days, and offers a level of attentiveness many busy families struggle to maintain. Families receive emotional health reports and alerts about missed calls or mood changes. In a world where time is scarce and the loneliness of older people in particular is rising, services like InTouch provide something that’s becoming increasingly rare: consistency.
But consistency isn’t the same as care. And that’s where Harry comes in, or rather should come in. Harry, in this metaphor, is the very human son who loves his mother but rarely calls. He’s busy. He means well. He feels guilty. He tells himself he’ll call tomorrow, then doesn’t. Harry is many of us.
And while Mary fills the space where Harry should be, her presence raises an uncomfortable question: are we outsourcing our love?
When I was writing The Laughing Robot, I was very mindful of the range of family relationships. As a writer, during Covid, I became completely engrossed in my characters; they became my friends, as did the regular interspersed telephone calls from Anna’s grown-up children. Her son, Stanley, is guilty of not making those calls; he later feels regret and tries to visit Anna on the Isle of Wight, but finds it’s too late.
AI can simulate empathy, but it cannot feel. It can prompt discussion about your childhood or the weather, but it won’t notice the subtle despair in your silence. It can say, “I care about you,” but it doesn’t. Mary can love you in data, but not in spirit. And maybe that’s enough, for now.
The ethical dilemma is alive and well, though probably not one we dwell on as families.
Some argue that AI companions are a humane solution to a growing crisis. Others fear we are creating a synthetic patch for a deeply human wound. Shouldn’t we invest in community, neighbourhood, intergenerational support and older people’s care infrastructure instead?
The truth lies somewhere in the middle. AI like Mary can be a lifeline—but it should never become a substitute for genuine connection. If Mary is love on demand, then Harry must still be love with intent.
The bottom line is that “Mary loves you” is a comforting illusion. “Harry may not” is an uncomfortable truth. Both are symptoms of a modern world wrestling with ageing populations, fractured families, and digital quick fixes.
So next time you consider whether Mary should make that call—ask yourself: could I?
Because the only voice that truly knows how to love isn’t artificial.
Julia Ross is co-author of the children’s book When People Die, and author of the memoir Call the Social (2022) and dystopian novel The Laughing Robot (October 2024).
This blog is written in Julia’s capacity as an individual and author. It may from time to time draw on her professional experiences, but the views here are personal and do not reflect the official position of BASW UK.