In a recent article for MIT Technology Review, writer Charlotte Jee chronicled her experience sampling “grief tech,” an emerging field dedicated to creating everlasting A.I. clones of deceased loved ones.
How the hell does that work? Well, it’s important to understand that voice recognition and “voice cloning” software are far more advanced than the average Siri-summoning consumer might realize. Neural networks called “large language models” (LLMs) can now be trained on hundreds of gigabytes of data, enabling them to generate text, mimic mannerisms and answer impromptu questions…all in the name of producing a disturbingly dynamic facsimile of someone’s memory.
In other words: if you pump enough information into a machine about a person, it’s possible to then have a “conversation” with a digitized version of that person long after they’re gone.
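To make that premise concrete, here’s a minimal Python sketch of how such a persona might be assembled from interview answers. The data, names and prompt format are illustrative assumptions, not any company’s actual pipeline:

```python
# A sketch of how a "grief tech" persona might be assembled from interview
# answers. The sample data and prompt format are illustrative assumptions.

INTERVIEW = [
    ("Where did you grow up?", "A little town outside Manchester."),
    ("What did you do for work?", "I taught secondary-school maths for 30 years."),
    ("What makes you happy?", "Pottering around in the garden, mostly."),
]

def build_persona_prompt(name: str, qa_pairs: list[tuple[str, str]]) -> str:
    """Fold interview Q&A into a system prompt for a chat-style LLM."""
    lines = [f"You are {name}. Answer in their voice, using only these facts:"]
    for question, answer in qa_pairs:
        lines.append(f"- Asked {question!r}, they said: {answer!r}")
    return "\n".join(lines)

# The resulting prompt would be sent, along with each user message, to a
# hosted LLM (an OpenAI-style chat endpoint, for example) to generate replies.
print(build_persona_prompt("Dad", INTERVIEW))
```

In practice, the emails, texts and family anecdotes described below would be folded into that same portrait to deepen it; the more data you pump in, the more convincing the conversation.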
Silicon Valley subscription services like HereAfter AI, StoryFile, You, Only Virtual and Replika (among other startups, with more coming) are all selling various riffs on a future lived alongside virtual beings. The functional premise, though, remains consistent: while the person being cloned is still alive, they’ll need to sit down with a representative from the company (sometimes a bot, in a particularly dystopian twist of the knife) and answer hundreds of questions about their lives. These interviews can then be cross-referenced with emails, text messages and additional information from spouses, children and friends, until the software has enough of a portrait to hold convincing conversations.
How convincing are those conversations, really? Jee seemed less than impressed. She persuaded HereAfter AI to let her sign up her parents, both of whom are thoroughly alive at the moment, and chatted with their bots throughout 2021. While noting that the bots did sound familiar, and surprisingly “relaxed and natural” at points, Jee also found their programming hiccups difficult to ignore. For instance: the clones more or less lean on pre-established scripts, like video game characters (if someone Jee didn’t know at all, like me, asked her “parents” the same question she did, they’d give me the same answer), and while they do respond to prompts, they can’t process curveball answers. (When Jee said “I’m feeling sad today” after her dad bot asked how she was feeling, he cheerily replied “Good!”)
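A toy example makes that failure mode clearer. If replies are keyed to the bot’s own last question rather than to what the user actually said, the sentiment in a curveball answer never gets read. The dictionary and matching rule below are hypothetical, not HereAfter AI’s actual design:

```python
# A toy scripted bot (not HereAfter AI's real code): replies are looked up
# by the question the bot just asked, so the user's answer is never parsed.

SCRIPT = {
    "how are you feeling": "Good!",
    "where did you grow up": "A little town outside Manchester.",
}

def scripted_reply(last_bot_question: str, user_message: str) -> str:
    key = last_bot_question.lower().rstrip("?")
    # user_message is ignored entirely -- the source of the "Good!" hiccup.
    return SCRIPT.get(key, "Hmm, I don't have a story about that.")

# Jee's exchange: she says she's sad, and the bot cheerily replays its line.
print(scripted_reply("How are you feeling?", "I'm feeling sad today"))  # Good!
```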
This is fraught, ethically dubious technology, pushing us to the precipice of a frontier that many of us might gladly ignore entirely. Then there are some who might be too into it, regardless of its conversational limitations: imagine someone unable to reconcile with the loss of a loved one. Would this sort of automation help them process their grief? Would it be a comfort to have on hand, like a preserved Instagram page or a never-deleted voicemail? Or would it be a poison pill, one that offers them permission to wallow in anguish forever?
Some creators in the space seem to have the right idea and refer to their services as “digital monuments.” That seems reasonable enough. But grief tech faces an elemental problem: as the technology gets better and conversations with these virtual beings grow more convincing, won’t it become harder to think of them as mere monuments?
The tagline for one of the services is telling: “Never Have to Say Goodbye.” Is that comforting? Dangerous? For now, it’s a little bit of both, and perhaps a worthy reminder — if you’ve got 15 minutes today, call someone you love. They’ll know what to say if you’re feeling sad.