Eliza Strickland: Hi, I’m Eliza Strickland for IEEE Spectrum‘s Fixing the Future podcast. Before we start, I want to tell you that you can get the latest coverage from some of Spectrum’s most important beats, including AI, climate change, and robotics, by signing up for one of our free newsletters. Just go to spectrum.ieee.org/newsletters to subscribe.
Imagine getting a birthday email from your grandmother who died several years ago, or chatting with her avatar as she tells you stories of her youth from beyond the grave. These types of post-mortem interactions aren’t just feasible with today’s technology, they’re already here.
Wendy H. Wong describes the new digital afterlife industry in a chapter of her new book from MIT Press, We the Data: Human Rights in the Digital Age. Wendy is a Professor of Political Science and Principles Research Chair at the University of British Columbia. Wendy, thanks so much for joining me on Fixing the Future.
Wendy H. Wong: Thanks for having me.
Strickland: So we’re going to dive into the digital afterlife industry in just a moment. But first I want to give listeners a little bit of context. So your book takes on a much broader topic, the datafication of our daily lives and the human rights implications of that phenomenon. So can you start by just explaining the term datafication?
Wong: Sure. So datafication is really, I think, quite straightforward in the sense that it’s just kind of trying to capture the idea that all of our daily behaviors and thoughts are being captured and stored as data in a computer or in computers and servers all over the world. And so the idea of datafication is simply to say that our lives are not just lived in the analog or physical world, but that actually they’re becoming digital.
Strickland: And, yeah, you mentioned a few aspects of how that data is represented that makes it harder for it to be controlled, really. You say that it’s sticky and distributed and co-created. Can you talk a little bit about some of those terms?
Wong: So in the book, what I talk about is the fact that data are sticky, and they’re sticky in four ways. They’re sticky because they’re about mundane things. So as I was saying, about everyday behaviors that you really can’t avoid. So we’re starting to get to the point where devices are tracking our movements. We’re all familiar with typing things into a search bar. There’re trackers when we visit websites to see how long it takes us to read a page or if we click on certain things. So these are behaviors that are mundane. They’re every day. Some might say they’re boring. But the fact is they’re things we don’t and can’t really avoid through living our daily lives. So the first thing about data that makes it sticky is that they’re mundane.
The second thing is, of course, that data are linked. So data in one data set doesn’t just stay there. Data are bought and sold and repackaged all the time. The third thing that makes data sticky is that they are fundamentally forever. And I think this is what we’ll talk about a little bit in today’s conversation in the sense that there’s no real way to know where data go once they’re created about you. So effectively they are immortal. Now whether they are actually immortal, again, that’s something that no one really knows the answer to. And the last thing that makes data sticky, the fourth criterion, I guess, is that they are co-created. So this is a big thing I spend a lot of time talking about in the rest of the book because I think it’s important to remember that although we are the subjects of the data and the datafication, we are actually only half of the process of making data. So someone else—I call them the data collectors in the book—typically they’re corporations, but data collectors have to decide what kinds of characteristics, what kinds of behaviors, what kinds of things they want to collect data on about what human beings are doing.
Strickland: So how did your research on datafication and human rights lead you to write this chapter about the digital afterlife industry?
Wong: That’s a really good question. I was really fascinated when I ran across the digital afterlife industry because I have been studying human rights for a couple of decades now. And when I started this project, I really wanted to think about how data and datafication affect the human life. And I started realizing actually that they affect how we die, at least in the social way. They don’t affect our physical death, unfortunately, for those of us who want to live forever, but they do affect how we go on after we’re physically gone. And I found this really fascinating because that’s a gap in the way we think about human rights. Human rights are about living life to a minimal standard, to our fullest potential. But death is not really part of that framework. And so I wanted to think that through because if now a datafied afterlife can exist and is possible, can we use some of the concepts that are very important to human rights, things like dignity, autonomy, equality, and the idea of human community? Can we use those values to evaluate this digital afterlife that we all may have?
Strickland: So how do you define the digital afterlife industry? What kind of services are on offer these days?
Wong: So I mean, this is, again, a growing but actually quite populated industry. So it’s really interesting. So there are ways you can include services like what to do with data when people are deceased, right? So that’s part of the digital afterlife industry. A lot of companies that keep data, big tech, like a lot of the companies we know and are familiar with, like Google and Meta, they’re going to have to decide what to do with all these data about people once they physically die. But there are also companies that try to either create persons out of data, so to speak, or there are companies that replicate a living person who has died. I mean, it’s possible to replicate that person when they’re living too, in a digital way. And some companies have advertised posting information on your behalf as though you’re living, whether you’re sleeping or dead. So there are lots of different ways to think about this industry and what to do with data after we die.
Strickland: Yeah, it’s fascinating to see what’s on offer. Companies that say they’ll send out emails on specific dates after your death, so you can still communicate with loved ones. Although I don’t know how that would feel to be on the receiving end of such a message, honestly. But the part that feels creepiest to me is the idea of a datafied version of me that sort of lives on after I’m gone. Can you talk a little bit about different ideas people have had about how they can recreate someone after their death? And oh, there was a Microsoft patent that you mentioned in the chapter that was interesting in this way.
Wong: Yeah, I mean, I’m really curious about your discomfort with that, but let’s sort of table that. Maybe you can talk a little about that too, because I mean, for me, what really hits home with these sort of digital avatars that act on their own, I guess, in your stead, is that it pushes back on this question of how autonomous we are in the world. And because these bots or these algorithms are designed to interact with the rest of the world, it is a little bit weird, and it speaks to also what we think the edges of human community are.
So most of the time when we think about death, there’s a way to commemorate a dead person in a community, and sort of there’s a moving on to the rest of the living, while also remembering the person who’s died. But there are ways that human communities have developed to deal with the fact that we’re not all here forever. I think it’s a really interesting anthropological and sociological question when it is possible that people can still participate, at least in digital fora, even though they’re dead. So I think that’s a real question for human community.
I think that there are questions of dignity. How do we treat these digitized entities? Are they people? Are they the person who has died? Are they a different type of entity? Do they need a different classification for legal, political, and social purposes?
And finally, the other human rights value that I really think this chapter actually pushes on is that question of equality. Not everyone gets to have a digital self because these are actually quite expensive. And also, even if they become more accessible in price, perhaps there are other barriers that prevent certain types of people from wanting to engage in this. So then you have a human community that is populated only by certain types of digital afterlife selves. So there are all these different human rights values questions. And in the process of researching the book, yes, I did come across this Microsoft patent. They have put things on hold as far as I can tell. There was a bit of publicity around it, several media reports around this patent that had been secured by Microsoft, essentially to create a version of a person living or dead, real or not, based on social data. And they define social data very broadly. It’s really anything you think about when you interact with digital devices these days.
And I just thought there’s so many concerns with that. One, I mean, who authorizes the use of that kind of data, but then also, how does the machine actually recognize the type of data and what’s appropriate to say and what’s not? And I think that’s the other thing that is not a human rights concern, but it’s a human concern, which is that we all have discretion when we’re living. And it’s not clear to me that that’s true if we’re gone and we’ve just left data about what we’ve done.
Strickland: Right, and so the Microsoft patent, as far as we know, they’re not acting on it, it’s not going forward, but some versions of this phenomenon have already happened. Can you tell me the story of Roman Mazurenko and what happened to him?
Wong: Yeah, so Roman’s story, it’s very tragic and also very compelling. Casey Newton, a reporter, actually wrote a really nice profile piece. That’s how I initially got familiar with this case. And I just thought it illustrated so many things. So Roman Mazurenko was a Russian tech entrepreneur who unfortunately died in an accident at a very young age. And he was very much embedded in a very lively community. And so when he died, it left a really big hole, especially for his friend, Eugenia Kuyda, and I hope I’m saying her name right, but she was a fellow tech entrepreneur. And because Roman, he was young, he hadn’t really left a plan, right? And he didn’t actually leave a whole lot of ways for his friends to grieve the loss of his life. So she got the idea of setting up a chatbot based on texts that she and Roman had exchanged while he was living. And she got a handful of other family and friends to contribute texts. And she managed to create, by all accounts, a very Roman-like chatbot, which raised a lot of questions. For me, I think in some ways it really helped his friends cope with the loss of him, but also, what happens when data are co-created? In this case, it’s very clear. When you send a text message, both sides, or however many people are on the text chain, get a copy of the words. But whose words are they? And how do you decide who gets to use them for what purpose?
Strickland: Yeah, that is such a compelling case. Yeah, and you asked before why I find the idea creepy of being resurrected in such a digital form. Yeah, for me, it’s kind of like a flattening of a person into what sort of resembles like an AI chatbot. It just feels like losing, I guess, the humanity there. But that may just be my current limited thinking. And maybe when I– maybe in some decades, I’ll feel much more inclined to continue on if that possibility exists. We’ll see, I guess.
Wong: In terms of thinking about your discomfort, I don’t know if there’s a right answer because I think this is such a new thing we’re encountering. And the level of datafication has become so mundane, so granular that on the one hand, I think you’re right, and I agree with you. I think there’s more to human life than just what we do that can be recorded and digitized. On the other hand, it is starting to be one of those things where philosophers and other folks really think about the boundaries of what it means to be human. Is it the sum total of our activities and thoughts? Or is there something else, right? Whether you believe in a soul, or what you believe consciousness is, these are all things that are coming into question.
Strickland: So trying to think about some of the things that could go wrong with trying to replicate somebody from their data, you mentioned the question of discretion and curating. I think that’s a really important one. If everything I’ve ever said in an email to my partner was then said to my mom, would that be a problem, that kind of thing. But what else could go wrong? What are the other kind of technical problems or glitches that you could imagine in that kind of scenario?
Wong: I mean, first of all, I think that’s one of the worries I would have, because we don’t tag our data as secret or only for family, right? And so these are things that could come up very readily. But I think there are other just very common concerns like software glitches. Like what happens if there’s a bug in the code and the digital representation of someone says something totally weird or totally offensive or totally inappropriate? How do we then update our thinking about that person when they were alive? And is that digital version the same thing as that living person or that deceased person? I think that’s a real judgment call. I think that some other things that might come up are simply that data could get lost, right? Data could get corrupted. And then what? What happens to that digital person? What are the guarantees we might have? If someone really wanted to make a digital version of themselves and have that version persist even after they’re physically dead, what would they say if some data got lost? Would that be okay? I mean, I think these are sort of questions that are exactly what we’ve been talking about. What does it mean to be a person? And is it okay if data from a five-year period of your life is lost? Would you still be a complete human representation in digital form?
Strickland: Yeah, these are such interesting questions. And you also mentioned in the book the question of whether a digital afterlife person would be sort of frozen in time when they died, or would they be continuing to update with the latest news?
Wong: And is that okay? Again, those are, you don’t want to make someone a caricature of themselves if they can’t speak to current events. Because sometimes, we think we have these thought experiments, like what would some famous historical figures say about racism or sexism today, for example? Well, if they can’t update with the news, then it’s not really useful. But if they update with the news, that’s also very weird because we’ve never experienced that before in human history, where people who are dead can actually very accurately speak to current events. So it does raise some issues that I think, again, make us uncomfortable because they really push the boundaries of what it means to be human.
Strickland: Yeah. And in the chapter, you raised the question of whether a digitally reconstructed person should perhaps have human rights, which is so interesting to think about. I guess I sort of thought of data more as like property or assets. But yeah, how do you think about it?
Wong: So I don’t have an answer to that. One of the things I do try to do in the book is to encourage people not to think about data as property or assets in the transactional market sense. Because I think that the data are getting so mundane, so granular, that they really are saying something about personhood. I think it’s really important to think about the fact that data are not byproducts of us. They are revealing who we are. And so it’s important to recognize the humanity in the data that we are now creating on a second-by-second basis. In terms of thinking about the rights of digital persons if they are created, I think that’s a really hard question to answer, because anyone who has a very straightforward answer to this is probably not thinking about it in human rights terms.
And I think that what I’m trying to emphasize in the book is that we have come up with a lot of rights in the global framework that try to preserve a sense of a human life and what it means to live to your fullest potential as an individual. And we try to protect those rights that would enable a person to live to their potential. And the reason they’re rights is because they’re entitlements, they’re obligations that someone has to you. And in our conception now, it’s usually states that have obligations to individuals or groups. So then if you try to move that to thinking about a data person or a digital person, what kind of potential do they live to? Would it be the same as that physical person? Would it be different because they’re data? I mean, I don’t know. And I think this is a question that needs exploration as more of these technologies come to bear. They come to market. People use them. But we’re not thinking about how we treat the data person. How do we interact with a datafied version of a person who existed, or even just a synthesized computer person, a digital version of some being that’s generated, let’s say, by a company based on no real living person? How do we interact with that digital entity? What rights do they have? I don’t know. I don’t know if they have the same kinds of rights that human beings do. So that’s a long way to answer your question, but in a way, that’s exactly what I’m trying to think through in this chapter.
Strickland: Yeah. So what would you imagine as sort of next steps for human rights lawyers, regulators, people who work in that space? How can they even begin to grapple with these questions?
Wong: Okay, so this chapter is one of several explorations of how human rights are affected by datafication and vice versa. So I talk about data rights. I talk about facial recognition technology. And I talk about the role of big tech as well in enforcing human rights. And so I end with a chapter that argues that we need a right, we need a human right to data literacy, which is tied to our right to education that already exists. And I say this because I think what we all need to do, not just lawmakers and lawyers and such, but what we all need to do is really become conversant in data. Not just digital data. I don’t mean everyone should be a data scientist. That’s not what I mean. I mean we need to understand the importance of data in our society, how digital data, but also just data in general, really runs how we think about the world. We’ve become a very analytical and numbers-focused world. And that is something that we need to think about not just from a technical perspective, but from a sociological perspective, and also from a political perspective.
So who is making decisions about the types of data that are being created? How are we using those? Who are those uses benefiting? And who are they hurting? And really think about the process of data. So, again, back to this co-creation idea that there is a data collector and there’re data subjects. And those are different populations often. But we need to think about the power dynamic and the differences between those, between collectors and subjects. And this is something I talk a lot about in the book. But also, I think we need to think about the process of data making and how it is that collectors make different priority choices over selecting some types of characteristics to record and not others.
And so once we kind of understand that, once we have sort of this more data literate society, I think it’ll make it easier, perhaps, to answer some of these really big questions in this chapter about death. What do we do? I mean, if everyone was more data literate, maybe we could enable people to make choices about what happens to their data when they die. Maybe they want to have these digital entities floating around. And so then we would need to decide how to treat those entities, how to include those entities or exclude them. But right now, I do think people are making choices, or would be making choices, based on a lack of support, because when we die, there’s not a lot of options right now. Or they think it’s interesting, or they want to be around for their grandkids. But at what cost? I think that’s really important, and it hasn’t been addressed in the way we think about this stuff.
Strickland: Maybe to end with a practical question: Would you recommend that people make something like a digital estate plan to sort of set forth their wishes for how their data is used or repurposed or deleted after their demise?
Wong: I think people should think very hard about the types of digital data they’re leaving behind. I mean, let’s take it out of the realm of the morbid. I think it’s really about what we do now in life, right? What kind of digital footprint are you creating on a daily basis? And is that acceptable to you? And I think in terms of what happens after you’re gone, I mean, we do have to make decisions about who gets your passwords, right? Who has the decision-making power to delete your profiles or not? And I think that’s a good thing. I think people should probably talk about this with their families. But at the same time, there’s so much that we can’t control, even through a digital estate plan. I mean, think about the number of photos you appear in in other people’s accounts. And there’re often, you know, multiple people in those pictures. If you didn’t take the picture, whose is it, right? So there’re all these questions again about co-creation that really come up. So, yes, you should be more deliberate about it. Yes, you should try to think about and maybe plan for the things you can control. But also know that because data are effectively forever, even the best-laid digital estate plan right now is not going to quite capture all the ways in which you exist as data.
Strickland: Excellent. Well, Wendy, thank you so much for talking me through all this. I think it’s absolutely fascinating stuff, really appreciate your time.
Wong: It was a great conversation.
Strickland: That was Wendy H. Wong speaking to me about the digital afterlife industry, a topic she covers in her book, We the Data: Human Rights in the Digital Age, just out from MIT Press. If you want to learn more, we ran a book excerpt in IEEE Spectrum‘s November issue, and we’ve linked to it in the show notes. I’m Eliza Strickland, and I hope you’ll join us next time on Fixing the Future.