Opinion | This Conversation Changed the Way I Interact With Technology
I’m Ezra Klein, and this is “The Ezra Klein Show.”
So before we get into it today, we’re going to be doing an ask me anything episode of the show. If you’ve got a question you’d like to hear me answer, email it to [email protected]. Again, [email protected].
Today’s show is about technology. And I want to be upfront about this. I would be useless without technology. I mean, I’m, for the most part, functionally blind without glasses. I can’t see anything. I’m a writer, but I have this terrible, unreadable handwriting that makes it difficult for me to communicate using a pen — not only can you not read it, but I also can’t think clearly while doing it. I’m too distracted by the way it turns out.
I got rejected for my college newspaper. And so the path I took into journalism — it was completely built on a narrow technological moment. I happened to have a lot of free time because I was in college at the exact moment blogging became a thing. And that is how I got into journalism. And so I think of myself fundamentally as a techno-optimist. I mean, I believe that technology can make our lives better, richer, more fulfilling.
A lot of the things I care about politically, ranging from climate change to animal suffering to the dignity people have at work, it seems to me we are going to need big technological advances to make the politics of those things easy enough to overcome. But partly because I’m so taken by the power of what technology can do for us, I think we underestimate — and actually worse, we ignore — what technologies actually do to us.
We do not just use these tools. We become them. We are reshaped by them. This was a big theme in 20th century media criticism. And if you read Marshall McLuhan or Neil Postman, it is all over their work. And it is still true. One of the critics carrying these ideas into the modern era, into our modern technologies, is Michael Sacasas, who writes under the pen name L.M. Sacasas. I know his work because I follow his excellent newsletter, “The Convivial Society,” which I highly recommend. What he does in that newsletter is interesting. He brings theorists of the past into conversation with the technologies of the present. And he does so to look at today’s technologies outside of their current narratives and business contexts, to treat their evolution not as inevitability, but as a series of choices we made, all of which should revolve around the human character and experience. And certainly, the way we evaluate whether or not these tools are serving us should revolve around the human character and experience.
Sacasas recently did a piece in which he posed 41 questions — 41 — we should ask of the technologies we use. And technology’s defined here broadly. It’s computers and artificial intelligence and Zoom, but it’s also tables and alarm clocks and ovens. And what I loved about these questions is they’re ways of not just thinking about technologies, but about ourselves, and how we act, and what we want. And what, in the end, we truly value. As always, my email — [email protected].
Michael Sacasas, welcome to the show.
Thank you. Pleasure to be here.
So I want to begin with a technological experience many of us have had over the past year. Why is talking to someone on Zoom so much more exhausting than talking to them in person or even talking to them on the phone?
Yeah, so the way I began to think about this early in the pandemic last year is that we’re abstracting the body from the act of communication. And not entirely — in fact, Zoom in some respects provides more of a view of the body than, say, a telephone call does. But the body is really essential to the work of meaning making in communication settings. Right, so we pick up on all sorts of cues from one another to register whether someone’s paying attention or they’re losing interest, or whether they’re tracking with what we’re having to say.
There are ways in which we use our body to generate meaning. We might point or gesture. Body language, again, conveys a lot of the sense of the interaction. And so when we’re on Zoom, there are a number of things that sort of distract us from that. For one thing, if we haven’t hidden self view, we have a tendency just to look at ourselves in these settings. We see ourselves there and we want to make sure that we don’t look too foolish in our presentation. And our eye glances at that.
There’s no need to get personal.
[LAUGHS] I’m thinking just in my own experience. The camera is positioned in such a way that if I try to give you eye contact, I can’t see your eyes and vice versa. And so we lose that ability to look into one another’s eyes. And again, I think we’re laboring. Our mind is sort of laboring when it’s used to using these tacit cues from the bodily experience, and it loses track of those or has a really hard time focusing on them. I think it’s just really laboring to just make sense of the kind of things we’re trying to talk about. And for that reason, I think it becomes exhausting. At least that’s, I think, a big part of the picture.
One thing I loved about the piece is, you gave me language for something I was feeling, and then gave me courage to stop Zooming with people early in the pandemic, which — there was this period when everything tried to move there. But something I reflected on a lot after it is, well, then why do I find it easier to be on the phone?
And I do think it comes back to this idea of the body. You talk about the body being there and not there on Zoom. You can’t see it. You can’t move it. It’s micro-delayed, given what you would expect on the other end of the connection.
But you’re also stilled. So on the phone, I can’t see your body right now. You can’t see mine. We’ve turned off the cameras for all these reasons. But it means I can move around.
My body can be like a body, at least on my end, as opposed to a still body, because I’m worried about your end. And just that point — that what is happening in our bodies affects profoundly what is happening in our interactions with technologies and each other — it’s pretty big, and it’s laced through your work. So I was wondering if you could talk a bit about it.
So I’ve picked up bits and pieces from a variety of philosophers and theorists to kind of help me make sense of what we’re doing when we’re using technology. And at some point early on, it became clear to me that the body is a really essential part of this picture. I sometimes think of the body and the world and our minds kind of creating a circuit.
And so then, we introduce a tool into that circuit, and it’s going to shape how we perceive the world, and it’s going to shape how we interpret the world. And so the body is at the nexus of our experience of reality, and technology enters into that loop of perception in ways that can be benign, in ways that can be beneficial, in ways that can be detrimental. But it certainly changes it.
And so that’s one important lesson that I’ve taken from people like Don Ihde, who is a philosopher of technology, who I think is one of the ones that has kind of made this a central concern in his little branch of philosophy of technology. And I think it’s always useful to ask that question: how is this tool that I’m bringing to my body, into my interactions with the world, shaping the way that I perceive the world, in often very subtle ways?
We’re about to ask a lot of good questions like this about technology, but let me ask you about one place I’ve seen you bring this, which you’ve made me attentive to, which is technologies that make you forget your body is there. Can you talk a bit about that?
Yeah, so many of us whose work focuses around the computer or around the workspace where we sit down to do knowledge work, I feel like we get very absorbed in that. There’s a tendency to just become absorbed in what we’re doing and to forget the needs of the body, right? I’m thinking, for example, of this idea of email apnea, which was coined by Linda Stone, a researcher with Microsoft many years ago.
You know, you essentially kind of hold your breath when you’re focusing on what you’re reading online. It’s one way in which it kind of upsets the ordinary rhythms of our bodily existence. There’s a lot of effort made to make our tools ergonomic, friendly to the body.
But more often than not, we find ourselves in postures that are not really great for us. And so we have to be reminded. We have an app that reminds us to get up every so often, so we don’t stay in that position. So that’s one way, I think, in which a very, very common work experience for individuals — and an experience that frankly is not just limited to work — can make us forgetful of the needs of the body.
So this begins to get at something you do a lot in your work, which is reverse the way we usually think of the ethics of technology. And you use it in this essay that will frame a lot of our conversation about 41 questions one should ask of technology. You begin with the example of a hammer. Now, a hammer can be used, you say, to build a home. It can be used to bash in a skull.
So one way of looking at a technology like a hammer is the ethics of it are simply what we do with it. It is just our ethics, transferred to the hammer. But you suggest a different question, which is, how does having the hammer in my hand encourage me to perceive the world around me? What feelings does it arouse in me? So tell me a little bit about that move, from us directing the technology, to the technology changing our experience, or the nature of ourselves.
A lot of thinking about the ethics of technology traditionally — a little less so now — often involved the question of, what am I going to do with this tool? So hence, the example of the hammer. In this view, the tool is ostensibly neutral.
And I think that’s the idea that I find myself pushing back against a good bit. So in one sense, it makes a certain amount of sense that, yeah, I can do good things with this tool, this hammer. I can use it to build a house or repair something, or I can use it to hit somebody. And in that sense, what matters is my intention and the use to which I put it.
So I want to push back on that — not because it’s untrue, but because it’s inadequate as a way of thinking about how technology impinges on the moral life or what we think of as ethics. And so the example that I give there with the hammer has to do with perception. So one of the key ways in which I think technologies fail to be neutral is that they shape how we perceive the world, and they dispose us in a certain way towards the world.
So the hammer’s a trite example. People often have heard the expression: to the person with a hammer, everything looks like a nail. And this reflects the way in which, when that hammer comes into that circuit of mind, body, and world, it transforms how the world appears to us or what it makes us see the world as.
And so that is one example. A camera is another example. And by cameras, I simply mean the smartphone so many of us carry with us all the time, and how it kind of reframes aspects of experience as memories to be recorded, for example.
So we might have felt differently about experiences, seen it differently, without the camera in hand. But now that it’s there, even if we choose not to take the picture, for a moment, it has changed how we interpret what is happening or what is going on.
The camera is such a good example of this. I think sometimes about how much the smartphone camera has changed my experience of parenting. Because constantly, when my son does something cute, my instinct is that I need to whip out my phone to record the cute thing, so it can be shared with my family or memorialized for the future.
And seven times out of 10, when I do that, I stop the thing that was happening. He sees the phone, he gets interested in that, he sees me doing something, he just gets interested in whatever change had happened to me, and I break the circuit of the experience. I’m not even saying it’s all bad. I’m happy to have many of the photos and videos I have of him. And then, sometimes I now try to not have my phone when I’m with him. I leave it at home. But then, he’ll do something cute, and I’ll be, in a part of my head, frustrated that I just have to sit there and experience it, and I can’t let anybody else know this wonderful thing has happened. And it’s a really different experience than if I just didn’t have access to a smartphone camera at all.
Yeah, I mean, that’s a great example. I have two little girls, so I can very much relate to this. And I want to echo the point that you made. A lot of this is not about saying it’s good or bad. And I think very often, people just want to know, is this a good thing? Is this a bad thing?
And I think part of the point that I often try to make is that something can be morally significant without necessarily being good or bad by itself. So that’s a point that we can come back to at some point. But yeah, definitely, the experience of wanting to document reality. You know, one thing I’ve found — I don’t know if this is true for you as a parent — is that somewhere, there’s this sense that you want to kind of arrest the growth of your kids, you know.
They’re growing up so fast, and you want to document where they’ve been along the way. Paradoxically, in my experience anyway, I think I’ve almost found that having this very pervasive record, visual record of their growth, has made that actually a more pronounced experience, a greater sense of things slipping by, slipping away.
And I often wonder, how would the experience of being a parent in relation to a child in this way have been different, as it was for the majority of human history. People just didn’t have a photographic record of the sort. All they had was their memory to work with. And so to me, that’s an interesting question. It’s a moral question that involves a very profound human relationship and how we experience it.
So let me take that as a bridge to the 41 questions. And I’m simply going to pose these to you. You’re the one who wrote them. I appreciate you making my job so easy.
And I will maybe sometimes offer a prompt of the technology we can talk about, but of course, you should feel free to take it wherever you want. And so the first question you implore us to ask of a technology is, what sort of person will the use of this technology make of me? And I would say, let’s use Twitter as the example.
Yeah, and that’s the most general of those questions, I think, and it comes out of my sense that we become what we habitually do. So I’ll say that the way I think of the moral life and moral formation is really influenced by virtue ethics, which places a lot of emphasis on habit, disposition, and inclinations.
And so I’m on Twitter a lot. So this is the one social media platform that I am on with some regularity. And I find that when I’m on Twitter, I tend to feel a little anxious, a little scatterbrained. I do feel like my focus is sort of distracted in ways that aren’t entirely good for me.
So I use it as a way of building relationships, garnering information, kind of keeping an eye on the way the world in that little Twitter sphere is reacting to current events. But I do feel it taxes me mentally. It frames the way I think about what I say.
So I’m very aware of the audience on Twitter and how they might respond to what I may want to tweet. And so there’s this little editing voice in my head that takes the Twitter audience for granted. And I think eventually, that sort of spills out, that may spill out into other spheres of life.
And then, there’s of course the tendency to take the experience of Twitter and normalize it, or to say that this is a one-to-one map of reality. And we have to, I think, guard against that tendency. So there are ways in which it kind of plays with our emotional, cognitive lives — the way it frames the self.
It’s a kind of performance that we’re undergoing for the audience on Twitter. Those are some of the ways that I think come to mind, in terms of how that’s beginning to shape me as a person, the way that I think about myself and what I do.
I want to hold on that idea of the self as a performance. Because that’s one of the things I noticed. People who listen to the show know I have a lot of thoughts on Twitter, and I stay off of it a lot of the time, and then I tend to jump on it when I have something to promote.
But particularly when I’m using it more, one of the things that is striking to me about it is it makes me somebody who thinks a much wider expanse of my thoughts are things other people should also hear. You know, I was a writer before there was really Twitter.
And as a writer, the things I thought people should hear were of a certain variety. They had a certain weight to them. A certain amount of effort went into them. They were about a certain set of topics, for the most part. And on Twitter, it’s literally what I thought of the “Loki” season finale. I mean, it’s anything.
And it makes me more audience- and approval-hungry, and possibly more backlash-averse or something. I do think it is both believing more of what I think should be shared, and also shaping that more to social approval, than I do in other mediums. But over time, that actually does change how I think, and if I let it, what kind of person I am, but also what kind of person I present myself as to the world.
Yeah, no, absolutely. An example of this resonates with what you just described. I found myself reading a book a couple of days ago, and underlining some passages of note. And immediately, my first thought was, I’ve got to put this on Twitter.
And I had to resist the urge, and I consciously thought of, how would I have done this if I didn’t have Twitter? How would my experience of reading have been a little bit different? And why do I feel compelled to share this? Do I feel compelled to share this because I think, oh, this will play really well within my networks?
And I think that sense of approval, of — it’s sometimes described as a kind of dopamine hit that you get — and then, we begin to crave that, and then that bending of the self to the perceptions of the audience, that feedback loop, I think, can become really powerful.
Let’s go to the next question. What habits will the use of this technology instill? And let’s talk about electric lighting.
I recently wrote a little bit about the loss of the night sky, so this is what comes immediately to mind. Famous anecdote — I think it was in LA. There was a massive power outage.
And there were a number of calls to 911 about this striking, glowing thing in the sky, which turned out to be the Milky Way, which numerous people hadn’t seen. And so one thing that comes to mind with electric lighting — this maybe isn’t quite a habit, but there’s a sense of where I look. What can I see? How do I experience darkness?
And there are long social trends here, going back to the beginning of electrification and even gas lighting in European cities. But it changes the character of daily life, in terms of my habits even of experiencing sleep and rest. And so it’s mundane technology, but it’s one that’s been profoundly formative of just the experience of the day, how we order and structure our day.
It opened up the night, in many respects, for activities that wouldn’t have been possible elsewhere. But I think also, and again, going back to the question of the body, remembering we’re embodied creatures. Maybe it’s kind of messing with the rest our body needs. We have the habit of staying up later than perhaps we ought. We’re timing ourselves to rhythms that are not necessarily conducive to our well-being.
And so there’s a way of experiencing the night, both at a macro level with regards to what we see, and the loss of connection with the naturally dark sky, to social life, impact on social life, and then impact on personal life. So at those three scales, different habits will be generated by the fact that we can flip a switch and carry on with our activities when the sun goes down.
Your next question, I really love. How will the use of this technology affect my experience of time? And I’ll let you choose the example here.
Oh, the clock. I love thinking about technologies that we take for granted, that we don’t think of anymore as technologies. And so the clock is a fascinating piece of technology, the mechanical clock. It was originally used to help monks keep their daily rhythm of prayer, and then it comes to structure so much of modern life.
Lewis Mumford in the 1930s, in his book “Technics and Civilization,” makes a great deal of this. He says that it’s the clock that is the centerpiece of the modern world, in that it divides time. It segments time into discrete measurable units. Consider that without the mechanical clock, it really doesn’t make sense to say, I’ll meet you at 12:10.
But that just wasn’t the way that human beings experienced the passage of time. It gives us a sense of time as something to be lost or wasted, measured. It generates a kind of anxiety about that. So time, I think, is one of the fundamental moral dimensions of human experience.
And so we tend to think, well, time is just time. Everybody experiences it similarly, and we relate to it similarly. But in fact, it’s one of the realities that has been, I think, most profoundly shaped by the technologies that we use to measure time. And then, of course, when we were able to put that measurement device on our wrists, it made that ubiquitous.
We all know where we are in this finely calibrated ordering of time, and that allows us to relate to it in different ways, to think of punctuality differently. It changes our sense of the politeness of arrival and departure. And I think there are still certain cultures in which you can see that there is a profound difference with the way, certainly, that Westerners tend to think about what it means to abide by time.
Speaking of always knowing where you are, the next question, which I really like, is how will the use of technology affect my experience of place? And I want to use here a technology that arose in my lifetime, which is ubiquitous GPS maps.
That’s another great example. I actually thought of it on the way to the studio here. I don’t have a smartphone, so I don’t have GPS on me all the time, and I have an old car.
So I made note. I did use Google Maps to find the location of the place, relative to where I was, and then just made a mental note of it, and I made my way here. And I thought, well, it’d be really bad if I got lost and wasn’t on time, so I made a point of taking down the number.
And one thing I’ve observed in other contexts is that sometimes these technologies that make things very easy — very efficient, in some respects — they eliminate certain things. So what would I have done if I had gotten lost? I would have stopped for directions. It would have required a kind of human interaction.
And so that changes. But then, also, I think the way we tend to use — when I have used GPS — the way we tend to use GPS is that it directs our attention not so much to the place itself, but to the directions we’re receiving. So if we’re just listening to the voice that’s going to tell us, turn in 100 yards or whatever the case may be, our attention is focused on that, rather than, if I were told, you need to watch out for the corner of this intersection, then I’m more actively engaged in figuring out where I am.
And so it’s not that we have to do that all the time, that we need to necessarily — that using GPS is bad. But it does, I think, change the relationship that we have to the place, our ability to know that place well. And you may or may not put a moral value on that. But if you do, yeah, definitely, I think that ubiquitous GPS use has an effect of creating a certain distance from place, of abstracting us from place, making us less attentive to it in ways that might otherwise be beneficial.
How will the use of this technology affect how I relate to other people? And the example I’d like to use here is search engines.
That’s a good example, in that it pushes it a little bit. Because if we rely on the search engine, for example, to form our picture of the world, our idea of what others are like, when we try to understand those that are not immediately in our network of friends or colleagues, then it filters a picture of the world of others to us.
How are those search results being determined? What is being included? What is being excluded? How is the algorithm calibrating the kind of information I’m going to receive?
And I think that does tend to impact the shape of our perception of others, not necessarily those closest to us, but those others that we know in this more mediated fashion. It changes our understanding of who they are, and eclipses, I think, important aspects of the fullness of their personality, or the complexity of their view of the world. I think that would be perhaps a risk with the search engine as the mediator of our relationship to others.
You know, there’s somewhere else I thought you might go, but this is in truth just where I went when I saw the question, which is, it made me think about how many conversations with other people I do not have because of search engines.
How many times when my map of knowledge to fill something in would simply require, and did require when I was younger, just asking. Do you know? What do you think? Where should I go to dinner? Do you know this person’s phone number? Have you heard of? Do you remember that president? Do you know when this happened?
And on the one hand, the information I got from those conversations was probably much less precise. And on the other hand, there was a lot of other information, and there was relationship building that happened in those conversations. And so I don’t think I would tend to think about search engines as a social technology or a technology with a heavy social effect.
They’re not social media, famously. They’re the thing that came before it. But they actually really changed the social world and took a whole expanse of interaction out of it and into a sort of bilateral between me and the computer.
Yeah, and that’s a great example. I think that’s a wonderful thing about these questions, you know. Whatever might come to my mind is not going to be what comes to somebody else’s mind. But that’s, I think, a terrific example of the way that it enters into that loop of social relations, yeah.
I’m going to jump forward a little bit here. What practices will the use of this technology displace? What do you think of when you hear that?
I think of something like the example of the GPS. So the practice of finding my way on a map or getting directions from someone, and so that social connection that gets set aside because I can just look this up on my phone and find my way there. I can think even of something more mundane, the way we organize our dinners or our meals together.
There’s a philosopher of technology, Albert Borgmann, who famously made a big deal about this. And we think about what was involved for a family — and again, one has to recognize that not all families are structured similarly. But if we think about the way that all members of a family might have been involved in putting a meal on the table and gathering around it, we can then think of the alternative, which initially, maybe in the 1980s when Borgmann was writing, was just, pop something into the microwave, and everybody’s served.
More recently, you might think of the app, delivery app, that just brings you the food. It has displaced certain rituals or roles within a family, certain interactions within a family or within a network of friends, even, who might gather for a meal. That might be a felt loss.
Again, not necessarily morally wrong or morally right, but consequential with regards to what is binding that family or that network of friends together. There was a kind of labor involved in putting that meal together, and that labor itself had an important role to play in the dynamics of the relationship that are outsourced when we change the practice by finding technological shortcuts around it to get to the same end, but through different means.
I really like this couplet of questions: what will the use of this technology encourage me to notice? And the technology that came to mind there was the ubiquity of social media or just social profiles for people — that when I meet somebody or even often before I meet them, I can look them up on a profile that is going to encourage me to notice other things than I would have if I had just called them up or heard about them from a friend.
Yeah, each of us is such a complex reality, such a tangle of desires, emotions, insecurities, capacities and capabilities, and histories and narratives. And any attempt to kind of capture that, certainly in an online profile, is necessarily going to leave some important dimensions of the person out.
And if we come to know a person chiefly, initially, through a profile by looking them up, we’ll bring those preconceptions to the table when we meet them, and it will have the tendency, I would say, to reduce our understanding. But of course, that can change over time. The dynamics of the relationship might be such that we get to see these other aspects of one another.
Well, if there is a relationship, it might. But what that made me think of: there was a very funny but also telling article in New York Magazine, probably a month ago now. And it talked about how on dating apps in New York, Tinder and, I guess, Hinge — I don’t know what everybody’s dating on in New York. But it talked about the amount of very far-left signaling on a lot of the apps.
So eat the rich, or I’m going to burn civilization down and light my joint in the fires, or — just a lot of very hardcore socialist signaling. But then, people meet each other, and of course they’re not that hardcore, and they’re not really revolutionaries, and they’re not really trying to upend society. And there’s a very, very funny anecdote in there of a woman who ended up on a date with a guy whose profile was all about how much he hated the rich, about how much he wanted to abolish billionaires, and so on.
And then, when they met, after a couple times — and he just kept ranting about how he hated the rich — he’d be like, listen, I’m actually rich. And she was like, oh, well, I still like you. [LAUGHTER] Let’s keep dating.
I think a lot of the way we display who we are in flattened profiles is wrong about who we are, what tradeoffs we really make. But what it does is, it creates a kind of filtering of, is this person like me or not?
I wonder how much of that has to do with the scale at which we’re operating. So you know, like you said, if we build a relationship, we might correct our perception. But we’re not going to build that kind of relationship with the vast number of people that we interact with online, even if it’s not directly with them online.
And there is a need to find some quick way of sorting, of categorizing. And so there’s this built-in temptation, I think, to use these categories to drop people into — or if we think that that’s what they’re going to expect of us, to try to fill that role or to live up to that role.
So now, the other side of the couplet, which is, what will the use of this technology encourage me to ignore? And I want to cue you to talk about a particular technology in a piece you wrote: so Wi-Fi and pervasive internet.
I think of how easy it is to take recourse to the ubiquitous internet whenever we’re in a situation where we have to wait for something, whether we’re stopped at a traffic light, we’re in line, waiting for something, waiting for a friend at a restaurant. And the desire to just be patiently attentive to the world around us doesn’t really stand up to the immediacy of what can be provided for us through these online interactions or services.
So Facebook had a commercial a few years back, where a young girl was sitting at a table with her family. Maybe it was a holiday dinner or something. And all of the relatives are portrayed in kind of stereotypically negative ways, and this young lady is able to escape that world through all of what Facebook brings to her on her smartphone as she’s holding it underneath the table, beneath everybody’s view.
And the world sometimes can’t quite compete, if looked at from a certain perspective, with the immediate satisfactions and pleasures and distractions that we can call forth immediately on our smartphones. But it has its own kind of richness that requires a kind of attentiveness. And sometimes it requires us to look very carefully and very patiently to listen, to engage our senses in a more genuine way.
And I tend to think that there are ways in which the world will repay our attention, in which others will repay that kind of careful attention. But we're too easily distracted sometimes by the lure of our devices to give that kind of attention to the world beyond the devices.
I’m gonna group the next set together. So what was required of other human beings, of other creatures, of the earth, so that I might be able to use this technology? When you ask that, when you think of that, what comes to mind?
So I recently wrote a piece, and its premise was that sometimes we think of the internet, of digital life, as being immaterial, existing somewhere out in the ether, in the cloud, with these metaphors that kind of suggest that it doesn’t really have a material footprint. But the reality of course — I think as most of us are becoming very aware — is that it very much has a material reality that may begin in a mine where rare earth metals are being extracted in inhumane working conditions at great cost to the local environment.
But that’s very far removed from my comfortable experience of the tablet on my couch in the living room. And so with regards to the earth, the digital realm depends upon material resources that need to be collected. It depends on the energy grid. It leaves a footprint on the environment.
And so we tend not to think about that by the time that it gets to us and looks so shiny and clean and new, and connects us to this world that isn't necessarily physically located anywhere in our experience. And so I think it is important for us to think about the labor, and the extraction costs to the environment, that go into providing us with the kind of world that we find so amusing and interesting and comfortable.
I like the next one, and you note that you wrote this before Marie Kondo, or at least before you had heard of Marie Kondo. But does use of this technology bring me joy? And I want to use here a technology that people don’t always think of as one, but that you’ve written about eloquently, which is the table.
Yeah, I owe that to Hannah Arendt, the thinking about how the table gathers and separates and creates a setting for conversation, for fellowship. That’s been so fundamental to human experience. Food and sociability have been a centerpiece of human cultures throughout recorded history and beyond.
And so the table that we gather around in this way can be a great source of joy. At the time, I often found myself complaining, and listened to others complain, about their experience of this or that social media platform. And the question often just came to mind: well, why do we persist?
Why are we doing this? If I leave it frustrated and anxious, and if it doesn’t bring me a measure of satisfaction, maybe we ought to rethink our relationship with it. Not that everything should necessarily bring joy or happiness in a sense, but I think I would oppose the conviviality of the table, the way it relates us and brings us together, to the table-less world of the internet, where we’re all thrown together.
There’s no space to be silent together. I think about how in even just a directly embodied context, we have the capacity to be silent. And that silence becomes meaningful. But we can’t really do that online, which I think is often the source of a lot of our angst.
And to add another way that you can interpret these questions differently, one of the things I thought about with the table was that my experience of the table as a technology has changed a lot during the pandemic. In offices, for years now, I have primarily eaten at my desk. I go out, I grab a fast casual salad, or sometimes I bring lunch, and I eat, typing away like the pathetic worm that I am. [LAUGHS]
And then, the world went to hell, and I was stuck in my house all the time. And most days, I eat at a table with my partner. And that's nice. I mean, it's not like it's amazing that I've discovered it's nice to eat with other people. I knew that before. But it's a difference, and it is a difference that is somewhat imposed by the spaces I am in.
Right. Yeah, and I sometimes talk about technology, sometimes about material culture. Even the way furniture is organized, something as simple as a table — the way it organizes our relationships and mediates those relationships, the way it, again, brings us together or excludes us. And there's a lot of talk about the open office and those sorts of dynamics, so I think we're, in some respects, attuned to that.
But even in our homes, the ordering of this material space through these various artifacts can be more or less conducive to encouraging connection, human relationship, conversation.
The next one, and you tagged this a minute ago, is: does the use of this technology arouse anxiety? And rather than give you a technology, I want to observe something that I have discovered about myself, to my disquiet, over the past, let's call it, five or six years, which is that I would have thought what motivates me to use technologies is joy or interest or utility.
And as I've become more mindful of what is going on in my body, and as I have had times, like a book leave, when I tried to pull myself away from certain technologies, I have been struck to find that anxiety is a motivating emotion. Much of what I check constantly — I don't go to my email because I'm struck with a quick bolt of joy when I think of opening my email inbox. I go there because there is an almost subconscious anxiety about what might be waiting for me. The same is true of Twitter. The same is true of many other notification devices. To some degree, the same is true — although actually less so — of the news for me. And I'm not saying that my experience is everyone's, but my experience is mine, and it was not until I began looking very closely that I saw what my experience actually was. I thought I used these things because I liked them.
I actually use them because I’m a little bit afraid of them. That is their real pull on me. And anxiety as a motivating force to use technology is, I think, a bit of an under-discussed topic.
Yeah. Anxiety or fear. There’s an impulse that is very modern to try to control as much of our experience as possible. I think of this. I don’t know if this is your experience, but I think of this with my children.
I want to know where they are, what they’re doing. And there are ways that technology increasingly allows me to surveil them constantly, if I wanted to, right? To have a regular feed of their health metrics sent to me.
My friend Alan Jacobs has this line about how surveillance is becoming the normative form of care, and it's driven by anxiety. It's driven by fear. And a lot of times, the technology itself, instead of alleviating that anxiety, will of course just heighten it. Because we'll become more sensitive to every seeming deviation in whatever we think is the normal rhythm or pattern of things. Any kind of blip in the metrics will make us anxious.
I used to joke, when texting was becoming a little bit more common back in the early 2000s, about the lag time between sending a text and hearing a response — you know, it was either, this person hates me, or they've died. There's a sense of anxiety about what has happened. And in some respects, it might have alleviated our anxiety to be able to text somebody to say, hey, how are you doing. And now, it has the odd effect of generating more anxiety about that.
The next one is, how does this technology empower me? At whose expense? And I want to elicit something very specific here, which may not be the obvious place the question would go.
But you’ve written an essay that is, probably out of everything you’ve written, the thing that I think about most, which is, ”[When] Silence Is Power.” And could you talk about the technologies and the conditions in which the most powerful thing you can do is remain silent?
That piece — the first time I remember writing on that theme — was, I think, around 2013, and it arose out of frustration with the dynamics of social media. And on social media, right, you exist by saying things. It goes back to your point earlier about how Twitter kind of prompts us to speak, to say. You don’t exist unless you’re saying things.
So there are notoriously bad actors on Twitter. There's hate speech. There are opinions that we want to challenge. And the problem, as I began to see it, is that challenging those very often had the opposite effect, of amplifying them.
Now, it’s more common to give that precise term to this dynamic. You’re amplifying. And in the so-called attention economy, in many respects, your actions are counterproductive to your express goals, right?
And so I began to think that, if I am a node in this network, if I want to challenge something that’s absurd or despicable, the best way to do it might simply be — the way I thought of it to myself was — to kill that message in my little part of that network. And the way I do that is by remaining silent, and that means I’m not spreading the actual content of that message on to those on the other side of my network.
My silence is calculated. And I often think about this, and I confess that I’m sometimes uneasy, because I understand the need to speak, and sometimes to challenge obvious falsehoods, or distortions of truth, or attacks on somebody’s character. But there’s a role for strategic silence, I think, in these contexts, where the silence ends up being more powerful in the larger frame of these network dynamics.
You think you’re uneasy? You should think how I feel.
I mean, I think about this essay a lot. Because in my world — and I'm a social media user who is in politics — there's a tendency for there to be a mass of position-taking on whatever, to use your term, the pseudo-event of the day is. Sometimes that event is very real, but is just not an event I have expertise on.
Sometimes it isn't even that political. It's that somebody died. And Twitter fills up with eulogies from people who don't actually have any personal connection, and in some cases, not even a very deep fan connection, to the person. But all this begins to change you, to make you believe — and there's an algorithmic incentive for this — that you should take a position on everything. And if you don't — and this is the weirdest thing about it to me — if you don't, there's a part of you that thinks that is going to be really noticeable.
Like, people are gonna be like, well, why didn't he come and comment on this thing everybody's mad about, that he has no expertise on? And to me, something that's been very influential for me from this essay — I'm going to read a little bit of it — is the idea — and others have come up with this, too; Whitney Phillips, a communications scholar, has done great work here — that you have to understand, on a lot of social media, what gives an idea its power is whether it is talked about, not how it is talked about.
You talk about pseudo-events as being immune to critical speech. Speaking of them, even to criticize them, strengthens them. And then, you say a little bit later, “how does one protest when acts of protest are constantly swallowed up by that which is being protested? When the act of protest has the perverse effect of empowering that which is being protested?” And your answer to that is silence.
But on the other hand, it feels sometimes like that’s a collective action problem. Like, that would be the right answer if everybody used it. But then, if you’re the only one using it, then aren’t you just giving up your leverage in the conversation, or even worse, leaving that space in the conversation open to worse actors? I think that is how people feel or end up feeling trapped in the cycle of speaking on everything, because if they don’t, well, then look who will be.
Yeah, I think the first time I began thinking about this was after the shooting in Newtown, Connecticut. And I found myself wanting to say something, just something expressing my condolences, my sadness. And I asked myself, why do I want to say this, and for whom?
I think, again, if I were face-to-face with somebody mourning, it would have been perfectly appropriate for me to be silent, for my presence to speak as it were, to embrace, to hold a hand, to allow tears to flow. There are all these ways to react to that kind of situation. But you can’t do any of that on social media, so you feel the urge, the imperative to speak, and I don’t know — I don’t have a formula for how you sort that out. But I did want to put silence on the table somehow, and I think it can be meaningful and useful in certain contexts for certain people.
I also just think, as a human being in the world we live in now, it is good to cultivate the discipline of shutting up. [LAUGHTER] And I think it often is a discipline. I think it is good to cultivate the discipline of nope, not everybody needs to know what I’m thinking right now. Not even sure I need to know what I’m thinking right now.
And I'm not saying that I have cultivated it — the opposite is actually much more natural — but I've begun to think of it as a harder road and something I want to be better at. Which brings me to another question here, which I like, which is, how does this technology encourage me to allocate my time?
And something I wanted to bring up here as a conversational object is the difference between what I think of as transient information and more permanent sources of information. Which is to say, there are certain kinds of information whose allure is partly that, if I don't look at them now, they will be gone. Social media feeds are like this, and to some degree, the news can be like this. A lot of information is like that, whereas the books sitting on my table, they're just always there.
And so it creates this idea that there’s an urgency to this and not to that, but of course what’s really limited is my time. And if I spend all my time on the transient information and not the more permanent, the books don’t get read, and the Facebook messages or whatever do. I’m curious how you think about that, the way we are pushed to allocate time between different kinds of information or mediums.
Yeah, I mean, I think you articulated it well. It's the imperative of the present, right? So if we think of social media sort of temporally, it's almost as if the past falls off a cliff after 48 hours. Yesterday might as well have been two years ago.
What matters is what's happening right now in this particular cycle of events or discourse. And we know it. We recognize it, right? We know how these hashtags trend for x amount of hours, and then they're gone. And then we think about them in retrospect, and they appear silly, but we invested ourselves very much in them at the time, and they took up a lot of our headspace, if you like, and our attention.
And so the lure, the temptation, I think, of the structure of social media is very much to concentrate our attention on what is immediate. And if we don’t speak to it, if we take a day to think about what we might want to say, the moment will have passed. And so I think there is a discipline not just of shutting up, which I think is really well put, but a discipline of maybe — I don’t know if not caring is the right way of putting it, but of recognizing what I don’t need to attend to.
The news certainly, I think, partakes of it. You know, Neil Postman, I think, was very articulate on that count. But as far back as the 19th century, Soren Kierkegaard writes about this in relation to the way that the telegraph made daily news possible, in bringing news from other parts of the world and other parts of the country to the attention of everybody all the time.
So it's that collapse of the time necessary to carry a message — which became possible with electronic media — that asked us to give our attention to a much greater range of subjects and topics, and now requires us to think actively. The Chinese-American geographer Yi-Fu Tuan has a book on place from the late '70s, and he has this interesting little observation about how place used to structure time.
Because the longer it took for information to get to me, the farther away it was, and thus theoretically, the farther away from my own lived experience and what was important to me it might have been. And so once electronic media kind of collapsed that ordering function of distance, then now, we have to become active in deciding what is it important for me to give my attention to right now. I mean, that itself, just having to make that decision, can be very taxing.
I’m going to combine a few of these. What knowledge has the use of this technology disclosed to me about myself? What knowledge has it disclosed to me about others? And is it good to have this knowledge? What’s in your mind as you think about that?
So I think, with respect to the last two questions, there are aspects of people’s lives that often now tend to get captured online. And I think of some kind of egregious examples of this. Maybe it was a year ago, two years ago.
Somebody on a flight is sort of narrating on their Twitter feed the discussion a couple in front of them is having, and it goes viral. That’s just one example, but there are various aspects of what used to be considered private segments of our life, of our experience, that are increasingly made publicly available. And I wonder if some of those aspects of our own lives might not be better left private, that I have no business — I have to learn to avert my eyes, I think, sometimes from those kinds of examples.
Not in a prudish sense, but just because there's a kind of imposition on the autonomy of the people involved. When these aspects of their lives are captured, especially without their consent, and are made accessible to me, I need to learn to look away. It's not good for me to know that, right?
And even knowledge of myself: when I turn it to myself, then I think about maybe the kind of knowledge about myself that a genetic analysis might make available to me. I think it was a conversation on your show recently, where I think you were talking about whether you’d want to know the precise time of your death or something along those lines. And would that kind of knowledge about the kind of genetic package that I bring into the world — how useful would that be to me?
Would it be good for me to have that? I don’t have a good answer to that question, but I can certainly see where it would be problematic and sometimes might be better left unknown. At least I think a case could be made for that under certain circumstances.
There's a question that's very close to my heart, on issues I care about a lot, which is, does my use of this technology encourage me to view others as a means to an end? And I'm somebody who thinks a lot about animal suffering and animal rights issues, and part of what I think permits the kind of industrial-scale cruelty we now have — which is not just about the question of eating animals, which we've done throughout human history, but about treating them simply as inputs to an industrial process, and having the technology to do that.
But all these technologies, from refrigeration to the way we transform them and process them to what comes out the other end — it doesn’t look at all like an animal, and it’s shrink-wrapped, and it’s unrecognizable. And so it gets further and further and further from whatever it once was. But also, we begin to think of the animals as simply raw materials that come in for processing.
And obviously, there are a lot of examples of this for human beings, too, but I’d be curious if you have any reflections on that. Because I do feel strongly about it. I think a lot about the ways in which certain technologies teach us or change us from being little kids, like the one I have, who believes animals to be animals, and they’re in all of his books and his toys and his cartoons, to, at some point, they’re not. They’re something else.
Yeah, there’s this tendency — Martin Heidegger talked about it in terms of rendering all of reality as standing reserve, raw material for our projects. And the non-human world simply becomes things to be disposed of at will. And so we end up not only doing this, in his example, with trees in a forest, but even with, as you describe, the land itself, and with animals that lose their moral standing.
Things in this worldview don't have moral standing, and so to objectify a thing is to deprive it of its moral standing and whatever rights might come from that. And I think it certainly is a lot easier to do that when the process itself has veiled the fullness of the reality of the animal, for example, from view. If we've isolated ourselves enough through the different layers of artificiality that we have built up around us, we lose sight of what those layers of artificiality depend upon, whether it's the land or the non-human world. And so to become attentive to these again would, I think, certainly be very important, morally significant.
I think it’s a very nice lead-in to the last question on your list that I’ll ask of you, which is, can I be held responsible for the actions which this technology empowers? Would I feel better if I couldn’t? Could you talk about that, and a bit about how you think of the ethic of responsibility in technology?
So it has struck me that a lot of the systems that are in place now can be seen as machines for the evasion of responsibility. I think I use that phrase at some point. For example, the way that algorithmically structured processes might so distribute agency that it’s hard to then hold anyone responsible for their outcomes, where they purposely obfuscate the nature of responsibility. It becomes very easy to say, the algorithm made me do it, to put it tritely.
Bureaucratic structures did this even prior to the age of algorithms. You distribute human agency into the operations of a system in such a way that someone can plausibly, if not ultimately legitimately, say, this wasn't my fault, or I had no power over this process. And so the degree to which we distribute agency in this way through our technological systems — I think a lot of the discourse around autonomous vehicles, for example, works itself around this question of where responsibility lies for whatever actions an autonomous vehicle may take.
But you don't have to look to that not-yet-fully-realized example. There are multiple ways in which we've structured modern life such that it's easy to say this isn't my fault, the machine made me do it, the process made me do it, the system made me do it. And I think this is ultimately not really conducive to a healthy society. I don't know what the answer is, once we've already put these structures in place. But I think it's important to be able to locate moral responsibility as much as possible, even when I might not want it to be attached to me.
I think that’s a nice place to come to a close, so let me ask you finally what’s always our final question, which is, what are three books you recommend — but particularly, what are three books you’d recommend on also being a human being? Because we’ve been talking a lot about how technology changes us, but you quote from a lot of work that thinks deeply about what kinds of practices and social structures make us more deeply human. And I’d love some recommendations in that space.
Yeah, I think I’ve gotten through this whole talk without mentioning Ivan Illich once, and Illich’s work is very important to me. And so I will take at least this opportunity to mention, perhaps, “Tools for Conviviality.” There’s a very clear sense of what human flourishing entails in Illich’s work, and “Tools for Conviviality” is at least one good place to start with that.
And so I mentioned that Hannah Arendt has been very important to me as well. “The Human Condition” explores a lot of that terrain, in terms of what political structures are conducive to human flourishing, what work means in human life, where we can find satisfaction in work and labor and world-building. And I’ll mention a third book, then, by Albert Borgmann. I think I did mention Borgmann at one point.
“Technology and the Character of Contemporary Life”: it has some profound political reflections, but I think most of it focuses much closer to our day-to-day lived experience and how technologies can lead to genuine satisfaction and enjoyment, build communities, and build stronger interpersonal relationships. And so I think the question of the good and of human flourishing is very much alive in Borgmann's work.
Michael Sacasas, your newsletter, which I highly recommend and have subscribed to for some time, is “The Convivial Society.” Thank you very much.
Thank you, Ezra. It was a pleasure.
“The Ezra Klein Show” is a production of New York Times Opinion. It is produced by Jeff Geld, Rogé Karma, and Annie Galvin; fact-checked by Michelle Harris; original music by Isaac Jones; and mixing by Jeff Geld.