A Pastor and a Philosopher Walk into a Bar

Porn, Games, and Echo Chambers: An Interview with Dr. C. Thi Nguyen

January 27, 2022 · Season 2, Episode 14

This one's fun. Thi Nguyen is a philosopher from the University of Utah and studies epistemology and philosophy of art, community, and games. He also used to be a food critic and enjoys great whiskey. A man after our own hearts.

In this episode, we talk about things like the seduction of clarity, moral outrage porn, epistemic bubbles, echo chambers, expertise, and our gamified world. Seriously fascinating stuff. You can find Thi's work at his website: https://objectionable.net/. His recent book, Games: Agency As Art, won the 2021 APA Book Prize.

In our tasting, we enjoyed the limited release Cabin Strength Bourbon from Central Standard Distillery available exclusively at Story Hill BKC in Milwaukee.

The beverage tasting is at 1:37. To skip to the main segment, go to 5:23.

You can find the transcript for this episode here.

Content note: this episode contains profanity.

=====

Want to support us?

The best way is to subscribe to our Patreon. Annual memberships are available for a 10% discount.

If you'd rather make a one-time donation, you can contribute through our PayPal.


Other important info:

  • Rate & review us on Apple & Spotify
  • Follow us on social media at @PPWBPodcast
  • Watch & comment on YouTube
  • Email us at pastorandphilosopher@gmail.com

Cheers!

Kyle: [00:00:00] Well, welcome to A Pastor and Philosopher Walk into a Bar. Today, we have a guest that I may be more excited about than any guest we’ve had. I’m not sure. I don’t want to, I don’t want to oversell it, but I’ve been following this guy’s work for a long time. He writes basically all the things I wish I had written. He publishes in all the coolest philosophy journals. All of his arguments are like novel and interesting and get a lot of traction. And he’s also like a pretty up and coming public intellectual. So he’s able to somehow get op-eds in all the coolest places as well. And he’s just so interesting. I don’t know. I’ve heard him on a couple podcasts and he just seems like a really fun guy to talk to. So I’m really pumped about this.

Randy: Well, now you just set us all up for disappointment, no matter what he says. No, I’ve, I’ve listened to a couple of podcasts as well, a couple of him speaking [00:01:00] and, holy cow, he grabbed me within three minutes. I mean, I was in. Super, super fascinating, interesting, philosophical, but really relevant stuff to our culture.

Kyle: Yeah. Yeah. So we’re talking to a guy named Thi Nguyen and he’s an epistemologist, like myself, except a real one, I guess. He works at the University of Utah and has graciously agreed to appear on our podcast, even though he doesn’t really work on anything religious.

Randy: Yeah. I mean go figure. I mean, maybe he’s Mormon. He’s in Utah.

Kyle: Maybe! We’re going to find out. That’ll be the first thing I ask him now.

Randy: Before we, you know, intro our friends out of listening, let’s get to the tasting, which is what we do around here. We are A Pastor and a Philosopher Walk into a Bar, so we taste something that’s alcoholic in nature. This is Central Standard Distillery’s offering called Cabin Strength. They have this really great award-winning bourbon called Red Cabin, which is just this four to six year old standard bourbon that then finishes in Cabernet casks.

This one, Cabin Strength, is the [00:02:00] same four to six year bourbon, which is very standard, aged in North American oak barrels. But then they pull it out from those barrels and then they re-oak it in brand new, fresh oak barrels for another year. And then they do what everybody loves these days, this huge fad.

And I know you know what I’m talking about you bourbon drinkers—it’s uncut. So this is a strong, strong lady. It’s 125 proof. You’re going to have… You might hear some, some coughing here in this tasting, some gasping and wheezing.

Kyle: Yeah. And I promise listeners, I have not, like, pre-sipped, so my palate is not primed. So let’s see how this goes.

Randy: Ready? You ready for this? I’m just gonna take the nose first.

Elliot: It is…

Kyle: I get strong cherry on the nose, 

Elliot: …dark toasted caramel color. 

Randy: Yeah. And dark toasted caramel fruit nose to it. Yeah. 

Kyle: Yeah, I like this already. Actually, not nearly as much of a punch as I expected when you said 125.

Elliot: Same here.

Randy: No, no. I get the double oaked, majorly.

Kyle: Yeah. I’ve [00:03:00] had lower proof that’s, like, far stronger of a burn than this. 

Elliot: I really enjoy it. And I want to try it cut to see, just because I’m curious, but I think I would drink this way.

Randy: Same. That’s delicious. I’m totally surprised.

Kyle: It’s so sweet. Which I love. That’s not a critique at all. I love sweet bourbons.

Randy: So sweet can mean new-make-y, which, I don’t like that.

Kyle: It can, but not here, no. 

Randy: Here, it’s vanilla; it’s cherry.

Kyle: It’s all barrel sweetness. Yeah. 

Randy: Man. This is good stuff.

Kyle: Yeah. That extra time in the new barrel definitely added some stuff.

Randy: The color is beautiful. Yeah, the smoky oakiness. Elliot now just cut his.

Kyle: Which just means added a little bit of water for those that want to know.

Randy: What do you think of the cut?

Elliot: I like it cut too, I don’t know. It doesn’t change it much for me.

Kyle: Yeah. People say, you know, it “opens it up,” and that’s kind of the phrase, and it’s very difficult to think of a better way to put it.

Randy: It does though. It, this one, I [00:04:00] think it does open it up.

Kyle: Yeah. It like, I don’t know, covers more of my palate; I don’t know how to say that.

Randy: It harmonizes it, I would say, here. In this case, it’s more harmonized, but that’s not necessarily a good thing because I taste fewer distinctive flavors.

Kyle: Exactly. It makes it more, I don’t know, typical. Yeah.

Randy: It’s like, I want my beef stew to marry and all those flavors to come together rather than tasting the tomatoes or the potatoes or the onions or whatever; I wanted that to marry. But in this case, I think the bourbon might be better un-harmonized where you can taste all those flavor profiles differently. 

Kyle: So you’re saying leaving it uncut was a good decision.

Randy: Yes. I’m happy we did that.

Elliot: All right.

Kyle: The fad has something to it.

Elliot: Would you say it’s more than just a fad?

Kyle: Abso… my favorite bourbons are uncut, and usually when I cut them, I kind of wish I hadn’t. And that’s true in this case as well. 

Randy: Yeah. Usually I like, I prefer to trust the master distiller more than myself. But this right here, just want to tell you listeners, it’s available for a limited time. There’s one barrel that Story Hill BKC [00:05:00] purchased, and it’s available until it’s not. So go to Story Hill BKC if you’re in Milwaukee, ask for the Cabin Strength, Central Standard Distillery. Tell them that the podcast told you to ask for it, and then you’re welcome. Enjoy it. And start out…

Kyle: It’s really delicious. 

Randy: … at least start out uncut. 

Kyle: Yeah. Like I want a bottle.

Randy: Yeah, totally, go to, go to Story Hill and buy one. Cheers.

Kyle: Well, Thi Nguyen, thanks so much for being on A Pastor and Philosopher Walk into a Bar.

Thi: Thank you. It’s great to be here.

Kyle: So one thing we like to ask our guests before we get rolling is if you’re drinking anything that you want to tell us about.

Thi: I am drinking a bit of Leopold Rye. Leopold is, in my opinion, the best distiller working in America right now. And they do a lot of, like, open barrel ferments, and they do these weird, wild things, and their stuff is, like, odd and interesting and crystal pure.

Randy: Open barrel whiskeys? 

Kyle: But this is a whiskey?

Thi: [00:06:00] This is a whiskey, but it’s, like, fermented open style. Like, you know, like you would do like a Belgian beer, and so it’s picking up, like, wild yeasts. 

Kyle: Sure, and you can taste that after distillation?

Thi: Oh fuck yeah.

Kyle: Interesting. Now I’m very intrigued. 

Randy: Absolutely. Have you ever had Leopold? 

Kyle: I have not. 

Randy: No. Me neither. 

Thi: Leopold’s is the most interesting. They’re in Denver. And they’re right now, to me, the most interesting distiller in America. I used to be a food critic.

Kyle: We’re gonna, we’re gonna look this up and get it on, get it on the podcast.

Randy: You used to be a food critic?

Thi: I used to be a food critic.

Randy: Oh, I wish I knew that, would have added four more questions.

Kyle: Why are you a philosopher? Food critic seems like a much better gig. 

Thi: Yeah, it’s interesting. So I actually got the food critic job cause I was in graduate school in Los Angeles and I started drunk posting on a food board called ChowHound back before Yelp. And then apparently my posts were sufficiently interesting that the food editor just like called me and offered me a job.

Kyle: Wow. 

Thi: I remember the post that did it too. I was like, [00:07:00] there’s this place called Roscoe’s House of Chicken and Waffles, which I love, like fried chicken and gravy and waffles and syrup.

Randy: Sign me up.

Thi: And then some asshole was like, well, these aren’t proper Belgian waffles. They’re soft instead of crispy. And I’m like, fuck you, you’re wrong. And I’m going to get drunk and do some empirical research. So I went down there and I was like, no, he’s wrong, like, it’s much better soft. And then I was like, oh, this is really familiar. And then I realized like the soft waffle and the gravy and the crispy chicken and the maple syrup was like the same pattern as, like, Peking duck, you know, has like a soft bun and, like, plum sauce and then crispy duck. And that I ranted at like two in the morning online…

Randy: Dude.

Thi: …about, like, Joseph Campbell and the eternal forms of food. And I got a food writing gig,

Kyle: That’s amazing. Would never happen again.

Randy: I mean, Kyle, Kyle tonight is going to go do some, some random Yelp reviews, hoping to land that gig here. 

Kyle: If only. Yeah. Now I’m really hungry. Thank you. 

Randy: [00:08:00] So Thi, can you tell us, just tell our listeners about who you are, what area of philosophy you, you, you specialize in and just your whole world.

Thi: I’m Thi Nguyen. I am associate professor of philosophy at University of Utah. My, apparently my specialization in philosophy is so weird that when some of my colleagues were trying to describe it to someone else, they just, like, gave up in, like, conniptions of laughter. Like, I work in philosophy of art… So, okay. Stuff I’ve written about recently: games as an art form, gamification, echo chambers, trust, porn.

Kyle: Yeah. Some of our listeners just did a double-take.

Thi: I mean, I think it’s all related. And one way to put it is that, like, I think a lot of philosophers historically have been interested in, like, the individual out of context. And I get super interested in the individual in context. And a lot of the ways that people work on the individual in context just involve talking about, say, communities or governments. And I think that’s right, but also technology is part of it.

Like, it’s, [00:09:00] technology is part of the way we communicate. And I end up working across two fields I think not many people work in. One is social epistemology, which is a study of, like, how people know things in communities as groups. And the other is the philosophy of art, which to most people is like a weird connection.

But to me, art has, the philosophy of art has always been the study of the relationship of communication and technology because the philosophy of art studies, like, oh, what happens when we get photography? What happens when we get film? What happened? Like, how does each of these change the ways that we connect to each other and express things and express subtle things?

And so, like, the last seven years of my life were obsessed with trying to get a better theory of, like, what games could communicate that was unique. That’s what I work on.

Kyle: Awesome. Were you a gamer prior to that?

Thi: Oh my God was I a gamer. Yes. 

Kyle: Yeah. 

Thi: Though most of my interests, I think I’m, I barely play computer games these days, partially because I’m married with kids and I don’t have time, and partially because I feel like mainstream computer games are [00:10:00] getting better and better at the technology of addiction, which I’m sure we’ll talk about.

Kyle: Yeah. that’ll come up.

Randy: So just for the gamers who are listening, what are like two, three games that you lost years of your life to?

Thi: Oh, my God, the Civilization games, like, like, I’m never allowed to touch any Civilization game again. Like…

Kyle: I’ve never played one specifically for that reason. I know myself too well.

Thi: And then like, I mean, I can’t really, like, I was really into the old school D&D computer role-playing games, like before Skyrim, there was like, you know, Ultima and Baldur’s Gate, and like that era of…

Kyle: Hmm. 

Thi: …like, or anything like very tactical, like XCOM, like from the original, like I mean, I, I had an original Atari, like 2600, like, I’m from that era, but I was, I was like, anything that involves leveling and grinding is really dangerous.

Randy: This sounds like a dance club to me. I have no idea what you’re talking about.

Thi: You have no idea what I’m talking about.

Randy: Leveling and grinding (laughter). [00:11:00] 

Thi: You lucky dog. 

Randy: So Thi, you’ve talked about what you’ve written about recently, and I’ve, I’ve listened to a few things that you’ve talked about the things that you’ve written, and I was fascinated. So let’s just dive right in. You talk about porn, and you talk about specifically, what I loved, was you’re talking about moral outrage porn.

So can you first give us your definition of porn, whether that’s sexual or whether that’s food or whether that’s real estate porn, all that stuff. Give us that kind of… fit, fit our listeners in there. And then tell us about moral outrage porn and what, what you see, what you think.

Thi: I wrote a paper with Becca Williams. This paper is a product of our thinking together, and the conversation actually started as a drunk, late night conversation between the two of us on somebody else’s Facebook posts. Um, and it was just about, I was just starting to joke around like, you know what, we don’t have a good philosophical account of porn in a general sense.

Like, you know, like food porn or real estate [00:12:00] porn. Like, so my wife loves this site called Things Organized Neatly, it’s a Tumblr. And it’s just like pictures, close-up loving pictures of, like, pencils arranged by color or like corks neatly stacked. This is all, like, if I told people, like, oh, this is organization porn, like, everyone knows what that means. Right.

So it seems like there’s this general notion of porn that we, we get. Actually, one of my favorite examples, we were writing this paper and Becca called me and she was like, oh my God, I was watching Saturday Night Live; it was like, I don’t know, like five months after Trump had gotten elected and someone, one comedian was like, just stop it with all this impeachment porn.

And, like, everyone laughs because they know exactly what this means. And funnily enough, if you look at the, all the discussion of porn in philosophy, the definitions of what porn is are all inherently sexual. And so we’re trying to figure out like, no, we know what this means. What could it be? And Becca actually found this paper by Michael Rea, [00:13:00] who’s a really good philosopher, about sexual porn.

Kyle: Yeah, we’re hoping to get him on the podcast. 

Thi: Oh yeah, he’s awesome. But his definition of sexual pornography is basically that sexual pornography is images of sexual content used for immediate gratification while avoiding sexual intimacy or the development of relationship. Cause what he was really interested in, and I think this is deeply right, is that two people in a loving relationship can exchange, like, erotic nude photos with each other and that’s not porn. Right.

And I think this is really deeply interesting that mere nude, sexual, or erotic content isn’t porn, if it’s part of a relationship, right. If it goes somewhere. And so we were like, oh my God, this seems right. We can generalize this. And so our definition of porn, of any kind of porn, philosophers, we call it, you know, X porn, where X is a, like, algebraic signifier, right? So X porn is any case in which you [00:14:00] have representations of X which are used for immediate gratification while avoiding the costs and consequences of genuine entanglement with X itself.

Randy: That’s so good.

Thi: So like food porn, right. Food porn is pictures of food you use for immediate gratification, but you don’t have to cook it or deal with the calories or deal with nutrition, or, like, try to go to a restaurant. Or, like, real estate porn is you just see images and it makes you feel good. You don’t have to buy it. You don’t have to maintain it. You don’t have to keep it clean. Right. 

Randy: SNL had this amazing skit this last year about how Zillow has turned into, like, the 30- and 40-year-olds’ 900 call-in line, and it’s exactly what you’re talking about. All the gratification without any of the property taxes, without any of the upkeep and maintenance. It’s porn.

Kyle: Yeah. This is HGTV.

Randy: Yeah, yeah, yeah. Continue Thi.

Thi: Oh, yeah, so, so we were like, okay, then if you have this definition and it’s a good definition, it should help you, like, it should be useful and help you identify new kinds of porn. So here’s one kind of porn: moral outrage porn. [00:15:00] And moral outrage porn is representations of moral outrage used for immediate gratification while avoiding the cost and consequences of actual, genuine moral outrage.

Randy: Oh, so many listeners are just slayed right now.

Thi: Okay. So, I mean, I think people immediately know what this is. They know what… I think in the paper we’re like, maybe, like, 50% of Twitter is moral outrage porn. But I just want to say something: so this paper of ours—we should’ve seen this coming—has already been abused. And I just want to tell you about the abuse that annoys me the most, because it’s the opposite of what we want to say. What annoys me the most is people who want to be like, oh, this means that any moral outrage is bad. Don’t… 

Kyle: Right. 

Thi: …express moral outrage. Like, just be a calm, rational person in the center. And I’m like, no, that is not what we were trying to say. Right. What we’re trying to say is that genuine moral outrage is deeply important.

[00:16:00] Like, genuine moral outrage oriented at the, like, the morally terrible and the unjust—that is what gets you to act. Like, genuine moral outrage. So I believe philosophers like Martha Nussbaum who say things like, well-tuned emotions are perceptions of moral states of the world. Like, well-tuned anger and outrage is a rendering of how unjust the world is. Like, that’s the important stuff.

The problem with moral outrage porn is it undermines the importance of genuine moral outrage, right? So either moral outrage porn is something you enjoy without taking action, right? Or it’s something that you enjoy, and because you want to have the pleasures of it, you simplify your morality. You get attracted to simple expressions of outrage, instead of trying to figure out the nuanced, complex, nauseating thing, which is what genuine moral life actually is.

So this was never intended to be, like, an attack on moral outrage. [00:17:00] This is intended to say something like moral outrage is so important that the pornification of it, like, threatens the ability to have genuine moral progress against injustice.

Randy: Yep. And would you say a facet of moral outrage porn is the, the act of feeling righteous and like, my voice will be heard, without actually doing anything about it?

Thi: Well, yeah, I mean, okay, so this is, again, I just want to be so cautious because it’s not that every feeling of righteousness is a case of moral outrage porn. You can be righteous because you’re on the side of right. Right? Abolitionists in the antebellum south were righteous. They should’ve been so, and feeling good about being righteous in that case is really valuable. Right? It keeps you going.

The worry is that if you just want the feeling of righteousness, devoid of actual moral truth, then what you’re motivated to do is to find the simplest, easiest position you [00:18:00] can, that just lets you get moral outrage all the time and to take a stance that won’t lead to action because action is hard, right? Just the pleasures of simplified moral outrage are easy.

Randy: Yeah. There’s this little nuance between this idea of moral grandstanding and moral outrage porn. And you’d say probably there’s a difference between the two, right?

Thi: I mean they’re really similar in a certain way. Both… so the notion of moral grandstanding is basically using expressions of morality for social status, and the idea of moral outrage porn is using judgments of morality for pleasure. I think, like, both of them have the same, like, what moral expressions are supposed to do is track genuine morality and get you to take genuinely moral action.

Like both of these are, you might say, perversions of morality, right? They’re perversions, either aimed outwardly or aimed inwardly. But again, like, but you also see the parallel problem. Like, one of the things that really irritates me about the current social uptake of moral grandstanding is that people have [00:19:00] used it to make accusations against anyone that makes any moral claim, and they’re like, oh, you’re just grandstanding.

And it’s like, no, no, no, no, no, no. It’s not grandstanding if it comes from a genuine moral belief. It’s grandstanding if it’s been re-aimed at status. And it’s often really hard to tell. One of the things I noticed is that a lot of people that make accusations, or I’ve seen people, like, using the stuff about moral grandstanding and moral outrage porn to, like, attack anyone that expresses moral, a moral stance.

I keep thinking like, you know, it’s really funny that people keep accusing their political opponents of grandstanding and moral outrage porn, but not their side. And that itself is a sign that something has gone funky and wrong.

Kyle: Yeah. So if this is too nerdy, we can take it out. But in your conception of moral outrage porn, is it deontic, consequentialist, virtue theoretic, or none of the above? So here’s a case. So let’s imagine that Kim Kardashian gets really pissed off about something and genuinely morally outraged, posts about it to her however many [00:20:00] million Twitter followers, and because she has that many Twitter followers, actually changes the situation for the better. Is that an instance of moral outrage porn? Why or why not?

Thi: So in this account of pornography, the essential part is a particular user’s intention in taking it up. So the same thing might be porn for one person and not for another. And this is actually built into the original Michael Rea account. Like, this is a lovely point of his, that, you know, a couple in a relationship can exchange nude photos and that’s not porn. And then someone can take that out of context and use it as porn. And so the fact that someone’s expression of moral outrage was genuine doesn’t mean that somebody else couldn’t also use it as moral outrage porn. Right.

Kyle: Yep. 

Thi: As to the first question, I like, I don’t, we tried to write this thing so it was compatible with almost any…

Kyle: Didn’t tie you to anything specific.

Thi: Yeah, we were trying to, we were trying to write it like, basically, as long [00:21:00] as you think that moral beliefs should track actual state of affairs, then you should get, you should get on board.

Kyle: Okay. Yep. 

Randy: You’ve, these, these sound similar to me, or seem similar to me: you’ve also written and talked about the seduction of clarity and what comes from the seduction of clarity, understanding as orgasm, fascinating stuff. Can you bring our listeners and us into these ideas and concepts?

Thi: This is another, like, this stuff is another case where there’s this thing that’s really good—that’s clarity—and then people give you, I dunno, like a fake version of it. And that’s the thing that I’m really, I got really interested in. So, I mean, I started thinking about this just because when I was working on some other stuff, I ended up reading a lot of, I spent a lot of time on conspiracy theory websites, just like hanging out reading forums, trying to understand how things are working.

And like, it seems like one of the, one of the incredibly interesting things about conspiracy theories is they offer you a single potent explanation for everything. And people say things like, it’s very powerful and it’s very clarifying. And [00:22:00] I think part of the thing is like, I actually don’t think the world is that simple. Right.

But it, like, feels so good to have a single explanation that just takes everything into account. So I was getting really interested in how this might work. And basically my thought is something like, so we’re limited cognitive beings and we can’t investigate everything forever, for all time. And so we need to basically guesstimate what’s worth investigating and what’s not worth investigating. Right? Because you can’t think about everything. You could, you could go down the rabbit hole about any question forever. Right? So you have to make this estimate. And it seems like there’s a lot of empirical evidence, in various forms of sociology, that a lot of us are using a feeling of clarity as our heuristic for when we should stop investigating.

Randy: Can you explain heuristic for us non-philosophers? 

Thi: So in many cases, [00:23:00] getting a calculation exactly right is really, really difficult. And a heuristic is a really simple rule of thumb that gets you through things. So let me give you an example of a heuristic. So at one point I used to eat like crap and I tried to eat better. And the first thing I did, which is what a lot of people do, is you start trying to track the calories, nutritional content of every single thing you eat. Right? And this is like, this is dead. You can’t, no one can do this for long; like, people burn out.

And then you start looking for really simple rules of thumb. Like one I ended up using was like, don’t eat processed carbs, right? Like, don’t eat stuff that has flour in it. This is not a perfect rule of thumb, and you can move beyond it. But as a beginning maneuver, like, it’s, it’s a really useful, like, first step. So because we’re limited beings, we need a way to, like, quickly estimate what’s worth investigating or not. Cause we [00:24:00] can’t investigate everything to figure out if it’s worth investigating.

And so one thought is we use the sense that something is clear, right? So what’s that, so at this point, like there’s, there’s actually a really interesting amount of philosophy about this, about what actual clarity is, what actual understanding is. And one of, um… Thank you. I’ve just been brought a cocktail.

Randy: What a good interruption.

Thi: And a bottle of booze! Thank you.

Randy: Lucky man.

Thi: My spouse understands me and how, what I need for a podcast. So in this literature, in the philosophy of science and the philosophy of education, there’s this idea that what it is to really understand is to get a coherent model that can explain as much as it can, right, that can explain as many things and find as many connections between things, that can be communicated easily.

So for those of you who have some philosophy background, old school philosophy and epistemology used to think the goal is knowledge, having true beliefs, right? And what a lot of [00:25:00] people ended up saying was, that’s not enough because under that model, you can have a ton of individual knowledge, but you have no coherency or overall, like, picture. Like, what do you want in science? You don’t just want a bunch of true facts. You want a model that can unite all the true facts, explain them, and, like, make predictions and connect new phenomena.

So my thought was like, okay, if that’s what it is to actually understand, what would it be like to fake that feeling? Right? How do you give people that feeling without actual understanding? So what you want is to give them a really powerful model that can fit, explain all kinds of things, right? Any new phenomenon, it can just handle. And one suggestion is that’s why conspiracy theories are so compelling, right? Because they’re like a cartoon, simplified, powerful version of understanding that just like explodes out and just can give you everything. I think it’s interesting in particular because if you [00:26:00] believe in science and the specialization involved in science, no actual human being can explain everything, right.

And that’s, that’s kind of a sad place to be in. But a lot of these systems, the conspiracy-theoretic systems, make it feel like everything is suddenly in your grasp and you have a model that can explain everything. So it feels more like understanding than the actual world, because you can never actually get, if you believe in science, an actual feeling of understanding for everything, because there’s too much stuff to know.

Randy: So spin-off question or follow-up question that wasn’t on the outline, but when you talk about really simple systems of truth that can kind-of explain everything and people feel really good about it and you know, that’s conspiracy theories, would you—and you can be totally honest because I’ve got a really thick, spiritual or religious skin—would you put religions in that category as well?

Thi: Um, yes, and then let me say more. So it’s interesting. One of my favorite undergraduate teachers was this English professor named Richard [00:27:00] Marius, who was just a lovely guy. I remember him, we were sitting outside one day, office hours, we were talking about Thomas Pynchon. And he said, oh yeah, what Thomas Pynchon makes me think about is that there’s a deep similarity between the aesthetic of mystery novels and the aesthetic of religion, because in both, you have all these seemingly random events and then you find out this thing that provides this unifying explanation where everything makes sense. One caveat: it’s not like there aren’t true theories that make sense of a lot of things, right? That’s what scientists are trying to get us. Right, like, and in many cases, the whole point is that what you want is a theory that can explain everything. So the fact that the theory has explanatory power and a great unificatory power doesn’t necessarily mean it’s bad. Right?

My worry is that there are certain systems that have been optimized for the feeling of understanding and not actual understanding. One thing I should say though, that might soften the bite here, is I think [00:28:00] another place where you find exactly this effect is a lot of bureaucratic systems of justification; that has nothing to do with religion. Like, I see this constantly in administrative life in universities, right? We have simple metrics and we’re trying to create justificatory systems in which you can explain any action in terms of a couple of simple metrics. Sometimes I think, there are a lot of intellectual systems whose appeal is that you can get everything from a single principle: utilitarianism, libertarianism, right? And you might also make such an accusation of some versions of such systems.

Randy: Yeah. Yeah. I mean, as a pastor, I will say if you’re a Christian or a spiritual person who enjoys the quick and easy answer, or your church leader has a habit of giving you a quick answer, a pat answer for every profound question about the universe, question things, and question the motivations behind that, and question your own motivation of feeling good about simple answers to [00:29:00] complex questions, because it’s just not usually the truth.

And there’s a whole lot of mystery within, there should be a whole lot of mystery within our spirituality. So I’m fully endorsing what you’re saying. We should be a little bit suspicious when we hear easy answers to really heavy questions. 

Kyle: Like “where did the world come from”? It’s like, it’s like that old joke, right? That the right answer to every Sunday school question is Jesus. 

Randy: Yeah, yeah. 

Kyle: There’s a reason that’s funny. 

Thi: The one thing I say at the end of this “Seductions of Clarity” paper is something like, what’s the response, and it’s something like heuristics are good until people know what they are and start gaming them. Like, think, for example, in our evolution, we probably evolved to have an instinctual heuristic, and that heuristic is consume as much sugar and fat as you can.

And that heuristic, I think, works in what the evolutionists call the environment of evolutionary adaptedness, because there’s not that much sugar and fat around. Like, if you just cram your mouth with as much fruit and animal as you can, you’ll be fine. But then that heuristic gets gamed. And what [00:30:00] that gaming looks like is, like, Cheetos and Nilla wafers.

And so I think, like, what you have to evolve, what I have to evolve, because I, I, I speak as someone that’s capable of taking down, like, a Costco-size bag of kettle chips in a single go…

Randy: Yeah, baby. Oh.

Thi: …is, I think, now that we know that there are people out there that are trying to game our sense of deliciousness, we have to develop something, which is not, it’s not saying that deliciousness is bad, but when you eat something and it’s just so addictively tasty that you have to immediately be like, wait, wait, wait, wait, wait, wait, hold on a second, let me look at this bag. What’s going on?

I think there’s something similar. It’s not, again, that things that make sense and are easy are necessarily wrong. It’s that because we’re in a world in which people are trying to game our heuristic of easiness of understanding, if something just feels good, you should immediately be suspicious. You shouldn’t just accept, like, the response [00:31:00] to ease of understanding should be suspicion in an environment where people are trying to game you.

Randy: Yup. Yup. Oh man. Politically, can you imagine how much that would change things in our…QAnon…whatever, I’m not going to go there. So this is kind of speaking to like, can you put on your philosophical life coaching hat? I don’t know if you’ve ever put on a hat like that before, but yeah, so, I’ve heard you speak on whether or not we actually love the truth, whether or not we’re dedicated to the truth, or if we just want to have everything that we already think and believe affirmed, you know, confirmation bias, whether or not it’s true.

So philosophy is all about loving the truth and following where the argument leads and trying to get to the, to the bottom of it. Can you just, putting on that coaching hat, can you tell us how to actually seek and love the truth rather than living in constant confirmation bias? How do we, how do we, like, examine ourselves and our own motivations as we’re trying to figure out what’s true or not and what, what, what side to land on?

Thi: Yeah. I mean, I think, to spin off the last thing I said, [00:32:00] given that we’re in an environment where people are trying to give us cooked versions of the truth that go down easy, right? Okay, so let me try it this way. If you were trying to manipulate people and get them to accept what you wanted them to believe, a good strategy would be to make believing your chosen belief more pleasurable and easy.

Right? So given that, I think we need to be suspicious of belief systems that are pleasurable and easy. Which is not to say that pleasure and ease are always false. Right? There are plenty of good things that are delicious, and there are plenty of truths that are incredibly pleasurable to grasp. But given that we know that there are manipulators who have a lot of motivation to get a lot of power by using the manipulation of pleasure and beliefs to get us on their side, we should [00:33:00] immediately be suspicious and ask what’s going on.

Kyle: Yeah. So I heard you say something similar on a different podcast interview, and I had this thought, and I’m curious what you think about it. Cause it, it kind of sums up my psychology in some ways. So it, it seems like, on reflection, that maybe the best answer or the most convincing answer that I, like, feel in my gut is convincing, that feels clear to me, to any big question, like why should I care about truth? or why should I care about a good method of gathering information? or why should I care about morality or what’s you know, right or wrong? or what’s healthy? or whatever. Why should I do any of that stuff? Maybe the most convincing answer to me is because it’s hard, and the other thing is easy. Now, maybe that’s something peculiar to my psychology that that’s what, like, hits me home the most.

But, like, I’ve heard religion summed up in that way. Kierkegaard kind of sums up religion in that way, Christianity, right? His version of it is very hard, maybe even impossible. And [00:34:00] that’s why it appeals to me. So does that say something deep about human nature? And if so, can we weaponize it uh, to actually combat conspiracy theories or misinformation? 

Thi: I think that theory’s too easy.

Kyle: Why?

Thi: “Do the hard thing.” I mean, I’ve actually said that to people before, but I don’t, I mean…

Kyle: So let me, let me, let me, let me complexify it a little. So I don’t think that it’s actually effective on people who just want the easy thing, but I think it’s a very good way of weeding out who wants the easy thing and who actually wants the truth. Right? If you’re someone who actually is prone to the truth, or maybe has the capacity to desire it or something…

Thi: Yeah…

Kyle: …coming to the recognition that this is going to be hard and I’m going to have to dig deep to get it is maybe the best sales pitch for that kind of person.

Thi: Yeah, maybe it is a good sales pitch, but again, like in the background, I think there’s a lot of difference between a theory that says all falsehoods are easy and all truths are hard. Like…

Kyle: Sure. [00:35:00] 

Thi: That’s too easy, right? “Maybe we can find the truth by just doing the hardest thing,” like that’s, like, that’s a meta-easy; that’s like, that’s a meta-easiness for someone that’s like trapped in some Protestant work ethic of, like, the hardest thing is always the best thing, be more productive, be more overworking. Right? I don’t, I don’t think there’s any, I mean, I think the equivalent is nutritionally: “eat the most disgusting thing.” That’s not, that’s not, like… Actually there’s incredibly beautiful, delicious, wonderful food that’s actually, like, deeply nutritious and deeply…

Kyle: Yeah, but, but balancing it is more difficult than just eating the most disgusting thing.

Thi: Right. Yeah, I mean…

Kyle: So I mean, you’re still aiming at what is actually going to challenge me.

Thi: But that’s what, like… I think, I think, I think one thing that I might accept is, given the presence of manipulators who are trying to game you, it is very unlikely that the easiest path is the right path. But that’s really different from saying, like, always do the hard thing, right? [00:36:00] Actually I think the hard thing is sitting with the fact that some easy things are true and some easy things are false. Like if it was that all easy things were false, then this would be trivial, right? “Don’t believe any easy things,” but that’s, that’s, that’s too easy. Sorry, now I’m wrapped up in meta-knots, right? Does that make sense?

Randy: This is a fun little, like, being able to sit in on a philosophy debate is very enjoyable. 

Kyle: This is what every after-conference drinks feels like. 

Randy: Nice, very good. So switching gears Thi, you’ve spoken about epistemic traps or epistemic filter bubbles and echo chambers, and the differences between them. And it’s all very fascinating to me, especially being a church leader. So can you bring us into what is, what are echo chambers, particularly, what are epistemic filters and filter bubbles, all the, all the words that you use?

Thi: Right, the words are important. I mean, I started writing on this cause I got irritated by the way people use words. So basically, for me, there are two different concepts that people have been [00:37:00] blurring together, and it’s really important to keep them separate. So one concept is an echo chamber, and the other concept is sometimes called a filter bubble, but I want to call it an epistemic bubble for complicated reasons.

So the bubble concept is the concept that most people have become obsessed with lately. A bubble is some kind of social phenomenon where you don’t hear the other side or you don’t get exposed to the other side’s arguments. This got really famous from a book from Eli Pariser, The Filter Bubble. And he was really interested in the fact that, you know, if all your friends on Facebook share your politics, you’ll just never hear the other side. They’ll never be exposed to the evidence, right?

Lately people have been using the term “echo chamber” and the term “bubble” synonymously to refer to that. But if you actually look at the early research that leads to this concept of echo chamber, in particular a book called Echo Chamber by Kathleen Hall Jamieson and Joseph Cappella, they have a different concept of an echo chamber. An echo chamber for them is a community where [00:38:00] people distrust everyone on the outside. And the difference between never being exposed to the ideas of people on the outside and distrusting systematically everyone on the outside is just totally different. These are different concepts.

So the first thing I want to say is people blur these things together a ton, and there’s a lot of research that says, oh, there’s no such thing as echo chambers or filter bubbles, which is all showing that actually conservatives know what the liberal arguments are and liberals know what the conservative arguments are, and climate change deniers know what the climate change arguments are. And I’m actually fairly sympathetic to the idea that there actually aren’t many filter bubbles or epistemic bubbles in this world, that right now, given the media environment we are in, most of us know what the other side’s arguments are. Right. I’m progressive, I know what Trump’s arguments are.

Randy: Yeah. And we just inherently distrust the other side.

Thi: Yeah. It’s, it’s that we think that the other side is systematically biased…

Kyle: Yeah. Yeah. So what is a, uh, what’s a runaway echo chamber? Because you’ve used that phrase as well.

Thi: Oh, a runaway echo chamber, that’s… [00:39:00] A runaway echo chamber is a case where the following happens. You pick all your advisors based on your estimation of who’s expert or good. But if your notion of who’s expert or good is flawed, then you’re going to pick bad advisors. For example, if you’re a white supremacist, you’re going to pick moral advisors who are other white supremacists, and they’re just going to confirm your white supremacy, right?

Similar thing. I actually, it, this may, I think people found this interesting… Although this has a lot of political implications, I actually started thinking about this thinking about art. I was interested in artistic echo chambers, because I was in one. And my version of this was I was raised on European classical music.

And all the people that I trusted were people that were good at European classical music. Right. And all of them thought rap was shit. So I grew up having no ability to understand rap and also having picked, because of my classical background, only advisors who [00:40:00] were, you know, who thought that European classical was the highest form.

It turns out that not only is rap amazing, but part of the problem is that because rap is rhythmically complex in a way that is kind of askew to the complexities of European classical, that if you are raised in European classical, you won’t have the rhythmic skill to hear what’s going on in rap. But you can maintain your belief if everyone you trust is like, oh, that rap stuff is crap. Don’t spend any attention on it. Right. So that’s…

Kyle: So there’s some standpoint epistemology here too. Yeah. So let’s keep with that analogy then; what brought you out of that echo chamber? What was it that enabled you to appreciate rap?

Thi: That’s, that’s an interesting question. So I think in my case, so there were two things that happened and I think this can be generalized. One was at some point I looked at my shelf of music and I was like, everyone here is white. There’s probably something wrong with me. Right? Like that’s, and I, so I’m not sure, some of your listeners may… like I’m [00:41:00] Vietnamese. This is like, not, and it’s important that I am from the kind of Vietnamese who were wealthy enough to go to French schools and had a conception of French culture as the highest culture. There are a lot of complexities in the background.

But at some point, growing up in America, if you look at your bookshelf or your music shelf, and you’re like, everyone here is white, there’s probably a good explanation of why, and it’s probably not that white people are just better at culture. Right? It’s probably that there’s something, some systematic racial bias has gotten into your education.

The other is someone I trusted, right? I, I, I met someone who was, knew a lot of classical, and they were like, no, you should listen to rap.

Kyle: Yeah.

Thi: Listen to this. Like, and they had really complex, subtle views about classical, so I trusted them. And so they got me to pay attention to rap. Right. And I think that’s, like, that’s a similar thing across all echo chambers. Like one of the interesting things is when you see people’s stories of people leaving echo chambers, it does seem to be because of [00:42:00] personal relationships of trust.

Randy: There it is.

Kyle: Yeah. And this, honestly, is the most terrifying fact about echo chambers to me because this is not a scalable solution, right? To depend on the patience and dedication of someone outside your chamber, who also has taken the time to understand your chamber, is not a scalable solution. Like, we’re not going to fix climate change if that’s, if that’s what it takes.

Thi: Yeah. Since I’ve written this stuff, people keep asking me like, what’s the large-scale policy solution. I don’t know.

Kyle: Maybe there’s not one. That’s… yeah.

Thi: Yes. Uh, yeah. Uh, yeah, I mean, I, I, yeah.

Kyle: Yeah. No, no seduction to that clarity. Um, So let’s, let’s talk about how rationality works in an echo chamber, because one of the things that you’ve pointed out beautifully, that I try to convince other people of all the time, including in a talk that I just gave yesterday, is that being in an echo chamber does not make you irrational and that writing [00:43:00] off people as stupid or lazy or irrational or uneducated or fill in the blank of your, you know, favorite easy dismissal, is just going to, well, for one, it runs afoul of the evidence, but it’s just not going to be helpful with any social problem facing us, but it’s the easy thing to do, right? So explain to us how it’s possible for an echo chambered person to be acting rationally.

Thi: To understand this, we have to, like, we have to get rid of this, like, profoundly false conception about how knowledge works that seems to affect a lot of us that shouldn’t. And that profoundly false conception is that we’re capable of knowing everything we need to know individually, that we have the ability to know things, everything that matters, on our own.

That’s the ideal of intellectual autonomy. And that’s just obviously false. Like, in the current era of the proliferation of the size of science, like, no human being can master even like [00:44:00] one one millionth of the amount of knowledge that’s out there. I mean, I had a kind of conversion experience because of a book from a philosopher named Elijah Millgram called The Great Endarkenment.

And the book’s argument is basically that the essential epistemic condition of our era is that knowledge is so hyper-specialized that every single genuine practical conclusion comes from crossing so many fields that no person can actually master the whole argument. Right? Chemical engineers trust statisticians, trust physicists, right, like there are these huge long chains of trust that run totally out of our control. I think the way that we start our life intellectually is, we end up trusting other people, right, about tons of things. We trust large institutional structures. I trust my doctor with my life. Like, literally my doctor says take this pill, and I’m like, okay, I don’t understand any of it.

And not just that, if I asked my doctor to explain, my doctor [00:45:00] probably can’t explain all of the chemistry involved behind that and all the statistical modeling behind that. So my wife is a chemist and I asked her, I was like, you know, how much of chemistry can you explain? And she was like, look I can, I understand like one one hundred thousandth of chemistry, right?

Like, there’s, like, neighboring fields of chemistry I know nothing about, it’s just so complicated. I just know my little patch of chemistry. I’m good at reading organic chemistry on one set of instrumentation; that’s my specialty, took me ten years to learn.

So our essential position is one in which we’re born in the world and we just have to start trusting people without understanding them. Right. We can’t monitor and check all the people that we trust. We trust large-scale institutions. Like, I mean, I believe that climate change is real, but can I give you the evidence? Nope. Right. It involves, I actually did this as an experiment once. I checked out climate change models, and I can [00:46:00] understand, like, maybe the first hundred words, and then it gets into complicated statistical modeling of, like, meteorological events and I’m like, I’ve got no clue. Right?

Why do I trust them? I trust them because they’re professors from, like, Princeton and Yale that are published in Science. Right? So what I’m trusting is large-scale institutions. So if you grow up with your trust settings set to the wrong large-scale institutions, you can go through the same procedure of using the people that you trust and the knowledge gathered from your trust networks to check up on new things.

Randy: Yeah. 

Kyle: Yeah. So, so for the average person who believes that you know, that the election was stolen from Trump and handed to Biden, how can that be rational? What, what large-scale institutions are they trusting in a, in a way, in a way that’s kind of blameless?

Thi: Yeah, I mean, this is the point where I genuinely don’t know if that position is blameless. I can imagine blameless positions. [00:47:00] At that point, you might start to worry about how much blatant counter-evidence is available, that people are dismissing. On the other hand…

Kyle: That they’re aware of and ignoring, yeah.

Thi: …you can tell another story, and in that story, it looks like a person might arrive at the view that large-scale mainstream media is corrupted, right. And that only a small news source is to be trusted. And that’s not an inherently false view. We know plenty of positions… I mean, people always make fun of philosophers for talking about Nazi Germany, but imagine you’re a resistance fighter in Nazi Germany, right? It’s true that most news sources are corrupt, and then only a tiny, tiny fraction of the news sources are trustworthy. That’s an available position.

But yeah, I don’t, I don’t know if I’m willing to say that that particular case is a case in which there is a clearly rational procedure to enter. I mean, it’s just so hard because I mean, let me think about my views about the importance of vaccination and the deadliness of [00:48:00] the COVID epidemic, which are denied by the other side, right. My views come from believing in the deadliness of COVID and the effectiveness of vaccines, but where do I get that information? I get that information from the New York Times and the New England Journal of Medicine. Right? I haven’t collected that for myself. Right? That information comes from a preexisting set of trust in a large-scale set of institutions. So I don’t know.

Kyle: Yeah. So do you—this will be my last question about echo chambers—do you think there is some kind of moral flaw somewhere in the causal history of all echo chambers? Is there a liar back there somewhere? 

Thi: I don’t know. I suspect not; that’s not necessary. Right, because you can get echo chambers without that. All you need is someone to generate a plausible-sounding explanation that’s a little too easy, right? All you need is someone who gets too excited by an easy explanation they’ve come up [00:49:00] with and is convinced by it.

Kyle: Yeah. And then a bunch of other people are also sincerely convinced by it, and, yeah. So you’ve written quite a bit about, like, gurus and conspiracy theorists and whatnot, and there’s this, there’s this way, this I dunno, method, this tendency of manipulation that a lot of them have that seems remarkably effective, that I want to hear you riff on a little bit. So I don’t know if you could call it just, like, information overload or what, but, like, they have this way of just dumping really expert sounding information in front of you and then kind-of making it appear as though the ball’s in your court. What are you gonna do with that? Right? And, and so either you just give up because you don’t have the time to engage or whatever, and of course, that’s going to make it seem like you lost the argument, right, and everybody else watching is going to be like, oh, look, our guy won or whatever. So they like make the costs of engaging too high, which makes all the reasonable people [00:50:00] self-select out. And you’re just kind of left with the people who can’t see through it or are too lazy to see through it or something like that. 

So do you want to talk about that at all, I don’t know, like, what can the average person who encounters something like that do about it?

Thi: Yeah, it’s, the thing you’re talking about, I think, is a really interesting and subtle strategy. And I was working it out with a guy named Aaron who runs a podcast called Enter the Void. And we were talking about, basically, the philosophy of spamming. The essential idea of spamming is that you are generating content and the cost of generating the content is really low for you, and the cost of dealing with it is quite high.

Kyle: Yeah.

Thi: So I think one bad faith debate strategy is to just, like, generate an incredibly large number of theories and responses that would cost a huge amount of energy to reply to…

Kyle: Yeah. 

Thi: ...but which you can just generate easily. I think that’s a really, that’s a really hard debating strategy to deal with. And one of the [00:51:00] reasons we want good faith conversations is because in good faith conversations, we don’t spam each other. Right. We don’t just overload each other. And I don’t know how you deal with a bad faith arguer who’s spamming you.

Kyle: Mmhmm. Or even as somebody who just wants to, you’re, you’re a good faith arguer, you know the other side is a bad faith arguer, but you also know there’s a large audience here that has a number of good faith arguers in it, or at least a number of good faith observers. And you’re, you want to communicate with them cause you know you’re not going to communicate with the other side. What’s the, what’s a strategy for dealing with this kind of information dump? 

Thi: I honestly don’t know. If I do, I would tell people, but I don’t.

Kyle: Yeah.

Randy: Dammit Thi we want that. We want the seduction of clarity, we want that…

Kyle: Cause like in my, in my younger days, you know, when I was a certain kind of fundamentalist, I would, I would engage and I would do my best to humiliate the other person, and I would chase down every rabbit hole and the last person to give [00:52:00] up wins. And I was the last person, like, I get the last word, and therefore the people watching think that I won and that vindicates my view.

Thi: I mean, one of the things that’s intere…, so just a background thought: a lot of the stuff I’ve been working on in this space is about the problem of expert recognition. This is a problem that’s as old as Socrates, right? If you’re not an expert, how do you pick the right expert? In particular, how do you pick a real expert from someone that’s posing and trying to, like, present as an expert?

Kyle: A Sophist.

Thi: Yep, the Sophists. And I, like, I’m, a lot of the work I do is based on pessimism about this problem. Like, I don’t think there’s a good solution to it. And I got into it partially because of this work about how juries respond to expert witnesses. And it turns out that, like, you know, juries tend to treat as expert the people that say clear, unqualified, confident things, but actual experts are often, like, these things are super complicated, they qualify things, they say that things are unsure for this and that reason. [00:53:00] 

And because of that, juries typically treat them as inexpert and not knowing what they’re talking about. So, I mean, I think part of the problem is if your audience members are already inclined to take clear, confident statements as signs of expertise and worrying and fussing about details as not, then you’re already fucked. 

Kyle: Yeah. 

Thi: I don’t know what to do about that except if you could somehow teach people ahead of time that clear, confident statements are not actually inevitable signs of expertise. I don’t actually know how to do that. Maybe more philosophy classes?

Kyle: Yeah. 

Randy: Nice. So changing directions Thi, you’ve done a lot of research on the philosophy of games and you’ve tossed around the word “gamed” as a verb, you know, and can you tell us what the philosophy of games is? What is, what is gamification and why are you interested in it? And after that, I’ll ask you what do gamification and QAnon and conspiracy theories and cults have in [00:54:00] common, and how do they gamify things?

Thi: I mean, I literally just wrote a book about this and you’re like, quickly, tell me about… Okay, I can do this.

Kyle: 30 seconds. 

Thi: Okay. Really briefly. What is the philosophy of games? So I got into this question partially because I was irritated at people who were talking about video games as an art form, because they were a kind of movie and I was like, games aren’t just a kind of movie, but like something special, they’re different. So I like went down a rabbit hole for five years and I ended up with this theory. And the theory is that games are unique as an art form because they work in the artistic medium of agency itself. What a game designer is doing is not just making an environment or telling a story, but a game designer is creating an alternate self for you to be, an alternate agency, designing that agency, and then you pick that agency up and enter into it. So part of what that is, is the designer gives you abilities. I mean, I think everyone recognizes, a game designer tells you, oh, you can run and jump or [00:55:00] oh, you have a portal gun, or, oh, you can, like, trade money, right? Or bid. But most importantly, the game designer tells you what to want in the game.

So I got this idea from one of my favorite game designers, Reiner Knizia. He’s this German board game design genius. He’s been called the Mozart of game design. And in an interview, he says the most important tool in his toolbox as a game designer is the point system, because the point system tells the players what to care about. Right. It tells the players whether they are cooperating or on a team or against each other, or trying to collect money or trying to kill each other. Right. That tells you what to want.

As a philosopher though, when you hear something like this, you’re like, holy shit, that’s right, a game does tell you what to want. And that’s, that’s part, that’s the core of the theory, right? That a game designer is not just creating a world, but creating a self with an alternate value system that cares about competing or killing or building an efficient railway network or collecting gold.

Kyle: Yeah. 

Thi: Right? So the [00:56:00] basic theory is that game designers, through the point system, specify an alternate value system and you enter into it and it gives you certain pleasures.

And one of the biggest pleasures for me is that games give you a sense of value clarity, right? That in our normal world values are complex and plural and nauseating and unclear, but in games, for once in your life, you know exactly what you’re trying to do, you know exactly what counts as success, and you know exactly where you stand.

So yeah, that’s great in games. That’s fantastic in games because in games, this is a temporary value system; you step into it for a moment and then you step back.

So now let’s move to gamification. So a standard view in the industry is that games are good, so gamification is good. So gamification is any process where we take a kind of normal activity and then we add points and levels to it, right? Like Fitbit gamifies fitness, Duolingo gamifies language learning, a lot of educational… So an [00:57:00] early gamification I had as a child, I think it was in my elementary school, was we got a certificate for a free Pizza Hut pizza for every 500 pages we read.

Randy: Yes, I did that too.

Thi: Yes. Okay.

Kyle: That’s a lot of pages, nice.

Thi: That’s gamification. Right? You’re getting points and clear levels and clear awards.

Randy: Those personal pan pizzas paid off.

Thi: Alright, so here’s, here’s a worry, here’s my worry. Gamification increases your motivation by simplifying the value system. You’re not reading for pleasure or richness or curiosity, but just…

Kyle: Yeah.

Thi: …for the number of pages.

Kyle: Numbers of pages, yeah. You’re probably going to read pulp, ’cause it’s easier to get the page count.

Thi: Right. Right. I mean, if you want pizza, you should read the dumbest shit possible. Right. So similar thought with Twitter, right? There are all kinds of complex values for communication, but Twitter doesn’t measure all of them. It just measures short-term popularity. 

All right. So the worry with gamification, for me, is that in a lot of gamified [00:58:00] systems, you get an increase in motivation, but you get it by being pegged to, and allowing yourself to be motivated by, a simplified value system, where that simplified value system often has to meet the requirement of being easily instantiatable in a mass-producible technology.

Kyle: Yeah.

Randy: Yeah. 

Thi: Twitter can’t measure empathy or understanding. It measures people punching the like button, which is much simpler.

Kyle: Yeah. 

Randy: Yeah. So now I feel like almost everything that we’ve been talking about is, like, crashing down upon itself with, you know, the seduction of clarity and moral outrage porn and gamification. All of it happens within social media in really potent ways. And I’m, I’m scared to ask because I don’t really want to change my social media habits, but can you just, how is social media messing with and fucking up our brains? 

Thi: Right, let me go simple and then I’ll go philosophical. The simple version is simply that it is capturing your motivations [00:59:00] and redirecting them along pre-established lines. So, by the way, this isn’t, this is not a guarantee. It’s not like this’ll happen instantly if you put on a Fitbit or start using Facebook, but insofar as you’re motivated by likes, then you are now motivated to aim at a pre-established value system that is there partially because it’s easily instantiatable in a mass technology.

So in the games book, I call this phenomenon value capture. And value capture is any case where your values are rich or subtle, or in the process of becoming more developed. And you get put in a place where the world gives you a simple, often quantifying value system. The simple version takes over. So since I’ve written the book, I’ve been working more on this stuff, and I have a better way to put this. I think what’s going on is you’re outsourcing your value system, right? That you should be figuring out in response to the rich, emotional experience of being in the world, what you care [01:00:00] about, but instead you’re outsourcing what you care about to Facebook or Twitter.

And I just want to be clear: there’s a way of putting this that’s technophobic, and that’s not what I mean; I think these are just new wrinkles of something that’s been going on for a while. So I think one of the best, most empirically well-studied examples is the coming of law school rankings and university rankings in the U.S. News and World Report, right? Wendy Espeland and Michael Sauder have a book, Engines of Anxiety, that I think really, really has a lot of carefully researched empirical evidence for what I would describe as people outsourcing their values about their education and their career to the U.S. News and World Report, and the U.S. News and World Report is really insensitive to any particular person’s cares about their life or legal education, right? It tracks a few simple, easily accessed data points.

Kyle: Yeah.

Randy: Yeah.

Kyle: And easily manipulated data points too. Right? The last university I taught at limited the class [01:01:00] size to 15 students, but only in the fall, because that’s the only time the U.S. News and World Report looked at it; in the spring it was double.

Randy: Right, yeah.

Thi: There’s so much filthy gaming. If you read this book, it’s just horrifying. Like, so one of the main things U.S. News and World Report tracks is the employment rate at the nine-month mark after graduation, and so law schools started, like, telling their law students to take any job, including, like, at a nail salon, nine months out, because it would make the ranking go up.

Randy: Yeah, so good universities are changing their best practices and their value systems in order to rank higher on U.S. News and World Report, even though it might be a shittier method of education.

Kyle: And not even just the U.S. News and World Report, which is at least like an organization that hires people that try to be professional, like, they, they outsource that shit to, like, one guy who has a blog that’s influential, and you know who I’m talking about. 

Thi: Yeah. For people not in philosophy: a lot of philosophy, for a while, was ruled by the equivalent of [01:02:00] U.S. News and World Report, which was just one dude ranking journals and universities, and which became incredibly influential. There’s a footnote in my paper about this, by the way. And I mean, you see the same thing in academic research with, like, citation rates and impact factors.

I think it’s, it’s really broad. And I just wanna, I just want to say that if you read stuff on the history of bureaucracy and quantification, it should be clear that this isn’t just social media. Social media is one instantiation of a really long trend towards hyper-simple metrification. I mean, other examples that might be familiar to people: grade point averages. Right?

Kyle: Yeah. Grades.

Thi: Right.

Kyle: Just in general. Yeah, right. Or, or to bring it home, church. 

Randy: Well, yeah, I was just going to say for us, for you churchy people listening, gamification happens in churches. I mean, like, the way that many churches measure success is by number of baptisms, because number of baptisms equals the number of people [01:03:00] who follow Jesus. And so literally churches will have a baptism service and they’ll have people planted there who’ve already been baptized and already know, and they’ll say, when we invite people to be baptized, you stand up and come down and get baptized again, because that’ll motivate everybody else to go get baptized. And instead of saying, following Jesus is a lot of hard work that’s going to be a lifelong thing, we just want you to get dunked so that we can have it in our stats on our website: this is how many people we’ve baptized this year.

Thi: Oh my God. 

Randy: It happens.

Thi: I had no idea.

Kyle: The church I grew up in had a sign hanging on the wall with numbers on it. This is the number of people that committed, this, this is what we got in the collection plate last week, this was…

Thi: That is amazing. I mean, another version of this is get any journalist in a room and start asking them about clicks and how trackable clicks have, like, completely changed everything about journalism.

Kyle: Yeah. I mean, we’re on a fucking podcast. We care about that; trackable downloads are a big deal to us.

Randy: Yeah. People ask. 

Kyle: Yeah. So we’ve kind of already [01:04:00] tackled this a little bit, but just segueing from my comment about church there. If you have any insight about this at all, great, if not, no worries. So I don’t, I don’t know how much, like, religious epistemology you’re familiar with, but do you, do you think as an expert on echo chambers that there’s anything, is there a good explanation of why religious people seem to be so prone to them?

And there’s good, there’s some empirical data to this too, right? It’s not just my, my sense of it. I just came across a paper the other day, the title of which was “Belief in Fake News is Associated with Delusionality, Dogmatism, Religious Fundamentalism, and Reduced Analytic Thinking.” So there’s a, there’s a, there’s a pretty good amount of empirical data for this too. So any insight as to why that might be, that religious people are particularly prone to living in echo chambers?

Thi: I should offer a proviso here, or a qualification, which is, I’m not a religious scholar in any way. I don’t know anything. As a raw guess, I think for many people, one of the appeals of religion [01:05:00] is having a comprehensible understanding of the world. So one of the interesting things for me, thinking about conspiracy theories, is the way it’s like a parody of certain scientific and enlightenment values, right?

Like, this is a thought that Elijah Millgram, this philosopher who got me into this stuff about the size of scientific knowledge, makes: the whole thing that started science was this Enlightenment ideal of intellectual autonomy, that we should all be thinking for ourselves and not just trusting and uptaking stuff. But what it’s created is this world with so much information that no one can think for themselves and everyone has to trust this vast realm of experts. And one of the appeals of a certain kind of conspiracy theory is you get to throw away the experts and you get to put it all back in your head again and explain everything from something that you can hold. And I think, not all, but from my experience, some religions, or at least some expressions of some religions, offer…

Kyle: Yeah, 

Thi: …something like a complete, [01:06:00] holdable explanation of the world.

Kyle: But I want to push on that just a little bit, because in both cases, that’s an illusion. So the average QAnon conspiracist or the average vaccine denier can no more explain to me the deep state or what precisely is wrong with the Pfizer vaccine than I could explain general relativity. We’re both accepting that on authority. So it’s just an illusion that I’ve got this simplistic explanation.

Thi: Yeah. I mean, that’s the point, it’s an illusion. And if you’re attracted to that illusion, and that illusion is behind a particular brand of religiosity, then certain conspiracy theories that offer another version of that illusion should also be appealing. 

Kyle: Sure. 

Randy: Yep. So Thi, you’ve referenced a number of books already. Where can we find your stuff? We’re going to put links in our show notes to your books, but what’s the easiest way to find your stuff?

Thi: My website is objectionable.net. There are links there to all my papers. My book is called Games: Agency As Art and I’m [01:07:00] on Twitter still as @add_hawk. And you can find me in any of these places, along with a lot of increasingly weird papers. I’ve been getting emails from people saying that they’re finding my papers more and more disturbing, like the newest one on transparency.

Kyle: What is that about? 

Thi: It’s called “Transparency is Surveillance.” And it’s the claim that institutional transparency is also a form of monitoring that undermines expertise and trust.

Kyle: Okay. I am immediately suspicious so I’m going to go read that. Excellent. 

Randy: Awesome. Well, Thi Nguyen, this has been super fun, hilarious, and really insightful. Really appreciate you spending time with us.

Thi: Thank you so much. Thanks for having me. It’s been a good time. 

Elliot: Thanks for listening to A Pastor and Philosopher Walk into a Bar. We hope you enjoyed the episode. And if you did, please rate and review the podcast before you close your app. You can also share the episode with friends or family members with the links from [01:08:00] our social media pages. Gain inside access, extra perks, and more at patreon.com/apastorandaphilosopher. We’re so grateful for your support of the podcast. Until next time, this has been A Pastor and a Philosopher Walk into a Bar.
