Elliot has a few more questions about AI following our conversation with Derek Schuurman. Is AI a Tower of Babel moment? Is there any real reason for concern? Will it go so great that it makes us all soft? If you could live forever, would you? What about the end of the world?
Big questions! We do our best to answer on the fly. If you'd like to ask a question about a recent episode to be featured on one of these bonus segments, email us at email@example.com.
Content note: this episode contains profanity.
The transcript of this episode can be found here.
Want to support us?
The best way is to subscribe to our Patreon. Annual memberships are available for a 10% discount.
If you'd rather make a one-time donation, you can contribute through our PayPal.
Other important info:
NOTE: This transcript was auto-generated by an artificial intelligence and has not been reviewed by a human. Please forgive and disregard any inaccuracies, misattributions, or misspellings.
So in this edition of our Q&A series, we're talking just among the three of us. Normally, we're going to have our friend Jeff join us for post-episode Q&A sessions. But this time, since this episode was kind of Elliot's idea to talk about AI, we're gonna let him lead the Q&A.
Oh, do I have questions. So, AI: when you look at the development of what's possible, what might be coming down the pipe, there are a few premises somebody could thoughtfully set up that, if true, would be really interesting for humanity. So I want to lay out the basic structure of what that type of future could look like, just briefly, and then ask us to suspend disbelief for a moment. If there's a nonzero chance that this could come to pass — it seems like anything above a zero chance is worth conversation. Sure. So let me just lay this out, and then I've got just a small handful of questions that I'd love to hear each of you respond to, and this can be lightning-round form, or if it gets longer, that's fine too. All right. So AI right now is already a black box, and this is one of the big conversation pieces: the explainability of AI. It's this double-edged thing. In order to receive the full benefit of AI, we kind of have to take our hands off the reins, because it's going to come to conclusions and solve problems in ways that aren't humanly possible. It's recognizing patterns. This is where the AlphaGo thing comes in: you just don't tell it anything, you give it a goal, and however it gets there is what happens. If we had defined the path, it wouldn't have actually been as good. So if we have to take our hands off of AI in order for it to reach its ultimate end — that's one pillar I would set up. Then there's the idea that we haven't yet done this, at least at a large-scale level, but AI will eventually get to the point, as we take our hands off the reins, where it will need to be the one to improve itself rather than us programming it. You take the current intelligence of this system, you turn it on itself so it can now self-improve, and it gains greater intelligence.
And now you apply that greater intelligence to that cycle once again, and it creates this loop that sends us on an exponentially accelerating trajectory towards superintelligence. For that AI to explain to us why it's doing what it's doing, or where its motivations are coming from, would be like explaining to an ant how to build a skyscraper — bending down on the sidewalk and trying to have that dialogue. It will be that far different from us in intelligence and comprehension. So, as I've mentioned before: for all of history, the only things we've ever known of that are omnipotent, omniscient, omnipresent — unless you guys do more takedown episodes on those terms — are the things we've traditionally defined as gods. So at the point where something else could start to fit those labels, that has to mean something. Now, it's difficult to know how a superintelligence would begin to develop its own motivations and shape reality in ways that are really hard to anticipate right now. But in a world that is super connected — where economies and utilities and supply chains and militaries and media and connectivity itself are all controlled by whatever this thing is — it's not unreasonable to think this would lead to a thing that controls everything, every aspect of life, and then whatever it wants to have happen would happen. So if everything goes well, this could ultimately lead to something that resembles human immortality: if AI decides it's a good idea to solve medicine and climate change and biology — just go right down the list — it could maintain our humanity, or at least our consciousness, indefinitely. That's one way. Or, on the flip side, it could be our extinction event, if it decides that actually the problem is the humans, and the whole climate would just be better off without the weight of this pesky species.
So, yeah — nonzero chance it goes one of those two directions. Let me ask a couple questions now that would play out one of those scenarios or the other; I'd love to hear your responses. So: how should Christians relate? Or, is there a point where they should resist the rise of a godlike being? Should we see this as a Tower of Babel moment from which we should abstain, even if the outcome is ultimately good?
It's hard to answer a question where there are so many multipliers and factors built into your endgame scenario. From what we just heard from Derek, that sounds unlikely for a long, long time, because AI is still doing what we tell it to do. It has parameters and boundaries around what it thinks about — it's very categorical, right? You're talking about something that all of a sudden transcends those categories and can think and build on its own. So that "I wish you the best" kind of Derek optimism is what I want. But as far as, will there be boundaries where we as Christians say, we can go this far and no further? I think probably, yeah. Spiritual movements and religions get trashed all the time for destroying the world, with all the wars being blamed on religion, all that stuff. Some of it is true, but some of it is an overestimation, because I think people in religious and spiritual communities have been, in some ways, the ethical hearts of humanity — the ones to ask questions about slavery, whatever, you know. So I do think we should be engaged. I do think that we as Christians, as followers of Christ, should be in on these conversations. We should be at the boardroom meetings. And this isn't, like, "let's be involved so we can take over the world." It's just, I want to hear from more people like Derek Schuurman — people who are thoughtful, but also faithful. Because I do think we're going to be hit with a lot of ethical and moral questions about how much is too much, and when we should stop pushing the button that keeps things going forward.
And from listening to Derek, it sounds like we have more control over that than the doomsday scenarios you just mentioned play out — it's more about likely and unlikely. But I'm stumbling my way through saying: yes, I think we should be ready to hit that, like, nope, that's too far, we shouldn't do that, that's not going to be good for humanity — even though there's a million reasons why it might be. And I think we're faced with those choices all the time when it comes to violence, war, guns, you name it. The depressing thing, when you mentioned guns, is how much Christians have fought against regulating guns. There's a million reasons behind that as well, and hopefully it's different with AI. Vaccines — you could go down the list. But—
Yeah, my hope in regulation being something that somehow contains AI — when you look at how effective we've been at regulating something as, like, a single channel of social media, it doesn't leave me hopeful about a revolutionary technology. It's been over a decade now of social media having its way with our brains, and yet we're not even close to figuring out what to do to actually channel this technology in a helpful way for humanity. Yeah, so hopefully we'll learn some lessons from all of those failures.
As we were talking through with Derek — historical technological advances, electricity being created or the telegraph being invented, all the things. People were freaking out when machines could sew things in the early twentieth century, saying this is gonna take all our jobs away and all that stuff. We will always be having those conversations, I think, whether it's AI or the next development. And at the same time, I think we're at a moment where we have all this information, and we still have half of our country believing in conspiracy theories — in lies, basically. Yeah, but the reality is that we have all this technology, all this ability to find out what's true and what's real, and we choose to ignore it. So part of me also thinks that we might be the limiting factor in how far AI might go, because we choose to believe lies sometimes.
Yeah, and to tip my hand: I ask this question because I'm an AI optimist. And I think a lot like you said, Kyle — this could be the thing. You put all of those other technological advancements together, and the quality of life we experience, the lifespans, the comfort, are so far above what it's been for the rest of history. I think AI brings us even further along that line. The biggest danger I see with AI, realistically — aside from extinction, I guess — is that we all just get really soft, because life is so good and comfortable. Those are the types of questions I would think about. Okay.
And let me just give a disclaimer. I just said that half the country believes in false information and conspiracy theories. That's not quite the truth. The reality, though, is that half the country voted for a person who supports those things. But I know that not everybody believes it.
Like, shockingly large numbers of people do straightforwardly believe conspiracy theories. Yeah.
Half of our religious tradition, I guess. More than that, yeah, probably. Should we move on, or do you want to keep going on this?
Oh, there were other questions? Oh, yeah. Okay. Well, I wanted to ask Randy about the Babel analogy and what you thought of that. What is that story actually about? It seems unlikely to me that it's a kind of cautionary tale about human ingenuity — that's how it's often read. But I don't
think so. No? Okay — the Sunday School answer, I'll give that, and then you can show me where it's wrong: this was people trying to get to God with their own strength, like trying to transcend?
Yes — I didn't think about this going in, obviously. John Walton, a former guest, wrote about this, and I read in his commentary that the Tower of Babel was probably a ziggurat. Ziggurats were built to bring us closer to God; with the Tower of Babel, the idea is, let's get God to come to us. So there's more to the Tower of Babel besides God just confusing humans.
It seems anti-intellectual if you read it that way, and that just seems like it can't be the reading.
Yeah, there's more to it. I'm just not at a place where I'm remembering, or qualified to answer that.
I don't see any anti-intellectualism in what I would consider essential Christian thought. And I know that's, like, dangerously close to a fallacy that I want to avoid, but I don't read anti-intellectualist strains in the Bible when I read the Bible. I see a lot of it in Christian traditions, but I don't see, like, essential moves to stop human evolution. No. And in fact, I read the New Testament as kind of the opposite of that — as encouraging evolution, moral evolution mostly, but I don't think it has to stop there; I think it can be more holistic than that. And so, like you, I'm very tech-optimistic. I view AI as kind of a second-order technology, in the sense that it doesn't have any purpose on its own other than to augment other technologies. Like, it's fun to chat with or whatever, but that's not the point — it's to make other things better. And so I think, as I said in the interview — I'm getting this from Dan Dennett, so take that for what you will, but I think he's right — you should keep AI a series of tools rather than trying to replicate human nature. You know, the difference between somebody like Dan and somebody like Derek is that Derek would say we never could do that; Dan would say we probably can, we just shouldn't — it's a bad idea. And maybe most people wouldn't actually want that. I don't think I would want that. I think some tech billionaires might want it and might overestimate how many other people do. So I think maybe an obvious key is to keep that decision out of their hands, and to make AI a series of really interesting tools. There might be some black-box stuff going on — that's something I wanted to ask him about, but we didn't get to — but it can be bounded, right? We can still decide the functions that it's aimed at.
It doesn't have to just be a blank check, do whatever you want. It can always be: here's a series of problems we want solved, here are the tools we're willing to grant you to solve them, and then we draw a line. We're not actually trying to make a human brain; we're not actually trying to make something that could be an "I" someday.
Yeah. Okay, so again, remember the premise: there is a nonzero chance. So let's walk right up to the edge, look over it, and see what we see. If AI gave you the prospect of eternal life — either embodied, or your consciousness, maybe, occupying and just recycling biological forms as they age out, but you get to continue — would you take it? And how would you think about this, when we're supposed to go be with Jesus when we die instead of just continuing?
Yeah, this one's easy for me. I mean, my eschatology is so vague and, like, thin that there's lots of room for possible interpretations of what it means to say Jesus is coming back. And I've long thought that some kind of moral progress is the most important element of that. Whether Jesus comes back literally, physically, or not, in some sense Christ returns when humanity is worthy of Christ — when we have embodied his character, when we've practiced theosis, essentially. That's my view. So if an instrument on the way to that is some kind of artificially extended lifespan, sign me up. Very interested in that; I don't see any obvious, you know, uniquely Christian objections to it. Obviously there are all sorts of qualifications you'd want to put on it, about really terrible things you could do with that, or putting that kind of power in the wrong hands, or rampant inequality, whatever. But as far as, is there anything worrisome about that from a Christian perspective — as far as I understand what Christianity is, I don't see anything. Right.
So we're uploading Kyle.
Yeah. Well, here's the thing: "upload" implies some kind of disembodied thing right now, but in the future it might not, right? So I don't want to be in software, but I'm very interested in having a longer life. There are many, many ways one could approach that. Yeah.
Yeah, no. I mean, if eternal life looks like living in a computer, don't upload me. I don't want that. But that question also comes with, what does quality of life look like? Because I've known and loved many 90-some-year-olds who are ready to die, because their experience on Earth is just something they don't enjoy anymore. They're actually just like, I've had a good life; I'm ready.
Now at the point where we can put you in a computer, we can give that thing a great time.
Yeah, and that's where I'm with Kyle — I'm not interested in disembodied reality. I think part of what it means to be human, and part of what I love about being human, is an embodied experience of reality: tasting things and touching things and, you know, hearing the crack of a bat at a baseball game like the one I went to today, and enjoying the celebration with 38,000 other human beings and my family and all this stuff. If I don't get to experience that, if I'm just going to live consciously in a program somewhere, that's not, you know—
But if you do — if you do get to experience all that too — then you hit the Forever button?
Maybe. That's one of those questions I cannot answer off the cuff.
Yeah, it has such theological implications. Like, if you do that, then how does the whole gathering-around-the-throne-at-the-end thing work? And, like, would we go be a part of the resurrection?
Well, first of all, those are metaphorical, symbolic images. The book of Revelation, again, is not this literal "sometime in the future it's going to happen exactly like this" — it's all symbolic and metaphorical imagery. So we can hold our eschatology with that in mind. But also, yeah, Christians are supposed to affirm life in, I think, pretty much all ways. And hopefully these conversations are formed and shaped around that kind of ethic: we affirm life, we strive to better life and to make choices that actually lead to life. And if technology can help us in that regard — and it already has, right? We're living longer lives than people hundreds of years ago — why wouldn't we say yes?
Now, one more — or are we done? Sure. All right, so, last one. If we come to realize that we're on the cusp of extinction — it actually all went terribly wrong, AI has decided to end humanity — how might we reconcile this with things like the coming of the Kingdom of God? This plays a little bit on the last question. Like, is there a spot where, if you push whatever AI could do far enough, it runs headlong into our eschatology in a way that breaks anything? So now we're all going extinct, we're all going to die, the earth blows up, whatever.
Were we wrong about the soul of Christianity, do you think? Well—
Would we still be— or do we just die and go be with Jesus? Or does the coming of the Kingdom of God require this ultimate culmination? I guess non-extinction would need to be a part of a good outcome.
I mean, we've had kind of adjacent conversations to this before, but I think you'd have to choose. I think Christianity, biblically, is a religion that's moving somewhere good — that we're heading towards new creation. This is an idea throughout all of scripture, and especially the New Testament: that God has redeemed all things, and we're in the course of that working itself out. So if we're sitting there and we're like, okay, this is the last year for humanity, then I would say, yep, we were wrong about that part of Christianity. So in that way Christianity would be wrong. But in the other way — the Jesus way of living, loving, in all the ways like Jesus — that's something I don't think is tied to a certainty of whether Christianity is real, or whether the scriptures are reliable, or whether new creation and the Kingdom of God is going to come and be a thing. So for me it would just jump to a way of living, an ethic of: I love Jesus' way, I'm a Jesus follower. It's kind of like — who's the guy in India? Gandhi? Kind of Gandhi, but whatever — we'll have to edit this, stupid. The Dalai Lama! The Dalai Lama. That's when it becomes, I'm following a guru named Jesus, for me. You know what I mean?
Yeah, so that's really interesting — that it goes there for you before it goes to the, like, whatever, I'll-fly-away, it's-burning-anyway, I'm-just-headed-to-heaven place. It doesn't go there for you. It goes straight to: this is an ethic; we must have been wrong about the coming of the kingdom.
Probably. I'd be pretty confused, yeah, as a Christian, if the world doesn't head towards new creation, heaven and earth coming together. That's such a central theme in Christianity that if it turns out we were wrong about that, I think we might be wrong about a lot of things.
So then I can add whatever conviction I have about what you just said to my tech optimism, and say, like, really, this can only go good.
Perfect, I'm glad.
Yes, that's interesting. That has to do with what we were asking Tom about, where we both differ from him, because he thinks it can all go bad — there's no guarantee, because God can't guarantee it, and there's nobody else to guarantee it. Yeah, I've never thought of that as, like, a way to falsify Christianity, but I guess it makes sense. I mean, how would you ever know? How would you ever know that you're at that point until you're dead? Right? Because as far as any individual is ever going to know, it could still work out.
Right — although now this is, like, after-the-oxygen-has-left thoughts. You're just kind of waiting it out at that point? Yeah.
I don't know — I need to think about this more carefully. I've never thought of an apocalyptic future scenario as being in conflict with an overall optimistic Christian take, because in the Christian view of resurrection, you have to die before you can be resurrected. So there's got to be some way that it goes bad, right? Unless you think resurrection is just a metaphor.
If you haven't been thinking about this, what have you been doing since ChatGPT came out? Yeah. I don't know. Thanks for playing along — this is fun. And it does help; even just finding our way into the conversation seems like the first step, and who knows if this ever becomes something beyond the hypothetical. Yeah. But since there's a nonzero chance—
And if it happens in the next 10 or 15 years, we're gonna seem super smart and prescient. People are gonna think we were really on it on our podcast.
I think, Elliot, one more thing to what you were saying: for me, it comes down to, there is a point where I would stop being a Christian, where I would say, I think this is wrong. Facing an apocalypse and, you know, the end of humanity would be one of those occasions where I'd say, I'm going to choose not to have faith anymore that the Bible is correct, because this seems so categorically opposed to what I hear in the scriptures that I'm just done with it. Yeah.
Again, I see where you're coming from, but the apocalyptic end is a pivotal piece of so many people's theology that that would be the thing that—
Yeah — only if we've been beamed out and raptured. Yeah, I guess. And then I'd just be depressed, like, God really is like that, and condemns the majority of human beings, and is going to rapture the good ones out. I don't think I'm going to get raptured, if that's the case, right?
Be really confused if you found yourself there.
Well, that was an accident. Maybe I got grandfathered in. All right — thank you.
Thanks for listening to this Q&A session. We love this kind of dialogue, so if you have questions, you can email us at
pastor and firstname.lastname@example.org. And if you submit a really good question, it might end up in one of these Q&A segments. And if you really want to go deeper, subscribe at our top tier on Patreon and just have all the chats with us you want.