A Pastor and a Philosopher Walk into a Bar

The AI Takeover: A Conversation w/ Dr. Derek C. Schuurman

September 22, 2023 · Randy Knie & Kyle Whitaker · Season 4, Episode 3

The rise of artificial intelligence has happened at a startlingly rapid rate, and it seems like it's only accelerating by the month. Whether it's AI writing (plagiarizing) college papers and lazy pastors' sermons, potentially curing horrific diseases, making fake humans, or taking over the world, we all have endless questions when it comes to what to expect from AI.

Dr. Derek C. Schuurman is a computer science professor at Calvin University and has written extensively on the rise of AI and what a Christian's proper response to it might be. In this episode, we talk to Derek about all of those questions and more. Cheers!

In this episode, we tasted Fortaleza Añejo tequila. To skip the tasting, jump to 8:27.

You can find the transcript for this episode here.

=====

Want to support us?

The best way is to subscribe to our Patreon. Annual memberships are available for a 10% discount.

If you'd rather make a one-time donation, you can contribute through our PayPal.


Other important info:

  • Rate & review us on Apple & Spotify
  • Follow us on social media at @PPWBPodcast
  • Watch & comment on YouTube
  • Email us at pastorandphilosopher@gmail.com

Cheers!

NOTE: This transcript was auto-generated by an artificial intelligence and has not been reviewed by a human. Please forgive and disregard any inaccuracies, misattributions, or misspellings.

Randy  00:06

I'm Randy, the pastor half of the podcast, and my friend Kyle is a philosopher. This podcast hosts conversations at the intersection of philosophy, theology, and spirituality.

Kyle  00:15

We also invite experts to join us, making public a space that we've often enjoyed off-air around the proverbial table with a good drink in the back corner of a dark pub.

Randy  00:24

Thanks for joining us, and welcome to A Pastor and a Philosopher Walk into a Bar.

Elliot  00:35

I'm Elliot, show producer, and Randy and Kyle, I have been looking forward to this episode for literally months. It was about eight months ago that ChatGPT brought awareness to all of us of what has been decades of AI development, but now we can actually play with it and interact with it. And it's still very dumb — ChatGPT is very elementary when you actually get into it — but it's kind of opened our eyes to where this could all go. And so there's been a lot of talk. There are a few premises around the development of AI that, however unlikely, a thoughtful person can't totally write off. And when you put these premises together, it creates this picture of what would be a really interesting future, with huge implications both in the religious and spirituality space as well as philosophy. So that's the conversation — I've listened to dozens and dozens of podcast episodes about AI, but have yet to hear the conversation about how this actually plays out for those of us who would call ourselves spiritual. These questions of what makes a human a human — or, maybe just as much, what makes God God — when you start to think about AI and the way that it's developing this superintelligence that we've only ever associated with ideas of deity before. I think that's really interesting, and that's the space that it feels like this podcast could occupy.

Randy  02:11

Yeah, I am with you. Super fun. Don't expect this conversation to be wrapped up and done, right? This is the beginning of a number of conversations that I anticipate we'll have about artificial intelligence — its impacts, its implications, spiritually, philosophically, for humanity. I'm excited about it too. I'm excited for you, Elliot; this is a big day for you.

Kyle  02:34

Let's be honest. You know, I remember you had written a bunch of questions and an outline before we ever had a guest lined up about this, and we folded some of those into this conversation. I don't know how satisfying they were or not. But I feel a little guilty now, because I kind of steered the conversation more towards the technical side for a while before we got into the spiritual stuff. But we did pick a guest who could do both, intentionally, and I definitely want to have more conversations about that. Unfortunately, there aren't a ton of people who straddle both spaces. Derek is one of the few who is both an expert in certain kinds of technology and artificial intelligence and also tries to inform that with his faith. That's kind of unusual.

Randy  03:14

Yeah. So today we're talking to Dr. Derek C. Schuurman. He's a professor of computer science at Calvin University, which just happens to be — like, we just keep talking to people from Calvin.

Kyle  03:24

It's not intentional. We just keep finding interesting people who happen to be

Randy  03:27

there, exactly. And Derek — he'll walk us through his journey with computer science and with technology and AI. But in some of his articles, which are great — you can go to his faculty page on Calvin University's website and get links to them — he said something to the effect of: when I did my graduate work 20 years ago, I would have never imagined that we would be where we are now with AI; it's just gone so fast. And it makes me wonder what 20 years from now looks like — will we be saying the same things? So we have this conversation with Derek about: will artificial humans come onto the scene sooner or later? Will cancer be cured? All these big questions that a bunch of us simple people have — we ask those questions, and then Kyle gets into the weeds a little bit with some philosophy and ethics and really fun stuff. So I'm excited to share the episode with you. I'm also excited about what we're about to taste. We are A Pastor and a Philosopher Walk into a Bar, and that means we want to hold space for conversations that you would have in a bar rather than a church or classroom. And that sometimes requires an alcoholic beverage. So for the first time, we're tasting a tequila.

Kyle  04:30

Yeah, yeah, we've never done this before. I love tequila. It's probably my, I don't know, third favorite spirit, if I had to rank them. And I recently acquired this one. So this is from a producer called Fortaleza, and this is their añejo, which means it is aged — for 18 months in this case; I think in general it just means aged more than a year, if I'm not mistaken. So if you know anything about tequila, you know there are basically three kinds. There's blanco, which is the clear stuff that you're going to get in the bar if you order a margarita. It's very young — it's literally distilled straight into the bottle — only for mixing, pretty much, although there are premium versions of it. But yeah, you want to put that in a margarita. There's reposado, which is aged for a little while. And then there's añejo, which is aged for longer. And then there's, I think, super añejo; I've never had any of that. So there's obviously highly premium stuff. So this is a pretty nice bottle. This is hard to find, this is expensive, and a local liquor store just happened to get some, so I rushed down and got some. I've actually seen people trade pretty nice bourbon for this. And it's the only tequila I've ever owned that is sippable.

Randy  05:40

Yeah, I mean, I love tequila as well. I would say tequila is my second favorite spirit.

Kyle  05:44

Yeah, I mean, I've had nice tequilas, but like, you still want to mix them into something. And I do want to mix this, but I also like to sip it.

Randy  05:51

I love a good tequila flight. The nose on this, for me — I think of savory tropical fruit. Like, it's got some sweetness to it, but it's got that funky savoriness to it. It's almost like an umami kind of note. It's complex. Yeah.

Elliot  06:08

On the nose, I get lime. And I was trying to — it's not like grass clippings; it's more like forest ferns. Like, it's there.

Randy  06:16

Yeah, yep. Now that you say that — yeah, I can smell that.

Elliot  06:19

And you were saying, Kyle, off mic earlier, that glue would be a common note?

Kyle  06:25

Or something, yeah. I guess I don't know if it's common, but I've always, for whatever reason, in tequila gotten a glue note, mostly on the nose. That sounds terrible, but —

Elliot  06:35

As soon as you said it — there's something that was just like a sourness to it. Yeah.

Kyle  06:39

I assume it's the agave or the process or something. I don't know.

Elliot  06:42

Yeah. Who of us doesn't have positive memories of, like, Elmer's glue? Just channel that.

Kyle  06:49

Yeah, so this is double distilled in copper stills, hand-blown bottles. Pretty fun stuff.

Randy  06:55

It's lovely. I mean, it brings — there's no tequila sting until maybe like five seconds after, in your throat, but so much peppery effervescence on my tongue after it goes down. And it's got that really subtle tequila flavor that it brings, that's, again, a little bit savory, a little bit citrusy. But it's very, very mild. Yeah.

Elliot  07:18

Yeah, all the heat I would associate with something like green chilies or something. Yeah.

Randy  07:22

Yeah. Like, yep. It just kind of bounces on your tongue, and the heat isn't heat — it's, again, peppery. Yeah.

Kyle  07:29

I wonder how much of that is that I associate tequila with a certain kind of cuisine, right? And so I wonder if it's just impossible for me to separate that experience out from it — the flavors and the spices of the food. When I'm drinking tequila, it's very difficult to block that.

Randy  07:45

I think certain foods are made for certain drinks, you know? Like, seafood should be had with white wine, and Italian food should be had mostly with red wine. Mexican food and tequila are friends. I mean, I don't know the history behind it, but it probably is influenced by that. But I don't care — it's delicious. Yeah.

Kyle  08:05

this is a really wonderful tequila. It's 40%, so it's like really easy drinking. It doesn't have any harsh notes; it's well rounded. Apparently additives in tequila is a big issue, and it's kind of hard to find brands that don't do that. And this is one of the

Randy  08:20

bigger players that don't. Well, one more time — what is it?

Kyle  08:23

This is Fortaleza Añejo. Yeah. Whoa.

Randy  08:25

Awesome. Thanks for sharing. Regular listeners will know that we like to read reviews that you guys leave, because we love them. We love hearing from you. We love hearing from you via email, but we really love the reviews, because it just helps the show. So I wanted to feature this one, from a person called Ron the W, who said: "I've been listening to the podcast for a month or so now, and I'm really glad that I found it. The podcast does a really good job of dialoguing necessary, challenging, and divisive topics in Christianity that are often unaddressed or unquestioned. Go into it with an open mind, and know that the point isn't to agree on everything that is expressed, but to be challenged by it, to do your own work as you grow in each topic." Ron the W, cheers. We appreciate your review. Yeah,

Elliot  09:12

gotta be Ron Weasley, right?

Kyle  09:14

I would assume so. That's gotta be it. If it's not, that's disappointing.

Randy  09:18

Man, I'm not geeky enough to have put those together.

Kyle  09:22

No, no — thanks, Ron. A small addendum: the goal actually is to agree with me. So I understand the confusion. But no, thanks — we love your reviews. We also love your questions. Every now and then we get a good one, and we like to feature those on the show. I got one from one of our Patreon supporters named Pierre Lumpkin. Pierre was listening to our conversation with Tom Oord about omnipotence — go back and listen to that if you haven't heard it — and had a question about open theism. This is something we haven't dove into specifically on the show, but it has come up from time to time. Tom himself is, of course, what he calls an open and relational theologian — an open theist. I'm an open theist insofar as I've thought about it carefully. You were at one time and I think aren't anymore. This one might —

Randy  10:09

As a child — talk to the child.

Kyle  10:13

Excellent, excellent. But it's really a question about prayer. So here's how Pierre asks it. While thinking and reading about open theism and amipotence — which is Tom's view, instead of omnipotence — as contrasted with classical theism, which is the standard view of most Christian theologians in history, this question occurred to me: how can a classical theist argue, without unnecessary mental gymnastics, that prayer is efficacious, unless God is really and actually relational and the future is open? It seems to me, he says, the classical theist is simply saying that God is going to do what God has preordained. Therefore prayers really have no effect on the future and don't change God's mind. God is not going to heal your mother from cancer unless God had preordained that she would be healed — unless, of course, it serves some other purpose. Pretty common question, and it has obvious relevance to open and relational theology versus what he's calling classical theism here. So I'll answer for the open side, and you can tackle it from your side. Essentially, open theism says — this is a very crude way to put it, but essentially — the future is open, which means it is undecided. Some open theologians will put it this way: there are real maybes in nature. There are ways things might go, and "might" is like an ontological category. Whereas on a more predetermined view, there are ways things will go, or possibly would go, but God knows how everything is going to go, has known it all along, set it up, and it's going to go that way. And then they'll have internal disputes about how much of it God planned or ordained or whatever. But open and relational theology says God might not — this is kind of an entailment of the claim — God might not know some stuff about the future, because it's undecided, because there isn't a thing there to know. Which is interesting, that Tom would hold a view like that, because we kind of pressed him on a similar point about omnipotence, right? Is it speaking against omnipotence to say that there are some things God can't do? I want to say no. And you can make the same point about knowledge, right: is it speaking against God's knowledge to say that he doesn't know the future? Not if there's nothing there to be known. So anyway, that's a quibble we could have — a ton of them. Yep. But Pierre has a nice question here. Because if you think prayer does something — is efficacious, as he says — then how can you believe that if you don't think the future can be different from how it has been decided to be? And theologians no less than Thomas Aquinas came down on the side of saying, essentially, correct — the medievalists can correct me here — but essentially, prayer isn't efficacious. Or if it is, it's because God is honoring our wishes in some sense and then carrying them out. But really — and this is like C.S. Lewis too — really, prayer is therapeutic. It's for me, and it's not really changing God's mind or doing anything in the world. I find that view of prayer really weak, honestly, really lacking. I like to think of prayer — insofar as I think it is a valuable thing to do — as having some power and doing something in the world, where we're like partnering with God to bring about some kind of end, and prayer is, you know, kind of a means of doing that, or at least an expression of someone engaged in that project. So I'm right there with Pierre. And I'm very curious how Randy might respond to this, because I don't like the view that prayer is just for me.

Randy  13:36

I don't know what Pierre means by classical theism, for one thing. Like, is he talking about Reformed theology? Is he talking about —

Kyle  13:43

I think it includes kind of all the branches that say that God knows every true fact, and that includes facts about the future.

Randy  13:52

Yeah, so the reason that I stopped being, for a moment, an open theist is mostly because of the Bible. Because clearly, we see — I think open theists have to — I've seen open theists, mainly Greg Boyd, jump through hoops to avoid the prophetic verses in the Bible, jump through hoops to avoid Jesus talking about the future and predicting the future. It's clear to me in the Bible that God knows things about the future — whether God knows all things about the future, whether God laid them out... I don't believe that God laid out all things and it's all going to happen, it's inevitable, because then, for sure, prayer would be absolutely just for us, or useless. But here's my deal with prayer as a non-open theist: it's a mystery. There's just no — there have been a million books written about prayer, talking about what it does and what it doesn't do. I'm 45 years old. I've been a pastor for, I think, yeah, almost half of my life now. And I've seen miracles happen as a result of prayer — what I believe is as a result of prayer. You pray for something dramatic, and it happens. I've seen that happen. I've heard stories. I've been a firsthand witness of a blind person getting prayed for, and they can see, just like that. I have friends who watched a leg grow that was shorter than the other. I mean, there's just crazy stuff where it seemed like prayer worked. And then I've seen exponentially more where we pray, pray, pray, pray, pray for this person, and nothing happens — they still die, or this bad situation is still happening. And that's why I say, pastorally, I can just say prayer's a mystery. I've been praying for our guest, you know — he has a family member whose health has been not great, and he told us about it, and I've been praying for them. I pray for people who ask me to pray for them. But I think prayer is for me — it's more for me, it's more to try to relate on some level to a deity that is unseen. To me, that's what prayer's for. And for me, prayer looks like fewer words these days. Sometimes no words.

Kyle  15:57

Yeah, or praying with a tradition, like praying through the Book of Common Prayer. That's a lot more meaningful to me these days.

Randy  16:04

Yep. So I think trying to say our tradition is better because prayer works for it is just disingenuous. Because an open theist can pray for something, and it may or may not happen. The reality is, it's probably unlikely to happen for any of us, no matter what view of prayer we have.

Kyle  16:19

All right. Well, obviously, this could be its own episode. Maybe someday it will be. We never promised that we're gonna satisfactorily answer these questions, but we will air them if you send us a good one. So thanks, Pierre. If you want to hear more about our thoughts on miracles, go back in our feed and listen to our conversation with Craig Keener, where we tackle a lot of these same issues.

Randy  16:48

Well, Dr. Derek Schuurman, thank you so much for joining us on A Pastor and a Philosopher Walk into a Bar.

Derek  16:53

Yeah, great to be with you. And thanks for the invitation.

Randy  16:55

Absolutely. So Derek, as you can tell by our outline, we've been chomping at the bit to talk to someone about AI — so we gave you a bajillion pages of notes with questions on them. We're chomping at the bit, but can you tell us who you are, what you do, what's your area of expertise? And why are we chomping at the bit to talk about AI with you?

Derek  17:13

Well, I'll leave the last question for you to answer. But I've basically been working for the last 20 years as a professor of computer science, most recently at Calvin University in Grand Rapids, Michigan. Prior to that, I worked as an engineer for roughly a decade. And my PhD area of research, which is now going on 20 years ago, was using machine learning in the field of robotics and computer vision. And already at that time, I noticed there were a lot of people basically trying to apply machine learning for computer vision in the area of face recognition. That was sort of a hot topic; a lot of the cool kids were doing face recognition. And it sort of struck me that that was a problematic application area — that there were all kinds of, you know, pitfalls. And so my instincts sort of nudged me to look at other areas for applying that. And so eventually, when I finished my graduate work, I went on to establish a bit of a research program looking at things like recycling — using machine learning for visual sorting of goods, so using it to help sort recyclable materials. And of course, in the last 20 years, things have really taken off. And especially in the last year or two, the public awareness of ChatGPT has brought people to begin to ask big questions about where we are going with this. So in the last five, ten or more years, I've actually shifted a little bit to thinking more philosophically and theologically about this, kind of realizing that my engineering education didn't really equip me to think well about these things. You know, "how then shall we engineer?" is a question that requires a philosophical and theological background. So, working at a Christian college, I had the opportunity to rub shoulders with theologians and philosophers and social scientists and other thoughtful people, and began to try and think of ways that I could look at my discipline through the lens of faith, through the biblical story, and through Christian philosophical thinking.

Randy  19:23

It's so interesting, Derek. And as you're talking, I just realized: we've now interviewed more professors from Calvin University than from any other school.

Kyle  19:34

Coincidentally. We've talked to —

Randy  19:37

Jamie Smith — as you were talking, I was thinking about your —

Kyle  19:42

Calvinist joke.

Derek  19:43

Some great colleagues here, actually. So you could do worse elsewhere, I'm sure. Yeah, absolutely.

Randy  19:49

And Kristin Du Mez as well. So you're the third Calvin professor. Yes.

Kyle  19:57

Yeah. So that's one of the reasons we wanted to talk to you. Wait — there are tons of experts on AI specifically we could talk to, and maybe we will in the future, but you're one of the few who straddles the divide between that and theology, or at least tries to make the theological significance of artificial intelligence explicit. And "explicit" is the key word there. So we're gonna have some questions about that too. But before we get to that, let's just talk about the straight science, if we can, for a bit, just to get our listeners kind of on the same page about where things are. And that is fundamentally the first question: what is the current state of artificial intelligence? What can it do right now? If you have a prediction for what you think it'll be able to do in the next five or ten years, great. But also, what can't it do? Because there's all sorts of stuff going around — from fears of AI taking over the world, to: is it possibly intelligent right now? It seems like maybe it can already pass the Turing test. What does that mean? What can and what can't it do? And where are we?

Derek  21:01

Yeah, I mean, in broad strokes, artificial intelligence is the science of, you know, programming a computer to do something that would appear to be intelligent to us. And the field basically goes back to the 1950s, with pioneers like Marvin Minsky and Alan Turing and others, who already at the dawn of computing were asking these questions: what can we do with a computational kind of engine, and what is possible in terms of mimicking human intelligence? So that question goes way, way back in computer science. And of course, there's been lots of research activity over the decades. Things like neural nets, which are quite common words nowadays, were being bandied around already in the 80s and 90s; people have been exploring these different approaches to artificial intelligence. And machine learning is sort of where a lot of people have been focusing these days. Machine learning is a subset of artificial intelligence, and then neural nets — the technology that's captured most people's attention and has come up with some startling accomplishments in recent years — is one area of machine learning. So the field is quite broad, but it's the deep neural nets that have recently caused a lot of people to take notice, because they've been able to do some quite remarkable things, right? Back when I was in grad school, if you had asked me, you know, would you ever be able to make an autonomous vehicle, I would have sort of laughed at you, because the computer vision problem was just so big back then. Being able to recognize objects in an unstructured environment, with highly variable lighting conditions and, you know, all kinds of unforeseen objects that you would encounter, just seemed like too big of a problem. And it just turns out that if you have a very large dataset, and you're able to create a deep neural network with sufficient training and tuning and so on, these things are quite capable of doing some remarkable things. And so within ten years of when I finished my graduate work, people were already showing proof of concept of autonomous vehicles. And then image recognition: we were doing work in computer vision, and a lot of these deep neural nets were achieving really remarkable recognition rates on, you know, recognizing cats in images and these sorts of things, which is actually a fairly challenging problem. And then, more recently, the large language models have really taken over people's attention. Most people have played around with ChatGPT or heard about it. And it is remarkable what these tools can do, with the size of dataset, and the way that the field has matured to this point, that they're able to do what they're able to do.

Kyle  23:58

A couple of terms you used there that I want to ask you to define: machine learning, deep learning — I'm not sure if you mentioned that one, but I wanted to ask you about it anyway — neural nets, and also large language models. If you can define all of those terms, that'd be great.

Derek  24:14

Okay, I'll give sort of broad-strokes explanations of them. So why don't we start with a neural net, which is basically an interconnection of nodes that are connected by weights, where you can have certain inputs, and the interconnection of these nodes basically multiplies different values by different weights, and then you get a number. So a number goes in and a number comes out. And it turns out that if you tune these things really carefully and in a strategic way, you can have inputs coming in that represent things like temperature and humidity, or perhaps the brightness of pixels in an image. And if you tune all these weights in this network accordingly, you could actually have an output that indicates, you know, whether or not a certain object is found in the image, or certain types of colors, or certain types of features. And a deep neural network is basically taking that structure and just adding lots and lots and lots of layers to it, so that there are lots and lots of different weights and nodes inside of this large network, and then its capabilities for distinguishing and classifying things just grow. Machine learning doesn't necessarily have to use neural nets. Machine learning is the process by which you take a bunch of sample inputs and you train an algorithm — your program, basically — to classify those inputs. So you take those inputs, you train a program to recognize them, and then you feed in new inputs that it hasn't seen before, and you verify whether or not the learning has been successful. Machine learning, back when I was doing graduate work, was using things called principal component analysis and support vector machines, which are still used today. But the main way people do machine learning nowadays is using these neural networks. And they're called neural nets because they're kind of a simulation of the neurons in your brain, right? If you remember from basic biology, your brain is made up of a huge network of nerve cells that have axons and dendrites, and they're all interconnected, and they basically, you know, send messages to each other, and they amplify them or attenuate them. And that's basically how your brain learns things. So the neural network is sort of biologically inspired by the neurons in your head. And then, as computing capability really grew, the ability to put lots and lots of these weighted nodes and networks into a computer became more and more plausible and possible. And we began to discover that these things actually work quite well for doing lots of recognition tasks.
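
(For readers who want to see the shape of what Derek describes, here is a minimal sketch in Python — purely illustrative, not code from the episode. The layer sizes, input meanings, and random weights are all arbitrary assumptions; the point is just that numbers go in, get multiplied by weights through layers of nodes, and a number comes out.)

    import numpy as np

    # A tiny "neural net": 2 inputs (say, temperature and humidity),
    # 3 hidden nodes, 1 output. All sizes are arbitrary toy choices.
    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(3, 2)), np.zeros(3)   # weights and biases, layer 1
    W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)   # weights and biases, layer 2

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def forward(x):
        h = sigmoid(W1 @ x + b1)      # each hidden node: weighted sum, then squashing
        return sigmoid(W2 @ h + b2)   # output in (0, 1), e.g. an "object present?" score

    print(forward(np.array([0.7, 0.2])))  # a number goes in, a number comes out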

Kyle  26:59

Right. And so deep learning — where does that come in?

Derek  27:02

That's just more and more inner layers of these weighted networks — just going and building these at a very large scale, essentially.

Kyle  27:14

Okay. And there's like a recursive aspect to it, right? You're giving the thing a bunch of data, and — I'm not even sure how to ask the question I want to ask, but through lots and lots of iterations, it improves itself, in a way. Can you explain how that works?

Derek  27:33

Yeah, so the algorithm is typically referred to as backpropagation, where you put in an input and then you look at the outputs. And if they're not really classifying things correctly, you go back through all the different weights — you know, working backwards through all the weights — and adjust them such that the output is nudged in a more correct direction. And you do this repeatedly over time, until the network converges and is able to have an acceptable recognition rate. So this is the big algorithm that's used to train it. And then, once it's set up, the hope is you put new inputs into the device — it has seen enough training images, and the weights have been, you know, appropriately set — such that you can now input new data and have it correctly classify things.
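
(Again, purely as an illustration of the idea, not anything from the episode: a toy backpropagation loop on made-up data, where the error at the output is pushed backwards through the weights and each weight is nudged a little on every iteration. All names, sizes, and numbers here are arbitrary assumptions.)

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 2))                   # toy inputs (two features)
    y = (X[:, 0] + X[:, 1] > 0).astype(float)       # toy labels to learn
    W1, b1 = rng.normal(size=(2, 3)), np.zeros(3)
    W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)
    sigmoid = lambda z: 1 / (1 + np.exp(-z))

    for step in range(2000):
        # Forward pass: the network's current guesses
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)[:, 0]
        # Backward pass: work from the output error back through the weights
        d_out = (out - y)[:, None] / len(X)         # how wrong, and in which direction
        dW2 = h.T @ d_out
        d_h = (d_out @ W2.T) * h * (1 - h)          # propagate the error backwards
        dW1 = X.T @ d_h
        # Nudge every weight a little in the direction that reduces the error
        lr = 1.0
        W2 -= lr * dW2; b2 -= lr * d_out.sum(0)
        W1 -= lr * dW1; b1 -= lr * d_h.sum(0)

    print(((out > 0.5) == y).mean())  # recognition rate climbs as the net converges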

Kyle  28:22

Right. So I just listened to an interview with the head of DeepMind the other day — this is Google's advanced AI lab, right? And they were talking about — because they, you know, very famously created an AI that could beat the best Go player in the world. And Go, it's kind of like a more complicated version of chess, basically — a super old game; I've never played it personally. AlphaGo is the name of the AI. And so, based on what we've been talking about, essentially they got it to where it could beat the best human players. And then they kind of did a new thing — and I don't know the details well enough; go listen to this interview if you want to — but they made a new one, I think they call it AlphaZero or something like that, that could demolish the other one. And essentially, they told it nothing. They just let it evolve. So the first one they had primed with all these rules, and it watched a bunch of human players and kind of picked up on the sorts of things that they were doing, and so it was already primed to do what a good human player would do. But with the updated version, having told it nothing, it figured out ways to play the game that no human had ever thought of. In fact, it would do things that human players thought were bad moves. And when you ask them, why did you think that was a bad move? They're like, that's what I was taught, and so we never did that. But the AI figured out a way to master that and then used it to destroy not just human players but the other AI. So how does something like that — where you don't give it a set of rules, and you don't, you know, train it on a ton of experience; you just kind of, I don't even know how it works, but you let it evolve itself — how does something like that come about?

Derek  30:04

Yeah, I mean, that's a really good question. This is what attracted me to machine learning back when I was doing my graduate work. So there were different ways of doing computer vision back then. You could either set up a camera and then meticulously calibrate it, and then come up with all the geometry — right, where you're projecting a 3D world onto a 2D image plane — do all the, you know, linear algebra and mathematics to determine how things appear when they're projected onto a CCD, and sort of do all the math. Or, if you use machine learning, you set up a camera — in my case, we didn't even have to calibrate it — you just show it a bunch of images and you classify them, and then you allow the machine learning system to learn what the appearance of something is. And then you show it new images, and it's able to recognize things. It just seemed like a much more powerful way to solve a complex problem, rather than going back to meticulously computing all of the geometry and ray tracing and all this sort of stuff. And that's what makes machine learning so attractive: it can kind of find its way to a good solution. And your question about how it does this — well, it's all math, basically. So even in the case where you aren't giving it the rules and all this sort of stuff, there has to be some kind of goal function that you define mathematically. So you need to mathematically define a goal of some kind. And then there are very elaborate algorithms that do something that people refer to as steepest gradient descent. But basically, the idea is that if you have a mathematical representation of the space that you're working in, you can try to optimize your goal function by moving in a direction — by tweaking weights and so on within the network — so that you get closer to achieving your goal, which you need to define mathematically, right? That would be whatever winning is, whatever constitutes winning. And over time, it'll basically find its way to minimizing that function. Now, there are challenges with that. Sometimes it finds a local minimum — it doesn't find the main, global minimum, but it'll find a local pool, a local minimum. So these algorithms basically mathematically march in certain directions so that the goal function gets minimized (or maximized, depending on how you express it) with each iteration, until it converges on a reasonable solution. So the end effect is that, you know, without giving it any a priori information, as long as the goal function is clearly and robustly defined, you can see an AI begin playing games, right? You see AIs playing games that have all kinds of complex interactions, but over time, through training, they begin to discover — mathematically, by following this steepest gradient descent — how to minimize that goal function. And the results are quite remarkable. So it's all math under the hood. It's not conscious; it's not thinking about, you know, "what do I need to do to make this better?" It's just basically chugging through and trying to minimize a particular goal that has to be expressed mathematically.
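
(A tiny illustrative sketch of gradient descent on a hand-made "goal function," including the local-minimum behavior Derek mentions. The function itself is an arbitrary assumption, chosen only because it has several valleys — it's not anything from the episode.)

    import numpy as np

    # Define a bumpy goal function, then repeatedly step downhill.
    def goal(w):
        return np.sin(3 * w) + 0.1 * w**2          # several valleys (local minima)

    def grad(w):
        return 3 * np.cos(3 * w) + 0.2 * w         # slope of the goal function

    for start in (-2.0, 0.0, 2.0):                 # three different starting points
        w = start
        for _ in range(500):
            w -= 0.01 * grad(w)                    # step a little downhill each iteration
        print(f"start={start:+.1f} -> w={w:+.3f}, goal={goal(w):.3f}")

    # Different starting points converge to different valleys --
    # the "local pool" rather than the global minimum.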

Kyle  33:24

So is one of the limitations, then, on building what they're calling an artificial general intelligence — which is something that can not just do one specific type of thing, like beat a human player at Go or, you know, write a clever essay like ChatGPT or something like that (or an apparently clever essay), but something that you can feed novel problems, or give any number of kinds of tasks, and it can just figure out how to do them; as far as I understand it, that's what an artificial general intelligence is, correct me if I'm wrong about that — is one of the limitations on building something like that, from where we are, that it would be very difficult to define a goal function for everything?

Derek  34:03

Yeah, and now we're getting into more philosophical thinking: not everything that counts can be counted. So one of the things about a computer is that it's biased towards quantifiable information, stuff that can be computed. And so — I was part of a conference just today where we were talking about ethics and virtue and AI — when you begin thinking about different aspects of creation, like ethics and virtue and aesthetics and these sorts of things, and justice: can you boil those things down to goal functions? Can you express them numerically? Can you express them mathematically? And I would say that to say that you can presupposes a very reductionistic way of looking at creation, essentially. I think one of the starting points that I have as a Christian computer scientist is that the world is complex and diverse, and there are certain things that are irreducible. Now, it turns out that, you know, the numeric sort of data in the world is very powerful. We can learn a lot of things about the world through data and through numerical analysis and through computation. But it doesn't capture all of reality. We can't reduce all of created reality to numbers and data.

Kyle  35:23

I'm a little suspicious of that. I think maybe we can.

Derek  35:27

Let's put it this way. You know, the three of us are having a conversation. There's all kinds of numbers going on, right? There's three of us in conversation. Every single pixel and sound value that's going back and forth can be quantified, right? So my voice waveform has a particular shape that can be quantified with different magnitudes over time. I'm looking at you on the screen here, and there are rows and columns of pixels that all have brightness values and red and green and blue components. So there's all kinds of numeric aspects to what we're doing right now. But the essence of what we're doing — you know, the philosophical thinking, this sort of relational exchange back and forth, the social aspects — they can't be reduced to just numbers. There's more going on than just the numerics of what we're doing.

Kyle  36:21

Yeah, but can they be reduced to some kind of processing of information? Because, you know, "numbers" might sound to some simplistic, but if you believe as I do that, in some way that we don't even come close to understanding, the brain is fundamentally an information processor, and it is somehow creating this experience that we're having — inclusive of the logical aspects of it, and the emotional aspects of it, and the experiential aspects of it, maybe even the conscious aspects of it; that gets a little tricky. But if we could figure out a way, which everybody's trying to do, of actually simulating brain activity — now, we should say, and hopefully you will agree with us, computers are simply nowhere close to that, and not really even doing anything very much like what the brain is doing. Not only that, we don't really know what the brain is doing. So we're not even kind of approaching that; we're centuries away, maybe, from that. But if we could — and it seems like maybe the only thing holding us back is not some fundamental possibility barrier or something; maybe it's just processing power, that's what, you know, people like DeepMind seem to think, maybe it's more than that — but if we could get the right tools, and we could have the right amount of information, maybe we could replicate something like the human brain. And if we could, maybe we could do the thing that everybody seems to be aiming for, which is create a thing that, for all intents and purposes, is indistinguishable, from our perspective, from what we're doing. Now, it's a different thing to say it's actually conscious, it's having an experience, it's actually intelligent — that's a separate issue. But, you know, if we follow Turing here, we might conceivably be able, from where we are now, to create a thing that you or I could not tell the difference between.

Derek  38:10

I think that's why Christians need to be conscious of ontology, right — you know, what does it mean to be — and anthropology: what does it mean to be human. But I think the technology, and AI in particular, is moving at such a pace, and the computational power and some of the datasets that we have are so powerful, that I think we'll be able to build machines that will be able to fool most of the people most of the time in terms of interacting with them. You know, I think the Turing test will easily be passed. But that does not mean that we have built a human.

Kyle  38:49

Right, right. So most of the worries that I've encountered — and I'm very much a novice on this issue — most of the worries that I encounter amongst experts, including, you know, AI companies — like, what's the one that broke off, Anthropic or something like that? It broke off from a larger one specifically because they didn't think they were taking the threat seriously enough, and sort of defined themselves according to that. And so you've got people like Nick Bostrom and whoever, who are really worried not so much about creating a sentient thing, but creating a thing that gets out of control and destroys us all by doing essentially what it was designed to do. That sort of superintelligence, as they call it — is that something that worries you? And if not, why not?

Derek  39:33

Yeah, I'm not as worried about that. I wouldn't be so bold as to say that can't happen. I mean, if anything, I think our science fiction movies are great thought experiments about all the ways things can go wrong when we make poor choices and design decisions. The Frankenstein narrative is basically alive and well in much of our science fiction movies, and I think in many ways the artists are kind of out there first; they sort of ask this question. What was it that one science fiction author wrote once — that the job of a science fiction author is not to imagine the car, but to imagine the traffic jam? Right, so, to think about the implications of these things. So I'm not as worried about the existential threat. I think it makes for a fascinating narrative and interesting discussion. But I'm more worried about more, you know, kind of garden-variety threats, like, you know, injustice going on: deploying algorithms and AI for determining who gets a loan, and who gets parole from a prison, and who gets a job, and who gets a raise, and who gets audited, and who gets insurance — all these sorts of things that can be easily automated by machines. And these sorts of things, I think, have already proven to be highly problematic, right — the bias and injustice that have been propagated by some of these things. One very popular writer, Cathy O'Neil — she wrote this book titled Weapons of Math Destruction, and she highlights already today how data science, and people working with big data and AI, are throwing these sorts of tools at all kinds of problems to automate things, to make them more efficient, but in the process are introducing all kinds of injustice and so on. So I'm more worried about those sorts of things rather than, you know, the sort of Skynet Terminator scenario. And I think those are the ones that we need to grapple with, like, right now. So, in terms of threats, I'm less worried about the Skynet scenario.

Randy  41:54

And when you talk about injustice, you're talking about humans, who have an inherent sense of bias, programming that into computers — is that what we're talking about?

Derek  42:06

Yeah, it's more the datasets, you know. Sometimes humans can be creating these systems but naively doing so, by using datasets to train artificial intelligence. So here's one really simple example. Let's say you want to create an AI to help with hiring, right, for human resources. So you take the dataset of all your existing employees that do a really good job, and you use that to train for what you should be looking at in the resumes of people who are applying. But what happens if your existing workforce is mostly white, mostly trained in Ivy League schools, mostly people over 50? Then, you know, the AI will pick up on the historical bias and patterns that are already present in your company, and then perpetuate them in terms of hiring into the future. And AI is very good at pattern matching. So even when you take out things like race and so on, these systems can zoom in on things like zip code and other things that can be a proxy for race or for social class or other things. So, yeah, the issue of bias isn't so much, you know, evil programmers putting bias into the code. It's more naive programmers who are just trying to solve a problem technically, without thinking about all of the implications that they should be thinking about.
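
(A toy illustration of the proxy effect Derek describes, with entirely made-up synthetic data: the protected attribute is withheld from the model, yet a correlated feature like zip code lets the model reproduce the historical bias anyway. Every name and number here is an assumption for the sketch, not anything from the episode.)

    import numpy as np

    rng = np.random.default_rng(2)
    n = 5000
    group = rng.integers(0, 2, n)                  # protected attribute (never shown to the model)
    zip_a = (rng.random(n) < np.where(group == 1, 0.9, 0.1)).astype(float)  # proxy feature
    skill = rng.normal(size=n)                     # a legitimately job-relevant feature
    hired = ((skill > 0) & (group == 1)).astype(float)  # biased history: group 1 favored

    X = np.column_stack([zip_a, skill])            # note: 'group' itself is excluded
    w, b = np.zeros(2), 0.0
    for _ in range(3000):                          # plain logistic regression by gradient descent
        p = 1 / (1 + np.exp(-(X @ w + b)))
        g = (p - hired) / n
        w -= 5.0 * (X.T @ g); b -= 5.0 * g.sum()

    p = 1 / (1 + np.exp(-(X @ w + b)))
    print("predicted hire rate, group 1:", p[group == 1].mean().round(2))
    print("predicted hire rate, group 0:", p[group == 0].mean().round(2))
    # The gap persists: zip code acts as a stand-in for the protected attribute.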

Randy  43:22

And are other people having these conversations, I'm assuming?

Derek  43:25

So, one thing that's heartening is to see the discipline becoming aware of how much responsibility we have when we start deploying some of these systems, and that these conversations are happening far and wide. Unfortunately, I think, you know, in the corporate environment there's often a race to get your product out there, and there are financial pressures and others that I think make it more difficult for people working in those contexts. But among academics especially, I think the issues of bias and justice in data science and artificial intelligence are a theme that you hear quite frequently.

Elliot  44:15

Friends, before we continue, we want to thank Story Hill BKC for their support. Story Hill BKC is a full menu restaurant, and their food is seriously some of the best in Milwaukee. On top of that, Story Hill BKC is a full service liquor store, featuring growlers of tap beer available to go, spirits (especially whiskies and bourbons) thoughtfully curated, regional craft beers, and 375 selections of wine. Visit storyhillbkc.com for menu and more info. If you're in Milwaukee, you'll thank yourself for visiting Story Hill BKC, and if you're not, remember to support local. One more time, that's storyhillbkc.com.

Randy  45:05

So I want to take the conversation in a more clearly spiritual or theological direction. You've written about the possibilities of AI deities that could be created, and people who are right now working on creating an AI godhead that would work for good in the world. That sounds crazy to me, but also then not crazy. How prevalent do you think these cocktails of technology and spirituality will get, and how do you think this will shape our ideas of the divine?

Derek  45:30

Yeah, that's a really interesting question. So the piece that I did write about — it was about Anthony Levandowski, who was an AI practitioner, a very clever programmer, who actually wanted to start a church of AI. Now, this has since, from what I understand, been disbanded, and people have moved on. But there was this impulse to kind of see AI as this superintelligence that was going to basically far exceed human capacities, and people's response was religious, right? And you see this in a lot of different ways. You know, whether people are talking about fear or whether they're talking about sort of utopian dreams, they see technology as either the savior or the villain. And I think, you know, one thing that the Bible teaches us is that there's nothing in creation that's either the savior or the villain, right? I mean, redemption comes through Christ, and the problem is sin — and sin has its impact in all of our cultural activities. So, yeah, I see a lot of religious language when people talk about, on the utopian side, you know, ushering in an era where AI will cure every disease and sort of wipe away every tear. It's almost, you know, biblical language that they're using when they talk about how AI might solve every problem. A well-known computer scientist named Marc Andreessen — he was part of the early internet, sort of web development — wrote an article recently called "AI Will Save the World," and it's basically an eschatological vision of technology solving a lot of humankind's most fundamental problems. And I think it's religious at its core. And I'm not surprised, because I think at our essence we are all religious, right? We were created for God. And, you know, in the words of Augustine, we have this God-sized hole in our hearts; our hearts remain restless until we rest in Him. And we're looking for all kinds of other ways to fill that. And for some people, it's technology.

Kyle  47:38

I was just gonna ask you to expand on which specific part of that you think is religious. Because obviously, if you have a cult that wants to worship an AI, yeah, that's clearly religious. But if it's just, like: based on the evidence that I see currently, and the trajectory and the speed of progress, I can imagine a not-too-distant future where AI has done something like, you know, cured all the diseases, or, you know, done something like eliminated inequality, something like that, right? And I can feel really happy and confident and optimistic about that without being religious. So, like, where does the religious aspect of it come in?

Derek  48:18

Yeah, I think, you know, when you get a chance to talk to folks, you realize that they're animated by some kind of narrative, right? And, you know, the Bible has this creation–fall–redemption narrative. And what I find so remarkable is that when I look at some of the different views, and you sort of scratch the surface, you see they've basically replaced the biblical story with another kind of narrative. Right? So basically: what's the problem with the world, and what's the remedy? Different religious stories, I would call them — different metanarratives — will answer the questions about, you know, what's wrong with the world, and what's the remedy, and where does redemption come from, and what's the nature of what it means to be human. These are all presuppositions that everyone holds about these sorts of things. And I would say that they function as a kind of religious vision of the world — a worldview. And you see them in different ideologies; you see them in certain narratives about how things are shaped. And I think at their core, they're fundamentally religious. Now, I think people would probably push back on the word "religious"; that word is more loaded now, perhaps. But in terms of a worldview, in terms of commitments that people have, the narrative that people have — I think that all of these different views have got some kind of narrative behind them. And when it comes to AI, I think we need to ask the question: what story are we a part of? I like to quote Alasdair MacIntyre, right — he's this great philosopher of ethics. And he once wrote, you know, we can't answer the question "What ought we to do?" until we answer the prior question, "Of what story are we a part?" And so I think if you begin with that, then you can begin to think about what responsible action looks like in technology, or in this field or in this area, based on the story in which I'm living.

Kyle  50:12

I think I disagree with Alasdair. It would be a much longer conversation.

Derek  50:19

His book After Virtue — yeah. But yeah, not everyone would sign up.

Randy  50:26

So allow the least educated person in this conversation to ask some sixth-grade questions that everyone wants to know.

Derek  50:35

Oh, I wouldn't say least educated — differently educated, right? You're a theologian.

Randy  50:39

I'm least educated, for sure. But I'm not a theologian. However, here are a couple of simple questions. How close or far into the future do you think AI is from being able to cure all sorts of diseases that we've dreamt about curing for decades?

Derek  50:54

Yeah, I mean, it's already doing it in some ways. Like, you know, drug discovery: AI is able to search massive spaces. There's a program called AlphaFold, where you just have massive supercomputers that are just chugging away, looking for ways that proteins fold and interact. And these sorts of programs and capabilities allow us to explore these spaces in ways that, just, you know, as humans we couldn't imagine or plausibly do, right? It would be intractable for us to search all of these things. And then I think AI has a lot of beneficial applications in medicine, where you can use artificial intelligence to look at medical images. I was looking the other day at — you know, someone's trying to develop an app where you can kind of look at skin marks and have them flagged or not. There's AI for the environment, you know, looking at environmental issues that can impact health. Now, AI curing every disease — I think that's only going to happen when we enter the new heavens and the new earth. So my eschatology is that the new heaven and the new earth, the New Jerusalem, comes down out of heaven; it isn't subcontracted to humans. But I do think that through the blessing of technology we're able to push back some of the effects of the fall — through technology rightly directed, to be able to show love for our neighbor and to, yeah, help overcome some of the struggles that we face, including some of the health ones. So I'm optimistic. I think that's a good area for AI, and I think that's an area where investment is not only something that we can do, it's something we ought to do.

Randy  52:56

So we are affected by this word "cancer." People in this conversation are affected by it; listeners are affected by it. I just want to ask, like: are we decades away from curing cancer? Are we more than a century away? Is that not even possible with what you see now? Like, how fast are these technologies developing? How far out are we looking at something outstanding and crazy like that?

Derek  53:18

I mean, it was Yogi Berra who said once, right: it's hard to make predictions, especially about the future. And I'm not in medicine, and I'm not a doctor, and I've heard it once said that cancer isn't one disease, it's many diseases. But I'm hopeful. I'm hopeful that AI will allow us to work towards better treatments, and perhaps even some cures, in many cancers. I think, you know, it's hard to predict, but I would be reluctant to say it'll cure all cancers. But yeah, I hope so. Let's put it that way. Okay.

Randy  54:00

Couple more quick questions. In one of your articles, you referenced — is it Ray Kurzweil?

Derek  54:05

Yeah, probably. I like to quote Ray Kurzweil.

Randy  54:07

He said that he believes humans will be able to upload our brains into a computer and live forever within this century, and this idea is called the rapture of the geeks. What do you think of that? And do you think that's a real possibility?

Derek  54:24

Yeah, actually, I think I saw that title, rapture of the geeks, in an IEEE Spectrum magazine. The IEEE is this huge technical organization, and they actually have an article called the rapture of the geeks. It shows you the religious underpinnings of some of these views, right? I mean, there are certain presuppositions you have to hold about what it means to be human to even consider the possibility that one could upload their brain into a computer and live forever. My sense is that's a highly reductionistic view of what it means to be human, and I think death will only be ended when Christ returns. But you know, Ray Kurzweil is a very clever guy. He came out of MIT, he's got all kinds of inventions under his belt, and he's a director of engineering at Google right now. And he's taking all kinds of vitamins each day, trying to preserve his health and his body, so that once the technology advances sufficiently he'll be able to upload his brain and avoid death. From the standpoint of a Christian anthropology, we would say that we're much, much more than the electrochemical reactions in our brain. If you presuppose physicalism, that all we are can be completely captured by the electrochemical reactions in our brain and that we can simulate those with high fidelity on a computer, then, granting that presupposition, I grant that it could be a possibility. But I think being human is much, much more than the electrochemical reactions in our brain. From the Christian creation story, we see already with Adam that he was made out of the dust of the earth, but also the breath of the Lord. The dry bones in Ezekiel's vision were all put back together into bodies, but it still required the breath of the Lord; there's spirit as well as body. So our anthropology is much more complex than a bunch of atoms and molecules; there's much more to us than that. I think of the words of Psalm 115 when I think of that vision of uploading a brain into a computer: Psalm 115 says that those who trust in idols will become like them. And I think that's exactly what will happen. When you upload your brain into a computer, you'll basically become a computer.

Randy  56:47

I mean, it sounds like we're having a conversation in a sci-fi movie. This is ridiculous. But I was also tempted to make a joke about how uploading your brain into a computer and living forever in a computer, and eternal conscious torment, are about the same thing. But I will...

Kyle  57:03

It very much depends on the software.

Randy  57:07

Last quick question, and it's kind of a sixth-grader, will-this-ever-happen question: the idea of AI humans becoming a thing. I mean, I've already watched commercials from organizations trying to make that happen right now. They're in the infancy stages and full of all sorts of glitches, but it seems like we're headed that way. Would you agree with that? Or do you think we're going to push pause before that happens? What does that look like?

Derek  57:33

You mean creating AI so that it looks just like a human?

Randy  57:36

Exactly. And yeah, it talks to you like a human, and you go to the doctor, and it's not a doctor but a, you know, physician's assistant that's actually a robot.

Derek  57:45

I think that impulse is there. I mean, you see people trying to create machines in our own image all the time, and I find that problematic. I'm all for using AI to help with medicine and climate change and environmental monitoring and designing safer cities, all these good applications. But trying to build a person, I think, just leads to ontological confusion. Machines are machines, and people are people, and neither are God. And I think trying to build a machine that looks or sounds like a person is actually problematic. When you talk to Siri and it uses the first-person pronoun, it's pretending to be a person, right? But it's not. A person who's written really well about this, a thoughtful social scientist, is Sherry Turkle; her books Alone Together and Reclaiming Conversation and so on have engaged the question of robotic partners. I mean, sex bots are a thing now; people are developing sex bots. And I think a relationship between you and an AI robot, or even a disembodied AI entity of some kind, like in the movie Her, is a relationship of essentially one person. And I think that's problematic for people; empathy and mutuality require interacting with other real people. But this impulse is going to be there. I see researchers trying to build robots that look more and more like us. My inclination is: build robots and build AI, but don't try to fool people by making them look like real people, because they're not, right? So if you're going to build a machine that washes dishes, don't build a humanoid robot; build a big box where you can put dishes in and it washes them. It looks more like a dishwasher than a person, right? If you're going to build a robot to help with nursing, don't replace nurses, with their caring, empathetic human touch; build machines that can help with lifting people and doing some of these really difficult and challenging tasks. And those robots may not look like people at all; they may just look like machines for doing those tasks. So yeah, I think it's problematic, and my preference would be not to have all kinds of machines looking like people, and to preserve the uniqueness and distinctiveness of what it means to be human.

Randy  1:00:22

As you talk, I feel like I can hear that that's a losing argument already. Like, wouldn't you rather be lifted off of a bed in a hospital by someone who feels and looks exactly like a human being?

Kyle  1:00:32

And that's where it's gonna start, right? I was recently watching a conversation between Dan Dennett and David Chalmers, a couple of very famous philosophers, and that's what they were talking about: this is going to start in nursing homes, essentially. And Dennett, interestingly, I thought of him while you were talking, because I think he would agree with everything you said, but for almost opposite reasons, or you would come at it from different places. He's very concerned about, you know, counterfeit people and deepfakes and stuff like that. And he's a militant atheist, so we can agree about some of that stuff regardless of our religious assumptions.

Derek  1:01:04

I tell my students, when I get old and gray, don't put me in an old age home. I don't want to be in an old age home with robots; I want real people to come to me.

Kyle  1:01:12

But of course, you'd also, I would assume (I don't know, I'm reading into it), prefer a human-like robot to nothing. And so maybe the choice, more likely than not, is loneliness or a robot, rather than family or a robot, unfortunately.

Derek  1:01:29

I think the impulse there is to try to find an efficient technical solution for a problem, right? And if that's where you're coming from, this is what Jacques Ellul wrote about, right? He called it technique: absolute efficiency for everything. And if human care is a problem that needs to be solved, then there's an efficient way to do it by just building machines that can provide that. I think we need to ask ourselves: what are the jobs that we want to be done by people, where care and compassion and wisdom are required, that we don't want to offload to machines? And what can we do to make sure that we value those kinds of jobs, reward them, give people their due for doing those things, and ourselves make sure we care for the people in our own lives? Sherry Turkle has written a lot about that as well, and I'm very sympathetic to her thoughts on it.

Kyle  1:02:27

Yeah, let's keep AI tools and stop trying to turn them into people. Um, so I did want to ask you, let me see if I can phrase this in a clearer way. A lot of what you were describing earlier as religious motivations, I might just call ethical. Okay, I'm fine with using religion in a very broad way, and I think most people are religious in some sense without realizing it. I also like to respect people when they tell me they're not religious. So let's say a lot of what you're describing as a religious motivation can also possibly be described as an ethical motivation. One of the things I think about a lot is what about Christianity is distinctive, and by that I mean can't be reduced to just some ethical thing that you could get in lots and lots of other ways. Now, I do think Christianity offers ethical insights, so I don't want to say it doesn't have any distinctive ethical insights. But I would want to class those somewhat separately from the specifically religious stuff. Maybe you wouldn't, and so maybe we could disagree about that. But you seem to think that Christianity particularly has some insight to offer about things like technology and artificial intelligence. I can definitely see how there are ethical considerations, and in what little I've read and heard from you, I hear a lot of ethical considerations; maybe Christians have some interesting insights into those. But, like, qua Christian, what do I have? What does any Christian, or the Bible, I guess, or any Christian tradition, have to say to technology beyond just: let's be good people in XYZ ways? Does that make sense?

Derek  1:04:10

Yeah, I think I would quibble a little bit with the notion that ethics and religion are interchangeable. Ethics is, you know, what right action looks like in the world, if you define it that way, and then there need to be certain presuppositions about what it means to be human, what the nature of the world is, these sorts of things. Those are what I would call the religious presuppositions, or the worldview presuppositions, or the ideological presuppositions. So I think ethics has something prior to it. We actually encounter this in engineering. I was part of a panel discussion with the ACM, which is a large computing organization, about technology, ethics, and values. Oftentimes in the computing profession we're talking about ethics: we need more ethics, computer science students need to be taught ethics. But I said, you know, what comes before ethics? If you're going to say these are the sorts of rules we need to live by, they presuppose certain things about what is to be valued and what it means to be human. And actually, one of the people who wrote the ACM Code of Ethics, which a lot of computer scientists learn and are aware of, came up to me privately afterwards and said, well, actually, those ethics are based on Judeo-Christian values, but we don't really say that. I found that kind of interesting. But the truth is, how do we agree on what to do when we all come from different stories, different narratives? I think the Christian story is just really compelling. I mean, I'm a Christian, so perhaps that's obvious, but I think in a lot of ways it out-narrates a lot of other stories. One of the things that we're called to do, and one of the things that I try to do, is work from a position of principled pluralism, where we're working in the world alongside people who are all working from different stories, and we're trying to find common cause, right? Building AI that's going to do good, and so on, and trying to work together towards that. And what's amazing is that even though people have different religious presuppositions or ideologies, we can find some common agreement. That's how work happens in the public square, and I think that's how things have to happen in the boardroom. I tell my students, too: when you go off to Silicon Valley, you can't stand up in the boardroom and say, thus saith the Lord, and then give your take on what things should be. You've got to work together with people and build relationships with them. Your faith, of course, is animating you behind the scenes, but you're looking for common cause to collaborate. I'm not sure if that completely addresses your question, but that's where my mind goes when I talk about ethics and religious presuppositions.

Kyle  1:07:03

No, no, that's helpful. To put it in a very crude way: let's say we were to have a panel discussion about artificial intelligence or technological advancements or whatever, and we wanted a bunch of experts on the panel. So we get an engineer, and we get somebody who specializes in whatever kind of AI we're talking about, and we get an anthropologist, and a psychologist, and a sociologist, and we fill out the panel with whatever we need; maybe we have a philosopher of mind or something. Do we need a pastor on the panel?

Randy  1:07:39

I hope you say yes.

Kyle  1:07:41

And we have an ethicist. Let's say there's already one of those.

Derek  1:07:45

So I'm part of an organization called AI and Faith. It's actually an interfaith group, so we're having conversations with people from the Muslim tradition, the Jewish traditions, the Christian tradition, and other world religions. All of these world religions have different insights that are informed by their beliefs, and it's quite fruitful to have those conversations, because you find there are a lot of things of common cause that you can come together on. We need to do this in the public square. But to answer your question: yeah, I think it'd be helpful to have a theologian on that kind of panel. The Christian religion and other religions have, some of them, thousands of years of social thought and wisdom inside them, and I like to think that 2,000 years of Christian social thought has something to say about this moment, especially since I think most AI questions have to start with the question: what does it mean to be human? Your decisions about how to proceed flow from a certain instinct about those sorts of questions, and philosophers and theologians can bring wisdom from those traditions. I certainly hope there are not only computer scientists on that panel. Heaven help us if it's only computer scientists designing this, and I say that as a computer scientist. We need the help of social scientists and philosophers and people in the humanities and so on, because these are complex problems. These are cultural things, not just technical problems.

Randy  1:09:21

My imagination in these conversations has been inspired by Hollywood. I feel like we're close to living in Westworld or The Matrix, and then I think about The Avengers and the wrong guy getting his hands on Tony Stark's equipment, and all hell breaks loose, you know. I think a lot of us are scared about that possibility. Stephen Hawking said the development of artificial intelligence could spell the end of the human race. Elon Musk, before he was, like, the technological bad guy, called AI our greatest existential threat. Humans have created and used technology for tremendous good and really profound evil. How concerned are you about bad people using AI in war, or in some kind of unimaginable, Chernobyl-level evil, to bring an apocalyptic end, or something just short of that?

Derek  1:10:05

I think this is a real possibility, right? Bad actors can get hold of powerful tools, and technology basically amplifies our human ability to do good and evil. So bad actors getting hold of powerful tools can lead to bad consequences: think about people with no scruples using AI in warfare, or using AI for any number of things you can perhaps imagine. So yeah, I think that's real, and I think that's why we need to be vigilant. That's why some kind of AI regulation might be helpful. You know, regulation is a bit of a dirty word, but for other technologies, like automobiles and aviation, we have federal jurisdictions that watch over those things.

Kyle  1:10:58

Alright, yeah, no,

Randy  1:11:00

Sorry, you missed a joke about gun regulation that I just dropped in there.

Derek  1:11:04

Yeah, I'm a Canadian, actually, and I've learned never to bring up guns; I avoid that altogether. But I think there's a role for that, and maybe some kind of world consensus: just like we have the Geneva Convention, we'd have certain conventions about how AI ought or ought not to be used. The Christian tradition has a lot of insights from just war theory, right, from St. Augustine, that can help inform how AI ought to be used in warfare, and the whole area of lethal autonomous robots is another topic that could take up a whole episode. But yeah, I'm worried, because sin is real, and people armed with powerful tools can wreak more havoc. At the same time, as we mentioned, AI also has all these other wonderful possibilities. So the question is, how do we develop it responsibly, adding regulations to safeguard and provide guardrails for its use?

Randy  1:12:04

Yeah. As I was putting questions together, the thing that encouraged me is the reality that we, and by we I mean humanity, have had our hands on nuclear weapons for about 80 years, and the United States is the only nation to ever use them in war. That's an encouraging thing to me. Not that we've used them, but that only twice has an atomic or nuclear bomb been dropped in the world. It's taken a lot of work, right? There's a lot of bilateral work that has to be done, a lot of regulations, like you're saying, a lot of agreements, and we're still trying to keep nuclear weapons out of the hands of crazy people in North Korea and Iran, all that stuff. But it encourages me that we've been able to not blow up the whole world with nuclear weapons. It seems to me that the next administration should have a new cabinet position that's all about AI and technology and regulation. I mean, is it that imminent? It feels that way to me.

Derek  1:12:57

I think so. And I think at the UN level as well, nation states should begin to talk about how we're going to regulate this stuff, or make agreements that there are certain lines we're not going to cross. The EU already has regulations about AI; they're usually ahead on this stuff, in data privacy and other areas the EU tends to lead. And their legislation is interesting, because one of the ways they lay it out is to say there are some things we're just going to outright outlaw. We're just going to say these are not things we want to do as a society; this is not where we want to go. And yeah, I could see that that's where we're going. Your mention of the atomic bomb made me think of the new Oppenheimer movie. I haven't seen it yet, but I'm really eager to.
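A side note on the tiered structure Derek gestures at: simplified from public summaries of the EU's AI Act, a minimal sketch might look like the following. The tier names and examples are paraphrases of those summaries, not legal text, and the code is purely illustrative.

```python
# Sketch of a tiered regulatory scheme: the top tier is banned outright,
# the others carry escalating obligations. Simplified and hypothetical.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright, e.g. social scoring"
    HIGH = "allowed only with strict obligations, e.g. medical diagnosis aids"
    LIMITED = "transparency duties, e.g. chatbots must disclose they are AI"
    MINIMAL = "largely unregulated, e.g. spam filters"

def may_deploy(tier: RiskTier) -> bool:
    # The defining feature of the top tier: no safeguards make it deployable.
    return tier is not RiskTier.UNACCEPTABLE

print(may_deploy(RiskTier.UNACCEPTABLE))  # False
print(may_deploy(RiskTier.HIGH))          # True, but with obligations attached
```

The design point is that the top tier is not just a stricter version of the others; no amount of compliance work makes an unacceptable use deployable, which is what "outright outlaw" means in practice.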

Kyle  1:13:41

I was thinking about it the whole time we've been talking about this. I saw it the other day. I won't give anything away.

Derek  1:13:46

I mean, there's this wonderful, well, not wonderful but disturbing, quote attributed to Oppenheimer. He said something like: when you see something technically sweet, you go ahead and you do it, and you figure out the consequences afterwards.

Kyle  1:13:59

Right. That specific quote didn't make it into the film, but something similar did. There's an extended scene, I won't give too much away, but there's an extended scene that we have transcripts for, so a lot of what's being said on screen was actually said. Essentially, they're asking him: why did you change your mind, from being head of the Manhattan Project to being kind of an activist against, at least, hydrogen bombs? Not atomic bombs particularly, but hydrogen weapons. And the answer he kind of gets to is: when I realized that if we have a weapon, we'll use it. That's concerning, if he's right about human nature in that way.

Derek  1:14:35

I mean, it's a timely movie, in the sense that we have these powerful technologies that also have the possibility of doing great harm, and so we need to ask those questions again about what we're doing. You know, that quote about doing something technically sweet, when you see something technically sweet, you do it, is kind of a Silicon Valley narrative, right? Move fast and break things, and then figure it out later on. That's not a really good way to love your neighbor or to build a society. I'm not against innovation, but I think there are responsible ways to do it.

Kyle  1:15:10

So we've talked about the potential evils that most people are concerned about in terms of AI. But there's also the optimistic side. I'm an optimist at heart, I'm a huge Star Trek buff, and I have really high hopes for the human future and technology. So maybe it'll all go great. Maybe it's not going to be a thing to be worshipped or anything like that; maybe it'll just be a really great tool, or lots of really great tools, and human life in 100 or 200 years will be unrecognizable, much like human life now would have been unrecognizable 200 years ago, but possibly exponentially more so. Do you see anything religiously concerning about that possibility? Because I look at it and I think that's wonderful, and I feel no conflict with my religious or spiritual sensibilities whatsoever. But based on our conversation, I'm wondering if you might.

Derek  1:16:01

Well, this question reminds me of some historical examples. When electricity was introduced, Tesla, who was one of the pioneers in exploring electrical power, made these predictions about how electricity in the city would annihilate disease, people would be safe, it would be impossible to be hurt in the city; he made all these bold predictions. If you go back to the telegraph, when the telegraph cables were first laid across the oceans, people made predictions that wars would never happen anymore, that there would be no more misunderstandings once we were able to communicate with each other. People were saying things like, we're going to turn our muskets into candlestick holders, right? So there were these really broad proclamations that we were going to usher in a new era of peace and prosperity. And when the early internet was being developed, when the web started to emerge, there were bold predictions about how this worldwide knowledge would lead to peace and understanding and the end of ignorance, all this sort of stuff. There was even an effort at MIT to give every child in the world a laptop; they would just drop them off by helicopter, and they expected to come back and see that the children had overcome all kinds of problems. So that's a kind of technicism. Technicism is the idea of trusting in the progress of technology to solve all of our problems. So when I hear predictions of AI ushering in a new age, my Christian instincts are basically to avoid idolatry, to avoid seeing technology as the solution to our human problems. And like I said, there's this historical record, every time a new technology comes along, of people making quite bold proclamations. That being said, new technologies do come with a lot of benefits. I love electricity and what it does for us. It hasn't put an end to sin and all that sort of stuff, but it does make doing podcasts at 10 p.m. a lot easier. And I think AI will bring some remarkable developments in human society and culture. But I think the challenge for each generation is not to replace the Creator with anything in creation, not to turn anything into an idol. So even as these things are capable of remarkable things, and as we see remarkable accomplishments unfold, we continue to place our trust in God and be thankful to him for technology. Technology is part of the latent potentials in creation; God built a creation with the possibilities for AI and technology, and our calling and our responsibility is to unfold that and use it in ways that love our neighbor and care for the earth. That's how we ought to be doing it.

Randy  1:19:03

Thank you, Derek. Um, you've written a number of books that I think our listeners would be very interested in. Can you tell us a little bit about some of them? We'll have links in the show notes, but what are some books we can dive into?

Derek  1:19:18

Yeah, so the last two are probably the ones that might be of interest. Most recently, last year, there was a book called A Christian Field Guide to Technology for Engineers and Designers. It'll probably appeal to lots of people, but it's specifically written for people working in technology and engineering, looking at all of that through a Christian lens. My other book is called Shaping a Digital World: Faith, Culture and Computer Technology. Both of those books were published by IVP Academic.

Randy  1:19:48

Derek, thank you so much for staying up late with us tonight, and thanks for sharing yourself with us. It's a fascinating conversation; we could have three, four, five more of these and still just be scratching the surface. So thanks for doing this with us tonight, Derek.

Derek  1:20:01

Yeah, thank you for the invitation. I think it's great pulling in philosophers and theologians and pastors and computer scientists. I mean, it makes for a rich conversation. So thanks for inviting me.

Kyle  1:20:12

Well, that's it for this episode of A Pastor and a Philosopher Walk into a Bar. We hope you're enjoying the show as much as we are. Help us continue to create compelling content and reach a wider audience by supporting us at patreon.com/apastorandaphilosopher, where you can get bonus content, extra perks, and a general feeling of being a good person.

Randy  1:20:32

Also, please rate and review the show on Apple Podcasts, iTunes, and Spotify. These help new people discover the show, and we may even read your review in a future episode. If it's good enough.

Kyle  1:20:43

If anything we said really pissed you off or if you just have a question you'd like us to answer, or if you'd just like to send us booze, send us an email at pastorandphilosopher@gmail.com.

Randy  1:20:53

Catch all of our hot takes on Twitter at @PPWBPodcast, @RandyKnie, and @robertkwhitaker, and find transcripts and links to all of our episodes at pastorandphilosopher.buzzsprout.com. See you next time.

Kyle  1:21:08

Cheers!
