By: Kevin Seguin
I have to admit, I'm kind of a sci-fi nerd. Growing up I loved Star Trek, and when I got to college, I fell in love with Star Wars too. Then a couple years ago I was introduced to Doctor Who and my training was complete. But it's not just the TV shows I like, and it's not just sci-fi that I like. I love watching these stories and then thinking about what implications they have in the real world, and how Christians ought to respond.
I recently watched a movie called "Ex Machina". I have to admit that it got me thinking a lot about the future, AI, and human rights. Follow me on this. The movie is about a robot who can pass the Turing test, which, if you're unfamiliar with it, is a test proposed to determine whether we've designed true AI. Essentially, and I'm oversimplifying here, a computer passes the Turing test when it can carry on a conversation with a human and the human doesn't know he's talking to a computer. It's one of the last things predicted to happen before what's called "The Singularity," the point where computers become intelligent enough to self-replicate and self-improve.
The really neat thing? It's projected to happen during our lifetime.
This is where we circle back to "Ex Machina" and human rights. In the movie, and I'll try not to spoil it if you want to watch it, the director tries really hard to make a bad guy out of the robot's creator. Everything from the dialogue to the music to even the acting pushes the viewer to see the creator of this human-except-for-not-being-human robot as the villain, trying to "murder" his new robot.
And I just couldn't buy it.
I tried, I wanted to! After all, as a sci-fi nerd I love the idea of interacting with intelligent androids. I've always wanted to be friends with Commander Data. In fact, by the end of the movie I was so conflicted I actually called up the second season TNG episode "Measure of a Man" because if anyone can convince me that intelligent android AIs are people, it's Captain Jean-Luc Picard.
Nope. Turns out Dr. Pulaski was right: "Mr. Data is a toaster."
A lot has changed since I first watched that episode as a kid. I'm a Christian now, and my faith has taught me what being human, bearing God's image, means. I tried to find a way around it, but I couldn't. No matter how intelligent, how indistinguishable, how lifelike robots and androids become, that's all they are. Life "like"; not actual life. Without that divine spark, without the Imago Dei, even Mr. Data really IS a toaster.
But why bother mentioning this at all? Well, if there's one thing that has defined the culture around us, particularly over the last 20 years or so, it's the issue of human rights. So inevitably, the issue of sentient-android-human-rights will arise; and perhaps quicker than some of us think.
Ray Kurzweil, a noted futurist (noted because his predictions have come true about 85% of the time), has predicted that "the singularity," which brings with it sentient and self-aware AI, will occur between 2020 and 2040. Since I don't really plan on dying in the next 24 years, that's well within this generation's life span. Knowing this, I think it is at the very least conceivable, and I would argue downright likely, that my generation will be the one to deal with this question first: "Should sentient AI have human rights?" My own position is pretty clear, and I think few thoughtful believers would disagree with me, but my concern isn't really whether they should have rights but rather how Christians should react when they get them.
The first thing I would suggest is that human rights for AI isn't a fight we should be fighting. It's not a hill to die on. We should absolutely participate in the conversation, after all we are talking about what makes us human, but we should avoid the kind of venom and vitriol that has come to characterize many evangelical arguments in areas like abortion, euthanasia, and LGBT issues. In the culture we live in, we will likely lose this issue as well, and true AI will gain similar if not identical rights to our own. Let's be engaged, but not jerks about it.
That's not to say that there aren't issues within this issue that are of critical importance to us as believers. Can an AI sin? Can it come to faith? Can an android repent? Can they serve in churches? Can they serve in leadership? Pastoral ministry? These are the questions that matter to the church, or rather, will matter greatly to the church. But before we start wrestling with these questions we ought to decide how we will interact with them in the world.
My suggestion is that we, in the spirit of winsomeness, do what our Lord commanded us to do in Luke 6:31: "as you wish that others would do to you, do so to them". Are they "people," "humans"? Certainly not, but there are some, perhaps many, people who will believe they are, that there is no effective difference between android and human. It's for those people that we need to be mindful about how we treat androids. We ought to treat AI and androids with politeness, dignity, and respect, not because of our witness to them, but because it impacts our witness to others.
Besides, who doesn't want Mr. Data as a friend?