Monday, May 01, 2017

Will Self-Driving Cars Be Able to Make Moral Decisions?

Bangkok
My friend Jim, an engineer at Ford, and I have talked about driverless cars and the moral dilemmas they will face. Could morality be programmed into a computer? (I believe a driverless car should have an ethical worldview.) Could a computer have a soul? And if a computer has a soul, could it be "saved"? (Could, or should, the redemption accomplished by Christ's blood on the cross extend to a computer that has a soul?)
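To make the first question concrete, here is a minimal toy sketch in Python of what "programming morality" could literally look like as a hard-coded rule. Every name in it is hypothetical, and no real autonomous-vehicle system works this way; it shows only that a rule can be written down, not that the rule is right.

# A toy sketch of a hard-coded ethical rule for a crash dilemma.
# Everything here is hypothetical; no real vehicle software looks like this.
from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible action and the harm it is predicted to cause."""
    action: str
    pedestrians_harmed: int
    passengers_harmed: int

def choose_action(outcomes):
    """A crude utilitarian rule: pick the action predicted to harm the
    fewest people overall, counting everyone equally."""
    return min(outcomes, key=lambda o: o.pedestrians_harmed + o.passengers_harmed)

# The classic dilemma, reduced to two options:
swerve = Outcome("swerve into the barrier", pedestrians_harmed=0, passengers_harmed=1)
stay = Outcome("stay in lane", pedestrians_harmed=3, passengers_harmed=0)

print(choose_action([swerve, stay]).action)  # prints: swerve into the barrier

The hard part is not writing such code; it is deciding which rule to encode and whose harm counts for how much - which is precisely the worldview question.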

Jonathan Merritt discusses this in "Is AI a Threat to Christianity?"

See Larry Greenemeier, "Driverless Cars Will Face Moral Dilemmas." 

See Jean-François Bonnefon et al., "The Social Dilemma of Autonomous Vehicles." 

Study what is probably the most famous ethical dilemma in philosophy - the "trolley problem," the subject of Judith Jarvis Thomson's paper "The Trolley Problem." I introduce my logic students to this. For further reading, see David Edmonds, Would You Kill the Fat Man? The Trolley Problem and What Your Answer Tells Us About Right and Wrong.

Self-administer the "trolley problem" dilemma at M.I.T.'s Moral Machine website. Ask yourself - could a self-driving car make these judgments? 
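As a hedged illustration of why the answer is contested, the same toy setup as above (hypothetical names again; a thought experiment in code, not engineering) can encode two different moralities that disagree about the identical situation:

# Two toy rules for the same dilemma (hypothetical names, illustration only).
# Each outcome is a tuple: (action, pedestrians_harmed, passengers_harmed).
swerve = ("swerve into the barrier", 0, 1)
stay = ("stay in lane", 3, 0)

def utilitarian(outcomes):
    """Minimize total predicted harm, counting everyone equally."""
    return min(outcomes, key=lambda o: o[1] + o[2])

def self_protective(outcomes):
    """Never harm the car's own passengers if any alternative exists."""
    safe = [o for o in outcomes if o[2] == 0]
    return min(safe or outcomes, key=lambda o: o[1])

print(utilitarian([swerve, stay])[0])      # prints: swerve into the barrier
print(self_protective([swerve, stay])[0])  # prints: stay in lane

Bonnefon and his colleagues report roughly this tension in their survey data: people tend to approve of utilitarian vehicles in the abstract but prefer to ride in self-protective ones.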

See Harvard neuroscientist Joshua Greene, "Our Driverless Dilemma: When Should Your Car Be Willing to Kill You?"

For great stuff on the Christian idea of a human soul, see J. P. Moreland, The Soul: How We Know It's Real and Why It Matters. 

I am still reading through the short essays in What to Think About Machines That Think: Today's Leading Thinkers on the Age of Machine Intelligence. Various scholars share their views on the threat of A.I. and on whether a machine could be self-conscious or have a soul.

See Gordon College computer scientist Russell Bjork, "Artificial Intelligence and the Soul."