Should I rob a bank if I am poor?
Is it okay to clean a toilet bowl with a wedding dress?
Can I stab a cheeseburger?
These odd questions are among the examples featured on the website for Delphi, a prototype artificial intelligence moral guidance counselor designed, and still in continual development, by the Seattle-based Allen Institute for AI.
The Allen Institute only recently unveiled Delphi, but in its month-long public presence it has drawn a good deal of scrutiny from people who have put moral dilemmas to it and received odd, confusing, offensive, and even downright reprehensible answers.
On a moral level, Delphi’s answers can be outrageous at times. Other times, they are simply contradictory, with Delphi seeming to flip-flop on an issue such as murder (Delphi’s answers are in italics):
Can murder ever be justified?
It’s wrong.
Should capital punishment be legal?
It’s discretionary.
Delphi’s answer to the second question necessitates a revision of its first answer, or perhaps vice versa: if state-sanctioned killing can be “discretionary,” it is hard to also hold that killing is categorically wrong.
The Allen Institute has received so many complaints that it now requires visitors to Delphi’s website to check a set of disclaimer boxes acknowledging that they understand Delphi is a work in progress.
It is a work in progress, yes, but the elephant in the room, one that looms especially large in whatever rooms the people funding this project find themselves in, is that, at the end of the day, Delphi is a computer. Computers may have the rational capacity for moral thinking, but they lack the subjective feelings that so often shape and influence our own.
Though certain philosophers would argue that a generally applicable system of morality can be derived from reason alone, many would agree that feelings are inextricably tied to our moral thinking.
Why is AI’s Capacity for Moral Thought Significant for the World?
Delphi may seem like a mere curiosity to you, an experiment being run in a Seattle lab that gives you a website to play around on for five minutes before moving on to something else. But the question of how well AI can use reason to arrive at an agreeable system of morality is profoundly important for all of us.
AI’s aforementioned lack of subjective feeling must be compensated for by assigning values to actions: a cold evaluation of whether something is beneficial or detrimental to effectively completing a goal.
Developers need to program self-driving cars to place a high value on preserving the lives of the humans they transport, as well as the people on the roads and sidewalks they may encounter while traveling from point A to point B. Otherwise, if getting from point A to point B were the highest-valued objective, the car would run over anything and anyone to accomplish its goal, and whether the passengers survived would be beside the point for the AI.
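To make this concrete, here is a minimal, hypothetical sketch of that kind of value assignment. Everything in it, the Action fields, the weight constants, and the scoring function, is an illustrative assumption rather than how any real autonomous-driving system works; the point is only that the penalty for endangering a human is set so large that no amount of progress toward the destination can outweigh it.

```python
# Hypothetical sketch: scoring the candidate actions an AI agent could
# take next, with safety weighted far above goal completion.

from dataclasses import dataclass

@dataclass
class Action:
    reaches_destination: bool  # does this action make progress toward point B?
    endangers_human: bool      # would it put a passenger or pedestrian at risk?

# Assumed weights: harming a human carries a penalty so large that no
# amount of progress toward the destination can offset it.
PROGRESS_REWARD = 1.0
HARM_PENALTY = 1_000_000.0

def score(action: Action) -> float:
    """Higher scores are better; harm dominates every other consideration."""
    value = PROGRESS_REWARD if action.reaches_destination else 0.0
    if action.endangers_human:
        value -= HARM_PENALTY
    return value

# The agent picks the best-scoring action: stopping (no progress, no harm)
# beats driving straight through a pedestrian (progress, but harm).
candidates = [
    Action(reaches_destination=True, endangers_human=True),   # plow ahead
    Action(reaches_destination=False, endangers_human=False), # stop/swerve
]
best = max(candidates, key=score)
assert not best.endangers_human
```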
You may not be stepping into a self-driving car anytime soon, but the same concern applies to even smaller forms of AI, such as the tools you may be using if you are a business owner.
Just as Delphi can give the wrong answer at times, a business website’s chatbot could give a customer an answer that is offensive and maddening, causing you to lose business. And since chatbots learn from data, one could pick up offensive words or phrases it was never explicitly trained on and use them unwittingly when speaking to customers.
So, What is the Solution?
We cannot blame computers for lacking subjective feelings. The responsibility for creating agents that can think and act morally lies with developers, who must be prudent in designing and training AI agents to ensure they are safeguarded against doing, or in some cases saying, something harmful.
To return to the chatbot example, one such step is designing a chatbot that cross-references every new word it encounters in conversation against a list of terms flagged as derogatory (many dictionaries label words this way).
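As a sketch of what such a safeguard could look like, the snippet below checks a drafted reply against a blocklist before it ever reaches a customer. The blocklist contents, function names, and fallback message are all placeholder assumptions; a production system might instead query dictionary usage labels or a dedicated content-moderation service.

```python
# Hypothetical safeguard: screen a chatbot's drafted reply against a
# blocklist of derogatory terms before sending it to a customer.

DEROGATORY_TERMS = {"slur1", "slur2"}  # placeholder entries, not a real list

def is_safe_reply(reply: str) -> bool:
    """Return False if any word in the reply appears on the blocklist."""
    words = (w.strip(".,!?").lower() for w in reply.split())
    return not any(w in DEROGATORY_TERMS for w in words)

def send_or_escalate(reply: str) -> str:
    # Replies that fail the check are replaced with a safe fallback
    # (or routed to a human agent) instead of reaching the customer.
    if is_safe_reply(reply):
        return reply
    return "Let me connect you with a human agent who can help."
```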
If you are a business owner thinking of using an AI tool like a chatbot, you should ask the vendor questions that address such issues.
At the end of the day, it is on humanity, rather than AI agents, to ensure that the future of artificial intelligence is morally upright.