
Can Machines Become Moral?

The question is heard more and more often, both from those who think that machines cannot become moral, and that to believe otherwise is a dangerous illusion, and from those who think that machines must become moral, given their ever-deeper integration into human society. In fact, the question is a hard one to answer, because, as typically posed, it is beset by many confusions and ambiguities. Only by sorting out some of the different ways in which the question is asked, as well as the motivations behind it, can we hope to find an answer, or at least decide what an adequate answer might look like.

For some, the question is whether artificial agents, especially humanoid robots, like Commander Data in Star Trek: The Next Generation, will someday become sophisticated enough and enough like humans in morally relevant ways so as to be accorded equal moral standing with humans. This would include holding the robot morally responsible for its actions and according it the full array of rights that we confer upon humans. To the question in this form, the right answer is, “We don’t know.” Only time will tell whether, with advances in hardware and software, we will reach a point where we convince ourselves that it is wise and good — or necessary — to choose to include such robots in what we might think of as an extended human family.

If movies and television were a reliable guide to evolving sentiment, it would seem that many of us may even be eager to embrace our mechanical cousins as part of the clan, as witness recent films like Ex Machina and Chappie or the TV series Humans. Why we are drawn to such a future might be a more interesting question, at present, than whether it will actually happen. Why are we so enchanted by an ideal of mechanical, physical, and moral perfection unattainable by flesh-and-blood beings? What cultural anxiety does that bespeak? Notice also that — and this is important — my emphasis here is not on what such robots, themselves, will be like, but on how we as a human social community will choose to treat our artificial progeny. That is the real question, because this is a matter of human choice.

Of course, other science fiction stories — from Isaac Asimov’s I, Robot and Philip K. Dick’s Do Androids Dream of Electric Sheep? (famously adapted for the screen by Ridley Scott as Blade Runner) to the original Battlestar Galactica and its reboot — testify to persistent anxieties about the legal, moral, and even romantic complications such a future might have in store for us.

Some pose the question “Can machines become moral?” so that they may themselves answer immediately, “No,” and this usually on a priori grounds. For some, the reason is that robots cannot be intelligent or conscious. (This issue has been taken up here on Big Questions Online by David Gelernter in a July 2012 essay.) For others, including some proponents of a ban on all autonomous weapons, the reason adduced is that robots cannot understand and express emotions. For now, forget about the question whether morality requires emotion — or whether, as Plato and Kant argued, emotion always clouds our moral reasoning — and focus instead on the reasons for rejecting the proposition that machines could someday become moral. Are they true — perhaps even necessarily true?

Start with consciousness. Skeptics about machine morality, or machine intelligence more generally, often point to John Searle’s “Chinese room” argument as proof that a machine cannot possess consciousness or anything like human understanding. (See Searle’s 1980 paper, “Minds, Brains, and Programs.”) The argument goes like this: Imagine yourself, ignorant of Chinese, locked in a room with a vast set of rule books, written in your native language, that enable you to take questions posed to you in Chinese and then, following those rules, to “answer” the questions in Chinese in a way that leaves native speakers of Chinese thinking that you understand their language. In fact, you don’t have a clue about Chinese and are merely following the rules. For Searle, a robot or a computer outfitted with advanced artificial intelligence would be just like the person in the room, going through the motions always in perfectly appropriate ways, but without really understanding how and why.

Criticisms of the Chinese room argument are many and forceful. The most compelling, I think, is this: since we humans don’t really understand that which we call “consciousness” even in ourselves, how do we know it isn’t just the very competence that such a machine possesses? The Chinese room argument seems convincing because of its emphasis on the behavior of what’s in the room being governed entirely by rules. Surely, the reasoning goes, my graphing calculator doesn’t understand the mathematics it executes so perfectly; therefore even a vastly more sophisticated computing machine won’t understand what it does either. But this assumes that all artificial intelligence will be built around conventional, rule-based, Turing-type algorithms, whereas the recent, exciting breakthroughs in this area, such as Google DeepMind’s having built a machine that can beat top professional players at the ancient game of Go, make use of the importantly different computational model of neural nets.

In concept, artificial neural nets are remarkably simple. Modeled explicitly on the neuronal structure of the human brain, they consist of neuron-like nodes and dendrite-like connections among the nodes, with a weight on each connection playing a role akin to synaptic strength. But, in practice, such neural nets are remarkably powerful learning machines that can master tasks like pattern recognition that defy easy solution via conventional, rule-based computational techniques. I might well concede that my graphing calculator doesn’t understand even simple arithmetic. I don’t know what I’m supposed to think about whether DeepMind’s AlphaGo, or some vastly more complex machine, understands, in some sense, what it’s doing (and perhaps even takes great delight and pride in its achievement).
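
For readers who want to see the bare bones, here is a minimal Python sketch of the structure just described: nodes, weighted connections, and an activation at each node. It is a toy with invented layer sizes and random weights, nothing like AlphaGo’s actual architecture, and the learning step (adjusting the weights from experience) is omitted.

```python
# A toy feedforward net: neuron-like nodes, weighted connections, and a sigmoid
# "firing" level at each node. Layer sizes and weights are made up for illustration.
import math
import random

def feedforward(inputs, weight_layers):
    """Propagate an input vector through successive layers of weighted connections."""
    activations = inputs
    for layer in weight_layers:              # one weight matrix per layer
        next_activations = []
        for node_weights in layer:           # one row of weights per receiving node
            total = sum(w * a for w, a in zip(node_weights, activations))
            next_activations.append(1.0 / (1.0 + math.exp(-total)))  # sigmoid activation
        activations = next_activations
    return activations

# Two inputs feed three hidden nodes, which feed one output node, via random weights.
random.seed(0)
hidden_weights = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)]
output_weights = [[random.uniform(-1, 1) for _ in range(3)]]
print(feedforward([0.5, -0.2], [hidden_weights, output_weights]))
```

The point is the contrast, not the arithmetic: nothing in this code is a rule about Go, arithmetic, or anything else; whatever competence such a system acquires lives in the pattern of weights.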

There is an important lesson here, which applies with equal force to the claim that robots cannot comprehend emotion. It is that what can or cannot be done in the domain of artificial intelligence is always an empirical question, the answer to which will have to await the results of further research and development. Confident a priori assertions about what science and engineering cannot achieve have a history of turning out to be wrong, as with Auguste Comte’s bold claim in the 1830s that science could never reveal the internal chemical constitution of the sun and other heavenly bodies, a claim refuted within a few decades, when Kirchhoff and Bunsen, building on the work of Fraunhofer and Foucault, pioneered the use of spectroscopic analysis for precisely that task.

Given the accelerating pace at which both hardware and programming techniques are developing, it would be unwise to bet on any claim that “computers will never be able to do X.” Note that I’m not here forecasting the advent of the “Singularity” or the near-term achievement of full, general artificial intelligence. No, to repeat: technology forecasting, especially in this arena, is a risky business. But don’t be surprised if, in a few years, claims that computers cannot possess emotional capacities begin to look as silly as the once-commonplace claims of the 1960s and 1970s that computers would never master natural language. (Imagine that last sentence voiced by Siri.)

Some thinkers ask the question “Can machines become moral?” with a sense of urgency, because they think it critically necessary that we begin to outfit smart robots with at least rudimentary moral capacities as such machines play an ever-expanding and ever more consequential role in human affairs.

Two arenas in which this discussion is already well advanced are ethics for self-driving cars (SDCs) and ethics for autonomous weapons. As Cal Poly professor of philosophy Patrick Lin reminds us, we will soon be delegating morally fraught decisions to SDCs, such as whether, in the face of an unavoidable crash, to choose the path that risks harm to the vehicle’s occupants or to pedestrians or passengers in other vehicles. And Ron Arkin, a computer engineer at Georgia Tech, has argued that by building autonomous weapons with ethics modules, we can produce robot warriors that are “more moral” than the average human combatant. For my part, I spend a lot of time thinking about the rapid expansion of health care robotics, realizing that the patient-assist robot that might soon be helping my ninety-year-old mother into the bathtub or onto the toilet had better be programmed to understand how to balance the moral obligation to protect and serve against a human being’s rights to privacy and bodily integrity.

In the book Moral Machines, Wendell Wallach and Colin Allen argue that there is a still more widespread challenge here as the frequency of human-robot interactions increases and the speed of decision-making by artificial systems grows beyond the point where real-time, meaningful, human intervention is even possible. Under such conditions, they argue, ethical monitoring, if not even more robust moral competence, must be engineered into the machines themselves. The question for them is not whether but how.

Wallach and Allen distinguish two different approaches to programming machine morality: the “top-down” and the “bottom-up” approaches. Top-down approaches combine conventional, decision-tree programming methods with deontological, rule-based ethical frameworks (associated with Kant) and with consequentialist or utilitarian, greatest-good-for-the-greatest-number frameworks (often associated with Jeremy Bentham and John Stuart Mill). Simply put, one writes an ethical rule set into the machine code and adds a subroutine for carrying out cost-benefit calculations. This is precisely the approach endorsed by Arkin in his book Governing Lethal Behavior in Autonomous Robots. Here the rule set consists of the International Law of Armed Conflict and International Humanitarian Law (basically, the Geneva Conventions), together with mission- or theater-specific Rules of Engagement.
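
To make the recipe concrete, here is a minimal Python sketch of what such a top-down architecture might look like: a deontological rule check filters out impermissible options, and a consequentialist subroutine ranks what remains. The option names, rule flags, and utility numbers are invented for illustration; this is not Arkin’s system or any deployed ethics module.

```python
# A toy "top-down" moral governor: a hard rule check excludes impermissible options,
# then a simple cost-benefit subroutine ranks whatever remains. All names and numbers
# below are invented for illustration.
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    violates_rules: bool      # True if the option breaks a prohibition in the rule set
    expected_benefit: float   # toy utilitarian estimates, not real measurements
    expected_harm: float

def permissible(option: Option) -> bool:
    """Deontological filter: rule-violating options are excluded outright."""
    return not option.violates_rules

def net_utility(option: Option) -> float:
    """Consequentialist subroutine: expected good minus expected harm."""
    return option.expected_benefit - option.expected_harm

def choose(options: list[Option]) -> Option | None:
    allowed = [o for o in options if permissible(o)]
    if not allowed:
        return None           # nothing permissible: default to inaction or abort
    return max(allowed, key=net_utility)

candidates = [
    Option("engage_target", violates_rules=True,  expected_benefit=5.0, expected_harm=9.0),
    Option("hold_fire",     violates_rules=False, expected_benefit=1.0, expected_harm=0.5),
    Option("request_human_review", violates_rules=False, expected_benefit=2.0, expected_harm=0.5),
]
print(choose(candidates).name)   # selects "request_human_review"
```

Even this toy shows why critics call the approach insufficient: every contingency must be anticipated in the rule flags, and the utility numbers have to come from somewhere.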

The standard and correct objection to this approach — which does not invalidate it, but rather reveals its insufficiency — is twofold. First, one cannot write a rule to cover every contingency; second, consequentialist calculations quickly become intractable in all but the simplest cases (as when we ask whether the interests of all future generations should be weighed as heavily as those of people living today). Some critics also fault the inflexibility of the deontological framework and the obscurities lurking in the consequentialist notion of “good” (Whose pleasures? What pleasure metric?).

As Wallach and Allen suggest, the shortcomings of the top-down approach might be compensated for by a bottom-up approach that employs newer machine-learning techniques — neural nets plus genetic algorithms — to make moral machines into moral learners, in much the same way that human moral agents develop moral competence through a lifetime of moral learning. This approach borrows from the virtue ethics tradition (associated with Aristotle and the contemporary Notre Dame philosopher Alasdair MacIntyre) the idea that moral character consists of a set of virtues understood as settled habits or dispositions to act, shaped by a life-long process of moral learning and self-cultivation.
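
Here, by way of contrast, is a minimal Python sketch of the bottom-up idea: a perceptron-style learner that adjusts its connection weights in response to a tutor’s verdicts on example cases, rather than consulting an explicit rule set. The features, cases, and verdicts are invented, the genetic-algorithm component (a population-level search over such learners) is omitted, and nothing here is drawn from Wallach and Allen’s book.

```python
# A toy "bottom-up" moral learner: weights are nudged toward a tutor's verdicts,
# case by case, instead of consulting an explicit rule set. Features, cases, and
# verdicts are invented for illustration.
import random

FEATURES = ["risk_of_harm", "consent_given", "privacy_intrusion"]

def judge(weights, case):
    """Score a case; positive means 'act', negative means 'refrain'."""
    return sum(w * case[f] for w, f in zip(weights, FEATURES))

def train(examples, epochs=200, learning_rate=0.1):
    """Perceptron-style habituation: adjust weights only when the learner errs."""
    weights = [random.uniform(-0.5, 0.5) for _ in FEATURES]
    for _ in range(epochs):
        for case, verdict in examples:                 # verdict: +1 act, -1 refrain
            prediction = 1 if judge(weights, case) > 0 else -1
            if prediction != verdict:                  # learn only from mistakes
                weights = [w + learning_rate * verdict * case[f]
                           for w, f in zip(weights, FEATURES)]
    return weights

# Invented caregiving scenarios scored on three features, with a tutor's verdict on each.
training_cases = [
    ({"risk_of_harm": 0.9, "consent_given": 0.0, "privacy_intrusion": 0.8}, -1),
    ({"risk_of_harm": 0.1, "consent_given": 1.0, "privacy_intrusion": 0.2}, +1),
    ({"risk_of_harm": 0.2, "consent_given": 1.0, "privacy_intrusion": 0.9}, -1),
    ({"risk_of_harm": 0.3, "consent_given": 1.0, "privacy_intrusion": 0.1}, +1),
]
random.seed(1)
learned = train(training_cases)
print(dict(zip(FEATURES, learned)))
```

The learned weights function like settled dispositions: the machine’s verdicts track its training history rather than any rule it could recite.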

Critics of the bottom-up approach worry that the evolved moral competence of such machines is black-boxed and inherently unpredictable. Of course, the moral competence of human moral agents is similarly black-boxed and unpredictable — as evidenced by the fact that we can never be sure what decision a fellow human being will make in a given situation — because we, too, are literally neural nets whose dispositions have been shaped by learning. Some critics of the bottom-up approach argue that a minimum necessary condition on any moral machine is that it be able to justify its actions by reconstructing, in accordance with a rule set and a hedonic calculus, the steps by which it chose to act.

To that argument proponents of the bottom-up approach reply that human moral agents normally do not act on the basis of explicit, algorithmic or syllogistic moral reasoning, but instead act out of habit, offering ex-post-facto rationalizations of their actions only when called upon to do so. (This is why virtue ethics stresses the importance of practice for cultivating good habits.) On this view, no more should be demanded of the moral machines. Future efforts in programming machine morality will surely combine top-down and bottom-up approaches.

Where does this leave us? “Can machines become moral?” To answer that question, we have to be clear about what, exactly, we are asking. And, as I have suggested, answers to well-posed forms of the question will be empirical, not a priori. Those answers will come only as we let the engineers do their work, aided by the moral philosophers, in order to see what can and cannot be achieved. In the meantime, those teams of engineers and philosophers must get to work building into existing and soon-to-be-deployed robotic systems as much moral competence as possible — or at least a machine simulacrum of it.

Discussion Questions:

  1. Does the Chinese Room argument prove the impossibility of machine consciousness?
  2. Do we want to live in a world that we share with ethical robots?
  3. Some people think that experiments in programming machine morality will help us better understand human morality. Do you agree?
  4. If we succeed in programming machine morality, must it be always and everywhere the same? Or should we allow for the possibility that the moral behavior of robots might vary as widely as does human moral behavior?

Discussion Summary

Our discussion delved more deeply into many of the issues raised by my essay, especially regarding the nature of ethics, the legal and moral culpability of machines, Searle’s “Chinese room” argument, the differences between humans and robots, and the feasibility of machine morality. A few key points I noted in my responses:

  1. There can be no talk of holding machines responsible for their actions until some possible, still far-distant day when AI has matured to the point where we would be willing to accord human or human-like status to robots. Until that day, if ever it is reached, we have to remember that a machine is just a machine.
  2. I do not know whether we will, someday, perfectly replicate human beings in artificial systems, including human morality, partly because I am not sure that the concept of “perfect replication” is well defined. But I regard research on ethics programming for autonomous systems as, among other things, a laboratory in which to investigate human moral competence.
  3. If we prize moral diversity in the human community, why should we not prize moral diversity — within limits, obviously — within the community of our robot companions?
  4. The rapid advances in AI since Searle’s original proposal of the argument make moot many of the assumptions about what’s going on inside the “room.”
  5. Predicting the future is a risky game, in general, and technology forecasting is even more difficult.