Can Machines Become Moral?

The question is heard more and more often, both from those who think that machines cannot become moral and that to believe otherwise is a dangerous illusion, and from those who think that machines must become moral, given their ever-deeper integration into human society. In fact, the question is a hard one to answer, because, as typically posed, it is beset by many confusions and ambiguities. Only by sorting out some of the different ways in which the question is asked, as well as the motivations behind the question, can we hope to find an answer, or at least decide what an adequate answer might look like.

For some, the question is whether artificial agents, especially humanoid robots, like Commander Data in Star Trek: The Next Generation, will someday become sophisticated enough and enough like humans in morally relevant ways so as to be accorded equal moral standing with humans. This would include holding the robot morally responsible for its actions and according it the full array of rights that we confer upon humans. To the question in this form, the right answer is, “We don’t know.” Only time will tell whether, with advances in hardware and software, we will reach a point where we convince ourselves that it is wise and good — or necessary — to choose to include such robots in what we might think of as an extended human family.

If movies and television were a reliable guide to evolving sentiment, it would seem that many of us may even be eager to embrace our mechanical cousins as part of the clan, as witness recent films like Ex Machina and Chappie or the TV series Humans. Why we are drawn to such a future might be a more interesting question, at present, than whether it will actually happen. Why are we so enchanted by an ideal of mechanical, physical, and moral perfection unattainable by flesh-and-blood beings? What cultural anxiety does that bespeak? Notice also that — and this is important — my emphasis here is not on what such robots, themselves, will be like, but on how we as a human social community will choose to treat our artificial progeny. That is the real question, because this is a matter of human choice.

Of course, other science fiction stories — from Isaac Asimov’s I, Robot and Philip K. Dick’s Do Androids Dream of Electric Sheep? (famously adapted for the screen by Ridley Scott as Blade Runner) to the original Battlestar Galactica and its reboot — testify to persistent anxieties about the legal, moral, and even romantic complications such a future might have in store for us.

Some pose the question “Can machines become moral?” so that they may themselves answer immediately, “No,” and this usually on a priori grounds. For some, the reason is that robots cannot be intelligent or conscious. (This issue has been taken up here on Big Questions Online by David Gelernter in a July 2012 essay.) For others, including some proponents of a ban on all autonomous weapons, the reason adduced is that robots cannot understand and express emotions. For now, forget about the question whether morality requires emotion — or whether, as Plato and Kant argued, emotion always clouds our moral reasoning — and focus instead on the reasons for rejecting the proposition that machines could someday become moral. Are they true — perhaps even necessarily true?

Start with consciousness. Skeptics about machine morality, or machine intelligence more generally, often point to John Searle’s “Chinese room” argument as proof that a machine cannot possess consciousness or anything like human understanding. (See Searle’s 1980 paper, “Minds, Brains, and Programs.”) The argument goes like this: Imagine yourself, ignorant of Chinese, locked in a room with a vast set of rule books, written in your native language, that enable you to take questions posed to you in Chinese and then, following those rules, to “answer” the questions in Chinese in a way that leaves native speakers of Chinese thinking that you understand their language. In fact, you don’t have a clue about Chinese and are merely following the rules. For Searle, a robot or a computer outfitted with advanced artificial intelligence would be just like the person in the room, going through the motions always in perfectly appropriate ways, but without really understanding how and why.
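
To see what “merely following the rules” amounts to in miniature, consider a toy sketch in Python. This is not Searle’s own formulation; the vast rule books shrink here to a small lookup table whose entries I have invented, and the program produces appropriate-looking answers while understanding nothing.

    # A toy version of "merely following the rules": the rule books become a
    # lookup table mapping question strings to canned answers. The entries are
    # invented; the point is that nothing here understands Chinese.
    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
        "今天天气怎么样？": "今天天气很好。",  # "How's the weather?" -> "It's fine today."
    }

    def chinese_room(question: str) -> str:
        """Return the scripted answer for a question; no understanding involved."""
        return RULE_BOOK.get(question, "对不起，我不明白。")  # "Sorry, I don't understand."

    print(chinese_room("你好吗？"))  # prints a fluent-seeming reply

A real Searlean rule book would be incomparably larger and subtler, of course, but the moral of the thought experiment is supposed to hold at any scale.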

Criticisms of the Chinese room argument are many and forceful. The most compelling, I think, is that since we humans don’t really understand that which we call “consciousness” even in ourselves, how do we know it isn’t just the very competence that such a machine possesses? The Chinese room argument seems convincing because of its emphasis on the behavior of what’s in the room being governed entirely by rules. Surely, the reasoning goes, my graphing calculator doesn’t understand the mathematics it executes so perfectly; therefore even a vastly more sophisticated computing machine won’t understand what it does either. But this assumes that all artificial intelligence will be built around conventional (rule-based) Turing-type algorithms, whereas all of the recent, exciting breakthroughs in this area, such as Google DeepMind’s having built a machine that can beat the world’s best players at the ancient game of Go, make use of the importantly different computational model of neural nets.

In concept, artificial neural nets are remarkably simple. Modeled explicitly on the neuronal structure of the human brain, they consist of neuron-like nodes and dendrite-like connections among the nodes, with numerical weights on each connection, much as synapses in the brain vary in strength. But, in practice, such neural nets are remarkably powerful learning machines that can master tasks like pattern recognition that defy easy solution via conventional, rule-based computational techniques. I might well concede that my graphing calculator doesn’t understand even simple arithmetic. I don’t know what I’m supposed to think about whether DeepMind’s AlphaGo, or some vastly more complex machine, understands what it’s doing, in some sense (and perhaps even takes great delight and pride in its achievement).
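
To show how little machinery that basic picture requires, here is a minimal sketch of such a net in Python. It is emphatically not AlphaGo; the weights below are arbitrary numbers chosen for illustration, whereas in a real system they would be learned from data.

    import math

    # A minimal neural net in the sense described above: nodes, weighted
    # connections, and an activation function loosely standing in for a
    # neuron's firing. The weights are arbitrary; real systems learn them.

    def sigmoid(x: float) -> float:
        return 1.0 / (1.0 + math.exp(-x))

    def layer(inputs, weights, biases):
        """Compute one layer of node activations from the previous layer."""
        return [
            sigmoid(sum(w * x for w, x in zip(node_weights, inputs)) + b)
            for node_weights, b in zip(weights, biases)
        ]

    # Two inputs -> three hidden nodes -> one output node.
    hidden = layer([0.5, -1.2],
                   weights=[[0.8, -0.3], [0.1, 0.9], [-0.5, 0.4]],
                   biases=[0.0, 0.1, -0.2])
    output = layer(hidden, weights=[[1.2, -0.7, 0.3]], biases=[0.05])
    print(output)

Notice that nothing in these few lines is an explicit rule about Go, language, or anything else; whatever competence a trained system of this kind ends up with resides in the learned pattern of weights, which is part of what makes the calculator analogy misleading.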

There is an important lesson here, which applies with equal force to the claim that robots cannot comprehend emotion. It is that what can or cannot be done in the domain of artificial intelligence is always an empirical question, the answer to which will have to await the results of further research and development. Confident a priori assertions about what science and engineering cannot achieve have a history of turning out to be wrong, as with Auguste Comte’s bold claim in the 1830s that science could never reveal the internal chemical constitution of the sun and other heavenly bodies, a claim he made at just the time when scientists like Fraunhofer, Foucault, Kirchhoff, and Bunsen were pioneering the use of spectrographic analysis for precisely that task.

Given the accelerating pace at which both hardware and programming techniques are developing, it would be unwise to put a bet on any claim that “computers will never be able to do X.” Note that I’m not here forecasting the advent of the “Singularity” or the near-term achievement of full, general artificial intelligence. No, to repeat: technology forecasting, especially in this arena, is a risky business. But don’t be surprised if in a few years claims about computers not possessing an emotional capability begin to look as silly as the once-commonplace claims back in the 1960s and 1970s that computers would never master natural language. (Imagine that last sentence voiced by Siri.)

Some thinkers ask the question “Can machines become moral?” with a sense of urgency, because they think that it is critically necessary that we begin to outfit smart robots with at least rudimentary moral capacities as such machines play an ever-expanding and ever-more consequential role in human affairs.

Two arenas in which this discussion is already well advanced are ethics for self-driving cars (SDCs) and ethics for autonomous weapons. As Cal Poly professor of philosophy Patrick Lin reminds us, we will soon be delegating morally fraught decisions to SDCs, such as whether, in the face of an unavoidable crash, to choose the path that risks harm to the vehicle’s occupants or to pedestrians or passengers in other vehicles. And Ron Arkin, a computer engineer at Georgia Tech, has argued that by building autonomous weapons with ethics modules, we can produce robot warriors that are “more moral” than the average human combatant. For my part, I spend a lot of time thinking about the rapid expansion of health care robotics, realizing that the patient-assist robot that might soon be helping my ninety-year-old mother into the bathtub or onto the toilet had better be programmed to understand how to balance the moral obligation to protect and serve against a human being’s rights to privacy and bodily integrity.

In the book Moral Machines, Wendell Wallach and Colin Allen argue that there is a still more widespread challenge here as the frequency of human-robot interactions increases and the speed of decision-making by artificial systems grows beyond the point where real-time, meaningful, human intervention is even possible. Under such conditions, they argue, ethical monitoring, if not even more robust moral competence, must be engineered into the machines themselves. The question for them is not whether but how.

Wallach and Allen distinguish two different approaches to programming machine morality: the “top-down” and the “bottom-up” approaches. Top-down approaches combine conventional, decision-tree programming methods with rule-based ethical frameworks, such as Kantian deontology, and with consequentialist or utilitarian, greatest-good-for-the-greatest-number frameworks (often associated with Jeremy Bentham and John Stuart Mill). Simply put, one writes an ethical rule set into the machine code and adds a subroutine for carrying out cost-benefit calculations. This is precisely the approach endorsed by Arkin in his book Governing Lethal Behavior in Autonomous Robots. Here the rule set consists of the International Law of Armed Conflict and International Humanitarian Law (basically, the Geneva Conventions), and mission- or theater-specific Rules of Engagement.
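
To make the recipe concrete, here is a schematic sketch, in Python, of that two-part structure: hard rules first, cost-benefit calculation second. The rule names and the probability and utility numbers are invented for illustration; this is not Arkin’s ethical governor or any deployed system.

    # A schematic top-down decision procedure: a deontological filter screens out
    # rule-violating actions, then a consequentialist subroutine ranks the rest by
    # expected utility. All rules and numbers here are hypothetical.
    FORBIDDEN = {"target_noncombatant", "use_disproportionate_force"}

    def permitted(action: str) -> bool:
        """Hard-rule check: reject any action on the forbidden list."""
        return action not in FORBIDDEN

    def expected_utility(outcomes) -> float:
        """Cost-benefit subroutine: probability-weighted sum of outcome values."""
        return sum(p * value for p, value in outcomes)

    def choose(options: dict):
        """Pick the permitted action with the highest expected utility, if any."""
        candidates = [a for a in options if permitted(a)]
        return max(candidates, key=lambda a: expected_utility(options[a]), default=None)

    # Hypothetical options, each with (probability, value) pairs for its outcomes.
    options = {
        "hold_fire": [(1.0, 0.0)],
        "engage_verified_combatant": [(0.9, 5.0), (0.1, -20.0)],
        "target_noncombatant": [(1.0, 10.0)],  # filtered out regardless of its "utility"
    }
    print(choose(options))  # -> engage_verified_combatant

The objections rehearsed below show up even in this toy: the forbidden list can never be complete, and the probabilities and values fed to the utility subroutine have to come from somewhere.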

The standard and correct objection to this approach — which does not invalidate it, but rather reveals its insufficiency — is twofold. First, one cannot write a rule to cover every contingency; second, consequentialist calculations quickly become intractable in all but the simplest cases (as when we ask whether the interests of all future generations should be weighed as heavily as those of people living today). Some critics also fault the inflexibility of the deontological framework and the obscurities lurking in the consequentialist notion of “good” (Whose pleasures? What pleasure metric?).

As Wallach and Allen suggest, the shortcomings of the top-down approach might be compensated for by a bottom-up approach that employs new, deep-learning techniques — neural nets plus genetic algorithms — to make the moral machines into moral learners, in much the same way that human moral agents develop moral competence through a lifetime of moral learning. This approach borrows from the virtue ethics tradition (associated with Aristotle and the contemporary Notre Dame philosopher Alasdair MacIntyre) the idea that moral character consists of a set of virtues understood as settled habits or dispositions to act, shaped by a life-long process of moral learning and self-cultivation.
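
As a cartoon of the bottom-up idea, consider the following Python sketch. The agent begins with no hand-written rules, only adjustable dispositions toward candidate actions, which a simulated teacher strengthens or weakens through feedback. The simple reinforcement-style update is only a stand-in for the neural-net and genetic-algorithm machinery Wallach and Allen have in mind, and the action names and feedback values are invented.

    import random

    # A toy bottom-up moral learner: dispositions (habits) start undifferentiated
    # and are shaped by repeated feedback rather than by an explicit rule set.
    ACTIONS = ["help", "ignore", "deceive"]
    dispositions = {a: 1.0 for a in ACTIONS}  # every habit equally weak at first

    def act() -> str:
        """Choose an action stochastically, in proportion to current dispositions."""
        return random.choices(ACTIONS, weights=[dispositions[a] for a in ACTIONS])[0]

    def teacher_feedback(action: str) -> float:
        """Simulated moral upbringing: praise helping, censure deceiving."""
        return {"help": 1.0, "ignore": -0.1, "deceive": -1.0}[action]

    for _ in range(1000):  # a (very short) moral education
        a = act()
        dispositions[a] = max(0.05, dispositions[a] + 0.1 * teacher_feedback(a))

    print(dispositions)  # "help" should end up the strongest settled habit

What the agent ends up with is a settled habit, not a stored justification, which is exactly the feature that worries the critics discussed next.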

Critics of the bottom-up approach worry that the evolved moral competence of such machines is black-boxed and inherently unpredictable. Of course, the moral competence of human moral agents is similarly black-boxed and unpredictable — as evidenced by the fact that we can never be sure what decision a fellow human being will make in a given situation — because we, too, are literally neural nets the dispositions of which have been shaped by learning. Some critics of the bottom-up approach argue that a minimum necessary condition on any moral machine is that the machine should be able to justify its actions by reconstructing, in accordance with a rule set and a hedonic calculus, the steps by which it chose to take an action.

To that argument proponents of the bottom-up approach reply that human moral agents normally do not act on the basis of explicit, algorithmic or syllogistic moral reasoning, but instead act out of habit, offering ex-post-facto rationalizations of their actions only when called upon to do so. (This is why virtue ethics stresses the importance of practice for cultivating good habits.) On this view, no more should be demanded of the moral machines. Future efforts in programming machine morality will surely combine top-down and bottom-up approaches.

Where does this leave us? “Can machines become moral?” To answer that question, we have to be clear about what, exactly, we are asking. And, as I have suggested, answers to well-posed forms of the question will be empirical, not a priori. Those answers will come only as we let the engineers do their work, aided by the moral philosophers, in order to see what can and cannot be achieved. In the meantime, those teams of engineers and philosophers must get to work building into existing and soon-to-be-deployed robotic systems as much moral competence as possible — or at least a machine simulacrum of it.

Discussion Questions:

  1. Does the Chinese Room argument prove the impossibility of machine consciousness?
  2. Do we want to live in a world that we share with ethical robots?
  3. Some people think that experiments in programming machine morality will help us better to understand human morality. Do you agree?
  4. If we succeed in programming machine morality, must it be always and everywhere the same? Or should we allow for the possibility that the moral behavior of robots might vary as widely as does human moral behavior?

Discussion Summary

Our discussion delved more deeply into many of the issues raised by my essay, especially regarding the nature of ethics, the legal and moral culpability of machines, Searle’s “Chinese room” argument, the differences between humans and robots, and the feasibility of machine morality. A few key points I noted in my responses:

  1. There can be no talk of holding machines responsible for their actions until some possible, still far distant day when the AI has matured to the point where we would be willing to accord human or human-like status to robots. Until that day, if ever it is reached, we have to remember that a machine is just a machine.
  2. I do not know whether we will, someday, perfectly replicate human beings in artificial systems, including human morality, partly because I am not sure that the concept of “perfect replication” is well defined. But I regard research on ethics programming for autonomous systems as, among other things, a laboratory in which to investigate human moral competence.
  3. If we prize moral diversity in the human community, why should we not prize moral diversity — within limits, obviously — within the community of our robot companions?
  4. The rapid advances in AI since Searle’s original proposal of the argument make moot many of the assumptions about what’s going on inside the “room.”
  5. Predicting the future is a risky game, in general, and technology forecasting is even more difficult.

17 Responses

  1. Chen Sun says:

    Can a machine follow a set of rules that comply with what are moral rules, however intricate? Yes, it appears so, and better than humans can.

    Can a machine step outside its set of rules? I don’t know how.

    Also, there are rules (laws) that are difficult to set, because we don’t know how or what they are. For example, how does someone become enlightened? According to Hinduism and Buddhism, one meditates a long time, perhaps eons. A machine can hibernate. Can it meditate? Can it receive enlightenment from meditation?

    Also, recall in Plato’s Republic that Plato acknowledges two paths to enlightenment: one, the ascent from the cave, by extensive learning; the second, the spiritual sages’ path, which Plato didn’t detail. The spiritual sages’ path can’t be attained by extensive machine learning. And the Platonic learning method focuses on harmonizing the soul’s three parts — the reasoning part, the warrior, and the desiring part. Do machines have the second and third parts? And are they able to harmonize the three parts?

    The original question was about machine morality, and the fountainhead of most present morality rules is the spiritual sages’ and Platonic learning methods. Heuristic-based rules are derived afterwards. A machine can follow the heuristic rules, but can it perform the genesis of these rules?

    Lastly, if machines can be built to be moral, superior-functioning machines can also be built to be immoral.

  2. Vlad says:

    We don’t know the answers to those questions. My hunch is that at some time in the future, consciousness (or its surrogate) will be created. We are looking for aliens using powerful measuring instruments, hoping that somewhere far away in a galaxy there is life. And we don’t suspect that we can create “aliens” here, on Earth. Perhaps such discoveries will solve some problems, but we for sure will get new ones.

  3. tdietterich says:

    As a computer scientist, I lack formal training in theories of ethics. But it seems to me that when we use the phrase “ethical robots,” we can mean two different things. First, we might mean that the behavior of the robot would generally be regarded as ethically appropriate by the human society in which it acts. Second, we might mean that the robot will be held morally responsible for its actions.

    As an engineer, I understand the first of these. As a society, we must determine what it means for robots to behave correctly. This is the same problem we face with any software or hardware system that we create. In his introduction to the Handbook of Artificial Intelligence, Avron Barr writes that AI has its origins in the realization that specifying correct “intelligent behavior” is so difficult that standard software engineering methods do not suffice. This is certainly the case with machine learning approaches to creating AI systems.

    I don’t understand what it means to hold a robot responsible for its actions. What sanctions do we have against a robot? If one Tesla car makes a mistake, do we lock it up? The owner can just buy an identical one and use it instead. We aren’t really sanctioning the car (which does not experience pain and does not suffer from a loss of liberty). We are sanctioning the owner. And it DOES make sense to me that we hold responsible the humans who design, manufacture, test, market, own, and use the robot.

    There is one way in which we might hold a robot morally responsible, as suggested by David Vladeck in his article “Machines Without Principals: Liability Rules and Artificial Intelligence.” He argues, for utilitarian reasons, that each self-driving car should be a kind of “legal person” that is required to carry liability insurance. If someone is injured by the car, they should be able to sue the car and receive compensation. Then the insurance companies will be left with the job of deciding who pays for the insurance premiums (designer, manufacturer, retailer, owner, passenger).

    • Don Howard says:

      It is mainly your first meaning that I intend in the article, designing artificial systems that can learn to function in ethically appropriate ways. As to your second meaning, the question of responsibility, itself, comes in at least two different forms, both hinted at in your comment. First is the question about holding machines, themselves, responsible for their actions. Simply put, there can be no talk of this until some possible, still far distant day when the AI has matured to the point where we would be willing to accord human or human-like status to robots. Until that day, if ever it is reached, we have to remember that a machine is just a machine. That leaves the second and more important problem: to whom to assign responsibility as more autonomy is engineered into more systems. But I think that this is, in principle, an easy and straightforward question. Whether we hold the designer, the manufacturer, the retailer, the owner, or the user responsible will depend on the details of the case, just as it does in current product liability law. A different legal regime applies in the case of autonomous weapons, but the same kinds of principles are involved. I found it interesting that Volvo announced last year that it would assume legal responsibility for all accidents involving Volvo vehicles when they are operating in autonomous mode. Some other manufacturers followed suit. This move by Volvo makes sense, because, if the promise of drastically reduced accident and injury rates is realized, the cost to Volvo of assuming liability should not be all that great, and Volvo’s doing that removes what might, otherwise, have been one of the more serious, potential regulatory obstacles to the widespread deployment of self-driving cars.

  4. sergio says:

    Theoretically, given the fact we exist, there must be a combination of hardware + software that can perfectly replicate human beings. So every question regarding AI is essentially a question about human beings. So I might say the answers are within us.

    But the post provoked other thoughts for me, such as: What happens if an artificial morality surpasses human morality? One of the outcomes I see is like the “I, Robot” finale, since a strict, hardcoded morality would apply itself no matter the liability.

    • Don Howard says:

      I do not know whether we will, someday, perfectly replicate human beings in artificial systems, including human morality, partly because I am not sure that the concept of “perfect replication” is well defined (notwithstanding the humanoid Cylons in “Battlestar Galactica”). But I regard research on ethics programming for autonomous systems as, among other things, a laboratory in which to investigate human moral competence, on the argument that you never understand something really well until you try to build it. One result of that research might well be systems that, in some respects, outperform human moral agents. How, one might ask? Well, humankind has learned new moral truths in the course of its history. There was once a time, not so long ago, when most human beings believed that slavery was perfectly natural and morally justified. They were wrong, and now most people believe otherwise. Why would we think it impossible for similar moral progress to be made in the future? And is there any a priori reason — I know of none — why our teachers could not be our artificial, moral progeny, just as parents sometimes achieve new moral insights thanks to their grown-up children?

  5. randy morris says:

    The best argument against the “Chinese room” is that the machine Searle describes has fetch and compare states without any provision for modifying the data base. It could not possibly pass a Turing test because it could never (truthfully) answer the question: What state am I in? It is an example of a machine that could not pass a Turing test and that no one would describe as intelligent. The Chinese room argument does not even pretend to address the question: Is it possible to design an intelligent machine? It only gives a single example of a machine that obviously cannot be intelligent. I never understood why this was even considered as an argument against any form of AI. As to the question: Can machines become moral?, the answer is obviously yes, since we are machines and, by our own definitions, we are moral.

    • Don Howard says:

      Randy — Space was limited, so I chose to include above only what I take to be the most compelling of the many objections to the Chinese room argument. I agree that the particular “machine” imagined by Searle is an especially simple-minded one, basically nothing more than a look-up table. With this, as with most other such questions, my advice is to be patient and let the engineers see what they can achieve. Google Translate gets better every day. It still makes too many mistakes. But I can imagine a day when, especially as its context-sensitivity and its “ear” for idiom, figures of speech, and vernacular language grow, more users will think that they are dealing with an “intelligent” system.

  6. tnmurti says:

    I think the assessment of moral situations for intervention could differ with humans and robots significantly.

    Humans are guided by a set of moral rules, some rigid and some others not so rigid. These rules are in place to create a discretionary space, and from within that space intervention decisions come up, guided by feelings.

    The discretionary space, itself, differs from individual to individual, and with robots it is almost nil.

    This suggests that robotic intervention may not meet human standards, since consciousness cannot be built into robots.

    • Don Howard says:

      You seem to assume that ethical robots will always respond in exactly the same way under the same circumstances. That might well be true with the older, Turing conception of computation. But neural nets trained up with learning algorithms are inherently stochastic systems, so variation will be built in from the beginning. Speaking for myself, I think that’s a plus. Indeed, I think that it should be a design objective. If we prize moral diversity in the human community, why should we not prize moral diversity — within limits, obviously — within the community of our robot companions?

  7. Thanks for this insightful piece (on a subject I’ve written on before). The question of whether machines will ever possess felt properties of mind — sensation, emotion, self-awareness — is older even than computers, and I won’t venture a guess here.

    But I think there’s a more definite answer to whether machines will ever be moral in the sense of exhibiting processes of moral decision-making, whether rule-based or habituated. Here I believe the answer must be: yes, because they already do — not just computers, but machines in general. The behavior of machines is inherently the product of human design; it’s human agency at a distance, just as a book is human intention at a distance.

    Take, for instance, the refrain from Patrick Lin and many other futurists that “we must begin to think now about how to make machines moral” — before, it seems implied, the machines make moral decisions without us. But seatbelts, crumple zones, and lighter-weight car construction are already forms of moral engineering. The Smart car and the Humvee embody clear and distinct utilitarian calculi about the outcomes of collisions (the word “embody” can be taken in a nearly literal sense here). It’s not at all obvious how the choices about collision handling encoded in the software of today’s human-driven cars or tomorrow’s robotic cars are any different in kind. They only increase the distance of the action from conscious human agency.

    My point is not at all to discount the urgency of clear ethical reasoning in the systems that as we speak are being designed to make real-time autonomous decisions about who dies in a car wreck or a far-off battlefield. Quite the opposite: I’m suspicious of language like Lin’s for suggesting a looming moral discontinuity. I wonder if this rhetoric actually functions as preemptive moral evasion, suggesting that autonomous systems will bear their own agency, exempting their creators from responsibility in the way we exempt parents from legal responsibility for the actions of their mature children.

    • Don Howard says:

      Total agreement on what you say about morality already being built in as part of the design of many systems. But we might disagree on two points:

      (1) The moral competence of a seat belt is entirely scripted by the designer. But we are entering a world in which, in effect, we will give the machines discretionary moral authority, as it were, since the moral choices they will make (I’m assuming neural nets trained up with learning algorithms) are not scripted in advance in the same way. That means that we need more, and more careful, thinking about how to engineer such discretionary moral authority.

      (2) I do not understand your point about “preemptive moral evasion.” I do not think that it is Patrick Lin’s intention to exempt the engineers from responsibility. Far from it. I think that Patrick’s message has been clear all along, namely, that the engineers have to accept responsibility and that, having accepted such responsibility, they have to collaborate with the philosophers, the policy makers, and the public to get it right.

  8. Jonathan says:

    If the most compelling response to the Chinese room argument is “that since we humans don’t really understand that which we call ‘consciousness’ even in ourselves, how do we know it isn’t just the very competence that such a machine possesses?”, then I would think the responses to the Chinese room argument are pretty weak. The Chinese room argument isn’t designed to reveal what consciousness is, but to show that you don’t get there by stimulus response.

    You might as well say that since we don’t really understand that which we call consciousness, we could arrive at it by banging sticks together, for all we know…

    • Don Howard says:

      Jonathan, see the comment above from randy morris and my reply to it. We might well agree that a “stimulus-response machine” doesn’t evince “consciousness,” whatever “consciousness” might be. But such a machine is an especially dumb one. I do not understand why the capabilities of such a crude machine settle the argument in Searle’s favor. The serious point here is that the rapid advances in AI since Searle’s original proposal of the argument make moot many of the assumptions about what’s going on inside the “room,” all the more so if what’s in the “room” is not just a look-up table but something as sophisticated as Watson or Watson’s grandchildren.

  9. Amandine Picot says:

    I think the machines are going to become moral. Not today, but some time, yes. You know the expression “truth is stranger than fiction”? That’s exactly what is happening right now: We create machines, which are in our service, for the moment. The subject has already been treated in a lot of movies, but what will happen when machines feel more human than us? When they want to have the same rights as us? They will be stronger than us because they will be machines, they won’t have to eat or drink, they will be more resistant to extreme temperatures, maybe they will also be able to resist nuclear impacts. So when we finish killing each other, they will remain while we disappear. They won’t even have to fight to take our place; they will just have to wait and watch.

    • Don Howard says:

      Predicting the future is a risky game, in general, and technology forecasting is even more difficult. I don’t know what the capabilities of future robots will be. But I do like to think about the question of robot rights, on which point I always ask my students to watch that great episode from “Star Trek: The Next Generation,” called “The Measure of a Man” (2:9). In this episode, a Starfleet engineer arrives on the Enterprise with orders allowing him to disassemble Data in order to verify a hypothesis about how the positronic brain functions. The risk is that they won’t be able to reassemble Data, in which case he will die. Data refuses to submit, and a quasi-judicial process is convened to settle the question whether Data has a right to defend his own existence. You can guess the decision of the court.

  10. Patrick Lin says:

    Don is correct: In just about everything I’ve written in this area, it should be clear that (1) I’m not letting engineers off the hook but quite the opposite, and (2) I think it’s far-fetched to say that AI or robots can “make decisions” for themselves in a way that can’t be traced back to some human design choice or responsibility. For instance, see my testimony at the UN’s CCW last year:

    http://www.theatlantic.com/technology/archive/2015/04/do-killer-robots-violate-human-rights/390033/
