Will Machines Ever Become Human?


No. Digital computers won’t; and in the world as we know it, they are the only candidate machines.

What does “human” mean? Humans are conscious and intelligent — although it’s curiously easy to imagine one attribute without the other. An intelligent but unconscious being is a “zombie” in science fiction — and to philosophers and technologists too. We can also imagine a conscious non-intelligence. It would experience its environment as a flow of unidentified, meaningless sensations engendering no mental activity beyond mere passive awareness.

Some day, digital computers will almost certainly be intelligent. But they will never be conscious. One day we are likely to face a world full of real zombies and the moral and philosophical problems they pose. I’ll return to these hard questions.


The possibility of intelligent computers has obsessed mankind since Alan Turing first raised it formally in 1950. Turing was vague about consciousness, which he thought unnecessary to machine intelligence. Many others have been vague since. But artificial consciousness is surely as fascinating as artificial intelligence.

Digital computers won’t ever be conscious; they are made of the wrong stuff (as the philosopher John Searle first argued in 1980). A scientist, Searle noted, naturally assumes that consciousness results from the chemical and physical structure of humans and animals — as photosynthesis results from the chemistry of plants. (We assume that animals have a sort of intelligence, a sort of consciousness, to the extent they seem human-like.) You can’t program your laptop to transform carbon dioxide into sugar; computers are made of the wrong stuff for photosynthesis — and for consciousness too.

No serious thinker argues that computers today are conscious. Suppose you tell one computer and one man to imagine a rose and then describe it. You might get two similar descriptions, and be unable to tell which is which. But behind these similar statements lies a crucial difference. The man can see and sense an imaginary rose in his mind. The computer can put on a good performance, can describe an imaginary rose in detail — but can’t actually see or sense anything. It has no internal mental world; no consciousness; only a blank.

But some thinkers reject the wrong-stuff argument and believe that, once computers and software grow powerful and sophisticated enough, they will be conscious as well as intelligent.

They point to a similarity between neurons, the brain’s basic component, and transistors, the basic component of computers. Both neurons and transistors transform incoming electrical signals to outgoing signals. Now a single neuron by itself is not conscious, not intelligent. But gather lots together in just the right way and you get the brain of a conscious and intelligent human. A single transistor seems likewise unpromising. But gather lots together, hook them up right and you will get consciousness, just as you do with neurons.

But this argument makes no sense. One type of unconscious thing (neurons) can create consciousness in the right kind of ensemble. Why should the same hold for other unconscious things? In every other known case, it does not hold. No ensemble of soda cans or grapefruit rinds is likely to yield consciousness. Yes, but transistors (according to this argument) resemble neurons in just the right way; therefore they will act like neurons in creating consciousness. But this “exactly right resemblance” is just an assertion, to be taken on trust. Neurons resemble heart cells more closely than they do transistors, yet hearts are not conscious.

In fact, an ensemble of transistors is not even the case we’re discussing; we’re discussing digital computers and software. “Computationalist” philosophers and psychologists and some artificial intelligence researchers believe that digital computers will one day be conscious and intelligent. In fact they go farther and assert that mental processes are in essence computational; they build a philosophical worldview on the idea that mind relates to brain as software relates to computer.

So let’s turn to the digital computer. It is an ensemble of (1) the processor, which executes (2) the software, which (when it is executed) has the effect of changing the data stored in (3) the memory. The memory stores data in numerical form, as binary integers or “bits.” Software can be understood many ways, but in basic terms it is a series of commands to be executed by the processor, each carrying out a simple arithmetic (or related) operation, each intended to accomplish one part of a (potentially complex) transformation of data in the memory.

In other words: by executing software, the processor gradually transforms the memory from an input state to an output or result state — as old-fashioned film was transformed (or developed) from its input state — the exposed film, seemingly blank — to a result state, bearing the image caught by the lens. A digital computer is a memory-transforming machine, where the process of transformation is dictated by the software. We can picture a digital computer as a gigantic blackboard (the memory) ruled into squares, each large enough to hold the symbol 0 or 1, and a robot (the processor) moving blazingly fast over the blackboard, erasing old bits and writing new ones. Such a machine is in essence the “Turing machine” of 1936, which played a fundamental role in the development of theoretical computer science.
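The blackboard-and-robot picture can be made concrete in a few lines of code. The sketch below is my own toy illustration, not anything from Turing’s paper: a tape of bits stands in for the blackboard, a head position for the robot, and a small rule table for the software. All names here are invented for the example.

```python
# Toy model of the "blackboard and robot": a tape of bits (the memory),
# a head (the processor), and a rule table (the software). This particular
# program simply inverts every bit on the tape, then stops.

def run_turing_machine(tape, rules, state="start"):
    """Step the head over the tape until the machine halts or runs off
    the end; return the transformed tape as a string."""
    tape = list(tape)
    pos = 0
    while state != "halt" and 0 <= pos < len(tape):
        symbol = tape[pos]
        # The "software": look up what to write, which way to move,
        # and which state to enter next.
        write, move, state = rules[(state, symbol)]
        tape[pos] = write
        pos += move
    return "".join(tape)

# Rules for a one-pass bit inverter: in state "start", write the opposite
# bit and move right. Falling off the right end of the tape ends the run.
invert_rules = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
}

print(run_turing_machine("10110", invert_rules))  # -> 01001
```

However elaborate the rule table becomes, the machine never does anything but this: read a symbol, write a symbol, move, change state. That is the whole repertoire of the robot at the blackboard.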

So: everyone agrees that today’s computers are not conscious, but some believe that ever-faster and more capable computers with ever-more-complex, sophisticated software will eventually be conscious.

This idea also makes no sense. Today’s robot zipping around the blackboard changing numbers is not conscious; why should the same machine speeded up, with a different program and a larger blackboard, be conscious? (And why shouldn’t other robots executing elaborate programs to paint cars or slice chickens have the same sort of consciousness?)

Digital computers will never be conscious. But what about intelligence? Could we build a zombie using a digital computer? — an entity or robot that is unconscious but nonetheless able to think, talk and act like a human?


The tricky part here is the nature of thought and the cognitive spectrum. Sometimes (when you are wide-awake, mentally alert) you think analytically. But as alertness falls, your thought becomes less focused and abstract, your tendency to drift or free-associate increases, and the character of thought and memory changes.

Every day we pass from the sharply focused reds and oranges of analytical thought through the lazier, less exhausting yellows and greens of habit and common sense and experience into the vivid blue of uncontrolled thinking, free association — and, finally, into the deep violet of sleep and dreams.

Partway down the spectrum, as you pause and look out a window, your thoughts wander. They move “horizontally” instead of straight ahead in a logical, analytic way. But as you lose the ability to solve problems using logic and abstraction, you gain the capacity to solve them by remembering and applying earlier experiences. As your focus drifts still lower and you approach sleep, your loss of thought-control and your withdrawal from external reality progress. At the bottom of the spectrum, on the brink of sleep, you are free-associating.

It follows that your level of “focus” or “alertness” is basic to human thought. We can imagine focus as a physiological value, like heart rate or temperature. Each person’s focus moves during the day between maximum and minimum. Your focus is maximum when you are wide-awake. It sinks lower as you become tired, and reaches a minimum when you are asleep. (In fact it oscillates several times over a day.)

We can’t hope to produce artificial thought on a computer unless we reproduce the cognitive spectrum. It’s an immensely hard technical problem that goes way beyond the brief sketch I’ve given here. But many years down the road, we will solve it.


So we arrive back at that strange independence of consciousness on the one hand and intelligence on the other. Either can exist by itself.

If we put the two together, the result is obviously more powerful than mere consciousness without intelligence. But is it more powerful than intelligence without consciousness? Are human beings more capable than zombies?

Yes, in the sense that zombies can’t imagine us (can’t grasp what consciousness is), but we can imagine them. (Zombies can’t imagine consciousness because they can’t imagine anything.)

But what practical, biological use is consciousness if a zombie and a person can, in principle, lead indistinguishable lives?

Does consciousness give the possessor some added survival or reproductive advantage? Philosophers and scientists have proposed answers; but the question remains open. If the answer is no, we’re faced with a different question: why did evolution “develop” a complex mechanism that is biologically pointless?

Obviously consciousness serves a spiritual purpose. No zombie could suffer and sacrifice for a friend on principle. The zombie could talk a good game and, if we program it right, would be thoroughly self-sacrificing. But its good deeds resemble small change handed out to the poor by a billionaire, whose actions seem like charity although they require no sacrifice of him at all.

Do the spiritual possibilities (and the many pleasures and satisfactions) opened by consciousness make up for the reality of suffering and pain? This question resembles one asked in the Talmud: would it have been better for human beings had they never been created? The rabbis’ answer is, ultimately, yes: it would have been better.

But this question too remains open.

Discussion Summary

I addressed the question of building a “human” computer — using software, in other words, to build a mind, hence a mindful computer — hence a human computer. Mind has two basic aspects: thinking and feeling or (equivalent to feeling) awareness, qualitative experience, consciousness. Of course these two aspects of mind color each other deeply — like two lighthouses with their beams fixed on each other.

I believe that one day we will build a thinking computer. I don’t believe we will ever build a conscious computer. We will wind up with an immensely powerful, useful and dangerous machine but not one that is human: although it will think, it will be unconscious. It can claim to be conscious: ask if it’s conscious and it can say (indignantly) “Of course! If you doubt that I’m conscious, why don’t you doubt that you are? Hypocrite.” (And it walks away, sulking.) All the same, within this computer’s “mind” there is no one home; the machine is what we’ve come to call a zombie: an unconscious thinker. And there’s one other important distinction, in consequence, between humans and zombies: we can imagine what it’s like to be a zombie, but a zombie can’t imagine what it’s like to be human. In fact, it can’t imagine anything at all.

Readers’ comments covered a fairly wide range of questions and objections (and after all, my views on the topic — anyone’s views — are highly arguable and controversial); but two important themes emerged.

The first is more important: it’s hard for many people to accept that there’s anything computers can’t do. Many people’s confidence in the power of digital computers seems unbounded. One commentator wrote “there is no reason consciousness cannot be simulated on a powerful enough computer”; others had similar thoughts.

In fact, some people’s confidence extends to the idea of treating a human as if he were a digital computer. Some people believe that by capturing a mind in software, the mind or even the mind’s owner could (in effect) be uploaded to the Internet. “If a human being could successfully be uploaded into a machine….”; and similar comments.

It’s natural for people to be optimistic about the power of computing. It’s also natural to equate the mind with software. The mind has a puzzling relationship to the brain: it’s created by the brain, but it’s intangible and invisible. If you open someone’s head, all you see is brain cells wired together. The virtual machine created by software (such as the browser you’re using now) is created by an electronic computer, but if you crack open the processor and memory chips and look inside, you see only micro-electronics; you can’t see software. The idea that mind relates to brain as software relates to computer is natural, even self-evident.

But it’s also wrong. These comments all take an oddly cramped view of human beings. Imagine that we are somehow able to capture some particular mind in software, or (equivalently) in data that software ingests. Say you’re the lucky test subject: your mind has been captured in digital data, a long list of binary numbers. Can you actually believe that these numbers plus a computer are your identical twin? Is your personhood, your way of experiencing the world, yourself, as meager as that? Consider a photo of a person: it might be high-definition, even 3D, but you’d never confuse the photo with a human being. You’d never describe the photo as your twin, as another person made of different stuff. The list of binary numbers is a different sort of photo; that’s all. As for the idea some people have that someday they will upload themselves to the internet and thereby live forever: remember that you could crumple the list of numbers, set it on fire, toss it in the trash, and it would make no difference to you; you don’t die when that happens. And when you do actually face death, the existence of those numbers on paper or on the internet won’t matter to you either.

The other main theme in the comments: granted digital computers will never be conscious, never be human; but what about some other form of computer, some other machine?

In the 1930s, Alan Turing and other logicians (Post, Kleene) set out to give a precise definition of “compute.” Their work led to the strong result that the computer you’re using right now — assuming you give it as much memory and time as it needs — can do every computation that exists, with nothing left over. A different sort of machine might compute faster or use different kinds of physical processes and materials, but there’s no fundamental way in which it differs from the computer in front of you now. Even a quantum computer adds nothing fundamental: although its memory and speed may be immensely larger than a classical computer’s, in the end it can do exactly the same computations as the laptop in front of you.

I’ll finish with two questions based on these comments.

New Big Questions:

1. Why are we so willing to believe in the omnipotence of computers?

2. Why did no one raise the Judeo-Christian objection to “human” machines, that human beings are “the image of God,” and how could God’s image (any more than God Himself) possibly be reduced to a list of binary numbers? Are there so few Jews and Christians left? Or have Jews and Christians been intimidated by scientism into silence?

And why aren’t these questions crucial to Templeton?

27 Responses

  1. mikebrown says:

    An essential ingredient of being human is consciousness.  It seems customary these days to assume that consciousness is emergent, i.e. it is essentially associated with matter.   However, bearing in mind the mysteries of quantum theory, for example entanglement, is it not possible that consciousness can exist in a dimension beyond matter yet interface with it?  We need to know this when considering if machines can become human.

    • David Gelernter says:

      Quantum mechanics is a deep & fascinating field, but I’d argue that studying consciousness “in a dimension beyond matter” would tell us exactly nothing about machines becoming human.  Human implies conscious as humans are conscious.  If some other consciousness exists, we’ll recognize it as consciousness exactly to the extent that it resembles human consciousness; that’s the only kind we know, and the only definition of consciousness we have.

      • mikebrown says:

        @David>  Interesting points!  I only cited quantum theory as one example of mysteries at the frontiers of science.  Extra dimensions of course also appear in string theory. To my mind the possibility of extra dimensions leaves the question of dualism open.  I have seen some cutting-edge ideas about the roots of consciousness by such frontier scientists as Hameroff and Penrose (see the microtubules theory in Shadows of the Mind, by Penrose).  However, it seems to me that we still do not know if consciousness is emergent from the brain or if the brain acts as an interface with other dimensions.  There is not even a symbol in the whole of science to represent consciousness.

        When I disagree about a work of art with a fellow human being I cannot know what his state of consciousness is.  Even less can I know for a monkey, dog, bat, or worm.  Thus conscious states seem to lie on a continuous spectrum, and who knows if machines, or the sort of computers that we can envisage being built in the medium term, will have states with which humans can interact – as with a pet dog?  If consciousness arises from some form of complex arrangement of physical components – I think probably not.  If consciousness is some state in another dimension, yet to be identified – possibly.

        • David Gelernter says:

          There’s nothing other-dimensional about dog or human consciousness; why should I doubt that consciousness in both cases is a physiological phenomenon that results from a network of neurons connected to an ordinary organic body?  It’s true we don’t know how consciousness works, but we don’t know a lot of things about the human body.  Occam famously said “pluralitas non est ponenda sine necessitate”–I think–my Latin memory might easily be off!–meaning, don’t posit plurality unless it’s necessary.  Occam’s razor–keep it simple!–has guided science for many centuries, and we should stick to it unless there’s a compelling reason not to.

  2. siehjin says:

    hi david (or mr. gelernter?)

    thanks for the thought-provoking essay! =)

    i’m not sure if this is cheating, but i’m wondering if machines might become conscious (or human) via another pathway. for example, i just read about this Project Avatar which aims to upload human beings into machines, thus achieving immortality (you can read more about it here: http://www.gizmag.com/avatar-project-2045/23454/?utm_source=Gizmag+Subscribers&utm_campaign=889155f3de-UA-2235360-4&utm_medium=email)

    if a human being could successfully be uploaded into a machine, i guess the machine would be conscious, right? so this would be a way for machines to become conscious without that consciousness having to somehow arise from the sum of its parts.

    • David Gelernter says:

      which is probably just as well…  Your comment deals with a bizarre, tragic misunderstanding that’s a sort of side-effect of the internet age.  “Uploading you” would mean reducing the entire current state of your body (including your brain) to a list of numbers.  (Computers after all only deal in numbers; only numbers can be uploaded.)  You could print out the list, hold it in your hands, scan it through.  “You” are still unique, there’s one you, not two; the list of numbers is obviously not you.  It’s just one form of description of you, like a CAT-scan or MRI image or a photo or a painting.  There are lots of ways to describe or represent you.  So now I upload these numbers to a computer.  So far the computer has acquired lots of new numbers.  And now I “activate” the numbers.  What happens is that I get a sophisticated simulation of you.  Again, there are many ways to simulate a human being; and there are many ways to simulate a rainshower.  But no one ever gets wet no matter how good the simulation; the computer never becomes conscious, no matter how good the simulation; and suppose that, two minutes after the uploading is complete, a meteorite knocks you over & you’re dead.  Does the existence of a simulation somewhere make you any less dead?  Nope; not even a tiny bit less dead.

  3. Sunny Day says:

    A time will come when some things are part human and part machine. There will be a spectrum of consciousness.

    • David Gelernter says:

      I’m part machine myself.  I have a tooth implant–a fake tooth that screws into an implanted anchor.  Happy to report that my consciousness is unaffected!  But there is certainly a spectrum of consciousness out there too.  An insect is unconscious, a frog is pretty close to unconscious; a rabbit is probably closer, & complex, intelligent animals–parrots, elephants, apes etc–are certainly much closer to human-like consciousness.  The spectrum is there & machine-man exists.  We’ve arrived!

  4. ntadepalli says:

    I have a model based understanding of brain.

    1. There are stimuli to the brain, either sensory inputs from the external world or memory inputs. They trigger a process in which conditioned neurons participate.

    2. The triggered process can be modulated by reactions of neuronal conditioning or influences from memory (past experience, knowledge, nurture).

    The running process is the software-like thing producing a sensation representing the external objects.

    Thus we know that: 1. the brain can make its own software; 2. the brain can refine the software to tackle the environment – a self-regulation; and 3. the brain can self-transform to satisfy itself by suitably altering its own conditioning.

    I believe a suitable substrate material can be found to perform all the above, namely making its own software, self-regulation and self-transformation.

    Then the substrate will be very near to any human brain.

    • David Gelernter says:

      When you write that “the brain can make its own software,” you’re speaking metaphorically.  There’s nothing wrong with that, so long as you don’t confuse a metaphor with a fact.  Software has a precise mathematical & a precise technological definition: it’s a computable (or recursively enumerable) function; it’s a series of instructions to be executed by a digital computer.  Calling brain functions “software” might be useful for many purposes, but doesn’t mean that they are software.

  5. twomeyw2 says:

    Of course (the software running on) a digital computer could be conscious.  A digital computer programmed correctly could, for example, simulate a neuron, or an entire brain made of neurons (this assumes that neurons are required for consciousness, which in and of itself I find absurd).  The transistors of the computer do not have to behave exactly like neurons in the brain to achieve consciousness, and arguments suggesting this seem a bit simplistic.

    There is no doubt that the architecture of a computer is different from that of the brain, but this doesn’t prove that a computer cannot be conscious.  Software running on a computer abstracts the hardware, and can be used to generate any possible reality that we can imagine – given enough knowledge (for the human programmers), and speed and memory for the computer.

    And strictly speaking, knowledge (specifically about the operation of the brain) is not necessarily a requirement.  An evolutionary software program could, for example, evolve a conscious brain over time by performing iterations of its ‘brain’ (just like human evolution) to end up with a conscious brain that may work in a completely different manner than the software engineers of the evolutionary code ever predicted.  Such software is already used for engineering purposes (such as the design of an RF antenna).

    • David Gelernter says:

      Of course software can simulate neurons, brains and nuclear explosions–which doesn’t mean that software can transmit a spike or any electrical signal, or can be conscious, or can blow up a city.  Is it true that software “can be used to generate any possible reality that we can imagine”?  Not exactly.  Can it generate rain during a drought?  Can it generate love, fear, oxygen?  Of course not.

  6. wegbert says:

    Prof. Gelernter, it’s clear that you don’t believe that “machines” (meaning digital computers) will ever become “human” (meaning both intelligent and conscious), but I’m still not quite clear on exactly why. I suspect the issue may be with one of the distinctions you make at the start and the end of this provocative essay.
    From the start you specify that “digital” computers will not attain consciousness, which left me wondering if you thought that some other model of mechanical computing might. If “digital” can be taken to mean “binary,” I wondered, did you think the less dualistic qubits of quantum computers might someday be capable of consciousness?
    Also, at the very end of this otherwise thoroughly materialist essay you introduce the idea that, lacking an obvious biological/materialist purpose, consciousness must confer some “spiritual” advantage to have resulted from natural selection. That made me wonder if the main reason you dismiss the possibility of machine consciousness is that you consider consciousness to be inextricable from spirituality and machines to be non-spiritual entities. Your earlier comment about canine consciousness would seem to preclude that stance (unless you consider dogs to be spiritual beings), but I would still hope you might expand on your views about the relationship between consciousness, spirituality and machines beyond what you included in your essay.

    • David Gelernter says:

      “digital” only means (in this context) a finite machine that computes with numbers expressed in a finite number of digits.  a “quantum computer” might be different — if it were being used to create some new model of computing, not merely as a new way to implement digital computers.  I have no reason to think that a quantum computer would be conscious; as with any other non-brain, the burden of proof is on those who say it can be.  I haven’t seen any persuasive arguments yet — but I’ll certainly admit that there’s more to consciousness than we’re able to understand right now.

      I’m not a materialist; I’m a practicing Jew.  Materialism is the only hypothesis that’s ever made successful science possible.  But science is not the be all & end all of human existence.

      If machines (they won’t be digital computers, but some other machines) attained consciousness, they’d be capable of becoming “spiritual beings” — though, as dogs demonstrate, consciousness by itself isn’t enough.  Human-level consciousness is a necessary, not a sufficient condition.

  7. twomeyw2 says:

    Can you prove that your existence is not, in fact, inside a digital simulation running on a powerful computer?  Of course not.  This fact alone proves that a simulated consciousness is a consciousness, unless you’d agree that you may not be conscious.

    But, to go further, we are not talking about a physical thing.  We are talking about consciousness.  My consciousness is not my physical brain any more than a computer’s consciousness is the computer it is running on.  My brain is necessary for my consciousness, just as a computer and software are necessary for artificial consciousness (as we are discussing), but Requires != Is.

    If my heart were to stop beating, I would collapse and die, yet my physical brain would still exist (for a while anyways).  My consciousness is a stream (or infinite loop) running inside my brain in combination with my memories, experiences, preferences, etc. stored within my brain.

    Likewise, you could destroy a computer’s consciousness (without destroying the computer) by removing its power source.  The consciousness exists on the computer and requires the computer, but is not the computer.

    Claiming that to be conscious, you must exist on a brain made of physical neurons with the same DNA structure of that of humans, is missing what it means to be conscious.

    • David Gelernter says:

      I’ve already proved that I’m not a simulation inside a digital computer, at least as we define simulation and digital computer.  A simple thought experiment makes it clear that, no matter how fast I execute computer instructions and no matter what those instructions are, no consciousness ever results — I can change numbers on a blackboard for the next million years and still be no farther along on the road to consciousness.

      Of course “requires” is not “is.”  I wrote that, as far as we know, an organic brain is a necessary condition for consciousness.  If you disagree, you must point to some instance of consciousness in the absence of an organic brain, or to some argument that proves such a thing to be possible.  Otherwise you’re merely piling up unsupported assertions that show what you believe, but can’t be expected to convince anyone else.

      • wegbert says:

        It seems your argument boils down to the assertion “as far as we know, an organic brain is a necessary condition for consciousness.” However, there was a time early in the computer age when most scientists would agree with the assertion “as far as we know, an organic brain is a necessary condition for intelligence,” but clearly computer science has progressed to the point that most scientists (including yourself) would disagree with it now.  You say that those who disagree with the first assertion can only disprove it by pointing to an example of consciousness in the absence of an organic brain, but who at the dawn of the computer age could then point to an example of intelligence outside an organic brain?  The absence of such examples obviously did not make the second assertion true, so why would it now provide support to the first?

        In the absence of extant examples, you do also allow for the possibility of an argument that proves the possibility of consciousness independent of an organic brain, so I would offer the argument that consciousness is an emergent property of intelligence, not a separate quality of mind.  Consciousness implies self-awareness, which is clearly linked to intelligence — dogs have some, apes have more, humans have lots.  (The theoretical example of a “conscious non-intelligence” at the start of your essay would not, I posit, be recognized as consciousness, as it is generally understood.)  So given the fact that we can all agree digital computers can (and inevitably will) attain greater and greater levels of intelligence, it follows that this “machine intelligence” will at some point give rise to “machine consciousness.”  Or at least it provides a decent argument as to why machine consciousness shouldn’t be written off as impossible.

  8. Nat Singleton says:

    While I am no Talmudic scholar, I think the answer to the question posed is no, otherwise it can never be asked. Now, ‘Will Machines ever become Human’, I suspect the answer is yes, but they won’t be digital machines; they will be us in every way, except that their elemental construct may be something other than our carbon-based life form. Nature has had billions of years to create life, and multicellular life, at least on our planet, is a recent addition. So give us humans a break and some time to be deities of creation. We will get there, for better or worse. Having said that, the first step in solving any problem is to ask the right question and to have the tools to be able to formulate that question. The human mind, or any mind for that matter, works nothing like a digital computer, but it may have some parallels with quantum computing (it remains to be seen if that is true or not). To have this discussion, we need to have a good idea of what mind is and how it ‘works.’ To that end I suggest one read the book ‘On Intelligence’ by Jeff Hawkins, the founder of Palm Computing.


    • David Gelernter says:

      If you look in masechet Shabbat you’ll find that the answer given is yes (although this is the one example in Shas where we never learn whether this “winning” argument belonged to Beis Hillel or Beis Shammai).  Why speculate about what the text says? Read it!

      If you say that machines will become human but those machines will be just like us, the argument is already over: will machines become human?  Yes, and we are those machines.  I prefer my understanding of the mind and intelligence to Jeff Hawkins’, but to each his own, and this argument is unlikely to be settled any time soon.

  9. twomeyw2 says:

    Assuming our universe doesn’t contain a mystical or religious element (i.e., there are fundamental laws of physics that fully describe it), there is no reason consciousness cannot be simulated on a powerful enough computer.

    Modern computers are nowhere close to being powerful enough to simulate a human brain, so the fact that such a simulation doesn’t exist doesn’t mean it can’t. All logic says it can (with a powerful enough computer), which is what this argument is about. I agree that existing computers, or a blackboard, are not sufficient to create consciousness. But just as my computer can simulate an electronic circuit (which follows the laws of physics), a more powerful computer could simulate the feeling of love (an electrochemical reaction) in a simulated brain.
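    The circuit-simulation point can be made concrete. Here is a minimal editorial sketch of what "simulating an electronic circuit" means in practice: a capacitor discharging through a resistor, stepped forward by nothing but arithmetic on numbers. All component values are illustrative, not drawn from any real design.

```python
# Minimal sketch: simulating a physical system (an RC circuit
# discharging through a resistor) using forward-Euler integration.
# Component values are illustrative only.

R = 1000.0   # resistance in ohms (assumed)
C = 1e-6     # capacitance in farads (assumed)
dt = 1e-5    # time step in seconds
v = 5.0      # initial capacitor voltage

voltages = []
for _ in range(100):
    # The physics: dv/dt = -v / (R*C)
    v += dt * (-v / (R * C))
    voltages.append(v)

# After 100 steps (1 ms, i.e. one time constant R*C),
# the voltage has decayed to roughly 5 * e^-1 volts.
print(round(voltages[-1], 2))  # → 1.83
```

    The simulation is just numbers all the way down; whether the same move works for a brain is exactly what the two commenters dispute.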

    In order to prove that your existence is not being simulated on a computer, you would have to prove that your existence incorporates some sort of spiritual element (such as a soul) that is beyond the laws of physics and that cannot be simulated. No human to date has been able to prove such a thing.

    If I claim that humans are merely biological robots that can be simulated on a powerful enough computer, I’m not speculating anything out of the ordinary. If the simulation requires detail down to the quantum level, I’ll add the requirement that the computer must have a true random number generator, but that is a minor detail.

    Even if we go along with the idea that consciousness has a special requirement beyond the laws of physics, such as a soul granted by a God, who is anyone to claim that such a God (who freely distributes souls to each new human) would not distribute a soul to a newly simulated human brain (or a digital brain in general)?

    I think you are making some unprovable assumptions here, and then laying the burden of disproof on the other side. If you believe that consciousness requires something special (beyond the laws of physics, which are simulatable), the burden would be on you to demonstrate that. Otherwise, logic dictates that just as we are able to simulate much more complicated physical phenomena now (with more powerful computers) than two decades ago, in the future we will be able to simulate even more complicated phenomena, such as consciousness, which is yet another physical phenomenon, unless someone proves otherwise.

    • David Gelernter says:

      There’s a fundamental unwillingness here to grasp the nature of computers and computer simulations.  There is no chance that either of us is a computer simulation, & that fact has nothing to do with the power of today’s computers.  A computer can’t do anything but compute numbers: that’s what computation is.  I can attach peripherals, added machinery, that allow the numbers to be interpreted as something else: if I have a graphics card, it can take arrays of numbers and turn them into images on a screen.  I can print the numbers, send them over a network, and in the near future will be able to turn them into some sort of 3D image.  But there’s no peripheral out there that creates consciousness, or human awareness.  If you really believe that you might be a computer simulation (I hope you don’t!), are you a 2D image on a screen?  Are you printed on paper?  How did the computer succeed in manufacturing your awareness — your experience of the world and of your own being?  Either what you’re saying simply makes no sense, or you’re assuming the existence of some new sort of magic machine which, when attached to a computer, turns numbers into human awareness (and a complete set of memories) — or creates a whole society of human awarenesses, or something even more exotic.  I have no more reason to think such a machine exists than I have to believe in the philosopher’s stone that turns lead into gold.  You’re giving computers far, far too much credit.

  10. Nat Singleton says:

    A couple of points, I think I was trying to make:

    1. Trying to create an analogue to human consciousness by going down the path of digital computing is a dead end, and I know of no non-organic example of consciousness – and neither does anyone else.

    2. We need to understand the physical architecture of the brain and, through experimentation, try to replicate it. Thus my reference to the book ‘On Intelligence.’ I’m not saying the book is correct, but it does provide a lot of food for thought. Also, I think Jeff Hawkins would argue that there is a good possibility that mind and intelligence are emergent properties of the architecture.

    3. I make no pretense of having even a superficial understanding of the Talmud, which is why I qualified my comment. My response was to the statement as it appeared on the page, devoid of all its context.

    Science isn’t the be-all and end-all of anything. It is just a tool that allows us to acquire ‘knowledge.’

    Dogs! I have a dog. Dogs are man’s Frankenstein creation. My dog lives in a dog world and more than likely thinks I’m the dumbest dog he has ever met.  As far as I know or don’t know, he may be quite the spiritual being in his world, and we may not meet the necessary conditions of that world to be ‘spiritual.’

    • David Gelernter says:

      In one way I agree: trying to make a conscious computer is a dead end.  But if the effort allows us to sharpen our understanding of consciousness, then even tho we lose we still win.  And trying to make an intelligent (albeit unconscious, or zombie) computer is not a dead end.

      We “need to” understand the brain, agreed; humans need to understand everything.  But I don’t see any need to “replicate it” (i.e., the brain).  Why bother, unless for clinical as opposed to scientific reasons?

      I have two brilliant, beautiful parrots.  But neither is spiritual in any sense we could recognize.  You might say yes, but suppose they’re spiritual in some way we can’t recognize?  Then we don’t mean spiritual literally, b/c the only definition of that word I know is the spirituality I recognize in all sorts of different kinds of human beings, and their work – & in nothing and nowhere else.  There is no other way to define or discuss “spirituality” (in the same sense in which human intelligence is the unique gold standard of intelligence).

  11. Nat Singleton says:

    Whoa, whoa, and whoa, full stop. I just read your curriculum vitae and re-read your essay. I have been sufficiently chastised.

    Your eminence, you sure got a lot out there to argue with, but I’m not going down that path. Discussing whether digital machines will ever become conscious is a waste of time. However, you did raise an issue of some import and immediacy: Intelligent Agents (Zombies).  They are here today, and they will insinuate themselves into every intellectual task, displacing most of us, save the most creative. Two questions come to mind. How will we live in a world with nothing to do? How can we give Zombies a sense of ethics and fair play? An example of ‘Zombies gone Wild’: the Flash Crash of 2010.

    • David Gelernter says:

      Zombies won’t ever replace us, b/c human beings care above all else about each other.  We want contact with other human beings, and won’t ever be satisfied with fakes.  Not that zombies won’t do many things that humans do now.  But in the end we’ll want and need people like us to be with & live with.  And keep in mind that creativity depends on analogy, which – in my view – depends ultimately on the subtly-nuanced spectrum of human emotions, where the body & mind work together.  Reproducing this function in software is doable, but it will be many generations, my guess is, till we figure out how.  In the meantime, lots will have changed.

      Giving zombies a sense of ethics will be hard, especially since so many potential zombie-builders have little sense of ethics themselves.  And that’s, in a sense, the point: teaching ethics to computers is interesting & important, but pales to triviality compared to the problem of teaching ethics to human beings — a project that’s always been hard, never more so than today; our education system is failing badly in this area.  If we could trust human beings to be ethical, we could trust them not to release unethical zombies into daily life.  But this is a problem we’ve never even come close to solving.  Will we ever?  Given mankind as it is, human beings as they are?  I doubt it.

  12. twomeyw2 says:

    When I view the world, my brain (a biological computer) interprets signals from my eyeballs (a camera-like peripheral assembly), among other sensors.  But how can I know whether the signals my brain is interpreting come from actual physical sensors (eyeballs, ears, nose, tongue, skin, etc.) or are just signals generated by a simulation?

    I’d guess the likelihood of my existing in a simulation is low, but the point is that it is impossible to prove that I’m not.  My stream of consciousness on a computer might be represented by a state machine, itself ultimately represented by 1s and 0s, just as my consciousness in my brain may be represented by the state of neurons, or however else my consciousness is physically implemented.
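    The state-machine point can be sketched in a few lines. The machine below is purely illustrative (a two-state toggle, not a model of anything mental); it shows only that a machine's entire history reduces to a sequence of bits:

```python
# Minimal sketch: a finite state machine whose states, inputs,
# and entire history are nothing but 1s and 0s.
# The machine itself is illustrative, not a model of a brain.

transitions = {          # toggle on input 1, hold on input 0
    (0, 0): 0, (0, 1): 1,
    (1, 0): 1, (1, 1): 0,
}

state = 0
trace = []
for bit in [1, 0, 1, 1]:          # an arbitrary input stream
    state = transitions[(state, bit)]
    trace.append(state)

print(trace)  # → [1, 1, 0, 1]: the machine's whole "history" as bits
```

    Whether such a bit-level description could ever amount to awareness, rather than merely encode behavior, is precisely the point Gelernter disputes in his reply.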

    You still haven’t provided any reason why consciousness could not exist on a computer, at least none that I have understood.  What about consciousness is so special that, unlike other physical phenomena, it cannot be broken down into a mechanical system?  Isn’t our brain just a biological computer, whose function could be replicated by a digital computer?

    If consciousness depends on a soul or something beyond the realm of physics, I’d agree that a computer MAY not be able to achieve consciousness (depending on whether a God was willing to grant machines souls), but there is no scientific proof to date of such a thing.

  13. dyaseen says:

    Your argument boils down to “it can’t happen because it’s never happened before.” I can imagine the same reasoning coming from a (proto-)human: that fire can only come from lightning, because that’s the only source he’s ever experienced. It would of course be obvious to him that the creation of fire requires great elemental force, perhaps directed by some spirit or divinity, that is entirely absent in the rubbing-together of sticks.

    Color me unconvinced.