What Is the Difference Between Knowledge and Understanding?


Everyone knows something. Some people know a lot. But no human being knows as much, apparently, as Watson, the IBM computer that defeated the greatest champions of the television quiz show Jeopardy at their own game.

Faced with the prompt “Aeolic, spoken in ancient times, was a dialect of this,” Watson effortlessly answered “Ancient Greek” (or rather, using Jeopardy’s answers-as-questions format, “What is Ancient Greek?”). Confronted with “Classic candy bar that’s a female Supreme Court justice,” Watson shot back “Baby Ruth Ginsburg.” Very impressive. But does Watson understand what it’s talking about? Answering that question will point the way to the distinction between mere knowledge and true understanding.

When given “This ‘insect’ of a gangster was a real-life hit man for Murder Incorporated in the 1930s & 40s,” Watson answered “James Cagney.” Surely it knows that James Cagney was an actor, not a “real-life” gangster? And asked a question in the category U.S. Cities, Watson notoriously replied “Toronto.” So it knows that Aeolic is a kind of ancient Greek, but not that Toronto is in Canada?

As commentators later explained, Watson does not figure out the answers to these questions in the way that humans do. Whereas we summon up a list of gangsters or U.S. cities and then ask ourselves whether they meet the other criteria explicitly imposed or implicitly suggested by the clue, Watson consults a sophisticated table of statistical associations between words, extracted from a massive stock of written material, from newspaper reports to encyclopedia articles. Aeolic, in the few places in which it appears, is linked closely to Ancient Greek. The same goes for Baby Ruth and candy bar, and of course Ruth Bader Ginsburg and Supreme Court justice; Watson is then clever enough to see the overlap. Unfortunately, an equally strong connection is found between James Cagney and various terms associated with organized crime. Watson has a great deal of information at its cybernetic fingertips about gangsterism, gangster movies, and the key figures in both, but it seems not to understand what it means to be a real-life gangster.
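
A deliberately crude sketch may make the association-table idea concrete. The toy corpus, the co-occurrence counting, and the scoring rule below are all invented for illustration; they bear no resemblance to IBM's actual DeepQA pipeline beyond the bare idea of answering by summed word associations.

```python
from collections import defaultdict
from itertools import combinations

# A five-sentence toy corpus standing in for Watson's "massive stock of
# written material". (Purely illustrative.)
corpus = [
    "aeolic was a dialect of ancient greek spoken in antiquity",
    "sappho wrote lyric poetry in the aeolic dialect of ancient greek",
    "baby ruth is a classic candy bar",
    "ruth bader ginsburg served as a supreme court justice",
    "james cagney played gangsters in classic hollywood crime films",
]

# Count how often each (unordered) pair of words appears in the same document.
cooccur = defaultdict(int)
for doc in corpus:
    for a, b in combinations(sorted(set(doc.split())), 2):
        cooccur[(a, b)] += 1

def association(word_a, word_b):
    """Co-occurrence count for an unordered word pair."""
    a, b = sorted((word_a, word_b))
    return cooccur[(a, b)]

def score(candidate_words, clue_words):
    """Sum the associations between a candidate answer and the clue's words."""
    return sum(association(c, w) for c in candidate_words for w in clue_words)

# "Aeolic, spoken in ancient times, was a dialect of this."
clue = ["aeolic", "dialect", "spoken"]
candidates = {"greek": ["ancient", "greek"], "cagney": ["james", "cagney"]}
best = max(candidates, key=lambda k: score(candidates[k], clue))
```

On this tiny corpus the word overlap alone picks out the right answer, with no representation anywhere of what a dialect or a language is; that is the sense in which such a system can answer without understanding.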

And how could it? How could a table of statistical associations comprehend the difference between fact and fiction, between murder and make-believe?

Watson knows a lot of things—assuming that knowledge is a matter of fast, reliable retrieval of facts. But it has little or no understanding of the things that it knows. It knows that Aeolic is a dialect of Ancient Greek, but it does not know what it is to be a dialect, or even—in spite of the fact that it is connected to the outside world only by words—what it is to be a language. It can talk (at least in the context of a quiz show) as informatively and as accurately as some of the most knowledgeable people on the planet, but its grasp on the facts it conveys is even less certain than that of a cocktail party habitué who has read all the latest reviews but has never glanced at a page of the books themselves.

What is Watson missing? I have given it a name—understanding. But what is that? There are two kinds: understanding language and understanding the world. Consider this sentence: Δέδυκε μεν ἀ σελάννα. You won’t understand it unless you read Aeolic Greek. But if I tell you it means “The moon has set”, you grasp immediately what the sentence is about: you know what it is for the moon to disappear below the horizon. You do not understand the sentence (unless you speak Aeolic), but only because you do not understand the language, not because you fail to understand its subject matter.

Watson’s problem is that it does not understand the world. (Some philosophers would argue that it does not genuinely understand language either, but I put that question aside.) It gives answers, but it has no grasp of what makes its answers correct.

You do not have to be a computer to find yourself knowing without understanding. There are some facts that real flesh and blood people know only in a Watson-like way. Perhaps you know that Bach wrote fugues, but you don’t understand what a fugue is—the more so, perhaps, if you are tone deaf. Another case: when I was a boy, intoxicated by science, I knew—in a trivia-contest-winning sort of way—that the hydrogen and oxygen atoms in water were held together by covalent bonds, but covalent was not much more to me than a glamorously technical word.

A little later I learned that a single electron could be in a superposition of two different places at once. All physicists know this, yet arguably, no one yet really understands what it means. We have the sentences, or mathematical formulae, to represent superposition, but we don’t know what, deep down, these sentences are talking about. And wouldn’t we love to? Knowledge is good, but isn’t understanding much better still?

I am, however, supposed to be analyzing, not acclaiming, understanding. What is it that Watson does not grasp about the movies, that the younger me did not grasp about covalent bonds, that no one perhaps grasps about quantum superposition?

One way to answer this question is to ask how we might distinguish facts that are truly understood from facts that are merely known Watson-style. It seems easy to make such distinctions from the inside, about our own knowledge. My bad conscience as a young boy told me that I was only feigning chemical expertise; the non-musical know-it-all is well aware that they don’t really understand what a fugue is.

We all know, by contrast, that we have a grip on the setting of the moon. Close your eyes and imagine: the familiar orb drifts steadily downward, is sucked into the horizon, is gone. Or think: as the Earth turns on its axis we stationary observers on its surface speed toward, then away from the moon; eventually our planet comes to occlude it entirely. Or feel: the setting of the moon measures the passing of time, the distance traveled by a departing lover, the passing of life.

Watson misses all of that, you might suppose. But so what? Watson has many peculiarities: it is blind, has no lovers, and is theoretically immortal. Surely none of this, however, stands in the way of understanding. We may relate to the moon through our senses and emotions, but might not other beings take a different but no less profound approach?

A different test for understanding investigates abilities rather than internal imagery, feelings, and thoughts. It is easy to discover that something important is missing in the youthful Michael’s grasp of covalent bonds. Had you asked me to define “covalent,” I would have faltered. Or better, ask the younger me to explain how to solve problems in quantum chemistry, or ask someone who has just read Bach’s Wikipedia entry but who has no interest in music to tell a fugue from a passacaglia. The gap in understanding emerges soon enough.

Watson will not crack so easily. Imagine a more versatile version of Watson, proficient in answering questions generally, not just on Jeopardy—exactly the kind of expert system that IBM is using its Watson technology to build. Such a system would have no trouble defining covalent, fugue, or any other term that you throw at it. Presumably, it might learn to solve problem sets in a science class or to classify works of music, using the same statistical techniques that work so well in Jeopardy to distinguish the right moves from the wrong moves.

Why does the machine seem all the same not to achieve understanding? One answer is that its expertise is parasitic: it learns the right moves by examining the moves already made in the vast body of text that its programmers supply. Arguably, though, most of us require a similar degree of assistance—most of what we know we learn from others, rather than by figuring it all out for ourselves. A deeper answer is that there is something about Watson’s statistical ways of knowing that is incompatible with understanding.

Watson and you both answer questions by seeing connections between things. But they are different kinds of connections. Watson picks up from things it reads that there is a correlation between a sphere’s rotating and a fixed point on its surface having a constantly changing view of the rest of the world. You grasp why this correlation exists, seeing the connection between the opacity of the Earth, light’s traveling in straight lines, and the geometry of the sphere itself. For you the statistics are a byproduct of what really matters, the physical and causal relations between things and people and what they do and say. Grasping those relations is what understanding consists in. Watson lives in a world where there are no such relations: all it sees are statistics. It can predict a lot and so it can know a lot, but what it never grasps is why its predictions come true.
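
The contrast can be made concrete with a toy simulation of my own devising (nothing here is from the essay; the model and its parameters are invented for the example). A causal model of an observer on a rotating sphere generates a table of moon visibilities; a purely statistical learner could memorize that table and predict perfectly, yet the table contains no trace of the rotation that explains the pattern.

```python
import math

# Causal model (the "why"): an observer on a rotating sphere sees the moon
# only while facing it. Visibility follows from geometry, with the moon
# idealized as fixed in the +x direction.
def moon_visible(hour, period=24.0):
    angle = 2 * math.pi * hour / period   # observer's facing direction
    return math.cos(angle) > 0            # facing the moon's half of the sky

# The "Watson view": just the resulting hour -> visible table, a perfectly
# reliable correlation with no geometry anywhere in it.
table = {h: moon_visible(h) for h in range(24)}

# The memorized table predicts exactly as well as the causal model does...
assert all(table[h] == moon_visible(h) for h in range(24))
# ...but nothing in it says *why* the moon sets: the rotation, the opacity
# of the sphere, the straight-line travel of light are all absent.
```

The design point is just the one in the paragraph above: prediction can be carried entirely by the statistical summary, while the explanation lives only in the generating model.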

Discussion Questions

1. Could a machine ever understand things in the way that we do?

2. Is understanding a matter of having knowledge of certain special facts, such as causal facts? Or is it a matter of having a special kind of knowledge of facts: transparent, deep, luminous, or something like that?

3. Many scientists believe that, at bottom, our thought is implemented in neural networks that make statistical associations. Does that mean that we are no better than Watson? That our sense of understanding is an illusion?

Discussion Summary

 

The point of differentiating understanding and knowledge is to get a better grip on understanding. In my essay, I used the example of Watson, IBM’s Jeopardy-playing machine, as a case study of a system that displays plenty of knowledge but that, because it computes its answers by finding statistical associations between words, apparently has little or no understanding of the facts that it emits to win the game.

So what is it to understand a fact? Several possible components of understanding were mentioned in my essay and pushed by various commenters:

1.   Some sort of grasp or knowledge of the underlying structures that give rise to the fact to be understood, such as causal structures.

2.   Some sort of direct experience of the subject matter.

3.   Some sort of direct experience of the underlying structures.

A few commenters wondered whether we have direct experience of any of the world’s underlying structure. On these skeptical views, it is either impossible to grasp causal structure, or causal structure is an imposition of the mind that does not reflect the structure that’s really out there, the objective structure that you would need to be acquainted with to have true understanding. Does that mean that we are no better positioned than Watson to understand what’s going on in the world? Not necessarily: perhaps it is not essential, after all, to have deep knowledge of structure in order to have understanding.

Other commenters focused on direct experience itself. How closely is it connected to consciousness? Must you be conscious of the world to have understanding? A theme that came up several times was whether other people were the natural locus of understanding. We never got to the bottom of this, but one reason you might suppose that we are better placed to understand people than things is that we have direct experience of minds—namely, our own minds. Causality might be foreign to us, but thought is surely not. We can understand other people because we can, to some extent, know what it is like to be them.

This naturally leads to the question whether understanding other people has anything in common with understanding physical processes. Is psychology, like physics, just a matter of knowing the causes of things? An important related question is currently rather topical: is the kind of understanding you get from a university education in the humanities (literature, English, philosophy) qualitatively different from the understanding you get from the sciences?

Another line of thought in the comments pursued the topic of machine understanding, picking up on my claim that Watson understands nothing. Can a machine have knowledge of causal structure? Can it have direct experience of anything? (If experience requires consciousness, we need to know whether a machine can be conscious. That is a different big question; we put it aside.) I myself am somewhat optimistic that a machine might one day be built that is not only as intelligent as us, but that has genuine understanding. A sticking point is the notion of direct knowledge or grasping; as one commenter noted, though we may know it when we see it, we don’t have much in the way of a philosophical or psychological theory of the notion.

Near the end of the comments I raised the topic of moral understanding. How useful is a theory of causal understanding, or more generally, a theory of our understanding of the kinds of facts that turn up in Jeopardy, to thinking about moral understanding? I suggested an analogy between the two, with general moral principles standing in for causal laws. Could a computer one day have moral understanding? Could we automate the law courts?

A theme we didn’t get to—we had only a week, after all—was aesthetic understanding. That can mean, on the one hand, understanding literature and art, but also, on the other hand, using literature and art to understand both the world and our part in it. The possibility of a literary understanding of life suggests that grasping causal principles is not the only route to understanding; this takes us back to the question of understanding people as opposed to things, and so to the humanities versus the sciences.

Understanding is one of the biggest and oldest topics in philosophy, but it has not been much discussed in the last hundred years or so, at least in the English-speaking world. Thanks to the Templeton Foundation for helping to bring it back!

New Big Questions:

  1. Can a machine make moral decisions? Can it have moral understanding?
  2. Do we understand the behavior of people in the same way that we understand the behavior of things? Does understanding in the humanities work the same way as understanding in the sciences?
  3. How does literature help us to understand life?

49 Responses

  1. Lui Di Martino says:

    Hi Michael,

    You bring up a question that has occupied my own mind, on and off, for a fair few years! I wonder if understanding is simply ‘standing under’, an experience of the knowledge in an intimate way. I don’t believe it is possible to create a machine that can experience this.

    The machine that beat the best chess player in the world at the time didn’t exactly understand what it had done, and had no urge to go out and celebrate, as it were!

    Knowledge perhaps is more a state of motion (journey), whereas understanding is a state of being (destination). I can sell the field, as it were, once I find the pearl. Once I realize love is what there is to understand, I can let go of the memories and knowledge that got me to that realization if I want, and simply take on the state, standing under the ‘light’ of that state. There’s not going to be a machine that can achieve this, imo.

    And perhaps knowledge is ’cause and effect’, and understanding is free will?

    Thank you

    Lui

  2. trehub says:

    1. Could a machine ever understand things in the way that we do?

    Only if the machine were conscious, as we are. For more about this, see “Space, self, and the theater of consciousness” and “Where Am I? Redux”, here: https://www.researchgate.net/profile/Arnold_Trehub

    3. Many scientists believe that, at bottom, our thought is implemented in neural networks that make statistical associations. Does that mean that we are no better than Watson? That our sense of understanding is an illusion?

    Scientists who believe our thoughts are simply statistical associations are simply wrong. The weight of empirical evidence argues against a statistical neuronal associationist explanation for our conscious experience.

    • Michael Strevens says:

      Both Lui Di Martino and trehub make an interesting suggestion: it’s essential for understanding to have a certain kind of direct or immediate or transparent connection to the thing you are trying to understand. (Lui Di Martino says “an experience of the knowledge in an intimate way”; trehub says “consciousness”.) This is, I think, an important idea. Some things we only know about, like people we have heard of but never met (the fugue or the covalent bond in my essay above), but some things we feel that we know directly, like old friends—we know them for what they really are. That kind of deep knowledge that comes from immediate acquaintance seems enormously important for understanding.

       Many philosophers have thought that a direct knowledge of reality is impossible; there is simply too much coming between our minds and the world. Those philosophers—such as David Hume and Immanuel Kant—think that we can never understand the world as it really is. Hume thinks that the sense of familiarity we feel about certain aspects of the world is nothing over and above a sense of the ease with which our thoughts transition from one thing to another, when we think about a topic with which we have a lot of experience. Kant thinks that the aspects of reality with which we feel a direct acquaintance are familiar because they have been put there by our own minds; they are not, then, part of the real world. If either Hume or Kant is right, we ourselves are closer to being “Watsons” than we realize.

      • trehub says:

        Michael: “Kant thinks that the aspects of reality with which we feel a direct acquaintance are familiar because they have been put there by our own minds; they are not, then, part of the real world.”

        This is a bit tricky because what we take as the real world is a transparent representation of the world we live in that is constructed by our brain. So the “world” (in our brain) with which we have direct acquaintance is, in fact, a part of the real world, but it is not a direct and completely veridical representation of the real world. It is our phenomenal world and it is the best that evolution has given to us so far.

         


        • Michael Strevens says:

          trehub said: “the ‘world’ (in our brain) with which we have direct acquaintance is, in fact, a part of the real world, but it is not a direct and completely veridical representation of the real world”.

          Very nicely put. The big question for us twenty-first century philosophers, in the wake of Kant and cognitive science, is: Just how veridical is that cognitive construction? Is our brain reconstructing causal structure that is really out there? Or is causality a mental imposition on the world, our way of “tagging” certain statistical connections as especially important? (I choose causality as an example because so many philosophers, including me, have thought that scientific understanding is in part a matter of grasping how things are caused.) So the question of understanding is directly connected to some of the deepest questions in philosophy.

          • trehub says:

            Michael: “(I choose causality as an example because so many philosophers, including me, have thought that scientific understanding is in part a matter of grasping how things are caused.)”

            Causality is another tricky concept. We can never know its essential nature, or even if what we call causality is a real event in the universe. But we do know that the concept of causality together with the concept of mechanism enables humans to invent artifacts and scientific theories that have dramatically changed the world we live in.

            To my mind, the most important question for philosophy and science is how does the human brain construct the phenomenal world within it that is the object of all of our questioning. In this connection, see “Evolution’s Gift: Subjectivity and the Phenomenal World”, here: https://www.researchgate.net/profile/Arnold_Trehub

            In the article cited above, I describe my SMTT experiment in which a vivid hallucination is induced and systematically varied in normal subjects. This experimental finding was predicted (on causal grounds) by the neuronal structure and dynamics of a theoretical mechanism that I call the brain’s retinoid system. I argue that the SMTT experiment demonstrates that consciousness can be understood in terms of a complementary relationship between the pattern of activity in the brain’s retinoid space and one’s phenomenal experience. If so, the SMTT experiment is as important for our understanding of consciousness as the double-slit experiment is for our understanding of light. What are your thoughts about this proposal?

             

  3. trehub says:

    “Could a machine ever understand things in the way that we do?”

    This could happen only if the machine were conscious in the way that we are. For this to be the case, the machine would need to experience the world in which it exists from a fixed locus of perspectival origin. For more about this, see “Space, self, and the theater of consciousness” and “Where Am I? Redux”, here: https://www.researchgate.net/profile/Arnold_Trehub

    “Many scientists believe that, at bottom, our thought is implemented in neural networks that make statistical associations. Does that mean that we are no better than Watson? That our sense of understanding is an illusion?”

    Our sense of understanding is real. There is abundant empirical evidence that argues against the notion that human thought is simply a matter of statistical association.

  4. larryniven says:

    You’re really talking about at least two separate topics: there’s Watson’s goofy “Toronto” answer and then there’s the difference between knowledge and understanding. It doesn’t do any good to conflate those two or mix them up. More details at http://rustbeltphilosophy.blogspot.com/2014/05/watson-once-again.html

    • Michael Strevens says:

      To larryniven: I use Watson’s goofy answer as a clue to what is going on in its programming — nothing more. Once you see roughly how the programming works, you grasp the sense in which Watson lacks understanding. No conflation; just one thing leading to another.

      • larryniven says:

        That’s precisely what I’m saying, though: it’s not a clue as to what’s going on. They’re unrelated issues. People – actual people; human Jeopardy contestants – give equally goofy answers to subjects that they understand perfectly well. There is no connection to be made.

        • Michael Strevens says:

          larryniven: OK; I think I see what you’re getting at. If all you knew about Watson was that it made this one mistake, you wouldn’t be able to guess how it goes about answering questions in general. The reason I write about Watson is not to point to any particular mistake it made, but to ask whether something that answers questions using Watson’s statistical methods understands the questions’ subject matter.

          • larryniven says:

            Sure, fine – as I say in my post, I think that’s a perfectly valid question and I think that your answer to it is correct. But your focus on statistical methods is still mistaken and still has nothing to do with the difference between knowledge and understanding.

            So, to explain this very slowly, let’s start at the beginning. You say that we, unlike Watson, “summon up a list of gangsters or U.S. cities and then ask ourselves whether they meet the other criteria explicitly imposed or implicitly suggested by the clue.” Okay – but then how do we know which list to summon up? Surely that knowledge doesn’t happen magically; we don’t simply pluck the right category out of thin air. To the contrary, the way that we identify the proper category is by parsing the sentence – which, of course, just means that we apply a statistical method of interpretation to the words. Human language-parsing mechanisms are probably different from the algorithms that Watson used, but they’re both essentially statistical.

            So, okay, now we have the proper category. How do we then identify the proper member of that category? Well, again, we take the words that we were given (“the other criteria explicitly or implicitly suggested by the clue”) and then correlate those with the various properties of the members of the list. It is, in other words, just more statistics. Again, we’re working with a different set of properties than Watson evidently was, but that doesn’t mean that our work is anything other than statistical.

            It isn’t, then, that Watson’s methods are to blame for its lack of understanding. Better language processing software might’ve helped it to avoid giving such stupid answers – but, again, plenty of real-life humans screw up just as badly, and we know that their language processing skills are as good as you could realistically hope for. What it really needs for understanding is a more robust, more diverse set of data from which to draw correlations – i.e., the sort of data set from which we draw our correlations. Towards the end of your article you seem to say that the crucial ingredient is our direct access to the original source of the correlations in question, but I know of no reason to believe that and I certainly don’t see one in your article. As far as we know now, understanding is only a matter of having a heterogeneous enough set of data and a reliable enough correlation map of that data. The “kinds of connections,” in other words, are not different. It’s what is being connected that’s different, and that has literally nothing to do with “Watson’s statistical methods” or any such thing.

          • Michael Strevens says:

            larryniven said: “As far as we know now, understanding is only a matter of having a heterogeneous enough set of data and a reliable enough correlation map of that data.”

            I was hoping that someone would push this point of view; thank you! In the spirit of appreciating both sides of the question, let me provide some additional argument for the view (so in this comment I will be arguing against the view in my own essay, just to see where it goes).

            Take language comprehension, a topic brought up by larryniven. Many scientists now suspect that statistical analysis, bearing some resemblance to the methods that Watson uses, plays an important role in understanding language. Once upon a time, this was a very unpopular idea. It seemed obvious that we understood language by, as it were, reversing the process of the grammatical construction of sentences. Comprehension seemed rule-based and non-statistical. The science is still a work in progress, but perhaps the statistical theorists are right. Intuitively we think that we understand language one way, by straightforwardly decoding grammatical structure, but in fact we do it a different way. Maybe we shouldn’t be surprised that we’re wrong: we know what it feels like to understand language, but that doesn’t mean that we know how the process actually works, any more than our familiarity with the color blue gives us knowledge of the way that color works, either in the world or in the brain.

            The same might be true for philosophers writing about understanding. When understanding something, I think that I am not just following statistical connections. Even in a simple case such as understanding the moon’s setting, I think that I am grasping the geometrical structure of space and the causal history of light beams and putting them together to gain some deep knowledge of the underlying structure of the situation that goes far beyond statistical correlation. But perhaps these feelings of “depth” and “causality” and even “geometry” are illusions or side effects generated by the fluency of the transitions that take me across the network of correlations.

            What do you all think?

          • larryniven says:

            What do you mean by “grasping”? What does “grasping” involve and how is it different than “understanding”? Because if it’s not different than understanding, then it isn’t very helpful to say “I understand the moon’s setting by understanding this part of the moon’s setting and that part of the moon’s setting.” That still doesn’t get us any closer to figuring out what understanding is; all it does is give understanding another name.

            What do you mean by “putting together”? That sounds like a statistical correlation to me – statistical correlation is, after all, a way of putting-together – but you say it like it’s something else. But what else is it?

            How does “deep” knowledge differ from, I guess, shallow knowledge? I’ve said already that understanding might require a heterogeneous data set – is diversity of data all that you require for “depth”? Or do you mean something else? (Or, in all of these cases, are you not really sure what you mean, such that you’re just using these words for effect?)

            And are you sure that you want to limit understanding to cases where people know about “the underlying structure of the situation”? Because that is going to drastically limit the number of instances in which understanding occurs. People have been watching the moon set for, y’know, millions of years, but they haven’t grasped (or understood, or known about, etc.) the underlying physics of the situation until very, very recently. To require people to know about “the geometrical structure of space” and “the causal history of light beams” and whatnot is to exclude the vast majority of humans from ever having understood practically anything. Which, I dunno – maybe you’re okay with that. But I think that that’s a bit absurd.

          • Michael Strevens says:

            larryniven says: “People have been watching the moon set for… millions of years, but they haven’t grasped… the underlying physics of the situation until very, very recently. To require people to know about “the geometrical structure of space” and “the causal history of light beams” and whatnot is to exclude the vast majority of humans from ever having understood practically anything… I think that that’s a bit absurd.”

            People have known that light travels in straight lines for a long time, probably since they first noticed their shadows. So I think they understand quite a bit about moonset, sunrise, and so on. But anyone who doesn’t know that the earth is rotating on its axis doesn’t fully understand these things, familiar and predictable though they may be. That’s not absurd; that’s the nub of the notion that science greatly enhances our understanding even of everyday phenomena.

          • larryniven says:

            Aha – “enhances,” yes, but only enhances. In other words, we can have (some) understanding (or other) of a phenomenon without possessing advanced scientific knowledge pertaining to that phenomenon, but we have more (or perhaps a better) understanding when we have more (and better) scientific knowledge. That sounds entirely reasonable.

            My point, then, seems to stand: you don’t really need to have much in the way of “knowledge of the underlying structure of the situation” in order to have understanding. And that, remember, is the original question: what differentiates knowledge from understanding? True, having more or better knowledge can make the difference between achieving a robust understanding and achieving a merely basic understanding. But that’s not what we’ve been talking about this whole time.

          • Michael Strevens says:

            larryniven, it sounds as though we are thinking about the enhancing effect of scientific knowledge in different ways. I am thinking that science enhances understanding by providing more of whatever it is that gives you even a little bit of understanding in the first place. If that’s correct, we can generalize from the way that science gives understanding—knowledge of causal structure and all that—to the way that we get understanding generally, concluding that rudimentary understanding comes from rudimentary knowledge of causal structure.

            That said, I don’t think that scientific understanding can be the model for every kind of understanding. Mathematical understanding doesn’t work by grasping causal relations. Nor, I think, does moral understanding. Those are topics I didn’t raise at all in my essay. I wonder if anyone on this thread has thoughts about these things?

          • larryniven says:

            I agree that scientific knowledge “provid[es] more of whatever it is that gives you even a little bit of understanding in the first place,” but I still disagree about what that thing is. Science obviously does provide knowledge about causal structures, yet knowledge about causal structures is a type of correlative knowledge. Indeed, it’s a very good type of correlative knowledge, probably even the best that we could possibly achieve. (I mean, say what you like about the Humean causation-is-just-correlation thing, but you’ll certainly have to admit that causation at least entails correlation even if it doesn’t reduce to correlation.)

            So this generalizing-from-science argument – the one that you’ve just given – is not capable of promoting your position over mine. Each of our theories is equally compatible with the idea that we can trace the enhancing effects of science to the roots of understanding. You think that those roots pertain to causal structures as such, which is certainly something that science addresses; I think that those roots simply pertain to correlation, which is also something that science addresses.
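The disagreement here, between correlation as such and a "special kind" of statistical connection, can be made concrete with a toy simulation (entirely illustrative; the variable names and probabilities are invented). Two hypothetical worlds, one in which X causes Y and one in which a hidden Z causes both, look nearly the same in observational statistics but come apart under intervention:

```python
import random

random.seed(0)

def observe_chain(n):
    """World A: X causes Y (Y copies X, with 10% noise)."""
    pairs = []
    for _ in range(n):
        x = random.random() < 0.5
        y = x if random.random() < 0.9 else not x
        pairs.append((x, y))
    return pairs

def observe_common_cause(n):
    """World B: a hidden Z causes both X and Y; X never influences Y."""
    pairs = []
    for _ in range(n):
        z = random.random() < 0.5
        x = z if random.random() < 0.9 else not z
        y = z if random.random() < 0.9 else not z
        pairs.append((x, y))
    return pairs

def intervene_chain(n):
    """World A with X forced True by intervention: Y still tracks X."""
    return [(True, True if random.random() < 0.9 else False) for _ in range(n)]

def intervene_common_cause(n):
    """World B with X forced True: the Z-to-X link is cut, so Y ignores X."""
    pairs = []
    for _ in range(n):
        z = random.random() < 0.5
        y = z if random.random() < 0.9 else not z
        pairs.append((True, y))
    return pairs

def agreement(pairs):
    """Fraction of samples where X and Y agree (a crude correlation measure)."""
    return sum(x == y for x, y in pairs) / len(pairs)

# Observationally the two worlds are hard to tell apart: agreement is about
# 0.90 in world A and about 0.82 in world B. Under intervention they split:
# world A stays near 0.90, while world B collapses to chance (about 0.5).
```

On a purely correlative view, the observational numbers are all there is to know; on an interventionist reading of causation, the way the two worlds diverge under intervention is exactly the extra, "special" statistical structure at issue.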

            At any rate, if you don’t think that moral understanding can be modeled on scientific understanding, then how do you think that moral understanding works? That’s very puzzling to me.

          • Lui Di Martino says:

            If the universe and everything in it can be summed up as information, and humans are the result of that information, then I suppose we could find the correct methods to bring about understanding in machines that we originally program. Still not completely sure, though. The universe has brought the machine out through us, but is the machine an ongoing part of the inward process within Nature? How will its connection to nature be seen? We can trace our evolution and our connection to nature all the way back. But has nature brought about this machine, in the sense that it is the next step in the evolution of a human, or of any species? Is it a naturally occurring species? Obviously not, in that respect. The machine can only trace its evolution to human creators.

          • Michael Strevens says:

            larryniven: I take very seriously the view that causal connections may be nothing more than a certain very particular kind of statistical connection. If that’s true, then what’s essential to understanding is seeking out and grasping this special kind of statistical connection; that’s something Watson doesn’t do, but a more sophisticated machine might.

            Why would we regard one kind of statistical connection as more special than the rest? You can find a paper of mine on the topic here: http://www.strevens.org/research/cogsci/aitiopraxis. (It’s an academic paper, but it doesn’t presuppose too much background.)

            I’ll say something about moral understanding later this weekend.

            By the way, there is a thought-provoking post on science versus the humanities from shaun2000 in the thread above; I didn’t see it earlier and I wonder if others may have missed it too.

  5. Meyer1953 says:

    Understanding is not ever a matter of a person having knowledge of certain facts, but rather understanding is having a personal presence of others about – especially, widely about – that person. To the extent that the personal presence of the others pertains to issues at hand, then those issues are indeed “understood,” since the life that the diverse parties involved lead does well by the understanding.

    To be alive is, first, a matter of others about one. This is exactly why we are carefully and attentively raised for well over a full decade, to get ahold of what “one another” we all are. The others about one, are represented into that one’s very being in life, and they thus are presences. So these presences then have say, and carry meaning, and indeed carry on. Think for a moment – do you still hear your parents calling on a soft summer evening to drop your play and come to dinner? That’s a presence of your parents, just for one representative example.

    So what we understand, we understand by way of – by way of – one another. The thing is never ever the issue. Even when the thing is the issue, such as for an aircraft flight crew, the only reason that the thing works at all is, that the flight crew carries along their full whole others every moment of the day. Why else do we, as the public on the ground, get so concerned about other people’s tragedies? Because, if it is not one another that we are living forth, we are not at all capable of actually living forth.

    Al Haynes, the pilot who with great courage and skill commanded the flight that crashed at Sioux City twenty-five years ago, lost less than half of the passengers in the crash. The plane had lost every shred of its controllability, except that the engines could be manipulated to lurch it here and there. So despite the utter impossibility of anyone surviving, Haynes commanded the plane to a landing that more than half of the people survived. But when he was well enough to discuss it, he asked a fellow pilot if everyone made it. No, came the answer; they did not all. Haynes said, “Then I killed people.” He focused not on his heroics nor on his extraordinary compassionate skill. He focused straight on the lives of those others. This is how excellence comes about.

    Knowledge is all about the LIFE we carry, each, of each the other. Facts do certainly pertain to such life, and when they do we say “My, my, you surely have some real knowledge.” But it is the life that we are, one to and with and in one another, that we living are.

    Nothing else even does nor ever could come close. Life alone, we each and together, is.

    • Michael Strevens says:

      Meyer1953’s post brings up, among many different things, the issue of what it is for one person to understand another. Is understanding a person a fundamentally different thing from understanding a mechanical process, such as the moon setting or the way a vacuum cleaner works? There is at least one important thing that the two kinds of understanding have in common: they require some sort of direct, transparent, immediate knowledge. (See my “Understanding and intimacy” comment above, and the comments I was reacting to, as well as Meyer1953’s own “soft summer evening”.)

      Many philosophers have thought, however, that there are also deep differences between understanding humans and understanding things. Grasping how a person’s behavior is caused is not enough to understand them; you also need to grasp the reasons that they act the way they do. How big a difference is that? Not so big if, as some other philosophers think, reasons are nothing more than a special kind of psychological cause.

      One important thing that hinges on this is whether history and the social sciences (or at least the parts of the social sciences that are concerned with explaining individuals’ behavior) are aiming for the same sort of understanding as the physical and life sciences. Should the humanities try to be more like physics? Or would they, in doing so, annihilate themselves?

  7. siti says:

    It seems clear to me that knowing and understanding are not just related but interdependent. And it also seems clear to me that whilst you can indeed have “knowing” in the absence of “understanding”, there is no way to have “understanding” without some underpinning “knowledge”. I can’t see any reasonable objection to the idea that “understanding” need not be any more than the ability to relate facts (the atoms of knowledge) in a particular (admittedly complex) way. Perhaps we could draw an analogy from chemistry – as (material) atoms are to simple molecules, individual facts are to knowledge and as molecules are to self-replicating complex molecules like DNA, knowledge is to understanding. On that analogous scale, I would put the most advanced computers somewhere around the protein mark – relatively complex ability to relate facts, but still a long way short of genuine “understanding” – but nevertheless heading in that evolutionary direction.

    The really interesting question for me in that scenario is where does “experience” fit in? If facts are atomic in the sense that they are the fundamental bits of which knowledge is composed; and every existing entity carries some fact(s) with it, could we then define “experience” as simply “the subjective knowledge of facts”? In that case, any and every least particle of reality “experiences” the world at some level and “conscious experience” might then be no more than an holistically complex organismic expression of a fundamental aspect of all reality??? And if that is the case, I don’t see any reason why machines might not eventually attain to consciousness.

    But then again, of course, even the most complex levels of consciousness are capable of mis-understanding too.

    • trehub says:

      siti: “If facts are atomic in the sense that they are the fundamental bits of which knowledge is composed; and every existing entity carries some fact(s) with it, could we then define “experience” as simply ‘the subjective knowledge of facts’?”

      The problem is that both “knowledge” and “facts” don’t exist in the absence of subjectivity. So the proposed definition of “experience” begs the question “What is subjectivity?”


      • siti says:

        trehub: “The problem is that both “knowledge” and “facts” don’t exist in the absence of subjectivity.” Right – and since the world seems (as far as we know) to continue to work even in the absence of anything we would recognize as having the capacity for conscious subjectivity, perhaps subjectivity too runs much deeper than we usually think. E.g. does an electron have a subjective experience of the fact that it is in close proximity to a trio of quarks we usually refer to as a proton? If yes, then subjectivity and experience (albeit in very rudimentary form) might exist at the deepest levels of physical reality; if no, then how does the electron “know” what it is supposed to do when it finds itself in that situation? Why does it “behave” differently depending on its environment if it does not, in some sense, experience its environment subjectively? IOW (in less anthropomorphic terms): how does an electron do what physical law dictates it should do under certain external conditions without some means of subjectively processing the facts about its external world? And what, if any, is the difference between “subjectively processing the facts of the external world” and what we call “experience”?

    • Michael Strevens says:

      siti suggests that understanding is having a certain kind of knowledge. I think that’s right: for understanding physical things, for example, you need knowledge of causal structure.

      siti also asks “Where does experience fit in?” (following up on earlier discussion). I’ve suggested that experience may give us a familiarity with the objects of our knowledge — familiarity with causal structure, say — that goes beyond merely “knowing about” something, as the tone deaf person merely knows about fugues, without understanding what it is to be a fugue.

      But perhaps that sense of immediacy is nothing more than a feeling of fluency. (The work of philosopher J D Trout makes this suggestion; you can see us debating these things on Philosophy TV at http://www.philostv.com/michael-strevens-and-j-d-trout/.) Then you would get an intermediate view: understanding requires a special kind of knowledge, so not just knowledge of correlations (as larryniven proposes). But there is no further contribution to be made by experience or immediacy.

      • siti says:

        Michael: Yes – I was thinking more about immediate experience – I mean the fact that we are “in touch” momentarily (at each moment) with the real, external world – we experience the world sensually. How deep does that go? Does it sit at a level above facts and knowledge, or is it more fundamental? (see my reply to trehub). Another way of asking the question would be: do facts establish experience or does experience establish facts? I think this is key because if facts are fundamental, then probably machines will never rise to the level of consciousness (or at least not before we have figured out how to raise them to the level of subjective experience); but if reality is fundamentally experiential, then, in one sense, Watson is already “experiencing” the world and conscious experience may be on “his” horizon. Not that he would know that, but (as far as we know) neither did most of our evolutionary ancestors recognize that it was on theirs. 

      • Lui Di Martino says:

        From what is very much my own layman’s pov, it just struck me that once we know the difference between telling the truth and telling a lie, we have crossed over from knowledge into understanding. In that respect humans learn understanding very young. Do our present machines understand the lie or the truth they may be programmed to quote, or understand their mistakes now and again, or even understand that they have been programmed to ‘understand’ and to grow from those mistakes? This would be a breakthrough, I am sure! Intuitively I feel there is a vital spark missing here, and it may be my own lack of a thorough understanding of this subject that is providing that view in my mind.

        Humans can manipulate a situation by either telling a lie or the truth. This form of manipulation is again based on some understanding of the situation that a machine at present cannot grasp.

        Humans grow to know what to do with knowledge and understand when they are manipulating it.
        A machine can be taught to lie or even manipulate, but will those mechanical virtues bring about the human-like awareness?

        I have little doubt we can build machines that experience a full gamut of emotions and biological processes like ours. Will they evolve to turn and face those emotions, question them, and perhaps see themselves as more than those things they have been wired to experience? Until now I have always lived with the idea that it is the human soul that is a participant in this journey, and perhaps the machine can only evolve journeys it cannot come to understand, just meaningless paths of possibilities stretching out of its own insights, but lacking the meaning the soul provides.

        • Michael Strevens says:

          Lui Di Martino: “once we know the difference between our telling the truth or telling a lie, then we have crossed over from knowledge into understanding”

          Let me pick up on this to see where it might lead. One way of thinking about truth is as a correspondence between words and the world. Understanding that what you say is true (or false), then, requires some sort of separate awareness of your words and of their subject matter — and so some sort of grasp of the world that does not go by way of words. I’m not endorsing this myself, but it is one reason that some philosophers have thought that understanding requires a kind of immediate awareness of the things that are understood.

          The word “grasping” is sometimes used to refer to this awareness. But you can see that verbally defining grasping (as larryniven demands) is in that case perhaps impossible and certainly beside the point. That leaves us philosophers in a bind, since the only tools we have are words.

          • Lui Di Martino says:

            Michael -Understanding that what you say is true (or false), then, requires some sort of separate awareness of your words and of their subject matter — and so some sort of grasp of the world that does not go by way of words.

            Thank you, I wasn’t completely aware I was thinking along that line, but that is the one. Our understanding is what led to the origins of language through symbols and eventually alphabets. These in turn expanded our understanding even more, until we developed the further words required to represent a meaning that someone was trying to convey. Then successfully creating the words that supported that meaning led to even greater awareness and understanding, and so on.

            There has always been this need to express oneself, even if the first need had no idea of what our needs would become today. But this inner process seems never to have been born in machines. The machine requires the ‘alphabet’ approach in order to function. But the alphabet didn’t create the need for expression. So the machine is our tool, and can be no more, and will not experience this human-like need for expression outside of those symbols that we use to carry that expression and meaning.

  8. shaun2000 says:

    “…whether history and the social sciences… are aiming for the same sort of understanding as the physical and life sciences. Should the humanities try to be more like physics? Or would they, in doing so, annihilate themselves?”

    I don’t see the humanities aiming for the same sort of understanding as the physical and life sciences, and I think they should not try to be more like physics. Nor, I think, should scientists take terms to do with conscious volition and try to redefine them in logical and scientific terms. Knowledge can be assessed numerically, you’re welcome to that, but understanding is something existing between people, depending on the nature of consciousness for its parameters (“reaching an understanding,” for example). Trying to rationalize “understanding” is likely to drain it of its current meaning, without adding to it any power or subtlety. Science could instead use terms like “coincident/complementary knowledge,” or “knowledge complete with reference to…”

    Positivist science bound itself to avoid considering volition, natural or supernatural. The result should be a body of knowledge without reference to the experience of conscious volition, and best kept isolated from the traditional discourse used to communicate the experience of conscious volition. The humanities are keepers of that discourse.

    Science is an enduring artefact contributed to by people, each of whom has one life, one present moment and one conscious mind (I believe). For some people the finiteness of that life and the quality of that mind’s conscious experience are more meaningful than that artefact. Science is no more than a tool, to me secondary in importance, while the shaping of a life and the quality of conscious experience are primary. I vote for reason and logic to be kept apart from traditions they can have no relevance to.

  9. Meyer1953 says:

    The author asks a question in the discussion, “Is understanding a person a fundamentally different thing from understanding a mechanical process, such as the moon setting or the way a vacuum cleaner works?” The answer is, that we never ever understand one another, no matter what. We gain understanding of one another. To actually understand another person, would make that person irrelevant by redundancy. That person would literally be eclipsed by that person’s own (external to them) self.

    But a person is the fantastic extravaganza of life, being, and the universe in totality. So we gain understanding of one another. Facts are simply one way in which we do so. To just gain facts makes time stand still, makes time not happen. But to gain one another thunders time itself along. It is not a goal, to gain one another. It is the journey that in this paltry little bunch of Standard Model particles, we can do such a resplendent joy as to actually gain one another.

    The joy of a moonset, the croaking of frogs, and symphonies of cicadas place into tangibility the life that we know, each and suspecting more, of one another. No, we are not the only creatures – but first, we look to our own house since that is where our life takes its first scale of root. So what we see is, that it is one another that we only ever see no matter the context. Our finest minds have all been the same. One anecdote reported of Einstein is that he met a student at a footbridge on campus, and since their languages were different he merely pointed into the creek and with excitement said “Fish!”

    The entry into and content of understanding is, that we are these spectacularly awesome lives. Everything after that, by one another and with one another and in one another, is joy and victory.

  10. Michael Strevens says:

    Lui Di Martino, you and perhaps some others on this comment thread are skeptical that computers could ever have understanding. In my essay, I used Watson as a paradigm of a non-understanding computer. But (to reveal something more of my own views) I do think that a sufficiently sophisticated machine—not Watson, maybe not a machine that will be built in my lifetime, but some machine—could have understanding. That machine would have to possess certain special forms of knowledge, such as knowledge about causal structure or reasons. It would have to see how the things or the people it understands actually work, which is more than a matter of successfully predicting their behavior. Further, the knowledge would have to possess the sort of directness that we have been calling “immediate” or “transparent” or “grasped”, and that some of you have tied directly to consciousness. I can’t say exactly what this immediacy is (larryniven is right about that) but I do think that it can itself be understood, and that it does not depend on consciousness in the “felt experience” sense. One reason that it is worth trying to describe the difference between knowledge and understanding is so that we can better appreciate how to build such a machine—a machine that will not only beat us at Jeopardy, but afterwards grasp the scale of its achievement.

    • Lui Di Martino says:

      Michael – you and perhaps some others on this comment thread are skeptical that computers could ever have understanding.

      And at the same time remember to remain open minded! The machine is a great knowledge store. Can its knowledge ever turn to understanding? Obviously I cannot know the truth about that at the moment.

      I do know that a musician with expert knowledge of music theory once played me a composition of his, and it was the most complicated, meaningless series of chord progressions and melodies. It was expertly structured and put together, and his son and I could not begin to keep up with it. But it was clear that the musician didn’t understand composition or the value of each step. One would think that with all that knowledge he would have understood more than those with far less knowledge. I would have expected music superior to Pink Floyd, or on a par with Beethoven at least!

  11. Meyer1953 says:

    Hello all;

    Moral understanding is simply an extension of morality itself.  So we look into what morality is.

    Morality is an effective relationship with (eventual) consciousness.  In consciousness, with all things that “go around,” those things also “come around.”  Essentially, the Golden Rule, but in time that becomes quite real.  Whoever does act according to the full response that our affect produces, that person is “moral.”

    So, moral understanding is just an effective relationship between any person’s NOW and the EVENTUAL of that person’s affect towards others.  Those who make the correlation are called moral, and the embodiment of such a morality is called a moral understanding.

  12. Michael Strevens says:

    Young children know from their parents’ expressions of disapproval or wagging fingers that name-calling is wrong. But they may not understand why name-calling is wrong until later in life. What takes them from moral knowledge to moral understanding?

    Here’s my answer. There are certain general moral principles: not to harm other people without good reason, or as Meyer1953 suggests, the golden rule (related, but not the same thing). Moral understanding consists in seeing how an act’s wrongness (or rightness) is determined by those moral principles, in virtue of the nature of the act. We see that name-calling is hurtful; we know that hurting people is in general wrong; we conclude that name-calling in particular is wrong. By grasping these things, we understand name-calling’s wrongness.

    Note that there are three sorts of knowledge that go into this understanding. First, there is knowledge of the general moral principles. Second, there is knowledge of the non-moral properties of the act, for example, knowledge that most people (or at least children) find being called names hurtful. Third, there is grasping that the non-moral properties fall under the moral principle; in this case, that the hurtfulness of name-calling results in the general rule’s determining it to be wrong. The third kind of knowledge is pretty trivial in this particular, overly simplified case, but in more complex cases it is the source of most moral disagreement.
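The three-part schema above can be mocked up as a toy rule sketch (every predicate and rule here is invented purely for illustration; nothing is claimed about how real moral reasoning, or a real machine, would work):

```python
# Toy sketch of the three kinds of knowledge in the schema above.
# All names and predicates are hypothetical illustrations.

# 1. Knowledge of a general moral principle:
#    hurting people without good reason is wrong.
def harm_principle(act):
    return act["hurtful"] and not act["good_reason"]

moral_principles = [harm_principle]

# 2. Knowledge of the act's non-moral properties.
name_calling = {"hurtful": True, "good_reason": False}
honest_criticism = {"hurtful": True, "good_reason": True}

# 3. Grasping that the non-moral properties fall under a principle.
def is_wrong(act):
    return any(principle(act) for principle in moral_principles)
```

Step 3 is where, as noted above, most real moral disagreement lives: in a genuinely contested case, the entries for “hurtful” and “good_reason” are exactly what is in dispute.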

    An analogy can be drawn, I think, between the role of knowledge of general moral principles in moral understanding and the role of knowledge of causal principles in scientific understanding.

    Thoughts?

    • Lui Di Martino says:

      An analogy can be drawn, I think, between the role of knowledge of general moral principles in moral understanding and the role of knowledge of causal principles in scientific understanding.

      Yes, it will be very interesting to see how testing such an analogy will progress. I am supposing the idea is to encode the machine with some kind of moral principle, so it can come to know right and wrong things or ways to behave. The analogy makes sense, because if one took away the role of knowledge there wouldn’t be the subsequent understanding.

      One question that springs to mind is will the moral understanding gained by a machine built on scientific understanding of causal principles be the same as human moral understanding?

      It may still be that it is never knowledge itself that actually creates human understanding. Even though the role of knowledge in causal and moral understanding is similar, in one case it may act as a form of conditioning on the human soul, which then brings understanding to it, while in the other it is ‘moral understanding’ based on the rules of knowledge that have been encoded into the machine.

      If knowledge does not contain its own walkthrough into understanding, but is just the stuff that causes a human to understand, we still cannot be certain the machine will ever gain this type of understanding.

      But it’s ok if that is wrong. I would ultimately accept the truth. And finding out will be another incredible journey.

      • Michael Strevens says:

        Lui Di Martino raises the question whether machines can have moral understanding. If the analogy I’ve drawn between scientific understanding and moral understanding is correct, and if machines can have scientific understanding, what could stand in the way of mechanized moral understanding?

        I can think of only one obstacle: machines may not be able to grasp moral principles in the same way that they can grasp causal principles. Why would that be? Suppose, as larryniven has suggested, that causal connections are just a very special kind of statistical connection, and that machines grasp causal connections by having the right sort of statistical knowledge. Can they grasp moral connections in this fashion? Many philosophers would say no: whatever moral connections are — whatever it is that makes the infliction of pain wrong, say — they cannot be a matter of statistics. There may be a lot of pain, a lot of deliberate infliction of pain, but that could never make pain morally good.

  13. ianful says:

    We seem to build knowledge through personal observation or collection of observations and interpretations by others. Then we classify all of these to make sense. We build this into our personal history and memories. We also construct meaningful models to explain past outcomes and predict future outcomes, and then call this understanding. The trouble is that we are only capable of skimming the surface in observing natural and social phenomena – even if we have the most powerful scientific instruments and statistical analysis possible. Consequently we are kidding ourselves if we assume that we actually know why our predictions come true.

    My point is that a computer could be programmed with my personal history/memories, provided that I could comprehensively download it and pass it on. Maybe the machine would not be as forgetful as I am. As a corollary to my previous discussion of our human understanding, I find it is essentially an illusion, but don’t tell anyone, as that will undermine their self-importance.

    • Michael Strevens says:

      ianful: “We are kidding ourselves if we assume that we actually know why our predictions come true.”

      What do you think is missing? If we can use our instruments and our statistics to discover the fundamental laws of nature, and if we can see how the laws determine the things we predict, don’t we understand those predictions?

  14. Meyer1953 says:

    “An analogy can be drawn, I think, between the role of knowledge of general moral principles in moral understanding and the role of knowledge of causal principles in scientific understanding.”

    The role of knowledge in general moral principles comes from one and both of two sources: 1) rote performance, and 2) gained discovery. For rote performance, it is the usual “do it because I said to,” whereas gained discovery is learned and adopted from experience. Of the two, (1) is weak to the point of uselessness unless it becomes (2). Our prisons, for example, are populated with many of those who do not adopt (2). And in WW2, it took an entire destruction of two empires before those empires adopted (openly, in ongoing politics) option (2). So we are still back to the premise that morality is a carried-forth consciousness, typified as the Golden Rule.

    The full adoption of morality in the twentieth century required a global cataclysm, and unbelievable heroism and determination among quite diverse peoples. So the evidence of that century suggests all but conclusively that there is a very, excruciatingly definite tendency in life for morality to emerge. This is to say that there is apparently a causal effect that gives rise to open morality.

    Now we may compare apples to apples. In science, the study of causal principles essentially defines science itself. If a thing cannot be caused now, and then again at will, it is not considered scientifically understood. So causal relationship – though not exclusively necessary in science – is the heart and core of scientific endeavor and accomplishment. Now we have a causal-level comprehension of morality.

    I suggest that the depth of life involved in moral understanding indicates an otherwise unsuspected depth active in science. I find that science touches only lightly on all things until it models those things by our own living depth as people. Science usually gets it the other way around, pooh-poohing people as mere bizarre accidents, whereas depth upon depth is the daily life we all know. So I suggest that the causal principles of science have a full, generally new depth available by analogy with the depth found in moral emergence.

    • Michael Strevens says:

      Meyer1953 said: “The full adoption of morality in the twentieth century required a global cataclysm… there is apparently a causal effect that gives rise to open morality”

      Meyer1953 expresses one kind of moral optimism: the world will force moral enlightenment on us whether we seek it or not.

      I am inclined toward a different kind of optimism: moral enlightenment can be had by thinking — by moral understanding. Though it took a terrible war to eliminate slavery in the United States, in many other countries it was abolished with little or no physical conflict. Many moral advances are achieved through persuasion, by showing people how the general moral principles to which they are already committed apply to, say, the pain and indignity of slavery, or to the injustice of excluding women from politics.

      That’s another reason why we should care about having not just knowledge, but understanding — understanding leads to more knowledge.

  15. ianful says:

    What is missing is our understanding of our reality. Unfortunately (or fortunately?), I have seen the nature of our reality; I have only witnessed it, not understood it, and it has taken me years to make any sense of what I witnessed. Our reality is a projection from somewhere else, and cosmologists and theoretical mathematicians working in fundamental physics have recently been confirming this. To use an analogy: our reality here and now stands to its source as a live outside broadcast on a TV screen stands to the original scene – a representative facsimile that lacks the richness of the original. Sure, we try to understand the mechanics of our reality, but that is like engaging with the monkey rather than the organ grinder himself.

  16. Meyer1953 says:

    I appreciate that optimism holds much promise. But I think that the optimism I spoke of in terms of global cataclysm and the author’s optimism that “moral enlightenment can be had by thinking — by moral understanding” are really the same thing. For a thing to become mature thought, to become understanding, it must have been experienced in life, as life, by people who practice it. The practice comes not from surface-only thought, but from depth of thought – a depth that comes from within one’s own life by experience.

    I would also point out that slavery was indeed abolished in the North long before the Civil War – it was only in the South that the conflagration had to be fully engaged. So even in our Civil War, moral thought reigned and decidedly won.

  17. Michael Strevens says:

    Thanks everyone for joining the discussion! Our time is nearly up. Any last minute thoughts on experience, “grasping”, causation, morality, machines? Did I convince you all that there really is a difference between knowledge and understanding?

    • Lui Di Martino says:

      Thank you for a great discussion! Even in this discussion, some of the things we have come to understand have not actually been put into words and posted. We have probably grown in understanding on various levels, and regarding many other subjects too. We understand things in the strangest ways. And it may be that every such way can be represented as an encoded program, or as something programmable that can lead to the same way – a way that is known intimately.

      Perhaps the littlest portion of a certain kind of understanding is greater than the greatest knowledge, and we would trade such knowledge for a dose of that understanding. Obviously we have lots of understanding on many subjects. But there is one understanding that, I feel confident, all humans would probably trade all their ‘knowledge’ for. Without that one, I feel we cannot fully trust knowledge. I suppose a machine might develop the same problem!

      But regardless of my meandering, various essays here have given me a deeper interest in the subject, and I watched your video and bookmarked the philosophy tv link. So thanks!

      • Michael Strevens says:

        The nature of understanding is indeed difficult to put into words, but we philosophers will keep trying. Some of the comments above have pointed to reasons why this might be flat-out impossible: they suggest that understanding is “deeper than words” – as perhaps is consciousness, on which it might depend. But I am convinced that there is plenty more to say, even if it is impossible to say it all!