What Is the Value of Imagination and Wishful Thinking in Science?

A statue of Scottish scientist James Clerk Maxwell, discussed below, who formulated the classical theory of electromagnetism. (Flickr: jmj2001, CC)

How do we learn? Learning about learning isn’t easy, but since the rewards are potentially enormous, it’s worth looking hard to find answers. One way to begin is to ask what non-human examples, from biology and technology, teach us about learning.

How Nature Learns
The world’s most impressive example of learning is all around us, in the phenomenon of life. Living organisms have “learned” how to gather energy and information amidst a complex environment to grow, to repair themselves, and to produce offspring that repeat those feats.

Biological evolution’s learning strategy, through which the biosphere’s organisms became so sophisticated, is, of course, natural selection. As elucidated by Darwin, natural selection has two basic components: descent with modification, and what he called the struggle for existence. The idea is that organisms produce offspring similar but not identical to themselves, and that the “fittest” offspring — those best adapted to their conditions of life — are more likely to reproduce abundantly. Over time, successive generations become better adapted.

The definition of fitness is circular, but not empty. The fittest organisms are, by definition, those which succeed in producing many descendants. Increasing fitness, in that sense, does not in itself imply increasing sophistication or complexity. (For example, simple bacteria are still extremely abundant!) But evolutionary history shows that increasing sophistication and complexity is one major path to fitness.

Much has been learned since Darwin’s day about how descent with modification actually works. Parents provide a set of coded instructions, in the form of DNA molecules, for assembling children. The code is embodied in DNA’s sequence of nucleotides, which can be visualized as a long succession of As, Cs, Gs, and Ts.

Natural selection’s learning scheme has been a fruitful model for machine learning. “Genetic algorithms” are a prime example. In order to find programs that achieve some goal, one creates a population of functioning programs, each with different properties, specified by its “genome.” Then one runs the programs and compares their results. The least successful are discarded, while the most successful leave several descendants, whose genomes may be slightly different. One can repeat that cycle many times. Impressive programs, which would have been very difficult to design directly, have been created this way, through artificial “evolution.”

It’s important to emphasize that the design of genetic algorithms involves many choices — the design of the master program (which uses the genome); the initial set of “genes”; the rules for culling, reproduction, and mutation; and, crucially, the definition of what counts as “success.”
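To make those design choices concrete, here is a minimal genetic-algorithm sketch in Python. The target string, alphabet, mutation rate, and culling rule are all illustrative assumptions chosen for this toy example, not features of any particular system discussed above; the point is only to show where each design decision enters.

```python
import random

random.seed(0)  # for reproducibility of this illustration

TARGET = "ATCGATCGATCG"   # hypothetical "ideal" genome -- a design choice
ALPHABET = "ACGT"

def fitness(genome):
    """Definition of "success" (a design choice): positions matching TARGET."""
    return sum(a == b for a, b in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    """Descent with modification: each letter may change with probability `rate`."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in genome)

def evolve(pop_size=30, generations=40):
    # The initial set of "genes": random genomes -- another design choice.
    population = ["".join(random.choice(ALPHABET) for _ in TARGET)
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Cull the least successful half; each survivor leaves two
        # slightly mutated descendants.
        survivors = sorted(population, key=fitness, reverse=True)[:pop_size // 2]
        population = [mutate(g) for g in survivors for _ in range(2)]
    return max(population, key=fitness)

best = evolve()
print(best, fitness(best))
```

Even in this tiny sketch, artificial "evolution" reliably produces genomes far fitter than random chance would, without anyone designing the winning genome directly.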

Other Success Stories
Prior to Darwin’s natural selection, a different theory of evolution had been advanced by Jean-Baptiste Lamarck. According to Lamarck, the “modifications” in “descent with modification” result from the life experiences of the parents. Thus, for example, Lamarck taught that giraffes gradually acquired their long necks as successive generations of proto-giraffes stretched to graze on the leaves of tall trees.

As a theory of biological evolution, Lamarck’s idea was superseded by Darwin’s. But its spirit survives, as an important strategy for machine learning.

In reinforcement learning, as in genetic algorithms, one has a skeletal master program and experiments with alternative ways of filling it in to create functional programs — or, to use a friendlier word, “plans.”

Plans are prescriptions for making a sequence of choices among available options. When things work out successfully — or when they don’t — we can’t necessarily tell which choice was responsible. In reinforcement learning we make incremental changes in the probabilities for selecting among the available options, based on their appearance in successful outcomes.

Put more simply, the basic idea of reinforcement learning is to compare many plans, and to fine-tune them by building up sub-plans that participate in success. Reinforcement learning, therefore, is a kind of Lamarckian evolution.
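As a toy illustration of that incremental-update idea, here is a Python sketch of a learner choosing among three options with made-up hidden success rates. The options, rates, and update factors are assumptions for illustration only; the learner sees only outcomes, and it nudges its selection weights toward options that participate in success.

```python
import random

random.seed(1)  # for reproducibility of this illustration

# Hidden success rates for three hypothetical options -- the learner
# never sees these directly, only the outcomes of its choices.
TRUE_SUCCESS = {"a": 0.2, "b": 0.5, "c": 0.8}

# Start with no preference among the options.
weights = {opt: 1.0 for opt in TRUE_SUCCESS}

def choose():
    """Pick an option with probability proportional to its current weight."""
    opts = list(weights)
    return random.choices(opts, [weights[o] for o in opts])[0]

for _ in range(5000):
    opt = choose()
    succeeded = random.random() < TRUE_SUCCESS[opt]
    # Incremental change in selection probabilities: reinforce choices
    # that appear in successful outcomes, discourage the others.
    weights[opt] *= 1.01 if succeeded else 0.99

print(max(weights, key=weights.get))
```

Over many trials, the weight of the most successful option grows and it gets chosen more often, which in turn reinforces it further; no single trial ever reveals which choice was "responsible" for success.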

Typical applications of reinforcement learning include helping robots to move efficiently, and helping game-playing computers to play well. Reinforcement learning was central to AlphaGo, the AI program that in 2016 defeated Lee Sedol, the human Go champion, by 4 to 1 in a five-game match. Good Go-playing requires pattern recognition and long-term thinking, “intuitive” abilities that computer scientists have found difficult to program using conventional, top-down approaches. AlphaGo reached new levels of excellence by playing many games against itself and improving its strategies by experience, using reinforcement learning.

In recent years, many other games — including even Jeopardy and poker — have been “conquered” by machine learning, in the sense that the world’s best players are computer programs.

The programs that conquered checkers and chess follow a simpler version of the approach we’ve already seen: before making a move, they imagine lots of possibilities, evaluate the results, and select the most successful one. (Of course, the best programs add many wrinkles and refinements to this guiding strategy. They are marvels of human ingenuity!) But in the fierce “struggle for existence” among individual moves there is only one survivor, and that survivor leaves no descendants — the whole process starts from scratch at the next move. It’s quite possible that even stronger chess programs could be developed by incorporating more evolutionary ideas.
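The imagine-evaluate-select loop, restarted from scratch at every move, can be sketched in a few lines of Python. The subtraction game and its evaluation function below are illustrative stand-ins for the vastly more elaborate machinery of a real chess program:

```python
def legal_moves(state):
    """In this toy subtraction game, a move removes 1, 2, or 3 counters."""
    return [m for m in (1, 2, 3) if m <= state]

def evaluate(state):
    """Heuristic: leaving the opponent a multiple of 4 is winning (Nim logic)."""
    return 1 if state % 4 == 0 else 0

def best_move(state):
    # "Imagine" each legal move, evaluate the result, keep only the winner.
    # The losing candidates leave no descendants: on the next turn,
    # the whole search starts from scratch.
    return max(legal_moves(state), key=lambda m: evaluate(state - m))

print(best_move(10))  # from 10 counters, taking 2 leaves 8, a multiple of 4
```

Note that nothing learned while choosing this move carries over to the next one; that discarded experience is exactly what the evolutionary approaches above try to retain.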

Imagination and Play
What do these spectacular success stories in non-human learning teach us about human learning? Perhaps ironically, one big takeaway is the value of a quintessentially human characteristic: imagination.

At first that conclusion might sound strange. Do genes or computers really have imagination? But, on deeper reflection, I think you’ll find it compelling. All the approaches we’ve discussed feature, at their core, spinning out and examining many alternative possibilities. And what is imagination, but the ability to consider what is not, but might be?

Many creative people, including many physicists, have testified to the value of imagination in their work. In one of his most famous quotes, Albert Einstein said: “Imagination is more important than knowledge. Knowledge is limited. Imagination encircles the world.”

The great quantum physicist Paul Dirac, when asked about how he made his epochal discoveries, replied: “I like to play about with equations, just looking for beautiful mathematical relations which maybe don’t have any physical meaning at all. Sometimes they do.”

And Richard Feynman said: “The game I play is a very interesting one. It’s imagination, in a tight straitjacket.” The straitjacket, he goes on to explain, is that if your goal is to discover new things about physical reality, you have to respect known facts, accumulated over centuries of research.

Perhaps the most impressive testimonial to scientific imagination comes from James Clerk Maxwell, the leading theoretician of electromagnetism, in his tribute to Michael Faraday, its leading experimenter. The self-taught Faraday had to rely on his visual imagination, because his mathematical training was sketchy. This led him to new ways of thinking, as Maxwell described:

Faraday, in his mind’s eye, saw lines of force traversing all space where the mathematicians saw centres of force attracting at a distance: Faraday saw a medium where they saw nothing but distance; Faraday sought the seat of the phenomena in real actions going on in a medium.

Later, Maxwell converted Faraday’s imaginative visions into wonderful new kinds of equations, which we still use today.

Cultivating Imagination
Having recognized the value of imagination as a tool for problem-solving, how can we cultivate it?

In their free play, children exercise extraordinary imagination spontaneously. So, one promising idea is to encourage a playful approach to problem-solving. Humans are predisposed to enjoy this — as witness the racks of popular magazines at airports or supermarkets, which usually feature collections of word puzzles (e.g., crosswords and acrostics), math puzzles (e.g., Sudoku and KenKen), and logic puzzles, some of which are quite challenging.

I’d like to mention some wonderful examples of imagination-expanding resources, suitable for young people, that take things to a higher level. Each of these has meant a lot to me, personally. Edwin Abbott’s Flatland is a mind-expanding novel that explores life in different numbers of dimensions. Robert Forward’s Dragon’s Egg imagines life arising on a neutron star, based on nuclear physics. It too is mind-expanding fun. George Gamow’s Mr. Tompkins is a kind of scientific Gulliver, who in the illustrated novels Mr. Tompkins in Wonderland and Mr. Tompkins Explores the Atom finds himself moving very fast, where the effects of special relativity come into play, or shrunken down to atomic size, where quantum theory rules, among other adventures. And the late Ray Smullyan wrote several witty collections of logic puzzles, including Forever Undecided, which go quite deep (touching on, for example, Gödel’s incompleteness theorems) while advancing humorous narratives. I especially love his retro chess puzzles, The Chess Mysteries of Sherlock Holmes and The Chess Mysteries of the Arabian Knights. These present challenging problems in visual imagination and logical reasoning, posed in the context of the classic, slightly exotic characters and situations suggested by their titles.

Schools at all levels would do well to bring these and other imagination-expanding resources — including such computer games as SimCity, Civilization, and Minecraft — front and center.

Wishful Thinking
But imagination is only half the story. In each of the problem-solving strategies mentioned above, the process of imagining possibilities must be married to a method for evaluating their success.

In games, the final goal — like winning or solving the puzzle — usually is specified clearly. But in interesting games it’s not possible to see a clear path to that goal straight away. To make progress you must form more limited, tractable, intermediate goals, and aim for those. In other words, you’ve got to decide what to wish for.

Wishful thinking, in this sense, is an essential part of problem-solving. This becomes even more true when we move outside the context of games, where often there are no set rules that supply the definition of ultimate success.

It’s very plausible, then, that an important step toward achievement, whether for machines or human beings, is to cultivate systematic wishful thinking. New Year’s resolutions, business plans, and visionary “to do” lists embody that strategy.

Ultimate Goals
The issue of goals arises most keenly at the highest levels of learning and problem-solving.

Trying to make a scientific breakthrough, write a novel or compose a symphony, or create some other great work of art can be hard, frustrating work. Success is not guaranteed, and the tangible, economic rewards are usually modest. What keeps people going?

Some motivations are obvious. The dopamine rush that accompanies successful problem-solving can be its own reward. And successful problem-solving can earn the esteem of others — another thing people find rewarding in itself. But this isn’t the whole story.

For many of the greatest physicists, religion was a big motivator. Galileo, Newton, Faraday, and Maxwell were all deeply believing, if not entirely orthodox, Christians. Here’s how John Maynard Keynes, the economist, described Isaac Newton, in his famous lecture “Newton, the Man”:

He looked on the whole universe and all that is in it as a riddle, as a secret which could be read by applying pure thought to certain evidence, certain mystic clues which God had laid about the world to allow a sort of philosopher’s treasure hunt to the esoteric brotherhood. He believed that these clues were to be found partly in the evidence of the heavens and in the constitution of elements (and that is what gives the false suggestion of his being an experimental natural philosopher), but also partly in certain papers and traditions handed down by the brethren in an unbroken chain back to the original cryptic revelation in Babylonia. He regarded the universe as a cryptogram set by the Almighty.

And here is Maxwell again, delighting in his discoveries:

The vast interplanetary and interstellar regions will no longer be regarded as waste places in the universe, which the Creator has not seen fit to fill with the symbols of the manifold order of His kingdom. We shall find them to be already full of this wonderful medium; so full, that no human power can remove it from the smallest portion of space, or produce the slightest flaw in its infinite continuity.

These scientists were determined, as Stephen Hawking put it, to “know the mind of God.” That inspired them to work prodigiously hard.

Though Albert Einstein was not religious in any conventional sense, he, too, was driven by a passion to know. In his Autobiographical Notes, he recounts his fascination as a child when his father showed him a compass: “This experience made a deep and lasting impression upon me. Something deeply hidden had to be behind things.” And elsewhere he writes movingly of his struggle to reach the general theory of relativity: “The years of searching in the dark for a truth that one feels but cannot express, the intense desire and the alternations of confidence and misgiving until one breaks through to clarity and understanding.”

Although I am no longer a believer, I am grateful for the training I received in Roman Catholicism, which taught me to see the world as having grandeur and hidden meaning. That vision helped inspire my wishful searching for grandeur and meaning in the physical world, which became habitual.

Finally, let me add that in deciding what to wish for, a feeling for beauty is an invaluable asset. Exposure to beautiful objects and sounds — art and music — as well as beautiful ideas can develop that sense.

Discussion Questions:

  1. In addition to imagination and wishful thinking, are there other habits of thought that can be helpful in learning, and specifically in science?
  2. Are there other books or games, beyond those mentioned here, that can encourage imaginative play?
  3. Are there any limitations to using computer programs as models for thinking about learning?

 

16 Responses

  1. Sampan Chakraborty says:

    Decisive article, good descriptions. I would add one point on Faraday: he was a self-taught prodigy who started out as a bookbinder, came into contact with Davy, and later became the greatest experimentalist of all time. Mathematics was something of a weak link for Faraday, as it was for some noted inventors, including Edison, a similar sort of self-taught prodigy. But where Faraday excelled was in his visual and spatial intelligence, putting an idea forward in order then to verify it.

    • Suen says:

      Going off this idea, there’s an old debate in philosophy about whether images should play a role in the kind of abstract reasoning characteristic of modern science. The philosopher of science Bachelard, for instance, argued that if students of science are taught to think too visually it winds up inhibiting their ability to grasp complex and inherently abstract concepts such as rest mass or configuration space, misleading them into conflating these concepts with the more familiar notions from common sense or classical physics, such as heaviness or 3-space.

      Dr. Wilczek, you discuss both puzzles and imagination. Would you argue that “visual intelligence” is integral to abstract thinking?

      • Frank Wilczek says:

        R. P. Feynman said “The game I play is a very interesting one: It’s imagination in a tight straitjacket.” It is extremely valuable to make simplified, concrete models — whether physical or conceptual — of complex or counterintuitive things, so we can use all the tools our brains provide. That’s what I emphasized in the article. But there is a danger that you take the models too literally, and base false inferences on them. So, you always have to check. One good way is to have several different simplified models, and see whether they give consistent results. My feeling is that if you don’t use visual models you’re sacrificing a lot of your potential processing power. I say use them, but don’t trust them blindly.

    • Frank Wilczek says:

      Thank you for your kind words, and the additional material on Faraday.

      People with very different styles of thinking have made major contributions to science. The great French mathematical physicist Joseph-Louis Lagrange (1736–1813) emphasized symbolic thought, and took pride in the fact that his masterwork on mechanics, the Mécanique analytique, does not contain a single diagram. Newton’s founding Principia, by contrast, is full of diagrams.

      The Wright brothers got their start building and repairing bicycles — activities which bring in muscular as well as visual areas of the brain. Newton as a boy built mechanical models and kites, and grew up to be a great experimenter as well as a great theorist; Feynman repaired radios and appliances. You’ll find similar stories about the childhoods of many other historic scientists and inventors.

      I conclude that it’s good to have many different kinds of brains contributing, as well as many parts of each single brain.

  2. Russ says:

    2) Of course, you can learn from ANY book, ANY game, ANY movie. Sci-fi, fantasy: any time you read or watch fiction you are glimpsing into the imagination of another mind. Why did they choose THAT red, why THAT shape, why THAT accent, etc.? It’s different from what you would have come up with yourself.

    • Frank Wilczek says:

      I love this comment. Thinking deeply about anything makes it interesting and quickly leads you into mysteries, especially if you think about how it might have been different. That said, some things teach you more than others! To take an extreme example: You’ll learn more by engaging Bach or the Beatles than from listening to a cat walking across a piano, and wondering about the pattern of the notes.

  3. Prof. Jimmy says:

    Imagination is not an intellectual faculty. So if imagination plays a pivotal role in science, does that mean that science is, at least in part, a non-intellectual activity — intuitive, say, or artistic?

    • Georgio says:

      So are you saying that science is subjective? Science aims to be empirical = objective.

      • Frank Wilczek says:

        Empirical validity is and ought to be the ultimate criterion for scientific truth-value — no quarrel from me about that. But in the process of getting there — of formulating candidate hypotheses and designing experiments to test them — pure logic isn’t enough, as a practical matter. Looser, more inclusive kinds of thinking are essential.

    • Frank Wilczek says:

      I’ll assume that by “intellectual” Jimmy means “logical”, since that makes the question interesting. With that interpretation, I agree with Jimmy that science is in large part not logical, but includes strong aesthetic elements. These come into play, especially, when one is choosing what to work on, or what to make, or assessing what lines of attack seem promising. In A Beautiful Question, I documented at length how aesthetic thinking — looking for beauty — has led to decisive progress in fundamental physics.

  4. Machine Learner says:

    Will machines one day be able to make scientific discoveries, and not just solve problems posed to them?

    • Frank Wilczek says:

      If you believe that mind is an emergent property of brain, and that brain functions according to known physical laws, then I think that you have to accept that in principle machines can do anything that humans can, because we know how — in principle — to build machines that replicate each of the relevant physical behaviors. (“In principle,” because we don’t really know how to assemble the pieces, and there are far too many of them.) Of course, it’s not yet strictly proven, empirically, that mind is an emergent property of brain, nor that brain functions according to presently known physical laws. But most working neurobiologists make those assumptions, and so far, they’ve held up pretty well. Also, there doesn’t seem to be a usefully concrete alternative theory.

      As a practical matter, existing machines are nowhere near matching general human intelligence. I think they will have to engage the world more actively, have better sensory input and processing systems, and acquire abilities to understand, formulate, and carry out broadly formulated goals (in loose, natural language, as opposed to strict computer code) before that happens.

  5. ScienceGuy says:

    Thank you, Dr. Wilczek. Fascinating discussion. However, I am not sure I understand your use of computation as an analogy for human learning. Do you mean to suggest that the human mind is like a computer? I don’t believe computers could ever use imagination or engage in “play.”

    • Frank Wilczek says:

      I have to disagree, for the reasons mentioned in the preceding answer. Let me add that some spectacular recent advances in computer science have come through use of “neural net” architectures, inspired (as their name suggests) by biology. We don’t know how our own thought processes work, and we do know that matter — if you put lots of atoms together in clever ways — can do some extremely surprising and impressive things.

  6. Wendy Moira says:

    I worry that beauty is not only a subjective concept, but that it can also lead us astray. For instance, many theoreticians find string theory beautiful, but the majority of scientists seem to think that this theory lacks other key features that make for a good scientific theory, e.g., predictive power. How do we differentiate between theories that are beautiful and true and those that are false but nevertheless beautiful?

    • Frank Wilczek says:

      This is a question that divides professional physicists! I don’t think it should, really, once we are clear about some basic distinctions.

      Beauty is, as you say, subjective — but not entirely. Some forms of beauty have an objective component. In physics, almost everyone will agree that simple equations that explain a lot are more beautiful than complicated equations that explain a little. In the initial excitement about string theory, people hoped that it would lead to simple equations that explain a lot. So far, though, it’s been disappointing that way, as far as explaining a lot about the physical world. String theory has, on the other hand, been a fruitful source of mathematical discovery. So, to me, it has some beauty, but not as much as was hoped for (and advertised).

      The ultimate criterion of scientific truth is, and ought to be, empirical validity, as I discussed above. But the failure of a theory to make testable predictions does not make it false. Depending on one’s mood and prejudices, one might say simply that it is “not scientifically true,” or spin it hopefully as “not yet proved,” or spin it derisively as “not even wrong.” All three attitudes are common.
