In 1894, the physicist and Nobel laureate Albert Michelson declared that science was almost finished; the human race was within a hair’s breadth of understanding everything:

It seems probable that most of the grand underlying principles have now been firmly established and that further advances are to be sought chiefly in the rigorous application of these principles to all the phenomena which come under our notice.

Bold and heady predictions like this often seem destined to topple, and, to be sure, the world of physics was soon shaken by the revolutions of relativity and quantum mechanics.

But as the 20th century unfolded, it turned out to be the phenomena closest to our own human scale — biology, social science, economics, politics, among others — that have most notably eluded explanation by any grand principles. The deeper we dig into the workings of ourselves and our society, the more unexpected complexity we find. Fittingly, it was in the 20th century that science began to bridge disciplinary boundaries in order to search for principles of complexity itself.

What is Complexity?

The “study of complexity” refers to the attempt to find common principles underlying the behavior of complex systems — systems in which large collections of components interact in nonlinear ways. Here, the term nonlinear implies that the system can’t be understood simply by understanding its individual components; nonlinear interactions cause the whole to be “more than the sum of its parts.”

Complex systems scientists try to understand how such collective sophistication can come about, whether it be in ant colonies, cells, brains, immune systems, social groups, or economic markets. People who study complexity are intrigued by the suggestive similarities among these disparate systems. All these systems exhibit self-organization: the system’s components organize themselves to act as a coherent whole without the benefit of any central or outside “controller.” Complex systems are able to encode and process information with a sophistication that is not available to the individual components. Complex systems evolve — they are continually changing in an open-ended way, and they learn and adapt over time. Such systems defy precise prediction, and resist the kind of equilibrium that would make them easier for scientists to understand.

Transforming Our Understanding

Of course all important scientific discoveries transform our understanding of nature, but I think that the study of complexity goes a step further: it not only helps us understand important phenomena, but changes our perspective on how to think about nature, and about science itself.

Here are a few examples of the surprising, perspective-changing discoveries of Complex Systems science. (If these don’t seem so surprising to you, it is because your perspective has already been changed by the sciences of complexity!)

Simple rules can yield complex, unpredictable behavior

Why can’t we seem to forecast the weather farther out than a week or so? Why is it so hard to project yearly variation in fishery populations? Why can’t we foresee stock market bubbles and crashes? In the past it was widely assumed that such phenomena are hard to predict because the underlying processes are highly complex, and that random factors must play a key role. However, Complex Systems science — especially the study of dynamics and chaos — has shown that complex behavior and unpredictability can arise in a system even if the underlying rules are extremely simple and completely deterministic. Often, the key to complexity is the iteration over time of simple, though nonlinear, interaction rules among the system’s components. It’s still not clear whether unpredictability in the weather, the stock market, and animal populations is caused by such iteration alone, but the study of chaos has shown that it’s possible.
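A canonical illustration of this point is the logistic map, a one-line deterministic rule whose behavior becomes unpredictable in its chaotic regime. The sketch below is my own illustration (the function name and starting values are mine, chosen for demonstration): two trajectories that begin a billionth apart soon bear no resemblance to one another, even though every step is perfectly deterministic.

```python
# Logistic map: x[t+1] = r * x[t] * (1 - x[t])
# A minimal sketch of "simple deterministic rules, unpredictable behavior":
# at r = 4.0 (the chaotic regime), two starting points that differ by
# one part in a billion lead to completely different trajectories.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000000)
b = logistic_trajectory(0.200000001)  # initial difference: 1e-9

# Early on, the two trajectories are indistinguishable...
print(abs(a[5] - b[5]))    # still microscopically small
# ...but the tiny difference grows roughly exponentially with each step,
# and within a few dozen iterations the trajectories decorrelate entirely.
print(max(abs(x - y) for x, y in zip(a[30:], b[30:])))
```

No randomness appears anywhere in this rule; the unpredictability comes entirely from iterating a simple nonlinear map, which is exactly the lesson of chaos described above.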

More is Different

Above I reiterated the old saw, “the whole is more than the sum of its parts.” The physicist Phil Anderson coined a better aphorism: he noted that a key lesson of complexity is that “more is different.”

Ant colonies are a great example of this. As the ecologist Nigel Franks puts it, “The solitary army ant is behaviorally one of the least sophisticated animals imaginable…If 100 army ants are placed on a flat surface, they will walk around and around in never decreasing circles until they die of exhaustion.” Yet put half a million of them together and the group as a whole behaves as a hard-to-predict “superorganism” with sophisticated, and sometimes frightening, “collective intelligence.” More is different.

Similar stories can be told for neurons in the brain, cells in the immune system, creativity and social movements in cities, and agents in market economies. The study of complexity has shown that when a system’s components have the right kind of interactions, its global behavior — the system’s capacity to process information, to make decisions, to evolve and learn — can be powerfully different from that of its individual components.

Network Thinking

In the early 2000s, the complete human genome was sequenced. While the benefits to science were enormous, some of the predictions made by prominent scientists and others had a Michelsonian flavor (see first paragraph, above). President Clinton echoed the widely held view that the Human Genome Project would “revolutionize the diagnosis, prevention and treatment of most, if not all, human diseases.” Indeed, many scientists believed that a complete mapping of human genes would provide a nearly complete understanding of how genetics worked and which genes were responsible for which traits, paving the way for revolutionary medical discoveries and targeted gene therapies.

Now, more than a decade later, these predicted medical revolutions have not yet materialized. But the Human Genome Project, and the huge progress in genetics research that followed, did uncover some unexpected results. First, human genes (DNA sequences that code for proteins) number around 21,000 — far fewer than anyone expected, and about the same number as in mice, worms, and mustard plants. Second, these protein-coding genes make up only about 2% of our DNA. Two mysteries emerge: If we humans have comparatively so few genes, where does our complexity come from? And as for that 98% of non-gene DNA, which in the past was dismissively called “junk DNA,” what is its function?

What geneticists have learned is that genetic elements in a cell, like ants in a colony, interact nonlinearly so as to create intricate information-processing networks. It is the networks, rather than the individual genes, that shape the organism. Moreover, and most surprising: the so-called “junk” DNA is key to forming these networks. As biologist John Mattick puts it, “The irony…is that what was dismissed as junk because it wasn’t understood will turn out to hold the secret of human complexity.”

Information-processing networks are emerging as a core organizing principle of biology. What used to be called “cellular signaling pathways” are now “cellular information processing networks.” New research on cancer treatments is focused not on individual genes but on disrupting the cellular information processing networks that many cancers exploit. Some types of bacteria are now known to communicate via “quorum sensing” networks in order to collectively attack a host; this discovery is also driving research into network-specific treatment of infections.

Over the last two decades an interdisciplinary science of networks has emerged, and has developed insights and research methods that apply to networks ranging from genetics to economics. Network thinking is the area of complex systems that has perhaps done the most to transform our understanding of the world.

Non-Normal is the New Normal

In 2009, Nobel Prize-winning economist Paul Krugman said, “Few economists saw our current crisis coming, but this predictive failure was the least of the field’s problems. More important was the profession’s blindness to the very possibility of catastrophic failures in a market economy.” At least part of this “blindness” was due to the reliance on risk models based on so-called normal distributions.

Figure 1: (a) A hypothetical normal distribution of the probability of financial gain or loss under trading. (b) A hypothetical long-tailed distribution, showing only the loss side. The “tail” of the distribution is the far right-hand side. The long-tailed distribution predicts a considerably higher probability of catastrophic loss than the normal distribution.

The term normal distribution refers to the familiar bell curve. Economists and finance professionals often use such distributions to model the probability of gains and risk of losses from investments. Figure 1(a) shows a hypothetical normal distribution of risk. I’ve marked a hypothetical “catastrophic loss” on the graph. You can see that, given this distribution of risk, the probability of such a loss would be very near zero. Less probable, maybe, than a lightning strike right where you’re standing. Something you don’t have to worry about. Unless the model is wrong.

The study of complexity has shown that in nonlinear, highly networked systems, a more accurate estimation of risk would be a so-called “long-tailed” distribution. Figure 1(b) shows a hypothetical long-tailed distribution of risk (here, only the “loss” side is shown). The longer non-zero “tail” (far right-hand side) of this distribution shows that the probability of a catastrophic loss is significantly higher than for a system obeying a normal distribution. If risk models in 2008 had employed long-tailed rather than normal distributions, the possibility of an “extreme event” — here, “catastrophic loss” — would have been judged more likely.
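To make the size of this difference concrete, here is a toy calculation (my own sketch, not the essay’s actual figure): the probability of a loss more than ten “sigma-like” units out in the tail, under a normal model versus a long-tailed power-law (Pareto) model. The distributions and parameters here are illustrative assumptions, not calibrated to any real market.

```python
import math

def normal_tail(x):
    """P(X > x) for a standard normal random variable."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def pareto_tail(x, alpha=2.0, x_min=1.0):
    """P(X > x) for a Pareto (power-law) random variable with exponent alpha."""
    return (x_min / x) ** alpha if x > x_min else 1.0

loss = 10.0  # a hypothetical "catastrophic loss", 10 units out in the tail

print(f"normal model tail probability:      {normal_tail(loss):.2e}")
print(f"long-tailed model tail probability: {pareto_tail(loss):.2e}")
# The normal model rates the catastrophe at roughly 1e-23 (essentially never);
# the long-tailed model rates it at 1e-2 -- about once in a hundred trials.
```

The two models agree near the center of the distribution; it is only far out in the tail, exactly where catastrophic losses live, that they disagree by some twenty orders of magnitude. A model choice that looks innocuous in ordinary times is anything but when an extreme event arrives.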

Because long-tailed distributions are now known to be signatures of complex networks, our growing understanding of such networks implies that risk models need to be rethought in many areas, ranging from disease epidemics to power grid failures; from financial crises to ecosystem collapses. The technologist Andreas Antonopoulos puts it succinctly: “The threat is complexity itself.”

Is Complexity a New Science?

“The new science of complexity” has become a catchphrase in some circles. Google reports nearly 87,000 hits on this phrase. But how “new” is the study of complexity? And to what extent is it actually a “science”?

The current scientific efforts centered around complexity have several antecedents. The Cybernetics movement of the 1940s and 50s, the General System Theory movement of the 1960s, and the more recent advent of Systems Biology, Systems Engineering, Systems Science, etc., all share goals with Complex Systems science: finding general principles that explain how system-level behavior emerges from interactions among lower-level components. The different movements capture different (though sometimes overlapping) communities and different foci of attention.

To my mind, Complexity refers not to a single science but rather to a community of scientists in different disciplines who share interdisciplinary interests, methodologies, and a mindset about how to address scientific problems. Just what this mindset consists of is hard to pin down. I would say it includes, first, the assumption that understanding complexity will require integrating concepts from dynamics, information, statistical physics, and evolution. And second, that computer modeling is an essential addition to traditional scientific theory and experimentation. As yet, Complexity is not a single unified science; rather, to paraphrase William James, it is still “the hope of a science.” I believe that this hope has great promise.

In our era of Big Data, what Complexity potentially offers is “Big Theory” — a scientific understanding of the complex processes that produce the data we are drowning in. If the field’s past contributions are any indication, Complexity’s sought-after big theory will even more profoundly transform our understanding of the world.

It’s something to look forward to. In the words of playwright Tom Stoppard: “It’s the best possible time to be alive, when almost everything you thought you knew is wrong.”

Discussion Questions

  1. Can you identify any ways in which your own way of thinking has been changed by Complex Systems science?
  2. The discussion above stated that when systems get too intricately networked, “the threat is complexity itself.” The network scientist Duncan Watts suggested that the notion “too big to fail” should be rethought as “too complex to exist.” Should we worry about the world becoming too complex? If so, what should we do about it?
  3. To what extent do you think the ideas of complex systems are new? What would it take to create a unified science of complexity?

Resources and Further Reading:

http://complexityexplorer.org

Anderson, P. W. More is different. Science, 177 (4047), 1972, 393-396.

Bettencourt, L. M., Lobo, J., Helbing, D., Kühnert, C., and West, G. B. Growth, innovation, scaling, and the pace of life in cities. Proceedings of the National Academy of Sciences, 104(17), 2007, 7301-7306.

Franks, N. R. Army ants: A collective intelligence. American Scientist, 77(2), 1989, 138-145.

Hayden, E. C. Human genome at ten: Life is complicated. Nature, 464, 2010, 664-667.

Krugman, P. How did economists get it so wrong? New York Times, September 2, 2009.

Miller, J. H. and Page, S. E. Complex Adaptive Systems. Princeton University Press, 2007.

Mitchell, M. Complexity: A Guided Tour. Oxford University Press, 2009.

Newman, M. E. J. Networks: An Introduction. Oxford University Press, 2009.

Watts, D. Too complex to exist. Boston Globe, June 14, 2009.

West, G. Big data needs a big theory to go with it. Scientific American, May 15, 2013.

Discussion Summary

My Big Questions essay on complexity elicited several thought-provoking comments from readers. Like the field of complexity itself, the discussion ranged across many topics, including questions about what makes systems complex, whether there is intrinsic randomness in nature, the role of non-coding DNA, the existence of “symbols” in the brain, and several other fascinating issues in modern science. Many thanks to the people who took the time to comment on my essay! I will summarize some highlights of the discussion here.

In my essay, I stressed the importance of nonlinear interactions among the components in a complex system. I said that “nonlinear interactions cause the whole to be ‘more than the sum of its parts.’” One commenter asked about the role of nonstationarity as a factor contributing to complexity. Nonstationarity refers to the idea that the statistics underlying a system can change over time. Complex systems often exhibit nonstationarity, but this can itself be a result of nonlinear interactions. I would say that nonstationarity should probably be viewed as an effect, rather than a cause, of the complexity of a system.

My discussion of the unpredictability of complex systems led one commenter to ask whether there is in fact intrinsic randomness or unpredictability in the world, or whether the unpredictability we see in complex systems is simply a result of our lack of knowledge. This is an important question that is also at the heart of some arguments in theoretical physics. Does quantum mechanics involve “true” randomness, or are there “hidden variables” whose invisibility masks what is actually a deterministic universe? I think most physicists today believe in fundamental randomness at the quantum level, and the existence of chaos implies that fundamental randomness can in principle be amplified at the macroscopic level to produce fundamental uncertainty in complex systems. But I can’t say whether this accounts for the uncertainty we currently have about specific complex systems in nature.

Another commenter asked about the relevance of Kurt Gödel’s incompleteness theorem in looking for a “unified theory” of complex systems. Does Gödel’s theorem preclude a unified theory, because of intrinsic limits to deduction? My response is that scientific theories are different from mathematical theorems because they are largely based on induction, not deduction. One doesn’t prove a scientific theory in the way one proves a mathematical theorem. Scientific theories are supported by empirical evidence, but never definitively proved in the sense of mathematical proof.

In the essay I mentioned the notion that as systems become more complex, the risk of catastrophic failures becomes higher. One commenter followed up on this by asking the following: “Complexity ‘science,’ any ‘science’ today, seems to grab hold of the panoply of tools in the computer’s bulging software toolbox to collate, regress, smooth, iterate, animate, graph, color, prettify and do more with data, with software from many sources, with data compiled by computer according to rules derived from other work…. At what point does work drawing on many interdisciplinary people, papers and tools, become too complex to depend on?” The commenter goes on: “Is Complexity science, after a certain point, self-defeating because of its own complexity?”

This is an interesting question. My own response is this: Just because complexity science studies complexity doesn’t mean the science itself needs to be complex. The goal, of course, is to better understand the systems we currently call “complex.” Science always strives for simple explanations (cf. Occam’s razor), and the science of complex systems is no exception. The science of complexity will be successful when it shows that complexity in nature can be explained by simple, common principles. There’s a famous quotation (author unknown) that says something like “I would give my life for the simplicity on the other side of complexity.” That’s the simplicity that I think all science is striving for.

Another commenter voiced the opinion that we should worry not about the world becoming too complex, but rather “we should worry that the world will not become complex enough to solve the problems we are facing.”

My favorite statement from the comments is also from this last commenter. In the essay, I paraphrased William James: “Complexity is not yet a science; it is the hope of a science.” The commenter responded, “Complexity may not just be ‘the hope of a science’, it may be the hope of all sciences.”

New Big Questions:

  1. Is the unpredictability we see in complex systems due only to our lack of understanding, or is it due to some intrinsic unpredictability in nature?
  2. Do you agree that “we should worry that the world will not become complex enough to solve the problems we are facing?”
  3. Does the science of complexity have to be complex itself?

26 Responses

  1. Fred says:

    I find the emphasis on nonlinearity in complexity to be misplaced.  Instead, I would suggest nonstationarity is a bigger contributor to complexity.  Do you differentiate between the two and if so, what is your view on nonstationarity (which could be argued as a proxy for population heterogeneity)?

    • Melanie Mitchell says:

      Fred,

      I don’t have a good answer to this.  Nonlinearity and nonstationarity are quite distinct ideas, though there can be some interplay between them.  As I mentioned in the essay, nonlinearity refers to the idea that the behavior of the whole system is different from what you would predict from summing up the parts.  Nonstationarity refers to the idea that the statistics underlying a system can change over time.  The details of these two notions can get pretty technical.

      You’re certainly correct that the systems we call “complex” can exhibit both nonlinear interactions among the components and non-stationary behavior over time.  It’s hard to say, for a given system, which one is a bigger contributor to complexity — it depends on what kind of complexity you’re talking about.  Of course it’s hard to talk about these ideas without specific examples of systems and specific notions of complexity.  I’m not sure why you say that nonstationarity is a proxy for population heterogeneity — it seems to me that these are independent properties of a system.

      • Fred says:

        Thanks, Melanie.  In your prior response to George you mention parameter uncertainty leading to unpredictable systems.  Heterogeneous populations generate nonstationary summary statistics.  In finance (a decidedly complex system), it has been shown that the first two moments of a distribution are nonstationary.  Various attempts at improving estimation techniques (mainly through resampling and simulation) have not yielded valid estimates, leaving us with great parameter uncertainty and, unfortunately, an unpredictable system.  The question turns to why.

        We must look to the population from which we are estimating our parameters.  We are certain of a homogeneous population’s statistics and parameters (army ants, grains of sand, or 100×100 pixels per Bak et al.).  Thus, if a system’s components exhibit nonlinear interactions but they do so with no variance to their parameter estimates, “complexity” is reduced to “tedious” and predictions can be made.  We cannot measure every individual of a population, but if the samples exhibit nonstationarity, then these results are the artifact of population heterogeneity and responsible for the deviation from the desired certainty.

        “More is different” seems to speak of the additivity of causal flow within a system rather than nonlinearity of the components.  You are right to point out that each system deserves its own treatment with specific examples and specific notions, but generally, nonstationarity should be the primary focus to address (as a proxy for population heterogeneity).  Thanks for your response and further insights.

        • Fred says:

          I forgot to mention one supporting point.  The population heterogeneity issue I raise is not new.  In economics, Homo Economicus was developed in an attempt to homogenize the population with axiomatic assumptions from decision theory.

          It did not work…

  2. George Gantz says:

    Melanie – Thank you for the article, and for your book (which I devoured recently).  You have a wonderful way of explaining these topics without getting bogged down in the technicalities.  In response to your first question, one of the fundamental implications of complexity theory is the inherent non-predictability of many physical processes.  This seems to be another death-blow (quantum physics being the first) to the concept of determinism, which rests on the specificity of causal pathways in determining outcomes.

    I am curious about your sense of what other grand conclusions might be drawn from the different findings in complexity theory.  Is there a possibility that the mathematical properties discovered, for example in the study of fractals, are a fundamental structure that physical reality must and always will obey?  Or to ask a simpler question, is the math descriptive, or is it prescriptive?

    I also worry about whether math will ever yield the “Big Theory” as you hypothesize.  Specifically, in accordance with the work of Kurt Godel, all the far-flung corners of math are themselves subject to his incompleteness theorems, and as long as we insist on consistency, there will be truths that we will be unable to determine.

    • Melanie Mitchell says:

      George — Nice to hear you “devoured” my book — I hope it was tasty! 🙂

      The question of determinism is still hotly debated in philosophy and physics circles.  Quantum physics seems to say there is fundamental randomness in the world, but there are people who disagree.  The arguments get pretty technical, physics-wise, and I usually can’t follow them.

      Interestingly, what the mathematics of chaos tells us is that, in the real world, determinism does not imply predictability.  That is, you can have a completely deterministic system (e.g., the famous “logistic map”), whose mathematics you understand perfectly, but, for certain values of the parameters of the system, if you have any uncertainty at all about the current state of the system, no matter how small your uncertainty, the system won’t be predictable.  To me, this is an astounding result.

      Of course this touches on the touchy subject of “free will”.  Fortunately I’m no philosopher, so I don’t have to argue one way or the other on that!

      I’m not sure what you mean by “is the math descriptive or prescriptive”.  Maybe you can clarify this for me.

      As for Godel’s theorem, I don’t personally think it’s very relevant to science.  Godel’s theorem tells us that if an axiomatic system (e.g., arithmetic) is consistent, then there are true statements that can’t be proved within the system.  This might be bad news for mathematicians, though no one knows if any mathematical statements we care about have this limitation.  

      In short, Godel’s theorem is about possible limits to deduction.  I don’t think much of science is about deduction or proving theorems.  It’s an inductive endeavor.  We create models to account for data.  We try to find the simplest models that are good at predicting phenomena, and we try to understand their mechanisms.  There’s no proof that these models are “correct”; they are “verified” only by showing that they work on data we have seen so far.  In short, induction, not deduction, is the way most science works.

      • George Gantz says:

        Thanks for the reply.  Yes, the book was tasty but left me hungry for more.  PS – How can a determinist conduct a debate on determinism?

        Perhaps I am overly concerned about Godel – but it seems to me the inductive process applies to the mapping of physics to math – the math part is deductive.  Can we recognize inconsistency or incompleteness when we see it?

        As for “descriptive” or “prescriptive”, I was asking whether the mathematical truths to which physics conforms are a necessity or a coincidence.  This is in reference to Max Tegmark’s work (see the recent article in Nautilus, or his new book Our Mathematical Universe), or Mario Livio’s book Is God a Mathematician?  Must physics conform to math?  Does the math come first?

  3. Hernan Eche says:

    Hello,

    Melanie, thank you for the introduction to complexity course!  It’s inspiring to read and listen to someone who has your knowledge and still keeps alive such genuine curiosity.

    ————————–

    1. Can you identify any ways in which your own way of thinking has been changed by Complex Systems science?

    It made me feel good.  How?  Well, it allowed me to know there are many people out there working on questions and issues that I consider contemporary, and that I consider to be a next step for science.  I’ve even found some answers along these years (automata hierarchies and computational complexity classes are a great logbook of this struggle to tackle complexity).  Perhaps in other centuries it was fine to wonder about the earth’s shape, but today complexity is ubiquitous: not because there was no complexity before, but because now we are surrounded by human-made complex systems.  We know that a very complex system such as the Internet can be implemented and understood (in some way, by layers).  Citing Gregory Chaitin: “before we could recognize natural DNA software as such, we had to invent artificial software.”  Now that we have discovered complex systems, we see them everywhere, because the instruments and tools we have allow us to see differently: tools that are not only hammers that make every problem a nail, but, as a telescope allows us to see far with vision, computers allow us to see far with thought.  It’s amazing how people such as Philip Warren Anderson could foresee a layered model (unavoidable today, now that we have seen the Internet working).

    Complexity science is interesting not because it’s an already successful or useful theory or practice, but because it addresses questions that arise today from watching everyday systems and nature — at least for anyone interested in life, knowledge, and its limits.

    ————————–

    2. The discussion above stated that when systems get too intricately networked, “the threat is complexity itself”.  The network scientist Duncan Watts suggested that the notion “too big to fail” should be rethought as “too complex to exist.”  Should we worry about the world becoming too complex?  If so, what should we do about it?

    I think we have to trust in the future and bring ethical action to everything we do: accept mistakes, but never give up on ethics.  Our words and actions may seem small or large, but complexity has taught us that there is no such thing as a little thing.

    ————————–

    3. To what extent do you think the ideas of complex systems are new? What would it take to create a unified science of complexity?

    As I wrote above, there are some new ideas and some old ones, but mainly we now have visible issues and ideas that we were not capable of foreseeing before.  I think we already have the ingredients, the tools, and the need.  So what stands between Newton and “Physics I”?  Time!  It will take time — a complexity issue in itself!

    • Melanie Mitchell says:

      Hernan,

      Thank you very much for your thoughtful responses to my discussion questions.  I enjoyed reading them.

  4. Meyer1953 says:

    I think that there is a definition of complexity that is pertinent to great things, and that there is another definition that pertains to mundane things.

    The definition that applies to mundane things would be, that those things do not have a focus of their relationship.  If one member is perturbed, no other member steps in to fill the gap left by the first.  A house of cards, or a sand dune, or a weather front would be such.  These things require vast computer resource to model, and they could well be worth so modeling, but they are mundane.

    For a vibrant complexity, such as a biological cell, there is an entire category of address, radically different from mundane analysis, that is required to perceive the relationship.  For the participants in any high system, its analysis must be functionally at hand in order for the system to function highly.  The basis of high systems, especially life processes, would fall within vibrant complexity.

    As it turns out, vibrant complexity is the kind that can be effectively understood.  This is nominally surprising, but makes better sense when one considers that a high system has, indeed, a thrust or a direction to begin with.  Then, when one also considers that we people are similarly endowed, it makes sense that we can do such analysis since it is simply a matter of self-discovery.

    Not that science is up to the task, as presently practiced, since science has been in thrall to mathematics since Newton announced calculus.  But understanding of vibrant complexity is available, and is so simple that it is electrifying.  It is also powerful.  Very, excruciatingly, powerful.

    I’m just telling the picture here, so that someone might consider it.  Vibrant complexity is discernible, and the discernment is available.  This is a hooray.

    Okay, it would be crummy to leave it at that.  I’ll put it this way, that an eight word statement reveals the whole analysis.  The core is here, in these eight words, though it may be necessary to spend some time and some heart in looking into them.

    The eight words:

    A sentence is composed as noun and verb.

    Equivalently:

    A noun and a verb compose a sentence.

    From here on, vibrant complexity falls open like a book.

    • Melanie Mitchell says:

      Your distinction between “mundane” and “vibrant” complexity is interesting.  I’m wondering if it is the same as (or largely overlaps with) the distinction between complex systems that are not adaptive and those that adapt.  This is a distinction that some in the complex systems community make.  Some people go so far as to include adaptation in the list of requirements for a “complex system”.  Your contrasting eight-word phrases seem to imply a difference between being passive and active; being constructed by something outside oneself, or actively constructing oneself.

      • Meyer1953 says:

        I use “vibrant complexity” to refer to a system that has an internal definition, something that draws its components into a direction that some of them cooperate to pursue.  Hence the analogy with biological cells.  A “mundane complexity” is something that just swirls or tumbles and so forth, which may be quite important to model but has no internally defined course or concourse.  The vibrant system would be inherently adaptive; the mundane would merely tumble off a different cliff if given choice B instead of choice A.

        The whole point of excitement over vibrant systems is that, precisely because they have an internal definition, a locus about which members cooperate, the system can to a real extent be discerned.  Especially poignant is that such a system, once discerned, has everything to do with far vaster systems, such as the very ones that give us our own biology.  So they pertain to us, and we can draw on our own experience and insight to discern them.  A double victory.

        The two eight-word phrases are merely rewordings of the same statement, for emphasis and robustness.  The point of the phrases is that the very and exact key to all of vibrant complexity is the extreme power of “bivalence,” here represented as the bivalence “noun and verb.”  But I present the concept as an instance of it, a grammatical sentence, which is a bold and elegant introduction.  A picture, if you will, rather than the thousand words.

  5. Conroynaas says:

    Hi Melanie, Ralph Stacey draws on analogies from the complexity sciences to challenge and refresh the dominant power-coalition thinking and practices in managing and leading organisations. I would like your opinion on his concept of “complex responsive processes of relating,” which brings in the essential characteristics of human agents. The view is of organisations as ongoing patterning of interactions between people which produce further patterns of interaction, not as systems that are ‘things’ outside the interactions and that can be viewed from outside. The macro emerging from the micro in social processes of communicative interaction has crucial implications for life in organisations, focusing on how members cope with the unknown as they continually create organisational futures together: self-organizing evolution that includes individual sense-making that is both scientific and non-scientific, rational and emotional, with consciousness and psycho-social behaviours unique to humans. This view challenges the dominant discourse on leadership and managing change, and also makes the distinction between organisational dynamics (organisational behaviour) and strategic management redundant. Although we may be in a non-science arena, a scientific attitude toward the contribution of the complexity sciences as an analogy, instead of mechanistic and biological analogies of organisations, seems to me more useful and factual, especially as it draws on so many disciplines.

    And would you include psychologists and sociologists in this transformation initiative?

    • Melanie Mitchell says:

      I’m not familiar with the work of Ralph Stacey, so I can’t give you an informed opinion on this.  Certainly human organizations can be complex systems, and I’m sympathetic to the focus on interactions as a way of understanding these organizations.  I also like your observation that “strategic management” is not distinct from organizational dynamics.  There are a number of people who have used ideas from complex systems science in thinking about organizational dynamics (in the sense you’re referring to), but I don’t know very much about this area.

      I would certainly include psychology and sociology as fields whose perspectives are changing due to ideas from complex systems.  Sociologists, especially, think about networks, and some of the work coming out of network science is directly relevant to questions in sociology.   

    • Conroynaas says:

      I note the Plexus Institute refers to both you & Prof. Ralph Stacey – I think managers don’t interact enough with scientists & vice versa…more to the detriment of mgt. http://networksingularity.com/2010/03/17/plexus-institute–coming-events.aspx

  6. oddfractionaldimension says:

    These are only my conjectures, but I propose creating a small quantum device that records every visual action of any human event or happening, to get the exactness of measure needed to create a computer programming language that can handle shorter segments of time versus exact movements in the recorded medium. Yoctosecond lasers (or faster) can be used to compare the event to the speed of light. My quantum computer will be built out of my chaos-theory-modified database, which is founded on actual real-life events and video extracts! I use this in an augmented-reality sunglass-type design for my visually controlled I/O. I can actually get some real mathematically tested experiments.

  7. herb wiggins says:

    “And as for that 98% of non-gene DNA, which in the past was dismissively called ‘junk DNA’, what is its function?”

    As this is my first post here, I would like to orient everyone to where I’m coming from, and to clarify the discussion in a more defined way. I was essentially a field biologist from age 12, and am now 50 years older and, I hope, more knowledgeable. As a retired medical professional with a postdoctoral degree, I have always been interested in the “Big Questions,” and worked in the psych/rehab/neuro areas and sciences for 35 years for that reason, among others. My approach is that of Structuralism, of which “More is different,” as written above, is a key principle and very welcome to see. I like to take things from the specific to the general, with good, existing scientific examples showing these structure/function relationships. Thomas Kuhn’s “The Structure of Scientific Revolutions” gives great examples of the structuralist approach to answering these “Big Questions.”

    We knew that junk DNA MUST have a function for one big reason: the Least Energy Principle (LEP). This is an enormously important principle in our universe because everything seems to abide by it, one way or another. To paraphrase the MH Encyclopedia of Science & Tech, the LEP provides a means of solving problems which are otherwise intractable.

    Let’s apply it here. So-called “junk DNA” can’t be junk, or the cell wouldn’t have so much of it. That much DNA costs a LOT of energy and resources to create and maintain, which would violate the LEP. So it MUST do something. Complex systems thinking shows that very few parts of complex systems function in just one way. A > B is linear, not complex. A > B, C, D’, and a few others, such as is found in the multiple functions of defined cell receptor sites, is a good, real example of a complex system at work. This is a characteristic of complexity, which underlies almost all biology.

    This is what has recently been found as one part of an answer to what junk DNA does. It probably does lots more, but this is one piece of an answer. http://www.evolutionnews.org/2013/06/more_light_is_c073041.html The entire article is interesting, but scroll down to the paragraph titled “Epigenetics and Environment”: it makes evolutionary sense, because part of the junk DNA regulates the expression of the genes themselves and makes possible greater diversity than simple gene expression does.

    Taking two specific examples of dominantly inherited genes with incomplete expression, diabetes and the paranoid schizophrenia/bipolar-related disorders, we see this. Diabetes in my family can be traced back seven generations to 1845, to my great-great-grandfather, who died of it ca. 1920, by his death certificate in Missouri. The dominant gene has continued through the family to my generation and the next. BUT, and this is the important point, it’s not always expressed at the same age, or with the same force. Some get by with simple oral hypoglycemic agents. Others must use insulin to control their blood sugar. And some do not get it at all!! But, interestingly, in studies of twins with an AODM (adult-onset diabetes) gene, if one twin gets diabetes, within two years the other typically will as well. Still, it’s a probability, not a certainty: another complex-system characteristic. It’s NOT linear determinism at all.

    The same is true of the paranoid schizophrenia/bipolar disorder genetics: that dominant gene is closely associated not only with full-blown schizophrenia but also with real bipolar disorder without schizophrenia. This is complexity in a nutshell. And not only that: in contrast to diabetic genetics in twin studies, if one identical twin gets schizophrenia, the other twin has only a ~50% chance of coming down with it. What’s more, those who carried that gene might not get either disorder at all; it would often be unsuspected, passing as a carrier state. How’s that for complexity?

    But the point is this: let’s apply one of the important characteristics of the evolutionary model. If the gene is to survive, then it must do something to enable the possessor to survive. It must have some advantage. If a gene were only off or on, or in its usual state of regulating certain metabolic effects due to the concentration of a substance creating a feedback effect on the gene (such as the one which controls insulin production, which responds only to high blood sugar), then that’s simple gene expression. But, and here it is again, if a single gene can be expressed in many ways, in addition to its metabolic regulatory effects, then the organism has gained a LOT more room to adapt to the environment. The epigenetic effects of the junk DNA actually INCREASE the diversity of the organism’s potential responses beyond simple genetic expression. And THAT may be why junk DNA is preserved and passed on. When genetic diversity is increased, the chances of survival increase. So, using an evolutionary principle, we can understand why there is junk DNA and epigenetics. Do you see how using complex-systems characteristics can simplify vast amounts of data, and is preferable to linear thinking?

    This is an example of process thinking using the characteristics of complex systems, combined with real examples. It is specific, and it goes directly from real, existing examples to the higher abstractions of the LEP, genetic diversity as a survival characteristic, and complex-systems characteristics. It is creative, and it works. It is a valid, useful method.

    And using the above evolutionary principle, why then is AODM so very prevalent in our society? If it kills and shortens lifespans, often markedly, why is the gene so common in our population? The answer to that question is even more interesting, but it is off topic.

    This is a somewhat more rigorous and concrete approach than other methods, and it suggests how many scientists and trained professionals think within complex-system models.

    • Melanie Mitchell says:

      Thanks for your comments!  I agree with much of what you say, and you bring up some really fascinating areas.  One thing I disagree with is “If the gene is to survive, then it must do something to enable the possessor to survive.”   For example, my understanding is that there are genes with no function, or maladaptive function, that are able to “hitchhike” along with other, more “useful” genes.   I think it is incorrect to assume that every gene we have (or every phenotypic trait, for that matter) has to have some adaptive purpose. 

      By the way, as a layperson interested in genetic networks and the function of non-coding DNA, I found Sean Carroll’s book “Endless Forms Most Beautiful: The New Science of Evo Devo” to be a great introduction to this area.  Highly recommended.

  8. donwilhelm3 says:

    “Complex systems are able to encode and process information in a way that is greater than the sum of their parts.” By this definition, the human brain is not a complex system. If we use Einstein’s method and rethink an assumption (that language is symbolic), the brain is a simple dynamic system that lets input act directly on output, without a computational layer in between. If words are not symbols, but rather vocal muscle actions in the culture, they do not represent any entity. Thus there are no representations in the brain, and nothing to do computations on. All memory is unconscious, culturally trained muscle-activation patterns, made conscious by perception of the results of the activations, which allows conscious adjustment of the patterns, in a feedback loop, until the task is accomplished. Since this algorithm cycles at perhaps 10–100 Hz, the Movie in the Mind appears continuous. Thus, perception and action are not separate mechanisms; each is part of the other, allowing all memory to be unconscious muscle-activation patterns. This explains consciousness. There is no encoding or processing of information, since perception is directly connected to output. The classical Sandwich model of AI assumes a computational complexity that is not there, which is why classical AI cannot think.

    • Melanie Mitchell says:

      Thanks for your comment.  I guess we can agree to disagree on this!    That is, I agree that “classical AI cannot think” but don’t agree that there are no representations in the brain.   

      • donwilhelm3 says:

        Having representations in the culture allows pin-point precision.  Since we can access the culture by vocal actions, there is no need for a second set of representations, in the brain.  Especially since brain limitations are leading to theories of “sparse” representations, which are not at all pinpoint accurate.  The muscular algorithm also explains consciousness and qualia, since perception and action are not separate from each other.  This is Einstein’s method – to rethink an assumption.  In this case, the assumption is that language is symbolic.

  9. Zeta says:

    Hi Melanie,

    Thank you for this wonderful article and the introduction to complexity course!

    For me, to think in the way of complexity is to unify the ways of thinking of reductionism and holism. More specifically, to study a system (let’s call it S), reductionism emphasizes the properties of its basic components (let’s call these properties R), while holism emphasizes the emergent properties of the system (let’s call these properties H). What complexity science tells us is that, first, neither R nor H is a complete characterization of S; second, H is different from R (“More is different”); and third, only by studying R, H, and their dynamic interactions can we fully understand S. If you agree with the view above, I would say the ideas (though not the science) of complex systems are not new. The concepts of Yin (corresponding to R) and Yang (corresponding to H) in Daoist philosophy describe exactly the dynamic interactions between R and H.

    What would it take to create a unified SCIENCE of complexity? To answer this question, we may first ask: if such a unified science of complexity (let’s call it Science101) existed, what would we expect from it? Should it help us understand “ant colonies, cells, brains, immune systems, social groups, and economic markets”? If so, what would be the relationship between Science101 and all the scientific disciplines specialized in each individual system? Following the way of thinking described in the paragraph above, I would say that Science101 should subsume all current scientific disciplines as its components, but it would never replace any of them. In other words, Science101 would be a complex system emerging from creative interactions among all established and to-be-established scientific disciplines. So, what should we do now? Obviously, we should encourage and support cross-disciplinary studies. And this is exactly what the founders of the Santa Fe Institute (the Cowan Collaborative, http://www.santafe.edu/support/the-history/conception/ ) had in mind. We should embrace the ideas of Open Data and Open Science, which will support anyone who wants to do cross-disciplinary studies. This, in turn, would create positive feedback for these open sciences.

    Should we worry about the world becoming too complex? My answer is NO. Instead, I would like to say that we should worry that the world will not become complex enough to solve the problems we are facing. Joseph A. Tainter’s work (“Problem Solving: Complexity, History, Sustainability”, http://link.springer.com/article/10.1023%2FA%3A1006632214612 ) explained this point very well.

    Above all, Complexity may not just be “the hope of a science”, it may be the hope of all sciences.

    Best,
    Zeta

    • Melanie Mitchell says:

      Hello Zeta,

      I really enjoyed reading your comment.  I especially like your idea of Science101 itself being an “emergent” complex system, with its components being other sciences.  I also like your “hope of all sciences” notion.  I’ll take a look at the book by Tainter.  Thank you!

  10. zhuhongtao says:

    If you are a human being, please tell me how to create a human by a simple method. First, you should know what a human person can do. But perhaps, when you think about it, you will see that you could never create any person like us.

  11. wondering14 says:

    The author writes, “Complexity refers not to a single science but rather to a community of scientists”, and “to what extent is it actually a “science”?”, and of “a scientific understanding”.

    Do we even know anymore what science, a scientist, or a scientific understanding is? As the years pass, each definition keeps expanding, becoming looser, less definable, more probabilistic. What is science, a scientist, an understanding?

    Complexity “science”, like any “science” today, seems to grab hold of the panoply of tools in the computer’s bulging software toolbox to collate, regress, smooth, iterate, animate, graph, color, prettify, and do more with data, with software from many sources, and with data compiled by computer according to rules derived from other work. At what point does work drawing on many interdisciplinary people, papers, and tools become too complex to depend on?

    In other words, when we find out why there is self-organization in an ant colony, can we depend on the reason being true? When does a ladder built from rungs from different sources, fastened in different manners by different workers, become undependable? The ladder might reach the roof, but it won’t hold the weight of a heavy roofer.

    Is Complexity science, after a certain point, self-defeating because of its own complexity? What will supersede Complexity science?

    • Melanie Mitchell says:

      Dear wondering14,  

      My response is:  just because “Complexity science” studies complexity doesn’t mean the science itself needs to be complex.  The goal, of course, is to better understand the systems we currently call “complex”.  Science always strives for simple explanations (cf. Occam’s razor) and the science of complex systems is no exception.  The science of complexity will be successful when it shows that complexity in nature is explained with simple, common principles.  There’s a famous quotation (author unknown) that says something like “I would give my life for the simplicity on the other side of complexity”.    That’s the simplicity that I think all science is striving for.