**Complex Behavior from Simple Rules**

Stephen Wolfram

Creator, *Mathematica*

MacArthur Fellow

Author, *A New Kind of Science*


The massive progress of the scientific enterprise over the past several centuries has owed its success in large part to the development of mathematical models capable of predicting real-world events. However, the limitations of describing the world with mathematical equations become readily apparent as one begins to deal with complex phenomena. There have been many approaches to describing such complexity, but until now, none has proposed to define a new kind of science. Well, joining us today to discuss these issues is Dr. Stephen Wolfram. Dr. Wolfram is founder of Wolfram Research, the creators of Mathematica. He received his PhD in theoretical physics from Caltech at the age of 20 and was awarded a MacArthur "Genius" Fellowship just a few years later. He is the scientist and author behind the new book *A New Kind of Science*. Charles Lee (**CL**) talks with Stephen Wolfram (**SW**):

**CL:** Dr. Wolfram, thank you very much for joining us today.

**SW:** Thank you.

**CL:** Well, it’s certainly our pleasure. You’ve written a fascinating book and no doubt a very controversial one. Before we actually get into the ideas of *A New Kind of Science*, I’m just curious if you could explain to our audience the broad problem in science that this is trying to address.

**SW:** You kind of alluded to it in your introduction. I mean, for the last 300 or so years, the exact sciences have been dominated by what is really a good idea, which is the idea that one can describe the natural world using mathematical equations. And that idea has led to lots of the advances that we’ve seen in science in the past 300 years, but there are also places where science has not so far been able to make exact progress, and where one sees lots of complex phenomena in nature, one is confronted with the same kinds of problems over and over again. And the thing that got me started on the science that I’ve been building now for about 20 years or so was the question of, okay, if mathematical equations can’t make progress in understanding complex phenomena in the natural world, how might we make progress? And so the thing that I realized was, well, if you are going to do theoretical science at all, you have to assume that nature operates according to some kind of definite rules. Do these rules have to be based on the same constructs we have set up in human mathematics, things like numbers, exponentials, and integrals and so on? Or can the rules somehow be more general? So the thing I realized rather gradually — I must say starting about 20 years ago, now that we know about computers and things — is that there’s a possibility of a more general basis for rules to describe nature. And that more general basis is the kind of rules that we can embody in computer programs. So, what I ended up doing was asking the question, if nature uses these more general kinds of rules, how might that work? So the first issue is, what does a typical, arbitrarily chosen program do? The programs that we use in practice tend to be programs that are set up for particular purposes, let’s say to do word processing or to do mathematics or something like that.
But the question I wanted to ask was a basic science question, which is: if we look at very simple programs, programs that could be described by, let’s say, one line of computer code, programs where the instructions were chosen at random and there are just a few of them, what would programs like that typically do? And what I certainly thought was that if the program was simple enough, then somehow what it does must also be correspondingly simple. But I decided that I should actually do a systematic experiment to find out what simple programs actually do, and what happened was that I found something completely different from what I had expected and extremely surprising, at least to me, which was that even some of the very simplest programs I constructed ended up showing behavior that was incredibly complicated. And what was more exciting was that the behavior they showed seemed to be complicated in the same kinds of ways as a lot of the behavior we see in nature. So this kind of got me started on this direction of understanding that there was a new way to approach doing science, based on these more general rules that can be embodied in simple computer programs. And I guess the core of what I tried to do is a new kind of basic science that is concerned with exploring the computational world, sort of asking the question: if one has an instrument like a telescope or something, one looks out into the astronomical world and sees the phenomena out there. With computers, we can look out into the computational world and see what kinds of phenomena are out there, and what I found is that there are some very exciting phenomena out there. The core of the basic science that I tried to build is concerned with the question, “What’s out there?”

**CL:** I wonder if you could explain what some of these types of programs might look like, as this is certainly well illustrated in the book.

**SW:** Right, so the typical kind of thing, the kind I found particularly easy to display graphically and so on, is things called cellular automata. A typical setup is a line of cells, where each cell is either black or white, and let’s say you start off with just one black cell in the middle and all the other cells white. And then the way it works is that the system evolves in a sequence of steps, and at each step, the color of a particular cell is determined by the colors of the cell right above it and the cells to its left and right on the previous step. You just have some simple rule that says what the color of the cell will be given those previous colors. Start off from one black cell at the top, and you might have thought, with a really simple setup like that, that you would get patterns that always look visually very simple, and that’s certainly what I thought would be the case. But if you actually just try out all the possible rules (you can kind of number all the rules), when you get to, for example, Rule 30, one of my all-time favorite simple programs, you find that you start off with one black cell and it makes this pattern that looks incredibly complicated and actually quite random. In fact, it makes good enough randomness that we’ve been able to use it as the source of random numbers in Mathematica for the past 15 years or so. But the point is that it’s an extremely simple rule. It’s the kind of thing that one can readily imagine some system in nature might follow, yet even though the rule is very simple, the behavior that it produces is extremely complicated. Our usual intuition, which we get from our experience in everyday life and engineering and so on, is that to make something complicated, you somehow have to go through lots of effort.
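The scheme just described can be sketched in a few lines of Python. This is a minimal illustration of my own, not code from the book; it uses the standard numbering in which the binary digits of 30 encode the new cell color for each of the eight possible (left, center, right) neighborhoods:

```python
# Minimal sketch of the cellular automaton setup described above.
# Each cell's new color is determined by its left neighbor, itself,
# and its right neighbor on the previous step.
# "Rule 30" packs the update table into the binary digits of 30 (00011110).

RULE = 30

def step(cells):
    """Apply one update; cells beyond the ends of the list are white (0)."""
    padded = [0, 0] + cells + [0, 0]          # let the pattern widen each step
    return [(RULE >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
            for i in range(1, len(padded) - 1)]

def run(steps):
    """Start from a single black cell and collect every row."""
    cells = [1]
    rows = [cells]
    for _ in range(steps):
        cells = step(cells)
        rows.append(cells)
    return rows

if __name__ == "__main__":
    width = 2 * 15 + 1
    for row in run(15):
        print("".join("#" if c else "." for c in row).center(width))
```

Printing the rows as `#`/`.` characters shows the triangular pattern growing downward, with the irregular, seemingly random texture discussed above.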
What’s remarkable about something like Rule 30 is that even with a simple rule, it effortlessly produces extremely complex behavior, and what’s so interesting about that is that in nature we often see that same kind of thing going on: lots of complicated behavior produced effortlessly. I think it is, in a sense, embarrassing for our current technology that if you compare artifacts with systems from nature, one of the ways to tell that something is an artifact is that it tends to look simpler. It seems like nature has some secret that it uses to make complicated things, which we, when we do engineering, don’t yet seem to be using. And I think this phenomenon that we see in something like Rule 30 is the key to that secret that nature has, which allows it to make complicated things very easily.

**CL:** So rather than complexity arising from complexity, it arises maybe from a simple type of rule.

**SW:** Right. The issue is that if you restrict the kinds of rules you look at to ones that you can readily analyze (and that’s what tended to happen in the mathematical approach to science), then the rules one looks at, the equations and so on, tend to be restricted to ones where there’s at least some kind of analysis that can readily be done on them. But if you explore the computational world arbitrarily, just looking at what’s out there, not choosing what you study according to what you can analyze, then you find that there are all sorts of very different kinds of things that can happen, and those seem to be a lot more like what nature is often actually using.

**CL:** How might these rules then be implemented in the physical universe? Where would we find these rules and how would they work?

**SW:** One thing that one has to understand about models of things is that models are abstractions of systems. When we look at, for example, traditional Newtonian mechanics — let’s say some model of the Earth going around the Sun being described by differential equations and so on — we don’t imagine that sort of inside the Earth, there are lots of little Mathematicas solving differential equations. What we imagine is that these differential equations provide some sort of abstract description of the way that the Earth moves around the Sun. It’s the same kind of thing with these simple programs. For example, if you are trying to make a description of how a mollusk produces a pattern on its shell, what one imagines is that these simple programs, let’s say cellular automata, describe some sort of abstract, idealized way in which a row of pigment-producing cells at the leading edge of the growing shell operates. I’ve looked at a bunch of different situations where one can dig in and take what one has learned from exploring the computational world and see how to apply it to particular systems in nature. It’s been interesting, because in lots of cases, phenomena that before seemed completely mysterious, where it seemed there was no way one could get any real understanding of how what was happening could happen, it now seems one can start to say things about. An example is fluid turbulence. If you have a fluid flowing past an object, it’s a universal thing that if the flow is fast enough, behind the object you get this random, complicated pattern of flow, so-called turbulence. And there’s a basic question, which is: why is there that kind of randomness in turbulence? Well, there’s a long story of what has been investigated about that, but the bottom line is that one still has not had a fundamental explanation for why there is that kind of randomness.
And I think that things like this Rule 30 phenomenon finally give one an actual explanation for why there is that kind of randomness, and they lead to predictions, some rather surprising predictions actually, about how this randomness should work.

**CL:** I see. And how does this differ from, say, the methods that chaos theory uses to try to explain randomness in terms of sensitivity to initial conditions and things like that?

**SW:** Right. So what happens is, if you look at, for example, the phenomenon of randomness, and you see something random happening, you can ask where that randomness comes from. And basically, there are three explanations that one imagines. One is something like the way that randomness happens when a boat is bobbing up and down in the ocean. There is nothing particularly random about the boat itself; it’s just the fact that the environment of the ocean is random that produces the bobbing of the boat. And the traditional scientific explanation for randomness is that it comes from noise in the environment. Then the chaos theory explanation, which is about a hundred years old but has been popular for perhaps 20 years, is the idea that no, it does not come from randomness in the environment; it comes specifically from the initial conditions of the system that one is looking at. So, for example, the typical case would be when you flip a coin. The fact that there is something uncertain about the initial velocity of the coin, because let’s say one flipped it by hand, means that when the coin finally lands, it’s random in a sense which way up the coin lands, because it’s that randomness that one put in through uncertainty in exactly how one flipped it at the beginning. But again, that’s sort of an explanation for randomness that says, well, the randomness does not come from inside the system we’re looking at, in this case the coin. It comes from outside the system, namely, from the way that the initial conditions were prepared for the system. Well, what happens in the things I looked at is that there is a third kind of randomness, which I call intrinsic randomness generation, that happens in things like this Rule 30 cellular automaton that I mentioned. What happens is that there is no kind of noise from the outside; there’s just a very definite rule that gets followed at each step. Similarly, there’s nothing that’s been elaborately prepared about the initial conditions.
The initial conditions might just consist of one black cell, but yet when you run the system, it just intrinsically produces randomness by the character of following its rules. There are analogs of that, actually, in areas of mathematics. For example, if you look at the digits of pi, the actual procedure for generating the digits of pi is very deterministic and even fairly simple, yet the digits, once produced, seem to us completely random. And this Rule 30 phenomenon is a version of that which is more general and more directly related to what happens in nature. That’s sort of the character of the explanation that one has for randomness that occurs in nature: it’s something that comes intrinsically from following these rules. And the rules that produce randomness are actually quite common if one samples the possible rules at random, but they are very rare (in fact, one will never find them) if one chooses the rules to study on the basis of being able to analyze them by particular methods of analysis that, for example, come from mathematics.
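As an illustration of this intrinsic randomness (again a sketch of my own, not code from the book), one can extract the center column of Rule 30, the column of cells directly below the initial black cell, which is the sequence mentioned earlier as a source of random numbers in Mathematica:

```python
# Illustrative sketch: the center column of Rule 30, run from a single
# black cell, read off as a stream of bits.

def rule30_center_bits(n):
    """Return the first n bits of Rule 30's center column."""
    cells = [1]
    bits = []
    for _ in range(n):
        bits.append(cells[len(cells) // 2])   # the middle cell of the row
        padded = [0, 0] + cells + [0, 0]
        cells = [(30 >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
                 for i in range(1, len(padded) - 1)]
    return bits

if __name__ == "__main__":
    bits = rule30_center_bits(2000)
    # A completely deterministic rule, yet the ones/zeros balance
    # comes out close to that of a fair coin.
    print(sum(bits) / len(bits))
```

The rule is fixed and the initial condition is a single black cell, yet simple statistical checks like this ones/zeros balance look like those of a random sequence, which is the "intrinsic randomness generation" being described.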

**CL:** So what do you think this then says about the fundamental nature of the universe, that simple rules can give rise to complex behavior? Does this mean that perhaps we just don’t understand the patterns that are generated and so characterize them as random, or does this entail something more fundamental about the universe?

**SW:** So when I first saw the Rule 30 phenomenon, my first instinct was that there must be regularity in these patterns; it’s just that there is something imperfect about our visual systems. There is really regularity here, but we just can’t see it. So, I ran all sorts of mathematical, statistical, and so on tests, and I found that, so far as I could tell, the thing really did seem perfectly random. Well, it took me a long time to really understand what foundationally was going on there so that I could explain it. It’s related to a thing that I call the principle of computational equivalence. Essentially, you can think of any of these processes that are going on, forming these patterns of black and white squares or whatever, as computational processes. It’s as if you put in some input to the computation at the top, and then the pattern that’s produced is the output from the computation. You can also think of the processes that you go through in analyzing these patterns as being computations, and then the question is which of these computations is more powerful: the one that’s used to produce the pattern, or the one we use to analyze the pattern? Now, one might have thought that something based on very simple rules about black and white squares and so on would somehow be intrinsically, computationally much less sophisticated than we as analyzers of the system will be. But the surprising thing that came out of explorations in the computational world has been this thing that I call the principle of computational equivalence, which basically, among other things, says one should expect that whenever there is a system that looks complicated to us, it will be just as computationally sophisticated as, for example, we are, or as any other system that we find in the universe. And in a sense, this principle of computational equivalence is the underlying foundational origin of this phenomenon that one can get very complicated behavior from very simple rules, because it says that even with very simple rules, you reach this threshold of computational equivalence; you reach systems that are equivalent in terms of their computational sophistication to any systems that we might have, for example, to analyze those systems. So this leads to a phenomenon that I call computational irreducibility. It has to do with the following thing. Traditionally, in science, one of the things one wants to do is to make fast predictions of what will happen in a system. So, for instance, if you are looking at the Earth going around the Sun, described by some simple two-body formula, and you want to know where the Earth will be a million years from now, you don’t have to trace a million orbits of the Earth around the Sun. You basically just fill a number into this formula and immediately get out the answer. And that’s a case where we, by doing a more sophisticated computation, have been able to reduce the effort necessary to figure out what will happen with the position of the Earth. But the thing that comes out from what I’ve done, and from this principle of computational equivalence, is that actually there are lots of systems where you can’t do this. There are lots of systems which are computationally irreducible, where effectively, to know what will happen in the system takes as much computational effort as the system itself has to go through to work out what it will do. So there’s a fundamental limitation on at least the precise predictions that we can make by scientific methods, so to speak.
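One way to make the contrast between reducible and irreducible systems concrete (my own illustration, using an example not taken from the interview) is to compare two cellular automata. Rule 90 is computationally reducible: its row at step t can be written down directly as binomial coefficients mod 2, with no need to run the earlier steps. For Rule 30, no such shortcut is known, so stepping through the evolution appears unavoidable:

```python
# Reducible vs. (apparently) irreducible behavior, sketched with two rules.
from math import comb

def simulate(rule, steps):
    """Run an elementary cellular automaton from a single black cell."""
    cells = [1]
    for _ in range(steps):
        padded = [0, 0] + cells + [0, 0]
        cells = [(rule >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
                 for i in range(1, len(padded) - 1)]
    return cells

def rule90_shortcut(t):
    """Closed form for Rule 90's row t: C(t, k) mod 2 at even offsets."""
    row = [0] * (2 * t + 1)
    for k in range(t + 1):
        row[2 * k] = comb(t, k) % 2
    return row

if __name__ == "__main__":
    t = 12
    assert simulate(90, t) == rule90_shortcut(t)  # the shortcut really works
    print(simulate(30, t))  # for Rule 30, step-by-step simulation is needed
```

The shortcut answers "what does row t look like?" in time roughly proportional to t, while the simulation has to do work proportional to t squared; for an irreducible system, something like the latter seems to be the best one can do.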

**CL:** Certainly the principle of computational irreducibility limits our predictive capabilities in describing some complex phenomena. So would you say it is better, then, to use mathematical models and add some factor to get the best approximation?

**SW:** Unfortunately, when you say let’s make an approximation because it turns out you can’t get it exactly, you can’t get certain kinds of approximations either. You can ask for a model of the approximations too, and that runs into the same issue. That’s not to say that one can’t say anything about what these systems do. In fact, as a practical matter, using simple programs you can often get efficient simulation methods which will allow you to say a lot in practice about what a particular system will do. In fact, this phenomenon of computational irreducibility just puts more pressure on having the best minimal models for systems. Because if, at every step in figuring out what a system will do, you have to do all sorts of elaborate numerical analysis and solve all sorts of complicated partial differential equations or whatever, it becomes very difficult in practice to figure out what the system will do. If you can use underlying models that are very simple programs, then as a practical matter you will often be able to figure out what the system will do. The most extreme case of all of this (you know, we’ve been talking about making models for things and so on), the most ambitious case, is for the whole universe. Having seen what happens in the computational universe, what happens when you look at these different rules and their consequences, and having seen that very simple rules can produce very complicated behavior, the ultimate question one might ask is: okay, what about our whole universe? We see this complicated behavior in our universe; could it be the case that all this complicated behavior is really the result of applying, over and over again, some extremely simple underlying rule?
Now, if you look at physics, for example, one might be pessimistic about that outcome, because as we’ve looked at smaller and smaller scales in the physical universe, it’s tended to seem that the kinds of mathematical approaches we have to use get ever more complicated. It doesn’t seem like this is coming to an end, and it can seem that we’re never going to find some simple underlying rule that describes our whole universe. But from what I’ve seen exploring the computational universe, I’m much more hopeful about that. In fact, I’ve done lots of work, and continue to do quite a bit of work now, trying to figure out precisely what might be the ultimate rule for the universe.

**CL:** We’re running a little bit out of time here, but I’m just curious: maybe you could describe what this rule for the universe might look like.

**SW:** Right. So one of the things that’s true about it is that if you want to fit everything in the universe into the very small package of a simple rule, then almost nothing about that rule will tend to be familiar from our everyday experience of the universe. There just isn’t room in one very simple rule to fit in the fact that there are three dimensions in space, or the mass of the muon compared to the electron, or whatever. It all has to be mixed together and packaged in a way that to us will necessarily look extremely abstract. And the best representation of what I think may be going on, that I’ve found, is to think about space as a giant network: there is a collection of discrete points, and all one knows is how each point is connected to other points. Then time works by updating this network in various ways, and what’s interesting is that the kinds of things one needs in order to get a consistent updating scheme in this network turn out to immediately imply special relativity. So in fact, by knowing basic things about the way that space and time are set up, one can immediately derive something one does not normally expect to be able to derive in physics: the principle of relativity. And one can even go on and derive the properties of gravity. And I think I’m beginning to see how to derive various features of quantum mechanics from the same kind of thing: a very simple underlying setup in which essentially the universe consists of this giant network, where all one knows is how each discrete point is connected to other discrete points in space. It’s a little abstract, and it takes a little bit longer to explain even what I’ve figured out so far. Once one actually knows the final answer, it will probably be somewhat easier to explain than when one is still looking towards the answer.
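As a toy picture only, and emphatically not Wolfram's actual candidate rule, the kind of setup described, a network of discrete points where all one knows is which point connects to which, and where time is the repeated application of a simple rewrite, might be sketched like this:

```python
# Toy illustration only (not the actual proposal discussed above): space as
# a network of discrete points, where all we know is the connections, and
# "time" is the repeated application of one simple rewrite rule.

def subdivide(edges):
    """One update: replace every edge (a, b) with a path a - new - b.
    A deliberately trivial rewrite rule, chosen purely for illustration."""
    new_edges = set()
    next_node = 1 + max(n for e in edges for n in e)
    for a, b in sorted(edges):
        new_edges.add((a, next_node))
        new_edges.add((next_node, b))
        next_node += 1
    return new_edges

if __name__ == "__main__":
    net = {(0, 1), (1, 2), (0, 2)}   # seed network: a triangle on 3 points
    for _ in range(5):
        net = subdivide(net)
    print(len(net))   # the edge count doubles with every update
```

Even this trivially simple rewrite makes an ever larger network grow from a tiny seed by applying one rule over and over; the rules contemplated for actual physics would of course be far subtler, but the flavor of "network plus update rule" is the same.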
But you know, if I’m right about this, the way things come out is that one will be able to say: this little rule that we can write down in a few lines of Mathematica code, or in some pictures or something, this little rule, if we start it off with this initial condition, will make absolutely everything that now exists and will ever exist in our universe. And that would be an exciting thing to happen in science, because it is at the edge of science in some sense. And I think that what one is now beginning to get, from the effort of exploring the computational universe, is some of the intuition that one needs to be able to see how to get there.

**CL:** That would certainly be a remarkable goal if such a rule can be written down, and I think it’s a perfect place to end our talk. Dr. Wolfram, I just want to thank you very much for joining us today to give us a lesson about the ideas contained in your book *A New Kind of Science*.

**SW:** Thank you very much.
