On the same blog, God Plays Dice, I learned two important things today:
(1) The Napkin Ring Problem, which basically says that if you take a sphere and cut a cylinder out of the center (taking off the two caps at the top and the bottom of the sphere), the volume of the remaining napkin-ring-shaped solid depends only on the height of that ring, and not on the radius of the sphere at all!
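If you don’t believe it, here’s a quick numerical check I threw together (my own sketch in Python; the helper name is just something I made up). A ring of height h always has volume πh³/6, whichever sphere you carve it from:

```python
import math

def napkin_ring_volume(R, h):
    # Band of height h left after drilling a cylinder through a sphere
    # of radius R (requires R >= h/2): sphere minus cylinder minus caps.
    r = math.sqrt(R**2 - (h / 2)**2)          # radius of the drilled hole
    cap_h = R - h / 2                         # height of each removed cap
    sphere = 4 / 3 * math.pi * R**3
    cylinder = math.pi * r**2 * h
    cap = math.pi * cap_h**2 * (3 * R - cap_h) / 3
    return sphere - cylinder - 2 * cap

# Same band height, wildly different spheres -> identical volume.
for R in (1.0, 5.0, 100.0):
    print(round(napkin_ring_volume(R, 1.0), 6))   # 0.523599 every time (pi/6)
```

Expanding the algebra shows the R terms cancel exactly, leaving πh³/6.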
In totally unrelated news, today we wrapped up our initial foray into finding volumes by slicing, by revolution, and by washers in calculus. I wonder what I’ll be doing with them tomorrow. I dearly wish I had an interesting problem for them to work through. Sigh.
All those years ago (okay, not so long ago) I was at MIT getting my first taste of chaos theory. I took this course with Daniel Rothman, who is an amazing teacher, and who taught from an amazing textbook (Strogatz). I loved the class so much that when I was asked if I was interested in grading the problem sets for the following year’s class, I jumped at the chance.
It was in that course that I was first introduced to Edward Lorenz and his paper on weather (!) which launched a whole phenomenon: chaos. But I also got so into the subject that I tried to fix a broken chaotic waterwheel that had been left unused in the basement of MIT. (I was unsuccessful, though I still wrote my final paper for a class on a modification of the Lorenz equations.) And when I was in graduate school, I was asked to write a “history of chaos theory” article for an encyclopedia that never really got published (to my knowledge). Also in graduate school, I traveled to Beijing for a summer school (4 weeks) on complexity and chaos, thinking that I might want to do my dissertation around the topic of complexity and chaos in the sciences. And now that I’m teaching high school, I wanted to introduce some budding mathematicians to the topic, so in math club, I gave a 3-part mini-lecture on chaos and the logistic equation.
Needless to say, chaos = cool. When I took that initial class with Prof. Rothman, Lorenz came to give a guest lecture. I’m sad that he’s gone. He is partially responsible for a really interesting part of my life that keeps recurring, no matter where I am or what I’m doing.
If you want to read my encyclopedia entry on the history of chaos theory (which is okay, but I’m sure has a number of problems with it), I’ll post it below. It talks about Lorenz’s contribution, if you don’t know.
Chaos theory is the study of mathematical systems which exhibit certain characteristic properties, one of which is extraordinarily erratic behavior. Examples of such systems include population growth, turbulent fluids, and the motion of the planets. Though chaotic systems had been identified in nature before, it was not until the 1970s that the mathematical tools were in place to examine these sorts of complicated behaviors in a quantitative fashion. Through intensive interdisciplinary work done by an international set of researchers, the study of chaos grew from the pursuit of a small group of interested practitioners into a world-wide phenomenon. Chaos theory was applied to the study of weather, populations, economics, turbulence, information theory, and neuroscience, among other topics. Knowledge of chaos entered public consciousness with the spread of colorful fractal pictures, the publication of popular science books, and public debates over its validity. Even as it grew popular among select audiences, chaos was seen by others as an attack on reductionism, the existing way of doing science, and early practitioners were met with considerable resistance. Even today, many researchers share concerns about the use of chaos in research—both in the natural and the social sciences.
A Newtonian Solar System
Before the term “chaos” was used by mathematicians and scientists, chaotic phenomena had been observed in nature. In the middle of the seventeenth century, Isaac Newton developed the mathematics of differential equations—equations that show how a quantity changes over time—and used it to describe the laws of planetary motion. Newton was able to solve the problem of determining the location of a single planet orbiting the sun at a particular time in the future, the so-called “two-body problem,” but extending this to more bodies, such as the moon or other planets, was problematic. The mathematics showed that the equations of the conservation laws (such as the conservation of energy) which governed the motion of the system of more than two bodies could not be solved with simple, algebraic methods. The many-body problem was more than theoretical; knowledge of the position of the moon was important for celestial navigation. However, existing analytic methods were not powerful enough to solve it, and it was not until 1885 that headway was made.
In this year, a mathematics professor in Sweden held a mathematical competition in honor of King Oscar II’s birthday, asking entrants to address any of four pressing questions. One of them, which Henri Poincaré wrote on, dealt with the problem that Newton could not solve: the many-body problem. The question read something like this: given a number of masses that obey Newton’s law of gravitation, find the equations describing the position of the masses at any time. In the years after Newton’s first attempt, mathematicians had come to understand that not all differential equations could be solved exactly, and mathematical tools had been developed to explain the qualitative properties of their solutions. The judges, under the time constraints of selecting a winner before the birthday celebrations, saw something important in Poincaré’s paper and awarded him the prize. In his submitted paper, he had derived a result demonstrating the stability of the solar system (a many-body problem): that the planets would not eventually fly off into deep space. When this paper was prepared for publication, an editor pointed out unclear parts of the manuscript to Poincaré, who, in working through them, acknowledged that he had made a fatal mistake. The error invalidated his stability result.
A greatly revised and lengthened version of the paper contained the first hints of chaotic behavior, describing a figure of curves which formed a mesh so intricate that he refused to attempt to draw it. His work, because of the impossibility of simple solutions, was largely qualitative and based on geometric reasoning. He also noted that a small difference in the initial positions of the planets could result in very large differences in the positions of the bodies in the long run. A meteorologist would come to similar conclusions in the 1960s.
Three Routes to Chaos
At the Massachusetts Institute of Technology in 1960, Edward Lorenz programmed twelve equations into his vacuum tube computer. He was a meteorologist, and his equations simulated weather. Equations governing the motion of air and water—both treated as fluids in the mathematics—had long been known, but computing the evolution of the weather system over time was plagued with difficulties. These equations, like the equations Poincaré and Newton had studied before, described a dynamical system—a system which evolves over time. In a dynamical system, both the initial conditions of the system (such as the positions and velocities of the planets at a particular time) and a set of rules (the differential equations governing motion) are used to calculate the state of the system in the future. For most dynamical systems, the future state of the system cannot be found immediately: to do so, the state of the system has to be calculated for every moment from the beginning to the final time. Without a computer, this task would have been virtually impossible, but with his computer, Lorenz was able to create printouts of series of numbers which showed various features of the simulated meteorological system, like temperature.
In the winter of 1961, Lorenz wanted to examine a particular sequence of numbers from his weather simulation in closer detail. When he ran the simulation a second time, he got radically different results. This was unexpected because the weather system was deterministic: with the same initial conditions and the same equations, the system should have behaved exactly the same. Investigating the discrepancy, Lorenz discovered the cause of the strange behavior: he had entered the initial condition as 0.506, but the computer had stored six digits, 0.506127. He had observed that two slightly different initial conditions of the same system (0.506 instead of 0.506127) would lead to radically different behavior in the long run, a conclusion which harked back to Poincaré’s observation decades earlier. The technical term for this phenomenon, sensitive dependence on initial conditions, eventually obtained a popular moniker: the butterfly effect. Practically, the butterfly effect suggested that perfect weather prediction might be unattainable. If the equations governing the weather were anything like the equations Lorenz used to simulate it on his computer, the smallest error in the initial data would yield radically different results in the future state of the weather system.
To examine this effect further, Lorenz created a simpler system describing atmospheric heat flow, known as convection. The three convection equations were deterministic, just like in his previous twelve equation model. For the most part the system was periodic: it would repeat its behavior after a set period of time. However, when the temperature difference between the top of the atmosphere and the bottom of the atmosphere was great enough, the system would show turbulence. An analogy can be drawn to a pot of boiling water. As the bottom of the pot heats up, the temperature difference between the top of the water and bottom of the water grows ever larger, until eventually turbulence occurs: the water boils.
The second model behaved similarly to the first, also displaying sensitive dependence on initial conditions. Unlike Poincaré, Lorenz was able to use his computer to generate a graphical display of his system. Plotting the evolution of the convection system, starting with a particular initial condition, he traced out a curve which ended up looking like butterfly wings. Further investigation revealed that all initial conditions would trace a similar curve. These curves defined a space which would come to be known as the Lorenz attractor, part of a larger class of mathematical objects known as strange attractors. After Lorenz, more strange attractors would be discovered through work on physical systems, including the Rössler attractor, which came out of the study of chemical reactions, and the Chua attractor, which came out of the study of an electronic circuit. Lorenz published his results in a now-famous 1963 paper, “Deterministic Nonperiodic Flow,” in the Journal of the Atmospheric Sciences. From Lorenz’s pioneering work, it is clear that modern chaos theory would not have been possible without computers, which allow scientists to perform millions of operations in a short period of time. But more than simply being calculating robots, computers took on a novel role in science, becoming electronic laboratories themselves, simulating natural phenomena.
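Lorenz’s discovery is easy to replay on a modern machine. Below is a minimal sketch (my own, not Lorenz’s program) that integrates his three convection equations with the now-standard parameters σ = 10, ρ = 28, β = 8/3, using a crude fixed-step Euler method, and tracks how far apart two almost-identical initial conditions drift:

```python
import math

def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One forward-Euler step of the Lorenz convection equations.
    x, y, z = state
    return (x + sigma * (y - x) * dt,
            y + (x * (rho - z) - y) * dt,
            z + (x * y - beta * z) * dt)

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-6)    # perturbed by one part in a million
max_sep = 0.0
for _ in range(30000):         # integrate out to t = 30
    a, b = lorenz_step(a), lorenz_step(b)
    max_sep = max(max_sep, math.dist(a, b))

# The tiny perturbation is amplified until the two runs are as far
# apart as the attractor itself allows.
print(max_sep)
```

Make the perturbation even smaller and the two runs still fly apart; only the waiting time changes. That, in miniature, is why Lorenz despaired of long-range forecasting.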
Chaos theory did not develop with one person or in one place, but rather was the product of different scientists working in different places solving different problems. In addition to Lorenz, two other figures in the 1960s were crucial in laying down the intellectual framework for the emergence of chaos theory. One of them was mathematician Stephen Smale, located at the University of California at Berkeley. He was internationally famous for his work in topology, the study of the properties which remain unchanged as geometric figures are stretched and folded. He was also known for being politically active, speaking out against the Vietnam War. In the 1960s, Smale turned his research from topology to dynamical systems, and his training allowed him to bring the mathematical tools of topology to their study. Early in his work, he conjectured that stable systems—systems where a small perturbation would not change the overall outcome—could not behave erratically. At this point, he was not aware of the Lorenz attractor, a structurally stable system which did behave erratically. This is unsurprising: Lorenz’s paper was published in a specialty journal not traditionally read by non-meteorologists, and like many important scientific discoveries, his work was not immediately flagged as important. A colleague, however, informed Smale of a different counterexample to his conjecture. Forced to rethink his conclusion, Smale applied topological tricks to his study of dynamical systems and came up with a visual way to understand why his conjecture was incorrect: the horseshoe map. The horseshoe map is a way to topologically deform a system, say a square sheet of rubber, through repeated squeezing and folding, so that any two points close to each other in the original system end up arbitrarily far apart after enough folding and stretching. Thus he had discovered a topological version of sensitive dependence on initial conditions.
The significance of Smale’s work was the expansion of the study of dynamical systems into the domain of topology. Smale himself said, quoted in James Gleick’s Chaos: Making a New Science, “When I started my professional work in mathematics in 1960, which is not so long ago, modern mathematics in its entirety—in its entirety—was rejected by physicists, including the most avant-garde mathematical physicists… By 1968 this had completely turned around.”
The last figure setting the stage for an explosion of research in the 1970s was mathematical physicist David Ruelle. Located at a prestigious institute outside of Paris, in 1971 he and mathematician Floris Takens published “On the Nature of Turbulence.” This paper suggested that the onset of turbulence in a fluid was caused by topological properties of the fluid equations themselves, known as the Navier-Stokes equations. Just as the equations Lorenz used to study convection gave rise to a strange attractor—the butterfly-shaped Lorenz attractor—the authors argued that the Navier-Stokes equations, too, give rise to strange attractors. And the presence of these attractors was responsible for the onset of turbulence.
Since the early nineteenth century, the transition of a stable system into a turbulent one had been ill-understood; it has been called by some the greatest unsolved problem in classical physics. The prevailing theory before Ruelle and Takens had been promoted by Lev Landau in the 1940s. His model argued that eddies forming in a fluid generate smaller eddies within them, and these smaller eddies then generate even smaller eddies, ad infinitum. As more and more eddies are created, the fluid flow begins its transformation from being predictable and regular to exhibiting turbulence. The eddies, in Landau’s theory, are created through small external disturbances to the fluid. For Landau, then, the transition to turbulence rested not on the fluid equations themselves, but on external noise influencing the system. In complete opposition, Ruelle and Takens argued that turbulence could be explained by the fluid equations alone.
At the beginning of the 1970s, the study of fluid mechanics, and the question of turbulence, lay at the uneasy intersection between mathematics and engineering. Ruelle and Takens’s work had two important consequences, outlined by David Aubin and Amy Dahan Dalmedico in “Writing the History of Dynamical Systems and Chaos.” First, it inaugurated the merger between the study of fluid mechanics and dynamical systems. Second, it suggested physical experiments that could check the validity of Ruelle’s ideas. No longer would the study of these strange dynamical systems be confined to paper or computer simulations. Aubin and Dalmedico note that with Lorenz, Smale, and Ruelle building a foundation, by the end of the 1960s, “elements were in place for the recognition of the inescapable role that complex… dynamical systems had to play in the understanding of the world.” The stage was set for chaos to enter the academic community.
An Era of Interdisciplinarity
Lorenz was a meteorologist, Smale a mathematician, and Ruelle a mathematical physicist, yet all three were working on similar things. The 1970s ushered in a period of interdisciplinarity, where biologists, hydrodynamicists, meteorologists, mathematicians, and physicists, among others, would read each others’ papers, attend the same conferences, and enter into work traditionally outside of their own specialized fields. Divisions among scientific disciplines have never been so solid as to prevent interdisciplinary work, but chaos, acting as a common meeting ground, would bring together scientific nomads. It was also during this decade that experimental work would bring some credibility to chaos, but at the same time, practitioners were unable to pin down exactly what they were studying.
Scientists, when investigating natural phenomena, would often encounter phase transitions—a point at which a system changes its character dramatically. One example is the onset of turbulence, but there are many more, such as the magnetization of a non-magnet or the transition of a conductor into a superconductor. A number of studies drew analogies between various types of phase transitions: mathematically, their descriptions appeared similar. In 1973, researchers Harry L. Swinney and Jerry Gollub were investigating phase transitions in fluids. They set up an inexpensive apparatus: two cylinders, one inside the other, with a fluid in between. As they began to rotate the inner cylinder (keeping the outer cylinder at rest), they studied the properties of the fluid flow. Once they reached a certain threshold, they observed turbulence. The experiment was not new; it had been performed before, in 1923. What was new was the method of collecting data, using the deflection of a laser beam in the fluid to quantitatively measure the flow. Swinney and Gollub expected to confirm the older theory of turbulence proposed by Lev Landau; what they found comported better with the ideas of Ruelle and Takens. A second similar and widely-discussed experiment was performed by French physicist Albert Libchaber in 1977. More generally, the study of turbulence and convection was in the beginning stages of becoming an interesting scientific topic again. Instead of being squarely in the purview of mathematics or engineering, it began to be investigated by hydrodynamicists, plasma physicists, statistical physicists, thermodynamicists, and chemists. In the years between 1973 and 1977, a good number of conferences were held on turbulence. These conferences acted simultaneously as sites where disciplines collided and where collaborations were forged.
It is in this period that the term “chaos” was coined. James Yorke, a mathematician, had stumbled across Lorenz’s 1963 paper almost a decade after its publication, and was enchanted by its conclusions. He sent a copy to Smale, and made and distributed a number of other copies for his colleagues. He also co-wrote a 1975 article in the widely read American Mathematical Monthly bringing the ideas to a generation of mathematicians; it was in the title of this article, “Period Three Implies Chaos,” that the term “chaos” was first used in a technical sense.
Yorke’s article made an important claim using a simple equation first introduced in 1838. The logistic equation is a model which predicts the population of a species over time, given information about how fast the species reproduces and the maximum population sustainable in an environment. Yorke used the logistic equation to illustrate a powerful point. For most situations, the population of a species would remain the same after a long enough time had passed. However, after tweaking the conditions, he found that the population could eventually oscillate between two values: one year the population would be x, the next year y, the year after x, and so forth. Mathematically, this doubling is called a period-doubling bifurcation. It turns out the conditions could be tweaked further so that the population would oscillate among 4 values, then 8, 16, 32, and so on. At a certain point, however, the system would become what Yorke called chaotic: the population would not oscillate among a fixed number of values, but rather would never repeat itself.
Yorke used the logistic equation to illustrate a powerful mathematical theorem. Technically, it said that in any one-dimensional system (like the logistic equation), if a cycle of period 3 appears, then there would have to be cycles of every other period, as well as chaotic cycles. In a general sense what it said was that even a simple equation can demonstrate some peculiar and unexpected behavior—chaos. More than that, in the midst of such chaotic behavior, there can be pockets where the system would behave nicely. In this way, chaos began to take on the meaning of its opposite: order. Even though the term chaos implied something erratic, it arose out of very simple systems, and even when chaos was observed, there seemed to be some kind of mathematical order within the chaos.
Factbox: Iconic images of chaos
The first image is the Lorenz attractor, while the second image is of the logistic map. The Lorenz attractor is a graphical representation of the behavior of the convection equations. The logistic map is a discrete version of the logistic equation and is mathematically written xₙ₊₁ = r·xₙ·(1 − xₙ). We need not go into the details of the mathematics here, but what should be clear is that this simple one-dimensional equation demonstrates some very complicated behavior. As the value of r is increased, the long-term behavior of the system changes. For small values of r, the system eventually settles down to a single value. As r is increased, something special happens: the system settles down not to one value, but to two. It has bifurcated. As r is increased even more, the system settles down to four values, then eight, then sixteen, and so on. Eventually, when r reaches a critical value, the system never repeats itself and never settles down to a finite set of numbers. Note that the behavior of the equation becomes more and more erratic as r increases, but even in the chaotic regime, there are pockets of non-erratic behavior.
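The whole story in this factbox can be checked with a few lines of code. The sketch below (my own; the r values are conventional illustrative choices) iterates the logistic map past its transient and prints the values the population keeps visiting:

```python
def logistic_orbit(r, x0=0.5, transient=1000, keep=8):
    # Iterate x -> r*x*(1-x), discard the transient, and return the
    # next `keep` values rounded to 4 decimal places.
    x = x0
    for _ in range(transient):
        x = r * x * (1 - x)
    orbit = []
    for _ in range(keep):
        x = r * x * (1 - x)
        orbit.append(round(x, 4))
    return orbit

print(logistic_orbit(2.9))    # one repeated value: a steady population
print(logistic_orbit(3.2))    # two values: the first bifurcation
print(logistic_orbit(3.5))    # four values: doubled again
print(logistic_orbit(3.9))    # chaos: the orbit never settles
print(logistic_orbit(3.83))   # a period-3 pocket of order inside the chaos
```

The last line is the “pocket of non-erratic behavior”: even deep in the chaotic regime, a stable 3-cycle reappears.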
Universality In Chaos
In 1975, Mitchell Feigenbaum was a researcher at Los Alamos. After listening to a lecture by Stephen Smale, he began his work on the simple logistic map. He found that the bifurcations came at somewhat predictable intervals; in mathematical terms, they were converging geometrically. The convergence rate for the logistic map was calculated to be about 4.669. Feigenbaum discovered that this value of 4.669 occurred not just in the logistic map, but in a large class of equations, just as Yorke had found that chaotic behavior occurred in a large class of equations. He recalled in an article from 1980, “Universal Behavior in Nonlinear Systems,” that “I spent a part of a day trying to fit the convergent rate value, 4.669, to the mathematical constants I knew. The task was fruitless, save for the fact that it made the number memorable.” What that indicated was that this number was a new universal constant. Just as the ratio of the circumference to the diameter of any circle will always be π, the convergence rate for any of a large class of equations will always be about 4.669. Feigenbaum believed he had discovered a new law of nature. If a natural phenomenon could be mathematically described by one of the large class of equations that Feigenbaum studied, then Feigenbaum’s work gave a quantitative way to discuss that phenomenon’s route to chaotic behavior. Through his work, the analogy made between hydrodynamics (the transition to turbulence) and phase transitions in physics was firmed up: the equations were all part of the same class.
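The geometric convergence is easy to see numerically. The sketch below takes widely tabulated values for the first few period-doubling parameters of the logistic map (an assumption on my part: I am quoting them to the digits usually published, not computing them from scratch) and forms the successive gap ratios that close in on Feigenbaum’s 4.669:

```python
# Parameter values r_n at which the logistic map's attractor doubles
# from period 2^(n-1) to period 2^n (standard published values; the
# second one is exactly 1 + sqrt(6)).
r = [3.0, 3.449490, 3.544090, 3.564407, 3.568759]

# Feigenbaum's constant is the limit of the ratio of successive gaps:
# delta_n = (r_n - r_(n-1)) / (r_(n+1) - r_n).
deltas = [(r[n] - r[n - 1]) / (r[n + 1] - r[n]) for n in range(1, len(r) - 1)]
print(deltas)   # the ratios approach 4.669...
```

Because the gaps shrink by roughly this same factor each time, the doublings pile up and the map crosses into chaos at a finite r ≈ 3.5699.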
As often occurs with historically important works, Feigenbaum had difficulty publishing—it was not quite mathematics to the mathematicians because it did not have a grand level of abstraction, nor was his claim rigorously proved. In spite of his inability to get published immediately, his ideas spread through lectures and conversation, and generated excitement in an assortment of intellectual circles. In retrospect, scientists would look back on this work as a watershed, bestowing a sense of legitimacy to the idea that chaos was present in many natural systems, and that this chaotic behavior could be explored quantitatively instead of qualitatively and geometrically.
Moreover, in subsequent analyses, he used mathematical tools (“renormalization group methods”) commonly used by physicists, thus making his chaos relevant for that community. As a result, a number of physicists in the late 1970s and early 1980s began looking at hydrodynamics and turbulence, when previously the problem had belonged to the domain of mathematicians and engineers; a disciplinary reorientation took place. The physicist Libchaber’s 1977 experiment on turbulence, for example, was concerned with measuring Feigenbaum’s constant experimentally. His experimental value and the theoretical value were dissimilar but not incompatible, and later experiments conducted by others in the early 1980s would yield a closer correspondence between theory and experiment. These experiments showed that Feigenbaum’s constant, and chaos theory in general, had a role to play in the physical world.
The end of the 1970s brought two important chaos meetings. In 1977, the New York Academy of Sciences held a conference on “Bifurcation Theory and Applications in Scientific Disciplines.” This meeting, the first of such a large scale, brought dozens of researchers together: economists, physicists, chemists, biologists, etc. A second meeting was held two years later, this time attended by hundreds of researchers. Chaos was becoming a more popular topic of study in a variety of fields. By 1990, one bibliography of chaos – in Chaos II, edited by Hao Bai-Lin – touted 117 books and 2244 articles on the subject. The bibliography indicates that the works were from a variety of disciplines, and significantly, from many different countries too.
Sidebar: Examples of Chaos in Nature
“In the past few years a growing number of systems have been shown to exhibit randomness due to a simple chaotic attractor. Among them are the convection pattern of fluid heated in a small box, oscillating concentration levels in a stirred-chemical reaction, the beating of chicken-heart cells and a large number of chemical and mechanical oscillators. In addition, computer models of phenomena ranging from epidemics to the electricity of a nerve cell to stellar oscillations have been shown to possess this simple type of randomness. There are even experiments now under way that are searching for chaos in areas as disparate as brain waves and economics.”
Source: Crutchfield, James P., et al. “Chaos.” Scientific American. December 1986, 46-57.
While chaos theory’s audience was increasing, there was—and remains—no single accepted definition. Even in 1994, Steven Strogatz wrote in his textbook Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering that “No definition of the term chaos is universally accepted yet, but almost everyone would agree on the three ingredients used in the following working definition: Chaos is aperiodic long-term behavior in a deterministic system that exhibits sensitive dependence on initial conditions.” We have already discussed above what it means for a system to be deterministic and to display sensitive dependence on initial conditions. Aperiodic long-term behavior simply means that the system does not settle down to a state in which nothing moves, or a state in which the system repeats itself over and over. Instead, the system eventually settles onto erratic behavior like that of a Lorenz attractor.
Popularization: Chaos in Popular Consciousness
Fractals are important in the study of chaos theory both for their visual appeal and for their mathematical importance to the study of chaos. Benoit Mandelbrot, a refugee from the Nazi takeover of Europe, saw a universe full not of ordinary Euclidean geometry—in which figures are smooth—but of something else. In 1975, he coined the term fractal, and would eventually publish a number of popular science books on fractals. They are objects which display self-similarity at all scales: if a fractal is magnified, the resulting image will have properties similar to the original fractal. Because fractals are so intricate—they can be magnified over and over again—they are often described as being beautiful. As a result of their intricacy, however, everyday concepts like distance and area are difficult to apply. Mandelbrot, when studying these figures, came up with a new way to measure the dimension of an object. Under his definition, a line still has one dimension and a plane still has two, but a fractal can possess a dimension in between, a non-integral dimension such as 3/2. The connection to chaos is direct: strange attractors, such as the Lorenz attractor, are not points, lines, or surfaces, but rather fractals; the Lorenz attractor has a dimension between 2 and 3. (A trajectory in the Lorenz attractor is infinitely long, but it is bounded by a finite volume—the butterfly.) Thus, understanding fractals could shed light on chaos.
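For exactly self-similar figures, those in-between dimensions take one line of arithmetic: a shape made of N copies of itself, each shrunk by a factor s, has dimension log(N)/log(s). A quick sketch (my own; these three classic fractals are standard textbook examples, while an object like the Lorenz attractor needs the more laborious box-counting method):

```python
import math

def similarity_dimension(copies, scale):
    # A figure built from `copies` shrunken replicas of itself, each
    # scaled down by a factor `scale`, has dimension log(copies)/log(scale).
    return math.log(copies) / math.log(scale)

print(similarity_dimension(2, 3))   # Cantor set: between a point and a line
print(similarity_dimension(4, 3))   # Koch curve: between a line and a plane
print(similarity_dimension(3, 2))   # Sierpinski triangle: likewise fractional
```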
Also important, however, is the explosion of fractals into popular culture. Mandelbrot’s popular books, Fractals: Form, Chance and Dimension and The Fractal Geometry of Nature, transported fractals to a wider audience, including a number of professionals. David Ruelle is quoted in Gleick’s Chaos as exclaiming: “I have not spoken of the esthetic appeal of strange attractors. These systems of curves, these clouds of points suggest sometimes fireworks or galaxies, sometimes strange and disquieting vegetal proliferations. A realm lies there of forms to explore, and harmonies to discover.” And the 1970s and 1980s saw the creation of art based on the geometry of fractals. In the sciences, fractal geometry was used to create illustrations of nature – of leaves and mountains and clouds – in addition to more abstract entities, known as Julia sets. Artists too found fodder in a new conception of space. One example is Rhonda Roland Shearer, a sculptor who hypothesized that fractals would bring about a revolution in art, as space and form took on new meanings in science.
Sidebar: Benoit Mandelbrot, Father of Fractals
Though the face of Benoit Mandelbrot is not well-known, the colorful fractal figures he pioneered are familiar to almost everyone. He is one of the most famous living mathematicians, having developed a geometry which has infiltrated a number of different arenas, including the study of turbulence, the stock market, physiology, and art.
Mandelbrot was born on November 20, 1924 in Warsaw, Poland. His father was a buyer and seller of clothes, his mother was a medical doctor, and his uncle, Szolem Mandelbrojt, was a mathematician at the Collège de France. It was this uncle who made a great impression upon the young Mandelbrot, teaching him that mathematics was an honorable profession. However, his uncle was a purist who believed that mathematics and beauty were mutually exclusive. Benoit Mandelbrot would eventually repudiate his uncle’s belief in his own work, often discussing fractals and aesthetics in the same breath. His advanced education took place mainly in France, in mathematics, punctuated by brief stints at the California Institute of Technology and the Institute for Advanced Study. He was seriously affected by the political conditions of the time, noting in an interview in New Scientist that “when I look back I see a pattern. For a long time that pattern was imposed by catastrophes, namely the fall of Poland and the occupation of France during the second world war. Those events dictated everything… Being raised under such hair-raising conditions can have a strong effect on someone’s personality.” In 1952 he obtained a doctorate in the mathematical sciences at the University of Paris.
Mandelbrot joined the research team at the IBM Thomas J. Watson Research Center in New York in 1958 and remained there until 1987, when he joined the mathematics department at Yale University. At IBM, Mandelbrot reveled in the freedom he was afforded in his research, which allowed him to move in directions that a university position would not have encouraged; his research topics included the flooding of the Nile River, cotton prices, and the geometric shape of coastlines. It was in these studies that the idea of fractals began to form, and Mandelbrot continued to expand and develop it in the following years. One of the most famous fractals, the Mandelbrot Set, is based on work done by Gaston Julia and Pierre Fatou. The intricate Mandelbrot Set below is created through a simple mathematical transformation. Every point in the plane undergoes this transformation repeatedly; if the result grows toward infinity, the point is shaded a particular color, and if the result remains bounded, the point is shaded black. Interestingly, this work was introduced to him by his uncle Szolem in 1945, but Mandelbrot would not return to it until the 1970s. In 2005, Mandelbrot retired from Yale.
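The transformation in question is the iteration z → z² + c, where c is the point of the plane being tested and z starts at zero. A minimal sketch of the membership test in Python (the iteration cap of 100 and the escape radius of 2 are standard illustrative choices, not anything specific to Mandelbrot’s own computations):

```python
def mandelbrot_escape(c, max_iter=100):
    """Iterate z -> z**2 + c from z = 0. Return the step at which
    |z| exceeds 2 (after which the orbit is guaranteed to diverge),
    or max_iter if the orbit stays bounded (point shaded black)."""
    z = 0
    for n in range(max_iter):
        if abs(z) > 2:
            return n
        z = z * z + c
    return max_iter

print(mandelbrot_escape(0))    # 0 is in the set: never escapes -> 100
print(mandelbrot_escape(1))    # 1 escapes quickly -> 3
print(mandelbrot_escape(-1))   # -1 cycles 0, -1, 0, -1, ...: bounded -> 100
```

Points that escape at different steps are assigned different colors, which is what produces the familiar banded images surrounding the black set.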
In addition to Mandelbrot’s popular science books and the widely reproduced pictures of fractals, James Gleick’s popular science book Chaos: Making a New Science also helped propel chaos theory into popular consciousness. Published in 1987, Chaos was an immediate bestseller. Narrating the history of chaos as a paradigm shift, Gleick introduced chaos theory not only to the wider public, but also to numerous scientists. The impact of this book on the field can be seen in the high number of citations it has received, in both scientific and popular literature. (The Science Citation Index, which counts the number of times a particular book or article is cited in research journals, puts the number at well over 1,000 articles.) Lastly, the publication of Michael Crichton’s Jurassic Park in 1990, and the release of Steven Spielberg’s major motion picture based on the book three years later, put chaos theory center stage. In the movie, the scientist Malcolm sees the dinosaur park as a physical system governed by chaos, and interprets chaos theory as a theory of inherent unpredictability and disorder. This interpretation, however, is not quite accurate. Chaotic systems are often defined by deterministic equations—meaning that, in theory, they are perfectly predictable. And as noted before, one of the key features of chaos is the concept of order—not disorder. Regardless, Jurassic Park, Gleick’s book, and captivating pictures of fractals, generated by many on their home computers, helped bring an interest in chaos to the wider public.
Chaos Outside the Traditional Sciences
Social scientists have long used statistics and mathematical models to better understand social phenomena; some, like Euel Elliott and L. Douglas Kiel in the introduction to Chaos Theory in the Social Sciences: Foundations and Applications, hold out the hope that chaos theory will be “a promising means for a convergence of the sciences that will serve to enhance understanding of both natural and social phenomena.” With chaos theory, they believe, the complex phenomena observed in social systems might finally be explained.
From the 1990s onward, a substantial body of literature making use of chaos theory has appeared in political science, economics, sociology, and even literary theory. Some of this work relies heavily on the mathematical tools of chaos theory to analyze social data, while other work capitalizes on the metaphorical power of chaos. Political scientists, for example, have studied shifting public opinion and international conflict using some of the same mathematical tools used to study chaos. Some economists have argued that sensitive dependence on initial conditions in even simple systems calls for a reevaluation of neoclassical economics, the standard form of the discipline. And literary theorist N. Katherine Hayles makes the case that science and the humanities are rooted in the same culture: precisely when scientists were countenancing the implications of disorder, so too were fiction authors and literary theorists.
Complexity: The Emergence of a New Kind of Science?
In recent years, chaos has lost some of the excitement that once fueled it. Per Bak, a physicist at Brookhaven National Laboratory, believed that chaos theory had run its course as early as 1985. Physicist Doyne Farmer is quoted in M. Mitchell Waldrop’s Complexity: The Emerging Science at the Edge of Order and Chaos as saying, “After a while, though, I got pretty bored with chaos… I felt ‘So what?’ The basic theory had already been fleshed out. So there wasn’t that excitement of being on the frontier, where things aren’t understood.” Chaos today can be seen not as a highly active area of scientific research, but rather as a conglomeration of mathematical tools used in a wide variety of disciplines.
Chaos, however, has taken on a new incarnation in recent years: complexity. Complexity, like chaos, defies a universal definition. (One scientist compiled 31 distinct definitions of the word.) Most often, however, it is described as the state between order and chaos. It is in this regime that complexity researchers believe self-organization occurs: in highly ordered systems, nothing novel can emerge, while chaotic systems are too erratic for anything structured to emerge. It is at the intersection, some believe, that interesting and complex behavior arises. Complexity theorists work in a number of different fields, including artificial intelligence, information theory, linguistics, chemistry, physiology, evolutionary biology, computer science, archeology, and network theory. In fact, in 1999, the prestigious journal Science devoted an entire issue to research on complex systems in a variety of fields.
The major institution promoting the study of complex systems is the Santa Fe Institute. George Cowan, once the head of research at the national laboratory at Los Alamos, sat on the White House Science Council, providing advice for President Reagan. During this time, he became increasingly aware of the interconnections between science and morality, economics, the environment, and so forth. Looking back, he is quoted in M. Mitchell Waldrop’s Complexity: The Emerging Science at the Edge of Order and Chaos as saying: “The royal road to a Nobel Prize has generally been through the reductionist approach… You look for the solution of some more or less idealized set of problems, somewhat divorced from the real world, and constrained sufficiently so that you can find a solution… [a]nd that leads to more and more fragmentation of science. Whereas the real world demands—though I hate the word—a more holistic approach.” A reductionist view, he believed, also leads to studying simple systems instead of whole, complicated, messy systems. With the rise of computers and numerical experimentation, complex behavior could finally be investigated; by the 1980s, with personal computers widely available, the complex behaviors Cowan was interested in were under active study. These systems were nonlinear, in the sense that they could not be broken down into smaller parts; what made them interesting was the global behavior created by the nonlinear interactions between individual parts. Cowan transformed this belief into the world-famous, interdisciplinary Santa Fe Institute, founded in 1983, which eventually brought three Nobel laureates to its staff. The institute is now synonymous with the study of complexity. At the institute and elsewhere, the study of chaos theory was incorporated into a broader framework of “complexity.”
Sidebar: Reductionism v. Non-reductionism
In his review of a science book, Nobel laureate in physics Steven Weinberg notes that the increasing study of complexity in physics has led to a disciplinary division: there are those who believe that reductionism is the best way to study nature, and those who argue that complexity is better. Weinberg levels a common criticism at those favoring complexity: complexity has yet to produce an established paradigm of its own.
“There is a low-intensity culture war going on between scientists who specialize in free-floating theories [like chaos] and those (mostly particle physicists) who pursue the old reductionist dream of finding laws of nature that are not explained by anything else, but that lie at the roots of all chains of explanation. The conflict usually comes to public attention only when particle physicists are trying to get funding for a large new accelerator. Their opponents are exasperated when they hear talk about particle physicists searching for the fundamental laws of nature. They argue that the theories of heat or chaos or complexity or broken symmetry are equally fundamental, because the general principles of these theories do not depend on what kind of particles make up the systems to which they are applied. In return, particle physicists like me point out that, although these free-floating theories are interesting and important, they are not truly fundamental, because they may or may not apply to a given system; to justify applying one of these theories in a given context you have to be able to deduce the axioms of the theory in that context from the really fundamental laws of nature.
Lately particle physicists have been having trouble holding up their end of this debate. Progress toward a fundamental theory has been painfully slow for decades, largely because the great success of the “Standard Model” developed in the 1960s and 1970s has left us with fewer puzzles that could point to our next step. Scientists studying chaos and complexity also like to emphasize that their work is applicable to the rich variety of everyday life, where elementary particle physics has no direct relevance.
Scientists studying complexity are particularly exuberant these days. Some of them discover surprising similarities in the properties of very different complex phenomena, including stock market fluctuations, collapsing sand piles, and earthquakes… But all this work has not come together in a general theory of complexity. No one knows how to judge which complex systems share the properties of other systems, or how in general to characterize what kinds of complexity make it extremely difficult to calculate the behavior of some large systems and not others. The scientists who work on these two different types of problem don’t even seem to communicate very well with each other. Particle physicists like to say that the theory of complexity is the most exciting new thing in science in a generation, except that it has the one disadvantage of not existing.”
Source: Weinberg, Steven. “Is the Universe a Computer?” The New York Review of Books vol. 49, no. 16 (October 24, 2002).
Chaos and complexity: these terms hint at a new structure for science. Like Cowan, many see the sciences moving away from reductionism and linearity toward something completely different. The term “paradigm shift” is often invoked by chaos researchers and popular writers to describe this transformation, harking back to the work of historian Thomas Kuhn. In The Structure of Scientific Revolutions (1962), Kuhn argued that science does not progressively accumulate facts. Instead, science is a cyclical process, punctuated by large-scale transformations known as “paradigm shifts.” Paradigms, loosely defined, are ways of understanding the world. Paradigm shifts are brought about by anomalies which cannot be explained within the current paradigm; resolving these anomalies leads to a new way of understanding the world, one in which the anomaly makes sense. (It should be noted that historians of science have found much to praise and to criticize in Kuhn’s theory of how science operates.)
Researchers interested in chaos and complexity tout the beginnings of such a paradigm shift. The old paradigm, reductionism, is on its way to being replaced, or at least supplanted, by a non-reductionism which sees the world as a system of many parts interacting in nonlinear ways. Many scientists are skeptical of this claim: they do not believe complexity is a paradigm shift in the Kuhnian sense, nor do they believe that the reductionist program has yielded all its secrets. Whether a paradigm shift, a fad, or something else, science has recently been changing its flavor, and chaos and complexity have played a role.
1660s: Newton invented calculus and explained planetary motion.
1800s: A number of analytic studies on planetary motion were conducted.
1890s: Poincaré worked on the many-body problem and got one of the first glimpses of chaos. In his works from this time, he developed a number of mathematical methods which would be later used by people studying chaos.
1940s: Mary Cartwright, John E. Littlewood, and Norman Levinson furthered scholarship on the topology of differential equations. Their work stemmed directly from war work done during World War II.
1959-1970: Stephen Smale synthesized topology and dynamical systems.
1963: Edward Lorenz published his famed paper on sensitivity to initial conditions, “Deterministic Nonperiodic Flow.”
1971: David Ruelle and Floris Takens published a seminal paper on turbulence, suggesting that the phenomenon arose out of strange attractors within the Navier-Stokes fluid equations. This challenged the accepted theory of Lev Landau, in which turbulence was brought about by external noise.
1975: Tien-Yien Li and James A. Yorke coined the term “chaos” in a paper. Also in 1975, Mitchell Feigenbaum found a new universal constant arising out of simple dynamical systems. His discovery lent credibility to the field, countering those who dismissed the study of erratic behavior as a mathematical fad.
1970s: A number of experiments, most famously by Jerry Gollub, Harry Swinney, and Albert Libchaber, were done to examine Ruelle and Takens’s theory of turbulence. The experimental results did not contradict it, and as more experiments were done, the agreement became even closer.
1983: The Santa Fe Institute was founded, providing a world-class research institution promoting the study of complexity and interdisciplinarity.
1987: James Gleick published his bestselling book Chaos, bringing knowledge of chaos, nonreductionism, and complexity to the wider public.
1990s: Methods used to study chaos entered the social sciences, literary theory, and art. Some of this work utilizes the quantitative tools that chaos theory provides, such as determining the non-integral dimension of a set of data, while other work utilizes chaos as a cultural metaphor.
Aperiodicity: The property of a set of data that never repeats itself.
Bifurcation: The division into two branches. In dynamical systems like the logistic map, a cascade of period-doubling bifurcations leads to the onset of chaos.
Deterministic system: A system whose future states are completely determined by its present state, so that, in principle, the end state can be predicted with perfect certainty.
Dynamical system: The name for a system which can be described as evolving from one particular state to another particular state over time.
Fractal: An object that displays self-similarity on all scales.
Logistic map: A one-dimensional recurrence relation first used to describe population growth. The logistic map shows chaotic behavior for certain parameter values.
Phase transition: The shift in a physical system from one state, having particular properties, into a completely different state, with radically different properties.
Reductionism: The belief that all entities can be reduced to the sum of some basic entities. This approach works in mathematics when the equations are linear, in contrast to the nonlinear systems studied in chaos theory.
Sensitive dependence on initial conditions: The property of a system whereby initially nearby points eventually evolve to be far apart. It is often used as one of the key characteristics of chaos.
Strange attractor: An attractor is a region of state space toward which all nearby trajectories eventually head and within which they remain. A strange attractor is one that displays chaotic behavior or has a non-integer dimension.
Topology: The branch of mathematics which studies the properties of geometric objects which are preserved as they undergo twisting, stretching, and deformation.
Turbulence: The flow of a fluid in which the velocity at any point fluctuates in an irregular manner.
Universality: A mathematical property which extends to all members of a class of objects.
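Several of the entries above, in particular the logistic map, aperiodicity, and sensitive dependence on initial conditions, can be illustrated together in a few lines of code. A minimal sketch in Python (the growth rate r = 4.0, the starting value 0.2, and the perturbation of 1e-10 are illustrative choices; r = 4.0 lies in the chaotic regime of the map):

```python
def logistic(x, r=4.0):
    """One step of the logistic map: x -> r * x * (1 - x)."""
    return r * x * (1 - x)

def orbit(x0, steps, r=4.0):
    """Iterate the map from x0, returning the full sequence of states."""
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1], r))
    return xs

# Two orbits that begin a mere 1e-10 apart...
a = orbit(0.2, 60)
b = orbit(0.2 + 1e-10, 60)

# ...soon disagree macroscopically: sensitive dependence in action.
print(abs(a[-1] - b[-1]))
```

Run with r below roughly 3.57, the same code shows the two orbits staying close together; the rapid divergence above is exactly what distinguishes the chaotic regime.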
Bai-Lin, Hao. Chaos II. Singapore: World Scientific, 1990.
Gleick, James. Chaos: Making a New Science. New York: Viking Press, 1987.
Kiel, L. Douglas and Euel Elliott, eds. Chaos Theory in the Social Sciences: Foundations and Applications. Ann Arbor: The University of Michigan Press, 1996.
Kuhn, Thomas. The Structure of Scientific Revolutions. Chicago: The University of Chicago Press, 1962.
Strogatz, Steven H. Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering. Cambridge: Westview Press, 1994.
Waldrop, M. Mitchell. Complexity: The Emerging Science at the Edge of Order and Chaos. New York: Simon and Schuster, 1992.
Wise, M. Norton, ed. Growing Explanations: Historical Perspectives on Recent Science. Durham: Duke University Press, 2004.
Aubin, David, and Amy Dahan Dalmedico. “Writing the History of Dynamical Systems and Chaos: Longue Durée and Revolution, Disciplines and Cultures.” Historia Mathematica vol. 29 (2002): 273-339.
Crutchfield, James P., et al. “Chaos.” Scientific American. December 1986, 46-57.
Horgan, John. “From Complexity to Perplexity.” Scientific American. June 1995, 104-109.
Jamieson, Valerie. “A Fractal Life.” New Scientist. November 13, 2004, 50-53.
Kauffman, Stuart A. “The Sciences of Complexity and ‘Origins of Order.’” PSA: Proceedings of the Biennial Meeting of the Philosophical Science Association vol. 2 (1990): 299-322.
Lorenz, Edward N. “Deterministic Nonperiodic Flow.” Journal of the Atmospheric Sciences vol. 20 (1963): 130-141.
Shearer, Rhonda Roland. “Chaos Theory and Fractal Geometry: Their Potential Impact on the Future of Art.” Leonardo vol. 25 (1992): 143-152.
Science vol. 284, no. 5411 (April 2, 1999).
Weinberg, Steven. “Is the Universe a Computer?” The New York Review of Books vol. 49, no. 16 (October 24, 2002).