My total academic exposure to the scientific discipline of physics was one course I took in college that could have been named, "Physics for Dummies." It was a physics course with very little math, but a lot of broad-brush treatments of really cool concepts such as "infinity" and "reductionism."
Reductionism is the view that complex systems can be understood by decomposing them into their constituent parts. The idea is that if you understand the fundamental components (such as atoms or cells), you can explain the behavior of the entire system. In the 1970s, reductionism promised to solve every problem in physics with a "theory of everything."
Fifty years later, we're still waiting.
In fact, as Adam Frank, professor of astrophysics at the University of Rochester, points out in his article in The Atlantic, "progress in the most reductionist branches of physics has slowed." Instead, in the 1980s, physicists began to develop new mathematical tools to study “complexity."
Complexity describes "systems in which the whole is far more than the sum of its parts," writes Frank. If the goal of reductionism was to explain the universe as a result of particles and their interactions, complexity recognized that "once lots of particles come together to produce macroscopic things—such as organisms—knowing everything about particles isn’t enough to understand reality," Frank writes.
Complexity has startling implications for physics.
From a physicist’s perspective, no complex system is weirder or more challenging than life. For one thing, the organization of living matter defies physicists’ usual expectations about the universe. Your body is made of matter, just like everything else. But the atoms you’re built from today won’t be the atoms you’re built from in a year. That means you and every other living thing aren’t inert objects, like rocks, but dynamic patterns playing out over time. The real challenge for physics, however, is that the patterns that make up life are self-organized. Living systems both create and maintain themselves in a strange kind of loop that no existing machine can replicate. Think about the cell membrane, which enables a cell to stay alive by letting some chemicals in while keeping others out. The cell creates and continually maintains the membrane, but the membrane is also itself a process that makes the cell.
An astrophysicist such as Frank can use reductionism to accurately predict the life cycle of a young star, from its formation to its eventual demise. But take a single cell from Earth's distant past, say four billion years ago, and try to predict what it will become, and reductionism fails: life keeps producing something new and unexpected, a phenomenon called "emergence."
Life is not just unpredictable. The fundamental laws that govern matter and energy also cannot account for another fundamental property of life: its use of information. "It is the only system in the universe that uses information for its own purposes," writes Frank.
Reductionism has its limits. Physicists are now turning their attention to complexity to try to grasp the answer to a question so elemental it has escaped understanding since science began asking it.
What is life?
Using these new mathematical tools, physicists—working together with representatives of all the other disciplines that make up complexity science—may crack open the question of how life formed on Earth billions of years ago and how it might have formed on the distant alien worlds we can now explore with cutting-edge telescopes. Just as important, understanding why life, as an organized system, is different at a fundamental level from all the other stuff in the universe may help astronomers design new strategies for finding it in places bearing little resemblance to Earth. Analyzing life—no matter how alien—as a self-organizing, information-driven system may provide the key to detecting biosignatures on planets hundreds of light-years away.
One exciting sideline to this new approach is that it's likely to help us better understand intelligence and build artificial versions of it. The debate over artificial general intelligence (AGI), the question of whether a large language model can be scaled up until a machine becomes sentient, could be sharpened by these new ways of studying life itself.
"Bringing the new physics of life to problems of AI may not only help researchers predict what software engineers can build; it may also reveal the limits of trying to capture life’s essential character in silicon," writes Frank.
It's a long journey from my "Physics for Dummies" course to taming AI. But we have already waited 50 years for reductionism to deliver its promised answers. How long before complexity succeeds where reductionism failed?