A Likely Story

Is cosmology a science? Is scientific cosmology even possible, when it deals with events so unique and fundamental that no laboratory test can truly repeat them? Questions like these pop up often enough, and you can find many good answers to them on sites such as Quora, which I will not repeat here.

For the layman thinker, the difference between truth and lies is simple and clear, and it would be natural to expect the difference between science and non-science to be just as simple and clear. The human brain is a categorizing machine that wants to put everything in its proper place. Unfortunately, the demarcation between science and non-science is not so clear.

(Image: Tischbein, Oldenburg)

Karl Popper modeled his philosophy of science on the remarkable history of general relativity. In 1916, Albert Einstein published his long-awaited theory and made bold predictions that could not be verified until a suitable total eclipse of the Sun; when the 1919 eclipse observations confirmed them, the results were reported sensationally in newspapers around the world. It was almost like a step in a classical aristeia, where the hero loudly and publicly proclaims what preposterous thing he will do, before going on to achieve exactly that. Popper’s ideas about falsification are based on this rare and dramatic triumph of armchair theory-making, not so much on everyday practical scientific work.

If we want a philosophy of science that really covers most of what gets published as science these days, what we need is a philosophy of statistics and probability. Unfortunately, statistics does not have the same appeal as a good story, and is more often blamed for being misleading than lauded as a necessary method towards more certain truths. There is a non-zero probability that some day popularizations of science will be as enthusiastic about P-values, null hypotheses and Bayesian inference as they are today about black holes, dark energy and exotic matter.

Under the broadest umbrella of scientific endeavors, there are roughly two kinds of approaches. One, like general relativity, looks for things that never change: universal rules that apply in all places and times. These include the ‘laws’ of physics, and the logical-mathematical framework necessary for expressing them (whether that framework should also include the axioms of statistics and probability is an open question).

The other approach is the application of such frameworks to make observations about how some particular system evolves. For example: how mountains form and erode, how birds migrate, how plagues are transmitted, what the future of a solar system or galaxy might be, how climate changes over time, what the relationships are between the different phyla in the great tree of life, and so on. Many such fields study uniquely evolved things, such as a particular language or a form of life. In many cases it is not possible or practical to “repeat an experiment” starting from the initial state, which is why it is so important to record and share the raw data, so that it can be analyzed by others.

From the point of view of the theoretical physicist, it is often considered serendipitous that the fundamental laws of physics are discoverable, and even understandable by humans. But it could also be that the laws we have discovered so far are just approximations that are “good enough” to be usable with the imperfect instruments available to us.

The “luck” of the theorist has been that so many physical systems are dominated by one kind of force, with the other forces weaker by many orders of magnitude. For example, the orbit of the Earth around the Sun is dominated by gravitational forces, while the electromagnetic interactions are insignificant. In another kind of system, for example the semiconducting circuits of a microprocessor, electromagnetism dominates and gravity is insignificant. The dominant physics model depends on the scale and granularity of the system under study (the physical world is not truly scale invariant).
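To put a rough number on “weaker by many orders of magnitude” (an illustrative calculation of my own, using the electron and proton rather than the Earth and Sun), here is a small Python sketch comparing the electrostatic and gravitational attraction between the two particles:

    # Back-of-the-envelope comparison, using rounded physical constants.
    # The separation r cancels out of the ratio, since both forces go as 1/r^2.
    G   = 6.674e-11   # gravitational constant, N m^2 / kg^2
    k   = 8.988e9     # Coulomb constant, N m^2 / C^2
    e   = 1.602e-19   # elementary charge, C
    m_e = 9.109e-31   # electron mass, kg
    m_p = 1.673e-27   # proton mass, kg

    ratio = (k * e**2) / (G * m_e * m_p)
    print(f"electric / gravitational force ratio ~ {ratio:.1e}")
    # prints roughly 2.3e+39: gravity is about 39 orders of magnitude weaker here

At atomic scales the gravitational pull is so hopelessly outclassed that it can be ignored entirely, while at planetary scales the net electric charge is nearly zero and gravity has the stage to itself.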

As the experimental side of physics has developed, our measurements have become more precise. When we add reliable decimals to physical measurements, we sometimes need new theories to account for things like unexpected fine structure in spectral lines. The more precision we want from our theories, the more terms we need to add to our equations, making them less simple and further from a Pythagorean ideal.

The nature of measurement makes statistical methods applicable regardless of whether measurement errors originate from a fundamental randomness, or from a determinism we don’t understand yet. The most eager theorists, keen to unify the different forces, have proposed entire new dimensions, hidden in the decimal dust. But for such theories to be practically useful, they must make predictions that differ, at least statistically, from the assumed distribution of measurement errors.
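As a toy illustration of that last point (a hypothetical setup I am adding here, not part of the original argument): suppose a new theory predicts a tiny shift in some measured quantity, while every individual measurement carries random error. Whether the shift can be seen at all is a statistical question about how it compares with the spread of those errors:

    import random, statistics

    # Hypothetical toy numbers: the old theory predicts exactly 1.0, the new
    # theory predicts a shift of +0.0002, and each measurement has a random
    # error with standard deviation 0.01.
    random.seed(0)
    shift, sigma = 0.0002, 0.01

    for n in (100, 1_000_000):
        data = [random.gauss(1.0 + shift, sigma) for _ in range(n)]
        mean = statistics.fmean(data)
        sem  = sigma / n ** 0.5          # standard error of the mean
        z    = (mean - 1.0) / sem        # observed shift in standard errors
        print(f"n = {n:>9}: mean = {mean:.5f}, shift ~ {z:.1f} sigma")
    # With 100 measurements the predicted shift is buried in the error
    # distribution; with a million it stands out at many sigma.

A theory whose predictions never rise above the error distribution, no matter how much data or precision we accumulate, remains hidden in the decimal dust.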

Many theorists and philosophers abhor the uncertainty associated with probability and statistics. (Part of this is probably due to individual personality, some innate unwillingness to accept uncertainty or risk.) To some extent this can be a good thing, as it drives them to search for patterns behind what at first seems random.

But even for philosophers, statistics could be more than just a convenient box labeled ‘miscellaneous’. As in the Parmenides dialogue, even dirt can have ideal qualities.

Even though statistics is the study of variables and variability, its name comes from the same root as “static”. When statistics talks about what is changeable, it always makes implicit assumptions about what does not change, some ‘other’ that we compare the changes against.

It is often said that statistical correlation does not imply causation, but does cosmic causation even make sense where cosmic time does not exist? Can we really make any statistical assumptions about the distribution of matter and energy in the ‘initial’ state of all that exists, if that includes all of space and time?

One of the things Einstein was trying to correct when working on general relativity was causality, which was considered broken in the 1905 version of relativity, since the time order of distant events could depend on the movement of the observer. General relativity fixed this so that physical events always obey the timeline of any physical observer, but only by introducing the possibility of macroscopic event horizons and strange geometries of observable spacetime. But the nature of event horizons prevents us from observing any event that could be the primal cause of all existence, since such an event would lie outside the timeline from our point of view. We can estimate the ‘age’ of the Universe, but this is a statistical concept; no physical observer experiences time on the clock that measures that age.

Before Einstein, cosmology did not exist as a science. At most, it was thought that the laws of physics would be enough to account for all the motion in the world, starting from some ‘first mover’ who once pushed everything in the cosmos into action. This kind of mechanistic view of the Universe as a process, entity or event, separate from but subservient to a universal time, is no longer compatible with modern physics. In the current models, the continuity of time is broken not only at event horizons, but also at the Planck scales of time and distance. (Continuing the example in Powers of Two, the Planck length would be reached in the sixth chessboard down, if gold were not atomic.)

Why is causality so important to us that we would rather turn the universe into Swiss cheese than part with it? The way we experience time as a flow, and how we maintain identity in that flow, has a lot to do with it. Stories, whether using language to form sequences of words or just remembered as sequences of images, dreams and songs, are deeply embedded in the human psyche. Our very identities as individuals are stories; stories are what make us human, and plausible causes make plausible stories.