Tuesday, February 27, 2018

The conjecture of "fine-tuning"...and "cosmopsychism"?

            A persistent claim in what one might call the philosophy of cosmology is that the constants of physics are “fine-tuned” to values that permit conditions suitable for sustaining living things. Consider, as a representative example, the introduction to a recent essay by Philip Goff in Aeon:

In the past 40 or so years, a strange fact about our Universe gradually made itself known to scientists: the laws of physics, and the initial conditions of our Universe, are fine-tuned for the possibility of life. It turns out that, for life to be possible, the numbers in basic physics – for example, the strength of gravity, or the mass of the electron – must have values falling in a certain range. And that range is an incredibly narrow slice of all the possible values those numbers can have. It is therefore incredibly unlikely that a universe like ours would have the kind of numbers compatible with the existence of life. But, against all the odds, our Universe does.

            Goff goes on to interpret this “fact” of fine-tuning as support for “…the idea that the Universe is a conscious mind that responds to value.” In his view, the Universe has a clear telos – the production of intelligent life. Given how central “fine-tuning” is to Goff’s claim, one might be forgiven for more closely examining the basis for his probability argument – the likelihood of a given universe having physical constants compatible with intelligent life.
            Probability, in its simplest form, is a calculation of the likelihood of a particular outcome given the range of possible outcomes. If, for simplicity, we assume that all potential combinations of the physical constants are equally likely, then the probability of getting a universe that can support intelligent life is a simple ratio: the number of possible universes that we judge could potentially support such life, divided by the number of possible universes. To get to Goff’s conclusion that this outcome is “incredibly unlikely,” we have to know both how many possible universes there are, and how many of them could support intelligent life. In terms of the constants in the laws of physics (those parameters that must be measured empirically, rather than calculated from theory), we need to know what range of variation is possible for each constant, and how much of that variation is compatible with intelligent life. This is where we get Goff’s basic claim of “fine-tuning” - “that range [of values of physical constants] is an incredibly narrow slice of all the possible values those numbers can have.”
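            To make the structure of that calculation explicit, the ratio can be written schematically as follows, under the simplifying assumption (already generous to the fine-tuning argument) that every combination of constants is equally likely; the symbols N_life and N_total are placeholders of my own, not quantities anyone has measured:

\[
P(\text{life-permitting universe}) \;=\; \frac{N_{\text{life}}}{N_{\text{total}}}
\]

where N_total is the number of possible combinations of the constants and N_life is the number of those combinations we judge compatible with intelligent life. Goff’s “incredibly unlikely” is the claim that this ratio is vanishingly small; the question is whether either number can be estimated at all.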
            A crucial assumption in this view is that physical constants could potentially vary at all. Goff argues, for example, that we are fortunate that the parameter 𝛆 – representing the efficiency of the fusion of hydrogen to helium – has the value 0.007, since a universe where 𝛆 was slightly larger or smaller would contain either almost no hydrogen or nothing but hydrogen. However, the fact that we can simply substitute other values of 𝛆 in our equations hardly demonstrates that other values are actually possible. Further, nothing in our actual experience suggests that 𝛆 can vary; in fact, it seems to be the same everywhere in the universe (the fact that we can observe stars across vast separations in distance and time being but one example). Taken from another perspective, it would be truly remarkable if a parameter like 𝛆 could have a range of values yet somehow always turn up with the same value whenever we measure it. To claim “fine-tuning” is to claim that some entity could adjust the value of parameters like 𝛆; it takes a remarkably imaginative line of thinking to argue that our base assumption about a parameter should be that it is a variable, when all our experience suggests it is a constant.
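            For concreteness, the 0.007 is simply the fraction of rest mass released when hydrogen fuses to helium, and it can be checked with a back-of-the-envelope calculation from standard atomic masses (my illustration, not Goff’s):

\[
\varepsilon \;\approx\; \frac{4\,m(^{1}\mathrm{H}) - m(^{4}\mathrm{He})}{4\,m(^{1}\mathrm{H})} \;=\; \frac{4(1.00783\,\mathrm{u}) - 4.00260\,\mathrm{u}}{4(1.00783\,\mathrm{u})} \;\approx\; 0.007
\]

The calculation shows what the parameter measures; it says nothing about whether that fraction could have been otherwise.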
            This is not new territory, either. In the late 17th century, ingenious observations of Jupiter’s moon Io by Ole Rømer – and conversions to absolute distances by Christiaan Huygens – showed that the speed of light in a vacuum was both finite and quite fast – about 220,000 km/s, as compared to the modern value of 299,792 km/s. One could have wondered why the speed of light had that particular value…until around 1864, when James Clerk Maxwell calculated what the speed of light had to be if it were an electromagnetic wave. The only speed compatible with mutual electric and magnetic induction – and with the conservation of energy – was remarkably close to the observed results. Before Maxwell, one could have imagined light having many potential speeds and wondered about their consequences, but after Maxwell those flights of imagination were simply implausible. Physics explained why light had one particular speed – the speed toward which experimental measurements were rapidly converging.
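            The relationship Maxwell derived is worth displaying, because it shows what it looks like when physics removes a constant from the realm of the adjustable (in modern notation, with present-day values of the electric and magnetic constants):

\[
c \;=\; \frac{1}{\sqrt{\mu_0\,\varepsilon_0}} \;\approx\; \frac{1}{\sqrt{(4\pi\times10^{-7}\,\mathrm{H/m})\,(8.854\times10^{-12}\,\mathrm{F/m})}} \;\approx\; 2.998\times10^{8}\,\mathrm{m/s}
\]

Once the speed of light is fixed by quantities measurable on a laboratory bench, “why this speed and not another?” stops being a question about tuning and becomes a question about the theory itself.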
            Even if we were to grant that the constants could vary, it is rather difficult to determine how much variation “fine-tuning” advocates think is possible. A factor of two? An order of magnitude? Any number we can imagine? A statistical estimate of variation relies on measuring a value many times in a sample to determine how much variation is likely. To estimate the potential variation in the physical constants, we must measure each of those parameters in many different contexts and calculate a value and uncertainty. For the gravitational constant – which is notoriously variable in its measured value – the variation in modern measurements is on the order of 10⁻⁴, or one part in ten thousand. For the mass of the electron, the uncertainty derived from measurement is on the order of 0.1 parts per billion (depending on which units one uses to express the mass). In that light, considering universes where the electron is 2.5 times as massive (as Goff does) is utterly hypothetical. Or, to put it another way, we have no reason to think that the physical constants themselves could vary outside a remarkably tiny range of values, and that range itself is likely a product of the uncertainty of our measurements. The careful reader might fairly object that this is simply a restatement of the remarkable constancy of the measured parameters in physics. That is precisely the point.
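            To put rough numbers on that mismatch (illustrative arithmetic only, using the figures quoted above): an electron 2.5 times as massive as ours is a fractional change of 1.5, while the measured constraint on variation in the electron’s mass is of order 10⁻¹⁰. The ratio of the contemplated variation to the observed variation is therefore

\[
\frac{1.5}{10^{-10}} \;\approx\; 10^{10},
\]

that is, the speculation outruns the evidence by roughly ten orders of magnitude.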
            Thus the denominator in Goff’s hypothetical probability calculation – the range of possible combinations of the physical constants – is actually quite tiny; from an empirical point of view, only a minuscule range of the values we can imagine corresponds to observations of the actual Universe. If the constants are truly constants, the denominator is simply one – ours is the only possible version of the current laws of physics. But what of the numerator – the portion of possible universes compatible with intelligent life? Here again, proponents of fine-tuning are rather vague on their probability assessments for a rather simple reason: we have almost no grounds to evaluate what kinds of universes could support intelligent life in principle, because we cannot possibly imagine all the ways intelligent life could arise. When we think of life in other universes (or even on other planets), “suitable for intelligent life” is usually shorthand for “suitable for life that uses solar energy to convert carbon dioxide and water to oxygen and carbohydrate, later releasing energy in the oxidation of that carbohydrate; most likely using a particular set of nucleotides to encode information and translate that information into protein macromolecules; organizing independent subunits (cells) into hierarchies which can specialize to the degree that a complex internal model of the outside world is represented inside the resultant organisms.” In other words, we are quite good at enumerating the requirements for the intelligent life we know best (humans), but our particular case does little to delimit the range of possible ways to get intelligent life in principle. To do so, we would need “comprehensive imagination,” a complete understanding of all the possible ways intelligent life could arise in a variety of possible universes. Our track record of predicting where relatively familiar life might be found on Earth is rather poor (e.g. hydrothermal vent communities); there is no reason to expect our imagination to be any less myopic when conceiving of ways unfamiliar life could arise or produce intelligence.
            In sum, an estimate of the probability of getting a universe that can produce intelligent life (the estimate Philip Goff characterizes as “incredibly unlikely”) relies both on estimating the actual range of possible universes and the number of those universes compatible with intelligent life. Fanciful speculation aside, we have no empirical reason to think that the constants of the physical universe could be anything else but the ones we know. Further, only an excess of hubris could lead us to think that we are able to comprehensively imagine all the ways intelligent life could arise in a given universe. As a result, we can only conclude that “fine-tuning” is a speculative story, and any assessment of its probability is groundless. If a scholar like Goff wants to postulate a “cosmopsychic” hypothesis of the “Universe [as] a conscious mind that responds to value,” he is more than welcome to do so. Offering that hypothesis as a solution to a problem – the popular metaphor of “fine-tuning” physical constants – only works if we have good reasons to think there is a problem at all.