The argument that the universe is "fine-tuned" to produce human intelligence is interesting on so many levels, from the limits of our physical theory to the problems of estimating probabilities from a sample of one (universe). To me, however, the most interesting part of the argument is the way it recapitulates the ancient human urge to project ourselves onto the universe as its purpose. Many quite scientifically-minded folks have been swayed by arguments that the universe was built to produce us.
A basic tenet of the strong fine-tuning argument is that only a universe set up the way it is could produce intelligent life. The physical constants have to be just so, or we couldn't exist, and therefore intelligent life couldn't exist. See the sleight-of-hand there?
I'll come right out and say it: I think this is basically solipsism in disguise, but that's not much more than name-calling. Let me do better and give you an analogy.
Consider Picasso's masterpiece Guernica. Imagine it was the only piece of art known to the world (it's an analogy, OK?). We could tell a long narrative about the historical events that inspired Picasso to paint it. We could point out how a bullfighting motif is central to the symbolism of the work. We could even discuss the oil pigments, brushes, and canvas that had eventually made their way into Picasso's hands. Our story would be full of specific historical events (the bombing of a village during the Spanish Civil War), contingencies (the Germans lending air support to Franco), and happy accidents (pigments discovered that reflect light in the narrow visual range of the human eye, oil paint reaching the naval power of Venice in the 15th century, where sail canvas was easy to get). It could be a very long and detailed causal network that resulted in this particular painting.
We could probably all agree that without all those events, contingencies, and happy accidents, Guernica would not exist; too many specifics of the painting were triggered by those antecedent events. This is the metaphor's equivalent to human intelligence; there are so many crucial events in the causal chain to Homo sapiens, from the cooling of the early universe to form electrons, protons, and neutrons, to the formation of heavier elements in second-generation stars, to the evolution of a DNA-based genetic system in early living things, to locomotor animals that benefit from central nervous systems, to the vocalizations and social systems of our recent ancestors which drove the development of symbolic language and culture. Without all those events, and the physical conditions that made them possible, our particular form of life would not exist.
The advocate of fine-tuning, however, goes one huge step farther. He or she does not just see our specific form of life as contingent on the current conditions of the universe; all possible forms of intelligent life are contingent on this universe. There could have been no life without the universe being carefully set up the way it was, because life of any kind, not just the form we know, would be impossible if things were different. Lower the strength of the strong nuclear force much, and there's no iodine, and we all know iodine is essential to life...
To return to the Picasso metaphor, we can all agree that without Franco, oil paint, or the Wright flyer, there could be no Guernica. The fine-tuning argument is like a claim that without Franco, oil paint, or the Wright flyer, there could be no art.
Wednesday, April 3, 2013
Thursday, January 10, 2013
Trying to see beyond the telos...
I just finished reading Jim Holt's "existential detective story" Why Does the World Exist?. It's a fun romp through a variety of answers to the ultimate question in philosophy; particularly striking (to me, at least) is the way so many of the answers end up projecting our human desires onto the universe rather than examining the universe on its own terms. From the infinitely good creator of Richard Swinburne who nonetheless acts very much like a human parent, to the "ethical need for a universe...full of happiness and beauty" of John Leslie, human values and analogies pepper many of the expeditions searching for some ultimate telos (Aristotle's term for the final cause or purpose) for the universe. But should we really expect the existence of the immense universe of which we are but a tiny part to be explained in terms of human values and characteristics?
Aside from the really absurd notions such as strict creationism, the best current example of this philosophical narcissism has to be the notion of the Strong Anthropic Principle. This line of thinking extrapolates from the rather obvious assertion that we must be living in a universe that is compatible with our survival (the "weak" anthropic principle) to a full-blown assertion that the universe was set up with the purpose (telos again!) of producing intelligent beings like us. There is more to the Strong Anthropic Principle than just the bald assertion, of course; the key point is the supposed "fine-tuning" of the constants of the physical universe to the conditions of human existence. There are 25 physical constants in the Standard Model of particle physics (add another for the cosmological constant so we can use gravity). The values of these constants are not defined by theory; instead, they must be measured experimentally. Small changes in most of those constants would result in radically different universes, many of which look to be incompatible with the survival of any of the living things we know. Advocates of the Strong Anthropic Principle claim that those constants must have been carefully adjusted to the values we measure to produce a world in which humans could live. Put simply, the Strong Anthropic Principle claims that the universe was literally set up to suit us.
The claim of fine-tuning does, however, have some serious shortcomings. Statistically, it cannot be evaluated; our sample set of possible universes includes exactly one, which means we have no way to judge how much the physical constants can vary among possible universes, or if they can vary at all. While there is some intriguing evidence suggesting that the fine-structure constant shows very small variations in time and space within our universe, we are still stuck with a sample of one. In other words, we currently don't know whether our universe comes with a knob for fine-tuning, say, the strong nuclear force, and the metaphor of an entity carefully adjusting these 25 physical constants is completely hypothetical.
Since we have not observed variation in the constants, we are left with evaluating the plausibility of fine-tuning. An advocate of the idea could do pretty well sticking with the existing Standard Model of particle physics, arguing that there are quite a few (25 is a lot) fundamental quantities that are not defined but merely measured. Within the mathematical framework of the Standard Model, nothing prevents us from adjusting the charge of the electron to a different value and working out how the universe might look; in general those changes produce hypothetical universes where atoms disintegrate, stars fail to form, hydrogen is fused to helium far too rapidly to allow life to evolve, etc. In other words, if we assume that constants not defined by our physical theories are free to vary, then it can seem very fortunate for us that our universe includes values for those constants that allow our species to survive.
That is, however, a big assumption. There is no particular reason to assume that a quantity that must be measured rather than defined by theory has an arbitrary value. Indeed, the history of science suggests that things which could only be measured at first were later defined by theory. One can recall, for example, the huge number of undefined values in the ancient Earth-centered cosmos, from the distances to the stars, Moon, Sun, and other planets, to all the various mathematical devices like the epicycle used to calculate their complex apparent movements. Compare that multitude of observed properties to Kepler's Sun-centered laws of planetary motion, which define the positions and movements of objects in the Solar System much more precisely, with far fewer values simply measured from observation.
Another great example is the periodic table in chemistry. Before the periodic table, it might be easy to marvel that iron has the particular chemical properties it does, making possible, for example, the ability of red blood cells to bind oxygen from air in the lungs. Those properties (atomic weight, valence, etc.) could be measured, and one could argue that it was extremely fortunate for living things that iron was present on Earth with precisely those chemical properties. After the periodic table, however, the properties of iron (and all the other chemical elements) are not arbitrary or fortuitous, but rather part of a systematic series, the result of atoms formed with equal numbers of electrons and protons (and later, with quantum theory, a number of neutrons compatible with a stable nucleus). The alchemist's pure empiricism, with lots of characteristics of elements to remember, is replaced with a model which predicts many of those characteristics, as well as the characteristics of elements not yet discovered in nature.
There is no reason to assume that the Standard Model is the last word in physics, either; its limitations (like its incompatibility with general relativity) are well-known. If the history of science has anything to tell us, it is that the theory that replaces the Standard Model is likely to define many more of the constants that can currently only be measured. Just as we no longer marvel that potassium has exactly 19 electrons, we may find the particular values of many of the constants in the Standard Model to be predictable in a future theory.
Let's grant the big assumption, however, for the sake of argument. What if the constants could vary? Then we should feel either very lucky that they took the values they did, or we should be grateful for the supreme being that fine-tuned them to suit us so well. Otherwise there wouldn't be any intelligent life, and the universe would be a pretty desolate place...right?
If we're wedded to the idea that intelligent life should look like us, perhaps. It would be tough for life as we know it to exist with very different values for the constants (although not, as Victor Stenger and Fred Adams have shown, for all such changes). Note the phrasing: life as we know it. We know a lot about the range of life forms that have flourished based on organic chemistry, a phospholipid bilayer membrane, and a nucleic-acid genetic system on one rocky planet orbiting a small main-sequence star. Are we really so arrogant as to think we know all about how intelligent life can form in general? We can say with some certainty that the examples of life we are familiar with could not tolerate big changes in the constants, but we cannot seriously think that our imaginations are fertile enough to deduce all the forms life could take, and rule them out in all the hypothetical universes created by varying the physical constants.
This kind of provincial thinking - that if we cannot imagine something, it must be impossible - is common enough to have a name: the philosopher's error. It is a product of extrapolation, of projecting the way we work and think onto the world beyond us. When we think of life on other worlds, we tend to think of it in ways remarkably similar to ours (for instance, our obsession with liquid water on Mars). Even on Earth, we can be surprised by life harvesting chemical energy from hydrothermal vents on the ocean floor (rather than from sunlight). We should be more honest with ourselves about the limits of our imagination.
Finally, we need to understand that life as we know it isn't some fixed entity that could only match certain physical conditions. Life evolves to better fit its physical environment. We could imagine a narcissistic fish contemplating how well-designed the ocean is to flow over its gills; how well-suited the viscosity of water is to flow smoothly across those gills; how well-balanced the salinity of the ocean is with that of the fish's blood. Imagine the fish's shock when we explained that the fish itself had evolved to better match the characteristics of the ocean! Douglas Adams came up with an excellent metaphor for the problem, published in his posthumous work The Salmon of Doubt:
"This is rather as if you imagine a puddle waking up one morning and thinking, 'This is an interesting world I find myself in — an interesting hole I find myself in — fits me rather neatly, doesn't it? In fact it fits me staggeringly well, must have been made to have me in it!'"
Not only could life plausibly exist in many more ways than we can imagine, it could likely evolve in many more ways to fit its environment, whatever that environment might be.
The thinking behind the Strong Anthropic Principle is just one striking example of the pervasive anthropomorphic strain in human thought. We take concepts that work quite well in human social relations (such as the concept of the purpose of a person's actions) and apply those concepts to the universe far beyond that social context. Nevertheless, humanity has managed to move, in the last five hundred years, from a cosmos centered on us, to a cosmos centered on the Sun, to a cosmos with no particular center. Knowing that our planet orbits one of a hundred billion stars in one of the hundreds of billions of galaxies in the visible universe, shouldn't we finally have the humility to realize that the universe is not constructed around our needs? Can we finally stop projecting our human concepts on the universe, and instead accept it on its own cosmic terms?
Sunday, October 21, 2012
Armstrong, EPO, and living in the real world
The whole world seems ready to pile onto Lance Armstrong this week, after the US anti-doping agency (USADA) released an extensive report supporting its claims that Armstrong and his team conspired to use the drug EPO and blood doping to "ultimately gain an unfair competitive advantage through superior doping practices." The narrative of the investigation has even inspired a potboiler article in the NY Times detailing the story of how one teammate after another testified to the culture of doping within the US Postal team, with all fingers pointing to Armstrong.
What's lost in this discussion, of course, is context. It seems clear from the testimony that Armstrong and US Postal systematically used EPO and blood boosting to increase the oxygen-carrying capacity of their bloodstreams. What isn't being reported, however, is how widely Armstrong's competitors of the era used the same techniques. At a deeper level, the laser-like focus of the USADA on Armstrong ignores an uncomfortable truth: banning a practice that can't be reliably detected puts competitive athletes in an impossible situation.
In the late '90s, cycling was rife with use of an artificial copy of the human protein EPO, which increases the number of red blood cells in the blood. Because we all have EPO circulating in our blood, detection of recombinant (artificial) EPO is difficult. When Armstrong was winning his first Tours de France, there was no test in use for EPO, and even when a urine test for EPO was introduced, it could only pick up recombinant EPO for about four days after injection, while the effects lasted much longer. In short, there was no reliable way to catch cyclists who were using EPO. As a result, EPO use was rampant, as post-retirement confessions have revealed (for instance, Tour winner Bjarne Riis). The US Postal case further illustrates the pattern; riders on the team during Armstrong's winning run in the Tour were not caught by in-competition testing, but rather confessed to EPO use and doping under intense pressure long after the fact. Still not convinced? Here's a telling figure ("Tour riders with a doping past" - thanks to Alexey Merz) showing who Armstrong was competing against. Most of these riders were also caught not in competition but through later revelations.
In such an environment, an athlete faces a stark choice. If a ban cannot be reliably enforced, the athlete can reasonably assume that his or her competitors are using the banned practice. The athlete must either forsake the practice, putting him or herself at a disadvantage relative to the competition, or adopt it to level the de facto playing field. In a sport like competitive cycling, a factor like the oxygen-carrying capacity of the blood is crucial, and a disadvantage there can literally separate winners from "also-rans." The justification for using the banned practice isn't just that "everyone else is doing it," but rather that other competitors are doing it and not getting caught.
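To make that choice concrete, here is a minimal expected-payoff sketch in Python. It is not drawn from the USADA report or any published model; the detection probability, payoff, and penalty numbers are invented purely for illustration.

```python
# Toy model of an athlete's decision under a weakly enforced ban.
# All numbers below are illustrative assumptions, not measured values.

def expected_payoff(dope, rivals_dope, p_detect=0.05,
                    win_value=100.0, penalty=300.0):
    """Expected payoff for one athlete, given what the rivals do."""
    if dope:
        # Doping keeps the athlete competitive whatever the rivals do,
        # at the cost of a small chance of a large penalty.
        return win_value - p_detect * penalty
    # Staying clean is only competitive if the rivals also stay clean.
    return win_value if not rivals_dope else 0.0

for rivals_dope in (False, True):
    clean = expected_payoff(dope=False, rivals_dope=rivals_dope)
    doped = expected_payoff(dope=True, rivals_dope=rivals_dope)
    print(f"rivals dope: {rivals_dope}  clean: {clean:+7.1f}  dope: {doped:+7.1f}")
```

With these made-up numbers, staying clean only beats doping against doping rivals once the detection probability exceeds win_value / penalty (about 33 percent here); below that, an athlete who assumes the rivals are doping is better off doping too, which is exactly the dilemma described above.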
One point that has not been emphasized enough in the Armstrong case is that while there was no solid test for long-term EPO use during his career, the UCI had instituted another control that was easily measured: a maximum hematocrit of 50%. In other words, there was a cap on how many red blood cells could be circulating in a rider's blood, whatever the cause. The resulting suspensions were framed as health measures rather than punishments, but they kept riders with hematocrits above the threshold out of competition. The test was straightforward, and the result was that cyclists boosted their blood to levels just under the threshold by a combination of training and, in all likelihood, EPO use and transfusions. While this is far from a perfect solution, in that it did not punish those who used EPO, it did have the effect of limiting the potential advantages of blood boosting by effectively putting all riders at roughly the maximum hematocrit. The USADA statement that Armstrong was able to "gain an unfair competitive advantage" should be re-examined in that context.
The situation may be improving with the introduction of the UCI's Biological Passport, a long-term profile of cyclists' hematocrits and the levels of both "young" and "old" red blood cells (comparing those levels may reveal artificial disruptions from boosting or EPO). That program was begun in 2007, two years after Armstrong's last Tour win. If the Passport proves to be a reliable way to detect artificial blood-boosting, then athletes can compete without worrying that their rivals are doping; if there is an EPO-free future for cycling, it lies with reliable and effective testing, not with extensive investigations into past conduct focused exclusively on winning cyclists or teams. We can vent our indignation at Armstrong, and perhaps that makes us feel better, but it is not fair to expect a competitive athlete to behave differently given the situation of the early 2000s. We could not detect the cheaters, with the result that most of the others decided to cheat as well. To expect a different result is simply unrealistic.
UPDATE (7/23/13): This week the Economist pointed out a further wrinkle in the doping environment based on game-theory analysis. Buechel, Emrich, and Pohlkamp modeled the athletes, the sanctioning bodies, and the fans and sponsors as agents, finding that conducting only occasional testing as a kind of PR exercise (and reporting only the positive tests) created a public perception of a few "bad apples" doping in an otherwise clean sport, when in fact most competitors could be doping as a rational response to the limited-testing environment. The authors suggest that more frequent testing, with public disclosure of all results (both negative and positive), would dispel the false perception that doping was limited.
The results of the game-theory analysis are remarkably consistent with actual events, such as the scorn heaped on particular athletes who are exposed (Armstrong). It also jibes with the impression of many that race organizers and sanctioning bodies promote scapegoating narratives about those particular athletes to deflect criticism from their own practices (limited, private testing), and with the suspicion that those sanctioning bodies are not really interested in eliminating doping from the sport, for rationally selfish reasons. The conclusions also predict the withdrawal of sponsors in the face of exposed doping, including Rabobank's removal of its name from its team for 2013 after the allegations against Armstrong: the Dutch bank continued to honor contracts and pay expenses, but withdrew its name, logo, and colors from the team jerseys, which were re-styled as "Blanco Pro." The T-Mobile team, riddled with doping allegations, was likewise purged of riders who had tested positive and re-introduced as HTC-High Road to shed the stain of doping on the organization.
Perhaps sponsors and fans can push the UCI and race organizers to be more dogged and transparent about testing regimes; Buechel's analysis suggests that is the only way forward that aligns the rational interests of racers with a doping-free sport. After all, the existence of pro cycling depends completely on fans and the sponsors who want to look good for those fans.
Tuesday, August 28, 2012
Seeing the forest *in* the trees
I previously posted on the human bias toward seeing patterns and causation everywhere, even when the pattern doesn't hold up. There are, however, cases where we fail to really notice a pattern that is all around us. A great example, for an evolutionary biologist like me, is the hierarchical pattern manifest in all the animals and plants we see in the world around us.
I often ask my students "Wouldn't it be cool if you found a squirrel with a crab claw?" They usually chuckle at the apparent absurdity of the question - of course you would never find that. Mammals don't come with crustacean parts. They're right, of course - but is it so absurd to ask?
Here is a pattern that we're so accustomed to that we don't often bother to ponder it. The diversity of the natural world is divided up quite distinctly: you have conifers and angiosperms among plants; mammals, birds, lizards, fish, and amphibians among animals with backbones; insects, crustaceans, and arachnids among invertebrate animals; and so on. We are so accustomed to this pattern that it would be really surprising to see a pine tree with a dogwood flower or an oak tree with a pine cone, or to find a bird giving birth to live young. In simple terms, birds are birds, fish are fish, and insects are insects - the names correspond to a suite of characteristics that are, quite literally, synonymous with the group.
This pattern gets interesting when you move to more or less inclusive groups of species. For instance, within mammals, you can find species that give live birth to well-developed offspring (placental mammals - including us humans), species that give birth to live young that must latch on and nurse their mom for a long period of time (marsupials - think kangaroos with their cute pouches), and species that actually lay eggs (monotremes like the duck-billed platypus). In other words, there are subgroups within mammals that are clearly mammals, yet they differ in pretty fundamental ways. Those subgroups are also really distinct; it's tough to confuse a marsupial mammal with a placental mammal if you look carefully. That's why opossums seem so weird to North Americans - the opossum is basically the only marsupial we ever encounter in our backyards.
Going in the other direction, we find that while birds and mammals are not at all hard to tell apart, they have lots of characteristics in common, from the bones in our skeletons to lots of details of our physiology and development. In other words, birds and mammals are distinct, but not nearly as distinct as birds and insects, or birds and flowering plants.
What's so interesting about this pattern is that there is no a priori reason for it. Why should diversity be divided up hierarchically? Why can't we have a squirrel with a crab claw? In the world of human-created things, there is lots of sharing around between designs (think, for example, about how much computer technology has infiltrated your car; I doubt carmakers were thinking about that in the days of ENIAC!).
Carl Linnaeus noticed the hierarchical pattern of characteristics way back in the 18th century and built a system of naming species on it (the one we have all seen that calls us Homo sapiens). What produced the pattern wasn't really worked out until the 19th century, when various theories of evolution from common ancestors suggested that the hierarchical pattern reflected the degree of shared ancestry among groups of species. For the vertebrate animals, the hierarchy looks like this:
[Figure: the vertebrate family tree. Image credit: University of California Museum of Paleontology's Understanding Evolution, http://evolution.berkeley.edu]
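For readers seeing this without the figure, here is a minimal sketch of the kind of nesting it shows, written as a small Python structure. The groupings are standard, simplified ones chosen for illustration, not a reproduction of the UCMP diagram.

```python
# Nested dictionaries as a stand-in for the vertebrate tree: each group
# sits inside the larger group whose shared characteristics it inherits.
vertebrates = {
    "ray-finned fishes": {},
    "tetrapods": {
        "amphibians": {},
        "amniotes": {
            "mammals": {
                "monotremes (egg-laying)": {},
                "therians": {
                    "marsupials (pouched)": {},
                    "placentals (including humans)": {},
                },
            },
            "reptiles": {
                "lizards and snakes": {},
                "crocodilians": {},
                "birds": {},
            },
        },
    },
}

def show(tree, depth=0):
    """Print the hierarchy, with indentation reflecting the nesting."""
    for name, subgroups in tree.items():
        print("  " * depth + name)
        show(subgroups, depth + 1)

show(vertebrates)
```

Each trait we think of as "mammalian" or "avian" attaches to exactly one of these nested levels, which is the hierarchical pattern being described here.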
In other words, the hierarchy of characteristics is the product of a hierarchy of ancestry of animal and plant species. The things we think of as "mammalian" are the vestiges of the common ancestor of what we call mammals (and for most people, that's the ancestor of marsupial and placental mammals). The things we think of as defining birds owe their origins to the ancestor of that particular lineage of theropod dinosaurs. The pattern of diversity is the product of the evolutionary process.
Often when the theory of evolution is attacked, one will see it defended with documented artificial selection among laboratory bacteria, or observed small-scale adaptation in Darwin's finches. Those are fine examples, but they pale compared to the way the whole grand tapestry of species screams out "I'm the product of evolution from common ancestors!" The hierarchical pattern is ubiquitous and incredibly exhaustive (I leave it to the reader to work out the interesting example of whales and dolphins, and the ways in which they are quite distinct from their fishy marine colleagues) because the evolutionary process is likewise ubiquitous.
So the next time someone wonders if evolution really happens, explain to them that there are a lot of bats out there that could really use some feathers. And send me some gills if you can.
Wednesday, August 8, 2012
A Laffer of a statistical gaffe...
We humans are funny creatures. We're so primed to look for patterns that it gets in the way of our judgement about the real world. Examples run the gamut from cryptozoology to astrology. Much of the time, we harden our misinterpretations of reality with confirmation bias, the tendency to see new evidence as confirming what we already think. It takes real mental discipline to be honest about what we really know, versus what we would like to think is true.
You might think that we could improve that discipline with an extensive education, especially in a model-based field like economics. Indeed, a tough educational environment that challenges our beliefs can reduce our tendency to self-deception. But it's not a perfect solution, judging by a recent example from the Opinion page of the Wall Street Journal.
Arthur Laffer, who holds a PhD in economics from Stanford and was tenured at the University of Chicago's Graduate School of Business, included in his piece a figure comparing the rate of change of government spending with the change in gross domestic product for the 34 nations in the OECD. Laffer then points out that several nations with notably high increases in government spending also show big drops in GDP. In other words, what statisticians call a correlation. Laffer then goes on to claim that
"...there's no arguing with the data in the nearby table, and the fact that greater stimulus spending was followed by lower growth rates. Stimulus advocates have a lot of explaining to do. Their massive spending programs have hurt the economy and left us with huge bills to pay."
However, Laffer has made the oldest statistical error in the book in trying to align reality to his views: he has assumed that correlation implies causation. In other words, he is assuming that as two things changed, one clearly caused the other. It starts with the simplest nuance, describing the change in government spending before describing the change in GDP; that reads a lot like a narrative, where one thing happened before the other. The first must have caused the second, right?
Unfortunately, there are lots of other interpretations completely consistent with the table. One could simply reverse the causation - isn't it remarkable how much a drop in GDP caused governments to spend more money on social support! - and still get the same data. Or there could be a third factor that caused the changes in the things measured (perhaps there is some other similarity among the countries with a big drop in GDP that caused both changes). In short, you simply cannot know from a correlation between things how, or even if, one caused the other.
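As an illustration of that point (and only an illustration: the data below are invented, not the OECD numbers from Laffer's table), here is a short Python simulation in which GDP shocks drive spending, yet the correlation comes out just as negative as if spending had driven GDP:

```python
import random

random.seed(1)

# Invented toy data: each "country" first takes a GDP shock, and its
# government then spends more when the shock is worse (reverse causation,
# e.g. automatic stabilizers and bailouts).
countries = []
for _ in range(34):                       # an OECD-sized sample
    gdp_change = random.gauss(1.0, 3.0)   # percent change in GDP
    spending_change = 2.0 - 0.8 * gdp_change + random.gauss(0.0, 1.0)
    countries.append((spending_change, gdp_change))

def correlation(pairs):
    """Plain Pearson correlation, standard library only."""
    n = len(pairs)
    xs, ys = zip(*pairs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

print(f"correlation(spending change, GDP change) = {correlation(countries):.2f}")
# Strongly negative, even though spending here responds to GDP by construction.
```

The same negative correlation appears whether spending drags GDP down or GDP shocks drive spending up, which is why the table alone cannot tell Laffer's story from its mirror image.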
To be sure, someone of Arthur Laffer's background should know better than to draw such unjustified conclusions from these data, and the WSJ should know better than to print it. One might see the invisible hand of supply-side ideology behind that novice's mistake. Nonetheless, the example points out once again how vulnerable we are to confirmation bias - and how much more stringently we must examine our ideas about the world.