One big unknown in the controversy over heavy metal pollution in southeast Portland (OR) has been how much of the pollutants flagged by the Oregon Department of Environmental Quality (DEQ) as above its air benchmark levels has accumulated in soil. Cadmium (Cd) and arsenic (As) were measured well above emissions benchmarks at an air monitoring station near the Bullseye Glass production facility; while those air levels were not particularly high with respect to inhalation health risks, we did not know whether the pollutants had accumulated in soils. For heavy metals this is a real concern: elements like Cd and As do not break down into other substances the way a molecular pollutant can, so there has been a legitimate worry that even air concentrations that were not acute could have built up to dangerous soil levels over the functional lifetime of the Bullseye facility.
Now we have some illuminating data, in the form of soil testing results from the nearby Children's Creative Learning Center (CCLC), about 150 meters from the Bullseye factory. The report from the testing is posted here, and I've included the data table below:
Helpfully included in the test results are background levels for the Portland area. The 6" and 12" designations refer to the depth at which soil was sampled at each of six sites around the CCLC campus. The table also includes DEQ soil screening values for residential areas. Not included are the DEQ risk-based concentrations (RBCs) for urban residential areas, which are 1.0 mg/kg (As) and 160 mg/kg (Cd).
So what do the results tell us? First off, there's no evidence for an overall accumulation of arsenic at the site, since the averages at both depths fall below the background level for the City as a whole. It is true that sites 5 and 6, on the west side of the building, run a bit higher than the other samples, especially at the 12" depth; this could be an effect of proximity to Bullseye, or simply of proximity to a nearby road. Sample #4, however, is immediately adjacent to Bullseye and shows lower As levels, especially at 12" depth. The values do fall above the DEQ's risk-based concentration for arsenic, which covers soil ingestion, skin contact, and inhalation of contaminated soil, with the RBC set to match a 1 in 1 million risk of causing a cancer. Under assumptions like roughly year-round exposure, the average value at 6" depth would correspond to roughly a 1 in 70,000 increase in the chance of developing a cancer, and the maximum at 12" to a 1 in 20,000 increase.
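To make the arithmetic behind those risk figures explicit, here's a minimal sketch of the linear scaling involved: the DEQ RBC is the soil concentration pegged to a 1 in 1,000,000 excess cancer risk, and the model is linear, so risk scales proportionally with concentration. The 14.3 mg/kg input below is a purely illustrative value chosen to land near the 1-in-70,000 figure above, not a number taken from the CCLC report.

```python
# Linear risk-based concentration (RBC) scaling: the RBC is the soil level
# corresponding to a 1-in-1,000,000 excess lifetime cancer risk, and the
# model is linear in concentration.

AS_RBC_MG_PER_KG = 1.0   # DEQ urban-residential RBC for arsenic (from the post)
RBC_RISK = 1e-6          # risk level the RBC is pegged to

def excess_cancer_risk(soil_conc_mg_per_kg, rbc=AS_RBC_MG_PER_KG):
    """Lifetime excess cancer risk implied by a soil concentration."""
    return RBC_RISK * soil_conc_mg_per_kg / rbc

# Illustrative only: ~14.3 mg/kg works out to roughly a 1 in 70,000 risk
risk = excess_cancer_risk(14.3)
print(f"excess risk: about 1 in {round(1 / risk):,}")
```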
For cadmium, the sample values are more consistent from location to location. At depth, they fall very close to the background for Portland, though at 6" they are about twice that value, so there may have been some accumulation of Cd in the soil. The values are all very far below the risk-based concentration from DEQ for soil (by a factor of 50 or more), so there appear to be no grounds to worry about a health risk for cadmium from soil at CCLC.
Incidentally, you may have noticed that the DEQ screening values (the RBCs) are really different for the two elements; at first I thought that the value for cadmium was a typo! But the big difference seems to be that arsenic is volatile (i.e. it can evaporate into air from soil) while cadmium cannot, along with the fact that arsenic is more easily absorbed through the skin.
So what is the overall conclusion? I hesitate to generalize too much, but the amounts of As and Cd measured at CCLC all fall within about 2X the background for the Portland basin as a whole. For arsenic, only the 12" samples are well above that background; conversely, for cadmium, the values at depth are near baseline while the shallow samples are elevated. The difference could be due to different leaching rates for the two elements, or to different modes or times of accumulation. So there may be some accumulation of these elements from nearby industrial operations like Bullseye. On the other hand, this is a site very close to the Bullseye factory; given the relative values observed in the moss and lichen study from the US Forest Service, soil levels much farther away are likely to be at background for the City, if soil concentrations drop off with distance at a rate similar to moss and lichen concentrations. As an example, airborne Cd levels drop by a factor of about 2 by the time one reaches Cleveland HS or Winterhaven Elementary in the moss and lichen study; if soil accumulation falls off similarly, those schools would be expected to have soil levels of Cd and As near background. For students and staff at CCLC, continued exposure to unremediated soil may be unwise, but the health risks from soil look small for arsenic and negligible for cadmium.
UPDATE: The Oregonian has released their own set of soil testing data here. The levels tend to drop off quickly with distance from Bullseye, with a few scattered high results among mostly below-detection-threshold tests. One very high result near Cleveland HS should probably be followed up, though as an outlier it is likely to come from a source other than settling from Bullseye. A high lead level next to Powell Blvd. is likely (IMHO) the result of years of auto traffic using leaded gasoline.
Thursday, February 18, 2016
Saturday, February 13, 2016
Glass manufacturing update for chromium
Since I put up my recent post on cadmium (Cd) and arsenic (As), attention in SE Portland (OR) has shifted to the possibility of hexavalent chromium pollution in the area around Bullseye Glass. To try to help everyone understand the data for Cr(VI), I'm adding this post, but a word of caution at the outset: the data for chromium are far less clear.
First off, everyone needs to understand that there are two common forms of chromium used in industry. Cr(III), or trivalent chromium, does not have documented health risks; it is the form of chromium found most often in the human body. Cr(VI), or hexavalent chromium, is more dangerous, because it is a powerful oxidant; it strips 3 electrons off other atoms or ions in converting to Cr(III). If you've heard of oxidative damage, this is an example. You could write these as Cr3+ and Cr6+, but I can't get Blogger to do superscripts...so they look funny.
Hexavalent chromium, therefore, is the form of the element that matters for health risk, and all the standards I can find are for Cr(VI), including the ones in the following table, which again combines DEQ air monitoring data with Oregon benchmarks, EPA cancer risks, and reference values for non-cancer health effects. I left the Cd and As rows in for comparison. Everything here applies to inhalation risk, with units of micrograms/cubic meter.
The other uncertainty is whether the bulk of the chromium is coming from Bullseye at all. While the monitoring station is close by, we don't have a relative concentration map for chromium from the moss and lichen sampling done by the US Forest Service, as we do for cadmium and arsenic. In the case of those metals, the USFS study pointed pretty clearly to Bullseye's location. It does seem that Bullseye was using both Cr(III) and Cr(VI), since the company has suspended production with Cr(VI) but not Cr(III) at DEQ's request. However, the company claims that its factory was not operating on the peak days of chromium emission, and there are indeed two days with much higher (roughly 20X) chromium levels in the DEQ data that are responsible for most of the average (66% of the total emissions measured come from those two days). And while the pairing of cadmium and arsenic points to glassmaking as an industry, and Cr(VI) salts like potassium chromate and dichromate are used to color glass, lots of industries use Cr(VI) salts; chrome plating is one example. We also don't have the benefit of knowing (from the moss and lichen monitoring) how quickly the chromium exposure lessens, if at all, as one moves away from the DEQ monitoring station. In sum, the source of the chromium in the DEQ data is unclear at this point.
Given those caveats, the chromium levels are still concerning. If the source of the chromium is industrial, it is quite possible that it is indeed mostly Cr(VI). In that case, the lung cancer risk from chromium exposure at the monitoring station is close to 1 in 1,000 for someone breathing that air 24/7 for a lifetime using the EPA model. That is still much lower than the baseline lung cancer risk, but if you eliminate the effect of smoking (which is linked to about 90% of lung cancer), this potential Cr(VI) risk might add as much as 14.5% to the risk of lung cancer from all causes except smoking - a worst-case scenario for someone living at the monitoring station. That's still only about a sixth of the contribution of radon exposure (you have had your house radon-checked, right?), but it's potentially more than for cadmium or arsenic. The non-cancer risks don't look too bad; they are actually far below the level where particulate emissions cause non-cancer disease, and the EPA has low confidence in the study that gives the much lower threshold for mist and aerosol exposure, for what look to my eyes like great reasons (see the "Chronic effects - noncancer" section in the IRIS summary).
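As a sanity check on that 14.5% figure, here is the back-of-the-envelope arithmetic, with the baseline numbers (a roughly 1-in-14.5 lifetime lung cancer risk, 90% of it linked to smoking) stated as explicit assumptions rather than measured facts:

```python
# Worst-case framing from the post: how much would a 1-in-1,000 Cr(VI)
# lung-cancer risk add to the baseline risk once smoking is excluded?
# The baseline figures below are rough assumptions, not measurements.

LIFETIME_LUNG_CANCER_RISK = 1 / 14.5   # approximate population baseline
SMOKING_FRACTION = 0.90                # share of lung cancers linked to smoking
CR6_WORST_CASE_RISK = 1e-3             # 1 in 1,000, 24/7 lifetime exposure

non_smoking_baseline = LIFETIME_LUNG_CANCER_RISK * (1 - SMOKING_FRACTION)
added_fraction = CR6_WORST_CASE_RISK / non_smoking_baseline
print(f"{added_fraction:.1%} added to non-smoking lung-cancer risk")
```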
So the picture for chromium is much more vague. It would be great, for example, to know how much of the chromium DEQ is finding in the air is hexavalent. It is possible to distinguish the hexavalent chromium ion, but to do a Cr(VI)-specific test requires an extra separation step (silica gel chromatography) that was presumably not part of the DEQ protocol. It would also be very helpful to know if the USFS data includes relative chromium levels, even if those samples don't distinguish the oxidation state, to help pinpoint the source of the emissions. Those avenues should definitely be pursued; the existing DEQ emissions data give us a worst-case risk that is not minor. Further data would let us know just how close we actually are to that worst case.
Tuesday, February 9, 2016
Cadmium, Arsenic, and glass manufacturing in Portland: What do the test results mean?
Tonight at Cleveland High School (in Portland, OR) the Multnomah County government convened a public meeting to discuss the recent pinpointing of elevated cadmium (Cd) and arsenic (As) levels in Portland to the Bullseye Glass manufacturing facility at 3722 SE 21st Avenue. I was frankly hoping that the representatives of the Department of Environmental Quality (DEQ) and Oregon Health Authority (OHA) would put the levels of those metals that DEQ measured in October into a health context. Unfortunately, they did not, and the meeting degenerated into an expression of public anger that answered few questions.
Fortunately, we have well-established models of cancer risk from the Environmental Protection Agency (EPA) and reference exposures from the California EPA for the threshold of likely health risks in general. Those models and exposures are based on data from industrial use of Cd and As, as well as documented environmental exposures from published studies. In other words, we have an evidence-based context for the numbers DEQ measured in the air outside Bullseye's production facility. In hopes of helping people understand the big question voiced at the meeting tonight ("What do the emissions mean for my family?"), I've compiled the results from air quality monitoring with those reference levels.
To make things easier to compare, I've put all the exposure data and relevant reference levels in a table with everything in the same units (micrograms per cubic meter).
The Oregon DEQ benchmarks are reported here. These numbers are set based on the EPA lung-cancer model, where the benchmark is the level at which exposure would increase cancer risk by 1 in one million for a person who breathed that air for their entire lifetime (chronic exposure).
The EPA cancer risk and CalEPA reference exposures come from this page (arsenic) and this page (cadmium). I used the 1 in 10,000 lung cancer risk estimate from EPA because it was closest to the exposures reported near Bullseye. It's a linear model, so it shouldn't be surprising that you can just multiply the DEQ benchmark by 100 to get that number as well. Note: The CalEPA reference exposures are given in milligrams per cubic meter, so you have to multiply those numbers by 1,000 to get to micrograms per cubic meter.
From a first look, you may notice that the average exposure DEQ measured falls near the 1 in 10,000 risk for lung cancer from the EPA model (it's actually about 1 in 6,300 for As, and 1 in 20,400 for Cd). The averages are also 2-3 times the CalEPA threshold below which no health effects at all are expected.
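Those per-element numbers come straight out of the linear model: risk scales with the ratio of the measured concentration to the benchmark concentration (which is pegged to a 1-in-1,000,000 risk). A quick sketch, using the cadmium monitoring average of 0.0294 micrograms/cubic meter and the 0.0006 micrograms/cubic meter benchmark implied by the numbers in this post:

```python
# Linear EPA model: lifetime excess cancer risk is proportional to the
# ratio of the measured concentration to the DEQ benchmark, where the
# benchmark corresponds to a 1-in-1,000,000 risk.

BENCHMARK_RISK = 1e-6

def lifetime_risk(conc_ug_m3, benchmark_ug_m3):
    """Excess lifetime cancer risk for a given air concentration."""
    return BENCHMARK_RISK * conc_ug_m3 / benchmark_ug_m3

# Cadmium: measured average vs. the benchmark implied by the post's numbers
cd_risk = lifetime_risk(0.0294, 0.0006)
print(f"Cd risk: about 1 in {round(1 / cd_risk):,}")  # about 1 in 20,400
```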
Hopefully you've noticed something else from the table above: the DEQ benchmarks are quite stringent. They are far below the CalEPA thresholds for any likely health effects at all from inhaling either cadmium or arsenic, and they correspond to a 1 in one million increase in lung cancer risk (based on the EPA models). For some perspective, men have a 1 in 13 chance of developing lung cancer in their lifetime (and a 1 in 2 chance of any cancer), while women have a 1 in 16 lifetime chance for lung cancer and a 1 in 3 lifetime chance for all cancers (check out the data from the American Cancer Society). Put another way, if one million people lived at the DEQ benchmark, we would expect one lung cancer due to cadmium exposure, and about 70,000 lung cancers from other causes. Even going up to the measured average (about a 1 in 20,400 risk for Cd), you would expect around 50 cadmium-related lung cancers in a population of one million who all lived at the monitoring station, as compared to the 70,000 cases from other causes. Did I say stringent?
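The population framing in that paragraph is just multiplication, but it's worth seeing spelled out; the 70,000 figure is this post's rough baseline for lung cancers from other causes in a million people:

```python
# Expected lung cancers in a population of one million, all exposed at a
# given individual lifetime risk, next to the ~70,000 expected from other
# causes (the rough baseline used in the post).

POPULATION = 1_000_000
BASELINE_CASES = 70_000  # lung cancers from all other causes (approximate)

def expected_cases(individual_risk, population=POPULATION):
    """Expected number of exposure-related cancers in the population."""
    return individual_risk * population

print(expected_cases(1e-6))    # at the DEQ benchmark: ~1 case
print(expected_cases(49e-6))   # at the measured Cd average: ~49 cases
```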
It's important to repeat that the data above come from right next to Bullseye. If you go even a short distance away, mixing in the atmosphere will lower the concentration of arsenic and cadmium. DEQ has even published a preliminary map of expected cadmium exposures here, based on relative concentrations of the elements in moss samples collected by the US Forest Service. Here's a screenshot:
These numbers are in nanograms per cubic meter, so you have to divide by 1,000 to compare with the table. It may help to think of the boundary between the two highest concentration bands (around 30 ng/cubic meter) as the average from monitoring (29.4 ng/cubic meter). By the time you get to the Fred Meyer day care, the estimated levels have dropped to about 0.01 micrograms/cubic meter, right at the CalEPA threshold below which any health effects are unlikely. Farther out, at Cleveland HS or Winterhaven Elementary, levels are more like 0.005 micrograms/cubic meter: half the CalEPA health threshold, and corresponding to about a 1 in 100,000 probability of cancer for an individual due to the cadmium exposure. You may also notice a smaller point source in north Portland, which looks like it's at Ostrom Glass and Metal Works.
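Since the map, the monitoring table, and the CalEPA reference levels each use different units (ng/m3, ug/m3, and mg/m3 respectively), a couple of trivial conversion helpers make the comparisons in this paragraph explicit; the input values are the ones quoted above:

```python
# Unit conversions between the three concentration units used in this post:
# the moss-based map (ng/m^3), the monitoring table (ug/m^3), and the
# CalEPA reference exposures (mg/m^3).

def ng_to_ug(ng_per_m3):
    return ng_per_m3 / 1000.0

def mg_to_ug(mg_per_m3):
    return mg_per_m3 * 1000.0

print(f"{ng_to_ug(29.4):.4f} ug/m^3")  # monitoring average for Cd
print(f"{ng_to_ug(5.0):.4f} ug/m^3")   # Cleveland HS / Winterhaven estimate
```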
So what's the conclusion?
Even for students at the Fred Meyer day care facility, inhaling the air is unlikely to cause any ill effects at all based on the CalEPA threshold and EPA cancer models. The cancer risk estimate is already down to 1 in 60,000. In other words, if there were 60,000 students in that day care, we would expect one of them to get a lung cancer from Cd exposure. There are in fact approximately 100 children who attend the facility, and they will not breathe this air 24/7 for 70 years (the assumptions of the EPA model). As for other health effects, the exposure looks like it would be right at the CalEPA threshold outside (and probably lower inside).
Further away (for example, in the residential neighborhoods east of 26th Ave. and south of Gladstone St.), exposures should be lower. While there are reports of detectable levels of Cd and As in the soil at the day care facility, soil and air tests at Cleveland HS and Winterhaven Elementary ordered by Portland Public Schools did not detect any Cd or As. (I hope to link to those results if and when they are released.)
In short, the levels DEQ detected are at or below where one would expect much of an elevation in lung cancer risk or other health consequences. Personally, I don't see grounds to get particularly worried about human health given the numbers that have been reported.
Let me be clear: It would be great to get cadmium and arsenic exposures down to the DEQ benchmarks, especially since they are elements and do not "break down" in the environment the way molecular toxins can. Given the possibility of soil accumulation, DEQ's efforts make sense. I hope that the employees at Bullseye are being evaluated for As and Cd exposure, since their exposures are potentially much higher inside the factory. And re-evaluating the pollution control measures at Bullseye is a no-brainer, since we now know that it is a significant point source for As and Cd. There is lots of work to be done.
However, the community anger expressed at tonight's meeting is, in my opinion, totally unjustified. It was clear from DEQ and Forest Service presentations that sleuthing out the source of Cd and As was an involved process, from DEQ identifying the two elements as more concentrated in monitoring than predicted from modeling known sources, to the Forest Service using a moss and lichen model to establish relative exposures throughout the City, to the DEQ doing targeted monitoring near Bullseye to confirm the source and document the exposure levels. Put simply, government agencies discovered the problem and tracked it to its source before anybody noticed any consequences. To me, that looks like effective detective work on the part of DEQ and USFS. The first commenter, reading a prepared statement from Neighbors for Clean Air, characterized the process as "an abysmal failure of government." While I understand that many of my neighbors are worried and angry, I think we can do better than that.
UPDATES:
As the conversation has broadened to include emissions of hexavalent chromium, I've written another post dealing with that metal.
Also, soil testing results from the nearby daycare have been released, and I've worked through those in another post.
The OR state cancer registry (OSCaR) has released the results of an analysis of lung and bladder cancer in the area around Bullseye glass; here are those results with my analysis.
Tuesday, January 27, 2015
Does Eric Metaxas really have comprehensive imagination?
Recently an opinion piece by Eric Metaxas, "Science Increasingly Makes the Case for God," caused a bit of a storm after the Wall Street Journal ran it on Christmas Day. The argument it makes - that the physical world is fine-tuned to match the needs of living things - is nothing new; similar claims from authors such as Michael Behe, William Dembski, and Michael Denton helped found the Intelligent Design movement in the early 1990s. The fact that Metaxas can still get publicity for these claims suggests a serious examination is in order.
The foundation of Metaxas's argument is the striking correlation between the physical conditions of the Earth and the needs of organisms that live on that Earth. As he puts it:
Today there are more than 200 known parameters necessary for a planet to support life—every single one of which must be perfectly met, or the whole thing falls apart.

This correlation is something any knowledgeable observer should accept; the match between organisms and their environments is quite striking. Given that correlation, we naturally want to understand the cause, and that is where Metaxas's views differ so radically from those of most scientists.
There are really two possibilities. The first explanation argues that life is picky, fragile, and static, and that the universe must have been adjusted to the particular needs of living things. This is where Metaxas's description, that "every single [parameter] must be perfectly met" comes from. If the universe needed adjusting (often called "fine-tuning" when describing the constants of physics), then there must have been an intelligent being doing the adjusting - hence "the Case for God."
The other possibility is that the conditions of the universe are relatively static and not "adjusted" to benefit life, but that living things are capable of evolving so that they can tolerate those conditions (at least in a few places like Earth). Either explanation could produce the striking correlation; how do we choose between them?
Each explanation makes strong demands on our understanding. One particularly strong demand for the fine-tuning explanation, often unstated or ignored, is a complete understanding of the possible ways living things could exist. Arguing that the universe has to be adjusted to the requirements of Earth life includes the implicit assumption that the Earth includes the complete sweep of potential living things. If you want to argue that Metaxas's "200 parameters" have to be the way they are, you have to argue that the only way living things could possibly exist is pretty much just the way they do here on Earth.
As an example, if you insist that life absolutely requires liquid water, having planets with liquid water on their surface seems like a reasonable demand of the universe. But can you be sure of that assumption when the only examples of living things you know all come from a planet with oceans? It's rather like insisting that all mammals deliver well-developed live young, while completely ignorant of the marsupial branch of the mammal family tree. Time has not been kind to this assumption of "comprehensive imagination." That living things could survive in deep oceanic hydrothermal vents (far too deep to get energy from the sun), at the near-boiling temperatures of hot springs, or in extremely acidic conditions was hardly conceived (not even taken seriously enough to be considered impossible) until the organisms living in those environments were discovered. Claiming to imagine every possible way of being alive seems both remarkably arrogant and extremely unlikely. To borrow from Daniel Kahneman's remarkable book, this explanation suffers from the "WYSIATI" fallacy: What You See Is All There Is.
But what about the other explanation - that life has evolved to match the conditions in which it finds itself? That lineages of organisms can change over time was once considered an extraordinary idea, and one might rightfully demand some pretty strong evidence of its reality. After over 150 years with the idea of natural selection, the evolutionary biologist can point to a range of different types of evidence, from change in fossil lineages, to the hierarchical pattern of diversity of living things, to the underlying biochemical similarity of all living things. One can observe the evolutionary process in the lab with organisms whose generation times are conveniently short, as well as in decades-long studies of wild populations. We even know that life on Earth has adjusted to huge shifts in the composition of its atmosphere caused by living things themselves! In short, that lineages of living things on Earth can change over time is a well-documented fact. That other forms of life could evolve does not seem far-fetched, since there is no apparent reason the conditions for evolution (reproduction, a genetic system, and differential survival) could not be fulfilled in radically different forms of life.
So in understanding the strong correlation between environment and living things, we can either claim that life is inherently dependent on the conditions here on Earth (again, a claim of comprehensive imagination for which we cannot have any real basis) or claim that life can adapt to at least some of the environments available (for which we have abundant evidence, at least for Earth-life). If we are really honest about the limitations of our examples of life - and the limits of our imaginations - we should see the absurdity of the assumptions necessary to claim proof of supernatural fine-tuning. The far more plausible explanation is the evolutionary one, and it is the only one with sound empirical evidence.
(note: not long after I wrote this post, the AAAS hosted a session on what a "shadow biosphere" (i.e. not based on DNA/protein) might be like. There's an account of the session here, if you're curious.)
Saturday, April 12, 2014
What would an Intelligent Design theory look like?
When I first read about the "hypothesis" of Intelligent Design, I figured it would go away soon enough. Clearly, I underestimated the degree to which a vocal segment of society will prioritize their personal beliefs in a Designer over all the evidence that suggests there is no design in nature (and yes, I know people talk about natural "designs" all the time in a casual way, just like we use the term natural "selection," when both are mere metaphors that save a lot of verbiage). Advocates of "ID" are constantly claiming it is science (it helps justify putting it in a public school science classroom), so I'm going to take the intelligent design argument seriously enough to put it in scientific terms. What would a theory of Intelligent Design look like, and what testable predictions would it make?
1. If natural structures are the result of a design process rather than undirected evolution, they should be consistently optimal. An entity intelligent enough to design any of the complex creatures we see around us (and powerful enough to put those designs into use) would be expected to do a good job, and the designs should be really good. As a corollary, we shouldn't see a lot of designs that mere humans like us could easily improve.
2. Good designs should not be limited to one taxonomic group. If it's a good design - one that solves a problem well - it should be widely used.
3. We shouldn't see a lot of history in an intelligently designed world. There's no reason a designer would need to provide continuity between the organisms it designed, especially if the intermediate forms between current organisms (what we call common ancestors in evolution) did not perform as well as the ones we see today. (If you like, you could call this prediction a more general version of #2).
4. An intelligent designer would try to make its different designs compatible. In other words, a designer with any intelligence would never design parasites. Why would such a creative and intelligent entity go to all the trouble to design a fancy multicellular creature, then toss in some single-celled bacteria (or even viruses) that could take that creature down?
Now I'm sure some of the folks on the ID side of things will find fault with these predictions; they may say that I'm limiting the mystery of how a Designer would have worked, or its purpose in creating living things, etc. But I'm trying to do science here, and if ID is to be a scientific hypothesis, it has to have specific testable predictions, just like all the great scientific explanations.
So how well does the evidence from nature match these predictions? Not very well. Starting with prediction #1 (designs should be consistently optimal), there's lots of non-optimal design out there. Using our own bodies as an example, humans have a terrible design in our throat, where we cross swallowed food with inhaled air and occasionally choke to death because of it. We also have chronic back difficulties from standing on two legs with a spine that works just fine for animals on four. Childbirth in humans is much more difficult than in other mammals, because moms push our babies' big, brainy heads through a small opening in the pelvis (caesarean sections work in difficult deliveries because removing a baby through the big abdominal opening is far easier than the natural route for birth). And finally, anyone who has studied the hormonal mechanism for regulating blood pressure ends up with a big facepalm. The kidneys respond to a decrease in blood pressure by modifying a hormone made by the liver; that hormone is modified again in the lungs and kidneys, then travels to the adrenal glands to trigger yet another hormone that tells the kidneys to increase salt retention from the urine, while all the while the third version of the liver hormone goes to the brain, which makes still another hormone telling the kidneys to absorb more water from the urine. In simpler terms, the kidneys use the liver, lungs, brain, and adrenal glands as intermediaries in the process of telling themselves to absorb more salt and water from the urine. NOBODY would call that a good design. In each case, I'd say we humble humans could have done much better. And those are just a few examples from one species (us).
So how about prediction #2 (good designs should be used all over the place)? There are a lot of species out there that could really benefit from some traits from other groups. Gliding snakes and squirrels are pretty cool for snakes and squirrels, but they are awful compared to true fliers like birds or bats (feathers would be helpful). We mammals would be much better at long aerobic efforts if we had the flow-through lungs that birds do. Those birds would benefit from better recognition of their chicks than just "it hatched in my nest" (you did know that cuckoos are nest parasites, right)? And there are the marine mammals, which do remarkably well given the fact that they have to routinely return to the ocean's surface to breathe ("I could sure use some gills, Shamu!"). Examples of good "designs" limited to one group of organisms abound here.
Now for prediction #3 (we should not see history). I'll just come out and say it: history pervades the record of life on Earth. Extinction has eliminated most of the intermediate ancestors between the species we see today, but we know they were there from fossils. It seems very strange that a designer would respect the ancestry of its various groups so carefully ("no nursing for you - you're a bird!" or "yes, whale, I know you have fins, but they still have to have all the fingers inside"). It also seems strange that a designer would put out an Archaeopteryx - a stepping-stone species between dinosaurs and birds that could never compete against a modern bird, but could make it without any true birds around yet. Why design a cobbled-together intermediate when you could just do real birds?
I can imagine someone arguing that a designer was learning as it went, and that the various groups of organisms were somehow the "rough drafts" along the way (no doubt arguing that we humans are the polished final draft). But that means that this intelligent designer didn't just publish the best designs, but all the "rough drafts." Does that make any sense?
And finally, prediction #4 (different designs should be compatible). I'll leave predators and prey aside, and just point out that there are roughly four species of parasite on earth for every non-parasite species (and plants may justly feel that most of those animals are also parasites as well - at least the ones that aren't helping them with pollination). While parasites certainly have ecological roles and big effects, it's hard to see those roles as necessary ones. Ultimately, why would any intelligent designer handicap its designs with so many effective parasites?
So there you have it. Some realistic and rational predictions from intelligent design, none of which match the actual world we have. I'm sure that ID advocates will argue that I'm setting up a straw man, and trot out their own prediction that they see "complex specified information" or "irreducible complexity" as predictions of the model. You should realize, however, that ID is not the only theory that predicts "CSI" (evolution would do the same), so it hardly qualifies as a prediction that discriminates ID from other explanations. As for "irreducible complexity," every time the ID folks argue something is "irreducibly complex," patient biologists point out the "reducibility" to simpler forms, at which point the ID movement abandons the case and picks out something else. Camera eyes, bacterial flagellae...it's like a big game of evidentiary whack-a-mole. I'm just going to ask you to judge my predictions on their own merits, and remind you that there's this other theory that would predict that organisms would often be bolted together in funny ways, that ancestry would be central to what features those organisms would have, and that struggling to survive would put organisms at cross-purposes all the time. I bet you've even heard of it...
1. If natural structures are the result of a design process rather than undirected evolution, they should be consistently optimal. An entity intelligent enough to design any of the complex creatures we see around us (and powerful enough to put those designs into use) would be expected to do a good job, and the designs should be really good. As a corollary, we shouldn't see a lot of designs that mere humans like us could easily improve.
2. Good designs should not be limited to one taxonomic group. If it's a good design - one that solves a problem well - it should be widely used.
3. We shouldn't see a lot of history in an intelligently designed world. There's no reason a designer would need to provide continuity between the organisms it designed, especially if the intermediate forms between current organisms (what we call common ancestors in evolution) did not perform as well as the ones we see today. (If you like, you could call this prediction a more general version of #2).
4. An intelligent designer would try to make its different designs compatible. In other words, a designer with any intelligence would never design parasites. Why would such a creative and intelligent entity go to all the trouble to design a fancy multicellular creature, then toss in some single-celled bacteria (or even viruses) that could take that creature down?
Now I'm sure some of the folks on the ID side of things will find fault with these predictions; they may say that I'm limiting the mystery of how a Designer would have worked, or its purpose in creating living things, etc. But I'm trying to do science here, and if ID is to be a scientific hypothesis, it has to have specific testable predictions, just like all the great scientific explanations.
So how well does the evidence from nature match these predictions? Not very well. Starting with prediction #1 (designs should be consistently optimal), there's lots of non-optimal design out there. Using our own bodies as an example, humans have a terrible design in our throat, where we cross swallowed food with inhaled air and occasionally choke to death because of it. We also have chronic back difficulties from standing on two legs with a spine that works just fine for animals on four. Childbirth in humans is much more difficult than in other mammals, because moms push our babies' big, brainy heads through a small opening in the pelvis (caesarean sections work in difficult deliveries because removing a baby through the big abdominal opening is far easier than the natural route for birth). And finally, anyone who has studied the hormonal mechanism for regulating blood pressure ends up with a big facepalm: the kidneys respond to a decrease in blood pressure by modifying a hormone made by the liver; that hormone is modified again in the lungs and kidneys, then travels to the adrenal glands, which make another hormone telling the kidneys to reabsorb more salt from the urine. All the while, the third version of the liver hormone goes to the brain, which makes yet another hormone telling the kidneys to absorb more water from the urine. In simpler terms, the kidneys use the liver, lungs, brain, and adrenal glands as intermediaries in the process of the kidney telling itself to absorb more salt and water from the urine. NOBODY would call that a good design. In each case, I'd say we humble humans could have done much better. And those are just a few examples from one species (us).
So how about prediction #2 (good designs should be used all over the place)? There are a lot of species out there that could really benefit from some traits from other groups. Gliding snakes and squirrels are pretty cool for snakes and squirrels, but they are awful compared to true fliers like birds or bats (feathers would be helpful). We mammals would be much better at long aerobic efforts if we had the flow-through lungs that birds do. Those birds would benefit from better recognition of their chicks than just "it hatched in my nest" (you did know that cuckoos are nest parasites, right?). And there are the marine mammals, which do remarkably well given the fact that they have to routinely return to the ocean's surface to breathe ("I could sure use some gills, Shamu!"). Examples of good "designs" limited to one group of organisms abound here.
Now for prediction #3 (we should not see history). I'll just come out and say it: history pervades the record of life on Earth. Extinction has eliminated most of the intermediate ancestors between the species we see today, but we know they were there from fossils. It seems very strange that a designer would respect the ancestry of its various groups so carefully ("no nursing for you - you're a bird!" or "yes, whale, I know you have fins, but they still have to have all the fingers inside"). It also seems strange that a designer would put out an Archaeopteryx - a stepping-stone species between dinosaurs and birds that could never compete against a modern bird, but could make it without any true birds around yet. Why design a cobbled-together intermediate when you could just do real birds?
I can imagine someone arguing that a designer was learning as it went, and that the various groups of organisms were somehow the "rough drafts" along the way (no doubt arguing that we humans are the polished final draft). But that means that this intelligent designer didn't just publish the best designs, but all the "rough drafts." Does that make any sense?
And finally, prediction #4 (different designs should be compatible). I'll leave predators and prey aside, and just point out that there are roughly four species of parasite on Earth for every non-parasite species (and plants may justly feel that most of those animals are parasites as well - at least the ones that aren't helping them with pollination). While parasites certainly have ecological roles and big effects, it's hard to see those roles as necessary ones. Ultimately, why would any intelligent designer handicap its designs with so many effective parasites?
So there you have it. Some realistic and rational predictions from intelligent design, none of which match the actual world we have. I'm sure that ID advocates will argue that I'm setting up a straw man, and trot out "complex specified information" or "irreducible complexity" as predictions of the model. You should realize, however, that ID is not the only theory that predicts "CSI" (evolution would do the same), so it hardly qualifies as a prediction that discriminates ID from other explanations. As for "irreducible complexity," every time the ID folks argue something is "irreducibly complex," patient biologists point out its "reducibility" to simpler forms, at which point the ID movement abandons the case and picks out something else. Camera eyes, bacterial flagella...it's like a big game of evidentiary whack-a-mole. I'm just going to ask you to judge my predictions on their own merits, and remind you that there's this other theory that would predict that organisms would often be bolted together in funny ways, that ancestry would be central to what features those organisms would have, and that struggling to survive would put organisms at cross-purposes all the time. I bet you've even heard of it...
Friday, March 21, 2014
A metaphor for the anti-vaccination crowd
I'm perpetually amazed that we still have to discuss this, but the anti-vaccination movement still hasn't faded into deserved obscurity. For example, the keynote speaker for this year's National Science Teachers Association conference is Dr. Mayim Bialik - you probably know her as an actress on The Big Bang Theory - who, despite a neuroscience degree, has not vaccinated her kids. She gave a very non-committal answer to a question from Ira Flatow about vaccination, arguing that it's just a personal choice for each family.
Of course, that's simply not true, if you understand epidemiology. Take, as an example, the fact that outbreaks of whooping cough (pertussis) are geographically connected to places where higher numbers of parents opt out of giving their kids the vaccine for non-medical reasons. In other words, everybody is put at higher risk of getting whooping cough in places where parents don't vaccinate, even in a state with a 91% average rate of vaccination for kindergartners. And yes, it's the unvaccinated who are most at risk, leading to the remarkable result that "with vaccine-preventable infectious diseases, the risk is higher for those higher in socioeconomic status" (Dr. Kenneth Bromberg, director of The Vaccine Research Center). After all, it's those with time on their hands who have the luxury of delving into the kinds of conspiracy theories that fuel the pseudoscience of the anti-vaccination movement. Yet the neighbors of those who do not vaccinate are put at higher risk, as are the kids who go to school with unvaccinated children, their infant siblings and grandparents, and anybody with a compromised immune system who comes into contact with the unvaccinated kids. The primary risk is personal, but the secondary risk to the community is really significant.
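The epidemiology here can be made concrete with a back-of-the-envelope sketch (the R0, efficacy, and coverage numbers below are illustrative assumptions, not data from any study): a disease keeps spreading when its effective reproduction number stays above 1, and local pockets of low coverage can push it back over that line even when the statewide average looks reassuring.

```python
# Toy illustration (not a real epidemiological model) of why community
# vaccination coverage matters to everyone. All numbers are assumptions
# chosen for illustration, not measurements.

R0 = 5.0         # assumed basic reproduction number: new cases per case
                 # in a fully susceptible population
EFFICACY = 0.9   # assumed fraction of vaccinated people actually protected

def r_effective(coverage):
    """Effective reproduction number: R0 scaled by the non-immune fraction."""
    immune = coverage * EFFICACY
    return R0 * (1.0 - immune)

# Critical coverage: the level needed to drive R_eff below 1
critical = (1.0 - 1.0 / R0) / EFFICACY
print(f"coverage needed to stop spread: {critical:.0%}")

for coverage in (0.95, 0.91, 0.80, 0.60):
    r = r_effective(coverage)
    verdict = "outbreak fades" if r < 1.0 else "outbreak can spread"
    print(f"coverage {coverage:.0%}: R_eff = {r:.2f} -> {verdict}")
```

With these made-up numbers, a community at 91% coverage sits just under the threshold, while a neighborhood that drops to 80% lets the disease spread again; that's the sense in which one family's opt-out changes everyone else's risk.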
Yet even with all the data that show the effectiveness of vaccines (heck, the field tests of the Salk polio vaccine laid the groundwork for modern double-blind clinical trials), a public figure with a science doctorate can somehow remain equivocal and see it as a matter of personal freedom. So here's a simple analogy that may help readers understand why the rest of us don't just say "OK, don't vaccinate your kids."
If you drive at night, you can turn on the headlights (and taillights) of your car to be more visible and decrease the chances of your having a collision. "But headlights consume energy and reduce my gas mileage," you might argue. "I wonder if it's a conspiracy of the oil companies to make me consume more fuel and pad out their profits." Um, sure. "In any case, it's a personal choice. I should be able to drive at night without my headlights if I think the risk is outweighed by the fuel savings. It's not your business to tell me I have to run my lights if I have personal reasons not to use them."
Well, it is your personal risk that is greatest; if you drive without your lights, yours is the car most likely to get hit. But that's not the only risk; you put all the other users of the road at higher risk of colliding with you when you don't use your lights. Hence, we as a society require you to use your lights because it makes us all safer. Nobody complains about these laws; we all realize the communal good in requiring everybody to use their lights at night.
How exactly is refusing to vaccinate different?
Thursday, September 26, 2013
The Agency problem
I recently heard an interesting podcast on "memes" and the extent of their similarity to genes. One of the hosts (and many online commentators) had trouble with the idea that memes (units of information transmitted among humans) could replicate themselves - after all, a song or phrase doesn't copy itself, does it? This was contrasted with genes, which we were assured could self-replicate, and thus the meme idea was dismissed. We can choose whether or not to pass along a pithy phrase, so it doesn't have the status that a gene does.
Of course, to anyone who understands genetics, the idea that a gene "self-replicates" is pretty silly. Aside from a few special sequences of RNA (and damn if those aren't interesting), DNA is just a molecular encoding of information - not fundamentally that different from a long string of text. Genes are particular stretches of DNA that code for useful proteins, but they are still just information. It takes a bunch of molecular machinery inside a cell to copy that information (or, for that matter, to convert it into a protein). While it's true that the individual enzymes that copy, transcribe, and translate the information are produced from genetic information, it takes a collection of genes working in concert to duplicate any of the individual genes. For the gene to become more common in the world, it has to be in an organism that survives and reproduces. Genes replicate, proliferate, and disappear in cells in organisms (even if those organisms are unicellular), and genes can't "do" anything on their own.
So why does everybody just assume genes are doing their thing replicating and proliferating? Well, the fact that Richard Dawkins's groundbreaking book was called "The Selfish Gene" may have something to do with it. Dawkins pointed out that from the point of view of a gene, an organism is a "survival machine." Genes don't really "care" about individual organisms beyond their ability to promulgate genes. To bolster this view, Dawkins (correctly) pointed out that individual organisms (and even species) are incredibly short-lived compared to genes; hemoglobin genes, for example, are something like 500 million years old, and the genes for enzymes like DNA polymerase are presumably as old as DNA-based life itself - which is the only kind we know (and yes, even RNA viruses depend on DNA-based life; they require plenty of proteins encoded by their hosts' DNA to reproduce). Since Dawkins published the book, most folks seem to have accepted some sort of independent drive for genes - hence the idea that they are "self-replicating" - and the way Dawkins pointed out that evolution was not just about the survival and reproduction of organisms was a big step forward in understanding. Nonetheless, the idea that emerged ("it's all about the genes") is just plain mistaken.
The problem here is that we humans are used to the idea of agency. That is, changes come from agents which are capable of action in the world. In human political and social systems, agency is a basic assumption; if something happens, somebody must have done it. It's a great concept for explaining a market or a legislature. In understanding an evolutionary process, however, agency is basically useless, because changes occur at so many levels (gene, organism, population) at the same time that one just can't ascribe evolution to a single agent. Genes don't evolve on their own; neither do organisms. Populations evolve (change genetically over time) only to the degree that individuals in the populations are more or less successful at surviving and reproducing, but the change is at a genetic level (usually described as a change in gene frequencies). So who is the agent?
Really thinking deeply about this is hard when you're raised on agency; hell, even evolutionary biologists use Darwin's term "natural selection," as if some agent ("Nature") was carefully deciding who deserves to survive. That, of course, is not just nonsense, but unnecessary; organisms just survive and reproduce, passing on genes that determine part of the likelihood of survival of those organisms. In that situation, there is no agent that has to "select"; instead, better-surviving libraries of genes emerge in the surviving organisms, period. Evolution is literally the most natural thing to happen to reproducing organisms with some sort of genetic system, no agent required.
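That "no agent required" point is easy to demonstrate with a toy simulation (the survival probabilities, population size, and generation count below are made up for illustration). Nothing in the code "selects" anything: individuals just survive or don't, at rates that depend on which gene variant they carry, and the better-surviving variant's frequency rises on its own.

```python
import random

# Minimal sketch of agent-free selection: two gene variants, "A" and "B",
# with different (assumed) per-generation survival probabilities. Survivors
# reproduce; nobody decides who "deserves" to survive.

random.seed(42)  # fixed seed so the run is repeatable

SURVIVAL = {"A": 0.8, "B": 0.6}  # assumed survival probabilities, not real biology
POP_SIZE = 1000

def next_generation(pop):
    """Each individual survives with its variant's probability; offspring
    are then drawn from the survivors to keep the population size fixed."""
    survivors = [gene for gene in pop if random.random() < SURVIVAL[gene]]
    return [random.choice(survivors) for _ in range(POP_SIZE)]

# Start at 50/50 and just let survival and reproduction happen
pop = ["A"] * (POP_SIZE // 2) + ["B"] * (POP_SIZE // 2)
for _ in range(20):
    pop = next_generation(pop)

freq_a = pop.count("A") / POP_SIZE
print(f"frequency of A after 20 generations: {freq_a:.2f}")  # climbs toward fixation
```

The only "evolution" here is the change in gene frequencies across generations - exactly the population-level change described above, emerging from individual survival with no selector anywhere in the loop.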
Back to memes. We have information (words, phrases, ideas, music) that sits around on its own, but tends to get passed around when you have people talking to each other. We (people) use information, share it, find it helpful, tell our kids. Nobody has to go out and force us all to put "ROFL" in our social media posts, but it works, and it spreads. That's a meme, and it deserves the same sort of respect as a concept in social informatics that a gene does in biochemistry. OK, I made up the phrase "social informatics," but you get the point. Genes and memes spread, or they don't; one is biological evolution, the other social evolution, but in any case, there's no point in trying to find the Agent of Evolution behind it all. In a situation where genes or memes can be copied, it just happens.