Tuesday, February 27, 2018

The conjecture of "fine-tuning"...and "cosmopsychism"?

            A persistent claim in what one might call the philosophy of cosmology is the supposed “fine-tuning” of the constants of physics to conditions we consider suitable for sustaining living things. Consider, as a representative example, the introduction to a recent essay by Philip Goff in Aeon:

In the past 40 or so years, a strange fact about our Universe gradually made itself known to scientists: the laws of physics, and the initial conditions of our Universe, are fine-tuned for the possibility of life. It turns out that, for life to be possible, the numbers in basic physics – for example, the strength of gravity, or the mass of the electron – must have values falling in a certain range. And that range is an incredibly narrow slice of all the possible values those numbers can have. It is therefore incredibly unlikely that a universe like ours would have the kind of numbers compatible with the existence of life. But, against all the odds, our Universe does.

            Goff goes on to interpret this “fact” of fine-tuning as support for “…the idea that the Universe is a conscious mind that responds to value.” In his view, the Universe has a clear telos – the production of intelligent life. Given how central “fine-tuning” is to Goff’s claim, one might be forgiven for more closely examining the basis for his probability argument – the likelihood of a given universe having physical constants compatible with intelligent life.
            Probability, in its simplest form, is a calculation of the likelihood of a particular outcome given the range of possible outcomes. If, for simplicity, we assume that all potential combinations of the physical constants are equally likely, then the probability of getting a universe that can support intelligent life is a simple ratio: the number of possible universes that we judge could potentially support such life, divided by the number of possible universes. To get to Goff’s conclusion that this outcome is “incredibly unlikely,” we have to know both how many possible universes there are, and how many of them could support intelligent life. In terms of the constants in the laws of physics (those parameters that must be measured empirically, rather than calculated from theory), we need to know what range of variation is possible for each constant, and how much of that variation is compatible with intelligent life. This is where we get Goff’s basic claim of “fine-tuning” - “that range [of values of physical constants] is an incredibly narrow slice of all the possible values those numbers can have.”
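            To see the structure of the calculation Goff's argument requires, consider a toy sketch in Python. Every number in it is invented for illustration; nobody knows the actual range of possible values or the actual width of the life-friendly window, which is precisely the problem:

```python
import numpy as np

# Toy version of the fine-tuning probability argument. ALL of these
# numbers are invented; the exercise only shows how the calculation works.

# Pretend a single constant (epsilon, say) could take any value in some
# range, sampled as a uniform grid of "possible universes".
possible_values = np.linspace(0.001, 0.020, 10_000)

# Pretend life is only possible in a narrow window around 0.007.
life_ok = (possible_values > 0.0068) & (possible_values < 0.0072)

# With every value equally likely, the probability of a life-friendly
# universe is just the favorable fraction of the grid.
print(f"P(life-friendly universe) = {life_ok.mean():.4f}")

# Shrink the assumed range of possible values (the denominator) and the
# "incredibly unlikely" conclusion evaporates; the answer is set entirely
# by ranges nobody actually knows.
```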
            A crucial assumption in this view is that physical constants could potentially vary at all. Goff argues, for example, that we are fortunate that the parameter 𝛆 - representing the efficiency of the fusion of hydrogen to helium - has the value 0.007, since a universe where 𝛆 was slightly smaller would contain only hydrogen, while one where it was slightly larger would contain almost no hydrogen at all. However, the fact that we can simply substitute other values of 𝛆 in our equations hardly demonstrates that other values are actually possible. Further, nothing in our actual experience suggests that 𝛆 can vary; in fact, it seems to be the same everywhere in the universe (the fact that we can observe stars across vast separations in distance and time being but one example). Taken from another perspective, it would be truly remarkable if a parameter like 𝛆 could have a range of values yet somehow always turn up with the same value whenever we measure it. To claim “fine-tuning” is to claim that some entity could adjust the value of parameters like 𝛆; it takes a remarkably imaginative line of thinking to argue that our base assumption about a parameter should be that it is a variable, when all our experience suggests it is a constant.
            This is not new territory, either. In the late 17th century, ingenious observations of Jupiter’s moon Io by Ole Rømer – and conversions to absolute distances by Christiaan Huygens – showed that the speed of light in a vacuum was both finite and quite fast – about 220,000 km/s, as compared to the modern value of 299,792 km/s. One could have wondered why the speed of light had that particular value…until around 1864, when James Clerk Maxwell calculated what the speed of light had to be if it were an electromagnetic wave. The only speed compatible with mutual electric and magnetic induction – and with the conservation of energy – was remarkably close to the observed results. Before Maxwell, one could have imagined light having many potential speeds and wondered about their consequences, but after Maxwell those flights of imagination were simply implausible. Physics explained why light had one particular speed – the speed toward which experimental measurements were rapidly converging.
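            Maxwell's result is easy to reproduce, if you are curious: the speed of an electromagnetic wave is fixed entirely by the measured electric and magnetic constants of the vacuum. Here is a quick check in Python, using the standard SI values of those constants:

```python
from math import sqrt, pi

# Vacuum permeability and permittivity (standard SI values).
mu_0 = 4 * pi * 1e-7          # T*m/A
epsilon_0 = 8.8541878128e-12  # F/m

# Maxwell's conclusion: an electromagnetic wave must travel at
# c = 1 / sqrt(mu_0 * epsilon_0); no other speed will do.
c = 1 / sqrt(mu_0 * epsilon_0)
print(f"c = {c:,.0f} m/s")  # ~299,792,458 m/s, i.e. ~299,792 km/s
```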
            Even if we were to grant that the constants could vary, it is rather difficult to determine how much variation “fine-tuning” advocates think is possible. A factor of two? An order of magnitude? Any number we can imagine? A statistical estimate of variation relies on measuring a value many times in a sample to determine how much variation is likely. To estimate the potential variation in the physical constants, we must measure each of those parameters in many different contexts and calculate a value and uncertainty. For the gravitational constant – which is notoriously variable in its measured value – the variation in modern measurements is on the order of 10^-4, or one part in ten thousand. For the mass of the electron, the uncertainty derived from measurement is on the order of 0.1 parts per billion (depending on which units one uses to express the mass). In that light, considering universes where the electron is 2.5 times as massive (as Goff does) is utterly hypothetical. Or, to put it another way, we have no reason to think that the physical constants themselves could vary outside a remarkably tiny range of values, and that range itself is likely a product of the uncertainty of our measurements. The careful reader might fairly object that this is simply a restatement of the remarkable constancy of the measured parameters in physics. That is precisely the point.
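            A bit of arithmetic makes the gulf vivid. The uncertainty figures below are just the rough orders of magnitude quoted above:

```python
# Measured relative uncertainties (rough orders of magnitude from above).
g_uncertainty = 1e-4     # gravitational constant: ~1 part in 10,000
m_e_uncertainty = 1e-10  # electron mass: ~0.1 parts per billion

# Goff entertains an electron 2.5 times as massive.
imagined_change = 2.5 - 1.0

# Compare the imagined variation to the wiggle room measurement leaves.
print(f"Imagined change / measured uncertainty = "
      f"{imagined_change / m_e_uncertainty:.1e}")
# ~1.5e10: the hypothetical variation is more than ten billion times
# larger than any variation our measurements permit.
```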
            Thus the denominator in Goff’s hypothetical probability calculation – the range of possible combinations of the physical constants – is actually quite tiny; from an empirical point of view, only a minuscule range of the values we can imagine correspond to observations of the actual Universe. If the constants are truly constants, the denominator is simply one – ours is the only possible version of the current laws of physics. But what of the numerator – the portion of possible universes compatible with intelligent life? Here again, proponents of fine-tuning are rather vague on their probability assessments for a rather simple reason: we have almost no grounds to evaluate what kinds of universes could support intelligent life in principle, because we cannot possibly imagine all the ways intelligent life could arise. When we think of life in other universes (or even on other planets), “suitable for intelligent life” is usually shorthand for “suitable for life that uses solar energy to convert carbon dioxide and water to oxygen and carbohydrate, later releasing energy in the oxidation of that carbohydrate; most likely using a particular set of nucleotides to encode information and translate that information into protein macromolecules; organizing independent subunits (cells) into hierarchies which can specialize to the degree that a complex internal model of the outside world is represented inside the resultant organisms.” In other words, we are quite good at enumerating the requirements for the intelligent life we know best (humans), but our particular case does little to delimit the range of possible ways to get intelligent life in principle. To do so, we would have to have “comprehensive imagination,” a complete understanding of all the possible ways intelligent life could arise in a variety of possible universes. Our track record of predicting where relatively familiar life might be found on Earth is rather poor (e.g. hydrothermal vent communities); there is no reason to expect our imagination to be any less myopic when conceiving of ways unfamiliar life could arise or produce intelligence.
            In sum, an estimate of the probability of getting a universe that can produce intelligent life (the estimate Philip Goff characterizes as “incredibly unlikely”) relies on estimating both the actual range of possible universes and the number of those universes compatible with intelligent life. Fanciful speculation aside, we have no empirical reason to think that the constants of the physical universe could be anything else but the ones we know. Further, only an excess of hubris could lead us to think that we are able to comprehensively imagine all the ways intelligent life could arise in a given universe. As a result, we can only conclude that “fine-tuning” is a speculative story, and any assessment of its probability is groundless. If a scholar like Goff wants to postulate a “cosmopsychic” hypothesis of the “Universe [as] a conscious mind that responds to value,” he is more than welcome to do so. Offering that hypothesis as a solution to a problem – the popular metaphor of “fine-tuning” physical constants – only works if we have good reasons to think there is a problem at all.

Friday, February 19, 2016

OSCaR report on cancer rates near Bullseye Glass

The OR state cancer registry (OSCaR) released a report today examining the rates of lung and bladder cancer in census tracts near the Bullseye Glass factory in southeast Portland (OR). This district of the city is the location of elevated exposures to cadmium and arsenic measured by the Department of Environmental Quality (DEQ), and understanding whether those exposures pose a threat to human health has been a pressing issue.

The OSCaR report is based on comparing observed rates of cancer in census tracts near the site of the metal emissions with observed rates of cancer in the county overall. Specifically, the study calculates Standardized Incidence Ratios (SIRs) for both lung and bladder cancer. Those ratios divide the observed number of cancer cases in sites of exposure by the number of cases expected based on the number of people living in those sites (again using the county as a whole to best estimate that baseline rate). SIRs greater than one indicate an increased rate of cancer in the study area, while an SIR less than one suggests cancer rates in a study area are actually below baseline rates.

Here are the SIRs for the census tracts near the Bullseye plant:

As you can see, the ratios are close to one for both cancers, whether including just the census tract that includes the plant, or adding the adjacent census tracts to the east. That is good news!

You may notice that the confidence intervals for the study are large. These are 95% confidence intervals; they essentially tell you that, for example, the actual SIR for bladder cancer in tract 1000 has a 95% chance of falling between 0.3 and 3.1 given the number of cases measured. That number of cases is what makes the confidence intervals large; in this example it is based on four actual cases reported among residents over the five-year study. You can see why the CIs would be wide; having just one more (or one fewer) case of bladder cancer in that tract would change the rate by 25%. Since the number of cancers reported is small, the confidence intervals are broad. I should point out that this is a good problem to have - the cancer rate is hard to estimate because there are not many cases! From a statistical point of view, if the confidence interval on an SIR includes the value 1.0, there is no statistical evidence that the rate of cancer is different in the study area. That is clearly the case in these data. Even better, the actual estimates of the SIR are all close to one; this is very reassuring.
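If you'd like to reproduce the arithmetic, here's a sketch of how an SIR and its exact 95% confidence interval are computed. The 4 observed bladder cancer cases are from the report; the expected count of about 3.5 is my own guess, back-solved from the published interval, since I'm not reproducing the report's table here:

```python
from scipy.stats import chi2

def sir_with_ci(observed, expected, alpha=0.05):
    """Standardized Incidence Ratio with an exact (Garwood) Poisson CI."""
    sir = observed / expected
    lower = chi2.ppf(alpha / 2, 2 * observed) / 2 if observed > 0 else 0.0
    upper = chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / 2
    return sir, lower / expected, upper / expected

# 4 observed cases (from the report); ~3.5 expected cases is my guess,
# back-solved from the published confidence interval.
sir, lo, hi = sir_with_ci(observed=4, expected=3.5)
print(f"SIR = {sir:.2f}, 95% CI ({lo:.1f}, {hi:.1f})")
# Roughly SIR 1.1 with a CI near (0.3, 2.9): wide, because the count is small.
```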

There are limitations to this analysis, of course. First off, given the low number of cases, it is just not sensitive enough to pick up very small changes in the cancer rate. The differences between the estimates above and exactly one are small compared to the confidence intervals, yes, but realistically the study does not have the statistical power to pick up a change in rate less than about two-fold (for bladder cancer) or 1.5-fold (for lung cancer) in the study area. There just aren't enough people living there (and not enough reported cases) to be more precise. In addition, the conclusions are limited to the two cancers studied, though those are the cancers known to have the strongest links to arsenic and cadmium exposure.
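A quick simulation illustrates the power limitation, again using my guessed expected count of about 3.5 bladder cancer cases (lung cancer, with more cases, does somewhat better):

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)

def ci_excludes_one(observed, expected, alpha=0.05):
    """True if the exact Poisson 95% CI for the SIR excludes 1.0."""
    lower = chi2.ppf(alpha / 2, 2 * observed) / 2 if observed > 0 else 0.0
    upper = chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / 2
    return (lower / expected) > 1 or (upper / expected) < 1

expected = 3.5  # my back-solved guess at the expected case count
for true_sir in (1.5, 2.0, 3.0):
    counts = rng.poisson(true_sir * expected, size=10_000)
    power = np.mean([ci_excludes_one(k, expected) for k in counts])
    print(f"true SIR {true_sir}: detected in ~{power:.0%} of simulated studies")
# With ~3.5 expected cases, even a true doubling of the rate is flagged
# well under half the time; only around a three-fold increase is
# detected most of the time.
```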

I'm personally very heartened by these results, for a few reasons. First off, because the data were collected for cases that pre-date the release of the DEQ emissions testing data, we don't have to try to adjust for reporting biases. Second, the exposure thresholds for cancer tend to be among the lowest in the health-effects literature; you would expect to see cancer effects at lower exposures than for acute diseases caused by heavy metals. I don't want to give you the impression that we're measuring 1 in 10,000 risks well in this study; the total population in the study area was 13,725 - so an increase of 1 in 10,000 would mean just over one additional case for a given cancer. But I'm encouraged that the predictions that come from examining the DEQ emissions levels hang together nicely with this result from cancer rates; any additional cancer risk looks to be small at most. Hopefully that conclusion holds up for other types of disease.

Finally, I want to congratulate OSCaR for the study; I imagine that running a cancer registry is rather low-profile and out of the public eye until you really need that registry to answer a question like "Should we be concerned about cancer from these arsenic and cadmium emissions?" Then it's really nice to know that the data have already been collected, and within a couple of weeks of the first media reports we have a direct, evidence-based answer to that question. Thanks to everyone at OSCaR!

Thursday, February 18, 2016

Interim Clinical Update for metal exposures in Portland

Hi everyone! I'm posting this PDF from the agencies working on heavy metal exposures in Portland (OR) because it's rather hard to find directly. There is lots of good information here, though I would point out that the agencies are being cautious in not trying to quantify exposures (though we have some helpful data there, as you can see in other posts on this site). The risks and recommendations listed are all evidence-based (that's important), and your physician has probably seen this document already, or soon will.

I do have to highlight one passage that addresses an issue that has come up: "Under no circumstances should chelation or provocation be used before testing." The authors thought this important enough to use a bold font! If you decide to get tested, please do objective testing that can be compared to reference values; doing a chelation challenge pumps up the test value and creates false positives, as well as exposing you to a chelating agent before you know if there is a problem. You can also read the American College of Medical Toxicology's position statement here, which explains the reasoning in more detail.

Cadmium and Arsenic update: soil tests from CCLC

One big unknown in the controversy about heavy metal pollution in southeast Portland (OR) has been the degree to which the pollutants identified by the Oregon Department of Environmental Quality as above its benchmark levels in air have accumulated on the ground. Cadmium (Cd) and arsenic (As) were measured well above emissions benchmarks at an air monitoring station near the Bullseye Glass production facility; while those emissions were not particularly high with respect to health risks for inhalation, we did not know if the pollutants had accumulated in soils. For heavy metals this can be a real concern, as atomic elements like Cd and As do not convert into other substances the way a molecular pollutant can. There has been a legitimate concern that even air concentrations that were not acute could have built up to dangerous levels over the functional lifetime of the Bullseye facility.

Now we have some illuminating data, in the form of soil testing results from the nearby Children's Creative Learning Center (CCLC), about 150 meters from the Bullseye factory. The report from testing is posted here, and I've included the data table below:

Helpfully included in the test results are background levels for the Portland area. The 6" and 12" designations refer to the depth at which soil was sampled at each of six sites around the CCLC campus. The table also includes DEQ soil screening values for residential areas. Not included are the DEQ risk-based concentrations (RBCs) for urban residential areas, which are 1.0 mg/kg (As) and 160 mg/kg (Cd).

So what do the results tell us? First off, that there's no evidence for an overall accumulation of arsenic at the site, since the averages at both depths fall below the background level for the City as a whole. It is true that sites 5 and 6, which are on the west side of the building, are a bit higher than the other samples, especially at the 12" depth; this could be an effect of proximity to Bullseye, or even just proximity to a nearby road. Sample #4, however, is immediately adjacent and shows lower As levels, especially at 12" depth. The values do fall above the DEQ's risk-based concentration for arsenic via soil ingestion, skin contact, and inhalation of contaminated soil, with the RBC set to match a 1 in 1 million risk of causing a cancer. The average value at 6" depth would correspond to roughly a 1 in 70,000 increase in the chance of developing a cancer, using assumptions like roughly year-round exposure. The maximum at 12" would correspond to a 1 in 20,000 increase in cancer risk with similar assumptions.
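Here's the linear scaling behind those risk numbers. The RBC is the soil concentration matching a one-in-a-million lifetime risk, and risk scales in proportion; the two soil concentrations below are placeholders I back-solved from the risks quoted above, since I'm not reproducing the full data table:

```python
# DEQ's risk-based concentration (RBC) for arsenic in urban residential
# soil corresponds to a 1-in-1,000,000 lifetime cancer risk; the model
# is linear, so risk scales directly with soil concentration.
RBC_ARSENIC = 1.0  # mg/kg (from the DEQ values quoted above)

def soil_cancer_risk(concentration, rbc):
    """Lifetime excess cancer risk under the linear DEQ model."""
    return (concentration / rbc) * 1e-6

# Placeholder concentrations, back-solved from the risks quoted above.
samples = {"6-inch average": 14.0, "12-inch maximum": 50.0}  # mg/kg
for label, conc in samples.items():
    risk = soil_cancer_risk(conc, RBC_ARSENIC)
    print(f"{label}: about 1 in {1 / risk:,.0f}")
# 6-inch average: about 1 in 71,000 (roughly 1 in 70,000)
# 12-inch maximum: about 1 in 20,000
```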

For cadmium, the sample values are more consistent from location to location. At depth, they fall very close to the background for Portland, though at 6" they are about twice that value, so there may have been some accumulation of Cd in the soil. The values are all very far below the risk-based concentration from DEQ for soil (by a factor of 50 or more), so there appear to be no grounds to worry about a health risk for cadmium from soil at CCLC.

Incidentally, you may have noticed that the DEQ screening values (the RBCs) are really different for the two elements; at first I thought that the value for cadmium was a typo! But the big difference seems to be that arsenic is volatile (i.e. it can evaporate into air from soil) while cadmium cannot, along with the fact that arsenic is more easily absorbed through the skin.

So what is the overall conclusion? I hesitate to generalize too much, but the amounts of As and Cd measured at CCLC all fall within about 2X the background for the Portland basin as a whole. For arsenic, only the samples at the 12" depth are well above that background. Conversely, for cadmium, the values at depth are near baseline, while the shallow samples are elevated. The difference could be due to different leaching rates for the two elements, or it could represent different modes or times of accumulation. So there may be some accumulation of these elements from nearby industrial operations like Bullseye. On the other hand, this is for a site very close to the Bullseye factory; given the relative values observed in the moss and lichen study from the US Forest Service, soil levels much farther away are likely to be at background levels for the City if soil concentrations drop at a similar rate to moss and lichen concentrations. As an example, airborne Cd levels drop by a factor of about 2 by the time one reaches Cleveland HS or Winterhaven Elementary in the moss and lichen study; if soil accumulation drops off at a similar rate, those schools would be expected to have soil levels of Cd and As near the background levels. For students and staff at CCLC, continued exposure to unremediated soil may be unwise, but health risks from soil will be small for arsenic and negligible for cadmium.

UPDATE: The Oregonian has released their own set of soil testing data here. The levels tend to drop off quickly with distance from Bullseye, with a few scattered high results among mostly below-detection-threshold tests. One very high test result near Cleveland HS should probably be followed up, though as an outlier it is likely to have come from a source other than settling from Bullseye. A high lead level next to Powell Blvd. is likely (IMHO) to be the result of years of auto traffic using leaded gasoline.

Saturday, February 13, 2016

Glass manufacturing update for chromium

Since I put up my recent post on cadmium (Cd) and arsenic (As), attention in SE Portland (OR) has shifted to the possibility of hexavalent chromium pollution in the area around Bullseye glass. To try to help everyone understand the data for Cr(VI), I'm adding this post, but a word of caution at the outset: the data for chromium are far less clear.

First off, everyone needs to understand that there are two common forms of chromium used in industry. Cr(III), or trivalent chromium, does not have documented health risks; it is the form of chromium found most often in the human body. Cr(VI), or hexavalent chromium, is more dangerous, because it is a powerful oxidant; it strips 3 electrons off other atoms or ions in converting to Cr(III). If you've heard of oxidative damage, this is an example. You could write these as Cr3+ and Cr6+, but I can't get Blogger to do superscripts...so they look funny.

Hexavalent chromium, therefore, is the form of the element that matters for health risk, and all the standards I can find are for Cr(VI), including the ones in the following table, which again combines DEQ air monitoring data with Oregon benchmarks, EPA cancer risks, and reference values for non-cancer health effects. I left the Cd and As rows in for comparison. Everything here applies to inhalation risk, with units of micrograms/cubic meter.

You may notice that the DEQ measurements are noticeably higher for chromium, while the EPA cancer risk threshold is lower. There is, however, a huge uncertainty here: we don't know how much of the chromium DEQ measured is Cr(VI) - hexavalent chromium. In the data sheet, there is no benchmark reported for "unspeciated" chromium, which means a mix of chromium in different oxidation states like Cr(III) or Cr(VI). It seems very likely that the method used to monitor chromium was something like mass spectrometry, which identifies Cr by its atomic weight but does not distinguish the oxidation state. So all we can be sure of from the data is that the amount of Cr(VI) is less than or equal to 0.0715 micrograms/cubic meter on average at the monitoring station.

The other uncertainty is whether the bulk of the chromium is coming from Bullseye at all. While the monitoring station is close by, we haven't got a relative concentration map for chromium from the moss and lichen sampling done by the US Forest Service, as we do for cadmium and arsenic. In the case of those metals, the USFS study pointed pretty clearly to Bullseye's location. It does seem that Bullseye was using Cr(III) and Cr(VI), since the company has suspended production with Cr(VI) but not Cr(III) at DEQ's request. However, the company claims that its factory was not operating on the peak days of chromium emission; there are indeed two days with much higher (roughly 20X) chromium levels in the DEQ data that are responsible for most of the average (66% of the total emissions measured come from those two days). While the correspondence between cadmium and arsenic points to glassmaking as an industry, and Cr(VI) salts like potassium chromate and dichromate are used to color glass, lots of industries use Cr(VI) salts - chrome plating, for example. We also don't have the benefit of knowing (from the moss and lichen monitoring) how quickly the chromium exposure lessens - if at all - as one moves away from the DEQ monitoring station. In sum, the source of the chromium in the DEQ data is unclear at this point.

Given those caveats, the chromium levels are still concerning. If the source of the chromium is industrial, it is quite possible that it is indeed mostly Cr(VI). In that case, the lung cancer risk from chromium exposure at the monitoring station is close to 1 in 1,000 for someone breathing that air 24/7 for a lifetime using the EPA model. That is still much lower than the baseline lung cancer risk, but if you eliminate the effect of smoking (which is linked to about 90% of lung cancer), this potential Cr(VI) risk might add as much as 14.5% to the risk of lung cancer from all causes except smoking - a worst-case scenario for someone living at the monitoring station. That's still only about a sixth of the contribution of radon exposure (you have had your house radon-checked, right?), but it's potentially more than for cadmium or arsenic. The non-cancer risks don't look too bad; they are actually far below the level where particulate emissions cause non-cancer disease, and the EPA has low confidence in the study that gives the much lower threshold for mist and aerosol exposure, for what look to my eyes like great reasons (see the "Chronic effects - noncancer" section in the IRIS summary).
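Here is the arithmetic behind those chromium estimates, as I ran it. The inhalation unit risk is the EPA IRIS value for Cr(VI) as I read it, and the baseline lung cancer figure is a rough average of the men's and women's lifetime risks from the American Cancer Society numbers in my cadmium and arsenic post; treat this as a back-of-the-envelope check rather than a formal risk assessment:

```python
# Worst case: assume ALL of the measured chromium is Cr(VI).
cr_average = 0.0715  # micrograms/cubic meter (DEQ average; an upper bound on Cr(VI))
unit_risk = 1.2e-2   # per microgram/cubic meter (EPA IRIS inhalation unit risk, as I read it)

added_risk = cr_average * unit_risk
print(f"Worst-case lifetime lung cancer risk: about 1 in {1 / added_risk:,.0f}")
# About 1 in 1,200, i.e. "close to 1 in 1,000"

# Compare to the baseline lung cancer risk with smoking removed.
baseline_lung = 1 / 14.5                # rough average of 1 in 13 (men) and 1 in 16 (women)
nonsmoking_lung = 0.10 * baseline_lung  # ~90% of lung cancer is linked to smoking
print(f"Added risk as a share of the non-smoking baseline: "
      f"{added_risk / nonsmoking_lung:.0%}")
# In the neighborhood of 12-15%, depending on the exact baseline you pick
```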

So the picture for chromium is much more vague. It would be great, for example, to know how much of the chromium DEQ is finding in the air is hexavalent. It is possible to distinguish the hexavalent chromium ion, but to do a Cr(VI)-specific test requires an extra separation step (silica gel chromatography) that was presumably not part of the DEQ protocol. It would also be very helpful to know if the USFS data includes relative chromium levels, even if those samples don't distinguish the oxidation state, to help pinpoint the source of the emissions. Those avenues should definitely be pursued; the existing DEQ emissions data give us a worst-case risk that is not minor. Further data would let us know just how close we actually are to that worst case.

Tuesday, February 9, 2016

Cadmium, Arsenic, and glass manufacturing in Portland: What do the test results mean?

Tonight at Cleveland High School (in Portland, OR) the Multnomah County government convened a public meeting to discuss the recent pinpointing of elevated cadmium (Cd) and arsenic (As) levels in Portland to the Bullseye Glass manufacturing facility at 3722 SE 21st Avenue. I was frankly hoping that the representatives of the Department of Environmental Quality (DEQ) and Oregon Health Authority (OHA) would put the levels of those metals DEQ measured in October into a health context. Unfortunately, they did not, and the meeting degenerated into an expression of public anger that didn't answer many questions.

Fortunately, we have well-established models of cancer risk from the Environmental Protection Agency (EPA) and reference exposures from the California EPA for the threshold of likely health risks in general. Those models and exposures are based on data from industrial use of Cd and As, as well as documented environmental exposures from published studies. In other words, we have an evidence-based context for the numbers DEQ measured in the air outside Bullseye's production facility. In hopes of helping people understand the big question voiced at the meeting tonight ("What do the emissions mean for my family?"),  I've compiled the results from air quality monitoring with those reference levels.

To make things easier to compare, I've put all the exposure data and relevant reference levels in a table with everything in the same units (micrograms per cubic meter).

Let's break these numbers down a bit. First off, the DEQ numbers come from the agency's monitoring station in the parking lot of Fred Meyer corporate headquarters, 120 meters from the Bullseye Glass factory. You can get those numbers here. Note: DEQ reported nanograms per cubic meter, so you have to divide those numbers by 1,000 to get micrograms per cubic meter.

The Oregon DEQ benchmarks are reported here. These numbers are set based on the EPA lung-cancer model, where the benchmark is the level at which exposure would increase cancer risk by 1 in one million for a person who breathed that air for their entire lifetime (chronic exposure).

The EPA cancer risk and CalEPA reference exposures come from this page (arsenic) and this page (cadmium). I used the 1 in 10,000 lung cancer risk estimate from EPA because it was closest to the exposures reported near Bullseye. It's a linear model, so it shouldn't be surprising that you can just multiply the DEQ benchmark by 100 to get that number as well. Note: The CalEPA reference exposures are given in milligrams per cubic meter, so you have to multiply those numbers by 1,000 to get to micrograms per cubic meter.

From a first look, you may notice that the average exposure DEQ measured falls near the 1 in 10,000 risk for lung cancer from the EPA model (it's actually about 1 in 6,300 for As, and 1 in 20,400 for Cd). The averages are also 2-3 times the CalEPA threshold below which no health effects at all are expected.
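If you want to check that arithmetic yourself, the model is linear: the benchmark is the concentration corresponding to a one-in-a-million lifetime risk, so risk scales with the ratio of the measured level to the benchmark. Here's the cadmium case, using Oregon's cadmium benchmark of 0.0006 micrograms/cubic meter (as I read the DEQ table) and the 29.4 nanograms/cubic meter monitoring average:

```python
# Linear model: the DEQ benchmark is the concentration that produces a
# 1-in-1,000,000 lifetime lung cancer risk, and risk scales linearly.
cd_benchmark = 0.0006  # micrograms/cubic meter (Oregon DEQ benchmark, as I read it)
cd_measured = 0.0294   # micrograms/cubic meter (29.4 ng/m^3 monitoring average)

risk = (cd_measured / cd_benchmark) * 1e-6
print(f"Cadmium lifetime lung cancer risk: about 1 in {1 / risk:,.0f}")
# About 1 in 20,400, matching the figure above; multiplying the benchmark
# by 100 gives the 1-in-10,000 row the same way.
```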

Hopefully you've noticed something else from the table above: the DEQ benchmarks are quite stringent. They are far below the CalEPA thresholds for any likely health effects at all from inhaling either cadmium or arsenic, and they correspond to a 1 in one million increase in lung cancer risk (based on the EPA models). For some perspective, men have a 1 in 13 chance of developing lung cancer in their lifetime (and a 1 in 2 chance of any cancer), while women have a 1 in 16 lifetime chance for lung cancer and a 1 in 3 lifetime chance for all cancers (check out the data from the American Cancer Society). Taken another way, if one million people lived at the DEQ benchmark, we would expect one lung cancer due to cadmium exposure, and about 70,000 lung cancers from other causes. Even going up to the 1 in 10,000 risk in the table above, you would expect around 100 cadmium-related lung cancers in a population of one million who all lived at the monitoring station, as compared to the 70,000 cases from other causes. Did I say stringent?

It's important to repeat that the data above come from right next to Bullseye. If you go even a short distance away, mixing in the atmosphere will lower the concentration of arsenic and cadmium. DEQ has even published a preliminary map of expected cadmium exposures here, based on relative concentrations of the elements in moss samples collected by the US Forest Service. Here's a screenshot:
These numbers are in nanograms per cubic meter again, so you have to divide by 1,000 to compare with the table. It may help to think of the boundary between the highest two concentrations (around 30 ng/cubic meter) as the average from monitoring (29.4 ng/cubic meter). By the time you get to the Fred Meyer day care, the estimated levels have dropped to about 0.01 micrograms/cubic meter - or right at the CalEPA threshold below which any health effects are unlikely. Farther out, at Cleveland HS or Winterhaven Elementary, levels are more like 0.005 micrograms/cubic meter - half the CalEPA health threshold, and in the range of a 1 in 100,000 probability of cancer for an individual due to the cadmium exposure. You may also notice a smaller point source in north Portland, which looks like it's at Ostrom Glass and Metal Works.

So what's the conclusion?

Even for students at the Fred Meyer day care facility, inhaling the air is unlikely to cause any ill effects at all based on the CalEPA threshold and EPA cancer models. The cancer risk estimate is already down to 1 in 60,000. In other words, if there were 60,000 students in that day care, we would expect one of them to get a lung cancer from Cd exposure. There are in fact approximately 100 children who attend the facility, and they will not breathe this air 24/7 for 70 years (the assumptions of the EPA model). As for other health effects, the exposure looks like it would be right at the CalEPA threshold outside (and probably lower inside).

Further away (for example, in the residential neighborhoods east of 26th Ave. and south of Gladstone St.), exposures should be lower. While there are reports of a detectable level of Cd and As in the soil at the day care facility, soil and air tests at Cleveland HS and Winterhaven Elementary ordered by Portland Public Schools did not detect any Cd or As. (I hope to link to those results if and when they are released.)

In short, the levels DEQ detected are at or below where one would expect much of an elevation in lung cancer risk or other health consequences. Personally, I don't see grounds to get particularly worried about human health given the numbers that have been reported.

Let me be clear: It would be great to get cadmium and arsenic exposures down to the DEQ benchmarks, especially since they are elements and do not "break down" in the environment the way molecular toxins can. Given the possibility of soil accumulation, DEQ's efforts make sense. I hope that the employees at Bullseye are being evaluated for As and Cd exposure, since their exposures are potentially much higher inside the factory. And re-evaluating the pollution control measures at Bullseye is a no-brainer, since we now know that it is a significant point source for As and Cd. There is lots of work to be done.

However, the community anger expressed at tonight's meeting is, in my opinion, totally unjustified. It was clear from DEQ and Forest Service presentations that sleuthing out the source of Cd and As was an involved process, from DEQ identifying the two elements as more concentrated in monitoring than predicted from modeling known sources, to the Forest Service using a moss and lichen model to establish relative exposures throughout the City, to the DEQ doing targeted monitoring near Bullseye to confirm the source and document the exposure levels. Put simply, government agencies discovered the problem and tracked it to its source before anybody noticed any consequences. To me, that looks like effective detective work on the part of DEQ and USFS. The first commenter, reading a prepared statement from Neighbors for Clean Air, characterized the process as "an abysmal failure of government." While I understand that many of my neighbors are worried and angry, I think we can do better than that.

As the conversation has broadened to include emissions of hexavalent chromium, I've written another post dealing with that metal.

Also, soil testing results from the nearby daycare have been released, and I've worked through those in another post.

The OR state cancer registry (OSCaR) has released the results of an analysis of lung and bladder cancer in the area around Bullseye glass; here are those results with my analysis.

Tuesday, January 27, 2015

Does Eric Metaxas really have comprehensive imagination?

Recently an opinion piece by Eric Metaxas, "Science Increasingly Makes the Case for God," caused a bit of a storm after the Wall Street Journal ran it on Christmas Day. The argument it makes - that the physical world is fine-tuned to match the needs of living things - is nothing new; similar claims from authors such as Michael Behe, William Dembski, and Michael Denton helped found the Intelligent Design movement in the early 1990s. The fact that Metaxas can still get publicity from his claims suggests a serious examination is in order.

The foundation of Metaxas's argument is the striking correlation between the physical conditions of the Earth and the needs of organisms that live on that Earth. As he puts it:
Today there are more than 200 known parameters necessary for a planet to support life—every single one of which must be perfectly met, or the whole thing falls apart.
This correlation is something any knowledgeable observer should accept; the match between organisms and their environments is quite striking. Given that correlation, we naturally want to understand the cause, and that is where Metaxas's views differ so radically from those of most scientists.

There are really two possibilities. The first explanation argues that life is picky, fragile, and static, and that the universe must have been adjusted to the particular needs of living things. This is where Metaxas's description, that "every single [parameter] must be perfectly met" comes from. If the universe needed adjusting (often called "fine-tuning" when describing the constants of physics), then there must have been an intelligent being doing the adjusting - hence "the Case for God."

The other possibility is that the conditions of the universe are relatively static and not "adjusted" to benefit life, but that living things are capable of evolving so that they can tolerate those conditions (at least in a few places like Earth). Either explanation could produce the striking correlation; how do we choose between them?

Each explanation makes strong demands on our understanding. One particularly strong demand of the fine-tuning explanation, often unstated or ignored, is a complete understanding of the possible ways living things could exist. Arguing that the universe has to be adjusted to the requirements of Earth life includes the implicit assumption that life on Earth spans the complete sweep of potential living things. If you want to argue that Metaxas's "200 parameters" have to be the way they are, you have to argue that the only way living things could possibly exist is pretty much just the way they do here on Earth.

As an example, if you insist that life absolutely requires liquid water, having planets with liquid water on their surface seems like a reasonable demand of the universe. Can you be sure of that assumption when the only examples of living things you know of all come from a planet with oceans? It's rather like insisting that all mammals deliver well-developed live young, completely ignorant of the marsupial branch of the mammal family tree. Time has not been kind to this assumption of "comprehensive imagination." That living things could survive in deep oceanic hydrothermal vents (way too deep to get energy from the sun), at the near-boiling temperatures of hot springs, or in extremely acidic conditions was hardly conceived (not even taken seriously enough to be considered impossible) until the organisms living in those environments were discovered. Claiming to imagine every possible way of being alive seems both remarkably arrogant and extremely unlikely. To borrow from Daniel Kahneman's remarkable book, this explanation suffers from the "WYSIATI" fallacy: What You See Is All There Is.

But what about the other explanation - that life has evolved to match the conditions in which it finds itself? That lineages of organisms can change over time was once considered an extraordinary idea, and one might rightfully demand some pretty strong evidence of its reality. After over 150 years with the idea of natural selection, the evolutionary biologist can point to a range of different types of evidence, from change in fossil lineages, to the hierarchical pattern of diversity of living things, to the underlying biochemical similarity of all living things. One can observe the evolutionary process in the lab with organisms whose generation times are conveniently short, as well as in decades-long studies of wild populations. We even know that life on Earth has adjusted to huge shifts in the composition of its atmosphere caused by living things themselves! In short, that lineages of living things on Earth can change over time is a well-documented fact. That other forms of life could evolve does not seem far-fetched, since there is no apparent reason the conditions for evolution (reproduction, a genetic system, and differential survival) could not be fulfilled in radically different forms of life.

So in understanding the strong correlation between environment and living things, we can either claim that life is inherently dependent on the conditions here on Earth (again, a claim of comprehensive imagination for which we cannot have any real basis) or claim that life can adapt to at least some of the environments available (for which we have abundant evidence, at least for Earth-life). If we are really honest about the limitations of our examples of life - and the limits of our imaginations - we should see the absurdity of the assumptions necessary to claim proof of supernatural fine-tuning. The far more plausible explanation is the evolutionary one, and it is the only one with sound empirical evidence.

(note: not long after I wrote this post, the AAAS hosted a session on what a "shadow biosphere" (i.e. not based on DNA/protein) might be like. There's an account of the session here, if you're curious.)