Sunday, June 9, 2019

The selective advantage of consciousness in humans? Understanding each other.

Sam Harris recently interviewed his wife, Annaka, on his "Making Sense" podcast to discuss her new book on consciousness. One interesting exchange concerned the idea of "panpsychism," the view that consciousness is somehow inherent to matter itself. Both parties expressed a certain sympathy for the panpsychic view in light of a perceived inability to show a selective advantage for consciousness that would cause natural selection to produce and develop it. I'd like to argue here that the Harrises have missed a plausible and profound Darwinian advantage to consciousness, as well as to the intuitions of free will, self, and agency: namely, the ability to predict what other human beings will do.

Humans (Homo sapiens and our hominid ancestors) have been part of relatively large and sophisticated social groups for millions of years. These groups have been both complex and remarkably effective; the culture of tool use alone, passed on without the aid of genetics from generation to generation for at least 2.6 million years, shows how important social groups have been for humans. Of course, as soon as one is part of a social group, there is a decided advantage in predicting how other members of that group will behave. It's easy to imagine how predicting the reaction of another human to one's aggression, consoling, or flirting would boost an individual up the social hierarchy and contribute to the number and fitness of that individual's offspring, thus enhancing that person's inclusive fitness. So traits that enhance the ability to predict the behavior of other humans would almost certainly confer a selective advantage.

The way we, as conscious people, regard other humans is shot through with the intuitions discussed by the Harrises - we see others as individual agents that make choices based on internal drives and desires. While those intuitions may not match up with our modern understanding of physics and neuroscience, they are nonetheless really useful models of other humans and their behavior. We may be far from perfect at predicting the reactions of others, but modeling them as agents still yields sophisticated and powerful predictions of what they will do (perhaps this ability reaches its zenith in the modern kindergarten teacher near the end of the school year). Those intuitions are also quite efficient; modern neuroscience has shown us just how much complexity underlies our simplistic ideas of how others behave, and therefore just how much computational effort we avoid by using the simpler model of others as decision-making agents. Further, the conscious post-hoc narratives we construct about others generally improve our predictions of what they will do (hence the difference between a beginning-of-year kindergarten teacher and the same teacher at the end of the year). So while our intuitions about others may not be strictly correct, they are still a vast improvement over a reaction of mere fear or disgust, and we should not be surprised at the widespread adoption of those intuitions across humanity.

Given the practical utility of seeing others as agents with free will, it is not that surprising that we might turn the model upon ourselves. Understanding our own behavior as that of an agent making choices based on drives or desires can help us better interact with our fellow humans. Telling ourselves stories about why we make the choices we make - and what goals we believe motivate us - helps us both control our own reactions to others and plan our social strategies in much more nuanced ways than simple emotion can. Having the patience to wait for an opportune moment to challenge a rival or seek out an ally relies on the kind of ongoing narrative that consciousness provides. Even if consciousness is a post-hoc story we tell ourselves (as tantalizing results from the past few decades suggest), the potential to refine our future actions to better serve our interests is immense.

In case the reader is not yet convinced of the practical value of a conscious perception of other humans, a somewhat recent review by Michael Graziano of the selective advantages of various steps on the way to human consciousness (including the social advantages) may be helpful.

One might object along the lines of "zombie" thought experiments that all this modeling of agency and narrative could be unconscious and still effective; why would it be necessary for the "lights" of awareness to be on to make good predictions about what we and others would do in social situations? We must concede that we really don't know what that possibility looks like; perhaps some future unconscious social genius AI will show us. Or perhaps a mental model sophisticated enough to effectively model our social interactions through a narrative of independent agents just equates to consciousness. Or maybe once that mental model is constructed, generating awareness is a low-cost adjustment that permits even more sophisticated predictions based on conscious reflection in memory.  Judging from our position as conscious beings, it's just hard to evaluate the other possibilities.

It is also important to point out that our intuitions about consciousness in other species generally correlate with social sophistication (the blog does have "empiricist" in the name). Whales and dolphins; dogs; other primates; birds like crows or parrots; even prairie dogs - all these social species are ones we intuitively suspect may have the "lights" of consciousness on, given their apparent ability to perceive and react to the emotional states of their peers and even us. Even the counterexample of social insects is instructive; we don't generally ascribe consciousness to, say, ants or honeybees, because the individuals in those societies behave in quite programmed and stereotypical ways toward each other (as well as being bound by inclusive fitness). Further, the behavior of hives or colonies of social insects is not particularly sophisticated with regard to other hives or colonies; the typical interaction between ant colonies, for example, is ruthless (perhaps mindless) all-out conflict.

So far from being selectively neutral, the intuitive models contained in our conscious awareness are profoundly useful in navigating the complexity of even hunter-gatherer human societies. Indeed, they have likely been useful for millions of years, and perhaps across many different groups of social animals. While we do not know whether our particular self-aware version of these models is the only way to gain that predictive utility, it is not hard to see the adaptive value of a conscious representation of the world for a social animal. There is no need to conjure a panpsychic conscious property of matter itself, just as there was ultimately no need to invoke a "vital" force animating matter into living things. We have sufficient reason to predict that consciousness would be selected for as animals began living together in groups that were large enough for its mental models to matter.

Monday, March 11, 2019

Is the performance of trans athletes in the NCAA a good way to determine whether trans athletes have an advantage over cis women?

Recently Brynn Tannehill addressed (in a short Twitter thread) the question of whether trans athletes competing in women's divisions in sporting events have an advantage over cis women. Her key argument was that the performance of trans athletes in the NCAA was decisive proof that they did not:

"The NCAA has allowed transgender people to compete without surgery since 2011, and there has not been a single dominant transgender athlete anywhere in college sports."

"These constitute large scale, longitudinal tests of the system with millions of athletes as a sample, and the IOC and NCAA rules for transgender athletes are clearly sufficient to preserve the integrity of sports at this time."

Tannehill suggests that if trans athletes had such an advantage, we would see them dominating college sports. However, the fact that there are relatively few trans athletes (a point she acknowledges later in the thread) means that NCAA women's sport results are strongly biased against producing trans champions simply because of the dramatic difference in sample size between trans and cis women athletes.

To illustrate why this matters, let's pick an objectively judged event as an example: the long jump in track and field. There were 898 NCAA women's outdoor track and field programs in 2017 (source) so let's estimate that each program fields 3 long-jumpers, for a total of 2,694 athletes. How many of those athletes are trans? The NCAA does not appear to publish this statistic, but estimates of percentages of the US population who are trans range from 0.3-0.6% (source). For the sake of the example, let's round up and assume that a full 1% of our hypothetical women long jumpers are trans (meaning the remaining 99% are cis).

If we want to use NCAA women athletes as a sample, it's clear we have a severe sampling bias when it comes to the trans/cis distinction. That can be a problem in a couple of ways. First, it means that in estimating, say, the average personal-best long jump for each group (the individual PR), we will have a much better estimate of the cis average and a much poorer estimate of the trans average. For our long jump example, we will have a sample of 2,667 cis women and 27 trans women. Our estimate of the trans average will have very large error bars, and it would take a large difference between the groups to show up above the random variation (i.e., it would take a big advantage for trans athletes for the difference to be statistically significant - to know that it wasn't just noise).
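
To put rough numbers on those error bars, here's a minimal sketch (assuming, purely for illustration, the 2.2-foot standard deviation quoted later in this post for the high school long jump) of how the precision of each group's estimated average depends on sample size:

```python
import math

sigma = 2.2              # assumed SD of long jump PRs in feet (secondary source quoted below)
n_cis, n_trans = 2667, 27

# The standard error of an estimated mean shrinks with the square root of the sample size,
# so the trans-group average is estimated roughly ten times less precisely.
se_cis = sigma / math.sqrt(n_cis)
se_trans = sigma / math.sqrt(n_trans)

print(f"standard error of cis average:   {se_cis:.3f} ft")    # ~0.04 ft
print(f"standard error of trans average: {se_trans:.3f} ft")  # ~0.42 ft
```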

However, Tannehill isn't suggesting we calculate average performance, but rather that we look at the champions in individual sports; in other words, she argues that if trans athletes have an advantage, they should be winning a noticeable number of national championships. In this case, we aren't just uncertain of the true average for trans athletes; the sampling bias also makes it much less likely that a trans athlete will end up on top of the podium.

To win a championship, one has to literally be the best. In other words, for our long jump example, the champion will basically have the maximum PR for their part of the sample. For trans athletes to end up winning championships, the maximum PR for the trans women has to be greater than the maximum PR for the cis women. But the maximum value for any group (in, say, a normal distribution) depends quite directly on the sample size. All other things being equal (like the mean and standard deviation), it's tremendously unlikely for the maximum long jump PR from a sample of 27 trans women to equal or surpass the same maximum from a sample of 2,667 cis women.

An easy way to visualize this is to imagine sampling from the (large) pool of cis women. What if we randomly selected 27 cis women long jumpers from the total pool of cis women long jumpers? How often would one of those 27 jumpers beat the rest of the pool of 2,667 cis women? Only when the longest jumper happened to be selected by chance - about 1% of the time. Suppose we extend this to a situation where the trans and cis women have exactly the same average jump and the same "spread" in the distribution of their PRs (in other words, the same standard deviation). The 27 trans women are just as unlikely to win a championship as the subsample of 27 cis women.
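
Here's a minimal Monte Carlo sketch of that thought experiment (the normal distribution and its parameters are purely illustrative; only the sample sizes matter for this particular question):

```python
import numpy as np

rng = np.random.default_rng(0)
n_pool, n_sub = 2667, 27      # the cis pool and the randomly chosen subsample
trials = 20_000

wins = 0
for _ in range(trials):
    # Every jumper in the pool gets a PR drawn from the same distribution (values are illustrative).
    prs = rng.normal(loc=16.5, scale=2.2, size=n_pool)
    # Pick 27 jumpers at random; do they happen to include the single best jumper?
    subsample = rng.choice(n_pool, size=n_sub, replace=False)
    if prs.argmax() in subsample:
        wins += 1

print(f"the random subsample contains the champion in {wins / trials:.1%} of trials")  # ~1%
```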

This bias in extreme results (the longest jump is by definition an extreme result) toward larger pools of athletes is just about sampling. A larger sample will always produce more extreme values (both maximum and minimum). If you want to dig into the statistics more, you could check out this quick summary. An interesting result from that analysis is that the difference between the average maximum from a sample of 27 and a sample in the thousands is about half a standard deviation. In other words, as a rough approximation, our sample of 27 trans women athletes would have to have an average long jump about half a standard deviation greater than their cis women colleagues to have an equal probability of winning the event (i.e., half the time the champion would be trans, half the time cis). Half a standard deviation is huge. In high school long jump, the average for women is 16.5 feet with a standard deviation of 2.2 feet (from an admittedly secondary source - if anybody has better data I'd love to see it). So the trans women athletes would have to jump roughly a foot farther on average just to have even odds of producing the national champion, much less "dominate" the competition.
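
If you'd like to check this kind of extreme-value reasoning by brute force, here's a similarly hedged sketch - again assuming, purely for illustration, normally distributed PRs with the mean and standard deviation quoted above - that lets you vary the assumed average advantage and see how often the simulated champion comes from the trans group:

```python
import numpy as np

rng = np.random.default_rng(1)
n_cis, n_trans = 2667, 27
mean, sd = 16.5, 2.2        # feet; illustrative values from the secondary source cited above
trials = 5_000

for shift_sd in (0.0, 0.25, 0.5, 1.0):   # hypothetical average trans advantage, in standard deviations
    trans_wins = 0
    for _ in range(trials):
        cis_best = rng.normal(mean, sd, n_cis).max()
        trans_best = rng.normal(mean + shift_sd * sd, sd, n_trans).max()
        trans_wins += trans_best > cis_best
    print(f"advantage = {shift_sd:.2f} SD -> trans champion in {trans_wins / trials:.1%} of seasons")
```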

Another example of this phenomenon is the documented increase in the performance of national champions as a nation's population increases. Larger nations have more athletes - essentially a larger sample from the distribution of athletic performance - and tend to produce larger values for maximum performance. To extend the analogy to our example, one could think of the group of trans women competitors as a very small "country" that has difficulty beating the top athletes from a big "country" (the cis women) purely because of its small population.

In conclusion, evaluating the "dominance" of trans women athletes in NCAA competition is a really terrible way of assessing whether trans women athletes have an inherent advantage, due solely to the small number of trans women athletes. It would take a very large inherent advantage to show up as podium performances for trans women athletes. A better approach would be comparing average performance between cis and trans women athletes, though one might have difficulty detecting smaller inherent advantages above random statistical variation. Pooling all NCAA women athletes together might also obscure differences between disciplines that had different inherent advantages for either cis or trans women.

Finally, let me make one thing clear: I'm not drawing any conclusions about whether inherent advantages exist for trans women athletes. Quite the opposite: I'm pointing out how little we can know from the current history of trans women in NCAA competition, given the small percentage of female athletes who are trans women. Obviously this is a question that matters to a great number of people, and we should be really careful not to draw unjustified conclusions from the data we have.

Tuesday, February 27, 2018

The conjecture of "fine-tuning"...and "cosmopsychism"?

            A persistent claim in what one might call the philosophy of cosmology is the supposed “fine-tuning” of the constants of physics to conditions we consider suitable for sustaining living things. Consider, as a representative example, the introduction to a recent essay by Philip Goff in Aeon:

In the past 40 or so years, a strange fact about our Universe gradually made itself known to scientists: the laws of physics, and the initial conditions of our Universe, are fine-tuned for the possibility of life. It turns out that, for life to be possible, the numbers in basic physics – for example, the strength of gravity, or the mass of the electron – must have values falling in a certain range. And that range is an incredibly narrow slice of all the possible values those numbers can have. It is therefore incredibly unlikely that a universe like ours would have the kind of numbers compatible with the existence of life. But, against all the odds, our Universe does.

            Goff goes on to interpret this “fact” of fine-tuning as support for “…the idea that the Universe is a conscious mind that responds to value.” In his view, the Universe has a clear telos – the production of intelligent life. Given how central “fine-tuning” is to Goff’s claim, one might be forgiven for more closely examining the basis for his probability argument – the likelihood of a given universe having physical constants compatible with intelligent life.
            Probability, in its simplest form, is a calculation of the likelihood of a particular outcome given the range of possible outcomes. If, for simplicity, we assume that all potential combinations of the physical constants are equally likely, then the probability of getting a universe that can support intelligent life is a simple ratio: the number of possible universes that we judge could potentially support such life, divided by the number of possible universes. To get to Goff’s conclusion that this outcome is “incredibly unlikely,” we have to know both how many possible universes there are, and how many of them could support intelligent life. In terms of the constants in the laws of physics (those parameters that must be measured empirically, rather than calculated from theory), we need to know what range of variation is possible for each constant, and how much of that variation is compatible with intelligent life. This is where we get Goff’s basic claim of “fine-tuning” - “that range [of values of physical constants] is an incredibly narrow slice of all the possible values those numbers can have.”
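            Written as a formula (under the equal-likelihood simplification above), the quantity Goff needs to estimate is just the ratio
```latex
P(\text{life-compatible universe}) \;=\;
  \frac{\#\{\text{possible universes compatible with intelligent life}\}}
       {\#\{\text{possible universes}\}}
```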
            A crucial assumption in this view is that physical constants could potentially vary at all. Goff argues, for example, that we are fortunate that the parameter 𝛆 - representing the efficiency of the fusion of hydrogen to helium - has the value 0.007, since a universe where 𝛆 was slightly larger or smaller would contain either virtually no hydrogen or nothing but hydrogen. However, the fact that we can simply substitute other values of 𝛆 in our equations hardly demonstrates that other values are actually possible. Further, nothing in our actual experience suggests that 𝛆 can vary; in fact, it seems to be the same everywhere in the universe (the fact that we can observe stars across vast separations in distance and time being but one example). Taken from another perspective, it would be truly remarkable if a parameter like 𝛆 could have a range of values yet somehow always turn up with the same value whenever we measure it. To claim "fine-tuning" is to claim that some entity could adjust the value of parameters like 𝛆; it takes a remarkably imaginative line of thinking to argue that our base assumption about a parameter should be that it is a variable, when all our experience suggests it is a constant.
            This is not new territory, either. In the late 17th century, ingenious observations of Jupiter's moon Io by Ole Rømer – and conversions to absolute distances by Christiaan Huygens – showed that the speed of light in a vacuum was both finite and quite fast – about 220,000 km/s, as compared to the modern value of 299,792 km/s. One could have wondered why the speed of light had that particular value…until around 1864, when James Clerk Maxwell calculated what the speed of light had to be if it were an electromagnetic wave. The only speed compatible with mutual electric and magnetic induction – and with the conservation of energy – was remarkably close to the observed results. Before Maxwell, one could have imagined light having many potential speeds and wondered about their consequences, but after Maxwell those flights of imagination were simply implausible. Physics explained why light had one particular speed – the speed toward which experimental measurements were rapidly converging.
            Even if we were to grant that the constants could vary, it is rather difficult to determine how much variation "fine-tuning" advocates think is possible. A factor of two? An order of magnitude? Any number we can imagine? A statistical estimate of variation relies on measuring a value many times in a sample to determine how much variation is likely. To estimate the potential variation in the physical constants, we must measure each of those parameters in many different contexts and calculate a value and uncertainty. For the gravitational constant – which is notoriously variable in its measured value – the variation in modern measurements is on the order of 10⁻⁴, or one part in ten thousand. For the mass of the electron, the uncertainty derived from measurement is on the order of 0.1 parts per billion (depending on which units one uses to express the mass). In that light, considering universes where the electron is 2.5 times as massive (as Goff does) is utterly hypothetical. Or, to put it another way, we have no reason to think that the physical constants themselves could vary outside a remarkably tiny range of values, and that range itself is likely a product of the uncertainty of our measurements. The careful reader might fairly object that this is simply a restatement of the remarkable constancy of the measured parameters in physics. That is precisely the point.
            Thus the denominator in Goff’s hypothetical probability calculation – the range of possible combinations of the physical constants – is actually quite tiny; from an empirical point of view, only a minuscule range of the values we can imagine corresponds to observations of the actual Universe. If the constants are truly constants, the denominator is simply one – ours is the only possible version of the current laws of physics. But what of the numerator – the portion of possible universes compatible with intelligent life? Here again, proponents of fine-tuning are rather vague on their probability assessments for a rather simple reason: we have almost no grounds to evaluate what kinds of universes could support intelligent life in principle, because we cannot possibly imagine all the ways intelligent life could arise. When we think of life in other universes (or even on other planets), "suitable for intelligent life" is usually shorthand for "suitable for life that uses solar energy to convert carbon dioxide and water to oxygen and carbohydrate, later releasing energy in the oxidation of that carbohydrate; most likely using a particular set of nucleotides to encode information and translate that information into protein macromolecules; organizing independent subunits (cells) into hierarchies which can specialize to the degree that a complex internal model of the outside world is represented inside the resultant organisms." In other words, we are quite good at enumerating the requirements for the intelligent life we know best (humans), but our particular case does little to delimit the range of possible ways to get intelligent life in principle. To do so, we would have to have "comprehensive imagination," a complete understanding of all the possible ways intelligent life could arise in a variety of possible universes. Our track record of predicting where relatively familiar life might be found on Earth is rather poor (e.g. hydrothermal vent communities); there is no reason to expect our imagination to be any less myopic when conceiving of ways unfamiliar life could arise or produce intelligence.
            In sum, an estimate of the probability of getting a universe that can produce intelligent life (the estimate Philip Goff characterizes as "incredibly unlikely") relies on estimating both the actual range of possible universes and the number of those universes compatible with intelligent life. Fanciful speculation aside, we have no empirical reason to think that the constants of the physical universe could be anything other than the ones we know. Further, only an excess of hubris could lead us to think that we are able to comprehensively imagine all the ways intelligent life could arise in a given universe. As a result, we can only conclude that "fine-tuning" is a speculative story, and any assessment of its probability is groundless. If a scholar like Goff wants to postulate a "cosmopsychic" hypothesis of the "Universe [as] a conscious mind that responds to value," he is more than welcome to do so. Offering that hypothesis as a solution to a problem – the popular metaphor of "fine-tuning" physical constants – only works if we have good reasons to think there is a problem at all.

Friday, February 19, 2016

OSCaR report on cancer rates near Bullseye Glass


The Oregon State Cancer Registry (OSCaR) released a report today examining the rates of lung and bladder cancer in census tracts near the Bullseye Glass factory in southeast Portland (OR). This district of the city is where the Department of Environmental Quality (DEQ) has measured elevated exposures to cadmium and arsenic, and understanding whether those exposures pose a threat to human health has been a pressing issue.

The OSCaR report is based on comparing observed rates of cancer in census tracts near the site of the metal emissions with observed rates of cancer in the county overall. Specifically, the study calculates Standardized Incidence Ratios (SIRs) for both lung and bladder cancer. Those ratios divide the observed number of cancer cases in sites of exposure by the number of cases expected based on the number of people living in those sites (again using the county as a whole to best estimate that baseline rate). SIRs greater than one indicate an increased rate of cancer in the study area, while an SIR less than one suggests cancer rates in a study area are actually below baseline rates.
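
As a minimal sketch of that calculation (the counts here are made up for illustration, not taken from the report):

```python
# Hypothetical example - not the actual counts from the OSCaR report.
observed_cases = 4        # cancer cases reported in the study-area census tracts
expected_cases = 3.6      # cases expected from county-wide rates and the tracts' population

sir = observed_cases / expected_cases
print(f"SIR = {sir:.2f}")  # >1 means more cases than expected, <1 means fewer
```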

Here are the SIRs for the census tracts near the Bullseye plant:

As you can see, the ratios are close to one for both cancers, whether including just the census tract that includes the plant, or adding the adjacent census tracts to the east. That is good news!

You may notice that the confidence intervals for the study are large. These are 95% confidence intervals; they essentially tell you that, for example, the actual SIR for bladder cancer in tract 1000 has a 95% chance of falling between 0.3 and 3.1 given the number of cases measured. That number of cases is what makes the confidence intervals large; in this example it is based on four actual cases reported among residents over the five-year study. You can see why the CIs would be wide; having just one more (or one fewer) case of bladder cancer in that tract would change the rate by 25%. Since the number of cancers reported is small, the confidence intervals are broad. I should point out that this is a good problem to have - the cancer rate is hard to estimate because there are not many cases! From a statistical point of view, if the confidence interval on an SIR includes the value 1.0, there is no statistical evidence that the rate of cancer is different in the study area. That is clearly the case in these data. Even better, the actual estimates of the SIR are all close to one; this is very reassuring.
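
To see why a handful of cases produces such a wide interval, here's a sketch of the standard exact (Poisson-based) 95% confidence interval for an SIR; the four observed cases match the tract 1000 example above, but the expected count of 3.6 is a made-up illustration:

```python
from scipy.stats import chi2

def sir_exact_ci(observed, expected, alpha=0.05):
    """Exact (Garwood) Poisson confidence interval for a standardized incidence ratio."""
    lower = chi2.ppf(alpha / 2, 2 * observed) / 2 if observed > 0 else 0.0
    upper = chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / 2
    return lower / expected, upper / expected

low, high = sir_exact_ci(4, 3.6)
print(f"SIR = {4 / 3.6:.2f}, 95% CI = ({low:.2f}, {high:.2f})")  # the interval easily spans 1.0
```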

There are limitations to this analysis, of course. First off, given the low number of cases, it is just not sensitive enough to pick up very small changes in the cancer rate. The differences between the estimates above and exactly one are small compared to the confidence intervals, yes, but realistically the study does not have the statistical power to pick up a change in rate less than about two-fold (for bladder cancer) or 1.5-fold (for lung cancer) in the study area. There just aren't enough people living there (and not enough reported cases) to be more precise. In addition, the conclusions are limited to the two cancers studied, though those are the cancers known to have the strongest links to arsenic and cadmium exposure.

I'm personally very heartened by these results, for a few reasons. First off, because the data were collected for cases that pre-date the release of the DEQ emissions testing data, we don't have to try to adjust for reporting biases. Second, the exposure thresholds for cancer tend to be among the lowest exposure thresholds in the health-effects literature; you would expect to see cancer effects at lower exposures than for acute diseases caused by heavy metals. I don't want to give you the impression that we're measuring 1-in-10,000 risks well in this study; the total population in the study area was 13,725, so an increased risk of 1 in 10,000 would produce just over one additional case of a given cancer. But I'm encouraged that the predictions that come from examining the DEQ emissions levels hang together nicely with this result from cancer rates; any additional cancer risk to human health looks to be small at most, and hopefully that conclusion holds up for other types of disease.

Finally, I want to congratulate OSCaR for the study; I imagine that running a cancer registry is rather low-profile and out of the public eye until you really need that registry to answer a question like "Should we be concerned about cancer from these arsenic and cadmium emissions?" Then it's really nice to know that the data have already been collected, and within a couple of weeks of the first media reports we have a direct, evidence-based answer to that question. Thanks to everyone at OSCaR!




Thursday, February 18, 2016

Interim Clinical Update for metal exposures in Portland

Hi everyone! I'm posting this PDF from the agencies working on heavy metal exposures in Portland (OR) because it's rather hard to find directly. There is lots of good information here, though I would point out that the agencies are being cautious in not trying to quantify exposures (we do have some helpful data there, as you can see in other posts on this site). The risks and recommendations listed are all evidence-based (that's important), and your clinical physician will probably be seeing this document soon if not already.

I do have to highlight one passage that addresses an issue that has come up: "Under no circumstances should chelation or provocation be used before testing." The authors thought this important enough to use a bold font! If you decide to get tested, please do objective testing that can be compared to reference values; doing a chelation challenge pumps up the test value and creates false positives, as well as exposing you to a chelating agent before you know if there is a problem. You can also read the American College of Medical Toxicology's position statement here, which explains the reasoning in more detail.

Cadmium and Arsenic update: soil tests from CCLC

One big unknown in the controversy about heavy metal pollution in southeast Portland (OR) has been the amount of accumulation of pollutants identified by the Oregon Department of Environmental Quality as above its benchmark levels in air. Cadmium (Cd) and arsenic (As) were measured well above emissions benchmarks at an air monitoring station near the Bullseye Glass production facility; while those emissions were not particularly high with respect to health risks for inhalation, we did not know if the pollutants had accumulated in soils. For heavy metals this can be a real concern, as atomic elements like Cd and As do not convert into other substances as a molecular pollutant can. There has been a legitimate concern that even air concentrations that were not acute could have built up to dangerous levels over the functional lifetime of the Bullseye facility.

Now we have some illuminating data, in the form of soil testing results from the nearby Children's Creative Learning Center (CCLC), about 150 meters from the Bullseye factory. The report from testing is posted here, and I've included the data table below:

Helpfully included in the test results are background levels for the Portland area. The 6" and 12" designations refer to the depth at which soil was sampled at each of six sites around the CCLC campus. The table also includes DEQ soil screening values for residential areas. Not included are the DEQ risk-based concentrations (RBCs) for urban residential areas, which are 1.0 mg/kg (As) and 160 mg/kg (Cd).

So what do the results tell us? First off, that there's no evidence for an overall accumulation of arsenic at the site, since the averages at both depths fall below the background level for the City as a whole. It is true that sites 5 and 6, which are on the west side of the building, are a bit higher than the other samples, especially at the 12" depth; this could be an effect of proximity to Bullseye, or even just proximity to a nearby road. Sample #4, however, is immediately adjacent and shows lower As levels, especially at 12" depth. The values do fall above the DEQ's risk-based concentration for arsenic exposure via soil ingestion, skin contact, and inhalation of contaminated soil, with the RBC set to match a 1 in 1 million risk of causing a cancer. The average value at 6" depth would correspond to roughly a 1 in 70,000 increase in the chance of developing a cancer, using assumptions like roughly year-round exposure. The maximum at 12" would correspond to a 1 in 20,000 increase in cancer risk with similar assumptions.
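
For anyone who wants to reproduce that kind of estimate, here's a rough sketch of the scaling involved, assuming excess risk is proportional to concentration and pegged to the DEQ RBC of 1.0 mg/kg arsenic at a 1-in-1,000,000 risk; the concentrations in the example are placeholders, not the measured CCLC values:

```python
# Linear scaling of excess lifetime cancer risk from the DEQ risk-based concentration (RBC).
RBC_AS = 1.0          # mg/kg arsenic in urban residential soil
RISK_AT_RBC = 1e-6    # the RBC is pegged to a 1-in-1,000,000 excess cancer risk

def excess_risk(concentration_mg_per_kg):
    """Excess lifetime cancer risk, assuming risk scales linearly with soil concentration."""
    return (concentration_mg_per_kg / RBC_AS) * RISK_AT_RBC

# Placeholder concentrations for illustration only - not the measured CCLC values.
for conc in (1.0, 5.0, 15.0):
    risk = excess_risk(conc)
    print(f"{conc:5.1f} mg/kg As -> excess cancer risk of about 1 in {round(1 / risk):,}")
```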

For cadmium, the sample values are more consistent from location to location. At depth, they fall very close to the background for Portland, though at 6" they are about twice that value, so there may have been some accumulation of Cd in the soil. The values are all very far below the risk-based concentration from DEQ for soil (by a factor of 50 or more), so there appear to be no grounds to worry about a health risk for cadmium from soil at CCLC.

Incidentally, you may have noticed that the DEQ screening values (the RBCs) are really different for the two elements; at first I thought that the value for cadmium was a typo! But the big difference seems to be that arsenic is volatile (i.e. it can evaporate into air from soil) while cadmium cannot, along with the fact that arsenic is more easily absorbed through the skin.

So what is the overall conclusion? I hesitate to generalize too much, but the amounts of As and Cd measured at CCLC all fall within about 2X the background for the Portland basin as a whole. For arsenic, only at the 12" depth are samples well above that background. Conversely, for cadmium, the values at depth are near baseline, while the shallow samples are elevated. The difference could be due to different leaching rates for the two elements, or it could represent different modes or times of accumulation. So there may be some accumulation of these elements from nearby industrial operations like Bullseye. On the other hand, this is a site very close to the Bullseye factory; given the relative values observed in the moss and lichen study from the US Forest Service, soil levels much farther away are likely to be at background levels for the City if soil concentrations drop at a similar rate to moss and lichen concentrations. As an example, airborne Cd levels drop by a factor of about 2 by the time one reaches Cleveland HS or Winterhaven Elementary in the moss and lichen study; if soil accumulation drops off at a similar rate, those schools would be expected to have soil levels of Cd and As near the background levels. For students and staff at CCLC, continued exposure to unremediated soil may be unwise, but health risks from soil will be small for arsenic and negligible for cadmium.


UPDATE: The Oregonian has released their own set of soil testing data here. The levels tend to drop off quickly with distance from Bullseye, with a few scattered high results among mostly below-detection-threshold tests. One very high test result near Cleveland HS should probably be followed up, though as an outlier it is likely to come from a source other than settling of emissions from Bullseye. A high lead level next to Powell Blvd. is likely (IMHO) to be the result of years of auto traffic using leaded gasoline.

Saturday, February 13, 2016

Glass manufacturing update for chromium


Since I put up my recent post on cadmium (Cd) and arsenic (As), attention in SE Portland (OR) has shifted to the possibility of hexavalent chromium pollution in the area around Bullseye Glass. To try to help everyone understand the data for Cr(VI), I'm adding this post, but a word of caution at the outset: the data for chromium are far less clear.

First off, everyone needs to understand that there are two common forms of chromium used in industry. Cr(III), or trivalent chromium, does not have documented health risks; it is the form of chromium found most often in the human body. Cr(VI), or hexavalent chromium, is more dangerous, because it is a powerful oxidant; it strips 3 electrons off other atoms or ions in converting to Cr(III). If you've heard of oxidative damage, this is an example. You could write these as Cr3+ and Cr6+, but I can't get Blogger to do superscripts...so they look funny.

Hexavalent chromium, therefore, is the form of the element that matters for health risk, and all the standards I can find are for Cr(VI), including the ones in the following table, which again combines DEQ air monitoring data with Oregon benchmarks, EPA cancer risks, and reference values for non-cancer health effects. I left the Cd and As rows in for comparison. Everything here applies to inhalation risk, with units of micrograms/cubic meter.


You may notice that the DEQ measurements are noticeably higher for chromium, while the EPA cancer risk threshold is lower. There is, however, a huge uncertainty here: we don't know how much of the chromium DEQ measured is Cr(VI) - hexavalent chromium. In the data sheet, there is no benchmark reported for "unspeciated" chromium, which means a mix of chromium in different oxidation states like Cr(III) or Cr(VI). It seems very likely that the method used to monitor chromium was something like mass spectrometry, which identifies Cr by its atomic weight but does not distinguish the oxidation state. So all we can be sure of from the data is that the amount of Cr(VI) is less than or equal to 0.0715 micrograms/cubic meter on average at the monitoring station.

The other uncertainty is whether the bulk of the chromium is coming from Bullseye at all. While the monitoring station is close by, we haven't got a relative concentration map for chromium from the moss and lichen sampling done by the US Forest Service, as we do for cadmium and arsenic. In the case of those metals, the USFS study pointed pretty clearly to Bullseye's location. It does seem that Bullseye was using both Cr(III) and Cr(VI), since they have suspended production with Cr(VI) but not Cr(III) at DEQ's request. However, the company claims that its factory was not operating on the peak days of chromium emission; there are indeed two days with much higher (roughly 20X) chromium emissions in the DEQ data that are responsible for most of the average (66% of the total emissions measured come from those two days). While the correspondence between cadmium and arsenic points to glassmaking as the likely industry, and Cr(VI) salts like potassium chromate and dichromate are used to color glass, lots of other industries use Cr(VI) salts as well - chrome plating, for example. We also don't have the benefit of knowing (from the moss and lichen monitoring) how quickly the chromium exposure lessens - if at all - as one moves away from the DEQ monitoring station. In sum, the source of the chromium in the DEQ data is unclear at this point.

Given those caveats, the chromium levels are still concerning. If the source of the chromium is industrial, it is quite possible that it is indeed mostly Cr(VI). In that case, the lung cancer risk from chromium exposure at the monitoring station is close to 1 in 1,000 for someone breathing that air 24/7 for a lifetime using the EPA model. That is still much lower than the baseline lung cancer risk, but if you eliminate the effect of smoking (which is linked to about 90% of lung cancer), this potential Cr(VI) risk might add as much as 14.5% to the risk of lung cancer from all causes except smoking - a worst-case scenario for someone living at the monitoring station. That's still only about a sixth of the contribution of radon exposure (you have had your house radon-checked, right?), but it's potentially more than for cadmium or arsenic. The non-cancer risks don't look too bad; they are actually far below the level where particulate emissions cause non-cancer disease, and the EPA has low confidence in the study that gives the much lower threshold for mist and aerosol exposure, for what look to my eyes like great reasons (see the "Chronic effects - noncancer" section in the IRIS summary).
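
For transparency, here's the back-of-the-envelope arithmetic behind that worst-case number, assuming every bit of the measured chromium is Cr(VI); the unit risk and non-smoking baseline constants below are my assumed inputs (from EPA IRIS and rough lifetime-risk figures), not values taken from the DEQ data sheet:

```python
# Worst-case sketch: assume ALL measured chromium is Cr(VI) and continuous lifetime exposure.
cr_total = 0.0715            # average chromium at the monitoring station, micrograms/m^3 (DEQ data)
unit_risk = 1.2e-2           # assumed EPA IRIS inhalation unit risk for Cr(VI), per microgram/m^3
baseline_nonsmoking = 0.006  # assumed rough lifetime lung cancer risk excluding smoking

excess = cr_total * unit_risk
print(f"excess lifetime lung cancer risk: about 1 in {round(1 / excess):,}")         # ~1 in 1,000
print(f"relative to the non-smoking baseline: ~{excess / baseline_nonsmoking:.0%}")  # ~14%
```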

So the picture for chromium is much more vague. It would be great, for example, to know how much of the chromium DEQ is finding in the air is hexavalent. It is possible to distinguish the hexavalent chromium ion, but a Cr(VI)-specific test requires an extra separation step (silica gel chromatography) that was presumably not part of the DEQ protocol. It would also be very helpful to know whether the USFS data include relative chromium levels, even if those samples don't distinguish the oxidation state, to help pinpoint the source of the emissions. Those avenues should definitely be pursued; the existing DEQ emissions data give us a worst-case risk that is not minor. Further data would let us know just how close we actually are to that worst case.