Thanks for your quick reply, Deke. That is really helpful. I don't want to take advantage of your time and generosity, but if you're up for fielding one more question from me, I would be super appreciative, as your response has helped me clarify what I'm really after here. I'm writing about what happened in Utqiaġvik for an academic journal in the humanities, and I'm trying my best to understand something very basic about how the PHA works. In essence: are the limitations of a PHA ultimately purely technical and practical (i.e., with a sufficient number of reliably working monitoring stations in a given area, the algorithm would be foolproof), or is there also always the possibility, however slight, of an error caused instead by abnormally rapid yet very real climate-driven temperature change?

Above, you explain that a PHA works by comparing the rate of temperature change across multiple monitoring stations in a given area, and you say that the monitoring station in Utqiaġvik was "relatively isolated," which made it more vulnerable to error than most. Does this mean that, with enough stations in a given area, the algorithm would be foolproof, i.e., would flag a temperature change as "artificial" *only* when it really is? Or could a rate of temperature increase itself be so unusual or improbable that the algorithm would mistakenly classify it as "artificial," even if it were registered by many monitoring stations across the region (perhaps because the algorithm also needs to catch genuinely "artificial" changes to the surroundings, like logging)?

From my understanding so far, it seems to come down to a distance/temperature trade-off that can never be fully eliminated: a trade-off between the number of monitoring stations in a given area and the rate of temperature change that an algorithm currently deems "acceptable in reality" for that area. But if, as global warming unfolds, its effects can always bring about "new normal" standards for that trade-off, then there must always remain the possibility, however small, that a current algorithm gives a false alarm, no matter how many monitoring stations there are in a given area, because what an algorithm deems "acceptable in reality" can never fully anticipate the "new normal" standards of acceptability that global warming can bring about.

I hope what I'm asking makes some kind of sense. In essence, I'm just asking whether the PHA is ultimately prone *only* to errors due to practical or technical limitations, or whether there is also always the possibility, however small, of error caused by sufficiently unusual or unprecedented yet nevertheless very real changes due to climate change itself. I would be hugely grateful for a response on this from you. Thank you so much for your time and expertise!

Chris
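
P.S. In case it helps to see how I'm picturing the pairwise comparison, here is a toy sketch I put together for myself. It is purely my own assumption of the general idea (difference series between a station and its neighbours, scanned for step changes), not the actual PHA code, and the function names and numbers are invented for illustration. It just tries to show why a jump shared by every station in a region tends to cancel out in the pairwise differences, while a jump at one station alone stands out.

```python
# Toy sketch of the pairwise idea as I currently understand it
# (my own illustration, not the real PHA implementation).
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1980, 2020)

def station_series(jump_year=None, jump_size=0.0):
    """Annual temperature anomalies: slow trend + noise + optional step change."""
    series = 0.02 * (years - years[0]) + rng.normal(0, 0.3, len(years))
    if jump_year is not None:
        series[years >= jump_year] += jump_size
    return series

def max_step_in_differences(target, neighbours):
    """Largest mean shift found in the target-minus-neighbour difference series.
    A large value suggests a change unique to the target station (a candidate
    "artificial" break); a small value suggests the change is shared regionally."""
    steps = []
    for nb in neighbours:
        diff = target - nb
        # crude changepoint scan: compare means before/after each possible break year
        steps.append(max(abs(diff[:k].mean() - diff[k:].mean())
                         for k in range(5, len(diff) - 5)))
    return np.median(steps)

# Case 1: a real, region-wide jump (every station warms together in 2000)
region = [station_series(jump_year=2000, jump_size=1.5) for _ in range(5)]
print("regional jump:", round(max_step_in_differences(region[0], region[1:]), 2))

# Case 2: a jump at one station only (e.g. an instrument move in 2000)
target = station_series(jump_year=2000, jump_size=1.5)
neighbours = [station_series() for _ in range(4)]
print("isolated jump:", round(max_step_in_differences(target, neighbours), 2))
```

If this sketch is roughly right, then my question is really about Case 1 when the region is thin on stations, or when the shared change is larger than anything the algorithm's thresholds have been tuned to treat as plausible.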