
Classification of contrarian claims about climate change

The latest controversy in the climate debate is the publication of a paper on [c]omputer-assisted classification of contrarian claims about climate change. The authors are Travis Coan, Constantine Boussalis, John Cook, and Mirjam Nanko. You may recognise John Cook as the founder of Skeptical Science, which, for some, is already a red flag. This Washington Post article does a nice job of describing the work, as does this article by John Cook.

The paper essentially develops a taxonomy of contrarian claims about climate science/policy and then uses a computer model to categorize the content from a set of contrarian blogs and conservative think tanks. The taxonomy includes a small set of super-claims, then a set of sub-claims, and then sub-sub-claims. I was getting a little bit of flak on Twitter because I was one of those who helped to train the model by categorizing a sample of the content from these blogs/think tanks, so I do have an association with the paper.

The basic result was that conservative think tanks tend to argue that climate solutions won’t work, or would be harmful, while contrarian blogs tend to focus on claims that climate science is unreliable. It’s also possible to determine how the various super-claims have changed with time and to associate these changes with various events (Climategate, An Inconvenient Truth, etc.).

There seemed to be two main criticisms of the paper. One was that it’s advocating for censorship. This seems obviously not true, but I do understand the concern. However, I think there are a number of related but distinct issues. Are there sites that promote misinformation? The answer seems obviously yes. Is it worth developing methods for identifying misinformation? Again, I would argue yes, but I can see how some might worry about the implications of doing so. Finally, should we do anything about sites that promote misinformation? My own view is that we shouldn’t be aiming to censor such sites, but I see no reason why we shouldn’t be identifying the misinformation that they’re presenting and highlighting that they’re doing so.

The second criticism related to some of the claims not necessarily being wrong: for example, that models are unreliable, that CCS is unproven, or that nuclear is good. I think, though, that this misunderstands the methodology in the paper. The taxonomy has 5 super-claims, then a set of sub-claims, and a set of sub-sub-claims. So, yes, some of the text in the sub-claims, or sub-sub-claims, may not necessarily be statements that are wrong, but they identify rhetoric that is sometimes used to justify a super-claim. The paper even acknowledges that “our taxonomy includes several claims that well-known contrarians tend to make, yet are not necessarily contrary to mainstream views.” So, you need to interpret these sub-claims, and sub-sub-claims, in the context of the super-claims, rather than interpreting them as independent claims.
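To make the point concrete, here is a minimal sketch in Python of how such a hierarchical taxonomy forces any sub-claim or sub-sub-claim back to its parent super-claim. The specific category labels below are illustrative placeholders loosely inspired by the paper, not the paper’s actual coding scheme or code.

```python
from typing import Optional

# Hypothetical taxonomy fragment: super-claims contain sub-claims,
# which in turn contain sub-sub-claims. Labels are invented for
# illustration and do not reproduce the paper's scheme.
TAXONOMY = {
    "science_is_unreliable": {
        "models_are_unreliable": ["models_cant_predict_weather"],
    },
    "solutions_wont_work": {
        "policies_are_harmful": ["policies_hurt_economy"],
        "clean_energy_wont_work": ["nuclear_is_good"],
    },
}

def super_claim_of(label: str) -> Optional[str]:
    """Return the super-claim that gives a (sub-)sub-claim its context."""
    for sup, subs in TAXONOMY.items():
        if label == sup:
            return sup
        for sub, subsubs in subs.items():
            if label == sub or label in subsubs:
                return sup
    return None  # label not in the taxonomy
```

With a structure like this, a classifier never emits “nuclear is good” as a free-standing verdict; the label only exists as evidence for its parent super-claim (here, the invented “solutions_wont_work”), which is the point about interpretation in context.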

I do think it is quite an interesting paper that presents some impressive work. I also think it’s worth thinking about how we can identify misinformation, which this paper is trying to do. On the other hand, I also think we should be careful about what we do with this information. With some exceptions, I think people should be free to promote misinformation. However, they aren’t free to do so without this being highlighted and without facing any criticism for what they’re choosing to do.
