In the U.K., two members of the direct action climate group Just Stop Oil were arrested after they spray-painted the words “1.5 is dead” on the grave of Charles Darwin at London’s Westminster Abbey Monday. The protest follows the news last week that 2024 was the hottest year in human history, with the average global temperature rising by 1.6 degrees Celsius above preindustrial levels. This is one of the activists.
Alyson Lee: “2024 was the hottest year on record. We have already passed through the 1.5 degree that was supposed to keep us safe. Millions are being displaced. California is on fire. And three-quarters of all wildlife has disappeared since the 1970s.”
The activist later said she believed Charles Darwin would approve of their protest “because he would be following the science, and he would be as upset as us with the government for ignoring the science.”
They succeeded in getting attention; look at all the comments posted here. The issue needs attention, but it also needs fact-checking. I was pleased with the fact-checking I got from diffy.chat about the wildfires in LA County. Maybe fact-checking bots should be included in online discussion forums.
The bots are mostly language models, not knowledge models. I don’t regard them as sufficiently reliable to do any kind of fact-checking.
The language model for diffy.chat has been trained not to respond from its own learned parameters, but to use the Diffbot external knowledge base. Each sentence or paragraph in a Diffy response has a link to the source of the information.
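Diffy’s internals aren’t public, but the pattern described — every sentence of a response carries a link back to the knowledge-base entry it came from — can be sketched roughly as follows. All names here are hypothetical illustrations, not an actual diffy.chat or Diffbot API:

```python
from dataclasses import dataclass

@dataclass
class GroundedSentence:
    """One sentence of a response, paired with its source link."""
    text: str
    source_url: str

def render_response(sentences):
    """Render each sentence followed by a citation to its source,
    so a reader can check every claim independently."""
    return "\n".join(f"{s.text} [{s.source_url}]" for s in sentences)

# Hypothetical grounded answer: each claim points at an entry
# in an external knowledge base rather than model parameters.
answer = [
    GroundedSentence("2024 was the warmest year on record.",
                     "https://example.org/climate-report-2024"),
]
print(render_response(answer))
```

The design point is that the unit of citation is the individual claim, not the whole answer, which is what makes spot-checking feasible.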
That still doesn’t put it in the realm where I trust it; the underlying model is still a language model. What you’re describing is a recipe for ending up with paltering a significant fraction of the time.
Did you even try diffy.chat to test how factually correct it is and how well it cites its sources? How good does it have to be to be useful? How bad does it have to be to be useless?
I tried it. It produces reasonably accurate results a meaningful fraction of the time. The problem is that when it’s wrong, it still uses authoritative language, and you can’t tell the difference without underlying knowledge.
There does need to be a human-in-the-loop mechanism so that people who have the underlying knowledge can correct the knowledge base. Perhaps a notification should be sent to everyone who previously viewed the incorrect information when a correction is made.
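The correction-plus-notification idea above can be sketched minimally. This is a toy illustration under my own assumptions (in-memory storage, hypothetical names), not a description of how diffy.chat or Diffbot actually works:

```python
from collections import defaultdict

class KnowledgeBase:
    """Toy knowledge base that remembers who viewed each entry,
    so they can be notified if that entry is later corrected."""

    def __init__(self):
        self.entries = {}                # entry_id -> current text
        self.viewers = defaultdict(set)  # entry_id -> users who saw it
        self.outbox = []                 # queued (user, message) pairs

    def view(self, user, entry_id):
        """Record that a user saw this entry and return its text."""
        self.viewers[entry_id].add(user)
        return self.entries.get(entry_id)

    def correct(self, entry_id, new_text, reviewer):
        """Apply a correction from a knowledgeable reviewer and queue
        a notification for everyone who viewed the old version."""
        self.entries[entry_id] = new_text
        for user in self.viewers[entry_id]:
            self.outbox.append(
                (user, f"Entry '{entry_id}' was corrected by {reviewer}.")
            )

kb = KnowledgeBase()
kb.entries["la-fires"] = "LA County wildfires: initial report."
kb.view("alice", "la-fires")
kb.correct("la-fires", "LA County wildfires: corrected report.", "bob")
print(len(kb.outbox))  # alice has one pending notification
```

A real system would also need reviewer vetting and an audit trail for the corrections themselves, otherwise the correction channel becomes a new vector for bad information.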