---
license: cc-by-nc-sa-4.0
language:
- en
tags:
- climate
size_categories:
- n<1K
---
|
# Dataset Card for Science Feedback |
|
|
|
### Dataset Description |
|
|
|
<!-- Provide a longer summary of what this dataset is. --> |
|
A dataset of Science Feedback's climate change claim/article reviews, web-scraped with Python. Science Feedback's climate change reviews are one of the few, if not the only, existing gold-standard datasets written by nonpartisan climate change experts.
|
Read more about Science Feedback here: https://science.feedback.org/about/. |
|
|
|
### Dataset Sources |
|
|
|
<!-- Provide the basic links for the dataset. --> |
|
The reviews were extracted from https://science.feedback.org/reviews/?_topic=climate. |
|
- **Repository:** [More Information Needed]
|
- **Paper [optional]:** [More Information Needed] |
|
- **Demo [optional]:** [More Information Needed] |
|
|
|
### Curation Rationale |
|
|
|
<!-- Motivation for the creation of this dataset. --> |
|
|
|
This dataset was created to test and fine-tune The Endowment for Climate Intelligence (ECI)’s ClimateGPT model on non-binary climate change claims. This will help determine the model’s usefulness in real-world applications, where claims do not always fall into the binary categories “correct” and “incorrect.” For testing and fine-tuning, the verdicts were consolidated into five categories to increase the number of samples per class and to avoid confusing the model with too many sparsely populated categories (see merge_science_feedback_labels.ipynb in the GitHub repository).
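The exact consolidation is defined in merge_science_feedback_labels.ipynb; the snippet below is only an illustrative sketch of the technique, with assumed verdict names and groupings that may not match the notebook’s actual mapping.

```python
import pandas as pd

# Hypothetical mapping from Science Feedback's fine-grained keyword
# verdicts to five consolidated classes; the real mapping lives in
# merge_science_feedback_labels.ipynb and may differ.
VERDICT_MAP = {
    "incorrect": "incorrect",
    "inaccurate": "incorrect",
    "misleading": "misleading",
    "flawed reasoning": "misleading",
    "unsupported": "unsupported",
    "lacks context": "missing context",
    "mostly accurate": "mostly accurate",
}

verdicts = pd.Series(["Incorrect", "Flawed reasoning", "Unsupported"])
merged = verdicts.str.strip().str.lower().map(VERDICT_MAP)
print(merged.tolist())  # ['incorrect', 'misleading', 'unsupported']
```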
|
|
|
### Data Fields |
|
- **url:** The URL of the claim/article review. |
|
- **verdict:** A keyword or float assigned by the reviewer to rate the claim or article on a scale of scientific credibility. The latest reviews use keyword verdicts, but Science Feedback previously used a numerical credibility scale (-2.0 to 2.0); a sketch after this list shows how the two verdict types can be separated. Check https://science.feedback.org/process/ for more information on Science Feedback’s methodology.
|
- **source:** The source of the claim/article being reviewed. |
|
- **claim:** A concise version of the source’s main claim (the source may be a social media post, podcast, etc.). This field is unique to sources that have a keyword verdict.
|
- **headline:** The headline of the article being reviewed. This field is unique to sources that have a float verdict.
|
- **verdict_detail:** Explanation for why the verdict was given. |
|
- **key_takeaway:** A summary, or key takeaway, of the review. |
|
- **full_claim:** The source’s claim in full. This field is always present when **claim** is present.
|
- **references:** The references listed at the bottom of the claim review (if present).
|
- **review:** The text that reviews the claim/article in question and supports the verdict. |
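As a minimal sketch, the two verdict types can be told apart when loading the data with pandas; the file name science_feedback.csv here is an assumption about how the dataset is stored locally.

```python
import pandas as pd

# Assumed local file name; adjust to wherever the CSV was saved.
df = pd.read_csv("science_feedback.csv")

# Float verdicts belong to article reviews (which carry a headline);
# keyword verdicts belong to claim reviews (which carry a claim).
numeric = pd.to_numeric(df["verdict"], errors="coerce")
article_reviews = df[numeric.notna()]
claim_reviews = df[numeric.isna()]
print(f"{len(article_reviews)} article reviews, {len(claim_reviews)} claim reviews")
```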
|
|
|
### Source Data |
|
|
|
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> |
|
According to Science Feedback, "Science Feedback editors select claims or articles for review that are the most viral on social media and/or are published by sources with large readership. Either must contain potentially verifiable claims in the scientific realm." (https://science.feedback.org/process/) |
|
|
|
#### Data Collection and Processing |
|
|
|
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> |
|
|
|
Python’s requests and BeautifulSoup libraries were used to loop through each claim/article review page and extract its HTML as a string; the pandas library was then used to save the extracted data fields as a CSV file.
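A minimal sketch of that pipeline, with a hypothetical review URL and placeholder CSS selectors (the main code’s actual selectors and pagination logic are not reproduced here):

```python
import requests
import pandas as pd
from bs4 import BeautifulSoup

# Hypothetical review URL; the real scraper walks every review linked
# from https://science.feedback.org/reviews/?_topic=climate.
urls = ["https://science.feedback.org/review/example-claim-review/"]

rows = []
for url in urls:
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    # Placeholder selectors; the site's real markup differs.
    verdict = soup.select_one(".verdict")
    review = soup.select_one(".review-body")
    rows.append({
        "url": url,
        "verdict": verdict.get_text(strip=True) if verdict else None,
        "review": review.get_text(" ", strip=True) if review else None,
    })

pd.DataFrame(rows).to_csv("science_feedback.csv", index=False)
```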
|
Extracted reviews were lightly processed for consistency of format (a sketch of these cleanup steps follows the list):
|
- Text not part of the review was omitted (e.g., “SEE OUR METHOD OF EVALUATION →”) (lines 52, 75-76 of main code)
|
- The “VERDICT DETAIL” heading was omitted (line 52 of main code) |
|
- Newline and tab characters (“\n” and “\t”) were replaced by a space and an empty string, respectively (line 52 of main code)
|
- A “: ” was added after each scientist byline and each section heading (lines 80-86, 93-105 of main code)
|
- e.g.: |
|
- “Amber Kerr Researcher, Agricultural Sustainability Institute, University of California, Davis” → “Amber Kerr Researcher, Agricultural Sustainability Institute, University of California, Davis: ” |
|
- "Scientists' Feedback..." → "Scientists' Feedback: ..." |
|
- References were split from the main review (lines 108-117 of main code) |
|
- The “+” sign was removed from verdicts of type float
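A hedged sketch of these cleanup steps as plain string operations (the main code operates on the parsed HTML at the line numbers cited above; this reimplements only the text replacements):

```python
def clean_review(text: str) -> str:
    """Apply the cleanup steps listed above to a raw review string."""
    # Omit text that is not part of the review.
    text = text.replace("SEE OUR METHOD OF EVALUATION →", "")
    # Replace newlines with spaces and remove tabs.
    text = text.replace("\n", " ").replace("\t", "")
    return text.strip()

def clean_verdict(verdict: str) -> str:
    """Strip the '+' sign from float verdicts, e.g. '+1.5' -> '1.5'."""
    return verdict.lstrip("+")

print(clean_review("Scientists' Feedback\tSEE OUR METHOD OF EVALUATION →"))
print(clean_verdict("+1.5"))  # prints '1.5'
```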
|
|
|
#### Who are the source data producers? |
|
|
|
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> |
|
|
|
"The Science Feedback association is a not-for-profit organisation registered in France whose mission is defined in its status as to improve the credibility of science-related information online, in the media and on social media." (https://science.feedback.org/about/) |
|
|
|
## Dataset Card Authors |
|
|
|
Joongeun Choi |
|
|
|
## Dataset Card Contact |
|
|
|
joon.choi2025@gmail.com |