---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
pretty_name: fcc-comments
size_categories:
- 10M<n<100M
source_datasets:
- original
tags:
- notice and comment
- regulation
- government
task_categories:
- text-retrieval
task_ids:
- document-retrieval
---

# Dataset Card for fcc-comments

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Repository:** https://github.com/slnader/fcc-comments
- **Paper:** https://doi.org/10.1002/poi3.327

### Dataset Summary

Online comment floods during public consultations have posed unique governance challenges for regulatory bodies seeking relevant information on proposed regulations. How should regulatory bodies separate spam and fake comments from genuine submissions by the public, especially when fake comments are designed to imitate ordinary citizens? How can regulatory bodies achieve both breadth and depth in their citations to the comment corpus? What is the best way to select comments that represent the average submission and comments that supply highly specialized information?

`fcc-comments` is an annotated version of the comment corpus from the Federal Communications Commission's (FCC) 2017 "Restoring Internet Freedom" proceeding. The source data were downloaded directly from the FCC's Electronic Comment Filing System (ECFS) between January and February of 2019 and include raw comment text and metadata on comment submissions. The comment data were processed into a consistent format (machine-readable pdf or plain text) and annotated with three types of information: whether the comment was cited in the agency's final order, the type of commenter (individual, interest group, business group), and whether the comment was associated with an in-person meeting.

The release also includes query-term and document-term matrices to facilitate keyword searches on the comment corpus. An example of how these can be used with the BM25 algorithm can be found [here](https://github.com/slnader/fcc-comments/blob/main/process_comments/1_score_comments.py).
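For quick prototyping, a minimal BM25 scorer over the released matrices might look like the sketch below. It assumes `search_dtms.pickle` and `search_index.pickle` have been extracted from search.tar.gz into the working directory, uses standard BM25 with common default parameters, and scores placeholder term columns; it is not necessarily the exact scoring in the linked script.

```python
# Minimal BM25 sketch over the released document-term matrix (an
# approximation, not the repository's 1_score_comments.py).
import pickle
import numpy as np

with open("search_dtms.pickle", "rb") as f:
    dtm = pickle.load(f)    # sparse csr matrix: comment pages x bigram terms
with open("search_index.pickle", "rb") as f:
    index = pickle.load(f)  # dataframe: unique id and total term length

k1, b = 1.5, 0.75           # common BM25 defaults, not tuned values
doc_len = np.asarray(dtm.sum(axis=1)).ravel()
avg_len = doc_len.mean()
n_docs = dtm.shape[0]
df = np.asarray((dtm > 0).sum(axis=0)).ravel()          # document frequency
idf = np.log((n_docs - df + 0.5) / (df + 0.5) + 1.0)

def bm25_scores(term_cols):
    """Score every page against a bag of query term columns."""
    scores = np.zeros(n_docs)
    for t in term_cols:
        tf = dtm[:, t].toarray().ravel()
        scores += idf[t] * tf * (k1 + 1) / (tf + k1 * (1 - b + b * doc_len / avg_len))
    return scores

top10 = np.argsort(-bm25_scores([0, 42]))[:10]  # placeholder term columns
print(index.iloc[top10])
```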

## Dataset Structure

FCC relational database (fcc.pgsql): The core components of the database include a table for submission metadata, a table for attachment metadata, a table for filer metadata, and a table that contains comment text if submitted in express format. In addition to these core tables, there are several derived tables specific to the analyses in the paper, including which submissions and attachments were cited in the final order, which submissions were associated with in-person meetings, and which submissions were associated with interest groups. Full documentation of the tables can be found in fcc_database.md.
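As a minimal sketch of getting started with the dump (assuming a local PostgreSQL server; the database name and connection parameters are placeholders, not part of the release):

```python
# A minimal sketch, assuming the dump was restored locally first, e.g.:
#   createdb fcc && psql -d fcc -f fcc.pgsql
import pandas as pd
import psycopg2

conn = psycopg2.connect(dbname="fcc")  # placeholder connection

# Count express vs. standard submissions (columns per the schema below).
counts = pd.read_sql(
    "SELECT express_comment, COUNT(*) AS n FROM submissions GROUP BY express_comment",
    conn,
)
print(counts)
```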

Attachments (attachments.tar.gz): Attachments to submissions that could be converted to text via OCR and saved in machine-readable pdf format. The filenames are formatted as [submission_id]_[document_id].pdf, where submission_id and document_id are keys in the relational database.
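A hypothetical helper for recovering those keys from an attachment filename; it assumes submission_id itself contains no underscore, and the example filename is invented:

```python
# Hypothetical helper: map an attachment filename back to database keys.
from pathlib import Path

def attachment_keys(path: str) -> tuple[str, str]:
    submission_id, _, document_id = Path(path).stem.partition("_")
    return submission_id, document_id

print(attachment_keys("10704195022281_abc123.pdf"))  # made-up example name
```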

Search datasets (search.tar.gz): Objects to facilitate prototyping of search algorithms on the comment corpus. Contains the following elements:

| object | description |
| ----------- | ----------- |
| search_dtms.pickle | Document-term matrix for standard comment attachments (44655x3986) in sparse csr format (rows are comment pages, columns are bigram keyword counts). |
| search_index.pickle | Pandas dataframe containing unique id and total term length for standard comment attachments. |

### Data Fields

The following tables are available in fcc.pgsql:

#### comments
plain text comments associated with submissions

| column | type | description |
| ----------- | ----------- | ----------- |
| comment_id | character varying(64) | unique id for plain text comment |
| comment_text | text | raw text of plain text comment |
| row_id | integer | row sequence for plain text comments |

#### submissions
metadata for submissions

| column | type | description |
| ----------- | ----------- | ----------- |
| submission_id | character varying(20) | unique id for submission |
| submission_type | character varying(100) | type of submission (e.g., comment, reply, statement) |
| express_comment | numeric | 1 if express comment |
| date_received | date | date submission was received |
| contact_email | character varying(255) | submitter email address |
| city | character varying(255) | submitter city |
| address_line_1 | character varying(255) | submitter address line 1 |
| address_line_2 | character varying(255) | submitter address line 2 |
| state | character varying(255) | submitter state |
| zip_code | character varying(50) | submitter zip |
| comment_id | character varying(64) | unique id for plain text comment |

#### filers
names of filers associated with submissions

| column | type | description |
| ----------- | ----------- | ----------- |
| submission_id | character varying(20) | unique id for submission |
| filer_name | character varying(250) | name of filer associated with submission |

#### documents
attachments associated with submissions

| column | type | description |
| ----------- | ----------- | ----------- |
| submission_id | character varying(20) | unique id for submission |
| document_name | text | filename of attachment |
| download_status | numeric | status of attachment download |
| document_id | character varying(64) | unique id for attachment |
| file_extension | character varying(4) | file extension for attachment |

#### filers_cited
citations from final order

| column | type | description |
| ----------- | ----------- | ----------- |
| point | numeric | paragraph number in final order |
| filer_name | character varying(250) | name of cited filer |
| submission_type | character varying(12) | type of submission as indicated in final order |
| page_numbers | text[] | cited page numbers |
| cite_id | integer | unique id for citation |
| filer_id | character varying(250) | id for cited filer |

#### docs_cited
attachments associated with cited submissions

| column | type | description |
| ----------- | ----------- | ----------- |
| cite_id | numeric | unique id for citation |
| submission_id | character varying(20) | unique id for submission |
| document_id | character varying(64) | unique id for attachment |

#### near_duplicates
lookup table for comment near-duplicates

| column | description |
| ----------- | ----------- |
| target_document_id | unique id for target document |
| duplicate_document_id | unique id for duplicate of target document |

#### exact_duplicates
lookup table for comment exact duplicates

| column | type | description |
| ----------- | ----------- | ----------- |
| target_document_id | character varying(100) | unique id for target document |
| duplicate_document_id | character varying(100) | unique id for duplicate of target document |

#### in_person_exparte
submissions associated with ex parte meetings

| column | type | description |
| ----------- | ----------- | ----------- |
| submission_id | character varying(20) | unique id for submission |

#### interest_groups
submissions associated with interest groups

| column | type | description |
| ----------- | ----------- | ----------- |
| submission_id | character varying(20) | unique id for submission |
| business | numeric | 1 if business group, 0 otherwise |
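Putting the citation tables together, here is a sketch of one natural join over the schema above (reusing the placeholder connection from the earlier sketch): pull the attachments tied to each final-order citation, with the cited filer's name and paragraph number.

```python
# Sketch: attachments tied to each citation in the final order, joining
# filers_cited -> docs_cited -> documents on the keys documented above.
cited_docs = pd.read_sql(
    """
    SELECT fc.point, fc.filer_name, fc.page_numbers, d.document_name
    FROM filers_cited AS fc
    JOIN docs_cited   AS dc ON dc.cite_id = fc.cite_id
    JOIN documents    AS d  ON d.document_id = dc.document_id
    ORDER BY fc.point
    """,
    conn,  # placeholder connection from the Dataset Structure sketch
)
print(cited_docs.head())
```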

## Dataset Creation

### Curation Rationale

The data were curated to perform information retrieval and summarization tasks as documented in https://doi.org/10.1002/poi3.327.

### Source Data

#### Initial Data Collection and Normalization

The data for this study come from the FCC's Electronic Comment Filing System (ECFS), accessed between January and February of 2019. I converted the API responses into a normalized, relational database containing information on 23,951,967 submissions. 23,938,686 "express" submissions contained a single plain text comment submitted directly through the comment form. 13,821 "standard" submissions contained one or more comment documents submitted as attachments in various file formats. While the FCC permitted any file format for attachments, I only consider documents attached in pdf, plain text, rich text, and Microsoft Word file formats, and I drop submitted documents that were simply copies of the FCC's official documents (e.g., the NPRM itself). Using standard OCR software, I attempted to convert all attachments into plain text and saved them as machine-readable pdfs.

#### Who are the source language producers?

All submitters of public comments during the public comment period (but see the note on fake comments in the considerations section below).

### Annotations

#### Annotation process

- Citations: I consider citations from the main text of the FCC's final rule. I did not include citations to supporting documents not available through ECFS (e.g., court decisions), nor did I include citations to submissions from prior FCC proceedings. The direct citations to filed submissions are included in a series of 1,186 footnotes. The FCC's citation format typically followed a relatively standard pattern: the name of the filer (e.g., Verizon), a description of the document (e.g., Comment), and at times a page number. I extracted citations from the text using regular expressions (see the illustrative sketch after this list). Based on a random sample of paragraphs from the final order, the regular expressions identified 98% of eligible citations while successfully excluding all non-citation text. In total, this produced 1,886 unique citations. I then identified which of the comments were cited. First, I identified all documents from the cited filer that had enough pages to contain the page number cited (if provided) and, where applicable, whose filename contained the moniker from the FCC's citation (e.g., "Reply"). The majority of citations matched to only one possible comment, and I identified the remaining cited comments through manual review of the citations. In this way, I was able to tag documents associated with all but three citations. When the same cited document was submitted under multiple separate submissions, I tagged all versions of the document as being cited.

- Commenter type: Comments are labeled as mass comments if 10 or more duplicate or near-duplicate copies were submitted by individual commenters. Near-duplicates were defined as comments with non-zero identical information scores. To identify the type of commenter for non-mass comments, I take advantage of the fact that the vast majority of organized groups preferred standard submissions over express submissions. Any non-mass comment submitted as an express comment was coded as coming from an individual. To distinguish between individuals and organizations that used standard submissions, I use a first name and surname database from the `names-dataset` Python package to characterize filer names as belonging to individuals or organizations. I also use the domain of the submitter's email address to re-categorize comments as coming from organizations if they were submitted on behalf of organizations by an individual. Government officials were identified by their .gov email addresses. I manually reviewed this procedure for mischaracterizations. After obtaining a list of organization names, I manually coded each one as belonging to a business group or a non-business group. Government officials writing in their official capacity were categorized as a non-business group.

- In-person meetings: To identify which commenters held in-person meetings with the agency, I collected all comments labeled as an ex parte submission in the ECFS. I manually reviewed these submissions for mention of an in-person meeting. I labeled a commenter as having held an in-person meeting if they submitted at least one ex parte document that mentioned an in-person meeting.
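To make the citation-pattern idea concrete, here is an illustrative regular expression in the spirit of the format described above. It is not the expression used for the paper, and the footnote text is invented:

```python
# Illustrative only: a pattern matching the FCC's typical citation format
# (filer name, document moniker, optional page number). NOT the actual
# expressions used for the paper; the footnote text below is invented.
import re

CITE_RE = re.compile(
    r"(?P<filer>[A-Z][\w&.' -]+?)\s+"       # filer name, e.g. "Verizon"
    r"(?P<doc>Comments?|Reply|Ex Parte)"    # document description
    r"(?:\s+at\s+(?P<page>\d+))?"           # optional page number
)

footnote = "Verizon Comments at 12; AT&T Reply."
for m in CITE_RE.finditer(footnote):
    print(m.group("filer"), m.group("doc"), m.group("page"))
# Verizon Comments 12
# AT&T Reply None
```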

#### Who are the annotators?

Annotations are a combination of automated and manual review done by the author.

### Personal and Sensitive Information

This dataset may contain personal and sensitive information, as there were no restrictions on what commenters could submit to the agency. It also contains numerous examples of profanity and spam. These comments represent what the FCC decided was appropriate to share publicly on its own website.

## Considerations for Using the Data

### Discussion of Biases

This proceeding was famous for the large number of "fake" comments (comments impersonating ordinary citizens) submitted to the agency (see [this report](https://ag.ny.gov/sites/default/files/oag-fakecommentsreport.pdf) by the NY Attorney General for more information). As such, this comment corpus contains a mix of computer-generated and natural language, and there is currently no way to reliably separate mass comments submitted with the approval of the commenter from those submitted on behalf of the commenter without their knowledge.

## Additional Information

### Licensing Information

Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0).

### Citation Information

```
@article{handan2022,
  title={Do fake online comments pose a threat to regulatory policymaking? Evidence from Internet regulation in the United States},
  author={Handan-Nader, Cassandra},
  journal={Policy \& Internet},
  year={2022}
}
```