Create README.md
---
license: mit
language:
- en
pretty_name: BBT-CC19
size_categories:
- 10M<n<100M
configs:
- config_name: script_extraction
  data_files: "script_extraction/*.arrow"
- config_name: ipmaxmind
  data_files: "ipmaxmind/*.arrow"
---
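
The two `configs` above expose the `script_extraction` and `ipmaxmind` tables as separate subsets. A minimal loading sketch with the 🤗 `datasets` library; the repo id `big-banyan-tree/BBT-CC19` is an assumed placeholder, so substitute the dataset's actual Hub path:

```python
from datasets import load_dataset

# "big-banyan-tree/BBT-CC19" is a placeholder repo id -- use the dataset's real Hub path.
ds = load_dataset("big-banyan-tree/BBT-CC19", name="ipmaxmind", split="train")

print(ds)     # row count and column names
print(ds[0])  # first record as a dict
```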

# Context

BigBanyanTree is an initiative to empower colleges to set up their own data engineering clusters and to drive interest in data processing and analysis using tools such as Apache Spark. The data provided here is a direct result of this initiative. It was processed by [Gautam](https://www.linkedin.com/in/gautam-menon-9a30a3233/) and [Suchit](https://www.linkedin.com/in/suchitg04/), under the guidance of [Harsh Singhal](https://www.linkedin.com/in/harshsinghal/).

# Content

Each `arrow` file contains a table with fields extracted from Common Crawl WARC files.
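
Individual shards can also be read directly with `pyarrow`, assuming they use the Arrow IPC stream format that `datasets` typically writes. A sketch with a placeholder shard name (`script_extraction/part-0.arrow`); the real file names may differ:

```python
import pyarrow as pa

# "script_extraction/part-0.arrow" is a placeholder shard name.
with pa.memory_map("script_extraction/part-0.arrow", "r") as source:
    table = pa.ipc.open_stream(source).read_all()

print(table.schema)    # field names and types
print(table.num_rows)
```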

## <span style="color:red">⚠️ WARNING ⚠️</span>

The **URLs** and **IP addresses** in this dataset are extracted from **publicly available Common Crawl data dumps**. Please be aware that:

- The data may contain **inaccuracies** or **outdated information**.
- **No validation or filtering** has been performed on the extracted URLs or IP addresses.
- Because the data has **not been filtered**, it may contain URLs promoting **obscene or objectionable content**.
- Use this data **with caution**, especially for tasks involving personal or sensitive information.

## Disclaimer

These data points are included solely for the purposes of:

- **Analyzing domain distributions**
- **IP metadata analysis**
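
For instance, a quick domain-distribution count over the `script_extraction` subset might look like the sketch below; both the repo id and the `url` column name are assumptions, so check the actual schema first:

```python
from collections import Counter
from urllib.parse import urlparse

from datasets import load_dataset

# Placeholder repo id and column name -- verify both against the real dataset.
ds = load_dataset("big-banyan-tree/BBT-CC19", name="script_extraction", split="train")

# Tally hostnames; for the full 10M+ rows, consider streaming instead of loading in full.
domains = Counter(urlparse(u).netloc for u in ds["url"])
print(domains.most_common(10))
```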