|
--- |
|
license: apache-2.0 |
|
task_categories: |
|
- text-classification |
|
language: |
|
- en |
|
size_categories: |
|
- 10K<n<100K |
|
tags: |
|
- phishing |
|
- url |
|
- html |
|
- text |
|
--- |
|
# Phishing Dataset |
|
|
|
Phishing datasets compiled from various resources for classification and phishing detection tasks. |
|
|
|
## Dataset Details |
|
|
|
All datasets have been preprocessed by removing null, empty, and duplicate records. Classes have also been balanced to avoid possible biases.

Every dataset shares the same two-column structure: `text` and `label`. The `text` field can contain samples of:
|
|
|
- URL |
|
- SMS messages |
|
- Email messages |
|
- HTML code |
|
|
|
Which data types appear depends on the individual dataset; the combined dataset contains all of them. In addition, every record is labeled as **1 (Phishing)** or **0 (Benign)**.
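For illustration, each record is just a text/label pair. The sample below is hypothetical and not taken from the dataset:

```python
# Hypothetical record: a URL sample labeled as phishing (1)
sample = {"text": "http://example-login-update.com/verify-account", "label": 1}
```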
|
|
|
### Source Data |
|
|
|
The datasets are a compilation of four sources, described below:
|
|
|
- [Mail dataset](https://www.kaggle.com/datasets/subhajournal/phishingemails) containing the body text of emails that can be used to detect phishing
through text analysis and classification with machine learning. It contains over 18,000 emails
generated by Enron Corporation employees.
|
|
|
- [SMS message dataset](https://data.mendeley.com/datasets/f45bkkt8pr/1) of 5,971 text messages: 489 Spam messages, 638 Smishing messages
and 4,844 Ham messages. The dataset contains attributes extracted from malicious messages that can be used
to classify messages as malicious or legitimate. The data was collected by converting images obtained from
the Internet into text using Python code.
|
|
|
- [URL dataset](https://www.kaggle.com/datasets/harisudhan411/phishing-and-legitimate-urls) with more than 800,000 URLs, of which roughly 52% are legitimate domains and the rest are
phishing domains. It is a collection of data samples from various sources: the URLs were gathered from the
JPCERT website, existing Kaggle datasets, GitHub repositories where the URLs are updated once a year, and
some open-source databases, including Excel files.
|
|
|
- [Website dataset](https://data.mendeley.com/datasets/n96ncsr5g4/1) of 80,000 instances: 50,000 legitimate websites and 30,000 phishing websites. Each
instance contains the URL and the HTML page. Legitimate data were collected from two sources: 1) simple
keyword searches on the Google search engine, keeping the first 5 URLs of each search, with at most
10 pages collected per domain to keep the final collection diverse; 2) around 25,874 active URLs collected
from the Ebbu2017 Phishing Dataset repository. Three sources were used for the phishing data: PhishTank,
OpenPhish and PhishRepo.
|
|
|
> Note that, in the case of the website dataset, it was not feasible to include all 80,000 samples because of the heavy processing required.
> Only the first 30,000 samples were scanned, and of those only pages smaller than 100 KB were kept. This makes the website dataset
> easier to use if you do not have powerful hardware.
|
|
|
### Combined dataset |
|
|
|
The combined dataset is the one used to train BERT for phishing detection. Note, however, that this repository contains
two datasets named **combined**:
|
|
|
- combined full |
|
- combined reduced |
|
|
|
Combined datasets owe their name to the fact that they combine all the data sources mentioned in the previous section. However, there is a notable difference between them: |
|
|
|
- The full combined dataset contains the 800,000+ URLs of the URL dataset. |
|
- The reduced combined dataset reduces the URL samples by 95% in order to keep a more balanced combination of data. |
|
|
|
Why was that reduction made? Keeping every URL sample would make URLs about 97% of the combined data, with emails, SMS and websites making up just 3%.
Such an imbalance could bias the model, leave the other data types with no real representation, and fail to reflect the environments in which the model is deployed;
the model could simply learn to ignore them. In fact, a test performed on the full combined dataset showed poor phishing-classification results with BERT.
The reduced combined dataset is therefore recommended; the full combined dataset is included for experimentation only.
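As a rough sketch of how such a reduction could be reproduced with pandas (this is only an illustration, not the exact script used to build the dataset; the `source` column and the file name are assumptions), it might look like this:

```python
import pandas as pd

# Assumed setup: the full combined data with an extra "source" column
# marking where each sample came from (not part of the released dataset).
df = pd.read_csv("combined_full.csv")

urls = df[df["source"] == "url"]
others = df[df["source"] != "url"]

# Drop 95% of the URL samples, then shuffle the remaining data
urls_reduced = urls.sample(frac=0.05, random_state=42)
combined_reduced = pd.concat([others, urls_reduced]).sample(frac=1, random_state=42)
```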
|
|
|
#### Processing combined reduced dataset |
|
|
|
Primarily, this dataset is intended to be used in conjunction with the BERT language model. Therefore, it has
not been subjected to the traditional preprocessing usually applied in NLP tasks such as text classification.
|
|
|
_You may be wondering, is stemming, lemmatization, stop word removal, etc., necessary to improve the performance of BERT?_ |
|
|
|
In general, **NO**. Preprocessing will not change the output predictions. In fact, removing stop words (which
are considered noise in conventional text representations such as bag-of-words or TF-IDF) can and probably will
worsen the predictions of your BERT model. Because BERT uses the self-attention mechanism, these "stop words"
are valuable information for it. The same goes for punctuation: a question mark can certainly change the
overall meaning of a sentence. Removing stop words and punctuation marks would therefore only remove
context that BERT could have used to produce better results.
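As a small illustration of why that context survives (assuming the `transformers` library and the `bert-base-uncased` checkpoint, neither of which is mandated by this dataset), BERT's tokenizer keeps stop words and punctuation as tokens:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Stop words ("is", "your", "it") and punctuation ("?", "!") survive tokenization
# and feed directly into BERT's self-attention layers.
print(tokenizer.tokenize("Is your account suspended? Verify it now!"))
# expected output (roughly): ['is', 'your', 'account', 'suspended', '?', 'verify', 'it', 'now', '!']
```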
|
|
|
However, if you plan to use this dataset with another type of model, traditional NLP preprocessing may be worth
considering. That is left to the discretion of whoever employs this dataset.
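If you do take the classical route, a minimal sketch of such a pipeline (using scikit-learn purely as an example of a non-BERT model where stop-word removal is conventional; `texts` and `labels` are assumed to come from the loaded dataset) might look like this:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# TF-IDF features with English stop words removed, fed to a linear classifier
model = make_pipeline(
    TfidfVectorizer(stop_words="english", lowercase=True),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)
```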
|
|
|
For more information check these links: |
|
|
|
- https://stackoverflow.com/a/70700145 |
|
- https://datascience.stackexchange.com/a/113366 |
|
|
|
### How to use them |
|
|
|
You can easily load any of these datasets by passing its name as the configuration in the following code:
|
|
|
```python |
|
from datasets import load_dataset |
|
|
|
dataset = load_dataset("ealvaradob/phishing-dataset", "<desired_dataset>", trust_remote_code=True) |
|
``` |
|
|
|
For example, if you want to load the combined reduced dataset, you can use:
|
|
|
```python |
|
dataset = load_dataset("ealvaradob/phishing-dataset", "combined_reduced", trust_remote_code=True) |
|
``` |
|
|
|
Due to the implementation of the datasets library, executing this code generates only a training split that
contains the entire downloaded dataset. If you want to separate it into training and test sets, you can run the following code:
|
|
|
```python |
|
from datasets import Dataset
from sklearn.model_selection import train_test_split

# Convert the single "train" split to a pandas DataFrame
df = dataset['train'].to_pandas()

# Hold out 20% of the samples as a test set
train, test = train_test_split(df, test_size=0.2, random_state=42)

# Convert both splits back to Hugging Face Dataset objects
train, test = Dataset.from_pandas(train, preserve_index=False), Dataset.from_pandas(test, preserve_index=False)
|
``` |
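If you prefer to keep both splits in a single object, you can wrap them in a `DatasetDict`. For a classification task like this one, passing `stratify=df['label']` to `train_test_split` is also a reasonable way to preserve the class balance in both splits:

```python
from datasets import DatasetDict

# Optional: bundle the splits together so they travel as one object
dataset_splits = DatasetDict({"train": train, "test": test})
print(dataset_splits)
```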