Add dataset documentation
README.md CHANGED
@@ -38,11 +38,9 @@ Original data source: https://zenodo.org/records/8196385/files/HDFS_v1.zip?downl
 
 ## Preprocessing
 
-
-
-
-3. Remove block IDs from parameter lists to prevent data leakage
-4. Add special tokens for event type separation
+We preprocess the logs using the Drain algorithm to extract structured fields and identify event types.
+We then encode the logs using a pretrained tokenizer and add special tokens to separate event types. This
+dataset should be immediately usable for training and testing models for log-based anomaly detection.
 
 ## Intended Uses
 
@@ -51,6 +49,8 @@ This dataset is designed for:
 - Evaluating log sequence prediction models
 - Benchmarking different approaches to log-based anomaly detection
 
+see [honicky/pythia-14m-hdfs-logs](https://huggingface.co/honicky/pythia-14m-hdfs-logs) for an example model.
+
 ## Citation
 
 If you use this dataset, please cite the original HDFS paper:
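For reference, the preprocessing described in the new README text corresponds to a pipeline roughly like the minimal sketch below. It assumes the `drain3` package for Drain parsing and a Hugging Face tokenizer; the tokenizer checkpoint, the `<event>` separator token, and the sample log messages are illustrative placeholders, not the dataset's actual build script.

```python
from drain3 import TemplateMiner
from transformers import AutoTokenizer

# Drain parser with its default configuration.
miner = TemplateMiner()

# Two illustrative HDFS-style log messages (headers already stripped).
lines = [
    "Receiving block blk_-1608999687919862906 src: /10.250.19.102:54106 dest: /10.250.19.102:50010",
    "Receiving block blk_7503483334202473044 src: /10.251.215.16:55695 dest: /10.251.215.16:50010",
]

# Drain assigns each message to an event cluster and mines a template in which
# variable fields such as block IDs are masked as "<*>".
for line in lines:
    result = miner.add_log_message(line)
event_id = result["cluster_id"]
template = result["template_mined"]

# Encode with a pretrained tokenizer, adding a special token to separate event types.
# The checkpoint and the "<event>" token are assumptions for this sketch.
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-14m")
tokenizer.add_special_tokens({"additional_special_tokens": ["<event>"]})

sequence = f"<event> E{event_id} {template}"
token_ids = tokenizer(sequence).input_ids

print(template)        # e.g. "Receiving block <*> src: <*> dest: <*>"
print(token_ids[:10])
```

As the new text notes, these steps are already applied in the published dataset; the sketch only documents how such fields could be produced.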