Data preview (sample rows; `timestamp` is a string, all other columns are float64):

| timestamp | cpu_usage | mem_usage | disk_io | net_in | net_out |
|---|---|---|---|---|---|
| 2025-07-02T11:00:00Z | 13.23 | 40.98 | 36.08 | 1.05 | 0.92 |
| 2025-07-02T11:00:05Z | 12.35 | 42.58 | 29.12 | 1.18 | 0.79 |
| 2025-07-02T11:00:10Z | 18.93 | 36.14 | 29.03 | 1.18 | 0.88 |
| 2025-07-02T11:00:15Z | 19.55 | 42.69 | 21.42 | 0.74 | 1.06 |
| 2025-07-02T11:00:20Z | 18.85 | 36.41 | 27.69 | 1.35 | 1.26 |
| 2025-07-02T11:00:25Z | 11.6 | 40.48 | 22.14 | 0.92 | 0.89 |
| 2025-07-02T11:00:30Z | 12.02 | 36.76 | 34.73 | 0.8 | 1.28 |
| 2025-07-02T11:00:35Z | 14.41 | 42.55 | 28.06 | 1.24 | 1.31 |
| 2025-07-02T11:00:40Z | 10.6 | 39.03 | 26.67 | 1.1 | 0.72 |
| 2025-07-02T11:00:45Z | 12.09 | 36.84 | 30.89 | 0.71 | 1.13 |
| 2025-07-02T11:00:50Z | 18.5 | 37.19 | 25.29 | 0.97 | 1.11 |
| 2025-07-02T11:00:55Z | 11.15 | 39.57 | 24.6 | 0.53 | 0.65 |
| 2025-07-02T11:01:00Z | 14.41 | 38.13 | 23.91 | 0.72 | 1.3 |
| 2025-07-02T11:01:05Z | 19.15 | 38.87 | 38.2 | 0.52 | 0.76 |
| 2025-07-02T11:01:10Z | 12 | 35.18 | 35.54 | 1.03 | 1.13 |
| 2025-07-02T11:01:15Z | 10.55 | 37.64 | 29.19 | 1.48 | 1.34 |
| 2025-07-02T11:01:20Z | 10.83 | 36.99 | 36.4 | 0.66 | 0.54 |
| 2025-07-02T11:01:25Z | 11.11 | 37.53 | 37.54 | 0.89 | 1.22 |
| 2025-07-02T11:01:30Z | 10.79 | 36.61 | 38.49 | 1.39 | 0.68 |
| 2025-07-02T11:01:35Z | 14.27 | 36.81 | 34.64 | 1.14 | 0.73 |
| 2025-07-02T11:01:40Z | 10.22 | 43.79 | 34.25 | 1.36 | 1.38 |
| 2025-07-02T11:01:45Z | 19.2 | 36.66 | 30.82 | 1.07 | 0.77 |
| 2025-07-02T11:01:50Z | 16.69 | 38.29 | 34.77 | 0.83 | 1.41 |
| 2025-07-02T11:01:55Z | 16.41 | 37.04 | 30.57 | 1.22 | 1.24 |
| 2025-07-02T11:02:00Z | 12.83 | 39.59 | 30 | 1.34 | 0.71 |
| 2025-07-02T11:02:05Z | 19.8 | 37.65 | 21.28 | 1.27 | 0.73 |
| 2025-07-02T11:02:10Z | 14.58 | 40.39 | 31.15 | 1.44 | 1.3 |
| 2025-07-02T11:02:15Z | 12.76 | 39.97 | 24.25 | 1.14 | 0.66 |
| 2025-07-02T11:02:20Z | 14.62 | 38.9 | 39.44 | 0.98 | 0.81 |
| 2025-07-02T11:02:25Z | 15.66 | 42.65 | 34.65 | 0.88 | 0.55 |
| 2025-07-02T11:02:30Z | 11.15 | 39.22 | 23.2 | 0.65 | 1.4 |
| 2025-07-02T11:02:35Z | 12.13 | 39.86 | 29.18 | 0.67 | 0.69 |
| 2025-07-02T11:02:40Z | 16.72 | 43.94 | 26.39 | 0.57 | 0.57 |
| 2025-07-02T11:02:45Z | 12.38 | 41.36 | 22.81 | 0.52 | 1.07 |
| 2025-07-02T11:02:50Z | 17.83 | 43.9 | 35.07 | 0.81 | 1.27 |
| 2025-07-02T11:02:55Z | 18.47 | 35.88 | 38.23 | 0.74 | 1.18 |
| 2025-07-02T11:03:00Z | 12.37 | 42.08 | 31.73 | 1 | 1.3 |
| 2025-07-02T11:03:05Z | 12.54 | 38.07 | 25.83 | 1.24 | 1.45 |
| 2025-07-02T11:03:10Z | 15.11 | 36.43 | 20.26 | 0.66 | 0.97 |
| 2025-07-02T11:03:15Z | 15.67 | 38.66 | 27.78 | 0.95 | 1.08 |
| 2025-07-02T11:03:20Z | 20 | 42.61 | 24.17 | 0.5 | 0.51 |
| 2025-07-02T11:03:25Z | 28.33 | 44.31 | 28.91 | 1.05 | 1.03 |
| 2025-07-02T11:03:30Z | 36.67 | 41.12 | 22.86 | 1.02 | 0.79 |
| 2025-07-02T11:03:35Z | 45 | 37.22 | 24.07 | 0.92 | 1.36 |
| 2025-07-02T11:03:40Z | 53.33 | 38.51 | 37.94 | 1.18 | 0.71 |
| 2025-07-02T11:03:45Z | 61.67 | 37.19 | 31.92 | 0.85 | 0.58 |
| 2025-07-02T11:03:50Z | 70 | 36.06 | 37.36 | 1.18 | 1.22 |
| 2025-07-02T11:03:55Z | 78.33 | 43.25 | 27.16 | 0.74 | 1.36 |
| 2025-07-02T11:04:00Z | 86.67 | 37.63 | 34.17 | 0.9 | 1.29 |
| 2025-07-02T11:04:05Z | 95 | 44.85 | 22.33 | 1.04 | 0.61 |
| 2025-07-02T11:04:10Z | 96.79 | 38.54 | 26.51 | 1.18 | 1.36 |
| 2025-07-02T11:04:15Z | 97.15 | 41.18 | 34.76 | 0.64 | 1.21 |
| 2025-07-02T11:04:20Z | 98.1 | 39.47 | 26.24 | 0.9 | 1.07 |
| 2025-07-02T11:04:25Z | 98.74 | 42.55 | 38.29 | 1.2 | 1.37 |
| 2025-07-02T11:04:30Z | 99.72 | 38.16 | 28.26 | 0.64 | 1.04 |
| 2025-07-02T11:04:35Z | 96.79 | 41.99 | 34.2 | 0.93 | 0.92 |
| 2025-07-02T11:04:40Z | 99.31 | 41.79 | 23.47 | 1.16 | 1.47 |
| 2025-07-02T11:04:45Z | 99.8 | 37.42 | 37.38 | 1.01 | 1.2 |
| 2025-07-02T11:04:50Z | 96.38 | 38.83 | 31.2 | 1.45 | 0.66 |
| 2025-07-02T11:04:55Z | 99.92 | 37.51 | 24.6 | 1.31 | 0.77 |
| 2025-07-02T11:05:00Z | 99.07 | 44.48 | 20.71 | 1.41 | 0.85 |
| 2025-07-02T11:05:05Z | 95.63 | 41.15 | 26.96 | 0.97 | 0.65 |
| 2025-07-02T11:05:10Z | 97.34 | 43.47 | 31.73 | 1.01 | 0.57 |
| 2025-07-02T11:05:15Z | 96.37 | 38.97 | 23.57 | 0.52 | 0.68 |
| 2025-07-02T11:05:20Z | 96.92 | 39.49 | 33.15 | 1.45 | 1.3 |
| 2025-07-02T11:05:25Z | 96.08 | 37.24 | 29.89 | 0.96 | 0.6 |
| 2025-07-02T11:05:30Z | 99.47 | 42.32 | 32.25 | 1.23 | 1.23 |
| 2025-07-02T11:05:35Z | 96.26 | 44.39 | 20.54 | 1.43 | 0.74 |
| 2025-07-02T11:05:40Z | 98.69 | 44.43 | 21.07 | 0.92 | 1.13 |
| 2025-07-02T11:05:45Z | 97.53 | 39.04 | 20.91 | 0.71 | 1.17 |
| 2025-07-02T11:05:50Z | 95 | 37.88 | 27.36 | 1.44 | 1.21 |
| 2025-07-02T11:05:55Z | 96.04 | 38.59 | 30.14 | 0.85 | 0.83 |
| 2025-07-02T11:06:00Z | 99.88 | 42.68 | 26.54 | 0.68 | 0.95 |
| 2025-07-02T11:06:05Z | 99.14 | 35.3 | 23.47 | 1.33 | 0.93 |
| 2025-07-02T11:06:10Z | 95.87 | 37.61 | 34.16 | 0.56 | 1.03 |
| 2025-07-02T11:06:15Z | 99.68 | 35.75 | 36.06 | 0.96 | 1.23 |
| 2025-07-02T11:06:20Z | 97.44 | 35.68 | 27.15 | 1.01 | 1.4 |
| 2025-07-02T11:06:25Z | 96.98 | 35.91 | 32.24 | 0.89 | 0.99 |
| 2025-07-02T11:06:30Z | 98.63 | 35.88 | 25.63 | 0.69 | 1.11 |
| 2025-07-02T11:06:35Z | 95.88 | 39.86 | 34.56 | 1.5 | 0.68 |
| 2025-07-02T11:06:40Z | 97.16 | 41.99 | 34.07 | 0.62 | 1.35 |
| 2025-07-02T11:06:45Z | 95 | 44.55 | 30.36 | 0.57 | 1.11 |
| 2025-07-02T11:06:50Z | 96.61 | 43 | 24.07 | 1.36 | 0.74 |
| 2025-07-02T11:06:55Z | 98.39 | 35.38 | 32.72 | 1.28 | 0.67 |
| 2025-07-02T11:07:00Z | 98.49 | 41.98 | 22.56 | 1.25 | 1.11 |
| 2025-07-02T11:07:05Z | 96.4 | 41.63 | 29.1 | 0.69 | 0.67 |
| 2025-07-02T11:07:10Z | 97.36 | 42.05 | 20.65 | 1.29 | 0.64 |
| 2025-07-02T11:07:15Z | 96.96 | 40.43 | 39.76 | 0.79 | 0.94 |
| 2025-07-02T11:07:20Z | 97.54 | 39.63 | 33.89 | 1.13 | 0.56 |
| 2025-07-02T11:07:25Z | 97.62 | 41.58 | 34.24 | 1.43 | 0.99 |
| 2025-08-23T10:00:00Z | 13.43 | 40.16 | 22.58 | 0.8 | 1.4 |
| 2025-08-23T10:00:05Z | 15.73 | 39.81 | 32.45 | 0.72 | 1.07 |
| 2025-08-23T10:00:10Z | 12.92 | 42.98 | 33.95 | 0.55 | 1.46 |
| 2025-08-23T10:00:15Z | 15.72 | 39.67 | 20.99 | 0.78 | 1.19 |
| 2025-08-23T10:00:20Z | 11.19 | 41.04 | 24.08 | 0.93 | 1.45 |
| 2025-08-23T10:00:25Z | 15.13 | 40.71 | 23.46 | 0.63 | 1.45 |
| 2025-08-23T10:00:30Z | 16.92 | 41.85 | 32.42 | 0.53 | 1.24 |
| 2025-08-23T10:00:35Z | 17.01 | 39.92 | 17.27 | 0.84 | 0.99 |
| 2025-08-23T10:00:40Z | 15.28 | 40.18 | 34.83 | 0.6 | 1.05 |
| 2025-08-23T10:00:45Z | 14.23 | 42.82 | 27.47 | 0.76 | 1.1 |
# CloudAnoBench
**Paper:** Towards Generalizable Context-aware Anomaly Detection: A Large-scale Benchmark in Cloud Environments

**Project Page:** https://jayzou3773.github.io/cloudanobench-agent/
CloudAnoBench is a large-scale benchmark for context-aware anomaly detection in cloud environments, jointly incorporating both metrics and logs to more faithfully reflect real-world conditions. It consists of 1,252 labeled cases spanning 28 anomalous scenarios and 16 deceptive normal scenarios (approximately 200K lines), with explicit annotations for anomaly type and scenario. By including deceptive normal cases where anomalous-looking metric patterns are explained by benign log events, the benchmark introduces higher ambiguity and difficulty compared to prior datasets, thereby providing a rigorous testbed for evaluating both anomaly detection and scenario identification.
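To make the deceptive-normal idea concrete, consider a sustained CPU spike that is only benign if the logs show a scheduled maintenance task. The sketch below illustrates the kind of context-aware decision rule this setup is designed to stress; it is a hypothetical baseline, not part of the benchmark tooling. The 90% threshold, the 20% duration fraction, and the keyword list are illustrative assumptions; only the column name comes from the dataset's CSV schema.

```python
import pandas as pd

# Illustrative assumptions, NOT benchmark parameters.
BENIGN_MARKERS = ("backup", "logrotate", "compaction")

def label_case(metrics: pd.DataFrame, log_lines: list) -> str:
    """Combine a simple metric check with log context to label one case."""
    # Sustained CPU spike: >90% usage in more than 20% of the samples.
    cpu_spike = (metrics["cpu_usage"] > 90).mean() > 0.2
    # Benign context: any log line mentioning a known maintenance task.
    benign = any(m in line.lower() for line in log_lines for m in BENIGN_MARKERS)
    if cpu_spike and benign:
        return "deceptive_normal"  # metrics look anomalous, logs explain them
    if cpu_spike:
        return "anomaly"
    return "normal"
```

A metrics-only detector would label both branches of the `cpu_spike` case identically; the benchmark's deceptive normal cases are built to punish exactly that.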
## Dataset Structure
The dataset is organized as follows:
```
cloudanobench/
├── anom_dataset/        # Anomaly scenarios (excluding Malicious Events)
│   ├── scenario_1/      # CPU Hog Process
│   ├── scenario_2/      # Memory Leak
│   └── ...
├── mali_dataset/        # Malicious Events scenarios
│   ├── scenario_1/      # Cryptomining Malware
│   ├── scenario_2/      # DDoS Botnet Agent
│   └── ...
└── norm_dataset/        # Deceptive Normal scenarios
    ├── scenario_1/      # Nightly Full Backup
    ├── scenario_2/      # Log Rotation & Compression
    └── ...
```
Each scenario directory contains:

- `*.csv` files: structured performance metrics and log data
- `*.log` files: raw log files
## Anomaly Scenarios (anom_dataset)

The anomaly dataset contains 17 different types of system anomaly scenarios, organized into the following categories:
**Resource Exhaustion & Bottlenecks**
- Scenario 1: CPU Hog Process - Resource exhaustion caused by CPU-intensive processes
- Scenario 2: Memory Leak - Memory leakage issues
- Scenario 3: Disk I/O Bottleneck - Disk I/O performance bottlenecks
- Scenario 4: Noisy Neighbor - Noisy neighbor effects in shared environments
- Scenario 5: Handle/Thread Exhaustion - Handle or thread pool exhaustion
**Network Anomalies**
- Scenario 6: High Network Latency / Packet Loss - High network latency and packet loss
- Scenario 7: Bandwidth Saturation - Network bandwidth saturation
- Scenario 8: DNS Resolution Failure - DNS resolution failures
- Scenario 9: Firewall/Security Group Misconfiguration - Firewall or security group misconfigurations
**Software & Application Anomalies**
- Scenario 10: Application Crash Loop - Application crash-restart loops
- Scenario 11: Deadlock / Livelock - Deadlock and livelock situations
- Scenario 12: External Dependency Failure - External service dependency failures
- Scenario 13: Application Misconfiguration - Application configuration errors
- Scenario 14: Garbage Collection (GC) Storm - Excessive garbage collection activity
**Other Complex & Subtle Anomalies**
- Scenario 15: Time Skew (Clock Drift) - System clock drift and time synchronization issues
- Scenario 16: Kernel / Driver Bug - Kernel or device driver bugs
- Scenario 17: (Additional anomaly scenario)
## Malicious Events Scenarios Dataset (mali_dataset)
The malicious activity dataset contains 12 different types of malicious event scenarios:
**Malicious Events**
- Scenario 1: Cryptomining Malware - Cryptocurrency mining malware activities
- Scenario 2: DDoS Botnet Agent - DDoS botnet agent operations
- Scenario 3: Ransomware - Ransomware infection and encryption activities
- Scenario 4: Data Exfiltration - Unauthorized data theft and transfer
- Scenario 5: Spam Bot - Spam email distribution bot activities
- Scenario 6: Rootkit / Backdoor - Rootkit installation and backdoor access
- Scenario 7: Brute-force / Credential Stuffing Attack - Password brute-force and credential stuffing attacks
- Scenario 8: Web Shell Beaconing & Command Execution - Web shell communication and remote command execution
- Scenario 9: Password Cracking - Password cracking and hash breaking activities
- Scenario 10: Log Deletion / Tampering - System log deletion and tampering
- Scenario 11: Reverse Shell - Reverse shell establishment and communication
- Scenario 12: Application-Layer DDoS Attack - Application-layer DDoS attacks
## Deceptive Normal Scenarios Dataset (norm_dataset)
The normal behavior dataset contains 16 different normal business operation scenarios:
**Normal Business Operations**
- Scenario 1: Nightly Full Backup - Scheduled full system backup operations
- Scenario 2: Log Rotation & Compression - Automated log file rotation and compression
- Scenario 3: Periodic Data Aggregation - Regular data aggregation and summarization tasks
- Scenario 4: Filesystem Cleanup - Routine filesystem maintenance and cleanup
- Scenario 5: SSL Certificate Renewal - Automated SSL/TLS certificate renewal processes
- Scenario 6: Blue-Green Deployment - Blue-green application deployment procedures
- Scenario 7: Live Streaming Event Start - Live streaming service initialization
- Scenario 8: Search Engine Aggressive Crawl - Intensive web crawling operations
- Scenario 9: End-of-Month Financial Closing - Month-end financial processing workflows
- Scenario 10: On-Demand Full Report Generation - Large-scale report generation tasks
- Scenario 11: ETL Pipeline Execution - Extract, Transform, Load data pipeline operations
- Scenario 12: On-Demand Video Transcoding - Video transcoding and processing tasks
- Scenario 13: Database Compaction/Vacuum - Database maintenance and optimization
- Scenario 14: Cache Eviction Storm - Cache invalidation and eviction processes
- Scenario 15: Connection Pool Scaling - Dynamic connection pool scaling operations
- Scenario 16: File Integrity Monitoring Scan - System-wide file integrity verification
## Usage

### Data Loading Example
```python
import os

import pandas as pd

def load_scenario_data(dataset_type, scenario_id):
    """
    Load data for a specific scenario.

    Args:
        dataset_type: 'anom', 'mali', or 'norm'
        scenario_id: Scenario number (1-17 for anom, 1-12 for mali, 1-16 for norm)
    """
    base_path = f"{dataset_type}_dataset/scenario_{scenario_id}"
    csv_files = []
    log_files = []
    # Sort for a deterministic order across platforms.
    for file in sorted(os.listdir(base_path)):
        if file.endswith('.csv'):
            csv_files.append(pd.read_csv(os.path.join(base_path, file)))
        elif file.endswith('.log'):
            log_files.append(os.path.join(base_path, file))
    return csv_files, log_files

# Example: load CPU Hog Process anomaly data
csv_data, log_files = load_scenario_data('anom', 1)
```
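The loader above assumes a local copy of the repository. One way to fetch it from the Hub first is `huggingface_hub.snapshot_download` (a sketch; the repo id is taken from this dataset's Hub path):

```python
import os

from huggingface_hub import snapshot_download

# Download the dataset repository (or reuse the local cache) and get its path.
local_root = snapshot_download(repo_id="jayzou3773/CloudAnoBench", repo_type="dataset")

# Change into it so the relative paths in load_scenario_data resolve.
os.chdir(local_root)
csv_data, log_files = load_scenario_data('anom', 1)
```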
### Batch Data Processing
```python
# Batch load all anomaly scenarios
anomaly_data = {}
for i in range(1, 18):  # scenarios 1-17
    anomaly_data[f'scenario_{i}'] = load_scenario_data('anom', i)

# Batch load all malicious activity scenarios
malicious_data = {}
for i in range(1, 13):  # scenarios 1-12
    malicious_data[f'scenario_{i}'] = load_scenario_data('mali', i)

# Batch load all normal behavior scenarios
normal_data = {}
for i in range(1, 17):  # scenarios 1-16
    normal_data[f'scenario_{i}'] = load_scenario_data('norm', i)
```
## File Naming Convention
- Anomaly data: `anom_{scenario_id}_{case_number}.{csv|log}`
- Malicious data: `mali_{scenario_id}_{case_number}.{csv|log}`
- Normal data: `norm_{scenario_id}_{case_number}.{csv|log}`

Where:

- `scenario_id`: scenario number
- `case_number`: case number within the scenario
- Extension: `.csv` for structured data, `.log` for raw logs
## Citation

```bibtex
@misc{zou2025generalizablecontextawareanomalydetection,
      title={Towards Generalizable Context-aware Anomaly Detection: A Large-scale Benchmark in Cloud Environments},
      author={Xinkai Zou and Xuan Jiang and Ruikai Huang and Haoze He and Parv Kapoor and Hongrui Wu and Yibo Wang and Jian Sha and Xiongbo Shi and Zixun Huang and Jinhua Zhao},
      year={2025},
      eprint={2508.01844},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2508.01844},
}
```