Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown below.
The dataset generation failed
Error code:   DatasetGenerationError
Exception:    ArrowNotImplementedError
Message:      Cannot write struct type 'model_kwargs' with no child field to Parquet. Consider adding a dummy child field.
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1870, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 620, in write_table
                  self._build_writer(inferred_schema=pa_table.schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 441, in _build_writer
                  self.pa_writer = self._WRITER_CLASS(self.stream, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 1010, in __init__
                  self.writer = _parquet.ParquetWriter(
                File "pyarrow/_parquet.pyx", line 2157, in pyarrow._parquet.ParquetWriter.__cinit__
                File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
              pyarrow.lib.ArrowNotImplementedError: Cannot write struct type 'model_kwargs' with no child field to Parquet. Consider adding a dummy child field.
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1886, in _prepare_split_single
                  num_examples, num_bytes = writer.finalize()
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 639, in finalize
                  self._build_writer(self.schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 441, in _build_writer
                  self.pa_writer = self._WRITER_CLASS(self.stream, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 1010, in __init__
                  self.writer = _parquet.ParquetWriter(
                File "pyarrow/_parquet.pyx", line 2157, in pyarrow._parquet.ParquetWriter.__cinit__
                File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
              pyarrow.lib.ArrowNotImplementedError: Cannot write struct type 'model_kwargs' with no child field to Parquet. Consider adding a dummy child field.
              
              The above exception was the direct cause of the following exception:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1417, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1049, in convert_to_parquet
                  builder.download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 924, in download_and_prepare
                  self._download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1000, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1741, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1897, in _prepare_split_single
                  raise DatasetGenerationError("An error occurred while generating the dataset") from e
              datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
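
The failure can be reproduced with pyarrow alone: a struct column whose type has no child fields (here coming from config entries that are always empty dicts, such as "model_kwargs": {}) cannot be written to Parquet. Below is a minimal sketch assuming only that pyarrow is installed; the column name mirrors the one in the error, and the two workarounds are illustrative options, not the fix used for this dataset.

import json
import pyarrow as pa
import pyarrow.parquet as pq

# A struct column with no child fields, as produced by an always-empty dict
# like "model_kwargs": {}. This is the shape Parquet cannot represent.
table = pa.table({"model_kwargs": pa.array([{}, {}], type=pa.struct([]))})
try:
    pq.write_table(table, "empty_struct.parquet")
except pa.ArrowNotImplementedError as exc:
    print(exc)  # Cannot write struct type 'model_kwargs' with no child field ...

# Workaround A (what the error message suggests): give the struct a dummy child field.
fixed_a = pa.table({
    "model_kwargs": pa.array([{"_dummy": None}, {"_dummy": None}],
                             type=pa.struct([("_dummy", pa.bool_())]))
})
pq.write_table(fixed_a, "with_dummy_child.parquet")

# Workaround B: store the dict as a JSON string instead of a struct.
fixed_b = pa.table({"model_kwargs": [json.dumps({}), json.dumps({})]})
pq.write_table(fixed_b, "as_json_string.parquet")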


Columns and types: config (dict), report (dict), name (string), backend (dict), scenario (dict), launcher (dict), environment (dict), print_report (bool), log_report (bool), overall (dict), warmup (dict), train (dict)
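
The report cells below carry raw per-step latencies alongside derived statistics. As a quick sanity check (a sketch, not the benchmark library's own code), the mean and p90 of the first fill-mask row can be recovered from its "values" list with the standard library and NumPy's default linear-interpolation percentile:

import statistics
import numpy as np

# Raw "overall.latency.values" from the first fill-mask report row below.
values = [0.6796462880000149, 0.5551654500000041, 0.5572487570000249,
          0.555526766000014, 0.5655911999999716]

print(statistics.mean(values))    # ~0.58264 s, matches the row's "mean" field
print(np.percentile(values, 90))  # ~0.63402 s, matches the row's "p90" field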
{ "name": "cpu_training_transformers_fill-mask_google-bert/bert-base-uncased", "backend": { "name": "pytorch", "version": "2.5.1+cpu", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "fill-mask", "library": "transformers", "model_type": "bert", "model": "google-bert/bert-base-uncased", "processor": "google-bert/bert-base-uncased", "device": "cpu", "device_ids": null, "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": true }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": null, "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 4, "cpu_ram_mb": 16757.342208, "system": "Linux", "machine": "x86_64", "platform": "Linux-6.5.0-1025-azure-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.15", "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": "31aa6620675bda1ecd6e40a22ecaa03106d279d8", "transformers_version": "4.46.3", "transformers_commit": null, "accelerate_version": "1.1.1", "accelerate_commit": null, "diffusers_version": "0.31.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.11", "timm_commit": null, "peft_version": null, "peft_commit": null }, "print_report": true, "log_report": true }
{ "overall": { "memory": { "unit": "MB", "max_ram": 2490.195968, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.6796462880000149, 0.5551654500000041, 0.5572487570000249, 0.555526766000014, 0.5655911999999716 ], "count": 5, "total": 2.9131784610000295, "mean": 0.5826356922000059, "p50": 0.5572487570000249, "p90": 0.6340242527999976, "p95": 0.6568352704000062, "p99": 0.6750840844800132, "stdev": 0.04865300602618744, "stdev_": 8.350502153837487 }, "throughput": { "unit": "samples/s", "value": 17.163383798614284 }, "energy": { "unit": "kWh", "cpu": 0.00012045345841666676, "ram": 0.000005034762742361399, "gpu": 0, "total": 0.00012548822115902817 }, "efficiency": { "unit": "samples/kWh", "value": 79688.75411284412 } }, "warmup": { "memory": { "unit": "MB", "max_ram": 2490.195968, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.6796462880000149, 0.5551654500000041 ], "count": 2, "total": 1.234811738000019, "mean": 0.6174058690000095, "p50": 0.6174058690000095, "p90": 0.6671982042000139, "p95": 0.6734222461000143, "p99": 0.6784014796200148, "stdev": 0.06224041900000543, "stdev_": 10.080956810600787 }, "throughput": { "unit": "samples/s", "value": 6.478720402315998 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 2490.195968, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.5572487570000249, 0.555526766000014, 0.5655911999999716 ], "count": 3, "total": 1.6783667230000106, "mean": 0.5594555743333368, "p50": 0.5572487570000249, "p90": 0.5639227113999823, "p95": 0.564756955699977, "p99": 0.5654243511399727, "stdev": 0.004395129121488192, "stdev_": 0.7856082454313826 }, "throughput": { "unit": "samples/s", "value": 10.724712158154418 }, "energy": null, "efficiency": null } }
name: cpu_training_transformers_fill-mask_google-bert/bert-base-uncased
{ "name": "pytorch", "version": "2.5.1+cpu", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "fill-mask", "library": "transformers", "model_type": "bert", "model": "google-bert/bert-base-uncased", "processor": "google-bert/bert-base-uncased", "device": "cpu", "device_ids": null, "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": true }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": null, "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }
{ "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 4, "cpu_ram_mb": 16757.342208, "system": "Linux", "machine": "x86_64", "platform": "Linux-6.5.0-1025-azure-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.15", "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": "31aa6620675bda1ecd6e40a22ecaa03106d279d8", "transformers_version": "4.46.3", "transformers_commit": null, "accelerate_version": "1.1.1", "accelerate_commit": null, "diffusers_version": "0.31.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.11", "timm_commit": null, "peft_version": null, "peft_commit": null }
print_report: true
log_report: true
{ "memory": { "unit": "MB", "max_ram": 2490.195968, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.6796462880000149, 0.5551654500000041, 0.5572487570000249, 0.555526766000014, 0.5655911999999716 ], "count": 5, "total": 2.9131784610000295, "mean": 0.5826356922000059, "p50": 0.5572487570000249, "p90": 0.6340242527999976, "p95": 0.6568352704000062, "p99": 0.6750840844800132, "stdev": 0.04865300602618744, "stdev_": 8.350502153837487 }, "throughput": { "unit": "samples/s", "value": 17.163383798614284 }, "energy": { "unit": "kWh", "cpu": 0.00012045345841666676, "ram": 0.000005034762742361399, "gpu": 0, "total": 0.00012548822115902817 }, "efficiency": { "unit": "samples/kWh", "value": 79688.75411284412 } }
{ "memory": { "unit": "MB", "max_ram": 2490.195968, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.6796462880000149, 0.5551654500000041 ], "count": 2, "total": 1.234811738000019, "mean": 0.6174058690000095, "p50": 0.6174058690000095, "p90": 0.6671982042000139, "p95": 0.6734222461000143, "p99": 0.6784014796200148, "stdev": 0.06224041900000543, "stdev_": 10.080956810600787 }, "throughput": { "unit": "samples/s", "value": 6.478720402315998 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 2490.195968, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.5572487570000249, 0.555526766000014, 0.5655911999999716 ], "count": 3, "total": 1.6783667230000106, "mean": 0.5594555743333368, "p50": 0.5572487570000249, "p90": 0.5639227113999823, "p95": 0.564756955699977, "p99": 0.5654243511399727, "stdev": 0.004395129121488192, "stdev_": 0.7856082454313826 }, "throughput": { "unit": "samples/s", "value": 10.724712158154418 }, "energy": null, "efficiency": null }
{ "name": "cpu_training_transformers_fill-mask_google-bert/bert-base-uncased", "backend": { "name": "pytorch", "version": "2.3.0+cpu", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "fill-mask", "model": "google-bert/bert-base-uncased", "library": "transformers", "device": "cpu", "device_ids": null, "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "hub_kwargs": { "revision": "main", "force_download": false, "local_files_only": false, "trust_remote_code": false }, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": "error", "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 4, "cpu_ram_mb": 16757.346304, "system": "Linux", "machine": "x86_64", "platform": "Linux-6.5.0-1018-azure-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.14", "optimum_benchmark_version": "0.2.0", "optimum_benchmark_commit": "2e77e02d1fd3ab0d2e788c3d89c12299219a25e8", "transformers_version": "4.40.2", "transformers_commit": null, "accelerate_version": "0.30.0", "accelerate_commit": null, "diffusers_version": "0.27.2", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "0.9.16", "timm_commit": null, "peft_version": null, "peft_commit": null } }
{ "overall": { "memory": { "unit": "MB", "max_ram": 2488.782848, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "count": 5, "total": 2.738775005999969, "mean": 0.5477550011999938, "stdev": 0.03693447784258994, "p50": 0.5307143729999666, "p90": 0.5856752317999963, "p95": 0.6036043034000045, "p99": 0.6179475606800111, "values": [ 0.6215333750000127, 0.5307143729999666, 0.5270229210000252, 0.5276163199999928, 0.5318880169999716 ] }, "throughput": { "unit": "samples/s", "value": 18.25633719106628 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 2488.782848, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "count": 2, "total": 1.1522477479999793, "mean": 0.5761238739999897, "stdev": 0.045409501000023056, "p50": 0.5761238739999897, "p90": 0.612451474800008, "p95": 0.6169924249000104, "p99": 0.6206251849800123, "values": [ 0.6215333750000127, 0.5307143729999666 ] }, "throughput": { "unit": "samples/s", "value": 6.942951300088064 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 2488.782848, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "count": 3, "total": 1.5865272579999896, "mean": 0.5288424193333299, "stdev": 0.0021671455040491463, "p50": 0.5276163199999928, "p90": 0.5310336775999758, "p95": 0.5314608472999737, "p99": 0.531802583059972, "values": [ 0.5270229210000252, 0.5276163199999928, 0.5318880169999716 ] }, "throughput": { "unit": "samples/s", "value": 11.345534663357242 }, "energy": null, "efficiency": null } }
{ "name": "cpu_training_transformers_image-classification_google/vit-base-patch16-224", "backend": { "name": "pytorch", "version": "2.5.1+cpu", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "image-classification", "library": "transformers", "model_type": "vit", "model": "google/vit-base-patch16-224", "processor": "google/vit-base-patch16-224", "device": "cpu", "device_ids": null, "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": true }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": null, "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 4, "cpu_ram_mb": 16757.342208, "system": "Linux", "machine": "x86_64", "platform": "Linux-6.5.0-1025-azure-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.15", "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": "31aa6620675bda1ecd6e40a22ecaa03106d279d8", "transformers_version": "4.46.3", "transformers_commit": null, "accelerate_version": "1.1.1", "accelerate_commit": null, "diffusers_version": "0.31.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.11", "timm_commit": null, "peft_version": null, "peft_commit": null }, "print_report": true, "log_report": true }
{ "overall": { "memory": { "unit": "MB", "max_ram": 2575.83104, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 1.6783148810000057, 1.505849014999967, 1.5191561999999976, 1.5226754370000322, 1.5418639119999966 ], "count": 5, "total": 7.767859444999999, "mean": 1.5535718889999999, "p50": 1.5226754370000322, "p90": 1.623734493400002, "p95": 1.6510246872000038, "p99": 1.6728568422400054, "stdev": 0.06342616790710333, "stdev_": 4.082602701309777 }, "throughput": { "unit": "samples/s", "value": 6.43677970154106 }, "energy": { "unit": "kWh", "cpu": 0.0003138687434944438, "ram": 0.000013120288847596779, "gpu": 0, "total": 0.0003269890323420406 }, "efficiency": { "unit": "samples/kWh", "value": 30582.065485119063 } }, "warmup": { "memory": { "unit": "MB", "max_ram": 2575.83104, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 1.6783148810000057, 1.505849014999967 ], "count": 2, "total": 3.1841638959999727, "mean": 1.5920819479999864, "p50": 1.5920819479999864, "p90": 1.661068294400002, "p95": 1.6696915877000038, "p99": 1.6765902223400053, "stdev": 0.08623293300001933, "stdev_": 5.416362713511533 }, "throughput": { "unit": "samples/s", "value": 2.512433486872269 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 2575.83104, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 1.5191561999999976, 1.5226754370000322, 1.5418639119999966 ], "count": 3, "total": 4.583695549000026, "mean": 1.527898516333342, "p50": 1.5226754370000322, "p90": 1.5380262170000036, "p95": 1.5399450645000001, "p99": 1.5414801424999973, "stdev": 0.0099789934148444, "stdev_": 0.6531188628150536 }, "throughput": { "unit": "samples/s", "value": 3.9269623838622656 }, "energy": null, "efficiency": null } }
name: cpu_training_transformers_image-classification_google/vit-base-patch16-224
{ "name": "pytorch", "version": "2.5.1+cpu", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "image-classification", "library": "transformers", "model_type": "vit", "model": "google/vit-base-patch16-224", "processor": "google/vit-base-patch16-224", "device": "cpu", "device_ids": null, "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": true }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": null, "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }
{ "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 4, "cpu_ram_mb": 16757.342208, "system": "Linux", "machine": "x86_64", "platform": "Linux-6.5.0-1025-azure-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.15", "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": "31aa6620675bda1ecd6e40a22ecaa03106d279d8", "transformers_version": "4.46.3", "transformers_commit": null, "accelerate_version": "1.1.1", "accelerate_commit": null, "diffusers_version": "0.31.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.11", "timm_commit": null, "peft_version": null, "peft_commit": null }
print_report: true
log_report: true
{ "memory": { "unit": "MB", "max_ram": 2575.83104, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 1.6783148810000057, 1.505849014999967, 1.5191561999999976, 1.5226754370000322, 1.5418639119999966 ], "count": 5, "total": 7.767859444999999, "mean": 1.5535718889999999, "p50": 1.5226754370000322, "p90": 1.623734493400002, "p95": 1.6510246872000038, "p99": 1.6728568422400054, "stdev": 0.06342616790710333, "stdev_": 4.082602701309777 }, "throughput": { "unit": "samples/s", "value": 6.43677970154106 }, "energy": { "unit": "kWh", "cpu": 0.0003138687434944438, "ram": 0.000013120288847596779, "gpu": 0, "total": 0.0003269890323420406 }, "efficiency": { "unit": "samples/kWh", "value": 30582.065485119063 } }
{ "memory": { "unit": "MB", "max_ram": 2575.83104, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 1.6783148810000057, 1.505849014999967 ], "count": 2, "total": 3.1841638959999727, "mean": 1.5920819479999864, "p50": 1.5920819479999864, "p90": 1.661068294400002, "p95": 1.6696915877000038, "p99": 1.6765902223400053, "stdev": 0.08623293300001933, "stdev_": 5.416362713511533 }, "throughput": { "unit": "samples/s", "value": 2.512433486872269 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 2575.83104, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 1.5191561999999976, 1.5226754370000322, 1.5418639119999966 ], "count": 3, "total": 4.583695549000026, "mean": 1.527898516333342, "p50": 1.5226754370000322, "p90": 1.5380262170000036, "p95": 1.5399450645000001, "p99": 1.5414801424999973, "stdev": 0.0099789934148444, "stdev_": 0.6531188628150536 }, "throughput": { "unit": "samples/s", "value": 3.9269623838622656 }, "energy": null, "efficiency": null }
{ "name": "cpu_training_transformers_image-classification_google/vit-base-patch16-224", "backend": { "name": "pytorch", "version": "2.3.0+cpu", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "image-classification", "model": "google/vit-base-patch16-224", "library": "transformers", "device": "cpu", "device_ids": null, "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "hub_kwargs": { "revision": "main", "force_download": false, "local_files_only": false, "trust_remote_code": false }, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": "error", "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 4, "cpu_ram_mb": 16757.346304, "system": "Linux", "machine": "x86_64", "platform": "Linux-6.5.0-1018-azure-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.14", "optimum_benchmark_version": "0.2.0", "optimum_benchmark_commit": "2e77e02d1fd3ab0d2e788c3d89c12299219a25e8", "transformers_version": "4.40.2", "transformers_commit": null, "accelerate_version": "0.30.0", "accelerate_commit": null, "diffusers_version": "0.27.2", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "0.9.16", "timm_commit": null, "peft_version": null, "peft_commit": null } }
{ "overall": { "memory": { "unit": "MB", "max_ram": 2442.985472, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "count": 5, "total": 7.2970974209999895, "mean": 1.459419484199998, "stdev": 0.05210139006345095, "p50": 1.4401334369999859, "p90": 1.521250764199999, "p95": 1.5379663595999886, "p99": 1.5513388359199802, "values": [ 1.5546819549999782, 1.4241662819999874, 1.4401334369999859, 1.4711039780000306, 1.4070117690000075 ] }, "throughput": { "unit": "samples/s", "value": 6.852039532336137 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 2442.985472, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "count": 2, "total": 2.9788482369999656, "mean": 1.4894241184999828, "stdev": 0.06525783649999539, "p50": 1.4894241184999828, "p90": 1.541630387699979, "p95": 1.5481561713499787, "p99": 1.5533767982699782, "values": [ 1.5546819549999782, 1.4241662819999874 ] }, "throughput": { "unit": "samples/s", "value": 2.6856017371522416 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 2442.985472, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "count": 3, "total": 4.318249184000024, "mean": 1.4394163946666747, "stdev": 0.02617044676610726, "p50": 1.4401334369999859, "p90": 1.4649098698000216, "p95": 1.4680069239000262, "p99": 1.4704845671800297, "values": [ 1.4401334369999859, 1.4711039780000306, 1.4070117690000075 ] }, "throughput": { "unit": "samples/s", "value": 4.1683560241714614 }, "energy": null, "efficiency": null } }
{ "name": "cpu_training_transformers_multiple-choice_FacebookAI/roberta-base", "backend": { "name": "pytorch", "version": "2.5.1+cpu", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "multiple-choice", "library": "transformers", "model_type": "roberta", "model": "FacebookAI/roberta-base", "processor": "FacebookAI/roberta-base", "device": "cpu", "device_ids": null, "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": true }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": null, "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 4, "cpu_ram_mb": 16757.342208, "system": "Linux", "machine": "x86_64", "platform": "Linux-6.5.0-1025-azure-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.15", "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": "31aa6620675bda1ecd6e40a22ecaa03106d279d8", "transformers_version": "4.46.3", "transformers_commit": null, "accelerate_version": "1.1.1", "accelerate_commit": null, "diffusers_version": "0.31.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.11", "timm_commit": null, "peft_version": null, "peft_commit": null }, "print_report": true, "log_report": true }
{ "overall": { "memory": { "unit": "MB", "max_ram": 2908.09856, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.8228547080000226, 0.7182253619999983, 0.7183813430000328, 0.7021890369999824, 0.7177695560000075 ], "count": 5, "total": 3.6794200060000435, "mean": 0.7358840012000087, "p50": 0.7182253619999983, "p90": 0.7810653620000266, "p95": 0.8019600350000246, "p99": 0.818675773400023, "stdev": 0.043921653341148746, "stdev_": 5.968556629784795 }, "throughput": { "unit": "samples/s", "value": 13.589098259634621 }, "energy": { "unit": "kWh", "cpu": 0.00015042468719444555, "ram": 0.000006287666404905634, "gpu": 0, "total": 0.00015671235359935117 }, "efficiency": { "unit": "samples/kWh", "value": 63811.17869983546 } }, "warmup": { "memory": { "unit": "MB", "max_ram": 2908.09856, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.8228547080000226, 0.7182253619999983 ], "count": 2, "total": 1.541080070000021, "mean": 0.7705400350000104, "p50": 0.7705400350000104, "p90": 0.8123917734000201, "p95": 0.8176232407000213, "p99": 0.8218084145400223, "stdev": 0.052314673000012135, "stdev_": 6.78935170448495 }, "throughput": { "unit": "samples/s", "value": 5.191164401989763 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 2908.09856, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.7183813430000328, 0.7021890369999824, 0.7177695560000075 ], "count": 3, "total": 2.1383399360000226, "mean": 0.7127799786666742, "p50": 0.7177695560000075, "p90": 0.7182589856000277, "p95": 0.7183201643000302, "p99": 0.7183691072600322, "stdev": 0.007493090367078275, "stdev_": 1.0512487150796301 }, "throughput": { "unit": "samples/s", "value": 8.417744857569648 }, "energy": null, "efficiency": null } }
name: cpu_training_transformers_multiple-choice_FacebookAI/roberta-base
{ "name": "pytorch", "version": "2.5.1+cpu", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "multiple-choice", "library": "transformers", "model_type": "roberta", "model": "FacebookAI/roberta-base", "processor": "FacebookAI/roberta-base", "device": "cpu", "device_ids": null, "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": true }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": null, "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }
{ "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 4, "cpu_ram_mb": 16757.342208, "system": "Linux", "machine": "x86_64", "platform": "Linux-6.5.0-1025-azure-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.15", "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": "31aa6620675bda1ecd6e40a22ecaa03106d279d8", "transformers_version": "4.46.3", "transformers_commit": null, "accelerate_version": "1.1.1", "accelerate_commit": null, "diffusers_version": "0.31.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.11", "timm_commit": null, "peft_version": null, "peft_commit": null }
print_report: true
log_report: true
{ "memory": { "unit": "MB", "max_ram": 2908.09856, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.8228547080000226, 0.7182253619999983, 0.7183813430000328, 0.7021890369999824, 0.7177695560000075 ], "count": 5, "total": 3.6794200060000435, "mean": 0.7358840012000087, "p50": 0.7182253619999983, "p90": 0.7810653620000266, "p95": 0.8019600350000246, "p99": 0.818675773400023, "stdev": 0.043921653341148746, "stdev_": 5.968556629784795 }, "throughput": { "unit": "samples/s", "value": 13.589098259634621 }, "energy": { "unit": "kWh", "cpu": 0.00015042468719444555, "ram": 0.000006287666404905634, "gpu": 0, "total": 0.00015671235359935117 }, "efficiency": { "unit": "samples/kWh", "value": 63811.17869983546 } }
{ "memory": { "unit": "MB", "max_ram": 2908.09856, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.8228547080000226, 0.7182253619999983 ], "count": 2, "total": 1.541080070000021, "mean": 0.7705400350000104, "p50": 0.7705400350000104, "p90": 0.8123917734000201, "p95": 0.8176232407000213, "p99": 0.8218084145400223, "stdev": 0.052314673000012135, "stdev_": 6.78935170448495 }, "throughput": { "unit": "samples/s", "value": 5.191164401989763 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 2908.09856, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.7183813430000328, 0.7021890369999824, 0.7177695560000075 ], "count": 3, "total": 2.1383399360000226, "mean": 0.7127799786666742, "p50": 0.7177695560000075, "p90": 0.7182589856000277, "p95": 0.7183201643000302, "p99": 0.7183691072600322, "stdev": 0.007493090367078275, "stdev_": 1.0512487150796301 }, "throughput": { "unit": "samples/s", "value": 8.417744857569648 }, "energy": null, "efficiency": null }
{ "name": "cpu_training_transformers_multiple-choice_FacebookAI/roberta-base", "backend": { "name": "pytorch", "version": "2.3.0+cpu", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "multiple-choice", "model": "FacebookAI/roberta-base", "library": "transformers", "device": "cpu", "device_ids": null, "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "hub_kwargs": { "revision": "main", "force_download": false, "local_files_only": false, "trust_remote_code": false }, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": "error", "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 4, "cpu_ram_mb": 16757.346304, "system": "Linux", "machine": "x86_64", "platform": "Linux-6.5.0-1018-azure-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.14", "optimum_benchmark_version": "0.2.0", "optimum_benchmark_commit": "2e77e02d1fd3ab0d2e788c3d89c12299219a25e8", "transformers_version": "4.40.2", "transformers_commit": null, "accelerate_version": "0.30.0", "accelerate_commit": null, "diffusers_version": "0.27.2", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "0.9.16", "timm_commit": null, "peft_version": null, "peft_commit": null } }
{ "overall": { "memory": { "unit": "MB", "max_ram": 2845.749248, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "count": 5, "total": 3.581090587999995, "mean": 0.716218117599999, "stdev": 0.043372798377969854, "p50": 0.697155070000008, "p90": 0.7645524997999928, "p95": 0.7826806403999967, "p99": 0.7971831528799999, "values": [ 0.8008087810000006, 0.710168077999981, 0.6928677590000234, 0.697155070000008, 0.680090899999982 ] }, "throughput": { "unit": "samples/s", "value": 13.962227084549829 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 2845.749248, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "count": 2, "total": 1.5109768589999817, "mean": 0.7554884294999908, "stdev": 0.04532035150000979, "p50": 0.7554884294999908, "p90": 0.7917447106999986, "p95": 0.7962767458499996, "p99": 0.7999023739700004, "values": [ 0.8008087810000006, 0.710168077999981 ] }, "throughput": { "unit": "samples/s", "value": 5.294588035778844 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 2845.749248, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "count": 3, "total": 2.0701137290000133, "mean": 0.6900379096666711, "stdev": 0.007248103654729414, "p50": 0.6928677590000234, "p90": 0.696297607800011, "p95": 0.6967263389000096, "p99": 0.6970693237800083, "values": [ 0.6928677590000234, 0.697155070000008, 0.680090899999982 ] }, "throughput": { "unit": "samples/s", "value": 8.695174447587021 }, "energy": null, "efficiency": null } }
{ "name": "cpu_training_transformers_text-classification_FacebookAI/roberta-base", "backend": { "name": "pytorch", "version": "2.5.1+cpu", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-classification", "library": "transformers", "model_type": "roberta", "model": "FacebookAI/roberta-base", "processor": "FacebookAI/roberta-base", "device": "cpu", "device_ids": null, "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": true }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": null, "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 4, "cpu_ram_mb": 16757.342208, "system": "Linux", "machine": "x86_64", "platform": "Linux-6.5.0-1025-azure-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.15", "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": "31aa6620675bda1ecd6e40a22ecaa03106d279d8", "transformers_version": "4.46.3", "transformers_commit": null, "accelerate_version": "1.1.1", "accelerate_commit": null, "diffusers_version": "0.31.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.11", "timm_commit": null, "peft_version": null, "peft_commit": null }, "print_report": true, "log_report": true }
{ "overall": { "memory": { "unit": "MB", "max_ram": 2888.720384, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.7103454560000273, 0.595783124000036, 0.5754666850000376, 0.6006517540000118, 0.5872036250000292 ], "count": 5, "total": 3.069450644000142, "mean": 0.6138901288000284, "p50": 0.595783124000036, "p90": 0.6664679752000211, "p95": 0.6884067156000242, "p99": 0.7059577079200267, "stdev": 0.048980156908588875, "stdev_": 7.97865197870675 }, "throughput": { "unit": "samples/s", "value": 16.289559859102166 }, "energy": { "unit": "kWh", "cpu": 0.00012706255181666677, "ram": 0.000005311049123049572, "gpu": 0, "total": 0.00013237360093971634 }, "efficiency": { "unit": "samples/kWh", "value": 75543.76347708525 } }, "warmup": { "memory": { "unit": "MB", "max_ram": 2888.720384, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.7103454560000273, 0.595783124000036 ], "count": 2, "total": 1.3061285800000633, "mean": 0.6530642900000316, "p50": 0.6530642900000316, "p90": 0.6988892228000282, "p95": 0.7046173394000277, "p99": 0.7091998326800274, "stdev": 0.057281165999995665, "stdev_": 8.771137371481894 }, "throughput": { "unit": "samples/s", "value": 6.12497124900185 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 2888.720384, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.5754666850000376, 0.6006517540000118, 0.5872036250000292 ], "count": 3, "total": 1.7633220640000786, "mean": 0.5877740213333595, "p50": 0.5872036250000292, "p90": 0.5979621282000153, "p95": 0.5993069411000136, "p99": 0.6003827914200122, "stdev": 0.01028966922423239, "stdev_": 1.750616538119595 }, "throughput": { "unit": "samples/s", "value": 10.208004747111925 }, "energy": null, "efficiency": null } }
name: cpu_training_transformers_text-classification_FacebookAI/roberta-base
{ "name": "pytorch", "version": "2.5.1+cpu", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-classification", "library": "transformers", "model_type": "roberta", "model": "FacebookAI/roberta-base", "processor": "FacebookAI/roberta-base", "device": "cpu", "device_ids": null, "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": true }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": null, "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }
{ "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 4, "cpu_ram_mb": 16757.342208, "system": "Linux", "machine": "x86_64", "platform": "Linux-6.5.0-1025-azure-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.15", "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": "31aa6620675bda1ecd6e40a22ecaa03106d279d8", "transformers_version": "4.46.3", "transformers_commit": null, "accelerate_version": "1.1.1", "accelerate_commit": null, "diffusers_version": "0.31.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.11", "timm_commit": null, "peft_version": null, "peft_commit": null }
print_report: true
log_report: true
{ "memory": { "unit": "MB", "max_ram": 2888.720384, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.7103454560000273, 0.595783124000036, 0.5754666850000376, 0.6006517540000118, 0.5872036250000292 ], "count": 5, "total": 3.069450644000142, "mean": 0.6138901288000284, "p50": 0.595783124000036, "p90": 0.6664679752000211, "p95": 0.6884067156000242, "p99": 0.7059577079200267, "stdev": 0.048980156908588875, "stdev_": 7.97865197870675 }, "throughput": { "unit": "samples/s", "value": 16.289559859102166 }, "energy": { "unit": "kWh", "cpu": 0.00012706255181666677, "ram": 0.000005311049123049572, "gpu": 0, "total": 0.00013237360093971634 }, "efficiency": { "unit": "samples/kWh", "value": 75543.76347708525 } }
{ "memory": { "unit": "MB", "max_ram": 2888.720384, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.7103454560000273, 0.595783124000036 ], "count": 2, "total": 1.3061285800000633, "mean": 0.6530642900000316, "p50": 0.6530642900000316, "p90": 0.6988892228000282, "p95": 0.7046173394000277, "p99": 0.7091998326800274, "stdev": 0.057281165999995665, "stdev_": 8.771137371481894 }, "throughput": { "unit": "samples/s", "value": 6.12497124900185 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 2888.720384, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.5754666850000376, 0.6006517540000118, 0.5872036250000292 ], "count": 3, "total": 1.7633220640000786, "mean": 0.5877740213333595, "p50": 0.5872036250000292, "p90": 0.5979621282000153, "p95": 0.5993069411000136, "p99": 0.6003827914200122, "stdev": 0.01028966922423239, "stdev_": 1.750616538119595 }, "throughput": { "unit": "samples/s", "value": 10.208004747111925 }, "energy": null, "efficiency": null }
{ "name": "cpu_training_transformers_text-classification_FacebookAI/roberta-base", "backend": { "name": "pytorch", "version": "2.3.0+cpu", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-classification", "model": "FacebookAI/roberta-base", "library": "transformers", "device": "cpu", "device_ids": null, "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "hub_kwargs": { "revision": "main", "force_download": false, "local_files_only": false, "trust_remote_code": false }, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": "error", "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 4, "cpu_ram_mb": 16757.346304, "system": "Linux", "machine": "x86_64", "platform": "Linux-6.5.0-1018-azure-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.14", "optimum_benchmark_version": "0.2.0", "optimum_benchmark_commit": "2e77e02d1fd3ab0d2e788c3d89c12299219a25e8", "transformers_version": "4.40.2", "transformers_commit": null, "accelerate_version": "0.30.0", "accelerate_commit": null, "diffusers_version": "0.27.2", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "0.9.16", "timm_commit": null, "peft_version": null, "peft_commit": null } }
{ "overall": { "memory": { "unit": "MB", "max_ram": 2826.752, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "count": 5, "total": 2.882509665999976, "mean": 0.5765019331999952, "stdev": 0.04978939696949581, "p50": 0.5569985249999831, "p90": 0.6300386333999881, "p95": 0.6525100941999881, "p99": 0.670487262839988, "values": [ 0.674981554999988, 0.5569985249999831, 0.5424031590000027, 0.5626242509999884, 0.5455021760000136 ] }, "throughput": { "unit": "samples/s", "value": 17.34599560576128 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 2826.752, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "count": 2, "total": 1.2319800799999712, "mean": 0.6159900399999856, "stdev": 0.05899151500000244, "p50": 0.6159900399999856, "p90": 0.6631832519999875, "p95": 0.6690824034999878, "p99": 0.673801724699988, "values": [ 0.674981554999988, 0.5569985249999831 ] }, "throughput": { "unit": "samples/s", "value": 6.493611487614465 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 2826.752, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "count": 3, "total": 1.6505295860000047, "mean": 0.5501765286666682, "stdev": 0.00889233078021606, "p50": 0.5455021760000136, "p90": 0.5591998359999935, "p95": 0.5609120434999909, "p99": 0.5622818094999888, "values": [ 0.5424031590000027, 0.5626242509999884, 0.5455021760000136 ] }, "throughput": { "unit": "samples/s", "value": 10.905590637500968 }, "energy": null, "efficiency": null } }
null (×10)
{ "name": "cpu_training_transformers_text-generation_openai-community/gpt2", "backend": { "name": "pytorch", "version": "2.5.1+cpu", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-generation", "library": "transformers", "model_type": "gpt2", "model": "openai-community/gpt2", "processor": "openai-community/gpt2", "device": "cpu", "device_ids": null, "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": true }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": null, "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 4, "cpu_ram_mb": 16757.342208, "system": "Linux", "machine": "x86_64", "platform": "Linux-6.5.0-1025-azure-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.15", "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": "31aa6620675bda1ecd6e40a22ecaa03106d279d8", "transformers_version": "4.46.3", "transformers_commit": null, "accelerate_version": "1.1.1", "accelerate_commit": null, "diffusers_version": "0.31.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.11", "timm_commit": null, "peft_version": null, "peft_commit": null }, "print_report": true, "log_report": true }
{ "overall": { "memory": { "unit": "MB", "max_ram": 2841.915392, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.713606432000006, 0.6087099439999974, 0.6273626509999986, 0.5982021100000168, 0.6168094589999953 ], "count": 5, "total": 3.164690596000014, "mean": 0.6329381192000028, "p50": 0.6168094589999953, "p90": 0.679108919600003, "p95": 0.6963576758000044, "p99": 0.7101566807600056, "stdev": 0.04145404931767073, "stdev_": 6.549463219258504 }, "throughput": { "unit": "samples/s", "value": 15.799332820465013 }, "energy": { "unit": "kWh", "cpu": 0.0001301337896444445, "ram": 0.000005439559553038045, "gpu": 0, "total": 0.00013557334919748253 }, "efficiency": { "unit": "samples/kWh", "value": 73760.80962220332 } }, "warmup": { "memory": { "unit": "MB", "max_ram": 2841.915392, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.713606432000006, 0.6087099439999974 ], "count": 2, "total": 1.3223163760000034, "mean": 0.6611581880000017, "p50": 0.6611581880000017, "p90": 0.7031167832000051, "p95": 0.7083616076000055, "p99": 0.7125574671200059, "stdev": 0.05244824400000425, "stdev_": 7.932782948458943 }, "throughput": { "unit": "samples/s", "value": 6.049989355951211 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 2841.915392, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.6273626509999986, 0.5982021100000168, 0.6168094589999953 ], "count": 3, "total": 1.8423742200000106, "mean": 0.6141247400000035, "p50": 0.6168094589999953, "p90": 0.6252520125999979, "p95": 0.6263073317999982, "p99": 0.6271515871599985, "stdev": 0.012055153114874212, "stdev_": 1.9629811876450611 }, "throughput": { "unit": "samples/s", "value": 9.770002100876063 }, "energy": null, "efficiency": null } }
null (×12)
cpu_training_transformers_text-generation_openai-community/gpt2
{ "name": "pytorch", "version": "2.5.1+cpu", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-generation", "library": "transformers", "model_type": "gpt2", "model": "openai-community/gpt2", "processor": "openai-community/gpt2", "device": "cpu", "device_ids": null, "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": true }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": null, "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }
{ "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 4, "cpu_ram_mb": 16757.342208, "system": "Linux", "machine": "x86_64", "platform": "Linux-6.5.0-1025-azure-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.15", "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": "31aa6620675bda1ecd6e40a22ecaa03106d279d8", "transformers_version": "4.46.3", "transformers_commit": null, "accelerate_version": "1.1.1", "accelerate_commit": null, "diffusers_version": "0.31.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.11", "timm_commit": null, "peft_version": null, "peft_commit": null }
true
true
null (×12)
{ "memory": { "unit": "MB", "max_ram": 2841.915392, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.713606432000006, 0.6087099439999974, 0.6273626509999986, 0.5982021100000168, 0.6168094589999953 ], "count": 5, "total": 3.164690596000014, "mean": 0.6329381192000028, "p50": 0.6168094589999953, "p90": 0.679108919600003, "p95": 0.6963576758000044, "p99": 0.7101566807600056, "stdev": 0.04145404931767073, "stdev_": 6.549463219258504 }, "throughput": { "unit": "samples/s", "value": 15.799332820465013 }, "energy": { "unit": "kWh", "cpu": 0.0001301337896444445, "ram": 0.000005439559553038045, "gpu": 0, "total": 0.00013557334919748253 }, "efficiency": { "unit": "samples/kWh", "value": 73760.80962220332 } }
{ "memory": { "unit": "MB", "max_ram": 2841.915392, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.713606432000006, 0.6087099439999974 ], "count": 2, "total": 1.3223163760000034, "mean": 0.6611581880000017, "p50": 0.6611581880000017, "p90": 0.7031167832000051, "p95": 0.7083616076000055, "p99": 0.7125574671200059, "stdev": 0.05244824400000425, "stdev_": 7.932782948458943 }, "throughput": { "unit": "samples/s", "value": 6.049989355951211 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 2841.915392, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.6273626509999986, 0.5982021100000168, 0.6168094589999953 ], "count": 3, "total": 1.8423742200000106, "mean": 0.6141247400000035, "p50": 0.6168094589999953, "p90": 0.6252520125999979, "p95": 0.6263073317999982, "p99": 0.6271515871599985, "stdev": 0.012055153114874212, "stdev_": 1.9629811876450611 }, "throughput": { "unit": "samples/s", "value": 9.770002100876063 }, "energy": null, "efficiency": null }
{ "name": "cpu_training_transformers_text-generation_openai-community/gpt2", "backend": { "name": "pytorch", "version": "2.3.0+cpu", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-generation", "model": "openai-community/gpt2", "library": "transformers", "device": "cpu", "device_ids": null, "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "hub_kwargs": { "revision": "main", "force_download": false, "local_files_only": false, "trust_remote_code": false }, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": "error", "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 4, "cpu_ram_mb": 16757.346304, "system": "Linux", "machine": "x86_64", "platform": "Linux-6.5.0-1018-azure-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.14", "optimum_benchmark_version": "0.2.0", "optimum_benchmark_commit": "2e77e02d1fd3ab0d2e788c3d89c12299219a25e8", "transformers_version": "4.40.2", "transformers_commit": null, "accelerate_version": "0.30.0", "accelerate_commit": null, "diffusers_version": "0.27.2", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "0.9.16", "timm_commit": null, "peft_version": null, "peft_commit": null } }
{ "overall": { "memory": { "unit": "MB", "max_ram": 2827.354112, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "count": 5, "total": 3.1791685380000274, "mean": 0.6358337076000055, "stdev": 0.07846941233493662, "p50": 0.596941285000014, "p90": 0.7161873142000047, "p95": 0.7544328206000045, "p99": 0.7850292257200044, "values": [ 0.7926783270000044, 0.6014507950000052, 0.596941285000014, 0.593667699000008, 0.5944304319999958 ] }, "throughput": { "unit": "samples/s", "value": 15.727382616668173 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 2827.354112, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "count": 2, "total": 1.3941291220000096, "mean": 0.6970645610000048, "stdev": 0.0956137659999996, "p50": 0.6970645610000048, "p90": 0.7735555738000045, "p95": 0.7831169504000044, "p99": 0.7907660516800044, "values": [ 0.7926783270000044, 0.6014507950000052 ] }, "throughput": { "unit": "samples/s", "value": 5.738349392288174 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 2827.354112, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "count": 3, "total": 1.7850394160000178, "mean": 0.5950131386666726, "stdev": 0.0013985114990352936, "p50": 0.5944304319999958, "p90": 0.5964391144000103, "p95": 0.5966901997000121, "p99": 0.5968910679400136, "values": [ 0.596941285000014, 0.593667699000008, 0.5944304319999958 ] }, "throughput": { "unit": "samples/s", "value": 10.083810944822195 }, "energy": null, "efficiency": null } }
null (×10)
{ "name": "cpu_training_transformers_token-classification_microsoft/deberta-v3-base", "backend": { "name": "pytorch", "version": "2.5.1+cpu", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "token-classification", "library": "transformers", "model_type": "deberta-v2", "model": "microsoft/deberta-v3-base", "processor": "microsoft/deberta-v3-base", "device": "cpu", "device_ids": null, "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": true }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": null, "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 4, "cpu_ram_mb": 16757.342208, "system": "Linux", "machine": "x86_64", "platform": "Linux-6.5.0-1025-azure-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.15", "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": "31aa6620675bda1ecd6e40a22ecaa03106d279d8", "transformers_version": "4.46.3", "transformers_commit": null, "accelerate_version": "1.1.1", "accelerate_commit": null, "diffusers_version": "0.31.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.11", "timm_commit": null, "peft_version": null, "peft_commit": null }, "print_report": true, "log_report": true }
{ "overall": { "memory": { "unit": "MB", "max_ram": 4390.588416, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 1.324631328999999, 1.1373790739999663, 1.14549581, 1.1179114660000096, 1.113112846999968 ], "count": 5, "total": 5.838530525999943, "mean": 1.1677061051999886, "p50": 1.1373790739999663, "p90": 1.2529771213999994, "p95": 1.2888042251999992, "p99": 1.3174659082399989, "stdev": 0.0793706265981456, "stdev_": 6.79714067132946 }, "throughput": { "unit": "samples/s", "value": 8.563798677996411 }, "energy": { "unit": "kWh", "cpu": 0.00023477029041666527, "ram": 0.000009813570993522525, "gpu": 0, "total": 0.0002445838614101878 }, "efficiency": { "unit": "samples/kWh", "value": 40885.77203067849 } }, "warmup": { "memory": { "unit": "MB", "max_ram": 4390.588416, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 1.324631328999999, 1.1373790739999663 ], "count": 2, "total": 2.462010402999965, "mean": 1.2310052014999826, "p50": 1.2310052014999826, "p90": 1.3059061034999957, "p95": 1.3152687162499972, "p99": 1.3227588064499987, "stdev": 0.09362612750001631, "stdev_": 7.605664654051231 }, "throughput": { "unit": "samples/s", "value": 3.2493770092327727 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 4390.588416, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 1.14549581, 1.1179114660000096, 1.113112846999968 ], "count": 3, "total": 3.3765201229999775, "mean": 1.1255067076666592, "p50": 1.1179114660000096, "p90": 1.1399789412000019, "p95": 1.1427373756000008, "p99": 1.1449441231200002, "stdev": 0.014269544378301306, "stdev_": 1.2678329041578231 }, "throughput": { "unit": "samples/s", "value": 5.3309322451208505 }, "energy": null, "efficiency": null } }
null (×12)
cpu_training_transformers_token-classification_microsoft/deberta-v3-base
{ "name": "pytorch", "version": "2.5.1+cpu", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "token-classification", "library": "transformers", "model_type": "deberta-v2", "model": "microsoft/deberta-v3-base", "processor": "microsoft/deberta-v3-base", "device": "cpu", "device_ids": null, "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": true }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": null, "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }
{ "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 4, "cpu_ram_mb": 16757.342208, "system": "Linux", "machine": "x86_64", "platform": "Linux-6.5.0-1025-azure-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.15", "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": "31aa6620675bda1ecd6e40a22ecaa03106d279d8", "transformers_version": "4.46.3", "transformers_commit": null, "accelerate_version": "1.1.1", "accelerate_commit": null, "diffusers_version": "0.31.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.11", "timm_commit": null, "peft_version": null, "peft_commit": null }
true
true
null (×12)
{ "memory": { "unit": "MB", "max_ram": 4390.588416, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 1.324631328999999, 1.1373790739999663, 1.14549581, 1.1179114660000096, 1.113112846999968 ], "count": 5, "total": 5.838530525999943, "mean": 1.1677061051999886, "p50": 1.1373790739999663, "p90": 1.2529771213999994, "p95": 1.2888042251999992, "p99": 1.3174659082399989, "stdev": 0.0793706265981456, "stdev_": 6.79714067132946 }, "throughput": { "unit": "samples/s", "value": 8.563798677996411 }, "energy": { "unit": "kWh", "cpu": 0.00023477029041666527, "ram": 0.000009813570993522525, "gpu": 0, "total": 0.0002445838614101878 }, "efficiency": { "unit": "samples/kWh", "value": 40885.77203067849 } }
{ "memory": { "unit": "MB", "max_ram": 4390.588416, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 1.324631328999999, 1.1373790739999663 ], "count": 2, "total": 2.462010402999965, "mean": 1.2310052014999826, "p50": 1.2310052014999826, "p90": 1.3059061034999957, "p95": 1.3152687162499972, "p99": 1.3227588064499987, "stdev": 0.09362612750001631, "stdev_": 7.605664654051231 }, "throughput": { "unit": "samples/s", "value": 3.2493770092327727 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 4390.588416, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 1.14549581, 1.1179114660000096, 1.113112846999968 ], "count": 3, "total": 3.3765201229999775, "mean": 1.1255067076666592, "p50": 1.1179114660000096, "p90": 1.1399789412000019, "p95": 1.1427373756000008, "p99": 1.1449441231200002, "stdev": 0.014269544378301306, "stdev_": 1.2678329041578231 }, "throughput": { "unit": "samples/s", "value": 5.3309322451208505 }, "energy": null, "efficiency": null }
{ "name": "cpu_training_transformers_token-classification_microsoft/deberta-v3-base", "backend": { "name": "pytorch", "version": "2.3.0+cpu", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "token-classification", "model": "microsoft/deberta-v3-base", "library": "transformers", "device": "cpu", "device_ids": null, "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "hub_kwargs": { "revision": "main", "force_download": false, "local_files_only": false, "trust_remote_code": false }, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": "error", "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 4, "cpu_ram_mb": 16757.346304, "system": "Linux", "machine": "x86_64", "platform": "Linux-6.5.0-1018-azure-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.14", "optimum_benchmark_version": "0.2.0", "optimum_benchmark_commit": "2e77e02d1fd3ab0d2e788c3d89c12299219a25e8", "transformers_version": "4.40.2", "transformers_commit": null, "accelerate_version": "0.30.0", "accelerate_commit": null, "diffusers_version": "0.27.2", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "0.9.16", "timm_commit": null, "peft_version": null, "peft_commit": null } }
{ "overall": { "memory": { "unit": "MB", "max_ram": 4374.970368, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "count": 5, "total": 5.583659903000068, "mean": 1.1167319806000138, "stdev": 0.07853258776030796, "p50": 1.0731834320000075, "p90": 1.2025418994000006, "p95": 1.2374836681999908, "p99": 1.265437083239983, "values": [ 1.2724254369999812, 1.0731834320000075, 1.0977165930000297, 1.0711078369999996, 1.0692266040000504 ] }, "throughput": { "unit": "samples/s", "value": 8.954700119385008 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 4374.970368, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "count": 2, "total": 2.3456088689999888, "mean": 1.1728044344999944, "stdev": 0.09962100249998684, "p50": 1.1728044344999944, "p90": 1.252501236499984, "p95": 1.2624633367499825, "p99": 1.2704330169499816, "values": [ 1.2724254369999812, 1.0731834320000075 ] }, "throughput": { "unit": "samples/s", "value": 3.4106283045436583 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 4374.970368, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "count": 3, "total": 3.2380510340000797, "mean": 1.0793503446666932, "stdev": 0.01300958794585394, "p50": 1.0711078369999996, "p90": 1.0923948418000236, "p95": 1.0950557174000266, "p99": 1.097184417880029, "values": [ 1.0977165930000297, 1.0711078369999996, 1.0692266040000504 ] }, "throughput": { "unit": "samples/s", "value": 5.558899415419021 }, "energy": null, "efficiency": null } }
null (×10)
