Dataset columns: `text` (string, length 7 to 318k), `id` (string, length 14 to 166), `metadata` (dict), `__index_level_0__` (int64, 0 to 439).
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Troubleshooting guide This guide aims to provide you with the tools and knowledge required to navigate some common issues. However, as 🤗 Accelerate continuously evolves and the use cases and setups are diverse, you might encounter an issue not covered in this guide. If the suggestions listed in this guide do not cover your situation, please refer to the final section of the guide, [Asking for Help](#ask-for-help), to learn where to find help with your specific issue. ## Logging When facing an error, logging can help narrow down where it is coming from. In a distributed setup with multiple processes, logging can be a challenge, but 🤗 Accelerate provides a utility that streamlines the logging process and ensures that logs are synchronized and managed effectively across the distributed setup. To troubleshoot an issue, use `accelerate.logging` instead of the standard Python `logging` module: ```diff - import logging + from accelerate.logging import get_logger - logger = logging.getLogger(__name__) + logger = get_logger(__name__) ``` To set the log level (`INFO`, `DEBUG`, `WARNING`, `ERROR`, `CRITICAL`), export it as the `ACCELERATE_LOG_LEVEL` environment variable, or pass it as `log_level` to `get_logger`: ```python from accelerate.logging import get_logger logger = get_logger(__name__, log_level="INFO") ``` By default, the log is called on the main process only. To call it on all processes, pass `main_process_only=False`. If a log should be called on all processes and in order, also pass `in_order=True`. ## Hanging code and timeout errors ### Mismatched tensor shapes If your code seems to be hanging for a significant amount of time on a distributed setup, a common cause is mismatched shapes of tensors on different devices. When running scripts in a distributed fashion, functions such as [`Accelerator.gather`] and [`Accelerator.reduce`] are necessary to grab tensors across devices to perform operations on them collectively. These (and other) functions rely on `torch.distributed` performing a `gather` operation, which requires that tensors have the **exact same shape** across all processes. When the tensor shapes don't match, you will experience hanging code, and eventually hit a timeout exception. If you suspect this to be the case, use Accelerate's operational debug mode to immediately catch the issue. The recommended way to enable Accelerate's operational debug mode is during `accelerate config` setup.
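To make the failure mode concrete, here is a minimal sketch of a script that would trigger it. The shapes are hypothetical, chosen to mirror the traceback shown further down:

```python
import torch
from accelerate import Accelerator

accelerator = Accelerator()

# Each process builds a tensor whose shape depends on its rank, so the shapes differ.
if accelerator.process_index == 0:
    tensor = torch.ones((1, 5), device=accelerator.device)
else:
    tensor = torch.ones((1, 2, 5), device=accelerator.device)

# Without debug mode this collective hangs until the distributed timeout is hit;
# with debug mode enabled, it raises a DistributedOperationException right away.
gathered = accelerator.gather(tensor)
```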
Alternative ways to enable debug mode are: * From the CLI: ```bash accelerate launch --debug {my_script.py} --arg1 --arg2 ``` * As an environment variable (which avoids the need for `accelerate launch`): ```bash ACCELERATE_DEBUG_MODE="1" torchrun {my_script.py} --arg1 --arg2 ``` * Manually changing the `config.yaml` file: ```diff compute_environment: LOCAL_MACHINE +debug: true ``` Once you enable debug mode, you should get a traceback similar to the following, pointing to the tensor shape mismatch issue: ```py Traceback (most recent call last): File "/home/zach_mueller_huggingface_co/test.py", line 18, in <module> main() File "/home/zach_mueller_huggingface_co/test.py", line 15, in main broadcast_tensor = broadcast(tensor) File "/home/zach_mueller_huggingface_co/accelerate/src/accelerate/utils/operations.py", line 303, in wrapper accelerate.utils.operations.DistributedOperationException: Cannot apply desired operation due to shape mismatches. All shapes across devices must be valid. Operation: `accelerate.utils.operations.broadcast` Input shapes: - Process 0: [1, 5] - Process 1: [1, 2, 5] ``` ### Early stopping leads to hanging When doing early stopping in distributed training, if each process has a specific stopping condition (e.g. validation loss), it may not be synchronized across all of them. As a result, a break can happen on process 0 but not on process 1. This will cause the code to hang indefinitely until a timeout occurs. If you have early stopping conditionals, use the `set_breakpoint` and `check_breakpoint` methods to make sure all of the processes exit correctly: ```py # Assume `should_do_breakpoint` is a custom defined function that returns a conditional, # and that conditional might be true only on process 1 if should_do_breakpoint(loss): accelerator.set_breakpoint() # Later in the training script when we need to check for the breakpoint if accelerator.check_breakpoint(): break ``` ### Hanging on low kernel versions on Linux This is a known issue. On Linux with kernel version < 5.5, hanging processes have been reported. To avoid encountering this problem, we recommend upgrading your system to a later kernel version. ## CUDA out of memory One of the most frustrating errors when it comes to running training scripts is hitting "CUDA Out-of-Memory", as the entire script needs to be restarted, progress is lost, and typically a developer would want to simply start their script and let it run. To address this problem, `Accelerate` offers the utility `find_executable_batch_size`, which is heavily based on [toma](https://github.com/BlackHC/toma). The utility retries code that fails due to OOM (out-of-memory) conditions and automatically lowers the batch size. ### find_executable_batch_size This algorithm operates with exponential decay, halving the batch size after each failed run of the training script. To use it, restructure your training function to include an inner function that includes this wrapper, and build your dataloaders inside it. At a minimum, this could look like 4 new lines of code. <Tip warning={true}> The inner function *must* take in the batch size as the first parameter, but we do not pass one to it when called. The wrapper handles this for us. </Tip> It should also be noted that anything which consumes CUDA memory and is passed to the `accelerator` **must** be declared inside the inner function, such as models and optimizers.
```diff def training_function(args): accelerator = Accelerator() + @find_executable_batch_size(starting_batch_size=args.batch_size) + def inner_training_loop(batch_size): + nonlocal accelerator # Ensure they can be used in our context + accelerator.free_memory() # Free all lingering references model = get_model() model.to(accelerator.device) optimizer = get_optimizer() train_dataloader, eval_dataloader = get_dataloaders(accelerator, batch_size) lr_scheduler = get_scheduler( optimizer, num_training_steps=len(train_dataloader)*num_epochs ) model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare( model, optimizer, train_dataloader, eval_dataloader, lr_scheduler ) train(model, optimizer, train_dataloader, lr_scheduler) validate(model, eval_dataloader) + inner_training_loop() ``` To find out more, check the documentation [here](../package_reference/utilities#accelerate.find_executable_batch_size). ## Non-reproducible results between device setups If you have changed the device setup and are observing different model performance, this is likely due to the fact that you have not updated your script when moving from one setup to another. The same script with the same batch size across TPU, multi-GPU, and single-GPU with Accelerate will have different results. For example, if you were previously training on a single GPU with a batch size of 16, when moving to a two-GPU setup you need to change the batch size to 8 to have the same effective batch size. This is because when training with Accelerate, the batch size passed to the dataloader is the **batch size per GPU**. To make sure you can reproduce the results between the setups, make sure to use the same seed, adjust the batch size accordingly, and consider scaling the learning rate. For more details and a quick reference for batch sizes, check out the [Comparing performance between different device setups](../concept_guides/performance) guide. ## Performance issues on different GPUs If your multi-GPU setup consists of different GPUs, you may hit some limitations: - There may be an imbalance in GPU memory between the GPUs. In this case, the GPU with the smaller memory will limit the batch size or the size of the model that can be loaded onto the GPUs. - If you are using GPUs with different performance profiles, the performance will be driven by the slowest GPU you are using, as the other GPUs will have to wait for it to complete its workload. Vastly different GPUs within the same setup can lead to performance bottlenecks. ## Ask for help If the above troubleshooting tools and advice did not help you resolve your issue, reach out to the community and the team for help. ### Forums Ask for help on the Hugging Face forums - post your question in the [🤗Accelerate category](https://discuss.huggingface.co/c/accelerate/18). Make sure to write a descriptive post with relevant context about your setup and reproducible code to maximize the likelihood that your problem is solved! ### Discord Post a question on [Discord](http://hf.co/join/discord), and let the team and the community help you. ### GitHub Issues Create an Issue on the 🤗 Accelerate [GitHub repository](https://github.com/huggingface/accelerate/issues) if you suspect you have found a bug related to the library. Include context regarding the bug and details about your distributed setup to help us better figure out what's wrong and how we can fix it.
accelerate/docs/source/basic_tutorials/troubleshooting.md/0
{ "file_path": "accelerate/docs/source/basic_tutorials/troubleshooting.md", "repo_id": "accelerate", "token_count": 2832 }
0
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Intel® Extension for PyTorch [IPEX](https://github.com/intel/intel-extension-for-pytorch) is optimized for CPUs with AVX-512 or above, and it also works functionally on CPUs with only AVX2. IPEX is expected to bring a performance benefit on Intel CPU generations with AVX-512 or above, while CPUs with only AVX2 (e.g., AMD CPUs or older Intel CPUs) might see some benefit under IPEX, but this is not guaranteed. IPEX provides performance optimizations for CPU training with both Float32 and BFloat16. The usage of BFloat16 is the main focus of the following sections. The low-precision data type BFloat16 has been natively supported on 3rd Generation Intel® Xeon® Scalable Processors (aka Cooper Lake) with the AVX-512 instruction set, and will be supported on the next generation of Intel® Xeon® Scalable Processors with the Intel® Advanced Matrix Extensions (Intel® AMX) instruction set, bringing further boosted performance. Auto Mixed Precision for the CPU backend has been enabled since PyTorch 1.10. At the same time, support for Auto Mixed Precision with BFloat16 on CPU and BFloat16 optimization of operators has been widely enabled in Intel® Extension for PyTorch, and partially upstreamed to the PyTorch master branch. Users can get better performance and user experience with IPEX Auto Mixed Precision. ## IPEX installation IPEX releases follow PyTorch; to install via pip: | PyTorch Version | IPEX version | | :---------------: | :----------: | | 2.0 | 2.0.0 | | 1.13 | 1.13.0 | | 1.12 | 1.12.300 | | 1.11 | 1.11.200 | | 1.10 | 1.10.100 | ``` pip install intel_extension_for_pytorch==<version_name> -f https://developer.intel.com/ipex-whl-stable-cpu ``` See the [IPEX installation guide](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/installation.html) for more installation approaches. ## How It Works For Training Optimization on CPU 🤗 Accelerate has integrated [IPEX](https://github.com/intel/intel-extension-for-pytorch); all you need to do is enable it through the config. **Scenario 1**: Acceleration of non-distributed CPU training Run <u>accelerate config</u> on your machine: ```bash $ accelerate config ----------------------------------------------------------------------------------------------------------------------------------------------------------- In which compute environment are you running? This machine ----------------------------------------------------------------------------------------------------------------------------------------------------------- Which type of machine are you using? No distributed training Do you want to run your training on CPU only (even if a GPU / Apple Silicon device is available)? [yes/NO]:yes Do you want to use Intel PyTorch Extension (IPEX) to speed up training on CPU?
[yes/NO]:yes Do you wish to optimize your script with torch dynamo?[yes/NO]:NO Do you want to use DeepSpeed? [yes/NO]: NO ----------------------------------------------------------------------------------------------------------------------------------------------------------- Do you wish to use FP16 or BF16 (mixed precision)? bf16 ``` This will generate a config file that will be used automatically to properly set the default options when doing ```bash accelerate launch my_script.py --args_to_my_script ``` For instance, here is how you would run the NLP example `examples/nlp_example.py` (from the root of the repo) with IPEX enabled. This is the default_config.yaml generated by `accelerate config`: ```bash compute_environment: LOCAL_MACHINE distributed_type: 'NO' downcast_bf16: 'no' ipex_config: ipex: true machine_rank: 0 main_training_function: main mixed_precision: bf16 num_machines: 1 num_processes: 1 rdzv_backend: static same_network: true tpu_env: [] tpu_use_cluster: false tpu_use_sudo: false use_cpu: true ``` ```bash accelerate launch examples/nlp_example.py ``` **Scenario 2**: Acceleration of distributed CPU training We use Intel oneCCL for communication, combined with the Intel® MPI library, to deliver flexible, efficient, scalable cluster messaging on Intel® architecture. You can refer to [this guide](https://huggingface.co/docs/transformers/perf_train_cpu_many) for installation instructions. Run <u>accelerate config</u> on your machine (node0): ```bash $ accelerate config ----------------------------------------------------------------------------------------------------------------------------------------------------------- In which compute environment are you running? This machine ----------------------------------------------------------------------------------------------------------------------------------------------------------- Which type of machine are you using? multi-CPU How many different machines will you use (use more than 1 for multi-node training)? [1]: 4 ----------------------------------------------------------------------------------------------------------------------------------------------------------- What is the rank of this machine? 0 What is the IP address of the machine that will host the main process? 36.112.23.24 What is the port you will use to communicate with the main process? 29500 Are all the machines on the same local network? Answer `no` if nodes are on the cloud and/or on different network hosts [YES/no]: yes Do you want to use Intel PyTorch Extension (IPEX) to speed up training on CPU? [yes/NO]:yes Do you wish to optimize your script with torch dynamo?[yes/NO]:NO How many CPU(s) should be used for distributed training? [1]:16 ----------------------------------------------------------------------------------------------------------------------------------------------------------- Do you wish to use FP16 or BF16 (mixed precision)? bf16 ``` For instance, here is how you would run the NLP example `examples/nlp_example.py` (from the root of the repo) with IPEX enabled for distributed CPU training.
This is the default_config.yaml generated by `accelerate config`: ```bash compute_environment: LOCAL_MACHINE distributed_type: MULTI_CPU downcast_bf16: 'no' ipex_config: ipex: true machine_rank: 0 main_process_ip: 36.112.23.24 main_process_port: 29500 main_training_function: main mixed_precision: bf16 num_machines: 4 num_processes: 16 rdzv_backend: static same_network: true tpu_env: [] tpu_use_cluster: false tpu_use_sudo: false use_cpu: true ``` Set the following environment variables and use Intel MPI to launch the training. On node0, you need to create a configuration file that contains the IP addresses of each node (for example, hostfile) and pass that configuration file path as an argument. ```bash $ cat hostfile xxx.xxx.xxx.xxx #node0 ip xxx.xxx.xxx.xxx #node1 ip xxx.xxx.xxx.xxx #node2 ip xxx.xxx.xxx.xxx #node3 ip ``` Now, run the following command on node0 and **16DDP** will be enabled on node0, node1, node2, and node3 with BF16 mixed precision (a minimal sketch of what the launched training script itself can look like is given at the end of this page): ```bash oneccl_bindings_for_pytorch_path=$(python -c "from oneccl_bindings_for_pytorch import cwd; print(cwd)") source $oneccl_bindings_for_pytorch_path/env/setvars.sh export CCL_WORKER_COUNT=1 export MASTER_ADDR=xxx.xxx.xxx.xxx #node0 ip export CCL_ATL_TRANSPORT=ofi mpirun -f hostfile -n 16 -ppn 4 accelerate launch examples/nlp_example.py ``` ## Related Resources - [Project's github](https://github.com/intel/intel-extension-for-pytorch) - [API docs](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/api_doc.html) - [Tuning guide](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/performance_tuning/tuning_guide.html) - [Blogs & Publications](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/blogs_publications.html)
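For reference, here is a minimal sketch of what the launched training script itself can look like. The toy model and data below are made up for illustration; note that no IPEX-specific code is required, since `Accelerator` picks up the `ipex` and `bf16` settings from the config file generated above:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

def main():
    accelerator = Accelerator()  # ipex / bf16 settings come from the Accelerate config
    model = torch.nn.Linear(16, 2)
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
    dataset = TensorDataset(torch.randn(256, 16), torch.randint(0, 2, (256,)))
    dataloader = DataLoader(dataset, batch_size=32)

    # prepare() applies the device placement and IPEX optimizations configured above
    model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

    model.train()
    for inputs, labels in dataloader:
        optimizer.zero_grad()
        loss = torch.nn.functional.cross_entropy(model(inputs), labels)
        accelerator.backward(loss)
        optimizer.step()

if __name__ == "__main__":
    main()
```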
accelerate/docs/source/usage_guides/ipex.md/0
{ "file_path": "accelerate/docs/source/usage_guides/ipex.md", "repo_id": "accelerate", "token_count": 2313 }
1
import argparse import runhouse as rh import torch from nlp_example import training_function from accelerate.utils import PrepareForLaunch, patch_environment def launch_train(*args): num_processes = torch.cuda.device_count() print(f"Device count: {num_processes}") with patch_environment( world_size=num_processes, master_addr="127.0.0.1", master_port="29500", mixed_precision=args[1].mixed_precision ): launcher = PrepareForLaunch(training_function, distributed_type="MULTI_GPU") torch.multiprocessing.start_processes(launcher, args=args, nprocs=num_processes, start_method="spawn") if __name__ == "__main__": # Refer to https://runhouse-docs.readthedocs-hosted.com/en/main/rh_primitives/cluster.html#hardware-setup # for cloud access setup instructions (if using on-demand hardware), and for API specifications. # on-demand GPU # gpu = rh.cluster(name='rh-cluster', instance_type='V100:1', provider='cheapest', use_spot=False) # single GPU gpu = rh.cluster(name="rh-cluster", instance_type="V100:4", provider="cheapest", use_spot=False) # multi GPU gpu.up_if_not() # on-prem GPU # gpu = rh.cluster( # ips=["ip_addr"], ssh_creds={ssh_user:"<username>", ssh_private_key:"<key_path>"}, name="rh-cluster" # ) # Set up remote function reqs = [ "pip:./", "transformers", "datasets", "evaluate", "tqdm", "scipy", "scikit-learn", "tensorboard", "torch --upgrade --extra-index-url https://download.pytorch.org/whl/cu117", ] launch_train_gpu = rh.function(fn=launch_train, system=gpu, reqs=reqs, name="train_bert_glue") # Define train args/config, run train function train_args = argparse.Namespace(cpu=False, mixed_precision="fp16") config = {"lr": 2e-5, "num_epochs": 3, "seed": 42, "batch_size": 16} launch_train_gpu(config, train_args, stream_logs=True) # Alternatively, we can just run as instructed in the README (but only because there's already a wrapper CLI): # gpu.install_packages(reqs) # gpu.run(['accelerate launch --multi_gpu accelerate/examples/nlp_example.py'])
accelerate/examples/multigpu_remote_launcher.py/0
{ "file_path": "accelerate/examples/multigpu_remote_launcher.py", "repo_id": "accelerate", "token_count": 869 }
2
# Copyright 2022 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import random from pathlib import Path from typing import List import numpy as np import torch from safetensors.torch import load_file from torch.cuda.amp import GradScaler from .utils import ( MODEL_NAME, OPTIMIZER_NAME, RNG_STATE_NAME, SAFE_MODEL_NAME, SAFE_WEIGHTS_NAME, SAMPLER_NAME, SCALER_NAME, SCHEDULER_NAME, WEIGHTS_NAME, get_pretty_name, is_tpu_available, is_xpu_available, save, ) if is_tpu_available(check_device=False): import torch_xla.core.xla_model as xm from .logging import get_logger from .state import PartialState logger = get_logger(__name__) def save_accelerator_state( output_dir: str, model_states: List[dict], optimizers: list, schedulers: list, dataloaders: list, process_index: int, scaler: GradScaler = None, save_on_each_node: bool = False, safe_serialization: bool = True, ): """ Saves the current states of the models, optimizers, scaler, and RNG generators to a given directory. <Tip> If `safe_serialization` is `True`, models will be saved with `safetensors` while the rest are saved using native `pickle`. </Tip> Args: output_dir (`str` or `os.PathLike`): The name of the folder to save all relevant weights and states. model_states (`List[torch.nn.Module]`): A list of model states optimizers (`List[torch.optim.Optimizer]`): A list of optimizer instances schedulers (`List[torch.optim.lr_scheduler._LRScheduler]`): A list of learning rate schedulers dataloaders (`List[torch.utils.data.DataLoader]`): A list of dataloader instances to save their sampler states process_index (`int`): The current process index in the Accelerator state scaler (`torch.cuda.amp.GradScaler`, *optional*): An optional gradient scaler instance to save save_on_each_node (`bool`, *optional*): Whether to save on every node, or only the main node. safe_serialization (`bool`, *optional*, defaults to `True`): Whether to save the model using `safetensors` or the traditional PyTorch way (that uses `pickle`). 
""" output_dir = Path(output_dir) # Model states for i, state in enumerate(model_states): weights_name = WEIGHTS_NAME if not safe_serialization else SAFE_WEIGHTS_NAME if i > 0: weights_name = weights_name.replace(".", f"_{i}.") output_model_file = output_dir.joinpath(weights_name) save(state, output_model_file, save_on_each_node=save_on_each_node, safe_serialization=safe_serialization) logger.info(f"Model weights saved in {output_model_file}") # Optimizer states for i, opt in enumerate(optimizers): state = opt.state_dict() optimizer_name = f"{OPTIMIZER_NAME}.bin" if i == 0 else f"{OPTIMIZER_NAME}_{i}.bin" output_optimizer_file = output_dir.joinpath(optimizer_name) save(state, output_optimizer_file, save_on_each_node=save_on_each_node, safe_serialization=False) logger.info(f"Optimizer state saved in {output_optimizer_file}") # Scheduler states for i, scheduler in enumerate(schedulers): state = scheduler.state_dict() scheduler_name = f"{SCHEDULER_NAME}.bin" if i == 0 else f"{SCHEDULER_NAME}_{i}.bin" output_scheduler_file = output_dir.joinpath(scheduler_name) save(state, output_scheduler_file, save_on_each_node=save_on_each_node, safe_serialization=False) logger.info(f"Scheduler state saved in {output_scheduler_file}") # DataLoader states for i, dataloader in enumerate(dataloaders): sampler_name = f"{SAMPLER_NAME}.bin" if i == 0 else f"{SAMPLER_NAME}_{i}.bin" output_sampler_file = output_dir.joinpath(sampler_name) # Only save if we have our custom sampler from .data_loader import IterableDatasetShard, SeedableRandomSampler if isinstance(dataloader.dataset, IterableDatasetShard): sampler = dataloader.sampler.sampler if isinstance(sampler, SeedableRandomSampler): save(sampler, output_sampler_file, save_on_each_node=save_on_each_node, safe_serialization=False) logger.info(f"Sampler state for dataloader {i} saved in {output_sampler_file}") # GradScaler state if scaler is not None: state = scaler.state_dict() output_scaler_file = output_dir.joinpath(SCALER_NAME) torch.save(state, output_scaler_file) logger.info(f"Gradient scaler state saved in {output_scaler_file}") # Random number generator states states = {} states_name = f"{RNG_STATE_NAME}_{process_index}.pkl" states["random_state"] = random.getstate() states["numpy_random_seed"] = np.random.get_state() states["torch_manual_seed"] = torch.get_rng_state() if is_xpu_available(): states["torch_xpu_manual_seed"] = torch.xpu.get_rng_state_all() else: states["torch_cuda_manual_seed"] = torch.cuda.get_rng_state_all() if is_tpu_available(): states["xm_seed"] = xm.get_rng_state() output_states_file = output_dir.joinpath(states_name) torch.save(states, output_states_file) logger.info(f"Random states saved in {output_states_file}") return output_dir def load_accelerator_state( input_dir, models, optimizers, schedulers, dataloaders, process_index, scaler=None, map_location=None, **load_model_func_kwargs, ): """ Loads states of the models, optimizers, scaler, and RNG generators from a given directory. Args: input_dir (`str` or `os.PathLike`): The name of the folder to load all relevant weights and states. 
models (`List[torch.nn.Module]`): A list of model instances optimizers (`List[torch.optim.Optimizer]`): A list of optimizer instances schedulers (`List[torch.optim.lr_scheduler._LRScheduler]`): A list of learning rate schedulers process_index (`int`): The current process index in the Accelerator state scaler (`torch.cuda.amp.GradScaler`, *optional*): An optional *GradScaler* instance to load map_location (`str`, *optional*): What device to load the optimizer state onto. Should be one of either "cpu" or "on_device". load_model_func_kwargs (`dict`, *optional*): Additional arguments that can be passed to the model's `load_state_dict` method. """ if map_location not in [None, "cpu", "on_device"]: raise TypeError( "Unsupported optimizer map location passed, please choose one of `None`, `'cpu'`, or `'on_device'`" ) if map_location is None: map_location = "cpu" elif map_location == "on_device": map_location = PartialState().device input_dir = Path(input_dir) # Model states for i, model in enumerate(models): ending = f"_{i}" if i > 0 else "" input_model_file = input_dir.joinpath(f"{SAFE_MODEL_NAME}{ending}.safetensors") if input_model_file.exists(): state_dict = load_file(input_model_file, device=str(map_location)) else: # Load with torch input_model_file = input_dir.joinpath(f"{MODEL_NAME}{ending}.bin") state_dict = torch.load(input_model_file, map_location=map_location) models[i].load_state_dict(state_dict, **load_model_func_kwargs) logger.info("All model weights loaded successfully") # Optimizer states for i, opt in enumerate(optimizers): optimizer_name = f"{OPTIMIZER_NAME}.bin" if i == 0 else f"{OPTIMIZER_NAME}_{i}.bin" input_optimizer_file = input_dir.joinpath(optimizer_name) optimizer_state = torch.load(input_optimizer_file, map_location=map_location) optimizers[i].load_state_dict(optimizer_state) logger.info("All optimizer states loaded successfully") # Scheduler states for i, scheduler in enumerate(schedulers): scheduler_name = f"{SCHEDULER_NAME}.bin" if i == 0 else f"{SCHEDULER_NAME}_{i}.bin" input_scheduler_file = input_dir.joinpath(scheduler_name) scheduler.load_state_dict(torch.load(input_scheduler_file)) logger.info("All scheduler states loaded successfully") for i, dataloader in enumerate(dataloaders): sampler_name = f"{SAMPLER_NAME}.bin" if i == 0 else f"{SAMPLER_NAME}_{i}.bin" input_sampler_file = input_dir.joinpath(sampler_name) # Only load if we have our custom sampler from .data_loader import IterableDatasetShard, SeedableRandomSampler if isinstance(dataloader.dataset, IterableDatasetShard): sampler = dataloader.sampler.sampler if isinstance(sampler, SeedableRandomSampler): dataloader.sampler.sampler = torch.load(input_sampler_file) logger.info("All dataloader sampler states loaded successfully") # GradScaler state if scaler is not None: input_scaler_file = input_dir.joinpath(SCALER_NAME) scaler.load_state_dict(torch.load(input_scaler_file)) logger.info("GradScaler state loaded successfully") # Random states try: states = torch.load(input_dir.joinpath(f"{RNG_STATE_NAME}_{process_index}.pkl")) random.setstate(states["random_state"]) np.random.set_state(states["numpy_random_seed"]) torch.set_rng_state(states["torch_manual_seed"]) if is_xpu_available(): torch.xpu.set_rng_state_all(states["torch_xpu_manual_seed"]) else: torch.cuda.set_rng_state_all(states["torch_cuda_manual_seed"]) if is_tpu_available(): xm.set_rng_state(states["xm_seed"]) logger.info("All random states loaded successfully") except Exception: logger.info("Could not load random states") def save_custom_state(obj, 
path, index: int = 0, save_on_each_node: bool = False): """ Saves the state of `obj` to `{path}/custom_checkpoint_{index}.pkl` """ # Should this be the right way to get a qual_name type value from `obj`? save_location = Path(path) / f"custom_checkpoint_{index}.pkl" logger.info(f"Saving the state of {get_pretty_name(obj)} to {save_location}") save(obj.state_dict(), save_location, save_on_each_node=save_on_each_node) def load_custom_state(obj, path, index: int = 0): """ Loads the state of `obj` at `{path}/custom_checkpoint_{index}.pkl` """ load_location = f"{path}/custom_checkpoint_{index}.pkl" logger.info(f"Loading the state of {get_pretty_name(obj)} from {load_location}") obj.load_state_dict(torch.load(load_location, map_location="cpu"))
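# Usage sketch: these helpers are normally driven through the `Accelerator` API rather
# than being called directly. `my_custom_object` below is a placeholder for any object
# implementing the `state_dict` / `load_state_dict` protocol these functions expect:
#
#     accelerator = Accelerator(project_dir="checkpoints")
#     accelerator.register_for_checkpointing(my_custom_object)  # handled by save_custom_state
#     accelerator.save_state()  # calls save_accelerator_state and save_custom_state
#     accelerator.load_state()  # calls load_accelerator_state and load_custom_state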
accelerate/src/accelerate/checkpointing.py/0
{ "file_path": "accelerate/src/accelerate/checkpointing.py", "repo_id": "accelerate", "token_count": 4641 }
3
# Copyright 2022 The HuggingFace Team and Brian Chao. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ A variety of helper functions and constants when dealing with terminal menu choices, based on https://github.com/bchao1/bullet """ import enum import shutil import sys TERMINAL_WIDTH, _ = shutil.get_terminal_size() CURSOR_TO_CHAR = {"UP": "A", "DOWN": "B", "RIGHT": "C", "LEFT": "D"} class Direction(enum.Enum): UP = 0 DOWN = 1 def forceWrite(content, end=""): sys.stdout.write(str(content) + end) sys.stdout.flush() def writeColor(content, color, end=""): forceWrite(f"\u001b[{color}m{content}\u001b[0m", end) def reset_cursor(): forceWrite("\r") def move_cursor(num_lines: int, direction: str): forceWrite(f"\033[{num_lines}{CURSOR_TO_CHAR[direction.upper()]}") def clear_line(): forceWrite(" " * TERMINAL_WIDTH) reset_cursor() def linebreak(): reset_cursor() forceWrite("-" * TERMINAL_WIDTH)
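# Usage sketch: a highlighted menu entry could be rendered with the helpers above by
# resetting the cursor, clearing the line, printing the entry in green (ANSI color 32),
# and then moving the cursor down one line:
#
#     reset_cursor()
#     clear_line()
#     writeColor("> first option", 32)
#     move_cursor(1, "DOWN")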
accelerate/src/accelerate/commands/menu/helpers.py/0
{ "file_path": "accelerate/src/accelerate/commands/menu/helpers.py", "repo_id": "accelerate", "token_count": 505 }
4
#!/usr/bin/env python # Copyright 2022 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ A collection of utilities for comparing `examples/complete_*_example.py` scripts with the capabilities inside of each `examples/by_feature` example. `compare_against_test` is the main function that should be used when testing, while the others are used to either get the code that matters, or to preprocess them (such as stripping comments) """ import os from typing import List def get_function_contents_by_name(lines: List[str], name: str): """ Extracts a function from `lines` of segmented source code with the name `name`. Args: lines (`List[str]`): Source code of a script seperated by line. name (`str`): The name of the function to extract. Should be either `training_function` or `main` """ if name != "training_function" and name != "main": raise ValueError(f"Incorrect function name passed: {name}, choose either 'main' or 'training_function'") good_lines, found_start = [], False for line in lines: if not found_start and f"def {name}" in line: found_start = True good_lines.append(line) continue if found_start: if name == "training_function" and "def main" in line: return good_lines if name == "main" and "if __name__" in line: return good_lines good_lines.append(line) def clean_lines(lines: List[str]): """ Filters `lines` and removes any entries that start with a comment ('#') or is just a newline ('\n') Args: lines (`List[str]`): Source code of a script seperated by line. """ return [line for line in lines if not line.lstrip().startswith("#") and line != "\n"] def compare_against_test(base_filename: str, feature_filename: str, parser_only: bool, secondary_filename: str = None): """ Tests whether the additional code inside of `feature_filename` was implemented in `base_filename`. This should be used when testing to see if `complete_*_.py` examples have all of the implementations from each of the `examples/by_feature/*` scripts. It utilizes `nlp_example.py` to extract out all of the repeated training code, so that only the new additional code is examined and checked. If something *other* than `nlp_example.py` should be used, such as `cv_example.py` for the `complete_cv_example.py` script, it should be passed in for the `secondary_filename` parameter. Args: base_filename (`str` or `os.PathLike`): The filepath of a single "complete" example script to test, such as `examples/complete_cv_example.py` feature_filename (`str` or `os.PathLike`): The filepath of a single feature example script. The contents of this script are checked to see if they exist in `base_filename` parser_only (`bool`): Whether to compare only the `main()` sections in both files, or to compare the contents of `training_loop()` secondary_filename (`str`, *optional*): A potential secondary filepath that should be included in the check. 
This function extracts the base functionalities off of "examples/nlp_example.py", so if `base_filename` is a script other than `complete_nlp_example.py`, the template script should be included here. Such as `examples/cv_example.py` """ with open(base_filename, "r") as f: base_file_contents = f.readlines() with open(os.path.abspath(os.path.join("examples", "nlp_example.py")), "r") as f: full_file_contents = f.readlines() with open(feature_filename, "r") as f: feature_file_contents = f.readlines() if secondary_filename is not None: with open(secondary_filename, "r") as f: secondary_file_contents = f.readlines() # This is our base, we remove all the code from here in our `full_filename` and `feature_filename` to find the new content if parser_only: base_file_func = clean_lines(get_function_contents_by_name(base_file_contents, "main")) full_file_func = clean_lines(get_function_contents_by_name(full_file_contents, "main")) feature_file_func = clean_lines(get_function_contents_by_name(feature_file_contents, "main")) if secondary_filename is not None: secondary_file_func = clean_lines(get_function_contents_by_name(secondary_file_contents, "main")) else: base_file_func = clean_lines(get_function_contents_by_name(base_file_contents, "training_function")) full_file_func = clean_lines(get_function_contents_by_name(full_file_contents, "training_function")) feature_file_func = clean_lines(get_function_contents_by_name(feature_file_contents, "training_function")) if secondary_filename is not None: secondary_file_func = clean_lines( get_function_contents_by_name(secondary_file_contents, "training_function") ) _dl_line = "train_dataloader, eval_dataloader = get_dataloaders(accelerator, batch_size)\n" # Specific code in our script that differs from the full version, aka what is new new_feature_code = [] passed_idxs = [] # We keep track of the idxs just in case it's a repeated statement it = iter(feature_file_func) for i in range(len(feature_file_func) - 1): if i not in passed_idxs: line = next(it) if (line not in full_file_func) and (line.lstrip() != _dl_line): if "TESTING_MOCKED_DATALOADERS" not in line: new_feature_code.append(line) passed_idxs.append(i) else: # Skip over the `config['num_epochs'] = 2` statement _ = next(it) # Extract out just the new parts from the full_file_training_func new_full_example_parts = [] passed_idxs = [] # We keep track of the idxs just in case it's a repeated statement for i, line in enumerate(base_file_func): if i not in passed_idxs: if (line not in full_file_func) and (line.lstrip() != _dl_line): if "TESTING_MOCKED_DATALOADERS" not in line: new_full_example_parts.append(line) passed_idxs.append(i) # Finally, get the overall diff diff_from_example = [line for line in new_feature_code if line not in new_full_example_parts] if secondary_filename is not None: diff_from_two = [line for line in full_file_contents if line not in secondary_file_func] diff_from_example = [line for line in diff_from_example if line not in diff_from_two] return diff_from_example
accelerate/src/accelerate/test_utils/examples.py/0
{ "file_path": "accelerate/src/accelerate/test_utils/examples.py", "repo_id": "accelerate", "token_count": 2747 }
5
from .constants import ( MODEL_NAME, OPTIMIZER_NAME, RNG_STATE_NAME, SAFE_MODEL_NAME, SAFE_WEIGHTS_INDEX_NAME, SAFE_WEIGHTS_NAME, SAMPLER_NAME, SCALER_NAME, SCHEDULER_NAME, TORCH_DISTRIBUTED_OPERATION_TYPES, TORCH_LAUNCH_PARAMS, WEIGHTS_INDEX_NAME, WEIGHTS_NAME, ) from .dataclasses import ( AutocastKwargs, BnbQuantizationConfig, ComputeEnvironment, CustomDtype, DeepSpeedPlugin, DistributedDataParallelKwargs, DistributedType, DynamoBackend, FP8RecipeKwargs, FullyShardedDataParallelPlugin, GradientAccumulationPlugin, GradScalerKwargs, InitProcessGroupKwargs, KwargsHandler, LoggerType, MegatronLMPlugin, PrecisionType, ProjectConfiguration, RNGType, SageMakerDistributedType, TensorInformation, TorchDynamoPlugin, ) from .environment import ( are_libraries_initialized, check_cuda_p2p_ib_support, check_fp8_capability, get_int_from_env, parse_choice_from_env, parse_flag_from_env, str_to_bool, ) from .imports import ( get_ccl_version, is_4bit_bnb_available, is_8bit_bnb_available, is_aim_available, is_bf16_available, is_bnb_available, is_boto3_available, is_ccl_available, is_clearml_available, is_comet_ml_available, is_cuda_available, is_datasets_available, is_deepspeed_available, is_dvclive_available, is_fp8_available, is_ipex_available, is_megatron_lm_available, is_mlflow_available, is_mps_available, is_msamp_available, is_npu_available, is_pandas_available, is_peft_available, is_rich_available, is_sagemaker_available, is_tensorboard_available, is_timm_available, is_tpu_available, is_transformer_engine_available, is_transformers_available, is_wandb_available, is_xpu_available, ) from .modeling import ( calculate_maximum_sizes, check_device_map, check_tied_parameters_in_config, check_tied_parameters_on_same_device, compute_module_sizes, convert_file_size_to_int, dtype_byte_size, find_tied_parameters, get_balanced_memory, get_max_layer_size, get_max_memory, get_mixed_precision_context_manager, id_tensor_storage, infer_auto_device_map, is_peft_model, load_checkpoint_in_model, load_offloaded_weights, load_state_dict, named_module_tensors, retie_parameters, set_module_tensor_to_device, shard_checkpoint, ) from .offload import ( OffloadedWeightsLoader, PrefixedDataset, extract_submodules_state_dict, load_offloaded_weight, offload_state_dict, offload_weight, save_offload_index, ) from .operations import ( CannotPadNestedTensorWarning, broadcast, broadcast_object_list, concatenate, convert_outputs_to_fp32, convert_to_fp32, find_batch_size, find_device, gather, gather_object, get_data_structure, honor_type, initialize_tensors, is_namedtuple, is_tensor_information, is_torch_tensor, listify, pad_across_processes, recursively_apply, reduce, send_to_device, slice_tensors, ) from .versions import compare_versions, is_torch_version if is_deepspeed_available(): from .deepspeed import ( DeepSpeedEngineWrapper, DeepSpeedOptimizerWrapper, DeepSpeedSchedulerWrapper, DummyOptim, DummyScheduler, HfDeepSpeedConfig, ) from .bnb import has_4bit_bnb_layers, load_and_quantize_model from .fsdp_utils import load_fsdp_model, load_fsdp_optimizer, save_fsdp_model, save_fsdp_optimizer from .launch import ( PrepareForLaunch, _filter_args, prepare_deepspeed_cmd_env, prepare_multi_gpu_env, prepare_sagemager_args_inputs, prepare_simple_launcher_cmd_env, prepare_tpu, ) from .megatron_lm import ( AbstractTrainStep, BertTrainStep, GPTTrainStep, MegatronEngine, MegatronLMDummyDataLoader, MegatronLMDummyScheduler, MegatronLMOptimizerWrapper, MegatronLMSchedulerWrapper, T5TrainStep, avg_losses_across_data_parallel_group, 
gather_across_data_parallel_groups, ) from .megatron_lm import initialize as megatron_lm_initialize from .megatron_lm import prepare_data_loader as megatron_lm_prepare_data_loader from .megatron_lm import prepare_model as megatron_lm_prepare_model from .megatron_lm import prepare_optimizer as megatron_lm_prepare_optimizer from .megatron_lm import prepare_scheduler as megatron_lm_prepare_scheduler from .memory import find_executable_batch_size, release_memory from .other import ( check_os_kernel, clean_state_dict_for_safetensors, clear_environment, convert_bytes, extract_model_from_parallel, get_pretty_name, is_port_in_use, merge_dicts, patch_environment, recursive_getattr, save, wait_for_everyone, write_basic_config, ) from .random import set_seed, synchronize_rng_state, synchronize_rng_states from .torch_xla import install_xla from .tqdm import tqdm from .transformer_engine import convert_model, has_transformer_engine_layers
accelerate/src/accelerate/utils/__init__.py/0
{ "file_path": "accelerate/src/accelerate/utils/__init__.py", "repo_id": "accelerate", "token_count": 2193 }
6
compute_environment: LOCAL_MACHINE deepspeed_config: {} distributed_type: 'NO' downcast_bf16: 'no' fsdp_config: {} gpu_ids: all machine_rank: 0 main_process_ip: null main_process_port: null main_training_function: main megatron_lm_config: {} mixed_precision: 'no' num_machines: 1 num_processes: 1 rdzv_backend: static same_network: true use_cpu: false tpu_name: 'test-tpu' tpu_zone: 'us-central1-a' commands: null command_file: tests/test_samples/test_command_file.sh
accelerate/tests/test_configs/latest.yaml/0
{ "file_path": "accelerate/tests/test_configs/latest.yaml", "repo_id": "accelerate", "token_count": 186 }
7
#!/usr/bin/env python # coding=utf-8 # Copyright 2023 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import logging import random import sys import torch import transformers from transformers import AutoModelForCausalLM, set_seed from alignment import ( DataArguments, DPOConfig, H4ArgumentParser, ModelArguments, apply_chat_template, get_checkpoint, get_datasets, get_kbit_device_map, get_peft_config, get_quantization_config, get_tokenizer, is_adapter_model, ) from peft import PeftConfig, PeftModel from trl import DPOTrainer logger = logging.getLogger(__name__) def main(): parser = H4ArgumentParser((ModelArguments, DataArguments, DPOConfig)) model_args, data_args, training_args = parser.parse() ####### # Setup ####### logging.basicConfig( format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", datefmt="%Y-%m-%d %H:%M:%S", handlers=[logging.StreamHandler(sys.stdout)], ) log_level = training_args.get_process_log_level() logger.setLevel(log_level) transformers.utils.logging.set_verbosity(log_level) transformers.utils.logging.enable_default_handler() transformers.utils.logging.enable_explicit_format() # Log on each process the small summary: logger.info(f"Model parameters {model_args}") logger.info(f"Data parameters {data_args}") logger.info(f"Training/evaluation parameters {training_args}") # Check for last checkpoint last_checkpoint = get_checkpoint(training_args) if last_checkpoint is not None and training_args.resume_from_checkpoint is None: logger.info(f"Checkpoint detected, resuming training at {last_checkpoint=}.") # Set seed for reproducibility set_seed(training_args.seed) ############### # Load datasets ############### raw_datasets = get_datasets(data_args, splits=data_args.dataset_splits) logger.info( f"Training on the following splits: {[split + ' : ' + str(dset.num_rows) for split, dset in raw_datasets.items()]}" ) column_names = list(raw_datasets["train"].features) ##################################### # Load tokenizer and process datasets ##################################### data_args.truncation_side = "left" # Truncate from left to ensure we don't lose labels in final turn tokenizer = get_tokenizer(model_args, data_args) ##################### # Apply chat template ##################### raw_datasets = raw_datasets.map( apply_chat_template, fn_kwargs={"tokenizer": tokenizer, "task": "dpo"}, num_proc=data_args.preprocessing_num_workers, remove_columns=column_names, desc="Formatting comparisons with prompt template", ) # Replace column names with what TRL needs, text_chosen -> chosen and text_rejected -> rejected for split in ["train", "test"]: raw_datasets[split] = raw_datasets[split].rename_columns( {"text_prompt": "prompt", "text_chosen": "chosen", "text_rejected": "rejected"} ) # Log a few random samples from the training set: for index in random.sample(range(len(raw_datasets["train"])), 3): logger.info(f"Prompt sample {index} of the raw training set:\n\n{raw_datasets['train'][index]['prompt']}") logger.info(f"Chosen sample {index} of the 
raw training set:\n\n{raw_datasets['train'][index]['chosen']}") logger.info(f"Rejected sample {index} of the raw training set:\n\n{raw_datasets['train'][index]['rejected']}") torch_dtype = ( model_args.torch_dtype if model_args.torch_dtype in ["auto", None] else getattr(torch, model_args.torch_dtype) ) quantization_config = get_quantization_config(model_args) model_kwargs = dict( revision=model_args.model_revision, trust_remote_code=model_args.trust_remote_code, use_flash_attention_2=model_args.use_flash_attention_2, torch_dtype=torch_dtype, use_cache=False if training_args.gradient_checkpointing else True, device_map=get_kbit_device_map() if quantization_config is not None else None, quantization_config=quantization_config, ) model = model_args.model_name_or_path if is_adapter_model(model, model_args.model_revision) is True: # Load the base model, merge the adapter weights and unload the adapter # Note: to run QLoRA, you will need to merge the base model separately as the merged model in 16bit logger.info(f"Merging PEFT adapters for {model_args.model_name_or_path=}") peft_config = PeftConfig.from_pretrained(model_args.model_name_or_path, revision=model_args.model_revision) model_kwargs = dict( revision=model_args.base_model_revision, trust_remote_code=model_args.trust_remote_code, use_flash_attention_2=model_args.use_flash_attention_2, torch_dtype=torch_dtype, use_cache=False if training_args.gradient_checkpointing else True, ) base_model = AutoModelForCausalLM.from_pretrained( peft_config.base_model_name_or_path, **model_kwargs, ) model = PeftModel.from_pretrained( base_model, model_args.model_name_or_path, revision=model_args.model_revision ) model.eval() model = model.merge_and_unload() model_kwargs = None ref_model = model ref_model_kwargs = model_kwargs if model_args.use_peft is True: ref_model = None ref_model_kwargs = None ######################### # Instantiate DPO trainer ######################### trainer = DPOTrainer( model, ref_model, model_init_kwargs=model_kwargs, ref_model_init_kwargs=ref_model_kwargs, args=training_args, beta=training_args.beta, train_dataset=raw_datasets["train"], eval_dataset=raw_datasets["test"], tokenizer=tokenizer, max_length=training_args.max_length, max_prompt_length=training_args.max_prompt_length, peft_config=get_peft_config(model_args), loss_type=training_args.loss_type, ) ############### # Training loop ############### checkpoint = None if training_args.resume_from_checkpoint is not None: checkpoint = training_args.resume_from_checkpoint elif last_checkpoint is not None: checkpoint = last_checkpoint train_result = trainer.train(resume_from_checkpoint=checkpoint) metrics = train_result.metrics metrics["train_samples"] = len(raw_datasets["train"]) trainer.log_metrics("train", metrics) trainer.save_metrics("train", metrics) trainer.save_state() logger.info("*** Training complete ***") ########## # Evaluate ########## if training_args.do_eval: logger.info("*** Evaluate ***") metrics = trainer.evaluate() metrics["eval_samples"] = len(raw_datasets["test"]) trainer.log_metrics("eval", metrics) trainer.save_metrics("eval", metrics) ################################## # Save model and create model card ################################## logger.info("*** Save model ***") trainer.save_model(training_args.output_dir) logger.info(f"Model saved to {training_args.output_dir}") # Save everything else on main process kwargs = { "finetuned_from": model_args.model_name_or_path, "dataset": list(data_args.dataset_mixer.keys()), "dataset_tags": 
list(data_args.dataset_mixer.keys()), "tags": ["alignment-handbook"], } if trainer.accelerator.is_main_process: trainer.create_model_card(**kwargs) # Restore k,v cache for fast inference trainer.model.config.use_cache = True trainer.model.config.save_pretrained(training_args.output_dir) if training_args.push_to_hub is True: logger.info("Pushing to hub...") trainer.push_to_hub(**kwargs) logger.info("*** Training complete! ***") if __name__ == "__main__": main()
alignment-handbook/scripts/run_dpo.py/0
{ "file_path": "alignment-handbook/scripts/run_dpo.py", "repo_id": "alignment-handbook", "token_count": 3365 }
8
# Creating apps
candle/candle-book/src/apps/README.md/0
{ "file_path": "candle/candle-book/src/apps/README.md", "repo_id": "candle", "token_count": 4 }
9
# Running a model To run an existing model, you will need to download and use its pretrained weights. Most models are already available on https://huggingface.co/ in the [`safetensors`](https://github.com/huggingface/safetensors) format. Let's get started by running an old model: `bert-base-uncased`.
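If you do not already have the weights locally, one way to fetch them is with the `huggingface_hub` Python package, shown here purely for illustration (the candle examples typically use the Rust `hf-hub` crate to do the same thing):

```python
from huggingface_hub import hf_hub_download

# Download the safetensors weights and model config for bert-base-uncased.
weights_path = hf_hub_download(repo_id="bert-base-uncased", filename="model.safetensors")
config_path = hf_hub_download(repo_id="bert-base-uncased", filename="config.json")
print(weights_path, config_path)
```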
candle/candle-book/src/inference/inference.md/0
{ "file_path": "candle/candle-book/src/inference/inference.md", "repo_id": "candle", "token_count": 88 }
10
use crate::benchmarks::{BenchDevice, BenchDeviceHandler}; use candle_core::{DType, Device, Tensor}; use criterion::{black_box, criterion_group, Criterion, Throughput}; use std::time::Instant; fn run(a: &Tensor, b: &Tensor, c: &Tensor) { a.where_cond(b, c).unwrap(); } const fn create_cond_arr<const N: usize>() -> [u8; N] { let mut arr = [0u8; N]; let mut i = 0; while i < N { arr[i] = (i % 2) as u8; i += 1; } arr } const B: usize = 1; const M: usize = 1024; const K: usize = 1024; const SIZE: usize = B * M * K; const DATA: [u8; SIZE] = create_cond_arr::<SIZE>(); fn run_where_cond_benchmark(c: &mut Criterion, device: &Device, dtype: DType, name: &str) { let tensor = Tensor::from_slice(DATA.as_slice(), (B, M, K), &device).unwrap(); let on_true = Tensor::ones((B, M, K), dtype, &device).unwrap(); let on_false = Tensor::zeros((B, M, K), dtype, &device).unwrap(); let elements = B * M * K; // E.g. 2 f32 tensors + 1 u8 tensor let flops = (2 * elements * dtype.size_in_bytes()) + elements; let mut group = c.benchmark_group(device.bench_name(name)); group.throughput(Throughput::Bytes(flops as u64)); group.bench_function("iter", move |b| { b.iter_custom(|iters| { let start = Instant::now(); for _i in 0..iters { run( black_box(&tensor), black_box(&on_true), black_box(&on_false), ); } device.sync().unwrap(); start.elapsed() }) }); group.finish(); } fn criterion_benchmark(c: &mut Criterion) { let device = BenchDeviceHandler::new().unwrap(); for d in device.devices { run_where_cond_benchmark(c, &d, DType::F32, "where_cond_f32"); run_where_cond_benchmark(c, &d, DType::BF16, "where_cond_bf16"); run_where_cond_benchmark(c, &d, DType::F16, "where_cond_f16"); } } criterion_group!(benches, criterion_benchmark);
candle/candle-core/benches/benchmarks/where_cond.rs/0
{ "file_path": "candle/candle-core/benches/benchmarks/where_cond.rs", "repo_id": "candle", "token_count": 942 }
11
use crate::backend::{BackendDevice, BackendStorage}; use crate::op::{BinaryOpT, CmpOp, ReduceOp, UnaryOpT}; use crate::{DType, Error, IntDType, Layout, Result, Shape, WithDType}; use half::{bf16, f16}; use rayon::prelude::*; const USE_IM2COL_CONV1D: bool = true; const USE_IM2COL_CONV2D: bool = true; // TODO: Maybe we should not implement [Clone] here and instead have an explicit allocator + // intercept the oom errors to avoid panicking and provide a proper error. #[derive(Debug, Clone)] pub enum CpuStorage { U8(Vec<u8>), U32(Vec<u32>), I64(Vec<i64>), BF16(Vec<bf16>), F16(Vec<f16>), F32(Vec<f32>), F64(Vec<f64>), } #[derive(Debug, Clone)] pub struct CpuDevice; pub trait Map1 { fn f<T: WithDType>(&self, vs: &[T], layout: &Layout) -> Result<Vec<T>>; fn map(&self, vs: &CpuStorage, layout: &Layout) -> Result<CpuStorage> { match vs { CpuStorage::U8(vs) => Ok(CpuStorage::U8(self.f(vs, layout)?)), CpuStorage::U32(vs) => Ok(CpuStorage::U32(self.f(vs, layout)?)), CpuStorage::I64(vs) => Ok(CpuStorage::I64(self.f(vs, layout)?)), CpuStorage::BF16(vs) => Ok(CpuStorage::BF16(self.f(vs, layout)?)), CpuStorage::F16(vs) => Ok(CpuStorage::F16(self.f(vs, layout)?)), CpuStorage::F32(vs) => Ok(CpuStorage::F32(self.f(vs, layout)?)), CpuStorage::F64(vs) => Ok(CpuStorage::F64(self.f(vs, layout)?)), } } } pub trait Map1Any { fn f<T: WithDType, W: Fn(Vec<T>) -> CpuStorage>( &self, vs: &[T], layout: &Layout, wrap: W, ) -> Result<CpuStorage>; fn map(&self, vs: &CpuStorage, layout: &Layout) -> Result<CpuStorage> { match vs { CpuStorage::U8(vs) => Ok(self.f(vs, layout, CpuStorage::U8)?), CpuStorage::U32(vs) => Ok(self.f(vs, layout, CpuStorage::U32)?), CpuStorage::I64(vs) => Ok(self.f(vs, layout, CpuStorage::I64)?), CpuStorage::BF16(vs) => Ok(self.f(vs, layout, CpuStorage::BF16)?), CpuStorage::F16(vs) => Ok(self.f(vs, layout, CpuStorage::F16)?), CpuStorage::F32(vs) => Ok(self.f(vs, layout, CpuStorage::F32)?), CpuStorage::F64(vs) => Ok(self.f(vs, layout, CpuStorage::F64)?), } } } type C = CpuStorage; pub trait Map2 { const OP: &'static str; fn f<T: WithDType>(&self, v1: &[T], l1: &Layout, v2: &[T], l2: &Layout) -> Result<Vec<T>>; fn map( &self, v1: &CpuStorage, l1: &Layout, v2: &CpuStorage, l2: &Layout, ) -> Result<CpuStorage> { match (v1, v2) { (C::U8(v1), C::U8(v2)) => Ok(C::U8(self.f(v1, l1, v2, l2)?)), (C::U32(v1), C::U32(v2)) => Ok(C::U32(self.f(v1, l1, v2, l2)?)), (C::I64(v1), C::I64(v2)) => Ok(C::I64(self.f(v1, l1, v2, l2)?)), (C::BF16(v1), C::BF16(v2)) => Ok(C::BF16(self.f(v1, l1, v2, l2)?)), (C::F16(v1), C::F16(v2)) => Ok(C::F16(self.f(v1, l1, v2, l2)?)), (C::F32(v1), C::F32(v2)) => Ok(C::F32(self.f(v1, l1, v2, l2)?)), (C::F64(v1), C::F64(v2)) => Ok(C::F64(self.f(v1, l1, v2, l2)?)), _ => Err(Error::DTypeMismatchBinaryOp { lhs: v1.dtype(), rhs: v2.dtype(), op: Self::OP, } .bt()), } } } pub trait Map2U8 { const OP: &'static str; fn f<T: WithDType>(&self, v1: &[T], l1: &Layout, v2: &[T], l2: &Layout) -> Result<Vec<u8>>; fn map( &self, v1: &CpuStorage, l1: &Layout, v2: &CpuStorage, l2: &Layout, ) -> Result<CpuStorage> { match (v1, v2) { (C::U8(v1), C::U8(v2)) => Ok(C::U8(self.f(v1, l1, v2, l2)?)), (C::U32(v1), C::U32(v2)) => Ok(C::U8(self.f(v1, l1, v2, l2)?)), (C::I64(v1), C::I64(v2)) => Ok(C::U8(self.f(v1, l1, v2, l2)?)), (C::BF16(v1), C::BF16(v2)) => Ok(C::U8(self.f(v1, l1, v2, l2)?)), (C::F16(v1), C::F16(v2)) => Ok(C::U8(self.f(v1, l1, v2, l2)?)), (C::F32(v1), C::F32(v2)) => Ok(C::U8(self.f(v1, l1, v2, l2)?)), (C::F64(v1), C::F64(v2)) => Ok(C::U8(self.f(v1, l1, v2, l2)?)), _ => Err(Error::DTypeMismatchBinaryOp { 
lhs: v1.dtype(), rhs: v2.dtype(), op: Self::OP, } .bt()), } } } struct Cmp(CmpOp); impl Map2U8 for Cmp { const OP: &'static str = "cmp"; #[inline(always)] fn f<T: WithDType>( &self, lhs: &[T], lhs_l: &Layout, rhs: &[T], rhs_l: &Layout, ) -> Result<Vec<u8>> { let dst = match self.0 { CmpOp::Eq => binary_map(lhs_l, rhs_l, lhs, rhs, |x, y| u8::from(x == y)), CmpOp::Ne => binary_map(lhs_l, rhs_l, lhs, rhs, |x, y| u8::from(x != y)), CmpOp::Lt => binary_map(lhs_l, rhs_l, lhs, rhs, |x, y| u8::from(x < y)), CmpOp::Le => binary_map(lhs_l, rhs_l, lhs, rhs, |x, y| u8::from(x <= y)), CmpOp::Gt => binary_map(lhs_l, rhs_l, lhs, rhs, |x, y| u8::from(x > y)), CmpOp::Ge => binary_map(lhs_l, rhs_l, lhs, rhs, |x, y| u8::from(x >= y)), }; Ok(dst) } } struct WCond<'a, T: IntDType>(&'a [T], &'a Layout); impl<'a, I: IntDType> Map2 for WCond<'a, I> { const OP: &'static str = "where"; #[inline(always)] fn f<T: WithDType>(&self, t: &[T], t_l: &Layout, f: &[T], f_l: &Layout) -> Result<Vec<T>> { let vs = match ( self.1.contiguous_offsets(), t_l.contiguous_offsets(), f_l.contiguous_offsets(), ) { (Some((o1, o2)), Some((o_t1, o_t2)), Some((o_f1, o_f2))) => { let pred = &self.0[o1..o2]; let t = &t[o_t1..o_t2]; let f = &f[o_f1..o_f2]; pred.iter() .zip(t.iter().zip(f.iter())) .map(|(p, (&t, &f))| if p.is_true() { t } else { f }) .collect::<Vec<_>>() } _ => self .1 .strided_index() .zip(t_l.strided_index().zip(f_l.strided_index())) .map(|(i_p, (i_t, i_f))| { if self.0[i_p].is_true() { t[i_t] } else { f[i_f] } }) .collect::<Vec<_>>(), }; Ok(vs) } } struct ReduceIndex { reduce_dim_index: usize, use_min: bool, return_index: bool, } impl ReduceIndex { // The value gets replaced if f(s[current_acc], s[i]) returns true. #[inline(always)] fn fold_impl<T, U, F, G>(&self, src: &[T], src_l: &Layout, f: F, g: G) -> Result<Vec<U>> where T: Clone + Copy, U: Clone + Copy, F: Fn(T, T) -> bool, G: Fn(T, usize) -> U, { let reduce_dim_size = src_l.dims()[self.reduce_dim_index]; let reduce_dim_stride = src_l.stride()[self.reduce_dim_index]; let dst_len = src_l.shape().elem_count() / reduce_dim_size; let mut dst: Vec<U> = Vec::with_capacity(dst_len); let dst_to_set = dst.spare_capacity_mut(); let dst_to_set = unsafe { std::mem::transmute::<_, &mut [U]>(dst_to_set) }; match src_l.contiguous_offsets() { Some((o1, o2)) => { let src = &src[o1..o2]; if reduce_dim_stride == 1 { for (start_src_i, dst_v) in dst_to_set.iter_mut().enumerate() { let start_src_i = start_src_i * reduce_dim_size; let src = &src[start_src_i..start_src_i + reduce_dim_size]; let mut acc = 0; let mut val = src[0]; for (src_i, &s) in src.iter().enumerate() { if f(val, s) { acc = src_i; val = s } } *dst_v = g(val, acc) } } else { for (start_src_i, dst_v) in dst_to_set.iter_mut().enumerate() { let (p, q) = ( start_src_i / reduce_dim_stride, start_src_i % reduce_dim_stride, ); // start_src_i = p * reduce_dim_stride + q let start_src_i = p * reduce_dim_stride * reduce_dim_size + q; let src = &src[start_src_i..]; let mut acc = 0; let mut val = src[0]; for src_i in 0..reduce_dim_size { let s = src[src_i * reduce_dim_stride]; if f(val, s) { acc = src_i; val = s } } *dst_v = g(val, acc) } } } None => { let l = src_l.narrow(self.reduce_dim_index, 0, 1)?; for (unstr_index, src_index) in l.strided_index().enumerate() { let src = &src[src_index..]; let mut acc = 0; let mut val = src[0]; for src_i in 0..reduce_dim_size { let s = src[src_i * reduce_dim_stride]; if f(val, s) { acc = src_i; val = s } } dst_to_set[unstr_index] = g(val, acc) } } } unsafe { dst.set_len(dst_len) }; Ok(dst) } } 
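// --- Editorial sketch (not part of the original source) ---------------------
// `ReduceIndex::fold_impl` above walks the reduced dimension either with a
// stride of 1 (innermost, contiguous case) or with an explicit
// `reduce_dim_stride`. The standalone function below illustrates the same
// accumulator pattern for an argmax over the last dimension of a contiguous
// row-major buffer. The names (`argmax_last_dim`, `rows`, `cols`) are
// illustrative only and do not exist in this crate.
#[allow(dead_code)]
fn argmax_last_dim(src: &[f32], rows: usize, cols: usize) -> Vec<u32> {
    assert!(cols > 0);
    assert_eq!(src.len(), rows * cols);
    let mut dst = Vec::with_capacity(rows);
    for r in 0..rows {
        let row = &src[r * cols..(r + 1) * cols];
        let mut acc = 0usize;
        let mut val = row[0];
        for (i, &s) in row.iter().enumerate() {
            // Mirrors `if f(val, s) { acc = src_i; val = s }` with `f = |x, y| x < y`,
            // i.e. the (return_index = true, use_min = false) branch of `ReduceIndex`.
            if val < s {
                acc = i;
                val = s;
            }
        }
        dst.push(acc as u32);
    }
    dst
}
// Example: argmax_last_dim(&[1., 3., 2., 0., 5., 4.], 2, 3) == vec![1, 1].
// -----------------------------------------------------------------------------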
impl Map1Any for ReduceIndex { #[inline(always)] fn f<T: WithDType, W: Fn(Vec<T>) -> CpuStorage>( &self, src: &[T], src_l: &Layout, wrap: W, ) -> Result<CpuStorage> { if src_l.shape().elem_count() == 0 { Err(Error::EmptyTensor { op: "reduce" }.bt())? } let dst = match (self.return_index, self.use_min) { (false, true) => wrap(self.fold_impl(src, src_l, |x, y| x > y, |v, _i| v)?), (false, false) => wrap(self.fold_impl(src, src_l, |x, y| x < y, |v, _i| v)?), (true, true) => { CpuStorage::U32(self.fold_impl(src, src_l, |x, y| x > y, |_v, i| i as u32)?) } (true, false) => { CpuStorage::U32(self.fold_impl(src, src_l, |x, y| x < y, |_v, i| i as u32)?) } }; Ok(dst) } } struct ReduceSum<'a> { dst_shape: &'a Shape, reduce_dims: &'a [usize], reduce_dims_and_stride: Vec<(usize, usize)>, } impl<'a> ReduceSum<'a> { #[inline(always)] fn fold_impl<T>(&self, src: &[T], src_l: &Layout, start_elt: T) -> Result<Vec<T>> where T: WithDType, { let mut dst = vec![start_elt; self.dst_shape.elem_count()]; match src_l.contiguous_offsets() { Some((o1, o2)) => { let src = &src[o1..o2]; // Handle the case where we reduce over the last dimensions separately as it is // fairly common and easy to optimize. This rely on the layout being contiguous! // reduce_dims is sorted, check if it is ranging from a to n-1. let reduce_over_last_dims = self .reduce_dims .iter() .rev() .enumerate() .all(|(i, &v)| v == src_l.shape().rank() - 1 - i); if reduce_over_last_dims { let reduce_sz = self .reduce_dims_and_stride .iter() .map(|(u, _)| u) .product::<usize>(); for (dst_i, dst_v) in dst.iter_mut().enumerate() { let src_i = dst_i * reduce_sz; unsafe { T::vec_reduce_sum( src[src_i..src_i + reduce_sz].as_ptr(), dst_v, reduce_sz, ) }; } return Ok(dst); }; for (unstr_index, &src) in src.iter().enumerate() { let mut dst_index = unstr_index; // Set the reduce_dims indexes to 0. for &(dim, stride) in self.reduce_dims_and_stride.iter() { // The compiler is able to optimize the following in a single divmod op. let (pre, post) = (dst_index / stride, dst_index % stride); dst_index = (pre / dim) * stride + post; } dst[dst_index] += src; } } None => { for (unstr_index, src_index) in src_l.strided_index().enumerate() { let mut dst_index = unstr_index; // Set the reduce_dims indexes to 0. for &(dim, stride) in self.reduce_dims_and_stride.iter() { // The compiler is able to optimize the following in a single divmod op. let (pre, post) = (dst_index / stride, dst_index % stride); dst_index = (pre / dim) * stride + post; } dst[dst_index] += src[src_index]; } } } Ok(dst) } } impl<'a> Map1 for ReduceSum<'a> { #[inline(always)] fn f<T: WithDType>(&self, src: &[T], src_l: &Layout) -> Result<Vec<T>> { self.fold_impl(src, src_l, T::zero()) } } pub fn unary_map<T: Copy, U: Copy, F: FnMut(T) -> U>( vs: &[T], layout: &Layout, mut f: F, ) -> Vec<U> { match layout.strided_blocks() { crate::StridedBlocks::SingleBlock { start_offset, len } => vs [start_offset..start_offset + len] .iter() .map(|&v| f(v)) .collect(), crate::StridedBlocks::MultipleBlocks { block_start_index, block_len, } => { let mut result = Vec::with_capacity(layout.shape().elem_count()); // Specialize the case where block_len is one to avoid the second loop. 
if block_len == 1 { for index in block_start_index { let v = unsafe { vs.get_unchecked(index) }; result.push(f(*v)) } } else { for index in block_start_index { for offset in 0..block_len { let v = unsafe { vs.get_unchecked(index + offset) }; result.push(f(*v)) } } } result } } } pub fn unary_map_vec<T: Copy, U: Copy, F: FnMut(T) -> U, FV: FnMut(&[T], &mut [U])>( vs: &[T], layout: &Layout, mut f: F, mut f_vec: FV, ) -> Vec<U> { match layout.strided_blocks() { crate::StridedBlocks::SingleBlock { start_offset, len } => { let mut ys: Vec<U> = Vec::with_capacity(len); let ys_to_set = ys.spare_capacity_mut(); let ys_to_set = unsafe { std::mem::transmute::<_, &mut [U]>(ys_to_set) }; f_vec(&vs[start_offset..start_offset + len], ys_to_set); // SAFETY: values are all set by f_vec. unsafe { ys.set_len(len) }; ys } crate::StridedBlocks::MultipleBlocks { block_start_index, block_len, } => { let el_count = layout.shape().elem_count(); // Specialize the case where block_len is one to avoid the second loop. if block_len == 1 { let mut result = Vec::with_capacity(el_count); for index in block_start_index { let v = unsafe { vs.get_unchecked(index) }; result.push(f(*v)) } result } else { let mut ys: Vec<U> = Vec::with_capacity(el_count); let ys_to_set = ys.spare_capacity_mut(); let ys_to_set = unsafe { std::mem::transmute::<_, &mut [U]>(ys_to_set) }; let mut dst_index = 0; for src_index in block_start_index { let vs = &vs[src_index..src_index + block_len]; let ys = &mut ys_to_set[dst_index..dst_index + block_len]; f_vec(vs, ys); dst_index += block_len; } // SAFETY: values are all set by f_vec. unsafe { ys.set_len(el_count) }; ys } } } } // This function maps over two strided index sequences. pub fn binary_map<T: Copy, U: Copy, F: FnMut(T, T) -> U>( lhs_l: &Layout, rhs_l: &Layout, lhs: &[T], rhs: &[T], mut f: F, ) -> Vec<U> { match (lhs_l.contiguous_offsets(), rhs_l.contiguous_offsets()) { (Some((o_l1, o_l2)), Some((o_r1, o_r2))) => lhs[o_l1..o_l2] .iter() .zip(rhs[o_r1..o_r2].iter()) .map(|(&l, &r)| f(l, r)) .collect(), (Some((o_l1, o_l2)), None) => { // TODO: Maybe we want to avoid going through the layout twice. match rhs_l.offsets_b() { Some(ob) => { let mut i_in_block = 0; let mut i_right_broadcast = 0; lhs[o_l1..o_l2] .iter() .map(|&l| { let r = unsafe { rhs.get_unchecked(i_in_block + ob.start) }; i_right_broadcast += 1; if i_right_broadcast >= ob.right_broadcast { i_in_block += 1; i_right_broadcast = 0; } if i_in_block >= ob.len { i_in_block = 0 } f(l, *r) }) .collect() } None => lhs_l .strided_index() .zip(rhs_l.strided_index()) .map(|(lhs_i, rhs_i)| f(lhs[lhs_i], rhs[rhs_i])) .collect(), } } (None, Some((o_r1, o_r2))) => { // TODO: Maybe we want to avoid going through the layout twice. match lhs_l.offsets_b() { Some(ob) => { let mut i_in_block = 0; let mut i_right_broadcast = 0; rhs[o_r1..o_r2] .iter() .map(|&r| { let l = unsafe { lhs.get_unchecked(i_in_block + ob.start) }; i_right_broadcast += 1; if i_right_broadcast >= ob.right_broadcast { i_in_block += 1; i_right_broadcast = 0; } if i_in_block >= ob.len { i_in_block = 0 } f(*l, r) }) .collect() } None => lhs_l .strided_index() .zip(rhs_l.strided_index()) .map(|(lhs_i, rhs_i)| f(lhs[lhs_i], rhs[rhs_i])) .collect(), } } _ => lhs_l .strided_index() .zip(rhs_l.strided_index()) .map(|(lhs_i, rhs_i)| f(lhs[lhs_i], rhs[rhs_i])) .collect(), } } // Similar to binary_map but with vectorized variants. 
pub fn binary_map_vec<T: Copy, F: FnMut(T, T) -> T, FV: FnMut(&[T], &[T], &mut [T])>( lhs_l: &Layout, rhs_l: &Layout, lhs: &[T], rhs: &[T], mut f: F, mut f_vec: FV, ) -> Vec<T> { let el_count = lhs_l.shape().elem_count(); match (lhs_l.contiguous_offsets(), rhs_l.contiguous_offsets()) { (Some((o_l1, o_l2)), Some((o_r1, o_r2))) => { let mut ys: Vec<T> = Vec::with_capacity(el_count); let ys_to_set = ys.spare_capacity_mut(); let ys_to_set = unsafe { std::mem::transmute::<_, &mut [T]>(ys_to_set) }; f_vec(&lhs[o_l1..o_l2], &rhs[o_r1..o_r2], ys_to_set); // SAFETY: values are all set by f_vec. unsafe { ys.set_len(el_count) }; ys } (Some((o_l1, o_l2)), None) => match rhs_l.offsets_b() { Some(ob) if ob.right_broadcast == 1 => { let rhs = &rhs[ob.start..ob.start + ob.len]; let mut ys: Vec<T> = Vec::with_capacity(el_count); let ys_to_set = ys.spare_capacity_mut(); let ys_to_set = unsafe { std::mem::transmute::<_, &mut [T]>(ys_to_set) }; let mut dst_i = 0; for src_i in (o_l1..o_l2).step_by(ob.len) { f_vec( &lhs[src_i..src_i + ob.len], rhs, &mut ys_to_set[dst_i..dst_i + ob.len], ); dst_i += ob.len; } // SAFETY: values are all set by f_vec. unsafe { ys.set_len(el_count) }; ys } Some(ob) => { let rhs = &rhs[ob.start..ob.start + ob.len]; let mut ys = lhs[o_l1..o_l2].to_vec(); for idx_l in 0..ob.left_broadcast { let start = idx_l * ob.len * ob.right_broadcast; for (i, &r) in rhs.iter().enumerate() { let start = start + i * ob.right_broadcast; for v in ys[start..start + ob.right_broadcast].iter_mut() { *v = f(*v, r) } } } ys } None => lhs_l .strided_index() .zip(rhs_l.strided_index()) .map(|(lhs_i, rhs_i)| f(lhs[lhs_i], rhs[rhs_i])) .collect(), }, (None, Some((o_r1, o_r2))) => match lhs_l.offsets_b() { Some(ob) if ob.right_broadcast == 1 => { let lhs = &lhs[ob.start..ob.start + ob.len]; let mut ys: Vec<T> = Vec::with_capacity(el_count); let ys_to_set = ys.spare_capacity_mut(); let ys_to_set = unsafe { std::mem::transmute::<_, &mut [T]>(ys_to_set) }; let mut dst_i = 0; for src_i in (o_r1..o_r2).step_by(ob.len) { f_vec( lhs, &rhs[src_i..src_i + ob.len], &mut ys_to_set[dst_i..dst_i + ob.len], ); dst_i += ob.len; } // SAFETY: values are all set by f_vec. 
unsafe { ys.set_len(el_count) }; ys } Some(ob) => { let lhs = &lhs[ob.start..ob.start + ob.len]; let mut ys = rhs[o_r1..o_r2].to_vec(); for idx_l in 0..ob.left_broadcast { let start = idx_l * ob.len * ob.right_broadcast; for (i, &l) in lhs.iter().enumerate() { let start = start + i * ob.right_broadcast; for v in ys[start..start + ob.right_broadcast].iter_mut() { *v = f(l, *v) } } } ys } None => lhs_l .strided_index() .zip(rhs_l.strided_index()) .map(|(lhs_i, rhs_i)| f(lhs[lhs_i], rhs[rhs_i])) .collect(), }, _ => lhs_l .strided_index() .zip(rhs_l.strided_index()) .map(|(lhs_i, rhs_i)| f(lhs[lhs_i], rhs[rhs_i])) .collect(), } } struct Affine(f64, f64); impl Map1 for Affine { fn f<T: WithDType>(&self, vs: &[T], layout: &Layout) -> Result<Vec<T>> { let mul = T::from_f64(self.0); let add = T::from_f64(self.1); Ok(unary_map(vs, layout, |v| v * mul + add)) } } struct AvgPool2D((usize, usize), (usize, usize)); impl Map1 for AvgPool2D { fn f<T: WithDType>(&self, src: &[T], layout: &Layout) -> Result<Vec<T>> { // https://pytorch.org/docs/stable/generated/torch.nn.AvgPool2d.html let (k_h, k_w) = self.0; let (s_h, s_w) = self.1; let (b_sz, c, h, w) = layout.shape().dims4()?; let stride = layout.stride(); let (stride_h, stride_w) = (stride[2], stride[3]); let h_out = (h - k_h) / s_h + 1; let w_out = (w - k_w) / s_w + 1; let src_index = layout.start_offset(); let mut dst = vec![T::zero(); b_sz * c * h_out * w_out]; let scale = 1f64 / (k_h * k_w) as f64; let scale = T::from_f64(scale); for b_idx in 0..b_sz { let dst = &mut dst[b_idx * c * h_out * w_out..]; let src_index = src_index + b_idx * stride[0]; for c_idx in 0..c { let dst = &mut dst[c_idx * h_out * w_out..]; let src_index = src_index + c_idx * stride[1]; for h_idx in 0..h_out { for w_idx in 0..w_out { let mut sum = T::zero(); for m in 0..k_h { for n in 0..k_w { let m = s_h * h_idx + m; let n = s_w * w_idx + n; sum += src[src_index + m * stride_h + n * stride_w] } } dst[h_idx * w_out + w_idx] = sum * scale; } } } } Ok(dst) } } struct MaxPool2D((usize, usize), (usize, usize)); impl Map1 for MaxPool2D { fn f<T: WithDType>(&self, src: &[T], layout: &Layout) -> Result<Vec<T>> { // https://pytorch.org/docs/stable/generated/torch.nn.MaxPool2d.html let (k_h, k_w) = self.0; let (s_h, s_w) = self.1; let (b_sz, c, h, w) = layout.shape().dims4()?; let stride = layout.stride(); let (stride_h, stride_w) = (stride[2], stride[3]); let h_out = (h - k_h) / s_h + 1; let w_out = (w - k_w) / s_w + 1; let src_index = layout.start_offset(); let mut dst = vec![T::zero(); b_sz * c * h_out * w_out]; for b_idx in 0..b_sz { let dst = &mut dst[b_idx * c * h_out * w_out..]; let src_index = src_index + b_idx * stride[0]; for c_idx in 0..c { let dst = &mut dst[c_idx * h_out * w_out..]; let src_index = src_index + c_idx * stride[1]; for h_idx in 0..h_out { for w_idx in 0..w_out { let mut largest = src[src_index + s_h * h_idx * stride_h + s_w * w_idx * stride_w]; for m in 0..k_h { for n in 0..k_w { let m = s_h * h_idx + m; let n = s_w * w_idx + n; if largest < src[src_index + m * stride_h + n * stride_w] { largest = src[src_index + m * stride_h + n * stride_w] } } } dst[h_idx * w_out + w_idx] = largest; } } } } Ok(dst) } } struct UpsampleNearest1D(usize); impl Map1 for UpsampleNearest1D { fn f<T: WithDType>(&self, src: &[T], layout: &Layout) -> Result<Vec<T>> { // TODO: Specialized implementation for the case 2*sz? 
let dst_sz = self.0; let (b_sz, c, src_sz) = layout.shape().dims3()?; let stride = layout.stride(); let stride_sz = stride[2]; let src_index = layout.start_offset(); let scale_sz = src_sz as f64 / dst_sz as f64; let mut dst = vec![T::zero(); b_sz * c * dst_sz]; let src_idxs = (0..dst_sz) .map(|idx| usize::min(src_sz - 1, (idx as f64 * scale_sz) as usize)) .collect::<Vec<_>>(); for b_idx in 0..b_sz { let dst = &mut dst[b_idx * c * dst_sz..]; let src_index = src_index + b_idx * stride[0]; for c_idx in 0..c { let dst = &mut dst[c_idx * dst_sz..]; let src_index = src_index + c_idx * stride[1]; for (idx, src_idx) in src_idxs.iter().enumerate() { dst[idx] = src[src_index + src_idx * stride_sz] } } } Ok(dst) } } struct UpsampleNearest2D(usize, usize); impl Map1 for UpsampleNearest2D { fn f<T: WithDType>(&self, src: &[T], layout: &Layout) -> Result<Vec<T>> { // TODO: Specialized implementation for the case 2*h, 2*w? let (dst_h, dst_w) = (self.0, self.1); let (b_sz, c, src_h, src_w) = layout.shape().dims4()?; let stride = layout.stride(); let (stride_h, stride_w) = (stride[2], stride[3]); let src_index = layout.start_offset(); let scale_h = src_h as f64 / dst_h as f64; let scale_w = src_w as f64 / dst_w as f64; let mut dst = vec![T::zero(); b_sz * c * dst_h * dst_w]; let src_h_idxs = (0..dst_h) .map(|h_idx| usize::min(src_h - 1, (h_idx as f64 * scale_h) as usize)) .collect::<Vec<_>>(); let src_w_idxs = (0..dst_w) .map(|w_idx| usize::min(src_w - 1, (w_idx as f64 * scale_w) as usize)) .collect::<Vec<_>>(); for b_idx in 0..b_sz { let dst = &mut dst[b_idx * c * dst_h * dst_w..]; let src_index = src_index + b_idx * stride[0]; for c_idx in 0..c { let dst = &mut dst[c_idx * dst_h * dst_w..]; let src_index = src_index + c_idx * stride[1]; for (h_idx, src_h_idx) in src_h_idxs.iter().enumerate() { for (w_idx, src_w_idx) in src_w_idxs.iter().enumerate() { let src_index = src_index + src_h_idx * stride_h + src_w_idx * stride_w; dst[h_idx * dst_w + w_idx] = src[src_index] } } } } Ok(dst) } } struct Gather<'a, I: IntDType> { ids: &'a [I], ids_l: &'a Layout, dim: usize, } impl<'a, I: IntDType> Map1 for Gather<'a, I> { fn f<T: WithDType>(&self, src: &[T], src_l: &Layout) -> Result<Vec<T>> { let ids = match self.ids_l.contiguous_offsets() { Some((a, b)) => &self.ids[a..b], None => Err(Error::RequiresContiguous { op: "gather" }.bt())?, }; let src = match src_l.contiguous_offsets() { Some((a, b)) => &src[a..b], None => Err(Error::RequiresContiguous { op: "gather" }.bt())?, }; let dim = self.dim; let ids_dims = self.ids_l.dims(); let src_dims = src_l.dims(); let dst_len: usize = ids_dims.iter().product(); let dst_left_len: usize = ids_dims[..dim].iter().product(); let dst_dim_len = ids_dims[dim]; let dst_right_len: usize = ids_dims[dim + 1..].iter().product(); let src_dim_len = src_dims[dim]; let src_right_len: usize = src_dims[dim + 1..].iter().product(); let mut dst = vec![T::zero(); dst_len]; for left_i in 0..dst_left_len { let start_src_idx = left_i * src_right_len * src_dim_len; let start_dst_idx = left_i * dst_right_len * dst_dim_len; for i in 0..dst_dim_len { let start_dst_idx = start_dst_idx + i * dst_right_len; for right_i in 0..dst_right_len { let dst_idx = start_dst_idx + right_i; let index = ids[dst_idx].as_usize(); if index >= src_dim_len { Err(Error::InvalidIndex { index, size: src_dim_len, op: "gather", } .bt())? 
} let src_idx = start_src_idx + index * src_right_len + right_i; dst[dst_idx] = src[src_idx] } } } Ok(dst) } } struct IndexSelect<'a, T: IntDType> { ids: &'a [T], ids_l: &'a Layout, dim: usize, } impl<'a, I: IntDType> Map1 for IndexSelect<'a, I> { fn f<T: WithDType>(&self, src: &[T], layout: &Layout) -> Result<Vec<T>> { let src = match layout.contiguous_offsets() { Some((a, b)) => &src[a..b], None => Err(Error::RequiresContiguous { op: "index-select" }.bt())?, }; let dim = self.dim; let n_ids = match self.ids_l.dims() { [n_ids] => *n_ids, d => Err(Error::UnexpectedNumberOfDims { expected: 1, got: d.len(), shape: self.ids_l.shape().clone(), } .bt())?, }; let stride_ids = self.ids_l.stride()[0]; let mut dst_dims = layout.dims().to_vec(); let src_dim = dst_dims[dim]; dst_dims[dim] = n_ids; let dst_len: usize = dst_dims.iter().product(); let left_len: usize = dst_dims[..dim].iter().product(); let right_len: usize = dst_dims[dim + 1..].iter().product(); let mut dst = vec![T::zero(); dst_len]; for left_i in 0..left_len { let start_src_idx = left_i * right_len * src_dim; let start_dst_idx = left_i * right_len * n_ids; for i in 0..n_ids { let index = self.ids[self.ids_l.start_offset() + stride_ids * i].as_usize(); if index >= src_dim { Err(Error::InvalidIndex { index, size: src_dim, op: "index-select", } .bt())? } let start_src_idx = start_src_idx + index * right_len; let start_dst_idx = start_dst_idx + i * right_len; dst[start_dst_idx..start_dst_idx + right_len] .copy_from_slice(&src[start_src_idx..start_src_idx + right_len]) } } Ok(dst) } } struct ScatterAdd<'a, I: IntDType> { ids: &'a [I], ids_l: &'a Layout, dim: usize, } impl<'a, I: IntDType> Map2 for ScatterAdd<'a, I> { const OP: &'static str = "scatter-add"; fn f<T: WithDType>(&self, v1: &[T], l1: &Layout, src: &[T], src_l: &Layout) -> Result<Vec<T>> { let dst_len = l1.shape().elem_count(); let mut dst = vec![T::zero(); dst_len]; copy_strided_src_(v1, &mut dst, 0, l1); let src = match src_l.contiguous_offsets() { None => Err(Error::RequiresContiguous { op: "scatter-add" }.bt())?, Some((o1, o2)) => &src[o1..o2], }; let dim = self.dim; let ids_dims = self.ids_l.dims(); let dst_dims = l1.dims(); let dst_dim_len = dst_dims[dim]; let dst_right_len: usize = dst_dims[dim + 1..].iter().product(); let ids_left_len: usize = ids_dims[..dim].iter().product(); let ids_dim_len = ids_dims[dim]; let ids_right_len: usize = ids_dims[dim + 1..].iter().product(); let ids = match self.ids_l.contiguous_offsets() { Some((a, b)) => &self.ids[a..b], None => Err(Error::RequiresContiguous { op: "gather" }.bt())?, }; for left_i in 0..ids_left_len { let start_ids_idx = left_i * ids_right_len * ids_dim_len; let start_dst_idx = left_i * dst_right_len * dst_dim_len; for i in 0..ids_dim_len { let start_ids_idx = start_ids_idx + i * ids_right_len; for right_i in 0..dst_right_len { let ids_idx = start_ids_idx + right_i; let index = ids[ids_idx].as_usize(); if index >= dst_dim_len { Err(Error::InvalidIndex { index, size: dst_dim_len, op: "gather", } .bt())? 
} let dst_idx = start_dst_idx + index * dst_right_len + right_i; dst[dst_idx] += src[ids_idx] } } } Ok(dst) } } struct IndexAdd<'a, I: IntDType> { ids: &'a [I], dim: usize, } impl<'a, I: IntDType> Map2 for IndexAdd<'a, I> { const OP: &'static str = "index-add"; // https://pytorch.org/docs/stable/generated/torch.Tensor.index_add_.html#torch.Tensor.index_add_ // v1, l1 -> self fn f<T: WithDType>(&self, v1: &[T], l1: &Layout, src: &[T], src_l: &Layout) -> Result<Vec<T>> { let dst_len = l1.shape().elem_count(); let mut dst = vec![T::zero(); dst_len]; copy_strided_src_(v1, &mut dst, 0, l1); let src = match src_l.contiguous_offsets() { None => Err(Error::RequiresContiguous { op: "index-add" }.bt())?, Some((o1, o2)) => &src[o1..o2], }; let dim = self.dim; let max_idx = l1.dims()[dim]; let pre_dim = src_l.dims()[..dim].iter().product::<usize>(); let src_dim_sz = src_l.dims()[dim]; let post_dim = src_l.dims()[dim + 1..].iter().product::<usize>(); if dim == 0 { for (src_idx, dst_idx) in self.ids.iter().enumerate() { let dst_idx = dst_idx.as_usize(); if dst_idx >= max_idx { Err(Error::InvalidIndex { index: dst_idx, op: "index-add", size: max_idx, })? } let src_idx = src_idx * post_dim; let dst_idx = dst_idx * post_dim; let src = &src[src_idx..src_idx + post_dim]; let dst = &mut dst[dst_idx..dst_idx + post_dim]; for (d, &s) in dst.iter_mut().zip(src.iter()) { *d += s } } } else { for (src_idx, dst_idx) in self.ids.iter().enumerate() { let dst_idx = dst_idx.as_usize(); if dst_idx >= max_idx { Err(Error::InvalidIndex { index: dst_idx, op: "index-add", size: max_idx, })? } for pre_i in 0..pre_dim { let pre_src_i = (pre_i * src_dim_sz + src_idx) * post_dim; let pre_dst_i = (pre_i * max_idx + dst_idx) * post_dim; let src = &src[pre_src_i..pre_src_i + post_dim]; let dst = &mut dst[pre_dst_i..pre_dst_i + post_dim]; for (d, &s) in dst.iter_mut().zip(src.iter()) { *d += s } } } } Ok(dst) } } fn copy_strided_src_<T: Copy>(src: &[T], dst: &mut [T], dst_offset: usize, src_l: &Layout) { match src_l.strided_blocks() { crate::StridedBlocks::SingleBlock { start_offset, len } => { let to_copy = (dst.len() - dst_offset).min(len); dst[dst_offset..dst_offset + to_copy] .copy_from_slice(&src[start_offset..start_offset + to_copy]) } crate::StridedBlocks::MultipleBlocks { block_start_index, block_len: 1, } => { for (dst_index, src_index) in block_start_index.enumerate() { let dst_index = dst_index + dst_offset; if dst_index >= dst.len() { break; } dst[dst_index] = src[src_index] } } crate::StridedBlocks::MultipleBlocks { block_start_index, block_len, } => { let mut dst_index = dst_offset; for src_index in block_start_index { let next_dst_index = dst_index + block_len; if dst_index >= dst.len() { break; } let to_copy = usize::min(block_len, dst.len() - dst_index); dst[dst_index..dst_index + to_copy] .copy_from_slice(&src[src_index..src_index + to_copy]); dst_index = next_dst_index } } } } struct Conv1D<'a>(&'a crate::conv::ParamsConv1D); impl<'a> Map2 for Conv1D<'a> { const OP: &'static str = "conv1d"; fn f<T: WithDType>(&self, inp: &[T], inp_l: &Layout, k: &[T], k_l: &Layout) -> Result<Vec<T>> { let p = self.0; let inp = &inp[inp_l.start_offset()..]; let k = &k[k_l.start_offset()..]; let (inp_s0, inp_s1, inp_s2) = crate::shape::dims3(inp_l.stride())?; let (k_s0, k_s1, k_s2) = crate::shape::dims3(k_l.stride())?; let l_out = p.l_out(); let dst_elems = p.c_out * l_out * p.b_size; // The output shape is [b_size, c_out, l_out] let dst = vec![T::zero(); dst_elems]; // TODO: Avoid making this copy if `inp` already has the 
appropriate layout. let mut inp_cont = vec![T::zero(); p.b_size * p.c_in * p.l_in]; for b_idx in 0..p.b_size { for src_l in 0..p.l_in { for src_c_idx in 0..p.c_in { let inp_idx = b_idx * inp_s0 + src_c_idx * inp_s1 + src_l * inp_s2; inp_cont[b_idx * p.l_in * p.c_in + src_l * p.c_in + src_c_idx] = inp[inp_idx] } } } for offset in 0..p.k_size { (0..p.c_out).into_par_iter().for_each(|dst_c_idx| { let dst_idx = dst_c_idx * l_out; let k_cont = (0..p.c_in) .map(|c_in_idx| k[dst_c_idx * k_s0 + c_in_idx * k_s1 + offset * k_s2]) .collect::<Vec<_>>(); for b_idx in 0..p.b_size { let dst_idx = dst_idx + b_idx * p.c_out * l_out; for dst_l in 0..l_out { let dst_idx = dst_idx + dst_l; let src_l = p.stride * dst_l + offset * p.dilation; if src_l < p.padding || src_l >= p.padding + p.l_in { continue; } let src_l = src_l - p.padding; let inp_cont = &inp_cont[b_idx * p.l_in * p.c_in + src_l * p.c_in..]; assert!(inp_cont.len() >= p.c_in); assert!(k_cont.len() >= p.c_in); let mut d = T::zero(); unsafe { T::vec_dot(inp_cont.as_ptr(), k_cont.as_ptr(), &mut d, p.c_in) } let dst_p = dst.as_ptr(); // Safety: dst_idx are uniques per dst_c_idx which is used to parallelise // the different tasks so no two threads can try to write at the same // location. unsafe { let ptr = dst_p.add(dst_idx) as *mut T; *ptr += d } } } }) } Ok(dst) } } struct Im2Col1D { l_k: usize, stride: usize, dilation: usize, padding: usize, } impl Im2Col1D { fn l_out(&self, l: usize) -> usize { (l + 2 * self.padding - self.dilation * (self.l_k - 1) - 1) / self.stride + 1 } } impl Map1 for Im2Col1D { fn f<T: WithDType>(&self, vs: &[T], layout: &Layout) -> Result<Vec<T>> { let &Self { l_k, stride, dilation, padding, } = self; let (b, c, l) = layout.shape().dims3()?; let l_out = self.l_out(l); let src = &vs[layout.start_offset()..]; let mut dst = vec![T::zero(); b * l_out * c * l_k]; let (src_s0, src_s1, src_s2) = { let s = layout.stride(); (s[0], s[1], s[2]) }; // TODO: provide specialized kernels for the common use cases. // - l_k = 1 // - padding = 0 // - stride = 1 // - dilation = 1 for b_idx in 0..b { let src_idx = b_idx * src_s0; let dst_idx = b_idx * l_out * c * l_k; for l_idx in 0..l_out { let dst_idx = dst_idx + l_idx * c * l_k; for c_idx in 0..c { let dst_idx = dst_idx + c_idx * l_k; let src_idx = c_idx * src_s1 + src_idx; for l_k_idx in 0..l_k { let src_l = l_idx * stride + l_k_idx * dilation; if padding != 0 && (src_l < padding || src_l >= l + padding) { continue; } let src_l = src_l - padding; let src_idx = src_idx + src_l * src_s2; let dst_idx = dst_idx + l_k_idx; dst[dst_idx] = src[src_idx] } } } } Ok(dst) } } struct Im2Col { h_k: usize, w_k: usize, stride: usize, dilation: usize, padding: usize, } impl Im2Col { fn hw_out(&self, h: usize, w: usize) -> (usize, usize) { let h_out = (h + 2 * self.padding - self.dilation * (self.h_k - 1) - 1) / self.stride + 1; let w_out = (w + 2 * self.padding - self.dilation * (self.w_k - 1) - 1) / self.stride + 1; (h_out, w_out) } } impl Map1 for Im2Col { fn f<T: WithDType>(&self, vs: &[T], layout: &Layout) -> Result<Vec<T>> { let &Self { h_k, w_k, stride, dilation, padding, } = self; let (b, c, h, w) = layout.shape().dims4()?; let (h_out, w_out) = self.hw_out(h, w); let src = &vs[layout.start_offset()..]; let mut dst = vec![T::zero(); b * h_out * w_out * c * h_k * w_k]; let (src_s0, src_s1, src_s2, src_s3) = { let s = layout.stride(); (s[0], s[1], s[2], s[3]) }; // TODO: provide specialized kernels for the common use cases. 
// - h_k = w_k = 1 // - padding = 0 // - stride = 1 // - dilation = 1 for b_idx in 0..b { let src_idx = b_idx * src_s0; let dst_idx = b_idx * h_out * w_out * c * h_k * w_k; for h_idx in 0..h_out { let dst_idx = dst_idx + h_idx * w_out * c * h_k * w_k; for w_idx in 0..w_out { let dst_idx = dst_idx + w_idx * c * h_k * w_k; for c_idx in 0..c { let dst_idx = dst_idx + c_idx * h_k * w_k; let src_idx = c_idx * src_s1 + src_idx; for h_k_idx in 0..h_k { let src_h = h_idx * stride + h_k_idx * dilation; if padding != 0 && (src_h < padding || src_h >= h + padding) { continue; } let src_h = src_h - padding; let src_idx = src_idx + src_h * src_s2; let dst_idx = dst_idx + h_k_idx * w_k; for w_k_idx in 0..w_k { let src_w = w_idx * stride + w_k_idx * dilation; if padding != 0 && (src_w < padding || src_w >= w + padding) { continue; } let src_w = src_w - padding; let src_idx = src_idx + src_w * src_s3; let dst_idx = dst_idx + w_k_idx; dst[dst_idx] = src[src_idx] } } } } } } Ok(dst) } } struct ConvTranspose1D<'a>(&'a crate::conv::ParamsConvTranspose1D); impl<'a> Map2 for ConvTranspose1D<'a> { const OP: &'static str = "conv_transpose1d"; fn f<T: WithDType>(&self, inp: &[T], inp_l: &Layout, k: &[T], k_l: &Layout) -> Result<Vec<T>> { let p = self.0; let inp = &inp[inp_l.start_offset()..]; let (inp_s0, inp_s1, inp_s2) = crate::shape::dims3(inp_l.stride())?; let (k_s0, k_s1, k_s2) = crate::shape::dims3(k_l.stride())?; let l_out = p.l_out(); // Output shape: [b_size, c_out, l_out]. let dst_elems = p.c_out * l_out * p.b_size; let dst = vec![T::zero(); dst_elems]; let dst_s0 = p.c_out * l_out; let dst_s1 = l_out; let dst_s2 = 1; // TODO: Avoid making this copy if `inp` already has the appropriate layout. let mut inp_cont = vec![T::zero(); p.b_size * p.c_in * p.l_in]; let cont_s0 = p.l_in * p.c_in; let cont_s1 = p.c_in; for b_idx in 0..p.b_size { for l_idx in 0..p.l_in { for c_idx in 0..p.c_in { let src_idx = b_idx * inp_s0 + c_idx * inp_s1 + l_idx * inp_s2; let dst_idx = b_idx * cont_s0 + l_idx * cont_s1 + c_idx; inp_cont[dst_idx] = inp[src_idx] } } } for k_idx in 0..p.k_size { (0..p.c_out).into_par_iter().for_each(|dst_c_idx| { let k_cont = (0..p.c_in) .map(|c_in_idx| k[c_in_idx * k_s0 + dst_c_idx * k_s1 + k_idx * k_s2]) .collect::<Vec<_>>(); for b_idx in 0..p.b_size { for l_idx in 0..p.l_in { let out_idx = l_idx * p.stride + k_idx * p.dilation; if out_idx < p.padding { continue; } let out_idx = out_idx - p.padding; if out_idx < l_out { let inp_cont = &inp_cont[b_idx * cont_s0 + l_idx * cont_s1..]; let dst_idx = b_idx * dst_s0 + out_idx * dst_s2 + dst_c_idx * dst_s1; let mut d = T::zero(); unsafe { T::vec_dot(inp_cont.as_ptr(), k_cont.as_ptr(), &mut d, p.c_in) } let dst_p = dst.as_ptr(); // Safety: dst_idx are uniques per dst_c_idx which is used to // parallelise the different tasks so no two threads can try to // write at the same location. unsafe { let ptr = dst_p.add(dst_idx) as *mut T; *ptr += d } } } } }) } Ok(dst) } } struct Conv2D<'a>(&'a crate::conv::ParamsConv2D); impl<'a> Map2 for Conv2D<'a> { const OP: &'static str = "conv2d"; fn f<T: WithDType>(&self, inp: &[T], inp_l: &Layout, k: &[T], k_l: &Layout) -> Result<Vec<T>> { let p = self.0; let inp = &inp[inp_l.start_offset()..]; let (inp_s0, inp_s1, inp_s2, inp_s3) = crate::shape::dims4(inp_l.stride())?; let k = &k[k_l.start_offset()..]; let (k_s0, k_s1, k_s2, k_s3) = crate::shape::dims4(k_l.stride())?; let (out_h, out_w) = (p.out_h(), p.out_w()); // Output shape: [b_size, c_out, out_h, out_w]. 
let dst = vec![T::zero(); p.b_size * p.c_out * out_h * out_w]; // TODO: Avoid making this copy if `inp` already has the appropriate layout. let mut inp_cont = vec![T::zero(); p.b_size * p.c_in * p.i_h * p.i_w]; let cont_s0 = p.i_h * p.i_w * p.c_in; let cont_s1 = p.i_w * p.c_in; let cont_s2 = p.c_in; for b_idx in 0..p.b_size { for h_idx in 0..p.i_h { for w_idx in 0..p.i_w { for c_idx in 0..p.c_in { let src_idx = b_idx * inp_s0 + c_idx * inp_s1 + h_idx * inp_s2 + w_idx * inp_s3; let dst_idx = b_idx * cont_s0 + h_idx * cont_s1 + w_idx * cont_s2 + c_idx; inp_cont[dst_idx] = inp[src_idx] } } } } for offset_h in 0..p.k_h { for offset_w in 0..p.k_w { (0..p.c_out).into_par_iter().for_each(|dst_c_idx| { let dst_idx = dst_c_idx * out_w * out_h; let k_cont = (0..p.c_in) .map(|c_in_idx| { k[dst_c_idx * k_s0 + c_in_idx * k_s1 + offset_h * k_s2 + offset_w * k_s3] }) .collect::<Vec<_>>(); for b_idx in 0..p.b_size { let dst_idx = dst_idx + b_idx * p.c_out * out_h * out_w; for dst_h in 0..out_h { let dst_idx = dst_idx + dst_h * out_w; let src_h = p.stride * dst_h + offset_h * p.dilation; if src_h < p.padding || src_h >= p.i_h + p.padding { continue; } let src_h = src_h - p.padding; for dst_w in 0..out_w { let dst_idx = dst_idx + dst_w; let src_w = p.stride * dst_w + offset_w * p.dilation; if src_w < p.padding || src_w >= p.i_w + p.padding { continue; } let src_w = src_w - p.padding; let inp_cont = &inp_cont [b_idx * cont_s0 + src_h * cont_s1 + src_w * cont_s2..]; assert!(inp_cont.len() >= p.c_in); assert!(k_cont.len() >= p.c_in); let mut d = T::zero(); unsafe { T::vec_dot(inp_cont.as_ptr(), k_cont.as_ptr(), &mut d, p.c_in) } let dst_p = dst.as_ptr(); // Safety: dst_idx are uniques per dst_c_idx which is used to parallelise // the different tasks so no two threads can try to write at the same // location. unsafe { let ptr = dst_p.add(dst_idx) as *mut T; *ptr += d } } } } }); } } Ok(dst) } } struct ConvTranspose2D<'a>(&'a crate::conv::ParamsConvTranspose2D); impl<'a> Map2 for ConvTranspose2D<'a> { const OP: &'static str = "conv_transpose2d"; fn f<T: WithDType>(&self, inp: &[T], inp_l: &Layout, k: &[T], k_l: &Layout) -> Result<Vec<T>> { let p = self.0; let inp = &inp[inp_l.start_offset()..]; let (inp_s0, inp_s1, inp_s2, inp_s3) = crate::shape::dims4(inp_l.stride())?; let k = &k[k_l.start_offset()..]; let (k_s0, k_s1, k_s2, k_s3) = crate::shape::dims4(k_l.stride())?; let (out_h, out_w) = (p.out_h(), p.out_w()); // Output shape: [b_size, c_out, out_h, out_w]. let dst = vec![T::zero(); p.b_size * p.c_out * out_h * out_w]; let dst_s0 = p.c_out * out_h * out_w; let dst_s1 = out_h * out_w; let dst_s2 = out_w; let dst_s3 = 1; // TODO: Avoid making this copy if `inp` already has the appropriate layout. 
let mut inp_cont = vec![T::zero(); p.b_size * p.c_in * p.i_h * p.i_w]; let cont_s0 = p.i_h * p.i_w * p.c_in; let cont_s1 = p.i_w * p.c_in; let cont_s2 = p.c_in; for b_idx in 0..p.b_size { for h_idx in 0..p.i_h { for w_idx in 0..p.i_w { for c_idx in 0..p.c_in { let src_idx = b_idx * inp_s0 + c_idx * inp_s1 + h_idx * inp_s2 + w_idx * inp_s3; let dst_idx = b_idx * cont_s0 + h_idx * cont_s1 + w_idx * cont_s2 + c_idx; inp_cont[dst_idx] = inp[src_idx] } } } } for k_y in 0..p.k_h { for k_x in 0..p.k_w { (0..p.c_out).into_par_iter().for_each(|dst_c_idx| { let k_cont = (0..p.c_in) .map(|c_in_idx| { k[c_in_idx * k_s0 + dst_c_idx * k_s1 + k_y * k_s2 + k_x * k_s3] }) .collect::<Vec<_>>(); for b_idx in 0..p.b_size { for inp_y in 0..p.i_h { for inp_x in 0..p.i_w { let out_x = inp_x * p.stride + k_x * p.dilation; let out_y = inp_y * p.stride + k_y * p.dilation; if out_x < p.padding || out_y < p.padding { continue; } let out_x = out_x - p.padding; let out_y = out_y - p.padding; if out_x < out_w && out_y < out_h { let inp_cont = &inp_cont [b_idx * cont_s0 + inp_y * cont_s1 + inp_x * cont_s2..]; let dst_idx = b_idx * dst_s0 + out_y * dst_s2 + out_x * dst_s3 + dst_c_idx * dst_s1; let mut d = T::zero(); unsafe { T::vec_dot( inp_cont.as_ptr(), k_cont.as_ptr(), &mut d, p.c_in, ) } let dst_p = dst.as_ptr(); // Safety: dst_idx are uniques per dst_c_idx which is used to // parallelise the different tasks so no two threads can try to // write at the same location. unsafe { let ptr = dst_p.add(dst_idx) as *mut T; *ptr += d } } } } } }) } } Ok(dst) } } struct MatMul((usize, usize, usize, usize)); impl MatMul { fn striding_error(&self, lhs_l: &Layout, rhs_l: &Layout, msg: &'static str) -> Error { Error::MatMulUnexpectedStriding(Box::new(crate::error::MatMulUnexpectedStriding { lhs_l: lhs_l.clone(), rhs_l: rhs_l.clone(), bmnk: self.0, msg, })) .bt() } } impl Map2 for MatMul { const OP: &'static str = "mat_mul"; #[cfg(all(not(feature = "mkl"), not(feature = "accelerate")))] fn f<T: 'static + WithDType + num_traits::Num + Copy>( &self, lhs: &[T], lhs_l: &Layout, rhs: &[T], rhs_l: &Layout, ) -> Result<Vec<T>> { use gemm::{gemm, Parallelism}; match T::DTYPE { DType::F16 | DType::F32 | DType::F64 => {} _ => Err(Error::UnsupportedDTypeForOp(T::DTYPE, "matmul").bt())?, } let (b, m, n, k) = self.0; let lhs = &lhs[lhs_l.start_offset()..]; let rhs = &rhs[rhs_l.start_offset()..]; let lhs_stride = lhs_l.stride(); let rhs_stride = rhs_l.stride(); let rank = lhs_stride.len(); let lhs_cs = lhs_stride[rank - 1]; let lhs_rs = lhs_stride[rank - 2]; let rhs_cs = rhs_stride[rank - 1]; let rhs_rs = rhs_stride[rank - 2]; let a_skip: usize = match lhs_stride[..rank - 2] { [s1, stride] if s1 == stride * lhs_l.dims()[1] => stride, [stride] => stride, [] => m * k, _ => Err(self.striding_error(lhs_l, rhs_l, "non-contiguous lhs"))?, }; let b_skip: usize = match rhs_stride[..rank - 2] { [s1, stride] if s1 == stride * rhs_l.dims()[1] => stride, [stride] => stride, [] => n * k, _ => Err(self.striding_error(lhs_l, rhs_l, "non-contiguous rhs"))?, }; let c_skip: usize = m * n; let dst_shape: Shape = (m, n).into(); let dst_strides = dst_shape.stride_contiguous(); let dst_rs = dst_strides[0]; let dst_cs = dst_strides[1]; let mut dst = vec![T::zero(); b * m * n]; let num_threads = crate::utils::get_num_threads(); let parallelism = if num_threads > 1 { Parallelism::Rayon(num_threads) } else { Parallelism::None }; for step in 0..b { let lhs_p = &lhs[step * a_skip..]; let rhs_p = &rhs[step * b_skip..]; let dst_p = &mut dst[step * c_skip..]; unsafe { gemm( /* 
m: usize = */ m, /* n: usize = */ n, /* k: usize = */ k, /* dst: *mut T = */ dst_p.as_mut_ptr(), /* dst_cs: isize = */ dst_cs as isize, /* dst_rs: isize = */ dst_rs as isize, /* read_dst: bool = */ false, /* lhs: *const T = */ lhs_p.as_ptr(), /* lhs_cs: isize = */ lhs_cs as isize, /* lhs_rs: isize = */ lhs_rs as isize, /* rhs: *const T = */ rhs_p.as_ptr(), /* rhs_cs: isize = */ rhs_cs as isize, /* rhs_rs: isize = */ rhs_rs as isize, /* alpha: T = */ T::zero(), /* beta: T = */ T::one(), /* conj_dst: bool = */ false, /* conj_lhs: bool = */ false, /* conj_rhs: bool = */ false, parallelism, ) } } Ok(dst) } #[cfg(feature = "accelerate")] fn f<T: 'static + WithDType + num_traits::Num + Copy>( &self, lhs: &[T], lhs_l: &Layout, rhs: &[T], rhs_l: &Layout, ) -> Result<Vec<T>> { let (b, m, n, k) = self.0; let lhs = &lhs[lhs_l.start_offset()..]; let rhs = &rhs[rhs_l.start_offset()..]; let lhs_stride = lhs_l.stride(); let rhs_stride = rhs_l.stride(); let rank = lhs_stride.len(); let a_skip: usize = match lhs_stride[..rank - 2] { [s1, stride] if s1 == stride * lhs_l.dims()[1] => stride, [stride] => stride, [] => m * k, _ => Err(self.striding_error(lhs_l, rhs_l, "non-contiguous lhs"))?, }; let b_skip: usize = match rhs_stride[..rank - 2] { [s1, stride] if s1 == stride * rhs_l.dims()[1] => stride, [stride] => stride, [] => n * k, _ => Err(self.striding_error(lhs_l, rhs_l, "non-contiguous rhs"))?, }; let c_skip: usize = m * n; let rhs_m1 = rhs_stride[rhs_stride.len() - 1]; let rhs_m2 = rhs_stride[rhs_stride.len() - 2]; let lhs_m1 = lhs_stride[lhs_stride.len() - 1]; let lhs_m2 = lhs_stride[lhs_stride.len() - 2]; let (lda, transa) = if rhs_m1 == 1 && rhs_m2 == n { (n as i32, b'N') } else if rhs_m1 == k && rhs_m2 == 1 { (k as i32, b'T') } else { Err(self.striding_error(lhs_l, rhs_l, "non-contiguous rhs"))? }; // The b tensor has dims batching, m, k (lhs) let (ldb, transb) = if lhs_m1 == 1 && lhs_m2 == k { (k as i32, b'N') } else if lhs_m1 == m && lhs_m2 == 1 { (m as i32, b'T') } else { Err(self.striding_error(lhs_l, rhs_l, "non-contiguous lhs"))? 
}; let mut dst = vec![T::zero(); b * m * n]; match T::DTYPE { DType::F16 => { crate::bail!("the accelerate backend does not support f16 matmul") } DType::F32 => { for step in 0..b { let lhs_p = &lhs[step * a_skip..]; let rhs_p = &rhs[step * b_skip..]; let dst_p = &mut dst[step * c_skip..]; unsafe { let a = rhs_p.as_ptr() as *const f32; let b = lhs_p.as_ptr() as *const f32; let c = dst_p.as_mut_ptr() as *mut f32; let a = std::slice::from_raw_parts(a, a_skip); let b = std::slice::from_raw_parts(b, b_skip); let c = std::slice::from_raw_parts_mut(c, c_skip); crate::accelerate::sgemm( transa, transb, /* m= */ n as i32, /* n= */ m as i32, /* k= */ k as i32, /* alpha= */ 1., /* a= */ a, /* lda= */ lda, /* b= */ b, /* ldb= */ ldb, /* beta= */ 0., /* c= */ c, /* ldc= */ n as i32, ) } } } DType::F64 => { for step in 0..b { let lhs_p = &lhs[step * a_skip..]; let rhs_p = &rhs[step * b_skip..]; let dst_p = &mut dst[step * c_skip..]; unsafe { let a = rhs_p.as_ptr() as *const f64; let b = lhs_p.as_ptr() as *const f64; let c = dst_p.as_mut_ptr() as *mut f64; let a = std::slice::from_raw_parts(a, a_skip); let b = std::slice::from_raw_parts(b, b_skip); let c = std::slice::from_raw_parts_mut(c, c_skip); crate::accelerate::dgemm( transa, transb, /* m= */ n as i32, /* n= */ m as i32, /* k= */ k as i32, /* alpha= */ 1., /* a= */ a, /* lda= */ lda, /* b= */ b, /* ldb= */ ldb, /* beta= */ 0., /* c= */ c, /* ldc= */ n as i32, ) } } } dtype => Err(Error::UnsupportedDTypeForOp(dtype, "matmul").bt())?, } Ok(dst) } #[cfg(feature = "mkl")] fn f<T: 'static + WithDType + num_traits::Num + Copy>( &self, lhs: &[T], lhs_l: &Layout, rhs: &[T], rhs_l: &Layout, ) -> Result<Vec<T>> { let (b, m, n, k) = self.0; let lhs = &lhs[lhs_l.start_offset()..]; let rhs = &rhs[rhs_l.start_offset()..]; let lhs_stride = lhs_l.stride(); let rhs_stride = rhs_l.stride(); let rank = lhs_stride.len(); let a_skip: usize = match lhs_stride[..rank - 2] { [s1, stride] if s1 == stride * lhs_l.dims()[1] => stride, [stride] => stride, [] => m * k, _ => Err(self.striding_error(lhs_l, rhs_l, "non-contiguous lhs"))?, }; let b_skip: usize = match rhs_stride[..rank - 2] { [s1, stride] if s1 == stride * rhs_l.dims()[1] => stride, [stride] => stride, [] => n * k, _ => Err(self.striding_error(lhs_l, rhs_l, "non-contiguous rhs"))?, }; let c_skip: usize = m * n; let rhs_m1 = rhs_stride[rhs_stride.len() - 1]; let rhs_m2 = rhs_stride[rhs_stride.len() - 2]; let lhs_m1 = lhs_stride[lhs_stride.len() - 1]; let lhs_m2 = lhs_stride[lhs_stride.len() - 2]; let (lda, transa) = if rhs_m1 == 1 && rhs_m2 == n { (n as i32, b'N') } else if rhs_m1 == k && rhs_m2 == 1 { (k as i32, b'T') } else { Err(self.striding_error(lhs_l, rhs_l, "non-contiguous rhs"))? }; // The b tensor has dims batching, m, k (lhs) let (ldb, transb) = if lhs_m1 == 1 && lhs_m2 == k { (k as i32, b'N') } else if lhs_m1 == m && lhs_m2 == 1 { (m as i32, b'T') } else { Err(self.striding_error(lhs_l, rhs_l, "non-contiguous lhs"))? 
}; let mut dst = vec![T::zero(); b * m * n]; match T::DTYPE { DType::F16 => { for step in 0..b { let lhs_p = &lhs[step * a_skip..]; let rhs_p = &rhs[step * b_skip..]; let dst_p = &mut dst[step * c_skip..]; unsafe { let a = rhs_p.as_ptr() as *const f16; let b = lhs_p.as_ptr() as *const f16; let c = dst_p.as_mut_ptr() as *mut f16; let a = std::slice::from_raw_parts(a, a_skip); let b = std::slice::from_raw_parts(b, b_skip); let c = std::slice::from_raw_parts_mut(c, c_skip); crate::mkl::hgemm( transa, transb, /* m= */ n as i32, /* n= */ m as i32, /* k= */ k as i32, /* alpha= */ f16::ONE, /* a= */ a, /* lda= */ lda, /* b= */ b, /* ldb= */ ldb, /* beta= */ f16::ZERO, /* c= */ c, /* ldc= */ n as i32, ) } } } DType::F32 => { for step in 0..b { let lhs_p = &lhs[step * a_skip..]; let rhs_p = &rhs[step * b_skip..]; let dst_p = &mut dst[step * c_skip..]; unsafe { let a = rhs_p.as_ptr() as *const f32; let b = lhs_p.as_ptr() as *const f32; let c = dst_p.as_mut_ptr() as *mut f32; let a = std::slice::from_raw_parts(a, a_skip); let b = std::slice::from_raw_parts(b, b_skip); let c = std::slice::from_raw_parts_mut(c, c_skip); crate::mkl::sgemm( transa, transb, /* m= */ n as i32, /* n= */ m as i32, /* k= */ k as i32, /* alpha= */ 1., /* a= */ a, /* lda= */ lda, /* b= */ b, /* ldb= */ ldb, /* beta= */ 0., /* c= */ c, /* ldc= */ n as i32, ) } } } DType::F64 => { for step in 0..b { let lhs_p = &lhs[step * a_skip..]; let rhs_p = &rhs[step * b_skip..]; let dst_p = &mut dst[step * c_skip..]; unsafe { let a = rhs_p.as_ptr() as *const f64; let b = lhs_p.as_ptr() as *const f64; let c = dst_p.as_mut_ptr() as *mut f64; let a = std::slice::from_raw_parts(a, a_skip); let b = std::slice::from_raw_parts(b, b_skip); let c = std::slice::from_raw_parts_mut(c, c_skip); crate::mkl::dgemm( transa, transb, /* m= */ n as i32, /* n= */ m as i32, /* k= */ k as i32, /* alpha= */ 1., /* a= */ a, /* lda= */ lda, /* b= */ b, /* ldb= */ ldb, /* beta= */ 0., /* c= */ c, /* ldc= */ n as i32, ) } } } dtype => Err(Error::UnsupportedDTypeForOp(dtype, "matmul").bt())?, } Ok(dst) } } fn elu<T: num_traits::Float>(v: T, alpha: T) -> T { if v.is_sign_positive() { v } else { (v.exp() - T::one()) * alpha } } impl CpuStorage { pub fn as_slice<D: WithDType>(&self) -> Result<&[D]> { D::cpu_storage_as_slice(self) } pub fn concat(storages: &[CpuStorage]) -> Result<CpuStorage> { let storage0 = &storages[0]; let s = match storage0 { Self::U8(_) => { let storages = storages .iter() .map(|s| match s { Self::U8(s) => Ok(s.as_slice()), _ => crate::bail!("dtype mismatch"), }) .collect::<Result<Vec<_>>>()? .concat(); Self::U8(storages) } Self::U32(_) => { let storages = storages .iter() .map(|s| match s { Self::U32(s) => Ok(s.as_slice()), _ => crate::bail!("dtype mismatch"), }) .collect::<Result<Vec<_>>>()? .concat(); Self::U32(storages) } Self::I64(_) => { let storages = storages .iter() .map(|s| match s { Self::I64(s) => Ok(s.as_slice()), _ => crate::bail!("dtype mismatch"), }) .collect::<Result<Vec<_>>>()? .concat(); Self::I64(storages) } Self::BF16(_) => { let storages = storages .iter() .map(|s| match s { Self::BF16(s) => Ok(s.as_slice()), _ => crate::bail!("dtype mismatch"), }) .collect::<Result<Vec<_>>>()? .concat(); Self::BF16(storages) } Self::F16(_) => { let storages = storages .iter() .map(|s| match s { Self::F16(s) => Ok(s.as_slice()), _ => crate::bail!("dtype mismatch"), }) .collect::<Result<Vec<_>>>()? 
.concat(); Self::F16(storages) } Self::F32(_) => { let storages = storages .iter() .map(|s| match s { Self::F32(s) => Ok(s.as_slice()), _ => crate::bail!("dtype mismatch"), }) .collect::<Result<Vec<_>>>()? .concat(); Self::F32(storages) } Self::F64(_) => { let storages = storages .iter() .map(|s| match s { Self::F64(s) => Ok(s.as_slice()), _ => crate::bail!("dtype mismatch"), }) .collect::<Result<Vec<_>>>()? .concat(); Self::F64(storages) } }; Ok(s) } } impl BackendStorage for CpuStorage { type Device = CpuDevice; fn dtype(&self) -> DType { match self { Self::U8(_) => DType::U8, Self::U32(_) => DType::U32, Self::I64(_) => DType::I64, Self::BF16(_) => DType::BF16, Self::F16(_) => DType::F16, Self::F32(_) => DType::F32, Self::F64(_) => DType::F64, } } fn to_dtype(&self, layout: &Layout, dtype: DType) -> Result<Self> { // TODO: find a way around the quadratic number of cases below. match (self, dtype) { (Self::U8(storage), DType::BF16) => { let data = unary_map(storage, layout, |v| bf16::from_f32(v as f32)); Ok(Self::BF16(data)) } (Self::U32(storage), DType::BF16) => { let data = unary_map(storage, layout, |v| bf16::from_f32(v as f32)); Ok(Self::BF16(data)) } (Self::I64(storage), DType::BF16) => { let data = unary_map(storage, layout, |v| bf16::from_f32(v as f32)); Ok(Self::BF16(data)) } (Self::BF16(storage), DType::BF16) => { let data = unary_map(storage, layout, |v| v); Ok(Self::BF16(data)) } (Self::F16(storage), DType::BF16) => { let data = unary_map(storage, layout, |v| bf16::from_f32(v.to_f32())); Ok(Self::BF16(data)) } (Self::F32(storage), DType::BF16) => { let data = unary_map(storage, layout, bf16::from_f32); Ok(Self::BF16(data)) } (Self::F64(storage), DType::BF16) => { let data = unary_map(storage, layout, bf16::from_f64); Ok(Self::BF16(data)) } (Self::U8(storage), DType::F16) => { let data = unary_map(storage, layout, |v| f16::from_f32(v as f32)); Ok(Self::F16(data)) } (Self::U32(storage), DType::F16) => { let data = unary_map(storage, layout, |v| f16::from_f32(v as f32)); Ok(Self::F16(data)) } (Self::I64(storage), DType::F16) => { let data = unary_map(storage, layout, |v| f16::from_f32(v as f32)); Ok(Self::F16(data)) } (Self::BF16(storage), DType::F16) => { let data = unary_map(storage, layout, |v| f16::from_f32(v.to_f32())); Ok(Self::F16(data)) } (Self::F16(storage), DType::F16) => { let data = unary_map(storage, layout, |v| v); Ok(Self::F16(data)) } (Self::F32(storage), DType::F16) => { let data = unary_map(storage, layout, f16::from_f32); Ok(Self::F16(data)) } (Self::F64(storage), DType::F16) => { let data = unary_map(storage, layout, f16::from_f64); Ok(Self::F16(data)) } (Self::U8(storage), DType::F32) => { let data = unary_map(storage, layout, |v| v as f32); Ok(Self::F32(data)) } (Self::U32(storage), DType::F32) => { let data = unary_map(storage, layout, |v| v as f32); Ok(Self::F32(data)) } (Self::I64(storage), DType::F32) => { let data = unary_map(storage, layout, |v| v as f32); Ok(Self::F32(data)) } (Self::BF16(storage), DType::F32) => { let data = unary_map(storage, layout, |v| v.to_f32()); Ok(Self::F32(data)) } (Self::F16(storage), DType::F32) => { let data = unary_map(storage, layout, |v| v.to_f32()); Ok(Self::F32(data)) } (Self::F32(storage), DType::F32) => { let data = unary_map(storage, layout, |v| v); Ok(Self::F32(data)) } (Self::F64(storage), DType::F32) => { let data = unary_map(storage, layout, |v| v as f32); Ok(Self::F32(data)) } (Self::U8(storage), DType::U8) => { let data = unary_map(storage, layout, |v| v); Ok(Self::U8(data)) } (Self::BF16(storage), DType::U8) 
            => {
                let data = unary_map(storage, layout, |v| v.to_f32() as u8);
                Ok(Self::U8(data))
            }
            (Self::F16(storage), DType::U8) => {
                let data = unary_map(storage, layout, |v| v.to_f32() as u8);
                Ok(Self::U8(data))
            }
            (Self::F32(storage), DType::U8) => {
                let data = unary_map(storage, layout, |v| v as u8);
                Ok(Self::U8(data))
            }
            (Self::F64(storage), DType::U8) => {
                let data = unary_map(storage, layout, |v| v as u8);
                Ok(Self::U8(data))
            }
            (Self::U32(storage), DType::U8) => {
                let data = unary_map(storage, layout, |v| v as u8);
                Ok(Self::U8(data))
            }
            (Self::I64(storage), DType::U8) => {
                let data = unary_map(storage, layout, |v| v as u8);
                Ok(Self::U8(data))
            }
            (Self::U8(storage), DType::U32) => {
                let data = unary_map(storage, layout, |v| v as u32);
                Ok(Self::U32(data))
            }
            (Self::U32(storage), DType::U32) => {
                let data = unary_map(storage, layout, |v| v);
                Ok(Self::U32(data))
            }
            (Self::I64(storage), DType::U32) => {
                let data = unary_map(storage, layout, |v| v as u32);
                Ok(Self::U32(data))
            }
            (Self::BF16(storage), DType::U32) => {
                let data = unary_map(storage, layout, |v| v.to_f32() as u32);
                Ok(Self::U32(data))
            }
            (Self::F16(storage), DType::U32) => {
                let data = unary_map(storage, layout, |v| v.to_f32() as u32);
                Ok(Self::U32(data))
            }
            (Self::F32(storage), DType::U32) => {
                let data = unary_map(storage, layout, |v| v as u32);
                Ok(Self::U32(data))
            }
            (Self::F64(storage), DType::U32) => {
                let data = unary_map(storage, layout, |v| v as u32);
                Ok(Self::U32(data))
            }
            (Self::U8(storage), DType::I64) => {
                let data = unary_map(storage, layout, |v| v as i64);
                Ok(Self::I64(data))
            }
            (Self::U32(storage), DType::I64) => {
                let data = unary_map(storage, layout, |v| v as i64);
                Ok(Self::I64(data))
            }
            (Self::I64(storage), DType::I64) => {
                let data = unary_map(storage, layout, |v| v);
                Ok(Self::I64(data))
            }
            (Self::BF16(storage), DType::I64) => {
                let data = unary_map(storage, layout, |v| v.to_f32() as i64);
                Ok(Self::I64(data))
            }
            (Self::F16(storage), DType::I64) => {
                let data = unary_map(storage, layout, |v| v.to_f32() as i64);
                Ok(Self::I64(data))
            }
            (Self::F32(storage), DType::I64) => {
                let data = unary_map(storage, layout, |v| v as i64);
                Ok(Self::I64(data))
            }
            (Self::F64(storage), DType::I64) => {
                let data = unary_map(storage, layout, |v| v as i64);
                Ok(Self::I64(data))
            }
            (Self::U8(storage), DType::F64) => {
                let data = unary_map(storage, layout, |v| v as f64);
                Ok(Self::F64(data))
            }
            (Self::U32(storage), DType::F64) => {
                let data = unary_map(storage, layout, |v| v as f64);
                Ok(Self::F64(data))
            }
            (Self::I64(storage), DType::F64) => {
                let data = unary_map(storage, layout, |v| v as f64);
                Ok(Self::F64(data))
            }
            (Self::BF16(storage), DType::F64) => {
                let data = unary_map(storage, layout, |v| v.to_f64());
                Ok(Self::F64(data))
            }
            (Self::F16(storage), DType::F64) => {
                let data = unary_map(storage, layout, |v| v.to_f64());
                Ok(Self::F64(data))
            }
            (Self::F32(storage), DType::F64) => {
                let data = unary_map(storage, layout, |v| v as f64);
                Ok(Self::F64(data))
            }
            (Self::F64(storage), DType::F64) => {
                let data = unary_map(storage, layout, |v| v);
                Ok(Self::F64(data))
            }
        }
    }

    fn reduce_op(&self, op: ReduceOp, layout: &Layout, reduce_dims: &[usize]) -> Result<Self> {
        match op {
            ReduceOp::Sum => {
                let src_dims = layout.dims();
                let mut dst_dims = src_dims.to_vec();
                for &dim in reduce_dims.iter() {
                    dst_dims[dim] = 1;
                }
                let dst_shape = Shape::from(dst_dims);
                let mut reduce_dims = reduce_dims.to_vec();
                // Sort the reduce_dims as they have to be processed from left to right when converting the
                // indexes.
                reduce_dims.sort();
                let reduce_dims_and_stride: Vec<_> = reduce_dims
                    .iter()
                    .map(|&d| (src_dims[d], src_dims[d + 1..].iter().product::<usize>()))
                    .collect();
                ReduceSum {
                    dst_shape: &dst_shape,
                    reduce_dims: &reduce_dims,
                    reduce_dims_and_stride,
                }
                .map(self, layout)
            }
            ReduceOp::Min | ReduceOp::ArgMin | ReduceOp::Max | ReduceOp::ArgMax => {
                let reduce_dim_index = match reduce_dims {
                    [reduce_dim_index] => *reduce_dim_index,
                    _ => {
                        let op = match op {
                            ReduceOp::Min => "min",
                            ReduceOp::ArgMin => "argmin",
                            ReduceOp::Max => "max",
                            ReduceOp::ArgMax => "argmax",
                            _ => unreachable!(),
                        };
                        let dims = reduce_dims.to_vec();
                        Err(Error::OnlySingleDimension { op, dims })?
                    }
                };
                let (use_min, return_index) = match op {
                    ReduceOp::Min => (true, false),
                    ReduceOp::ArgMin => (true, true),
                    ReduceOp::Max => (false, false),
                    ReduceOp::ArgMax => (false, true),
                    _ => unreachable!(),
                };
                ReduceIndex {
                    reduce_dim_index,
                    use_min,
                    return_index,
                }
                .map(self, layout)
            }
        }
    }

    fn cmp(&self, op: CmpOp, rhs: &Self, lhs_l: &Layout, rhs_l: &Layout) -> Result<Self> {
        Cmp(op).map(self, lhs_l, rhs, rhs_l)
    }

    fn affine(&self, layout: &Layout, mul: f64, add: f64) -> Result<Self> {
        Affine(mul, add).map(self, layout)
    }

    fn avg_pool2d(
        &self,
        layout: &Layout,
        kernel_size: (usize, usize),
        stride: (usize, usize),
    ) -> Result<Self> {
        AvgPool2D(kernel_size, stride).map(self, layout)
    }

    fn max_pool2d(
        &self,
        layout: &Layout,
        kernel_size: (usize, usize),
        stride: (usize, usize),
    ) -> Result<Self> {
        MaxPool2D(kernel_size, stride).map(self, layout)
    }

    fn upsample_nearest1d(&self, layout: &Layout, sz: usize) -> Result<Self> {
        UpsampleNearest1D(sz).map(self, layout)
    }

    fn upsample_nearest2d(&self, layout: &Layout, h: usize, w: usize) -> Result<Self> {
        UpsampleNearest2D(h, w).map(self, layout)
    }

    fn powf(&self, layout: &Layout, e: f64) -> Result<Self> {
        use num_traits::Float;
        // TODO: Have some generic map for functions that apply on num_traits::Float elements.
        match self {
            Self::BF16(storage) => {
                let data = unary_map(storage, layout, |v| v.powf(bf16::from_f64(e)));
                Ok(Self::BF16(data))
            }
            Self::F16(storage) => {
                let data = unary_map(storage, layout, |v| v.powf(f16::from_f64(e)));
                Ok(Self::F16(data))
            }
            Self::F32(storage) => {
                let data = unary_map(storage, layout, |v| v.powf(e as f32));
                Ok(Self::F32(data))
            }
            Self::F64(storage) => {
                let data = unary_map(storage, layout, |v| v.powf(e));
                Ok(Self::F64(data))
            }
            Self::U8(_) => Err(Error::UnsupportedDTypeForOp(DType::U8, "powf").bt()),
            Self::U32(_) => Err(Error::UnsupportedDTypeForOp(DType::U32, "powf").bt()),
            Self::I64(_) => Err(Error::UnsupportedDTypeForOp(DType::I64, "powf").bt()),
        }
    }

    fn elu(&self, layout: &Layout, alpha: f64) -> Result<Self> {
        // TODO: Have some generic map for functions that apply on num_traits::Float elements.
        match self {
            Self::BF16(storage) => {
                let data = unary_map(storage, layout, |v| elu(v, bf16::from_f64(alpha)));
                Ok(Self::BF16(data))
            }
            Self::F16(storage) => {
                let data = unary_map(storage, layout, |v| elu(v, f16::from_f64(alpha)));
                Ok(Self::F16(data))
            }
            Self::F32(storage) => {
                let data = unary_map(storage, layout, |v| elu(v, f32::from_f64(alpha)));
                Ok(Self::F32(data))
            }
            Self::F64(storage) => {
                let data = unary_map(storage, layout, |v| elu(v, alpha));
                Ok(Self::F64(data))
            }
            Self::U8(_) => Err(Error::UnsupportedDTypeForOp(DType::U8, "elu").bt()),
            Self::U32(_) => Err(Error::UnsupportedDTypeForOp(DType::U32, "elu").bt()),
            Self::I64(_) => Err(Error::UnsupportedDTypeForOp(DType::I64, "elu").bt()),
        }
    }

    fn unary_impl<B: UnaryOpT>(&self, layout: &Layout) -> Result<Self> {
        match self {
            Self::BF16(storage) => {
                if B::BF16_VEC {
                    let data = unary_map_vec(storage, layout, B::bf16, B::bf16_vec);
                    Ok(Self::BF16(data))
                } else {
                    let data = unary_map(storage, layout, B::bf16);
                    Ok(Self::BF16(data))
                }
            }
            Self::F16(storage) => {
                if B::F16_VEC {
                    let data = unary_map_vec(storage, layout, B::f16, B::f16_vec);
                    Ok(Self::F16(data))
                } else {
                    let data = unary_map(storage, layout, B::f16);
                    Ok(Self::F16(data))
                }
            }
            Self::F32(storage) => {
                if B::F32_VEC {
                    let data = unary_map_vec(storage, layout, B::f32, B::f32_vec);
                    Ok(Self::F32(data))
                } else {
                    let data = unary_map(storage, layout, B::f32);
                    Ok(Self::F32(data))
                }
            }
            Self::F64(storage) => {
                if B::F64_VEC {
                    let data = unary_map_vec(storage, layout, B::f64, B::f64_vec);
                    Ok(Self::F64(data))
                } else {
                    let data = unary_map(storage, layout, B::f64);
                    Ok(Self::F64(data))
                }
            }
            Self::U8(storage) => {
                let data = unary_map(storage, layout, B::u8);
                Ok(Self::U8(data))
            }
            Self::U32(storage) => {
                let data = unary_map(storage, layout, B::u32);
                Ok(Self::U32(data))
            }
            Self::I64(storage) => {
                let data = unary_map(storage, layout, B::i64);
                Ok(Self::I64(data))
            }
        }
    }

    fn binary_impl<B: BinaryOpT>(
        &self,
        rhs: &Self,
        lhs_l: &Layout,
        rhs_l: &Layout,
    ) -> Result<Self> {
        match (self, rhs) {
            (Self::BF16(lhs), Self::BF16(rhs)) => {
                let data = if B::BF16_VEC {
                    binary_map_vec(lhs_l, rhs_l, lhs, rhs, B::bf16, B::bf16_vec)
                } else {
                    binary_map(lhs_l, rhs_l, lhs, rhs, B::bf16)
                };
                Ok(Self::BF16(data))
            }
            (Self::F16(lhs), Self::F16(rhs)) => {
                let data = if B::F16_VEC {
                    binary_map_vec(lhs_l, rhs_l, lhs, rhs, B::f16, B::f16_vec)
                } else {
                    binary_map(lhs_l, rhs_l, lhs, rhs, B::f16)
                };
                Ok(Self::F16(data))
            }
            (Self::F32(lhs), Self::F32(rhs)) => {
                let data = if B::F32_VEC {
                    binary_map_vec(lhs_l, rhs_l, lhs, rhs, B::f32, B::f32_vec)
                } else {
                    binary_map(lhs_l, rhs_l, lhs, rhs, B::f32)
                };
                Ok(Self::F32(data))
            }
            (Self::F64(lhs), Self::F64(rhs)) => {
                let data = if B::F64_VEC {
                    binary_map_vec(lhs_l, rhs_l, lhs, rhs, B::f64, B::f64_vec)
                } else {
                    binary_map(lhs_l, rhs_l, lhs, rhs, B::f64)
                };
                Ok(Self::F64(data))
            }
            (Self::U32(lhs), Self::U32(rhs)) => {
                let data = if B::U32_VEC {
                    binary_map_vec(lhs_l, rhs_l, lhs, rhs, B::u32, B::u32_vec)
                } else {
                    binary_map(lhs_l, rhs_l, lhs, rhs, B::u32)
                };
                Ok(Self::U32(data))
            }
            (Self::I64(lhs), Self::I64(rhs)) => {
                let data = if B::I64_VEC {
                    binary_map_vec(lhs_l, rhs_l, lhs, rhs, B::i64, B::i64_vec)
                } else {
                    binary_map(lhs_l, rhs_l, lhs, rhs, B::i64)
                };
                Ok(Self::I64(data))
            }
            (Self::U8(lhs), Self::U8(rhs)) => {
                let data = if B::U8_VEC {
                    binary_map_vec(lhs_l, rhs_l, lhs, rhs, B::u8, B::u8_vec)
                } else {
                    binary_map(lhs_l, rhs_l, lhs, rhs, B::u8)
                };
                Ok(Self::U8(data))
            }
            _ => {
                // This should be covered by the dtype check above.
                Err(Error::DTypeMismatchBinaryOp {
                    lhs: self.dtype(),
                    rhs: rhs.dtype(),
                    op: B::NAME,
                }
                .bt())
            }
        }
    }

    fn copy_strided_src(&self, dst: &mut Self, dst_offset: usize, src_l: &Layout) -> Result<()> {
        match (self, dst) {
            (Self::U8(src), Self::U8(dst)) => copy_strided_src_(src, dst, dst_offset, src_l),
            (Self::U32(src), Self::U32(dst)) => copy_strided_src_(src, dst, dst_offset, src_l),
            (Self::I64(src), Self::I64(dst)) => copy_strided_src_(src, dst, dst_offset, src_l),
            (Self::BF16(src), Self::BF16(dst)) => copy_strided_src_(src, dst, dst_offset, src_l),
            (Self::F16(src), Self::F16(dst)) => copy_strided_src_(src, dst, dst_offset, src_l),
            (Self::F32(src), Self::F32(dst)) => copy_strided_src_(src, dst, dst_offset, src_l),
            (Self::F64(src), Self::F64(dst)) => copy_strided_src_(src, dst, dst_offset, src_l),
            (_, dst) => {
                // This should be covered by the dtype check above.
                return Err(Error::DTypeMismatchBinaryOp {
                    lhs: self.dtype(),
                    rhs: dst.dtype(),
                    op: "copy_strided",
                }
                .bt());
            }
        }
        Ok(())
    }

    fn where_cond(
        &self,
        layout: &Layout,
        t: &Self,
        t_l: &Layout,
        f: &Self,
        f_l: &Layout,
    ) -> Result<Self> {
        match self {
            Self::U8(pred) => WCond(pred, layout).map(t, t_l, f, f_l),
            Self::U32(pred) => WCond(pred, layout).map(t, t_l, f, f_l),
            Self::I64(pred) => WCond(pred, layout).map(t, t_l, f, f_l),
            _ => Err(Error::UnsupportedDTypeForOp(self.dtype(), "where-cond")),
        }
    }

    fn conv1d(
        &self,
        l: &Layout,
        kernel: &Self,
        kernel_l: &Layout,
        params: &crate::conv::ParamsConv1D,
    ) -> Result<Self> {
        if !USE_IM2COL_CONV1D {
            return Conv1D(params).map(self, l, kernel, kernel_l);
        }
        let op = Im2Col1D {
            l_k: params.k_size,
            padding: params.padding,
            stride: params.stride,
            dilation: params.dilation,
        };
        let col = op.map(self, l)?;
        let b = params.b_size;
        let n = params.c_out;
        let l_out = params.l_out();
        let k = op.l_k * params.c_in;
        let m = l_out;
        let col_l = Layout::contiguous((b, m, k));
        let res = if kernel_l.is_contiguous() {
            let kernel_l = Layout::contiguous_with_offset((1, n, k), kernel_l.start_offset())
                .transpose(1, 2)?
                .broadcast_as((b, k, n))?;
            col.matmul(kernel, (b, m, n, k), &col_l, &kernel_l)?
        } else {
            // Make the kernel contiguous if not already the case.
            let mut kernel_c = self.device().zeros_impl(kernel_l.shape(), kernel.dtype())?;
            kernel.copy_strided_src(&mut kernel_c, 0, kernel_l)?;
            let kernel_l = Layout::contiguous_with_offset((1, n, k), kernel_l.start_offset())
                .transpose(1, 2)?
                .broadcast_as((b, k, n))?;
            col.matmul(kernel, (b, m, n, k), &col_l, &kernel_l)?
        };
        let res_l = Layout::contiguous((b, l_out, params.c_out)).transpose(1, 2)?;
        let mut res_t = self.device().zeros_impl(res_l.shape(), res.dtype())?;
        res.copy_strided_src(&mut res_t, 0, &res_l)?;
        Ok(res_t)
    }

    fn conv_transpose1d(
        &self,
        l: &Layout,
        kernel: &Self,
        kernel_l: &Layout,
        params: &crate::conv::ParamsConvTranspose1D,
    ) -> Result<Self> {
        ConvTranspose1D(params).map(self, l, kernel, kernel_l)
    }

    fn conv2d(
        &self,
        l: &Layout,
        kernel: &Self,
        kernel_l: &Layout,
        params: &crate::conv::ParamsConv2D,
    ) -> Result<Self> {
        if !USE_IM2COL_CONV2D {
            return Conv2D(params).map(self, l, kernel, kernel_l);
        }
        let op = Im2Col {
            h_k: params.k_h,
            w_k: params.k_w,
            padding: params.padding,
            stride: params.stride,
            dilation: params.dilation,
        };
        let col = op.map(self, l)?;
        let b = params.b_size;
        let n = params.c_out;
        let (h_out, w_out) = (params.out_h(), params.out_w());
        let k = op.h_k * op.w_k * params.c_in;
        let m = h_out * w_out;
        let col_l = Layout::contiguous((b, m, k));
        let res = if kernel_l.is_contiguous() {
            let kernel_l = Layout::contiguous_with_offset((1, n, k), kernel_l.start_offset())
                .transpose(1, 2)?
                .broadcast_as((b, k, n))?;
            col.matmul(kernel, (b, m, n, k), &col_l, &kernel_l)?
        } else {
            // Make the kernel contiguous if not already the case.
            let mut kernel_c = self.device().zeros_impl(kernel_l.shape(), kernel.dtype())?;
            kernel.copy_strided_src(&mut kernel_c, 0, kernel_l)?;
            let kernel_l = Layout::contiguous_with_offset((1, n, k), kernel_l.start_offset())
                .transpose(1, 2)?
                .broadcast_as((b, k, n))?;
            col.matmul(kernel, (b, m, n, k), &col_l, &kernel_l)?
        };
        let res_l = Layout::contiguous((b, h_out, w_out, params.c_out))
            .transpose(1, 2)?
            .transpose(1, 3)?;
        let mut res_t = self.device().zeros_impl(res_l.shape(), res.dtype())?;
        res.copy_strided_src(&mut res_t, 0, &res_l)?;
        Ok(res_t)
    }

    fn conv_transpose2d(
        &self,
        l: &Layout,
        kernel: &Self,
        kernel_l: &Layout,
        params: &crate::conv::ParamsConvTranspose2D,
    ) -> Result<Self> {
        ConvTranspose2D(params).map(self, l, kernel, kernel_l)
    }

    fn index_select(&self, ids: &Self, l: &Layout, ids_l: &Layout, dim: usize) -> Result<Self> {
        match ids {
            Self::U8(ids) => IndexSelect { ids, ids_l, dim }.map(self, l),
            Self::U32(ids) => IndexSelect { ids, ids_l, dim }.map(self, l),
            Self::I64(ids) => IndexSelect { ids, ids_l, dim }.map(self, l),
            _ => Err(Error::UnsupportedDTypeForOp(self.dtype(), "index-select")),
        }
    }

    fn gather(&self, l: &Layout, ids: &Self, ids_l: &Layout, dim: usize) -> Result<Self> {
        match ids {
            Self::U8(ids) => Gather { ids, ids_l, dim }.map(self, l),
            Self::U32(ids) => Gather { ids, ids_l, dim }.map(self, l),
            Self::I64(ids) => Gather { ids, ids_l, dim }.map(self, l),
            _ => Err(Error::UnsupportedDTypeForOp(self.dtype(), "gather")),
        }
    }

    fn scatter_add(
        &self,
        l: &Layout,
        ids: &Self,
        ids_l: &Layout,
        src: &Self,
        src_l: &Layout,
        dim: usize,
    ) -> Result<Self> {
        match ids {
            Self::U8(ids) => ScatterAdd { ids, ids_l, dim }.map(self, l, src, src_l),
            Self::U32(ids) => ScatterAdd { ids, ids_l, dim }.map(self, l, src, src_l),
            Self::I64(ids) => ScatterAdd { ids, ids_l, dim }.map(self, l, src, src_l),
            _ => Err(Error::UnsupportedDTypeForOp(self.dtype(), "scatter-add")),
        }
    }

    fn index_add(
        &self,
        l: &Layout,
        ids: &Self,
        ids_l: &Layout,
        src: &Self,
        src_l: &Layout,
        dim: usize,
    ) -> Result<Self> {
        match ids {
            Self::U8(ids) => {
                let ids = match ids_l.contiguous_offsets() {
                    Some((a, b)) => &ids[a..b],
                    None => Err(Error::RequiresContiguous { op: "index-add" }.bt())?,
                };
                IndexAdd { ids, dim }.map(self, l, src, src_l)
            }
            Self::U32(ids) => {
                let ids = match ids_l.contiguous_offsets() {
                    Some((a, b)) => &ids[a..b],
                    None => Err(Error::RequiresContiguous { op: "index-add" }.bt())?,
                };
                IndexAdd { ids, dim }.map(self, l, src, src_l)
            }
            Self::I64(ids) => {
                let ids = match ids_l.contiguous_offsets() {
                    Some((a, b)) => &ids[a..b],
                    None => Err(Error::RequiresContiguous { op: "index-add" }.bt())?,
                };
                IndexAdd { ids, dim }.map(self, l, src, src_l)
            }
            _ => Err(Error::UnsupportedDTypeForOp(self.dtype(), "index-add").bt()),
        }
    }

    fn matmul(
        &self,
        rhs: &Self,
        bmnk: (usize, usize, usize, usize),
        lhs_l: &Layout,
        rhs_l: &Layout,
    ) -> Result<Self> {
        MatMul(bmnk).map(self, lhs_l, rhs, rhs_l)
    }

    fn device(&self) -> &Self::Device {
        &CpuDevice
    }

    fn try_clone(&self, _: &Layout) -> Result<Self> {
        Ok(self.clone())
    }

    fn to_cpu_storage(&self) -> Result<CpuStorage> {
        Ok(self.clone())
    }
}

impl BackendDevice for CpuDevice {
    type Storage = CpuStorage;

    fn location(&self) -> crate::DeviceLocation {
        crate::DeviceLocation::Cpu
    }

    fn same_device(&self, _: &Self) -> bool {
        true
    }

    fn storage_from_cpu_storage(&self, s: &CpuStorage) -> Result<Self::Storage> {
        Ok(s.clone())
    }

    fn new(_: usize) -> Result<Self> {
        Ok(Self)
    }

    fn set_seed(&self, _seed: u64) -> Result<()> {
        crate::bail!("cannot seed the CPU rng with set_seed")
    }

    fn rand_uniform(&self, shape: &Shape, dtype: DType, min: f64, max: f64) -> Result<CpuStorage> {
        use rand::prelude::*;
        let elem_count = shape.elem_count();
        let mut rng = rand::thread_rng();
        match dtype {
            DType::U8 | DType::U32 | DType::I64 => {
                Err(Error::UnsupportedDTypeForOp(dtype, "rand_uniform").bt())
            }
            DType::BF16 => {
                let mut data = Vec::with_capacity(elem_count);
                let uniform =
                    rand::distributions::Uniform::new(bf16::from_f64(min), bf16::from_f64(max));
                for _i in 0..elem_count {
                    data.push(rng.sample::<bf16, _>(uniform))
                }
                Ok(CpuStorage::BF16(data))
            }
            DType::F16 => {
                let mut data = Vec::with_capacity(elem_count);
                let uniform =
                    rand::distributions::Uniform::new(f16::from_f64(min), f16::from_f64(max));
                for _i in 0..elem_count {
                    data.push(rng.sample::<f16, _>(uniform))
                }
                Ok(CpuStorage::F16(data))
            }
            DType::F32 => {
                let mut data = Vec::with_capacity(elem_count);
                let uniform = rand::distributions::Uniform::new(min as f32, max as f32);
                for _i in 0..elem_count {
                    data.push(rng.sample::<f32, _>(uniform))
                }
                Ok(CpuStorage::F32(data))
            }
            DType::F64 => {
                let mut data = Vec::with_capacity(elem_count);
                let uniform = rand::distributions::Uniform::new(min, max);
                for _i in 0..elem_count {
                    data.push(rng.sample::<f64, _>(uniform))
                }
                Ok(CpuStorage::F64(data))
            }
        }
    }

    fn rand_normal(&self, shape: &Shape, dtype: DType, mean: f64, std: f64) -> Result<CpuStorage> {
        use rand::prelude::*;
        let elem_count = shape.elem_count();
        let mut rng = rand::thread_rng();
        match dtype {
            DType::U8 | DType::U32 | DType::I64 => {
                Err(Error::UnsupportedDTypeForOp(dtype, "rand_normal").bt())
            }
            DType::BF16 => {
                let mut data = Vec::with_capacity(elem_count);
                let normal = rand_distr::Normal::new(bf16::from_f64(mean), bf16::from_f64(std))
                    .map_err(Error::wrap)?;
                for _i in 0..elem_count {
                    data.push(normal.sample(&mut rng))
                }
                Ok(CpuStorage::BF16(data))
            }
            DType::F16 => {
                let mut data = Vec::with_capacity(elem_count);
                let normal = rand_distr::Normal::new(f16::from_f64(mean), f16::from_f64(std))
                    .map_err(Error::wrap)?;
                for _i in 0..elem_count {
                    data.push(normal.sample(&mut rng))
                }
                Ok(CpuStorage::F16(data))
            }
            DType::F32 => {
                let mut data = Vec::with_capacity(elem_count);
                let normal =
                    rand_distr::Normal::new(mean as f32, std as f32).map_err(Error::wrap)?;
                for _i in 0..elem_count {
                    data.push(normal.sample(&mut rng))
                }
                Ok(CpuStorage::F32(data))
            }
            DType::F64 => {
                let mut data = Vec::with_capacity(elem_count);
                let normal = rand_distr::Normal::new(mean, std).map_err(Error::wrap)?;
                for _i in 0..elem_count {
                    data.push(normal.sample(&mut rng))
                }
                Ok(CpuStorage::F64(data))
            }
        }
    }

    fn ones_impl(&self, shape: &Shape, dtype: DType) -> Result<CpuStorage> {
        let elem_count = shape.elem_count();
        let storage = match dtype {
            DType::U8 => CpuStorage::U8(vec![1u8; elem_count]),
            DType::U32 => CpuStorage::U32(vec![1u32; elem_count]),
            DType::I64 => CpuStorage::I64(vec![1i64; elem_count]),
            DType::BF16 => CpuStorage::BF16(vec![bf16::ONE; elem_count]),
            DType::F16 => CpuStorage::F16(vec![f16::ONE; elem_count]),
            DType::F32 => CpuStorage::F32(vec![1f32; elem_count]),
            DType::F64 => CpuStorage::F64(vec![1f64; elem_count]),
        };
        Ok(storage)
    }

    fn zeros_impl(&self, shape: &Shape, dtype: DType) -> Result<CpuStorage> {
        let elem_count = shape.elem_count();
        let storage = match dtype {
            DType::U8 => CpuStorage::U8(vec![0u8; elem_count]),
            DType::U32 => CpuStorage::U32(vec![0u32; elem_count]),
            DType::I64 => CpuStorage::I64(vec![0i64; elem_count]),
            DType::BF16 => CpuStorage::BF16(vec![bf16::ZERO; elem_count]),
            DType::F16 => CpuStorage::F16(vec![f16::ZERO; elem_count]),
            DType::F32 => CpuStorage::F32(vec![0f32; elem_count]),
            DType::F64 => CpuStorage::F64(vec![0f64; elem_count]),
        };
        Ok(storage)
    }
}

#[macro_export]
macro_rules! map_dtype {
    ($name:expr, $storage:ident, $fn:expr, ($($dtypes:ident),+)) => {
        match $storage {
            $(CpuStorage::$dtypes(__e) => CpuStorage::$dtypes($fn(__e)),)*
            s => Err(Error::UnsupportedDTypeForOp(s.dtype(), $name).bt())?,
        }
    };
}
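The exported `map_dtype!` macro dispatches a transformation over whichever `CpuStorage` variants are listed and routes every other dtype through the `UnsupportedDTypeForOp` fallback arm. Below is a minimal usage sketch, not taken from the file itself: the helper names are hypothetical, and it assumes `CpuStorage`, `Error`, the crate's `Result` alias, and the macro are all in scope of a function that can propagate errors with `?`.

```rust
// Hypothetical helpers for illustration only; `double_vec` and `double_floats`
// are not part of cpu_backend.rs.
fn double_vec<T: std::ops::Add<Output = T> + Copy>(v: Vec<T>) -> Vec<T> {
    v.into_iter().map(|x| x + x).collect()
}

fn double_floats(storage: CpuStorage) -> Result<CpuStorage> {
    // Expands to one match arm per listed dtype (F32 and F64 here); any other
    // dtype reaches the fallback arm and early-returns an
    // `UnsupportedDTypeForOp` error tagged with the "double-floats" op name.
    let out = map_dtype!("double-floats", storage, double_vec, (F32, F64));
    Ok(out)
}
```

Keeping the dtype dispatch in a macro like this spares dtype-restricted helpers from spelling out the full seven-variant match by hand.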
candle/candle-core/src/cpu_backend.rs/0
{ "file_path": "candle/candle-core/src/cpu_backend.rs", "repo_id": "candle", "token_count": 68866 }
12