Dataset schema: `text` (string, 7 to 1.24M characters), `id` (string, 14 to 166 characters), `metadata` (dict), `__index_level_0__` (int64, 0 to 519).
FROM nvcr.io/nvidia/pytorch:24.07-py3

RUN pip install transformers evaluate datasets

RUN git clone https://github.com/huggingface/accelerate.git

RUN cd accelerate && \
    pip install -e . && \
    cd benchmarks/fp8

RUN /bin/bash
accelerate/benchmarks/fp8/Dockerfile/0
{ "file_path": "accelerate/benchmarks/fp8/Dockerfile", "repo_id": "accelerate", "token_count": 90 }
0
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Launching your 🤗 Accelerate scripts In the previous tutorial, you were introduced to how to modify your current training script to use 🤗 Accelerate. The final version of that code is shown below: ```python from accelerate import Accelerator accelerator = Accelerator() model, optimizer, training_dataloader, scheduler = accelerator.prepare( model, optimizer, training_dataloader, scheduler ) for batch in training_dataloader: optimizer.zero_grad() inputs, targets = batch outputs = model(inputs) loss = loss_function(outputs, targets) accelerator.backward(loss) optimizer.step() scheduler.step() ``` But how do you run this code and have it utilize the special hardware available to it? First, you should rewrite the above code into a function, and make it callable as a script. For example: ```diff from accelerate import Accelerator + def main(): accelerator = Accelerator() model, optimizer, training_dataloader, scheduler = accelerator.prepare( model, optimizer, training_dataloader, scheduler ) for batch in training_dataloader: optimizer.zero_grad() inputs, targets = batch outputs = model(inputs) loss = loss_function(outputs, targets) accelerator.backward(loss) optimizer.step() scheduler.step() + if __name__ == "__main__": + main() ``` Next, you need to launch it with `accelerate launch`. <Tip warning={true}> It's recommended you run `accelerate config` before using `accelerate launch` to configure your environment to your liking. Otherwise 🤗 Accelerate will use very basic defaults depending on your system setup. </Tip> ## Using accelerate launch 🤗 Accelerate has a special CLI command to help you launch your code in your system through `accelerate launch`. This command wraps around all of the different commands needed to launch your script on various platforms, without you having to remember what each of them is. <Tip> If you are familiar with launching scripts in PyTorch yourself such as with `torchrun`, you can still do this. It is not required to use `accelerate launch`. </Tip> You can launch your script quickly by using: ```bash accelerate launch {script_name.py} --arg1 --arg2 ... ``` Just put `accelerate launch` at the start of your command, and pass in additional arguments and parameters to your script afterward like normal! Since this runs the various torch spawn methods, all of the expected environment variables can be modified here as well. For example, here is how to use `accelerate launch` with a single GPU: ```bash CUDA_VISIBLE_DEVICES="0" accelerate launch {script_name.py} --arg1 --arg2 ... ``` You can also use `accelerate launch` without performing `accelerate config` first, but you may need to manually pass in the right configuration parameters. 
In this case, 🤗 Accelerate will make some hyperparameter decisions for you, e.g., if GPUs are available, it will use all of them by default without the mixed precision. Here is how you would use all GPUs and train with mixed precision disabled: ```bash accelerate launch --multi_gpu {script_name.py} {--arg1} {--arg2} ... ``` Or by specifying a number of GPUs to use: ```bash accelerate launch --num_processes=2 {script_name.py} {--arg1} {--arg2} ... ``` To get more specific you should pass in the needed parameters yourself. For instance, here is how you would also launch that same script on two GPUs using mixed precision while avoiding all of the warnings: ```bash accelerate launch --multi_gpu --mixed_precision=fp16 --num_processes=2 {script_name.py} {--arg1} {--arg2} ... ``` For a complete list of parameters you can pass in, run: ```bash accelerate launch -h ``` <Tip> Even if you are not using 🤗 Accelerate in your code, you can still use the launcher for starting your scripts! </Tip> For a visualization of this difference, that earlier `accelerate launch` on multi-gpu would look something like so with `torchrun`: ```bash MIXED_PRECISION="fp16" torchrun --nproc_per_node=2 --num_machines=1 {script_name.py} {--arg1} {--arg2} ... ``` You can also launch your script utilizing the launch CLI as a python module itself, enabling the ability to pass in other python-specific launching behaviors. To do so, use `accelerate.commands.launch` instead of `accelerate launch`: ```bash python -m accelerate.commands.launch --num_processes=2 {script_name.py} {--arg1} {--arg2} ``` If you want to execute the script with any other python flags, you can pass them in as well similar to `-m`, such as the below example enabling unbuffered stdout and stderr: ```bash python -u -m accelerate.commands.launch --num_processes=2 {script_name.py} {--arg1} {--arg2} ``` <Tip> You can run your code on CPU as well! This is helpful for debugging and testing purposes on toy models and datasets. ```bash accelerate launch --cpu {script_name.py} {--arg1} {--arg2} ``` </Tip> ## Why you should always use `accelerate config` Why is it useful to the point you should **always** run `accelerate config`? Remember that earlier call to `accelerate launch` as well as `torchrun`? Post configuration, to run that script with the needed parts you just need to use `accelerate launch` outright, without passing anything else in: ```bash accelerate launch {script_name.py} {--arg1} {--arg2} ... ``` ## Custom Configurations As briefly mentioned earlier, `accelerate launch` should be mostly used through combining set configurations made with the `accelerate config` command. These configs are saved to a `default_config.yaml` file in your cache folder for 🤗 Accelerate. This cache folder is located at (with decreasing order of priority): - The content of your environment variable `HF_HOME` suffixed with `accelerate`. - If it does not exist, the content of your environment variable `XDG_CACHE_HOME` suffixed with `huggingface/accelerate`. - If this does not exist either, the folder `~/.cache/huggingface/accelerate`. To have multiple configurations, the flag `--config_file` can be passed to the `accelerate launch` command paired with the location of the custom yaml. 
An example yaml may look something like the following for two GPUs on a single machine using `fp16` for mixed precision: ```yaml compute_environment: LOCAL_MACHINE deepspeed_config: {} distributed_type: MULTI_GPU fsdp_config: {} machine_rank: 0 main_process_ip: null main_process_port: null main_training_function: main mixed_precision: fp16 num_machines: 1 num_processes: 2 use_cpu: false ``` Launching a script from the location of that custom yaml file looks like the following: ```bash accelerate launch --config_file {path/to/config/my_config_file.yaml} {script_name.py} {--arg1} {--arg2} ... ``` ## Multi-node training Multi-node training with 🤗Accelerate is similar to [multi-node training with torchrun](https://pytorch.org/tutorials/intermediate/ddp_series_multinode.html). The simplest way to launch a multi-node training run is to do the following: - Copy your codebase and data to all nodes. (or place them on a shared filesystem) - Setup your python packages on all nodes. - Run `accelerate config` on the main single node first. After specifying the number of nodes, you will be asked to specify the rank of each node (this will be 0 for the main/master node), along with the IP address and port for the main process. This is required for the worker nodes to communicate with the main process. Afterwards, you can copy or send this config file across all of your nodes, changing the `machine_rank` to 1, 2,3, etc. to avoid having to run the command (or just follow their directions directly for launching with `torchrun` as well) Once you have done this, you can start your multi-node training run by running `accelerate launch` (or `torchrun`) on all nodes. <Tip> It is required that the command be ran on all nodes for everything to start, not just running it from the main node. You can use something like SLURM or a different process executor to wrap around this requirement and call everything from a single command. </Tip> <Tip> It is recommended to use the intranet IP of your main node over the public IP for better latency. This is the `192.168.x.x` or the `172.x.x.x` address you see when you run `hostname -I` on the main node. </Tip> To get a better idea about multi-node training, check out our example for [multi-node training with FSDP](https://huggingface.co/blog/ram-efficient-pytorch-fsdp).
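Putting the pattern from this tutorial together, below is a minimal, self-contained sketch of a script that can be started with `accelerate launch {script_name.py}` as shown above; the toy model, dataset, and hyperparameters are illustrative placeholders rather than part of the original example:

```python
# Minimal launchable sketch following the tutorial's structure. The linear model,
# random data, and learning rate are illustrative assumptions.
import torch
from torch.utils.data import DataLoader, TensorDataset

from accelerate import Accelerator


def main():
    accelerator = Accelerator()

    # Toy regression task standing in for a real dataset and model.
    inputs = torch.randn(512, 16)
    targets = inputs.sum(dim=1, keepdim=True)
    training_dataloader = DataLoader(TensorDataset(inputs, targets), batch_size=32, shuffle=True)

    model = torch.nn.Linear(16, 1)
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10)
    loss_function = torch.nn.MSELoss()

    model, optimizer, training_dataloader, scheduler = accelerator.prepare(
        model, optimizer, training_dataloader, scheduler
    )

    for batch in training_dataloader:
        optimizer.zero_grad()
        batch_inputs, batch_targets = batch
        outputs = model(batch_inputs)
        loss = loss_function(outputs, batch_targets)
        accelerator.backward(loss)
        optimizer.step()
        scheduler.step()

    accelerator.print("Training finished.")


if __name__ == "__main__":
    main()
```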
accelerate/docs/source/basic_tutorials/launch.md/0
{ "file_path": "accelerate/docs/source/basic_tutorials/launch.md", "repo_id": "accelerate", "token_count": 2702 }
1
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Helpful Utilities Below are a variety of utility functions that 🤗 Accelerate provides, broken down by use-case. ## Constants Constants used throughout 🤗 Accelerate for reference The following are constants used when utilizing [`Accelerator.save_state`] `utils.MODEL_NAME`: `"pytorch_model"` `utils.OPTIMIZER_NAME`: `"optimizer"` `utils.RNG_STATE_NAME`: `"random_states"` `utils.SCALER_NAME`: `"scaler.pt` `utils.SCHEDULER_NAME`: `"scheduler` The following are constants used when utilizing [`Accelerator.save_model`] `utils.WEIGHTS_NAME`: `"pytorch_model.bin"` `utils.SAFE_WEIGHTS_NAME`: `"model.safetensors"` `utils.WEIGHTS_INDEX_NAME`: `"pytorch_model.bin.index.json"` `utils.SAFE_WEIGHTS_INDEX_NAME`: `"model.safetensors.index.json"` ## Data Classes These are basic dataclasses used throughout 🤗 Accelerate and they can be passed in as parameters. ### Standalone These are standalone dataclasses used for checks, such as the type of distributed system being used [[autodoc]] utils.ComputeEnvironment [[autodoc]] utils.DistributedType [[autodoc]] utils.DynamoBackend [[autodoc]] utils.LoggerType [[autodoc]] utils.PrecisionType [[autodoc]] utils.RNGType [[autodoc]] utils.SageMakerDistributedType ### Kwargs These are configurable arguments for specific interactions throughout the PyTorch ecosystem that Accelerate handles under the hood. [[autodoc]] utils.AutocastKwargs [[autodoc]] utils.DistributedDataParallelKwargs [[autodoc]] utils.FP8RecipeKwargs [[autodoc]] utils.GradScalerKwargs [[autodoc]] utils.InitProcessGroupKwargs [[autodoc]] utils.KwargsHandler ## Plugins These are plugins that can be passed to the [`Accelerator`] object. While they are defined elsewhere in the documentation, for convenience all of them are available to see here: [[autodoc]] utils.DeepSpeedPlugin [[autodoc]] utils.FullyShardedDataParallelPlugin [[autodoc]] utils.GradientAccumulationPlugin [[autodoc]] utils.MegatronLMPlugin [[autodoc]] utils.TorchDynamoPlugin ## Configurations These are classes which can be configured and passed through to the appropriate integration [[autodoc]] utils.BnbQuantizationConfig [[autodoc]] utils.DataLoaderConfiguration [[autodoc]] utils.ProjectConfiguration ## Environmental Variables These are environmental variables that can be enabled for different use cases * `ACCELERATE_DEBUG_MODE` (`str`): Whether to run accelerate in debug mode. More info available [here](../usage_guides/debug.md). ## Data Manipulation and Operations These include data operations that mimic the same `torch` ops but can be used on distributed processes. 
[[autodoc]] utils.broadcast [[autodoc]] utils.broadcast_object_list [[autodoc]] utils.concatenate [[autodoc]] utils.convert_outputs_to_fp32 [[autodoc]] utils.convert_to_fp32 [[autodoc]] utils.gather [[autodoc]] utils.gather_object [[autodoc]] utils.listify [[autodoc]] utils.pad_across_processes [[autodoc]] utils.recursively_apply [[autodoc]] utils.reduce [[autodoc]] utils.send_to_device [[autodoc]] utils.slice_tensors ## Environment Checks These functionalities check the state of the current working environment including information about the operating system itself, what it can support, and if particular dependencies are installed. [[autodoc]] utils.is_bf16_available [[autodoc]] utils.is_ipex_available [[autodoc]] utils.is_mps_available [[autodoc]] utils.is_npu_available [[autodoc]] utils.is_torch_version [[autodoc]] utils.is_torch_xla_available [[autodoc]] utils.is_xpu_available ## Environment Manipulation [[autodoc]] utils.patch_environment [[autodoc]] utils.clear_environment [[autodoc]] utils.write_basic_config When setting up 🤗 Accelerate for the first time, rather than running `accelerate config` [~utils.write_basic_config] can be used as an alternative for quick configuration. [[autodoc]] utils.set_numa_affinity [[autodoc]] utils.environment.override_numa_affinity ## Memory [[autodoc]] utils.find_executable_batch_size ## Modeling These utilities relate to interacting with PyTorch models [[autodoc]] utils.calculate_maximum_sizes [[autodoc]] utils.compute_module_sizes [[autodoc]] utils.extract_model_from_parallel [[autodoc]] utils.get_balanced_memory [[autodoc]] utils.get_max_layer_size [[autodoc]] utils.infer_auto_device_map [[autodoc]] utils.load_checkpoint_in_model [[autodoc]] utils.load_offloaded_weights [[autodoc]] utils.load_state_dict [[autodoc]] utils.offload_state_dict [[autodoc]] utils.retie_parameters [[autodoc]] utils.set_module_tensor_to_device [[autodoc]] utils.shard_checkpoint ## Parallel These include general utilities that should be used when working in parallel. [[autodoc]] utils.extract_model_from_parallel [[autodoc]] utils.save [[autodoc]] utils.wait_for_everyone ## Random These utilities relate to setting and synchronizing of all the random states. [[autodoc]] utils.set_seed [[autodoc]] utils.synchronize_rng_state [[autodoc]] utils.synchronize_rng_states ## PyTorch XLA These include utilities that are useful while using PyTorch with XLA. [[autodoc]] utils.install_xla ## Loading model weights These include utilities that are useful to load checkpoints. [[autodoc]] utils.load_checkpoint_in_model ## Quantization These include utilities that are useful to quantize model. [[autodoc]] utils.load_and_quantize_model
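To make parts of this reference more concrete, here is a small sketch showing a few of the utilities above used together inside a script started with `accelerate launch`; the tensors and values are illustrative:

```python
# Illustrative use of a handful of the utilities documented above.
import torch

from accelerate import Accelerator
from accelerate.utils import send_to_device, set_seed, wait_for_everyone

accelerator = Accelerator()

# Seed every library's RNG (and each process) for reproducibility.
set_seed(42)

# Move an arbitrary nested structure of tensors to the current device.
batch = {"x": torch.randn(8, 4), "y": torch.randint(0, 2, (8,))}
batch = send_to_device(batch, accelerator.device)

# Gather a per-process tensor onto every process, e.g. before computing a metric.
local_scores = torch.full((8,), float(accelerator.process_index), device=accelerator.device)
all_scores = accelerator.gather(local_scores)

# Block until every process reaches this point.
wait_for_everyone()
accelerator.print(all_scores.shape)
```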
accelerate/docs/source/package_reference/utilities.md/0
{ "file_path": "accelerate/docs/source/package_reference/utilities.md", "repo_id": "accelerate", "token_count": 1999 }
2
<!-- Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Profiler Profiler is a tool that allows the collection of performance metrics during training and inference. Profiler’s context manager API can be used to better understand what model operators are the most expensive, examine their input shapes and stack traces, study device kernel activity, and visualize the execution trace. It provides insights into the performance of your model, allowing you to optimize and improve it. This guide explains how to use PyTorch Profiler to measure the time and memory consumption of the model’s operators and how to integrate this with 🤗 Accelerate. We will cover various use cases and provide examples for each. ## Using profiler to analyze execution time Profiler allows one to check which operators were called during the execution of a code range wrapped with a profiler context manager. Let’s see how we can use profiler to analyze the execution time: <hfoptions id="cpu execution time"> <hfoption id="PyTorch"> ```python import torch import torchvision.models as models from torch.profiler import profile, record_function, ProfilerActivity model = models.resnet18() inputs = torch.randn(5, 3, 224, 224) with profile(activities=[ProfilerActivity.CPU], record_shapes=True) as prof: model(inputs) print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10)) ``` </hfoption> <hfoption id="Accelerate"> ```python from accelerate import Accelerator, ProfileKwargs import torch import torchvision.models as models model = models.resnet18() inputs = torch.randn(5, 3, 224, 224) profile_kwargs = ProfileKwargs( activities=["cpu"], record_shapes=True ) accelerator = Accelerator(cpu=True, kwargs_handlers=[profile_kwargs]) model = accelerator.prepare(model) with accelerator.profile() as prof: with torch.no_grad(): model(inputs) print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10)) ``` </hfoption> </hfoptions> The resulting table output (omitting some columns): ``` --------------------------------- ------------ ------------ ------------ ------------ Name Self CPU CPU total CPU time avg # of Calls --------------------------------- ------------ ------------ ------------ ------------ aten::conv2d 171.000us 52.260ms 2.613ms 20 aten::convolution 227.000us 52.089ms 2.604ms 20 aten::_convolution 270.000us 51.862ms 2.593ms 20 aten::mkldnn_convolution 51.273ms 51.592ms 2.580ms 20 aten::batch_norm 118.000us 7.059ms 352.950us 20 aten::_batch_norm_impl_index 315.000us 6.941ms 347.050us 20 aten::native_batch_norm 6.305ms 6.599ms 329.950us 20 aten::max_pool2d 40.000us 4.008ms 4.008ms 1 aten::max_pool2d_with_indices 3.968ms 3.968ms 3.968ms 1 aten::add_ 780.000us 780.000us 27.857us 28 --------------------------------- ------------ ------------ ------------ ------------ Self CPU time total: 67.016ms ``` To get a finer granularity of 
results and include operator input shapes, pass `group_by_input_shape=True` (note: this requires running the profiler with `record_shapes=True`): ```python print(prof.key_averages(group_by_input_shape=True).table(sort_by="cpu_time_total", row_limit=10)) ``` ## Using profiler to analyze memory consumption Profiler can also show the amount of memory (used by the model’s tensors) that was allocated (or released) during the execution of the model’s operators. To enable memory profiling functionality pass `profile_memory=True`. <hfoptions id="memory consumption"> <hfoption id="PyTorch"> ```python model = models.resnet18() inputs = torch.randn(5, 3, 224, 224) with profile(activities=[ProfilerActivity.CPU], profile_memory=True, record_shapes=True) as prof: model(inputs) print(prof.key_averages().table(sort_by="self_cpu_memory_usage", row_limit=10)) ``` </hfoption> <hfoption id="Accelerate"> ```python model = models.resnet18() inputs = torch.randn(5, 3, 224, 224) profile_kwargs = ProfileKwargs( activities=["cpu"], profile_memory=True, record_shapes=True ) accelerator = Accelerator(cpu=True, kwargs_handlers=[profile_kwargs]) model = accelerator.prepare(model) with accelerator.profile() as prof: model(inputs) print(prof.key_averages().table(sort_by="self_cpu_memory_usage", row_limit=10)) ``` </hfoption> </hfoptions> The resulting table output (omitting some columns): ``` --------------------------------- ------------ ------------ ------------ Name CPU Mem Self CPU Mem # of Calls --------------------------------- ------------ ------------ ------------ aten::empty 94.85 Mb 94.85 Mb 205 aten::max_pool2d_with_indices 11.48 Mb 11.48 Mb 1 aten::addmm 19.53 Kb 19.53 Kb 1 aten::mean 10.00 Kb 10.00 Kb 1 aten::empty_strided 492 b 492 b 5 aten::cat 240 b 240 b 6 aten::abs 480 b 240 b 4 aten::masked_select 120 b 112 b 1 aten::ne 61 b 53 b 3 aten::eq 30 b 30 b 1 --------------------------------- ------------ ------------ ------------ Self CPU time total: 69.332ms ``` ## Exporting chrome trace You can examine the sequence of profiled operators and CUDA kernels in Chrome trace viewer (`chrome://tracing`): ![profile_export](https://github.com/huggingface/accelerate/assets/100389977/5acb193f-6d11-4f7b-9873-c600c19e8172) <hfoptions id="exporting chrome trace"> <hfoption id="PyTorch"> ```python model = models.resnet18().cuda() inputs = torch.randn(5, 3, 224, 224).cuda() with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof: model(inputs) prof.export_chrome_trace("trace.json") ``` </hfoption> <hfoption id="Accelerate"> ```python profile_kwargs = ProfileKwargs( activities=["cpu", "cuda"], output_trace_dir="trace" ) accelerator = Accelerator(kwargs_handlers=[profile_kwargs]) model = accelerator.prepare(model) with accelerator.profile() as prof: model(inputs) # The trace will be saved to the specified directory ``` </hfoption> </hfoptions> ## Using Profiler to Analyze Long-Running Jobs Profiler offers an additional API to handle long-running jobs (such as training loops). Tracing all of the execution can be slow and result in very large trace files. To avoid this, use optional arguments: - `schedule_option`: Scheduling options allow you to control when profiling is active. This is useful for long-running jobs to avoid collecting too much data. Available keys are `wait`, `warmup`, `active`, `repeat` and `skip_first`. 
The profiler will skip the first `skip_first` steps, then wait for `wait` steps, then do the warmup for the next `warmup` steps, then do the active recording for the next `active` steps and then repeat the cycle starting with `wait` steps. The optional number of cycles is specified with the `repeat` parameter, the zero value means that the cycles will continue until the profiling is finished. - `on_trace_ready`: specifies a function that takes a reference to the profiler as an input and is called by the profiler each time the new trace is ready. To illustrate how the API works, consider the following example: <hfoptions id="custom handler"> <hfoption id="PyTorch"> ```python from torch.profiler import schedule my_schedule = schedule( skip_first=10, wait=5, warmup=1, active=3, repeat=2 ) def trace_handler(p): output = p.key_averages().table(sort_by="self_cuda_time_total", row_limit=10) print(output) p.export_chrome_trace("/tmp/trace_" + str(p.step_num) + ".json") with profile( activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA], schedule=my_schedule, on_trace_ready=trace_handler ) as p: for idx in range(8): model(inputs) p.step() ``` </hfoption> <hfoption id="Accelerate"> ```python def trace_handler(p): output = p.key_averages().table(sort_by="self_cuda_time_total", row_limit=10) print(output) p.export_chrome_trace("/tmp/trace_" + str(p.step_num) + ".json") profile_kwargs = ProfileKwargs( activities=["cpu", "cuda"], schedule_option={"wait": 5, "warmup": 1, "active": 3, "repeat": 2, "skip_first": 10}, on_trace_ready=trace_handler ) accelerator = Accelerator(kwargs_handlers=[profile_kwargs]) model = accelerator.prepare(model) with accelerator.profile() as prof: for idx in range(8): model(inputs) prof.step() ``` </hfoption> </hfoptions> ## FLOPS Use formula to estimate the FLOPs (floating point operations) of specific operators (matrix multiplication and 2D convolution). To measure floating-point operations (FLOPS): <hfoptions id="FLOPS"> <hfoption id="PyTorch"> ```python with profile( activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA], with_flops=True ) as prof: model(inputs) print(prof.key_averages().table(sort_by="flops", row_limit=10)) ``` </hfoption> <hfoption id="Accelerate"> ```python profile_kwargs = ProfileKwargs( with_flops=True ) accelerator = Accelerator(kwargs_handlers=[profile_kwargs]) with accelerator.profile() as prof: model(inputs) print(prof.key_averages().table(sort_by="flops", row_limit=10)) ``` </hfoption> </hfoptions> The resulting table output (omitting some columns): ``` ------------------------------------------------------- ------------ ------------ ------------ Name Self CPU Self CUDA Total FLOPs ------------------------------------------------------- ------------ ------------ ------------ aten::conv2d 197.000us 0.000us 18135613440.000 aten::addmm 103.000us 17.000us 5120000.000 aten::mul 29.000us 2.000us 30.000 aten::convolution 409.000us 0.000us -- aten::_convolution 253.000us 0.000us -- aten::cudnn_convolution 5.465ms 2.970ms -- cudaEventRecord 138.000us 0.000us -- cudaStreamIsCapturing 43.000us 0.000us -- cudaStreamGetPriority 40.000us 0.000us -- cudaDeviceGetStreamPriorityRange 10.000us 0.000us -- ------------------------------------------------------- ------------ ------------ ------------ Self CPU time total: 21.938ms Self CUDA time total: 4.165ms ``` ## Conclusion and Further Information PyTorch Profiler is a powerful tool for analyzing the performance of your models. 
By integrating it with 🤗 Accelerate, you can easily profile your models and gain insights into their performance, helping you to optimize and improve them. For more detailed information, refer to the [PyTorch Profiler documentation](https://pytorch.org/docs/stable/profiler.html).
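As a closing sketch that ties the options above together, the following combines `activities`, `schedule_option`, and `output_trace_dir` in one short loop; the output directory name and step counts are arbitrary choices for illustration:

```python
# Illustrative combination of the ProfileKwargs options discussed above.
import torch
import torchvision.models as models

from accelerate import Accelerator, ProfileKwargs

model = models.resnet18()
inputs = torch.randn(5, 3, 224, 224)

profile_kwargs = ProfileKwargs(
    activities=["cpu"],
    schedule_option={"wait": 1, "warmup": 1, "active": 2, "repeat": 1, "skip_first": 1},
    output_trace_dir="profile_traces",  # arbitrary directory for the saved traces
)
accelerator = Accelerator(cpu=True, kwargs_handlers=[profile_kwargs])
model = accelerator.prepare(model)
inputs = inputs.to(accelerator.device)

with accelerator.profile() as prof:
    for _ in range(8):
        with torch.no_grad():
            model(inputs)
        prof.step()  # advances the profiling schedule
```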
accelerate/docs/source/usage_guides/profiler.md/0
{ "file_path": "accelerate/docs/source/usage_guides/profiler.md", "repo_id": "accelerate", "token_count": 5124 }
3
# Copyright 2021 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import argparse import os import re import numpy as np import PIL import torch from timm import create_model from torch.optim.lr_scheduler import OneCycleLR from torch.utils.data import DataLoader, Dataset from torchvision.transforms import Compose, RandomResizedCrop, Resize, ToTensor from accelerate import Accelerator ######################################################################## # This is a fully working simple example to use Accelerate # # This example trains a ResNet50 on the Oxford-IIT Pet Dataset # in any of the following settings (with the same script): # - single CPU or single GPU # - multi GPUS (using PyTorch distributed mode) # - (multi) TPUs # - fp16 (mixed-precision) or fp32 (normal precision) # # To run it in each of these various modes, follow the instructions # in the readme for examples: # https://github.com/huggingface/accelerate/tree/main/examples # ######################################################################## # Function to get the label from the filename def extract_label(fname): stem = fname.split(os.path.sep)[-1] return re.search(r"^(.*)_\d+\.jpg$", stem).groups()[0] class PetsDataset(Dataset): def __init__(self, file_names, image_transform=None, label_to_id=None): self.file_names = file_names self.image_transform = image_transform self.label_to_id = label_to_id def __len__(self): return len(self.file_names) def __getitem__(self, idx): fname = self.file_names[idx] raw_image = PIL.Image.open(fname) image = raw_image.convert("RGB") if self.image_transform is not None: image = self.image_transform(image) label = extract_label(fname) if self.label_to_id is not None: label = self.label_to_id[label] return {"image": image, "label": label} def training_function(config, args): # Initialize accelerator accelerator = Accelerator(cpu=args.cpu, mixed_precision=args.mixed_precision) # Sample hyper-parameters for learning rate, batch size, seed and a few other HPs lr = config["lr"] num_epochs = int(config["num_epochs"]) seed = int(config["seed"]) batch_size = int(config["batch_size"]) image_size = config["image_size"] if not isinstance(image_size, (list, tuple)): image_size = (image_size, image_size) # Grab all the image filenames file_names = [os.path.join(args.data_dir, fname) for fname in os.listdir(args.data_dir) if fname.endswith(".jpg")] # Build the label correspondences all_labels = [extract_label(fname) for fname in file_names] id_to_label = list(set(all_labels)) id_to_label.sort() label_to_id = {lbl: i for i, lbl in enumerate(id_to_label)} # Set the seed before splitting the data. 
np.random.seed(seed) torch.manual_seed(seed) torch.cuda.manual_seed_all(seed) # Split our filenames between train and validation random_perm = np.random.permutation(len(file_names)) cut = int(0.8 * len(file_names)) train_split = random_perm[:cut] eval_split = random_perm[cut:] # For training we use a simple RandomResizedCrop train_tfm = Compose([RandomResizedCrop(image_size, scale=(0.5, 1.0)), ToTensor()]) train_dataset = PetsDataset( [file_names[i] for i in train_split], image_transform=train_tfm, label_to_id=label_to_id ) # For evaluation, we use a deterministic Resize eval_tfm = Compose([Resize(image_size), ToTensor()]) eval_dataset = PetsDataset([file_names[i] for i in eval_split], image_transform=eval_tfm, label_to_id=label_to_id) # Instantiate dataloaders. train_dataloader = DataLoader(train_dataset, shuffle=True, batch_size=batch_size, num_workers=4) eval_dataloader = DataLoader(eval_dataset, shuffle=False, batch_size=batch_size, num_workers=4) # Instantiate the model (we build the model here so that the seed also control new weights initialization) model = create_model("resnet50d", pretrained=True, num_classes=len(label_to_id)) # We could avoid this line since the accelerator is set with `device_placement=True` (default value). # Note that if you are placing tensors on devices manually, this line absolutely needs to be before the optimizer # creation otherwise training will not work on TPU (`accelerate` will kindly throw an error to make us aware of that). model = model.to(accelerator.device) # Freezing the base model for param in model.parameters(): param.requires_grad = False for param in model.get_classifier().parameters(): param.requires_grad = True # We normalize the batches of images to be a bit faster. mean = torch.tensor(model.default_cfg["mean"])[None, :, None, None].to(accelerator.device) std = torch.tensor(model.default_cfg["std"])[None, :, None, None].to(accelerator.device) # Instantiate optimizer optimizer = torch.optim.Adam(params=model.parameters(), lr=lr / 25) # Instantiate learning rate scheduler lr_scheduler = OneCycleLR(optimizer=optimizer, max_lr=lr, epochs=num_epochs, steps_per_epoch=len(train_dataloader)) # Prepare everything # There is no specific order to remember, we just need to unpack the objects in the same order we gave them to the # prepare method. model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare( model, optimizer, train_dataloader, eval_dataloader, lr_scheduler ) # Now we train the model for epoch in range(num_epochs): model.train() for step, batch in enumerate(train_dataloader): # We could avoid this line since we set the accelerator with `device_placement=True`. batch = {k: v.to(accelerator.device) for k, v in batch.items()} inputs = (batch["image"] - mean) / std outputs = model(inputs) loss = torch.nn.functional.cross_entropy(outputs, batch["label"]) accelerator.backward(loss) optimizer.step() lr_scheduler.step() optimizer.zero_grad() model.eval() accurate = 0 num_elems = 0 for _, batch in enumerate(eval_dataloader): # We could avoid this line since we set the accelerator with `device_placement=True`. 
batch = {k: v.to(accelerator.device) for k, v in batch.items()} inputs = (batch["image"] - mean) / std with torch.no_grad(): outputs = model(inputs) predictions = outputs.argmax(dim=-1) predictions, references = accelerator.gather_for_metrics((predictions, batch["label"])) accurate_preds = predictions == references num_elems += accurate_preds.shape[0] accurate += accurate_preds.long().sum() eval_metric = accurate.item() / num_elems # Use accelerator.print to print only on the main process. accelerator.print(f"epoch {epoch}: {100 * eval_metric:.2f}") accelerator.end_training() def main(): parser = argparse.ArgumentParser(description="Simple example of training script.") parser.add_argument("--data_dir", required=True, help="The data folder on disk.") parser.add_argument( "--mixed_precision", type=str, default=None, choices=["no", "fp16", "bf16", "fp8"], help="Whether to use mixed precision. Choose" "between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10." "and an Nvidia Ampere GPU.", ) parser.add_argument( "--checkpointing_steps", type=str, default=None, help="Whether the various states should be saved at the end of every n steps, or 'epoch' for each epoch.", ) parser.add_argument("--cpu", action="store_true", help="If passed, will train on the CPU.") args = parser.parse_args() config = {"lr": 3e-2, "num_epochs": 3, "seed": 42, "batch_size": 64, "image_size": 224} training_function(config, args) if __name__ == "__main__": main()
accelerate/examples/cv_example.py/0
{ "file_path": "accelerate/examples/cv_example.py", "repo_id": "accelerate", "token_count": 3215 }
4
# Copyright 2024 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from manim import * class Stage4(Scene): def construct(self): step_1 = MarkupText( f"To understand the next part fully, let's define two terms,\n<span fgcolor='{RED}'>`batch_size`</span> and <span fgcolor='{BLUE}'>`global_batch_size`</span>:", font_size=18 ) step_1.move_to([0, 1.5, 0]) # <span fgcolor='{YELLOW}'>●</span> step_2 = MarkupText( f"\n\n● <span fgcolor='{RED}'>`batch_size`</span>: \n\tThis will be defined as the batch size seen on a given\n\t*individual* GPU", font_size=18, ).next_to(step_1, direction=DOWN, aligned_edge=LEFT) step_3 = MarkupText( f"\n\n● <span fgcolor='{BLUE}'>`global_batch_size`</span>:\n\tThis will be defined as the *total* number of\n\tdifferent items seen in the dataset, across all GPUs", font_size=18, ).next_to(step_2, direction=DOWN, aligned_edge=LEFT) step_4 = MarkupText( f"\n\nSo if we have a dataset of 64 items, 8 GPUs, \nand a `batch_size` of 8, each *step* will go through\nthe entire dataset one time as 8*8=64", font_size=18, ).next_to(step_3, direction=DOWN, aligned_edge=LEFT) self.play( Write(step_1, run_time=4), ) self.play( Write(step_2, run_time=4) ) self.play( Write(step_3, run_time=4) ) self.play( Write(step_4, run_time=6) ) self.wait()
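The relationship the animation spells out can also be checked with a few lines of arithmetic; the numbers below are the ones used in the example (64 items, 8 GPUs, per-GPU batch size of 8):

```python
# Arithmetic behind the example in the animation above.
dataset_size = 64
num_gpus = 8
batch_size = 8  # batch size seen on a single, individual GPU

global_batch_size = batch_size * num_gpus  # total items seen across all GPUs per step
steps_per_epoch = dataset_size // global_batch_size

assert global_batch_size == 64
assert steps_per_epoch == 1  # one step covers the whole dataset, since 8 * 8 == 64
```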
accelerate/manim_animations/dataloaders/stage_4.py/0
{ "file_path": "accelerate/manim_animations/dataloaders/stage_4.py", "repo_id": "accelerate", "token_count": 914 }
5
#!/usr/bin/env python # Copyright 2021 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import argparse from ...utils.dataclasses import ( ComputeEnvironment, DistributedType, DynamoBackend, FP8BackendType, PrecisionType, SageMakerDistributedType, ) from ..menu import BulletMenu DYNAMO_BACKENDS = [ "EAGER", "AOT_EAGER", "INDUCTOR", "AOT_TS_NVFUSER", "NVPRIMS_NVFUSER", "CUDAGRAPHS", "OFI", "FX2TRT", "ONNXRT", "TENSORRT", "AOT_TORCHXLA_TRACE_ONCE", "TORHCHXLA_TRACE_ONCE", "IPEX", "TVM", ] def _ask_field(input_text, convert_value=None, default=None, error_message=None): ask_again = True while ask_again: result = input(input_text) try: if default is not None and len(result) == 0: return default return convert_value(result) if convert_value is not None else result except Exception: if error_message is not None: print(error_message) def _ask_options(input_text, options=[], convert_value=None, default=0): menu = BulletMenu(input_text, options) result = menu.run(default_choice=default) return convert_value(result) if convert_value is not None else result def _convert_compute_environment(value): value = int(value) return ComputeEnvironment(["LOCAL_MACHINE", "AMAZON_SAGEMAKER"][value]) def _convert_distributed_mode(value): value = int(value) return DistributedType( ["NO", "MULTI_CPU", "MULTI_XPU", "MULTI_GPU", "MULTI_NPU", "MULTI_MLU", "MULTI_MUSA", "XLA"][value] ) def _convert_dynamo_backend(value): value = int(value) return DynamoBackend(DYNAMO_BACKENDS[value]).value def _convert_mixed_precision(value): value = int(value) return PrecisionType(["no", "fp16", "bf16", "fp8"][value]) def _convert_sagemaker_distributed_mode(value): value = int(value) return SageMakerDistributedType(["NO", "DATA_PARALLEL", "MODEL_PARALLEL"][value]) def _convert_fp8_backend(value): value = int(value) return FP8BackendType(["TE", "MSAMP"][value]) def _convert_yes_no_to_bool(value): return {"yes": True, "no": False}[value.lower()] class SubcommandHelpFormatter(argparse.RawDescriptionHelpFormatter): """ A custom formatter that will remove the usage line from the help message for subcommands. """ def _format_usage(self, usage, actions, groups, prefix): usage = super()._format_usage(usage, actions, groups, prefix) usage = usage.replace("<command> [<args>] ", "") return usage
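For context, the prompt helpers defined here back the interactive `accelerate config` questionnaire; the sketch below shows the general pattern with a made-up prompt (the real prompts live in the cluster and SageMaker config modules):

```python
# Hypothetical illustration of the question-and-answer pattern enabled by the
# helpers above; the prompt text is invented for the example.
from accelerate.commands.config.config_utils import _ask_field, _convert_yes_no_to_bool

use_cpu = _ask_field(
    "Do you want to run your training on CPU only? [yes/NO]: ",
    _convert_yes_no_to_bool,
    default=False,
    error_message="Please enter yes or no.",
)
print(f"use_cpu = {use_cpu}")
```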
accelerate/src/accelerate/commands/config/config_utils.py/0
{ "file_path": "accelerate/src/accelerate/commands/config/config_utils.py", "repo_id": "accelerate", "token_count": 1219 }
6
# Copyright 2024 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import argparse class _StoreAction(argparse.Action): """ Custom action that allows for `-` or `_` to be passed in for an argument. """ def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) new_option_strings = [] for option_string in self.option_strings: new_option_strings.append(option_string) if "_" in option_string[2:]: # Add `-` version to the option string new_option_strings.append(option_string.replace("_", "-")) self.option_strings = new_option_strings def __call__(self, parser, namespace, values, option_string=None): setattr(namespace, self.dest, values) class _StoreConstAction(_StoreAction): """ Same as `argparse._StoreConstAction` but uses the custom `_StoreAction`. """ def __init__(self, option_strings, dest, const, default=None, required=False, help=None): super().__init__( option_strings=option_strings, dest=dest, nargs=0, const=const, default=default, required=required, help=help, ) def __call__(self, parser, namespace, values, option_string=None): setattr(namespace, self.dest, self.const) class _StoreTrueAction(_StoreConstAction): """ Same as `argparse._StoreTrueAction` but uses the custom `_StoreConstAction`. """ def __init__( self, option_strings, dest, default=None, required=False, help=None, ): super().__init__( option_strings=option_strings, dest=dest, const=True, default=default, required=required, help=help ) class CustomArgumentGroup(argparse._ArgumentGroup): """ Custom argument group that allows for the use of `-` or `_` in arguments passed and overrides the help for each when applicable. """ def _add_action(self, action): args = vars(action) if isinstance(action, argparse._StoreTrueAction): action = _StoreTrueAction( args["option_strings"], args["dest"], args["default"], args["required"], args["help"] ) elif isinstance(action, argparse._StoreConstAction): action = _StoreConstAction( args["option_strings"], args["dest"], args["const"], args["default"], args["required"], args["help"], ) elif isinstance(action, argparse._StoreAction): action = _StoreAction(**args) action = super()._add_action(action) return action class CustomArgumentParser(argparse.ArgumentParser): """ Custom argument parser that allows for the use of `-` or `_` in arguments passed and overrides the help for each when applicable. """ def add_argument(self, *args, **kwargs): if "action" in kwargs: # Translate action -> class if kwargs["action"] == "store_true": kwargs["action"] = _StoreTrueAction else: kwargs["action"] = _StoreAction super().add_argument(*args, **kwargs) def add_argument_group(self, *args, **kwargs): group = CustomArgumentGroup(self, *args, **kwargs) self._action_groups.append(group) return group
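A short sketch of the behavior these classes provide: an option registered with underscores also accepts a dashed spelling. The argument names below are illustrative, not part of the module:

```python
# Illustrative use of the custom parser defined above; both "--use_cpu"/"--use-cpu"
# and "--num_processes"/"--num-processes" resolve to the same destinations.
from accelerate.commands.utils import CustomArgumentParser

parser = CustomArgumentParser()
# Passing an explicit action routes through the custom _Store* classes, which
# register the dashed aliases alongside the underscore spellings.
parser.add_argument("--use_cpu", action="store_true")
parser.add_argument("--num_processes", action="store", type=int, default=1)

args = parser.parse_args(["--use-cpu", "--num-processes", "2"])
assert args.use_cpu is True and args.num_processes == 2
```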
accelerate/src/accelerate/commands/utils.py/0
{ "file_path": "accelerate/src/accelerate/commands/utils.py", "repo_id": "accelerate", "token_count": 1619 }
7
# Copyright 2022 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import logging import math import os from copy import deepcopy import datasets import evaluate import torch import transformers from datasets import load_dataset from torch.utils.data import DataLoader, IterableDataset from transformers import AutoModelForSequenceClassification, AutoTokenizer from accelerate import Accelerator, DataLoaderConfiguration, DistributedType from accelerate.data_loader import DataLoaderDispatcher from accelerate.test_utils import RegressionDataset, RegressionModel, torch_device from accelerate.utils import is_torch_xla_available, set_seed os.environ["TRANSFORMERS_NO_ADVISORY_WARNINGS"] = "true" class ListHandler(logging.Handler): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.logs = [] def emit(self, record): self.logs.append(record) def get_basic_setup(accelerator, num_samples=82, batch_size=16): "Returns everything needed to perform basic training" set_seed(42) model = RegressionModel() ddp_model = deepcopy(model) dset = RegressionDataset(length=num_samples) dataloader = DataLoader(dset, batch_size=batch_size) model.to(accelerator.device) ddp_model, dataloader = accelerator.prepare(ddp_model, dataloader) return model, ddp_model, dataloader def get_dataloader(accelerator: Accelerator, use_longest=False): tokenizer = AutoTokenizer.from_pretrained("hf-internal-testing/mrpc-bert-base-cased") dataset = load_dataset("glue", "mrpc", split="validation") def tokenize_function(examples): outputs = tokenizer(examples["sentence1"], examples["sentence2"], truncation=True, max_length=None) return outputs with accelerator.main_process_first(): tokenized_datasets = dataset.map( tokenize_function, batched=True, remove_columns=["idx", "sentence1", "sentence2"], ) tokenized_datasets = tokenized_datasets.rename_column("label", "labels") def collate_fn(examples): if use_longest: return tokenizer.pad(examples, padding="longest", return_tensors="pt") return tokenizer.pad(examples, padding="max_length", max_length=128, return_tensors="pt") return DataLoader(tokenized_datasets, shuffle=False, collate_fn=collate_fn, batch_size=16) def get_mrpc_setup(dispatch_batches, split_batches): dataloader_config = DataLoaderConfiguration(dispatch_batches=dispatch_batches, split_batches=split_batches) accelerator = Accelerator(dataloader_config=dataloader_config) dataloader = get_dataloader(accelerator, not dispatch_batches) model = AutoModelForSequenceClassification.from_pretrained( "hf-internal-testing/mrpc-bert-base-cased", return_dict=True ) ddp_model, ddp_dataloader = accelerator.prepare(model, dataloader) return { "ddp": [ddp_model, ddp_dataloader, torch_device], "no": [model, dataloader, accelerator.device], }, accelerator def generate_predictions(model, dataloader, accelerator): logits_and_targets = [] for batch in dataloader: input, target = batch.values() with torch.no_grad(): logit = model(input) logit, target = accelerator.gather_for_metrics((logit, target)) 
logits_and_targets.append((logit, target)) logits, targs = [], [] for logit, targ in logits_and_targets: logits.append(logit) targs.append(targ) logits, targs = torch.cat(logits), torch.cat(targs) return logits, targs def test_torch_metrics( accelerator: Accelerator, num_samples=82, dispatch_batches=False, split_batches=False, batch_size=16 ): _, ddp_model, dataloader = get_basic_setup(accelerator, num_samples, batch_size) logits, _ = generate_predictions(ddp_model, dataloader, accelerator) assert ( len(logits) == num_samples ), f"Unexpected number of inputs:\n Expected: {num_samples}\n Actual: {len(logits)}" def test_mrpc(dispatch_batches: bool = False, split_batches: bool = False): metric = evaluate.load("glue", "mrpc") setup, accelerator = get_mrpc_setup(dispatch_batches, split_batches) # First do baseline model, dataloader, device = setup["no"] model.to(device) model.eval() for batch in dataloader: batch.to(device) with torch.inference_mode(): outputs = model(**batch) preds = outputs.logits.argmax(dim=-1) metric.add_batch(predictions=preds, references=batch["labels"]) baseline = metric.compute() # Then do distributed model, dataloader, device = setup["ddp"] model.eval() for batch in dataloader: with torch.inference_mode(): outputs = model(**batch) preds = outputs.logits.argmax(dim=-1) references = batch["labels"] preds, references = accelerator.gather_for_metrics((preds, references)) metric.add_batch(predictions=preds, references=references) distributed = metric.compute() for key in "accuracy f1".split(): assert math.isclose( baseline[key], distributed[key] ), f"Baseline and Distributed are not the same for key {key}:\n\tBaseline: {baseline[key]}\n\tDistributed: {distributed[key]}\n" def test_gather_for_metrics_with_non_tensor_objects_iterable_dataset(): class DummyIterableDataset(IterableDataset): def __init__(self, data): self.data = data def __len__(self): return len(self.data) def __iter__(self): yield from self.data iterable_dataset = DummyIterableDataset([n for n in range(30)]) dataloader = DataLoader(iterable_dataset, batch_size=4) accelerator = Accelerator() prepared_dataloader = accelerator.prepare(dataloader) if accelerator.is_main_process: logger = logging.root.manager.loggerDict["accelerate.accelerator"] list_handler = ListHandler() logger.addHandler(list_handler) batches_for_metrics = [] for batch in prepared_dataloader: batches_for_metrics.append(accelerator.gather_for_metrics(batch)) assert torch.cat(batches_for_metrics).size(0) == 30 if accelerator.is_main_process: assert len(list_handler.logs) == 0 logger.removeHandler(list_handler) def test_gather_for_metrics_with_iterable_dataset(): class DummyIterableDataset(IterableDataset): def __init__(self, data): self.data = data def __len__(self): return len(self.data) def __iter__(self): yield from self.data iterable_dataset = DummyIterableDataset(torch.as_tensor(range(30))) dataloader = DataLoader(iterable_dataset, batch_size=4) accelerator = Accelerator() prepared_dataloader = accelerator.prepare(dataloader) assert isinstance(prepared_dataloader, DataLoaderDispatcher) if accelerator.is_main_process: logger = logging.root.manager.loggerDict["accelerate.accelerator"] list_handler = ListHandler() logger.addHandler(list_handler) batches_for_metrics = [] for batch in prepared_dataloader: batches_for_metrics.append(accelerator.gather_for_metrics(batch)) assert torch.cat(batches_for_metrics).size(0) == 30 if accelerator.is_main_process: assert len(list_handler.logs) == 0 logger.removeHandler(list_handler) def 
test_gather_for_metrics_drop_last(): accelerator = Accelerator() per_device_batch_size = 5 num_items = (10 * accelerator.num_processes) + 1 dataloader = DataLoader(range(num_items), batch_size=per_device_batch_size, drop_last=True) dataloader = accelerator.prepare(dataloader) iterator = iter(dataloader) next(iterator) # Skip first batch tensor([0, 1, 2, 3, 4], device='cuda:0') batch = next(iterator) gathered_items = accelerator.gather_for_metrics(batch) # Should return a full set of complete batches from each GPU num_expected_items = per_device_batch_size * accelerator.num_processes assert gathered_items.size(0) == ( num_expected_items ), f"Expected number of items: {num_expected_items}, Actual: {gathered_items.size(0)}" def main(): dataloader_config = DataLoaderConfiguration(split_batches=False, dispatch_batches=False) accelerator = Accelerator(dataloader_config=dataloader_config) if accelerator.is_local_main_process: datasets.utils.logging.set_verbosity_warning() transformers.utils.logging.set_verbosity_warning() else: datasets.utils.logging.set_verbosity_error() transformers.utils.logging.set_verbosity_error() # TorchXLA does not support batch dispatching. 'put_on_device' is always False for # TorchXLA, which can cause a value error in 'prepare_data_loader' function. dispatch_batches_options = [False] if accelerator.state.distributed_type == DistributedType.XLA else [True, False] # Temporarily close this test for TorchXLA due to the 'Cannot set version_counter for # inference tensor' error in inference mode. Reopen it after TorchXLA fixes this bug. # These are a bit slower so they should only be ran on the GPU or TPU if accelerator.device.type != "cpu" and not is_torch_xla_available(): if accelerator.is_local_main_process: print("**Testing gather_for_metrics**") for split_batches in [True, False]: for dispatch_batches in dispatch_batches_options: if accelerator.is_local_main_process: print(f"With: `split_batches={split_batches}`, `dispatch_batches={dispatch_batches}`") test_mrpc(dispatch_batches, split_batches) accelerator.state._reset_state() print("test_gather_for_metrics_with_iterable_dataset") test_gather_for_metrics_with_iterable_dataset() print("test gather_for_metrics_with_non_tensor_objects_iterable_dataset") test_gather_for_metrics_with_non_tensor_objects_iterable_dataset() # MpDeviceLoader in TorchXLA is an asynchronous loader that preloads several batches into cache. # This can cause the 'end_of_dataloader' of DataLoaderStateMixin to be set earlier than intended. # Skip this test when TorchXLA is enabled. 
if accelerator.state.distributed_type != DistributedType.XLA: if accelerator.is_local_main_process: print("**Test torch metrics**") for split_batches in [True, False]: for dispatch_batches in dispatch_batches_options: dataloader_config = DataLoaderConfiguration( split_batches=split_batches, dispatch_batches=dispatch_batches ) accelerator = Accelerator(dataloader_config=dataloader_config) if accelerator.is_local_main_process: print(f"With: `split_batches={split_batches}`, `dispatch_batches={dispatch_batches}`, length=99") test_torch_metrics(accelerator, 99) accelerator.state._reset_state() if accelerator.is_local_main_process: print("**Test last batch is not dropped when perfectly divisible**") accelerator = Accelerator() test_torch_metrics(accelerator, 512) accelerator.state._reset_state() if accelerator.is_local_main_process: print("**Test that `drop_last` is taken into account**") test_gather_for_metrics_drop_last() accelerator.end_training() accelerator.state._reset_state() def _mp_fn(index): # For xla_spawn (TPUs) main() if __name__ == "__main__": main()
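Condensed to its core, the behavior these tests exercise is that `gather_for_metrics` drops the samples duplicated to pad the final uneven batch, so gathered results line up with the true dataset length. A minimal sketch, assuming a 99-item dataset as in the test and a script started with `accelerate launch`:

```python
# Minimal sketch of the property tested above: gathered metric inputs match the
# real dataset length even when the last batch is uneven across processes.
import torch
from torch.utils.data import DataLoader, TensorDataset

from accelerate import Accelerator

accelerator = Accelerator()
dataset = TensorDataset(torch.arange(99, dtype=torch.float32))
dataloader = accelerator.prepare(DataLoader(dataset, batch_size=16))

gathered = []
for (batch,) in dataloader:
    gathered.append(accelerator.gather_for_metrics(batch))

assert torch.cat(gathered).numel() == 99
```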
accelerate/src/accelerate/test_utils/scripts/external_deps/test_metrics.py/0
{ "file_path": "accelerate/src/accelerate/test_utils/scripts/external_deps/test_metrics.py", "repo_id": "accelerate", "token_count": 4714 }
8
# Copyright 2022 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from .constants import ( MODEL_NAME, OPTIMIZER_NAME, PROFILE_PATTERN_NAME, RNG_STATE_NAME, SAFE_MODEL_NAME, SAFE_WEIGHTS_INDEX_NAME, SAFE_WEIGHTS_NAME, SAFE_WEIGHTS_PATTERN_NAME, SAMPLER_NAME, SCALER_NAME, SCHEDULER_NAME, TORCH_DISTRIBUTED_OPERATION_TYPES, TORCH_LAUNCH_PARAMS, WEIGHTS_INDEX_NAME, WEIGHTS_NAME, WEIGHTS_PATTERN_NAME, ) from .dataclasses import ( AutocastKwargs, BnbQuantizationConfig, ComputeEnvironment, CustomDtype, DataLoaderConfiguration, DDPCommunicationHookType, DeepSpeedPlugin, DistributedDataParallelKwargs, DistributedType, DynamoBackend, FP8RecipeKwargs, FullyShardedDataParallelPlugin, GradientAccumulationPlugin, GradScalerKwargs, InitProcessGroupKwargs, KwargsHandler, LoggerType, MegatronLMPlugin, PrecisionType, ProfileKwargs, ProjectConfiguration, RNGType, SageMakerDistributedType, TensorInformation, TorchDynamoPlugin, add_model_config_to_megatron_parser, ) from .environment import ( are_libraries_initialized, check_cuda_p2p_ib_support, check_fp8_capability, convert_dict_to_env_variables, get_cpu_distributed_information, get_gpu_info, get_int_from_env, parse_choice_from_env, parse_flag_from_env, set_numa_affinity, str_to_bool, ) from .imports import ( get_ccl_version, is_4bit_bnb_available, is_8bit_bnb_available, is_aim_available, is_bf16_available, is_bnb_available, is_boto3_available, is_ccl_available, is_clearml_available, is_comet_ml_available, is_cuda_available, is_datasets_available, is_deepspeed_available, is_dvclive_available, is_fp8_available, is_import_timer_available, is_ipex_available, is_lomo_available, is_megatron_lm_available, is_mlflow_available, is_mlu_available, is_mps_available, is_msamp_available, is_musa_available, is_npu_available, is_pandas_available, is_peft_available, is_pippy_available, is_pynvml_available, is_pytest_available, is_rich_available, is_sagemaker_available, is_schedulefree_available, is_tensorboard_available, is_timm_available, is_torch_xla_available, is_torchdata_available, is_torchdata_stateful_dataloader_available, is_torchvision_available, is_transformer_engine_available, is_transformers_available, is_triton_available, is_wandb_available, is_xpu_available, ) from .modeling import ( calculate_maximum_sizes, check_device_map, check_tied_parameters_in_config, check_tied_parameters_on_same_device, compute_module_sizes, convert_file_size_to_int, dtype_byte_size, find_tied_parameters, get_balanced_memory, get_max_layer_size, get_max_memory, get_mixed_precision_context_manager, id_tensor_storage, infer_auto_device_map, is_peft_model, load_checkpoint_in_model, load_offloaded_weights, load_state_dict, named_module_tensors, retie_parameters, set_module_tensor_to_device, shard_checkpoint, ) from .offload import ( OffloadedWeightsLoader, PrefixedDataset, extract_submodules_state_dict, load_offloaded_weight, offload_state_dict, offload_weight, save_offload_index, ) from .operations import ( CannotPadNestedTensorWarning, GatheredParameters, 
broadcast, broadcast_object_list, concatenate, convert_outputs_to_fp32, convert_to_fp32, copy_tensor_to_devices, find_batch_size, find_device, gather, gather_object, get_data_structure, honor_type, ignorant_find_batch_size, initialize_tensors, is_namedtuple, is_tensor_information, is_torch_tensor, listify, pad_across_processes, pad_input_tensors, recursively_apply, reduce, send_to_device, slice_tensors, ) from .versions import compare_versions, is_torch_version if is_deepspeed_available(): from .deepspeed import ( DeepSpeedEngineWrapper, DeepSpeedOptimizerWrapper, DeepSpeedSchedulerWrapper, DummyOptim, DummyScheduler, HfDeepSpeedConfig, ) from .bnb import has_4bit_bnb_layers, load_and_quantize_model from .fsdp_utils import ( disable_fsdp_ram_efficient_loading, enable_fsdp_ram_efficient_loading, load_fsdp_model, load_fsdp_optimizer, merge_fsdp_weights, save_fsdp_model, save_fsdp_optimizer, ) from .launch import ( PrepareForLaunch, _filter_args, prepare_deepspeed_cmd_env, prepare_multi_gpu_env, prepare_sagemager_args_inputs, prepare_simple_launcher_cmd_env, prepare_tpu, ) # For docs from .megatron_lm import ( AbstractTrainStep, BertTrainStep, GPTTrainStep, MegatronLMDummyDataLoader, MegatronLMDummyScheduler, T5TrainStep, avg_losses_across_data_parallel_group, ) if is_megatron_lm_available(): from .megatron_lm import ( MegatronEngine, MegatronLMOptimizerWrapper, MegatronLMSchedulerWrapper, gather_across_data_parallel_groups, ) from .megatron_lm import initialize as megatron_lm_initialize from .megatron_lm import prepare_data_loader as megatron_lm_prepare_data_loader from .megatron_lm import prepare_model_optimizer_scheduler as megatron_lm_prepare_model_optimizer_scheduler from .megatron_lm import prepare_optimizer as megatron_lm_prepare_optimizer from .megatron_lm import prepare_scheduler as megatron_lm_prepare_scheduler from .memory import find_executable_batch_size, release_memory from .other import ( check_os_kernel, clean_state_dict_for_safetensors, clear_environment, convert_bytes, extract_model_from_parallel, get_pretty_name, is_port_in_use, merge_dicts, patch_environment, recursive_getattr, save, wait_for_everyone, write_basic_config, ) from .random import set_seed, synchronize_rng_state, synchronize_rng_states from .torch_xla import install_xla from .tqdm import tqdm from .transformer_engine import ( apply_fp8_autowrap, contextual_fp8_autocast, convert_model, has_transformer_engine_layers, )
accelerate/src/accelerate/utils/__init__.py/0
{ "file_path": "accelerate/src/accelerate/utils/__init__.py", "repo_id": "accelerate", "token_count": 2864 }
9
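The `accelerate.utils` re-exports above are the public entry points most downstream code relies on. As a hedged illustration (not part of the file itself), two of the listed helpers can be used directly; the tensor shapes are arbitrary placeholders:

```python
# Minimal sketch using helpers re-exported by accelerate.utils above.
import torch
from accelerate.utils import send_to_device, set_seed

set_seed(42)  # seeds the python, numpy and torch RNGs for reproducibility
batch = {
    "input_ids": torch.randint(0, 100, (2, 8)),
    "labels": torch.randint(0, 100, (2, 8)),
}
device = "cuda" if torch.cuda.is_available() else "cpu"
batch = send_to_device(batch, device)  # recursively moves nested containers of tensors
```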
compute_environment: LOCAL_MACHINE debug: false distributed_type: MULTI_GPU downcast_bf16: 'no' enable_cpu_affinity: false fp8_config: amax_compute_algorithm: max amax_history_length: 1024 backend: TE fp8_format: E4M3 interval: 1 margin: 0 override_linear_precision: false use_autocast_during_eval: false gpu_ids: all machine_rank: 0 main_training_function: main mixed_precision: fp8 num_machines: 1 num_processes: 2 rdzv_backend: static same_network: true tpu_env: [] tpu_use_cluster: false tpu_use_sudo: false use_cpu: false
accelerate/tests/test_configs/0_34_0_fp8.yaml/0
{ "file_path": "accelerate/tests/test_configs/0_34_0_fp8.yaml", "repo_id": "accelerate", "token_count": 216 }
10
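Whether the fp8 recipe above can actually be used depends on the hardware and driver stack; the `accelerate.utils` exports listed earlier include probes for this. A small sketch, assuming a CUDA device is present and that the helpers behave as their names suggest:

```python
# Hedged sketch: check fp8 support before relying on a `mixed_precision: fp8` config.
# `is_fp8_available` and `check_fp8_capability` appear in the accelerate.utils exports above;
# the bf16 fallback is a placeholder choice.
from accelerate.utils import check_fp8_capability, is_fp8_available

if is_fp8_available() and check_fp8_capability():
    mixed_precision = "fp8"
else:
    mixed_precision = "bf16"  # fall back when the GPU or software stack lacks fp8 support
print(f"Selected mixed precision: {mixed_precision}")
```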
# Copyright 2022 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import os import unittest from tempfile import TemporaryDirectory import torch import torch.nn as nn from accelerate.utils import ( OffloadedWeightsLoader, extract_submodules_state_dict, load_offloaded_weight, offload_state_dict, offload_weight, ) class ModelForTest(nn.Module): def __init__(self): super().__init__() self.linear1 = nn.Linear(3, 4) self.batchnorm = nn.BatchNorm1d(4) self.linear2 = nn.Linear(4, 5) def forward(self, x): return self.linear2(self.batchnorm(self.linear1(x))) class OffloadTester(unittest.TestCase): def test_offload_state_dict(self): model = ModelForTest() with TemporaryDirectory() as tmp_dir: offload_state_dict(tmp_dir, model.state_dict()) index_file = os.path.join(tmp_dir, "index.json") assert os.path.isfile(index_file) # TODO: add tests on what is inside the index for key in ["linear1.weight", "linear1.bias", "linear2.weight", "linear2.bias"]: weight_file = os.path.join(tmp_dir, f"{key}.dat") assert os.path.isfile(weight_file) # TODO: add tests on the fact weights are properly loaded def test_offload_weight(self): dtypes = [torch.float16, torch.float32, torch.bfloat16] for dtype in dtypes: weight = torch.randn(2, 3, dtype=dtype) with TemporaryDirectory() as tmp_dir: index = offload_weight(weight, "weight", tmp_dir, {}) weight_file = os.path.join(tmp_dir, "weight.dat") assert os.path.isfile(weight_file) assert index == {"weight": {"shape": [2, 3], "dtype": str(dtype).split(".")[1]}} new_weight = load_offloaded_weight(weight_file, index["weight"]) assert torch.equal(weight, new_weight) def test_offload_weights_loader(self): model = ModelForTest() state_dict = model.state_dict() cpu_part = {k: v for k, v in state_dict.items() if "linear2" not in k} disk_part = {k: v for k, v in state_dict.items() if "linear2" in k} with TemporaryDirectory() as tmp_dir: offload_state_dict(tmp_dir, disk_part) weight_map = OffloadedWeightsLoader(state_dict=cpu_part, save_folder=tmp_dir) # Every key is there with the right value assert sorted(weight_map) == sorted(state_dict.keys()) for key, param in state_dict.items(): assert torch.allclose(param, weight_map[key]) cpu_part = {k: v for k, v in state_dict.items() if "weight" in k} disk_part = {k: v for k, v in state_dict.items() if "weight" not in k} with TemporaryDirectory() as tmp_dir: offload_state_dict(tmp_dir, disk_part) weight_map = OffloadedWeightsLoader(state_dict=cpu_part, save_folder=tmp_dir) # Every key is there with the right value assert sorted(weight_map) == sorted(state_dict.keys()) for key, param in state_dict.items(): assert torch.allclose(param, weight_map[key]) with TemporaryDirectory() as tmp_dir: offload_state_dict(tmp_dir, state_dict) # Duplicates are removed weight_map = OffloadedWeightsLoader(state_dict=cpu_part, save_folder=tmp_dir) # Every key is there with the right value assert sorted(weight_map) == sorted(state_dict.keys()) for key, param in state_dict.items(): assert torch.allclose(param, weight_map[key]) def 
test_extract_submodules_state_dict(self): state_dict = {"a.1": 0, "a.10": 1, "a.2": 2} extracted = extract_submodules_state_dict(state_dict, ["a.1", "a.2"]) assert extracted == {"a.1": 0, "a.2": 2} state_dict = {"a.1.a": 0, "a.10.a": 1, "a.2.a": 2} extracted = extract_submodules_state_dict(state_dict, ["a.1", "a.2"]) assert extracted == {"a.1.a": 0, "a.2.a": 2}
accelerate/tests/test_offload.py/0
{ "file_path": "accelerate/tests/test_offload.py", "repo_id": "accelerate", "token_count": 1981 }
11
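Outside of the test harness, the offload utilities follow the same round-trip pattern the tests above exercise: weights are serialized to `.dat` files plus an `index.json`, then served lazily through `OffloadedWeightsLoader`. A minimal usage sketch (the model and folder are placeholders):

```python
# Sketch of the offload round-trip exercised by the tests above.
from tempfile import TemporaryDirectory

import torch.nn as nn
from accelerate.utils import OffloadedWeightsLoader, offload_state_dict

model = nn.Linear(3, 4)  # placeholder model
with TemporaryDirectory() as tmp_dir:
    offload_state_dict(tmp_dir, model.state_dict())   # writes weight.dat, bias.dat, index.json
    weight_map = OffloadedWeightsLoader(save_folder=tmp_dir)
    weight = weight_map["weight"]  # loaded back from disk on access
    print(weight.shape)
```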
# Model arguments model_name_or_path: BramVanroy/gpt2-sft-dutch model_revision: main torch_dtype: bfloat16 # Data training arguments # For definitions, see: src/h4/training/config.py dataset_mixer: BramVanroy/ultra_feedback_dutch: 1.0 dataset_splits: - train_prefs - test_prefs preprocessing_num_workers: 12 # DPOTrainer arguments bf16: true beta: 0.1 do_eval: true eval_strategy: steps eval_steps: 100 gradient_accumulation_steps: 8 gradient_checkpointing: true gradient_checkpointing_kwargs: use_reentrant: False hub_model_id: gpt2-dpo-dutch learning_rate: 5.0e-7 log_level: info logging_steps: 10 lr_scheduler_type: cosine max_length: 1024 max_prompt_length: 512 num_train_epochs: 1 optim: adamw_torch output_dir: data/gpt2-dpo-dutch per_device_train_batch_size: 8 per_device_eval_batch_size: 8 push_to_hub: true save_strategy: "steps" save_steps: 100 save_total_limit: 1 seed: 42 warmup_ratio: 0.1 report_to: - wandb
alignment-handbook/recipes/gpt2-nl/dpo/config_full.yaml/0
{ "file_path": "alignment-handbook/recipes/gpt2-nl/dpo/config_full.yaml", "repo_id": "alignment-handbook", "token_count": 374 }
12
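One detail worth spelling out for recipes like the one above: the effective global batch size is the product of the per-device batch size, the gradient-accumulation steps, and the number of processes. A quick sketch with the values from this config (the GPU count is an assumption, not part of the recipe):

```python
# Effective global batch size for the DPO recipe above.
per_device_train_batch_size = 8
gradient_accumulation_steps = 8
num_gpus = 8  # placeholder: depends on the machine the recipe is launched on

effective_batch_size = per_device_train_batch_size * gradient_accumulation_steps * num_gpus
print(effective_batch_size)  # 512 with these assumptions
```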
# Model arguments model_name_or_path: alignment-handbook/zephyr-7b-sft-qlora torch_dtype: bfloat16 attn_implementation: flash_attention_2 # LoRA arguments use_peft: true load_in_4bit: true lora_r: 128 lora_alpha: 128 lora_dropout: 0.05 lora_target_modules: - q_proj - k_proj - v_proj - o_proj - gate_proj - up_proj - down_proj # Data training arguments dataset_mixer: HuggingFaceH4/ultrafeedback_binarized: 1.0 dataset_splits: - train_prefs - test_prefs preprocessing_num_workers: 12 # DPOTrainer arguments bf16: true beta: 0.01 do_eval: true eval_strategy: steps eval_steps: 100 gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_checkpointing_kwargs: use_reentrant: false hub_model_id: zephyr-7b-dpo-qlora learning_rate: 5.0e-6 log_level: info logging_steps: 10 lr_scheduler_type: cosine max_length: 1024 max_prompt_length: 512 num_train_epochs: 1 optim: paged_adamw_32bit output_dir: data/zephyr-7b-dpo-qlora # It is handy to append `hub_model_revision` to keep track of your local experiments per_device_train_batch_size: 4 per_device_eval_batch_size: 8 push_to_hub: true save_strategy: "steps" save_steps: 100 save_total_limit: 1 seed: 42 warmup_ratio: 0.1
alignment-handbook/recipes/zephyr-7b-beta/dpo/config_qlora.yaml/0
{ "file_path": "alignment-handbook/recipes/zephyr-7b-beta/dpo/config_qlora.yaml", "repo_id": "alignment-handbook", "token_count": 490 }
13
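The `lora_*` keys in the recipe above map onto a LoRA adapter configuration when `use_peft: true`. As a hedged sketch of that mapping, using the standard `peft` API rather than the handbook's own config wrapper:

```python
# Hedged sketch: the lora_r / lora_alpha / lora_dropout / lora_target_modules values
# above expressed as a peft LoraConfig; task_type is an assumption for a causal LM.
from peft import LoraConfig

peft_config = LoraConfig(
    r=128,
    lora_alpha=128,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
```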
# coding=utf-8 # Copyright 2024 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from typing import Any, Dict, List from datasets import load_dataset # HumanEval solutions that are considered simple/generic enough to be kept in the training dataset HUMAN_EVAL_STRINGS_OK = ["return x + y", "return len(string)", "return n**2", "return " ".join(strings)"] def extract_docstring(prompt: str) -> str: if '"""' in prompt: if prompt.count('"""') == 2: return prompt.split('"""')[1].strip() elif prompt.count('"""') == 4: return prompt.split('"""')[3].strip() else: raise ValueError() elif "'''" in prompt: assert prompt.count("'''") == 2 return prompt.split("'''")[1].strip() else: raise ValueError() def human_eval_docstrings() -> List[str]: ds = load_dataset("openai_humaneval", split="test") docstrings = [extract_docstring(v["prompt"]) for v in ds] return docstrings def load_dataset_column(dataset: str, column: str, split: str, name=None) -> List[str]: ds = load_dataset(dataset, split=split, name=name) res = [sample[column].strip() for sample in ds] # Only return non-empty strings return [sample for sample in res if len(sample) > 0] FILTER_OUT = { "human_eval_docstrings": human_eval_docstrings(), "human_eval_solutions": [ s for s in load_dataset_column("openai_humaneval", "canonical_solution", "test") if s not in HUMAN_EVAL_STRINGS_OK ], } def normalize_whitespace(text: str) -> str: return " ".join(text.split()) def decontaminate_humaneval( samples: List[Dict[str, Any]], text_column: str = "text", filter_out: Dict[str, List[str]] = FILTER_OUT ) -> List[Dict[str, Any]]: """ filter_out: Dict[str, List[str]] mapping from benchmark name to list of strings that need to be filtered-out. Return a list where each element is True if the corresponding file should be included in the dataset. Otherwise, the element is False. """ output = [] for content in samples[text_column]: content = normalize_whitespace(content.lower()) matched = False for _, substrings in filter_out.items(): for substring in substrings: if normalize_whitespace(substring.lower()) in content: matched = True break if matched: break # we keep files that are not matched output.append(not matched) return output
alignment-handbook/src/alignment/decontaminate.py/0
{ "file_path": "alignment-handbook/src/alignment/decontaminate.py", "repo_id": "alignment-handbook", "token_count": 1160 }
14
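Because `decontaminate_humaneval` returns one boolean per sample, it plugs directly into a batched `datasets.filter` call. A minimal sketch (the dataset name is a placeholder; the import path follows the module shown above):

```python
# Sketch: applying the batched decontamination predicate above with datasets.filter.
from datasets import load_dataset

from alignment.decontaminate import decontaminate_humaneval  # module defined above

ds = load_dataset("some/dataset", split="train")  # placeholder dataset
ds = ds.filter(
    decontaminate_humaneval,  # returns a list of booleans, one per sample in the batch
    batched=True,
    batch_size=10_000,
    fn_kwargs={"text_column": "text"},
)
```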
{ "[python]": { "editor.defaultFormatter": "ms-python.black-formatter" }, "python.formatting.provider": "none", "python.testing.pytestArgs": [ "candle-pyo3" ], "python.testing.unittestEnabled": false, "python.testing.pytestEnabled": true }
candle/.vscode/settings.json/0
{ "file_path": "candle/.vscode/settings.json", "repo_id": "candle", "token_count": 123 }
15
# Creating a WASM app
candle/candle-book/src/apps/wasm.md/0
{ "file_path": "candle/candle-book/src/apps/wasm.md", "repo_id": "candle", "token_count": 7 }
16
# Fine-tuning
candle/candle-book/src/training/finetuning.md/0
{ "file_path": "candle/candle-book/src/training/finetuning.md", "repo_id": "candle", "token_count": 6 }
17
use crate::benchmarks::{BenchDevice, BenchDeviceHandler}; use candle_core::{DType, Device, Tensor}; use criterion::{black_box, criterion_group, Criterion, Throughput}; use std::time::Instant; fn run(a: &Tensor, b: &Tensor, c: &Tensor) { a.where_cond(b, c).unwrap(); } const fn create_cond_arr<const N: usize>() -> [u8; N] { let mut arr = [0u8; N]; let mut i = 0; while i < N { arr[i] = (i % 2) as u8; i += 1; } arr } const B: usize = 1; const M: usize = 1024; const K: usize = 1024; const SIZE: usize = B * M * K; const DATA: [u8; SIZE] = create_cond_arr::<SIZE>(); fn run_where_cond_benchmark(c: &mut Criterion, device: &Device, dtype: DType, name: &str) { let tensor = Tensor::from_slice(DATA.as_slice(), (B, M, K), device).unwrap(); let on_true = Tensor::ones((B, M, K), dtype, device).unwrap(); let on_false = Tensor::zeros((B, M, K), dtype, device).unwrap(); let elements = B * M * K; // E.g. 2 f32 tensors + 1 u8 tensor let flops = (2 * elements * dtype.size_in_bytes()) + elements; let mut group = c.benchmark_group(device.bench_name(name)); group.throughput(Throughput::Bytes(flops as u64)); group.bench_function("iter", move |b| { b.iter_custom(|iters| { let start = Instant::now(); for _i in 0..iters { run( black_box(&tensor), black_box(&on_true), black_box(&on_false), ); } device.sync().unwrap(); start.elapsed() }) }); group.finish(); } fn criterion_benchmark(c: &mut Criterion) { let device = BenchDeviceHandler::new().unwrap(); for d in device.devices { run_where_cond_benchmark(c, &d, DType::F32, "where_cond_f32"); run_where_cond_benchmark(c, &d, DType::BF16, "where_cond_bf16"); run_where_cond_benchmark(c, &d, DType::F16, "where_cond_f16"); } } criterion_group!(benches, criterion_benchmark);
candle/candle-core/benches/benchmarks/where_cond.rs/0
{ "file_path": "candle/candle-core/benches/benchmarks/where_cond.rs", "repo_id": "candle", "token_count": 939 }
18
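For readers less familiar with the op being benchmarked above: `where_cond` selects elementwise from `on_true` or `on_false` based on a condition tensor, the same semantics as `torch.where` / `numpy.where`. A sketch of those semantics in PyTorch, purely for illustration (the candle benchmark itself is the Rust code above):

```python
# Illustration of the selection semantics benchmarked above, expressed with torch.where.
import torch

cond = torch.arange(8) % 2 == 1          # alternating mask, analogous to create_cond_arr
on_true = torch.ones(8)
on_false = torch.zeros(8)
out = torch.where(cond, on_true, on_false)
print(out)  # tensor([0., 1., 0., 1., 0., 1., 0., 1.])
```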
use crate::backend::{BackendDevice, BackendStorage}; use crate::op::{BinaryOpT, CmpOp, ReduceOp, UnaryOpT}; use crate::{DType, Error, IntDType, Layout, Result, Shape, WithDType}; use half::{bf16, f16}; use rayon::prelude::*; mod utils; pub use utils::{ binary_map, binary_map_vec, unary_map, unary_map_vec, Map1, Map1Any, Map2, Map2U8, }; const USE_IM2COL_CONV1D: bool = true; const USE_COL2IM_CONV1D_TR: bool = true; const USE_IM2COL_CONV2D: bool = true; // TODO: Maybe we should not implement [Clone] here and instead have an explicit allocator + // intercept the oom errors to avoid panicking and provide a proper error. #[derive(Debug, Clone)] pub enum CpuStorage { U8(Vec<u8>), U32(Vec<u32>), I64(Vec<i64>), BF16(Vec<bf16>), F16(Vec<f16>), F32(Vec<f32>), F64(Vec<f64>), } #[derive(Debug, Clone)] pub enum CpuStorageRef<'a> { U8(&'a [u8]), U32(&'a [u32]), I64(&'a [i64]), BF16(&'a [bf16]), F16(&'a [f16]), F32(&'a [f32]), F64(&'a [f64]), } #[derive(Debug, Clone)] pub struct CpuDevice; struct Cmp(CmpOp); impl Map2U8 for Cmp { const OP: &'static str = "cmp"; #[inline(always)] fn f<T: WithDType>( &self, lhs: &[T], lhs_l: &Layout, rhs: &[T], rhs_l: &Layout, ) -> Result<Vec<u8>> { let dst = match self.0 { CmpOp::Eq => binary_map(lhs_l, rhs_l, lhs, rhs, |x, y| u8::from(x == y)), CmpOp::Ne => binary_map(lhs_l, rhs_l, lhs, rhs, |x, y| u8::from(x != y)), CmpOp::Lt => binary_map(lhs_l, rhs_l, lhs, rhs, |x, y| u8::from(x < y)), CmpOp::Le => binary_map(lhs_l, rhs_l, lhs, rhs, |x, y| u8::from(x <= y)), CmpOp::Gt => binary_map(lhs_l, rhs_l, lhs, rhs, |x, y| u8::from(x > y)), CmpOp::Ge => binary_map(lhs_l, rhs_l, lhs, rhs, |x, y| u8::from(x >= y)), }; Ok(dst) } } struct WCond<'a, T: IntDType>(&'a [T], &'a Layout); impl<'a, I: IntDType> Map2 for WCond<'a, I> { const OP: &'static str = "where"; #[inline(always)] fn f<T: WithDType>(&self, t: &[T], t_l: &Layout, f: &[T], f_l: &Layout) -> Result<Vec<T>> { let vs = match ( self.1.contiguous_offsets(), t_l.contiguous_offsets(), f_l.contiguous_offsets(), ) { (Some((o1, o2)), Some((o_t1, o_t2)), Some((o_f1, o_f2))) => { let pred = &self.0[o1..o2]; let t = &t[o_t1..o_t2]; let f = &f[o_f1..o_f2]; pred.iter() .zip(t.iter().zip(f.iter())) .map(|(p, (&t, &f))| if p.is_true() { t } else { f }) .collect::<Vec<_>>() } _ => self .1 .strided_index() .zip(t_l.strided_index().zip(f_l.strided_index())) .map(|(i_p, (i_t, i_f))| { if self.0[i_p].is_true() { t[i_t] } else { f[i_f] } }) .collect::<Vec<_>>(), }; Ok(vs) } } struct ReduceIndex { reduce_dim_index: usize, use_min: bool, return_index: bool, } impl ReduceIndex { // The value gets replaced if f(s[current_acc], s[i]) returns true. 
#[inline(always)] fn fold_impl<T, U, F, G>(&self, src: &[T], src_l: &Layout, f: F, g: G) -> Result<Vec<U>> where T: Clone + Copy, U: Clone + Copy, F: Fn(T, T) -> bool, G: Fn(T, usize) -> U, { let reduce_dim_size = src_l.dims()[self.reduce_dim_index]; let reduce_dim_stride = src_l.stride()[self.reduce_dim_index]; let dst_len = src_l.shape().elem_count() / reduce_dim_size; let mut dst: Vec<U> = Vec::with_capacity(dst_len); let dst_to_set = dst.spare_capacity_mut(); let dst_to_set = unsafe { std::mem::transmute::<&mut [std::mem::MaybeUninit<U>], &mut [U]>(dst_to_set) }; match src_l.contiguous_offsets() { Some((o1, o2)) => { let src = &src[o1..o2]; if reduce_dim_stride == 1 { for (start_src_i, dst_v) in dst_to_set.iter_mut().enumerate() { let start_src_i = start_src_i * reduce_dim_size; let src = &src[start_src_i..start_src_i + reduce_dim_size]; let mut acc = 0; let mut val = src[0]; for (src_i, &s) in src.iter().enumerate() { if f(val, s) { acc = src_i; val = s } } *dst_v = g(val, acc) } } else { for (start_src_i, dst_v) in dst_to_set.iter_mut().enumerate() { let (p, q) = ( start_src_i / reduce_dim_stride, start_src_i % reduce_dim_stride, ); // start_src_i = p * reduce_dim_stride + q let start_src_i = p * reduce_dim_stride * reduce_dim_size + q; let src = &src[start_src_i..]; let mut acc = 0; let mut val = src[0]; for src_i in 0..reduce_dim_size { let s = src[src_i * reduce_dim_stride]; if f(val, s) { acc = src_i; val = s } } *dst_v = g(val, acc) } } } None => { let l = src_l.narrow(self.reduce_dim_index, 0, 1)?; for (unstr_index, src_index) in l.strided_index().enumerate() { let src = &src[src_index..]; let mut acc = 0; let mut val = src[0]; for src_i in 0..reduce_dim_size { let s = src[src_i * reduce_dim_stride]; if f(val, s) { acc = src_i; val = s } } dst_to_set[unstr_index] = g(val, acc) } } } unsafe { dst.set_len(dst_len) }; Ok(dst) } } impl Map1Any for ReduceIndex { #[inline(always)] fn f<T: WithDType, W: Fn(Vec<T>) -> CpuStorage>( &self, src: &[T], src_l: &Layout, wrap: W, ) -> Result<CpuStorage> { if src_l.shape().elem_count() == 0 { Err(Error::EmptyTensor { op: "reduce" }.bt())? } let dst = match (self.return_index, self.use_min) { (false, true) => wrap(self.fold_impl(src, src_l, |x, y| x > y, |v, _i| v)?), (false, false) => wrap(self.fold_impl(src, src_l, |x, y| x < y, |v, _i| v)?), (true, true) => { CpuStorage::U32(self.fold_impl(src, src_l, |x, y| x > y, |_v, i| i as u32)?) } (true, false) => { CpuStorage::U32(self.fold_impl(src, src_l, |x, y| x < y, |_v, i| i as u32)?) } }; Ok(dst) } } struct ReduceSum<'a> { dst_shape: &'a Shape, reduce_dims: &'a [usize], reduce_dims_and_stride: Vec<(usize, usize)>, } impl<'a> ReduceSum<'a> { #[inline(always)] fn fold_impl<T>(&self, src: &[T], src_l: &Layout, start_elt: T) -> Result<Vec<T>> where T: WithDType, { let mut dst = vec![start_elt; self.dst_shape.elem_count()]; match src_l.contiguous_offsets() { Some((o1, o2)) => { let src = &src[o1..o2]; // Handle the case where we reduce over the last dimensions separately as it is // fairly common and easy to optimize. This rely on the layout being contiguous! // reduce_dims is sorted, check if it is ranging from a to n-1. 
let reduce_over_last_dims = self .reduce_dims .iter() .rev() .enumerate() .all(|(i, &v)| v == src_l.shape().rank() - 1 - i); if reduce_over_last_dims { let reduce_sz = self .reduce_dims_and_stride .iter() .map(|(u, _)| u) .product::<usize>(); for (dst_i, dst_v) in dst.iter_mut().enumerate() { let src_i = dst_i * reduce_sz; unsafe { T::vec_reduce_sum( src[src_i..src_i + reduce_sz].as_ptr(), dst_v, reduce_sz, ) }; } return Ok(dst); }; for (unstr_index, &src) in src.iter().enumerate() { let mut dst_index = unstr_index; // Set the reduce_dims indexes to 0. for &(dim, stride) in self.reduce_dims_and_stride.iter() { // The compiler is able to optimize the following in a single divmod op. let (pre, post) = (dst_index / stride, dst_index % stride); dst_index = (pre / dim) * stride + post; } dst[dst_index] += src; } } None => { for (unstr_index, src_index) in src_l.strided_index().enumerate() { let mut dst_index = unstr_index; // Set the reduce_dims indexes to 0. for &(dim, stride) in self.reduce_dims_and_stride.iter() { // The compiler is able to optimize the following in a single divmod op. let (pre, post) = (dst_index / stride, dst_index % stride); dst_index = (pre / dim) * stride + post; } dst[dst_index] += src[src_index]; } } } Ok(dst) } } impl<'a> Map1 for ReduceSum<'a> { #[inline(always)] fn f<T: WithDType>(&self, src: &[T], src_l: &Layout) -> Result<Vec<T>> { self.fold_impl(src, src_l, T::zero()) } } struct Affine(f64, f64); impl Map1 for Affine { fn f<T: WithDType>(&self, vs: &[T], layout: &Layout) -> Result<Vec<T>> { let mul = T::from_f64(self.0); let add = T::from_f64(self.1); Ok(unary_map(vs, layout, |v| v * mul + add)) } } struct AvgPool2D((usize, usize), (usize, usize)); impl Map1 for AvgPool2D { fn f<T: WithDType>(&self, src: &[T], layout: &Layout) -> Result<Vec<T>> { // https://pytorch.org/docs/stable/generated/torch.nn.AvgPool2d.html let (k_h, k_w) = self.0; let (s_h, s_w) = self.1; let (b_sz, c, h, w) = layout.shape().dims4()?; let stride = layout.stride(); let (stride_h, stride_w) = (stride[2], stride[3]); let h_out = (h - k_h) / s_h + 1; let w_out = (w - k_w) / s_w + 1; let src_index = layout.start_offset(); let mut dst = vec![T::zero(); b_sz * c * h_out * w_out]; let scale = 1f64 / (k_h * k_w) as f64; let scale = T::from_f64(scale); for b_idx in 0..b_sz { let dst = &mut dst[b_idx * c * h_out * w_out..]; let src_index = src_index + b_idx * stride[0]; for c_idx in 0..c { let dst = &mut dst[c_idx * h_out * w_out..]; let src_index = src_index + c_idx * stride[1]; for h_idx in 0..h_out { for w_idx in 0..w_out { let mut sum = T::zero(); for m in 0..k_h { for n in 0..k_w { let m = s_h * h_idx + m; let n = s_w * w_idx + n; sum += src[src_index + m * stride_h + n * stride_w] } } dst[h_idx * w_out + w_idx] = sum * scale; } } } } Ok(dst) } } struct MaxPool2D((usize, usize), (usize, usize)); impl Map1 for MaxPool2D { fn f<T: WithDType>(&self, src: &[T], layout: &Layout) -> Result<Vec<T>> { // https://pytorch.org/docs/stable/generated/torch.nn.MaxPool2d.html let (k_h, k_w) = self.0; let (s_h, s_w) = self.1; let (b_sz, c, h, w) = layout.shape().dims4()?; let stride = layout.stride(); let (stride_h, stride_w) = (stride[2], stride[3]); let h_out = (h - k_h) / s_h + 1; let w_out = (w - k_w) / s_w + 1; let src_index = layout.start_offset(); let mut dst = vec![T::zero(); b_sz * c * h_out * w_out]; for b_idx in 0..b_sz { let dst = &mut dst[b_idx * c * h_out * w_out..]; let src_index = src_index + b_idx * stride[0]; for c_idx in 0..c { let dst = &mut dst[c_idx * h_out * w_out..]; let src_index = 
src_index + c_idx * stride[1]; for h_idx in 0..h_out { for w_idx in 0..w_out { let mut largest = src[src_index + s_h * h_idx * stride_h + s_w * w_idx * stride_w]; for m in 0..k_h { for n in 0..k_w { let m = s_h * h_idx + m; let n = s_w * w_idx + n; if largest < src[src_index + m * stride_h + n * stride_w] { largest = src[src_index + m * stride_h + n * stride_w] } } } dst[h_idx * w_out + w_idx] = largest; } } } } Ok(dst) } } struct UpsampleNearest1D(usize); impl Map1 for UpsampleNearest1D { fn f<T: WithDType>(&self, src: &[T], layout: &Layout) -> Result<Vec<T>> { // TODO: Specialized implementation for the case 2*sz? let dst_sz = self.0; let (b_sz, c, src_sz) = layout.shape().dims3()?; let stride = layout.stride(); let stride_sz = stride[2]; let src_index = layout.start_offset(); let scale_sz = src_sz as f64 / dst_sz as f64; let mut dst = vec![T::zero(); b_sz * c * dst_sz]; let src_idxs = (0..dst_sz) .map(|idx| usize::min(src_sz - 1, (idx as f64 * scale_sz) as usize)) .collect::<Vec<_>>(); for b_idx in 0..b_sz { let dst = &mut dst[b_idx * c * dst_sz..]; let src_index = src_index + b_idx * stride[0]; for c_idx in 0..c { let dst = &mut dst[c_idx * dst_sz..]; let src_index = src_index + c_idx * stride[1]; for (idx, src_idx) in src_idxs.iter().enumerate() { dst[idx] = src[src_index + src_idx * stride_sz] } } } Ok(dst) } } struct UpsampleNearest2D(usize, usize); impl Map1 for UpsampleNearest2D { fn f<T: WithDType>(&self, src: &[T], layout: &Layout) -> Result<Vec<T>> { // TODO: Specialized implementation for the case 2*h, 2*w? let (dst_h, dst_w) = (self.0, self.1); let (b_sz, c, src_h, src_w) = layout.shape().dims4()?; let stride = layout.stride(); let (stride_h, stride_w) = (stride[2], stride[3]); let src_index = layout.start_offset(); let scale_h = src_h as f64 / dst_h as f64; let scale_w = src_w as f64 / dst_w as f64; let mut dst = vec![T::zero(); b_sz * c * dst_h * dst_w]; let src_h_idxs = (0..dst_h) .map(|h_idx| usize::min(src_h - 1, (h_idx as f64 * scale_h) as usize)) .collect::<Vec<_>>(); let src_w_idxs = (0..dst_w) .map(|w_idx| usize::min(src_w - 1, (w_idx as f64 * scale_w) as usize)) .collect::<Vec<_>>(); for b_idx in 0..b_sz { let dst = &mut dst[b_idx * c * dst_h * dst_w..]; let src_index = src_index + b_idx * stride[0]; for c_idx in 0..c { let dst = &mut dst[c_idx * dst_h * dst_w..]; let src_index = src_index + c_idx * stride[1]; for (h_idx, src_h_idx) in src_h_idxs.iter().enumerate() { for (w_idx, src_w_idx) in src_w_idxs.iter().enumerate() { let src_index = src_index + src_h_idx * stride_h + src_w_idx * stride_w; dst[h_idx * dst_w + w_idx] = src[src_index] } } } } Ok(dst) } } struct Gather<'a, I: IntDType> { ids: &'a [I], ids_l: &'a Layout, dim: usize, } impl<'a, I: IntDType> Map1 for Gather<'a, I> { fn f<T: WithDType>(&self, src: &[T], src_l: &Layout) -> Result<Vec<T>> { let ids = match self.ids_l.contiguous_offsets() { Some((a, b)) => &self.ids[a..b], None => Err(Error::RequiresContiguous { op: "gather" }.bt())?, }; let src = match src_l.contiguous_offsets() { Some((a, b)) => &src[a..b], None => Err(Error::RequiresContiguous { op: "gather" }.bt())?, }; let dim = self.dim; let ids_dims = self.ids_l.dims(); let src_dims = src_l.dims(); let dst_len: usize = ids_dims.iter().product(); let dst_left_len: usize = ids_dims[..dim].iter().product(); let dst_dim_len = ids_dims[dim]; let dst_right_len: usize = ids_dims[dim + 1..].iter().product(); let src_dim_len = src_dims[dim]; let src_right_len: usize = src_dims[dim + 1..].iter().product(); let mut dst = vec![T::zero(); dst_len]; for left_i 
in 0..dst_left_len { let start_src_idx = left_i * src_right_len * src_dim_len; let start_dst_idx = left_i * dst_right_len * dst_dim_len; for i in 0..dst_dim_len { let start_dst_idx = start_dst_idx + i * dst_right_len; for right_i in 0..dst_right_len { let dst_idx = start_dst_idx + right_i; let index = ids[dst_idx].as_usize(); if index >= src_dim_len { Err(Error::InvalidIndex { index, size: src_dim_len, op: "gather", } .bt())? } let src_idx = start_src_idx + index * src_right_len + right_i; dst[dst_idx] = src[src_idx] } } } Ok(dst) } } struct IndexSelect<'a, T: IntDType> { ids: &'a [T], ids_l: &'a Layout, dim: usize, } impl<'a, I: IntDType> Map1 for IndexSelect<'a, I> { fn f<T: WithDType>(&self, src: &[T], layout: &Layout) -> Result<Vec<T>> { let src = match layout.contiguous_offsets() { Some((a, b)) => &src[a..b], None => Err(Error::RequiresContiguous { op: "index-select" }.bt())?, }; let dim = self.dim; let n_ids = match self.ids_l.dims() { [n_ids] => *n_ids, d => Err(Error::UnexpectedNumberOfDims { expected: 1, got: d.len(), shape: self.ids_l.shape().clone(), } .bt())?, }; let stride_ids = self.ids_l.stride()[0]; let mut dst_dims = layout.dims().to_vec(); let src_dim = dst_dims[dim]; dst_dims[dim] = n_ids; let dst_len: usize = dst_dims.iter().product(); let left_len: usize = dst_dims[..dim].iter().product(); let right_len: usize = dst_dims[dim + 1..].iter().product(); let mut dst = vec![T::zero(); dst_len]; for left_i in 0..left_len { let start_src_idx = left_i * right_len * src_dim; let start_dst_idx = left_i * right_len * n_ids; for i in 0..n_ids { let index = self.ids[self.ids_l.start_offset() + stride_ids * i].as_usize(); if index >= src_dim { Err(Error::InvalidIndex { index, size: src_dim, op: "index-select", } .bt())? } let start_src_idx = start_src_idx + index * right_len; let start_dst_idx = start_dst_idx + i * right_len; dst[start_dst_idx..start_dst_idx + right_len] .copy_from_slice(&src[start_src_idx..start_src_idx + right_len]) } } Ok(dst) } } struct ScatterAdd<'a, I: IntDType> { ids: &'a [I], ids_l: &'a Layout, dim: usize, } impl<'a, I: IntDType> Map2 for ScatterAdd<'a, I> { const OP: &'static str = "scatter-add"; fn f<T: WithDType>(&self, v1: &[T], l1: &Layout, src: &[T], src_l: &Layout) -> Result<Vec<T>> { let dst_len = l1.shape().elem_count(); let mut dst = vec![T::zero(); dst_len]; copy_strided_src_(v1, &mut dst, 0, l1); let src = match src_l.contiguous_offsets() { None => Err(Error::RequiresContiguous { op: "scatter-add" }.bt())?, Some((o1, o2)) => &src[o1..o2], }; let dim = self.dim; let ids_dims = self.ids_l.dims(); let dst_dims = l1.dims(); let dst_dim_len = dst_dims[dim]; let dst_right_len: usize = dst_dims[dim + 1..].iter().product(); let ids_left_len: usize = ids_dims[..dim].iter().product(); let ids_dim_len = ids_dims[dim]; let ids_right_len: usize = ids_dims[dim + 1..].iter().product(); let ids = match self.ids_l.contiguous_offsets() { Some((a, b)) => &self.ids[a..b], None => Err(Error::RequiresContiguous { op: "gather" }.bt())?, }; for left_i in 0..ids_left_len { let start_ids_idx = left_i * ids_right_len * ids_dim_len; let start_dst_idx = left_i * dst_right_len * dst_dim_len; for i in 0..ids_dim_len { let start_ids_idx = start_ids_idx + i * ids_right_len; for right_i in 0..dst_right_len { let ids_idx = start_ids_idx + right_i; let index = ids[ids_idx].as_usize(); if index >= dst_dim_len { Err(Error::InvalidIndex { index, size: dst_dim_len, op: "gather", } .bt())? 
} let dst_idx = start_dst_idx + index * dst_right_len + right_i; dst[dst_idx] += src[ids_idx] } } } Ok(dst) } } struct IndexAdd<'a, I: IntDType> { ids: &'a [I], dim: usize, } impl<'a, I: IntDType> Map2 for IndexAdd<'a, I> { const OP: &'static str = "index-add"; // https://pytorch.org/docs/stable/generated/torch.Tensor.index_add_.html#torch.Tensor.index_add_ // v1, l1 -> self fn f<T: WithDType>(&self, v1: &[T], l1: &Layout, src: &[T], src_l: &Layout) -> Result<Vec<T>> { let dst_len = l1.shape().elem_count(); let mut dst = vec![T::zero(); dst_len]; copy_strided_src_(v1, &mut dst, 0, l1); let src = match src_l.contiguous_offsets() { None => Err(Error::RequiresContiguous { op: "index-add" }.bt())?, Some((o1, o2)) => &src[o1..o2], }; let dim = self.dim; let max_idx = l1.dims()[dim]; let pre_dim = src_l.dims()[..dim].iter().product::<usize>(); let src_dim_sz = src_l.dims()[dim]; let post_dim = src_l.dims()[dim + 1..].iter().product::<usize>(); if dim == 0 { for (src_idx, dst_idx) in self.ids.iter().enumerate() { let dst_idx = dst_idx.as_usize(); if dst_idx >= max_idx { Err(Error::InvalidIndex { index: dst_idx, op: "index-add", size: max_idx, })? } let src_idx = src_idx * post_dim; let dst_idx = dst_idx * post_dim; let src = &src[src_idx..src_idx + post_dim]; let dst = &mut dst[dst_idx..dst_idx + post_dim]; for (d, &s) in dst.iter_mut().zip(src.iter()) { *d += s } } } else { for (src_idx, dst_idx) in self.ids.iter().enumerate() { let dst_idx = dst_idx.as_usize(); if dst_idx >= max_idx { Err(Error::InvalidIndex { index: dst_idx, op: "index-add", size: max_idx, })? } for pre_i in 0..pre_dim { let pre_src_i = (pre_i * src_dim_sz + src_idx) * post_dim; let pre_dst_i = (pre_i * max_idx + dst_idx) * post_dim; let src = &src[pre_src_i..pre_src_i + post_dim]; let dst = &mut dst[pre_dst_i..pre_dst_i + post_dim]; for (d, &s) in dst.iter_mut().zip(src.iter()) { *d += s } } } } Ok(dst) } } #[allow(clippy::too_many_arguments)] fn copy2d_<T: Copy>( src: &[T], dst: &mut [T], d1: usize, d2: usize, src_stride1: usize, dst_stride1: usize, src_offset: usize, dst_offset: usize, ) { for i1 in 0..d1 { let dst_idx = i1 * dst_stride1 + dst_offset; let src_idx = i1 * src_stride1 + src_offset; let dst = &mut dst[dst_idx..dst_idx + d2]; let src = &src[src_idx..src_idx + d2]; dst.copy_from_slice(src) } } fn copy_strided_src_<T: Copy>(src: &[T], dst: &mut [T], dst_offset: usize, src_l: &Layout) { match src_l.strided_blocks() { crate::StridedBlocks::SingleBlock { start_offset, len } => { let to_copy = (dst.len() - dst_offset).min(len); dst[dst_offset..dst_offset + to_copy] .copy_from_slice(&src[start_offset..start_offset + to_copy]) } crate::StridedBlocks::MultipleBlocks { block_start_index, block_len: 1, } => { for (dst_index, src_index) in block_start_index.enumerate() { let dst_index = dst_index + dst_offset; if dst_index >= dst.len() { break; } dst[dst_index] = src[src_index] } } crate::StridedBlocks::MultipleBlocks { block_start_index, block_len, } => { let mut dst_index = dst_offset; for src_index in block_start_index { let next_dst_index = dst_index + block_len; if dst_index >= dst.len() { break; } let to_copy = usize::min(block_len, dst.len() - dst_index); dst[dst_index..dst_index + to_copy] .copy_from_slice(&src[src_index..src_index + to_copy]); dst_index = next_dst_index } } } } struct Conv1D<'a>(&'a crate::conv::ParamsConv1D); impl<'a> Map2 for Conv1D<'a> { const OP: &'static str = "conv1d"; fn f<T: WithDType>(&self, inp: &[T], inp_l: &Layout, k: &[T], k_l: &Layout) -> Result<Vec<T>> { let p = self.0; let inp = 
&inp[inp_l.start_offset()..]; let k = &k[k_l.start_offset()..]; let (inp_s0, inp_s1, inp_s2) = crate::shape::dims3(inp_l.stride())?; let (k_s0, k_s1, k_s2) = crate::shape::dims3(k_l.stride())?; let l_out = p.l_out(); let dst_elems = p.c_out * l_out * p.b_size; // The output shape is [b_size, c_out, l_out] let dst = vec![T::zero(); dst_elems]; // TODO: Avoid making this copy if `inp` already has the appropriate layout. let mut inp_cont = vec![T::zero(); p.b_size * p.c_in * p.l_in]; for b_idx in 0..p.b_size { for src_l in 0..p.l_in { for src_c_idx in 0..p.c_in { let inp_idx = b_idx * inp_s0 + src_c_idx * inp_s1 + src_l * inp_s2; inp_cont[b_idx * p.l_in * p.c_in + src_l * p.c_in + src_c_idx] = inp[inp_idx] } } } for offset in 0..p.k_size { (0..p.c_out).into_par_iter().for_each(|dst_c_idx| { let dst_idx = dst_c_idx * l_out; let k_cont = (0..p.c_in) .map(|c_in_idx| k[dst_c_idx * k_s0 + c_in_idx * k_s1 + offset * k_s2]) .collect::<Vec<_>>(); for b_idx in 0..p.b_size { let dst_idx = dst_idx + b_idx * p.c_out * l_out; for dst_l in 0..l_out { let dst_idx = dst_idx + dst_l; let src_l = p.stride * dst_l + offset * p.dilation; if src_l < p.padding || src_l >= p.padding + p.l_in { continue; } let src_l = src_l - p.padding; let inp_cont = &inp_cont[b_idx * p.l_in * p.c_in + src_l * p.c_in..]; assert!(inp_cont.len() >= p.c_in); assert!(k_cont.len() >= p.c_in); let mut d = T::zero(); unsafe { T::vec_dot(inp_cont.as_ptr(), k_cont.as_ptr(), &mut d, p.c_in) } let dst_p = dst.as_ptr(); // Safety: dst_idx are uniques per dst_c_idx which is used to parallelise // the different tasks so no two threads can try to write at the same // location. unsafe { let ptr = dst_p.add(dst_idx) as *mut T; *ptr += d } } } }) } Ok(dst) } } struct Im2Col1D { l_k: usize, stride: usize, dilation: usize, padding: usize, } impl Im2Col1D { fn l_out(&self, l: usize) -> usize { (l + 2 * self.padding - self.dilation * (self.l_k - 1) - 1) / self.stride + 1 } } impl Map1 for Im2Col1D { fn f<T: WithDType>(&self, vs: &[T], layout: &Layout) -> Result<Vec<T>> { let &Self { l_k, stride, dilation, padding, } = self; let (b, c, l) = layout.shape().dims3()?; let l_out = self.l_out(l); let src = &vs[layout.start_offset()..]; let mut dst = vec![T::zero(); b * l_out * c * l_k]; let (src_s0, src_s1, src_s2) = { let s = layout.stride(); (s[0], s[1], s[2]) }; // TODO: provide specialized kernels for the common use cases. 
// - l_k = 1 // - padding = 0 // - stride = 1 // - dilation = 1 for b_idx in 0..b { let src_idx = b_idx * src_s0; let dst_idx = b_idx * l_out * c * l_k; for l_idx in 0..l_out { let dst_idx = dst_idx + l_idx * c * l_k; for c_idx in 0..c { let dst_idx = dst_idx + c_idx * l_k; let src_idx = c_idx * src_s1 + src_idx; for l_k_idx in 0..l_k { let src_l = l_idx * stride + l_k_idx * dilation; if padding != 0 && (src_l < padding || src_l >= l + padding) { continue; } let src_l = src_l - padding; let src_idx = src_idx + src_l * src_s2; let dst_idx = dst_idx + l_k_idx; dst[dst_idx] = src[src_idx] } } } } Ok(dst) } } struct Im2Col { h_k: usize, w_k: usize, stride: usize, dilation: usize, padding: usize, } impl Im2Col { fn hw_out(&self, h: usize, w: usize) -> (usize, usize) { let h_out = (h + 2 * self.padding - self.dilation * (self.h_k - 1) - 1) / self.stride + 1; let w_out = (w + 2 * self.padding - self.dilation * (self.w_k - 1) - 1) / self.stride + 1; (h_out, w_out) } } impl Map1 for Im2Col { fn f<T: WithDType>(&self, vs: &[T], layout: &Layout) -> Result<Vec<T>> { let &Self { h_k, w_k, stride, dilation, padding, } = self; let (b, c, h, w) = layout.shape().dims4()?; let (h_out, w_out) = self.hw_out(h, w); let src = &vs[layout.start_offset()..]; let mut dst = vec![T::zero(); b * h_out * w_out * c * h_k * w_k]; let (src_s0, src_s1, src_s2, src_s3) = { let s = layout.stride(); (s[0], s[1], s[2], s[3]) }; // TODO: provide specialized kernels for the common use cases. // - h_k = w_k = 1 // - padding = 0 // - stride = 1 // - dilation = 1 for b_idx in 0..b { let src_idx = b_idx * src_s0; let dst_idx = b_idx * h_out * w_out * c * h_k * w_k; for h_idx in 0..h_out { let dst_idx = dst_idx + h_idx * w_out * c * h_k * w_k; for w_idx in 0..w_out { let dst_idx = dst_idx + w_idx * c * h_k * w_k; for c_idx in 0..c { let dst_idx = dst_idx + c_idx * h_k * w_k; let src_idx = c_idx * src_s1 + src_idx; for h_k_idx in 0..h_k { let src_h = h_idx * stride + h_k_idx * dilation; if padding != 0 && (src_h < padding || src_h >= h + padding) { continue; } let src_h = src_h - padding; let src_idx = src_idx + src_h * src_s2; let dst_idx = dst_idx + h_k_idx * w_k; for w_k_idx in 0..w_k { let src_w = w_idx * stride + w_k_idx * dilation; if padding != 0 && (src_w < padding || src_w >= w + padding) { continue; } let src_w = src_w - padding; let src_idx = src_idx + src_w * src_s3; let dst_idx = dst_idx + w_k_idx; dst[dst_idx] = src[src_idx] } } } } } } Ok(dst) } } struct Col2Im1D { stride: usize, } impl Map1 for Col2Im1D { fn f<T: WithDType>(&self, col: &[T], l: &Layout) -> Result<Vec<T>> { let (b_size, l_in, c_out, k_size) = l.shape().dims4()?; let stride = self.stride; let l_out = (l_in - 1) * stride + k_size; let mut im = vec![T::zero(); b_size * c_out * l_out]; let (dst_s0, dst_s1) = (c_out * l_out, l_out); let (src_s0, src_s1, src_s2) = (c_out * k_size * l_in, c_out * k_size, k_size); for l_in_i in 0..l_in { for k_i in 0..k_size { let l_out_i = l_in_i * stride + k_i; for b_i in 0..b_size { for c_i in 0..c_out { let dst_idx = b_i * dst_s0 + c_i * dst_s1 + l_out_i; let src_idx = b_i * src_s0 + l_in_i * src_s1 + c_i * src_s2 + k_i; im[dst_idx] += col[src_idx] } } } } Ok(im) } } struct ConvTranspose1D<'a>(&'a crate::conv::ParamsConvTranspose1D); impl<'a> Map2 for ConvTranspose1D<'a> { const OP: &'static str = "conv_transpose1d"; fn f<T: WithDType>(&self, inp: &[T], inp_l: &Layout, k: &[T], k_l: &Layout) -> Result<Vec<T>> { let p = self.0; let inp = &inp[inp_l.start_offset()..]; let k = &k[k_l.start_offset()..]; let (inp_s0, inp_s1, 
inp_s2) = crate::shape::dims3(inp_l.stride())?; let (k_s0, k_s1, k_s2) = crate::shape::dims3(k_l.stride())?; let l_out = p.l_out(); // Output shape: [b_size, c_out, l_out]. let dst_elems = p.c_out * l_out * p.b_size; let dst = vec![T::zero(); dst_elems]; let dst_s0 = p.c_out * l_out; let dst_s1 = l_out; let dst_s2 = 1; // TODO: Avoid making this copy if `inp` already has the appropriate layout. let mut inp_cont = vec![T::zero(); p.b_size * p.c_in * p.l_in]; let cont_s0 = p.l_in * p.c_in; let cont_s1 = p.c_in; for b_idx in 0..p.b_size { for l_idx in 0..p.l_in { for c_idx in 0..p.c_in { let src_idx = b_idx * inp_s0 + c_idx * inp_s1 + l_idx * inp_s2; let dst_idx = b_idx * cont_s0 + l_idx * cont_s1 + c_idx; inp_cont[dst_idx] = inp[src_idx] } } } for k_idx in 0..p.k_size { (0..p.c_out).into_par_iter().for_each(|dst_c_idx| { let k_cont = (0..p.c_in) .map(|c_in_idx| k[c_in_idx * k_s0 + dst_c_idx * k_s1 + k_idx * k_s2]) .collect::<Vec<_>>(); for b_idx in 0..p.b_size { for l_idx in 0..p.l_in { let out_idx = l_idx * p.stride + k_idx * p.dilation; if out_idx < p.padding { continue; } let out_idx = out_idx - p.padding; if out_idx < l_out { let inp_cont = &inp_cont[b_idx * cont_s0 + l_idx * cont_s1..]; let dst_idx = b_idx * dst_s0 + out_idx * dst_s2 + dst_c_idx * dst_s1; let mut d = T::zero(); unsafe { T::vec_dot(inp_cont.as_ptr(), k_cont.as_ptr(), &mut d, p.c_in) } let dst_p = dst.as_ptr(); // Safety: dst_idx are uniques per dst_c_idx which is used to // parallelise the different tasks so no two threads can try to // write at the same location. unsafe { let ptr = dst_p.add(dst_idx) as *mut T; *ptr += d } } } } }) } Ok(dst) } } struct Conv2D<'a>(&'a crate::conv::ParamsConv2D); impl<'a> Map2 for Conv2D<'a> { const OP: &'static str = "conv2d"; fn f<T: WithDType>(&self, inp: &[T], inp_l: &Layout, k: &[T], k_l: &Layout) -> Result<Vec<T>> { let p = self.0; let inp = &inp[inp_l.start_offset()..]; let (inp_s0, inp_s1, inp_s2, inp_s3) = crate::shape::dims4(inp_l.stride())?; let k = &k[k_l.start_offset()..]; let (k_s0, k_s1, k_s2, k_s3) = crate::shape::dims4(k_l.stride())?; let (out_h, out_w) = (p.out_h(), p.out_w()); // Output shape: [b_size, c_out, out_h, out_w]. let dst = vec![T::zero(); p.b_size * p.c_out * out_h * out_w]; // TODO: Avoid making this copy if `inp` already has the appropriate layout. 
let mut inp_cont = vec![T::zero(); p.b_size * p.c_in * p.i_h * p.i_w]; let cont_s0 = p.i_h * p.i_w * p.c_in; let cont_s1 = p.i_w * p.c_in; let cont_s2 = p.c_in; for b_idx in 0..p.b_size { for h_idx in 0..p.i_h { for w_idx in 0..p.i_w { for c_idx in 0..p.c_in { let src_idx = b_idx * inp_s0 + c_idx * inp_s1 + h_idx * inp_s2 + w_idx * inp_s3; let dst_idx = b_idx * cont_s0 + h_idx * cont_s1 + w_idx * cont_s2 + c_idx; inp_cont[dst_idx] = inp[src_idx] } } } } for offset_h in 0..p.k_h { for offset_w in 0..p.k_w { (0..p.c_out).into_par_iter().for_each(|dst_c_idx| { let dst_idx = dst_c_idx * out_w * out_h; let k_cont = (0..p.c_in) .map(|c_in_idx| { k[dst_c_idx * k_s0 + c_in_idx * k_s1 + offset_h * k_s2 + offset_w * k_s3] }) .collect::<Vec<_>>(); for b_idx in 0..p.b_size { let dst_idx = dst_idx + b_idx * p.c_out * out_h * out_w; for dst_h in 0..out_h { let dst_idx = dst_idx + dst_h * out_w; let src_h = p.stride * dst_h + offset_h * p.dilation; if src_h < p.padding || src_h >= p.i_h + p.padding { continue; } let src_h = src_h - p.padding; for dst_w in 0..out_w { let dst_idx = dst_idx + dst_w; let src_w = p.stride * dst_w + offset_w * p.dilation; if src_w < p.padding || src_w >= p.i_w + p.padding { continue; } let src_w = src_w - p.padding; let inp_cont = &inp_cont [b_idx * cont_s0 + src_h * cont_s1 + src_w * cont_s2..]; assert!(inp_cont.len() >= p.c_in); assert!(k_cont.len() >= p.c_in); let mut d = T::zero(); unsafe { T::vec_dot(inp_cont.as_ptr(), k_cont.as_ptr(), &mut d, p.c_in) } let dst_p = dst.as_ptr(); // Safety: dst_idx are uniques per dst_c_idx which is used to parallelise // the different tasks so no two threads can try to write at the same // location. unsafe { let ptr = dst_p.add(dst_idx) as *mut T; *ptr += d } } } } }); } } Ok(dst) } } struct ConvTranspose2D<'a>(&'a crate::conv::ParamsConvTranspose2D); impl<'a> Map2 for ConvTranspose2D<'a> { const OP: &'static str = "conv_transpose2d"; fn f<T: WithDType>(&self, inp: &[T], inp_l: &Layout, k: &[T], k_l: &Layout) -> Result<Vec<T>> { let p = self.0; let inp = &inp[inp_l.start_offset()..]; let (inp_s0, inp_s1, inp_s2, inp_s3) = crate::shape::dims4(inp_l.stride())?; let k = &k[k_l.start_offset()..]; let (k_s0, k_s1, k_s2, k_s3) = crate::shape::dims4(k_l.stride())?; let (out_h, out_w) = (p.out_h(), p.out_w()); // Output shape: [b_size, c_out, out_h, out_w]. let dst = vec![T::zero(); p.b_size * p.c_out * out_h * out_w]; let dst_s0 = p.c_out * out_h * out_w; let dst_s1 = out_h * out_w; let dst_s2 = out_w; let dst_s3 = 1; // TODO: Avoid making this copy if `inp` already has the appropriate layout. 
let mut inp_cont = vec![T::zero(); p.b_size * p.c_in * p.i_h * p.i_w]; let cont_s0 = p.i_h * p.i_w * p.c_in; let cont_s1 = p.i_w * p.c_in; let cont_s2 = p.c_in; for b_idx in 0..p.b_size { for h_idx in 0..p.i_h { for w_idx in 0..p.i_w { for c_idx in 0..p.c_in { let src_idx = b_idx * inp_s0 + c_idx * inp_s1 + h_idx * inp_s2 + w_idx * inp_s3; let dst_idx = b_idx * cont_s0 + h_idx * cont_s1 + w_idx * cont_s2 + c_idx; inp_cont[dst_idx] = inp[src_idx] } } } } for k_y in 0..p.k_h { for k_x in 0..p.k_w { (0..p.c_out).into_par_iter().for_each(|dst_c_idx| { let k_cont = (0..p.c_in) .map(|c_in_idx| { k[c_in_idx * k_s0 + dst_c_idx * k_s1 + k_y * k_s2 + k_x * k_s3] }) .collect::<Vec<_>>(); for b_idx in 0..p.b_size { for inp_y in 0..p.i_h { for inp_x in 0..p.i_w { let out_x = inp_x * p.stride + k_x * p.dilation; let out_y = inp_y * p.stride + k_y * p.dilation; if out_x < p.padding || out_y < p.padding { continue; } let out_x = out_x - p.padding; let out_y = out_y - p.padding; if out_x < out_w && out_y < out_h { let inp_cont = &inp_cont [b_idx * cont_s0 + inp_y * cont_s1 + inp_x * cont_s2..]; let dst_idx = b_idx * dst_s0 + out_y * dst_s2 + out_x * dst_s3 + dst_c_idx * dst_s1; let mut d = T::zero(); unsafe { T::vec_dot( inp_cont.as_ptr(), k_cont.as_ptr(), &mut d, p.c_in, ) } let dst_p = dst.as_ptr(); // Safety: dst_idx are uniques per dst_c_idx which is used to // parallelise the different tasks so no two threads can try to // write at the same location. unsafe { let ptr = dst_p.add(dst_idx) as *mut T; *ptr += d } } } } } }) } } Ok(dst) } } struct MatMul((usize, usize, usize, usize)); impl MatMul { fn striding_error(&self, lhs_l: &Layout, rhs_l: &Layout, msg: &'static str) -> Error { Error::MatMulUnexpectedStriding(Box::new(crate::error::MatMulUnexpectedStriding { lhs_l: lhs_l.clone(), rhs_l: rhs_l.clone(), bmnk: self.0, msg, })) .bt() } fn ab_skip(&self, lhs_l: &Layout, rhs_l: &Layout) -> Result<(usize, usize)> { let lhs_stride = lhs_l.stride(); let rhs_stride = rhs_l.stride(); let rank = lhs_stride.len(); let (_b, m, n, k) = self.0; let a_skip: usize = match lhs_stride[..rank - 2] { [s1, stride] if s1 == stride * lhs_l.dims()[1] => stride, [_, stride] if lhs_l.dims()[0] == 1 => stride, [stride, _] if lhs_l.dims()[1] == 1 => stride, [stride] => stride, [] => m * k, _ => Err(self.striding_error(lhs_l, rhs_l, "non-contiguous lhs"))?, }; let b_skip: usize = match rhs_stride[..rank - 2] { [s1, stride] if s1 == stride * rhs_l.dims()[1] => stride, [_, stride] if rhs_l.dims()[0] == 1 => stride, [stride, _] if rhs_l.dims()[1] == 1 => stride, [stride] => stride, [] => n * k, _ => Err(self.striding_error(lhs_l, rhs_l, "non-contiguous rhs"))?, }; Ok((a_skip, b_skip)) } } impl Map2 for MatMul { const OP: &'static str = "mat_mul"; #[cfg(all(not(feature = "mkl"), not(feature = "accelerate")))] fn f<T: 'static + WithDType + num_traits::Num + Copy>( &self, lhs: &[T], lhs_l: &Layout, rhs: &[T], rhs_l: &Layout, ) -> Result<Vec<T>> { use gemm::{gemm, Parallelism}; match T::DTYPE { DType::F16 | DType::F32 | DType::F64 => {} _ => Err(Error::UnsupportedDTypeForOp(T::DTYPE, "matmul").bt())?, } let (b, m, n, k) = self.0; let lhs = &lhs[lhs_l.start_offset()..]; let rhs = &rhs[rhs_l.start_offset()..]; let lhs_stride = lhs_l.stride(); let rhs_stride = rhs_l.stride(); let rank = lhs_stride.len(); let lhs_cs = lhs_stride[rank - 1]; let lhs_rs = lhs_stride[rank - 2]; let rhs_cs = rhs_stride[rank - 1]; let rhs_rs = rhs_stride[rank - 2]; let (a_skip, b_skip) = self.ab_skip(lhs_l, rhs_l)?; let c_skip: usize = m * n; let dst_shape: Shape 
= (m, n).into(); let dst_strides = dst_shape.stride_contiguous(); let dst_rs = dst_strides[0]; let dst_cs = dst_strides[1]; let mut dst = vec![T::zero(); b * m * n]; let num_threads = crate::utils::get_num_threads(); let parallelism = if num_threads > 1 { Parallelism::Rayon(num_threads) } else { Parallelism::None }; for step in 0..b { let lhs_p = &lhs[step * a_skip..]; let rhs_p = &rhs[step * b_skip..]; let dst_p = &mut dst[step * c_skip..]; unsafe { gemm( /* m: usize = */ m, /* n: usize = */ n, /* k: usize = */ k, /* dst: *mut T = */ dst_p.as_mut_ptr(), /* dst_cs: isize = */ dst_cs as isize, /* dst_rs: isize = */ dst_rs as isize, /* read_dst: bool = */ false, /* lhs: *const T = */ lhs_p.as_ptr(), /* lhs_cs: isize = */ lhs_cs as isize, /* lhs_rs: isize = */ lhs_rs as isize, /* rhs: *const T = */ rhs_p.as_ptr(), /* rhs_cs: isize = */ rhs_cs as isize, /* rhs_rs: isize = */ rhs_rs as isize, /* alpha: T = */ T::zero(), /* beta: T = */ T::one(), /* conj_dst: bool = */ false, /* conj_lhs: bool = */ false, /* conj_rhs: bool = */ false, parallelism, ) } } Ok(dst) } #[cfg(feature = "accelerate")] fn f<T: 'static + WithDType + num_traits::Num + Copy>( &self, lhs: &[T], lhs_l: &Layout, rhs: &[T], rhs_l: &Layout, ) -> Result<Vec<T>> { let (b, m, n, k) = self.0; let lhs = &lhs[lhs_l.start_offset()..]; let rhs = &rhs[rhs_l.start_offset()..]; let lhs_stride = lhs_l.stride(); let rhs_stride = rhs_l.stride(); let (a_skip, b_skip) = self.ab_skip(lhs_l, rhs_l)?; let c_skip: usize = m * n; let rhs_m1 = rhs_stride[rhs_stride.len() - 1]; let rhs_m2 = rhs_stride[rhs_stride.len() - 2]; let lhs_m1 = lhs_stride[lhs_stride.len() - 1]; let lhs_m2 = lhs_stride[lhs_stride.len() - 2]; let (lda, transa) = if (rhs_m1 == 1 || n == 1) && (rhs_m2 == n || k == 1) { (n as i32, b'N') } else if rhs_m1 == k && rhs_m2 == 1 { (k as i32, b'T') } else { Err(self.striding_error(lhs_l, rhs_l, "non-contiguous rhs"))? }; // The b tensor has dims batching, m, k (lhs) let (ldb, transb) = if (lhs_m1 == 1 || k == 1) && (lhs_m2 == k || m == 1) { (k as i32, b'N') } else if lhs_m1 == m && lhs_m2 == 1 { (m as i32, b'T') } else { Err(self.striding_error(lhs_l, rhs_l, "non-contiguous lhs"))? 
}; let mut dst = vec![T::zero(); b * m * n]; match T::DTYPE { DType::F16 => { crate::bail!("the accelerate backend does not support f16 matmul") } DType::F32 => { for step in 0..b { let lhs_p = &lhs[step * a_skip..]; let rhs_p = &rhs[step * b_skip..]; let dst_p = &mut dst[step * c_skip..]; unsafe { let a = rhs_p.as_ptr() as *const f32; let b = lhs_p.as_ptr() as *const f32; let c = dst_p.as_mut_ptr() as *mut f32; let a = std::slice::from_raw_parts(a, a_skip); let b = std::slice::from_raw_parts(b, b_skip); let c = std::slice::from_raw_parts_mut(c, c_skip); crate::accelerate::sgemm( transa, transb, /* m= */ n as i32, /* n= */ m as i32, /* k= */ k as i32, /* alpha= */ 1., /* a= */ a, /* lda= */ lda, /* b= */ b, /* ldb= */ ldb, /* beta= */ 0., /* c= */ c, /* ldc= */ n as i32, ) } } } DType::F64 => { for step in 0..b { let lhs_p = &lhs[step * a_skip..]; let rhs_p = &rhs[step * b_skip..]; let dst_p = &mut dst[step * c_skip..]; unsafe { let a = rhs_p.as_ptr() as *const f64; let b = lhs_p.as_ptr() as *const f64; let c = dst_p.as_mut_ptr() as *mut f64; let a = std::slice::from_raw_parts(a, a_skip); let b = std::slice::from_raw_parts(b, b_skip); let c = std::slice::from_raw_parts_mut(c, c_skip); crate::accelerate::dgemm( transa, transb, /* m= */ n as i32, /* n= */ m as i32, /* k= */ k as i32, /* alpha= */ 1., /* a= */ a, /* lda= */ lda, /* b= */ b, /* ldb= */ ldb, /* beta= */ 0., /* c= */ c, /* ldc= */ n as i32, ) } } } dtype => Err(Error::UnsupportedDTypeForOp(dtype, "matmul").bt())?, } Ok(dst) } #[cfg(feature = "mkl")] fn f<T: 'static + WithDType + num_traits::Num + Copy>( &self, lhs: &[T], lhs_l: &Layout, rhs: &[T], rhs_l: &Layout, ) -> Result<Vec<T>> { let (b, m, n, k) = self.0; let lhs = &lhs[lhs_l.start_offset()..]; let rhs = &rhs[rhs_l.start_offset()..]; let lhs_stride = lhs_l.stride(); let rhs_stride = rhs_l.stride(); let (a_skip, b_skip) = self.ab_skip(lhs_l, rhs_l)?; let c_skip: usize = m * n; let rhs_m1 = rhs_stride[rhs_stride.len() - 1]; let rhs_m2 = rhs_stride[rhs_stride.len() - 2]; let lhs_m1 = lhs_stride[lhs_stride.len() - 1]; let lhs_m2 = lhs_stride[lhs_stride.len() - 2]; let (lda, transa) = if (rhs_m1 == 1 || n == 1) && (rhs_m2 == n || k == 1) { (n as i32, b'N') } else if rhs_m1 == k && rhs_m2 == 1 { (k as i32, b'T') } else { Err(self.striding_error(lhs_l, rhs_l, "non-contiguous rhs"))? }; // The b tensor has dims batching, m, k (lhs) let (ldb, transb) = if (lhs_m1 == 1 || k == 1) && (lhs_m2 == k || m == 1) { (k as i32, b'N') } else if lhs_m1 == m && lhs_m2 == 1 { (m as i32, b'T') } else { Err(self.striding_error(lhs_l, rhs_l, "non-contiguous lhs"))? 
}; let mut dst = vec![T::zero(); b * m * n]; match T::DTYPE { DType::F16 => { for step in 0..b { let lhs_p = &lhs[step * a_skip..]; let rhs_p = &rhs[step * b_skip..]; let dst_p = &mut dst[step * c_skip..]; unsafe { let a = rhs_p.as_ptr() as *const f16; let b = lhs_p.as_ptr() as *const f16; let c = dst_p.as_mut_ptr() as *mut f16; let a = std::slice::from_raw_parts(a, a_skip); let b = std::slice::from_raw_parts(b, b_skip); let c = std::slice::from_raw_parts_mut(c, c_skip); crate::mkl::hgemm( transa, transb, /* m= */ n as i32, /* n= */ m as i32, /* k= */ k as i32, /* alpha= */ f16::ONE, /* a= */ a, /* lda= */ lda, /* b= */ b, /* ldb= */ ldb, /* beta= */ f16::ZERO, /* c= */ c, /* ldc= */ n as i32, ) } } } DType::F32 => { for step in 0..b { let lhs_p = &lhs[step * a_skip..]; let rhs_p = &rhs[step * b_skip..]; let dst_p = &mut dst[step * c_skip..]; unsafe { let a = rhs_p.as_ptr() as *const f32; let b = lhs_p.as_ptr() as *const f32; let c = dst_p.as_mut_ptr() as *mut f32; let a = std::slice::from_raw_parts(a, a_skip); let b = std::slice::from_raw_parts(b, b_skip); let c = std::slice::from_raw_parts_mut(c, c_skip); crate::mkl::sgemm( transa, transb, /* m= */ n as i32, /* n= */ m as i32, /* k= */ k as i32, /* alpha= */ 1., /* a= */ a, /* lda= */ lda, /* b= */ b, /* ldb= */ ldb, /* beta= */ 0., /* c= */ c, /* ldc= */ n as i32, ) } } } DType::F64 => { for step in 0..b { let lhs_p = &lhs[step * a_skip..]; let rhs_p = &rhs[step * b_skip..]; let dst_p = &mut dst[step * c_skip..]; unsafe { let a = rhs_p.as_ptr() as *const f64; let b = lhs_p.as_ptr() as *const f64; let c = dst_p.as_mut_ptr() as *mut f64; let a = std::slice::from_raw_parts(a, a_skip); let b = std::slice::from_raw_parts(b, b_skip); let c = std::slice::from_raw_parts_mut(c, c_skip); crate::mkl::dgemm( transa, transb, /* m= */ n as i32, /* n= */ m as i32, /* k= */ k as i32, /* alpha= */ 1., /* a= */ a, /* lda= */ lda, /* b= */ b, /* ldb= */ ldb, /* beta= */ 0., /* c= */ c, /* ldc= */ n as i32, ) } } } dtype => Err(Error::UnsupportedDTypeForOp(dtype, "matmul").bt())?, } Ok(dst) } } fn elu<T: num_traits::Float>(v: T, alpha: T) -> T { if v.is_sign_positive() { v } else { (v.exp() - T::one()) * alpha } } impl CpuStorage { pub fn as_slice<D: WithDType>(&self) -> Result<&[D]> { D::cpu_storage_as_slice(self) } pub fn concat(storages: &[CpuStorage]) -> Result<CpuStorage> { let storage0 = &storages[0]; let s = match storage0 { Self::U8(_) => { let storages = storages .iter() .map(|s| match s { Self::U8(s) => Ok(s.as_slice()), _ => crate::bail!("dtype mismatch"), }) .collect::<Result<Vec<_>>>()? .concat(); Self::U8(storages) } Self::U32(_) => { let storages = storages .iter() .map(|s| match s { Self::U32(s) => Ok(s.as_slice()), _ => crate::bail!("dtype mismatch"), }) .collect::<Result<Vec<_>>>()? .concat(); Self::U32(storages) } Self::I64(_) => { let storages = storages .iter() .map(|s| match s { Self::I64(s) => Ok(s.as_slice()), _ => crate::bail!("dtype mismatch"), }) .collect::<Result<Vec<_>>>()? .concat(); Self::I64(storages) } Self::BF16(_) => { let storages = storages .iter() .map(|s| match s { Self::BF16(s) => Ok(s.as_slice()), _ => crate::bail!("dtype mismatch"), }) .collect::<Result<Vec<_>>>()? .concat(); Self::BF16(storages) } Self::F16(_) => { let storages = storages .iter() .map(|s| match s { Self::F16(s) => Ok(s.as_slice()), _ => crate::bail!("dtype mismatch"), }) .collect::<Result<Vec<_>>>()? 
.concat(); Self::F16(storages) } Self::F32(_) => { let storages = storages .iter() .map(|s| match s { Self::F32(s) => Ok(s.as_slice()), _ => crate::bail!("dtype mismatch"), }) .collect::<Result<Vec<_>>>()? .concat(); Self::F32(storages) } Self::F64(_) => { let storages = storages .iter() .map(|s| match s { Self::F64(s) => Ok(s.as_slice()), _ => crate::bail!("dtype mismatch"), }) .collect::<Result<Vec<_>>>()? .concat(); Self::F64(storages) } }; Ok(s) } } impl BackendStorage for CpuStorage { type Device = CpuDevice; fn dtype(&self) -> DType { match self { Self::U8(_) => DType::U8, Self::U32(_) => DType::U32, Self::I64(_) => DType::I64, Self::BF16(_) => DType::BF16, Self::F16(_) => DType::F16, Self::F32(_) => DType::F32, Self::F64(_) => DType::F64, } } fn to_dtype(&self, layout: &Layout, dtype: DType) -> Result<Self> { // TODO: find a way around the quadratic number of cases below. match (self, dtype) { (Self::U8(storage), DType::BF16) => { let data = unary_map(storage, layout, |v| bf16::from_f32(v as f32)); Ok(Self::BF16(data)) } (Self::U32(storage), DType::BF16) => { let data = unary_map(storage, layout, |v| bf16::from_f32(v as f32)); Ok(Self::BF16(data)) } (Self::I64(storage), DType::BF16) => { let data = unary_map(storage, layout, |v| bf16::from_f32(v as f32)); Ok(Self::BF16(data)) } (Self::BF16(storage), DType::BF16) => { let data = unary_map(storage, layout, |v| v); Ok(Self::BF16(data)) } (Self::F16(storage), DType::BF16) => { let data = unary_map(storage, layout, |v| bf16::from_f32(v.to_f32())); Ok(Self::BF16(data)) } (Self::F32(storage), DType::BF16) => { let data = unary_map(storage, layout, bf16::from_f32); Ok(Self::BF16(data)) } (Self::F64(storage), DType::BF16) => { let data = unary_map(storage, layout, bf16::from_f64); Ok(Self::BF16(data)) } (Self::U8(storage), DType::F16) => { let data = unary_map(storage, layout, |v| f16::from_f32(v as f32)); Ok(Self::F16(data)) } (Self::U32(storage), DType::F16) => { let data = unary_map(storage, layout, |v| f16::from_f32(v as f32)); Ok(Self::F16(data)) } (Self::I64(storage), DType::F16) => { let data = unary_map(storage, layout, |v| f16::from_f32(v as f32)); Ok(Self::F16(data)) } (Self::BF16(storage), DType::F16) => { let data = unary_map(storage, layout, |v| f16::from_f32(v.to_f32())); Ok(Self::F16(data)) } (Self::F16(storage), DType::F16) => { let data = unary_map(storage, layout, |v| v); Ok(Self::F16(data)) } (Self::F32(storage), DType::F16) => { let data = unary_map(storage, layout, f16::from_f32); Ok(Self::F16(data)) } (Self::F64(storage), DType::F16) => { let data = unary_map(storage, layout, f16::from_f64); Ok(Self::F16(data)) } (Self::U8(storage), DType::F32) => { let data = unary_map(storage, layout, |v| v as f32); Ok(Self::F32(data)) } (Self::U32(storage), DType::F32) => { let data = unary_map(storage, layout, |v| v as f32); Ok(Self::F32(data)) } (Self::I64(storage), DType::F32) => { let data = unary_map(storage, layout, |v| v as f32); Ok(Self::F32(data)) } (Self::BF16(storage), DType::F32) => { let data = unary_map(storage, layout, |v| v.to_f32()); Ok(Self::F32(data)) } (Self::F16(storage), DType::F32) => { let data = unary_map(storage, layout, |v| v.to_f32()); Ok(Self::F32(data)) } (Self::F32(storage), DType::F32) => { let data = unary_map(storage, layout, |v| v); Ok(Self::F32(data)) } (Self::F64(storage), DType::F32) => { let data = unary_map(storage, layout, |v| v as f32); Ok(Self::F32(data)) } (Self::U8(storage), DType::U8) => { let data = unary_map(storage, layout, |v| v); Ok(Self::U8(data)) } (Self::BF16(storage), DType::U8) 
=> { let data = unary_map(storage, layout, |v| v.to_f32() as u8); Ok(Self::U8(data)) } (Self::F16(storage), DType::U8) => { let data = unary_map(storage, layout, |v| v.to_f32() as u8); Ok(Self::U8(data)) } (Self::F32(storage), DType::U8) => { let data = unary_map(storage, layout, |v| v as u8); Ok(Self::U8(data)) } (Self::F64(storage), DType::U8) => { let data = unary_map(storage, layout, |v| v as u8); Ok(Self::U8(data)) } (Self::U32(storage), DType::U8) => { let data = unary_map(storage, layout, |v| v as u8); Ok(Self::U8(data)) } (Self::I64(storage), DType::U8) => { let data = unary_map(storage, layout, |v| v as u8); Ok(Self::U8(data)) } (Self::U8(storage), DType::U32) => { let data = unary_map(storage, layout, |v| v as u32); Ok(Self::U32(data)) } (Self::U32(storage), DType::U32) => { let data = unary_map(storage, layout, |v| v); Ok(Self::U32(data)) } (Self::I64(storage), DType::U32) => { let data = unary_map(storage, layout, |v| v as u32); Ok(Self::U32(data)) } (Self::BF16(storage), DType::U32) => { let data = unary_map(storage, layout, |v| v.to_f32() as u32); Ok(Self::U32(data)) } (Self::F16(storage), DType::U32) => { let data = unary_map(storage, layout, |v| v.to_f32() as u32); Ok(Self::U32(data)) } (Self::F32(storage), DType::U32) => { let data = unary_map(storage, layout, |v| v as u32); Ok(Self::U32(data)) } (Self::F64(storage), DType::U32) => { let data = unary_map(storage, layout, |v| v as u32); Ok(Self::U32(data)) } (Self::U8(storage), DType::I64) => { let data = unary_map(storage, layout, |v| v as i64); Ok(Self::I64(data)) } (Self::U32(storage), DType::I64) => { let data = unary_map(storage, layout, |v| v as i64); Ok(Self::I64(data)) } (Self::I64(storage), DType::I64) => { let data = unary_map(storage, layout, |v| v); Ok(Self::I64(data)) } (Self::BF16(storage), DType::I64) => { let data = unary_map(storage, layout, |v| v.to_f32() as i64); Ok(Self::I64(data)) } (Self::F16(storage), DType::I64) => { let data = unary_map(storage, layout, |v| v.to_f32() as i64); Ok(Self::I64(data)) } (Self::F32(storage), DType::I64) => { let data = unary_map(storage, layout, |v| v as i64); Ok(Self::I64(data)) } (Self::F64(storage), DType::I64) => { let data = unary_map(storage, layout, |v| v as i64); Ok(Self::I64(data)) } (Self::U8(storage), DType::F64) => { let data = unary_map(storage, layout, |v| v as f64); Ok(Self::F64(data)) } (Self::U32(storage), DType::F64) => { let data = unary_map(storage, layout, |v| v as f64); Ok(Self::F64(data)) } (Self::I64(storage), DType::F64) => { let data = unary_map(storage, layout, |v| v as f64); Ok(Self::F64(data)) } (Self::BF16(storage), DType::F64) => { let data = unary_map(storage, layout, |v| v.to_f64()); Ok(Self::F64(data)) } (Self::F16(storage), DType::F64) => { let data = unary_map(storage, layout, |v| v.to_f64()); Ok(Self::F64(data)) } (Self::F32(storage), DType::F64) => { let data = unary_map(storage, layout, |v| v as f64); Ok(Self::F64(data)) } (Self::F64(storage), DType::F64) => { let data = unary_map(storage, layout, |v| v); Ok(Self::F64(data)) } } } fn reduce_op(&self, op: ReduceOp, layout: &Layout, reduce_dims: &[usize]) -> Result<Self> { match op { ReduceOp::Sum => { let src_dims = layout.dims(); let mut dst_dims = src_dims.to_vec(); for &dim in reduce_dims.iter() { dst_dims[dim] = 1; } let dst_shape = Shape::from(dst_dims); let mut reduce_dims = reduce_dims.to_vec(); // Sort the reduce_dims as they have to be processed from left to right when converting the // indexes. 
reduce_dims.sort(); let reduce_dims_and_stride: Vec<_> = reduce_dims .iter() .map(|&d| (src_dims[d], src_dims[d + 1..].iter().product::<usize>())) .collect(); ReduceSum { dst_shape: &dst_shape, reduce_dims: &reduce_dims, reduce_dims_and_stride, } .map(self, layout) } ReduceOp::Min | ReduceOp::ArgMin | ReduceOp::Max | ReduceOp::ArgMax => { let reduce_dim_index = match reduce_dims { [reduce_dim_index] => *reduce_dim_index, _ => { let op = match op { ReduceOp::Min => "min", ReduceOp::ArgMin => "argmin", ReduceOp::Max => "max", ReduceOp::ArgMax => "argmax", _ => unreachable!(), }; let dims = reduce_dims.to_vec(); Err(Error::OnlySingleDimension { op, dims })? } }; let (use_min, return_index) = match op { ReduceOp::Min => (true, false), ReduceOp::ArgMin => (true, true), ReduceOp::Max => (false, false), ReduceOp::ArgMax => (false, true), _ => unreachable!(), }; ReduceIndex { reduce_dim_index, use_min, return_index, } .map(self, layout) } } } fn cmp(&self, op: CmpOp, rhs: &Self, lhs_l: &Layout, rhs_l: &Layout) -> Result<Self> { Cmp(op).map(self, lhs_l, rhs, rhs_l) } fn affine(&self, layout: &Layout, mul: f64, add: f64) -> Result<Self> { Affine(mul, add).map(self, layout) } fn avg_pool2d( &self, layout: &Layout, kernel_size: (usize, usize), stride: (usize, usize), ) -> Result<Self> { AvgPool2D(kernel_size, stride).map(self, layout) } fn max_pool2d( &self, layout: &Layout, kernel_size: (usize, usize), stride: (usize, usize), ) -> Result<Self> { MaxPool2D(kernel_size, stride).map(self, layout) } fn upsample_nearest1d(&self, layout: &Layout, sz: usize) -> Result<Self> { UpsampleNearest1D(sz).map(self, layout) } fn upsample_nearest2d(&self, layout: &Layout, h: usize, w: usize) -> Result<Self> { UpsampleNearest2D(h, w).map(self, layout) } fn powf(&self, layout: &Layout, e: f64) -> Result<Self> { use num_traits::Float; // TODO: Have some generic map for functions that apply on num_traits::Float elements. match self { Self::BF16(storage) => { let data = unary_map(storage, layout, |v| v.powf(bf16::from_f64(e))); Ok(Self::BF16(data)) } Self::F16(storage) => { let data = unary_map(storage, layout, |v| v.powf(f16::from_f64(e))); Ok(Self::F16(data)) } Self::F32(storage) => { let data = unary_map(storage, layout, |v| v.powf(e as f32)); Ok(Self::F32(data)) } Self::F64(storage) => { let data = unary_map(storage, layout, |v| v.powf(e)); Ok(Self::F64(data)) } Self::U8(_) => Err(Error::UnsupportedDTypeForOp(DType::U8, "elu").bt()), Self::U32(_) => Err(Error::UnsupportedDTypeForOp(DType::U32, "elu").bt()), Self::I64(_) => Err(Error::UnsupportedDTypeForOp(DType::I64, "elu").bt()), } } fn elu(&self, layout: &Layout, alpha: f64) -> Result<Self> { // TODO: Have some generic map for functions that apply on num_traits::Float elements. 
match self { Self::BF16(storage) => { let data = unary_map(storage, layout, |v| elu(v, bf16::from_f64(alpha))); Ok(Self::BF16(data)) } Self::F16(storage) => { let data = unary_map(storage, layout, |v| elu(v, f16::from_f64(alpha))); Ok(Self::F16(data)) } Self::F32(storage) => { let data = unary_map(storage, layout, |v| elu(v, f32::from_f64(alpha))); Ok(Self::F32(data)) } Self::F64(storage) => { let data = unary_map(storage, layout, |v| elu(v, alpha)); Ok(Self::F64(data)) } Self::U8(_) => Err(Error::UnsupportedDTypeForOp(DType::U8, "elu").bt()), Self::U32(_) => Err(Error::UnsupportedDTypeForOp(DType::U32, "elu").bt()), Self::I64(_) => Err(Error::UnsupportedDTypeForOp(DType::I64, "elu").bt()), } } fn unary_impl<B: UnaryOpT>(&self, layout: &Layout) -> Result<Self> { match self { Self::BF16(storage) => { if B::BF16_VEC { let data = unary_map_vec(storage, layout, B::bf16, B::bf16_vec); Ok(Self::BF16(data)) } else { let data = unary_map(storage, layout, B::bf16); Ok(Self::BF16(data)) } } Self::F16(storage) => { if B::F16_VEC { let data = unary_map_vec(storage, layout, B::f16, B::f16_vec); Ok(Self::F16(data)) } else { let data = unary_map(storage, layout, B::f16); Ok(Self::F16(data)) } } Self::F32(storage) => { if B::F32_VEC { let data = unary_map_vec(storage, layout, B::f32, B::f32_vec); Ok(Self::F32(data)) } else { let data = unary_map(storage, layout, B::f32); Ok(Self::F32(data)) } } Self::F64(storage) => { if B::F64_VEC { let data = unary_map_vec(storage, layout, B::f64, B::f64_vec); Ok(Self::F64(data)) } else { let data = unary_map(storage, layout, B::f64); Ok(Self::F64(data)) } } Self::U8(storage) => { let data = unary_map(storage, layout, B::u8); Ok(Self::U8(data)) } Self::U32(storage) => { let data = unary_map(storage, layout, B::u32); Ok(Self::U32(data)) } Self::I64(storage) => { let data = unary_map(storage, layout, B::i64); Ok(Self::I64(data)) } } } fn binary_impl<B: BinaryOpT>( &self, rhs: &Self, lhs_l: &Layout, rhs_l: &Layout, ) -> Result<Self> { match (self, rhs) { (Self::BF16(lhs), Self::BF16(rhs)) => { let data = if B::BF16_VEC { binary_map_vec(lhs_l, rhs_l, lhs, rhs, B::bf16, B::bf16_vec) } else { binary_map(lhs_l, rhs_l, lhs, rhs, B::bf16) }; Ok(Self::BF16(data)) } (Self::F16(lhs), Self::F16(rhs)) => { let data = if B::F16_VEC { binary_map_vec(lhs_l, rhs_l, lhs, rhs, B::f16, B::f16_vec) } else { binary_map(lhs_l, rhs_l, lhs, rhs, B::f16) }; Ok(Self::F16(data)) } (Self::F32(lhs), Self::F32(rhs)) => { let data = if B::F32_VEC { binary_map_vec(lhs_l, rhs_l, lhs, rhs, B::f32, B::f32_vec) } else { binary_map(lhs_l, rhs_l, lhs, rhs, B::f32) }; Ok(Self::F32(data)) } (Self::F64(lhs), Self::F64(rhs)) => { let data = if B::F64_VEC { binary_map_vec(lhs_l, rhs_l, lhs, rhs, B::f64, B::f64_vec) } else { binary_map(lhs_l, rhs_l, lhs, rhs, B::f64) }; Ok(Self::F64(data)) } (Self::U32(lhs), Self::U32(rhs)) => { let data = if B::U32_VEC { binary_map_vec(lhs_l, rhs_l, lhs, rhs, B::u32, B::u32_vec) } else { binary_map(lhs_l, rhs_l, lhs, rhs, B::u32) }; Ok(Self::U32(data)) } (Self::I64(lhs), Self::I64(rhs)) => { let data = if B::I64_VEC { binary_map_vec(lhs_l, rhs_l, lhs, rhs, B::i64, B::i64_vec) } else { binary_map(lhs_l, rhs_l, lhs, rhs, B::i64) }; Ok(Self::I64(data)) } (Self::U8(lhs), Self::U8(rhs)) => { let data = if B::U8_VEC { binary_map_vec(lhs_l, rhs_l, lhs, rhs, B::u8, B::u8_vec) } else { binary_map(lhs_l, rhs_l, lhs, rhs, B::u8) }; Ok(Self::U8(data)) } _ => { // This should be covered by the dtype check above. 
Err(Error::DTypeMismatchBinaryOp { lhs: self.dtype(), rhs: rhs.dtype(), op: B::NAME, } .bt()) } } } fn copy2d( &self, dst: &mut Self, d1: usize, d2: usize, src_s: usize, dst_s: usize, src_o: usize, dst_o: usize, ) -> Result<()> { match (self, dst) { (Self::U8(src), Self::U8(dst)) => copy2d_(src, dst, d1, d2, src_s, dst_s, src_o, dst_o), (Self::U32(src), Self::U32(dst)) => { copy2d_(src, dst, d1, d2, src_s, dst_s, src_o, dst_o) } (Self::I64(src), Self::I64(dst)) => { copy2d_(src, dst, d1, d2, src_s, dst_s, src_o, dst_o) } (Self::BF16(src), Self::BF16(dst)) => { copy2d_(src, dst, d1, d2, src_s, dst_s, src_o, dst_o) } (Self::F16(src), Self::F16(dst)) => { copy2d_(src, dst, d1, d2, src_s, dst_s, src_o, dst_o) } (Self::F32(src), Self::F32(dst)) => { copy2d_(src, dst, d1, d2, src_s, dst_s, src_o, dst_o) } (Self::F64(src), Self::F64(dst)) => { copy2d_(src, dst, d1, d2, src_s, dst_s, src_o, dst_o) } (_, dst) => { return Err(Error::DTypeMismatchBinaryOp { lhs: self.dtype(), rhs: dst.dtype(), op: "copy2d", } .bt()); } } Ok(()) } fn copy_strided_src(&self, dst: &mut Self, dst_offset: usize, src_l: &Layout) -> Result<()> { match (self, dst) { (Self::U8(src), Self::U8(dst)) => copy_strided_src_(src, dst, dst_offset, src_l), (Self::U32(src), Self::U32(dst)) => copy_strided_src_(src, dst, dst_offset, src_l), (Self::I64(src), Self::I64(dst)) => copy_strided_src_(src, dst, dst_offset, src_l), (Self::BF16(src), Self::BF16(dst)) => copy_strided_src_(src, dst, dst_offset, src_l), (Self::F16(src), Self::F16(dst)) => copy_strided_src_(src, dst, dst_offset, src_l), (Self::F32(src), Self::F32(dst)) => copy_strided_src_(src, dst, dst_offset, src_l), (Self::F64(src), Self::F64(dst)) => copy_strided_src_(src, dst, dst_offset, src_l), (_, dst) => { // This should be covered by the dtype check above. return Err(Error::DTypeMismatchBinaryOp { lhs: self.dtype(), rhs: dst.dtype(), op: "copy_strided", } .bt()); } } Ok(()) } fn where_cond( &self, layout: &Layout, t: &Self, t_l: &Layout, f: &Self, f_l: &Layout, ) -> Result<Self> { match self { Self::U8(pred) => WCond(pred, layout).map(t, t_l, f, f_l), Self::U32(pred) => WCond(pred, layout).map(t, t_l, f, f_l), Self::I64(pred) => WCond(pred, layout).map(t, t_l, f, f_l), _ => Err(Error::UnsupportedDTypeForOp(self.dtype(), "where-cond")), } } fn conv1d( &self, l: &Layout, kernel: &Self, kernel_l: &Layout, params: &crate::conv::ParamsConv1D, ) -> Result<Self> { if !USE_IM2COL_CONV1D { return Conv1D(params).map(self, l, kernel, kernel_l); } let op = Im2Col1D { l_k: params.k_size, padding: params.padding, stride: params.stride, dilation: params.dilation, }; let col = op.map(self, l)?; let b = params.b_size; let n = params.c_out; let l_out = params.l_out(); let k = op.l_k * params.c_in; let m = l_out; let col_l = Layout::contiguous((b, m, k)); let res = if kernel_l.is_contiguous() { let kernel_l = Layout::contiguous_with_offset((1, n, k), kernel_l.start_offset()) .transpose(1, 2)? .broadcast_as((b, k, n))?; col.matmul(kernel, (b, m, n, k), &col_l, &kernel_l)? } else { // Make the kernel contiguous if not already the case. let mut kernel_c = unsafe { self.device() .alloc_uninit(kernel_l.shape(), kernel.dtype())? }; kernel.copy_strided_src(&mut kernel_c, 0, kernel_l)?; let kernel_l = Layout::contiguous_with_offset((1, n, k), kernel_l.start_offset()) .transpose(1, 2)? .broadcast_as((b, k, n))?; col.matmul(kernel, (b, m, n, k), &col_l, &kernel_l)? 
}; let res_l = Layout::contiguous((b, l_out, params.c_out)).transpose(1, 2)?; let mut res_t = unsafe { self.device().alloc_uninit(res_l.shape(), res.dtype())? }; res.copy_strided_src(&mut res_t, 0, &res_l)?; Ok(res_t) } fn conv_transpose1d( &self, l: &Layout, kernel: &Self, kernel_l: &Layout, params: &crate::conv::ParamsConvTranspose1D, ) -> Result<Self> { let can_use_col2im = kernel_l.is_contiguous() && params.dilation == 1 && params.padding == 0 && params.output_padding == 0; if USE_COL2IM_CONV1D_TR && can_use_col2im { let (b_size, c_in, l_in) = l.shape().dims3()?; let (c_in2, c_out, k_size) = kernel_l.shape().dims3()?; if !kernel_l.is_contiguous() { crate::bail!( "convtr1d: the second argument (kernel) has to be contiguous {kernel_l:?}" ) } if c_in != c_in2 { crate::bail!( "convtr1d: shape mismatch on c_in {:?} {:?}", l.shape(), kernel_l.shape() ) } let col = { // This merges the last two dimensions of the kernel together. let kernel_l_mm = Layout::new( (b_size, c_in, k_size * c_out).into(), vec![0, k_size * c_out, 1], kernel_l.start_offset(), ); self.matmul( kernel, ( b_size, /* m */ l_in, /* n */ c_out * k_size, /* k */ c_in, ), &l.transpose(1, 2)?, &kernel_l_mm, )? }; let col_l = Layout::contiguous((b_size, l_in, c_out, k_size)); Col2Im1D { stride: params.stride, } .map(&col, &col_l) } else { ConvTranspose1D(params).map(self, l, kernel, kernel_l) } } fn conv2d( &self, l: &Layout, kernel: &Self, kernel_l: &Layout, params: &crate::conv::ParamsConv2D, ) -> Result<Self> { if !USE_IM2COL_CONV2D { return Conv2D(params).map(self, l, kernel, kernel_l); } let op = Im2Col { h_k: params.k_h, w_k: params.k_w, padding: params.padding, stride: params.stride, dilation: params.dilation, }; let col = op.map(self, l)?; let b = params.b_size; let n = params.c_out; let (h_out, w_out) = (params.out_h(), params.out_w()); let k = op.h_k * op.w_k * params.c_in; let m = h_out * w_out; let col_l = Layout::contiguous((b, m, k)); let res = if kernel_l.is_contiguous() { let kernel_l = Layout::contiguous_with_offset((1, n, k), kernel_l.start_offset()) .transpose(1, 2)? .broadcast_as((b, k, n))?; col.matmul(kernel, (b, m, n, k), &col_l, &kernel_l)? } else { // Make the kernel contiguous if not already the case. let mut kernel_c = unsafe { self.device() .alloc_uninit(kernel_l.shape(), kernel.dtype())? }; kernel.copy_strided_src(&mut kernel_c, 0, kernel_l)?; let kernel_l = Layout::contiguous_with_offset((1, n, k), kernel_l.start_offset()) .transpose(1, 2)? .broadcast_as((b, k, n))?; col.matmul(kernel, (b, m, n, k), &col_l, &kernel_l)? }; let res_l = Layout::contiguous((b, h_out, w_out, params.c_out)) .transpose(1, 2)? .transpose(1, 3)?; let mut res_t = unsafe { self.device().alloc_uninit(res_l.shape(), res.dtype())? 
}; res.copy_strided_src(&mut res_t, 0, &res_l)?; Ok(res_t) } fn conv_transpose2d( &self, l: &Layout, kernel: &Self, kernel_l: &Layout, params: &crate::conv::ParamsConvTranspose2D, ) -> Result<Self> { ConvTranspose2D(params).map(self, l, kernel, kernel_l) } fn index_select(&self, ids: &Self, l: &Layout, ids_l: &Layout, dim: usize) -> Result<Self> { match ids { Self::U8(ids) => IndexSelect { ids, ids_l, dim }.map(self, l), Self::U32(ids) => IndexSelect { ids, ids_l, dim }.map(self, l), Self::I64(ids) => IndexSelect { ids, ids_l, dim }.map(self, l), _ => Err(Error::UnsupportedDTypeForOp(self.dtype(), "index-select").bt()), } } fn gather(&self, l: &Layout, ids: &Self, ids_l: &Layout, dim: usize) -> Result<Self> { match ids { Self::U8(ids) => Gather { ids, ids_l, dim }.map(self, l), Self::U32(ids) => Gather { ids, ids_l, dim }.map(self, l), Self::I64(ids) => Gather { ids, ids_l, dim }.map(self, l), _ => Err(Error::UnsupportedDTypeForOp(self.dtype(), "gather").bt()), } } fn scatter_add( &self, l: &Layout, ids: &Self, ids_l: &Layout, src: &Self, src_l: &Layout, dim: usize, ) -> Result<Self> { match ids { Self::U8(ids) => ScatterAdd { ids, ids_l, dim }.map(self, l, src, src_l), Self::U32(ids) => ScatterAdd { ids, ids_l, dim }.map(self, l, src, src_l), Self::I64(ids) => ScatterAdd { ids, ids_l, dim }.map(self, l, src, src_l), _ => Err(Error::UnsupportedDTypeForOp(self.dtype(), "scatter-add").bt()), } } fn index_add( &self, l: &Layout, ids: &Self, ids_l: &Layout, src: &Self, src_l: &Layout, dim: usize, ) -> Result<Self> { match ids { Self::U8(ids) => { let ids = match ids_l.contiguous_offsets() { Some((a, b)) => &ids[a..b], None => Err(Error::RequiresContiguous { op: "index-add" }.bt())?, }; IndexAdd { ids, dim }.map(self, l, src, src_l) } Self::U32(ids) => { let ids = match ids_l.contiguous_offsets() { Some((a, b)) => &ids[a..b], None => Err(Error::RequiresContiguous { op: "index-add" }.bt())?, }; IndexAdd { ids, dim }.map(self, l, src, src_l) } Self::I64(ids) => { let ids = match ids_l.contiguous_offsets() { Some((a, b)) => &ids[a..b], None => Err(Error::RequiresContiguous { op: "index-add" }.bt())?, }; IndexAdd { ids, dim }.map(self, l, src, src_l) } _ => Err(Error::UnsupportedDTypeForOp(self.dtype(), "index-add").bt()), } } fn matmul( &self, rhs: &Self, bmnk: (usize, usize, usize, usize), lhs_l: &Layout, rhs_l: &Layout, ) -> Result<Self> { MatMul(bmnk).map(self, lhs_l, rhs, rhs_l) } fn device(&self) -> &Self::Device { &CpuDevice } fn try_clone(&self, _: &Layout) -> Result<Self> { Ok(self.clone()) } fn to_cpu_storage(&self) -> Result<CpuStorage> { Ok(self.clone()) } } impl BackendDevice for CpuDevice { type Storage = CpuStorage; fn location(&self) -> crate::DeviceLocation { crate::DeviceLocation::Cpu } fn same_device(&self, _: &Self) -> bool { true } fn storage_from_slice<T: crate::WithDType>(&self, s: &[T]) -> Result<Self::Storage> { Ok(T::to_cpu_storage(s)) } fn storage_from_cpu_storage(&self, s: &CpuStorage) -> Result<Self::Storage> { Ok(s.clone()) } fn storage_from_cpu_storage_owned(&self, s: CpuStorage) -> Result<Self::Storage> { Ok(s) } fn new(_: usize) -> Result<Self> { Ok(Self) } fn set_seed(&self, _seed: u64) -> Result<()> { crate::bail!("cannot seed the CPU rng with set_seed") } fn rand_uniform(&self, shape: &Shape, dtype: DType, min: f64, max: f64) -> Result<CpuStorage> { use rand::prelude::*; let elem_count = shape.elem_count(); let mut rng = rand::thread_rng(); match dtype { DType::U8 | DType::U32 | DType::I64 => { Err(Error::UnsupportedDTypeForOp(dtype, "rand_uniform").bt()) } 
DType::BF16 => { let mut data = Vec::with_capacity(elem_count); let uniform = rand::distributions::Uniform::new(bf16::from_f64(min), bf16::from_f64(max)); for _i in 0..elem_count { data.push(rng.sample::<bf16, _>(uniform)) } Ok(CpuStorage::BF16(data)) } DType::F16 => { let mut data = Vec::with_capacity(elem_count); let uniform = rand::distributions::Uniform::new(f16::from_f64(min), f16::from_f64(max)); for _i in 0..elem_count { data.push(rng.sample::<f16, _>(uniform)) } Ok(CpuStorage::F16(data)) } DType::F32 => { let mut data = Vec::with_capacity(elem_count); let uniform = rand::distributions::Uniform::new(min as f32, max as f32); for _i in 0..elem_count { data.push(rng.sample::<f32, _>(uniform)) } Ok(CpuStorage::F32(data)) } DType::F64 => { let mut data = Vec::with_capacity(elem_count); let uniform = rand::distributions::Uniform::new(min, max); for _i in 0..elem_count { data.push(rng.sample::<f64, _>(uniform)) } Ok(CpuStorage::F64(data)) } } } fn rand_normal(&self, shape: &Shape, dtype: DType, mean: f64, std: f64) -> Result<CpuStorage> { use rand::prelude::*; let elem_count = shape.elem_count(); let mut rng = rand::thread_rng(); match dtype { DType::U8 | DType::U32 | DType::I64 => { Err(Error::UnsupportedDTypeForOp(dtype, "rand_normal").bt()) } DType::BF16 => { let mut data = Vec::with_capacity(elem_count); let normal = rand_distr::Normal::new(bf16::from_f64(mean), bf16::from_f64(std)) .map_err(Error::wrap)?; for _i in 0..elem_count { data.push(normal.sample(&mut rng)) } Ok(CpuStorage::BF16(data)) } DType::F16 => { let mut data = Vec::with_capacity(elem_count); let normal = rand_distr::Normal::new(f16::from_f64(mean), f16::from_f64(std)) .map_err(Error::wrap)?; for _i in 0..elem_count { data.push(normal.sample(&mut rng)) } Ok(CpuStorage::F16(data)) } DType::F32 => { let mut data = Vec::with_capacity(elem_count); let normal = rand_distr::Normal::new(mean as f32, std as f32).map_err(Error::wrap)?; for _i in 0..elem_count { data.push(normal.sample(&mut rng)) } Ok(CpuStorage::F32(data)) } DType::F64 => { let mut data = Vec::with_capacity(elem_count); let normal = rand_distr::Normal::new(mean, std).map_err(Error::wrap)?; for _i in 0..elem_count { data.push(normal.sample(&mut rng)) } Ok(CpuStorage::F64(data)) } } } #[allow(clippy::uninit_vec)] unsafe fn alloc_uninit(&self, shape: &Shape, dtype: DType) -> Result<CpuStorage> { let elem_count = shape.elem_count(); // The code below is highly unsafe but hopefully not directly unsound as we only consider // types that are Copy, not Drop, and for which all bit patterns are proper values. 
// It's still pretty risky, see the following for more details: // https://github.com/rust-lang/rust-clippy/issues/4483 let storage = match dtype { DType::U8 => { let mut v = Vec::with_capacity(elem_count); v.set_len(elem_count); CpuStorage::U8(v) } DType::U32 => { let mut v = Vec::with_capacity(elem_count); v.set_len(elem_count); CpuStorage::U32(v) } DType::I64 => { let mut v = Vec::with_capacity(elem_count); v.set_len(elem_count); CpuStorage::I64(v) } DType::BF16 => { let mut v = Vec::with_capacity(elem_count); v.set_len(elem_count); CpuStorage::BF16(v) } DType::F16 => { let mut v = Vec::with_capacity(elem_count); v.set_len(elem_count); CpuStorage::F16(v) } DType::F32 => { let mut v = Vec::with_capacity(elem_count); v.set_len(elem_count); CpuStorage::F32(v) } DType::F64 => { let mut v = Vec::with_capacity(elem_count); v.set_len(elem_count); CpuStorage::F64(v) } }; Ok(storage) } fn ones_impl(&self, shape: &Shape, dtype: DType) -> Result<CpuStorage> { let elem_count = shape.elem_count(); let storage = match dtype { DType::U8 => CpuStorage::U8(vec![1u8; elem_count]), DType::U32 => CpuStorage::U32(vec![1u32; elem_count]), DType::I64 => CpuStorage::I64(vec![1i64; elem_count]), DType::BF16 => CpuStorage::BF16(vec![bf16::ONE; elem_count]), DType::F16 => CpuStorage::F16(vec![f16::ONE; elem_count]), DType::F32 => CpuStorage::F32(vec![1f32; elem_count]), DType::F64 => CpuStorage::F64(vec![1f64; elem_count]), }; Ok(storage) } fn zeros_impl(&self, shape: &Shape, dtype: DType) -> Result<CpuStorage> { let elem_count = shape.elem_count(); let storage = match dtype { DType::U8 => CpuStorage::U8(vec![0u8; elem_count]), DType::U32 => CpuStorage::U32(vec![0u32; elem_count]), DType::I64 => CpuStorage::I64(vec![0i64; elem_count]), DType::BF16 => CpuStorage::BF16(vec![bf16::ZERO; elem_count]), DType::F16 => CpuStorage::F16(vec![f16::ZERO; elem_count]), DType::F32 => CpuStorage::F32(vec![0f32; elem_count]), DType::F64 => CpuStorage::F64(vec![0f64; elem_count]), }; Ok(storage) } fn synchronize(&self) -> Result<()> { Ok(()) } } #[macro_export] macro_rules! map_dtype { ($name:expr, $storage:ident, $fn:expr, ($($dtypes:ident),+)) => { match $storage { $(CpuStorage::$dtypes(__e) => CpuStorage::$dtypes($fn(__e)),)* s => Err(Error::UnsupportedDTypeForOp(s.dtype(), $name).bt())?, } }; }
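// Editor's note (illustrative, not part of the upstream file): the `map_dtype!` macro
// above is aimed at ops that only support a subset of dtypes. A typical use, mirroring
// the one in candle-core/tests/custom_op_tests.rs, looks like:
//
//     let storage = candle_core::map_dtype!(
//         "elu",
//         s,
//         |s| cpu_backend::unary_map(s, l, |v| fwd(v, self.alpha)),
//         (BF16, F16, F32, F64)
//     );
//
// where `s: &CpuStorage` and `l: &Layout`; any dtype outside the listed set produces an
// `Error::UnsupportedDTypeForOp`.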
candle/candle-core/src/cpu_backend/mod.rs/0
{ "file_path": "candle/candle-core/src/cpu_backend/mod.rs", "repo_id": "candle", "token_count": 64123 }
19
//! ML framework for Rust
//!
//! ```rust
//! use candle_core::{Tensor, DType, Device};
//! # use candle_core::Error;
//! # fn main() -> Result<(), Error>{
//!
//! let a = Tensor::arange(0f32, 6f32, &Device::Cpu)?.reshape((2, 3))?;
//! let b = Tensor::arange(0f32, 12f32, &Device::Cpu)?.reshape((3, 4))?;
//!
//! let c = a.matmul(&b)?;
//! # Ok(())}
//! ```
//!
//! ## Features
//!
//! - Simple syntax (looks and feels like PyTorch)
//! - CPU and Cuda backends (and M1 support)
//! - Small and fast serverless (CPU) deployments
//! - Model training
//! - Distributed computing (NCCL)
//! - Models out of the box (Llama, Whisper, Falcon, ...)
//!
//! ## FAQ
//!
//! - Why Candle?
//!
//! Candle stems from the need to reduce binary size in order to make *serverless*
//! deployments possible, by keeping the whole engine much smaller than PyTorch's very
//! large library.
//!
//! It is also about simply *removing Python* from production workloads:
//! Python can add real overhead in more complex workflows, and the
//! [GIL](https://www.backblaze.com/blog/the-python-gil-past-present-and-future/) is a
//! notorious source of headaches.
//!
//! Rust is cool, and a lot of the HF ecosystem already has Rust crates such as
//! [safetensors](https://github.com/huggingface/safetensors) and
//! [tokenizers](https://github.com/huggingface/tokenizers).

#[cfg(feature = "accelerate")]
mod accelerate;
pub mod backend;
pub mod backprop;
pub mod conv;
mod convert;
pub mod cpu;
pub mod cpu_backend;
#[cfg(feature = "cuda")]
pub mod cuda_backend;
mod custom_op;
mod device;
pub mod display;
mod dtype;
pub mod dummy_cuda_backend;
mod dummy_metal_backend;
pub mod error;
mod indexer;
pub mod layout;
#[cfg(feature = "metal")]
pub mod metal_backend;
#[cfg(feature = "mkl")]
mod mkl;
pub mod npy;
pub mod op;
pub mod pickle;
pub mod quantized;
pub mod safetensors;
pub mod scalar;
pub mod shape;
mod sort;
mod storage;
pub mod streaming;
mod strided_index;
mod tensor;
mod tensor_cat;
pub mod test_utils;
pub mod utils;
mod variable;

#[cfg(feature = "cudnn")]
pub use cuda_backend::cudnn;

pub use cpu_backend::{CpuStorage, CpuStorageRef};
pub use custom_op::{CustomOp1, CustomOp2, CustomOp3, InplaceOp1, InplaceOp2, InplaceOp3};
pub use device::{Device, DeviceLocation, NdArray};
pub use dtype::{DType, DTypeParseError, FloatDType, IntDType, WithDType};
pub use error::{Error, Result};
pub use indexer::IndexOp;
pub use layout::Layout;
pub use shape::{Shape, D};
pub use storage::Storage;
pub use streaming::{StreamTensor, StreamingBinOp, StreamingModule};
pub use strided_index::{StridedBlocks, StridedIndex};
pub use tensor::{Tensor, TensorId};
pub use variable::Var;

#[cfg(feature = "cuda")]
pub use cuda_backend as cuda;

#[cfg(not(feature = "cuda"))]
pub use dummy_cuda_backend as cuda;

pub use cuda::{CudaDevice, CudaStorage};

#[cfg(feature = "metal")]
pub use metal_backend::{MetalDevice, MetalError, MetalStorage};

#[cfg(not(feature = "metal"))]
pub use dummy_metal_backend::{MetalDevice, MetalError, MetalStorage};

#[cfg(feature = "mkl")]
extern crate intel_mkl_src;

#[cfg(feature = "accelerate")]
extern crate accelerate_src;

pub trait ToUsize2 {
    fn to_usize2(self) -> (usize, usize);
}

impl ToUsize2 for usize {
    fn to_usize2(self) -> (usize, usize) {
        (self, self)
    }
}

impl ToUsize2 for (usize, usize) {
    fn to_usize2(self) -> (usize, usize) {
        self
    }
}

// A simple trait defining a module with a forward method using a single argument.
pub trait Module { fn forward(&self, xs: &Tensor) -> Result<Tensor>; } impl<T: Fn(&Tensor) -> Result<Tensor>> Module for T { fn forward(&self, xs: &Tensor) -> Result<Tensor> { self(xs) } } impl<M: Module> Module for Option<&M> { fn forward(&self, xs: &Tensor) -> Result<Tensor> { match self { None => Ok(xs.clone()), Some(m) => m.forward(xs), } } } // A trait defining a module with forward method using a single tensor argument and a flag to // separate the training and evaluation behaviors. pub trait ModuleT { fn forward_t(&self, xs: &Tensor, train: bool) -> Result<Tensor>; } impl<M: Module> ModuleT for M { fn forward_t(&self, xs: &Tensor, _train: bool) -> Result<Tensor> { self.forward(xs) } }
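
// Editor's addition (illustrative sketch, not part of the upstream file): a minimal
// `Module` implementation to make the trait above concrete. It only relies on
// `Tensor::affine`, which is defined in this crate; the layer name and the test values
// are made up for the example.
#[cfg(test)]
mod module_example {
    use super::{Device, Module, Result, Tensor};

    /// Scales every element of its input by a constant factor.
    struct Scale(f64);

    impl Module for Scale {
        fn forward(&self, xs: &Tensor) -> Result<Tensor> {
            // y = mul * x + add, with add = 0.
            xs.affine(self.0, 0.)
        }
    }

    #[test]
    fn scale_forward() -> Result<()> {
        let xs = Tensor::new(&[1f32, 2., 3.], &Device::Cpu)?;
        let ys = Scale(2.).forward(&xs)?;
        assert_eq!(ys.to_vec1::<f32>()?, [2f32, 4., 6.]);
        Ok(())
    }
}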
candle/candle-core/src/lib.rs/0
{ "file_path": "candle/candle-core/src/lib.rs", "repo_id": "candle", "token_count": 1598 }
20
use super::k_quants::{ BlockQ2K, BlockQ3K, BlockQ4K, BlockQ4_0, BlockQ5K, BlockQ6K, BlockQ8K, BlockQ8_0, QK8_0, QK_K, }; use crate::Result; use byteorder::{ByteOrder, LittleEndian}; #[allow(unused_imports)] #[cfg(target_arch = "arm")] use core::arch::arm::*; #[allow(unused_imports)] #[cfg(target_arch = "aarch64")] use core::arch::aarch64::*; #[inline(always)] unsafe fn vdotq_s32(a: int8x16_t, b: int8x16_t) -> int32x4_t { // TODO: dotprod let p0 = vmull_s8(vget_low_s8(a), vget_low_s8(b)); let p1 = vmull_s8(vget_high_s8(a), vget_high_s8(b)); vaddq_s32(vpaddlq_s16(p0), vpaddlq_s16(p1)) } #[inline(always)] pub(crate) fn vec_dot_q4_0_q8_0(n: usize, xs: &[BlockQ4_0], ys: &[BlockQ8_0]) -> Result<f32> { let qk = QK8_0; let nb = n / qk; if n % QK8_0 != 0 { crate::bail!("vec_dot_q4_0_q8_0: {n} is not divisible by {qk}") } unsafe { let mut sumv0 = vdupq_n_f32(0.0f32); for i in 0..nb { let x0 = &xs[i]; let y0 = &ys[i]; let m4b = vdupq_n_u8(0x0F); let s8b = vdupq_n_s8(0x8); let v0_0 = vld1q_u8(x0.qs.as_ptr()); // 4-bit -> 8-bit let v0_0l = vreinterpretq_s8_u8(vandq_u8(v0_0, m4b)); let v0_0h = vreinterpretq_s8_u8(vshrq_n_u8(v0_0, 4)); // sub 8 let v0_0ls = vsubq_s8(v0_0l, s8b); let v0_0hs = vsubq_s8(v0_0h, s8b); // load y let v1_0l = vld1q_s8(y0.qs.as_ptr()); let v1_0h = vld1q_s8(y0.qs.as_ptr().add(16)); let pl0 = vdotq_s32(v0_0ls, v1_0l); let ph0 = vdotq_s32(v0_0hs, v1_0h); sumv0 = vmlaq_n_f32( sumv0, vcvtq_f32_s32(vaddq_s32(pl0, ph0)), x0.d.to_f32() * y0.d.to_f32(), ); } Ok(vaddvq_f32(sumv0)) } } #[inline(always)] pub(crate) fn vec_dot_q8_0_q8_0(n: usize, xs: &[BlockQ8_0], ys: &[BlockQ8_0]) -> Result<f32> { let qk = QK8_0; if n % QK8_0 != 0 { crate::bail!("vec_dot_q8_0_q8_0: {n} is not divisible by {qk}") } let nb = n / QK8_0; unsafe { let mut sumv0 = vdupq_n_f32(0.0f32); for i in 0..nb { let x0 = &xs[i]; let y0 = &ys[i]; let x0_0 = vld1q_s8(x0.qs.as_ptr()); let x0_1 = vld1q_s8(x0.qs.as_ptr().add(16)); // load y let y0_0 = vld1q_s8(y0.qs.as_ptr()); let y0_1 = vld1q_s8(y0.qs.as_ptr().add(16)); let p0 = vdotq_s32(x0_0, y0_0); let p1 = vdotq_s32(x0_1, y0_1); sumv0 = vmlaq_n_f32( sumv0, vcvtq_f32_s32(vaddq_s32(p0, p1)), x0.d.to_f32() * y0.d.to_f32(), ); } Ok(vaddvq_f32(sumv0)) } } #[inline(always)] pub(crate) fn vec_dot_q8k_q8k(n: usize, xs: &[BlockQ8K], ys: &[BlockQ8K]) -> Result<f32> { let qk = QK_K; if n % QK_K != 0 { crate::bail!("vec_dot_q8k_q8k: {n} is not divisible by {qk}") } let mut sumf = 0f32; for (xs, ys) in xs.iter().zip(ys.iter()) { unsafe { let mut sum_i = vdupq_n_s32(0); let scale = xs.d * ys.d; let xs = xs.qs.as_ptr(); let ys = ys.qs.as_ptr(); for i in (0..QK_K).step_by(16) { let xs = vld1q_s8(xs.add(i)); let ys = vld1q_s8(ys.add(i)); let xy = vdotq_s32(xs, ys); sum_i = vaddq_s32(sum_i, xy) } sumf += vaddvq_s32(sum_i) as f32 * scale } } Ok(sumf) } #[inline(always)] pub(crate) fn vec_dot_q6k_q8k(n: usize, xs: &[BlockQ6K], ys: &[BlockQ8K]) -> Result<f32> { if n % QK_K != 0 { crate::bail!("vec_dot_q6k_q8k: {n} is not divisible by {QK_K}") } let mut sum = 0f32; unsafe { let m4b = vdupq_n_u8(0xF); let mone = vdupq_n_u8(3); for (x, y) in xs.iter().zip(ys.iter()) { let d_all = x.d.to_f32(); let mut q6 = x.ql.as_ptr(); let mut qh = x.qh.as_ptr(); let mut q8 = y.qs.as_ptr(); let mut scale = x.scales.as_ptr(); let q8sums = vld1q_s16_x2(y.bsums.as_ptr()); let scales = vld1q_s8(scale); let q6scales = int16x8x2_t( vmovl_s8(vget_low_s8(scales)), vmovl_s8(vget_high_s8(scales)), ); let prod = vaddq_s32( vaddq_s32( vmull_s16(vget_low_s16(q8sums.0), vget_low_s16(q6scales.0)), 
vmull_s16(vget_high_s16(q8sums.0), vget_high_s16(q6scales.0)), ), vaddq_s32( vmull_s16(vget_low_s16(q8sums.1), vget_low_s16(q6scales.1)), vmull_s16(vget_high_s16(q8sums.1), vget_high_s16(q6scales.1)), ), ); let isum_mins = vaddvq_s32(prod); let mut isum = 0i32; for _j in 0..QK_K / 128 { let qhbits = vld1q_u8_x2(qh); qh = qh.add(32); let q6bits = vld1q_u8_x4(q6); q6 = q6.add(64); let q8bytes = vld1q_s8_x4(q8); q8 = q8.add(64); let q6h_0 = vshlq_n_u8(vandq_u8(mone, qhbits.0), 4); let q6h_1 = vshlq_n_u8(vandq_u8(mone, qhbits.1), 4); let shifted = vshrq_n_u8(qhbits.0, 2); let q6h_2 = vshlq_n_u8(vandq_u8(mone, shifted), 4); let shifted = vshrq_n_u8(qhbits.1, 2); let q6h_3 = vshlq_n_u8(vandq_u8(mone, shifted), 4); let q6bytes_0 = vreinterpretq_s8_u8(vorrq_u8(vandq_u8(q6bits.0, m4b), q6h_0)); let q6bytes_1 = vreinterpretq_s8_u8(vorrq_u8(vandq_u8(q6bits.1, m4b), q6h_1)); let q6bytes_2 = vreinterpretq_s8_u8(vorrq_u8(vandq_u8(q6bits.2, m4b), q6h_2)); let q6bytes_3 = vreinterpretq_s8_u8(vorrq_u8(vandq_u8(q6bits.3, m4b), q6h_3)); let p0 = vdotq_s32(q6bytes_0, q8bytes.0); let p1 = vdotq_s32(q6bytes_1, q8bytes.1); let (scale0, scale1) = (*scale as i32, *scale.add(1) as i32); isum += vaddvq_s32(p0) * scale0 + vaddvq_s32(p1) * scale1; scale = scale.add(2); let p2 = vdotq_s32(q6bytes_2, q8bytes.2); let p3 = vdotq_s32(q6bytes_3, q8bytes.3); let (scale0, scale1) = (*scale as i32, *scale.add(1) as i32); isum += vaddvq_s32(p2) * scale0 + vaddvq_s32(p3) * scale1; scale = scale.add(2); let q8bytes = vld1q_s8_x4(q8); q8 = q8.add(64); let shifted = vshrq_n_u8(qhbits.0, 4); let q6h_0 = vshlq_n_u8(vandq_u8(mone, shifted), 4); let shifted = vshrq_n_u8(qhbits.1, 4); let q6h_1 = vshlq_n_u8(vandq_u8(mone, shifted), 4); let shifted = vshrq_n_u8(qhbits.0, 6); let q6h_2 = vshlq_n_u8(vandq_u8(mone, shifted), 4); let shifted = vshrq_n_u8(qhbits.1, 6); let q6h_3 = vshlq_n_u8(vandq_u8(mone, shifted), 4); let q6bytes_0 = vreinterpretq_s8_u8(vorrq_u8(vshrq_n_u8(q6bits.0, 4), q6h_0)); let q6bytes_1 = vreinterpretq_s8_u8(vorrq_u8(vshrq_n_u8(q6bits.1, 4), q6h_1)); let q6bytes_2 = vreinterpretq_s8_u8(vorrq_u8(vshrq_n_u8(q6bits.2, 4), q6h_2)); let q6bytes_3 = vreinterpretq_s8_u8(vorrq_u8(vshrq_n_u8(q6bits.3, 4), q6h_3)); let p0 = vdotq_s32(q6bytes_0, q8bytes.0); let p1 = vdotq_s32(q6bytes_1, q8bytes.1); let (scale0, scale1) = (*scale as i32, *scale.add(1) as i32); isum += vaddvq_s32(p0) * scale0 + vaddvq_s32(p1) * scale1; scale = scale.add(2); let p2 = vdotq_s32(q6bytes_2, q8bytes.2); let p3 = vdotq_s32(q6bytes_3, q8bytes.3); let (scale0, scale1) = (*scale as i32, *scale.add(1) as i32); isum += vaddvq_s32(p2) * scale0 + vaddvq_s32(p3) * scale1; scale = scale.add(2); } sum += d_all * y.d * ((isum - 32 * isum_mins) as f32); } } Ok(sum) } #[inline(always)] pub(crate) fn vec_dot_q5k_q8k(n: usize, xs: &[BlockQ5K], ys: &[BlockQ8K]) -> Result<f32> { if n % QK_K != 0 { crate::bail!("vec_dot_q5k_q8k: {n} is not divisible by {QK_K}") } let mut sumf = 0f32; let mut utmp = [0u32; 4]; const KMASK1: u32 = 0x3f3f3f3f; const KMASK2: u32 = 0x0f0f0f0f; const KMASK3: u32 = 0x03030303; unsafe { let m4b = vdupq_n_u8(0xF); let mone = vdupq_n_u8(1); let mtwo = vdupq_n_u8(2); for (x, y) in xs.iter().zip(ys.iter()) { let d = y.d * x.d.to_f32(); let dmin = y.d * x.dmin.to_f32(); let q8sums = vpaddq_s16( vld1q_s16(y.bsums.as_ptr()), vld1q_s16(y.bsums.as_ptr().add(8)), ); LittleEndian::read_u32_into(&x.scales, &mut utmp[0..3]); utmp[3] = ((utmp[2] >> 4) & KMASK2) | (((utmp[1] >> 6) & KMASK3) << 4); let uaux = utmp[1] & KMASK1; utmp[1] = (utmp[2] & KMASK2) | 
(((utmp[0] >> 6) & KMASK3) << 4); utmp[2] = uaux; utmp[0] &= KMASK1; let mins8 = vld1_u8((utmp.as_ptr() as *const u8).add(8)); let mins = vreinterpretq_s16_u16(vmovl_u8(mins8)); let prod = vaddq_s32( vmull_s16(vget_low_s16(q8sums), vget_low_s16(mins)), vmull_s16(vget_high_s16(q8sums), vget_high_s16(mins)), ); let sumi_mins = vaddvq_s32(prod); let mut scales = utmp.as_ptr() as *const u8; let mut q5 = x.qs.as_ptr(); let mut q8 = y.qs.as_ptr(); let mut qhbits = vld1q_u8_x2(x.qh.as_ptr()); let mut sumi = 0i32; for _j in 0..QK_K / 64 { let q5bits = vld1q_u8_x2(q5); q5 = q5.add(32); let q8bytes = vld1q_s8_x4(q8); q8 = q8.add(64); let q5h_0 = vshlq_n_u8(vandq_u8(mone, qhbits.0), 4); let q5h_1 = vshlq_n_u8(vandq_u8(mone, qhbits.1), 4); let q5h_2 = vshlq_n_u8(vandq_u8(mtwo, qhbits.0), 3); let q5h_3 = vshlq_n_u8(vandq_u8(mtwo, qhbits.1), 3); qhbits.0 = vshrq_n_u8(qhbits.0, 2); qhbits.1 = vshrq_n_u8(qhbits.1, 2); let q5bytes_0 = vreinterpretq_s8_u8(vorrq_u8(vandq_u8(q5bits.0, m4b), q5h_0)); let q5bytes_1 = vreinterpretq_s8_u8(vorrq_u8(vandq_u8(q5bits.1, m4b), q5h_1)); let q5bytes_2 = vreinterpretq_s8_u8(vorrq_u8(vshrq_n_u8(q5bits.0, 4), q5h_2)); let q5bytes_3 = vreinterpretq_s8_u8(vorrq_u8(vshrq_n_u8(q5bits.1, 4), q5h_3)); let p0 = vdotq_s32(q5bytes_0, q8bytes.0); let p1 = vdotq_s32(q5bytes_1, q8bytes.1); sumi += vaddvq_s32(vaddq_s32(p0, p1)) * *scales as i32; scales = scales.add(1); let p2 = vdotq_s32(q5bytes_2, q8bytes.2); let p3 = vdotq_s32(q5bytes_3, q8bytes.3); sumi += vaddvq_s32(vaddq_s32(p2, p3)) * *scales as i32; scales = scales.add(1); } sumf += d * sumi as f32 - dmin * sumi_mins as f32; } } Ok(sumf) } #[inline(always)] pub(crate) fn vec_dot_q4k_q8k(n: usize, xs: &[BlockQ4K], ys: &[BlockQ8K]) -> Result<f32> { if n % QK_K != 0 { crate::bail!("vec_dot_q4k_q8k: {n} is not divisible by {QK_K}") } let mut sumf = 0f32; let mut utmp = [0u32; 4]; let mut scales = [0u8; 16]; const KMASK1: u32 = 0x3f3f3f3f; const KMASK2: u32 = 0x0f0f0f0f; const KMASK3: u32 = 0x03030303; unsafe { let m4b = vdupq_n_u8(0xF); for (x, y) in xs.iter().zip(ys.iter()) { let d = y.d * x.d.to_f32(); let dmin = y.d * x.dmin.to_f32(); let q8sums = vpaddq_s16( vld1q_s16(y.bsums.as_ptr()), vld1q_s16(y.bsums.as_ptr().add(8)), ); LittleEndian::read_u32_into(&x.scales, &mut utmp[0..3]); let mins8 = vld1_u32( [ utmp[1] & KMASK1, ((utmp[2] >> 4) & KMASK2) | (((utmp[1] >> 6) & KMASK3) << 4), ] .as_ptr(), ); utmp[1] = (utmp[2] & KMASK2) | (((utmp[0] >> 6) & KMASK3) << 4); utmp[0] &= KMASK1; let mins = vreinterpretq_s16_u16(vmovl_u8(vreinterpret_u8_u32(mins8))); let prod = vaddq_s32( vmull_s16(vget_low_s16(q8sums), vget_low_s16(mins)), vmull_s16(vget_high_s16(q8sums), vget_high_s16(mins)), ); sumf -= dmin * vaddvq_s32(prod) as f32; LittleEndian::write_u32_into(&utmp, &mut scales); let mut q4 = x.qs.as_ptr(); let mut q8 = y.qs.as_ptr(); let mut sumi1 = 0i32; let mut sumi2 = 0i32; for j in 0..QK_K / 64 { let q4bits = vld1q_u8_x2(q4); q4 = q4.add(32); let q8bytes = vld1q_s8_x2(q8); q8 = q8.add(32); let q4bytes = int8x16x2_t( vreinterpretq_s8_u8(vandq_u8(q4bits.0, m4b)), vreinterpretq_s8_u8(vandq_u8(q4bits.1, m4b)), ); let p0 = vdotq_s32(q4bytes.0, q8bytes.0); let p1 = vdotq_s32(q4bytes.1, q8bytes.1); sumi1 += vaddvq_s32(vaddq_s32(p0, p1)) * scales[2 * j] as i32; let q8bytes = vld1q_s8_x2(q8); q8 = q8.add(32); let q4bytes = int8x16x2_t( vreinterpretq_s8_u8(vshrq_n_u8(q4bits.0, 4)), vreinterpretq_s8_u8(vshrq_n_u8(q4bits.1, 4)), ); let p2 = vdotq_s32(q4bytes.0, q8bytes.0); let p3 = vdotq_s32(q4bytes.1, q8bytes.1); sumi2 += 
vaddvq_s32(vaddq_s32(p2, p3)) * scales[2 * j + 1] as i32; } sumf += d * (sumi1 + sumi2) as f32; } } Ok(sumf) } #[inline(always)] pub(crate) fn vec_dot_q3k_q8k(n: usize, xs: &[BlockQ3K], ys: &[BlockQ8K]) -> Result<f32> { if n % QK_K != 0 { crate::bail!("vec_dot_q3k_q8k: {n} is not divisible by {QK_K}") } let mut sumf = 0f32; let mut utmp = [0u32; 4]; let mut aux = [0u32; 3]; const KMASK1: u32 = 0x03030303; const KMASK2: u32 = 0x0f0f0f0f; unsafe { let m3b = vdupq_n_u8(0x3); let m0 = vdupq_n_u8(1); let m1 = vshlq_n_u8(m0, 1); let m2 = vshlq_n_u8(m0, 2); let m3 = vshlq_n_u8(m0, 3); for (x, y) in xs.iter().zip(ys.iter()) { let d = y.d * x.d.to_f32(); let mut q3 = x.qs.as_ptr(); let qh = x.hmask.as_ptr(); let mut q8 = y.qs.as_ptr(); let mut qhbits = vld1q_u8_x2(qh); let mut isum = 0i32; // Set up scales LittleEndian::read_u32_into(&x.scales, &mut aux); utmp[3] = ((aux[1] >> 4) & KMASK2) | (((aux[2] >> 6) & KMASK1) << 4); utmp[2] = ((aux[0] >> 4) & KMASK2) | (((aux[2] >> 4) & KMASK1) << 4); utmp[1] = (aux[1] & KMASK2) | (((aux[2] >> 2) & KMASK1) << 4); utmp[0] = (aux[0] & KMASK2) | ((aux[2] & KMASK1) << 4); let mut scale = utmp.as_mut_ptr() as *mut i8; for j in 0..16 { *scale.add(j) -= 32i8 } for j in 0..QK_K / 128 { let q3bits = vld1q_u8_x2(q3); q3 = q3.add(32); let q8bytes_1 = vld1q_s8_x4(q8); q8 = q8.add(64); let q8bytes_2 = vld1q_s8_x4(q8); q8 = q8.add(64); let q3h_0 = vshlq_n_u8(vbicq_u8(m0, qhbits.0), 2); let q3h_1 = vshlq_n_u8(vbicq_u8(m0, qhbits.1), 2); let q3h_2 = vshlq_n_u8(vbicq_u8(m1, qhbits.0), 1); let q3h_3 = vshlq_n_u8(vbicq_u8(m1, qhbits.1), 1); let q3bytes_0 = vsubq_s8( vreinterpretq_s8_u8(vandq_u8(q3bits.0, m3b)), vreinterpretq_s8_u8(q3h_0), ); let q3bytes_1 = vsubq_s8( vreinterpretq_s8_u8(vandq_u8(q3bits.1, m3b)), vreinterpretq_s8_u8(q3h_1), ); let q3bytes_2 = vsubq_s8( vreinterpretq_s8_u8(vandq_u8(vshrq_n_u8(q3bits.0, 2), m3b)), vreinterpretq_s8_u8(q3h_2), ); let q3bytes_3 = vsubq_s8( vreinterpretq_s8_u8(vandq_u8(vshrq_n_u8(q3bits.1, 2), m3b)), vreinterpretq_s8_u8(q3h_3), ); let p0 = vdotq_s32(q3bytes_0, q8bytes_1.0); let p1 = vdotq_s32(q3bytes_1, q8bytes_1.1); let p2 = vdotq_s32(q3bytes_2, q8bytes_1.2); let p3 = vdotq_s32(q3bytes_3, q8bytes_1.3); isum += vaddvq_s32(p0) * *scale as i32 + vaddvq_s32(p1) * *scale.add(1) as i32 + vaddvq_s32(p2) * *scale.add(2) as i32 + vaddvq_s32(p3) * *scale.add(3) as i32; scale = scale.add(4); let q3h_0 = vbicq_u8(m2, qhbits.0); let q3h_1 = vbicq_u8(m2, qhbits.1); let q3h_2 = vshrq_n_u8(vbicq_u8(m3, qhbits.0), 1); let q3h_3 = vshrq_n_u8(vbicq_u8(m3, qhbits.1), 1); let q3bytes_0 = vsubq_s8( vreinterpretq_s8_u8(vandq_u8(vshrq_n_u8(q3bits.0, 4), m3b)), vreinterpretq_s8_u8(q3h_0), ); let q3bytes_1 = vsubq_s8( vreinterpretq_s8_u8(vandq_u8(vshrq_n_u8(q3bits.1, 4), m3b)), vreinterpretq_s8_u8(q3h_1), ); let q3bytes_2 = vsubq_s8( vreinterpretq_s8_u8(vandq_u8(vshrq_n_u8(q3bits.0, 6), m3b)), vreinterpretq_s8_u8(q3h_2), ); let q3bytes_3 = vsubq_s8( vreinterpretq_s8_u8(vandq_u8(vshrq_n_u8(q3bits.1, 6), m3b)), vreinterpretq_s8_u8(q3h_3), ); let p0 = vdotq_s32(q3bytes_0, q8bytes_2.0); let p1 = vdotq_s32(q3bytes_1, q8bytes_2.1); let p2 = vdotq_s32(q3bytes_2, q8bytes_2.2); let p3 = vdotq_s32(q3bytes_3, q8bytes_2.3); isum += vaddvq_s32(p0) * *scale as i32 + vaddvq_s32(p1) * *scale.add(1) as i32 + vaddvq_s32(p2) * *scale.add(2) as i32 + vaddvq_s32(p3) * *scale.add(3) as i32; scale = scale.add(4); if j == 0 { qhbits.0 = vshrq_n_u8(qhbits.0, 4); qhbits.1 = vshrq_n_u8(qhbits.1, 4); } } sumf += d * isum as f32; } } Ok(sumf) } #[inline(always)] pub(crate) fn 
vec_dot_q2k_q8k(n: usize, xs: &[BlockQ2K], ys: &[BlockQ8K]) -> Result<f32> { if n % QK_K != 0 { crate::bail!("vec_dot_q2k_q8k: {n} is not divisible by {QK_K}") } let mut sumf = 0f32; let mut aux = [0u8; 16]; unsafe { let m3 = vdupq_n_u8(0x3); let m4 = vdupq_n_u8(0xF); for (x, y) in xs.iter().zip(ys.iter()) { let d = y.d * x.d.to_f32(); let dmin = -y.d * x.dmin.to_f32(); let mut q2 = x.qs.as_ptr(); let mut q8 = y.qs.as_ptr(); let sc = x.scales.as_ptr(); let mins_and_scales = vld1q_u8(sc); let scales = vandq_u8(mins_and_scales, m4); vst1q_u8(aux.as_mut_ptr(), scales); let mins = vshrq_n_u8(mins_and_scales, 4); let q8sums = vld1q_s16_x2(y.bsums.as_ptr()); let mins16 = int16x8x2_t( vreinterpretq_s16_u16(vmovl_u8(vget_low_u8(mins))), vreinterpretq_s16_u16(vmovl_u8(vget_high_u8(mins))), ); let s0 = vaddq_s32( vmull_s16(vget_low_s16(mins16.0), vget_low_s16(q8sums.0)), vmull_s16(vget_high_s16(mins16.0), vget_high_s16(q8sums.0)), ); let s1 = vaddq_s32( vmull_s16(vget_low_s16(mins16.1), vget_low_s16(q8sums.1)), vmull_s16(vget_high_s16(mins16.1), vget_high_s16(q8sums.1)), ); sumf += dmin * vaddvq_s32(vaddq_s32(s0, s1)) as f32; let mut isum = 0i32; let mut is = 0usize; // TODO: dotprod for _j in 0..QK_K / 128 { let q2bits = vld1q_u8_x2(q2); q2 = q2.add(32); let q8bytes = vld1q_s8_x2(q8); q8 = q8.add(32); let mut q2bytes = int8x16x2_t( vreinterpretq_s8_u8(vandq_u8(q2bits.0, m3)), vreinterpretq_s8_u8(vandq_u8(q2bits.1, m3)), ); isum += multiply_accum_with_scale(&aux, is, 0, q2bytes, q8bytes); let q8bytes = vld1q_s8_x2(q8); q8 = q8.add(32); q2bytes.0 = vreinterpretq_s8_u8(vandq_u8(vshrq_n_u8(q2bits.0, 2), m3)); q2bytes.1 = vreinterpretq_s8_u8(vandq_u8(vshrq_n_u8(q2bits.1, 2), m3)); isum += multiply_accum_with_scale(&aux, is, 2, q2bytes, q8bytes); let q8bytes = vld1q_s8_x2(q8); q8 = q8.add(32); q2bytes.0 = vreinterpretq_s8_u8(vandq_u8(vshrq_n_u8(q2bits.0, 4), m3)); q2bytes.1 = vreinterpretq_s8_u8(vandq_u8(vshrq_n_u8(q2bits.1, 4), m3)); isum += multiply_accum_with_scale(&aux, is, 4, q2bytes, q8bytes); let q8bytes = vld1q_s8_x2(q8); q8 = q8.add(32); q2bytes.0 = vreinterpretq_s8_u8(vandq_u8(vshrq_n_u8(q2bits.0, 6), m3)); q2bytes.1 = vreinterpretq_s8_u8(vandq_u8(vshrq_n_u8(q2bits.1, 6), m3)); isum += multiply_accum_with_scale(&aux, is, 6, q2bytes, q8bytes); is += 8; } sumf += d * isum as f32; } } Ok(sumf) } #[inline(always)] unsafe fn multiply_accum_with_scale( aux: &[u8; 16], is: usize, index: usize, q2bytes: int8x16x2_t, q8bytes: int8x16x2_t, ) -> i32 { let p1 = vdotq_s32(q2bytes.0, q8bytes.0); let p2 = vdotq_s32(q2bytes.1, q8bytes.1); vaddvq_s32(p1) * aux[is + index] as i32 + vaddvq_s32(p2) * aux[is + 1 + index] as i32 }
candle/candle-core/src/quantized/neon.rs/0
{ "file_path": "candle/candle-core/src/quantized/neon.rs", "repo_id": "candle", "token_count": 15290 }
21
use candle_core::backend::BackendStorage; use candle_core::cpu_backend; use candle_core::test_utils::to_vec1_round; use candle_core::{CpuStorage, CustomOp1, DType, Device, Error, Layout, Result, Shape, Tensor}; fn fwd<T: num_traits::Float>(v: T, alpha: f64) -> T { if v.is_sign_positive() { v } else { let alpha = T::from(alpha).unwrap_or(T::nan()); (v.exp() - T::one()) * alpha } } struct Elu { alpha: f64, } impl CustomOp1 for Elu { fn name(&self) -> &'static str { "elu" } fn cpu_fwd(&self, s: &CpuStorage, l: &Layout) -> Result<(CpuStorage, Shape)> { let storage = candle_core::map_dtype!( "elu", s, |s| cpu_backend::unary_map(s, l, |v| fwd(v, self.alpha)), (BF16, F16, F32, F64) ); Ok((storage, l.shape().clone())) } } #[test] fn custom_op1_no_backward() -> Result<()> { let cpu = &Device::Cpu; let t = Tensor::arange(0u32, 12u32, cpu)?.to_dtype(DType::F32)?; let t = (t - 5.)?; let elu_t = t.apply_op1_no_bwd(&Elu { alpha: 1. })?; assert_eq!( to_vec1_round(&elu_t, 4)?, &[-0.9933, -0.9817, -0.9502, -0.8647, -0.6321, 0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0] ); Ok(()) } // Define a similar struct as Elu but with backward support. fn bwd<T: num_traits::Float>(v: T, alpha: f64) -> T { if v.is_sign_positive() { T::one() } else { let alpha = T::from(alpha).unwrap_or(T::nan()); v.exp() * alpha } } struct EluBackward { alpha: f64, } impl CustomOp1 for EluBackward { fn name(&self) -> &'static str { "elu-bwd" } fn cpu_fwd(&self, s: &CpuStorage, l: &Layout) -> Result<(CpuStorage, Shape)> { let storage = candle_core::map_dtype!( "elu-bwd", s, |s| cpu_backend::unary_map(s, l, |v| bwd(v, self.alpha)), (BF16, F16, F32, F64) ); Ok((storage, l.shape().clone())) } } struct EluWithBackward(Elu); impl EluWithBackward { fn new(alpha: f64) -> Self { Self(Elu { alpha }) } } impl CustomOp1 for EluWithBackward { fn name(&self) -> &'static str { "elu" } fn cpu_fwd(&self, s: &CpuStorage, l: &Layout) -> Result<(CpuStorage, Shape)> { self.0.cpu_fwd(s, l) } fn bwd(&self, arg: &Tensor, _res: &Tensor, grad_res: &Tensor) -> Result<Option<Tensor>> { let alpha = self.0.alpha; let bwd = arg.apply_op1(EluBackward { alpha })?; Ok(Some(grad_res.mul(&bwd)?)) } } #[test] fn custom_op1_with_backward() -> Result<()> { let cpu = &Device::Cpu; let t = candle_core::Var::new(&[-2f32, 0f32, 2f32], cpu)?; let elu_t = t.apply_op1(EluWithBackward::new(2.))?; assert_eq!(to_vec1_round(&elu_t, 4)?, &[-1.7293, 0.0, 2.0]); let grads = elu_t.backward()?; let grad_x = grads.get(&t).unwrap(); assert_eq!(to_vec1_round(grad_x, 4)?, [0.2707, 1.0, 1.0]); Ok(()) } impl candle_core::InplaceOp1 for Elu { fn name(&self) -> &'static str { "elu" } fn cpu_fwd(&self, s: &mut CpuStorage, _l: &Layout) -> Result<()> { let alpha = self.alpha; match s { CpuStorage::BF16(s) => s.iter_mut().for_each(|v| *v = fwd(*v, alpha)), CpuStorage::F16(s) => s.iter_mut().for_each(|v| *v = fwd(*v, alpha)), CpuStorage::F32(s) => s.iter_mut().for_each(|v| *v = fwd(*v, alpha)), CpuStorage::F64(s) => s.iter_mut().for_each(|v| *v = fwd(*v, alpha)), _ => candle_core::bail!("unsupported dtype for inplace elu"), } Ok(()) } } #[test] fn inplace_op1() -> Result<()> { let cpu = &Device::Cpu; let t = Tensor::arange(0u32, 12u32, cpu)?.to_dtype(DType::F32)?; let t = (t - 5.)?; t.inplace_op1(&Elu { alpha: 1. })?; assert_eq!( to_vec1_round(&t, 4)?, &[-0.9933, -0.9817, -0.9502, -0.8647, -0.6321, 0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0] ); Ok(()) }
candle/candle-core/tests/custom_op_tests.rs/0
{ "file_path": "candle/candle-core/tests/custom_op_tests.rs", "repo_id": "candle", "token_count": 2102 }
22
#[cfg(feature = "mkl")] extern crate intel_mkl_src; #[cfg(feature = "accelerate")] extern crate accelerate_src; use clap::{Parser, ValueEnum}; use candle::{DType, IndexOp, D}; use candle_nn::{Module, VarBuilder}; use candle_transformers::models::convnext; #[derive(Clone, Copy, Debug, ValueEnum)] enum Which { Atto, Femto, Pico, Nano, Tiny, Small, Base, Large, AttoV2, FemtoV2, PicoV2, NanoV2, TinyV2, BaseV2, LargeV2, XLarge, Huge, } impl Which { fn model_filename(&self) -> String { let name = match self { Self::Atto => "convnext_atto.d2_in1k", Self::Femto => "convnext_femto.d1_in1k", Self::Pico => "convnext_pico.d1_in1k", Self::Nano => "convnext_nano.d1h_in1k", Self::Tiny => "convnext_tiny.fb_in1k", Self::Small => "convnext_small.fb_in1k", Self::Base => "convnext_base.fb_in1k", Self::Large => "convnext_large.fb_in1k", Self::AttoV2 => "convnextv2_atto.fcmae_ft_in1k", Self::FemtoV2 => "convnextv2_femto.fcmae_ft_in1k", Self::PicoV2 => "convnextv2_pico.fcmae_ft_in1k", Self::NanoV2 => "convnextv2_nano.fcmae_ft_in1k", Self::TinyV2 => "convnextv2_tiny.fcmae_ft_in1k", Self::BaseV2 => "convnextv2_base.fcmae_ft_in1k", Self::LargeV2 => "convnextv2_large.fcmae_ft_in1k", Self::XLarge => "convnext_xlarge.fb_in22k_ft_in1k", Self::Huge => "convnextv2_huge.fcmae_ft_in1k", }; format!("timm/{name}") } fn config(&self) -> convnext::Config { match self { Self::Atto | Self::AttoV2 => convnext::Config::atto(), Self::Femto | Self::FemtoV2 => convnext::Config::femto(), Self::Pico | Self::PicoV2 => convnext::Config::pico(), Self::Nano | Self::NanoV2 => convnext::Config::nano(), Self::Tiny | Self::TinyV2 => convnext::Config::tiny(), Self::Small => convnext::Config::small(), Self::Base | Self::BaseV2 => convnext::Config::base(), Self::Large | Self::LargeV2 => convnext::Config::large(), Self::XLarge => convnext::Config::xlarge(), Self::Huge => convnext::Config::huge(), } } } #[derive(Parser)] struct Args { #[arg(long)] model: Option<String>, #[arg(long)] image: String, /// Run on CPU rather than on GPU. #[arg(long)] cpu: bool, #[arg(value_enum, long, default_value_t=Which::Tiny)] which: Which, } pub fn main() -> anyhow::Result<()> { let args = Args::parse(); let device = candle_examples::device(args.cpu)?; let image = candle_examples::imagenet::load_image224(args.image)?.to_device(&device)?; println!("loaded image {image:?}"); let model_file = match args.model { None => { let model_name = args.which.model_filename(); let api = hf_hub::api::sync::Api::new()?; let api = api.model(model_name); api.get("model.safetensors")? } Some(model) => model.into(), }; let vb = unsafe { VarBuilder::from_mmaped_safetensors(&[model_file], DType::F32, &device)? }; let model = convnext::convnext(&args.which.config(), 1000, vb)?; println!("model built"); let logits = model.forward(&image.unsqueeze(0)?)?; let prs = candle_nn::ops::softmax(&logits, D::Minus1)? .i(0)? .to_vec1::<f32>()?; let mut prs = prs.iter().enumerate().collect::<Vec<_>>(); prs.sort_by(|(_, p1), (_, p2)| p2.total_cmp(p1)); for &(category_idx, pr) in prs.iter().take(5) { println!( "{:24}: {:.2}%", candle_examples::imagenet::CLASSES[category_idx], 100. * pr ); } Ok(()) }
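
// Editor's note (illustrative usage sketch, not part of the upstream file; the image path
// and model choice are placeholders): with the `Args` flags defined above, this example is
// typically launched with something like
//
//     cargo run --example convnext --release -- --which tiny --image path/to/image.jpg
//
// which fetches the timm ConvNeXt weights from the Hugging Face Hub (unless --model is
// given) and prints the top-5 ImageNet classes for the image.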
candle/candle-examples/examples/convnext/main.rs/0
{ "file_path": "candle/candle-examples/examples/convnext/main.rs", "repo_id": "candle", "token_count": 1926 }
23
#[cfg(feature = "mkl")] extern crate intel_mkl_src; #[cfg(feature = "accelerate")] extern crate accelerate_src; use clap::{Parser, ValueEnum}; use candle::{DType, IndexOp, D}; use candle_nn::{Module, VarBuilder}; use candle_transformers::models::efficientvit; #[derive(Clone, Copy, Debug, ValueEnum)] enum Which { M0, M1, M2, M3, M4, M5, } impl Which { fn model_filename(&self) -> String { let name = match self { Self::M0 => "m0", Self::M1 => "m1", Self::M2 => "m2", Self::M3 => "m3", Self::M4 => "m4", Self::M5 => "m5", }; format!("timm/efficientvit_{}.r224_in1k", name) } fn config(&self) -> efficientvit::Config { match self { Self::M0 => efficientvit::Config::m0(), Self::M1 => efficientvit::Config::m1(), Self::M2 => efficientvit::Config::m2(), Self::M3 => efficientvit::Config::m3(), Self::M4 => efficientvit::Config::m4(), Self::M5 => efficientvit::Config::m5(), } } } #[derive(Parser)] struct Args { #[arg(long)] model: Option<String>, #[arg(long)] image: String, /// Run on CPU rather than on GPU. #[arg(long)] cpu: bool, #[arg(value_enum, long, default_value_t=Which::M0)] which: Which, } pub fn main() -> anyhow::Result<()> { let args = Args::parse(); let device = candle_examples::device(args.cpu)?; let image = candle_examples::imagenet::load_image224(args.image)?.to_device(&device)?; println!("loaded image {image:?}"); let model_file = match args.model { None => { let model_name = args.which.model_filename(); let api = hf_hub::api::sync::Api::new()?; let api = api.model(model_name); api.get("model.safetensors")? } Some(model) => model.into(), }; let vb = unsafe { VarBuilder::from_mmaped_safetensors(&[model_file], DType::F32, &device)? }; let model = efficientvit::efficientvit(&args.which.config(), 1000, vb)?; println!("model built"); let logits = model.forward(&image.unsqueeze(0)?)?; let prs = candle_nn::ops::softmax(&logits, D::Minus1)? .i(0)? .to_vec1::<f32>()?; let mut prs = prs.iter().enumerate().collect::<Vec<_>>(); prs.sort_by(|(_, p1), (_, p2)| p2.total_cmp(p1)); for &(category_idx, pr) in prs.iter().take(5) { println!( "{:24}: {:.2}%", candle_examples::imagenet::CLASSES[category_idx], 100. * pr ); } Ok(()) }
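
// Editor's note (illustrative usage sketch, not part of the upstream file; the image path
// is a placeholder): analogous to the other vision examples, this one can be launched with
//
//     cargo run --example efficientvit --release -- --which m0 --image path/to/image.jpg
//
// and prints the top-5 ImageNet classes predicted for the image.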
candle/candle-examples/examples/efficientvit/main.rs/0
{ "file_path": "candle/candle-examples/examples/efficientvit/main.rs", "repo_id": "candle", "token_count": 1278 }
24
#[cfg(feature = "mkl")] extern crate intel_mkl_src; #[cfg(feature = "accelerate")] extern crate accelerate_src; use anyhow::{Error as E, Result}; use clap::Parser; use candle_transformers::models::gemma::{Config as Config1, Model as Model1}; use candle_transformers::models::gemma2::{Config as Config2, Model as Model2}; use candle::{DType, Device, Tensor}; use candle_examples::token_output_stream::TokenOutputStream; use candle_nn::VarBuilder; use candle_transformers::generation::LogitsProcessor; use hf_hub::{api::sync::Api, Repo, RepoType}; use tokenizers::Tokenizer; #[derive(Clone, Debug, Copy, PartialEq, Eq, clap::ValueEnum)] enum Which { #[value(name = "2b")] Base2B, #[value(name = "7b")] Base7B, #[value(name = "2b-it")] Instruct2B, #[value(name = "7b-it")] Instruct7B, #[value(name = "1.1-2b-it")] InstructV1_1_2B, #[value(name = "1.1-7b-it")] InstructV1_1_7B, #[value(name = "code-2b")] CodeBase2B, #[value(name = "code-7b")] CodeBase7B, #[value(name = "code-2b-it")] CodeInstruct2B, #[value(name = "code-7b-it")] CodeInstruct7B, #[value(name = "2-2b")] BaseV2_2B, #[value(name = "2-2b-it")] InstructV2_2B, #[value(name = "2-9b")] BaseV2_9B, #[value(name = "2-9b-it")] InstructV2_9B, } impl Which { fn is_v1(&self) -> bool { match self { Self::Base2B | Self::Base7B | Self::Instruct2B | Self::Instruct7B | Self::InstructV1_1_2B | Self::InstructV1_1_7B | Self::CodeBase2B | Self::CodeBase7B | Self::CodeInstruct2B | Self::CodeInstruct7B => true, Self::BaseV2_2B | Self::InstructV2_2B | Self::BaseV2_9B | Self::InstructV2_9B => false, } } } enum Model { V1(Model1), V2(Model2), } impl Model { fn forward(&mut self, input_ids: &Tensor, pos: usize) -> candle::Result<Tensor> { match self { Self::V1(m) => m.forward(input_ids, pos), Self::V2(m) => m.forward(input_ids, pos), } } } struct TextGeneration { model: Model, device: Device, tokenizer: TokenOutputStream, logits_processor: LogitsProcessor, repeat_penalty: f32, repeat_last_n: usize, } impl TextGeneration { #[allow(clippy::too_many_arguments)] fn new( model: Model, tokenizer: Tokenizer, seed: u64, temp: Option<f64>, top_p: Option<f64>, repeat_penalty: f32, repeat_last_n: usize, device: &Device, ) -> Self { let logits_processor = LogitsProcessor::new(seed, temp, top_p); Self { model, tokenizer: TokenOutputStream::new(tokenizer), logits_processor, repeat_penalty, repeat_last_n, device: device.clone(), } } fn run(&mut self, prompt: &str, sample_len: usize) -> Result<()> { use std::io::Write; self.tokenizer.clear(); let mut tokens = self .tokenizer .tokenizer() .encode(prompt, true) .map_err(E::msg)? .get_ids() .to_vec(); for &t in tokens.iter() { if let Some(t) = self.tokenizer.next_token(t)? { print!("{t}") } } std::io::stdout().flush()?; let mut generated_tokens = 0usize; let eos_token = match self.tokenizer.get_token("<eos>") { Some(token) => token, None => anyhow::bail!("cannot find the <eos> token"), }; let start_gen = std::time::Instant::now(); for index in 0..sample_len { let context_size = if index > 0 { 1 } else { tokens.len() }; let start_pos = tokens.len().saturating_sub(context_size); let ctxt = &tokens[start_pos..]; let input = Tensor::new(ctxt, &self.device)?.unsqueeze(0)?; let logits = self.model.forward(&input, start_pos)?; let logits = logits.squeeze(0)?.squeeze(0)?.to_dtype(DType::F32)?; let logits = if self.repeat_penalty == 1. { logits } else { let start_at = tokens.len().saturating_sub(self.repeat_last_n); candle_transformers::utils::apply_repeat_penalty( &logits, self.repeat_penalty, &tokens[start_at..], )? 
}; let next_token = self.logits_processor.sample(&logits)?; tokens.push(next_token); generated_tokens += 1; if next_token == eos_token { break; } if let Some(t) = self.tokenizer.next_token(next_token)? { print!("{t}"); std::io::stdout().flush()?; } } let dt = start_gen.elapsed(); if let Some(rest) = self.tokenizer.decode_rest().map_err(E::msg)? { print!("{rest}"); } std::io::stdout().flush()?; println!( "\n{generated_tokens} tokens generated ({:.2} token/s)", generated_tokens as f64 / dt.as_secs_f64(), ); Ok(()) } } #[derive(Parser, Debug)] #[command(author, version, about, long_about = None)] struct Args { /// Run on CPU rather than on GPU. #[arg(long)] cpu: bool, /// Enable tracing (generates a trace-timestamp.json file). #[arg(long)] tracing: bool, #[arg(long)] prompt: String, /// The temperature used to generate samples. #[arg(long)] temperature: Option<f64>, /// Nucleus sampling probability cutoff. #[arg(long)] top_p: Option<f64>, /// The seed to use when generating random samples. #[arg(long, default_value_t = 299792458)] seed: u64, /// The length of the sample to generate (in tokens). #[arg(long, short = 'n', default_value_t = 10000)] sample_len: usize, #[arg(long)] model_id: Option<String>, #[arg(long, default_value = "main")] revision: String, #[arg(long)] tokenizer_file: Option<String>, #[arg(long)] config_file: Option<String>, #[arg(long)] weight_files: Option<String>, /// Penalty to be applied for repeating tokens, 1. means no penalty. #[arg(long, default_value_t = 1.1)] repeat_penalty: f32, /// The context size to consider for the repeat penalty. #[arg(long, default_value_t = 64)] repeat_last_n: usize, /// The model to use. #[arg(long, default_value = "2-2b")] which: Which, #[arg(long)] use_flash_attn: bool, } fn main() -> Result<()> { use tracing_chrome::ChromeLayerBuilder; use tracing_subscriber::prelude::*; let args = Args::parse(); let _guard = if args.tracing { let (chrome_layer, guard) = ChromeLayerBuilder::new().build(); tracing_subscriber::registry().with(chrome_layer).init(); Some(guard) } else { None }; println!( "avx: {}, neon: {}, simd128: {}, f16c: {}", candle::utils::with_avx(), candle::utils::with_neon(), candle::utils::with_simd128(), candle::utils::with_f16c() ); println!( "temp: {:.2} repeat-penalty: {:.2} repeat-last-n: {}", args.temperature.unwrap_or(0.), args.repeat_penalty, args.repeat_last_n ); let start = std::time::Instant::now(); let api = Api::new()?; let model_id = match &args.model_id { Some(model_id) => model_id.to_string(), None => match args.which { Which::InstructV1_1_2B => "google/gemma-1.1-2b-it".to_string(), Which::InstructV1_1_7B => "google/gemma-1.1-7b-it".to_string(), Which::Base2B => "google/gemma-2b".to_string(), Which::Base7B => "google/gemma-7b".to_string(), Which::Instruct2B => "google/gemma-2b-it".to_string(), Which::Instruct7B => "google/gemma-7b-it".to_string(), Which::CodeBase2B => "google/codegemma-2b".to_string(), Which::CodeBase7B => "google/codegemma-7b".to_string(), Which::CodeInstruct2B => "google/codegemma-2b-it".to_string(), Which::CodeInstruct7B => "google/codegemma-7b-it".to_string(), Which::BaseV2_2B => "google/gemma-2-2b".to_string(), Which::InstructV2_2B => "google/gemma-2-2b-it".to_string(), Which::BaseV2_9B => "google/gemma-2-9b".to_string(), Which::InstructV2_9B => "google/gemma-2-9b-it".to_string(), }, }; let repo = api.repo(Repo::with_revision( model_id, RepoType::Model, args.revision, )); let tokenizer_filename = match args.tokenizer_file { Some(file) => std::path::PathBuf::from(file), None => 
repo.get("tokenizer.json")?, }; let config_filename = match args.config_file { Some(file) => std::path::PathBuf::from(file), None => repo.get("config.json")?, }; let filenames = match args.weight_files { Some(files) => files .split(',') .map(std::path::PathBuf::from) .collect::<Vec<_>>(), None => candle_examples::hub_load_safetensors(&repo, "model.safetensors.index.json")?, }; println!("retrieved the files in {:?}", start.elapsed()); let tokenizer = Tokenizer::from_file(tokenizer_filename).map_err(E::msg)?; let start = std::time::Instant::now(); let device = candle_examples::device(args.cpu)?; let dtype = if device.is_cuda() { DType::BF16 } else { DType::F32 }; let vb = unsafe { VarBuilder::from_mmaped_safetensors(&filenames, dtype, &device)? }; let model = if args.which.is_v1() { let config: Config1 = serde_json::from_reader(std::fs::File::open(config_filename)?)?; let model = Model1::new(args.use_flash_attn, &config, vb)?; Model::V1(model) } else { let config: Config2 = serde_json::from_reader(std::fs::File::open(config_filename)?)?; let model = Model2::new(args.use_flash_attn, &config, vb)?; Model::V2(model) }; println!("loaded the model in {:?}", start.elapsed()); let mut pipeline = TextGeneration::new( model, tokenizer, args.seed, args.temperature, args.top_p, args.repeat_penalty, args.repeat_last_n, &device, ); pipeline.run(&args.prompt, args.sample_len)?; Ok(()) }
candle/candle-examples/examples/gemma/main.rs/0
{ "file_path": "candle/candle-examples/examples/gemma/main.rs", "repo_id": "candle", "token_count": 5150 }
25
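Two details of the generation loop above are worth calling out: after the first step only the most recent token is fed back in (`context_size` is 1), relying on the model's internal cache, and the logits are adjusted with a repeat penalty before sampling. A small sketch of just the penalty step, with made-up logits and token ids:

```rust
use candle::{Device, Result, Tensor};

fn main() -> Result<()> {
    // Made-up logits over a six-token vocabulary and a short generation history.
    let logits = Tensor::new(&[1.0f32, 2.0, 0.5, 3.0, 1.5, 0.1], &Device::Cpu)?;
    let tokens: Vec<u32> = vec![3, 1, 3];
    let repeat_penalty = 1.1f32;
    let repeat_last_n = 64usize;
    // Only the last `repeat_last_n` tokens are penalized, exactly as in the loop above.
    let start_at = tokens.len().saturating_sub(repeat_last_n);
    let logits = candle_transformers::utils::apply_repeat_penalty(
        &logits,
        repeat_penalty,
        &tokens[start_at..],
    )?;
    // Token 3 appeared twice in the history, so its logit is now reduced.
    println!("{:?}", logits.to_vec1::<f32>()?);
    Ok(())
}
```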
use std::cmp::min; use candle::{bail, DType, Device, Result, Tensor}; use candle_transformers::models::llava::{ config::{HFPreProcessorConfig, LLaVAConfig}, utils::select_best_resolution, }; use hf_hub::api::sync::Api; use image::{imageops::overlay, DynamicImage, GenericImageView, Rgb, RgbImage}; use serde::{Deserialize, Serialize}; //This struct is mainly for LLaVA aplications, hence it's not completely compatible with python transformer CLIPImageProcessor few several preprocess that LLaVA used, including "openai/clip-vit-large-patch14-336" and "openai/clip-vit-large-patch14". #[derive(Serialize, Deserialize, Debug)] pub struct ImageProcessor { #[serde(default = "default_size")] pub size: u32, // this is not the same as python transformer #[serde(default = "default_do_resize")] pub do_resize: bool, //resample: u32 // 3 for PIL bicubic, equivalent to rust CatmullRom. Hence below we use CatmullRom #[serde(default = "default_do_center_crop")] pub do_center_crop: bool, #[serde(default = "default_crop_size")] pub crop_size: u32, // this is not the same as python transformer #[serde(default = "default_do_rescale")] pub do_rescale: bool, #[serde(default = "default_rescale_factor")] pub rescale_factor: f32, #[serde(default = "default_do_normalize")] pub do_normalize: bool, #[serde(default = "default_image_mean")] pub image_mean: Vec<f32>, #[serde(default = "default_image_std")] pub image_std: Vec<f32>, } fn default_size() -> u32 { 224 } fn default_do_resize() -> bool { true } fn default_do_center_crop() -> bool { true } fn default_crop_size() -> u32 { 224 } fn default_do_rescale() -> bool { true } fn default_rescale_factor() -> f32 { 1.0 / 255.0 } fn default_do_normalize() -> bool { true } fn default_image_mean() -> Vec<f32> { vec![0.48145466, 0.4578275, 0.40821073] } fn default_image_std() -> Vec<f32> { vec![0.26862954, 0.2613026, 0.2757771] } impl ImageProcessor { pub fn from_pretrained(clip_id: &str) -> Result<Self> { let api = Api::new().map_err(|e| candle::Error::Msg(e.to_string()))?; let api = api.model(clip_id.to_string()); let config_filename = api .get("preprocessor_config.json") .map_err(|e| candle::Error::Msg(e.to_string()))?; let image_processor = serde_json::from_slice(&std::fs::read(config_filename).map_err(candle::Error::Io)?) 
.map_err(|e| candle::Error::Msg(e.to_string()))?; Ok(image_processor) } pub fn from_hf_preprocessor_config(hf_preprocessor_config: &HFPreProcessorConfig) -> Self { Self { size: hf_preprocessor_config.size["shortest_edge"] as u32, do_resize: hf_preprocessor_config.do_resize, do_center_crop: hf_preprocessor_config.do_center_crop, crop_size: hf_preprocessor_config.crop_size["height"] as u32, do_rescale: hf_preprocessor_config.do_rescale, rescale_factor: hf_preprocessor_config.rescale_factor, do_normalize: hf_preprocessor_config.do_normalize, image_mean: hf_preprocessor_config.image_mean.clone(), image_std: hf_preprocessor_config.image_std.clone(), } } ///shortest edge to self.resize, other edge is resized to maintain aspect ratio pub fn resize(&self, image: &DynamicImage) -> DynamicImage { let (width, height) = image.dimensions(); let size = self.size; if width == size && height == size { image.clone() } else { let (new_width, new_height) = if width < height { ( size, (((size * height) as f32) / width as f32).ceil() as u32, ) } else { ( (((size * width) as f32) / height as f32).ceil() as u32, size, ) }; image.resize( new_width, new_height, image::imageops::FilterType::CatmullRom, ) } } pub fn center_crop(&self, image: &DynamicImage) -> DynamicImage { let (width, height) = image.dimensions(); let crop_size = self.crop_size; let (left, top) = calculate_middle((width, height), (crop_size, crop_size)); image.crop_imm(left, top, crop_size, crop_size) } pub fn to_tensor(&self, image: &DynamicImage) -> Result<Tensor> { let img = image.to_rgb8().into_raw(); let (width, height) = image.dimensions(); Tensor::from_vec(img, (height as usize, width as usize, 3), &Device::Cpu)? .to_dtype(DType::F32) // only for internal compute } pub fn rescale(&self, tensor: &Tensor) -> Result<Tensor> { let rescale_factor = self.rescale_factor as f64; tensor.affine(rescale_factor, 0.0) } pub fn normalize(&self, tensor: &Tensor) -> Result<Tensor> { let image_mean = self.image_mean.clone(); let image_std = self.image_std.clone(); let mean = Tensor::from_vec(image_mean, (3,), &Device::Cpu)?; let std = Tensor::from_vec(image_std, (3,), &Device::Cpu)?; tensor.broadcast_sub(&mean)?.broadcast_div(&std) } pub fn to_channel_dimension_format(&self, tensor: &Tensor) -> Result<Tensor> { tensor.permute((2, 0, 1)) } pub fn preprocess(&self, image: &DynamicImage) -> Result<Tensor> { let image = if self.do_resize { self.resize(image) } else { image.clone() }; let image = if self.do_center_crop { self.center_crop(&image) } else { image }; let tensor = self.to_tensor(&image)?; let tensor = if self.do_rescale { self.rescale(&tensor)? } else { tensor }; let tensor = if self.do_normalize { self.normalize(&tensor)? 
} else { tensor }; self.to_channel_dimension_format(&tensor) } } pub fn calculate_middle(image_size: (u32, u32), center_size: (u32, u32)) -> (u32, u32) { let (width, height) = image_size; let (center_width, center_height) = center_size; let left = if width <= center_width { 0 } else { ((width as f32 - center_width as f32) / 2.0).ceil() as u32 }; let top = if height <= center_height { 0 } else { ((height as f32 - center_height as f32) / 2.0).ceil() as u32 }; (left, top) } pub fn process_image( image: &DynamicImage, processor: &ImageProcessor, llava_config: &LLaVAConfig, ) -> candle::Result<Tensor> { if llava_config.image_aspect_ratio == *"square" { processor.preprocess(image)?.unsqueeze(0) } else if llava_config.image_aspect_ratio == *"anyres" { process_anyres_image(image, processor, &llava_config.image_grid_pinpoints) } else if llava_config.image_aspect_ratio == *"pad" { process_pad_image(image, processor) } else { bail!("Invalid image aspect ratio") } } fn process_pad_image(image: &DynamicImage, processor: &ImageProcessor) -> Result<Tensor> { let mean_color = processor .image_mean .iter() .map(|x| ((*x) * 255.0) as u8) .collect::<Vec<u8>>(); let mean_color = Rgb::from([mean_color[0], mean_color[1], mean_color[2]]); let image_padded = expand2square(image, mean_color); processor.preprocess(&image_padded) } fn process_anyres_image( image: &DynamicImage, processor: &ImageProcessor, grid_pinpoints: &[(u32, u32)], ) -> Result<Tensor> { let original_size = image.dimensions(); let best_resolution = select_best_resolution(original_size, grid_pinpoints); let image_padded = resize_and_pad_image(image, best_resolution); let image_original_resize = image.resize_exact( processor.size, processor.size, image::imageops::FilterType::CatmullRom, ); let mut patches = vec![image_original_resize]; for patch in divide_to_patches(&image_padded, processor.crop_size) { patches.push(patch); } let tensors = patches .iter() .map(|patch| processor.preprocess(patch)) .collect::<Result<Vec<Tensor>>>()?; Tensor::stack(&tensors, 0) } fn expand2square(image: &DynamicImage, background_color: Rgb<u8>) -> DynamicImage { let (width, height) = image.dimensions(); match width.cmp(&height) { std::cmp::Ordering::Less => { let mut new_image = DynamicImage::from(RgbImage::from_pixel(height, height, background_color)); overlay(&mut new_image, image, ((height - width) / 2) as i64, 0); new_image } std::cmp::Ordering::Equal => image.clone(), std::cmp::Ordering::Greater => { let mut new_image = DynamicImage::from(RgbImage::from_pixel(width, width, background_color)); overlay(&mut new_image, image, 0, ((width - height) / 2) as i64); new_image } } } fn resize_and_pad_image(image: &DynamicImage, target_resolution: (u32, u32)) -> DynamicImage { let (original_width, original_height) = image.dimensions(); let original_width_f = original_width as f32; let original_height_f = original_height as f32; let (target_width, target_height) = target_resolution; let target_width_f = target_width as f32; let target_height_f = target_height as f32; let scale_w = target_width_f / original_width_f; let scale_h = target_height_f / original_height_f; let (new_width, new_height) = if scale_w < scale_h { ( target_width, min((original_height_f * scale_w).ceil() as u32, target_height), ) } else { ( min((original_width_f * scale_h).ceil() as u32, target_width), target_height, ) }; let resized_image = image.resize_exact( new_width, new_height, image::imageops::FilterType::CatmullRom, ); let mut new_image = DynamicImage::new_rgb8(target_width, target_height); let 
(paste_x, paste_y) = calculate_middle((target_width, target_height), (new_width, new_height)); overlay( &mut new_image, &resized_image, paste_x.into(), paste_y.into(), ); new_image } fn divide_to_patches(image: &DynamicImage, patch_size: u32) -> Vec<DynamicImage> { let (width, height) = image.dimensions(); let mut patches = Vec::new(); for y in (0..height).step_by(patch_size as usize) { for x in (0..width).step_by(patch_size as usize) { let patch = image.crop_imm(x, y, patch_size, patch_size); patches.push(patch); } } patches }
candle/candle-examples/examples/llava/image_processor.rs/0
{ "file_path": "candle/candle-examples/examples/llava/image_processor.rs", "repo_id": "candle", "token_count": 4904 }
26
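The `preprocess` pipeline above boils down to resize, center-crop, rescale by 1/255, and per-channel normalization with the CLIP mean and std. A sketch of the last two steps applied to a single made-up RGB pixel, using the same tensor ops as the file:

```rust
use candle::{Device, Result, Tensor};

fn main() -> Result<()> {
    // One made-up RGB pixel in 0..=255, processed the way `preprocess` does it:
    // rescale by 1/255, then subtract the CLIP mean and divide by the CLIP std.
    let pixel = Tensor::new(&[128f32, 64., 32.], &Device::Cpu)?;
    let rescaled = pixel.affine(1.0 / 255.0, 0.0)?;
    let mean = Tensor::new(&[0.48145466f32, 0.4578275, 0.40821073], &Device::Cpu)?;
    let std = Tensor::new(&[0.26862954f32, 0.2613026, 0.2757771], &Device::Cpu)?;
    let normalized = rescaled.broadcast_sub(&mean)?.broadcast_div(&std)?;
    println!("{:?}", normalized.to_vec1::<f32>()?);
    Ok(())
}
```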
use anyhow::Result; use candle::{Device, Tensor}; use clap::{Parser, Subcommand}; #[derive(Subcommand, Debug, Clone)] enum Command { Print { #[arg(long)] file: String, }, SimpleEval { #[arg(long)] file: String, }, } #[derive(Parser, Debug)] #[command(author, version, about, long_about = None)] pub struct Args { #[command(subcommand)] command: Command, } pub fn main() -> Result<()> { let args = Args::parse(); match args.command { Command::Print { file } => { let model = candle_onnx::read_file(file)?; println!("{model:?}"); let graph = model.graph.unwrap(); for node in graph.node.iter() { println!("{node:?}"); } } Command::SimpleEval { file } => { let model = candle_onnx::read_file(file)?; let graph = model.graph.as_ref().unwrap(); let constants: std::collections::HashSet<_> = graph.initializer.iter().map(|i| i.name.as_str()).collect(); let mut inputs = std::collections::HashMap::new(); for input in graph.input.iter() { use candle_onnx::onnx::tensor_proto::DataType; if constants.contains(input.name.as_str()) { continue; } let type_ = input.r#type.as_ref().expect("no type for input"); let type_ = type_.value.as_ref().expect("no type.value for input"); let value = match type_ { candle_onnx::onnx::type_proto::Value::TensorType(tt) => { let dt = match DataType::try_from(tt.elem_type) { Ok(dt) => match candle_onnx::dtype(dt) { Some(dt) => dt, None => { anyhow::bail!( "unsupported 'value' data-type {dt:?} for {}", input.name ) } }, type_ => anyhow::bail!("unsupported input type {type_:?}"), }; let shape = tt.shape.as_ref().expect("no tensortype.shape for input"); let dims = shape .dim .iter() .map(|dim| match dim.value.as_ref().expect("no dim value") { candle_onnx::onnx::tensor_shape_proto::dimension::Value::DimValue(v) => Ok(*v as usize), candle_onnx::onnx::tensor_shape_proto::dimension::Value::DimParam(_) => Ok(42), }) .collect::<Result<Vec<usize>>>()?; Tensor::zeros(dims, dt, &Device::Cpu)? } type_ => anyhow::bail!("unsupported input type {type_:?}"), }; println!("input {}: {value:?}", input.name); inputs.insert(input.name.clone(), value); } let outputs = candle_onnx::simple_eval(&model, inputs)?; for (name, value) in outputs.iter() { println!("output {name}: {value:?}") } } } Ok(()) }
candle/candle-examples/examples/onnx_basics.rs/0
{ "file_path": "candle/candle-examples/examples/onnx_basics.rs", "repo_id": "candle", "token_count": 2016 }
27
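For a graph whose inputs are known up front, the same `read_file`/`simple_eval` pair can be driven directly without inspecting the graph first. A sketch assuming a hypothetical `model.onnx` with a single float input named `input` of shape (1, 3); both the file name and the input name are placeholders:

```rust
use anyhow::Result;
use candle::{DType, Device, Tensor};
use std::collections::HashMap;

fn main() -> Result<()> {
    // "model.onnx" is a placeholder path for any small ONNX graph.
    let model = candle_onnx::read_file("model.onnx")?;
    let mut inputs = HashMap::new();
    inputs.insert(
        "input".to_string(),
        Tensor::zeros((1, 3), DType::F32, &Device::Cpu)?,
    );
    let outputs = candle_onnx::simple_eval(&model, inputs)?;
    for (name, value) in outputs.iter() {
        println!("output {name}: {value:?}");
    }
    Ok(())
}
```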
# candle-recurrent-gemma

This example uses the 2B base version of the RecurrentGemma model; see the
[Hugging Face model card](https://huggingface.co/google/recurrentgemma-2b) for details.

```bash
cargo run --features cuda -r --example recurrent-gemma -- \
  --prompt "Write me a poem about Machine Learning."
```
candle/candle-examples/examples/recurrent-gemma/README.md/0
{ "file_path": "candle/candle-examples/examples/recurrent-gemma/README.md", "repo_id": "candle", "token_count": 101 }
28
# candle-stable-lm

StableLM-3B-4E1T is a 3 billion parameter decoder-only language model
pre-trained on 1 trillion tokens of diverse English and code datasets for 4
epochs. See the [HuggingFace Hub Model Card](https://huggingface.co/stabilityai/stablelm-3b-4e1t).

Note that this model is gated, so you will have to request access on the Hub in
order to be able to use it. Other available models are Stable-Code-3B,
StableLM-2 and the Zephyr variants.

## Running an example

```bash
$ cargo run --example stable-lm --release --features cuda -- --prompt 'What is the most efficient programming language in use?' --sample-len 150
avx: true, neon: false, simd128: false, f16c: true
temp: 0.00 repeat-penalty: 1.10 repeat-last-n: 64
retrieved the files in 126.593µs
loaded the model in 3.474148965s
What is the most efficient programming language in use?
The answer to this question depends on what you mean by "efficient". If you're talking about speed, then C++ and Java are probably your best bets. But if you're talking about ease of development, then Python is probably the way to go.
Python is a high-level, interpreted language that is easy to learn and use. It has a large community of developers who are always working on new features and improvements.
C++ is a low-level, compiled language that can be used for both desktop applications and web development. It's more difficult to learn than Python but offers greater control over the code.
Java is another high-level language that is popular with programmers because it runs on many different platforms (including Android phones
150 tokens generated (37.61 token/s)
```
candle/candle-examples/examples/stable-lm/README.md/0
{ "file_path": "candle/candle-examples/examples/stable-lm/README.md", "repo_id": "candle", "token_count": 432 }
29
# candle-whisper: speech recognition

An implementation of [OpenAI Whisper](https://github.com/openai/whisper) using
candle. Whisper is a general purpose speech recognition model; it can be used to
convert audio files (in the `.wav` format) to text. Supported features include
language detection as well as multilingual speech recognition.

## Running an example

If no audio file is passed as input, a [sample file](https://huggingface.co/datasets/Narsil/candle-examples/resolve/main/samples_jfk.wav)
is automatically downloaded from the hub.

```bash
cargo run --example whisper --release

> No audio file submitted: Downloading https://huggingface.co/datasets/Narsil/candle_demo/blob/main/samples_jfk.wav
> loaded wav data: Header { audio_format: 1, channel_count: 1, sampling_rate: 16000, bytes_per_second: 32000, bytes_per_sample: 2, bits_per_sample: 16 }
> pcm data loaded 176000
> loaded mel: [1, 80, 3000]
> 0.0s -- 30.0s: And so my fellow Americans ask not what your country can do for you ask what you can do for your country
```

In order to use the multilingual mode, specify a multilingual model via the
`--model` flag, see the details below.

## Command line flags

- `--input`: the audio file to be converted to text, in wav format.
- `--language`: force the language to some specific value rather than being
  detected, e.g. `en`.
- `--task`: the task to be performed, can be `transcribe` (return the text data
  in the original language) or `translate` (translate the text to English).
- `--timestamps`: enable the timestamp mode where some timestamps are reported
  for each recognized audio extract.
- `--model`: the model to be used. Models that do not end with `-en` are
  multilingual models, the other ones are English-only models. The supported
  OpenAI Whisper models are `tiny`, `tiny.en`, `base`, `base.en`, `small`,
  `small.en`, `medium`, `medium.en`, `large`, `large-v2` and `large-v3`. The
  supported Distil-Whisper models are `distil-medium.en`, `distil-large-v2` and
  `distil-large-v3`.
candle/candle-examples/examples/whisper/README.md/0
{ "file_path": "candle/candle-examples/examples/whisper/README.md", "repo_id": "candle", "token_count": 620 }
30
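Combining the flags documented above, a possible invocation for a multilingual model on a local recording (the audio path is a placeholder; any 16 kHz mono wav should fit the example):

```bash
cargo run --example whisper --release -- \
  --model medium --input audio.wav --language fr --task transcribe --timestamps
```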
// Build script to run nvcc and generate the C glue code for launching the flash-attention kernel. // The cuda build time is very long so one can set the CANDLE_FLASH_ATTN_BUILD_DIR environment // variable in order to cache the compiled artifacts and avoid recompiling too often. use anyhow::{Context, Result}; use std::path::PathBuf; const KERNEL_FILES: [&str; 33] = [ "kernels/flash_api.cu", "kernels/flash_fwd_hdim128_fp16_sm80.cu", "kernels/flash_fwd_hdim160_fp16_sm80.cu", "kernels/flash_fwd_hdim192_fp16_sm80.cu", "kernels/flash_fwd_hdim224_fp16_sm80.cu", "kernels/flash_fwd_hdim256_fp16_sm80.cu", "kernels/flash_fwd_hdim32_fp16_sm80.cu", "kernels/flash_fwd_hdim64_fp16_sm80.cu", "kernels/flash_fwd_hdim96_fp16_sm80.cu", "kernels/flash_fwd_hdim128_bf16_sm80.cu", "kernels/flash_fwd_hdim160_bf16_sm80.cu", "kernels/flash_fwd_hdim192_bf16_sm80.cu", "kernels/flash_fwd_hdim224_bf16_sm80.cu", "kernels/flash_fwd_hdim256_bf16_sm80.cu", "kernels/flash_fwd_hdim32_bf16_sm80.cu", "kernels/flash_fwd_hdim64_bf16_sm80.cu", "kernels/flash_fwd_hdim96_bf16_sm80.cu", "kernels/flash_fwd_hdim128_fp16_causal_sm80.cu", "kernels/flash_fwd_hdim160_fp16_causal_sm80.cu", "kernels/flash_fwd_hdim192_fp16_causal_sm80.cu", "kernels/flash_fwd_hdim224_fp16_causal_sm80.cu", "kernels/flash_fwd_hdim256_fp16_causal_sm80.cu", "kernels/flash_fwd_hdim32_fp16_causal_sm80.cu", "kernels/flash_fwd_hdim64_fp16_causal_sm80.cu", "kernels/flash_fwd_hdim96_fp16_causal_sm80.cu", "kernels/flash_fwd_hdim128_bf16_causal_sm80.cu", "kernels/flash_fwd_hdim160_bf16_causal_sm80.cu", "kernels/flash_fwd_hdim192_bf16_causal_sm80.cu", "kernels/flash_fwd_hdim224_bf16_causal_sm80.cu", "kernels/flash_fwd_hdim256_bf16_causal_sm80.cu", "kernels/flash_fwd_hdim32_bf16_causal_sm80.cu", "kernels/flash_fwd_hdim64_bf16_causal_sm80.cu", "kernels/flash_fwd_hdim96_bf16_causal_sm80.cu", ]; fn main() -> Result<()> { println!("cargo:rerun-if-changed=build.rs"); for kernel_file in KERNEL_FILES.iter() { println!("cargo:rerun-if-changed={kernel_file}"); } println!("cargo:rerun-if-changed=kernels/flash_fwd_kernel.h"); println!("cargo:rerun-if-changed=kernels/flash_fwd_launch_template.h"); println!("cargo:rerun-if-changed=kernels/flash.h"); println!("cargo:rerun-if-changed=kernels/philox.cuh"); println!("cargo:rerun-if-changed=kernels/softmax.h"); println!("cargo:rerun-if-changed=kernels/utils.h"); println!("cargo:rerun-if-changed=kernels/kernel_traits.h"); println!("cargo:rerun-if-changed=kernels/block_info.h"); println!("cargo:rerun-if-changed=kernels/static_switch.h"); let out_dir = PathBuf::from(std::env::var("OUT_DIR").context("OUT_DIR not set")?); let build_dir = match std::env::var("CANDLE_FLASH_ATTN_BUILD_DIR") { Err(_) => { #[allow(clippy::redundant_clone)] out_dir.clone() } Ok(build_dir) => { let path = PathBuf::from(build_dir); path.canonicalize().expect(&format!( "Directory doesn't exists: {} (the current directory is {})", &path.display(), std::env::current_dir()?.display() )) } }; let kernels = KERNEL_FILES.iter().collect(); let builder = bindgen_cuda::Builder::default() .kernel_paths(kernels) .out_dir(build_dir.clone()) .arg("-std=c++17") .arg("-O3") .arg("-U__CUDA_NO_HALF_OPERATORS__") .arg("-U__CUDA_NO_HALF_CONVERSIONS__") .arg("-U__CUDA_NO_HALF2_OPERATORS__") .arg("-U__CUDA_NO_BFLOAT16_CONVERSIONS__") .arg("-Icutlass/include") .arg("--expt-relaxed-constexpr") .arg("--expt-extended-lambda") .arg("--use_fast_math") .arg("--verbose"); let out_file = build_dir.join("libflashattention.a"); builder.build_lib(out_file); println!("cargo:rustc-link-search={}", 
build_dir.display()); println!("cargo:rustc-link-lib=flashattention"); println!("cargo:rustc-link-lib=dylib=cudart"); println!("cargo:rustc-link-lib=dylib=stdc++"); Ok(()) }
candle/candle-flash-attn/build.rs/0
{ "file_path": "candle/candle-flash-attn/build.rs", "repo_id": "candle", "token_count": 2052 }
31
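Since the CUDA compilation is slow, the `CANDLE_FLASH_ATTN_BUILD_DIR` variable mentioned at the top of the script can point at a persistent directory so that artifacts survive `cargo clean`. A possible invocation, assuming a checkout of the candle workspace with `nvcc` available; the directory must exist beforehand because the script canonicalizes the path and fails otherwise:

```bash
# Re-use compiled CUDA artifacts across builds of the candle-flash-attn crate.
mkdir -p /tmp/candle-flash-attn-build
CANDLE_FLASH_ATTN_BUILD_DIR=/tmp/candle-flash-attn-build \
  cargo build -p candle-flash-attn --release
```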
/****************************************************************************** * Copyright (c) 2024, Tri Dao. ******************************************************************************/ #pragma once #include <cmath> #include <cute/tensor.hpp> #include <cutlass/numeric_types.h> #include "philox.cuh" #include "utils.h" namespace flash { using namespace cute; //////////////////////////////////////////////////////////////////////////////////////////////////// template<bool zero_init=true, typename Engine0, typename Layout0, typename Engine1, typename Layout1, typename Operator> __device__ __forceinline__ void thread_reduce_(Tensor<Engine0, Layout0> const &tensor, Tensor<Engine1, Layout1> &summary, Operator &op) { static_assert(Layout0::rank == 2, "Only support 2D Tensor"); static_assert(Layout1::rank == 1, "Only support 1D Tensor"); CUTE_STATIC_ASSERT_V(size<0>(summary) == size<0>(tensor)); #pragma unroll for (int mi = 0; mi < size<0>(tensor); mi++) { summary(mi) = zero_init ? tensor(mi, 0) : op(summary(mi), tensor(mi, 0)); #pragma unroll for (int ni = 1; ni < size<1>(tensor); ni++) { summary(mi) = op(summary(mi), tensor(mi, ni)); } } } template<typename Engine0, typename Layout0, typename Engine1, typename Layout1, typename Operator> __device__ __forceinline__ void quad_allreduce_(Tensor<Engine0, Layout0> &dst, Tensor<Engine1, Layout1> &src, Operator &op) { CUTE_STATIC_ASSERT_V(size(dst) == size(src)); #pragma unroll for (int i = 0; i < size(dst); i++){ dst(i) = Allreduce<4>::run(src(i), op); } } template<bool zero_init=true, typename Engine0, typename Layout0, typename Engine1, typename Layout1, typename Operator> __device__ __forceinline__ void reduce_(Tensor<Engine0, Layout0> const& tensor, Tensor<Engine1, Layout1> &summary, Operator &op) { thread_reduce_<zero_init>(tensor, summary, op); quad_allreduce_(summary, summary, op); } template<bool zero_init=true, typename Engine0, typename Layout0, typename Engine1, typename Layout1> __device__ __forceinline__ void reduce_max(Tensor<Engine0, Layout0> const& tensor, Tensor<Engine1, Layout1> &max){ MaxOp<float> max_op; reduce_<zero_init>(tensor, max, max_op); } template<bool zero_init=true, typename Engine0, typename Layout0, typename Engine1, typename Layout1> __device__ __forceinline__ void reduce_sum(Tensor<Engine0, Layout0> const& tensor, Tensor<Engine1, Layout1> &sum){ SumOp<float> sum_op; thread_reduce_<zero_init>(tensor, sum, sum_op); } // Apply the exp to all the elements. template <bool Scale_max=true, typename Engine0, typename Layout0, typename Engine1, typename Layout1> __forceinline__ __device__ void scale_apply_exp2(Tensor<Engine0, Layout0> &tensor, Tensor<Engine1, Layout1> const &max, const float scale) { static_assert(Layout0::rank == 2, "Only support 2D Tensor"); static_assert(Layout1::rank == 1, "Only support 1D Tensor"); CUTE_STATIC_ASSERT_V(size<0>(max) == size<0>(tensor)); #pragma unroll for (int mi = 0; mi < size<0>(tensor); ++mi) { // If max is -inf, then all elements must have been -inf (possibly due to masking). // We don't want (-inf - (-inf)) since that would give NaN. // If we don't have float around M_LOG2E the multiplication is done in fp64. const float max_scaled = max(mi) == -INFINITY ? 0.f : max(mi) * (Scale_max ? scale : float(M_LOG2E)); #pragma unroll for (int ni = 0; ni < size<1>(tensor); ++ni) { // Instead of computing exp(x - max), we compute exp2(x * log_2(e) - // max * log_2(e)) This allows the compiler to use the ffma // instruction instead of fadd and fmul separately. 
// The following macro will disable the use of fma. // See: https://github.com/pytorch/pytorch/issues/121558 for more details // This macro is set in PyTorch and not FlashAttention #ifdef UNFUSE_FMA tensor(mi, ni) = exp2f(__fmul_rn(tensor(mi, ni), scale) - max_scaled); #else tensor(mi, ni) = exp2f(tensor(mi, ni) * scale - max_scaled); #endif } } } // Apply the exp to all the elements. template <bool zero_init=true, typename Engine0, typename Layout0, typename Engine1, typename Layout1> __forceinline__ __device__ void max_scale_exp2_sum(Tensor<Engine0, Layout0> &tensor, Tensor<Engine1, Layout1> &max, Tensor<Engine1, Layout1> &sum, const float scale) { static_assert(Layout0::rank == 2, "Only support 2D Tensor"); static_assert(Layout1::rank == 1, "Only support 1D Tensor"); CUTE_STATIC_ASSERT_V(size<0>(max) == size<0>(tensor)); #pragma unroll for (int mi = 0; mi < size<0>(tensor); ++mi) { MaxOp<float> max_op; max(mi) = zero_init ? tensor(mi, 0) : max_op(max(mi), tensor(mi, 0)); #pragma unroll for (int ni = 1; ni < size<1>(tensor); ni++) { max(mi) = max_op(max(mi), tensor(mi, ni)); } max(mi) = Allreduce<4>::run(max(mi), max_op); // If max is -inf, then all elements must have been -inf (possibly due to masking). // We don't want (-inf - (-inf)) since that would give NaN. const float max_scaled = max(mi) == -INFINITY ? 0.f : max(mi) * scale; sum(mi) = 0; #pragma unroll for (int ni = 0; ni < size<1>(tensor); ++ni) { // Instead of computing exp(x - max), we compute exp2(x * log_2(e) - // max * log_2(e)) This allows the compiler to use the ffma // instruction instead of fadd and fmul separately. tensor(mi, ni) = exp2f(tensor(mi, ni) * scale - max_scaled); sum(mi) += tensor(mi, ni); } SumOp<float> sum_op; sum(mi) = Allreduce<4>::run(sum(mi), sum_op); } } //////////////////////////////////////////////////////////////////////////////////////////////////// template <int kNRows> struct Softmax { using TensorT = decltype(make_tensor<float>(Shape<Int<kNRows>>{})); TensorT row_max, row_sum; __forceinline__ __device__ Softmax() {}; template<bool Is_first, bool Check_inf=false, typename Tensor0, typename Tensor1> __forceinline__ __device__ void softmax_rescale_o(Tensor0 &acc_s, Tensor1 &acc_o, float softmax_scale_log2) { // Reshape acc_s from (MMA=4, MMA_M, MMA_N) to (nrow=(2, MMA_M), ncol=(2, MMA_N)) Tensor scores = make_tensor(acc_s.data(), flash::convert_layout_acc_rowcol(acc_s.layout())); static_assert(decltype(size<0>(scores))::value == kNRows); if (Is_first) { flash::template reduce_max</*zero_init=*/true>(scores, row_max); flash::scale_apply_exp2(scores, row_max, softmax_scale_log2); flash::reduce_sum</*zero_init=*/true>(scores, row_sum); } else { Tensor scores_max_prev = make_fragment_like(row_max); cute::copy(row_max, scores_max_prev); flash::template reduce_max</*zero_init=*/false>(scores, row_max); // Reshape acc_o from (MMA=4, MMA_M, MMA_K) to (nrow=(2, MMA_M), ncol=(2, MMA_K)) Tensor acc_o_rowcol = make_tensor(acc_o.data(), flash::convert_layout_acc_rowcol(acc_o.layout())); static_assert(decltype(size<0>(acc_o_rowcol))::value == kNRows); #pragma unroll for (int mi = 0; mi < size(row_max); ++mi) { float scores_max_cur = !Check_inf ? row_max(mi) : (row_max(mi) == -INFINITY ? 
0.0f : row_max(mi)); float scores_scale = exp2f((scores_max_prev(mi) - scores_max_cur) * softmax_scale_log2); row_sum(mi) *= scores_scale; #pragma unroll for (int ni = 0; ni < size<1>(acc_o_rowcol); ++ni) { acc_o_rowcol(mi, ni) *= scores_scale; } } flash::scale_apply_exp2(scores, row_max, softmax_scale_log2); // We don't do the reduce across threads here since we don't need to use the row_sum. // We do that reduce at the end when we need to normalize the softmax. flash::reduce_sum</*zero_init=*/false>(scores, row_sum); } }; template<bool Is_dropout=false, bool Split=false, typename Tensor0> __forceinline__ __device__ TensorT normalize_softmax_lse(Tensor0 &acc_o, float softmax_scale, float rp_dropout=1.0) { SumOp<float> sum_op; quad_allreduce_(row_sum, row_sum, sum_op); TensorT lse = make_fragment_like(row_sum); Tensor acc_o_rowcol = make_tensor(acc_o.data(), flash::convert_layout_acc_rowcol(acc_o.layout())); static_assert(decltype(size<0>(acc_o_rowcol))::value == kNRows); #pragma unroll for (int mi = 0; mi < size<0>(acc_o_rowcol); ++mi) { float sum = row_sum(mi); float inv_sum = (sum == 0.f || sum != sum) ? 1.f : 1.f / sum; lse(mi) = (sum == 0.f || sum != sum) ? (Split ? -INFINITY : INFINITY) : row_max(mi) * softmax_scale + __logf(sum); float scale = !Is_dropout ? inv_sum : inv_sum * rp_dropout; #pragma unroll for (int ni = 0; ni < size<1>(acc_o_rowcol); ++ni) { acc_o_rowcol(mi, ni) *= scale; } } return lse; }; }; } // namespace flash
candle/candle-flash-attn/kernels/softmax.h/0
{ "file_path": "candle/candle-flash-attn/kernels/softmax.h", "repo_id": "candle", "token_count": 4008 }
32
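`softmax_rescale_o` above is the standard online-softmax recurrence: when a new block of scores raises the running row maximum, the previously accumulated output and normalizer are rescaled before the new exponentials are added. Setting aside the `softmax_scale_log2` factor and the `exp2`/`log2(e)` trick used for `ffma` fusion, the per-row update is (notation mine, not the kernel's variable names):

```latex
\begin{aligned}
m_{\text{new}} &= \max\big(m_{\text{old}},\ \max_j s_j\big), \qquad
\alpha = e^{m_{\text{old}} - m_{\text{new}}} \\
o &\leftarrow \alpha\, o, \qquad
\ell \leftarrow \alpha\, \ell + \textstyle\sum_j e^{s_j - m_{\text{new}}}
\end{aligned}
```

At the end of the pass, `normalize_softmax_lse` divides the output accumulator by the final normalizer and returns the per-row log-sum-exp.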
#include<stdint.h> #include "cuda_fp16.h" template<typename T> __device__ void fill_with(T *buf, T value, const size_t numel) { for (unsigned int i = blockIdx.x * blockDim.x + threadIdx.x; i < numel; i += blockDim.x * gridDim.x) { buf[i] = value; } } extern "C" __global__ void fill_u8(uint8_t *buf, uint8_t value, const size_t numel) { fill_with(buf, value, numel); } extern "C" __global__ void fill_u32(uint32_t *buf, uint32_t value, const size_t numel) { fill_with(buf, value, numel); } extern "C" __global__ void fill_i64(int64_t *buf, int64_t value, const size_t numel) { fill_with(buf, value, numel); } extern "C" __global__ void fill_f32(float *buf, float value, const size_t numel) { fill_with(buf, value, numel); } extern "C" __global__ void fill_f64(double *buf, double value, const size_t numel) { fill_with(buf, value, numel); } template<typename T> __device__ void copy2d(const T *src, T *dst, uint32_t d1, uint32_t d2, uint32_t src_s, uint32_t dst_s) { uint32_t idx = blockIdx.x * blockDim.x + threadIdx.x; if (idx >= d1 * d2) { return; } uint32_t idx1 = idx / d2; uint32_t idx2 = idx - d2 * idx1; dst[idx1 * dst_s + idx2] = src[idx1 * src_s + idx2]; } #define COPY2D_OP(TYPENAME, FNNAME) \ extern "C" __global__ \ void FNNAME(const TYPENAME *src, TYPENAME *dst, uint32_t d1, uint32_t d2, uint32_t src_s, uint32_t dst_s) { \ copy2d(src, dst, d1, d2, src_s, dst_s); \ } \ COPY2D_OP(float, copy2d_f32) COPY2D_OP(double, copy2d_f64) COPY2D_OP(uint8_t, copy2d_u8) COPY2D_OP(uint32_t, copy2d_u32) COPY2D_OP(int64_t, copy2d_i64) #if __CUDA_ARCH__ >= 530 extern "C" __global__ void fill_f16(__half *buf, __half value, const size_t numel) { fill_with(buf, value, numel); } COPY2D_OP(__half, copy2d_f16) #endif #if __CUDA_ARCH__ >= 800 #include <cuda_bf16.h> extern "C" __global__ void fill_bf16(__nv_bfloat16 *buf, __nv_bfloat16 value, const size_t numel) { fill_with(buf, value, numel); } COPY2D_OP(__nv_bfloat16, copy2d_bf16) #endif
candle/candle-kernels/src/fill.cu/0
{ "file_path": "candle/candle-kernels/src/fill.cu", "repo_id": "candle", "token_count": 919 }
33
use crate::benchmarks::{BenchDevice, BenchDeviceHandler}; use candle::{DType, Device, Module, Tensor}; use candle_nn::{Conv2d, Conv2dConfig}; use criterion::{black_box, criterion_group, Criterion}; use std::time::Instant; const B: usize = 1; const C: usize = 1; const M: usize = 128; const K: usize = 128; const K_SIZE: usize = 3; fn run(input: Tensor, weight: Tensor, bias: Tensor, config: Conv2dConfig) { Conv2d::new(weight, Some(bias), config) .forward(&input) .unwrap(); } fn run_conv2d_benchmark(c: &mut Criterion, device: &Device, dtype: DType, name: &str) { let weight = Tensor::ones((1, 1, K_SIZE, K_SIZE), dtype, device) .unwrap() .to_dtype(dtype) .unwrap(); let bias = Tensor::zeros(K, dtype, device).unwrap(); let input = Tensor::ones((B, C, M, K), dtype, device).unwrap(); let mut group = c.benchmark_group(device.bench_name(name)); group.bench_function("iter", move |b| { b.iter_custom(|iters| { let start = Instant::now(); for _i in 0..iters { run( black_box(input.clone()), black_box(weight.clone()), black_box(bias.clone()), Default::default(), ); } device.sync().unwrap(); start.elapsed() }) }); group.finish(); } fn criterion_benchmark(c: &mut Criterion) { let device = BenchDeviceHandler::new().unwrap(); for d in device.devices { run_conv2d_benchmark(c, &d, DType::F32, "conv2d_f32"); run_conv2d_benchmark(c, &d, DType::F16, "conv2d_f16"); } } criterion_group!(benches, criterion_benchmark);
candle/candle-nn/benches/benchmarks/conv.rs/0
{ "file_path": "candle/candle-nn/benches/benchmarks/conv.rs", "repo_id": "candle", "token_count": 808 }
34
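The benchmark above uses `iter_custom` so it can time a whole batch of iterations and synchronize the device before reading the clock, which matters for asynchronous GPU execution. A stripped-down CPU-only sketch of the same skeleton applied to a plain matmul (no synchronization is needed on CPU, so that call is omitted here):

```rust
use candle::{DType, Device, Tensor};
use criterion::{black_box, criterion_group, criterion_main, Criterion};
use std::time::Instant;

fn criterion_benchmark(c: &mut Criterion) {
    let device = Device::Cpu;
    let a = Tensor::ones((128, 128), DType::F32, &device).unwrap();
    let b_ = Tensor::ones((128, 128), DType::F32, &device).unwrap();
    c.bench_function("matmul_f32_cpu", |b| {
        // Time all iterations in one block, mirroring the iter_custom pattern above.
        b.iter_custom(|iters| {
            let start = Instant::now();
            for _ in 0..iters {
                black_box(a.matmul(&b_).unwrap());
            }
            start.elapsed()
        })
    });
}

criterion_group!(benches, criterion_benchmark);
criterion_main!(benches);
```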
//! Linear layer //! //! This layer applies a linear transformation to the incoming data, `y = x@w.t() + b`. //! The bias is optional. The `forward` method can be used to apply the layer, it supports input //! with a batch dimension (so of shape `(b_sz, in_c)`) or without (of shape `(in_c,)`), the //! output has shape `(b_sz, out_c)` and `(out_c,)` respectively. //! //! ```rust //! use candle::{Tensor, Device::Cpu}; //! use candle_nn::{Linear, Module}; //! # fn main() -> candle::Result<()> { //! //! let w = Tensor::new(&[[1f32, 2.], [3., 4.], [5., 6.]], &Cpu)?; //! let layer = Linear::new(w, None); // Use no bias. //! let xs = Tensor::new(&[[10f32, 100.]], &Cpu)?; //! let ys = layer.forward(&xs)?; //! assert_eq!(ys.to_vec2::<f32>()?, &[[210.0, 430.0, 650.0]]); //! # Ok(()) } //! ``` use candle::{Result, Tensor}; #[derive(Clone, Debug)] pub struct Linear { weight: Tensor, bias: Option<Tensor>, } impl Linear { pub fn new(weight: Tensor, bias: Option<Tensor>) -> Self { Self { weight, bias } } pub fn weight(&self) -> &Tensor { &self.weight } pub fn bias(&self) -> Option<&Tensor> { self.bias.as_ref() } } impl super::Module for Linear { fn forward(&self, x: &Tensor) -> candle::Result<Tensor> { let w = match *x.dims() { [b1, b2, _, _] => self.weight.broadcast_left((b1, b2))?.t()?, [bsize, _, _] => self.weight.broadcast_left(bsize)?.t()?, _ => self.weight.t()?, }; let x = x.matmul(&w)?; match &self.bias { None => Ok(x), Some(bias) => x.broadcast_add(bias), } } } /// Create or initialize a new linear layer. /// /// This uses some default names for weights and biases, namely `"weight"` and `"bias"`. pub fn linear(in_dim: usize, out_dim: usize, vb: crate::VarBuilder) -> Result<Linear> { let init_ws = crate::init::DEFAULT_KAIMING_NORMAL; let ws = vb.get_with_hints((out_dim, in_dim), "weight", init_ws)?; let bound = 1. / (in_dim as f64).sqrt(); let init_bs = crate::Init::Uniform { lo: -bound, up: bound, }; let bs = vb.get_with_hints(out_dim, "bias", init_bs)?; Ok(Linear::new(ws, Some(bs))) } /// Create or initialize a new linear layer without biases. pub fn linear_no_bias(in_dim: usize, out_dim: usize, vb: crate::VarBuilder) -> Result<Linear> { let init_ws = crate::init::DEFAULT_KAIMING_NORMAL; let ws = vb.get_with_hints((out_dim, in_dim), "weight", init_ws)?; Ok(Linear::new(ws, None)) } pub fn linear_b( in_dim: usize, out_dim: usize, bias: bool, vb: crate::VarBuilder, ) -> Result<Linear> { if bias { linear(in_dim, out_dim, vb) } else { linear_no_bias(in_dim, out_dim, vb) } }
candle/candle-nn/src/linear.rs/0
{ "file_path": "candle/candle-nn/src/linear.rs", "repo_id": "candle", "token_count": 1252 }
35
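As a small usage example complementing the doc-comment in the file above, the same layer with a bias and a batched input (the values are arbitrary):

```rust
use candle::{Device, Result, Tensor};
use candle_nn::{Linear, Module};

fn main() -> Result<()> {
    // A 3x2 weight and a 3-element bias, so the layer maps 2 features to 3.
    let w = Tensor::new(&[[1f32, 2.], [3., 4.], [5., 6.]], &Device::Cpu)?;
    let b = Tensor::new(&[0.5f32, -0.5, 1.0], &Device::Cpu)?;
    let layer = Linear::new(w, Some(b));
    // Batched input of shape (2, 2); the output has shape (2, 3).
    let xs = Tensor::new(&[[10f32, 100.], [1., 2.]], &Device::Cpu)?;
    let ys = layer.forward(&xs)?;
    println!("{:?}", ys.to_vec2::<f32>()?);
    Ok(())
}
```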
#[cfg(feature = "mkl")] extern crate intel_mkl_src; #[cfg(feature = "accelerate")] extern crate accelerate_src; use candle::{test_utils::to_vec2_round, DType, Device, Result, Tensor}; use candle_nn::RNN; /* The following test can be verified against PyTorch using the following snippet. import torch from torch import nn lstm = nn.LSTM(2, 3, 1) lstm.weight_ih_l0 = torch.nn.Parameter(torch.arange(0., 24.).reshape(12, 2).cos()) lstm.weight_hh_l0 = torch.nn.Parameter(torch.arange(0., 36.).reshape(12, 3).sin()) lstm.bias_ih_l0 = torch.nn.Parameter(torch.tensor([-1., 1., -0.5, 2, -1, 1, -0.5, 2, -1, 1, -0.5, 2])) lstm.bias_hh_l0 = torch.nn.Parameter(torch.tensor([-1., 1., -0.5, 2, -1, 1, -0.5, 2, -1, 1, -0.5, 2]).cos()) state = torch.zeros((1, 3)), torch.zeros((1, 3)) for inp in [3., 1., 4., 1., 5., 9., 2.]: inp = torch.tensor([[inp, inp * 0.5]]) _out, state = lstm(inp, state) print(state) # (tensor([[ 0.9919, 0.1738, -0.1451]], grad_fn=...), tensor([[ 5.7250, 0.4458, -0.2908]], grad_fn=...)) */ #[test] fn lstm() -> Result<()> { let cpu = &Device::Cpu; let w_ih = Tensor::arange(0f32, 24f32, cpu)?.reshape((12, 2))?; let w_ih = w_ih.cos()?; let w_hh = Tensor::arange(0f32, 36f32, cpu)?.reshape((12, 3))?; let w_hh = w_hh.sin()?; let b_ih = Tensor::new( &[-1f32, 1., -0.5, 2., -1., 1., -0.5, 2., -1., 1., -0.5, 2.], cpu, )?; let b_hh = b_ih.cos()?; let tensors: std::collections::HashMap<_, _> = [ ("weight_ih_l0".to_string(), w_ih), ("weight_hh_l0".to_string(), w_hh), ("bias_ih_l0".to_string(), b_ih), ("bias_hh_l0".to_string(), b_hh), ] .into_iter() .collect(); let vb = candle_nn::VarBuilder::from_tensors(tensors, DType::F32, cpu); let lstm = candle_nn::lstm(2, 3, Default::default(), vb)?; let mut state = lstm.zero_state(1)?; for inp in [3f32, 1., 4., 1., 5., 9., 2.] { let inp = Tensor::new(&[[inp, inp * 0.5]], cpu)?; state = lstm.step(&inp, &state)? } let h = state.h(); let c = state.c(); assert_eq!(to_vec2_round(h, 4)?, &[[0.9919, 0.1738, -0.1451]]); assert_eq!(to_vec2_round(c, 4)?, &[[5.725, 0.4458, -0.2908]]); Ok(()) } /* The following test can be verified against PyTorch using the following snippet. import torch from torch import nn gru = nn.GRU(2, 3, 1) gru.weight_ih_l0 = torch.nn.Parameter(torch.arange(0., 18.).reshape(9, 2).cos()) gru.weight_hh_l0 = torch.nn.Parameter(torch.arange(0., 27.).reshape(9, 3).sin()) gru.bias_ih_l0 = torch.nn.Parameter(torch.tensor([-1., 1., -0.5, 2, -1, 1, -0.5, 2, -1])) gru.bias_hh_l0 = torch.nn.Parameter(torch.tensor([-1., 1., -0.5, 2, -1, 1, -0.5, 2, -1]).cos()) state = torch.zeros((1, 3)) for inp in [3., 1., 4., 1., 5., 9., 2.]: inp = torch.tensor([[inp, inp * 0.5]]) _out, state = gru(inp, state) print(state) # tensor([[ 0.0579, 0.8836, -0.9991]], grad_fn=<SqueezeBackward1>) */ #[test] fn gru() -> Result<()> { let cpu = &Device::Cpu; let w_ih = Tensor::arange(0f32, 18f32, cpu)?.reshape((9, 2))?; let w_ih = w_ih.cos()?; let w_hh = Tensor::arange(0f32, 27f32, cpu)?.reshape((9, 3))?; let w_hh = w_hh.sin()?; let b_ih = Tensor::new(&[-1f32, 1., -0.5, 2., -1., 1., -0.5, 2., -1.], cpu)?; let b_hh = b_ih.cos()?; let tensors: std::collections::HashMap<_, _> = [ ("weight_ih_l0".to_string(), w_ih), ("weight_hh_l0".to_string(), w_hh), ("bias_ih_l0".to_string(), b_ih), ("bias_hh_l0".to_string(), b_hh), ] .into_iter() .collect(); let vb = candle_nn::VarBuilder::from_tensors(tensors, DType::F32, cpu); let gru = candle_nn::gru(2, 3, Default::default(), vb)?; let mut state = gru.zero_state(1)?; for inp in [3f32, 1., 4., 1., 5., 9., 2.] 
{ let inp = Tensor::new(&[[inp, inp * 0.5]], cpu)?; state = gru.step(&inp, &state)? } let h = state.h(); assert_eq!(to_vec2_round(h, 4)?, &[[0.0579, 0.8836, -0.9991]]); Ok(()) }
candle/candle-nn/tests/rnn.rs/0
{ "file_path": "candle/candle-nn/tests/rnn.rs", "repo_id": "candle", "token_count": 2010 }
36
# Generated content DO NOT EDIT from typing import Any, Callable, Dict, List, Optional, Tuple, Union, Sequence from os import PathLike from candle.typing import _ArrayLike, Device, Scalar, Index, Shape class bf16(DType): pass @staticmethod def cat(tensors: List[Tensor], dim: int) -> Tensor: """ Concatenate the tensors across one axis. """ pass class f16(DType): pass class f32(DType): pass class f64(DType): pass class i64(DType): pass @staticmethod def ones(*shape: Shape, dtype: Optional[DType] = None, device: Optional[Device] = None) -> Tensor: """ Creates a new tensor filled with ones. """ pass @staticmethod def rand(*shape: Shape, device: Optional[Device] = None) -> Tensor: """ Creates a new tensor with random values. """ pass @staticmethod def randn(*shape: Shape, device: Optional[Device] = None) -> Tensor: """ Creates a new tensor with random values from a normal distribution. """ pass @staticmethod def stack(tensors: List[Tensor], dim: int) -> Tensor: """ Stack the tensors along a new axis. """ pass @staticmethod def tensor(data: _ArrayLike) -> Tensor: """ Creates a new tensor from a Python value. The value can be a scalar or array-like object. """ pass class u32(DType): pass class u8(DType): pass @staticmethod def zeros(*shape: Shape, dtype: Optional[DType] = None, device: Optional[Device] = None) -> Tensor: """ Creates a new tensor filled with zeros. """ pass class DType: """ A `candle` dtype. """ class QTensor: """ A quantized tensor. """ def dequantize(self) -> Tensor: """ Dequantizes the tensor. """ pass @property def ggml_dtype(self) -> str: """ Gets the tensors quantized dtype. """ pass def matmul_t(self, lhs: Tensor) -> Tensor: """ Performs a quantized matrix multiplication, with the quantized tensor as the right hand side. """ pass @property def rank(self) -> int: """ Gets the rank of the tensor. """ pass @property def shape(self) -> Tuple[int]: """ Gets the shape of the tensor. """ pass class Tensor: """ A `candle` tensor. """ def __init__(self, data: _ArrayLike): pass def __add__(self, rhs: Union[Tensor, Scalar]) -> "Tensor": """ Add a scalar to a tensor or two tensors together. """ pass def __eq__(self, rhs: Union[Tensor, Scalar]) -> "Tensor": """ Compare a tensor with a scalar or one tensor with another. """ pass def __ge__(self, rhs: Union[Tensor, Scalar]) -> "Tensor": """ Compare a tensor with a scalar or one tensor with another. """ pass def __getitem__(self, index: Union[Index, Tensor, Sequence[Index]]) -> "Tensor": """ Return a slice of a tensor. """ pass def __gt__(self, rhs: Union[Tensor, Scalar]) -> "Tensor": """ Compare a tensor with a scalar or one tensor with another. """ pass def __le__(self, rhs: Union[Tensor, Scalar]) -> "Tensor": """ Compare a tensor with a scalar or one tensor with another. """ pass def __lt__(self, rhs: Union[Tensor, Scalar]) -> "Tensor": """ Compare a tensor with a scalar or one tensor with another. """ pass def __mul__(self, rhs: Union[Tensor, Scalar]) -> "Tensor": """ Multiply a tensor by a scalar or one tensor by another. """ pass def __ne__(self, rhs: Union[Tensor, Scalar]) -> "Tensor": """ Compare a tensor with a scalar or one tensor with another. """ pass def __radd__(self, rhs: Union[Tensor, Scalar]) -> "Tensor": """ Add a scalar to a tensor or two tensors together. """ pass def __richcmp__(self, rhs: Union[Tensor, Scalar], op) -> "Tensor": """ Compare a tensor with a scalar or one tensor with another. """ pass def __rmul__(self, rhs: Union[Tensor, Scalar]) -> "Tensor": """ Multiply a tensor by a scalar or one tensor by another. 
""" pass def __sub__(self, rhs: Union[Tensor, Scalar]) -> "Tensor": """ Subtract a scalar from a tensor or one tensor from another. """ pass def __truediv__(self, rhs: Union[Tensor, Scalar]) -> "Tensor": """ Divide a tensor by a scalar or one tensor by another. """ pass def abs(self) -> Tensor: """ Performs the `abs` operation on the tensor. """ pass def argmax_keepdim(self, dim: int) -> Tensor: """ Returns the indices of the maximum value(s) across the selected dimension. """ pass def argmin_keepdim(self, dim: int) -> Tensor: """ Returns the indices of the minimum value(s) across the selected dimension. """ pass def broadcast_add(self, rhs: Tensor) -> Tensor: """ Adds the two tensors, while broadcasting the right-hand-side tensor to match the shape of the left-hand-side tensor. """ pass def broadcast_as(self, *shape: Shape) -> Tensor: """ Broadcasts the tensor to the given shape. """ pass def broadcast_div(self, rhs: Tensor) -> Tensor: """ Divides the two tensors, while broadcasting the right-hand-side tensor to match the shape of the left-hand-side tensor. """ pass def broadcast_left(self, *shape: Shape) -> Tensor: """ Broadcasts the tensor to the given shape, adding new dimensions on the left. """ pass def broadcast_mul(self, rhs: Tensor) -> Tensor: """ Multiplies the two tensors, while broadcasting the right-hand-side tensor to match the shape of the left-hand-side tensor. """ pass def broadcast_sub(self, rhs: Tensor) -> Tensor: """ Subtracts the two tensors, while broadcasting the right-hand-side tensor to match the shape of the left-hand-side tensor. """ pass def contiguous(self) -> Tensor: """ Makes the tensor contiguous in memory. """ pass def copy(self) -> Tensor: """ Returns a copy of the tensor. """ pass def cos(self) -> Tensor: """ Performs the `cos` operation on the tensor. """ pass def detach(self) -> Tensor: """ Detach the tensor from the computation graph. """ pass @property def device(self) -> Device: """ Gets the tensor's device. """ pass @property def dtype(self) -> DType: """ Gets the tensor's dtype. """ pass def exp(self) -> Tensor: """ Performs the `exp` operation on the tensor. """ pass def flatten_all(self) -> Tensor: """ Flattens the tensor into a 1D tensor. """ pass def flatten_from(self, dim: int) -> Tensor: """ Flattens the tensor on the dimension indexes from `dim` (inclusive) to the last dimension. """ pass def flatten_to(self, dim: int) -> Tensor: """ Flattens the tensor on the dimension indexes from `0` to `dim` (inclusive). """ pass def gather(self, index, dim): """ Gathers values along an axis specified by dim. """ pass def get(self, index: int) -> Tensor: """ Gets the value at the specified index. """ pass def index_select(self, rhs: Tensor, dim: int) -> Tensor: """ Select values for the input tensor at the target indexes across the specified dimension. The `indexes` is argument is an int tensor with a single dimension. The output has the same number of dimension as the `self` input. The target dimension of the output has length the length of `indexes` and the values are taken from `self` using the index from `indexes`. Other dimensions have the same number of elements as the input tensor. """ pass def is_contiguous(self) -> bool: """ Returns true if the tensor is contiguous in C order. """ pass def is_fortran_contiguous(self) -> bool: """ Returns true if the tensor is contiguous in Fortran order. """ pass def log(self) -> Tensor: """ Performs the `log` operation on the tensor. 
""" pass def matmul(self, rhs: Tensor) -> Tensor: """ Performs a matrix multiplication between the two tensors. """ pass def max_keepdim(self, dim: int) -> Tensor: """ Gathers the maximum value across the selected dimension. """ pass def mean_all(self) -> Tensor: """ Returns the mean of the tensor. """ pass def min_keepdim(self, dim: int) -> Tensor: """ Gathers the minimum value across the selected dimension. """ pass def narrow(self, dim: int, start: int, len: int) -> Tensor: """ Returns a new tensor that is a narrowed version of the input, the dimension `dim` ranges from `start` to `start + len`. """ pass @property def nelement(self) -> int: """ Gets the tensor's element count. """ pass def powf(self, p: float) -> Tensor: """ Performs the `pow` operation on the tensor with the given exponent. """ pass def quantize(self, quantized_dtype: str) -> QTensor: """ Quantize the tensor. """ pass @property def rank(self) -> int: """ Gets the tensor's rank. """ pass def recip(self) -> Tensor: """ Get the `recip` of the tensor. """ pass def reshape(self, *shape: Shape) -> Tensor: """ Reshapes the tensor to the given shape. """ pass @property def shape(self) -> Tuple[int]: """ Gets the tensor's shape. """ pass def sin(self) -> Tensor: """ Performs the `sin` operation on the tensor. """ pass def sqr(self) -> Tensor: """ Squares the tensor. """ pass def sqrt(self) -> Tensor: """ Calculates the square root of the tensor. """ pass def squeeze(self, dim: int) -> Tensor: """ Creates a new tensor with the specified dimension removed if its size was one. """ pass @property def stride(self) -> Tuple[int]: """ Gets the tensor's strides. """ pass def sum_all(self) -> Tensor: """ Returns the sum of the tensor. """ pass def sum_keepdim(self, dim: Union[int, List[int]]) -> Tensor: """ Returns the sum of all elements in the input tensor. The sum is performed over all the input dimensions. """ pass def t(self) -> Tensor: """ Transposes the tensor. """ pass def to(self, *args, **kwargs) -> Tensor: """ Performs Tensor dtype and/or device conversion. """ pass def to_device(self, device: Union[str, Device]) -> Tensor: """ Move the tensor to a new device. """ pass def to_dtype(self, dtype: Union[str, DType]) -> Tensor: """ Convert the tensor to a new dtype. """ pass def to_torch(self) -> torch.Tensor: """ Converts candle's tensor to pytorch's tensor """ pass def transpose(self, dim1: int, dim2: int) -> Tensor: """ Returns a tensor that is a transposed version of the input, the given dimensions are swapped. """ pass def unsqueeze(self, dim: int) -> Tensor: """ Creates a new tensor with a dimension of size one inserted at the specified position. """ pass def values(self) -> _ArrayLike: """ Gets the tensor's data as a Python scalar or array-like object. """ pass def where_cond(self, on_true: Tensor, on_false: Tensor) -> Tensor: """ Returns a tensor with the same shape as the input tensor, the values are taken from `on_true` if the input tensor value is not zero, and `on_false` at the positions where the input tensor is equal to zero. """ pass
candle/candle-pyo3/py_src/candle/__init__.pyi/0
{ "file_path": "candle/candle-pyo3/py_src/candle/__init__.pyi", "repo_id": "candle", "token_count": 5844 }
37
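As a quick illustration of how the `Tensor` surface documented in the stub above composes, the following is a minimal usage sketch. It only relies on methods and helpers that appear in the stub and the accompanying tests (`broadcast_add`, `reshape`, `t`, `contiguous`, `quantize`, `candle.randn`); it assumes the `candle` package built from `candle-pyo3` is installed.

```python
# Minimal sketch of the Tensor API documented above (illustrative only).
import candle

a = candle.Tensor([[3.0, 1.0, 4.0, 1.0], [5.0, 9.0, 2.0, 6.0]])
print(a.shape, a.dtype, a.device)          # shape, dtype and device properties

row = candle.Tensor([1.0, 10.0, 100.0, 1000.0])
summed = a.broadcast_add(row)              # (4,) is broadcast against (2, 4)
print(summed.values())

b = a.reshape((4, 2)).t().contiguous()     # reshape / transpose / make contiguous
print(b.shape)

weights = candle.randn((16, 256))
q = weights.quantize("q4_0")               # returns a QTensor
print(q.ggml_dtype, q.shape)
```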
# Generated content DO NOT EDIT from .. import utils cuda_is_available = utils.cuda_is_available get_num_threads = utils.get_num_threads has_accelerate = utils.has_accelerate has_mkl = utils.has_mkl load_ggml = utils.load_ggml load_gguf = utils.load_gguf load_safetensors = utils.load_safetensors save_gguf = utils.save_gguf save_safetensors = utils.save_safetensors
candle/candle-pyo3/py_src/candle/utils/__init__.py/0
{ "file_path": "candle/candle-pyo3/py_src/candle/utils/__init__.py", "repo_id": "candle", "token_count": 150 }
38
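The helpers re-exported above cover device queries and (de)serialization. A hypothetical round-trip through the safetensors helpers might look like the sketch below; the exact call shapes (a `(path, dict)` argument order and a dict return value) are assumptions based on common usage and should be checked against the `utils` stubs.

```python
# Hypothetical safetensors round-trip using the re-exported helpers.
# The (path, dict) argument order and dict return value are assumptions.
import candle
from candle.utils import cuda_is_available, load_safetensors, save_safetensors

tensors = {"weight": candle.randn((4, 4)), "bias": candle.rand((4,))}
save_safetensors("model.safetensors", tensors)

loaded = load_safetensors("model.safetensors")
print(sorted(loaded.keys()), cuda_is_available())
```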
import candle from candle import Tensor from candle.utils import cuda_is_available from candle.testing import assert_equal import pytest def test_tensor_can_be_constructed(): t = Tensor(42.0) assert t.values() == 42.0 def test_tensor_can_be_constructed_from_list(): t = Tensor([3.0, 1, 4, 1, 5, 9, 2, 6]) assert t.values() == [3.0, 1, 4, 1, 5, 9, 2, 6] def test_tensor_can_be_constructed_from_list_of_lists(): t = Tensor([[3.0, 1, 4, 1], [5, 9, 2, 6]]) assert t.values() == [[3.0, 1, 4, 1], [5, 9, 2, 6]] def test_tensor_can_be_quantized(): t = candle.randn((16, 256)) for format in [ "q4_0", "q4_1", "q5_0", "q5_1", "q8_0", "q2k", "q3k", "q4k", "q5k", "q8k", ]: for formatted_format in [format.upper(), format.lower()]: quant_t = t.quantize(formatted_format) assert quant_t.ggml_dtype.lower() == format.lower() assert quant_t.shape == t.shape def test_tensor_can_be_indexed(): t = Tensor([[3.0, 1, 4, 1], [5, 9, 2, 6]]) assert t[0].values() == [3.0, 1.0, 4.0, 1.0] assert t[1].values() == [5.0, 9.0, 2.0, 6.0] assert t[-1].values() == [5.0, 9.0, 2.0, 6.0] assert t[-2].values() == [3.0, 1.0, 4.0, 1.0] def test_tensor_can_be_sliced(): t = Tensor([3.0, 1, 4, 10, 5, 9, 2, 6]) assert t[0:4].values() == [3.0, 1.0, 4.0, 10.0] assert t[4:8].values() == [5.0, 9.0, 2.0, 6.0] assert t[-4:].values() == [5.0, 9.0, 2.0, 6.0] assert t[:-4].values() == [3.0, 1.0, 4.0, 10.0] assert t[-4:-2].values() == [5.0, 9.0] assert t[...].values() == t.values() def test_tensor_can_be_sliced_2d(): t = Tensor([[3.0, 1, 4, 1], [5, 9, 2, 6]]) assert t[:, 0].values() == [3.0, 5] assert t[:, 1].values() == [1.0, 9.0] assert t[0, 0].values() == 3.0 assert t[:, -1].values() == [1.0, 6.0] assert t[:, -4].values() == [3.0, 5] assert t[..., 0].values() == [3.0, 5] def test_tensor_can_be_scliced_3d(): t = Tensor([[[1, 2, 3, 4], [5, 6, 7, 8]], [[9, 10, 11, 12], [13, 14, 15, 16]]]) assert t[:, :, 0].values() == [[1, 5], [9, 13]] assert t[:, :, 0:2].values() == [[[1, 2], [5, 6]], [[9, 10], [13, 14]]] assert t[:, 0, 0].values() == [1, 9] assert t[..., 0].values() == [[1, 5], [9, 13]] assert t[..., 0:2].values() == [[[1, 2], [5, 6]], [[9, 10], [13, 14]]] def assert_bool(t: Tensor, expected: bool): assert t.shape == () assert str(t.dtype) == str(candle.u8) assert bool(t.values()) == expected def test_tensor_supports_equality_operations_with_scalars(): t = Tensor(42.0) assert_bool(t == 42.0, True) assert_bool(t == 43.0, False) assert_bool(t != 42.0, False) assert_bool(t != 43.0, True) assert_bool(t > 41.0, True) assert_bool(t > 42.0, False) assert_bool(t >= 41.0, True) assert_bool(t >= 42.0, True) assert_bool(t < 43.0, True) assert_bool(t < 42.0, False) assert_bool(t <= 43.0, True) assert_bool(t <= 42.0, True) def test_tensor_supports_equality_operations_with_tensors(): t = Tensor(42.0) same = Tensor(42.0) other = Tensor(43.0) assert_bool(t == same, True) assert_bool(t == other, False) assert_bool(t != same, False) assert_bool(t != other, True) assert_bool(t > same, False) assert_bool(t > other, False) assert_bool(t >= same, True) assert_bool(t >= other, False) assert_bool(t < same, False) assert_bool(t < other, True) assert_bool(t <= same, True) assert_bool(t <= other, True) def test_tensor_equality_operations_can_broadcast(): # Create a decoder attention mask as a test case # e.g. 
# [[1,0,0] # [1,1,0] # [1,1,1]] mask_cond = candle.Tensor([0, 1, 2]) mask = mask_cond < (mask_cond + 1).reshape((3, 1)) assert mask.shape == (3, 3) assert_equal(mask, Tensor([[1, 0, 0], [1, 1, 0], [1, 1, 1]]).to_dtype(candle.u8)) def test_tensor_can_be_hashed(): t = Tensor(42.0) other = Tensor(42.0) # Hash should represent a unique tensor assert hash(t) != hash(other) assert hash(t) == hash(t) def test_tensor_can_be_expanded_with_none(): t = candle.rand((12, 12)) b = t[None] assert b.shape == (1, 12, 12) c = t[:, None, None, :] assert c.shape == (12, 1, 1, 12) d = t[None, :, None, :] assert d.shape == (1, 12, 1, 12) e = t[None, None, :, :] assert e.shape == (1, 1, 12, 12) f = t[:, :, None] assert f.shape == (12, 12, 1) def test_tensor_can_be_index_via_tensor(): t = candle.Tensor([[1, 2, 1, 2], [3, 4, 3, 4], [5, 6, 5, 6]]) indexed = t[candle.Tensor([0, 2])] assert indexed.shape == (2, 4) assert indexed.values() == [[1, 2, 1, 2], [5, 6, 5, 6]] indexed = t[:, candle.Tensor([0, 2])] assert indexed.shape == (3, 2) assert indexed.values() == [[1, 1], [3, 3], [5, 5]] def test_tensor_can_be_index_via_list(): t = candle.Tensor([[1, 2, 1, 2], [3, 4, 3, 4], [5, 6, 5, 6]]) indexed = t[[0, 2]] assert indexed.shape == (2, 4) assert indexed.values() == [[1, 2, 1, 2], [5, 6, 5, 6]] indexed = t[:, [0, 2]] assert indexed.shape == (3, 2) assert indexed.values() == [[1, 1], [3, 3], [5, 5]] def test_tensor_can_be_cast_via_to(): t = Tensor(42.0) assert str(t.dtype) == str(candle.f32) t_new_args = t.to(candle.f64) assert str(t_new_args.dtype) == str(candle.f64) t_new_kwargs = t.to(dtype=candle.f64) assert str(t_new_kwargs.dtype) == str(candle.f64) pytest.raises(TypeError, lambda: t.to("not a dtype")) pytest.raises(TypeError, lambda: t.to(dtype="not a dtype")) pytest.raises(TypeError, lambda: t.to(candle.f64, "not a dtype")) pytest.raises(TypeError, lambda: t.to()) pytest.raises(ValueError, lambda: t.to(candle.f16, dtype=candle.f64)) pytest.raises(ValueError, lambda: t.to(candle.f16, candle.f16)) other = Tensor(42.0).to(candle.f64) t_new_other_args = t.to(other) assert str(t_new_other_args.dtype) == str(candle.f64) t_new_other_kwargs = t.to(other=other) assert str(t_new_other_kwargs.dtype) == str(candle.f64) @pytest.mark.skipif(not cuda_is_available(), reason="CUDA is not available") def test_tensor_can_be_moved_via_to(): t = Tensor(42.0) assert t.device == "cpu" t_new_args = t.to("cuda") assert t_new_args.device == "cuda" t_new_kwargs = t.to(device="cuda") assert t_new_kwargs.device == "cuda" pytest.raises(TypeError, lambda: t.to("not a device")) pytest.raises(TypeError, lambda: t.to(device="not a device")) pytest.raises(TypeError, lambda: t.to("cuda", "not a device")) pytest.raises(TypeError, lambda: t.to()) pytest.raises(ValueError, lambda: t.to("cuda", device="cpu")) pytest.raises(ValueError, lambda: t.to("cuda", "cuda")) other = Tensor(42.0).to("cuda") t_new_other_args = t.to(other) assert t_new_other_args.device == "cuda" t_new_other_kwargs = t.to(other=other) assert t_new_other_kwargs.device == "cuda" @pytest.mark.skipif(not cuda_is_available(), reason="CUDA is not available") def test_tensor_can_be_moved_and_cast_via_to(): t = Tensor(42.0) assert t.device == "cpu" assert str(t.dtype) == str(candle.f32) t_new_args = t.to("cuda", candle.f64) assert t_new_args.device == "cuda" assert str(t_new_args.dtype) == str(candle.f64) t_new_kwargs = t.to(device="cuda", dtype=candle.f64) assert t_new_kwargs.device == "cuda" assert str(t_new_kwargs.dtype) == str(candle.f64) other = Tensor(42.0).to("cuda").to(candle.f64) 
t_new_other_args = t.to(other) assert t_new_other_args.device == "cuda" assert str(t_new_other_args.dtype) == str(candle.f64) t_new_other_kwargs = t.to(other=other) assert t_new_other_kwargs.device == "cuda" assert str(t_new_other_kwargs.dtype) == str(candle.f64) def test_tensor_can_be_added(): t = Tensor(42.0) result = t + t assert result.values() == 84.0 result = t + 2.0 assert result.values() == 44.0 a = candle.rand((3, 1, 4)) b = candle.rand((2, 1)) c_native = a.broadcast_add(b) c = a + b assert c.shape == (3, 2, 4) assert c.values() == c_native.values() with pytest.raises(ValueError): d = candle.rand((3, 4, 5)) e = candle.rand((4, 6)) f = d + e def test_tensor_can_be_subtracted(): t = Tensor(42.0) result = t - t assert result.values() == 0 result = t - 2.0 assert result.values() == 40.0 a = candle.rand((3, 1, 4)) b = candle.rand((2, 1)) c_native = a.broadcast_sub(b) c = a - b assert c.shape == (3, 2, 4) assert c.values() == c_native.values() with pytest.raises(ValueError): d = candle.rand((3, 4, 5)) e = candle.rand((4, 6)) f = d - e def test_tensor_can_be_multiplied(): t = Tensor(42.0) result = t * t assert result.values() == 1764.0 result = t * 2.0 assert result.values() == 84.0 a = candle.rand((3, 1, 4)) b = candle.rand((2, 1)) c_native = a.broadcast_mul(b) c = a * b assert c.shape == (3, 2, 4) assert c.values() == c_native.values() with pytest.raises(ValueError): d = candle.rand((3, 4, 5)) e = candle.rand((4, 6)) f = d * e def test_tensor_can_be_divided(): t = Tensor(42.0) result = t / t assert result.values() == 1.0 result = t / 2.0 assert result.values() == 21.0 a = candle.rand((3, 1, 4)) b = candle.rand((2, 1)) c_native = a.broadcast_div(b) c = a / b assert c.shape == (3, 2, 4) assert c.values() == c_native.values() with pytest.raises(ValueError): d = candle.rand((3, 4, 5)) e = candle.rand((4, 6)) f = d / e
candle/candle-pyo3/tests/native/test_tensor.py/0
{ "file_path": "candle/candle-pyo3/tests/native/test_tensor.py", "repo_id": "candle", "token_count": 4688 }
39
use candle::{DType, IndexOp, Result, Tensor, D}; use candle_nn::{LayerNorm, Linear, RmsNorm, VarBuilder}; // https://github.com/black-forest-labs/flux/blob/727e3a71faf37390f318cf9434f0939653302b60/src/flux/model.py#L12 #[derive(Debug, Clone)] pub struct Config { pub in_channels: usize, pub vec_in_dim: usize, pub context_in_dim: usize, pub hidden_size: usize, pub mlp_ratio: f64, pub num_heads: usize, pub depth: usize, pub depth_single_blocks: usize, pub axes_dim: Vec<usize>, pub theta: usize, pub qkv_bias: bool, pub guidance_embed: bool, } impl Config { // https://github.com/black-forest-labs/flux/blob/727e3a71faf37390f318cf9434f0939653302b60/src/flux/util.py#L32 pub fn dev() -> Self { Self { in_channels: 64, vec_in_dim: 768, context_in_dim: 4096, hidden_size: 3072, mlp_ratio: 4.0, num_heads: 24, depth: 19, depth_single_blocks: 38, axes_dim: vec![16, 56, 56], theta: 10_000, qkv_bias: true, guidance_embed: true, } } // https://github.com/black-forest-labs/flux/blob/727e3a71faf37390f318cf9434f0939653302b60/src/flux/util.py#L64 pub fn schnell() -> Self { Self { in_channels: 64, vec_in_dim: 768, context_in_dim: 4096, hidden_size: 3072, mlp_ratio: 4.0, num_heads: 24, depth: 19, depth_single_blocks: 38, axes_dim: vec![16, 56, 56], theta: 10_000, qkv_bias: true, guidance_embed: false, } } } fn layer_norm(dim: usize, vb: VarBuilder) -> Result<LayerNorm> { let ws = Tensor::ones(dim, vb.dtype(), vb.device())?; Ok(LayerNorm::new_no_bias(ws, 1e-6)) } fn scaled_dot_product_attention(q: &Tensor, k: &Tensor, v: &Tensor) -> Result<Tensor> { let dim = q.dim(D::Minus1)?; let scale_factor = 1.0 / (dim as f64).sqrt(); let mut batch_dims = q.dims().to_vec(); batch_dims.pop(); batch_dims.pop(); let q = q.flatten_to(batch_dims.len() - 1)?; let k = k.flatten_to(batch_dims.len() - 1)?; let v = v.flatten_to(batch_dims.len() - 1)?; let attn_weights = (q.matmul(&k.t()?)? * scale_factor)?; let attn_scores = candle_nn::ops::softmax_last_dim(&attn_weights)?.matmul(&v)?; batch_dims.push(attn_scores.dim(D::Minus2)?); batch_dims.push(attn_scores.dim(D::Minus1)?); attn_scores.reshape(batch_dims) } fn rope(pos: &Tensor, dim: usize, theta: usize) -> Result<Tensor> { if dim % 2 == 1 { candle::bail!("dim {dim} is odd") } let dev = pos.device(); let theta = theta as f64; let inv_freq: Vec<_> = (0..dim) .step_by(2) .map(|i| 1f32 / theta.powf(i as f64 / dim as f64) as f32) .collect(); let inv_freq_len = inv_freq.len(); let inv_freq = Tensor::from_vec(inv_freq, (1, 1, inv_freq_len), dev)?; let inv_freq = inv_freq.to_dtype(pos.dtype())?; let freqs = pos.unsqueeze(2)?.broadcast_mul(&inv_freq)?; let cos = freqs.cos()?; let sin = freqs.sin()?; let out = Tensor::stack(&[&cos, &sin.neg()?, &sin, &cos], 3)?; let (b, n, d, _ij) = out.dims4()?; out.reshape((b, n, d, 2, 2)) } fn apply_rope(x: &Tensor, freq_cis: &Tensor) -> Result<Tensor> { let dims = x.dims(); let (b_sz, n_head, seq_len, n_embd) = x.dims4()?; let x = x.reshape((b_sz, n_head, seq_len, n_embd / 2, 2))?; let x0 = x.narrow(D::Minus1, 0, 1)?; let x1 = x.narrow(D::Minus1, 1, 1)?; let fr0 = freq_cis.get_on_dim(D::Minus1, 0)?; let fr1 = freq_cis.get_on_dim(D::Minus1, 1)?; (fr0.broadcast_mul(&x0)? 
+ fr1.broadcast_mul(&x1)?)?.reshape(dims.to_vec()) } fn attention(q: &Tensor, k: &Tensor, v: &Tensor, pe: &Tensor) -> Result<Tensor> { let q = apply_rope(q, pe)?.contiguous()?; let k = apply_rope(k, pe)?.contiguous()?; let x = scaled_dot_product_attention(&q, &k, v)?; x.transpose(1, 2)?.flatten_from(2) } fn timestep_embedding(t: &Tensor, dim: usize, dtype: DType) -> Result<Tensor> { const TIME_FACTOR: f64 = 1000.; const MAX_PERIOD: f64 = 10000.; if dim % 2 == 1 { candle::bail!("{dim} is odd") } let dev = t.device(); let half = dim / 2; let t = (t * TIME_FACTOR)?; let arange = Tensor::arange(0, half as u32, dev)?.to_dtype(candle::DType::F32)?; let freqs = (arange * (-MAX_PERIOD.ln() / half as f64))?.exp()?; let args = t .unsqueeze(1)? .to_dtype(candle::DType::F32)? .broadcast_mul(&freqs.unsqueeze(0)?)?; let emb = Tensor::cat(&[args.cos()?, args.sin()?], D::Minus1)?.to_dtype(dtype)?; Ok(emb) } #[derive(Debug, Clone)] pub struct EmbedNd { #[allow(unused)] dim: usize, theta: usize, axes_dim: Vec<usize>, } impl EmbedNd { fn new(dim: usize, theta: usize, axes_dim: Vec<usize>) -> Self { Self { dim, theta, axes_dim, } } } impl candle::Module for EmbedNd { fn forward(&self, ids: &Tensor) -> Result<Tensor> { let n_axes = ids.dim(D::Minus1)?; let mut emb = Vec::with_capacity(n_axes); for idx in 0..n_axes { let r = rope( &ids.get_on_dim(D::Minus1, idx)?, self.axes_dim[idx], self.theta, )?; emb.push(r) } let emb = Tensor::cat(&emb, 2)?; emb.unsqueeze(1) } } #[derive(Debug, Clone)] pub struct MlpEmbedder { in_layer: Linear, out_layer: Linear, } impl MlpEmbedder { fn new(in_sz: usize, h_sz: usize, vb: VarBuilder) -> Result<Self> { let in_layer = candle_nn::linear(in_sz, h_sz, vb.pp("in_layer"))?; let out_layer = candle_nn::linear(h_sz, h_sz, vb.pp("out_layer"))?; Ok(Self { in_layer, out_layer, }) } } impl candle::Module for MlpEmbedder { fn forward(&self, xs: &Tensor) -> Result<Tensor> { xs.apply(&self.in_layer)?.silu()?.apply(&self.out_layer) } } #[derive(Debug, Clone)] pub struct QkNorm { query_norm: RmsNorm, key_norm: RmsNorm, } impl QkNorm { fn new(dim: usize, vb: VarBuilder) -> Result<Self> { let query_norm = vb.get(dim, "query_norm.scale")?; let query_norm = RmsNorm::new(query_norm, 1e-6); let key_norm = vb.get(dim, "key_norm.scale")?; let key_norm = RmsNorm::new(key_norm, 1e-6); Ok(Self { query_norm, key_norm, }) } } struct ModulationOut { shift: Tensor, scale: Tensor, gate: Tensor, } impl ModulationOut { fn scale_shift(&self, xs: &Tensor) -> Result<Tensor> { xs.broadcast_mul(&(&self.scale + 1.)?)? .broadcast_add(&self.shift) } fn gate(&self, xs: &Tensor) -> Result<Tensor> { self.gate.broadcast_mul(xs) } } #[derive(Debug, Clone)] struct Modulation1 { lin: Linear, } impl Modulation1 { fn new(dim: usize, vb: VarBuilder) -> Result<Self> { let lin = candle_nn::linear(dim, 3 * dim, vb.pp("lin"))?; Ok(Self { lin }) } fn forward(&self, vec_: &Tensor) -> Result<ModulationOut> { let ys = vec_ .silu()? .apply(&self.lin)? .unsqueeze(1)? .chunk(3, D::Minus1)?; if ys.len() != 3 { candle::bail!("unexpected len from chunk {ys:?}") } Ok(ModulationOut { shift: ys[0].clone(), scale: ys[1].clone(), gate: ys[2].clone(), }) } } #[derive(Debug, Clone)] struct Modulation2 { lin: Linear, } impl Modulation2 { fn new(dim: usize, vb: VarBuilder) -> Result<Self> { let lin = candle_nn::linear(dim, 6 * dim, vb.pp("lin"))?; Ok(Self { lin }) } fn forward(&self, vec_: &Tensor) -> Result<(ModulationOut, ModulationOut)> { let ys = vec_ .silu()? .apply(&self.lin)? .unsqueeze(1)? 
.chunk(6, D::Minus1)?; if ys.len() != 6 { candle::bail!("unexpected len from chunk {ys:?}") } let mod1 = ModulationOut { shift: ys[0].clone(), scale: ys[1].clone(), gate: ys[2].clone(), }; let mod2 = ModulationOut { shift: ys[3].clone(), scale: ys[4].clone(), gate: ys[5].clone(), }; Ok((mod1, mod2)) } } #[derive(Debug, Clone)] pub struct SelfAttention { qkv: Linear, norm: QkNorm, proj: Linear, num_heads: usize, } impl SelfAttention { fn new(dim: usize, num_heads: usize, qkv_bias: bool, vb: VarBuilder) -> Result<Self> { let head_dim = dim / num_heads; let qkv = candle_nn::linear_b(dim, dim * 3, qkv_bias, vb.pp("qkv"))?; let norm = QkNorm::new(head_dim, vb.pp("norm"))?; let proj = candle_nn::linear(dim, dim, vb.pp("proj"))?; Ok(Self { qkv, norm, proj, num_heads, }) } fn qkv(&self, xs: &Tensor) -> Result<(Tensor, Tensor, Tensor)> { let qkv = xs.apply(&self.qkv)?; let (b, l, _khd) = qkv.dims3()?; let qkv = qkv.reshape((b, l, 3, self.num_heads, ()))?; let q = qkv.i((.., .., 0))?.transpose(1, 2)?; let k = qkv.i((.., .., 1))?.transpose(1, 2)?; let v = qkv.i((.., .., 2))?.transpose(1, 2)?; let q = q.apply(&self.norm.query_norm)?; let k = k.apply(&self.norm.key_norm)?; Ok((q, k, v)) } #[allow(unused)] fn forward(&self, xs: &Tensor, pe: &Tensor) -> Result<Tensor> { let (q, k, v) = self.qkv(xs)?; attention(&q, &k, &v, pe)?.apply(&self.proj) } } #[derive(Debug, Clone)] struct Mlp { lin1: Linear, lin2: Linear, } impl Mlp { fn new(in_sz: usize, mlp_sz: usize, vb: VarBuilder) -> Result<Self> { let lin1 = candle_nn::linear(in_sz, mlp_sz, vb.pp("0"))?; let lin2 = candle_nn::linear(mlp_sz, in_sz, vb.pp("2"))?; Ok(Self { lin1, lin2 }) } } impl candle::Module for Mlp { fn forward(&self, xs: &Tensor) -> Result<Tensor> { xs.apply(&self.lin1)?.gelu()?.apply(&self.lin2) } } #[derive(Debug, Clone)] pub struct DoubleStreamBlock { img_mod: Modulation2, img_norm1: LayerNorm, img_attn: SelfAttention, img_norm2: LayerNorm, img_mlp: Mlp, txt_mod: Modulation2, txt_norm1: LayerNorm, txt_attn: SelfAttention, txt_norm2: LayerNorm, txt_mlp: Mlp, } impl DoubleStreamBlock { fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let h_sz = cfg.hidden_size; let mlp_sz = (h_sz as f64 * cfg.mlp_ratio) as usize; let img_mod = Modulation2::new(h_sz, vb.pp("img_mod"))?; let img_norm1 = layer_norm(h_sz, vb.pp("img_norm1"))?; let img_attn = SelfAttention::new(h_sz, cfg.num_heads, cfg.qkv_bias, vb.pp("img_attn"))?; let img_norm2 = layer_norm(h_sz, vb.pp("img_norm2"))?; let img_mlp = Mlp::new(h_sz, mlp_sz, vb.pp("img_mlp"))?; let txt_mod = Modulation2::new(h_sz, vb.pp("txt_mod"))?; let txt_norm1 = layer_norm(h_sz, vb.pp("txt_norm1"))?; let txt_attn = SelfAttention::new(h_sz, cfg.num_heads, cfg.qkv_bias, vb.pp("txt_attn"))?; let txt_norm2 = layer_norm(h_sz, vb.pp("txt_norm2"))?; let txt_mlp = Mlp::new(h_sz, mlp_sz, vb.pp("txt_mlp"))?; Ok(Self { img_mod, img_norm1, img_attn, img_norm2, img_mlp, txt_mod, txt_norm1, txt_attn, txt_norm2, txt_mlp, }) } fn forward( &self, img: &Tensor, txt: &Tensor, vec_: &Tensor, pe: &Tensor, ) -> Result<(Tensor, Tensor)> { let (img_mod1, img_mod2) = self.img_mod.forward(vec_)?; // shift, scale, gate let (txt_mod1, txt_mod2) = self.txt_mod.forward(vec_)?; // shift, scale, gate let img_modulated = img.apply(&self.img_norm1)?; let img_modulated = img_mod1.scale_shift(&img_modulated)?; let (img_q, img_k, img_v) = self.img_attn.qkv(&img_modulated)?; let txt_modulated = txt.apply(&self.txt_norm1)?; let txt_modulated = txt_mod1.scale_shift(&txt_modulated)?; let (txt_q, txt_k, txt_v) = 
self.txt_attn.qkv(&txt_modulated)?; let q = Tensor::cat(&[txt_q, img_q], 2)?; let k = Tensor::cat(&[txt_k, img_k], 2)?; let v = Tensor::cat(&[txt_v, img_v], 2)?; let attn = attention(&q, &k, &v, pe)?; let txt_attn = attn.narrow(1, 0, txt.dim(1)?)?; let img_attn = attn.narrow(1, txt.dim(1)?, attn.dim(1)? - txt.dim(1)?)?; let img = (img + img_mod1.gate(&img_attn.apply(&self.img_attn.proj)?))?; let img = (&img + img_mod2.gate( &img_mod2 .scale_shift(&img.apply(&self.img_norm2)?)? .apply(&self.img_mlp)?, )?)?; let txt = (txt + txt_mod1.gate(&txt_attn.apply(&self.txt_attn.proj)?))?; let txt = (&txt + txt_mod2.gate( &txt_mod2 .scale_shift(&txt.apply(&self.txt_norm2)?)? .apply(&self.txt_mlp)?, )?)?; Ok((img, txt)) } } #[derive(Debug, Clone)] pub struct SingleStreamBlock { linear1: Linear, linear2: Linear, norm: QkNorm, pre_norm: LayerNorm, modulation: Modulation1, h_sz: usize, mlp_sz: usize, num_heads: usize, } impl SingleStreamBlock { fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let h_sz = cfg.hidden_size; let mlp_sz = (h_sz as f64 * cfg.mlp_ratio) as usize; let head_dim = h_sz / cfg.num_heads; let linear1 = candle_nn::linear(h_sz, h_sz * 3 + mlp_sz, vb.pp("linear1"))?; let linear2 = candle_nn::linear(h_sz + mlp_sz, h_sz, vb.pp("linear2"))?; let norm = QkNorm::new(head_dim, vb.pp("norm"))?; let pre_norm = layer_norm(h_sz, vb.pp("pre_norm"))?; let modulation = Modulation1::new(h_sz, vb.pp("modulation"))?; Ok(Self { linear1, linear2, norm, pre_norm, modulation, h_sz, mlp_sz, num_heads: cfg.num_heads, }) } fn forward(&self, xs: &Tensor, vec_: &Tensor, pe: &Tensor) -> Result<Tensor> { let mod_ = self.modulation.forward(vec_)?; let x_mod = mod_.scale_shift(&xs.apply(&self.pre_norm)?)?; let x_mod = x_mod.apply(&self.linear1)?; let qkv = x_mod.narrow(D::Minus1, 0, 3 * self.h_sz)?; let (b, l, _khd) = qkv.dims3()?; let qkv = qkv.reshape((b, l, 3, self.num_heads, ()))?; let q = qkv.i((.., .., 0))?.transpose(1, 2)?; let k = qkv.i((.., .., 1))?.transpose(1, 2)?; let v = qkv.i((.., .., 2))?.transpose(1, 2)?; let mlp = x_mod.narrow(D::Minus1, 3 * self.h_sz, self.mlp_sz)?; let q = q.apply(&self.norm.query_norm)?; let k = k.apply(&self.norm.key_norm)?; let attn = attention(&q, &k, &v, pe)?; let output = Tensor::cat(&[attn, mlp.gelu()?], 2)?.apply(&self.linear2)?; xs + mod_.gate(&output) } } #[derive(Debug, Clone)] pub struct LastLayer { norm_final: LayerNorm, linear: Linear, ada_ln_modulation: Linear, } impl LastLayer { fn new(h_sz: usize, p_sz: usize, out_c: usize, vb: VarBuilder) -> Result<Self> { let norm_final = layer_norm(h_sz, vb.pp("norm_final"))?; let linear = candle_nn::linear(h_sz, p_sz * p_sz * out_c, vb.pp("linear"))?; let ada_ln_modulation = candle_nn::linear(h_sz, 2 * h_sz, vb.pp("adaLN_modulation.1"))?; Ok(Self { norm_final, linear, ada_ln_modulation, }) } fn forward(&self, xs: &Tensor, vec: &Tensor) -> Result<Tensor> { let chunks = vec.silu()?.apply(&self.ada_ln_modulation)?.chunk(2, 1)?; let (shift, scale) = (&chunks[0], &chunks[1]); let xs = xs .apply(&self.norm_final)? .broadcast_mul(&(scale.unsqueeze(1)? + 1.0)?)? 
.broadcast_add(&shift.unsqueeze(1)?)?; xs.apply(&self.linear) } } #[derive(Debug, Clone)] pub struct Flux { img_in: Linear, txt_in: Linear, time_in: MlpEmbedder, vector_in: MlpEmbedder, guidance_in: Option<MlpEmbedder>, pe_embedder: EmbedNd, double_blocks: Vec<DoubleStreamBlock>, single_blocks: Vec<SingleStreamBlock>, final_layer: LastLayer, } impl Flux { pub fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let img_in = candle_nn::linear(cfg.in_channels, cfg.hidden_size, vb.pp("img_in"))?; let txt_in = candle_nn::linear(cfg.context_in_dim, cfg.hidden_size, vb.pp("txt_in"))?; let mut double_blocks = Vec::with_capacity(cfg.depth); let vb_d = vb.pp("double_blocks"); for idx in 0..cfg.depth { let db = DoubleStreamBlock::new(cfg, vb_d.pp(idx))?; double_blocks.push(db) } let mut single_blocks = Vec::with_capacity(cfg.depth_single_blocks); let vb_s = vb.pp("single_blocks"); for idx in 0..cfg.depth_single_blocks { let sb = SingleStreamBlock::new(cfg, vb_s.pp(idx))?; single_blocks.push(sb) } let time_in = MlpEmbedder::new(256, cfg.hidden_size, vb.pp("time_in"))?; let vector_in = MlpEmbedder::new(cfg.vec_in_dim, cfg.hidden_size, vb.pp("vector_in"))?; let guidance_in = if cfg.guidance_embed { let mlp = MlpEmbedder::new(256, cfg.hidden_size, vb.pp("guidance_in"))?; Some(mlp) } else { None }; let final_layer = LastLayer::new(cfg.hidden_size, 1, cfg.in_channels, vb.pp("final_layer"))?; let pe_dim = cfg.hidden_size / cfg.num_heads; let pe_embedder = EmbedNd::new(pe_dim, cfg.theta, cfg.axes_dim.to_vec()); Ok(Self { img_in, txt_in, time_in, vector_in, guidance_in, pe_embedder, double_blocks, single_blocks, final_layer, }) } #[allow(clippy::too_many_arguments)] pub fn forward( &self, img: &Tensor, img_ids: &Tensor, txt: &Tensor, txt_ids: &Tensor, timesteps: &Tensor, y: &Tensor, guidance: Option<&Tensor>, ) -> Result<Tensor> { if txt.rank() != 3 { candle::bail!("unexpected shape for txt {:?}", txt.shape()) } if img.rank() != 3 { candle::bail!("unexpected shape for img {:?}", img.shape()) } let dtype = img.dtype(); let pe = { let ids = Tensor::cat(&[txt_ids, img_ids], 1)?; ids.apply(&self.pe_embedder)? }; let mut txt = txt.apply(&self.txt_in)?; let mut img = img.apply(&self.img_in)?; let vec_ = timestep_embedding(timesteps, 256, dtype)?.apply(&self.time_in)?; let vec_ = match (self.guidance_in.as_ref(), guidance) { (Some(g_in), Some(guidance)) => { (vec_ + timestep_embedding(guidance, 256, dtype)?.apply(g_in))? } _ => vec_, }; let vec_ = (vec_ + y.apply(&self.vector_in))?; // Double blocks for block in self.double_blocks.iter() { (img, txt) = block.forward(&img, &txt, &vec_, &pe)? } // Single blocks let mut img = Tensor::cat(&[&txt, &img], 1)?; for block in self.single_blocks.iter() { img = block.forward(&img, &vec_, &pe)?; } let img = img.i((.., txt.dim(1)?..))?; self.final_layer.forward(&img, &vec_) } }
candle/candle-transformers/src/models/flux/model.rs/0
{ "file_path": "candle/candle-transformers/src/models/flux/model.rs", "repo_id": "candle", "token_count": 10717 }
40
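Two closed-form embeddings drive the conditioning in the model above, and writing them out makes the tensor shapes easier to follow. `timestep_embedding` builds a sinusoidal embedding of the (pre-scaled) timestep, and `rope` builds one 2×2 rotation per position and frequency pair that `apply_rope` then applies to consecutive feature pairs:

```latex
% Sinusoidal timestep embedding (dim d, t pre-scaled by 1000):
\omega_k = \exp\!\Big(-\tfrac{k\,\ln 10000}{d/2}\Big), \qquad k = 0,\dots,\tfrac{d}{2}-1
\mathrm{emb}(t) = \big[\cos(1000\,t\,\omega_0),\dots,\cos(1000\,t\,\omega_{d/2-1}),\ \sin(1000\,t\,\omega_0),\dots,\sin(1000\,t\,\omega_{d/2-1})\big]

% Rotary embedding: one rotation per position p and frequency index k,
% applied to the feature pair (x_{2k}, x_{2k+1}), with \theta = cfg.theta:
R_k(p) = \begin{pmatrix} \cos(p\,\theta_k) & -\sin(p\,\theta_k) \\ \sin(p\,\theta_k) & \cos(p\,\theta_k) \end{pmatrix},
\qquad \theta_k = \theta^{-2k/d}
```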
use crate::models::with_tracing::{linear_no_bias, Linear, RmsNorm}; /// Mistral LLM, https://github.com/mistralai/mistral-src use candle::{DType, Device, Module, Result, Tensor, D}; use candle_nn::{Activation, VarBuilder}; use std::sync::Arc; fn default_use_flash_attn() -> bool { false } #[derive(Debug, Clone, PartialEq, serde::Deserialize)] pub struct Config { pub vocab_size: usize, pub hidden_size: usize, pub intermediate_size: usize, pub num_hidden_layers: usize, pub num_attention_heads: usize, pub head_dim: Option<usize>, pub num_key_value_heads: usize, pub hidden_act: Activation, pub max_position_embeddings: usize, pub rms_norm_eps: f64, pub rope_theta: f64, pub sliding_window: Option<usize>, #[serde(default = "default_use_flash_attn")] pub use_flash_attn: bool, } impl Config { // https://huggingface.co/mistralai/Mistral-7B-v0.1/blob/main/config.json pub fn config_7b_v0_1(use_flash_attn: bool) -> Self { Self { vocab_size: 32000, hidden_size: 4096, intermediate_size: 14336, num_hidden_layers: 32, num_attention_heads: 32, head_dim: None, num_key_value_heads: 8, hidden_act: Activation::Silu, max_position_embeddings: 32768, rms_norm_eps: 1e-5, rope_theta: 10_000., sliding_window: Some(4096), use_flash_attn, } } // https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca/blob/main/config.json // https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B/blob/main/config.json pub fn config_chat_ml(use_flash_attn: bool) -> Self { Self { vocab_size: 32002, hidden_size: 4096, intermediate_size: 14336, num_hidden_layers: 32, num_attention_heads: 32, head_dim: None, num_key_value_heads: 8, hidden_act: Activation::Silu, max_position_embeddings: 32768, rms_norm_eps: 1e-5, rope_theta: 10_000., sliding_window: Some(4096), use_flash_attn, } } // https://huggingface.co/amazon/MistralLite/blob/main/config.json pub fn config_amazon_mistral_lite(use_flash_attn: bool) -> Self { Self { vocab_size: 32003, hidden_size: 4096, intermediate_size: 14336, num_hidden_layers: 32, num_attention_heads: 32, head_dim: None, num_key_value_heads: 8, hidden_act: Activation::Silu, max_position_embeddings: 32768, rms_norm_eps: 1e-5, rope_theta: 10_000., sliding_window: Some(4096), use_flash_attn, } } fn head_dim(&self) -> usize { self.head_dim .unwrap_or(self.hidden_size / self.num_attention_heads) } } #[derive(Debug, Clone)] struct RotaryEmbedding { sin: Tensor, cos: Tensor, } impl RotaryEmbedding { fn new(dtype: DType, cfg: &Config, dev: &Device) -> Result<Self> { let rope_theta = cfg.rope_theta as f32; let dim = cfg.head_dim(); let max_seq_len = cfg.max_position_embeddings; let inv_freq: Vec<_> = (0..dim) .step_by(2) .map(|i| 1f32 / rope_theta.powf(i as f32 / dim as f32)) .collect(); let inv_freq_len = inv_freq.len(); let inv_freq = Tensor::from_vec(inv_freq, (1, inv_freq_len), dev)?.to_dtype(dtype)?; let t = Tensor::arange(0u32, max_seq_len as u32, dev)? .to_dtype(dtype)? 
.reshape((max_seq_len, 1))?; let freqs = t.matmul(&inv_freq)?; Ok(Self { sin: freqs.sin()?, cos: freqs.cos()?, }) } fn apply_rotary_emb_qkv( &self, q: &Tensor, k: &Tensor, seqlen_offset: usize, ) -> Result<(Tensor, Tensor)> { let (_b_sz, _h, seq_len, _n_embd) = q.dims4()?; let cos = self.cos.narrow(0, seqlen_offset, seq_len)?; let sin = self.sin.narrow(0, seqlen_offset, seq_len)?; let q_embed = candle_nn::rotary_emb::rope(q, &cos, &sin)?; let k_embed = candle_nn::rotary_emb::rope(k, &cos, &sin)?; Ok((q_embed, k_embed)) } } #[derive(Debug, Clone)] #[allow(clippy::upper_case_acronyms)] struct MLP { gate_proj: Linear, up_proj: Linear, down_proj: Linear, act_fn: Activation, } impl MLP { fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let hidden_sz = cfg.hidden_size; let intermediate_sz = cfg.intermediate_size; let gate_proj = linear_no_bias(hidden_sz, intermediate_sz, vb.pp("gate_proj"))?; let up_proj = linear_no_bias(hidden_sz, intermediate_sz, vb.pp("up_proj"))?; let down_proj = linear_no_bias(intermediate_sz, hidden_sz, vb.pp("down_proj"))?; Ok(Self { gate_proj, up_proj, down_proj, act_fn: cfg.hidden_act, }) } } impl Module for MLP { fn forward(&self, xs: &Tensor) -> Result<Tensor> { let lhs = xs.apply(&self.gate_proj)?.apply(&self.act_fn)?; let rhs = xs.apply(&self.up_proj)?; (lhs * rhs)?.apply(&self.down_proj) } } #[cfg(feature = "flash-attn")] fn flash_attn( q: &Tensor, k: &Tensor, v: &Tensor, softmax_scale: f32, causal: bool, ) -> Result<Tensor> { candle_flash_attn::flash_attn(q, k, v, softmax_scale, causal) } #[cfg(not(feature = "flash-attn"))] fn flash_attn(_: &Tensor, _: &Tensor, _: &Tensor, _: f32, _: bool) -> Result<Tensor> { unimplemented!("compile with '--features flash-attn'") } #[derive(Debug, Clone)] struct Attention { q_proj: Linear, k_proj: Linear, v_proj: Linear, o_proj: Linear, num_heads: usize, num_kv_heads: usize, num_kv_groups: usize, head_dim: usize, rotary_emb: Arc<RotaryEmbedding>, kv_cache: Option<(Tensor, Tensor)>, use_flash_attn: bool, } impl Attention { fn new(rotary_emb: Arc<RotaryEmbedding>, cfg: &Config, vb: VarBuilder) -> Result<Self> { let hidden_sz = cfg.hidden_size; let num_heads = cfg.num_attention_heads; let num_kv_heads = cfg.num_key_value_heads; let num_kv_groups = num_heads / num_kv_heads; let head_dim = cfg.head_dim(); let q_proj = linear_no_bias(hidden_sz, num_heads * head_dim, vb.pp("q_proj"))?; let k_proj = linear_no_bias(hidden_sz, num_kv_heads * head_dim, vb.pp("k_proj"))?; let v_proj = linear_no_bias(hidden_sz, num_kv_heads * head_dim, vb.pp("v_proj"))?; let o_proj = linear_no_bias(num_heads * head_dim, hidden_sz, vb.pp("o_proj"))?; Ok(Self { q_proj, k_proj, v_proj, o_proj, num_heads, num_kv_heads, num_kv_groups, head_dim, rotary_emb, kv_cache: None, use_flash_attn: cfg.use_flash_attn, }) } fn forward( &mut self, xs: &Tensor, attention_mask: Option<&Tensor>, seqlen_offset: usize, ) -> Result<Tensor> { let (b_sz, q_len, _) = xs.dims3()?; let query_states = self.q_proj.forward(xs)?; let key_states = self.k_proj.forward(xs)?; let value_states = self.v_proj.forward(xs)?; let query_states = query_states .reshape((b_sz, q_len, self.num_heads, self.head_dim))? .transpose(1, 2)? .contiguous()?; let key_states = key_states .reshape((b_sz, q_len, self.num_kv_heads, self.head_dim))? .transpose(1, 2)? .contiguous()?; let value_states = value_states .reshape((b_sz, q_len, self.num_kv_heads, self.head_dim))? 
.transpose(1, 2)?; let (query_states, key_states) = self.rotary_emb .apply_rotary_emb_qkv(&query_states, &key_states, seqlen_offset)?; let (key_states, value_states) = match &self.kv_cache { None => (key_states, value_states), Some((prev_k, prev_v)) => { let key_states = Tensor::cat(&[prev_k, &key_states], 2)?; let value_states = Tensor::cat(&[prev_v, &value_states], 2)?; (key_states, value_states) } }; self.kv_cache = Some((key_states.clone(), value_states.clone())); let key_states = crate::utils::repeat_kv(key_states, self.num_kv_groups)?; let value_states = crate::utils::repeat_kv(value_states, self.num_kv_groups)?; let attn_output = if self.use_flash_attn { // flash-attn expects (b_sz, seq_len, nheads, head_dim) let q = query_states.transpose(1, 2)?; let k = key_states.transpose(1, 2)?; let v = value_states.transpose(1, 2)?; let softmax_scale = 1f32 / (self.head_dim as f32).sqrt(); flash_attn(&q, &k, &v, softmax_scale, q_len > 1)?.transpose(1, 2)? } else { let scale = 1f64 / f64::sqrt(self.head_dim as f64); let attn_weights = (query_states.matmul(&key_states.transpose(2, 3)?)? * scale)?; let attn_weights = match attention_mask { None => attn_weights, Some(mask) => attn_weights.broadcast_add(mask)?, }; let attn_weights = candle_nn::ops::softmax_last_dim(&attn_weights)?; attn_weights.matmul(&value_states)? }; attn_output .transpose(1, 2)? .reshape((b_sz, q_len, self.num_heads * self.head_dim))? .apply(&self.o_proj) } fn clear_kv_cache(&mut self) { self.kv_cache = None } } #[derive(Debug, Clone)] struct DecoderLayer { self_attn: Attention, mlp: MLP, input_layernorm: RmsNorm, post_attention_layernorm: RmsNorm, } impl DecoderLayer { fn new(rotary_emb: Arc<RotaryEmbedding>, cfg: &Config, vb: VarBuilder) -> Result<Self> { let self_attn = Attention::new(rotary_emb, cfg, vb.pp("self_attn"))?; let mlp = MLP::new(cfg, vb.pp("mlp"))?; let input_layernorm = RmsNorm::new(cfg.hidden_size, cfg.rms_norm_eps, vb.pp("input_layernorm"))?; let post_attention_layernorm = RmsNorm::new( cfg.hidden_size, cfg.rms_norm_eps, vb.pp("post_attention_layernorm"), )?; Ok(Self { self_attn, mlp, input_layernorm, post_attention_layernorm, }) } fn forward( &mut self, xs: &Tensor, attention_mask: Option<&Tensor>, seqlen_offset: usize, ) -> Result<Tensor> { let residual = xs; let xs = self.input_layernorm.forward(xs)?; let xs = self.self_attn.forward(&xs, attention_mask, seqlen_offset)?; let xs = (xs + residual)?; let residual = &xs; let xs = xs.apply(&self.post_attention_layernorm)?.apply(&self.mlp)?; residual + xs } fn clear_kv_cache(&mut self) { self.self_attn.clear_kv_cache() } } #[derive(Debug, Clone)] pub struct Model { embed_tokens: candle_nn::Embedding, layers: Vec<DecoderLayer>, norm: RmsNorm, lm_head: Linear, sliding_window: Option<usize>, device: Device, dtype: DType, } impl Model { pub fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let vb_m = vb.pp("model"); let embed_tokens = candle_nn::embedding(cfg.vocab_size, cfg.hidden_size, vb_m.pp("embed_tokens"))?; let rotary_emb = Arc::new(RotaryEmbedding::new(vb.dtype(), cfg, vb_m.device())?); let mut layers = Vec::with_capacity(cfg.num_hidden_layers); let vb_l = vb_m.pp("layers"); for layer_idx in 0..cfg.num_hidden_layers { let layer = DecoderLayer::new(rotary_emb.clone(), cfg, vb_l.pp(layer_idx))?; layers.push(layer) } let norm = RmsNorm::new(cfg.hidden_size, cfg.rms_norm_eps, vb_m.pp("norm"))?; let lm_head = linear_no_bias(cfg.hidden_size, cfg.vocab_size, vb.pp("lm_head"))?; Ok(Self { embed_tokens, layers, norm, lm_head, sliding_window: cfg.sliding_window, 
device: vb.device().clone(), dtype: vb.dtype(), }) } fn prepare_decoder_attention_mask( &self, tgt_len: usize, seqlen_offset: usize, ) -> Result<Tensor> { let sliding_window = self.sliding_window.unwrap_or(tgt_len + 1); let mask: Vec<_> = (0..tgt_len) .flat_map(|i| { (0..tgt_len).map(move |j| { if i < j || j + sliding_window < i { f32::NEG_INFINITY } else { 0. } }) }) .collect(); let mask = Tensor::from_slice(&mask, (tgt_len, tgt_len), &self.device)?; let mask = if seqlen_offset > 0 { let mask0 = Tensor::zeros((tgt_len, seqlen_offset), DType::F32, &self.device)?; Tensor::cat(&[&mask0, &mask], D::Minus1)? } else { mask }; mask.expand((1, 1, tgt_len, tgt_len + seqlen_offset))? .to_dtype(self.dtype) } pub fn forward(&mut self, input_ids: &Tensor, seqlen_offset: usize) -> Result<Tensor> { let (_b_size, seq_len) = input_ids.dims2()?; let attention_mask = if seq_len <= 1 { None } else { let mask = self.prepare_decoder_attention_mask(seq_len, seqlen_offset)?; Some(mask) }; let mut xs = self.embed_tokens.forward(input_ids)?; for layer in self.layers.iter_mut() { xs = layer.forward(&xs, attention_mask.as_ref(), seqlen_offset)? } xs.narrow(1, seq_len - 1, 1)? .apply(&self.norm)? .apply(&self.lm_head) } pub fn clear_kv_cache(&mut self) { for layer in self.layers.iter_mut() { layer.clear_kv_cache() } } }
candle/candle-transformers/src/models/mistral.rs/0
{ "file_path": "candle/candle-transformers/src/models/mistral.rs", "repo_id": "candle", "token_count": 7523 }
41
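One detail worth making explicit from the code above is the attention mask. `prepare_decoder_attention_mask` builds an additive mask that is causal and, when `sliding_window` is set, also bounds how far back a query may attend; the mask is broadcast-added to the scaled scores \(QK^\top / \sqrt{d}\) before the softmax:

```latex
% Additive mask, W = sliding_window (or tgt_len + 1 when unset):
M_{ij} =
\begin{cases}
0 & \text{if } i - W \le j \le i,\\
-\infty & \text{otherwise.}
\end{cases}
```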
//! Text encoder as used in most OpenCLIP pretrained models //! https://github.com/mlfoundations/open_clip use candle::{DType, IndexOp, Result, Tensor, D}; use candle_nn::{ embedding, layer_norm, linear, ops::softmax_last_dim, Embedding, LayerNorm, Linear, Module, VarBuilder, }; #[derive(Debug, Clone)] pub struct Config { pub vocab_size: usize, pub embed_dim: usize, pub intermediate_size: usize, pub max_position_embeddings: usize, pub pad_with: Option<String>, pub num_hidden_layers: usize, pub num_attention_heads: usize, pub projection_dim: usize, } impl Config { pub fn vit_base_patch32() -> Self { Self { vocab_size: 49408, embed_dim: 512, intermediate_size: 2048, max_position_embeddings: 77, pad_with: None, num_hidden_layers: 12, num_attention_heads: 8, projection_dim: 512, } } } #[derive(Clone, Debug)] struct TextEmbeddings { token_embedding: Embedding, position_embedding: Tensor, } impl TextEmbeddings { fn new(vs: VarBuilder, c: &Config) -> Result<Self> { let token_embedding = embedding(c.vocab_size, c.embed_dim, vs.pp("token_embedding"))?; let position_embedding = vs.get( (c.max_position_embeddings, c.embed_dim), "positional_embedding", )?; Ok(TextEmbeddings { token_embedding, position_embedding, }) } } impl Module for TextEmbeddings { fn forward(&self, input_ids: &Tensor) -> Result<Tensor> { let seq_length = input_ids.dim(D::Minus1)?; let inputs_embeds = self.token_embedding.forward(input_ids)?; let position_embedding = self.position_embedding.narrow(0, 0, seq_length)?; inputs_embeds.broadcast_add(&position_embedding) } } #[derive(Clone, Debug)] struct Attention { k_proj: candle_nn::Linear, v_proj: candle_nn::Linear, q_proj: candle_nn::Linear, out_proj: Linear, head_dim: usize, scale: f64, num_attention_heads: usize, } impl Attention { fn new(vs: candle_nn::VarBuilder, c: &Config) -> Result<Self> { let embed_dim = c.embed_dim; let num_attention_heads = c.num_attention_heads; let in_proj_weights = vs .get((embed_dim * 3, embed_dim), "in_proj_weight")? .chunk(3, 0)?; let (q_w, k_w, v_w) = ( &in_proj_weights[0], &in_proj_weights[1], &in_proj_weights[2], ); let in_proj_biases = vs.get(embed_dim * 3, "in_proj_bias")?.chunk(3, 0)?; let (q_b, k_b, v_b) = (&in_proj_biases[0], &in_proj_biases[1], &in_proj_biases[2]); let q_proj = Linear::new(q_w.clone(), Some(q_b.clone())); let k_proj = Linear::new(k_w.clone(), Some(k_b.clone())); let v_proj = Linear::new(v_w.clone(), Some(v_b.clone())); let out_proj = candle_nn::linear(embed_dim, embed_dim, vs.pp("out_proj"))?; let head_dim = embed_dim / num_attention_heads; let scale = (head_dim as f64).powf(-0.5); Ok(Attention { k_proj, v_proj, q_proj, out_proj, head_dim, scale, num_attention_heads, }) } fn shape_multihead(&self, xs: &Tensor, bsz: usize, seq_len: usize) -> Result<Tensor> { xs.reshape((bsz, seq_len, self.num_attention_heads, self.head_dim))? .transpose(1, 2)? .contiguous()? .to_dtype(DType::F32) } fn forward(&self, xs: &Tensor) -> Result<Tensor> { let in_dtype = xs.dtype(); let (bsz, seq_len, embed_dim) = xs.dims3()?; let q = self.shape_multihead(&self.q_proj.forward(xs)?, bsz, seq_len)?; let k = self.shape_multihead(&self.k_proj.forward(xs)?, bsz, seq_len)?; let v = self.shape_multihead(&self.v_proj.forward(xs)?, bsz, seq_len)?; let q = (q * self.scale)?; let attn_weights = q.matmul(&k.transpose(D::Minus1, D::Minus2)?)?; let attn_weights = softmax_last_dim(&attn_weights)?; let attn_output = attn_weights.matmul(&v)?.to_dtype(in_dtype)?; let attn_output = attn_output .transpose(1, 2)? .contiguous()? 
.reshape((bsz, seq_len, embed_dim))?; let out = self.out_proj.forward(&attn_output)?; Ok(out) } } #[derive(Clone, Debug)] struct Mlp { fc1: Linear, fc2: Linear, } impl Mlp { fn new(vs: VarBuilder, c: &Config) -> Result<Self> { let fc1 = linear(c.embed_dim, c.intermediate_size, vs.pp("c_fc"))?; let fc2 = linear(c.intermediate_size, c.embed_dim, vs.pp("c_proj"))?; Ok(Mlp { fc1, fc2 }) } } impl Mlp { fn forward(&self, xs: &Tensor) -> Result<Tensor> { let xs = self.fc1.forward(xs)?; self.fc2.forward(&xs.gelu_erf()?) } } #[derive(Clone, Debug)] struct EncoderLayer { self_attn: Attention, layer_norm1: LayerNorm, mlp: Mlp, layer_norm2: LayerNorm, } impl EncoderLayer { fn new(vs: VarBuilder, c: &Config) -> Result<Self> { let self_attn = Attention::new(vs.pp("attn"), c)?; let layer_norm1 = layer_norm(c.embed_dim, 1e-5, vs.pp("ln_1"))?; let mlp = Mlp::new(vs.pp("mlp"), c)?; let layer_norm2 = layer_norm(c.embed_dim, 1e-5, vs.pp("ln_2"))?; Ok(EncoderLayer { self_attn, layer_norm1, mlp, layer_norm2, }) } fn forward(&self, xs: &Tensor) -> Result<Tensor> { let residual = xs; let xs = self.layer_norm1.forward(xs)?; let xs = self.self_attn.forward(&xs)?; let xs = (xs + residual)?; let residual = &xs; let xs = self.layer_norm2.forward(&xs)?; let xs = self.mlp.forward(&xs)?; let out = (xs + residual)?; Ok(out) } } #[derive(Clone, Debug)] pub struct Encoder { layers: Vec<EncoderLayer>, } impl Encoder { pub fn new(vs: VarBuilder, c: &Config) -> Result<Self> { let vs = vs.pp("resblocks"); let mut layers: Vec<EncoderLayer> = Vec::new(); for index in 0..c.num_hidden_layers { let layer = EncoderLayer::new(vs.pp(index.to_string()), c)?; layers.push(layer) } Ok(Encoder { layers }) } pub fn forward(&self, xs: &Tensor) -> Result<Tensor> { let mut xs = xs.clone(); for layer in self.layers.iter() { xs = layer.forward(&xs)?; } Ok(xs) } } /// A text transformer as used in CLIP variants. #[derive(Clone, Debug)] pub struct OpenClipTextTransformer { embeddings: TextEmbeddings, encoder: Encoder, final_layer_norm: LayerNorm, } impl OpenClipTextTransformer { pub fn new(vs: VarBuilder, c: &Config) -> Result<Self> { let embeddings = TextEmbeddings::new(vs.clone(), c)?; let final_layer_norm = layer_norm(c.embed_dim, 1e-5, vs.pp("ln_final"))?; let encoder = Encoder::new(vs.pp("transformer"), c)?; Ok(OpenClipTextTransformer { embeddings, encoder, final_layer_norm, }) } pub fn forward(&self, input_ids: &Tensor) -> Result<Tensor> { let input_ids = self.embeddings.forward(input_ids)?; let input_ids = self.encoder.forward(&input_ids)?; self.final_layer_norm.forward(&input_ids) } } impl Module for OpenClipTextTransformer { fn forward(&self, input_ids: &Tensor) -> Result<Tensor> { let output = self.forward(input_ids)?; let sequence_max_indices = input_ids.argmax(D::Minus1)?.to_dtype(DType::I64)?; let mut indices = Vec::new(); for (batch_idx, &seq_idx) in sequence_max_indices.to_vec1::<i64>()?.iter().enumerate() { let index = output.i((batch_idx, seq_idx as usize))?.unsqueeze(0)?; indices.push(index); } Tensor::cat(&indices, 0) } }
candle/candle-transformers/src/models/openclip/text_model.rs/0
{ "file_path": "candle/candle-transformers/src/models/openclip/text_model.rs", "repo_id": "candle", "token_count": 3955 }
42
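The `Module` implementation above pools each sequence by taking the hidden state at the position of the largest token id (typically the end-of-text token in CLIP vocabularies), i.e.:

```latex
\mathrm{pool}(x)_b = h_{b,\,j^\star_b}, \qquad j^\star_b = \arg\max_j \mathrm{ids}_{b,j}
```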
use crate::{quantized_nn::RmsNorm, utils::repeat_kv}; use candle::{ quantized::{gguf_file, QMatMul}, DType, Device, IndexOp, Result, Tensor, }; use candle_nn::{Embedding, Module}; use std::collections::HashMap; #[derive(Debug, Clone)] struct Mlp { feed_forward_w1: QMatMul, feed_forward_w2: QMatMul, feed_forward_w3: QMatMul, } impl Module for Mlp { fn forward(&self, xs: &Tensor) -> Result<Tensor> { let w1 = self.feed_forward_w1.forward(xs)?; let w3 = self.feed_forward_w3.forward(xs)?; self.feed_forward_w2 .forward(&(candle_nn::ops::silu(&w1)? * w3)?) } } #[derive(Debug, Clone)] struct LayerWeights { attention_wq: QMatMul, attention_wk: QMatMul, attention_wv: QMatMul, attention_bq: Tensor, attention_bk: Tensor, attention_bv: Tensor, attention_wo: QMatMul, attention_norm: RmsNorm, mlp: Mlp, ffn_norm: RmsNorm, n_head: usize, n_kv_head: usize, head_dim: usize, cos: Tensor, sin: Tensor, neg_inf: Tensor, kv_cache: Option<(Tensor, Tensor)>, span_attn: tracing::Span, span_rot: tracing::Span, span_mlp: tracing::Span, } fn masked_fill(on_false: &Tensor, mask: &Tensor, on_true: &Tensor) -> Result<Tensor> { let shape = mask.shape(); let m = mask.where_cond(&on_true.broadcast_as(shape.dims())?, on_false)?; Ok(m) } impl LayerWeights { fn apply_rotary_emb(&self, x: &Tensor, index_pos: usize) -> Result<Tensor> { let _enter = self.span_rot.enter(); let (_b_sz, _n_head, seq_len, _n_embd) = x.dims4()?; let cos = self.cos.narrow(0, index_pos, seq_len)?; let sin = self.sin.narrow(0, index_pos, seq_len)?; candle_nn::rotary_emb::rope(&x.contiguous()?, &cos, &sin) } fn forward_attn( &mut self, x: &Tensor, mask: Option<&Tensor>, index_pos: usize, ) -> Result<Tensor> { let _enter = self.span_attn.enter(); let (b_sz, seq_len, n_embd) = x.dims3()?; let q = self.attention_wq.forward(x)?; let k = self.attention_wk.forward(x)?; let v = self.attention_wv.forward(x)?; let q = q.broadcast_add(&self.attention_bq)?; let k = k.broadcast_add(&self.attention_bk)?; let v = v.broadcast_add(&self.attention_bv)?; let q = q .reshape((b_sz, seq_len, self.n_head, self.head_dim))? .transpose(1, 2)? .contiguous()?; let k = k .reshape((b_sz, seq_len, self.n_kv_head, self.head_dim))? .transpose(1, 2)? .contiguous()?; let v = v .reshape((b_sz, seq_len, self.n_kv_head, self.head_dim))? .transpose(1, 2)? .contiguous()?; // let (q, k) = self // .rotary_embedding // .apply_rotary_emb_qkv(&q, &k, index_pos)?; let q = self.apply_rotary_emb(&q, index_pos)?; let k = self.apply_rotary_emb(&k, index_pos)?; let (k, v) = match &self.kv_cache { None => (k, v), Some((k_cache, v_cache)) => { if index_pos == 0 { (k, v) } else { let k = Tensor::cat(&[k_cache, &k], 2)?; let v = Tensor::cat(&[v_cache, &v], 2)?; (k, v) } } }; self.kv_cache = Some((k.clone(), v.clone())); // Support for MQA, useful for 70B models and mistral. let k = repeat_kv(k, self.n_head / self.n_kv_head)?; let v = repeat_kv(v, self.n_head / self.n_kv_head)?; let att = (q.matmul(&k.t()?)? / (self.head_dim as f64).sqrt())?; let att = match mask { None => att, Some(mask) => { let mask = mask.broadcast_as(att.shape())?; masked_fill(&att, &mask, &self.neg_inf)? } }; let att = candle_nn::ops::softmax_last_dim(&att)?; // Convert to contiguous as matmul doesn't support strided vs for now. 
let y = att.matmul(&v.contiguous()?)?; let y = y.transpose(1, 2)?.reshape(&[b_sz, seq_len, n_embd])?; let y = self.attention_wo.forward(&y)?; Ok(y) } } pub struct ModelWeights { tok_embeddings: Embedding, layers: Vec<LayerWeights>, norm: RmsNorm, output: QMatMul, masks: HashMap<usize, Tensor>, span: tracing::Span, span_output: tracing::Span, } fn precomput_freqs_cis( head_dim: usize, freq_base: f32, context_length: usize, device: &Device, ) -> Result<(Tensor, Tensor)> { let theta: Vec<_> = (0..head_dim) .step_by(2) .map(|i| 1f32 / freq_base.powf(i as f32 / head_dim as f32)) .collect(); let theta = Tensor::new(theta.as_slice(), device)?; let idx_theta = Tensor::arange(0, context_length as u32, device)? .to_dtype(DType::F32)? .reshape((context_length, 1))? .matmul(&theta.reshape((1, theta.elem_count()))?)?; let cos = idx_theta.cos()?; let sin = idx_theta.sin()?; Ok((cos, sin)) } impl ModelWeights { pub fn from_gguf<R: std::io::Seek + std::io::Read>( ct: gguf_file::Content, reader: &mut R, device: &Device, ) -> Result<Self> { let md_get = |s: &str| match ct.metadata.get(s) { None => candle::bail!("cannot find {s} in metadata"), Some(v) => Ok(v), }; let head_count = md_get("qwen2.attention.head_count")?.to_u32()? as usize; let head_count_kv = md_get("qwen2.attention.head_count_kv")?.to_u32()? as usize; let embedding_length = md_get("qwen2.embedding_length")?.to_u32()? as usize; let context_length = md_get("qwen2.context_length")?.to_u32()? as usize; let block_count = md_get("qwen2.block_count")?.to_u32()? as usize; let rms_norm_eps = md_get("qwen2.attention.layer_norm_rms_epsilon")?.to_f32()? as f64; let rope_freq_base = md_get("qwen2.rope.freq_base") .and_then(|m| m.to_f32()) .unwrap_or(10000f32); let head_dim = embedding_length / head_count; let neg_inf = Tensor::new(f32::NEG_INFINITY, device)?; let tok_embeddings = ct.tensor(reader, "token_embd.weight", device)?; let tok_embeddings = tok_embeddings.dequantize(device)?; let norm = RmsNorm::from_qtensor( ct.tensor(reader, "output_norm.weight", device)?, rms_norm_eps, )?; let output = match ct.tensor(reader, "output.weight", device) { Ok(v) => QMatMul::from_qtensor(v)?, _ => { // use tie_word_embeddings QMatMul::from_qtensor(ct.tensor(reader, "token_embd.weight", device)?)? 
} }; let (cos, sin) = precomput_freqs_cis(head_dim, rope_freq_base, context_length, device)?; let mut layers = Vec::with_capacity(block_count); for layer_idx in 0..block_count { let prefix = format!("blk.{layer_idx}"); let attention_wq = ct.tensor(reader, &format!("{prefix}.attn_q.weight"), device)?; let attention_wk = ct.tensor(reader, &format!("{prefix}.attn_k.weight"), device)?; let attention_wv = ct.tensor(reader, &format!("{prefix}.attn_v.weight"), device)?; let attention_bq = ct.tensor(reader, &format!("{prefix}.attn_q.bias"), device)?; let attention_bk = ct.tensor(reader, &format!("{prefix}.attn_k.bias"), device)?; let attention_bv = ct.tensor(reader, &format!("{prefix}.attn_v.bias"), device)?; let attention_wo = ct.tensor(reader, &format!("{prefix}.attn_output.weight"), device)?; let mlp = { let feed_forward_w1 = ct.tensor(reader, &format!("{prefix}.ffn_gate.weight"), device)?; let feed_forward_w2 = ct.tensor(reader, &format!("{prefix}.ffn_down.weight"), device)?; let feed_forward_w3 = ct.tensor(reader, &format!("{prefix}.ffn_up.weight"), device)?; Mlp { feed_forward_w1: QMatMul::from_qtensor(feed_forward_w1)?, feed_forward_w2: QMatMul::from_qtensor(feed_forward_w2)?, feed_forward_w3: QMatMul::from_qtensor(feed_forward_w3)?, } }; let attention_norm = ct.tensor(reader, &format!("{prefix}.attn_norm.weight"), device)?; let ffn_norm = ct.tensor(reader, &format!("{prefix}.ffn_norm.weight"), device)?; let span_attn = tracing::span!(tracing::Level::TRACE, "attn"); let span_rot = tracing::span!(tracing::Level::TRACE, "attn-rot"); let span_mlp = tracing::span!(tracing::Level::TRACE, "attn-mlp"); layers.push(LayerWeights { attention_wq: QMatMul::from_qtensor(attention_wq)?, attention_wk: QMatMul::from_qtensor(attention_wk)?, attention_wv: QMatMul::from_qtensor(attention_wv)?, attention_bq: attention_bq.dequantize(device)?, attention_bk: attention_bk.dequantize(device)?, attention_bv: attention_bv.dequantize(device)?, attention_wo: QMatMul::from_qtensor(attention_wo)?, attention_norm: RmsNorm::from_qtensor(attention_norm, rms_norm_eps)?, cos: cos.clone(), sin: sin.clone(), mlp, ffn_norm: RmsNorm::from_qtensor(ffn_norm, rms_norm_eps)?, n_head: head_count, n_kv_head: head_count_kv, head_dim, neg_inf: neg_inf.clone(), kv_cache: None, span_attn, span_rot, span_mlp, }); } let span = tracing::span!(tracing::Level::TRACE, "model"); let span_output = tracing::span!(tracing::Level::TRACE, "output"); Ok(Self { tok_embeddings: Embedding::new(tok_embeddings, embedding_length), layers, norm, output, masks: HashMap::new(), span, span_output, }) } fn mask(&mut self, t: usize, device: &Device) -> Result<Tensor> { if let Some(mask) = self.masks.get(&t) { Ok(mask.clone()) } else { let mask: Vec<_> = (0..t) .flat_map(|i| (0..t).map(move |j| u8::from(j > i))) .collect(); let mask = Tensor::from_slice(&mask, (t, t), device)?; self.masks.insert(t, mask.clone()); Ok(mask) } } pub fn forward(&mut self, x: &Tensor, index_pos: usize) -> Result<Tensor> { let (_b_sz, seq_len) = x.dims2()?; let mask = if seq_len == 1 { None } else { Some(self.mask(seq_len, x.device())?) 
}; let _enter = self.span.enter(); let mut layer_in = self.tok_embeddings.forward(x)?; for layer in self.layers.iter_mut() { let x = layer_in; let residual = &x; let x = layer.attention_norm.forward(&x)?; let attn = layer.forward_attn(&x, mask.as_ref(), index_pos)?; let x = (attn + residual)?; // MLP let _enter = layer.span_mlp.enter(); let residual = &x; let x = layer.ffn_norm.forward(&x)?; let x = layer.mlp.forward(&x)?; let x = (x + residual)?; layer_in = x } let x = self.norm.forward(&layer_in)?; let x = x.i((.., seq_len - 1, ..))?; let _enter = self.span_output.enter(); self.output.forward(&x) } }
candle/candle-transformers/src/models/quantized_qwen2.rs/0
{ "file_path": "candle/candle-transformers/src/models/quantized_qwen2.rs", "repo_id": "candle", "token_count": 6259 }
43
pub use crate::models::with_tracing::Linear; use candle::{Result, Tensor}; use candle_nn::{Module, VarBuilder}; pub mod image_encoder; pub mod mask_decoder; pub mod prompt_encoder; pub mod sam; pub mod tiny_vit; pub mod transformer; pub fn linear(vb: VarBuilder, in_dim: usize, out_dim: usize, bias: bool) -> Result<Linear> { if bias { crate::models::with_tracing::linear(in_dim, out_dim, vb) } else { crate::models::with_tracing::linear_no_bias(in_dim, out_dim, vb) } } #[derive(Debug)] pub struct LayerNorm2d { weight: Tensor, bias: Tensor, num_channels: usize, eps: f64, } impl LayerNorm2d { pub fn new(num_channels: usize, eps: f64, vb: VarBuilder) -> Result<Self> { let weight = vb.get(num_channels, "weight")?; let bias = vb.get(num_channels, "bias")?; Ok(Self { weight, bias, num_channels, eps, }) } } impl Module for LayerNorm2d { fn forward(&self, xs: &Tensor) -> Result<Tensor> { let u = xs.mean_keepdim(1)?; let xs = xs.broadcast_sub(&u)?; let s = xs.sqr()?.mean_keepdim(1)?; let xs = xs.broadcast_div(&(s + self.eps)?.sqrt()?)?; xs.broadcast_mul(&self.weight.reshape((1, self.num_channels, 1, 1))?)? .broadcast_add(&self.bias.reshape((1, self.num_channels, 1, 1))?) } } #[derive(Debug)] pub struct MlpBlock { lin1: Linear, lin2: Linear, activation: candle_nn::Activation, span: tracing::Span, } impl MlpBlock { pub fn new( embedding_dim: usize, mlp_dim: usize, activation: candle_nn::Activation, vb: VarBuilder, ) -> Result<Self> { let lin1 = linear(vb.pp("lin1"), embedding_dim, mlp_dim, true)?; let lin2 = linear(vb.pp("lin2"), mlp_dim, embedding_dim, true)?; let span = tracing::span!(tracing::Level::TRACE, "mlp-block"); Ok(Self { lin1, lin2, activation, span, }) } } impl Module for MlpBlock { fn forward(&self, xs: &Tensor) -> Result<Tensor> { let _enter = self.span.enter(); xs.apply(&self.lin1)? .apply(&self.activation)? .apply(&self.lin2) } }
candle/candle-transformers/src/models/segment_anything/mod.rs/0
{ "file_path": "candle/candle-transformers/src/models/segment_anything/mod.rs", "repo_id": "candle", "token_count": 1119 }
44
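`LayerNorm2d` above normalizes NCHW activations across the channel dimension only, with a learned per-channel scale and shift:

```latex
\mu_{n,h,w} = \tfrac{1}{C}\sum_c x_{n,c,h,w}, \qquad
\sigma^2_{n,h,w} = \tfrac{1}{C}\sum_c \big(x_{n,c,h,w} - \mu_{n,h,w}\big)^2
y_{n,c,h,w} = \frac{x_{n,c,h,w} - \mu_{n,h,w}}{\sqrt{\sigma^2_{n,h,w} + \epsilon}}\, w_c + b_c
```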
use candle::{Device, Result, Tensor}; pub fn linspace(start: f64, stop: f64, steps: usize) -> Result<Tensor> { if steps == 0 { Tensor::from_vec(Vec::<f64>::new(), steps, &Device::Cpu) } else if steps == 1 { Tensor::from_vec(vec![start], steps, &Device::Cpu) } else { let delta = (stop - start) / (steps - 1) as f64; let vs = (0..steps) .map(|step| start + step as f64 * delta) .collect::<Vec<_>>(); Tensor::from_vec(vs, steps, &Device::Cpu) } } /// A linear interpolator for a sorted array of x and y values. struct LinearInterpolator<'x, 'y> { xp: &'x [f64], fp: &'y [f64], cache: usize, } impl<'x, 'y> LinearInterpolator<'x, 'y> { fn accel_find(&mut self, x: f64) -> usize { let xidx = self.cache; if x < self.xp[xidx] { self.cache = self.xp[0..xidx].partition_point(|o| *o < x); self.cache = self.cache.saturating_sub(1); } else if x >= self.xp[xidx + 1] { self.cache = self.xp[xidx..self.xp.len()].partition_point(|o| *o < x) + xidx; self.cache = self.cache.saturating_sub(1); } self.cache } fn eval(&mut self, x: f64) -> f64 { if x < self.xp[0] || x > self.xp[self.xp.len() - 1] { return f64::NAN; } let idx = self.accel_find(x); let x_l = self.xp[idx]; let x_h = self.xp[idx + 1]; let y_l = self.fp[idx]; let y_h = self.fp[idx + 1]; let dx = x_h - x_l; if dx > 0.0 { y_l + (x - x_l) / dx * (y_h - y_l) } else { f64::NAN } } } pub fn interp(x: &[f64], xp: &[f64], fp: &[f64]) -> Vec<f64> { let mut interpolator = LinearInterpolator { xp, fp, cache: 0 }; x.iter().map(|&x| interpolator.eval(x)).collect() }
candle/candle-transformers/src/models/stable_diffusion/utils.rs/0
{ "file_path": "candle/candle-transformers/src/models/stable_diffusion/utils.rs", "repo_id": "candle", "token_count": 979 }
45
use super::common::{AttnBlock, GlobalResponseNorm, LayerNormNoWeights, TimestepBlock, WLayerNorm}; use candle::{DType, Module, Result, Tensor, D}; use candle_nn::VarBuilder; #[derive(Debug)] pub struct ResBlockStageB { depthwise: candle_nn::Conv2d, norm: WLayerNorm, channelwise_lin1: candle_nn::Linear, channelwise_grn: GlobalResponseNorm, channelwise_lin2: candle_nn::Linear, } impl ResBlockStageB { pub fn new(c: usize, c_skip: usize, ksize: usize, vb: VarBuilder) -> Result<Self> { let cfg = candle_nn::Conv2dConfig { groups: c, padding: ksize / 2, ..Default::default() }; let depthwise = candle_nn::conv2d(c, c, ksize, cfg, vb.pp("depthwise"))?; let norm = WLayerNorm::new(c)?; let channelwise_lin1 = candle_nn::linear(c + c_skip, c * 4, vb.pp("channelwise.0"))?; let channelwise_grn = GlobalResponseNorm::new(4 * c, vb.pp("channelwise.2"))?; let channelwise_lin2 = candle_nn::linear(c * 4, c, vb.pp("channelwise.4"))?; Ok(Self { depthwise, norm, channelwise_lin1, channelwise_grn, channelwise_lin2, }) } pub fn forward(&self, xs: &Tensor, x_skip: Option<&Tensor>) -> Result<Tensor> { let x_res = xs; let xs = xs.apply(&self.depthwise)?.apply(&self.norm)?; let xs = match x_skip { None => xs.clone(), Some(x_skip) => Tensor::cat(&[&xs, x_skip], 1)?, }; let xs = xs .permute((0, 2, 3, 1))? .contiguous()? .apply(&self.channelwise_lin1)? .gelu()? .apply(&self.channelwise_grn)? .apply(&self.channelwise_lin2)? .permute((0, 3, 1, 2))?; xs + x_res } } #[derive(Debug)] struct SubBlock { res_block: ResBlockStageB, ts_block: TimestepBlock, attn_block: Option<AttnBlock>, } #[derive(Debug)] struct DownBlock { layer_norm: Option<WLayerNorm>, conv: Option<candle_nn::Conv2d>, sub_blocks: Vec<SubBlock>, } #[derive(Debug)] struct UpBlock { sub_blocks: Vec<SubBlock>, layer_norm: Option<WLayerNorm>, conv: Option<candle_nn::ConvTranspose2d>, } #[derive(Debug)] pub struct WDiffNeXt { clip_mapper: candle_nn::Linear, effnet_mappers: Vec<Option<candle_nn::Conv2d>>, seq_norm: LayerNormNoWeights, embedding_conv: candle_nn::Conv2d, embedding_ln: WLayerNorm, down_blocks: Vec<DownBlock>, up_blocks: Vec<UpBlock>, clf_ln: WLayerNorm, clf_conv: candle_nn::Conv2d, c_r: usize, patch_size: usize, } impl WDiffNeXt { #[allow(clippy::too_many_arguments)] pub fn new( c_in: usize, c_out: usize, c_r: usize, c_cond: usize, clip_embd: usize, patch_size: usize, use_flash_attn: bool, vb: VarBuilder, ) -> Result<Self> { const C_HIDDEN: [usize; 4] = [320, 640, 1280, 1280]; const BLOCKS: [usize; 4] = [4, 4, 14, 4]; const NHEAD: [usize; 4] = [1, 10, 20, 20]; const INJECT_EFFNET: [bool; 4] = [false, true, true, true]; const EFFNET_EMBD: usize = 16; let clip_mapper = candle_nn::linear(clip_embd, c_cond, vb.pp("clip_mapper"))?; let mut effnet_mappers = Vec::with_capacity(2 * INJECT_EFFNET.len()); let vb_e = vb.pp("effnet_mappers"); for (i, &inject) in INJECT_EFFNET.iter().enumerate() { let c = if inject { Some(candle_nn::conv2d( EFFNET_EMBD, c_cond, 1, Default::default(), vb_e.pp(i), )?) } else { None }; effnet_mappers.push(c) } for (i, &inject) in INJECT_EFFNET.iter().rev().enumerate() { let c = if inject { Some(candle_nn::conv2d( EFFNET_EMBD, c_cond, 1, Default::default(), vb_e.pp(i + INJECT_EFFNET.len()), )?) 
} else { None }; effnet_mappers.push(c) } let seq_norm = LayerNormNoWeights::new(c_cond)?; let embedding_ln = WLayerNorm::new(C_HIDDEN[0])?; let embedding_conv = candle_nn::conv2d( c_in * patch_size * patch_size, C_HIDDEN[0], 1, Default::default(), vb.pp("embedding.1"), )?; let mut down_blocks = Vec::with_capacity(C_HIDDEN.len()); for (i, &c_hidden) in C_HIDDEN.iter().enumerate() { let vb = vb.pp("down_blocks").pp(i); let (layer_norm, conv, start_layer_i) = if i > 0 { let layer_norm = WLayerNorm::new(C_HIDDEN[i - 1])?; let cfg = candle_nn::Conv2dConfig { stride: 2, ..Default::default() }; let conv = candle_nn::conv2d(C_HIDDEN[i - 1], c_hidden, 2, cfg, vb.pp("0.1"))?; (Some(layer_norm), Some(conv), 1) } else { (None, None, 0) }; let mut sub_blocks = Vec::with_capacity(BLOCKS[i]); let mut layer_i = start_layer_i; for _j in 0..BLOCKS[i] { let c_skip = if INJECT_EFFNET[i] { c_cond } else { 0 }; let res_block = ResBlockStageB::new(c_hidden, c_skip, 3, vb.pp(layer_i))?; layer_i += 1; let ts_block = TimestepBlock::new(c_hidden, c_r, vb.pp(layer_i))?; layer_i += 1; let attn_block = if i == 0 { None } else { let attn_block = AttnBlock::new( c_hidden, c_cond, NHEAD[i], true, use_flash_attn, vb.pp(layer_i), )?; layer_i += 1; Some(attn_block) }; let sub_block = SubBlock { res_block, ts_block, attn_block, }; sub_blocks.push(sub_block) } let down_block = DownBlock { layer_norm, conv, sub_blocks, }; down_blocks.push(down_block) } let mut up_blocks = Vec::with_capacity(C_HIDDEN.len()); for (i, &c_hidden) in C_HIDDEN.iter().enumerate().rev() { let vb = vb.pp("up_blocks").pp(C_HIDDEN.len() - 1 - i); let mut sub_blocks = Vec::with_capacity(BLOCKS[i]); let mut layer_i = 0; for j in 0..BLOCKS[i] { let c_skip = if INJECT_EFFNET[i] { c_cond } else { 0 }; let c_skip_res = if i < BLOCKS.len() - 1 && j == 0 { c_hidden + c_skip } else { c_skip }; let res_block = ResBlockStageB::new(c_hidden, c_skip_res, 3, vb.pp(layer_i))?; layer_i += 1; let ts_block = TimestepBlock::new(c_hidden, c_r, vb.pp(layer_i))?; layer_i += 1; let attn_block = if i == 0 { None } else { let attn_block = AttnBlock::new( c_hidden, c_cond, NHEAD[i], true, use_flash_attn, vb.pp(layer_i), )?; layer_i += 1; Some(attn_block) }; let sub_block = SubBlock { res_block, ts_block, attn_block, }; sub_blocks.push(sub_block) } let (layer_norm, conv) = if i > 0 { let layer_norm = WLayerNorm::new(C_HIDDEN[i - 1])?; let cfg = candle_nn::ConvTranspose2dConfig { stride: 2, ..Default::default() }; let conv = candle_nn::conv_transpose2d( c_hidden, C_HIDDEN[i - 1], 2, cfg, vb.pp(layer_i).pp(1), )?; (Some(layer_norm), Some(conv)) } else { (None, None) }; let up_block = UpBlock { layer_norm, conv, sub_blocks, }; up_blocks.push(up_block) } let clf_ln = WLayerNorm::new(C_HIDDEN[0])?; let clf_conv = candle_nn::conv2d( C_HIDDEN[0], 2 * c_out * patch_size * patch_size, 1, Default::default(), vb.pp("clf.1"), )?; Ok(Self { clip_mapper, effnet_mappers, seq_norm, embedding_conv, embedding_ln, down_blocks, up_blocks, clf_ln, clf_conv, c_r, patch_size, }) } fn gen_r_embedding(&self, r: &Tensor) -> Result<Tensor> { const MAX_POSITIONS: usize = 10000; let r = (r * MAX_POSITIONS as f64)?; let half_dim = self.c_r / 2; let emb = (MAX_POSITIONS as f64).ln() / (half_dim - 1) as f64; let emb = (Tensor::arange(0u32, half_dim as u32, r.device())?.to_dtype(DType::F32)? * -emb)? .exp()?; let emb = r.unsqueeze(1)?.broadcast_mul(&emb.unsqueeze(0)?)?; let emb = Tensor::cat(&[emb.sin()?, emb.cos()?], 1)?; let emb = if self.c_r % 2 == 1 { emb.pad_with_zeros(D::Minus1, 0, 1)? 
} else { emb }; emb.to_dtype(r.dtype()) } fn gen_c_embeddings(&self, clip: &Tensor) -> Result<Tensor> { clip.apply(&self.clip_mapper)?.apply(&self.seq_norm) } pub fn forward( &self, xs: &Tensor, r: &Tensor, effnet: &Tensor, clip: Option<&Tensor>, ) -> Result<Tensor> { const EPS: f64 = 1e-3; let r_embed = self.gen_r_embedding(r)?; let clip = match clip { None => None, Some(clip) => Some(self.gen_c_embeddings(clip)?), }; let x_in = xs; let mut xs = xs .apply(&|xs: &_| candle_nn::ops::pixel_unshuffle(xs, self.patch_size))? .apply(&self.embedding_conv)? .apply(&self.embedding_ln)?; let mut level_outputs = Vec::new(); for (i, down_block) in self.down_blocks.iter().enumerate() { if let Some(ln) = &down_block.layer_norm { xs = xs.apply(ln)? } if let Some(conv) = &down_block.conv { xs = xs.apply(conv)? } let skip = match &self.effnet_mappers[i] { None => None, Some(m) => { let effnet = effnet.interpolate2d(xs.dim(D::Minus2)?, xs.dim(D::Minus1)?)?; Some(m.forward(&effnet)?) } }; for block in down_block.sub_blocks.iter() { xs = block.res_block.forward(&xs, skip.as_ref())?; xs = block.ts_block.forward(&xs, &r_embed)?; if let Some(attn_block) = &block.attn_block { xs = attn_block.forward(&xs, clip.as_ref().unwrap())?; } } level_outputs.push(xs.clone()) } level_outputs.reverse(); let mut xs = level_outputs[0].clone(); for (i, up_block) in self.up_blocks.iter().enumerate() { let effnet_c = match &self.effnet_mappers[self.down_blocks.len() + i] { None => None, Some(m) => { let effnet = effnet.interpolate2d(xs.dim(D::Minus2)?, xs.dim(D::Minus1)?)?; Some(m.forward(&effnet)?) } }; for (j, block) in up_block.sub_blocks.iter().enumerate() { let skip = if j == 0 && i > 0 { Some(&level_outputs[i]) } else { None }; let skip = match (skip, effnet_c.as_ref()) { (Some(skip), Some(effnet_c)) => Some(Tensor::cat(&[skip, effnet_c], 1)?), (None, Some(skip)) | (Some(skip), None) => Some(skip.clone()), (None, None) => None, }; xs = block.res_block.forward(&xs, skip.as_ref())?; xs = block.ts_block.forward(&xs, &r_embed)?; if let Some(attn_block) = &block.attn_block { xs = attn_block.forward(&xs, clip.as_ref().unwrap())?; } } if let Some(ln) = &up_block.layer_norm { xs = xs.apply(ln)? } if let Some(conv) = &up_block.conv { xs = xs.apply(conv)? } } let ab = xs .apply(&self.clf_ln)? .apply(&self.clf_conv)? .apply(&|xs: &_| candle_nn::ops::pixel_shuffle(xs, self.patch_size))? .chunk(2, 1)?; let b = ((candle_nn::ops::sigmoid(&ab[1])? * (1. - EPS * 2.))? + EPS)?; (x_in - &ab[0])? / b } }
candle/candle-transformers/src/models/wuerstchen/diffnext.rs/0
{ "file_path": "candle/candle-transformers/src/models/wuerstchen/diffnext.rs", "repo_id": "candle", "token_count": 8148 }
46
use candle::{DType, Device, Tensor}; use candle_nn::VarBuilder; use candle_transformers::{ generation::LogitsProcessor, models::{moondream, quantized_moondream}, }; use candle_wasm_example_moondream::console_log; use js_sys::Date; use serde::{Deserialize, Serialize}; use tokenizers::Tokenizer; use wasm_bindgen::prelude::*; enum SelectedModel { Moondream(moondream::Model), Quantized(quantized_moondream::Model), } #[wasm_bindgen] pub struct Model { model: SelectedModel, tokenizer: Tokenizer, logits_processor: LogitsProcessor, tokens: Vec<u32>, repeat_penalty: f32, repeat_last_n: usize, index: usize, bos_token: Option<Tensor>, image_embeddings: Option<Tensor>, } #[derive(Serialize, Deserialize)] struct Output { token: String, token_id: u32, } #[derive(Serialize, Deserialize)] struct InitInput { prompt: String, seed: u64, temp: f64, top_p: f64, repeat_penalty: f32, repeat_last_n: usize, verbose_prompt: bool, } #[wasm_bindgen] impl Model { #[wasm_bindgen(constructor)] pub fn load(weights: Vec<u8>, tokenizer: Vec<u8>, quantized: bool) -> Result<Model, JsError> { console_error_panic_hook::set_once(); console_log!("loading model"); let device = Device::Cpu; let config = moondream::Config::v2(); console_log!("config loaded in {:?}", Date::now()); let tokenizer = Tokenizer::from_bytes(&tokenizer).map_err(|m| JsError::new(&m.to_string()))?; let start = Date::now(); console_log!("weights len: {:?}", weights.len()); let model = if quantized { let vb = candle_transformers::quantized_var_builder::VarBuilder::from_gguf_buffer( &weights, &device, )?; console_log!("weights loaded"); let model = quantized_moondream::Model::new(&config, vb)?; SelectedModel::Quantized(model) } else { let device = &Device::Cpu; let vb = VarBuilder::from_buffered_safetensors(weights, DType::F32, device)?; let model = moondream::Model::new(&config, vb)?; SelectedModel::Moondream(model) }; console_log!("model loaded in {:?}s", (Date::now() - start) / 1000.); let logits_processor = LogitsProcessor::new(299792458, None, None); Ok(Self { model, tokenizer, tokens: vec![], logits_processor, repeat_penalty: 1., repeat_last_n: 64, bos_token: None, image_embeddings: None, index: 0, }) } pub fn set_image_embeddings(&mut self, image: Vec<u8>) -> Result<(), JsError> { let device = Device::Cpu; console_log!("loading image as tensor"); let start = Date::now(); let image: Tensor = self.load_image(image)?.to_device(&device)?; console_log!("image loaded in {:?}s", (Date::now() - start) / 1000.); let start = Date::now(); let image_embeds = &image.unsqueeze(0)?; let image_embeds = match &self.model { SelectedModel::Moondream(ref m) => image_embeds.apply(m.vision_encoder())?, SelectedModel::Quantized(ref m) => image_embeds.apply(m.vision_encoder())?, }; console_log!( "loaded and encoded the image {image:?} in {:?}", (Date::now() - start) / 1000. ); self.image_embeddings = Some(image_embeds); Ok(()) } #[wasm_bindgen] pub fn init_with_image_prompt(&mut self, input: JsValue) -> Result<JsValue, JsError> { let InitInput { prompt, seed, temp, top_p, repeat_penalty, repeat_last_n, verbose_prompt, } = serde_wasm_bindgen::from_value(input).map_err(|m| JsError::new(&m.to_string()))?; let device = Device::Cpu; let prompt = format!("\n\nQuestion: {0}\n\nAnswer:", prompt); match &mut self.model { SelectedModel::Moondream(m) => m.text_model.clear_kv_cache(), SelectedModel::Quantized(m) => m.text_model.clear_kv_cache(), }; let temp = if temp <= 0. { None } else { Some(temp) }; let top_p = if top_p <= 0. || top_p >= 1. 
{ None } else { Some(top_p) }; self.logits_processor = LogitsProcessor::new(seed, temp, top_p); self.repeat_penalty = repeat_penalty; self.repeat_last_n = repeat_last_n; self.tokens.clear(); self.index = 0; // Moondream tokenizer bos_token is "<|endoftext|>" // https://huggingface.co/vikhyatk/moondream2/blob/main/special_tokens_map.json let special_token = match self.tokenizer.get_vocab(true).get("<|endoftext|>") { Some(token) => *token, None => return Err(JsError::new("BOS token not found in the tokenizer.")), }; self.bos_token = Some(Tensor::new(&[special_token], &device)?.unsqueeze(0)?); let tokens = self .tokenizer .encode(prompt, true) .map_err(|m| JsError::new(&m.to_string()))?; if tokens.is_empty() { return Err(JsError::new( "Empty prompts are not supported in the Moondream model.", )); } if verbose_prompt { for (token, id) in tokens.get_tokens().iter().zip(tokens.get_ids().iter()) { let token = token.replace('▁', " ").replace("<0x0A>", "\n"); println!("{id:7} -> '{token}'"); } } let tokens = tokens.get_ids().to_vec(); let text = match self.process(&tokens) { Ok(text) => text, Err(_e) => { console_log!("error decoding token"); Output { token: "".to_string(), token_id: 0, } } }; Ok(serde_wasm_bindgen::to_value(&text)?) } #[wasm_bindgen] pub fn next_token(&mut self) -> Result<JsValue, JsError> { let last_token = *self.tokens.last().unwrap(); let text = match self.process(&[last_token]) { Ok(text) => text, Err(_e) => { console_log!("error decoding token"); Output { token: "".to_string(), token_id: 0, } } }; Ok(serde_wasm_bindgen::to_value(&text)?) } } impl Model { fn load_image(&self, image: Vec<u8>) -> Result<Tensor, JsError> { let img = image::ImageReader::new(std::io::Cursor::new(image)) .with_guessed_format()? .decode() .map_err(|e| JsError::new(&e.to_string()))? .resize_to_fill(378, 378, image::imageops::FilterType::Triangle); // Adjusted to 378x378 let img = img.to_rgb8(); let data = img.into_raw(); let data = Tensor::from_vec(data, (378, 378, 3), &Device::Cpu)?.permute((2, 0, 1))?; let mean = Tensor::new(&[0.5f32, 0.5, 0.5], &Device::Cpu)?.reshape((3, 1, 1))?; let std = Tensor::new(&[0.5f32, 0.5, 0.5], &Device::Cpu)?.reshape((3, 1, 1))?; (data.to_dtype(candle::DType::F32)? / 255.)? .broadcast_sub(&mean)? .broadcast_div(&std) .map_err(|e| JsError::new(&e.to_string())) } } impl Model { fn process(&mut self, tokens: &[u32]) -> Result<Output, JsError> { let image_embeddings = match &self.image_embeddings { Some(embeddings) => embeddings, None => return Err(JsError::new("Image embeddings are not set.")), }; let bos_token = match &self.bos_token { Some(token) => token, None => return Err(JsError::new("BOS token is not set.")), }; let device = Device::Cpu; let context_size = if self.index > 0 { 1 } else { tokens.len() }; let ctxt = &tokens[tokens.len().saturating_sub(context_size)..]; let input = Tensor::new(ctxt, &device)?.unsqueeze(0)?; let logits = if self.index > 0 { match self.model { SelectedModel::Moondream(ref mut model) => model.text_model.forward(&input)?, SelectedModel::Quantized(ref mut model) => model.text_model.forward(&input)?, } } else { match self.model { SelectedModel::Moondream(ref mut model) => { model .text_model .forward_with_img(bos_token, &input, image_embeddings)? } SelectedModel::Quantized(ref mut model) => { model .text_model .forward_with_img(bos_token, &input, image_embeddings)? } } }; let logits = logits.squeeze(0)?.to_dtype(DType::F32)?; let logits = if self.repeat_penalty == 1. 
{ logits } else { let start_at = tokens.len().saturating_sub(self.repeat_last_n); candle_transformers::utils::apply_repeat_penalty( &logits, self.repeat_penalty, &tokens[start_at..], )? }; let next_token = self.logits_processor.sample(&logits)?; self.tokens.push(next_token); let token = match self.tokenizer.decode(&[next_token], true) { Ok(token) => token, Err(e) => { console_log!("error decoding token: {:?}", e); "".to_string() } }; self.index += 1; Ok(Output { token, token_id: next_token, }) } } fn main() { console_error_panic_hook::set_once(); }
candle/candle-wasm-examples/moondream/src/bin/m.rs/0
{ "file_path": "candle/candle-wasm-examples/moondream/src/bin/m.rs", "repo_id": "candle", "token_count": 4976 }
47
use crate::console_log; use crate::worker::{ModelData, Segment, Worker, WorkerInput, WorkerOutput}; use js_sys::Date; use wasm_bindgen::prelude::*; use wasm_bindgen_futures::JsFuture; use yew::{html, Component, Context, Html}; use yew_agent::{Bridge, Bridged}; const SAMPLE_NAMES: [&str; 6] = [ "audios/samples_jfk.wav", "audios/samples_a13.wav", "audios/samples_gb0.wav", "audios/samples_gb1.wav", "audios/samples_hp0.wav", "audios/samples_mm0.wav", ]; async fn fetch_url(url: &str) -> Result<Vec<u8>, JsValue> { use web_sys::{Request, RequestCache, RequestInit, RequestMode, Response}; let window = web_sys::window().ok_or("window")?; let opts = RequestInit::new(); opts.set_method("GET"); opts.set_mode(RequestMode::Cors); opts.set_cache(RequestCache::NoCache); let request = Request::new_with_str_and_init(url, &opts)?; let resp_value = JsFuture::from(window.fetch_with_request(&request)).await?; // `resp_value` is a `Response` object. assert!(resp_value.is_instance_of::<Response>()); let resp: Response = resp_value.dyn_into()?; let data = JsFuture::from(resp.blob()?).await?; let blob = web_sys::Blob::from(data); let array_buffer = JsFuture::from(blob.array_buffer()).await?; let data = js_sys::Uint8Array::new(&array_buffer).to_vec(); Ok(data) } pub enum Msg { Run(usize), UpdateStatus(String), SetDecoder(ModelData), WorkerIn(WorkerInput), WorkerOut(Result<WorkerOutput, String>), } pub struct CurrentDecode { start_time: Option<f64>, } pub struct App { status: String, loaded: bool, segments: Vec<Segment>, current_decode: Option<CurrentDecode>, worker: Box<dyn Bridge<Worker>>, } async fn model_data_load() -> Result<ModelData, JsValue> { let quantized = false; let is_multilingual = false; let (tokenizer, mel_filters, weights, config) = if quantized { console_log!("loading quantized weights"); let tokenizer = fetch_url("quantized/tokenizer-tiny-en.json").await?; let mel_filters = fetch_url("mel_filters.safetensors").await?; let weights = fetch_url("quantized/model-tiny-en-q80.gguf").await?; let config = fetch_url("quantized/config-tiny-en.json").await?; (tokenizer, mel_filters, weights, config) } else { console_log!("loading float weights"); if is_multilingual { let mel_filters = fetch_url("mel_filters.safetensors").await?; let tokenizer = fetch_url("whisper-tiny/tokenizer.json").await?; let weights = fetch_url("whisper-tiny/model.safetensors").await?; let config = fetch_url("whisper-tiny/config.json").await?; (tokenizer, mel_filters, weights, config) } else { let mel_filters = fetch_url("mel_filters.safetensors").await?; let tokenizer = fetch_url("whisper-tiny.en/tokenizer.json").await?; let weights = fetch_url("whisper-tiny.en/model.safetensors").await?; let config = fetch_url("whisper-tiny.en/config.json").await?; (tokenizer, mel_filters, weights, config) } }; let timestamps = true; let _task = Some("transcribe".to_string()); console_log!("{}", weights.len()); Ok(ModelData { tokenizer, mel_filters, weights, config, quantized, timestamps, task: None, is_multilingual, language: None, }) } fn performance_now() -> Option<f64> { let window = web_sys::window()?; let performance = window.performance()?; Some(performance.now() / 1000.) 
} impl Component for App { type Message = Msg; type Properties = (); fn create(ctx: &Context<Self>) -> Self { let status = "loading weights".to_string(); let cb = { let link = ctx.link().clone(); move |e| link.send_message(Self::Message::WorkerOut(e)) }; let worker = Worker::bridge(std::rc::Rc::new(cb)); Self { status, segments: vec![], current_decode: None, worker, loaded: false, } } fn rendered(&mut self, ctx: &Context<Self>, first_render: bool) { if first_render { ctx.link().send_future(async { match model_data_load().await { Err(err) => { let status = format!("{err:?}"); Msg::UpdateStatus(status) } Ok(model_data) => Msg::SetDecoder(model_data), } }); } } fn update(&mut self, ctx: &Context<Self>, msg: Self::Message) -> bool { match msg { Msg::SetDecoder(md) => { self.status = "weights loaded successfully!".to_string(); self.loaded = true; console_log!("loaded weights"); self.worker.send(WorkerInput::ModelData(md)); true } Msg::Run(sample_index) => { let sample = SAMPLE_NAMES[sample_index]; if self.current_decode.is_some() { self.status = "already decoding some sample at the moment".to_string() } else { let start_time = performance_now(); self.current_decode = Some(CurrentDecode { start_time }); self.status = format!("decoding {sample}"); self.segments.clear(); ctx.link().send_future(async move { match fetch_url(sample).await { Err(err) => { let output = Err(format!("decoding error: {err:?}")); // Mimic a worker output to so as to release current_decode Msg::WorkerOut(output) } Ok(wav_bytes) => Msg::WorkerIn(WorkerInput::DecodeTask { wav_bytes }), } }) } // true } Msg::WorkerOut(output) => { let dt = self.current_decode.as_ref().and_then(|current_decode| { current_decode.start_time.and_then(|start_time| { performance_now().map(|stop_time| stop_time - start_time) }) }); self.current_decode = None; match output { Ok(WorkerOutput::WeightsLoaded) => self.status = "weights loaded!".to_string(), Ok(WorkerOutput::Decoded(segments)) => { self.status = match dt { None => "decoding succeeded!".to_string(), Some(dt) => format!("decoding succeeded in {:.2}s", dt), }; self.segments = segments; } Err(err) => { self.status = format!("decoding error {err:?}"); } } true } Msg::WorkerIn(inp) => { self.worker.send(inp); true } Msg::UpdateStatus(status) => { self.status = status; true } } } fn view(&self, ctx: &Context<Self>) -> Html { html! { <div> <table> <thead> <tr> <th>{"Sample"}</th> <th></th> <th></th> </tr> </thead> <tbody> { SAMPLE_NAMES.iter().enumerate().map(|(i, name)| { html! { <tr> <th>{name}</th> <th><audio controls=true src={format!("./{name}")}></audio></th> { if self.loaded { html!(<th><button class="button" onclick={ctx.link().callback(move |_| Msg::Run(i))}> { "run" }</button></th>) }else{html!()} } </tr> } }).collect::<Html>() } </tbody> </table> <h2> {&self.status} </h2> { if !self.loaded{ html! { <progress id="progress-bar" aria-label="loading weights…"></progress> } } else if self.current_decode.is_some() { html! { <progress id="progress-bar" aria-label="decoding…"></progress> } } else { html!{ <blockquote> <p> { self.segments.iter().map(|segment| { html! { <> <i> { format!("{:.2}s-{:.2}s: (avg-logprob: {:.4}, no-speech-prob: {:.4})", segment.start, segment.start + segment.duration, segment.dr.avg_logprob, segment.dr.no_speech_prob, ) } </i> <br/ > {&segment.dr.text} <br/ > </> } }).collect::<Html>() } </p> </blockquote> } } } // Display the current date and time the page was rendered <p class="footer"> { "Rendered: " } { String::from(Date::new_0().to_string()) } </p> </div> } } }
candle/candle-wasm-examples/whisper/src/app.rs/0
{ "file_path": "candle/candle-wasm-examples/whisper/src/app.rs", "repo_id": "candle", "token_count": 5669 }
48
use candle_wasm_example_yolo::coco_classes; use candle_wasm_example_yolo::model::Bbox; use candle_wasm_example_yolo::worker::Model as M; use candle_wasm_example_yolo::worker::ModelPose as P; use wasm_bindgen::prelude::*; #[wasm_bindgen] pub struct Model { inner: M, } #[wasm_bindgen] impl Model { #[wasm_bindgen(constructor)] pub fn new(data: Vec<u8>, model_size: &str) -> Result<Model, JsError> { let inner = M::load_(data, model_size)?; Ok(Self { inner }) } #[wasm_bindgen] pub fn run( &self, image: Vec<u8>, conf_threshold: f32, iou_threshold: f32, ) -> Result<String, JsError> { let bboxes = self.inner.run(image, conf_threshold, iou_threshold)?; let mut detections: Vec<(String, Bbox)> = vec![]; for (class_index, bboxes_for_class) in bboxes.into_iter().enumerate() { for b in bboxes_for_class.into_iter() { detections.push((coco_classes::NAMES[class_index].to_string(), b)); } } let json = serde_json::to_string(&detections)?; Ok(json) } } #[wasm_bindgen] pub struct ModelPose { inner: P, } #[wasm_bindgen] impl ModelPose { #[wasm_bindgen(constructor)] pub fn new(data: Vec<u8>, model_size: &str) -> Result<ModelPose, JsError> { let inner = P::load_(data, model_size)?; Ok(Self { inner }) } #[wasm_bindgen] pub fn run( &self, image: Vec<u8>, conf_threshold: f32, iou_threshold: f32, ) -> Result<String, JsError> { let bboxes = self.inner.run(image, conf_threshold, iou_threshold)?; let json = serde_json::to_string(&bboxes)?; Ok(json) } } fn main() {}
candle/candle-wasm-examples/yolo/src/bin/m.rs/0
{ "file_path": "candle/candle-wasm-examples/yolo/src/bin/m.rs", "repo_id": "candle", "token_count": 840 }
49
# Use .env.local to change these variables # DO NOT EDIT THIS FILE WITH SENSITIVE DATA MONGODB_URL=#your mongodb URL here MONGODB_DB_NAME=chat-ui MONGODB_DIRECT_CONNECTION=false COOKIE_NAME=hf-chat TRUSTED_EMAIL_HEADER= # only set this if you understand the implications HF_TOKEN=#hf_<token> from https://huggingface.co/settings/token HF_API_ROOT=https://api-inference.huggingface.co/models OPENAI_API_KEY=#your openai api key here ANTHROPIC_API_KEY=#your anthropic api key here CLOUDFLARE_ACCOUNT_ID=#your cloudflare account id here CLOUDFLARE_API_TOKEN=#your cloudflare api token here COHERE_API_TOKEN=#your cohere api token here HF_ACCESS_TOKEN=#LEGACY! Use HF_TOKEN instead # used to activate search with web functionality. disabled if none are defined. choose one of the following: YDC_API_KEY=#your docs.you.com api key here SERPER_API_KEY=#your serper.dev api key here SERPAPI_KEY=#your serpapi key here SERPSTACK_API_KEY=#your serpstack api key here SEARCHAPI_KEY=#your searchapi api key here USE_LOCAL_WEBSEARCH=#set to true to parse google results yourself, overrides other API keys SEARXNG_QUERY_URL=# where '<query>' will be replaced with query keywords see https://docs.searxng.org/dev/search_api.html eg https://searxng.yourdomain.com/search?q=<query>&engines=duckduckgo,google&format=json BING_SUBSCRIPTION_KEY=#your key PLAYWRIGHT_ADBLOCKER=true WEBSEARCH_ALLOWLIST=`[]` # if it's defined, allow websites from only this list. WEBSEARCH_BLOCKLIST=`[]` # if it's defined, block websites from this list. WEBSEARCH_JAVASCRIPT=true # CPU usage reduces by 60% on average by disabling javascript. Enable to improve website compatibility WEBSEARCH_TIMEOUT = 3500 # in milliseconds, determines how long to wait to load a page before timing out # Parameters to enable open id login OPENID_CONFIG=`{ "PROVIDER_URL": "", "CLIENT_ID": "", "CLIENT_SECRET": "", "SCOPES": "", "NAME_CLAIM": "" }` # /!\ legacy openid settings, prefer the config above OPENID_CLIENT_ID= OPENID_CLIENT_SECRET= OPENID_SCOPES="openid profile" # Add "email" for some providers like Google that do not provide preferred_username OPENID_NAME_CLAIM="name" # Change to "username" for some providers that do not provide name OPENID_PROVIDER_URL=https://huggingface.co # for Google, use https://accounts.google.com OPENID_TOLERANCE= OPENID_RESOURCE= # Parameters to enable a global mTLS context for client fetch requests USE_CLIENT_CERTIFICATE=false CERT_PATH=# KEY_PATH=# CA_PATH=# CLIENT_KEY_PASSWORD=# REJECT_UNAUTHORIZED=true TEXT_EMBEDDING_MODELS = `[ { "name": "Xenova/gte-small", "displayName": "Xenova/gte-small", "description": "Local embedding model running on the server.", "chunkCharLength": 512, "endpoints": [ { "type": "transformersjs" } ] } ]` # 'name', 'userMessageToken', 'assistantMessageToken' are required MODELS=`[ { "name": "mistralai/Mistral-7B-Instruct-v0.1", "displayName": "mistralai/Mistral-7B-Instruct-v0.1", "description": "Mistral 7B is a new Apache 2.0 model, released by Mistral AI that outperforms Llama2 13B in benchmarks.", "websiteUrl": "https://mistral.ai/news/announcing-mistral-7b/", "preprompt": "", "chatPromptTemplate" : "<s>{{#each messages}}{{#ifUser}}[INST] {{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\n{{/if}}{{/if}}{{content}} [/INST]{{/ifUser}}{{#ifAssistant}}{{content}}</s>{{/ifAssistant}}{{/each}}", "parameters": { "temperature": 0.1, "top_p": 0.95, "repetition_penalty": 1.2, "top_k": 50, "truncate": 3072, "max_new_tokens": 1024, "stop": ["</s>"] }, "promptExamples": [ { "title": "Write an email from bullet 
list", "prompt": "As a restaurant owner, write a professional email to the supplier to get these products every week: \n\n- Wine (x10)\n- Eggs (x24)\n- Bread (x12)" }, { "title": "Code a snake game", "prompt": "Code a basic snake game in python, give explanations for each step." }, { "title": "Assist in a task", "prompt": "How do I make a delicious lemon cheesecake?" } ] } ]` OLD_MODELS=`[]`# any removed models, `{ name: string, displayName?: string, id?: string }` TASK_MODEL= # name of the model used for tasks such as summarizing title, creating query, etc. PUBLIC_ORIGIN=#https://huggingface.co PUBLIC_SHARE_PREFIX=#https://hf.co/chat PUBLIC_GOOGLE_ANALYTICS_ID=#G-XXXXXXXX / Leave empty to disable PUBLIC_PLAUSIBLE_SCRIPT_URL=#/js/script.js / Leave empty to disable PUBLIC_ANNOUNCEMENT_BANNERS=`[ { "title": "Code Llama 70B is available! 🦙", "linkTitle": "try it", "linkHref": "https://huggingface.co/chat?model=codellama/CodeLlama-70b-Instruct-hf" } ]` PUBLIC_APPLE_APP_ID=#1234567890 / Leave empty to disable PARQUET_EXPORT_DATASET= PARQUET_EXPORT_HF_TOKEN= ADMIN_API_SECRET=# secret to admin API calls, like computing usage stats or exporting parquet data PARQUET_EXPORT_SECRET=#DEPRECATED, use ADMIN_API_SECRET instead RATE_LIMIT= # /!\ Legacy definition of messages per minute. Use USAGE_LIMITS.messagesPerMinute instead MESSAGES_BEFORE_LOGIN=# how many messages a user can send in a conversation before having to login. set to 0 to force login right away APP_BASE="" # base path of the app, e.g. /chat, left blank as default PUBLIC_APP_NAME=ChatUI # name used as title throughout the app PUBLIC_APP_ASSETS=chatui # used to find logos & favicons in static/$PUBLIC_APP_ASSETS PUBLIC_APP_COLOR=blue # can be any of tailwind colors: https://tailwindcss.com/docs/customizing-colors#default-color-palette PUBLIC_APP_DESCRIPTION=# description used throughout the app (if not set, a default one will be used) PUBLIC_APP_DATA_SHARING=#set to 1 to enable options & text regarding data sharing PUBLIC_APP_DISCLAIMER=#set to 1 to show a disclaimer on login page PUBLIC_APP_DISCLAIMER_MESSAGE="Disclaimer: AI is an area of active research with known problems such as biased generation and misinformation. Do not use this application for high-stakes decisions or advice. Do not insert your personal data, especially sensitive, like health data." LLM_SUMMARIZATION=true EXPOSE_API=true USE_HF_TOKEN_IN_API=false # PUBLIC_APP_NAME=HuggingChat # PUBLIC_APP_ASSETS=huggingchat # PUBLIC_APP_COLOR=yellow # PUBLIC_APP_DESCRIPTION="Making the community's best AI chat models available to everyone." # PUBLIC_APP_DATA_SHARING=1 # PUBLIC_APP_DISCLAIMER=1 ENABLE_ASSISTANTS=false #set to true to enable assistants feature ENABLE_ASSISTANTS_RAG=false # /!\ This will let users specify arbitrary URLs that the server will then request. Make sure you have the proper firewall rules in place. REQUIRE_FEATURED_ASSISTANTS=false ENABLE_LOCAL_FETCH=false #set to true to disable the blocklist for local fetches. Only enable this if you have the proper firewall rules to prevent SSRF attacks and understand the implications. 
ALTERNATIVE_REDIRECT_URLS=`[]` # valid alternative redirect URLs for OAuth
WEBHOOK_URL_REPORT_ASSISTANT=#provide webhook url to get notified when an assistant gets reported
ALLOWED_USER_EMAILS=`[]` # if it's defined, only these emails will be allowed to use the app
USAGE_LIMITS=`{}`
ALLOW_INSECURE_COOKIES=false # recommended to keep this to false but set to true if you need to run over http without tls
METRICS_ENABLED=false
METRICS_PORT=5565
LOG_LEVEL=info
TOOLS=`[]`
BODY_SIZE_LIMIT=15728640
HF_ORG_ADMIN=
HF_ORG_EARLY_ACCESS=
PUBLIC_SMOOTH_UPDATES=false
chat-ui/.env/0
{ "file_path": "chat-ui/.env", "repo_id": "chat-ui", "token_count": 2715 }
50
{ "editor.formatOnSave": true, "editor.defaultFormatter": "esbenp.prettier-vscode", "editor.codeActionsOnSave": { "source.fixAll": "explicit" }, "eslint.validate": ["javascript", "svelte"] }
chat-ui/.vscode/settings.json/0
{ "file_path": "chat-ui/.vscode/settings.json", "repo_id": "chat-ui", "token_count": 83 }
51
apiVersion: v1 kind: Service metadata: name: "{{ include "name" . }}" annotations: {{ toYaml .Values.service.annotations | nindent 4 }} namespace: {{ .Release.Namespace }} labels: {{ include "labels.standard" . | nindent 4 }} spec: ports: - name: http port: 80 protocol: TCP targetPort: http {{- if $.Values.monitoring.enabled }} - name: metrics port: 5565 protocol: TCP targetPort: metrics {{- end }} selector: {{ include "labels.standard" . | nindent 4 }} type: {{.Values.service.type}}
chat-ui/chart/templates/service.yaml/0
{ "file_path": "chat-ui/chart/templates/service.yaml", "repo_id": "chat-ui", "token_count": 192 }
52
# OpenAI

| Feature                     | Available |
| --------------------------- | --------- |
| [Tools](../tools)           | No        |
| [Multimodal](../multimodal) | No        |

Chat UI can be used with any API server that supports OpenAI API compatibility, for example [text-generation-webui](https://github.com/oobabooga/text-generation-webui/tree/main/extensions/openai), [LocalAI](https://github.com/go-skynet/LocalAI), [FastChat](https://github.com/lm-sys/FastChat/blob/main/docs/openai_api.md), [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), [ialacol](https://github.com/chenhunghan/ialacol), and [vllm](https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html).

The following example config makes Chat UI work with [text-generation-webui](https://github.com/oobabooga/text-generation-webui/tree/main/extensions/openai): `endpoint.baseUrl` is the URL of the OpenAI-API-compatible server and overrides the base URL used by the OpenAI client. `endpoint.completion` determines which endpoint is used; the default is `chat_completions`, which calls `/chat/completions`, while setting `endpoint.completion` to `completions` uses the `/completions` endpoint instead.

```ini
MODELS=`[
  {
    "name": "text-generation-webui",
    "id": "text-generation-webui",
    "parameters": {
      "temperature": 0.9,
      "top_p": 0.95,
      "repetition_penalty": 1.2,
      "top_k": 50,
      "truncate": 1000,
      "max_new_tokens": 1024,
      "stop": []
    },
    "endpoints": [{
      "type" : "openai",
      "baseURL": "http://localhost:8000/v1"
    }]
  }
]`
```

The `openai` type includes official OpenAI models. You can add, for example, GPT4/GPT3.5 as an "openai" model:

```ini
OPENAI_API_KEY=#your openai api key here
MODELS=`[{
  "name": "gpt-4",
  "displayName": "GPT 4",
  "endpoints" : [{
    "type": "openai",
    "apiKey": "or your openai api key here"
  }]
},{
  "name": "gpt-3.5-turbo",
  "displayName": "GPT 3.5 Turbo",
  "endpoints" : [{
    "type": "openai",
    "apiKey": "or your openai api key here"
  }]
}]`
```

You may also consume any model provider that exposes an OpenAI-compatible API endpoint. For example, you may self-host the [Portkey](https://github.com/Portkey-AI/gateway) gateway and experiment with Claude or GPTs offered by Azure OpenAI. Example for Claude from Anthropic:

```ini
MODELS=`[{
  "name": "claude-2.1",
  "displayName": "Claude 2.1",
  "description": "Anthropic has been founded by former OpenAI researchers...",
  "parameters": {
    "temperature": 0.5,
    "max_new_tokens": 4096,
  },
  "endpoints": [
    {
      "type": "openai",
      "baseURL": "https://gateway.example.com/v1",
      "defaultHeaders": {
        "x-portkey-config": '{"provider":"anthropic","api_key":"sk-ant-abc...xyz"}'
      }
    }
  ]
}]`
```

Example for GPT 4 deployed on Azure OpenAI:

```ini
MODELS=`[{
  "id": "gpt-4-1106-preview",
  "name": "gpt-4-1106-preview",
  "displayName": "gpt-4-1106-preview",
  "parameters": {
    "temperature": 0.5,
    "max_new_tokens": 4096,
  },
  "endpoints": [
    {
      "type": "openai",
      "baseURL": "https://{resource-name}.openai.azure.com/openai/deployments/{deployment-id}",
      "defaultHeaders": {
        "api-key": "{api-key}"
      },
      "defaultQuery": {
        "api-version": "2023-05-15"
      }
    }
  ]
}]`
```

## DeepInfra

Or try Mistral from [Deepinfra](https://deepinfra.com/mistralai/Mistral-7B-Instruct-v0.1/api?example=openai-http):

> Note: `apiKey` can either be set per endpoint, or globally using the `OPENAI_API_KEY` variable.

```ini
MODELS=`[{
  "name": "mistral-7b",
  "displayName": "Mistral 7B",
  "description": "A 7B dense Transformer, fast-deployed and easily customisable. Small, yet powerful for a variety of use cases. Supports English and code, and an 8k context window.",
  "parameters": {
    "temperature": 0.5,
    "max_new_tokens": 4096,
  },
  "endpoints": [
    {
      "type": "openai",
      "baseURL": "https://api.deepinfra.com/v1/openai",
      "apiKey": "abc...xyz"
    }
  ]
}]`
```

## Other

Some other providers and their `baseURL` for reference.

[Groq](https://groq.com/): https://api.groq.com/openai/v1

[Fireworks](https://fireworks.ai/): https://api.fireworks.ai/inference/v1
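For instance, Groq can be wired up the same way as the providers above. The snippet below is a minimal sketch rather than an official config: the model id `llama3-8b-8192` is only an illustrative placeholder (check the model list your provider actually serves), and the `apiKey` can also be supplied globally through `OPENAI_API_KEY` instead of per endpoint.

```ini
# the model name below is an illustrative placeholder, replace it with a model id your provider serves
MODELS=`[{
  "name": "llama3-8b-8192",
  "displayName": "Llama 3 8B (Groq)",
  "endpoints": [
    {
      "type": "openai",
      "baseURL": "https://api.groq.com/openai/v1",
      "apiKey": "abc...xyz"
    }
  ]
}]`
```

The same pattern should work for Fireworks by swapping in its `baseURL` from the list above.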
chat-ui/docs/source/configuration/models/providers/openai.md/0
{ "file_path": "chat-ui/docs/source/configuration/models/providers/openai.md", "repo_id": "chat-ui", "token_count": 1747 }
53
{ "name": "chat-ui", "version": "0.9.2", "private": true, "packageManager": "npm@9.5.0", "scripts": { "dev": "vite dev", "build": "vite build", "preview": "vite preview", "check": "svelte-kit sync && svelte-check --tsconfig ./tsconfig.json", "check:watch": "svelte-kit sync && svelte-check --tsconfig ./tsconfig.json --watch", "lint": "prettier --plugin-search-dir . --check . && eslint .", "format": "prettier --plugin-search-dir . --write .", "test": "vitest", "updateLocalEnv": "node --loader ts-node/esm scripts/updateLocalEnv.ts", "populate": "vite-node --options.transformMode.ssr='/.*/' scripts/populate.ts", "prepare": "husky" }, "devDependencies": { "@faker-js/faker": "^8.4.1", "@iconify-json/carbon": "^1.1.16", "@iconify-json/eos-icons": "^1.1.6", "@sveltejs/adapter-node": "^5.2.0", "@sveltejs/kit": "^2.5.20", "@tailwindcss/typography": "^0.5.9", "@types/dompurify": "^3.0.5", "@types/express": "^4.17.21", "@types/js-yaml": "^4.0.9", "@types/jsdom": "^21.1.1", "@types/jsonpath": "^0.2.4", "@types/minimist": "^1.2.5", "@types/node": "^22.1.0", "@types/parquetjs": "^0.10.3", "@types/sbd": "^1.0.5", "@types/uuid": "^9.0.8", "@typescript-eslint/eslint-plugin": "^6.x", "@typescript-eslint/parser": "^6.x", "dompurify": "^3.1.6", "eslint": "^8.28.0", "eslint-config-prettier": "^8.5.0", "eslint-plugin-svelte": "^2.30.0", "isomorphic-dompurify": "^2.13.0", "js-yaml": "^4.1.0", "minimist": "^1.2.8", "prettier": "^2.8.0", "prettier-plugin-svelte": "^2.10.1", "prettier-plugin-tailwindcss": "^0.2.7", "prom-client": "^15.1.2", "svelte": "^4.2.18", "svelte-check": "^3.8.5", "ts-node": "^10.9.1", "tslib": "^2.4.1", "typescript": "^5.0.0", "unplugin-icons": "^0.16.1", "vite": "^5.3.5", "vite-node": "^1.3.1", "vitest": "^0.31.0" }, "type": "module", "dependencies": { "@aws-sdk/credential-providers": "^3.592.0", "@cliqz/adblocker-playwright": "^1.27.2", "@gradio/client": "^1.1.1", "@huggingface/hub": "^0.5.1", "@huggingface/inference": "^2.7.0", "@huggingface/transformers": "^3.0.0-alpha.6", "@iconify-json/bi": "^1.1.21", "@playwright/browser-chromium": "^1.43.1", "@resvg/resvg-js": "^2.6.2", "autoprefixer": "^10.4.14", "aws-sigv4-fetch": "^4.0.1", "aws4": "^1.13.0", "browser-image-resizer": "^2.4.1", "date-fns": "^2.29.3", "dotenv": "^16.0.3", "express": "^4.19.2", "file-type": "^19.4.1", "google-auth-library": "^9.13.0", "handlebars": "^4.7.8", "highlight.js": "^11.7.0", "husky": "^9.0.11", "image-size": "^1.0.2", "ip-address": "^9.0.5", "jose": "^5.3.0", "jsdom": "^22.0.0", "json5": "^2.2.3", "jsonpath": "^1.1.1", "lint-staged": "^15.2.7", "marked": "^12.0.1", "marked-katex-extension": "^5.0.1", "mongodb": "^5.8.0", "nanoid": "^4.0.2", "openid-client": "^5.4.2", "parquetjs": "^0.11.2", "pino": "^9.0.0", "pino-pretty": "^11.0.0", "playwright": "^1.44.1", "postcss": "^8.4.31", "saslprep": "^1.0.3", "satori": "^0.10.11", "satori-html": "^0.3.2", "sbd": "^1.0.19", "serpapi": "^1.1.1", "sharp": "^0.33.4", "tailwind-scrollbar": "^3.0.0", "tailwindcss": "^3.4.0", "uuid": "^10.0.0", "zod": "^3.22.3" }, "optionalDependencies": { "@aws-sdk/client-bedrock-runtime": "^3.631.0", "@anthropic-ai/sdk": "^0.25.0", "@anthropic-ai/vertex-sdk": "^0.4.1", "@google-cloud/vertexai": "^1.1.0", "@google/generative-ai": "^0.14.1", "aws4fetch": "^1.0.17", "cohere-ai": "^7.9.0", "openai": "^4.44.0" } }
chat-ui/package.json/0
{ "file_path": "chat-ui/package.json", "repo_id": "chat-ui", "token_count": 2019 }
54
<script lang="ts"> import CarbonContinue from "~icons/carbon/continue"; export let classNames = ""; </script> <button type="button" on:click class="btn flex h-8 rounded-lg border bg-white px-3 py-1 text-gray-500 shadow-sm transition-all hover:bg-gray-100 dark:border-gray-600 dark:bg-gray-700 dark:text-gray-300 dark:hover:bg-gray-600 {classNames}" > <CarbonContinue class="mr-2 text-xs " /> Continue </button>
chat-ui/src/lib/components/ContinueBtn.svelte/0
{ "file_path": "chat-ui/src/lib/components/ContinueBtn.svelte", "repo_id": "chat-ui", "token_count": 149 }
55
<script lang="ts"> import { fade } from "svelte/transition"; import { onDestroy } from "svelte"; import IconChevron from "./icons/IconChevron.svelte"; export let scrollNode: HTMLElement; export { className as class }; let visible = false; let className = ""; let observer: ResizeObserver | null = null; $: if (scrollNode) { destroy(); if (window.ResizeObserver) { observer = new ResizeObserver(() => { updateVisibility(); }); observer.observe(scrollNode); } scrollNode.addEventListener("scroll", updateVisibility); } function updateVisibility() { if (!scrollNode) return; visible = Math.ceil(scrollNode.scrollTop) + 200 < scrollNode.scrollHeight - scrollNode.clientHeight; } function destroy() { observer?.disconnect(); scrollNode?.removeEventListener("scroll", updateVisibility); } onDestroy(destroy); </script> {#if visible} <button transition:fade={{ duration: 150 }} on:click={() => scrollNode.scrollTo({ top: scrollNode.scrollHeight, behavior: "smooth" })} class="btn absolute flex h-[41px] w-[41px] rounded-full border bg-white shadow-md transition-all hover:bg-gray-100 dark:border-gray-600 dark:bg-gray-700 dark:shadow-gray-950 dark:hover:bg-gray-600 {className}" ><IconChevron classNames="mt-[2px]" /></button > {/if}
chat-ui/src/lib/components/ScrollToBottomBtn.svelte/0
{ "file_path": "chat-ui/src/lib/components/ScrollToBottomBtn.svelte", "repo_id": "chat-ui", "token_count": 460 }
56
<script lang="ts"> import type { Message, MessageFile } from "$lib/types/Message"; import { createEventDispatcher, onDestroy, tick } from "svelte"; import CarbonSendAltFilled from "~icons/carbon/send-alt-filled"; import CarbonExport from "~icons/carbon/export"; import CarbonStopFilledAlt from "~icons/carbon/stop-filled-alt"; import CarbonCheckmark from "~icons/carbon/checkmark"; import CarbonCaretDown from "~icons/carbon/caret-down"; import EosIconsLoading from "~icons/eos-icons/loading"; import ChatInput from "./ChatInput.svelte"; import StopGeneratingBtn from "../StopGeneratingBtn.svelte"; import type { Model } from "$lib/types/Model"; import WebSearchToggle from "../WebSearchToggle.svelte"; import ToolsMenu from "../ToolsMenu.svelte"; import LoginModal from "../LoginModal.svelte"; import { page } from "$app/stores"; import FileDropzone from "./FileDropzone.svelte"; import RetryBtn from "../RetryBtn.svelte"; import UploadBtn from "../UploadBtn.svelte"; import file2base64 from "$lib/utils/file2base64"; import type { Assistant } from "$lib/types/Assistant"; import { base } from "$app/paths"; import ContinueBtn from "../ContinueBtn.svelte"; import AssistantIntroduction from "./AssistantIntroduction.svelte"; import ChatMessage from "./ChatMessage.svelte"; import ScrollToBottomBtn from "../ScrollToBottomBtn.svelte"; import { browser } from "$app/environment"; import { snapScrollToBottom } from "$lib/actions/snapScrollToBottom"; import SystemPromptModal from "../SystemPromptModal.svelte"; import ChatIntroduction from "./ChatIntroduction.svelte"; import { useConvTreeStore } from "$lib/stores/convTree"; import UploadedFile from "./UploadedFile.svelte"; import { useSettingsStore } from "$lib/stores/settings"; import type { ToolFront } from "$lib/types/Tool"; export let messages: Message[] = []; export let loading = false; export let pending = false; export let shared = false; export let currentModel: Model; export let models: Model[]; export let assistant: Assistant | undefined = undefined; export let preprompt: string | undefined = undefined; export let files: File[] = []; $: isReadOnly = !models.some((model) => model.id === currentModel.id); let loginModalOpen = false; let message: string; let timeout: ReturnType<typeof setTimeout>; let isSharedRecently = false; $: $page.params.id && (isSharedRecently = false); const dispatch = createEventDispatcher<{ message: string; share: void; stop: void; retry: { id: Message["id"]; content?: string }; continue: { id: Message["id"] }; }>(); const handleSubmit = () => { if (loading) return; dispatch("message", message); message = ""; }; let lastTarget: EventTarget | null = null; let onDrag = false; const onDragEnter = (e: DragEvent) => { lastTarget = e.target; onDrag = true; }; const onDragLeave = (e: DragEvent) => { if (e.target === lastTarget) { onDrag = false; } }; const onPaste = (e: ClipboardEvent) => { if (!e.clipboardData) { return; } // paste of files const pastedFiles = Array.from(e.clipboardData.files); if (pastedFiles.length !== 0) { e.preventDefault(); // filter based on activeMimeTypes, including wildcards const filteredFiles = pastedFiles.filter((file) => { return activeMimeTypes.some((mimeType: string) => { const [type, subtype] = mimeType.split("/"); const [fileType, fileSubtype] = file.type.split("/"); return type === fileType && (subtype === "*" || fileSubtype === subtype); }); }); files = [...files, ...filteredFiles]; } }; const convTreeStore = useConvTreeStore(); const updateCurrentIndex = () => { const url = new URL($page.url); let 
leafId = url.searchParams.get("leafId"); // Ensure the function is only run in the browser. if (!browser) return; if (leafId) { // Remove the 'leafId' from the URL to clean up after retrieving it. url.searchParams.delete("leafId"); history.replaceState(null, "", url.toString()); } else { // Retrieve the 'leafId' from localStorage if it's not in the URL. leafId = localStorage.getItem("leafId"); } // If a 'leafId' exists, find the corresponding message and update indices. if (leafId) { let leafMessage = messages.find((m) => m.id == leafId); if (!leafMessage?.ancestors) return; // Exit if the message has no ancestors. let ancestors = leafMessage.ancestors; // Loop through all ancestors to update the current child index. for (let i = 0; i < ancestors.length; i++) { let curMessage = messages.find((m) => m.id == ancestors[i]); if (curMessage?.children) { for (let j = 0; j < curMessage.children.length; j++) { // Check if the current message's child matches the next ancestor // or the leaf itself, and update the currentChildIndex accordingly. if (i + 1 < ancestors.length) { if (curMessage.children[j] == ancestors[i + 1]) { curMessage.currentChildIndex = j; break; } } else { if (curMessage.children[j] == leafId) { curMessage.currentChildIndex = j; break; } } } } } } }; updateCurrentIndex(); $: lastMessage = browser && (messages.find((m) => m.id == $convTreeStore.leaf) as Message); $: lastIsError = lastMessage && !loading && (lastMessage.from === "user" || lastMessage.updates?.findIndex((u) => u.type === "status" && u.status === "error") !== -1); $: sources = files?.map<Promise<MessageFile>>((file) => file2base64(file).then((value) => ({ type: "base64", value, mime: file.type, name: file.name })) ); function onShare() { if (!confirm("Are you sure you want to share this conversation? This cannot be undone.")) { return; } dispatch("share"); isSharedRecently = true; if (timeout) { clearTimeout(timeout); } timeout = setTimeout(() => { isSharedRecently = false; }, 2000); } onDestroy(() => { if (timeout) { clearTimeout(timeout); } }); let chatContainer: HTMLElement; async function scrollToBottom() { await tick(); chatContainer.scrollTop = chatContainer.scrollHeight; } // If last message is from user, scroll to bottom $: if (lastMessage && lastMessage.from === "user") { scrollToBottom(); } const settings = useSettingsStore(); // active tools are all the checked tools, either from settings or on by default $: activeTools = $page.data.tools.filter((tool: ToolFront) => $settings?.tools?.includes(tool._id) ); $: activeMimeTypes = [ ...(!$page.data?.assistant && currentModel.tools ? activeTools.flatMap((tool: ToolFront) => tool.mimeTypes ?? []) : []), ...(currentModel.multimodal ? ["image/*"] : []), ]; $: isFileUploadEnabled = activeMimeTypes.length > 0; </script> <svelte:window on:dragenter={onDragEnter} on:dragleave={onDragLeave} on:dragover|preventDefault on:drop|preventDefault={() => (onDrag = false)} /> <div class="relative min-h-0 min-w-0"> {#if loginModalOpen} <LoginModal on:close={() => { loginModalOpen = false; }} /> {/if} <div class="scrollbar-custom mr-1 h-full overflow-y-auto" use:snapScrollToBottom={messages.length ? 
[...messages] : false} bind:this={chatContainer} > <div class="mx-auto flex h-full max-w-3xl flex-col gap-6 px-5 pt-6 sm:gap-8 xl:max-w-4xl xl:pt-10" > {#if $page.data?.assistant && !!messages.length} <a class="mx-auto flex items-center gap-1.5 rounded-full border border-gray-100 bg-gray-50 py-1 pl-1 pr-3 text-sm text-gray-800 hover:bg-gray-100 dark:border-gray-800 dark:bg-gray-800 dark:text-gray-200 dark:hover:bg-gray-700" href="{base}/settings/assistants/{$page.data.assistant._id}" > {#if $page.data?.assistant.avatar} <img src="{base}/settings/assistants/{$page.data?.assistant._id.toString()}/avatar.jpg?hash=${$page .data.assistant.avatar}" alt="Avatar" class="size-5 rounded-full object-cover" /> {:else} <div class="flex size-6 items-center justify-center rounded-full bg-gray-300 font-bold uppercase text-gray-500" > {$page.data?.assistant.name[0]} </div> {/if} {$page.data.assistant.name} </a> {:else if preprompt && preprompt != currentModel.preprompt} <SystemPromptModal preprompt={preprompt ?? ""} /> {/if} {#if messages.length > 0} <div class="flex h-max flex-col gap-8 pb-52"> <ChatMessage {loading} {messages} id={messages[0].id} isAuthor={!shared} readOnly={isReadOnly} model={currentModel} on:retry on:vote on:continue /> </div> {:else if pending} <ChatMessage loading={true} messages={[ { id: "0-0-0-0-0", content: "", from: "assistant", children: [], }, ]} id={"0-0-0-0-0"} isAuthor={!shared} readOnly={isReadOnly} model={currentModel} /> {:else if !assistant} <ChatIntroduction {models} {currentModel} on:message={(ev) => { if ($page.data.loginRequired) { ev.preventDefault(); loginModalOpen = true; } else { dispatch("message", ev.detail); } }} /> {:else} <AssistantIntroduction {models} {assistant} on:message={(ev) => { if ($page.data.loginRequired) { ev.preventDefault(); loginModalOpen = true; } else { dispatch("message", ev.detail); } }} /> {/if} </div> <ScrollToBottomBtn class="bottom-36 right-4 max-md:hidden lg:right-10" scrollNode={chatContainer} /> </div> <div class="dark:via-gray-80 pointer-events-none absolute inset-x-0 bottom-0 z-0 mx-auto flex w-full max-w-3xl flex-col items-center justify-center bg-gradient-to-t from-white via-white/80 to-white/0 px-3.5 py-4 dark:border-gray-800 dark:from-gray-900 dark:to-gray-900/0 max-md:border-t max-md:bg-white max-md:dark:bg-gray-900 sm:px-5 md:py-8 xl:max-w-4xl [&>*]:pointer-events-auto" > {#if sources?.length} <div class="flex flex-row flex-wrap justify-center gap-2.5 max-md:pb-3"> {#each sources as source, index} {#await source then src} <UploadedFile file={src} on:close={() => { files = files.filter((_, i) => i !== index); }} /> {/await} {/each} </div> {/if} <div class="w-full"> <div class="flex w-full pb-3"> {#if !assistant} {#if currentModel.tools} <ToolsMenu {loading} /> {:else if $page.data.settings?.searchEnabled} <WebSearchToggle /> {/if} {/if} {#if loading} <StopGeneratingBtn classNames="ml-auto" on:click={() => dispatch("stop")} /> {:else if lastIsError} <RetryBtn classNames="ml-auto" on:click={() => { if (lastMessage && lastMessage.ancestors) { dispatch("retry", { id: lastMessage.id, }); } }} /> {:else} <div class="ml-auto gap-2"> {#if isFileUploadEnabled} <UploadBtn bind:files mimeTypes={activeMimeTypes} classNames="ml-auto" /> {/if} {#if messages && lastMessage && lastMessage.interrupted && !isReadOnly} <ContinueBtn on:click={() => { if (lastMessage && lastMessage.ancestors) { dispatch("continue", { id: lastMessage?.id, }); } }} /> {/if} </div> {/if} </div> <form tabindex="-1" aria-label={isFileUploadEnabled ? 
"file dropzone" : undefined} on:submit|preventDefault={handleSubmit} class="relative flex w-full max-w-4xl flex-1 items-center rounded-xl border bg-gray-100 focus-within:border-gray-300 dark:border-gray-600 dark:bg-gray-700 dark:focus-within:border-gray-500 {isReadOnly ? 'opacity-30' : ''}" > {#if onDrag && isFileUploadEnabled} <FileDropzone bind:files bind:onDrag mimeTypes={activeMimeTypes} /> {:else} <div class="flex w-full flex-1 border-none bg-transparent"> {#if lastIsError} <ChatInput value="Sorry, something went wrong. Please try again." disabled={true} /> {:else} <ChatInput placeholder={isReadOnly ? "This conversation is read-only. Start a new one to continue!" : "Ask anything"} bind:value={message} on:submit={handleSubmit} on:beforeinput={(ev) => { if ($page.data.loginRequired) { ev.preventDefault(); loginModalOpen = true; } }} on:paste={onPaste} maxRows={6} disabled={isReadOnly || lastIsError} /> {/if} {#if loading} <button class="btn mx-1 my-1 inline-block h-[2.4rem] self-end rounded-lg bg-transparent p-1 px-[0.7rem] text-gray-400 enabled:hover:text-gray-700 disabled:opacity-60 enabled:dark:hover:text-gray-100 dark:disabled:opacity-40 md:hidden" on:click={() => dispatch("stop")} > <CarbonStopFilledAlt /> </button> <div class="mx-1 my-1 hidden h-[2.4rem] items-center p-1 px-[0.7rem] text-gray-400 enabled:hover:text-gray-700 disabled:opacity-60 enabled:dark:hover:text-gray-100 dark:disabled:opacity-40 md:flex" > <EosIconsLoading /> </div> {:else} <button class="btn mx-1 my-1 h-[2.4rem] self-end rounded-lg bg-transparent p-1 px-[0.7rem] text-gray-400 enabled:hover:text-gray-700 disabled:opacity-60 enabled:dark:hover:text-gray-100 dark:disabled:opacity-40" disabled={!message || isReadOnly} type="submit" > <CarbonSendAltFilled /> </button> {/if} </div> {/if} </form> <div class="mt-2 flex justify-between self-stretch px-1 text-xs text-gray-400/90 max-md:mb-2 max-sm:gap-2" > <p> Model: {#if !assistant} {#if models.find((m) => m.id === currentModel.id)} <a href="{base}/settings/{currentModel.id}" class="inline-flex items-center hover:underline" >{currentModel.displayName}<CarbonCaretDown class="text-xxs" /></a > {:else} <span class="inline-flex items-center line-through dark:border-gray-700"> {currentModel.id} </span> {/if} {:else} {@const model = models.find((m) => m.id === assistant?.modelId)} {#if model} <a href="{base}/settings/assistants/{assistant._id}" class="inline-flex items-center border-b hover:text-gray-600 dark:border-gray-700 dark:hover:text-gray-300" >{model?.displayName}<CarbonCaretDown class="text-xxs" /></a > {:else} <span class="inline-flex items-center line-through dark:border-gray-700"> {currentModel.id} </span> {/if} {/if} <span class="max-sm:hidden">·</span><br class="sm:hidden" /> Generated content may be inaccurate or false. </p> {#if messages.length} <button class="flex flex-none items-center hover:text-gray-400 max-sm:rounded-lg max-sm:bg-gray-50 max-sm:px-2.5 dark:max-sm:bg-gray-800" type="button" class:hover:underline={!isSharedRecently} on:click={onShare} disabled={isSharedRecently} > {#if isSharedRecently} <CarbonCheckmark class="text-[.6rem] sm:mr-1.5 sm:text-green-600" /> <div class="text-green-600 max-sm:hidden">Link copied to clipboard</div> {:else} <CarbonExport class="sm:text-primary-500 text-[.6rem] sm:mr-1.5" /> <div class="max-sm:hidden">Share this conversation</div> {/if} </button> {/if} </div> </div> </div> </div>
chat-ui/src/lib/components/chat/ChatWindow.svelte/0
{ "file_path": "chat-ui/src/lib/components/chat/ChatWindow.svelte", "repo_id": "chat-ui", "token_count": 6773 }
57
import type { ConversationStats } from "$lib/types/ConversationStats"; import { CONVERSATION_STATS_COLLECTION, collections } from "$lib/server/database"; import { logger } from "$lib/server/logger"; import type { ObjectId } from "mongodb"; import { acquireLock, refreshLock } from "$lib/migrations/lock"; export async function computeAllStats() { for (const span of ["day", "week", "month"] as const) { computeStats({ dateField: "updatedAt", type: "conversation", span }).catch((e) => logger.error(e) ); computeStats({ dateField: "createdAt", type: "conversation", span }).catch((e) => logger.error(e) ); computeStats({ dateField: "createdAt", type: "message", span }).catch((e) => logger.error(e)); } } async function computeStats(params: { dateField: ConversationStats["date"]["field"]; span: ConversationStats["date"]["span"]; type: ConversationStats["type"]; }) { const lastComputed = await collections.conversationStats.findOne( { "date.field": params.dateField, "date.span": params.span, type: params.type }, { sort: { "date.at": -1 } } ); // If the last computed week is at the beginning of the last computed month, we need to include some days from the previous month // In those cases we need to compute the stats from before the last month as everything is one aggregation const minDate = lastComputed ? lastComputed.date.at : new Date(0); logger.info( { minDate, dateField: params.dateField, span: params.span, type: params.type }, "Computing conversation stats" ); const dateField = params.type === "message" ? "messages." + params.dateField : params.dateField; const pipeline = [ { $match: { [dateField]: { $gte: minDate }, }, }, { $project: { [dateField]: 1, sessionId: 1, userId: 1, }, }, ...(params.type === "message" ? [ { $unwind: "$messages", }, { $match: { [dateField]: { $gte: minDate }, }, }, ] : []), { $sort: { [dateField]: 1, }, }, { $facet: { userId: [ { $match: { userId: { $exists: true }, }, }, { $group: { _id: { at: { $dateTrunc: { date: `$${dateField}`, unit: params.span } }, userId: "$userId", }, }, }, { $group: { _id: "$_id.at", count: { $sum: 1 }, }, }, { $project: { _id: 0, date: { at: "$_id", field: params.dateField, span: params.span, }, distinct: "userId", count: 1, }, }, ], sessionId: [ { $match: { sessionId: { $exists: true }, }, }, { $group: { _id: { at: { $dateTrunc: { date: `$${dateField}`, unit: params.span } }, sessionId: "$sessionId", }, }, }, { $group: { _id: "$_id.at", count: { $sum: 1 }, }, }, { $project: { _id: 0, date: { at: "$_id", field: params.dateField, span: params.span, }, distinct: "sessionId", count: 1, }, }, ], userOrSessionId: [ { $group: { _id: { at: { $dateTrunc: { date: `$${dateField}`, unit: params.span } }, userOrSessionId: { $ifNull: ["$userId", "$sessionId"] }, }, }, }, { $group: { _id: "$_id.at", count: { $sum: 1 }, }, }, { $project: { _id: 0, date: { at: "$_id", field: params.dateField, span: params.span, }, distinct: "userOrSessionId", count: 1, }, }, ], _id: [ { $group: { _id: { $dateTrunc: { date: `$${dateField}`, unit: params.span } }, count: { $sum: 1 }, }, }, { $project: { _id: 0, date: { at: "$_id", field: params.dateField, span: params.span, }, distinct: "_id", count: 1, }, }, ], }, }, { $project: { stats: { $concatArrays: ["$userId", "$sessionId", "$userOrSessionId", "$_id"], }, }, }, { $unwind: "$stats", }, { $replaceRoot: { newRoot: "$stats", }, }, { $set: { type: params.type, }, }, { $merge: { into: CONVERSATION_STATS_COLLECTION, on: ["date.at", "type", "date.span", "date.field", "distinct"], whenMatched: "replace", whenNotMatched: "insert", 
}, }, ]; await collections.conversations.aggregate(pipeline, { allowDiskUse: true }).next(); logger.info( { minDate, dateField: params.dateField, span: params.span, type: params.type }, "Computed conversation stats" ); } const LOCK_KEY = "conversation.stats"; let hasLock = false; let lockId: ObjectId | null = null; async function maintainLock() { if (hasLock && lockId) { hasLock = await refreshLock(LOCK_KEY, lockId); if (!hasLock) { lockId = null; } } else if (!hasLock) { lockId = (await acquireLock(LOCK_KEY)) || null; hasLock = !!lockId; } setTimeout(maintainLock, 10_000); } export function refreshConversationStats() { const ONE_HOUR_MS = 3_600_000; maintainLock().then(() => { computeAllStats(); setInterval(computeAllStats, 12 * ONE_HOUR_MS); }); }
chat-ui/src/lib/jobs/refresh-conversation-stats.ts/0
{ "file_path": "chat-ui/src/lib/jobs/refresh-conversation-stats.ts", "repo_id": "chat-ui", "token_count": 2646 }
58
import { z } from "zod"; import type { EmbeddingEndpoint, Embedding } from "../embeddingEndpoints"; import { chunk } from "$lib/utils/chunk"; import { env } from "$env/dynamic/private"; import { logger } from "$lib/server/logger"; export const embeddingEndpointHfApiSchema = z.object({ weight: z.number().int().positive().default(1), model: z.any(), type: z.literal("hfapi"), authorization: z .string() .optional() .transform((v) => (!v && env.HF_TOKEN ? "Bearer " + env.HF_TOKEN : v)), // if the header is not set but HF_TOKEN is, use it as the authorization header }); export async function embeddingEndpointHfApi( input: z.input<typeof embeddingEndpointHfApiSchema> ): Promise<EmbeddingEndpoint> { const { model, authorization } = embeddingEndpointHfApiSchema.parse(input); const url = "https://api-inference.huggingface.co/models/" + model.id; return async ({ inputs }) => { const batchesInputs = chunk(inputs, 128); const batchesResults = await Promise.all( batchesInputs.map(async (batchInputs) => { const response = await fetch(`${url}`, { method: "POST", headers: { Accept: "application/json", "Content-Type": "application/json", ...(authorization ? { Authorization: authorization } : {}), }, body: JSON.stringify({ inputs: { source_sentence: batchInputs[0], sentences: batchInputs.slice(1), }, }), }); if (!response.ok) { logger.error(await response.text()); logger.error(response, "Failed to get embeddings from Hugging Face API"); return []; } const embeddings: Embedding[] = await response.json(); return embeddings; }) ); const flatAllEmbeddings = batchesResults.flat(); return flatAllEmbeddings; }; }
chat-ui/src/lib/server/embeddingEndpoints/hfApi/embeddingHfApi.ts/0
{ "file_path": "chat-ui/src/lib/server/embeddingEndpoints/hfApi/embeddingHfApi.ts", "repo_id": "chat-ui", "token_count": 674 }
59
import { buildPrompt } from "$lib/buildPrompt"; import { z } from "zod"; import type { Endpoint } from "../endpoints"; import type { TextGenerationStreamOutput } from "@huggingface/inference"; import { logger } from "$lib/server/logger"; export const endpointLangserveParametersSchema = z.object({ weight: z.number().int().positive().default(1), model: z.any(), type: z.literal("langserve"), url: z.string().url(), }); export function endpointLangserve( input: z.input<typeof endpointLangserveParametersSchema> ): Endpoint { const { url, model } = endpointLangserveParametersSchema.parse(input); return async ({ messages, preprompt, continueMessage }) => { const prompt = await buildPrompt({ messages, continueMessage, preprompt, model, }); const r = await fetch(`${url}/stream`, { method: "POST", headers: { "Content-Type": "application/json", }, body: JSON.stringify({ input: { text: prompt }, }), }); if (!r.ok) { throw new Error(`Failed to generate text: ${await r.text()}`); } const encoder = new TextDecoderStream(); const reader = r.body?.pipeThrough(encoder).getReader(); return (async function* () { let stop = false; let generatedText = ""; let tokenId = 0; let accumulatedData = ""; // Buffer to accumulate data chunks while (!stop) { // Read the stream and log the outputs to console const out = (await reader?.read()) ?? { done: false, value: undefined }; // If it's done, we cancel if (out.done) { reader?.cancel(); return; } if (!out.value) { return; } // Accumulate the data chunk accumulatedData += out.value; // Keep read data to check event type const eventData = out.value; // Process each complete JSON object in the accumulated data while (accumulatedData.includes("\n")) { // Assuming each JSON object ends with a newline const endIndex = accumulatedData.indexOf("\n"); let jsonString = accumulatedData.substring(0, endIndex).trim(); // Remove the processed part from the buffer accumulatedData = accumulatedData.substring(endIndex + 1); // Stopping with end event if (eventData.startsWith("event: end")) { stop = true; yield { token: { id: tokenId++, text: "", logprob: 0, special: true, }, generated_text: generatedText, details: null, } satisfies TextGenerationStreamOutput; reader?.cancel(); continue; } if (eventData.startsWith("event: data") && jsonString.startsWith("data: ")) { jsonString = jsonString.slice(6); let data = null; // Handle the parsed data try { data = JSON.parse(jsonString); } catch (e) { logger.error(e, "Failed to parse JSON"); logger.error(jsonString, "Problematic JSON string:"); continue; // Skip this iteration and try the next chunk } // Assuming content within data is a plain string if (data) { generatedText += data; const output: TextGenerationStreamOutput = { token: { id: tokenId++, text: data, logprob: 0, special: false, }, generated_text: null, details: null, }; yield output; } } } } })(); }; } export default endpointLangserve;
chat-ui/src/lib/server/endpoints/langserve/endpointLangserve.ts/0
{ "file_path": "chat-ui/src/lib/server/endpoints/langserve/endpointLangserve.ts", "repo_id": "chat-ui", "token_count": 1394 }
60
import { env } from "$env/dynamic/private"; import type { ChatTemplateInput } from "$lib/types/Template"; import { compileTemplate } from "$lib/utils/template"; import { z } from "zod"; import endpoints, { endpointSchema, type Endpoint } from "./endpoints/endpoints"; import { endpointTgi } from "./endpoints/tgi/endpointTgi"; import { sum } from "$lib/utils/sum"; import { embeddingModels, validateEmbeddingModelByName } from "./embeddingModels"; import type { PreTrainedTokenizer } from "@huggingface/transformers"; import JSON5 from "json5"; import { getTokenizer } from "$lib/utils/getTokenizer"; import { logger } from "$lib/server/logger"; import { ToolResultStatus, type ToolInput } from "$lib/types/Tool"; type Optional<T, K extends keyof T> = Pick<Partial<T>, K> & Omit<T, K>; const modelConfig = z.object({ /** Used as an identifier in DB */ id: z.string().optional(), /** Used to link to the model page, and for inference */ name: z.string().default(""), displayName: z.string().min(1).optional(), description: z.string().min(1).optional(), logoUrl: z.string().url().optional(), websiteUrl: z.string().url().optional(), modelUrl: z.string().url().optional(), tokenizer: z .union([ z.string(), z.object({ tokenizerUrl: z.string().url(), tokenizerConfigUrl: z.string().url(), }), ]) .optional(), datasetName: z.string().min(1).optional(), datasetUrl: z.string().url().optional(), preprompt: z.string().default(""), prepromptUrl: z.string().url().optional(), chatPromptTemplate: z.string().optional(), promptExamples: z .array( z.object({ title: z.string().min(1), prompt: z.string().min(1), }) ) .optional(), endpoints: z.array(endpointSchema).optional(), parameters: z .object({ temperature: z.number().min(0).max(1).optional(), truncate: z.number().int().positive().optional(), max_new_tokens: z.number().int().positive().optional(), stop: z.array(z.string()).optional(), top_p: z.number().positive().optional(), top_k: z.number().positive().optional(), repetition_penalty: z.number().min(-2).max(2).optional(), }) .passthrough() .optional(), multimodal: z.boolean().default(false), tools: z.boolean().default(false), unlisted: z.boolean().default(false), embeddingModel: validateEmbeddingModelByName(embeddingModels).optional(), }); const modelsRaw = z.array(modelConfig).parse(JSON5.parse(env.MODELS)); async function getChatPromptRender( m: z.infer<typeof modelConfig> ): Promise<ReturnType<typeof compileTemplate<ChatTemplateInput>>> { if (m.chatPromptTemplate) { return compileTemplate<ChatTemplateInput>(m.chatPromptTemplate, m); } let tokenizer: PreTrainedTokenizer; if (!m.tokenizer) { return compileTemplate<ChatTemplateInput>( "{{#if @root.preprompt}}<|im_start|>system\n{{@root.preprompt}}<|im_end|>\n{{/if}}{{#each messages}}{{#ifUser}}<|im_start|>user\n{{content}}<|im_end|>\n<|im_start|>assistant\n{{/ifUser}}{{#ifAssistant}}{{content}}<|im_end|>\n{{/ifAssistant}}{{/each}}", m ); } try { tokenizer = await getTokenizer(m.tokenizer); } catch (e) { logger.error( e, `Failed to load tokenizer for model ${m.name} consider setting chatPromptTemplate manually or making sure the model is available on the hub.` ); process.exit(); } const renderTemplate = ({ messages, preprompt, tools, toolResults }: ChatTemplateInput) => { let formattedMessages: { role: string; content: string }[] = messages.map((message) => ({ content: message.content, role: message.from, })); if (preprompt && formattedMessages[0].role !== "system") { formattedMessages = [ { role: "system", content: preprompt, }, ...formattedMessages, ]; } if 
(toolResults?.length) { // todo: should update the command r+ tokenizer to support system messages at any location // or use the `rag` mode without the citations const id = m.id ?? m.name; if (id.startsWith("CohereForAI")) { formattedMessages = [ { role: "system", content: "\n\n<results>\n" + toolResults .flatMap((result, idx) => { if (result.status === ToolResultStatus.Error) { return ( `Document: ${idx}\n` + `Tool "${result.call.name}" error\n` + result.message ); } return ( `Document: ${idx}\n` + result.outputs .flatMap((output) => Object.entries(output).map(([title, text]) => `${title}\n${text}`) ) .join("\n") ); }) .join("\n\n") + "\n</results>", }, ...formattedMessages, ]; } else if (id.startsWith("meta-llama")) { const results = toolResults.flatMap((result) => { if (result.status === ToolResultStatus.Error) { return [ { tool_call_id: result.call.name, output: "Error: " + result.message, }, ]; } else { return result.outputs.map((output) => ({ tool_call_id: result.call.name, output: JSON.stringify(output), })); } }); formattedMessages = [ ...formattedMessages, { role: "python", content: JSON.stringify(results), }, ]; } else { formattedMessages = [ ...formattedMessages, { role: "system", content: JSON.stringify(toolResults), }, ]; } tools = []; } const chatTemplate = tools?.length ? "tool_use" : undefined; const documents = (toolResults ?? []).flatMap((result) => { if (result.status === ToolResultStatus.Error) { return [{ title: `Tool "${result.call.name}" error`, text: "\n" + result.message }]; } return result.outputs.flatMap((output) => Object.entries(output).map(([title, text]) => ({ title: `Tool "${result.call.name}" ${title}`, text: "\n" + text, })) ); }); const mappedTools = tools?.map((tool) => { const inputs: Record< string, { type: ToolInput["type"]; description: string; required: boolean; } > = {}; for (const value of tool.inputs) { if (value.paramType !== "fixed") { inputs[value.name] = { type: value.type, description: value.description ?? "", required: value.paramType === "required", }; } } return { name: tool.name, description: tool.description, parameter_definitions: inputs, }; }) ?? []; const output = tokenizer.apply_chat_template(formattedMessages, { tokenize: false, add_generation_prompt: true, chat_template: chatTemplate, // eslint-disable-next-line @typescript-eslint/ban-ts-comment // @ts-ignore tools: mappedTools, documents, }); if (typeof output !== "string") { throw new Error("Failed to apply chat template, the output is not a string"); } return output; }; return renderTemplate; } const processModel = async (m: z.infer<typeof modelConfig>) => ({ ...m, chatPromptRender: await getChatPromptRender(m), id: m.id || m.name, displayName: m.displayName || m.name, preprompt: m.prepromptUrl ? await fetch(m.prepromptUrl).then((r) => r.text()) : m.preprompt, parameters: { ...m.parameters, stop_sequences: m.parameters?.stop }, }); export type ProcessedModel = Awaited<ReturnType<typeof processModel>> & { getEndpoint: () => Promise<Endpoint>; }; const addEndpoint = (m: Awaited<ReturnType<typeof processModel>>) => ({ ...m, getEndpoint: async (): Promise<Endpoint> => { if (!m.endpoints) { return endpointTgi({ type: "tgi", url: `${env.HF_API_ROOT}/${m.name}`, accessToken: env.HF_TOKEN ?? 
env.HF_ACCESS_TOKEN, weight: 1, model: m, }); } const totalWeight = sum(m.endpoints.map((e) => e.weight)); let random = Math.random() * totalWeight; for (const endpoint of m.endpoints) { if (random < endpoint.weight) { const args = { ...endpoint, model: m }; switch (args.type) { case "tgi": return endpoints.tgi(args); case "anthropic": return endpoints.anthropic(args); case "anthropic-vertex": return endpoints.anthropicvertex(args); case "bedrock": return endpoints.bedrock(args); case "aws": return await endpoints.aws(args); case "openai": return await endpoints.openai(args); case "llamacpp": return endpoints.llamacpp(args); case "ollama": return endpoints.ollama(args); case "vertex": return await endpoints.vertex(args); case "genai": return await endpoints.genai(args); case "cloudflare": return await endpoints.cloudflare(args); case "cohere": return await endpoints.cohere(args); case "langserve": return await endpoints.langserve(args); default: // for legacy reason return endpoints.tgi(args); } } random -= endpoint.weight; } throw new Error(`Failed to select endpoint`); }, }); export const models: ProcessedModel[] = await Promise.all( modelsRaw.map((e) => processModel(e).then(addEndpoint)) ); export const defaultModel = models[0]; // Models that have been deprecated export const oldModels = env.OLD_MODELS ? z .array( z.object({ id: z.string().optional(), name: z.string().min(1), displayName: z.string().min(1).optional(), }) ) .parse(JSON5.parse(env.OLD_MODELS)) .map((m) => ({ ...m, id: m.id || m.name, displayName: m.displayName || m.name })) : []; export const validateModel = (_models: BackendModel[]) => { // Zod enum function requires 2 parameters return z.enum([_models[0].id, ..._models.slice(1).map((m) => m.id)]); }; // if `TASK_MODEL` is string & name of a model in `MODELS`, then we use `MODELS[TASK_MODEL]`, else we try to parse `TASK_MODEL` as a model config itself export const smallModel = env.TASK_MODEL ? (models.find((m) => m.name === env.TASK_MODEL) || (await processModel(modelConfig.parse(JSON5.parse(env.TASK_MODEL))).then((m) => addEndpoint(m) ))) ?? defaultModel : defaultModel; export type BackendModel = Optional< typeof defaultModel, "preprompt" | "parameters" | "multimodal" | "unlisted" | "tools" >;
chat-ui/src/lib/server/models.ts/0
{ "file_path": "chat-ui/src/lib/server/models.ts", "repo_id": "chat-ui", "token_count": 4204 }
61
import type { EmbeddingBackendModel } from "$lib/server/embeddingModels"; import { getSentenceSimilarity } from "$lib/server/sentenceSimilarity"; /** * Combines sentences together to reach the maximum character limit of the embedding model * Improves performance considerably when using CPU embedding */ export async function getCombinedSentenceSimilarity( embeddingModel: EmbeddingBackendModel, query: string, sentences: string[] ): ReturnType<typeof getSentenceSimilarity> { const combinedSentences = sentences.reduce<{ text: string; indices: number[] }[]>( (acc, sentence, idx) => { const lastSentence = acc[acc.length - 1]; if (!lastSentence) return [{ text: sentence, indices: [idx] }]; if (lastSentence.text.length + sentence.length < embeddingModel.chunkCharLength) { lastSentence.text += ` ${sentence}`; lastSentence.indices.push(idx); return acc; } return [...acc, { text: sentence, indices: [idx] }]; }, [] ); const embeddings = await getSentenceSimilarity( embeddingModel, query, combinedSentences.map(({ text }) => text) ); return embeddings.flatMap((embedding, idx) => { const { indices } = combinedSentences[idx]; return indices.map((i) => ({ ...embedding, idx: i })); }); }
chat-ui/src/lib/server/websearch/embed/combine.ts/0
{ "file_path": "chat-ui/src/lib/server/websearch/embed/combine.ts", "repo_id": "chat-ui", "token_count": 420 }
62
import { env } from "$env/dynamic/private"; import type { WebSearchSource } from "$lib/types/WebSearch"; export default async function search(query: string): Promise<WebSearchSource[]> { const response = await fetch( `https://www.searchapi.io/api/v1/search?engine=google&hl=en&gl=us&q=${query}`, { method: "GET", headers: { Authorization: `Bearer ${env.SEARCHAPI_KEY}`, "Content-type": "application/json", }, } ); /* eslint-disable @typescript-eslint/no-explicit-any */ const data = (await response.json()) as Record<string, any>; if (!response.ok) { throw new Error( data["message"] ?? `SearchApi returned error code ${response.status} - ${response.statusText}` ); } return data["organic_results"] ?? []; }
chat-ui/src/lib/server/websearch/search/endpoints/searchApi.ts/0
{ "file_path": "chat-ui/src/lib/server/websearch/search/endpoints/searchApi.ts", "repo_id": "chat-ui", "token_count": 274 }
63
import { writable } from "svelte/store"; export interface TitleUpdate { convId: string; title: string; } export default writable<TitleUpdate | null>(null);
chat-ui/src/lib/stores/titleUpdate.ts/0
{ "file_path": "chat-ui/src/lib/stores/titleUpdate.ts", "repo_id": "chat-ui", "token_count": 50 }
64
import type { ObjectId } from "bson"; import type { Timestamps } from "./Timestamps"; import type { User } from "./User"; export interface Session extends Timestamps { _id: ObjectId; sessionId: string; userId: User["_id"]; userAgent?: string; ip?: string; expiresAt: Date; }
chat-ui/src/lib/types/Session.ts/0
{ "file_path": "chat-ui/src/lib/types/Session.ts", "repo_id": "chat-ui", "token_count": 97 }
65
import { base } from "$app/paths"; import type { Client } from "@gradio/client"; export type ApiReturnType = Awaited<ReturnType<typeof Client.prototype.view_api>>; export async function getGradioApi(space: string) { const api: ApiReturnType = await fetch(`${base}/api/spaces-config?space=${space}`).then( async (res) => { if (!res.ok) { throw new Error(await res.text()); } return res.json(); } ); return api; }
chat-ui/src/lib/utils/getGradioApi.ts/0
{ "file_path": "chat-ui/src/lib/utils/getGradioApi.ts", "repo_id": "chat-ui", "token_count": 166 }
66
/** Takes an unknown error and attempts to convert it to a string */ export function stringifyError(error: unknown): string { if (error instanceof Error) return error.message; if (typeof error === "string") return error; if (typeof error === "object" && error !== null) { // try a few common properties if ("message" in error && typeof error.message === "string") return error.message; if ("body" in error && typeof error.body === "string") return error.body; if ("name" in error && typeof error.name === "string") return error.name; } return "Unknown error"; }
chat-ui/src/lib/utils/stringifyError.ts/0
{ "file_path": "chat-ui/src/lib/utils/stringifyError.ts", "repo_id": "chat-ui", "token_count": 167 }
67
<script lang="ts"> import { page } from "$app/stores"; </script> <div class="flex items-center justify-center bg-gradient-to-t from-gray-200 text-gray-800 dark:from-gray-700 dark:text-gray-300" > <div class="align-center -mt-24 flex flex-col justify-center rounded-xl border bg-white px-8 pb-2 pt-4 text-center dark:border-gray-700 dark:bg-gray-800" > <h1 class="mb-2 text-5xl font-semibold">{$page.status}</h1> <div class="-mx-8 my-2 h-px bg-gray-200 dark:bg-gray-700" /> <h2 class="max-w-sm text-lg">{$page.error?.message}</h2> {#if $page.error?.errorId} <div class="-mx-8 my-2 h-px bg-gray-200 dark:bg-gray-700" /> <pre class="max-w-sm whitespace-pre-wrap text-left font-mono text-xs">{$page.error .errorId}</pre> {/if} </div> </div>
chat-ui/src/routes/+error.svelte/0
{ "file_path": "chat-ui/src/routes/+error.svelte", "repo_id": "chat-ui", "token_count": 344 }
68
import { base } from "$app/paths"; import { collections } from "$lib/server/database"; import { redirect } from "@sveltejs/kit"; import { ObjectId } from "mongodb"; export const load = async ({ params }) => { try { const assistant = await collections.assistants.findOne({ _id: new ObjectId(params.assistantId), }); if (!assistant) { redirect(302, `${base}`); } return { assistant: JSON.parse(JSON.stringify(assistant)) }; } catch { redirect(302, `${base}`); } };
chat-ui/src/routes/assistant/[assistantId]/+page.server.ts/0
{ "file_path": "chat-ui/src/routes/assistant/[assistantId]/+page.server.ts", "repo_id": "chat-ui", "token_count": 176 }
69
export async function GET() { return new Response("OK", { status: 200 }); }
chat-ui/src/routes/healthcheck/+server.ts/0
{ "file_path": "chat-ui/src/routes/healthcheck/+server.ts", "repo_id": "chat-ui", "token_count": 22 }
70
<script lang="ts"> import { page } from "$app/stores"; import { base } from "$app/paths"; import { env as envPublic } from "$env/dynamic/public"; import type { BackendModel } from "$lib/server/models"; import { useSettingsStore } from "$lib/stores/settings"; import CopyToClipBoardBtn from "$lib/components/CopyToClipBoardBtn.svelte"; import TokensCounter from "$lib/components/TokensCounter.svelte"; import CarbonArrowUpRight from "~icons/carbon/arrow-up-right"; import CarbonLink from "~icons/carbon/link"; import CarbonChat from "~icons/carbon/chat"; import { goto } from "$app/navigation"; const settings = useSettingsStore(); $: if ($settings.customPrompts[$page.params.model] === undefined) { $settings.customPrompts = { ...$settings.customPrompts, [$page.params.model]: $page.data.models.find((el: BackendModel) => el.id === $page.params.model)?.preprompt || "", }; } $: hasCustomPreprompt = $settings.customPrompts[$page.params.model] !== $page.data.models.find((el: BackendModel) => el.id === $page.params.model)?.preprompt; $: model = $page.data.models.find((el: BackendModel) => el.id === $page.params.model); </script> <div class="flex flex-col items-start"> <div class="mb-5 flex flex-col gap-1.5"> <h2 class="text-lg font-semibold md:text-xl"> {$page.params.model} </h2> {#if model.description} <p class="whitespace-pre-wrap text-gray-600"> {model.description} </p> {/if} </div> <div class="flex flex-wrap items-center gap-2 md:gap-4"> {#if model.modelUrl} <a href={model.modelUrl || "https://huggingface.co/" + model.name} target="_blank" rel="noreferrer" class="flex items-center truncate underline underline-offset-2" > <CarbonArrowUpRight class="mr-1.5 shrink-0 text-xs " /> Model page </a> {/if} {#if model.datasetName || model.datasetUrl} <a href={model.datasetUrl || "https://huggingface.co/datasets/" + model.datasetName} target="_blank" rel="noreferrer" class="flex items-center truncate underline underline-offset-2" > <CarbonArrowUpRight class="mr-1.5 shrink-0 text-xs " /> Dataset page </a> {/if} {#if model.websiteUrl} <a href={model.websiteUrl} target="_blank" class="flex items-center truncate underline underline-offset-2" rel="noreferrer" > <CarbonArrowUpRight class="mr-1.5 shrink-0 text-xs " /> Model website </a> {/if} <CopyToClipBoardBtn value="{envPublic.PUBLIC_ORIGIN || $page.url.origin}{base}/models/{model.id}" classNames="!border-none !shadow-none !py-0 !px-1 !rounded-md" > <div class="flex items-center gap-1.5 hover:underline"> <CarbonLink />Copy direct link to model </div> </CopyToClipBoardBtn> </div> <button class="my-2 flex w-fit items-center rounded-full bg-black px-3 py-1 text-base !text-white" name="Activate model" on:click|stopPropagation={() => { settings.instantSet({ activeModel: $page.params.model, }); goto(`${base}/`); }} > <CarbonChat class="mr-1.5 text-sm" /> New chat </button> <div class="relative flex w-full flex-col gap-2"> <div class="flex w-full flex-row content-between"> <h3 class="mb-1.5 text-lg font-semibold text-gray-800">System Prompt</h3> {#if hasCustomPreprompt} <button class="ml-auto underline decoration-gray-300 hover:decoration-gray-700" on:click|stopPropagation={() => ($settings.customPrompts[$page.params.model] = model.preprompt)} > Reset </button> {/if} </div> <textarea rows="10" class="w-full resize-none rounded-md border-2 bg-gray-100 p-2" bind:value={$settings.customPrompts[$page.params.model]} /> {#if model.tokenizer && $settings.customPrompts[$page.params.model]} <TokensCounter classNames="absolute bottom-2 right-2" 
prompt={$settings.customPrompts[$page.params.model]} modelTokenizer={model.tokenizer} truncate={model?.parameters?.truncate} /> {/if} </div> </div>
chat-ui/src/routes/settings/(nav)/[...model]/+page.svelte/0
{ "file_path": "chat-ui/src/routes/settings/(nav)/[...model]/+page.svelte", "repo_id": "chat-ui", "token_count": 1678 }
71
<script lang="ts"> import type { PageData } from "./$types"; import { env as envPublic } from "$env/dynamic/public"; import { isHuggingChat } from "$lib/utils/isHuggingChat"; import { goto } from "$app/navigation"; import { base } from "$app/paths"; import { page } from "$app/stores"; import CarbonAdd from "~icons/carbon/add"; import CarbonHelpFilled from "~icons/carbon/help-filled"; import CarbonClose from "~icons/carbon/close"; import CarbonArrowUpRight from "~icons/carbon/arrow-up-right"; import CarbonEarthAmerica from "~icons/carbon/earth-americas-filled"; import CarbonSearch from "~icons/carbon/search"; import Pagination from "$lib/components/Pagination.svelte"; import { getHref } from "$lib/utils/getHref"; import { debounce } from "$lib/utils/debounce"; import { isDesktop } from "$lib/utils/isDesktop"; import { SortKey } from "$lib/types/Assistant"; import ToolLogo from "$lib/components/ToolLogo.svelte"; export let data: PageData; $: tools = data.tools.filter((t) => activeOnly ? data.settings.tools.some((toolId) => toolId === t._id.toString()) : true ); $: toolsCreator = $page.url.searchParams.get("user"); $: createdByMe = data.user?.username && data.user.username === toolsCreator; $: activeOnly = $page.url.searchParams.get("active") === "true"; const SEARCH_DEBOUNCE_DELAY = 400; let filterInputEl: HTMLInputElement; let filterValue = data.query; let isFilterInPorgress = false; let sortValue = data.sort as SortKey; const resetFilter = () => { filterValue = ""; isFilterInPorgress = false; }; const filterOnName = debounce(async (value: string) => { filterValue = value; if (isFilterInPorgress) { return; } isFilterInPorgress = true; const newUrl = getHref($page.url, { newKeys: { q: value }, existingKeys: { behaviour: "delete", keys: ["p"] }, }); await goto(newUrl); if (isDesktop(window)) { setTimeout(() => filterInputEl.focus(), 0); } isFilterInPorgress = false; // there was a new filter query before server returned response if (filterValue !== value) { filterOnName(filterValue); } }, SEARCH_DEBOUNCE_DELAY); const sortTools = () => { const newUrl = getHref($page.url, { newKeys: { sort: sortValue }, existingKeys: { behaviour: "delete", keys: ["p"] }, }); goto(newUrl); }; const goToActiveUrl = () => { return getHref($page.url, { newKeys: { active: "true" }, existingKeys: { behaviour: "delete_except", keys: ["active", "sort"] }, }); }; const goToCommunity = () => { return getHref($page.url, { existingKeys: { behaviour: "delete_except", keys: ["sort", "q"] }, }); }; </script> <svelte:head> {#if isHuggingChat} <title>HuggingChat - Tools</title> <meta property="og:title" content="HuggingChat - Tools" /> <meta property="og:type" content="link" /> <meta property="og:description" content="Browse HuggingChat tools made by the community." 
/> <meta property="og:image" content="{envPublic.PUBLIC_ORIGIN || $page.url.origin}{base}/{envPublic.PUBLIC_APP_ASSETS}/tools-thumbnail.png" /> <meta property="og:url" content={$page.url.href} /> {/if} </svelte:head> <div class="scrollbar-custom mr-1 h-full overflow-y-auto py-12 max-sm:pt-8 md:py-24"> <div class="pt-42 mx-auto flex flex-col px-5 xl:w-[60rem] 2xl:w-[64rem]"> <div class="flex items-center"> <h1 class="text-2xl font-bold">Tools</h1> {#if isHuggingChat} <div class="5 ml-1.5 rounded-lg text-xxs uppercase text-gray-500 dark:text-gray-500"> beta </div> <a href="https://huggingface.co/spaces/huggingchat/chat-ui/discussions/357" class="ml-auto dark:text-gray-400 dark:hover:text-gray-300" target="_blank" > <CarbonHelpFilled /> </a> {/if} </div> <h3 class="text-gray-500">Popular tools made by the community</h3> <h4 class="mt-2 w-fit text-purple-700 dark:text-purple-300"> This feature is in <span class="rounded-lg bg-purple-100 px-2 py-1 font-semibold dark:bg-purple-800/50" >early access</span >. Only team members can see it and use it for now. Feel free to share feedback on it internally! </h4> <div class="ml-auto mt-6 flex justify-between gap-2 max-sm:flex-col sm:items-center"> <a href={`${base}/tools/new`} class="flex items-center gap-1 whitespace-nowrap rounded-lg border bg-white py-1 pl-1.5 pr-2.5 shadow-sm hover:bg-gray-50 hover:shadow-none dark:border-gray-600 dark:bg-gray-700 dark:hover:bg-gray-700" > <CarbonAdd />Create new tool </a> </div> <div class="mt-7 flex flex-wrap items-center gap-x-2 gap-y-3 text-sm"> {#if toolsCreator && !createdByMe} <div class="flex items-center gap-1.5 rounded-full border border-gray-300 bg-gray-50 px-3 py-1 dark:border-gray-600 dark:bg-gray-700 dark:text-white" > {toolsCreator}'s tools <a href={getHref($page.url, { existingKeys: { behaviour: "delete", keys: ["user", "modelId", "p", "q"] }, })} on:click={resetFilter} class="group" ><CarbonClose class="text-xs group-hover:text-gray-800 dark:group-hover:text-gray-300" /></a > </div> {#if isHuggingChat} <a href="https://hf.co/{toolsCreator}" target="_blank" class="ml-auto flex items-center text-xs text-gray-500 underline hover:text-gray-800 dark:text-gray-400 dark:hover:text-gray-300" ><CarbonArrowUpRight class="mr-1 flex-none text-[0.58rem]" target="_blank" />View {toolsCreator} on HF</a > {/if} {:else} <a href={goToActiveUrl()} class="flex items-center gap-1.5 rounded-full border px-3 py-1 {activeOnly ? 'border-gray-300 bg-gray-50 dark:border-gray-600 dark:bg-gray-700 dark:text-white' : 'border-transparent text-gray-400 hover:text-gray-800 dark:hover:text-gray-300'}" > <CarbonEarthAmerica class="text-xs" /> Active ({$page.data.settings?.tools?.length}) </a> <a href={goToCommunity()} class="flex items-center gap-1.5 rounded-full border px-3 py-1 {!activeOnly && !toolsCreator ? 'border-gray-300 bg-gray-50 dark:border-gray-600 dark:bg-gray-700 dark:text-white' : 'border-transparent text-gray-400 hover:text-gray-800 dark:hover:text-gray-300'}" > <CarbonEarthAmerica class="text-xs" /> Community </a> {#if data.user?.username} <a href={getHref($page.url, { newKeys: { user: data.user.username }, existingKeys: { behaviour: "delete", keys: ["modelId", "p", "q", "active"] }, })} on:click={resetFilter} class="flex items-center gap-1.5 truncate rounded-full border px-3 py-1 {toolsCreator && createdByMe ? 
'border-gray-300 bg-gray-50 dark:border-gray-600 dark:bg-gray-700 dark:text-white' : 'border-transparent text-gray-400 hover:text-gray-800 dark:hover:text-gray-300'}" >{data.user.username} </a> {/if} {/if} <div class="relative ml-auto flex h-[30px] w-40 items-center rounded-full border px-2 has-[:focus]:border-gray-400 dark:border-gray-600 sm:w-64" > <CarbonSearch class="pointer-events-none absolute left-2 text-xs text-gray-400" /> <input class="h-[30px] w-full bg-transparent pl-5 focus:outline-none" placeholder="Filter by name" value={filterValue} on:input={(e) => filterOnName(e.currentTarget.value)} bind:this={filterInputEl} maxlength="150" type="search" /> </div> <select bind:value={sortValue} on:change={sortTools} class="rounded-lg border border-gray-300 bg-gray-50 px-2 py-1 text-sm text-gray-900 focus:border-blue-700 focus:ring-blue-700 dark:border-gray-600 dark:bg-gray-700 dark:text-white dark:placeholder-gray-400" > <option value={SortKey.TRENDING}>{SortKey.TRENDING}</option> <option value={SortKey.POPULAR}>{SortKey.POPULAR}</option> </select> </div> <div class="mt-8 grid grid-cols-1 gap-3 sm:gap-5 lg:grid-cols-2"> {#each tools as tool} {@const isActive = ($page.data.settings?.tools ?? []).includes(tool._id.toString())} {@const isOfficial = !tool.createdByName} <a href="{base}/tools/{tool._id.toString()}" class="relative flex flex-row items-center gap-4 overflow-hidden text-balance rounded-xl border bg-gray-50/50 px-4 text-center shadow hover:bg-gray-50 hover:shadow-inner dark:border-gray-800/70 dark:bg-gray-950/20 dark:hover:bg-gray-950/40 max-sm:px-4 sm:h-24" class:!border-blue-600={isActive} > <ToolLogo color={tool.color} icon={tool.icon} /> <div class="flex h-full w-full flex-col items-start py-2 text-left"> <span class="font-bold"> <span class="w-full overflow-clip"> {tool.displayName} </span> {#if isActive} <span class="mx-1.5 inline-flex items-center rounded-full bg-blue-600 px-2 py-0.5 text-xs font-semibold text-white" >Active</span > {/if} </span> <span class="line-clamp-1 font-mono text-xs text-gray-400"> {tool.baseUrl ?? "Internal tool"} </span> <p class=" line-clamp-1 w-full text-sm text-gray-600 dark:text-gray-300"> {tool.description} </p> {#if !isOfficial} <p class="mt-auto text-xs text-gray-400 dark:text-gray-500"> Added by <a class="hover:underline" href="{base}/tools?user={tool.createdByName}" on:click|stopPropagation > {tool.createdByName} </a> <span class="text-gray-300">•</span> {tool.useCount} runs </p> {:else} <p class="mt-auto text-xs text-purple-700 dark:text-purple-400"> HuggingChat official tool </p> {/if} </div> </a> {:else} No tools found {/each} </div> <Pagination classNames="w-full flex justify-center mt-14 mb-4" numItemsPerPage={data.numItemsPerPage} numTotalItems={data.numTotalItems} /> </div> </div>
chat-ui/src/routes/tools/+page.svelte/0
{ "file_path": "chat-ui/src/routes/tools/+page.svelte", "repo_id": "chat-ui", "token_count": 4314 }
72
const defaultTheme = require("tailwindcss/defaultTheme"); const colors = require("tailwindcss/colors"); /** @type {import('tailwindcss').Config} */ export default { darkMode: "class", mode: "jit", content: ["./src/**/*.{html,js,svelte,ts}"], theme: { extend: { colors: { primary: colors[process.env.PUBLIC_APP_COLOR], }, fontSize: { xxs: "0.625rem", smd: "0.94rem", }, }, }, plugins: [ require("tailwind-scrollbar")({ nocompatible: true }), require("@tailwindcss/typography"), ], };
chat-ui/tailwind.config.cjs/0
{ "file_path": "chat-ui/tailwind.config.cjs", "repo_id": "chat-ui", "token_count": 220 }
73
import json import os import tempfile import datasets from utils import generate_example_dataset, get_duration SPEED_TEST_N_EXAMPLES = 500_000 RESULTS_BASEPATH, RESULTS_FILENAME = os.path.split(__file__) RESULTS_FILE_PATH = os.path.join(RESULTS_BASEPATH, "results", RESULTS_FILENAME.replace(".py", ".json")) @get_duration def select(dataset: datasets.Dataset): _ = dataset.select(range(0, len(dataset), 2)) @get_duration def sort(dataset: datasets.Dataset): _ = dataset.sort("numbers") @get_duration def shuffle(dataset: datasets.Dataset): _ = dataset.shuffle() @get_duration def train_test_split(dataset: datasets.Dataset): _ = dataset.train_test_split(0.1) @get_duration def shard(dataset: datasets.Dataset, num_shards=10): for shard_id in range(num_shards): _ = dataset.shard(num_shards, shard_id) def benchmark_indices_mapping(): times = {"num examples": SPEED_TEST_N_EXAMPLES} functions = (select, sort, shuffle, train_test_split, shard) with tempfile.TemporaryDirectory() as tmp_dir: print("generating dataset") features = datasets.Features({"text": datasets.Value("string"), "numbers": datasets.Value("float32")}) dataset = generate_example_dataset( os.path.join(tmp_dir, "dataset.arrow"), features, num_examples=SPEED_TEST_N_EXAMPLES ) print("Functions") for func in functions: print(func.__name__) times[func.__name__] = func(dataset) with open(RESULTS_FILE_PATH, "wb") as f: f.write(json.dumps(times).encode("utf-8")) if __name__ == "__main__": # useful to run the profiler benchmark_indices_mapping()
datasets/benchmarks/benchmark_indices_mapping.py/0
{ "file_path": "datasets/benchmarks/benchmark_indices_mapping.py", "repo_id": "datasets", "token_count": 677 }
74
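The benchmark script above imports `generate_example_dataset` and `get_duration` from a local `utils` module that is not included in this dump, so their exact implementations are assumptions. Based purely on how `get_duration` is used (each decorated function's return value is stored as a timing), a minimal sketch of such a decorator could look like this:

```python
import functools
import time


def get_duration(func):
    """Hypothetical timing decorator: run `func` and return its wall-clock duration in seconds."""

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        func(*args, **kwargs)  # the benchmarked operation; its own return value is discarded
        return time.perf_counter() - start

    return wrapper
```

With a decorator along these lines, `times[func.__name__] = func(dataset)` in the script stores a duration in seconds per operation, which matches how the results are serialized to JSON.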
# The cache

The cache is one of the reasons why 🤗 Datasets is so efficient. It stores previously downloaded and processed datasets so when you need to use them again, they are reloaded directly from the cache. This avoids having to download a dataset all over again, or reapply processing functions. Even after you close and start another Python session, 🤗 Datasets will reload your dataset directly from the cache!

## Fingerprint

How does the cache keep track of the transforms that are applied to a dataset? 🤗 Datasets assigns a fingerprint to the cache file. A fingerprint keeps track of the current state of a dataset. The initial fingerprint is computed using a hash of the Arrow table, or a hash of the Arrow files if the dataset is on disk. Subsequent fingerprints are computed by combining the fingerprint of the previous state and a hash of the latest transform applied.

<Tip>

Transforms are any of the processing methods from the [How-to Process](./process) guides such as [`Dataset.map`] or [`Dataset.shuffle`].

</Tip>

Here is what actual fingerprints look like:

```py
>>> from datasets import Dataset
>>> dataset1 = Dataset.from_dict({"a": [0, 1, 2]})
>>> dataset2 = dataset1.map(lambda x: {"a": x["a"] + 1})
>>> print(dataset1._fingerprint, dataset2._fingerprint)
d19493523d95e2dc 5b86abacd4b42434
```

In order for a transform to be hashable, it needs to be picklable by [dill](https://dill.readthedocs.io/en/latest/) or [pickle](https://docs.python.org/3/library/pickle).

When you use a non-hashable transform, 🤗 Datasets uses a random fingerprint instead and raises a warning. The non-hashable transform is considered different from the previous transforms. As a result, 🤗 Datasets will recompute all the transforms. Make sure your transforms are serializable with pickle or dill to avoid this!

An example of when 🤗 Datasets recomputes everything is when caching is disabled. When this happens, the cache files are generated every time and written to a temporary directory. Once your Python session ends, the cache files in the temporary directory are deleted. A random hash is assigned to these cache files, instead of a fingerprint.

<Tip>

When caching is disabled, use [`Dataset.save_to_disk`] to save your transformed dataset, or it will be deleted once the session ends.

</Tip>

## Hashing

The fingerprint of a dataset is updated by hashing the function passed to `map` as well as the `map` parameters (`batch_size`, `remove_columns`, etc.).

You can check the hash of any Python object using the [`fingerprint.Hasher`]:

```py
>>> from datasets.fingerprint import Hasher
>>> my_func = lambda example: {"length": len(example["text"])}
>>> print(Hasher.hash(my_func))
'3d35e2b3e94c81d6'
```

The hash is computed by dumping the object using a `dill` pickler and hashing the dumped bytes. The pickler recursively dumps all the variables used in your function, so any change you make to an object that is used in your function will cause the hash to change.

If one of your functions doesn't seem to have the same hash across sessions, it means at least one of its variables contains a Python object that is not deterministic. When this happens, feel free to hash any object you find suspicious to try to find the object that caused the hash to change. For example, if you use a list for which the order of its elements is not deterministic across sessions, then the hash won't be the same across sessions either.
datasets/docs/source/about_cache.mdx/0
{ "file_path": "datasets/docs/source/about_cache.mdx", "repo_id": "datasets", "token_count": 909 }
75
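The caching guide above describes what happens when caching is disabled but does not show how to toggle it. As a short illustration using the `disable_caching`/`enable_caching` helpers from 🤗 Datasets (the dataset contents and output path below are made up for the example):

```python
from datasets import Dataset, disable_caching, enable_caching

disable_caching()  # transforms now get random fingerprints and write to a temporary directory

ds = Dataset.from_dict({"text": ["a", "ab", "abc"]})
ds = ds.map(lambda example: {"length": len(example["text"])})

# The temporary cache files are deleted when the session ends, so persist the result explicitly
ds.save_to_disk("my_processed_dataset")

enable_caching()  # restore the default fingerprint-based caching
```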
# Cloud storage 🤗 Datasets supports access to cloud storage providers through a `fsspec` FileSystem implementations. You can save and load datasets from any cloud storage in a Pythonic way. Take a look at the following table for some example of supported cloud storage providers: | Storage provider | Filesystem implementation | |----------------------|---------------------------------------------------------------| | Amazon S3 | [s3fs](https://s3fs.readthedocs.io/en/latest/) | | Google Cloud Storage | [gcsfs](https://gcsfs.readthedocs.io/en/latest/) | | Azure Blob/DataLake | [adlfs](https://github.com/fsspec/adlfs) | | Dropbox | [dropboxdrivefs](https://github.com/MarineChap/dropboxdrivefs)| | Google Drive | [gdrivefs](https://github.com/intake/gdrivefs) | | Oracle Cloud Storage | [ocifs](https://ocifs.readthedocs.io/en/latest/) | This guide will show you how to save and load datasets with any cloud storage. Here are examples for S3, Google Cloud Storage, Azure Blob Storage, and Oracle Cloud Object Storage. ## Set up your cloud storage FileSystem ### Amazon S3 1. Install the S3 FileSystem implementation: ``` >>> pip install s3fs ``` 2. Define your credentials To use an anonymous connection, use `anon=True`. Otherwise, include your `aws_access_key_id` and `aws_secret_access_key` whenever you are interacting with a private S3 bucket. ```py >>> storage_options = {"anon": True} # for anonymous connection # or use your credentials >>> storage_options = {"key": aws_access_key_id, "secret": aws_secret_access_key} # for private buckets # or use a botocore session >>> import aiobotocore.session >>> s3_session = aiobotocore.session.AioSession(profile="my_profile_name") >>> storage_options = {"session": s3_session} ``` 3. Create your FileSystem instance ```py >>> import s3fs >>> fs = s3fs.S3FileSystem(**storage_options) ``` ### Google Cloud Storage 1. Install the Google Cloud Storage implementation: ``` >>> conda install -c conda-forge gcsfs # or install with pip >>> pip install gcsfs ``` 2. Define your credentials ```py >>> storage_options={"token": "anon"} # for anonymous connection # or use your credentials of your default gcloud credentials or from the google metadata service >>> storage_options={"project": "my-google-project"} # or use your credentials from elsewhere, see the documentation at https://gcsfs.readthedocs.io/ >>> storage_options={"project": "my-google-project", "token": TOKEN} ``` 3. Create your FileSystem instance ```py >>> import gcsfs >>> fs = gcsfs.GCSFileSystem(**storage_options) ``` ### Azure Blob Storage 1. Install the Azure Blob Storage implementation: ``` >>> conda install -c conda-forge adlfs # or install with pip >>> pip install adlfs ``` 2. Define your credentials ```py >>> storage_options = {"anon": True} # for anonymous connection # or use your credentials >>> storage_options = {"account_name": ACCOUNT_NAME, "account_key": ACCOUNT_KEY} # gen 2 filesystem # or use your credentials with the gen 1 filesystem >>> storage_options={"tenant_id": TENANT_ID, "client_id": CLIENT_ID, "client_secret": CLIENT_SECRET} ``` 3. Create your FileSystem instance ```py >>> import adlfs >>> fs = adlfs.AzureBlobFileSystem(**storage_options) ``` ### Oracle Cloud Object Storage 1. Install the OCI FileSystem implementation: ``` >>> pip install ocifs ``` 2. Define your credentials ```py >>> storage_options = {"config": "~/.oci/config", "region": "us-ashburn-1"} ``` 3. 
Create your FileSystem instance ```py >>> import ocifs >>> fs = ocifs.OCIFileSystem(**storage_options) ``` ## Load and Save your datasets using your cloud storage FileSystem ### Download and prepare a dataset into a cloud storage You can download and prepare a dataset into your cloud storage by specifying a remote `output_dir` in `download_and_prepare`. Don't forget to use the previously defined `storage_options` containing your credentials to write into a private cloud storage. The `download_and_prepare` method works in two steps: 1. it first downloads the raw data files (if any) in your local cache. You can set your cache directory by passing `cache_dir` to [`load_dataset_builder`] 2. then it generates the dataset in Arrow or Parquet format in your cloud storage by iterating over the raw data files. Load a dataset builder from the Hugging Face Hub (see [how to load from the Hugging Face Hub](./loading#hugging-face-hub)): ```py >>> output_dir = "s3://my-bucket/imdb" >>> builder = load_dataset_builder("imdb") >>> builder.download_and_prepare(output_dir, storage_options=storage_options, file_format="parquet") ``` Use your own data files (see [how to load local and remote files](./loading#local-and-remote-files)): ```py >>> data_files = {"train": ["path/to/train.csv"]} >>> output_dir = "s3://my-bucket/imdb" >>> builder = load_dataset_builder("csv", data_files=data_files) >>> builder.download_and_prepare(output_dir, storage_options=storage_options, file_format="parquet") ``` It is highly recommended to save the files as compressed Parquet files to optimize I/O by specifying `file_format="parquet"`. Otherwise the dataset is saved as an uncompressed Arrow file. You can also specify the size of the shards using `max_shard_size` (default is 500MB): ```py >>> builder.download_and_prepare(output_dir, storage_options=storage_options, file_format="parquet", max_shard_size="1GB") ``` #### Dask Dask is a parallel computing library and it has a pandas-like API for working with larger than memory Parquet datasets in parallel. Dask can use multiple threads or processes on a single machine, or a cluster of machines to process data in parallel. Dask supports local data but also data from a cloud storage. Therefore you can load a dataset saved as sharded Parquet files in Dask with ```py import dask.dataframe as dd df = dd.read_parquet(output_dir, storage_options=storage_options) # or if your dataset is split into train/valid/test df_train = dd.read_parquet(output_dir + f"/{builder.name}-train-*.parquet", storage_options=storage_options) df_valid = dd.read_parquet(output_dir + f"/{builder.name}-validation-*.parquet", storage_options=storage_options) df_test = dd.read_parquet(output_dir + f"/{builder.name}-test-*.parquet", storage_options=storage_options) ``` You can find more about dask dataframes in their [documentation](https://docs.dask.org/en/stable/dataframe.html). 
## Saving serialized datasets After you have processed your dataset, you can save it to your cloud storage with [`Dataset.save_to_disk`]: ```py # saves encoded_dataset to amazon s3 >>> encoded_dataset.save_to_disk("s3://my-private-datasets/imdb/train", storage_options=storage_options) # saves encoded_dataset to google cloud storage >>> encoded_dataset.save_to_disk("gcs://my-private-datasets/imdb/train", storage_options=storage_options) # saves encoded_dataset to microsoft azure blob/datalake >>> encoded_dataset.save_to_disk("adl://my-private-datasets/imdb/train", storage_options=storage_options) ``` <Tip> Remember to define your credentials in your [FileSystem instance](#set-up-your-cloud-storage-filesystem) `fs` whenever you are interacting with a private cloud storage. </Tip> ## Listing serialized datasets List files from a cloud storage with your FileSystem instance `fs`, using `fs.ls`: ```py >>> fs.ls("my-private-datasets/imdb/train", detail=False) ["dataset_info.json.json","dataset.arrow","state.json"] ``` ### Load serialized datasets When you are ready to use your dataset again, reload it with [`Dataset.load_from_disk`]: ```py >>> from datasets import load_from_disk # load encoded_dataset from cloud storage >>> dataset = load_from_disk("s3://a-public-datasets/imdb/train", storage_options=storage_options) >>> print(len(dataset)) 25000 ```
datasets/docs/source/filesystems.mdx/0
{ "file_path": "datasets/docs/source/filesystems.mdx", "repo_id": "datasets", "token_count": 2525 }
76
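As a complement to the cloud storage guide above, the sharded Parquet export can also be read back directly with `load_dataset` by pointing `data_files` at the remote files and reusing the same `storage_options`. The sketch below assumes the `s3://my-bucket/imdb` output directory from the guide, placeholder credentials, and the `{builder.name}-{split}-*.parquet` shard naming shown in the Dask example:

```python
from datasets import load_dataset

# Placeholder credentials, as in the guide above
storage_options = {"key": "<aws_access_key_id>", "secret": "<aws_secret_access_key>"}

# Read the previously exported train shards straight from S3
ds = load_dataset(
    "parquet",
    data_files="s3://my-bucket/imdb/*-train-*.parquet",
    split="train",
    storage_options=storage_options,
)
```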
# Loading methods Methods for listing and loading datasets: ## Datasets [[autodoc]] datasets.load_dataset [[autodoc]] datasets.load_from_disk [[autodoc]] datasets.load_dataset_builder [[autodoc]] datasets.get_dataset_config_names [[autodoc]] datasets.get_dataset_infos [[autodoc]] datasets.get_dataset_split_names ## From files Configurations used to load data files. They are used when loading local files or a dataset repository: - local files: `load_dataset("parquet", data_dir="path/to/data/dir")` - dataset repository: `load_dataset("allenai/c4")` You can pass arguments to `load_dataset` to configure data loading. For example you can specify the `sep` parameter to define the [`~datasets.packaged_modules.csv.CsvConfig`] that is used to load the data: ```python load_dataset("csv", data_dir="path/to/data/dir", sep="\t") ``` ### Text [[autodoc]] datasets.packaged_modules.text.TextConfig [[autodoc]] datasets.packaged_modules.text.Text ### CSV [[autodoc]] datasets.packaged_modules.csv.CsvConfig [[autodoc]] datasets.packaged_modules.csv.Csv ### JSON [[autodoc]] datasets.packaged_modules.json.JsonConfig [[autodoc]] datasets.packaged_modules.json.Json ### Parquet [[autodoc]] datasets.packaged_modules.parquet.ParquetConfig [[autodoc]] datasets.packaged_modules.parquet.Parquet ### Arrow [[autodoc]] datasets.packaged_modules.arrow.ArrowConfig [[autodoc]] datasets.packaged_modules.arrow.Arrow ### SQL [[autodoc]] datasets.packaged_modules.sql.SqlConfig [[autodoc]] datasets.packaged_modules.sql.Sql ### Images [[autodoc]] datasets.packaged_modules.imagefolder.ImageFolderConfig [[autodoc]] datasets.packaged_modules.imagefolder.ImageFolder ### Audio [[autodoc]] datasets.packaged_modules.audiofolder.AudioFolderConfig [[autodoc]] datasets.packaged_modules.audiofolder.AudioFolder ### WebDataset [[autodoc]] datasets.packaged_modules.webdataset.WebDataset
datasets/docs/source/package_reference/loading_methods.mdx/0
{ "file_path": "datasets/docs/source/package_reference/loading_methods.mdx", "repo_id": "datasets", "token_count": 651 }
77
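To make the reference above a bit more concrete, here is a small sketch showing how a few of those per-format configuration options are passed through `load_dataset`. The file paths are placeholders, while `sep`, `field`, and `sample_by` are options of the CSV, JSON, and text builder configs listed above:

```python
from datasets import load_dataset

# CSV: override the delimiter (CsvConfig)
csv_ds = load_dataset("csv", data_files={"train": "path/to/train.tsv"}, sep="\t")

# JSON: read the records nested under a top-level key (JsonConfig.field)
json_ds = load_dataset("json", data_files="path/to/data.json", field="data")

# Text: yield one example per paragraph instead of per line (TextConfig.sample_by)
text_ds = load_dataset("text", data_files="path/to/corpus.txt", sample_by="paragraph")
```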
# Use with PyTorch

This document is a quick introduction to using `datasets` with PyTorch, with a particular focus on how to get `torch.Tensor` objects out of our datasets, and how to use a PyTorch `DataLoader` and a Hugging Face `Dataset` with the best performance.

## Dataset format

By default, datasets return regular Python objects: integers, floats, strings, lists, etc.

To get PyTorch tensors instead, you can set the format of the dataset to `pytorch` using [`Dataset.with_format`]:

```py
>>> from datasets import Dataset
>>> data = [[1, 2],[3, 4]]
>>> ds = Dataset.from_dict({"data": data})
>>> ds = ds.with_format("torch")
>>> ds[0]
{'data': tensor([1, 2])}
>>> ds[:2]
{'data': tensor([[1, 2], [3, 4]])}
```

<Tip>

A [`Dataset`] object is a wrapper of an Arrow table, which allows fast zero-copy reads from arrays in the dataset to PyTorch tensors.

</Tip>

To load the data as tensors on a GPU, specify the `device` argument:

```py
>>> import torch
>>> device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
>>> ds = ds.with_format("torch", device=device)
>>> ds[0]
{'data': tensor([1, 2], device='cuda:0')}
```

### N-dimensional arrays

If your dataset consists of N-dimensional arrays, you will see that by default they are stacked into a single tensor if their shape is fixed:

```py
>>> from datasets import Dataset
>>> data = [[[1, 2],[3, 4]],[[5, 6],[7, 8]]]  # fixed shape
>>> ds = Dataset.from_dict({"data": data})
>>> ds = ds.with_format("torch")
>>> ds[0]
{'data': tensor([[1, 2], [3, 4]])}
```

```py
>>> from datasets import Dataset
>>> data = [[[1, 2],[3]],[[4, 5, 6],[7, 8]]]  # varying shape
>>> ds = Dataset.from_dict({"data": data})
>>> ds = ds.with_format("torch")
>>> ds[0]
{'data': [tensor([1, 2]), tensor([3])]}
```

However, this logic often requires slow shape comparisons and data copies. To avoid this, you must explicitly use the [`Array`] feature type and specify the shape of your tensors:

```py
>>> from datasets import Dataset, Features, Array2D
>>> data = [[[1, 2],[3, 4]],[[5, 6],[7, 8]]]
>>> features = Features({"data": Array2D(shape=(2, 2), dtype='int32')})
>>> ds = Dataset.from_dict({"data": data}, features=features)
>>> ds = ds.with_format("torch")
>>> ds[0]
{'data': tensor([[1, 2], [3, 4]])}
>>> ds[:2]
{'data': tensor([[[1, 2], [3, 4]], [[5, 6], [7, 8]]])}
```

### Other feature types

[`ClassLabel`] data are properly converted to tensors:

```py
>>> from datasets import Dataset, Features, ClassLabel
>>> labels = [0, 0, 1]
>>> features = Features({"label": ClassLabel(names=["negative", "positive"])})
>>> ds = Dataset.from_dict({"label": labels}, features=features)
>>> ds = ds.with_format("torch")
>>> ds[:3]
{'label': tensor([0, 0, 1])}
```

String and binary objects are unchanged, since PyTorch only supports numbers.

The [`Image`] and [`Audio`] feature types are also supported.

<Tip>

To use the [`Image`] feature type, you'll need to install the `vision` extra as `pip install datasets[vision]`.

</Tip>

```py
>>> from datasets import Dataset, Features, Audio, Image
>>> images = ["path/to/image.png"] * 10
>>> features = Features({"image": Image()})
>>> ds = Dataset.from_dict({"image": images}, features=features)
>>> ds = ds.with_format("torch")
>>> ds[0]["image"].shape
torch.Size([512, 512, 4])
>>> ds[0]
{'image': tensor([[[255, 215, 106, 255], [255, 215, 106, 255], ..., [255, 255, 255, 255], [255, 255, 255, 255]]], dtype=torch.uint8)}
>>> ds[:2]["image"].shape
torch.Size([2, 512, 512, 4])
>>> ds[:2]
{'image': tensor([[[[255, 215, 106, 255], [255, 215, 106, 255], ..., [255, 255, 255, 255], [255, 255, 255, 255]]]], dtype=torch.uint8)}
```

<Tip>

To use the [`Audio`] feature type, you'll need to install the `audio` extra as `pip install datasets[audio]`.

</Tip>

```py
>>> from datasets import Dataset, Features, Audio, Image
>>> audio = ["path/to/audio.wav"] * 10
>>> features = Features({"audio": Audio()})
>>> ds = Dataset.from_dict({"audio": audio}, features=features)
>>> ds = ds.with_format("torch")
>>> ds[0]["audio"]["array"]
tensor([ 6.1035e-05, 1.5259e-05, 1.6785e-04, ..., -1.5259e-05, -1.5259e-05, 1.5259e-05])
>>> ds[0]["audio"]["sampling_rate"]
tensor(44100)
```

## Data loading

Like `torch.utils.data.Dataset` objects, a [`Dataset`] can be passed directly to a PyTorch `DataLoader`:

```py
>>> import numpy as np
>>> from datasets import Dataset
>>> from torch.utils.data import DataLoader
>>> data = np.random.rand(16)
>>> label = np.random.randint(0, 2, size=16)
>>> ds = Dataset.from_dict({"data": data, "label": label}).with_format("torch")
>>> dataloader = DataLoader(ds, batch_size=4)
>>> for batch in dataloader:
...     print(batch)
{'data': tensor([0.0047, 0.4979, 0.6726, 0.8105]), 'label': tensor([0, 1, 0, 1])}
{'data': tensor([0.4832, 0.2723, 0.4259, 0.2224]), 'label': tensor([0, 0, 0, 0])}
{'data': tensor([0.5837, 0.3444, 0.4658, 0.6417]), 'label': tensor([0, 1, 0, 0])}
{'data': tensor([0.7022, 0.1225, 0.7228, 0.8259]), 'label': tensor([1, 1, 1, 1])}
```

### Optimize data loading

There are several ways to increase the speed at which your data is loaded, which can save you time, especially if you are working with large datasets. PyTorch offers parallelized data loading, retrieving batches of indices instead of individual indices, and streaming to iterate over the dataset without downloading it to disk.

#### Use multiple workers

You can parallelize data loading with the `num_workers` argument of a PyTorch `DataLoader` and get a higher throughput.

Under the hood, the `DataLoader` starts `num_workers` processes. Each process reloads the dataset passed to the `DataLoader` and is used to query examples. Reloading the dataset inside a worker doesn't fill up your RAM, since it simply memory-maps the dataset again from your disk.

```py
>>> import numpy as np
>>> from datasets import Dataset, load_from_disk
>>> from torch.utils.data import DataLoader
>>> data = np.random.rand(10_000)
>>> Dataset.from_dict({"data": data}).save_to_disk("my_dataset")
>>> ds = load_from_disk("my_dataset").with_format("torch")
>>> dataloader = DataLoader(ds, batch_size=32, num_workers=4)
```

### Stream data

Stream a dataset by loading it as an [`IterableDataset`]. This allows you to progressively iterate over a remote dataset without downloading it to disk, or to iterate over local data files.
Learn more about which type of dataset is best for your use case in the [choosing between a regular dataset or an iterable dataset](./about_mapstyle_vs_iterable) guide.

An iterable dataset from `datasets` inherits from `torch.utils.data.IterableDataset`, so you can pass it to a `torch.utils.data.DataLoader`:

```py
>>> import numpy as np
>>> from datasets import Dataset, load_dataset
>>> from torch.utils.data import DataLoader
>>> data = np.random.rand(10_000)
>>> Dataset.from_dict({"data": data}).push_to_hub("<username>/my_dataset")  # Upload to the Hugging Face Hub
>>> my_iterable_dataset = load_dataset("<username>/my_dataset", streaming=True, split="train")
>>> dataloader = DataLoader(my_iterable_dataset, batch_size=32)
```

If the dataset is split into several shards (i.e. if the dataset consists of multiple data files), then you can stream in parallel using `num_workers`:

```py
>>> my_iterable_dataset = load_dataset("deepmind/code_contests", streaming=True, split="train")
>>> my_iterable_dataset.n_shards
39
>>> dataloader = DataLoader(my_iterable_dataset, batch_size=32, num_workers=4)
```

In this case each worker is given a subset of the list of shards to stream from.

### Checkpoint and resume

If you need a DataLoader that you can checkpoint and resume in the middle of training, you can use the `StatefulDataLoader` from [torchdata](https://github.com/pytorch/data):

```py
>>> from torchdata.stateful_dataloader import StatefulDataLoader
>>> my_iterable_dataset = load_dataset("deepmind/code_contests", streaming=True, split="train")
>>> dataloader = StatefulDataLoader(my_iterable_dataset, batch_size=32, num_workers=4)
>>> # save in the middle of training
>>> state_dict = dataloader.state_dict()
>>> # and resume later
>>> dataloader.load_state_dict(state_dict)
```

This is possible thanks to [`IterableDataset.state_dict`] and [`IterableDataset.load_state_dict`].

### Distributed

To split your dataset across your training nodes, you can use [`datasets.distributed.split_dataset_by_node`]:

```python
import os
from datasets.distributed import split_dataset_by_node

ds = split_dataset_by_node(ds, rank=int(os.environ["RANK"]), world_size=int(os.environ["WORLD_SIZE"]))
```

This works for both map-style datasets and iterable datasets.
The dataset is split for the node at rank `rank` in a pool of nodes of size `world_size`.

For map-style datasets:

Each node is assigned a chunk of data, e.g. rank 0 is given the first chunk of the dataset.

For iterable datasets:

If the dataset has a number of shards that is a factor of `world_size` (i.e. if `dataset.n_shards % world_size == 0`), then the shards are evenly assigned across the nodes, which is the most efficient. Otherwise, each node keeps 1 example out of `world_size`, skipping the other examples.

This can also be combined with a `torch.utils.data.DataLoader` if you want each node to use multiple workers to load the data.
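For instance, here is a minimal sketch that combines the two (the `RANK` and `WORLD_SIZE` environment variables are assumed to be set by your launcher, e.g. `torchrun`, and the repository name is a placeholder):

```py
>>> import os
>>> from datasets import load_dataset
>>> from datasets.distributed import split_dataset_by_node
>>> from torch.utils.data import DataLoader
>>> ds = load_dataset("<username>/my_dataset", streaming=True, split="train")
>>> ds = split_dataset_by_node(ds, rank=int(os.environ["RANK"]), world_size=int(os.environ["WORLD_SIZE"]))
>>> dataloader = DataLoader(ds, batch_size=32, num_workers=4)
```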
datasets/docs/source/use_with_pytorch.mdx/0
{ "file_path": "datasets/docs/source/use_with_pytorch.mdx", "repo_id": "datasets", "token_count": 3446 }
78
#!/usr/bin/env python from argparse import ArgumentParser from datasets.commands.convert import ConvertCommand from datasets.commands.convert_to_parquet import ConvertToParquetCommand from datasets.commands.delete_from_hub import DeleteFromHubCommand from datasets.commands.env import EnvironmentCommand from datasets.commands.test import TestCommand from datasets.utils.logging import set_verbosity_info def parse_unknown_args(unknown_args): return {key.lstrip("-"): value for key, value in zip(unknown_args[::2], unknown_args[1::2])} def main(): parser = ArgumentParser( "HuggingFace Datasets CLI tool", usage="datasets-cli <command> [<args>]", allow_abbrev=False ) commands_parser = parser.add_subparsers(help="datasets-cli command helpers") set_verbosity_info() # Register commands ConvertCommand.register_subcommand(commands_parser) EnvironmentCommand.register_subcommand(commands_parser) TestCommand.register_subcommand(commands_parser) ConvertToParquetCommand.register_subcommand(commands_parser) DeleteFromHubCommand.register_subcommand(commands_parser) # Parse args args, unknown_args = parser.parse_known_args() if not hasattr(args, "func"): parser.print_help() exit(1) kwargs = parse_unknown_args(unknown_args) # Run service = args.func(args, **kwargs) service.run() if __name__ == "__main__": main()
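# Illustrative sketch (not part of this module): `parse_unknown_args` pairs up
# leftover "--key value" tokens so they can be forwarded to the selected command
# as keyword arguments, e.g.:
#
#     parse_unknown_args(["--num_proc", "4", "--trust_remote_code", "True"])
#     # -> {"num_proc": "4", "trust_remote_code": "True"}   (values stay strings)
#
# From the shell, assuming the console script is installed, commands registered
# above are invoked as subcommands, e.g. `datasets-cli env`.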
datasets/src/datasets/commands/datasets_cli.py/0
{ "file_path": "datasets/src/datasets/commands/datasets_cli.py", "repo_id": "datasets", "token_count": 480 }
79
import os import sys import warnings from dataclasses import dataclass, field from io import BytesIO from typing import TYPE_CHECKING, Any, ClassVar, Dict, List, Optional, Union import numpy as np import pyarrow as pa from .. import config from ..download.download_config import DownloadConfig from ..table import array_cast from ..utils.file_utils import is_local_path, xopen from ..utils.py_utils import first_non_null_value, no_op_if_value_is_null, string_to_dict if TYPE_CHECKING: import PIL.Image from .features import FeatureType _IMAGE_COMPRESSION_FORMATS: Optional[List[str]] = None _NATIVE_BYTEORDER = "<" if sys.byteorder == "little" else ">" # Origin: https://github.com/python-pillow/Pillow/blob/698951e19e19972aeed56df686868f1329981c12/src/PIL/Image.py#L3126 minus "|i1" which values are not preserved correctly when saving and loading an image _VALID_IMAGE_ARRAY_DTPYES = [ np.dtype("|b1"), np.dtype("|u1"), np.dtype("<u2"), np.dtype(">u2"), np.dtype("<i2"), np.dtype(">i2"), np.dtype("<u4"), np.dtype(">u4"), np.dtype("<i4"), np.dtype(">i4"), np.dtype("<f4"), np.dtype(">f4"), np.dtype("<f8"), np.dtype(">f8"), ] @dataclass class Image: """Image [`Feature`] to read image data from an image file. Input: The Image feature accepts as input: - A `str`: Absolute path to the image file (i.e. random access is allowed). - A `dict` with the keys: - `path`: String with relative path of the image file to the archive file. - `bytes`: Bytes of the image file. This is useful for archived files with sequential access. - An `np.ndarray`: NumPy array representing an image. - A `PIL.Image.Image`: PIL image object. Args: mode (`str`, *optional*): The mode to convert the image to. If `None`, the native mode of the image is used. decode (`bool`, defaults to `True`): Whether to decode the image data. If `False`, returns the underlying dictionary in the format `{"path": image_path, "bytes": image_bytes}`. Examples: ```py >>> from datasets import load_dataset, Image >>> ds = load_dataset("beans", split="train") >>> ds.features["image"] Image(decode=True, id=None) >>> ds[0]["image"] <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x500 at 0x15E52E7F0> >>> ds = ds.cast_column('image', Image(decode=False)) {'bytes': None, 'path': '/root/.cache/huggingface/datasets/downloads/extracted/b0a21163f78769a2cf11f58dfc767fb458fc7cea5c05dccc0144a2c0f0bc1292/train/healthy/healthy_train.85.jpg'} ``` """ mode: Optional[str] = None decode: bool = True id: Optional[str] = None # Automatically constructed dtype: ClassVar[str] = "PIL.Image.Image" pa_type: ClassVar[Any] = pa.struct({"bytes": pa.binary(), "path": pa.string()}) _type: str = field(default="Image", init=False, repr=False) def __call__(self): return self.pa_type def encode_example(self, value: Union[str, bytes, dict, np.ndarray, "PIL.Image.Image"]) -> dict: """Encode example into a format for Arrow. Args: value (`str`, `np.ndarray`, `PIL.Image.Image` or `dict`): Data passed as input to Image feature. 
Returns: `dict` with "path" and "bytes" fields """ if config.PIL_AVAILABLE: import PIL.Image else: raise ImportError("To support encoding images, please install 'Pillow'.") if isinstance(value, list): value = np.array(value) if isinstance(value, str): return {"path": value, "bytes": None} elif isinstance(value, bytes): return {"path": None, "bytes": value} elif isinstance(value, np.ndarray): # convert the image array to PNG/TIFF bytes return encode_np_array(value) elif isinstance(value, PIL.Image.Image): # convert the PIL image to bytes (default format is PNG/TIFF) return encode_pil_image(value) elif value.get("path") is not None and os.path.isfile(value["path"]): # we set "bytes": None to not duplicate the data if they're already available locally return {"bytes": None, "path": value.get("path")} elif value.get("bytes") is not None or value.get("path") is not None: # store the image bytes, and path is used to infer the image format using the file extension return {"bytes": value.get("bytes"), "path": value.get("path")} else: raise ValueError( f"An image sample should have one of 'path' or 'bytes' but they are missing or None in {value}." ) def decode_example(self, value: dict, token_per_repo_id=None) -> "PIL.Image.Image": """Decode example image file into image data. Args: value (`str` or `dict`): A string with the absolute image file path, a dictionary with keys: - `path`: String with absolute or relative image file path. - `bytes`: The bytes of the image file. token_per_repo_id (`dict`, *optional*): To access and decode image files from private repositories on the Hub, you can pass a dictionary repo_id (`str`) -> token (`bool` or `str`). Returns: `PIL.Image.Image` """ if not self.decode: raise RuntimeError("Decoding is disabled for this feature. Please use Image(decode=True) instead.") if config.PIL_AVAILABLE: import PIL.Image import PIL.ImageOps else: raise ImportError("To support decoding images, please install 'Pillow'.") if token_per_repo_id is None: token_per_repo_id = {} path, bytes_ = value["path"], value["bytes"] if bytes_ is None: if path is None: raise ValueError(f"An image should have one of 'path' or 'bytes' but both are None in {value}.") else: if is_local_path(path): image = PIL.Image.open(path) else: source_url = path.split("::")[-1] pattern = ( config.HUB_DATASETS_URL if source_url.startswith(config.HF_ENDPOINT) else config.HUB_DATASETS_HFFS_URL ) try: repo_id = string_to_dict(source_url, pattern)["repo_id"] token = token_per_repo_id.get(repo_id) except ValueError: token = None download_config = DownloadConfig(token=token) with xopen(path, "rb", download_config=download_config) as f: bytes_ = BytesIO(f.read()) image = PIL.Image.open(bytes_) else: image = PIL.Image.open(BytesIO(bytes_)) image.load() # to avoid "Too many open files" errors if image.getexif().get(PIL.Image.ExifTags.Base.Orientation) is not None: image = PIL.ImageOps.exif_transpose(image) if self.mode and self.mode != image.mode: image = image.convert(self.mode) return image def flatten(self) -> Union["FeatureType", Dict[str, "FeatureType"]]: """If in the decodable state, return the feature itself, otherwise flatten the feature into a dictionary.""" from .features import Value return ( self if self.decode else { "bytes": Value("binary"), "path": Value("string"), } ) def cast_storage(self, storage: Union[pa.StringArray, pa.StructArray, pa.ListArray]) -> pa.StructArray: """Cast an Arrow array to the Image arrow storage type. 
The Arrow types that can be converted to the Image pyarrow storage type are: - `pa.string()` - it must contain the "path" data - `pa.binary()` - it must contain the image bytes - `pa.struct({"bytes": pa.binary()})` - `pa.struct({"path": pa.string()})` - `pa.struct({"bytes": pa.binary(), "path": pa.string()})` - order doesn't matter - `pa.list(*)` - it must contain the image array data Args: storage (`Union[pa.StringArray, pa.StructArray, pa.ListArray]`): PyArrow array to cast. Returns: `pa.StructArray`: Array in the Image arrow storage type, that is `pa.struct({"bytes": pa.binary(), "path": pa.string()})`. """ if pa.types.is_string(storage.type): bytes_array = pa.array([None] * len(storage), type=pa.binary()) storage = pa.StructArray.from_arrays([bytes_array, storage], ["bytes", "path"], mask=storage.is_null()) elif pa.types.is_binary(storage.type): path_array = pa.array([None] * len(storage), type=pa.string()) storage = pa.StructArray.from_arrays([storage, path_array], ["bytes", "path"], mask=storage.is_null()) elif pa.types.is_struct(storage.type): if storage.type.get_field_index("bytes") >= 0: bytes_array = storage.field("bytes") else: bytes_array = pa.array([None] * len(storage), type=pa.binary()) if storage.type.get_field_index("path") >= 0: path_array = storage.field("path") else: path_array = pa.array([None] * len(storage), type=pa.string()) storage = pa.StructArray.from_arrays([bytes_array, path_array], ["bytes", "path"], mask=storage.is_null()) elif pa.types.is_list(storage.type): bytes_array = pa.array( [encode_np_array(np.array(arr))["bytes"] if arr is not None else None for arr in storage.to_pylist()], type=pa.binary(), ) path_array = pa.array([None] * len(storage), type=pa.string()) storage = pa.StructArray.from_arrays( [bytes_array, path_array], ["bytes", "path"], mask=bytes_array.is_null() ) return array_cast(storage, self.pa_type) def embed_storage(self, storage: pa.StructArray) -> pa.StructArray: """Embed image files into the Arrow array. Args: storage (`pa.StructArray`): PyArrow array to embed. Returns: `pa.StructArray`: Array in the Image arrow storage type, that is `pa.struct({"bytes": pa.binary(), "path": pa.string()})`. 
""" @no_op_if_value_is_null def path_to_bytes(path): with xopen(path, "rb") as f: bytes_ = f.read() return bytes_ bytes_array = pa.array( [ (path_to_bytes(x["path"]) if x["bytes"] is None else x["bytes"]) if x is not None else None for x in storage.to_pylist() ], type=pa.binary(), ) path_array = pa.array( [os.path.basename(path) if path is not None else None for path in storage.field("path").to_pylist()], type=pa.string(), ) storage = pa.StructArray.from_arrays([bytes_array, path_array], ["bytes", "path"], mask=bytes_array.is_null()) return array_cast(storage, self.pa_type) def list_image_compression_formats() -> List[str]: if config.PIL_AVAILABLE: import PIL.Image else: raise ImportError("To support encoding images, please install 'Pillow'.") global _IMAGE_COMPRESSION_FORMATS if _IMAGE_COMPRESSION_FORMATS is None: PIL.Image.init() _IMAGE_COMPRESSION_FORMATS = list(set(PIL.Image.OPEN.keys()) & set(PIL.Image.SAVE.keys())) return _IMAGE_COMPRESSION_FORMATS def image_to_bytes(image: "PIL.Image.Image") -> bytes: """Convert a PIL Image object to bytes using native compression if possible, otherwise use PNG/TIFF compression.""" buffer = BytesIO() if image.format in list_image_compression_formats(): format = image.format else: format = "PNG" if image.mode in ["1", "L", "LA", "RGB", "RGBA"] else "TIFF" image.save(buffer, format=format) return buffer.getvalue() def encode_pil_image(image: "PIL.Image.Image") -> dict: if hasattr(image, "filename") and image.filename != "": return {"path": image.filename, "bytes": None} else: return {"path": None, "bytes": image_to_bytes(image)} def encode_np_array(array: np.ndarray) -> dict: if config.PIL_AVAILABLE: import PIL.Image else: raise ImportError("To support encoding images, please install 'Pillow'.") dtype = array.dtype dtype_byteorder = dtype.byteorder if dtype.byteorder != "=" else _NATIVE_BYTEORDER dtype_kind = dtype.kind dtype_itemsize = dtype.itemsize dest_dtype = None # Multi-channel array case (only np.dtype("|u1") is allowed) if array.shape[2:]: if dtype_kind not in ["u", "i"]: raise TypeError( f"Unsupported array dtype {dtype} for image encoding. Only {dest_dtype} is supported for multi-channel arrays." ) dest_dtype = np.dtype("|u1") if dtype != dest_dtype: warnings.warn(f"Downcasting array dtype {dtype} to {dest_dtype} to be compatible with 'Pillow'") # Exact match elif dtype in _VALID_IMAGE_ARRAY_DTPYES: dest_dtype = dtype else: # Downcast the type within the kind (np.can_cast(from_type, to_type, casting="same_kind") doesn't behave as expected, so do it manually) while dtype_itemsize >= 1: dtype_str = dtype_byteorder + dtype_kind + str(dtype_itemsize) if np.dtype(dtype_str) in _VALID_IMAGE_ARRAY_DTPYES: dest_dtype = np.dtype(dtype_str) warnings.warn(f"Downcasting array dtype {dtype} to {dest_dtype} to be compatible with 'Pillow'") break else: dtype_itemsize //= 2 if dest_dtype is None: raise TypeError( f"Cannot downcast dtype {dtype} to a valid image dtype. 
Valid image dtypes: {_VALID_IMAGE_ARRAY_DTPYES}" ) image = PIL.Image.fromarray(array.astype(dest_dtype)) return {"path": None, "bytes": image_to_bytes(image)} def objects_to_list_of_image_dicts( objs: Union[List[str], List[dict], List[np.ndarray], List["PIL.Image.Image"]], ) -> List[dict]: """Encode a list of objects into a format suitable for creating an extension array of type `ImageExtensionType`.""" if config.PIL_AVAILABLE: import PIL.Image else: raise ImportError("To support encoding images, please install 'Pillow'.") if objs: _, obj = first_non_null_value(objs) if isinstance(obj, str): return [{"path": obj, "bytes": None} if obj is not None else None for obj in objs] if isinstance(obj, np.ndarray): obj_to_image_dict_func = no_op_if_value_is_null(encode_np_array) return [obj_to_image_dict_func(obj) for obj in objs] elif isinstance(obj, PIL.Image.Image): obj_to_image_dict_func = no_op_if_value_is_null(encode_pil_image) return [obj_to_image_dict_func(obj) for obj in objs] else: return objs else: return objs
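# Illustrative usage sketch (not part of the library): a minimal round trip through
# the Image feature defined above, assuming Pillow is installed and the placeholder
# path points to a real image file:
#
#     from datasets import Dataset, Features, Image
#
#     features = Features({"image": Image()})
#     ds = Dataset.from_dict({"image": ["path/to/image.png"]}, features=features)
#     ds[0]["image"]                                     # decoded to a PIL.Image.Image on access
#     ds = ds.cast_column("image", Image(decode=False))
#     ds[0]["image"]                                     # {"bytes": None, "path": "path/to/image.png"}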
datasets/src/datasets/features/image.py/0
{ "file_path": "datasets/src/datasets/features/image.py", "repo_id": "datasets", "token_count": 6979 }
80
from abc import ABC, abstractmethod from typing import Optional, Union from .. import Dataset, DatasetDict, Features, IterableDataset, IterableDatasetDict, NamedSplit from ..utils.typing import NestedDataStructureLike, PathLike class AbstractDatasetReader(ABC): def __init__( self, path_or_paths: Optional[NestedDataStructureLike[PathLike]] = None, split: Optional[NamedSplit] = None, features: Optional[Features] = None, cache_dir: str = None, keep_in_memory: bool = False, streaming: bool = False, num_proc: Optional[int] = None, **kwargs, ): self.path_or_paths = path_or_paths self.split = split if split or isinstance(path_or_paths, dict) else "train" self.features = features self.cache_dir = cache_dir self.keep_in_memory = keep_in_memory self.streaming = streaming self.num_proc = num_proc self.kwargs = kwargs @abstractmethod def read(self) -> Union[Dataset, DatasetDict, IterableDataset, IterableDatasetDict]: pass class AbstractDatasetInputStream(ABC): def __init__( self, features: Optional[Features] = None, cache_dir: str = None, keep_in_memory: bool = False, streaming: bool = False, num_proc: Optional[int] = None, **kwargs, ): self.features = features self.cache_dir = cache_dir self.keep_in_memory = keep_in_memory self.streaming = streaming self.num_proc = num_proc self.kwargs = kwargs @abstractmethod def read(self) -> Union[Dataset, IterableDataset]: pass
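# Illustrative sketch (not part of the library): the minimal shape of a concrete
# reader built on AbstractDatasetReader. Real implementations (csv, json, parquet,
# text, ...) live in the sibling modules of this package and typically delegate to
# a packaged dataset builder.
class _ExampleReader(AbstractDatasetReader):
    def read(self) -> Dataset:
        # A real reader would build a Dataset (or an IterableDataset when
        # self.streaming is True) from self.path_or_paths, honoring self.features,
        # self.cache_dir, self.keep_in_memory and self.num_proc.
        return Dataset.from_dict({"text": ["example"]})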
datasets/src/datasets/io/abc.py/0
{ "file_path": "datasets/src/datasets/io/abc.py", "repo_id": "datasets", "token_count": 721 }
81
from typing import List import datasets from ..folder_based_builder import folder_based_builder logger = datasets.utils.logging.get_logger(__name__) class AudioFolderConfig(folder_based_builder.FolderBasedBuilderConfig): """Builder Config for AudioFolder.""" drop_labels: bool = None drop_metadata: bool = None def __post_init__(self): super().__post_init__() class AudioFolder(folder_based_builder.FolderBasedBuilder): BASE_FEATURE = datasets.Audio BASE_COLUMN_NAME = "audio" BUILDER_CONFIG_CLASS = AudioFolderConfig EXTENSIONS: List[str] # definition at the bottom of the script # Obtained with: # ``` # import soundfile as sf # # AUDIO_EXTENSIONS = [f".{format.lower()}" for format in sf.available_formats().keys()] # # # .opus decoding is supported if libsndfile >= 1.0.31: # AUDIO_EXTENSIONS.extend([".opus"]) # ``` # We intentionally do not run this code on launch because: # (1) Soundfile is an optional dependency, so importing it in global namespace is not allowed # (2) To ensure the list of supported extensions is deterministic AUDIO_EXTENSIONS = [ ".aiff", ".au", ".avr", ".caf", ".flac", ".htk", ".svx", ".mat4", ".mat5", ".mpc2k", ".ogg", ".paf", ".pvf", ".raw", ".rf64", ".sd2", ".sds", ".ircam", ".voc", ".w64", ".wav", ".nist", ".wavex", ".wve", ".xi", ".mp3", ".opus", ] AudioFolder.EXTENSIONS = AUDIO_EXTENSIONS
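# Illustrative usage sketch (not part of the library): the "audiofolder" builder is
# what backs `load_dataset("audiofolder", ...)` on a local directory of audio files.
# The directory path is a placeholder and the "audio" extra must be installed:
#
#     from datasets import load_dataset
#
#     ds = load_dataset("audiofolder", data_dir="path/to/folder", split="train")
#     ds[0]["audio"]   # {"path": ..., "array": ..., "sampling_rate": ...}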
datasets/src/datasets/packaged_modules/audiofolder/audiofolder.py/0
{ "file_path": "datasets/src/datasets/packaged_modules/audiofolder/audiofolder.py", "repo_id": "datasets", "token_count": 588 }
82
import itertools from dataclasses import dataclass from typing import List, Optional import pyarrow as pa import pyarrow.parquet as pq import datasets from datasets.table import table_cast logger = datasets.utils.logging.get_logger(__name__) @dataclass class ParquetConfig(datasets.BuilderConfig): """BuilderConfig for Parquet.""" batch_size: Optional[int] = None columns: Optional[List[str]] = None features: Optional[datasets.Features] = None def __post_init__(self): super().__post_init__() class Parquet(datasets.ArrowBasedBuilder): BUILDER_CONFIG_CLASS = ParquetConfig def _info(self): if ( self.config.columns is not None and self.config.features is not None and set(self.config.columns) != set(self.config.features) ): raise ValueError( "The columns and features argument must contain the same columns, but got ", f"{self.config.columns} and {self.config.features}", ) return datasets.DatasetInfo(features=self.config.features) def _split_generators(self, dl_manager): """We handle string, list and dicts in datafiles""" if not self.config.data_files: raise ValueError(f"At least one data file must be specified, but got data_files={self.config.data_files}") dl_manager.download_config.extract_on_the_fly = True data_files = dl_manager.download_and_extract(self.config.data_files) splits = [] for split_name, files in data_files.items(): if isinstance(files, str): files = [files] # Use `dl_manager.iter_files` to skip hidden files in an extracted archive files = [dl_manager.iter_files(file) for file in files] # Infer features if they are stored in the arrow schema if self.info.features is None: for file in itertools.chain.from_iterable(files): with open(file, "rb") as f: self.info.features = datasets.Features.from_arrow_schema(pq.read_schema(f)) break splits.append(datasets.SplitGenerator(name=split_name, gen_kwargs={"files": files})) if self.config.columns is not None and set(self.config.columns) != set(self.info.features): self.info.features = datasets.Features( {col: feat for col, feat in self.info.features.items() if col in self.config.columns} ) return splits def _cast_table(self, pa_table: pa.Table) -> pa.Table: if self.info.features is not None: # more expensive cast to support nested features with keys in a different order # allows str <-> int/float or str to Audio for example pa_table = table_cast(pa_table, self.info.features.arrow_schema) return pa_table def _generate_tables(self, files): if self.config.features is not None and self.config.columns is not None: if sorted(field.name for field in self.info.features.arrow_schema) != sorted(self.config.columns): raise ValueError( f"Tried to load parquet data with columns '{self.config.columns}' with mismatching features '{self.info.features}'" ) for file_idx, file in enumerate(itertools.chain.from_iterable(files)): with open(file, "rb") as f: parquet_file = pq.ParquetFile(f) if parquet_file.metadata.num_row_groups > 0: batch_size = self.config.batch_size or parquet_file.metadata.row_group(0).num_rows try: for batch_idx, record_batch in enumerate( parquet_file.iter_batches(batch_size=batch_size, columns=self.config.columns) ): pa_table = pa.Table.from_batches([record_batch]) # Uncomment for debugging (will print the Arrow table size and elements) # logger.warning(f"pa_table: {pa_table} num rows: {pa_table.num_rows}") # logger.warning('\n'.join(str(pa_table.slice(i, 1).to_pydict()) for i in range(pa_table.num_rows))) yield f"{file_idx}_{batch_idx}", self._cast_table(pa_table) except ValueError as e: logger.error(f"Failed to read file '{file}' with 
error {type(e)}: {e}") raise
datasets/src/datasets/packaged_modules/parquet/parquet.py/0
{ "file_path": "datasets/src/datasets/packaged_modules/parquet/parquet.py", "repo_id": "datasets", "token_count": 2061 }
83
# Copyright 2020 The HuggingFace Datasets Authors and the TensorFlow Datasets Authors. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from . import tqdm as _tqdm # _tqdm is the module from .experimental import experimental from .info_utils import VerificationMode from .logging import disable_progress_bar, enable_progress_bar, is_progress_bar_enabled from .tqdm import ( are_progress_bars_disabled, disable_progress_bars, enable_progress_bars, tqdm, ) from .version import Version
datasets/src/datasets/utils/__init__.py/0
{ "file_path": "datasets/src/datasets/utils/__init__.py", "repo_id": "datasets", "token_count": 284 }
84
## Add Dummy data test

**Important** In order to pass the `load_dataset_<dataset_name>` test, dummy data is required for all possible config names.

First we distinguish between dataset scripts that:

- A) have no config class and
- B) have a config class

For A) the dummy data folder structure will always look as follows:

- ``dummy/<version>/dummy_data.zip``, *e.g.* ``cosmos_qa/dummy/0.1.0/dummy_data.zip``.

For B) the dummy data folder structure will always look as follows:

- ``dummy/<config_name>/<version>/dummy_data.zip``, *e.g.* ``squad/dummy/plain-text/1.0.0/dummy_data.zip``.

Now the difficult part is to create the correct `dummy_data.zip` file.

**Important** When checking the dummy folder structure of already added datasets, always unzip ``dummy_data.zip``. If a folder ``dummy_data`` is found next to ``dummy_data.zip``, it is probably an old version and should be deleted. The tests only take the ``dummy_data.zip`` file into account.

Here we have to pay close attention to the ``_split_generators(self, dl_manager)`` function of the dataset script in question. There are three general possibilities (an illustrative example follows the list):

1) ``dl_manager.download_and_extract()`` is given a **single path variable** of type `str` as its argument. In this case the file `dummy_data.zip` should unzip to the following structure: ``os.path.join("dummy_data", <additional-paths-as-defined-in-split-generations>)``, *e.g.* for ``sentiment140``, the unzipped ``dummy_data.zip`` has the following dir structure: ``dummy_data/testdata.manual.2009.06.14.csv`` and ``dummy_data/training.1600000.processed.noemoticon.csv``. **Note** if there are no ``<additional-paths-as-defined-in-split-generations>``, then ``dummy_data`` should be the name of the single file. An example of this is the ``crime-and-punishment`` dataset script.

2) ``dl_manager.download_and_extract()`` is given a **dictionary of paths** of type `str` as its argument. In this case the file `dummy_data.zip` should unzip to the following structure: ``os.path.join("dummy_data", <value_of_dict>.split('/')[-1], <additional-paths-as-defined-in-split-generations>)``, *e.g.* for ``squad``, the unzipped ``dummy_data.zip`` has the following dir structure: ``dummy_data/dev-v1.1.json``, etc... **Note** if ``<value_of_dict>`` is a zipped file, then the dummy data folder structure should contain the exact name of the zipped file and the corresponding extracted folder structure. The file `dummy_data.zip` should **never** itself contain a zipped file, since the dummy data is not unzipped by the ``MockDownloadManager`` during testing. *E.g.* check the dummy folder structure of ``hansards``, where the folders have to be named ``*.tar``, or the structure of ``wiki_split``, where the folders have to be named ``*.zip``.

3) ``dl_manager.download_and_extract()`` is given a **dictionary of lists of paths** of type `str` as its argument. This is a very special case and has been seen only for the dataset ``ensli``. In this case the values are simply flattened and the dummy folder structure is the same as in 2).
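For illustration, a hypothetical ``_split_generators`` that falls under case 2) could look like this (the URLs and file names below are made up):

```python
def _split_generators(self, dl_manager):
    urls = {
        "train": "https://example.com/data/train-v1.1.json",
        "dev": "https://example.com/data/dev-v1.1.json",
    }
    downloaded_files = dl_manager.download_and_extract(urls)
    return [
        datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_files["train"]}),
        datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": downloaded_files["dev"]}),
    ]
```

Its ``dummy_data.zip`` would then have to unzip to ``dummy_data/train-v1.1.json`` and ``dummy_data/dev-v1.1.json``, i.e. the last path segment of each dictionary value.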
datasets/tests/README.md/0
{ "file_path": "datasets/tests/README.md", "repo_id": "datasets", "token_count": 928 }
85
import os import random import tempfile import unittest import numpy as np import pandas as pd import pyarrow as pa import pytest from absl.testing import parameterized import datasets from datasets.arrow_writer import ArrowWriter from datasets.features import Array2D, Array3D, Array4D, Array5D, Value from datasets.features.features import Array3DExtensionType, PandasArrayExtensionDtype, _ArrayXD from datasets.formatting.formatting import NumpyArrowExtractor, SimpleArrowExtractor SHAPE_TEST_1 = (30, 487) SHAPE_TEST_2 = (36, 1024) SHAPE_TEST_3 = (None, 100) SPEED_TEST_SHAPE = (100, 100) SPEED_TEST_N_EXAMPLES = 100 DEFAULT_FEATURES = datasets.Features( { "text": Array2D(SHAPE_TEST_1, dtype="float32"), "image": Array2D(SHAPE_TEST_2, dtype="float32"), "dynamic": Array2D(SHAPE_TEST_3, dtype="float32"), } ) def generate_examples(features: dict, num_examples=100, seq_shapes=None): dummy_data = [] seq_shapes = seq_shapes or {} for i in range(num_examples): example = {} for col_id, (k, v) in enumerate(features.items()): if isinstance(v, _ArrayXD): if k == "dynamic": first_dim = random.randint(1, 3) data = np.random.rand(first_dim, *v.shape[1:]).astype(v.dtype) else: data = np.random.rand(*v.shape).astype(v.dtype) elif isinstance(v, datasets.Value): data = "foo" elif isinstance(v, datasets.Sequence): while isinstance(v, datasets.Sequence): v = v.feature shape = seq_shapes[k] data = np.random.rand(*shape).astype(v.dtype) example[k] = data dummy_data.append((i, example)) return dummy_data class ExtensionTypeCompatibilityTest(unittest.TestCase): def test_array2d_nonspecific_shape(self): with tempfile.TemporaryDirectory() as tmp_dir: my_features = DEFAULT_FEATURES.copy() with ArrowWriter(features=my_features, path=os.path.join(tmp_dir, "beta.arrow")) as writer: for key, record in generate_examples( features=my_features, num_examples=1, ): example = my_features.encode_example(record) writer.write(example) num_examples, num_bytes = writer.finalize() dataset = datasets.Dataset.from_file(os.path.join(tmp_dir, "beta.arrow")) dataset.set_format("numpy") row = dataset[0] first_shape = row["image"].shape second_shape = row["text"].shape self.assertTrue(first_shape is not None and second_shape is not None, "need atleast 2 different shapes") self.assertEqual(len(first_shape), len(second_shape), "both shapes are supposed to be equal length") self.assertNotEqual(first_shape, second_shape, "shapes must not be the same") del dataset def test_multiple_extensions_same_row(self): with tempfile.TemporaryDirectory() as tmp_dir: my_features = DEFAULT_FEATURES.copy() with ArrowWriter(features=my_features, path=os.path.join(tmp_dir, "beta.arrow")) as writer: for key, record in generate_examples(features=my_features, num_examples=1): example = my_features.encode_example(record) writer.write(example) num_examples, num_bytes = writer.finalize() dataset = datasets.Dataset.from_file(os.path.join(tmp_dir, "beta.arrow")) dataset.set_format("numpy") row = dataset[0] first_len = len(row["image"].shape) second_len = len(row["text"].shape) third_len = len(row["dynamic"].shape) self.assertEqual(first_len, 2, "use a sequence type if dim is < 2") self.assertEqual(second_len, 2, "use a sequence type if dim is < 2") self.assertEqual(third_len, 2, "use a sequence type if dim is < 2") del dataset def test_compatability_with_string_values(self): with tempfile.TemporaryDirectory() as tmp_dir: my_features = DEFAULT_FEATURES.copy() my_features["image_id"] = datasets.Value("string") with ArrowWriter(features=my_features, path=os.path.join(tmp_dir, 
"beta.arrow")) as writer: for key, record in generate_examples(features=my_features, num_examples=1): example = my_features.encode_example(record) writer.write(example) num_examples, num_bytes = writer.finalize() dataset = datasets.Dataset.from_file(os.path.join(tmp_dir, "beta.arrow")) self.assertIsInstance(dataset[0]["image_id"], str, "image id must be of type string") del dataset def test_extension_indexing(self): with tempfile.TemporaryDirectory() as tmp_dir: my_features = DEFAULT_FEATURES.copy() my_features["explicit_ext"] = Array2D((3, 3), dtype="float32") with ArrowWriter(features=my_features, path=os.path.join(tmp_dir, "beta.arrow")) as writer: for key, record in generate_examples(features=my_features, num_examples=1): example = my_features.encode_example(record) writer.write(example) num_examples, num_bytes = writer.finalize() dataset = datasets.Dataset.from_file(os.path.join(tmp_dir, "beta.arrow")) dataset.set_format("numpy") data = dataset[0]["explicit_ext"] self.assertIsInstance(data, np.ndarray, "indexed extension must return numpy.ndarray") del dataset def get_array_feature_types(): shape_1 = [3] * 5 shape_2 = [3, 4, 5, 6, 7] return [ { "testcase_name": f"{d}d", "array_feature": array_feature, "shape_1": tuple(shape_1[:d]), "shape_2": tuple(shape_2[:d]), } for d, array_feature in zip(range(2, 6), [Array2D, Array3D, Array4D, Array5D]) ] @parameterized.named_parameters(get_array_feature_types()) class ArrayXDTest(unittest.TestCase): def get_features(self, array_feature, shape_1, shape_2): return datasets.Features( { "image": array_feature(shape_1, dtype="float32"), "source": Value("string"), "matrix": array_feature(shape_2, dtype="float32"), } ) def get_dict_example_0(self, shape_1, shape_2): return { "image": np.random.rand(*shape_1).astype("float32"), "source": "foo", "matrix": np.random.rand(*shape_2).astype("float32"), } def get_dict_example_1(self, shape_1, shape_2): return { "image": np.random.rand(*shape_1).astype("float32"), "matrix": np.random.rand(*shape_2).astype("float32"), "source": "bar", } def get_dict_examples(self, shape_1, shape_2): return { "image": np.random.rand(2, *shape_1).astype("float32").tolist(), "source": ["foo", "bar"], "matrix": np.random.rand(2, *shape_2).astype("float32").tolist(), } def _check_getitem_output_type(self, dataset, shape_1, shape_2, first_matrix): matrix_column = dataset["matrix"] self.assertIsInstance(matrix_column, list) self.assertIsInstance(matrix_column[0], list) self.assertIsInstance(matrix_column[0][0], list) self.assertTupleEqual(np.array(matrix_column).shape, (2, *shape_2)) matrix_field_of_first_example = dataset[0]["matrix"] self.assertIsInstance(matrix_field_of_first_example, list) self.assertIsInstance(matrix_field_of_first_example, list) self.assertEqual(np.array(matrix_field_of_first_example).shape, shape_2) np.testing.assert_array_equal(np.array(matrix_field_of_first_example), np.array(first_matrix)) matrix_field_of_first_two_examples = dataset[:2]["matrix"] self.assertIsInstance(matrix_field_of_first_two_examples, list) self.assertIsInstance(matrix_field_of_first_two_examples[0], list) self.assertIsInstance(matrix_field_of_first_two_examples[0][0], list) self.assertTupleEqual(np.array(matrix_field_of_first_two_examples).shape, (2, *shape_2)) with dataset.formatted_as("numpy"): self.assertTupleEqual(dataset["matrix"].shape, (2, *shape_2)) self.assertEqual(dataset[0]["matrix"].shape, shape_2) self.assertTupleEqual(dataset[:2]["matrix"].shape, (2, *shape_2)) with dataset.formatted_as("pandas"): 
self.assertIsInstance(dataset["matrix"], pd.Series) self.assertIsInstance(dataset[0]["matrix"], pd.Series) self.assertIsInstance(dataset[:2]["matrix"], pd.Series) self.assertTupleEqual(dataset["matrix"].to_numpy().shape, (2, *shape_2)) self.assertTupleEqual(dataset[0]["matrix"].to_numpy().shape, (1, *shape_2)) self.assertTupleEqual(dataset[:2]["matrix"].to_numpy().shape, (2, *shape_2)) def test_write(self, array_feature, shape_1, shape_2): with tempfile.TemporaryDirectory() as tmp_dir: my_features = self.get_features(array_feature, shape_1, shape_2) my_examples = [ (0, self.get_dict_example_0(shape_1, shape_2)), (1, self.get_dict_example_1(shape_1, shape_2)), ] with ArrowWriter(features=my_features, path=os.path.join(tmp_dir, "beta.arrow")) as writer: for key, record in my_examples: example = my_features.encode_example(record) writer.write(example) num_examples, num_bytes = writer.finalize() dataset = datasets.Dataset.from_file(os.path.join(tmp_dir, "beta.arrow")) self._check_getitem_output_type(dataset, shape_1, shape_2, my_examples[0][1]["matrix"]) del dataset def test_write_batch(self, array_feature, shape_1, shape_2): with tempfile.TemporaryDirectory() as tmp_dir: my_features = self.get_features(array_feature, shape_1, shape_2) dict_examples = self.get_dict_examples(shape_1, shape_2) dict_examples = my_features.encode_batch(dict_examples) with ArrowWriter(features=my_features, path=os.path.join(tmp_dir, "beta.arrow")) as writer: writer.write_batch(dict_examples) num_examples, num_bytes = writer.finalize() dataset = datasets.Dataset.from_file(os.path.join(tmp_dir, "beta.arrow")) self._check_getitem_output_type(dataset, shape_1, shape_2, dict_examples["matrix"][0]) del dataset def test_from_dict(self, array_feature, shape_1, shape_2): dict_examples = self.get_dict_examples(shape_1, shape_2) dataset = datasets.Dataset.from_dict( dict_examples, features=self.get_features(array_feature, shape_1, shape_2) ) self._check_getitem_output_type(dataset, shape_1, shape_2, dict_examples["matrix"][0]) del dataset class ArrayXDDynamicTest(unittest.TestCase): def get_one_col_dataset(self, first_dim_list, fixed_shape): features = datasets.Features({"image": Array3D(shape=(None, *fixed_shape), dtype="float32")}) dict_values = {"image": [np.random.rand(fdim, *fixed_shape).astype("float32") for fdim in first_dim_list]} dataset = datasets.Dataset.from_dict(dict_values, features=features) return dataset def get_two_col_datasset(self, first_dim_list, fixed_shape): features = datasets.Features( {"image": Array3D(shape=(None, *fixed_shape), dtype="float32"), "text": Value("string")} ) dict_values = { "image": [np.random.rand(fdim, *fixed_shape).astype("float32") for fdim in first_dim_list], "text": ["text" for _ in first_dim_list], } dataset = datasets.Dataset.from_dict(dict_values, features=features) return dataset def test_to_pylist(self): fixed_shape = (2, 2) first_dim_list = [1, 3, 10] dataset = self.get_one_col_dataset(first_dim_list, fixed_shape) arr_xd = SimpleArrowExtractor().extract_column(dataset._data) self.assertIsInstance(arr_xd.type, Array3DExtensionType) pylist = arr_xd.to_pylist() for first_dim, single_arr in zip(first_dim_list, pylist): self.assertIsInstance(single_arr, list) self.assertTupleEqual(np.array(single_arr).shape, (first_dim, *fixed_shape)) def test_to_numpy(self): fixed_shape = (2, 2) # ragged first_dim_list = [1, 3, 10] dataset = self.get_one_col_dataset(first_dim_list, fixed_shape) arr_xd = SimpleArrowExtractor().extract_column(dataset._data) self.assertIsInstance(arr_xd.type, 
Array3DExtensionType) # replace with arr_xd = arr_xd.combine_chunks() when 12.0.0 will be the minimal required PyArrow version arr_xd = arr_xd.type.wrap_array(pa.concat_arrays([chunk.storage for chunk in arr_xd.chunks])) numpy_arr = arr_xd.to_numpy() self.assertIsInstance(numpy_arr, np.ndarray) self.assertEqual(numpy_arr.dtype, object) for first_dim, single_arr in zip(first_dim_list, numpy_arr): self.assertIsInstance(single_arr, np.ndarray) self.assertTupleEqual(single_arr.shape, (first_dim, *fixed_shape)) # non-ragged first_dim_list = [4, 4, 4] dataset = self.get_one_col_dataset(first_dim_list, fixed_shape) arr_xd = SimpleArrowExtractor().extract_column(dataset._data) self.assertIsInstance(arr_xd.type, Array3DExtensionType) # replace with arr_xd = arr_xd.combine_chunks() when 12.0.0 will be the minimal required PyArrow version arr_xd = arr_xd.type.wrap_array(pa.concat_arrays([chunk.storage for chunk in arr_xd.chunks])) numpy_arr = arr_xd.to_numpy() self.assertIsInstance(numpy_arr, np.ndarray) self.assertNotEqual(numpy_arr.dtype, object) for first_dim, single_arr in zip(first_dim_list, numpy_arr): self.assertIsInstance(single_arr, np.ndarray) self.assertTupleEqual(single_arr.shape, (first_dim, *fixed_shape)) def test_iter_dataset(self): fixed_shape = (2, 2) first_dim_list = [1, 3, 10] dataset = self.get_one_col_dataset(first_dim_list, fixed_shape) for first_dim, ds_row in zip(first_dim_list, dataset): single_arr = ds_row["image"] self.assertIsInstance(single_arr, list) self.assertTupleEqual(np.array(single_arr).shape, (first_dim, *fixed_shape)) def test_to_pandas(self): fixed_shape = (2, 2) # ragged first_dim_list = [1, 3, 10] dataset = self.get_one_col_dataset(first_dim_list, fixed_shape) df = dataset.to_pandas() self.assertEqual(type(df.image.dtype), PandasArrayExtensionDtype) numpy_arr = df.image.to_numpy() self.assertIsInstance(numpy_arr, np.ndarray) self.assertEqual(numpy_arr.dtype, object) for first_dim, single_arr in zip(first_dim_list, numpy_arr): self.assertIsInstance(single_arr, np.ndarray) self.assertTupleEqual(single_arr.shape, (first_dim, *fixed_shape)) # non-ragged first_dim_list = [4, 4, 4] dataset = self.get_one_col_dataset(first_dim_list, fixed_shape) df = dataset.to_pandas() self.assertEqual(type(df.image.dtype), PandasArrayExtensionDtype) numpy_arr = df.image.to_numpy() self.assertIsInstance(numpy_arr, np.ndarray) self.assertNotEqual(numpy_arr.dtype, object) for first_dim, single_arr in zip(first_dim_list, numpy_arr): self.assertIsInstance(single_arr, np.ndarray) self.assertTupleEqual(single_arr.shape, (first_dim, *fixed_shape)) def test_map_dataset(self): fixed_shape = (2, 2) first_dim_list = [1, 3, 10] dataset = self.get_one_col_dataset(first_dim_list, fixed_shape) dataset = dataset.map(lambda a: {"image": np.concatenate([a] * 2)}, input_columns="image") # check also if above function resulted with 2x bigger first dim for first_dim, ds_row in zip(first_dim_list, dataset): single_arr = ds_row["image"] self.assertIsInstance(single_arr, list) self.assertTupleEqual(np.array(single_arr).shape, (first_dim * 2, *fixed_shape)) @pytest.mark.parametrize("dtype, dummy_value", [("int32", 1), ("bool", True), ("float64", 1)]) def test_table_to_pandas(dtype, dummy_value): features = datasets.Features({"foo": datasets.Array2D(dtype=dtype, shape=(2, 2))}) dataset = datasets.Dataset.from_dict({"foo": [[[dummy_value] * 2] * 2]}, features=features) df = dataset._data.to_pandas() assert isinstance(df.foo.dtype, PandasArrayExtensionDtype) arr = df.foo.to_numpy() np.testing.assert_equal(arr, 
np.array([[[dummy_value] * 2] * 2], dtype=np.dtype(dtype))) @pytest.mark.parametrize("dtype, dummy_value", [("int32", 1), ("bool", True), ("float64", 1)]) def test_array_xd_numpy_arrow_extractor(dtype, dummy_value): features = datasets.Features({"foo": datasets.Array2D(dtype=dtype, shape=(2, 2))}) dataset = datasets.Dataset.from_dict({"foo": [[[dummy_value] * 2] * 2]}, features=features) arr = NumpyArrowExtractor().extract_column(dataset._data) assert isinstance(arr, np.ndarray) np.testing.assert_equal(arr, np.array([[[dummy_value] * 2] * 2], dtype=np.dtype(dtype))) def test_array_xd_with_none(): # Fixed shape features = datasets.Features({"foo": datasets.Array2D(dtype="int32", shape=(2, 2))}) dummy_array = np.array([[1, 2], [3, 4]], dtype="int32") dataset = datasets.Dataset.from_dict({"foo": [dummy_array, None, dummy_array, None]}, features=features) arr = NumpyArrowExtractor().extract_column(dataset._data) assert isinstance(arr, np.ndarray) and arr.dtype == np.float64 and arr.shape == (4, 2, 2) assert np.allclose(arr[0], dummy_array) and np.allclose(arr[2], dummy_array) assert np.all(np.isnan(arr[1])) and np.all(np.isnan(arr[3])) # broadcasted np.nan - use np.all # Dynamic shape features = datasets.Features({"foo": datasets.Array2D(dtype="int32", shape=(None, 2))}) dummy_array = np.array([[1, 2], [3, 4]], dtype="int32") dataset = datasets.Dataset.from_dict({"foo": [dummy_array, None, dummy_array, None]}, features=features) arr = NumpyArrowExtractor().extract_column(dataset._data) assert isinstance(arr, np.ndarray) and arr.dtype == object and arr.shape == (4,) np.testing.assert_equal(arr[0], dummy_array) np.testing.assert_equal(arr[2], dummy_array) assert np.isnan(arr[1]) and np.isnan(arr[3]) # a single np.nan value - np.all not needed @pytest.mark.parametrize("seq_type", ["no_sequence", "sequence", "sequence_of_sequence"]) @pytest.mark.parametrize( "dtype", [ "bool", "int8", "int16", "int32", "int64", "uint8", "uint16", "uint32", "uint64", "float16", "float32", "float64", ], ) @pytest.mark.parametrize("shape, feature_class", [((2, 3), datasets.Array2D), ((2, 3, 4), datasets.Array3D)]) def test_array_xd_with_np(seq_type, dtype, shape, feature_class): feature = feature_class(dtype=dtype, shape=shape) data = np.zeros(shape, dtype=dtype) expected = data.tolist() if seq_type == "sequence": feature = datasets.Sequence(feature) data = [data] expected = [expected] elif seq_type == "sequence_of_sequence": feature = datasets.Sequence(datasets.Sequence(feature)) data = [[data]] expected = [[expected]] ds = datasets.Dataset.from_dict({"col": [data]}, features=datasets.Features({"col": feature})) assert ds[0]["col"] == expected @pytest.mark.parametrize("with_none", [False, True]) def test_dataset_map(with_none): ds = datasets.Dataset.from_dict({"path": ["path1", "path2"]}) def process_data(batch): batch = { "image": [ np.array( [ [[1, 2, 3], [4, 5, 6], [7, 8, 9]], [[10, 20, 30], [40, 50, 60], [70, 80, 90]], [[100, 200, 300], [400, 500, 600], [700, 800, 900]], ] ) for _ in batch["path"] ] } if with_none: batch["image"][0] = None return batch features = datasets.Features({"image": Array3D(dtype="int32", shape=(3, 3, 3))}) processed_ds = ds.map(process_data, batched=True, remove_columns=ds.column_names, features=features) assert processed_ds.shape == (2, 1) with processed_ds.with_format("numpy") as pds: for i, example in enumerate(pds): assert "image" in example assert isinstance(example["image"], np.ndarray) assert example["image"].shape == (3, 3, 3) if with_none and i == 0: assert 
np.all(np.isnan(example["image"]))
datasets/tests/features/test_array_xd.py/0
{ "file_path": "datasets/tests/features/test_array_xd.py", "repo_id": "datasets", "token_count": 9827 }
86
import pytest from datasets import Dataset, DatasetDict, Features, NamedSplit, Value from datasets.io.text import TextDatasetReader from ..utils import assert_arrow_memory_doesnt_increase, assert_arrow_memory_increases def _check_text_dataset(dataset, expected_features): assert isinstance(dataset, Dataset) assert dataset.num_rows == 4 assert dataset.num_columns == 1 assert dataset.column_names == ["text"] for feature, expected_dtype in expected_features.items(): assert dataset.features[feature].dtype == expected_dtype @pytest.mark.parametrize("keep_in_memory", [False, True]) def test_dataset_from_text_keep_in_memory(keep_in_memory, text_path, tmp_path): cache_dir = tmp_path / "cache" expected_features = {"text": "string"} with assert_arrow_memory_increases() if keep_in_memory else assert_arrow_memory_doesnt_increase(): dataset = TextDatasetReader(text_path, cache_dir=cache_dir, keep_in_memory=keep_in_memory).read() _check_text_dataset(dataset, expected_features) @pytest.mark.parametrize( "features", [ None, {"text": "string"}, {"text": "int32"}, {"text": "float32"}, ], ) def test_dataset_from_text_features(features, text_path, tmp_path): cache_dir = tmp_path / "cache" default_expected_features = {"text": "string"} expected_features = features.copy() if features else default_expected_features features = ( Features({feature: Value(dtype) for feature, dtype in features.items()}) if features is not None else None ) dataset = TextDatasetReader(text_path, features=features, cache_dir=cache_dir).read() _check_text_dataset(dataset, expected_features) @pytest.mark.parametrize("split", [None, NamedSplit("train"), "train", "test"]) def test_dataset_from_text_split(split, text_path, tmp_path): cache_dir = tmp_path / "cache" expected_features = {"text": "string"} dataset = TextDatasetReader(text_path, cache_dir=cache_dir, split=split).read() _check_text_dataset(dataset, expected_features) assert dataset.split == split if split else "train" @pytest.mark.parametrize("path_type", [str, list]) def test_dataset_from_text_path_type(path_type, text_path, tmp_path): if issubclass(path_type, str): path = text_path elif issubclass(path_type, list): path = [text_path] cache_dir = tmp_path / "cache" expected_features = {"text": "string"} dataset = TextDatasetReader(path, cache_dir=cache_dir).read() _check_text_dataset(dataset, expected_features) def _check_text_datasetdict(dataset_dict, expected_features, splits=("train",)): assert isinstance(dataset_dict, DatasetDict) for split in splits: dataset = dataset_dict[split] assert dataset.num_rows == 4 assert dataset.num_columns == 1 assert dataset.column_names == ["text"] for feature, expected_dtype in expected_features.items(): assert dataset.features[feature].dtype == expected_dtype @pytest.mark.parametrize("keep_in_memory", [False, True]) def test_datasetdict_from_text_keep_in_memory(keep_in_memory, text_path, tmp_path): cache_dir = tmp_path / "cache" expected_features = {"text": "string"} with assert_arrow_memory_increases() if keep_in_memory else assert_arrow_memory_doesnt_increase(): dataset = TextDatasetReader({"train": text_path}, cache_dir=cache_dir, keep_in_memory=keep_in_memory).read() _check_text_datasetdict(dataset, expected_features) @pytest.mark.parametrize( "features", [ None, {"text": "string"}, {"text": "int32"}, {"text": "float32"}, ], ) def test_datasetdict_from_text_features(features, text_path, tmp_path): cache_dir = tmp_path / "cache" # CSV file loses col_1 string dtype information: default now is "int64" instead of "string" 
default_expected_features = {"text": "string"} expected_features = features.copy() if features else default_expected_features features = ( Features({feature: Value(dtype) for feature, dtype in features.items()}) if features is not None else None ) dataset = TextDatasetReader({"train": text_path}, features=features, cache_dir=cache_dir).read() _check_text_datasetdict(dataset, expected_features) @pytest.mark.parametrize("split", [None, NamedSplit("train"), "train", "test"]) def test_datasetdict_from_text_split(split, text_path, tmp_path): if split: path = {split: text_path} else: split = "train" path = {"train": text_path, "test": text_path} cache_dir = tmp_path / "cache" expected_features = {"text": "string"} dataset = TextDatasetReader(path, cache_dir=cache_dir).read() _check_text_datasetdict(dataset, expected_features, splits=list(path.keys())) assert all(dataset[split].split == split for split in path.keys())
datasets/tests/io/test_text.py/0
{ "file_path": "datasets/tests/io/test_text.py", "repo_id": "datasets", "token_count": 1833 }
87
import os import tempfile from pathlib import Path from unittest import TestCase import pyarrow as pa import pytest from datasets.arrow_dataset import Dataset from datasets.arrow_reader import ArrowReader, BaseReader, FileInstructions, ReadInstruction, make_file_instructions from datasets.info import DatasetInfo from datasets.splits import NamedSplit, Split, SplitDict, SplitInfo from .utils import assert_arrow_memory_doesnt_increase, assert_arrow_memory_increases class ReaderTest(BaseReader): """ Build a Dataset object out of Instruction instance(s). This reader is made for testing. It mocks file reads. """ def _get_table_from_filename(self, filename_skip_take, in_memory=False): """Returns a Dataset instance from given (filename, skip, take).""" filename, skip, take = ( filename_skip_take["filename"], filename_skip_take["skip"] if "skip" in filename_skip_take else None, filename_skip_take["take"] if "take" in filename_skip_take else None, ) open(os.path.join(filename), "wb").close() pa_table = pa.Table.from_pydict({"filename": [Path(filename).name] * 100}) if take == -1: take = len(pa_table) - skip if skip is not None and take is not None: pa_table = pa_table.slice(skip, take) return pa_table class BaseReaderTest(TestCase): def test_read(self): name = "my_name" train_info = SplitInfo(name="train", num_examples=100) test_info = SplitInfo(name="test", num_examples=100) split_infos = [train_info, test_info] split_dict = SplitDict() split_dict.add(train_info) split_dict.add(test_info) info = DatasetInfo(splits=split_dict) with tempfile.TemporaryDirectory() as tmp_dir: reader = ReaderTest(tmp_dir, info) instructions = "test[:33%]" dset = Dataset(**reader.read(name, instructions, split_infos)) self.assertEqual(dset["filename"][0], f"{name}-test") self.assertEqual(dset.num_rows, 33) self.assertEqual(dset.num_columns, 1) instructions1 = ["train", "test[:33%]"] instructions2 = [Split.TRAIN, ReadInstruction.from_spec("test[:33%]")] for instructions in [instructions1, instructions2]: datasets_kwargs = [reader.read(name, instr, split_infos) for instr in instructions] train_dset, test_dset = (Dataset(**dataset_kwargs) for dataset_kwargs in datasets_kwargs) self.assertEqual(train_dset["filename"][0], f"{name}-train") self.assertEqual(train_dset.num_rows, 100) self.assertEqual(train_dset.num_columns, 1) self.assertIsInstance(train_dset.split, NamedSplit) self.assertEqual(str(train_dset.split), "train") self.assertEqual(test_dset["filename"][0], f"{name}-test") self.assertEqual(test_dset.num_rows, 33) self.assertEqual(test_dset.num_columns, 1) self.assertIsInstance(test_dset.split, NamedSplit) self.assertEqual(str(test_dset.split), "test[:33%]") del train_dset, test_dset def test_read_sharded(self): name = "my_name" train_info = SplitInfo(name="train", num_examples=1000, shard_lengths=[100] * 10) split_infos = [train_info] split_dict = SplitDict() split_dict.add(train_info) info = DatasetInfo(splits=split_dict) with tempfile.TemporaryDirectory() as tmp_dir: reader = ReaderTest(tmp_dir, info) instructions = "train[:33%]" dset = Dataset(**reader.read(name, instructions, split_infos)) self.assertEqual(dset["filename"][0], f"{name}-train-00000-of-00010") self.assertEqual(dset["filename"][-1], f"{name}-train-00003-of-00010") self.assertEqual(dset.num_rows, 330) self.assertEqual(dset.num_columns, 1) def test_read_files(self): train_info = SplitInfo(name="train", num_examples=100) test_info = SplitInfo(name="test", num_examples=100) split_dict = SplitDict() split_dict.add(train_info) split_dict.add(test_info) 
info = DatasetInfo(splits=split_dict) with tempfile.TemporaryDirectory() as tmp_dir: reader = ReaderTest(tmp_dir, info) files = [ {"filename": os.path.join(tmp_dir, "train")}, {"filename": os.path.join(tmp_dir, "test"), "skip": 10, "take": 10}, ] dset = Dataset(**reader.read_files(files, original_instructions="train+test[10:20]")) self.assertEqual(dset.num_rows, 110) self.assertEqual(dset.num_columns, 1) del dset @pytest.mark.parametrize("in_memory", [False, True]) def test_read_table(in_memory, dataset, arrow_file): filename = arrow_file with assert_arrow_memory_increases() if in_memory else assert_arrow_memory_doesnt_increase(): table = ArrowReader.read_table(filename, in_memory=in_memory) assert table.shape == dataset.data.shape assert set(table.column_names) == set(dataset.data.column_names) assert dict(table.to_pydict()) == dict(dataset.data.to_pydict()) # to_pydict returns OrderedDict @pytest.mark.parametrize("in_memory", [False, True]) def test_read_files(in_memory, dataset, arrow_file): filename = arrow_file reader = ArrowReader("", None) with assert_arrow_memory_increases() if in_memory else assert_arrow_memory_doesnt_increase(): dataset_kwargs = reader.read_files([{"filename": filename}], in_memory=in_memory) assert dataset_kwargs.keys() == {"arrow_table", "info", "split"} table = dataset_kwargs["arrow_table"] assert table.shape == dataset.data.shape assert set(table.column_names) == set(dataset.data.column_names) assert dict(table.to_pydict()) == dict(dataset.data.to_pydict()) # to_pydict returns OrderedDict def test_read_instruction_spec(): assert ReadInstruction("train", to=10, unit="abs").to_spec() == "train[:10]" assert ReadInstruction("train", from_=-80, to=10, unit="%").to_spec() == "train[-80%:10%]" spec_train_test = "train+test" assert ReadInstruction.from_spec(spec_train_test).to_spec() == spec_train_test spec_train_abs = "train[2:10]" assert ReadInstruction.from_spec(spec_train_abs).to_spec() == spec_train_abs spec_train_pct = "train[15%:-20%]" assert ReadInstruction.from_spec(spec_train_pct).to_spec() == spec_train_pct spec_train_pct_rounding = "train[:10%](closest)" assert ReadInstruction.from_spec(spec_train_pct_rounding).to_spec() == "train[:10%]" spec_train_pct_rounding = "train[:10%](pct1_dropremainder)" assert ReadInstruction.from_spec(spec_train_pct_rounding).to_spec() == spec_train_pct_rounding spec_train_test_pct_rounding = "train[:10%](pct1_dropremainder)+test[-10%:](pct1_dropremainder)" assert ReadInstruction.from_spec(spec_train_test_pct_rounding).to_spec() == spec_train_test_pct_rounding def test_make_file_instructions_basic(): name = "dummy" split_infos = [SplitInfo(name="train", num_examples=100)] instruction = "train[:33%]" filetype_suffix = "arrow" prefix_path = "prefix" file_instructions = make_file_instructions(name, split_infos, instruction, filetype_suffix, prefix_path) assert isinstance(file_instructions, FileInstructions) assert file_instructions.num_examples == 33 assert file_instructions.file_instructions == [ {"filename": os.path.join(prefix_path, f"{name}-train.arrow"), "skip": 0, "take": 33} ] split_infos = [SplitInfo(name="train", num_examples=100, shard_lengths=[10] * 10)] file_instructions = make_file_instructions(name, split_infos, instruction, filetype_suffix, prefix_path) assert isinstance(file_instructions, FileInstructions) assert file_instructions.num_examples == 33 assert file_instructions.file_instructions == [ {"filename": os.path.join(prefix_path, f"{name}-train-00000-of-00010.arrow"), "skip": 0, "take": -1}, {"filename": 
os.path.join(prefix_path, f"{name}-train-00001-of-00010.arrow"), "skip": 0, "take": -1}, {"filename": os.path.join(prefix_path, f"{name}-train-00002-of-00010.arrow"), "skip": 0, "take": -1}, {"filename": os.path.join(prefix_path, f"{name}-train-00003-of-00010.arrow"), "skip": 0, "take": 3}, ] @pytest.mark.parametrize( "split_name, instruction, shard_lengths, read_range", [ ("train", "train[-20%:]", 100, (80, 100)), ("train", "train[:200]", 100, (0, 100)), ("train", "train[:-200]", 100, None), ("train", "train[-200:]", 100, (0, 100)), ("train", "train[-20%:]", [10] * 10, (80, 100)), ("train", "train[:200]", [10] * 10, (0, 100)), ("train", "train[:-200]", [10] * 10, None), ("train", "train[-200:]", [10] * 10, (0, 100)), ], ) def test_make_file_instructions(split_name, instruction, shard_lengths, read_range): name = "dummy" split_infos = split_infos = [ SplitInfo( name="train", num_examples=shard_lengths if not isinstance(shard_lengths, list) else sum(shard_lengths), shard_lengths=shard_lengths if isinstance(shard_lengths, list) else None, ) ] filetype_suffix = "arrow" prefix_path = "prefix" file_instructions = make_file_instructions(name, split_infos, instruction, filetype_suffix, prefix_path) assert isinstance(file_instructions, FileInstructions) assert file_instructions.num_examples == (read_range[1] - read_range[0] if read_range is not None else 0) if read_range is None: assert file_instructions.file_instructions == [] else: if not isinstance(shard_lengths, list): assert file_instructions.file_instructions == [ { "filename": os.path.join(prefix_path, f"{name}-{split_name}.arrow"), "skip": read_range[0], "take": read_range[1] - read_range[0], } ] else: file_instructions_list = [] shard_offset = 0 for i, shard_length in enumerate(shard_lengths): filename = os.path.join(prefix_path, f"{name}-{split_name}-{i:05d}-of-{len(shard_lengths):05d}.arrow") if shard_offset <= read_range[0] < shard_offset + shard_length: file_instructions_list.append( { "filename": filename, "skip": read_range[0] - shard_offset, "take": read_range[1] - read_range[0] if read_range[1] < shard_offset + shard_length else -1, } ) elif shard_offset < read_range[1] <= shard_offset + shard_length: file_instructions_list.append( { "filename": filename, "skip": 0, "take": read_range[1] - shard_offset if read_range[1] < shard_offset + shard_length else -1, } ) elif read_range[0] < shard_offset and read_range[1] > shard_offset + shard_length: file_instructions_list.append( { "filename": filename, "skip": 0, "take": -1, } ) shard_offset += shard_length assert file_instructions.file_instructions == file_instructions_list @pytest.mark.parametrize("name, expected_exception", [(None, TypeError), ("", ValueError)]) def test_make_file_instructions_raises(name, expected_exception): split_infos = [SplitInfo(name="train", num_examples=100)] instruction = "train" filetype_suffix = "arrow" prefix_path = "prefix_path" with pytest.raises(expected_exception): _ = make_file_instructions(name, split_infos, instruction, filetype_suffix, prefix_path)
datasets/tests/test_arrow_reader.py/0
{ "file_path": "datasets/tests/test_arrow_reader.py", "repo_id": "datasets", "token_count": 5688 }
88
from textwrap import dedent from types import SimpleNamespace from unittest.mock import patch from urllib.parse import quote import pytest from huggingface_hub import CommitOperationAdd, CommitOperationDelete import datasets from datasets.config import METADATA_CONFIGS_FIELD from datasets.hub import convert_to_parquet, delete_from_hub from datasets.utils.hub import hf_dataset_url DUMMY_DATASET_SCRIPT = dedent("""\ import datasets class NewDataset(datasets.GeneratorBasedBuilder): BUILDER_CONFIGS = [ datasets.BuilderConfig(name="first"), datasets.BuilderConfig(name="second"), ] DEFAULT_CONFIG_NAME = "first" def _info(self): return datasets.DatasetInfo( features=datasets.Features({"text": datasets.Value("string")}), ) def _split_generators(self, dl_manager): return [datasets.SplitGenerator(name=datasets.Split.TRAIN)] def _generate_examples(self): for key in range(5): yield key, {"text": f"{self.config.name}-{key}"} """) @pytest.mark.parametrize("repo_id", ["canonical_dataset_name", "org-name/dataset-name"]) @pytest.mark.parametrize("filename", ["filename.csv", "filename with blanks.csv"]) @pytest.mark.parametrize("revision", [None, "v2"]) def test_dataset_url(repo_id, filename, revision): url = hf_dataset_url(repo_id=repo_id, filename=filename, revision=revision) assert url == f"https://huggingface.co/datasets/{repo_id}/resolve/{revision or 'main'}/{quote(filename)}" def test_convert_to_parquet(temporary_repo, hf_api, hf_token, ci_hub_config, ci_hfh_hf_hub_url): with temporary_repo() as repo_id: hf_api.create_repo(repo_id, token=hf_token, repo_type="dataset") hf_api.upload_file( token=hf_token, path_or_fileobj=DUMMY_DATASET_SCRIPT.encode(), path_in_repo=f"{repo_id.split('/')[-1]}.py", repo_id=repo_id, repo_type="dataset", ) commit_info = SimpleNamespace( pr_revision="refs/pr/1", # "main", # pr_url="https:///hub-ci.huggingface.co/datasets/__DUMMY_USER__/__DUMMY_DATASET__/refs%2Fpr%2F1", ) with patch.object(datasets.hub.HfApi, "create_commit", return_value=commit_info) as mock_create_commit: with patch.object(datasets.hub.HfApi, "create_branch") as mock_create_branch: with patch.object(datasets.hub.HfApi, "list_repo_tree", return_value=[]): # not needed with patch.object(datasets.hub.HfApi, "preupload_lfs_files", return_value=None): # not needed _ = convert_to_parquet(repo_id, token=hf_token, trust_remote_code=True) # mock_create_branch assert mock_create_branch.called assert mock_create_branch.call_count == 1 assert mock_create_branch.call_args.kwargs.get("branch") == "script" # mock_create_commit assert mock_create_commit.called assert mock_create_commit.call_count == 2 expected_readmes = [ dedent(f"""\ --- dataset_info: config_name: first features: - name: text dtype: string splits: - name: train num_bytes: 55 num_examples: 5 download_size: 790 dataset_size: 55 {METADATA_CONFIGS_FIELD}: - config_name: first data_files: - split: train path: first/train-* default: true --- """), dedent(f"""\ --- dataset_info: config_name: second features: - name: text dtype: string splits: - name: train num_bytes: 60 num_examples: 5 download_size: 798 dataset_size: 60 {METADATA_CONFIGS_FIELD}: - config_name: second data_files: - split: train path: second/train-* --- """), ] for call_args, expected_commit_message, expected_create_pr, expected_readme, expected_parquet_path_in_repo in zip( mock_create_commit.call_args_list, ["Convert dataset to Parquet", "Add 'second' config data files"], [True, False], expected_readmes, ["first/train-00000-of-00001.parquet", "second/train-00000-of-00001.parquet"], ): assert 
call_args.kwargs.get("commit_message") == expected_commit_message assert call_args.kwargs.get("create_pr") is expected_create_pr operations = call_args.kwargs.get("operations") assert len(operations) == 2 for operation in operations: if operation.path_in_repo == "README.md": assert operation.path_or_fileobj.decode() == expected_readme else: assert operation.path_in_repo == expected_parquet_path_in_repo def test_delete_from_hub(temporary_repo, hf_api, hf_token, csv_path, ci_hub_config, ci_hfh_hf_hub_url) -> None: with temporary_repo() as repo_id: hf_api.create_repo(repo_id, token=hf_token, repo_type="dataset") hf_api.upload_file( path_or_fileobj=str(csv_path), path_in_repo="cats/train/0000.csv", repo_id=repo_id, repo_type="dataset", token=hf_token, ) hf_api.upload_file( path_or_fileobj=str(csv_path), path_in_repo="dogs/train/0000.csv", repo_id=repo_id, repo_type="dataset", token=hf_token, ) hf_api.upload_file( token=hf_token, path_or_fileobj=dedent(f"""\ --- {METADATA_CONFIGS_FIELD}: - config_name: cats data_files: - split: train path: cats/train/* - config_name: dogs data_files: - split: train path: dogs/train/* --- """).encode(), path_in_repo="README.md", repo_id=repo_id, repo_type="dataset", ) commit_info = SimpleNamespace( pr_url="https:///hub-ci.huggingface.co/datasets/__DUMMY_USER__/__DUMMY_DATASET__/refs%2Fpr%2F1" ) with patch.object(datasets.hub.HfApi, "create_commit", return_value=commit_info) as mock_method: _ = delete_from_hub(repo_id, "dogs") assert mock_method.called assert mock_method.call_args.kwargs.get("commit_message") == "Delete 'dogs' config" assert mock_method.call_args.kwargs.get("create_pr") expected_operations = [ CommitOperationDelete(path_in_repo="dogs/train/0000.csv", is_folder=False), CommitOperationAdd( path_in_repo="README.md", path_or_fileobj=dedent(f"""\ --- {METADATA_CONFIGS_FIELD}: - config_name: cats data_files: - split: train path: cats/train/* --- """).encode(), ), ] assert mock_method.call_args.kwargs.get("operations") == expected_operations
datasets/tests/test_hub.py/0
{ "file_path": "datasets/tests/test_hub.py", "repo_id": "datasets", "token_count": 3524 }
89
import unittest from unittest.mock import patch import pytest from pytest import CaptureFixture from datasets.utils import ( are_progress_bars_disabled, disable_progress_bars, enable_progress_bars, tqdm, ) class TestTqdmUtils(unittest.TestCase): @pytest.fixture(autouse=True) def capsys(self, capsys: CaptureFixture) -> None: """Workaround to make capsys work in unittest framework. Capsys is a convenient pytest fixture to capture stdout. See https://waylonwalker.com/pytest-capsys/. Taken from https://github.com/pytest-dev/pytest/issues/2504#issuecomment-309475790. """ self.capsys = capsys def setUp(self) -> None: """Get verbosity to set it back after the tests.""" self._previous_are_progress_bars_disabled = are_progress_bars_disabled() return super().setUp() def tearDown(self) -> None: """Set back progress bars verbosity as before testing.""" if self._previous_are_progress_bars_disabled: disable_progress_bars() else: enable_progress_bars() @patch("datasets.utils._tqdm.HF_DATASETS_DISABLE_PROGRESS_BARS", None) def test_tqdm_helpers(self) -> None: """Test helpers to enable/disable progress bars.""" disable_progress_bars() self.assertTrue(are_progress_bars_disabled()) enable_progress_bars() self.assertFalse(are_progress_bars_disabled()) @patch("datasets.utils._tqdm.HF_DATASETS_DISABLE_PROGRESS_BARS", True) def test_cannot_enable_tqdm_when_env_variable_is_set(self) -> None: """ Test helpers cannot enable/disable progress bars when `HF_DATASETS_DISABLE_PROGRESS_BARS` is set. """ disable_progress_bars() self.assertTrue(are_progress_bars_disabled()) with self.assertWarns(UserWarning): enable_progress_bars() self.assertTrue(are_progress_bars_disabled()) # Still disabled ! @patch("datasets.utils._tqdm.HF_DATASETS_DISABLE_PROGRESS_BARS", False) def test_cannot_disable_tqdm_when_env_variable_is_set(self) -> None: """ Test helpers cannot enable/disable progress bars when `HF_DATASETS_DISABLE_PROGRESS_BARS` is set. """ enable_progress_bars() self.assertFalse(are_progress_bars_disabled()) with self.assertWarns(UserWarning): disable_progress_bars() self.assertFalse(are_progress_bars_disabled()) # Still enabled ! @patch("datasets.utils._tqdm.HF_DATASETS_DISABLE_PROGRESS_BARS", None) def test_tqdm_disabled(self) -> None: """Test TQDM not outputting anything when globally disabled.""" disable_progress_bars() for _ in tqdm(range(10)): pass captured = self.capsys.readouterr() self.assertEqual(captured.out, "") self.assertEqual(captured.err, "") @patch("datasets.utils._tqdm.HF_DATASETS_DISABLE_PROGRESS_BARS", None) def test_tqdm_disabled_cannot_be_forced(self) -> None: """Test TQDM cannot be forced when globally disabled.""" disable_progress_bars() for _ in tqdm(range(10), disable=False): pass captured = self.capsys.readouterr() self.assertEqual(captured.out, "") self.assertEqual(captured.err, "") @patch("datasets.utils._tqdm.HF_DATASETS_DISABLE_PROGRESS_BARS", None) def test_tqdm_can_be_disabled_when_globally_enabled(self) -> None: """Test TQDM can still be locally disabled even when globally enabled.""" enable_progress_bars() for _ in tqdm(range(10), disable=True): pass captured = self.capsys.readouterr() self.assertEqual(captured.out, "") self.assertEqual(captured.err, "") @patch("datasets.utils._tqdm.HF_DATASETS_DISABLE_PROGRESS_BARS", None) def test_tqdm_enabled(self) -> None: """Test TQDM work normally when globally enabled.""" enable_progress_bars() for _ in tqdm(range(10)): pass captured = self.capsys.readouterr() self.assertEqual(captured.out, "") self.assertIn("10/10", captured.err) # tqdm log
datasets/tests/test_tqdm.py/0
{ "file_path": "datasets/tests/test_tqdm.py", "repo_id": "datasets", "token_count": 1804 }
90
<jupyter_start><jupyter_text>Unit 5: An Introduction to ML-Agents In this notebook, you'll learn about ML-Agents and train two agents.- The first one will learn to **shoot snowballs onto spawning targets**.- The second needs to press a button to spawn a pyramid, then navigate to the pyramid, knock it over, **and move to the gold brick at the top**. To do that, it will need to explore its environment, and we will use a technique called curiosity.After that, you'll be able **to watch your agents playing directly in your browser**.For more information about the certification process, check this section 👉 https://huggingface.co/deep-rl-course/en/unit0/introduction#certification-process ⬇️ Here is an example of what **you will achieve at the end of this unit.** ⬇️ 🎮 Environments:- [Pyramids](https://github.com/Unity-Technologies/ml-agents/blob/main/docs/Learning-Environment-Examples.md#pyramids)- SnowballTarget 📚 RL-Library:- [ML-Agents](https://github.com/Unity-Technologies/ml-agents) We're constantly trying to improve our tutorials, so **if you find some issues in this notebook**, please [open an issue on the GitHub Repo](https://github.com/huggingface/deep-rl-class/issues). Objectives of this notebook 🏆At the end of the notebook, you will:- Understand how **ML-Agents**, the environment library, works.- Be able to **train agents in Unity Environments**. This notebook is from the Deep Reinforcement Learning Course In this free course, you will:- 📖 Study Deep Reinforcement Learning in **theory and practice**.- 🧑‍💻 Learn to **use famous Deep RL libraries** such as Stable Baselines3, RL Baselines3 Zoo, CleanRL and Sample Factory 2.0.- 🤖 Train **agents in unique environments**, and more.Check 📚 the syllabus 👉 https://huggingface.co/deep-rl-course/communication/publishing-scheduleDon’t forget to **sign up to the course** (we are collecting your email to be able to **send you the links when each Unit is published and give you information about the challenges and updates**).The best way to keep in touch is to join our discord server to exchange with the community and with us 👉🏻 https://discord.gg/ydHrjt3WP5 Prerequisites 🏗️Before diving into the notebook, you need to:🔲 📚 **Study [what ML-Agents is and how it works by reading Unit 5](https://huggingface.co/deep-rl-course/unit5/introduction)** 🤗 Let's train our agents 🚀**To validate this hands-on for the certification process, you just need to push your trained models to the Hub**. There are no results to attain to validate this one. But if you want to get nice results you can try to reach:- For `Pyramids`: Mean Reward = 1.75- For `SnowballTarget`: Mean Reward = 15 or 30 targets hit in an episode. Set the GPU 💪- To **accelerate the agent's training, we'll use a GPU**. 
To do that, go to `Runtime > Change Runtime type` - `Hardware Accelerator > GPU` Clone the repository and install the dependencies 🔽<jupyter_code>%%capture # Clone the repository !git clone --depth 1 https://github.com/Unity-Technologies/ml-agents # Go inside the repository and install the package %cd ml-agents !pip3 install -e ./ml-agents-envs !pip3 install -e ./ml-agents<jupyter_output><empty_output><jupyter_text>SnowballTarget ⛄If you need a refresher on how this environments work check this section 👉https://huggingface.co/deep-rl-course/unit5/snowball-target Download and move the environment zip file in `./training-envs-executables/linux/`- Our environment executable is in a zip file.- We need to download it and place it to `./training-envs-executables/linux/`- We use a linux executable because we use colab, and colab machines OS is Ubuntu (linux)<jupyter_code># Here, we create training-envs-executables and linux !mkdir ./training-envs-executables !mkdir ./training-envs-executables/linux<jupyter_output><empty_output><jupyter_text>We downloaded the file SnowballTarget.zip from https://github.com/huggingface/Snowball-Target using `wget`<jupyter_code>!wget "https://github.com/huggingface/Snowball-Target/raw/main/SnowballTarget.zip" -O ./training-envs-executables/linux/SnowballTarget.zip<jupyter_output><empty_output><jupyter_text>We unzip the executable.zip file<jupyter_code>%%capture !unzip -d ./training-envs-executables/linux/ ./training-envs-executables/linux/SnowballTarget.zip<jupyter_output><empty_output><jupyter_text>Make sure your file is accessible<jupyter_code>!chmod -R 755 ./training-envs-executables/linux/SnowballTarget<jupyter_output><empty_output><jupyter_text>Define the SnowballTarget config file- In ML-Agents, you define the **training hyperparameters into config.yaml files.**There are multiple hyperparameters. To know them better, you should check for each explanation with [the documentation](https://github.com/Unity-Technologies/ml-agents/blob/release_20_docs/docs/Training-Configuration-File.md)So you need to create a `SnowballTarget.yaml` config file in ./content/ml-agents/config/ppo/We'll give you here a first version of this config (to copy and paste into your `SnowballTarget.yaml file`), **but you should modify it**.```behaviors: SnowballTarget: trainer_type: ppo summary_freq: 10000 keep_checkpoints: 10 checkpoint_interval: 50000 max_steps: 200000 time_horizon: 64 threaded: true hyperparameters: learning_rate: 0.0003 learning_rate_schedule: linear batch_size: 128 buffer_size: 2048 beta: 0.005 epsilon: 0.2 lambd: 0.95 num_epoch: 3 network_settings: normalize: false hidden_units: 256 num_layers: 2 vis_encode_type: simple reward_signals: extrinsic: gamma: 0.99 strength: 1.0``` As an experimentation, you should also try to modify some other hyperparameters. Unity provides very [good documentation explaining each of them here](https://github.com/Unity-Technologies/ml-agents/blob/main/docs/Training-Configuration-File.md).Now that you've created the config file and understand what most hyperparameters do, we're ready to train our agent 🔥. Train the agentTo train our agent, we just need to **launch mlagents-learn and select the executable containing the environment.**We define four parameters:1. `mlagents-learn `: the path where the hyperparameter config file is.2. `--env`: where the environment executable is.3. `--run_id`: the name you want to give to your training run id.4. 
`--no-graphics`: to not launch the visualization during the training.Train the model and use the `--resume` flag to continue training in case of interruption.> It will fail first time if and when you use `--resume`, try running the block again to bypass the error. The training will take 10 to 35min depending on your config, go take a ☕️you deserve it 🤗.<jupyter_code>!mlagents-learn ./config/ppo/SnowballTarget.yaml --env=./training-envs-executables/linux/SnowballTarget/SnowballTarget --run-id="SnowballTarget1" --no-graphics<jupyter_output><empty_output><jupyter_text>Push the agent to the 🤗 Hub- Now that we trained our agent, we’re **ready to push it to the Hub to be able to visualize it playing on your browser🔥.** To be able to share your model with the community there are three more steps to follow:1️⃣ (If it's not already done) create an account to HF ➡ https://huggingface.co/join2️⃣ Sign in and then, you need to store your authentication token from the Hugging Face website.- Create a new token (https://huggingface.co/settings/tokens) **with write role**- Copy the token- Run the cell below and paste the token<jupyter_code>from huggingface_hub import notebook_login notebook_login()<jupyter_output><empty_output><jupyter_text>If you don't want to use a Google Colab or a Jupyter Notebook, you need to use this command instead: `huggingface-cli login` Then, we simply need to run `mlagents-push-to-hf`.And we define 4 parameters:1. `--run-id`: the name of the training run id.2. `--local-dir`: where the agent was saved, it’s results/, so in my case results/First Training.3. `--repo-id`: the name of the Hugging Face repo you want to create or update. It’s always /If the repo does not exist **it will be created automatically**4. `--commit-message`: since HF repos are git repository you need to define a commit message.For instance:`!mlagents-push-to-hf --run-id="SnowballTarget1" --local-dir="./results/SnowballTarget1" --repo-id="ThomasSimonini/ppo-SnowballTarget" --commit-message="First Push"`<jupyter_code>!mlagents-push-to-hf --run-id="SnowballTarget1" --local-dir="./results/SnowballTarget1" --repo-id="ThomasSimonini/ppo-SnowballTarget" --commit-message="First Push" !mlagents-push-to-hf --run-id= # Add your run id --local-dir= # Your local dir --repo-id= # Your repo id --commit-message= # Your commit message<jupyter_output><empty_output><jupyter_text>Else, if everything worked you should have this at the end of the process(but with a different url 😆) :```Your model is pushed to the hub. You can view your model here: https://huggingface.co/ThomasSimonini/ppo-SnowballTarget```It’s the link to your model, it contains a model card that explains how to use it, your Tensorboard and your config file. **What’s awesome is that it’s a git repository, that means you can have different commits, update your repository with a new push etc.** But now comes the best: **being able to visualize your agent online 👀.** Watch your agent playing 👀For this step it’s simple:1. Go here: https://huggingface.co/spaces/ThomasSimonini/ML-Agents-SnowballTarget2. Launch the game and put it in full screen by clicking on the bottom right button 1. In step 1, type your username (your username is case sensitive: for instance, my username is ThomasSimonini not thomassimonini or ThOmasImoNInI) and click on the search button.2. In step 2, select your model repository.3. In step 3, **choose which model you want to replay**: - I have multiple ones, since we saved a model every 500000 timesteps. 
- But since I want the more recent, I choose `SnowballTarget.onnx`👉 What’s nice **is to try with different models step to see the improvement of the agent.**And don't hesitate to share the best score your agent gets on discord in rl-i-made-this channel 🔥Let's now try a harder environment called Pyramids... Pyramids 🏆 Download and move the environment zip file in `./training-envs-executables/linux/`- Our environment executable is in a zip file.- We need to download it and place it to `./training-envs-executables/linux/`- We use a linux executable because we use colab, and colab machines OS is Ubuntu (linux) We downloaded the file Pyramids.zip from from https://huggingface.co/spaces/unity/ML-Agents-Pyramids/resolve/main/Pyramids.zip using `wget`<jupyter_code>!wget "https://huggingface.co/spaces/unity/ML-Agents-Pyramids/resolve/main/Pyramids.zip" -O ./training-envs-executables/linux/Pyramids.zip<jupyter_output><empty_output><jupyter_text>We unzip the executable.zip file<jupyter_code>%%capture !unzip -d ./training-envs-executables/linux/ ./training-envs-executables/linux/Pyramids.zip<jupyter_output><empty_output><jupyter_text>Make sure your file is accessible<jupyter_code>!chmod -R 755 ./training-envs-executables/linux/Pyramids/Pyramids<jupyter_output><empty_output><jupyter_text>Modify the PyramidsRND config file- Contrary to the first environment which was a custom one, **Pyramids was made by the Unity team**.- So the PyramidsRND config file already exists and is in ./content/ml-agents/config/ppo/PyramidsRND.yaml- You might asked why "RND" in PyramidsRND. RND stands for *random network distillation* it's a way to generate curiosity rewards. If you want to know more on that we wrote an article explaning this technique: https://medium.com/data-from-the-trenches/curiosity-driven-learning-through-random-network-distillation-488ffd8e5938For this training, we’ll modify one thing:- The total training steps hyperparameter is too high since we can hit the benchmark (mean reward = 1.75) in only 1M training steps.👉 To do that, we go to config/ppo/PyramidsRND.yaml,**and modify these to max_steps to 1000000.** As an experimentation, you should also try to modify some other hyperparameters, Unity provides a very [good documentation explaining each of them here](https://github.com/Unity-Technologies/ml-agents/blob/main/docs/Training-Configuration-File.md).We’re now ready to train our agent 🔥. Train the agentThe training will take 30 to 45min depending on your machine, go take a ☕️you deserve it 🤗.<jupyter_code>!mlagents-learn ./config/ppo/PyramidsRND.yaml --env=./training-envs-executables/linux/Pyramids/Pyramids --run-id="Pyramids Training" --no-graphics<jupyter_output><empty_output><jupyter_text>Push the agent to the 🤗 Hub- Now that we trained our agent, we’re **ready to push it to the Hub to be able to visualize it playing on your browser🔥.**<jupyter_code>!mlagents-push-to-hf --run-id= # Add your run id --local-dir= # Your local dir --repo-id= # Your repo id --commit-message= # Your commit message<jupyter_output><empty_output>
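<jupyter_text>Bonus: create the config file from code (optional)Since `SnowballTarget.yaml` must exist under `./config/ppo/` before `mlagents-learn` can find it, here is a small optional sketch (not part of the original notebook) that writes the example config shown earlier to that location. The path assumes the Colab layout used above (current directory `./ml-agents`); adjust the path and the hyperparameters to your own setup.<jupyter_code># Optional helper (a sketch, not part of the official tutorial): write the example
# SnowballTarget config shown earlier to ./config/ppo/SnowballTarget.yaml so you
# don't have to create the file by hand in the Colab file browser.
from pathlib import Path

config_text = """\
behaviors:
  SnowballTarget:
    trainer_type: ppo
    summary_freq: 10000
    keep_checkpoints: 10
    checkpoint_interval: 50000
    max_steps: 200000
    time_horizon: 64
    threaded: true
    hyperparameters:
      learning_rate: 0.0003
      learning_rate_schedule: linear
      batch_size: 128
      buffer_size: 2048
      beta: 0.005
      epsilon: 0.2
      lambd: 0.95
      num_epoch: 3
    network_settings:
      normalize: false
      hidden_units: 256
      num_layers: 2
      vis_encode_type: simple
    reward_signals:
      extrinsic:
        gamma: 0.99
        strength: 1.0
"""

config_path = Path("./config/ppo/SnowballTarget.yaml")  # assumes the current directory is ./ml-agents
config_path.parent.mkdir(parents=True, exist_ok=True)
config_path.write_text(config_text)
print(f"Wrote {config_path}")<jupyter_output><empty_output>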
deep-rl-class/notebooks/unit5/unit5.ipynb/0
{ "file_path": "deep-rl-class/notebooks/unit5/unit5.ipynb", "repo_id": "deep-rl-class", "token_count": 3901 }
91
# Glossary [[glossary]] This is a community-created glossary. Contributions are welcome! ### Agent An agent learns to **make decisions by trial and error, with rewards and punishments from the surroundings**. ### Environment An environment is a simulated world **where an agent can learn by interacting with it**. ### Markov Property It implies that the action taken by our agent is **conditional solely on the present state and independent of the past states and actions**. ### Observations/State - **State**: Complete description of the state of the world. - **Observation**: Partial description of the state of the environment/world. ### Actions - **Discrete Actions**: Finite number of actions, such as left, right, up, and down. - **Continuous Actions**: Infinite possibility of actions; for example, in the case of self-driving cars, the driving scenario has an infinite possibility of actions occurring. ### Rewards and Discounting - **Rewards**: Fundamental factor in RL. Tells the agent whether the action taken is good/bad. - RL algorithms are focused on maximizing the **cumulative reward**. - **Reward Hypothesis**: RL problems can be formulated as a maximisation of (cumulative) return. - **Discounting** is performed because rewards obtained at the start are more likely to happen as they are more predictable than long-term rewards. ### Tasks - **Episodic**: Has a starting point and an ending point. - **Continuous**: Has a starting point but no ending point. ### Exploration v/s Exploitation Trade-Off - **Exploration**: It's all about exploring the environment by trying random actions and receiving feedback/returns/rewards from the environment. - **Exploitation**: It's about exploiting what we know about the environment to gain maximum rewards. - **Exploration-Exploitation Trade-Off**: It balances how much we want to **explore** the environment and how much we want to **exploit** what we know about the environment. ### Policy - **Policy**: It is called the agent's brain. It tells us what action to take, given the state. - **Optimal Policy**: Policy that **maximizes** the **expected return** when an agent acts according to it. It is learned through *training*. ### Policy-based Methods: - An approach to solving RL problems. - In this method, the Policy is learned directly. - Will map each state to the best corresponding action at that state. Or a probability distribution over the set of possible actions at that state. ### Value-based Methods: - Another approach to solving RL problems. - Here, instead of training a policy, we train a **value function** that maps each state to the expected value of being in that state. Contributions are welcome 🤗 If you want to improve the course, you can [open a Pull Request.](https://github.com/huggingface/deep-rl-class/pulls) This glossary was made possible thanks to: - [@lucifermorningstar1305](https://github.com/lucifermorningstar1305) - [@daspartho](https://github.com/daspartho) - [@misza222](https://github.com/misza222)
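As a small illustration of the *Rewards and Discounting* entry above (this sketch is not part of the original glossary and the numbers are made up), the discounted return sums rewards with a discount factor gamma applied once per step of delay:

```python
# Illustrative example: discounted return G_t = R_{t+1} + gamma * R_{t+2} + gamma^2 * R_{t+3} + ...
def discounted_return(rewards, gamma=0.99):
    g = 0.0
    # iterate backwards so each reward is discounted by gamma once per step of delay
    for reward in reversed(rewards):
        g = reward + gamma * g
    return g


print(discounted_return([1.0, 1.0, 1.0], gamma=0.9))  # 1 + 0.9*1 + 0.81*1 = 2.71
```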
deep-rl-class/units/en/unit1/glossary.mdx/0
{ "file_path": "deep-rl-class/units/en/unit1/glossary.mdx", "repo_id": "deep-rl-class", "token_count": 775 }
92
# Mid-way Quiz [[mid-way-quiz]] The best way to learn and [to avoid the illusion of competence](https://www.coursera.org/lecture/learning-how-to-learn/illusions-of-competence-BuFzf) **is to test yourself.** This will help you to find **where you need to reinforce your knowledge**. ### Q1: What are the two main approaches to find optimal policy? <Question choices={[ { text: "Policy-based methods", explain: "With Policy-Based methods, we train the policy directly to learn which action to take given a state.", correct: true }, { text: "Random-based methods", explain: "" }, { text: "Value-based methods", explain: "With value-based methods, we train a value function to learn which state is more valuable and use this value function to take the action that leads to it.", correct: true }, { text: "Evolution-strategies methods", explain: "" } ]} /> ### Q2: What is the Bellman Equation? <details> <summary>Solution</summary> **The Bellman equation is a recursive equation** that works like this: instead of starting for each state from the beginning and calculating the return, we can consider the value of any state as: Rt+1 + gamma * V(St+1) The immediate reward + the discounted value of the state that follows </details> ### Q3: Define each part of the Bellman Equation <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit3/bellman4-quiz.jpg" alt="Bellman equation quiz"/> <details> <summary>Solution</summary> <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit3/bellman4.jpg" alt="Bellman equation solution"/> </details> ### Q4: What is the difference between Monte Carlo and Temporal Difference learning methods? <Question choices={[ { text: "With Monte Carlo methods, we update the value function from a complete episode", explain: "", correct: true }, { text: "With Monte Carlo methods, we update the value function from a step", explain: "" }, { text: "With TD learning methods, we update the value function from a complete episode", explain: "" }, { text: "With TD learning methods, we update the value function from a step", explain: "", correct: true }, ]} /> ### Q5: Define each part of Temporal Difference learning formula <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit3/td-ex.jpg" alt="TD Learning exercise"/> <details> <summary>Solution</summary> <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit3/TD-1.jpg" alt="TD Exercise"/> </details> ### Q6: Define each part of Monte Carlo learning formula <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit3/mc-ex.jpg" alt="MC Learning exercise"/> <details> <summary>Solution</summary> <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit3/monte-carlo-approach.jpg" alt="MC Exercise"/> </details> Congrats on finishing this Quiz 🥳, if you missed some elements, take time to read again the previous sections to reinforce (😏) your knowledge.
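If the two update rules in Q5 and Q6 feel abstract, here is a small illustrative Python sketch (not part of the original quiz; the state names and values are made up) contrasting the one-step Temporal Difference update with the complete-episode Monte Carlo update for a state-value table. `alpha` is the learning rate and `gamma` the discount factor:

```python
# Illustrative sketch of the two update rules discussed above (hypothetical values).
alpha, gamma = 0.1, 0.99
V = {"s0": 0.0, "s1": 0.0}


# Temporal Difference: update V(S_t) from one step, using R_{t+1} + gamma * V(S_{t+1})
def td_update(state, reward, next_state):
    td_target = reward + gamma * V[next_state]
    V[state] = V[state] + alpha * (td_target - V[state])


# Monte Carlo: update V(S_t) from the full return G_t observed at the end of an episode
def mc_update(state, episode_return):
    V[state] = V[state] + alpha * (episode_return - V[state])


td_update("s0", reward=1.0, next_state="s1")  # one-step bootstrap
mc_update("s0", episode_return=2.71)          # complete-episode return
print(V)
```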
deep-rl-class/units/en/unit2/mid-way-quiz.mdx/0
{ "file_path": "deep-rl-class/units/en/unit2/mid-way-quiz.mdx", "repo_id": "deep-rl-class", "token_count": 1100 }
93
# Quiz [[quiz]] The best way to learn and [to avoid the illusion of competence](https://www.coursera.org/lecture/learning-how-to-learn/illusions-of-competence-BuFzf) **is to test yourself.** This will help you to find **where you need to reinforce your knowledge**. ### Q1: We mentioned Q Learning is a tabular method. What are tabular methods? <details> <summary>Solution</summary> *Tabular methods* is a type of problem in which the state and actions spaces are small enough to approximate value functions to be **represented as arrays and tables**. For instance, **Q-Learning is a tabular method** since we use a table to represent the state, and action value pairs. </details> ### Q2: Why can't we use a classical Q-Learning to solve an Atari Game? <Question choices={[ { text: "Atari environments are too fast for Q-Learning", explain: "" }, { text: "Atari environments have a big observation space. So creating an updating the Q-Table would not be efficient", explain: "", correct: true } ]} /> ### Q3: Why do we stack four frames together when we use frames as input in Deep Q-Learning? <details> <summary>Solution</summary> We stack frames together because it helps us **handle the problem of temporal limitation**: one frame is not enough to capture temporal information. For instance, in pong, our agent **will be unable to know the ball direction if it gets only one frame**. <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit4/temporal-limitation.jpg" alt="Temporal limitation"/> <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit4/temporal-limitation-2.jpg" alt="Temporal limitation"/> </details> ### Q4: What are the two phases of Deep Q-Learning? <Question choices={[ { text: "Sampling", explain: "We perform actions and store the observed experiences tuples in a replay memory.", correct: true, }, { text: "Shuffling", explain: "", }, { text: "Reranking", explain: "", }, { text: "Training", explain: "We select the small batch of tuple randomly and learn from it using a gradient descent update step.", correct: true, } ]} /> ### Q5: Why do we create a replay memory in Deep Q-Learning? <details> <summary>Solution</summary> **1. Make more efficient use of the experiences during the training** Usually, in online reinforcement learning, the agent interacts in the environment, gets experiences (state, action, reward, and next state), learns from them (updates the neural network), and discards them. This is not efficient. But, with experience replay, **we create a replay buffer that saves experience samples that we can reuse during the training**. **2. Avoid forgetting previous experiences and reduce the correlation between experiences** The problem we get if we give sequential samples of experiences to our neural network is that it **tends to forget the previous experiences as it overwrites new experiences**. For instance, if we are in the first level and then the second, which is different, our agent can forget how to behave and play in the first level. </details> ### Q6: How do we use Double Deep Q-Learning? <details> <summary>Solution</summary> When we compute the Q target, we use two networks to decouple the action selection from the target Q value generation. We: - Use our *DQN network* to **select the best action to take for the next state** (the action with the highest Q value). - Use our *Target network* to calculate **the target Q value of taking that action at the next state**. 
</details> Congrats on finishing this Quiz 🥳, if you missed some elements, take time to read again the chapter to reinforce (😏) your knowledge.
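To make the Double Deep Q-Learning answer in Q6 concrete, here is a small illustrative PyTorch-style sketch (not part of the original quiz): `online_net` and `target_net` are assumed to be two copies of the same Q-network, and the batch tensors are hypothetical.

```python
import torch


# Illustrative sketch of the Double DQN target described above (hypothetical networks and batch).
def double_dqn_target(online_net, target_net, rewards, next_states, dones, gamma=0.99):
    with torch.no_grad():
        # 1) the online (DQN) network selects the best action for the next state
        best_actions = online_net(next_states).argmax(dim=1, keepdim=True)
        # 2) the target network evaluates the Q-value of taking that action at the next state
        next_q = target_net(next_states).gather(1, best_actions).squeeze(1)
        # bootstrapped target, cut off at episode ends
        return rewards + gamma * next_q * (1.0 - dones)
```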
deep-rl-class/units/en/unit3/quiz.mdx/0
{ "file_path": "deep-rl-class/units/en/unit3/quiz.mdx", "repo_id": "deep-rl-class", "token_count": 1099 }
94
# An Introduction to Unity ML-Agents [[introduction-to-ml-agents]] <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit7/thumbnail.png" alt="thumbnail"/> One of the challenges in Reinforcement Learning is **creating environments**. Fortunately for us, we can use game engines to do so. These engines, such as [Unity](https://unity.com/), [Godot](https://godotengine.org/) or [Unreal Engine](https://www.unrealengine.com/), are programs made to create video games. They are perfectly suited for creating environments: they provide physics systems, 2D/3D rendering, and more. One of them, [Unity](https://unity.com/), created the [Unity ML-Agents Toolkit](https://github.com/Unity-Technologies/ml-agents), a plugin based on the game engine Unity that allows us **to use the Unity Game Engine as an environment builder to train agents**. In the first bonus unit, this is what we used to train Huggy to catch a stick! <figure> <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit5/example-envs.png" alt="MLAgents environments"/> <figcaption>Source: <a href="https://github.com/Unity-Technologies/ml-agents">ML-Agents documentation</a></figcaption> </figure> Unity ML-Agents Toolkit provides many exceptional pre-made environments, from playing football (soccer), learning to walk, and jumping over big walls. In this Unit, we'll learn to use ML-Agents, but **don't worry if you don't know how to use the Unity Game Engine**: you don't need to use it to train your agents. So, today, we're going to train two agents: - The first one will learn to **shoot snowballs onto a spawning target**. - The second needs to **press a button to spawn a pyramid, then navigate to the pyramid, knock it over, and move to the gold brick at the top**. To do that, it will need to explore its environment, which will be done using a technique called curiosity. <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit7/envs.png" alt="Environments" /> Then, after training, **you'll push the trained agents to the Hugging Face Hub**, and you'll be able to **visualize them playing directly on your browser without having to use the Unity Editor**. Doing this Unit will **prepare you for the next challenge: AI vs. AI where you will train agents in multi-agents environments and compete against your classmates' agents**. Sound exciting? Let's get started!
deep-rl-class/units/en/unit5/introduction.mdx/0
{ "file_path": "deep-rl-class/units/en/unit5/introduction.mdx", "repo_id": "deep-rl-class", "token_count": 696 }
95
# Designing Multi-Agent Systems For this section, you're going to watch this excellent introduction to multi-agent systems made by <a href="https://www.youtube.com/channel/UCq0imsn84ShAe9PBOFnoIrg"> Brian Douglas </a>. <Youtube id="qgb0gyrpiGk" /> In this video, Brian talked about how to design multi-agent systems. He specifically took a multi-agent system of vacuum cleaners and asked: **how can they cooperate with each other**? We have two ways to design this multi-agent reinforcement learning (MARL) system. ## Decentralized system <figure> <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit10/decentralized.png" alt="Decentralized"/> <figcaption> Source: <a href="https://www.youtube.com/watch?v=qgb0gyrpiGk"> Introduction to Multi-Agent Reinforcement Learning </a> </figcaption> </figure> In decentralized learning, **each agent is trained independently of the others**. In the example given, each vacuum learns to clean as many places as it can **without caring about what the other vacuums (agents) are doing**. The benefit is that **since no information is shared between agents, these vacuums can be designed and trained like we train single agents**. The idea here is that **our training agent will consider other agents as part of the environment dynamics**, not as agents. However, the big drawback of this technique is that it will **make the environment non-stationary**, since the underlying Markov decision process changes over time as other agents are also interacting in the environment. And this is problematic for many Reinforcement Learning algorithms **that can't reach a global optimum with a non-stationary environment**. ## Centralized approach <figure> <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit10/centralized.png" alt="Centralized"/> <figcaption> Source: <a href="https://www.youtube.com/watch?v=qgb0gyrpiGk"> Introduction to Multi-Agent Reinforcement Learning </a> </figcaption> </figure> In this architecture, **we have a high-level process that collects agents' experiences**: the experience buffer. And we'll use these experiences **to learn a common policy**. For instance, in the vacuum cleaner example, the observation will be: - The coverage map of the vacuums. - The position of all the vacuums. We use that collective experience **to train a policy that will move all three robots in the most beneficial way as a whole**. So each robot is learning from the common experience. We now have a stationary environment, since all the agents are treated as a larger entity, and each agent knows the other agents' policies (since they are the same as its own). If we recap: - In a *decentralized approach*, we **treat all agents independently without considering the existence of the other agents.** - In this case, all agents **consider other agents as part of the environment**. - **It’s a non-stationary environment condition**, so there is no guarantee of convergence. - In a *centralized approach*: - A **single policy is learned from all the agents**. - The policy takes the present state of the environment as input and outputs joint actions. - The reward is global.
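As an illustration of the centralized idea (this sketch is not part of the course and all class and function names are made up), the high-level process can be as simple as one buffer that gathers every agent's transitions and one policy trained on all of them:

```python
# Minimal illustrative sketch of a centralized MARL setup (hypothetical classes and functions).
class SharedExperienceBuffer:
    def __init__(self):
        self.transitions = []

    def add(self, agent_id, state, action, reward, next_state):
        # experiences from every vacuum/agent land in the same buffer
        self.transitions.append((agent_id, state, action, reward, next_state))


def train_shared_policy(policy, buffer, update_fn):
    # a single policy is updated from the collective experience of all agents
    for transition in buffer.transitions:
        update_fn(policy, transition)
```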
deep-rl-class/units/en/unit7/multi-agent-setting.mdx/0
{ "file_path": "deep-rl-class/units/en/unit7/multi-agent-setting.mdx", "repo_id": "deep-rl-class", "token_count": 847 }
96
# Play with Huggy [[play]] Now that you've trained Huggy and pushed it to the Hub, **you will be able to play with him ❤️** For this step it’s simple: - Open the Huggy game in your browser: https://huggingface.co/spaces/ThomasSimonini/Huggy - Click on Play with my Huggy model <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/notebooks/unit-bonus1/load-huggy.jpg" alt="load-huggy" width="100%"> 1. In step 1, choose your model repository, which is the model id (in my case ThomasSimonini/ppo-Huggy). 2. In step 2, **choose which model you want to replay**: - I have multiple ones, since we saved a model every 500000 timesteps. - But if I want the most recent one, I choose Huggy.onnx 👉 It's good to **try with different model checkpoints to see the improvement of the agent.**
deep-rl-class/units/en/unitbonus1/play.mdx/0
{ "file_path": "deep-rl-class/units/en/unitbonus1/play.mdx", "repo_id": "deep-rl-class", "token_count": 271 }
97
import argparse import sys sys.path.append(".") from base_classes import IPAdapterTextToImageBenchmark # noqa: E402 IP_ADAPTER_CKPTS = { "runwayml/stable-diffusion-v1-5": ("h94/IP-Adapter", "ip-adapter_sd15.bin"), "stabilityai/stable-diffusion-xl-base-1.0": ("h94/IP-Adapter", "ip-adapter_sdxl.bin"), } if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument( "--ckpt", type=str, default="runwayml/stable-diffusion-v1-5", choices=list(IP_ADAPTER_CKPTS.keys()), ) parser.add_argument("--batch_size", type=int, default=1) parser.add_argument("--num_inference_steps", type=int, default=50) parser.add_argument("--model_cpu_offload", action="store_true") parser.add_argument("--run_compile", action="store_true") args = parser.parse_args() args.ip_adapter_id = IP_ADAPTER_CKPTS[args.ckpt] benchmark_pipe = IPAdapterTextToImageBenchmark(args) args.ckpt = f"{args.ckpt} (IP-Adapter)" benchmark_pipe.benchmark(args)
diffusers/benchmarks/benchmark_ip_adapters.py/0
{ "file_path": "diffusers/benchmarks/benchmark_ip_adapters.py", "repo_id": "diffusers", "token_count": 434 }
98
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # Textual Inversion Textual Inversion is a training method for personalizing models by learning new text embeddings from a few example images. The file produced from training is extremely small (a few KBs) and the new embeddings can be loaded into the text encoder. [`TextualInversionLoaderMixin`] provides a function for loading Textual Inversion embeddings from Diffusers and Automatic1111 into the text encoder and loading a special token to activate the embeddings. <Tip> To learn more about how to load Textual Inversion embeddings, see the [Textual Inversion](../../using-diffusers/loading_adapters#textual-inversion) loading guide. </Tip> ## TextualInversionLoaderMixin [[autodoc]] loaders.textual_inversion.TextualInversionLoaderMixin
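As a quick illustration (a minimal sketch — the base checkpoint and the `sd-concepts-library/cat-toy` concept repository are just examples, and GPU availability is assumed), loading an embedding and activating it through its special token looks like this:

```py
import torch
from diffusers import AutoPipelineForText2Image

# a minimal sketch of loading a Textual Inversion embedding (example checkpoint and concept)
pipeline = AutoPipelineForText2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# load the learned embedding; its special token (here "<cat-toy>") activates it in prompts
pipeline.load_textual_inversion("sd-concepts-library/cat-toy")
image = pipeline("A <cat-toy> sitting on a bench in a park").images[0]
image.save("cat-toy.png")
```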
diffusers/docs/source/en/api/loaders/textual_inversion.md/0
{ "file_path": "diffusers/docs/source/en/api/loaders/textual_inversion.md", "repo_id": "diffusers", "token_count": 340 }
99