<pre>
+from accelerate import Accelerator
+accelerator = Accelerator()
+dataloader, model, optimizer, scheduler = accelerator.prepare(
+    dataloader, model, optimizer, scheduler
+)

 for batch in dataloader:
     optimizer.zero_grad()
     inputs, targets = batch
-    inputs = inputs.to(device)
-    targets = targets.to(device)
     outputs = model(inputs)
     loss = loss_function(outputs, targets)
-    loss.backward()
+    accelerator.backward(loss)
     optimizer.step()
     scheduler.step()
</pre>
Everything in `accelerate` revolves around the `Accelerator` class. To use it, first create an instance,
then call `.prepare()`, passing in the PyTorch objects you would normally train with. It returns the
same objects, placed on the correct device and wrapped for distributed training if needed. From there,
train as usual, except that you call `accelerator.backward(loss)` instead of `loss.backward()`.
Also note that you no longer need to call `model.to(device)` or `inputs.to(device)`;
`accelerator.prepare()` handles device placement automatically.
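Concretely, a complete script built around this pattern can look like the sketch below. The toy dataset, linear model, optimizer, scheduler, and hyperparameters are placeholders chosen only so the snippet runs on its own; they are not part of the example above.
<pre>
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()

# Placeholder data and model, just to make the sketch self-contained.
dataset = TensorDataset(torch.randn(256, 10), torch.randn(256, 1))
dataloader = DataLoader(dataset, batch_size=32)
model = torch.nn.Linear(10, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1)
loss_function = torch.nn.MSELoss()

# prepare() returns the same objects, moved to the right device and
# wrapped for distributed execution when launched on multiple processes.
dataloader, model, optimizer, scheduler = accelerator.prepare(
    dataloader, model, optimizer, scheduler
)

for epoch in range(3):
    for inputs, targets in dataloader:  # no .to(device) needed
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = loss_function(outputs, targets)
        accelerator.backward(loss)      # replaces loss.backward()
        optimizer.step()
    scheduler.step()
</pre>
The same script works unchanged whether you run it with `python script.py` on a single device or with `accelerate launch script.py` across multiple GPUs.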
To learn more, check out the related documentation:
- <a href="https://huggingface.co/docs/accelerate/basic_tutorials/migration" target="_blank">Migrating to 🤗 Accelerate</a>
- <a href="https://huggingface.co/docs/accelerate/package_reference/accelerator" target="_blank">The Accelerator</a>