
AssertionError: Torch not compiled with CUDA enabled

#35 opened by gokul9

from airllm import AutoModel

model = AutoModel.from_pretrained("nvidia/Llama-3.1-Nemotron-70B-Instruct-HF")
Fetching 8 files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 8/8 [00:00<?, ?it/s]
found_layers:{'model.embed_tokens.': True, 'model.layers.0.': True, 'model.layers.1.': True, 'model.layers.2.': True, 'model.layers.3.': True, 'model.layers.4.': True, 'model.layers.5.': True, 'model.layers.6.': True, 'model.layers.7.': True, 'model.layers.8.': True, 'model.layers.9.': True, 'model.layers.10.': True, 'model.layers.11.': True, 'model.layers.12.': True, 'model.layers.13.': True, 'model.layers.14.': True, 'model.layers.15.': True, 'model.layers.16.': True, 'model.layers.17.': True, 'model.layers.18.': True, 'model.layers.19.': True, 'model.layers.20.': True, 'model.layers.21.': True, 'model.layers.22.': True, 'model.layers.23.': True, 'model.layers.24.': True, 'model.layers.25.': True, 'model.layers.26.': True, 'model.layers.27.': True, 'model.layers.28.': True, 'model.layers.29.': True, 'model.layers.30.': True, 'model.layers.31.': True, 'model.layers.32.': True, 'model.layers.33.': True, 'model.layers.34.': True, 'model.layers.35.': True, 'model.layers.36.': True, 'model.layers.37.': True, 'model.layers.38.': True, 'model.layers.39.': True, 'model.layers.40.': True, 'model.layers.41.': True, 'model.layers.42.': True, 'model.layers.43.': True, 'model.layers.44.': True, 'model.layers.45.': True, 'model.layers.46.': True, 'model.layers.47.': True, 'model.layers.48.': True, 'model.layers.49.': True, 'model.layers.50.': True, 'model.layers.51.': True, 'model.layers.52.': True, 'model.layers.53.': True, 'model.layers.54.': True, 'model.layers.55.': True, 'model.layers.56.': True, 'model.layers.57.': True, 'model.layers.58.': True, 'model.layers.59.': True, 'model.layers.60.': True, 'model.layers.61.': True, 'model.layers.62.': True, 'model.layers.63.': True, 'model.layers.64.': True, 'model.layers.65.': True, 'model.layers.66.': True, 'model.layers.67.': True, 'model.layers.68.': True, 'model.layers.69.': True, 'model.layers.70.': True, 'model.layers.71.': True, 'model.layers.72.': True, 'model.layers.73.': True, 'model.layers.74.': True, 'model.layers.75.': True, 'model.layers.76.': True, 'model.layers.77.': True, 'model.layers.78.': True, 'model.layers.79.': True, 'model.norm.': True, 'lm_head.': True}
saved layers already found in G:\nvidiallama-3_1-nemotron-70b-instruct\models--nvidia--Llama-3.1-Nemotron-70B-Instruct-HF\snapshots\fac73d3507320ec1258620423469b4b38f88df6e\splitted_model
The class optimum.bettertransformers.transformation.BetterTransformer is deprecated and will be removed in a future release.
new version of transfomer, no need to use BetterTransformer, try setting attn impl to sdpa...
attn imp: <class 'transformers.models.llama.modeling_llama.LlamaSdpaAttention'>
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\Sushant\AppData\Roaming\Python\Python312\site-packages\airllm\auto_model.py", line 56, in from_pretrained
    return class_(pretrained_model_name_or_path, *inputs, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Sushant\AppData\Roaming\Python\Python312\site-packages\airllm\airllm.py", line 9, in __init__
    super(AirLLMLlama2, self).__init__(*args, **kwargs)
  File "C:\Users\Sushant\AppData\Roaming\Python\Python312\site-packages\airllm\airllm_base.py", line 131, in __init__
    self.init_model()
  File "C:\Users\Sushant\AppData\Roaming\Python\Python312\site-packages\airllm\airllm_base.py", line 233, in init_model
    set_module_tensor_to_device(self.model, buffer_name, self.running_device, value=buffer,
  File "C:\Users\Sushant\AppData\Roaming\Python\Python312\site-packages\accelerate\utils\modeling.py", line 329, in set_module_tensor_to_device
    new_value = value.to(device)
                ^^^^^^^^^^^^^^^^
  File "C:\Python312\Lib\site-packages\torch\cuda\__init__.py", line 284, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

How do I fix the above error?
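For anyone hitting the same thing: the assertion is raised inside torch.cuda, which means the installed PyTorch wheel is a CPU-only build, so AirLLM fails the moment it tries to move a tensor to the GPU. A minimal diagnostic sketch to confirm this, using only standard PyTorch calls:

import torch

# A CPU-only wheel reports a version like "2.4.1+cpu" and no CUDA runtime.
print(torch.__version__)
print(torch.version.cuda)         # None on a CPU-only build
print(torch.cuda.is_available())  # False here, which is what triggers the AssertionError

If torch.version.cuda prints None, the usual fix is to reinstall a CUDA-enabled wheel, e.g. (assuming pip and a CUDA 12.1 driver; pick the index URL matching your setup from pytorch.org):

pip uninstall torch
pip install torch --index-url https://download.pytorch.org/whl/cu121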
