Tags: Text Generation · Transformers · Safetensors · English · llama · text-generation-inference · 4-bit precision · gptq
TheBloke committed
Commit 59dedc3 · 1 Parent(s): 7cb725b

Update README.md

Files changed (1): README.md (+1 -1)
README.md CHANGED
@@ -96,7 +96,7 @@ tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
 model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
         model_basename=model_basename,
         use_safetensors=True,
-        trust_remote_code=True,
+        trust_remote_code=False,
         device="cuda:0",
         use_triton=use_triton,
         quantize_config=None)
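
For context, the sketch below shows how the README's loading snippet reads after this change, with `trust_remote_code=False`. The `model_name_or_path`, `model_basename`, and `use_triton` values are assumed placeholders for illustration; the real values come from the model card itself.

```python
# Minimal sketch of the README's loading code after this commit.
# model_name_or_path / model_basename below are assumed placeholders,
# not the actual values from this repository's model card.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_name_or_path = "TheBloke/example-model-GPTQ"  # assumed placeholder
model_basename = "model"                            # assumed placeholder
use_triton = False

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
        model_basename=model_basename,
        use_safetensors=True,
        trust_remote_code=False,   # changed from True in this commit
        device="cuda:0",
        use_triton=use_triton,
        quantize_config=None)

# Quick check that the quantized model loads and generates.
prompt = "Tell me about AI"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda:0")
output = model.generate(inputs=input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0]))
```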