calpt committed
Commit b72c066
1 Parent(s): 12959bc

Update README.md

Files changed (1)
  1. README.md +3 -2
README.md CHANGED
@@ -46,7 +46,7 @@ model = AutoModelForCausalLM.from_pretrained(
 )
 adapters.init(model)
 
-adapter_name = model.load_adapter(adapter_id, set_active=True)
+adapter_name = model.load_adapter(adapter_id, source="hf", set_active=True)
 
 tokenizer = AutoTokenizer.from_pretrained(model_id)
 ```
@@ -76,7 +76,8 @@ def prompt_model(model, text: str):
     output_tokens = model.generate(**batch, stopping_criteria=[EosListStoppingCriteria()])
 
     # skip prompt when decoding
-    return tokenizer.decode(output_tokens[0, batch["input_ids"].shape[1]:], skip_special_tokens=True)
+    decoded = tokenizer.decode(output_tokens[0, batch["input_ids"].shape[1]:], skip_special_tokens=True)
+    return decoded[:-10] if decoded.endswith("### Human:") else decoded
 ```
 
 Now, to prompt the model:
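
For reference, here is a minimal sketch (not part of the commit itself) of how the two changed lines sit in the README's loading and prompting code. The `model_id` and `adapter_id` values are placeholders, the prompt batching is simplified, and `EosListStoppingCriteria` is defined elsewhere in the README; only the `source="hf"` argument and the trailing-marker trimming are what this commit adds.

```python
# Sketch only: the README's flow after this commit, with placeholders where the
# diff does not show the surrounding definitions.
import adapters
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "..."    # base checkpoint id, defined earlier in the README (placeholder here)
adapter_id = "..."  # this adapter's Hub id, defined earlier in the README (placeholder here)

model = AutoModelForCausalLM.from_pretrained(model_id)
adapters.init(model)

# New in this commit: source="hf" loads the adapter weights from the Hugging Face Hub.
adapter_name = model.load_adapter(adapter_id, source="hf", set_active=True)

tokenizer = AutoTokenizer.from_pretrained(model_id)


def prompt_model(model, text: str):
    # Prompt formatting is abbreviated; the README's original function builds
    # `batch` from `text` before generating (plain tokenization assumed here).
    batch = tokenizer(text, return_tensors="pt").to(model.device)
    output_tokens = model.generate(**batch, stopping_criteria=[EosListStoppingCriteria()])

    # skip prompt when decoding
    decoded = tokenizer.decode(output_tokens[0, batch["input_ids"].shape[1]:], skip_special_tokens=True)
    # New in this commit: "### Human:" is 10 characters, so [:-10] strips the
    # stop marker when the stopping criterion has left it at the end of the output.
    return decoded[:-10] if decoded.endswith("### Human:") else decoded
```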