
Model Card for 1DS/adapter-title-brand-mapping-Llama-2-7b-chat-hf-v1

Model Details

Model Description

  • Developed by: [More Information Needed]
  • Funded by [optional]: [More Information Needed]
  • Shared by [optional]: [More Information Needed]
  • Model type: Adapter (PEFT) for a causal language model, trained to extract brand names from product titles
  • Language(s) (NLP): [More Information Needed]
  • License: [More Information Needed]
  • Finetuned from model [optional]: Llama-2-7b-chat-hf

Inference Function
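
The helper below wraps a product title in the adapter's prompt format, generates with near-greedy sampling (temperature 0.01), and returns only the newly generated text. It assumes tokenizer and model objects are already loaded on cuda:0; a minimal loading sketch follows the function.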

import re

def generate(title):
  # Build an instruction-style prompt with <TITL> markers around the product title
  prompt = f"[INST]Identify the brand from the given product title.[/INST]\n\n<TITL> {title} </TITL>\n\n"
  print("Prompt:")
  print(prompt)
  encoding = tokenizer(prompt, return_tensors="pt").to("cuda:0")
  output = model.generate(input_ids=encoding.input_ids,
                          attention_mask=encoding.attention_mask,
                          max_new_tokens=200,
                          do_sample=True,
                          temperature=0.01,
                          eos_token_id=tokenizer.eos_token_id,
                          top_k=0)
  print()
  # Subtract the length of input_ids from output to get only the model's response
  output_text = tokenizer.decode(output[0, len(encoding.input_ids[0]):], skip_special_tokens=False)
  output_text = re.sub('\n+', '\n', output_text)  # remove excessive newline characters
  print("Generated Assistant Response:")
  print(output_text)
  return output_text
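
A minimal loading sketch, not part of the original card, showing one way to obtain the tokenizer and model objects used above: the adapter is applied to the base chat model with the peft library. The base repository ID and the example title are assumptions inferred from the adapter's name and prompt format.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Assumed base model ID, inferred from the adapter name
base_model_id = "meta-llama/Llama-2-7b-chat-hf"
adapter_id = "1DS/adapter-title-brand-mapping-Llama-2-7b-chat-hf-v1"

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(base_model_id,
                                             torch_dtype=torch.float16,
                                             device_map="cuda:0")
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()

# Example call with a hypothetical product title
print(generate("Sony WH-1000XM4 Wireless Noise Cancelling Headphones"))

Note that do_sample=True with temperature 0.01 is effectively greedy decoding; setting do_sample=False would produce the same behavior deterministically.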

Model tree for 1DS/adapter-title-brand-mapping-Llama-2-7b-chat-hf-v1: this repository is an adapter for the base Llama-2-7b-chat-hf model.