# midstream_llama_sft_lora_output
A medical LoRA adapter for Llama-3.1-8B-Instruct, fine-tuned on a clinical QA dataset with chain-of-thought reasoning.
## Model Details
- Base model: meta-llama/Llama-3.1-8B-Instruct
- Adapter: LoRA (r=16, alpha=32)
- Training data: medical case presentations with questions and answers
- Special: chain-of-thought reasoning with `<think>` tags
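A LoRA adapter's hyperparameters live in the repo's `adapter_config.json`. As a rough sketch of what that file looks like for this adapter: only `r=16` and `lora_alpha=32` are stated on this card; every other field below is a typical default and an assumption (check the actual `adapter_config.json` in the repo):

```python
import json

# Plausible adapter_config.json contents for this adapter.
# Only r=16 and lora_alpha=32 come from the model card; the rest
# (dropout, target modules, etc.) are assumed typical values.
adapter_config = {
    "peft_type": "LORA",
    "base_model_name_or_path": "meta-llama/Llama-3.1-8B-Instruct",
    "task_type": "CAUSAL_LM",
    "r": 16,
    "lora_alpha": 32,
    "lora_dropout": 0.05,
    "target_modules": ["q_proj", "k_proj", "v_proj", "o_proj"],
}

print(json.dumps(adapter_config, indent=2))
```

Note that the effective LoRA scaling factor is `lora_alpha / r = 32 / 16 = 2.0`.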
## Usage

```python
from peft import PeftModel
from transformers import AutoTokenizer, AutoModelForCausalLM

base_model = "meta-llama/Llama-3.1-8B-Instruct"
adapter_id = "ofir408/midstream_llama_sft_lora_output"

# Load the base model first, then attach the LoRA adapter on top of it
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)
```
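Since the adapter was trained with chain-of-thought reasoning in `<think>` tags, generated text will typically contain a reasoning block before the final answer. A minimal sketch for splitting the two; the exact output format and the `split_reasoning` helper are assumptions based on the card's description, not part of the adapter's API:

```python
import re

def split_reasoning(text: str) -> tuple[str, str]:
    """Split generated text into (reasoning, answer).

    Assumes the model wraps its chain of thought in <think>...</think>
    and puts the final answer after the closing tag.
    """
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if match is None:
        # No reasoning block found; treat everything as the answer
        return "", text.strip()
    reasoning = match.group(1).strip()
    answer = text[match.end():].strip()
    return reasoning, answer

# Hypothetical model output, for illustration only
output = "<think>Chest pain with ST elevation suggests MI.</think> Acute myocardial infarction."
reasoning, answer = split_reasoning(output)
```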