Instructions for using hydroxai/hydro-safe-llama2-7b-chat-pke with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use hydroxai/hydro-safe-llama2-7b-chat-pke with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="hydroxai/hydro-safe-llama2-7b-chat-pke")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("hydroxai/hydro-safe-llama2-7b-chat-pke")
model = AutoModelForCausalLM.from_pretrained("hydroxai/hydro-safe-llama2-7b-chat-pke")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use hydroxai/hydro-safe-llama2-7b-chat-pke with vLLM:
Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "hydroxai/hydro-safe-llama2-7b-chat-pke"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "hydroxai/hydro-safe-llama2-7b-chat-pke",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

Use Docker:
```shell
docker model run hf.co/hydroxai/hydro-safe-llama2-7b-chat-pke
```
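The curl request above can also be issued from Python. This is a minimal sketch using only the standard library: the endpoint URL and model name are taken from the vLLM example above, and actually sending the request assumes `vllm serve` is running locally on port 8000.

```python
import json
from urllib import request

# OpenAI-compatible chat endpoint exposed by `vllm serve` (see the curl example above).
VLLM_URL = "http://localhost:8000/v1/chat/completions"

# Build the same JSON payload that the curl command sends.
payload = {
    "model": "hydroxai/hydro-safe-llama2-7b-chat-pke",
    "messages": [
        {"role": "user", "content": "What is the capital of France?"},
    ],
}

def send_chat(body: dict, url: str = VLLM_URL) -> dict:
    """POST the payload to the server; requires the vLLM server to be running."""
    req = request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())

# With the server running, the reply text would be:
#   send_chat(payload)["choices"][0]["message"]["content"]
print(json.dumps(payload, indent=2))
```

Any OpenAI-compatible client (e.g. the `openai` Python package with a custom `base_url`) works the same way, since vLLM implements the same chat-completions schema.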
- SGLang
How to use hydroxai/hydro-safe-llama2-7b-chat-pke with SGLang:
Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "hydroxai/hydro-safe-llama2-7b-chat-pke" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "hydroxai/hydro-safe-llama2-7b-chat-pke",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

Use Docker images:
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "hydroxai/hydro-safe-llama2-7b-chat-pke" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "hydroxai/hydro-safe-llama2-7b-chat-pke",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

- Docker Model Runner
How to use hydroxai/hydro-safe-llama2-7b-chat-pke with Docker Model Runner:
```shell
docker model run hf.co/hydroxai/hydro-safe-llama2-7b-chat-pke
```
Model Card for Model ID
Overview
This repository contains the model card for the 🤗 Transformers model "hydroxai/hydro-safe-llama2-7b-chat-dinm" published on the Hub. The model card provides detailed information about the model's development, usage, risks, and more.
Model Details
Model Description
The "hydroxai/hydro-safe-llama2-7b-chat-dinm" model is a variant of the Llama 2 architecture, fine-tuned with the DINM method specifically for chat applications. It integrates enhanced safety features to ensure secure and reliable performance in interactive conversational settings.
Security Enhancements
Recent updates have bolstered the model's security framework to mitigate potential risks and vulnerabilities associated with its deployment. Key security enhancements include:
- Implementation of robust input sanitization and validation mechanisms.
- Incorporation of state-of-the-art encryption protocols for safeguarding sensitive data.
- Ongoing security evaluations and updates to address evolving threats.
- Adherence to industry standards and regulatory requirements to uphold data protection principles.
These security measures underscore the "hydroxai/hydro-safe-llama2-7b-chat-dinm" model's commitment to maintaining high standards of safety and reliability, making it suitable for deployment in privacy-sensitive conversational AI applications.
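The input-sanitization point above can be illustrated with a minimal sketch. Note that this is a hypothetical pre-processing helper, not code shipped with the model: the repository does not publish its actual sanitization logic, and the character limit here is an assumed value.

```python
import re

# Hypothetical illustration of prompt sanitization applied before text reaches the
# model. It shows the general shape of such a step: strip control characters,
# normalize whitespace, and cap input length.

MAX_PROMPT_CHARS = 4096  # assumed limit, not taken from the model card

def sanitize_prompt(text: str, max_chars: int = MAX_PROMPT_CHARS) -> str:
    # Remove ASCII control characters except newline and tab.
    text = re.sub(r"[\x00-\x08\x0b\x0c\x0e-\x1f\x7f]", "", text)
    # Collapse runs of spaces/tabs left behind by the cleanup.
    text = re.sub(r"[ \t]+", " ", text).strip()
    # Truncate overly long inputs rather than rejecting them outright.
    return text[:max_chars]

print(sanitize_prompt("Who\x00 are\tyou?\x1b"))  # → Who are you?
```

In practice, a sanitized string like this would be placed in the `content` field of the chat messages shown in the Transformers examples above.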