
repo_clone_080424

name: mistral-7b-openorca-oasst_top1_230825v2-gguf
license: apache-2.0
tags:
- Mistral
- OpenOrca
- OpenAssistant
- text-generation
- text2text-generation
- natural-language
- multilingual
- NickyNicky
- TheBloke
type:
- 4GB
- 6GB
- llm
- chat
- multilingual
- code
- role-playing
- mistral
config:
- ctx=2048
- 4bit
- 5bit
- temp=0.7
resolutions: 
datasets: 
- Open-Orca/OpenOrca
- OpenAssistant/oasst_top1_2023-08-25
language:
- bg
- ca
- cs
- da
- de
- en
- es
- fr
- hr
- hu
- it
- nl
- pl
- pt
- ro
- ru
- sl
- sr
- sv
- uk
size: 
- 4369387104
- 5132358240
use: 
shortcomings: 
sources:
- https://arxiv.org/abs/2309.17453
- https://arxiv.org/abs/2301.13688
- https://arxiv.org/abs/2306.02707
- https://arxiv.org/abs/2310.06825
funded_by:
- a16z
- https://open-assistant.io/contributors
train_hardware: 
pipeline_tag: text-generation
examples: 
- "Below is an instruction that describes a task. Write a response that appropriately completes the request."
- "You are a story writing assistant."
- "Write a story about llamas."
- "I'm looking for an efficient Python script to output prime numbers. Can you help me out? I'm interested in a script that can handle large numbers and output them quickly. Also, it would be great if the script could take a range of numbers as input and output all the prime numbers within that range. Can you generate a script that fits these requirements? Thanks!"
- "I'm developing a REST API with Nodejs, and I'm trying to add some kind of security layer, either with tokens or something similar. Can you help me?"
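One of the example prompts above asks for an efficient Python script that outputs all primes in a given range. A minimal sketch of such a script is below; the function name `primes_in_range` is illustrative and not part of the model card, and a plain Sieve of Eratosthenes is used (for very large ranges a segmented sieve would be more memory-efficient):

```python
def primes_in_range(lo, hi):
    """Return all primes in the inclusive range [lo, hi] via a Sieve of Eratosthenes."""
    if hi < 2:
        return []
    lo = max(lo, 2)
    # sieve[n] is True while n is still a prime candidate
    sieve = [True] * (hi + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(hi ** 0.5) + 1):
        if sieve[p]:
            # Cross out multiples of p starting at p*p
            sieve[p * p : hi + 1 : p] = [False] * len(range(p * p, hi + 1, p))
    return [n for n in range(lo, hi + 1) if sieve[n]]


print(primes_in_range(10, 30))  # → [11, 13, 17, 19, 23, 29]
```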
Downloads last month: 47

GGUF
Model size: 7.24B params
Architecture: llama
Quantizations: 4-bit, 5-bit
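The 4-bit and 5-bit GGUF files can be run locally with llama-cpp-python; a minimal sketch is below, using the `ctx=2048` and `temp=0.7` values from the config section. The model file name is an assumption — substitute the quantization file you actually downloaded from this repo. (This is a usage sketch, not runnable without the model file, so no output is shown.)

```python
from llama_cpp import Llama

# Hypothetical file name; use the 4-bit or 5-bit GGUF you downloaded.
llm = Llama(
    model_path="mistral-7b-openorca-oasst_top1_230825v2.Q4_K_M.gguf",
    n_ctx=2048,  # matches ctx=2048 from the config section
)

out = llm(
    "Write a story about llamas.",  # one of the example prompts above
    temperature=0.7,  # matches temp=0.7 from the config section
    max_tokens=256,
)
print(out["choices"][0]["text"])
```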

Datasets used to train darkshapes/mistral-7b-openorca-oasst_top1_230825v2-gguf