---
language: en
tags:
- text-generation
- transformer
- mistral
- fine-tuned
- uncensored
- nsfw
license: apache-2.0
datasets:
- open-source-texts
model-name: Fine-tuned Mistral 7B (Uncensored)
---

# Fine-tuned Mistral 7B (Uncensored)

## Model Description

This model is a fine-tuned version of **Mistral 7B**, a dense transformer model, trained on 40,000 textual datapoints drawn from a variety of open-source sources. The base model is known for its efficient text processing and for generating coherent, meaningful responses.

This fine-tuned version has been optimized for natural language understanding, text generation, and conversation-based interactions. Importantly, this model is **uncensored**: it does not filter or restrict content, allowing it to engage in "spicy" or NSFW conversations.

## Fine-tuning Process

- **Data**: The model was fine-tuned on a dataset of 40,000 textual datapoints sourced from various open-source repositories.
- **Training Environment**: Fine-tuning was conducted on two NVIDIA A100 GPUs.
- **Training Time**: The training process took approximately 16 hours.
- **Optimizer**: The model was trained with the AdamW optimizer at a learning rate of `5e-5`; a sketch of a comparable setup follows this list.
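
The training script itself is not published here, so the following is only a minimal sketch of a comparable setup using the Hugging Face `Trainer`. The base checkpoint ID, dataset file and format, sequence length, batch size, gradient accumulation, and epoch count are all illustrative assumptions; only the AdamW optimizer and the `5e-5` learning rate come from the notes above.

```python
# Hypothetical fine-tuning sketch -- NOT the actual training script.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "mistralai/Mistral-7B-v0.1"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # Mistral defines no pad token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Assumed layout: one raw text sample per row under a "text" column,
# in a hypothetical train.jsonl file.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=2048)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

args = TrainingArguments(
    output_dir="finetuned-mistral-7b",
    learning_rate=5e-5,             # documented above
    optim="adamw_torch",            # AdamW, documented above
    per_device_train_batch_size=4,  # assumption; size for an A100
    gradient_accumulation_steps=8,  # assumption
    num_train_epochs=1,             # assumption
    bf16=True,                      # A100s support bfloat16
    logging_steps=50,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Launched with `torchrun --nproc_per_node=2 train.py`, the `Trainer` replicates this setup across both A100s with standard data parallelism.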

## Intended Use

This fine-tuned model is intended for the following tasks:

- Text generation
- Question answering
- Dialogue systems
- Content generation for AI-powered interactions, including NSFW or adult-oriented conversations

### How to Use

You can load and use this model with the `transformers` library in Python:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("your-organization/finetuned-mistral-7b")
model = AutoModelForCausalLM.from_pretrained("your-organization/finetuned-mistral-7b")

# Encode the prompt and generate a continuation.
inputs = tokenizer("Input your text here.", return_tensors="pt")
outputs = model.generate(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    max_new_tokens=50,  # budget for generated tokens, not counting the prompt
    num_return_sequences=1,
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
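
For the dialogue-system use case listed above, a chat-formatted prompt usually works better than raw text. The sketch below continues from the snippet above (reusing `tokenizer` and `model`) and assumes the fine-tuned checkpoint defines a chat template, which this card does not confirm; the sampling parameters are illustrative, not tuned values.

```python
# Hypothetical dialogue-style usage. Assumes the checkpoint ships a chat
# template; if tokenizer.chat_template is unset, this call will fail and you
# should fall back to the plain-text prompting shown above.
messages = [
    {"role": "user", "content": "Tell me a short story."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt", return_dict=True
)
outputs = model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=True,    # sampling suits open-ended conversation
    temperature=0.8,   # illustrative value, not a tuned recommendation
    top_p=0.95,        # illustrative value
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```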