---
language: en
tags:
- text-generation
- transformer
- mistral
- fine-tuned
- uncensored
- nsfw
license: apache-2.0
datasets:
- open-source-texts
model-name: Fine-tuned Mistral 7B (Uncensored)
---

# Fine-tuned Mistral 7B (Uncensored)

## Model Description

This model is a fine-tuned version of **Mistral 7B**, a dense transformer model, trained on 40,000 textual datapoints drawn from a variety of open-source sources. The base model, Mistral 7B, is known for its efficiency in processing text and generating coherent, meaningful responses.

This fine-tuned version has been optimized for natural language understanding, generation, and conversation-based interactions. Importantly, this model is **uncensored**: it does not filter or restrict content, allowing it to engage in "spicy" or NSFW conversations.

## Fine-tuning Process

- **Data**: The model was fine-tuned on a dataset of 40,000 textual datapoints sourced from various open-source repositories.
- **Training environment**: Fine-tuning was conducted on two NVIDIA A100 GPUs.
- **Training time**: The training process took approximately 16 hours.
- **Optimizer**: The model was trained with the AdamW optimizer at a learning rate of `5e-5`.

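The hyperparameters above can be sketched with the standard `transformers` `TrainingArguments` API. This is a minimal illustration only, not the actual training script: the batch size, epoch count, and output directory are placeholders not stated in this card.

```python
from transformers import TrainingArguments

# Hypothetical configuration mirroring the hyperparameters listed above.
# Batch size and epoch count are assumptions; only the optimizer and
# learning rate come from this card.
training_args = TrainingArguments(
    output_dir="finetuned-mistral-7b",   # placeholder path
    learning_rate=5e-5,                  # learning rate stated above
    optim="adamw_torch",                 # AdamW optimizer, as stated above
    per_device_train_batch_size=4,       # assumed; not stated in the card
    num_train_epochs=3,                  # assumed; not stated in the card
    bf16=True,                           # A100 GPUs support bfloat16
)
```

These arguments would then be passed to a `Trainer` along with the model, tokenizer, and dataset.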
## Intended Use

This fine-tuned model is intended for the following tasks:
- Text generation
- Question answering
- Dialogue systems
- Content generation for AI-powered interactions, including NSFW or adult-oriented conversations

### How to Use

You can load and use this model with the `transformers` library in Python:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and its tokenizer from the Hub
tokenizer = AutoTokenizer.from_pretrained("your-organization/finetuned-mistral-7b")
model = AutoModelForCausalLM.from_pretrained("your-organization/finetuned-mistral-7b")

# Tokenize a prompt and generate a completion
inputs = tokenizer("Input your text here.", return_tensors="pt")
outputs = model.generate(inputs["input_ids"], max_length=50, num_return_sequences=1)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```