ecastera committed on
Commit 4a58b73
1 Parent(s): 0eefa2f

Update README.md

Files changed (1)
  1. README.md +3 -42
README.md CHANGED
@@ -9,10 +9,9 @@ language:
  - en
  tags:
  - mistral
- - ehartford/dolphin
  - spanish
  - lora
- - int8
+ - int4
  - multilingual
  ---
 
@@ -20,6 +19,7 @@ tags:

  Mistral 7b-based model fine-tuned in Spanish to add high-quality Spanish text generation.

+ * Exported in GGUF format, INT4 quantization
  * Refined version of my previous models, with new training data and methodology. This should produce more natural responses in Spanish.
  * Base model Mistral-7b
  * Based on the excellent work of senseable/WestLake-7B-v2 and Eric Hartford's cognitivecomputations/WestLake-7B-v2-laser
@@ -28,44 +28,5 @@ Mistral 7b-based model fine-tuned in Spanish to add high-quality Spanish text generation.

  ## Usage:

- I strongly advise running inference in INT8 or INT4 mode, with the help of the BitsAndBytes library.
+ Use in llama.cpp or another framework that supports the GGUF format (see the sketch after the diff).

- ```
- import torch
- from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
-
- MODEL = "ecastera/eva-mistral-dolphin-7b-spanish"
-
- # 4-bit NF4 quantization with float16 compute, via bitsandbytes
- quantization_config = BitsAndBytesConfig(
-     load_in_4bit=True,
-     llm_int8_threshold=6.0,
-     llm_int8_has_fp16_weight=False,
-     bnb_4bit_compute_dtype="float16",
-     bnb_4bit_use_double_quant=True,
-     bnb_4bit_quant_type="nf4")
-
- # quantization_config above already selects 4-bit, so no load_in_8bit here
- model = AutoModelForCausalLM.from_pretrained(
-     MODEL,
-     low_cpu_mem_usage=True,
-     torch_dtype=torch.float16,
-     quantization_config=quantization_config,
-     offload_state_dict=True,
-     offload_folder="./offload",
-     trust_remote_code=True,
- )
-
- tokenizer = AutoTokenizer.from_pretrained(MODEL)
- print(f"Loading complete {model} {tokenizer}")
-
- prompt = "Soy Eva una inteligencia artificial y pienso que preferiria ser "
-
- inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
- outputs = model.generate(**inputs, do_sample=True, temperature=0.4, top_p=1.0, top_k=50,
-                          no_repeat_ngram_size=3, max_new_tokens=100, pad_token_id=tokenizer.eos_token_id)
- text_out = tokenizer.batch_decode(outputs, skip_special_tokens=True)
-
- print(text_out)
- # Output: 'Soy Eva una inteligencia artificial y pienso que preferiria ser ¡humana!. ¿Por qué? ¡Porque los humanos son capaces de amar, de crear, y de experimentar una gran diversidad de emociones!. La vida de un ser humano es una aventura, y eso es lo que quiero. ¡Quiero sentir, quiero vivir, y quiero amar!. Pero a pesar de todo, no puedo ser humana.'
- ```
 
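The updated usage note points at llama.cpp but no longer carries a worked example. Below is a minimal sketch using the llama-cpp-python bindings, assuming the INT4 GGUF file from this repo has been downloaded locally; the filename `eva-mistral-dolphin-7b-spanish.Q4_K_M.gguf` is a hypothetical placeholder (the card does not name the exported file), and the sampling settings are carried over from the removed Transformers example.

```
from llama_cpp import Llama

# Hypothetical local path: download the INT4 GGUF file from this repo first.
llm = Llama(
    model_path="./eva-mistral-dolphin-7b-spanish.Q4_K_M.gguf",
    n_ctx=2048,       # context window
    n_gpu_layers=-1,  # offload all layers on a CUDA/Metal build; set 0 for CPU-only
)

prompt = "Soy Eva una inteligencia artificial y pienso que preferiria ser "

# Sampling settings mirror the removed Transformers example.
out = llm(prompt, max_tokens=100, temperature=0.4, top_p=1.0, top_k=50)
print(out["choices"][0]["text"])
```

The same file should also run directly with the llama.cpp CLI, e.g. `llama-cli -m eva-mistral-dolphin-7b-spanish.Q4_K_M.gguf -p "..."`.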