# NeuralQwen-2.5-1.5B-Spanish

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64d71ab4089bc502ceb44d29/bQMhMwK-xDvHMIbDFpxN5.png)

NeuralQwen-2.5-1.5B-Spanish is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):

* [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct)
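The actual merge configuration is not shown in this excerpt. As a purely hypothetical illustration of what LazyMergekit produces, a mergekit recipe for a merge like this typically takes the following YAML shape (the method, parameters, and values below are placeholders, not this model's real recipe):

```yaml
# Hypothetical example only -- not the actual configuration of this model.
# LazyMergekit generates a YAML recipe like this and hands it to mergekit.
models:
  - model: Qwen/Qwen2.5-1.5B-Instruct
    parameters:
      density: 0.5   # placeholder value
      weight: 0.5    # placeholder value
merge_method: ties   # placeholder method
base_model: Qwen/Qwen2.5-1.5B-Instruct
parameters:
  normalize: true
dtype: bfloat16
```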
```python
from transformers import AutoTokenizer
import transformers
import torch

model = "Kukedlc/NeuralQwen-2.5-1.5B-Spanish"
messages = [
    {"role": "system", "content": "Eres un asistente de pensamiento logico que piensa paso a paso, por cada pregunta que te hagan deberes comprobar la respuesta por 3 metodos diferentes."},
    {"role": "user", "content": "Cuantas letras 'r' tiene la palabra strawberry?"},
]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
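As a side note, `apply_chat_template` renders `messages` into Qwen's ChatML-style prompt text. A minimal sketch of that rendering, assuming the standard ChatML layout (the authoritative template ships with the tokenizer config and may differ, e.g. by injecting a default system message):

```python
# Minimal sketch of ChatML-style rendering as used by Qwen chat models.
# This only illustrates what tokenizer.apply_chat_template produces --
# the authoritative template comes from the tokenizer itself.
def render_chatml(messages, add_generation_prompt=True):
    parts = []
    for m in messages:
        # Each turn is wrapped in <|im_start|>role ... <|im_end|> markers.
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    if add_generation_prompt:
        # Open an assistant turn so the model continues from here.
        parts.append("<|im_start|>assistant\n")
    return "".join(parts)

demo = [{"role": "user", "content": "Hola"}]
print(render_chatml(demo))
```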

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64d71ab4089bc502ceb44d29/Tu9FV0dQJXz-mlriKNqdE.png)
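The demo question has an easily verifiable ground truth, which makes it a handy smoke test for the model's step-by-step checking. In plain Python:

```python
# Ground truth for the demo question: "strawberry" contains three 'r' characters.
print("strawberry".count("r"))  # 3
```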