Ramikan-BR committed
Commit f7ebde2
Parent(s): 472122d

Update README.md

README.md CHANGED
@@ -21,3 +21,33 @@ base_model: unsloth/tinyllama-chat-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)

---

## Inference tests after refinement

**Test 1: Continuing the Fibonacci sequence**

```python
alpaca_prompt = "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n### Input:\n{}\n\n### Output:"

from unsloth import FastLanguageModel
FastLanguageModel.for_inference(model)  # Enable native 2x faster inference

# The instruction is filled into the {} placeholder of the prompt template
inputs = tokenizer([alpaca_prompt.format("Continue the fibonnaci sequence.")], return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64, use_cache=True)
print(tokenizer.batch_decode(outputs))
```
Output:

```
['<s> Below is an instruction that describes a task. Write a response that appropriately completes the request.\n### Input:\nContinue the fibonnaci sequence.\n\n### Output:\n1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89</s>']
```
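The decoded string still contains the prompt and the `</s>` end-of-sequence token. A small post-processing sketch (plain Python, no GPU needed) that keeps only the generated answer, assuming the `### Output:` marker from the prompt template above:

```python
# Full decoded string as returned by tokenizer.batch_decode above
decoded = (
    "<s> Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n"
    "### Input:\nContinue the fibonnaci sequence.\n\n"
    "### Output:\n1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89</s>"
)

# Keep only the text after the output marker and drop the EOS token
answer = decoded.split("### Output:")[-1].replace("</s>", "").strip()
print(answer)  # 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89
```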
**Test 2: Famous tall tower in Paris**

```python
alpaca_prompt = "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n### Input:\n{}\n\n### Output:"

from unsloth import FastLanguageModel
FastLanguageModel.for_inference(model)  # Enable native 2x faster inference

inputs = tokenizer([alpaca_prompt.format("What is a famous tall tower in Paris?")], return_tensors="pt").to("cuda")

# Stream tokens to stdout as they are generated
from transformers import TextStreamer
text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=text_streamer, max_new_tokens=64)
```
Output:

```
Eiffel Tower, located in Paris, is a famous tall tower that stands at 320 meters (98 feet) tall. It was built in 189002 as a symbol of the city's modernization and progress, and it remains an iconic landmark to this
```
For the first time, the AI answered both questions correctly, even though the Eiffel Tower response contains an error in the year and is cut off before finishing. I will continue refining the AI with the data-oss_instruct-decontaminated_python.jsonl dataset. This version of the dataset contains only Python code, and since I can only train on the free Colab GPU, I had to split the dataset into 10 parts and refine the AI for two epochs on each part (so far we are on the fifth part of the dataset). Thanks to the Unsloth team: without you, I wouldn't have achieved any relevant training on an AI at all, since I don't have a GPU!
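The 10-part split described above can be sketched as follows. This is a minimal illustration with dummy records standing in for the real JSONL file; `split_jsonl` is a hypothetical helper written for this sketch, not part of any library, and the real part sizes depend on the actual dataset length:

```python
def split_jsonl(records, n_parts):
    """Split a list of records into n_parts chunks of near-equal size."""
    base, extra = divmod(len(records), n_parts)
    parts, start = [], 0
    for i in range(n_parts):
        # The first `extra` chunks each take one additional record
        end = start + base + (1 if i < extra else 0)
        parts.append(records[start:end])
        start = end
    return parts

# Dummy records standing in for data-oss_instruct-decontaminated_python.jsonl
records = [{"instruction": f"task {i}", "output": ""} for i in range(25)]
parts = split_jsonl(records, 10)
print([len(p) for p in parts])  # [3, 3, 3, 3, 3, 2, 2, 2, 2, 2]
```

Each part can then be written back out and fine-tuned for two epochs in its own Colab session, which keeps every run inside the free GPU quota.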