MethosPi committed · Commit 2fbc0bc · verified · 1 Parent(s): 92438cc

Update README.md

Files changed (1)
  1. README.md +8 -8
README.md CHANGED
@@ -19,13 +19,13 @@ ItalIA is a LLM trained for the Italian language and based on Llama3-8b.
 
 ### Model Description
 
-ItalIA is a state-of-the-art language model specifically trained for the Italian language, leveraging the latest advancements in the LLM frameworks llama3. This model aims to provide highly accurate and context-aware natural language understanding and generation, making it ideal for a wide range of applications from automated customer support to content creation.
+ItalIA is a state-of-the-art language model specifically trained for the Italian language using unsloth, leveraging the latest advancements in the LLM frameworks llama3. This model aims to provide highly accurate and context-aware natural language understanding and generation, making it ideal for a wide range of applications from automated customer support to content creation.
 
 - **Developed by:** Davide Pizzo
-- **Model type:** [Transformer-based Large Language Model]
-- **Language(s) (NLP):** [Italian]
-- **License:** [Other]
-- **Finetuned from model [optional]:** [llama3-8b]
+- **Model type:** Transformer-based Large Language Model
+- **Language(s) (NLP):** Italian
+- **License:** Other
+- **Finetuned from model [optional]:** llama3-8b
 
 ### Model Sources [optional]
 
@@ -76,9 +76,9 @@ Users should be aware of the potential for biased outputs based on the training
 
 Use the code below to get started with the model.
 
-[from transformers import AutoModelForCausalLM, AutoTokenizer
+from transformers import AutoModelForCausalLM, AutoTokenizer
 
-model_name = "your-model-name-on-huggingface"
+model_name = "MethosPi/llama3-8b-italIA-unsloth-merged"
 tokenizer = AutoTokenizer.from_pretrained(model_name)
 model = AutoModelForCausalLM.from_pretrained(model_name)
 
@@ -86,7 +86,7 @@ text = "Inserisci qui il tuo testo in italiano."
 input_ids = tokenizer.encode(text, return_tensors="pt")
 output = model.generate(input_ids)
 
-print(tokenizer.decode(output[0], skip_special_tokens=True))]
+print(tokenizer.decode(output[0], skip_special_tokens=True))
 
 ## Training Details
 
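For convenience, the updated quickstart assembles into the runnable snippet below. The dtype/device handling and the generation arguments (`max_new_tokens`, `do_sample`) are illustrative additions of mine, not part of the commit; treat this as a minimal sketch that assumes the `MethosPi/llama3-8b-italIA-unsloth-merged` checkpoint loads as a standard `transformers` causal LM and that `accelerate` is installed for `device_map="auto"`.

```python
# Minimal usage sketch of the updated README snippet.
# Assumptions not in the commit: torch dtype/device placement and
# the generation arguments below.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "MethosPi/llama3-8b-italIA-unsloth-merged"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,  # halve memory for the 8B weights
    device_map="auto",          # place layers on available GPU(s); needs accelerate
)

text = "Inserisci qui il tuo testo in italiano."  # "Insert your Italian text here."
inputs = tokenizer(text, return_tensors="pt").to(model.device)

# Generate a continuation; capping new tokens keeps the call bounded.
output = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```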