---
language:
- fr
- en
pipeline_tag: text-generation
tags:
- chat
- llama
- llama3
- finetune
- french
- legal
- loi
library_name: transformers
inference: false
model_creator: MaziyarPanahi
quantized_by: MaziyarPanahi
base_model: meta-llama/Llama-3.2-3B
model_name: calme-3.1-llamaloi-3b
datasets:
- MaziyarPanahi/calme-legalkit-v0.2
license: llama3.2
---

<img src="./calme_3.png" alt="Calme-3 Models" width="800" style="display: block; margin-left: auto; margin-right: auto;"/>

# MaziyarPanahi/calme-3.1-llamaloi-3b

This model is an advanced iteration of `meta-llama/Llama-3.2-3B`, fine-tuned specifically to strengthen its capabilities in the French legal domain.

# ⚡ Quantized GGUF

All GGUF models are available here: [MaziyarPanahi/calme-3.1-llamaloi-3b-GGUF](https://huggingface.co/MaziyarPanahi/calme-3.1-llamaloi-3b-GGUF)

# 🏆 [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Leaderboard 2 coming soon!

# Prompt Template

This model uses the `ChatML` prompt template:

```
<|im_start|>system
{System}
<|im_end|>
<|im_start|>user
{User}
<|im_end|>
<|im_start|>assistant
{Assistant}
```
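The template above can also be built programmatically. Below is a minimal sketch; the `build_chatml_prompt` helper is purely illustrative and is not part of `transformers` or this model's repository:

```python
# Illustrative helper (not part of transformers or this repo):
# renders a list of chat messages into the ChatML format shown above.
def build_chatml_prompt(messages, add_generation_prompt=True):
    parts = []
    for message in messages:
        parts.append(
            f"<|im_start|>{message['role']}\n{message['content']}\n<|im_end|>\n"
        )
    if add_generation_prompt:
        # Leave the assistant turn open so the model completes it.
        parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "Vous êtes un assistant juridique."},
    {"role": "user", "content": "Qu'est-ce qu'un contrat ?"},
])
print(prompt)
```

In practice, if the tokenizer ships a ChatML chat template, `tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)` should produce an equivalent string.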

# How to use

```python
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM

# Use a pipeline as a high-level helper
pipe = pipeline("text-generation", model="MaziyarPanahi/calme-3.1-llamaloi-3b")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Or load the model and tokenizer directly
tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/calme-3.1-llamaloi-3b")
model = AutoModelForCausalLM.from_pretrained("MaziyarPanahi/calme-3.1-llamaloi-3b")
```

# Ethical Considerations

As with any large language model, users should be aware of potential biases and limitations. We recommend implementing appropriate safeguards and human oversight when deploying this model in production environments.