This model was converted to GGUF format from [`utter-project/EuroLLM-1.7B`](https://huggingface.co/utter-project/EuroLLM-1.7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
  Refer to the [original model card](https://huggingface.co/utter-project/EuroLLM-1.7B) for more details on the model.
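
For a quick test from Python, here is a minimal sketch using the `llama-cpp-python` bindings. The `repo_id` and `filename` below are hypothetical placeholders, not the actual artifacts of this repository; point them at whichever quantized `.gguf` file is actually published here.

```python
# Minimal sketch: load a GGUF quantization of EuroLLM-1.7B via llama-cpp-python.
# NOTE: repo_id and filename are placeholders, not the actual artifacts of
# this repository; substitute the published .gguf file you want to run.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="your-username/EuroLLM-1.7B-GGUF",  # placeholder repo id
    filename="eurollm-1.7b-q4_k_m.gguf",        # placeholder quant file
)

out = llm("The capital of Portugal is", max_tokens=16)
print(out["choices"][0]["text"])
```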

---

## Model details

This is the model card for the first pre-trained model of the EuroLLM series: EuroLLM-1.7B. You can also check the instruction-tuned version: EuroLLM-1.7B-Instruct.

- **Developed by:** Unbabel, Instituto Superior Técnico, Instituto de Telecomunicações, University of Edinburgh, Aveni, University of Paris-Saclay, University of Amsterdam, Naver Labs, Sorbonne Université.
- **Funded by:** European Union.
- **Model type:** A 1.7B-parameter multilingual transformer LLM.
- **Language(s) (NLP):** Bulgarian, Croatian, Czech, Danish, Dutch, English, Estonian, Finnish, French, German, Greek, Hungarian, Irish, Italian, Latvian, Lithuanian, Maltese, Polish, Portuguese, Romanian, Slovak, Slovenian, Spanish, Swedish, Arabic, Catalan, Chinese, Galician, Hindi, Japanese, Korean, Norwegian, Russian, Turkish, and Ukrainian.
- **License:** Apache License 2.0.

The EuroLLM project has the goal of creating a suite of LLMs capable of understanding and generating text in all European Union languages as well as some additional relevant languages. EuroLLM-1.7B is a 1.7B-parameter model trained on 4 trillion tokens divided across the considered languages and several data sources: web data, parallel data (en-xx and xx-en), and high-quality datasets. EuroLLM-1.7B-Instruct was further instruction-tuned on EuroBlocks, an instruction-tuning dataset with a focus on general instruction-following and machine translation.

### Model Description

EuroLLM uses a standard, dense Transformer architecture (a minimal sketch follows the list below):

- We use grouped query attention (GQA) with 8 key-value heads, since it has been shown to increase speed at inference time while maintaining downstream performance.
- We perform pre-layer normalization, since it improves training stability, and use RMSNorm, which is faster.
- We use the SwiGLU activation function, since it has been shown to lead to good results on downstream tasks.
- We use rotary positional embeddings (RoPE) in every layer, since these have been shown to lead to good performance while allowing extension of the context length.
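
Taken together, these choices give a block structure along the lines of the following PyTorch sketch. It is illustrative only, not the official implementation: the hidden size, head count, and FFN width are placeholder values (only the 8 key-value heads come from the list above), and RoPE is left as a comment for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RMSNorm(nn.Module):
    """Root-mean-square layer norm (no mean subtraction, no bias)."""
    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(dim))
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.weight * x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + self.eps)

class SwiGLU(nn.Module):
    """SwiGLU feed-forward: down(silu(gate(x)) * up(x))."""
    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.gate = nn.Linear(dim, hidden, bias=False)
        self.up = nn.Linear(dim, hidden, bias=False)
        self.down = nn.Linear(hidden, dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.down(F.silu(self.gate(x)) * self.up(x))

class PreLNBlock(nn.Module):
    """Pre-LN transformer block with GQA; dims are placeholders, only
    n_kv_heads=8 comes from the model card text above."""
    def __init__(self, dim: int = 2048, n_heads: int = 16,
                 n_kv_heads: int = 8, ffn_hidden: int = 5632):
        super().__init__()
        self.n_heads, self.n_kv_heads = n_heads, n_kv_heads
        self.head_dim = dim // n_heads
        self.wq = nn.Linear(dim, n_heads * self.head_dim, bias=False)
        self.wk = nn.Linear(dim, n_kv_heads * self.head_dim, bias=False)
        self.wv = nn.Linear(dim, n_kv_heads * self.head_dim, bias=False)
        self.wo = nn.Linear(n_heads * self.head_dim, dim, bias=False)
        self.attn_norm = RMSNorm(dim)  # norm *before* the sublayer (pre-LN)
        self.ffn_norm = RMSNorm(dim)
        self.ffn = SwiGLU(dim, ffn_hidden)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, _ = x.shape
        h = self.attn_norm(x)
        q = self.wq(h).view(b, t, self.n_heads, self.head_dim).transpose(1, 2)
        k = self.wk(h).view(b, t, self.n_kv_heads, self.head_dim).transpose(1, 2)
        v = self.wv(h).view(b, t, self.n_kv_heads, self.head_dim).transpose(1, 2)
        # (RoPE would rotate q and k here, in every layer, before attention.)
        # GQA: each group of query heads shares one key-value head.
        groups = self.n_heads // self.n_kv_heads
        k = k.repeat_interleave(groups, dim=1)
        v = v.repeat_interleave(groups, dim=1)
        attn = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        x = x + self.wo(attn.transpose(1, 2).reshape(b, t, -1))
        return x + self.ffn(self.ffn_norm(x))

x = torch.randn(2, 16, 2048)      # (batch, seq_len, hidden)
print(PreLNBlock()(x).shape)      # torch.Size([2, 16, 2048])
```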

For pre-training, we use 256 Nvidia H100 GPUs of the MareNostrum 5 supercomputer, training the model with a constant batch size of 3,072 sequences, which corresponds to approximately 12 million tokens, using the Adam optimizer and BF16 precision.
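
As a sanity check on the 12-million-token figure, the arithmetic below assumes a 4,096-token training context; that sequence length is not stated in this excerpt, it is simply the value consistent with "approximately 12 million".

```python
# Rough check of tokens per batch.
# The 4,096-token sequence length is an assumption (not stated in this
# excerpt); it is the value that makes the arithmetic come out right.
sequences_per_batch = 3_072
assumed_seq_len = 4_096
print(f"{sequences_per_batch * assumed_seq_len:,} tokens per batch")
# -> 12,582,912 tokens per batch, i.e. ~12M
```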

---
  ## Use with llama.cpp
  Install llama.cpp through brew (works on Mac and Linux)