Update README.md
README.md
CHANGED
@@ -20,11 +20,11 @@ Mistral-NeMo-12B-Base is a Large Language Model (LLM) composed of 12B parameters

### Intended use

Mistral-NeMo-12B-Base is a completion model intended for use in 80+ programming languages and designed for global, multilingual applications. It is fast, trained on function calling, has a large context window, and is particularly strong in English, French, German, Spanish, Italian, Portuguese, Chinese, Japanese, Korean, Arabic, and Hindi. It is compatible with the [NVIDIA NeMo Framework](https://docs.nvidia.com/nemo-framework/index.html). For best performance on a given task, users are encouraged to customize the model using the NeMo Framework suite of customization tools, including Parameter-Efficient Fine-Tuning (P-tuning, Adapters, LoRA, and more) and Model Alignment (SFT, SteerLM, RLHF, and more) with [NeMo-Aligner](https://github.com/NVIDIA/NeMo-Aligner).

**Model Developer:** [NVIDIA](https://www.nvidia.com/en-us/) and [MistralAI](https://mistral.ai/)

-**Model Dates:** Mistral-NeMo-12B-Base was trained between
+**Model Dates:** Mistral-NeMo-12B-Base was trained between May 2024 and June 2024.

### Model Architecture:
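The Intended use paragraph above points to parameter-efficient fine-tuning (P-tuning, Adapters, LoRA) as the recommended way to adapt the base model to a task. As a rough illustration of what a LoRA run looks like, here is a minimal sketch that uses the Hugging Face `peft` library as a stand-in for the NeMo Framework tooling the card actually references; the checkpoint ID `mistralai/Mistral-Nemo-Base-2407`, the `train.txt` data file, and all hyperparameters are assumptions chosen for illustration, not values taken from the model card.

```python
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Assumed HF-format checkpoint of the base model; swap in your own path if needed.
model_name = "mistralai/Mistral-Nemo-Base-2407"

tokenizer = AutoTokenizer.from_pretrained(model_name)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # reuse EOS for padding
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

# Attach low-rank adapters to the attention projections; only these are trained.
lora_cfg = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()

# Illustrative plain-text dataset; replace "train.txt" with your task data.
train_ds = load_dataset("text", data_files={"train": "train.txt"})["train"]
train_ds = train_ds.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=1024),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="mistral-nemo-12b-lora",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=1e-4,
        bf16=True,
        logging_steps=10,
    ),
    train_dataset=train_ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("mistral-nemo-12b-lora")  # writes only the adapter weights
```

In practice the card's recommended path is the NeMo Framework itself, with NeMo-Aligner for SFT/SteerLM/RLHF-style alignment; the sketch above only shows the general shape of a LoRA adaptation of the 12B base checkpoint.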