---
license: apache-2.0
datasets:
- ecastera/wiki_fisica
- ecastera/filosofia-es
- bertin-project/alpaca-spanish
language:
- es
- en
tags:
- mistral
- spanish
- español
- lora
- int4
- multilingual
---

# ecastera-eva-westlake-7b-spanish

A Mistral-7B-based model fine-tuned for high-quality Spanish text generation.

* Exported in GGUF format with INT4 quantization.
* A refined version of my previous models, with new training data and methodology; this should produce more natural responses in Spanish.
* Base model: Mistral-7B.
* Builds on the excellent work of senseable/WestLake-7B-v2 and Eric Hartford's cognitivecomputations/WestLake-7B-v2-laser.
* Fine-tuned in Spanish on a collection of poetry, books, Wikipedia articles, philosophy texts, and the alpaca-es datasets.
* Trained with LoRA and PEFT using INT8 quantization on 2 GPUs for several days.
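A LoRA + PEFT setup of this kind can be sketched as follows. This is an illustrative configuration only: the hyperparameter values and target modules below are assumptions, not the actual training configuration used for this model.

```python
# Hypothetical sketch of LoRA fine-tuning with PEFT on an INT8-quantized base.
# All hyperparameters here are illustrative assumptions.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Load the base model in INT8, as described in the training notes above.
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)

# Attach low-rank adapters to the attention projections (assumed targets).
lora = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

After training, only the small adapter weights need to be saved and can later be merged into the base model before GGUF export.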

## Usage

Use with llama.cpp or any other framework that supports the GGUF format.
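For example, with llama.cpp's command-line tool (the GGUF file name below is an assumption; check the repository's file listing for the actual name):

```shell
# Run the model with llama.cpp; -n limits the number of generated tokens.
# The model file name is assumed — use the actual GGUF file from this repo.
./llama-cli \
  -m ecastera-eva-westlake-7b-spanish.Q4.gguf \
  -p "Escribe un poema breve sobre el mar." \
  -n 256
```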