|
--- |
|
license: apache-2.0 |
|
library_name: peft |
|
tags: |
|
- generated_from_trainer |
|
base_model: mistralai/Mistral-7B-v0.3 |
|
datasets: |
|
- BeIR/nq |
|
- embedding-data/PAQ_pairs |
|
- sentence-transformers/msmarco-hard-negatives |
|
- leminda-ai/s2orc_small |
|
- lucadiliello/triviaqa |
|
- pietrolesci/agnews |
|
- mteb/amazon_reviews_multi |
|
- multiIR/ccnews2016-8multi |
|
- eli5 |
|
- gooaq |
|
- quora |
|
- lucadiliello/searchqa |
|
- flax-sentence-embeddings/stackexchange_math_jsonl |
|
- yahoo_answers_qa |
|
- EdinburghNLP/xsum |
|
- wikihow |
|
- rajpurkar/squad_v2 |
|
- nixiesearch/amazon-esci |
|
- osunlp/Mind2Web |
|
- derek-thomas/dataset-creator-askreddit |
|
language: |
|
- en |
|
--- |
|
|
|
# nixie-querygen-v3 |
|
|
|
|
|
A [Mistral-7B-v0.3](https://huggingface.co/mistralai/Mistral-7B-v0.3) model fine-tuned for query generation. Main use cases:
|
|
|
* synthetic query generation for downstream embedding fine-tuning tasks, when you have only documents and no queries/labels. This can be done with the [nixietune](https://github.com/nixiesearch/nixietune) toolkit; see the `nixietune.qgen.generate` recipe.
|
* synthetic dataset expansion for further embedding training, when you DO have query-document pairs, but only a few. You can fine-tune `nixie-querygen-v3` on the existing pairs and then expand your document corpus with synthetic queries (which are still grounded in your few real ones). See the `nixietune.querygen` recipe.
|
|
|
The approach is inspired by the [docTTTTTquery](https://github.com/castorini/docTTTTTquery) model. See the original paper: [Rodrigo Nogueira and Jimmy Lin. From doc2query to docTTTTTquery.](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)
|
|
|
## Flavours |
|
|
|
This repo has multiple versions of the model: |
|
|
|
* `model-*.safetensors`: PyTorch FP16 checkpoint, suitable for downstream fine-tuning

* `*-f16.gguf`: non-quantized GGUF F16 [llama-cpp](https://github.com/ggerganov/llama.cpp) checkpoint, for CPU inference

* `*-q4.gguf`: GGUF Q4_0 quantized [llama-cpp](https://github.com/ggerganov/llama.cpp) checkpoint, for fast (and less precise) CPU inference.
|
|
|
## Prompt formats |
|
|
|
The model accepts the following Alpaca-style prompt format:
|
|
|
``` |
|
### Instruction: |
|
Write a short query which can be used to search a given document: |
|
|
|
### Input: |
|
{document text} |
|
|
|
### Response: |
|
[short|medium|long]? [question|regular]? query: |
|
``` |
|
|
|
Some notes on format: |
|
|
|
* the `[short|medium|long]` and `[question|regular]` fragments hint at the desired query length and style; both are optional and can be skipped.
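
As an illustration, here is a minimal Python helper for assembling this prompt; the `build_prompt` name and its `length`/`style` arguments are hypothetical, not part of the model or of nixietune:

```python
def build_prompt(document: str, length: str | None = None, style: str | None = None) -> str:
    """Assemble the Alpaca-style prompt.

    length: optional "short" | "medium" | "long"
    style:  optional "question" | "regular"
    """
    # the optional fragments prefix the word "query" in the response header
    response_prefix = " ".join(part for part in (length, style, "query:") if part)
    return (
        "### Instruction:\n"
        "Write a short query which can be used to search a given document:\n\n"
        f"### Input:\n{document}\n\n"
        f"### Response:\n{response_prefix}"
    )

# build_prompt("some document", length="short") ends with "### Response:\nshort query:"
```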
|
|
|
## Inference example |
|
|
|
### llama-cpp
|
|
|
With [llama-cpp](https://github.com/ggerganov/llama.cpp) and the Q4 model, inference can be done on a CPU:
|
|
|
```bash |
|
$ cat input.txt |
|
### Instruction: |
|
Write a short query which can be used to search a given document: |
|
|
|
### Input: |
|
Google’s greenhouse gas emissions have surged 48 percent in the past five years due to the expansion of its data centers that underpin artificial intelligence systems, leaving its commitment to get to “net zero” by 2030 in doubt. The Silicon Valley company’s pollution amounted to 14.3 million tonnes of carbon equivalent in 2023, a 48 percent increase from its 2019 baseline and a 13 percent rise since last year, Google said in its annual environmental report on Tuesday. Google said the jump highlighted “the challenge of reducing emissions” at the same time as it invests in the build-out of large language models and their associated applications and infrastructure, admitting that “the future environmental impact of AI” was “complex and difficult to predict.” |
|
|
|
### Response: |
|
short query: |
|
|
|
$ ./llama-cli -m ~/models/nixie-querygen-v3/nixie-querygen-v3-q4.gguf -f input.txt -s 1 |
|
|
|
system_info: n_threads = 16 / 32 | AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 0 | |
|
sampling: |
|
repeat_last_n = 64, repeat_penalty = 1.000, frequency_penalty = 0.000, presence_penalty = 0.000 |
|
top_k = 40, tfs_z = 1.000, top_p = 0.950, min_p = 0.050, typical_p = 1.000, temp = 0.800 |
|
mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000 |
|
sampling order: |
|
CFG -> Penalties -> top_k -> tfs_z -> typical_p -> top_p -> min_p -> temperature |
|
generate: n_ctx = 32768, n_batch = 2048, n_predict = 128, n_keep = 1 |
|
|
|
|
|
### Instruction: |
|
Write a short query which can be used to search a given document: |
|
|
|
### Input: |
|
Google’s greenhouse gas emissions have surged 48 percent in the past five years due to the expansion of its data centers that underpin artificial intelligence systems, leaving its commitment to get to “net zero” by 2030 in doubt. |
|
The Silicon Valley company’s pollution amounted to 14.3 million tonnes of carbon equivalent in 2023, a 48 percent increase from its 2019 baseline and a 13 percent rise since last year, Google said in its annual environmental report on Tuesday. |
|
Google said the jump highlighted “the challenge of reducing emissions” at the same time as it invests in the build-out of large language models and their associated applications and infrastructure, admitting that “the future environmental impact of AI” was “complex and difficult to predict.” |
|
|
|
### Response: |
|
short query: google carbon footprint [end of text] |
|
|
|
llama_print_timings: load time = 4497.53 ms |
|
llama_print_timings: sample time = 0.21 ms / 5 runs ( 0.04 ms per token, 23584.91 tokens per second) |
|
llama_print_timings: prompt eval time = 4006.12 ms / 209 tokens ( 19.17 ms per token, 52.17 tokens per second) |
|
llama_print_timings: eval time = 829.37 ms / 4 runs ( 207.34 ms per token, 4.82 tokens per second) |
|
llama_print_timings: total time = 4839.50 ms / 213 tokens
|
``` |
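
If you prefer Python over the CLI, the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) bindings can run the same GGUF checkpoint; a minimal sketch, assuming the Q4 file sits in the current directory:

```python
from llama_cpp import Llama

# load the Q4_0 quantized GGUF checkpoint for CPU inference
llm = Llama(model_path="nixie-querygen-v3-q4.gguf", n_ctx=4096)

prompt = (
    "### Instruction:\n"
    "Write a short query which can be used to search a given document:\n\n"
    "### Input:\n"
    "{document text}\n\n"
    "### Response:\n"
    "short query:"
)

# sample a handful of tokens; the model emits the query and then an end-of-text token
out = llm(prompt, max_tokens=32, temperature=0.8)
print(out["choices"][0]["text"].strip())
```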
|
|
|
### Transformers |
|
|
|
```python |
|
from transformers import pipeline
import torch

# load the checkpoint as a causal text-generation pipeline in bfloat16
generator = pipeline(task="text-generation", model='<path>', torch_dtype=torch.bfloat16, device_map="auto")

# Alpaca-style prompt; replace <doc> with the document text
prompt = "### Instruction:\nWrite a short query which can be used to search a given document:\n\n### Input:\n<doc>\n\n### Response:\nshort query:"

result = generator(prompt, return_full_text=True, max_new_tokens=32, num_return_sequences=1)
|
``` |
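
With `return_full_text=True` the pipeline echoes the prompt back together with the generated continuation; a small sketch for pulling out just the query from the result above:

```python
# the pipeline returns one dict per generated sequence
generated = result[0]["generated_text"]

# keep only the text produced after the "short query:" response header
query = generated.split("short query:", 1)[1].strip()
print(query)
```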
|
|
|
## Training config |
|
|
|
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) |
|
<details><summary>See axolotl config</summary> |
|
|
|
axolotl version: `0.4.1` |
|
```yaml |
|
base_model: mistralai/Mistral-7B-v0.3 |
|
model_type: MistralForCausalLM |
|
tokenizer_type: LlamaTokenizer |
|
|
|
load_in_8bit: false |
|
load_in_4bit: true |
|
strict: false |
|
val_set_size: 0.001 |
|
datasets: |
|
- path: json |
|
split: train |
|
type: alpaca |
|
data_files: |
|
- /home/shutty/data/querygen/alpaca.json |
|
|
|
dataset_prepared_path: last_run_prepared |
|
output_dir: ./outputs/qlora-out |
|
|
|
adapter: qlora |
|
lora_model_dir: |
|
|
|
sequence_len: 512 |
|
sample_packing: false |
|
pad_to_sequence_len: true |
|
|
|
lora_r: 32 |
|
lora_alpha: 16 |
|
lora_dropout: 0.05 |
|
lora_target_modules: |
|
lora_target_linear: true |
|
lora_fan_in_fan_out: |
|
|
|
wandb_project: |
|
wandb_entity: |
|
wandb_watch: |
|
wandb_name: |
|
wandb_log_model: |
|
|
|
gradient_accumulation_steps: 1 |
|
micro_batch_size: 40 |
|
num_epochs: 1 |
|
optimizer: adamw_torch |
|
lr_scheduler: cosine |
|
learning_rate: 0.00001 |
|
|
|
train_on_inputs: false |
|
group_by_length: false |
|
bf16: auto |
|
fp16: |
|
tf32: false |
|
|
|
gradient_checkpointing: true |
|
gradient_checkpointing_kwargs: |
|
use_reentrant: true |
|
early_stopping_patience: |
|
resume_from_checkpoint: |
|
local_rank: |
|
xformers_attention: |
|
flash_attention: true |
|
|
|
logging_steps: 10 |
|
warmup_steps: 10 |
|
evals_per_epoch: 10 |
|
eval_table_size: |
|
saves_per_epoch: 1 |
|
debug: |
|
deepspeed: |
|
weight_decay: 0.0 |
|
fsdp: |
|
- full_shard |
|
- auto_wrap |
|
fsdp_config: |
|
fsdp_limit_all_gathers: true |
|
fsdp_sync_module_states: true |
|
fsdp_offload_params: false |
|
fsdp_use_orig_params: false |
|
fsdp_cpu_ram_efficient_loading: false |
|
fsdp_transformer_layer_cls_to_wrap: MistralDecoderLayer |
|
fsdp_state_dict_type: FULL_STATE_DICT |
|
fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP |
|
special_tokens: |
|
# torch_compile: true |
|
# chat_template: chatml |
|
``` |
|
|
|
</details><br> |
|
|
|
## Training procedure |
|
|
|
### Training hyperparameters |
|
|
|
The following hyperparameters were used during training: |
|
- learning_rate: 1e-05 |
|
- train_batch_size: 40 |
|
- eval_batch_size: 40 |
|
- seed: 42 |
|
- distributed_type: multi-GPU |
|
- num_devices: 2 |
|
- total_train_batch_size: 80 |
|
- total_eval_batch_size: 80 |
|
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 |
|
- lr_scheduler_type: cosine |
|
- lr_scheduler_warmup_steps: 10 |
|
- num_epochs: 1 |
|
|
|
### Training results |
|
|
|
| Training Loss | Epoch | Step | Validation Loss | |
|
|:-------------:|:------:|:-----:|:---------------:| |
|
| No log | 0.0000 | 1 | 2.8685 | |
|
| 1.3256 | 0.1000 | 5581 | 1.4044 | |
|
| 1.3539 | 0.2000 | 11162 | 1.3793 | |
|
| 1.3409 | 0.3000 | 16743 | 1.3659 | |
|
| 1.3781 | 0.4000 | 22324 | 1.3552 | |
|
| 1.3909 | 0.5000 | 27905 | 1.3470 | |
|
| 1.4037 | 0.6000 | 33486 | 1.3423 | |
|
| 1.3573 | 0.7000 | 39067 | 1.3383 | |
|
| 1.3088 | 0.8000 | 44648 | 1.3366 | |
|
| 1.3243 | 0.9000 | 50229 | 1.3357 | |
|
|
|
|
|
### Framework versions |
|
|
|
- PEFT 0.11.1 |
|
- Transformers 4.41.1 |
|
- Pytorch 2.3.0+cu121 |
|
- Datasets 2.19.1 |
|
- Tokenizers 0.19.1 |
|
|
|
|
|
## License |
|
|
|
Apache 2.0 |