max_position_embeddings update in config.json
#2 · opened by Inktomi93
The mistralai/Mistral-Large-Instruct-2407 repo just updated its config.json, changing max_position_embeddings from a 32k to a 128k context size. Will this require redoing the quants?
I think you can just update the config.
This worked for me:

```sh
sed -i 's/"max_position_embeddings": 32768,/"max_position_embeddings": 131072,/' config.json
```
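If the sed pattern doesn't match your file's exact whitespace, a small Python sketch does the same edit regardless of formatting (this assumes config.json is in the current directory; adjust the path to your local copy):

```python
import json

# Load the existing model config (path is an assumption; point it at
# your local copy of the quantized model's config.json).
with open("config.json") as f:
    config = json.load(f)

# Bump the advertised context length from 32k to 128k.
config["max_position_embeddings"] = 131072

with open("config.json", "w") as f:
    json.dump(config, f, indent=2)
```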
The max_position_embeddings key is just a default advertised by the model, usually a hint about the context length the model was trained for. It has no impact on quantization, so you can edit the config of an already-quantized model to change the default, or set a new value at load time.
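For example, with the transformers library you can override the value at load time without touching the file. A minimal sketch, assuming the repo id from the question and that you have access to it (extra kwargs passed to from_pretrained override the stored config values):

```python
from transformers import AutoConfig, AutoModelForCausalLM

repo = "mistralai/Mistral-Large-Instruct-2407"

# Override the advertised context length at load time; keyword arguments
# to from_pretrained replace the corresponding values in config.json.
config = AutoConfig.from_pretrained(repo, max_position_embeddings=131072)
model = AutoModelForCausalLM.from_pretrained(repo, config=config)
```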
Inktomi93 changed discussion status to closed