---
library_name: transformers
license: apache-2.0
base_model: Heralax/philosophy-llm-mistral-pretrain
tags:
- generated_from_trainer
model-index:
- name: philosophy-hardcore-pretraining
  results: []
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/philosophy-mistral-GGUF

This is a quantized version of [Heralax/philosophy-mistral](https://huggingface.co/Heralax/philosophy-mistral), created using llama.cpp.
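
A minimal llama-cpp-python sketch for running one of these quants locally is below. The GGUF filename, context size, and sampling settings are assumptions rather than anything specified in this card, so point `model_path` at whichever quant file you actually download from this repo.

```python
# Sketch (not from the original card): load a downloaded quant with
# llama-cpp-python and run a single completion.
from llama_cpp import Llama

llm = Llama(
    model_path="./philosophy-mistral.Q4_K_M.gguf",  # assumed filename; use the quant you downloaded
    n_ctx=4096,        # context window; tune to your hardware
    n_gpu_layers=-1,   # offload all layers if your llama.cpp build has GPU support
)

out = llm(
    "What is the difference between knowledge by acquaintance and knowledge by description?",
    max_tokens=256,
    temperature=0.1,   # low temperature; the original card below notes the model recalls its data best near temp 0
)
print(out["choices"][0]["text"])
```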

# Original Model Card

# Philosophy LLM

I would've trained this on Phi so I could've called it Phi-losophy if I had thought of that joke before kicking off the run. Oh well.

It's trained on Mistral instead. That's a Mist opportunity right there.

This is a narrow domain-expert LLM trained on the top 5 philosophy books on Project Gutenberg:

- The Problems of Philosophy (Bertrand Russell)
- Beyond Good and Evil (Nietzsche)
- Thus Spake Zarathustra: A Book for All and None (Nietzsche)
- The Prince (Machiavelli)
- Second Treatise of Government (John Locke)

It's meant to be an interesting novelty, showing off training on a specific domain. It has some quirks. Namely:

1. It seems to have memorized the training data very well. Ask a question that exists in the training data, with temp 0, and it will usually give you back the exact response word-for-word (see the sketch after this list). This means that, on the subjects covered by its data, it will be very knowledgeable.
2. I forgot to include any generalist instruct data, so it's... not stupid, at least not particularly stupid by 7B standards, but it is very much limited to QA.
3. It's much less fluffy and wasteful with its responses than previous Augmentoolkit domain-expert models, due to a new dataset setting. This tends to make it respond with less detail, but it may also remember things better and get to the point more easily.
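
To poke at the first quirk yourself, here is a minimal sketch that runs the original full-precision model with greedy decoding, the `transformers` analogue of temp 0. The prompt is a made-up placeholder rather than a known line from the dataset, and the exact prompt format the model expects isn't documented here, so treat it as illustrative only.

```python
# Sketch: greedy decoding against the original full-precision model
# (Heralax/philosophy-mistral) to probe the memorization behaviour.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Heralax/philosophy-mistral"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "What does Russell mean by 'knowledge by acquaintance'?"  # placeholder question
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# do_sample=False is the greedy ("temp 0") setting: take the argmax token at each step.
output = model.generate(**inputs, max_new_tokens=200, do_sample=False)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```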

Some example chats (blame LM Studio for not hiding the stop token):

Asking stuff from the training data:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64825ebceb4befee377cf8ac/AirHFo61iB1HAP-IXwnZn.png)

Asking a question directly from the training data and one I came up with on the spot:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64825ebceb4befee377cf8ac/Ccm-EeDyOFcylCefwDS-W.png)

Some things that are kinda funny but also show off the drawback of not using any generalist data:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64825ebceb4befee377cf8ac/e2sCBLIX8Xg91KSevGt_B.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64825ebceb4befee377cf8ac/P0bhWyENxOaxPvC4fE6jw.png)

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a rough code equivalent is sketched after the list):
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 6
- gradient_accumulation_steps: 6
- total_train_batch_size: 72
- total_eval_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 136
- num_epochs: 6
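
For orientation, the effective batch size above is the per-device batch size times the number of GPUs times the accumulation steps: 2 × 6 × 6 = 72. Below is a rough `transformers.TrainingArguments` rendering of the listed values; it is a sketch, not the actual training script, and `output_dir` (plus anything not listed above) is a placeholder.

```python
# Rough TrainingArguments rendering of the hyperparameters listed above.
# Not the original training script; output_dir and any unlisted options
# are placeholders or library defaults.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="philosophy-hardcore-pretraining",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=6,
    num_train_epochs=6,
    lr_scheduler_type="cosine",
    warmup_steps=136,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)

# Effective train batch size across 6 GPUs:
# 2 (per device) * 6 (devices) * 6 (grad accumulation) = 72,
# which matches total_train_batch_size above.
```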

### Framework versions

- Transformers 4.45.0.dev0
- Pytorch 2.3.1+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1