---
license: apache-2.0
---

![Ostrich-70B](https://primal.b-cdn.net/media-cache?s=o&a=1&u=https%3A%2F%2Fm.primal.net%2FHyFP.png)

# Model Card for Ostrich

**Finetuned from model:** [Llama3-70B](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct)

# Pre-trained With

- Health-related topics
- Faith-related topics
- Nostr notes: these teach the model more about Bitcoin and the other topics discussed on Nostr.
- Nutrition-related topics
- Medicinal herbs

**Aligned a bit in these domains:**

- Alternative medicine
- Permaculture
- Phytochemicals

# Uses

Compared to other models, this one may know more about Nostr, Bitcoin, and healthy living.
It is more closely aligned with the opinions of people on Nostr, and it may hold ideas outside the mainstream because of the Nostr data and my own curation.
In short, it sits somewhere between base Llama 3, Nostr, and my values.

You can read more about "LLM curation" via fine-tuning with Nostr notes
[here](https://habla.news/a/naddr1qvzqqqr4gupzp8lvwt2hnw42wu40nec7vw949ys4wgdvums0svs8yhktl8mhlpd3qqgx6cnjvfhhzet20pkhqdn2wenkvu6gy4y) and
[here](https://habla.news/a/naddr1qvzqqqr4gupzp8lvwt2hnw42wu40nec7vw949ys4wgdvums0svs8yhktl8mhlpd3qqxnzde3xumrswfjx56rjwf4kkqhsx).

I am using the model as a ground truth for Nostr-related questions here: https://wikifreedia.xyz/based-llm-leaderboard/npub1nlk894teh248w2heuu0x8z6jjg2hyxkwdc8cxgrjtm9lnamlskcsghjm9c

Use a repeat penalty of 1.05 or higher to avoid repetition.
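For example, if you run the model locally with llama.cpp, the penalty can be set from the command line. This is only a sketch: the GGUF file name is a placeholder, and it assumes you have converted and quantized the model yourself.

```shell
# Hypothetical invocation; the GGUF file name is a placeholder for your
# own conversion of the model. --repeat-penalty discourages repeated tokens.
./llama-cli -m ostrich-70b-q4_k_m.gguf \
  --repeat-penalty 1.05 \
  -p "What is Nostr?" \
  -n 256
```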

I hope you like it. Let me know about your experience. You can DM me on Nostr.

# Warning

Users (both direct and downstream) should be aware of the risks, biases, and limitations of the model.
The trainer, developer, and uploader of this model do not assume any liability. Use it at your own risk.
There is no guarantee that the model will be of any use. It may hallucinate often.

# Training Details

## Training Data

The sources mentioned above were converted to TXT files and used for pre-training, plus a few supervised fine-tunings that helped with controlling the length of the output.

Sources include material such as videos banned from YouTube.

## Training Procedure

LLaMA-Factory is used to train on 2x RTX 3090 GPUs, with fsdp_qlora as the technique.
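A run of this kind can be launched with LLaMA-Factory's CLI. The config file name below is a placeholder, not the one actually used; the FSDP and QLoRA settings live inside the YAML config (see the fsdp_qlora examples shipped with LLaMA-Factory).

```shell
# Hypothetical sketch: the YAML file name is a placeholder. FORCE_TORCHRUN=1
# makes LLaMA-Factory launch via torchrun so both GPUs are used.
FORCE_TORCHRUN=1 llamafactory-cli train ostrich_pretrain_fsdp_qlora.yaml
```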

The number in the file names (like 21345) is the version: I take the number of training steps and use it as the version, so each source, book, or note adds to the version.

# Contributions

You can tip me on Nostr: https://primal.net/p/npub1nlk894teh248w2heuu0x8z6jjg2hyxkwdc8cxgrjtm9lnamlskcsghjm9c

By using this LLM you are talking to a combination of content from many great producers. A portion of your support may in the future go to content producers and authors.