---
license: apache-2.0
---

![Ostrich-70B](https://primal.b-cdn.net/media-cache?s=o&a=1&u=https%3A%2F%2Fm.primal.net%2FHyFP.png)

# Model Card for Ostrich

**Finetuned from model:** [Llama3-70B](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct)

# Pre-Trained With

- Nostr notes: these teach the model more about Bitcoin and other topics discussed on Nostr.
- Health-related topics
- Faith-related topics
- Nutrition-related topics
- Medicinal herbs

**Aligned a bit in these domains:**

- Alternative medicine
- Permaculture
- Phytochemicals

# Uses

You can read more about "LLM curation" via fine-tuning with Nostr notes [here](https://habla.news/a/naddr1qvzqqqr4gupzp8lvwt2hnw42wu40nec7vw949ys4wgdvums0svs8yhktl8mhlpd3qqgx6cnjvfhhzet20pkhqdn2wenkvu6gy4y) and [here](https://habla.news/a/naddr1qvzqqqr4gupzp8lvwt2hnw42wu40nec7vw949ys4wgdvums0svs8yhktl8mhlpd3qqxnzde3xumrswfjx56rjwf4kkqhsx).

Compared to other models, this one may know more about Nostr, Bitcoin and healthy living, and it aligns more closely with the opinions of people on Nostr. It may hold ideas that differ from the mainstream because of Nostr and my own curation. In short, it sits somewhere between base Llama 3.0, Nostr, and my values.

I am using the model as a ground truth for Nostr-related questions here: https://wikifreedia.xyz/based-llm-leaderboard/npub1nlk894teh248w2heuu0x8z6jjg2hyxkwdc8cxgrjtm9lnamlskcsghjm9c

Use a repeat penalty of 1.05 or higher to avoid repetitions.

I hope you like it. Let me know about your experience; you can DM me on Nostr.

The number in the filenames (e.g. 21345) is the version. I take the training step count and use it as the version number; each source, book, or note adds to it.

# Warning

Users (both direct and downstream) should be aware of the risks, biases and limitations of the model. The trainer, developer and uploader of this model do not assume any liability. Use it at your own risk. There is no guarantee that the model will be of any use. It may hallucinate often.
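To show why the recommended repeat penalty helps, here is a minimal self-contained sketch of the standard repetition-penalty rule (the one popularized by CTRL and used by common inference stacks): logits of tokens that already appeared in the output are scaled so those tokens become less likely. The function name and example values are illustrative, not part of this model.

```python
def apply_repeat_penalty(logits, generated_ids, penalty=1.05):
    """Penalize tokens already present in the generated sequence.

    Positive logits are divided by the penalty, negative logits are
    multiplied by it, so repeated tokens always lose probability mass.
    """
    out = list(logits)
    for tok in set(generated_ids):
        if out[tok] > 0:
            out[tok] /= penalty
        else:
            out[tok] *= penalty
    return out

# Tokens 0 and 2 were already generated, so their logits are pushed down.
print(apply_repeat_penalty([2.0, 0.5, -1.0], generated_ids=[0, 2]))
```

A penalty of 1.0 disables the effect entirely; values slightly above 1.0 (such as the suggested 1.05) curb loops without noticeably distorting the output distribution.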
# Training Details

## Training Data

The sources mentioned above were converted to TXT files and used for continued pre-training. No PPO, DPO or other preference-tuning method was applied.

## Training Procedure

LLaMA-Factory is used to train on 2× RTX 3090 GPUs, using the fsdp_qlora technique.
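For readers unfamiliar with LLaMA-Factory, runs like this are typically driven by a YAML config. The fragment below is a sketch only: the key names follow LLaMA-Factory's published example configs, but the dataset name, paths and hyperparameters are my assumptions, not the settings actually used for this model (FSDP itself is configured separately via an `accelerate` config file).

```yaml
# Illustrative LLaMA-Factory config for continued pre-training with QLoRA.
# All values below are placeholders, not the actual training settings.
model_name_or_path: meta-llama/Meta-Llama-3-70B-Instruct
stage: pt                       # continued pre-training on raw text
do_train: true
finetuning_type: lora
quantization_bit: 4             # QLoRA: 4-bit quantized base weights
dataset: nostr_notes            # hypothetical dataset name
cutoff_len: 4096
output_dir: saves/ostrich-70b
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 1.0
```

With FSDP sharding the quantized 70B weights across two 24 GB RTX 3090s, this kind of setup is what makes QLoRA training of a model this size feasible on consumer hardware.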