---
license: apache-2.0
---

![Ostrich-70B](https://primal.b-cdn.net/media-cache?s=o&a=1&u=https%3A%2F%2Fm.primal.net%2FHyFP.png)

# Model Card for Ostrich

- **Trained on some of the Nostr notes**
- **Aligned a bit in these domains:**
  - Bitcoin
  - Health
  - Permaculture
  - Phytochemicals
  - Alternative medicine
  - Herbs
  - Nutrition

Read more about it here: https://habla.news/a/naddr1qvzqqqr4gupzp8lvwt2hnw42wu40nec7vw949ys4wgdvums0svs8yhktl8mhlpd3qqxnzde3xsunjwfkxcunwv3jvtnjyc

A running model is here: https://njump.me/npub1chadadwep45t4l7xx9z45p72xsxv7833zyy4tctdgh44lpc50nvsrjex2m (Though it may be down for maintenance from time to time. You need to DM the bot for it to answer.)

## Model Details

- **Finetuned from model:** https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct

The number in a filename, such as 4750, indicates the version; higher numbers are newer versions.

## Uses

This model is more willing to disagree than Llama 3, so it can be used to produce refutations and counterarguments. Ask it any question; compared to other models, it may know more about Nostr and Bitcoin.

You can chat with it using llama.cpp, or use the llama-cpp-python package to call it from a Python script (see the sketch at the end of this card). Use a repeat penalty of 1.05 or higher to avoid repetition.

## Warning

Users (both direct and downstream) should be aware of the risks, biases, and limitations of the model. The trainer, developer, and uploader of this model do not assume any liability. Use it at your own risk.

## Training Details

### Training Data

Nostr-related information from the web and from Nostr itself, Bitcoin-related information, and information in the health domain. Information that aligns well with humanity is preferred.

### Training Procedure

LLaMA-Factory was used to train on 2x RTX 3090 GPUs, using the fsdp_qlora technique. The Nostr training took about 140 hours for a dataset of about 67 MB.
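
Below is a minimal sketch of chatting with the model from Python via llama-cpp-python, assuming you have downloaded one of the GGUF files from this repository. The filename and generation parameters shown here are illustrative, not exact values from this card; only the repeat penalty of 1.05 comes from the recommendation above.

```python
# Minimal sketch: chat with a local GGUF build of the model using llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="ostrich-70b-4750.Q4_K_M.gguf",  # hypothetical filename; use the file you downloaded
    n_ctx=4096,        # context window; adjust to your hardware
    n_gpu_layers=-1,   # offload all layers to GPU if VRAM allows
)

response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is Nostr and how does it relate to Bitcoin?"},
    ],
    repeat_penalty=1.05,  # 1.05 or higher, as recommended above
    max_tokens=512,
)

print(response["choices"][0]["message"]["content"])
```

The same repeat penalty setting applies if you chat with the model through the llama.cpp command-line tools instead of Python.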