---
license: apache-2.0
---
![Ostrich-70B](https://primal.b-cdn.net/media-cache?s=o&a=1&u=https%3A%2F%2Fm.primal.net%2FHyFP.png)
# Model Card for Ostrich
**Finetuned from model:** [Llama3-70B](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct)
## Fine-Tuned With
**Trained with Nostr notes**: knowledge and ideas about Bitcoin and about Nostr itself. You can read more about the effects of Nostr training here:
https://habla.news/a/naddr1qvzqqqr4gupzp8lvwt2hnw42wu40nec7vw949ys4wgdvums0svs8yhktl8mhlpd3qqxnzde3xumrswfjx56rjwf4kkqhsx
**Aligned a bit in these domains:**
- Health
- Permaculture
- Phytochemicals
- Alternative medicine
- Herbs
- Nutrition
More to come on these topics.
It has also been fine-tuned a bit on faith-related topics.
## Uses
Compared to other models, this one may know more about Nostr and Bitcoin. It is aligned with the opinions of people on Nostr, and it may hold ideas that differ from the mainstream because Nostr is highly censorship resistant.
You can use llama.cpp to run the GGUF file.
Use a repeat penalty of 1.05 or higher to avoid repetition.
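Below is a minimal sketch of one way to do this, using the llama-cpp-python bindings for llama.cpp; the model file name and prompt are placeholders, not actual release artifacts.

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
# "ostrich-70b.gguf" is a placeholder; point it at the downloaded GGUF file.
from llama_cpp import Llama

llm = Llama(model_path="ostrich-70b.gguf", n_ctx=4096)

out = llm(
    "What is Nostr and how does it relate to Bitcoin?",
    max_tokens=256,
    repeat_penalty=1.05,  # 1.05 or higher, as recommended above
)
print(out["choices"][0]["text"])
```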
I am using the model as a ground truth for Nostr-related questions here: https://wikifreedia.xyz/based-llm-leaderboard/npub1nlk894teh248w2heuu0x8z6jjg2hyxkwdc8cxgrjtm9lnamlskcsghjm9c
## Warning
Users (both direct and downstream) should be aware of the risks, biases and limitations of the model.
The trainer, developer or uploader of this model does not assume any liability. Use it at your own risk.
## Training Details
### Training Data
Nostr-related information from the web and from Nostr itself, plus Bitcoin-related information.
Information that aligns well with humanity is preferred.
About 80% comes from Nostr notes. The rest is my own curation.
### Training Procedure
LLaMA-Factory is used to train on 2x RTX 3090 GPUs; fsdp_qlora (FSDP + QLoRA) is the technique.
The Nostr training took ~200 hours.
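For orientation, here is a minimal sketch of what an FSDP + QLoRA fine-tuning config for LLaMA-Factory can look like. This is not the actual config used for this model; the dataset name, output directory, and hyperparameters are illustrative placeholders, and FSDP itself is configured separately via an Accelerate config in LLaMA-Factory's fsdp_qlora examples.

```yaml
# Illustrative LLaMA-Factory SFT config (not the config used for Ostrich).
# Launched with something like: llamafactory-cli train ostrich_sft.yaml
model_name_or_path: meta-llama/Meta-Llama-3-70B-Instruct

stage: sft
do_train: true
finetuning_type: lora
lora_target: all
quantization_bit: 4            # QLoRA: 4-bit quantized base weights

dataset: nostr_notes           # placeholder dataset name
template: llama3
cutoff_len: 2048

output_dir: saves/ostrich-70b/lora/sft
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 3.0
bf16: true
```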
After training it for a while on notes, I used the model itself to analyze and decide which notes to take in for further training.
The number in the filenames (like 9230) is the version; I add the number of training steps to the version number. There have been more than 9k training steps.