---
license: apache-2.0
---

![Ostrich-70B](https://primal.b-cdn.net/media-cache?s=o&a=1&u=https%3A%2F%2Fm.primal.net%2FHyFP.png)

# Model Card for Ostrich

**Finetuned from model:** [Llama3-70B](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct)

# Pre-trained With

- Health-related topics
- Faith-related topics
- Nostr notes: these give the model more knowledge of Bitcoin and other topics discussed on Nostr.
- Nutrition-related topics
- Medicinal herbs

**Aligned a bit in these domains:**
- Alternative medicine
- Permaculture
- Phytochemicals



# Uses

Compared to other models, this one may know more about Nostr, Bitcoin, and healthy living. 
It is more closely aligned with the opinions of people on Nostr, and it may hold ideas outside the mainstream because of the Nostr data and my own curation. 
So it is basically a blend of base Llama 3, the Nostr corpus, and my values. 

You can read more about "LLM curation" via fine-tuning on Nostr notes:
[here](https://habla.news/a/naddr1qvzqqqr4gupzp8lvwt2hnw42wu40nec7vw949ys4wgdvums0svs8yhktl8mhlpd3qqgx6cnjvfhhzet20pkhqdn2wenkvu6gy4y) and 
[here](https://habla.news/a/naddr1qvzqqqr4gupzp8lvwt2hnw42wu40nec7vw949ys4wgdvums0svs8yhktl8mhlpd3qqxnzde3xumrswfjx56rjwf4kkqhsx).

I am using the model as a ground truth for Nostr-related questions here: https://wikifreedia.xyz/based-llm-leaderboard/npub1nlk894teh248w2heuu0x8z6jjg2hyxkwdc8cxgrjtm9lnamlskcsghjm9c

Use a repetition penalty of 1.05 or higher to avoid repetitive output. 
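
For example, with Hugging Face `transformers` the penalty can be passed directly to `generate()`, as in the minimal sketch below (the repo ID is a placeholder; point it at wherever these weights are actually hosted):

```python
# Minimal inference sketch with Hugging Face transformers.
# "your-user/Ostrich-70B" is a placeholder repo ID, not the actual upload path.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-user/Ostrich-70B"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "What is Nostr?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    repetition_penalty=1.05,  # 1.05 or higher, as recommended above
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```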

I hope you like it. Let me know about your experience. You can DM me on Nostr.


# Warning

Users (both direct and downstream) should be aware of the risks, biases and limitations of the model.
The trainer, developer or uploader of this model does not assume any liability. Use it at your own risk.
There is no guarantee that the model will be of any use. It may hallucinate often. 

# Training Details

## Training Data

The sources mentioned above are converted to TXT files and used for continued pre-training. A few supervised fine-tuning runs were added on top, which helped with controlling the length of the output.

Sources include, for example, videos that were banned from YouTube. 

## Training Procedure

LLaMA-Factory is used for training on 2x RTX 3090 GPUs, with FSDP + QLoRA as the technique. 
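
As a rough illustration of what a QLoRA-style setup looks like (this is not the exact LLaMA-Factory recipe used here; the LoRA rank and target modules below are assumptions), the sketch loads the base model in 4-bit and attaches LoRA adapters:

```python
# Illustrative QLoRA setup: 4-bit quantized base model plus LoRA adapters.
# FSDP sharding across the two GPUs is handled separately (e.g. by accelerate /
# LLaMA-Factory) and is not shown here.
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-70B-Instruct",
    quantization_config=bnb_config,
)

lora_config = LoraConfig(
    r=16,                                 # assumed LoRA rank
    lora_alpha=32,                        # assumed scaling factor
    target_modules=["q_proj", "v_proj"],  # assumed target modules
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```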

The number in the filenames (like 21345) is the version: it is the cumulative count of training steps, so each additional source, book, or note increases it.


# Contributions

You can tip me on Nostr https://primal.net/p/npub1nlk894teh248w2heuu0x8z6jjg2hyxkwdc8cxgrjtm9lnamlskcsghjm9c

By using this LLM you are interacting with a combination of content from many great producers. Portions of your support may in the future go to those content producers and authors.