pszemraj committed
Commit dbf27d0
1 Parent(s): b9ece17

Update README.md

Files changed (1):
  1. README.md +32 -14
README.md CHANGED
@@ -1,32 +1,50 @@
  ---
  tags:
- - generated_from_trainer
- model-index:
- - name: gpt-peter-2pt7B-peter_DS-msgs_Ep-1_Bs-2
-   results: []
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->

- # gpt-peter-2pt7B-peter_DS-msgs_Ep-1_Bs-2

- This model is a fine-tuned version of [pszemraj/gpt-peter-2.7B](https://huggingface.co/pszemraj/gpt-peter-2.7B) on the None dataset.

- ## Model description

- More information needed

- ## Intended uses & limitations

- More information needed

- ## Training and evaluation data

- More information needed

  ## Training procedure

  ### Training hyperparameters

  The following hyperparameters were used during training:
  ---
  tags:
+ - gpt-neo
+ - gpt-peter
+ - chatbot
+
  ---

+ # pszemraj/gpt-peter-2.7B
+
+ - This model is a fine-tuned version of [EleutherAI/gpt-neo-2.7B](https://huggingface.co/EleutherAI/gpt-neo-2.7B) on about 80k WhatsApp and iMessage texts.
+ - The model is too large to run on the hosted inference API; a Colab notebook for testing it is linked [here](https://colab.research.google.com/gist/pszemraj/a59b43813437b43973c8f8f9a3944565/testing-pszemraj-gpt-peter-2-7b.ipynb).
+ - Alternatively, you can message [a bot on Telegram](http://t.me/GPTPeter_bot) that is based on this model.
+ - The Telegram bot code and the model training code can be found [in this repository](https://github.com/pszemraj/ai-msgbot).
+
+ ## Usage in Python
+
+ Install the `transformers` library if you don't already have it:
+ ```
+ pip install -U transformers
+ ```

+ Load the model into a `pipeline` object:
+
+ ```
+ from transformers import pipeline
+ import torch
+
+ # use a GPU if one is available, otherwise fall back to CPU
+ device = 'cuda' if torch.cuda.is_available() else 'cpu'
+ my_chatbot = pipeline(
+     'text-generation',
+     'pszemraj/gpt-peter-2.7B',
+     device=0 if device == 'cuda' else -1,
+ )
+ ```

+ Generate text:
+
+ ```
+ my_chatbot('Did you ever hear the tragedy of Darth Plagueis The Wise?')
+ ```
+
+ _(The example above is kept simple; passing generation parameters such as `no_repeat_ngram_size` is recommended to get better generations.)_
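+
+ As a minimal sketch, a call with a few such parameters might look like the following (these are standard `transformers` generation arguments, but the specific values here are illustrative assumptions, not the settings used for this model):
+
+ ```
+ my_chatbot(
+     'Did you ever hear the tragedy of Darth Plagueis The Wise?',
+     max_length=128,          # assumed cap on total generated length
+     no_repeat_ngram_size=3,  # block repeated 3-grams to reduce looping
+     do_sample=True,          # sample instead of greedy decoding
+     top_k=50,                # illustrative sampling settings
+     top_p=0.95,
+ )
+ ```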

  ## Training procedure

+
  ### Training hyperparameters

  The following hyperparameters were used during training: