bprateek committed on
Commit c050be9
1 Parent(s): 88f3b9d

Update README.md

Files changed (1)
  1. README.md +2 -19
README.md CHANGED
@@ -19,11 +19,6 @@ It can generate descriptions for your **home** products by getting a text prompt
 
 [GPT-2](https://openai.com/blog/better-language-models/) is a large [transformer](https://arxiv.org/abs/1706.03762)-based language model with 1.5 billion parameters, trained on a dataset of 8 million web pages. GPT-2 is trained with a simple objective: predict the next word, given all of the previous words within some text. The diversity of the dataset causes this simple goal to contain naturally occurring demonstrations of many tasks across diverse domains. GPT-2 is a direct scale-up of GPT, with more than 10X the parameters and trained on more than 10X the amount of data.
 
-### Live Demo
-For testing model with special configuration, please visit [Demo](https://huggingface.co/spaces/HamidRezaAttar/gpt2-home)
-
-### Blog Post
-For more detailed information about project development please refer to my [blog post](https://hamidrezaattar.github.io/blog/markdown/2022/02/17/gpt2-home.html).
 
 ### How to use
 For best experience and clean outputs, you can use Live Demo mentioned above, also you can use the notebook mentioned in my [GitHub](https://github.com/HamidRezaAttar/GPT2-Home)
@@ -31,20 +26,8 @@ For best experience and clean outputs, you can use Live Demo mentioned above, al
 You can use this model directly with a pipeline for text generation.
 ```python
 >>> from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
->>> tokenizer = AutoTokenizer.from_pretrained("HamidRezaAttar/gpt2-product-description-generator")
->>> model = AutoModelForCausalLM.from_pretrained("HamidRezaAttar/gpt2-product-description-generator")
+>>> tokenizer = AutoTokenizer.from_pretrained("bprateek/product_description_generator")
+>>> model = AutoModelForCausalLM.from_pretrained("bprateek/product_description_generator")
 >>> generator = pipeline('text-generation', model, tokenizer=tokenizer, config={'max_length':100})
 >>> generated_text = generator("This bed is very comfortable.")
 ```
-
-### Citation info
-```bibtex
-@misc{GPT2-Home,
-author = {HamidReza Fatollah Zadeh Attar},
-title = {GPT2-Home the English home product description generator},
-year = {2021},
-publisher = {GitHub},
-journal = {GitHub repository},
-howpublished = {\url{https://github.com/HamidRezaAttar/GPT2-Home}},
-}
-```
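
For reference, below is a minimal runnable sketch of the usage the updated README describes, using the `bprateek/product_description_generator` checkpoint named in the diff. Note that the `config={'max_length':100}` argument in the README snippet is not the usual way to control generation length with a `transformers` pipeline, so this sketch passes `max_length` at call time instead; that substitution is an assumption about the intended behavior, not part of the commit.

```python
# Minimal sketch of the pipeline usage shown in the updated README.
# Assumes the "bprateek/product_description_generator" checkpoint is available on the Hub.
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("bprateek/product_description_generator")
model = AutoModelForCausalLM.from_pretrained("bprateek/product_description_generator")

# Build the text-generation pipeline; generation length is set per call here,
# since a config dict passed to pipeline() may not be applied as a generation setting.
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Generate a home-product description from a short prompt.
outputs = generator("This bed is very comfortable.", max_length=100, num_return_sequences=1)
print(outputs[0]["generated_text"])
```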