---
language:
- en
pipeline_tag: text2text-generation
inference: false
---
# ViPE-M-CTX7
<!-- Provide a quick summary of what the model is/does. -->
ViPE: Visualize Pretty-much Everything is the first automated model for translating any arbitrary piece of text into a visualizable prompt. It helps any text-to-image model with visualizing figurative or non-lexical language.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [Computer Graphics Group, University of Tuebingen](https://uni-tuebingen.de/fakultaeten/mathematisch-naturwissenschaftliche-fakultaet/fachbereiche/informatik/lehrstuehle/computergrafik/lehrstuhl/)
- **Model type:** Auto-Regressive
- **Language:** English
- **License:** [MIT License for Non-Commercial Use](https://github.com/Hazel1994/ViPE/blob/main/LICENSE)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** [Github](https://github.com/Hazel1994/ViPE)
- **Paper:** [EMNLP2023](https://2023.emnlp.org/program/)
### Music Video Generations
- **Repository:** [Github](https://github.com/Hazel1994/ViPE)
- **Demo:** [ViPE Videos] (youtube link)
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
You can directly use the model to generate detailed prompts for any arbitrary text.
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

def generate(text, model, tokenizer, device, do_sample, top_k=100, epsilon_cutoff=.00005, temperature=1):
    # mark the text with special tokens
    text = [tokenizer.eos_token + i + tokenizer.eos_token for i in text]
    batch = tokenizer(text, padding=True, return_tensors="pt")

    input_ids = batch["input_ids"].to(device)
    attention_mask = batch["attention_mask"].to(device)

    # how many new tokens to generate at max
    max_prompt_length = 50

    generated_ids = model.generate(input_ids=input_ids, attention_mask=attention_mask,
                                   max_new_tokens=max_prompt_length, do_sample=do_sample,
                                   top_k=top_k, epsilon_cutoff=epsilon_cutoff, temperature=temperature)

    # return only the generated prompts, stripping the input text
    pred_caps = tokenizer.batch_decode(generated_ids[:, -(generated_ids.shape[1] - input_ids.shape[1]):],
                                       skip_special_tokens=True)
    return pred_caps

device = 'cpu'
model = GPT2LMHeadModel.from_pretrained('fittar/ViPE-M-CTX7')
model.to(device)

# ViPE-M's tokenizer is identical to that of GPT2-Medium
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium')
tokenizer.pad_token = tokenizer.eos_token

# a list of abstract/figurative or any arbitrary combinations of keywords
texts = ['lalala', 'I wanna start learning', 'free your mind; you will see the other side of life', 'brave; fantasy']

prompts = generate(texts, model, tokenizer, do_sample=True, device=device)
for t, p in zip(texts, prompts):
    print('{} --> {}'.format(t, p))

# example output:
# lalala --> A group of people chanting "la la la" around a bonfire on a beach at night
# I wanna start learning --> A child sitting in a library surrounded by books, excitedly flipping through pages of a book
# free your mind; you will see the other side of life --> An astronaut floating in space with a sense of floating weightlessness, looking down towards the earth
# brave; fantasy --> A brave knight with shining armor fighting a fierce dragon in a misty forest
```
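Since ViPE only produces prompts, a separate text-to-image model is needed to turn them into images. Below is a minimal sketch using Hugging Face's `diffusers` library with the `runwayml/stable-diffusion-v1-5` checkpoint; both are illustrative assumptions rather than part of this card, since ViPE works with any text-to-image model.

```python
# a minimal sketch of feeding ViPE prompts into a text-to-image model;
# the diffusers library and the runwayml/stable-diffusion-v1-5 checkpoint
# are assumptions for illustration only
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained('runwayml/stable-diffusion-v1-5')
pipe.to(device)

# render each generated prompt and save it under a sanitized filename
for t, p in zip(texts, prompts):
    image = pipe(p).images[0]
    image.save('{}.png'.format(t.replace(' ', '_').replace(';', '')))
```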
### Recommendations
You can use either a comma or a semicolon to combine multiple keywords, for example `['dark, fantasy, brave']` or `['This is gonna be the best day of my life; do you agree?']`.
However, a semicolon draws a stronger boundary between the keywords and encourages the model to interpret the last keyword in the context of the previous ones, as the sketch below illustrates.
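For instance, reusing the `generate` helper from the code above, the two separators can be contrasted directly:

```python
# comma: the keywords are blended with roughly equal weight
blended = generate(['dark, fantasy, brave'], model, tokenizer, do_sample=True, device=device)

# semicolon: the last keyword is rendered in the context of the earlier ones
contextual = generate(['dark; fantasy; brave'], model, tokenizer, do_sample=True, device=device)

print(blended[0])
print(contextual[0])
```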
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
## Citation
If you find ViPE useful, please cite our paper.
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
## Model Card Contact
[Hassan Shahmohammadi](https://fittar.me)