---
library_name: transformers
pipeline_tag: text-generation
---

    
### Direct Use

The snippet below loads the model and its tokenizer from the Hugging Face Hub, tokenizes a short prompt, and samples five continuations with `generate`.

```python
from transformers import AutoModelForCausalLM, PreTrainedTokenizerFast

# Load the model and its custom tokenizer from the Hub
model = AutoModelForCausalLM.from_pretrained("Owaner/fineweb-falcon")
tokenizer = PreTrainedTokenizerFast.from_pretrained("Owaner/falcon_tokenizer")

# Tokenize a prompt; token_type_ids are not used by causal LMs
example = "When habitually indulge in "
tokenized_input = tokenizer(example, return_tensors="pt", return_token_type_ids=False)

# Sample five continuations with temperature, top-k and nucleus (top-p) sampling
output = model.generate(
    inputs=tokenized_input["input_ids"],
    attention_mask=tokenized_input["attention_mask"],
    do_sample=True,
    max_length=100,
    temperature=0.7,
    top_k=50,
    top_p=0.95,
    num_return_sequences=5,
)

# Decode and print each generated sequence
output_text = tokenizer.batch_decode(output, skip_special_tokens=True)
for i, o in enumerate(output_text):
    print(f"Output {i+1}: {o}")
```
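
If you prefer the high-level API, the same checkpoints can also be wrapped in a `text-generation` pipeline. This is a minimal sketch using the same sampling settings as above; it assumes the tokenizer is passed to the pipeline explicitly, since it lives in a separate repository:

```python
from transformers import AutoModelForCausalLM, PreTrainedTokenizerFast, pipeline

# Load model and tokenizer from their respective repositories
model = AutoModelForCausalLM.from_pretrained("Owaner/fineweb-falcon")
tokenizer = PreTrainedTokenizerFast.from_pretrained("Owaner/falcon_tokenizer")

# Wrap both in a text-generation pipeline
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Generation kwargs are forwarded to model.generate()
results = generator(
    "When habitually indulge in ",
    do_sample=True,
    max_length=100,
    temperature=0.7,
    top_k=50,
    top_p=0.95,
    num_return_sequences=5,
)
for i, r in enumerate(results):
    print(f"Output {i+1}: {r['generated_text']}")
```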
### Environmental Impact

- **Hardware Type:** Single NVIDIA A100 (80 GB)
- **Hours used:** 2
- **Cloud Provider:** DataCrunch
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]