## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
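The snippet below loads the fine-tuned checkpoint with the Hugging Face Transformers `pipeline` API and generates C++ code from an instruction-style prompt: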
```python
from transformers import AutoTokenizer
import transformers
import torch

model = "medxiaorudan/CodeLlama_CPP_FineTuned"

# Load the tokenizer that was saved with the fine-tuned checkpoint
tokenizer = AutoTokenizer.from_pretrained(model)

# Build a text-generation pipeline with fp16 weights, placed automatically
# across the available devices
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Instruction-style prompt: the model is expected to complete the
# "### Response:" section with C++ code
prompt = """
Use the Task below and write the Response, which is a programming code that can solve the Task.
### Task:
Generate a C++ program that accepts numeric input from the user and maintains a record of previous user inputs with timestamps. Ensure the program sorts the user inputs in ascending order based on the provided numeric input. Enhance the program to display timestamps along with the sorted user inputs.
### Response:
"""

sequences = pipeline(
    prompt,
    do_sample=True,
    top_k=10,
    temperature=0.1,
    top_p=0.95,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=400,
    add_special_tokens=False,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
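The sampling settings above (temperature 0.1, top_k 10, top_p 0.95) keep generation close to greedy decoding, which generally suits code output; raise the temperature if more varied completions are wanted.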
### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
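As a minimal sketch of direct use, the checkpoint can also be driven with `AutoModelForCausalLM.generate` instead of a `pipeline`. This route is an assumption on our part (the card only demonstrates the `pipeline` call above), and the task text here is a made-up example reusing the same prompt template:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "medxiaorudan/CodeLlama_CPP_FineTuned"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Hypothetical task, written in the card's Task/Response prompt template
prompt = """
Use the Task below and write the Response, which is a programming code that can solve the Task.
### Task:
Write a C++ function that returns the sum of the elements in a std::vector<int>.
### Response:
"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Same low-temperature sampling as the pipeline example above
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.1,
    top_p=0.95,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```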