9rofe committed on
Commit
d7dcc26
1 Parent(s): 0f173ec

Update README.md

Files changed (1)
  1. README.md +67 -4
README.md CHANGED
@@ -67,11 +67,74 @@ Users should validate the simplified text with healthcare professionals to ensur
67
 
68
  Use the code below to get started with the model.
69
 
70
- '''python
71
  prompt = """
72
- <user>: Convert this text to reading level 6: What are the complications? What are the complications? Nearly 88,000 people die from alcohol-related causes each year in the US, making alcohol use the 4th leading preventable cause of death. Drinking at an increased risk level raises your chance of: Accidents, injuries, and aggression. Drinking too much increases your risk for every type of injury and violence. Alcohol is a factor in about 60% of fatal burn injuries, drownings, and murders; 50% of severe trauma injuries and sexual assaults; and 40% of fatal crashes and falls. Physical health problems. Heavy drinkers have a greater chance of liver and heart disease, stroke, digestive problems, and some types of cancer. Theyre also more likely to have problems with sexual function and premature aging. Emotional and cognitive problems. People who drink too much are more prone to anxiety and depression. They may have trouble sleeping, remembering things, and solving problems. Problems with relationships, work, and studies. Heavy drinking can interfere with your interactions and performance in every area of your life. Birth defects. Drinking during pregnancy can cause brain damage and deformities in the baby. Since scientists dont know whether any amount of alcohol is safe for a developing baby, women who are pregnant or trying to become pregnant should not drink. Alcoholism and alcohol use disorders. Drinking at an increased risk level raises your chance of developing an alcohol use disorder. How is it treated? Your medical provider may refer you to an addiction specialist for consultation if there is a perceived risk. If your risk is moderate to high, you may need to have treatment \(at an outpatient or in an acute hospital setting\) to help you manage withdrawal symptoms and reduce your risks associated with this disorder. Your medical provider may also prescribe medicines to help you to manage your symptoms.
73
  <assistant>:
74
- """.strip()'''
 
75
 
76
  ## Training Details
77
 
@@ -151,7 +214,7 @@ Carbon emissions can be estimated using the [Machine Learning Impact calculator]
151
  - **Hardware Type:** GPU (NVIDIA A100)
152
  - **Hours used:** 120 hours
153
  - **Cloud Provider:** AWS
154
- - **Compute Region:** US East (N. Virginia)
155
  - **Carbon Emitted:** 500 kg CO2eq
156
 
157
  ## Technical Specifications [optional]
 
67
 
68
  Use the code below to get started with the model.
69
 
70
+ ```python
71
+ import torch
+ from transformers import (
72
+     AutoConfig,
73
+     AutoModelForCausalLM,
74
+     AutoTokenizer,
75
+     BitsAndBytesConfig
76
+ )
77
+
78
+ from peft import PeftConfig, PeftModel
79
+
80
+ MODEL = "9rofe/Wernicke-AI3"
81
+
82
+ bnb_config = BitsAndBytesConfig(
83
+     load_in_4bit=True,
84
+     bnb_4bit_use_double_quant=True,
85
+     bnb_4bit_quant_type="nf4",
86
+     bnb_4bit_compute_dtype=torch.bfloat16
87
+ )
88
+
89
+ config = PeftConfig.from_pretrained(MODEL)
90
+ model = AutoModelForCausalLM.from_pretrained(
91
+     config.base_model_name_or_path,
92
+     return_dict=True,
93
+     quantization_config=bnb_config,
94
+     device_map="auto",
95
+     trust_remote_code=True
96
+ )
97
+
98
+ tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
99
+ tokenizer.pad_token = tokenizer.eos_token
100
+
101
+ model = PeftModel.from_pretrained(model, MODEL)
102
+
103
+ generation_config = model.generation_config
104
+ generation_config.max_new_tokens = 500  # adjust for longer or shorter outputs
105
+ generation_config.temperature = 0.7
106
+ generation_config.top_p = 0.7
107
+ generation_config.num_return_sequences = 1
108
+ generation_config.pad_token_id = tokenizer.eos_token_id
109
+ generation_config.eos_token_id = tokenizer.eos_token_id
110
+
111
+ # %%time  # Jupyter-only cell magic for timing; remove outside notebooks
112
+ device = "cuda:0"
113
+
114
+ prompt = """
115
+ <user>: Convert this text to reading level 6: {TEXT}
116
+ <assistant>:
117
+ """.strip()
118
+
119
+ encoding = tokenizer(prompt, return_tensors="pt").to(device)
120
+ with torch.inference_mode():
121
+     outputs = model.generate(
122
+         input_ids=encoding.input_ids,
123
+         attention_mask=encoding.attention_mask,
124
+         generation_config=generation_config
125
+     )
126
+
127
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
128
+ ```
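The decoded output echoes the prompt before the model's reply. A minimal sketch of extracting only the assistant's answer (using a hard-coded stand-in string for illustration, since the real one comes from `tokenizer.decode` above):

```python
# Split the decoded text on the "<assistant>:" marker to keep only the reply.
# The string below is a stand-in for tokenizer.decode(outputs[0], skip_special_tokens=True).
decoded = (
    "<user>: Convert this text to reading level 6: Heavy drinkers have a "
    "greater chance of liver disease.\n"
    "<assistant>: Drinking a lot can hurt your liver."
)
reply = decoded.split("<assistant>:", 1)[1].strip()
print(reply)
```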
129
+
130
+ Use this prompt template:
131
+
132
+ ```python
133
  prompt = """
134
+ <user>: Convert this text to reading level 6: {TEXT}
135
  <assistant>:
136
+ """.strip()
137
+ ```
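Note that the `{TEXT}` placeholder in the template is a plain substring of a regular string, not an f-string field; one way to fill it (a sketch using `str.replace`):

```python
# Fill the {TEXT} placeholder with the passage to simplify.
template = """
<user>: Convert this text to reading level 6: {TEXT}
<assistant>:
""".strip()

source_text = "Nearly 88,000 people die from alcohol-related causes each year in the US."
prompt = template.replace("{TEXT}", source_text)
print(prompt)
```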
138
 
139
  ## Training Details
140
 
 
214
  - **Hardware Type:** GPU (NVIDIA A100)
215
  - **Hours used:** 120 hours
216
  - **Cloud Provider:** AWS
217
+ - **Compute Region:** US West (Utah)
218
  - **Carbon Emitted:** 500 kg CO2eq
219
 
220
  ## Technical Specifications [optional]