{
"cells": [
{
"cell_type": "code",
"execution_count": 1,
"id": "5a7c444a-5800-45c3-9cfa-8a73de12c8e5",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"HF_HOME\"] = \"/workspace/local-HF-cache/\"\n",
"os.environ[\"HF_HUB_ENABLE_HF_TRANSFER\"] = \"1\"\n"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "5dd927f1-1cad-43ac-ae9c-817e75048350",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"NVIDIA GeForce RTX 4090 (compute capability 8.9) supports NVIDIA Ampere or later, enabled TF32 in PyTorch.\n"
]
}
],
"source": [
"import torch\n",
"\n",
"from textsum.utils import check_ampere_gpu\n",
"check_ampere_gpu() # automatically enables TF32 if Ampere+ available\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "0c252525-7f63-4da1-8c9c-f939873cc484",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"12/05/2024 05:46:29 INFO PyTorch version 2.4.1+cu124 available.\n"
]
},
{
"data": {
"text/plain": [
"DatasetDict({\n",
" train: Dataset({\n",
" features: ['title', 'summary_official', 'metadata', 'source', 'summary_chunks', 'summary_generated'],\n",
" num_rows: 1315\n",
" })\n",
"})"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from datasets import load_dataset\n",
"\n",
"# Login using e.g. `huggingface-cli login` to access this dataset\n",
"ds = load_dataset(\"pszemraj/scriptbase-pegX-summaries\", \"no-text-4beams\")\n",
"ds"
]
},
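{
"cell_type": "markdown",
"id": "b4e1a9d2-77c3-4f0e-9d21-5e8c0a3f6b17",
"metadata": {},
"source": [
"Quick sanity check (a sketch, assuming the `ds` loaded above): peek at the start of one row's `summary_chunks` field to see what the reduce step will consume."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c7d3f0e8-2a14-4b6a-8f9d-1e0a5c2b9d44",
"metadata": {},
"outputs": [],
"source": [
"# inspect the first training example's chunked-summary input\n",
"print(ds[\"train\"][0][\"summary_chunks\"][:500])"
]
},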
{
"cell_type": "code",
"execution_count": 4,
"id": "d1be6062-f813-496d-b15f-4db98b02d66f",
"metadata": {},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "3cf7628f7d1e403d9339a1bb46a13610",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"Downloading shards: 0%| | 0/3 [00:00<?, ?it/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"12/05/2024 05:46:32 INFO We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).\n"
]
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "a2adc839e69c4c799b789ea92e6e90b9",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"Loading checkpoint shards: 0%| | 0/3 [00:00<?, ?it/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "c5776ec846244a5ab3ae53c05bb10e02",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"tokenizer_config.json: 0%| | 0.00/20.8k [00:00<?, ?B/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"A computer-implemented method of generating a syntactic object involves providing input data sets containing one or more words, each associated with at least one non-adjacent second word, and creating an exocentric relationship between the words by applying neo-ian event semantics. This neo-antagonistic effect results in the generation of a syntactic object, which is then stored for future use. Additionally, a method of learning and using language involves creating a lexicon of words with at least two possible states, selecting a base state for a computational operation, and applying the computational operation to the base state to form a new output state. Furthermore, a computer-implemented method for changing workspaces involves merging two workspaces based on conditions such as an impenetrable condition, constraint on movement, or resource restriction.\n"
]
}
],
"source": [
"import torch\n",
"from transformers import pipeline\n",
"\n",
"pipe = pipeline(\n",
" \"text2text-generation\",\n",
" model=\"pszemraj/flan-t5-xl-summary-map-reduce-1024\",\n",
" device_map=\"auto\",\n",
")\n",
"\n",
"# examples (note: the second `text` assignment below overwrites the first)\n",
"text = \"\"\"\"Sangers on a Train\" is a 1950 film about a train driver, Guy Haines, who discovers his wife, Miriam, has been murdered in Metcalf, Washington, DC. The film delves into the relationship between Guy and Anne Burton, focusing on Guy's desire for Anne to marry him.\n",
"\"Screentalk\" is a comedy about Anne Burton and her husband, Guy Haines, who are investigating the murder of their daughter, Miriam. The plot revolves around Anne's relationship with Bruno, who has been arrested for his wife's murder. In the second set, Guy and Anne meet at a tennis court in Washington, DC, where they plan to play against each other. Hennessy and Hammond investigate the crime scene, leading to Guy's arrest.\n",
"\"The Announcer's Boom Forest Hills\" is a tennis game between Guy Haines and Bruno Antony, with the score six-five. In the second set, Haines leads three games to four, but his opponent, Bernard Reynolds, attacks him in the third set. Meanwhile, Anne Hennessy and Barbara Hammond are preparing for dinner at the amusement park, where Guy has been waiting for hours. A police car arrives, followed by a taxi. The boatman and detectives follow Guy through the queue, leading to the conclusion that Guy was the man responsible for the accident.\"\"\"\n",
"\n",
"text = \"\"\"A computer implemented method of generating a syntactic object. The method includes the steps of providing a plurality of input data sets, each input data set comprising one or more words, wherein each word is associated with at least one non-adjacent second word; creating an exocentric relationship between the first and second words by applying a neo-ian event semantics to the input data in such a way that the neo-antagonistic effect results in the generation of the syntactic object; and storing the generated syntactic object for future use.\n",
" A method of learning and using language is disclosed. The method includes the steps of creating a lexicon of words, wherein each word in the lexicon has at least two possible states, selecting a set of one or more of the possible states of the lexicon to be used as a base state for a subsequent computational operation, and applying the computational operation to the base state to form a new output state.\n",
" A computer implemented method for changing a first workspace to a second workspace. The method includes the steps of creating a new workspace by merging the first workspace with the second workspace, wherein the merging is based on at least one of: an impenetrable condition; a constraint on movement; and a resource restriction.\n",
" The brain is constantly loosing neurons because you doesn't want all the junk around.\"\"\"\n",
"\n",
"# generate\n",
"if torch.cuda.is_available():\n",
" torch.cuda.empty_cache()\n",
"with torch.autocast('cuda', dtype=torch.bfloat16):\n",
" res = pipe(\n",
" text,\n",
" max_new_tokens=512, # increase up to 1024 if needed\n",
" num_beams=4,\n",
" early_stopping=True,\n",
" truncation=True,\n",
"    )\n",
"print(res[0][\"generated_text\"])\n"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "93b0fda6-6478-4cfe-8ddc-acddb59edb1a",
"metadata": {},
"outputs": [],
"source": [
"pipe.model = torch.compile(pipe.model)"
]
},
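{
"cell_type": "markdown",
"id": "e8a2c5b0-4d17-4f3a-b6e9-0c7d1a4f2e58",
"metadata": {},
"source": [
"The first call after `torch.compile` triggers graph compilation and is slow; a short warm-up call (a sketch, assuming the `pipe` object defined above) keeps that one-time cost out of the dataset `map` below."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f1b6d9c3-8e25-4a70-9c4b-3d2e6f8a1c09",
"metadata": {},
"outputs": [],
"source": [
"# warm up the compiled model so the first mapped example isn't slow\n",
"_ = pipe(\"warm-up text\", max_new_tokens=8, num_beams=1, truncation=True)"
]
},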
{
"cell_type": "code",
"execution_count": null,
"id": "adf7599a-f52f-4519-b86b-b1079521b1ac",
"metadata": {},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "7a0bf9e015fd4072b0a1b38380bf47ce",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"Map: 0%| | 0/1315 [00:00<?, ? examples/s]"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"def reduce_summary(example):\n",
" if torch.cuda.is_available():\n",
" torch.cuda.empty_cache()\n",
"    with torch.autocast('cuda', dtype=torch.bfloat16):  # enabled, matching the generation cell above\n",
" res = pipe(\n",
" example['summary_chunks'],\n",
" max_new_tokens=512, # increase up to 1024 if needed\n",
" num_beams=4,\n",
" early_stopping=True,\n",
" truncation=True,\n",
" )\n",
" return {\"summary_reduced_flanT5xl\": res[0][\"generated_text\"]}\n",
"\n",
"ds = ds.map(reduce_summary, batched=False)\n",
"ds"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2788312f-804b-4a7e-b2b0-b7e20fac024d",
"metadata": {},
"outputs": [],
"source": [
"# ds.push_to_hub(\"pszemraj/scriptbase-pegX-summaries\", config_name=\"no-text-4beams-flanXL\", \n",
"# commit_description='generate with pszemraj/flan-t5-xl-summary-map-reduce-1024')"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}