czczup committed on
Commit
7aabdc1
1 Parent(s): 052d2b6

Upload folder using huggingface_hub

Files changed (1): README.md (+29 −0)
README.md CHANGED
@@ -348,6 +348,35 @@ print(f'User: {question}')
 
  print(f'Assistant: {response}')
  ```
 
+ ### Streaming output
+
+ Besides this method, you can also use the following code to get streamed output.
+
+ ```python
+ from transformers import TextIteratorStreamer
+ from threading import Thread
+
+ # Initialize the streamer
+ streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True, timeout=10)
+ # Define the generation configuration
+ generation_config = dict(num_beams=1, max_new_tokens=1024, do_sample=False, streamer=streamer)
+ # Run model.chat in a background thread so the main thread can consume the stream
+ thread = Thread(target=model.chat, kwargs=dict(
+     tokenizer=tokenizer, pixel_values=pixel_values, question=question,
+     history=None, return_history=False, generation_config=generation_config,
+ ))
+ thread.start()
+
+ # Initialize an empty string to store the generated text
+ generated_text = ''
+ # Loop through the streamer to get the new text as it is generated
+ for new_text in streamer:
+     if new_text == model.conv_template.sep:
+         break
+     generated_text += new_text
+     print(new_text, end='', flush=True)  # Print each new chunk of generated text on the same line
+ ```
+
  ## Finetune
 
  SWIFT from the ModelScope community supports fine-tuning (image/video) of InternVL; please check [this link](https://github.com/modelscope/swift/blob/main/docs/source_en/Multi-Modal/internvl-best-practice.md) for more details.
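
The streaming pattern added in this commit works because `TextIteratorStreamer` is an iterator fed from a background generation thread: `model.chat` pushes text chunks into it as they are produced, while the main thread pulls them out. A minimal stdlib-only sketch of that producer-consumer mechanism, with no `transformers` dependency (the `TextStreamer` class and `fake_generate` function below are hypothetical stand-ins for `TextIteratorStreamer` and `model.chat`):

```python
import queue
import threading

class TextStreamer:
    """Hypothetical stand-in for TextIteratorStreamer: an iterator backed by a
    queue that a background thread fills, terminated by a sentinel value."""
    _SENTINEL = object()

    def __init__(self, timeout=10):
        self._queue = queue.Queue()
        self._timeout = timeout

    def put(self, text):
        # Called by the producer (generation) thread for each new chunk
        self._queue.put(text)

    def end(self):
        # Called by the producer when generation is finished
        self._queue.put(self._SENTINEL)

    def __iter__(self):
        return self

    def __next__(self):
        # Blocks until the producer supplies the next chunk (or times out)
        item = self._queue.get(timeout=self._timeout)
        if item is self._SENTINEL:
            raise StopIteration
        return item

def fake_generate(streamer):
    # Stand-in for model.chat(...): emits chunks, then signals completion
    for chunk in ['Hello', ', ', 'world', '!']:
        streamer.put(chunk)
    streamer.end()

streamer = TextStreamer()
thread = threading.Thread(target=fake_generate, args=(streamer,))
thread.start()

generated_text = ''
for new_text in streamer:
    generated_text += new_text
thread.join()
print(generated_text)  # → Hello, world!
```

The real `TextIteratorStreamer` follows the same design: generation must run in a separate thread, because iterating the streamer blocks the calling thread until the model produces more tokens.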