czczup committed (verified)
Commit 07c2078 · 1 Parent(s): ac62eaa

Update README.md

Files changed (1):
  1. README.md (+5 -7)
README.md CHANGED
@@ -81,7 +81,7 @@ The training pipeline for a single model in InternVL 2.5 is structured across th
 
 We introduce a progressive scaling strategy to align the vision encoder with LLMs efficiently. This approach trains with smaller LLMs first (e.g., 20B) to optimize foundational visual capabilities and cross-modal alignment before transferring the vision encoder to larger LLMs (e.g., 72B) without retraining. This reuse skips intermediate stages for larger models.
 
-![image/png](https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/AVb_PSxhJq1z2eUFNYoqQ.png)
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/64006c09330a45b03605bba3/UoNUyS7ctN5pBxNv9KnzH.png)
 
 Compared to Qwen2-VL's 1.4 trillion tokens, InternVL2.5-78B uses only 120 billion tokens—less than one-tenth. This strategy minimizes redundancy, maximizes pre-trained component reuse, and enables efficient training for complex vision-language tasks.
 
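As an aside on the paragraph above: the reuse it describes amounts to loading an already-aligned vision encoder and pairing it with a larger LLM, so the early alignment stages are not repeated. A minimal PyTorch sketch of that idea, with stub modules standing in for the real InternViT encoder and LLM (the class names and checkpoint path are hypothetical, not the actual InternVL training code):

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the real InternViT encoder and an LLM backbone.
class VisionEncoder(nn.Module):
    def __init__(self, dim: int = 1024):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, pixel_features: torch.Tensor) -> torch.Tensor:
        return self.proj(pixel_features)

class VisionLanguageModel(nn.Module):
    def __init__(self, vision_encoder: nn.Module, llm: nn.Module):
        super().__init__()
        self.vision_encoder = vision_encoder
        self.llm = llm

# Stage A (done earlier): the encoder was aligned while paired with a small LLM.
encoder = VisionEncoder()
# encoder.load_state_dict(torch.load('vit_aligned_with_20b.pt'))  # illustrative path

# Stage B: reuse that encoder with a larger LLM without re-running alignment.
model = VisionLanguageModel(encoder, llm=nn.Identity())  # Identity stands in for a 72B LLM

# Freeze the reused encoder so subsequent stages train only the connector/LLM.
for p in model.vision_encoder.parameters():
    p.requires_grad = False
```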
 
@@ -164,7 +164,7 @@ As shown in the following figure, from InternVL 1.5 to 2.0 and then to 2.5, the
 
 ### Video Understanding
 
-![image/png](https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/uD5aYt2wNYL94Xn8MOVih.png)
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/64006c09330a45b03605bba3/tcwH-i1qc8H16En-7AZ5M.png)
 
 ## Evaluation on Language Capability
 
 
@@ -541,10 +541,10 @@ Many repositories now support fine-tuning of the InternVL series models, includi
 
 ### LMDeploy
 
-LMDeploy is a toolkit for compressing, deploying, and serving LLM, developed by the MMRazor and MMDeploy teams.
+LMDeploy is a toolkit for compressing, deploying, and serving LLMs & VLMs.
 
 ```sh
-pip install "lmdeploy>=0.5.3"
+pip install "lmdeploy>=0.6.4"
 ```
 
 LMDeploy abstracts the complex inference process of multi-modal Vision-Language Models (VLM) into an easy-to-use pipeline, similar to the Large Language Model (LLM) inference pipeline.
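
For reference, a minimal single-image use of that pipeline looks like the sketch below; the 8B model ID and the tiger image URL are illustrative choices, not taken from this diff:

```python
from lmdeploy import pipeline, TurbomindEngineConfig
from lmdeploy.vl import load_image

# Build the VLM pipeline; a larger session_len leaves room for image tokens.
pipe = pipeline('OpenGVLab/InternVL2_5-8B',  # illustrative model ID
                backend_config=TurbomindEngineConfig(session_len=8192))

image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg')
response = pipe(('describe this image', image))
print(response.text)
```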
@@ -568,8 +568,6 @@ If `ImportError` occurs while executing this case, please install the required d
 
 When dealing with multiple images, you can put them all in one list. Keep in mind that multiple images will lead to a higher number of input tokens, and as a result, the size of the context window typically needs to be increased.
 
-question = 'Describe this video in detail.'
-
 ```python
 from lmdeploy import pipeline, TurbomindEngineConfig
 from lmdeploy.vl import load_image
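
The fragment above is only the start of the multi-image block in the README; a self-contained sketch of the pattern it introduces follows, with the model ID and image URLs illustrative. A list of images is passed alongside a single prompt, and `session_len` is raised because every image adds input tokens:

```python
from lmdeploy import pipeline, TurbomindEngineConfig
from lmdeploy.vl import load_image

# More images mean more input tokens, so enlarge the context window.
pipe = pipeline('OpenGVLab/InternVL2_5-8B',  # illustrative model ID
                backend_config=TurbomindEngineConfig(session_len=16384))

image_urls = [
    'https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/human-pose.jpg',
    'https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/det.jpg',
]
images = [load_image(url) for url in image_urls]

# One prompt paired with a list of images is the multi-image input form.
response = pipe(('describe these two images', images))
print(response.text)
```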
@@ -633,7 +631,7 @@ print(sess.response.text)
 LMDeploy's `api_server` enables models to be easily packed into services with a single command. The provided RESTful APIs are compatible with OpenAI's interfaces. Below is an example of service startup:
 
 ```shell
-lmdeploy serve api_server OpenGVLab/InternVL2_5-38B --backend turbomind --server-port 23333 --tp 2
+lmdeploy serve api_server OpenGVLab/InternVL2_5-38B --server-port 23333 --tp 2
 ```
 
 To use the OpenAI-style interface, you need to install OpenAI:
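
Once the server is up and `openai` is installed, a typical OpenAI-style request against it looks like the sketch below; the port matches the startup command above, the API key is a placeholder, and the image URL is illustrative:

```python
from openai import OpenAI

client = OpenAI(api_key='YOUR_API_KEY',  # placeholder; a local server does not check it by default
                base_url='http://0.0.0.0:23333/v1')

# Ask the server which model it is serving rather than hard-coding the name.
model_name = client.models.list().data[0].id

response = client.chat.completions.create(
    model=model_name,
    messages=[{
        'role': 'user',
        'content': [
            {'type': 'text', 'text': 'Describe this image.'},
            {'type': 'image_url',
             'image_url': {'url': 'https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg'}},
        ],
    }],
    temperature=0.8,
)
print(response.choices[0].message.content)
```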
 