czczup committed on
Commit
81ae45a
1 Parent(s): 7bdd9ff

Upload folder using huggingface_hub

README.md CHANGED
@@ -11,7 +11,7 @@ pipeline_tag: image-text-to-text
11
 
12
  ## Introduction
13
 
14
- We are excited to announce the release of InternVL 2.0, the latest addition to the InternVL series of multimodal large language models. InternVL 2.0 features a variety of **instruction-tuned models**, ranging from 2 billion to 108 billion parameters. This repository contains the instruction-tuned InternVL2-26B model.
15
 
16
  Compared to the state-of-the-art open-source multimodal large language models, InternVL 2.0 surpasses most open-source models. It demonstrates competitive performance on par with proprietary commercial models across various capabilities, including document and chart comprehension, infographics QA, scene text understanding and OCR tasks, scientific and mathematical problem solving, as well as cultural understanding and integrated multimodal capabilities.
17
 
@@ -60,8 +60,8 @@ InternVL 2.0 is a multimodal large language model series, featuring models of va
60
  | Model Size | - | 34B | 34B | 25.5B | 25.5B |
61
  | | | | | | |
62
  | MVBench | - | - | - | 52.1 | 67.5 |
63
- | Video-MME<br>wo subs | 59.9 | 59.0 | 52.0 | TBD | TBD |
64
- | Video-MME<br>w/ subs | 63.3 | 59.4 | 54.9 | TBD | TBD |
65
 
66
 - We evaluate our models on MVBench by extracting 16 frames from each video and resizing each frame to a 448x448 image.
67
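The 16-frame sampling above can be sketched as follows — a minimal illustration with synthetic stand-in frames, not the actual evaluation script; `sample_frames` and the frame counts are placeholders:

```python
import numpy as np
from PIL import Image

def sample_frames(frames, num_frames=16, size=448):
    """Uniformly sample `num_frames` frames and resize each to size x size."""
    indices = np.linspace(0, len(frames) - 1, num_frames).round().astype(int)
    return [frames[i].resize((size, size)) for i in indices]

# Stand-in for decoded video frames; a real run would decode the video first.
video = [Image.new('RGB', (1280, 720)) for _ in range(300)]
sampled = sample_frames(video)
print(len(sampled), sampled[0].size)  # 16 (448, 448)
```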
 
@@ -71,6 +71,8 @@ Limitations: Although we have made efforts to ensure the safety of the model dur
71
 
72
 We provide example code to run InternVL2-26B using `transformers`.
73
 
74
 > Please use transformers==4.37.2 to ensure the model works as expected.
75
 
76
  ```python
@@ -340,7 +342,7 @@ from lmdeploy import pipeline, TurbomindEngineConfig, ChatTemplateConfig
340
  from lmdeploy.vl import load_image
341
 
342
  model = 'OpenGVLab/InternVL2-26B'
343
- system_prompt = '我是书生·万象,英文名是InternVL,是由上海人工智能实验室及多家合作单位联合开发的多模态大语言模型。人工智能实验室致力于原始技术创新,开源开放,共享共创,推动科技进步和产业发展。'
344
  image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg')
345
  chat_template_config = ChatTemplateConfig('internvl-internlm2')
346
  chat_template_config.meta_instruction = system_prompt
@@ -356,13 +358,15 @@ If `ImportError` occurs while executing this case, please install the required d
356
 
357
  When dealing with multiple images, you can put them all in one list. Keep in mind that multiple images will lead to a higher number of input tokens, and as a result, the size of the context window typically needs to be increased.
358
 
359
  ```python
360
  from lmdeploy import pipeline, TurbomindEngineConfig, ChatTemplateConfig
361
  from lmdeploy.vl import load_image
362
  from lmdeploy.vl.constants import IMAGE_TOKEN
363
 
364
  model = 'OpenGVLab/InternVL2-26B'
365
- system_prompt = '我是书生·万象,英文名是InternVL,是由上海人工智能实验室及多家合作单位联合开发的多模态大语言模型。人工智能实验室致力于原始技术创新,开源开放,共享共创,推动科技进步和产业发展。'
366
  chat_template_config = ChatTemplateConfig('internvl-internlm2')
367
  chat_template_config.meta_instruction = system_prompt
368
  pipe = pipeline(model, chat_template_config=chat_template_config,
@@ -388,7 +392,7 @@ from lmdeploy import pipeline, TurbomindEngineConfig, ChatTemplateConfig
388
  from lmdeploy.vl import load_image
389
 
390
  model = 'OpenGVLab/InternVL2-26B'
391
- system_prompt = '我是书生·万象,英文名是InternVL,是由上海人工智能实验室及多家合作单位联合开发的多模态大语言模型。人工智能实验室致力于原始技术创新,开源开放,共享共创,推动科技进步和产业发展。'
392
  chat_template_config = ChatTemplateConfig('internvl-internlm2')
393
  chat_template_config.meta_instruction = system_prompt
394
  pipe = pipeline(model, chat_template_config=chat_template_config,
@@ -412,7 +416,7 @@ from lmdeploy import pipeline, TurbomindEngineConfig, ChatTemplateConfig, Genera
412
  from lmdeploy.vl import load_image
413
 
414
  model = 'OpenGVLab/InternVL2-26B'
415
- system_prompt = '我是书生·万象,英文名是InternVL,是由上海人工智能实验室及多家合作单位联合开发的多模态大语言模型。人工智能实验室致力于原始技术创新,开源开放,共享共创,推动科技进步和产业发展。'
416
  chat_template_config = ChatTemplateConfig('internvl-internlm2')
417
  chat_template_config.meta_instruction = system_prompt
418
  pipe = pipeline(model, chat_template_config=chat_template_config,
@@ -428,12 +432,12 @@ print(sess.response.text)
428
 
429
  #### Service
430
 
431
- For lmdeploy v0.5.0, please configure the chat template config first. Create the following JSON file `chat_template.json`.
432
 
433
  ```json
434
  {
435
- "model_name":"internlm2",
436
- "meta_instruction":"我是书生·万象,英文名是InternVL,是由上海人工智能实验室及多家合作单位联合开发的多模态大语言模型。人工智能实验室致力于原始技术创新,开源开放,共享共创,推动科技进步和产业发展。",
437
  "stop_words":["<|im_start|>", "<|im_end|>"]
438
  }
439
  ```
@@ -441,16 +445,50 @@ For lmdeploy v0.5.0, please configure the chat template config first. Create the
441
 LMDeploy's `api_server` enables models to be easily packaged into services with a single command. The provided RESTful APIs are compatible with OpenAI's interfaces. Below is an example of service startup:
442
 
443
  ```shell
444
- lmdeploy serve api_server OpenGVLab/InternVL2-26B --backend turbomind --chat-template chat_template.json
445
  ```
446
 
447
- The default port of `api_server` is `23333`. After the server is launched, you can communicate with the server from the terminal through `api_client`:
448
 
449
  ```shell
450
- lmdeploy serve api_client http://0.0.0.0:23333
451
  ```
452
 
453
- You can browse and try out the `api_server` APIs online via the Swagger UI at `http://0.0.0.0:23333`, or read the API specification [here](https://github.com/InternLM/lmdeploy/blob/main/docs/en/serving/restful_api.md).
454
 
455
  ## License
456
 
@@ -477,7 +515,7 @@ If you find this project useful in your research, please consider citing:
477
 
478
  ## 简介
479
 
480
- 我们很高兴宣布 InternVL 2.0 的发布,这是 InternVL 系列多模态大语言模型的最新版本。InternVL 2.0 提供了多种**指令微调**的模型,参数从 20 亿到 1080 亿不等。此仓库包含经过指令微调的 InternVL2-26B 模型。
481
 
482
 与最先进的开源多模态大语言模型相比,InternVL 2.0 超越了大多数开源模型。它在各种能力上表现出与闭源商业模型相媲美的竞争力,包括文档和图表理解、信息图表问答、场景文本理解和 OCR 任务、科学和数学问题解决,以及文化理解和综合多模态能力。
483
 
@@ -526,8 +564,8 @@ InternVL 2.0 是一个多模态大语言模型系列,包含各种规模的模
526
  | 模型大小 | - | 34B | 34B | 25.5B | 25.5B |
527
  | | | | | | |
528
  | MVBench | - | - | - | 52.1 | 67.5 |
529
- | Video-MME<br>wo subs | 59.9 | 59.0 | 52.0 | TBD | TBD |
530
- | Video-MME<br>w/ subs | 63.3 | 59.4 | 54.9 | TBD | TBD |
531
 
532
  - 我们通过从每个视频中提取16帧来评估我们的模型在MVBench上的性能,每个视频帧被调整为448x448的图像。
533
 
@@ -537,6 +575,8 @@ InternVL 2.0 是一个多模态大语言模型系列,包含各种规模的模
537
 
538
  我们提供了一个示例代码,用于使用 `transformers` 运行 InternVL2-26B。
539
 
540
  > 请使用 transformers==4.37.2 以确保模型正常运行。
541
 
542
  示例代码请[点击这里](#quick-start)。
@@ -560,7 +600,7 @@ from lmdeploy import pipeline, TurbomindEngineConfig, ChatTemplateConfig
560
  from lmdeploy.vl import load_image
561
 
562
  model = 'OpenGVLab/InternVL2-26B'
563
- system_prompt = '我是书生·万象,英文名是InternVL,是由上海人工智能实验室及多家合作单位联合开发的多模态大语言模型。人工智能实验室致力于原始技术创新,开源开放,共享共创,推动科技进步和产业发展。'
564
  image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg')
565
  chat_template_config = ChatTemplateConfig('internvl-internlm2')
566
  chat_template_config.meta_instruction = system_prompt
@@ -582,7 +622,7 @@ from lmdeploy.vl import load_image
582
  from lmdeploy.vl.constants import IMAGE_TOKEN
583
 
584
  model = 'OpenGVLab/InternVL2-26B'
585
- system_prompt = '我是书生·万象,英文名是InternVL,是由上海人工智能实验室及多家合作单位联合开发的多模态大语言模型。人工智能实验室致力于原始技术创新,开源开放,共享共创,推动科技进步和产业发展。'
586
  chat_template_config = ChatTemplateConfig('internvl-internlm2')
587
  chat_template_config.meta_instruction = system_prompt
588
  pipe = pipeline(model, chat_template_config=chat_template_config,
@@ -607,7 +647,7 @@ from lmdeploy import pipeline, TurbomindEngineConfig, ChatTemplateConfig
607
  from lmdeploy.vl import load_image
608
 
609
  model = 'OpenGVLab/InternVL2-26B'
610
- system_prompt = '我是书生·万象,英文名是InternVL,是由上海人工智能实验室及多家合作单位联合开发的多模态大语言模型。人工智能实验室致力于原始技术创新,开源开放,共享共创,推动科技进步和产业发展。'
611
  chat_template_config = ChatTemplateConfig('internvl-internlm2')
612
  chat_template_config.meta_instruction = system_prompt
613
  pipe = pipeline(model, chat_template_config=chat_template_config,
@@ -631,7 +671,7 @@ from lmdeploy import pipeline, TurbomindEngineConfig, ChatTemplateConfig, Genera
631
  from lmdeploy.vl import load_image
632
 
633
  model = 'OpenGVLab/InternVL2-26B'
634
- system_prompt = '我是书生·万象,英文名是InternVL,是由上海人工智能实验室及多家合作单位联合开发的多模态大语言模型。人工智能实验室致力于原始技术创新,开源开放,共享共创,推动科技进步和产业发展。'
635
  chat_template_config = ChatTemplateConfig('internvl-internlm2')
636
  chat_template_config.meta_instruction = system_prompt
637
  pipe = pipeline(model, chat_template_config=chat_template_config,
@@ -647,12 +687,12 @@ print(sess.response.text)
647
 
648
  #### API部署
649
 
650
- 对于 lmdeploy v0.5.0,请先配置聊天模板配置文件。创建如下的 JSON 文件 `chat_template.json`。
651
 
652
  ```json
653
  {
654
- "model_name":"internlm2",
655
- "meta_instruction":"我是书生·万象,英文名是InternVL,是由上海人工智能实验室及多家合作单位联合开发的多模态大语言模型。人工智能实验室致力于原始技术创新,开源开放,共享共创,推动科技进步和产业发展。",
656
  "stop_words":["<|im_start|>", "<|im_end|>"]
657
  }
658
  ```
@@ -660,16 +700,50 @@ print(sess.response.text)
660
  LMDeploy 的 `api_server` 使模型能够通过一个命令轻松打包成服务。提供的 RESTful API 与 OpenAI 的接口兼容。以下是服务启动的示例:
661
 
662
  ```shell
663
- lmdeploy serve api_server OpenGVLab/InternVL2-26B --backend turbomind --chat-template chat_template.json
664
  ```
665
 
666
- `api_server` 的默认端口是 `23333`。服务器启动后,你可以通过 `api_client` 在终端与服务器通信:
667
 
668
  ```shell
669
- lmdeploy serve api_client http://0.0.0.0:23333
670
  ```
671
 
672
- 你可以通过 `http://0.0.0.0:23333` 的 swagger UI 在线查看和试用 `api_server` 的 API,也可以从 [这里](https://github.com/InternLM/lmdeploy/blob/main/docs/en/serving/restful_api.md) 阅读 API 规范。
673
 
674
  ## 开源许可证
675
 
 
11
 
12
  ## Introduction
13
 
14
+ We are excited to announce the release of InternVL 2.0, the latest addition to the InternVL series of multimodal large language models. InternVL 2.0 features a variety of **instruction-tuned models**, ranging from 1 billion to 108 billion parameters. This repository contains the instruction-tuned InternVL2-26B model.
15
 
16
  Compared to the state-of-the-art open-source multimodal large language models, InternVL 2.0 surpasses most open-source models. It demonstrates competitive performance on par with proprietary commercial models across various capabilities, including document and chart comprehension, infographics QA, scene text understanding and OCR tasks, scientific and mathematical problem solving, as well as cultural understanding and integrated multimodal capabilities.
17
 
 
60
  | Model Size | - | 34B | 34B | 25.5B | 25.5B |
61
  | | | | | | |
62
  | MVBench | - | - | - | 52.1 | 67.5 |
63
+ | Video-MME<br>wo subs | 59.9 | 59.0 | 52.0 | TODO | TODO |
64
+ | Video-MME<br>w/ subs | 63.3 | 59.4 | 54.9 | TODO | TODO |
65
 
66
 - We evaluate our models on MVBench by extracting 16 frames from each video and resizing each frame to a 448x448 image.
67
 
 
71
 
72
 We provide example code to run InternVL2-26B using `transformers`.
73
 
74
+ We also welcome you to try the InternVL2 series models in our [online demo](https://internvl.opengvlab.com/). Currently, due to limited GPU resources with public IP addresses, we can only deploy models up to 26B. We will expand capacity soon and deploy larger models to the online demo.
75
+
76
 > Please use transformers==4.37.2 to ensure the model works as expected.
77
 
78
  ```python
 
342
  from lmdeploy.vl import load_image
343
 
344
  model = 'OpenGVLab/InternVL2-26B'
345
+ system_prompt = '我是书生·万象,英文名是InternVL,是由上海人工智能实验室及多家合作单位联合开发的多模态大语言模型。'
346
  image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg')
347
  chat_template_config = ChatTemplateConfig('internvl-internlm2')
348
  chat_template_config.meta_instruction = system_prompt
 
358
 
359
  When dealing with multiple images, you can put them all in one list. Keep in mind that multiple images will lead to a higher number of input tokens, and as a result, the size of the context window typically needs to be increased.
360
 
361
+ > Warning: Due to the scarcity of multi-image conversation data, the performance on multi-image tasks may be unstable, and it may require multiple attempts to achieve satisfactory results.
362
+
363
  ```python
364
  from lmdeploy import pipeline, TurbomindEngineConfig, ChatTemplateConfig
365
  from lmdeploy.vl import load_image
366
  from lmdeploy.vl.constants import IMAGE_TOKEN
367
 
368
  model = 'OpenGVLab/InternVL2-26B'
369
+ system_prompt = '我是书生·万象,英文名是InternVL,是由上海人工智能实验室及多家合作单位联合开发的多模态大语言模型。'
370
  chat_template_config = ChatTemplateConfig('internvl-internlm2')
371
  chat_template_config.meta_instruction = system_prompt
372
  pipe = pipeline(model, chat_template_config=chat_template_config,
 
392
  from lmdeploy.vl import load_image
393
 
394
  model = 'OpenGVLab/InternVL2-26B'
395
+ system_prompt = '我是书生·万象,英文名是InternVL,是由上海人工智能实验室及多家合作单位联合开发的多模态大语言模型。'
396
  chat_template_config = ChatTemplateConfig('internvl-internlm2')
397
  chat_template_config.meta_instruction = system_prompt
398
  pipe = pipeline(model, chat_template_config=chat_template_config,
 
416
  from lmdeploy.vl import load_image
417
 
418
  model = 'OpenGVLab/InternVL2-26B'
419
+ system_prompt = '我是书生·万象,英文名是InternVL,是由上海人工智能实验室及多家合作单位联合开发的多模态大语言模型。'
420
  chat_template_config = ChatTemplateConfig('internvl-internlm2')
421
  chat_template_config.meta_instruction = system_prompt
422
  pipe = pipeline(model, chat_template_config=chat_template_config,
 
432
 
433
  #### Service
434
 
435
+ To deploy InternVL2 as an API, please configure the chat template config first. Create the following JSON file `chat_template.json`.
436
 
437
  ```json
438
  {
439
+ "model_name":"internvl-internlm2",
440
+ "meta_instruction":"我是书生·万象,英文名是InternVL,是由上海人工智能实验室及多家合作单位联合开发的多模态大语言模型。",
441
  "stop_words":["<|im_start|>", "<|im_end|>"]
442
  }
443
  ```
 
445
 LMDeploy's `api_server` enables models to be easily packaged into services with a single command. The provided RESTful APIs are compatible with OpenAI's interfaces. Below is an example of service startup:
446
 
447
  ```shell
448
+ lmdeploy serve api_server OpenGVLab/InternVL2-26B --model-name InternVL2-26B --backend turbomind --server-port 23333 --chat-template chat_template.json
449
  ```
450
 
451
+ To use the OpenAI-style interface, you need to install the OpenAI Python package:
452
 
453
  ```shell
454
+ pip install openai
455
+ ```
456
+
457
+ Then, use the code below to make the API call:
458
+
459
+ ```python
460
+ from openai import OpenAI
461
+
462
+ client = OpenAI(api_key='YOUR_API_KEY', base_url='http://0.0.0.0:23333/v1')
463
+ model_name = client.models.list().data[0].id
464
+ response = client.chat.completions.create(
465
+ model=model_name,
466
+ messages=[{
467
+ 'role':
468
+ 'user',
469
+ 'content': [{
470
+ 'type': 'text',
471
+ 'text': 'describe this image',
472
+ }, {
473
+ 'type': 'image_url',
474
+ 'image_url': {
475
+ 'url':
476
+ 'https://modelscope.oss-cn-beijing.aliyuncs.com/resource/tiger.jpeg',
477
+ },
478
+ }],
479
+ }],
480
+ temperature=0.8,
481
+ top_p=0.8)
482
+ print(response)
483
  ```
484
 
485
+ ### vLLM
486
+
487
+ TODO
488
+
489
+ ### Ollama
490
+
491
+ TODO
492
 
493
  ## License
494
 
 
515
 
516
  ## 简介
517
 
518
+ 我们很高兴宣布 InternVL 2.0 的发布,这是 InternVL 系列多模态大语言模型的最新版本。InternVL 2.0 提供了多种**指令微调**的模型,参数从 10 亿到 1080 亿不等。此仓库包含经过指令微调的 InternVL2-26B 模型。
519
 
520
 与最先进的开源多模态大语言模型相比,InternVL 2.0 超越了大多数开源模型。它在各种能力上表现出与闭源商业模型相媲美的竞争力,包括文档和图表理解、信息图表问答、场景文本理解和 OCR 任务、科学和数学问题解决,以及文化理解和综合多模态能力。
521
 
 
564
  | 模型大小 | - | 34B | 34B | 25.5B | 25.5B |
565
  | | | | | | |
566
  | MVBench | - | - | - | 52.1 | 67.5 |
567
+ | Video-MME<br>wo subs | 59.9 | 59.0 | 52.0 | TODO | TODO |
568
+ | Video-MME<br>w/ subs | 63.3 | 59.4 | 54.9 | TODO | TODO |
569
 
570
  - 我们通过从每个视频中提取16帧来评估我们的模型在MVBench上的性能,每个视频帧被调整为448x448的图像。
571
 
 
575
 
576
  我们提供了一个示例代码,用于使用 `transformers` 运行 InternVL2-26B。
577
 
578
+ 我们也欢迎你在我们的[在线demo](https://internvl.opengvlab.com/)中体验InternVL2系列模型。由于具备公网IP地址的GPU资源有限,我们目前只能部署最大到26B的模型。我们会在不久之后扩容,把更大的模型部署到在线demo上,敬请期待。
579
+
580
  > 请使用 transformers==4.37.2 以确保模型正常运行。
581
 
582
  示例代码请[点击这里](#quick-start)。
 
600
  from lmdeploy.vl import load_image
601
 
602
  model = 'OpenGVLab/InternVL2-26B'
603
+ system_prompt = '我是书生·万象,英文名是InternVL,是由上海人工智能实验室及多家合作单位联合开发的多模态大语言模型。'
604
  image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg')
605
  chat_template_config = ChatTemplateConfig('internvl-internlm2')
606
  chat_template_config.meta_instruction = system_prompt
 
622
  from lmdeploy.vl.constants import IMAGE_TOKEN
623
 
624
  model = 'OpenGVLab/InternVL2-26B'
625
+ system_prompt = '我是书生·万象,英文名是InternVL,是由上海人工智能实验室及多家合作单位联合开发的多模态大语言模型。'
626
  chat_template_config = ChatTemplateConfig('internvl-internlm2')
627
  chat_template_config.meta_instruction = system_prompt
628
  pipe = pipeline(model, chat_template_config=chat_template_config,
 
647
  from lmdeploy.vl import load_image
648
 
649
  model = 'OpenGVLab/InternVL2-26B'
650
+ system_prompt = '我是书生·万象,英文名是InternVL,是由上海人工智能实验室及多家合作单位联合开发的多模态大语言模型。'
651
  chat_template_config = ChatTemplateConfig('internvl-internlm2')
652
  chat_template_config.meta_instruction = system_prompt
653
  pipe = pipeline(model, chat_template_config=chat_template_config,
 
671
  from lmdeploy.vl import load_image
672
 
673
  model = 'OpenGVLab/InternVL2-26B'
674
+ system_prompt = '我是书生·万象,英文名是InternVL,是由上海人工智能实验室及多家合作单位联合开发的多模态大语言模型。'
675
  chat_template_config = ChatTemplateConfig('internvl-internlm2')
676
  chat_template_config.meta_instruction = system_prompt
677
  pipe = pipeline(model, chat_template_config=chat_template_config,
 
687
 
688
  #### API部署
689
 
690
+ 为了将InternVL2部署成API,请先配置聊天模板配置文件。创建如下的 JSON 文件 `chat_template.json`。
691
 
692
  ```json
693
  {
694
+ "model_name":"internvl-internlm2",
695
+ "meta_instruction":"我是书生·万象,英文名是InternVL,是由上海人工智能实验室及多家合作单位联合开发的多模态大语言模型。",
696
  "stop_words":["<|im_start|>", "<|im_end|>"]
697
  }
698
  ```
 
700
  LMDeploy 的 `api_server` 使模型能够通过一个命令轻松打包成服务。提供的 RESTful API 与 OpenAI 的接口兼容。以下是服务启动的示例:
701
 
702
  ```shell
703
+ lmdeploy serve api_server OpenGVLab/InternVL2-26B --model-name InternVL2-26B --backend turbomind --server-port 23333 --chat-template chat_template.json
704
  ```
705
 
706
+ 为了使用OpenAI风格的API接口,您需要安装OpenAI:
707
 
708
  ```shell
709
+ pip install openai
710
  ```
711
 
712
+ 然后,使用下面的代码进行API调用:
713
+
714
+ ```python
715
+ from openai import OpenAI
716
+
717
+ client = OpenAI(api_key='YOUR_API_KEY', base_url='http://0.0.0.0:23333/v1')
718
+ model_name = client.models.list().data[0].id
719
+ response = client.chat.completions.create(
720
+ model="InternVL2-26B",
721
+ messages=[{
722
+ 'role':
723
+ 'user',
724
+ 'content': [{
725
+ 'type': 'text',
726
+ 'text': 'describe this image',
727
+ }, {
728
+ 'type': 'image_url',
729
+ 'image_url': {
730
+ 'url':
731
+ 'https://modelscope.oss-cn-beijing.aliyuncs.com/resource/tiger.jpeg',
732
+ },
733
+ }],
734
+ }],
735
+ temperature=0.8,
736
+ top_p=0.8)
737
+ print(response)
738
+ ```
739
+
740
+ ### vLLM
741
+
742
+ TODO
743
+
744
+ ### Ollama
745
+
746
+ TODO
747
 
748
  ## 开源许可证
749
 
config.json CHANGED
@@ -91,7 +91,7 @@
91
  "tie_word_embeddings": false,
92
  "tokenizer_class": null,
93
  "top_k": 50,
94
- "top_p": null,
95
  "torch_dtype": "bfloat16",
96
  "torchscript": false,
97
  "transformers_version": "4.37.2",
 
91
  "tie_word_embeddings": false,
92
  "tokenizer_class": null,
93
  "top_k": 50,
94
+ "top_p": 1.0,
95
  "torch_dtype": "bfloat16",
96
  "torchscript": false,
97
  "transformers_version": "4.37.2",
configuration_intern_vit.py CHANGED
@@ -1,6 +1,6 @@
1
  # --------------------------------------------------------
2
  # InternVL
3
- # Copyright (c) 2023 OpenGVLab
4
  # Licensed under The MIT License [see LICENSE for details]
5
  # --------------------------------------------------------
6
  import os
 
1
  # --------------------------------------------------------
2
  # InternVL
3
+ # Copyright (c) 2024 OpenGVLab
4
  # Licensed under The MIT License [see LICENSE for details]
5
  # --------------------------------------------------------
6
  import os
configuration_internvl_chat.py CHANGED
@@ -1,6 +1,6 @@
1
  # --------------------------------------------------------
2
  # InternVL
3
- # Copyright (c) 2023 OpenGVLab
4
  # Licensed under The MIT License [see LICENSE for details]
5
  # --------------------------------------------------------
6
 
 
1
  # --------------------------------------------------------
2
  # InternVL
3
+ # Copyright (c) 2024 OpenGVLab
4
  # Licensed under The MIT License [see LICENSE for details]
5
  # --------------------------------------------------------
6
 
conversation.py CHANGED
@@ -330,13 +330,16 @@ def get_conv_template(name: str) -> Conversation:
330
  return conv_templates[name].copy()
331
 
332
 
333
- # Note that for inference, using the Hermes-2 and internlm2-chat templates is equivalent.
 
 
 
334
  register_conv_template(
335
  Conversation(
336
  name='Hermes-2',
337
  system_template='<|im_start|>system\n{system_message}',
338
  # note: The new system prompt was not used here to avoid changes in benchmark performance.
339
- # system_message='我是书生·万象,英文名是InternVL,是由上海人工智能实验室及多家合作单位联合开发的多模态大语言模型。人工智能实验室致力于原始技术创新,开源开放,共享共创,推动科技进步和产业发展。',
340
  system_message='你是由上海人工智能实验室联合商汤科技开发的书生多模态大模型,英文名叫InternVL, 是一个有用无害的人工智能助手。',
341
  roles=('<|im_start|>user\n', '<|im_start|>assistant\n'),
342
  sep_style=SeparatorStyle.MPT,
@@ -357,7 +360,7 @@ register_conv_template(
357
  name='internlm2-chat',
358
  system_template='<|im_start|>system\n{system_message}',
359
  # note: The new system prompt was not used here to avoid changes in benchmark performance.
360
- # system_message='我是书生·万象,英文名是InternVL,是由上海人工智能实验室及多家合作单位联合开发的多模态大语言模型。人工智能实验室致力于原始技术创新,开源开放,共享共创,推动科技进步和产业发展。',
361
  system_message='你是由上海人工智能实验室联合商汤科技开发的书生多模态大模型,英文名叫InternVL, 是一个有用无害的人工智能助手。',
362
  roles=('<|im_start|>user\n', '<|im_start|>assistant\n'),
363
  sep_style=SeparatorStyle.MPT,
@@ -376,7 +379,7 @@ register_conv_template(
376
  name='phi3-chat',
377
  system_template='<|system|>\n{system_message}',
378
  # note: The new system prompt was not used here to avoid changes in benchmark performance.
379
- # system_message='我是书生·万象,英文名是InternVL,是由上海人工智能实验室及多家合作单位联合开发的多模态大语言模型。人工智能实验室致力于原始技术创新,开源开放,共享共创,推动科技进步和产业发展。',
380
  system_message='你是由上海人工智能实验室联合商汤科技开发的书生多模态大模型,英文名叫InternVL, 是一个有用无害的人工智能助手。',
381
  roles=('<|user|>\n', '<|assistant|>\n'),
382
  sep_style=SeparatorStyle.MPT,
 
330
  return conv_templates[name].copy()
331
 
332
 
333
+ # Both Hermes-2 and internlm2-chat are chatml-format conversation templates. The difference
334
+ # is that during training, the preprocessing function for the Hermes-2 template doesn't add
335
+ # <s> at the beginning of the tokenized sequence, while the internlm2-chat template does.
336
+ # Therefore, they are completely equivalent during inference.
337
  register_conv_template(
338
  Conversation(
339
  name='Hermes-2',
340
  system_template='<|im_start|>system\n{system_message}',
341
  # note: The new system prompt was not used here to avoid changes in benchmark performance.
342
+ # system_message='我是书生·万象,英文名是InternVL,是由上海人工智能实验室及多家合作单位联合开发的多模态大语言模型。',
343
  system_message='你是由上海人工智能实验室联合商汤科技开发的书生多模态大模型,英文名叫InternVL, 是一个有用无害的人工智能助手。',
344
  roles=('<|im_start|>user\n', '<|im_start|>assistant\n'),
345
  sep_style=SeparatorStyle.MPT,
 
360
  name='internlm2-chat',
361
  system_template='<|im_start|>system\n{system_message}',
362
  # note: The new system prompt was not used here to avoid changes in benchmark performance.
363
+ # system_message='我是书生·万象,英文名是InternVL,是由上海人工智能实验室及多家合作单位联合开发的多模态大语言模型。',
364
  system_message='你是由上海人工智能实验室联合商汤科技开发的书生多模态大模型,英文名叫InternVL, 是一个有用无害的人工智能助手。',
365
  roles=('<|im_start|>user\n', '<|im_start|>assistant\n'),
366
  sep_style=SeparatorStyle.MPT,
 
379
  name='phi3-chat',
380
  system_template='<|system|>\n{system_message}',
381
  # note: The new system prompt was not used here to avoid changes in benchmark performance.
382
+ # system_message='我是书生·万象,英文名是InternVL,是由上海人工智能实验室及多家合作单位联合开发的多模态大语言模型。',
383
  system_message='你是由上海人工智能实验室联合商汤科技开发的书生多模态大模型,英文名叫InternVL, 是一个有用无害的人工智能助手。',
384
  roles=('<|user|>\n', '<|assistant|>\n'),
385
  sep_style=SeparatorStyle.MPT,
modeling_intern_vit.py CHANGED
@@ -1,6 +1,6 @@
1
  # --------------------------------------------------------
2
  # InternVL
3
- # Copyright (c) 2023 OpenGVLab
4
  # Licensed under The MIT License [see LICENSE for details]
5
  # --------------------------------------------------------
6
  from typing import Optional, Tuple, Union
 
1
  # --------------------------------------------------------
2
  # InternVL
3
+ # Copyright (c) 2024 OpenGVLab
4
  # Licensed under The MIT License [see LICENSE for details]
5
  # --------------------------------------------------------
6
  from typing import Optional, Tuple, Union