Tags: Visual Question Answering · Transformers · Safetensors · llava · image-text-to-text · AIGC · LLaVA · Inference Endpoints
ponytail committed on
Commit a112ff5 • 1 Parent(s): a70b13c

Update README.md

Files changed (1): README.md (+1, −1)
README.md CHANGED
@@ -37,7 +37,7 @@ human-llava has a good performance in both general and special fields
 ## News and Update 🔥🔥🔥
 * Oct.23, 2024. **🤗[HumanCaption-HQ-311K](https://huggingface.co/datasets/OpenFace-CQUPT/HumanCaption-HQ-311K), is released!👍👍👍**
 * Sep.12, 2024. **🤗[HumanCaption-10M](https://huggingface.co/datasets/OpenFace-CQUPT/HumanCaption-10M), is released!👍👍👍**
-* Sep.8, 2024. **🤗[HumanLLaVA-llama-3-8B](https://huggingface.co/OpenFace-CQUPT/Human_LLaVA), is released!👍👍👍**
+* Sep.8, 2024. **🤗[HumanVLM](https://huggingface.co/OpenFace-CQUPT/Human_LLaVA), is released!👍👍👍**
 
 
 
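The renamed entry still points at the same repository, `OpenFace-CQUPT/Human_LLaVA`, which is tagged `llava` / `image-text-to-text` for Transformers. Below is a minimal sketch of querying it, assuming the standard LLaVA classes apply as the tags suggest; the prompt template and image URL are illustrative assumptions, not taken from the model card.

```python
# Minimal sketch: loading the model referenced in the diff above.
# Assumes the standard LLaVA classes in Transformers apply, per the
# model card tags (llava, image-text-to-text); the prompt template
# and image URL below are assumptions for illustration.
import requests
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "OpenFace-CQUPT/Human_LLaVA"  # repo id from the diff
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(model_id)

# Placeholder image URL; substitute any photo of a person.
url = "https://example.com/person.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Assumed LLaVA-style prompt template with an <image> placeholder.
prompt = "USER: <image>\nDescribe the person in this photo. ASSISTANT:"

inputs = processor(images=image, text=prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(output[0], skip_special_tokens=True))
```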