Symbol-LLM

AI & ML interests

Natural Language Processing, Large Language Models, Neuro-Symbolic

Recent Activity

reacted to their post with πŸš€ about 13 hours ago
reacted to their post with πŸ”₯ about 13 hours ago
posted an update about 13 hours ago

Organizations

Symbol-LLM's activity

reacted to their post with πŸš€πŸ”₯ about 13 hours ago
πŸ₯³ Thrilled to introduce our recent efforts on bootstrapping VLMs for multi-modal chain-of-thought reasoning !

πŸ“• Title: Vision-Language Models Can Self-Improve Reasoning via Reflection

πŸ”— Link: Vision-Language Models Can Self-Improve Reasoning via Reflection (2411.00855)

πŸ˜‡Takeaways:

- We found that VLMs can self-improve reasoning performance through a reflection mechanism, and importantly, this approach can scale through test-time computing.

- Evaluation on comprehensive and diverse Vision-Language reasoning tasks are included !
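The reflect-then-scale idea from the takeaways can be sketched as a simple control flow: sample several candidate answers, let a reflection step critique and revise each, then aggregate. This is a minimal, hypothetical sketch with stub functions standing in for the VLM calls (all function names and the revision rule here are assumptions, not the paper's implementation):

```python
import random

def generate_answer(question, seed):
    # Stub "VLM": returns a candidate answer with a confidence score.
    random.seed(seed)
    return {"answer": random.choice(["A", "B", "C"]), "confidence": random.random()}

def reflect(question, draft):
    # Stub reflection step: critique the draft and possibly revise it.
    # Toy rule: low-confidence drafts are revised toward answer "B".
    if draft["confidence"] < 0.5:
        return {"answer": "B", "confidence": draft["confidence"] + 0.3}
    return draft

def answer_with_reflection(question, n_samples=4):
    # Test-time scaling: sample several reasoning paths, reflect on each,
    # then return the majority-vote answer across the revised candidates.
    candidates = [reflect(question, generate_answer(question, s))
                  for s in range(n_samples)]
    votes = {}
    for c in candidates:
        votes[c["answer"]] = votes.get(c["answer"], 0) + 1
    return max(votes, key=votes.get)

print(answer_with_reflection("Which region of the image is brightest?"))
```

Increasing `n_samples` is the knob that trades extra test-time compute for more reliable aggregation.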
posted an update about 13 hours ago
reacted to maxiw's post with πŸš€πŸ‘ 9 days ago
Exciting to see open-source models thriving in the computer agent space! πŸ”₯
I just built a demo for OS-ATLAS: A Foundation Action Model For Generalist GUI Agents β€” check it out here: maxiw/OS-ATLAS

This demo predicts bounding boxes based on screenshot + instructions as input.
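Consuming such a prediction typically means extracting the box coordinates from the model's text output and mapping them onto the screenshot's pixel grid. A minimal sketch, assuming a `(x1,y1),(x2,y2)` span with coordinates normalized to a 0–1000 grid (both the output format and the normalization are assumptions for illustration, not the demo's exact convention):

```python
import re

def parse_box(model_output, width, height):
    # Extract a "(x1,y1),(x2,y2)" span from the model's text output and
    # scale the (assumed 0-1000 normalized) coordinates to pixel space.
    m = re.search(r"\((\d+),(\d+)\),\((\d+),(\d+)\)", model_output)
    if m is None:
        return None
    x1, y1, x2, y2 = map(int, m.groups())
    return (x1 * width // 1000, y1 * height // 1000,
            x2 * width // 1000, y2 * height // 1000)

print(parse_box("<box>(100,200),(300,400)</box>", 1920, 1080))
# → (192, 216, 576, 432)
```

The pixel-space box can then be drawn over the screenshot or handed to a click-execution backend.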
reacted to their post with πŸš€πŸ”₯ 17 days ago
πŸš€ Excited to introduce a new member of the OS-Copilot family: OS-Atlas - an open-sourced foundational action model for GUI agents

πŸ“˜ Paper: OS-ATLAS: A Foundation Action Model for Generalist GUI Agents (2410.23218)
πŸ”— Website: https://osatlas.github.io

πŸ˜‡ TL;DR: OS-Atlas offers:
1. State-of-the-Art GUI Grounding: Helps GUI agents accurately locate GUI elements.
2. Strong OOD Performance and Cross-platform Compatibility: Excels in out-of-domain agentic tasks across macOS, Windows, Linux, Android, and Web.
3. Complete Infrastructure for GUI Data Synthesis:
You can easily build your own OS agent upon it!

posted an update 17 days ago
reacted to their post with πŸš€πŸ€— 4 months ago
πŸ”₯Thrilled to release our 8B version of Symbol-LLM-Instruct !

It follows the two-stage training strategy proposed in the original paper and is continually optimized on LLaMA3-Chat-8B model.

Symbol-LLM was accepted by ACL'24 main conference ! See you in Thailand !

Paper link: https://arxiv.org/abs/2311.09278
Paper Title: Symbol-LLM: Towards Foundational Symbol-centric Interface For Large Language Models
posted an update 4 months ago
replied to their post 4 months ago

Thanks for your positive feedback! 🥳

reacted to FeYuan's post with πŸš€ 5 months ago
Hi everyone,

I am excited to introduce our latest work, LLaMAX. 😁😁😁

LLaMAX is a powerful language model created specifically for multilingual scenarios. Built upon Meta's LLaMA series models, LLaMAX undergoes extensive training across more than 100 languages.

Remarkably, it enhances its multilingual capabilities without compromising its generalization ability, surpassing existing LLMs.

✨Highlights:

🎈 LLaMAX supports the 102 languages covered by Flores-101, and its performance in translating between low-resource languages far surpasses other decoder-only LLMs.

🎈 Even for languages not covered in Flores-200, LLaMAX still shows significant improvements in translation performance.

🎈 By performing simple SFT on English task data, LLaMAX demonstrates impressive multilingual transfer abilities in downstream tasks.

🎈 In our paper, we discuss effective methods for enhancing the multilingual capabilities of LLMs during the continued training phase.

We welcome you to use our model and provide feedback.

More Details:

πŸŽ‰ Code: https://github.com/CONE-MT/LLaMAX/

πŸŽ‰ Model: https://huggingface.co/LLaMAX/
Β·
reacted to their post with πŸš€πŸ”₯ 5 months ago
πŸ“Excited to make public a series of checkpoints !

- Final checkpoints after self-training with ENVISIONS framework
- Cover math, logic, and agent domains
- Include 7B / 13B

πŸ“• Check our paper:
Title: Interactive Evolution: A Neural-Symbolic Self-Training Framework For Large Language Models
Link: https://arxiv.org/abs/2406.11736
posted an update 5 months ago
reacted to their post with πŸ”₯πŸš€ 5 months ago
πŸ“£Thrilled to make public our recent work ENVISIONS !!!

- Without human annotations !
- Without Distilling Strong LLMs !
- Self-improve LLMs in the environment
- Amazing performances on agentic and reasoning tasks
- Insightful analysis on "why" questions

πŸ“ Title: Interactive Evolution: A Neural-Symbolic Self-Training Framework For Large Language Models

πŸ“Ž Repo: https://github.com/xufangzhi/ENVISIONS
posted an update 5 months ago
reacted to their post with πŸš€ 8 months ago