---
license: mit
tags:
- trl
- sft
- retail
- finance
- unsloth
language:
- en
- th
base_model:
- unsloth/Llama-3.2-3B-Instruct
pipeline_tag: text-generation
datasets:
- airesearch/WangchanThaiInstruct
---
# PoonrooSLM Prototype: 3B (พูนรู้ SLM)
## Experimental release (runs on Ollama and LM Studio)
- A lightweight, text-only model that fits on edge and mobile devices: a fine-tuned version of Llama 3.2 3B trained on the WangchanThaiInstruct dataset (Financial and Retail tags only).
- Supports a context length of 128K tokens and targets on-device use cases such as summarization, instruction following, and rewriting, running locally at the edge.
- Empowers developers to build personalized, on-device agentic applications with strong privacy, since data never leaves the device.
- The base Llama 3.2 3B outperforms Gemma 2 2B and Phi 3.5-mini (3.8B) on tasks such as instruction following, summarization, prompt rewriting, and tool use, while the 1B variant is competitive with Gemma.
- Multilingual support in a single model, including Thai.
- Capabilities can be further extended with RAG (Retrieval-Augmented Generation); Responsible AI use is supported.
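Because the model inherits the Llama 3.x instruct chat template from its base model, a single-turn prompt can be assembled by hand when calling it through a raw completion endpoint (for example Ollama's `/api/generate` with templating disabled). The sketch below is only an illustration; the system prompt text is a hypothetical example, not shipped with the model.

```python
def build_llama3_prompt(system: str, user: str) -> str:
    """Assemble a single-turn prompt in the Llama 3.x instruct format.

    The special tokens below come from the Llama 3 chat template that
    Llama-3.2-3B-Instruct (and this fine-tune) uses.
    """
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        # The trailing assistant header cues the model to generate its reply.
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt(
    "You are PoonrooSLM, an assistant for Thai retail and finance questions.",
    "สรุปยอดขายวันนี้ให้หน่อย",  # "Please summarize today's sales."
)
print(prompt)
```

In everyday use, Ollama and LM Studio apply this template automatically, so manual assembly is only needed when driving the model through a low-level completion API.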