🚀 Introducing Akshara-8B: AI for India 🇮🇳✨

We’re proud to unveil Akshara-8B, our cutting-edge AI model built for India’s diverse linguistic landscape. Akshara is designed to understand and generate text seamlessly in multiple Indian languages, making AI more accessible, powerful, and tailored to our nation’s needs.

🌍 What is Akshara?

Akshara-8B is a highly optimized distilled version of SVECTOR’s flagship large-scale AI model (Akshara). While it retains the core intelligence and multilingual capabilities of its parent model, Akshara-8B is specifically designed for efficiency, speed, and accessibility.
It leverages advanced distillation techniques to provide powerful AI performance while being lightweight and scalable. Akshara-8B embodies SVECTOR’s commitment to bringing cutting-edge AI to India, ensuring robust support for India’s diverse languages and applications. 🚀
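The exact distillation recipe isn’t published here; purely as a generic illustration, the classic temperature-scaled knowledge-distillation objective (every name and value below is illustrative, not Akshara’s actual training code) can be sketched as:

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax over a list of logits.
    exps = [math.exp(l / T) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_kl(teacher_logits, student_logits, T=2.0):
    # KL(teacher || student) on temperature-softened distributions --
    # the standard distillation loss, scaled by T^2 as is conventional.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q)) * T * T
```

A higher temperature T softens both distributions so the student also learns from the teacher’s relative preferences among non-top tokens.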

Akshara can fluently understand and generate content in:
✅ Hindi
✅ Gujarati
✅ Marathi
✅ Tamil
✅ Telugu
✅ Kannada
✅ Punjabi
✅ English

🔥 Why Akshara?

🔹 Made in India, for India 🇮🇳
🔹 Optimized for speed and efficiency
🔹 Seamless multilingual processing 🗣️
🔹 Balanced accuracy and creativity 🎨
🔹 Lightweight and scalable for real-world applications 🚀


🛠️ Usage Guide

Install Dependencies

pip install transformers torch

Load the Model

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "SVECTOR-CORPORATION/Akshara-8B-Llama-Multilingual-V0.1"

# Load the model (assumes a CUDA GPU; swap "cuda" for "cpu" if none is available)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16).to("cuda")
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)

# Sample input (Hindi: "Which is the largest language of India?")
input_text = "भारत की सबसे बड़ी भाषा कौनसी है?"
inputs = tokenizer(input_text, return_tensors="pt").to("cuda")

# Generate response
output = model.generate(**inputs, max_new_tokens=256)
response = tokenizer.decode(output[0], skip_special_tokens=True)

print(response)
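Note that decoding `output[0]` returns the prompt followed by the model’s answer. A minimal helper to strip the echoed prompt (plain string handling; this assumes the decoded text begins with the exact prompt, which depends on the tokenizer’s special-token settings):

```python
def strip_prompt(full_text: str, prompt: str) -> str:
    # Remove the echoed prompt from the decoded generation, if present.
    if full_text.startswith(prompt):
        return full_text[len(prompt):].lstrip()
    return full_text
```

For example, `strip_prompt(response, input_text)` keeps only the newly generated text.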

💬 Multi-turn Conversation Support

Akshara supports multi-turn, dynamic conversations across languages.

messages = [
    # System prompt (Hindi): "You are Akshara, an AI built for India that can
    # converse in Hindi, Gujarati, Marathi, Tamil, Telugu, Kannada, Punjabi and English."
    {"role": "system", "content": "आप Akshara हैं, भारत के लिए बना एक AI, जो हिंदी, गुजराती, मराठी, तमिल, तेलुगु, कन्नड़, पंजाबी और अंग्रेजी में बातचीत कर सकता है।"},
    # User (Hindi): "Hello! What can you do?"
    {"role": "user", "content": "नमस्ते! आप क्या कर सकते हैं?"}
]

input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to("cuda")

outputs = model.generate(input_ids, max_new_tokens=256)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
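To continue the conversation, append the assistant’s reply and the next user turn to `messages` before calling `apply_chat_template` again. A plain-Python sketch (no model call; `add_turn` is a hypothetical helper, not part of the transformers API):

```python
def add_turn(history, assistant_reply, next_user_msg):
    # Record the model's answer, then queue the next user message.
    history.append({"role": "assistant", "content": assistant_reply})
    history.append({"role": "user", "content": next_user_msg})
    return history

messages = [
    {"role": "system", "content": "You are Akshara, an AI for India."},
    {"role": "user", "content": "Hello! What can you do?"},
]
add_turn(messages, "I can chat in eight Indian languages.",
         "Great, reply in Hindi next time.")
```

Keeping the full history in `messages` lets the chat template render every prior turn, so the model sees the whole conversation on each generation.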

🌟 Akshara: Built for the Future of AI in India

By embracing India’s linguistic diversity, Akshara represents a major step toward bridging the AI gap in our country. Whether it's education, research, customer service, content creation, or smart automation, Akshara is here to revolutionize multilingual AI interactions.

Join us as we shape the future of AI for India! 🇮🇳🚀

📖 Citation

@misc{SVECTOR2025Akshara,
  title     = {Akshara: A Multilingual AI Model for India},
  author    = {SVECTOR},
  year      = {2025},
  url       = {https://svector.co.in},
  note      = {Developed by SVECTOR CORPORATION as a multilingual AI model},
}