
Gliese-Query_Tool-0.6B

Gliese-Query_Tool-0.6B is a function-calling and query-oriented reasoning model fine-tuned from Qwen3-0.6B on Salesforce/xlam-function-calling-60k. It is designed for tool orchestration, structured query resolution, and operation chaining across diverse tasks, and it excels at dynamic function execution, structured reasoning pipelines, and multi-tool decision workflows, making it a lightweight option for developers, tooling platforms, and automation systems.

GGUF: https://huggingface.co/prithivMLmods/Gliese-Query_Tool-0.6B-GGUF


Key Features

  1. Function-Calling Focused Reasoning: Fine-tuned with Salesforce/xlam-function-calling-60k, enabling precise function selection, argument formatting, and multi-step tool invocation in complex workflows (see the prompt sketch after this list).

  2. Query-Oriented Workflow Design: Built to parse, interpret, and resolve complex queries by selecting and chaining the most relevant functions or tools for the task.

  3. Tool-Orchestration & Automation: Handles structured tool calls, dynamic function dispatch, and hybrid reasoning to power intelligent automation, API orchestration, and backend agent pipelines.

  4. Structured Multi-Format Output: Produces formatted responses in JSON, YAML, Markdown, and structured argument objects, making it well suited for direct integration into software pipelines and agentic systems.

  5. Lightweight, Deployment-Ready Core: Compact 0.6B-parameter size optimized for edge deployments, on-device inference, and fast cold starts while maintaining strong reasoning and function-call accuracy.
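
The exact prompt layout the model expects is not spelled out in this card. As a rough sketch, xlam-style function-calling setups typically pass the available tool schemas alongside the query; everything below (the get_weather schema and the "Available tools:" framing) is an assumption for illustration, not a documented interface.

import json

# Hypothetical tool schema in the xlam-style format used in the samples below.
tools = [
    {
        "name": "get_weather",
        "description": "Fetches current weather data for a city.",
        "parameters": {
            "city": {"description": "Name of the city.", "type": "str"},
            "units": {"description": "Temperature units, e.g. 'celsius'.", "type": "str, optional"},
        },
    }
]

# Assumed framing: tool schemas serialized into the system message, query as the user turn.
messages = [
    {"role": "system", "content": "You are a query tool model skilled in function-calling. Available tools:\n" + json.dumps(tools, indent=2)},
    {"role": "user", "content": "What is the current weather in Paris in celsius?"},
]

These messages can be dropped into the Transformers Quickstart below in place of its prompt.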


Sample Inference

Solve 2**2
[{"name": "power", "description": "Calculates the power of a number with a specified exponent.", "parameters": {"number": {"description": "The base for which the power is calculated.", "type": "int"}, "exponent": {"description": "The exponent to which the number should be raised.", "type": "int"}}}]

Solve for 'x' in the equation 2x + 5 = 11.
[{"name": "solving_equation", "description": "Solves a linear equation for a variable.", "parameters": {"equation": {"description": "The equation to solve. The format is 'a*x + b = c'. For example, '5x + 2 = 10' or '3x - 7 = 1'.", "type": "str"}, "operation": {"description": "The operation (add, sub, etc.) to perform the solving.", "type": "str, optional"}, "variable": {"description": "The variable to solve for. Defaults to 'x' if not provided.", "default": "x"}}}]

What is the volume of a sphere with a radius of 6 cm?
[{"name": "volume_of_sphere", "description": "Calculates the volume of a sphere given its radius using the formula (4/3)πr³.", "parameters": {"radius": {"description": "The radius of the sphere.", "type": "int"}}}]

In an examination, 80% of the candidates passed in Urdu and 85% in Hindi, while 75% passed in both. If 45 candidates failed in both, what was the total number of candidates?
[{"name": "passing_percentage", "description": "Calculates the passing percentage for an exam given the percentage of students who passed each subject and the percentage who passed both subjects.", "parameters": {"subject1_percent": {"description": "Percentage of students who passed the first subject (e.g., 85% for Hindi).", "type": "int"}, "subject2_percent": {"description": "Percentage of students who passed the second subject (e.g., 80% for Urdu).", "type": "int"}, "passed_both_percent": {"description": "Percentage of students who passed both subjects.", "type": "int"}}}]

Quickstart with Transformers

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Gliese-Query_Tool-0.6B"

# Load the model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Call the right function to fetch weather data for Paris and format the output as JSON."

messages = [
    {"role": "system", "content": "You are a query tool model skilled in function-calling, API orchestration, and structured query resolution."},
    {"role": "user", "content": prompt}
]

# Build the chat-formatted prompt and tokenize it
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate, then strip the prompt tokens so only the new completion remains
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
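
For the GGUF build linked at the top of this card, a llama-cpp-python sketch is shown below. The quantization filename pattern is an assumption; check the GGUF repository for the actual file names.

from llama_cpp import Llama  # pip install llama-cpp-python huggingface_hub

# Filename pattern is a guess; pick an actual quantization from the GGUF repo.
llm = Llama.from_pretrained(
    repo_id="prithivMLmods/Gliese-Query_Tool-0.6B-GGUF",
    filename="*Q8_0.gguf",
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a query tool model skilled in function-calling, API orchestration, and structured query resolution."},
        {"role": "user", "content": "Call the right function to fetch weather data for Paris and format the output as JSON."},
    ],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])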

Intended Use

  • Intelligent function-calling and multi-step query solving
  • API orchestration, agent tool selection, and dynamic workflows (see the loop sketch after this list)
  • Structured data generation and backend reasoning integration
  • Lightweight agentic systems and on-device automation
  • Developer-focused query resolution and toolchain automation
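
For the agentic use cases above, the pieces can be tied together into a single loop: generate a call, execute it locally, and feed the result back for a final answer. The self-contained sketch below rests on the same assumptions as earlier (get_weather is a hypothetical stub, and the model is assumed to emit a JSON list of {"name": ..., "arguments": {...}} calls).

import json
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Gliese-Query_Tool-0.6B"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Hypothetical local tool; the model only selects and formats the call.
def get_weather(city: str) -> dict:
    return {"city": city, "temp_c": 21, "condition": "clear"}  # stubbed result

TOOLS = {"get_weather": get_weather}

def generate(messages, max_new_tokens=512):
    text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    inputs = tokenizer([text], return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(out[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)

messages = [
    {"role": "system", "content": "You are a query tool model skilled in function-calling."},
    {"role": "user", "content": "Fetch the current weather for Paris."},
]

raw = generate(messages)

# Assumes the reply is a JSON list of {"name": ..., "arguments": {...}} calls;
# small models can emit malformed JSON, so fall back to the raw text.
try:
    calls = json.loads(raw)
    results = [TOOLS[c["name"]](**c.get("arguments", {})) for c in calls]
    messages += [
        {"role": "assistant", "content": raw},
        {"role": "user", "content": "Tool results: " + json.dumps(results)},
    ]
    print(generate(messages))  # final answer grounded in the tool results
except (json.JSONDecodeError, KeyError, TypeError):
    print(raw)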

Limitations

  • Focused on function-calling and structured tasks — not suited for open-ended dialogue or creative writing
  • Small model size means very complex reasoning chains may require external planning agents
  • Optimized for structured tool workflows — conversational tone and narrative depth are secondary
  • Long-context multi-tool planning beyond several steps may reduce reliability