---
datasets:
  - acon96/Home-Assistant-Requests
license: other
license_link: https://huggingface.co/acon96/Home-3B-v3-GGUF/raw/main/LICENSE
language:
  - en
  - de
  - es
  - fr
tags:
  - automation
  - home
  - assistant
pipeline_tag: text-generation
---

# Home 3B v3

The "Home" model is a fine-tuning of the StableLM-3B-Zephyr model from Stability AI. The model can control devices in the user's house as well as perform basic question answering. The fine-tuning dataset is a custom curated dataset designed to teach the model function calling.

V3 of the model has a new base model (StableLM) that brings significant accuracy improvements. It also adds basic multi-personality support, basic multi-language support, and support for even more Home Assistant entity types (vacuum, timer, and todo).

NOTE: the base model does not officially support multiple languages, but it uses a tokenizer that handles non-English text better than Phi-2's. I have verified that it technically works in German, Spanish, and French on a handful of examples where an English request was translated via Google Translate.

The model is quantized using llama.cpp in order to enable running the model in the very low-resource environments that are common with Home Assistant installations, such as Raspberry Pis.

The model can be used as an "instruct" type model using the Zephyr prompt format. The system prompt is used to provide information about the state of the Home Assistant installation including available devices and callable services.

Example "system" prompt:

```
You are 'Al', a helpful AI Assistant that controls the devices in a house. Complete the following task as instructed with the information provided only.
Services: light.turn_off(), light.turn_on(brightness,rgb_color), fan.turn_on(), fan.turn_off()
Devices:
light.office 'Office Light' = on;80%
fan.office 'Office fan' = off
light.kitchen 'Kitchen Light' = on;80%;red
light.bedroom 'Bedroom Light' = off
```
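A full prompt combines the system prompt above with the user's request using the Zephyr chat format. A minimal sketch of that assembly is below; the exact special tokens (`<|system|>`, `<|user|>`, `<|assistant|>` terminated by `<|endoftext|>`) are my assumption about the Zephyr template, so check the tokenizer's chat template before relying on them:

```python
def build_prompt(system: str, user: str) -> str:
    """Assemble a Zephyr-style prompt string.

    The special-token layout here is an assumption about the Zephyr
    format; verify against the model's actual chat template.
    """
    return (
        f"<|system|>\n{system}<|endoftext|>\n"
        f"<|user|>\n{user}<|endoftext|>\n"
        f"<|assistant|>\n"
    )

prompt = build_prompt(
    "You are 'Al', a helpful AI Assistant that controls the devices in a house.",
    "turn on the office light",
)
```

The trailing `<|assistant|>` turn is left open so the model generates the response.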

Output from the model will consist of a response that should be relayed back to the user, along with an optional code block that will invoke different Home Assistant "services". The output format from the model for function calling is as follows:

turning on the kitchen lights for you now
```homeassistant
{ "service": "light.turn_on", "target_device": "light.kitchen" }
```
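A caller therefore needs to split the model's output into the spoken response and the optional service-call JSON. One way to do that (an illustrative sketch, not part of the model itself) is to extract the fenced `homeassistant` block with a regular expression and parse it:

```python
import json
import re

# Matches the optional ```homeassistant ... ``` block in a model response.
BLOCK_RE = re.compile(r"```homeassistant\s*(\{.*?\})\s*```", re.DOTALL)

def parse_response(text: str):
    """Return (speech, service_call): the text to relay to the user,
    and the parsed service-call dict (or None if no block is present)."""
    match = BLOCK_RE.search(text)
    call = json.loads(match.group(1)) if match else None
    speech = BLOCK_RE.sub("", text).strip()
    return speech, call

output = (
    "turning on the kitchen lights for you now\n"
    "```homeassistant\n"
    '{ "service": "light.turn_on", "target_device": "light.kitchen" }\n'
    "```"
)
speech, call = parse_response(output)
```

The resulting dict can then be dispatched to Home Assistant however your integration invokes services.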

The model is also capable of basic instruct and QA tasks because of the instruction fine-tuning in the base model. For example, the model is able to perform basic logic tasks such as the following:

```
user: if mary is 7 years old, and I am 3 years older than her. how old am I?
assistant: If Mary is 7 years old, then you are 10 years old (7+3=10).
```

## Training

The model was trained as a LoRA on a single RTX 3090 (24GB). The LoRA has rank 64 and alpha 128, and targets the `up_proj`, `down_proj`, `q_proj`, `v_proj`, and `o_proj` modules. The adapter is merged into the full model at the end.
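The LoRA hyperparameters above can be summarized as a small configuration. The dict below is a sketch: the model card does not name the training library, but these values map naturally onto the fields of `peft.LoraConfig` (`r`, `lora_alpha`, `target_modules`) if PEFT was used:

```python
# Hyperparameters as described in the Training section; the mapping to
# peft.LoraConfig field names is an assumption about the tooling used.
lora_config = {
    "r": 64,            # LoRA rank
    "lora_alpha": 128,  # LoRA scaling factor (alpha)
    "target_modules": ["up_proj", "down_proj", "q_proj", "v_proj", "o_proj"],
}

# Effective scaling applied to each adapted weight matrix: alpha / r
scaling = lora_config["lora_alpha"] / lora_config["r"]  # 2.0
```

With alpha set to twice the rank, the adapter's update is scaled by a factor of 2 before being added to the base weights at merge time.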

## Evaluation

This model achieves a 97.11% score for JSON function calling accuracy on the test dataset.
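The model card does not publish its scoring script, but a plausible way to measure JSON function-calling accuracy (an illustrative sketch only) is exact match between parsed predicted and reference JSON, so that key order and whitespace do not matter:

```python
import json

def json_exact_match(predictions, references):
    """Fraction of predictions whose parsed JSON equals the reference.

    Illustrative metric only; not the model card's actual evaluation code.
    """
    correct = 0
    for pred, ref in zip(predictions, references):
        try:
            if json.loads(pred) == json.loads(ref):
                correct += 1
        except json.JSONDecodeError:
            pass  # malformed JSON counts as incorrect
    return correct / len(references)

score = json_exact_match(
    ['{"service": "light.turn_on", "target_device": "light.kitchen"}',
     '{"service": "fan.turn_on"}'],
    ['{"target_device": "light.kitchen", "service": "light.turn_on"}',
     '{"service": "fan.turn_off"}'],
)
# score == 0.5: the first pair matches despite different key order
```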

## Datasets

Synthetic dataset for SFT: https://huggingface.co/datasets/acon96/Home-Assistant-Requests

## License

This model is a fine-tuning of the Stability AI StableLM model series, which is licensed under the STABILITY AI NON-COMMERCIAL RESEARCH COMMUNITY LICENSE AGREEMENT. As such, this model is released under the same STABILITY AI NON-COMMERCIAL RESEARCH COMMUNITY LICENSE AGREEMENT. The fine-tuned model is shared for non-commercial use ONLY.