
Model Card for neoncortex/mini-mistral-openhermes-2.5-chatml-test

A tiny Mistral model trained as an experiment on teknium/OpenHermes-2.5.

Model Details

A 63M parameter auto-regressive LM using Mistral architecture as a base.

  • Multi-query attention instead of grouped-query attention.
  • Sliding window attention is disabled (see the config sketch after this list).
  • Modified ChatML instead of the Mistral chat template. TL;DR: I used '<|im_start|>human' instead of '<|im_start|>user'.
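If you want to poke at those architecture tweaks in code, something like this MistralConfig sketch captures them. The layer and hidden sizes below are placeholder guesses for illustration, not the real values for this checkpoint; check config.json in the repo for those.

```python
from transformers import MistralConfig, MistralForCausalLM

# Illustrative only: the hidden/layer sizes are placeholders, not the actual
# values for this checkpoint. See config.json in the repo for the real config.
config = MistralConfig(
    hidden_size=512,
    intermediate_size=1536,
    num_hidden_layers=8,
    num_attention_heads=8,
    num_key_value_heads=1,   # multi-query attention (one shared KV head)
    sliding_window=None,     # sliding-window attention disabled
)

model = MistralForCausalLM(config)
print(f"{model.num_parameters() / 1e6:.1f}M parameters")
```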

Model Description

Just doing it to see what happens.

It takes about 40 to 45 hours to train on two Nvidia RTX 3060 12GB cards.

It uses ChatML for the chat template, but I fucked up the template in the dataset, using '<|im_start|>human' instead of '<|im_start|>user'. ¯\_(ツ)_/¯ So, here are the bits:

{%- set ns = namespace(found=false) -%}
{%- for message in messages -%}
    {%- if message['role'] == 'system' -%}
        {%- set ns.found = true -%}
    {%- endif -%}
{%- endfor -%}
{%- for message in messages %}
    {%- if message['role'] == 'system' -%}
        {{- '<|im_start|>system\n' + message['content'].rstrip() + '<|im_end|>\n' -}}
    {%- else -%}
        {%- if message['role'] == 'human' -%}
            {{-'<|im_start|>human\n' + message['content'].rstrip() + '<|im_end|>\n'-}}
        {%- else -%}
            {{-'<|im_start|>assistant\n' + message['content'] + '<|im_end|>\n' -}}
        {%- endif -%}
    {%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
    {{-'<|im_start|>assistant\n'-}}
{%- endif -%}
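Assuming the tokenizer in this repo carries the template above, rendering a prompt looks something like this (note the 'human' role where you'd normally use 'user'):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "neoncortex/mini-mistral-openhermes-2.5-chatml-test"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "human", "content": "Hello!"},  # 'human', not 'user'
]

# Render the prompt as a string, with the assistant header appended
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```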
  • Developed by: RoboApocalypse
  • Funded by: RoboApocalypse
  • Shared by: RoboApocalypse
  • Model type: Mistral
  • Language(s) (NLP): English, maybe others I dunno
  • License: OpenRAIL, IDGAF

Model Sources

Exclusively available right here on HuggingFace!

Uses

If you wanna have a laugh at how bad it is then go ahead, but I wouldn't expect much from it.

Out-of-Scope Use

This model won't work well for pretty much everything, probably.

How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]
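In the meantime, here's a rough loading-and-generation sketch using the usual transformers calls; it's untested, so treat it as a starting point rather than the official recipe.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "neoncortex/mini-mistral-openhermes-2.5-chatml-test"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16)

# Remember: the chat template expects 'human', not 'user'.
messages = [{"role": "human", "content": "Tell me a joke."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```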

Training Details

Training Data

[More Information Needed]

Training Procedure

Preprocessing

I took the OpenHermes 2.5 dataset and formatted it with ChatML.
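Roughly along these lines; the field names assume the ShareGPT-style 'conversations' layout (with 'from'/'value' keys) that the dataset uses, and the actual script may have differed:

```python
from datasets import load_dataset

# Map ShareGPT-style roles onto the (modified) ChatML roles used here.
ROLE_MAP = {"system": "system", "human": "human", "gpt": "assistant"}

def to_chatml(example):
    text = ""
    for turn in example["conversations"]:
        role = ROLE_MAP.get(turn["from"], turn["from"])
        text += f"<|im_start|>{role}\n{turn['value']}<|im_end|>\n"
    return {"text": text}

dataset = load_dataset("teknium/OpenHermes-2.5", split="train")
dataset = dataset.map(to_chatml)
print(dataset[0]["text"])
```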

Training Hyperparameters

  • Training regime: bf16 mixed precision

Speeds, Sizes, Times

  • Epochs: 9
  • Steps: 140,976
  • Batch size per device: 6
  • Throughput: ~1.04 it/s
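With trl's SFTTrainer, the setup would look roughly like this. The numbers mirror the ones above; everything else is a sketch of a typical setup rather than the exact training script, and argument names can shift between trl versions.

```python
from trl import SFTConfig, SFTTrainer

# Sketch of a typical setup; not the exact script used for this run.
args = SFTConfig(
    output_dir="mini-mistral-openhermes",
    num_train_epochs=9,              # epochs from above
    per_device_train_batch_size=6,   # batch size per device from above
    bf16=True,                       # bf16 mixed precision
    dataset_text_field="text",       # the ChatML-formatted column
)

trainer = SFTTrainer(
    model=model,            # e.g. the MistralForCausalLM sketched earlier
    args=args,
    train_dataset=dataset,  # e.g. the ChatML-formatted dataset sketched earlier
)
trainer.train()
```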

Evaluation

I tried to run evals but the eval suite just laughed at me.

Model Examination

Don't be rude.

Environmental Impact

  • Hardware Type: I already told you. Try and keep up.
  • Hours used: ~45 x 2 I guess.
  • Cloud Provider: RoboApocalypse
  • Compute Region: myob
  • Carbon Emitted: Yes, definitely

Compute Infrastructure

I trained it on my PC with the side panel off because I like to watch the GPUs do their work.

Hardware

2 x Nvidia RTX 3060 12GB

Software

The wonderful free stuff at Hugging Face (https://huggingface.co): transformers, datasets, trl.

Model Card Authors

RoboApocalypse, unless you're offended by something, in which case it was hacked by hackers.

Model Card Contact

If you want to send me insults, come find me on Reddit, I guess.

