Doge 25M

NOTE: This model is for testing only; more details on the model and training are in the works.

Doge is an ongoing research project in which we aim to train a series of small language models to further explore whether the Transformer framework allows for more complex feedforward network structures, enabling models with fewer cache states and larger knowledge capacity.

This model was trained by Jingze Shi. It supports only text input and text generation. For details on the algorithm and model architecture, please refer to Wonderful Matrices; the ongoing research repository is Doge.

Uses

```python
>>> from transformers import AutoTokenizer, AutoModelForCausalLM

>>> tokenizer = AutoTokenizer.from_pretrained("LoserCheems/Doge-25M")
>>> model = AutoModelForCausalLM.from_pretrained("LoserCheems/Doge-25M", trust_remote_code=True)
>>> inputs = tokenizer("Hey how are you doing?", return_tensors="pt")

>>> out = model.generate(**inputs, max_new_tokens=10)
>>> print(tokenizer.batch_decode(out))
["Hey how are you doing?\n\nI'm doing great.\n\n"]
```