
WortegaLM 109m

Model Summary

This is a GPT-Neo-like model trained from scratch on a 95 GB corpus of code, Habr, Pikabu, and news (around 12B tokens). It can solve primitive tasks; it is not suitable for zero-shot or few-shot use, but it is ideal as a model for student projects.

Quick Start

from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer (left padding for generation) and the model
tokenizer = AutoTokenizer.from_pretrained('AlexWortega/wortegaLM', padding_side='left')
device = 'cuda'
model = AutoModelForCausalLM.from_pretrained('AlexWortega/wortegaLM')
model.resize_token_embeddings(len(tokenizer))
model.to(device)



def generate_seqs(q, model, k=2):
    # Beam sampling with k returned candidate sequences
    gen_kwargs = {
        "min_length": 20,
        "max_new_tokens": 100,
        "top_k": 50,
        "top_p": 0.7,
        "do_sample": True,
        "early_stopping": True,
        "no_repeat_ngram_size": 2,   # block repeated bigrams
        "eos_token_id": tokenizer.eos_token_id,
        "pad_token_id": tokenizer.eos_token_id,
        "use_cache": True,
        "repetition_penalty": 1.5,
        "length_penalty": 1.2,
        "num_beams": 4,
        "num_return_sequences": k
    }

    t = tokenizer.encode(q, add_special_tokens=False, return_tensors='pt').to(device)
    g = model.generate(t, **gen_kwargs)
    generated_sequences = tokenizer.batch_decode(g, skip_special_tokens=False)

    return generated_sequences
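
A minimal usage sketch (the prompt string below is a hypothetical example; the model was trained on Russian text, so Russian prompts are the intended input):

q = 'Что такое машинное обучение?'  # hypothetical example prompt
for seq in generate_seqs(q, model, k=2):
    print(seq)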
