Model Card for DeepSeekCoderCodeQnA

This is a version of the DeepSeek-Coder model fine-tuned on grammatically corrected texts for code question answering.

Model Details

Model Description

  • Model type: LLaMA
  • Number of Parameters: 6.7B
  • Supported Programming Language: Python
  • Finetuned from model: DeepSeek-Coder

Model Sources

  • Repository: GitHub Repo
  • Paper: "Leveraging Large Language Models in Code Question Answering: Baselines and Issues" by Georgy Andryushchenko, Vladimir V. Ivanov, Vladimir Makharev, Elizaveta Tukhtina, and Aidar Valeev.

How to Get Started with the Model

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model's tokenizer and the fine-tuned checkpoint on GPU
tokenizer = AutoTokenizer.from_pretrained('deepseek-ai/deepseek-coder-6.7b-instruct')
model = AutoModelForCausalLM.from_pretrained('datapaf/DeepSeekCoderCodeQnA', device_map="cuda")

code = ... # Your Python code snippet here
question = ... # Your question regarding the snippet here

# The prompt is the question followed by the code snippet
q = f"{question}\n{code}"

inputs = tokenizer.encode(q, return_tensors="pt").to('cuda')
with torch.no_grad():
    outputs = model.generate(inputs, max_new_tokens=512, pad_token_id=tokenizer.eos_token_id)
text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(text)
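
For example, the placeholders above could be filled with a concrete snippet and question. These values are illustrative only, not taken from the model's training data:

code = (
    "def factorial(n):\n"
    "    return 1 if n <= 1 else n * factorial(n - 1)\n"
)
question = "What does this function compute?"

With these values, the prompt passed to the model is the question followed by the snippet, and the generated text should contain the model's answer about the code.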