---
|
library_name: transformers |
|
license: mit |
|
language: |
|
- en |
|
base_model: Salesforce/codegen-350M-multi |
|
--- |
|
|
|
# Model Card for alexvumnov/yaml_completion
|
|
|
A model fine-tuned to autocomplete YAML files.
|
|
|
|
|
|
|
## Model Details |
|
|
|
### Model Description |
|
|
|
A decoder-only transformer fine-tuned from Salesforce/codegen-350M-multi to provide autocomplete suggestions for YAML files, given the lines before and after the cursor as context.
|
|
|
- **Developed by:** https://huggingface.co/alexvumnov |
|
- **Model type:** Autoregressive decoder-only transformer (CodeGen-based)
|
- **Language(s) (NLP):** English, though the training data is mostly YAML
|
- **License:** MIT |
|
- **Finetuned from model:** https://huggingface.co/Salesforce/codegen-350M-multi/tree/main
|
|
|
|
|
## Uses |
|
|
|
The model expects a specific prompt format; use it as shown below for best performance:
|
|
|
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model = AutoModelForCausalLM.from_pretrained("alexvumnov/yaml_completion")
tokenizer = AutoTokenizer.from_pretrained("alexvumnov/yaml_completion", padding_side='left')

# The model was trained on prompts of this shape: the lines after the
# cursor come first, then the lines before it, then the completion marker.
prompt_format = """
# Here's a yaml file to offer a completion for
# Lines after the current one
{text_after}
# Lines before the current one
{text_before}
# Completion:
"""

input_prefix = """
name: my_awesome_env
dependencies:

"""

input_suffix = ""

generator = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda')

generator(prompt_format.format(text_after=input_suffix, text_before=input_prefix), max_new_tokens=64)

# [{'generated_text': "\n# Here's a yaml file to offer a completion for\n# Lines after the current one\n\n# Lines before the current one\n\nname: my_awesome_env\ndependencies:\n\n\n# Completion:\n- deploy"}]
```
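The pipeline returns the full prompt plus the generated tokens, so a caller (for example an editor plugin) usually wants just the suggested text. A minimal sketch of how that could be done; the helper name `extract_completion` is hypothetical and not part of this repository:

```python
def extract_completion(generated_text: str, prompt: str) -> str:
    """Return only the text the model appended after the prompt."""
    if generated_text.startswith(prompt):
        # Normal case: strip the echoed prompt prefix.
        return generated_text[len(prompt):].strip()
    # Fallback: take everything after the last "# Completion:" marker.
    return generated_text.rsplit("# Completion:", 1)[-1].strip()
```

Applied to the example output above, this would yield just `- deploy`.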