---
language:
- en
datasets:
- English
tags:
- text generation
- pytorch
- causal-lm
- Writer-data
- NeMo
- palmyra
pipeline_tag: text-generation
library_name: transformers
license: apache-2.0
---
# Palmyra Small 128M
<style>
img {
display: inline;
}
</style>
|[![Model architecture](https://img.shields.io/badge/Model%20Arch-Transformer%20Decoder-green)](#model-architecture)|[![Model size](https://img.shields.io/badge/Params-128M-green)](#model-architecture)|[![Language](https://img.shields.io/badge/Language-en--US-lightgrey)](#datasets)|
## Model Description
Palmyra Small was pre-trained primarily on English text; a trace amount of non-English data from CommonCrawl remains in the training corpus. Like GPT-3, Palmyra Small is a decoder-only transformer, and it was pre-trained with a self-supervised causal language modeling (CLM) objective, i.e., predicting each token from the tokens that precede it. For evaluation, Palmyra Small uses the prompts and general experimental setup of GPT-3.
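Concretely, the CLM objective maximizes the log-likelihood of each token given its left context. This is the standard next-token formulation used by decoder-only models generally, not anything specific to Palmyra:
```latex
\mathcal{L}(\theta) = \sum_{t=1}^{T} \log p_\theta\left(x_t \mid x_1, \ldots, x_{t-1}\right)
```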
## Use case
Palmyra Small is the fastest of Writer’s LLMs and can perform important tasks such as text parsing, simple classification, address correction, and keyword recognition. Providing more context drives even better performance.
## Training data
Palmyra Small (128M) was trained on Writer’s custom dataset.
## Intended Use and Limitations
Palmyra Small learns an inner representation of the English language that can be used to extract features useful for downstream tasks. However, the model is best at what it was pre-trained for, which is generating text from a prompt.
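As a minimal sketch of that feature-extraction use, the hidden states returned with `output_hidden_states=True` can be pooled into a sentence vector. The input sentence and the mean-pooling choice below are illustrative assumptions, not a recommended recipe:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("Writer/palmyra-small")
tokenizer = AutoTokenizer.from_pretrained("Writer/palmyra-small")

inputs = tokenizer("Palmyra Small extracts text features.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# Mean-pool the last hidden layer over the sequence to get one feature vector
features = outputs.hidden_states[-1].mean(dim=1)  # shape: (1, hidden_size)
```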
### How to use
This model can be easily loaded using the `AutoModelForCausalLM` class:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Download the model weights and the matching tokenizer from the Hugging Face Hub
model = AutoModelForCausalLM.from_pretrained("Writer/palmyra-small")
tokenizer = AutoTokenizer.from_pretrained("Writer/palmyra-small")
```
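Once loaded, the model can be used for plain next-token generation. The prompt and the sampling settings below (`max_new_tokens`, `do_sample`, `temperature`) are illustrative assumptions, not recommended values:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("Writer/palmyra-small")
tokenizer = AutoTokenizer.from_pretrained("Writer/palmyra-small")

# Tokenize an example prompt and generate a continuation
prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=30,                      # illustrative value
        do_sample=True,                         # sample rather than greedy decode
        temperature=0.7,                        # illustrative value
        pad_token_id=tokenizer.eos_token_id,    # avoids a pad-token warning for GPT-style tokenizers
    )
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```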
### Limitations and Biases
Palmyra Small’s core functionality is to take a string of text and predict the next token. While language models are widely used for other tasks, there are many unknowns in this work. When prompting Palmyra, keep in mind that the next statistically likely token is not always the token that produces the most "accurate" text. Never rely on Palmyra Small to produce factually correct results.
Palmyra Small was trained on Writer’s custom data. As with all language models, it is difficult to predict how Palmyra Small will respond to specific prompts, and offensive content may appear unexpectedly. We recommend that the outputs be curated or filtered by humans before they are released, both to censor undesirable content and to improve the quality of the results.
## Citation and Related Information
To cite this model:
```
@misc{Palmyra,
  author       = {Writer Engineering Team},
  title        = {{Palmyra-base Parameter Autoregressive Language Model}},
  howpublished = {\url{https://dev.writer.com}},
  year         = 2023,
  month        = jan
}
```