
SILO Language Models: Isolating Legal Risk in a Nonparametric Datastore

This is Silo-PDSW, first introduced in SILO Language Models: Isolating Legal Risk in a Nonparametric Datastore by researchers at the University of Washington, UC Berkeley, and the Allen Institute for AI.

NOTE: Dependencies

To use the model, you need to install a specific transformers fork:

pip install git+https://github.com/kernelmachine/transformers@openlm#egg=transformers

The model also depends on xformers, which you can install via

pip install xformers
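
After installing both, a quick import check (a minimal sketch; version strings will vary by install date) confirms the environment is set up:

# Sanity check that the transformers fork and xformers import cleanly.
import transformers
import xformers

print(transformers.__version__)
print(xformers.__version__)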

Model Description

Silo-PDSW is a 1.3B-parameter, decoder-only language model trained on data in the public domain and under permissive software licenses from the Open License Corpus (OLC).

The model is based on the LLaMA architecture as implemented in OpenLM.

The model was trained on 128 A100 GPUs across 16 nodes.

Model and Training Hyperparameters

We follow the model architecture of LLaMA and use the GPT-NeoX-20B tokenizer, which has 50,432 BPE types.
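
For example, you can load the tokenizer and check its vocabulary size (this assumes the model repository ships the GPT-NeoX-20B tokenizer files):

from transformers import AutoTokenizer

# Load the tokenizer shipped with the model and inspect its vocabulary size.
tokenizer = AutoTokenizer.from_pretrained("kernelmachine/silo-pdsw-1.3b")
print(len(tokenizer))  # expected to be close to the 50,432 BPE types noted above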

During training, we use 2,048-token sequences that are packed across document boundaries, and we prepend a beginning-of-text token to every document.
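
As an illustration only (the function name and the trailing-sequence handling below are simplifications, not the actual data pipeline), the packing scheme looks roughly like this:

from itertools import chain

def pack_documents(tokenized_docs, bos_id, seq_len=2048):
    # Prepend a beginning-of-text token to each document, concatenate all
    # documents into one token stream, then slice fixed-length sequences that
    # may cross document boundaries. The trailing partial sequence is dropped
    # here for simplicity.
    stream = list(chain.from_iterable([bos_id] + doc for doc in tokenized_docs))
    return [stream[i:i + seq_len]
            for i in range(0, len(stream) - seq_len + 1, seq_len)]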

We use a weight decay of 0.1, the Adam optimizer with beta_2 = 0.95, 2,000 warmup steps, and a cosine learning rate schedule.

| Model | #L | #H | d_model | LR   | Batch |
|-------|----|----|---------|------|-------|
| 1.3B  | 24 | 16 | 2048    | 1e-3 | 2.6M  |
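
The optimizer and schedule above can be written in PyTorch roughly as follows; this is a sketch under stated assumptions, not the actual training code, which lives in OpenLM. beta_1 = 0.9 and decoupled weight decay (AdamW) are assumptions, and the step count is a stand-in.

import math
import torch

# Stand-ins: a tiny placeholder module instead of the 1.3B model, and an
# approximate step count (roughly 250B tokens / 2.6M-token batches).
model = torch.nn.Linear(2048, 2048)
max_steps, warmup_steps, peak_lr = 96_000, 2_000, 1e-3

optimizer = torch.optim.AdamW(model.parameters(), lr=peak_lr,
                              betas=(0.9, 0.95), weight_decay=0.1)

def lr_lambda(step):
    # Linear warmup for the first 2,000 steps, then cosine decay to zero.
    if step < warmup_steps:
        return step / warmup_steps
    progress = (step - warmup_steps) / (max_steps - warmup_steps)
    return 0.5 * (1.0 + math.cos(math.pi * progress))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)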

Training data

Silo-PDSW was trained on data in the public domain and under permissive software licenses from the Open License Corpus (OLC).

The model was trained on data with the following domain proportions (please see the OLC repository for more details on the data sources for each domain):

| Domain       | Tokens (B) | %     |
|--------------|------------|-------|
| Code         | 58.9       | 59.1  |
| Legal        | 27.1       | 27.2  |
| Conversation | 5.9        | 5.9   |
| Math         | 3.5        | 3.5   |
| Books        | 2.9        | 2.9   |
| Science      | 1.2        | 1.2   |
| News         | 0.2        | 0.2   |
| Total        | 99.6       | 100.0 |

We train with early stopping for 250B tokens in total, or a little more than two epochs of training over this subset of OLC.

Since the distribution of OLC is highly skewed, we perform a simple upweighting scheme where we upsample all data that accounts for less than 5% of the corpus by a factor of 3x, which we found to work well after a sweep of different settings.
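
A rough sketch of the resulting sampling proportions, assuming the 5% threshold is applied per domain in the table above (token counts in billions):

# Upsample domains under 5% of the corpus by 3x, then renormalize.
domain_tokens = {
    "Code": 58.9, "Legal": 27.1, "Conversation": 5.9, "Math": 3.5,
    "Books": 2.9, "Science": 1.2, "News": 0.2,
}
total = sum(domain_tokens.values())
upweighted = {d: t * (3.0 if t / total < 0.05 else 1.0)
              for d, t in domain_tokens.items()}
upweighted_total = sum(upweighted.values())
sampling_probs = {d: round(t / upweighted_total, 3) for d, t in upweighted.items()}
print(sampling_probs)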

Intended Uses and Limitations

This model can be used for text generation, as well as for prompt-based evaluation on downstream tasks.

How to use

You can use this model directly with a pipeline for text generation.

from transformers import pipeline
generator = pipeline('text-generation', model="kernelmachine/silo-pdsw-1.3b", device='cuda')
generator("Hello")
[{'generated_text': "Hello, I'm a new user of Ubuntu. I'm trying to install the latest version of Ubuntu"}]

By default, generation is deterministic. To use top-k sampling, set do_sample to True.

from transformers import pipeline, set_seed
set_seed(32)
generator = pipeline('text-generation', model="kernelmachine/silo-pdsw-1.3b", device='cuda', do_sample=True)
generator("Hello")
[{'generated_text': 'Hello: Hello World;", ""));\n        }\n\n        [Test]\n        public void'}]
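
The pipeline forwards generation keyword arguments to generate(), so sampling behavior can also be tuned per call; the values below are illustrative rather than tuned recommendations.

# Sample with an explicit top-k cutoff, temperature, and output length.
generator("Hello", do_sample=True, top_k=50, temperature=0.8, max_new_tokens=64)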

Limitations and Bias

Silo-PDSW inherits the biases and limitations of public-domain data, which carries a risk of toxic or otherwise unfair output due to the prevalence of older, copyright-expired text.

Silo-PDSW may also output personally identifiable information, because we did not filter it out of the training data.
