Tags: Text Generation · Transformers · PyTorch · mpt · Composer · MosaicML · llm-foundry · custom_code · text-generation-inference
Commit 5bfed7f · sam-mosaic committed
1 Parent(s): 8667424

remove demo link while down

Files changed (1)
README.md +0 -1
README.md CHANGED
@@ -16,7 +16,6 @@ It was built by finetuning MPT-7B with a context length of 65k tokens on a filte
 At inference time, thanks to [ALiBi](https://arxiv.org/abs/2108.12409), MPT-7B-StoryWriter-65k+ can extrapolate even beyond 65k tokens.
 We demonstrate generations as long as 84k tokens on a single node of 8 A100-80GB GPUs in our [blogpost](https://www.mosaicml.com/blog/mpt-7b).
 * License: Apache 2.0
-* [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-storywriter)
 
 This model was trained by [MosaicML](https://www.mosaicml.com) and follows a modified decoder-only transformer architecture.
 
 
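The extrapolation the README describes works because ALiBi biases attention by token distance instead of using learned position embeddings, so the usable context can be raised at load time without retraining. Below is a minimal sketch of loading this checkpoint with a longer context window, assuming the standard `transformers` AutoClass path with `trust_remote_code=True` (implied by the repo's `custom_code` tag); the `max_seq_len` config attribute and the 83968-token value are assumptions for illustration, not a statement of this repo's exact API:

```python
import transformers

name = 'mosaicml/mpt-7b-storywriter'

# Assumed workflow: raise the context limit in the config before loading weights.
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 83968  # illustrative value beyond the 65k training context

model = transformers.AutoModelForCausalLM.from_pretrained(
    name,
    config=config,
    trust_remote_code=True,  # MPT ships custom modeling code with the checkpoint
)
```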