# SliceX AI™ ELM (Efficient Language Models)

**ELM** (which stands for **E**fficient **L**anguage **M**odels) is the first version in the series of cutting-edge language models from [SliceX AI](https://slicex.ai), designed to achieve best-in-class performance in terms of _quality_, _throughput_ & _memory_.

<div align="center">
<img src="https://github.com/slicex-ai/bazaar2/blob/master/public_releases/v1/logo.png" width="256"/>
</div>

ELM is designed to be a modular and customizable family of neural networks that are highly efficient and performant. Today we are sharing the first version in this series: **ELM-v0.1** models.

_Model:_ ELM introduces a new type of _(de)-composable LLM model architecture_ along with the algorithmic optimizations required to learn (training) and run (inference) these models. At a high level, we train a single ELM model in a self-supervised manner (during the pre-training phase), but once trained, the ELM model can be sliced in many ways to fit different user/task needs. The optimizations can be applied to the model either during the pre-training and/or the fine-tuning stage.

_Fast Inference with Customization:_ Once trained, the ELM model architecture permits flexible inference strategies at runtime depending on the deployment needs. For instance, the ELM model can be _decomposed_ into smaller slices, i.e., smaller (or larger) models can be extracted from the original model to create multiple inference endpoints. Alternatively, the original (single) ELM model can be loaded _as is_ for inference and different slices within the model can be queried directly to power faster inference. This provides an additional level of flexibility for users to make compute/memory tradeoffs depending on their application and runtime needs.
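The two inference strategies above can be illustrated with a toy sketch. This is **not** the ELM API (which is not shown in this repo snippet); it is a minimal stand-in where a "model" is a stack of simple layers, a standalone slice is extracted by keeping a prefix of the stack, and the full model can alternatively serve a slice at query time:

```python
# Toy illustration of (de)-composable inference. Hypothetical classes,
# NOT the actual ELM implementation or API.

class ToyLayer:
    """A trivial affine 'layer': y = scale * x + bias."""
    def __init__(self, scale, bias):
        self.scale = scale
        self.bias = bias

    def __call__(self, x):
        return self.scale * x + self.bias

class ToyModel:
    def __init__(self, layers):
        self.layers = layers

    def slice(self, num_layers):
        # Strategy 1: decompose -- extract a smaller standalone model
        # (its own inference endpoint) from a prefix of the stack.
        return ToyModel(self.layers[:num_layers])

    def forward(self, x, num_layers=None):
        # Strategy 2: keep the full model loaded and, at runtime,
        # query only a slice of it for faster inference.
        active = self.layers if num_layers is None else self.layers[:num_layers]
        for layer in active:
            x = layer(x)
        return x

full = ToyModel([ToyLayer(2.0, 1.0), ToyLayer(0.5, 0.0), ToyLayer(1.0, -1.0)])
small = full.slice(2)  # smaller standalone endpoint

# Both strategies produce the same result for the same slice.
assert full.forward(3.0, num_layers=2) == small.forward(3.0)
```

The compute/memory tradeoff is then just a choice of how many layers a deployment keeps (or queries), under the assumption that a prefix of the stack remains a usable model.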

## Download ELM repo

Models are located in the "models" folder. The ELM models in this repository come in three sizes (elm-1.0, elm-0.75 and elm-0.25) and support the following use-cases:
- news_classification
- toxicity_detection
- news_content_generation
- news_summarization

```bash
git clone git@hf.co:slicexai/elm-v0.1
sudo apt-get install git-lfs