Update README.md
README.md CHANGED
@@ -7,9 +7,11 @@ tags:
 - microsoft/Orca-2-7b
 ---
 
-
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/63486df1f8f01fcc4b23e97d/ur_lYVzJaOQWguJhUEoh_.png)
 
-Medorca-
+# Medorca-2x7b
+
+Medorca-2x7b is a Mixture of Experts (MoE) made with the following models:
 * [epfl-llm/meditron-7b](https://huggingface.co/epfl-llm/meditron-7b)
 * [microsoft/Orca-2-7b](https://huggingface.co/microsoft/Orca-2-7b)
 
@@ -74,7 +76,7 @@ from transformers import AutoTokenizer
 import transformers
 import torch
 
-model = "Technoculture/Medorca-
+model = "Technoculture/Medorca-2x7b"
 
 tokenizer = AutoTokenizer.from_pretrained(model)
 pipeline = transformers.pipeline(
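The README snippet in the diff cuts off at the `transformers.pipeline(` call. For reference, here is a minimal runnable sketch of the same inference flow; the pipeline arguments, the prompt, and the sampling settings below are illustrative assumptions, not taken from the diff above.

```python
# Minimal inference sketch for Medorca-2x7b, extending the README's snippet.
# dtype, device_map, the prompt, and the sampling settings are assumptions.
from transformers import AutoTokenizer
import transformers
import torch

model = "Technoculture/Medorca-2x7b"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.float16,  # assumption: half precision to fit on one GPU
    device_map="auto",          # assumption: let accelerate place the weights
)

# Illustrative prompt; any free-form medical question works the same way.
prompt = "What are the common symptoms of iron-deficiency anemia?"
outputs = pipeline(
    prompt,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_k=50,
    top_p=0.95,
)
print(outputs[0]["generated_text"])
```

`device_map="auto"` assumes the `accelerate` package is installed; without it, drop that argument and move the pipeline to a device manually.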