Update README.md
README.md (CHANGED)
@@ -117,12 +117,10 @@ model-index:
       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=aridoverrun/Foxglove_7B
       name: Open LLM Leaderboard
 ---
-<div style="margin-bottom: -40px; margin-top: -30px;">
-<img src="https://cdn-uploads.huggingface.co/production/uploads/65ad2502043d53781aad2ee4/ZrffOEBz2rxwAwdgzOkhd.png" alt="favicon" style="display: inline-block; vertical-align: middle; width: 25px; height: 25px; margin-right: 10px;">
-<span style="display: inline-block; vertical-align: middle; font-weight: 600; font-size: 20px;">Foxglove_7B</span>
-</div>
 
-<img src="https://cdn-uploads.huggingface.co/production/uploads/65ad2502043d53781aad2ee4/FUH__CjalqBRPiSaqZfO6.png" alt="image" width="540" height="540" style="margin-bottom:
+<img src="https://cdn-uploads.huggingface.co/production/uploads/65ad2502043d53781aad2ee4/FUH__CjalqBRPiSaqZfO6.png" alt="image" width="540" height="540" style="margin-bottom: 30px;">
+
+# 🌸 Foxglove_7B
 Foxglove is a well-rounded RP model. It is smart, does a great job of sticking to the character card, and is proficient at following desired markdown.
 
 Thanks to mradermacher, static GGUF quants are available [here](https://huggingface.co/mradermacher/Foxglove_7B-GGUF).
@@ -131,7 +129,7 @@ Foxglove_7B is a merge of the following models using [LazyMergekit](https://cola
 * [ResplendentAI/Datura_7B](https://huggingface.co/ResplendentAI/Datura_7B)
 * [Epiculous/Mika-7B](https://huggingface.co/Epiculous/Mika-7B)
 
-##
+## Configuration
 
 ```yaml
 slices:
@@ -152,7 +150,7 @@ parameters:
 dtype: bfloat16
 ```
 
-##
+## Usage
 
 ```python
 !pip install -qU transformers accelerate