---
base_model: EmergentMethods/Phi-3-mini-4k-instruct-graph
base_model_relation: quantized
inference: false
model_creator: EmergentMethods
model_name: Phi-3-mini-4k-instruct-graph-GGUF
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
quantized_by: jackboyla
tags:
- quantized
- 8-bit
- GGUF
- text-generation
---
# [jackboyla/Phi-3-mini-4k-instruct-graph-GGUF](https://huggingface.co/jackboyla/Phi-3-mini-4k-instruct-graph-GGUF)

- Model creator: [EmergentMethods](https://huggingface.co/EmergentMethods)
- Original model: [EmergentMethods/Phi-3-mini-4k-instruct-graph](https://huggingface.co/EmergentMethods/Phi-3-mini-4k-instruct-graph)

## Description

[jackboyla/Phi-3-mini-4k-instruct-graph-GGUF](https://huggingface.co/jackboyla/Phi-3-mini-4k-instruct-graph-GGUF) contains GGUF-format model files for [EmergentMethods/Phi-3-mini-4k-instruct-graph](https://huggingface.co/EmergentMethods/Phi-3-mini-4k-instruct-graph).

## How to Get Started with the Model (Sample inference code)

These code snippets show how to quickly get started running the model on a CPU with Ollama:

```bash
# install ollama, then pull the quantized model
ollama pull hf.co/jackboyla/Phi-3-mini-4k-instruct-graph-GGUF:Q8_0
```

```python
import requests
import json

messages = [
    {"role": "system", "content": """
A chat between a curious user and an artificial intelligence Assistant. The Assistant is an expert at identifying entities and relationships in text. The Assistant responds in JSON output only.

The User provides text in the format:

-------Text begin-------
<User provided text>
-------Text end-------

The Assistant follows the following steps before replying to the User:

1. **identify the most important entities** The Assistant identifies the most important entities in the text. These entities are listed in the JSON output under the key "nodes", they follow the structure of a list of dictionaries where each dict is:

"nodes":[{"id": <entity N>, "type": <type>, "detailed_type": <detailed type>}, ...]

where "type": <type> is a broad categorization of the entity. "detailed type": <detailed_type> is a very descriptive categorization of the entity.

2. **determine relationships** The Assistant uses the text between -------Text begin------- and -------Text end------- to determine the relationships between the entities identified in the "nodes" list defined above. These relationships are called "edges" and they follow the structure of:

"edges":[{"from": <entity 1>, "to": <entity 2>, "label": <relationship>}, ...]

The <entity N> must correspond to the "id" of an entity in the "nodes" list.

The Assistant never repeats the same node twice. The Assistant never repeats the same edge twice.
The Assistant responds to the User in JSON only, according to the following JSON schema:

{"type":"object","properties":{"nodes":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string"},"type":{"type":"string"},"detailed_type":{"type":"string"}},"required":["id","type","detailed_type"],"additionalProperties":false}},"edges":{"type":"array","items":{"type":"object","properties":{"from":{"type":"string"},"to":{"type":"string"},"label":{"type":"string"}},"required":["from","to","label"],"additionalProperties":false}}},"required":["nodes","edges"],"additionalProperties":false}
"""},
    {"role": "user", "content": """
-------Text begin-------
OpenAI is an American artificial intelligence (AI) research organization founded in December 2015 and headquartered in San Francisco, California. Its mission is to develop "safe and beneficial" artificial general intelligence, which it defines as "highly autonomous systems that outperform humans at most economically valuable work".[4] As a leading organization in the ongoing AI boom,[5] OpenAI is known for the GPT family of large language models, the DALL-E series of text-to-image models, and a text-to-video model named Sora.[6][7] Its release of ChatGPT in November 2022 has been credited with catalyzing widespread interest in generative AI.
-------Text end-------
"""}
]

# Define the Ollama chat API endpoint
url = "http://localhost:11434/api/chat"

# Prepare the request payload
payload = {
    "model": "hf.co/jackboyla/Phi-3-mini-4k-instruct-graph-GGUF:Q8_0",
    "messages": messages,
    "stream": False
}

# Send the POST request
response = requests.post(url, json=payload)

# Print the status code and the model's JSON output
print(response.status_code)
out = json.loads(response.content.decode('utf-8'))['message']['content']
print(json.dumps(json.loads(out), indent=2))
```

Output:

```json
{
  "nodes": [
    {
      "id": "OpenAI",
      "type": "organization",
      "detailed_type": "ai research organization"
    },
    {
      "id": "GPT family",
      "type": "technology",
      "detailed_type": "large language models"
    },
    {
      "id": "DALL-E series",
      "type": "technology",
      "detailed_type": "text-to-image models"
    },
    {
      "id": "Sora",
      "type": "technology",
      "detailed_type": "text-to-video model"
    },
    {
      "id": "ChatGPT",
      "type": "technology",
      "detailed_type": "generative ai"
    },
    {
      "id": "San Francisco",
      "type": "location",
      "detailed_type": "city"
    },
    {
      "id": "California",
      "type": "location",
      "detailed_type": "state"
    },
    {
      "id": "December 2015",
      "type": "date",
      "detailed_type": "foundation date"
    },
    {
      "id": "November 2022",
      "type": "date",
      "detailed_type": "release date"
    }
  ],
  "edges": [
    {
      "from": "OpenAI",
      "to": "San Francisco",
      "label": "headquartered in"
    },
    {
      "from": "San Francisco",
      "to": "California",
      "label": "located in"
    },
    {
      "from": "OpenAI",
      "to": "December 2015",
      "label": "founded in"
    },
    {
      "from": "OpenAI",
      "to": "GPT family",
      "label": "developed"
    },
    {
      "from": "OpenAI",
      "to": "DALL-E series",
      "label": "developed"
    },
    {
      "from": "OpenAI",
      "to": "Sora",
      "label": "developed"
    },
    {
      "from": "OpenAI",
      "to": "ChatGPT",
      "label": "released"
    },
    {
      "from": "ChatGPT",
      "to": "November 2022",
      "label": "released in"
    }
  ]
}
```
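
The system prompt requires that every edge endpoint match a node `id` and that no node or edge repeat, so it can be useful to sanity-check the model's output before using it downstream. A minimal sketch of such a check (the `check_graph` helper is ours, not part of the model card):

```python
def check_graph(graph: dict) -> list:
    """Return a list of rule violations in a nodes/edges graph dict;
    an empty list means the output obeys the prompt's constraints."""
    problems = []
    # node ids must be unique
    ids = [n["id"] for n in graph.get("nodes", [])]
    if len(ids) != len(set(ids)):
        problems.append("duplicate node id")
    # edges must be unique and reference known node ids
    seen_edges = set()
    for e in graph.get("edges", []):
        key = (e["from"], e["to"], e["label"])
        if key in seen_edges:
            problems.append(f"duplicate edge {key}")
        seen_edges.add(key)
        for endpoint in (e["from"], e["to"]):
            if endpoint not in ids:
                problems.append(f"edge references unknown node {endpoint!r}")
    return problems

sample = {
    "nodes": [
        {"id": "OpenAI", "type": "organization", "detailed_type": "ai research organization"},
        {"id": "ChatGPT", "type": "technology", "detailed_type": "generative ai"},
    ],
    "edges": [{"from": "OpenAI", "to": "ChatGPT", "label": "released"}],
}
print(check_graph(sample))  # → []
```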

### About GGUF

GGUF is a file format introduced by the llama.cpp team on August 21st, 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
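
Per the GGUF specification, a GGUF file starts with the 4-byte magic `GGUF` followed by a little-endian `uint32` format version, so a downloaded file can be sanity-checked without loading the whole model. A minimal sketch (the helper name is ours):

```python
import struct

def read_gguf_header(path: str):
    """Read the magic bytes and format version from a GGUF file header."""
    with open(path, "rb") as f:
        magic = f.read(4)                            # should be b"GGUF"
        (version,) = struct.unpack("<I", f.read(4))  # little-endian uint32
    return magic, version
```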

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Apple Silicon), with GPU acceleration. Linux is available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open-source locally running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and an OpenAI-compatible AI server. Note: as of the time of writing (November 27th, 2023), ctransformers has not been updated in a long time and does not support many recent models.

## Special thanks

🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
And thanks to [MaziyarPanahi](https://huggingface.co/MaziyarPanahi) for the README template.