Sergei Petrov committed on
Commit • 5042efb
Parent(s): initial
Browse files
- README.md +25 -0
- gradio_app/app.py +127 -0
- gradio_app/backend/__pycache__/query_llm.cpython-310.pyc +0 -0
- gradio_app/backend/__pycache__/semantic_search.cpython-310.pyc +0 -0
- gradio_app/backend/query_llm.py +154 -0
- gradio_app/backend/semantic_search.py +18 -0
- gradio_app/requirements.txt +9 -0
- gradio_app/templates/template.j2 +8 -0
- gradio_app/templates/template_html.j2 +102 -0
- prep_scripts/lancedb_setup.py +65 -0
- prep_scripts/markdown_to_text.py +49 -0
README.md
ADDED
@@ -0,0 +1,25 @@
# A template for a RAG system with Gradio UI

Deliberately stripped down to leave some room for experimenting.

# Setting it up
- Clone https://github.com/huggingface/transformers to a local machine
- Use the **prep_scripts/markdown_to_text.py** script to extract raw text from the markdown files in transformers/docs/source/en/
- Break the resulting texts down into semantically meaningful pieces. Experiment with different chunking mechanisms to make sure the semantic meaning is captured (a minimal chunking sketch follows this README).
- Use **prep_scripts/lancedb_setup.py** to embed the chunks and store them in a LanceDB instance. It also creates an index for fast ANN retrieval (not really needed for this exercise, but necessary at scale). You'll need to put your own values into VECTOR_COLUMN_NAME, TEXT_COLUMN_NAME, and DB_TABLE_NAME.
- Move the database directory (.lancedb by default) to **gradio_app/**
- Use the template given in **gradio_app** to wrap everything into a [Gradio](https://www.gradio.app/docs/interface) app and run it on HF [Spaces](https://huggingface.co/docs/hub/spaces-config-reference). Make sure to adjust VECTOR_COLUMN_NAME, TEXT_COLUMN_NAME, and DB_TABLE_NAME according to your DB setup.
- Set up the secrets OPENAI_API_KEY and HUGGING_FACE_HUB_TOKEN to use OpenAI and open-source models, respectively.

TODOs:
- Experiment with chunking and see how it affects the results. When deciding how to chunk, it helps to think about what kind of chunks you'd like to see as context for your queries.
  - Deliverables: Show how the retrieved documents differ with different chunking strategies and how that affects the output.
- Try out different embedding models (EMB_MODEL_NAME): **sentence-transformers/all-MiniLM-L6-v2** (lightweight) and **thenlper/gte-large** (relatively heavy but more powerful).
  - Deliverables: Show how the retrieved documents differ with different embedding models and how they affect the output. Provide an estimate of how the time to embed the chunks and the DB ingestion time differ (both happen in **prep_scripts/lancedb_setup.py**).
- Add a re-ranker (cross-encoder) to the pipeline. Start with the sentence-transformers pages on cross-encoders [1](https://www.sbert.net/examples/applications/cross-encoder/README.html) [2](https://www.sbert.net/examples/applications/retrieve_rerank/README.html), then pick a [pretrained cross-encoder](https://www.sbert.net/docs/pretrained-models/ce-msmarco.html), e.g. **cross-encoder/ms-marco-MiniLM-L-12-v2**. Don't forget to increase the number of *retrieved* documents when using a re-ranker; the number of documents used as context should stay the same.
  - Deliverables: Show how the retrieved documents differ after adding a re-ranker and how that affects the output. Provide an estimate of how the latency changes.
- Try another LLM (e.g. LLaMA-2-70b, falcon-180b).
  - Deliverables: Show how different LLMs affect the output and how latency changes with model size.
- Add more documents (e.g. diffusers, tokenizers, optimum, etc.) to see how the system scales.
  - Deliverables: Show how latency changes and how it differs with and without the index (the index is added in **prep_scripts/lancedb_setup.py**).
- (Bonus) Use an LLM to quantitatively compare the outputs of different system variants ([LLM as a Judge](https://huggingface.co/collections/andrewrreed/llm-as-a-judge-653fb861e361fd03c12d41e5)).
  - Deliverables: Describe the experimental setup and the evaluation results.
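As a starting point for the chunking step above, here is a minimal sketch. It assumes the plain-text files produced by **prep_scripts/markdown_to_text.py** end up in a `docs_dump/` directory; the fixed chunk size, the overlap, and the `docs_chunked/` output layout are illustrative choices, not part of this commit.

```python
from pathlib import Path


def chunk_text(text: str, chunk_size: int = 1000, overlap: int = 200) -> list[str]:
    """Naive fixed-size character chunking with overlap; swap in a semantic splitter later."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks


# Write one file per chunk so prep_scripts/lancedb_setup.py can pick them up as-is.
in_dir, out_dir = Path("docs_dump"), Path("docs_chunked")
out_dir.mkdir(exist_ok=True)
for txt_file in in_dir.glob("*.txt"):
    for i, chunk in enumerate(chunk_text(txt_file.read_text())):
        (out_dir / f"{txt_file.stem}_{i:04d}.txt").write_text(chunk)
```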
gradio_app/app.py
ADDED
@@ -0,0 +1,127 @@
"""
Credit to Derek Thomas, derek@huggingface.co
"""

import subprocess

subprocess.run(["pip", "install", "--upgrade", "transformers[torch,sentencepiece]==4.34.1"])

import logging
from pathlib import Path
from time import perf_counter

import gradio as gr
from jinja2 import Environment, FileSystemLoader

from backend.query_llm import generate_hf, generate_openai
from backend.semantic_search import table, retriever

VECTOR_COLUMN_NAME = ""
TEXT_COLUMN_NAME = ""

proj_dir = Path(__file__).parent
# Setting up the logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

# Set up the template environment with the templates directory
env = Environment(loader=FileSystemLoader(proj_dir / 'templates'))

# Load the templates directly from the environment
template = env.get_template('template.j2')
template_html = env.get_template('template_html.j2')

# Examples
examples = ['What is the capital of China?',
            'Why is the sky blue?',
            "Who won the men's world cup in 2014?", ]


def add_text(history, text):
    history = [] if history is None else history
    history = history + [(text, None)]
    return history, gr.Textbox(value="", interactive=False)


def bot(history, api_kind):
    top_k_rank = 4
    query = history[-1][0]

    if not query:
        gr.Warning("Please submit a non-empty string as a prompt")
        raise ValueError("Empty string was submitted")

    logger.warning('Retrieving documents...')
    # Retrieve documents relevant to query
    document_start = perf_counter()

    query_vec = retriever.encode(query)
    documents = table.search(query_vec, vector_column_name=VECTOR_COLUMN_NAME).limit(top_k_rank).to_list()
    documents = [doc[TEXT_COLUMN_NAME] for doc in documents]

    document_time = perf_counter() - document_start
    logger.warning(f'Finished Retrieving documents in {round(document_time, 2)} seconds...')

    # Create Prompt
    prompt = template.render(documents=documents, query=query)
    prompt_html = template_html.render(documents=documents, query=query)

    if api_kind == "HuggingFace":
        generate_fn = generate_hf
    elif api_kind == "OpenAI":
        generate_fn = generate_openai
    elif api_kind is None:
        gr.Warning("API name was not provided")
        raise ValueError("API name was not provided")
    else:
        gr.Warning(f"API {api_kind} is not supported")
        raise ValueError(f"API {api_kind} is not supported")

    history[-1][1] = ""
    for character in generate_fn(prompt, history[:-1]):
        history[-1][1] = character
        yield history, prompt_html


with gr.Blocks() as demo:
    chatbot = gr.Chatbot(
            [],
            elem_id="chatbot",
            avatar_images=('https://aui.atlassian.com/aui/8.8/docs/images/avatar-person.svg',
                           'https://huggingface.co/datasets/huggingface/brand-assets/resolve/main/hf-logo.svg'),
            bubble_full_width=False,
            show_copy_button=True,
            show_share_button=True,
            )

    with gr.Row():
        txt = gr.Textbox(
                scale=3,
                show_label=False,
                placeholder="Enter text and press enter",
                container=False,
                )
        txt_btn = gr.Button(value="Submit text", scale=1)

    api_kind = gr.Radio(choices=["HuggingFace", "OpenAI"], value="HuggingFace")

    prompt_html = gr.HTML()
    # Turn off interactivity while generating if you click
    txt_msg = txt_btn.click(add_text, [chatbot, txt], [chatbot, txt], queue=False).then(
            bot, [chatbot, api_kind], [chatbot, prompt_html])

    # Turn it back on
    txt_msg.then(lambda: gr.Textbox(interactive=True), None, [txt], queue=False)

    # Turn off interactivity while generating if you hit enter
    txt_msg = txt.submit(add_text, [chatbot, txt], [chatbot, txt], queue=False).then(
            bot, [chatbot, api_kind], [chatbot, prompt_html])

    # Turn it back on
    txt_msg.then(lambda: gr.Textbox(interactive=True), None, [txt], queue=False)

    # Examples
    gr.Examples(examples, txt)

demo.queue()
demo.launch(debug=True)
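One of the README TODOs is a cross-encoder re-ranker between retrieval and prompt construction in `bot()` above. A minimal sketch of how it could be slotted in, assuming the **cross-encoder/ms-marco-MiniLM-L-12-v2** model suggested in the README and the `table`, `VECTOR_COLUMN_NAME`, and `TEXT_COLUMN_NAME` objects already defined in app.py; retrieving 20 candidates and keeping 4 is an arbitrary choice, not something this commit fixes.

```python
from sentence_transformers import CrossEncoder

reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-12-v2")


def retrieve_and_rerank(query, query_vec, top_k_retrieve=20, top_k_context=4):
    """Retrieve more candidates than needed, then keep the best-scoring ones as context."""
    candidates = table.search(query_vec, vector_column_name=VECTOR_COLUMN_NAME) \
                      .limit(top_k_retrieve).to_list()
    texts = [doc[TEXT_COLUMN_NAME] for doc in candidates]
    # Cross-encoder scores each (query, passage) pair jointly, which is slower but more accurate.
    scores = reranker.predict([(query, text) for text in texts])
    ranked = sorted(zip(scores, texts), key=lambda pair: pair[0], reverse=True)
    return [text for _, text in ranked[:top_k_context]]
```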
gradio_app/backend/__pycache__/query_llm.cpython-310.pyc
ADDED
Binary file (4.36 kB)
gradio_app/backend/__pycache__/semantic_search.cpython-310.pyc
ADDED
Binary file (700 Bytes)
gradio_app/backend/query_llm.py
ADDED
@@ -0,0 +1,154 @@
import openai
import gradio as gr

from os import getenv
from typing import Any, Dict, Generator, List

from huggingface_hub import InferenceClient
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")

temperature = 0.9
top_p = 0.6
repetition_penalty = 1.2

OPENAI_KEY = getenv("OPENAI_API_KEY")
HF_TOKEN = getenv("HUGGING_FACE_HUB_TOKEN")

hf_client = InferenceClient(
        "mistralai/Mistral-7B-Instruct-v0.1",
        token=HF_TOKEN
        )


def format_prompt(message: str, api_kind: str):
    """
    Formats the given message using a chat template.

    Args:
        message (str): The user message to be formatted.
        api_kind (str): Either "openai" or "hf", selecting the target API format.

    Returns:
        Formatted message after applying the chat template.
    """

    # Create a list of message dictionaries with role and content
    messages: List[Dict[str, Any]] = [{'role': 'user', 'content': message}]

    if api_kind == "openai":
        return messages
    elif api_kind == "hf":
        return tokenizer.apply_chat_template(messages, tokenize=False)
    elif api_kind:
        raise ValueError(f"API {api_kind} is not supported")


def generate_hf(prompt: str, history: str, temperature: float = 0.9, max_new_tokens: int = 256,
                top_p: float = 0.95, repetition_penalty: float = 1.0) -> Generator[str, None, str]:
    """
    Generate a sequence of tokens based on a given prompt and history using the Hugging Face
    InferenceClient (Mistral).

    Args:
        prompt (str): The initial prompt for the text generation.
        history (str): Context or history for the text generation.
        temperature (float, optional): The softmax temperature for sampling. Defaults to 0.9.
        max_new_tokens (int, optional): Maximum number of tokens to be generated. Defaults to 256.
        top_p (float, optional): Nucleus sampling probability. Defaults to 0.95.
        repetition_penalty (float, optional): Penalty for repeated tokens. Defaults to 1.0.

    Returns:
        Generator[str, None, str]: A generator yielding chunks of generated text.
        Returns a final string if an error occurs.
    """

    temperature = max(float(temperature), 1e-2)  # Ensure temperature isn't too low
    top_p = float(top_p)

    generate_kwargs = {
        'temperature': temperature,
        'max_new_tokens': max_new_tokens,
        'top_p': top_p,
        'repetition_penalty': repetition_penalty,
        'do_sample': True,
        'seed': 42,
    }

    formatted_prompt = format_prompt(prompt, "hf")

    try:
        stream = hf_client.text_generation(formatted_prompt, **generate_kwargs,
                                           stream=True, details=True, return_full_text=False)
        output = ""
        for response in stream:
            output += response.token.text
            yield output

    except Exception as e:
        if "Too Many Requests" in str(e):
            print("ERROR: Too many requests on Mistral client")
            gr.Warning("Unfortunately Mistral is unable to process")
            return "Unfortunately, I am not able to process your request now."
        elif "Authorization header is invalid" in str(e):
            print("Authentication error:", str(e))
            gr.Warning("Authentication error: HF token was either not provided or incorrect")
            return "Authentication error"
        else:
            print("Unhandled Exception:", str(e))
            gr.Warning("Unfortunately Mistral is unable to process")
            return "I do not know what happened, but I couldn't understand you."


def generate_openai(prompt: str, history: str, temperature: float = 0.9, max_new_tokens: int = 256,
                    top_p: float = 0.95, repetition_penalty: float = 1.0) -> Generator[str, None, str]:
    """
    Generate a sequence of tokens based on a given prompt and history using the OpenAI client.

    Args:
        prompt (str): The initial prompt for the text generation.
        history (str): Context or history for the text generation.
        temperature (float, optional): The softmax temperature for sampling. Defaults to 0.9.
        max_new_tokens (int, optional): Maximum number of tokens to be generated. Defaults to 256.
        top_p (float, optional): Nucleus sampling probability. Defaults to 0.95.
        repetition_penalty (float, optional): Penalty for repeated tokens. Defaults to 1.0.

    Returns:
        Generator[str, None, str]: A generator yielding chunks of generated text.
        Returns a final string if an error occurs.
    """

    temperature = max(float(temperature), 1e-2)  # Ensure temperature isn't too low
    top_p = float(top_p)

    generate_kwargs = {
        'temperature': temperature,
        'max_tokens': max_new_tokens,
        'top_p': top_p,
        'frequency_penalty': max(-2., min(repetition_penalty, 2.)),
    }

    formatted_prompt = format_prompt(prompt, "openai")

    try:
        stream = openai.ChatCompletion.create(model="gpt-3.5-turbo-0301",
                                              messages=formatted_prompt,
                                              **generate_kwargs,
                                              stream=True)
        output = ""
        for chunk in stream:
            output += chunk.choices[0].delta.get("content", "")
            yield output

    except Exception as e:
        if "Too Many Requests" in str(e):
            print("ERROR: Too many requests on OpenAI client")
            gr.Warning("Unfortunately OpenAI is unable to process")
            return "Unfortunately, I am not able to process your request now."
        elif "You didn't provide an API key" in str(e):
            print("Authentication error:", str(e))
            gr.Warning("Authentication error: OpenAI key was either not provided or incorrect")
            return "Authentication error"
        else:
            print("Unhandled Exception:", str(e))
            gr.Warning("Unfortunately OpenAI is unable to process")
            return "I do not know what happened, but I couldn't understand you."
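The "try another LLM" TODO from the README mostly comes down to changing the model id and tokenizer above. A minimal sketch, assuming the checkpoint is served by the HF Inference API and your HUGGING_FACE_HUB_TOKEN has access to it; the repo id below is only an example, not something this commit pins down.

```python
from os import getenv

from huggingface_hub import InferenceClient
from transformers import AutoTokenizer

# Example swap: any chat model exposed through the HF Inference API can go here.
LLM_REPO_ID = "meta-llama/Llama-2-70b-chat-hf"  # assumption: you have access to this gated repo

HF_TOKEN = getenv("HUGGING_FACE_HUB_TOKEN")
tokenizer = AutoTokenizer.from_pretrained(LLM_REPO_ID, token=HF_TOKEN)
hf_client = InferenceClient(LLM_REPO_ID, token=HF_TOKEN)
# format_prompt() and generate_hf() can stay as they are: they only rely on
# tokenizer.apply_chat_template() and hf_client.text_generation().
```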
gradio_app/backend/semantic_search.py
ADDED
@@ -0,0 +1,18 @@
import logging
import lancedb
import os
from pathlib import Path
from sentence_transformers import SentenceTransformer

EMB_MODEL_NAME = ""
DB_TABLE_NAME = ""

# Setting up the logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
retriever = SentenceTransformer(EMB_MODEL_NAME)

# db
db_uri = os.path.join(Path(__file__).parents[1], ".lancedb")
db = lancedb.connect(db_uri)
table = db.open_table(DB_TABLE_NAME)
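Before wiring this module into app.py, it can be worth a one-off check that the embedding model loads and the table opens. A minimal sketch, run from inside gradio_app/; the query string and the column name are placeholders that must match your DB setup.

```python
from backend.semantic_search import table, retriever

VECTOR_COLUMN_NAME = ""  # same value as in app.py / prep_scripts/lancedb_setup.py

# Encode an arbitrary query and confirm the table returns rows with the expected columns.
vec = retriever.encode("How do I use the Trainer API?")
hits = table.search(vec, vector_column_name=VECTOR_COLUMN_NAME).limit(2).to_list()
print([list(h.keys()) for h in hits])
```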
gradio_app/requirements.txt
ADDED
@@ -0,0 +1,9 @@
# transformers[torch,sentencepiece]==4.34.1
wikiextractor==3.0.6
sentence-transformers>2.2.0
ipywidgets==8.1.1
tqdm==4.66.1
aiohttp==3.8.6
huggingface-hub==0.17.3
lancedb>=0.3
openai==0.28
gradio_app/templates/template.j2
ADDED
@@ -0,0 +1,8 @@
Instructions: Use the following unique documents in the Context section to answer the Query at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.
Context:
{% for doc in documents %}
---
{{ doc }}
{% endfor %}
---
Query: {{ query }}
gradio_app/templates/template_html.j2
ADDED
@@ -0,0 +1,102 @@
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Information Page</title>
    <link rel="stylesheet" href="https://fonts.googleapis.com/css2?family=Source+Sans+Pro:wght@400;600&display=swap">
    <link rel="stylesheet" href="https://fonts.googleapis.com/css2?family=IBM+Plex+Mono:wght@400;600&display=swap">
    <style>
        * {
            font-family: "Source Sans Pro";
        }

        .instructions > * {
            color: #111 !important;
        }

        details.doc-box * {
            color: #111 !important;
        }

        .dark {
            background: #111;
            color: white;
        }

        .doc-box {
            padding: 10px;
            margin-top: 10px;
            background-color: #baecc2;
            border-radius: 6px;
            color: #111 !important;
            max-width: 700px;
            box-shadow: rgba(0, 0, 0, 0.2) 0px 1px 2px 0px;
        }

        .doc-full {
            margin: 10px 14px;
            line-height: 1.6rem;
        }

        .instructions {
            color: #111 !important;
            background: #b7bdfd;
            display: block;
            border-radius: 6px;
            padding: 6px 10px;
            line-height: 1.6rem;
            max-width: 700px;
            box-shadow: rgba(0, 0, 0, 0.2) 0px 1px 2px 0px;
        }

        .query {
            color: #111 !important;
            background: #ffbcbc;
            display: block;
            border-radius: 6px;
            padding: 6px 10px;
            line-height: 1.6rem;
            max-width: 700px;
            box-shadow: rgba(0, 0, 0, 0.2) 0px 1px 2px 0px;
        }
    </style>
</head>
<body>
<div class="prose svelte-1ybaih5" id="component-6">
    <h2>Prompt</h2>
    Below is the prompt that is given to the model. <hr>
    <h2>Instructions</h2>
    <span class="instructions">Use the following pieces of context to answer the question at the end.<br>If you don't know the answer, just say that you don't know, <span style="font-weight: bold;">don't try to make up an answer.</span></span><br>
    <h2>Context</h2>
    {% for doc in documents %}
    <details class="doc-box">
        <summary>
            <b>Doc {{ loop.index }}:</b> <span class="doc-short">{{ doc[:100] }}...</span>
        </summary>
        <div class="doc-full">{{ doc }}</div>
    </details>
    {% endfor %}

    <h2>Query</h2>
    <span class="query">{{ query }}</span>
</div>

<script>
    document.addEventListener("DOMContentLoaded", function() {
        const detailsElements = document.querySelectorAll('.doc-box');

        detailsElements.forEach(detail => {
            detail.addEventListener('toggle', function() {
                const docShort = this.querySelector('.doc-short');
                if (this.open) {
                    docShort.style.display = 'none';
                } else {
                    docShort.style.display = 'inline';
                }
            });
        });
    });
</script>
</body>
</html>
prep_scripts/lancedb_setup.py
ADDED
@@ -0,0 +1,65 @@
import lancedb
import torch
import pyarrow as pa
import pandas as pd
from pathlib import Path
import tqdm
import numpy as np

from sentence_transformers import SentenceTransformer


EMB_MODEL_NAME = ""
DB_TABLE_NAME = ""
VECTOR_COLUMN_NAME = ""
TEXT_COLUMN_NAME = ""
INPUT_DIR = "<chunked docs directory>"
db = lancedb.connect(".lancedb")  # db location
batch_size = 32

model = SentenceTransformer(EMB_MODEL_NAME)
model.eval()

if torch.backends.mps.is_available():
    device = "mps"
elif torch.cuda.is_available():
    device = "cuda"
else:
    device = "cpu"

# The vector size (768 here) must match the output dimension of EMB_MODEL_NAME
# (e.g. 384 for sentence-transformers/all-MiniLM-L6-v2, 1024 for thenlper/gte-large).
schema = pa.schema(
    [
        pa.field(VECTOR_COLUMN_NAME, pa.list_(pa.float32(), 768)),
        pa.field(TEXT_COLUMN_NAME, pa.string())
    ])
tbl = db.create_table(DB_TABLE_NAME, schema=schema, mode="overwrite")

input_dir = Path(INPUT_DIR)
files = list(input_dir.rglob("*"))

sentences = []
for file in files:
    with open(file) as f:
        sentences.append(f.read())

for i in tqdm.tqdm(range(0, int(np.ceil(len(sentences) / batch_size)))):
    try:
        batch = [sent for sent in sentences[i * batch_size:(i + 1) * batch_size] if len(sent) > 0]
        encoded = model.encode(batch, normalize_embeddings=True, device=device)
        encoded = [list(vec) for vec in encoded]

        df = pd.DataFrame({
            VECTOR_COLUMN_NAME: encoded,
            TEXT_COLUMN_NAME: batch
        })

        tbl.add(df)
    except Exception as e:
        print(f"batch {i} was skipped: {e}")

'''
create IVF-PQ index https://lancedb.github.io/lancedb/ann_indexes/
with the size of the transformers docs, an index is not really needed,
but we'll do it for demonstration purposes
'''
tbl.create_index(num_partitions=256, num_sub_vectors=96, vector_column_name=VECTOR_COLUMN_NAME)
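The README deliverables ask for an estimate of embedding and ingestion time, and it is worth sanity-checking the table before moving .lancedb into gradio_app/. A minimal sketch reusing the names defined in this script; the timing split and the test query are illustrative only.

```python
from time import perf_counter

# Rough estimate of embedding time alone (the tqdm loop above times embedding + ingestion together).
sample = [s for s in sentences[:batch_size] if len(s) > 0]
t0 = perf_counter()
model.encode(sample, normalize_embeddings=True, device=device)
print(f"embedding {len(sample)} chunks took {perf_counter() - t0:.2f}s")

# Sanity check before moving .lancedb to gradio_app/: embed a query and inspect the top hits.
query_vec = model.encode("How do I fine-tune a model with the Trainer API?")
for hit in tbl.search(query_vec, vector_column_name=VECTOR_COLUMN_NAME).limit(3).to_list():
    print(hit[TEXT_COLUMN_NAME][:200])
```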
prep_scripts/markdown_to_text.py
ADDED
@@ -0,0 +1,49 @@
from bs4 import BeautifulSoup
from markdown import markdown
import os
import re
from pathlib import Path


DIR_TO_SCRAPE = "transformers/docs/source/en/"
OUTPUT_DIR = str(Path().resolve() / "docs_dump")


def markdown_to_text(markdown_string):
    """ Converts a markdown string to plaintext """

    # md -> html -> text since BeautifulSoup can extract text cleanly
    html = markdown(markdown_string)

    html = re.sub(r'<!--((.|\n)*)-->', '', html)
    html = re.sub('<code>bash', '<code>', html)

    # extract text
    soup = BeautifulSoup(html, "html.parser")
    text = ''.join(soup.findAll(text=True))

    text = re.sub('```(py|diff|python)', '', text)
    text = re.sub('```\n', '\n', text)
    text = re.sub('- .*', '', text)
    text = text.replace('...', '')
    text = re.sub('\n(\n)+', '\n\n', text)

    return text


dir_to_scrape = Path(DIR_TO_SCRAPE)
files = list(dir_to_scrape.rglob("*"))

os.makedirs(OUTPUT_DIR, exist_ok=True)

for file in files:
    parent = file.parent.stem if file.parent.stem != dir_to_scrape.stem else ""
    if file.is_file():
        with open(file) as f:
            md = f.read()

        text = markdown_to_text(md)

        with open(os.path.join(OUTPUT_DIR, f"{parent}_{file.stem}.txt"), "w") as f:
            f.write(text)