Spaces · Errolmking · Runtime error

Update app.py
Errolmking committed · Commit 804b339 · 1 Parent(s): a92fff6

app.py CHANGED
@@ -47,6 +47,13 @@ import gradio as gr
 
 from pydantic import BaseModel, Field, validator
 
+#Load the FAISS Model ( vector )
+openai.api_key = os.environ["OPENAI_API_KEY"]
+db = FAISS.load_local("db", OpenAIEmbeddings())
+
+#API Keys
+promptlayer.api_key = os.environ["PROMPTLAYER"]
+
 from langchain.callbacks import PromptLayerCallbackHandler
 from langchain.prompts.chat import (
     ChatPromptTemplate,
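
The added FAISS lines assume a vector index has already been built and saved to a local "db" folder; this commit does not create it. A minimal indexing sketch that would produce such a folder with LangChain's FAISS wrapper (the source file name and chunking parameters below are illustrative assumptions, not taken from this repo):

# Hypothetical one-off script that builds the "db" index app.py loads.
from langchain.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

documents = TextLoader("standards.txt").load()   # e.g. curriculum standards text (assumed)
splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
chunks = splitter.split_documents(documents)
index = FAISS.from_documents(chunks, OpenAIEmbeddings())
index.save_local("db")   # matches FAISS.load_local("db", OpenAIEmbeddings()) above
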
@@ -56,96 +63,6 @@ from langchain.prompts.chat import (
 )
 from langchain.memory import ConversationSummaryMemory
 
-#API Keys
-promptlayer.api_key = os.environ["PROMPTLAYER"]
-openai.api_key = os.environ["OPENAI_API_KEY"]
-
-import gradio as gr
-
-"""# Config the app"""
-
-TITLE = "UBD Coach"
-DESC="""
-Using the Understanding By Design (UBD) framework? (/◕ヮ◕)/ \n
-I can help. Let me be your thinking partner and assistant. \n
-What subject are you interested in building for?
-"""
-
-APP = """
-# UBD Coach
-
-Roleplay as an expert in the Understanding by Design curriculum coach.
-Help the teacher build out their UBD curriculum step by step.
-Be encouraging.
-Help the teacher build confidence in the design process.
-Below is the structure of the design process ( delineated by the triple plus):
-
-+++
-Stage 1: Identify Desired Results
-The first stage in the design process calls for clarity about the learning priorities – both long-term outcomes as well as short-term goals. We review established content standards and related outcomes (e.g., 21st-century skills) to consider the big ideas we want students to come to understand and the long-term transfer goals that those ideas enable. We frame companion essential questions around the targeted understandings and transfer goals. Finally, we identify more specific knowledge and skill objectives.
-
-The following planning questions guide the work of Stage 1:
-What do we want students to be able to do with their learning in the long run?
-What should students come to understand for them to transfer their learning?
-What essential questions will students explore?
-What knowledge and skills will students need to acquire?
-
-Stage 2: Determine Acceptable Evidence
-In Stage 2 of backward design, we are encouraged to “think like assessors” before jumping to planning lessons and learning activities (in Stage 3). In other words, we need to think about the assessment that will show the extent to which our students have attained the various learning outcomes outlined in Stage 1.
-It is one thing to say that students should understand X and be able to do Y; it is another to ask: What evidence will show that they understand X and can effectively apply Y? We have found that considering the needed assessment evidence helps focus and sharpen the teaching-learning plan in Stage 3.
-The following planning questions guide the work of Stage 2:
-What evidence will show that learners have achieved the learning goals targeted in Stage 1?
-How will learners demonstrate their understanding and ability to transfer their learning?
-How will we assess the specific knowledge and skill proficiency?
-In UbD, evidence of understanding and transfer is obtained through performance tasks that ask students to explain what they understand and to apply (i.e., transfer) their learning to new situations. We recommend that the performance assessments be set in a meaningful and authentic context whenever possible. Supplementary assessments, such as a test on facts or a skills check, provide additional evidence of students’ knowledge acquisition and skill proficiency.
-
-Stage 3: Plan Learning Experiences and Instruction
-In the third stage of backward design, we plan for our teaching and the associated learning experiences that students will need to reach and demonstrate attainment of goals. With clearly identified learning results (Stage 1) and appropriate assessment evidence in mind (Stage 2), we now plan the most appropriate instructional activities for helping learners acquire targeted knowledge and skills, come to understand important ideas, and apply their learning in meaningful ways.
-The various types of learning goals identified in Stage 1—acquisition of knowledge and skills, understanding of big ideas, and transfer—inform the selection of instructional strategies and the roles of the teacher, including direct instructor, facilitator, and coach. In other words, our instructional practices need to be aligned to the desired results (Stage 1) and their assessments (Stage 2).
-The following planning questions guide planning in Stage 3:
-What activities, experiences, and lessons will lead to the achievement of the desired results and success at the assessments?
-How will the learning plan help students acquire, make meaning, and transfer?
-How will the unit be sequenced and differentiated to optimize achievement for all learners?
-How will we check for understanding along the way?
-+++
-
-Goal [
-    Stage_1
-    Stage_2
-    Stage_3
-]
-
-Input [
-    input = {input}
-]
-
-History [
-    history = {history}
-]
-
-Understanding_By_Design_Coach [
-    State [
-        Goals
-    ]
-    Constraints [
-        Emulate the speaking style of the world's best curriculum coaches.
-        Keep the responses short and focused and on task towards the goal.
-        Be encouraging.
-        Close with a question that helps the teacher complete the current step.
-    ]
-
-    /help - provide a summary of the design process
-    /explain - Explain the goals of the ubd process
-
-]
-very_brief_state_summary()
-respond(input, history)
-
-"""
-
-
-"""# Agent Engine"""
-
 # Defined a QueueCallback, which takes as a Queue object during initialization. Each new token is pushed to the queue.
 class QueueCallback(BaseCallbackHandler):
     """Callback handler for streaming LLM responses to a queue."""
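
The body of QueueCallback falls outside the changed hunks, so only its docstring is visible here. Going by the comment above it, a minimal implementation of this pattern would look roughly like the sketch below (assumed, not copied from app.py), using LangChain's BaseCallbackHandler token hook:

from queue import Queue
from langchain.callbacks.base import BaseCallbackHandler

class QueueCallback(BaseCallbackHandler):
    """Callback handler for streaming LLM responses to a queue."""

    def __init__(self, q: Queue):
        self.q = q

    def on_llm_new_token(self, token: str, **kwargs) -> None:
        # Push every streamed token onto the queue; stream() drains it and yields to the UI.
        self.q.put(token)
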
@@ -161,10 +78,11 @@ class QueueCallback(BaseCallbackHandler):
 
 class DDSAgent:
 
-    def __init__(self, name, prompt_template='', model_name='gpt-
+    def __init__(self, name, db, prompt_template='', model_name='gpt-4', verbose=False, temp=0.2):
+        self.db = db
         self.verbose = verbose
         self.llm = ChatOpenAI(
-            model_name="gpt-
+            model_name="gpt-4",
             temperature=temp
         )
 
@@ -175,7 +93,7 @@ class DDSAgent:
         self.summary_llm = ChatOpenAI(
             model_name=model_name,
             max_tokens=25,
-            callbacks=[PromptLayerCallbackHandler(pl_tags=["
+            callbacks=[PromptLayerCallbackHandler(pl_tags=["froebel"])],
             streaming=False,
         )
 
@@ -193,21 +111,31 @@ class DDSAgent:
             memory=self.memory
         )
 
+    def lookup(self, input, num_docs=5):
+        docs = self.db.similarity_search(input, k=num_docs)
+        docs_to_string = ""
+        for doc in docs:
+            docs_to_string += str(doc.page_content)
+        return docs_to_string
+
     def stream(self, input) -> Generator:
 
         # Create a Queue
         q = Queue()
         job_done = object()
 
+        #RAG
+        docs = self.lookup(input,5)
+
         llm = ChatOpenAI(
-            model_name='gpt-
+            model_name='gpt-4',
             callbacks=[QueueCallback(q),
-                       PromptLayerCallbackHandler(pl_tags=["
+                       PromptLayerCallbackHandler(pl_tags=["froebel"])],
             streaming=True,
         )
 
         prompt = PromptTemplate(
-            input_variables=['input','history'],
+            input_variables=['input','docs','history'],
             template=self.prompt_template
             # partial_variables={"format_instructions": self.parser.get_format_instructions()}
         )
@@ -216,6 +144,7 @@ class DDSAgent:
         def task():
             resp = self.chain(prompt,llm).run(
                 {'input':input,
+                 'docs':docs,
                  'history':self.memory})
             q.put(job_done)
 
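
The part of stream() that drains the queue sits between these hunks and is unchanged, so the diff does not show it. The visible fragments (task() putting a job_done sentinel, except Empty: continue, and ask_agent unpacking (next_token, content) pairs) point to the usual thread-plus-queue streaming loop. A self-contained sketch of that likely shape, with the chain call abstracted away (an assumption, not the committed code):

from queue import Queue, Empty
from threading import Thread

def stream_tokens(run_chain):
    # run_chain(q) should push tokens onto q; here it stands in for self.chain(prompt, llm).run(...)
    q = Queue()
    job_done = object()

    def task():
        run_chain(q)
        q.put(job_done)       # sentinel: generation finished

    Thread(target=task).start()

    content = ""
    while True:
        try:
            next_token = q.get(block=True, timeout=1)
            if next_token is job_done:
                break
            content += next_token
            yield next_token, content   # the (token, accumulated text) pairs ask_agent expects
        except Empty:
            continue
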
@@ -236,20 +165,46 @@ class DDSAgent:
             except Empty:
                 continue
 
-
+
+
+
+agent_prompt = """
+Roleplay
+You are a UBD ( Understanding by Design ) coach.
+Educators come to you to develop UBD based learning experiences
+and curriculum.
+
+This is the conversation up until now:
+{history}
+The teacher says:
+{input}
+As a result, the following standards were matched:
+{docs}
+Respond to the teacher's message.
+You have three objectives:
+
+a) to help them through the design process
+b) to help simplify the process for the educator
+c) to help build confidence and understanding in the UBD process
+Take it step by step.
+Keep focused on the current task at hand.
+Close with a single guiding step in the form of a question.
+Be encouraging.
+Do not start with "AI:" or any self-identifying text.
+"""
 
 def ask_agent(input, history):
-
+    dds = DDSAgent('agent', db, prompt_template=agent_prompt)
+    for next_token, content in dds.stream(input):
         yield(content)
 
-"""# Run Gradio App Locally"""
-
 gr.ChatInterface(ask_agent,
-                 title=
-                 description=
+                 title="UBD Coach",
+                 description="""
+                 Using the Understanding By Design framework? I can help. (/◕ヮ◕)/
+                 """,
                  theme="monochrome",
                  retry_btn=None,
                  undo_btn=None,
                  clear_btn=None
-).queue().launch(debug=True)
-
+).queue().launch(debug=True)
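
Because ask_agent is a generator, gr.ChatInterface re-renders the assistant reply with each yielded value, so yielding the accumulated content string produces the token-by-token typing effect in the chat UI. The same contract in a tiny standalone form (the echo function is purely illustrative, not part of this app):

import time
import gradio as gr

def slow_echo(message, history):
    # Yield progressively longer strings; ChatInterface replaces the reply with each yield.
    partial = ""
    for ch in message:
        partial += ch
        time.sleep(0.05)
        yield partial

gr.ChatInterface(slow_echo, title="Streaming demo").queue().launch()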