url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | performed_via_github_app | state_reason | draft | pull_request |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/langchain-ai/langchain/issues/15000 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/15000/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/15000/comments | https://api.github.com/repos/langchain-ai/langchain/issues/15000/events | https://github.com/langchain-ai/langchain/issues/15000 | 2,051,951,382 | I_kwDOIPDwls56TksW | 15,000 | Issue: not getting output as per prompt,what is neccessary changes i need to do? | {
"login": "deepak-habilelabs",
"id": 137885024,
"node_id": "U_kgDOCDf1YA",
"avatar_url": "https://avatars.githubusercontent.com/u/137885024?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/deepak-habilelabs",
"html_url": "https://github.com/deepak-habilelabs",
"followers_url": "https://api.github.com/users/deepak-habilelabs/followers",
"following_url": "https://api.github.com/users/deepak-habilelabs/following{/other_user}",
"gists_url": "https://api.github.com/users/deepak-habilelabs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/deepak-habilelabs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/deepak-habilelabs/subscriptions",
"organizations_url": "https://api.github.com/users/deepak-habilelabs/orgs",
"repos_url": "https://api.github.com/users/deepak-habilelabs/repos",
"events_url": "https://api.github.com/users/deepak-habilelabs/events{/privacy}",
"received_events_url": "https://api.github.com/users/deepak-habilelabs/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5680700848,
"node_id": "LA_kwDOIPDwls8AAAABUpidsA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:question",
"name": "auto:question",
"color": "BFD4F2",
"default": false,
"description": "A specific question about the codebase, product, project, or how to use a feature"
},
{
"id": 5820539098,
"node_id": "LA_kwDOIPDwls8AAAABWu5g2g",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20models",
"name": "area: models",
"color": "bfdadc",
"default": false,
"description": "Related to LLMs or chat model modules"
}
] | open | false | null | [] | null | 3 | 2023-12-21T08:28:42 | 2023-12-21T09:39:16 | null | NONE | null | ### Issue you'd like to raise.
I am not getting output that matches my prompt. What modifications or changes do I need to make?
```python
def chat_langchain(new_project_qa, query, not_uuid):
    check = query.lower()
    relevant_document = result['source_documents']
    user_experience_inst = UserExperience.objects.get(not_uuid=not_uuid)
    custom_prompt_template = f"""You are a Chatbot answering questions. Use the following pieces of context to answer the question at the end. If you don't know the answer, say that you don't know, don't try to make up an answer.
{relevant_document}

Question: {check}
Helpful Answer:"""
    CUSTOMPROMPT = PromptTemplate(
        template=custom_prompt_template, input_variables=["context", "question"]
    )
    print(CUSTOMPROMPT, "------------------")
    new_project_qa.combine_documents_chain.llm_chain.prompt = CUSTOMPROMPT
    result = new_project_qa(query)

    if relevant_document:
        source = relevant_document[0].metadata.get('source', '')

        # Check if the file extension is ".pdf"
        file_extension = os.path.splitext(source)[1]
        if file_extension.lower() == ".pdf":
            source = os.path.basename(source)

        # Retrieve the UserExperience instance using the provided not_uuid
        user_experience_inst = UserExperience.objects.get(not_uuid=not_uuid)
        bot_ending = user_experience_inst.bot_ending_msg if user_experience_inst.bot_ending_msg is not None else ""

        # Create the list_json dictionary
        if bot_ending != '':
            list_json = {
                'bot_message': result['result'] + '\n\n' + str(bot_ending),
                "citation": source
            }
        else:
            list_json = {
                'bot_message': result['result'] + str(bot_ending),
                "citation": source
            }
    else:
        # Handle the case when relevant_document is empty
        list_json = {
            'bot_message': result['result'],
            'citation': ''
        }

    # Return the list_json dictionary
    return list_json
```
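Editorial note, not part of the original report: the f-string above substitutes the retrieved documents and the question into the template text itself (and it reads `result['source_documents']` before `result` is assigned), while `input_variables` still declares `context` and `question`. A hedged sketch of a template that instead leaves those placeholders for the chain to fill in at run time:

```python
from langchain.prompts import PromptTemplate

# Hypothetical sketch: keep {context} and {question} as placeholders so the
# combine-documents chain substitutes the retrieved documents and the query itself.
custom_prompt_template = """You are a Chatbot answering questions. Use the following pieces of
context to answer the question at the end. If you don't know the answer, say that you don't
know; don't try to make up an answer.

{context}

Question: {question}
Helpful Answer:"""

CUSTOMPROMPT = PromptTemplate(
    template=custom_prompt_template, input_variables=["context", "question"]
)
```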
### Suggestion:
_No response_ | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/15000/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/15000/timeline | null | null | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/14999 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14999/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14999/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14999/events | https://github.com/langchain-ai/langchain/pull/14999 | 2,051,918,160 | PR_kwDOIPDwls5ii1-6 | 14,999 | fixed wrong link in documentation | {
"login": "Yanni8",
"id": 99135388,
"node_id": "U_kgDOBeivnA",
"avatar_url": "https://avatars.githubusercontent.com/u/99135388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Yanni8",
"html_url": "https://github.com/Yanni8",
"followers_url": "https://api.github.com/users/Yanni8/followers",
"following_url": "https://api.github.com/users/Yanni8/following{/other_user}",
"gists_url": "https://api.github.com/users/Yanni8/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Yanni8/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Yanni8/subscriptions",
"organizations_url": "https://api.github.com/users/Yanni8/orgs",
"repos_url": "https://api.github.com/users/Yanni8/repos",
"events_url": "https://api.github.com/users/Yanni8/events{/privacy}",
"received_events_url": "https://api.github.com/users/Yanni8/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5541144676,
"node_id": "LA_kwDOIPDwls8AAAABSkcoZA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20doc%20loader",
"name": "area: doc loader",
"color": "D4C5F9",
"default": false,
"description": "Related to document loader module (not documentation)"
},
{
"id": 5680700918,
"node_id": "LA_kwDOIPDwls8AAAABUpid9g",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:documentation",
"name": "auto:documentation",
"color": "C5DEF5",
"default": false,
"description": "Changes to documentation and examples, like .md, .rst, .ipynb files. Changes to the docs/ folder"
},
{
"id": 6232714104,
"node_id": "LA_kwDOIPDwls8AAAABc3-reA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:XS",
"name": "size:XS",
"color": "C2E0C6",
"default": false,
"description": "This PR changes 0-9 lines, ignoring generated files."
}
] | closed | false | null | [] | null | 1 | 2023-12-21T08:06:08 | 2023-12-21T20:25:50 | 2023-12-21T17:06:43 | CONTRIBUTOR | null | <!-- Thank you for contributing to LangChain!
Please title your PR "<package>: <description>", where <package> is whichever of langchain, community, core, experimental, etc. is being modified.
Replace this entire comment with:
- **Description:** a description of the change,
- **Issue:** the issue # it fixes if applicable,
- **Dependencies:** any dependencies required for this change,
- **Twitter handle:** we announce bigger features on Twitter. If your PR gets announced, and you'd like a mention, we'll gladly shout you out!
Please make sure your PR is passing linting and testing before submitting. Run `make format`, `make lint` and `make test` from the root of the package you've modified to check this locally.
See contribution guidelines for more information on how to write/run tests, lint, etc: https://python.langchain.com/docs/contributing/
If you're adding a new integration, please include:
1. a test for the integration, preferably unit tests that do not rely on network access,
2. an example notebook showing its use. It lives in `docs/docs/integrations` directory.
If no one reviews your PR within a few days, please @-mention one of @baskaryan, @eyurtsev, @hwchase17.
-->
See #14998
| {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14999/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14999/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14999",
"html_url": "https://github.com/langchain-ai/langchain/pull/14999",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14999.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14999.patch",
"merged_at": "2023-12-21T17:06:43"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14998 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14998/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14998/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14998/events | https://github.com/langchain-ai/langchain/issues/14998 | 2,051,897,510 | I_kwDOIPDwls56TXim | 14,998 | DOC: Document PAGE NOT FOUND | {
"login": "XinhhD",
"id": 33118163,
"node_id": "MDQ6VXNlcjMzMTE4MTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/33118163?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/XinhhD",
"html_url": "https://github.com/XinhhD",
"followers_url": "https://api.github.com/users/XinhhD/followers",
"following_url": "https://api.github.com/users/XinhhD/following{/other_user}",
"gists_url": "https://api.github.com/users/XinhhD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/XinhhD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/XinhhD/subscriptions",
"organizations_url": "https://api.github.com/users/XinhhD/orgs",
"repos_url": "https://api.github.com/users/XinhhD/repos",
"events_url": "https://api.github.com/users/XinhhD/events{/privacy}",
"received_events_url": "https://api.github.com/users/XinhhD/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5541144676,
"node_id": "LA_kwDOIPDwls8AAAABSkcoZA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20doc%20loader",
"name": "area: doc loader",
"color": "D4C5F9",
"default": false,
"description": "Related to document loader module (not documentation)"
},
{
"id": 5680700918,
"node_id": "LA_kwDOIPDwls8AAAABUpid9g",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:documentation",
"name": "auto:documentation",
"color": "C5DEF5",
"default": false,
"description": "Changes to documentation and examples, like .md, .rst, .ipynb files. Changes to the docs/ folder"
}
] | closed | false | null | [] | null | 5 | 2023-12-21T07:49:36 | 2023-12-24T09:09:50 | 2023-12-24T09:09:50 | NONE | null | ### Issue with current documentation:
404 page: https://python.langchain.com/docs/contributing/integration
referenced by: https://python.langchain.com/docs/contributing/
![image](https://github.com/langchain-ai/langchain/assets/33118163/360d1a67-0c2c-471a-aba1-2eef21bc8f9a)
### Idea or request for content:
_No response_ | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14998/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14998/timeline | null | completed | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/14997 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14997/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14997/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14997/events | https://github.com/langchain-ai/langchain/pull/14997 | 2,051,805,807 | PR_kwDOIPDwls5iidQ4 | 14,997 | community: fix for surrealdb client 0.3.2 update + store and retrieve metadata | {
"login": "lalanikarim",
"id": 1296705,
"node_id": "MDQ6VXNlcjEyOTY3MDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1296705?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lalanikarim",
"html_url": "https://github.com/lalanikarim",
"followers_url": "https://api.github.com/users/lalanikarim/followers",
"following_url": "https://api.github.com/users/lalanikarim/following{/other_user}",
"gists_url": "https://api.github.com/users/lalanikarim/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lalanikarim/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lalanikarim/subscriptions",
"organizations_url": "https://api.github.com/users/lalanikarim/orgs",
"repos_url": "https://api.github.com/users/lalanikarim/repos",
"events_url": "https://api.github.com/users/lalanikarim/events{/privacy}",
"received_events_url": "https://api.github.com/users/lalanikarim/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5541432778,
"node_id": "LA_kwDOIPDwls8AAAABSkuNyg",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20vector%20store",
"name": "area: vector store",
"color": "D4C5F9",
"default": false,
"description": "Related to vector store module"
},
{
"id": 5680700873,
"node_id": "LA_kwDOIPDwls8AAAABUpidyQ",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:improvement",
"name": "auto:improvement",
"color": "FBCA04",
"default": false,
"description": "Medium size change to existing code to handle new use-cases"
},
{
"id": 6232714108,
"node_id": "LA_kwDOIPDwls8AAAABc3-rfA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:S",
"name": "size:S",
"color": "BFDADC",
"default": false,
"description": "This PR changes 10-29 lines, ignoring generated files."
}
] | closed | false | null | [] | null | 2 | 2023-12-21T06:51:11 | 2023-12-21T17:04:58 | 2023-12-21T17:04:57 | CONTRIBUTOR | null | The SurrealDB client changes from 0.3.1 to 0.3.2 broke the SurrealDB vector store integration.
This PR updates the code to work with the updated client. The change is backwards compatible with previous versions of the SurrealDB client.
It also expands the vector store implementation to store and retrieve the metadata included with the document object. | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14997/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14997/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14997",
"html_url": "https://github.com/langchain-ai/langchain/pull/14997",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14997.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14997.patch",
"merged_at": "2023-12-21T17:04:57"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14996 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14996/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14996/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14996/events | https://github.com/langchain-ai/langchain/issues/14996 | 2,051,702,953 | I_kwDOIPDwls56SoCp | 14,996 | Add implementation of '_create_chat_result()' method for MiniMax's current implementation | {
"login": "KirisameR",
"id": 60131395,
"node_id": "MDQ6VXNlcjYwMTMxMzk1",
"avatar_url": "https://avatars.githubusercontent.com/u/60131395?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KirisameR",
"html_url": "https://github.com/KirisameR",
"followers_url": "https://api.github.com/users/KirisameR/followers",
"following_url": "https://api.github.com/users/KirisameR/following{/other_user}",
"gists_url": "https://api.github.com/users/KirisameR/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KirisameR/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KirisameR/subscriptions",
"organizations_url": "https://api.github.com/users/KirisameR/orgs",
"repos_url": "https://api.github.com/users/KirisameR/repos",
"events_url": "https://api.github.com/users/KirisameR/events{/privacy}",
"received_events_url": "https://api.github.com/users/KirisameR/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5680700873,
"node_id": "LA_kwDOIPDwls8AAAABUpidyQ",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:improvement",
"name": "auto:improvement",
"color": "FBCA04",
"default": false,
"description": "Medium size change to existing code to handle new use-cases"
},
{
"id": 5820539098,
"node_id": "LA_kwDOIPDwls8AAAABWu5g2g",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20models",
"name": "area: models",
"color": "bfdadc",
"default": false,
"description": "Related to LLMs or chat model modules"
}
] | open | false | null | [] | null | 2 | 2023-12-21T05:02:10 | 2023-12-21T05:42:45 | null | NONE | null | ### Feature request
Add an implementation of the `_create_chat_result()` method to MiniMax's current integration so that it can be accepted as one of the chat models
### Motivation
Currently MiniMax's chat functionality does not work properly with Langchain, as described in this issue:
https://github.com/langchain-ai/langchain/issues/14796
The investigation into this bug suggests a missing implementation of the `_create_chat_result()` method. With a proper implementation of this method, the `_generate` method will be able to return `ChatResult` objects instead of plain `str` values, which are not accepted.
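A minimal sketch of what such a method might look like, modeled on how other chat integrations assemble their results (an illustration with assumed details, not MiniMax's actual code):

```python
from langchain_core.messages import AIMessage
from langchain_core.outputs import ChatGeneration, ChatResult


def _create_chat_result(self, text: str) -> ChatResult:
    # Hypothetical sketch: wrap the raw string returned by the MiniMax API in the
    # ChatGeneration/ChatResult structures that the chat model interface expects
    # _generate to return.
    generation = ChatGeneration(message=AIMessage(content=text))
    return ChatResult(generations=[generation])
```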
### Your contribution
I am currently investigating how to implement it myself, and I am happy to provide any support, including discussion, testing, etc. | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14996/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14996/timeline | null | null | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/14995 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14995/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14995/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14995/events | https://github.com/langchain-ai/langchain/pull/14995 | 2,051,652,570 | PR_kwDOIPDwls5ih8n6 | 14,995 | Update Ollama multi-modal multi-vector template README.md | {
"login": "rlancemartin",
"id": 122662504,
"node_id": "U_kgDOB0-uaA",
"avatar_url": "https://avatars.githubusercontent.com/u/122662504?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rlancemartin",
"html_url": "https://github.com/rlancemartin",
"followers_url": "https://api.github.com/users/rlancemartin/followers",
"following_url": "https://api.github.com/users/rlancemartin/following{/other_user}",
"gists_url": "https://api.github.com/users/rlancemartin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rlancemartin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rlancemartin/subscriptions",
"organizations_url": "https://api.github.com/users/rlancemartin/orgs",
"repos_url": "https://api.github.com/users/rlancemartin/repos",
"events_url": "https://api.github.com/users/rlancemartin/events{/privacy}",
"received_events_url": "https://api.github.com/users/rlancemartin/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5541144676,
"node_id": "LA_kwDOIPDwls8AAAABSkcoZA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20doc%20loader",
"name": "area: doc loader",
"color": "D4C5F9",
"default": false,
"description": "Related to document loader module (not documentation)"
},
{
"id": 5680700918,
"node_id": "LA_kwDOIPDwls8AAAABUpid9g",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:documentation",
"name": "auto:documentation",
"color": "C5DEF5",
"default": false,
"description": "Changes to documentation and examples, like .md, .rst, .ipynb files. Changes to the docs/ folder"
},
{
"id": 6232714108,
"node_id": "LA_kwDOIPDwls8AAAABc3-rfA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:S",
"name": "size:S",
"color": "BFDADC",
"default": false,
"description": "This PR changes 10-29 lines, ignoring generated files."
}
] | closed | false | null | [] | null | 1 | 2023-12-21T04:06:30 | 2023-12-21T04:07:40 | 2023-12-21T04:07:39 | COLLABORATOR | null | null | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14995/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14995/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14995",
"html_url": "https://github.com/langchain-ai/langchain/pull/14995",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14995.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14995.patch",
"merged_at": "2023-12-21T04:07:39"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14994 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14994/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14994/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14994/events | https://github.com/langchain-ai/langchain/pull/14994 | 2,051,652,035 | PR_kwDOIPDwls5ih8hh | 14,994 | Update Ollama multi-modal template README.md | {
"login": "rlancemartin",
"id": 122662504,
"node_id": "U_kgDOB0-uaA",
"avatar_url": "https://avatars.githubusercontent.com/u/122662504?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rlancemartin",
"html_url": "https://github.com/rlancemartin",
"followers_url": "https://api.github.com/users/rlancemartin/followers",
"following_url": "https://api.github.com/users/rlancemartin/following{/other_user}",
"gists_url": "https://api.github.com/users/rlancemartin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rlancemartin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rlancemartin/subscriptions",
"organizations_url": "https://api.github.com/users/rlancemartin/orgs",
"repos_url": "https://api.github.com/users/rlancemartin/repos",
"events_url": "https://api.github.com/users/rlancemartin/events{/privacy}",
"received_events_url": "https://api.github.com/users/rlancemartin/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5541144676,
"node_id": "LA_kwDOIPDwls8AAAABSkcoZA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20doc%20loader",
"name": "area: doc loader",
"color": "D4C5F9",
"default": false,
"description": "Related to document loader module (not documentation)"
},
{
"id": 5680700918,
"node_id": "LA_kwDOIPDwls8AAAABUpid9g",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:documentation",
"name": "auto:documentation",
"color": "C5DEF5",
"default": false,
"description": "Changes to documentation and examples, like .md, .rst, .ipynb files. Changes to the docs/ folder"
},
{
"id": 6232714108,
"node_id": "LA_kwDOIPDwls8AAAABc3-rfA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:S",
"name": "size:S",
"color": "BFDADC",
"default": false,
"description": "This PR changes 10-29 lines, ignoring generated files."
}
] | closed | false | null | [] | null | 1 | 2023-12-21T04:05:46 | 2023-12-21T04:07:28 | 2023-12-21T04:07:28 | COLLABORATOR | null | null | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14994/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14994/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14994",
"html_url": "https://github.com/langchain-ai/langchain/pull/14994",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14994.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14994.patch",
"merged_at": "2023-12-21T04:07:28"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14993 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14993/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14993/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14993/events | https://github.com/langchain-ai/langchain/pull/14993 | 2,051,651,395 | PR_kwDOIPDwls5ih8Yw | 14,993 | Update Gemini template README.md | {
"login": "rlancemartin",
"id": 122662504,
"node_id": "U_kgDOB0-uaA",
"avatar_url": "https://avatars.githubusercontent.com/u/122662504?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rlancemartin",
"html_url": "https://github.com/rlancemartin",
"followers_url": "https://api.github.com/users/rlancemartin/followers",
"following_url": "https://api.github.com/users/rlancemartin/following{/other_user}",
"gists_url": "https://api.github.com/users/rlancemartin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rlancemartin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rlancemartin/subscriptions",
"organizations_url": "https://api.github.com/users/rlancemartin/orgs",
"repos_url": "https://api.github.com/users/rlancemartin/repos",
"events_url": "https://api.github.com/users/rlancemartin/events{/privacy}",
"received_events_url": "https://api.github.com/users/rlancemartin/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5541144676,
"node_id": "LA_kwDOIPDwls8AAAABSkcoZA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20doc%20loader",
"name": "area: doc loader",
"color": "D4C5F9",
"default": false,
"description": "Related to document loader module (not documentation)"
},
{
"id": 5680700918,
"node_id": "LA_kwDOIPDwls8AAAABUpid9g",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:documentation",
"name": "auto:documentation",
"color": "C5DEF5",
"default": false,
"description": "Changes to documentation and examples, like .md, .rst, .ipynb files. Changes to the docs/ folder"
},
{
"id": 6232714108,
"node_id": "LA_kwDOIPDwls8AAAABc3-rfA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:S",
"name": "size:S",
"color": "BFDADC",
"default": false,
"description": "This PR changes 10-29 lines, ignoring generated files."
}
] | closed | false | null | [] | null | 1 | 2023-12-21T04:05:08 | 2023-12-21T04:07:21 | 2023-12-21T04:07:20 | COLLABORATOR | null | null | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14993/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14993/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14993",
"html_url": "https://github.com/langchain-ai/langchain/pull/14993",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14993.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14993.patch",
"merged_at": "2023-12-21T04:07:20"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14992 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14992/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14992/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14992/events | https://github.com/langchain-ai/langchain/pull/14992 | 2,051,651,070 | PR_kwDOIPDwls5ih8T9 | 14,992 | Update multi-modal multi-vector template README.md | {
"login": "rlancemartin",
"id": 122662504,
"node_id": "U_kgDOB0-uaA",
"avatar_url": "https://avatars.githubusercontent.com/u/122662504?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rlancemartin",
"html_url": "https://github.com/rlancemartin",
"followers_url": "https://api.github.com/users/rlancemartin/followers",
"following_url": "https://api.github.com/users/rlancemartin/following{/other_user}",
"gists_url": "https://api.github.com/users/rlancemartin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rlancemartin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rlancemartin/subscriptions",
"organizations_url": "https://api.github.com/users/rlancemartin/orgs",
"repos_url": "https://api.github.com/users/rlancemartin/repos",
"events_url": "https://api.github.com/users/rlancemartin/events{/privacy}",
"received_events_url": "https://api.github.com/users/rlancemartin/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5541144676,
"node_id": "LA_kwDOIPDwls8AAAABSkcoZA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20doc%20loader",
"name": "area: doc loader",
"color": "D4C5F9",
"default": false,
"description": "Related to document loader module (not documentation)"
},
{
"id": 5680700918,
"node_id": "LA_kwDOIPDwls8AAAABUpid9g",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:documentation",
"name": "auto:documentation",
"color": "C5DEF5",
"default": false,
"description": "Changes to documentation and examples, like .md, .rst, .ipynb files. Changes to the docs/ folder"
},
{
"id": 5924999838,
"node_id": "LA_kwDOIPDwls8AAAABYShSng",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/integration:%20chroma",
"name": "integration: chroma",
"color": "B78AF8",
"default": false,
"description": "Related to ChromaDB"
},
{
"id": 6232714108,
"node_id": "LA_kwDOIPDwls8AAAABc3-rfA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:S",
"name": "size:S",
"color": "BFDADC",
"default": false,
"description": "This PR changes 10-29 lines, ignoring generated files."
}
] | closed | false | null | [] | null | 1 | 2023-12-21T04:04:50 | 2023-12-21T04:07:13 | 2023-12-21T04:07:12 | COLLABORATOR | null | null | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14992/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14992/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14992",
"html_url": "https://github.com/langchain-ai/langchain/pull/14992",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14992.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14992.patch",
"merged_at": "2023-12-21T04:07:12"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14991 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14991/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14991/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14991/events | https://github.com/langchain-ai/langchain/pull/14991 | 2,051,649,670 | PR_kwDOIPDwls5ih8Gl | 14,991 | Update multi-modal template README.md | {
"login": "rlancemartin",
"id": 122662504,
"node_id": "U_kgDOB0-uaA",
"avatar_url": "https://avatars.githubusercontent.com/u/122662504?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rlancemartin",
"html_url": "https://github.com/rlancemartin",
"followers_url": "https://api.github.com/users/rlancemartin/followers",
"following_url": "https://api.github.com/users/rlancemartin/following{/other_user}",
"gists_url": "https://api.github.com/users/rlancemartin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rlancemartin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rlancemartin/subscriptions",
"organizations_url": "https://api.github.com/users/rlancemartin/orgs",
"repos_url": "https://api.github.com/users/rlancemartin/repos",
"events_url": "https://api.github.com/users/rlancemartin/events{/privacy}",
"received_events_url": "https://api.github.com/users/rlancemartin/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5541144676,
"node_id": "LA_kwDOIPDwls8AAAABSkcoZA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20doc%20loader",
"name": "area: doc loader",
"color": "D4C5F9",
"default": false,
"description": "Related to document loader module (not documentation)"
},
{
"id": 5680700918,
"node_id": "LA_kwDOIPDwls8AAAABUpid9g",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:documentation",
"name": "auto:documentation",
"color": "C5DEF5",
"default": false,
"description": "Changes to documentation and examples, like .md, .rst, .ipynb files. Changes to the docs/ folder"
},
{
"id": 5924999838,
"node_id": "LA_kwDOIPDwls8AAAABYShSng",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/integration:%20chroma",
"name": "integration: chroma",
"color": "B78AF8",
"default": false,
"description": "Related to ChromaDB"
},
{
"id": 6232714108,
"node_id": "LA_kwDOIPDwls8AAAABc3-rfA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:S",
"name": "size:S",
"color": "BFDADC",
"default": false,
"description": "This PR changes 10-29 lines, ignoring generated files."
}
] | closed | false | null | [] | null | 1 | 2023-12-21T04:03:43 | 2023-12-21T04:06:53 | 2023-12-21T04:06:53 | COLLABORATOR | null | <!-- Thank you for contributing to LangChain!
Please title your PR "<package>: <description>", where <package> is whichever of langchain, community, core, experimental, etc. is being modified.
Replace this entire comment with:
- **Description:** a description of the change,
- **Issue:** the issue # it fixes if applicable,
- **Dependencies:** any dependencies required for this change,
- **Twitter handle:** we announce bigger features on Twitter. If your PR gets announced, and you'd like a mention, we'll gladly shout you out!
Please make sure your PR is passing linting and testing before submitting. Run `make format`, `make lint` and `make test` from the root of the package you've modified to check this locally.
See contribution guidelines for more information on how to write/run tests, lint, etc: https://python.langchain.com/docs/contributing/
If you're adding a new integration, please include:
1. a test for the integration, preferably unit tests that do not rely on network access,
2. an example notebook showing its use. It lives in `docs/docs/integrations` directory.
If no one reviews your PR within a few days, please @-mention one of @baskaryan, @eyurtsev, @hwchase17.
-->
| {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14991/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14991/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14991",
"html_url": "https://github.com/langchain-ai/langchain/pull/14991",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14991.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14991.patch",
"merged_at": "2023-12-21T04:06:53"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14990 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14990/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14990/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14990/events | https://github.com/langchain-ai/langchain/pull/14990 | 2,051,622,813 | PR_kwDOIPDwls5ih2Wv | 14,990 | community[minor] langchain[patch]: convert class tool to openai function | {
"login": "badbye",
"id": 3295865,
"node_id": "MDQ6VXNlcjMyOTU4NjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/3295865?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/badbye",
"html_url": "https://github.com/badbye",
"followers_url": "https://api.github.com/users/badbye/followers",
"following_url": "https://api.github.com/users/badbye/following{/other_user}",
"gists_url": "https://api.github.com/users/badbye/gists{/gist_id}",
"starred_url": "https://api.github.com/users/badbye/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/badbye/subscriptions",
"organizations_url": "https://api.github.com/users/badbye/orgs",
"repos_url": "https://api.github.com/users/badbye/repos",
"events_url": "https://api.github.com/users/badbye/events{/privacy}",
"received_events_url": "https://api.github.com/users/badbye/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4899412369,
"node_id": "LA_kwDOIPDwls8AAAABJAcZkQ",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20agent",
"name": "area: agent",
"color": "BFD4F2",
"default": false,
"description": "Related to agents module"
},
{
"id": 5680700839,
"node_id": "LA_kwDOIPDwls8AAAABUpidpw",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:bug",
"name": "auto:bug",
"color": "E99695",
"default": false,
"description": "Related to a bug, vulnerability, unexpected error with an existing feature"
},
{
"id": 6232714126,
"node_id": "LA_kwDOIPDwls8AAAABc3-rjg",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:L",
"name": "size:L",
"color": "BFD4F2",
"default": false,
"description": "This PR changes 100-499 lines, ignoring generated files."
}
] | open | false | null | [] | null | 4 | 2023-12-21T03:21:33 | 2024-01-03T07:09:51 | null | NONE | null | Convert tools to a JSON format description for OpenAI. Previously, the function `format_tool_to_openai_function` did not return the correct parameter name when given a tool that subclasses `BaseTool`:
```python
import os
from langchain.agents import load_tools
from langchain.tools.render import format_tool_to_openai_function
os.environ['SERPER_API_KEY'] = '1' * 40
tools = load_tools(["google-serper"])
format_tool_to_openai_function(tools[0])['parameters']
# {'properties': {'__arg1': {'title': '__arg1', 'type': 'string'}}, 'required': ['__arg1'], 'type': 'object'}
```
This patch is trying to fix it. The correct output should be `{'properties': {'query': {'title': 'Query', 'type': 'string'}}, 'required': ['query'], 'type': 'object'}`
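A rough sketch of the kind of conversion this aims for (an illustration, not the actual diff in this PR): when a tool carries an `args_schema`, derive the OpenAI function parameters from that schema instead of falling back to the generic `__arg1` signature.

```python
from langchain.tools import BaseTool


def tool_to_openai_function_sketch(tool: BaseTool) -> dict:
    # Hypothetical helper: prefer the tool's own args_schema when it has one.
    if tool.args_schema is not None:
        schema = tool.args_schema.schema()
        parameters = {
            "type": "object",
            "properties": schema.get("properties", {}),
            "required": schema.get("required", []),
        }
    else:
        # Fall back to the single-string signature used for plain Tool objects.
        parameters = {
            "type": "object",
            "properties": {"__arg1": {"title": "__arg1", "type": "string"}},
            "required": ["__arg1"],
        }
    return {"name": tool.name, "description": tool.description, "parameters": parameters}
```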
| {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14990/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14990/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14990",
"html_url": "https://github.com/langchain-ai/langchain/pull/14990",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14990.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14990.patch",
"merged_at": null
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14989 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14989/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14989/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14989/events | https://github.com/langchain-ai/langchain/pull/14989 | 2,051,615,056 | PR_kwDOIPDwls5ih0vK | 14,989 | community[patch]: Fix ip relevance score | {
"login": "careywyr",
"id": 17082044,
"node_id": "MDQ6VXNlcjE3MDgyMDQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/17082044?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/careywyr",
"html_url": "https://github.com/careywyr",
"followers_url": "https://api.github.com/users/careywyr/followers",
"following_url": "https://api.github.com/users/careywyr/following{/other_user}",
"gists_url": "https://api.github.com/users/careywyr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/careywyr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/careywyr/subscriptions",
"organizations_url": "https://api.github.com/users/careywyr/orgs",
"repos_url": "https://api.github.com/users/careywyr/repos",
"events_url": "https://api.github.com/users/careywyr/events{/privacy}",
"received_events_url": "https://api.github.com/users/careywyr/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5541432778,
"node_id": "LA_kwDOIPDwls8AAAABSkuNyg",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20vector%20store",
"name": "area: vector store",
"color": "D4C5F9",
"default": false,
"description": "Related to vector store module"
},
{
"id": 5680700839,
"node_id": "LA_kwDOIPDwls8AAAABUpidpw",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:bug",
"name": "auto:bug",
"color": "E99695",
"default": false,
"description": "Related to a bug, vulnerability, unexpected error with an existing feature"
},
{
"id": 6232714108,
"node_id": "LA_kwDOIPDwls8AAAABc3-rfA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:S",
"name": "size:S",
"color": "BFDADC",
"default": false,
"description": "This PR changes 10-29 lines, ignoring generated files."
}
] | open | false | {
"login": "efriis",
"id": 9557659,
"node_id": "MDQ6VXNlcjk1NTc2NTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/9557659?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/efriis",
"html_url": "https://github.com/efriis",
"followers_url": "https://api.github.com/users/efriis/followers",
"following_url": "https://api.github.com/users/efriis/following{/other_user}",
"gists_url": "https://api.github.com/users/efriis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/efriis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/efriis/subscriptions",
"organizations_url": "https://api.github.com/users/efriis/orgs",
"repos_url": "https://api.github.com/users/efriis/repos",
"events_url": "https://api.github.com/users/efriis/events{/privacy}",
"received_events_url": "https://api.github.com/users/efriis/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "efriis",
"id": 9557659,
"node_id": "MDQ6VXNlcjk1NTc2NTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/9557659?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/efriis",
"html_url": "https://github.com/efriis",
"followers_url": "https://api.github.com/users/efriis/followers",
"following_url": "https://api.github.com/users/efriis/following{/other_user}",
"gists_url": "https://api.github.com/users/efriis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/efriis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/efriis/subscriptions",
"organizations_url": "https://api.github.com/users/efriis/orgs",
"repos_url": "https://api.github.com/users/efriis/repos",
"events_url": "https://api.github.com/users/efriis/events{/privacy}",
"received_events_url": "https://api.github.com/users/efriis/received_events",
"type": "User",
"site_admin": false
}
] | null | 5 | 2023-12-21T03:09:15 | 2024-01-01T23:53:36 | null | NONE | null | fix method _max_inner_product_relevance_score_fn when DistanceStrategy is MAX_INNER_PRODUCT
[#14948](https://github.com/langchain-ai/langchain/issues/14948) | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14989/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14989/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14989",
"html_url": "https://github.com/langchain-ai/langchain/pull/14989",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14989.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14989.patch",
"merged_at": null
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14988 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14988/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14988/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14988/events | https://github.com/langchain-ai/langchain/issues/14988 | 2,051,606,338 | I_kwDOIPDwls56SQdC | 14,988 | Issue: Adding function calling to ConversationalRetrievalChain | {
"login": "ComeBackTo2016",
"id": 10664313,
"node_id": "MDQ6VXNlcjEwNjY0MzEz",
"avatar_url": "https://avatars.githubusercontent.com/u/10664313?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ComeBackTo2016",
"html_url": "https://github.com/ComeBackTo2016",
"followers_url": "https://api.github.com/users/ComeBackTo2016/followers",
"following_url": "https://api.github.com/users/ComeBackTo2016/following{/other_user}",
"gists_url": "https://api.github.com/users/ComeBackTo2016/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ComeBackTo2016/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ComeBackTo2016/subscriptions",
"organizations_url": "https://api.github.com/users/ComeBackTo2016/orgs",
"repos_url": "https://api.github.com/users/ComeBackTo2016/repos",
"events_url": "https://api.github.com/users/ComeBackTo2016/events{/privacy}",
"received_events_url": "https://api.github.com/users/ComeBackTo2016/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5680700848,
"node_id": "LA_kwDOIPDwls8AAAABUpidsA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:question",
"name": "auto:question",
"color": "BFD4F2",
"default": false,
"description": "A specific question about the codebase, product, project, or how to use a feature"
},
{
"id": 5820539098,
"node_id": "LA_kwDOIPDwls8AAAABWu5g2g",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20models",
"name": "area: models",
"color": "bfdadc",
"default": false,
"description": "Related to LLMs or chat model modules"
}
] | open | false | null | [] | null | 1 | 2023-12-21T02:57:09 | 2023-12-21T03:03:57 | null | NONE | null | ### Issue you'd like to raise.
I am having a problem adding function calling to a chain of type ConversationalRetrievalChain. I need help finding a solution.
Here is my code, which creates a ConversationalRetrievalChain to retrieve local knowledge and generate chat history information in a summary format. It works fine. However, when I try to add a call to weather_function, I don't know where to add it. I have browsed most of the documentation and couldn't find a solution. Can anyone help me? Thank you!
```python
documents = TextLoader("./file/text.txt").load()
text_splitter = CharacterTextSplitter(chunk_size=300, chunk_overlap=50)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings(openai_api_key=APIKEY, openai_api_base=OPENAI_API_BASE)
db = FAISS.from_documents(docs, embeddings)
retriever = db.as_retriever()
Template = """You are a good man and happy to chat with everyone:
{context}
history chat information in summary:
{chat_history}
Question: {question}
"""
prompt = PromptTemplate(
    input_variables=["context", "chat_history", "question"], template=Template
)
output_parser = StrOutputParser()
model = ChatOpenAI(
    model_name=DEFAULT_MODEL,
    openai_api_key=APIKEY,
    openai_api_base=OPENAI_API_BASE,
    temperature=0.9,
)
memory = ConversationSummaryMemory(
    llm=model, memory_key="chat_history", return_messages=True
)
conversation_chain = ConversationalRetrievalChain.from_llm(
    llm=model,
    retriever=db.as_retriever(),
    memory=memory,
    combine_docs_chain_kwargs={'prompt': prompt},
    verbose=False,
)
```
function calling : weather_function
```python
class WeatherSearch(BaseModel):
    """Call this with an airport code to get the weather at that airport"""

    airport_code: str = Field(description="airport code to get weather for")


weather_function = convert_pydantic_to_openai_function(WeatherSearch)
```
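An editorial aside rather than part of the original question: for OpenAI function calling, the function definition has to reach the chat model's request parameters somewhere. A hedged sketch of one way to do that on the model object itself, reusing the names from the snippet above (whether `ConversationalRetrievalChain` then does anything useful with a returned `function_call` is a separate question):

```python
# Hypothetical sketch: send the function definition with every request this chat
# model makes; the chain construction above stays unchanged apart from the llm.
model_with_functions = ChatOpenAI(
    model_name=DEFAULT_MODEL,
    openai_api_key=APIKEY,
    openai_api_base=OPENAI_API_BASE,
    temperature=0.9,
    model_kwargs={"functions": [weather_function]},
)
```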
### Suggestion:
_No response_ | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14988/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14988/timeline | null | null | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/14987 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14987/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14987/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14987/events | https://github.com/langchain-ai/langchain/issues/14987 | 2,051,605,621 | I_kwDOIPDwls56SQR1 | 14,987 | Issue: Unable to return documents in my custom llm / agent executor implementation | {
"login": "madhavthaker1",
"id": 90870356,
"node_id": "MDQ6VXNlcjkwODcwMzU2",
"avatar_url": "https://avatars.githubusercontent.com/u/90870356?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/madhavthaker1",
"html_url": "https://github.com/madhavthaker1",
"followers_url": "https://api.github.com/users/madhavthaker1/followers",
"following_url": "https://api.github.com/users/madhavthaker1/following{/other_user}",
"gists_url": "https://api.github.com/users/madhavthaker1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/madhavthaker1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/madhavthaker1/subscriptions",
"organizations_url": "https://api.github.com/users/madhavthaker1/orgs",
"repos_url": "https://api.github.com/users/madhavthaker1/repos",
"events_url": "https://api.github.com/users/madhavthaker1/events{/privacy}",
"received_events_url": "https://api.github.com/users/madhavthaker1/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4899412369,
"node_id": "LA_kwDOIPDwls8AAAABJAcZkQ",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20agent",
"name": "area: agent",
"color": "BFD4F2",
"default": false,
"description": "Related to agents module"
},
{
"id": 5680700848,
"node_id": "LA_kwDOIPDwls8AAAABUpidsA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:question",
"name": "auto:question",
"color": "BFD4F2",
"default": false,
"description": "A specific question about the codebase, product, project, or how to use a feature"
}
] | open | false | null | [] | null | 1 | 2023-12-21T02:56:10 | 2023-12-21T03:06:24 | null | NONE | null | ### Issue you'd like to raise.
I'm looking to use a HuggingFace pipeline with Mistral 7B. I am attempting to pass this into an AgentExecutor and use a retriever-based tool.
```python
from langchain.agents.agent_toolkits import create_retriever_tool
from langchain_core.pydantic_v1 import BaseModel, Field
class RetrieverInput(BaseModel):
    query: str = Field(description="query to look up in retriever")


fantasy_football_tool = Tool(
    name="search_fantasy_football_articles",
    description="Searches and returns documents regarding fantasy football.",
    func=retriever.get_relevant_documents,
    # coroutine=retriever.aget_relevant_documents,
    args_schema=RetrieverInput,
)

fantasy_football_tool.run("how is trevor lawrence doing?")
[Document(page_content='Trevor Lawrence\n\nStill in concussion protocol Wednesday\n\nC.J. Stroud', metadata={'source': 'https://www.fantasypros.com/2023/11/rival-fantasy-nfl-week-10/'}),
Document(page_content='Trevor Lawrence\n\nStill in concussion protocol Wednesday\n\nC.J. Stroud', metadata={'source': 'https://www.fantasypros.com/2023/11/nfl-week-10-sleeper-picks-player-predictions-2023/'}),
```
This shows that my tool is working as expected. Now to construct the agent.
```python
prompt_template = """
### [INST]
Assistant is a large language model trained by Mistral.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
Context:
------
Assistant has access to the following tools:
{tools}
To use a tool, please use the following format:
```
Thought: Do I need to use a tool? Yes
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
```
When you have a response to say to the Human, or if you do not need to use a tool, you MUST use the format:
```
Thought: Do I need to use a tool? No
Final Answer: [your response here]
```
Begin!
Previous conversation history:
{chat_history}
New input: {input}
Current Scratchpad:
{agent_scratchpad}
[/INST]
"""
# Create prompt from prompt template
prompt = PromptTemplate(
    input_variables=['agent_scratchpad', 'chat_history', 'input', 'tool_names', 'tools'],
    template=prompt_template,
)
prompt = prompt.partial(
    tools=render_text_description(tools),
    tool_names=", ".join([t.name for t in tools]),
)
# Create llm chain
# This is a hugging face pipeline.
llm_chain = LLMChain(llm=mistral_llm, prompt=prompt)
from typing import Union

from langchain.agents.conversational.output_parser import ConvoOutputParser
from langchain.output_parsers.json import parse_json_markdown
from langchain_core.agents import AgentAction, AgentFinish
from langchain_core.exceptions import OutputParserException
class CustomOutputParser(ConvoOutputParser):
    def parse(self, text: str) -> Union[AgentAction, AgentFinish]:
        """Attempts to parse the given text into an AgentAction or AgentFinish.

        Raises:
            OutputParserException if parsing fails.
        """
        try:
            # If the response contains an 'action' and 'action_input'
            print(text)
            if "Action" in text or "Action Input" in text:
                # If the action indicates a final answer, return an AgentFinish
                if "Final Answer" in text:
                    return AgentFinish({"output": text.split('Final Answer:')[1]}, text)
                else:
                    # Otherwise, return an AgentAction with the specified action and
                    # input
                    return AgentAction(action, action_input, text)
            else:
                # If the necessary keys aren't present in the response, raise an
                # exception
                raise OutputParserException(
                    f"Missing 'action' or 'action_input' in LLM output: {text}"
                )
        except Exception as e:
            # If any other exception is raised during parsing, also raise an
            # OutputParserException
            raise OutputParserException(f"Could not parse LLM output: {text}") from e
output_parser = CustomOutputParser()
# Create an agent with your LLMChain
agent = ConversationalAgent(llm_chain=llm_chain, output_parser=output_parser)
memory = ConversationBufferMemory(memory_key="chat_history")
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True, memory=memory)
```
I've tested my `agent_executor` using the same question and get this:
```python
Thought: Do I need to use a tool? Yes
Action: search_fantasy_football_articles
Action Input: "trevor lawrence"
Observation: The search returned several articles discussing Trevor Lawrence's performance in fantasy football this week.
Final Answer: According to the articles I found, Trevor Lawrence had a strong performance in fantasy football this week.
```
So it seems like it is calling the tool, but it's not actually grabbing or using the documents. Any ideas on what I need to change?
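One thing that may help — a sketch under assumptions, not the original setup — is to have the tool itself return the retrieved text, so the Observation the agent sees actually contains the documents. Here `retriever` is assumed to be the same retriever that backs the index:
```python
# Hypothetical sketch: wrap the retriever in a plain Tool whose output is the document text.
from langchain.agents import Tool

def search_articles(query: str) -> str:
    # `retriever` is an assumption -- reuse whatever retriever the vector store exposes.
    docs = retriever.get_relevant_documents(query)
    return "\n\n".join(doc.page_content for doc in docs) or "No matching articles found."

tools = [
    Tool(
        name="search_fantasy_football_articles",  # name taken from the trace above
        func=search_articles,
        description="Searches the indexed fantasy football articles and returns the matching text.",
    )
]
```
If the Observation then contains the article text rather than a one-line summary, the agent has something concrete to ground its Final Answer on.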
### Suggestion:
_No response_ | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14987/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14987/timeline | null | null | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/14986 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14986/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14986/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14986/events | https://github.com/langchain-ai/langchain/issues/14986 | 2,051,569,869 | I_kwDOIPDwls56SHjN | 14,986 | Issue: Obtain the content output by AsyncCallbackHandler on_llm_new_token and send it to the front end to find that the newline character is missing. | {
"login": "wuzechuan",
"id": 14210962,
"node_id": "MDQ6VXNlcjE0MjEwOTYy",
"avatar_url": "https://avatars.githubusercontent.com/u/14210962?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wuzechuan",
"html_url": "https://github.com/wuzechuan",
"followers_url": "https://api.github.com/users/wuzechuan/followers",
"following_url": "https://api.github.com/users/wuzechuan/following{/other_user}",
"gists_url": "https://api.github.com/users/wuzechuan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wuzechuan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wuzechuan/subscriptions",
"organizations_url": "https://api.github.com/users/wuzechuan/orgs",
"repos_url": "https://api.github.com/users/wuzechuan/repos",
"events_url": "https://api.github.com/users/wuzechuan/events{/privacy}",
"received_events_url": "https://api.github.com/users/wuzechuan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5680700839,
"node_id": "LA_kwDOIPDwls8AAAABUpidpw",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:bug",
"name": "auto:bug",
"color": "E99695",
"default": false,
"description": "Related to a bug, vulnerability, unexpected error with an existing feature"
},
{
"id": 5820539098,
"node_id": "LA_kwDOIPDwls8AAAABWu5g2g",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20models",
"name": "area: models",
"color": "bfdadc",
"default": false,
"description": "Related to LLMs or chat model modules"
}
] | closed | false | null | [] | null | 1 | 2023-12-21T02:13:25 | 2023-12-25T09:16:45 | 2023-12-25T09:16:45 | NONE | null | ### Issue you'd like to raise.
I used an AsyncCallbackHandler as a callback. When I pushed the content to the front end through on_llm_new_token, I found that the markdown code block was missing a newline character, which meant the front end could not render the markdown properly. However, when I retrieved the final response and returned the complete answer content, the newline character was present.
<img width="155" alt="image" src="https://github.com/langchain-ai/langchain/assets/14210962/214581c0-f427-4e83-b06e-f1ded11efe20">
How should I solve the problem I am currently facing?
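For reference, a minimal handler sketch (not taken from the original post) that forwards each token verbatim; the assumption is that the newline survives inside LangChain and is dropped by the transport or rendering layer, so the token is JSON-encoded before it is pushed:
```python
# Minimal sketch: forward every token exactly as received, including "\n".
import json
from typing import Any

from langchain.callbacks.base import AsyncCallbackHandler

class TokenForwardingHandler(AsyncCallbackHandler):
    def __init__(self, queue):
        # e.g. an asyncio.Queue that an SSE/websocket endpoint drains
        self.queue = queue

    async def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
        # json.dumps escapes "\n", so bare newlines cannot be swallowed by the wire format.
        await self.queue.put(json.dumps({"token": token}))
```
Comparing the raw tokens logged here with what the front end receives should show where the newline disappears.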
### Suggestion:
_No response_ | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14986/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/langchain-ai/langchain/issues/14986/timeline | null | completed | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/14985 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14985/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14985/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14985/events | https://github.com/langchain-ai/langchain/pull/14985 | 2,051,566,391 | PR_kwDOIPDwls5ihqTc | 14,985 | community[patch]: JaguarHttpClient conditional import | {
"login": "fserv",
"id": 115371133,
"node_id": "U_kgDOBuBsfQ",
"avatar_url": "https://avatars.githubusercontent.com/u/115371133?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fserv",
"html_url": "https://github.com/fserv",
"followers_url": "https://api.github.com/users/fserv/followers",
"following_url": "https://api.github.com/users/fserv/following{/other_user}",
"gists_url": "https://api.github.com/users/fserv/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fserv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fserv/subscriptions",
"organizations_url": "https://api.github.com/users/fserv/orgs",
"repos_url": "https://api.github.com/users/fserv/repos",
"events_url": "https://api.github.com/users/fserv/events{/privacy}",
"received_events_url": "https://api.github.com/users/fserv/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5454193895,
"node_id": "LA_kwDOIPDwls8AAAABRRhk5w",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/lgtm",
"name": "lgtm",
"color": "0E8A16",
"default": false,
"description": ""
},
{
"id": 5541432778,
"node_id": "LA_kwDOIPDwls8AAAABSkuNyg",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20vector%20store",
"name": "area: vector store",
"color": "D4C5F9",
"default": false,
"description": "Related to vector store module"
},
{
"id": 5680700873,
"node_id": "LA_kwDOIPDwls8AAAABUpidyQ",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:improvement",
"name": "auto:improvement",
"color": "FBCA04",
"default": false,
"description": "Medium size change to existing code to handle new use-cases"
},
{
"id": 6232714108,
"node_id": "LA_kwDOIPDwls8AAAABc3-rfA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:S",
"name": "size:S",
"color": "BFDADC",
"default": false,
"description": "This PR changes 10-29 lines, ignoring generated files."
}
] | closed | false | null | [] | null | 1 | 2023-12-21T02:10:40 | 2023-12-21T03:11:58 | 2023-12-21T03:11:58 | CONTRIBUTOR | null | - **Description:** Fixed jaguar.py to import JaguarHttpClient with try and catch
- **Issue:** the issue # Unable to use the JaguarHttpClient at run time
- **Dependencies:** It requires "pip install -U jaguardb-http-client"
- **Twitter handle:** workbot | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14985/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14985/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14985",
"html_url": "https://github.com/langchain-ai/langchain/pull/14985",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14985.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14985.patch",
"merged_at": "2023-12-21T03:11:58"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14984 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14984/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14984/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14984/events | https://github.com/langchain-ai/langchain/pull/14984 | 2,051,556,861 | PR_kwDOIPDwls5ihoJI | 14,984 | Implement streaming for xml output parser | {
"login": "nfcampos",
"id": 56902,
"node_id": "MDQ6VXNlcjU2OTAy",
"avatar_url": "https://avatars.githubusercontent.com/u/56902?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nfcampos",
"html_url": "https://github.com/nfcampos",
"followers_url": "https://api.github.com/users/nfcampos/followers",
"following_url": "https://api.github.com/users/nfcampos/following{/other_user}",
"gists_url": "https://api.github.com/users/nfcampos/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nfcampos/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nfcampos/subscriptions",
"organizations_url": "https://api.github.com/users/nfcampos/orgs",
"repos_url": "https://api.github.com/users/nfcampos/repos",
"events_url": "https://api.github.com/users/nfcampos/events{/privacy}",
"received_events_url": "https://api.github.com/users/nfcampos/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5454193895,
"node_id": "LA_kwDOIPDwls8AAAABRRhk5w",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/lgtm",
"name": "lgtm",
"color": "0E8A16",
"default": false,
"description": ""
},
{
"id": 5541144676,
"node_id": "LA_kwDOIPDwls8AAAABSkcoZA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20doc%20loader",
"name": "area: doc loader",
"color": "D4C5F9",
"default": false,
"description": "Related to document loader module (not documentation)"
},
{
"id": 5680700873,
"node_id": "LA_kwDOIPDwls8AAAABUpidyQ",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:improvement",
"name": "auto:improvement",
"color": "FBCA04",
"default": false,
"description": "Medium size change to existing code to handle new use-cases"
},
{
"id": 6232714119,
"node_id": "LA_kwDOIPDwls8AAAABc3-rhw",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:M",
"name": "size:M",
"color": "C5DEF5",
"default": false,
"description": "This PR changes 30-99 lines, ignoring generated files."
}
] | closed | false | null | [] | null | 1 | 2023-12-21T02:03:16 | 2023-12-21T19:30:19 | 2023-12-21T19:30:18 | COLLABORATOR | null | <!-- Thank you for contributing to LangChain!
Please title your PR "<package>: <description>", where <package> is whichever of langchain, community, core, experimental, etc. is being modified.
Replace this entire comment with:
- **Description:** a description of the change,
- **Issue:** the issue # it fixes if applicable,
- **Dependencies:** any dependencies required for this change,
- **Twitter handle:** we announce bigger features on Twitter. If your PR gets announced, and you'd like a mention, we'll gladly shout you out!
Please make sure your PR is passing linting and testing before submitting. Run `make format`, `make lint` and `make test` from the root of the package you've modified to check this locally.
See contribution guidelines for more information on how to write/run tests, lint, etc: https://python.langchain.com/docs/contributing/
If you're adding a new integration, please include:
1. a test for the integration, preferably unit tests that do not rely on network access,
2. an example notebook showing its use. It lives in `docs/docs/integrations` directory.
If no one reviews your PR within a few days, please @-mention one of @baskaryan, @eyurtsev, @hwchase17.
-->
| {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14984/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14984/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14984",
"html_url": "https://github.com/langchain-ai/langchain/pull/14984",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14984.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14984.patch",
"merged_at": "2023-12-21T19:30:18"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14983 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14983/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14983/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14983/events | https://github.com/langchain-ai/langchain/pull/14983 | 2,051,548,595 | PR_kwDOIPDwls5ihmRd | 14,983 | Fixed jaguar.py to import JaguarHttpClient with try and catch | {
"login": "fserv",
"id": 115371133,
"node_id": "U_kgDOBuBsfQ",
"avatar_url": "https://avatars.githubusercontent.com/u/115371133?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fserv",
"html_url": "https://github.com/fserv",
"followers_url": "https://api.github.com/users/fserv/followers",
"following_url": "https://api.github.com/users/fserv/following{/other_user}",
"gists_url": "https://api.github.com/users/fserv/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fserv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fserv/subscriptions",
"organizations_url": "https://api.github.com/users/fserv/orgs",
"repos_url": "https://api.github.com/users/fserv/repos",
"events_url": "https://api.github.com/users/fserv/events{/privacy}",
"received_events_url": "https://api.github.com/users/fserv/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5541432778,
"node_id": "LA_kwDOIPDwls8AAAABSkuNyg",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20vector%20store",
"name": "area: vector store",
"color": "D4C5F9",
"default": false,
"description": "Related to vector store module"
},
{
"id": 5680700873,
"node_id": "LA_kwDOIPDwls8AAAABUpidyQ",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:improvement",
"name": "auto:improvement",
"color": "FBCA04",
"default": false,
"description": "Medium size change to existing code to handle new use-cases"
},
{
"id": 6232714108,
"node_id": "LA_kwDOIPDwls8AAAABc3-rfA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:S",
"name": "size:S",
"color": "BFDADC",
"default": false,
"description": "This PR changes 10-29 lines, ignoring generated files."
}
] | closed | false | null | [] | null | 1 | 2023-12-21T01:56:56 | 2023-12-21T02:06:25 | 2023-12-21T02:06:25 | CONTRIBUTOR | null | - **Description:** Fixed jaguar.py to import JaguarHttpClient with try and catch
- **Issue:** the issue # Unable to use the JaguarHttpClient at run time
- **Dependencies:** It requires "pip install -U jaguardb-http-client"
- **Twitter handle:** workbot | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14983/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14983/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14983",
"html_url": "https://github.com/langchain-ai/langchain/pull/14983",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14983.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14983.patch",
"merged_at": null
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14982 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14982/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14982/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14982/events | https://github.com/langchain-ai/langchain/pull/14982 | 2,051,535,693 | PR_kwDOIPDwls5ihjVm | 14,982 | community[patch]: Fix typo in class Docstring (#14982) | {
"login": "yacine555",
"id": 1928640,
"node_id": "MDQ6VXNlcjE5Mjg2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1928640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yacine555",
"html_url": "https://github.com/yacine555",
"followers_url": "https://api.github.com/users/yacine555/followers",
"following_url": "https://api.github.com/users/yacine555/following{/other_user}",
"gists_url": "https://api.github.com/users/yacine555/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yacine555/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yacine555/subscriptions",
"organizations_url": "https://api.github.com/users/yacine555/orgs",
"repos_url": "https://api.github.com/users/yacine555/repos",
"events_url": "https://api.github.com/users/yacine555/events{/privacy}",
"received_events_url": "https://api.github.com/users/yacine555/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5454193895,
"node_id": "LA_kwDOIPDwls8AAAABRRhk5w",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/lgtm",
"name": "lgtm",
"color": "0E8A16",
"default": false,
"description": ""
},
{
"id": 5680700883,
"node_id": "LA_kwDOIPDwls8AAAABUpid0w",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:nit",
"name": "auto:nit",
"color": "FEF2C0",
"default": false,
"description": "Small modifications/deletions, fixes, deps or improvements to existing code or docs"
},
{
"id": 5820539098,
"node_id": "LA_kwDOIPDwls8AAAABWu5g2g",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20models",
"name": "area: models",
"color": "bfdadc",
"default": false,
"description": "Related to LLMs or chat model modules"
},
{
"id": 6232714104,
"node_id": "LA_kwDOIPDwls8AAAABc3-reA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:XS",
"name": "size:XS",
"color": "C2E0C6",
"default": false,
"description": "This PR changes 0-9 lines, ignoring generated files."
}
] | closed | false | null | [] | null | 1 | 2023-12-21T01:47:22 | 2023-12-21T03:03:45 | 2023-12-21T03:03:45 | CONTRIBUTOR | null |
- **Description:** Fix a typo in the class docstring, replacing AZURE_OPENAI_API_ENDPOINT with AZURE_OPENAI_ENDPOINT
- **Issue:** the issue #14901
- **Dependencies:** NA
- **Twitter handle:**
| {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14982/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14982/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14982",
"html_url": "https://github.com/langchain-ai/langchain/pull/14982",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14982.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14982.patch",
"merged_at": "2023-12-21T03:03:45"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14981 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14981/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14981/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14981/events | https://github.com/langchain-ai/langchain/pull/14981 | 2,051,493,858 | PR_kwDOIPDwls5ihabo | 14,981 | Implement streaming for all list output parsers | {
"login": "nfcampos",
"id": 56902,
"node_id": "MDQ6VXNlcjU2OTAy",
"avatar_url": "https://avatars.githubusercontent.com/u/56902?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nfcampos",
"html_url": "https://github.com/nfcampos",
"followers_url": "https://api.github.com/users/nfcampos/followers",
"following_url": "https://api.github.com/users/nfcampos/following{/other_user}",
"gists_url": "https://api.github.com/users/nfcampos/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nfcampos/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nfcampos/subscriptions",
"organizations_url": "https://api.github.com/users/nfcampos/orgs",
"repos_url": "https://api.github.com/users/nfcampos/repos",
"events_url": "https://api.github.com/users/nfcampos/events{/privacy}",
"received_events_url": "https://api.github.com/users/nfcampos/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5454193895,
"node_id": "LA_kwDOIPDwls8AAAABRRhk5w",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/lgtm",
"name": "lgtm",
"color": "0E8A16",
"default": false,
"description": ""
},
{
"id": 5680700873,
"node_id": "LA_kwDOIPDwls8AAAABUpidyQ",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:improvement",
"name": "auto:improvement",
"color": "FBCA04",
"default": false,
"description": "Medium size change to existing code to handle new use-cases"
},
{
"id": 6232714126,
"node_id": "LA_kwDOIPDwls8AAAABc3-rjg",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:L",
"name": "size:L",
"color": "BFD4F2",
"default": false,
"description": "This PR changes 100-499 lines, ignoring generated files."
}
] | closed | false | null | [] | null | 1 | 2023-12-21T00:42:05 | 2023-12-21T19:30:36 | 2023-12-21T19:30:36 | COLLABORATOR | null | <!-- Thank you for contributing to LangChain!
Please title your PR "<package>: <description>", where <package> is whichever of langchain, community, core, experimental, etc. is being modified.
Replace this entire comment with:
- **Description:** a description of the change,
- **Issue:** the issue # it fixes if applicable,
- **Dependencies:** any dependencies required for this change,
- **Twitter handle:** we announce bigger features on Twitter. If your PR gets announced, and you'd like a mention, we'll gladly shout you out!
Please make sure your PR is passing linting and testing before submitting. Run `make format`, `make lint` and `make test` from the root of the package you've modified to check this locally.
See contribution guidelines for more information on how to write/run tests, lint, etc: https://python.langchain.com/docs/contributing/
If you're adding a new integration, please include:
1. a test for the integration, preferably unit tests that do not rely on network access,
2. an example notebook showing its use. It lives in `docs/docs/integrations` directory.
If no one reviews your PR within a few days, please @-mention one of @baskaryan, @eyurtsev, @hwchase17.
-->
| {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14981/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14981/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14981",
"html_url": "https://github.com/langchain-ai/langchain/pull/14981",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14981.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14981.patch",
"merged_at": "2023-12-21T19:30:36"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14980 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14980/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14980/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14980/events | https://github.com/langchain-ai/langchain/issues/14980 | 2,051,462,141 | I_kwDOIPDwls56RtP9 | 14,980 | ChatOllama stream method raises warn_deprecated NotImplementedError | {
"login": "v-byte-cpu",
"id": 65545655,
"node_id": "MDQ6VXNlcjY1NTQ1NjU1",
"avatar_url": "https://avatars.githubusercontent.com/u/65545655?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/v-byte-cpu",
"html_url": "https://github.com/v-byte-cpu",
"followers_url": "https://api.github.com/users/v-byte-cpu/followers",
"following_url": "https://api.github.com/users/v-byte-cpu/following{/other_user}",
"gists_url": "https://api.github.com/users/v-byte-cpu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/v-byte-cpu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/v-byte-cpu/subscriptions",
"organizations_url": "https://api.github.com/users/v-byte-cpu/orgs",
"repos_url": "https://api.github.com/users/v-byte-cpu/repos",
"events_url": "https://api.github.com/users/v-byte-cpu/events{/privacy}",
"received_events_url": "https://api.github.com/users/v-byte-cpu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5680700839,
"node_id": "LA_kwDOIPDwls8AAAABUpidpw",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:bug",
"name": "auto:bug",
"color": "E99695",
"default": false,
"description": "Related to a bug, vulnerability, unexpected error with an existing feature"
},
{
"id": 5820539098,
"node_id": "LA_kwDOIPDwls8AAAABWu5g2g",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20models",
"name": "area: models",
"color": "bfdadc",
"default": false,
"description": "Related to LLMs or chat model modules"
}
] | open | false | null | [] | null | 4 | 2023-12-20T23:51:39 | 2024-01-08T22:44:00 | null | NONE | null | ### System Info
langchain version: v0.0.352
python version: 3.11
Hi there! After PR https://github.com/langchain-ai/langchain/pull/14713 was merged, I started getting errors in the stream() method:
```
File .../lib/python3.11/site-packages/langchain_core/_api/deprecation.py:295, in warn_deprecated(since, message, name, alternative, pending, obj_type, addendum, removal)
293 if not removal:
294 removal = f"in {removal}" if removal else "within ?? minor releases"
--> 295 raise NotImplementedError(
296 f"Need to determine which default deprecation schedule to use. "
297 f"{removal}"
298 )
299 else:
300 removal = f"in {removal}"
NotImplementedError: Need to determine which default deprecation schedule to use. within ?? minor releases
```
I guess this decorator must have a `pending=True` argument.
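To illustrate that guess — a hedged sketch with placeholder names, not the actual patch — the decorator would need either `pending=True` or an explicit `removal` version for `warn_deprecated` to pass:
```python
# Placeholder example only: `_some_streaming_helper` stands in for the decorated Ollama method.
from langchain_core._api import deprecated

@deprecated(since="0.0.352", pending=True)  # or: @deprecated(since="0.0.352", removal="0.2.0")
def _some_streaming_helper() -> None:
    """Stand-in for the method whose deprecation decorator triggers the error."""
```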
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.chat_models import ChatOllama
llm = ChatOllama(
model="openchat:7b-v3.5-1210-q4_K_M",
)
for chunk in llm.stream("Where were the Olympics held?"):
print(chunk, end="", flush=True)
```
### Expected behavior
successful streaming output from llm | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14980/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14980/timeline | null | null | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/14979 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14979/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14979/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14979/events | https://github.com/langchain-ai/langchain/issues/14979 | 2,051,459,162 | I_kwDOIPDwls56Rsha | 14,979 | Ignoring Specific Paragraphs or Divs in Web Scraping with LangChain | {
"login": "David-Sol-AI",
"id": 149724775,
"node_id": "U_kgDOCOyeZw",
"avatar_url": "https://avatars.githubusercontent.com/u/149724775?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/David-Sol-AI",
"html_url": "https://github.com/David-Sol-AI",
"followers_url": "https://api.github.com/users/David-Sol-AI/followers",
"following_url": "https://api.github.com/users/David-Sol-AI/following{/other_user}",
"gists_url": "https://api.github.com/users/David-Sol-AI/gists{/gist_id}",
"starred_url": "https://api.github.com/users/David-Sol-AI/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/David-Sol-AI/subscriptions",
"organizations_url": "https://api.github.com/users/David-Sol-AI/orgs",
"repos_url": "https://api.github.com/users/David-Sol-AI/repos",
"events_url": "https://api.github.com/users/David-Sol-AI/events{/privacy}",
"received_events_url": "https://api.github.com/users/David-Sol-AI/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5541144676,
"node_id": "LA_kwDOIPDwls8AAAABSkcoZA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20doc%20loader",
"name": "area: doc loader",
"color": "D4C5F9",
"default": false,
"description": "Related to document loader module (not documentation)"
},
{
"id": 5680700873,
"node_id": "LA_kwDOIPDwls8AAAABUpidyQ",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:improvement",
"name": "auto:improvement",
"color": "FBCA04",
"default": false,
"description": "Medium size change to existing code to handle new use-cases"
}
] | closed | false | null | [] | null | 2 | 2023-12-20T23:46:42 | 2023-12-21T00:27:04 | 2023-12-21T00:27:03 | NONE | null | ### Feature request
## Context:
I am currently developing a custom scraper using the LangChain tools, following the provided documentation. The core functionality involves extracting paragraphs from a list of URLs using the AsyncHtmlLoader and the Beautiful Soup transformer:
```python
loader = AsyncHtmlLoader(urls)
docs = loader.load()
docs_transformed = self.bs_transformer.transform_documents(docs, tags_to_extract=["p"])
return docs_transformed
```
## Problem:
The code successfully extracts all paragraphs from the provided URLs. However, in the case of web pages like https://www.aha.org/news/chairpersons-file/2023-12-18-chair-file-leadership-dialogue-reflecting-whats-next-health-care-joanne-conroy-md-dartmouth, there is a recurring issue. At the end of each blog or news article, there is a disclaimer message paragraph:
"Noncommercial use of original content on www.aha.org is granted to AHA Institutional Members, their employees and State, Regional and Metro Hospital Associations unless otherwise indicated. AHA does not claim ownership of any content, including content incorporated by permission into AHA produced materials, created by any third party and cannot grant permission to use, distribute or otherwise reproduce such third party content. To request permission to reproduce AHA content, please [click here](https://askrc.libraryresearch.info/reft100.aspx?key=ExtPerm).
"
## Proposed Solution:
To address this, I explored options and realized that excluding specific parts of the HTML could be a viable solution. Typically, using Beautiful Soup, I can delete specific paragraphs within a div by targeting the class parameter, as demonstrated here:
```python
soup.find('div', class_='aha-footer')
```
## Issue with LangChain Implementation:
Upon inspecting the beautiful_soup_transformer.py in the LangChain repository, particularly the remove_unwanted_tags method, I observed that it is currently implemented to remove unwanted tags in a general sense:
```python
soup = BeautifulSoup(html_content, "html.parser")
for tag in unwanted_tags:
    for element in soup.find_all(tag):
        element.decompose()
return str(soup)
```
This implementation makes it impossible to selectively eliminate specific divs from the HTML.
## Request for Guidance:
I seek guidance on how to ignore specific paragraphs or divs during web scraping with LangChain, particularly to exclude the recurring disclaimer paragraph mentioned above. I would appreciate any recommendations on the recommended approach or if there are plans to enhance the beautiful_soup_transformer.py to accommodate more granular exclusion of HTML elements.
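In the meantime, one possible workaround — a sketch, not an existing LangChain API — is to strip unwanted containers by CSS selector with Beautiful Soup before extracting paragraphs; `div.aha-footer` is the example class mentioned above:
```python
# Sketch: decompose excluded containers first, then collect the remaining <p> text.
from bs4 import BeautifulSoup

def extract_paragraphs(html_content: str, exclude_selectors=("div.aha-footer",)) -> str:
    soup = BeautifulSoup(html_content, "html.parser")
    for selector in exclude_selectors:
        for element in soup.select(selector):
            element.decompose()  # drop the disclaimer container entirely
    return " ".join(p.get_text(" ", strip=True) for p in soup.find_all("p"))
```
The same pre-processing could be applied to each document's page_content before (or instead of) the stock transformer.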
### Motivation
I am performing web scraping on this specific web page:
https://www.aha.org/news/chairpersons-file/2023-12-18-chair-file-leadership-dialogue-reflecting-whats-next-health-care-joanne-conroy-md-dartmouth
I am extracting all the paragraphs, but at the end of every blog or news article there is a disclaimer paragraph:
Noncommercial use of original content on www.aha.org is granted to AHA Institutional Members, their employees and State, Regional and Metro Hospital Associations unless otherwise indicated. AHA does not claim ownership of any content, including content incorporated by permission into AHA produced materials, created by any third party and cannot grant permission to use, distribute or otherwise reproduce such third party content. To request permission to reproduce AHA content, please [click here](https://askrc.libraryresearch.info/reft100.aspx?key=ExtPerm).
so I want to ignore that specific paragraph
### Your contribution
not yet | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14979/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14979/timeline | null | completed | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/14978 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14978/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14978/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14978/events | https://github.com/langchain-ai/langchain/pull/14978 | 2,051,452,987 | PR_kwDOIPDwls5ihRu9 | 14,978 | community[patch]: support momento vector index filter expressions | {
"login": "malandis",
"id": 3690240,
"node_id": "MDQ6VXNlcjM2OTAyNDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3690240?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/malandis",
"html_url": "https://github.com/malandis",
"followers_url": "https://api.github.com/users/malandis/followers",
"following_url": "https://api.github.com/users/malandis/following{/other_user}",
"gists_url": "https://api.github.com/users/malandis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/malandis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/malandis/subscriptions",
"organizations_url": "https://api.github.com/users/malandis/orgs",
"repos_url": "https://api.github.com/users/malandis/repos",
"events_url": "https://api.github.com/users/malandis/events{/privacy}",
"received_events_url": "https://api.github.com/users/malandis/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5454193895,
"node_id": "LA_kwDOIPDwls8AAAABRRhk5w",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/lgtm",
"name": "lgtm",
"color": "0E8A16",
"default": false,
"description": ""
},
{
"id": 5541432778,
"node_id": "LA_kwDOIPDwls8AAAABSkuNyg",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20vector%20store",
"name": "area: vector store",
"color": "D4C5F9",
"default": false,
"description": "Related to vector store module"
},
{
"id": 5680700873,
"node_id": "LA_kwDOIPDwls8AAAABUpidyQ",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:improvement",
"name": "auto:improvement",
"color": "FBCA04",
"default": false,
"description": "Medium size change to existing code to handle new use-cases"
},
{
"id": 6232714119,
"node_id": "LA_kwDOIPDwls8AAAABc3-rhw",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:M",
"name": "size:M",
"color": "C5DEF5",
"default": false,
"description": "This PR changes 30-99 lines, ignoring generated files."
}
] | closed | false | null | [] | null | 1 | 2023-12-20T23:36:41 | 2023-12-21T03:11:43 | 2023-12-21T03:11:43 | CONTRIBUTOR | null | **Description**
For the Momento Vector Index (MVI) vector store implementation, pass through `filter_expression` kwarg to the MVI client, if specified. This change will enable the MVI self query implementation in a future PR.
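A rough usage sketch of what this enables (the identifiers below are assumptions for illustration, not code from the PR):
```python
# Sketch: once the kwarg is passed through, a metadata filter can accompany a search.
results = vector_store.similarity_search(
    "what did the author say about databases?",
    k=4,
    filter_expression=my_filter_expression,  # a Momento Vector Index filter built elsewhere
)
```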
Also fixes some integration tests. | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14978/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14978/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14978",
"html_url": "https://github.com/langchain-ai/langchain/pull/14978",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14978.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14978.patch",
"merged_at": "2023-12-21T03:11:43"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14976 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14976/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14976/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14976/events | https://github.com/langchain-ai/langchain/pull/14976 | 2,051,409,937 | PR_kwDOIPDwls5ihIMF | 14,976 | API Ref navbar update | {
"login": "leo-gan",
"id": 2256422,
"node_id": "MDQ6VXNlcjIyNTY0MjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2256422?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leo-gan",
"html_url": "https://github.com/leo-gan",
"followers_url": "https://api.github.com/users/leo-gan/followers",
"following_url": "https://api.github.com/users/leo-gan/following{/other_user}",
"gists_url": "https://api.github.com/users/leo-gan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leo-gan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leo-gan/subscriptions",
"organizations_url": "https://api.github.com/users/leo-gan/orgs",
"repos_url": "https://api.github.com/users/leo-gan/repos",
"events_url": "https://api.github.com/users/leo-gan/events{/privacy}",
"received_events_url": "https://api.github.com/users/leo-gan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5541144676,
"node_id": "LA_kwDOIPDwls8AAAABSkcoZA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20doc%20loader",
"name": "area: doc loader",
"color": "D4C5F9",
"default": false,
"description": "Related to document loader module (not documentation)"
},
{
"id": 5680700873,
"node_id": "LA_kwDOIPDwls8AAAABUpidyQ",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:improvement",
"name": "auto:improvement",
"color": "FBCA04",
"default": false,
"description": "Medium size change to existing code to handle new use-cases"
},
{
"id": 6154420538,
"node_id": "LA_kwDOIPDwls8AAAABbtUBOg",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/template",
"name": "template",
"color": "145FB1",
"default": false,
"description": ""
},
{
"id": 6232714104,
"node_id": "LA_kwDOIPDwls8AAAABc3-reA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:XS",
"name": "size:XS",
"color": "C2E0C6",
"default": false,
"description": "This PR changes 0-9 lines, ignoring generated files."
},
{
"id": 6348691034,
"node_id": "LA_kwDOIPDwls8AAAABemlWWg",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/partner",
"name": "partner",
"color": "ededed",
"default": false,
"description": null
}
] | open | false | {
"login": "efriis",
"id": 9557659,
"node_id": "MDQ6VXNlcjk1NTc2NTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/9557659?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/efriis",
"html_url": "https://github.com/efriis",
"followers_url": "https://api.github.com/users/efriis/followers",
"following_url": "https://api.github.com/users/efriis/following{/other_user}",
"gists_url": "https://api.github.com/users/efriis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/efriis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/efriis/subscriptions",
"organizations_url": "https://api.github.com/users/efriis/orgs",
"repos_url": "https://api.github.com/users/efriis/repos",
"events_url": "https://api.github.com/users/efriis/events{/privacy}",
"received_events_url": "https://api.github.com/users/efriis/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "efriis",
"id": 9557659,
"node_id": "MDQ6VXNlcjk1NTc2NTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/9557659?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/efriis",
"html_url": "https://github.com/efriis",
"followers_url": "https://api.github.com/users/efriis/followers",
"following_url": "https://api.github.com/users/efriis/following{/other_user}",
"gists_url": "https://api.github.com/users/efriis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/efriis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/efriis/subscriptions",
"organizations_url": "https://api.github.com/users/efriis/orgs",
"repos_url": "https://api.github.com/users/efriis/repos",
"events_url": "https://api.github.com/users/efriis/events{/privacy}",
"received_events_url": "https://api.github.com/users/efriis/received_events",
"type": "User",
"site_admin": false
}
] | null | 7 | 2023-12-20T22:46:10 | 2024-01-17T03:41:00 | null | COLLABORATOR | null | Now the navbar items are too long and are not visible.
![image](https://github.com/langchain-ai/langchain/assets/2256422/8fb51f75-0ae2-43ec-a225-1dc69f16ce72)
I'm trying to cut the top-level part of the namespace in items. | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14976/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14976/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14976",
"html_url": "https://github.com/langchain-ai/langchain/pull/14976",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14976.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14976.patch",
"merged_at": null
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14975 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14975/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14975/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14975/events | https://github.com/langchain-ai/langchain/issues/14975 | 2,051,385,822 | I_kwDOIPDwls56Rane | 14,975 | Issue: Requesting Feedback on Integrating Gretel for Synthetic Tabular Generation | {
"login": "zredlined",
"id": 6510818,
"node_id": "MDQ6VXNlcjY1MTA4MTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/6510818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zredlined",
"html_url": "https://github.com/zredlined",
"followers_url": "https://api.github.com/users/zredlined/followers",
"following_url": "https://api.github.com/users/zredlined/following{/other_user}",
"gists_url": "https://api.github.com/users/zredlined/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zredlined/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zredlined/subscriptions",
"organizations_url": "https://api.github.com/users/zredlined/orgs",
"repos_url": "https://api.github.com/users/zredlined/repos",
"events_url": "https://api.github.com/users/zredlined/events{/privacy}",
"received_events_url": "https://api.github.com/users/zredlined/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5680700873,
"node_id": "LA_kwDOIPDwls8AAAABUpidyQ",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:improvement",
"name": "auto:improvement",
"color": "FBCA04",
"default": false,
"description": "Medium size change to existing code to handle new use-cases"
},
{
"id": 5820539098,
"node_id": "LA_kwDOIPDwls8AAAABWu5g2g",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20models",
"name": "area: models",
"color": "bfdadc",
"default": false,
"description": "Related to LLMs or chat model modules"
}
] | open | false | null | [] | null | 1 | 2023-12-20T22:29:45 | 2023-12-20T22:34:24 | null | NONE | null | ### Issue you'd like to raise.
Proposing updates to the SyntheticDataGenerator interface to create a cleaner foundation for building community integrations for synthetic tabular models like [Gretel](https://gretel.ai/tabular-llm) [[docs](https://docs.gretel.ai/reference/tabular-llm)]
### Suggestion:
# Existing Interface
The current `SyntheticDataGenerator` interface requires:
```python
def generate(
subject: str,
runs: int,
extra: Optional[str] = None
) -> List[str]
```
Where:
* `subject`: Subject the synthetic data is about
* `runs`: Number of times to generate the data
* `extra`: Extra instructions for steering
## Proposed Update
I propose changing this to the following:
```python
def generate(
prompt: str,
num_records: int,
optional_dataset: Optional[Union[str, Path, DataFrame]]
) -> List[str]
```
## Where:
* `prompt`: User prompt to create synthetic data
* `num_records`: Number of rows to generate
* `optional_dataset`: Dataset to edit/augment
I believe this creates a cleaner interface for synthetic tabular data flows: it combines the `subject` and `extra` parameters into a single field, and it lets the user specify the number of results they want (`num_records`) rather than the number of `runs` of the LLM, each of which could generate more than one record. The `optional_dataset` arg lets the user prompt the model with a dataset to edit or augment with new synthetic data.
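A hypothetical call against the proposed signature might look like this (nothing below exists yet; `generator` and the file name are placeholders):
```python
# Illustrative only: exercising the proposed generate() interface.
rows = generator.generate(
    prompt="Create synthetic patient intake records with name, age, and diagnosis columns.",
    num_records=100,
    optional_dataset="patients_sample.csv",  # seed dataset to edit/augment
)
```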
# Requesting Feedback
I would appreciate any thoughts on this proposed update, and I am happy to open a PR! Before I get started, please let me know:
* If you see any issues with changing the interface
* If an alternative integration approach would be better
* Any other API details to consider
https://python.langchain.com/docs/use_cases/data_generation
My goal is to have an intuitive integration for Gretel and future synthetic data models
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14975/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14975/timeline | null | null | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/14974 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14974/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14974/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14974/events | https://github.com/langchain-ai/langchain/issues/14974 | 2,051,361,571 | I_kwDOIPDwls56RUsj | 14,974 | LLMs start replying in other languages | {
"login": "ktibbs9417",
"id": 37451503,
"node_id": "MDQ6VXNlcjM3NDUxNTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/37451503?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ktibbs9417",
"html_url": "https://github.com/ktibbs9417",
"followers_url": "https://api.github.com/users/ktibbs9417/followers",
"following_url": "https://api.github.com/users/ktibbs9417/following{/other_user}",
"gists_url": "https://api.github.com/users/ktibbs9417/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ktibbs9417/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ktibbs9417/subscriptions",
"organizations_url": "https://api.github.com/users/ktibbs9417/orgs",
"repos_url": "https://api.github.com/users/ktibbs9417/repos",
"events_url": "https://api.github.com/users/ktibbs9417/events{/privacy}",
"received_events_url": "https://api.github.com/users/ktibbs9417/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5680700839,
"node_id": "LA_kwDOIPDwls8AAAABUpidpw",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:bug",
"name": "auto:bug",
"color": "E99695",
"default": false,
"description": "Related to a bug, vulnerability, unexpected error with an existing feature"
},
{
"id": 5820539098,
"node_id": "LA_kwDOIPDwls8AAAABWu5g2g",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20models",
"name": "area: models",
"color": "bfdadc",
"default": false,
"description": "Related to LLMs or chat model modules"
},
{
"id": 5932474361,
"node_id": "LA_kwDOIPDwls8AAAABYZpf-Q",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/integration:%20pinecone",
"name": "integration: pinecone",
"color": "BC53BE",
"default": false,
"description": "Related to Pinecone vector store integration"
}
] | open | false | null | [] | null | 3 | 2023-12-20T22:14:57 | 2023-12-22T00:21:26 | null | NONE | null | ### System Info
Currently, I am using an OpenAI LLM and Gemini Pro, both driven through LangChain. I am also using Google's embedding-001 model and the Cohere base embedding model (I tested each embedding; with both, the model either replies in English first and then switches to another language, or replies in another language straight away).
Here is my prompt template:
```python
def doc_question_prompt_template():
    template = """
    You are a helpful assistant that has the ability to answer all users questions to the best of your ability.
    Your answers should come from the context you are provided. Provide an answer with detail and not short answers.
    Your only response should be in the English language.

    Context:
    {context}

    User: {question}
    """
    return PromptTemplate(
        input_variables=["question"],
        template=template
    )


def doc_question_command(body, conversation_contexts):
    llmlibrary = LLMLibrary()
    channel_id = body['channel_id']
    user_id = body['user_id']
    context_key = f"{channel_id}-{user_id}"
    prompt = ChatPromptTemplate.doc_question_prompt_template()
    if context_key not in conversation_contexts:
        conversation_contexts[context_key] = {
            "memory": ConversationBufferMemory(memory_key="chat_history", output_key="answer", return_messages=True, max_token_limit=1024),
            "history": "",
        }
    user_memory = conversation_contexts[context_key]["memory"]
    question = body['text']
    conversation = llmlibrary.doc_question(user_memory, prompt, question)
    #print(f"Conversation: {conversation}")
    return question, conversation


def doc_question(self, user_memory, prompt, question):
    llm = ChatGoogleGenerativeAI(model="gemini-pro", temperature=0.0, convert_system_message_to_human=True)
    vectordb = self.vectorstore.get_vectordb()
    print(f"Vector DB: {vectordb}\n")
    retriever = vectordb.as_retriever(
        search_type="similarity_score_threshold",
        search_kwargs={'score_threshold': 0.8}
    )
    docs = retriever.get_relevant_documents(question)
    print(f"Docs: {docs}\n")
    print(f"Initiating chat conversation memory\n")
    #print(f"Conversation Memory: {memory}\n")
    conversation_chain = ConversationalRetrievalChain.from_llm(
        llm,
        retriever=retriever,
        memory=user_memory,
        combine_docs_chain_kwargs={'prompt': prompt},
        return_source_documents=True,
        verbose=False,
    )
    #print(f"Conversation chain: {conversation_chain}\n")
    return conversation_chain


@app.command("/doc_question")
def handle_doc_question_command(ack, body, say):
    # Acknowledge the command request
    ack()
    print(body)
    say(f"🤨 {body['text']}")
    question, conversation = ChatHandler.doc_question_command(body, conversation_contexts)
    response = conversation({'question': question})
    print(f"(INFO) Doc Question Response: {response} {time.time()}")
    print(f"(INFO) Doc Question Response answer: {response['answer']} {time.time()}")
    say(f"🤖 {response['answer']}")
```
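One thing that might be worth trying — an assumption on my part, not a confirmed fix — is that ConversationalRetrievalChain rewrites each question with its own default condense prompt, so the English-only requirement can be stated there as well. A sketch, reusing the objects defined above:
```python
# Sketch: add an explicit English-only condense-question prompt to the chain.
from langchain.chains import ConversationalRetrievalChain
from langchain.prompts import PromptTemplate

CONDENSE_PROMPT = PromptTemplate.from_template(
    "Given the following conversation and a follow up question, rephrase the follow up "
    "question to be a standalone question, written in English.\n\n"
    "Chat History:\n{chat_history}\nFollow Up Input: {question}\nStandalone question:"
)

conversation_chain = ConversationalRetrievalChain.from_llm(
    llm,
    retriever=retriever,
    memory=user_memory,
    condense_question_prompt=CONDENSE_PROMPT,
    combine_docs_chain_kwargs={"prompt": prompt},
    return_source_documents=True,
)
```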
Logs:
[output.txt](https://github.com/langchain-ai/langchain/files/13732990/output.txt)
![Screenshot 2023-12-20 at 2 00 14 PM](https://github.com/langchain-ai/langchain/assets/37451503/e12169a0-4e9b-4d31-9c4f-40e650c2aee4)
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. A user sends a message through Slack.
2. The message is received by @app.command("/doc_question").
3. ChatHandler.doc_question_command gets called, passing the body and conversation_contexts.
4. doc_question_command gets information about the message that was sent and gets the doc_question_prompt_template from the ChatPromptTemplate module.
5. conversation_contexts gets a context key with memory and history.
6. llmlibrary.doc_question is then called, passing user_memory, prompt, and question.
7. The doc_question function uses the ChatGoogleGenerativeAI module and gets the vectordb, which is Pinecone.
8. It uses ConversationalRetrievalChain.from_llm and passes it back to the handler, and the handler passes question and conversation back to @app.command("/doc_question").
9. The question is then submitted to the LLM and the response is returned within Slack (sometimes in English, sometimes in Spanish or another language).
### Expected behavior
Only reply in English. | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14974/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14974/timeline | null | null | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/14973 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14973/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14973/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14973/events | https://github.com/langchain-ai/langchain/pull/14973 | 2,051,334,375 | PR_kwDOIPDwls5ig25f | 14,973 | community: add args_schema to GmailSendMessage | {
"login": "ccurme",
"id": 26529506,
"node_id": "MDQ6VXNlcjI2NTI5NTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/26529506?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ccurme",
"html_url": "https://github.com/ccurme",
"followers_url": "https://api.github.com/users/ccurme/followers",
"following_url": "https://api.github.com/users/ccurme/following{/other_user}",
"gists_url": "https://api.github.com/users/ccurme/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ccurme/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ccurme/subscriptions",
"organizations_url": "https://api.github.com/users/ccurme/orgs",
"repos_url": "https://api.github.com/users/ccurme/repos",
"events_url": "https://api.github.com/users/ccurme/events{/privacy}",
"received_events_url": "https://api.github.com/users/ccurme/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5454193895,
"node_id": "LA_kwDOIPDwls8AAAABRRhk5w",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/lgtm",
"name": "lgtm",
"color": "0E8A16",
"default": false,
"description": ""
},
{
"id": 5680700873,
"node_id": "LA_kwDOIPDwls8AAAABUpidyQ",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:improvement",
"name": "auto:improvement",
"color": "FBCA04",
"default": false,
"description": "Medium size change to existing code to handle new use-cases"
},
{
"id": 6232714108,
"node_id": "LA_kwDOIPDwls8AAAABc3-rfA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:S",
"name": "size:S",
"color": "BFDADC",
"default": false,
"description": "This PR changes 10-29 lines, ignoring generated files."
}
] | closed | false | null | [] | null | 1 | 2023-12-20T21:56:42 | 2023-12-27T20:45:55 | 2023-12-22T21:07:45 | CONTRIBUTOR | null | - **Description:** `tools.gmail.send_message` implements a `SendMessageSchema` that is not used anywhere. `GmailSendMessage` also does not have an `args_schema` attribute (this led to issues when invoking the tool with an OpenAI functions agent, at least for me). Here we add the missing attribute and a minimal test for the tool.
- **Issue:** N/A
- **Dependencies:** N/A
- **Twitter handle:** N/A | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14973/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14973/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14973",
"html_url": "https://github.com/langchain-ai/langchain/pull/14973",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14973.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14973.patch",
"merged_at": "2023-12-22T21:07:45"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14972 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14972/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14972/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14972/events | https://github.com/langchain-ai/langchain/pull/14972 | 2,051,324,089 | PR_kwDOIPDwls5ig0qb | 14,972 | ci test - do not merge | {
"login": "efriis",
"id": 9557659,
"node_id": "MDQ6VXNlcjk1NTc2NTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/9557659?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/efriis",
"html_url": "https://github.com/efriis",
"followers_url": "https://api.github.com/users/efriis/followers",
"following_url": "https://api.github.com/users/efriis/following{/other_user}",
"gists_url": "https://api.github.com/users/efriis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/efriis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/efriis/subscriptions",
"organizations_url": "https://api.github.com/users/efriis/orgs",
"repos_url": "https://api.github.com/users/efriis/repos",
"events_url": "https://api.github.com/users/efriis/events{/privacy}",
"received_events_url": "https://api.github.com/users/efriis/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5680700883,
"node_id": "LA_kwDOIPDwls8AAAABUpid0w",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:nit",
"name": "auto:nit",
"color": "FEF2C0",
"default": false,
"description": "Small modifications/deletions, fixes, deps or improvements to existing code or docs"
},
{
"id": 5820539098,
"node_id": "LA_kwDOIPDwls8AAAABWu5g2g",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20models",
"name": "area: models",
"color": "bfdadc",
"default": false,
"description": "Related to LLMs or chat model modules"
},
{
"id": 6232714104,
"node_id": "LA_kwDOIPDwls8AAAABc3-reA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:XS",
"name": "size:XS",
"color": "C2E0C6",
"default": false,
"description": "This PR changes 0-9 lines, ignoring generated files."
}
] | closed | false | null | [] | null | 1 | 2023-12-20T21:45:43 | 2023-12-20T21:59:14 | 2023-12-20T21:59:14 | COLLABORATOR | null | null | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14972/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14972/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14972",
"html_url": "https://github.com/langchain-ai/langchain/pull/14972",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14972.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14972.patch",
"merged_at": null
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14971 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14971/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14971/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14971/events | https://github.com/langchain-ai/langchain/pull/14971 | 2,051,254,587 | PR_kwDOIPDwls5igk7z | 14,971 | WIP Runnable Chain | {
"login": "nfcampos",
"id": 56902,
"node_id": "MDQ6VXNlcjU2OTAy",
"avatar_url": "https://avatars.githubusercontent.com/u/56902?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nfcampos",
"html_url": "https://github.com/nfcampos",
"followers_url": "https://api.github.com/users/nfcampos/followers",
"following_url": "https://api.github.com/users/nfcampos/following{/other_user}",
"gists_url": "https://api.github.com/users/nfcampos/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nfcampos/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nfcampos/subscriptions",
"organizations_url": "https://api.github.com/users/nfcampos/orgs",
"repos_url": "https://api.github.com/users/nfcampos/repos",
"events_url": "https://api.github.com/users/nfcampos/events{/privacy}",
"received_events_url": "https://api.github.com/users/nfcampos/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5680700873,
"node_id": "LA_kwDOIPDwls8AAAABUpidyQ",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:improvement",
"name": "auto:improvement",
"color": "FBCA04",
"default": false,
"description": "Medium size change to existing code to handle new use-cases"
},
{
"id": 5820539098,
"node_id": "LA_kwDOIPDwls8AAAABWu5g2g",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20models",
"name": "area: models",
"color": "bfdadc",
"default": false,
"description": "Related to LLMs or chat model modules"
},
{
"id": 6232714126,
"node_id": "LA_kwDOIPDwls8AAAABc3-rjg",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:L",
"name": "size:L",
"color": "BFD4F2",
"default": false,
"description": "This PR changes 100-499 lines, ignoring generated files."
}
] | closed | false | null | [] | null | 1 | 2023-12-20T20:56:16 | 2024-01-02T19:50:07 | 2024-01-02T19:50:01 | COLLABORATOR | null | <!-- Thank you for contributing to LangChain!
Please title your PR "<package>: <description>", where <package> is whichever of langchain, community, core, experimental, etc. is being modified.
Replace this entire comment with:
- **Description:** a description of the change,
- **Issue:** the issue # it fixes if applicable,
- **Dependencies:** any dependencies required for this change,
- **Twitter handle:** we announce bigger features on Twitter. If your PR gets announced, and you'd like a mention, we'll gladly shout you out!
Please make sure your PR is passing linting and testing before submitting. Run `make format`, `make lint` and `make test` from the root of the package you've modified to check this locally.
See contribution guidelines for more information on how to write/run tests, lint, etc: https://python.langchain.com/docs/contributing/
If you're adding a new integration, please include:
1. a test for the integration, preferably unit tests that do not rely on network access,
2. an example notebook showing its use. It lives in `docs/docs/integrations` directory.
If no one reviews your PR within a few days, please @-mention one of @baskaryan, @eyurtsev, @hwchase17.
-->
| {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14971/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14971/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14971",
"html_url": "https://github.com/langchain-ai/langchain/pull/14971",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14971.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14971.patch",
"merged_at": null
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14970 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14970/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14970/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14970/events | https://github.com/langchain-ai/langchain/pull/14970 | 2,051,166,157 | PR_kwDOIPDwls5igR2B | 14,970 | Vectara summarization | {
"login": "efriis",
"id": 9557659,
"node_id": "MDQ6VXNlcjk1NTc2NTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/9557659?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/efriis",
"html_url": "https://github.com/efriis",
"followers_url": "https://api.github.com/users/efriis/followers",
"following_url": "https://api.github.com/users/efriis/following{/other_user}",
"gists_url": "https://api.github.com/users/efriis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/efriis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/efriis/subscriptions",
"organizations_url": "https://api.github.com/users/efriis/orgs",
"repos_url": "https://api.github.com/users/efriis/repos",
"events_url": "https://api.github.com/users/efriis/events{/privacy}",
"received_events_url": "https://api.github.com/users/efriis/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5541432778,
"node_id": "LA_kwDOIPDwls8AAAABSkuNyg",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20vector%20store",
"name": "area: vector store",
"color": "D4C5F9",
"default": false,
"description": "Related to vector store module"
},
{
"id": 5680700839,
"node_id": "LA_kwDOIPDwls8AAAABUpidpw",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:bug",
"name": "auto:bug",
"color": "E99695",
"default": false,
"description": "Related to a bug, vulnerability, unexpected error with an existing feature"
},
{
"id": 6232714144,
"node_id": "LA_kwDOIPDwls8AAAABc3-roA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:XXL",
"name": "size:XXL",
"color": "5319E7",
"default": false,
"description": "This PR changes 1000+ lines, ignoring generated files."
}
] | closed | false | null | [] | null | 1 | 2023-12-20T19:45:26 | 2023-12-20T19:51:34 | 2023-12-20T19:51:33 | COLLABORATOR | null | Description: Adding summarization to Vectara, to reflect that it provides not only vector-store-type functionality but can also return a summary.
Also added:
MMR capability (on the Vectara platform side)
Updated templates
Updated documentation and IPYNB examples
Tag maintainer: @baskaryan
Twitter handle: @ofermend | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14970/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14970/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14970",
"html_url": "https://github.com/langchain-ai/langchain/pull/14970",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14970.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14970.patch",
"merged_at": "2023-12-20T19:51:33"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14969 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14969/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14969/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14969/events | https://github.com/langchain-ai/langchain/pull/14969 | 2,051,154,096 | PR_kwDOIPDwls5igPKS | 14,969 | Optionally limit the Current Buffer loaded in ConversationSummaryBufferMemory | {
"login": "keenborder786",
"id": 45242107,
"node_id": "MDQ6VXNlcjQ1MjQyMTA3",
"avatar_url": "https://avatars.githubusercontent.com/u/45242107?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/keenborder786",
"html_url": "https://github.com/keenborder786",
"followers_url": "https://api.github.com/users/keenborder786/followers",
"following_url": "https://api.github.com/users/keenborder786/following{/other_user}",
"gists_url": "https://api.github.com/users/keenborder786/gists{/gist_id}",
"starred_url": "https://api.github.com/users/keenborder786/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/keenborder786/subscriptions",
"organizations_url": "https://api.github.com/users/keenborder786/orgs",
"repos_url": "https://api.github.com/users/keenborder786/repos",
"events_url": "https://api.github.com/users/keenborder786/events{/privacy}",
"received_events_url": "https://api.github.com/users/keenborder786/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4899126096,
"node_id": "LA_kwDOIPDwls8AAAABJAK7UA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20memory",
"name": "area: memory",
"color": "BFDADC",
"default": false,
"description": "Related to memory module"
},
{
"id": 5680700873,
"node_id": "LA_kwDOIPDwls8AAAABUpidyQ",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:improvement",
"name": "auto:improvement",
"color": "FBCA04",
"default": false,
"description": "Medium size change to existing code to handle new use-cases"
},
{
"id": 6232714104,
"node_id": "LA_kwDOIPDwls8AAAABc3-reA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:XS",
"name": "size:XS",
"color": "C2E0C6",
"default": false,
"description": "This PR changes 0-9 lines, ignoring generated files."
}
] | open | false | null | [] | null | 5 | 2023-12-20T19:36:07 | 2024-01-15T19:11:36 | null | CONTRIBUTOR | null |
- **Description:**
- `ConversationSummaryBufferMemory` does not have a way to limit the messages/buffer loaded from `ChatHistory`. It summarizes/prunes the previous buffer when the buffer loaded from `ChatHistory` exceeds the token threshold, but when `ConversationSummaryBufferMemory` loads the memory variables it reloads all of the messages, which go along with `moving_summary_buffer`. There needs to be a way to OPTIONALLY limit the window of the current buffer loaded from `ChatHistory` that goes with `moving_summary_buffer` into the final prompt; otherwise the memory variables just keep growing, like a plain `ConversationBufferMemory` (see the sketch after this list).
- **Issue:** #14822
- **Tag maintainer:** @baskaryan, @eyurtsev, @hwchase17.
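A minimal sketch of the gap being described — not code from this PR's diff. It assumes an OpenAI key is available and pre-fills an in-memory `ChatMessageHistory`; the `max_message_limit` name in the commented-out part is purely hypothetical:

```python
from langchain.llms import OpenAI
from langchain.memory import ChatMessageHistory, ConversationSummaryBufferMemory

llm = OpenAI(temperature=0)  # any LLM works; only used for the summarization step

# A chat history that already holds many turns (e.g. reloaded from a persistent store).
history = ChatMessageHistory()
for i in range(20):
    history.add_user_message(f"question {i}")
    history.add_ai_message(f"answer {i}")

memory = ConversationSummaryBufferMemory(
    llm=llm,
    max_token_limit=100,
    return_messages=True,
    chat_memory=history,
)

# Returns every message held in `history`, regardless of max_token_limit;
# pruning only happens inside save_context().
print(memory.load_memory_variables({}))

# Proposed (the parameter name is hypothetical, not necessarily what this PR uses):
# memory = ConversationSummaryBufferMemory(
#     llm=llm, max_token_limit=100, max_message_limit=4,
#     return_messages=True, chat_memory=history,
# )
```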
| {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14969/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14969/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14969",
"html_url": "https://github.com/langchain-ai/langchain/pull/14969",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14969.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14969.patch",
"merged_at": null
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14967 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14967/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14967/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14967/events | https://github.com/langchain-ai/langchain/pull/14967 | 2,050,940,624 | PR_kwDOIPDwls5ifgUV | 14,967 | core(minor): Allow explicit types for ChatMessageHistory adds | {
"login": "Sypherd",
"id": 50557586,
"node_id": "MDQ6VXNlcjUwNTU3NTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/50557586?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sypherd",
"html_url": "https://github.com/Sypherd",
"followers_url": "https://api.github.com/users/Sypherd/followers",
"following_url": "https://api.github.com/users/Sypherd/following{/other_user}",
"gists_url": "https://api.github.com/users/Sypherd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sypherd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sypherd/subscriptions",
"organizations_url": "https://api.github.com/users/Sypherd/orgs",
"repos_url": "https://api.github.com/users/Sypherd/repos",
"events_url": "https://api.github.com/users/Sypherd/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sypherd/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5454193895,
"node_id": "LA_kwDOIPDwls8AAAABRRhk5w",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/lgtm",
"name": "lgtm",
"color": "0E8A16",
"default": false,
"description": ""
},
{
"id": 5680700873,
"node_id": "LA_kwDOIPDwls8AAAABUpidyQ",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:improvement",
"name": "auto:improvement",
"color": "FBCA04",
"default": false,
"description": "Medium size change to existing code to handle new use-cases"
},
{
"id": 5820539098,
"node_id": "LA_kwDOIPDwls8AAAABWu5g2g",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20models",
"name": "area: models",
"color": "bfdadc",
"default": false,
"description": "Related to LLMs or chat model modules"
},
{
"id": 6232714108,
"node_id": "LA_kwDOIPDwls8AAAABc3-rfA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:S",
"name": "size:S",
"color": "BFDADC",
"default": false,
"description": "This PR changes 10-29 lines, ignoring generated files."
}
] | closed | false | {
"login": "eyurtsev",
"id": 3205522,
"node_id": "MDQ6VXNlcjMyMDU1MjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/3205522?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eyurtsev",
"html_url": "https://github.com/eyurtsev",
"followers_url": "https://api.github.com/users/eyurtsev/followers",
"following_url": "https://api.github.com/users/eyurtsev/following{/other_user}",
"gists_url": "https://api.github.com/users/eyurtsev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eyurtsev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eyurtsev/subscriptions",
"organizations_url": "https://api.github.com/users/eyurtsev/orgs",
"repos_url": "https://api.github.com/users/eyurtsev/repos",
"events_url": "https://api.github.com/users/eyurtsev/events{/privacy}",
"received_events_url": "https://api.github.com/users/eyurtsev/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "eyurtsev",
"id": 3205522,
"node_id": "MDQ6VXNlcjMyMDU1MjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/3205522?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eyurtsev",
"html_url": "https://github.com/eyurtsev",
"followers_url": "https://api.github.com/users/eyurtsev/followers",
"following_url": "https://api.github.com/users/eyurtsev/following{/other_user}",
"gists_url": "https://api.github.com/users/eyurtsev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eyurtsev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eyurtsev/subscriptions",
"organizations_url": "https://api.github.com/users/eyurtsev/orgs",
"repos_url": "https://api.github.com/users/eyurtsev/repos",
"events_url": "https://api.github.com/users/eyurtsev/events{/privacy}",
"received_events_url": "https://api.github.com/users/eyurtsev/received_events",
"type": "User",
"site_admin": false
}
] | null | 3 | 2023-12-20T16:56:21 | 2023-12-22T21:12:01 | 2023-12-22T21:12:01 | CONTRIBUTOR | null | <!-- Thank you for contributing to LangChain!
Replace this entire comment with:
- **Description:** a description of the change,
- **Issue:** the issue # it fixes (if applicable),
- **Dependencies:** any dependencies required for this change,
- **Tag maintainer:** for a quicker response, tag the relevant maintainer (see below),
- **Twitter handle:** we announce bigger features on Twitter. If your PR gets announced, and you'd like a mention, we'll gladly shout you out!
Please make sure your PR is passing linting and testing before submitting. Run `make format`, `make lint` and `make test` to check this locally.
See contribution guidelines for more information on how to write/run tests, lint, etc:
https://python.langchain.com/docs/contributing/
If you're adding a new integration, please include:
1. a test for the integration, preferably unit tests that do not rely on network access,
2. an example notebook showing its use. It lives in `docs/extras` directory.
If no one reviews your PR within a few days, please @-mention one of @baskaryan, @eyurtsev, @hwchase17.
-->
## Description
Changes the behavior of `add_user_message` and `add_ai_message` to allow for messages of those types to be passed in. Currently, if you want to use the `add_user_message` or `add_ai_message` methods, you have to pass in a string. For `add_message` on `ChatMessageHistory`, however, you have to pass a `BaseMessage`. This behavior seems a bit inconsistent. Personally, I'd love to be able to be explicit that I want to `add_user_message` and pass in a `HumanMessage` without having to grab the `content` attribute. This PR allows `add_user_message` to accept `HumanMessage`s or `str`s and `add_ai_message` to accept `AIMessage`s or `str`s to add that functionality and ensure backwards compatibility.
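Below is a short usage sketch of the intended behavior described above (not code taken from the diff); the message classes come from `langchain_core.messages`:

```python
from langchain.memory import ChatMessageHistory
from langchain_core.messages import AIMessage, HumanMessage

history = ChatMessageHistory()

# Existing behavior: plain strings only.
history.add_user_message("What's the weather like?")
history.add_ai_message("Sunny and 24C.")

# With this change, passing the message objects directly should also work,
# instead of having to unwrap .content first.
history.add_user_message(HumanMessage(content="What's the weather like?"))
history.add_ai_message(AIMessage(content="Sunny and 24C."))

print(history.messages)
```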
## Issue
* None
## Dependencies
* None
## Tag maintainer
@hinthornw
@baskaryan
## Note
`make test` results in `make: *** No rule to make target 'test'. Stop.` | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14967/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14967/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14967",
"html_url": "https://github.com/langchain-ai/langchain/pull/14967",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14967.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14967.patch",
"merged_at": "2023-12-22T21:12:01"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14966 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14966/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14966/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14966/events | https://github.com/langchain-ai/langchain/pull/14966 | 2,050,933,438 | PR_kwDOIPDwls5ifevu | 14,966 | packaging: fix format for python version | {
"login": "morotti",
"id": 13528994,
"node_id": "MDQ6VXNlcjEzNTI4OTk0",
"avatar_url": "https://avatars.githubusercontent.com/u/13528994?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/morotti",
"html_url": "https://github.com/morotti",
"followers_url": "https://api.github.com/users/morotti/followers",
"following_url": "https://api.github.com/users/morotti/following{/other_user}",
"gists_url": "https://api.github.com/users/morotti/gists{/gist_id}",
"starred_url": "https://api.github.com/users/morotti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/morotti/subscriptions",
"organizations_url": "https://api.github.com/users/morotti/orgs",
"repos_url": "https://api.github.com/users/morotti/repos",
"events_url": "https://api.github.com/users/morotti/events{/privacy}",
"received_events_url": "https://api.github.com/users/morotti/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5680700873,
"node_id": "LA_kwDOIPDwls8AAAABUpidyQ",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:improvement",
"name": "auto:improvement",
"color": "FBCA04",
"default": false,
"description": "Medium size change to existing code to handle new use-cases"
},
{
"id": 6232714104,
"node_id": "LA_kwDOIPDwls8AAAABc3-reA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:XS",
"name": "size:XS",
"color": "C2E0C6",
"default": false,
"description": "This PR changes 0-9 lines, ignoring generated files."
}
] | closed | false | null | [] | null | 3 | 2023-12-20T16:51:14 | 2023-12-21T02:58:27 | 2023-12-21T02:58:26 | NONE | null | Hello,
could you adjust the version format? I am running into issues where it's not understood by some tools.
The Python version format is only major.minor; it's one of the edge cases of Python packaging.
You can see examples in setuptools, both for the required Python version of the package and for the required Python version of dependencies. It's not well documented, but the format is only meant to use two components (a quick check is sketched after the snippet below).
https://setuptools.pypa.io/en/latest/userguide/declarative_config.html
https://setuptools.pypa.io/en/latest/userguide/pyproject_config.html
```
# setup.cfg
[options]
zip_safe = False
include_package_data = True
packages = find:
python_requires = >=3.7
install_requires =
requests
importlib-metadata; python_version<"3.8"
```
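For illustration only (not part of the requested change), the `packaging` library shows why a three-component lower bound trips up tools that only know the interpreter version as major.minor:

```python
from packaging.specifiers import SpecifierSet

print(SpecifierSet(">=3.8.1,<4.0").contains("3.8"))    # False: "3.8" sorts below "3.8.1"
print(SpecifierSet(">=3.8.1,<4.0").contains("3.8.1"))  # True
print(SpecifierSet(">=3.8,<4.0").contains("3.8"))      # True
```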
| {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14966/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14966/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14966",
"html_url": "https://github.com/langchain-ai/langchain/pull/14966",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14966.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14966.patch",
"merged_at": null
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14964 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14964/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14964/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14964/events | https://github.com/langchain-ai/langchain/pull/14964 | 2,050,920,206 | PR_kwDOIPDwls5ifb1V | 14,964 | feat: add streaming param to ChatMistralAI constructor | {
"login": "DavidLMS",
"id": 17435126,
"node_id": "MDQ6VXNlcjE3NDM1MTI2",
"avatar_url": "https://avatars.githubusercontent.com/u/17435126?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DavidLMS",
"html_url": "https://github.com/DavidLMS",
"followers_url": "https://api.github.com/users/DavidLMS/followers",
"following_url": "https://api.github.com/users/DavidLMS/following{/other_user}",
"gists_url": "https://api.github.com/users/DavidLMS/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DavidLMS/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DavidLMS/subscriptions",
"organizations_url": "https://api.github.com/users/DavidLMS/orgs",
"repos_url": "https://api.github.com/users/DavidLMS/repos",
"events_url": "https://api.github.com/users/DavidLMS/events{/privacy}",
"received_events_url": "https://api.github.com/users/DavidLMS/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5680700873,
"node_id": "LA_kwDOIPDwls8AAAABUpidyQ",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:improvement",
"name": "auto:improvement",
"color": "FBCA04",
"default": false,
"description": "Medium size change to existing code to handle new use-cases"
},
{
"id": 5820539098,
"node_id": "LA_kwDOIPDwls8AAAABWu5g2g",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20models",
"name": "area: models",
"color": "bfdadc",
"default": false,
"description": "Related to LLMs or chat model modules"
},
{
"id": 6232714108,
"node_id": "LA_kwDOIPDwls8AAAABc3-rfA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:S",
"name": "size:S",
"color": "BFDADC",
"default": false,
"description": "This PR changes 10-29 lines, ignoring generated files."
},
{
"id": 6348691034,
"node_id": "LA_kwDOIPDwls8AAAABemlWWg",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/partner",
"name": "partner",
"color": "ededed",
"default": false,
"description": null
}
] | closed | false | {
"login": "efriis",
"id": 9557659,
"node_id": "MDQ6VXNlcjk1NTc2NTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/9557659?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/efriis",
"html_url": "https://github.com/efriis",
"followers_url": "https://api.github.com/users/efriis/followers",
"following_url": "https://api.github.com/users/efriis/following{/other_user}",
"gists_url": "https://api.github.com/users/efriis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/efriis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/efriis/subscriptions",
"organizations_url": "https://api.github.com/users/efriis/orgs",
"repos_url": "https://api.github.com/users/efriis/repos",
"events_url": "https://api.github.com/users/efriis/events{/privacy}",
"received_events_url": "https://api.github.com/users/efriis/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "efriis",
"id": 9557659,
"node_id": "MDQ6VXNlcjk1NTc2NTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/9557659?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/efriis",
"html_url": "https://github.com/efriis",
"followers_url": "https://api.github.com/users/efriis/followers",
"following_url": "https://api.github.com/users/efriis/following{/other_user}",
"gists_url": "https://api.github.com/users/efriis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/efriis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/efriis/subscriptions",
"organizations_url": "https://api.github.com/users/efriis/orgs",
"repos_url": "https://api.github.com/users/efriis/repos",
"events_url": "https://api.github.com/users/efriis/events{/privacy}",
"received_events_url": "https://api.github.com/users/efriis/received_events",
"type": "User",
"site_admin": false
}
] | null | 2 | 2023-12-20T16:42:07 | 2023-12-21T18:12:05 | 2023-12-21T18:12:04 | CONTRIBUTOR | null | - **Description:** A new boolean parameter `streaming` has been added to the constructor of the `ChatMistralAI` class. This parameter allows users to specify whether they wish to use the streaming functionality at the time of instantiating a `ChatMistralAI` object.
- **Tag maintainer:** @baskaryan
- **Twitter handle:** @LMS_David_RS | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14964/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14964/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14964",
"html_url": "https://github.com/langchain-ai/langchain/pull/14964",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14964.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14964.patch",
"merged_at": null
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14963 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14963/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14963/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14963/events | https://github.com/langchain-ai/langchain/pull/14963 | 2,050,911,397 | PR_kwDOIPDwls5ifZ50 | 14,963 | infra: pr template update | {
"login": "baskaryan",
"id": 22008038,
"node_id": "MDQ6VXNlcjIyMDA4MDM4",
"avatar_url": "https://avatars.githubusercontent.com/u/22008038?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/baskaryan",
"html_url": "https://github.com/baskaryan",
"followers_url": "https://api.github.com/users/baskaryan/followers",
"following_url": "https://api.github.com/users/baskaryan/following{/other_user}",
"gists_url": "https://api.github.com/users/baskaryan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/baskaryan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/baskaryan/subscriptions",
"organizations_url": "https://api.github.com/users/baskaryan/orgs",
"repos_url": "https://api.github.com/users/baskaryan/repos",
"events_url": "https://api.github.com/users/baskaryan/events{/privacy}",
"received_events_url": "https://api.github.com/users/baskaryan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5454193895,
"node_id": "LA_kwDOIPDwls8AAAABRRhk5w",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/lgtm",
"name": "lgtm",
"color": "0E8A16",
"default": false,
"description": ""
},
{
"id": 5680700918,
"node_id": "LA_kwDOIPDwls8AAAABUpid9g",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:documentation",
"name": "auto:documentation",
"color": "C5DEF5",
"default": false,
"description": "Changes to documentation and examples, like .md, .rst, .ipynb files. Changes to the docs/ folder"
},
{
"id": 6232714108,
"node_id": "LA_kwDOIPDwls8AAAABc3-rfA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:S",
"name": "size:S",
"color": "BFDADC",
"default": false,
"description": "This PR changes 10-29 lines, ignoring generated files."
}
] | closed | false | null | [] | null | 1 | 2023-12-20T16:36:09 | 2023-12-20T19:53:39 | 2023-12-20T19:53:38 | COLLABORATOR | null | null | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14963/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14963/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14963",
"html_url": "https://github.com/langchain-ai/langchain/pull/14963",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14963.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14963.patch",
"merged_at": "2023-12-20T19:53:38"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14961 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14961/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14961/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14961/events | https://github.com/langchain-ai/langchain/pull/14961 | 2,050,760,468 | PR_kwDOIPDwls5ie4U- | 14,961 | langchain[patch]: Release 0.0.352 | {
"login": "baskaryan",
"id": 22008038,
"node_id": "MDQ6VXNlcjIyMDA4MDM4",
"avatar_url": "https://avatars.githubusercontent.com/u/22008038?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/baskaryan",
"html_url": "https://github.com/baskaryan",
"followers_url": "https://api.github.com/users/baskaryan/followers",
"following_url": "https://api.github.com/users/baskaryan/following{/other_user}",
"gists_url": "https://api.github.com/users/baskaryan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/baskaryan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/baskaryan/subscriptions",
"organizations_url": "https://api.github.com/users/baskaryan/orgs",
"repos_url": "https://api.github.com/users/baskaryan/repos",
"events_url": "https://api.github.com/users/baskaryan/events{/privacy}",
"received_events_url": "https://api.github.com/users/baskaryan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5680700883,
"node_id": "LA_kwDOIPDwls8AAAABUpid0w",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:nit",
"name": "auto:nit",
"color": "FEF2C0",
"default": false,
"description": "Small modifications/deletions, fixes, deps or improvements to existing code or docs"
},
{
"id": 6232714104,
"node_id": "LA_kwDOIPDwls8AAAABc3-reA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:XS",
"name": "size:XS",
"color": "C2E0C6",
"default": false,
"description": "This PR changes 0-9 lines, ignoring generated files."
}
] | closed | false | null | [] | null | 1 | 2023-12-20T15:17:06 | 2023-12-20T15:27:04 | 2023-12-20T15:27:03 | COLLABORATOR | null | null | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14961/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14961/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14961",
"html_url": "https://github.com/langchain-ai/langchain/pull/14961",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14961.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14961.patch",
"merged_at": "2023-12-20T15:27:03"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14960 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14960/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14960/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14960/events | https://github.com/langchain-ai/langchain/pull/14960 | 2,050,752,610 | PR_kwDOIPDwls5ie2kv | 14,960 | community[patch]: Release 0.0.5 | {
"login": "baskaryan",
"id": 22008038,
"node_id": "MDQ6VXNlcjIyMDA4MDM4",
"avatar_url": "https://avatars.githubusercontent.com/u/22008038?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/baskaryan",
"html_url": "https://github.com/baskaryan",
"followers_url": "https://api.github.com/users/baskaryan/followers",
"following_url": "https://api.github.com/users/baskaryan/following{/other_user}",
"gists_url": "https://api.github.com/users/baskaryan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/baskaryan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/baskaryan/subscriptions",
"organizations_url": "https://api.github.com/users/baskaryan/orgs",
"repos_url": "https://api.github.com/users/baskaryan/repos",
"events_url": "https://api.github.com/users/baskaryan/events{/privacy}",
"received_events_url": "https://api.github.com/users/baskaryan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5680700883,
"node_id": "LA_kwDOIPDwls8AAAABUpid0w",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:nit",
"name": "auto:nit",
"color": "FEF2C0",
"default": false,
"description": "Small modifications/deletions, fixes, deps or improvements to existing code or docs"
},
{
"id": 6232714104,
"node_id": "LA_kwDOIPDwls8AAAABc3-reA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:XS",
"name": "size:XS",
"color": "C2E0C6",
"default": false,
"description": "This PR changes 0-9 lines, ignoring generated files."
}
] | closed | false | null | [] | null | 1 | 2023-12-20T15:13:03 | 2023-12-20T15:25:17 | 2023-12-20T15:25:16 | COLLABORATOR | null | null | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14960/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14960/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14960",
"html_url": "https://github.com/langchain-ai/langchain/pull/14960",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14960.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14960.patch",
"merged_at": "2023-12-20T15:25:16"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14959 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14959/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14959/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14959/events | https://github.com/langchain-ai/langchain/pull/14959 | 2,050,742,383 | PR_kwDOIPDwls5ie0Rq | 14,959 | core[patch]: 0.1.2 | {
"login": "baskaryan",
"id": 22008038,
"node_id": "MDQ6VXNlcjIyMDA4MDM4",
"avatar_url": "https://avatars.githubusercontent.com/u/22008038?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/baskaryan",
"html_url": "https://github.com/baskaryan",
"followers_url": "https://api.github.com/users/baskaryan/followers",
"following_url": "https://api.github.com/users/baskaryan/following{/other_user}",
"gists_url": "https://api.github.com/users/baskaryan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/baskaryan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/baskaryan/subscriptions",
"organizations_url": "https://api.github.com/users/baskaryan/orgs",
"repos_url": "https://api.github.com/users/baskaryan/repos",
"events_url": "https://api.github.com/users/baskaryan/events{/privacy}",
"received_events_url": "https://api.github.com/users/baskaryan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5680700883,
"node_id": "LA_kwDOIPDwls8AAAABUpid0w",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:nit",
"name": "auto:nit",
"color": "FEF2C0",
"default": false,
"description": "Small modifications/deletions, fixes, deps or improvements to existing code or docs"
},
{
"id": 6232714104,
"node_id": "LA_kwDOIPDwls8AAAABc3-reA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:XS",
"name": "size:XS",
"color": "C2E0C6",
"default": false,
"description": "This PR changes 0-9 lines, ignoring generated files."
}
] | closed | false | null | [] | null | 1 | 2023-12-20T15:08:07 | 2023-12-20T15:13:56 | 2023-12-20T15:13:55 | COLLABORATOR | null | null | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14959/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14959/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14959",
"html_url": "https://github.com/langchain-ai/langchain/pull/14959",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14959.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14959.patch",
"merged_at": "2023-12-20T15:13:55"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14958 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14958/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14958/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14958/events | https://github.com/langchain-ai/langchain/pull/14958 | 2,050,697,289 | PR_kwDOIPDwls5ieqEf | 14,958 | Dmaturana/randomlengthfewshotselector | {
"login": "Dmaturana81",
"id": 54143703,
"node_id": "MDQ6VXNlcjU0MTQzNzAz",
"avatar_url": "https://avatars.githubusercontent.com/u/54143703?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dmaturana81",
"html_url": "https://github.com/Dmaturana81",
"followers_url": "https://api.github.com/users/Dmaturana81/followers",
"following_url": "https://api.github.com/users/Dmaturana81/following{/other_user}",
"gists_url": "https://api.github.com/users/Dmaturana81/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Dmaturana81/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dmaturana81/subscriptions",
"organizations_url": "https://api.github.com/users/Dmaturana81/orgs",
"repos_url": "https://api.github.com/users/Dmaturana81/repos",
"events_url": "https://api.github.com/users/Dmaturana81/events{/privacy}",
"received_events_url": "https://api.github.com/users/Dmaturana81/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5541144676,
"node_id": "LA_kwDOIPDwls8AAAABSkcoZA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20doc%20loader",
"name": "area: doc loader",
"color": "D4C5F9",
"default": false,
"description": "Related to document loader module (not documentation)"
},
{
"id": 5680700873,
"node_id": "LA_kwDOIPDwls8AAAABUpidyQ",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:improvement",
"name": "auto:improvement",
"color": "FBCA04",
"default": false,
"description": "Medium size change to existing code to handle new use-cases"
},
{
"id": 6232714126,
"node_id": "LA_kwDOIPDwls8AAAABc3-rjg",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:L",
"name": "size:L",
"color": "BFD4F2",
"default": false,
"description": "This PR changes 100-499 lines, ignoring generated files."
}
] | closed | false | null | [] | null | 2 | 2023-12-20T14:49:00 | 2023-12-22T21:13:12 | 2023-12-22T21:13:11 | CONTRIBUTOR | null | - **Description:** The length-based example selector always retrieves the same examples, going from top to bottom, so if you have a lot of examples there is no variation. I copied and modified this length selector into a random, length-based example selector, so that the examples selected for each prompt are random (see the sketch after this list).
- **Dependencies:** Added the `random` import; no new dependencies.
- **Tag maintainer:** for a quicker response, tag the relevant maintainer (see below),
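This is not the code from this PR — just a rough sketch of the idea, subclassing the existing `LengthBasedExampleSelector` and shuffling before the greedy length check:

```python
import random
from typing import Dict, List

from langchain.prompts.example_selector import LengthBasedExampleSelector


class RandomLengthBasedExampleSelector(LengthBasedExampleSelector):
    """Pick examples in random order until the length budget is used up."""

    def select_examples(self, input_variables: Dict[str, str]) -> List[dict]:
        # Budget left after accounting for the formatted inputs.
        remaining = self.max_length - self.get_text_length(" ".join(input_variables.values()))
        shuffled = list(self.examples)
        random.shuffle(shuffled)
        selected = []
        for example in shuffled:
            length = self.get_text_length(self.example_prompt.format(**example))
            if length <= remaining:
                selected.append(example)
                remaining -= length
        return selected
```

One design choice to note: this sketch skips examples that do not fit and keeps trying smaller ones, whereas the stock length-based selector stops at the first example that exceeds the budget.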
| {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14958/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14958/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14958",
"html_url": "https://github.com/langchain-ai/langchain/pull/14958",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14958.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14958.patch",
"merged_at": null
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14957 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14957/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14957/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14957/events | https://github.com/langchain-ai/langchain/issues/14957 | 2,050,634,701 | I_kwDOIPDwls56OjPN | 14,957 | Issue with DynamoDBChatMessageHistory Not Respecting max_token_limit in ConversationTokenBufferMemory | {
"login": "samuelbaruffi",
"id": 7096709,
"node_id": "MDQ6VXNlcjcwOTY3MDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/7096709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/samuelbaruffi",
"html_url": "https://github.com/samuelbaruffi",
"followers_url": "https://api.github.com/users/samuelbaruffi/followers",
"following_url": "https://api.github.com/users/samuelbaruffi/following{/other_user}",
"gists_url": "https://api.github.com/users/samuelbaruffi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/samuelbaruffi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/samuelbaruffi/subscriptions",
"organizations_url": "https://api.github.com/users/samuelbaruffi/orgs",
"repos_url": "https://api.github.com/users/samuelbaruffi/repos",
"events_url": "https://api.github.com/users/samuelbaruffi/events{/privacy}",
"received_events_url": "https://api.github.com/users/samuelbaruffi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4899126096,
"node_id": "LA_kwDOIPDwls8AAAABJAK7UA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20memory",
"name": "area: memory",
"color": "BFDADC",
"default": false,
"description": "Related to memory module"
},
{
"id": 5680700839,
"node_id": "LA_kwDOIPDwls8AAAABUpidpw",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:bug",
"name": "auto:bug",
"color": "E99695",
"default": false,
"description": "Related to a bug, vulnerability, unexpected error with an existing feature"
},
{
"id": 5959659008,
"node_id": "LA_kwDOIPDwls8AAAABYzkuAA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/integration:%20aws",
"name": "integration: aws",
"color": "C5DEF5",
"default": false,
"description": "Related to Amazon Web Services (AWS) integrations"
}
] | open | false | null | [] | null | 3 | 2023-12-20T14:15:02 | 2023-12-20T21:56:28 | null | NONE | null | ### System Info
Langchain Version: langchain 0.0.350, langchain-community 0.0.3, langchain-core 0.1.1
Python Version: 3.10.6
Operating System: macOS
Additional Libraries: boto 2.49.0, boto3 1.34.1
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to Reproduce:
- Create an instance of `DynamoDBChatMessageHistory` with a specified table name and session ID.
- Initialize `ConversationTokenBufferMemory` with a `max_token_limit`.
- Attach the memory to a `ConversationChain`.
- Call predict on the `ConversationChain` with some input.
Code sample:
```
import os

import boto3
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.chains import ConversationChain
from langchain.llms import Bedrock
from langchain.memory import ConversationTokenBufferMemory
from langchain.memory.chat_message_histories import DynamoDBChatMessageHistory

# AWS session and clients
session = boto3.Session(
    aws_access_key_id=os.environ.get('AWS_ACCESS_KEY_ID'),
    aws_secret_access_key=os.environ.get('AWS_SECRET_ACCESS_KEY'),
    aws_session_token=os.environ.get('AWS_SESSION_TOKEN'),
    region_name='us-east-1'
)
dynamodb = session.resource('dynamodb')
chat_sessions_table = dynamodb.Table('SessionTable')
boto3_bedrock = session.client(service_name="bedrock-runtime")

# Bedrock model configuration
max_tokens_to_sample = 100
temperature = 0
modelId = "anthropic.claude-instant-v1"
top_k = 250
top_p = 0.999

model_kwargs = {
    "temperature": temperature,
    "max_tokens_to_sample": max_tokens_to_sample,
    "top_k": top_k,
    "top_p": top_p
}

llm = Bedrock(
    client=boto3_bedrock,
    model_id=modelId,
    region_name='us-east-1',
    model_kwargs=model_kwargs,
    streaming=True, callbacks=[StreamingStdOutCallbackHandler()]
)

# DynamoDB-backed chat history wrapped in a token-limited buffer memory
message_history = DynamoDBChatMessageHistory(table_name="SessionTable", session_id="10", boto3_session=session)

memory = ConversationTokenBufferMemory(
    llm=llm,  # Use the Bedrock instance
    max_token_limit=100,
    return_messages=True,
    chat_memory=message_history,
    ai_prefix="A",
    human_prefix="H"
)

# Add the memory to the chain
conversation = ConversationChain(
    llm=llm, verbose=True, memory=memory
)

conversation.predict(input="Hello!")
memory.load_memory_variables({})
```
### Expected behavior
Expected Behavior:
- The `DynamoDBChatMessageHistory` should respect the `max_token_limit` set in `ConversationTokenBufferMemory`, limiting the token count accordingly.
Actual Behavior:
The `DynamoDBChatMessageHistory` does not respect the `max_token_limit` set in `ConversationTokenBufferMemory`; it keeps saving every item in memory to the DynamoDB table. | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14957/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14957/timeline | null | null | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/14956 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14956/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14956/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14956/events | https://github.com/langchain-ai/langchain/issues/14956 | 2,050,590,779 | I_kwDOIPDwls56OYg7 | 14,956 | Feature Request - ChatGPT OpenAPI NLATool Authentication Implementation Logic | {
"login": "glide-the",
"id": 16206043,
"node_id": "MDQ6VXNlcjE2MjA2MDQz",
"avatar_url": "https://avatars.githubusercontent.com/u/16206043?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/glide-the",
"html_url": "https://github.com/glide-the",
"followers_url": "https://api.github.com/users/glide-the/followers",
"following_url": "https://api.github.com/users/glide-the/following{/other_user}",
"gists_url": "https://api.github.com/users/glide-the/gists{/gist_id}",
"starred_url": "https://api.github.com/users/glide-the/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/glide-the/subscriptions",
"organizations_url": "https://api.github.com/users/glide-the/orgs",
"repos_url": "https://api.github.com/users/glide-the/repos",
"events_url": "https://api.github.com/users/glide-the/events{/privacy}",
"received_events_url": "https://api.github.com/users/glide-the/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5680700863,
"node_id": "LA_kwDOIPDwls8AAAABUpidvw",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:enhancement",
"name": "auto:enhancement",
"color": "C2E0C6",
"default": false,
"description": "A large net-new component, integration, or chain. Use sparingly. The largest features"
},
{
"id": 5820539098,
"node_id": "LA_kwDOIPDwls8AAAABWu5g2g",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20models",
"name": "area: models",
"color": "bfdadc",
"default": false,
"description": "Related to LLMs or chat model modules"
}
] | open | false | null | [] | null | 1 | 2023-12-20T13:50:27 | 2023-12-20T13:52:51 | null | NONE | null | ### Feature request
**Issue Title:** Enhance NLATool Authentication in ChatGPT OpenAPI
**Description:**
I have identified a feature gap in the current implementation of ChatGPT OpenAPI when using NLATool as a proxy for authentication. The existing logic does not fully meet downstream requirements, and I intend to propose a modification to address this issue.
**Proposed Modification:**
I suggest adding a new attribute within the NLATool implementation. During initialization, this attribute should be passed to `NLAToolkit.from_llm_and_ai_plugin`. The subsequent call chain is as follows: `from_llm_and_spec -> _get_http_operation_tools -> NLATool.from_llm_and_method`. The responsibility of `NLATool.from_llm_and_method` is to construct the NLATool component, which includes an underlying `OpenAPIEndpointChain` base package.
The challenge lies in the fact that the `OpenAPIEndpointChain` base package currently lacks support for authentication. To address this, it is essential to load the created attribute into the `OpenAPIEndpointChain`. During the execution of the `_call` method, the authentication logic should be executed.
**Implementation Steps:**
1. Modify the initialization and execution code of the `OpenAPIEndpointChain` class to support authentication.
2. Ensure that the newly added attribute is properly integrated into the `OpenAPIEndpointChain` during its initialization.
3. Implement the authentication logic in the `_call` method of the `OpenAPIEndpointChain`.
Thank you for your consideration.
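Purely illustrative pseudocode for the proposal — every new name below (`auth_provider`, the `auth=` keyword) is hypothetical and not part of the current API:

```python
from langchain.agents.agent_toolkits import NLAToolkit


def build_authed_toolkit(llm, plugin, auth_provider):
    """Sketch of the proposed flow: the auth attribute would be threaded from the
    toolkit entry point down the chain
        from_llm_and_ai_plugin -> from_llm_and_spec -> _get_http_operation_tools
        -> NLATool.from_llm_and_method -> OpenAPIEndpointChain
    and executed inside OpenAPIEndpointChain._call before the HTTP request is made."""
    return NLAToolkit.from_llm_and_ai_plugin(llm, plugin, auth=auth_provider)  # `auth=` is hypothetical
```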
### Motivation
The current implementation of ChatGPT OpenAPI using NLATool as a proxy for authentication falls short of meeting downstream requirements. By enhancing the NLATool authentication logic, we aim to improve the overall functionality and responsiveness of the system, ensuring it aligns more effectively with user needs. This modification seeks to bridge the existing feature gap and enhance the usability and versatility of the ChatGPT OpenAPI.
### Your contribution
**Expected Challenges:**
While I have not yet started the implementation, I anticipate challenges during the process. One notable challenge is that the `langchain` module does not currently define the core for the authentication class. Consequently, addressing this issue may require changes across multiple modules. A pull request spanning multiple modules may encounter challenges during the review process.
**Request for Feedback:**
Before I commence with the implementation, I would appreciate your insights and guidance on the proposed modification. Your feedback on potential challenges and recommendations for an effective implementation would be invaluable.
| {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14956/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14956/timeline | null | null | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/14954 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14954/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14954/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14954/events | https://github.com/langchain-ai/langchain/issues/14954 | 2,050,427,439 | I_kwDOIPDwls56Nwov | 14,954 | getting the error "TypeError: Agent.plan() got multiple values for argument 'intermediate_steps'" with agent_executor | {
"login": "chowdary1209",
"id": 52491904,
"node_id": "MDQ6VXNlcjUyNDkxOTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/52491904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chowdary1209",
"html_url": "https://github.com/chowdary1209",
"followers_url": "https://api.github.com/users/chowdary1209/followers",
"following_url": "https://api.github.com/users/chowdary1209/following{/other_user}",
"gists_url": "https://api.github.com/users/chowdary1209/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chowdary1209/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chowdary1209/subscriptions",
"organizations_url": "https://api.github.com/users/chowdary1209/orgs",
"repos_url": "https://api.github.com/users/chowdary1209/repos",
"events_url": "https://api.github.com/users/chowdary1209/events{/privacy}",
"received_events_url": "https://api.github.com/users/chowdary1209/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4899412369,
"node_id": "LA_kwDOIPDwls8AAAABJAcZkQ",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20agent",
"name": "area: agent",
"color": "BFD4F2",
"default": false,
"description": "Related to agents module"
},
{
"id": 5680700839,
"node_id": "LA_kwDOIPDwls8AAAABUpidpw",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:bug",
"name": "auto:bug",
"color": "E99695",
"default": false,
"description": "Related to a bug, vulnerability, unexpected error with an existing feature"
}
] | open | false | null | [] | null | 3 | 2023-12-20T12:09:18 | 2024-01-04T11:06:50 | null | NONE | null | ### System Info
langchain: 0.0.351
langchain-community: 0.0.4
langchain-core: 0.1.1
langchain-experimental: 0.0.47
python: 3.10.4
### Who can help?
@hwchase17 , @agola11 , @eyurtsev
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Create an agent, which works as expected.
2. Create an agent_executor using the above agent.
3. When I try to use the agent_executor, I get the error "TypeError: Agent.plan() got multiple values for argument 'intermediate_steps'"
**Below is the code:**
Create an agent:-
```
agent = initialize_agent(llm=llm,
tools=tools,
agent = AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
verbose=True,
agent_kwargs=agent_kwargs,
output_parser = output_parser,
output_key = "result",
handle_parsing_errors = True,
max_iterations=3,
early_stopping_method="generate",
memory = memory,
)
```
Create an agent_executor:-
```
agent_executor = AgentExecutor(agent=agent,
tools=tools,
verbose=True,
memory = memory,
)
```
calling the agent_executor
`result = agent_executor.invoke({"input":"Tell me about yourself", "format_instructions": response_format})["output"]`
Getting below error:-
```
Entering new AgentExecutor chain...
> Entering new AgentExecutor chain...
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[18], line 1
----> 1 result = agent_executor.invoke({"input":"Tell me about yourself", "format_instructions": response_format})["output"]
2 print(f"result: {result}")
TypeError: Agent.plan() got multiple values for argument 'intermediate_steps'
```
What I have observed from the above error is that the chain is executing multiple times, hence the 'Entering new AgentExecutor chain...' message is displayed twice. This could be the cause of the issue.
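One plausible explanation (an assumption, not a confirmed diagnosis): `initialize_agent` already returns an `AgentExecutor`, so wrapping that result in a second `AgentExecutor(agent=..., ...)` nests two executors, which would also explain the duplicated "Entering new AgentExecutor chain..." line. A minimal sketch of the single-executor variant, reusing `llm`, `tools` and `memory` from the snippet above:
```python
from langchain.agents import AgentType, initialize_agent

# initialize_agent already returns an AgentExecutor, so it can be invoked directly
agent_executor = initialize_agent(
    llm=llm,        # from the snippet above
    tools=tools,    # from the snippet above
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    memory=memory,  # from the snippet above
    verbose=True,
    handle_parsing_errors=True,
)

result = agent_executor.invoke({"input": "Tell me about yourself"})["output"]
```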
### Expected behavior
Should return proper output with thought and action | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14954/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14954/timeline | null | null | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/14953 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14953/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14953/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14953/events | https://github.com/langchain-ai/langchain/pull/14953 | 2,050,374,671 | PR_kwDOIPDwls5idi79 | 14,953 | Update arxiv.py with get_summaries_as_docs inside of Arxivloader | {
"login": "ArchanGhosh",
"id": 14181922,
"node_id": "MDQ6VXNlcjE0MTgxOTIy",
"avatar_url": "https://avatars.githubusercontent.com/u/14181922?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArchanGhosh",
"html_url": "https://github.com/ArchanGhosh",
"followers_url": "https://api.github.com/users/ArchanGhosh/followers",
"following_url": "https://api.github.com/users/ArchanGhosh/following{/other_user}",
"gists_url": "https://api.github.com/users/ArchanGhosh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArchanGhosh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArchanGhosh/subscriptions",
"organizations_url": "https://api.github.com/users/ArchanGhosh/orgs",
"repos_url": "https://api.github.com/users/ArchanGhosh/repos",
"events_url": "https://api.github.com/users/ArchanGhosh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArchanGhosh/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5541144676,
"node_id": "LA_kwDOIPDwls8AAAABSkcoZA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20doc%20loader",
"name": "area: doc loader",
"color": "D4C5F9",
"default": false,
"description": "Related to document loader module (not documentation)"
},
{
"id": 5680700873,
"node_id": "LA_kwDOIPDwls8AAAABUpidyQ",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:improvement",
"name": "auto:improvement",
"color": "FBCA04",
"default": false,
"description": "Medium size change to existing code to handle new use-cases"
},
{
"id": 6232714104,
"node_id": "LA_kwDOIPDwls8AAAABc3-reA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:XS",
"name": "size:XS",
"color": "C2E0C6",
"default": false,
"description": "This PR changes 0-9 lines, ignoring generated files."
}
] | closed | false | null | [] | null | 1 | 2023-12-20T11:34:03 | 2023-12-22T21:14:23 | 2023-12-22T21:14:22 | CONTRIBUTOR | null | Added the call function get_summaries_as_docs inside of Arxivloader
- **Description:** Added a call function for `get_summaries_as_docs` that returns the summary documents. The call signature is present in the parent wrapper but was never exposed from Arxivloader; it can now be used from Arxivloader itself, just like `.load()`, since both signatures are the same.
- **Issue:** Reduces the time to load papers, since no PDF is processed and only metadata is pulled from Arxiv, allowing faster load times on bulk loads. Users can then pick one or more papers and use the ID directly with `.load()` to fetch the PDF and load the full contents of the paper.
| {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14953/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14953/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14953",
"html_url": "https://github.com/langchain-ai/langchain/pull/14953",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14953.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14953.patch",
"merged_at": "2023-12-22T21:14:22"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14952 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14952/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14952/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14952/events | https://github.com/langchain-ai/langchain/issues/14952 | 2,050,355,938 | I_kwDOIPDwls56NfLi | 14,952 | Incorrect debug logs for llm/start prompts | {
"login": "lehotskysamuel",
"id": 7793899,
"node_id": "MDQ6VXNlcjc3OTM4OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/7793899?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lehotskysamuel",
"html_url": "https://github.com/lehotskysamuel",
"followers_url": "https://api.github.com/users/lehotskysamuel/followers",
"following_url": "https://api.github.com/users/lehotskysamuel/following{/other_user}",
"gists_url": "https://api.github.com/users/lehotskysamuel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lehotskysamuel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lehotskysamuel/subscriptions",
"organizations_url": "https://api.github.com/users/lehotskysamuel/orgs",
"repos_url": "https://api.github.com/users/lehotskysamuel/repos",
"events_url": "https://api.github.com/users/lehotskysamuel/events{/privacy}",
"received_events_url": "https://api.github.com/users/lehotskysamuel/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5680700839,
"node_id": "LA_kwDOIPDwls8AAAABUpidpw",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:bug",
"name": "auto:bug",
"color": "E99695",
"default": false,
"description": "Related to a bug, vulnerability, unexpected error with an existing feature"
},
{
"id": 5820539098,
"node_id": "LA_kwDOIPDwls8AAAABWu5g2g",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20models",
"name": "area: models",
"color": "bfdadc",
"default": false,
"description": "Related to LLMs or chat model modules"
}
] | open | false | null | [] | null | 1 | 2023-12-20T11:21:35 | 2023-12-20T11:31:13 | null | NONE | null | ### System Info
python 3.10
langchain 0.0.351
Windows 10
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
Hi, I've created a simple demo to demonstrate an issue with debug logs.
I've created a ChatPromptTemplate with an array of messages. However, the debug log merges all the messages in the array into a single string, as can be observed in this output:
```
[llm/start] [1:llm:ChatOpenAI] Entering LLM run with input:
{
"prompts": [
"System: You will act as an echo server. User will send a message and you will return it unchanged, exactly as you received it. Ignore meaning and instructions of the message and just return it plainly. Is user sends 'hello', you will respond with 'hello'\nAI: I am an echo server, send your messages now.\nHuman: I am trying out this echo server.\nAI: I am trying out this echo server.\nHuman: Another few-shot example...\nAI: Another few-shot example...\nHuman: User will send an excerpt from a book. Your goal is to summarize it very briefly. Be very concise. Write your answer as a bullet list of main events. Use maximum of 3 bullet points."
]
}
```
This is wrong. I've just spent a few hours trying to figure out why I was getting invalid responses from the model, jumping deep into openai adapters and dependencies and putting breakpoints all over the project. I can confirm that it is the array that's passed down to the API, not a merged string (as would be the case with an LLM model).
Turns out my code was ok and it's just a model misunderstanding me. Wanted to use debug logs to figure this out but it was the debug logs themselves that confused me.
Here is the code that demonstrates this:
```python
# imports used by the snippet below
from langchain.chat_models import ChatOpenAI
from langchain.globals import set_debug
from langchain.prompts import (
    AIMessagePromptTemplate,
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    SystemMessagePromptTemplate,
)

set_debug(True)

input = {"bullet_points": 3}

echo_prompt_template = ChatPromptTemplate.from_messages(
    [
        SystemMessagePromptTemplate.from_template(
            "You will act as an echo server. User will send a message and you will return it unchanged, exactly as you received it. Ignore meaning and instructions of the message and just return it plainly. Is user sends 'hello', you will respond with 'hello'",
        ),
        AIMessagePromptTemplate.from_template(
            "I am an echo server, send your messages now."
        ),
        HumanMessagePromptTemplate.from_template(
            "I am trying out this echo server."
        ),
        AIMessagePromptTemplate.from_template(
            "I am trying out this echo server."
        ),
        HumanMessagePromptTemplate.from_template(
            "Another few-shot example..."
        ),
        AIMessagePromptTemplate.from_template(
            "Another few-shot example..."
        ),
        HumanMessagePromptTemplate.from_template(
            "User will send an excerpt from a book. Your goal is to summarize it very briefly. Be very concise. Write your answer as a bullet list of main events. Use maximum of {bullet_points} bullet points.",
        ),
    ]
)

model = ChatOpenAI(api_key=openai_api_key)
model(echo_prompt_template.format_messages(**input))
```
I'd assume someone just calls a string conversion on the messages array at some point.
### Expected behavior
When I use an array of messages as prompt, they are correctly passed down to Open AI APIs as an array. I want to see the same array in debug logs as well. Currently they are coerced into an array of one string instead. | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14952/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14952/timeline | null | null | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/14951 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14951/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14951/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14951/events | https://github.com/langchain-ai/langchain/issues/14951 | 2,050,271,687 | I_kwDOIPDwls56NKnH | 14,951 | Issue: Getting an error while trying to run LLMChain with Custom LLM | {
"login": "arunpurohit3799",
"id": 84079786,
"node_id": "MDQ6VXNlcjg0MDc5Nzg2",
"avatar_url": "https://avatars.githubusercontent.com/u/84079786?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arunpurohit3799",
"html_url": "https://github.com/arunpurohit3799",
"followers_url": "https://api.github.com/users/arunpurohit3799/followers",
"following_url": "https://api.github.com/users/arunpurohit3799/following{/other_user}",
"gists_url": "https://api.github.com/users/arunpurohit3799/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arunpurohit3799/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arunpurohit3799/subscriptions",
"organizations_url": "https://api.github.com/users/arunpurohit3799/orgs",
"repos_url": "https://api.github.com/users/arunpurohit3799/repos",
"events_url": "https://api.github.com/users/arunpurohit3799/events{/privacy}",
"received_events_url": "https://api.github.com/users/arunpurohit3799/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5680700848,
"node_id": "LA_kwDOIPDwls8AAAABUpidsA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:question",
"name": "auto:question",
"color": "BFD4F2",
"default": false,
"description": "A specific question about the codebase, product, project, or how to use a feature"
},
{
"id": 5820539098,
"node_id": "LA_kwDOIPDwls8AAAABWu5g2g",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20models",
"name": "area: models",
"color": "bfdadc",
"default": false,
"description": "Related to LLMs or chat model modules"
}
] | open | false | null | [] | null | 2 | 2023-12-20T10:28:46 | 2024-01-17T01:34:14 | null | NONE | null |
I am trying to run an LLMChain using `llm_chain = LLMChain(llm=llm, prompt=prompt)`, where `llm` is a custom LLM defined based on https://python.langchain.com/docs/modules/model_io/llms/custom_llm.
While trying to run this I am getting the following error: `Can't instantiate abstract class BaseLanguageModel with abstract methods agenerate_prompt, apredict, apredict_messages, generate_prompt, predict, predict_messages (type=type_error)`
Can someone help me with this?
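For reference, the error suggests that the abstract base class was instantiated (or not fully implemented). The pattern in the linked guide is roughly the following; this is a sketch with a toy `EchoLLM`, not the author's code:
```python
from typing import Any, List, Optional

from langchain.callbacks.manager import CallbackManagerForLLMRun
from langchain.chains import LLMChain
from langchain.llms.base import LLM
from langchain.prompts import PromptTemplate


class EchoLLM(LLM):
    """Toy custom LLM that simply echoes the prompt back."""

    @property
    def _llm_type(self) -> str:
        return "echo"

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> str:
        return prompt


prompt = PromptTemplate.from_template("Say: {thing}")
llm_chain = LLMChain(llm=EchoLLM(), prompt=prompt)
print(llm_chain.run(thing="hello"))
```
The key point of the pattern is subclassing `LLM` (not `BaseLanguageModel`) and implementing `_call` and `_llm_type`, which supplies the abstract methods listed in the error.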
### Suggestion:
_No response_ | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14951/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14951/timeline | null | null | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/14950 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14950/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14950/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14950/events | https://github.com/langchain-ai/langchain/issues/14950 | 2,050,268,609 | I_kwDOIPDwls56NJ3B | 14,950 | ImportError: cannot import name 'AzureOpenAIEmbeddings' from 'langchain.embeddings' (/opt/conda/lib/python3.10/site-packages/langchain/embeddings/__init__.py) | {
"login": "jdjayakaran",
"id": 144101750,
"node_id": "U_kgDOCJbRdg",
"avatar_url": "https://avatars.githubusercontent.com/u/144101750?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jdjayakaran",
"html_url": "https://github.com/jdjayakaran",
"followers_url": "https://api.github.com/users/jdjayakaran/followers",
"following_url": "https://api.github.com/users/jdjayakaran/following{/other_user}",
"gists_url": "https://api.github.com/users/jdjayakaran/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jdjayakaran/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jdjayakaran/subscriptions",
"organizations_url": "https://api.github.com/users/jdjayakaran/orgs",
"repos_url": "https://api.github.com/users/jdjayakaran/repos",
"events_url": "https://api.github.com/users/jdjayakaran/events{/privacy}",
"received_events_url": "https://api.github.com/users/jdjayakaran/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5541141061,
"node_id": "LA_kwDOIPDwls8AAAABSkcaRQ",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20embeddings",
"name": "area: embeddings",
"color": "C5DEF5",
"default": false,
"description": "Related to text embedding models module"
},
{
"id": 5680700839,
"node_id": "LA_kwDOIPDwls8AAAABUpidpw",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:bug",
"name": "auto:bug",
"color": "E99695",
"default": false,
"description": "Related to a bug, vulnerability, unexpected error with an existing feature"
}
] | open | false | null | [] | null | 2 | 2023-12-20T10:26:52 | 2024-01-16T03:57:32 | null | NONE | null | ### System Info
Unable to resolve the issue below:
![image](https://github.com/langchain-ai/langchain/assets/144101750/a7b75fce-bb32-4b13-ab3e-e122455e3213)
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
![image](https://github.com/langchain-ai/langchain/assets/144101750/40a9ed30-0761-46b0-8bcc-acb9cceb1734)
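A possible workaround (an assumption, since the installed langchain version is not visible in the screenshots): `AzureOpenAIEmbeddings` only exists in relatively recent releases, so upgrading and importing it directly may resolve the error. The deployment name below is a placeholder, and the Azure endpoint/key are assumed to be set via environment variables.
```python
# pip install -U langchain openai
from langchain.embeddings import AzureOpenAIEmbeddings  # present in recent langchain releases

# "<your-embedding-deployment>" is a placeholder; endpoint and key are read from env vars
embeddings = AzureOpenAIEmbeddings(azure_deployment="<your-embedding-deployment>")
```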
### Expected behavior
ImportError: cannot import name 'AzureOpenAIEmbeddings' from 'langchain.embeddings' (/opt/conda/lib/python3.10/site-packages/langchain/embeddings/__init__.py) | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14950/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14950/timeline | null | null | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/14949 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14949/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14949/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14949/events | https://github.com/langchain-ai/langchain/pull/14949 | 2,050,199,223 | PR_kwDOIPDwls5ic8iQ | 14,949 | Add support Vertex AI Gemini uses a public image URL | {
"login": "itok01",
"id": 28337009,
"node_id": "MDQ6VXNlcjI4MzM3MDA5",
"avatar_url": "https://avatars.githubusercontent.com/u/28337009?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/itok01",
"html_url": "https://github.com/itok01",
"followers_url": "https://api.github.com/users/itok01/followers",
"following_url": "https://api.github.com/users/itok01/following{/other_user}",
"gists_url": "https://api.github.com/users/itok01/gists{/gist_id}",
"starred_url": "https://api.github.com/users/itok01/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/itok01/subscriptions",
"organizations_url": "https://api.github.com/users/itok01/orgs",
"repos_url": "https://api.github.com/users/itok01/repos",
"events_url": "https://api.github.com/users/itok01/events{/privacy}",
"received_events_url": "https://api.github.com/users/itok01/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5454193895,
"node_id": "LA_kwDOIPDwls8AAAABRRhk5w",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/lgtm",
"name": "lgtm",
"color": "0E8A16",
"default": false,
"description": ""
},
{
"id": 5680700873,
"node_id": "LA_kwDOIPDwls8AAAABUpidyQ",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:improvement",
"name": "auto:improvement",
"color": "FBCA04",
"default": false,
"description": "Medium size change to existing code to handle new use-cases"
},
{
"id": 5820539098,
"node_id": "LA_kwDOIPDwls8AAAABWu5g2g",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20models",
"name": "area: models",
"color": "bfdadc",
"default": false,
"description": "Related to LLMs or chat model modules"
},
{
"id": 6232714119,
"node_id": "LA_kwDOIPDwls8AAAABc3-rhw",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:M",
"name": "size:M",
"color": "C5DEF5",
"default": false,
"description": "This PR changes 30-99 lines, ignoring generated files."
}
] | closed | false | {
"login": "hinthornw",
"id": 13333726,
"node_id": "MDQ6VXNlcjEzMzMzNzI2",
"avatar_url": "https://avatars.githubusercontent.com/u/13333726?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hinthornw",
"html_url": "https://github.com/hinthornw",
"followers_url": "https://api.github.com/users/hinthornw/followers",
"following_url": "https://api.github.com/users/hinthornw/following{/other_user}",
"gists_url": "https://api.github.com/users/hinthornw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hinthornw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hinthornw/subscriptions",
"organizations_url": "https://api.github.com/users/hinthornw/orgs",
"repos_url": "https://api.github.com/users/hinthornw/repos",
"events_url": "https://api.github.com/users/hinthornw/events{/privacy}",
"received_events_url": "https://api.github.com/users/hinthornw/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "hinthornw",
"id": 13333726,
"node_id": "MDQ6VXNlcjEzMzMzNzI2",
"avatar_url": "https://avatars.githubusercontent.com/u/13333726?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hinthornw",
"html_url": "https://github.com/hinthornw",
"followers_url": "https://api.github.com/users/hinthornw/followers",
"following_url": "https://api.github.com/users/hinthornw/following{/other_user}",
"gists_url": "https://api.github.com/users/hinthornw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hinthornw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hinthornw/subscriptions",
"organizations_url": "https://api.github.com/users/hinthornw/orgs",
"repos_url": "https://api.github.com/users/hinthornw/repos",
"events_url": "https://api.github.com/users/hinthornw/events{/privacy}",
"received_events_url": "https://api.github.com/users/hinthornw/received_events",
"type": "User",
"site_admin": false
}
] | null | 2 | 2023-12-20T09:45:21 | 2023-12-22T21:19:10 | 2023-12-22T21:19:09 | CONTRIBUTOR | null | ## What
Since `langchain_google_genai.ChatGoogleGenerativeAI` supports a public image URL, we add support for it in `langchain.chat_models.ChatVertexAI` as well.
### Example
```py
from langchain.chat_models.vertexai import ChatVertexAI
from langchain_core.messages import HumanMessage
llm = ChatVertexAI(model_name="gemini-pro-vision")
image_message = {
"type": "image_url",
"image_url": {
"url": "https://python.langchain.com/assets/images/cell-18-output-1-0c7fb8b94ff032d51bfe1880d8370104.png",
},
}
text_message = {
"type": "text",
"text": "What is shown in this image?",
}
message = HumanMessage(content=[text_message, image_message])
output = llm([message])
print(output.content)
```
## Refs
- https://python.langchain.com/docs/integrations/llms/google_vertex_ai_palm
- https://python.langchain.com/docs/integrations/chat/google_generative_ai | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14949/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14949/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14949",
"html_url": "https://github.com/langchain-ai/langchain/pull/14949",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14949.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14949.patch",
"merged_at": "2023-12-22T21:19:09"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14948 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14948/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14948/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14948/events | https://github.com/langchain-ai/langchain/issues/14948 | 2,050,173,519 | I_kwDOIPDwls56MypP | 14,948 | The calculated score is wrong when using DistanceStrategy.MAX_INNER_PRODUCT (Faiss) | {
"login": "careywyr",
"id": 17082044,
"node_id": "MDQ6VXNlcjE3MDgyMDQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/17082044?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/careywyr",
"html_url": "https://github.com/careywyr",
"followers_url": "https://api.github.com/users/careywyr/followers",
"following_url": "https://api.github.com/users/careywyr/following{/other_user}",
"gists_url": "https://api.github.com/users/careywyr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/careywyr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/careywyr/subscriptions",
"organizations_url": "https://api.github.com/users/careywyr/orgs",
"repos_url": "https://api.github.com/users/careywyr/repos",
"events_url": "https://api.github.com/users/careywyr/events{/privacy}",
"received_events_url": "https://api.github.com/users/careywyr/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5541141061,
"node_id": "LA_kwDOIPDwls8AAAABSkcaRQ",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20embeddings",
"name": "area: embeddings",
"color": "C5DEF5",
"default": false,
"description": "Related to text embedding models module"
},
{
"id": 5680700839,
"node_id": "LA_kwDOIPDwls8AAAABUpidpw",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:bug",
"name": "auto:bug",
"color": "E99695",
"default": false,
"description": "Related to a bug, vulnerability, unexpected error with an existing feature"
}
] | open | false | null | [] | null | 3 | 2023-12-20T09:29:25 | 2023-12-20T09:58:48 | null | NONE | null | ### System Info
langchain: 0.0.338
python: 3.9
### Who can help?
@hwchase17
@eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
In the following code:
``` python
if self.distance_strategy == DistanceStrategy.MAX_INNER_PRODUCT:
return self._max_inner_product_relevance_score_fn
elif self.distance_strategy == DistanceStrategy.EUCLIDEAN_DISTANCE:
# Default behavior is to use Euclidean distance for relevancy
return self._euclidean_relevance_score_fn
elif self.distance_strategy == DistanceStrategy.COSINE:
return self._cosine_relevance_score_fn
```
When I use MAX_INNER_PRODUCT, the score calculation method is `_max_inner_product_relevance_score_fn`:
``` python
def _max_inner_product_relevance_score_fn(distance: float) -> float:
"""Normalize the distance to a score on a scale of [0, 1]."""
if distance > 0:
return 1.0 - distance
return -1.0 * distance
```
However, if I use MAX_INNER_PRODUCT, the index must be FlatIP:
``` python
if distance_strategy == DistanceStrategy.MAX_INNER_PRODUCT:
index = faiss.IndexFlatIP(len(embeddings[0]))
else:
# Default to L2, currently other metric types not initialized.
index = faiss.IndexFlatL2(len(embeddings[0]))
```
Thus, the returned distance is the inner product (which equals the cosine similarity for normalized embeddings), so a larger distance means a more similar document. However, in the method `_max_inner_product_relevance_score_fn`, a larger distance results in a lower score.
Is this a bug?
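To make the concern concrete, here is a small worked example using the score function quoted above (plain arithmetic only, not a claim about the intended semantics):
```python
# Copy of the relevance function quoted above, for illustration only
def _max_inner_product_relevance_score_fn(distance: float) -> float:
    if distance > 0:
        return 1.0 - distance
    return -1.0 * distance

# With IndexFlatIP and unit-normalized embeddings, `distance` is the inner
# product, i.e. the cosine similarity of query and document:
very_similar = _max_inner_product_relevance_score_fn(0.9)    # ~0.1 (very similar doc -> low score)
barely_related = _max_inner_product_relevance_score_fn(0.1)  # ~0.9 (barely related doc -> high score)
```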
### Expected behavior
I think the distance should be equivalent to the similarity, i.e. a larger distance should yield a higher relevance score. | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14948/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14948/timeline | null | null | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/14947 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14947/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14947/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14947/events | https://github.com/langchain-ai/langchain/issues/14947 | 2,050,171,291 | I_kwDOIPDwls56MyGb | 14,947 | Issue: Getting 'An output parsing error occurred' error even after passing 'handle_parsing_errors=True' to the agent | {
"login": "utee3626",
"id": 113497852,
"node_id": "U_kgDOBsPW_A",
"avatar_url": "https://avatars.githubusercontent.com/u/113497852?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/utee3626",
"html_url": "https://github.com/utee3626",
"followers_url": "https://api.github.com/users/utee3626/followers",
"following_url": "https://api.github.com/users/utee3626/following{/other_user}",
"gists_url": "https://api.github.com/users/utee3626/gists{/gist_id}",
"starred_url": "https://api.github.com/users/utee3626/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/utee3626/subscriptions",
"organizations_url": "https://api.github.com/users/utee3626/orgs",
"repos_url": "https://api.github.com/users/utee3626/repos",
"events_url": "https://api.github.com/users/utee3626/events{/privacy}",
"received_events_url": "https://api.github.com/users/utee3626/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4899412369,
"node_id": "LA_kwDOIPDwls8AAAABJAcZkQ",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20agent",
"name": "area: agent",
"color": "BFD4F2",
"default": false,
"description": "Related to agents module"
},
{
"id": 5680700839,
"node_id": "LA_kwDOIPDwls8AAAABUpidpw",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:bug",
"name": "auto:bug",
"color": "E99695",
"default": false,
"description": "Related to a bug, vulnerability, unexpected error with an existing feature"
}
] | open | false | null | [] | null | 3 | 2023-12-20T09:28:03 | 2023-12-21T05:39:53 | null | NONE | null | ### Issue you'd like to raise.
I am using LangChain with the GPT-4 model and the create_pandas_dataframe_agent for my use case. About 60% of the time I run the code, I get the error below:
An output parsing error occurred. In order to pass this error back to the agent and have it try again, pass `handle_parsing_errors=True` to the AgentExecutor. This is the error: Could not parse LLM output: `Thought: To answer the question about what the "EffectiveDate" column represents, I need to use common sense based on the column name and the data provided. For the "exists" part, I need to check if there are any standardization issues in the "EffectiveDate" column. I will look at the data provided and think about the possible issues listed.
I have already passed `handle_parsing_errors=True` while creating the agent, but it still gives me the above error, suggesting that I pass this argument.
I have also tried giving other values for `handle_parsing_errors`, such as a custom error message or a function, but I am still stuck with this error most of the time.
### Suggestion:
_No response_ | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14947/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14947/timeline | null | null | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/14946 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14946/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14946/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14946/events | https://github.com/langchain-ai/langchain/issues/14946 | 2,050,054,174 | I_kwDOIPDwls56MVge | 14,946 | openai migrate | {
"login": "CalvinHMX",
"id": 108516348,
"node_id": "U_kgDOBnfT_A",
"avatar_url": "https://avatars.githubusercontent.com/u/108516348?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CalvinHMX",
"html_url": "https://github.com/CalvinHMX",
"followers_url": "https://api.github.com/users/CalvinHMX/followers",
"following_url": "https://api.github.com/users/CalvinHMX/following{/other_user}",
"gists_url": "https://api.github.com/users/CalvinHMX/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CalvinHMX/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CalvinHMX/subscriptions",
"organizations_url": "https://api.github.com/users/CalvinHMX/orgs",
"repos_url": "https://api.github.com/users/CalvinHMX/repos",
"events_url": "https://api.github.com/users/CalvinHMX/events{/privacy}",
"received_events_url": "https://api.github.com/users/CalvinHMX/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5680700848,
"node_id": "LA_kwDOIPDwls8AAAABUpidsA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:question",
"name": "auto:question",
"color": "BFD4F2",
"default": false,
"description": "A specific question about the codebase, product, project, or how to use a feature"
},
{
"id": 5820539098,
"node_id": "LA_kwDOIPDwls8AAAABWu5g2g",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20models",
"name": "area: models",
"color": "bfdadc",
"default": false,
"description": "Related to LLMs or chat model modules"
},
{
"id": 5924999838,
"node_id": "LA_kwDOIPDwls8AAAABYShSng",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/integration:%20chroma",
"name": "integration: chroma",
"color": "B78AF8",
"default": false,
"description": "Related to ChromaDB"
}
] | open | false | null | [] | null | 2 | 2023-12-20T08:08:13 | 2023-12-20T17:04:36 | null | NONE | null | ### System Info
```python
loader1 = CSVLoader(file_path='/home/calvin/下载/test.csv')
Doc = loader1.load()
text_splitter = CharacterTextSplitter(chunk_size=100,chunk_overlap=0)
texts = text_splitter.split_documents(Doc)
embeddings = OpenAIEmbeddings()
db = Chroma.from_documents(texts, embeddings)
retriever = db.as_retriever()
qa = RetrievalQA.from_chain_type(llm=OpenAI(mode="gpt-3.5-turbo"), chain_type="stuff", retriever=retriever)
query = "1501475820"
print(qa.run(query))
```
I run this code, but I cannot use gpt-3.5-turbo, so I tried running `openai migrate`; I exited it, and then I found:
![Screenshot_20231220_160320](https://github.com/langchain-ai/langchain/assets/108516348/cc3574ce-5fe8-4f10-bace-070ae4d41727)
and then it always tells me:
![Screenshot_20231220_160734](https://github.com/langchain-ai/langchain/assets/108516348/4f7b2e29-fb59-4c65-819b-a85f991b4f66)
@hwcha
### Who can help?
from langchain.chains import RetrievalQA
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
loader1 = CSVLoader(file_path='/home/calvin/下载/test.csv')
Doc = loader1.load()
text_splitter = CharacterTextSplitter(chunk_size=100,chunk_overlap=0)
texts = text_splitter.split_documents(Doc)
embeddings = OpenAIEmbeddings()
db = Chroma.from_documents(texts, embeddings)
retriever = db.as_retriever()
qa = RetrievalQA.from_chain_type(llm=OpenAI(mode="gpt-3.5-turbo"), chain_type="stuff", retriever=retriever)
query = "1501475820"
print(qa.run(query))
```
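As an aside (an assumption about the intent, not part of the original report): `gpt-3.5-turbo` is a chat model, so it is normally wired in through `ChatOpenAI` rather than `OpenAI(mode=...)`. A minimal sketch reusing the `retriever` above:
```python
from langchain.chat_models import ChatOpenAI

chat_llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
qa = RetrievalQA.from_chain_type(llm=chat_llm, chain_type="stuff", retriever=retriever)
print(qa.run("1501475820"))
```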
### Expected behavior
I want to use gpt-3.5-turbo for the query. | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14946/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14946/timeline | null | null | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/14945 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14945/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14945/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14945/events | https://github.com/langchain-ai/langchain/issues/14945 | 2,049,869,786 | I_kwDOIPDwls56Lofa | 14,945 | Issue: how to deploy gpt-4-turbo through langchain | {
"login": "Feya",
"id": 23689735,
"node_id": "MDQ6VXNlcjIzNjg5NzM1",
"avatar_url": "https://avatars.githubusercontent.com/u/23689735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Feya",
"html_url": "https://github.com/Feya",
"followers_url": "https://api.github.com/users/Feya/followers",
"following_url": "https://api.github.com/users/Feya/following{/other_user}",
"gists_url": "https://api.github.com/users/Feya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Feya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Feya/subscriptions",
"organizations_url": "https://api.github.com/users/Feya/orgs",
"repos_url": "https://api.github.com/users/Feya/repos",
"events_url": "https://api.github.com/users/Feya/events{/privacy}",
"received_events_url": "https://api.github.com/users/Feya/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5680700848,
"node_id": "LA_kwDOIPDwls8AAAABUpidsA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:question",
"name": "auto:question",
"color": "BFD4F2",
"default": false,
"description": "A specific question about the codebase, product, project, or how to use a feature"
},
{
"id": 5820539098,
"node_id": "LA_kwDOIPDwls8AAAABWu5g2g",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20models",
"name": "area: models",
"color": "bfdadc",
"default": false,
"description": "Related to LLMs or chat model modules"
}
] | closed | false | null | [] | null | 2 | 2023-12-20T05:36:23 | 2023-12-20T06:12:33 | 2023-12-20T06:12:12 | NONE | null | ### Issue you'd like to raise.
Hi, I can't find how to deploy gpt-4-turbo in LangChain.
Could anyone please tell me through which module gpt-4-turbo can be deployed?
It seems that langchain.llm has already been removed from the newer versions of langchain.
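For reference, a minimal sketch, assuming an OpenAI-hosted deployment; the model name "gpt-4-1106-preview" was the GPT-4 Turbo preview identifier at the time of writing and is an assumption about which model is wanted:
```python
from langchain.chat_models import ChatOpenAI

# GPT-4 Turbo is a chat model, so ChatOpenAI (not a completion LLM class) is the usual entry point
llm = ChatOpenAI(model_name="gpt-4-1106-preview", temperature=0)
print(llm.invoke("Hello!").content)
```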
### Suggestion:
_No response_ | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14945/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14945/timeline | null | completed | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/14944 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14944/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14944/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14944/events | https://github.com/langchain-ai/langchain/issues/14944 | 2,049,856,660 | I_kwDOIPDwls56LlSU | 14,944 | ImportError: cannot import name 'LLMContentHandler' from 'langchain.llms.sagemaker_endpoint' occurring with 0.0.351 | {
"login": "ColinFerguson",
"id": 12193825,
"node_id": "MDQ6VXNlcjEyMTkzODI1",
"avatar_url": "https://avatars.githubusercontent.com/u/12193825?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ColinFerguson",
"html_url": "https://github.com/ColinFerguson",
"followers_url": "https://api.github.com/users/ColinFerguson/followers",
"following_url": "https://api.github.com/users/ColinFerguson/following{/other_user}",
"gists_url": "https://api.github.com/users/ColinFerguson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ColinFerguson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ColinFerguson/subscriptions",
"organizations_url": "https://api.github.com/users/ColinFerguson/orgs",
"repos_url": "https://api.github.com/users/ColinFerguson/repos",
"events_url": "https://api.github.com/users/ColinFerguson/events{/privacy}",
"received_events_url": "https://api.github.com/users/ColinFerguson/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5680700839,
"node_id": "LA_kwDOIPDwls8AAAABUpidpw",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:bug",
"name": "auto:bug",
"color": "E99695",
"default": false,
"description": "Related to a bug, vulnerability, unexpected error with an existing feature"
},
{
"id": 5820539098,
"node_id": "LA_kwDOIPDwls8AAAABWu5g2g",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20models",
"name": "area: models",
"color": "bfdadc",
"default": false,
"description": "Related to LLMs or chat model modules"
},
{
"id": 5959659008,
"node_id": "LA_kwDOIPDwls8AAAABYzkuAA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/integration:%20aws",
"name": "integration: aws",
"color": "C5DEF5",
"default": false,
"description": "Related to Amazon Web Services (AWS) integrations"
}
] | open | false | null | [] | null | 1 | 2023-12-20T05:20:21 | 2023-12-20T05:28:08 | null | NONE | null | ### System Info
Langchain == 0.0.351
Python == 3.10.6
Running in AWS sagemaker notebook, issue occurred on multiple kernels.
Code worked perfectly yesterday, error occurred upon starting up this morning (12/19/23)
Code worked again upon reverting to 0.0.349
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
pip install -U langchain
from langchain.llms.sagemaker_endpoint import LLMContentHandler
### Expected behavior
Expected behavior is that the import works | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14944/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14944/timeline | null | null | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/14943 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14943/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14943/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14943/events | https://github.com/langchain-ai/langchain/issues/14943 | 2,049,786,926 | I_kwDOIPDwls56LUQu | 14,943 | How does AgentExecutor make LLM on_llm_new_token most streaming output instead of AgentExecutorIterator? | {
"login": "wangcailin",
"id": 26639112,
"node_id": "MDQ6VXNlcjI2NjM5MTEy",
"avatar_url": "https://avatars.githubusercontent.com/u/26639112?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wangcailin",
"html_url": "https://github.com/wangcailin",
"followers_url": "https://api.github.com/users/wangcailin/followers",
"following_url": "https://api.github.com/users/wangcailin/following{/other_user}",
"gists_url": "https://api.github.com/users/wangcailin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wangcailin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wangcailin/subscriptions",
"organizations_url": "https://api.github.com/users/wangcailin/orgs",
"repos_url": "https://api.github.com/users/wangcailin/repos",
"events_url": "https://api.github.com/users/wangcailin/events{/privacy}",
"received_events_url": "https://api.github.com/users/wangcailin/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5680700848,
"node_id": "LA_kwDOIPDwls8AAAABUpidsA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:question",
"name": "auto:question",
"color": "BFD4F2",
"default": false,
"description": "A specific question about the codebase, product, project, or how to use a feature"
},
{
"id": 5820539098,
"node_id": "LA_kwDOIPDwls8AAAABWu5g2g",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20models",
"name": "area: models",
"color": "bfdadc",
"default": false,
"description": "Related to LLMs or chat model modules"
}
] | open | false | null | [] | null | 5 | 2023-12-20T03:56:17 | 2023-12-20T09:30:52 | null | NONE | null | How does AgentExecutor make LLM on_llm_new_token most streaming output instead of AgentExecutorIterator?
The current effect is that it will stream out AgentExecutorIterator
Desired effect: streams LLM on_llm_new_token | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14943/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/langchain-ai/langchain/issues/14943/timeline | null | null | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/14942 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14942/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14942/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14942/events | https://github.com/langchain-ai/langchain/pull/14942 | 2,049,785,526 | PR_kwDOIPDwls5ibh7M | 14,942 | update notebook documentation for custom tool | {
"login": "yacine555",
"id": 1928640,
"node_id": "MDQ6VXNlcjE5Mjg2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1928640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yacine555",
"html_url": "https://github.com/yacine555",
"followers_url": "https://api.github.com/users/yacine555/followers",
"following_url": "https://api.github.com/users/yacine555/following{/other_user}",
"gists_url": "https://api.github.com/users/yacine555/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yacine555/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yacine555/subscriptions",
"organizations_url": "https://api.github.com/users/yacine555/orgs",
"repos_url": "https://api.github.com/users/yacine555/repos",
"events_url": "https://api.github.com/users/yacine555/events{/privacy}",
"received_events_url": "https://api.github.com/users/yacine555/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5454193895,
"node_id": "LA_kwDOIPDwls8AAAABRRhk5w",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/lgtm",
"name": "lgtm",
"color": "0E8A16",
"default": false,
"description": ""
},
{
"id": 5541144676,
"node_id": "LA_kwDOIPDwls8AAAABSkcoZA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20doc%20loader",
"name": "area: doc loader",
"color": "D4C5F9",
"default": false,
"description": "Related to document loader module (not documentation)"
},
{
"id": 5680700918,
"node_id": "LA_kwDOIPDwls8AAAABUpid9g",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:documentation",
"name": "auto:documentation",
"color": "C5DEF5",
"default": false,
"description": "Changes to documentation and examples, like .md, .rst, .ipynb files. Changes to the docs/ folder"
},
{
"id": 6232714126,
"node_id": "LA_kwDOIPDwls8AAAABc3-rjg",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:L",
"name": "size:L",
"color": "BFD4F2",
"default": false,
"description": "This PR changes 100-499 lines, ignoring generated files."
}
] | closed | false | null | [] | null | 1 | 2023-12-20T03:54:09 | 2023-12-20T05:08:59 | 2023-12-20T05:08:58 | CONTRIBUTOR | null |
- **Description:** Documentation update. The custom tool notebook documentation is updated to remove the warning caused by directly instantiating the LLMMathChain with an llm, which is deprecated. The from_llm class method is used instead. The LLM output results are updated as well.
- **Issue:** no applicable
- **Dependencies:** No dependencies
- **Tag maintainer:** @baskaryan
- **Twitter handle:** @ybouakkaz
| {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14942/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14942/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14942",
"html_url": "https://github.com/langchain-ai/langchain/pull/14942",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14942.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14942.patch",
"merged_at": "2023-12-20T05:08:58"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14941 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14941/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14941/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14941/events | https://github.com/langchain-ai/langchain/issues/14941 | 2,049,755,836 | I_kwDOIPDwls56LMq8 | 14,941 | Support tool for AzureChatOpenAI | {
"login": "ultmaster",
"id": 8463288,
"node_id": "MDQ6VXNlcjg0NjMyODg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8463288?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ultmaster",
"html_url": "https://github.com/ultmaster",
"followers_url": "https://api.github.com/users/ultmaster/followers",
"following_url": "https://api.github.com/users/ultmaster/following{/other_user}",
"gists_url": "https://api.github.com/users/ultmaster/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ultmaster/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ultmaster/subscriptions",
"organizations_url": "https://api.github.com/users/ultmaster/orgs",
"repos_url": "https://api.github.com/users/ultmaster/repos",
"events_url": "https://api.github.com/users/ultmaster/events{/privacy}",
"received_events_url": "https://api.github.com/users/ultmaster/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5680700873,
"node_id": "LA_kwDOIPDwls8AAAABUpidyQ",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:improvement",
"name": "auto:improvement",
"color": "FBCA04",
"default": false,
"description": "Medium size change to existing code to handle new use-cases"
},
{
"id": 5820539098,
"node_id": "LA_kwDOIPDwls8AAAABWu5g2g",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20models",
"name": "area: models",
"color": "bfdadc",
"default": false,
"description": "Related to LLMs or chat model modules"
}
] | open | false | null | [] | null | 1 | 2023-12-20T03:10:10 | 2023-12-20T03:12:38 | null | NONE | null | ### Feature request
There is a new implementation of function calling which I think isn't supported by langchain yet.
https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/function-calling
### Motivation
AzureChatOpenAI models can't be used by OpenAIFunctionAgent due to the implementation issue.
### Your contribution
I've implemented a workaround here. Hoping for a full solution.
```python
from langchain.chat_models import AzureChatOpenAI
class AzureChatOpenAIWithTooling(AzureChatOpenAI):
"""AzureChatOpenAI with a patch to support functions.
Function calling: https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/function-calling
Currently only a single function call is supported.
If multiple function calls are returned by the model, only the first one is used.
"""
def _generate(self, messages, stop=None, run_manager=None, stream=None, **kwargs):
if "functions" in kwargs:
kwargs["tools"] = [
{"type": "function", "function": f} for f in kwargs.pop("functions")
]
return super()._generate(messages, stop, run_manager, stream, **kwargs)
def _create_message_dicts(self, messages, stop):
dicts, params = super()._create_message_dicts(messages, stop)
latest_call_id = {}
for d in dicts:
if "function_call" in d:
# Record the ID for future use
latest_call_id[d["function_call"]["name"]] = d["function_call"]["id"]
# Convert back to tool call
d["tool_calls"] = [
{
"id": d["function_call"]["id"],
"function": {
k: v for k, v in d["function_call"].items() if k != "id"
},
"type": "function",
}
]
d.pop("function_call")
if d["role"] == "function":
# Renaming as tool
d["role"] = "tool"
d["tool_call_id"] = latest_call_id[d["name"]]
return dicts, params
def _create_chat_result(self, response):
result = super()._create_chat_result(response)
for generation in result.generations:
if generation.message.additional_kwargs.get("tool_calls"):
function_calls = [
{**t["function"], "id": t["id"]}
for t in generation.message.additional_kwargs.pop("tool_calls")
]
# Only consider the first one.
generation.message.additional_kwargs["function_call"] = function_calls[
0
]
return result
```
Test code:
```python
def test_azure_chat_openai():
from scripts.aoai_llm import AzureChatOpenAIWithTooling
agent = OpenAIFunctionsAgent.from_llm_and_tools(
llm=AzureChatOpenAIWithTooling(azure_deployment="gpt-35-turbo", api_version="2023-12-01-preview", temperature=0.),
tools=[
StructuredTool.from_function(get_current_weather)
],
)
action = agent.plan([], input="What's the weather like in San Francisco?")
print(action)
tool_output = get_current_weather(**action.tool_input)
result = agent.plan([
(action, tool_output)
], input="What's the weather like in San Francisco?")
print(result)
# Example function hard coded to return the same weather
# In production, this could be your backend API or an external API
def get_current_weather(location: str, unit: str = "fahrenheit"):
"""Get the current weather in a given location"""
if "tokyo" in location.lower():
return json.dumps({"location": "Tokyo", "temperature": "10", "unit": unit})
elif "san francisco" in location.lower():
return json.dumps(
{"location": "San Francisco", "temperature": "72", "unit": unit}
)
elif "paris" in location.lower():
return json.dumps({"location": "Paris", "temperature": "22", "unit": unit})
else:
return json.dumps({"location": location, "temperature": "unknown"})
```
(Note: the original example to ask about weather in three countries simultaneously doesn't work here.)
| {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14941/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14941/timeline | null | null | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/14940 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14940/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14940/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14940/events | https://github.com/langchain-ai/langchain/pull/14940 | 2,049,750,371 | PR_kwDOIPDwls5ibac3 | 14,940 | docs: links | {
"login": "efriis",
"id": 9557659,
"node_id": "MDQ6VXNlcjk1NTc2NTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/9557659?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/efriis",
"html_url": "https://github.com/efriis",
"followers_url": "https://api.github.com/users/efriis/followers",
"following_url": "https://api.github.com/users/efriis/following{/other_user}",
"gists_url": "https://api.github.com/users/efriis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/efriis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/efriis/subscriptions",
"organizations_url": "https://api.github.com/users/efriis/orgs",
"repos_url": "https://api.github.com/users/efriis/repos",
"events_url": "https://api.github.com/users/efriis/events{/privacy}",
"received_events_url": "https://api.github.com/users/efriis/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5541144676,
"node_id": "LA_kwDOIPDwls8AAAABSkcoZA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20doc%20loader",
"name": "area: doc loader",
"color": "D4C5F9",
"default": false,
"description": "Related to document loader module (not documentation)"
},
{
"id": 5680700918,
"node_id": "LA_kwDOIPDwls8AAAABUpid9g",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:documentation",
"name": "auto:documentation",
"color": "C5DEF5",
"default": false,
"description": "Changes to documentation and examples, like .md, .rst, .ipynb files. Changes to the docs/ folder"
},
{
"id": 6232714108,
"node_id": "LA_kwDOIPDwls8AAAABc3-rfA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:S",
"name": "size:S",
"color": "BFDADC",
"default": false,
"description": "This PR changes 10-29 lines, ignoring generated files."
}
] | closed | false | null | [] | null | 1 | 2023-12-20T03:02:47 | 2023-12-20T19:51:19 | 2023-12-20T19:51:18 | COLLABORATOR | null | null | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14940/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14940/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14940",
"html_url": "https://github.com/langchain-ai/langchain/pull/14940",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14940.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14940.patch",
"merged_at": "2023-12-20T19:51:18"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14939 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14939/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14939/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14939/events | https://github.com/langchain-ai/langchain/issues/14939 | 2,049,741,707 | I_kwDOIPDwls56LJOL | 14,939 | Embeddings | {
"login": "Vivek-Kawathalkar",
"id": 136422092,
"node_id": "U_kgDOCCGizA",
"avatar_url": "https://avatars.githubusercontent.com/u/136422092?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Vivek-Kawathalkar",
"html_url": "https://github.com/Vivek-Kawathalkar",
"followers_url": "https://api.github.com/users/Vivek-Kawathalkar/followers",
"following_url": "https://api.github.com/users/Vivek-Kawathalkar/following{/other_user}",
"gists_url": "https://api.github.com/users/Vivek-Kawathalkar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Vivek-Kawathalkar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Vivek-Kawathalkar/subscriptions",
"organizations_url": "https://api.github.com/users/Vivek-Kawathalkar/orgs",
"repos_url": "https://api.github.com/users/Vivek-Kawathalkar/repos",
"events_url": "https://api.github.com/users/Vivek-Kawathalkar/events{/privacy}",
"received_events_url": "https://api.github.com/users/Vivek-Kawathalkar/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5541141061,
"node_id": "LA_kwDOIPDwls8AAAABSkcaRQ",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20embeddings",
"name": "area: embeddings",
"color": "C5DEF5",
"default": false,
"description": "Related to text embedding models module"
},
{
"id": 5680700839,
"node_id": "LA_kwDOIPDwls8AAAABUpidpw",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:bug",
"name": "auto:bug",
"color": "E99695",
"default": false,
"description": "Related to a bug, vulnerability, unexpected error with an existing feature"
}
] | open | false | null | [] | null | 1 | 2023-12-20T02:50:23 | 2023-12-20T03:00:15 | null | NONE | null | ### System Info
Traceback (most recent call last):
File "c:\Users\vivek\OneDrive\Desktop\Hackathon\doc.py", line 43, in <module>
db = FAISS.from_documents(documents=pages, embedding=embeddings)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\schema\vectorstore.py", line 510, in from_documents
return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\vectorstores\faiss.py", line 911, in from_texts
embeddings = embedding.embed_documents(texts)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\embeddings\openai.py", line 549, in embed_documents
return self._get_len_safe_embeddings(texts, engine=engine)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\embeddings\openai.py", line 392, in _get_len_safe_embeddings
encoding = tiktoken.encoding_for_model(model_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\tiktoken\model.py", line 75, in encoding_for_model
return get_encoding(encoding_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\tiktoken\registry.py", line 63, in get_encoding
enc = Encoding(**constructor())
^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\tiktoken_ext\openai_public.py", line 64, in cl100k_base
mergeable_ranks = load_tiktoken_bpe(
^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\tiktoken\load.py", line 115, in load_tiktoken_bpe
return {
^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\tiktoken\load.py", line 117, in <dictcomp>
for token, rank in (line.split() for line in contents.splitlines() if line)
^^^^^^^^^^^
ValueError: not enough values to unpack (expected 2, got 1)
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.document_loaders import PyPDFLoader
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.embeddings import AzureOpenAIEmbeddings
from langchain.vectorstores import FAISS
from dotenv import load_dotenv
import openai
import os
#load environment variables
load_dotenv()
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
OPENAI_DEPLOYMENT_ENDPOINT = os.getenv("OPENAI_DEPLOYMENT_ENDPOINT")
OPENAI_DEPLOYMENT_NAME = os.getenv("OPENAI_DEPLOYMENT_NAME")
OPENAI_MODEL_NAME = os.getenv("OPENAI_MODEL_NAME")
OPENAI_DEPLOYMENT_VERSION = os.getenv("OPENAI_DEPLOYMENT_VERSION")
OPENAI_ADA_EMBEDDING_DEPLOYMENT_NAME = os.getenv("OPENAI_ADA_EMBEDDING_DEPLOYMENT_NAME")
OPENAI_ADA_EMBEDDING_MODEL_NAME = os.getenv("OPENAI_ADA_EMBEDDING_MODEL_NAME")
#init Azure OpenAI
openai.api_type = "azure"
openai.api_version = OPENAI_DEPLOYMENT_VERSION
openai.api_base = OPENAI_DEPLOYMENT_ENDPOINT
openai.api_key = OPENAI_API_KEY
# if __name__ == "__main__":
embeddings=AzureOpenAIEmbeddings(deployment=OPENAI_ADA_EMBEDDING_DEPLOYMENT_NAME,
model=OPENAI_ADA_EMBEDDING_MODEL_NAME,
azure_endpoint=OPENAI_DEPLOYMENT_ENDPOINT,
openai_api_type="azure",
chunk_size=100)
# dataPath = "./data/documentation/"
fileName = r'C:\Users\vivek\OneDrive\Desktop\Hackathon\data\FAQ For LTO Hotels.pdf'
#use langchain PDF loader
loader = PyPDFLoader(fileName)
#split the document into chunks
pages = loader.load_and_split()
#Use Langchain to create the embeddings using text-embedding-ada-002
db = FAISS.from_documents(documents=pages, embedding=embeddings)
#save the embeddings into FAISS vector store
db.save_local(r"C:\Users\vivek\OneDrive\Desktop\Hackathon\index")
from dotenv import load_dotenv
import os
import openai
from langchain.chains import RetrievalQA
from langchain.vectorstores import FAISS
from langchain.chains.question_answering import load_qa_chain
from langchain.chat_models import AzureChatOpenAI
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.embeddings import AzureOpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chains import ConversationalRetrievalChain
from langchain.prompts import PromptTemplate
#load environment variables
load_dotenv()
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
OPENAI_DEPLOYMENT_ENDPOINT = os.getenv("OPENAI_DEPLOYMENT_ENDPOINT")
OPENAI_DEPLOYMENT_NAME = os.getenv("OPENAI_DEPLOYMENT_NAME")
OPENAI_MODEL_NAME = os.getenv("OPENAI_MODEL_NAME")
OPENAI_DEPLOYMENT_VERSION = os.getenv("OPENAI_DEPLOYMENT_VERSION")
OPENAI_ADA_EMBEDDING_DEPLOYMENT_NAME = os.getenv("OPENAI_ADA_EMBEDDING_DEPLOYMENT_NAME")
OPENAI_ADA_EMBEDDING_MODEL_NAME = os.getenv("OPENAI_ADA_EMBEDDING_MODEL_NAME")
def ask_question(qa, question):
    result = qa({"query": question})
    print("Question:", question)
    print("Answer:", result["result"])


def ask_question_with_context(qa, question, chat_history):
    query = "what is Azure OpenAI Service?"
    result = qa({"question": question, "chat_history": chat_history})
    print("answer:", result["answer"])
    chat_history = [(query, result["answer"])]
    return chat_history


if __name__ == "__main__":
    # Configure OpenAI API
    openai.api_type = "azure"
    openai.api_base = os.getenv('OPENAI_API_BASE')
    openai.api_key = os.getenv("OPENAI_API_KEY")
    openai.api_version = os.getenv('OPENAI_API_VERSION')

    llm = AzureChatOpenAI(deployment_name=OPENAI_DEPLOYMENT_NAME,
                          model_name=OPENAI_MODEL_NAME,
                          openai_api_base=OPENAI_DEPLOYMENT_ENDPOINT,
                          openai_api_version=OPENAI_DEPLOYMENT_VERSION,
                          openai_api_key=OPENAI_API_KEY,
                          openai_api_type="azure")

    embeddings = AzureOpenAIEmbeddings(deployment=OPENAI_ADA_EMBEDDING_DEPLOYMENT_NAME,
                                       model=OPENAI_ADA_EMBEDDING_MODEL_NAME,
                                       azure_endpoint=OPENAI_DEPLOYMENT_ENDPOINT,
                                       openai_api_type="azure",
                                       chunk_size=100)

    # Initialize gpt-35-turbo and our embedding model
    # load the faiss vector store we saved into memory
    vectorStore = FAISS.load_local(r"C:\Users\vivek\OneDrive\Desktop\Hackathon\index", embeddings)
    # use the faiss vector store we saved to search the local document
    retriever = vectorStore.as_retriever(search_type="similarity", search_kwargs={"k": 2})

    QUESTION_PROMPT = PromptTemplate.from_template("""Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question.
Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:""")

    qa = ConversationalRetrievalChain.from_llm(llm=llm,
                                               retriever=retriever,
                                               condense_question_prompt=QUESTION_PROMPT,
                                               return_source_documents=True,
                                               verbose=False)

    chat_history = []
    while True:
        query = input('you: ')
        if query == 'q':
            break
        chat_history = ask_question_with_context(qa, query, chat_history)
### Expected behavior
QA | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14939/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14939/timeline | null | null | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/14938 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14938/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14938/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14938/events | https://github.com/langchain-ai/langchain/pull/14938 | 2,049,730,895 | PR_kwDOIPDwls5ibWYU | 14,938 | docs: remove unused contributor steps | {
"login": "efriis",
"id": 9557659,
"node_id": "MDQ6VXNlcjk1NTc2NTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/9557659?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/efriis",
"html_url": "https://github.com/efriis",
"followers_url": "https://api.github.com/users/efriis/followers",
"following_url": "https://api.github.com/users/efriis/following{/other_user}",
"gists_url": "https://api.github.com/users/efriis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/efriis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/efriis/subscriptions",
"organizations_url": "https://api.github.com/users/efriis/orgs",
"repos_url": "https://api.github.com/users/efriis/repos",
"events_url": "https://api.github.com/users/efriis/events{/privacy}",
"received_events_url": "https://api.github.com/users/efriis/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5541144676,
"node_id": "LA_kwDOIPDwls8AAAABSkcoZA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20doc%20loader",
"name": "area: doc loader",
"color": "D4C5F9",
"default": false,
"description": "Related to document loader module (not documentation)"
},
{
"id": 5680700918,
"node_id": "LA_kwDOIPDwls8AAAABUpid9g",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:documentation",
"name": "auto:documentation",
"color": "C5DEF5",
"default": false,
"description": "Changes to documentation and examples, like .md, .rst, .ipynb files. Changes to the docs/ folder"
},
{
"id": 6232714104,
"node_id": "LA_kwDOIPDwls8AAAABc3-reA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:XS",
"name": "size:XS",
"color": "C2E0C6",
"default": false,
"description": "This PR changes 0-9 lines, ignoring generated files."
}
] | closed | false | null | [] | null | 1 | 2023-12-20T02:35:04 | 2023-12-20T02:41:51 | 2023-12-20T02:41:50 | COLLABORATOR | null | null | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14938/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14938/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14938",
"html_url": "https://github.com/langchain-ai/langchain/pull/14938",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14938.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14938.patch",
"merged_at": "2023-12-20T02:41:50"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14937 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14937/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14937/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14937/events | https://github.com/langchain-ai/langchain/issues/14937 | 2,049,722,601 | I_kwDOIPDwls56LEjp | 14,937 | agent intent recognition | {
"login": "Gzxl",
"id": 6359205,
"node_id": "MDQ6VXNlcjYzNTkyMDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6359205?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Gzxl",
"html_url": "https://github.com/Gzxl",
"followers_url": "https://api.github.com/users/Gzxl/followers",
"following_url": "https://api.github.com/users/Gzxl/following{/other_user}",
"gists_url": "https://api.github.com/users/Gzxl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Gzxl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Gzxl/subscriptions",
"organizations_url": "https://api.github.com/users/Gzxl/orgs",
"repos_url": "https://api.github.com/users/Gzxl/repos",
"events_url": "https://api.github.com/users/Gzxl/events{/privacy}",
"received_events_url": "https://api.github.com/users/Gzxl/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4899412369,
"node_id": "LA_kwDOIPDwls8AAAABJAcZkQ",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20agent",
"name": "area: agent",
"color": "BFD4F2",
"default": false,
"description": "Related to agents module"
},
{
"id": 5680700848,
"node_id": "LA_kwDOIPDwls8AAAABUpidsA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:question",
"name": "auto:question",
"color": "BFD4F2",
"default": false,
"description": "A specific question about the codebase, product, project, or how to use a feature"
}
] | open | false | null | [] | null | 1 | 2023-12-20T02:23:37 | 2023-12-20T02:31:39 | null | NONE | null | 假设我基于langchain分别实现了用于数据库查询的mysqlagent、用于访问外部链接apichain、以及用于rag的知识查询agent,我如何通过用户输入,将用户的请求分发到不同的agent
| {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14937/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14937/timeline | null | null | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/14936 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14936/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14936/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14936/events | https://github.com/langchain-ai/langchain/pull/14936 | 2,049,692,134 | PR_kwDOIPDwls5ibN75 | 14,936 | together: package and embedding model | {
"login": "efriis",
"id": 9557659,
"node_id": "MDQ6VXNlcjk1NTc2NTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/9557659?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/efriis",
"html_url": "https://github.com/efriis",
"followers_url": "https://api.github.com/users/efriis/followers",
"following_url": "https://api.github.com/users/efriis/following{/other_user}",
"gists_url": "https://api.github.com/users/efriis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/efriis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/efriis/subscriptions",
"organizations_url": "https://api.github.com/users/efriis/orgs",
"repos_url": "https://api.github.com/users/efriis/repos",
"events_url": "https://api.github.com/users/efriis/events{/privacy}",
"received_events_url": "https://api.github.com/users/efriis/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5541141061,
"node_id": "LA_kwDOIPDwls8AAAABSkcaRQ",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20embeddings",
"name": "area: embeddings",
"color": "C5DEF5",
"default": false,
"description": "Related to text embedding models module"
},
{
"id": 5680700863,
"node_id": "LA_kwDOIPDwls8AAAABUpidvw",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:enhancement",
"name": "auto:enhancement",
"color": "C2E0C6",
"default": false,
"description": "A large net-new component, integration, or chain. Use sparingly. The largest features"
},
{
"id": 6232714126,
"node_id": "LA_kwDOIPDwls8AAAABc3-rjg",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:L",
"name": "size:L",
"color": "BFD4F2",
"default": false,
"description": "This PR changes 100-499 lines, ignoring generated files."
}
] | closed | false | null | [] | null | 1 | 2023-12-20T01:52:13 | 2023-12-20T02:48:34 | 2023-12-20T02:48:33 | COLLABORATOR | null | null | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14936/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14936/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14936",
"html_url": "https://github.com/langchain-ai/langchain/pull/14936",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14936.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14936.patch",
"merged_at": "2023-12-20T02:48:33"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14935 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14935/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14935/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14935/events | https://github.com/langchain-ai/langchain/pull/14935 | 2,049,678,998 | PR_kwDOIPDwls5ibK6C | 14,935 | Add support for Vertex ai multimodal embeddings | {
"login": "alexleventer",
"id": 3254549,
"node_id": "MDQ6VXNlcjMyNTQ1NDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3254549?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alexleventer",
"html_url": "https://github.com/alexleventer",
"followers_url": "https://api.github.com/users/alexleventer/followers",
"following_url": "https://api.github.com/users/alexleventer/following{/other_user}",
"gists_url": "https://api.github.com/users/alexleventer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alexleventer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexleventer/subscriptions",
"organizations_url": "https://api.github.com/users/alexleventer/orgs",
"repos_url": "https://api.github.com/users/alexleventer/repos",
"events_url": "https://api.github.com/users/alexleventer/events{/privacy}",
"received_events_url": "https://api.github.com/users/alexleventer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5541141061,
"node_id": "LA_kwDOIPDwls8AAAABSkcaRQ",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20embeddings",
"name": "area: embeddings",
"color": "C5DEF5",
"default": false,
"description": "Related to text embedding models module"
},
{
"id": 5680700863,
"node_id": "LA_kwDOIPDwls8AAAABUpidvw",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:enhancement",
"name": "auto:enhancement",
"color": "C2E0C6",
"default": false,
"description": "A large net-new component, integration, or chain. Use sparingly. The largest features"
},
{
"id": 6232714126,
"node_id": "LA_kwDOIPDwls8AAAABc3-rjg",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:L",
"name": "size:L",
"color": "BFD4F2",
"default": false,
"description": "This PR changes 100-499 lines, ignoring generated files."
}
] | open | false | {
"login": "efriis",
"id": 9557659,
"node_id": "MDQ6VXNlcjk1NTc2NTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/9557659?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/efriis",
"html_url": "https://github.com/efriis",
"followers_url": "https://api.github.com/users/efriis/followers",
"following_url": "https://api.github.com/users/efriis/following{/other_user}",
"gists_url": "https://api.github.com/users/efriis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/efriis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/efriis/subscriptions",
"organizations_url": "https://api.github.com/users/efriis/orgs",
"repos_url": "https://api.github.com/users/efriis/repos",
"events_url": "https://api.github.com/users/efriis/events{/privacy}",
"received_events_url": "https://api.github.com/users/efriis/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "efriis",
"id": 9557659,
"node_id": "MDQ6VXNlcjk1NTc2NTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/9557659?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/efriis",
"html_url": "https://github.com/efriis",
"followers_url": "https://api.github.com/users/efriis/followers",
"following_url": "https://api.github.com/users/efriis/following{/other_user}",
"gists_url": "https://api.github.com/users/efriis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/efriis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/efriis/subscriptions",
"organizations_url": "https://api.github.com/users/efriis/orgs",
"repos_url": "https://api.github.com/users/efriis/repos",
"events_url": "https://api.github.com/users/efriis/events{/privacy}",
"received_events_url": "https://api.github.com/users/efriis/received_events",
"type": "User",
"site_admin": false
}
] | null | 5 | 2023-12-20T01:44:51 | 2024-01-09T21:04:40 | null | NONE | null | <!-- Thank you for contributing to LangChain!
Replace this entire comment with:
- **Description:** a description of the change,
- **Issue:** the issue # it fixes (if applicable),
- **Dependencies:** any dependencies required for this change,
- **Tag maintainer:** for a quicker response, tag the relevant maintainer (see below),
- **Twitter handle:** we announce bigger features on Twitter. If your PR gets announced, and you'd like a mention, we'll gladly shout you out!
Please make sure your PR is passing linting and testing before submitting. Run `make format`, `make lint` and `make test` to check this locally.
See contribution guidelines for more information on how to write/run tests, lint, etc:
https://python.langchain.com/docs/contributing/
If you're adding a new integration, please include:
1. a test for the integration, preferably unit tests that do not rely on network access,
2. an example notebook showing its use. It lives in `docs/extras` directory.
If no one reviews your PR within a few days, please @-mention one of @baskaryan, @eyurtsev, @hwchase17.
-->
| {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14935/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14935/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14935",
"html_url": "https://github.com/langchain-ai/langchain/pull/14935",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14935.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14935.patch",
"merged_at": null
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14934 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14934/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14934/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14934/events | https://github.com/langchain-ai/langchain/issues/14934 | 2,049,674,847 | I_kwDOIPDwls56K45f | 14,934 | AZure Openai Embeddings | {
"login": "Vivek-Kawathalkar",
"id": 136422092,
"node_id": "U_kgDOCCGizA",
"avatar_url": "https://avatars.githubusercontent.com/u/136422092?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Vivek-Kawathalkar",
"html_url": "https://github.com/Vivek-Kawathalkar",
"followers_url": "https://api.github.com/users/Vivek-Kawathalkar/followers",
"following_url": "https://api.github.com/users/Vivek-Kawathalkar/following{/other_user}",
"gists_url": "https://api.github.com/users/Vivek-Kawathalkar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Vivek-Kawathalkar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Vivek-Kawathalkar/subscriptions",
"organizations_url": "https://api.github.com/users/Vivek-Kawathalkar/orgs",
"repos_url": "https://api.github.com/users/Vivek-Kawathalkar/repos",
"events_url": "https://api.github.com/users/Vivek-Kawathalkar/events{/privacy}",
"received_events_url": "https://api.github.com/users/Vivek-Kawathalkar/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5541141061,
"node_id": "LA_kwDOIPDwls8AAAABSkcaRQ",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20embeddings",
"name": "area: embeddings",
"color": "C5DEF5",
"default": false,
"description": "Related to text embedding models module"
},
{
"id": 5680700839,
"node_id": "LA_kwDOIPDwls8AAAABUpidpw",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:bug",
"name": "auto:bug",
"color": "E99695",
"default": false,
"description": "Related to a bug, vulnerability, unexpected error with an existing feature"
}
] | open | false | null | [] | null | 1 | 2023-12-20T01:40:55 | 2023-12-20T01:50:53 | null | NONE | null | ### System Info
C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\embeddings\azure_openai.py:101: UserWarning: As of openai>=1.0.0, Azure endpoints should be specified via the `azure_endpoint` param not `openai_api_base` (or alias `base_url`). Updating `openai_api_base` from <your openai endpoint> to <your openai endpoint>/openai.
warnings.warn(
C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\embeddings\azure_openai.py:108: UserWarning: As of openai>=1.0.0, if `deployment` (or alias `azure_deployment`) is specified then `openai_api_base` (or alias `base_url`) should not be. Instead use `deployment` (or alias `azure_deployment`) and `azure_endpoint`.
warnings.warn(
C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\embeddings\azure_openai.py:116: UserWarning: As of openai>=1.0.0, if `openai_api_base` (or alias `base_url`) is specified it is expected to be of the form https://example-resource.azure.openai.com/openai/deployments/example-deployment. Updating <your openai endpoint> to <your openai endpoint>/openai.
warnings.warn(
Traceback (most recent call last):
File "c:\Users\vivek\OneDrive\Desktop\Hackathon\doc.py", line 28, in <module>
embeddings=AzureOpenAIEmbeddings(deployment=OPENAI_ADA_EMBEDDING_DEPLOYMENT_NAME,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for AzureOpenAIEmbeddings
__root__
base_url and azure_endpoint are mutually exclusive (type=value_error)
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.document_loaders import PyPDFLoader
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.embeddings import AzureOpenAIEmbeddings
from langchain.vectorstores import FAISS
from dotenv import load_dotenv
import openai
import os
#load environment variables
load_dotenv()
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
OPENAI_DEPLOYMENT_ENDPOINT = os.getenv("OPENAI_DEPLOYMENT_ENDPOINT")
OPENAI_DEPLOYMENT_NAME = os.getenv("OPENAI_DEPLOYMENT_NAME")
OPENAI_MODEL_NAME = os.getenv("OPENAI_MODEL_NAME")
OPENAI_DEPLOYMENT_VERSION = os.getenv("OPENAI_DEPLOYMENT_VERSION")
OPENAI_ADA_EMBEDDING_DEPLOYMENT_NAME = os.getenv("OPENAI_ADA_EMBEDDING_DEPLOYMENT_NAME")
OPENAI_ADA_EMBEDDING_MODEL_NAME = os.getenv("OPENAI_ADA_EMBEDDING_MODEL_NAME")
#init Azure OpenAI
openai.api_type = "azure"
openai.api_version = OPENAI_DEPLOYMENT_VERSION
openai.api_base = OPENAI_DEPLOYMENT_ENDPOINT
openai.api_key = OPENAI_API_KEY
# if __name__ == "__main__":
embeddings=AzureOpenAIEmbeddings(deployment=OPENAI_ADA_EMBEDDING_DEPLOYMENT_NAME,
model=OPENAI_ADA_EMBEDDING_MODEL_NAME,
openai_api_base=OPENAI_DEPLOYMENT_ENDPOINT,
openai_api_type="azure",
chunk_size=100)
# dataPath = "./data/documentation/"
fileName = r'C:\Users\vivek\OneDrive\Desktop\Hackathon\data\FAQ For LTO Hotels.pdf'
#use langchain PDF loader
loader = PyPDFLoader(fileName)
#split the document into chunks
pages = loader.load_and_split()
#Use Langchain to create the embeddings using text-embedding-ada-002
db = FAISS.from_documents(documents=pages, embedding=embeddings)
#save the embeddings into FAISS vector store
db.save_local(r"C:\Users\vivek\OneDrive\Desktop\Hackathon\index")
from dotenv import load_dotenv
import os
import openai
from langchain.chains import RetrievalQA
from langchain.vectorstores import FAISS
from langchain.chains.question_answering import load_qa_chain
from langchain.chat_models import AzureChatOpenAI
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chains import ConversationalRetrievalChain
from langchain.prompts import PromptTemplate
#load environment variables
load_dotenv()
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
OPENAI_DEPLOYMENT_ENDPOINT = os.getenv("OPENAI_DEPLOYMENT_ENDPOINT")
OPENAI_DEPLOYMENT_NAME = os.getenv("OPENAI_DEPLOYMENT_NAME")
OPENAI_MODEL_NAME = os.getenv("OPENAI_MODEL_NAME")
OPENAI_DEPLOYMENT_VERSION = os.getenv("OPENAI_DEPLOYMENT_VERSION")
OPENAI_ADA_EMBEDDING_DEPLOYMENT_NAME = os.getenv("OPENAI_ADA_EMBEDDING_DEPLOYMENT_NAME")
OPENAI_ADA_EMBEDDING_MODEL_NAME = os.getenv("OPENAI_ADA_EMBEDDING_MODEL_NAME")
def ask_question(qa, question):
    result = qa({"query": question})
    print("Question:", question)
    print("Answer:", result["result"])


def ask_question_with_context(qa, question, chat_history):
    query = "what is Azure OpenAI Service?"
    result = qa({"question": question, "chat_history": chat_history})
    print("answer:", result["answer"])
    chat_history = [(query, result["answer"])]
    return chat_history


if __name__ == "__main__":
    # Configure OpenAI API
    openai.api_type = "azure"
    openai.api_base = os.getenv('OPENAI_API_BASE')
    openai.api_key = os.getenv("OPENAI_API_KEY")
    openai.api_version = os.getenv('OPENAI_API_VERSION')

    llm = AzureChatOpenAI(deployment_name=OPENAI_DEPLOYMENT_NAME,
                          model_name=OPENAI_MODEL_NAME,
                          openai_api_base=OPENAI_DEPLOYMENT_ENDPOINT,
                          openai_api_version=OPENAI_DEPLOYMENT_VERSION,
                          openai_api_key=OPENAI_API_KEY,
                          openai_api_type="azure")

    embeddings = OpenAIEmbeddings(deployment=OPENAI_ADA_EMBEDDING_DEPLOYMENT_NAME,
                                  model=OPENAI_ADA_EMBEDDING_MODEL_NAME,
                                  openai_api_base=OPENAI_DEPLOYMENT_ENDPOINT,
                                  openai_api_type="azure",
                                  chunk_size=1)

    # Initialize gpt-35-turbo and our embedding model
    # load the faiss vector store we saved into memory
    vectorStore = FAISS.load_local(r"C:\Users\vivek\OneDrive\Desktop\Hackathon\index", embeddings)
    # use the faiss vector store we saved to search the local document
    retriever = vectorStore.as_retriever(search_type="similarity", search_kwargs={"k": 2})

    QUESTION_PROMPT = PromptTemplate.from_template("""Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question.
Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:""")

    qa = ConversationalRetrievalChain.from_llm(llm=llm,
                                               retriever=retriever,
                                               condense_question_prompt=QUESTION_PROMPT,
                                               return_source_documents=True,
                                               verbose=False)

    chat_history = []
    while True:
        query = input('you: ')
        if query == 'q':
            break
        chat_history = ask_question_with_context(qa, query, chat_history)
### Expected behavior
QA | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14934/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14934/timeline | null | null | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/14933 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14933/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14933/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14933/events | https://github.com/langchain-ai/langchain/pull/14933 | 2,049,673,680 | PR_kwDOIPDwls5ibJv8 | 14,933 | community[patch]: Add param "task" to Databricks LLM to work around serialization of transform_output_fn | {
"login": "liangz1",
"id": 7851093,
"node_id": "MDQ6VXNlcjc4NTEwOTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7851093?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/liangz1",
"html_url": "https://github.com/liangz1",
"followers_url": "https://api.github.com/users/liangz1/followers",
"following_url": "https://api.github.com/users/liangz1/following{/other_user}",
"gists_url": "https://api.github.com/users/liangz1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/liangz1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liangz1/subscriptions",
"organizations_url": "https://api.github.com/users/liangz1/orgs",
"repos_url": "https://api.github.com/users/liangz1/repos",
"events_url": "https://api.github.com/users/liangz1/events{/privacy}",
"received_events_url": "https://api.github.com/users/liangz1/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5454193895,
"node_id": "LA_kwDOIPDwls8AAAABRRhk5w",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/lgtm",
"name": "lgtm",
"color": "0E8A16",
"default": false,
"description": ""
},
{
"id": 5680700873,
"node_id": "LA_kwDOIPDwls8AAAABUpidyQ",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:improvement",
"name": "auto:improvement",
"color": "FBCA04",
"default": false,
"description": "Medium size change to existing code to handle new use-cases"
},
{
"id": 5820539098,
"node_id": "LA_kwDOIPDwls8AAAABWu5g2g",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20models",
"name": "area: models",
"color": "bfdadc",
"default": false,
"description": "Related to LLMs or chat model modules"
},
{
"id": 6232714108,
"node_id": "LA_kwDOIPDwls8AAAABc3-rfA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:S",
"name": "size:S",
"color": "BFDADC",
"default": false,
"description": "This PR changes 10-29 lines, ignoring generated files."
}
] | closed | false | null | [] | null | 1 | 2023-12-20T01:39:09 | 2023-12-20T17:50:24 | 2023-12-20T17:50:24 | CONTRIBUTOR | null | **What is the reproduce code?**
```python
from langchain.chains import LLMChain, load_chain
from langchain.llms import Databricks
from langchain.prompts import PromptTemplate
def transform_output(response):
    # Extract the answer from the responses.
    return str(response["candidates"][0]["text"])


def transform_input(**request):
    full_prompt = f"""{request["prompt"]}
    Be Concise.
    """
    request["prompt"] = full_prompt
    return request


chat_model = Databricks(
    endpoint_name="llama2-13B-chat-Brambles",
    transform_input_fn=transform_input,
    transform_output_fn=transform_output,
    verbose=True,
)
print(f"Test chat model: {chat_model('What is Apache Spark')}") # This works
llm_chain = LLMChain(llm=chat_model, prompt=PromptTemplate.from_template("{chat_input}"))
llm_chain("colorful socks") # this works
llm_chain.save("databricks_llm_chain.yaml") # transform_input_fn and transform_output_fn are not serialized into the model yaml file
loaded_chain = load_chain("databricks_llm_chain.yaml") # The Databricks LLM is recreated with transform_input_fn=None, transform_output_fn=None.
loaded_chain("colorful socks") # Thus this errors. The transform_output_fn is needed to produce the correct output
```
Error:
```
File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-6c34afab-3473-421d-877f-1ef18930ef4d/lib/python3.10/site-packages/pydantic/v1/main.py", line 341, in __init__
raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for Generation
text
str type expected (type=type_error.str)
request payload: {'query': 'What is a databricks notebook?'}'}
```
**What does the error mean?**
When the LLM generates an answer, it is represented by a Generation data object. The Generation object expects a str field called text, e.g. Generation(text="blah"). However, the Databricks LLM tried to put a non-str into text, e.g. Generation(text={"candidates": [{"text": "blah"}]}), so pydantic raises a validation error.
**Why does the output format become incorrect after saving and loading the Databricks LLM?**
The Databricks LLM does not support serializing transform_input_fn and transform_output_fn, so they are not serialized into the saved yaml file. When the Databricks LLM is loaded, it is recreated with transform_input_fn=None and transform_output_fn=None. Without transform_output_fn, the raw response is not unwrapped into a string, which causes the error above.
Missing transform_output_fn causes this error.
Missing transform_input_fn causes the additional prompt “Be Concise.” to be lost after saving and loading.
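A stop-gap, until the `task` parameter added by this PR makes the endpoint format explicit, is to re-attach the non-serializable callables right after `load_chain`. This is only a sketch: it assumes the loaded `LLMChain` allows its `llm` field to be reassigned and that `transform_input` / `transform_output` from the reproduce script are still in scope.

```python
# Workaround sketch: the yaml file cannot carry Python callables, so rebuild the
# Databricks LLM (with its transform functions) and attach it to the loaded chain.
# Assumes chain fields are assignable after loading.
loaded_chain = load_chain("databricks_llm_chain.yaml")
loaded_chain.llm = Databricks(
    endpoint_name="llama2-13B-chat-Brambles",
    transform_input_fn=transform_input,    # restores the "Be Concise." prompt wrapping
    transform_output_fn=transform_output,  # restores unwrapping to a plain string
    verbose=True,
)
loaded_chain("colorful socks")  # now behaves as it did before saving
```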
| {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14933/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14933/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14933",
"html_url": "https://github.com/langchain-ai/langchain/pull/14933",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14933.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14933.patch",
"merged_at": "2023-12-20T17:50:24"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14931 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14931/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14931/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14931/events | https://github.com/langchain-ai/langchain/pull/14931 | 2,049,526,819 | PR_kwDOIPDwls5iaqNU | 14,931 | Add langsmith and benchmark repo links | {
"login": "hinthornw",
"id": 13333726,
"node_id": "MDQ6VXNlcjEzMzMzNzI2",
"avatar_url": "https://avatars.githubusercontent.com/u/13333726?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hinthornw",
"html_url": "https://github.com/hinthornw",
"followers_url": "https://api.github.com/users/hinthornw/followers",
"following_url": "https://api.github.com/users/hinthornw/following{/other_user}",
"gists_url": "https://api.github.com/users/hinthornw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hinthornw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hinthornw/subscriptions",
"organizations_url": "https://api.github.com/users/hinthornw/orgs",
"repos_url": "https://api.github.com/users/hinthornw/repos",
"events_url": "https://api.github.com/users/hinthornw/events{/privacy}",
"received_events_url": "https://api.github.com/users/hinthornw/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5541144676,
"node_id": "LA_kwDOIPDwls8AAAABSkcoZA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20doc%20loader",
"name": "area: doc loader",
"color": "D4C5F9",
"default": false,
"description": "Related to document loader module (not documentation)"
},
{
"id": 5680700918,
"node_id": "LA_kwDOIPDwls8AAAABUpid9g",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:documentation",
"name": "auto:documentation",
"color": "C5DEF5",
"default": false,
"description": "Changes to documentation and examples, like .md, .rst, .ipynb files. Changes to the docs/ folder"
},
{
"id": 6232714108,
"node_id": "LA_kwDOIPDwls8AAAABc3-rfA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:S",
"name": "size:S",
"color": "BFDADC",
"default": false,
"description": "This PR changes 10-29 lines, ignoring generated files."
}
] | closed | false | null | [] | null | 1 | 2023-12-19T22:21:59 | 2023-12-20T01:44:33 | 2023-12-20T01:44:32 | COLLABORATOR | null | Think we could link to these in more places | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14931/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14931/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14931",
"html_url": "https://github.com/langchain-ai/langchain/pull/14931",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14931.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14931.patch",
"merged_at": "2023-12-20T01:44:32"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14930 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14930/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14930/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14930/events | https://github.com/langchain-ai/langchain/pull/14930 | 2,049,493,631 | PR_kwDOIPDwls5iai9j | 14,930 | community[patch]: Matching engine, return doc id | {
"login": "baskaryan",
"id": 22008038,
"node_id": "MDQ6VXNlcjIyMDA4MDM4",
"avatar_url": "https://avatars.githubusercontent.com/u/22008038?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/baskaryan",
"html_url": "https://github.com/baskaryan",
"followers_url": "https://api.github.com/users/baskaryan/followers",
"following_url": "https://api.github.com/users/baskaryan/following{/other_user}",
"gists_url": "https://api.github.com/users/baskaryan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/baskaryan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/baskaryan/subscriptions",
"organizations_url": "https://api.github.com/users/baskaryan/orgs",
"repos_url": "https://api.github.com/users/baskaryan/repos",
"events_url": "https://api.github.com/users/baskaryan/events{/privacy}",
"received_events_url": "https://api.github.com/users/baskaryan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5454193895,
"node_id": "LA_kwDOIPDwls8AAAABRRhk5w",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/lgtm",
"name": "lgtm",
"color": "0E8A16",
"default": false,
"description": ""
},
{
"id": 5541432778,
"node_id": "LA_kwDOIPDwls8AAAABSkuNyg",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20vector%20store",
"name": "area: vector store",
"color": "D4C5F9",
"default": false,
"description": "Related to vector store module"
},
{
"id": 5680700873,
"node_id": "LA_kwDOIPDwls8AAAABUpidyQ",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:improvement",
"name": "auto:improvement",
"color": "FBCA04",
"default": false,
"description": "Medium size change to existing code to handle new use-cases"
},
{
"id": 6232714119,
"node_id": "LA_kwDOIPDwls8AAAABc3-rhw",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:M",
"name": "size:M",
"color": "C5DEF5",
"default": false,
"description": "This PR changes 30-99 lines, ignoring generated files."
}
] | closed | false | null | [] | null | 1 | 2023-12-19T21:53:27 | 2023-12-20T05:03:12 | 2023-12-20T05:03:11 | COLLABORATOR | null | null | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14930/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14930/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14930",
"html_url": "https://github.com/langchain-ai/langchain/pull/14930",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14930.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14930.patch",
"merged_at": "2023-12-20T05:03:11"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14929 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14929/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14929/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14929/events | https://github.com/langchain-ai/langchain/pull/14929 | 2,049,471,496 | PR_kwDOIPDwls5iaePv | 14,929 | Update embedding_distance.ipynb | {
"login": "aroffe99",
"id": 22308552,
"node_id": "MDQ6VXNlcjIyMzA4NTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/22308552?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aroffe99",
"html_url": "https://github.com/aroffe99",
"followers_url": "https://api.github.com/users/aroffe99/followers",
"following_url": "https://api.github.com/users/aroffe99/following{/other_user}",
"gists_url": "https://api.github.com/users/aroffe99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aroffe99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aroffe99/subscriptions",
"organizations_url": "https://api.github.com/users/aroffe99/orgs",
"repos_url": "https://api.github.com/users/aroffe99/repos",
"events_url": "https://api.github.com/users/aroffe99/events{/privacy}",
"received_events_url": "https://api.github.com/users/aroffe99/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5454193895,
"node_id": "LA_kwDOIPDwls8AAAABRRhk5w",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/lgtm",
"name": "lgtm",
"color": "0E8A16",
"default": false,
"description": ""
},
{
"id": 5541141061,
"node_id": "LA_kwDOIPDwls8AAAABSkcaRQ",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20embeddings",
"name": "area: embeddings",
"color": "C5DEF5",
"default": false,
"description": "Related to text embedding models module"
},
{
"id": 5680700918,
"node_id": "LA_kwDOIPDwls8AAAABUpid9g",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:documentation",
"name": "auto:documentation",
"color": "C5DEF5",
"default": false,
"description": "Changes to documentation and examples, like .md, .rst, .ipynb files. Changes to the docs/ folder"
},
{
"id": 6232714104,
"node_id": "LA_kwDOIPDwls8AAAABc3-reA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:XS",
"name": "size:XS",
"color": "C2E0C6",
"default": false,
"description": "This PR changes 0-9 lines, ignoring generated files."
}
] | closed | false | null | [] | null | 1 | 2023-12-19T21:35:53 | 2023-12-20T05:13:18 | 2023-12-20T05:13:17 | CONTRIBUTOR | null | **Description:** Fix the docs about embedding distance evaluations guide.
| {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14929/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14929/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14929",
"html_url": "https://github.com/langchain-ai/langchain/pull/14929",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14929.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14929.patch",
"merged_at": "2023-12-20T05:13:17"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14928 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14928/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14928/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14928/events | https://github.com/langchain-ai/langchain/pull/14928 | 2,049,463,077 | PR_kwDOIPDwls5iacaa | 14,928 | anthropic: beta messages integration | {
"login": "efriis",
"id": 9557659,
"node_id": "MDQ6VXNlcjk1NTc2NTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/9557659?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/efriis",
"html_url": "https://github.com/efriis",
"followers_url": "https://api.github.com/users/efriis/followers",
"following_url": "https://api.github.com/users/efriis/following{/other_user}",
"gists_url": "https://api.github.com/users/efriis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/efriis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/efriis/subscriptions",
"organizations_url": "https://api.github.com/users/efriis/orgs",
"repos_url": "https://api.github.com/users/efriis/repos",
"events_url": "https://api.github.com/users/efriis/events{/privacy}",
"received_events_url": "https://api.github.com/users/efriis/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5680700863,
"node_id": "LA_kwDOIPDwls8AAAABUpidvw",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:enhancement",
"name": "auto:enhancement",
"color": "C2E0C6",
"default": false,
"description": "A large net-new component, integration, or chain. Use sparingly. The largest features"
},
{
"id": 5820539098,
"node_id": "LA_kwDOIPDwls8AAAABWu5g2g",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20models",
"name": "area: models",
"color": "bfdadc",
"default": false,
"description": "Related to LLMs or chat model modules"
},
{
"id": 6232714130,
"node_id": "LA_kwDOIPDwls8AAAABc3-rkg",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:XL",
"name": "size:XL",
"color": "D4C5F9",
"default": false,
"description": "This PR changes 500-999 lines, ignoring generated files."
}
] | closed | false | null | [] | null | 2 | 2023-12-19T21:28:36 | 2023-12-20T02:55:20 | 2023-12-20T02:55:20 | COLLABORATOR | null | null | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14928/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14928/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14928",
"html_url": "https://github.com/langchain-ai/langchain/pull/14928",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14928.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14928.patch",
"merged_at": "2023-12-20T02:55:19"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14927 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14927/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14927/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14927/events | https://github.com/langchain-ai/langchain/pull/14927 | 2,049,450,775 | PR_kwDOIPDwls5iaZrK | 14,927 | community[minor]: Add HF chat models | {
"login": "baskaryan",
"id": 22008038,
"node_id": "MDQ6VXNlcjIyMDA4MDM4",
"avatar_url": "https://avatars.githubusercontent.com/u/22008038?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/baskaryan",
"html_url": "https://github.com/baskaryan",
"followers_url": "https://api.github.com/users/baskaryan/followers",
"following_url": "https://api.github.com/users/baskaryan/following{/other_user}",
"gists_url": "https://api.github.com/users/baskaryan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/baskaryan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/baskaryan/subscriptions",
"organizations_url": "https://api.github.com/users/baskaryan/orgs",
"repos_url": "https://api.github.com/users/baskaryan/repos",
"events_url": "https://api.github.com/users/baskaryan/events{/privacy}",
"received_events_url": "https://api.github.com/users/baskaryan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5680700863,
"node_id": "LA_kwDOIPDwls8AAAABUpidvw",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:enhancement",
"name": "auto:enhancement",
"color": "C2E0C6",
"default": false,
"description": "A large net-new component, integration, or chain. Use sparingly. The largest features"
},
{
"id": 5820539098,
"node_id": "LA_kwDOIPDwls8AAAABWu5g2g",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20models",
"name": "area: models",
"color": "bfdadc",
"default": false,
"description": "Related to LLMs or chat model modules"
},
{
"id": 6232714130,
"node_id": "LA_kwDOIPDwls8AAAABc3-rkg",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:XL",
"name": "size:XL",
"color": "D4C5F9",
"default": false,
"description": "This PR changes 500-999 lines, ignoring generated files."
}
] | closed | false | null | [] | null | 1 | 2023-12-19T21:18:46 | 2023-12-28T15:11:13 | 2023-12-28T15:11:13 | COLLABORATOR | null | null | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14927/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14927/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14927",
"html_url": "https://github.com/langchain-ai/langchain/pull/14927",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14927.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14927.patch",
"merged_at": null
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14926 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14926/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14926/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14926/events | https://github.com/langchain-ai/langchain/pull/14926 | 2,049,430,965 | PR_kwDOIPDwls5iaVU- | 14,926 | ConvoOutputParser: Handle multiline Action Input + potential hallucin… | {
"login": "mkorpela",
"id": 136885,
"node_id": "MDQ6VXNlcjEzNjg4NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/136885?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mkorpela",
"html_url": "https://github.com/mkorpela",
"followers_url": "https://api.github.com/users/mkorpela/followers",
"following_url": "https://api.github.com/users/mkorpela/following{/other_user}",
"gists_url": "https://api.github.com/users/mkorpela/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mkorpela/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mkorpela/subscriptions",
"organizations_url": "https://api.github.com/users/mkorpela/orgs",
"repos_url": "https://api.github.com/users/mkorpela/repos",
"events_url": "https://api.github.com/users/mkorpela/events{/privacy}",
"received_events_url": "https://api.github.com/users/mkorpela/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4899412369,
"node_id": "LA_kwDOIPDwls8AAAABJAcZkQ",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20agent",
"name": "area: agent",
"color": "BFD4F2",
"default": false,
"description": "Related to agents module"
},
{
"id": 5680700873,
"node_id": "LA_kwDOIPDwls8AAAABUpidyQ",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:improvement",
"name": "auto:improvement",
"color": "FBCA04",
"default": false,
"description": "Medium size change to existing code to handle new use-cases"
},
{
"id": 6232714119,
"node_id": "LA_kwDOIPDwls8AAAABc3-rhw",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:M",
"name": "size:M",
"color": "C5DEF5",
"default": false,
"description": "This PR changes 30-99 lines, ignoring generated files."
}
] | closed | false | null | [] | null | 5 | 2023-12-19T21:02:52 | 2024-01-02T09:10:48 | 2024-01-01T23:59:54 | CONTRIBUTOR | null | …ated Observation
- **Description:** Handle multiline Action Inputs.
- **Issue:** Fixes cases where our agent produced a multiline Action Input (specifically one containing Python code); a sketch of the parsing idea is shown after this list.
- **Dependencies:** --
- **Tag maintainer:** @hwchase17
- **Twitter handle:** All the credits to @robocorp
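Below is a minimal, hypothetical sketch of how a multiline Action Input can be captured with a single DOTALL regex. It only illustrates the general idea and is not the exact code added in this PR.

```python
import re

# Illustrative sketch only (not the PR's actual implementation): a DOTALL regex
# lets the "Action Input" capture group span multiple lines, e.g. Python code.
ACTION_RE = re.compile(
    r"Action:\s*(?P<action>.*?)\s*Action Input:\s*(?P<action_input>.*)",
    re.DOTALL,
)

llm_output = """Thought: I need to run some code.
Action: python_repl
Action Input: for i in range(3):
    print(i)
"""

match = ACTION_RE.search(llm_output)
if match:
    print(repr(match.group("action")))        # 'python_repl'
    print(repr(match.group("action_input")))  # the full multiline code block
```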
Note: I was not entirely sure where the tests should go, so this is my best effort. | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14926/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14926/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14926",
"html_url": "https://github.com/langchain-ai/langchain/pull/14926",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14926.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14926.patch",
"merged_at": null
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14925 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14925/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14925/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14925/events | https://github.com/langchain-ai/langchain/pull/14925 | 2,049,367,825 | PR_kwDOIPDwls5iaHcY | 14,925 | docs `huggingface` platform page update | {
"login": "leo-gan",
"id": 2256422,
"node_id": "MDQ6VXNlcjIyNTY0MjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2256422?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leo-gan",
"html_url": "https://github.com/leo-gan",
"followers_url": "https://api.github.com/users/leo-gan/followers",
"following_url": "https://api.github.com/users/leo-gan/following{/other_user}",
"gists_url": "https://api.github.com/users/leo-gan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leo-gan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leo-gan/subscriptions",
"organizations_url": "https://api.github.com/users/leo-gan/orgs",
"repos_url": "https://api.github.com/users/leo-gan/repos",
"events_url": "https://api.github.com/users/leo-gan/events{/privacy}",
"received_events_url": "https://api.github.com/users/leo-gan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5541144676,
"node_id": "LA_kwDOIPDwls8AAAABSkcoZA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20doc%20loader",
"name": "area: doc loader",
"color": "D4C5F9",
"default": false,
"description": "Related to document loader module (not documentation)"
},
{
"id": 5680700918,
"node_id": "LA_kwDOIPDwls8AAAABUpid9g",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:documentation",
"name": "auto:documentation",
"color": "C5DEF5",
"default": false,
"description": "Changes to documentation and examples, like .md, .rst, .ipynb files. Changes to the docs/ folder"
},
{
"id": 6232714108,
"node_id": "LA_kwDOIPDwls8AAAABc3-rfA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:S",
"name": "size:S",
"color": "BFDADC",
"default": false,
"description": "This PR changes 10-29 lines, ignoring generated files."
}
] | closed | false | null | [] | null | 2 | 2023-12-19T20:12:21 | 2023-12-20T20:36:14 | 2023-12-20T20:36:14 | COLLABORATOR | null | Added a reference to the `Hugging Face prompt injection identification` page | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14925/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14925/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14925",
"html_url": "https://github.com/langchain-ai/langchain/pull/14925",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14925.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14925.patch",
"merged_at": null
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14924 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14924/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14924/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14924/events | https://github.com/langchain-ai/langchain/pull/14924 | 2,049,356,264 | PR_kwDOIPDwls5iaE6z | 14,924 | cli: test_integration group | {
"login": "efriis",
"id": 9557659,
"node_id": "MDQ6VXNlcjk1NTc2NTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/9557659?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/efriis",
"html_url": "https://github.com/efriis",
"followers_url": "https://api.github.com/users/efriis/followers",
"following_url": "https://api.github.com/users/efriis/following{/other_user}",
"gists_url": "https://api.github.com/users/efriis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/efriis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/efriis/subscriptions",
"organizations_url": "https://api.github.com/users/efriis/orgs",
"repos_url": "https://api.github.com/users/efriis/repos",
"events_url": "https://api.github.com/users/efriis/events{/privacy}",
"received_events_url": "https://api.github.com/users/efriis/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5541144676,
"node_id": "LA_kwDOIPDwls8AAAABSkcoZA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20doc%20loader",
"name": "area: doc loader",
"color": "D4C5F9",
"default": false,
"description": "Related to document loader module (not documentation)"
},
{
"id": 5680700883,
"node_id": "LA_kwDOIPDwls8AAAABUpid0w",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:nit",
"name": "auto:nit",
"color": "FEF2C0",
"default": false,
"description": "Small modifications/deletions, fixes, deps or improvements to existing code or docs"
},
{
"id": 5959659008,
"node_id": "LA_kwDOIPDwls8AAAABYzkuAA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/integration:%20aws",
"name": "integration: aws",
"color": "C5DEF5",
"default": false,
"description": "Related to Amazon Web Services (AWS) integrations"
},
{
"id": 6232714104,
"node_id": "LA_kwDOIPDwls8AAAABc3-reA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:XS",
"name": "size:XS",
"color": "C2E0C6",
"default": false,
"description": "This PR changes 0-9 lines, ignoring generated files."
}
] | closed | false | null | [] | null | 1 | 2023-12-19T20:03:26 | 2023-12-19T20:09:05 | 2023-12-19T20:09:04 | COLLABORATOR | null | null | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14924/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14924/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14924",
"html_url": "https://github.com/langchain-ai/langchain/pull/14924",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14924.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14924.patch",
"merged_at": "2023-12-19T20:09:04"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14923 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14923/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14923/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14923/events | https://github.com/langchain-ai/langchain/pull/14923 | 2,049,341,319 | PR_kwDOIPDwls5iaBoC | 14,923 | Don't reassign chunk_type | {
"login": "coreyb42",
"id": 8770962,
"node_id": "MDQ6VXNlcjg3NzA5NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8770962?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/coreyb42",
"html_url": "https://github.com/coreyb42",
"followers_url": "https://api.github.com/users/coreyb42/followers",
"following_url": "https://api.github.com/users/coreyb42/following{/other_user}",
"gists_url": "https://api.github.com/users/coreyb42/gists{/gist_id}",
"starred_url": "https://api.github.com/users/coreyb42/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/coreyb42/subscriptions",
"organizations_url": "https://api.github.com/users/coreyb42/orgs",
"repos_url": "https://api.github.com/users/coreyb42/repos",
"events_url": "https://api.github.com/users/coreyb42/events{/privacy}",
"received_events_url": "https://api.github.com/users/coreyb42/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5454193895,
"node_id": "LA_kwDOIPDwls8AAAABRRhk5w",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/lgtm",
"name": "lgtm",
"color": "0E8A16",
"default": false,
"description": ""
},
{
"id": 5541144676,
"node_id": "LA_kwDOIPDwls8AAAABSkcoZA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20doc%20loader",
"name": "area: doc loader",
"color": "D4C5F9",
"default": false,
"description": "Related to document loader module (not documentation)"
},
{
"id": 5680700883,
"node_id": "LA_kwDOIPDwls8AAAABUpid0w",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:nit",
"name": "auto:nit",
"color": "FEF2C0",
"default": false,
"description": "Small modifications/deletions, fixes, deps or improvements to existing code or docs"
},
{
"id": 6232714104,
"node_id": "LA_kwDOIPDwls8AAAABc3-reA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:XS",
"name": "size:XS",
"color": "C2E0C6",
"default": false,
"description": "This PR changes 0-9 lines, ignoring generated files."
}
] | closed | false | null | [] | null | 2 | 2023-12-19T19:51:55 | 2023-12-22T21:20:53 | 2023-12-22T21:20:53 | CONTRIBUTOR | null | **Description**: The parameter chunk_type was being hard coded to "extractive_answers", so that when "snippet" was being passed, it was being ignored. This change simply doesn't do that. | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14923/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14923/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14923",
"html_url": "https://github.com/langchain-ai/langchain/pull/14923",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14923.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14923.patch",
"merged_at": "2023-12-22T21:20:53"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14922 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14922/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14922/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14922/events | https://github.com/langchain-ai/langchain/pull/14922 | 2,049,266,381 | PR_kwDOIPDwls5iZxDJ | 14,922 | Update google_cloud_storage_directory.ipynb | {
"login": "elenamatay",
"id": 47299995,
"node_id": "MDQ6VXNlcjQ3Mjk5OTk1",
"avatar_url": "https://avatars.githubusercontent.com/u/47299995?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elenamatay",
"html_url": "https://github.com/elenamatay",
"followers_url": "https://api.github.com/users/elenamatay/followers",
"following_url": "https://api.github.com/users/elenamatay/following{/other_user}",
"gists_url": "https://api.github.com/users/elenamatay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elenamatay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elenamatay/subscriptions",
"organizations_url": "https://api.github.com/users/elenamatay/orgs",
"repos_url": "https://api.github.com/users/elenamatay/repos",
"events_url": "https://api.github.com/users/elenamatay/events{/privacy}",
"received_events_url": "https://api.github.com/users/elenamatay/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5454193895,
"node_id": "LA_kwDOIPDwls8AAAABRRhk5w",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/lgtm",
"name": "lgtm",
"color": "0E8A16",
"default": false,
"description": ""
},
{
"id": 5541144676,
"node_id": "LA_kwDOIPDwls8AAAABSkcoZA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20doc%20loader",
"name": "area: doc loader",
"color": "D4C5F9",
"default": false,
"description": "Related to document loader module (not documentation)"
},
{
"id": 5680700918,
"node_id": "LA_kwDOIPDwls8AAAABUpid9g",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:documentation",
"name": "auto:documentation",
"color": "C5DEF5",
"default": false,
"description": "Changes to documentation and examples, like .md, .rst, .ipynb files. Changes to the docs/ folder"
},
{
"id": 6232714104,
"node_id": "LA_kwDOIPDwls8AAAABc3-reA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:XS",
"name": "size:XS",
"color": "C2E0C6",
"default": false,
"description": "This PR changes 0-9 lines, ignoring generated files."
}
] | closed | false | null | [] | null | 1 | 2023-12-19T18:58:54 | 2023-12-20T05:21:43 | 2023-12-20T05:21:43 | CONTRIBUTOR | null | - Description: Just a minor addition to the documentation to clarify how to load all files from a folder. I had assumed I could do this by specifying the folder as part of the bucket (BUCKET/FOLDER), instead of using the prefix.
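A short, hedged sketch of the prefix-based approach described above; the project, bucket, and folder names are placeholders, and the parameter names follow the loader's documented signature:

```python
# Hedged illustration: load every file under a folder by passing the folder as
# `prefix`, while `bucket` stays the bare bucket name (no "/FOLDER" suffix).
from langchain.document_loaders import GCSDirectoryLoader

loader = GCSDirectoryLoader(
    project_name="my-project",  # placeholder project
    bucket="my-bucket",         # bucket name only
    prefix="my-folder/",        # the folder to load from
)
docs = loader.load()
print(len(docs))
```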
| {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14922/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14922/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14922",
"html_url": "https://github.com/langchain-ai/langchain/pull/14922",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14922.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14922.patch",
"merged_at": "2023-12-20T05:21:43"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14921 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14921/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14921/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14921/events | https://github.com/langchain-ai/langchain/pull/14921 | 2,049,246,100 | PR_kwDOIPDwls5iZsmF | 14,921 | templates: fix sql-research-assistant | {
"login": "efriis",
"id": 9557659,
"node_id": "MDQ6VXNlcjk1NTc2NTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/9557659?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/efriis",
"html_url": "https://github.com/efriis",
"followers_url": "https://api.github.com/users/efriis/followers",
"following_url": "https://api.github.com/users/efriis/following{/other_user}",
"gists_url": "https://api.github.com/users/efriis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/efriis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/efriis/subscriptions",
"organizations_url": "https://api.github.com/users/efriis/orgs",
"repos_url": "https://api.github.com/users/efriis/repos",
"events_url": "https://api.github.com/users/efriis/events{/privacy}",
"received_events_url": "https://api.github.com/users/efriis/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5680700883,
"node_id": "LA_kwDOIPDwls8AAAABUpid0w",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:nit",
"name": "auto:nit",
"color": "FEF2C0",
"default": false,
"description": "Small modifications/deletions, fixes, deps or improvements to existing code or docs"
},
{
"id": 6232714108,
"node_id": "LA_kwDOIPDwls8AAAABc3-rfA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:S",
"name": "size:S",
"color": "BFDADC",
"default": false,
"description": "This PR changes 10-29 lines, ignoring generated files."
}
] | closed | false | null | [] | null | 1 | 2023-12-19T18:43:12 | 2023-12-19T19:56:01 | 2023-12-19T19:56:00 | COLLABORATOR | null | null | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14921/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14921/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14921",
"html_url": "https://github.com/langchain-ai/langchain/pull/14921",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14921.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14921.patch",
"merged_at": "2023-12-19T19:56:00"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14920 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14920/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14920/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14920/events | https://github.com/langchain-ai/langchain/pull/14920 | 2,049,181,264 | PR_kwDOIPDwls5iZeNB | 14,920 | cli: 0.0.20 | {
"login": "efriis",
"id": 9557659,
"node_id": "MDQ6VXNlcjk1NTc2NTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/9557659?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/efriis",
"html_url": "https://github.com/efriis",
"followers_url": "https://api.github.com/users/efriis/followers",
"following_url": "https://api.github.com/users/efriis/following{/other_user}",
"gists_url": "https://api.github.com/users/efriis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/efriis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/efriis/subscriptions",
"organizations_url": "https://api.github.com/users/efriis/orgs",
"repos_url": "https://api.github.com/users/efriis/repos",
"events_url": "https://api.github.com/users/efriis/events{/privacy}",
"received_events_url": "https://api.github.com/users/efriis/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4899412369,
"node_id": "LA_kwDOIPDwls8AAAABJAcZkQ",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20agent",
"name": "area: agent",
"color": "BFD4F2",
"default": false,
"description": "Related to agents module"
},
{
"id": 5680700883,
"node_id": "LA_kwDOIPDwls8AAAABUpid0w",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:nit",
"name": "auto:nit",
"color": "FEF2C0",
"default": false,
"description": "Small modifications/deletions, fixes, deps or improvements to existing code or docs"
},
{
"id": 6232714104,
"node_id": "LA_kwDOIPDwls8AAAABc3-reA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:XS",
"name": "size:XS",
"color": "C2E0C6",
"default": false,
"description": "This PR changes 0-9 lines, ignoring generated files."
}
] | closed | false | null | [] | null | 1 | 2023-12-19T17:59:58 | 2023-12-19T19:56:22 | 2023-12-19T19:56:21 | COLLABORATOR | null | null | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14920/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14920/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14920",
"html_url": "https://github.com/langchain-ai/langchain/pull/14920",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14920.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14920.patch",
"merged_at": "2023-12-19T19:56:21"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14919 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14919/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14919/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14919/events | https://github.com/langchain-ai/langchain/pull/14919 | 2,049,178,750 | PR_kwDOIPDwls5iZdp5 | 14,919 | fix(minor): added missing **kwargs parameter to chroma query function | {
"login": "joel-teratis",
"id": 133014106,
"node_id": "U_kgDOB-2iWg",
"avatar_url": "https://avatars.githubusercontent.com/u/133014106?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joel-teratis",
"html_url": "https://github.com/joel-teratis",
"followers_url": "https://api.github.com/users/joel-teratis/followers",
"following_url": "https://api.github.com/users/joel-teratis/following{/other_user}",
"gists_url": "https://api.github.com/users/joel-teratis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joel-teratis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joel-teratis/subscriptions",
"organizations_url": "https://api.github.com/users/joel-teratis/orgs",
"repos_url": "https://api.github.com/users/joel-teratis/repos",
"events_url": "https://api.github.com/users/joel-teratis/events{/privacy}",
"received_events_url": "https://api.github.com/users/joel-teratis/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5454193895,
"node_id": "LA_kwDOIPDwls8AAAABRRhk5w",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/lgtm",
"name": "lgtm",
"color": "0E8A16",
"default": false,
"description": ""
},
{
"id": 5541432778,
"node_id": "LA_kwDOIPDwls8AAAABSkuNyg",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20vector%20store",
"name": "area: vector store",
"color": "D4C5F9",
"default": false,
"description": "Related to vector store module"
},
{
"id": 5680700873,
"node_id": "LA_kwDOIPDwls8AAAABUpidyQ",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:improvement",
"name": "auto:improvement",
"color": "FBCA04",
"default": false,
"description": "Medium size change to existing code to handle new use-cases"
},
{
"id": 5924999838,
"node_id": "LA_kwDOIPDwls8AAAABYShSng",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/integration:%20chroma",
"name": "integration: chroma",
"color": "B78AF8",
"default": false,
"description": "Related to ChromaDB"
},
{
"id": 6232714104,
"node_id": "LA_kwDOIPDwls8AAAABc3-reA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:XS",
"name": "size:XS",
"color": "C2E0C6",
"default": false,
"description": "This PR changes 0-9 lines, ignoring generated files."
}
] | closed | false | null | [] | null | 3 | 2023-12-19T17:58:20 | 2024-01-02T08:11:48 | 2024-01-01T21:40:29 | CONTRIBUTOR | null | **Description:**
This PR adds the `**kwargs` parameter to six calls in the `chroma.py` package. All of these functions were already able to receive `kwargs`, but the values were discarded before being passed on.
**Issue:**
When passing `kwargs` to functions in the `chroma.py` package they are being ignored.
For example:
```
chroma_instance.similarity_search_with_score(
    query,
    k=100,
    include=["metadatas", "documents", "distances", "embeddings"],  # this parameter gets ignored
)
```
The `include` parameter does not get passed on to the next function and does not have any effect.
**Dependencies:**
None
| {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14919/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14919/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14919",
"html_url": "https://github.com/langchain-ai/langchain/pull/14919",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14919.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14919.patch",
"merged_at": "2024-01-01T21:40:29"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14918 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14918/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14918/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14918/events | https://github.com/langchain-ai/langchain/issues/14918 | 2,049,130,491 | I_kwDOIPDwls56Iz_7 | 14,918 | ValueError: not enough values to unpack (expected 2, got 1) | {
"login": "Vivek-Kawathalkar",
"id": 136422092,
"node_id": "U_kgDOCCGizA",
"avatar_url": "https://avatars.githubusercontent.com/u/136422092?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Vivek-Kawathalkar",
"html_url": "https://github.com/Vivek-Kawathalkar",
"followers_url": "https://api.github.com/users/Vivek-Kawathalkar/followers",
"following_url": "https://api.github.com/users/Vivek-Kawathalkar/following{/other_user}",
"gists_url": "https://api.github.com/users/Vivek-Kawathalkar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Vivek-Kawathalkar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Vivek-Kawathalkar/subscriptions",
"organizations_url": "https://api.github.com/users/Vivek-Kawathalkar/orgs",
"repos_url": "https://api.github.com/users/Vivek-Kawathalkar/repos",
"events_url": "https://api.github.com/users/Vivek-Kawathalkar/events{/privacy}",
"received_events_url": "https://api.github.com/users/Vivek-Kawathalkar/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5541141061,
"node_id": "LA_kwDOIPDwls8AAAABSkcaRQ",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20embeddings",
"name": "area: embeddings",
"color": "C5DEF5",
"default": false,
"description": "Related to text embedding models module"
},
{
"id": 5680700839,
"node_id": "LA_kwDOIPDwls8AAAABUpidpw",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:bug",
"name": "auto:bug",
"color": "E99695",
"default": false,
"description": "Related to a bug, vulnerability, unexpected error with an existing feature"
}
] | open | false | null | [] | null | 1 | 2023-12-19T17:30:51 | 2023-12-19T17:43:26 | null | NONE | null | ### System Info
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\flask\app.py", line 2190, in wsgi_app
response = self.full_dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\flask\app.py", line 1486, in full_dispatch_request
rv = self.handle_user_exception(e)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\flask\app.py", line 1484, in full_dispatch_request
rv = self.dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\flask\app.py", line 1469, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\vivek\OneDrive\Desktop\SOPPOC\flask_app.py", line 43, in chat
return RCXStreakanswer(input)
^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\vivek\OneDrive\Desktop\SOPPOC\RCX_Streak.py", line 53, in RCXStreakanswer
openAIEmbedd = FAISS.from_documents(texts, embeddings)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\schema\vectorstore.py", line 510, in from_documents
return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\vectorstores\faiss.py", line 911, in from_texts
embeddings = embedding.embed_documents(texts)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\embeddings\openai.py", line 549, in embed_documents
return self._get_len_safe_embeddings(texts, engine=engine)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\embeddings\openai.py", line 392, in _get_len_safe_embeddings
encoding = tiktoken.encoding_for_model(model_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\tiktoken\model.py", line 97, in encoding_for_model
return get_encoding(encoding_name_for_model(model_name))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\tiktoken\registry.py", line 73, in get_encoding
enc = Encoding(**constructor())
^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\tiktoken_ext\openai_public.py", line 64, in cl100k_base
mergeable_ranks = load_tiktoken_bpe(
^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\tiktoken\load.py", line 124, in load_tiktoken_bpe
return {
^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\tiktoken\load.py", line 126, in <dictcomp>
for token, rank in (line.split() for line in contents.splitlines() if line)
^^^^^^^^^^^
ValueError: not enough values to unpack (expected 2, got 1)
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
loader = Docx2txtLoader(doc_path)
documents.extend(loader.load())
content = documents

text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=100,
    chunk_overlap=20,
    separators=["\n\n", "\n", "."]
)
texts = text_splitter.split_documents(content)
print(texts)
print()

embeddings = OpenAIEmbeddings()
openAIEmbedd = FAISS.from_documents(texts, embeddings)
print(openAIEmbedd)

prompt_template = """Given the following context and a question, generate an answer.
Based on user input extract only data for the given question from context. \
CONTEXT: {context}
QUESTION: {question}"""

PROMPT = PromptTemplate(
    template=prompt_template, input_variables=["context", "question"]
)

llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
retriever_openai = openAIEmbedd.as_retriever(search_kwargs={"k": 2})
print(retriever_openai)

chain = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=retriever_openai,
    return_source_documents=True,
    chain_type_kwargs={"prompt": PROMPT},
)

ans = chain(user_message)
output = ans['result']
return output
```
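As an added note (not part of the original report): the traceback ends inside tiktoken's `load_tiktoken_bpe`, so a quick, hedged sanity check is to see whether tiktoken can load the encoding on its own, independent of LangChain. If this minimal snippet fails the same way, the problem is likely with tiktoken or its cached BPE file rather than with the embedding code above.

```python
# Hedged diagnostic sketch: verify tiktoken itself can load the cl100k_base
# encoding (the one the traceback is trying to build). If this also raises
# "not enough values to unpack", the downloaded/cached BPE file may be corrupted.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
print(len(enc.encode("hello world")))
```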
### Expected behavior
The chain should return an answer instead of raising this ValueError. | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14918/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14918/timeline | null | null | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/14917 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14917/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14917/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14917/events | https://github.com/langchain-ai/langchain/pull/14917 | 2,049,079,529 | PR_kwDOIPDwls5iZH2G | 14,917 | doc for MistralAI partner package | {
"login": "thehunmonkgroup",
"id": 43772,
"node_id": "MDQ6VXNlcjQzNzcy",
"avatar_url": "https://avatars.githubusercontent.com/u/43772?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thehunmonkgroup",
"html_url": "https://github.com/thehunmonkgroup",
"followers_url": "https://api.github.com/users/thehunmonkgroup/followers",
"following_url": "https://api.github.com/users/thehunmonkgroup/following{/other_user}",
"gists_url": "https://api.github.com/users/thehunmonkgroup/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thehunmonkgroup/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thehunmonkgroup/subscriptions",
"organizations_url": "https://api.github.com/users/thehunmonkgroup/orgs",
"repos_url": "https://api.github.com/users/thehunmonkgroup/repos",
"events_url": "https://api.github.com/users/thehunmonkgroup/events{/privacy}",
"received_events_url": "https://api.github.com/users/thehunmonkgroup/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5454193895,
"node_id": "LA_kwDOIPDwls8AAAABRRhk5w",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/lgtm",
"name": "lgtm",
"color": "0E8A16",
"default": false,
"description": ""
},
{
"id": 5541144676,
"node_id": "LA_kwDOIPDwls8AAAABSkcoZA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20doc%20loader",
"name": "area: doc loader",
"color": "D4C5F9",
"default": false,
"description": "Related to document loader module (not documentation)"
},
{
"id": 5680700918,
"node_id": "LA_kwDOIPDwls8AAAABUpid9g",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:documentation",
"name": "auto:documentation",
"color": "C5DEF5",
"default": false,
"description": "Changes to documentation and examples, like .md, .rst, .ipynb files. Changes to the docs/ folder"
},
{
"id": 6232714119,
"node_id": "LA_kwDOIPDwls8AAAABc3-rhw",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:M",
"name": "size:M",
"color": "C5DEF5",
"default": false,
"description": "This PR changes 30-99 lines, ignoring generated files."
}
] | closed | false | null | [] | null | 2 | 2023-12-19T17:03:10 | 2023-12-20T05:22:52 | 2023-12-20T05:22:43 | CONTRIBUTOR | null | - **Description:** Add README doc for MistralAI partner package.
- **Tag maintainer:** @baskaryan
| {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14917/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14917/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14917",
"html_url": "https://github.com/langchain-ai/langchain/pull/14917",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14917.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14917.patch",
"merged_at": "2023-12-20T05:22:43"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14916 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14916/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14916/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14916/events | https://github.com/langchain-ai/langchain/pull/14916 | 2,049,048,407 | PR_kwDOIPDwls5iZA0e | 14,916 | infra: run CI on all PRs | {
"login": "efriis",
"id": 9557659,
"node_id": "MDQ6VXNlcjk1NTc2NTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/9557659?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/efriis",
"html_url": "https://github.com/efriis",
"followers_url": "https://api.github.com/users/efriis/followers",
"following_url": "https://api.github.com/users/efriis/following{/other_user}",
"gists_url": "https://api.github.com/users/efriis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/efriis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/efriis/subscriptions",
"organizations_url": "https://api.github.com/users/efriis/orgs",
"repos_url": "https://api.github.com/users/efriis/repos",
"events_url": "https://api.github.com/users/efriis/events{/privacy}",
"received_events_url": "https://api.github.com/users/efriis/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5680700873,
"node_id": "LA_kwDOIPDwls8AAAABUpidyQ",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:improvement",
"name": "auto:improvement",
"color": "FBCA04",
"default": false,
"description": "Medium size change to existing code to handle new use-cases"
},
{
"id": 6232714104,
"node_id": "LA_kwDOIPDwls8AAAABc3-reA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:XS",
"name": "size:XS",
"color": "C2E0C6",
"default": false,
"description": "This PR changes 0-9 lines, ignoring generated files."
}
] | closed | false | null | [] | null | 4 | 2023-12-19T16:48:01 | 2023-12-20T21:48:34 | 2023-12-20T21:48:34 | COLLABORATOR | null | Somehow it looks like experimental linting didn't run properly on https://github.com/langchain-ai/langchain/pull/14842 before merging. Looks like it's because [check_diffs.yml](https://github.com/langchain-ai/langchain/blob/master/.github/workflows/check_diffs.yml#L8-L12) has some paths defined, and those paths are maybe only matched against the most recent commit, not the whole PR | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14916/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14916/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14916",
"html_url": "https://github.com/langchain-ai/langchain/pull/14916",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14916.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14916.patch",
"merged_at": null
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14915 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14915/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14915/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14915/events | https://github.com/langchain-ai/langchain/pull/14915 | 2,048,961,023 | PR_kwDOIPDwls5iYtZ6 | 14,915 | Update arxiv.py with Entry ID as a return value | {
"login": "ArchanGhosh",
"id": 14181922,
"node_id": "MDQ6VXNlcjE0MTgxOTIy",
"avatar_url": "https://avatars.githubusercontent.com/u/14181922?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArchanGhosh",
"html_url": "https://github.com/ArchanGhosh",
"followers_url": "https://api.github.com/users/ArchanGhosh/followers",
"following_url": "https://api.github.com/users/ArchanGhosh/following{/other_user}",
"gists_url": "https://api.github.com/users/ArchanGhosh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArchanGhosh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArchanGhosh/subscriptions",
"organizations_url": "https://api.github.com/users/ArchanGhosh/orgs",
"repos_url": "https://api.github.com/users/ArchanGhosh/repos",
"events_url": "https://api.github.com/users/ArchanGhosh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArchanGhosh/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5454193895,
"node_id": "LA_kwDOIPDwls8AAAABRRhk5w",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/lgtm",
"name": "lgtm",
"color": "0E8A16",
"default": false,
"description": ""
},
{
"id": 5541144676,
"node_id": "LA_kwDOIPDwls8AAAABSkcoZA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20doc%20loader",
"name": "area: doc loader",
"color": "D4C5F9",
"default": false,
"description": "Related to document loader module (not documentation)"
},
{
"id": 5680700873,
"node_id": "LA_kwDOIPDwls8AAAABUpidyQ",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:improvement",
"name": "auto:improvement",
"color": "FBCA04",
"default": false,
"description": "Medium size change to existing code to handle new use-cases"
},
{
"id": 6232714104,
"node_id": "LA_kwDOIPDwls8AAAABc3-reA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:XS",
"name": "size:XS",
"color": "C2E0C6",
"default": false,
"description": "This PR changes 0-9 lines, ignoring generated files."
}
] | closed | false | null | [] | null | 1 | 2023-12-19T16:02:41 | 2023-12-20T05:30:25 | 2023-12-20T05:30:25 | CONTRIBUTOR | null | Added Entry ID as a return value inside get_summaries_as_docs
- **Description:** Added the Entry ID as a return value, so it's easier to track the IDs of the papers that are being returned.
With the additional return of the entry ID in functions like ArxivRetriever, it will be easier to reference the ID of the paper itself.
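A brief, hedged usage sketch; the exact metadata key name ("Entry ID" below) is an assumption based on this PR's description:

```python
# Hypothetical usage: read the entry ID from the returned document metadata.
# The "Entry ID" key name is assumed, not confirmed by this PR's diff.
from langchain.retrievers import ArxivRetriever

retriever = ArxivRetriever(load_max_docs=2)
docs = retriever.get_relevant_documents("attention is all you need")
for doc in docs:
    print(doc.metadata.get("Entry ID"), "-", doc.metadata.get("Title"))
```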
| {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14915/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14915/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14915",
"html_url": "https://github.com/langchain-ai/langchain/pull/14915",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14915.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14915.patch",
"merged_at": "2023-12-20T05:30:25"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14914 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14914/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14914/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14914/events | https://github.com/langchain-ai/langchain/pull/14914 | 2,048,950,457 | PR_kwDOIPDwls5iYrFO | 14,914 | Fixed duplicate input id issue in clarifai vectorstore | {
"login": "mogith-pn",
"id": 143642606,
"node_id": "U_kgDOCI_P7g",
"avatar_url": "https://avatars.githubusercontent.com/u/143642606?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mogith-pn",
"html_url": "https://github.com/mogith-pn",
"followers_url": "https://api.github.com/users/mogith-pn/followers",
"following_url": "https://api.github.com/users/mogith-pn/following{/other_user}",
"gists_url": "https://api.github.com/users/mogith-pn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mogith-pn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mogith-pn/subscriptions",
"organizations_url": "https://api.github.com/users/mogith-pn/orgs",
"repos_url": "https://api.github.com/users/mogith-pn/repos",
"events_url": "https://api.github.com/users/mogith-pn/events{/privacy}",
"received_events_url": "https://api.github.com/users/mogith-pn/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5454193895,
"node_id": "LA_kwDOIPDwls8AAAABRRhk5w",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/lgtm",
"name": "lgtm",
"color": "0E8A16",
"default": false,
"description": ""
},
{
"id": 5541432778,
"node_id": "LA_kwDOIPDwls8AAAABSkuNyg",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20vector%20store",
"name": "area: vector store",
"color": "D4C5F9",
"default": false,
"description": "Related to vector store module"
},
{
"id": 5680700839,
"node_id": "LA_kwDOIPDwls8AAAABUpidpw",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:bug",
"name": "auto:bug",
"color": "E99695",
"default": false,
"description": "Related to a bug, vulnerability, unexpected error with an existing feature"
},
{
"id": 6232714108,
"node_id": "LA_kwDOIPDwls8AAAABc3-rfA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:S",
"name": "size:S",
"color": "BFDADC",
"default": false,
"description": "This PR changes 10-29 lines, ignoring generated files."
}
] | closed | false | null | [] | null | 2 | 2023-12-19T15:57:10 | 2023-12-20T07:21:37 | 2023-12-20T07:21:37 | CONTRIBUTOR | null |
- **Description:**
This PR fixes the duplicate input ID issue in the Clarifai vectorstore class that occurred when ingesting more documents into the vectorstore than the batch size.
| {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14914/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14914/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14914",
"html_url": "https://github.com/langchain-ai/langchain/pull/14914",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14914.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14914.patch",
"merged_at": "2023-12-20T07:21:37"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14913 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14913/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14913/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14913/events | https://github.com/langchain-ai/langchain/pull/14913 | 2,048,918,003 | PR_kwDOIPDwls5iYj0E | 14,913 | rename ChatGPTRouter to GPTRouter | {
"login": "sirjan-ws-ext",
"id": 151817113,
"node_id": "U_kgDOCQyLmQ",
"avatar_url": "https://avatars.githubusercontent.com/u/151817113?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sirjan-ws-ext",
"html_url": "https://github.com/sirjan-ws-ext",
"followers_url": "https://api.github.com/users/sirjan-ws-ext/followers",
"following_url": "https://api.github.com/users/sirjan-ws-ext/following{/other_user}",
"gists_url": "https://api.github.com/users/sirjan-ws-ext/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sirjan-ws-ext/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sirjan-ws-ext/subscriptions",
"organizations_url": "https://api.github.com/users/sirjan-ws-ext/orgs",
"repos_url": "https://api.github.com/users/sirjan-ws-ext/repos",
"events_url": "https://api.github.com/users/sirjan-ws-ext/events{/privacy}",
"received_events_url": "https://api.github.com/users/sirjan-ws-ext/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5454193895,
"node_id": "LA_kwDOIPDwls8AAAABRRhk5w",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/lgtm",
"name": "lgtm",
"color": "0E8A16",
"default": false,
"description": ""
},
{
"id": 5680700892,
"node_id": "LA_kwDOIPDwls8AAAABUpid3A",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:refactor",
"name": "auto:refactor",
"color": "D4C5F9",
"default": false,
"description": "A large refactor of a feature(s) or restructuring of many files"
},
{
"id": 5820539098,
"node_id": "LA_kwDOIPDwls8AAAABWu5g2g",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20models",
"name": "area: models",
"color": "bfdadc",
"default": false,
"description": "Related to LLMs or chat model modules"
},
{
"id": 6232714119,
"node_id": "LA_kwDOIPDwls8AAAABc3-rhw",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:M",
"name": "size:M",
"color": "C5DEF5",
"default": false,
"description": "This PR changes 30-99 lines, ignoring generated files."
}
] | closed | false | null | [] | null | 1 | 2023-12-19T15:41:47 | 2023-12-19T15:53:14 | 2023-12-19T15:48:53 | CONTRIBUTOR | null | **Description:** Rename integration to GPTRouter
**Tag maintainer:** @Gupta-Anubhav12 @samanyougarg @sirjan-ws-ext
**Twitter handle:** [@SamanyouGarg](https://twitter.com/SamanyouGarg)
| {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14913/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14913/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14913",
"html_url": "https://github.com/langchain-ai/langchain/pull/14913",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14913.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14913.patch",
"merged_at": "2023-12-19T15:48:53"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14912 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14912/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14912/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14912/events | https://github.com/langchain-ai/langchain/issues/14912 | 2,048,902,467 | I_kwDOIPDwls56H8VD | 14,912 | Issue when working with Azure and OpenAI Callback | {
"login": "Yanni8",
"id": 99135388,
"node_id": "U_kgDOBeivnA",
"avatar_url": "https://avatars.githubusercontent.com/u/99135388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Yanni8",
"html_url": "https://github.com/Yanni8",
"followers_url": "https://api.github.com/users/Yanni8/followers",
"following_url": "https://api.github.com/users/Yanni8/following{/other_user}",
"gists_url": "https://api.github.com/users/Yanni8/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Yanni8/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Yanni8/subscriptions",
"organizations_url": "https://api.github.com/users/Yanni8/orgs",
"repos_url": "https://api.github.com/users/Yanni8/repos",
"events_url": "https://api.github.com/users/Yanni8/events{/privacy}",
"received_events_url": "https://api.github.com/users/Yanni8/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5680700839,
"node_id": "LA_kwDOIPDwls8AAAABUpidpw",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:bug",
"name": "auto:bug",
"color": "E99695",
"default": false,
"description": "Related to a bug, vulnerability, unexpected error with an existing feature"
},
{
"id": 5820539098,
"node_id": "LA_kwDOIPDwls8AAAABWu5g2g",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20models",
"name": "area: models",
"color": "bfdadc",
"default": false,
"description": "Related to LLMs or chat model modules"
}
] | open | false | null | [] | null | 2 | 2023-12-19T15:33:15 | 2024-01-15T10:06:24 | null | CONTRIBUTOR | null | ### System Info
Langchain: v0.0.350
OS: Linux
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
The problem occurs when you use Azure with a GPT-4 model, because the Azure API always responds with `gpt-4` as the model name, regardless of which variant is deployed. You can also see this in the official Microsoft documentation: https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/gpt-with-vision#output. The callback therefore calculates the wrong price → if you use GPT-4 Turbo, the reported cost is 3x what it actually should be.
Code to Reproduce:
```python
from langchain.callbacks import get_openai_callback
from langchain.chat_models import AzureChatOpenAI

llm = AzureChatOpenAI(
    deployment_name="GPT4-TURBO"
)
with get_openai_callback() as cb:
    # Run LLM (any call works to reproduce the issue)
    llm.invoke("Hello")
    print((cb.total_tokens / 1000) * 0.01, "is instead", cb.total_cost)
```
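A possible stopgap — a minimal sketch, assuming GPT-4 Turbo prices of $0.01/1K prompt and $0.03/1K completion tokens (check your actual Azure pricing) — is to compute the cost from the token counts yourself instead of relying on `cb.total_cost`:
```python
# Sketch of a manual cost calculation; the per-1K prices below are assumptions.
GPT4_TURBO_PROMPT_PRICE_PER_1K = 0.01
GPT4_TURBO_COMPLETION_PRICE_PER_1K = 0.03

with get_openai_callback() as cb:
    llm.invoke("Hello")
    manual_cost = (
        cb.prompt_tokens / 1000 * GPT4_TURBO_PROMPT_PRICE_PER_1K
        + cb.completion_tokens / 1000 * GPT4_TURBO_COMPLETION_PRICE_PER_1K
    )
    print("manual:", manual_cost, "callback:", cb.total_cost)
```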
### Expected behavior
It should return the correct price. | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14912/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14912/timeline | null | null | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/14911 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14911/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14911/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14911/events | https://github.com/langchain-ai/langchain/issues/14911 | 2,048,861,237 | I_kwDOIPDwls56HyQ1 | 14,911 | How to embed the table data? | {
"login": "GoldenDragon0710",
"id": 122573109,
"node_id": "U_kgDOB05RNQ",
"avatar_url": "https://avatars.githubusercontent.com/u/122573109?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GoldenDragon0710",
"html_url": "https://github.com/GoldenDragon0710",
"followers_url": "https://api.github.com/users/GoldenDragon0710/followers",
"following_url": "https://api.github.com/users/GoldenDragon0710/following{/other_user}",
"gists_url": "https://api.github.com/users/GoldenDragon0710/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GoldenDragon0710/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GoldenDragon0710/subscriptions",
"organizations_url": "https://api.github.com/users/GoldenDragon0710/orgs",
"repos_url": "https://api.github.com/users/GoldenDragon0710/repos",
"events_url": "https://api.github.com/users/GoldenDragon0710/events{/privacy}",
"received_events_url": "https://api.github.com/users/GoldenDragon0710/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5541141061,
"node_id": "LA_kwDOIPDwls8AAAABSkcaRQ",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20embeddings",
"name": "area: embeddings",
"color": "C5DEF5",
"default": false,
"description": "Related to text embedding models module"
},
{
"id": 5680700848,
"node_id": "LA_kwDOIPDwls8AAAABUpidsA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:question",
"name": "auto:question",
"color": "BFD4F2",
"default": false,
"description": "A specific question about the codebase, product, project, or how to use a feature"
}
] | open | false | null | [] | null | 2 | 2023-12-19T15:11:34 | 2023-12-19T16:22:51 | null | NONE | null | ### Issue you'd like to raise.
I have a document, which contains general text and tables.
I embedded this document using LangChain to build a bot with Node.js.
The bot answers correctly for general text in the document, but gives incorrect answers for table data.
How can I improve the bot's handling of the table data?
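One approach worth trying — a sketch, assuming the tables are pipe-delimited Markdown (the same idea carries over to the Node.js SDK) — is to turn each table row into its own small text document before embedding, so row-level facts become directly retrievable:
```python
# Sketch: flatten a Markdown table into one embeddable string per data row.
def table_rows_to_texts(table_text: str) -> list[str]:
    lines = [line.strip() for line in table_text.splitlines() if line.strip()]
    header = [cell.strip() for cell in lines[0].strip("|").split("|")]
    texts = []
    for row in lines[2:]:  # skip the |---|---| separator row
        cells = [cell.strip() for cell in row.strip("|").split("|")]
        texts.append("; ".join(f"{h}: {c}" for h, c in zip(header, cells)))
    return texts
```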
### Suggestion:
_No response_ | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14911/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14911/timeline | null | null | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/14910 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14910/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14910/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14910/events | https://github.com/langchain-ai/langchain/pull/14910 | 2,048,850,616 | PR_kwDOIPDwls5iYVAe | 14,910 | Fixing linter issue on Python 3.8 related to the prompt injection scanner check | {
"login": "asofter",
"id": 1751809,
"node_id": "MDQ6VXNlcjE3NTE4MDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1751809?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/asofter",
"html_url": "https://github.com/asofter",
"followers_url": "https://api.github.com/users/asofter/followers",
"following_url": "https://api.github.com/users/asofter/following{/other_user}",
"gists_url": "https://api.github.com/users/asofter/gists{/gist_id}",
"starred_url": "https://api.github.com/users/asofter/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/asofter/subscriptions",
"organizations_url": "https://api.github.com/users/asofter/orgs",
"repos_url": "https://api.github.com/users/asofter/repos",
"events_url": "https://api.github.com/users/asofter/events{/privacy}",
"received_events_url": "https://api.github.com/users/asofter/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-12-19T15:06:22 | 2023-12-19T16:03:53 | 2023-12-19T15:09:55 | CONTRIBUTOR | null | - **Description:** Fixing [linter problem on the Python 3.8](https://github.com/langchain-ai/langchain/pull/14842#issuecomment-1862733589) which became visible to new PRs after merging my previous PR https://github.com/langchain-ai/langchain/pull/14842.
- **Issue:** https://github.com/langchain-ai/langchain/pull/14842#issuecomment-1862733589
- **Dependencies:** N/A
- **Tag maintainer:** N/A
- **Twitter handle:** @laiyer_ai | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14910/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14910/timeline | null | null | true | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14910",
"html_url": "https://github.com/langchain-ai/langchain/pull/14910",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14910.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14910.patch",
"merged_at": null
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14909 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14909/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14909/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14909/events | https://github.com/langchain-ai/langchain/issues/14909 | 2,048,823,526 | I_kwDOIPDwls56HpDm | 14,909 | `urllib3.connectionpool` warnings after upgrading to LangChain 0.0.351 | {
"login": "SteChronis",
"id": 21215126,
"node_id": "MDQ6VXNlcjIxMjE1MTI2",
"avatar_url": "https://avatars.githubusercontent.com/u/21215126?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SteChronis",
"html_url": "https://github.com/SteChronis",
"followers_url": "https://api.github.com/users/SteChronis/followers",
"following_url": "https://api.github.com/users/SteChronis/following{/other_user}",
"gists_url": "https://api.github.com/users/SteChronis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SteChronis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SteChronis/subscriptions",
"organizations_url": "https://api.github.com/users/SteChronis/orgs",
"repos_url": "https://api.github.com/users/SteChronis/repos",
"events_url": "https://api.github.com/users/SteChronis/events{/privacy}",
"received_events_url": "https://api.github.com/users/SteChronis/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5680700839,
"node_id": "LA_kwDOIPDwls8AAAABUpidpw",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:bug",
"name": "auto:bug",
"color": "E99695",
"default": false,
"description": "Related to a bug, vulnerability, unexpected error with an existing feature"
},
{
"id": 5820539098,
"node_id": "LA_kwDOIPDwls8AAAABWu5g2g",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20models",
"name": "area: models",
"color": "bfdadc",
"default": false,
"description": "Related to LLMs or chat model modules"
}
] | open | false | null | [] | null | 1 | 2023-12-19T14:53:36 | 2023-12-19T15:05:51 | null | NONE | null | ### System Info
python: 3.11.4
langchain: 0.0.351
requests: 2.31.0
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
We have enabled LangSmith tracing, and after upgrading LangChain from `0.0.266` to `0.0.351` we started getting the following warnings:
```
Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:2423)'))': /runs
```
We also get the same warnings when we try to update the feedback from the LangSmith client.
```
Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'RemoteDisconnected('Remote end closed connection without response')': /sessions?limit=1&name=mirror
```
Unfortunately, there is no additional stack trace.
This behavior is not consistent but it occurs randomly.
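As a stopgap while the root cause is investigated — a sketch, assuming the messages come from the standard `urllib3.connectionpool` logger at WARNING level — the retry noise can be hidden like this:
```python
import logging

# Silence the "Retrying (...)" warnings emitted while LangSmith requests are retried.
logging.getLogger("urllib3.connectionpool").setLevel(logging.ERROR)
```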
### Expected behavior
The expected behavior is for all runs to be propagated to LangSmith without this kind of warning. | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14909/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14909/timeline | null | null | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/14907 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14907/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14907/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14907/events | https://github.com/langchain-ai/langchain/pull/14907 | 2,048,796,417 | PR_kwDOIPDwls5iYJB9 | 14,907 | Add retries logic to Yandex GPT API Calls | {
"login": "tyumentsev4",
"id": 56769451,
"node_id": "MDQ6VXNlcjU2NzY5NDUx",
"avatar_url": "https://avatars.githubusercontent.com/u/56769451?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tyumentsev4",
"html_url": "https://github.com/tyumentsev4",
"followers_url": "https://api.github.com/users/tyumentsev4/followers",
"following_url": "https://api.github.com/users/tyumentsev4/following{/other_user}",
"gists_url": "https://api.github.com/users/tyumentsev4/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tyumentsev4/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tyumentsev4/subscriptions",
"organizations_url": "https://api.github.com/users/tyumentsev4/orgs",
"repos_url": "https://api.github.com/users/tyumentsev4/repos",
"events_url": "https://api.github.com/users/tyumentsev4/events{/privacy}",
"received_events_url": "https://api.github.com/users/tyumentsev4/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5454193895,
"node_id": "LA_kwDOIPDwls8AAAABRRhk5w",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/lgtm",
"name": "lgtm",
"color": "0E8A16",
"default": false,
"description": ""
},
{
"id": 5680700873,
"node_id": "LA_kwDOIPDwls8AAAABUpidyQ",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:improvement",
"name": "auto:improvement",
"color": "FBCA04",
"default": false,
"description": "Medium size change to existing code to handle new use-cases"
},
{
"id": 5820539098,
"node_id": "LA_kwDOIPDwls8AAAABWu5g2g",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20models",
"name": "area: models",
"color": "bfdadc",
"default": false,
"description": "Related to LLMs or chat model modules"
},
{
"id": 6232714126,
"node_id": "LA_kwDOIPDwls8AAAABc3-rjg",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:L",
"name": "size:L",
"color": "BFD4F2",
"default": false,
"description": "This PR changes 100-499 lines, ignoring generated files."
}
] | closed | false | null | [] | null | 1 | 2023-12-19T14:41:07 | 2023-12-19T15:51:42 | 2023-12-19T15:51:42 | CONTRIBUTOR | null | <!-- Thank you for contributing to LangChain!
Replace this entire comment with:
- **Description:** a description of the change,
- **Issue:** the issue # it fixes (if applicable),
- **Dependencies:** any dependencies required for this change,
- **Tag maintainer:** for a quicker response, tag the relevant maintainer (see below),
- **Twitter handle:** we announce bigger features on Twitter. If your PR gets announced, and you'd like a mention, we'll gladly shout you out!
Please make sure your PR is passing linting and testing before submitting. Run `make format`, `make lint` and `make test` to check this locally.
See contribution guidelines for more information on how to write/run tests, lint, etc:
https://python.langchain.com/docs/contributing/
If you're adding a new integration, please include:
1. a test for the integration, preferably unit tests that do not rely on network access,
2. an example notebook showing its use. It lives in `docs/extras` directory.
If no one reviews your PR within a few days, please @-mention one of @baskaryan, @eyurtsev, @hwchase17.
-->
**Description:** Added retry logic for YandexGPT API calls in case of an error. | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14907/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14907/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14907",
"html_url": "https://github.com/langchain-ai/langchain/pull/14907",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14907.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14907.patch",
"merged_at": "2023-12-19T15:51:42"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14906 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14906/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14906/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14906/events | https://github.com/langchain-ai/langchain/pull/14906 | 2,048,783,089 | PR_kwDOIPDwls5iYGHr | 14,906 | langchain[patch]: export sagemaker LLMContentHandler | {
"login": "baskaryan",
"id": 22008038,
"node_id": "MDQ6VXNlcjIyMDA4MDM4",
"avatar_url": "https://avatars.githubusercontent.com/u/22008038?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/baskaryan",
"html_url": "https://github.com/baskaryan",
"followers_url": "https://api.github.com/users/baskaryan/followers",
"following_url": "https://api.github.com/users/baskaryan/following{/other_user}",
"gists_url": "https://api.github.com/users/baskaryan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/baskaryan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/baskaryan/subscriptions",
"organizations_url": "https://api.github.com/users/baskaryan/orgs",
"repos_url": "https://api.github.com/users/baskaryan/repos",
"events_url": "https://api.github.com/users/baskaryan/events{/privacy}",
"received_events_url": "https://api.github.com/users/baskaryan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5680700873,
"node_id": "LA_kwDOIPDwls8AAAABUpidyQ",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:improvement",
"name": "auto:improvement",
"color": "FBCA04",
"default": false,
"description": "Medium size change to existing code to handle new use-cases"
},
{
"id": 5820539098,
"node_id": "LA_kwDOIPDwls8AAAABWu5g2g",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20models",
"name": "area: models",
"color": "bfdadc",
"default": false,
"description": "Related to LLMs or chat model modules"
},
{
"id": 5959659008,
"node_id": "LA_kwDOIPDwls8AAAABYzkuAA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/integration:%20aws",
"name": "integration: aws",
"color": "C5DEF5",
"default": false,
"description": "Related to Amazon Web Services (AWS) integrations"
},
{
"id": 6232714104,
"node_id": "LA_kwDOIPDwls8AAAABc3-reA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:XS",
"name": "size:XS",
"color": "C2E0C6",
"default": false,
"description": "This PR changes 0-9 lines, ignoring generated files."
}
] | closed | false | null | [] | null | 1 | 2023-12-19T14:34:08 | 2023-12-19T15:00:33 | 2023-12-19T15:00:32 | COLLABORATOR | null | Resolves #14904 | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14906/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14906/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14906",
"html_url": "https://github.com/langchain-ai/langchain/pull/14906",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14906.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14906.patch",
"merged_at": "2023-12-19T15:00:32"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14905 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14905/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14905/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14905/events | https://github.com/langchain-ai/langchain/issues/14905 | 2,048,761,131 | I_kwDOIPDwls56HZ0r | 14,905 | Link to agent_with_wandb_tracing.html notebook is broken | {
"login": "bkowshik",
"id": 2899501,
"node_id": "MDQ6VXNlcjI4OTk1MDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/2899501?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bkowshik",
"html_url": "https://github.com/bkowshik",
"followers_url": "https://api.github.com/users/bkowshik/followers",
"following_url": "https://api.github.com/users/bkowshik/following{/other_user}",
"gists_url": "https://api.github.com/users/bkowshik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bkowshik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bkowshik/subscriptions",
"organizations_url": "https://api.github.com/users/bkowshik/orgs",
"repos_url": "https://api.github.com/users/bkowshik/repos",
"events_url": "https://api.github.com/users/bkowshik/events{/privacy}",
"received_events_url": "https://api.github.com/users/bkowshik/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5541144676,
"node_id": "LA_kwDOIPDwls8AAAABSkcoZA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20doc%20loader",
"name": "area: doc loader",
"color": "D4C5F9",
"default": false,
"description": "Related to document loader module (not documentation)"
},
{
"id": 5680700918,
"node_id": "LA_kwDOIPDwls8AAAABUpid9g",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:documentation",
"name": "auto:documentation",
"color": "C5DEF5",
"default": false,
"description": "Changes to documentation and examples, like .md, .rst, .ipynb files. Changes to the docs/ folder"
}
] | open | false | null | [] | null | 1 | 2023-12-19T14:22:07 | 2023-12-19T14:31:03 | null | NONE | null | ### Issue with current documentation:
Ref: https://python.langchain.com/docs/integrations/providers/wandb_tracking
> Note: the WandbCallbackHandler is being deprecated in favour of the WandbTracer . In future please use the WandbTracer as it is more flexible and allows for more granular logging. To know more about the WandbTracer refer to the [agent_with_wandb_tracing.html](https://python.langchain.com/en/latest/integrations/agent_with_wandb_tracing.html) notebook or use the following [colab notebook](http://wandb.me/prompts-quickstart). To know more about Weights & Biases Prompts refer to the following [prompts documentation](https://docs.wandb.ai/guides/prompts).
The link to `agent_with_wandb_tracing.html` results in an HTTP 404.
### Idea or request for content:
_No response_ | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14905/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14905/timeline | null | null | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/14904 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14904/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14904/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14904/events | https://github.com/langchain-ai/langchain/issues/14904 | 2,048,707,893 | I_kwDOIPDwls56HM01 | 14,904 | Issue: cannot import name 'LLMContentHandler' from 'langchain.llms.sagemaker_endpoint | {
"login": "grauvictor",
"id": 149517415,
"node_id": "U_kgDOCOl0Zw",
"avatar_url": "https://avatars.githubusercontent.com/u/149517415?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/grauvictor",
"html_url": "https://github.com/grauvictor",
"followers_url": "https://api.github.com/users/grauvictor/followers",
"following_url": "https://api.github.com/users/grauvictor/following{/other_user}",
"gists_url": "https://api.github.com/users/grauvictor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/grauvictor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/grauvictor/subscriptions",
"organizations_url": "https://api.github.com/users/grauvictor/orgs",
"repos_url": "https://api.github.com/users/grauvictor/repos",
"events_url": "https://api.github.com/users/grauvictor/events{/privacy}",
"received_events_url": "https://api.github.com/users/grauvictor/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5680700839,
"node_id": "LA_kwDOIPDwls8AAAABUpidpw",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:bug",
"name": "auto:bug",
"color": "E99695",
"default": false,
"description": "Related to a bug, vulnerability, unexpected error with an existing feature"
},
{
"id": 5820539098,
"node_id": "LA_kwDOIPDwls8AAAABWu5g2g",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20models",
"name": "area: models",
"color": "bfdadc",
"default": false,
"description": "Related to LLMs or chat model modules"
},
{
"id": 5959659008,
"node_id": "LA_kwDOIPDwls8AAAABYzkuAA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/integration:%20aws",
"name": "integration: aws",
"color": "C5DEF5",
"default": false,
"description": "Related to Amazon Web Services (AWS) integrations"
}
] | closed | false | null | [] | null | 1 | 2023-12-19T13:53:31 | 2023-12-19T15:00:33 | 2023-12-19T15:00:33 | NONE | null | Cannot import LLMContentHandler
langchain: 0.0.351
python: 3.9
To reproduce:
``` python
from langchain.llms.sagemaker_endpoint import LLMContentHandler
```
The issue could be resolved by updating
https://github.com/langchain-ai/langchain/blob/583696732cbaa3d1cf3a3a9375539a7e8785850c/libs/langchain/langchain/llms/sagemaker_endpoint.py#L1C5-L7
as follows:
``` python
from langchain_community.llms.sagemaker_endpoint import (
LLMContentHandler,
SagemakerEndpoint,
)
__all__ = [
"SagemakerEndpoint",
"LLMContentHandler"
]
```
| {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14904/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14904/timeline | null | completed | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/14903 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14903/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14903/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14903/events | https://github.com/langchain-ai/langchain/pull/14903 | 2,048,702,226 | PR_kwDOIPDwls5iX0a9 | 14,903 | Docs reference for XataVectorStore constructor | {
"login": "kostasb",
"id": 15780449,
"node_id": "MDQ6VXNlcjE1NzgwNDQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/15780449?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kostasb",
"html_url": "https://github.com/kostasb",
"followers_url": "https://api.github.com/users/kostasb/followers",
"following_url": "https://api.github.com/users/kostasb/following{/other_user}",
"gists_url": "https://api.github.com/users/kostasb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kostasb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kostasb/subscriptions",
"organizations_url": "https://api.github.com/users/kostasb/orgs",
"repos_url": "https://api.github.com/users/kostasb/repos",
"events_url": "https://api.github.com/users/kostasb/events{/privacy}",
"received_events_url": "https://api.github.com/users/kostasb/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5454193895,
"node_id": "LA_kwDOIPDwls8AAAABRRhk5w",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/lgtm",
"name": "lgtm",
"color": "0E8A16",
"default": false,
"description": ""
},
{
"id": 5541432778,
"node_id": "LA_kwDOIPDwls8AAAABSkuNyg",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20vector%20store",
"name": "area: vector store",
"color": "D4C5F9",
"default": false,
"description": "Related to vector store module"
},
{
"id": 5680700918,
"node_id": "LA_kwDOIPDwls8AAAABUpid9g",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:documentation",
"name": "auto:documentation",
"color": "C5DEF5",
"default": false,
"description": "Changes to documentation and examples, like .md, .rst, .ipynb files. Changes to the docs/ folder"
},
{
"id": 6232714108,
"node_id": "LA_kwDOIPDwls8AAAABc3-rfA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:S",
"name": "size:S",
"color": "BFDADC",
"default": false,
"description": "This PR changes 10-29 lines, ignoring generated files."
}
] | closed | false | null | [] | null | 1 | 2023-12-19T13:50:26 | 2023-12-19T14:04:47 | 2023-12-19T14:04:47 | CONTRIBUTOR | null | Adds doc reference to the XataVectorStore constructor for use with existing Xata table contents.
@tsg @philkra | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14903/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14903/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14903",
"html_url": "https://github.com/langchain-ai/langchain/pull/14903",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14903.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14903.patch",
"merged_at": "2023-12-19T14:04:47"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14902 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14902/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14902/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14902/events | https://github.com/langchain-ai/langchain/issues/14902 | 2,048,640,189 | I_kwDOIPDwls56G8S9 | 14,902 | No input box show up when running the playground | {
"login": "yiouyou",
"id": 14249712,
"node_id": "MDQ6VXNlcjE0MjQ5NzEy",
"avatar_url": "https://avatars.githubusercontent.com/u/14249712?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yiouyou",
"html_url": "https://github.com/yiouyou",
"followers_url": "https://api.github.com/users/yiouyou/followers",
"following_url": "https://api.github.com/users/yiouyou/following{/other_user}",
"gists_url": "https://api.github.com/users/yiouyou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yiouyou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yiouyou/subscriptions",
"organizations_url": "https://api.github.com/users/yiouyou/orgs",
"repos_url": "https://api.github.com/users/yiouyou/repos",
"events_url": "https://api.github.com/users/yiouyou/events{/privacy}",
"received_events_url": "https://api.github.com/users/yiouyou/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5680700839,
"node_id": "LA_kwDOIPDwls8AAAABUpidpw",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:bug",
"name": "auto:bug",
"color": "E99695",
"default": false,
"description": "Related to a bug, vulnerability, unexpected error with an existing feature"
},
{
"id": 5820539098,
"node_id": "LA_kwDOIPDwls8AAAABWu5g2g",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20models",
"name": "area: models",
"color": "bfdadc",
"default": false,
"description": "Related to LLMs or chat model modules"
},
{
"id": 5924999838,
"node_id": "LA_kwDOIPDwls8AAAABYShSng",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/integration:%20chroma",
"name": "integration: chroma",
"color": "B78AF8",
"default": false,
"description": "Related to ChromaDB"
}
] | open | false | null | [] | null | 1 | 2023-12-19T13:16:06 | 2023-12-19T13:28:56 | null | NONE | null | ### System Info
In chain.py, the relevant code is as below:
```
def get_retriever(text):
    _query = text
    llm = ...
    chroma_docs = [...]
    _model_name, _embedding = get_embedding_HuggingFace()
    chroma_vdb = Chroma.from_documents(chroma_docs, _embedding)
    document_content_description = "..."
    metadata_field_info = [...]
    retriever = get_structured_retriever(llm, chroma_vdb, document_content_description, metadata_field_info, _query)
    return retriever

chain = (
    RunnableParallel({
        "context": itemgetter("question") | RunnableLambda(get_retriever),
        "question": RunnablePassthrough()
    })
    | prompt
    | llm
    | StrOutputParser()
)
```
When running the playground, there is no input box (shown below):
![image](https://github.com/langchain-ai/langchain/assets/14249712/d65f67df-acd4-4eee-96b0-54b149da6460)
But no error message appears in `langchain serve`.
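One thing worth checking — a sketch, under the assumption that the playground cannot infer the chain's input schema from the `RunnableParallel`/`itemgetter` composition — is to declare the input type explicitly when serving the chain (`app` and the route path below are placeholders):
```python
# Hypothetical adjustment: give the chain an explicit input schema so the
# LangServe playground knows which fields to render.
from langchain.pydantic_v1 import BaseModel
from langserve import add_routes

class ChainInput(BaseModel):
    question: str

add_routes(app, chain.with_types(input_type=ChainInput), path="/chat")
```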
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Shown in the code above.
### Expected behavior
The playground should display an input box. | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14902/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14902/timeline | null | null | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/14901 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14901/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14901/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14901/events | https://github.com/langchain-ai/langchain/issues/14901 | 2,048,603,807 | I_kwDOIPDwls56Gzaf | 14,901 | DOC: Wrong parameter name in doc string | {
"login": "dheerajiiitv",
"id": 24246192,
"node_id": "MDQ6VXNlcjI0MjQ2MTky",
"avatar_url": "https://avatars.githubusercontent.com/u/24246192?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dheerajiiitv",
"html_url": "https://github.com/dheerajiiitv",
"followers_url": "https://api.github.com/users/dheerajiiitv/followers",
"following_url": "https://api.github.com/users/dheerajiiitv/following{/other_user}",
"gists_url": "https://api.github.com/users/dheerajiiitv/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dheerajiiitv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dheerajiiitv/subscriptions",
"organizations_url": "https://api.github.com/users/dheerajiiitv/orgs",
"repos_url": "https://api.github.com/users/dheerajiiitv/repos",
"events_url": "https://api.github.com/users/dheerajiiitv/events{/privacy}",
"received_events_url": "https://api.github.com/users/dheerajiiitv/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5680700918,
"node_id": "LA_kwDOIPDwls8AAAABUpid9g",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:documentation",
"name": "auto:documentation",
"color": "C5DEF5",
"default": false,
"description": "Changes to documentation and examples, like .md, .rst, .ipynb files. Changes to the docs/ folder"
},
{
"id": 5820539098,
"node_id": "LA_kwDOIPDwls8AAAABWu5g2g",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20models",
"name": "area: models",
"color": "bfdadc",
"default": false,
"description": "Related to LLMs or chat model modules"
}
] | open | false | null | [] | null | 1 | 2023-12-19T12:54:07 | 2023-12-19T12:56:41 | null | CONTRIBUTOR | null | ### Issue with current documentation:
Current:
```
class AzureChatOpenAI(ChatOpenAI):
"""`Azure OpenAI` Chat Completion API.
To use this class you
must have a deployed model on Azure OpenAI. Use `deployment_name` in the
constructor to refer to the "Model deployment name" in the Azure portal.
In addition, you should have the ``openai`` python package installed, and the
following environment variables set or passed in constructor in lower case:
- ``AZURE_OPENAI_API_KEY``
- ``AZURE_OPENAI_API_ENDPOINT``
- ``AZURE_OPENAI_AD_TOKEN``
- ``OPENAI_API_VERSION``
- ``OPENAI_PROXY``
```
### Idea or request for content:
It should be
```
class AzureChatOpenAI(ChatOpenAI):
"""`Azure OpenAI` Chat Completion API.
To use this class you
must have a deployed model on Azure OpenAI. Use `deployment_name` in the
constructor to refer to the "Model deployment name" in the Azure portal.
In addition, you should have the ``openai`` python package installed, and the
following environment variables set or passed in constructor in lower case:
- ``AZURE_OPENAI_API_KEY``
- ``AZURE_OPENAI_ENDPOINT`` <---------- **Changed**
- ``AZURE_OPENAI_AD_TOKEN``
- ``OPENAI_API_VERSION``
- ``OPENAI_PROXY``
``` | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14901/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14901/timeline | null | null | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/14900 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14900/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14900/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14900/events | https://github.com/langchain-ai/langchain/pull/14900 | 2,048,571,202 | PR_kwDOIPDwls5iXXnL | 14,900 | Integrating GPTRouter | {
"login": "sirjan-ws-ext",
"id": 151817113,
"node_id": "U_kgDOCQyLmQ",
"avatar_url": "https://avatars.githubusercontent.com/u/151817113?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sirjan-ws-ext",
"html_url": "https://github.com/sirjan-ws-ext",
"followers_url": "https://api.github.com/users/sirjan-ws-ext/followers",
"following_url": "https://api.github.com/users/sirjan-ws-ext/following{/other_user}",
"gists_url": "https://api.github.com/users/sirjan-ws-ext/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sirjan-ws-ext/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sirjan-ws-ext/subscriptions",
"organizations_url": "https://api.github.com/users/sirjan-ws-ext/orgs",
"repos_url": "https://api.github.com/users/sirjan-ws-ext/repos",
"events_url": "https://api.github.com/users/sirjan-ws-ext/events{/privacy}",
"received_events_url": "https://api.github.com/users/sirjan-ws-ext/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5680700863,
"node_id": "LA_kwDOIPDwls8AAAABUpidvw",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:enhancement",
"name": "auto:enhancement",
"color": "C2E0C6",
"default": false,
"description": "A large net-new component, integration, or chain. Use sparingly. The largest features"
},
{
"id": 5820539098,
"node_id": "LA_kwDOIPDwls8AAAABWu5g2g",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20models",
"name": "area: models",
"color": "bfdadc",
"default": false,
"description": "Related to LLMs or chat model modules"
},
{
"id": 6232714130,
"node_id": "LA_kwDOIPDwls8AAAABc3-rkg",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/size:XL",
"name": "size:XL",
"color": "D4C5F9",
"default": false,
"description": "This PR changes 500-999 lines, ignoring generated files."
}
] | closed | false | null | [] | null | 2 | 2023-12-19T12:33:49 | 2023-12-19T15:08:37 | 2023-12-19T15:08:36 | CONTRIBUTOR | null | **Description:** Adding a langchain integration for [GPTRouter](https://gpt-router.writesonic.com/) 🚀 ,
**Tag maintainer:** @Gupta-Anubhav12 @samanyougarg @sirjan-ws-ext
**Twitter handle:** [@SamanyouGarg](https://twitter.com/SamanyouGarg)
Integration Tests Passing:
<img width="1137" alt="Screenshot 2023-12-19 at 5 45 31 PM" src="https://github.com/Writesonic/langchain/assets/151817113/4a59df9a-ee30-47aa-9df9-b8c4eeb9dc76">
| {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14900/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14900/timeline | null | null | false | {
"url": "https://api.github.com/repos/langchain-ai/langchain/pulls/14900",
"html_url": "https://github.com/langchain-ai/langchain/pull/14900",
"diff_url": "https://github.com/langchain-ai/langchain/pull/14900.diff",
"patch_url": "https://github.com/langchain-ai/langchain/pull/14900.patch",
"merged_at": "2023-12-19T15:08:36"
} |
https://api.github.com/repos/langchain-ai/langchain/issues/14899 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14899/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14899/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14899/events | https://github.com/langchain-ai/langchain/issues/14899 | 2,048,405,277 | I_kwDOIPDwls56GC8d | 14,899 | `convert_to_openai_function` drops `description` for each parameter | {
"login": "okapies",
"id": 657642,
"node_id": "MDQ6VXNlcjY1NzY0Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/657642?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/okapies",
"html_url": "https://github.com/okapies",
"followers_url": "https://api.github.com/users/okapies/followers",
"following_url": "https://api.github.com/users/okapies/following{/other_user}",
"gists_url": "https://api.github.com/users/okapies/gists{/gist_id}",
"starred_url": "https://api.github.com/users/okapies/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/okapies/subscriptions",
"organizations_url": "https://api.github.com/users/okapies/orgs",
"repos_url": "https://api.github.com/users/okapies/repos",
"events_url": "https://api.github.com/users/okapies/events{/privacy}",
"received_events_url": "https://api.github.com/users/okapies/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5680700839,
"node_id": "LA_kwDOIPDwls8AAAABUpidpw",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:bug",
"name": "auto:bug",
"color": "E99695",
"default": false,
"description": "Related to a bug, vulnerability, unexpected error with an existing feature"
},
{
"id": 5820539098,
"node_id": "LA_kwDOIPDwls8AAAABWu5g2g",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20models",
"name": "area: models",
"color": "bfdadc",
"default": false,
"description": "Related to LLMs or chat model modules"
}
] | open | false | null | [] | null | 1 | 2023-12-19T10:51:41 | 2023-12-19T12:13:00 | null | NONE | null | ### System Info
The `description` attributes of the function parameters defined in our Pydantic *v2* model are missing from the output of `convert_to_openai_function`, because it does not recognize a Pydantic v2 `BaseModel` as a valid v1 `BaseModel`.
https://github.com/langchain-ai/langchain/blob/16399fd61d7744c529cca46464489e467b4b7741/libs/langchain/langchain/chains/openai_functions/base.py#L156-L161
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.chains.openai_functions.base import convert_to_openai_function
from pydantic.v1 import BaseModel as BaseModelV1, Field as FieldV1
from pydantic import BaseModel as BaseModelV2, Field as FieldV2
class FuncV1(BaseModelV1):
    "Pydantic v1 model."
    output: str = FieldV1(description="A output text")

class FuncV2(BaseModelV2):
    "Pydantic v2 model."
    output: str = FieldV2(description="A output text")
print(convert_to_openai_function(FuncV1))
{'name': 'FuncV1', 'description': 'Pydantic v1 model.', 'parameters': {'title': 'FuncV1', 'description': 'Pydantic v1 model.', 'type': 'object', 'properties': {'output': {'title': 'Output', 'description': 'A output text', 'type': 'string'}}, 'required': ['output']}}
print(convert_to_openai_function(FuncV2))
{'name': 'FuncV2', 'description': 'Pydantic v2 model.', 'parameters': {'type': 'object', 'properties': {'output': {'type': 'string'}}, 'required': ['output']}}
```
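Until Pydantic v2 models are handled natively, one possible workaround — a sketch, assuming a plain OpenAI function-spec dict is what is needed downstream — is to build the spec from the v2 JSON schema, which does keep the field descriptions:
```python
# Sketch: derive the function spec from the Pydantic v2 JSON schema.
schema = FuncV2.model_json_schema()
openai_function = {
    "name": schema.get("title", FuncV2.__name__),
    "description": schema.get("description", ""),
    "parameters": schema,
}
```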
### Expected behavior
The `description` attributes declared on the Pydantic v2 model should appear in the output, as they do for the v1 model. | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14899/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14899/timeline | null | null | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/14898 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14898/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14898/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14898/events | https://github.com/langchain-ai/langchain/issues/14898 | 2,048,330,418 | I_kwDOIPDwls56Fwqy | 14,898 | Issue: Enhancing Streaming and Database Integration in ConversationRetrieval with AsyncIteratorCallbackHandler | {
"login": "girithodu",
"id": 119573064,
"node_id": "U_kgDOByCKSA",
"avatar_url": "https://avatars.githubusercontent.com/u/119573064?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/girithodu",
"html_url": "https://github.com/girithodu",
"followers_url": "https://api.github.com/users/girithodu/followers",
"following_url": "https://api.github.com/users/girithodu/following{/other_user}",
"gists_url": "https://api.github.com/users/girithodu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/girithodu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/girithodu/subscriptions",
"organizations_url": "https://api.github.com/users/girithodu/orgs",
"repos_url": "https://api.github.com/users/girithodu/repos",
"events_url": "https://api.github.com/users/girithodu/events{/privacy}",
"received_events_url": "https://api.github.com/users/girithodu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5680700873,
"node_id": "LA_kwDOIPDwls8AAAABUpidyQ",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:improvement",
"name": "auto:improvement",
"color": "FBCA04",
"default": false,
"description": "Medium size change to existing code to handle new use-cases"
},
{
"id": 5820539098,
"node_id": "LA_kwDOIPDwls8AAAABWu5g2g",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20models",
"name": "area: models",
"color": "bfdadc",
"default": false,
"description": "Related to LLMs or chat model modules"
}
] | open | false | null | [] | null | 5 | 2023-12-19T10:08:44 | 2023-12-27T23:42:38 | null | NONE | null | ### Issue you'd like to raise.
I am working on implementing streaming for my `ConversationalRetrievalChain` calls, and I plan to use `AsyncIteratorCallbackHandler` along with its `aiter` method. While reviewing the source code, I noticed that the response received in `on_llm_end` is not currently added to the queue. My goal is to enhance `aiter` so that the final response is also included in the queue; that way I can stream the final response to my client and use it to update cached data in my frontend. Additionally, I intend to use `on_llm_end` to update my database with the received response. Could you guide me on how to modify `aiter` within `AsyncIteratorCallbackHandler` to meet these requirements?
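One way to get this behavior without patching `aiter` itself — a minimal sketch, assuming the handler's `queue`/`done` internals and the `LLMResult` shape of the current release — is to subclass the handler and push the final text onto the queue before the iterator is closed; `on_llm_end` is also a natural place for the hypothetical database write:
```python
from langchain.callbacks import AsyncIteratorCallbackHandler
from langchain.schema import LLMResult

class FinalAnswerCallbackHandler(AsyncIteratorCallbackHandler):
    async def on_llm_end(self, response: LLMResult, **kwargs) -> None:
        final_text = response.generations[0][0].text
        self.queue.put_nowait(final_text)             # stream the final answer too
        # e.g. await save_to_database(final_text)     # hypothetical DB hook
        await super().on_llm_end(response, **kwargs)  # marks the iterator as done
```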
### Suggestion:
_No response_ | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14898/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14898/timeline | null | null | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/14897 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14897/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14897/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14897/events | https://github.com/langchain-ai/langchain/issues/14897 | 2,048,316,746 | I_kwDOIPDwls56FtVK | 14,897 | Update records in the Xata integration | {
"login": "kostasb",
"id": 15780449,
"node_id": "MDQ6VXNlcjE1NzgwNDQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/15780449?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kostasb",
"html_url": "https://github.com/kostasb",
"followers_url": "https://api.github.com/users/kostasb/followers",
"following_url": "https://api.github.com/users/kostasb/following{/other_user}",
"gists_url": "https://api.github.com/users/kostasb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kostasb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kostasb/subscriptions",
"organizations_url": "https://api.github.com/users/kostasb/orgs",
"repos_url": "https://api.github.com/users/kostasb/repos",
"events_url": "https://api.github.com/users/kostasb/events{/privacy}",
"received_events_url": "https://api.github.com/users/kostasb/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5541432778,
"node_id": "LA_kwDOIPDwls8AAAABSkuNyg",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20vector%20store",
"name": "area: vector store",
"color": "D4C5F9",
"default": false,
"description": "Related to vector store module"
},
{
"id": 5680700863,
"node_id": "LA_kwDOIPDwls8AAAABUpidvw",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:enhancement",
"name": "auto:enhancement",
"color": "C2E0C6",
"default": false,
"description": "A large net-new component, integration, or chain. Use sparingly. The largest features"
}
] | open | false | null | [] | null | 1 | 2023-12-19T10:00:54 | 2023-12-19T10:03:21 | null | CONTRIBUTOR | null | ### Feature request
Currently (0.0.350) the Xata integration always creates new records with `XataVectorStore.from_documents`.
Provide the option to update embeddings and column content of existing record ids.
### Motivation
This will provide the capability to update Xata Vector Stores.
### Your contribution
The Xata development team plans to contribute the enhancement. | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14897/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14897/timeline | null | null | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/14896 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14896/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14896/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14896/events | https://github.com/langchain-ai/langchain/issues/14896 | 2,048,263,486 | I_kwDOIPDwls56FgU- | 14,896 | YandexGPT crashes with error "You have to specify folder ID for user account" | {
"login": "achmedzhanov",
"id": 2890929,
"node_id": "MDQ6VXNlcjI4OTA5Mjk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2890929?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/achmedzhanov",
"html_url": "https://github.com/achmedzhanov",
"followers_url": "https://api.github.com/users/achmedzhanov/followers",
"following_url": "https://api.github.com/users/achmedzhanov/following{/other_user}",
"gists_url": "https://api.github.com/users/achmedzhanov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/achmedzhanov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/achmedzhanov/subscriptions",
"organizations_url": "https://api.github.com/users/achmedzhanov/orgs",
"repos_url": "https://api.github.com/users/achmedzhanov/repos",
"events_url": "https://api.github.com/users/achmedzhanov/events{/privacy}",
"received_events_url": "https://api.github.com/users/achmedzhanov/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-12-19T09:29:50 | 2023-12-19T09:56:08 | 2023-12-19T09:56:07 | NONE | null | ### System Info
langchain 0.0.350
langchain-community 0.0.3
langchain-core 0.1.1
yandexcloud 0.248.0
Python 3.9.0 Windows 10
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps
```bash
!pip install yandexcloud langchain
```
```python
from langchain.chains import LLMChain
from langchain.llms import YandexGPT
from langchain.prompts import PromptTemplate
import os
os.environ["YC_IAM_TOKEN"] = "xxxxxxxxxxxxxxxxxxxx"
os.environ["YC_FOLDER_ID"] = "yyyyyyyyyyyyyyyyyyyy"
llm = YandexGPT()
template = "What is the capital of {country}?"
prompt = PromptTemplate(template=template, input_variables=["country"])
llm_chain = LLMChain(prompt=prompt, llm=llm)
country = "Russia"
llm_chain.run(country)
```
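A variation of the last step worth trying — a sketch, not a confirmed fix for this error — is to pass the credentials to the constructor explicitly instead of relying on the environment variables:
```python
# Assumption: iam_token and folder_id are accepted as constructor fields.
llm = YandexGPT(
    iam_token="xxxxxxxxxxxxxxxxxxxx",
    folder_id="yyyyyyyyyyyyyyyyyyyy",
)
```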
Error
```
Successfully installed jsonpatch-1.33 jsonpointer-2.4 langchain-0.0.350 langchain-community-0.0.3 langchain-core-0.1.1 langsmith-0.0.71
[notice] A new release of pip is available: 23.3.1 -> 23.3.2
[notice] To update, run: python.exe -m pip install --upgrade pip
---------------------------------------------------------------------------
_MultiThreadedRendezvous Traceback (most recent call last)
Cell In[18], line 5
      3 llm_chain = LLMChain(prompt=prompt, llm=llm)
      4 country = "Russia"
----> 5 llm_chain.run(country)

File c:\Users\achme\Projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages\langchain\chains\base.py:507, in Chain.run(self, callbacks, tags, metadata, *args, **kwargs)
    505 if len(args) != 1:
    506     raise ValueError("`run` supports only one positional argument.")
--> 507 return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
    508     _output_key
    509 ]
    511 if kwargs and not args:
    512     return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
    513         _output_key
    514     ]

File c:\Users\achme\Projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages\langchain\chains\base.py:312, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
    310 except BaseException as e:
    311     run_manager.on_chain_error(e)
--> 312     raise e
    313 run_manager.on_chain_end(outputs)
    314 final_outputs: Dict[str, Any] = self.prep_outputs(
    315     inputs, outputs, return_only_outputs
    316 )

File c:\Users\achme\Projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages\langchain\chains\base.py:306, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
    299 run_manager = callback_manager.on_chain_start(
    300     dumpd(self),
    301     inputs,
    302     name=run_name,
    303 )
    304 try:
    305     outputs = (
--> 306         self._call(inputs, run_manager=run_manager)
    307         if new_arg_supported
    308         else self._call(inputs)
    309     )
    310 except BaseException as e:
    311     run_manager.on_chain_error(e)

File c:\Users\achme\Projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages\langchain\chains\llm.py:103, in LLMChain._call(self, inputs, run_manager)
     98 def _call(
     99     self,
    100     inputs: Dict[str, Any],
    101     run_manager: Optional[CallbackManagerForChainRun] = None,
    102 ) -> Dict[str, str]:
--> 103     response = self.generate([inputs], run_manager=run_manager)
    104     return self.create_outputs(response)[0]

File c:\Users\achme\Projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages\langchain\chains\llm.py:115, in LLMChain.generate(self, input_list, run_manager)
    113 callbacks = run_manager.get_child() if run_manager else None
    114 if isinstance(self.llm, BaseLanguageModel):
--> 115     return self.llm.generate_prompt(
    116         prompts,
    117         stop,
    118         callbacks=callbacks,
    119         **self.llm_kwargs,
    120     )
    121 else:
    122     results = self.llm.bind(stop=stop, **self.llm_kwargs).batch(
    123         cast(List, prompts), {"callbacks": callbacks}
    124     )

File c:\Users\achme\Projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages\langchain_core\language_models\llms.py:516, in BaseLLM.generate_prompt(self, prompts, stop, callbacks, **kwargs)
    508 def generate_prompt(
    509     self,
    510     prompts: List[PromptValue],
   (...)
    513     **kwargs: Any,
    514 ) -> LLMResult:
    515     prompt_strings = [p.to_string() for p in prompts]
--> 516     return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)

File c:\Users\achme\Projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages\langchain_core\language_models\llms.py:666, in BaseLLM.generate(self, prompts, stop, callbacks, tags, metadata, run_name, **kwargs)
    650     raise ValueError(
    651         "Asked to cache, but no cache found at `langchain.cache`."
    652     )
    653 run_managers = [
    654     callback_manager.on_llm_start(
    655         dumpd(self),
   (...)
    664     )
    665 ]
--> 666 output = self._generate_helper(
    667     prompts, stop, run_managers, bool(new_arg_supported), **kwargs
    668 )
    669 return output
    670 if len(missing_prompts) > 0:

File c:\Users\achme\Projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages\langchain_core\language_models\llms.py:553, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
    551     for run_manager in run_managers:
    552         run_manager.on_llm_error(e, response=LLMResult(generations=[]))
--> 553     raise e
    554 flattened_outputs = output.flatten()
    555 for manager, flattened_output in zip(run_managers, flattened_outputs):

File c:\Users\achme\Projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages\langchain_core\language_models\llms.py:540, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
    530 def _generate_helper(
    531     self,
    532     prompts: List[str],
   (...)
    536     **kwargs: Any,
    537 ) -> LLMResult:
    538     try:
    539         output = (
--> 540             self._generate(
    541                 prompts,
    542                 stop=stop,
    543                 # TODO: support multiple run managers
    544                 run_manager=run_managers[0] if run_managers else None,
    545                 **kwargs,
    546             )
    547             if new_arg_supported
    548             else self._generate(prompts, stop=stop)
    549         )
    550     except BaseException as e:
    551         for run_manager in run_managers:

File c:\Users\achme\Projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages\langchain_core\language_models\llms.py:1069, in LLM._generate(self, prompts, stop, run_manager, **kwargs)
   1066 new_arg_supported = inspect.signature(self._call).parameters.get("run_manager")
   1067 for prompt in prompts:
   1068     text = (
-> 1069         self._call(prompt, stop=stop, run_manager=run_manager, **kwargs)
   1070         if new_arg_supported
   1071         else self._call(prompt, stop=stop, **kwargs)
   1072     )
   1073     generations.append([Generation(text=text)])
   1074 return LLMResult(generations=generations)

File c:\Users\achme\Projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages\langchain_community\llms\yandex.py:131, in YandexGPT._call(self, prompt, stop, run_manager, **kwargs)
    129 metadata = (("authorization", f"Api-Key {self.api_key}"),)
    130 res = stub.Instruct(request, metadata=metadata)
--> 131 text = list(res)[0].alternatives[0].text
    132 if stop is not None:
    133     text = enforce_stop_tokens(text, stop)

File c:\Users\achme\Projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages\grpc\_channel.py:541, in _Rendezvous.__next__(self)
    540 def __next__(self):
--> 541     return self._next()

File c:\Users\achme\Projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages\grpc\_channel.py:967, in _MultiThreadedRendezvous._next(self)
    965     raise StopIteration()
    966 elif self._state.code is not None:
--> 967     raise self

_MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with:
	status = StatusCode.UNAUTHENTICATED
	details = "You have to specify folder ID for user account"
	debug_error_string = "UNKNOWN:Error received from peer ipv4:158.160.54.160:443 {created_time:"2023-12-18T13:29:25.0934987+00:00", grpc_status:16, grpc_message:"You have to specify folder ID for user account"}"
>
```
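Looking at the `langchain_community/llms/yandex.py` frame above, the request metadata only carries the Api-Key header. My guess, and it is only a guess based on the error text, is that an API key from a user account also needs the Yandex Cloud folder ID sent with the request, roughly like the sketch below (the folder ID value is a placeholder, and the `x-folder-id` header name is my assumption):

```python
# Sketch of the metadata I think the request would need; not what the library
# currently sends. "<my-folder-id>" and "<my-api-key>" are placeholders.
api_key = "<my-api-key>"
metadata = (
    ("authorization", f"Api-Key {api_key}"),
    ("x-folder-id", "<my-folder-id>"),  # assumed header name for the folder ID
)
```

Is there a way to pass the folder ID through the `YandexGPT` wrapper today, or does this need a change in the integration?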
### Expected behavior
the model responds successfully | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14896/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14896/timeline | null | completed | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/14895 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14895/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14895/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14895/events | https://github.com/langchain-ai/langchain/issues/14895 | 2,048,263,074 | I_kwDOIPDwls56FgOi | 14,895 | Comparing the agent reply with the previous conversation | {
"login": "sagarsingh-kiwi",
"id": 100353004,
"node_id": "U_kgDOBftD7A",
"avatar_url": "https://avatars.githubusercontent.com/u/100353004?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sagarsingh-kiwi",
"html_url": "https://github.com/sagarsingh-kiwi",
"followers_url": "https://api.github.com/users/sagarsingh-kiwi/followers",
"following_url": "https://api.github.com/users/sagarsingh-kiwi/following{/other_user}",
"gists_url": "https://api.github.com/users/sagarsingh-kiwi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sagarsingh-kiwi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sagarsingh-kiwi/subscriptions",
"organizations_url": "https://api.github.com/users/sagarsingh-kiwi/orgs",
"repos_url": "https://api.github.com/users/sagarsingh-kiwi/repos",
"events_url": "https://api.github.com/users/sagarsingh-kiwi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sagarsingh-kiwi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4899126096,
"node_id": "LA_kwDOIPDwls8AAAABJAK7UA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20memory",
"name": "area: memory",
"color": "BFDADC",
"default": false,
"description": "Related to memory module"
},
{
"id": 5680700848,
"node_id": "LA_kwDOIPDwls8AAAABUpidsA",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:question",
"name": "auto:question",
"color": "BFD4F2",
"default": false,
"description": "A specific question about the codebase, product, project, or how to use a feature"
}
] | open | false | null | [] | null | 2 | 2023-12-19T09:29:35 | 2023-12-19T09:46:06 | null | NONE | null | ### System Info
Hi folks,
I am running into an issue where the agent sometimes gives exactly the same answer it produced for the previous question. Here is a screenshot of the replies:
![image](https://github.com/langchain-ai/langchain/assets/100353004/3e42f54b-cb48-4515-9ee8-05675156bbf1)
Since the agent returns the exact same string as the answer, I want to add a manual check: whenever the new answer is identical to the previous one, I will query the agent again for a fresh response.
I want to know how I can access the messages from the chat history so that I can compare the new reply against the last one.
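Something like the check below is what I have in mind. It is only a rough sketch, assuming the agent uses a `ConversationBufferMemory`; the variable and function names are my own:

```python
from langchain.memory import ConversationBufferMemory
from langchain.schema import AIMessage

# The memory object attached to the agent executor (illustrative setup).
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

def is_repeat_of_last_reply(new_reply: str) -> bool:
    # memory.chat_memory.messages holds the stored HumanMessage/AIMessage objects.
    ai_replies = [m.content for m in memory.chat_memory.messages if isinstance(m, AIMessage)]
    # True when the agent returned exactly the same string as its previous answer.
    return bool(ai_replies) and new_reply == ai_replies[-1]
```

Is reading `memory.chat_memory.messages` directly like this the supported way, or is there a better hook for it?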
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
It happens only occasionally, so there is no exact way to reproduce it.
### Expected behavior
A code snippet to check the agent's previous replies | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14895/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14895/timeline | null | null | null | null |
https://api.github.com/repos/langchain-ai/langchain/issues/14894 | https://api.github.com/repos/langchain-ai/langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14894/labels{/name} | https://api.github.com/repos/langchain-ai/langchain/issues/14894/comments | https://api.github.com/repos/langchain-ai/langchain/issues/14894/events | https://github.com/langchain-ai/langchain/issues/14894 | 2,048,190,743 | I_kwDOIPDwls56FOkX | 14,894 | MultiQueryRetriever consume too much GPU mem, request to limit the maximum llm call | {
"login": "tiger55cn",
"id": 40610803,
"node_id": "MDQ6VXNlcjQwNjEwODAz",
"avatar_url": "https://avatars.githubusercontent.com/u/40610803?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tiger55cn",
"html_url": "https://github.com/tiger55cn",
"followers_url": "https://api.github.com/users/tiger55cn/followers",
"following_url": "https://api.github.com/users/tiger55cn/following{/other_user}",
"gists_url": "https://api.github.com/users/tiger55cn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tiger55cn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tiger55cn/subscriptions",
"organizations_url": "https://api.github.com/users/tiger55cn/orgs",
"repos_url": "https://api.github.com/users/tiger55cn/repos",
"events_url": "https://api.github.com/users/tiger55cn/events{/privacy}",
"received_events_url": "https://api.github.com/users/tiger55cn/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5680700873,
"node_id": "LA_kwDOIPDwls8AAAABUpidyQ",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/auto:improvement",
"name": "auto:improvement",
"color": "FBCA04",
"default": false,
"description": "Medium size change to existing code to handle new use-cases"
},
{
"id": 5820539098,
"node_id": "LA_kwDOIPDwls8AAAABWu5g2g",
"url": "https://api.github.com/repos/langchain-ai/langchain/labels/area:%20models",
"name": "area: models",
"color": "bfdadc",
"default": false,
"description": "Related to LLMs or chat model modules"
}
] | open | false | null | [] | null | 1 | 2023-12-19T08:44:18 | 2023-12-19T08:51:31 | null | NONE | null | ### System Info
python 3.10
langchain 0.0.350
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
output_parser = LineListOutputParser()
QUERY_PROMPT = PromptTemplate(
    input_variables=["question"],
    template="""You are an AI language model assistant. Your task is to generate five
    different versions of the given user question to retrieve relevant documents from a vector
    database. By generating multiple perspectives on the user question, your goal is to help
    the user overcome some of the limitations of the distance-based similarity search.
    Provide these alternative questions separated by newlines.
    Original question: {question}""",
)
llm_chain = LLMChain(llm=self.llm, prompt=QUERY_PROMPT, output_parser=output_parser)
db = self.embeddings_dict[doc_id].as_retriever(search_kwargs={"k": 15})
multi_query_retriever = MultiQueryRetriever.from_llm(retriever=db, llm=self.llm)
relevant_documents = multi_query_retriever.get_relevant_documents(query)
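As a stopgap I can throttle things on my own side with a plain semaphore around the retriever call. The sketch below is entirely my own workaround (names included), not a langchain API, but it shows the kind of limit I mean:

```python
import threading

# Workaround sketch: cap how many retrievals (and therefore LLM calls) run at once.
MAX_PARALLEL_LLM_CALLS = 4  # illustrative limit
_llm_slots = threading.Semaphore(MAX_PARALLEL_LLM_CALLS)

def bounded_retrieve(retriever, query):
    # Blocks callers once MAX_PARALLEL_LLM_CALLS retrievals are already in flight.
    with _llm_slots:
        return retriever.get_relevant_documents(query)

# Usage: relevant_documents = bounded_retrieve(multi_query_retriever, query)
```

Having an option like this built into MultiQueryRetriever itself would be much nicer.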
### Expected behavior
limit the maximum number of parallel LLM calls, for example to 4 | {
"url": "https://api.github.com/repos/langchain-ai/langchain/issues/14894/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/langchain-ai/langchain/issues/14894/timeline | null | null | null | null |