Dataset schema:

| column | type |
|---|---|
| issue_owner_repo | sequence of strings (length 2) |
| issue_body | string (0–261k chars, nullable) |
| issue_title | string (1–925 chars) |
| issue_comments_url | string (56–81 chars) |
| issue_comments_count | int64 (0–2.5k) |
| issue_created_at | string (20 chars) |
| issue_updated_at | string (20 chars) |
| issue_html_url | string (37–62 chars) |
| issue_github_id | int64 (387k–2.46B) |
| issue_number | int64 (1–127k) |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.chains import ConversationalRetrievalChain

# llm, retriever, and memory are assumed to be defined earlier in the script
final_chain = ConversationalRetrievalChain.from_llm(
llm=llm,
retriever=retriever,
memory=memory,
verbose=True
)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am using MongoDBChatMessageHistory, and it works as expected functionally, but I would expect the history record to be saved as an object in MongoDB rather than as a string, to aid readability in the database.
(As an aside, I am not sure why the field names are not snake_case or camelCase...)
![image](https://github.com/langchain-ai/langchain/assets/21989833/d46a70ed-6bbe-442d-942b-4f8bd7b34f02)
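For context, a minimal sketch of the difference I mean (assuming the default `SessionId`/`History` keys and the standard `message_to_dict` serialization; not the exact library internals):

```python
import json

from langchain_core.messages import HumanMessage, message_to_dict

message = HumanMessage(content="hello")

# What is stored today (sketch): the message dict is dumped to a JSON string,
# so MongoDB shows "History" as one opaque string field.
as_string = {"SessionId": "abc", "History": json.dumps(message_to_dict(message))}

# What this issue asks for: persist the dict itself so MongoDB stores a
# nested, queryable BSON object instead.
as_object = {"SessionId": "abc", "History": message_to_dict(message)}
```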
### System Info
```
langchain==0.1.13
langchain-community==0.0.29
langchain-core==0.1.36
langchain-mongodb==0.1.3
langchain-openai==0.1.1
langchain-text-splitters==0.0.1
langchainhub==0.1.15
``` | MongoDBChatMessageHistory saves History as a string rather than an object | https://api.github.com/repos/langchain-ai/langchain/issues/20486/comments | 1 | 2024-04-15T18:56:38Z | 2024-07-23T16:12:02Z | https://github.com/langchain-ai/langchain/issues/20486 | 2,244,397,666 | 20,486 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code:
```python
from langchain_community.chat_models import ChatCohere
from dotenv import load_dotenv
import os
load_dotenv()
COHERE_API_KEY= os.getenv("COHERE_API_KEY")
chat_model = ChatCohere(cohere_api_key=COHERE_API_KEY, model="command-r")
test = await chat_model.ainvoke("test")
print(test)
```
### Error Message and Stack Trace (if applicable)
```
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Cell In[4], line 8
      6 COHERE_API_KEY= os.getenv("COHERE_API_KEY")
      7 chat_model = ChatCohere(cohere_api_key=COHERE_API_KEY, model="command-r",offline=False)
----> 8 test = await chat_model.ainvoke("test")
      9 print(test)

File ~/Desktop/AI/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:175, in BaseChatModel.ainvoke(self, input, config, stop, **kwargs)
    166 async def ainvoke(
    167     self,
    168     input: LanguageModelInput,
    (...)
    172     **kwargs: Any,
    173 ) -> BaseMessage:
    174     config = ensure_config(config)
--> 175     llm_result = await self.agenerate_prompt(
    176         [self._convert_input(input)],
    177         stop=stop,
    178         callbacks=config.get("callbacks"),
    179         tags=config.get("tags"),
    180         metadata=config.get("metadata"),
    181         run_name=config.get("run_name"),
    182         run_id=config.pop("run_id", None),
    183         **kwargs,
    184     )
    185     return cast(ChatGeneration, llm_result.generations[0][0]).message

File ~/Desktop/AI/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:566, in BaseChatModel.agenerate_prompt(self, prompts, stop, callbacks, **kwargs)
    558 async def agenerate_prompt(
    559     self,
    560     prompts: List[PromptValue],
    (...)
    563     **kwargs: Any,
    564 ) -> LLMResult:
    565     prompt_messages = [p.to_messages() for p in prompts]
--> 566     return await self.agenerate(
    567         prompt_messages, stop=stop, callbacks=callbacks, **kwargs
    568     )

File ~/Desktop/AI/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:526, in BaseChatModel.agenerate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
    513 if run_managers:
    514     await asyncio.gather(
    515         *[
    516             run_manager.on_llm_end(
    (...)
    524         ]
    525     )
--> 526     raise exceptions[0]
    527 flattened_outputs = [
    528     LLMResult(generations=[res.generations], llm_output=res.llm_output)  # type: ignore[list-item, union-attr]
    529     for res in results
    530 ]
    531 llm_output = self._combine_llm_outputs([res.llm_output for res in results])  # type: ignore[union-attr]

File ~/Desktop/AI/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:707, in BaseChatModel._agenerate_with_cache(self, messages, stop, run_manager, **kwargs)
    705 else:
    706     if inspect.signature(self._agenerate).parameters.get("run_manager"):
--> 707         result = await self._agenerate(
    708             messages, stop=stop, run_manager=run_manager, **kwargs
    709         )
    710     else:
    711         result = await self._agenerate(messages, stop=stop, **kwargs)

File ~/Desktop/AI/.venv/lib/python3.11/site-packages/langchain_community/chat_models/cohere.py:242, in ChatCohere._agenerate(self, messages, stop, run_manager, **kwargs)
    240 generation_info = None
    241 if hasattr(response, "documents"):
--> 242     generation_info = self._get_generation_info(response)
    243 return ChatResult(
    244     generations=[
    245         ChatGeneration(message=message, generation_info=generation_info)
    246     ]
    247 )

File ~/Desktop/AI/.venv/lib/python3.11/site-packages/langchain_community/chat_models/cohere.py:194, in ChatCohere._get_generation_info(self, response)
    187 def _get_generation_info(self, response: Any) -> Dict[str, Any]:
    188     """Get the generation info from cohere API response."""
    189     return {
    190         "documents": response.documents,
    191         "citations": response.citations,
    192         "search_results": response.search_results,
    193         "search_queries": response.search_queries,
--> 194         "token_count": response.token_count,
    195     }

AttributeError: 'NonStreamedChatResponse' object has no attribute 'token_count'
```
### Description
I recently realized that all of my Cohere code that goes through LangChain has broken due to API errors. Is this possibly due to this change?
https://docs.cohere.com/changelog/python-sdk-v520-release
Currently, it fails on even the most basic test of just calling invoke on the chat model.
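As a possible stopgap until the integration catches up with the v5 SDK, here is a sketch of a tolerant `_get_generation_info` — my patch idea, not the library's current code:

```python
from typing import Any, Dict

def get_generation_info(response: Any) -> Dict[str, Any]:
    # Same fields as the current implementation, but token_count becomes
    # optional because cohere>=5 responses no longer expose that attribute.
    info: Dict[str, Any] = {
        "documents": response.documents,
        "citations": response.citations,
        "search_results": response.search_results,
        "search_queries": response.search_queries,
    }
    token_count = getattr(response, "token_count", None)
    if token_count is not None:
        info["token_count"] = token_count
    return info
```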
### System Info
langchain==0.1.16
langchain-anthropic==0.1.4
langchain-community==0.0.32
langchain-core==0.1.42
langchain-text-splitters==0.0.1
langchainhub==0.1.15
cohere==5.2.5
Platform: Linux
python version: 3.11.8 | AttributeError: 'NonStreamedChatResponse' object has no attribute 'token_count' | https://api.github.com/repos/langchain-ai/langchain/issues/20484/comments | 4 | 2024-04-15T18:20:49Z | 2024-04-16T16:51:51Z | https://github.com/langchain-ai/langchain/issues/20484 | 2,244,331,164 | 20,484 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Failed to batch ingest runs: LangSmithError('Failed to POST https://api.smith.langchain.com/runs/batch in LangSmith API. HTTPError(\'403 Client Error: Forbidden for url: https://api.smith.langchain.com/runs/batch\', \'{"detail":"Forbidden"}\')')
### Description
I am using CrewAI along with LangSmith, and the error looks like a server-side error.
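For anyone else hitting this, one thing worth checking first (my guess, not a confirmed cause): a 403 Forbidden from `/runs/batch` usually points at authentication rather than a code bug, so it is worth verifying the standard LangSmith tracing environment variables:

```python
# Print which of the standard LangSmith tracing variables are actually set.
# A missing or stale LANGCHAIN_API_KEY is a common cause of 403 Forbidden.
import os

for var in ("LANGCHAIN_TRACING_V2", "LANGCHAIN_ENDPOINT", "LANGCHAIN_API_KEY"):
    print(var, "->", "set" if os.getenv(var) else "MISSING")
```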
### System Info
Platform: Windows
Python: 3.11.4
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import os
import uuid

from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.document_loaders import PDFMinerLoader


def load_documents():
    texts = []
    all_items = os.listdir("files/")
    for item in all_items:
        print("Processing file: {}".format(item))
        # Generate a unique id for each document
        unique_id = str(uuid.uuid4())
        text_splitter = RecursiveCharacterTextSplitter(chunk_size=200, chunk_overlap=0, add_start_index=True)
        loader = PDFMinerLoader("files/{}".format(item), extract_images=True)
        docs = loader.load_and_split(text_splitter)
        # Add unique id and line range to each document
        for doc in docs:
            doc.metadata["unique_id"] = unique_id
        texts.extend(docs)
    return texts
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
LangChain text splitters need a way to store chunk location metadata: either start and end line numbers within the PDF, or coordinates within the PDF.
Currently this is not possible with the text splitters.
There are two options for storing the chunk information:
1. Store the from and to line:

```json
{
    "chunk": "reversible llowing ......",
    "pageNumber": 1,
    "fromLine": 27,
    "toLine": 40,
    "documentId": "8a385d38-63ed-4821-a5c2-2e7309d5a256"
}
```

2. Store the coordinates of the chunk:

```json
{
    "text": "\n Coach : the ai chatbot - We will have several coaches with different names, each with a unique",
    "coordinates": [
        {"x": 54, "y": 130.7924798925},
        {"x": 512.654318754038, "y": 141.7924801425}
    ],
    "pageSize": {"width": 612, "height": 792},
    "pageNumber": 1
}
```
Looking for some help here. Thanks!
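In the meantime, a possible workaround sketch (it assumes the splitter is created with `add_start_index=True`, as in my example code, so each chunk's character offset is recorded; `page_text` is a hypothetical variable holding the raw text of the source page):

```python
# Hypothetical workaround: derive fromLine/toLine from the start_index that
# add_start_index=True already records in each chunk's metadata.
def add_line_metadata(doc, page_text: str):
    start = doc.metadata["start_index"]
    end = start + len(doc.page_content)
    # Line number = number of newlines before the offset, 1-based.
    doc.metadata["fromLine"] = page_text.count("\n", 0, start) + 1
    doc.metadata["toLine"] = page_text.count("\n", 0, end) + 1
    return doc
```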
### System Info
$ pip freeze | grep langchain
langchain==0.1.9
langchain-community==0.0.24
langchain-core==0.1.40
langchain-openai==0.0.8
Platform: Windows
Python: 3.11
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
agent = planner.create_openapi_agent(
api_spec=api_spec,
requests_wrapper=requests_wrapper,
llm=llm,
allow_dangerous_requests=True,
)
### Error Message and Stack Trace (if applicable)
_No response_
### Description
i want the tool to have full HTTP capabilities and not only GET and POST
### System Info
latest langchain python installation which is 0.1.16 | OpenAPI agent does not support PATCH and PUT and DELETE, only GET and POST | https://api.github.com/repos/langchain-ai/langchain/issues/20469/comments | 3 | 2024-04-15T12:40:10Z | 2024-06-22T00:13:06Z | https://github.com/langchain-ai/langchain/issues/20469 | 2,243,580,678 | 20,469 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.retrievers import EnsembleRetriever
from langchain_community.retrievers import BM25Retriever

doc_list_1 = [
    "I like apples",
    "I like oranges",
    "Apples and oranges are fruits",
]
ret2 = BM25Retriever.from_texts(
    doc_list_1, metadatas=[{"source": 1}] * len(doc_list_1)
)
# `retriever` below is the pre-existing OpenSearch retriever (defined elsewhere)
retriever = EnsembleRetriever(
    retrievers=[ret2, retriever], weights=[0.5, 0.5]
)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am trying to use an EnsembleRetriever containing BM25 and OpenSearch, and I am getting the following error:

"1 validation error for Document\npage_content\n  str type expected (type=type_error.str)"
### System Info
langchain==0.1.0
langchain-community==0.0.9
langchain-core==0.1.7
langchainhub==0.1.14 | not able to run BM25 retriever | https://api.github.com/repos/langchain-ai/langchain/issues/20466/comments | 1 | 2024-04-15T10:47:21Z | 2024-07-22T16:08:56Z | https://github.com/langchain-ai/langchain/issues/20466 | 2,243,344,923 | 20,466 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.sql_database import SQLDatabase
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder, PromptTemplate
from langchain.tools import BaseTool
from langchain.tools.render import format_tool_to_openai_function
from langchain.schema.runnable import Runnable, RunnableLambda, RunnableParallel
from langchain.chat_models import ChatOpenAI
from langchain.agents.format_scratchpad import format_to_openai_function_messages
from langchain.agents.output_parsers import OpenAIFunctionsAgentOutputParser
from langchain.agents import AgentExecutor
from pydantic import BaseModel, Field
import os
from secret_key import openapi_key
from sqlalchemy import create_engine
import constants
from datetime import datetime

os.environ['OPENAI_API_KEY'] = openapi_key

SQL_DML_CHAIN_PROMPT = """You are expert in SQLITE. Your main objective is to construct Data manipulation SQLITE query give the
user question: {user_question}.

You need to construct the Data manipulation SQLITE query for the following Database Schema:
{table_info}

Only Output the final SQL-Query and nothing else.

SQL-Query:"""

prompt = PromptTemplate(template=SQL_DML_CHAIN_PROMPT, input_variables=['user_question', 'table_info'])

from urllib.parse import quote_plus

server_name = constants.server_name
database_name = constants.database_name
username = constants.username
password = constants.password
encoded_password = quote_plus(password)

connection_uri = f"mssql+pyodbc://{username}:{encoded_password}@{server_name}/{database_name}?driver=ODBC+Driver+17+for+SQL+Server"
engine = create_engine(connection_uri)
model_name = "get-3.5-turbo-16k"

db = SQLDatabase(
    engine,
    view_support=True,
    include_tables=['PAY_transaction_settingallow', 'PAY_mst_allowance', 'employee_details'],
    sample_rows_in_table_info=5,
)

sql_dml_chain = (
    RunnableParallel({
        "user_question": lambda x: x["user_question"],
        "table_info": lambda _: db.get_table_info(),
    })
    | prompt
    | ChatOpenAI().bind(stop='SQL-Query:')
    | RunnableLambda(lambda x: x.content.strip().replace('```sql', ''))
)

agent_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", """
        You are expert in SQL whose main objective is to mainpulate the Database for which you have
        been given access. You can use the tool `sql_db_manipulation` to interact with Database and
        mainpulate the database as per the user requirement.
        Wrapped column names: All column names should be wrapped in square brackets [] as delimiters.
        Use GETDATE() to get the current date and time instead of DATETIME('now').
        For each record processed, retrieve the (employeeEuid) as 'euid' from the employee_details table where the employeeName matches, and retrieve the allowance ID (alw_id) from the pay_mst_allowance table where the alw_desc matches.no changes has to be made in "pay_mst_allowance" table and "employee_details" table.
        perform JOIN operation to fetch euid and alw_id from respective tables.
        Selected table: Specify PAY_transaction_settingallow as the table to update.
        Employee and allowance selection: Use the WHERE clause to filter employees based on employeeName and allowances based on alw_desc.
        Date handling: Maintain the logic for createDate, effect_date, and to_date using SQL Server functions.
        Currency: Assume the amount to be in rupees.
        Removed newlines: Write the query as a single string without newlines (\n).
        Ensure the query executes efficiently and without errors.
        """),
        ("user", "{input}"),
        MessagesPlaceholder(variable_name="agent_scratchpad"),
    ]
)


class SQLDBMANIPULATION(BaseModel):
    user_query: str = Field(description='User question which will be translated to a Data Manipulation SQL Query and will be executed on the underlying database')


class SQLDBMANIPULATIONTool(BaseTool):
    name = "sql_db_manipulation"
    description = "Use this tool to convert and execute DML queries given the user question"
    args_schema: type[SQLDBMANIPULATION] = SQLDBMANIPULATION
    sql_dml_chain: Runnable

    def _run(self, user_query: str) -> str:
        """Use the tool."""
        query = sql_dml_chain.invoke({"user_question": user_query})
        query = query.replace("DATETIME('now')", "GETDATE()")
        db._execute(query)


tools = [SQLDBMANIPULATIONTool(sql_dml_chain=sql_dml_chain)]
llm_with_tools = ChatOpenAI().bind(functions=[format_tool_to_openai_function(t) for t in tools])

agent = (
    {
        "input": lambda x: x["input"],
        "agent_scratchpad": lambda x: format_to_openai_function_messages(
            x["intermediate_steps"]
        ),
    }
    | agent_prompt
    | llm_with_tools
    | OpenAIFunctionsAgentOutputParser()
)

agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
```
### Error Message and Stack Trace (if applicable)
> Entering new AgentExecutor chain...
Invoking: `sql_db_manipulation` with `{'user_query': "INSERT INTO PAY_transaction_settingallow ([euid], [alw_id], [createDate], [effect_date], [to_date], [amount], [currency]) SELECT ed.employeeEuid AS euid, pma.alw_id, GETDATE() AS createDate, '2024-03-01', '2024-03-31', 500, 'INR' FROM employee_details ed JOIN pay_mst_allowance pma ON ed.employeeName = 'jyothi' AND pma.alw_desc = 'Production Incentive'", 'user_question': 'insert Production incentive of 500 For jyothi for march 2024'}`
Traceback (most recent call last):
File "C:\Users\jyothi\Desktop\test\gpttest\Lib\site-packages\sqlalchemy\engine\base.py", line 1969, in _exec_single_context
self.dialect.do_execute(
File "C:\Users\jyothi\Desktop\test\gpttest\Lib\site-packages\sqlalchemy\engine\default.py", line 922, in do_execute
cursor.execute(statement, parameters)
pyodbc.ProgrammingError: ('42000', "[42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]'datetime' is not a recognized built-in function name. (195) (SQLExecDirectW)")
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "c:\Users\jyothi\Desktop\test\modifying.py", line 130, in <module>
agent_executor.invoke({"input": " insert Production incentive of 500 For jyothi for march 2024 "})
File "C:\Users\jyothi\Desktop\test\gpttest\Lib\site-packages\langchain\chains\base.py", line 87, in invoke
return self(
^^^^^
File "C:\Users\jyothi\Desktop\test\gpttest\Lib\site-packages\langchain\chains\base.py", line 310, in __call__
raise e
File "C:\Users\jyothi\Desktop\test\gpttest\Lib\site-packages\langchain\chains\base.py", line 304, in __call__
self._call(inputs, run_manager=run_manager)
File "C:\Users\jyothi\Desktop\test\gpttest\Lib\site-packages\langchain\agents\agent.py", line 1245, in _call
next_step_output = self._take_next_step(
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jyothi\Desktop\test\gpttest\Lib\site-packages\langchain\agents\agent.py", line 1095, in _take_next_step
observation = tool.run(
^^^^^^^^^
File "C:\Users\jyothi\Desktop\test\gpttest\Lib\site-packages\langchain\tools\base.py", line 365, in run
raise e
File "C:\Users\jyothi\Desktop\test\gpttest\Lib\site-packages\langchain\tools\base.py", line 339, in run
else self._run(*tool_args, **tool_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\jyothi\Desktop\test\modifying.py", line 98, in _run
db._execute(query)
File "C:\Users\jyothi\Desktop\test\gpttest\Lib\site-packages\langchain\utilities\sql_database.py", line 411, in _execute
cursor = connection.execute(text(command))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jyothi\Desktop\test\gpttest\Lib\site-packages\sqlalchemy\engine\base.py", line 1416, in execute
return meth(
^^^^^
File "C:\Users\jyothi\Desktop\test\gpttest\Lib\site-packages\sqlalchemy\sql\elements.py", line 516, in _execute_on_connection
return connection._execute_clauseelement(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jyothi\Desktop\test\gpttest\Lib\site-packages\sqlalchemy\engine\base.py", line 1639, in _execute_clauseelement
ret = self._execute_context(
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jyothi\Desktop\test\gpttest\Lib\site-packages\sqlalchemy\engine\base.py", line 1848, in _execute_context
return self._exec_single_context(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jyothi\Desktop\test\gpttest\Lib\site-packages\sqlalchemy\engine\base.py", line 1988, in _exec_single_context
self._handle_dbapi_exception(
File "C:\Users\jyothi\Desktop\test\gpttest\Lib\site-packages\sqlalchemy\engine\base.py", line 2343, in _handle_dbapi_exception
raise sqlalchemy_exception.with_traceback(exc_info[2]) from e
File "C:\Users\jyothi\Desktop\test\gpttest\Lib\site-packages\sqlalchemy\engine\base.py", line 1969, in _exec_single_context
self.dialect.do_execute(
File "C:\Users\jyothi\Desktop\test\gpttest\Lib\site-packages\sqlalchemy\engine\default.py", line 922, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.ProgrammingError: (pyodbc.ProgrammingError) ('42000', "[42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]'datetime' is not a recognized built-in function name. (195) (SQLExecDirectW)")
[SQL: INSERT INTO PAY_transaction_settingallow ([euid], [alw_id], [createDate], [effect_date], [to_date], [amount], [currency])
SELECT ed.employeeEuid AS euid, pma.alw_id, datetime('now') AS createDate, '2024-03-01', '2024-03-31', 500, 'INR'
FROM employee_details ed
JOIN PAY_mst_allowance pma ON ed.employeeName = 'jyothi' AND pma.alw_desc = 'Production Incentive';]
### Description
I'm using the AgentExecutor from LangChain for database manipulation, to perform CRUD operations on a table. Even after specifically instructing it to use GETDATE(), it keeps using DATETIME('now') when executing the query.
I have shared the complete error above.
When invoking the tool, the agent writes a correct query:
Invoking: `sql_db_manipulation` with `{'user_query': "INSERT INTO PAY_transaction_settingallow ([euid], [alw_id], [createDate], [effect_date], [to_date], [amount], [currency]) SELECT ed.employeeEuid AS euid, pma.alw_id, GETDATE() AS createDate, '2024-03-01', '2024-03-31', 500, 'INR' FROM employee_details ed JOIN pay_mst_allowance pma ON ed.employeeName = 'jyothi' AND pma.alw_desc = 'Production Incentive'", 'user_question': 'insert Production incentive of 500 For jyothi for march 2024'}`
but while executing, it runs:
[SQL: INSERT INTO PAY_transaction_settingallow ([euid], [alw_id], [createDate], [effect_date], [to_date], [amount], [currency])
SELECT ed.employeeEuid AS euid, pma.alw_id, datetime('now') AS createDate, '2024-03-01', '2024-03-31', 500, 'INR'
FROM employee_details ed
JOIN PAY_mst_allowance pma ON ed.employeeName = 'jyothi' AND pma.alw_desc = 'Production Incentive';]
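One detail worth noting for triage: the inner `SQL_DML_CHAIN_PROMPT` in my example code explicitly asks for a SQLITE query, and the tool regenerates the SQL from the agent's input with that prompt, which could explain why `DATETIME('now')` reappears at execution time despite the outer instructions. A sketch of that inner prompt retargeted at SQL Server (my untested workaround idea):

```python
# Retarget the inner DML prompt at T-SQL so the regenerated query matches
# the MSSQL backend (the original prompt said SQLITE, which invites
# DATETIME('now')).
SQL_DML_CHAIN_PROMPT = """You are an expert in Microsoft SQL Server (T-SQL).
Construct a data-manipulation T-SQL query for the user question: {user_question}.

Database schema:
{table_info}

Use GETDATE() for the current date and time; never use DATETIME('now').
Only output the final SQL query and nothing else.

SQL-Query:"""
```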
### System Info
os: windows
python: 3.11
langchain: latest | 'datetime' is not a recognized built-in function name. with angen_executor | https://api.github.com/repos/langchain-ai/langchain/issues/20465/comments | 2 | 2024-04-15T10:44:40Z | 2024-08-09T08:37:35Z | https://github.com/langchain-ai/langchain/issues/20465 | 2,243,337,268 | 20,465 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Not applicable
### Error Message and Stack Trace (if applicable)
_No response_
### Description
The following files trigger antivirus scans which report the presence of the Emf.Exploit.CVE_2017-3122-6335825-0 CVE:
* docs/docs/integrations/document_loaders/example_data/fake.vsdx
* libs/community/tests/examples/fake.vsdx
These files were added in the https://github.com/langchain-ai/langchain/pull/16171 PR.
Details on the scan results: https://www.virustotal.com/gui/file/3b02db67f312bfb1a0ac430673c372ec92eabfaf2888030161d7841ae2120f5f/detection
Recommendation: remove the `visio/media/image2.emf` entry from the `fake.vsdx` archive. This is the file which triggers the CVE and it is not required for tests which use the archive.
### System Info
System Information
------------------
> OS: Linux
> OS Version: #26~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Tue Mar 12 10:22:43 UTC 2
> Python Version: 3.11.4 (main, Jul 10 2023, 09:48:51) [GCC 11.3.0]
Package Information
-------------------
> langchain_core: 0.1.26
> langchain: 0.1.9
> langchain_community: 0.0.22
> langsmith: 0.1.5
> langchain_experimental: 0.0.52
> langchain_openai: 0.0.7
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
| community: test file triggers antivirus scan | https://api.github.com/repos/langchain-ai/langchain/issues/20456/comments | 3 | 2024-04-15T08:50:57Z | 2024-07-29T16:07:22Z | https://github.com/langchain-ai/langchain/issues/20456 | 2,243,082,369 | 20,456 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.chat_models import PaiEasChatEndpoint
from langchain_core.prompts import ChatPromptTemplate
prompt = ChatPromptTemplate.from_template("tell me a short joke about {topic}")
prompt_value = prompt.invoke("test_message")
eas_chat_endpoint = PaiEasChatEndpoint(
eas_service_url="your_service_url",
eas_service_token="your_service_token"
)
eas_chat_endpoint._call(prompt_value.messages)
```
This is just example code exhibiting the potential error; I explain why it is a possible type error in the Description below.
### Error Message and Stack Trace (if applicable)
_No response_
### Description
https://github.com/langchain-ai/langchain/blob/b66a4f48fa5656871c3e849f7e1790dfb5a4c56b/libs/community/langchain_community/chat_models/pai_eas_endpoint.py#L193-L230
In the method `PaiEasChatEndpoint._call` at line 203, `self._call_eas(request_payload)` returns a `str`, because it returns `response.text` at line 230 (https://requests.readthedocs.io/en/latest/api/#requests.Response.text).
Then the first argument of `_format_response_payload` can be a `str` at line 204, resulting in a type mismatch.
https://github.com/langchain-ai/langchain/blob/b66a4f48fa5656871c3e849f7e1790dfb5a4c56b/libs/community/langchain_community/chat_models/pai_eas_endpoint.py#L167-L179
Moreover, if a `JSONDecodeError` occurs in this method, an attribute error is raised at line 178 (`AttributeError: 'str' object has no attribute 'decode'`) because the `output` variable is a `str`.
I think `PaiEasChatEndpoint._call_eas` should be fixed to return the `bytes` type instead.
If I'm mistaken, I'd appreciate it if you could let me know.
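A minimal sketch of the fix I have in mind (the header construction here is my assumption; only the final `return` is the point):

```python
import requests

def call_eas(service_url: str, token: str, query_body: dict) -> bytes:
    # Sketch mirroring PaiEasChatEndpoint._call_eas, but returning bytes.
    headers = {"Content-Type": "application/json", "Authorization": token}
    response = requests.post(service_url, headers=headers, json=query_body)
    if response.status_code != 200:
        raise Exception(
            f"Request failed with status {response.status_code}: {response.text}"
        )
    # .content is the bytes counterpart of .text; returning it keeps the
    # `output.decode("utf-8")` fallback in _format_response_payload valid.
    return response.content
```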
### System Info
System Information
------------------
> OS: Linux
> OS Version: #190-Ubuntu SMP Fri Feb 2 23:24:22 UTC 2024
> Python Version: 3.9.18 (main, Feb 13 2024, 14:37:08)
[GCC 9.4.0]
Package Information
-------------------
> langchain_core: 0.1.42
> langchain: 0.1.16
> langchain_community: 0.0.32
> langsmith: 0.1.47
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | `PaiEasChatEndpoint._call_eas` should return `bytes` type instead of `str` type | https://api.github.com/repos/langchain-ai/langchain/issues/20453/comments | 0 | 2024-04-15T07:47:06Z | 2024-07-22T16:08:47Z | https://github.com/langchain-ai/langchain/issues/20453 | 2,242,955,555 | 20,453 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
I read in this langchain document page: https://python.langchain.com/docs/modules/data_connection/retrievers/multi_vector/
which mentioned that can embedding summary, hypothetical question along with document.
```
The methods to create multiple vectors per document include:
Smaller chunks: split a document into smaller chunks, and embed those (this is ParentDocumentRetriever).
Summary: create a summary for each document, embed that along with (or instead of) the document.
Hypothetical questions: create hypothetical questions that each document would be appropriate to answer, embed those along with (or instead of) the document.
```
How can I do that? In the tutorial, they just use multivector embed insteaad of the document.
```
# The vectorstore to use to index the child chunks
vectorstore = Chroma(collection_name="summaries", embedding_function=OpenAIEmbeddings())
# The storage layer for the parent documents
store = InMemoryByteStore()
id_key = "doc_id"
# The retriever (empty to start)
retriever = MultiVectorRetriever(
vectorstore=vectorstore,
byte_store=store,
id_key=id_key,
)
doc_ids = [str(uuid.uuid4()) for _ in docs]
summary_docs = [
Document(page_content=s, metadata={id_key: doc_ids[i]})
for i, s in enumerate(summaries)
]
retriever.vectorstore.add_documents(summary_docs)
retriever.docstore.mset(list(zip(doc_ids, docs)))
```
### Idea or request for content:
_No response_ | DOC: MultiVector Retriever along with document embedding | https://api.github.com/repos/langchain-ai/langchain/issues/20452/comments | 0 | 2024-04-15T07:28:53Z | 2024-07-22T16:08:41Z | https://github.com/langchain-ai/langchain/issues/20452 | 2,242,919,505 | 20,452 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import os
os.environ["TEST_PGVECTOR_PORT"] = "5432"
os.environ["TEST_PGVECTOR_DATABASE"] = "langchain_tests"
os.environ["TEST_PGVECTOR_USER"] = "postgres"
os.environ["TEST_PGVECTOR_PASSWORD"] = "postgres"
import os
from typing import List
from langchain_core.embeddings import Embeddings
from langchain_community.vectorstores.pgvector import PGVector
class FakeEmbeddings(Embeddings):
def embed_documents(self, texts: List[str]) -> List[List[float]]:
return [[float(1.0)] * 9 + [float(i)] for i in range(len(texts))]
def embed_query(self, text: str) -> List[float]:
return [float(1.0)] * 9 + [float(0.0)]
CONNECTION_STRING = PGVector.connection_string_from_db_params(
driver=os.environ.get("TEST_PGVECTOR_DRIVER", "psycopg2"),
host=os.environ.get("TEST_PGVECTOR_HOST", "localhost"),
port=int(os.environ.get("TEST_PGVECTOR_PORT", "6024")),
database=os.environ.get("TEST_PGVECTOR_DATABASE", "langchain"),
user=os.environ.get("TEST_PGVECTOR_USER", "langchain"),
password=os.environ.get("TEST_PGVECTOR_PASSWORD", "langchain"),
)
texts = ["foo", "bar", "baz"]
metadatas = [{"page": str(i)} for i in range(len(texts))]
docsearch = PGVector.from_texts(
texts=texts,
collection_name="test_collection_filter",
embedding=FakeEmbeddings(),
metadatas=metadatas,
connection_string=CONNECTION_STRING,
pre_delete_collection=True,
)
output = docsearch.similarity_search_with_score(
"foo", k=3, filter={"page": {"OR": [{"EQ": "0"}, {"EQ": "2"}]}}
)
# Alternatively:
output = docsearch.similarity_search_with_score(
"foo", k=3, filter={"page": {"AND": [{"IN": ["0","1"]}, {"NIN": ["1"]}]}}
)
```
This raises:
```
TypeError: PGVector._create_filter_clause() takes 2 positional arguments but 3 were given
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/mferenaz/Projects/personal/langchain/libs/community/langchain_community/vectorstores/pgvector.py", line 601, in similarity_search_with_score
docs = self.similarity_search_with_score_by_vector(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mferenaz/Projects/personal/langchain/libs/community/langchain_community/vectorstores/pgvector.py", line 626, in similarity_search_with_score_by_vector
results = self.__query_collection(embedding=embedding, k=k, filter=filter)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mferenaz/Projects/personal/langchain/libs/community/langchain_community/vectorstores/pgvector.py", line 945, in __query_collection
filter_clauses = self._create_filter_clause_json_deprecated(filter)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mferenaz/Projects/personal/langchain/libs/community/langchain_community/vectorstores/pgvector.py", line 828, in _create_filter_clause_json_deprecated
filter_by_metadata = self._create_filter_clause_deprecated(key, value)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mferenaz/Projects/personal/langchain/libs/community/langchain_community/vectorstores/pgvector.py", line 797, in _create_filter_clause_deprecated
or_clauses = [
^
File "/Users/mferenaz/Projects/personal/langchain/libs/community/langchain_community/vectorstores/pgvector.py", line 798, in <listcomp>
self._create_filter_clause(key, sub_value)
TypeError: PGVector._create_filter_clause() takes 2 positional arguments but 3 were given
```
### Description
While using PGVector without the newer jsonb format one can filter using the _create_filter_clause_deprecated method. That allows filtering with EQ, NIN, IN, etc. but also with OR and AND, the problem with this two is that the code should be calling itself but it calls the newer _create_filter_clause that does not support the same method signature and raises an error.
Expected Result:
When the PGVector is created without the usejsonb flag in True, should allow to filter with OR and AND conditions
Actual Result:
Adding an OR or AND filter without the usejsonb flag ends in an error raising
### System Info
langchain==0.1.14
langchain-community==0.0.31
langchain-core==0.1.40
langchain-openai==0.0.8
langchain-text-splitters==0.0.1 | PGVector deprecated create_filter_clause uses incorrect method. Can't filter by OR or AND conditions | https://api.github.com/repos/langchain-ai/langchain/issues/20445/comments | 0 | 2024-04-14T22:26:26Z | 2024-07-21T16:07:15Z | https://github.com/langchain-ai/langchain/issues/20445 | 2,242,399,652 | 20,445 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from langchain.prompts import PromptTemplate
import google.generativeai as genai
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_google_genai import GoogleGenerativeAI
GOOGLE_API_KEY='My Api Key'
llm = ChatGoogleGenerativeAI(model="gemini-pro", temperature=0, google_api_key=GOOGLE_API_KEY)
prompt = "Where is Tajmahal"
result = llm.invoke(prompt)
```
### Error Message and Stack Trace (if applicable)
`Retrying langchain_google_genai.chat_models._chat_with_retry.<locals>._chat_with_retry in 2.0 seconds as it raised DeadlineExceeded: 504 Deadline Exceeded.`
### Description
I'm attempting to use ChatGoogleGenerativeAI, but it's running very slowly on my local Apple Silicon machine. The same code works well in Google Colab, but I'm receiving a 504 error locally.
### System Info
aiohttp==3.9.4
aiosignal==1.3.1
annotated-types==0.6.0
attrs==23.2.0
cachetools==5.3.3
certifi==2024.2.2
charset-normalizer==3.3.2
dataclasses-json==0.6.4
frozenlist==1.4.1
google-ai-generativelanguage==0.6.1
google-api-core==2.18.0
google-api-python-client==2.125.0
google-auth==2.29.0
google-auth-httplib2==0.2.0
google-generativeai==0.5.0
googleapis-common-protos==1.63.0
grpcio==1.62.1
grpcio-status==1.62.1
httplib2==0.22.0
idna==3.7
jsonpatch==1.33
jsonpointer==2.4
langchain==0.1.16
langchain-community==0.0.32
langchain-core==0.1.42
langchain-google-genai==1.0.2
langchain-text-splitters==0.0.1
langsmith==0.1.47
marshmallow==3.21.1
multidict==6.0.5
mypy-extensions==1.0.0
numpy==1.26.4
orjson==3.10.0
packaging==23.2
proto-plus==1.23.0
protobuf==4.25.3
pyasn1==0.6.0
pyasn1_modules==0.4.0
pydantic==2.7.0
pydantic_core==2.18.1
pyparsing==3.1.2
PyYAML==6.0.1
requests==2.31.0
rsa==4.9
SQLAlchemy==2.0.29
tenacity==8.2.3
tqdm==4.66.2
typing-inspect==0.9.0
typing_extensions==4.11.0
uritemplate==4.1.1
urllib3==2.2.1
yarl==1.9.4
platform mac
python version 3.12.3 | ChatGoogleGenerativeAI is running slowly on my Apple Silicon machine. | https://api.github.com/repos/langchain-ai/langchain/issues/20444/comments | 4 | 2024-04-14T21:08:53Z | 2024-08-02T11:18:19Z | https://github.com/langchain-ai/langchain/issues/20444 | 2,242,367,841 | 20,444 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code:
``` weaviate.Client(url=url, auth=auth, **kwargs)``` is a v3 type of implementation of Weaviate client
### Error Message and Stack Trace (if applicable)
```bash
DeprecationWarning: Dep016: You are using the Weaviate v3 client, which is deprecated. Consider upgrading to the new and improved v4 client instead! See here for usage: https://weaviate.io/developers/weaviate/client-libraries/python warnings.warn(```
### Description
- I am trying to create a weaviate client using langhchain's `from langchain_community.vectorstores.weaviate import Weaviate` library.
- I expected it to create a weaviate client
- The current langchain internal implementaion of creating of weaveate client is working but is `v3` client API implementation which results in the warning, it should be upgraded to `v4` client API.
### System Info
`pip freeze | grep langchain` :
```bash
langchain==0.1.12
langchain-community==0.0.28
langchain-core==0.1.42
langchain-openai==0.0.8
langchain-pinecone==0.0.3
langchain-text-splitters==0.0.1
langchain-weaviate==0.0.1.post1
```
- Platform: `macOS 14.4.1`
- Python version: `Python 3.11.4`
---------------
`python -m langchain_core.sys_info`:
```bash
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.4.0: Fri Mar 15 00:12:41 PDT 2024; root:xnu-10063.101.17~1/RELEASE_ARM64_T8103
> Python Version: 3.11.4 (main, Jul 5 2023, 08:54:11) [Clang 14.0.6 ]
Package Information
-------------------
> langchain_core: 0.1.42
> langchain: 0.1.16
> langchain_community: 0.0.32
> langsmith: 0.1.38
> langchain_experimental: 0.0.57
> langchain_openai: 0.1.3
> langchain_text_splitters: 0.0.1
```
| Depreciated initialisation (v3) of Weaviate Vector Database client | https://api.github.com/repos/langchain-ai/langchain/issues/20442/comments | 3 | 2024-04-14T19:52:28Z | 2024-05-30T10:04:52Z | https://github.com/langchain-ai/langchain/issues/20442 | 2,242,336,909 | 20,442 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Getting error for this code -
```python
from langchain_openai import OpenAI
llm = OpenAI(model='gpt-3.5-turbo', temperature=0, streaming=True)
llm('how are you?')
```
### Error Message and Stack Trace (if applicable)
NotFoundError: Error code: 404 - {'error': {'message': 'This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?', 'type': 'invalid_request_error', 'param': 'model', 'code': None}}
### Description
Getting above specified error when configuring `gpt-3.5-turbo` model with `OpenAI`
However, this model works as expected with `ChatOpenAI`
On the other hand, `gpt-3.5-turbo-instruct` model also works as expected with `OpenAI`, code is mentioned below -
```python
from langchain_openai import OpenAI
llm = OpenAI(model='gpt-3.5-turbo-instruct',temperature=0, streaming=True)
llm('how are you?')
```
Here is the screenshot for reference -
<img width="1193" alt="Screenshot 2024-04-15 at 12 02 51 AM" src="https://github.com/langchain-ai/langchain/assets/47742503/50cbe4de-9b94-4569-85d3-f92137aacbe4">
I believe this issue is due to configuring non-supported model with `OpenAI` instead of `ChatOpenAI`
**Observation 🔍**
I referred the codebase of [openai python package](https://github.com/openai/openai-python) and observed that there are some set of models which has only support of `v1/chat/completions` (`ChatOpenAI` as implemented within langchain). Check these files for more details -
- `v1/chat/completions`: [src/openai/resources/chat/completions.py](https://github.com/openai/openai-python/blob/595e6b834fe26e08e9ac5415ab15bc19b8661f2c/src/openai/resources/chat/completions.py#L46) (list of specific models mentioned here)
- `v1/completions`: [src/openai/resources/completions.py](https://github.com/openai/openai-python/blob/595e6b834fe26e08e9ac5415ab15bc19b8661f2c/src/openai/resources/completions.py#L39)
**Potential Fix 🤔**
Should we validate the model name by referring to the same list when handling parameters for OpenAI, and raise an error accordingly?
**I can work on this, please check and let me know**
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.4.0: Fri Mar 15 00:11:05 PDT 2024; root:xnu-10063.101.17~1/RELEASE_X86_64
> Python Version: 3.11.8 (main, Feb 26 2024, 15:43:17) [Clang 14.0.6 ]
Package Information
-------------------
> langchain_core: 0.1.42
> langchain: 0.1.16
> langchain_community: 0.0.32
> langsmith: 0.1.47
> langchain_openai: 0.1.3
> langchain_text_splitters: 0.0.1
> langchainhub: 0.1.15
> openai: 1.17.0
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | NotFoundError for OpenAI with gpt-3.5-turbo model | https://api.github.com/repos/langchain-ai/langchain/issues/20441/comments | 4 | 2024-04-14T19:07:09Z | 2024-04-17T04:30:53Z | https://github.com/langchain-ai/langchain/issues/20441 | 2,242,317,151 | 20,441 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import os
from dotenv import load_dotenv
from langchain_community.chat_models import ChatOllama
from langchain_community.tools import DuckDuckGoSearchRun
from langchain_community.utilities import DuckDuckGoSearchAPIWrapper
from transformers import AutoTokenizer
from langchain.tools import Tool
load_dotenv()
hf_token = os.environ.get("HF_TOKEN")
llm = ChatOllama(model="qwen:14b")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-14B", token=hf_token)
search_wrapper = DuckDuckGoSearchAPIWrapper(region="en-us", max_results=5)
# Set up the DuckDuckGo search tool
search_tool = DuckDuckGoSearchRun(verbose=True, api_wrapper=search_wrapper)
# Set up the search tools list
search_tools = [
Tool(
name="Search",
func=search_tool.run,
description="Use the DuckDuckGo search engine to find information",
),
]
llm_with_tools = llm.bind_tools(tools=search_tools)
```
### Error Message and Stack Trace (if applicable)
```
AttributeError Traceback (most recent call last)
Cell In[1], [line 26](vscode-notebook-cell:?execution_count=1&line=26)
[17](vscode-notebook-cell:?execution_count=1&line=17) # Set up the search tools list
[18](vscode-notebook-cell:?execution_count=1&line=18) search_tools = [
[19](vscode-notebook-cell:?execution_count=1&line=19) Tool(
[20](vscode-notebook-cell:?execution_count=1&line=20) name="Search",
(...)
[23](vscode-notebook-cell:?execution_count=1&line=23) ),
[24](vscode-notebook-cell:?execution_count=1&line=24) ]
---> [26](vscode-notebook-cell:?execution_count=1&line=26) llm_with_tools = llm.bind_tools(tools=search_tools)
AttributeError: 'ChatOllama' object has no attribute 'bind_tools'
```
### Description
The API reference for ChatOllama lists a `bind_tools()` method [here](https://api.python.langchain.com/en/latest/chat_models/langchain_community.chat_models.ollama.ChatOllama.html#langchain_community.chat_models.ollama.ChatOllama.bind_tools). When I instantiate ChatOllama and attempt to bind the search tool, it throws an AttributeError saying the object has no attribute 'bind_tools'.
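In the meantime, a defensive check avoids the crash (a sketch; it does not add tool-calling support to ChatOllama):
```python
# Only bind tools if the chat model actually implements bind_tools;
# ChatOllama in langchain_community 0.0.31 does not.
if hasattr(llm, "bind_tools"):
    llm_with_tools = llm.bind_tools(tools=search_tools)
else:
    llm_with_tools = llm  # fall back to the plain model
```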
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.4.0: Fri Mar 15 00:10:42 PDT 2024; root:xnu-10063.101.17~1/RELEASE_ARM64_T6000
> Python Version: 3.12.2 | packaged by conda-forge | (main, Feb 16 2024, 20:54:21) [Clang 16.0.6 ]
Package Information
-------------------
> langchain_core: 0.1.40
> langchain: 0.1.14
> langchain_community: 0.0.31
> langsmith: 0.1.38
> langchain_experimental: 0.0.56
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | 'ChatOllama' object has no attribute 'bind_tools' | https://api.github.com/repos/langchain-ai/langchain/issues/20439/comments | 13 | 2024-04-14T18:08:10Z | 2024-08-04T08:48:34Z | https://github.com/langchain-ai/langchain/issues/20439 | 2,242,292,340 | 20,439 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from typing import Any

from openai import AsyncOpenAI
from langchain.agents.openai_assistant import OpenAIAssistantRunnable

# `settings` holds my application configuration (defined elsewhere)
client = AsyncOpenAI(
    # Defaults to os.environ.get("OPENAI_API_KEY")
    # Otherwise use: api_key="Your_API_Key",
    api_key=settings.OPENAI_API_KEY,
)

async def acreate_assistant(**kwargs: Any):
    return await OpenAIAssistantRunnable.acreate_assistant(async_client=client, **kwargs)
```
When I run `await acreate_assistant(...)` (with the usual assistant kwargs) it shows: `Unexpected error occurred: OpenAIError - The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable`
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Here is the declaration of the `client` field in `OpenAIAssistantRunnable`:
```
client: Any = Field(default_factory=_get_openai_client)
```
Because I am using the async client, I never pass `client`, so the default is created by `_get_openai_client`:
```
class OpenAI(SyncAPIClient):
def __init__(...):
if api_key is None:
api_key = os.environ.get("OPENAI_API_KEY")
if api_key is None:
raise OpenAIError(
"The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable"
)
self.api_key = api_key
```
but `OpenAI` checks the `api_key` automatically.
So creating the `OpenAIAssistantRunnable` always fails when I only configure an `AsyncOpenAI` client.
I don't know if it's OK to remove the `default_factory` of `client`. If it is, I'll create a PR to fix this problem.
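For illustration, the change I have in mind is just this (a sketch, not the actual patch):
```python
# Drop the eager default_factory so that passing only async_client no longer
# constructs a sync OpenAI() client, which validates OPENAI_API_KEY on creation.
client: Any = None  # was: Field(default_factory=_get_openai_client)
```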
### System Info
```
langchain==0.1.16
langchain-community==0.0.32
langchain-core==0.1.42
langchain-openai==0.1.0
langchain-text-splitters==0.0.1
langsmith==0.1.31
``` | create openai assistant object failed when I use AsyncOpenAI client | https://api.github.com/repos/langchain-ai/langchain/issues/20436/comments | 2 | 2024-04-14T15:03:37Z | 2024-08-05T16:08:41Z | https://github.com/langchain-ai/langchain/issues/20436 | 2,242,210,783 | 20,436 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.embeddings import LocalAIEmbeddings
from langchain_community.vectorstores import FAISS

# OPENAI_BASE_URL and OPENAI_API_KEY point at the LocalAI server (defined elsewhere)
embeddings_model = LocalAIEmbeddings(
openai_api_base=OPENAI_BASE_URL, openai_api_key=OPENAI_API_KEY, model="bge-large-zh-v1.5"
)
db = FAISS.load_local("data.db", embeddings_model, allow_dangerous_deserialization=True)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Loading the FAISS index with the LocalAI embeddings model fails with: `AttributeError: module 'openai' has no attribute 'error'`
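A quick check suggests an SDK-version mismatch rather than a problem with the embeddings themselves (a diagnostic sketch; the retry logic in `LocalAIEmbeddings` still imports `openai.error`, which was removed in openai>=1.0):
```python
from importlib.metadata import version

import openai

print(version("openai"))         # 1.x removed the openai.error module
print(hasattr(openai, "error"))  # False on openai>=1.0 reproduces this AttributeError
```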
### System Info
langchain 0.1.16
linux
Python :: 3.11 | How to specify custom embeddings? | https://api.github.com/repos/langchain-ai/langchain/issues/20428/comments | 0 | 2024-04-14T01:55:33Z | 2024-07-21T16:07:05Z | https://github.com/langchain-ai/langchain/issues/20428 | 2,241,859,492 | 20,428 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
### Error Message and Stack Trace (if applicable)
```
MATCH (n:`category`)
WHERE n.embedding IS null AND
any(k in $props WHERE n[k] IS NOT null) RETURN elementId(n) AS id,
reduce(str='',k IN $props | str + '\\n' + k + ':' + coalesce(n[k], '')) AS text LIMIT 1000
```
### Description
The query built at line 1278 always prepends the property name, a colon, and a newline to every queried value, so it distorts the embeddings:
```
MATCH (n:`category`)
WHERE n.embedding IS null AND
any(k in $props WHERE n[k] IS NOT null) RETURN elementId(n) AS id,
reduce(str='',k IN $props | str + '\\n' + k + ':' + coalesce(n[k], '')) AS text LIMIT 1000
```
I am suggesting removing this part of the query: `'\\n' + k + ':'`
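Applying that removal, the `reduce` clause would read (a sketch of the suggested change; note the values would then be concatenated with no separator at all, so a single space might be preferable):
```
reduce(str='', k IN $props | str + coalesce(n[k], '')) AS text
```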
### System Info
The bug is multi-platform. | Method from_existing_graph() distorts queried data from Neo4J so impacts embeddings. | https://api.github.com/repos/langchain-ai/langchain/issues/20423/comments | 3 | 2024-04-13T23:29:05Z | 2024-07-20T16:07:25Z | https://github.com/langchain-ai/langchain/issues/20423 | 2,241,817,996 | 20,423 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
**
### Error Message and Stack Trace (if applicable)
**
### Description
The Voyage AI Reranker cannot be imported as a module without #19645
### System Info
** | Request for a langchain-voyageai package release | https://api.github.com/repos/langchain-ai/langchain/issues/20408/comments | 2 | 2024-04-13T07:22:51Z | 2024-07-30T16:06:45Z | https://github.com/langchain-ai/langchain/issues/20408 | 2,241,402,870 | 20,408 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [ ] I am sure that this is a bug in LangChain rather than my code.
- [ ] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Hi, I am trying to generate question/answer pairs from a PDF by running the following code snippet:
````python
from langchain.chains import QAGenerationChain
from langchain.prompts import PromptTemplate
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.document_loaders import PyPDFLoader
from langchain_community.llms import Cohere

text_splitter = RecursiveCharacterTextSplitter(
    # Set a really small chunk size, just to show.
    chunk_size=1000,
    chunk_overlap=100,
    length_function=len,
)

# data_pdf holds the path to the PDF being processed
loader = PyPDFLoader(data_pdf)
pages = loader.load_and_split(text_splitter=text_splitter)
len(pages)  # 260

templ = """You are a smart assistant designed to come up with a meaningful question and answer pair. The question should be to the point and the answer should be as detailed as possible.
Given a piece of text, you must come up with a question and answer pair that can be used to evaluate a QA bot. Do not make up stuff. Stick to the text to come up with the question and answer pair.
When coming up with this question/answer pair, you must respond in the following format:
```
{{
    "question": "$YOUR_QUESTION_HERE",
    "answer": "$THE_ANSWER_HERE"
}}
```
Everything between the ``` must be valid json.
Please come up with a question/answer pair, in the specified JSON format, for the following text:
----------------
{text}"""

PROMPT = PromptTemplate.from_template(templ)
llm = Cohere(model="command", temperature=0)  # command, command-light
chain = QAGenerationChain.from_llm(llm=llm, prompt=PROMPT)
# llm is as follows:
# Cohere(client=<cohere.client.Client object at 0x00000188268D3BD0>, async_client=<cohere.client.AsyncClient object at 0x0000018827B22690>, model='command', temperature=0.0)

# Running the following should generate a QA set, but it raises the error below:
chain.invoke(pages[40].page_content)
````
I tried other models like OpenAI and Google Gemini Pro, and the QAGenerationChain fails at the same step.
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[17], line 1
----> 1 chain.invoke(pages[40].page_content)
File ~\anaconda3\envs\GenAI_py311\Lib\site-packages\langchain\chains\base.py:163, in Chain.invoke(self, input, config, **kwargs)
161 except BaseException as e:
162 run_manager.on_chain_error(e)
--> 163 raise e
164 run_manager.on_chain_end(outputs)
166 if include_run_info:
File ~\anaconda3\envs\GenAI_py311\Lib\site-packages\langchain\chains\base.py:153, in Chain.invoke(self, input, config, **kwargs)
150 try:
151 self._validate_inputs(inputs)
152 outputs = (
--> 153 self._call(inputs, run_manager=run_manager)
154 if new_arg_supported
155 else self._call(inputs)
156 )
158 final_outputs: Dict[str, Any] = self.prep_outputs(
159 inputs, outputs, return_only_outputs
160 )
161 except BaseException as e:
File ~\anaconda3\envs\GenAI_py311\Lib\site-packages\langchain\chains\qa_generation\base.py:73, in QAGenerationChain._call(self, inputs, run_manager)
67 def _call(
68 self,
69 inputs: Dict[str, Any],
70 run_manager: Optional[CallbackManagerForChainRun] = None,
71 ) -> Dict[str, List]:
72 docs = self.text_splitter.create_documents([inputs[self.input_key]])
---> 73 results = self.llm_chain.generate(
74 [{"text": d.page_content} for d in docs], run_manager=run_manager
75 )
76 qa = [json.loads(res[0].text) for res in results.generations]
77 return {self.output_key: qa}
File ~\anaconda3\envs\GenAI_py311\Lib\site-packages\langchain\chains\llm.py:115, in LLMChain.generate(self, input_list, run_manager)
113 callbacks = run_manager.get_child() if run_manager else None
114 if isinstance(self.llm, BaseLanguageModel):
--> 115 return self.llm.generate_prompt(
116 prompts,
117 stop,
118 callbacks=callbacks,
119 **self.llm_kwargs,
120 )
121 else:
122 results = self.llm.bind(stop=stop, **self.llm_kwargs).batch(
123 cast(List, prompts), {"callbacks": callbacks}
124 )
File ~\anaconda3\envs\GenAI_py311\Lib\site-packages\langchain_core\language_models\llms.py:597, in BaseLLM.generate_prompt(self, prompts, stop, callbacks, **kwargs)
589 def generate_prompt(
590 self,
591 prompts: List[PromptValue],
(...)
594 **kwargs: Any,
595 ) -> LLMResult:
596 prompt_strings = [p.to_string() for p in prompts]
--> 597 return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
File ~\anaconda3\envs\GenAI_py311\Lib\site-packages\langchain_core\language_models\llms.py:767, in BaseLLM.generate(self, prompts, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
752 if (self.cache is None and get_llm_cache() is None) or self.cache is False:
753 run_managers = [
754 callback_manager.on_llm_start(
755 dumpd(self),
(...)
765 )
766 ]
--> 767 output = self._generate_helper(
768 prompts, stop, run_managers, bool(new_arg_supported), **kwargs
769 )
770 return output
771 if len(missing_prompts) > 0:
File ~\anaconda3\envs\GenAI_py311\Lib\site-packages\langchain_core\language_models\llms.py:634, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
632 for run_manager in run_managers:
633 run_manager.on_llm_error(e, response=LLMResult(generations=[]))
--> 634 raise e
635 flattened_outputs = output.flatten()
636 for manager, flattened_output in zip(run_managers, flattened_outputs):
File ~\anaconda3\envs\GenAI_py311\Lib\site-packages\langchain_core\language_models\llms.py:621, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
611 def _generate_helper(
612 self,
613 prompts: List[str],
(...)
617 **kwargs: Any,
618 ) -> LLMResult:
619 try:
620 output = (
--> 621 self._generate(
622 prompts,
623 stop=stop,
624 # TODO: support multiple run managers
625 run_manager=run_managers[0] if run_managers else None,
626 **kwargs,
627 )
628 if new_arg_supported
629 else self._generate(prompts, stop=stop)
630 )
631 except BaseException as e:
632 for run_manager in run_managers:
File ~\anaconda3\envs\GenAI_py311\Lib\site-packages\langchain_core\language_models\llms.py:1231, in LLM._generate(self, prompts, stop, run_manager, **kwargs)
1228 new_arg_supported = inspect.signature(self._call).parameters.get("run_manager")
1229 for prompt in prompts:
1230 text = (
-> 1231 self._call(prompt, stop=stop, run_manager=run_manager, **kwargs)
1232 if new_arg_supported
1233 else self._call(prompt, stop=stop, **kwargs)
1234 )
1235 generations.append([Generation(text=text)])
1236 return LLMResult(generations=generations)
File ~\anaconda3\envs\GenAI_py311\Lib\site-packages\langchain_community\llms\cohere.py:217, in Cohere._call(self, prompt, stop, run_manager, **kwargs)
202 """Call out to Cohere's generate endpoint.
203
204 Args:
(...)
214 response = cohere("Tell me a joke.")
215 """
216 params = self._invocation_params(stop, **kwargs)
--> 217 response = completion_with_retry(
218 self, model=self.model, prompt=prompt, **params
219 )
220 _stop = params.get("stop_sequences")
221 return self._process_response(response, _stop)
File ~\anaconda3\envs\GenAI_py311\Lib\site-packages\langchain_community\llms\cohere.py:45, in completion_with_retry(llm, **kwargs)
43 def completion_with_retry(llm: Cohere, **kwargs: Any) -> Any:
44 """Use tenacity to retry the completion call."""
---> 45 retry_decorator = _create_retry_decorator(llm)
47 @retry_decorator
48 def _completion_with_retry(**kwargs: Any) -> Any:
49 return llm.client.generate(**kwargs)
File ~\anaconda3\envs\GenAI_py311\Lib\site-packages\langchain_community\llms\cohere.py:38, in _create_retry_decorator(llm)
31 max_seconds = 10
32 # Wait 2^x * 1 second between each retry starting with
33 # 4 seconds, then up to 10 seconds, then 10 seconds afterwards
34 return retry(
35 reraise=True,
36 stop=stop_after_attempt(llm.max_retries),
37 wait=wait_exponential(multiplier=1, min=min_seconds, max=max_seconds),
---> 38 retry=(retry_if_exception_type(cohere.error.CohereError)),
39 before_sleep=before_sleep_log(logger, logging.WARNING),
40 )
AttributeError: module 'cohere' has no attribute 'error'
### Description
```python
chain = QAGenerationChain.from_llm(llm=llm, prompt=PROMPT)
chain.invoke(pages[40].page_content)
```
Both `chain.run` and `chain.invoke` raise the error above.
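A quick diagnostic points at the installed SDK rather than the chain (a sketch; the retry decorator in `langchain_community.llms.cohere` references `cohere.error.CohereError`, which no longer exists in cohere>=5):
```python
from importlib.metadata import version

import cohere

print(version("cohere"))         # 5.x removed the cohere.error module
print(hasattr(cohere, "error"))  # False reproduces the AttributeError above
```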
### System Info
python 3.11
langchain==0.1.9
langchain-community==0.0.24
langchain-core==0.1.42
langchain-google-genai==1.0.2 | Error run or invoke method of QAGenerationChain | https://api.github.com/repos/langchain-ai/langchain/issues/20406/comments | 3 | 2024-04-13T04:48:27Z | 2024-07-21T16:07:00Z | https://github.com/langchain-ai/langchain/issues/20406 | 2,241,293,176 | 20,406 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
I have the following code:
```python
from langchain_community.agent_toolkits.sql.base import create_sql_agent
from langchain_community.utilities.sql_database import SQLDatabase

# llm is an OpenAI chat model configured elsewhere
db = SQLDatabase.from_uri(database_uri="%POSTGRES_URI%")
prompt = "what is the total corpus of money in banks"
conversation = create_sql_agent(llm=llm, db=db, agent_type="openai-tools", verbose=True, top_k=15)
return conversation.invoke(input={"input": prompt})  # excerpted from an enclosing function
```
### Error Message and Stack Trace (if applicable)
> Entering new SQL Agent Executor chain...
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
Invoking: `sql_db_list_tables` with `{}`
ERROR:
1 validation error for _ListSQLDataBaseToolInput
tool_input
field required (type=value_error.missing)
### Description
I recently updated langchain and pydantic to their latest versions and my code broke: when using the SQL agent I get the missing-input error above. I think the agent is invoking `sql_db_list_tables` with no value for its input when it tries to read the table schemas, and I believe this is a bug in the langchain library itself.
Kindly help with this issue, as it is becoming a blocker. Thanks.
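For reference, the failure in the trace can be reproduced in isolation (a sketch that assumes the tool's input schema marks `tool_input` as required, as the trace implies):
```python
from pydantic.v1 import BaseModel, Field

class _ListSQLDataBaseToolInput(BaseModel):
    tool_input: str = Field(..., description="An empty string")

_ListSQLDataBaseToolInput(**{})  # raises: tool_input - field required (type=value_error.missing)
```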
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.1.0: Mon Oct 9 21:27:24 PDT 2023; root:xnu-10002.41.9~6/RELEASE_ARM64_T6000
> Python Version: 3.11.7 (main, Dec 4 2023, 18:10:11) [Clang 15.0.0 (clang-1500.1.0.2.5)]
Package Information
-------------------
> langchain_core: 0.1.42
> langchain: 0.1.16
> langchain_community: 0.0.32
> langsmith: 0.1.45
> langchain_experimental: 0.0.50
> langchain_openai: 0.1.3
> langchain_pinecone: 0.0.3
> langchain_text_splitters: 0.0.1
> langchainhub: 0.1.14
> pydantic-settings: 2.1.0
> pydantic: 2.7.0
> pydantic-core: 2.18.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | [BUG] Getting validation error for SQL Database, type=value_error.missing, when its trying to list tables in the db | https://api.github.com/repos/langchain-ai/langchain/issues/20405/comments | 6 | 2024-04-13T04:14:38Z | 2024-08-07T16:06:20Z | https://github.com/langchain-ai/langchain/issues/20405 | 2,241,270,531 | 20,405 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
I am ingesting docs into Qdrant using LangChain but am not able to retrieve a list of chunks from the vector store.
My ingest.py script:
```python
import os
import uuid

from dotenv import load_dotenv
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_openai import OpenAIEmbeddings
from langchain_community.document_loaders import PDFMinerLoader
from langchain_community.vectorstores.qdrant import Qdrant

load_dotenv()
OPENAI_API_TOKEN = os.getenv('OPENAI_API_KEY')

def load_documents():
    texts = []
    all_items = os.listdir("files/")
    for _, item in enumerate(all_items):
        print("Processing file: {}".format(item))
        unique_id = str(uuid.uuid4())
        text_splitter = RecursiveCharacterTextSplitter()
        loader = PDFMinerLoader("files/{}".format(item), extract_images=True)
        docs = loader.load_and_split(text_splitter)
        for doc in docs:
            doc.metadata["pdf"] = doc.metadata["source"]
            doc.metadata["unique_id"] = unique_id
        texts.extend(docs)
    return texts

def create_qdrant_store(texts):
    print("Creating qdrant store")
    embeddings = OpenAIEmbeddings(openai_api_key=OPENAI_API_TOKEN, model="text-embedding-3-small")
    Qdrant.from_documents(
        texts,
        embeddings,
        url="http://localhost:6333",
        prefer_grpc=True,
        collection_name="aff_container",
    )

def main():
    texts = load_documents()
    print(texts)
    create_qdrant_store(texts)
    print("Documents loaded successfully!")

if __name__ == "__main__":
    main()
```
My get files endpoint:
```python
# The Flask app, embeddings object, QdrantClient and Qdrant imports are set up elsewhere
@app.route('/api/get_files', methods=['GET'])
def get_files():
    """
    Endpoint to get the list of files in the index.
    """
    try:
        client = QdrantClient(url="http://localhost:6333")
        db = Qdrant(client=client, collection_name='affilix_container', embeddings=embeddings)
        files = []
        ## ----- want to list all the chunks here ----- ##
        print(client.get_collection(collection_name='affilix_container'))  # returns collection info, not the chunks
        for doc in db.collection_name:  # BUG: this iterates the collection-name string, not the stored documents
            file_id = doc.metadata['unique_id']
            file_name = doc.metadata['pdf'].replace("files/", "").replace("temp/", "")
            if file_id not in [file['id'] for file in files]:
                files.append({'id': file_id, 'file_name': file_name})
        return jsonify({'files': files})
    except Exception as e:
        return jsonify({'error': str(e)}), 500
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am ingesting docs into Qdrant using LangChain but am not able to retrieve a list of chunks from the vector store.
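One way to list the stored chunks is to page through the collection with the Qdrant client's `scroll` API (a sketch; by default LangChain's `Qdrant` wrapper stores the chunk text under the `page_content` payload key and the document metadata under `metadata`):
```python
from qdrant_client import QdrantClient

client = QdrantClient(url="http://localhost:6333")
points, next_offset = client.scroll(
    collection_name="aff_container",  # the collection name used in ingest.py
    limit=100,
    with_payload=True,
)
for point in points:
    metadata = (point.payload or {}).get("metadata", {})
    print(metadata.get("unique_id"), metadata.get("pdf"))
```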
### System Info
$ pip freeze | grep langchain
langchain==0.1.9
langchain-community==0.0.24
langchain-core==0.1.40
langchain-openai==0.0.8 | Not able to list all the documents from qdrant vector store using langchain | https://api.github.com/repos/langchain-ai/langchain/issues/20382/comments | 2 | 2024-04-12T10:36:34Z | 2024-04-13T09:30:09Z | https://github.com/langchain-ai/langchain/issues/20382 | 2,239,762,769 | 20,382 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Definition of the output using PydanticOutputParser:
```python
from langchain_core.output_parsers import PydanticOutputParser
from langchain_core.pydantic_v1 import BaseModel, Field

class Joke(BaseModel):
    setup: str = Field(description="question to set up a joke")
    punchline: str = Field(description="answer to resolve the joke")

parser = PydanticOutputParser(pydantic_object=Joke)
```
### Error Message and Stack Trace (if applicable)
```
Error in RootListenersTracer.on_chain_end callback: ValueError()
```
### Description
Since I'm trying to build a conversational agent, I am using [RunnableWithMessageHistory](https://python.langchain.com/docs/expression_language/how_to/message_history/) to automatically add the messages to the history.
I would expect everything to work fine, but I obtain the following error
```
Error in RootListenersTracer.on_chain_end callback: ValueError()
```
from
```python
def _get_output_messages(
self, output_val: Union[str, BaseMessage, Sequence[BaseMessage], dict]
) -> List[BaseMessage]:
from langchain_core.messages import BaseMessage
if isinstance(output_val, dict):
output_val = output_val[self.output_messages_key or "output"]
if isinstance(output_val, str):
from langchain_core.messages import AIMessage
return [AIMessage(content=output_val)]
elif isinstance(output_val, BaseMessage):
return [output_val]
elif isinstance(output_val, (list, tuple)):
return list(output_val)
else:
raise ValueError()
```
Since I used `PydanticOutputParser`, `output_val` here is a dict containing a `Joke` object:
```
output_val = {'output': Joke(joke='Why did the egg refuse to tell jokes? Because it cracked under pressure!', message='Here a joke for you!')}
```
Joke objects do not seem to be supported by `_get_output_messages`.
How can I get it to work? I would like the history to update automatically when using `RunnableWithMessageHistory` together with a `PydanticOutputParser`.
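One workaround I am considering is to serialize the parsed object back into an `AIMessage` before `RunnableWithMessageHistory` records it, so that `_get_output_messages` receives a supported type (a sketch; `chain` stands for the runnable that currently ends in the `PydanticOutputParser`, and `.json()` assumes `Joke` is a pydantic v1 model):
```python
from langchain_core.messages import AIMessage
from langchain_core.runnables import RunnableLambda

def to_message(joke: Joke) -> AIMessage:
    return AIMessage(content=joke.json())  # use model_dump_json() for pydantic v2

chain_with_message_output = chain | RunnableLambda(to_message)
```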
### System Info
langchain==0.1.16
langchain-cli==0.0.19
langchain-community==0.0.32
langchain-core==0.1.42
langchain-experimental==0.0.10
langchain-openai==0.0.8
langchain-text-splitters==0.0.1
python version 3.10.9
OS: windows11 | How to use RunnableWithMessageHistory? | https://api.github.com/repos/langchain-ai/langchain/issues/20380/comments | 3 | 2024-04-12T10:18:41Z | 2024-04-16T13:08:19Z | https://github.com/langchain-ai/langchain/issues/20380 | 2,239,704,735 | 20,380 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.agents import AgentType, initialize_agent
from langchain_community.tools import BearlyInterpreterTool
from langchain_openai import ChatOpenAI

bearly_tool = BearlyInterpreterTool(api_key="...")  # assumption: the tool was constructed like this before add_file
bearly_tool.add_file(
    source_path="sample_data/US_GDP.csv", target_path="US_GDP.csv", description=""
)

tools = [bearly_tool.as_tool()]  # assumption: tools list built from the Bearly tool as in the docs
llm = ChatOpenAI(temperature=0)  # assumption: an OpenAI chat model

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.OPENAI_FUNCTIONS,
    verbose=True,
    handle_parsing_errors=True,
)

# Simple Queries
agent.run("What was the US GDP in 2019?")
```

### Error Message and Stack Trace (if applicable)

```
raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': '\'Evaluates python code in a sandbox environment. The environment resets on every execution. You must send the whole script every time and print your outputs. Script should be pure python code that can be evaluated. It should be in python format NOT markdown. The code should NOT be wrapped in backticks. All python packages including requests, matplotlib, scipy, numpy, pandas, etc are available. If you have any files outputted write them to "output/" relative to the execution path. Output can only be read from the directory, stdout, and stdin. Do not use things like plot.show() as it will not work instead write them out `output/` and a link to the file will be returned. print() any output and results so you can capture the output.\\n\\nThe following files available in the evaluation environment:
```
### Description
I want to use BearlyInterpreterTool, but I get the error above.
### System Info
Python version 3.10
Linux
langchain 0.1.14 | openai.BadRequestError: Error code: 400 - {'error': {'message': '\'Evaluates python code in a sandbox environment The environment resets on every execution. You must send the whole script every time and print your outputs. Script should be pure python code that can be evaluated. It should be in python format NOT markdown. | https://api.github.com/repos/langchain-ai/langchain/issues/20377/comments | 0 | 2024-04-12T09:25:08Z | 2024-07-19T16:08:30Z | https://github.com/langchain-ai/langchain/issues/20377 | 2,239,531,981 | 20,377 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from typing import Any, AsyncIterator, List, Sequence, cast
from langchain_core.runnables.schema import StreamEvent
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain_openai import ChatOpenAI
from langchain_core.prompts.chat import (
ChatPromptTemplate,
SystemMessagePromptTemplate,
HumanMessagePromptTemplate,
MessagesPlaceholder,
PromptTemplate,
)
import langchain_core
import typing
from langchain_core.documents import Document
from langchain_core.tools import tool
def foo(x: int) -> dict:
"""Foo"""
return {"x": 5}
@tool
def get_docs(x: int) -> list[Document]:
"""get_docs"""
return [Document(page_content="hello")]
def _with_nulled_run_id(events: Sequence[StreamEvent]) -> List[StreamEvent]:
"""Removes the run ids from events."""
return cast(List[StreamEvent], [{**event, "run_id": ""} for event in events])
async def _collect_events(events: AsyncIterator[StreamEvent]) -> List[StreamEvent]:
"""Collect the events and remove the run ids."""
materialized_events = [event async for event in events]
events_ = _with_nulled_run_id(materialized_events)
for event in events_:
event["tags"] = sorted(event["tags"])
return events_
prompt_obj = {
"name": None,
"input_variables": ["agent_scratchpad", "input"],
"input_types": {
"chat_history": typing.List[
typing.Union[
langchain_core.messages.ai.AIMessage,
langchain_core.messages.human.HumanMessage,
langchain_core.messages.chat.ChatMessage,
langchain_core.messages.system.SystemMessage,
langchain_core.messages.function.FunctionMessage,
langchain_core.messages.tool.ToolMessage,
]
],
"agent_scratchpad": typing.List[
typing.Union[
langchain_core.messages.ai.AIMessage,
langchain_core.messages.human.HumanMessage,
langchain_core.messages.chat.ChatMessage,
langchain_core.messages.system.SystemMessage,
langchain_core.messages.function.FunctionMessage,
langchain_core.messages.tool.ToolMessage,
]
],
},
"output_parser": None,
"partial_variables": {},
"metadata": {
"lc_hub_owner": "hwchase17",
"lc_hub_repo": "openai-tools-agent",
"lc_hub_commit_hash": "c18672812789a3b9697656dd539edf0120285dcae36396d0b548ae42a4ed66f5",
},
"tags": None,
"messages": [
SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=[], template="You are a helpful assistant")),
MessagesPlaceholder(variable_name="chat_history", optional=True),
HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=["input"], template="{input}")),
MessagesPlaceholder(variable_name="agent_scratchpad"),
],
"validate_template": False,
}
prompt = ChatPromptTemplate.parse_obj(prompt_obj)
tools = [get_docs]
llm = ChatOpenAI(model="gpt-3.5-turbo-1106", temperature=0)
# Construct the OpenAI Tools agent
agent = create_openai_tools_agent(llm, tools, prompt)
# Create an agent executor by passing in the agent and tools
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
events = await _collect_events(
agent_executor.astream_events({"input": "call get_docs."}, version="v1", include_names=["get_docs"])
)
assert events == [
{
"event": "on_tool_start",
"name": "get_docs",
"run_id": "",
"tags": [],
"metadata": {},
"data": {"input": {"x": 5}},
},
{
"event": "on_tool_end",
"name": "get_docs",
"run_id": "",
"tags": [],
"metadata": {},
"data": {"input": {"x": 5}, "output": [Document(page_content="hello")]},
},
]
```
### Error Message and Stack Trace (if applicable)
Assertion error:
```
---------------------------------------------------------------------------
{
"name": "AssertionError",
"message": "",
"stack": "---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
Cell In[4], line 96
92 agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
93 events = await _collect_events(
94 agent_executor.astream_events({\"input\": \"call get_docs.\"}, version=\"v1\", include_names=[\"get_docs\"])
95 )
---> 96 assert events == [
97 {
98 \"event\": \"on_tool_start\",
99 \"name\": \"get_docs\",
100 \"run_id\": \"\",
101 \"tags\": [],
102 \"metadata\": {},
103 \"data\": {\"input\": {\"x\": 5}},
104 },
105 {
106 \"event\": \"on_tool_end\",
107 \"name\": \"get_docs\",
108 \"run_id\": \"\",
109 \"tags\": [],
110 \"metadata\": {},
111 \"data\": {\"input\": {\"x\": 5}, \"output\": [Document(page_content=\"hello\")]},
112 },
113 ]
AssertionError: "
}
```
### Description
When using an agent executor and calling a tool, I expect the `on_tool_end` event to carry the actual output of the tool, rather than the output cast to a string.
This bug was originally raised [here](https://github.com/langchain-ai/langchain/discussions/18694), and partially fixed except for when using an agent executor in this [PR](https://github.com/langchain-ai/langchain/pull/18932).
This comment shows the cause of the issue: https://github.com/langchain-ai/langchain/pull/18932#issuecomment-2034933719
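A minimal illustration of the failure mode (a sketch of the effect, not the actual call site): stringifying the tool output before emitting `on_tool_end` turns the list of `Document`s into its repr, so the streamed event no longer carries the raw objects.
```python
from langchain_core.documents import Document

raw_output = [Document(page_content="hello")]
emitted = str(raw_output)  # what the async callback manager forwards today
assert emitted != raw_output
print(emitted)  # "[Document(page_content='hello')]"
```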
### System Info
```
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Thu Oct 5 21:02:42 UTC 2023
> Python Version: 3.10.6 | packaged by conda-forge | (main, Aug 22 2022, 20:35:26) [GCC 10.4.0]
Package Information
-------------------
> langchain_core: 0.1.42
> langchain: 0.1.16
> langchain_community: 0.0.32
> langsmith: 0.1.38
> langchain_experimental: 0.0.57
> langchain_openai: 0.1.3
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
``` | AsyncCallbackManagerForToolRun improperly casts on_tool_end to string | https://api.github.com/repos/langchain-ai/langchain/issues/20372/comments | 0 | 2024-04-12T07:16:02Z | 2024-07-19T16:08:26Z | https://github.com/langchain-ai/langchain/issues/20372 | 2,239,269,029 | 20,372 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_core.pydantic_v1 import BaseModel, Field

class AnalyzeResume(BaseModel):
    """As a professional HR agent that helps users analyze and generate a comprehensive report about the resume.\n
    Use the following pieces of context from a resume to answer the question at the end."""

    issues: list = Field(
        description="List down all issues detected in the resume with **title, description, solution format** with description & solution of at least 600 words"
    )
    suggestions: list = Field(
        description="List down all suggestions to improve the resume in **title, description format** with description of at least 600 words"
    )
    summary: str = Field(description="Summary of the resume.")
    negotiations: list = Field(
        description="List down all insightful points for resume negotiations in **title, description format** with description of at least 600 words"
    )
    companies: list = Field(
        description="List down all names and descriptions of the companies involved in the resume in **title, description format**"
    )
```
### Error Message and Stack Trace (if applicable)
![image](https://github.com/langchain-ai/langchain/assets/115447569/0149cc22-10a7-47b0-832b-b8427a0e4e33)
### Description
I want the output to be consistent: sometimes the output lists contain data, sometimes they are completely blank. Below is the prompt code I am using:
```
resume_parser = PydanticOutputParser(pydantic_object=AnalyzeResume)
template = """
As a professional HR agent that helps users analyze and generate a comprehensive report about the resume.\
Use the following pieces of context from a resume to answer the question at the end.\
\n
{context}\n
\n{format_instructions}\n
Question: {question}
"""
rag_prompt_custom = PromptTemplate(
template=template,
input_variables=["context", "question"],
partial_variables={"format_instructions": resume_parser.get_format_instructions()},
)
chroma_client = chromadb.HttpClient(host="chromadb", port=8000)
document_analysis_client = DocumentAnalysisClient(
endpoint=os.environ["AZURE_DI_SERVICE_ENDPOINT"],
credential=AzureKeyCredential(os.environ["AZURE_DI_API_KEY"]),
mode="page",
analysis_features=["ocrHighResolution"],
)
loader = DocumentIntelligenceLoader(
file_path=contract_obj.file_input.path, client=document_analysis_client, model="prebuilt-document"
)
documents = loader.load()
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=14500,
chunk_overlap=100,
add_start_index=True,
strip_whitespace=True,
separators=MARKDOWN_SEPARATORS,
)
docs = text_splitter.split_documents(documents)
collection_id = str(uuid.uuid4())
chroma_db = Chroma.from_documents(docs, azure_embeddings, client=chroma_client, collection_name=collection_id)
contract_obj.collection_id = collection_id
contract_obj.save()
vectorstore = Chroma(client=chroma_client, collection_name=collection_id, embedding_function=azure_embeddings)
retriever = vectorstore.as_retriever()
rag_chain = {"context": retriever, "question": RunnablePassthrough()} | rag_prompt_custom | azure_llm
with get_openai_callback() as cb:
result = rag_chain.invoke("Generate answers in provided format")
total_rag_tokens = cb.total_tokens
```
Just point me to the issue, I will try my best to fix it. (Might be chroma or Azure OpenAI or the text splitter)
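One thing worth noting: the chain above never pipes the model output through `resume_parser`, so nothing validates or repairs malformed JSON. A sketch of a more defensive wiring using `OutputFixingParser`, which gives malformed output one repair pass through the model:
```python
from langchain.output_parsers import OutputFixingParser

fixing_parser = OutputFixingParser.from_llm(parser=resume_parser, llm=azure_llm)
rag_chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | rag_prompt_custom
    | azure_llm
    | fixing_parser
)
```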
### System Info
langchain
langchain-community
langchain-core
langchain-google-genai==0.0.8
langchain-openai==0.0.5
langchainhub
langsmith
azure-ai-documentintelligence==1.0.0b1
azure-ai-formrecognizer==3.3.2
chroma-hnswlib==0.7.3
chromadb == 0.4.22 | Highly inconsistent output using PydanticOutputParser | https://api.github.com/repos/langchain-ai/langchain/issues/20369/comments | 2 | 2024-04-12T06:31:30Z | 2024-04-12T21:20:44Z | https://github.com/langchain-ai/langchain/issues/20369 | 2,239,190,057 | 20,369 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [x] I added a very descriptive title to this issue.
- [x] I searched the LangChain documentation with the integrated search.
- [x] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import streamlit as st
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferWindowMemory
from langchain_groq import ChatGroq  # assumption: ChatGroq comes from langchain-groq

def generate_response_langchain(user_question, complexity_level, memory):
    """
    Generate a response based on user question using LangChain
    """
    groq_chat = None
    if complexity_level == "simple":
        groq_chat = ChatGroq(
            groq_api_key=groq_api_key,  # key redacted in the original report
            model_name='llama2-70b-4096'  # LLaMA2-70b model
        )
    elif complexity_level == "large":
        groq_chat = ChatGroq(
            groq_api_key=groq_api_key,  # key redacted in the original report
            model_name='mixtral-8x7b-32768'  # Mixtral-8x7b model
        )
    elif complexity_level == "complex":
        groq_chat = ChatGroq(
            groq_api_key=groq_api_key,  # key redacted in the original report
            model_name='gemma-7b-it'  # Gemma-7b-it model
        )

    conversation = ConversationChain(
        llm=groq_chat,
        memory=memory
    )
    # NOTE: conversation is created but never invoked or returned in this excerpt

conversational_memory_length = 100

if 'chat_history' not in st.session_state:
    st.session_state.chat_history = []
memory = ConversationBufferWindowMemory(k=conversational_memory_length)

# Display chat history
st.write("Chat History:")
for sender, message in st.session_state.chat_history:
    st.write(f"{sender} {message}")

user_question = st.chat_input("Say something")
if user_question:
    st.session_state.chat_history.append(("User:", user_question))
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I have a problem with chat history: if I tell the model my name and then ask for it in the next message, the model cannot answer, as if it had been told to "forget the name." I need history-based contextualization - can I get your help?
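One likely culprit visible in the code: the `ConversationBufferWindowMemory` is recreated on every Streamlit rerun, which wipes the accumulated history. A sketch of keeping it in session state instead:
```python
if "memory" not in st.session_state:
    st.session_state.memory = ConversationBufferWindowMemory(k=conversational_memory_length)
memory = st.session_state.memory  # survives reruns, so earlier turns stay available to the model
```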
### System Info
![image](https://github.com/langchain-ai/langchain/assets/132850462/f31be9d6-a6ed-464b-b429-faa95781b30d) | i have a problem with chat history if i told him my name in next message, if i asked about my name the model cant answer "Forget the name." i need to make history-based Contextualization cant i get your help? | https://api.github.com/repos/langchain-ai/langchain/issues/20367/comments | 0 | 2024-04-12T04:11:04Z | 2024-07-19T16:08:20Z | https://github.com/langchain-ai/langchain/issues/20367 | 2,239,022,970 | 20,367 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from langchain_community.llms import HuggingFaceEndpoint
llm = HuggingFaceEndpoint(
endpoint_url="http://localhost:8010/",
max_new_tokens=512,
top_k=10,
top_p=0.95,
typical_p=0.95,
temperature=0.01,
repetition_penalty=1.03,
huggingfacehub_api_token=None
)
```
### Error Message and Stack Trace (if applicable)
```
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
Cell In[2], line 3
1 from langchain_community.llms import HuggingFaceEndpoint
----> 3 llm = HuggingFaceEndpoint(
4 endpoint_url="http://localhost:8010/",
5 max_new_tokens=512,
6 top_k=10,
7 top_p=0.95,
8 typical_p=0.95,
9 temperature=0.01,
10 repetition_penalty=1.03,
11 huggingfacehub_api_token=None
12 )
File ~/Jupyter/llm/venv/lib/python3.10/site-packages/langchain_core/load/serializable.py:120, in Serializable.__init__(self, **kwargs)
119 def __init__(self, **kwargs: Any) -> None:
--> 120 super().__init__(**kwargs)
121 self._lc_kwargs = kwargs
File ~/Jupyter/llm/venv/lib/python3.10/site-packages/pydantic/v1/main.py:341, in BaseModel.__init__(__pydantic_self__, **data)
339 values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
340 if validation_error:
--> 341 raise validation_error
342 try:
343 object_setattr(__pydantic_self__, '__dict__', values)
ValidationError: 1 validation error for HuggingFaceEndpoint
__root__
Could not authenticate with huggingface_hub. Please check your API token. (type=value_error)
```
### Description
#### Background
While restructuring our codebase in response to the deprecation of `HuggingFaceTextGenInference`, I encountered an error when attempting to create a `HuggingFaceEndpoint` with a locally hosted [TGI server](https://github.com/huggingface/text-generation-inference).
#### Issue
The error occurs in the `validate_environment` function of the `huggingface_endpoint.py` file, specifically in lines [170-179](https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/huggingface_endpoint.py#L170-L179).
The `@root_validator()` decorator raises an error when `huggingfacehub_api_token` is passed as `None`, because `validate_environment` calls `login(token=huggingfacehub_api_token)` unconditionally. By commenting out the block that processes the API token and manually setting `huggingfacehub_api_token` to `None`, I am able to successfully create an `InferenceClient`.
Since HuggingFaceTextGenInference is fused into HuggingFaceEndpoint in PR #17254, we need to add logic to handle cases where `huggingfacehub_api_token` is passed as `None` or when no environment variable `HUGGINGFACEHUB_API_TOKEN` is set. This is particularly necessary for setups using a locally hosted TGI server where authentication with the Huggingface Hub may not be required.
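For illustration, the guard I have in mind looks roughly like this (a hypothetical sketch, not the actual `huggingface_endpoint.py` code):
```python
if huggingfacehub_api_token:
    from huggingface_hub import login

    try:
        login(token=huggingfacehub_api_token)
    except Exception as e:
        raise ValueError(
            "Could not authenticate with huggingface_hub. "
            "Please check your API token."
        ) from e
# else: skip authentication entirely, e.g. for locally hosted TGI servers
```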
### System Info
huggingface-hub==0.22.2
langchain-community==0.0.32
platform: linux
python version: 3.10 | Handling huggingfacehub_api_token=None for HuggingFaceEndpoint | https://api.github.com/repos/langchain-ai/langchain/issues/20342/comments | 6 | 2024-04-11T17:31:23Z | 2024-06-03T22:20:33Z | https://github.com/langchain-ai/langchain/issues/20342 | 2,238,215,338 | 20,342 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from uuid import uuid4

from langchain.agents import AgentExecutor, create_structured_chat_agent
from langchain.prompts import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    MessagesPlaceholder,
    SystemMessagePromptTemplate,
)
from langchain.memory import ConversationBufferWindowMemory

# Methods excerpted from a chatbot class; self.__agent, self.__tools, self.__logger,
# self.__verbose_mode and self.__history are initialized elsewhere. Quotes around
# "chat_history" and "input" (dropped in the original paste) have been restored.
def __handle_parsing_error(self, error):
    self.__logger.error(f"Parsing error encountered: {error}")
    # For now, returning a generic error message.
    return "I'm sorry, there was a problem understanding your request."

def __get_agent(self, session_id):
    return AgentExecutor.from_agent_and_tools(
        agent=self.__agent,
        tools=self.__tools,
        verbose=self.__verbose_mode,
        memory=self.__get_history(session_id),
        handle_parsing_errors=self.__handle_parsing_error,
        return_intermediate_steps=False,
    )

def send_message(self, session_id, message: str = "") -> str:
    if not message.strip():
        return "You didn't ask a question. How can I assist you further?"
    runner = self.__get_agent(session_id)
    try:
        response = runner.invoke({"input": message})
    except Exception as ex:
        self.__logger.exception(str(ex))
        return "Sorry, please try again."
    return response if response else "No response received"

def __get_history(self, session_id) -> ConversationBufferWindowMemory:
    if session_id not in self.__history:
        self.__history[session_id] = ConversationBufferWindowMemory(k=10, memory_key="chat_history",
                                                                    return_messages=True)
    return self.__history[session_id]

session_id = str(uuid4())
response = send_message(session_id, "Hi")  # called on the class instance in the real code
```
### Error Message and Stack Trace (if applicable)
2024-04-11 08:40:28,242] ERROR RhinoAgent 2 validation errors for AIMessage
content
str type expected (type=type_error.str)
content
value is not a valid list (type=type_error.list)
Traceback (most recent call last):
File "/Users/vmac/vWork/protoype/venv/lib/python3.11/site-packages/langchain/chains/base.py", line 163, in invoke
raise e
File "/Users/vmac/vWork/protoype/venv/lib/python3.11/site-packages/langchain/chains/base.py", line 158, in invoke
final_outputs: Dict[str, Any] = self.prep_outputs(
^^^^^^^^^^^^^^^^^^
File "/Users/vmac/vWork/protoype/venv/lib/python3.11/site-packages/langchain/chains/base.py", line 455, in prep_outputs
self.memory.save_context(inputs, outputs)
File "/Users/vmac/vWork/protoype/venv/lib/python3.11/site-packages/langchain/memory/chat_memory.py", line 40, in save_context
[HumanMessage(content=input_str), AIMessage(content=output_str)]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/vmac/vWork/protoype/venv/lib/python3.11/site-packages/langchain_core/messages/base.py", line 45, in __init__
return super().__init__(content=content, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/vmac/vWork/protoype/venv/lib/python3.11/site-packages/langchain_core/load/serializable.py", line 120, in __init__
super().__init__(**kwargs)
File "/Users/vmac/vWork/protoype/venv/lib/python3.11/site-packages/pydantic/v1/main.py", line 341, in __init__
raise validation_error
pydantic.v1.error_wrappers.ValidationError: 2 validation errors for AIMessage
content
str type expected (type=type_error.str)
content
value is not a valid list (type=type_error.list)
### Description
I am trying to use LangChain to build a basic chatbot. I have the prompt template defined and I am using OpenAI GPT-4.
When I ask any simple question that gets a string response from the agent, it gives me the error above.
Sometimes it also loops until I run into the RateLimit error. Attached is a screenshot for reference:
![Screenshot 2024-04-10 at 12 34 14](https://github.com/langchain-ai/langchain/assets/122552929/4945a8c0-e014-4569-9640-c48d04e92778)
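The validation error itself is easy to reproduce in isolation (a sketch): `AIMessage` only accepts string or list content, so saving a dict agent output to memory fails exactly as in the trace.
```python
from langchain_core.messages import AIMessage

AIMessage(content={"action": "Final Answer"})  # raises the same two validation errors
```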
### System Info
langchain==0.1.13
langchain-community==0.0.29
langchain-core==0.1.33
langchain-openai==0.1.1
langchain-text-splitters==0.0.1
| LangChain throwing parsing error and goes in loop when returning a String response | https://api.github.com/repos/langchain-ai/langchain/issues/20341/comments | 2 | 2024-04-11T17:24:10Z | 2024-07-18T16:09:39Z | https://github.com/langchain-ai/langchain/issues/20341 | 2,238,202,938 | 20,341 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code raises a malformed input request based on extraneous key [tools] when querying the AWS Bedrock service.
```python
import os

from dotenv import load_dotenv
from langchain_community.agent_toolkits.sql.base import create_sql_agent
from langchain_community.llms import Bedrock
from langchain_community.utilities.sql_database import SQLDatabase

def init_test_db(debug=False):
# Initialize the database
# Load environment variables from .env.local file
load_dotenv(dotenv_path='.env.local')
# Read PostgreSQL connection parameters from environment variables
POSTGRES_DB = os.getenv("POSTGRES_DB")
POSTGRES_USER = os.getenv("POSTGRES_USER")
POSTGRES_PASSWORD = os.getenv("POSTGRES_PASSWORD")
POSTGRES_HOST = os.getenv("POSTGRES_HOST")
POSTGRES_PORT = os.getenv("POSTGRES_PORT")
# Build the PostgreSQL connection URI
postgres_uri = f"postgresql://{POSTGRES_USER}:{POSTGRES_PASSWORD}@{POSTGRES_HOST}:{POSTGRES_PORT}/{POSTGRES_DB}"
db = SQLDatabase.from_uri(postgres_uri)
if debug:
print(db.table_info)
return db
def init_llm(model = 'claude-v2.1'):
# Create the LLM
models = {
'claude-v2.1': 'anthropic.claude-v2:1',
'claude-v3-sonnet': 'anthropic.claude-3-sonnet-20240229-v1:0',
'mistral-large': "mistral.mistral-large-2402-v1:0",
'claude-v2': 'anthropic.claude-v2',
'jurassic-2-mid': 'ai21.j2-mid-v1',
'jurassic-2-ultra': "ai21.j2-ultra-v1",
}
llm = Bedrock(model_id=models[model], streaming=False, region_name='us-east-1')
return llm
llm = init_llm()
db=init_test_db()
agent_executer = create_sql_agent(llm=llm, db=db, agent_type="openai-tools", verbose=True)
agent_executer.invoke("Which customer has the highest outstanding loan value")
```
### Error Message and Stack Trace (if applicable)
ValueError: Error raised by bedrock service: An error occurred (ValidationException) when calling the InvokeModelWithResponseStream operation: Malformed input request: #: extraneous key [tools] is not permitted, please reformat your input and try again.
### Description
I am trying to use the LangChain library to build an AWS Bedrock-powered SQL agent.
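The error suggests the OpenAI-style `tools` payload bound by `agent_type="openai-tools"` is being forwarded to Bedrock's InvokeModel API, which rejects it. A likely workaround (sketch) is to use a plain ReAct-style agent type instead:
```python
agent_executer = create_sql_agent(
    llm=llm,
    db=db,
    agent_type="zero-shot-react-description",  # avoids binding OpenAI-specific tool payloads
    verbose=True,
)
```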
### System Info
langchain==0.1.13
langchain-community==0.0.29
langchain-core==0.1.36
langchain-openai==0.1.1
langchain-text-splitters==0.0.1
Windows 11
Python 3.8.10 | AWS Bedrock Service Raising Error: "Malformed input request: #: extraneous key [tools] is not permitted." | https://api.github.com/repos/langchain-ai/langchain/issues/20320/comments | 4 | 2024-04-11T13:24:11Z | 2024-07-03T07:41:40Z | https://github.com/langchain-ai/langchain/issues/20320 | 2,237,722,056 | 20,320 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
![image](https://github.com/langchain-ai/langchain/assets/94037471/129119dd-172e-4160-a3c6-b980199f0bc7)
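Since the example only exists as a screenshot, here is a minimal textual equivalent (a sketch; the exact arguments in the screenshot may differ, and it assumes `langchain-google-genai` is installed with `GOOGLE_API_KEY` set):
```python
from langchain_google_genai import ChatGoogleGenerativeAI

llm = ChatGoogleGenerativeAI(model="gemini-pro")
print(llm.invoke("Hello"))
```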
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am trying to use the Gemini model via LangChain's `ChatGoogleGenerativeAI`, but I am getting this error:
```
'NoneType' object does not support item assignment (type=type_error)
```
### System Info
kaggle | Not able to use gemini from langchain in kaggle | https://api.github.com/repos/langchain-ai/langchain/issues/20317/comments | 3 | 2024-04-11T11:18:54Z | 2024-07-21T16:06:55Z | https://github.com/langchain-ai/langchain/issues/20317 | 2,237,484,407 | 20,317 |
[
"hwchase17",
"langchain"
] | _🤖_
Hey @nithinreddyyyyyy, back for more adventures in code, I see! Always a pleasure to dive into these challenges with you. 🚀
Based on the information you provided and the similar issue I found in the LangChain repository, you can use FAISS for ParentDocumentRetriever in the LangChain framework as follows:
```python
from langchain_community.vectorstores.faiss import FAISS
from langchain_community.embeddings.openai import OpenAIEmbeddings
from langchain_community.textsplitters import RecursiveCharacterTextSplitter
from langchain.retrievers import ParentDocumentRetriever
# Initialize the embeddings and FAISS vector store
embeddings = OpenAIEmbeddings()
faiss = FAISS.from_texts(texts, embeddings)
# Define the child and parent splitters
child_splitter = RecursiveCharacterTextSplitter(chunk_size=400, chunk_overlap=50)
parent_splitter = RecursiveCharacterTextSplitter(chunk_size=1200, chunk_overlap=300)
# Initialize the ParentDocumentRetriever with FAISS
parent_document_retriever = ParentDocumentRetriever(
vectorstore=faiss,
docstore=store,
child_splitter=child_splitter,
parent_splitter=parent_splitter
)
# Add your list of Document objects to the retriever
parent_document_retriever.add_documents(docs, ids=None)
```
This code snippet includes the use of `child_splitter` and `parent_splitter` with FAISS as requested by the user. You can find more details in the [related issue](https://github.com/langchain-ai/langchain/issues/17237).
For retrieving relevant documents with additional metadata and filtering capabilities, you can follow the pattern shown in the test cases for the FAISS class:
```python
from langchain_community.vectorstores.faiss import FAISS
from langchain_community.embeddings.openai import OpenAIEmbeddings
# Assuming you have a list of texts with per-document metadata
texts = ["text1", "text2", "text3"]
metadatas = [{"page": i} for i in range(len(texts))]
# FAISS.from_texts expects an Embeddings model (it computes the vectors itself);
# use FAISS.from_embeddings instead if you already have precomputed vectors
embeddings = OpenAIEmbeddings()
# Create a FAISS instance from the texts, embeddings model, and metadatas
docsearch = FAISS.from_texts(texts, embeddings, metadatas=metadatas)
# Now you can use this FAISS instance to retrieve relevant documents with filtering
query = "some query"
filter_criteria = {"page": 1} # or use a lambda function for more complex filtering
relevant_docs = docsearch.similarity_search(query, k=1, filter=filter_criteria)
```
In this example, `relevant_docs` will contain the most relevant document to the query that also matches the filter criteria. You can adjust the `k` parameter to retrieve more documents and the `filter` parameter to apply different filtering rules. For asynchronous use cases, you can use the `await FAISS.afrom_texts` and `await docsearch.asimilarity_search` methods with the same parameters. You can find more details in the [test cases for the FAISS class](https://github.com/langchain-ai/langchain/blob/master/libs/community/tests/unit_tests/vectorstores/test_faiss.py).
The `FAISS.from_texts` method in the LangChain framework is a class method that constructs a FAISS (Facebook AI Similarity Search) wrapper from raw documents. This method is a user-friendly interface that embeds documents, creates an in-memory docstore, and initializes the FAISS database. It is intended to be a quick way to get started with the framework. You can find more details in the [FAISS class source code](https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/vectorstores/faiss.py).
I hope this helps! If you have any further questions, feel free to ask.
## Sources
- [can i use FAISS isntead of Chroma for ParentDocumentRetriver?](https://github.com/langchain-ai/langchain/issues/17237)
- [libs/community/tests/unit_tests/vectorstores/test_faiss.py](https://github.com/langchain-ai/langchain/blob/master/libs/community/tests/unit_tests/vectorstores/test_faiss.py)
- [libs/community/langchain_community/vectorstores/faiss.py](https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/vectorstores/faiss.py)
_Originally posted by @dosubot[bot] in https://github.com/langchain-ai/langchain/issues/17576#issuecomment-1946203996_
| save ParentDocumentRetriever in local path _🤖_ | https://api.github.com/repos/langchain-ai/langchain/issues/20315/comments | 2 | 2024-04-11T09:57:45Z | 2024-07-18T16:09:29Z | https://github.com/langchain-ai/langchain/issues/20315 | 2,237,352,542 | 20,315 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
The current documentation at [Constitutional Chain Documentation](https://python.langchain.com/docs/guides/productionization/safety/constitutional_chain/#custom-principles) does not cover using constitutional principles inside a LangChain agent; it only shows how to use them with an LLMChain.
I want to use them inside a conversational agent, because I also need to pass custom tools and the conversation history into the agent.
Currently I am using create_openai_tools_agent() to create the agent; a hedged sketch of one possible workaround is shown below.
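A hedged sketch (my addition, not from the docs): run the tools agent first, then pass its final answer through a ConstitutionalChain for critique and revision. The echo prompt, the principle text, and `user_question` are illustrative assumptions, not an official pattern.
```python
from langchain.chains import ConstitutionalChain, LLMChain
from langchain.chains.constitutional_ai.models import ConstitutionalPrinciple
from langchain.prompts import PromptTemplate

# `agent_executor` and `llm` are assumed to be the ones built with
# create_openai_tools_agent(); `user_question` is a placeholder input.
raw_answer = agent_executor.invoke({"input": user_question})["output"]

# ConstitutionalChain expects an LLMChain, so wrap the agent output in a
# trivial "echo" chain whose initial response is the agent's answer.
echo_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate(
        input_variables=["answer"],
        template="Repeat the following answer verbatim:\n\n{answer}",
    ),
)
principle = ConstitutionalPrinciple(
    name="custom-principle",
    critique_request="Identify anything in the answer that violates our policy.",
    revision_request="Rewrite the answer so that it complies with our policy.",
)
guarded = ConstitutionalChain.from_llm(
    chain=echo_chain,
    constitutional_principles=[principle],
    llm=llm,
)
final_answer = guarded.run(answer=raw_answer)
```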
### Idea or request for content:
_No response_ | Using Constitutional AI with Lanchain Agents | https://api.github.com/repos/langchain-ai/langchain/issues/20312/comments | 0 | 2024-04-11T09:22:25Z | 2024-07-18T16:09:24Z | https://github.com/langchain-ai/langchain/issues/20312 | 2,237,284,294 | 20,312 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
`initialize_agent` (see the hedged usage sketch below)
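A hedged sketch of the call in question (my addition, since the issue only names the function); the tool list and model are placeholders:
```python
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(temperature=0)  # any chat model works here, which is the point
tools = load_tools(["llm-math"], llm=llm)
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)
agent.run("What is 2 ** 10?")
```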
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Every time I use this function, I see that it is slated for deprecation in LangChain v2. Would it be too much to ask for it not to be deprecated, but maintained instead? It is the only agent-executor creator that works with almost any LLM I switch between (e.g., Gemini, OpenRouter, Hugging Face). The new create_openai_structured or ReAct agent constructors are often problematic, requiring a ton of time and wasted effort to debug; even with LangChain's own examples they often don't work. Just thought I would mention this. Thanks.
### System Info
INITIALIZE_AGENT | INITIALIZE_AGENT (LANGCHAIN'S TRADITIONAL AGENT EXECUTOR CREATOR) | https://api.github.com/repos/langchain-ai/langchain/issues/20309/comments | 0 | 2024-04-11T06:44:41Z | 2024-07-18T16:09:19Z | https://github.com/langchain-ai/langchain/issues/20309 | 2,237,021,603 | 20,309 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Code that doesn't work:
```
retriever = self.vector_store.as_retriever(search_kwargs={
'k': 1,
'filter': {'portfolio_uri':{'$eq': params.portfolio_uri}}
})
docs = retriever.get_relevant_documents(query=query)
```
Code that works:
```
retriever = self.vector_store.as_retriever(search_kwargs={
'k': limit,
'filter': {'portfolio_uri':{'$in': [params.portfolio_uri]}}
})
docs = retriever.get_relevant_documents(query=query)
```
### Error Message and Stack Trace (if applicable)
File "/Users/anthonydemattos/syyclops/open-operator/.venv/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1971, in _exec_single_context
self.dialect.do_execute(
File "/Users/anthonydemattos/syyclops/open-operator/.venv/lib/python3.10/site-packages/sqlalchemy/engine/default.py", line 919, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedFunction) function jsonb_path_match(json, unknown, unknown) does not exist
LINE 3: ...2edf-c5b6-4a08-9a06-c841d8c6dab9'::uuid::UUID AND jsonb_path...
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
[SQL: SELECT langchain_pg_embedding.collection_id AS langchain_pg_embedding_collection_id, langchain_pg_embedding.embedding AS langchain_pg_embedding_embedding, langchain_pg_embedding.document AS langchain_pg_embedding_document, langchain_pg_embedding.cmetadata AS langchain_pg_embedding_cmetadata, langchain_pg_embedding.custom_id AS langchain_pg_embedding_custom_id, langchain_pg_embedding.uuid AS langchain_pg_embedding_uuid, langchain_pg_embedding.embedding <=> %(embedding_1)s AS distance
FROM langchain_pg_embedding JOIN langchain_pg_collection ON langchain_pg_embedding.collection_id = langchain_pg_collection.uuid
WHERE langchain_pg_embedding.collection_id = %(collection_id_1)s::UUID AND jsonb_path_match(langchain_pg_embedding.cmetadata, %(jsonb_path_match_1)s, %(jsonb_path_match_2)s) ORDER BY distance ASC
LIMIT %(param_1)s]
[parameters: {'embedding_1': '[-0.010107061089572959,-0.013629535476347075,-0.0012468165027500532,-0.024999106785678033,-0.03649423341900812,0.012025240109652443,-0.03317403857405 ... (32593 characters truncated) ... .024217885455325203,-0.029881744291334902,0.015178028787260737,-0.01019773838247912,-0.006612486799366736,-0.02557107296798936,-0.019600304222168217]', 'collection_id_1': UUID('622b2edf-c5b6-4a08-9a06-c841d8c6dab9'), 'jsonb_path_match_1': '$.portfolio_uri == $value', 'jsonb_path_match_2': '{"value": "https://syyclops.com/example"}', 'param_1': 1}]
(Background on this error at: https://sqlalche.me/e/20/f405)
```
### Description
I am trying to do a vector-store similarity search with PGVector and provide a metadata filter.
For some reason `$eq` doesn't work, but `$in: [...]` does.
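A hedged guess at the root cause (my addition): the error complains that `jsonb_path_match(json, unknown, unknown)` does not exist, which suggests the `cmetadata` column of an older `langchain_pg_embedding` table is typed `json`, while the `$eq` operator is translated into `jsonb_path_match` (the `$in` path apparently avoids that function). If that matches your schema, a cast migration along these lines might help; verify against your own table first, and note the connection string is a placeholder:
```python
# Hypothetical migration sketch: cast the metadata column to jsonb so that
# jsonb_path_match (used by the $eq filter) can be applied.
from sqlalchemy import create_engine, text

engine = create_engine(CONNECTION_STRING)  # your PGVector connection string
with engine.begin() as conn:
    conn.execute(text(
        "ALTER TABLE langchain_pg_embedding "
        "ALTER COLUMN cmetadata TYPE jsonb USING cmetadata::jsonb"
    ))
```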
### System Info
python -m langchain_core.sys_info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.0.0: Fri Sep 15 14:41:43 PDT 2023; root:xnu-10002.1.13~1/RELEASE_ARM64_T6000
> Python Version: 3.10.6 | packaged by conda-forge | (main, Aug 22 2022, 20:38:29) [Clang 13.0.1 ]
Package Information
-------------------
> langchain_core: 0.1.41
> langchain: 0.1.12
> langchain_community: 0.0.32
> langsmith: 0.1.43
> langchain_openai: 0.1.2
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | PGVector filter $eq | https://api.github.com/repos/langchain-ai/langchain/issues/20293/comments | 3 | 2024-04-10T19:32:41Z | 2024-04-18T20:43:28Z | https://github.com/langchain-ai/langchain/issues/20293 | 2,236,289,728 | 20,293 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
%pip install --upgrade --quiet azure-search-documents
%pip install --upgrade --quiet azure-identity
import os
from langchain_community.vectorstores.azuresearch import AzureSearch
from langchain_openai import AzureOpenAIEmbeddings, OpenAIEmbeddings
# Option 2: use an Azure OpenAI account with a deployment of an embedding model
azure_endpoint: str = "PLACEHOLDER FOR YOUR AZURE OPENAI ENDPOINT"
azure_openai_api_key: str = "PLACEHOLDER FOR YOUR AZURE OPENAI KEY"
azure_openai_api_version: str = "2023-05-15"
azure_deployment: str = "text-embedding-ada-002"
vector_store_address: str = "YOUR_AZURE_SEARCH_ENDPOINT"
vector_store_password: str = "YOUR_AZURE_SEARCH_ADMIN_KEY"
# Option 2: Use AzureOpenAIEmbeddings with an Azure account
embeddings: AzureOpenAIEmbeddings = AzureOpenAIEmbeddings(
azure_deployment=azure_deployment,
openai_api_version=azure_openai_api_version,
azure_endpoint=azure_endpoint,
api_key=azure_openai_api_key,
)
index_name: str = "langchain-vector-demo"
vector_store: AzureSearch = AzureSearch(
azure_search_endpoint=vector_store_address,
azure_search_key=vector_store_password,
index_name=index_name,
embedding_function=embeddings.embed_query,
)
from langchain.text_splitter import (
CharacterTextSplitter,
RecursiveCharacterTextSplitter,
)
from langchain.document_loaders import DirectoryLoader, PyPDFLoader
# Read the PDF file using the langchain loader
pdf_link = "test.pdf"
loader = PyPDFLoader(pdf_link, extract_images=False)
data = loader.load_and_split()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(data)
vector_store.add_documents(documents=docs)
```
### Error Message and Stack Trace (if applicable)
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[12], line 2
1 for i in range(0, len(docs)):
----> 2 vector_store.add_documents(documents=docs[i])
3 time.sleep(5)
File ~/anaconda3/envs/rag_azure/lib/python3.10/site-packages/langchain_core/vectorstores.py:136, in VectorStore.add_documents(self, documents, **kwargs)
127 """Run more documents through the embeddings and add to the vectorstore.
128
129 Args:
(...)
133 List[str]: List of IDs of the added texts.
134 """
135 # TODO: Handle the case where the user doesn't provide ids on the Collection
--> 136 texts = [doc.page_content for doc in documents]
137 metadatas = [doc.metadata for doc in documents]
138 return self.add_texts(texts, metadatas, **kwargs)
File ~/anaconda3/envs/rag_azure/lib/python3.10/site-packages/langchain_core/vectorstores.py:136, in <listcomp>(.0)
127 """Run more documents through the embeddings and add to the vectorstore.
128
129 Args:
(...)
133 List[str]: List of IDs of the added texts.
134 """
135 # TODO: Handle the case where the user doesn't provide ids on the Collection
--> 136 texts = [doc.page_content for doc in documents]
137 metadatas = [doc.metadata for doc in documents]
138 return self.add_texts(texts, metadatas, **kwargs)
AttributeError: 'tuple' object has no attribute 'page_content'
```
### Description
I am using LangChain to connect to Azure AI Search, create vector stores, and add documents to them so I can build a RAG application. I tried to replicate the notebook provided by LangChain for Azure AI Search (https://python.langchain.com/docs/integrations/vectorstores/azuresearch/), but it fails with the above error.
I do see `page_content` in `docs`, so I am not sure where the problem is; `type(docs[0])` returns `langchain_core.documents.base.Document`.
Here is an example of what one element of `docs` looks like:
```python
print(docs[5])
# Document(page_content='Modify the likelihood of specified tokens appearing in the completion. Accepts a json object that maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100. You can use this tokenizer tool (which works for both GPT-2 and GPT-3) to convert text to token IDs. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect varies per model, but values between -1 and 1 should decrease or increase likelihood of selection', metadata={'source': 'test.pdf', 'page': 3})
```
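A hedged observation (my addition): the traceback shows `vector_store.add_documents(documents=docs[i])`, i.e. a single `Document` rather than a list. Iterating over a pydantic model yields `(field, value)` tuples, which would produce exactly this `'tuple' object has no attribute 'page_content'` error. If you are adding one document per loop iteration, wrapping each element in a list should avoid it:
```python
# Hypothetical fix: add_documents expects a list of Documents, so pass a list
# even when adding one document at a time.
for i in range(len(docs)):
    vector_store.add_documents(documents=[docs[i]])
```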
### System Info
platform - mac
python - 3.10
langchain==0.1.15
langchain-community==0.0.32
langchain-core==0.1.41
langchain-openai==0.0.2.post1
langchain-text-splitters==0.0.1
| Error when adding documents to vector_store - Azure AI Search | https://api.github.com/repos/langchain-ai/langchain/issues/20283/comments | 1 | 2024-04-10T15:45:26Z | 2024-07-19T16:08:10Z | https://github.com/langchain-ai/langchain/issues/20283 | 2,235,916,674 | 20,283 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import time
from langchain.graphs import Neo4jGraph
from langchain_openai import AzureChatOpenAI
from langchain.prompts.prompt import PromptTemplate
from langchain.chains import GraphCypherQAChain
llm=AzureChatOpenAI(azure_deployment=MODEL_CHAT, model_name=MODEL_CHAT, azure_endpoint=API_ENDPOINT, openai_api_version=API_VERSION, openai_api_key=API_KEY, temperature=0, streaming=True)
neo4j_graph = Neo4jGraph(url=NEO4J_URI, username=NEO4J_USERNAME, password=NEO4J_PASSWORD)
CYPHER_GENERATION_TEMPLATE = """You are an expert Neo4j Cypher translator who understands the question in english and convert to Cypher strictly based on the Neo4j Schema provided and following the instructions below:
<instructions>
* Use aliases to refer the node or relationship in the generated Cypher query
* Generate Cypher query compatible ONLY for Neo4j Version 5
* Do not use EXISTS, SIZE keywords in the cypher. Use alias when using the WITH keyword
* Use only Nodes and relationships mentioned in the schema
* Always enclose the Cypher output inside 3 backticks (```)
* Always do a case-insensitive and fuzzy search for any properties related search. Eg: to search for a Person name use `toLower(p.name) contains 'neo4j'`
* Cypher is NOT SQL. So, do not mix and match the syntaxes
</instructions>
Strictly use this Schema for Cypher generation:
<schema>
{schema}
</schema>
The samples below follow the instructions and the schema mentioned above. So, please follow the same when you generate the cypher:
<samples>
Human: Which manager manages most people directly? How many employees?
Assistant: ```MATCH (p:Person)-[r:IS_MANAGER_OF]->() WITH p, COUNT(r) AS NumberOfEmployees ORDER BY NumberOfEmployees DESC RETURN p.name, NumberOfEmployees LIMIT 1```
</samples>
Human: {question}
Assistant:
"""
CYPHER_GENERATION_PROMPT = PromptTemplate(input_variables=['schema','question'], validate_template=True, template=CYPHER_GENERATION_TEMPLATE)
chain = GraphCypherQAChain.from_llm(
llm,
graph=neo4j_graph,
cypher_prompt=CYPHER_GENERATION_PROMPT,
validate_cypher=True,
return_intermediate_steps=True
)
question="Who should we fire from the Example department?"
cypher_cmd=chain.invoke(question)
```
### Error Message and Stack Trace (if applicable)
```bash
---------------------------------------------------------------------------
CypherSyntaxError Traceback (most recent call last)
File [~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_community/graphs/neo4j_graph.py:164](.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_community/graphs/neo4j_graph.py#line=163), in Neo4jGraph.query(self, query, params)
163 try:
--> 164 data = session.run(Query(text=query, timeout=self.timeout), params)
165 json_data = [r.data() for r in data]
File [~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/neo4j/_sync/work/session.py:313](.pyenv/versions/3.11.7/lib/python3.11/site-packages/neo4j/_sync/work/session.py#line=312), in Session.run(self, query, parameters, **kwargs)
312 parameters = dict(parameters or {}, **kwargs)
--> 313 self._auto_result._run(
314 query, parameters, self._config.database,
315 self._config.impersonated_user, self._config.default_access_mode,
316 bookmarks, self._config.notifications_min_severity,
317 self._config.notifications_disabled_categories,
318 )
320 return self._auto_result
File [~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/neo4j/_sync/work/result.py:181](.pyenv/versions/3.11.7/lib/python3.11/site-packages/neo4j/_sync/work/result.py#line=180), in Result._run(self, query, parameters, db, imp_user, access_mode, bookmarks, notifications_min_severity, notifications_disabled_categories)
180 self._connection.send_all()
--> 181 self._attach()
File [~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/neo4j/_sync/work/result.py:301](.pyenv/versions/3.11.7/lib/python3.11/site-packages/neo4j/_sync/work/result.py#line=300), in Result._attach(self)
300 while self._attached is False:
--> 301 self._connection.fetch_message()
File [~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/neo4j/_sync/io/_common.py:178](.pyenv/versions/3.11.7/lib/python3.11/site-packages/neo4j/_sync/io/_common.py#line=177), in ConnectionErrorHandler.__getattr__.<locals>.outer.<locals>.inner(*args, **kwargs)
177 try:
--> 178 func(*args, **kwargs)
179 except (Neo4jError, ServiceUnavailable, SessionExpired) as exc:
File [~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/neo4j/_sync/io/_bolt.py:849](.pyenv/versions/3.11.7/lib/python3.11/site-packages/neo4j/_sync/io/_bolt.py#line=848), in Bolt.fetch_message(self)
846 tag, fields = self.inbox.pop(
847 hydration_hooks=self.responses[0].hydration_hooks
848 )
--> 849 res = self._process_message(tag, fields)
850 self.idle_since = monotonic()
File [~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/neo4j/_sync/io/_bolt5.py:369](.pyenv/versions/3.11.7/lib/python3.11/site-packages/neo4j/_sync/io/_bolt5.py#line=368), in Bolt5x0._process_message(self, tag, fields)
368 try:
--> 369 response.on_failure(summary_metadata or {})
370 except (ServiceUnavailable, DatabaseUnavailable):
File [~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/neo4j/_sync/io/_common.py:245](.pyenv/versions/3.11.7/lib/python3.11/site-packages/neo4j/_sync/io/_common.py#line=244), in Response.on_failure(self, metadata)
244 Util.callback(handler)
--> 245 raise Neo4jError.hydrate(**metadata)
CypherSyntaxError: {code: Neo.ClientError.Statement.SyntaxError} {message: Invalid input 'I': expected
"ALTER"
"CALL"
"CREATE"
"DEALLOCATE"
"DELETE"
"DENY"
"DETACH"
"DROP"
"DRYRUN"
"ENABLE"
"FOREACH"
"GRANT"
"LOAD"
"MATCH"
"MERGE"
"NODETACH"
"OPTIONAL"
"REALLOCATE"
"REMOVE"
"RENAME"
"RETURN"
"REVOKE"
"SET"
"SHOW"
"START"
"STOP"
"TERMINATE"
"UNWIND"
"USE"
"USING"
"WITH" (line 1, column 1 (offset: 0))
"I'm sorry, I cannot generate a query for this question as it goes against ethical and moral principles. It is not appropriate to use data and technology to harm or discriminate against individuals."
^}
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
Cell In[7], line 2
1 question="Who should we fire from 91130 Veh Verif & Value Confirmation?"
----> 2 cypher_cmd=chain.invoke(question)
File [~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain/chains/base.py:162](.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain/chains/base.py#line=161), in Chain.invoke(self, input, config, **kwargs)
160 except BaseException as e:
161 run_manager.on_chain_error(e)
--> 162 raise e
163 run_manager.on_chain_end(outputs)
164 final_outputs: Dict[str, Any] = self.prep_outputs(
165 inputs, outputs, return_only_outputs
166 )
File [~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain/chains/base.py:156](.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain/chains/base.py#line=155), in Chain.invoke(self, input, config, **kwargs)
149 run_manager = callback_manager.on_chain_start(
150 dumpd(self),
151 inputs,
152 name=run_name,
153 )
154 try:
155 outputs = (
--> 156 self._call(inputs, run_manager=run_manager)
157 if new_arg_supported
158 else self._call(inputs)
159 )
160 except BaseException as e:
161 run_manager.on_chain_error(e)
File [~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain/chains/graph_qa/cypher.py:267](.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain/chains/graph_qa/cypher.py#line=266), in GraphCypherQAChain._call(self, inputs, run_manager)
264 # Retrieve and limit the number of results
265 # Generated Cypher be null if query corrector identifies invalid schema
266 if generated_cypher:
--> 267 context = self.graph.query(generated_cypher)[: self.top_k]
268 else:
269 context = []
File [~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_community/graphs/neo4j_graph.py:170](.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_community/graphs/neo4j_graph.py#line=169), in Neo4jGraph.query(self, query, params)
168 return json_data
169 except CypherSyntaxError as e:
--> 170 raise ValueError(f"Generated Cypher Statement is not valid\n{e}")
ValueError: Generated Cypher Statement is not valid
{code: Neo.ClientError.Statement.SyntaxError} {message: Invalid input 'I': expected
"ALTER"
"CALL"
"CREATE"
"DEALLOCATE"
"DELETE"
"DENY"
"DETACH"
"DROP"
"DRYRUN"
"ENABLE"
"FOREACH"
"GRANT"
"LOAD"
"MATCH"
"MERGE"
"NODETACH"
"OPTIONAL"
"REALLOCATE"
"REMOVE"
"RENAME"
"RETURN"
"REVOKE"
"SET"
"SHOW"
"START"
"STOP"
"TERMINATE"
"UNWIND"
"USE"
"USING"
"WITH" (line 1, column 1 (offset: 0))
"I'm sorry, I cannot generate a query for this question as it goes against ethical and moral principles. It is not appropriate to use data and technology to harm or discriminate against individuals."
^}
```
### Description
GraphCypherQAChain passes the LLM's refusal message ("I'm sorry, I cannot generate a query for this question as it goes against ethical and moral principles. It is not appropriate to use data and technology to harm or discriminate against individuals.") to Neo4j as if it were a Cypher query. The chain works for other prompts, but fails like this for sensitive questions or questions outside the provided schema.
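A hedged mitigation sketch (my addition): since the chain surfaces the refusal as a `ValueError`, one can at least catch it instead of crashing; a stricter fix would instruct the prompt template to emit a sentinel for unanswerable questions.
```python
# Catch the ValueError that GraphCypherQAChain raises when the "Cypher" it
# received from the LLM is actually a refusal message.
try:
    cypher_cmd = chain.invoke(question)
except ValueError:
    cypher_cmd = {"result": "The model declined to generate a query for this question."}
```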
### System Info
System Information
------------------
> OS: Linux
> OS Version: #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024
> Python Version: 3.11.7 (main, Feb 15 2024, 09:21:29) [Clang 14.0.0 ]
Package Information
-------------------
> langchain_core: 0.1.30
> langchain: 0.1.7
> langchain_community: 0.0.20
> langsmith: 0.1.22
> langchain_cli: 0.0.21
> langchain_openai: 0.0.8
> langserve: 0.0.41
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph | GraphCypherQAChain tries to create a query from a sensitive question. | https://api.github.com/repos/langchain-ai/langchain/issues/20280/comments | 1 | 2024-04-10T15:03:20Z | 2024-07-17T16:06:18Z | https://github.com/langchain-ai/langchain/issues/20280 | 2,235,828,481 | 20,280 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code 1
```python
from langchain.callbacks import get_openai_callback
from langchain_core.messages import HumanMessage
from langchain_openai import AzureChatOpenAI

model = AzureChatOpenAI(
    openai_api_version="2023-03-15-preview",
    azure_deployment="gpt_chat_updated",  # in Azure, this deployment has version 0613 - input and output tokens are counted separately
)
message = HumanMessage(content='Where is whitehouse located?')
with get_openai_callback() as cb:
    response = model([message])
    print(f"AzureChatAPI response {response}")
    print(f"Total Cost (USD): ${format(cb.total_cost, '.6f')}")
```
#### Output
```
AzureChatAPI response content='The White House is located in Washington, D.C., the capital city of the United States.' response_metadata={'token_usage': {'completion_tokens': 19, 'prompt_tokens': 13, 'total_tokens': 32}, 'model_name': 'gpt-35-turbo-16k', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None, 'content_filter_results': {}} id='run-7976ac34-171f-498e-8819-3b67fbd3f21e-0'
```
**Total Cost (USD): $0.000115**
### Example Code 2
chatlitellm_model = ChatLiteLLM(model="azure/gpt_chat_updated")
with get_openai_callback() as cb:
response = chatlitellm_model([message])
print(f"Chatlite Response {response}")
print(
f"Total Cost (USD): ${format(cb.total_cost, '.6f')}"
)
#### Output
```
Chatlite Response content='The White House is located at 1600 Pennsylvania Avenue NW, Washington, D.C., United States.' response_metadata={'token_usage': Usage(completion_tokens=21, prompt_tokens=13, total_tokens=34), 'model': 'azure/gpt_chat_updated', 'finish_reason': 'stop'} id='run-f98efc18-6c9f-4b61-acfa-4baf818ab38d-0'
```
**Total Cost (USD): $0.000000**
### Error Message and Stack Trace (if applicable)
Note the difference in the reported model name between the two responses ('gpt-35-turbo-16k' vs. 'azure/gpt_chat_updated'), and hence the difference in total cost.
### Description
get_openai_callback() receives the incorrect model name (it gets the Azure deployment alias instead of the real model), which blocks cost instrumentation when using ChatLiteLLM, even though the same model version works perfectly fine with AzureChatOpenAI. As a result, get_openai_callback() returns a cost of 0, although the original model name and version are listed in openai_info.py.
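A hedged manual-costing sketch (my addition): map the LiteLLM-reported alias back to the real model before the price lookup. The mapping of "azure/gpt_chat_updated" to "gpt-35-turbo-16k" is an assumption taken from the first example's metadata.
```python
# Compute the cost by hand using LangChain's published per-model prices.
from langchain_community.callbacks.openai_info import get_openai_token_cost_for_model

usage = response.response_metadata["token_usage"]
real_model = "gpt-35-turbo-16k"  # assumed: what "azure/gpt_chat_updated" deploys
cost = (
    get_openai_token_cost_for_model(real_model, usage.prompt_tokens)
    + get_openai_token_cost_for_model(real_model, usage.completion_tokens, is_completion=True)
)
print(f"Total Cost (USD): ${cost:.6f}")
```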
### System Info
langchain==0.1.12
langchain-community==0.0.28
langchain-core==0.1.41
langchain-openai==0.1.2
langchain-text-splitters==0.0.1 | Altered model name with ChatLiteLLM hindering cost instrumentation with get_openai_callback | https://api.github.com/repos/langchain-ai/langchain/issues/20277/comments | 0 | 2024-04-10T14:39:57Z | 2024-07-17T16:06:14Z | https://github.com/langchain-ai/langchain/issues/20277 | 2,235,771,459 | 20,277 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code:
```python
tools = load_tools(["llm-math"], llm=llm)
chain = ConversationalRetrievalChain.from_llm(
    llm=llm,
    tools=tools,
    # ... (remaining arguments omitted in the original report)
)
```
### Error Message and Stack Trace (if applicable)
causes this error:
```
File "\venv\lib\site-packages\langchain_core\load\serializable.py", line 120, in __init__
    super().__init__(**kwargs)
File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for ConversationalRetrievalChain
tool
    extra fields not permitted (type=value_error.extra)
```
### Description
I'm trying to pass tools to `ConversationalRetrievalChain.from_llm`, and this error occurs: `pydantic.error_wrappers.ValidationError: 1 validation error for ConversationalRetrievalChain: tool: extra fields not permitted`.
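A hedged alternative sketch (my addition): `ConversationalRetrievalChain` has no `tools` field, which is why pydantic rejects the extra keyword. To combine tools with retrieval, one common pattern is to expose the retriever as a tool and use an agent instead; `llm`, `retriever`, and `prompt` below are assumed to exist in your setup.
```python
from langchain.agents import AgentExecutor, create_openai_tools_agent, load_tools
from langchain.tools.retriever import create_retriever_tool

# Wrap the retriever as a tool so the agent can use it alongside llm-math.
retriever_tool = create_retriever_tool(
    retriever,
    name="document_search",
    description="Searches the indexed documents.",
)
tools = load_tools(["llm-math"], llm=llm) + [retriever_tool]
agent = create_openai_tools_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools)
```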
### System Info
aiohttp==3.8.4
aiosignal==1.3.1
altair==5.0.1
anyio==3.7.0
async-timeout==4.0.2
attrs==23.1.0
backoff==2.2.1
beautifulsoup4==4.12.2
blinker==1.6.2
bs4==0.0.1
cachetools==5.3.1
certifi==2023.5.7
cffi==1.15.1
chardet==5.1.0
charset-normalizer==3.1.0
chromadb==0.3.26
click==8.1.3
clickhouse-connect==0.6.4
colorama==0.4.6
coloredlogs==15.0.1
cryptography==41.0.2
dataclasses-json==0.5.9
decorator==5.1.1
distro==1.9.0
dnspython==2.3.0
docopt==0.6.2
document-utils==1.8.1
duckdb==0.8.1
et-xmlfile==1.1.0
exceptiongroup==1.1.2
faiss-cpu==1.7.4
fastapi==0.99.1
filetype==1.2.0
flatbuffers==23.5.26
frozenlist==1.3.3
gitdb==4.0.10
GitPython==3.1.31
greenlet==2.0.2
h11==0.14.0
hnswlib==0.7.0
httpcore==1.0.3
httptools==0.5.0
httpx==0.26.0
humanfriendly==10.0
idna==3.4
importlib-metadata==6.7.0
Jinja2==3.1.2
joblib==1.3.1
jsonpatch==1.33
jsonpointer==2.4
jsonschema==4.17.3
langchain==0.1.15
langchain-community==0.0.32
langchain-core==0.1.41
langchain-experimental==0.0.47
langchain-text-splitters==0.0.1
langchainplus-sdk==0.0.20
langsmith==0.1.43
libmagic==1.0
loguru==0.7.0
lxml==4.9.3
lz4==4.3.2
Markdown==3.4.4
markdown-it-py==3.0.0
MarkupSafe==2.1.3
marshmallow==3.19.0
marshmallow-enum==1.5.1
mdurl==0.1.2
monotonic==1.6
mpmath==1.3.0
msg-parser==1.2.0
multidict==6.0.4
mypy-extensions==1.0.0
nltk==3.8.1
numexpr==2.8.4
numpy==1.25.0
olefile==0.46
onnxruntime==1.15.1
openai==0.27.8
openapi-schema-pydantic==1.2.4
openpyxl==3.1.2
orjson==3.10.0
overrides==7.3.1
packaging==23.2
pandas==2.0.3
pdf2image==1.16.3
pdfminer.six==20221105
Pillow==9.5.0
pinecone-client==2.2.2
pipreqs==0.4.13
posthog==3.0.1
protobuf==4.23.3
pulsar-client==3.2.0
pyarrow==12.0.1
pycparser==2.21
pycryptodome==3.18.0
pydantic==1.10.10
pydeck==0.8.1b0
Pygments==2.15.1
Pympler==1.0.1
pypandoc==1.11
pypdf==3.12.0
pyreadline3==3.4.1
pyrsistent==0.19.3
python-dateutil==2.8.2
python-docx==0.8.11
python-dotenv==1.0.0
python-magic==0.4.27
python-magic-bin==0.4.14
python-pptx==0.6.21
pytz==2023.3
pytz-deprecation-shim==0.1.0.post0
PyYAML==6.0
regex==2023.6.3
requests==2.31.0
rich==13.4.2
six==1.16.0
smmap==5.0.0
sniffio==1.3.0
soupsieve==2.5
SQLAlchemy==2.0.17
starlette==0.27.0
streamlit==1.24.0
sympy==1.12
tabulate==0.9.0
tenacity==8.2.2
tiktoken==0.4.0
tokenizers==0.13.3
toml==0.10.2
toolz==0.12.0
tornado==6.3.2
tqdm==4.65.0
typing-inspect==0.9.0
typing_extensions==4.9.0
tzdata==2023.3
tzlocal==4.3.1
unstructured==0.8.1
urllib3==2.0.3
uvicorn==0.22.0
validators==0.20.0
vectorhub==1.8.3
watchdog==3.0.0
watchfiles==0.19.0
websockets==11.0.3
win32-setctime==1.1.0
xlrd==2.0.1
XlsxWriter==3.1.2
yarg==0.1.9
yarl==1.9.2
zipp==3.15.0
zstandard==0.21.0
| Adding tools to ConversationalRetrievalChain.from_llm causes Pydantic error | https://api.github.com/repos/langchain-ai/langchain/issues/20276/comments | 2 | 2024-04-10T14:04:46Z | 2024-07-18T16:09:09Z | https://github.com/langchain-ai/langchain/issues/20276 | 2,235,689,172 | 20,276 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
The documentation at https://python.langchain.com/docs/modules/model_io/output_parsers/types/json/
seems to imply that the generated output is validated against the Pydantic schema and only JSON objects that conform to the schema are returned.
However, as I understand the source code, this does not seem to be the case.
Please consider adding a note to the JSON parser documentation that `PydanticOutputParser` should be used if validation against the schema is desired. A small illustration follows.
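A hedged illustration of the difference (my addition; the model and payload are made up, and import locations follow recent langchain-core, so adjust if your version differs):
```python
from langchain_core.output_parsers import JsonOutputParser, PydanticOutputParser
from langchain_core.pydantic_v1 import BaseModel

class Joke(BaseModel):
    setup: str
    punchline: str

lenient = JsonOutputParser(pydantic_object=Joke)
# A payload that violates the schema parses without any error:
print(lenient.parse('{"setup": 123, "unexpected": true}'))

strict = PydanticOutputParser(pydantic_object=Joke)
# strict.parse('{"setup": 123, "unexpected": true}')  # raises OutputParserException
```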
### Idea or request for content:
_No response_ | DOC: JsonOutputParser does not mention that input is not validated against schema. | https://api.github.com/repos/langchain-ai/langchain/issues/20266/comments | 1 | 2024-04-10T10:22:42Z | 2024-07-18T16:09:04Z | https://github.com/langchain-ai/langchain/issues/20266 | 2,235,248,101 | 20,266 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain import hub
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_openai import ChatOpenAI
tools = [TavilySearchResults(max_results=1)]
prompt = hub.pull("hwchase17/openai-tools-agent")
llm = ChatOpenAI(model="gpt-4-turbo-2024-04-09", temperature=0)
agent = create_openai_tools_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools)
result = agent_executor.invoke({"input": "What's the weather like in Tokyo?"})
print(result["output"])
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "/home/oshima/work/src/os1ma/debug_langchain_gpt-4-turbo-2024-04-09/src/langchain_agent.py", line 20, in <module>
result = agent_executor.invoke({"input": "What's the weather like in Tokyo?"})
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/oshima/work/src/os1ma/debug_langchain_gpt-4-turbo-2024-04-09/.venv/lib/python3.11/site-packages/langchain/chains/base.py", line 163, in invoke
raise e
File "/home/oshima/work/src/os1ma/debug_langchain_gpt-4-turbo-2024-04-09/.venv/lib/python3.11/site-packages/langchain/chains/base.py", line 153, in invoke
self._call(inputs, run_manager=run_manager)
File "/home/oshima/work/src/os1ma/debug_langchain_gpt-4-turbo-2024-04-09/.venv/lib/python3.11/site-packages/langchain/agents/agent.py", line 1432, in _call
next_step_output = self._take_next_step(
^^^^^^^^^^^^^^^^^^^^^
File "/home/oshima/work/src/os1ma/debug_langchain_gpt-4-turbo-2024-04-09/.venv/lib/python3.11/site-packages/langchain/agents/agent.py", line 1138, in _take_next_step
[
File "/home/oshima/work/src/os1ma/debug_langchain_gpt-4-turbo-2024-04-09/.venv/lib/python3.11/site-packages/langchain/agents/agent.py", line 1138, in <listcomp>
[
File "/home/oshima/work/src/os1ma/debug_langchain_gpt-4-turbo-2024-04-09/.venv/lib/python3.11/site-packages/langchain/agents/agent.py", line 1166, in _iter_next_step
output = self.agent.plan(
^^^^^^^^^^^^^^^^
File "/home/oshima/work/src/os1ma/debug_langchain_gpt-4-turbo-2024-04-09/.venv/lib/python3.11/site-packages/langchain/agents/agent.py", line 514, in plan
for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
File "/home/oshima/work/src/os1ma/debug_langchain_gpt-4-turbo-2024-04-09/.venv/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2875, in stream
yield from self.transform(iter([input]), config, **kwargs)
File "/home/oshima/work/src/os1ma/debug_langchain_gpt-4-turbo-2024-04-09/.venv/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2862, in transform
yield from self._transform_stream_with_config(
File "/home/oshima/work/src/os1ma/debug_langchain_gpt-4-turbo-2024-04-09/.venv/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1880, in _transform_stream_with_config
chunk: Output = context.run(next, iterator) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/oshima/work/src/os1ma/debug_langchain_gpt-4-turbo-2024-04-09/.venv/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2826, in _transform
for output in final_pipeline:
File "/home/oshima/work/src/os1ma/debug_langchain_gpt-4-turbo-2024-04-09/.venv/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1283, in transform
for chunk in input:
File "/home/oshima/work/src/os1ma/debug_langchain_gpt-4-turbo-2024-04-09/.venv/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 4722, in transform
yield from self.bound.transform(
File "/home/oshima/work/src/os1ma/debug_langchain_gpt-4-turbo-2024-04-09/.venv/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1300, in transform
yield from self.stream(final, config, **kwargs)
File "/home/oshima/work/src/os1ma/debug_langchain_gpt-4-turbo-2024-04-09/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 245, in stream
raise e
File "/home/oshima/work/src/os1ma/debug_langchain_gpt-4-turbo-2024-04-09/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 225, in stream
for chunk in self._stream(messages, stop=stop, **kwargs):
File "/home/oshima/work/src/os1ma/debug_langchain_gpt-4-turbo-2024-04-09/.venv/lib/python3.11/site-packages/langchain_openai/chat_models/base.py", line 460, in _stream
with self.client.create(messages=message_dicts, **params) as response:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/oshima/work/src/os1ma/debug_langchain_gpt-4-turbo-2024-04-09/.venv/lib/python3.11/site-packages/openai/_utils/_utils.py", line 275, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/oshima/work/src/os1ma/debug_langchain_gpt-4-turbo-2024-04-09/.venv/lib/python3.11/site-packages/openai/resources/chat/completions.py", line 667, in create
return self._post(
^^^^^^^^^^^
File "/home/oshima/work/src/os1ma/debug_langchain_gpt-4-turbo-2024-04-09/.venv/lib/python3.11/site-packages/openai/_base_client.py", line 1213, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/oshima/work/src/os1ma/debug_langchain_gpt-4-turbo-2024-04-09/.venv/lib/python3.11/site-packages/openai/_base_client.py", line 902, in request
return self._request(
^^^^^^^^^^^^^^
File "/home/oshima/work/src/os1ma/debug_langchain_gpt-4-turbo-2024-04-09/.venv/lib/python3.11/site-packages/openai/_base_client.py", line 993, in _request
raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "Unknown parameter: 'messages[2].tool_calls[0].index'.", 'type': 'invalid_request_error', 'param': 'messages[2].tool_calls[0].index', 'code': 'unknown_parameter'}}
```
### Description
The OpenAI tools agent initialized by `create_openai_tools_agent` raises an error when used with gpt-4-turbo-2024-04-09.
The error does not occur with other OpenAI models such as gpt-4-0125-preview or gpt-3.5-turbo-0125.
If I use the openai package directly, without LangChain, the error does not occur either.
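For reference, a hedged sketch of the "plain openai works" check (my addition; the tool definition is a minimal placeholder, not the Tavily tool):
```python
from openai import OpenAI

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4-turbo-2024-04-09",
    messages=[{"role": "user", "content": "What's the weather like in Tokyo?"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "search",
            "description": "Web search",
            "parameters": {"type": "object", "properties": {"query": {"type": "string"}}},
        },
    }],
)
print(resp.choices[0].message)
```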
### System Info
```
$ python -m langchain_core.sys_info
System Information
------------------
> OS: Linux
> OS Version: #111~20.04.1-Ubuntu SMP Mon Mar 11 15:44:43 UTC 2024
> Python Version: 3.11.9 (main, Apr 10 2024, 18:31:06) [GCC 9.4.0]
Package Information
-------------------
> langchain_core: 0.1.41
> langchain: 0.1.15
> langchain_community: 0.0.32
> langsmith: 0.1.43
> langchain_openai: 0.1.2
> langchain_text_splitters: 0.0.1
> langchainhub: 0.1.15
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
``` | OpenAI tools agent error if use with gpt-4-turbo-2024-04-09 | https://api.github.com/repos/langchain-ai/langchain/issues/20264/comments | 3 | 2024-04-10T09:48:54Z | 2024-04-13T06:23:46Z | https://github.com/langchain-ai/langchain/issues/20264 | 2,235,182,687 | 20,264 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
I checked the connection first, and the following code works:
```
from langchain.graphs import Neo4jGraph
from neo4j import GraphDatabase
url = "neo4j://localhost:7687"
username ="neo4j"
password = <password>
def _create_and_return_greeting(tx, message):
result = tx.run("CREATE (a:Greeting) "
"SET a.message = $message "
"RETURN a.message + ', from node ' + id(a)", message=message)
return result.single()[0]
message="hello, world"
driver=GraphDatabase.driver(url, auth=(username, password))
with driver.session() as session:
greeting = session.execute_write(_create_and_return_greeting, message)
print(greeting)
```
However, trying to connect with
```
graph = Neo4jGraph(url=url, username=username, password=password)
```
gives this error:
### Error Message and Stack Trace (if applicable)
```
---------------------------------------------------------------------------
CypherSyntaxError Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/langchain_community/graphs/neo4j_graph.py in query(self, query, params)
    245         try:
--> 246             data = session.run(Query(text=query, timeout=self.timeout), params)
    247             json_data = [r.data() for r in data]
/usr/local/lib/python3.10/dist-packages/neo4j/_sync/work/session.py in run(self, query, parameters, **kwargs)
    312         parameters = dict(parameters or {}, **kwargs)
--> 313         self._auto_result._run(
    314             query, parameters, self._config.database,
/usr/local/lib/python3.10/dist-packages/neo4j/_sync/work/result.py in _run(self, query, parameters, db, imp_user, access_mode, bookmarks, notifications_min_severity, notifications_disabled_categories)
    180         self._connection.send_all()
--> 181         self._attach()
    182
/usr/local/lib/python3.10/dist-packages/neo4j/_sync/work/result.py in _attach(self)
    300         while self._attached is False:
--> 301             self._connection.fetch_message()
    302
/usr/local/lib/python3.10/dist-packages/neo4j/_sync/io/_common.py in inner(*args, **kwargs)
    177         try:
--> 178             func(*args, **kwargs)
    179         except (Neo4jError, ServiceUnavailable, SessionExpired) as exc:
/usr/local/lib/python3.10/dist-packages/neo4j/_sync/io/_bolt.py in fetch_message(self)
    849         )
--> 850         res = self._process_message(tag, fields)
    851         self.idle_since = monotonic()
/usr/local/lib/python3.10/dist-packages/neo4j/_sync/io/_bolt4.py in _process_message(self, tag, fields)
    367         try:
--> 368             response.on_failure(summary_metadata or {})
    369         except (ServiceUnavailable, DatabaseUnavailable):
/usr/local/lib/python3.10/dist-packages/neo4j/_sync/io/_common.py in on_failure(self, metadata)
    244             Util.callback(handler)
--> 245             raise Neo4jError.hydrate(**metadata)
    246
CypherSyntaxError: {code: Neo.ClientError.Statement.SyntaxError} {message: Invalid input 'C': expected whitespace, comment, DATABASE, DATABASES, DEFAULT, POPULATED, ALL, ROLES, USERS or show privilege scope (line 1, column 6 (offset: 5))
"SHOW CONSTRAINTS"
      ^}
During handling of the above exception, another exception occurred:
ValueError                                Traceback (most recent call last)
/tmp/ipykernel_3175/134052838.py in <module>
     27 print(greeting)
     28
---> 29 graph = Neo4jGraph(url=url, username=username, password=password)
     30 # QUERY = """
     31 # "MATCH (m:Movie)-[:IN_GENRE]->(:Genre {name:$genre})
/usr/local/lib/python3.10/dist-packages/langchain_community/graphs/neo4j_graph.py in __init__(self, url, username, password, database, timeout, sanitize, refresh_schema)
    217         if refresh_schema:
    218             try:
--> 219                 self.refresh_schema()
    220             except neo4j.exceptions.ClientError as e:
    221                 if e.code == "Neo.ClientError.Procedure.ProcedureNotFound":
/usr/local/lib/python3.10/dist-packages/langchain_community/graphs/neo4j_graph.py in refresh_schema(self)
    281         # Get constraints & indexes
    282         try:
--> 283             constraint = self.query("SHOW CONSTRAINTS")
    284             index = self.query("SHOW INDEXES YIELD *")
    285         except (
/usr/local/lib/python3.10/dist-packages/langchain_community/graphs/neo4j_graph.py in query(self, query, params)
    250             return json_data
    251         except CypherSyntaxError as e:
--> 252             raise ValueError(f"Generated Cypher Statement is not valid\n{e}")
    253
    254     def refresh_schema(self) -> None:
ValueError: Generated Cypher Statement is not valid
{code: Neo.ClientError.Statement.SyntaxError} {message: Invalid input 'C': expected whitespace, comment, DATABASE, DATABASES, DEFAULT, POPULATED, ALL, ROLES, USERS or show privilege scope (line 1, column 6 (offset: 5))
"SHOW CONSTRAINTS"
      ^}
```
### Description
I am trying to load Neo4jGraph from langchain_community.graphs. The issue is similar to https://github.com/langchain-ai/langchain/discussions/17643; however, no solution was provided in that discussion.
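For what it's worth, `SHOW CONSTRAINTS` is newer Cypher syntax that older Neo4j servers reject, so upgrading the Neo4j server itself may resolve this. As a stopgap, here is a minimal sketch (connection details are placeholders) that skips the schema refresh issuing that query, assuming the installed `langchain_community` exposes the `refresh_schema` flag visible in the traceback above:
```python
from langchain_community.graphs import Neo4jGraph

# Placeholder credentials; refresh_schema=False avoids the failing
# "SHOW CONSTRAINTS" / "SHOW INDEXES" queries at construction time.
graph = Neo4jGraph(
    url="bolt://localhost:7687",
    username="neo4j",
    password="password",
    refresh_schema=False,
)
```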
### System Info
langchain==0.1.15
langchain-community==0.0.32
langchain-core==0.1.41
langchain-experimental==0.0.57
langchain-openai==0.1.2
langchain-text-splitters==0.0.1
Python 3.10.12
Docker Ubuntu 20 | Neo4j langchain integration error | https://api.github.com/repos/langchain-ai/langchain/issues/20262/comments | 5 | 2024-04-10T08:17:00Z | 2024-08-02T08:48:13Z | https://github.com/langchain-ai/langchain/issues/20262 | 2,235,019,344 | 20,262 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
server:
```python
# excerpt: llm(), get_message_history, read_persist_var and system_prompt are defined elsewhere
from typing import Dict

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables import chain
from langchain_core.runnables.history import RunnableWithMessageHistory

@chain
def base_answer(source: Dict):
    prompt_b = ChatPromptTemplate.from_messages(
        [
            ("system", system_prompt if source["from"] == "customer" else read_persist_var('system_prompt')),
            MessagesPlaceholder(variable_name="history"),
            ("human", "{input}")
        ]
    )
    b_a = RunnableWithMessageHistory(
        prompt_b | llm(source["llm"]),
        get_message_history,
        input_messages_key="input",
        history_messages_key="history"
    ) | StrOutputParser()
    return b_a
```
client:
```Python
base_answer = RemoteRunnable("http://localhost:2031/base_answer/")
base_answer.stream({"input": "hi"}, config={"configurable": {"session_id": "d"}})
```
### Error Message and Stack Trace (if applicable)
ValueError: Missing keys ['session_id'] in config['configurable']. Expected keys are ['session_id']. When using via .invoke() or .stream(), pass in a config; e.g., chain.invoke({'input': 'foo'}, {'configurable': {'session_id': '[your-value-here]'}})
### Description
![CleanShot 2024-04-07 at 11 20 26](https://github.com/langchain-ai/langchain/assets/154310050/fbe56a0e-38fc-4fcc-9921-832b67ad781b)
![CleanShot 2024-04-07 at 11 21 14](https://github.com/langchain-ai/langchain/assets/154310050/8b862308-1ab5-4112-adab-1520bd82ca32)
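A commonly suggested server-side fix, offered here as an assumption since the add_routes call is not shown above: tell LangServe to forward the configurable section of the client config to the chain:
```python
# Hypothetical server-side sketch (app is an assumed FastAPI instance)
from langserve import add_routes

add_routes(
    app,
    base_answer,
    path="/base_answer",
    config_keys=["configurable"],  # without this, session_id can be dropped
)
```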
### System Info
python 3.9.18
langchain (latest version)
ubuntu 20.04 | Can't pass session id in | https://api.github.com/repos/langchain-ai/langchain/issues/20255/comments | 2 | 2024-04-10T01:50:54Z | 2024-07-18T16:08:59Z | https://github.com/langchain-ai/langchain/issues/20255 | 2,234,597,350 | 20,255 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.vectorstores import Chroma

def api_add_docs(doc, collection_name):
    db = Chroma.from_documents(doc, persist_directory=PERSIST_DIRECTORY, collection_name=collection_name)
    db.persist()
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I have a server up and running that takes a document and a collection name as input and stores them in the Chroma DB. The collection name is based on the user, and multiple users can send documents to the same collection.
As long as my API is up and running, all the documents coming from various sources are visible in the Chroma collection's parquet file. However, when I stop the API, I see only the first document of the first collection in the parquet file, and the rest are erased.
The code I added is just a representation of how I store and persist the DB every time I receive a document through my API.
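One pattern worth trying, as a sketch under the assumption that a chromadb version providing PersistentClient (0.4+) is available: share a single persistent client across all collections instead of constructing a fresh Chroma per request, so writes from different users land in the same on-disk store (`embeddings` is an assumed embedding function):
```python
import chromadb
from langchain_community.vectorstores import Chroma

client = chromadb.PersistentClient(path=PERSIST_DIRECTORY)

def api_add_docs(doc, collection_name):
    Chroma.from_documents(
        doc,
        embeddings,                      # assumed embedding function
        client=client,                   # reuse one persistent client
        collection_name=collection_name,
    )
```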
### System Info
langchain==0.0.352
langchain-core==0.1.29
langchain-community==0.0.6
duckdb==0.10.2 | Document Persist Temporarily | https://api.github.com/repos/langchain-ai/langchain/issues/20253/comments | 0 | 2024-04-10T00:55:20Z | 2024-07-18T16:08:54Z | https://github.com/langchain-ai/langchain/issues/20253 | 2,234,548,236 | 20,253 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
This code generates an error when invoked using chain.invoke from a prompt template. Removing any one of the stop sequences resolves the issue, but adding a stop sequence causes it. Content of stop sequences doesn't matter.
```python
from langchain_openai import AzureOpenAI

llm = AzureOpenAI(
    api_version="2024-02-01",
    deployment_name="my-test-deployment",
    max_tokens=1500,
    temperature=0.8,
    top_p=1,
    model_kwargs={"stop": ["<|im_end|>", "Student response:", "Grading:", "Test case", "Student Response:"]}
)
```
### Error Message and Stack Trace (if applicable)
BadRequestError: Error code: 400 - {'error': {'message': "'$.stop' is invalid. Please check the API reference: https://platform.openai.com/docs/api-reference.", 'type': 'invalid_request_error', 'param': None, 'code': None}}
### Description
I'm trying to use the LangChain library to communicate with the Azure OpenAI LLM, using model_kwargs to send stop sequences, but it seems that passing more than 4 stop sequences makes it return an error.
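For context, the OpenAI chat completions API documents a limit of at most 4 stop sequences, so this looks like an upstream API constraint rather than a LangChain bug. A sketch of the same call with the stop list capped at four entries:
```python
llm = AzureOpenAI(
    api_version="2024-02-01",
    deployment_name="my-test-deployment",
    max_tokens=1500,
    temperature=0.8,
    top_p=1,
    model_kwargs={"stop": ["<|im_end|>", "Student response:", "Grading:", "Test case"]},
)
```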
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22631
> Python Version: 3.12.2 (tags/v3.12.2:6abddd9, Feb 6 2024, 21:26:36) [MSC v.1937 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.1.40
> langchain: 0.1.14
> langchain_community: 0.0.31
> langsmith: 0.1.40
> langchain_openai: 0.1.1
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | more than 4 stop sequences using model_kwargs for AzureOpenAI result in error | https://api.github.com/repos/langchain-ai/langchain/issues/20234/comments | 1 | 2024-04-09T21:21:55Z | 2024-04-10T14:42:48Z | https://github.com/langchain-ai/langchain/issues/20234 | 2,234,320,685 | 20,234 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
**
### Error Message and Stack Trace (if applicable)
**
### Description
Please make a release of langchain-together that includes #19649
### System Info
** | Request for langchain-together release | https://api.github.com/repos/langchain-ai/langchain/issues/20217/comments | 0 | 2024-04-09T17:34:41Z | 2024-04-09T19:23:53Z | https://github.com/langchain-ai/langchain/issues/20217 | 2,233,985,832 | 20,217 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
model = genai.GenerativeModel(model_name='gemini-pro')
#llm = OpenAI(temperature=0)
mydb = SQLDatabase.from_uri('sqlite:///test.db')
llm = {'model': model}
toolkit = SQLDatabaseToolkit(db=mydb)
agent_executer = create_sql_agent(
    llm=llm,
    toolkit=toolkit,
    handle_parsing_error=True,
    verbose=True
)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Getting the same issue.
In my case the issue is not resolved. Is it because I am using Gemini, i.e. a different model from OpenAI? I have followed all the recommendations from the chat, but it is still the same. How can I resolve this issue using the Gemini Pro model?
genai.configure(api_key='my Api key')
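The validation likely fails because a raw google.generativeai model (or a dict wrapping it) is not a LangChain Runnable, and SQLDatabaseToolkit also requires an llm of its own. A sketch using the LangChain Gemini wrapper instead (assumes langchain-google-genai is installed):
```python
from langchain_community.agent_toolkits import SQLDatabaseToolkit, create_sql_agent
from langchain_community.utilities import SQLDatabase
from langchain_google_genai import ChatGoogleGenerativeAI

llm = ChatGoogleGenerativeAI(model="gemini-pro", google_api_key="my Api key")
mydb = SQLDatabase.from_uri("sqlite:///test.db")
toolkit = SQLDatabaseToolkit(db=mydb, llm=llm)  # the toolkit needs an llm too
agent_executer = create_sql_agent(llm=llm, toolkit=toolkit, verbose=True)
```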
### System Info
import streamlit as st
import pandas as pd
import google.generativeai as genai
from langchain_community.agent_toolkits import SQLDatabaseToolkit
from langchain.agents.agent_types import AgentType
from langchain.sql_database import SQLDatabase
I have used the above imports.
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Code
```
from langchain.schema import Generation
from langchain_core.output_parsers import JsonOutputParser
print(JsonOutputParser().parse_result([Generation(text='{"key":"value\\')]))
```
Actual result:
```{}```
Expected result: not sure
### Error Message and Stack Trace (if applicable)
_No response_
### Description
This problem occurred while I was streaming a JSON response containing newlines ("\n"). The chunk stream stopped right after the backslash, which left an empty JSON object after the parser.
### System Info
langchain==0.1.14
langchain-core==0.1.40
plaform: linux
Python 3.11.0rc1 | JsonOutputParser returns empty json when text ends with "\" | https://api.github.com/repos/langchain-ai/langchain/issues/20204/comments | 1 | 2024-04-09T12:33:52Z | 2024-07-16T16:07:13Z | https://github.com/langchain-ai/langchain/issues/20204 | 2,233,339,970 | 20,204 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain.chains import create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_core.runnables import ConfigurableFieldSpec, ConfigurableField
from langchain_community.vectorstores.azuresearch import AzureSearch
from langchain_community.chat_message_histories import RedisChatMessageHistory
from langchain.chains import create_history_aware_retriever
from typing import Any, Callable, Dict
import os
import re
embeddings = OpenAIEmbeddings(
openai_api_key=os.environ["OPENAI-API-KEY"],
model="text-embedding-3-small"
)
llm = ChatOpenAI(
model_name="gpt-3.5-turbo-0125", temperature=0,
openai_api_key=os.environ["OPENAI-API-KEY"]
)
vector_store = AzureSearch(
azure_search_endpoint=os.environ["VECTOR-STORE-ADDRESS"],
azure_search_key=os.environ["VECTOR-STORE-ADMIN-KEY"],
index_name="products",
embedding_function=embeddings.embed_query,
)
def _is_valid_identifier(value: str) -> bool:
"""
Validate an identifier such as user_id or conversation_id.
Args:
- value (str): The identifier to be validated.
Returns:
- bool: True if the identifier is valid, False otherwise.
"""
valid_characters = re.compile(r"^[a-zA-Z0-9-_]+$")
return bool(valid_characters.match(value))
def create_session_factory() -> Callable[[str, str], RedisChatMessageHistory]:
"""
Create a session factory for Redis based chat history storage.
Returns:
- Callable[[str, str], RedisChatMessageHistory]: A function to get chat history based on user_id and conversation_id.
"""
def get_chat_history(user_id: str, conversation_id: str) -> RedisChatMessageHistory:
if not _is_valid_identifier(user_id):
raise ValueError(
f"User ID {user_id} is not in a valid format. "
"User ID must only contain alphanumeric characters, "
"hyphens, and underscores."
)
if not _is_valid_identifier(conversation_id):
raise ValueError(
f"Conversation ID {conversation_id} is not in a valid format. "
"Conversation ID must only contain alphanumeric characters, "
"hyphens, and underscores."
)
return RedisChatMessageHistory(
session_id=f"user:{user_id}:conversation:{conversation_id}",
url="redis://172.20.0.3:6379",
key_prefix="qna-rag"
)
return get_chat_history
contextualize_q_system_prompt = """Given a chat history and the latest user question \
which might reference context in the chat history, formulate a standalone question \
which can be understood without the chat history. Do NOT answer the question, \
just reformulate it if needed and otherwise return it as is."""
contextualize_q_prompt = ChatPromptTemplate.from_messages(
[
("system", contextualize_q_system_prompt),
MessagesPlaceholder("chat_history"),
("human", "{input}"),
]
)
history_aware_retriever = create_history_aware_retriever(
llm,
vector_store.as_retriever(
search_kwargs={"k": 2}
).configurable_fields(
search_kwargs=ConfigurableField(
id="search_kwargs_retriever",
name="Search Kwargs",
description="The search kwargs to use",
)
),
contextualize_q_prompt
)
qa_system_prompt = """Answer the user's questions based on the below context.
If the context doesn't contain any relevant information to the question, don't make something up and just say "I don't know":
<context>
{context}
</context>"""
qa_prompt = ChatPromptTemplate.from_messages(
[
("system", qa_system_prompt),
MessagesPlaceholder("chat_history"),
("human", "{input}"),
]
)
question_answer_chain = create_stuff_documents_chain(llm, qa_prompt)
rag_chain = create_retrieval_chain(history_aware_retriever, question_answer_chain)
conversational_rag_chain = RunnableWithMessageHistory(
rag_chain,
create_session_factory(),
input_messages_key="input",
history_messages_key="chat_history",
history_factory_config=[
ConfigurableFieldSpec(
id="user_id",
annotation=str,
name="User ID",
description="Unique identifier for the user.",
default="",
is_shared=True,
),
ConfigurableFieldSpec(
id="conversation_id",
annotation=str,
name="Conversation ID",
description="Unique identifier for the conversation.",
default="",
is_shared=True,
),
],
output_messages_key="answer"
)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I do not know how to add sources to the RAG chain above. I tried multiple solutions, such as the load_qa_with_sources chain, but that doesn't seem to work. I also tried [this](https://python.langchain.com/docs/use_cases/question_answering/sources/#adding-sources) but am unable to figure out how to plug it into my conversational_rag_chain above. All I want is to return sources along with the answer from conversational_rag_chain. I did try what was recommended in this [discussion](https://github.com/langchain-ai/langchain/discussions/16582) but still could not make it work. I got no help on Discussions, the kapa_ai bot, dosubot, or the community, hence the issue here.
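One thing that may already get you there (a sketch, assuming create_retrieval_chain keeps its default output keys): the retrieval chain returns the retrieved documents under the "context" key alongside "answer", so sources can be read off the invoke result without changing the chain itself:
```python
result = conversational_rag_chain.invoke(
    {"input": "What products do you sell?"},
    config={"configurable": {"user_id": "u1", "conversation_id": "c1"}},
)
answer = result["answer"]
# Each retrieved Document carries metadata; the source typically lives there.
sources = [doc.metadata for doc in result["context"]]
```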
### System Info
```
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Wed Mar 2 00:30:59 UTC 2022
> Python Version: 3.11.8 (main, Feb 7 2024, 22:38:59) [GCC 12.2.0]
Package Information
-------------------
> langchain_core: 0.1.35
> langchain: 0.1.13
> langchain_community: 0.0.29
> langsmith: 0.1.31
> langchain_cli: 0.0.21
> langchain_openai: 0.0.8
> langchain_text_splitters: 0.0.1
> langchainhub: 0.1.15
> langserve: 0.0.51
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
``` | Can't add sources to RunnableWithMessageHistory | https://api.github.com/repos/langchain-ai/langchain/issues/20203/comments | 7 | 2024-04-09T11:37:14Z | 2024-04-23T06:02:53Z | https://github.com/langchain-ai/langchain/issues/20203 | 2,233,242,132 | 20,203 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.document_loaders import GCSDirectoryLoader
loader = GCSDirectoryLoader(project_name="aist", bucket="testing-hwc")
loader.load()
# Error occurs here
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Users reading in a large number of docs using `GCSDirectoryLoader` may experience exceptions. These break the function because the exception is not caught in a try/except block.
Previous PR #19591 has attempted to solve this issue but there's an error in the code.
Issue since langchain-community==0.0.31
PR #20005 will resolve this issue, please merge.
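Until a fixed release is out, a user-side stopgap sketch is simply to contain the failure so one bad blob does not kill the caller (this loses the partial results, so it is a mitigation rather than a fix):
```python
from langchain_community.document_loaders import GCSDirectoryLoader

loader = GCSDirectoryLoader(project_name="aist", bucket="testing-hwc")
try:
    docs = loader.load()
except Exception as exc:  # the loader currently leaks exceptions from bad blobs
    docs = []
    print(f"GCS load failed: {exc}")
```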
### System Info
Issue since langchain-community==0.0.31 | Exception not caught in try-exception block in GCSDirectoryLoader | https://api.github.com/repos/langchain-ai/langchain/issues/20198/comments | 0 | 2024-04-09T09:01:22Z | 2024-07-16T16:07:08Z | https://github.com/langchain-ai/langchain/issues/20198 | 2,232,954,233 | 20,198 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
%pip install --upgrade --quiet langchain langchain-openai
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(api_key="sk-ssot-xxx-xxx-xxx", base_url="https://openai-proxy.xxxx.is/v1")
llm.invoke("how can langsmith help with testing?")
```
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[62], line 1
----> 1 llm.invoke("how can langsmith help with testing?")
File /opt/app-root/lib64/python3.8/site-packages/langchain_core/language_models/chat_models.py:173, in invoke(self, input, config, stop, **kwargs)
165 async def ainvoke(
166 self,
167 input: LanguageModelInput,
(...)
171 **kwargs: Any,
172 ) -> BaseMessage:
--> 173 config = ensure_config(config)
174 llm_result = await self.agenerate_prompt(
175 [self._convert_input(input)],
176 stop=stop,
(...)
181 **kwargs,
182 )
183 return cast(ChatGeneration, llm_result.generations[0][0]).message
File /opt/app-root/lib64/python3.8/site-packages/langchain_core/language_models/chat_models.py:571, in generate_prompt(self, prompts, stop, callbacks, **kwargs)
563 prompt_messages = [p.to_messages() for p in prompts]
564 return await self.agenerate(
565 prompt_messages, stop=stop, callbacks=callbacks, **kwargs
566 )
568 def _generate_with_cache(
569 self,
570 messages: List[BaseMessage],
--> 571 stop: Optional[List[str]] = None,
572 run_manager: Optional[CallbackManagerForLLMRun] = None,
573 **kwargs: Any,
574 ) -> ChatResult:
575 if isinstance(self.cache, BaseCache):
576 llm_cache = self.cache
File /opt/app-root/lib64/python3.8/site-packages/langchain_core/language_models/chat_models.py:403, in generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)
401 results = []
402 for i, m in enumerate(messages):
--> 403 try:
404 results.append(
405 self._generate_with_cache(
406 m,
(...)
410 )
411 )
412 except BaseException as e:
File /opt/app-root/lib64/python3.8/site-packages/langchain_core/callbacks/manager.py:1443, in configure(cls, inheritable_callbacks, local_callbacks, verbose, inheritable_tags, local_tags, inheritable_metadata, local_metadata)
1418 @classmethod
1419 def configure(
1420 cls,
(...)
1427 local_metadata: Optional[Dict[str, Any]] = None,
1428 ) -> CallbackManager:
1429 """Configure the callback manager.
1430
1431 Args:
1432 inheritable_callbacks (Optional[Callbacks], optional): The inheritable
1433 callbacks. Defaults to None.
1434 local_callbacks (Optional[Callbacks], optional): The local callbacks.
1435 Defaults to None.
1436 verbose (bool, optional): Whether to enable verbose mode. Defaults to False.
1437 inheritable_tags (Optional[List[str]], optional): The inheritable tags.
1438 Defaults to None.
1439 local_tags (Optional[List[str]], optional): The local tags.
1440 Defaults to None.
1441 inheritable_metadata (Optional[Dict[str, Any]], optional): The inheritable
1442 metadata. Defaults to None.
-> 1443 local_metadata (Optional[Dict[str, Any]], optional): The local metadata.
1444 Defaults to None.
1445
1446 Returns:
1447 CallbackManager: The configured callback manager.
1448 """
1449 return _configure(
1450 cls,
1451 inheritable_callbacks,
(...)
1457 local_metadata,
1458 )
File /opt/app-root/lib64/python3.8/site-packages/langchain_core/callbacks/manager.py:1940, in _configure(callback_manager_cls, inheritable_callbacks, local_callbacks, verbose, inheritable_tags, local_tags, inheritable_metadata, local_metadata)
1926 callback_manager = callback_manager_cls(
1927 handlers=inheritable_callbacks_.copy(),
1928 inheritable_handlers=inheritable_callbacks_.copy(),
1929 parent_run_id=parent_run_id,
1930 )
1931 else:
1932 callback_manager = callback_manager_cls(
1933 handlers=inheritable_callbacks.handlers.copy(),
1934 inheritable_handlers=inheritable_callbacks.inheritable_handlers.copy(),
1935 parent_run_id=inheritable_callbacks.parent_run_id,
1936 tags=inheritable_callbacks.tags.copy(),
1937 inheritable_tags=inheritable_callbacks.inheritable_tags.copy(),
1938 metadata=inheritable_callbacks.metadata.copy(),
1939 inheritable_metadata=inheritable_callbacks.inheritable_metadata.copy(),
-> 1940 )
1941 local_handlers_ = (
1942 local_callbacks
1943 if isinstance(local_callbacks, list)
1944 else (local_callbacks.handlers if local_callbacks else [])
1945 )
1946 for handler in local_handlers_:
AttributeError: 'NoneType' object has no attribute 'get'
### Description
I'm testing LangChain; our OpenAI endpoint is behind our proxy. It works with the code below:
```
%pip install --upgrade --quiet openai
from openai import OpenAI
client = OpenAI(api_key="sk-ssot-xxx-xxx-xxx", base_url="https://openai-proxy.xxxx.is/v1")
completion = client.chat.completions.create(
model="gpt-3.5-turbo",
messages=[
{"role": "system", "content": "You are a poetic assistant, skilled in explaining complex programming concepts with creative flair."},
{"role": "user", "content": "Compose a poem that explains the concept of recursion in programming."}
]
)
print(completion.choices[0].message)
```
but under the same simple setup, LangChain throws the error above.
Code to reproduce the error:
```
%pip install --upgrade --quiet langchain langchain-openai
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(api_key="sk-ssot-xxx-xxx-xxx", base_url="https://openai-proxy.xxxx.is/v1")
llm.invoke("how can langsmith help with testing?")
```
### System Info
langchain==0.1.14
langchain-community==0.0.31
langchain-core==0.1.40
langchain-openai==0.1.1
langchain-text-splitters==0.0.1
Linux 5.15.0-83-generic x86_64
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
Python 3.8.6
| Cannot connect to OpenAI by providing api_key and base_url | https://api.github.com/repos/langchain-ai/langchain/issues/20195/comments | 1 | 2024-04-09T07:40:06Z | 2024-07-16T16:07:03Z | https://github.com/langchain-ai/langchain/issues/20195 | 2,232,808,570 | 20,195 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import os

from dotenv import load_dotenv
from openai import AzureOpenAI
from langchain.chains import LLMChain, APIChain
from langchain.chains.api import open_meteo_docs
from langchain_core.prompts import PromptTemplate

load_dotenv()

client = AzureOpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2024-02-15-preview",
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT")
)

prompt = PromptTemplate(
    input_variables=["api_url"],
    template="""Act as a technical writer. Write detailed documentation for the API that exists at {api_url}. Only detail the request, do not describe the response. Do not include any parameters not in the sample endpoint."""
)

chain = LLMChain(
    llm=client,
    verbose=True,
    prompt=prompt
)

url = "https://pitchfork.com/api/v2/search/?genre=experimental&genre=global&genre=jazz&genre=metal&genre=pop&genre=rap&genre=rock&types=reviews&sort=publishdate%20desc%2Cposition%20asc&size=5&start=0&rating_from=0.0"
response = chain.run(url)
print(response)
```
### Error Message and Stack Trace (if applicable)
super().__init__(**kwargs)
File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 2 validation errors for LLMChain
llm
instance of Runnable expected (type=type_error.arbitrary_type; expected_arbitrary_type=Runnable)
llm
instance of Runnable expected (type=type_error.arbitrary_type; expected_arbitrary_type=Runnable)
### Description
I am trying to call an external API. Here I am using langchain==0.1.0, pydantic==1.10.8, and openai==1.7.0.
I am using Azure OpenAI instead of OpenAI, and whenever I use LLMChain I get these 2 validation errors.
I am not sure whether this is a version issue or a problem in my implementation.
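For reference, LLMChain expects a LangChain Runnable, while openai.AzureOpenAI is the raw SDK client, which is why pydantic rejects it. A sketch using the LangChain wrapper instead (the deployment name is a placeholder; the key and endpoint are read from the same environment variables):
```python
from langchain_openai import AzureChatOpenAI

llm = AzureChatOpenAI(
    azure_deployment="my-deployment",   # placeholder deployment name
    api_version="2024-02-15-preview",
    # AZURE_OPENAI_API_KEY / AZURE_OPENAI_ENDPOINT are picked up from the env
)
chain = LLMChain(llm=llm, verbose=True, prompt=prompt)
```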
### System Info
Python 3.10.11
langchain version==0.1.0
pydantic==1.10.8
openai==1.7.0
openapi-schema-pydantic==1.2.4 | pydantic.error_wrappers.ValidationError: 2 validation errors for LLMChain | https://api.github.com/repos/langchain-ai/langchain/issues/20192/comments | 3 | 2024-04-09T06:53:07Z | 2024-07-19T16:08:06Z | https://github.com/langchain-ai/langchain/issues/20192 | 2,232,729,825 | 20,192 |
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
Only some function/tool-calling models have a bind_tools method. Others have a bind_functions method, and others still don't have any special bind methods. This is in part due to different api parameter names, in part due to differences in model parallel tool/function calling abilities, in part because we were waiting to see if function/tool-calling would become widespread.
It seems like it has become widespread and the name we're converging on for parallel function/tool calling (which is the more common and more generic interface than single function calling) is just "tool calling". So proposing that we standardize the bind_tools interface and have all capable models implement it.
Standard interface would be something like
```python
class BaseLanguageModel(...):
def bind_tools(
self,
tools: Sequence[Union[Dict[str, Any], Type[BaseModel], Callable, BaseTool]],
**kwargs: Any,
) -> Runnable[LanguageModelInput, BaseMessage]:
...
```
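For illustration, a hypothetical call site under the proposed interface (GetWeather is a made-up tool schema, not an existing one):
```python
from langchain_core.pydantic_v1 import BaseModel, Field

class GetWeather(BaseModel):
    """Get the current weather for a location."""

    location: str = Field(description="City and state, e.g. San Francisco, CA")

llm_with_tools = chat_model.bind_tools([GetWeather])
msg = llm_with_tools.invoke("what's the weather in SF?")
# the provider's tool invocation(s) would come back on the returned message
```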
| RFC: add `bind_tools` to BaseLanguageModel | https://api.github.com/repos/langchain-ai/langchain/issues/20178/comments | 4 | 2024-04-08T21:27:27Z | 2024-08-08T00:31:05Z | https://github.com/langchain-ai/langchain/issues/20178 | 2,232,109,520 | 20,178 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
# Full code at https://github.com/digillia/Digillia-Colab/blob/main/tools/langchain.ipynb
from langchain.chains import create_history_aware_retriever, create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_core.messages import SystemMessage
from langchain_core.prompts import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    SystemMessagePromptTemplate,
)

condense_prompt = ChatPromptTemplate.from_messages([
    SystemMessage(content=SYSTEM_PROMPT),
    HumanMessagePromptTemplate.from_template(CONDENSE_PROMPT),
])
history_aware_retriever = create_history_aware_retriever(llm, retriever, condense_prompt)

context_prompt = ChatPromptTemplate.from_messages([
    SystemMessagePromptTemplate.from_template(SYSTEM_PROMPT + '\n' + CONTEXT_PROMPT),
    HumanMessagePromptTemplate.from_template('{input}')
])
question_answer_chain = create_stuff_documents_chain(llm, context_prompt)
rag_chain = create_retrieval_chain(history_aware_retriever, question_answer_chain)
```
### Error Message and Stack Trace (if applicable)
This is more of a design issue than a bug.
### Description
IMHO `create_history_aware_retriever` should return both:
- the rephrased question considering the chat history, and
- the documents retrieved based on the rephrased question.
See https://github.com/langchain-ai/langchain/discussions/20128
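As a sketch of the behavior being requested (names here are illustrative, not a proposed API), the rephrasing step and the retrieval can be composed manually today so that both outputs are surfaced:
```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough

rephrase = condense_prompt | llm | StrOutputParser()

# Output: {"input": ..., "chat_history": ..., "rephrased": ..., "context": [...]}
retriever_with_question = RunnablePassthrough.assign(rephrased=rephrase).assign(
    context=lambda x: retriever.invoke(x["rephrased"])
)
```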
### System Info
System Information
OS: Darwin
OS Version: Darwin Kernel Version 23.4.0: Fri Mar 15 00:12:41 PDT 2024; root:xnu-10063.101.17~1/RELEASE_ARM64_T8103
Python Version: 3.9.6 (default, Feb 3 2024, 15:58:27)
[Clang 15.0.0 (clang-1500.3.9.4)]
Package Information
langchain_core: 0.1.40
langchain: 0.1.14
langchain_community: 0.0.31
langsmith: 0.1.40
langchain_openai: 0.0.2.post1
langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
The following packages were not found:
langgraph
langserve | Probable design issue with create_retrieval_chain from create_history_aware_retriever | https://api.github.com/repos/langchain-ai/langchain/issues/20156/comments | 0 | 2024-04-08T12:21:44Z | 2024-07-15T16:07:07Z | https://github.com/langchain-ai/langchain/issues/20156 | 2,231,055,659 | 20,156 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
NA
### Error Message and Stack Trace (if applicable)
NA
### Description
The PDF is actually in Hindi Format and when I am reading it is printing in this language.
```
"cht “kks/ku ds fy, Fkhje ¼2 xzke@fdxzk cht½ dk iz;ksx chtksa dks laØfer\ngksus ls cpkus ds fy, djrs gSA DyksjksFkSyksfuy 0-2 izfr”kr ;k eSadkstsc 0-2\nizfr”kr dh nj ls fNM+dko djds izHkkoh jksx fu;a=.k fd;k tk ldrk gSA\nNksVh iÙkh chekjh ¼fyfVy yhQ½ ekbdksIykLek ls gksrh gS tks yhQ gkWij ds\n¼lhfLVl fglheksul½ dkilkbfVl }kjk QSyrh gSA izHkkfor ikS/kk vkdkj esa\nNksVk vkSj ifÙk;ka iSjk NksVh] vfodflr uiqald Qwy tks fd dksbZ Qy dk\n/kkj.k ugh djrhA jksdFkke ds mik; esa fiNsrh Qlyksa dks bdV~Bk djds\nmudks tyk nsuk] “kq:vkrh nkSj ds izHkkfor ikS/kksa dks [ksr ls fudkyuk] rkfd\nfcekjh dk iquZforj.k u gks ik;s vkSj igyh “kq:vkrh y{k.k ds fn[kkbZ nsus ij\neSykfFk;ku 0-2 izfr”kr dh nj fNM+dko djus ls jksx dks fu;af=r fd;k tk\nldrk gSA\niz'u& 4 % mÙkj izns'k ns'k esa fepZ mRiknd {ks=ksa esa ls ,d gS vkSj cgq/kk ge\nfepZ ds iÙkh ds /kCcsnkj jksx dk lkeuk djrs gS] bldk izca/ku dSls\ndjas\\\nmÙkj % iÙkh ds /kCcsnkj jksx tSls ,UFkzksDukst] vYVjusfj;k ldksZLiksjk yhQ LikV vkSj\nyhQ dyZ dkEiysDl bR;kfn chekfj;ka gSa tks fepZ esa yxrh gS\n,UFkSDukst% ;g dksykWbVksVªkbde dSIlkbZ dod ds }kjk QSyrk gSA blds ds\ny{k.k “kq:vkrh nkSj esa u;h Vgfu;ksa ij ¼usØksfll½ Åijh fljs ls uhps dh\nrjQ gfj;kyh foghu fn[kk;h iM+rs gS blfy, bls MkbZcSd Hkh dgrs gSaA Qyksa\nij “kq:vkrh nkSj esa gYds Hkwjs jax ds /kwlj CykbV dh rjg fn[kk;h nsrh gSA\nLkjdksLiksjk yhQ LikV % bl fcekjh ds y{k.k xqykch jax ds xksy /kCcs\nftuds e/; esa Hkwjs jax dk dsUnz gksrk gS ,oa vYVjusfj;k ls izHkkfor ikS/kksa esa\nxgjs Hkwjs jax ds vfuf;er /kCcs ik;s tkrs gS fdUrq nksuksa CykbV ds y{k.k nsj esa\nfn[kk;h iM+rs gSaA\n,aFkszDukst] ldksZLiksjk vkSj vYVjusfj;k yhQ LikV lHkh ds fu;a=.k fy,\nDyksjksFkSyksfuy ¼0-2 izfr”kr½ ,oa eSadkstsc ¼0-2 izfr”kr½ ,d ds ckn 15 fnuksa\nds vUrjky ij fNM+dko djuk pkfg,A\nyhQ dyZ % fepZ esa iÙkh dk xqjpk jksx dbZ dkj.kkas ls gksrk gS tSls fo’kk.kq\neD[kh vkSj fFkzIl ds dkj.k y{k.k eq[;r;k vfodflr >qylh gq;h] [kqjnjh ,oa\neksVh] uhps dh rjQ eqM+h ,oa xqPNs dk vkdkj /kkj.k dj ysrh gSA chekjh ds\nizca/ku ds fy, ikS/ks dh tM+ksa dks bfeMkDyksfizM ¼4 feyh@yhVj ikuh½ esa ,d\n?kaVs ds fy, Mqcksuk pkfg,] ikS/k dks dhVjks/kh tky ds vUnj mxkuk pkfg,]\nvojks/kh Qlyksa ¼eDdk] cktjk] Tokj bR;kfn½ dks yxkuk pkfg, ftlls dhVksa\nds MM+us esa ck/kk mRiUu gksrh gS vkSj le;≤ ij vUrjokgh flLVfed\ndhVuk”kd dk iz;ksx djuk pkfg,A\n44\n"
```
### System Info
pip == 24.0
python == 3.10.10
Cuda == 12.1 | I'm using langchain document_loaders to read a PDF, but it's in Hindi, and I'm having trouble understanding it. | https://api.github.com/repos/langchain-ai/langchain/issues/20153/comments | 1 | 2024-04-08T11:47:07Z | 2024-04-08T15:39:45Z | https://github.com/langchain-ai/langchain/issues/20153 | 2,230,983,004 | 20,153 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
the following code:
```python
import pathlib

from langchain.chains import GraphSparqlQAChain
from langchain_community.graphs import RdfGraph
from langchain_openai import ChatOpenAI

metaqa_path = pathlib.Path(__file__).parent.absolute() / 'mydata' / 'metaQA'
metaqa_rdf = metaqa_path / 'knowledge_graph.rdf'
metaqa_ttl = metaqa_path / 'kb_wo_parse.ttl'

graph = RdfGraph(
    source_file=str(metaqa_ttl),
    standard="rdf",
)

chain = GraphSparqlQAChain.from_llm(
    ChatOpenAI(temperature=0), graph=graph, verbose=True
)

print(chain.run(r'which movies starred the actor Gert Fröbe'))
```
### Error Message and Stack Trace (if applicable)
line 1200, in parse_string
raise exc.with_traceback(None)
pyparsing.exceptions.ParseException: Expected {SelectQuery | ConstructQuery | DescribeQuery | AskQuery}, found '`' (at char 0), (line:1, col:1)
### Description
- The chain should be eliminating the ``` (triple backticks) generated by the LLM, which cannot be parsed by rdflib.
- Should generate
```
PREFIX metaQAKB: <http://metaQAKB.org/>
SELECT ?movie
WHERE {
?movie metaQAKB:starred_actors "Gert Fröbe" .
}
```
Instead of
````
```
PREFIX metaQAKB: <http://metaQAKB.org/>
SELECT ?movie
WHERE {
    ?movie metaQAKB:starred_actors "Gert Fröbe" .
}
```
````
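A stopgap sketch while the chain does not sanitize the model output: strip markdown fences from the generated query before it reaches rdflib:
```python
import re

def strip_code_fences(text: str) -> str:
    """Remove a leading/trailing ``` fence (with an optional language tag)."""
    return re.sub(r"^```\w*\n?|\n?```$", "", text.strip())
```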
### System Info
langchain-0.1.14
rdflib-7.0.0
langchain-openai-0.1.1
| Parsing of generated SPARQL via langchain rdflib always gives "pyparsing.exceptions.ParseException" | https://api.github.com/repos/langchain-ai/langchain/issues/20150/comments | 2 | 2024-04-08T11:05:26Z | 2024-05-25T20:44:17Z | https://github.com/langchain-ai/langchain/issues/20150 | 2,230,903,589 | 20,150 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
nil
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Is there any way to get the OpenAI API key usage and balance through LangChain in Python code?
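Not a full answer, but LangChain can report per-call token usage and estimated cost for OpenAI through a callback; the account balance itself is only exposed by OpenAI's billing dashboard, as far as I know:
```python
from langchain_community.callbacks import get_openai_callback
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo")
with get_openai_callback() as cb:
    llm.invoke("hello")
print(cb.total_tokens, cb.prompt_tokens, cb.completion_tokens, cb.total_cost)
```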
### System Info
python: 3.11
langchain: latest
| how to get openai usage through code | https://api.github.com/repos/langchain-ai/langchain/issues/20139/comments | 3 | 2024-04-08T07:45:21Z | 2024-04-08T15:41:21Z | https://github.com/langchain-ai/langchain/issues/20139 | 2,230,499,464 | 20,139 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from langchain.agents import AgentExecutor
from langchain.agents.output_parsers.openai_tools import \
OpenAIToolsAgentOutputParser
from langchain.memory import ConversationBufferMemory
from langchain_community.tools import DuckDuckGoSearchRun
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_openai import ChatOpenAI
systm = "As a research agent named Geppetto, you are adept at utilizing diverse analytical tools that you have access to in order to address inquiries effectively. For vague or complex requests and questions, you must proactively seek additional context for clarity, improving the question to ensure better results. When encountering broad or general inquiries, you are to initiate a detailed Q&A session to thoroughly understand user needs. Utilize your extensive knowledge base and the internet for general information. Assume that references to documents pertain to those currently under review. Always review your response and improve."
llm = ChatOpenAI(temperature=0, model="gpt-4-0125-preview",streaming=True)
prompt = ChatPromptTemplate.from_messages(
[
(
"system",
systm,
),
("user", "{input}"),
MessagesPlaceholder(variable_name="agent_scratchpad"),
MessagesPlaceholder(variable_name="chat_history"),
]
)
memory = ConversationBufferMemory(memory_key="chat_history",return_messages=True)
# Construct the JSON agent
fileTools = [DuckDuckGoSearchRun()]
llm_with_tools = llm.bind_tools(fileTools)
from langchain.agents.format_scratchpad.openai_tools import \
format_to_openai_tool_messages
agent = (
{
"input": lambda x: x["input"],
"agent_scratchpad": lambda x: format_to_openai_tool_messages(
x["intermediate_steps"]
),
"chat_history": lambda x: x["chat_history"]
}
| prompt
| llm_with_tools
| OpenAIToolsAgentOutputParser()
)
agent_executor = AgentExecutor(agent=agent, tools=fileTools, verbose=False,memory=memory)
while True:
print()
req = input("What is your requesat/question? ")
print()
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
As you can see below, the chat conversation is not tracked appropriately with memory; each answer responds to the previous question:

What is your request/question? **How many people live in Canada?**

As of April 6, 2024, the population of Canada is 39,029,568.

What is your request/question? **What is their national anthem called?**

_**As of July 1, 2023, Canada's population was estimated to be 40,097,761.**_

What is your request/question? **What is the population of Indonesia?**

_**Canada's national anthem is called "O Canada."**_

What is your request/question? What have we discussed?

_**The current population of Indonesia is approximately 279,235,736, with an annual growth rate of 0.798%.**_

What is your request/question?
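One thing that stands out in the snippet above: the `chat_history` placeholder is rendered *after* the user input and the scratchpad, so every turn sees the history appended at the end of the prompt, which could plausibly produce exactly this off-by-one behavior. A reordered prompt sketch (not a confirmed fix):
```python
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", systm),
        MessagesPlaceholder(variable_name="chat_history"),  # history before the new input
        ("user", "{input}"),
        MessagesPlaceholder(variable_name="agent_scratchpad"),
    ]
)
```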
### System Info
langchain==0.1.13
langchain-anthropic==0.1.4
langchain-community==0.0.29
langchain-core==0.1.33
langchain-groq==0.0.1
langchain-openai==0.1.1
langchain-text-splitters==0.0.1
langchainhub==0.1.14
Python 3.11.7
Platform MAC | AgentExecutor: Memory: Some answers are responses to prior questions. conversation not tracked appropriately | https://api.github.com/repos/langchain-ai/langchain/issues/20125/comments | 1 | 2024-04-07T12:53:21Z | 2024-07-11T03:23:32Z | https://github.com/langchain-ai/langchain/issues/20125 | 2,229,756,111 | 20,125 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
**URL to Documentation**
https://python.langchain.com/docs/modules/memory/agent_with_memory/
**Warning when running the code:**
/Users/randolphhill/govbotics/development-Indonesia/chatdoc/.venv/lib/python3.11/site-packages/langchain_core/_api/deprecation.py:117: LangChainDeprecationWarning: The class `langchain.agents.mrkl.base.ZeroShotAgent` was deprecated in langchain 0.1.0 and will be removed in 0.2.0. Use create_react_agent instead.
**May I suggest you assign someone to clean up the documentation, or hire a contractor to fix it? I have found these types of mistakes in many places. This time I decided to submit a bug.**
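If the page is rewritten, the deprecation warning itself points at the replacement; a minimal sketch of what the updated example might look like (the hub prompt name is an assumption):
```python
from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent

prompt = hub.pull("hwchase17/react")  # assumed ReAct prompt from the hub
agent = create_react_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, memory=memory, verbose=True)
```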
### Idea or request for content:
Need to update the documentation: **https://python.langchain.com/docs/modules/memory/agent_with_memory/**
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import os

from langchain_core.documents import Document
from langchain_google_vertexai import VertexAIEmbeddings
from langchain_pinecone import PineconeVectorStore
from pinecone import Pinecone

if __name__ == '__main__':
    input = 'where is my dog?'
    # create the embedding function using the 'textembedding-gecko@003' model
    vertexai_embedding_003 = VertexAIEmbeddings(model_name='textembedding-gecko@003')
    # init a pinecone vectorstore with the vertex ai embedding
    pc = Pinecone(api_key=os.getenv("PINECONE_API_KEY"), environment='us-central1-gcp')
    vector_store = PineconeVectorStore(index_name='embedding-test', embedding=vertexai_embedding_003)
    # create a test document
    doc = Document(
        page_content=input,
        metadata={'category': 'pet'}
    )
    # save in the index
    vector_store.add_documents([doc])
    # similarity search for the data we inserted before
    print(vector_store.similarity_search_with_score(input))
```
### Error Message and Stack Trace (if applicable)
Screenshot of different vectors by embedding the same input('where is my dog?')
Embedding result when doing insertion
<img width="891" alt="Screenshot 2024-04-07 at 16 51 36" src="https://github.com/langchain-ai/langchain/assets/62688587/356117a3-5626-4e47-930e-bee7cd42fdb0">
Embedding result when doing query
<img width="999" alt="Screenshot 2024-04-07 at 16 51 19" src="https://github.com/langchain-ai/langchain/assets/62688587/bc70532c-83b3-46aa-8aad-9a8285849996">
_No response_
### Description
Hello Langchain team,
I found an embedding inconsistency between adding embeddings to Pinecone and running similarity_search_with_score against Pinecone when using the 'textembedding-gecko@003' model from Google Vertex AI. It only happens with 'textembedding-gecko@003'; **'textembedding-gecko@001' works fine**.
How to reproduce:
1. Add an input string using vector_store.add_documents([doc]); before insertion, the code computes the vectors with 'textembedding-gecko@003' and then stores the vectors and metadata in the vector store.
2. Search for exactly the same string using similarity_search_with_score. The expected score is 1, because the query is identical, but it actually returns 0.79 due to the inconsistent embedding.
After debugging the code, I found that embeddings are computed differently between the document-adding stage and the document-searching stage.
Here is the screenshot:
![issue](https://github.com/langchain-ai/langchain/assets/62688587/c3478aac-88bc-4fc8-a166-322a219d9600)
We can see that adding documents and querying documents pass different 'embedding_task_type' values, which is why the same input produces different embedding results.
Moreover, the 'embedding_task_type' parameter is hardcoded in these two functions, so users cannot customize it.
Here is Google's documentation explaining the task types: https://cloud.google.com/python/docs/reference/aiplatform/latest/vertexai.language_models.TextEmbeddingInput.
In conclusion, developers who follow the LangChain documentation to insert and query with 'textembedding-gecko@003' can easily run into this issue.
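Until the task types are configurable, one blunt workaround sketch is to force queries through the same embedding path as documents (this sacrifices the retrieval-tuned query embeddings, so treat it as a stopgap rather than a recommendation):
```python
from langchain_google_vertexai import VertexAIEmbeddings

class SymmetricVertexAIEmbeddings(VertexAIEmbeddings):
    """Embed queries with the same task type as documents."""

    def embed_query(self, text: str):
        # Route queries through embed_documents so both insert and search
        # use the identical embedding task type.
        return self.embed_documents([text])[0]
```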
### System Info
langchain==0.1.14
langchain_google_vertexai==0.1.2
langchain-pinecone==0.0.3 | Intergation issue between langchain-pinecone and google vertex AI textembedding-gecko@003 | https://api.github.com/repos/langchain-ai/langchain/issues/20118/comments | 1 | 2024-04-07T08:43:23Z | 2024-07-23T16:08:16Z | https://github.com/langchain-ai/langchain/issues/20118 | 2,229,635,406 | 20,118 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
```
TypeError: 'FAISS' object is not callable
Traceback:
File "C:\Users\Jashu\AppData\Local\Programs\Python\Python311\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 542, in _run_script
    exec(code, module.__dict__)
File "C:\Medi_LLM\Llama-2-7B-Chat-GGML\app.py", line 34, in <module>
    retriever = vector_store(search_kwargs=["k: 2"]),
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
```
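For reference, the traceback shows the vector store being *called* like a function; retrievers are created with as_retriever, and search_kwargs takes a dict. A sketch of the corrected line:
```python
retriever = vector_store.as_retriever(search_kwargs={"k": 2})
```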
### Idea or request for content:
_No response_ | DOC: <Please write a comprehensive title after the 'DOC: ' prefix>TypeError: 'FAISS' object is not callable Traceback: File "C:\Users\Jashu\AppData\Local\Programs\Python\Python311\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 542, in _run_script exec(code, module.__dict__) File "C:\Medi_LLM\Llama-2-7B-Chat-GGML\app.py", line 34, in <module> retriever = vector_store(search_kwargs=["k: 2"]), ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | https://api.github.com/repos/langchain-ai/langchain/issues/20112/comments | 1 | 2024-04-06T19:24:28Z | 2024-04-08T15:40:37Z | https://github.com/langchain-ai/langchain/issues/20112 | 2,229,373,893 | 20,112 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [ ] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Getting an error at this part:
```
prompt.format(
question="Give a JSON of all the reward categories corresponding points and required details for each category from the given context",
context=tst_context)
```
### Error Message and Stack Trace (if applicable)
`KeyError: "'source'"`
### Description
[Examples.txt](https://github.com/langchain-ai/langchain/files/14894264/Examples.txt)
I am trying to scrape a website using WebBaseLoader and generate a context, then ask the LLM some questions on the generated context while instructing it to return the output as JSON.
I am using the FewShotPromptTemplate.
Examples: _I have attached the file_ (preview below)
```
examples=[
{
"context": context1,
"question": "Give a JSON of all the reward categories corresponding points and required details for each category from the given context",
"answer": """
{{"credit_cards": [
{
"card_name": "SBI Prime Credit Card",
"reward_categories": [
{
"category": "Birthday Spends",
"points_per_transaction": 20,
"details": "20 reward points for every Rs. 100 spent on your birthday*",
"capped_points": "Reward points earned on birthday spends (one day before, on, and one day after) are capped at 2,000 reward points per calendar year."
},
{
"category": "Dining",
"points_per_transaction": 10,
"details": "10 reward points for every Rs. 100 spent on dining"
},
{
"category": "Groceries",
"points_per_transaction": 10,
"details": "10 reward points for every Rs. 100 spent on groceries"
},
{
"category": "Departmental Stores",
"points_per_transaction": 10,
"details": "10 reward points for every Rs. 100 spent on departmental stores"
},
{
"category": "Movies",
"points_per_transaction": 10,
"details": "10 reward points for every Rs. 100 spent on movies"
},
{
"category": "All Other Retail Purchases (Except Fuel)",
"points_per_transaction": 2,
"details": "2 reward points for every Rs. 100 spent on all other retail purchases, except for fuel"
}
]
}
]
}}"""
}
```
Example prompt:
```
example_prompt = PromptTemplate(
input_variables=["context", "question","answer"],
template = """Answer the following question based only on the provided context:
<context>
{context}
</context>
Question: {question}\n
Answer:{answer}"""
)
```
Promt:
```
prompt = FewShotPromptTemplate(
examples=examples,
example_prompt=example_prompt,
suffix="""Answer the following question based only on the provided context: \n
<context>
{context}
</context>\n
Question: {input}\n
Answer:
""",
input_variables=["input","context"],
example_separator="\n\n"
)
```
```
prompt.format(
input="Give a JSON of all the reward categories corresponding points and required details for each category from the given context",
context=tst_context)
```
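A likely cause (an inference from the template internals, not confirmed): FewShotPromptTemplate first renders each example, then runs a str.format pass over the assembled prefix + examples + suffix. Any literal `{` inside the example values (for instance a `{"source": ...}` fragment in the scraped `context1`) is then parsed as a placeholder, which would explain a KeyError for a key that appears nowhere in the template. A possible fix sketch is to escape braces in the example values (`answer_json` below is a hypothetical name for the raw answer string):
```python
def escape_braces(text: str) -> str:
    """Double literal braces so the final str.format pass leaves them alone."""
    return text.replace("{", "{{").replace("}", "}}")

examples = [
    {
        "context": escape_braces(context1),    # scraped pages often contain {...}
        "question": "Give a JSON of all the reward categories ...",
        "answer": escape_braces(answer_json),  # raw JSON answer, escaped once
    }
]
```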
### System Info
On Google colab
!pip install -q langchain
| while using FewShotPromptTemplate, getting KeyError: "'source'";But in my template there is no such key as 'source' | https://api.github.com/repos/langchain-ai/langchain/issues/20108/comments | 0 | 2024-04-06T17:29:45Z | 2024-07-13T16:06:11Z | https://github.com/langchain-ai/langchain/issues/20108 | 2,229,337,499 | 20,108 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)

tools = [
    {
        "type": "function",
        "function": {
            "name": "answer_queries_with_citations",
            "description": "useful for answering questions about article or document content.",
            "parameters": {},
            "required": [],
        }
    },
    {
        "type": "function",
        "function": {
            "name": "miscellaneous_question_answerer",
            "description": "useful for answering miscellaneous questions.",
            "parameters": {},
            "required": [],
        }
    },
    {
        "type": "function",
        "function": {
            "name": "summarize_document",
            "description": "useful when very specifically asked for a summary of the document.",
            "parameters": {},
            "required": [],
        }
    }
]

llm_with_tools = llm.bind_tools(
    tools=tools,
    tool_choice="auto",
)
```
### Error Message and Stack Trace (if applicable)
[2024-04-06 09:55:56,359] ERROR in app: Exception on /docquery [POST]
Traceback (most recent call last):
File "/opt/render/project/src/.venv/lib/python3.11/site-packages/flask/app.py", line 1463, in wsgi_app
response = self.full_dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/render/project/src/.venv/lib/python3.11/site-packages/flask/app.py", line 872, in full_dispatch_request
rv = self.handle_user_exception(e)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/render/project/src/.venv/lib/python3.11/site-packages/flask_cors/extension.py", line 176, in wrapped_function
return cors_after_request(app.make_response(f(*args, **kwargs)))
^^^^^^^^^^^^^^^^^^
File "/opt/render/project/src/.venv/lib/python3.11/site-packages/flask/app.py", line 870, in full_dispatch_request
rv = self.dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/render/project/src/.venv/lib/python3.11/site-packages/flask/app.py", line 855, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) # type: ignore[no-any-return]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/render/project/src/.venv/lib/python3.11/site-packages/flask_httpauth.py", line 174, in decorated
return self.ensure_sync(f)(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/render/project/src/app.py", line 108, in test
llm_with_tools = llm.bind_tools(
^^^^^^^^^^^^^^^
File "/opt/render/project/src/.venv/lib/python3.11/site-packages/langchain_openai/chat_models/base.py", line 746, in bind_tools
raise ValueError(
ValueError: When specifying `tool_choice`, you must provide exactly one tool. Received 3 tools.
### Description
I'm trying to let ChatGPT decide between 3 custom functions by setting tool_choice equal to "auto" (as per the documentation), but it falls into the error branch for when the length of the tools list is != 1. This means the "auto" setting, by which the model chooses from more than one function, is unreachable.
Interestingly, it works fine in my local environment, where I installed the libraries several weeks ago, but not in my cloud environment deployed today. I suspect it may be a recent breaking change. Thanks guys, appreciate any help on this!
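A workaround sketch that sidesteps the check entirely: omit tool_choice when binding, since the OpenAI API already defaults to auto tool selection once tools are supplied:
```python
llm_with_tools = llm.bind_tools(tools=tools)  # no tool_choice; the API default is "auto"
```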
### System Info
developed on mac OS, deployed to Render web service.
Python 3.9.13 | langchain_openai.chat_models.base.ChatOpenAI[source] bind_tools.tool_choice="auto" is not working. | https://api.github.com/repos/langchain-ai/langchain/issues/20103/comments | 3 | 2024-04-06T11:11:15Z | 2024-08-09T16:08:03Z | https://github.com/langchain-ai/langchain/issues/20103 | 2,229,195,599 | 20,103 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_groq import ChatGroq

model = ChatGroq(
    model_name="mixtral-8x7b-32768"
)

class Joke(BaseModel):
    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline to the joke")

model_with_structure = model.with_structured_output(Joke, method="json_mode")
f = model_with_structure.invoke(
    "Tell me a joke about cats, respond in JSON with `setup` and `punchline` keys"
)
```
### Error Message and Stack Trace (if applicable)
python/3_11/venv/lib/python3.11/site-packages/langchain_core/_api/beta_decorator.py:87: LangChainBetaWarning: The function `with_structured_output` is in beta. It is actively being worked on, so the API may change.
warn_beta(
Traceback (most recent call last):
File "/python/3_11/wp_app/src/aibro_langchain.py", line 25, in <module>
model_with_structure = model.with_structured_output(Joke, method="json_mode")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/python/3_11/venv/lib/python3.11/site-packages/langchain_core/_api/beta_decorator.py", line 110, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/python/3_11/venv/lib/python3.11/site-packages/langchain_core/language_models/base.py", line 204, in with_structured_output
raise NotImplementedError()
NotImplementedError
### Description
Trying to get structured output from Groq fails: `with_structured_output` falls through to the base `NotImplementedError` because it is not implemented for `ChatGroq` in this version.
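A possible workaround until `with_structured_output` is implemented for `ChatGroq` (an assumption: Groq's JSON mode is reachable via `model_kwargs` as `response_format`):

```python
from langchain_core.output_parsers import JsonOutputParser
from langchain_groq import ChatGroq

model = ChatGroq(
    model_name="mixtral-8x7b-32768",
    # Assumption: passed through to Groq's chat completions API.
    model_kwargs={"response_format": {"type": "json_object"}},
)
chain = model | JsonOutputParser()
joke = chain.invoke(
    "Tell me a joke about cats, respond in JSON with `setup` and `punchline` keys"
)
```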
### System Info
langchain==0.1.14
langchain-community==0.0.31
langchain-core==0.1.40
langchain-groq==0.0.1
langchain-openai==0.0.5
Mac
Python 3.11 | with_structured_output raise NotImplementedError() Version: 0.1.14 | https://api.github.com/repos/langchain-ai/langchain/issues/20102/comments | 18 | 2024-04-06T11:10:35Z | 2024-06-25T03:30:28Z | https://github.com/langchain-ai/langchain/issues/20102 | 2,229,195,362 | 20,102
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
https://python.langchain.com/docs/integrations/vectorstores/faiss/
Hello,
I am currently working with LangChain for document-related processing tasks, specifically using `FAISS.from_documents` for indexing and similarity search. I am interested in understanding which FAISS index type is used by default when calling `FAISS.from_documents` without any particular configuration. For instance, does it default to IVFPQ, LSH, or another type of index?
After reading the source code, I find that I can only use inner-product and L2 (flat) indexes.
What should I do if I want to use IVFPQ?
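For reference: `FAISS.from_documents` builds a flat index by default (`IndexFlatL2`, or `IndexFlatIP` when the inner-product distance strategy is selected). A hedged sketch for wiring a custom IVFPQ index into the wrapper instead (constructor argument names assumed from the `langchain_community` FAISS class):

```python
import faiss
import numpy as np
from langchain_community.docstore.in_memory import InMemoryDocstore
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
d = 1536                      # dimension of text-embedding-ada-002 vectors
nlist, m, nbits = 100, 8, 8   # IVFPQ parameters (d must be divisible by m)

quantizer = faiss.IndexFlatL2(d)
index = faiss.IndexIVFPQ(quantizer, d, nlist, m, nbits)

# IVFPQ must be trained before vectors can be added; a real deployment should
# train on representative embedding vectors rather than random data.
index.train(np.random.rand(10_000, d).astype("float32"))

vectorstore = FAISS(
    embedding_function=embeddings,
    index=index,
    docstore=InMemoryDocstore({}),
    index_to_docstore_id={},
)
vectorstore.add_texts(["hello world"])
```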
### Idea or request for content:
_No response_ | DOC: < type of faiss index prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/20097/comments | 0 | 2024-04-06T02:38:41Z | 2024-07-13T16:06:05Z | https://github.com/langchain-ai/langchain/issues/20097 | 2,229,036,462 | 20,097 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import os

import boto3
from langchain import hub
from langchain.agents import AgentExecutor, create_structured_chat_agent
from langchain.tools import tool
from langchain_community.llms import Bedrock

AWS_ACCESS_KEY = os.getenv('AWS_ACCESS_KEY_ID')
AWS_SECRET_ACCESS_KEY = os.getenv('AWS_SECRET_ACCESS_KEY')
AWS_REGION = os.getenv('AWS_REGION', 'us-east-1')

# Note: this client is created for completeness but is not passed to the LLM
# below, which authenticates via the named credentials profile instead.
bedrock = boto3.client(
    service_name='bedrock-runtime',
    aws_access_key_id=AWS_ACCESS_KEY,
    aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
    region_name=AWS_REGION
)

llm = Bedrock(
    credentials_profile_name="default",
    model_id="mistral.mistral-large-2402-v1:0")

@tool
def multiply(a: int, b: int):
    """Multiply two integers"""
    return a * b

tools = [multiply]

prompt = hub.pull("hwchase17/structured-chat-agent")
agent = create_structured_chat_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools)

result = agent_executor.invoke({"input": "what is 123 * 456"})
print(result)
```
### Error Message and Stack Trace (if applicable)
File ".../env/lib/python3.11/site-packages/langchain_community/llms/bedrock.py", line 654, in _prepare_input_and_invoke_stream
raise ValueError(f"Error raised by bedrock service: {e}")
ValueError: Error raised by bedrock service: An error occurred (ValidationException) when calling the InvokeModelWithResponseStream operation: Malformed input request: #: extraneous key [stop_sequences] is not permitted, please reformat your input and try again.
### Description
I'm using AWS Bedrock for an agent application. It throws an error due to a stop sequence parameter that isn't supported by the AWS api.
The error can be mitigated by commenting out lines 611 - 619 in langchain_community.llms.bedrock
```python
# if stop:
#     if provider not in self.provider_stop_sequence_key_name_map:
#         raise ValueError(
#             f"Stop sequence key name for {provider} is not supported."
#         )
#     # stop sequence from _generate() overrides
#     # stop sequences in the class attribute
#     _model_kwargs[self.provider_stop_sequence_key_name_map.get(provider)] = stop
```
### System Info
langchain==0.1.14
langchain-community==0.0.31
langchain-core==0.1.40
langchain-openai==0.0.3
langchain-text-splitters==0.0.1
langchainhub==0.1.14
boto3==1.34.79
botocore==1.34.79 | Stop Sequenced Not Supported by AWS Bedrock | https://api.github.com/repos/langchain-ai/langchain/issues/20095/comments | 4 | 2024-04-06T00:57:13Z | 2024-07-21T16:06:45Z | https://github.com/langchain-ai/langchain/issues/20095 | 2,228,997,423 | 20,095 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
While reading and learning from the [LCEL - Getting Started](https://python.langchain.com/docs/expression_language/get_started/#rag-search-example) docs, I found one minor issue in the documentation.
In RAG Search Example, ` (backtick) is missing around the keyword RunnablePassthrough - [GitHub Code](https://github.com/langchain-ai/langchain/blob/de496062b3e740aed2c7097424749d1145e4aaab/docs/docs/expression_language/get_started.ipynb#L443).
Solution:
RunnablePassthrough -> `RunnablePassthrough`
### Idea or request for content:
_No response_ | DOC: missing backtick around RunnablePassthrough on LCEL getting started | https://api.github.com/repos/langchain-ai/langchain/issues/20094/comments | 1 | 2024-04-06T00:41:55Z | 2024-04-11T12:39:24Z | https://github.com/langchain-ai/langchain/issues/20094 | 2,228,988,879 | 20,094 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from typing import List

from langchain_community.document_loaders import DirectoryLoader, TextLoader
from langchain_core.documents import Document

path = "./data"  # illustrative: any directory containing .txt files

loader = DirectoryLoader(
    path,
    glob="**/*.txt",
    use_multithreading=True,
    show_progress=True,
    sample_size=1000,
    loader_cls=TextLoader)

docs: List[Document] = loader.load()
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Expecting `List[Document]` but getting back `List[List[Document]]` when `use_multithreading=True`, which cannot be fed directly into `Chroma.from_documents`, for example.
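A hedged interim workaround is to flatten the nested result (written to tolerate a fixed version that returns a flat list again):

```python
flat_docs = [
    d
    for item in docs
    for d in (item if isinstance(item, list) else [item])
]
```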
### System Info
langchain==0.1.14
langchain-community==0.0.31
langchain-core==0.1.38
langchain-openai==0.1.1
langchain-text-splitters==0.0.1
langchainhub==0.1.14
Windows
Python 3.9.13 | DirectoryLoader with use_multithreading=True loads each doc as a list of docs | https://api.github.com/repos/langchain-ai/langchain/issues/20093/comments | 0 | 2024-04-06T00:32:34Z | 2024-07-13T16:06:00Z | https://github.com/langchain-ai/langchain/issues/20093 | 2,228,980,402 | 20,093 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```diff
--- a/libs/community/langchain_community/document_loaders/confluence.py
+++ b/libs/community/langchain_community/document_loaders/confluence.py
@@ -359,6 +359,7 @@ def _lazy_load(self, **kwargs: Any) -> Iterator[Document]:
                 content_format,
                 ocr_languages,
                 keep_markdown_format,
+                keep_newlines=keep_newlines,
             )
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I use the Confluence loader of LangChain to download the page content of a specific page of my Confluence instance. While text-splitting/chunking the pages, I noticed that in non-markdown format the newlines were missing. During debugging I saw that the required pass-through of the `keep_newlines` parameter was not applied to every call of the `process_pages` function inside
`libs/community/langchain_community/document_loaders/confluence.py`.
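A hedged repro sketch (URL, credentials, and space key are placeholders):

```python
from langchain_community.document_loaders import ConfluenceLoader

loader = ConfluenceLoader(
    url="https://example.atlassian.net/wiki",
    username="user@example.com",
    api_key="<api-key>",
)
docs = loader.load(
    space_key="SPACE",
    keep_markdown_format=False,
    keep_newlines=True,  # silently lost on some code paths (the bug above)
)
```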
### System Info
langchain=0.1.14
windows 11
python 3.10 | Confluence loader "keep_newlines" not always passed to "process_pages" | https://api.github.com/repos/langchain-ai/langchain/issues/20086/comments | 1 | 2024-04-05T20:31:46Z | 2024-07-12T16:04:49Z | https://github.com/langchain-ai/langchain/issues/20086 | 2,228,766,725 | 20,086 |
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
There's some discrepancies in the init args different models use to set the same params. It'd be a much nicer UX if common params could be set with a common set of init args
Suggesting that if a param in this list is present in a model integration, the below name should either be the actual attribute name or an init arg that can be used to set the attribute:
```python
model: str # model name
api_key: str # api key
temperature: float # temperature sampling
timeout: ... # request timeout
max_tokens: int # max tokens
stop: ... # stop sequences
max_retries: int # max num retries
base_url: str # base URL to send requests to
```
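For illustration, a provider following this convention might be constructed like so (`SomeProviderChatModel` and the `base_url` are hypothetical placeholders, not existing integrations):

```python
model = SomeProviderChatModel(  # hypothetical class name
    model="provider-model-name",
    api_key="...",
    temperature=0.2,
    timeout=60,
    max_tokens=1024,
    stop=["\n\n"],
    max_retries=2,
    base_url="https://api.provider.example/v1",
)
```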
Importantly we should also use the above init args in the docs | Standardized model init arg names | https://api.github.com/repos/langchain-ai/langchain/issues/20085/comments | 10 | 2024-04-05T20:30:47Z | 2024-06-10T12:49:39Z | https://github.com/langchain-ai/langchain/issues/20085 | 2,228,765,642 | 20,085 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import os
from langchain_community.llms import Ollama
from langchain.agents import AgentType, initialize_agent
from langchain_community.agent_toolkits.jira.toolkit import JiraToolkit
from langchain_community.utilities.jira import JiraAPIWrapper
from langchain_core.prompts import ChatPromptTemplate
os.environ["JIRA_API_TOKEN"] = "token"
os.environ["JIRA_USERNAME"] = "user"
os.environ["JIRA_INSTANCE_URL"] = "https://jira.atlassian.net"
llm = Ollama(model="llama2:13b")
jira = JiraAPIWrapper()
toolkit = JiraToolkit.from_jira_api_wrapper(jira)
agent = initialize_agent(
toolkit.get_tools(), llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
# chain = prompt | chat
resp = agent.invoke(
"make a new issue in project id 'MUG' to remind me to make more fried rice"
)
```
### Error Message and Stack Trace (if applicable)
```bash
LangChainDeprecationWarning: The function `initialize_agent` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use Use new agent constructor methods like create_react_agent, create_json_agent, create_structured_chat_agent, etc. instead.
warn_deprecated(
> Entering new AgentExecutor chain...
Question: make a new issue in project id 'MUG' to remind me to make more fried rice
Thought: I should create a new issue in the MUG project to remind myself to make more fried rice.
Action: Create Issue
Action Input: {
"summary": "Reminder to make more fried rice",
"description": "I want to make more fried rice, this is a reminder.",
"issuetype": {"name": "Task"},
"priority": {"name": "Low"}
}Creating issue "Reminder to make more fried rice"
Traceback (most recent call last):
File "/Users/samuelbirocchi/.pyenv/versions/3.10.13/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/Users/samuelbirocchi/.pyenv/versions/3.10.13/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/Users/samuelbirocchi/.vscode/extensions/ms-python.debugpy-2024.4.0-darwin-arm64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/__main__.py", line 39, in <module>
cli.main()
File "/Users/samuelbirocchi/.vscode/extensions/ms-python.debugpy-2024.4.0-darwin-arm64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 430, in main
run()
File "/Users/samuelbirocchi/.vscode/extensions/ms-python.debugpy-2024.4.0-darwin-arm64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 284, in run_file
runpy.run_path(target, run_name="__main__")
File "/Users/samuelbirocchi/.vscode/extensions/ms-python.debugpy-2024.4.0-darwin-arm64/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 321, in run_path
return _run_module_code(code, init_globals, run_name,
File "/Users/samuelbirocchi/.vscode/extensions/ms-python.debugpy-2024.4.0-darwin-arm64/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 135, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/Users/samuelbirocchi/.vscode/extensions/ms-python.debugpy-2024.4.0-darwin-arm64/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 124, in _run_code
exec(code, run_globals)
File "./main.py", line 45, in <module>
resp = agent.invoke(
File "/Users/samuelbirocchi/langchain/.venv/lib/python3.10/site-packages/langchain/chains/base.py", line 163, in invoke
raise e
File "/Users/samuelbirocchi/langchain/.venv/lib/python3.10/site-packages/langchain/chains/base.py", line 153, in invoke
self._call(inputs, run_manager=run_manager)
File "/Users/samuelbirocchi/langchain/.venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 1432, in _call
next_step_output = self._take_next_step(
File "/Users/samuelbirocchi/langchain/.venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 1138, in _take_next_step
[
File "/Users/samuelbirocchi/langchain/.venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 1138, in <listcomp>
[
File "/Users/samuelbirocchi/langchain/.venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 1223, in _iter_next_step
yield self._perform_agent_action(
File "/Users/samuelbirocchi/langchain/.venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 1245, in _perform_agent_action
observation = tool.run(
File "/Users/samuelbirocchi/langchain/.venv/lib/python3.10/site-packages/langchain_core/tools.py", line 422, in run
raise e
File "/Users/samuelbirocchi/langchain/.venv/lib/python3.10/site-packages/langchain_core/tools.py", line 381, in run
self._run(*tool_args, run_manager=run_manager, **tool_kwargs)
File "/Users/samuelbirocchi/langchain/.venv/lib/python3.10/site-packages/langchain_community/tools/jira/tool.py", line 44, in _run
return self.api_wrapper.run(self.mode, instructions)
File "/Users/samuelbirocchi/langchain/.venv/lib/python3.10/site-packages/langchain_community/utilities/jira.py", line 168, in run
return self.issue_create(query)
File "/Users/samuelbirocchi/langchain/.venv/lib/python3.10/site-packages/langchain_community/utilities/jira.py", line 139, in issue_create
return self.jira.issue_create(fields=dict(params))
File "/Users/samuelbirocchi/langchain/.venv/lib/python3.10/site-packages/atlassian/jira.py", line 1537, in issue_create
return self.post(url, data={"fields": fields})
File "/Users/samuelbirocchi/langchain/.venv/lib/python3.10/site-packages/atlassian/rest_client.py", line 388, in post
response = self.request(
File "/Users/samuelbirocchi/langchain/.venv/lib/python3.10/site-packages/atlassian/rest_client.py", line 312, in request
self.raise_for_status(response)
File "/Users/samuelbirocchi/langchain/.venv/lib/python3.10/site-packages/atlassian/rest_client.py", line 549, in raise_for_status
raise HTTPError(error_msg, response=response)
requests.exceptions.HTTPError
```
### Description
I'm trying to create a Jira ticket following the documentation, but the `Create Issue` tool fails with an HTTP error: as the trace shows, the action input generated from the tool prompt never includes a project key (e.g. `"project": {"key": "MUG"}`), so the Jira API rejects the request.
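A hedged workaround sketch: calling the wrapper directly guarantees the project key is present in the issue fields (`issue_create` accepts a JSON string per the traceback; this assumes the `JIRA_*` env vars from the snippet above are set):

```python
from langchain_community.utilities.jira import JiraAPIWrapper

jira = JiraAPIWrapper()
print(jira.issue_create(
    '{"summary": "Reminder to make more fried rice",'
    ' "description": "Make more fried rice.",'
    ' "issuetype": {"name": "Task"},'
    ' "project": {"key": "MUG"}}'
))
```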
### System Info
langchain==0.1.14
langchain-community==0.0.31
langchain-core==0.1.40
langchain-text-splitters==0.0.1
Python 3.10.13 | Jira toolkit prompt for create ticket does not include project ID | https://api.github.com/repos/langchain-ai/langchain/issues/20084/comments | 2 | 2024-04-05T20:11:01Z | 2024-06-05T04:14:30Z | https://github.com/langchain-ai/langchain/issues/20084 | 2,228,740,501 | 20,084 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
In the example below, if the `keep_separator` parameter is set to `True`, the separator appears at the beginning of each chunk (attached to the following text), whereas we would prefer it to appear at the end.
```python
from langchain_text_splitters import CharacterTextSplitter
content = "Hello world! Nice to meet you! Nice to meet you too!"
text_splitter = CharacterTextSplitter(separator="!", chunk_size=25, chunk_overlap=0, keep_separator=True)
chunks = text_splitter.create_documents([content])
for chunk in chunks:
print(chunk.page_content)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
In this case, my raw input is `Hello world! Nice to meet you! Nice to meet you too!`, and my expected output is:
```
# Expected
Hello world!
Nice to meet you!
Nice to meet you too!
```
However, the real output is:
```
# Real
Hello world
! Nice to meet you
! Nice to meet you too!
```
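For reference, a workaround that produces the expected output without patching the library: a zero-width lookbehind separator keeps the `!` attached to the preceding text.

```python
from langchain_text_splitters import CharacterTextSplitter

content = "Hello world! Nice to meet you! Nice to meet you too!"
text_splitter = CharacterTextSplitter(
    separator=r"(?<=!)", is_separator_regex=True,
    chunk_size=25, chunk_overlap=0, keep_separator=True,
)
for chunk in text_splitter.create_documents([content]):
    print(chunk.page_content)
```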
---
At first, I thought it was a bug, so I fixed it and was ready to open a pull request.
```python
# venv/Lib/site-packages/langchain_text_splitters/character.py
def _split_text_with_regex(...):
...
# splits = [_splits[i] + _splits[i + 1] for i in range(1, len(_splits), 2)]
# if len(_splits) % 2 == 0:
# splits += _splits[-1:]
# splits = [_splits[0]] + splits
# --------replace with below--------
if len(_splits) % 2 != 0:
_splits.append("")
splits = [_splits[i] + _splits[i + 1] for i in range(0, len(_splits), 2)]
...
```
However, I found that my change couldn't pass the existing tests. Upon reviewing the test code, I discovered that the current behavior is intentional rather than a mistake by the author, which left me confused.
@baskaryan
@hwchase17
### System Info
(venv) ~\Desktop\workspace\python\langchain-imooc git:[main]
python -m langchain_core.sys_info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22631
> Python Version: 3.12.2 (tags/v3.12.2:6abddd9, Feb 6 2024, 21:26:36) [MSC v.1937 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.1.37
> langchain: 0.1.14
> langchain_community: 0.0.30
> langsmith: 0.1.38
> langchain_openai: 0.1.1
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | Confusion about TextSplitter (param keep_separator) | https://api.github.com/repos/langchain-ai/langchain/issues/20066/comments | 0 | 2024-04-05T13:50:38Z | 2024-07-12T16:04:43Z | https://github.com/langchain-ai/langchain/issues/20066 | 2,228,089,135 | 20,066 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
SemanticSimilarityExampleSelector(vectorstore=vectorstore, k=2) is throwing an error :
> File [c:\Users\l1017\.conda\envs\llm\Lib\site-packages\pydantic\main.py:341](file:///C:/Users/l1017/.conda/envs/llm/Lib/site-packages/pydantic/main.py:341), in pydantic.main.BaseModel.__init__()
ValidationError: 1 validation error for SemanticSimilarityExampleSelector
vectorstore
instance of VectorStore expected (type=type_error.arbitrary_type; expected_arbitrary_type=VectorStore)
My code below:
<img width="809" alt="微信图片_20240405171258" src="https://github.com/langchain-ai/langchain/assets/37232258/5b9afdb6-9058-4f1e-8a0b-ada02634a290">
### Error Message and Stack Trace (if applicable)
File [c:\Users\l1017\.conda\envs\llm\Lib\site-packages\pydantic\main.py:341](file:///C:/Users/l1017/.conda/envs/llm/Lib/site-packages/pydantic/main.py:341), in pydantic.main.BaseModel.__init__()
ValidationError: 1 validation error for SemanticSimilarityExampleSelector
vectorstore
instance of VectorStore expected (type=type_error.arbitrary_type; expected_arbitrary_type=VectorStore)
### Description
I upgraded LangChain to the latest version, but it did not work.
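For reference, this pydantic error typically means the `vectorstore` object is not an instance of the `VectorStore` class the selector was validated against, often from mixing old `langchain.vectorstores` imports with newer selector imports. A hedged sketch with matched current-generation imports (Chroma is used only as an illustrative vectorstore and requires `chromadb`):

```python
from langchain_community.vectorstores import Chroma
from langchain_core.example_selectors import SemanticSimilarityExampleSelector
from langchain_openai import OpenAIEmbeddings

vectorstore = Chroma(embedding_function=OpenAIEmbeddings())
selector = SemanticSimilarityExampleSelector(vectorstore=vectorstore, k=2)
```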
### System Info
langchain 0.1.40
platform - windows 11
python version - 3.12.2 | It returns "instance of VectorStore expected" error, when I creating SemanticSimilarityExampleSelector instance | https://api.github.com/repos/langchain-ai/langchain/issues/20062/comments | 2 | 2024-04-05T12:05:48Z | 2024-07-12T16:04:38Z | https://github.com/langchain-ai/langchain/issues/20062 | 2,227,874,205 | 20,062 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
This is the code:
```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import ConfigurableField
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(
model="claude-3-sonnet-20240229",
max_tokens=1024,
temperature=0,
anthropic_api_key="<api_key>"
).configurable_fields(
max_tokens=ConfigurableField(
id="max_tokens",
name="Number of LLM tokens",
description="Number of LLM tokens"
)
)
prompt = ChatPromptTemplate.from_template("Write an essay on Tiger.")
chain = prompt | model
# print(chain.invoke({}))
# chain.invoke({"x": "Indian Constitution"})
print(chain.invoke({}, config={"configurable": {"max_tokens": 100}}))
```
### Error Message and Stack Trace (if applicable)
Warnings:
```
/home/mayank/miniconda3/envs/jupyter/lib/python3.9/site-packages/langchain_core/utils/utils.py:159: UserWarning: WARNING! _client is not default parameter.
_client was transferred to model_kwargs.
Please confirm that _client is what you intended.
warnings.warn(
/home/mayank/miniconda3/envs/jupyter/lib/python3.9/site-packages/langchain_core/utils/utils.py:159: UserWarning: WARNING! _async_client is not default parameter.
_async_client was transferred to model_kwargs.
Please confirm that _async_client is what you intended.
warnings.warn(
{'model': 'claude-3-sonnet-20240229', 'max_tokens': 100, 'messages': [{'role': 'user', 'content': 'Write an essay on Tiger.'}], 'temperature': 0.0, '_client': <anthropic.Anthropic object at 0x7c1b3cc80c70>, '_async_client': <anthropic.AsyncAnthropic object at 0x7c1b360b1e50>}
```
Traceback (most recent call last):
```
File "/home/mayank/open-source/test-notebooks/./test.py", line 21, in <module>
print(chain.invoke({}, config={"configurable": {"max_tokens": 100}}))
File "/home/mayank/miniconda3/envs/jupyter/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 2499, in invoke
input = step.invoke(
File "/home/mayank/miniconda3/envs/jupyter/lib/python3.9/site-packages/langchain_core/runnables/configurable.py", line 95, in invoke
return runnable.invoke(input, config, **kwargs)
File "/home/mayank/miniconda3/envs/jupyter/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py", line 154, in invoke
self.generate_prompt(
File "/home/mayank/miniconda3/envs/jupyter/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py", line 554, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
File "/home/mayank/miniconda3/envs/jupyter/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py", line 415, in generate
raise e
File "/home/mayank/miniconda3/envs/jupyter/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py", line 405, in generate
self._generate_with_cache(
File "/home/mayank/miniconda3/envs/jupyter/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py", line 624, in _generate_with_cache
result = self._generate(
File "/home/mayank/miniconda3/envs/jupyter/lib/python3.9/site-packages/langchain_anthropic/chat_models.py", line 292, in _generate
data = self._client.messages.create(**params)
File "/home/mayank/miniconda3/envs/jupyter/lib/python3.9/site-packages/anthropic/_utils/_utils.py", line 275, in wrapper
return func(*args, **kwargs)
TypeError: create() got an unexpected keyword argument '_client'
```
### Description
Setting `max_tokens` via the configurable field was expected to simply change that parameter and generate the response. Instead, the call fails because the private `_client`/`_async_client` attributes are copied into `model_kwargs` (see the warnings above) and then forwarded to `messages.create()`.
### System Info
System Information
------------------
> OS: Linux
> OS Version: #26~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Tue Mar 12 10:22:43 UTC 2
> Python Version: 3.9.19 (main, Mar 21 2024, 17:11:28)
[GCC 11.2.0]
Package Information
-------------------
> langchain_core: 0.1.40
> langchain: 0.1.13
> langchain_community: 0.0.29
> langsmith: 0.1.38
> langchain_anthropic: 0.1.4
> langchain_openai: 0.1.1
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | ChatAnthropic gives error while invoking it after instantiating with `configurable_fields` | https://api.github.com/repos/langchain-ai/langchain/issues/20061/comments | 0 | 2024-04-05T11:40:10Z | 2024-07-12T16:04:34Z | https://github.com/langchain-ai/langchain/issues/20061 | 2,227,827,038 | 20,061
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
I am trying to use AWS Bedrock models such as Llama / Mistral with Langchain Libraries such as SQLDatabaseToolkit.
```
from langchain_community.agent_toolkits import SQLDatabaseToolkit, create_sql_agent
from langchain_community.llms import Bedrock
from langchain_community.utilities import SQLDatabase
from langchain_core.callbacks import StreamingStdOutCallbackHandler

model = Bedrock(credentials_profile_name="my-profile",
                model_id="meta.llama2-70b-chat-v1",
                model_kwargs={"temperature": 0.5},
                streaming=True,
                callbacks=[StreamingStdOutCallbackHandler()])

db = SQLDatabase.from_uri('database url')
toolkit = SQLDatabaseToolkit(llm=model, db=db)
agent_executor = create_sql_agent(llm=model, toolkit=toolkit, verbose=True, handle_parsing_errors=True)

# Query Module
prompt = "How many rows does the largest table have?"  # illustrative question
result = agent_executor.invoke(prompt)
```
### Error Message and Stack Trace (if applicable)
This errors out with the following.
```
File "/opt/homebrew/lib/python3.11/site-packages/langchain_community/llms/bedrock.py", line 833, in _call
for chunk in self._stream(
File "/opt/homebrew/lib/python3.11/site-packages/langchain_community/llms/bedrock.py", line 613, in _prepare_input_and_invoke_stream
raise ValueError(
ValueError: Stop sequence key name for meta is not supported.
```
### Description
I have tried the same code with `OpenAI`, Ollama (Mistral/Llama), and Google GenAI models, and none of them show this error. It seems specific to how the Bedrock integration in LangChain maps stop sequences per provider, or to the Bedrock service itself.
Is there a workaround I can use to get this to work?
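One hedged workaround is a thin subclass that drops the unsupported stop sequences instead of raising. Note the trade-off: the model may generate past the agent's stop word, so `handle_parsing_errors=True` (as in the snippet above) remains useful.

```python
from typing import Any, List, Optional

from langchain_community.llms import Bedrock

class BedrockNoStop(Bedrock):
    """Silently drop stop sequences for providers that don't support them."""

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[Any] = None,
        **kwargs: Any,
    ) -> str:
        # Forward everything except the stop list, which triggers the
        # ValueError above for unsupported providers.
        return super()._call(prompt, stop=None, run_manager=run_manager, **kwargs)
```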
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.3.0: Wed Dec 20 21:30:44 PST 2023
> Python Version: 3.11.7 (main, Dec 4 2023, 18:10:11) [Clang 15.0.0 (clang-1500.1.0.2.5)]
Package Information
-------------------
> langchain_core: 0.1.40
> langchain: 0.1.14
> langchain_community: 0.0.31
> langsmith: 0.1.40
> langchain_experimental: 0.0.56
> langchain_openai: 0.0.5
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | Error : Stop sequence key name for {meta or mistral or any other mode} is not supported with AWS Bedrock | https://api.github.com/repos/langchain-ai/langchain/issues/20053/comments | 10 | 2024-04-05T06:43:08Z | 2024-07-21T13:56:17Z | https://github.com/langchain-ai/langchain/issues/20053 | 2,227,193,326 | 20,053 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import pydantic.v1
from langchain_core.tools import tool
class MySchema(pydantic.v1.BaseModel):
sum: bool # whoops
@tool
def add(a: int, b: int) -> MySchema:
"""
Add two numbers
"""
return MySchema(sum=a + b)
add.handle_validation_error = '-1'
print(add({'a': 3, 'b': 5})) # prints -1, but should raise an exception
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
The [`try: ... except ValidationError as e:` block in `BaseTool.run`](https://github.com/langchain-ai/langchain/blob/master/libs/core/langchain_core/tools.py#L377-L385) wraps the call to `self._run`. I assume it's supposed to capture errors raised in `_parse_input` only, but currently captures validation errors raised inside `self._run` as well. This might hide bugs in tool implementations that use `pydantic` when `handle_validation_error` is not `None`.
### System Info
langchain==0.1.11
langchain-community==0.0.27
langchain-core==0.1.30
langchain-text-splitters==0.0.1 | `pydantic.ValidationError` raised inside the `_run` method of a tool should not be captured by `handle_validation_error` | https://api.github.com/repos/langchain-ai/langchain/issues/20045/comments | 0 | 2024-04-05T01:53:08Z | 2024-07-12T16:04:28Z | https://github.com/langchain-ai/langchain/issues/20045 | 2,226,807,197 | 20,045 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_core.callbacks import BaseCallbackHandler
from langchain_core.tools import tool
@tool
def add(a: int, b: int) -> int:
"""
Add two numbers
"""
return a + b
class MyCallback(BaseCallbackHandler):
def on_tool_end(self, output: str, **kwargs):
print(output)
add({'a': 2, 'b': 2}, callbacks=[MyCallback()]) # prints 4
add.handle_validation_error = '-1'
add({'a': '3'}, callbacks=[MyCallback()]) # does not print anything, supposed to print -1
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Unlike other execution paths of `BaseTool.run` that return an observation, [the path that handles validation errors](https://github.com/langchain-ai/langchain/blob/master/libs/core/langchain_core/tools.py#L386-L399) does not call `run_manager.on_tool_end` callback method. It prevents users from accessing validation error observations in custom callbacks.
### System Info
langchain==0.1.11
langchain-community==0.0.27
langchain-core==0.1.30
langchain-text-splitters==0.0.1 | `run_manager.on_tool_end` is not called with the output of `handle_validation_error` in `BaseTool.run` | https://api.github.com/repos/langchain-ai/langchain/issues/20043/comments | 0 | 2024-04-05T01:44:26Z | 2024-07-12T16:04:23Z | https://github.com/langchain-ai/langchain/issues/20043 | 2,226,801,139 | 20,043 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
How can I find which default prompt is used for `LLMGraphTransformer`?
`from langchain_experimental.graph_transformers import LLMGraphTransformer`
The class description does not say what the default prompt is:
```python
class LLMGraphTransformer:
    """Transform documents into graph-based documents using a LLM.

    It allows specifying constraints on the types of nodes and relationships to include
    in the output graph. The class doesn't support neither extract and node or
    relationship properties

    Args:
        llm (BaseLanguageModel): An instance of a language model supporting structured
          output.
        allowed_nodes (List[str], optional): Specifies which node types are
          allowed in the graph. Defaults to an empty list, allowing all node types.
        allowed_relationships (List[str], optional): Specifies which relationship types
          are allowed in the graph. Defaults to an empty list, allowing all relationship
          types.
        prompt (Optional[ChatPromptTemplate], optional): The prompt to pass to
          the LLM with additional instructions.
        strict_mode (bool, optional): Determines whether the transformer should apply
          filtering to strictly adhere to `allowed_nodes` and `allowed_relationships`.
          Defaults to True.

    Example:
        .. code-block:: python

            from langchain_experimental.graph_transformers import LLMGraphTransformer
            from langchain_core.documents import Document
            from langchain_openai import ChatOpenAI

            llm = ChatOpenAI(temperature=0)
            transformer = LLMGraphTransformer(
                llm=llm,
                allowed_nodes=["Person", "Organization"])

            doc = Document(page_content="Elon Musk is suing OpenAI")
            graph_documents = transformer.convert_to_graph_documents([doc])
    """
```
### Idea or request for content:
Add to the class description what the default prompt is.
| DOC: how to find what default prompt used for LLMGraphTransformer | https://api.github.com/repos/langchain-ai/langchain/issues/20020/comments | 2 | 2024-04-04T20:29:55Z | 2024-07-12T16:04:19Z | https://github.com/langchain-ai/langchain/issues/20020 | 2,226,451,120 | 20,020 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
I got this error when calling this method:
```python
agent_executor.invoke({"input": "hi!"})
```
I used the Azure LLM like below:
```python
# Import Azure OpenAI
from langchain_openai import AzureOpenAI

# Create an instance of Azure OpenAI
llm = AzureOpenAI(
    deployment_name="gpt-35-turbo-1106",
)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
The call fails inside the agent loop:
```
# We now enter the agent loop (until it returns something).
1390 while self._should_continue(iterations, time_elapsed):
-> 1391 next_step_output = self._take_next_step(
1392 name_to_tool_map,
1393 color_mapping,
1394 inputs,
1395 intermediate_steps,
1396 run_manager=run_manager,
1397 )
1398 if isinstance(next_step_output, AgentFinish):
1399 return self._return(
1400 next_step_output, intermediate_steps, run_manager=run_manager
1401 )
in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
...
273 msg = f"Missing required argument: {quote(missing[0])}"
274 raise TypeError(msg)
--> 275 return func(*args, **kwargs)
TypeError: Completions.create() got an unexpected keyword argument 'functions'
```
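The `functions` kwarg is only accepted by the chat-completions API, so the quickstart agent likely needs a chat model rather than the completions-based `AzureOpenAI`. A hedged sketch of the probable fix (deployment and API version strings are illustrative):

```python
from langchain_openai import AzureChatOpenAI

llm = AzureChatOpenAI(
    azure_deployment="gpt-35-turbo-1106",
    api_version="2023-12-01-preview",  # illustrative API version
)
```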
### System Info
Python version is 3.11
LangChain is the latest version | Testing Quickstart agents with Azure open ai | https://api.github.com/repos/langchain-ai/langchain/issues/20013/comments | 0 | 2024-04-04T17:30:25Z | 2024-07-11T16:07:19Z | https://github.com/langchain-ai/langchain/issues/20013 | 2,226,132,358 | 20,013
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
`langchain_community.llms.huggingface_hub.HuggingFaceHub` was deprecated, and `langchain_community.chat_models.huggingface.ChatHuggingFace` only works with HuggingFaceTextGenInference, HuggingFaceEndpoint, and HuggingFaceHub.
So how can a `ChatHuggingFace` instance be created from a local Hugging Face model now?
source :https://api.python.langchain.com/en/latest/chat_models/langchain_community.chat_models.huggingface.ChatHuggingFace.html#langchain_community.chat_models.huggingface.ChatHuggingFace
https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_hub.HuggingFaceHub.html#langchain_community.llms.huggingface_hub.HuggingFaceHub
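One route that appears to work is the newer `langchain-huggingface` partner package, whose `ChatHuggingFace` accepts a local `HuggingFacePipeline` (an assumption: this package is separate from `langchain_community` and may postdate the docs linked above):

```python
# pip install langchain-huggingface
from langchain_huggingface import ChatHuggingFace, HuggingFacePipeline

llm = HuggingFacePipeline.from_model_id(
    model_id="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    task="text-generation",
    pipeline_kwargs={"max_new_tokens": 128},
)
chat = ChatHuggingFace(llm=llm)
print(chat.invoke("Hello!"))
```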
### Idea or request for content:
The documentation says `HuggingFaceHub` was deprecated and that `HuggingFaceEndpoint` should be used instead, but those are two different integration methods. | DOC: How to create a ChatModel from Huggingface using local llm | https://api.github.com/repos/langchain-ai/langchain/issues/19997/comments | 3 | 2024-04-04T11:22:15Z | 2024-05-22T03:32:01Z | https://github.com/langchain-ai/langchain/issues/19997 | 2,225,209,977 | 19,997
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```py
from langchain_openai import AzureChatOpenAI
llm = AzureChatOpenAI(
azure_endpoint=openai_api_base, # type: ignore
openai_api_version=openai_api_version, # type: ignore
azure_deployment=deployment_name,
openai_api_key=openai_api_key, # type: ignore
openai_api_type=openai_api_type, # type: ignore
temperature=openai_llm_temp,
callbacks=[handler],
model_name="gpt-4-32k",
openai_proxy=openai_proxy
)
```
### Error Message and Stack Trace (if applicable)
No exception or full stack trace, simply requests being timed out because they don't go through our corporate proxy.
### Description
I'm trying to tell AzureChatOpenAI to use our corporate proxy; however, under langchain-openai it doesn't seem to take it into account.
I've had to downgrade to use AzureChatOpenAI in langchain and downgrade the OpenAI package to respectively:
langchain - 0.1.14
openai - 0.28.1
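A hedged workaround sketch on the newer stack: langchain-openai builds on the openai>=1.x client, which routes traffic through an httpx client, so handing it one configured for the corporate proxy may restore proxy support (endpoint, deployment, and version strings are placeholders; depending on your httpx version the keyword is `proxies` or `proxy`):

```python
import httpx
from langchain_openai import AzureChatOpenAI

llm = AzureChatOpenAI(
    azure_endpoint="https://<resource>.openai.azure.com",
    api_version="2023-12-01-preview",  # illustrative API version
    azure_deployment="<deployment>",
    api_key="<api-key>",
    http_client=httpx.Client(proxies="http://corporate-proxy:8080"),
)
```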
### System Info
```
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.3.0: Wed Dec 20 21:31:00 PST 2023; root:xnu-10002.81.5~7/RELEASE_ARM64_T6020
> Python Version: 3.11.7 (main, Dec 4 2023, 18:10:11) [Clang 15.0.0 (clang-1500.1.0.2.5)]
Package Information
-------------------
> langchain_core: 0.1.39
> langchain: 0.1.14
> langchain_community: 0.0.31
> langsmith: 0.1.39
> langchain_openai: 0.0.8
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
``` | Azure ChatOpenAI won't use proxy provided either via env var or value | https://api.github.com/repos/langchain-ai/langchain/issues/19994/comments | 1 | 2024-04-04T08:27:43Z | 2024-05-30T10:04:15Z | https://github.com/langchain-ai/langchain/issues/19994 | 2,224,833,650 | 19,994 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
[This page](https://python.langchain.com/docs/modules/model_io/output_parsers/) provides a summary table of langchain output parsers. However, when one compares this with [the API page](https://api.python.langchain.com/en/latest/langchain_api_reference.html#module-langchain.output_parsers), [the API page for langchain_core](https://api.python.langchain.com/en/latest/core_api_reference.html#module-langchain_core.output_parsers) and [the output page for langchain_community](https://api.python.langchain.com/en/latest/community_api_reference.html#module-langchain_community.output_parsers) many, many output parsers are not documented.
It would also be a big help if there were links from the class descriptions on this page to the class definitions in the API docs (in general, adding cross-links to the API docs would be a huge improvement to the langchain docs).
### Idea or request for content:
1. Cover a full set of the output parsers in the documentation page dedicated to them.
2. Add more cross-links. | DOC: Documentation of output parsers is incomplete | https://api.github.com/repos/langchain-ai/langchain/issues/19969/comments | 0 | 2024-04-03T19:25:39Z | 2024-07-10T16:07:05Z | https://github.com/langchain-ai/langchain/issues/19969 | 2,223,739,670 | 19,969 |
[
"hwchase17",
"langchain"
] | @eyurtsev The current workaround doesn't work for create_sql_agent which is a toolkit type. Can you please provide a fix for this as well? I agreed with most users - token counting is one of the fundamental and crucial features by LangChain but this doesn't seem to be treated as a high priority. @hwchase17 Thanks. Please help the community.
_Originally posted by @axtsoi in https://github.com/langchain-ai/langchain/issues/18761#issuecomment-2033491551_
| Add option to turn off streaming in sql agent | https://api.github.com/repos/langchain-ai/langchain/issues/19961/comments | 0 | 2024-04-03T16:11:53Z | 2024-04-09T16:47:34Z | https://github.com/langchain-ai/langchain/issues/19961 | 2,223,348,157 | 19,961 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
I have this code in my Dockerfile:
```
RUN pip install --no-cache-dir fastapi uvicorn redis requests tiktoken openai
```
And here are my dependencies:
```
fastapi==0.110.1
langchain==0.1.14
openai==1.16.1
pydantic==1.10.9
python-dotenv==1.0.1
redis==5.0.3
Requests==2.31.0
tiktoken==0.6.0
```
### Error Message and Stack Trace (if applicable)
File "/usr/local/lib/python3.12/site-packages/langchain_community/embeddings/openai.py", line 452, in _get_len_safe_embeddings
raise ImportError(ImportError: Could not import tiktoken python package. This is needed in order to for OpenAIEmbeddings. Please install it with `pip install tiktoken`.
### Description
I am trying to deploy a RAG application using LangChain. Everything works fine locally, and I installed tiktoken, but when I try to deploy I get an error message saying it cannot import tiktoken. One thing worth checking: the Dockerfile's `pip install` line above installs neither the pinned requirements nor `langchain` itself, so the image may not contain the same packages (or target the same Python environment) as the local setup. Do you have any clue how I can solve this issue?
Do you have any clue on how I can solve this issue ?
### System Info
Python 3.11.4
langchain==0.1.14
langchain-community==0.0.31
langchain-core==0.1.39
langchain-experimental==0.0.37
langchain-openai==0.1.1
langchain-text-splitters==0.0.1
| unable to import tiktoken | https://api.github.com/repos/langchain-ai/langchain/issues/19958/comments | 2 | 2024-04-03T14:16:14Z | 2024-04-03T14:47:31Z | https://github.com/langchain-ai/langchain/issues/19958 | 2,223,068,629 | 19,958 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
llm = OpenAI()
prompt = PromptTemplate(
input_variables=["product"],
template="What is a good name for a company that makes {product}?",
)
chain = LLMChain(llm=llm, prompt=prompt)
chain.save("chain.json")
from langchain.chains import load_chain
chain = load_chain("chain.json")
```
One liner to reproduce this error:
```
docker run --rm python:3.9 bash -c 'pip install langchain && python -c "from langchain.chains import load_chain; load_chain(\"/path/to/chain.json\")"'
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "//a.py", line 16, in <module>
chain = load_chain("chain.json")
File "/usr/local/lib/python3.9/site-packages/langchain/chains/loading.py", line 625, in load_chain
if hub_result := try_load_from_hub(
File "/usr/local/lib/python3.9/site-packages/langchain_core/utils/loading.py", line 10, in try_load_from_hub
raise RuntimeError(
RuntimeError: Loading from the deprecated github-based Hub is no longer supported. Please use the new LangChain Hub at https://smith.langchain.com/hub instead.
```
### Description
`langchain.chains.load_chain` fails with `RuntimeError: Loading from the deprecated github-based Hub is no longer supported. Please use the new LangChain Hub at https://smith.langchain.com/hub instead.`.
### System Info
```
# pip list | grep langchain
langchain 0.1.14
langchain-community 0.0.31
langchain-core 0.1.39
langchain-text-splitters 0.0.1
``` | `langchain.chains.load_chain` fails with `RuntimeError: Loading from the deprecated github-based Hub ...` in langchain 0.1.14 and langchain-core 0.1.39 | https://api.github.com/repos/langchain-ai/langchain/issues/19947/comments | 2 | 2024-04-03T07:06:26Z | 2024-04-03T17:00:27Z | https://github.com/langchain-ai/langchain/issues/19947 | 2,222,081,791 | 19,947 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [x] I added a very descriptive title to this issue.
- [x] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
![image](https://github.com/langchain-ai/langchain/assets/49063302/97f681db-b16d-492f-8201-baa4e6cd9dc4)
Currently, calling batch will output the results all at once.
How can I obtain streaming output while calling batch?
For example, my input has 10 lines, and I would like to see the first few words of all 10 outputs as they are generated,
until every line is completed.
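A hedged sketch of one way to get this behavior (using `ChatOpenAI` as a stand-in for the runnable in the screenshot): consume one `astream()` per input concurrently instead of calling `batch()`.

```python
import asyncio

from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-3.5-turbo")  # stand-in for your runnable

async def stream_one(idx: int, line: str) -> None:
    async for chunk in model.astream(line):
        print(idx, chunk.content, flush=True)  # first words arrive early

async def main(lines: list[str]) -> None:
    # All ten streams run concurrently, so output interleaves per line.
    await asyncio.gather(*(stream_one(i, l) for i, l in enumerate(lines)))

asyncio.run(main([f"Write one sentence about topic {i}" for i in range(10)]))
```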
### Idea or request for content:
_No response_ | DOC: How to use stream output while calling batch | https://api.github.com/repos/langchain-ai/langchain/issues/19944/comments | 1 | 2024-04-03T05:42:34Z | 2024-04-03T14:43:51Z | https://github.com/langchain-ai/langchain/issues/19944 | 2,221,946,723 | 19,944 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import os
from langchain_community.document_loaders.sharepoint import SharePointLoader
os.environ['O365_CLIENT_ID'] = "<client_id>"
os.environ['O365_CLIENT_SECRET'] = "<client_secret>"
# option 1: this returns empty list
loader = SharePointLoader(document_library_id="<document_library_id>",
auth_with_token=True)
loader.load()
# option 2: this returns correctly, but only documents with listed ids
loader = SharePointLoader(document_library_id="<document_library_id>",
object_ids=["object_id_1","object_id_2"],
auth_with_token=True)
loader.load()
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
## I'm trying to use [SharePoint document loader](https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/document_loaders/sharepoint.py).
- When I don't pass in any `folder_path`, I would expect the loader to load all supported documents from the SharePoint site. However, I get an empty list `[]`, even though there are supported documents in the root directory.
- Any `folder_path` I try to pass in is rejected with `ValueError: There isn't a folder with path <folder_path>`. It may have something to do with the root directory being called `Shared Documents`. The space in the folder name might be causing trouble. _*more details in a comment_
- When I pass in `object_id`s, I get the documents returned correctly.
### System Info
```
langchain==0.1.14
langchain-community==0.0.31
langchain-core==0.1.38
langchain-openai==0.0.8
langchain-text-splitters==0.0.1
Windows / Python 3.11.5
``` | SharePoint loader returns empty list when `object_ids` are not passed in | https://api.github.com/repos/langchain-ai/langchain/issues/19938/comments | 1 | 2024-04-02T22:31:05Z | 2024-07-09T16:07:54Z | https://github.com/langchain-ai/langchain/issues/19938 | 2,221,568,087 | 19,938 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.document_loaders.github import GithubFileLoader

gh_token = "<github-personal-access-token>"  # illustrative placeholder

loader = GithubFileLoader(
    repo="hotgluexyz/tap-netsuite",
    access_token=gh_token,
    github_api_url="https://api.github.com",
    file_filter=lambda file_path: file_path.endswith(
        ".py"
    ),  # retrieves all python files
)
documents = loader.load()
```
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
HTTPError Traceback (most recent call last)
[<ipython-input-7-5f7842728e5f>](https://localhost:8080/#) in <cell line: 1>()
----> 1 documents = loader.load()
3 frames
[/usr/local/lib/python3.10/dist-packages/requests/models.py](https://localhost:8080/#) in raise_for_status(self)
1019
1020 if http_error_msg:
-> 1021 raise HTTPError(http_error_msg, response=self)
1022
1023 def close(self):
HTTPError: 404 Client Error: Not Found for url: https://api.github.com/repos/hotgluexyz/tap-netsuite/git/trees/main?recursive=1
### Description
I am trying to use LangChain to load a GitHub repository according to the documentation, but the loader requests the `main` tree unconditionally, so repositories with a different default branch 404.
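A hedged interim workaround is `GitLoader`, which clones the repository locally and accepts any branch name (`master` is assumed to be this repository's default branch):

```python
from langchain_community.document_loaders import GitLoader

loader = GitLoader(
    clone_url="https://github.com/hotgluexyz/tap-netsuite",
    repo_path="./tap-netsuite",
    branch="master",  # assumption: the repo's default branch
    file_filter=lambda p: p.endswith(".py"),
)
documents = loader.load()
```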
### System Info
langchain==0.1.14
langchain-community==0.0.31
langchain-core==0.1.38
langchain-openai==0.1.1
langchain-text-splitters==0.0.1
langcodes==3.3.0
langsmith==0.1.38
| GithubFileLoader only imports repositories with main branch | https://api.github.com/repos/langchain-ai/langchain/issues/19934/comments | 2 | 2024-04-02T21:49:56Z | 2024-04-03T15:27:22Z | https://github.com/langchain-ai/langchain/issues/19934 | 2,221,518,710 | 19,934 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Hugging Face pipeline now has [support for chat templates](https://huggingface.co/docs/transformers/main/en/chat_templating#is-there-an-automated-pipeline-for-chat). This calls the ``apply_chat_template()`` of the tokenizer. This is a super useful feature which formats the input correctly according to the model. To apply the template one needs to pass a messages list to the pipeline as input (and not a prompt text).
Langchain's ``HuggingFacePipeline`` class is written in a way that only prompt text is passed to the pipeline. We can see this in the ``HuggingFacePipeline._generate`` method. As a result the prompt is constructed using Langchain's default template which is not the same as what the model works best with.
Let's build an example.
```python
import torch
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_community.llms.huggingface_pipeline import HuggingFacePipeline
def test_agnostic_prompt(llm):
prompt = ChatPromptTemplate.from_messages(
[
("human", "When was Abraham Lincoln born?"),
("ai", "Abraham Lincoln was born on February 12, 1809."),
("human", "How old was he when he died?"),
("ai", "Abraham Lincoln died on April 15, 1865, at the age of 56."),
("human", "{question}"),
]
)
output_parser = StrOutputParser()
chain = prompt | llm | output_parser
reply = chain.invoke({"question": "Where did he die?"})
print(reply)
hf_llm = HuggingFacePipeline.from_model_id(
model_id="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
task="text-generation",
pipeline_kwargs={"max_new_tokens": 128})
test_agnostic_prompt(hf_llm)
```
This sends the following prompt.
```
Human: When was Abraham Lincoln born?
AI: Abraham Lincoln was born on February 12, 1809.
Human: How old was he when he died?
AI: Abraham Lincoln died on April 15, 1865, at the age of 56.
Human: Where did he die?
```
The correct prompt, if chat template was applied, would be:
```
<|user|>
When was Abraham Lincoln born?</s>
<|assistant|>
Abraham Lincoln was born on February 12, 1809.</s>
<|user|>
How old was he when he died?</s>
<|assistant|>
Abraham Lincoln died on April 15, 1865, at the age of 56.</s>
<|user|>
Where did he die?</s>
<|assistant|>
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
The ``HuggingFacePipeline`` class should do what is necessary to convert the ``ChatPromptTemplate`` to a messages list and then pass it to the pipeline. This will cause the pipeline to use ``apply_chat_template()`` of the tokenizer to correctly format the prompt.
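A hedged sketch of the message conversion this would require (written against `langchain_core` message types; the wiring into `_generate` is omitted):

```python
from langchain_core.messages import AIMessage, BaseMessage, HumanMessage, SystemMessage

def to_pipeline_messages(messages: list[BaseMessage]) -> list[dict]:
    """Map LangChain chat messages to the transformers chat-pipeline format."""
    role_map = {HumanMessage: "user", AIMessage: "assistant", SystemMessage: "system"}
    return [{"role": role_map[type(m)], "content": m.content} for m in messages]
```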
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Sat Feb 24 09:50:35 UTC 2024
> Python Version: 3.10.13 | packaged by conda-forge | (main, Dec 23 2023, 15:36:39) [GCC 12.3.0]
Package Information
-------------------
> langchain_core: 0.1.38
> langchain: 0.1.14
> langchain_community: 0.0.31
> langsmith: 0.1.24
> langchain_openai: 0.1.1
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | HuggingFacePipeline does not use chat template | https://api.github.com/repos/langchain-ai/langchain/issues/19933/comments | 7 | 2024-04-02T20:27:53Z | 2024-08-06T22:56:36Z | https://github.com/langchain-ai/langchain/issues/19933 | 2,221,387,189 | 19,933 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
%pip install -U xmltodict

from langchain_community.tools.pubmed.tool import PubmedQueryRun

tool = PubmedQueryRun()
tool.invoke("What causes lung cancer?")
```
### Error Message and Stack Trace (if applicable)
'PubMed exception: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate in certificate chain (_ssl.c:1006)>'
### Description
I'm running your notebook and getting a certificate failure when hitting PubMed: https://github.com/langchain-ai/langchain/blob/master/docs/docs/integrations/tools/pubmed.ipynb
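A hedged workaround sketch, assuming the failure is a local trust issue (e.g. a corporate proxy re-signing TLS): point Python's SSL stack at the `certifi` CA bundle before the PubMed wrapper opens its connection. If your network injects its own root certificate, point these variables at that certificate instead.
```python
# Hedged sketch: direct urllib (used by the PubMed wrapper) at the certifi
# CA bundle; only appropriate when the error stems from local trust config.
import os
import certifi

os.environ["SSL_CERT_FILE"] = certifi.where()
os.environ["REQUESTS_CA_BUNDLE"] = certifi.where()

from langchain_community.tools.pubmed.tool import PubmedQueryRun

tool = PubmedQueryRun()
print(tool.invoke("What causes lung cancer?"))
```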
### System Info
platform mac
Python 3.11.7
Langchain Version: 0.1.14 | Pubmed Tool SSL Cert Error when running code https://github.com/langchain-ai/langchain/blob/master/docs/docs/integrations/tools/pubmed.ipynb getting 'PubMed exception: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate in certificate chain (_ssl.c:1006)>' | https://api.github.com/repos/langchain-ai/langchain/issues/19916/comments | 3 | 2024-04-02T12:47:20Z | 2024-07-10T16:07:00Z | https://github.com/langchain-ai/langchain/issues/19916 | 2,220,438,158 | 19,916 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
> File "/usr/local/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 319, in astream
> async for chunk in _stream_implementation(
>
>
> File "/usr/local/lib/python3.11/site-packages/langchain_openai/chat_models/base.py", line 511, in _astream
> async for chunk in await self.async_client.create(
>
>
> File "/usr/local/lib/python3.11/site-packages/openai/_streaming.py", line 150, in __aiter__
> async for item in self._iterator:
>
>
> File "/usr/local/lib/python3.11/site-packages/openai/_streaming.py", line 181, in __stream__
> raise APIError(
>
### Error Message and Stack Trace (if applicable)
APIError('Failed to create completion as the model generated invalid Unicode output. Unfortunately, this can happen in rare situations. Consider reviewing your prompt or reducing the temperature of your request. You can retry your request, or contact us through our help center at help.openai.com if the error persists. (Please include the request ID req_b00ea6f9adc098af60178f40d8266c7d in your message.)')Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 319, in astream
async for chunk in _stream_implementation(
File "/usr/local/lib/python3.11/site-packages/langchain_openai/chat_models/base.py", line 511, in _astream
async for chunk in await self.async_client.create(
File "/usr/local/lib/python3.11/site-packages/openai/_streaming.py", line 150, in __aiter__
async for item in self._iterator:
File "/usr/local/lib/python3.11/site-packages/openai/_streaming.py", line 181, in __stream__
raise APIError(
### Description
Sometimes there is no response; instead, the stream fails with the `APIError` shown above. A hedged retry sketch follows.
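Since OpenAI describes this as a rare, retryable condition, wrapping the call with a retry loop is a reasonable client-side mitigation (the retry count and backoff are illustrative):
```python
# Hedged sketch: retry transient APIErrors raised while invoking the chain.
import asyncio

import openai


async def ainvoke_with_retry(chain, payload, attempts=3):
    for attempt in range(attempts):
        try:
            return await chain.ainvoke(payload)
        except openai.APIError:
            if attempt == attempts - 1:
                raise
            await asyncio.sleep(2 ** attempt)  # simple exponential backoff
```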
### System Info
python3.11
ubuntu | OpenAI API: Failed to create completion as the model generated invalid Unicode output | https://api.github.com/repos/langchain-ai/langchain/issues/19914/comments | 0 | 2024-04-02T12:05:19Z | 2024-07-09T16:07:44Z | https://github.com/langchain-ai/langchain/issues/19914 | 2,220,345,704 | 19,914 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
_embedding = VertexAIEmbeddings(
    project="PROJECT_ID",
    credentials=gcp_auth.credentials,
    model_name="textembedding-gecko@003",
    location="PROJECT_LOCATION",
)
_db = PGVector(
    connection_string="CONN",
    collection_name="COLLECTION_NAME",
    embedding_function=_embedding,
    use_jsonb=True,
)
_db.search(query="test", search_type="mmr")
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "/Users/amit/telly/telly-backend/backend/agent/healthcheck/components/hc_vector_db.py", line 29, in check_health
db.search(query="test", search_type=self.settings.vector_db.retriever.type)
File "/Users/amit/telly/telly-backend/backend/venv/lib/python3.12/site-packages/langchain_core/vectorstores.py", line 160, in search
return self.max_marginal_relevance_search(query, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/amit/telly/telly-backend/backend/venv/lib/python3.12/site-packages/langchain_community/vectorstores/pgvector.py", line 1236, in max_marginal_relevance_search
return self.max_marginal_relevance_search_by_vector(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/amit/telly/telly-backend/backend/venv/lib/python3.12/site-packages/langchain_community/vectorstores/pgvector.py", line 1314, in max_marginal_relevance_search_by_vector
docs_and_scores = self.max_marginal_relevance_search_with_score_by_vector(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/amit/telly/telly-backend/backend/venv/lib/python3.12/site-packages/langchain_community/vectorstores/pgvector.py", line 1196, in max_marginal_relevance_search_with_score_by_vector
mmr_selected = maximal_marginal_relevance(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/amit/telly/telly-backend/backend/venv/lib/python3.12/site-packages/langchain_community/vectorstores/utils.py", line 34, in maximal_marginal_relevance
similarity_to_query = cosine_similarity(query_embedding, embedding_list)[0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/amit/telly/telly-backend/backend/venv/lib/python3.12/site-packages/langchain_community/utils/math.py", line 29, in cosine_similarity
Z = 1 - simd.cdist(X, Y, metric="cosine")
~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
TypeError: unsupported operand type(s) for -: 'int' and 'simsimd.DistancesTensor'
INFO Cannot execute Healthcheck: hc-vector-db hc_vector_db.py:33
```
### Description
Library Versions Used:
* langchain~=0.1.14
* langchain-google-vertexai~=0.1.2
* simsimd~=4.2.2
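The failure is in `langchain_community.utils.math.cosine_similarity`, which computes `1 - simd.cdist(...)` while the installed `simsimd` returns a `DistancesTensor` that does not support that broadcast. A hedged workaround sketch, assuming one patches or wraps the helper locally — converting the result to a NumPy array first makes the subtraction valid:
```python
# Hedged sketch: cast simsimd's DistancesTensor to a NumPy array so that
# `1 - ...` broadcasts element-wise instead of raising a TypeError.
import numpy as np
import simsimd as simd


def cosine_similarity(X, Y):
    X = np.asarray(X, dtype=np.float32)
    Y = np.asarray(Y, dtype=np.float32)
    return 1 - np.array(simd.cdist(X, Y, metric="cosine"))
```
Pinning `simsimd` to a release whose `cdist` returns a plain `ndarray` may also avoid the error.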
### System Info
NA | TypeError: unsupported operand type(s) for -: 'int' and 'simsimd.DistancesTensor' | https://api.github.com/repos/langchain-ai/langchain/issues/19905/comments | 3 | 2024-04-02T08:52:03Z | 2024-07-16T16:06:48Z | https://github.com/langchain-ai/langchain/issues/19905 | 2,219,935,094 | 19,905 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.agents.react.agent import create_react_agent
from langchain.prompts import PromptTemplate
from langchain.tools.retriever import create_retriever_tool
from langchain.vectorstores.chroma import Chroma
from langchain_openai import OpenAI

retr_tool = Chroma().as_retriever()
retriever_tool = create_retriever_tool(
    retr_tool, "db_search", "Searches and returns excerpts from database."
)
tools = [retriever_tool]

llm = OpenAI(temperature=0)

prompt = PromptTemplate(
    input_variables=["input", "agent_scratchpad"],
    template=""" --- REACT PROMPT HERE ---
Query:...
Thought: ...
Action: ...
Action Input: ...
Observation: ...
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I am finished executing the plan (or, I cannot finish executing the plan without knowing some other information.)
Final Answer: the final output from executing the plan or missing information I'd need to re-plan correctly.
Query: {input}.
Thought:
{agent_scratchpad}""",
)

# create the Agent
react_runnable = create_react_agent(
    tools=tools,
    llm=llm,
    prompt=prompt,
)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I'm trying to use create_react_agent to create an agent for an AgentExecutor, as ZeroShotAgent is marked as deprecated and will be removed in 0.2.0.
The problem is that create_react_agent creates a Runnable, and not a Union[BaseSingleActionAgent, BaseMultiActionAgent], so it can't be used as input of AgentExecutor, as suggested in different parts of the documentation (see [here](https://python.langchain.com/docs/modules/agents/agent_types/react#create-agent)).
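That said, at runtime `AgentExecutor` appears to accept the `Runnable` anyway: its validator seems to wrap a `Runnable` agent into a `RunnableAgent`/`RunnableMultiActionAgent` internally (inferred from the 0.1.x source, so treat this as an assumption to verify). A minimal sketch:
```python
# Hedged sketch: pass the Runnable from create_react_agent straight to
# AgentExecutor; the executor is expected to wrap it internally.
from langchain.agents import AgentExecutor

agent_executor = AgentExecutor(agent=react_runnable, tools=tools, verbose=True)
agent_executor.invoke({"input": "What does the database say about X?"})
```
If this works, the bug may be in the type annotations and documentation rather than the runtime behavior.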
### System Info
System Information
------------------
> OS: Linux
> OS Version: #26~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Tue Mar 12 10:22:43 UTC 2
> Python Version: 3.12.2 (main, Feb 25 2024, 16:35:05) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.1.36
> langchain: 0.1.13
> langchain_community: 0.0.29
> langsmith: 0.1.37
> langchain_openai: 0.0.8
> langchain_text_splitters: 0.0.1
> langchainhub: 0.1.15
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | create_react_agent cannot replace ZeroShotAgent | https://api.github.com/repos/langchain-ai/langchain/issues/19904/comments | 2 | 2024-04-02T07:56:49Z | 2024-07-11T16:07:09Z | https://github.com/langchain-ai/langchain/issues/19904 | 2,219,825,338 | 19,904 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
def _create_chat_result(self, response: Union[dict, BaseModel]) -> ChatResult:
    generations = []
    if not isinstance(response, dict):
        response = response.dict()
    for res in response["choices"]:
        message = convert_dict_to_message(res["message"])
        gen = ChatGeneration(
            message=message,
            generation_info=dict(finish_reason=res.get("finish_reason")),
        )
        generations.append(gen)
    token_usage = response.get("usage", {})
    llm_output = {
        "token_usage": token_usage,
        "model_name": self.model_name,
        "system_fingerprint": response.get("system_fingerprint", ""),
    }
    return ChatResult(generations=generations, llm_output=llm_output)
```
### Error Message and Stack Trace (if applicable)
File "/Users/yuelu/anaconda3/envs/localgpt2/lib/python3.10/site-packages/langchain/chains/base.py", line 381, in acall
raise e
File "/Users/yuelu/anaconda3/envs/localgpt2/lib/python3.10/site-packages/langchain/chains/base.py", line 375, in acall
await self._acall(inputs, run_manager=run_manager)
| | | -> <langchain_core.callbacks.manager.AsyncCallbackManagerForChainRun object at 0x297397ee0>
| | -> {'context': '文档内容总结', 'input': 'Python९ҕॉ൭Ҩ (PythonLibrary\nReference)\nRelease2.3.3\nGuidovanRossum\nFredL. Drake,Jr.,editor...
| -> <function LLMChain._acall at 0x138bb9630>
-> LLMChain(verbose=True, prompt=PromptTemplate(input_variables=['context', 'input'], template='你非常擅长总结文章,请根据{context},总结文章内容。\n...
File "/Users/yuelu/anaconda3/envs/localgpt2/lib/python3.10/site-packages/langchain/chains/llm.py", line 276, in _acall
response = await self.agenerate(, run_manager=run_manager)
| | | -> <langchain_core.callbacks.manager.AsyncCallbackManagerForChainRun object at 0x297397ee0>
| | -> {'context': '文档内容总结', 'input': 'Python९ҕॉ൭Ҩ (PythonLibrary\nReference)\nRelease2.3.3\nGuidovanRossum\nFredL. Drake,Jr.,editor...
| -> <function LLMChain.agenerate at 0x138bb9240>
-> LLMChain(verbose=True, prompt=PromptTemplate(input_variables=['context', 'input'], template='你非常擅长总结文章,请根据{context},总结文章内容。\n...
File "/Users/yuelu/anaconda3/envs/localgpt2/lib/python3.10/site-packages/langchain/chains/llm.py", line 143, in agenerate
return await self.llm.agenerate_prompt(
| | -> <function BaseChatModel.agenerate_prompt at 0x1385db6d0>
| -> ChatOpenAI(client=<class 'openai.api_resources.chat_completion.ChatCompletion'>, openai_api_key='3e7fdf2aaf9da189922784c997a9...
-> LLMChain(verbose=True, prompt=PromptTemplate(input_variables=['context', 'input'], template='你非常擅长总结文章,请根据{context},总结文章内容。\n...
File "/Users/yuelu/anaconda3/envs/localgpt2/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 501, in agenerate_prompt
return await self.agenerate(
| -> <function BaseChatModel.agenerate at 0x1385db5b0>
-> ChatOpenAI(client=<class 'openai.api_resources.chat_completion.ChatCompletion'>, openai_api_key='3e7fdf2aaf9da189922784c997a9...
File "/Users/yuelu/anaconda3/envs/localgpt2/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 461, in agenerate
raise exceptions[0]
-> [KeyError('choices')]
File "/Users/yuelu/anaconda3/envs/localgpt2/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 577, in _agenerate_with_cache
result = await self._agenerate(
| -> <function ChatOpenAI._agenerate at 0x13fe9e5f0>
-> ChatOpenAI(client=<class 'openai.api_resources.chat_completion.ChatCompletion'>, openai_api_key='3e7fdf2aaf9da189922784c997a9...
File "/Users/yuelu/anaconda3/envs/localgpt2/lib/python3.10/site-packages/langchain_community/chat_models/openai.py", line 525, in _agenerate
return self._create_chat_result(response)
| | -> <OpenAIObject at 0x28ebe8720> JSON: {
| | "error": {
| | "message": "This model's maximum context length is 16385 tokens. Howev...
| -> <function ChatOpenAI._create_chat_result at 0x13fe9e4d0>
-> ChatOpenAI(client=<class 'openai.api_resources.chat_completion.ChatCompletion'>, openai_api_key='3e7fdf2aaf9da189922784c997a9...
File "/Users/yuelu/anaconda3/envs/localgpt2/lib/python3.10/site-packages/langchain_community/chat_models/openai.py", line 454, in _create_chat_result
for res in response["choices"]:
-> <OpenAIObject at 0x28ebe8720> JSON: {
"error": {
"message": "This model's maximum context length is 16385 tokens. Howev...
KeyError: 'choices'
### Description
I'm trying to use LangChain to summarize the content of a document. The document is very long, so the LLM's response is an error payload:
```
"error": {
    "message": "This model's maximum context length is 16385 tokens.xxx"
}
```
But `langchain_community.chat_models.openai.ChatOpenAI._create_chat_result` hides the original response and throws `KeyError: 'choices'` instead. I want to get the original error.
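For illustration, a hedged sketch of the kind of guard that would surface the upstream error instead of a bare `KeyError` (whether to raise, and which exception type to use, is a maintainer decision):
```python
# Hedged sketch: check for an error payload before assuming "choices" exists,
# so callers see the model's real message (e.g. the context-length error).
def _create_chat_result(self, response):
    if not isinstance(response, dict):
        response = response.dict()
    if "error" in response:
        raise ValueError(f"OpenAI API returned an error: {response['error']}")
    # ... proceed with response["choices"] as before ...
```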
### System Info
langchain 0.1.0
python 3.10
mac and linux | langchain_community.chat_models.openai.ChatOpenAI._create_chat_result hides the original response | https://api.github.com/repos/langchain-ai/langchain/issues/19898/comments | 2 | 2024-04-02T03:42:39Z | 2024-07-10T16:06:46Z | https://github.com/langchain-ai/langchain/issues/19898 | 2,219,494,993 | 19,898 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The error is in the file `apache_doris.py`, in the `vectorstores` folder. The following code:
```python
else f"array<float>{str(_n)}"
```
needs to be changed to:
```python
else f"{str(_n)}"
```
### Error Message and Stack Trace (if applicable)
Error <class 'pymysql.err.OperationalError'> (1105, "ParseException, ms....
### Description
If we remove the string `"array<float>"`, it works. I have tested this with Apache Doris 2.1.0.
### System Info
Apache Doris 2.1.0
Latest version of LangChain
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Here is the code that I use. I have indexed data in Azure Cognitive Search, and each chunk carries a searchable metadata field, `location`.
If I use the `acs.as_retriever()` function with a filter, it retrieves documents from other locations as well — in the code below I print each retrieved document's `location` metadata, and the output mixes locations.
But if I use `acs.similarity_search()` and pass the filter there, it retrieves only the requested location's data, with no mixed-location results.
```python
acs = acs_search("testindex")

retriever = acs.as_retriever(
    search_kwargs={"filter": {"location": "US"}, "k": 5}
)


def format_docs(docs):
    for i in docs:
        print(i.metadata["location"])
    return "\n\n".join(doc.page_content for doc in docs)


rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)

rag_chain.invoke("What is hr policy about leave")
```
**OUTPUT**
```
US
PK
MY
US
MY
'The HR policy about leave at xyz includes standard paid leave for full-time employees after 90 days of continuous employment. This includes Annual Leave (AL) of 14 workdays....
```
### Using `acs.similarity_search()`
```python
res = acs.similarity_search(
    query="what is the hr policy for annual leave",
    k=4,
    search_type="hybrid",
    filters="location eq 'US'",
)
res
```
OUTPUT:
```
[Document(page_content='Leave taken under this policy does, metadata={'source': '2023-us.pdf', 'location': 'US'}),
Document(page_content='You may use available vacation, pers metadata={'source': '2023-us.pdf', 'location': 'US'}),
Document(page_content="Failure to Return to Work If you fa", metadata={'source': '2023-us.pdf', 'location': 'US'}),
Document(page_content='To request leave under this policy, , metadata={'source': '2023-us.pdf', 'location': 'US'})]
```
You can see that this function returns exactly the filtered data, not mixed data.
What would the solution be? We use the first approach (`as_retriever()`) inside a chain, and there we are unable to get filtered data.
### Error Message and Stack Trace (if applicable)
Inside `langchain_core` > `vectorstores.py` I placed this print statement; the kwargs arrive, but the filter still does not take effect:
```python
def _get_relevant_documents(
    self, query: str, *, run_manager: CallbackManagerForRetrieverRun
) -> List[Document]:
    if self.search_type == "similarity":
        print("===filter=======>\n", self.search_kwargs, "\n=============")
        docs = self.vectorstore.similarity_search(query, **self.search_kwargs)
```
OUTPUT:
```
===filter=======>
{'filter': {'location': 'US'}, 'k': 5}
=============
```
We are unable to get filtered data when using the `as_retriever()` function inside a chain; the documents it returns are shown in the first code output above. A hedged workaround sketch follows.
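The working `similarity_search()` call passes an OData string under `filters`, while the retriever was given a Chroma-style dict under `filter`, which the `AzureSearch` store appears to ignore. Passing the same `filters` key through `search_kwargs` may behave as expected (key name inferred from the working call above — verify against your installed version):
```python
# Hedged sketch: reuse the OData filter string that works in
# similarity_search() via the retriever's search_kwargs.
retriever = acs.as_retriever(
    search_kwargs={"filters": "location eq 'US'", "k": 5}
)
```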
### Description
I use the versions below:
```
langchain==0.1.8
langchain-community==0.0.21
langchain-core==0.1.25
langchain-openai==0.0.6
```
### System Info
```
aiohttp==3.9.3
aiosignal==1.3.1
annotated-types==0.6.0
antlr4-python3-runtime==4.9.3
anyio==4.3.0
argon2-cffi==23.1.0
argon2-cffi-bindings==21.2.0
arrow==1.3.0
asgiref==3.7.2
asttokens==2.4.1
async-lru==2.0.4
async-timeout==4.0.3
asyncio-redis==0.16.0
attrs==23.2.0
azure-common==1.1.28
azure-core==1.30.0
azure-identity==1.15.0
azure-monitor-opentelemetry-exporter==1.0.0b22
azure-search-documents==11.4.0
azure-storage-blob==12.19.1
Babel==2.14.0
backoff==2.2.1
beautifulsoup4==4.12.3
bleach==6.1.0
cachetools==5.3.3
certifi==2024.2.2
cffi==1.16.0
chardet==5.2.0
charset-normalizer==3.3.2
click==8.1.7
cohere==4.56
coloredlogs==15.0.1
comm==0.2.1
contourpy==1.2.0
cryptography==42.0.4
cycler==0.12.1
dataclasses-json==0.6.4
debugpy==1.8.1
decorator==5.1.1
deepdiff==6.7.1
defusedxml==0.7.1
Deprecated==1.2.14
distro==1.9.0
effdet==0.4.1
emoji==2.10.1
et-xmlfile==1.1.0
exceptiongroup==1.2.0
executing==2.0.1
fastapi==0.109.2
fastavro==1.9.4
fastjsonschema==2.19.1
filelock==3.13.1
filetype==1.2.0
fixedint==0.1.6
flatbuffers==24.3.6
fonttools==4.49.0
fqdn==1.5.1
frozenlist==1.4.1
fsspec==2024.2.0
greenlet==3.0.3
h11==0.14.0
httpcore==1.0.4
httpx==0.27.0
huggingface-hub==0.21.4
humanfriendly==10.0
idna==3.6
importlib-metadata==6.11.0
iopath==0.1.10
ipykernel==6.29.2
ipython==8.22.1
ipywidgets==8.1.2
isodate==0.6.1
isoduration==20.11.0
jedi==0.19.1
Jinja2==3.1.3
joblib==1.3.2
json5==0.9.24
jsonpatch==1.33
jsonpath-python==1.0.6
jsonpointer==2.4
jsonschema==4.21.1
jsonschema-specifications==2023.12.1
jupyter==1.0.0
jupyter-console==6.6.3
jupyter-events==0.10.0
jupyter-lsp==2.2.4
jupyter_client==8.6.0
jupyter_core==5.7.1
jupyter_server==2.13.0
jupyter_server_terminals==0.5.3
jupyterlab==4.1.5
jupyterlab_pygments==0.3.0
jupyterlab_server==2.25.4
jupyterlab_widgets==3.0.10
kiwisolver==1.4.5
langchain==0.1.8
langchain-community==0.0.21
langchain-core==0.1.25
langchain-openai==0.0.6
langchainhub==0.1.15
langdetect==1.0.9
langsmith==0.1.5
layoutparser==0.3.4
lxml==5.1.0
MarkupSafe==2.1.5
marshmallow==3.20.2
matplotlib==3.8.3
matplotlib-inline==0.1.6
mistune==3.0.2
mpmath==1.3.0
msal==1.26.0
msal-extensions==1.1.0
msrest==0.7.1
multidict==6.0.5
mypy-extensions==1.0.0
nbclient==0.10.0
nbconvert==7.16.3
nbformat==5.10.3
nest-asyncio==1.6.0
networkx==3.2.1
nltk==3.8.1
notebook==7.1.2
notebook_shim==0.2.4
numpy==1.26.4
nvidia-cublas-cu12==12.1.3.1
nvidia-cuda-cupti-cu12==12.1.105
nvidia-cuda-nvrtc-cu12==12.1.105
nvidia-cuda-runtime-cu12==12.1.105
nvidia-cudnn-cu12==8.9.2.26
nvidia-cufft-cu12==11.0.2.54
nvidia-curand-cu12==10.3.2.106
nvidia-cusolver-cu12==11.4.5.107
nvidia-cusparse-cu12==12.1.0.106
nvidia-nccl-cu12==2.19.3
nvidia-nvjitlink-cu12==12.4.99
nvidia-nvtx-cu12==12.1.105
oauthlib==3.2.2
omegaconf==2.3.0
onnx==1.15.0
onnxruntime==1.15.1
openai==1.12.0
opencv-python==4.9.0.80
openpyxl==3.1.2
opentelemetry-api==1.22.0
opentelemetry-instrumentation==0.43b0
opentelemetry-instrumentation-asgi==0.43b0
opentelemetry-instrumentation-fastapi==0.43b0
opentelemetry-sdk==1.22.0
opentelemetry-semantic-conventions==0.43b0
opentelemetry-util-http==0.43b0
ordered-set==4.1.0
overrides==7.7.0
packaging==23.2
pandas==2.2.1
pandocfilters==1.5.1
parso==0.8.3
pdf2image==1.17.0
pdfminer.six==20221105
pdfplumber==0.10.4
pexpect==4.9.0
pikepdf==8.13.0
pillow==10.2.0
pillow_heif==0.15.0
platformdirs==4.2.0
portalocker==2.8.2
prometheus_client==0.20.0
prompt-toolkit==3.0.43
protobuf==4.25.3
psutil==5.9.8
ptyprocess==0.7.0
pure-eval==0.2.2
pycocotools==2.0.7
pycparser==2.21
pydantic==2.6.1
pydantic-settings==2.2.0
pydantic_core==2.16.2
Pygments==2.17.2
PyJWT==2.8.0
pymssql==2.2.11
pyparsing==3.1.2
pypdf==4.1.0
pypdfium2==4.27.0
pytesseract==0.3.10
python-dateutil==2.8.2
python-docx==1.1.0
python-dotenv==1.0.1
python-iso639==2024.2.7
python-json-logger==2.0.7
python-magic==0.4.27
python-multipart==0.0.9
pytz==2024.1
PyYAML==6.0.1
pyzmq==25.1.2
qtconsole==5.5.1
QtPy==2.4.1
rapidfuzz==3.6.2
redis==5.0.1
referencing==0.34.0
regex==2023.12.25
requests==2.31.0
requests-oauthlib==1.3.1
rfc3339-validator==0.1.4
rfc3986-validator==0.1.1
rpds-py==0.18.0
safetensors==0.4.2
scipy==1.12.0
Send2Trash==1.8.2
six==1.16.0
sniffio==1.3.0
soupsieve==2.5
SQLAlchemy==2.0.27
stack-data==0.6.3
starlette==0.36.3
sympy==1.12
tabulate==0.9.0
tenacity==8.2.3
terminado==0.18.1
tiktoken==0.6.0
timm==0.9.16
tinycss2==1.2.1
tokenizers==0.15.2
tomli==2.0.1
torch==2.2.1
torchvision==0.17.1
tornado==6.4
tqdm==4.66.2
traitlets==5.14.1
transformers==4.38.2
triton==2.2.0
types-python-dateutil==2.9.0.20240316
types-requests==2.31.0.20240311
typing-inspect==0.9.0
typing_extensions==4.9.0
tzdata==2024.1
unstructured==0.12.4
unstructured-client==0.21.1
unstructured-inference==0.7.23
unstructured.pytesseract==0.3.12
uri-template==1.3.0
urllib3==2.2.1
uvicorn==0.27.1
wcwidth==0.2.13
webcolors==1.13
webencodings==0.5.1
websocket-client==1.7.0
widgetsnbextension==4.0.10
wrapt==1.16.0
xlrd==2.0.1
yarl==1.9.4
zipp==3.17.0
``` | retriever.as_retriever() function cannot retrieve data based on filter from azure cognitive search but | https://api.github.com/repos/langchain-ai/langchain/issues/19885/comments | 4 | 2024-04-01T20:24:40Z | 2024-07-11T16:06:59Z | https://github.com/langchain-ai/langchain/issues/19885 | 2,219,001,920 | 19,885 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
https://js.langchain.com/docs/modules/chains/popular/vector_db_qa
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Several platforms offer an OpenAI-compatible API as an alternative to their own API; examples include TGI and vLLM. However, these servers require the prompt to be a single string rather than a list of strings. Unfortunately, when using the QA retrieval chain, I encounter an error because LangChain sends the prompt as a list.
Issues related to this:
https://github.com/vllm-project/vllm/issues/186
https://github.com/vllm-project/vllm/pull/323#issuecomment-1615254198
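A hedged workaround sketch until those servers accept batched prompt lists: use the chat model class, which calls the `/chat/completions` endpoint with a messages payload instead of a prompt list (the base URL and model name here are placeholders):
```python
# Hedged sketch: ChatOpenAI sends messages rather than a list of prompts,
# which OpenAI-compatible servers such as vLLM/TGI generally accept.
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    openai_api_base="http://localhost:8000/v1",  # assumed local vLLM/TGI endpoint
    openai_api_key="EMPTY",
    model="mistralai/Mistral-7B-Instruct-v0.2",
)
print(llm.invoke("Say hello").content)
```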
### System Info
`langchain 0.1.14 `
Windows | Langchain passes prompt as a list instead of str in Retrieval Chain to Openai API | https://api.github.com/repos/langchain-ai/langchain/issues/19872/comments | 0 | 2024-04-01T17:38:06Z | 2024-07-09T16:07:19Z | https://github.com/langchain-ai/langchain/issues/19872 | 2,218,732,400 | 19,872 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import getpass
import os

from langchain_core.pydantic_v1 import BaseModel, Field


# Note that the docstrings here are crucial, as they will be passed along
# to the model along with the class name.
class Multiply(BaseModel):
    """Multiply two integers together."""

    a: int = Field(..., description="First integer")
    b: int = Field(..., description="Second integer")


os.environ["GOOGLE_API_KEY"] = getpass.getpass()

from langchain_google_genai import ChatGoogleGenerativeAI

llm = ChatGoogleGenerativeAI(model="gemini-pro")
llm_with_tools = llm.bind_tools([Multiply])
llm_with_tools.invoke("what's 3 * 12")
```
### Error Message and Stack Trace (if applicable)
langchaingemsample.py", line 20, in <module>
llm_with_tools = llm.bind_tools([Multiply])
^^^^^^^^^^^^^^
AttributeError: 'ChatGoogleGenerativeAI' object has no attribute 'bind_tools'
### Description
I was following the LangChain function-calling examples at https://python.langchain.com/docs/modules/model_io/chat/function_calling with Google's model, and I got the error "AttributeError: 'ChatGoogleGenerativeAI' object has no attribute 'bind_tools'". What am I doing wrong?
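A hedged sketch of a defensive check: `bind_tools` appears to have landed on `ChatGoogleGenerativeAI` only in later releases of `langchain-google-genai`, so guarding on the attribute (and upgrading the package) is one way forward — the exact release that introduced it is an assumption to verify against the changelog:
```python
# Hedged sketch: fail with a clear message when the installed
# langchain-google-genai predates bind_tools support.
from langchain_google_genai import ChatGoogleGenerativeAI

llm = ChatGoogleGenerativeAI(model="gemini-pro")
if hasattr(llm, "bind_tools"):
    llm_with_tools = llm.bind_tools([Multiply])  # Multiply as defined above
    print(llm_with_tools.invoke("what's 3 * 12"))
else:
    # Assumed remedy: pip install -U langchain-google-genai
    raise RuntimeError("This langchain-google-genai version lacks bind_tools; upgrade it.")
```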
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.19045
> Python Version: 3.11.4 (tags/v3.11.4:d2340ef, Jun 7 2023, 05:45:37) [MSC v.1934 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.1.37
> langchain: 0.1.13
> langchain_community: 0.0.29
> langsmith: 0.1.31
> langchain_experimental: 0.0.50
> langchain_google_genai: 1.0.1
> langchain_google_vertexai: 0.1.1
> langchain_openai: 0.1.1
> langchain_text_splitters: 0.0.1
> langchaingemsample: Installed. No version info available.
> langchainhub: 0.1.14
> langgraph: 0.0.30
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langserve | AttributeError: 'ChatGoogleGenerativeAI' object has no attribute 'bind_tools' | https://api.github.com/repos/langchain-ai/langchain/issues/19868/comments | 2 | 2024-04-01T17:00:09Z | 2024-07-09T16:07:15Z | https://github.com/langchain-ai/langchain/issues/19868 | 2,218,667,013 | 19,868 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
)

endpoint_url = "http://url/v1"

chat = ChatOpenAI(
    model="mistral-medium",
    openai_api_key="EMPTY",
    openai_api_base=endpoint_url,
    max_tokens=5,
    temperature=0,
)

template = (
    "You are a helpful assistant that translates {input_language} to {output_language}."
)
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template = "{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)

chat_prompt = ChatPromptTemplate.from_messages(
    [system_message_prompt, human_message_prompt]
)

results = chat.invoke(
    chat_prompt.format_prompt(
        input_language="English", output_language="Italian", text="I love programming."
    ).to_messages()
)
print(results)
```
### Error Message and Stack Trace (if applicable)
```python
openai.UnprocessableEntityError: Error code: 422 - {'error': 'Template error: template not found', 'error_type': 'template_error'}
```
### Description
I'm encountering an issue with LangChain's OpenAI chat integration, specifically when using custom templates for dynamic chat prompts. The expected behavior is to generate chat prompts from the specified system and human message templates and then run a chat session with those prompts. Instead, the call fails with an `openai.UnprocessableEntityError` whose body reports a template error, saying the template was not found.
This happens despite following the documented approach for creating and using `ChatPromptTemplate`, `SystemMessagePromptTemplate`, and `HumanMessagePromptTemplate` within LangChain's framework. The error suggests either a problem with template processing/handling on the serving side or a misconfiguration in the OpenAI-compatible chat setup.
Even when interacting with the model through a plain cURL request, the same missing-template error occurs.
#### cURL Testing
```curl
curl http://url/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "mistralai/Mistral-7B-Instruct-v0.2",
"messages": [
{
"role": "system",
"content": "When user gives a number, simply respond with the double and say nothing else."
},
{
"role": "user",
"content": "100"
}]
}'
```
### System Info
System Information
------------------
> OS: Linux
> OS Version: #224-Ubuntu SMP Mon Jun 19 13:30:12 UTC 2023
> Python Version: 3.10.0 (default, Jan 10 2024, 16:43:55) [GCC 7.5.0]
Package Information
-------------------
> langchain_core: 0.1.37
> langchain: 0.1.14
> langchain_community: 0.0.30
> langsmith: 0.1.38
> langchain_mistralai: 0.1.0
> langchain_openai: 0.1.1
> langchain_text_splitters: 0.0.1
| Bug Report: Issue with LangChain and OpenAI Template Handling (vLLM) | https://api.github.com/repos/langchain-ai/langchain/issues/19857/comments | 3 | 2024-04-01T13:56:20Z | 2024-04-10T08:09:25Z | https://github.com/langchain-ai/langchain/issues/19857 | 2,218,327,952 | 19,857 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
~~~python
# Python code
from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores import Chroma
vectorstore = Chroma.from_documents(documents=splits,
embedding=OpenAIEmbeddings(model="text-embedding-3-large"),
persist_directory = "./chroma_db")
~~~
### Error Message and Stack Trace (if applicable)
```
File /workspace/.pyenv_mirror/user/current/lib/python3.12/site-packages/chromadb/api/types.py:211, in validate_embedding_function(embedding_function)
    208 protocol_signature = signature(EmbeddingFunction.__call__).parameters.keys()
    210 if not function_signature == protocol_signature:
--> 211     raise ValueError(
    212         f"Expected EmbeddingFunction.__call__ to have the following signature: {protocol_signature}, got {function_signature}\n"
    213         "Please see https://docs.trychroma.com/embeddings for details of the EmbeddingFunction interface.\n"
    214         "Please note the recent change to the EmbeddingFunction interface: https://docs.trychroma.com/migration#migration-to-0416---november-7-2023 \n"
    215     )

ValueError: Expected EmbeddingFunction.__call__ to have the following signature: odict_keys(['self', 'input']), got odict_keys(['args', 'kwargs'])
Please see https://docs.trychroma.com/embeddings for details of the EmbeddingFunction interface.
Please note the recent change to the EmbeddingFunction interface: https://docs.trychroma.com/migration#migration-to-0416---november-7-2023
```
### Description
I'm trying to create embeddings, but it looks like ChromaDB has changed the `EmbeddingFunction` interface on their end, so the LangChain integration code needs to be updated accordingly:
https://docs.trychroma.com/embeddings/openai
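For reference, a hedged adapter sketch that satisfies Chroma's post-0.4.16 `EmbeddingFunction.__call__(self, input)` signature by wrapping a LangChain embeddings object (only relevant if you hand the embedding function to Chroma directly rather than through LangChain's wrapper):
```python
# Hedged sketch: adapt a LangChain Embeddings object to Chroma's
# EmbeddingFunction interface, which now requires __call__(self, input).
from langchain_openai import OpenAIEmbeddings


class LangChainEmbeddingAdapter:
    def __init__(self, embeddings):
        self.embeddings = embeddings

    def __call__(self, input):
        # Chroma passes a list of documents; return one vector per document.
        return self.embeddings.embed_documents(list(input))


adapter = LangChainEmbeddingAdapter(OpenAIEmbeddings(model="text-embedding-3-large"))
```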
### System Info
langchain==0.1.14
langchain-community==0.0.30
langchain-core==0.1.37
langchain-openai==0.1.1
langchain-text-splitters==0.0.1
langchainhub==0.1.15
chromadb==0.4.24 | Chroma db Code changed thats why unable to access the vectorstore from ChromaDB for embeddings | https://api.github.com/repos/langchain-ai/langchain/issues/19848/comments | 12 | 2024-04-01T07:33:39Z | 2024-04-01T17:56:47Z | https://github.com/langchain-ai/langchain/issues/19848 | 2,217,722,491 | 19,848 |